├── Project_Report.pdf ├── README.md ├── Sample_test.las ├── media └── examples │ ├── AHN_trees.PNG │ ├── Ground_truth.png │ ├── IMG_20230221_092402.jpg │ ├── Internship_workflow.png │ ├── Output_sample.png │ ├── Shapefile_output.png │ ├── Trees_Amster.PNG │ └── Validation.png ├── requirements.txt └── scripts ├── 3D_clusters_to_2D_polygon.ipynb └── Internship_Tree-Segmentation_full_code_.ipynb /Project_Report.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/Project_Report.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Individual Tree Trunk Segmentation README 2 | This repository contains the code to segment individual tree trunks out of a lidar point cloud that has already been filtered to contain only tree points. 3 | The algorithm uses simple libraries and makes full use of the point cloud data structure to ensure speed and efficiency in calculations. 4 | 5 | 6 | ![](media/examples/Trees_Amster.PNG) 7 | 8 | Below is the output for a sample dataset taken from the city of Amsterdam: 9 | 10 | ![](media/examples/Output_sample.png) 11 | --- 12 | 13 | 14 | ## Project Folder Structure 15 | 16 | The repository contains the full code and a sample dataset that can be used to test it. 17 | 18 | 19 | 1) [`media`](./media/examples/): Images used in this repository 20 | 1) [`scripts`](./scripts): Folder with the full scripts of the tree trunk delineation code. 21 | 1) [`Sample_test`](Sample_test.las): A small las file that can be used to test the code and get an output. 
22 | 1) [`Requirements`](requirements.txt): The versions of the libraries used in the code 23 | 1) [`Project Report`](Project_Report.pdf): The full report written for this project 24 | 25 | --- 26 | ## Installation 27 | 28 | 1) Clone this repository: 29 | ```bash 30 | git clone https://github.com/Amsterdam-Internships/Tree-trunk-segmentation.git 31 | ``` 32 | 33 | --- 34 | 35 | 36 | ## Parameters used in the code 37 | 38 | 1. The radius of the search sphere for the initial clustering. 39 | 2. The radius of the buffer in which we count the point density in x and y for each point (the parameter used for local maxima calculation). 40 | 3. The size of the search window for local maxima in each cluster. 41 | 4. The delineated trunk radius (visualized trunk). 42 | 5. The minimum Euclidean distance that two peaks of the same cluster can have (otherwise the peak won't be added). 43 | 6. The size of the small clusters we suspect to be outliers (they won't be deleted; they will merge with a nearby big cluster if there is one, otherwise they are kept as individual clusters). 44 | 7. The minimal cluster size to be allowed as a tree. Every cluster below this value is deleted (OPTIONAL!). 45 | 46 | 47 | 48 | ## How it works 49 | 50 | Below is the simplified workflow that shows how the algorithm works and what logic it is built on: 51 | 52 | ![](media/examples/Internship_workflow.png) 53 | --- 54 | ## Ground Truth vs Code Output 55 | 56 | The resulting output is converted into a 2D shapefile for improved visualization when comparing it to the ground truth. 57 | 58 | The ground truth: 59 | 60 | ![](media/examples/Ground_truth.png) 61 | 62 | 63 | The algorithm output: 64 | 65 | ![](media/examples/Shapefile_output.png) 66 | 67 | --- 68 | ## Accuracy assessment and Limitations 69 | 70 | The method handles the diversity of trees well and performs well even in densely packed clusters. 
Compared to approaches based on fixed tree parameters, this method is more adaptable, as it only assumes that the tree trunk has more points than any other section of the tree when projected onto a 2D plane. 71 | 72 | The image below demonstrates the heterogeneous and unpredictable nature of tree diversity, which makes this method more reliable than using fixed parameters. 73 | 74 | ![](media/examples/IMG_20230221_092402.jpg) 75 | 76 | Below is a manually checked cluster of trees that shows the accuracy of our method. We expect this accuracy to change as the parameters of the algorithm are changed. 77 | It is also necessary to mention that there is some inconsistency between the point cloud data and the ground truth, so our assessment is not completely accurate. 78 | ![](media/examples/Validation.png) 79 | 80 | --- 81 | ## Acknowledgements 82 | 83 | This repository was created by [Jorges Nofulla](https://www.linkedin.com/in/jorges-nofulla-5a3139223/) in collaboration with [Amsterdam Intelligence](https://www.amsterdamintelligence.com/) for the [City of Amsterdam](https://www.amsterdam.nl/). 84 | 85 | I would like to express my sincere appreciation to my two internship supervisors, Daan Bloembergen and Nico de Graaff, as well as my university professors, Sander Oude Elberink and Michael Yang, for their invaluable guidance and support throughout the development of this project. Their extensive knowledge and experience in the field helped me to better understand the complexities of the project and achieve a successful outcome. 
86 | 87 | The first part of the code, which clusters the trees, is inspired by [Max-Hess](https://github.com/max-hess/GeometricNetworks) [![DOI](https://zenodo.org/badge/264818686.svg)](https://doi.org/10.5194/egusphere-egu21-4155) 88 | 89 | -------------------------------------------------------------------------------- /Sample_test.las: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/Sample_test.las -------------------------------------------------------------------------------- /media/examples/AHN_trees.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/AHN_trees.PNG -------------------------------------------------------------------------------- /media/examples/Ground_truth.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/Ground_truth.png -------------------------------------------------------------------------------- /media/examples/IMG_20230221_092402.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/IMG_20230221_092402.jpg -------------------------------------------------------------------------------- /media/examples/Internship_workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/Internship_workflow.png 
-------------------------------------------------------------------------------- /media/examples/Output_sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/Output_sample.png -------------------------------------------------------------------------------- /media/examples/Shapefile_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/Shapefile_output.png -------------------------------------------------------------------------------- /media/examples/Trees_Amster.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/Trees_Amster.PNG -------------------------------------------------------------------------------- /media/examples/Validation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JorgesNofulla/Tree-trunk-segmentation/af4c925fc26b39df0125f27df6a832534d6a2a2a/media/examples/Validation.png -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy==1.23.5 2 | laspy==1.7.0 3 | scipy==1.8.1 4 | matplotlib==3.6.0 5 | pyshp==2.3.1 6 | -------------------------------------------------------------------------------- /scripts/3D_clusters_to_2D_polygon.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "id": "e5cb937b-983f-4ec3-b7fc-e98b2a4af008", 7 | "metadata": {}, 8 
| "outputs": [], 9 | "source": [ 10 | "import numpy as np\n", 11 | "import pylas\n", 12 | "from collections import Counter\n", 13 | "from scipy.spatial import ConvexHull, Delaunay\n", 14 | "from shapely.geometry import LineString, Polygon, MultiPolygon, mapping\n", 15 | "import fiona" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "id": "709f33f3-1182-445b-aa0a-5c5e63aa7b0a", 21 | "metadata": { 22 | "tags": [] 23 | }, 24 | "source": [ 25 | "# Open and prepare the data" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "id": "545aeebc-094e-4879-89c5-a2719f82a824", 32 | "metadata": {}, 33 | "outputs": [], 34 | "source": [ 35 | "las_test = pylas.open('All_area_full.las')" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": 2, 41 | "id": "3f0c7614-ad33-4438-8256-03b338cf719d", 42 | "metadata": {}, 43 | "outputs": [ 44 | { 45 | "name": "stdout", 46 | "output_type": "stream", 47 | "text": [ 48 | "This las file has 896981 points\n", 49 | "This las file has 2375 individual trees\n" 50 | ] 51 | } 52 | ], 53 | "source": [ 54 | "# Read in LAS file and extract tree coordinates\n", 55 | "# tree_coord maps each tree label (point_source_id) to its (x, y) point coordinates\n", 56 | "data1 = las_test.read()\n", 57 | "array_test = data1.point_source_id\n", 58 | "print(\"This las file has\",len(data1.points),\"points\")\n", 59 | "counter_object3 = Counter(array_test)\n", 60 | "keys3 = counter_object3.keys()\n", 61 | "num_values3 = len(keys3)\n", 62 | "print(\"This las file has\",num_values3,\"individual trees\")\n", 63 | "tree_coord = {}\n", 64 | "for i in keys3:\n", 65 | "    tree=[]\n", 66 | "    tree = np.c_[data1.x[data1.point_source_id == i],data1.y[data1.point_source_id == i]]\n", 67 | "    tree_coord[i]=[]\n", 68 | "    tree_coord[i].append(tree)\n", 69 | "    tree=[]" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "id": "05b0d752-4eb2-4c4b-83c3-648ceb9eae25", 75 | "metadata": { 76 | "tags": [] 77 | }, 78 | "source": [ 79 | 
"# Create convex hulls" 80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": 3, 85 | "id": "880e762c-16c8-4239-90ec-01da8fa86410", 86 | "metadata": {}, 87 | "outputs": [], 88 | "source": [ 89 | "hulls = []\n", 90 | "for tree_id, coords in tree_coord.items():\n", 91 | " if len(coords[0]) > 3:\n", 92 | " hull = ConvexHull(coords[0])\n", 93 | " hulls.append(hull)" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "id": "de2dd26a-4398-443a-924d-03586ca53b04", 99 | "metadata": { 100 | "tags": [] 101 | }, 102 | "source": [ 103 | "# Save the 2D border of individual trees as shapefiles" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": 4, 109 | "id": "cdd98c57-784b-4820-952d-0dea157e407e", 110 | "metadata": {}, 111 | "outputs": [], 112 | "source": [ 113 | "\n", 114 | "# Convert the ConvexHull objects to Polygon objects\n", 115 | "hulls_polygons = [Polygon(hull.points[hull.vertices]) for hull in hulls]\n", 116 | "\n", 117 | "# Set up the Shapefile schema\n", 118 | "schema = {\n", 119 | " 'geometry': 'LineString',\n", 120 | " 'properties': {'id': 'int', 'hull_id': 'int'},\n", 121 | "}\n", 122 | "i=0\n", 123 | "# Save the hull borders to a Shapefile\n", 124 | "with fiona.open('Shapefile_output.shp', 'w', 'ESRI Shapefile', schema) as c:\n", 125 | " for g in hulls_polygons:\n", 126 | " i=i+1\n", 127 | " hull_border = LineString(g.exterior)\n", 128 | " c.write({\n", 129 | " 'geometry': mapping(hull_border),\n", 130 | " 'properties': {'id': 1, 'hull_id': i},\n", 131 | " })\n" 132 | ] 133 | } 134 | ], 135 | "metadata": { 136 | "kernelspec": { 137 | "display_name": "Python", 138 | "language": "python", 139 | "name": "python3" 140 | }, 141 | "language_info": { 142 | "codemirror_mode": { 143 | "name": "ipython", 144 | "version": 3 145 | }, 146 | "file_extension": ".py", 147 | "mimetype": "text/x-python", 148 | "name": "python", 149 | "nbconvert_exporter": "python", 150 | "pygments_lexer": "ipython3", 151 | "version": "3.8.10" 152 | 
} 153 | }, 154 | "nbformat": 4, 155 | "nbformat_minor": 5 156 | } 157 | -------------------------------------------------------------------------------- /scripts/Internship_Tree-Segmentation_full_code_.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "228c23f8-c981-4664-a8dd-8f02cf40abbf", 6 | "metadata": { 7 | "tags": [] 8 | }, 9 | "source": [ 10 | "# Libraries and classes" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": 1, 16 | "id": "95d2c6a1-d483-44c8-a489-bad108e282a9", 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "# Libraries\n", 21 | "import numpy as np\n", 22 | "import laspy as lp\n", 23 | "import scipy\n", 24 | "import matplotlib.pyplot as plt\n", 25 | "from scipy.spatial import distance\n", 26 | "from scipy.spatial import KDTree, ckdtree\n", 27 | "from matplotlib.cm import tab20 as cmap" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": 2, 33 | "id": "5507e750-035b-48fa-8c4f-50f061b05a0a", 34 | "metadata": {}, 35 | "outputs": [], 36 | "source": [ 37 | "class Point:\n", 38 | "\n", 39 | " index = None\n", 40 | " position = None\n", 41 | " paths = []\n", 42 | " network = None\n", 43 | " vec = None\n", 44 | " linked_to = None\n", 45 | " treeID = None\n", 46 | "\n", 47 | " def __init__(self, index, position):\n", 48 | " self.index = index\n", 49 | " self.position = position\n", 50 | "\n", 51 | " def add_path(self, path):\n", 52 | " self.paths = np.append(self.paths, path)" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": 3, 58 | "id": "640b9618-577c-4542-a6af-7aed57fde1de", 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "class Path:\n", 63 | "\n", 64 | " index = None\n", 65 | " points = []\n", 66 | " network = None\n", 67 | "\n", 68 | " def __init__(self, index):\n", 69 | " self.index = index\n", 70 | "\n", 71 | " def add_point(self, this_point):\n", 72 | 
" self.points = np.append(self.points, this_point)\n", 73 | "\n", 74 | " def all_points_position(self):\n", 75 | " points_pos = np.c_[[],[],[]]\n", 76 | " for point in self.points:\n", 77 | " points_pos = np.append(points_pos, np.c_[point.position], axis=0)\n", 78 | " return points_pos" 79 | ] 80 | }, 81 | { 82 | "cell_type": "code", 83 | "execution_count": 4, 84 | "id": "39b791bb-2994-4706-a16b-819381b027f5", 85 | "metadata": {}, 86 | "outputs": [], 87 | "source": [ 88 | "class Network:\n", 89 | "\n", 90 | " index = None\n", 91 | " paths = []\n", 92 | " points = []\n", 93 | " top = None\n", 94 | "\n", 95 | " def __init__(self, index):\n", 96 | " self.index = index\n", 97 | "\n", 98 | " def add_path(self, path):\n", 99 | " self.paths = np.append(self.paths, path)\n", 100 | " path.network = self\n", 101 | " for point in path.points:\n", 102 | " point.network = self\n", 103 | " self.points = np.append(self.points, path.points)\n", 104 | "\n", 105 | " def size(self):\n", 106 | " points = np.array(())\n", 107 | " for path in self.paths:\n", 108 | " for point in path.points:\n", 109 | " points = np.append(points, point)\n", 110 | " return len(points)\n", 111 | "\n", 112 | " def all_paths(self):\n", 113 | " all_paths = np.array(())\n", 114 | " for path in self.paths:\n", 115 | " all_paths = np.append(all_paths, path.index)\n", 116 | " return all_paths\n", 117 | "\n", 118 | " def all_points_position(self):\n", 119 | " points_pos = np.array(())\n", 120 | " for point in self.points:\n", 121 | " points_pos = np.append(points_pos, point.position)\n", 122 | " points_pos = np.reshape(points_pos, (-1, 3))\n", 123 | " return points_pos" 124 | ] 125 | }, 126 | { 127 | "cell_type": "markdown", 128 | "id": "a97d2cf8-6b5a-4c27-bb0e-2230a2532db7", 129 | "metadata": { 130 | "tags": [] 131 | }, 132 | "source": [ 133 | "# Insert the parameters for the code" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": 5, 139 | "id": "3b6a28c8-4f82-44bb-8f88-a8f0dbe99d06", 
140 | "metadata": {}, 141 | "outputs": [], 142 | "source": [ 143 | "# Insert the desired Las file here.\n", 144 | "las_file = lp.file.File('../Sample_test.las', mode=\"r\")\n", 145 | "# name your output\n", 146 | "output_las = \"Output_file.las\"\n", 147 | "# Parameters (all values are in meters)\n", 148 | "r = 4 # radius of the search sphere for the initial clustering\n", 149 | "radius = 0.8 # the radius on which we count the point density in x and y for each point (the parameter used for local maxima calculation)\n", 150 | "window_size = 6 # the size of the search window for local maxima in each cluster\n", 151 | "max_distance = 0.8 # the delineated trunk radius\n", 152 | "restrict_d = 6 # the minimum Euclidean distance that 2 peaks of the same cluster can have\n", 153 | "small_clusters = 100 # the size of the small clusters we suspect as outliers (won't be deleted, they will just merge with a nearby big cluster if there is any, else they will be taken as individual clusters)\n", 154 | "small_outliers = 30 # OPTIONAL! The minimal cluster size to be allowed as a tree. Deleting every cluster below this value.\n" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": 6, 160 | "id": "2a7bc6a5-cbd7-4e35-940d-6af58cb22dd3", 161 | "metadata": {}, 162 | "outputs": [], 163 | "source": [ 164 | "# CHECK THE END OF THE CODE TO CHOOSE THE TYPE OF THE OUTPUT !" 165 | ] 166 | }, 167 | { 168 | "cell_type": "markdown", 169 | "id": "38e832f2-80ba-4e9f-9f26-8323959b0c5e", 170 | "metadata": { 171 | "tags": [] 172 | }, 173 | "source": [ 174 | "# 1. 
Load and Prepare the data" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": 7, 180 | "id": "a445b5ed-46da-4d6e-91e6-dc1c5b4d94c2", 181 | "metadata": {}, 182 | "outputs": [], 183 | "source": [ 184 | "points = []\n", 185 | "\n", 186 | "# concatenate the file coordinates\n", 187 | "coord = np.c_[las_file.x, las_file.y, las_file.z]\n", 188 | "\n", 189 | "# truncate the coordinates to full integers\n", 190 | "x_new = (las_file.x).astype('int')\n", 191 | "y_new = (las_file.y).astype('int')\n", 192 | "z_new = (las_file.z).astype('int')\n", 193 | "\n", 194 | "# Reduce the data, get a flat voxel index. Lowers the number of points fed to the algorithm.\n", 195 | "new_coords = x_new + y_new * x_new.max() + z_new * y_new.max() * x_new.max()\n", 196 | "_, sl = np.unique(new_coords, return_index=True)\n", 197 | "coord = coord[sl,:]\n", 198 | "\n", 199 | "# sort the coordinates by z value\n", 200 | "coord_sorted = coord[coord[:, 2].argsort()]\n", 201 | "position = coord_sorted\n", 202 | "\n", 203 | "# points is a list of \"Point\" instances, one for each set of coordinates\n", 204 | "# i = index, coord_sorted[i] = position\n", 205 | "for i in range(len(coord_sorted)):\n", 206 | "    i = Point(i, coord_sorted[i])\n", 207 | "    points.append(i)\n", 208 | "# create an ordered array with the size of the remaining coord\n", 209 | "index = np.arange(len(coord))" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "id": "f2d8bebd-7aec-42d2-942f-2a09f0226260", 215 | "metadata": { 216 | "tags": [] 217 | }, 218 | "source": [ 219 | "# 2. Find centroids of point clusters and tree peaks" 220 | ] 221 | }, 222 | { 223 | "cell_type": "markdown", 224 | "id": "05081a96-5242-49ab-985e-7c3085919ea4", 225 | "metadata": {}, 226 | "source": [ 227 | "1. A collection of points in 3D space is given, with a manually input radius value.\n", 228 | "2. The code finds groups of points that are within the radius of each other, and it computes their group centroids.\n", 229 | "3. 
For each group, it finds the point with the highest Z-value (i.e., the top of the tree), and links it to the centroid.\n", 230 | "4. The code outputs the index of the closest point to the centroid for each group, and whether each point is the highest point of its group (i.e., at the top of the tree)." 231 | ] 232 | }, 233 | { 234 | "cell_type": "markdown", 235 | "id": "7d0b034c-223c-4ab8-8101-f7662de52feb", 236 | "metadata": {}, 237 | "source": [ 238 | "Reasons why the code runs fast:\n", 239 | "1. Using list comprehension to find the neighbors with a higher z value rather than creating a new numpy array and using numpy's boolean indexing to find these neighbors.\n", 240 | "2. Checking for cases where a point has no neighbors within radius r first to avoid unnecessary calculations.\n", 241 | "3. Initializing the links and centroids arrays to the same length as the position array with zeros to avoid needing to use np.append() inside the loop." 242 | ] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": 8, 247 | "id": "0f1c083f-9851-45fa-8dfc-bb22c6a091db", 248 | "metadata": {}, 249 | "outputs": [], 250 | "source": [ 251 | "links = np.zeros(len(position), dtype=int)\n", 252 | "centroids = np.zeros((len(position), 3))\n", 253 | "has_parent = np.zeros(len(position), dtype=bool)\n", 254 | "\n", 255 | "# Find all points within distance r of each point\n", 256 | "tree = scipy.spatial.cKDTree(position)\n", 257 | "nn = tree.query_ball_point(position, r)\n", 258 | "\n", 259 | "# Loop over all points\n", 260 | "for i, this_nn in enumerate(nn):\n", 261 | "    # If the point has no neighbors within radius r, it is a tree peak\n", 262 | "    if len(this_nn) == 1:\n", 263 | "        links[i] = i\n", 264 | "        centroids[i] = position[i]\n", 265 | "        has_parent[i] = True\n", 266 | "    # If the point has at least one neighbor within radius r\n", 267 | "    else:\n", 268 | "        # Find all neighbors with a higher z value\n", 269 | "        upper_nnbs = [j for j in this_nn if position[j, 2] > 
position[i, 2]]\n", 270 | " # If there are no such neighbors, the point is a tree peak\n", 271 | " if not upper_nnbs:\n", 272 | " links[i] = i\n", 273 | " centroids[i] = position[i]\n", 274 | " has_parent[i] = True\n", 275 | " # If there are any neighbors with a higher z value\n", 276 | " else:\n", 277 | " # Calculate the centroid of the group of neighbors\n", 278 | " c = np.mean(position[upper_nnbs], axis=0)\n", 279 | " centroids[i] = c\n", 280 | " # Calculate the distances between each neighbor and the centroid\n", 281 | " dist = scipy.spatial.distance.cdist(position[upper_nnbs], [c],\n", 282 | " metric=\"euclidean\")\n", 283 | " # Find the neighbor closest to the centroid and store its index as a link\n", 284 | " links[i] = upper_nnbs[np.argmin(dist)]\n", 285 | "\n", 286 | "# Convert links to integer type\n", 287 | "link_nn = links.astype(int)\n", 288 | "has_parent = has_parent.astype('int')" 289 | ] 290 | }, 291 | { 292 | "cell_type": "markdown", 293 | "id": "0641304f-6874-475f-9a89-309fe9847293", 294 | "metadata": { 295 | "tags": [] 296 | }, 297 | "source": [ 298 | "# 3. Label the points" 299 | ] 300 | }, 301 | { 302 | "cell_type": "markdown", 303 | "id": "45b6cec2-e0af-4e16-92b7-5258bef2040a", 304 | "metadata": {}, 305 | "source": [ 306 | "1. For each point, the code checks if it has already been assigned to a path.\n", 307 | "2. If not, it creates a new path and adds the current point to it.\n", 308 | "3. It then follows the links created in Part 1 to add more points to the path, until it reaches a point with no parent (i.e., at the top of the tree), at which point it ends the path.\n", 309 | "4. If the code encounters a point that is already in a path, it creates a new network that includes both the new path and the existing path." 
310 | ] 311 | }, 312 | { 313 | "cell_type": "code", 314 | "execution_count": 9, 315 | "id": "68b019e2-7870-46e9-92c0-608b29d7c45e", 316 | "metadata": {}, 317 | "outputs": [], 318 | "source": [ 319 | "networks = []\n", 320 | "all_paths = []\n", 321 | "for p in points:\n", 322 | " current_idx = p.index\n", 323 | "\n", 324 | " if len(points[current_idx].paths) == 0:\n", 325 | " end = False\n", 326 | "\n", 327 | " # initialize new path\n", 328 | " new_path = Path(len(all_paths)) # len paths as index\n", 329 | " all_paths.append(new_path)\n", 330 | "\n", 331 | " # add first point to the path\n", 332 | " new_path.add_point(points[current_idx])\n", 333 | " points[current_idx].add_path(new_path)\n", 334 | "\n", 335 | " # append path\n", 336 | " while end == False:\n", 337 | "\n", 338 | " # point has a parent\n", 339 | " if has_parent[current_idx] != 1:\n", 340 | "\n", 341 | " # make link\n", 342 | " points[current_idx].linked_to = points[link_nn[current_idx]]\n", 343 | "\n", 344 | " if len(points[current_idx].linked_to.paths) == 0:\n", 345 | "\n", 346 | " # not in path\n", 347 | " points[current_idx].linked_to.add_path(new_path)\n", 348 | " new_path.add_point(points[current_idx].linked_to)\n", 349 | " current_idx = link_nn[current_idx]\n", 350 | "\n", 351 | " else:\n", 352 | " # in path\n", 353 | " points[current_idx].linked_to.network.add_path(new_path)\n", 354 | " points[current_idx].add_path(new_path)\n", 355 | " points[current_idx].linked_to.add_path(new_path)\n", 356 | " end = True\n", 357 | "\n", 358 | " # point has no parent\n", 359 | " # make network, end path\n", 360 | " else:\n", 361 | " points[current_idx].linked_to = points[current_idx]\n", 362 | " # init new network\n", 363 | " new_network = Network(len(networks)) # len networks as index\n", 364 | " new_network.add_path(new_path) # path and points are assigned to the network\n", 365 | " new_network.top = current_idx\n", 366 | " new_network.points = new_path.points # add points to the network\n", 367 | " 
networks.append(new_network)\n", 368 | " points[current_idx].network = new_network\n", 369 | " end = True" 370 | ] 371 | }, 372 | { 373 | "cell_type": "markdown", 374 | "id": "5168f332-463f-4241-8226-86155803b201", 375 | "metadata": { 376 | "tags": [] 377 | }, 378 | "source": [ 379 | "# 4. Get the labels array" 380 | ] 381 | }, 382 | { 383 | "cell_type": "code", 384 | "execution_count": 10, 385 | "id": "9b5694fc-fd07-463c-a9dc-e8900a804e58", 386 | "metadata": {}, 387 | "outputs": [], 388 | "source": [ 389 | "# new np array to extract and store all our individual tree labels from\n", 390 | "labels = np.zeros(len(points))\n", 391 | "centroids = np.array(())\n", 392 | "size = np.array(())\n", 393 | "# extract the label value from class network to our new built array\n", 394 | "for p in points:\n", 395 | " labels[p.index] = p.network.index\n", 396 | "labels = labels.astype('int')" 397 | ] 398 | }, 399 | { 400 | "cell_type": "markdown", 401 | "id": "fb9f8ca0-ebb0-4291-92a5-79e7b4ef367a", 402 | "metadata": { 403 | "tags": [] 404 | }, 405 | "source": [ 406 | "# Remove all the outlier clusters" 407 | ] 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": 11, 412 | "id": "4ad3b541-34ab-4a67-b75e-04f4a4756d61", 413 | "metadata": {}, 414 | "outputs": [], 415 | "source": [ 416 | "array_test = np.column_stack((position, labels))" 417 | ] 418 | }, 419 | { 420 | "cell_type": "code", 421 | "execution_count": 12, 422 | "id": "5b23e757-95ae-4e02-aeb5-9f2854d512c6", 423 | "metadata": {}, 424 | "outputs": [], 425 | "source": [ 426 | "# Get the count of each cluster label\n", 427 | "labels_new = array_test[:, 3]\n", 428 | "array = array_test[:, 0:3]\n", 429 | "unique, counts = np.unique(labels_new, return_counts=True)\n", 430 | "\n", 431 | "# Create a dictionary to store the count of each label\n", 432 | "label_count = dict(zip(unique, counts))\n", 433 | "\n", 434 | "# Initialize an empty list to store the indices of the large clusters\n", 435 | "large_cluster_indices = 
[]\n", 436 | "\n", 437 | "# Iterate through the cluster labels\n", 438 | "for i, label in enumerate(labels_new):\n", 439 | " # If the label corresponds to a large cluster, add the index to the list\n", 440 | " if label_count.get(label, 0) >= 10:\n", 441 | " large_cluster_indices.append(i)\n", 442 | "\n", 443 | "# Use the indices of the large clusters to create a new array\n", 444 | "array_test = array[large_cluster_indices, :]\n", 445 | "\n", 446 | "# Add the labels as the last column of the new array\n", 447 | "array_test = np.column_stack((array_test, labels_new[large_cluster_indices]))" 448 | ] 449 | }, 450 | { 451 | "cell_type": "markdown", 452 | "id": "7f0dec85-96de-4f2e-98b9-628e124f9424", 453 | "metadata": { 454 | "tags": [] 455 | }, 456 | "source": [ 457 | "# Fix the small clusters" 458 | ] 459 | }, 460 | { 461 | "cell_type": "code", 462 | "execution_count": 13, 463 | "id": "5114be52-b644-49c1-8e62-856bc85facca", 464 | "metadata": {}, 465 | "outputs": [], 466 | "source": [ 467 | "# Prepare the array for the \"fix small clusters\" code\n", 468 | "labels_2 = array_test[:, 3].astype('int')\n", 469 | "labels33, point_count33 = np.unique(labels_2, return_counts=True)" 470 | ] 471 | }, 472 | { 473 | "cell_type": "code", 474 | "execution_count": 14, 475 | "id": "4153088c-218e-43a3-bbaf-b434857abe6b", 476 | "metadata": {}, 477 | "outputs": [], 478 | "source": [ 479 | "iterating_array = []\n", 480 | "for i in range(len(labels33)):\n", 481 | " if point_count33[i] <= small_clusters:\n", 482 | " iterating_array.append(labels33[i])" 483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": 15, 488 | "id": "e4656378-ea93-4c26-acfe-4daa1e8a9a0d", 489 | "metadata": {}, 490 | "outputs": [], 491 | "source": [ 492 | "# Get centroids of all clusters in the dataset\n", 493 | "all_centroids = []\n", 494 | "all_labs=[]\n", 495 | "for label in np.unique(array_test[:, 3]):\n", 496 | " centroid = array_test[array_test[:, 3] == label, :2].mean(axis=0)\n", 497 | " 
all_centroids.append(centroid)\n", 498 | "    all_labs.append(label)" 499 | ] 500 | }, 501 | { 502 | "cell_type": "code", 503 | "execution_count": 16, 504 | "id": "e783db14-9efd-4968-a525-609f97d281f9", 505 | "metadata": {}, 506 | "outputs": [], 507 | "source": [ 508 | "# find the pairs of the closest clusters\n", 509 | "\n", 510 | "tree1 = KDTree(all_centroids)\n", 511 | "\n", 512 | "labels_nn = []\n", 513 | "for i in range(len(all_labs)):\n", 514 | "    point_cent = all_centroids[i]\n", 515 | "    dist, idx = tree1.query(point_cent, k=2)\n", 516 | "    closest_idx = idx[1] if idx[0] == i else idx[0]\n", 517 | "    labels_nn.append([all_labs[i], all_labs[closest_idx]])\n", 518 | "\n", 519 | "# filter the list so it contains only the small clusters that we will fix\n", 520 | "filtered_list = [x for x in labels_nn if int(x[0]) in iterating_array]\n", 521 | "array_test2 = array_test.copy()" 522 | ] 523 | }, 524 | { 525 | "cell_type": "code", 526 | "execution_count": 17, 527 | "id": "d4084d21-4ada-4426-b71f-0ed50e9c0a41", 528 | "metadata": {}, 529 | "outputs": [], 530 | "source": [ 531 | "# Some other parameters for tree merging\n", 532 | "diff_height = 1.5 # the difference in height between 2 clusters very close to each other (this is the parameter to take care of branches that are classified as a separate cluster)\n", 533 | "branch_dist = 0.8 # the max distance a branch cluster can be from the main tree\n", 534 | "min_dist_tree = 1 # the max distance of 2 clusters to be checked if they are the same tree\n", 535 | "\n", 536 | "for i in filtered_list:\n", 537 | "    coord_xy = array_test2[array_test2[:, 3] == i[0]]\n", 538 | "    coord_xy2 = array_test2[array_test2[:, 3] == i[1]]\n", 539 | "    wk = distance.cdist(coord_xy[:, :2], coord_xy2[:, :2], 'euclidean')\n", 541 | "    kk = array_test2[:, 2][array_test2[:, 3] == i[1]]\n", 542 | "    z = abs(coord_xy[:, 2:3].min() - kk.min())\n", 543 | "    if len(array_test2[array_test2[:, 3] == 
i[0]]) < (small_clusters/2) and wk.min() < min_dist_tree:\n", 544 | " array_test[:, 3][array_test[:, 3] == i[0]] = i[1]\n", 545 | " if wk.min() < branch_dist and z > diff_height:\n", 546 | " array_test[:, 3][array_test[:, 3] == i[0]] = i[1]\n", 547 | " if len(array_test2[array_test2[:, 3] == i[0]]) < small_clusters and wk.min() < min_dist_tree/2:\n", 548 | " array_test[:, 3][array_test[:, 3] == i[0]] = i[1]\n", 549 | " coord_xy = []\n", 550 | " coord_xy2 = []\n", 551 | " wk = []\n", 552 | " ind = []" 553 | ] 554 | }, 555 | { 556 | "cell_type": "markdown", 557 | "id": "48dd44d0-d676-4b26-ac40-64760713e39d", 558 | "metadata": { 559 | "tags": [] 560 | }, 561 | "source": [ 562 | "# Optional! Delete small clusters" 563 | ] 564 | }, 565 | { 566 | "cell_type": "code", 567 | "execution_count": 18, 568 | "id": "fec470d9-21b0-42e3-9d9e-2cca447c5b0b", 569 | "metadata": {}, 570 | "outputs": [], 571 | "source": [ 572 | "# Get the count of each cluster label\n", 573 | "labels_new = array_test[:, 3]\n", 574 | "array = array_test[:, 0:3]\n", 575 | "unique, counts = np.unique(labels_new, return_counts=True)\n", 576 | "\n", 577 | "# Create a dictionary to store the count of each label\n", 578 | "label_count = dict(zip(unique, counts))\n", 579 | "\n", 580 | "# Initialize an empty list to store the indices of the large clusters\n", 581 | "large_cluster_indices = []\n", 582 | "\n", 583 | "# Iterate through the cluster labels\n", 584 | "for i, label in enumerate(labels_new):\n", 585 | " # If the label corresponds to a large cluster, add the index to the list\n", 586 | " if label_count.get(label, 0) >= small_outliers:\n", 587 | " large_cluster_indices.append(i)\n", 588 | "\n", 589 | "# Use the indices of the large clusters to create a new array\n", 590 | "array_test = array[large_cluster_indices, :]\n", 591 | "\n", 592 | "# Add the labels as the last column of the new array\n", 593 | "array_test = np.column_stack((array_test, labels_new[large_cluster_indices]))" 594 | ] 595 | }, 596 | { 597 | 
"cell_type": "markdown", 598 | "id": "3b80a909-e3a0-4e57-8ae4-87e6d990798f", 599 | "metadata": { 600 | "tags": [] 601 | }, 602 | "source": [ 603 | "# 6. Get the number of points in buffer per point (the local maxima column)" 604 | ] 605 | }, 606 | { 607 | "cell_type": "code", 608 | "execution_count": 19, 609 | "id": "515d54e6-f1b5-4212-8f30-5cfd79d7b41d", 610 | "metadata": {}, 611 | "outputs": [], 612 | "source": [ 613 | "# Input data\n", 614 | "points = array_test[:, :2]\n", 615 | "\n", 616 | "# Create KDTree from points\n", 617 | "kd_tree = KDTree(points)\n", 618 | "\n", 619 | "# Array to store the number of points in the buffer for each point\n", 620 | "count = np.zeros(len(points), dtype=int)\n", 621 | "\n", 622 | "# Loop over each point and find points in the buffer\n", 623 | "for i, p in enumerate(points):\n", 624 | " idx = kd_tree.query_ball_point(p, radius)\n", 625 | " count[i] = len(idx) - 1" 626 | ] 627 | }, 628 | { 629 | "cell_type": "markdown", 630 | "id": "3c4d42d2-aeeb-4541-9f28-52c085f99289", 631 | "metadata": { 632 | "tags": [] 633 | }, 634 | "source": [ 635 | "# 7. 
Find the tree trunks" 636 | ] 637 | }, 638 | { 639 | "cell_type": "code", 640 | "execution_count": 20, 641 | "id": "ec8a3c8a-3954-45c4-9e07-cf8743d27c4a", 642 | "metadata": {}, 643 | "outputs": [], 644 | "source": [ 645 | "# create the full array\n", 646 | "full_array = np.column_stack((array_test, count))\n", 647 | "\n", 648 | "\n", 649 | "def cluster_local_maxima1(full_array, window_size, max_distance, restrict_d):\n", 650 | " # get the unique labels of the tree clusters\n", 651 | " unique_clusters = np.unique(full_array[:, 3])\n", 652 | " current_label = 1\n", 653 | " labels = np.zeros(full_array.shape[0], dtype=np.int64)\n", 654 | " full_array = np.column_stack((full_array, labels))\n", 655 | " iteration = 0\n", 656 | " # Iterate through every single tree cluster separately\n", 657 | " for cluster_id in unique_clusters:\n", 658 | " peaks1 = []\n", 659 | " dist_peaks = 100\n", 660 | " # Form an array for the cluster of this iteration\n", 661 | " kot_arr = full_array[full_array[:, 3] == cluster_id]\n", 662 | " x1 = kot_arr[:, 0]\n", 663 | " y1 = kot_arr[:, 1]\n", 664 | " z1 = kot_arr[:, 2]\n", 665 | " p1 = kot_arr[:, 4]\n", 666 | " labels_k = kot_arr[:, 5]\n", 667 | " # Now we iterate through each point of the cluster of this iteration\n", 668 | " for i in range(len(kot_arr)):\n", 669 | " # We form a search window around each point of the cluster\n", 670 | " x_min = x1[i] - window_size\n", 671 | " x_max = x1[i] + window_size\n", 672 | " y_min = y1[i] - window_size\n", 673 | " y_max = y1[i] + window_size\n", 674 | " in_window = np.bitwise_and(x1 >= x_min, x1 <= x_max)\n", 675 | " in_window = np.bitwise_and(in_window, np.bitwise_and(y1 >= y_min, y1 <= y_max))\n", 676 | " in_window = np.bitwise_and(in_window, kot_arr[:, 3] == cluster_id)\n", 677 | "\n", 678 | " # Calculate and save the distances between the local maxima we find.\n", 679 | " if len(peaks1) > 0:\n", 680 | " this_point = [x1[i],y1[i]]\n", 681 | " peak_array = np.array(peaks1)\n", 682 | " this_point = 
np.array(this_point)\n", 683 | " this_point = this_point.reshape(1, 2)\n", 684 | " dist_peaks = distance.cdist(peak_array, this_point, 'euclidean')\n", 685 | " \n", 686 | " # Find the local maxima within each window\n", 687 | " # Reject any local maximum that lies closer than \"restrict_d\" to an already accepted peak\n", 688 | " # Local maxima with an accepted spacing are then relabeled with a unique number per tree\n", 689 | " if np.max(p1[in_window]) == p1[i] and np.min(dist_peaks) > restrict_d:\n", 690 | " peaks1.append([x1[i], y1[i]])\n", 691 | " points_to_label = np.argwhere(np.logical_and(np.abs(x1-x1[i]) <= max_distance, np.abs(y1-y1[i]) <= max_distance))\n", 692 | " points_to_label = points_to_label.flatten()\n", 693 | " if labels_k[i] == 0:\n", 694 | " labels_k[points_to_label] = current_label\n", 695 | " current_label += 1\n", 696 | " else:\n", 697 | " labels_k[points_to_label] = labels_k[i]\n", 698 | "\n", 699 | " # we create a new array with the new labels for trunks\n", 700 | " new_2 = np.c_[x1,y1,z1,labels_k]\n", 701 | " if iteration == 0:\n", 702 | " final_result = new_2\n", 703 | " else:\n", 704 | " final_result = np.vstack((final_result, new_2))\n", 705 | " iteration = 1\n", 706 | "\n", 707 | " return final_result\n", 708 | "\n", 709 | "\n", 710 | "Final_labels = cluster_local_maxima1(full_array, window_size, max_distance, restrict_d)" 711 | ] 712 | }, 713 | { 714 | "cell_type": "code", 715 | "execution_count": 21, 716 | "id": "af3dfae7-36f4-4e9b-84e9-708f0e08896c", 717 | "metadata": {}, 718 | "outputs": [ 719 | { 720 | "name": "stdout", 721 | "output_type": "stream", 722 | "text": [ 723 | "there are 23 trees in this area\n" 724 | ] 725 | } 726 | ], 727 | "source": [ 728 | "# Get the number of trees in this las file\n", 729 | "tree_count = np.unique(Final_labels[:, 3])\n", 730 | "print(\"there are\", len(tree_count), \"trees in this area\")" 731 | ] 732 | }, 733 | { 734 | "cell_type": "markdown", 
735 | "id": "0787fdd4-1697-4da9-b735-ac408e086863", 736 | "metadata": { 737 | "tags": [] 738 | }, 739 | "source": [ 740 | "# Save the trunk Point Cloud as a new Las file" 741 | ] 742 | }, 743 | { 744 | "cell_type": "code", 745 | "execution_count": 22, 746 | "id": "8c7d3b07-0469-45ef-95bd-d40428c553f5", 747 | "metadata": {}, 748 | "outputs": [], 749 | "source": [ 750 | "lb = Final_labels\n", 751 | "c = Final_labels[:, 3]\n", 752 | "\n", 753 | "vals = np.linspace(0, 1, 100)\n", 754 | "np.random.shuffle(vals)\n", 755 | "cmap = plt.cm.colors.ListedColormap(plt.cm.tab20(vals))\n", 756 | "header = lp.header.Header()\n", 757 | "header.data_format_id = 2\n", 758 | "new_las = lp.file.File(output_las, mode='w', header=header)\n", 759 | "new_las.header.scale = [0.01, 0.01, 0.01]\n", 760 | "new_las.header.offset = [lb[:, 0].min(), lb[:, 1].min(), lb[:, 2].min()]\n", 761 | "new_las.x = lb[:, 0]\n", 762 | "new_las.y = lb[:, 1]\n", 763 | "new_las.z = lb[:, 2]\n", 764 | "new_las.pt_src_id = c.astype('uint16')\n", 765 | "new_las.close()" 766 | ] 767 | }, 768 | { 769 | "cell_type": "markdown", 770 | "id": "964afb50-fa13-4d98-ba37-cb7323a43d10", 771 | "metadata": { 772 | "tags": [] 773 | }, 774 | "source": [ 775 | "# ALTERNATIVE OUTPUT\n", 776 | "# Getting only one point(centroid) in X and Y for each tree trunk" 777 | ] 778 | }, 779 | { 780 | "cell_type": "code", 781 | "execution_count": 23, 782 | "id": "39e6b6d7-1f7f-41ab-8530-8cc461afc348", 783 | "metadata": {}, 784 | "outputs": [], 785 | "source": [ 786 | "# as a dictionary" 787 | ] 788 | }, 789 | { 790 | "cell_type": "code", 791 | "execution_count": 24, 792 | "id": "1031848f-e9f6-4560-9db5-b0b715450a2b", 793 | "metadata": {}, 794 | "outputs": [ 795 | { 796 | "data": { 797 | "text/plain": [ 798 | "'\\nCentroid_tree = np.unique(Final_labels[:, 3])[1:]\\ncentroids_coord = {}\\n\\n# Iterate through each cluster and find the centroid\\nfor label in Centroid_tree:\\n cluster_points = Final_labels[Final_labels[:, 3] == label][:, :2]\\n 
centroid = np.mean(cluster_points, axis=0)\\n centroids_coord[label] = centroid\\n\\n#print(centroids_coord)\\n'" 799 | ] 800 | }, 801 | "execution_count": 24, 802 | "metadata": {}, 803 | "output_type": "execute_result" 804 | } 805 | ], 806 | "source": [ 807 | "'''\n", 808 | "Centroid_tree = np.unique(Final_labels[:, 3])[1:]\n", 809 | "centroids_coord = {}\n", 810 | "\n", 811 | "# Iterate through each cluster and find the centroid\n", 812 | "for label in Centroid_tree:\n", 813 | " cluster_points = Final_labels[Final_labels[:, 3] == label][:, :2]\n", 814 | " centroid = np.mean(cluster_points, axis=0)\n", 815 | " centroids_coord[label] = centroid\n", 816 | "\n", 817 | "#print(centroids_coord)\n", 818 | "'''" 819 | ] 820 | }, 821 | { 822 | "cell_type": "code", 823 | "execution_count": 25, 824 | "id": "be783bb4-66a4-485f-b269-e284651d3660", 825 | "metadata": {}, 826 | "outputs": [], 827 | "source": [ 828 | "# as a numpy array" 829 | ] 830 | }, 831 | { 832 | "cell_type": "code", 833 | "execution_count": 26, 834 | "id": "0cd57a99-e402-4007-baab-ec4c1dcc5c9c", 835 | "metadata": {}, 836 | "outputs": [], 837 | "source": [ 838 | "# Get the unique cluster labels excluding label zero\n", 839 | "Centroid_tree = np.unique(Final_labels[:, 3])[1:]\n", 840 | "# Initialize an empty list to store the centroids for each cluster\n", 841 | "centroids_array = []\n", 842 | "\n", 843 | "# Iterate through each cluster and find the centroid\n", 844 | "for label in Centroid_tree:\n", 845 | " cluster_points = Final_labels[Final_labels[:, 3] == label][:, :2]\n", 846 | " centroid = list(np.mean(cluster_points, axis=0))\n", 847 | " centroids_array.append([centroid[0], centroid[1], label])\n", 848 | "\n", 849 | "centroids_array = np.array(centroids_array)" 850 | ] 851 | }, 852 | { 853 | "cell_type": "markdown", 854 | "id": "48a05d69-e024-4872-bf34-4c23fd234009", 855 | "metadata": { 856 | "tags": [] 857 | }, 858 | "source": [ 859 | "# Save only the tree centroids as 2D points (LAS FILE)" 860 | ] 
861 | }, 862 | { 863 | "cell_type": "code", 864 | "execution_count": 27, 865 | "id": "be2a11fa-dd5e-4f31-a92b-e1e52873325c", 866 | "metadata": {}, 867 | "outputs": [], 868 | "source": [ 869 | "output_name = \"Output_centroids_only.las\"\n", 870 | "z_value = np.zeros(centroids_array.shape[0], dtype=np.int64)\n", 871 | "lab = centroids_array\n", 872 | "c = centroids_array[:, 2]\n", 873 | "\n", 874 | "vals = np.linspace(0, 1, 100)\n", 875 | "np.random.shuffle(vals)\n", 876 | "cmap = plt.cm.colors.ListedColormap(plt.cm.tab20(vals))\n", 877 | "header = lp.header.Header()\n", 878 | "header.data_format_id = 2\n", 879 | "new_las = lp.file.File(output_name, mode='w', header=header)\n", 880 | "new_las.header.scale = [0.01, 0.01, 0.01]\n", 881 | "new_las.header.offset = [lab[:, 0].min(), lab[:, 1].min(), z_value.min()]\n", 882 | "new_las.x = lab[:, 0]\n", 883 | "new_las.y = lab[:, 1]\n", 884 | "new_las.z = z_value\n", 885 | "new_las.pt_src_id = c.astype('uint16')\n", 886 | "new_las.close()" 887 | ] 888 | }, 889 | { 890 | "cell_type": "markdown", 891 | "id": "06ec598e-1c28-4a5b-81a6-857e98c4e56f", 892 | "metadata": { 893 | "tags": [] 894 | }, 895 | "source": [ 896 | "# Save only the tree centroids as 2D points (SHAPEFILE)" 897 | ] 898 | }, 899 | { 900 | "cell_type": "code", 901 | "execution_count": 28, 902 | "id": "f7da509d-d268-4c22-9169-ae366b4b6119", 903 | "metadata": {}, 904 | "outputs": [], 905 | "source": [ 906 | "import shapefile\n", 907 | "# Create a new shapefile (pyshp writes the .shp/.shx/.dbf files itself)\n", 908 | "sf = shapefile.Writer(\"Output_centroids_only\", shapeType=shapefile.POINT)\n", 909 | "\n", 910 | "# Define the fields for the shapefile\n", 911 | "sf.field(\"label\", \"N\")\n", 912 | "\n", 913 | "# Iterate through each row of the array and add a point to the shapefile\n", 914 | "for row in centroids_array:\n", 915 | " # Extract the x, y, and label values from the row\n", 916 | " x, y, label = row\n", 917 | "\n", 918 | " # Add a point to the shapefile with the x and y coordinates\n", 919 | " 
sf.point(x, y)\n", 920 | "\n", 921 | " # Set the attributes for the point\n", 922 | " sf.record(label)\n", 923 | "\n", 924 | "# Save and close the shapefile\n", 925 | "sf.close()" 926 | ] 927 | }, 928 | { 929 | "cell_type": "code", 930 | "execution_count": null, 931 | "id": "92966ff4-f432-4e02-975a-63adf9b4ca32", 932 | "metadata": {}, 933 | "outputs": [], 934 | "source": [] 935 | } 936 | ], 937 | "metadata": { 938 | "kernelspec": { 939 | "display_name": "Python", 940 | "language": "python", 941 | "name": "python3" 942 | }, 943 | "language_info": { 944 | "codemirror_mode": { 945 | "name": "ipython", 946 | "version": 3 947 | }, 948 | "file_extension": ".py", 949 | "mimetype": "text/x-python", 950 | "name": "python", 951 | "nbconvert_exporter": "python", 952 | "pygments_lexer": "ipython3", 953 | "version": "3.8.10" 954 | } 955 | }, 956 | "nbformat": 4, 957 | "nbformat_minor": 5 958 | } 959 | --------------------------------------------------------------------------------
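The buffer-density step in the notebook (section 6, counting neighbours within a radius of each point's XY position) is the quantity the local-maxima search ranks on, and it can be tried standalone without a LAS file. A minimal sketch, assuming only NumPy and SciPy; the random points and the `radius = 0.5` value below are illustrative stand-ins for the LAS coordinates and the notebook's buffer-radius parameter:

```python
import numpy as np
from scipy.spatial import KDTree

# Illustrative stand-in for array_test[:, :2]; the notebook reads these
# XY coordinates from the filtered tree point cloud.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(200, 2))

radius = 0.5  # plays the role of the notebook's buffer-radius parameter

# For each point, count the neighbours inside the XY buffer, excluding the
# point itself, exactly as the notebook's query_ball_point loop does.
kd_tree = KDTree(points)
count = np.array([len(kd_tree.query_ball_point(p, radius)) - 1 for p in points])
```

On large clouds, the per-point Python loop can be replaced by a single vectorized call, `kd_tree.query_ball_point(points, radius)`, which returns the neighbour lists for all points at once.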