├── .github
    └── FUNDING.yml
├── README.md
├── data
    ├── bunny.ply
    ├── bunny_pcd.ply
    ├── depth.png
    ├── depth_2.png
    ├── depth_2_pcd.ply
    ├── depth_2_pcd_downsampled.ply
    └── rgb.jpg
├── grounddetection
    ├── computer_vision_coordinate_system.py
    ├── ground_detection.py
    └── organized_pointcloud.py
├── introduction
    ├── introduction_np_o3d.py
    ├── introduction_random_points.py
    ├── introduction_rgbd.py
    └── introduction_sampling.py
├── output
    ├── bunny_pcd.ply
    └── depth_grayscale.png
├── pointcloudfromdepth
    ├── colored_pointcloud_from_depth.py
    └── pointcloud_from_depth.py
├── preprocessing
    ├── crop_pointcloud_o3d.py
    ├── downsampling.py
    ├── passthrough_filter_np.py
    └── point_cloud_filtering.py
└── segmentation
    ├── Projection_clustering.py
    └── clustering.py

/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | # These are supported funding model platforms
2 | 
3 | github: Chim-So # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
4 | patreon: # Replace with a single Patreon username
5 | open_collective: # Replace with a single Open Collective username
6 | ko_fi: chimso # Replace with a single Ko-fi username
7 | tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
8 | community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
9 | liberapay: # Replace with a single Liberapay username
10 | issuehunt: # Replace with a single IssueHunt username
11 | lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
12 | polar: # Replace with a single Polar username
13 | buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
14 | thanks_dev: # Replace with a single thanks.dev username
15 | custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
16 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # pointcloudprocessing
2 | This repository contains the code examples of my Medium tutorial series "Point Cloud Processing".
3 | The "Point Cloud Processing" tutorial is beginner-friendly: it introduces the point cloud processing pipeline step by step, from data preparation to data segmentation and classification.
4 | 
5 | # Content
6 | In this repository you will find:
7 | - [data](https://github.com/Chim-SO/pointcloudprocessing/tree/main/data) folder:
8 | Includes the input files that are used for demonstration.
9 | 
10 | - [output](https://github.com/Chim-SO/pointcloudprocessing/tree/main/output) folder:
11 | Includes some saved output data.
12 | 
13 | - [introduction](https://github.com/Chim-SO/pointcloudprocessing/tree/main/introduction) folder:
14 | Includes the examples of the first tutorial: [Introduction to Point Cloud Processing](https://medium.com/@chimso1994/introduction-to-point-cloud-processing-dbda9b167534).
15 |   - [introduction_random_points.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/introduction/introduction_random_points.py) : creates a random point cloud and displays it using Matplotlib.
16 |   - [introduction_sampling.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/introduction/introduction_sampling.py) : samples a point cloud from a mesh and displays it using Open3D.
17 |   - [introduction_np_o3d.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/introduction/introduction_np_o3d.py) : switches between Open3D and NumPy representations.
18 |   - [introduction_rgbd.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/introduction/introduction_rgbd.py) : computes point clouds from RGB-D data using Open3D functions.
19 | 
20 | - [pointcloudfromdepth](https://github.com/Chim-SO/pointcloudprocessing/tree/main/pointcloudfromdepth) folder:
21 | Includes the examples of the second tutorial: [Point cloud computing from RGB-D images](https://medium.com/@chimso1994/point-cloud-computing-from-rgb-d-images-918414d57e80).
22 |   - [pointcloud_from_depth.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/pointcloudfromdepth/pointcloud_from_depth.py) : computes point clouds from RGB-D data without using Open3D functions.
23 |   - [colored_pointcloud_from_depth.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/pointcloudfromdepth/colored_pointcloud_from_depth.py) : computes colored point clouds.
24 | 
25 | - [grounddetection](https://github.com/Chim-SO/pointcloudprocessing/tree/main/grounddetection) folder:
26 | Includes the examples of the third tutorial: [Understand point clouds: a simple ground detection algorithm]().
27 |   - [computer_vision_coordinate_system.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/grounddetection/computer_vision_coordinate_system.py) : illustrates the computer vision coordinate system.
28 |   - [ground_detection.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/grounddetection/ground_detection.py) : a simple ground detection algorithm.
29 |   - [organized_pointcloud.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/grounddetection/organized_pointcloud.py) : computes an organized point cloud.
30 | 
31 | - [preprocessing](https://github.com/Chim-SO/pointcloudprocessing/tree/main/preprocessing) folder:
32 | Includes the examples of the 4th tutorial: [Point Cloud Filtering in Python](https://medium.com/@chimso1994/point-cloud-filtering-in-python-e8a06bbbcee5).
33 |   - [crop_pointcloud_o3d.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/preprocessing/crop_pointcloud_o3d.py) : pass-through filtering using Open3D.
34 |   - [passthrough_filter_np.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/preprocessing/passthrough_filter_np.py) : a NumPy implementation of the pass-through filter.
35 |   - [downsampling.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/preprocessing/downsampling.py) : down-sampling methods.
36 |   - [point_cloud_filtering.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/preprocessing/point_cloud_filtering.py) : demonstrates outlier removal filters: statistical outlier removal and radius outlier removal.
37 | 
38 | - [segmentation](https://github.com/Chim-SO/pointcloudprocessing/tree/main/segmentation) folder:
39 | Includes the examples of the 5th tutorial: [Point Cloud Segmentation in Python](https://medium.com/mlearning-ai/point-cloud-segmentation-in-python-2fdbf5ea0617).
40 |   - [clustering.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/segmentation/clustering.py): point cloud clustering.
41 |   - [projection_clustering.py](https://github.com/Chim-SO/pointcloudprocessing/blob/main/segmentation/Projection_clustering.py): clustering of a projected point cloud.
42 | 
43 | 
44 | # Requirements
45 | - Open3D : 0.15.1
46 | - NumPy : 1.21.6
47 | 
--------------------------------------------------------------------------------
/data/bunny_pcd.ply:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/data/bunny_pcd.ply
--------------------------------------------------------------------------------
/data/depth.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/data/depth.png
--------------------------------------------------------------------------------
/data/depth_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/data/depth_2.png
--------------------------------------------------------------------------------
/data/depth_2_pcd.ply:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/data/depth_2_pcd.ply
--------------------------------------------------------------------------------
/data/depth_2_pcd_downsampled.ply:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/data/depth_2_pcd_downsampled.ply
--------------------------------------------------------------------------------
/data/rgb.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/data/rgb.jpg
--------------------------------------------------------------------------------
/grounddetection/computer_vision_coordinate_system.py:
--------------------------------------------------------------------------------
1 | import imageio.v3 as iio
2 | import matplotlib.pyplot as plt
3 | import numpy
4 | import numpy as np
5 | import open3d as o3d
6 | 
7 | if __name__ == '__main__':
8 |     # Read point cloud
9 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd.ply")
10 | 
11 |     # Create a 3D coordinate system:
12 |     origin = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.5)
13 | 
14 |     # geometries to draw:
15 |     geometries = [pcd, origin]
16 | 
17 |     # Get max and min points of each axis x, y and z:
18 |     x_max = max(pcd.points, key=lambda x: x[0])
19 |     y_max = max(pcd.points, key=lambda x: x[1])
20 |     z_max = max(pcd.points, key=lambda x: x[2])
21 |     x_min = min(pcd.points, key=lambda x: x[0])
22 |     y_min = min(pcd.points, key=lambda x: x[1])
23 |     z_min = min(pcd.points, key=lambda x: x[2])
24 | 
25 |     # Colors:
26 |     RED = [1., 0., 0.]
27 |     GREEN = [0., 1., 0.]
28 |     BLUE = [0., 0., 1.]
29 |     YELLOW = [1., 1., 0.]
30 |     MAGENTA = [1., 0., 1.]
31 |     CYAN = [0., 1., 1.]
32 | 
33 |     # Draw spheres at positions with colors:
34 |     positions = [x_max, y_max, z_max, x_min, y_min, z_min]
35 |     colors = [RED, GREEN, BLUE, MAGENTA, YELLOW, CYAN]
36 |     for i in range(len(positions)):
37 |         # Create a sphere mesh:
38 |         sphere = o3d.geometry.TriangleMesh.create_sphere(radius=0.1)
39 |         # move to the point position:
40 |         sphere.translate(np.asarray(positions[i]))
41 |         # add color:
42 |         sphere.paint_uniform_color(np.asarray(colors[i]))
43 |         # compute normals for vertices or faces:
44 |         sphere.compute_vertex_normals()
45 |         # add to geometry list to display later:
46 |         geometries.append(sphere)
47 | 
48 |     # Display:
49 |     o3d.visualization.draw_geometries(geometries)
50 | 
--------------------------------------------------------------------------------
/grounddetection/ground_detection.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import open3d as o3d
3 | 
4 | if __name__ == '__main__':
5 |     # Read point cloud
6 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd.ply")
7 |     # Create a 3D coordinate system:
8 |     origin = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.5)
9 | 
10 |     # Green color:
11 |     GREEN = [0., 1., 0.]
12 | 
13 |     # Define a threshold:
14 |     THRESHOLD = 0.075
15 | 
16 |     # Get the max value along the y-axis:
17 |     y_max = max(pcd.points, key=lambda x: x[1])[1]
18 | 
19 |     # Get the original point colors to be updated:
20 |     pcd_colors = np.asarray(pcd.colors)
21 | 
22 |     # Number of points:
23 |     n_points = pcd_colors.shape[0]
24 | 
25 |     # update color:
26 |     for i in range(n_points):
27 |         # if the current point is a ground point:
28 |         if pcd.points[i][1] >= y_max - THRESHOLD:
29 |             pcd_colors[i] = GREEN  # color it green
30 |     pcd.colors = o3d.utility.Vector3dVector(pcd_colors)
31 | 
32 |     # Display:
33 |     o3d.visualization.draw_geometries([pcd, origin])
34 | 
--------------------------------------------------------------------------------
/grounddetection/organized_pointcloud.py:
--------------------------------------------------------------------------------
1 | import imageio.v3 as iio
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | import open3d as o3d
5 | 
6 | if __name__ == '__main__':
7 |     # Camera parameters:
8 |     FX_DEPTH = 5.8262448167737955e+02
9 |     FY_DEPTH = 5.8269103270988637e+02
10 |     CX_DEPTH = 3.1304475870804731e+02
11 |     CY_DEPTH = 2.3844389626620386e+02
12 | 
13 |     # Read depth image:
14 |     depth_image = iio.imread('../data/depth_2.png')
15 |     # Compute the grayscale image:
16 |     depth_grayscale = np.array(256 * depth_image / 0x0fff, dtype=np.uint8)
17 |     # Convert a grayscale image to a 3-channel image:
18 |     depth_grayscale = np.stack((depth_grayscale,) * 3, axis=-1)
19 | 
20 |     # get depth image resolution:
21 |     height, width = depth_image.shape
22 |     # compute indices and reshape it to have the same shape as the depth image:
23 |     jj = np.tile(range(width), height).reshape((height, width))
24 |     ii = np.repeat(range(height), width).reshape((height, width))
25 |     # Compute constants:
26 |     xx = (jj - CX_DEPTH) / FX_DEPTH
27 |     yy = (ii - CY_DEPTH) / FY_DEPTH
28 |     # compute the organized point cloud:
29 |     organized_pcd = np.dstack((xx * depth_image, yy * depth_image, depth_image))
30 |     print(organized_pcd.shape)
31 |     # Ground_detection:
32 |     THRESHOLD = 0.075 * 1000  # Define a threshold
33 |     y_max = max(organized_pcd.reshape((height * width, 3)), key=lambda x: x[1])[
34 |         1]  # Get the max value along the y-axis
35 | 
36 |     # Set the ground pixels to green:
37 |     for i in range(height):
38 |         for j in range(width):
39 |             if organized_pcd[i][j][1] >= y_max - THRESHOLD:
40 |                 depth_grayscale[i][j] = [0, 255, 0]  # Update the depth image
41 | 
42 |     # Display depth_grayscale:
43 |     plt.imshow(depth_grayscale)
44 |     plt.show()
45 | 
--------------------------------------------------------------------------------
/introduction/introduction_np_o3d.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | import open3d as o3d
4 | 
5 | if __name__ == '__main__':
6 |     # Create numpy pointcloud:
7 |     number_points = 2000
8 |     pcd_np = np.random.rand(number_points, 3)
9 | 
10 |     # Convert to Open3D.PointCloud:
11 |     pcd_o3d = o3d.geometry.PointCloud()  # create point cloud object
12 |     pcd_o3d.points = o3d.utility.Vector3dVector(pcd_np)  # set pcd_np as the point cloud points
13 | 
14 |     # Visualize:
15 |     o3d.visualization.draw_geometries([pcd_o3d])
16 | 
17 |     # Read the bunny point cloud file:
18 |     pcd_o3d = o3d.io.read_point_cloud("../data/bunny_pcd.ply")
19 | 
20 |     # Convert the open3d object to numpy:
21 |     pcd_np = np.asarray(pcd_o3d.points)
22 | 
23 |     # Display using matplotlib:
24 |     # Create Figure:
25 |     fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
26 |     ax.scatter3D(pcd_np[:, 0], pcd_np[:, 2], pcd_np[:, 1])  # note: the y and z columns are swapped for display
27 |     # label the axes
28 |     ax.set_xlabel("X")
29 |     ax.set_ylabel("Y")
30 |     ax.set_zlabel("Z")
31 |     ax.set_title("Bunny Point Cloud")
32 |     # display:
33 |     plt.show()
34 | 
--------------------------------------------------------------------------------
/introduction/introduction_random_points.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | 
4 | if __name__ == '__main__':
5 |     # Let's create a random point cloud:
6 |     number_points = 5
7 |     pcd = np.random.rand(number_points, 3)  # random points from a uniform distribution over [0, 1)
8 |     print(pcd)
9 | 
10 |     # Create Figure:
11 |     fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
12 |     ax.scatter3D(pcd[:, 0], pcd[:, 1], pcd[:, 2])
13 |     # label the axes
14 |     ax.set_xlabel("X")
15 |     ax.set_ylabel("Y")
16 |     ax.set_zlabel("Z")
17 |     ax.set_title("Random Point Cloud")
18 |     # display:
19 |     plt.show()
20 | 
--------------------------------------------------------------------------------
/introduction/introduction_rgbd.py:
--------------------------------------------------------------------------------
1 | import open3d as o3d
2 | 
3 | if __name__ == '__main__':
4 |     # read the color and the depth image:
5 |     rgb_image = o3d.io.read_image("../data/rgb.jpg")
6 |     depth_image = o3d.io.read_image("../data/depth.png")
7 | 
8 |     # create an rgbd image object:
9 |     rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
10 |         rgb_image, depth_image, convert_rgb_to_intensity=False)
11 |     # use the rgbd image to create a point cloud:
12 |     pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
13 |         rgbd_image,
14 |         o3d.camera.PinholeCameraIntrinsic(
15 |             o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))
16 |     # visualize:
17 |     o3d.visualization.draw_geometries([pcd])
--------------------------------------------------------------------------------
/introduction/introduction_sampling.py:
--------------------------------------------------------------------------------
1 | # reference: http://graphics.im.ntu.edu.tw/~robin/courses/cg03/model/
2 | import open3d as o3d
3 | 
4 | if __name__ == '__main__':
5 |     # Read the PLY file:
6 |     mesh = o3d.io.read_triangle_mesh("../data/bunny.ply")
7 | 
8 |     # Import data from open3D:
9 |     # bunny = o3d.data.BunnyMesh()
10 |     # mesh = o3d.io.read_triangle_mesh(bunny.path)
11 | 
12 |     # Visualize:
13 |     mesh.compute_vertex_normals()  # compute normals for vertices or faces
14 |     o3d.visualization.draw_geometries([mesh])
15 | 
16 |     # Sample 1000 points:
17 |     pcd = mesh.sample_points_uniformly(number_of_points=1000)
18 | 
19 |     # visualize:
20 |     o3d.visualization.draw_geometries([pcd])
21 | 
22 |     # Save into ply file:
23 |     o3d.io.write_point_cloud("../output/bunny_pcd.ply", pcd)
24 | 
--------------------------------------------------------------------------------
/output/bunny_pcd.ply:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/output/bunny_pcd.ply
--------------------------------------------------------------------------------
/output/depth_grayscale.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chim-SO/pointcloudprocessing/cfc9c28eaf407b030327bc1f6ea7eb873baa185a/output/depth_grayscale.png
--------------------------------------------------------------------------------
/pointcloudfromdepth/colored_pointcloud_from_depth.py:
--------------------------------------------------------------------------------
1 | import imageio.v3 as iio
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | import open3d as o3d
5 | 
6 | 
7 | def compute_colored_pointcloud_nested_loops(depth_image, rgb_image):
8 |     """
9 |     Compute the colored point cloud using nested loops.
10 |     This function is written for demonstration purposes.
11 |     It takes time.
12 |     :param depth_image: a depth image
13 |     :param rgb_image: the corresponding rgb image
14 |     :return: the colored point cloud
15 |     """
16 | 
17 |     height, width = depth_image.shape
18 |     colors = []
19 |     pcd = []
20 |     for i in range(height):
21 |         for j in range(width):
22 |             """
23 |             Convert the pixel from depth coordinate system
24 |             to depth sensor 3D coordinate system
25 |             """
26 |             z = depth_image[i][j]
27 |             x = (j - CX_DEPTH) * z / FX_DEPTH
28 |             y = (i - CY_DEPTH) * z / FY_DEPTH
29 | 
30 |             """
31 |             Convert the point from depth sensor 3D coordinate system
32 |             to rgb camera coordinate system:
33 |             """
34 |             [x_RGB, y_RGB, z_RGB] = np.linalg.inv(R).dot([x, y, z]) - np.linalg.inv(R).dot(T)
35 | 
36 |             """
37 |             Convert from rgb camera coordinate system
38 |             to rgb image coordinate system:
39 |             """
40 |             j_rgb = int((x_RGB * FX_RGB) / z_RGB + CX_RGB + width / 2)
41 |             i_rgb = int((y_RGB * FY_RGB) / z_RGB + CY_RGB)
42 | 
43 |             # Add point to point cloud:
44 |             pcd.append([x, y, z])
45 | 
46 |             # Add the color of the pixel if it exists:
47 |             if 0 <= j_rgb < width and 0 <= i_rgb < height:
48 |                 colors.append(rgb_image[i_rgb][j_rgb])
49 |             else:
50 |                 colors.append([0., 0., 0.])
51 |     return [np.array(pcd), np.array(colors)]
52 | 
53 | 
54 | if __name__ == '__main__':
55 |     # Depth Camera parameters:
56 |     FX_DEPTH = 5.8262448167737955e+02
57 |     FY_DEPTH = 5.8269103270988637e+02
58 |     CX_DEPTH = 3.1304475870804731e+02
59 |     CY_DEPTH = 2.3844389626620386e+02
60 | 
61 |     # RGB camera intrinsic Parameters:
62 |     FX_RGB = 5.1885790117450188e+02
63 |     FY_RGB = 5.1946961112127485e+02
64 |     CX_RGB = 3.2558244941119034e+0
65 |     CY_RGB = 2.5373616633400465e+02
66 | 
67 |     # Rotation matrix:
68 |     R = -np.array([[9.9997798940829263e-01, 5.0518419386157446e-03, 4.3011152014118693e-03],
69 |                    [-5.0359919480810989e-03, 9.9998051861143999e-01, -3.6879781309514218e-03],
70 |                    [- 4.3196624923060242e-03, 3.6662365748484798e-03, 9.9998394948385538e-01]])
71 |     # Translation vector:
72 |     T = np.array([2.5031875059141302e-02, -2.9342312935846411e-04, 6.6238747008330102e-04])
73 | 
74 |     # Read depth and color image:
75 |     depth_image = iio.imread('../data/depth.png')
76 |     rgb_image = iio.imread('../data/rgb.jpg')
77 | 
78 |     # Display depth and grayscale image:
79 |     fig, axs = plt.subplots(1, 2)
80 |     axs[0].imshow(depth_image, cmap="gray")
81 |     axs[0].set_title('Depth image')
82 |     axs[1].imshow(rgb_image)
83 |     axs[1].set_title('RGB image')
84 |     plt.show()
85 | 
86 |     # compute point cloud:
87 |     # Both images have the same resolution
88 |     height, width = depth_image.shape
89 | 
90 |     # compute indices:
91 |     jj = np.tile(range(width), height)
92 |     ii = np.repeat(range(height), width)
93 | 
94 |     # Compute constants:
95 |     xx = (jj - CX_DEPTH) / FX_DEPTH
96 |     yy = (ii - CY_DEPTH) / FY_DEPTH
97 | 
98 |     # transform depth image to vector of z:
99 |     length = height * width
100 |     z = depth_image.reshape(length)
101 | 
102 |     # compute point cloud
103 |     pcd = np.dstack((xx * z, yy * z, z)).reshape((length, 3))
104 |     cam_RGB = np.apply_along_axis(np.linalg.inv(R).dot, 1, pcd) - np.linalg.inv(R).dot(T)
105 |     xx_rgb = ((cam_RGB[:, 0] * FX_RGB) / cam_RGB[:, 2] + CX_RGB + width / 2).astype(int).clip(0, width - 1)
106 |     yy_rgb = ((cam_RGB[:, 1] * FY_RGB) / cam_RGB[:, 2] + CY_RGB).astype(int).clip(0, height - 1)
107 |     colors = rgb_image[yy_rgb, xx_rgb]
108 | 
109 |     # Convert to Open3D.PointCloud:
110 |     pcd_o3d = o3d.geometry.PointCloud()  # create a point cloud object
111 |     pcd_o3d.points = o3d.utility.Vector3dVector(pcd)
112 |     pcd_o3d.colors = o3d.utility.Vector3dVector(np.array(colors / 255))
113 |     # Visualize:
114 |     o3d.visualization.draw_geometries([pcd_o3d])
--------------------------------------------------------------------------------
/pointcloudfromdepth/pointcloud_from_depth.py:
--------------------------------------------------------------------------------
1 | import imageio.v3 as iio
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 | import open3d as o3d
5 | 
6 | 
7 | def compute_pcd_nested_loops(depth_im):
8 |     """
9 |     Compute the point cloud using nested loops.
10 |     This function is written for demonstration purposes.
11 |     It takes time.
12 |     :param depth_im: a depth image
13 |     :return: the computed point cloud
14 |     """
15 |     height, width = depth_im.shape
16 |     pcd = []
17 |     for i in range(height):
18 |         for j in range(width):
19 |             z = depth_im[i][j]
20 |             x = (j - CX_DEPTH) * depth_im[i][j] / FX_DEPTH
21 |             y = (i - CY_DEPTH) * depth_im[i][j] / FY_DEPTH
22 |             pcd.append([x, y, z])
23 |     return pcd
24 | 
25 | 
26 | def compute_pcd_vectorization(depth_im):
27 |     """
28 |     This function uses vectorized operations to reduce the running time.
29 |     We can do better by computing constants first.
30 |     Please see the main function.
31 |     :param depth_im: a depth image
32 |     :return: the computed point cloud
33 |     """
34 |     # get depth resolution:
35 |     height, width = depth_im.shape
36 |     # compute indices:
37 |     jj = np.tile(range(width), height)
38 |     ii = np.repeat(range(height), width)
39 |     # reshape the depth image into a vector of z values:
40 |     z = depth_im.reshape(height * width)
41 |     # compute pcd (x comes from the column index jj, y from the row index ii):
42 |     pcd = np.dstack([(jj - CX_DEPTH) * z / FX_DEPTH,
43 |                      (ii - CY_DEPTH) * z / FY_DEPTH,
44 |                      z]).reshape((height * width, 3))
45 |     return pcd
46 | 
47 | 
48 | if __name__ == '__main__':
49 |     # Camera parameters:
50 |     FX_DEPTH = 5.8262448167737955e+02
51 |     FY_DEPTH = 5.8269103270988637e+02
52 |     CX_DEPTH = 3.1304475870804731e+02
53 |     CY_DEPTH = 2.3844389626620386e+02
54 | 
55 |     # Read depth image:
56 |     depth_image = iio.imread('../data/depth.png')
57 | 
58 |     # print some properties:
59 |     print(f"Image resolution: {depth_image.shape}")
60 |     print(f"Data type: {depth_image.dtype}")
61 |     print(f"Min value: {np.min(depth_image)}")
62 |     print(f"Max value: {np.max(depth_image)}")
63 | 
64 |     # Compute depth grayscale:
65 |     depth_grayscale = np.array(256 * depth_image / 0x0fff, dtype=np.uint8)
66 |     iio.imwrite('../output/depth_grayscale.png', depth_grayscale)
67 | 
68 |     # Display depth and grayscale image:
69 |     fig, axs = plt.subplots(1, 2)
70 |     axs[0].imshow(depth_image, cmap="gray")
71 |     axs[0].set_title('Depth image')
72 |     axs[1].imshow(depth_grayscale, cmap="gray")
73 |     axs[1].set_title('Depth grayscale image')
74 |     plt.show()
75 | 
76 |     # get depth image resolution:
77 |     height, width = depth_image.shape
78 |     # compute indices:
79 |     jj = np.tile(range(width), height)
80 |     ii = np.repeat(range(height), width)
81 |     # Compute constants:
82 |     xx = (jj - CX_DEPTH) / FX_DEPTH
83 |     yy = (ii - CY_DEPTH) / FY_DEPTH
84 |     # transform depth image to vector of z:
85 |     length = height * width
86 |     z = depth_image.reshape(length)
87 |     # compute point cloud
88 |     pcd = np.dstack((xx * z, yy * z, z)).reshape((length, 3))
89 | 
90 |     # Convert to Open3D.PointCloud:
91 |     pcd_o3d = o3d.geometry.PointCloud()  # create point cloud object
92 |     pcd_o3d.points = o3d.utility.Vector3dVector(pcd)  # set pcd as the point cloud points
93 |     # Visualize:
94 |     o3d.visualization.draw_geometries([pcd_o3d])
95 | 
--------------------------------------------------------------------------------
/preprocessing/crop_pointcloud_o3d.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import open3d as o3d
3 | import math
4 | import itertools
5 | 
6 | if __name__ == '__main__':
7 |     # Read point cloud:
8 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd.ply")
9 | 
10 |     # Create bounding box:
11 |     bounds = [[-math.inf, math.inf], [-math.inf, math.inf], [0.8, 2]]  # set the bounds
12 |     bounding_box_points = list(itertools.product(*bounds))  # create limit points
13 |     bounding_box = o3d.geometry.AxisAlignedBoundingBox.create_from_points(
14 |         o3d.utility.Vector3dVector(bounding_box_points))  # create bounding box object
15 | 
16 |     # Crop the point cloud using the bounding box:
17 |     pcd_cropped = pcd.crop(bounding_box)
18 | 
19 |     # Display the cropped point cloud:
20 |     o3d.visualization.draw_geometries([pcd_cropped])
21 | 
--------------------------------------------------------------------------------
/preprocessing/downsampling.py:
--------------------------------------------------------------------------------
1 | import open3d as o3d
2 | import numpy as np
3 | 
4 | if __name__ == '__main__':
5 |     # Read point cloud:
6 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd.ply")
7 | 
8 |     # Random down-sampling:
9 |     random_pcd = pcd.random_down_sample(sampling_ratio=0.005)
10 | 
11 |     # Uniform down-sampling:
12 |     uniform_pcd = pcd.uniform_down_sample(every_k_points=200)
13 | 
14 |     # Voxel down-sampling:
15 |     voxel_pcd = pcd.voxel_down_sample(voxel_size=0.4)
16 | 
17 |     # Translating point clouds and updating colors to display:
18 |     points = np.asarray(random_pcd.points)
19 |     points += [-3, 3, 0]
20 |     random_pcd.points = o3d.utility.Vector3dVector(points)
21 | 
22 |     points = np.asarray(uniform_pcd.points)
23 |     points += [0, 3, 0]
24 |     uniform_pcd.points = o3d.utility.Vector3dVector(points)
25 | 
26 |     points = np.asarray(voxel_pcd.points)
27 |     points += [3, 3, 0]
28 |     voxel_pcd.points = o3d.utility.Vector3dVector(points)
29 | 
30 |     # Display:
31 |     o3d.visualization.draw_geometries([pcd, random_pcd, uniform_pcd, voxel_pcd])
32 | 
33 | 
34 | 
--------------------------------------------------------------------------------
/preprocessing/passthrough_filter_np.py:
--------------------------------------------------------------------------------
1 | import open3d as o3d
2 | import numpy as np
3 | 
4 | if __name__ == '__main__':
5 |     # Read point cloud
6 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd.ply")
7 | 
8 |     # Convert it into a numpy array:
9 |     points = np.asarray(pcd.points)
10 |     # Create the pass-through filter:
11 |     pass_through_filter = np.logical_and(points[:, 2] >= 0.8, points[:, 2] <= 2.5)
12 |     # Get the points that passed through the filter:
13 |     filtered_points = points[pass_through_filter]
14 | 
15 |     # Create filtered open3D point cloud:
16 |     filtered_pcd = o3d.geometry.PointCloud()  # create point cloud object
17 |     filtered_pcd.points = o3d.utility.Vector3dVector(filtered_points)  # set filtered_points as the point cloud points
18 | 
19 |     # To preserve the color information in the case of a colored point cloud:
20 |     colors = np.asarray(pcd.colors)
21 |     filtered_pcd.colors = o3d.utility.Vector3dVector(colors[pass_through_filter])
22 | 
23 |     # Display:
24 |     o3d.visualization.draw_geometries([filtered_pcd])
25 | 
26 |     # Or change the color of the points:
27 |     colors[pass_through_filter] = [0., 1., 0.]
28 |     pcd.colors = o3d.utility.Vector3dVector(colors)
29 |     o3d.visualization.draw_geometries([pcd])
30 | 
31 | 
--------------------------------------------------------------------------------
/preprocessing/point_cloud_filtering.py:
--------------------------------------------------------------------------------
1 | import open3d as o3d
2 | import numpy as np
3 | 
4 | if __name__ == '__main__':
5 |     # Read point cloud:
6 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd.ply")
7 |     # Down-sample to reduce the running time:
8 |     pcd = pcd.voxel_down_sample(voxel_size=0.02)
9 | 
10 |     # Radius outlier removal:
11 |     pcd_rad, ind_rad = pcd.remove_radius_outlier(nb_points=16, radius=0.05)
12 |     outlier_rad_pcd = pcd.select_by_index(ind_rad, invert=True)
13 |     outlier_rad_pcd.paint_uniform_color([1., 0., 1.])
14 | 
15 |     # Statistical outlier removal:
16 |     pcd_stat, ind_stat = pcd.remove_statistical_outlier(nb_neighbors=20,
17 |                                                         std_ratio=2.0)
18 |     outlier_stat_pcd = pcd.select_by_index(ind_stat, invert=True)
19 |     outlier_stat_pcd.paint_uniform_color([0., 0., 1.])
20 | 
21 |     # Translate to visualize:
22 |     points = np.asarray(pcd_stat.points)
23 |     points += [3, 0, 0]
24 |     pcd_stat.points = o3d.utility.Vector3dVector(points)
25 | 
26 |     points = np.asarray(outlier_stat_pcd.points)
27 |     points += [3, 0, 0]
28 |     outlier_stat_pcd.points = o3d.utility.Vector3dVector(points)
29 | 
30 |     # Display:
31 |     o3d.visualization.draw_geometries([pcd_stat, pcd_rad, outlier_stat_pcd, outlier_rad_pcd])
32 | 
33 | 
34 | 
35 | 
--------------------------------------------------------------------------------
/segmentation/Projection_clustering.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import open3d as o3d
3 | from sklearn.cluster import KMeans, DBSCAN, OPTICS
4 | import matplotlib.pyplot as plt
5 | from sklearn.preprocessing import StandardScaler
6 | 
7 | if __name__ == '__main__':
8 |     # Read point cloud:
9 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd_downsampled.ply")
10 |     o3d.visualization.draw_geometries([pcd])
11 |     # Get the points and transform them to a numpy array:
12 |     points = np.asarray(pcd.points).copy()
13 | 
14 |     # Project the points onto the XZ plane:
15 |     points[:, 1] = 0
16 |     pcd_projected = o3d.geometry.PointCloud()  # create point cloud object
17 |     pcd_projected.points = o3d.utility.Vector3dVector(points)  # set the projected points as the point cloud points
18 |     o3d.visualization.draw_geometries([pcd_projected])
19 | 
20 |     # projection: consider only the x and z coordinates:
21 |     points_xz = points[:, [0, 2]]
22 |     # Normalisation:
23 |     scaled_points = StandardScaler().fit_transform(points_xz)
24 |     # Clustering:
25 |     model = DBSCAN(eps=0.15, min_samples=10)
26 |     model.fit(scaled_points)
27 | 
28 |     # Get labels:
29 |     labels = model.labels_
30 |     # Get the number of clusters:
31 |     n_clusters = len(set(labels))
32 | 
33 |     # Map the label classes to a color map:
34 |     colors = plt.get_cmap("tab20")(labels / (n_clusters if n_clusters > 0 else 1))
35 |     # Assign the black color to noise points:
36 |     colors[labels < 0] = 0
37 |     # Update point colors:
38 |     pcd_projected.colors = o3d.utility.Vector3dVector(colors[:, :3])
39 |     pcd.colors = o3d.utility.Vector3dVector(colors[:, :3])
40 | 
41 |     # Display:
42 |     o3d.visualization.draw_geometries([pcd_projected])
43 |     o3d.visualization.draw_geometries([pcd])
--------------------------------------------------------------------------------
/segmentation/clustering.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import open3d as o3d
3 | from sklearn.cluster import KMeans, DBSCAN
4 | import matplotlib.pyplot as plt
5 | from sklearn.preprocessing import StandardScaler
6 | 
7 | if __name__ == '__main__':
8 |     # Read point cloud:
9 |     pcd = o3d.io.read_point_cloud("../data/depth_2_pcd_downsampled.ply")
10 |     # Get the points and transform them to a numpy array:
11 |     points = np.asarray(pcd.points)
12 | 
13 |     # Normalisation:
14 |     scaled_points = StandardScaler().fit_transform(points)
15 | 
16 |     # Clustering:
17 |     model = DBSCAN(eps=0.15, min_samples=10)
18 |     # model = KMeans(n_clusters=4)
19 |     model.fit(scaled_points)
20 | 
21 |     # Get labels:
22 |     labels = model.labels_
23 |     # Get the number of clusters:
24 |     n_clusters = len(set(labels))
25 | 
26 |     # Map the label classes to a color map:
27 |     colors = plt.get_cmap("tab20")(labels / (n_clusters if n_clusters > 0 else 1))
28 |     # Assign the black color to noise points:
29 |     colors[labels < 0] = 0
30 |     # Update point colors:
31 |     pcd.colors = o3d.utility.Vector3dVector(colors[:, :3])
32 | 
33 |     # Display:
34 |     o3d.visualization.draw_geometries([pcd])
--------------------------------------------------------------------------------