├── .gitignore
├── LICENSE
├── README.md
├── dataset.py
├── image
│   ├── modelnet40_train10_bookshelf.png
│   ├── modelnet40_train14_plant.png
│   ├── modelnet40_train7_vase.png
│   ├── shapenetcorev2_test37_earphone.png
│   ├── shapenetcorev2_test59_lamp.png
│   ├── shapenetcorev2_train4_tower.png
│   ├── shapenetpart_train13_chair.png
│   ├── shapenetpart_train2_table.png
│   └── shapenetpart_train4_airplane.png
└── visualize.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | */.DS_Store
3 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 An Tao
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Point Cloud Datasets
2 |
3 | This repository provides the ShapeNetCore.v2, ShapeNetPart, ModelNet40 and ModelNet10 datasets in HDF5 format. For each shape in these datasets, we use the farthest point sampling algorithm to uniformly sample 2,048 points from the shape surface. All points are then centered and scaled. We follow the train/val/test splits in the official documents.
4 |
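A minimal sketch of this preprocessing, assuming centroid centering and unit-sphere scaling (the helper names here are illustrative, not the exact script we used):

```python
import numpy as np

def farthest_point_sample(points, k=2048):
    # Greedy FPS: repeatedly pick the point farthest from those already chosen.
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    dist = np.full(n, np.inf)
    selected[0] = np.random.randint(n)
    for i in range(1, k):
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[i - 1]], axis=1))
        selected[i] = np.argmax(dist)
    return points[selected]

def normalize_pointcloud(points):
    # Center at the centroid and scale the cloud into the unit sphere.
    points = points - points.mean(axis=0)
    points = points / np.max(np.linalg.norm(points, axis=1))
    return points
```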
5 | We also provide code to load and visualize our datasets with PyTorch 1.2 and Python 3.7. See `dataset.py` and run it to try it out.
6 |
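For example, a minimal loading sketch (assuming the downloaded `shapenetcorev2_hdf5_2048` folder has been extracted into the repository root):

```python
import torch
from dataset import Dataset

# root is the directory that contains e.g. shapenetcorev2_hdf5_2048/
d = Dataset(root='.', dataset_name='shapenetcorev2', num_points=2048, split='train')
points, label, name, file = d[0]   # (2048, 3) float32 tensor, int64 label, name/file strings
print(points.shape, label.item(), name, file)

loader = torch.utils.data.DataLoader(d, batch_size=32, shuffle=True)
```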
7 | To visualize, run `visualize.py` to generate a Mitsuba scene XML file and use [Mitsuba](https://www.mitsuba-renderer.org/index.html) to render it. Our rendering code is adapted from this [repo](https://github.com/zekunhao1995/PointFlowRenderer).
8 |
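A sketch of that workflow, continuing the loading example above (`mitsuba()` in `visualize.py` only writes the scene XML; the image itself is produced by running the Mitsuba renderer on the generated file, and the output path below is just an example):

```python
from dataset import Dataset
from visualize import mitsuba

d = Dataset(root='.', dataset_name='shapenetcorev2', num_points=2048, split='train')
points, label, name, file = d[0]
mitsuba(points.numpy(), 'example.xml')   # writes the Mitsuba scene description
# then render the scene with the Mitsuba binary, e.g.:  mitsuba example.xml
```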
9 |
10 | ## Download links:
11 |
12 | - ShapeNetCore.v2 (0.98G) [[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/06a3c383dc474179b97d/) [[BaiduDisk]](https://pan.baidu.com/s/154As2kzHZczMipuoZIc0kg)
13 | - ShapeNetPart (338M) [[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/c25d94e163454196a26b/) [[BaiduDisk]](https://pan.baidu.com/s/1yi4bMVBE2mV8NqVRtNLoqw)
14 | - ModelNet40 (194M) [[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/b3d9fe3e2a514def8097/) [[BaiduDisk]](https://pan.baidu.com/s/1NQZgN8tvHVqQntxefcdVAg)
15 | - ModelNet10 (72.5M) [[TsinghuaCloud]](https://cloud.tsinghua.edu.cn/f/5414376f6afd41ce9b6d/) [[BaiduDisk]](https://pan.baidu.com/s/1tfnKQ_yg3SfIgyLSwQ2E0g)
16 |
17 |
18 | ## ShapeNetCore.v2
19 | The ShapeNetCore.v2 dataset contains 51,127 pre-aligned shapes from 55 categories, which are split into 35,708 (70%) shapes for training, 5,158 (10%) shapes for validation and 10,261 (20%) shapes for testing. According to the official document there should be 51,190 shapes in total, but 63 shapes are missing from the original ShapeNetCore.v2 dataset downloaded from [here](https://www.shapenet.org/download/shapenetcore).
20 |
21 | The 55 categories include: `airplane`, `bag`, `basket`, `bathtub`, `bed`, `bench`, `birdhouse`, `bookshelf`, `bottle`, `bowl`, `bus`, `cabinet`, `camera`, `can`, `cap`, `car`, `cellphone`, `chair`, `clock`, `dishwasher`, `earphone`, `faucet`, `file`, `guitar`, `helmet`, `jar`, `keyboard`, `knife`, `lamp`, `laptop`, `mailbox`, `microphone`, `microwave`, `monitor`, `motorcycle`, `mug`, `piano`, `pillow`, `pistol`, `pot`, `printer`, `remote_control`, `rifle`, `rocket`, `skateboard`, `sofa`, `speaker`, `stove`, `table`, `telephone`, `tin_can`, `tower`, `train`, `vessel`, `washer`.
22 |
23 | Some visualized point clouds in our ShapeNetCore.v2 dataset:
24 |
25 | ![earphone](image/shapenetcorev2_test37_earphone.png)
26 | ![lamp](image/shapenetcorev2_test59_lamp.png)
27 | ![tower](image/shapenetcorev2_train4_tower.png)
28 | 
29 | earphone   lamp   tower
30 |
31 |
32 | ## ShapeNetPart
33 | The ShapeNetPart dataset contains 16,881 pre-aligned shapes from 16 categories, annotated with 50 segmentation parts in total. Most object categories are labeled with two to five segmentation parts. There are 12,137 (70%) shapes for training, 1,870 (10%) shapes for validation, and 2,874 (20%) shapes for testing. We also pack the segmentation labels into our dataset. The official dataset is available [here](https://shapenet.cs.stanford.edu/media/shapenet_part_seg_hdf5_data.zip).
34 |
35 | The 16 categories include: `airplane`, `bag`, `cap`, `car`, `chair`, `earphone`, `guitar`, `knife`, `lamp`, `laptop`, `motorbike`, `mug`, `pistol`, `rocket`, `skateboard`, `table`.
36 |
37 | Although ShapeNetPart is derived from ShapeNetCore, the number of points per shape in the official ShapeNetPart dataset is limited and sometimes below 2,048. The uniform sampling quality of our ShapeNetPart dataset is therefore lower than that of our ShapeNetCore.v2 dataset.
38 |
39 | In this dataset, we remap the segmentation label of each point into the range 0–49 according to its category. You can find the index mapping lists in `dataset.py`, as shown in the sketch below.
40 |
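A small sketch of using those tables to convert between the global 0–49 labels and category-local part indices (the numbers in the comments follow the tables in `dataset.py`):

```python
from dataset import shapenetpart_cat2id, shapenetpart_seg_num, shapenetpart_seg_start_index

category = 'chair'
cat_id = shapenetpart_cat2id[category]          # 4
start = shapenetpart_seg_start_index[cat_id]    # 12
num_parts = shapenetpart_seg_num[cat_id]        # 4

# A global label in [start, start + num_parts) corresponds to one of this
# category's own parts; subtract `start` to get the local part index.
global_label = 14
local_label = global_label - start              # 2
```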
41 | Some visualized point clouds in our ShapeNetPart dataset:
42 |
43 | ![airplane](image/shapenetpart_train4_airplane.png)
44 | ![table](image/shapenetpart_train2_table.png)
45 | ![chair](image/shapenetpart_train13_chair.png)
46 | 
47 | airplane   table   chair
48 |
49 |
50 | ## ModelNet40
51 | The ModelNet40 dataset contains 12,311 pre-aligned shapes from 40 categories, which are split into 9,843 (80%) shapes for training and 2,468 (20%) shapes for testing. The official dataset is available [here](http://modelnet.cs.princeton.edu/ModelNet40.zip).
52 |
53 | The 40 categories include: `airplane`, `bathtub`, `bed`, `bench`, `bookshelf`, `bottle`, `bowl`, `car`, `chair`, `cone`, `cup`, `curtain`, `desk`, `door`, `dresser`, `flower_pot`, `glass_box`, `guitar`, `keyboard`, `lamp`, `laptop`, `mantel`, `monitor`, `night_stand`, `person`, `piano`, `plant`, `radio`, `range_hood`, `sink`, `sofa`, `stairs`, `stool`, `table`, `tent`, `toilet`, `tv_stand`, `vase`, `wardrobe`, `xbox`.
54 |
55 | **Note**: The widely used 2,048-point sampled ModelNet40 dataset ([link](https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip)) only contains 9,840 shapes for training, not the 9,843 in the official split. Our ModelNet40 dataset fixes this problem and can serve as a drop-in replacement for the dataset mentioned above.
56 |
57 | Some visualized point clouds in our ModelNet40 dataset:
58 |
59 | ![vase](image/modelnet40_train7_vase.png)
60 | ![bookshelf](image/modelnet40_train10_bookshelf.png)
61 | ![plant](image/modelnet40_train14_plant.png)
62 | 
63 | vase   bookshelf   plant
64 |
65 |
66 | ## ModelNet10
67 | The ModelNet10 dataset is a subset of ModelNet40, containing 4,899 pre-aligned shapes from 10 categories. There are 3,991 (80%) shapes for training and 908 (20%) shapes for testing. The official dataset is available [here](http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip).
68 |
69 | The 10 categories include: `bathtub`, `bed`, `chair`, `desk`, `dresser`, `monitor`, `night_stand`, `sofa`, `table`, `toilet`.
70 |
71 |
72 | ## Dataset performance
73 | The repositories below use our datasets:
74 |
75 | - [antao97/UnsupervisedPointCloudReconstruction](https://github.com/antao97/UnsupervisedPointCloudReconstruction)
76 | - coming soon ...
77 |
78 |
79 |
80 | #### Reference repos:
81 |
82 | - [charlesq34/pointnet](https://github.com/charlesq34/pointnet)
83 | - [charlesq34/pointnet2](https://github.com/charlesq34/pointnet2)
84 | - [stevenygd/PointFlow](https://github.com/stevenygd/PointFlow)
85 | - [zekunhao1995/PointFlowRenderer](https://github.com/zekunhao1995/PointFlowRenderer)
86 | - [WangYueFt/dgcnn](https://github.com/WangYueFt/dgcnn)
87 |
88 |
89 |
--------------------------------------------------------------------------------
/dataset.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | """
4 | @Author: An Tao
5 | @Contact: ta19@mails.tsinghua.edu.cn
6 | @File: dataset.py
7 | @Time: 2020/1/2 10:26 AM
8 | """
9 |
10 | import os
11 | import torch
12 | import json
13 | import h5py
14 | from glob import glob
15 | import numpy as np
16 | import torch.utils.data as data
17 |
18 |
19 | shapenetpart_cat2id = {'airplane': 0, 'bag': 1, 'cap': 2, 'car': 3, 'chair': 4,
20 | 'earphone': 5, 'guitar': 6, 'knife': 7, 'lamp': 8, 'laptop': 9,
21 | 'motor': 10, 'mug': 11, 'pistol': 12, 'rocket': 13, 'skateboard': 14, 'table': 15}
22 | shapenetpart_seg_num = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
23 | shapenetpart_seg_start_index = [0, 4, 6, 8, 12, 16, 19, 22, 24, 28, 30, 36, 38, 41, 44, 47]
24 |
25 |
26 | def translate_pointcloud(pointcloud):
27 | xyz1 = np.random.uniform(low=2./3., high=3./2., size=[3])
28 | xyz2 = np.random.uniform(low=-0.2, high=0.2, size=[3])
29 |
30 | translated_pointcloud = np.add(np.multiply(pointcloud, xyz1), xyz2).astype('float32')
31 | return translated_pointcloud
32 |
33 |
34 | def jitter_pointcloud(pointcloud, sigma=0.01, clip=0.02):
35 | N, C = pointcloud.shape
36 | pointcloud += np.clip(sigma * np.random.randn(N, C), -1*clip, clip)
37 | return pointcloud
38 |
39 |
40 | def rotate_pointcloud(pointcloud):
41 | theta = np.pi*2 * np.random.rand()
42 | rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)],[np.sin(theta), np.cos(theta)]])
43 | pointcloud[:,[0,2]] = pointcloud[:,[0,2]].dot(rotation_matrix) # random rotation (x,z)
44 | return pointcloud
45 |
46 |
47 | class Dataset(data.Dataset):
48 | def __init__(self, root, dataset_name='modelnet40', class_choice=None,
49 | num_points=2048, split='train', load_name=True, load_file=True,
50 | segmentation=False, random_rotate=False, random_jitter=False,
51 | random_translate=False):
52 |
53 | assert dataset_name.lower() in ['shapenetcorev2', 'shapenetpart',
54 | 'modelnet10', 'modelnet40', 'shapenetpartpart']
55 | assert num_points <= 2048
56 |
57 | if dataset_name in ['shapenetcorev2', 'shapenetpart', 'shapenetpartpart']:
58 | assert split.lower() in ['train', 'test', 'val', 'trainval', 'all']
59 | else:
60 | assert split.lower() in ['train', 'test', 'all']
61 |
62 |         if dataset_name not in ['shapenetpart'] and segmentation:
63 |             raise AssertionError("segmentation is only supported for the shapenetpart dataset")
64 |
65 | self.root = os.path.join(root, dataset_name + '_hdf5_2048')
66 | self.dataset_name = dataset_name
67 | self.class_choice = class_choice
68 | self.num_points = num_points
69 | self.split = split
70 | self.load_name = load_name
71 | self.load_file = load_file
72 | self.segmentation = segmentation
73 | self.random_rotate = random_rotate
74 | self.random_jitter = random_jitter
75 | self.random_translate = random_translate
76 |
77 | self.path_h5py_all = []
78 | self.path_name_all = []
79 | self.path_file_all = []
80 |
81 | if self.split in ['train', 'trainval', 'all']:
82 | self.get_path('train')
83 | if self.dataset_name in ['shapenetcorev2', 'shapenetpart', 'shapenetpartpart']:
84 | if self.split in ['val', 'trainval', 'all']:
85 | self.get_path('val')
86 | if self.split in ['test', 'all']:
87 | self.get_path('test')
88 |
89 | data, label, seg = self.load_h5py(self.path_h5py_all)
90 |
91 |         if self.load_name or self.class_choice is not None:
92 | self.name = np.array(self.load_json(self.path_name_all)) # load label name
93 |
94 | if self.load_file:
95 | self.file = np.array(self.load_json(self.path_file_all)) # load file name
96 |
97 | self.data = np.concatenate(data, axis=0)
98 | self.label = np.concatenate(label, axis=0)
99 | if self.segmentation:
100 | self.seg = np.concatenate(seg, axis=0)
101 |
102 |         if self.class_choice is not None:
103 | indices = (self.name == class_choice)
104 | self.data = self.data[indices]
105 | self.label = self.label[indices]
106 | self.name = self.name[indices]
107 | if self.segmentation:
108 | self.seg = self.seg[indices]
109 | id_choice = shapenetpart_cat2id[class_choice]
110 | self.seg_num_all = shapenetpart_seg_num[id_choice]
111 | self.seg_start_index = shapenetpart_seg_start_index[id_choice]
112 | if self.load_file:
113 | self.file = self.file[indices]
114 | elif self.segmentation:
115 | self.seg_num_all = 50
116 | self.seg_start_index = 0
117 |
118 | def get_path(self, type):
119 | path_h5py = os.path.join(self.root, '%s*.h5'%type)
120 | paths = glob(path_h5py)
121 | paths_sort = [os.path.join(self.root, type + str(i) + '.h5') for i in range(len(paths))]
122 | self.path_h5py_all += paths_sort
123 | if self.load_name:
124 | paths_json = [os.path.join(self.root, type + str(i) + '_id2name.json') for i in range(len(paths))]
125 | self.path_name_all += paths_json
126 | if self.load_file:
127 | paths_json = [os.path.join(self.root, type + str(i) + '_id2file.json') for i in range(len(paths))]
128 | self.path_file_all += paths_json
129 | return
130 |
131 | def load_h5py(self, path):
132 | all_data = []
133 | all_label = []
134 | all_seg = []
135 | for h5_name in path:
136 |             f = h5py.File(h5_name, 'r')
137 | data = f['data'][:].astype('float32')
138 | label = f['label'][:].astype('int64')
139 | if self.segmentation:
140 | seg = f['seg'][:].astype('int64')
141 | f.close()
142 | all_data.append(data)
143 | all_label.append(label)
144 | if self.segmentation:
145 | all_seg.append(seg)
146 | return all_data, all_label, all_seg
147 |
148 | def load_json(self, path):
149 | all_data = []
150 | for json_name in path:
151 |             with open(json_name, 'r') as j:
152 |                 data = json.load(j)
153 | all_data += data
154 | return all_data
155 |
156 | def __getitem__(self, item):
157 | point_set = self.data[item][:self.num_points]
158 | label = self.label[item]
159 | if self.load_name:
160 | name = self.name[item] # get label name
161 | if self.load_file:
162 | file = self.file[item] # get file name
163 |
164 | if self.random_rotate:
165 | point_set = rotate_pointcloud(point_set)
166 | if self.random_jitter:
167 | point_set = jitter_pointcloud(point_set)
168 | if self.random_translate:
169 | point_set = translate_pointcloud(point_set)
170 |
171 | # convert numpy array to pytorch Tensor
172 | point_set = torch.from_numpy(point_set)
173 | label = torch.from_numpy(np.array([label]).astype(np.int64))
174 | label = label.squeeze(0)
175 |
176 | if self.segmentation:
177 | seg = self.seg[item]
178 | seg = torch.from_numpy(seg)
179 | return point_set, label, seg, name, file
180 | else:
181 | return point_set, label, name, file
182 |
183 | def __len__(self):
184 | return self.data.shape[0]
185 |
186 |
187 | if __name__ == '__main__':
188 | root = os.getcwd()
189 |
190 | # choose dataset name from 'shapenetcorev2', 'shapenetpart', 'modelnet40' and 'modelnet10'
191 | dataset_name = 'shapenetcorev2'
192 |
193 | # choose split type from 'train', 'test', 'all', 'trainval' and 'val'
194 | # only shapenetcorev2 and shapenetpart dataset support 'trainval' and 'val'
195 | split = 'train'
196 |
197 | d = Dataset(root=root, dataset_name=dataset_name, num_points=2048, split=split)
198 | print("datasize:", d.__len__())
199 |
200 | item = 0
201 | ps, lb, n, f = d[item]
202 | print(ps.size(), ps.type(), lb.size(), lb.type(), n, f)
--------------------------------------------------------------------------------
/image/modelnet40_train10_bookshelf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/modelnet40_train10_bookshelf.png
--------------------------------------------------------------------------------
/image/modelnet40_train14_plant.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/modelnet40_train14_plant.png
--------------------------------------------------------------------------------
/image/modelnet40_train7_vase.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/modelnet40_train7_vase.png
--------------------------------------------------------------------------------
/image/shapenetcorev2_test37_earphone.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/shapenetcorev2_test37_earphone.png
--------------------------------------------------------------------------------
/image/shapenetcorev2_test59_lamp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/shapenetcorev2_test59_lamp.png
--------------------------------------------------------------------------------
/image/shapenetcorev2_train4_tower.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/shapenetcorev2_train4_tower.png
--------------------------------------------------------------------------------
/image/shapenetpart_train13_chair.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/shapenetpart_train13_chair.png
--------------------------------------------------------------------------------
/image/shapenetpart_train2_table.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/shapenetpart_train2_table.png
--------------------------------------------------------------------------------
/image/shapenetpart_train4_airplane.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/antao97/PointCloudDatasets/54b1bfd4f53ac7f5e9ddbf6633c2ea8f8da1a4ca/image/shapenetpart_train4_airplane.png
--------------------------------------------------------------------------------
/visualize.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 |
4 | def standardize_bbox(pcl, points_per_object):
5 | pt_indices = np.random.choice(pcl.shape[0], points_per_object, replace=False)
6 | np.random.shuffle(pt_indices)
7 | pcl = pcl[pt_indices] # n by 3
8 | mins = np.amin(pcl, axis=0)
9 | maxs = np.amax(pcl, axis=0)
10 | center = ( mins + maxs ) / 2.
11 | scale = np.amax(maxs-mins)
12 | print("Center: {}, Scale: {}".format(center, scale))
13 | result = ((pcl - center)/scale).astype(np.float32) # [-0.5, 0.5]
14 | return result
15 | # Mitsuba scene templates (camera/film, per-point sphere, ground plane and area light), adapted from PointFlowRenderer (see README).
16 | xml_head = \
17 | """
18 | <scene version="0.6.0">
19 |     <integrator type="path">
20 |         <integer name="maxDepth" value="-1"/>
21 |     </integrator>
22 |     <sensor type="perspective">
23 |         <float name="farClip" value="100"/>
24 |         <float name="nearClip" value="0.1"/>
25 |         <transform name="toWorld">
26 |             <lookat origin="3,3,3" target="0,0,0" up="0,0,1"/>
27 |         </transform>
28 |         <float name="fov" value="25"/>
29 | 
30 |         <sampler type="ldsampler">
31 |             <integer name="sampleCount" value="256"/>
32 |         </sampler>
33 |         <film type="hdrfilm">
34 |             <integer name="width" value="1600"/>
35 |             <integer name="height" value="1200"/>
36 |             <rfilter type="gaussian"/>
37 |             <boolean name="banner" value="false"/>
38 |         </film>
39 |     </sensor>
40 | 
41 |     <bsdf type="roughplastic" id="surfaceMaterial">
42 |         <string name="distribution" value="ggx"/>
43 |         <float name="alpha" value="0.05"/>
44 |         <float name="intIOR" value="1.46"/>
45 |         <rgb name="diffuseReflectance" value="1,1,1"/>
46 |     </bsdf>
47 | 
48 | """
49 |
50 | xml_ball_segment = \
51 | """
52 |     <shape type="sphere">
53 |         <float name="radius" value="0.025"/>
54 |         <transform name="toWorld">
55 |             <translate x="{}" y="{}" z="{}"/>
56 |         </transform>
57 |         <bsdf type="diffuse">
58 |             <rgb name="reflectance" value="{},{},{}"/>
59 |         </bsdf>
60 |     </shape>
61 | 
62 | """
63 |
64 | xml_tail = \
65 | """
66 |     <shape type="rectangle">
67 |         <ref name="bsdf" id="surfaceMaterial"/>
68 |         <transform name="toWorld">
69 |             <scale x="10" y="10" z="1"/>
70 |             <translate x="0" y="0" z="-0.5"/>
71 |         </transform>
72 |     </shape>
73 | 
74 |     <shape type="rectangle">
75 |         <transform name="toWorld">
76 |             <scale x="10" y="10" z="1"/>
77 |             <lookat origin="-4,4,20" target="0,0,0" up="0,0,1"/>
78 |         </transform>
79 |         <emitter type="area">
80 |             <rgb name="radiance" value="6,6,6"/>
81 |         </emitter>
82 |     </shape>
83 | </scene>
84 | """
85 |
86 | def colormap(x,y,z):
87 | vec = np.array([x,y,z])
88 | vec = np.clip(vec, 0.001,1.0)
89 | norm = np.sqrt(np.sum(vec**2))
90 | vec /= norm
91 | return [vec[0], vec[1], vec[2]]
92 |
93 | def mitsuba(pcl, path, clr=None):
94 | xml_segments = [xml_head]
95 |
96 | # pcl = standardize_bbox(pcl, 2048)
97 | pcl = pcl[:,[2,0,1]]
98 | pcl[:,0] *= -1
99 | h = np.min(pcl[:,2])
100 |
101 | for i in range(pcl.shape[0]):
102 |         if clr is None:
103 | color = colormap(pcl[i,0]+0.5,pcl[i,1]+0.5,pcl[i,2]+0.5)
104 | else:
105 | color = clr
106 | if h < -0.25:
107 | xml_segments.append(xml_ball_segment.format(pcl[i,0],pcl[i,1],pcl[i,2]-h-0.6875, *color))
108 | else:
109 | xml_segments.append(xml_ball_segment.format(pcl[i,0],pcl[i,1],pcl[i,2], *color))
110 | xml_segments.append(xml_tail)
111 |
112 | xml_content = str.join('', xml_segments)
113 |
114 | with open(path, 'w') as f:
115 | f.write(xml_content)
116 |
117 | if __name__ == '__main__':
118 | item = 0
119 | split = 'train'
120 | dataset_name = 'shapenetcorev2'
121 | root = os.getcwd()
122 | save_root = os.path.join("image", dataset_name)
123 | if not os.path.exists(save_root):
124 | os.makedirs(save_root)
125 |
126 | from dataset import Dataset
127 | d = Dataset(root=root, dataset_name=dataset_name,
128 |                 num_points=2048, split=split, random_rotate=False, load_name=True)
129 | print("datasize:", d.__len__())
130 |
131 |     pts, lb, n, f = d[item]
132 | print(pts.size(), pts.type(), lb.size(), lb.type(), n)
133 | path = os.path.join(save_root, dataset_name + '_' + split + str(item) + '_' + str(n) + '.xml')
134 | mitsuba(pts.numpy(), path)
135 |
--------------------------------------------------------------------------------