├── .gitmodules
├── LICENSE
├── README.md
├── cd_normal_fscore.cu
├── data
│   ├── all
│   │   └── .gitkeep
│   └── demo
│       └── .gitkeep
├── log
│   └── shapenet_pretrained
│       └── checkpoints
│           └── .gitkeep
├── network
│   ├── dataset.py
│   ├── network.py
│   ├── test.py
│   ├── train.py
│   └── utils.py
├── postprocess
│   ├── CMakeLists.txt
│   ├── main.py
│   ├── postprocess.cpp
│   └── run_demo.py
├── preprocess_with_gt_mesh
│   ├── CMakeLists.txt
│   ├── Intersection.cpp
│   ├── Intersection.h
│   ├── calc_candidates_label.cpp
│   ├── calc_geo_dis.cpp
│   ├── main.py
│   ├── octree.h
│   ├── preprocess_mesh.cpp
│   ├── propose_candidates.cpp
│   ├── run_demo.py
│   └── sample_pc.cpp
├── preprocess_with_pc
│   ├── CMakeLists.txt
│   ├── main.py
│   ├── propose_candidates.cpp
│   └── run_demo.py
└── teaser.jpg

/.gitmodules:
--------------------------------------------------------------------------------
[submodule "annoy"]
	path = annoy
	url = https://github.com/spotify/annoy
	branch = master
[submodule "SparseConvNet"]
	path = SparseConvNet
	url = https://github.com/facebookresearch/SparseConvNet.git
	branch = master

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2020 Liu Minghua

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Meshing-Point-Clouds-with-IER
Code for Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance (ECCV 2020).
[[paper](https://arxiv.org/pdf/2007.09267.pdf)]

![](/teaser.jpg)

We propose a novel mesh reconstruction method that leverages the input point cloud as much as possible, by predicting which triplets of points should form faces. Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic and extrinsic metrics. We learn to classify the candidate triangles using a deep network and then feed the results to a post-processing module for mesh generation. Our method not only preserves fine-grained details and handles ambiguous structures, but also generalizes strongly to unseen categories.
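Concretely, the intrinsic-extrinsic ratio (IER) behind this surrogate is computed during data generation (see `preprocess_with_gt_mesh/calc_candidates_label.cpp`): for a candidate triangle, it is the sum of pairwise geodesic (intrinsic) distances between its vertices divided by the sum of pairwise Euclidean (extrinsic) distances. Below is a Python paraphrase of that labeling rule, for illustration only; `tau` and the 0.0005 surface-distance threshold mirror the C++ source, while the function itself and its inputs are hypothetical:

```python
import numpy as np

def label_candidate(pc, tri, geo_dis, tau, dis_to_gt):
    """Label one candidate triangle the way calc_candidates_label.cpp does.

    pc        -- (N, 3) array of point positions
    tri       -- (a, b, c) vertex indices of the candidate triangle
    geo_dis   -- dict: sorted vertex pair -> geodesic distance (1e9 if unknown)
    tau       -- IER threshold; ratios above it mean "should not be connected"
    dis_to_gt -- distance from the candidate triangle to the gt mesh
    """
    a, b, c = tri
    extrinsic = (np.linalg.norm(pc[a] - pc[b]) +
                 np.linalg.norm(pc[a] - pc[c]) +
                 np.linalg.norm(pc[b] - pc[c]))
    intrinsic = sum(geo_dis.get(tuple(sorted(p)), 1e9)
                    for p in ((a, b), (a, c), (b, c)))
    if intrinsic / extrinsic > tau:   # intrinsic-extrinsic ratio
        return 0                      # wrong connectivity (e.g., spans a gap)
    return 1 if dis_to_gt < 0.0005 else 2   # near vs. far from the gt surface
```

### 0. Environment & Prerequisites.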
a) Environment:
- PyTorch 1.3.1
- Python 3.6
- CUDA 10.0

b) Download the submodules [annoy (1.16)](https://github.com/spotify/annoy/archive/v1.16.zip) and [SparseConvNet (0.2)](https://github.com/facebookresearch/SparseConvNet/), and install SparseConvNet:

```
git submodule update --init --recursive
cd SparseConvNet/
sh develop.sh
```
**annoy 1.17 changed its API. Please download the [previous version](https://github.com/spotify/annoy/archive/v1.16.zip).**

c) Install [plyfile](https://github.com/dranjan/python-plyfile), pickle, and tqdm with pip.

### 1. Download pretrained models and demo data.
You can download the pretrained model and demo data from [here](https://drive.google.com/drive/folders/1Wb-mU3mcxpKAQyb7LqqQYKbfnpmDXSiZ?usp=sharing) for a quick look. The demo data includes ten shapes (both gt mesh and point cloud) and their pre-generated pickle files. The pickle files contain the point cloud vertices and the proposed candidate triangles (vertex indices and gt labels). You can use the pickle files to train or test the network.

### 2. Classify proposed candidate triangles with a neural network.
You can use `network/test.py` to classify the proposed candidate triangles. You can find the predicted labels (npy files) at `log/shapenet_pretrained/test_demo`. Each npy file holds 300,000 triangles, and each shape may have multiple npy files.

### 3. Post-process and get output meshes.

You can feed the pickle files and the predicted npy files into the post-processing program to get the output meshes.

First, compile the cpp code:

```
cd postprocess
mkdir build
cd build
cmake ..
make
cd ..
```
Then, you can post-process all the demo shapes with `run_demo.py` or post-process a single shape with `main.py`. You can find the generated demo meshes at `log/shapenet_pretrained/test_demo/output_mesh`.

### 4. Train your own network.
You can download all the pickle files for the full ShapeNet dataset from [here](https://drive.google.com/drive/folders/1Wb-mU3mcxpKAQyb7LqqQYKbfnpmDXSiZ?usp=sharing) (23,108 shapes, ~42.2 GB). Then use `network/train.py` to train your own network.

### 5. Generate your own training data.

You can generate your own training data from gt meshes (ply).

First, compile the cpp code:

```
cd preprocess_with_gt_mesh
mkdir build
cd build
cmake ..
make
cd ..
```
Then, you can use `main.py` to generate the pickle file for a single shape or use `run_demo.py` to generate the pickle files for all the demo meshes. Processing a single shape may take several minutes; you can use multiple processes to accelerate it.

In detail, the training data generation consists of several steps (a sketch of chaining them appears after this list):
- preprocess the gt mesh: normalize the mesh, merge close vertices, etc.
- sample a point cloud: sample 12,000 ~ 12,800 points with Poisson sampling, using binary search to determine the radius.
- calculate geodesic distances between pairs of points: this may take up to 1 minute. In some cases (e.g., complex and broken meshes), it may time out and thus fail to generate the final pickle file.
- propose candidate triangles based on KNN.
- calculate the distances between the candidate triangles and the ground truth mesh.
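These steps correspond to the binaries built by `preprocess_with_gt_mesh/CMakeLists.txt` (`preprocess_mesh`, `sample_pc`, `calc_geo_dis`, `propose_candidates`, `calc_candidates_label`). A minimal sketch of chaining them for one shape; the intermediate file names and most argument lists are illustrative assumptions (only `calc_candidates_label`'s argument order is taken from its source), so refer to `main.py` for the actual invocation:

```python
import os

mesh = "../data/demo/xxxxxx.ply"   # hypothetical input gt mesh
tau = 1.3                          # hypothetical IER threshold

os.system("./build/preprocess_mesh %s mesh.txt" % mesh)        # normalize, merge close vertices
os.system("./build/sample_pc mesh.txt points.txt")             # Poisson-sample 12,000 ~ 12,800 points
os.system("./build/calc_geo_dis mesh.txt points.txt geo.txt")  # pairwise geodesic distances
os.system("./build/propose_candidates points.txt cand.txt")    # KNN-based candidate triangles
# argv order from calc_candidates_label.cpp: points, geo_dis, candidates, labels, tau
os.system("./build/calc_candidates_label points.txt geo.txt cand.txt labels.txt %f" % tau)
```

### 6. Generate pickle files with only point clouds.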
You can also generate pickle files with only point clouds (ply), so that you can feed them into the network and the post-processing program to get the final mesh.

First, compile the cpp code:
```
cd preprocess_with_pc
mkdir build
cd build
cmake ..
make
cd ..
```

Then, you can use `main.py` to generate the pickle file for a single shape or use `run_demo.py` to generate the pickle files for all the demo point clouds. Processing a single shape usually takes less than one minute; you can use multiple processes to accelerate it. Please note that, in this case, the candidate labels in the pickle files will be set to -1.

The input point cloud should contain 12,000 ~ 12,800 points (to best fit our pre-trained network). Using Poisson sampling as pre-processing yields an evenly distributed point cloud and thus boosts performance. Currently, our method does not support very noisy point clouds.

If you find our work useful for your research, please cite:

```
@article{liu2020meshing,
  title={Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance},
  author={Liu, Minghua and Zhang, Xiaoshuai and Su, Hao},
  journal={arXiv preprint arXiv:2007.09267},
  year={2020}
}
```

--------------------------------------------------------------------------------
/cd_normal_fscore.cu:
--------------------------------------------------------------------------------
// NOTE: the angle-bracketed #include targets were lost in extraction; the
// headers below are an assumption covering what the code needs.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cmath>
#include <string>
#include <algorithm>
#include <iostream>
#include <cuda_runtime.h>

#define N 1111111
#define B 1
#define MESH 5555555

__global__ void NmDistanceKernel(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i){
    const int batch=512;
    __shared__ float buf[batch*3];
    for (int i=blockIdx.x;ibest){
        result[(i*n+j)]=best;
        result_i[(i*n+j)]=best_i;
        }
    }
    __syncthreads();
    }
    }
}
void chamfer_cuda_forward(int b, int n, float * xyz1, int m, float * xyz2, float * dist1, int * idx1,float * dist2, int * idx2, cudaStream_t stream){
    NmDistanceKernel<<>>(b, n, xyz1, m, xyz2, dist1, idx1);
    cudaDeviceSynchronize();
    NmDistanceKernel<<>>(b, m, xyz2, n, xyz1, dist2, idx2);
    cudaDeviceSynchronize();
    return ;
}


float xyz1[B][N][3], xyz2[B][N][3];
float dist1[B][N], dist2[B][N];
int idx1[B][N], idx2[B][N];

float *xyz1_gpu, *xyz2_gpu, *dist1_gpu, *dist2_gpu;
int *idx1_gpu, *idx2_gpu;


struct Point {
    double x, y, z;
    Point() {};
    Point (double _x, double _y, double _z) {
        x = _x; y = _y; z = _z;
    };
    Point operator - (const Point& v) const {
        return Point(x - v.x, y - v.y, z - v.z);}

    Point operator + (const Point& v) const {
        return Point(x + v.x, y + v.y, z + v.z);}

    Point operator * (const double t) const {
        return Point(x * t, y * t, z * t);}

    double length() {
        return sqrt(x * x + y * y + z * z);}

    void normalize() {
        double l = length();
        x /= l; y /= l; z /= l;}

    float dot(const Point& v) const {
        return x * v.x + y * v.y + z * v.z;}

    Point cross(const Point& v) const {
        return Point(
            y * v.z - z * v.y,
            z * v.x - x * v.z,
            x * v.y - y * v.x);}

}vertices1[MESH], vertices2[MESH], normal1[MESH], normal2[MESH];

struct Face {
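    // a, b, c: vertex indices; s: triangle area, used for area-weighted point sampling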
int a, b, c; 122 | double s; 123 | Face() {}; 124 | Face (int _a, int _b, int _c) { 125 | a = _a; b = _b; c = _c; 126 | }; 127 | }faces1[MESH], faces2[MESH]; 128 | 129 | int n_vertices_1, n_vertices_2, n_faces_1, n_faces_2; 130 | int n = 0, m = 0; 131 | int resolution = 1000000; 132 | 133 | Point randomPointTriangle(Point a, Point b, Point c) { 134 | double r1 = (double) rand() / RAND_MAX; 135 | double r2 = (double) rand() / RAND_MAX; 136 | double r1sqr = std::sqrt(r1); 137 | double OneMinR1Sqr = (1 - r1sqr); 138 | double OneMinR2 = (1 - r2); 139 | a = a * OneMinR1Sqr; 140 | b = b * OneMinR2; 141 | return (c * r2 + b) * r1sqr + a; 142 | } 143 | 144 | int main(int argc, char ** argv) { 145 | std::string mesh1_file = argv[1]; 146 | std::string mesh2_file = argv[2]; 147 | std::string model_id = argv[3]; 148 | 149 | freopen(mesh1_file.c_str(), "r", stdin); 150 | scanf("%d%d", &n_vertices_1, &n_faces_1); 151 | for (int i = 0; i < n_vertices_1; i++) { 152 | double x, y, z; 153 | scanf("%lf %lf %lf", &x, &y, &z); 154 | vertices1[i] = Point(x, y, z); 155 | } 156 | double sum_area = 0; 157 | for (int i = 0; i < n_faces_1; i++) { 158 | int _, a, b, c; 159 | scanf("%d %d %d %d", &_, &a, &b, &c); 160 | faces1[i] = Face(a, b, c); 161 | faces1[i].s = (vertices1[c] - vertices1[a]).cross((vertices1[b] - vertices1[a])).length() / 2; 162 | if (std::isnan(faces1[i].s)) 163 | faces1[i].s=0; 164 | sum_area += faces1[i].s; 165 | } 166 | for (int i = 0; i < n_faces_1; i++) { 167 | int a = faces1[i].a, b = faces1[i].b, c = faces1[i].c; 168 | int t = round(resolution * (faces1[i].s / sum_area)); 169 | Point normal = (vertices1[c] - vertices1[a]).cross(vertices1[b] - vertices1[a]); 170 | normal.normalize(); 171 | for (int j = 0; j < t; j++) { 172 | Point p = randomPointTriangle(vertices1[a], vertices1[b], vertices1[c]); 173 | xyz1[0][n][0] = p.x; xyz1[0][n][1] = p.y; xyz1[0][n][2] = p.z; 174 | normal1[n] = normal; 175 | n++; 176 | } 177 | } 178 | 179 | freopen(mesh2_file.c_str(), "r", stdin); 180 | scanf("%d%d", &n_vertices_2, &n_faces_2); 181 | for (int i = 0; i < n_vertices_2; i++) { 182 | double x, y, z; 183 | scanf("%lf %lf %lf", &x, &y, &z); 184 | vertices2[i] = Point(x, y, z); 185 | } 186 | sum_area = 0; 187 | for (int i = 0; i < n_faces_2; i++) { 188 | int _, a, b, c; 189 | scanf("%d %d %d %d", &_, &a, &b, &c); 190 | faces2[i] = Face(a, b, c); 191 | faces2[i].s = (vertices2[c] - vertices2[a]).cross((vertices2[b] - vertices2[a])).length() / 2; 192 | sum_area += faces2[i].s; 193 | } 194 | for (int i = 0; i < n_faces_2; i++) { 195 | int a = faces2[i].a, b = faces2[i].b, c = faces2[i].c; 196 | int t = round(resolution * (faces2[i].s / sum_area)); 197 | Point normal = (vertices2[c] - vertices2[a]).cross(vertices2[b] - vertices2[a]); 198 | normal.normalize(); 199 | for (int j = 0; j < t; j++) { 200 | Point p = randomPointTriangle(vertices2[a], vertices2[b], vertices2[c]); 201 | xyz2[0][m][0] = p.x; xyz2[0][m][1] = p.y; xyz2[0][m][2] = p.z; 202 | normal2[m] = normal; 203 | m++; 204 | } 205 | } 206 | 207 | size_t xyz_size = max(n, m) * 3 * sizeof(float); 208 | size_t dis_size = max(n, m) * sizeof(float); 209 | size_t idx_size = max(n, m) * sizeof(int); 210 | cudaMalloc((void **) &xyz1_gpu, xyz_size); 211 | cudaMalloc((void **) &xyz2_gpu, xyz_size); 212 | cudaMalloc((void **) &dist1_gpu, dis_size); 213 | cudaMalloc((void **) &dist2_gpu, dis_size); 214 | cudaMalloc((void **) &idx1_gpu, idx_size); 215 | cudaMalloc((void **) &idx2_gpu, idx_size); 216 | 217 | cudaMemcpy(xyz1_gpu, &xyz1[0][0], xyz_size, 
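/* upload both sampled point sets to the GPU, run the brute-force nearest-neighbor kernel in each direction, then copy the squared distances and nearest-neighbor indices back to the host */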
cudaMemcpyHostToDevice); 218 | cudaMemcpy(xyz2_gpu, &xyz2[0][0], xyz_size, cudaMemcpyHostToDevice); 219 | 220 | chamfer_cuda_forward(1, n, xyz1_gpu, m, xyz2_gpu, dist1_gpu, idx1_gpu, dist2_gpu, idx2_gpu, NULL); 221 | 222 | cudaMemcpy(&dist1[0][0], dist1_gpu, dis_size, cudaMemcpyDeviceToHost); 223 | cudaMemcpy(&dist2[0][0], dist2_gpu, dis_size, cudaMemcpyDeviceToHost); 224 | 225 | cudaMemcpy(&idx1[0][0], idx1_gpu, idx_size, cudaMemcpyDeviceToHost); 226 | cudaMemcpy(&idx2[0][0], idx2_gpu, idx_size, cudaMemcpyDeviceToHost); 227 | 228 | cudaError_t err = cudaGetLastError(); 229 | if (err != cudaSuccess) { 230 | printf("error in nnd updateOutput: %s\n", cudaGetErrorString(err)); 231 | return 0; 232 | } 233 | 234 | double sum = 0; 235 | 236 | double sum_normal = 0; 237 | 238 | // normal consistency 239 | for (int i = 0; i < n; i++) { 240 | sum_normal += abs(normal1[i].dot(normal2[idx1[0][i]])); 241 | } 242 | 243 | for (int i = 0; i < m; i++) { 244 | sum_normal += abs(normal2[i].dot(normal1[idx2[0][i]])); 245 | } 246 | 247 | // f-score for different threshold 248 | for (int k = 0; k <= 40; k++) { 249 | double threashold = sqrt(sum_area / resolution) * (1.0 + (double)k / 20); 250 | int cnt1 = n, cnt2 = m; 251 | for (int i = 0; i < n; i++) { 252 | double d = sqrt(dist1[0][i]); 253 | if (d > threashold) 254 | cnt1--; 255 | if (k == 0) sum += d; 256 | } 257 | for (int i = 0; i < m; i++) { 258 | double d = sqrt(dist2[0][i]); 259 | if (d > threashold) 260 | cnt2--; 261 | if (k == 0) sum += d; 262 | } 263 | double t1 = (double) cnt1 / n; 264 | double t2 = (double) cnt2 / m; 265 | double f1 = 2 * t1 * t2 / (t1 + t2 + 1e-9); 266 | printf("%lf ", f1); 267 | } 268 | 269 | // chamfer distance & normal consistency 270 | printf("%lf %lf %s\n", sum / (n + m), sum_normal / (n + m), model_id.c_str()); 271 | return 0; 272 | } 273 | -------------------------------------------------------------------------------- /data/all/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Colin97/Point2Mesh/aa080b26692ad50ea9f35527764385545922aae1/data/all/.gitkeep -------------------------------------------------------------------------------- /data/demo/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Colin97/Point2Mesh/aa080b26692ad50ea9f35527764385545922aae1/data/demo/.gitkeep -------------------------------------------------------------------------------- /log/shapenet_pretrained/checkpoints/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Colin97/Point2Mesh/aa080b26692ad50ea9f35527764385545922aae1/log/shapenet_pretrained/checkpoints/.gitkeep -------------------------------------------------------------------------------- /network/dataset.py: -------------------------------------------------------------------------------- 1 | # *_*coding:utf-8 *_* 2 | import os 3 | import json 4 | import warnings 5 | import numpy as np 6 | import pickle 7 | import torch 8 | from torch.utils.data import Dataset 9 | 10 | class TrainDataset(Dataset): 11 | def __init__(self, root = '../data/all', npoints = 12800, ntriangles = 25000, split = 'train'): 12 | self.npoints = npoints 13 | self.ntriangles = ntriangles 14 | self.root = root 15 | self.split = split 16 | 17 | if split == 'train': 18 | with open(os.path.join(self.root, 'train.txt'), 'r') as f: 19 | self.ids = [line.strip() for line in f.readlines()] 20 | else: 21 | with 
open(os.path.join(self.root, 'val.txt'), 'r') as f: 22 | self.ids = [line.strip() for line in f.readlines()] 23 | 24 | self.data = [] 25 | for id in self.ids: 26 | d = pickle.load(open(os.path.join(self.root, id + '.p'), 'rb')) 27 | self.data.append(d) 28 | 29 | def __getitem__(self, index): 30 | d = self.data[index] 31 | idx = torch.randperm(len(d['vertex_idx'])) 32 | idx = idx[:self.ntriangles] 33 | return torch.from_numpy(d['pc']).float(), torch.from_numpy(d['vertex_idx'][idx]).long(), torch.from_numpy(d['label'][idx]).long() 34 | 35 | def __len__(self): 36 | return len(self.ids) 37 | 38 | class TestDataset(Dataset): 39 | def __init__(self, root, npoints=12800, ntriangles=350000): 40 | self.npoints = npoints 41 | self.ntriangles = ntriangles 42 | self.root = root 43 | 44 | with open(os.path.join(self.root, 'models.txt'), 'r') as f: 45 | self.ids = [line.strip() for line in f.readlines()] 46 | 47 | self.data = [] 48 | for i, id in enumerate(self.ids): 49 | d = pickle.load(open(os.path.join(self.root, id + '.p'), 'rb')) 50 | n = len(d['vertex_idx']) 51 | for i in range((n + self.ntriangles - 1) // self.ntriangles): 52 | dd = {} 53 | dd['pc'] = d['pc'] 54 | l = i * self.ntriangles 55 | r = min((i + 1) * self.ntriangles, n) 56 | dd['vertex_idx'] = d['vertex_idx'][l: r] 57 | dd['label'] = d['label'][l: r] 58 | if (r - l < self.ntriangles): 59 | padding_size = self.ntriangles - (r - l) 60 | ver_padding = np.repeat(np.expand_dims(d['vertex_idx'][-1], 0), padding_size, axis = 0) 61 | label_padding = np.repeat(np.expand_dims(d['label'][-1], 0), padding_size, axis = 0) 62 | dd['vertex_idx'] = np.concatenate((dd['vertex_idx'], ver_padding), axis=0) 63 | dd['label'] = np.concatenate((dd['label'], label_padding), axis=0) 64 | dd['model_ids'] = id + '_%d'%i 65 | self.data.append(dd) 66 | 67 | def __getitem__(self, index): 68 | d = self.data[index] 69 | return d['pc'], d['vertex_idx'], d['label'], d['model_ids'] 70 | 71 | def __len__(self): 72 | return len(self.data) 73 | 74 | -------------------------------------------------------------------------------- /network/network.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | import sparseconvnet as scn 5 | 6 | class get_model(nn.Module): 7 | def __init__(self): 8 | super(get_model, self).__init__() 9 | self.part_num = 3 10 | self.resolution = 150 11 | self.dimension = 3 12 | self.reps = 2 #Conv block repetition factor 13 | self.m = 32 #Unet number of features 14 | self.nPlanes = [self.m, 2 * self.m, 3 * self.m, 4 * self.m, 5 * self.m] #UNet number of features per level 15 | self.sparseModel = scn.Sequential().add( 16 | scn.InputLayer(self.dimension, torch.LongTensor([self.resolution * 8 + 15] * 3), mode=3)).add( 17 | scn.SubmanifoldConvolution(self.dimension, 1, self.m, 3, False)).add( 18 | scn.FullyConvolutionalNet(self.dimension, self.reps, self.nPlanes, residual_blocks=False, downsample=[3,2])).add( 19 | scn.BatchNormReLU(sum(self.nPlanes))).add( 20 | scn.OutputLayer(self.dimension)) 21 | self.nc = 64 22 | self.linear = nn.Linear(sum(self.nPlanes), self.nc) 23 | self.convs1 = torch.nn.Conv1d(self.nc * 3, 128, 1) 24 | self.convs2 = torch.nn.Conv1d(128, 64, 1) 25 | self.convs3 = torch.nn.Conv1d(64, self.part_num, 1) 26 | self.bns1 = nn.BatchNorm1d(128) 27 | self.bns2 = nn.BatchNorm1d(64) 28 | 29 | def forward(self, pc, idx): 30 | B, N, _ = pc.size() 31 | _, M, _ = idx.size() 32 | 33 | pc = pc.view(-1, 3) 34 | pc = (pc * 2 + 4) * 
self.resolution 35 | pc = torch.floor(pc).long() 36 | x = torch.cat((pc, torch.arange(B).unsqueeze(-1).cuda().repeat(1, N).view(-1, 1)), 1) 37 | x = self.sparseModel([x, torch.ones((B * N, 1)).cuda()]) 38 | x = self.linear(x) 39 | x = x.view(B, N, self.nc) 40 | x = x.transpose(1, 2) 41 | 42 | x = x.unsqueeze(-1).repeat(1, 1, 1, 3) 43 | idx = idx.unsqueeze(1).repeat(1, self.nc, 1, 1) 44 | x = x.gather(2, idx) 45 | x = x.permute(0, 3, 1, 2).contiguous() 46 | x = x.view(B, self.nc * 3, M) 47 | 48 | x = F.relu(self.bns1(self.convs1(x))) 49 | x = F.relu(self.bns2(self.convs2(x))) 50 | x = self.convs3(x) 51 | x = x.transpose(2, 1).contiguous() 52 | x = F.log_softmax(x.view(-1, self.part_num), dim=-1) 53 | x = x.view(B, M, self.part_num) 54 | 55 | return x 56 | 57 | 58 | class get_loss(torch.nn.Module): 59 | def __init__(self): 60 | super(get_loss, self).__init__() 61 | 62 | def forward(self, pred, target): 63 | pred1 = torch.cat([pred[:,0].unsqueeze(-1), torch.max(pred[:,1:], 1, keepdim = True)[0]], 1) 64 | target1 = target.clone() 65 | target1[target1 != 0] = 1 66 | loss_2 = F.nll_loss(pred1, target1) 67 | loss_3 = F.nll_loss(pred, target) 68 | return loss_2, loss_3 -------------------------------------------------------------------------------- /network/test.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | from dataset import TestDataset 4 | import torch 5 | from pathlib import Path 6 | import importlib 7 | from tqdm import tqdm 8 | import numpy as np 9 | 10 | def parse_args(): 11 | parser = argparse.ArgumentParser('Model') 12 | parser.add_argument('--batch_size', type=int, default=8, help='Batch size.') 13 | parser.add_argument('--gpu', type=str, default='0', help='GPU to use [default: GPU 0]') 14 | parser.add_argument('--log_dir', type=str, default='shapenet_pretrained', help='Log path [default: None]') 15 | parser.add_argument('--npoint', type=int, default=12800, help='Size of point cloud [default: 12800]') 16 | parser.add_argument('--ntriangle', type=int, default=300000, help='Size of candidates per pass per shape') 17 | parser.add_argument('--output_dir', type=str, default='test_demo', help='dir to store predicted label') 18 | parser.add_argument('--input_dir', type=str, default='../data/demo', help='dir of input pickle file') 19 | return parser.parse_args() 20 | 21 | def main(args): 22 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu 23 | log_dir = Path('../log/%s' % args.log_dir) 24 | checkpoints_dir = log_dir.joinpath('checkpoints/') 25 | output_dir = log_dir.joinpath('%s'%args.output_dir) 26 | output_dir.mkdir(exist_ok = True) 27 | 28 | Dataset = TestDataset(root = args.input_dir, npoints = args.npoint, ntriangles = args.ntriangle) 29 | DataLoader = torch.utils.data.DataLoader(Dataset, batch_size = args.batch_size, shuffle = False, num_workers = 12) 30 | print("The number of test data is: %d." 
% len(Dataset)) 31 | 32 | MODEL = importlib.import_module("network") 33 | classifier = MODEL.get_model().cuda() 34 | classifier = torch.nn.DataParallel(classifier) 35 | 36 | checkpoint = torch.load(checkpoints_dir.joinpath('best_model.pth')) 37 | classifier.load_state_dict(checkpoint['model_state_dict']) 38 | print('Pretrained model loaded.') 39 | 40 | with torch.no_grad(): 41 | total_correct = 0 42 | total_seen = 0 43 | classifier = classifier.eval() 44 | 45 | for batch_id, (pc, vertex_idx, label, model_ids) in tqdm(enumerate(DataLoader), total = len(DataLoader), smoothing = 0.9): 46 | B, _, _ = pc.size() 47 | pc, vertex_idx = pc.float().cuda(), vertex_idx.long().cuda() 48 | pred = classifier(pc, vertex_idx) 49 | pred = pred.contiguous().view(-1, 3).max(1)[1] 50 | pred = pred.cpu().data.numpy() 51 | label = label.view(-1).data.numpy() 52 | total_correct += np.sum(pred == label) 53 | total_seen += (B * args.ntriangle) 54 | pred = pred.reshape((B, args.ntriangle)) 55 | 56 | for i in range(B): 57 | np.save(os.path.join(output_dir, '%s.npy' % model_ids[i]), pred[i]) 58 | 59 | print(total_correct / float(total_seen)) 60 | 61 | if __name__ == '__main__': 62 | args = parse_args() 63 | main(args) -------------------------------------------------------------------------------- /network/train.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | from dataset import TrainDataset 4 | import torch 5 | import datetime 6 | import logging 7 | from pathlib import Path 8 | import sys 9 | import importlib 10 | import shutil 11 | from tqdm import tqdm 12 | import numpy as np 13 | from utils import weights_init, bn_momentum_adjust 14 | 15 | def parse_args(): 16 | parser = argparse.ArgumentParser('Model') 17 | parser.add_argument('--batch_size', type=int, default=30, help='Batch Size during training [default: 16]') 18 | parser.add_argument('--epoch', default=50, type=int, help='Epoch to run [default: 50]') 19 | parser.add_argument('--learning_rate', default=0.001, type=float, help='Initial learning rate [default: 0.001]') 20 | parser.add_argument('--gpu', type=str, default='0', help='GPU to use [default: GPU 0]') 21 | parser.add_argument('--log_dir', type=str, default=None, help='Log path [default: None]') 22 | parser.add_argument('--dataset_dir', type=str, default="../data/all/", help='Dataset path') 23 | parser.add_argument('--decay_rate', type=float, default=1e-4, help='weight decay [default: 1e-4]') 24 | parser.add_argument('--npoint', type=int, default=12800, help='Point Number [default: 12800]') 25 | parser.add_argument('--ntriangle', type=int, default=25000, help='Triangle Number [default: 100000]') 26 | parser.add_argument('--step_size', type=int, default=10, help='Decay step for lr decay [default: every 20 epochs]') 27 | parser.add_argument('--lr_decay', type=float, default=0.8, help='Decay rate for lr decay [default: 0.5]') 28 | return parser.parse_args() 29 | 30 | def main(args): 31 | def log_string(str): 32 | logger.info(str) 33 | print(str) 34 | 35 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu 36 | timestr = str(datetime.datetime.now().strftime('%m-%d_%H-%M')) 37 | experiment_dir = Path('../log/') 38 | experiment_dir.mkdir(exist_ok=True) 39 | if args.log_dir is None: 40 | experiment_dir = experiment_dir.joinpath(timestr) 41 | else: 42 | experiment_dir = experiment_dir.joinpath(args.log_dir) 43 | experiment_dir.mkdir(exist_ok=True) 44 | checkpoints_dir = experiment_dir.joinpath('checkpoints/') 45 | 
checkpoints_dir.mkdir(exist_ok=True) 46 | shutil.copy('dataset.py', str(experiment_dir)) 47 | shutil.copy('network.py', str(experiment_dir)) 48 | shutil.copy('train.py', str(experiment_dir)) 49 | shutil.copy('utils.py', str(experiment_dir)) 50 | 51 | args = parse_args() 52 | logger = logging.getLogger("Model") 53 | logger.setLevel(logging.INFO) 54 | formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') 55 | file_handler = logging.FileHandler('%s/log.txt' % experiment_dir) 56 | file_handler.setLevel(logging.INFO) 57 | file_handler.setFormatter(formatter) 58 | logger.addHandler(file_handler) 59 | log_string('PARAMETER ...') 60 | log_string(args) 61 | 62 | TRAIN_DATASET = TrainDataset(root = args.dataset_dir, npoints = args.npoint, ntriangles = args.ntriangle, split = 'train') 63 | trainDataLoader = torch.utils.data.DataLoader(TRAIN_DATASET, batch_size = args.batch_size, shuffle = True, num_workers = 12) 64 | VAL_DATASET = TrainDataset(root = args.dataset_dir, npoints = args.npoint, ntriangles = args.ntriangle, split = 'val') 65 | valDataLoader = torch.utils.data.DataLoader(VAL_DATASET, batch_size = args.batch_size, shuffle = False, num_workers = 12) 66 | log_string("The number of train data is: %d" % len(TRAIN_DATASET)) 67 | log_string("The number of val data is: %d" % len(VAL_DATASET)) 68 | 69 | MODEL = importlib.import_module("network") 70 | classifier = MODEL.get_model() 71 | criterion = MODEL.get_loss() 72 | classifier = torch.nn.DataParallel(classifier).cuda() 73 | criterion = torch.nn.DataParallel(criterion).cuda() 74 | 75 | try: 76 | checkpoint = torch.load(checkpoints_dir.joinpath('best_model.pth')) 77 | start_epoch = checkpoint['epoch'] 78 | best_acc = checkpoint['test_acc'] 79 | classifier.load_state_dict(checkpoint['model_state_dict']) 80 | log_string('Pretrained model loaded...') 81 | except: 82 | log_string('No existing model, starting training from scratch...') 83 | start_epoch = 0 84 | best_acc = 0 85 | classifier = classifier.apply(weights_init) 86 | 87 | optimizer = torch.optim.Adam( 88 | classifier.parameters(), 89 | lr=args.learning_rate, 90 | betas=(0.9, 0.999), 91 | eps=1e-08, 92 | weight_decay=args.decay_rate 93 | ) 94 | 95 | 96 | LEARNING_RATE_CLIP = 1e-5 97 | MOMENTUM_ORIGINAL = 0.1 98 | MOMENTUM_DECCAY = 0.5 99 | MOMENTUM_DECCAY_STEP = args.step_size 100 | 101 | global_epoch = 0 102 | 103 | for epoch in range(start_epoch, args.epoch): 104 | log_string('Epoch %d (%d/%s):' % (global_epoch + 1, epoch + 1, args.epoch)) 105 | 106 | lr = max(args.learning_rate * (args.lr_decay ** (epoch // args.step_size)), LEARNING_RATE_CLIP) 107 | log_string('Learning rate:%f' % lr) 108 | for param_group in optimizer.param_groups: 109 | param_group['lr'] = lr 110 | momentum = MOMENTUM_ORIGINAL * (MOMENTUM_DECCAY ** (epoch // MOMENTUM_DECCAY_STEP)) 111 | if momentum < 0.01: 112 | momentum = 0.01 113 | log_string('BN momentum updated to: %f' % momentum) 114 | classifier = classifier.apply(lambda x: bn_momentum_adjust(x,momentum)) 115 | 116 | classifier = classifier.train() 117 | 118 | acc_buffer = [] 119 | loss_2_buffer = [] 120 | loss_3_buffer = [] 121 | loss_buffer = [] 122 | 123 | for i, (pc, vertex_idx, label) in tqdm(enumerate(trainDataLoader), total = len(trainDataLoader), smoothing = 0.9): 124 | pc, vertex_idx, label = pc.cuda(), vertex_idx.cuda(), label.cuda() 125 | B, _, _ = pc.size() 126 | optimizer.zero_grad() 127 | 128 | pred = classifier(pc, vertex_idx) 129 | pred = pred.contiguous().view(-1, 3) 130 | label = label.view(-1) 131 | pred_choice = 
pred.data.max(1)[1] 132 | correct = pred_choice.eq(label.data).sum().cpu() 133 | acc_buffer.append(correct.item() / (B * args.ntriangle)) 134 | loss_2, loss_3 = criterion(pred, label) 135 | loss = (loss_2 + loss_3).mean() 136 | loss.backward() 137 | optimizer.step() 138 | loss_2_buffer.append(loss_2.mean().cpu().item()) 139 | loss_3_buffer.append(loss_3.mean().cpu().item()) 140 | loss_buffer.append(loss.cpu().item()) 141 | 142 | train_acc = np.mean(acc_buffer) 143 | log_string('Train accuracy: %.5f' % train_acc) 144 | log_string('Train loss: %.5f' % np.mean(loss_buffer)) 145 | log_string('Train loss_2: %.5f' % np.mean(loss_2_buffer)) 146 | log_string('Train loss_3: %.5f' % np.mean(loss_3_buffer)) 147 | 148 | 149 | with torch.no_grad(): 150 | classifier = classifier.eval() 151 | acc_buffer = [] 152 | loss_2_buffer = [] 153 | loss_3_buffer = [] 154 | loss_buffer = [] 155 | 156 | for i, (pc, vertex_idx, label) in tqdm(enumerate(valDataLoader), total = len(valDataLoader), smoothing = 0.9): 157 | pc, vertex_idx, label = pc.cuda(), vertex_idx.cuda(), label.cuda() 158 | B, _, _ = pc.size() 159 | pred = classifier(pc, vertex_idx) 160 | pred = pred.contiguous().view(-1, 3) 161 | label = label.view(-1) 162 | pred_choice = pred.data.max(1)[1] 163 | correct = pred_choice.eq(label.data).sum().cpu() 164 | acc_buffer.append(correct.item() / (B * args.ntriangle)) 165 | loss_2, loss_3 = criterion(pred, label) 166 | loss = (loss_2 + loss_3).mean() 167 | loss_2_buffer.append(loss_2.mean().cpu().item()) 168 | loss_3_buffer.append(loss_3.mean().cpu().item()) 169 | loss_buffer.append(loss.cpu().item()) 170 | 171 | test_acc = np.mean(acc_buffer) 172 | log_string('Val accuracy: %.5f' % test_acc) 173 | log_string('Val loss: %.5f' % np.mean(loss_buffer)) 174 | log_string('Val loss_2: %.5f' % np.mean(loss_2_buffer)) 175 | log_string('Val loss_3: %.5f' % np.mean(loss_3_buffer)) 176 | 177 | 178 | if (test_acc >= best_acc): 179 | best_acc = test_acc 180 | log_string('Saving model...') 181 | savepath = str(checkpoints_dir) + '/best_model.pth' 182 | state = { 183 | 'epoch': epoch, 184 | 'train_acc': train_acc, 185 | 'test_acc': test_acc, 186 | 'model_state_dict': classifier.state_dict(), 187 | 'optimizer_state_dict': optimizer.state_dict(), 188 | } 189 | torch.save(state, savepath) 190 | 191 | log_string('Saving model....') 192 | savepath = str(checkpoints_dir) + '/last_model.pth' 193 | state = { 194 | 'epoch': epoch, 195 | 'train_acc': train_acc, 196 | 'test_acc': test_acc, 197 | 'model_state_dict': classifier.state_dict(), 198 | 'optimizer_state_dict': optimizer.state_dict(), 199 | } 200 | torch.save(state, savepath) 201 | 202 | log_string('Best accuracy is: %.5f'%best_acc) 203 | global_epoch += 1 204 | 205 | if __name__ == '__main__': 206 | args = parse_args() 207 | main(args) -------------------------------------------------------------------------------- /network/utils.py: -------------------------------------------------------------------------------- 1 | import torch 2 | 3 | def weights_init(m): 4 | classname = m.__class__.__name__ 5 | if classname.find('Conv2d') != -1: 6 | torch.nn.init.xavier_normal_(m.weight.data) 7 | torch.nn.init.constant_(m.bias.data, 0.0) 8 | elif classname.find('Linear') != -1: 9 | torch.nn.init.xavier_normal_(m.weight.data) 10 | torch.nn.init.constant_(m.bias.data, 0.0) 11 | 12 | def bn_momentum_adjust(m, momentum): 13 | if isinstance(m, torch.nn.BatchNorm2d) or isinstance(m, torch.nn.BatchNorm1d): 14 | m.momentum = momentum 
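These two helpers are driven through `Module.apply`, which is how `train.py` uses them: Xavier initialization once at startup, and a decayed batch-norm momentum at the start of every epoch. A minimal sketch of that pattern, assuming it is run from `network/`; the stand-in model and loop skeleton are hypothetical, while the decay constants are the ones `train.py` uses:

```python
import torch.nn as nn
from utils import weights_init, bn_momentum_adjust  # run from network/

# stand-in for the real classifier, small enough to run anywhere
model = nn.Sequential(nn.Linear(64, 128), nn.BatchNorm1d(128))
model.apply(weights_init)  # Xavier init for Conv2d/Linear layers

MOMENTUM_ORIGINAL, MOMENTUM_DECCAY, STEP = 0.1, 0.5, 10  # as in train.py
for epoch in range(50):
    # halve the BN momentum every STEP epochs, clamped at 0.01
    momentum = max(MOMENTUM_ORIGINAL * (MOMENTUM_DECCAY ** (epoch // STEP)), 0.01)
    model.apply(lambda m: bn_momentum_adjust(m, momentum))
    # ... one epoch of training / validation ...
```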
-------------------------------------------------------------------------------- /postprocess/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | set(CMAKE_CXX_FLAGS "-std=c++11 -O2") 2 | add_executable(postprocess postprocess.cpp) -------------------------------------------------------------------------------- /postprocess/main.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import plyfile 4 | import numpy as np 5 | import random 6 | import pickle 7 | 8 | def parse_args(): 9 | parser = argparse.ArgumentParser('Model') 10 | parser.add_argument('--pickle_file', type=str, default='../data/demo/597cb92a5bfb580eed98cca8f0ccd5f7.p', help='preprocessed pickle file') 11 | parser.add_argument('--pred_dir', type=str, default='../log/shapenet_pretrained/test_demo', help='dir of predicted labels') 12 | parser.add_argument('--output', type=str, default='../log/shapenet_pretrained/test_demo/output_mesh/597cb92a5bfb580eed98cca8f0ccd5f7.ply', help='output ply file') 13 | parser.add_argument('--log_dir', type=str, default=None, help='log dir of intermediate files') 14 | parser.add_argument('--ntriangle', type=int, default=300000, help='Size of candidates per pass per shape') 15 | return parser.parse_args() 16 | 17 | def merge(args, model_id, pred_txt): 18 | preprocessed_data = pickle.load(open(args.pickle_file, 'rb')) 19 | pc = preprocessed_data['pc'] 20 | idx = preprocessed_data['vertex_idx'] 21 | n = pc.shape[0] 22 | m = idx.shape[0] 23 | 24 | pred = [] 25 | for i in range((m + args.ntriangle - 1) // args.ntriangle): 26 | pred.append(np.load(os.path.join(args.pred_dir, '%s_%d.npy'%(model_id, i)))) 27 | pred = np.concatenate(pred, axis = 0) 28 | pred = pred[: m] 29 | 30 | with open(pred_txt, 'w') as f: 31 | f.write('%d\n' % n) 32 | for i in range(n): 33 | f.write('%f %f %f\n' % (pc[i][0], pc[i][1], pc[i][2])) 34 | f.write('%d\n' % m) 35 | for i in range(m): 36 | f.write("%d %d %d %d\n" % (idx[i][0], idx[i][1], idx[i][2], pred[i])) 37 | 38 | if __name__ == '__main__': 39 | args = parse_args() 40 | model_id = os.path.basename(args.pickle_file).split('.p')[0] 41 | args.log_dir = "log/%s" % model_id[:6] 42 | os.makedirs(args.log_dir, exist_ok = True) 43 | 44 | print("Merging npy files!") 45 | pred_txt = os.path.join(args.log_dir, 'pred.txt') 46 | merge(args, model_id, pred_txt) 47 | 48 | print("Postprocessing!") 49 | os.system("./build/postprocess %s %s" % (pred_txt, args.output)) 50 | 51 | print("Finish!") 52 | -------------------------------------------------------------------------------- /postprocess/postprocess.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include "../annoy/src/kissrandom.h" 3 | #include "../annoy/src/annoylib.h" 4 | #include 5 | #include 6 | #include 7 | #include 8 | #define N 555555 9 | using namespace std; 10 | 11 | std::vector knn_id[N]; 12 | std::vector knn_dis[N]; 13 | int n_pc, n_candidates; 14 | double average_nn_dis = 0; 15 | map, int> edge_cnt; 16 | int last_visit_idx[N * 10]; 17 | int visit_idx = 0; 18 | AnnoyIndex pc_knn = AnnoyIndex(3); 19 | 20 | struct Point { 21 | double x, y, z; 22 | Point() {}; 23 | Point (double _x, double _y, double _z) { 24 | x = _x; y = _y; z = _z; 25 | }; 26 | Point operator - (const Point& v) const { 27 | return Point(x - v.x, y - v.y, z - v.z);} 28 | 29 | Point operator + (const Point& v) const { 30 | return Point(x + v.x, y + v.y, z + v.z);} 31 | 32 | Point operator 
* (const double t) const { 33 | return Point(x * t, y * t, z * t);} 34 | 35 | double length() { 36 | return sqrt(x * x + y * y + z * z);} 37 | 38 | void normalize() { 39 | double l = length(); 40 | x /= l; y /= l; z /= l;} 41 | 42 | double dot(const Point& v) const { 43 | return x * v.x + y * v.y + z * v.z;} 44 | 45 | Point cross(const Point& v) const { 46 | return Point( 47 | y * v.z - z * v.y, 48 | z * v.x - x * v.z, 49 | x * v.y - y * v.x);} 50 | }pc[N]; 51 | 52 | struct Face { 53 | int a, b, c; 54 | int label, id; 55 | bool in_final_mesh; 56 | double l; 57 | Face() {}; 58 | Face (int _a, int _b, int _c) { 59 | a = _a; b = _b; c = _c; 60 | }; 61 | bool operator < (const Face& rhs) const { 62 | if (label == rhs.label) 63 | return l < rhs.l; 64 | return label < rhs.label; 65 | }; 66 | } candidates[N * 10]; 67 | vectorvertex_faces[N]; 68 | 69 | 70 | bool check_inside(Point A, Point B, Point P, Point Q) { 71 | // check whether triangle ABQ contain triangle ABP 72 | Point normal = (Q - A).cross(B - A); 73 | normal.normalize(); 74 | double d = normal.dot(P - A); 75 | if (fabs(d) > average_nn_dis * 0.3) 76 | return false; 77 | P = P - (normal * d); 78 | 79 | double sABP = (P - A).cross(B - A).length(); 80 | double sAQP = (P - A).cross(Q - A).length(); 81 | double sBQP = (Q - B).cross(P - B).length(); 82 | double sABQ = (Q - A).cross(B - A).length(); 83 | 84 | if (fabs(sABP + sAQP + sBQP - sABQ) < 1e-5) 85 | return true; 86 | return false; 87 | } 88 | 89 | bool seg_seg_intersect(Point A, Point B, Point C, Point D, Point P, Point Q) { 90 | if (((A - C).length() < 1e-5 && (B - D).length() < 1e-5) || 91 | ((A - D).length() < 1e-5 && (B - C).length() < 1e-5)) { 92 | if (check_inside(A, B, P, Q)) 93 | return true; 94 | if (check_inside(A, B, Q, P)) 95 | return true; 96 | } 97 | 98 | // check whether segment AB and segment CD intersect 99 | Point norm = (B - A).cross(D - C); 100 | if (norm.length() < 1e-5) 101 | return false; 102 | 103 | norm.normalize(); 104 | double d = norm.dot(C - A); 105 | if (fabs(d) > average_nn_dis * 0.3) 106 | return false; 107 | 108 | if (fabs(d) > 1e-9) { 109 | A = A + norm * d; 110 | B = B + norm * d; 111 | } 112 | 113 | if ((A - C).length() < 1e-5 || (A - D).length() < 1e-5 || (B - C).length() < 1e-5 || (B - D).length() < 1e-5) 114 | return false; 115 | 116 | // area ratio 117 | Point V1 = (B - A); 118 | V1.normalize(); 119 | Point V2 = (D - C); 120 | V2.normalize(); 121 | Point V1V2 = V1.cross(V2); 122 | Point R1R2 = (C - A); 123 | 124 | double t1, t2; 125 | if (fabs(V1V2.x) > fabs(V1V2.y) && fabs(V1V2.x) > fabs(V1V2.z)) { 126 | t1 = (R1R2.cross(V2).x)/(V1V2.x); 127 | t2 = (R1R2.cross(V1).x)/(V1V2.x); 128 | } 129 | else if (fabs(V1V2.y) > fabs(V1V2.x) && fabs(V1V2.y) > fabs(V1V2.z)) { 130 | t1 = (R1R2.cross(V2).y)/(V1V2.y); 131 | t2 = (R1R2.cross(V1).y)/(V1V2.y); 132 | } 133 | else { 134 | t1 = (R1R2.cross(V2).z)/(V1V2.z); 135 | t2 = (R1R2.cross(V1).z)/(V1V2.z); 136 | } 137 | if (t1 < 1e-5 || t1 > (A - B).length() - 1e-5) 138 | return false; 139 | if (t2 < 1e-5 || t2 > (C - D).length() - 1e-5) 140 | return false; 141 | 142 | return true; 143 | } 144 | 145 | bool tri_tri_intersect(Point A, Point B, Point C, Point P, Point Q, Point R) { 146 | if (seg_seg_intersect(A, B, P, Q, C, R)) 147 | return true; 148 | if (seg_seg_intersect(A, B, Q, R, C, P)) 149 | return true; 150 | if (seg_seg_intersect(A, B, P, R, C, Q)) 151 | return true; 152 | if (seg_seg_intersect(B, C, P, Q, A, R)) 153 | return true; 154 | if (seg_seg_intersect(B, C, Q, R, A, P)) 155 | return true; 156 | if 
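/* ...and the remaining edge pairs; the two triangles intersect if any pair of edges crosses */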
(seg_seg_intersect(B, C, P, R, A, Q)) 157 | return true; 158 | if (seg_seg_intersect(A, C, P, Q, B, R)) 159 | return true; 160 | if (seg_seg_intersect(A, C, Q, R, B, P)) 161 | return true; 162 | if (seg_seg_intersect(A, C, P, R, B, Q)) 163 | return true; 164 | return false; 165 | } 166 | 167 | bool tri_mesh_intersect(int a, int b, int c) { 168 | visit_idx++; 169 | int ver[3] = {a, b, c}; 170 | for (int i = 0; i < 3; i++) 171 | for (int j = 0; j < 5; j++) 172 | for (auto face : vertex_faces[knn_id[ver[i]][j]]) { 173 | if (last_visit_idx[face.id] == visit_idx) 174 | continue; 175 | last_visit_idx[face.id] = visit_idx; 176 | if (tri_tri_intersect(pc[a], pc[b], pc[c], pc[face.a], pc[face.b], pc[face.c])) 177 | return true; 178 | } 179 | return false; 180 | } 181 | 182 | void edge_add(int a, int b) { 183 | if (a > b) 184 | swap(a, b); 185 | edge_cnt[make_pair(a, b)] += 1; 186 | return ; 187 | } 188 | 189 | bool edge_check(int a, int b) { 190 | if (a > b) 191 | swap(a, b); 192 | return edge_cnt[make_pair(a, b)] < 2; 193 | } 194 | 195 | int main(int argc, char ** argv) { 196 | string input_file = argv[1]; 197 | string output_file = argv[2]; 198 | 199 | freopen(input_file.c_str(), "r", stdin); 200 | scanf("%d", &n_pc); 201 | for (int i = 0; i < n_pc; i++) { 202 | double vec[3]; 203 | scanf("%lf%lf%lf", &vec[0], &vec[1], &vec[2]); 204 | pc[i] = Point(vec[0], vec[1], vec[2]); 205 | pc_knn.add_item(i, vec); 206 | } 207 | pc_knn.build(10); 208 | 209 | int K = 80; 210 | for (int i = 0; i < n_pc; i++) { 211 | pc_knn.get_nns_by_item(i, K + 1, -1, &knn_id[i], &knn_dis[i]); 212 | average_nn_dis += knn_dis[i][1]; 213 | 214 | for (int j = 1; j < K; j++) 215 | for (int k = j + 1; k < K; k++) { 216 | Point A = pc[i]; 217 | Point B = pc[knn_id[i][j]]; 218 | Point C = pc[knn_id[i][k]]; 219 | if (fabs((B - C).length() - (A - B).length() - (A - C).length()) < 1e-3) { 220 | edge_cnt[make_pair(knn_id[i][j], knn_id[i][k])] = 100; 221 | edge_cnt[make_pair(knn_id[i][k], knn_id[i][j])] = 100; 222 | } 223 | } 224 | } 225 | average_nn_dis /= n_pc; 226 | 227 | scanf("%d", &n_candidates); 228 | for (int i = 0; i < n_candidates; i++) { 229 | int a, b, c, label; 230 | scanf("%d %d %d %d", &a, &b, &c, &label); 231 | int ver[3] = {a, b, c}; 232 | sort(&ver[0], &ver[3]); 233 | a = ver[0]; b = ver[1]; c = ver[2]; 234 | candidates[i] = Face(a, b, c); 235 | candidates[i].label = label; 236 | candidates[i].in_final_mesh = false; 237 | Point A = pc[a]; 238 | Point B = pc[b]; 239 | Point C = pc[c]; 240 | double l1 = (A - B).length(); 241 | double l2 = (A - C).length(); 242 | double l3 = (B - C).length(); 243 | candidates[i].l = max(max(l1, l2), l3); 244 | candidates[i].id = i; 245 | Point AC = C - A; 246 | AC.normalize(); 247 | Point AB = B - A; 248 | AB.normalize(); 249 | if (AC.cross(AB).length() < 1e-3) 250 | candidates[i].label = 0; 251 | } 252 | 253 | sort(&candidates[0], &candidates[n_candidates]); 254 | 255 | int n_faces = 0; 256 | for (int i = 0; i < n_candidates; i++) { 257 | Face face = candidates[i]; 258 | 259 | if (face.label == 0) 260 | continue; 261 | 262 | int a = face.a, b = face.b, c = face.c; 263 | 264 | if (!(edge_check(a, b) && edge_check(a, c) && edge_check(b, c))) 265 | continue; 266 | 267 | if (tri_mesh_intersect(a, b, c)) 268 | continue; 269 | 270 | candidates[i].in_final_mesh = true; 271 | n_faces++; 272 | vertex_faces[a].push_back(face); 273 | vertex_faces[b].push_back(face); 274 | vertex_faces[c].push_back(face); 275 | edge_add(a, b); edge_add(a, c); edge_add(b, c); 276 | } 277 | 278 | 
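/* At this point the mesh is final: candidates were scanned in sorted order (label-1 "correct" triangles first, shorter longest-edges first), and a triangle was accepted only if each of its edges still bordered fewer than two accepted faces and it did not intersect any face accepted so far. The result is written out below as an ASCII PLY file. */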
freopen(output_file.c_str(), "w", stdout); 279 | printf("ply\n"); 280 | printf("format ascii 1.0\n"); 281 | printf("element vertex %d\n", n_pc); 282 | printf("property float x\n"); 283 | printf("property float y\n"); 284 | printf("property float z\n"); 285 | printf("element face %d\n", n_faces); 286 | printf("property list uchar int vertex_indices\n"); 287 | printf("end_header\n"); 288 | 289 | for (int i = 0; i < n_pc; i++) 290 | printf("%lf %lf %lf\n", pc[i].x, pc[i].y, pc[i].z); 291 | 292 | for (int i = 0; i < n_candidates; i++) 293 | if (candidates[i].in_final_mesh) 294 | printf("3 %d %d %d\n", candidates[i].a, candidates[i].b, candidates[i].c); 295 | return 0; 296 | } -------------------------------------------------------------------------------- /postprocess/run_demo.py: -------------------------------------------------------------------------------- 1 | import os 2 | with open("../data/demo/models.txt", "r") as f: 3 | models = [line.rstrip() for line in f.readlines()] 4 | 5 | os.makedirs("../log/shapenet_pretrained/test_demo/output_mesh/", exist_ok = True) 6 | 7 | for model in models: 8 | pickle_file = "../data/demo/%s.p" % model 9 | output_file = "../log/shapenet_pretrained/test_demo/output_mesh/%s.ply" % model 10 | os.system("python3 main.py --pickle_file %s --output %s"%(pickle_file, output_file)) -------------------------------------------------------------------------------- /preprocess_with_gt_mesh/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | set(CMAKE_CXX_FLAGS "-std=c++11 -O2") 2 | add_executable(preprocess_mesh preprocess_mesh.cpp Intersection.cpp) 3 | add_executable(sample_pc sample_pc.cpp) 4 | add_executable(calc_geo_dis calc_geo_dis.cpp Intersection.cpp) 5 | add_executable(propose_candidates propose_candidates.cpp Intersection.cpp) 6 | add_executable(calc_candidates_label calc_candidates_label.cpp) -------------------------------------------------------------------------------- /preprocess_with_gt_mesh/Intersection.cpp: -------------------------------------------------------------------------------- 1 | /********************************************************/ 2 | /* AABB-triangle overlap test code */ 3 | /* by Tomas Akenine-Möller */ 4 | /* Function: int triBoxOverlap(float boxcenter[3], */ 5 | /* float boxhalfsize[3],float triverts[3][3]); */ 6 | /* History: */ 7 | /* 2001-03-05: released the code in its first version */ 8 | /* 2001-06-18: changed the order of the tests, faster */ 9 | /* */ 10 | /* Acknowledgement: Many thanks to Pierre Terdiman for */ 11 | /* suggestions and discussions on how to optimize code. */ 12 | /* Thanks to David Hunt for finding a ">="-bug! 
*/ 13 | /********************************************************/ 14 | #ifndef INTERSECTION_H_ 15 | #define INTERSECTION_H_ 16 | 17 | #include 18 | #include 19 | 20 | #define X 0 21 | #define Y 1 22 | #define Z 2 23 | 24 | #define CROSS(dest,v1,v2) \ 25 | dest[0]=v1[1]*v2[2]-v1[2]*v2[1]; \ 26 | dest[1]=v1[2]*v2[0]-v1[0]*v2[2]; \ 27 | dest[2]=v1[0]*v2[1]-v1[1]*v2[0]; 28 | 29 | #define DOT(v1,v2) (v1[0]*v2[0]+v1[1]*v2[1]+v1[2]*v2[2]) 30 | 31 | #define SUB(dest,v1,v2) \ 32 | dest[0]=v1[0]-v2[0]; \ 33 | dest[1]=v1[1]-v2[1]; \ 34 | dest[2]=v1[2]-v2[2]; 35 | 36 | #define FINDMINMAX(x0,x1,x2,min,max) \ 37 | min = max = x0; \ 38 | if(x1max) max=x1;\ 40 | if(x2max) max=x2; 42 | 43 | int planeBoxOverlap(float normal[3],float d, float maxbox[3]) 44 | { 45 | int q; 46 | float vmin[3],vmax[3]; 47 | for(q=X;q<=Z;q++) 48 | { 49 | if(normal[q]>0.0f) 50 | { 51 | vmin[q]=-maxbox[q]; 52 | vmax[q]=maxbox[q]; 53 | } 54 | else 55 | { 56 | vmin[q]=maxbox[q]; 57 | vmax[q]=-maxbox[q]; 58 | } 59 | } 60 | if(DOT(normal,vmin)+d>0.0f) return 0; 61 | if(DOT(normal,vmax)+d>=0.0f) return 1; 62 | 63 | return 0; 64 | } 65 | 66 | 67 | /*======================== X-tests ========================*/ 68 | #define AXISTEST_X01(a, b, fa, fb) \ 69 | p0 = a*v0[Y] - b*v0[Z]; \ 70 | p2 = a*v2[Y] - b*v2[Z]; \ 71 | if(p0rad || max<-rad) return 0; 74 | 75 | #define AXISTEST_X2(a, b, fa, fb) \ 76 | p0 = a*v0[Y] - b*v0[Z]; \ 77 | p1 = a*v1[Y] - b*v1[Z]; \ 78 | if(p0rad || max<-rad) return 0; 81 | 82 | /*======================== Y-tests ========================*/ 83 | #define AXISTEST_Y02(a, b, fa, fb) \ 84 | p0 = -a*v0[X] + b*v0[Z]; \ 85 | p2 = -a*v2[X] + b*v2[Z]; \ 86 | if(p0rad || max<-rad) return 0; 89 | 90 | #define AXISTEST_Y1(a, b, fa, fb) \ 91 | p0 = -a*v0[X] + b*v0[Z]; \ 92 | p1 = -a*v1[X] + b*v1[Z]; \ 93 | if(p0rad || max<-rad) return 0; 96 | 97 | /*======================== Z-tests ========================*/ 98 | 99 | #define AXISTEST_Z12(a, b, fa, fb) \ 100 | p1 = a*v1[X] - b*v1[Y]; \ 101 | p2 = a*v2[X] - b*v2[Y]; \ 102 | if(p2rad || max<-rad) return 0; 105 | 106 | #define AXISTEST_Z0(a, b, fa, fb) \ 107 | p0 = a*v0[X] - b*v0[Y]; \ 108 | p1 = a*v1[X] - b*v1[Y]; \ 109 | if(p0rad || max<-rad) return 0; 112 | 113 | int triBoxOverlap(float boxcenter[3],float boxhalfsize[3],float triverts[3][3]) 114 | { 115 | 116 | /* use separating axis theorem to test overlap between triangle and box */ 117 | /* need to test for overlap in these directions: */ 118 | /* 1) the {x,y,z}-directions (actually, since we use the AABB of the triangle */ 119 | /* we do not even need to test these) */ 120 | /* 2) normal of the triangle */ 121 | /* 3) crossproduct(edge from tri, {x,y,z}-directin) */ 122 | /* this gives 3x3=9 more tests */ 123 | float v0[3],v1[3],v2[3]; 124 | float min,max,d,p0,p1,p2,rad,fex,fey,fez; 125 | float normal[3],e0[3],e1[3],e2[3]; 126 | 127 | /* This is the fastest branch on Sun */ 128 | /* move everything so that the boxcenter is in (0,0,0) */ 129 | SUB(v0,triverts[0],boxcenter); 130 | SUB(v1,triverts[1],boxcenter); 131 | SUB(v2,triverts[2],boxcenter); 132 | 133 | /* compute triangle edges */ 134 | SUB(e0,v1,v0); /* tri edge 0 */ 135 | SUB(e1,v2,v1); /* tri edge 1 */ 136 | SUB(e2,v0,v2); /* tri edge 2 */ 137 | 138 | /* Bullet 3: */ 139 | /* test the 9 tests first (this was faster) */ 140 | fex = fabs(e0[X]); 141 | fey = fabs(e0[Y]); 142 | fez = fabs(e0[Z]); 143 | AXISTEST_X01(e0[Z], e0[Y], fez, fey); 144 | AXISTEST_Y02(e0[Z], e0[X], fez, fex); 145 | AXISTEST_Z12(e0[Y], e0[X], fey, fex); 146 | 147 | fex = fabs(e1[X]); 148 | fey = 
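/* same three cross-axis separating-axis tests, now for triangle edge e1 */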
fabs(e1[Y]); 149 | fez = fabs(e1[Z]); 150 | AXISTEST_X01(e1[Z], e1[Y], fez, fey); 151 | AXISTEST_Y02(e1[Z], e1[X], fez, fex); 152 | AXISTEST_Z0(e1[Y], e1[X], fey, fex); 153 | 154 | fex = fabs(e2[X]); 155 | fey = fabs(e2[Y]); 156 | fez = fabs(e2[Z]); 157 | AXISTEST_X2(e2[Z], e2[Y], fez, fey); 158 | AXISTEST_Y1(e2[Z], e2[X], fez, fex); 159 | AXISTEST_Z12(e2[Y], e2[X], fey, fex); 160 | 161 | /* Bullet 1: */ 162 | /* first test overlap in the {x,y,z}-directions */ 163 | /* find min, max of the triangle each direction, and test for overlap in */ 164 | /* that direction -- this is equivalent to testing a minimal AABB around */ 165 | /* the triangle against the AABB */ 166 | 167 | /* test in X-direction */ 168 | FINDMINMAX(v0[X],v1[X],v2[X],min,max); 169 | if(min>boxhalfsize[X] || max<-boxhalfsize[X]) return 0; 170 | 171 | /* test in Y-direction */ 172 | FINDMINMAX(v0[Y],v1[Y],v2[Y],min,max); 173 | if(min>boxhalfsize[Y] || max<-boxhalfsize[Y]) return 0; 174 | 175 | /* test in Z-direction */ 176 | FINDMINMAX(v0[Z],v1[Z],v2[Z],min,max); 177 | if(min>boxhalfsize[Z] || max<-boxhalfsize[Z]) return 0; 178 | 179 | /* Bullet 2: */ 180 | /* test if the box intersects the plane of the triangle */ 181 | /* compute plane equation of triangle: normal*x+d=0 */ 182 | CROSS(normal,e0,e1); 183 | d=-DOT(normal,v0); /* plane eq: normal.x+d=0 */ 184 | if(!planeBoxOverlap(normal,d,boxhalfsize)) return 0; 185 | 186 | return 1; /* box and triangle overlaps */ 187 | } 188 | 189 | #endif -------------------------------------------------------------------------------- /preprocess_with_gt_mesh/Intersection.h: -------------------------------------------------------------------------------- 1 | #ifndef INTERSECTION_H_ 2 | #define INTERSECTION_H_ 3 | 4 | int triBoxOverlap(float boxcenter[3],float boxhalfsize[3],float triverts[3][3]); 5 | 6 | #endif 7 | -------------------------------------------------------------------------------- /preprocess_with_gt_mesh/calc_candidates_label.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #define N 5555555 6 | using namespace std; 7 | 8 | int n_pc; 9 | int n_candidates; 10 | map, double> geo_dis; 11 | 12 | struct Point { 13 | double x, y, z; 14 | Point() {}; 15 | Point (double _x, double _y, double _z) { 16 | x = _x; y = _y; z = _z; 17 | }; 18 | Point operator - (const Point& v) const { 19 | return Point(x - v.x, y - v.y, z - v.z);} 20 | 21 | Point operator + (const Point& v) const { 22 | return Point(x + v.x, y + v.y, z + v.z);} 23 | 24 | Point operator * (const double t) const { 25 | return Point(x * t, y * t, z * t);} 26 | 27 | double length() { 28 | return sqrt(x * x + y * y + z * z);} 29 | }pc[N]; 30 | 31 | struct Face { 32 | int a, b, c, type; 33 | Face() {}; 34 | Face (int _a, int _b, int _c) { 35 | a = _a; b = _b; c = _c; 36 | }; 37 | }candidates[N * 10]; 38 | 39 | double get_geo_dis(int u, int v) { 40 | if (u > v) swap(u, v); 41 | double d = geo_dis[make_pair(u, v)]; 42 | if (fabs(d) < 1e-9) 43 | return 1e9; 44 | return d; 45 | } 46 | 47 | int main(int argc, char ** argv) { 48 | string points_file = argv[1]; 49 | string geo_dis_file = argv[2]; 50 | string candidates_file = argv[3]; 51 | string label_file = argv[4]; 52 | double tau = atof(argv[5]); 53 | 54 | freopen(points_file.c_str(), "r", stdin); 55 | scanf("%d", &n_pc); 56 | for (int i = 0; i < n_pc; i++) { 57 | double vec[3]; 58 | scanf("%lf%lf%lf", &vec[0], &vec[1], &vec[2]); 59 | pc[i] = Point(vec[0], vec[1], vec[2]); 60 | } 61 | 62 
| freopen(geo_dis_file.c_str(), "r", stdin); 63 | int n_pairs; 64 | scanf("%d", &n_pairs); 65 | for (int i = 0 ; i < n_pairs; i++) { 66 | int u, v; 67 | double d; 68 | scanf("%d%d%lf", &u, &v, &d); 69 | if (u > v) 70 | swap(u, v); 71 | geo_dis[make_pair(u, v)] = d; 72 | } 73 | 74 | freopen(candidates_file.c_str(), "r", stdin); 75 | scanf("%d", &n_candidates); 76 | for (int i = 0; i < n_candidates; i++) { 77 | int a, b, c; 78 | double d; 79 | scanf("%d %d %d %lf", &a, &b, &c, &d); 80 | Point A = pc[a]; 81 | Point B = pc[b]; 82 | Point C = pc[c]; 83 | double l1 = (A - B).length(); 84 | double l2 = (A - C).length(); 85 | double l3 = (B - C).length(); 86 | double ratio = (get_geo_dis(a, b) + get_geo_dis(a, c) + get_geo_dis(b, c)) / (l1 + l2 + l3); 87 | candidates[i] = Face(a, b, c); 88 | if (ratio > tau) 89 | candidates[i].type = 0; 90 | else 91 | candidates[i].type = d < 0.0005 ? 1 : 2; 92 | } 93 | 94 | freopen(label_file.c_str(), "w", stdout); 95 | printf("%d\n", n_candidates); 96 | for (int i = 0; i < n_candidates; i++) { 97 | Face f = candidates[i]; 98 | printf("%d %d %d %d\n", f.a, f.b, f.c, f.type); 99 | } 100 | return 0; 101 | } -------------------------------------------------------------------------------- /preprocess_with_gt_mesh/calc_geo_dis.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include "../annoy/src/kissrandom.h" 3 | #include "../annoy/src/annoylib.h" 4 | #include "octree.h" 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | #define N 5555555 11 | using namespace std; 12 | 13 | struct Point { 14 | double x, y, z; 15 | Point() {}; 16 | Point (double _x, double _y, double _z) { 17 | x = _x; y = _y; z = _z; 18 | }; 19 | Point operator - (const Point& v) const { 20 | return Point(x - v.x, y - v.y, z - v.z); 21 | }; 22 | 23 | Point operator + (const Point& v) const { 24 | return Point(x + v.x, y + v.y, z + v.z); 25 | }; 26 | 27 | Point operator * (const double t) const { 28 | return Point(x * t, y * t, z * t); 29 | } 30 | 31 | double length() { 32 | return sqrt(x * x + y * y + z * z); 33 | } 34 | 35 | void normalize() { 36 | double l = length(); 37 | x /= l; y /= l; z /= l; 38 | return ; 39 | } 40 | 41 | double dot(const Point& v) const { 42 | return x * v.x + y * v.y + z * v.z; 43 | }; 44 | 45 | Point cross(const Point& v) const { 46 | return Point( 47 | y * v.z - z * v.y, 48 | z * v.x - x * v.z, 49 | x * v.y - y * v.x);}; 50 | }vertices[N], pc[N]; 51 | 52 | struct Face { 53 | int a, b, c; 54 | Face() {}; 55 | Face (int _a, int _b, int _c) { 56 | a = _a; b = _b; c = _c; 57 | }; 58 | }faces[N]; 59 | 60 | int n_vertices, n_faces, n_pc; 61 | vector face_points[N]; 62 | map, vector > edge_faces; 63 | vector vertex_faces[N]; 64 | int stack_A[N], stack_B[N], stack_C[N], stack_face[N]; 65 | Point AA[N], BB[N], CC[N]; 66 | double average_nn_dis = 0; 67 | Point S, SS, R1[N], R2[N]; 68 | int s_id, start_time; 69 | const int timeout_sec = 60; 70 | map, double> geo_dis; 71 | AnnoyIndex pc_knn = AnnoyIndex(3); 72 | 73 | bool SameSide(Point A, Point B, Point C, Point P) { 74 | Point v1 = (B - A).cross(C - A); 75 | Point v2 = (B - A).cross(P - A); 76 | return v1.dot(v2) >= 0; 77 | } 78 | 79 | bool PointinTriangle(Point A, Point B, Point C, Point P) { 80 | return SameSide(A, B, C, P) && SameSide(B, C, A, P) && SameSide(C, A, B, P); 81 | } 82 | 83 | double disPointSegment(Point P, Point A, Point B) { 84 | double lAB = (A - B).length(); 85 | double r = (P - A).dot(B - A) / (lAB * lAB); 86 | if (r < 0) 87 | return (A - 
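/* r < 0: the projection falls before A, so the closest point on segment AB is endpoint A */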
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/calc_geo_dis.cpp:
--------------------------------------------------------------------------------
#include <cstdio>
#include "../annoy/src/kissrandom.h"
#include "../annoy/src/annoylib.h"
#include "octree.h"
#include <cmath>
#include <cstdlib>
#include <ctime>
#include <vector>
#include <map>
#include <set>
#include <string>
#include <algorithm>
#define N 5555555
using namespace std;

struct Point {
    double x, y, z;
    Point() {};
    Point (double _x, double _y, double _z) {
        x = _x; y = _y; z = _z;
    };
    Point operator - (const Point& v) const {
        return Point(x - v.x, y - v.y, z - v.z);}

    Point operator + (const Point& v) const {
        return Point(x + v.x, y + v.y, z + v.z);}

    Point operator * (const double t) const {
        return Point(x * t, y * t, z * t);}

    double length() {
        return sqrt(x * x + y * y + z * z);}

    void normalize() {
        double l = length();
        x /= l; y /= l; z /= l;
        return ;
    }

    double dot(const Point& v) const {
        return x * v.x + y * v.y + z * v.z;}

    Point cross(const Point& v) const {
        return Point(
            y * v.z - z * v.y,
            z * v.x - x * v.z,
            x * v.y - y * v.x);}
}vertices[N], pc[N];

struct Face {
    int a, b, c;
    Face() {};
    Face (int _a, int _b, int _c) {
        a = _a; b = _b; c = _c;
    };
}faces[N];

int n_vertices, n_faces, n_pc;
vector<int> face_points[N];
map<pair<int, int>, vector<int> > edge_faces;
vector<int> vertex_faces[N];
int stack_A[N], stack_B[N], stack_C[N], stack_face[N];
Point AA[N], BB[N], CC[N];
double average_nn_dis = 0;
Point S, SS, R1[N], R2[N];
int s_id;
clock_t start_time;
const int timeout_sec = 60;
map<pair<int, int>, double> geo_dis;
AnnoyIndex<int, double, Euclidean, Kiss32Random> pc_knn(3);

bool SameSide(Point A, Point B, Point C, Point P) {
    Point v1 = (B - A).cross(C - A);
    Point v2 = (B - A).cross(P - A);
    return v1.dot(v2) >= 0;
}

bool PointinTriangle(Point A, Point B, Point C, Point P) {
    return SameSide(A, B, C, P) && SameSide(B, C, A, P) && SameSide(C, A, B, P);
}

double disPointSegment(Point P, Point A, Point B) {
    double lAB = (A - B).length();
    double r = (P - A).dot(B - A) / (lAB * lAB);
    if (r < 0)
        return (A - P).length();
    else if (r > 1)
        return (B - P).length();
    else
        return (A + ((B - A) * r) - P).length();
}

double disPointTriangle(Point P, Point A, Point B, Point C) {
    Point normal = (B - A).cross(C - A);
    normal.normalize();
    double t = (A - P).dot(normal);
    Point Q = P + (normal * t);
    if (PointinTriangle(A, B, C, Q))
        return (Q - P).length();
    double dAB = disPointSegment(P, A, B);
    double dAC = disPointSegment(P, A, C);
    double dBC = disPointSegment(P, B, C);
    return min(min(dAB, dAC), dBC);
}

void update_geo_dis(int u, int v, double d) {
    if (u > v)
        swap(u, v);
    if (geo_dis[make_pair(u, v)] < 1e-9 || d < geo_dis[make_pair(u, v)])
        geo_dis[make_pair(u, v)] = d;
    return ;
}

Point compute_CC(Point S, Point A, Point B, Point C, Point AA, Point BB) {
    Point norm = (AA - BB).cross(Point(0, 0, 1));
    norm.normalize();
    double lAB = (A - B).length();
    double d1 = (C - A).dot(B - A) / (lAB * lAB);
    double d2 = (C - A).cross(B - A).length() / lAB;
    Point CC = AA + (BB - AA) * d1 + norm * d2;
    if ((S - AA).cross(BB - AA).dot((BB - AA).cross(CC - AA)) < 0)
        CC = AA + (BB - AA) * d1 - norm * d2;
    return CC;
}

Point compute_PP(Point P, Point A, Point B, Point C, Point AA, Point BB, Point CC) {
    double s0 = (B - P).cross(C - P).length();
    double s1 = (A - P).cross(C - P).length();
    double s2 = (A - P).cross(B - P).length();
    double s = s0 + s1 + s2;
    return AA * (s0 / s) + BB * (s1 / s) + CC * (s2 / s);
}

int dir(Point ray1, Point ray2) {
    double tmp = ray1.cross(ray2).dot(Point(0, 0, 1));
    if (tmp > 1e-9)
        return 1;
    else if (tmp < -1e-9)
        return -1;
    else
        return 0;
}

bool in_two_ray(Point A, Point B, Point C) {
    return (dir(A, C) != -1 && dir(C, B) != -1);
}

void connect(int depth) {
    vector<int> points = face_points[stack_face[depth]];
    for (int q_id: points) {
        Point Q = compute_PP(pc[q_id], vertices[stack_A[depth]], vertices[stack_B[depth]],
                             vertices[stack_C[depth]], AA[depth], BB[depth], CC[depth]);
        Point R3 = (Q - SS);
        double d = R3.length();
        if (d > 5 * average_nn_dis)
            continue;
        R3.normalize();
        if (in_two_ray(R1[depth - 1], R2[depth - 1], R3) == false)
            continue;
        update_geo_dis(s_id, q_id, d);
    }
}

bool check_dis(int depth) {
    for (int i = 1; i < 10; i++) {
        Point p = (BB[depth] - AA[depth]) * (0.1 * i) + AA[depth];
        if ((p - SS).length() < 5 * average_nn_dis)
            return true;
    }
    return false;
}

bool update_ray(int depth) {
    Point r1 = AA[depth + 1] - SS;
    Point r2 = BB[depth + 1] - SS;
    r1.normalize();
    r2.normalize();
    if (dir(r1, r2) == -1)
        swap(r1, r2);
    if (depth == 0) {
        R1[0] = r1;
        R2[0] = r2;
        return true;
    }
    R1[depth] = dir(R1[depth - 1], r1) == 1 ? r1 : R1[depth - 1];
    R2[depth] = dir(r2, R2[depth - 1]) == 1 ? r2 : R2[depth - 1];
    if (dir(R1[depth], R2[depth]) == -1) {
        return false;
    }
    if (!(in_two_ray(R1[depth], R2[depth], r1) ||
          in_two_ray(R1[depth], R2[depth], r2) ||
          in_two_ray(r1, r2, R1[depth]) ||
          in_two_ray(r1, r2, R2[depth])))
        return false;

    if (R1[depth].dot(R2[depth]) > 0.999)
        return false;
    return true;
}

void unfold_path(int depth) {
    if (clock() - start_time > timeout_sec * CLOCKS_PER_SEC) {
        printf("Error: time out when calculating geo distances.\n");
        exit(0);
    }
    if (depth > 25)
        return ;

    for (int i = 0; i < depth; i++)
        if (stack_face[i] == stack_face[depth])
            return ;

    Point A = vertices[stack_A[depth]];
    Point B = vertices[stack_B[depth]];
    Point C = vertices[stack_C[depth]];
    if (depth == 0) {
        AA[0] = Point(0, 0, 0);
        BB[0] = Point((A - B).length(), 0, 0);
        CC[0] = compute_CC(Point(3, -1, 0), A, B, C, AA[0], BB[0]);
        SS = compute_PP(S, A, B, C, AA[0], BB[0], CC[0]);
    }
    else {
        connect(depth);
        if (!check_dis(depth))
            return ;
    }

    vector<int> adj_face_ids = stack_A[depth] < stack_C[depth] ?
        edge_faces[make_pair(stack_A[depth], stack_C[depth])] :
        edge_faces[make_pair(stack_C[depth], stack_A[depth])];
    stack_A[depth + 1] = stack_A[depth];
    stack_B[depth + 1] = stack_C[depth];
    AA[depth + 1] = AA[depth];
    BB[depth + 1] = CC[depth];
    for (int id: adj_face_ids)
        if (id != stack_face[depth]) {
            stack_face[depth + 1] = id;
            Face face = faces[id];
            if (face.a != stack_A[depth] && face.a != stack_C[depth])
                stack_C[depth + 1] = face.a;
            else if (face.b != stack_A[depth] && face.b != stack_C[depth])
                stack_C[depth + 1] = face.b;
            else
                stack_C[depth + 1] = face.c;
            CC[depth + 1] = compute_CC(BB[depth], A, C, vertices[stack_C[depth + 1]], AA[depth], CC[depth]);
            if (update_ray(depth))
                unfold_path(depth + 1);
        }

    if (depth == 0)
        return ;
    adj_face_ids = stack_B[depth] < stack_C[depth] ?
        edge_faces[make_pair(stack_B[depth], stack_C[depth])] :
        edge_faces[make_pair(stack_C[depth], stack_B[depth])];
    stack_A[depth + 1] = stack_B[depth];
    stack_B[depth + 1] = stack_C[depth];
    AA[depth + 1] = BB[depth];
    BB[depth + 1] = CC[depth];
    for (int id: adj_face_ids)
        if (id != stack_face[depth]) {
            stack_face[depth + 1] = id;
            Face face = faces[id];
            if (face.a != stack_B[depth] && face.a != stack_C[depth])
                stack_C[depth + 1] = face.a;
            else if (face.b != stack_B[depth] && face.b != stack_C[depth])
                stack_C[depth + 1] = face.b;
            else
                stack_C[depth + 1] = face.c;
            CC[depth + 1] = compute_CC(AA[depth], B, C, vertices[stack_C[depth + 1]], BB[depth], CC[depth]);
            if (update_ray(depth))
                unfold_path(depth + 1);
        }
    return ;
}

int main(int argc, char ** argv) {
    string pc_file = argv[1];
    string mesh_file = argv[2];
    string geo_dis_file = argv[3];

    freopen(pc_file.c_str(), "r", stdin);
    scanf("%d", &n_pc);
    for (int i = 0; i < n_pc; i++) {
        double vec[3];
        scanf("%lf%lf%lf", &vec[0], &vec[1], &vec[2]);
        pc[i] = Point(vec[0], vec[1], vec[2]);
        pc_knn.add_item(i, vec);
    }
    pc_knn.build(10);

    for (int i = 0; i < n_pc; i++) {
        std::vector<int> closest;
        std::vector<double> dis;
        pc_knn.get_nns_by_item(i, 5, -1, &closest, &dis);
        average_nn_dis += dis[1];
    }
    average_nn_dis /= n_pc;

    freopen(mesh_file.c_str(), "r", stdin);
    scanf("%d%d", &n_vertices, &n_faces);
    double coor_min = 1e9, coor_max = -1e9;
    for (int i = 0; i < n_vertices; i++) {
        double x, y, z;
        scanf("%lf%lf%lf", &x, &y, &z);
        vertices[i] = Point(x, y, z);
        coor_max = max(coor_max, x); coor_max = max(coor_max, y); coor_max = max(coor_max, z);
        coor_min = min(coor_min, x); coor_min = min(coor_min, y); coor_min = min(coor_min, z);
    }

    double range_l[3] = {coor_min - 0.1, coor_min - 0.1, coor_min - 0.1};
    double range_r[3] = {coor_max + 0.1, coor_max + 0.1, coor_max + 0.1};
    Octree *root = new Octree(range_l, range_r, average_nn_dis * 2);

    for (int i = 0; i < n_faces; i++) {
        int a, b, c;
        scanf("%d%d%d", &a, &b, &c);
        faces[i] = Face(a, b, c);
        float triangle[3][3];
        triangle[0][0] = vertices[a].x; triangle[0][1] = vertices[a].y; triangle[0][2] = vertices[a].z;
        triangle[1][0] = vertices[b].x; triangle[1][1] = vertices[b].y; triangle[1][2] = vertices[b].z;
        triangle[2][0] = vertices[c].x; triangle[2][1] = vertices[c].y; triangle[2][2] = vertices[c].z;
        root->insert(triangle, i);
    }

    for (int i = 0; i < n_pc; i++) {
        double query_l[3], query_r[3];
        Point p = pc[i];
        query_l[0] = p.x; query_l[1] = p.y; query_l[2] = p.z;
        query_r[0] = p.x; query_r[1] = p.y; query_r[2] = p.z;
        set<int> face_id;
        root->query(query_l, query_r, &face_id);
        double min_dis = 1e9;
        int min_id = -1;
        for (int id: face_id) {
            Face face = faces[id];
            Point A = vertices[face.a], B = vertices[face.b], C = vertices[face.c];
            double dis = disPointTriangle(p, A, B, C);
            if (dis < min_dis) {
                min_dis = dis;
                min_id = id;
            }
        }
        face_points[min_id].push_back(i);
    }

    for (int i = 0; i < n_faces; i++) {
        int ver[3];
        ver[0] = faces[i].a;
        ver[1] = faces[i].b;
        ver[2] = faces[i].c;
        sort(&ver[0], &ver[3]);
        edge_faces[make_pair(ver[0], ver[1])].push_back(i);
        edge_faces[make_pair(ver[0], ver[2])].push_back(i);
        edge_faces[make_pair(ver[1], ver[2])].push_back(i);
        vertex_faces[ver[0]].push_back(i);
        vertex_faces[ver[1]].push_back(i);
        vertex_faces[ver[2]].push_back(i);
    }

    start_time = clock();
    for (int i = 0; i < n_faces; i++) {
        if (clock() - start_time > timeout_sec * CLOCKS_PER_SEC) {
            printf("Error: time out when calculating geo distances.\n");
            exit(0);
        }
        int size = face_points[i].size();
        if (size == 0)
            continue;
        for (int j = 0; j < size; j++)
            for (int k = j + 1; k < size; k++) {
                Point U = pc[face_points[i][j]];
                Point V = pc[face_points[i][k]];
                double d = (U - V).length();
                if (d < 5 * average_nn_dis)
                    update_geo_dis(face_points[i][j], face_points[i][k], d);
            }
        for (int j = 0; j < size; j++) {
            s_id = face_points[i][j];
            S = pc[s_id];
            int a = faces[i].a, b = faces[i].b, c = faces[i].c;
            stack_face[0] = i;
            stack_A[0] = a; stack_B[0] = b; stack_C[0] = c;
            unfold_path(0);
            stack_A[0] = c; stack_B[0] = a; stack_C[0] = b;
            unfold_path(0);
            stack_A[0] = b; stack_B[0] = c; stack_C[0] = a;
            unfold_path(0);
        }
    }

    for (int i = 0; i < n_vertices; i++) {
        for (int j = 0; j < vertex_faces[i].size(); j++)
            for (int k = j + 1; k < vertex_faces[i].size(); k++) {
                int f0 = vertex_faces[i][j];
                int f1 = vertex_faces[i][k];
                for (int u: face_points[f0])
                    for (int v: face_points[f1]) {
                        double d = (pc[u] - vertices[i]).length() + (pc[v] - vertices[i]).length();
                        if (d < 5 * average_nn_dis)
                            update_geo_dis(u, v, d);
                    }
            }
    }

    freopen(geo_dis_file.c_str(), "w", stdout);
    printf("%d\n", (int)geo_dis.size());
    for (auto p: geo_dis)
        printf("%d %d %lf\n", p.first.first, p.first.second, p.second);
    return 0;
}
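calc_geo_dis.cpp estimates geodesic distances by unfolding chains of adjacent mesh triangles into a common 2D plane (compute_CC places each new vertex, update_ray narrows the visibility wedge) and measuring straight lines there; compute_PP transfers a 3D point into the unfolded plane through its barycentric (area) coordinates. A minimal numpy sketch of that barycentric transfer, illustrative only:

```
import numpy as np

def barycentric_transfer(P, A, B, C, AA, BB, CC):
    """Map a 3D point P inside triangle ABC to the unfolded 2D triangle
    AA/BB/CC while preserving its barycentric coordinates (as compute_PP does)."""
    def double_area(u, v, w):
        return np.linalg.norm(np.cross(v - u, w - u))
    s0 = double_area(P, B, C)          # weight of corner A
    s1 = double_area(P, A, C)          # weight of corner B
    s2 = double_area(P, A, B)          # weight of corner C
    s = s0 + s1 + s2
    return (AA * s0 + BB * s1 + CC * s2) / s
```

Because barycentric coordinates are invariant under the rigid unfolding, the straight-line distance between two transferred points in the plane is an upper bound on (and usually a good estimate of) the geodesic distance along the unfolded strip of triangles.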
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/main.py:
--------------------------------------------------------------------------------
import argparse
import os
import numpy as np
import pickle
from plyfile import PlyData

def parse_args():
    parser = argparse.ArgumentParser('Model')
    parser.add_argument('--input', type=str, default='../data/demo/gt_mesh/597cb92a5bfb580eed98cca8f0ccd5f7.ply', help='input ply file')
    parser.add_argument('--output', type=str, default='../data/demo/597cb92a5bfb580eed98cca8f0ccd5f7.p', help='output pickle file')
    parser.add_argument('--log_dir', type=str, default='log_d60ec6', help='log dir of intermediate files')
    parser.add_argument('--K', default=50, type=int, help='number of nearest neighbors when proposing candidates')
    parser.add_argument('--tau', default=1.3, type=float, help='threshold used to filter out incorrect candidates')
    return parser.parse_args()

def output_mesh_txt(out_file, plydata):
    try:
        x = plydata['vertex']['x']
        y = plydata['vertex']['y']
        z = plydata['vertex']['z']
        indices = plydata['face']['vertex_indices']
    except Exception:
        print("Error: failed to parse the input file.")
        exit()
    n = len(x)
    m = len(indices)
    if n > 100000 or m > 500000:
        print("Error: the mesh file is too large.")
        exit()
    if n == 0 or m == 0:
        print("Error: the mesh should contain at least one vertex and one face.")
        exit()
    with open(out_file, 'w') as f:
        f.write("%d\n%d\n" % (n, m))
        for i in range(n):
            f.write("%f %f %f\n" % (x[i], y[i], z[i]))
        for i in range(m):
            f.write("%d %d %d\n" % (indices[i][0], indices[i][1], indices[i][2]))

def resample(pc, n):
    idx = np.arange(pc.shape[0])
    if idx.shape[0] < n:
        idx = np.concatenate([idx, np.random.randint(pc.shape[0], size = n - pc.shape[0])])
    return pc[idx[:n]]

def gen_pickle_file(label_txt, pc_txt, pickle_file):
    with open(label_txt, 'r') as f:
        lines = f.readlines()
        n = int(lines[0])
        vertices = np.zeros((n, 3), dtype = np.int16)
        flag = np.zeros(n, dtype = np.int8)
        for j in range(n):
            vertices[j][0] = int(lines[j + 1].split()[0])
            vertices[j][1] = int(lines[j + 1].split()[1])
            vertices[j][2] = int(lines[j + 1].split()[2])
            flag[j] = int(lines[j + 1].split()[3])

    with open(pc_txt, 'r') as f:
        lines = f.readlines()
        n = int(lines[0])
        pc = np.zeros((n, 3))
        for j in range(n):
            pc[j][0] = float(lines[j + 1].split()[0])
            pc[j][1] = float(lines[j + 1].split()[1])
            pc[j][2] = float(lines[j + 1].split()[2])
    pc = resample(pc, 12800)
    gt = {'pc': pc, 'vertex_idx': vertices, 'label': flag}
    pickle.dump(gt, open(pickle_file, 'wb'))


if __name__ == '__main__':
    args = parse_args()
    args.log_dir = "log/%s" % os.path.basename(args.input)[:6]
    os.makedirs(args.log_dir, exist_ok = True)

    print("Loading input mesh!")
    plydata = PlyData.read(args.input)
    mesh_txt = os.path.join(args.log_dir, "mesh.txt")
    output_mesh_txt(mesh_txt, plydata)

    print("Preprocessing input mesh!")
    new_mesh_txt = os.path.join(args.log_dir, "new_mesh.txt")
    os.system("./build/preprocess_mesh %s %s"%(mesh_txt, new_mesh_txt))

    print("Sampling point cloud (12000~12800 points, using binary search to determine radius)!")
    pc_txt = os.path.join(args.log_dir, "pc.txt")
    os.system("./build/sample_pc %s %s 12000 12800"%(new_mesh_txt, pc_txt))

    print("Calculating geodesic distances (may take up to 1 minute)!")
    geo_dis_txt = os.path.join(args.log_dir, "geo_dis.txt")
    os.system("./build/calc_geo_dis %s %s %s"%(pc_txt, new_mesh_txt, geo_dis_txt))

    print("Proposing candidates and calculating distances to the gt mesh!")
    candidates_txt = os.path.join(args.log_dir, "candidates.txt")
    os.system("./build/propose_candidates %s %s %s %d"%(pc_txt, new_mesh_txt, candidates_txt, args.K))

    print("Calculating candidates' labels!")
    label_txt = os.path.join(args.log_dir, "label.txt")
    os.system("./build/calc_candidates_label %s %s %s %s %f"%(pc_txt, geo_dis_txt, candidates_txt, label_txt, args.tau))

    print("Generating pickle file!")
    pickle_file = args.output
    gen_pickle_file(label_txt, pc_txt, pickle_file)
    print("Finished!")
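The resulting pickle holds the resampled point cloud, the candidate triangles as vertex indices, and their labels. A quick way to inspect one of the demo pickles (the path below is the default from the script above):

```
import pickle
import numpy as np

with open("../data/demo/597cb92a5bfb580eed98cca8f0ccd5f7.p", "rb") as f:
    gt = pickle.load(f)

print(gt['pc'].shape)                  # (12800, 3) resampled point cloud
print(gt['vertex_idx'].shape)          # (m, 3) int16 indices into pc
labels = gt['label'].astype(np.int64)
print(np.bincount(labels + 1))         # counts of -1/0/1/2 (-1 only in pc-only pickles)
```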
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/octree.h:
--------------------------------------------------------------------------------
#ifndef OCTREE_H_
#define OCTREE_H_

#include <set>
#include <cstddef>
#include "Intersection.h"

class Octree {
public:
    double range_l[3], range_r[3];
    double l_min;
    float box_center[3], box_half_size[3];
    std::set<int> face_ids;
    Octree *children[8];
    bool is_leaf;
    Octree() {}
    Octree(double _range_l[3], double _range_r[3], double _l_min) {
        for (int i = 0; i < 3; i++) {
            range_l[i] = _range_l[i];
            range_r[i] = _range_r[i];
            box_center[i] = (range_l[i] + range_r[i]) / 2;
            box_half_size[i] = (range_r[i] - range_l[i]) / 2;
        }
        l_min = _l_min;
        for (int i = 0; i < 8; i++) children[i] = NULL;
        is_leaf = (range_r[0] - range_l[0]) < l_min && (range_r[1] - range_l[1]) < l_min && (range_r[2] - range_l[2]) < l_min;
    }
    void insert(float triangle[3][3], int id) {
        if (triBoxOverlap(box_center, box_half_size, triangle) == 0)
            return ;
        face_ids.insert(id);
        if (is_leaf)
            return ;
        for (int i = 0; i < 8; i++) {
            if (children[i] == NULL) {
                double _range_l[3], _range_r[3];
                for (int j = 0; j < 3; j++)
                    if (i & (1 << j)) {
                        _range_l[j] = range_l[j];
                        _range_r[j] = (range_l[j] + range_r[j]) / 2;
                    }
                    else {
                        _range_l[j] = (range_l[j] + range_r[j]) / 2;
                        _range_r[j] = range_r[j];
                    }
                children[i] = new Octree(_range_l, _range_r, l_min);
            }
            children[i]->insert(triangle, id);
        }
    }
    void query(double query_l[3], double query_r[3], std::set<int> *ans) {
        if ((query_l[0] - l_min < range_l[0] && query_r[0] + l_min > range_r[0] &&
             query_l[1] - l_min < range_l[1] && query_r[1] + l_min > range_r[1] &&
             query_l[2] - l_min < range_l[2] && query_r[2] + l_min > range_r[2])) {
            for (int face_id: face_ids)
                ans->insert(face_id);
            return ;
        }
        for (int i = 0; i < 8; i++)
            if (children[i] != NULL) {
                if (query_l[0] - l_min > children[i]->range_r[0] || query_r[0] + l_min < children[i]->range_l[0] ||
                    query_l[1] - l_min > children[i]->range_r[1] || query_r[1] + l_min < children[i]->range_l[1] ||
                    query_l[2] - l_min > children[i]->range_r[2] || query_r[2] + l_min < children[i]->range_l[2])
                    continue;
                children[i]->query(query_l, query_r, ans);
            }
        return ;
    }
};

#endif
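octree.h subdivides space lazily: a triangle id is stored at every node whose box the triangle overlaps, and a query only descends into children whose padded boxes intersect the query range. A toy Python analogue of the same structure (illustrative; it prunes with plain AABB tests instead of the exact triBoxOverlap, and reports ids at undivided cells rather than using the C++ containment shortcut):

```
import numpy as np

class Octree:
    """Toy analogue of octree.h; cells smaller than l_min stop subdividing."""
    def __init__(self, lo, hi, l_min):
        self.lo, self.hi, self.l_min = np.asarray(lo, float), np.asarray(hi, float), l_min
        self.ids, self.children = set(), None
        self.is_leaf = bool(np.all(self.hi - self.lo < l_min))

    def insert(self, tri_lo, tri_hi, tri_id):
        if np.any(tri_hi < self.lo) or np.any(tri_lo > self.hi):
            return                       # triangle bounding box misses this cell
        self.ids.add(tri_id)
        if self.is_leaf:
            return
        if self.children is None:        # subdivide lazily on first hit
            mid = (self.lo + self.hi) / 2
            self.children = []
            for m in range(8):
                sel = np.array([(m >> j) & 1 for j in range(3)], dtype=bool)
                self.children.append(Octree(np.where(sel, self.lo, mid),
                                            np.where(sel, mid, self.hi), self.l_min))
        for ch in self.children:
            ch.insert(tri_lo, tri_hi, tri_id)

    def query(self, q_lo, q_hi, out):
        q_lo, q_hi = np.asarray(q_lo, float), np.asarray(q_hi, float)
        pad = self.l_min                 # same padding as the C++ query
        if np.any(q_hi + pad < self.lo) or np.any(q_lo - pad > self.hi):
            return
        if self.children is None:
            out |= self.ids              # undivided cell: report everything here
            return
        for ch in self.children:
            ch.query(q_lo, q_hi, out)

# usage: root = Octree([-0.6] * 3, [0.6] * 3, 0.01)
#        root.insert(tri.min(0), tri.max(0), i); hits = set(); root.query(p, p, hits)
```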
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/preprocess_mesh.cpp:
--------------------------------------------------------------------------------
#include <cstdio>
#include "octree.h"
#include <cmath>
#include <vector>
#include <map>
#include <set>
#include <tuple>
#include <string>
#include <algorithm>
#define N 5555555
using namespace std;

int n_vertices;
int n_faces;
Octree *root;
bool face_valid[N];
int vertex_mapping[N];
int vertex_id[N];
int vertex_cnt[N];

struct Point {
    double x, y, z;
    int id;
    double s;
    bool operator < (const Point & rhs) const {
        return s > rhs.s;
    }
    Point() {};
    Point (double _x, double _y, double _z) {
        x = _x; y = _y; z = _z;
    };
    Point operator - (const Point& v) const {
        return Point(x - v.x, y - v.y, z - v.z);}

    Point operator + (const Point& v) const {
        return Point(x + v.x, y + v.y, z + v.z);}

    Point operator * (const double t) const {
        return Point(x * t, y * t, z * t);}

    double length() {
        return sqrt(x * x + y * y + z * z);}

    void normalize() {
        double l = length();
        x /= l; y /= l; z /= l;}

    double dot(const Point& v) const {
        return x * v.x + y * v.y + z * v.z;}

    Point cross(const Point& v) const {
        return Point(
            y * v.z - z * v.y,
            z * v.x - x * v.z,
            x * v.y - y * v.x);}

}vertices[N], vertices1[N];

struct Face {
    int a, b, c;
    Face() {};
    Face (int _a, int _b, int _c) {
        a = _a; b = _b; c = _c;
    };
}faces[N];

map<tuple<int, int, int>, bool> face_visit;

bool face_check(int a, int b, int c) {
    int ver[3];
    ver[0] = a; ver[1] = b; ver[2] = c;
    sort(&ver[0], &ver[3]);
    if (face_visit[make_tuple(ver[0], ver[1], ver[2])])
        return false;
    face_visit[make_tuple(ver[0], ver[1], ver[2])] = true;
    return true;
}

map<pair<int, int>, vector<int> > edge_faces;

void add_edge_faces(int a, int b, int i) {
    if (a > b)
        swap(a, b);
    edge_faces[make_pair(a, b)].push_back(i);
    return ;
}

vector<int> get_edge_faces(int a, int b) {
    if (a > b)
        swap(a, b);
    return edge_faces[make_pair(a, b)];
}

bool point_in_segment(int a, int b, int c) {
    if (c == a || c == b)
        return false;
    Point A = vertices[a];
    Point B = vertices[b];
    Point C = vertices[c];
    double lAB = (B - A).length();
    double t = ((B - A).dot(C - A)) / (lAB * lAB);
    if (t < 0.01 || t > 0.99)
        return false;
    Point D = A + ((B - A) * t);
    return (D - C).length() < 1e-3;
}

bool insert_face(int a, int b, int c, int id) {
    if (a == b || a == c || b == c)
        return false;
    if (!face_check(a, b, c))
        return false;
    faces[id] = Face(a, b, c);
    face_valid[id] = true;
    float triangle[3][3];
    triangle[0][0] = vertices[a].x; triangle[0][1] = vertices[a].y; triangle[0][2] = vertices[a].z;
    triangle[1][0] = vertices[b].x; triangle[1][1] = vertices[b].y; triangle[1][2] = vertices[b].z;
    triangle[2][0] = vertices[c].x; triangle[2][1] = vertices[c].y; triangle[2][2] = vertices[c].z;
    root->insert(triangle, id);
    add_edge_faces(a, b, id);
    add_edge_faces(a, c, id);
    add_edge_faces(c, b, id);
    return true;
}

void split(int a, int b, int p) {
    vector<int> face_ids = get_edge_faces(a, b);
    for (int id: face_ids) {
        if (!face_valid[id]) continue;
        Face f = faces[id];
        int c;
        if (f.a != a && f.a != b)
            c = f.a;
        else if (f.b != a && f.b != b)
            c = f.b;
        else
            c = f.c;
        if (p == c)
            continue;
        face_valid[id] = false;
        insert_face(a, p, c, n_faces);
        n_faces++;
        insert_face(b, p, c, n_faces);
        n_faces++;
    }
    return ;
}

int main(int argc, char ** argv) {
    string input_file = argv[1];
    string output_file = argv[2];

    freopen(input_file.c_str(), "r", stdin);
    scanf("%d%d", &n_vertices, &n_faces);
    for (int i = 0; i < n_vertices; i++) {
        double x, y, z;
        scanf("%lf%lf%lf", &x, &y, &z);
        vertices[i] = Point(x, y, z);
        vertex_cnt[i] = 0;
    }

    for (int i = 0; i < n_faces; i++) {
        int a, b, c;
        scanf("%d%d%d", &a, &b, &c);
        faces[i].a = a; faces[i].b = b; faces[i].c = c;
        vertex_cnt[a]++; vertex_cnt[b]++; vertex_cnt[c]++;
    }

    double xmin = 1e9, ymin = 1e9, zmin = 1e9;
    double xmax = -1e9, ymax = -1e9, zmax = -1e9;
    for (int i = 0; i < n_vertices; i++) {
        if (vertex_cnt[i] == 0)
            continue;
        double x = vertices[i].x, y = vertices[i].y, z = vertices[i].z;
        xmax = max(xmax, x); ymax = max(ymax, y); zmax = max(zmax, z);
        xmin = min(xmin, x); ymin = min(ymin, y); zmin = min(zmin, z);
    }

    double scale = sqrt((xmax - xmin) * (xmax - xmin) + (ymax - ymin) * (ymax - ymin) + (zmax - zmin) * (zmax - zmin));

    for (int i = 0; i < n_vertices; i++) {
        vertices[i].x = (vertices[i].x - (xmax + xmin) / 2) / scale;
        vertices[i].y = (vertices[i].y - (ymax + ymin) / 2) / scale;
        vertices[i].z = (vertices[i].z - (zmax + zmin) / 2) / scale;
        vertices1[i] = vertices[i];
        vertices1[i].s = 0;
        vertices1[i].id = i;
    }

    for (int i = 0; i < n_faces; i++) {
        int a = faces[i].a, b = faces[i].b, c = faces[i].c;
        double s = (vertices[c] - vertices[a]).cross(vertices[b] - vertices[a]).length();
        vertices1[a].s = max(vertices1[a].s, s);
        vertices1[b].s = max(vertices1[b].s, s);
        vertices1[c].s = max(vertices1[c].s, s);
    }

    sort(&vertices1[0], &vertices1[n_vertices]);

    for (int i = 0; i < n_vertices; i++) {
        int uid = vertices1[i].id;
        if (vertex_cnt[uid] == 0) {
            vertex_mapping[uid] = -1;
            continue;
        }
        vertex_mapping[uid] = uid;
        for (int j = 0; j < i; j++) {
            int vid = vertices1[j].id;
            if (vertex_mapping[vid] == vid && (vertices[uid] - vertices[vid]).length() < 0.001) {
                vertex_mapping[uid] = vertex_mapping[vid];
            }
        }
    }

    double range_l[3] = {-0.6, -0.6, -0.6};
    double range_r[3] = {0.6, 0.6, 0.6};
    root = new Octree(range_l, range_r, 0.01);
    for (int i = 0; i < n_faces; i++) {
        Face f = faces[i];
        insert_face(vertex_mapping[f.a], vertex_mapping[f.b], vertex_mapping[f.c], i);
    }

    for (int i = 0; i < n_vertices; i++) {
        if (vertex_mapping[i] != i) continue;
        double query_l[3], query_r[3];

        query_l[0] = vertices[i].x; query_l[1] = vertices[i].y; query_l[2] = vertices[i].z;
        query_r[0] = vertices[i].x; query_r[1] = vertices[i].y; query_r[2] = vertices[i].z;
        set<int> face_id;
        root->query(query_l, query_r, &face_id);
        for (int id: face_id) {
            if (!face_valid[id]) continue;
            Face f = faces[id];
            if (point_in_segment(f.a, f.b, i))
                split(f.a, f.b, i);
            else if (point_in_segment(f.a, f.c, i))
                split(f.a, f.c, i);
            else if (point_in_segment(f.b, f.c, i))
                split(f.b, f.c, i);
        }
    }

    int n_new_vertices = 0, n_new_faces = 0;
    for (int i = 0; i < n_vertices; i++) {
        if (vertex_mapping[i] == i) {
            vertex_id[i] = n_new_vertices;
            n_new_vertices++;
        }
    }
    for (int i = 0; i < n_faces; i++)
        if (face_valid[i])
            n_new_faces++;

    freopen(output_file.c_str(), "w", stdout);
    printf("%d\n%d\n", n_new_vertices, n_new_faces);
    for (int i = 0; i < n_vertices; i++)
        if (vertex_mapping[i] == i)
            printf("%lf %lf %lf\n", vertices[i].x, vertices[i].y, vertices[i].z);
    for (int i = 0; i < n_faces; i++)
        if (face_valid[i])
            printf("%d %d %d\n", vertex_id[faces[i].a], vertex_id[faces[i].b], vertex_id[faces[i].c]);
    return 0;
}
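preprocess_mesh.cpp first normalizes the mesh so its bounding-box diagonal has length 1, then merges vertices closer than 1e-3 (vertices adjacent to larger faces win) and splits faces crossed by merged vertices. A simplified numpy sketch of the normalize-and-merge part (illustrative; it merges in plain index order rather than by face area and, like the original pass, is O(n^2)):

```
import numpy as np

def normalize_and_merge(verts, faces, merge_eps=1e-3):
    """Center vertices, scale by the bounding-box diagonal, then merge
    near-duplicate vertices; a simplified take on preprocess_mesh.cpp."""
    lo, hi = verts.min(0), verts.max(0)
    verts = (verts - (lo + hi) / 2) / np.linalg.norm(hi - lo)
    # Greedy merge: map each vertex to the first earlier vertex within eps.
    mapping = np.arange(len(verts))
    for i in range(len(verts)):
        d = np.linalg.norm(verts[:i] - verts[i], axis=1)
        close = np.where(d < merge_eps)[0]
        if len(close):
            mapping[i] = mapping[close[0]]
    faces = mapping[faces]
    keep = (faces[:, 0] != faces[:, 1]) & \
           (faces[:, 0] != faces[:, 2]) & (faces[:, 1] != faces[:, 2])
    return verts, faces[keep], mapping   # degenerate faces dropped
```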
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/propose_candidates.cpp:
--------------------------------------------------------------------------------
#include <cstdio>
#include "../annoy/src/kissrandom.h"
#include "../annoy/src/annoylib.h"
#include "octree.h"
#include <cmath>
#include <cstdlib>
#include <vector>
#include <map>
#include <set>
#include <tuple>
#include <string>
#include <algorithm>
#define N 5555555
using namespace std;

std::vector<int> knn_id[N];
std::vector<double> knn_dis[N];
int n_vertices, n_faces, n_pc, n_candidates = 0;
double average_nn_dis = 0, face_max_l = 1e9;
map<tuple<int, int, int>, bool> face_visit;
AnnoyIndex<int, double, Euclidean, Kiss32Random> pc_knn(3);

struct Point {
    double x, y, z;
    Point() {};
    Point (double _x, double _y, double _z) {
        x = _x; y = _y; z = _z;
    };
    Point operator - (const Point& v) const {
        return Point(x - v.x, y - v.y, z - v.z);}

    Point operator + (const Point& v) const {
        return Point(x + v.x, y + v.y, z + v.z);}

    Point operator * (const double t) const {
        return Point(x * t, y * t, z * t);}

    double length() {
        return sqrt(x * x + y * y + z * z);}

    void normalize() {
        double l = length();
        x /= l; y /= l; z /= l;}

    double dot(const Point& v) const {
        return x * v.x + y * v.y + z * v.z;}

    Point cross(const Point& v) const {
        return Point(
            y * v.z - z * v.y,
            z * v.x - x * v.z,
            x * v.y - y * v.x);}
}vertices[N], pc[N];

struct Face {
    int a, b, c;
    double d;
    Face() {};
    Face (int _a, int _b, int _c) {
        a = _a; b = _b; c = _c;
    };
}faces[N], candidates[N * 10];

bool face_check(int a, int b, int c) {
    int ver[3];
    ver[0] = a; ver[1] = b; ver[2] = c;
    sort(&ver[0], &ver[3]);
    if (face_visit[make_tuple(ver[0], ver[1], ver[2])])
        return false;
    face_visit[make_tuple(ver[0], ver[1], ver[2])] = true;
    return true;
}

Point randomPointTriangle(Point a, Point b, Point c) {
    double r1 = (double) rand() / RAND_MAX;
    double r2 = (double) rand() / RAND_MAX;
    double r1sqr = std::sqrt(r1);
    double OneMinR1Sqr = (1 - r1sqr);
    double OneMinR2 = (1 - r2);
    a = a * OneMinR1Sqr;
    b = b * OneMinR2;
    return (c * r2 + b) * r1sqr + a;
}

bool SameSide(Point A, Point B, Point C, Point P) {
    Point v1 = (B - A).cross(C - A);
    Point v2 = (B - A).cross(P - A);
    return v1.dot(v2) >= 0;
}

bool PointinTriangle(Point A, Point B, Point C, Point P) {
    return SameSide(A, B, C, P) && SameSide(B, C, A, P) && SameSide(C, A, B, P);
}

double disPointSegment(Point P, Point A, Point B) {
    double lAB = (A - B).length();
    double r = (P - A).dot(B - A) / (lAB * lAB);
    if (r < 0)
        return (A - P).length();
    else if (r > 1)
        return (B - P).length();
    else
        return (A + ((B - A) * r) - P).length();
}

double disPointTriangle(Point P, Point A, Point B, Point C) {
    Point normal = (B - A).cross(C - A);
    normal.normalize();
    double t = (A - P).dot(normal);
    Point Q = P + (normal * t);
    if (PointinTriangle(A, B, C, Q))
        return (Q - P).length();
    double dAB = disPointSegment(P, A, B);
    double dAC = disPointSegment(P, A, C);
    double dBC = disPointSegment(P, B, C);
    return min(min(dAB, dAC), dBC);
}

double PointMeshDis(Point p, double r, Octree *root) {
    double query_l[3], query_r[3], min_dis = 1e9;
    query_l[0] = p.x - r; query_l[1] = p.y - r; query_l[2] = p.z - r;
    query_r[0] = p.x + r; query_r[1] = p.y + r; query_r[2] = p.z + r;
    set<int> face_id;
    root->query(query_l, query_r, &face_id);
    for (int id: face_id) {
        Face face = faces[id];
        min_dis = min(min_dis, disPointTriangle(p, vertices[face.a], vertices[face.b], vertices[face.c]));
    }
    return min_dis;
}

int main(int argc, char ** argv) {
    string pc_file = argv[1];
    string mesh_file = argv[2];
    string candidates_file = argv[3];
    int K = atoi(argv[4]);

    freopen(pc_file.c_str(), "r", stdin);
    scanf("%d", &n_pc);
    for (int i = 0; i < n_pc; i++) {
        double vec[3];
        scanf("%lf%lf%lf", &vec[0], &vec[1], &vec[2]);
        pc[i] = Point(vec[0], vec[1], vec[2]);
        pc_knn.add_item(i, vec);
    }
    pc_knn.build(10);

    for (int i = 0; i < n_pc; i++) {
        pc_knn.get_nns_by_item(i, K + 1, -1, &knn_id[i], &knn_dis[i]);
        average_nn_dis += knn_dis[i][1];
        face_max_l = min(knn_dis[i][K], face_max_l);
    }
    average_nn_dis /= n_pc;
    if (face_max_l < average_nn_dis * 2)
        face_max_l = average_nn_dis * 2;

    freopen(mesh_file.c_str(), "r", stdin);
    scanf("%d%d", &n_vertices, &n_faces);
    double coor_min = 1e9, coor_max = -1e9;
    for (int i = 0; i < n_vertices; i++) {
        double x, y, z;
        scanf("%lf%lf%lf", &x, &y, &z);
        vertices[i] = Point(x, y, z);
        coor_max = max(coor_max, x); coor_max = max(coor_max, y); coor_max = max(coor_max, z);
        coor_min = min(coor_min, x); coor_min = min(coor_min, y); coor_min = min(coor_min, z);
    }

    double range_l[3] = {coor_min - 0.1, coor_min - 0.1, coor_min - 0.1};
    double range_r[3] = {coor_max + 0.1, coor_max + 0.1, coor_max + 0.1};
    Octree *root = new Octree(range_l, range_r, average_nn_dis * 2);

    for (int i = 0; i < n_faces; i++) {
        int a, b, c;
        scanf("%d%d%d", &a, &b, &c);
        faces[i] = Face(a, b, c);
        float triangle[3][3];
        triangle[0][0] = vertices[a].x; triangle[0][1] = vertices[a].y; triangle[0][2] = vertices[a].z;
        triangle[1][0] = vertices[b].x; triangle[1][1] = vertices[b].y; triangle[1][2] = vertices[b].z;
        triangle[2][0] = vertices[c].x; triangle[2][1] = vertices[c].y; triangle[2][2] = vertices[c].z;
        root->insert(triangle, i);
    }

    for (int i = 0; i < n_pc; i++) {
        for (int j = 1; j <= K; j++)
            for (int k = j + 1; k <= K; k++) {
                int a = i, b = knn_id[i][j], c = knn_id[i][k];
                if (!face_check(a, b, c))
                    continue;
                Point A = pc[a];
                Point B = pc[b];
                Point C = pc[c];
                if ((A - B).length() > face_max_l || (A - C).length() > face_max_l || (B - C).length() > face_max_l)
                    continue;
                double sum = 0;
                for (int t = 0; t < 10; t++) {
                    Point P = randomPointTriangle(A, B, C);
                    sum += PointMeshDis(P, 2 * average_nn_dis, root);
                }
                candidates[n_candidates] = Face(a, b, c);
                candidates[n_candidates].d = sum / 10;
                n_candidates++;
            }
    }

    freopen(candidates_file.c_str(), "w", stdout);
    printf("%d\n", n_candidates);
    for (int i = 0; i < n_candidates; i++) {
        Face f = candidates[i];
        printf("%d %d %d %lf\n", f.a, f.b, f.c, f.d);
    }
    return 0;
}
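Candidates are all triangles formed by a point and two of its K nearest neighbors, deduplicated by sorted index triple and filtered by a maximum edge length derived from the KNN statistics; the gt-mesh version additionally averages the distance of ten random samples on each candidate to the ground-truth surface. A hedged sketch of the proposal step itself (illustrative; it uses scipy's cKDTree where the repository uses annoy, and the O(n K^2) loop is slow in pure Python):

```
import numpy as np
from scipy.spatial import cKDTree

def propose_candidates(pc, K=50):
    """All triangles formed by a point and two of its K nearest neighbors,
    filtered by edge length as in propose_candidates.cpp."""
    tree = cKDTree(pc)
    dis, idx = tree.query(pc, k=K + 1)         # column 0 is the point itself
    average_nn_dis = dis[:, 1].mean()
    face_max_l = max(dis[:, K].min(), 2 * average_nn_dis)
    seen, cands = set(), []
    for i in range(len(pc)):
        for j in range(1, K + 1):
            for k in range(j + 1, K + 1):
                tri = tuple(sorted((i, idx[i][j], idx[i][k])))
                if tri in seen:
                    continue                   # deduplicate by sorted triple
                seen.add(tri)
                a, b, c = tri
                if (np.linalg.norm(pc[a] - pc[b]) > face_max_l or
                    np.linalg.norm(pc[a] - pc[c]) > face_max_l or
                    np.linalg.norm(pc[b] - pc[c]) > face_max_l):
                    continue                   # reject overly long edges
                cands.append(tri)
    return cands
```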
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/run_demo.py:
--------------------------------------------------------------------------------
import os
with open("../data/demo/models.txt", "r") as f:
    models = [line.rstrip() for line in f.readlines()]

for model in models:
    input_file = "../data/demo/gt_mesh/%s.ply" % model
    output_file = "../data/demo/%s.p" % model
    os.system("python3 main.py --input %s --output %s"%(input_file, output_file))
--------------------------------------------------------------------------------
/preprocess_with_gt_mesh/sample_pc.cpp:
--------------------------------------------------------------------------------
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>
#include <string>
#define N 5555555
using namespace std;

int n_vertices;
int n_faces;

struct Point {
    double x, y, z;
    Point() {};
    Point (double _x, double _y, double _z) {
        x = _x; y = _y; z = _z;
    };
    Point operator - (const Point& v) const {
        return Point(x - v.x, y - v.y, z - v.z);}

    Point operator + (const Point& v) const {
        return Point(x + v.x, y + v.y, z + v.z);}

    Point operator * (const double t) const {
        return Point(x * t, y * t, z * t);}

    double length() {
        return sqrt(x * x + y * y + z * z);}

}vertices[N];

struct Face {
    int a, b, c;
    Face() {};
    Face (int _a, int _b, int _c) {
        a = _a; b = _b; c = _c;
    };
}faces[N];

Point randomPointTriangle(Point a, Point b, Point c) {
    double r1 = (double) rand() / RAND_MAX;
    double r2 = (double) rand() / RAND_MAX;
    double r1sqr = std::sqrt(r1);
    double OneMinR1Sqr = (1 - r1sqr);
    double OneMinR2 = (1 - r2);
    a = a * OneMinR1Sqr;
    b = b * OneMinR2;
    return (c * r2 + b) * r1sqr + a;
}

int main(int argc, char ** argv) {
    string input_file = argv[1];
    string output_file = argv[2];
    int num_l = atoi(argv[3]);
    int num_r = atoi(argv[4]);

    freopen(input_file.c_str(), "r", stdin);
    scanf("%d%d", &n_vertices, &n_faces);

    for (int i = 0; i < n_vertices; i++) {
        double x, y, z;
        scanf("%lf%lf%lf", &x, &y, &z);
        vertices[i] = Point(x, y, z);
    }

    for (int i = 0; i < n_faces; i++) {
        int a, b, c;
        scanf("%d%d%d", &a, &b, &c);
        faces[i] = Face(a, b, c);
    }

    vector<Point> sampled_points;
    double ll = 0.0001, rr = 0.03;
    int iter = 0;
    while (true) {
        double r = (ll + rr) / 2;
        for (int i = 0; i < n_faces; i++) {
            Point A = vertices[faces[i].a];
            Point B = vertices[faces[i].b];
            Point C = vertices[faces[i].c];
            while (true) {
                bool flag = false;
                for (int j = 0; j < 100; j++) {
                    Point u = randomPointTriangle(A, B, C);
                    bool intersect = false;
                    for (Point v: sampled_points)
                        if ((u - v).length() < r) {
                            intersect = true;
                            break;
                        }
                    if (!intersect) {
                        flag = true;
                        sampled_points.push_back(u);
                    }
                }
                if (!flag) break;
            }
            if ((int)sampled_points.size() > num_r)
                break;
        }
        printf(" [iteration: %d, radius: %.5lf, #points: %d]\n", iter, r, (int)sampled_points.size());
        if ((int)sampled_points.size() >= num_l && (int)sampled_points.size() <= num_r)
            break;
        else if ((int)sampled_points.size() < num_l)
            rr = r;
        else
            ll = r;
        sampled_points.clear();
        iter++;
    }

    freopen(output_file.c_str(), "w", stdout);
    printf("%d\n", (int)sampled_points.size());
    for (Point u: sampled_points)
        printf("%lf %lf %lf\n", u.x, u.y, u.z);
    return 0;
}
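sample_pc.cpp runs dart-throwing Poisson-disk sampling and binary-searches the disk radius until 12,000 to 12,800 points survive: a larger radius yields fewer points, so too few points means the upper end of the radius bracket must come down. A compact sketch of that control loop (illustrative; `surface_sampler` is an assumed helper returning uniform random surface points, and the brute-force distance check stands in for the C++ per-face loop):

```
import numpy as np

def dart_throw(surface_sampler, r, max_points, tries=100):
    """Rejection (dart-throwing) Poisson-disk sampling with radius r."""
    pts, fails = [], 0
    while fails < tries and len(pts) < max_points:
        p = surface_sampler(1)[0]
        if all(np.linalg.norm(p - q) >= r for q in pts):
            pts.append(p)
            fails = 0
        else:
            fails += 1                     # too crowded at this radius
    return np.array(pts)

def sample_between(surface_sampler, n_lo=12000, n_hi=12800):
    lo, hi = 0.0001, 0.03                  # radius bracket, as in sample_pc.cpp
    while True:
        r = (lo + hi) / 2
        pts = dart_throw(surface_sampler, r, n_hi + 1)
        if n_lo <= len(pts) <= n_hi:
            return pts
        if len(pts) < n_lo:
            hi = r                         # radius too large: fewer points fit
        else:
            lo = r
```

This assumes the bracket [0.0001, 0.03] actually contains a feasible radius for the target count, the same implicit assumption the C++ loop makes.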
--------------------------------------------------------------------------------
/preprocess_with_pc/CMakeLists.txt:
--------------------------------------------------------------------------------
set(CMAKE_CXX_FLAGS "-std=c++11 -O2")
add_executable(propose_candidates propose_candidates.cpp)
--------------------------------------------------------------------------------
/preprocess_with_pc/main.py:
--------------------------------------------------------------------------------
import argparse
import os
import numpy as np
import pickle
from plyfile import PlyData

def parse_args():
    parser = argparse.ArgumentParser('Model')
    parser.add_argument('--input', type=str, default='../data/demo/pc/597cb92a5bfb580eed98cca8f0ccd5f7.ply', help='input ply file')
    parser.add_argument('--output', type=str, default='../data/demo/597cb92a5bfb580eed98cca8f0ccd5f7.p', help='output pickle file')
    parser.add_argument('--log_dir', type=str, default='log_d60ec6', help='log dir of intermediate files')
    parser.add_argument('--K', default=50, type=int, help='number of nearest neighbors when proposing candidates')
    return parser.parse_args()

def output_pc_txt(out_file, plydata):
    try:
        x = plydata['vertex']['x']
        y = plydata['vertex']['y']
        z = plydata['vertex']['z']
    except Exception:
        print("Error: failed to parse the input file.")
        exit()
    n = len(x)
    if n < 12000 or n > 12800:
        print("Error: the point cloud should contain between 12,000 and 12,800 points to fit the pre-trained model.")
        exit()
    with open(out_file, 'w') as f:
        f.write("%d\n" % (n))
        for i in range(n):
            f.write("%f %f %f\n" % (x[i], y[i], z[i]))

def resample(pc, n):
    idx = np.arange(pc.shape[0])
    if idx.shape[0] < n:
        idx = np.concatenate([idx, np.random.randint(pc.shape[0], size = n - pc.shape[0])])
    return pc[idx[:n]]

def gen_pickle_file(candidates_txt, pickle_file):
    with open(candidates_txt, 'r') as f:
        lines = f.readlines()
    n = int(lines[0])
    m = int(lines[n + 1])
    pc = np.zeros((n, 3))
    vertex_idx = np.zeros((m, 3), dtype = np.int16)
    label = np.ones(m, dtype = np.int8) * -1

    for j in range(n):
        pc[j][0] = float(lines[j + 1].split()[0])
        pc[j][1] = float(lines[j + 1].split()[1])
        pc[j][2] = float(lines[j + 1].split()[2])

    for j in range(m):
        vertex_idx[j][0] = int(lines[j + 2 + n].split()[0])
        vertex_idx[j][1] = int(lines[j + 2 + n].split()[1])
        vertex_idx[j][2] = int(lines[j + 2 + n].split()[2])

    pc = resample(pc, 12800)
    gt = {'pc': pc, 'vertex_idx': vertex_idx, 'label': label}
    pickle.dump(gt, open(pickle_file, 'wb'))


if __name__ == '__main__':
    args = parse_args()
    args.log_dir = "log/%s" % os.path.basename(args.input)[:6]
    os.makedirs(args.log_dir, exist_ok = True)

    print("Loading input point cloud!")
    plydata = PlyData.read(args.input)
    pc_txt = os.path.join(args.log_dir, "pc.txt")
    output_pc_txt(pc_txt, plydata)

    print("Proposing candidates!")
    candidates_txt = os.path.join(args.log_dir, "candidates.txt")
    os.system("./build/propose_candidates %s %s %d"%(pc_txt, candidates_txt, args.K))

    print("Generating pickle file!")
    pickle_file = args.output
    gen_pickle_file(candidates_txt, pickle_file)
    print("Finished!")
--------------------------------------------------------------------------------
/preprocess_with_pc/propose_candidates.cpp:
--------------------------------------------------------------------------------
#include <cstdio>
#include "../annoy/src/kissrandom.h"
#include "../annoy/src/annoylib.h"
#include <cmath>
#include <cstdlib>
#include <vector>
#include <map>
#include <tuple>
#include <string>
#include <algorithm>
#define N 5555555
using namespace std;

std::vector<int> knn_id[N];
std::vector<double> knn_dis[N];
int n_pc, n_candidates = 0;
double average_nn_dis = 0, face_max_l = 1e9;
map<tuple<int, int, int>, bool> face_visit;
AnnoyIndex<int, double, Euclidean, Kiss32Random> pc_knn(3);

struct Point {
    double x, y, z;
    Point() {};
    Point (double _x, double _y, double _z) {
        x = _x; y = _y; z = _z;
    };
    Point operator - (const Point& v) const {
        return Point(x - v.x, y - v.y, z - v.z);}

    Point operator + (const Point& v) const {
        return Point(x + v.x, y + v.y, z + v.z);}

    Point operator * (const double t) const {
        return Point(x * t, y * t, z * t);}

    double length() {
        return sqrt(x * x + y * y + z * z);}
}pc[N];

struct Face {
    int a, b, c;
    double d;
    Face() {};
    Face (int _a, int _b, int _c) {
        a = _a; b = _b; c = _c;
    };
} candidates[N * 10];

bool face_check(int a, int b, int c) {
    int ver[3];
    ver[0] = a; ver[1] = b; ver[2] = c;
    sort(&ver[0], &ver[3]);
    if (face_visit[make_tuple(ver[0], ver[1], ver[2])])
        return false;
    face_visit[make_tuple(ver[0], ver[1], ver[2])] = true;
    return true;
}

int main(int argc, char ** argv) {
    string pc_file = argv[1];
    string candidates_file = argv[2];
    int K = atoi(argv[3]);

    freopen(pc_file.c_str(), "r", stdin);
    scanf("%d", &n_pc);

    double xmin = 1e9, ymin = 1e9, zmin = 1e9;
    double xmax = -1e9, ymax = -1e9, zmax = -1e9;
    for (int i = 0; i < n_pc; i++) {
        double x, y, z;
        scanf("%lf%lf%lf", &x, &y, &z);
        pc[i] = Point(x, y, z);
        xmax = max(xmax, x); ymax = max(ymax, y); zmax = max(zmax, z);
        xmin = min(xmin, x); ymin = min(ymin, y); zmin = min(zmin, z);
    }

    double scale = sqrt((xmax - xmin) * (xmax - xmin) + (ymax - ymin) * (ymax - ymin) + (zmax - zmin) * (zmax - zmin));

    for (int i = 0; i < n_pc; i++) {
        pc[i].x = (pc[i].x - (xmax + xmin) / 2) / scale;
        pc[i].y = (pc[i].y - (ymax + ymin) / 2) / scale;
        pc[i].z = (pc[i].z - (zmax + zmin) / 2) / scale;
    }

    for (int i = 0; i < n_pc; i++) {
        double vec[3] = {pc[i].x, pc[i].y, pc[i].z};
        pc_knn.add_item(i, vec);
    }
    pc_knn.build(10);

    for (int i = 0; i < n_pc; i++) {
        pc_knn.get_nns_by_item(i, K + 1, -1, &knn_id[i], &knn_dis[i]);
        average_nn_dis += knn_dis[i][1];
        face_max_l = min(knn_dis[i][K], face_max_l);
    }
    average_nn_dis /= n_pc;
    if (face_max_l < average_nn_dis * 2)
        face_max_l = average_nn_dis * 2;


    for (int i = 0; i < n_pc; i++) {
        for (int j = 1; j <= K; j++)
            for (int k = j + 1; k <= K; k++) {
                int a = i, b = knn_id[i][j], c = knn_id[i][k];
                if (!face_check(a, b, c))
                    continue;
                Point A = pc[a];
                Point B = pc[b];
                Point C = pc[c];
                if ((A - B).length() > face_max_l || (A - C).length() > face_max_l || (B - C).length() > face_max_l)
                    continue;
                candidates[n_candidates] = Face(a, b, c);
                n_candidates++;
            }
    }

    freopen(candidates_file.c_str(), "w", stdout);
    printf("%d\n", n_pc);
    for (int i = 0; i < n_pc; i++)
        printf("%lf %lf %lf\n", pc[i].x, pc[i].y, pc[i].z);
    printf("%d\n", n_candidates);
    for (int i = 0; i < n_candidates; i++) {
        Face f = candidates[i];
        printf("%d %d %d\n", f.a, f.b, f.c);
    }
    return 0;
}
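Note that, unlike the gt-mesh pipeline (where preprocess_mesh normalizes the mesh before sampling), the point-cloud pipeline normalizes inside propose_candidates.cpp: the cloud is centered and scaled by its bounding-box diagonal before KNN. The same normalization in numpy, for reference:

```
import numpy as np

def normalize_pc(pc):
    """Center the cloud and scale by the bounding-box diagonal, matching
    the normalization in preprocess_with_pc/propose_candidates.cpp."""
    lo, hi = pc.min(0), pc.max(0)
    return (pc - (lo + hi) / 2) / np.linalg.norm(hi - lo)
```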
--------------------------------------------------------------------------------
/preprocess_with_pc/run_demo.py:
--------------------------------------------------------------------------------
import os
with open("../data/demo/models.txt", "r") as f:
    models = [line.rstrip() for line in f.readlines()]

for model in models:
    input_file = "../data/demo/pc/%s.ply" % model
    output_file = "../data/demo/%s.p" % model
    os.system("python3 main.py --input %s --output %s"%(input_file, output_file))
--------------------------------------------------------------------------------
/teaser.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Colin97/Point2Mesh/aa080b26692ad50ea9f35527764385545922aae1/teaser.jpg
--------------------------------------------------------------------------------