├── LICENSE.md
├── README.md
├── data
│   └── input.ply
├── img
│   ├── 2d_query.png
│   ├── 2d_test.png
│   ├── 2d_train.png
│   ├── Teaser1.jpg
│   ├── abccomp.jpg
│   ├── bim_mesh.png
│   ├── bim_points.png
│   └── test1.gif
├── pcp.py
└── tf.yaml
/LICENSE.md:
--------------------------------------------------------------------------------
1 | Copyright 2021 Baorui Ma, Zhizhong Han, Yu-Shen Liu, and Matthias Zwicker
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
4 |
5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
6 |
7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Surface Reconstruction from Point Clouds by Learning Predictive Context Priors (CVPR 2022)
2 |
3 | This repository contains the code to reproduce the results from the paper
4 | [Surface Reconstruction from Point Clouds by Learning Predictive Context Priors](https://arxiv.org/abs/2204.11015).
5 |
6 | You can find detailed usage instructions for training your own models and using pretrained models below.
7 |
8 | If you find our code or paper useful, please consider citing
9 |
10 | @inproceedings{PredictiveContextPriors,
11 | title = {Surface Reconstruction from Point Clouds by Learning Predictive Context Priors},
12 | author = {Ma, Baorui and Liu, Yu-Shen and Zwicker, Matthias and Han, Zhizhong},
13 | booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
14 | year = {2022}
15 | }
16 |
17 | ## PyTorch Version
18 | This work was originally implemented in TensorFlow; a PyTorch version that is easier to use will be released soon.
19 |
20 | Related work:
21 | - PyTorch: [NeuralPull-Pytorch](https://github.com/mabaorui/NeuralPull-Pytorch)
22 | - TensorFlow: [NeuralPull](https://github.com/mabaorui/NeuralPull)
23 | - TensorFlow: [OnSurfacePrior](https://github.com/mabaorui/OnSurfacePrior)
24 | - TensorFlow: [PredictableContextPrior](https://github.com/mabaorui/PredictableContextPrior)
29 |
30 | ## Surface Reconstruction Demo
31 |
32 | 
33 |
44 | ## Predicted Queries Visualization
45 |
46 | 

47 |
48 |
49 | ### Predicted queries in Local Coordinate System
50 |
54 | ## Installation
55 | First, make sure you have all dependencies in place.
56 | The simplest way to do so is to use [anaconda](https://www.anaconda.com/).
57 |
58 | You can create an anaconda environment called `tf` using
59 | ```
60 | conda env create -f tf.yaml
61 | conda activate tf
62 | ```
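
You can then sanity-check the environment by importing the TensorFlow 1.x compatibility API that `pcp.py` relies on (a quick check suggested here, not part of the original setup):
```
python -c "import tensorflow.compat.v1 as tf; tf.disable_v2_behavior(); print(tf.__version__)"
```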
63 | ## Training
64 | Train the Local Context Prior Network first by running
65 | ```
66 | python pcp.py --input_ply_file test.ply --data_dir ./data/ --CUDA 0 --OUTPUT_DIR_LOCAL ./local_net/ --OUTPUT_DIR_GLOBAL ./global_net/ --train --save_idx -1
67 | ```
68 | Put the point cloud file (`--input_ply_file`, PLY format only) into the `--data_dir` folder.
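
For example, assuming your scan lives at `/path/to/scan.ply` (a hypothetical path), stage it like this:
```
mkdir -p ./data/
cp /path/to/scan.ply ./data/test.ply   # the file name must match --input_ply_file
```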
69 |
70 | Then train the Predictive Context Prior Network by running
71 | ```
72 | python pcp.py --input_ply_file test.ply --data_dir ./data/ --CUDA 0 --OUTPUT_DIR_LOCAL ./local_net/ --OUTPUT_DIR_GLOBAL ./global_net/ --finetune --save_idx -1
73 | ```
74 |
75 | ## Test
76 | You can extract a mesh from the trained network by running
77 | ```
78 | python pcp.py --input_ply_file test.ply --data_dir ./data/ --CUDA 0 --OUTPUT_DIR_LOCAL ./local_net/ --OUTPUT_DIR_GLOBAL ./global_net/ --test --save_idx -1
79 | ```
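
The mesh is written as an `.off` file into the `--OUTPUT_DIR_GLOBAL` folder, named after the input file and the extraction threshold (see the export call at the end of `pcp.py`). As a quick sanity check, you can load it with `trimesh`, which the environment already includes; the exact filename below is an assumption based on that export pattern and the default 0.005 threshold:
```python
import trimesh

# export pattern in pcp.py: <OUTPUT_DIR_GLOBAL>/PCL_<input_ply_file>_<thresh>.off
mesh = trimesh.load('./global_net/PCL_test.ply_0.005.off')
print(mesh.vertices.shape, mesh.faces.shape)
```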
80 |
81 | ## ToDo
81 | Because point-cloud density varies across datasets and across your own data, this ['0.5' parameter](https://github.com/mabaorui/PredictableContextPrior/blob/c8fc75f8087370953d1e4089283d520cd1af07a5/pcp.py#L401), which controls the distance between the query points and the point cloud, has a strong influence on the final result. Adjust it if you want better reconstructions. We give '0.5' as a reference value that works for most object-level reconstructions; reference hyperparameter settings for scene datasets will be published later.
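
For reference, the parameter scales the per-point Gaussian noise used to sample query points around the input cloud in `sample_query_points`. A minimal sketch of that step, with `scale` standing in for the hard-coded 0.5:
```python
import numpy as np
from scipy.spatial import cKDTree

def sample_queries(pnts, scale=0.5):
    # sigma of each point = distance to its 50th nearest neighbor
    sigmas = cKDTree(pnts).query(pnts, k=51)[0][:, -1]
    # jitter each point by scale * sigma * Gaussian noise;
    # a larger scale places queries farther from the point cloud
    return pnts + scale * sigmas[:, None] * np.random.normal(0.0, 1.0, size=pnts.shape)
```
Try a few values around 0.5 for your data and re-run both training stages before extracting the mesh.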
--------------------------------------------------------------------------------
/img/2d_query.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/2d_query.png
--------------------------------------------------------------------------------
/img/2d_test.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/2d_test.png
--------------------------------------------------------------------------------
/img/2d_train.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/2d_train.png
--------------------------------------------------------------------------------
/img/Teaser1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/Teaser1.jpg
--------------------------------------------------------------------------------
/img/abccomp.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/abccomp.jpg
--------------------------------------------------------------------------------
/img/bim_mesh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/bim_mesh.png
--------------------------------------------------------------------------------
/img/bim_points.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/bim_points.png
--------------------------------------------------------------------------------
/img/test1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mabaorui/PredictableContextPrior/6d953784a2ebc724eb947667f04b88204918012e/img/test1.gif
--------------------------------------------------------------------------------
/pcp.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Thu Jul 23 16:44:22 2020
4 |
5 | @author: Administrator
6 | """
7 |
8 | import numpy as np
9 | #import tensorflow as tf
10 | import tensorflow.compat.v1 as tf
11 | tf.disable_v2_behavior()
12 | import os
13 | import shutil
14 | import random
15 | import math
16 | import scipy.io as sio
17 | import time
18 | import argparse
19 | #from im2mesh.utils import libmcubes
20 | import trimesh
21 | from scipy.spatial import cKDTree
22 | from plyfile import PlyData
23 | from plyfile import PlyElement
24 | from skimage.measure import marching_cubes_lewiner
25 |
26 |
27 |
28 | parser = argparse.ArgumentParser()
29 | parser.add_argument('--train',action='store_true', default=False)
30 | parser.add_argument('--finetune',action='store_true', default=False)
31 | parser.add_argument('--test',action='store_true', default=False)
32 | parser.add_argument("--save_idx", type=int, default=-1)
33 | parser.add_argument('--input_ply_file', type=str, default="test.ply")
34 | parser.add_argument('--data_dir', type=str, default="test.ply")
35 | parser.add_argument('--CUDA', type=int, default=0)
36 | parser.add_argument('--OUTPUT_DIR_LOCAL', type=str, default="test.ply")
37 | parser.add_argument('--OUTPUT_DIR_GLOBAL', type=str, default="test.ply")
38 | a = parser.parse_args()
39 |
40 | cuda_idx = str(a.CUDA)
41 | os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
42 | os.environ["CUDA_VISIBLE_DEVICES"]= cuda_idx
43 |
44 | class_idx = '03211117'
45 | name = 'totempole'
46 | BS = 1
47 | primitives = 1
48 | POINT_NUM = 400
49 | POINT_NUM_GT = 10000
50 |
51 | part_vox_size = 6
52 | OUTPUT_DIR = a.OUTPUT_DIR_LOCAL
53 | OUTPUT_DIR_FINETUNE = a.OUTPUT_DIR_GLOBAL
54 | LR = 0.0001
55 | START = 0
56 | SHAPE_NUM = 8000
57 | BD_EMPTY = 0.05
58 | TRAIN = a.train
59 | bd = 0.55
60 |
61 | if(TRAIN):
62 | if os.path.exists(OUTPUT_DIR):
63 | shutil.rmtree(OUTPUT_DIR)
64 | print ('test_res_dir: deleted and then created!')
65 | os.makedirs(OUTPUT_DIR)
66 | if os.path.exists(OUTPUT_DIR_FINETUNE):
67 | shutil.rmtree(OUTPUT_DIR_FINETUNE)
68 | print ('test_res_dir: deleted and then created!')
69 | os.makedirs(OUTPUT_DIR_FINETUNE)
70 |
71 |
72 |
73 | def normal_points(ps_gt, ps, translation = False):
74 | tt = 0
75 | if((np.max(ps_gt[:,0])-np.min(ps_gt[:,0])))>(np.max(ps_gt[:,1])-np.min(ps_gt[:,1])):
76 | tt = (np.max(ps_gt[:,0])-np.min(ps_gt[:,0]))
77 | else:
78 | tt = (np.max(ps_gt[:,1])-np.min(ps_gt[:,1]))
79 | if(tt < (np.max(ps_gt[:,2])-np.min(ps_gt[:,2]))):
80 | tt = (np.max(ps_gt[:,2])-np.min(ps_gt[:,2]))
81 | #print('tt:',tt)
82 | tt = 10/(10*tt)
83 | ps_gt = ps_gt*tt
84 | ps = ps*tt
85 | if(translation):
86 | t = np.mean(ps_gt,axis = 0)
87 | ps_gt = ps_gt - t
88 | ps = ps - t
89 | #print('normal_gt:',np.max(ps_gt),np.min(ps_gt))
90 | #print('normal:',np.max(ps),np.min(ps))
91 | return ps_gt, ps
92 |
93 | def fully_connected(inputs,
94 | num_outputs,
95 | scope,
96 | use_xavier=True,
97 | stddev=1e-3,
98 | weight_decay=0.0,
99 | activation_fn=tf.nn.relu,
100 | bn=False,
101 | bn_decay=None,
102 | is_training=None):
103 | """ Fully connected layer with non-linear operation.
104 |
105 | Args:
106 | inputs: 2-D tensor BxN
107 | num_outputs: int
108 |
109 | Returns:
110 | Variable tensor of size B x num_outputs.
111 | """
112 | with tf.variable_scope(scope) as sc:
113 | num_input_units = inputs.get_shape()[-1].value
114 | weights = _variable_with_weight_decay('weights',
115 | shape=[num_input_units, num_outputs],
116 | use_xavier=use_xavier,
117 | stddev=stddev,
118 | wd=weight_decay)
119 | outputs = tf.matmul(inputs, weights)
120 | biases = _variable_on_cpu('biases', [num_outputs],
121 | tf.constant_initializer(0.0))
122 | outputs = tf.nn.bias_add(outputs, biases)
123 |
124 |
125 | if activation_fn is not None:
126 | outputs = activation_fn(outputs)
127 | return outputs
128 | def max_pool2d(inputs,
129 | kernel_size,
130 | scope,
131 | stride=[2, 2],
132 | padding='VALID'):
133 | """ 2D max pooling.
134 |
135 | Args:
136 | inputs: 4-D tensor BxHxWxC
137 | kernel_size: a list of 2 ints
138 | stride: a list of 2 ints
139 |
140 | Returns:
141 | Variable tensor
142 | """
143 | with tf.variable_scope(scope) as sc:
144 | kernel_h, kernel_w = kernel_size
145 | stride_h, stride_w = stride
146 | outputs = tf.nn.max_pool(inputs,
147 | ksize=[1, kernel_h, kernel_w, 1],
148 | strides=[1, stride_h, stride_w, 1],
149 | padding=padding,
150 | name=sc.name)
151 | return outputs
152 | def _variable_on_cpu(name, shape, initializer, use_fp16=False):
153 | """Helper to create a Variable stored on CPU memory.
154 | Args:
155 | name: name of the variable
156 | shape: list of ints
157 | initializer: initializer for Variable
158 | Returns:
159 | Variable Tensor
160 | """
161 | with tf.device('/cpu:0'):
162 | dtype = tf.float16 if use_fp16 else tf.float32
163 | var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
164 | return var
165 |
166 | def _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True):
167 | """Helper to create an initialized Variable with weight decay.
168 |
169 | Note that the Variable is initialized with a truncated normal distribution.
170 | A weight decay is added only if one is specified.
171 |
172 | Args:
173 | name: name of the variable
174 | shape: list of ints
175 | stddev: standard deviation of a truncated Gaussian
176 | wd: add L2Loss weight decay multiplied by this float. If None, weight
177 | decay is not added for this Variable.
178 | use_xavier: bool, whether to use xavier initializer
179 |
180 | Returns:
181 | Variable Tensor
182 | """
183 | if use_xavier:
184 | initializer = tf.glorot_uniform_initializer()  # Xavier init; tf.contrib no longer exists under TF 2.x compat.v1
185 | else:
186 | initializer = tf.truncated_normal_initializer(stddev=stddev)
187 | var = _variable_on_cpu(name, shape, initializer)
188 | if wd is not None:
189 | weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
190 | tf.add_to_collection('losses', weight_decay)
191 | return var
192 |
193 | def conv2d(inputs,
194 | num_output_channels,
195 | kernel_size,
196 | scope,
197 | stride=[1, 1],
198 | padding='SAME',
199 | use_xavier=True,
200 | stddev=1e-3,
201 | weight_decay=0.0,
202 | activation_fn=tf.nn.relu,
203 | bn=False,
204 | bn_decay=None,
205 | is_training=None):
206 | """ 2D convolution with non-linear operation.
207 |
208 | Args:
209 | inputs: 4-D tensor variable BxHxWxC
210 | num_output_channels: int
211 | kernel_size: a list of 2 ints
212 | scope: string
213 | stride: a list of 2 ints
214 | padding: 'SAME' or 'VALID'
215 | use_xavier: bool, use xavier_initializer if true
216 | stddev: float, stddev for truncated_normal init
217 | weight_decay: float
218 | activation_fn: function
219 | bn: bool, whether to use batch norm
220 | bn_decay: float or float tensor variable in [0,1]
221 | is_training: bool Tensor variable
222 |
223 | Returns:
224 | Variable tensor
225 | """
226 | with tf.variable_scope(scope) as sc:
227 | kernel_h, kernel_w = kernel_size
228 | num_in_channels = inputs.get_shape()[-1].value
229 | kernel_shape = [kernel_h, kernel_w,
230 | num_in_channels, num_output_channels]
231 | kernel = _variable_with_weight_decay('weights',
232 | shape=kernel_shape,
233 | use_xavier=use_xavier,
234 | stddev=stddev,
235 | wd=weight_decay)
236 | stride_h, stride_w = stride
237 | outputs = tf.nn.conv2d(inputs, kernel,
238 | [1, stride_h, stride_w, 1],
239 | padding=padding)
240 | biases = _variable_on_cpu('biases', [num_output_channels],
241 | tf.constant_initializer(0.0))
242 | outputs = tf.nn.bias_add(outputs, biases)
243 |
244 |
245 | if activation_fn is not None:
246 | outputs = activation_fn(outputs)
247 | return outputs
248 |
249 | def distance_p2p(points_src, normals_src, points_tgt, normals_tgt):
250 | ''' Computes minimal distances of each point in points_src to points_tgt.
251 |
252 | Args:
253 | points_src (numpy array): source points
254 | normals_src (numpy array): source normals
255 | points_tgt (numpy array): target points
256 | normals_tgt (numpy array): target normals
257 | '''
258 | kdtree = cKDTree(points_tgt)
259 | dist, idx = kdtree.query(points_src)
260 |
261 | if normals_src is not None and normals_tgt is not None:
262 | normals_src = \
263 | normals_src / np.linalg.norm(normals_src, axis=-1, keepdims=True)
264 | normals_tgt = \
265 | normals_tgt / np.linalg.norm(normals_tgt, axis=-1, keepdims=True)
266 |
267 | normals_dot_product = (normals_tgt[idx] * normals_src).sum(axis=-1)
268 | # Handle normals that point into wrong direction gracefully
269 | # (mostly due to method not caring about this in generation)
270 | normals_dot_product = np.abs(normals_dot_product)
271 | else:
272 | normals_dot_product = np.array(
273 | [np.nan] * points_src.shape[0], dtype=np.float32)
274 | return dist, normals_dot_product
275 |
276 | def eval_pointcloud(pointcloud, pointcloud_tgt,
277 | normals=None, normals_tgt=None):
278 | ''' Evaluates a point cloud.
279 |
280 | Args:
281 | pointcloud (numpy array): predicted point cloud
282 | pointcloud_tgt (numpy array): target point cloud
283 | normals (numpy array): predicted normals
284 | normals_tgt (numpy array): target normals
285 | '''
286 | # Return maximum losses if pointcloud is empty
287 |
288 |
289 | pointcloud = np.asarray(pointcloud)
290 | pointcloud_tgt = np.asarray(pointcloud_tgt)
291 |
292 | # Completeness: how far are the points of the target point cloud
293 | # from the predicted point cloud
294 | completeness, completeness_normals = distance_p2p(
295 | pointcloud_tgt, normals_tgt, pointcloud, normals
296 | )
297 | completeness2 = completeness**2
298 |
299 | completeness = completeness.mean()
300 | completeness2 = completeness2.mean()
301 | completeness_normals = np.absolute(completeness_normals).mean()
302 |
303 | # Accuracy: how far are the points of the predicted pointcloud
304 | # from the target pointcloud
305 | accuracy, accuracy_normals = distance_p2p(
306 | pointcloud, normals, pointcloud_tgt, normals_tgt
307 | )
308 |
309 | accuracy2 = accuracy**2
310 |
311 | accuracy = accuracy.mean()
312 | accuracy2 = accuracy2.mean()
313 | accuracy_normals = np.absolute(accuracy_normals).mean()
314 |
315 | # Chamfer distance
316 | chamferL2 = 0.5 * (completeness2 + accuracy2)
317 | #print(completeness,accuracy,completeness2,accuracy2)
318 | #print('chamferL2:',chamferL2)
319 | normals_correctness = (
320 | 0.5 * completeness_normals + 0.5 * accuracy_normals
321 | )
322 | chamferL1 = 0.5 * (completeness + accuracy)
323 | print('chamferL2:',chamferL2,'accuracy:',accuracy,'normals_correctness:',normals_correctness,'chamferL1:',chamferL1)
324 | return normals_correctness, chamferL1, chamferL2
325 |
326 |
327 | def safe_norm_np(x, epsilon=1e-12, axis=1):
328 | return np.sqrt(np.sum(x*x, axis=axis) + epsilon)
329 |
330 | def safe_norm(x, epsilon=1e-12, axis=None):
331 | return tf.sqrt(tf.reduce_sum(x ** 2, axis=axis) + epsilon)
332 | #return tf.reduce_sum(x ** 2, axis=axis)
333 |
334 | def boundingbox(x,y,z):
335 | return min(x),max(x),min(y),max(y),min(z),max(z)
336 |
337 |
338 | def get_data_from_filename(filename):
339 | load_data = np.load(filename)
340 | point = np.asarray(load_data['sample_near']).reshape(-1,POINT_NUM,3)
341 | sample = np.asarray(load_data['sample']).reshape(-1,POINT_NUM,3)
342 | rt = random.randint(0,sample.shape[0]-1)
343 | #rt = random.randint(0,int((sample.shape[0]-1)/5))
344 | sample = sample[rt,:,:].reshape(BS, POINT_NUM, 3)
345 | point = point[rt,:,:].reshape(BS, POINT_NUM, 3)
346 |
347 |
348 |
349 | #print('input_points_bs:',filename)
350 | #print(input_points_bs)
351 | return point.astype(np.float32), sample.astype(np.float32)
352 |
353 | def sample_query_points(input_ply_file):
354 | data = PlyData.read(a.data_dir + input_ply_file)
355 | v = data['vertex'].data
356 | v = np.asarray(v)
357 | print(v.shape)
358 |
359 | #rt = np.random.choice(v.shape, 50000, replace = False)
360 |
361 | points = []
362 | for i in range(v.shape[0]):
363 | points.append(np.array([v[i][0],v[i][1],v[i][2]]))
364 | points = np.asarray(points)
365 | pointcloud_s =points.astype(np.float32)
366 | print('pointcloud sparse:',pointcloud_s.shape[0])
367 |
368 | pointcloud_s_t = pointcloud_s - np.array([np.min(pointcloud_s[:,0]),np.min(pointcloud_s[:,1]),np.min(pointcloud_s[:,2])])
369 | pointcloud_s_t = pointcloud_s_t / (np.array([np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0])]))
370 | trans = np.array([np.min(pointcloud_s[:,0]),np.min(pointcloud_s[:,1]),np.min(pointcloud_s[:,2])])
371 | scal = np.array([np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0])])
372 | pointcloud_s = pointcloud_s_t
373 |
374 | print(np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]))
375 | print(np.min(pointcloud_s[:,1]), np.max(pointcloud_s[:,1]))
376 | print(np.min(pointcloud_s[:,2]), np.max(pointcloud_s[:,2]))
377 |
378 | sample = []
379 | sample_near = []
380 | sample_near_o = []
381 | sample_dis = []
382 | sample_vec = []
383 | gt_kd_tree = cKDTree(pointcloud_s)
384 | for i in range(int(500000/pointcloud_s.shape[0])):
385 |
386 | pnts = pointcloud_s
387 | ptree = cKDTree(pnts)
388 | i = 0
389 | sigmas = []
390 | for p in np.array_split(pnts,100,axis=0):
391 | d = ptree.query(p,51)
392 | sigmas.append(d[0][:,-1])
393 |
394 | i = i+1
395 |
396 | sigmas = np.concatenate(sigmas)
397 | sigmas_big = 0.2 * np.ones_like(sigmas)
398 | sigmas = sigmas
399 |
400 | #tt = pnts + 0.5*0.25*np.expand_dims(sigmas,-1) * np.random.normal(0.0, 1.0, size=pnts.shape)
401 | tt = pnts + 0.5*np.expand_dims(sigmas,-1) * np.random.normal(0.0, 1.0, size=pnts.shape)
402 | #tt = pnts + 1*np.expand_dims(sigmas_big,-1) * np.random.normal(0.0, 1.0, size=pnts.shape)
403 | sample.append(tt)
404 | distances, vertex_ids = gt_kd_tree.query(tt, p=2, k = 1)
405 |
406 |
407 | vertex_ids = np.asarray(vertex_ids)
408 | print('distances:',distances.shape)
409 | #print(vertex_ids)
410 |
411 | sample_near.append(pointcloud_s[vertex_ids].reshape(-1,3))
412 | for i in range(int(500000/pointcloud_s.shape[0])):
413 |
414 | pnts = pointcloud_s
415 | ptree = cKDTree(pnts)
416 | i = 0
417 | sigmas = []
418 | for p in np.array_split(pnts,100,axis=0):
419 | d = ptree.query(p,51)
420 | sigmas.append(d[0][:,-1])
421 |
422 | i = i+1
423 |
424 | sigmas = np.concatenate(sigmas)
425 | sigmas_big = 0.2 * np.ones_like(sigmas)
426 | sigmas = sigmas
427 |
428 | #tt = pnts + 0.5*0.25*np.expand_dims(sigmas,-1) * np.random.normal(0.0, 1.0, size=pnts.shape)
429 | tt = pnts + 1.0*np.expand_dims(sigmas,-1) * np.random.normal(0.0, 1.0, size=pnts.shape)
430 | #tt = pnts + 1*np.expand_dims(sigmas_big,-1) * np.random.normal(0.0, 1.0, size=pnts.shape)
431 | sample.append(tt)
432 | distances, vertex_ids = gt_kd_tree.query(tt, p=2, k = 1)
433 |
434 |
435 | vertex_ids = np.asarray(vertex_ids)
436 | print('distances:',distances.shape)
437 | #print(vertex_ids)
438 |
439 | sample_near.append(pointcloud_s[vertex_ids].reshape(-1,3))
440 |
441 |
442 |
443 |
444 |
445 |
446 | sample = np.asarray(sample).reshape(-1,3)
447 | sample_near = np.asarray(sample_near).reshape(-1,3)
448 | np.savez_compressed(a.data_dir + input_ply_file , sample = sample, sample_near=sample_near,pointcloud_s = pointcloud_s, trans = trans, scal = scal)
449 | sample_all = sample.reshape(-1,3)
450 | sample_near_all = sample_near.reshape(-1,3)
451 | sample_part = [[] for i in range(part_vox_size*part_vox_size*part_vox_size)]
452 | sample_near_part = [[] for i in range(part_vox_size*part_vox_size*part_vox_size)]
453 | bd_max_x = np.max(pointcloud_s[:,0])
454 | bd_max_y = np.max(pointcloud_s[:,1])
455 | bd_max_z = np.max(pointcloud_s[:,2])
456 | bd_min_x = np.min(pointcloud_s[:,0])
457 | bd_min_y = np.min(pointcloud_s[:,1])
458 | bd_min_z = np.min(pointcloud_s[:,2])
459 | for l in range(sample_near_all.shape[0]):
460 | ex = sample_near_all[l,0] - bd_min_x
461 | ix = int(math.floor(ex/((bd_max_x- bd_min_x)/(part_vox_size))))
462 | #print(ex,ix)
463 | ey = sample_near_all[l,1] - bd_min_y
464 | iy = int(math.floor(ey/((bd_max_y- bd_min_y)/(part_vox_size))))
465 | ez = sample_near_all[l,2] - bd_min_z
466 | iz = int(math.floor(ez/((bd_max_z- bd_min_z)/(part_vox_size))))
467 | ix = np.clip(ix,0,part_vox_size-1)
468 | iy = np.clip(iy,0,part_vox_size-1)
469 | iz = np.clip(iz,0,part_vox_size-1)
470 | #print(ix,iy,iz)
471 | sample_part[ix*(part_vox_size)*(part_vox_size)+iy*(part_vox_size)+iz].append(sample_all[l])
472 | sample_near_part[ix*(part_vox_size)*(part_vox_size)+iy*(part_vox_size)+iz].append(sample_near_all[l])
473 |
474 |
475 | for iv in range(len(sample_part)):
476 | #print(np.asarray(sample[iv]).shape)
477 | np.savez(a.data_dir + input_ply_file + '_' + str(iv), sample = sample_part[iv],sample_near = sample_near_part[iv])
478 | if(a.train):
479 | sample_query_points(a.input_ply_file)
480 | mm = 0
481 | files = []
482 | files_path = []
483 |
484 | files.append(a.input_ply_file)
485 | files_path.append(a.data_dir + a.input_ply_file)
486 |
487 |
488 |
489 | points_all = []
490 | samples_all = []
491 |
492 | if(a.train):
493 | for fi in range(len(files_path)):
494 | print(files_path[fi])
495 | for i in range(part_vox_size*part_vox_size*part_vox_size):
496 | #for i in range(100):
497 | if(os.path.exists(files_path[fi] + '_{}.npz'.format(i))):
498 | print(i)
499 | load_data = np.load(files_path[fi] + '_{}.npz'.format(i))
500 | sample_near = np.asarray(load_data['sample_near'])
501 | sampler = np.asarray(load_data['sample'])
502 | print(sample_near.shape[0])
503 | if(sample_near.shape[0]>=POINT_NUM):
504 | print(sample_near.shape[0])
505 | sample_near,sampler = normal_points(sample_near,sampler,True)
506 | tt = int(math.floor(sample_near.shape[0]*1.0/POINT_NUM))
507 | tt = (tt*POINT_NUM)
508 | #print(sample_near[0:tt,:].shape)
509 | points_all.append(sample_near[0:tt,:])
510 | samples_all.append(sampler[0:tt,:])
511 | #print(points_all[i].shape)
512 |
513 | SHAPE_NUM = len(files)
514 | #SHAPE_NUM = 26
515 | print('SHAPE_NUM:',SHAPE_NUM)
516 |
517 |
518 | points_target = tf.placeholder(tf.float32, shape=[BS,POINT_NUM,3])
519 | input_points_3d = tf.placeholder(tf.float32, shape=[BS,POINT_NUM,3])
520 | normal_gt = tf.placeholder(tf.float32, shape=[BS,None,3])
521 | points_target_num = tf.placeholder(tf.int32, shape=[1,1])
522 | points_input_num = tf.placeholder(tf.int32, shape=[1,1])
523 | points_cd = tf.placeholder(tf.float32, shape=[BS,None,3])
524 |
525 | def local_decoder(feature,input_points_3d):
526 | with tf.variable_scope('local', reuse=tf.AUTO_REUSE):
527 | feature_f = tf.nn.relu(tf.layers.dense(feature,512))
528 | net = tf.nn.relu(tf.layers.dense(input_points_3d, 512))
529 | net = tf.concat([net,feature_f],2)
530 | print('net:',net)
531 | with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
532 | for i in range(8):
533 | with tf.variable_scope("resnetBlockFC_%d" % i ):
534 | b_initializer=tf.constant_initializer(0.0)
535 | w_initializer = tf.random_normal_initializer(mean=0.0,stddev=np.sqrt(2) / np.sqrt(512))
536 | net = tf.layers.dense(tf.nn.relu(net),512,kernel_initializer=w_initializer,bias_initializer=b_initializer)
537 |
538 | b_initializer=tf.constant_initializer(-0.5)
539 | w_initializer = tf.random_normal_initializer(mean=2*np.sqrt(np.pi) / np.sqrt(512), stddev = 0.000001)
540 | print('net:',net)
541 | sdf = tf.layers.dense(tf.nn.relu(net),1,kernel_initializer=w_initializer,bias_initializer=b_initializer)
542 | print('sdf',sdf)
543 |
544 | grad = tf.gradients(ys=sdf, xs=input_points_3d)
545 | print('grad',grad)
546 | print(grad[0])
547 | normal_p_lenght = tf.expand_dims(safe_norm(grad[0],axis = -1),-1)
548 | print('normal_p_lenght',normal_p_lenght)
549 | grad_norm = grad[0]/normal_p_lenght
550 | print('grad_norm',grad_norm)
551 | return sdf,grad_norm
552 |
553 | input_points_3d_global = tf.placeholder(tf.float32, shape=[BS,None,3])
554 | points_target_global = tf.placeholder(tf.float32, shape=[BS,None,3])
555 | feature_global = tf.placeholder(tf.float32, shape=[BS,None,SHAPE_NUM])
556 | #with tf.variable_scope('pointnet', reuse=tf.AUTO_REUSE):
557 | # input_image = tf.expand_dims(points_target_global,-1)
558 | # net = conv2d(input_image, 64, [1,3], padding='VALID', stride = [1,1], is_training = True, scope = 'conv1')
559 | # net = conv2d(input_image, 128, [1,3], padding='VALID', stride = [1,1], is_training = True, scope = 'conv2')
560 | # net = conv2d(input_image, 1024, [1,3], padding='VALID', stride = [1,1], is_training = True, scope = 'conv3')
561 | # net = max_pool2d(net,[POINT_NUM,1], padding = 'VALID', scope = 'maxpool')
562 | # net = tf.reshape(net,[1,-1])
563 | # net = fully_connected(net, 512, is_training = True, scope = 'fc1')
564 | # net = fully_connected(net, 256, is_training = True, scope = 'fc2')
565 | # feature_global = net
566 | #feature_global = tf.tile(tf.expand_dims(feature_global,1),[1,POINT_NUM,1])
567 | def global_decoder(feature_global_f,input_points_3d_global_f):
568 | with tf.variable_scope('global', reuse=tf.AUTO_REUSE):
569 | feature_g = tf.nn.relu(tf.layers.dense(feature_global_f,512))
570 | net_g = tf.nn.relu(tf.layers.dense(input_points_3d_global_f, 512))
571 | #print(net_g,feature_g)
572 | net_g = tf.concat([net_g,feature_g],2)
573 | for i in range(8):
574 | net_g = tf.layers.dense(tf.nn.relu(net_g),512)
575 |
576 | feature_output = tf.layers.dense(tf.nn.relu(net_g),SHAPE_NUM)
577 | d_output = tf.layers.dense(tf.nn.relu(net_g),3)
578 | sdf_g,grad_norm_g = local_decoder(feature_output,input_points_3d_global_f+d_output)
579 | g_points_g = input_points_3d_global_f - sdf_g * grad_norm_g
580 | return g_points_g, sdf_g
581 |
582 | g_points_g, sdf_g = global_decoder(feature_global,input_points_3d_global)
583 | loss_g = tf.reduce_mean(tf.norm((points_target_global-g_points_g), axis=-1))
584 |
585 |
586 |
587 |
588 | with tf.variable_scope('pointnet', reuse=tf.AUTO_REUSE):
589 | input_image = tf.expand_dims(points_target,-1)
590 | net = conv2d(input_image, 64, [1,3], padding='VALID', stride = [1,1], is_training = True, scope = 'conv1')
591 | net = conv2d(input_image, 128, [1,3], padding='VALID', stride = [1,1], is_training = True, scope = 'conv2')
592 | net = conv2d(input_image, 1024, [1,3], padding='VALID', stride = [1,1], is_training = True, scope = 'conv3')
593 | net = max_pool2d(net,[POINT_NUM,1], padding = 'VALID', scope = 'maxpool')
594 | net = tf.reshape(net,[1,-1])
595 | net = fully_connected(net, 512, is_training = True, scope = 'fc1')
596 | net = fully_connected(net, 256, is_training = True, scope = 'fc2')
597 | feature = net
598 | feature = tf.tile(tf.expand_dims(feature,1),[1,POINT_NUM,1])
599 | sdf,grad_norm = local_decoder(feature,input_points_3d)
600 | g_points = input_points_3d - sdf * grad_norm
601 |
602 |
603 | loss = tf.reduce_mean(tf.norm(points_target - g_points, axis=-1))  # mean distance between pulled queries and their targets
604 |
605 | t_vars = tf.trainable_variables()
606 | optim = tf.train.AdamOptimizer(learning_rate=LR, beta1=0.9)
607 | loss_grads_and_vars = optim.compute_gradients(loss, var_list=t_vars)
608 | loss_optim = optim.apply_gradients(loss_grads_and_vars)
609 |
610 | global_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='global')
611 | loss_grads_and_vars_g = optim.compute_gradients(loss_g, var_list=global_vars)
612 | #loss_grads_and_vars_g = optim.compute_gradients(loss_g, var_list=t_vars)
613 | loss_optim_g = optim.apply_gradients(loss_grads_and_vars_g)
614 |
615 |
616 | config = tf.ConfigProto(allow_soft_placement=False)
617 | saver_restore = tf.train.Saver(var_list=t_vars)
618 | saver = tf.train.Saver(max_to_keep=2000000)
619 |
620 |
621 |
622 |
623 | with tf.Session(config=config) as sess:
624 | feature_bs_all = []
625 | for i in range(SHAPE_NUM):
626 | tt = []
627 | for j in range(int(POINT_NUM)):
628 | t = np.zeros(SHAPE_NUM)
629 |
630 | t[i] = 1
631 | tt.append(t)
632 | feature_bs_all.append(tt)
633 | feature_bs_all = np.asarray(feature_bs_all)
634 | #print(feature_bs_all,feature_bs_all.shape)
635 | if(TRAIN):
636 | print('train start')
637 | sess.run(tf.global_variables_initializer())
638 | start_time = time.time()
639 | POINT_NUM_GT_bs = np.array(POINT_NUM_GT).reshape(1,1)
640 | points_input_num_bs = np.array(POINT_NUM).reshape(1,1)
641 | print('data shape:',len(points_all))
642 | for bi in range(500):
643 | epoch_index = np.random.choice(len(points_all)-1, len(points_all)-1, replace = False)
644 | for epoch in epoch_index:
645 |
646 |
647 | points = points_all[epoch].reshape(-1,POINT_NUM,3)
648 | samples = samples_all[epoch].reshape(-1,POINT_NUM,3)
649 |
650 | rt = random.randint(0,samples.shape[0]-1)
651 | sample = samples[rt,:].reshape(BS, POINT_NUM, 3)
652 | point = points[rt,:].reshape(BS, POINT_NUM, 3)
653 |
654 | _, loss_c,g_points_g_c = sess.run([loss_optim,loss,g_points],feed_dict={points_target_num:POINT_NUM_GT_bs,points_input_num:points_input_num_bs,
655 | points_target:point,input_points_3d:sample})
656 |
657 | if(bi%100 == 0):
658 | print('model:',bi,'epoch:',epoch,'loss:',loss_c)
659 | saver.save(sess, os.path.join(OUTPUT_DIR, "model"), global_step=bi)
660 |
661 | saver.save(sess, os.path.join(OUTPUT_DIR, "model"), global_step=bi)
662 | if(a.finetune):
663 | print('finetune')
664 | POINT_NUM_GT_bs = np.array(POINT_NUM_GT).reshape(1,1)
665 | points_input_num_bs = np.array(POINT_NUM).reshape(1,1)
666 | points_all = []
667 | samples_all = []
668 | for epoch in range(1):
669 | print('epoch:',epoch)
670 | sess.run(tf.global_variables_initializer())
671 | start_time = time.time()
672 | checkpoint = tf.train.get_checkpoint_state(OUTPUT_DIR).all_model_checkpoint_paths
673 | print(checkpoint[a.save_idx])
674 |
675 |
676 | saver.restore(sess, checkpoint[a.save_idx])
677 | print(files_path[0] + '.npz')
678 | load_data = np.load(files_path[0] + '.npz')
679 |
680 | points = load_data['sample_near'].reshape(-1,3)
681 | samples = load_data['sample'].reshape(-1,3)
682 | SP_NUM = points.shape[0]
683 | for bi in range(100010):
684 | feature_bs = feature_bs_all[0]
685 | rt = np.random.choice(SP_NUM, POINT_NUM, replace = False)
686 | #rt = random.randint(0,samples.shape[0]-1)
687 | sample = samples[rt,:].reshape(BS, POINT_NUM, 3)
688 | point = points[rt,:].reshape(BS, POINT_NUM, 3)
689 |
690 | feature_bs_t = feature_bs.reshape(BS,POINT_NUM,SHAPE_NUM)
691 | _, loss_c = sess.run([loss_optim_g,loss_g],feed_dict={feature_global:feature_bs_t,points_target_num:POINT_NUM_GT_bs,points_input_num:points_input_num_bs,
692 | points_target_global:point,input_points_3d_global:sample})
693 |
694 | if(bi%100000 == 0):
695 | print('model:',bi,'epoch:',epoch,'loss:',loss_c)
696 | saver.save(sess, os.path.join(OUTPUT_DIR_FINETUNE, "model"), global_step=bi)
697 | #saver.save(sess, os.path.join(OUTPUT_DIR_FINETUNE, "model"), global_step=epoch)
698 |
699 | if(a.test):
700 | """ feature_bs = []
701 | for j in range(vox_size*vox_size):
702 | t = np.zeros(SHAPE_NUM)
703 | t[0] = 1
704 | feature_bs.append(t)
705 | feature_bs = np.asarray(feature_bs)
706 | sdf_c = sess.run([sdf_g],feed_dict={input_points_3d_global:input_points_2d_bs_t,feature_global:feature_bs_t,
707 | points_target_num:POINT_NUM_GT_bs,points_input_num:points_input_num_bs}) """
708 | print('test start')
709 |
710 |
711 | s = np.arange(-bd,bd, (2*bd)/128)
712 |
713 | print(s.shape[0])
714 | vox_size = s.shape[0]
715 | POINT_NUM_GT_bs = np.array(vox_size).reshape(1,1)
716 | points_input_num_bs = np.array(POINT_NUM).reshape(1,1)
717 |
718 |
719 |
720 |
721 | POINT_NUM_GT_bs = np.array(vox_size*vox_size).reshape(1,1)
722 |
723 | sess.run(tf.global_variables_initializer())
724 | checkpoint = tf.train.get_checkpoint_state(OUTPUT_DIR_FINETUNE).all_model_checkpoint_paths
725 | print(checkpoint[a.save_idx])
726 | saver.restore(sess, checkpoint[a.save_idx])
727 |
728 | #saver.restore(sess, a.out_dir + 'model-0')
729 |
730 | point_sparse = np.load(a.data_dir + a.input_ply_file + '.npz')['pointcloud_s']
731 |
732 |
733 |
734 | input_points_2d_bs = []
735 |
736 | bd_max = [np.max(point_sparse[:,0]), np.max(point_sparse[:,1]), np.max(point_sparse[:,2])]
737 | bd_min = [np.min(point_sparse[:,0]), np.min(point_sparse[:,1]),np.min(point_sparse[:,2])]
738 | bd_max = np.asarray(bd_max) + 0.05
739 | bd_min = np.asarray(bd_min) - 0.05
740 | sx = np.arange(bd_min[0], bd_max[0], (bd_max[0] - bd_min[0])/vox_size)
741 | sy = np.arange(bd_min[1], bd_max[1], (bd_max[1] - bd_min[1])/vox_size)
742 | sz = np.arange(bd_min[2], bd_max[2], (bd_max[2] - bd_min[2])/vox_size)
743 | print(bd_max)
744 | print(bd_min)
745 | for i in sx:
746 | for j in sy:
747 | for k in sz:
748 | input_points_2d_bs.append(np.asarray([i,j,k]))
749 | input_points_2d_bs = np.asarray(input_points_2d_bs)
750 | input_points_2d_bs = input_points_2d_bs.reshape((vox_size,vox_size,vox_size,3))
751 |
752 | vox = []
753 | feature_bs = []
754 | for j in range(vox_size*vox_size):
755 | t = np.zeros(SHAPE_NUM)
756 | t[0] = 1
757 | feature_bs.append(t)
758 | feature_bs = np.asarray(feature_bs)
759 | for i in range(input_points_2d_bs.shape[0]):
760 |
761 | input_points_2d_bs_t = input_points_2d_bs[i,:,:,:]
762 | input_points_2d_bs_t = input_points_2d_bs_t.reshape(BS, vox_size*vox_size, 3)
763 | feature_bs_t = feature_bs.reshape(BS,vox_size*vox_size,SHAPE_NUM)
764 | sdf_c = sess.run([sdf_g],feed_dict={input_points_3d_global:input_points_2d_bs_t,feature_global:feature_bs_t,
765 | points_target_num:POINT_NUM_GT_bs,points_input_num:points_input_num_bs})
766 | vox.append(sdf_c)
767 |
768 |
769 | vox = np.asarray(vox)
770 | #vis_single_points(moved_points, 'moved_points.ply')
771 | #print('vox',np.min(vox),np.max(vox),np.mean(vox))
772 | vox = vox.reshape((vox_size,vox_size,vox_size))
773 | vox_max = np.max(vox.reshape((-1)))
774 | vox_min = np.min(vox.reshape((-1)))
775 | print('max_min:',vox_max,vox_min,np.mean(vox))
776 |
777 | #threshs = [0.001,0.0015,0.002,0.0025,0.005]
778 | threshs = [0.005]
779 | for thresh in threshs:
780 | print(np.sum(vox>thresh),np.sum(vox<thresh))
781 | # reconstruct the mesh at this threshold (the original may also have padded vox here)
782 | vertices, triangles, normals, values = marching_cubes_lewiner(vox, thresh)
783 | if(np.sum(vox>0.0)>np.sum(vox<0.0)):
790 | triangles_t = []
791 | for it in range(triangles.shape[0]):
792 | tt = np.array([triangles[it,2],triangles[it,1],triangles[it,0]])
793 | triangles_t.append(tt)
794 | triangles_t = np.asarray(triangles_t)
795 | else:
796 | triangles_t = triangles
797 | triangles_t = np.asarray(triangles_t)
798 |
799 | vertices -= 0.5
800 | # Undo padding
801 | vertices -= 1
802 | # Normalize to bounding box
803 | vertices /= np.array([vox_size-1, vox_size-1, vox_size-1])
804 | vertices = (bd_max-bd_min) * vertices + bd_min
805 | mesh = trimesh.Trimesh(vertices, triangles_t,
806 | vertex_normals=None,
807 | process=False)
808 |
809 |
810 | loc_data = np.load(a.data_dir + a.input_ply_file + '.npz')
811 | vertices = vertices * loc_data['scal'] + loc_data['trans']
812 | mesh = trimesh.Trimesh(vertices, triangles_t,
813 | vertex_normals=None,
814 | process=False)
815 | mesh.export(OUTPUT_DIR_FINETUNE + '/PCL_' + a.input_ply_file + '_'+ str(thresh) + '.off')
816 |
817 |
818 |
819 |
820 |
821 |
--------------------------------------------------------------------------------
/tf.yaml:
--------------------------------------------------------------------------------
1 | name: tf
2 | channels:
3 | - https://conda.anaconda.org/anaconda
4 | - https://conda.anaconda.org/conda-forge
5 | - defaults
6 | dependencies:
7 | - _libgcc_mutex=0.1=conda_forge
8 | - _openmp_mutex=4.5=1_gnu
9 | - blas=1.0=mkl
10 | - bzip2=1.0.8=h7b6447c_0
11 | - c-ares=1.17.1=h27cfd23_0
12 | - ca-certificates=2022.2.1=h06a4308_0
13 | - cairo=1.16.0=hf32fb01_1
14 | - certifi=2021.5.30=py36h06a4308_0
15 | - cloudpickle=2.0.0=pyhd3eb1b0_0
16 | - cmake=3.19.6=h973ab73_0
17 | - cycler=0.10.0=py36_0
18 | - cytoolz=0.11.0=py36h7b6447c_0
19 | - dask-core=1.1.4=py36_1
20 | - dbus=1.13.18=hb2f20db_0
21 | - expat=2.4.1=h2531618_2
22 | - ffmpeg=4.0=hcdf2ecd_0
23 | - fontconfig=2.13.1=h6c09931_0
24 | - freeglut=3.0.0=hf484d3e_5
25 | - freetype=2.10.4=h5ab3b9f_0
26 | - glib=2.69.0=h5202010_0
27 | - graphite2=1.3.14=h23475e2_0
28 | - gst-plugins-base=1.14.0=h8213a91_2
29 | - gstreamer=1.14.0=h28cd5cc_2
30 | - harfbuzz=1.8.8=hffaf4a1_0
31 | - hdf5=1.10.2=hba1933b_1
32 | - icu=58.2=he6710b0_3
33 | - imageio=2.9.0=pyhd3eb1b0_0
34 | - intel-openmp=2020.2=254
35 | - jasper=2.0.14=h07fcdf6_1
36 | - jpeg=9b=h024ee3a_2
37 | - kiwisolver=1.3.1=py36h2531618_0
38 | - krb5=1.19.2=hac12032_0
39 | - lcms2=2.12=h3be6417_0
40 | - ld_impl_linux-64=2.35.1=h7274673_9
41 | - libcurl=7.78.0=h0b77cf5_0
42 | - libedit=3.1.20210714=h7f8727e_0
43 | - libev=4.33=h7b6447c_0
44 | - libffi=3.3=he6710b0_2
45 | - libgcc-ng=11.1.0=hc902ee8_8
46 | - libgfortran-ng=7.3.0=hdf63c60_0
47 | - libglu=9.0.0=hf484d3e_1
48 | - libgomp=11.1.0=hc902ee8_8
49 | - libnghttp2=1.41.0=hf8bcb03_2
50 | - libopencv=3.4.2=hb342d67_1
51 | - libopus=1.3.1=h7b6447c_0
52 | - libpng=1.6.37=hbc83047_0
53 | - libssh2=1.9.0=h1ba5d50_1
54 | - libstdcxx-ng=9.3.0=hd4cf53a_17
55 | - libtiff=4.2.0=h85742a9_0
56 | - libuuid=1.0.3=h1bed415_2
57 | - libuv=1.40.0=h7b6447c_0
58 | - libvpx=1.7.0=h439df22_0
59 | - libwebp-base=1.2.0=h27cfd23_0
60 | - libxcb=1.14=h7b6447c_0
61 | - libxml2=2.9.10=hb55368b_3
62 | - llvm-openmp=8.0.1=hc9558a2_0
63 | - lz4-c=1.9.3=h2531618_0
64 | - matplotlib=3.3.4=py36h06a4308_0
65 | - matplotlib-base=3.3.4=py36h62a2d02_0
66 | - mkl=2019.4=243
67 | - mkl-service=2.3.0=py36he904b0f_0
68 | - mkl_fft=1.2.0=py36h23d657b_0
69 | - mkl_random=1.1.0=py36hd6b4f25_0
70 | - ncurses=6.2=he6710b0_1
71 | - networkx=2.2=py36_1
72 | - numpy-base=1.19.1=py36hfa32c7d_0
73 | - olefile=0.46=py36_0
74 | - opencv=3.4.2=py36h6fd60c2_1
75 | - openmp=8.0.1=0
76 | - openssl=1.1.1n=h7f8727e_0
77 | - pcre=8.45=h295c915_0
78 | - pip=21.1.3=py36h06a4308_0
79 | - pixman=0.40.0=h7b6447c_0
80 | - plyfile=0.7.4=pyhd8ed1ab_0
81 | - point_cloud_utils=0.18.0=py36h355b2fd_1
82 | - py-opencv=3.4.2=py36hb342d67_1
83 | - pyqt=5.9.2=py36h05f1152_2
84 | - python=3.6.12=hcff3b4d_2
85 | - python-dateutil=2.8.2=pyhd3eb1b0_0
86 | - python_abi=3.6=2_cp36m
87 | - pytz=2021.3=pyhd3eb1b0_0
88 | - pywavelets=1.1.1=py36h7b6447c_2
89 | - qt=5.9.7=h5867ecd_1
90 | - readline=8.1=h27cfd23_0
91 | - rhash=1.4.1=h3c74f83_1
92 | - scikit-image=0.15.0=py36hb3f55d8_2
93 | - scipy=1.5.2=py36h0b6359f_0
94 | - setuptools=52.0.0=py36h06a4308_0
95 | - sip=4.19.8=py36hf484d3e_0
96 | - six=1.15.0=py_0
97 | - sqlite=3.36.0=hc218d9a_0
98 | - tk=8.6.10=hbc83047_0
99 | - toolz=0.11.2=pyhd3eb1b0_0
100 | - tornado=6.1=py36h27cfd23_0
101 | - wheel=0.36.2=pyhd3eb1b0_0
102 | - xz=5.2.5=h7b6447c_0
103 | - zlib=1.2.11=h7b6447c_3
104 | - zstd=1.4.9=haebb681_0
105 | - pip:
106 | - absl-py==0.13.0
107 | - addict==2.4.0
108 | - anyio==3.3.4
109 | - argon2-cffi==21.1.0
110 | - astunparse==1.6.3
111 | - async-generator==1.10
112 | - attrs==21.2.0
113 | - babel==2.9.1
114 | - backcall==0.2.0
115 | - bleach==4.1.0
116 | - cached-property==1.5.2
117 | - cachetools==4.2.2
118 | - cffi==1.15.0
119 | - charset-normalizer==2.0.3
120 | - contextvars==2.4
121 | - cython==0.29.24
122 | - dataclasses==0.8
123 | - decorator==5.1.0
124 | - defusedxml==0.7.1
125 | - deprecation==2.1.0
126 | - entrypoints==0.3
127 | - flatbuffers==1.12
128 | - gast==0.4.0
129 | - google-auth==1.34.0
130 | - google-auth-oauthlib==0.4.4
131 | - google-pasta==0.2.0
132 | - grpcio==1.34.1
133 | - h5py==3.1.0
134 | - idna==3.2
135 | - immutables==0.16
136 | - importlib-metadata==4.6.1
137 | - ipykernel==5.5.6
138 | - ipython==7.16.1
139 | - ipython-genutils==0.2.0
140 | - ipywidgets==7.6.5
141 | - jedi==0.18.0
142 | - jinja2==3.0.2
143 | - joblib==1.1.0
144 | - json5==0.9.6
145 | - jsonschema==4.0.0
146 | - jupyter-client==7.0.6
147 | - jupyter-core==4.9.1
148 | - jupyter-packaging==0.10.6
149 | - jupyter-server==1.11.2
150 | - jupyterlab==3.2.2
151 | - jupyterlab-pygments==0.1.2
152 | - jupyterlab-server==2.8.2
153 | - jupyterlab-widgets==1.0.2
154 | - keras-nightly==2.5.0.dev2021032900
155 | - keras-preprocessing==1.1.2
156 | - markdown==3.3.4
157 | - markupsafe==2.0.1
158 | - mistune==0.8.4
159 | - nbclassic==0.3.4
160 | - nbclient==0.5.4
161 | - nbconvert==6.0.7
162 | - nbformat==5.1.3
163 | - nest-asyncio==1.5.1
164 | - ninja==1.10.2.2
165 | - notebook==6.4.5
166 | - numpy==1.19.5
167 | - oauthlib==3.1.1
168 | - open3d==0.13.0
169 | - opt-einsum==3.3.0
170 | - packaging==21.2
171 | - pandas==1.1.5
172 | - pandocfilters==1.5.0
173 | - parso==0.8.2
174 | - pexpect==4.8.0
175 | - pickleshare==0.7.5
176 | - pillow==8.3.1
177 | - prometheus-client==0.12.0
178 | - prompt-toolkit==3.0.22
179 | - protobuf==3.17.3
180 | - ptyprocess==0.7.0
181 | - pyasn1==0.4.8
182 | - pyasn1-modules==0.2.8
183 | - pycparser==2.21
184 | - pygments==2.10.0
185 | - pymcubes==0.1.2
186 | - pyparsing==2.4.7
187 | - pyrsistent==0.18.0
188 | - pyyaml==6.0
189 | - pyzmq==22.3.0
190 | - requests==2.26.0
191 | - requests-oauthlib==1.3.0
192 | - rsa==4.7.2
193 | - scikit-learn==0.24.2
194 | - send2trash==1.8.0
195 | - sniffio==1.2.0
196 | - tensorboard==2.5.0
197 | - tensorboard-data-server==0.6.1
198 | - tensorboard-plugin-wit==1.8.0
199 | - tensorflow==2.5.0
200 | - tensorflow-estimator==2.5.0
201 | - termcolor==1.1.0
202 | - terminado==0.12.1
203 | - testpath==0.5.0
204 | - threadpoolctl==3.0.0
205 | - tomlkit==0.7.2
206 | - tqdm==4.62.3
207 | - traitlets==4.3.3
208 | - trimesh==3.9.25
209 | - typing-extensions==3.7.4.3
210 | - urllib3==1.26.6
211 | - wcwidth==0.2.5
212 | - webencodings==0.5.1
213 | - websocket-client==1.2.1
214 | - werkzeug==2.0.1
215 | - widgetsnbextension==3.5.2
216 | - wrapt==1.12.1
217 | - zipp==3.5.0
218 | prefix: /home/mabaorui/anaconda3/envs/tf
219 |
--------------------------------------------------------------------------------