├── README.md
├── code
│   ├── README.md
│   ├── data_provider.py
│   ├── data_provider.pyc
│   ├── main.py
│   ├── model_generator2_2new6.py
│   ├── model_utils.py
│   ├── tf_ops
│   │   ├── CD
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── makefile
│   │   │   ├── render_balls_so.cpp
│   │   │   ├── render_balls_so.so
│   │   │   ├── tf_cd_compile.sh
│   │   │   ├── tf_cd_compile_abi.sh
│   │   │   ├── tf_nndistance.cpp
│   │   │   ├── tf_nndistance.py
│   │   │   ├── tf_nndistance.pyc
│   │   │   ├── tf_nndistance_g.cu
│   │   │   ├── tf_nndistance_g.cu.o
│   │   │   └── tf_nndistance_so.so
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── compile.sh
│   │   ├── emd
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── tf_auctionmatch.cpp
│   │   │   ├── tf_auctionmatch.py
│   │   │   ├── tf_auctionmatch.pyc
│   │   │   ├── tf_auctionmatch_compile.sh
│   │   │   ├── tf_auctionmatch_compile_abi.sh
│   │   │   ├── tf_auctionmatch_g.cu
│   │   │   ├── tf_auctionmatch_g.cu.o
│   │   │   └── tf_auctionmatch_so.so
│   │   ├── grouping
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── compile.sh
│   │   │   ├── query_ball_point.cpp
│   │   │   ├── query_ball_point.cu
│   │   │   ├── query_ball_point_block.cu
│   │   │   ├── query_ball_point_grid.cu
│   │   │   ├── selection_sort.cpp
│   │   │   ├── selection_sort.cu
│   │   │   ├── selection_sort_const.cu
│   │   │   ├── test_knn.py
│   │   │   ├── tf_grouping.cpp
│   │   │   ├── tf_grouping.py
│   │   │   ├── tf_grouping.pyc
│   │   │   ├── tf_grouping_compile.sh
│   │   │   ├── tf_grouping_compile_abi.sh
│   │   │   ├── tf_grouping_g.cu
│   │   │   ├── tf_grouping_g.cu.o
│   │   │   ├── tf_grouping_op_test.py
│   │   │   └── tf_grouping_so.so
│   │   ├── interpolation
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── interpolate.cpp
│   │   │   ├── tf_interpolate.cpp
│   │   │   ├── tf_interpolate.py
│   │   │   ├── tf_interpolate.pyc
│   │   │   ├── tf_interpolate_compile.sh
│   │   │   ├── tf_interpolate_compile_abi.sh
│   │   │   ├── tf_interpolate_op_test.py
│   │   │   ├── tf_interpolate_so.so
│   │   │   └── visu_interpolation.py
│   │   └── sampling
│   │       ├── __init__.py
│   │       ├── __init__.pyc
│   │       ├── tf_sampling.cpp
│   │       ├── tf_sampling.py
│   │       ├── tf_sampling.pyc
│   │       ├── tf_sampling_compile.sh
│   │       ├── tf_sampling_compile_abi.sh
│   │       ├── tf_sampling_g.cu
│   │       ├── tf_sampling_g.cu.o
│   │       └── tf_sampling_so.so
│   └── utils
│       ├── __init__.py
│       ├── __init__.pyc
│       ├── data_prep_util.py
│       ├── eulerangles.py
│       ├── eulerangles.pyc
│       ├── modelnet_data_prep.py
│       ├── modelnet_data_prep.pyc
│       ├── off2obj.py
│       ├── pc_util.py
│       ├── pc_util.pyc
│       ├── plyfile.py
│       ├── plyfile.pyc
│       ├── pointnet_util.py
│       ├── pointnet_util.pyc
│       ├── provider.py
│       ├── provider.pyc
│       ├── show3d.py
│       ├── show3d.pyc
│       ├── tf_util.py
│       ├── tf_util.pyc
│       ├── tf_util2.py
│       ├── tf_util2.pyc
│       └── write_result2html.py
├── data
│   └── test_data
│       └── our_collected_data
│           └── MC_5k
│               ├── Icosahedron.xyz
│               ├── Octahedron.xyz
│               ├── camel.xyz
│               ├── casting.xyz
│               ├── chair.xyz
│               ├── coverrear_Lp.xyz
│               ├── cow.xyz
│               ├── duck.xyz
│               ├── eight.xyz
│               ├── elephant.xyz
│               ├── elk.xyz
│               ├── fandisk.xyz
│               ├── genus3.xyz
│               ├── horse.xyz
│               ├── kitten.xyz
│               ├── moai.xyz
│               ├── pig.xyz
│               ├── quadric.xyz
│               ├── sculpt.xyz
│               └── star.xyz
├── evaluation_code
│   ├── CMakeLists.txt
│   ├── evaluation.cpp
│   ├── nicolo.off
│   └── nicolo.xyz
├── h5_data
│   └── README.md
├── model
│   └── README.md
├── prepare_data
│   ├── MeshSegmentation.zip
│   └── Poisson_sample.tar
└── supplementary material
    └── supp.pdf

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# PU-Net: Point Cloud Upsampling Network
by [Lequan Yu](http://appsrv.cse.cuhk.edu.hk/~lqyu/), Xianzhi Li, [Chi-Wing Fu](http://www.cse.cuhk.edu.hk/~cwfu/), [Daniel Cohen-Or](https://www.cs.tau.ac.il/~dcor/), [Pheng-Ann Heng](http://www.cse.cuhk.edu.hk/~pheng/).

### Introduction

This repository is for our CVPR 2018 paper '[PU-Net: Point Cloud Upsampling Network](https://arxiv.org/abs/1801.06761)'. The code is modified from [PointNet++](https://github.com/charlesq34/pointnet2) and [PointSetGeneration](https://github.com/fanhqme/PointSetGeneration).

### Installation
This repository is based on TensorFlow and the TF operators from PointNet++. Therefore, you need to install TensorFlow and compile the TF operators.

For installing TensorFlow, please follow the official instructions [here](https://www.tensorflow.org/install/install_linux). The code is tested under TF1.3 (higher versions should also work) and Python 2.7 on Ubuntu 16.04.

For compiling the TF operators, please check `tf_xxx_compile.sh` under each op subfolder in the `code/tf_ops` folder. Note that you may need to update the `nvcc`, `python`, and TensorFlow include/library paths. If the ops do not compile, you may also need to remove the `-D_GLIBCXX_USE_CXX11_ABI=0` flag from the `g++` command.

To compile the operators with TF version >= 1.4, you need to modify the compile scripts slightly.

First, find the TensorFlow include and library paths:

    TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
    TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')

Then, add the flags `-I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework` to the `g++` commands. You can refer to [tf_cd_compile.sh](https://github.com/yulequan/PU-Net/blob/master/code/tf_ops/CD/tf_cd_compile.sh).

### Note
When running the code, if you get an `undefined symbol: _ZTIN10tensorflow8OpKernelE` error, you need to recompile the TF operators. If you have already added `-I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework` but still get a `cannot find -ltensorflow_framework` error, please use `locate tensorflow_framework` to find the tensorflow_framework library and make sure its path is in `$TF_LIB`.

### Usage

1. Clone the repository:

   ```shell
   git clone https://github.com/yulequan/PU-Net.git
   cd PU-Net
   ```
2. Compile the TF operators:
   Follow the information above to compile the TF operators.

3. Train the model:
   First, download the training patches in HDF5 format from [GoogleDrive](https://drive.google.com/file/d/1wMtNGvliK_pUTogfzMyrz57iDb_jSQR8/view?usp=sharing) and put them in the folder `h5_data`.
   Then run:
   ```shell
   cd code
   python main.py --phase train
   ```

4. Evaluate the model:
   First, download the pretrained model from [GoogleDrive](https://drive.google.com/file/d/1PWZb0d8QbmEAuYtJunQ9Z30VPgdU6rdd/view?usp=sharing), extract it, and put it in the folder `model`.
   Then run:
   ```shell
   cd code
   python main.py --phase test --log_dir ../model/generator2_new6
   ```
   You will find the input and output results in the folder `../model/generator2_new6/result`.

5. The training and testing mesh files can be downloaded from [GoogleDrive](https://drive.google.com/file/d/1R21MD1O6q8E7ANui8FR0MaABkKc30PG4/view?usp=sharing).

Note: In this version, we treat the whole input point cloud as a single input. If your input point cloud contains many points, you should divide it into patches and treat each patch as a single input, as sketched below (see also our follow-up work [EC-Net](https://github.com/yulequan/EC-Net)).
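For reference, here is a minimal NumPy sketch of one way to do this splitting. The farthest-point seeding and the `num_patches`/`patch_size` defaults are illustrative only, not the exact scheme used in our code:

```python
import numpy as np

def split_into_patches(points, num_patches=32, patch_size=1024):
    """Split an (N, 3) point array into num_patches overlapping patches."""
    # spread the seeds over the shape with a simple farthest-point strategy
    seeds = [np.random.randint(len(points))]
    dist = np.linalg.norm(points - points[seeds[0]], axis=1)
    for _ in range(num_patches - 1):
        seeds.append(int(np.argmax(dist)))
        dist = np.minimum(dist, np.linalg.norm(points - points[seeds[-1]], axis=1))
    # each patch is the patch_size nearest neighbors of one seed
    return [points[np.argsort(np.linalg.norm(points - points[s], axis=1))[:patch_size]]
            for s in seeds]
```

Each patch can then be fed through the network separately, and the upsampled patches merged (e.g. with a farthest-point re-sampling) to form the final output.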
### Evaluation code
We provide the code to calculate the NUC metric in the `evaluation_code` folder. In order to use it, you need to install the CGAL library; please refer to [this link](https://www.cgal.org/download/linux.html) for installation instructions.
Then:
```shell
cd evaluation_code
cmake .
make
./evaluation nicolo.off nicolo.xyz
```
The first argument (`nicolo.off`) is the mesh, and the second (`nicolo.xyz`) is the predicted points.

After running this program, the distance from each predicted point to the surface is written to `nicolo_point2mesh_distance.xyz`, and the density of each disk (n_i/N) is written to `nicolo_density.xyz`.

## Citation

If PU-Net is useful for your research, please consider citing:

    @inproceedings{yu2018pu,
         title={PU-Net: Point Cloud Upsampling Network},
         author={Yu, Lequan and Li, Xianzhi and Fu, Chi-Wing and Cohen-Or, Daniel and Heng, Pheng-Ann},
         booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
         year = {2018}
    }

### Questions

Please contact lqyu@cse.cuhk.edu.hk

--------------------------------------------------------------------------------
/code/README.md:
--------------------------------------------------------------------------------
1 | # PointSR
--------------------------------------------------------------------------------
/code/data_provider.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/data_provider.pyc
--------------------------------------------------------------------------------
/code/main.py:
--------------------------------------------------------------------------------
1 | import argparse 2 | import os 3 | import time 4 | import numpy as np 5 | import tensorflow as tf 6 | from tqdm import tqdm 7 | from glob import glob 8 | import socket 9 | from matplotlib import pyplot as plt 10 | import model_generator2_2new6 as MODEL_GEN 11 | import model_utils 12 | import data_provider 13 | from utils import pc_util 14 | 15 | parser = argparse.ArgumentParser() 16 | parser.add_argument('--phase', default='test', help='train or test [default: train]') 17 | parser.add_argument('--gpu', default='0', help='GPU to use [default: GPU 0]') 18 | parser.add_argument('--log_dir', default='../model/generator2_new6', help='Log dir [default: log]') 19 | parser.add_argument('--num_point', type=int, default=1024,help='Point Number [1024/2048] [default: 1024]') 20 | parser.add_argument('--up_ratio', type=int, default=4, help='Upsampling Ratio [default: 2]') 21 | parser.add_argument('--max_epoch', type=int, default=120, help='Epoch to run [default: 500]') 22 | parser.add_argument('--batch_size', type=int, default=28, help='Batch Size during training [default: 32]') 23 | parser.add_argument('--learning_rate', type=float, default=0.001) 24 | 25 | ASSIGN_MODEL_PATH=None 26 | USE_DATA_NORM = True 27 | USE_RANDOM_INPUT = True 28 | USE_REPULSION_LOSS = True 29 | 30 | FLAGS = parser.parse_args() 31 | PHASE = FLAGS.phase 32 | GPU_INDEX = FLAGS.gpu 33 | BATCH_SIZE = FLAGS.batch_size 34 | NUM_POINT = FLAGS.num_point 35 | UP_RATIO = FLAGS.up_ratio 36 | MAX_EPOCH = FLAGS.max_epoch 37 | BASE_LEARNING_RATE = FLAGS.learning_rate 38 | MODEL_DIR = FLAGS.log_dir 39 | 40 | print socket.gethostname() 41 | print FLAGS 42 |
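# select the visible GPU before the TF session is created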
os.environ['CUDA_VISIBLE_DEVICES'] = GPU_INDEX 43 | 44 | def log_string(out_str): 45 | global LOG_FOUT 46 | LOG_FOUT.write(out_str) 47 | LOG_FOUT.flush() 48 | 49 | def train(assign_model_path=None): 50 | is_training = True 51 | bn_decay = 0.95 52 | step = tf.Variable(0,trainable=False) 53 | learning_rate = BASE_LEARNING_RATE 54 | tf.summary.scalar('bn_decay', bn_decay) 55 | tf.summary.scalar('learning_rate', learning_rate) 56 | 57 | # get placeholder 58 | pointclouds_pl, pointclouds_gt, pointclouds_gt_normal, pointclouds_radius = MODEL_GEN.placeholder_inputs(BATCH_SIZE, NUM_POINT, UP_RATIO) 59 | 60 | #create the generator model 61 | pred,_ = MODEL_GEN.get_gen_model(pointclouds_pl, is_training, scope='generator',bradius=pointclouds_radius, 62 | reuse=None,use_normal=False, use_bn=False,use_ibn=False, 63 | bn_decay=bn_decay,up_ratio=UP_RATIO) 64 | 65 | #get emd loss 66 | gen_loss_emd,matchl_out = model_utils.get_emd_loss(pred, pointclouds_gt, pointclouds_radius) 67 | 68 | #get repulsion loss 69 | if USE_REPULSION_LOSS: 70 | gen_repulsion_loss = model_utils.get_repulsion_loss4(pred) 71 | tf.summary.scalar('loss/gen_repulsion_loss', gen_repulsion_loss) 72 | else: 73 | gen_repulsion_loss =0.0 74 | 75 | #get total loss function 76 | pre_gen_loss = 100 * gen_loss_emd + gen_repulsion_loss + tf.losses.get_regularization_loss() 77 | 78 | # create pre-generator ops 79 | gen_update_ops = [op for op in tf.get_collection(tf.GraphKeys.UPDATE_OPS) if op.name.startswith("generator")] 80 | gen_tvars = [var for var in tf.trainable_variables() if var.name.startswith("generator")] 81 | 82 | with tf.control_dependencies(gen_update_ops): 83 | pre_gen_train = tf.train.AdamOptimizer(learning_rate,beta1=0.9).minimize(pre_gen_loss,var_list=gen_tvars, 84 | colocate_gradients_with_ops=True, 85 | global_step=step) 86 | # merge summary and add pointclouds summary 87 | tf.summary.scalar('loss/gen_emd', gen_loss_emd) 88 | tf.summary.scalar('loss/regularation', tf.losses.get_regularization_loss()) 89 | tf.summary.scalar('loss/pre_gen_total', pre_gen_loss) 90 | pretrain_merged = tf.summary.merge_all() 91 | 92 | pointclouds_image_input = tf.placeholder(tf.float32, shape=[None, 500, 1500, 1]) 93 | pointclouds_input_summary = tf.summary.image('pointcloud_input', pointclouds_image_input, max_outputs=1) 94 | pointclouds_image_pred = tf.placeholder(tf.float32, shape=[None, 500, 1500, 1]) 95 | pointclouds_pred_summary = tf.summary.image('pointcloud_pred', pointclouds_image_pred, max_outputs=1) 96 | pointclouds_image_gt = tf.placeholder(tf.float32, shape=[None, 500, 1500, 1]) 97 | pointclouds_gt_summary = tf.summary.image('pointcloud_gt', pointclouds_image_gt, max_outputs=1) 98 | image_merged = tf.summary.merge([pointclouds_input_summary,pointclouds_pred_summary,pointclouds_gt_summary]) 99 | 100 | # Create a session 101 | config = tf.ConfigProto() 102 | config.gpu_options.allow_growth = True 103 | config.allow_soft_placement = True 104 | config.log_device_placement = False 105 | with tf.Session(config=config) as sess: 106 | train_writer = tf.summary.FileWriter(os.path.join(MODEL_DIR, 'train'), sess.graph) 107 | init = tf.global_variables_initializer() 108 | sess.run(init) 109 | ops = {'pointclouds_pl': pointclouds_pl, 110 | 'pointclouds_gt': pointclouds_gt, 111 | 'pointclouds_gt_normal':pointclouds_gt_normal, 112 | 'pointclouds_radius': pointclouds_radius, 113 | 'pointclouds_image_input':pointclouds_image_input, 114 | 'pointclouds_image_pred': pointclouds_image_pred, 115 | 'pointclouds_image_gt': pointclouds_image_gt, 116 | 
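# graph handles fetched/fed by train_one_epoch below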
'pretrain_merged':pretrain_merged, 117 | 'image_merged': image_merged, 118 | 'gen_loss_emd': gen_loss_emd, 119 | 'pre_gen_train':pre_gen_train, 120 | 'pred': pred, 121 | 'step': step, 122 | } 123 | #restore the model 124 | saver = tf.train.Saver(max_to_keep=6) 125 | restore_epoch, checkpoint_path = model_utils.pre_load_checkpoint(MODEL_DIR) 126 | global LOG_FOUT 127 | if restore_epoch==0: 128 | LOG_FOUT = open(os.path.join(MODEL_DIR, 'log_train.txt'), 'w') 129 | LOG_FOUT.write(str(socket.gethostname()) + '\n') 130 | LOG_FOUT.write(str(FLAGS) + '\n') 131 | else: 132 | LOG_FOUT = open(os.path.join(MODEL_DIR, 'log_train.txt'), 'a') 133 | saver.restore(sess,checkpoint_path) 134 | 135 | ###assign the generator with another model file 136 | if assign_model_path is not None: 137 | print "Load pre-train model from %s"%(assign_model_path) 138 | assign_saver = tf.train.Saver(var_list=[var for var in tf.trainable_variables() if var.name.startswith("generator")]) 139 | assign_saver.restore(sess, assign_model_path) 140 | 141 | ##read data 142 | input_data, gt_data, data_radius, _ = data_provider.load_patch_data(skip_rate=1, num_point=NUM_POINT, norm=USE_DATA_NORM, 143 | use_randominput = USE_RANDOM_INPUT) 144 | 145 | fetchworker = data_provider.Fetcher(input_data,gt_data,data_radius,BATCH_SIZE,NUM_POINT,USE_RANDOM_INPUT,USE_DATA_NORM) 146 | fetchworker.start() 147 | for epoch in tqdm(range(restore_epoch,MAX_EPOCH+1),ncols=55): 148 | log_string('**** EPOCH %03d ****\t' % (epoch)) 149 | train_one_epoch(sess, ops, fetchworker, train_writer) 150 | if epoch % 20 == 0: 151 | saver.save(sess, os.path.join(MODEL_DIR, "model"), global_step=epoch) 152 | fetchworker.shutdown() 153 | 154 | def train_one_epoch(sess, ops, fetchworker, train_writer): 155 | loss_sum = [] 156 | fetch_time = 0 157 | for batch_idx in range(fetchworker.num_batches): 158 | start = time.time() 159 | batch_input_data, batch_data_gt, radius =fetchworker.fetch() 160 | end = time.time() 161 | fetch_time+= end-start 162 | feed_dict = {ops['pointclouds_pl']: batch_input_data, 163 | ops['pointclouds_gt']: batch_data_gt[:,:,0:3], 164 | ops['pointclouds_gt_normal']:batch_data_gt[:,:,0:3], 165 | ops['pointclouds_radius']: radius} 166 | summary,step, _, pred_val,gen_loss_emd = sess.run( [ops['pretrain_merged'],ops['step'],ops['pre_gen_train'], 167 | ops['pred'], ops['gen_loss_emd']], feed_dict=feed_dict) 168 | train_writer.add_summary(summary, step) 169 | loss_sum.append(gen_loss_emd) 170 | 171 | if step%30 == 0: 172 | pointclouds_image_input = pc_util.point_cloud_three_views(batch_input_data[0,:,0:3]) 173 | pointclouds_image_input = np.expand_dims(np.expand_dims(pointclouds_image_input,axis=-1),axis=0) 174 | pointclouds_image_pred = pc_util.point_cloud_three_views(pred_val[0, :, :]) 175 | pointclouds_image_pred = np.expand_dims(np.expand_dims(pointclouds_image_pred, axis=-1), axis=0) 176 | pointclouds_image_gt = pc_util.point_cloud_three_views(batch_data_gt[0, :, 0:3]) 177 | pointclouds_image_gt = np.expand_dims(np.expand_dims(pointclouds_image_gt, axis=-1), axis=0) 178 | feed_dict ={ops['pointclouds_image_input']:pointclouds_image_input, 179 | ops['pointclouds_image_pred']: pointclouds_image_pred, 180 | ops['pointclouds_image_gt']: pointclouds_image_gt, 181 | } 182 | summary = sess.run(ops['image_merged'],feed_dict) 183 | train_writer.add_summary(summary,step) 184 | 185 | loss_sum = np.asarray(loss_sum) 186 | log_string('step: %d mean gen_loss_emd: %f\n' % (step, round(loss_sum.mean(),4))) 187 | print 'read data time: %s mean gen_loss_emd: %f' % 
(round(fetch_time,4), round(loss_sum.mean(),4)) 188 | 189 | 190 | def prediction_whole_model(data_folder=None,show=False,use_normal=False): 191 | data_folder = '../data/test_data/our_collected_data/MC_5k' 192 | phase = data_folder.split('/')[-2]+data_folder.split('/')[-1] 193 | save_path = os.path.join(MODEL_DIR, 'result/' + phase) 194 | 195 | if not os.path.exists(save_path): 196 | os.makedirs(save_path) 197 | samples = glob(data_folder + "/*.xyz") 198 | samples.sort(reverse=True) 199 | input = np.loadtxt(samples[0]) 200 | 201 | if use_normal: 202 | pointclouds_ipt = tf.placeholder(tf.float32, shape=(1, input.shape[0], 6)) 203 | else: 204 | pointclouds_ipt = tf.placeholder(tf.float32, shape=(1, input.shape[0], 3)) 205 | pred, _ = MODEL_GEN.get_gen_model(pointclouds_ipt, is_training=False, scope='generator', bradius=1.0, 206 | reuse=None, use_normal=use_normal, use_bn=False, use_ibn=False, bn_decay=0.95, up_ratio=UP_RATIO) 207 | saver = tf.train.Saver() 208 | _, restore_model_path = model_utils.pre_load_checkpoint(MODEL_DIR) 209 | print restore_model_path 210 | 211 | config = tf.ConfigProto() 212 | config.gpu_options.allow_growth = True 213 | config.allow_soft_placement = True 214 | with tf.Session(config=config) as sess: 215 | saver.restore(sess, restore_model_path) 216 | samples = glob(data_folder+"/*.xyz") 217 | samples.sort() 218 | total_time = 0 219 | for i,item in enumerate(samples): 220 | input = np.loadtxt(item) 221 | gt = input 222 | 223 | # input = data_provider.jitter_perturbation_point_cloud(np.expand_dims(input,axis=0),sigma=0.003,clip=0.006) 224 | input = np.expand_dims(input, axis=0) 225 | 226 | if not use_normal: 227 | input = input[:,:,0:3] 228 | gt = gt[:,0:3] 229 | print item, input.shape 230 | 231 | start_time = time.time() 232 | pred_pl = sess.run(pred, feed_dict={pointclouds_ipt: input}) 233 | total_time +=time.time()-start_time 234 | norm_pl = np.zeros_like(pred_pl) 235 | 236 | ##--------------visualize predicted point cloud---------------------- 237 | path = os.path.join(save_path,item.split('/')[-1]) 238 | if show: 239 | f,axis = plt.subplots(3) 240 | axis[0].imshow(pc_util.point_cloud_three_views(input[0, :,0:3],diameter=5)) 241 | axis[1].imshow(pc_util.point_cloud_three_views(pred_pl[0,:,:],diameter=5)) 242 | axis[2].imshow(pc_util.point_cloud_three_views(gt[:,0:3], diameter=5)) 243 | plt.show() 244 | data_provider.save_pl(path, np.hstack((pred_pl[0, ...],norm_pl[0, ...]))) 245 | path = path[:-4]+'_input.xyz' 246 | data_provider.save_pl(path, input[0]) 247 | print total_time/20 248 | 249 | if __name__ == "__main__": 250 | np.random.seed(int(time.time())) 251 | tf.set_random_seed(int(time.time())) 252 | if PHASE=='train': 253 | # copy the code 254 | assert not os.path.exists(os.path.join(MODEL_DIR, 'code/')) 255 | os.makedirs(os.path.join(MODEL_DIR, 'code/')) 256 | os.system('cp -r * %s' % (os.path.join(MODEL_DIR, 'code/'))) # bkp of model def 257 | 258 | train(assign_model_path=ASSIGN_MODEL_PATH) 259 | LOG_FOUT.close() 260 | else: 261 | prediction_whole_model() 262 | -------------------------------------------------------------------------------- /code/model_generator2_2new6.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from utils import tf_util2 3 | from utils.pointnet_util import pointnet_sa_module,pointnet_fp_module 4 | 5 | def placeholder_inputs(batch_size, num_point,up_ratio = 4): 6 | pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 6)) 7 | pointclouds_gt = 
tf.placeholder(tf.float32, shape=(batch_size, num_point*up_ratio, 3)) 8 | pointclouds_normal = tf.placeholder(tf.float32, shape=(batch_size, num_point * up_ratio, 3)) 9 | pointclouds_radius = tf.placeholder(tf.float32, shape=(batch_size)) 10 | return pointclouds_pl, pointclouds_gt,pointclouds_normal, pointclouds_radius 11 | 12 | 13 | def get_gen_model(point_cloud, is_training, scope, bradius = 1.0, reuse=None, use_rv=False, use_bn = False,use_ibn = False, 14 | use_normal=False,bn_decay=None, up_ratio = 4,idx=None): 15 | 16 | with tf.variable_scope(scope,reuse=reuse) as sc: 17 | batch_size = point_cloud.get_shape()[0].value 18 | num_point = point_cloud.get_shape()[1].value 19 | l0_xyz = point_cloud[:,:,0:3] 20 | if use_normal: 21 | l0_points = point_cloud[:,:,3:] 22 | else: 23 | l0_points = None 24 | # Layer 1 25 | l1_xyz, l1_points, l1_indices = pointnet_sa_module(l0_xyz, l0_points, npoint=num_point, radius=bradius*0.05,bn=use_bn,ibn = use_ibn, 26 | nsample=32, mlp=[32, 32, 64], mlp2=None, group_all=False, 27 | is_training=is_training, bn_decay=bn_decay, scope='layer1') 28 | 29 | l2_xyz, l2_points, l2_indices = pointnet_sa_module(l1_xyz, l1_points, npoint=num_point/2, radius=bradius*0.1,bn=use_bn,ibn = use_ibn, 30 | nsample=32, mlp=[64, 64, 128], mlp2=None, group_all=False, 31 | is_training=is_training, bn_decay=bn_decay, scope='layer2') 32 | 33 | l3_xyz, l3_points, l3_indices = pointnet_sa_module(l2_xyz, l2_points, npoint=num_point/4, radius=bradius*0.2,bn=use_bn,ibn = use_ibn, 34 | nsample=32, mlp=[128, 128, 256], mlp2=None, group_all=False, 35 | is_training=is_training, bn_decay=bn_decay, scope='layer3') 36 | 37 | l4_xyz, l4_points, l4_indices = pointnet_sa_module(l3_xyz, l3_points, npoint=num_point/8, radius=bradius*0.3,bn=use_bn,ibn = use_ibn, 38 | nsample=32, mlp=[256, 256, 512], mlp2=None, group_all=False, 39 | is_training=is_training, bn_decay=bn_decay, scope='layer4') 40 | 41 | # Feature Propagation layers 42 | up_l4_points = pointnet_fp_module(l0_xyz, l4_xyz, None, l4_points, [64], is_training, bn_decay, 43 | scope='fa_layer1',bn=use_bn,ibn = use_ibn) 44 | 45 | up_l3_points = pointnet_fp_module(l0_xyz, l3_xyz, None, l3_points, [64], is_training, bn_decay, 46 | scope='fa_layer2',bn=use_bn,ibn = use_ibn) 47 | 48 | up_l2_points = pointnet_fp_module(l0_xyz, l2_xyz, None, l2_points, [64], is_training, bn_decay, 49 | scope='fa_layer3',bn=use_bn,ibn = use_ibn) 50 | 51 | ###concat feature 52 | with tf.variable_scope('up_layer',reuse=reuse): 53 | new_points_list = [] 54 | for i in range(up_ratio): 55 | concat_feat = tf.concat([up_l4_points, up_l3_points, up_l2_points, l1_points, l0_xyz], axis=-1) 56 | concat_feat = tf.expand_dims(concat_feat, axis=2) 57 | concat_feat = tf_util2.conv2d(concat_feat, 256, [1, 1], 58 | padding='VALID', stride=[1, 1], 59 | bn=False, is_training=is_training, 60 | scope='fc_layer0_%d'%(i), bn_decay=bn_decay) 61 | 62 | new_points = tf_util2.conv2d(concat_feat, 128, [1, 1], 63 | padding='VALID', stride=[1, 1], 64 | bn=use_bn, is_training=is_training, 65 | scope='conv_%d' % (i), 66 | bn_decay=bn_decay) 67 | new_points_list.append(new_points) 68 | net = tf.concat(new_points_list,axis=1) 69 | 70 | #get the xyz 71 | coord = tf_util2.conv2d(net, 64, [1, 1], 72 | padding='VALID', stride=[1, 1], 73 | bn=False, is_training=is_training, 74 | scope='fc_layer1', bn_decay=bn_decay) 75 | 76 | coord = tf_util2.conv2d(coord, 3, [1, 1], 77 | padding='VALID', stride=[1, 1], 78 | bn=False, is_training=is_training, 79 | scope='fc_layer2', bn_decay=bn_decay, 80 | activation_fn=None, 
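# linear output (activation_fn=None): this layer regresses raw xyz coordinates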
weight_decay=0.0) # B*(2N)*1*3 81 | coord = tf.squeeze(coord, [2]) # B*(2N)*3 82 | 83 | return coord,None -------------------------------------------------------------------------------- /code/model_utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import tensorflow as tf 3 | from tf_ops.emd import tf_auctionmatch 4 | from tf_ops.CD import tf_nndistance 5 | from tf_ops.sampling import tf_sampling 6 | from tf_ops.grouping.tf_grouping import query_ball_point, group_point 7 | 8 | def pre_load_checkpoint(checkpoint_dir): 9 | ckpt = tf.train.get_checkpoint_state(checkpoint_dir) 10 | if ckpt and ckpt.model_checkpoint_path: 11 | # print(" [*] Reading checkpoint from {}".format(ckpt.model_checkpoint_path)) 12 | epoch_step = int(os.path.basename(ckpt.model_checkpoint_path).split('-')[1]) 13 | return epoch_step,ckpt.model_checkpoint_path 14 | else: 15 | return 0,None 16 | 17 | 18 | def get_repulsion_loss4(pred, nsample=20, radius=0.07): 19 | # pred: (batch_size, npoint,3) 20 | idx, pts_cnt = query_ball_point(radius, nsample, pred, pred) 21 | tf.summary.histogram('smooth/unque_index', pts_cnt) 22 | 23 | grouped_pred = group_point(pred, idx) # (batch_size, npoint, nsample, 3) 24 | grouped_pred -= tf.expand_dims(pred, 2) 25 | 26 | ##get the uniform loss 27 | h = 0.03 28 | dist_square = tf.reduce_sum(grouped_pred ** 2, axis=-1) 29 | dist_square, idx = tf.nn.top_k(-dist_square, 5) 30 | dist_square = -dist_square[:, :, 1:] # remove the first one 31 | dist_square = tf.maximum(1e-12,dist_square) 32 | dist = tf.sqrt(dist_square) 33 | weight = tf.exp(-dist_square/h**2) 34 | uniform_loss = tf.reduce_mean(radius-dist*weight) 35 | return uniform_loss 36 | 37 | 38 | def get_emd_loss(pred, gt, radius): 39 | """ pred: BxNxC, 40 | label: BxN, """ 41 | batch_size = pred.get_shape()[0].value 42 | matchl_out, matchr_out = tf_auctionmatch.auction_match(pred, gt) 43 | matched_out = tf_sampling.gather_point(gt, matchl_out) 44 | dist = tf.reshape((pred - matched_out) ** 2, shape=(batch_size, -1)) 45 | dist = tf.reduce_mean(dist, axis=1, keep_dims=True) 46 | dist_norm = dist / radius 47 | 48 | emd_loss = tf.reduce_mean(dist_norm) 49 | return emd_loss,matchl_out 50 | 51 | def get_cd_loss(pred, gt, radius): 52 | """ pred: BxNxC, 53 | label: BxN, """ 54 | dists_forward, _, dists_backward, _ = tf_nndistance.nn_distance(gt, pred) 55 | #dists_forward is for each element in gt, the cloest distance to this element 56 | CD_dist = 0.8*dists_forward + 0.2*dists_backward 57 | CD_dist = tf.reduce_mean(CD_dist, axis=1) 58 | CD_dist_norm = CD_dist/radius 59 | cd_loss = tf.reduce_mean(CD_dist_norm) 60 | return cd_loss,None 61 | 62 | 63 | if __name__ == '__main__': 64 | gt = tf.constant([[[1,0,0],[2,0,0],[3,0,0],[4,0,0]]],tf.float32) 65 | pred = tf.constant([[[-10,0,0], [1,0, 0], [2,0, 0], [3,0,0]]],tf.float32) 66 | 67 | dists_forward, idx1, dists_backward, idx2 = tf_nndistance.nn_distance(gt, pred) 68 | with tf.Session() as sess: 69 | print idx1.eval() # for each element in gt, the idx of pred 70 | print idx2.eval() # for each element in pred, -------------------------------------------------------------------------------- /code/tf_ops/CD/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/CD/__init__.py -------------------------------------------------------------------------------- /code/tf_ops/CD/__init__.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/CD/__init__.pyc -------------------------------------------------------------------------------- /code/tf_ops/CD/makefile: -------------------------------------------------------------------------------- 1 | nvcc = /usr/local/cuda-8.0/bin/nvcc 2 | cudalib = /usr/local/cuda-8.0/lib64/ 3 | tensorflow = /home/lqyu/software/anaconda2/lib/python2.7/site-packages/tensorflow/include 4 | 5 | all: tf_nndistance_so.so render_balls_so.so 6 | .PHONY : all 7 | 8 | tf_nndistance_so.so: tf_nndistance_g.cu.o tf_nndistance.cpp 9 | g++ -std=c++11 tf_nndistance.cpp tf_nndistance_g.cu.o -o tf_nndistance_so.so -shared -fPIC -I $(tensorflow) -lcudart -L $(cudalib) -O2 -D_GLIBCXX_USE_CXX11_ABI=0 10 | 11 | tf_nndistance_g.cu.o: tf_nndistance_g.cu 12 | $(nvcc) -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -c -o tf_nndistance_g.cu.o tf_nndistance_g.cu -I $(tensorflow) -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -O2 13 | 14 | render_balls_so.so: render_balls_so.cpp 15 | g++ -std=c++11 render_balls_so.cpp -o render_balls_so.so -shared -fPIC -O2 -D_GLIBCXX_USE_CXX11_ABI=0 16 | 17 | 18 | -------------------------------------------------------------------------------- /code/tf_ops/CD/render_balls_so.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | using namespace std; 6 | 7 | struct PointInfo{ 8 | int x,y,z; 9 | float r,g,b; 10 | }; 11 | 12 | extern "C"{ 13 | 14 | void render_ball(int h,int w,unsigned char * show,int n,int * xyzs,float * c0,float * c1,float * c2,int r){ 15 | r=max(r,1); 16 | vector depth(h*w,-2100000000); 17 | vector pattern; 18 | for (int dx=-r;dx<=r;dx++) 19 | for (int dy=-r;dy<=r;dy++) 20 | if (dx*dx+dy*dy=h || y2<0 || y2>=w) && depth[x2*w+y2]best){ 119 | result[(i*n+j)]=best; 120 | result_i[(i*n+j)]=best_i; 121 | } 122 | } 123 | __syncthreads(); 124 | } 125 | } 126 | } 127 | void NmDistanceKernelLauncher(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i){ 128 | NmDistanceKernel<<>>(b,n,xyz,m,xyz2,result,result_i); 129 | NmDistanceKernel<<>>(b,m,xyz2,n,xyz,result2,result2_i); 130 | } 131 | __global__ void NmDistanceGradKernel(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,float * grad_xyz1,float * grad_xyz2){ 132 | for (int i=blockIdx.x;i>>(b,n,xyz1,m,xyz2,grad_dist1,idx1,grad_xyz1,grad_xyz2); 155 | NmDistanceGradKernel<<>>(b,m,xyz2,n,xyz1,grad_dist2,idx2,grad_xyz2,grad_xyz1); 156 | } 157 | 158 | #endif 159 | -------------------------------------------------------------------------------- /code/tf_ops/CD/tf_nndistance_g.cu.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/CD/tf_nndistance_g.cu.o -------------------------------------------------------------------------------- /code/tf_ops/CD/tf_nndistance_so.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/CD/tf_nndistance_so.so -------------------------------------------------------------------------------- /code/tf_ops/__init__.py: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/__init__.py -------------------------------------------------------------------------------- /code/tf_ops/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/__init__.pyc -------------------------------------------------------------------------------- /code/tf_ops/compile.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | cd CD 4 | sh tf*abi.sh 5 | cd .. 6 | 7 | cd emd 8 | sh tf*abi.sh 9 | cd .. 10 | 11 | cd grouping 12 | sh tf*abi.sh 13 | cd .. 14 | 15 | cd interpolation 16 | sh tf*abi.sh 17 | cd .. 18 | 19 | cd sampling 20 | sh tf*abi.sh 21 | cd .. 22 | -------------------------------------------------------------------------------- /code/tf_ops/emd/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/emd/__init__.py -------------------------------------------------------------------------------- /code/tf_ops/emd/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/emd/__init__.pyc -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch.cpp: -------------------------------------------------------------------------------- 1 | #include "tensorflow/core/framework/op.h" 2 | #include "tensorflow/core/framework/op_kernel.h" 3 | #include "tensorflow/core/framework/shape_inference.h" 4 | #include "tensorflow/core/framework/common_shape_fns.h" 5 | #include 6 | #include 7 | #include 8 | using namespace tensorflow; 9 | REGISTER_OP("AuctionMatch") 10 | .Input("xyz1: float32") 11 | .Input("xyz2: float32") 12 | .Output("matchl: int32") 13 | .Output("matchr: int32") 14 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c){ 15 | ::tensorflow::shape_inference::ShapeHandle dims1; 16 | c->WithRank(c->input(0), 3, &dims1); 17 | ::tensorflow::shape_inference::ShapeHandle dims2; 18 | c->WithRank(c->input(1), 3, &dims2); 19 | ::tensorflow::shape_inference::ShapeHandle output1 = c->MakeShape({c->Dim(dims1, 0), c->Dim(dims1, 1)}); 20 | c->set_output(0, output1); 21 | ::tensorflow::shape_inference::ShapeHandle output2 = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1)}); 22 | c->set_output(1, output2); 23 | return Status::OK(); 24 | }); 25 | void AuctionMatchLauncher(int b,int n,const float * xyz1,const float * xyz2,int * matchl,int * matchr,float * cost); 26 | 27 | class AuctionMatchGpuOp: public OpKernel{ 28 | public: 29 | explicit AuctionMatchGpuOp(OpKernelConstruction* context):OpKernel(context){} 30 | void Compute(OpKernelContext * context)override{ 31 | const Tensor& xyz1_tensor=context->input(0); 32 | OP_REQUIRES(context,xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3,errors::InvalidArgument("ApproxMatch expects (batch_size,num_points,3) xyz1 shape")); 33 | auto xyz1_flat=xyz1_tensor.flat(); 34 | const float * xyz1=&(xyz1_flat(0)); 35 | int b=xyz1_tensor.shape().dim_size(0); 36 | int n=xyz1_tensor.shape().dim_size(1); 37 | 
OP_REQUIRES(context,n<=4096,errors::InvalidArgument("AuctionMatch handles at most 4096 dataset points")); 38 | 39 | const Tensor& xyz2_tensor=context->input(1); 40 | OP_REQUIRES(context,xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3 && xyz2_tensor.shape().dim_size(0)==b && xyz2_tensor.shape().dim_size(1)==n,errors::InvalidArgument("AuctionMatch expects (batch_size,num_points,3) xyz2 shape, and shape must match with xyz1")); 41 | auto xyz2_flat=xyz2_tensor.flat(); 42 | const float * xyz2=&(xyz2_flat(0)); 43 | 44 | Tensor * matchl_tensor=NULL; 45 | OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n},&matchl_tensor)); 46 | auto matchl_flat=matchl_tensor->flat(); 47 | int * matchl=&(matchl_flat(0)); 48 | Tensor * matchr_tensor=NULL; 49 | OP_REQUIRES_OK(context,context->allocate_output(1,TensorShape{b,n},&matchr_tensor)); 50 | auto matchr_flat=matchr_tensor->flat(); 51 | int * matchr=&(matchr_flat(0)); 52 | 53 | Tensor temp_tensor; 54 | OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum::value,TensorShape{b,n,n},&temp_tensor)); 55 | auto temp_flat=temp_tensor.flat(); 56 | float * temp=&(temp_flat(0)); 57 | 58 | AuctionMatchLauncher(b,n,xyz1,xyz2,matchl,matchr,temp); 59 | } 60 | }; 61 | REGISTER_KERNEL_BUILDER(Name("AuctionMatch").Device(DEVICE_GPU), AuctionMatchGpuOp); 62 | -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.python.framework import ops 3 | import sys 4 | import os 5 | 6 | BASE_DIR = os.path.dirname(os.path.abspath(__file__)) 7 | sys.path.append(BASE_DIR) 8 | auctionmatch_module = tf.load_op_library(os.path.join(BASE_DIR, 'tf_auctionmatch_so.so')) 9 | 10 | def auction_match(xyz1,xyz2): 11 | ''' 12 | input: 13 | xyz1 : batch_size * #points * 3 14 | xyz2 : batch_size * #points * 3 15 | returns: 16 | matchl : batch_size * #npoints 17 | matchr : batch_size * #npoints 18 | ''' 19 | return auctionmatch_module.auction_match(xyz1,xyz2) 20 | ops.NoGradient('AuctionMatch') 21 | 22 | # TF1.0 API requires set shape in C++ 23 | # @tf.RegisterShape('AuctionMatch') 24 | # def _auction_match_shape(op): 25 | # shape1=op.inputs[0].get_shape().with_rank(3) 26 | # shape2=op.inputs[1].get_shape().with_rank(3) 27 | # return [ 28 | # tf.TensorShape([shape1.dims[0],shape1.dims[1]]), 29 | # tf.TensorShape([shape2.dims[0],shape2.dims[1]]) 30 | # ] 31 | 32 | if __name__=='__main__': 33 | from tf_ops.grouping import tf_grouping 34 | from tf_ops.sampling import tf_sampling 35 | 36 | npoint=4096 37 | xyz1_in=tf.placeholder(tf.float32,shape=(32,npoint,3)) 38 | xyz2_in=tf.placeholder(tf.float32,shape=(32,npoint,3)) 39 | matchl_out,matchr_out=auction_match(xyz1_in,xyz2_in) 40 | matched_out=tf_sampling.gather_point(xyz2_in,matchl_out) 41 | import numpy as np 42 | np.random.seed(100) 43 | xyz1=np.random.randn(32,npoint,3).astype('float32') 44 | xyz2=xyz1.copy()+np.random.randn(32,npoint,3)*0.01 45 | for i in xrange(len(xyz2)): 46 | xyz2[i]=np.roll(xyz2[i],i,axis=0) 47 | with tf.Session('') as sess: 48 | ret=sess.run(matched_out,feed_dict={xyz1_in:xyz1,xyz2_in:xyz2}) 49 | print ((xyz1-ret)**2).mean() 50 | -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/emd/tf_auctionmatch.pyc -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch_compile.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nvcc=/usr/local/cuda-9.0/bin/nvcc 3 | cudalib=/usr/local/cuda-9.0/lib64/ 4 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') 5 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())') 6 | 7 | $nvcc tf_auctionmatch_g.cu -c -o tf_auctionmatch_g.cu.o -std=c++11 -I $TF_INC -DGOOGLE_CUDA=1\ 8 | -x cu -Xcompiler -fPIC -O2 9 | 10 | g++ tf_auctionmatch.cpp tf_auctionmatch_g.cu.o -o tf_auctionmatch_so.so -std=c++11 -shared -fPIC -I $TF_INC \ 11 | -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 12 | -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch_compile_abi.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nvcc=/usr/local/cuda-9.0/bin/nvcc 3 | cudalib=/usr/local/cuda-9.0/lib64/ 4 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') 5 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())') 6 | 7 | $nvcc tf_auctionmatch_g.cu -c -o tf_auctionmatch_g.cu.o -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -I $TF_INC -DGOOGLE_CUDA=1\ 8 | -x cu -Xcompiler -fPIC -O2 9 | 10 | g++ tf_auctionmatch.cpp tf_auctionmatch_g.cu.o -o tf_auctionmatch_so.so -std=c++11 -shared -fPIC -I $TF_INC \ 11 | -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 -D_GLIBCXX_USE_CXX11_ABI=0 12 | -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch_g.cu: -------------------------------------------------------------------------------- 1 | #include 2 | __global__ void AuctionMatchKernel(int b,int n,const float * __restrict__ xyz1,const float * __restrict__ xyz2,int * matchl,int * matchr,float * cost){ 3 | //this kernel handles up to 4096 points 4 | const int NMax=4096; 5 | __shared__ short Queue[NMax]; 6 | __shared__ short matchrbuf[NMax]; 7 | __shared__ float pricer[NMax]; 8 | __shared__ float bests[32][3]; 9 | __shared__ int qhead,qlen; 10 | const int BufLen=2048; 11 | __shared__ float buf[BufLen]; 12 | for (int bno=blockIdx.x;bno1; 91 | } 92 | int vj,vj2,vj3,vj4; 93 | if (value1=blockDim.x*4){ 149 | for (int j=threadIdx.x;j=blockDim.x*2){ 187 | for (int j=threadIdx.x;j0;i>>=1){ 220 | float b1=__shfl_down(best,i,32); 221 | float b2=__shfl_down(best2,i,32); 222 | int bj=__shfl_down(bestj,i,32); 223 | if (best>5][0]=best; 233 | bests[threadIdx.x>>5][1]=best2; 234 | *(int*)&bests[threadIdx.x>>5][2]=bestj; 235 | } 236 | __syncthreads(); 237 | int nn=blockDim.x>>5; 238 | if (threadIdx.x>1;i>0;i>>=1){ 243 | float b1=__shfl_down(best,i,32); 244 | float b2=__shfl_down(best2,i,32); 245 | int bj=__shfl_down(bestj,i,32); 246 | if (best=n) 260 | qhead-=n; 261 | int old=matchrbuf[bestj]; 262 | pricer[bestj]+=delta; 263 | cnt++; 264 | if (old!=-1){ 265 | int ql=qlen; 266 | int tail=qhead+ql; 267 | qlen=ql+1; 268 | if (tail>=n) 269 | tail-=n; 270 | Queue[tail]=old; 271 | } 272 | if (cnt==(40*n)){ 273 | if (tolerance==1.0) 274 | qlen=0; 275 | tolerance=fminf(1.0,tolerance*100); 276 | cnt=0; 277 | } 278 | } 279 | __syncthreads(); 
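// after the price update, thread 0 records point i as the new owner of column bestj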
280 | if (threadIdx.x==0){ 281 | matchrbuf[bestj]=i; 282 | } 283 | } 284 | __syncthreads(); 285 | for (int j=threadIdx.x;j>>(b,n,xyz1,xyz2,matchl,matchr,cost); 294 | } 295 | 296 | -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch_g.cu.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/emd/tf_auctionmatch_g.cu.o -------------------------------------------------------------------------------- /code/tf_ops/emd/tf_auctionmatch_so.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/emd/tf_auctionmatch_so.so -------------------------------------------------------------------------------- /code/tf_ops/grouping/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/grouping/__init__.py -------------------------------------------------------------------------------- /code/tf_ops/grouping/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/grouping/__init__.pyc -------------------------------------------------------------------------------- /code/tf_ops/grouping/compile.sh: -------------------------------------------------------------------------------- 1 | g++ query_ball_point.cpp -o query_ball_point 2 | nvcc query_ball_point.cu -o query_ball_point_cuda 3 | nvcc query_ball_point_block.cu -o query_ball_point_block 4 | nvcc query_ball_point_grid.cu -o query_ball_point_grid 5 | nvcc query_ball_point_grid_count.cu -o query_ball_point_grid_count 6 | g++ -Wall selection_sort.cpp -o selection_sort 7 | nvcc selection_sort.cu -o selection_sort_cuda 8 | -------------------------------------------------------------------------------- /code/tf_ops/grouping/query_ball_point.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | using namespace std; 9 | float randomf(){ 10 | return (rand()+0.5)/(RAND_MAX+1.0); 11 | } 12 | static double get_time(){ 13 | timespec tp; 14 | clock_gettime(CLOCK_MONOTONIC,&tp); 15 | return tp.tv_sec+tp.tv_nsec*1e-9; 16 | } 17 | // input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3) 18 | // output: idx (b,m,nsample) 19 | void query_ball_point_cpu(int b, int n, int m, const float* radius, int nsample, const float *xyz1, const float *xyz2, int *idx) { 20 | for (int i=0;i 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | #include "cuPrintf.cuh" 9 | #include "cuPrintf.cu" 10 | 11 | using namespace std; 12 | using namespace std; 13 | float randomf(){ 14 | return (rand()+0.5)/(RAND_MAX+1.0); 15 | } 16 | static double get_time(){ 17 | timespec tp; 18 | clock_gettime(CLOCK_MONOTONIC,&tp); 19 | return tp.tv_sec+tp.tv_nsec*1e-9; 20 | } 21 | // input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3) 22 | // output: idx (b,m,nsample) 23 | __global__ void query_ball_point_gpu(int b, int n, int m, const float* radius, int nsample, const float *xyz1, 
const float *xyz2, int *idx) { 24 | for (int i=0;i>>(b,n,m,radius,nsample,xyz1,xyz2,idx); 117 | cudaDeviceSynchronize(); 118 | printf("query_ball_point gpu time %f\n",get_time()-t0); 119 | 120 | t0=get_time(); 121 | group_point_gpu<<<1,1>>>(b,n,c,m,nsample,points,idx,out); 122 | cudaDeviceSynchronize(); 123 | printf("grou_point gpu time %f\n",get_time()-t0); 124 | 125 | t0=get_time(); 126 | group_point_grad_gpu<<<1,1>>>(b,n,c,m,nsample,grad_out,idx,grad_points); 127 | cudaDeviceSynchronize(); 128 | printf("grou_point_grad gpu time %f\n",get_time()-t0); 129 | 130 | cudaFree(xyz1); 131 | cudaFree(xyz2); 132 | cudaFree(points); 133 | cudaFree(idx); 134 | cudaFree(out); 135 | cudaFree(grad_out); 136 | cudaFree(grad_points); 137 | return 0; 138 | } 139 | -------------------------------------------------------------------------------- /code/tf_ops/grouping/query_ball_point_block.cu: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | using namespace std; 9 | float randomf(){ 10 | return (rand()+0.5)/(RAND_MAX+1.0); 11 | } 12 | static double get_time(){ 13 | timespec tp; 14 | clock_gettime(CLOCK_MONOTONIC,&tp); 15 | return tp.tv_sec+tp.tv_nsec*1e-9; 16 | } 17 | // input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3) 18 | // output: idx (b,m,nsample) 19 | __global__ void query_ball_point_gpu(int b, int n, int m, const float *radius, int nsample, const float *xyz1, const float *xyz2, int *idx) { 20 | int index = threadIdx.x; 21 | xyz1 += n*3*index; 22 | xyz2 += m*3*index; 23 | idx += m*nsample*index; 24 | 25 | for (int j=0;j>>(b,n,m,radius,nsample,xyz1,xyz2,idx); 113 | cudaDeviceSynchronize(); 114 | printf("query_ball_point gpu time %f\n",get_time()-t0); 115 | 116 | t0=get_time(); 117 | group_point_gpu<<<1,b>>>(b,n,c,m,nsample,points,idx,out); 118 | cudaDeviceSynchronize(); 119 | printf("grou_point gpu time %f\n",get_time()-t0); 120 | 121 | t0=get_time(); 122 | group_point_grad_gpu<<<1,b>>>(b,n,c,m,nsample,grad_out,idx,grad_points); 123 | cudaDeviceSynchronize(); 124 | printf("grou_point_grad gpu time %f\n",get_time()-t0); 125 | 126 | cudaFree(xyz1); 127 | cudaFree(xyz2); 128 | cudaFree(points); 129 | cudaFree(idx); 130 | cudaFree(out); 131 | cudaFree(grad_out); 132 | cudaFree(grad_points); 133 | return 0; 134 | } 135 | -------------------------------------------------------------------------------- /code/tf_ops/grouping/query_ball_point_grid.cu: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | using namespace std; 9 | float randomf(){ 10 | return (rand()+0.5)/(RAND_MAX+1.0); 11 | } 12 | static double get_time(){ 13 | timespec tp; 14 | clock_gettime(CLOCK_MONOTONIC,&tp); 15 | return tp.tv_sec+tp.tv_nsec*1e-9; 16 | } 17 | // input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3) 18 | // output: idx (b,m,nsample) 19 | __global__ void query_ball_point_gpu(int b, int n, int m, const float *radius, int nsample, const float *xyz1, const float *xyz2, int *idx) { 20 | int batch_index = blockIdx.x; 21 | xyz1 += n*3*batch_index; 22 | xyz2 += m*3*batch_index; 23 | idx += m*nsample*batch_index; 24 | 25 | int index = threadIdx.x; 26 | int stride = blockDim.x; 27 | 28 | for (int j=index;j>>(b,n,m,radius,nsample,xyz1,xyz2,idx); 123 | cudaDeviceSynchronize(); 124 | 
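// synchronize first so the elapsed time below covers the whole kernel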
printf("query_ball_point gpu time %f\n",get_time()-t0); 125 | 126 | t0=get_time(); 127 | group_point_gpu<<>>(b,n,c,m,nsample,points,idx,out); 128 | cudaDeviceSynchronize(); 129 | printf("grou_point gpu time %f\n",get_time()-t0); 130 | 131 | t0=get_time(); 132 | group_point_grad_gpu<<>>(b,n,c,m,nsample,grad_out,idx,grad_points); 133 | cudaDeviceSynchronize(); 134 | printf("grou_point_grad gpu time %f\n",get_time()-t0); 135 | 136 | cudaFree(xyz1); 137 | cudaFree(xyz2); 138 | cudaFree(points); 139 | cudaFree(idx); 140 | cudaFree(out); 141 | cudaFree(grad_out); 142 | cudaFree(grad_points); 143 | return 0; 144 | } 145 | -------------------------------------------------------------------------------- /code/tf_ops/grouping/selection_sort.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | using namespace std; 9 | float randomf(){ 10 | return (rand()+0.5)/(RAND_MAX+1.0); 11 | } 12 | static double get_time(){ 13 | timespec tp; 14 | clock_gettime(CLOCK_MONOTONIC,&tp); 15 | return tp.tv_sec+tp.tv_nsec*1e-9; 16 | } 17 | 18 | // input: k (1), distance matrix dist (b,m,n) 19 | // output: idx (b,m,n), val (b,m,n) 20 | void selection_sort_cpu(int b, int n, int m, int k, const float *dist, int *idx, float *val) { 21 | float *p_dist; 22 | float tmp; 23 | int tmpi; 24 | for (int i=0;i 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | using namespace std; 9 | float randomf(){ 10 | return (rand()+0.5)/(RAND_MAX+1.0); 11 | } 12 | static double get_time(){ 13 | timespec tp; 14 | clock_gettime(CLOCK_MONOTONIC,&tp); 15 | return tp.tv_sec+tp.tv_nsec*1e-9; 16 | } 17 | 18 | // input: k (1), distance matrix dist (b,m,n) 19 | // output: idx (b,m,k), val (b,m,k) 20 | __global__ void selection_sort_gpu(int b, int n, int m, int k, float *dist, int *idx, float *val) { 21 | int batch_index = blockIdx.x; 22 | dist+=m*n*batch_index; 23 | idx+=m*k*batch_index; 24 | val+=m*k*batch_index; 25 | 26 | int index = threadIdx.x; 27 | int stride = blockDim.x; 28 | 29 | float *p_dist; 30 | for (int j=index;j>>(b,n,m,k,dist,idx,val); 68 | cudaDeviceSynchronize(); 69 | printf("selection sort cpu time %f\n",get_time()-t0); 70 | 71 | return 0; 72 | } 73 | -------------------------------------------------------------------------------- /code/tf_ops/grouping/selection_sort_const.cu: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include 8 | using namespace std; 9 | float randomf(){ 10 | return (rand()+0.5)/(RAND_MAX+1.0); 11 | } 12 | static double get_time(){ 13 | timespec tp; 14 | clock_gettime(CLOCK_MONOTONIC,&tp); 15 | return tp.tv_sec+tp.tv_nsec*1e-9; 16 | } 17 | 18 | // input: k (1), distance matrix dist (b,m,n) 19 | // output: idx (b,m,n), dist_out (b,m,n) 20 | __global__ void selection_sort_gpu(int b, int n, int m, int k, const float *dist, int *outi, float *out) { 21 | int batch_index = blockIdx.x; 22 | dist+=m*n*batch_index; 23 | outi+=m*n*batch_index; 24 | out+=m*n*batch_index; 25 | 26 | int index = threadIdx.x; 27 | int stride = blockDim.x; 28 | 29 | // copy from dist to dist_out 30 | for (int j=index;j>>(b,n,m,k,dist,idx,dist_out); 84 | cudaDeviceSynchronize(); 85 | printf("selection sort cpu time %f\n",get_time()-t0); 86 | 87 | //for (int i=0;i 2 | 
#include 3 | #include // memset 4 | #include // rand, RAND_MAX 5 | #include // sqrtf 6 | #include 7 | #include "tensorflow/core/framework/op.h" 8 | #include "tensorflow/core/framework/op_kernel.h" 9 | #include "tensorflow/core/framework/shape_inference.h" 10 | #include "tensorflow/core/framework/common_shape_fns.h" 11 | #include 12 | using namespace tensorflow; 13 | 14 | REGISTER_OP("QueryBallPoint") 15 | .Attr("nsample: int") 16 | .Input("xyz1: float32") 17 | .Input("xyz2: float32") 18 | .Input("radius: float32") 19 | .Output("idx: int32") 20 | .Output("pts_cnt: int32") 21 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) { 22 | ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoint * 3 23 | c->WithRank(c->input(1), 3, &dims2); 24 | int nsample; 25 | TF_RETURN_IF_ERROR(c->GetAttr("nsample", &nsample)); 26 | ::tensorflow::shape_inference::ShapeHandle output1 = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1), nsample}); 27 | c->set_output(0, output1); 28 | ::tensorflow::shape_inference::ShapeHandle output2 = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1)}); 29 | c->set_output(1, output2); 30 | return Status::OK(); 31 | }); 32 | REGISTER_OP("SelectionSort") 33 | .Attr("k: int") 34 | .Input("dist: float32") 35 | .Output("outi: int32") 36 | .Output("out: float32") 37 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) { 38 | c->set_output(0, c->input(0)); 39 | c->set_output(1, c->input(0)); 40 | return Status::OK(); 41 | }); 42 | REGISTER_OP("GroupPoint") 43 | .Input("points: float32") 44 | .Input("idx: int32") 45 | .Output("out: float32") 46 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) { 47 | ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * ndataset * channels 48 | c->WithRank(c->input(0), 3, &dims1); 49 | ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoints * nsample 50 | c->WithRank(c->input(1), 3, &dims2); 51 | // batch_size * npoints * nsample * channels 52 | ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1), c->Dim(dims2, 2), c->Dim(dims1, 2)}); 53 | c->set_output(0, output); 54 | return Status::OK(); 55 | }); 56 | REGISTER_OP("GroupPointGrad") 57 | .Input("points: float32") 58 | .Input("idx: int32") 59 | .Input("grad_out: float32") 60 | .Output("grad_points: float32") 61 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) { 62 | c->set_output(0, c->input(0)); 63 | return Status::OK(); 64 | }); 65 | 66 | 67 | void queryBallPointLauncher(int b, int n, int m, const float* radius, int nsample, const float *xyz1, const float *xyz2, int *idx, int *pts_cnt); 68 | class QueryBallPointGpuOp : public OpKernel { 69 | public: 70 | explicit QueryBallPointGpuOp(OpKernelConstruction* context) : OpKernel(context) { 71 | //OP_REQUIRES_OK(context, context->GetAttr("radius", &radius_)); 72 | //OP_REQUIRES(context, radius_ > 0, errors::InvalidArgument("QueryBallPoint expects positive radius")); 73 | 74 | OP_REQUIRES_OK(context, context->GetAttr("nsample", &nsample_)); 75 | OP_REQUIRES(context, nsample_ > 0, errors::InvalidArgument("QueryBallPoint expects positive nsample")); 76 | } 77 | 78 | void Compute(OpKernelContext* context) override { 79 | const Tensor& xyz1_tensor = context->input(0); 80 | OP_REQUIRES(context, xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3, errors::InvalidArgument("QueryBallPoint expects (batch_size, ndataset, 3) xyz1 shape.")); 81 | int b = xyz1_tensor.shape().dim_size(0); 82 | int n = 
xyz1_tensor.shape().dim_size(1); 83 | 84 | const Tensor& xyz2_tensor = context->input(1); 85 | OP_REQUIRES(context, xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3, errors::InvalidArgument("QueryBallPoint expects (batch_size, npoint, 3) xyz2 shape.")); 86 | int m = xyz2_tensor.shape().dim_size(1); 87 | 88 | Tensor *idx_tensor = nullptr; 89 | OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape{b,m,nsample_}, &idx_tensor)); 90 | Tensor *pts_cnt_tensor = nullptr; 91 | OP_REQUIRES_OK(context, context->allocate_output(1, TensorShape{b,m}, &pts_cnt_tensor)); 92 | 93 | const Tensor& radius_tensor = context->input(2); 94 | auto radius_flat = radius_tensor.flat(); 95 | const float *radius = &(radius_flat(0)); 96 | 97 | auto xyz1_flat = xyz1_tensor.flat(); 98 | const float *xyz1 = &(xyz1_flat(0)); 99 | auto xyz2_flat = xyz2_tensor.flat(); 100 | const float *xyz2 = &(xyz2_flat(0)); 101 | auto idx_flat = idx_tensor->flat(); 102 | int *idx = &(idx_flat(0)); 103 | auto pts_cnt_flat = pts_cnt_tensor->flat(); 104 | int *pts_cnt = &(pts_cnt_flat(0)); 105 | queryBallPointLauncher(b,n,m,radius,nsample_,xyz1,xyz2,idx,pts_cnt); 106 | } 107 | private: 108 | int nsample_; 109 | }; 110 | REGISTER_KERNEL_BUILDER(Name("QueryBallPoint").Device(DEVICE_GPU), QueryBallPointGpuOp); 111 | 112 | void selectionSortLauncher(int b, int n, int m, int k, const float *dist, int *outi, float *out); 113 | class SelectionSortGpuOp : public OpKernel { 114 | public: 115 | explicit SelectionSortGpuOp(OpKernelConstruction* context) : OpKernel(context) { 116 | OP_REQUIRES_OK(context, context->GetAttr("k", &k_)); 117 | OP_REQUIRES(context, k_ > 0, errors::InvalidArgument("SelectionSort expects positive k")); 118 | } 119 | 120 | void Compute(OpKernelContext* context) override { 121 | const Tensor& dist_tensor = context->input(0); 122 | OP_REQUIRES(context, dist_tensor.dims()==3, errors::InvalidArgument("SelectionSort expects (b,m,n) dist shape.")); 123 | int b = dist_tensor.shape().dim_size(0); 124 | int m = dist_tensor.shape().dim_size(1); 125 | int n = dist_tensor.shape().dim_size(2); 126 | 127 | Tensor *outi_tensor = nullptr; 128 | OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape{b,m,n}, &outi_tensor)); 129 | Tensor *out_tensor = nullptr; 130 | OP_REQUIRES_OK(context, context->allocate_output(1, TensorShape{b,m,n}, &out_tensor)); 131 | 132 | auto dist_flat = dist_tensor.flat(); 133 | const float *dist = &(dist_flat(0)); 134 | auto outi_flat = outi_tensor->flat(); 135 | int *outi = &(outi_flat(0)); 136 | auto out_flat = out_tensor->flat(); 137 | float *out = &(out_flat(0)); 138 | selectionSortLauncher(b,n,m,k_,dist,outi,out); 139 | } 140 | private: 141 | int k_; 142 | }; 143 | REGISTER_KERNEL_BUILDER(Name("SelectionSort").Device(DEVICE_GPU), SelectionSortGpuOp); 144 | 145 | 146 | void groupPointLauncher(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out); 147 | class GroupPointGpuOp: public OpKernel{ 148 | public: 149 | explicit GroupPointGpuOp(OpKernelConstruction * context):OpKernel(context){} 150 | 151 | void Compute(OpKernelContext * context) override { 152 | const Tensor& points_tensor=context->input(0); 153 | OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument("GroupPoint expects (batch_size, num_points, channel) points shape")); 154 | int b = points_tensor.shape().dim_size(0); 155 | int n = points_tensor.shape().dim_size(1); 156 | int c = points_tensor.shape().dim_size(2); 157 | 158 | const Tensor& idx_tensor=context->input(1); 159 | 
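// idx: (batch_size, npoints, nsample) neighbor indices, e.g. from QueryBallPoint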
146 | void groupPointLauncher(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out);
147 | class GroupPointGpuOp: public OpKernel{
148 | public:
149 | explicit GroupPointGpuOp(OpKernelConstruction * context):OpKernel(context){}
150 | 
151 | void Compute(OpKernelContext * context) override {
152 | const Tensor& points_tensor=context->input(0);
153 | OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument("GroupPoint expects (batch_size, num_points, channel) points shape"));
154 | int b = points_tensor.shape().dim_size(0);
155 | int n = points_tensor.shape().dim_size(1);
156 | int c = points_tensor.shape().dim_size(2);
157 | 
158 | const Tensor& idx_tensor=context->input(1);
159 | OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b, errors::InvalidArgument("GroupPoint expects (batch_size, npoints, nsample) idx shape"));
160 | int m = idx_tensor.shape().dim_size(1);
161 | int nsample = idx_tensor.shape().dim_size(2);
162 | 
163 | Tensor * out_tensor = nullptr;
164 | OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,m,nsample,c}, &out_tensor));
165 | 
166 | auto points_flat = points_tensor.flat<float>();
167 | const float *points = &(points_flat(0));
168 | auto idx_flat = idx_tensor.flat<int>();
169 | const int *idx = &(idx_flat(0));
170 | auto out_flat = out_tensor->flat<float>();
171 | float *out = &(out_flat(0));
172 | groupPointLauncher(b,n,c,m,nsample,points,idx,out);
173 | }
174 | };
175 | REGISTER_KERNEL_BUILDER(Name("GroupPoint").Device(DEVICE_GPU),GroupPointGpuOp);
176 | 
177 | void groupPointGradLauncher(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points);
178 | class GroupPointGradGpuOp: public OpKernel{
179 | public:
180 | explicit GroupPointGradGpuOp(OpKernelConstruction * context):OpKernel(context){}
181 | 
182 | void Compute(OpKernelContext * context) override {
183 | const Tensor& points_tensor=context->input(0);
184 | OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument("GroupPointGrad expects (batch_size, num_points, channel) points shape"));
185 | int b = points_tensor.shape().dim_size(0);
186 | int n = points_tensor.shape().dim_size(1);
187 | int c = points_tensor.shape().dim_size(2);
188 | 
189 | const Tensor& idx_tensor=context->input(1);
190 | OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b, errors::InvalidArgument("GroupPointGrad expects (batch_size, npoints, nsample) idx shape"));
191 | int m = idx_tensor.shape().dim_size(1);
192 | int nsample = idx_tensor.shape().dim_size(2);
193 | 
194 | const Tensor& grad_out_tensor=context->input(2);
195 | OP_REQUIRES(context,grad_out_tensor.dims()==4 && grad_out_tensor.shape().dim_size(0)==b && grad_out_tensor.shape().dim_size(1)==m && grad_out_tensor.shape().dim_size(2)==nsample && grad_out_tensor.shape().dim_size(3)==c, errors::InvalidArgument("GroupPointGrad expects (batch_size, npoints, nsample, channel) grad_out shape"));
196 | 
197 | Tensor * grad_points_tensor = nullptr;
198 | OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,n,c}, &grad_points_tensor));
199 | 
200 | auto points_flat = points_tensor.flat<float>();
201 | const float *points = &(points_flat(0));
202 | auto idx_flat = idx_tensor.flat<int>();
203 | const int *idx = &(idx_flat(0));
204 | auto grad_out_flat = grad_out_tensor.flat<float>();
205 | const float *grad_out = &(grad_out_flat(0));
206 | auto grad_points_flat = grad_points_tensor->flat<float>();
207 | float *grad_points = &(grad_points_flat(0));
208 | cudaMemset(grad_points, 0, sizeof(float)*b*n*c);
209 | groupPointGradLauncher(b,n,c,m,nsample,grad_out,idx,grad_points);
210 | }
211 | };
212 | REGISTER_KERNEL_BUILDER(Name("GroupPointGrad").Device(DEVICE_GPU),GroupPointGradGpuOp);
213 | 
214 | 
215 | 
-------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.python.framework import ops
3 | import sys
4 | import os
5 | BASE_DIR = os.path.dirname(os.path.abspath(__file__))
6 | sys.path.append(BASE_DIR)
7 | grouping_module=tf.load_op_library(os.path.join(BASE_DIR, 'tf_grouping_so.so'))
8 | def query_ball_point(radius, nsample, xyz1, xyz2):
9 | '''
10 | Input:
11 | radius: float32, ball search radius
12 | nsample: int32, number of points selected in each ball region
13 | xyz1: (batch_size, ndataset, 3) float32 array, input points
14 | xyz2: (batch_size, npoint, 3) float32 array, query points
15 | Output:
16 | idx: (batch_size, npoint, nsample) int32 array, indices to input points
17 | pts_cnt: (batch_size, npoint) int32 array, number of unique points in each local region
18 | '''
19 | #return grouping_module.query_ball_point(radius, nsample, xyz1, xyz2)
20 | return grouping_module.query_ball_point(xyz1, xyz2, radius, nsample)
21 | ops.NoGradient('QueryBallPoint')
22 | def select_top_k(k, dist):
23 | '''
24 | Input:
25 | k: int32, number of k SMALLEST elements selected
26 | dist: (b,m,n) float32 array, distance matrix, m query points, n dataset points
27 | Output:
28 | idx: (b,m,n) int32 array, first k in n are indices to the top k
29 | dist_out: (b,m,n) float32 array, first k in n are the top k
30 | '''
31 | return grouping_module.selection_sort(dist, k)
32 | ops.NoGradient('SelectionSort')
33 | def group_point(points, idx):
34 | '''
35 | Input:
36 | points: (batch_size, ndataset, channel) float32 array, points to sample from
37 | idx: (batch_size, npoint, nsample) int32 array, indices to points
38 | Output:
39 | out: (batch_size, npoint, nsample, channel) float32 array, values sampled from points
40 | '''
41 | return grouping_module.group_point(points, idx)
42 | @tf.RegisterGradient('GroupPoint')
43 | def _group_point_grad(op, grad_out):
44 | points = op.inputs[0]
45 | idx = op.inputs[1]
46 | return [grouping_module.group_point_grad(points, idx, grad_out), None]
47 | 
48 | def knn_point(k, xyz1, xyz2):
49 | '''
50 | Input:
51 | k: int32, number of k in k-nn search
52 | xyz1: (batch_size, ndataset, c) float32 array, input points
53 | xyz2: (batch_size, npoint, c) float32 array, query points
54 | Output:
55 | val: (batch_size, npoint, k) float32 array, negated squared L2 distances (tf.nn.top_k is applied to -dist below, so use -val for the true squared distances)
56 | idx: (batch_size, npoint, k) int32 array, indices to input points
57 | '''
58 | # b = xyz1.get_shape()[0].value
59 | # n = xyz1.get_shape()[1].value
60 | # c = xyz1.get_shape()[2].value
61 | # m = xyz2.get_shape()[1].value
62 | # xyz1 = tf.tile(tf.reshape(xyz1, (b,1,n,c)), [1,m,1,1])
63 | # xyz2 = tf.tile(tf.reshape(xyz2, (b,m,1,c)), [1,1,n,1])
64 | xyz1 = tf.expand_dims(xyz1,axis=1)
65 | xyz2 = tf.expand_dims(xyz2,axis=2)
66 | dist = tf.reduce_sum((xyz1-xyz2)**2, -1)
67 | 
68 | # outi, out = select_top_k(k, dist)
69 | # idx = tf.slice(outi, [0,0,0], [-1,-1,k])
70 | # val = tf.slice(out, [0,0,0], [-1,-1,k])
71 | 
72 | val, idx = tf.nn.top_k(-dist, k=k) # ONLY SUPPORT CPU
73 | return val, idx
74 | 
75 | if __name__=='__main__':
76 | knn=True
77 | import numpy as np
78 | import time
79 | np.random.seed(100)
80 | pts = np.random.random((32,512,64)).astype('float32')
81 | tmp1 = np.random.random((32,512,3)).astype('float32')
82 | tmp2 = np.random.random((32,128,3)).astype('float32')
83 | with tf.device('/gpu:1'):
84 | points = tf.constant(pts)
85 | xyz1 = tf.constant(tmp1)
86 | xyz2 = tf.constant(tmp2)
87 | radius = 0.1
88 | nsample = 64
89 | if knn:
90 | _, idx = knn_point(nsample, xyz1, xyz2)
91 | grouped_points = group_point(points, idx)
92 | else:
93 | idx, _ = query_ball_point(radius, nsample, xyz1, xyz2)
94 | grouped_points = group_point(points, idx)
95 | #grouped_points_grad = tf.ones_like(grouped_points)
96 | #points_grad = tf.gradients(grouped_points, points, grouped_points_grad)
97 | with tf.Session('') as sess:
98 | now = time.time()
99 | for _ in range(100):
100 | ret = sess.run(grouped_points)
101 | print time.time() - now
102 | print ret.shape, ret.dtype
103 | print ret
104 | 
105 | 
106 | 
-------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/grouping/tf_grouping.pyc -------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping_compile.sh: --------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | nvcc=/usr/local/cuda-9.0/bin/nvcc
3 | cudainc=/usr/local/cuda-9.0/include/
4 | cudalib=/usr/local/cuda-9.0/lib64/
5 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
6 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
7 | 
8 | $nvcc tf_grouping_g.cu -c -o tf_grouping_g.cu.o -std=c++11 -I $TF_INC -DGOOGLE_CUDA=1\
9 | -x cu -Xcompiler -fPIC -O2
10 | 
11 | g++ tf_grouping.cpp tf_grouping_g.cu.o -o tf_grouping_so.so -std=c++11 -shared -fPIC -I $TF_INC \
12 | -I$TF_INC/external/nsync/public -I $cudainc -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2
13 | 
-------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping_compile_abi.sh: --------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | nvcc=/usr/local/cuda-9.0/bin/nvcc
3 | cudainc=/usr/local/cuda-9.0/include/
4 | cudalib=/usr/local/cuda-9.0/lib64/
5 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
6 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
7 | 
8 | $nvcc tf_grouping_g.cu -c -o tf_grouping_g.cu.o -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -I $TF_INC -DGOOGLE_CUDA=1\
9 | -x cu -Xcompiler -fPIC -O2
10 | 
11 | g++ tf_grouping.cpp tf_grouping_g.cu.o -o tf_grouping_so.so -std=c++11 -shared -fPIC -I $TF_INC \
12 | -I$TF_INC/external/nsync/public -I $cudainc -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 -D_GLIBCXX_USE_CXX11_ABI=0
13 | 
-------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping_g.cu: --------------------------------------------------------------------------------
1 | // input: radius (1), nsample (1), xyz1 (b,n,3), xyz2 (b,m,3)
2 | // output: idx (b,m,nsample), pts_cnt (b,m)
3 | __global__ void query_ball_point_gpu(int b, int n, int m, const float *radius, int nsample, const float *xyz1, const float *xyz2, int *idx, int *pts_cnt) {
4 | int batch_index = blockIdx.x;
5 | xyz1 += n*3*batch_index;
6 | xyz2 += m*3*batch_index;
7 | idx += m*nsample*batch_index;
8 | pts_cnt += m*batch_index; // counting how many unique points selected in local region
9 | 
10 | int index = threadIdx.x;
11 | int stride = blockDim.x;
12 | 
13 | for (int j=index;j<m;j+=stride) {
125 | void queryBallPointLauncher(int b, int n, int m, const float *radius, int nsample, const float *xyz1, const float *xyz2, int *idx, int *pts_cnt) {
126 | query_ball_point_gpu<<<b,256>>>(b,n,m,radius,nsample,xyz1,xyz2,idx,pts_cnt);
127 | //cudaDeviceSynchronize();
128 | }
129 | void selectionSortLauncher(int b, int n, int m, int k, const float *dist, int *outi, float *out) {
130 | selection_sort_gpu<<<b,256>>>(b,n,m,k,dist,outi,out);
131 | //cudaDeviceSynchronize();
132 | }
133 | void groupPointLauncher(int b, int n, int c, int m, int nsample, const float *points, const int *idx, float *out){
134 | group_point_gpu<<<b,256>>>(b,n,c,m,nsample,points,idx,out);
135 | //cudaDeviceSynchronize();
136 | }
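// Launch pattern shared by these launchers: one thread block per batch
// element (blockIdx.x), with the block's threads striding over the query
// points (threadIdx.x / blockDim.x), as set up at the top of
// query_ball_point_gpu above.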
137 | void groupPointGradLauncher(int b, int n, int c, int m, int nsample, const float *grad_out, const int *idx, float *grad_points){
138 | group_point_grad_gpu<<<b,256>>>(b,n,c,m,nsample,grad_out,idx,grad_points);
139 | //group_point_grad_gpu<<<1,1>>>(b,n,c,m,nsample,grad_out,idx,grad_points);
140 | //cudaDeviceSynchronize();
141 | }
142 | 
-------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping_g.cu.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/grouping/tf_grouping_g.cu.o -------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping_op_test.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | from tf_grouping import query_ball_point, group_point
4 | 
5 | class GroupPointTest(tf.test.TestCase):
6 | def test(self):
7 | pass
8 | 
9 | def test_grad(self):
10 | with tf.device('/gpu:0'):
11 | points = tf.constant(np.random.random((1,128,16)).astype('float32'))
12 | print points
13 | xyz1 = tf.constant(np.random.random((1,128,3)).astype('float32'))
14 | xyz2 = tf.constant(np.random.random((1,8,3)).astype('float32'))
15 | radius = 0.3
16 | nsample = 32
17 | idx, pts_cnt = query_ball_point(radius, nsample, xyz1, xyz2)
18 | grouped_points = group_point(points, idx)
19 | print grouped_points
20 | 
21 | with self.test_session():
22 | print "---- Going to compute gradient error"
23 | err = tf.test.compute_gradient_error(points, (1,128,16), grouped_points, (1,8,32,16))
24 | print err
25 | self.assertLess(err, 1e-4)
26 | 
27 | if __name__=='__main__':
28 | tf.test.main()
29 | 
-------------------------------------------------------------------------------- /code/tf_ops/grouping/tf_grouping_so.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/grouping/tf_grouping_so.so -------------------------------------------------------------------------------- /code/tf_ops/interpolation/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/interpolation/__init__.py -------------------------------------------------------------------------------- /code/tf_ops/interpolation/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/interpolation/__init__.pyc -------------------------------------------------------------------------------- /code/tf_ops/interpolation/interpolate.cpp: --------------------------------------------------------------------------------
1 | #include <cstdio>
2 | #include <ctime>
3 | #include <cstring> // memset
4 | #include <cstdlib> // rand, RAND_MAX
5 | #include <math.h> // sqrtf
6 | #include <vector>
7 | #include <algorithm>
8 | using namespace std;
9 | float randomf(){
10 | return (rand()+0.5)/(RAND_MAX+1.0);
11 | }
12 | static double get_time(){
13 | timespec tp;
14 | clock_gettime(CLOCK_MONOTONIC,&tp);
15 | return tp.tv_sec+tp.tv_nsec*1e-9;
16 | }
17 | 
18 | // Find three nearest neighbors with square distance
19 | // input: xyz1 (b,n,3), xyz2(b,m,3)
20 | // output: dist (b,n,3), idx (b,n,3)
21 | void threenn_cpu(int b, int n, int m, const float *xyz1, const float *xyz2, float *dist, int *idx) {
22 | for (int i=0;i
-------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate.cpp: --------------------------------------------------------------------------------
1 | #include <cstdio>
2 | #include <ctime>
3 | #include <cstring> // memset
4 | #include <cstdlib> // rand, RAND_MAX
5 | #include <math.h> // sqrtf
6 | #include "tensorflow/core/framework/op.h"
7 | #include "tensorflow/core/framework/op_kernel.h"
8 | #include "tensorflow/core/framework/shape_inference.h"
9 | #include "tensorflow/core/framework/common_shape_fns.h"
10 | using namespace tensorflow;
11 | 
12 | REGISTER_OP("ThreeNN")
13 | .Input("xyz1: float32")
14 | .Input("xyz2: float32")
15 | .Output("dist: float32")
16 | .Output("idx: int32")
17 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
18 | c->set_output(0, c->input(0));
19 | c->set_output(1, c->input(0));
20 | return Status::OK();
21 | });
22 | REGISTER_OP("ThreeInterpolate")
23 | .Input("points: float32")
24 | .Input("idx: int32")
25 | .Input("weight: float32")
26 | .Output("out: float32")
27 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
28 | ::tensorflow::shape_inference::ShapeHandle dims1; // (b,m,c)
29 | c->WithRank(c->input(0), 3, &dims1);
30 | ::tensorflow::shape_inference::ShapeHandle dims2; // (b,n,3)
31 | c->WithRank(c->input(1), 3, &dims2);
32 | // (b,n,c)
33 | ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), c->Dim(dims2, 1), c->Dim(dims1, 2)});
34 | c->set_output(0, output);
35 | return Status::OK();
36 | });
37 | REGISTER_OP("ThreeInterpolateGrad")
38 | .Input("points: float32")
39 | .Input("idx: int32")
40 | .Input("weight: float32")
41 | .Input("grad_out: float32")
42 | .Output("grad_points: float32")
43 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
44 | c->set_output(0, c->input(0));
45 | return Status::OK();
46 | });
47 | 
48 | float randomf(){
49 | return (rand()+0.5)/(RAND_MAX+1.0);
50 | }
51 | static double get_time(){
52 | timespec tp;
53 | clock_gettime(CLOCK_MONOTONIC,&tp);
54 | return tp.tv_sec+tp.tv_nsec*1e-9;
55 | }
56 | 
57 | // Find three nearest neighbors with square distance
58 | // input: xyz1 (b,n,3), xyz2(b,m,3)
59 | // output: dist (b,n,3), idx (b,n,3)
60 | void threenn_cpu(int b, int n, int m, const float *xyz1, const float *xyz2, float *dist, int *idx) {
61 | for (int i=0;i
158 | class ThreeNNOp : public OpKernel {
159 | public:
160 | explicit ThreeNNOp(OpKernelConstruction* context) : OpKernel(context) {}
161 | void Compute(OpKernelContext* context) override {
162 | const Tensor& xyz1_tensor = context->input(0);
163 | OP_REQUIRES(context, xyz1_tensor.dims()==3 && xyz1_tensor.shape().dim_size(2)==3, errors::InvalidArgument("ThreeNN expects (b,n,3) xyz1 shape."));
164 | int b = xyz1_tensor.shape().dim_size(0);
165 | int n = xyz1_tensor.shape().dim_size(1);
166 | 
167 | const Tensor& xyz2_tensor = context->input(1);
168 | OP_REQUIRES(context, xyz2_tensor.dims()==3 && xyz2_tensor.shape().dim_size(2)==3, errors::InvalidArgument("ThreeNN expects (b,m,3) xyz2 shape."));
169 | int m = xyz2_tensor.shape().dim_size(1);
170 | 
171 | Tensor *dist_tensor = nullptr;
172 | OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape{b,n,3}, &dist_tensor));
173 | Tensor *idx_tensor = nullptr;
174 | OP_REQUIRES_OK(context, context->allocate_output(1, TensorShape{b,n,3}, &idx_tensor));
175 | 
176 | auto xyz1_flat = xyz1_tensor.flat<float>();
177 | const float *xyz1 = &(xyz1_flat(0));
178 | auto xyz2_flat = xyz2_tensor.flat<float>();
179 | const float *xyz2 = &(xyz2_flat(0));
180 | auto dist_flat = dist_tensor->flat<float>();
181 | float *dist = &(dist_flat(0));
182 | auto idx_flat = idx_tensor->flat<int>();
183 | int *idx = &(idx_flat(0));
184 | threenn_cpu(b,n,m,xyz1,xyz2,dist,idx);
185 | }
186 | };
187 | REGISTER_KERNEL_BUILDER(Name("ThreeNN").Device(DEVICE_CPU), ThreeNNOp);
188 | 
189 | 
190 | 
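// ThreeInterpolate computes, for each output point n and channel c,
//   out(b,n,c) = sum_{k=0..2} weight(b,n,k) * points(b, idx(b,n,k), c),
// i.e. a weighted blend of the three neighbors returned by ThreeNN.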
191 | class ThreeInterpolateOp: public OpKernel{
192 | public:
193 | explicit ThreeInterpolateOp(OpKernelConstruction * context):OpKernel(context){}
194 | 
195 | void Compute(OpKernelContext * context) override {
196 | const Tensor& points_tensor=context->input(0);
197 | OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument("ThreeInterpolate expects (b,m,c) points shape"));
198 | int b = points_tensor.shape().dim_size(0);
199 | int m = points_tensor.shape().dim_size(1);
200 | int c = points_tensor.shape().dim_size(2);
201 | 
202 | const Tensor& idx_tensor=context->input(1);
203 | OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b && idx_tensor.shape().dim_size(2)==3, errors::InvalidArgument("ThreeInterpolate expects (b,n,3) idx shape"));
204 | int n = idx_tensor.shape().dim_size(1);
205 | const Tensor& weight_tensor=context->input(2);
206 | OP_REQUIRES(context,weight_tensor.dims()==3 && weight_tensor.shape().dim_size(0)==b && weight_tensor.shape().dim_size(1)==n && weight_tensor.shape().dim_size(2)==3, errors::InvalidArgument("ThreeInterpolate expects (b,n,3) weight shape"));
207 | 
208 | Tensor * out_tensor = nullptr;
209 | OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,n,c}, &out_tensor));
210 | 
211 | auto points_flat = points_tensor.flat<float>();
212 | const float *points = &(points_flat(0));
213 | auto idx_flat = idx_tensor.flat<int>();
214 | const int *idx = &(idx_flat(0));
215 | auto weight_flat = weight_tensor.flat<float>();
216 | const float *weight = &(weight_flat(0));
217 | auto out_flat = out_tensor->flat<float>();
218 | float *out = &(out_flat(0));
219 | threeinterpolate_cpu(b,m,c,n,points,idx,weight,out);
220 | }
221 | };
222 | REGISTER_KERNEL_BUILDER(Name("ThreeInterpolate").Device(DEVICE_CPU),ThreeInterpolateOp);
223 | 
224 | 
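// The gradient op below scatters grad_out back through the same indices:
// grad_points(b, idx(b,n,k), c) accumulates weight(b,n,k) * grad_out(b,n,c),
// which is why the output buffer is zeroed with memset before accumulation.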
225 | class ThreeInterpolateGradOp: public OpKernel{
226 | public:
227 | explicit ThreeInterpolateGradOp(OpKernelConstruction * context):OpKernel(context){}
228 | 
229 | void Compute(OpKernelContext * context) override {
230 | const Tensor& points_tensor=context->input(0);
231 | OP_REQUIRES(context, points_tensor.dims()==3, errors::InvalidArgument("ThreeInterpolateGrad expects (b,m,c) points shape"));
232 | int b = points_tensor.shape().dim_size(0);
233 | int m = points_tensor.shape().dim_size(1);
234 | int c = points_tensor.shape().dim_size(2);
235 | 
236 | const Tensor& idx_tensor=context->input(1);
237 | OP_REQUIRES(context,idx_tensor.dims()==3 && idx_tensor.shape().dim_size(0)==b, errors::InvalidArgument("ThreeInterpolateGrad expects (b,n,3) idx shape"));
238 | int n = idx_tensor.shape().dim_size(1);
239 | const Tensor& weight_tensor=context->input(2);
240 | OP_REQUIRES(context,weight_tensor.dims()==3 && weight_tensor.shape().dim_size(0)==b && weight_tensor.shape().dim_size(1)==n && weight_tensor.shape().dim_size(2)==3, errors::InvalidArgument("ThreeInterpolateGrad expects (b,n,3) weight shape"));
241 | 
242 | const Tensor& grad_out_tensor=context->input(3);
243 | OP_REQUIRES(context,grad_out_tensor.dims()==3 && grad_out_tensor.shape().dim_size(0)==b && grad_out_tensor.shape().dim_size(1)==n && grad_out_tensor.shape().dim_size(2)==c, errors::InvalidArgument("ThreeInterpolateGrad expects (b,n,c) grad_out shape"));
244 | 
245 | Tensor * grad_points_tensor = nullptr;
246 | OP_REQUIRES_OK(context, context->allocate_output(0,TensorShape{b,m,c}, &grad_points_tensor));
247 | 
248 | auto points_flat = points_tensor.flat<float>();
249 | const float *points = &(points_flat(0));
250 | auto idx_flat = idx_tensor.flat<int>();
251 | const int *idx = &(idx_flat(0));
252 | auto weight_flat = weight_tensor.flat<float>();
253 | const float *weight = &(weight_flat(0));
254 | auto grad_out_flat = grad_out_tensor.flat<float>();
255 | const float *grad_out = &(grad_out_flat(0));
256 | auto grad_points_flat = grad_points_tensor->flat<float>();
257 | float *grad_points = &(grad_points_flat(0));
258 | memset(grad_points, 0, sizeof(float)*b*m*c);
259 | threeinterpolate_grad_cpu(b,n,c,m,grad_out,idx,weight,grad_points);
260 | }
261 | };
262 | REGISTER_KERNEL_BUILDER(Name("ThreeInterpolateGrad").Device(DEVICE_CPU),ThreeInterpolateGradOp);
263 | 
264 | 
265 | 
-------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.python.framework import ops
3 | import sys
4 | import os
5 | BASE_DIR = os.path.dirname(__file__)
6 | sys.path.append(BASE_DIR)
7 | interpolate_module=tf.load_op_library(os.path.join(BASE_DIR, 'tf_interpolate_so.so'))
8 | def three_nn(xyz1, xyz2):
9 | '''
10 | Input:
11 | xyz1: (b,n,3) float32 array, unknown points
12 | xyz2: (b,m,3) float32 array, known points
13 | Output:
14 | dist: (b,n,3) float32 array, squared distances to known points
15 | idx: (b,n,3) int32 array, indices to known points
16 | '''
17 | return interpolate_module.three_nn(xyz1, xyz2)
18 | ops.NoGradient('ThreeNN')
19 | def three_interpolate(points, idx, weight):
20 | '''
21 | Input:
22 | points: (b,m,c) float32 array, known points
23 | idx: (b,n,3) int32 array, indices to known points
24 | weight: (b,n,3) float32 array, weights on known points
25 | Output:
26 | out: (b,n,c) float32 array, interpolated point values
27 | '''
28 | return interpolate_module.three_interpolate(points, idx, weight)
29 | @tf.RegisterGradient('ThreeInterpolate')
30 | def _three_interpolate_grad(op, grad_out):
31 | points = op.inputs[0]
32 | idx = op.inputs[1]
33 | weight = op.inputs[2]
34 | return [interpolate_module.three_interpolate_grad(points, idx, weight, grad_out), None, None]
35 | 
36 | if __name__=='__main__':
37 | import numpy as np
38 | import time
39 | np.random.seed(100)
40 | pts = np.random.random((32,128,64)).astype('float32')
41 | tmp1 = np.random.random((32,512,3)).astype('float32')
42 | tmp2 = np.random.random((32,128,3)).astype('float32')
43 | with tf.device('/cpu:0'):
44 | points = tf.constant(pts)
45 | xyz1 = tf.constant(tmp1)
46 | xyz2 = tf.constant(tmp2)
47 | dist, idx = three_nn(xyz1, xyz2)
48 | weight = tf.ones_like(dist)/3.0
49 | interpolated_points = three_interpolate(points, idx, weight)
50 | with tf.Session('') as sess:
51 | now = time.time()
52 | for _ in range(100):
53 | ret = sess.run(interpolated_points)
54 | print time.time() - now
55 | print ret.shape, ret.dtype
56 | #print ret
57 | 
58 | 
59 | 
60 | 
-------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/interpolation/tf_interpolate.pyc -------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate_compile.sh: --------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | nvcc=/usr/local/cuda-9.0/bin/nvcc
3 | cudalib=/usr/local/cuda-9.0/lib64/
4 | TF_INC=$(python3 -c 'import tensorflow as
tf; print(tf.sysconfig.get_include())') 5 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())') 6 | 7 | g++ tf_interpolate.cpp -o tf_interpolate_so.so -std=c++11 -shared -fPIC -I $TF_INC \ 8 | -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 9 | -------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate_compile_abi.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nvcc=/usr/local/cuda-9.0/bin/nvcc 3 | cudalib=/usr/local/cuda-9.0/lib64/ 4 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') 5 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())') 6 | 7 | g++ tf_interpolate.cpp -o tf_interpolate_so.so -std=c++11 -shared -fPIC -I $TF_INC \ 8 | -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 -D_GLIBCXX_USE_CXX11_ABI=0 9 | -------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate_op_test.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | from tf_interpolate import three_nn, three_interpolate 4 | 5 | class GroupPointTest(tf.test.TestCase): 6 | def test(self): 7 | pass 8 | 9 | def test_grad(self): 10 | with self.test_session(): 11 | points = tf.constant(np.random.random((1,8,16)).astype('float32')) 12 | print points 13 | xyz1 = tf.constant(np.random.random((1,128,3)).astype('float32')) 14 | xyz2 = tf.constant(np.random.random((1,8,3)).astype('float32')) 15 | dist, idx = three_nn(xyz1, xyz2) 16 | weight = tf.ones_like(dist)/3.0 17 | interpolated_points = three_interpolate(points, idx, weight) 18 | print interpolated_points 19 | err = tf.test.compute_gradient_error(points, (1,8,16), interpolated_points, (1,128,16)) 20 | print err 21 | self.assertLess(err, 1e-4) 22 | 23 | if __name__=='__main__': 24 | tf.test.main() 25 | -------------------------------------------------------------------------------- /code/tf_ops/interpolation/tf_interpolate_so.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/interpolation/tf_interpolate_so.so -------------------------------------------------------------------------------- /code/tf_ops/interpolation/visu_interpolation.py: -------------------------------------------------------------------------------- 1 | ''' Visualize part segmentation ''' 2 | import os 3 | import sys 4 | ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 5 | sys.path.append('/home/rqi/Projects/toolkits/visualization') 6 | from show3d_balls import showpoints 7 | import numpy as np 8 | from tf_interpolate import three_nn, three_interpolate 9 | import tensorflow as tf 10 | 11 | 12 | pts2 = np.array([[0,0,1],[1,0,0],[0,1,0],[1,1,0]]).astype('float32') 13 | xyz1 = np.random.random((100,3)).astype('float32') 14 | xyz2 = np.array([[0,0,0],[1,0,0],[0,1,0],[1,1,1]]).astype('float32') 15 | 16 | def fun(xyz1,xyz2,pts2): 17 | with tf.device('/cpu:0'): 18 | points = tf.constant(np.expand_dims(pts2,0)) 19 | xyz1 = tf.constant(np.expand_dims(xyz1,0)) 20 | xyz2 = tf.constant(np.expand_dims(xyz2,0)) 21 | dist, idx = three_nn(xyz1, xyz2) 22 | #weight = tf.ones_like(dist)/3.0 23 | dist = tf.maximum(dist, 1e-10) 24 | norm = 
tf.reduce_sum((1.0/dist),axis=2,keep_dims=True)
25 | norm = tf.tile(norm, [1,1,3])
26 | print norm
27 | weight = (1.0/dist) / norm
28 | interpolated_points = three_interpolate(points, idx, weight)
29 | with tf.Session('') as sess:
30 | tmp,pts1,d,w = sess.run([xyz1, interpolated_points, dist, weight])
31 | #print w
32 | pts1 = pts1.squeeze()
33 | return pts1
34 | 
35 | pts1 = fun(xyz1,xyz2,pts2)
36 | all_pts = np.zeros((104,3))
37 | all_pts[0:100,:] = pts1
38 | all_pts[100:,:] = pts2
39 | all_xyz = np.zeros((104,3))
40 | all_xyz[0:100,:]=xyz1
41 | all_xyz[100:,:]=xyz2
42 | showpoints(xyz2, pts2, ballradius=8)
43 | showpoints(xyz1, pts1, ballradius=8)
44 | showpoints(all_xyz, all_pts, ballradius=8)
45 | 
-------------------------------------------------------------------------------- /code/tf_ops/sampling/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/sampling/__init__.py -------------------------------------------------------------------------------- /code/tf_ops/sampling/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/sampling/__init__.pyc -------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling.cpp: --------------------------------------------------------------------------------
1 | /* Furthest point sampling
2 | * Original author: Haoqiang Fan
3 | * Modified by Charles R. Qi
4 | * All Rights Reserved. 2017.
5 | */
6 | #include "tensorflow/core/framework/op.h"
7 | #include "tensorflow/core/framework/op_kernel.h"
8 | #include "tensorflow/core/framework/shape_inference.h"
9 | #include "tensorflow/core/framework/common_shape_fns.h"
10 | #include <cuda_runtime.h>
11 | 
12 | using namespace tensorflow;
13 | 
14 | REGISTER_OP("ProbSample")
15 | .Input("inp: float32")
16 | .Input("inpr: float32")
17 | .Output("out: int32")
18 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
19 | ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * ncategory
20 | c->WithRank(c->input(0), 2, &dims1);
21 | ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size * npoints
22 | c->WithRank(c->input(1), 2, &dims2);
23 | // batch_size * npoints
24 | ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims2, 0), c->Dim(dims2, 1)});
25 | c->set_output(0, output);
26 | return Status::OK();
27 | });
28 | REGISTER_OP("FarthestPointSample")
29 | .Attr("npoint: int")
30 | .Input("inp: float32")
31 | .Output("out: int32")
32 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
33 | ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * npoint * 3
34 | c->WithRank(c->input(0), 3, &dims1);
35 | int npoint;
36 | TF_RETURN_IF_ERROR(c->GetAttr("npoint", &npoint));
37 | ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), npoint});
38 | c->set_output(0, output);
39 | return Status::OK();
40 | });
41 | REGISTER_OP("GatherPoint")
42 | .Input("inp: float32")
43 | .Input("idx: int32")
44 | .Output("out: float32")
45 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
46 | ::tensorflow::shape_inference::ShapeHandle dims1; // batch_size * ndataset * 3
47 | c->WithRank(c->input(0), 3, &dims1);
48 | ::tensorflow::shape_inference::ShapeHandle dims2; // batch_size *
npoints
49 | c->WithRank(c->input(1), 2, &dims2);
50 | // batch_size * npoints * 3
51 | ::tensorflow::shape_inference::ShapeHandle output = c->MakeShape({c->Dim(dims1, 0), c->Dim(dims2, 1), c->Dim(dims1, 2)});
52 | c->set_output(0, output);
53 | return Status::OK();
54 | });
55 | REGISTER_OP("GatherPointGrad")
56 | .Input("inp: float32")
57 | .Input("idx: int32")
58 | .Input("out_g: float32")
59 | .Output("inp_g: float32")
60 | .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
61 | c->set_output(0, c->input(0));
62 | return Status::OK();
63 | });
64 | 
65 | void probsampleLauncher(int b,int n,int m,const float * inp_p,const float * inp_r,float * temp,int * out);
66 | class ProbSampleGpuOp: public OpKernel{
67 | public:
68 | explicit ProbSampleGpuOp(OpKernelConstruction* context):OpKernel(context){}
69 | void Compute(OpKernelContext * context)override{
70 | const Tensor& inp_tensor=context->input(0);
71 | const Tensor& inpr_tensor=context->input(1);
72 | auto inp_flat=inp_tensor.flat<float>();
73 | auto inpr_flat=inpr_tensor.flat<float>();
74 | const float * inp=&(inp_flat(0));
75 | const float * inpr=&(inpr_flat(0));
76 | OP_REQUIRES(context,inp_tensor.dims()==2,errors::InvalidArgument("ProbSample expects (batch_size,num_choices) inp shape"));
77 | int b=inp_tensor.shape().dim_size(0);
78 | int n=inp_tensor.shape().dim_size(1);
79 | OP_REQUIRES(context,inpr_tensor.dims()==2 && inpr_tensor.shape().dim_size(0)==b,errors::InvalidArgument("ProbSample expects (batch_size,num_points) inpr shape"));
80 | int m=inpr_tensor.shape().dim_size(1);
81 | Tensor * out_tensor=NULL;
82 | OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m},&out_tensor));
83 | auto out_flat=out_tensor->flat<int>();
84 | int * out=&(out_flat(0));
85 | Tensor temp_tensor;
86 | OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum<float>::value,TensorShape{b,n},&temp_tensor));
87 | auto temp_flat=temp_tensor.flat<float>();
88 | float * temp=&(temp_flat(0));
89 | probsampleLauncher(b,n,m,inp,inpr,temp,out);
90 | }
91 | };
92 | REGISTER_KERNEL_BUILDER(Name("ProbSample").Device(DEVICE_GPU), ProbSampleGpuOp);
93 | 
94 | void farthestpointsamplingLauncher(int b,int n,int m,const float * inp,float * temp,int * out);
95 | class FarthestPointSampleGpuOp: public OpKernel{
96 | public:
97 | explicit FarthestPointSampleGpuOp(OpKernelConstruction* context):OpKernel(context) {
98 | OP_REQUIRES_OK(context, context->GetAttr("npoint", &npoint_));
99 | OP_REQUIRES(context, npoint_ > 0, errors::InvalidArgument("FarthestPointSample expects positive npoint"));
100 | }
101 | void Compute(OpKernelContext * context)override{
102 | int m = npoint_;
103 | 
104 | const Tensor& inp_tensor=context->input(0);
105 | OP_REQUIRES(context,inp_tensor.dims()==3 && inp_tensor.shape().dim_size(2)==3,errors::InvalidArgument("FarthestPointSample expects (batch_size,num_points,3) inp shape"));
106 | int b=inp_tensor.shape().dim_size(0);
107 | int n=inp_tensor.shape().dim_size(1);
108 | auto inp_flat=inp_tensor.flat<float>();
109 | const float * inp=&(inp_flat(0));
110 | Tensor * out_tensor;
111 | OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m},&out_tensor));
112 | auto out_flat=out_tensor->flat<int>();
113 | int * out=&(out_flat(0));
114 | Tensor temp_tensor;
115 | OP_REQUIRES_OK(context,context->allocate_temp(DataTypeToEnum<float>::value,TensorShape{32,n},&temp_tensor));
116 | auto temp_flat=temp_tensor.flat<float>();
117 | float * temp=&(temp_flat(0));
118 | farthestpointsamplingLauncher(b,n,m,inp,temp,out);
119 | }
120 | private:
121 | int npoint_;
122 | };
123 | 
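// The temp buffer above is shaped {32,n} rather than {b,n}:
// farthestpointsamplingLauncher launches 32 thread blocks, and each block
// keeps its own running distance-to-selected-set over all n points (see the
// "require 32*n working space" note in tf_sampling_g.cu).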
REGISTER_KERNEL_BUILDER(Name("FarthestPointSample").Device(DEVICE_GPU),FarthestPointSampleGpuOp); 124 | 125 | void gatherpointLauncher(int b,int n,int m,const float * inp,const int * idx,float * out); 126 | class GatherPointGpuOp: public OpKernel{ 127 | public: 128 | explicit GatherPointGpuOp(OpKernelConstruction * context):OpKernel(context){} 129 | void Compute(OpKernelContext * context)override{ 130 | const Tensor& inp_tensor=context->input(0); 131 | OP_REQUIRES(context,inp_tensor.dims()==3 && inp_tensor.shape().dim_size(2)==3,errors::InvalidArgument("GatherPoint expects (batch_size,num_points,3) inp shape")); 132 | int b=inp_tensor.shape().dim_size(0); 133 | int n=inp_tensor.shape().dim_size(1); 134 | const Tensor& idx_tensor=context->input(1); 135 | OP_REQUIRES(context,idx_tensor.dims()==2 && idx_tensor.shape().dim_size(0)==b,errors::InvalidArgument("GatherPoint expects (batch_size,num_result) idx shape")); 136 | int m=idx_tensor.shape().dim_size(1); 137 | auto inp_flat=inp_tensor.flat(); 138 | const float * inp=&(inp_flat(0)); 139 | auto idx_flat=idx_tensor.flat(); 140 | const int * idx=&(idx_flat(0)); 141 | Tensor * out_tensor=NULL; 142 | OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,m,3},&out_tensor)); 143 | auto out_flat=out_tensor->flat(); 144 | float * out=&(out_flat(0)); 145 | gatherpointLauncher(b,n,m,inp,idx,out); 146 | } 147 | }; 148 | REGISTER_KERNEL_BUILDER(Name("GatherPoint").Device(DEVICE_GPU),GatherPointGpuOp); 149 | 150 | void scatteraddpointLauncher(int b,int n,int m,const float * out_g,const int * idx,float * inp_g); 151 | class GatherPointGradGpuOp: public OpKernel{ 152 | public: 153 | explicit GatherPointGradGpuOp(OpKernelConstruction * context):OpKernel(context){} 154 | void Compute(OpKernelContext * context)override{ 155 | const Tensor& inp_tensor=context->input(0); 156 | OP_REQUIRES(context,inp_tensor.dims()==3 && inp_tensor.shape().dim_size(2)==3,errors::InvalidArgument("GatherPointGradGpuOp expects (batch_size,num_points,3) inp")); 157 | int b=inp_tensor.shape().dim_size(0); 158 | int n=inp_tensor.shape().dim_size(1); 159 | const Tensor& idx_tensor=context->input(1); 160 | OP_REQUIRES(context,idx_tensor.dims()==2 && idx_tensor.shape().dim_size(0)==b,errors::InvalidArgument("GatherPointGradGpuOp expects (batch_size,num_result) idx shape")); 161 | int m=idx_tensor.shape().dim_size(1); 162 | auto inp_flat=inp_tensor.flat(); 163 | const float * inp=&(inp_flat(0)); 164 | auto idx_flat=idx_tensor.flat(); 165 | const int * idx=&(idx_flat(0)); 166 | const Tensor& out_g_tensor=context->input(2); 167 | OP_REQUIRES(context,out_g_tensor.dims()==3 && out_g_tensor.shape().dim_size(0)==b && out_g_tensor.shape().dim_size(1)==m && out_g_tensor.shape().dim_size(2)==3,errors::InvalidArgument("GatherPointGradGpuOp expects (batch_size,num_result,3) out_g shape")); 168 | auto out_g_flat=out_g_tensor.flat(); 169 | const float * out_g=&(out_g_flat(0)); 170 | Tensor * inp_g_tensor=NULL; 171 | OP_REQUIRES_OK(context,context->allocate_output(0,TensorShape{b,n,3},&inp_g_tensor)); 172 | auto inp_g_flat=inp_g_tensor->flat(); 173 | float * inp_g=&(inp_g_flat(0)); 174 | cudaMemset(inp_g,0,b*n*3*4); 175 | scatteraddpointLauncher(b,n,m,out_g,idx,inp_g); 176 | } 177 | }; 178 | REGISTER_KERNEL_BUILDER(Name("GatherPointGrad").Device(DEVICE_GPU),GatherPointGradGpuOp); 179 | 180 | -------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling.py: -------------------------------------------------------------------------------- 1 | 
''' Furthest point sampling 2 | Original author: Haoqiang Fan 3 | Modified by Charles R. Qi 4 | All Rights Reserved. 2017. 5 | ''' 6 | import tensorflow as tf 7 | from tensorflow.python.framework import ops 8 | import sys 9 | import os 10 | BASE_DIR = os.path.dirname(os.path.abspath(__file__)) 11 | sys.path.append(BASE_DIR) 12 | sampling_module=tf.load_op_library(os.path.join(BASE_DIR, 'tf_sampling_so.so')) 13 | def prob_sample(inp,inpr): 14 | ''' 15 | input: 16 | batch_size * ncategory float32 17 | batch_size * npoints float32 18 | returns: 19 | batch_size * npoints int32 20 | ''' 21 | return sampling_module.prob_sample(inp,inpr) 22 | ops.NoGradient('ProbSample') 23 | # TF1.0 API requires set shape in C++ 24 | #@tf.RegisterShape('ProbSample') 25 | #def _prob_sample_shape(op): 26 | # shape1=op.inputs[0].get_shape().with_rank(2) 27 | # shape2=op.inputs[1].get_shape().with_rank(2) 28 | # return [tf.TensorShape([shape2.dims[0],shape2.dims[1]])] 29 | def gather_point(inp,idx): 30 | ''' 31 | input: 32 | batch_size * ndataset * 3 float32 33 | batch_size * npoints int32 34 | returns: 35 | batch_size * npoints * 3 float32 36 | ''' 37 | return sampling_module.gather_point(inp,idx) 38 | #@tf.RegisterShape('GatherPoint') 39 | #def _gather_point_shape(op): 40 | # shape1=op.inputs[0].get_shape().with_rank(3) 41 | # shape2=op.inputs[1].get_shape().with_rank(2) 42 | # return [tf.TensorShape([shape1.dims[0],shape2.dims[1],shape1.dims[2]])] 43 | @tf.RegisterGradient('GatherPoint') 44 | def _gather_point_grad(op,out_g): 45 | inp=op.inputs[0] 46 | idx=op.inputs[1] 47 | return [sampling_module.gather_point_grad(inp,idx,out_g),None] 48 | def farthest_point_sample(npoint,inp): 49 | ''' 50 | input: 51 | int32 52 | batch_size * ndataset * 3 float32 53 | returns: 54 | batch_size * npoint int32 55 | ''' 56 | return sampling_module.farthest_point_sample(inp, npoint) 57 | ops.NoGradient('FarthestPointSample') 58 | 59 | 60 | if __name__=='__main__': 61 | import numpy as np 62 | np.random.seed(100) 63 | triangles=np.random.rand(1,5,3,3).astype('float32') 64 | with tf.device('/gpu:1'): 65 | inp=tf.constant(triangles) 66 | tria=inp[:,:,0,:] 67 | trib=inp[:,:,1,:] 68 | tric=inp[:,:,2,:] 69 | areas=tf.sqrt(tf.reduce_sum(tf.cross(trib-tria,tric-tria)**2,2)+1e-9) 70 | randomnumbers=tf.random_uniform((1,8192)) 71 | triids=prob_sample(areas,randomnumbers) 72 | tria_sample=gather_point(tria,triids) 73 | trib_sample=gather_point(trib,triids) 74 | tric_sample=gather_point(tric,triids) 75 | us=tf.random_uniform((1,8192)) 76 | vs=tf.random_uniform((1,8192)) 77 | uplusv=1-tf.abs(us+vs-1) 78 | uminusv=us-vs 79 | us=(uplusv+uminusv)*0.5 80 | vs=(uplusv-uminusv)*0.5 81 | pt_sample=tria_sample+(trib_sample-tria_sample)*tf.expand_dims(us,-1)+(tric_sample-tria_sample)*tf.expand_dims(vs,-1) 82 | print 'pt_sample: ', pt_sample 83 | reduced_sample=gather_point(pt_sample,farthest_point_sample(1024,pt_sample)) 84 | print reduced_sample 85 | with tf.Session('') as sess: 86 | ret=sess.run(reduced_sample) 87 | print ret.shape,ret.dtype 88 | import cPickle as pickle 89 | pickle.dump(ret,open('1.pkl','wb'),-1) 90 | -------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/sampling/tf_sampling.pyc -------------------------------------------------------------------------------- 
/code/tf_ops/sampling/tf_sampling_compile.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nvcc=/usr/local/cuda-9.0/bin/nvcc 3 | cudainc=/usr/local/cuda-9.0/include/ 4 | cudalib=/usr/local/cuda-9.0/lib64/ 5 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') 6 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())') 7 | 8 | $nvcc tf_sampling_g.cu -c -o tf_sampling_g.cu.o -std=c++11 -I $TF_INC -DGOOGLE_CUDA=1\ 9 | -x cu -Xcompiler -fPIC -O2 10 | 11 | g++ tf_sampling.cpp tf_sampling_g.cu.o -o tf_sampling_so.so -std=c++11 -shared -fPIC -I $TF_INC \ 12 | -I$TF_INC/external/nsync/public -I $cudainc -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 13 | -------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling_compile_abi.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nvcc=/usr/local/cuda-9.0/bin/nvcc 3 | cudainc=/usr/local/cuda-9.0/include/ 4 | cudalib=/usr/local/cuda-9.0/lib64/ 5 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') 6 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())') 7 | 8 | $nvcc tf_sampling_g.cu -c -o tf_sampling_g.cu.o -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -I $TF_INC -DGOOGLE_CUDA=1\ 9 | -x cu -Xcompiler -fPIC -O2 10 | 11 | g++ tf_sampling.cpp tf_sampling_g.cu.o -o tf_sampling_so.so -std=c++11 -shared -fPIC -I $TF_INC \ 12 | -I$TF_INC/external/nsync/public -I $cudainc -L$TF_LIB -ltensorflow_framework -lcudart -L $cudalib -O2 -D_GLIBCXX_USE_CXX11_ABI=0 13 | -------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling_g.cu: -------------------------------------------------------------------------------- 1 | /* Furthest point sampling GPU implementation 2 | * Original author: Haoqiang Fan 3 | * Modified by Charles R. Qi 4 | * All Rights Reserved. 2017. 
5 | */
6 | 
7 | __global__ void cumsumKernel(int b,int n,const float * __restrict__ inp,float * __restrict__ out){
8 | const int BlockSize=2048;
9 | const int paddingLevel=5;
10 | __shared__ float buffer4[BlockSize*4];
11 | __shared__ float buffer[BlockSize+(BlockSize>>paddingLevel)];
12 | for (int i=blockIdx.x;i>2;
18 | for (int k=threadIdx.x*4;k>2)+(k>>(2+paddingLevel))]=v4;
33 | }else{
34 | float v=0;
35 | for (int k2=k;k2>2)+(k>>(2+paddingLevel))]=v;
43 | }
44 | }
45 | int u=0;
46 | for (;(2<>(u+1));k+=blockDim.x){
49 | int i1=(((k<<1)+2)<>paddingLevel;
52 | i2+=i2>>paddingLevel;
53 | buffer[i1]+=buffer[i2];
54 | }
55 | }
56 | u--;
57 | for (;u>=0;u--){
58 | __syncthreads();
59 | for (int k=threadIdx.x;k>(u+1));k+=blockDim.x){
60 | int i1=(((k<<1)+3)<>paddingLevel;
63 | i2+=i2>>paddingLevel;
64 | buffer[i1]+=buffer[i2];
65 | }
66 | }
67 | __syncthreads();
68 | for (int k=threadIdx.x*4;k>2)-1)+(((k>>2)-1)>>paddingLevel);
71 | buffer4[k]+=buffer[k2];
72 | buffer4[k+1]+=buffer[k2];
73 | buffer4[k+2]+=buffer[k2];
74 | buffer4[k+3]+=buffer[k2];
75 | }
76 | }
77 | __syncthreads();
78 | for (int k=threadIdx.x;k>paddingLevel)]+runningsum2;
82 | float r2=runningsum+t;
83 | runningsum2=t-(r2-runningsum);
84 | runningsum=r2;
85 | __syncthreads();
86 | }
87 | }
88 | }
89 | 
90 | __global__ void binarysearchKernel(int b,int n,int m,const float * __restrict__ dataset,const float * __restrict__ query, int * __restrict__ result){
91 | int base=1;
92 | while (base<n)
93 | base<<=1;
94 | for (int i=blockIdx.x;i<b;i+=gridDim.x){
95 | for (int j=blockIdx.y*blockDim.x+threadIdx.x;j<m;j+=blockDim.x*gridDim.y){
96 | float q=query[i*m+j]*dataset[i*n+n-1];
97 | int r=n-1;
98 | for (int k=base;k>=1;k>>=1)
99 | if (r>=k && dataset[i*n+r-k]>=q)
100 | r-=k;
101 | result[i*m+j]=r;
102 | }
103 | }
104 | }
105 | __global__ void farthestpointsamplingKernel(int b,int n,int m,const float * __restrict__ dataset,float * __restrict__ temp,int * __restrict__ idxs){
106 | if (m<=0)
107 | return;
108 | const int BlockSize=512;
109 | __shared__ float dists[BlockSize];
110 | __shared__ int dists_i[BlockSize];
111 | const int BufferSize=3072;
112 | __shared__ float buf[BufferSize*3];
113 | for (int i=blockIdx.x;ibest){
147 | best=d2;
148 | besti=k;
149 | }
150 | }
151 | dists[threadIdx.x]=best;
152 | dists_i[threadIdx.x]=besti;
153 | for (int u=0;(1<>(u+1))){
156 | int i1=(threadIdx.x*2)<
194 | void cumsumLauncher(int b,int n,const float * inp,float * out){
195 | cumsumKernel<<<32,512>>>(b,n,inp,out);
196 | }
197 | //require b*n working space
198 | void probsampleLauncher(int b,int n,int m,const float * inp_p,const float * inp_r,float * temp,int * out){
199 | cumsumKernel<<<32,512>>>(b,n,inp_p,temp);
200 | binarysearchKernel<<<dim3(32,8,1),512>>>(b,n,m,temp,inp_r,out);
201 | }
202 | //require 32*n working space
203 | void farthestpointsamplingLauncher(int b,int n,int m,const float * inp,float * temp,int * out){
204 | farthestpointsamplingKernel<<<32,512>>>(b,n,m,inp,temp,out);
205 | }
206 | void gatherpointLauncher(int b,int n,int m,const float * inp,const int * idx,float * out){
207 | gatherpointKernel<<<dim3(2,8,1),512>>>(b,n,m,inp,idx,out);
208 | }
209 | void scatteraddpointLauncher(int b,int n,int m,const float * out_g,const int * idx,float * inp_g){
210 | scatteraddpointKernel<<<dim3(2,8,1),512>>>(b,n,m,out_g,idx,inp_g);
211 | }
212 | 
213 | 
-------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling_g.cu.o: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/sampling/tf_sampling_g.cu.o -------------------------------------------------------------------------------- /code/tf_ops/sampling/tf_sampling_so.so: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/tf_ops/sampling/tf_sampling_so.so -------------------------------------------------------------------------------- /code/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/__init__.py -------------------------------------------------------------------------------- /code/utils/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/__init__.pyc -------------------------------------------------------------------------------- /code/utils/data_prep_util.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | BASE_DIR = os.path.dirname(os.path.abspath(__file__)) 4 | sys.path.append(BASE_DIR) 5 | from plyfile import (PlyData, PlyElement, make2d, PlyParseError, PlyProperty) 6 | import numpy as np 7 | import h5py 8 | 9 | SAMPLING_BIN = os.path.join(BASE_DIR, 'third_party/mesh_sampling/build/pcsample') 10 | 11 | SAMPLING_POINT_NUM = 2048 12 | SAMPLING_LEAF_SIZE = 0.005 13 | 14 | MODELNET40_PATH = '../datasets/modelnet40' 15 | def export_ply(pc, filename): 16 | vertex = np.zeros(pc.shape[0], dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')]) 17 | for i in range(pc.shape[0]): 18 | vertex[i] = (pc[i][0], pc[i][1], pc[i][2]) 19 | ply_out = PlyData([PlyElement.describe(vertex, 'vertex', comments=['vertices'])]) 20 | ply_out.write(filename) 21 | 22 | # Sample points on the obj shape 23 | def get_sampling_command(obj_filename, ply_filename): 24 | cmd = SAMPLING_BIN + ' ' + obj_filename 25 | cmd += ' ' + ply_filename 26 | cmd += ' -n_samples %d ' % SAMPLING_POINT_NUM 27 | cmd += ' -leaf_size %f ' % SAMPLING_LEAF_SIZE 28 | return cmd 29 | 30 | # -------------------------------------------------------------- 31 | # Following are the helper functions to load MODELNET40 shapes 32 | # -------------------------------------------------------------- 33 | 34 | # Read in the list of categories in MODELNET40 35 | def get_category_names(): 36 | shape_names_file = os.path.join(MODELNET40_PATH, 'shape_names.txt') 37 | shape_names = [line.rstrip() for line in open(shape_names_file)] 38 | return shape_names 39 | 40 | # Return all the filepaths for the shapes in MODELNET40 41 | def get_obj_filenames(): 42 | obj_filelist_file = os.path.join(MODELNET40_PATH, 'filelist.txt') 43 | obj_filenames = [os.path.join(MODELNET40_PATH, line.rstrip()) for line in open(obj_filelist_file)] 44 | print('Got %d obj files in modelnet40.' 
% len(obj_filenames))
45 | return obj_filenames
46 | 
47 | # Helper function to create the father folder and all subdir folders if not exist
48 | def batch_mkdir(output_folder, subdir_list):
49 | if not os.path.exists(output_folder):
50 | os.mkdir(output_folder)
51 | for subdir in subdir_list:
52 | if not os.path.exists(os.path.join(output_folder, subdir)):
53 | os.mkdir(os.path.join(output_folder, subdir))
54 | 
55 | # ----------------------------------------------------------------
56 | # Following are the helper functions to save/load HDF5 files
57 | # ----------------------------------------------------------------
58 | 
59 | # Write numpy array data and label to h5_filename
60 | def save_h5_data_label_normal(h5_filename, data, label, normal,
61 | data_dtype='float32', label_dtype='uint8', normal_dtype='float32'):
62 | h5_fout = h5py.File(h5_filename, 'w')
63 | h5_fout.create_dataset(
64 | 'data', data=data,
65 | compression='gzip', compression_opts=4,
66 | dtype=data_dtype)
67 | h5_fout.create_dataset(
68 | 'normal', data=normal,
69 | compression='gzip', compression_opts=4,
70 | dtype=normal_dtype)
71 | h5_fout.create_dataset(
72 | 'label', data=label,
73 | compression='gzip', compression_opts=1,
74 | dtype=label_dtype)
75 | h5_fout.close()
76 | 
77 | 
78 | # Write numpy array data and label to h5_filename
79 | def save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='uint8'):
80 | h5_fout = h5py.File(h5_filename, 'w')
81 | h5_fout.create_dataset(
82 | 'data', data=data,
83 | compression='gzip', compression_opts=4,
84 | dtype=data_dtype)
85 | h5_fout.create_dataset(
86 | 'label', data=label,
87 | compression='gzip', compression_opts=1,
88 | dtype=label_dtype)
89 | h5_fout.close()
90 | 
91 | # Read numpy array data and label from h5_filename
92 | def load_h5_data_label_normal(h5_filename):
93 | f = h5py.File(h5_filename, 'r')
94 | data = f['data'][:]
95 | label = f['label'][:]
96 | normal = f['normal'][:]
97 | return (data, label, normal)
98 | 
99 | # Read numpy array data and label from h5_filename
100 | def load_h5_data_label_seg(h5_filename):
101 | f = h5py.File(h5_filename, 'r')
102 | data = f['data'][:]
103 | label = f['label'][:]
104 | seg = f['pid'][:]
105 | return (data, label, seg)
106 | 
107 | # Read numpy array data and label from h5_filename
108 | def load_h5(h5_filename):
109 | f = h5py.File(h5_filename, 'r')
110 | data = f['data'][:]
111 | label = f['label'][:]
112 | return (data, label)
113 | 
114 | # ----------------------------------------------------------------
115 | # Following are the helper functions to save/load PLY files
116 | # ----------------------------------------------------------------
117 | 
118 | # Load PLY file
119 | def load_ply_data(filename, point_num):
120 | plydata = PlyData.read(filename)
121 | pc = plydata['vertex'].data[:point_num]
122 | pc_array = np.array([[x, y, z] for x,y,z in pc])
123 | return pc_array
124 | 
125 | # Load PLY file
126 | def load_ply_normal(filename, point_num):
127 | plydata = PlyData.read(filename)
128 | pc = plydata['normal'].data[:point_num]
129 | pc_array = np.array([[x, y, z] for x,y,z in pc])
130 | return pc_array
131 | 
132 | # Make up rows for Nxk array
133 | # Input Pad is 'edge' or 'constant'
134 | def pad_arr_rows(arr, row, pad='edge'):
135 | assert(len(arr.shape) == 2)
136 | assert(arr.shape[0] <= row)
137 | assert(pad == 'edge' or pad == 'constant')
138 | if arr.shape[0] == row:
139 | return arr
140 | if pad == 'edge':
141 | return np.lib.pad(arr, ((0, row-arr.shape[0]), (0, 0)), 'edge')
142 | if pad == 'constant':
143 | return np.lib.pad(arr, ((0, row-arr.shape[0]), (0, 0)), 'constant', constant_values=(0, 0))
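# Illustrative round trip for the HDF5 helpers above (the file path is
# hypothetical, not part of the original pipeline):
#
#   pts = np.random.rand(16, 2048, 3).astype('float32')
#   lbl = np.zeros(16, dtype='uint8')
#   save_h5('/tmp/example.h5', pts, lbl, data_dtype='float32')
#   data, label = load_h5('/tmp/example.h5')
#   assert data.shape == (16, 2048, 3)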
144 | 
145 | 
146 | 
-------------------------------------------------------------------------------- /code/utils/eulerangles.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/eulerangles.pyc -------------------------------------------------------------------------------- /code/utils/modelnet_data_prep.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/modelnet_data_prep.pyc -------------------------------------------------------------------------------- /code/utils/off2obj.py: --------------------------------------------------------------------------------
1 | #! /usr/bin/python
2 | # Written by John Bowers
3 | # http://johnsresearch.wordpress.com
4 | # 2009
5 | # You are welcome to use this however you want, this is public domain.
6 | 
7 | import sys
8 | 
9 | if len(sys.argv) == 3:
10 | off_path = sys.argv[1]
11 | obj_path = sys.argv[2]
12 | else:
13 | print "USAGE: off2obj.py [path to mesh] [output path]"
14 | sys.exit(0)
15 | 
16 | # Class Mesh represents a mesh by a vertex list and a face list
17 | # and has a method loadFromOffFile to load the Mesh data from an
18 | # OFF file.
19 | class Mesh:
20 | """Class Represents a Mesh by (V, F)"""
21 | def __init__(self):
22 | self.verts = []
23 | self.faces = []
24 | self.nVerts = 0
25 | self.nFaces = 0
26 | self.edges = None
27 | def writeToObjFile(self, pathToObjFile):
28 | objFile = open(pathToObjFile, 'w')
29 | objFile.write("# off2obj OBJ File\n")
30 | objFile.write("# http://johnsresearch.wordpress.com\n")
31 | for vert in self.verts:
32 | objFile.write("v ")
33 | objFile.write(str(vert[0]))
34 | objFile.write(" ")
35 | objFile.write(str(vert[1]))
36 | objFile.write(" ")
37 | objFile.write(str(vert[2]))
38 | objFile.write("\n")
39 | objFile.write("s off\n")
40 | for face in self.faces:
41 | objFile.write("f ")
42 | objFile.write(str(face[0]+1))
43 | objFile.write(" ")
44 | objFile.write(str(face[1]+1))
45 | objFile.write(" ")
46 | objFile.write(str(face[2]+1))
47 | objFile.write("\n")
48 | objFile.close()
49 | def loadFromOffFile(self, pathToOffFile):
50 | #Reset this mesh:
51 | self.verts = []
52 | self.faces = []
53 | self.nVerts = 0
54 | self.nFaces = 0
55 | 
56 | #Open the file for reading:
57 | offFile = open(pathToOffFile, 'r')
58 | lines = offFile.readlines()
59 | 
60 | #Read the number of verts and faces
61 | params = lines[1].split()
62 | self.nVerts = int(params[0])
63 | self.nFaces = int(params[1])
64 | 
65 | #split the remaining lines into vert and face arrays
66 | vertLines = lines[2:2+self.nVerts]
67 | faceLines = lines[2+self.nVerts:2+self.nVerts+self.nFaces]
68 | 
69 | #Create the verts array
70 | for vertLine in vertLines:
71 | XYZ = vertLine.split()
72 | self.verts.append([float(XYZ[0]), float(XYZ[1]), float(XYZ[2])])
73 | 
74 | #Create the faces array
75 | for faceLine in faceLines:
76 | XYZ = faceLine.split()
77 | self.faces.append((int(XYZ[1]), int(XYZ[2]), int(XYZ[3])))
78 | if not(int(XYZ[0]) == 3):
79 | print "ERROR: This OFF loader can only handle meshes with 3 vertex faces."
80 | print "A face with", XYZ[0], "vertices is included in the file. Exiting."
81 | offFile.close() 82 | sys.exit(0) 83 | 84 | #Cleanup 85 | offFile.close() 86 | def edgeList(self): 87 | if not(self.edges == None): 88 | return self.edges 89 | self.edges = [] 90 | for i in range(0, self.nVerts): 91 | self.edges.append([]) 92 | for face in self.faces: 93 | i = face[0] 94 | j = face[1] 95 | k = face[2] 96 | if not(j in self.edges[i]): 97 | self.edges[i].append(j) 98 | if not(k in self.edges[i]): 99 | self.edges[i].append(k) 100 | if not(i in self.edges[j]): 101 | self.edges[j].append(i) 102 | if not(k in self.edges[j]): 103 | self.edges[j].append(k) 104 | if not(i in self.edges[k]): 105 | self.edges[k].append(i) 106 | if not(j in self.edges[k]): 107 | self.edges[k].append(j) 108 | return self.edges 109 | 110 | """ Main Program """ 111 | 112 | mesh = Mesh() 113 | mesh.loadFromOffFile(off_path) 114 | mesh.writeToObjFile(obj_path) -------------------------------------------------------------------------------- /code/utils/pc_util.py: -------------------------------------------------------------------------------- 1 | """ Utility functions for processing point clouds. 2 | 3 | Author: Charles R. Qi, Hao Su 4 | Date: November 2016 5 | """ 6 | 7 | import os 8 | import sys 9 | from matplotlib import pyplot as plt 10 | from matplotlib import colors 11 | 12 | BASE_DIR = os.path.dirname(os.path.abspath(__file__)) 13 | sys.path.append(BASE_DIR) 14 | 15 | # Draw point cloud 16 | from eulerangles import euler2mat 17 | 18 | # Point cloud IO 19 | import numpy as np 20 | from plyfile import PlyData, PlyElement 21 | 22 | 23 | # ---------------------------------------- 24 | # Point Cloud/Volume Conversions 25 | # ---------------------------------------- 26 | 27 | def point_cloud_to_volume_batch(point_clouds, vsize=12, radius=1.0, flatten=True): 28 | """ Input is BxNx3 batch of point cloud 29 | Output is Bx(vsize^3) 30 | """ 31 | vol_list = [] 32 | for b in range(point_clouds.shape[0]): 33 | vol = point_cloud_to_volume(np.squeeze(point_clouds[b,:,:]), vsize, radius) 34 | if flatten: 35 | vol_list.append(vol.flatten()) 36 | else: 37 | vol_list.append(np.expand_dims(np.expand_dims(vol, -1), 0)) 38 | if flatten: 39 | return np.vstack(vol_list) 40 | else: 41 | return np.concatenate(vol_list, 0) 42 | 43 | 44 | def point_cloud_to_volume(points, vsize, radius=1.0): 45 | """ input is Nx3 points. 46 | output is vsize*vsize*vsize 47 | assumes points are in range [-radius, radius] 48 | """ 49 | vol = np.zeros((vsize,vsize,vsize)) 50 | voxel = 2*radius/float(vsize) 51 | locations = (points + radius)/voxel 52 | locations = locations.astype(int) 53 | vol[locations[:,0],locations[:,1],locations[:,2]] = 1.0 54 | return vol 55 | 56 | #a = np.zeros((16,1024,3)) 57 | #print point_cloud_to_volume_batch(a, 12, 1.0, False).shape 58 | 59 | def volume_to_point_cloud(vol): 60 | """ vol is occupancy grid (value = 0 or 1) of size vsize*vsize*vsize 61 | return Nx3 numpy array. 
62 |     """
63 |     vsize = vol.shape[0]
64 |     assert(vol.shape[1] == vsize and vol.shape[2] == vsize)
65 |     points = []
66 |     for a in range(vsize):
67 |         for b in range(vsize):
68 |             for c in range(vsize):
69 |                 if vol[a,b,c] == 1:
70 |                     points.append(np.array([a,b,c]))
71 |     if len(points) == 0:
72 |         return np.zeros((0,3))
73 |     points = np.vstack(points)
74 |     return points
75 | 
76 | # ----------------------------------------
77 | # Point cloud IO
78 | # ----------------------------------------
79 | 
80 | def read_ply(filename):
81 |     """ read XYZ point cloud from filename PLY file """
82 |     plydata = PlyData.read(filename)
83 |     pc = plydata['vertex'].data
84 |     pc_array = np.array([[x, y, z] for x, y, z in pc])
85 |     return pc_array
86 | 
87 | 
88 | def write_ply(points, filename, text=True):
89 |     """ input: Nx3, write points to filename as PLY format. """
90 |     points = [(points[i,0], points[i,1], points[i,2]) for i in range(points.shape[0])]
91 |     vertex = np.array(points, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])
92 |     el = PlyElement.describe(vertex, 'vertex', comments=['vertices'])
93 |     PlyData([el], text=text).write(filename)
94 | 
95 | 
96 | # ----------------------------------------
97 | # Simple Point cloud and Volume Renderers
98 | # ----------------------------------------
99 | 
100 | def draw_point_cloud(input_points, canvasSize=500, space=240, diameter=10,
101 |                      xrot=0, yrot=0, zrot=0, switch_xyz=[0,1,2], normalize=True):
102 |     """ Render point cloud to image with alpha channel.
103 |         Input:
104 |             points: Nx3 numpy array (+y is up direction)
105 |         Output:
106 |             gray image as numpy array of size canvasSize x canvasSize
107 |     """
108 |     canvasSizeX = canvasSize
109 |     canvasSizeY = canvasSize
110 | 
111 |     image = np.zeros((canvasSizeX, canvasSizeY))
112 |     if input_points is None or input_points.shape[0] == 0:
113 |         return image
114 | 
115 |     points = input_points[:, switch_xyz]
116 |     M = euler2mat(zrot, yrot, xrot)
117 |     points = (np.dot(M, points.transpose())).transpose()
118 | 
119 |     # Normalize the point cloud
120 |     # We normalize scale to fit points in a unit sphere
121 |     if normalize:
122 |         centroid = np.mean(points, axis=0)
123 |         points -= centroid
124 |         furthest_distance = np.max(np.sqrt(np.sum(abs(points)**2, axis=-1)))
125 |         points /= furthest_distance
126 | 
127 |     # Pre-compute the Gaussian disk
128 |     radius = (diameter-1)/2.0
129 |     disk = np.zeros((diameter, diameter))
130 |     for i in range(diameter):
131 |         for j in range(diameter):
132 |             if (i-radius) * (i-radius) + (j-radius) * (j-radius) <= radius * radius:
133 |                 disk[i, j] = np.exp((-(i-radius)**2 - (j-radius)**2)/(radius**2))
134 |     mask = np.argwhere(disk > 0)
135 |     dx = mask[:, 0]
136 |     dy = mask[:, 1]
137 |     dv = disk[disk > 0]
138 | 
139 |     # Order points by z-buffer
140 |     zorder = np.argsort(points[:, 2])
141 |     points = points[zorder, :]
142 |     points[:, 2] = (points[:, 2] - np.min(points[:, 2])) / (np.max(points[:, 2]) - np.min(points[:, 2]))
143 |     max_depth = np.max(points[:, 2])
144 | 
145 |     for i in range(points.shape[0]):
146 |         j = points.shape[0] - i - 1
147 |         x = points[j, 0]
148 |         y = points[j, 1]
149 |         xc = canvasSizeX/2 + (x*space)
150 |         yc = canvasSizeY/2 + (y*space)
151 |         xc = int(np.round(xc))
152 |         yc = int(np.round(yc))
153 | 
154 |         px = dx + xc
155 |         py = dy + yc
156 |         #image[px, py] = image[px, py] * 0.7 + dv * (max_depth - points[j, 2]) * 0.3
157 |         image[px, py] = image[px, py] * 0.7 + dv * 0.3
158 | 
159 |     val = np.max(image)
160 |     val = np.percentile(image, 99.9)
161 |     image = image / val
162 |     mask = image==0
163 | 
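    # Clip intensities to [0,1], invert so points render dark on a white canvas,
    # and finally force pixels that received no splat back to pure white.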
164 |     image[image>1.0] = 1.0
165 |     image = 1.0-image
166 |     #image = np.expand_dims(image, axis=-1)
167 |     #image = np.concatenate((image*0.3+0.7, np.ones_like(image), np.ones_like(image)), axis=2)
168 |     #image = colors.hsv_to_rgb(image)
169 |     image[mask] = 1.0
170 | 
171 | 
172 |     return image
173 | 
174 | def point_cloud_three_views(points, diameter=5):
175 |     """ input points Nx3 numpy array (+y is up direction).
176 |         return a numpy array gray image of size 500x1500. """
177 |     # +y is up direction
178 |     # xrot is azimuth
179 |     # yrot is in-plane
180 |     # zrot is elevation
181 |     # img1 = draw_point_cloud(points, xrot=90/180.0*np.pi, yrot=0/180.0*np.pi, zrot=0/180.0*np.pi, diameter=diameter)
182 |     # img2 = draw_point_cloud(points, xrot=180/180.0*np.pi, yrot=0/180.0*np.pi, zrot=0/180.0*np.pi, diameter=diameter)
183 |     # img3 = draw_point_cloud(points, xrot=0/180.0*np.pi, yrot=-90/180.0*np.pi, zrot=0/180.0*np.pi, diameter=diameter)
184 |     # image_large = np.concatenate([img1, img2, img3], 1)
185 | 
186 |     img1 = draw_point_cloud(points, zrot=110 / 180.0 * np.pi, xrot=135 / 180.0 * np.pi, yrot=0 / 180.0 * np.pi, diameter=diameter)
187 |     img2 = draw_point_cloud(points, zrot=70 / 180.0 * np.pi, xrot=135 / 180.0 * np.pi, yrot=0 / 180.0 * np.pi, diameter=diameter)
188 |     img3 = draw_point_cloud(points, zrot=180.0 / 180.0 * np.pi, xrot=90 / 180.0 * np.pi, yrot=0 / 180.0 * np.pi, diameter=diameter)
189 |     image_large = np.concatenate([img1, img2, img3], 1)
190 | 
191 |     return image_large
192 | 
193 | 
194 | from PIL import Image
195 | def point_cloud_three_views_demo():
196 |     """ Demo for draw_point_cloud function """
197 |     points = read_ply('../third_party/mesh_sampling/piano.ply')
198 |     im_array = point_cloud_three_views(points)
199 |     img = Image.fromarray(np.uint8(im_array*255.0))
200 |     img.save('piano.jpg')
201 | 
202 | if __name__=="__main__":
203 |     point_cloud_three_views_demo()
204 | 
205 | 
206 | from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection used below
207 | def pyplot_draw_point_cloud(points, output_filename):
208 |     """ points is a Nx3 numpy array """
209 |     fig = plt.figure()
210 |     ax = fig.add_subplot(111, projection='3d')
211 |     ax.scatter(points[:,0], points[:,1], points[:,2])
212 |     ax.set_xlabel('x')
213 |     ax.set_ylabel('y')
214 |     ax.set_zlabel('z')
215 |     plt.savefig(output_filename)
216 | 
217 | def pyplot_draw_volume(vol, output_filename):
218 |     """ vol is of size vsize*vsize*vsize
219 |         output an image to output_filename
220 |     """
221 |     points = volume_to_point_cloud(vol)
222 |     pyplot_draw_point_cloud(points, output_filename)
223 | 
--------------------------------------------------------------------------------
/code/utils/pc_util.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/pc_util.pyc
--------------------------------------------------------------------------------
/code/utils/plyfile.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/plyfile.pyc
--------------------------------------------------------------------------------
/code/utils/pointnet_util.py:
--------------------------------------------------------------------------------
1 | """ PointNet++ Layers
2 | 
3 | Author: Charles R.
Qi 4 | Date: November 2017 5 | """ 6 | 7 | import os 8 | import sys 9 | from tf_ops.sampling.tf_sampling import farthest_point_sample, gather_point 10 | from tf_ops.grouping.tf_grouping import query_ball_point, group_point, knn_point 11 | from tf_ops.interpolation.tf_interpolate import three_nn, three_interpolate 12 | import tensorflow as tf 13 | import numpy as np 14 | import tf_util2 15 | 16 | def sample_and_group(npoint, radius, nsample, xyz, points, tnet_spec=None, knn=False, use_xyz=True): 17 | ''' 18 | Input: 19 | npoint: int32 20 | radius: float32 21 | nsample: int32 22 | xyz: (batch_size, ndataset, 3) TF tensor 23 | points: (batch_size, ndataset, channel) TF tensor, if None will just use xyz as points 24 | tnet_spec: dict (keys: mlp, mlp2, is_training, bn_decay), if None do not apply tnet 25 | knn: bool, if True use kNN instead of radius search 26 | use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features 27 | Output: 28 | new_xyz: (batch_size, npoint, 3) TF tensor 29 | new_points: (batch_size, npoint, nsample, 3+channel) TF tensor 30 | idx: (batch_size, npoint, nsample) TF tensor, indices of local points as in ndataset points 31 | grouped_xyz: (batch_size, npoint, nsample, 3) TF tensor, normalized point XYZs 32 | (subtracted by seed point XYZ) in local regions 33 | ''' 34 | 35 | new_xyz = gather_point(xyz, farthest_point_sample(npoint, xyz)) # (batch_size, npoint, 3) 36 | if knn: 37 | _,idx = knn_point(nsample, xyz, new_xyz) 38 | else: 39 | if np.isscalar(radius): 40 | idx, pts_cnt = query_ball_point(radius, nsample, xyz, new_xyz) 41 | else: 42 | idx_list = [] 43 | for radius_one, xyz_one, new_xyz_one in zip(tf.unstack(radius,axis=0), tf.unstack(xyz, axis=0),tf.unstack(new_xyz, axis=0)): 44 | idx_one, _ = query_ball_point(radius_one, nsample, tf.expand_dims(xyz_one, axis=0), tf.expand_dims(new_xyz_one, axis=0)) 45 | idx_list.append(idx_one) 46 | idx = tf.stack(idx_list, axis=0) 47 | idx = tf.squeeze(idx, axis=1) 48 | 49 | grouped_xyz = group_point(xyz, idx) # (batch_size, npoint, nsample, 3) 50 | grouped_xyz -= tf.tile(tf.expand_dims(new_xyz, 2), [1,1,nsample,1]) # translation normalization 51 | if tnet_spec is not None: 52 | grouped_xyz = tnet(grouped_xyz, tnet_spec) 53 | if points is not None: 54 | grouped_points = group_point(points, idx) # (batch_size, npoint, nsample, channel) 55 | if use_xyz: 56 | # new_points = tf.concat([grouped_xyz, tf.tile(tf.expand_dims(new_xyz, 2), [1,1,nsample,1]),grouped_points], axis=-1) # (batch_size, npoint, nample, 3+channel) 57 | new_points = tf.concat([grouped_xyz, grouped_points],axis=-1) # (batch_size, npoint, nample, 3+channel) 58 | else: 59 | new_points = grouped_points 60 | else: 61 | # new_points = tf.concat([grouped_xyz, tf.tile(tf.expand_dims(new_xyz, 2), [1,1,nsample,1])], axis=-1) 62 | new_points = grouped_xyz 63 | 64 | return new_xyz, new_points, idx, grouped_xyz 65 | 66 | 67 | def sample_and_group_all(xyz, points, use_xyz=True): 68 | ''' 69 | Inputs: 70 | xyz: (batch_size, ndataset, 3) TF tensor 71 | points: (batch_size, ndataset, channel) TF tensor, if None will just use xyz as points 72 | use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features 73 | Outputs: 74 | new_xyz: (batch_size, 1, 3) as (0,0,0) 75 | new_points: (batch_size, 1, ndataset, 3+channel) TF tensor 76 | Note: 77 | Equivalent to sample_and_group with npoint=1, radius=inf, use (0,0,0) as the centroid 78 | ''' 79 | batch_size = xyz.get_shape()[0].value 80 | nsample = xyz.get_shape()[1].value 
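    # One group spans the whole cloud: the "sampled" centroid is fixed at the
    # origin and idx simply enumerates all nsample input points.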
81 |     new_xyz = tf.constant(np.tile(np.array([0,0,0]).reshape((1,1,3)), (batch_size,1,1)), dtype=tf.float32) # (batch_size, 1, 3)
82 |     idx = tf.constant(np.tile(np.array(range(nsample)).reshape((1,1,nsample)), (batch_size,1,1)))
83 |     grouped_xyz = tf.reshape(xyz, (batch_size, 1, nsample, 3)) # (batch_size, npoint=1, nsample, 3)
84 |     if points is not None:
85 |         if use_xyz:
86 |             new_points = tf.concat([xyz, points], axis=2) # (batch_size, 16, 259)
87 |         else:
88 |             new_points = points
89 |         new_points = tf.expand_dims(new_points, 1) # (batch_size, 1, 16, 259)
90 |     else:
91 |         new_points = grouped_xyz
92 |     return new_xyz, new_points, idx, grouped_xyz
93 | 
94 | 
95 | def pointnet_sa_module(xyz, points, npoint, radius, nsample, mlp, mlp2, group_all, is_training,
96 |                        bn_decay, scope, bn=True, ibn=False, pooling='max', tnet_spec=None, knn=False, use_xyz=True):
97 |     ''' PointNet Set Abstraction (SA) Module
98 |         Input:
99 |             xyz: (batch_size, ndataset, 3) TF tensor
100 |             points: (batch_size, ndataset, channel) TF tensor
101 |             npoint: int32 -- #points sampled in farthest point sampling
102 |             radius: float32 -- search radius in local region
103 |             batch_radius: the size of each object
104 |             nsample: int32 -- how many points in each local region
105 |             mlp: list of int32 -- output size for MLP on each point
106 |             mlp2: list of int32 -- output size for MLP on each region
107 |             group_all: bool -- group all points into one PC if set true, OVERRIDE
108 |                 npoint, radius and nsample settings
109 |             use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features
110 |         Return:
111 |             new_xyz: (batch_size, npoint, 3) TF tensor
112 |             new_points: (batch_size, npoint, mlp[-1] or mlp2[-1]) TF tensor
113 |             idx: (batch_size, npoint, nsample) int32 -- indices for local regions
114 |     '''
115 |     with tf.variable_scope(scope) as sc:
116 |         if group_all:
117 |             nsample = xyz.get_shape()[1].value
118 |             new_xyz, new_points, idx, grouped_xyz = sample_and_group_all(xyz, points, use_xyz)
119 |         else:
120 |             new_xyz, new_points, idx, grouped_xyz = sample_and_group(npoint, radius, nsample, xyz, points, tnet_spec, knn, use_xyz)
121 |         if mlp2 is None: mlp2 = []
122 |         for i, num_out_channel in enumerate(mlp):
123 |             new_points = tf_util2.conv2d(new_points, num_out_channel, [1,1],
124 |                                          padding='VALID', stride=[1,1],
125 |                                          bn=bn, ibn=ibn, is_training=is_training,
126 |                                          scope='conv%d'%(i), bn_decay=bn_decay)
127 |         if pooling=='avg':
128 |             new_points = tf.layers.average_pooling2d(new_points, [1,nsample], [1,1], padding='VALID', name='avgpool1')
129 |         elif pooling=='weighted_avg':
130 |             with tf.variable_scope('weighted_avg1'):
131 |                 dists = tf.norm(grouped_xyz, axis=-1, ord=2, keep_dims=True)
132 |                 exp_dists = tf.exp(-dists * 5)
133 |                 weights = exp_dists/tf.reduce_sum(exp_dists, axis=2, keep_dims=True) # (batch_size, npoint, nsample, 1)
134 |                 new_points *= weights # (batch_size, npoint, nsample, mlp[-1])
135 |                 new_points = tf.reduce_sum(new_points, axis=2, keep_dims=True)
136 |         elif pooling=='max':
137 |             new_points = tf.reduce_max(new_points, axis=[2], keep_dims=True)
138 |         elif pooling=='min':
139 |             new_points = tf.layers.max_pooling2d(-1 * new_points, [1, nsample], [1, 1], padding='VALID', name='minpool1')
140 |         elif pooling=='max_and_avg':
141 |             max_points = tf.layers.max_pooling2d(new_points, [1,nsample], [1,1], padding='VALID', name='maxpool1')
142 |             avg_points = tf.layers.average_pooling2d(new_points, [1,nsample], [1,1], padding='VALID', name='avgpool1')
143 |             new_points = tf.concat([max_points, avg_points], axis=-1)
144 | 
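        # Optional second MLP (mlp2) refines the pooled per-region feature
        # before the sample axis is squeezed out below.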
145 |         if mlp2 is None: mlp2 = []
146 |         for i, num_out_channel in enumerate(mlp2):
147 |             new_points = tf_util2.conv2d(new_points, num_out_channel, [1,1],
148 |                                          padding='VALID', stride=[1,1],
149 |                                          bn=bn, ibn=ibn, is_training=is_training,
150 |                                          scope='conv_post_%d'%(i), bn_decay=bn_decay)
151 |         new_points = tf.squeeze(new_points, [2]) # (batch_size, npoints, mlp2[-1])
152 |         return new_xyz, new_points, idx
153 | 
154 | def pointnet_sa_module_msg(xyz, points, npoint, radius_list, nsample_list, mlp_list, is_training, bn_decay, scope, bn=True, ibn=False, use_xyz=True):
155 |     ''' PointNet Set Abstraction (SA) module with Multi-Scale Grouping (MSG)
156 |         Input:
157 |             xyz: (batch_size, ndataset, 3) TF tensor
158 |             points: (batch_size, ndataset, channel) TF tensor
159 |             npoint: int32 -- #points sampled in farthest point sampling
160 |             radius: list of float32 -- search radius in local region
161 |             nsample: list of int32 -- how many points in each local region
162 |             mlp: list of list of int32 -- output size for MLP on each point
163 |             use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features
164 |         Return:
165 |             new_xyz: (batch_size, npoint, 3) TF tensor
166 |             new_points: (batch_size, npoint, \sum_k{mlp[k][-1]}) TF tensor
167 |     '''
168 |     with tf.variable_scope(scope) as sc:
169 |         new_xyz = gather_point(xyz, farthest_point_sample(npoint, xyz))
170 |         new_points_list = []
171 |         for i in range(len(radius_list)):
172 |             radius = radius_list[i]
173 |             nsample = nsample_list[i]
174 |             idx, pts_cnt = query_ball_point(radius, nsample, xyz, new_xyz)
175 |             grouped_xyz = group_point(xyz, idx)
176 |             grouped_xyz -= tf.expand_dims(new_xyz, 2)
177 |             if points is not None:
178 |                 grouped_points = group_point(points, idx)
179 |                 if use_xyz:
180 |                     grouped_points = tf.concat([grouped_points, grouped_xyz], axis=-1)
181 |             else:
182 |                 grouped_points = grouped_xyz
183 |             for j, num_out_channel in enumerate(mlp_list[i]):
184 |                 grouped_points = tf_util2.conv2d(grouped_points, num_out_channel, [1,1],
185 |                                                  padding='VALID', stride=[1,1], bn=bn, ibn=ibn, is_training=is_training,
186 |                                                  scope='conv%d_%d'%(i,j), bn_decay=bn_decay)
187 |             new_points = tf.reduce_max(grouped_points, axis=[2])
188 |             new_points_list.append(new_points)
189 |         new_points_concat = tf.concat(new_points_list, axis=-1)
190 |         return new_xyz, new_points_concat
191 | 
192 | 
193 | def pointnet_fp_module(xyz1, xyz2, points1, points2, mlp, is_training, bn_decay, scope, bn=True, ibn=False):
194 |     ''' PointNet Feature Propagation (FP) Module
195 |         Input:
196 |             xyz1: (batch_size, ndataset1, 3) TF tensor
197 |             xyz2: (batch_size, ndataset2, 3) TF tensor, sparser than xyz1
198 |             points1: (batch_size, ndataset1, nchannel1) TF tensor
199 |             points2: (batch_size, ndataset2, nchannel2) TF tensor
200 |             mlp: list of int32 -- output size for MLP on each point
201 |         Return:
202 |             new_points: (batch_size, ndataset1, mlp[-1]) TF tensor
203 |     '''
204 |     with tf.variable_scope(scope) as sc:
205 |         dist, idx = three_nn(xyz1, xyz2)
206 |         dist = tf.maximum(dist, 1e-10)
207 |         norm = tf.reduce_sum((1.0/dist), axis=2, keep_dims=True)
208 |         norm = tf.tile(norm, [1,1,3])
209 |         weight = (1.0/dist) / norm
210 |         interpolated_points = three_interpolate(points2, idx, weight)
211 | 
212 |         if points1 is not None:
213 |             new_points1 = tf.concat(axis=2, values=[interpolated_points, points1]) # B,ndataset1,nchannel1+nchannel2
214 |         else:
215 |             new_points1 = interpolated_points
216 |         new_points1 = tf.expand_dims(new_points1, 2)
217 |         for i, num_out_channel in enumerate(mlp):
218 |             new_points1 = tf_util2.conv2d(new_points1, num_out_channel, [1,1],
219 |                                           padding='VALID', stride=[1,1],
220 |                                           bn=bn, ibn=ibn, is_training=is_training,
221 |                                           scope='conv_%d'%(i), bn_decay=bn_decay)
222 |         new_points1 = tf.squeeze(new_points1, [2]) # B,ndataset1,mlp[-1]
223 |         return new_points1
224 | 
--------------------------------------------------------------------------------
/code/utils/pointnet_util.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/pointnet_util.pyc
--------------------------------------------------------------------------------
/code/utils/provider.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import numpy as np
4 | import h5py
5 | BASE_DIR = os.path.dirname(os.path.abspath(__file__))
6 | sys.path.append(BASE_DIR)
7 | 
8 | # Download dataset for point cloud classification
9 | DATA_DIR = os.path.join(BASE_DIR, 'data')
10 | if not os.path.exists(DATA_DIR):
11 |     os.mkdir(DATA_DIR)
12 | if not os.path.exists(os.path.join(DATA_DIR, 'modelnet40_ply_hdf5_2048')):
13 |     www = 'https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip'
14 |     zipfile = os.path.basename(www)
15 |     os.system('wget %s; unzip %s' % (www, zipfile))
16 |     os.system('mv %s %s' % (zipfile[:-4], DATA_DIR))
17 |     os.system('rm %s' % (zipfile))
18 | 
19 | 
20 | def shuffle_data(data, labels):
21 |     """ Shuffle data and labels.
22 |         Input:
23 |           data: B,N,... numpy array
24 |           label: B,... numpy array
25 |         Return:
26 |           shuffled data, label and shuffle indices
27 |     """
28 |     idx = np.arange(len(labels))
29 |     np.random.shuffle(idx)
30 |     return data[idx, ...], labels[idx], idx
31 | 
32 | 
33 | def rotate_point_cloud(batch_data):
34 |     """ Randomly rotate the point clouds to augment the dataset
35 |         rotation is per shape based along up direction
36 |         Input:
37 |           BxNx3 array, original batch of point clouds
38 |         Return:
39 |           BxNx3 array, rotated batch of point clouds
40 |     """
41 |     rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
42 |     for k in xrange(batch_data.shape[0]):
43 |         rotation_angle = np.random.uniform() * 2 * np.pi
44 |         cosval = np.cos(rotation_angle)
45 |         sinval = np.sin(rotation_angle)
46 |         rotation_matrix = np.array([[cosval, 0, sinval],
47 |                                     [0, 1, 0],
48 |                                     [-sinval, 0, cosval]])
49 |         shape_pc = batch_data[k, ...]
50 |         rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), rotation_matrix)
51 |     return rotated_data
52 | 
53 | 
54 | def rotate_point_cloud_by_angle(batch_data, rotation_angle):
55 |     """ Rotate the point cloud along up direction with certain angle.
56 |         Input:
57 |           BxNx3 array, original batch of point clouds
58 |         Return:
59 |           BxNx3 array, rotated batch of point clouds
60 |     """
61 |     rotated_data = np.zeros(batch_data.shape, dtype=np.float32)
62 |     for k in xrange(batch_data.shape[0]):
63 |         #rotation_angle = np.random.uniform() * 2 * np.pi
64 |         cosval = np.cos(rotation_angle)
65 |         sinval = np.sin(rotation_angle)
66 |         rotation_matrix = np.array([[cosval, 0, sinval],
67 |                                     [0, 1, 0],
68 |                                     [-sinval, 0, cosval]])
69 |         shape_pc = batch_data[k, ...]
70 |         rotated_data[k, ...]
= np.dot(shape_pc.reshape((-1, 3)), rotation_matrix) 71 | return rotated_data 72 | 73 | 74 | def rotate_perturbation_point_cloud(batch_data, angle_sigma=0.06, angle_clip=0.18): 75 | """ Randomly perturb the point clouds by small rotations 76 | Input: 77 | BxNx3 array, original batch of point clouds 78 | Return: 79 | BxNx3 array, rotated batch of point clouds 80 | """ 81 | rotated_data = np.zeros(batch_data.shape, dtype=np.float32) 82 | for k in xrange(batch_data.shape[0]): 83 | angles = np.clip(angle_sigma*np.random.randn(3), -angle_clip, angle_clip) 84 | Rx = np.array([[1,0,0], 85 | [0,np.cos(angles[0]),-np.sin(angles[0])], 86 | [0,np.sin(angles[0]),np.cos(angles[0])]]) 87 | Ry = np.array([[np.cos(angles[1]),0,np.sin(angles[1])], 88 | [0,1,0], 89 | [-np.sin(angles[1]),0,np.cos(angles[1])]]) 90 | Rz = np.array([[np.cos(angles[2]),-np.sin(angles[2]),0], 91 | [np.sin(angles[2]),np.cos(angles[2]),0], 92 | [0,0,1]]) 93 | R = np.dot(Rz, np.dot(Ry,Rx)) 94 | shape_pc = batch_data[k, ...] 95 | rotated_data[k, ...] = np.dot(shape_pc.reshape((-1, 3)), R) 96 | return rotated_data 97 | 98 | 99 | def jitter_point_cloud(batch_data, sigma=0.01, clip=0.05): 100 | """ Randomly jitter points. jittering is per point. 101 | Input: 102 | BxNx3 array, original batch of point clouds 103 | Return: 104 | BxNx3 array, jittered batch of point clouds 105 | """ 106 | B, N, C = batch_data.shape 107 | assert(clip > 0) 108 | jittered_data = np.clip(sigma * np.random.randn(B, N, C), -1*clip, clip) 109 | jittered_data += batch_data 110 | return jittered_data 111 | 112 | def shift_point_cloud(batch_data, shift_range=0.1): 113 | """ Randomly shift point cloud. Shift is per point cloud. 114 | Input: 115 | BxNx3 array, original batch of point clouds 116 | Return: 117 | BxNx3 array, shifted batch of point clouds 118 | """ 119 | B, N, C = batch_data.shape 120 | shifts = np.random.uniform(-shift_range, shift_range, (B,3)) 121 | for batch_index in range(B): 122 | batch_data[batch_index,:,:] += shifts[batch_index,:] 123 | return batch_data 124 | 125 | 126 | def random_scale_point_cloud(batch_data, scale_low=0.8, scale_high=1.25): 127 | """ Randomly scale the point cloud. Scale is per point cloud. 
128 | Input: 129 | BxNx3 array, original batch of point clouds 130 | Return: 131 | BxNx3 array, scaled batch of point clouds 132 | """ 133 | B, N, C = batch_data.shape 134 | scales = np.random.uniform(scale_low, scale_high, B) 135 | for batch_index in range(B): 136 | batch_data[batch_index,:,:] *= scales[batch_index] 137 | return batch_data 138 | 139 | def getDataFiles(list_filename): 140 | return [line.rstrip() for line in open(list_filename)] 141 | 142 | def load_h5(h5_filename): 143 | f = h5py.File(h5_filename) 144 | data = f['data'][:] 145 | label = f['label'][:] 146 | return (data, label) 147 | 148 | def loadDataFile(filename): 149 | return load_h5(filename) -------------------------------------------------------------------------------- /code/utils/provider.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/provider.pyc -------------------------------------------------------------------------------- /code/utils/show3d.py: -------------------------------------------------------------------------------- 1 | ''' 2 | 3 | The default behavior is to visualize the points as white dots 4 | >>>show3d.showpoints(np.random.rand(10000,3)) 5 | 6 | Control: 7 | key q: quit 8 | key Q: sys.exit(0) 9 | key n: zoom in 10 | key m: zoom out 11 | key s: save screenshot to 'show3d.png' 12 | Mouse: rotate 13 | 14 | You can also play a video by specifying waittime 15 | >>>[show3d.showpoints(np.random.rand(10000,3),waittime=10) for i in xrange(10000)] 16 | 17 | Color can also be useful 18 | >>>green=np.linspace(0,1,10000) 19 | >>>red=np.linspace(1,0,10000) 20 | >>>blue=np.linspace(1,0,10000)**2 21 | >>>show3d.showpoints(np.random.rand(10000,3),green,red,blue) 22 | 23 | Additional Parameters 24 | --------------------- 25 | normalizecolor: 26 | if True (default), scale the maximum color to 1 for each channel. 27 | magnifyBlue: 28 | if True, magnify the blue dots to make them more visible 29 | background: 30 | the background color. 
Defaults to black (0,0,0)
31 |     freezerot:
32 |         disable rotation
33 | 
34 | '''
35 | 
36 | 
37 | import numpy as np
38 | import cv2
39 | import sys
40 | showsz=800
41 | mousex,mousey=0.5,0.5
42 | zoom=1.0
43 | changed=True
44 | def onmouse(*args):
45 |     global mousex,mousey,changed
46 |     y=args[1]
47 |     x=args[2]
48 |     mousex=x/float(showsz)
49 |     mousey=y/float(showsz)
50 |     changed=True
51 | 
52 | def showpoints(xyz,c0=None,c1=None,c2=None,waittime=0,showrot=False,magnifyBlue=0,freezerot=False,background=(0,0,0),normalizecolor=True):
53 |     cv2.namedWindow('show3d')
54 |     cv2.moveWindow('show3d', 0, 0)
55 |     cv2.setMouseCallback('show3d', onmouse)
56 | 
57 |     global showsz,mousex,mousey,zoom,changed
58 |     if len(xyz.shape)!=2 or xyz.shape[1]!=3:
59 |         raise Exception('showpoints expects (n,3) shape for xyz')
60 |     if c0 is not None and c0.shape!=xyz.shape[:1]:
61 |         raise Exception('showpoints expects (n,) shape for c0')
62 |     if c1 is not None and c1.shape!=xyz.shape[:1]:
63 |         raise Exception('showpoints expects (n,) shape for c1')
64 |     if c2 is not None and c2.shape!=xyz.shape[:1]:
65 |         raise Exception('showpoints expects (n,) shape for c2')
66 |     xyz=xyz-xyz.mean(axis=0)
67 |     radius=((xyz**2).sum(axis=-1)**0.5).max()
68 |     xyz/=(radius*2.2)/showsz
69 |     if c0 is None:
70 |         c0=np.zeros((len(xyz),),dtype='float32')+255
71 |     if c1 is None:
72 |         c1=c0
73 |     if c2 is None:
74 |         c2=c0
75 |     if normalizecolor:
76 |         c0=c0/((c0.max()+1e-14)/255.0)
77 |         c1=c1/((c1.max()+1e-14)/255.0)
78 |         c2=c2/((c2.max()+1e-14)/255.0)
79 | 
80 |     show=np.zeros((showsz,showsz,3),dtype='uint8')
81 |     def render():
82 |         rotmat=np.eye(3)
83 |         if not freezerot:
84 |             xangle=(mousey-0.5)*np.pi*1.2
85 |         else:
86 |             xangle=0
87 |         rotmat=rotmat.dot(np.array([
88 |             [1.0,0.0,0.0],
89 |             [0.0,np.cos(xangle),-np.sin(xangle)],
90 |             [0.0,np.sin(xangle),np.cos(xangle)],
91 |         ]))
92 |         if not freezerot:
93 |             yangle=(mousex-0.5)*np.pi*1.2
94 |         else:
95 |             yangle=0
96 |         rotmat=rotmat.dot(np.array([
97 |             [np.cos(yangle),0.0,-np.sin(yangle)],
98 |             [0.0,1.0,0.0],
99 |             [np.sin(yangle),0.0,np.cos(yangle)],
100 |         ]))
101 |         rotmat*=zoom
102 |         nxyz=xyz.dot(rotmat)
103 |         nz=nxyz[:,2].argsort()
104 |         nxyz=nxyz[nz]
105 |         nxyz=(nxyz[:,:2]+[showsz/2,showsz/2]).astype('int32')
106 |         p=nxyz[:,0]*showsz+nxyz[:,1]
107 |         show[:]=background
108 |         m=(nxyz[:,0]>=0)*(nxyz[:,0]<showsz)*(nxyz[:,1]>=0)*(nxyz[:,1]<showsz)
109 |         show.reshape((showsz*showsz,3))[p[m],1]=c0[nz][m]
110 |         show.reshape((showsz*showsz,3))[p[m],2]=c1[nz][m]
111 |         show.reshape((showsz*showsz,3))[p[m],0]=c2[nz][m]
112 |         if magnifyBlue>0:
113 |             show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],1,axis=0))
114 |             if magnifyBlue>=2:
115 |                 show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],-1,axis=0))
116 |             show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],1,axis=1))
117 |             if magnifyBlue>=2:
118 |                 show[:,:,0]=np.maximum(show[:,:,0],np.roll(show[:,:,0],-1,axis=1))
119 |         if showrot:
120 |             cv2.putText(show,'xangle %d'%(int(xangle/np.pi*180)),(30,showsz-30),0,0.5,cv2.cv.CV_RGB(255,0,0))
121 |             cv2.putText(show,'yangle %d'%(int(yangle/np.pi*180)),(30,showsz-50),0,0.5,cv2.cv.CV_RGB(255,0,0))
122 |             cv2.putText(show,'zoom %d%%'%(int(zoom*100)),(30,showsz-70),0,0.5,cv2.cv.CV_RGB(255,0,0))
123 |     changed=True
124 |     while True:
125 |         if changed:
126 |             render()
127 |             changed=False
128 |         cv2.imshow('show3d',show)
129 |         if waittime==0:
130 |             cmd=cv2.waitKey(10)%256
131 |         else:
132 |             cmd=cv2.waitKey(waittime)%256
133 |         if cmd==ord('q'):
134 |             break
135 |         elif cmd==ord('Q'):
136 |             sys.exit(0)
137 |         if cmd==ord('n'):
138 |             zoom*=1.1
139 |             changed=True
140 |         elif cmd==ord('m'):
141 |             zoom/=1.1
142 |             changed=True
143 |         elif cmd==ord('r'):
144 |             zoom=1.0
145 |             changed=True
146 |         elif cmd==ord('s'):
147 |             cv2.imwrite('show3d.png',show)
148 |         if waittime!=0:
149 |             break
150 |     return cmd
151 | if __name__=='__main__':
152 |     showpoints(np.random.rand(10000,3))
153 |     green=np.linspace(0,1,10000)
154 |     red=np.linspace(1,0,10000)**0.5
155 |     blue=np.linspace(1,0,10000)
156 |     showpoints(np.random.rand(10000,3),green,red,blue,magnifyBlue=True)
157 | 
--------------------------------------------------------------------------------
/code/utils/show3d.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/show3d.pyc
--------------------------------------------------------------------------------
/code/utils/tf_util.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/tf_util.pyc
--------------------------------------------------------------------------------
/code/utils/tf_util2.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | 
3 | def lrelu(x, alpha=0.2):
4 |     return tf.nn.relu(x) - alpha * tf.nn.relu(-x)
5 | 
6 | 
7 | # def lrelu2(x, leak=0.2, name="lrelu"):
8 | #     with tf.variable_scope(name):
9 | #         f1 = 0.5 * (1 + leak)
10 | #         f2 = 0.5 * (1 - leak)
11 | #         return f1 * x + f2 * abs(x)
12 | 
13 | def instance_norm(net, train=True, weight_decay=0.00001):
14 |     batch, rows, cols, channels = [i.value for i in net.get_shape()]
15 |     var_shape = [channels]
16 |     mu, sigma_sq = tf.nn.moments(net, [1, 2], keep_dims=True)
17 | 
18 |     shift = tf.get_variable('shift', shape=var_shape,
19 |                             initializer=tf.zeros_initializer,
20 |                             regularizer=tf.contrib.layers.l2_regularizer(weight_decay))
21 |     scale = tf.get_variable('scale', shape=var_shape,
22 |                             initializer=tf.ones_initializer,
23 |                             regularizer=tf.contrib.layers.l2_regularizer(weight_decay))
24 |     epsilon = 1e-3
25 |     normalized = (net - mu) / tf.sqrt(sigma_sq + epsilon)  # divide by the standard deviation
26 |     return scale * normalized + shift
27 | 
28 | def conv2d(inputs,
29 |            num_output_channels,
30 |            kernel_size,
31 |            scope=None,
32 |            stride=[1, 1],
33 |            padding='SAME',
34 |            use_xavier=True,
35 |            stddev=1e-3,
36 |            weight_decay=0.00001,
37 |            activation_fn=tf.nn.relu,
38 |            bn=False,
39 |            ibn=False,
40 |            bn_decay=None,
41 |            use_bias=True,
42 |            is_training=None,
43 |            reuse=None):
44 |     """ 2D convolution with non-linear operation.
45 | 46 | Args: 47 | inputs: 4-D tensor variable BxHxWxC 48 | num_output_channels: int 49 | kernel_size: a list of 2 ints 50 | scope: string 51 | stride: a list of 2 ints 52 | padding: 'SAME' or 'VALID' 53 | use_xavier: bool, use xavier_initializer if true 54 | stddev: float, stddev for truncated_normal init 55 | weight_decay: float 56 | activation_fn: function 57 | bn: bool, whether to use batch norm 58 | bn_decay: float or float tensor variable in [0,1] 59 | is_training: bool Tensor variable 60 | 61 | Returns: 62 | Variable tensor 63 | """ 64 | with tf.variable_scope(scope,reuse=reuse) as sc: 65 | if use_xavier: 66 | initializer = tf.contrib.layers.xavier_initializer() 67 | else: 68 | initializer = tf.truncated_normal_initializer(stddev=stddev) 69 | 70 | outputs = tf.layers.conv2d(inputs,num_output_channels,kernel_size,stride,padding, 71 | kernel_initializer=initializer, 72 | kernel_regularizer=tf.contrib.layers.l2_regularizer(weight_decay), 73 | bias_regularizer=tf.contrib.layers.l2_regularizer(weight_decay), 74 | use_bias=use_bias,reuse=None) 75 | assert not (bn and ibn) 76 | if bn: 77 | outputs = tf.layers.batch_normalization(outputs,momentum=bn_decay,training=is_training,renorm=False,fused=True) 78 | #outputs = tf.contrib.layers.batch_norm(outputs,is_training=is_training) 79 | if ibn: 80 | outputs = instance_norm(outputs,is_training) 81 | 82 | 83 | if activation_fn is not None: 84 | outputs = activation_fn(outputs) 85 | 86 | return outputs 87 | 88 | 89 | def fully_connected(inputs, 90 | num_outputs, 91 | scope, 92 | use_xavier=True, 93 | stddev=1e-3, 94 | weight_decay=0.00001, 95 | activation_fn=tf.nn.relu, 96 | bn=False, 97 | bn_decay=None, 98 | use_bias = True, 99 | is_training=None): 100 | """ Fully connected layer with non-linear operation. 101 | 102 | Args: 103 | inputs: 2-D tensor BxN 104 | num_outputs: int 105 | 106 | Returns: 107 | Variable tensor of size B x num_outputs. 
108 |     """
109 | 
110 |     with tf.variable_scope(scope) as sc:
111 |         if use_xavier:
112 |             initializer = tf.contrib.layers.xavier_initializer()
113 |         else:
114 |             initializer = tf.truncated_normal_initializer(stddev=stddev)
115 | 
116 |         outputs = tf.layers.dense(inputs, num_outputs,
117 |                                   use_bias=use_bias, kernel_initializer=initializer,
118 |                                   kernel_regularizer=tf.contrib.layers.l2_regularizer(weight_decay),
119 |                                   bias_regularizer=tf.contrib.layers.l2_regularizer(weight_decay),
120 |                                   reuse=None)
121 | 
122 |         if bn:
123 |             outputs = tf.layers.batch_normalization(outputs, momentum=bn_decay, training=is_training, renorm=False)
124 | 
125 |         if activation_fn is not None:
126 |             outputs = activation_fn(outputs)
127 | 
128 |         return outputs
129 | 
--------------------------------------------------------------------------------
/code/utils/tf_util2.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/code/utils/tf_util2.pyc
--------------------------------------------------------------------------------
/code/utils/write_result2html.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 | from tqdm import tqdm
4 | from utils import pc_util
5 | from scipy.misc import imsave
6 | 
7 | def write_result():
8 |     root_path = "/home/lqyu/server/proj49/PointSR_data/test_data/our_collected_data"
9 |     model_names = ['1024_nonormal_generator2_2', '1024_nonormal_generator2_2_uniformloss',
10 |                    '1024_nonormal_generator2_2_recursive']
11 | 
12 |     index_path = os.path.join("index.html")
13 |     index = open(index_path, "w")
14 |     index.write("<html><body>")
15 |     index.write("<table border='1'>")
16 | 
17 |     index.write("<tr><th>name</th>")
18 |     for model in model_names:
19 |         index.write("<th>%s</th>" % model)
20 |     index.write("</tr>")
21 | 
22 |     # get sample list
23 |     items = os.listdir(root_path + "/" + model_names[0])
24 |     items.sort()
25 | 
26 |     # mkdir model image path
27 |     for model in model_names:
28 |         if not os.path.exists(root_path + "/" + model + "_three_view_img/"):
29 |             os.makedirs(root_path + "/" + model + "_three_view_img/")
30 | 
31 |     # write img to file
32 |     for item in tqdm(items):
33 |         index.write("<tr>")
34 |         index.write("<td>%s</td>" % item)
35 | 
36 |         # write prediction
37 |         for model in model_names:
38 |             path = root_path + "/" + model + "/" + item
39 |             if not os.path.exists(path):
40 |                 continue
41 |             img_path = root_path + "/" + model + "_three_view_img/" + item
42 |             img_path = img_path.replace("xyz", "png")
43 |             if not os.path.exists(img_path):
44 |                 data = np.loadtxt(path)
45 |                 data = data[:, 0:3]
46 |                 img = pc_util.point_cloud_three_views(data, diameter=8)
47 |                 imsave(img_path, img)
48 |             index.write("<td><img src='%s'></td>" % img_path)
49 |         index.write("</tr>")
50 |     index.close()
51 | 
52 | 
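# Illustrative helper (not part of the original file): render a single .xyz
# point cloud to the three-view image that the tables above embed. The default
# path below is only an example and may need adjusting to your data layout.
def _demo_three_view(xyz_path='../data/test_data/our_collected_data/MC_5k/chair.xyz'):
    data = np.loadtxt(xyz_path)[:, 0:3]
    img = pc_util.point_cloud_three_views(data, diameter=8)
    imsave(xyz_path.replace('.xyz', '.png'), img)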
53 | def write_result2html_benchmark():
54 |     root_path = "/home/lqyu/server/proj49/PointSR_data/test_data/our_collected_data"
55 |     phase = 'surface_benchmark'
56 |     input_path = "../data/"+phase+"/1024_nonuniform"
57 |     gt_path = "../data/"+phase+"/4096"
58 |     model_names = ['1024_nonormal_generator2_2','1024_nonormal_generator2_2_uniformloss','1024_nonormal_generator2_2_recursive']
59 | 
60 | 
61 |     index_path = os.path.join(root_path, phase + "_index.html")
62 |     index = open(index_path, "w")
63 |     index.write("<html>")
64 |     index.write("<body>")
65 |     index.write("<table border='1'>")
66 | 
67 |     index.write("<tr><th>name</th><th>Input</th><th>Refered GT</th>")
68 |     for model in model_names:
69 |         index.write("<th>%s</th>" % model)
70 |     index.write("</tr>")
71 | 
72 |     # get sample list
73 |     items = os.listdir(root_path + "/" + model_names[0] + "/result/" + phase)
74 |     items.sort()
75 | 
76 |     # mkdir model image path
77 |     for model in model_names:
78 |         if not os.path.exists(root_path + "/" + model + "/result/" + phase + "_three_view_img/"):
79 |             os.makedirs(root_path + "/" + model + "/result/" + phase + "_three_view_img/")
80 | 
81 |     # write img to file
82 |     for item in tqdm(items):
83 |         index.write("<tr>")
84 |         index.write("<td>%s</td>" % item)
85 | 
86 |         # write input image
87 |         object = item.split("_")[0]
88 |         id = item.split(".")[0]
89 |         path = input_path + "/%s.xyz" % (id)
90 |         img_path = input_path + "_three_view_img/%s.png" % (id)
91 |         if not os.path.exists(input_path + "_three_view_img/"):
92 |             os.makedirs(input_path + "_three_view_img/")
93 |         if not os.path.exists(img_path):
94 |             data = np.loadtxt(path)
95 |             data = data[:, 0:3]
96 |             img = pc_util.point_cloud_three_views(data, diameter=8)
97 |             imsave(img_path, img)
98 |         index.write("<td><img src='%s'></td>" % img_path)
99 |         # write gt image
100 |         path = gt_path + "/%s.xyz" % (id)
101 |         img_path = gt_path + "_three_view_img/%s.png" % (id)
102 |         if not os.path.exists(gt_path + "_three_view_img/"):
103 |             os.makedirs(gt_path + "_three_view_img/")
104 |         if not os.path.exists(img_path):
105 |             data = np.loadtxt(path)
106 |             data = data[:, 0:3]
107 |             img = pc_util.point_cloud_three_views(data, diameter=8)
108 |             imsave(img_path, img)
109 |         index.write("<td><img src='%s'></td>" % img_path)
110 |         index.write("")
111 | 
112 |         index.write("")
113 |         # write prediction
114 |         for model in model_names:
115 |             path = root_path + "/" + model + "/result/" + phase + "/" + item
116 |             if not os.path.exists(path):
117 |                 continue
118 |             img_path = root_path + "/" + model + "/result/" + phase + "_three_view_img/" + item
119 |             img_path = img_path.replace("xyz", "png")
120 |             if not os.path.exists(img_path):
121 |                 data = np.loadtxt(path)
122 |                 data = data[:, 0:3]
123 |                 img = pc_util.point_cloud_three_views(data, diameter=8)
124 |                 imsave(img_path, img)
125 |             index.write("<td><img src='%s'></td>" % img_path)
126 |         index.write("</tr>")
127 |     index.close()
128 | 
129 | 
130 | def write_result2html_ModelNet():
131 |     root_path = "../model"
132 |     gt_path = "../data/ModelNet10_poisson_normal"
133 |     #gt_path = "../data/Patches"
134 |     model_names = ['1024_generator2_2','new_1024_generator2_2','new_1024_generator2_2_fixed_lr']
135 |     phase = 'test'
136 | 
137 |     index_path = os.path.join(root_path, phase + "_index.html")
138 |     index = open(index_path, "w")
139 |     index.write("<html>")
140 |     index.write("<body>")
141 |     index.write("<table border='1'>")
142 | 
143 |     index.write("<tr><th>name</th><th>Input</th><th>Refered GT</th>")
144 |     for model in model_names:
145 |         index.write("<th>%s</th>" % model)
146 |     index.write("</tr>")
147 | 
148 |     # get sample list
149 |     items = os.listdir(root_path + "/" + model_names[0] + "/result/" + phase)
150 |     items.sort()
151 | 
152 |     # mkdir model image path
153 |     for model in model_names:
154 |         if not os.path.exists(root_path + "/" + model + "/result/" + phase + "_three_view_img/"):
155 |             os.makedirs(root_path + "/" + model + "/result/" + phase + "_three_view_img/")
156 | 
157 |     # write img to file
158 |     for item in tqdm(items[::25]):
159 |         index.write("<tr>")
160 |         index.write("<td>%s</td>" % item)
161 | 
162 |         # write input image
163 |         object = item.split("_")[0]
164 |         id = item.split(".")[0]
165 |         fixed = "%s/1024_nonuniform/%s" % (gt_path, 'train')
166 |         path = fixed + "/%s.xyz" % (id)
167 |         img_path = fixed + "_three_view_img/%s.png" % (id)
168 |         if not os.path.exists(fixed + "_three_view_img/"):
169 |             os.makedirs(fixed + "_three_view_img/")
170 |         if not os.path.exists(img_path):
171 |             data = np.loadtxt(path)
172 |             data = data[:, 0:3]
173 |             img = pc_util.point_cloud_three_views(data, diameter=8)
174 |             imsave(img_path, img)
175 |         index.write("<td><img src='%s'></td>" % img_path)
176 |         # write gt image
177 |         fixed = "%s/4096/%s" % (gt_path, 'train')
178 |         path = fixed + "/%s.xyz" % (id)
179 |         img_path = fixed + "_three_view_img/%s.png" % (id)
180 |         if not os.path.exists(fixed + "_three_view_img/"):
181 |             os.makedirs(fixed + "_three_view_img/")
182 |         if not os.path.exists(img_path):
183 |             data = np.loadtxt(path)
184 |             data = data[:, 0:3]
185 |             img = pc_util.point_cloud_three_views(data, diameter=8)
186 |             imsave(img_path, img)
187 |         index.write("<td><img src='%s'></td>" % img_path)
188 |         index.write("")
189 | 
190 |         index.write("")
191 |         # write prediction
192 |         for model in model_names:
193 |             path = root_path + "/" + model + "/result/" + phase + "/" + item
194 |             if not os.path.exists(path):
195 |                 continue
196 |             img_path = root_path + "/" + model + "/result/" + phase + "_three_view_img/" + item
197 |             img_path = img_path.replace("xyz", "png")
198 |             if not os.path.exists(img_path):
199 |                 data = np.loadtxt(path)
200 |                 data = data[:, 0:3]
201 |                 img = pc_util.point_cloud_three_views(data, diameter=8)
202 |                 imsave(img_path, img)
203 |             index.write("<td><img src='%s'></td>" % img_path)
204 |         index.write("</tr>")
205 |     index.close()
206 | 
207 | if __name__ == '__main__':
208 |     write_result2html_ModelNet()
209 |     #write_result2html_benchmark()
210 |     #calculate_emd_error('ModelNet40')
211 | 
--------------------------------------------------------------------------------
/evaluation_code/CMakeLists.txt:
--------------------------------------------------------------------------------
1 | # Created by the script cgal_create_cmake_script
2 | # This is the CMake script for compiling a CGAL application.
3 | 4 | 5 | project( Distance_2_Tests ) 6 | cmake_minimum_required(VERSION 2.8.10) 7 | set (CMAKE_CXX_STANDARD 11) 8 | 9 | find_package(OpenMP) 10 | if (OPENMP_FOUND) 11 | set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}") 12 | set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}") 13 | endif() 14 | 15 | 16 | find_package(CGAL QUIET) 17 | if ( CGAL_FOUND ) 18 | include( ${CGAL_USE_FILE} ) 19 | include( CGAL_CreateSingleSourceCGALProgram ) 20 | include_directories (BEFORE "../../include") 21 | # create a target per cppfile 22 | file(GLOB cppfiles RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp) 23 | foreach(cppfile ${cppfiles}) 24 | create_single_source_cgal_program( "${cppfile}" ) 25 | endforeach() 26 | 27 | else() 28 | message(STATUS "This program requires the CGAL library, and will not be compiled.") 29 | endif() 30 | 31 | -------------------------------------------------------------------------------- /h5_data/README.md: -------------------------------------------------------------------------------- 1 | Please download training data in HDF5 format from the following links and put them in this folder: 2 | 3 | Patches_noHole_and_collected.h5: [GoogleDrive](https://drive.google.com/file/d/1wMtNGvliK_pUTogfzMyrz57iDb_jSQR8/view?usp=sharing) 4 | 5 | -------------------------------------------------------------------------------- /model/README.md: -------------------------------------------------------------------------------- 1 | Please download pretrained model from the following links and put them in this folder: 2 | 3 | generator2_new6.zip: [GoogleDrive](https://drive.google.com/file/d/1PWZb0d8QbmEAuYtJunQ9Z30VPgdU6rdd/view?usp=sharing) 4 | 5 | -------------------------------------------------------------------------------- /prepare_data/MeshSegmentation.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/prepare_data/MeshSegmentation.zip -------------------------------------------------------------------------------- /prepare_data/Poisson_sample.tar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/prepare_data/Poisson_sample.tar -------------------------------------------------------------------------------- /supplementary material/supp.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yulequan/PU-Net/c8bb205689dd508d145c406feff8878b39b12a2d/supplementary material/supp.pdf --------------------------------------------------------------------------------