├── CL3D ├── README.md ├── config_shape.py ├── dataloader_ptcl.py ├── dataloader_shape.py ├── environment.yml ├── eval_shape.py ├── isosurface │ ├── LIB_PATH │ ├── computeDistanceField │ ├── computeMarchingCubes │ ├── displayDistanceField │ ├── libtbb_preview.so.2 │ └── libtcmalloc.so.4 ├── main_proxy.py ├── mesh_gen_utils │ ├── libkdtree │ │ ├── LICENSE.txt │ │ ├── MANIFEST.in │ │ ├── README │ │ ├── README.rst │ │ ├── __init__.py │ │ ├── __pycache__ │ │ │ └── __init__.cpython-36.pyc │ │ ├── pykdtree │ │ │ ├── __init__.py │ │ │ ├── __pycache__ │ │ │ │ └── __init__.cpython-36.pyc │ │ │ ├── _kdtree_core.c │ │ │ ├── _kdtree_core.c.mako │ │ │ ├── kdtree.c │ │ │ ├── kdtree.cpython-36m-x86_64-linux-gnu.so │ │ │ ├── kdtree.pyx │ │ │ ├── render_template.py │ │ │ └── test_tree.py │ │ └── setup.cfg │ ├── libmcubes │ │ ├── LICENSE │ │ ├── README.rst │ │ ├── __init__.py │ │ ├── __pycache__ │ │ │ ├── __init__.cpython-36.pyc │ │ │ └── exporter.cpython-36.pyc │ │ ├── exporter.py │ │ ├── marchingcubes.cpp │ │ ├── marchingcubes.h │ │ ├── mcubes.cpp │ │ ├── mcubes.cpython-36m-x86_64-linux-gnu.so │ │ ├── mcubes.pyx │ │ ├── pyarray_symbol.h │ │ ├── pyarraymodule.h │ │ ├── pywrapper.cpp │ │ └── pywrapper.h │ ├── libmesh │ │ ├── __init__.py │ │ ├── __pycache__ │ │ │ ├── __init__.cpython-36.pyc │ │ │ └── inside_mesh.cpython-36.pyc │ │ ├── inside_mesh.py │ │ ├── triangle_hash.cpp │ │ ├── triangle_hash.cpython-36m-x86_64-linux-gnu.so │ │ └── triangle_hash.pyx │ └── libmise │ │ ├── __init__.py │ │ ├── __pycache__ │ │ └── __init__.cpython-36.pyc │ │ ├── mise.cpp │ │ ├── mise.cpython-36m-x86_64-linux-gnu.so │ │ ├── mise.pyx │ │ └── test.py ├── model_convsdfnet.py ├── model_pointcloud.py ├── model_shape.py ├── perm │ ├── rep_10_2_13.npz │ └── rep_1_5_55.npz ├── plot_script_shape.py ├── setup.py ├── train_shape.py └── utils_shape.py ├── README.md ├── YASS ├── README.md ├── data_generator │ ├── __init__.py │ └── cifar_mean_image.npy ├── dataset_incr_cifar.py ├── main_incr_cifar.py ├── model.py └── utils │ ├── __init__.py │ ├── color_jitter │ ├── __init__.py │ ├── cjitter.cpp │ ├── cjitter.h │ ├── jitter.pyx │ └── setup.py │ ├── get_samples.py │ ├── loader_utils.py │ └── model_utils.py ├── auto_enc ├── README.md ├── autoenc_incr_main.py ├── autoencoder.py ├── dataset_incr_cifar_autoenc.py └── utils │ ├── __init__.py │ ├── color_jitter │ ├── __init__.py │ ├── cjitter.cpp │ ├── cjitter.h │ ├── jitter.pyx │ └── setup.py │ ├── get_samples.py │ ├── loader_utils.py │ ├── metric.py │ └── model_utils.py └── environment.yml /CL3D/README.md: -------------------------------------------------------------------------------- 1 | ### Environment Setup 2 | The instructions in this section follow [SDFNet](https://github.com/rehg-lab/3DShapeGen/tree/master/SDFNet) 3 | 4 | Create environment using [anaconda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) 5 | ```bash 6 | conda env create -f ../environment.yml 7 | ``` 8 | Note that this code runs with PyTorch 1.12.1 and CUDA 10.2. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install other PyTorch and CUDA versions. `torch-scatter` might need to be reinstalled to match with the CUDA version. Please follow the instructions [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html) to install `torch-scatter`. 
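For example, a reinstall of `torch-scatter` against the PyTorch 1.12.1 / CUDA 10.2 combination pinned in `environment.yml` could look like the following sketch (the wheel-index URL is an assumption based on the `torch-scatter` installation docs; substitute the index matching your PyTorch/CUDA versions):
```bash
# Rebuild torch-scatter against the local PyTorch/CUDA install.
# Index URL assumes PyTorch 1.12.x with CUDA 10.2 -- adjust to your setup.
pip uninstall -y torch-scatter
pip install torch-scatter==2.0.9 -f https://data.pyg.org/whl/torch-1.12.0+cu102.html
```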
9 | 
10 | Compile the OccNet extension modules in `mesh_gen_utils`:
11 | ```bash
12 | python setup.py build_ext --inplace
13 | ```
14 | To generate ground truths, follow [SDFNet](https://github.com/rehg-lab/3DShapeGen/tree/master/SDFNet).
15 | 
16 | ### Data
17 | 1. [ShapeNetCore.v2 SDF + Point Clouds](https://www.dropbox.com/s/75lxxtmxkdr1be9/ShapeNet55_sdf.tar)
18 | 1. [Training and Validation Point Clouds](https://www.dropbox.com/s/g20usd7lo7jn3go/ShapeNet_ptcl_55.tar)
19 | 1. [3-DOF viewpoint LRBg ShapeNetCore.v2 renders](https://www.dropbox.com/s/yw03ohg04834vvv/ShapeNet55_3DOF-VC_LRBg.tar)
20 | 1. [Train/Val json on 13 classes of ShapeNetCore.v2](https://www.dropbox.com/s/7shqu6krvs9x1ib/data_split.json)
21 | 1. [Test json on 13 classes, 100 objects per class of ShapeNetCore.v2](https://www.dropbox.com/s/7ig5n662gv0uq6k/sample.json)
22 | 1. [Train/Val json on 55 classes of ShapeNetCore.v2](https://www.dropbox.com/s/7shqu6krvs9x1ib/data_split_55.json)
23 | 1. [Test json on 55 classes, 30 objects per class of ShapeNetCore.v2](https://www.dropbox.com/s/ryca8on5uhhmt04/sample_30obj_55.json)
24 | 
25 | ### Training C-SDFNet
26 | After changing the parameters in `config_shape.py`, run the following to train the model from scratch:
27 | ```bash
28 | python train_shape.py
29 | ```
30 | ### Pre-trained models
31 | The following are links to download pretrained C-SDFNet and C-OccNet models:
32 | 1. [SDFNet VC with 2.5D inputs Single Exposure ShapeNetCore.v2](https://www.dropbox.com/sh/tnx34ony9y4wwsi/AABSkTG4lbtfzmLGDf6QHpOWa)
33 | 2. [OccNet VC with 2.5D inputs Single Exposure ShapeNetCore.v2](https://www.dropbox.com/sh/3jszdblnxtiit6z/AADZIvfPuTcl-wA7O1WU0UITa)
34 | 3. [SDFNet VC with 2.5D inputs Repeated Exposures ShapeNet13](https://www.dropbox.com/sh/ozdl057aiyka926/AADXpbgLBsO9Yfzw9TGOkYMYa)
35 | 4. [OccNet VC with 2.5D inputs Repeated Exposures ShapeNet13](https://www.dropbox.com/sh/eb2b0yhuq3tovqh/AABxF1A2bOgeMhpsKzYY5eUza)
36 | 5. [SDFNet OC with 2.5D inputs Repeated Exposures ShapeNet13](https://www.dropbox.com/sh/j9y8r4y6aszhb2j/AADNl6Qagd1NZ1VHIJ81hv8ea)
37 | 6. [SDFNet VC with 3D inputs Single Exposure ShapeNetCore.v2](https://www.dropbox.com/sh/wr2fctu6ldwtus8/AADZCv8ulGSHS39-6EUrybc6a?dl=0)
38 | 7. [ConvSDFNet VC with 3D inputs Single Exposure ShapeNetCore.v2](https://www.dropbox.com/sh/vmas6ja18slyap3/AABoC1ZcteY2m4VgPdAyq0xDa?dl=0)
39 | 
40 | ### Testing SDFNet
41 | ```bash
42 | python eval_shape.py
43 | python plot_script_shape.py
44 | ```
45 | 
46 | ### Evaluating Proxy Task
47 | ```bash
48 | python main_proxy.py --num_explr=<number_of_exemplars>
49 | ```
50 | This project uses code based on parts of the following repository:
51 | 
52 | 1. 
[3D Reconstruction of Novel Object Shapes from Single Images](https://github.com/rehg-lab/3DShapeGen) 53 | -------------------------------------------------------------------------------- /CL3D/config_shape.py: -------------------------------------------------------------------------------- 1 | path = dict( 2 | src_dataset_path = '/data/DevLearning/SDFNet_data/ShapeNet55_3DOF-VC_LRBg', 3 | input_image_path = None, 4 | input_depth_path = 'depth_NPZ', 5 | input_normal_path = 'normal_output', 6 | input_seg_path = 'segmentation', 7 | src_pt_path = '/data/DevLearning/SDFNet_data/ShapeNet55_sdf', 8 | src_ptcl_path = '/data/DevLearning/SDFNet_data/ShapeNet_ptcl_55', 9 | data_split_json_path = '/data/DevLearning/SDFNet_data/json_files/data_split_55.json' 10 | ) 11 | data_setting = dict( 12 | input_size = 224, 13 | img_extension = 'png', 14 | random_view = True, 15 | seq_len = 25, 16 | categories = None 17 | ) 18 | training = dict( 19 | out_dir = '/data/DevLearning/model_output_incr/test_release', 20 | batch_size = 64, 21 | batch_size_eval = 16, 22 | num_epochs = 500, 23 | 24 | save_model_step = 50, 25 | # Evaluated on val data of all seen classes after each exposure 26 | eval_step = 500, 27 | verbose_step = 10, 28 | num_points = 2048, 29 | # Example of a valid cont 30 | # cont = 'model-0-500.pth.tar', 31 | cont = None, 32 | shape_rep = 'sdf', 33 | model = None, 34 | coord_system = '3dvc', 35 | pointcloud = False, 36 | num_rep = 1, 37 | nclass = 5, 38 | ) 39 | logging = dict( 40 | log_dir = '/data/DevLearning/SDFNet_model_output/log', 41 | exp_name = 'test' 42 | ) 43 | testing = dict( 44 | eval_task_name = 'test', 45 | box_size = 1.7, 46 | # Always 1 if generating mesh on the fly 47 | batch_size_test = 1, 48 | # Eval up to "split_counter" learning exposure 49 | split_counter = 10 50 | ) 51 | 52 | -------------------------------------------------------------------------------- /CL3D/dataloader_ptcl.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | from torch.utils.data import Dataset 4 | from PIL import Image 5 | import os 6 | import glob 7 | import utils_shape as utils 8 | import json 9 | from torchvision import transforms 10 | 11 | from dataloader_shape import Dataset as Dataset_Incr 12 | 13 | class Dataset(Dataset_Incr): 14 | def __init__(self, config, num_points=-1,mode='train',shape_rep='occnet',coord_system='3dvc', \ 15 | iso=0.003, perm=None, all_classes=None): 16 | super().__init__(config, num_points, mode, shape_rep, coord_system, iso, perm, all_classes) 17 | self.obj_cat_map = [(obj,cat) for cat in self.catnames \ 18 | for obj in self.split[cat] \ 19 | if os.path.exists(os.path.join(self.src_dataset_path, cat,obj)) \ 20 | if os.path.exists(os.path.join(self.src_ptcl_path, cat,obj))] 21 | 22 | self.classes = np.asarray([cat for _, cat in self.obj_cat_map]) 23 | self.all_indices = np.arange(len(self.classes)) 24 | self.sdf_h5_paths = [os.path.join(self.src_pt_path, cat, obj, \ 25 | 'ori_sample.h5') \ 26 | for (obj, cat) in self.obj_cat_map \ 27 | if os.path.exists(os.path.join(self.src_ptcl_path, cat, obj))] 28 | self.pointcld_split_paths = [os.path.join(self.src_ptcl_path, cat, obj) \ 29 | for (obj, cat) in self.obj_cat_map \ 30 | if os.path.exists(os.path.join(self.src_ptcl_path, cat, obj))] 31 | 32 | self.metadata_split_paths = [os.path.join(self.src_dataset_path, cat, \ 33 | obj, 'metadata.txt') \ 34 | for (obj, cat) in self.obj_cat_map \ 35 | if os.path.exists(os.path.join(self.src_ptcl_path, cat, obj))] 36 | 
if self.coord_system == '3dvc': 37 | self.hvc_metadata_split_paths = [os.path.join(self.src_dataset_path, cat, \ 38 | obj, '3DOF_vc_metadata.txt') \ 39 | for (obj, cat) in self.obj_cat_map \ 40 | if os.path.exists(os.path.join(self.src_ptcl_path, cat, obj))] 41 | 42 | def get_data_sample(self, index, img_idx=-1): 43 | if self.random_view: 44 | assert img_idx != -1 45 | 46 | else: 47 | idx = index//self.seq_len 48 | img_idx = index % self.seq_len 49 | 50 | index = idx 51 | 52 | index = self.current_indices[index] 53 | label = self.cat_map[self.classes[index]] 54 | return label 55 | 56 | 57 | def get_pointcloud_sample(self, index, img_idx=-1): 58 | if self.random_view: 59 | assert img_idx != -1 60 | else: 61 | idx = index//self.seq_len 62 | img_idx = index % self.seq_len 63 | 64 | index = idx 65 | 66 | index = self.current_indices[index] 67 | 68 | input_pointcld_path = self.pointcld_split_paths[index] 69 | input_pointcld_path = os.path.join(input_pointcld_path,\ 70 | 'pointcloud.npz') 71 | input_ptcld_dict = np.load(input_pointcld_path, mmap_mode='r') 72 | input_pointcld = input_ptcld_dict['points'].astype(np.float32) 73 | input_normals = input_ptcld_dict['normals'].astype(np.float32) 74 | 75 | if self.mode != 'test': 76 | input_pointcld, input_normals = utils.sample_points(input_pointcld, input_normals, self.num_points) 77 | else: 78 | sub_input_pointcld, sub_input_normals = utils.sample_points(input_pointcld, input_normals, self.num_points) 79 | 80 | if self.coord_system == '2dvc': 81 | 82 | input_metadata_path = self.metadata_split_paths[index] 83 | meta = np.loadtxt(input_metadata_path) 84 | rotate_dict = {'elev': meta[img_idx][1], 'azim': meta[img_idx][0]} 85 | 86 | input_pointcld = utils.apply_rotate(input_pointcld, rotate_dict) 87 | input_normals = utils.apply_rotate(input_normals, rotate_dict) 88 | 89 | if self.mode == "test": 90 | sub_input_pointcld = utils.apply_rotate(sub_input_pointcld, rotate_dict) 91 | sub_input_normals = utils.apply_rotate(sub_input_normals, rotate_dict) 92 | sub_input_pointcld = torch.FloatTensor(sub_input_pointcld) 93 | sub_input_normals = torch.FloatTensor(sub_input_normals) 94 | 95 | elif self.coord_system == '3dvc': 96 | 97 | input_hvc_meta_path = self.hvc_metadata_split_paths[index] 98 | hvc_meta = np.loadtxt(input_hvc_meta_path) 99 | hvc_rotate_dict = {'elev': hvc_meta[1], 'azim': hvc_meta[0]} 100 | input_pointcld = utils.apply_rotate(input_pointcld, hvc_rotate_dict) 101 | input_normals = utils.apply_rotate(input_normals, hvc_rotate_dict) 102 | 103 | 104 | input_metadata_path = self.metadata_split_paths[index] 105 | meta = np.loadtxt(input_metadata_path) 106 | rotate_dict = {'elev': meta[img_idx][1], 'azim': meta[img_idx][0]-180} 107 | 108 | input_pointcld = utils.apply_rotate(input_pointcld, rotate_dict) 109 | input_normals = utils.apply_rotate(input_normals, rotate_dict) 110 | 111 | if self.mode == 'test': 112 | sub_input_pointcld = utils.apply_rotate(sub_input_pointcld, hvc_rotate_dict) 113 | sub_input_normals = utils.apply_rotate(sub_input_normals, hvc_rotate_dict) 114 | sub_input_pointcld = utils.apply_rotate(sub_input_pointcld, rotate_dict) 115 | sub_input_normals = utils.apply_rotate(sub_input_normals, rotate_dict) 116 | sub_input_pointcld = torch.FloatTensor(sub_input_pointcld) 117 | sub_input_normals = torch.FloatTensor(sub_input_normals) 118 | 119 | input_pointcld = torch.FloatTensor(input_pointcld) 120 | input_normals = torch.FloatTensor(input_normals) 121 | 122 | if self.mode != 'test': 123 | return input_pointcld, input_normals 124 | 125 | 
return input_pointcld, input_normals, sub_input_pointcld, sub_input_normals 126 | 127 | 128 | def __getitem__(self, index): 129 | if self.random_view: 130 | img_idx = np.random.choice(self.seq_len) 131 | else: 132 | img_idx = -1 133 | label = self.get_data_sample(index, img_idx) 134 | 135 | points_data, vals_data = self.get_points_sdf_sample(index, img_idx) 136 | if self.shape_rep == 'occ': 137 | vals_data = (vals_data.cpu().numpy() <= 0.003).astype(np.float32) 138 | vals_data = torch.FloatTensor(vals_data) 139 | idx, img_idx = self.get_img_index(index, img_idx) 140 | if self.mode != 'test': 141 | pointcloud_data, normals_data = \ 142 | self.get_pointcloud_sample(index, img_idx) 143 | 144 | 145 | if self.mode == 'test': 146 | pointcloud_data, normals_data, sub_pointcloud_data, sub_normals_data = self.get_pointcloud_sample(index, img_idx) 147 | return sub_pointcloud_data, points_data, vals_data, pointcloud_data, \ 148 | normals_data, self.obj_cat_map[self.current_indices[idx]], img_idx, label 149 | 150 | return pointcloud_data, points_data, vals_data, label 151 | 152 | 153 | def __len__(self): 154 | if len(self.current_indices) != 0: 155 | num_mdl = len(self.current_indices) 156 | else: 157 | num_mdl = len(self.pointcld_split_paths) 158 | if self.random_view: 159 | return num_mdl 160 | return num_mdl*self.seq_len 161 | 162 | 163 | 164 | -------------------------------------------------------------------------------- /CL3D/environment.yml: -------------------------------------------------------------------------------- 1 | name: py38 2 | channels: 3 | - conda-forge 4 | - pytorch 5 | - anaconda 6 | - defaults 7 | dependencies: 8 | - _libgcc_mutex=0.1=main 9 | - _openmp_mutex=5.1=1_gnu 10 | - blas=1.0=mkl 11 | - brotlipy=0.7.0=py38h27cfd23_1003 12 | - bzip2=1.0.8=h7b6447c_0 13 | - ca-certificates=2022.4.26=h06a4308_0 14 | - certifi=2022.6.15=py38h06a4308_0 15 | - cffi=1.15.1=py38h74dc2b5_0 16 | - charset-normalizer=2.0.4=pyhd3eb1b0_0 17 | - cryptography=37.0.1=py38h9ce1e76_0 18 | - cudatoolkit=10.2.89=hfd86e86_1 19 | - ffmpeg=4.3=hf484d3e_0 20 | - freetype=2.11.0=h70c0345_0 21 | - giflib=5.2.1=h7b6447c_0 22 | - gmp=6.2.1=h295c915_3 23 | - gnutls=3.6.15=he1e5248_0 24 | - h5py=3.6.0=py38ha0f2276_0 25 | - hdf5=1.10.6=h3ffc7dd_1 26 | - idna=3.3=pyhd3eb1b0_0 27 | - intel-openmp=2021.4.0=h06a4308_3561 28 | - jpeg=9e=h7f8727e_0 29 | - lame=3.100=h7b6447c_0 30 | - lcms2=2.12=h3be6417_0 31 | - ld_impl_linux-64=2.38=h1181459_1 32 | - lerc=3.0=h295c915_0 33 | - libdeflate=1.8=h7f8727e_5 34 | - libffi=3.3=he6710b0_2 35 | - libgcc-ng=11.2.0=h1234567_1 36 | - libgfortran-ng=11.2.0=h00389a5_1 37 | - libgfortran5=11.2.0=h1234567_1 38 | - libgomp=11.2.0=h1234567_1 39 | - libiconv=1.16=h7f8727e_2 40 | - libidn2=2.3.2=h7f8727e_0 41 | - libpng=1.6.37=hbc83047_0 42 | - libstdcxx-ng=11.2.0=h1234567_1 43 | - libtasn1=4.16.0=h27cfd23_0 44 | - libtiff=4.4.0=hecacb30_0 45 | - libunistring=0.9.10=h27cfd23_0 46 | - libwebp=1.2.2=h55f646e_0 47 | - libwebp-base=1.2.2=h7f8727e_0 48 | - lz4-c=1.9.3=h295c915_1 49 | - mkl=2021.4.0=h06a4308_640 50 | - mkl-service=2.4.0=py38h7f8727e_0 51 | - mkl_fft=1.3.1=py38hd3c417c_0 52 | - mkl_random=1.2.2=py38h51133e4_0 53 | - ncurses=6.3=h5eee18b_3 54 | - nettle=3.7.3=hbbd107a_1 55 | - numpy=1.23.1=py38h6c91a56_0 56 | - numpy-base=1.23.1=py38ha15fc14_0 57 | - openh264=2.1.1=h4ff587b_0 58 | - openssl=1.1.1o=h7f8727e_0 59 | - pillow=9.2.0=py38hace64e9_1 60 | - pip=22.1.2=py38h06a4308_0 61 | - pycparser=2.21=pyhd3eb1b0_0 62 | - pyopenssl=22.0.0=pyhd3eb1b0_0 63 | - pysocks=1.7.1=py38h06a4308_0 
64 | - python=3.8.13=h12debd9_0 65 | - python_abi=3.8=2_cp38 66 | - pytorch=1.12.1=py3.8_cuda10.2_cudnn7.6.5_0 67 | - pytorch-mutex=1.0=cuda 68 | - readline=8.1.2=h7f8727e_1 69 | - requests=2.28.1=py38h06a4308_0 70 | - setuptools=61.2.0=py38h06a4308_0 71 | - six=1.16.0=pyhd3eb1b0_1 72 | - sqlite=3.39.2=h5082296_0 73 | - tk=8.6.12=h1ccaba5_0 74 | - torchaudio=0.12.1=py38_cu102 75 | - torchvision=0.13.1=py38_cu102 76 | - trimesh=3.13.4=pyh6c4a22f_0 77 | - typing_extensions=4.3.0=py38h06a4308_0 78 | - urllib3=1.26.11=py38h06a4308_0 79 | - wheel=0.37.1=pyhd3eb1b0_0 80 | - xz=5.2.5=h7f8727e_1 81 | - zlib=1.2.12=h7f8727e_2 82 | - zstd=1.5.2=ha4553b6_0 83 | - pip: 84 | - cython==0.29.32 85 | - torch-scatter==2.0.9 86 | - tqdm==4.64.0 87 | prefix: /home/athai6/miniconda3/envs/py38 88 | -------------------------------------------------------------------------------- /CL3D/eval_shape.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import numpy as np 3 | import os 4 | import trimesh 5 | from tqdm import tqdm 6 | import config_shape as config 7 | from dataloader_shape import Dataset 8 | from dataloader_ptcl import Dataset as Dataset_Ptc 9 | 10 | from model_shape import SDFNet 11 | from model_pointcloud import PointCloudNet 12 | from model_convsdfnet import ConvSDFNet 13 | from torch.autograd import Variable 14 | import torch.optim as optim 15 | import utils_shape as utils 16 | 17 | 18 | def main(): 19 | out_dir = config.training['out_dir'] 20 | shape_rep = config.training['shape_rep'] 21 | cont = config.training['cont'] 22 | 23 | eval_task_name = config.testing['eval_task_name'] 24 | eval_dir = os.path.join(out_dir, 'eval') 25 | eval_task_dir = os.path.join(eval_dir, eval_task_name) 26 | os.makedirs(eval_task_dir, exist_ok=True) 27 | 28 | batch_size_test = config.testing['batch_size_test'] 29 | coord_system = config.training['coord_system'] 30 | 31 | box_size = config.testing['box_size'] 32 | 33 | split_counter = config.testing['split_counter']+1 34 | 35 | nclass = config.training['nclass'] 36 | 37 | # Whether to use pointclouds as input 38 | pointcloud = config.training['pointcloud'] 39 | 40 | # Get model 41 | model_type = config.training['model'] 42 | if model_type == None: 43 | model_type = 'SDFNet' #default to be SDFNet 44 | 45 | # Dataset 46 | print('Loading data...') 47 | if not pointcloud: 48 | test_dataset = Dataset(config, mode='test', shape_rep=shape_rep, \ 49 | coord_system=coord_system) 50 | else: 51 | test_dataset = Dataset_Ptc(config, mode='test', shape_rep=shape_rep, coord_system=coord_system) 52 | 53 | test_loader = torch.utils.data.DataLoader( 54 | test_dataset, batch_size=batch_size_test, num_workers=12,pin_memory=True) 55 | all_classes_orig = test_dataset.catnames 56 | 57 | # Load info 58 | val_file = os.path.join(out_dir, 'train.npz') 59 | val_file = np.load(val_file, allow_pickle=True) 60 | all_classes = val_file['perm'] 61 | 62 | cat_map = {} 63 | 64 | for cl_ind, cl_group in enumerate(all_classes): 65 | for sub_cl_ind, cl in enumerate(cl_group): 66 | if cl not in cat_map: 67 | cat_map[cl] = len(cat_map.keys()) 68 | 69 | test_dataset.update_class_map(cat_map) 70 | current_counter = 0 71 | if not cont is None: 72 | try: 73 | current_counter = int(cont.split('-')[1])+1 74 | except Exception: 75 | print('Current counter is not an integer') 76 | 77 | # Loading model 78 | if model_type == "SDFNet": 79 | model = SDFNet(config) 80 | elif model_type == "PointCloudNet": 81 | model = PointCloudNet(config) 82 | elif model_type == 
"ConvSDFNet": 83 | model = ConvSDFNet(config) 84 | else: 85 | raise Exception("Model type not supported") 86 | model = torch.nn.DataParallel(model).cuda() 87 | optimizer = optim.Adam(model.parameters(), lr=1e-4) 88 | 89 | 90 | out_obj_cat_all = [] 91 | out_pose_all = [] 92 | out_cd_all = [] 93 | out_normals_all = [] 94 | out_iou_all = [] 95 | out_fscore_all = [] 96 | out_acc_all = [] 97 | 98 | seen_classes = list(set(all_classes[:current_counter].reshape(-1))) 99 | print('Start counter: ', current_counter) 100 | print('End counter: ', split_counter-1) 101 | for cl_count, cl_group in enumerate(all_classes[current_counter:split_counter]): 102 | print('Num seen classes: ', len(seen_classes)) 103 | cl_count += current_counter 104 | # Load model and reload loader 105 | if shape_rep == 'sdf': 106 | model_path = 'best_model_iou_train-%s.pth.tar'%(cl_count) 107 | elif shape_rep == 'occ': 108 | model_path = 'best_model_train-%s.pth.tar'%(cl_count) 109 | 110 | new_classes = [] 111 | for cl in cl_group: 112 | if cl not in seen_classes: 113 | seen_classes.append(cl) 114 | new_classes.append(cl) 115 | model_path = os.path.join(out_dir, model_path) 116 | model.module.load_state_dict(torch.load(model_path)) 117 | model.eval() 118 | 119 | out_obj_cat_cl = [] 120 | out_pose_cl = [] 121 | out_cd_cl = [] 122 | out_normals_cl = [] 123 | out_iou_cl = [] 124 | out_fscore_cl = [] 125 | out_acc_cl = [] 126 | 127 | for s in range(len(seen_classes)): 128 | print('Evaluating exposure %s, class %s'\ 129 | %(cl_count, seen_classes[s])) 130 | 131 | out_obj_cat = [] 132 | out_pose = [] 133 | out_cd = [] 134 | out_normals = [] 135 | out_iou = [] 136 | out_fscore = [] 137 | out_acc = [] 138 | test_dataset.clear() 139 | test_dataset.get_current_data_class(seen_classes[s]) 140 | 141 | with tqdm(total=int(len(test_loader)), ascii=True) as pbar: 142 | with torch.no_grad(): 143 | for mbatch in test_loader: 144 | img_input, points_input, values, pointclouds, normals, \ 145 | obj_cat, pose, labels = mbatch 146 | img_input = Variable(img_input).cuda() 147 | 148 | points_input = Variable(points_input).cuda() 149 | values = Variable(values).cuda() 150 | labels = Variable(labels).cuda() 151 | 152 | optimizer.zero_grad() 153 | 154 | obj, cat = obj_cat 155 | cat_path = os.path.join(eval_task_dir, cat[0]) 156 | 157 | os.makedirs(cat_path, exist_ok=True) 158 | if shape_rep == 'occ': 159 | mesh = utils.generate_mesh(img_input, points_input, \ 160 | model.module) 161 | obj_path = os.path.join(cat_path, '%s.obj' % obj[0]) 162 | mesh.export(obj_path) 163 | elif shape_rep == 'sdf': 164 | obj_path = os.path.join(cat_path, '%s-%s.obj' \ 165 | % (cl_count, obj[0])) 166 | sdf_path = os.path.join(cat_path, '%s-%s.dist' \ 167 | % (cl_count, obj[0])) 168 | mesh = utils.generate_mesh_mise_sdf(img_input, \ 169 | points_input, model.module, box_size=box_size,\ 170 | upsampling_steps=2, resolution=64) 171 | mesh.export(obj_path) 172 | 173 | # Save gen info 174 | out_obj_cat.append(obj_cat) 175 | out_pose.append(pose) 176 | 177 | # Calculate metrics 178 | if shape_rep == 'occ': 179 | out_dict = utils.eval_mesh(mesh, pointclouds, normals,\ 180 | points_input, values) 181 | elif shape_rep == 'sdf': 182 | # load the mesh 183 | if os.path.exists(obj_path): 184 | #### Load mesh 185 | try: 186 | mesh = trimesh.load(obj_path) 187 | except Exception: 188 | mesh = None 189 | else: 190 | mesh = None 191 | sdf_val = model(points_input, img_input) 192 | 193 | out_dict = utils.eval_mesh(mesh, pointclouds, normals, \ 194 | points_input, values, shape_rep='sdf',\ 195 
| sdf_val=sdf_val) 196 | 197 | out_cd.append(out_dict['cd']) 198 | out_normals.append(out_dict['normals']) 199 | out_iou.append(out_dict['iou']) 200 | out_fscore.append(out_dict['fscore']) 201 | pbar.update(1) 202 | 203 | out_obj_cat_cl.append(out_obj_cat) 204 | out_pose_cl.append(out_pose) 205 | out_cd_cl.append(out_cd) 206 | out_normals_cl.append(out_normals) 207 | out_iou_cl.append(out_iou) 208 | out_fscore_cl.append(out_fscore) 209 | 210 | out_obj_cat_all.append(out_obj_cat_cl) 211 | out_pose_all.append(out_pose_cl) 212 | out_cd_all.append(out_cd_cl) 213 | out_normals_all.append(out_normals_cl) 214 | out_iou_all.append(out_iou_cl) 215 | out_fscore_all.append(out_fscore_cl) 216 | np.savez(os.path.join(eval_task_dir, 'out-%s.npz'%(split_counter)), \ 217 | obj_cat=np.array(out_obj_cat_all), pose=np.array(out_pose_all),\ 218 | cd=np.array(out_cd_all), normals=np.array(out_normals_all),\ 219 | iou=np.array(out_iou_all), fscore=np.array(out_fscore_all),\ 220 | all_classes=all_classes, seen_classes=seen_classes) 221 | 222 | if __name__ == '__main__': 223 | main() 224 | 225 | 226 | 227 | -------------------------------------------------------------------------------- /CL3D/isosurface/LIB_PATH: -------------------------------------------------------------------------------- 1 | export LD_LIBRARY_PATH="/home/ant/miniconda/lib:/home/ant/miniconda/envs/sdf_net/lib:./isosurface:$LD_LIBRARY_PATH" 2 | -------------------------------------------------------------------------------- /CL3D/isosurface/computeDistanceField: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/isosurface/computeDistanceField -------------------------------------------------------------------------------- /CL3D/isosurface/computeMarchingCubes: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/isosurface/computeMarchingCubes -------------------------------------------------------------------------------- /CL3D/isosurface/displayDistanceField: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/isosurface/displayDistanceField -------------------------------------------------------------------------------- /CL3D/isosurface/libtbb_preview.so.2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/isosurface/libtbb_preview.so.2 -------------------------------------------------------------------------------- /CL3D/isosurface/libtcmalloc.so.4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/isosurface/libtcmalloc.so.4 -------------------------------------------------------------------------------- /CL3D/main_proxy.py: -------------------------------------------------------------------------------- 1 | ########### Train a classifier on top of pretrained shape features 2 | import torch 3 | import torch.optim as optim 4 | import torch.nn as nn 5 | from torch.autograd import Variable 6 | import numpy as np 7 | import os 8 | import sys 9 | import torch.multiprocessing as mp 10 | import subprocess 11 | import config_shape as 
config 12 | from datetime import datetime 13 | import utils_shape as utils 14 | from dataloader_shape import Dataset 15 | 16 | from model_shape import SDFNet 17 | from tqdm import tqdm 18 | import copy 19 | import argparse 20 | 21 | parser = argparse.ArgumentParser(description="Proxy Task") 22 | parser.add_argument("--num_explr", default=20, type=int, 23 | help="Number of exemplars") 24 | 25 | def calc_acc(plabels, glabels): 26 | ''' 27 | Calculates classification accuracy 28 | args: 29 | plabels: predicted labels 30 | glabels: ground truth labels 31 | ''' 32 | mean_acc = 0 33 | label_set = set(glabels) 34 | 35 | per_class_acc = {} 36 | for gl in label_set: 37 | pl = plabels[glabels == gl] 38 | pl_pred = pl[pl == gl] 39 | mean_acc += len(pl_pred)/len(pl) 40 | per_class_acc[gl] = len(pl_pred)/len(pl) 41 | return mean_acc/len(label_set), per_class_acc 42 | 43 | def forward_pass(model, loader, train_loader, num_classes, mode='val'): 44 | model.eval() 45 | feats = [] 46 | exemplar_feats, exemplar_labels = get_exemplar_feats(model, train_loader) 47 | glabels = [] 48 | with tqdm(total=int(len(loader)), ascii=True) as pbar: 49 | with torch.no_grad(): 50 | for data in loader: 51 | if mode == 'val': 52 | img_input, points_input, values, labels = data 53 | else: 54 | img_input, points_input, values, _, _, _, _, labels = data 55 | img_input = Variable(img_input).cuda() 56 | 57 | feats.append(model(img_input).cpu().numpy()) 58 | glabels.append(labels) 59 | pbar.update(1) 60 | feats = np.concatenate(feats, axis=0) 61 | glabels = np.concatenate(glabels) 62 | 63 | dist_matr = compute_dist(feats, exemplar_feats) 64 | plabels = np.argmax(dist_matr,axis=1) 65 | return calc_acc(plabels, glabels) 66 | 67 | def get_exemplar_feats(model, loader): 68 | ''' 69 | Get exemplar features and labels 70 | args: 71 | model: shape model 72 | loader: train loader 73 | ''' 74 | model.eval() 75 | glabels = [] 76 | feats = [] 77 | with tqdm(total=int(len(loader)), ascii=True) as pbar: 78 | with torch.no_grad(): 79 | for data in loader: 80 | img_input, points_input, values, labels = data 81 | img_input = Variable(img_input).cuda() 82 | 83 | feats.append(model(img_input).cpu().numpy()) 84 | glabels.append(labels) 85 | pbar.update(1) 86 | feats = np.concatenate(feats, axis=0) 87 | glabels = np.concatenate(glabels) 88 | mean_feats = [] 89 | mean_labels = [] 90 | # Gets mean features for each ground truth label 91 | for g in set(glabels): 92 | mfeats = np.mean(feats[glabels == g],axis=0) 93 | mean_feats.append(mfeats.reshape(1,-1)) 94 | mean_labels.append(g) 95 | feats = np.concatenate(mean_feats,axis=0) 96 | glabels = np.array(mean_labels) 97 | glabels_argsort = np.argsort(glabels) 98 | feats = feats[glabels_argsort] 99 | glabels = glabels[glabels_argsort] 100 | return feats, glabels 101 | 102 | def compute_dist(feats, exemplar_feats): 103 | ''' 104 | Gets cosine distances between exemplar features and test features 105 | args: 106 | feats: test features 107 | exemplar_feats: exemplar features 108 | ''' 109 | normalized_feats = feats/np.linalg.norm(feats,axis=1,keepdims=True) 110 | normalized_ex_feats = exemplar_feats/np.linalg.norm(exemplar_feats,axis=1,keepdims=True) 111 | dist = np.dot(normalized_feats,normalized_ex_feats.transpose(1,0)) 112 | return dist 113 | 114 | def main(): 115 | args = parser.parse_args() 116 | num_explr = args.num_explr 117 | 118 | torch.backends.cudnn.benchmark=True 119 | out_dir = config.training['out_dir'] 120 | os.makedirs(out_dir, exist_ok=True) 121 | 122 | num_classes = config.training['nclass'] 
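# nclass, coord_system ('3dvc' or '2dvc') and shape_rep ('sdf' or 'occ') all come from config_shape.py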
123 | coord_system = config.training['coord_system'] 124 | shape_rep = config.training['shape_rep'] 125 | 126 | # Dataset 127 | print('Loading data...') 128 | train_dataset = Dataset(num_points=2048, mode='train', shape_rep=shape_rep, \ 129 | coord_system=coord_system, config=config) 130 | test_dataset = Dataset(num_points=2048, mode='test', shape_rep=shape_rep, \ 131 | coord_system=coord_system, config=config) 132 | 133 | train_loader = torch.utils.data.DataLoader( 134 | train_dataset, batch_size=256, num_workers=12, shuffle=True,\ 135 | pin_memory=True) 136 | test_loader = torch.utils.data.DataLoader( 137 | test_dataset, batch_size=100, num_workers=12, drop_last=False,pin_memory=True) 138 | 139 | # Model 140 | print('Initializing network...') 141 | shape_model = SDFNet(config=config) 142 | 143 | shape_train_file = os.path.join(out_dir, 'train.npz') 144 | if os.path.exists(shape_train_file): 145 | shape_train_file = np.load(shape_train_file, allow_pickle=True) 146 | else: 147 | raise Exception("Train npz for shape model does not exist") 148 | all_classes = shape_train_file['perm'] 149 | 150 | cat_map = {} 151 | 152 | for cl_ind, cl_group in enumerate(all_classes): 153 | for sub_cl_ind, cl in enumerate(cl_group): 154 | if cl not in cat_map: 155 | cat_map[cl] = len(cat_map.keys()) 156 | train_dataset.update_class_map(cat_map) 157 | test_dataset.update_class_map(cat_map) 158 | 159 | train_dataset.init_exemplar() 160 | seen_classes = [] 161 | 162 | test_accs = [] 163 | classifier_dir = os.path.join(out_dir, 'self_sup_classifier') 164 | os.makedirs(classifier_dir, exist_ok=True) 165 | for cl_count, cl_group in enumerate(all_classes): 166 | for cl in cl_group: 167 | if cl not in seen_classes: 168 | test_dataset.get_current_data_class(cl) 169 | seen_classes.append(cl) 170 | 171 | train_dataset.get_current_data_class(cl) 172 | train_dataset.sample_exemplar_rep(cl, num_explr) 173 | 174 | train_dataset.set_train_on_exemplar() 175 | 176 | model_path = 'best_model_iou_train-%s.pth.tar'%(cl_count) 177 | 178 | model_path = os.path.join(out_dir, model_path) 179 | shape_model.load_state_dict(torch.load(model_path)) 180 | model = torch.nn.DataParallel(shape_model.encoder).cuda() 181 | test_acc, per_class_acc = forward_pass(model, test_loader, train_loader, \ 182 | len(seen_classes), mode='test') 183 | 184 | test_accs.append(test_acc) 185 | train_dataset.clear() 186 | 187 | print('Accuracy on test set: ', test_acc) 188 | 189 | np.savez(os.path.join(classifier_dir, 'val.npz'), test_acc=test_accs) 190 | 191 | if __name__ == '__main__': 192 | main() -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/LICENSE.txt: -------------------------------------------------------------------------------- 1 | GNU LESSER GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007, 2015 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | 9 | This version of the GNU Lesser General Public License incorporates 10 | the terms and conditions of version 3 of the GNU General Public 11 | License, supplemented by the additional permissions listed below. 12 | 13 | 0. Additional Definitions. 14 | 15 | As used herein, "this License" refers to version 3 of the GNU Lesser 16 | General Public License, and the "GNU GPL" refers to version 3 of the GNU 17 | General Public License. 
18 | 19 | "The Library" refers to a covered work governed by this License, 20 | other than an Application or a Combined Work as defined below. 21 | 22 | An "Application" is any work that makes use of an interface provided 23 | by the Library, but which is not otherwise based on the Library. 24 | Defining a subclass of a class defined by the Library is deemed a mode 25 | of using an interface provided by the Library. 26 | 27 | A "Combined Work" is a work produced by combining or linking an 28 | Application with the Library. The particular version of the Library 29 | with which the Combined Work was made is also called the "Linked 30 | Version". 31 | 32 | The "Minimal Corresponding Source" for a Combined Work means the 33 | Corresponding Source for the Combined Work, excluding any source code 34 | for portions of the Combined Work that, considered in isolation, are 35 | based on the Application, and not on the Linked Version. 36 | 37 | The "Corresponding Application Code" for a Combined Work means the 38 | object code and/or source code for the Application, including any data 39 | and utility programs needed for reproducing the Combined Work from the 40 | Application, but excluding the System Libraries of the Combined Work. 41 | 42 | 1. Exception to Section 3 of the GNU GPL. 43 | 44 | You may convey a covered work under sections 3 and 4 of this License 45 | without being bound by section 3 of the GNU GPL. 46 | 47 | 2. Conveying Modified Versions. 48 | 49 | If you modify a copy of the Library, and, in your modifications, a 50 | facility refers to a function or data to be supplied by an Application 51 | that uses the facility (other than as an argument passed when the 52 | facility is invoked), then you may convey a copy of the modified 53 | version: 54 | 55 | a) under this License, provided that you make a good faith effort to 56 | ensure that, in the event an Application does not supply the 57 | function or data, the facility still operates, and performs 58 | whatever part of its purpose remains meaningful, or 59 | 60 | b) under the GNU GPL, with none of the additional permissions of 61 | this License applicable to that copy. 62 | 63 | 3. Object Code Incorporating Material from Library Header Files. 64 | 65 | The object code form of an Application may incorporate material from 66 | a header file that is part of the Library. You may convey such object 67 | code under terms of your choice, provided that, if the incorporated 68 | material is not limited to numerical parameters, data structure 69 | layouts and accessors, or small macros, inline functions and templates 70 | (ten or fewer lines in length), you do both of the following: 71 | 72 | a) Give prominent notice with each copy of the object code that the 73 | Library is used in it and that the Library and its use are 74 | covered by this License. 75 | 76 | b) Accompany the object code with a copy of the GNU GPL and this license 77 | document. 78 | 79 | 4. Combined Works. 80 | 81 | You may convey a Combined Work under terms of your choice that, 82 | taken together, effectively do not restrict modification of the 83 | portions of the Library contained in the Combined Work and reverse 84 | engineering for debugging such modifications, if you also do each of 85 | the following: 86 | 87 | a) Give prominent notice with each copy of the Combined Work that 88 | the Library is used in it and that the Library and its use are 89 | covered by this License. 
90 | 91 | b) Accompany the Combined Work with a copy of the GNU GPL and this license 92 | document. 93 | 94 | c) For a Combined Work that displays copyright notices during 95 | execution, include the copyright notice for the Library among 96 | these notices, as well as a reference directing the user to the 97 | copies of the GNU GPL and this license document. 98 | 99 | d) Do one of the following: 100 | 101 | 0) Convey the Minimal Corresponding Source under the terms of this 102 | License, and the Corresponding Application Code in a form 103 | suitable for, and under terms that permit, the user to 104 | recombine or relink the Application with a modified version of 105 | the Linked Version to produce a modified Combined Work, in the 106 | manner specified by section 6 of the GNU GPL for conveying 107 | Corresponding Source. 108 | 109 | 1) Use a suitable shared library mechanism for linking with the 110 | Library. A suitable mechanism is one that (a) uses at run time 111 | a copy of the Library already present on the user's computer 112 | system, and (b) will operate properly with a modified version 113 | of the Library that is interface-compatible with the Linked 114 | Version. 115 | 116 | e) Provide Installation Information, but only if you would otherwise 117 | be required to provide such information under section 6 of the 118 | GNU GPL, and only to the extent that such information is 119 | necessary to install and execute a modified version of the 120 | Combined Work produced by recombining or relinking the 121 | Application with a modified version of the Linked Version. (If 122 | you use option 4d0, the Installation Information must accompany 123 | the Minimal Corresponding Source and Corresponding Application 124 | Code. If you use option 4d1, you must provide the Installation 125 | Information in the manner specified by section 6 of the GNU GPL 126 | for conveying Corresponding Source.) 127 | 128 | 5. Combined Libraries. 129 | 130 | You may place library facilities that are a work based on the 131 | Library side by side in a single library together with other library 132 | facilities that are not Applications and are not covered by this 133 | License, and convey such a combined library under terms of your 134 | choice, if you do both of the following: 135 | 136 | a) Accompany the combined library with a copy of the same work based 137 | on the Library, uncombined with any other library facilities, 138 | conveyed under the terms of this License. 139 | 140 | b) Give prominent notice with the combined library that part of it 141 | is a work based on the Library, and explaining where to find the 142 | accompanying uncombined form of the same work. 143 | 144 | 6. Revised Versions of the GNU Lesser General Public License. 145 | 146 | The Free Software Foundation may publish revised and/or new versions 147 | of the GNU Lesser General Public License from time to time. Such new 148 | versions will be similar in spirit to the present version, but may 149 | differ in detail to address new problems or concerns. 150 | 151 | Each version is given a distinguishing version number. If the 152 | Library as you received it specifies that a certain numbered version 153 | of the GNU Lesser General Public License "or any later version" 154 | applies to it, you have the option of following the terms and 155 | conditions either of that published version or of any later version 156 | published by the Free Software Foundation. 
If the Library as you 157 | received it does not specify a version number of the GNU Lesser 158 | General Public License, you may choose any version of the GNU Lesser 159 | General Public License ever published by the Free Software Foundation. 160 | 161 | If the Library as you received it specifies that a proxy can decide 162 | whether future versions of the GNU Lesser General Public License shall 163 | apply, that proxy's public statement of acceptance of any version is 164 | permanent authorization for you to choose that version for the 165 | Library. 166 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/MANIFEST.in: -------------------------------------------------------------------------------- 1 | exclude pykdtree/render_template.py 2 | include LICENSE.txt 3 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/README: -------------------------------------------------------------------------------- 1 | .. image:: https://travis-ci.org/storpipfugl/pykdtree.svg?branch=master 2 | :target: https://travis-ci.org/storpipfugl/pykdtree 3 | .. image:: https://ci.appveyor.com/api/projects/status/ubo92368ktt2d25g/branch/master 4 | :target: https://ci.appveyor.com/project/storpipfugl/pykdtree 5 | 6 | ======== 7 | pykdtree 8 | ======== 9 | 10 | Objective 11 | --------- 12 | pykdtree is a kd-tree implementation for fast nearest neighbour search in Python. 13 | The aim is to be the fastest implementation around for common use cases (low dimensions and low number of neighbours) for both tree construction and queries. 14 | 15 | The implementation is based on scipy.spatial.cKDTree and libANN by combining the best features from both and focus on implementation efficiency. 16 | 17 | The interface is similar to that of scipy.spatial.cKDTree except only Euclidean distance measure is supported. 18 | 19 | Queries are optionally multithreaded using OpenMP. 20 | 21 | Installation 22 | ------------ 23 | Default build of pykdtree with OpenMP enabled queries using libgomp 24 | 25 | .. code-block:: bash 26 | 27 | $ cd 28 | $ python setup.py install 29 | 30 | If it fails with undefined compiler flags or you want to use another OpenMP implementation please modify setup.py at the indicated point to match your system. 31 | 32 | Building without OpenMP support is controlled by the USE_OMP environment variable 33 | 34 | .. code-block:: bash 35 | 36 | $ cd 37 | $ export USE_OMP=0 38 | $ python setup.py install 39 | 40 | Note evironment variables are by default not exported when using sudo so in this case do 41 | 42 | .. code-block:: bash 43 | 44 | $ USE_OMP=0 sudo -E python setup.py install 45 | 46 | Usage 47 | ----- 48 | The usage of pykdtree is similar to scipy.spatial.cKDTree so for now refer to its documentation 49 | 50 | >>> from pykdtree.kdtree import KDTree 51 | >>> kd_tree = KDTree(data_pts) 52 | >>> dist, idx = kd_tree.query(query_pts, k=8) 53 | 54 | The number of threads to be used in OpenMP enabled queries can be controlled with the standard OpenMP environment variable OMP_NUM_THREADS. 55 | 56 | The **leafsize** argument (number of data points per leaf) for the tree creation can be used to control the memory overhead of the kd-tree. pykdtree uses a default **leafsize=16**. 57 | Increasing **leafsize** will reduce the memory overhead and construction time but increase query time. 
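For example (a small sketch with synthetic data; tune **leafsize** to your workload), building a single precision tree with larger leaves and querying it looks like

>>> import numpy as np
>>> from pykdtree.kdtree import KDTree
>>> data_pts = np.random.rand(10000, 3).astype(np.float32)  # single precision tree
>>> kd_tree = KDTree(data_pts, leafsize=32)  # larger leaves: smaller tree, slower queries
>>> dist, idx = kd_tree.query(data_pts[:100], k=8)  # query points must also be float32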
58 | 59 | pykdtree accepts data in double precision (numpy.float64) or single precision (numpy.float32) floating point. If data of another type is used an internal copy in double precision is made resulting in a memory overhead. If the kd-tree is constructed on single precision data the query points must be single precision as well. 60 | 61 | Benchmarks 62 | ---------- 63 | Comparison with scipy.spatial.cKDTree and libANN. This benchmark is on geospatial 3D data with 10053632 data points and 4276224 query points. The results are indexed relative to the construction time of scipy.spatial.cKDTree. A leafsize of 10 (scipy.spatial.cKDTree default) is used. 64 | 65 | Note: libANN is *not* thread safe. In this benchmark libANN is compiled with "-O3 -funroll-loops -ffast-math -fprefetch-loop-arrays" in order to achieve optimum performance. 66 | 67 | ================== ===================== ====== ======== ================== 68 | Operation scipy.spatial.cKDTree libANN pykdtree pykdtree 4 threads 69 | ------------------ --------------------- ------ -------- ------------------ 70 | 71 | Construction 100 304 96 96 72 | 73 | query 1 neighbour 1267 294 223 70 74 | 75 | Total 1 neighbour 1367 598 319 166 76 | 77 | query 8 neighbours 2193 625 449 143 78 | 79 | Total 8 neighbours 2293 929 545 293 80 | ================== ===================== ====== ======== ================== 81 | 82 | Looking at the combined construction and query this gives the following performance improvement relative to scipy.spatial.cKDTree 83 | 84 | ========== ====== ======== ================== 85 | Neighbours libANN pykdtree pykdtree 4 threads 86 | ---------- ------ -------- ------------------ 87 | 1 129% 329% 723% 88 | 89 | 8 147% 320% 682% 90 | ========== ====== ======== ================== 91 | 92 | Note: mileage will vary with the dataset at hand and computer architecture. 93 | 94 | Test 95 | ---- 96 | Run the unit tests using nosetest 97 | 98 | .. code-block:: bash 99 | 100 | $ cd 101 | $ python setup.py nosetests 102 | 103 | Installing on AppVeyor 104 | ---------------------- 105 | 106 | Pykdtree requires the "stdint.h" header file which is not available on certain 107 | versions of Windows or certain Windows compilers including those on the 108 | continuous integration platform AppVeyor. To get around this the header file(s) 109 | can be downloaded and placed in the correct "include" directory. This can 110 | be done by adding the `anaconda/missing-headers.ps1` script to your repository 111 | and running it the install step of `appveyor.yml`: 112 | 113 | # install missing headers that aren't included with MSVC 2008 114 | # https://github.com/omnia-md/conda-recipes/pull/524 115 | - "powershell ./appveyor/missing-headers.ps1" 116 | 117 | In addition to this, AppVeyor does not support OpenMP so this feature must be 118 | turned off by adding the following to `appveyor.yml` in the 119 | `environment` section: 120 | 121 | environment: 122 | global: 123 | # Don't build with openmp because it isn't supported in appveyor's compilers 124 | USE_OMP: "0" 125 | 126 | Changelog 127 | --------- 128 | v1.3.1 : Fix masking in the "query" method introduced in 1.3.0 129 | 130 | v1.3.0 : Keyword argument "mask" added to "query" method. 
OpenMP compilation now works for MS Visual Studio compiler 131 | 132 | v1.2.2 : Build process fixes 133 | 134 | v1.2.1 : Fixed OpenMP thread safety issue introduced in v1.2.0 135 | 136 | v1.2.0 : 64 and 32 bit MSVC Windows support added 137 | 138 | v1.1.1 : Same as v1.1 release due to incorrect pypi release 139 | 140 | v1.1 : Build process improvements. Add data attribute to kdtree class for scipy interface compatibility 141 | 142 | v1.0 : Switched license from GPLv3 to LGPLv3 143 | 144 | v0.3 : Avoid zipping of installed egg 145 | 146 | v0.2 : Reduced memory footprint. Can now handle single precision data internally avoiding copy conversion to double precision. Default leafsize changed from 10 to 16 as this reduces the memory footprint and makes it a cache line multiplum (negligible if any query performance observed in benchmarks). Reduced memory allocation for leaf nodes. Applied patch for building on OS X. 147 | 148 | v0.1 : Initial version. 149 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/README.rst: -------------------------------------------------------------------------------- 1 | .. image:: https://travis-ci.org/storpipfugl/pykdtree.svg?branch=master 2 | :target: https://travis-ci.org/storpipfugl/pykdtree 3 | .. image:: https://ci.appveyor.com/api/projects/status/ubo92368ktt2d25g/branch/master 4 | :target: https://ci.appveyor.com/project/storpipfugl/pykdtree 5 | 6 | ======== 7 | pykdtree 8 | ======== 9 | 10 | Objective 11 | --------- 12 | pykdtree is a kd-tree implementation for fast nearest neighbour search in Python. 13 | The aim is to be the fastest implementation around for common use cases (low dimensions and low number of neighbours) for both tree construction and queries. 14 | 15 | The implementation is based on scipy.spatial.cKDTree and libANN by combining the best features from both and focus on implementation efficiency. 16 | 17 | The interface is similar to that of scipy.spatial.cKDTree except only Euclidean distance measure is supported. 18 | 19 | Queries are optionally multithreaded using OpenMP. 20 | 21 | Installation 22 | ------------ 23 | Default build of pykdtree with OpenMP enabled queries using libgomp 24 | 25 | .. code-block:: bash 26 | 27 | $ cd 28 | $ python setup.py install 29 | 30 | If it fails with undefined compiler flags or you want to use another OpenMP implementation please modify setup.py at the indicated point to match your system. 31 | 32 | Building without OpenMP support is controlled by the USE_OMP environment variable 33 | 34 | .. code-block:: bash 35 | 36 | $ cd 37 | $ export USE_OMP=0 38 | $ python setup.py install 39 | 40 | Note evironment variables are by default not exported when using sudo so in this case do 41 | 42 | .. code-block:: bash 43 | 44 | $ USE_OMP=0 sudo -E python setup.py install 45 | 46 | Usage 47 | ----- 48 | The usage of pykdtree is similar to scipy.spatial.cKDTree so for now refer to its documentation 49 | 50 | >>> from pykdtree.kdtree import KDTree 51 | >>> kd_tree = KDTree(data_pts) 52 | >>> dist, idx = kd_tree.query(query_pts, k=8) 53 | 54 | The number of threads to be used in OpenMP enabled queries can be controlled with the standard OpenMP environment variable OMP_NUM_THREADS. 55 | 56 | The **leafsize** argument (number of data points per leaf) for the tree creation can be used to control the memory overhead of the kd-tree. pykdtree uses a default **leafsize=16**. 
57 | Increasing **leafsize** will reduce the memory overhead and construction time but increase query time. 58 | 59 | pykdtree accepts data in double precision (numpy.float64) or single precision (numpy.float32) floating point. If data of another type is used an internal copy in double precision is made resulting in a memory overhead. If the kd-tree is constructed on single precision data the query points must be single precision as well. 60 | 61 | Benchmarks 62 | ---------- 63 | Comparison with scipy.spatial.cKDTree and libANN. This benchmark is on geospatial 3D data with 10053632 data points and 4276224 query points. The results are indexed relative to the construction time of scipy.spatial.cKDTree. A leafsize of 10 (scipy.spatial.cKDTree default) is used. 64 | 65 | Note: libANN is *not* thread safe. In this benchmark libANN is compiled with "-O3 -funroll-loops -ffast-math -fprefetch-loop-arrays" in order to achieve optimum performance. 66 | 67 | ================== ===================== ====== ======== ================== 68 | Operation scipy.spatial.cKDTree libANN pykdtree pykdtree 4 threads 69 | ------------------ --------------------- ------ -------- ------------------ 70 | 71 | Construction 100 304 96 96 72 | 73 | query 1 neighbour 1267 294 223 70 74 | 75 | Total 1 neighbour 1367 598 319 166 76 | 77 | query 8 neighbours 2193 625 449 143 78 | 79 | Total 8 neighbours 2293 929 545 293 80 | ================== ===================== ====== ======== ================== 81 | 82 | Looking at the combined construction and query this gives the following performance improvement relative to scipy.spatial.cKDTree 83 | 84 | ========== ====== ======== ================== 85 | Neighbours libANN pykdtree pykdtree 4 threads 86 | ---------- ------ -------- ------------------ 87 | 1 129% 329% 723% 88 | 89 | 8 147% 320% 682% 90 | ========== ====== ======== ================== 91 | 92 | Note: mileage will vary with the dataset at hand and computer architecture. 93 | 94 | Test 95 | ---- 96 | Run the unit tests using nosetest 97 | 98 | .. code-block:: bash 99 | 100 | $ cd 101 | $ python setup.py nosetests 102 | 103 | Installing on AppVeyor 104 | ---------------------- 105 | 106 | Pykdtree requires the "stdint.h" header file which is not available on certain 107 | versions of Windows or certain Windows compilers including those on the 108 | continuous integration platform AppVeyor. To get around this the header file(s) 109 | can be downloaded and placed in the correct "include" directory. This can 110 | be done by adding the `anaconda/missing-headers.ps1` script to your repository 111 | and running it the install step of `appveyor.yml`: 112 | 113 | # install missing headers that aren't included with MSVC 2008 114 | # https://github.com/omnia-md/conda-recipes/pull/524 115 | - "powershell ./appveyor/missing-headers.ps1" 116 | 117 | In addition to this, AppVeyor does not support OpenMP so this feature must be 118 | turned off by adding the following to `appveyor.yml` in the 119 | `environment` section: 120 | 121 | environment: 122 | global: 123 | # Don't build with openmp because it isn't supported in appveyor's compilers 124 | USE_OMP: "0" 125 | 126 | Changelog 127 | --------- 128 | v1.3.1 : Fix masking in the "query" method introduced in 1.3.0 129 | 130 | v1.3.0 : Keyword argument "mask" added to "query" method. 
OpenMP compilation now works for MS Visual Studio compiler 131 | 132 | v1.2.2 : Build process fixes 133 | 134 | v1.2.1 : Fixed OpenMP thread safety issue introduced in v1.2.0 135 | 136 | v1.2.0 : 64 and 32 bit MSVC Windows support added 137 | 138 | v1.1.1 : Same as v1.1 release due to incorrect pypi release 139 | 140 | v1.1 : Build process improvements. Add data attribute to kdtree class for scipy interface compatibility 141 | 142 | v1.0 : Switched license from GPLv3 to LGPLv3 143 | 144 | v0.3 : Avoid zipping of installed egg 145 | 146 | v0.2 : Reduced memory footprint. Can now handle single precision data internally avoiding copy conversion to double precision. Default leafsize changed from 10 to 16 as this reduces the memory footprint and makes it a cache line multiplum (negligible if any query performance observed in benchmarks). Reduced memory allocation for leaf nodes. Applied patch for building on OS X. 147 | 148 | v0.1 : Initial version. 149 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/__init__.py: -------------------------------------------------------------------------------- 1 | from .pykdtree.kdtree import KDTree 2 | 3 | 4 | __all__ = [ 5 | KDTree 6 | ] 7 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/pykdtree/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/pykdtree/__init__.py -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/pykdtree/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/pykdtree/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/pykdtree/kdtree.cpython-36m-x86_64-linux-gnu.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/pykdtree/kdtree.cpython-36m-x86_64-linux-gnu.so -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/pykdtree/kdtree.pyx: -------------------------------------------------------------------------------- 1 | #pykdtree, Fast kd-tree implementation with OpenMP-enabled queries 2 | # 3 | #Copyright (C) 2013 - present Esben S. Nielsen 4 | # 5 | # This program is free software: you can redistribute it and/or modify it under 6 | # the terms of the GNU Lesser General Public License as published by the Free 7 | # Software Foundation, either version 3 of the License, or 8 | #(at your option) any later version. 
149 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/__init__.py:
--------------------------------------------------------------------------------
1 | from .pykdtree.kdtree import KDTree
2 |
3 |
4 | __all__ = [
5 |     KDTree
6 | ]
7 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/pykdtree/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/pykdtree/__init__.py
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/pykdtree/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/pykdtree/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/pykdtree/kdtree.cpython-36m-x86_64-linux-gnu.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libkdtree/pykdtree/kdtree.cpython-36m-x86_64-linux-gnu.so
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/pykdtree/kdtree.pyx:
--------------------------------------------------------------------------------
1 | #pykdtree, Fast kd-tree implementation with OpenMP-enabled queries
2 | #
3 | #Copyright (C) 2013 - present Esben S. Nielsen
4 | #
5 | # This program is free software: you can redistribute it and/or modify it under
6 | # the terms of the GNU Lesser General Public License as published by the Free
7 | # Software Foundation, either version 3 of the License, or
8 | # (at your option) any later version.
9 | #
10 | # This program is distributed in the hope that it will be useful, but WITHOUT
11 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
12 | # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
13 | # details.
14 | #
15 | # You should have received a copy of the GNU Lesser General Public License along
16 | # with this program. If not, see <http://www.gnu.org/licenses/>.
17 |
18 | import numpy as np
19 | cimport numpy as np
20 | from libc.stdint cimport uint32_t, int8_t, uint8_t
21 | cimport cython
22 |
23 |
24 | # Node structure
25 | cdef struct node_float:
26 |     float cut_val
27 |     int8_t cut_dim
28 |     uint32_t start_idx
29 |     uint32_t n
30 |     float cut_bounds_lv
31 |     float cut_bounds_hv
32 |     node_float *left_child
33 |     node_float *right_child
34 |
35 | cdef struct tree_float:
36 |     float *bbox
37 |     int8_t no_dims
38 |     uint32_t *pidx
39 |     node_float *root
40 |
41 | cdef struct node_double:
42 |     double cut_val
43 |     int8_t cut_dim
44 |     uint32_t start_idx
45 |     uint32_t n
46 |     double cut_bounds_lv
47 |     double cut_bounds_hv
48 |     node_double *left_child
49 |     node_double *right_child
50 |
51 | cdef struct tree_double:
52 |     double *bbox
53 |     int8_t no_dims
54 |     uint32_t *pidx
55 |     node_double *root
56 |
57 | cdef extern tree_float* construct_tree_float(float *pa, int8_t no_dims, uint32_t n, uint32_t bsp) nogil
58 | cdef extern void search_tree_float(tree_float *kdtree, float *pa, float *point_coords, uint32_t num_points, uint32_t k, float distance_upper_bound, float eps_fac, uint8_t *mask, uint32_t *closest_idxs, float *closest_dists) nogil
59 | cdef extern void delete_tree_float(tree_float *kdtree)
60 |
61 | cdef extern tree_double* construct_tree_double(double *pa, int8_t no_dims, uint32_t n, uint32_t bsp) nogil
62 | cdef extern void search_tree_double(tree_double *kdtree, double *pa, double *point_coords, uint32_t num_points, uint32_t k, double distance_upper_bound, double eps_fac, uint8_t *mask, uint32_t *closest_idxs, double *closest_dists) nogil
63 | cdef extern void delete_tree_double(tree_double *kdtree)
64 |
65 | cdef class KDTree:
66 |     """kd-tree for fast nearest-neighbour lookup.
67 |     The interface is made to resemble the scipy.spatial kd-tree except that
68 |     only the Euclidean distance measure is supported.
69 |
70 |     :Parameters:
71 |     data_pts : numpy array
72 |         Data points with shape (n , dims)
73 |     leafsize : int, optional
74 |         Maximum number of data points in tree leaf
75 |     """
76 |
77 |     cdef tree_float *_kdtree_float
78 |     cdef tree_double *_kdtree_double
79 |     cdef readonly np.ndarray data_pts
80 |     cdef readonly np.ndarray data
81 |     cdef float *_data_pts_data_float
82 |     cdef double *_data_pts_data_double
83 |     cdef readonly uint32_t n
84 |     cdef readonly int8_t ndim
85 |     cdef readonly uint32_t leafsize
86 |
87 |     def __cinit__(KDTree self):
88 |         self._kdtree_float = NULL
89 |         self._kdtree_double = NULL
90 |
91 |     def __init__(KDTree self, np.ndarray data_pts not None, int leafsize=16):
92 |
93 |         # Check arguments
94 |         if leafsize < 1:
95 |             raise ValueError('leafsize must be greater than zero')
96 |
97 |         # Get data content
98 |         cdef np.ndarray[float, ndim=1] data_array_float
99 |         cdef np.ndarray[double, ndim=1] data_array_double
100 |
101 |         if data_pts.dtype == np.float32:
102 |             data_array_float = np.ascontiguousarray(data_pts.ravel(), dtype=np.float32)
103 |             self._data_pts_data_float = <float *>data_array_float.data
104 |             self.data_pts = data_array_float
105 |         else:
106 |             data_array_double = np.ascontiguousarray(data_pts.ravel(), dtype=np.float64)
107 |             self._data_pts_data_double = <double *>data_array_double.data
108 |             self.data_pts = data_array_double
109 |
110 |         # scipy interface compatibility
111 |         self.data = self.data_pts
112 |
113 |         # Get tree info
114 |         self.n = data_pts.shape[0]
115 |         self.leafsize = leafsize
116 |         if data_pts.ndim == 1:
117 |             self.ndim = 1
118 |         else:
119 |             self.ndim = data_pts.shape[1]
120 |
121 |         # Release GIL and construct tree
122 |         if data_pts.dtype == np.float32:
123 |             with nogil:
124 |                 self._kdtree_float = construct_tree_float(self._data_pts_data_float, self.ndim,
125 |                                                           self.n, self.leafsize)
126 |         else:
127 |             with nogil:
128 |                 self._kdtree_double = construct_tree_double(self._data_pts_data_double, self.ndim,
129 |                                                             self.n, self.leafsize)
130 |
131 |
132 |     def query(KDTree self, np.ndarray query_pts not None, k=1, eps=0,
133 |               distance_upper_bound=None, sqr_dists=False, mask=None):
134 |         """Query the kd-tree for nearest neighbors
135 |
136 |         :Parameters:
137 |         query_pts : numpy array
138 |             Query points with shape (m, dims)
139 |         k : int
140 |             The number of nearest neighbours to return
141 |         eps : non-negative float
142 |             Return approximate nearest neighbours; the k-th returned value
143 |             is guaranteed to be no further than (1 + eps) times the distance
144 |             to the real k-th nearest neighbour
145 |         distance_upper_bound : non-negative float
146 |             Return only neighbors within this distance.
147 |             This is used to prune tree searches.
148 |         sqr_dists : bool, optional
149 |             Internally pykdtree works with squared distances.
150 |             Determines if the squared or Euclidean distances are returned.
151 |         mask : numpy array, optional
152 |             Array of booleans where neighbors are considered invalid and
153 |             should not be returned. A mask value of True represents an
154 |             invalid pixel. Mask should have shape (n,) to match data points.
155 |             By default all points are considered valid.
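        Example (an illustrative sketch; array values are arbitrary)::

            dist, idx = tree.query(query_pts, k=4,
                                   distance_upper_bound=10e3,
                                   sqr_dists=False)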
156 |
157 |         """
158 |
159 |         # Check arguments
160 |         if k < 1:
161 |             raise ValueError('Number of neighbours must be greater than zero')
162 |         elif eps < 0:
163 |             raise ValueError('eps must be non-negative')
164 |         elif distance_upper_bound is not None:
165 |             if distance_upper_bound < 0:
166 |                 raise ValueError('distance_upper_bound must be non negative')
167 |
168 |         # Check dimensions
169 |         if query_pts.ndim == 1:
170 |             q_ndim = 1
171 |         else:
172 |             q_ndim = query_pts.shape[1]
173 |
174 |         if self.ndim != q_ndim:
175 |             raise ValueError('Data and query points must have same dimensions')
176 |
177 |         if self.data_pts.dtype == np.float32 and query_pts.dtype != np.float32:
178 |             raise TypeError('Type mismatch. query points must be of type float32 when data points are of type float32')
179 |
180 |         # Get query info
181 |         cdef uint32_t num_qpoints = query_pts.shape[0]
182 |         cdef uint32_t num_n = k
183 |         cdef np.ndarray[uint32_t, ndim=1] closest_idxs = np.empty(num_qpoints * k, dtype=np.uint32)
184 |         cdef np.ndarray[float, ndim=1] closest_dists_float
185 |         cdef np.ndarray[double, ndim=1] closest_dists_double
186 |
187 |
188 |         # Set up return arrays
189 |         cdef uint32_t *closest_idxs_data = <uint32_t *>closest_idxs.data
190 |         cdef float *closest_dists_data_float
191 |         cdef double *closest_dists_data_double
192 |
193 |         # Get query points data
194 |         cdef np.ndarray[float, ndim=1] query_array_float
195 |         cdef np.ndarray[double, ndim=1] query_array_double
196 |         cdef float *query_array_data_float
197 |         cdef double *query_array_data_double
198 |         cdef np.ndarray[np.uint8_t, ndim=1] query_mask
199 |         cdef np.uint8_t *query_mask_data
200 |
201 |         if mask is not None and mask.size != self.n:
202 |             raise ValueError('Mask must have the same size as data points')
203 |         elif mask is not None:
204 |             query_mask = np.ascontiguousarray(mask.ravel(), dtype=np.uint8)
205 |             query_mask_data = <np.uint8_t *>query_mask.data
206 |         else:
207 |             query_mask_data = NULL
208 |
209 |
210 |         if query_pts.dtype == np.float32 and self.data_pts.dtype == np.float32:
211 |             closest_dists_float = np.empty(num_qpoints * k, dtype=np.float32)
212 |             closest_dists = closest_dists_float
213 |             closest_dists_data_float = <float *>closest_dists_float.data
214 |             query_array_float = np.ascontiguousarray(query_pts.ravel(), dtype=np.float32)
215 |             query_array_data_float = <float *>query_array_float.data
216 |         else:
217 |             closest_dists_double = np.empty(num_qpoints * k, dtype=np.float64)
218 |             closest_dists = closest_dists_double
219 |             closest_dists_data_double = <double *>closest_dists_double.data
220 |             query_array_double = np.ascontiguousarray(query_pts.ravel(), dtype=np.float64)
221 |             query_array_data_double = <double *>query_array_double.data
222 |
223 |         # Setup distance_upper_bound
224 |         cdef float dub_float
225 |         cdef double dub_double
226 |         if distance_upper_bound is None:
227 |             if self.data_pts.dtype == np.float32:
228 |                 dub_float = np.finfo(np.float32).max
229 |             else:
230 |                 dub_double = np.finfo(np.float64).max
231 |         else:
232 |             if self.data_pts.dtype == np.float32:
233 |                 dub_float = <float>(distance_upper_bound * distance_upper_bound)
234 |             else:
235 |                 dub_double = <double>(distance_upper_bound * distance_upper_bound)
236 |
237 |         # Set epsilon
238 |         cdef double epsilon_float = eps
239 |         cdef double epsilon_double = eps
240 |
241 |         # Release GIL and query tree
242 |         if self.data_pts.dtype == np.float32:
243 |             with nogil:
244 |                 search_tree_float(self._kdtree_float, self._data_pts_data_float,
245 |                                   query_array_data_float, num_qpoints, num_n, dub_float, epsilon_float,
246 |                                   query_mask_data, closest_idxs_data,
closest_dists_data_float) 247 | 248 | else: 249 | with nogil: 250 | search_tree_double(self._kdtree_double, self._data_pts_data_double, 251 | query_array_data_double, num_qpoints, num_n, dub_double, epsilon_double, 252 | query_mask_data, closest_idxs_data, closest_dists_data_double) 253 | 254 | # Shape result 255 | if k > 1: 256 | closest_dists_res = closest_dists.reshape(num_qpoints, k) 257 | closest_idxs_res = closest_idxs.reshape(num_qpoints, k) 258 | else: 259 | closest_dists_res = closest_dists 260 | closest_idxs_res = closest_idxs 261 | 262 | if distance_upper_bound is not None: # Mark out of bounds results 263 | if self.data_pts.dtype == np.float32: 264 | idx_out = (closest_dists_res >= dub_float) 265 | else: 266 | idx_out = (closest_dists_res >= dub_double) 267 | 268 | closest_dists_res[idx_out] = np.Inf 269 | closest_idxs_res[idx_out] = self.n 270 | 271 | if not sqr_dists: # Return actual cartesian distances 272 | closest_dists_res = np.sqrt(closest_dists_res) 273 | 274 | return closest_dists_res, closest_idxs_res 275 | 276 | def __dealloc__(KDTree self): 277 | if self._kdtree_float != NULL: 278 | delete_tree_float(self._kdtree_float) 279 | elif self._kdtree_double != NULL: 280 | delete_tree_double(self._kdtree_double) 281 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/pykdtree/render_template.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from mako.template import Template 4 | 5 | mytemplate = Template(filename='_kdtree_core.c.mako') 6 | with open('_kdtree_core.c', 'w') as fp: 7 | fp.write(mytemplate.render()) 8 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libkdtree/pykdtree/test_tree.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | from pykdtree.kdtree import KDTree 4 | 5 | 6 | data_pts_real = np.array([[ 790535.062, -369324.656, 6310963.5 ], 7 | [ 790024.312, -365155.688, 6311270. ], 8 | [ 789515.75 , -361009.469, 6311572. ], 9 | [ 789011. , -356886.562, 6311869.5 ], 10 | [ 788508.438, -352785.969, 6312163. ], 11 | [ 788007.25 , -348707.219, 6312452. ], 12 | [ 787509.188, -344650.875, 6312737. ], 13 | [ 787014.438, -340616.906, 6313018. ], 14 | [ 786520.312, -336604.156, 6313294.5 ], 15 | [ 786030.312, -332613.844, 6313567. ], 16 | [ 785541.562, -328644.375, 6313835.5 ], 17 | [ 785054.75 , -324696.031, 6314100.5 ], 18 | [ 784571.188, -320769.5 , 6314361.5 ], 19 | [ 784089.312, -316863.562, 6314618.5 ], 20 | [ 783610.562, -312978.719, 6314871.5 ], 21 | [ 783133. , -309114.312, 6315121. ], 22 | [ 782658.25 , -305270.531, 6315367. ], 23 | [ 782184.312, -301446.719, 6315609. ], 24 | [ 781715.062, -297643.844, 6315847.5 ], 25 | [ 781246.188, -293860.281, 6316083. ], 26 | [ 780780.125, -290096.938, 6316314.5 ], 27 | [ 780316.312, -286353.469, 6316542.5 ], 28 | [ 779855.625, -282629.75 , 6316767.5 ], 29 | [ 779394.75 , -278924.781, 6316988.5 ], 30 | [ 778937.312, -275239.625, 6317206.5 ], 31 | [ 778489.812, -271638.094, 6317418. ], 32 | [ 778044.688, -268050.562, 6317626. ], 33 | [ 777599.688, -264476.75 , 6317831.5 ], 34 | [ 777157.625, -260916.859, 6318034. ], 35 | [ 776716.688, -257371.125, 6318233.5 ], 36 | [ 776276.812, -253838.891, 6318430.5 ], 37 | [ 775838.125, -250320.266, 6318624.5 ], 38 | [ 775400.75 , -246815.516, 6318816.5 ], 39 | [ 774965.312, -243324.953, 6319005. ], 40 | [ 774532.062, -239848.25 , 6319191. 
], 41 | [ 774100.25 , -236385.516, 6319374.5 ], 42 | [ 773667.875, -232936.016, 6319555.5 ], 43 | [ 773238.562, -229500.812, 6319734. ], 44 | [ 772810.938, -226079.562, 6319909.5 ], 45 | [ 772385.25 , -222672.219, 6320082.5 ], 46 | [ 771960. , -219278.5 , 6320253. ], 47 | [ 771535.938, -215898.609, 6320421. ], 48 | [ 771114. , -212532.625, 6320587. ], 49 | [ 770695. , -209180.859, 6320749.5 ], 50 | [ 770275.25 , -205842.562, 6320910.5 ], 51 | [ 769857.188, -202518.125, 6321068.5 ], 52 | [ 769442.312, -199207.844, 6321224.5 ], 53 | [ 769027.812, -195911.203, 6321378. ], 54 | [ 768615.938, -192628.859, 6321529. ], 55 | [ 768204.688, -189359.969, 6321677.5 ], 56 | [ 767794.062, -186104.844, 6321824. ], 57 | [ 767386.25 , -182864.016, 6321968.5 ], 58 | [ 766980.062, -179636.969, 6322110. ], 59 | [ 766575.625, -176423.75 , 6322249.5 ], 60 | [ 766170.688, -173224.172, 6322387. ], 61 | [ 765769.812, -170038.984, 6322522.5 ], 62 | [ 765369.5 , -166867.312, 6322655. ], 63 | [ 764970.562, -163709.594, 6322786. ], 64 | [ 764573. , -160565.781, 6322914.5 ], 65 | [ 764177.75 , -157435.938, 6323041. ], 66 | [ 763784.188, -154320.062, 6323165.5 ], 67 | [ 763392.375, -151218.047, 6323288. ], 68 | [ 763000.938, -148129.734, 6323408. ], 69 | [ 762610.812, -145055.344, 6323526.5 ], 70 | [ 762224.188, -141995.141, 6323642.5 ], 71 | [ 761847.188, -139025.734, 6323754. ], 72 | [ 761472.375, -136066.312, 6323863.5 ], 73 | [ 761098.125, -133116.859, 6323971.5 ], 74 | [ 760725.25 , -130177.484, 6324077.5 ], 75 | [ 760354. , -127247.984, 6324181.5 ], 76 | [ 759982.812, -124328.336, 6324284.5 ], 77 | [ 759614. , -121418.844, 6324385. ], 78 | [ 759244.688, -118519.102, 6324484.5 ], 79 | [ 758877.125, -115629.305, 6324582. ], 80 | [ 758511.562, -112749.648, 6324677.5 ], 81 | [ 758145.625, -109879.82 , 6324772.5 ], 82 | [ 757781.688, -107019.953, 6324865. ], 83 | [ 757418.438, -104170.047, 6324956. ], 84 | [ 757056.562, -101330.125, 6325045.5 ], 85 | [ 756697. , -98500.266, 6325133.5 ], 86 | [ 756337.375, -95680.289, 6325219.5 ], 87 | [ 755978.062, -92870.148, 6325304.5 ], 88 | [ 755621.188, -90070.109, 6325387.5 ], 89 | [ 755264.625, -87280.008, 6325469. ], 90 | [ 754909.188, -84499.828, 6325549. ], 91 | [ 754555.062, -81729.609, 6325628. ], 92 | [ 754202.938, -78969.43 , 6325705. ], 93 | [ 753850.688, -76219.133, 6325781. ], 94 | [ 753499.875, -73478.836, 6325855. ], 95 | [ 753151.375, -70748.578, 6325927.5 ], 96 | [ 752802.312, -68028.188, 6325999. ], 97 | [ 752455.75 , -65317.871, 6326068.5 ], 98 | [ 752108.625, -62617.344, 6326137.5 ], 99 | [ 751764.125, -59926.969, 6326204.5 ], 100 | [ 751420.125, -57246.434, 6326270. ], 101 | [ 751077.438, -54575.902, 6326334.5 ], 102 | [ 750735.312, -51915.363, 6326397.5 ], 103 | [ 750396.188, -49264.852, 6326458.5 ], 104 | [ 750056.375, -46624.227, 6326519. ], 105 | [ 749718.875, -43993.633, 6326578. 
]]) 106 | 107 | def test1d(): 108 | 109 | data_pts = np.arange(1000) 110 | kdtree = KDTree(data_pts, leafsize=15) 111 | query_pts = np.arange(400, 300, -10) 112 | dist, idx = kdtree.query(query_pts) 113 | assert idx[0] == 400 114 | assert dist[0] == 0 115 | assert idx[1] == 390 116 | 117 | def test3d(): 118 | 119 | 120 | #7, 93, 45 121 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 122 | [751763.125, -59925.969, 6326205.5], 123 | [769957.188, -202418.125, 6321069.5]]) 124 | 125 | 126 | kdtree = KDTree(data_pts_real) 127 | dist, idx = kdtree.query(query_pts, sqr_dists=True) 128 | 129 | epsilon = 1e-5 130 | assert idx[0] == 7 131 | assert idx[1] == 93 132 | assert idx[2] == 45 133 | assert dist[0] == 0 134 | assert abs(dist[1] - 3.) < epsilon * dist[1] 135 | assert abs(dist[2] - 20001.) < epsilon * dist[2] 136 | 137 | def test3d_float32(): 138 | 139 | 140 | #7, 93, 45 141 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 142 | [751763.125, -59925.969, 6326205.5], 143 | [769957.188, -202418.125, 6321069.5]], dtype=np.float32) 144 | 145 | 146 | kdtree = KDTree(data_pts_real.astype(np.float32)) 147 | dist, idx = kdtree.query(query_pts, sqr_dists=True) 148 | epsilon = 1e-5 149 | assert idx[0] == 7 150 | assert idx[1] == 93 151 | assert idx[2] == 45 152 | assert dist[0] == 0 153 | assert abs(dist[1] - 3.) < epsilon * dist[1] 154 | assert abs(dist[2] - 20001.) < epsilon * dist[2] 155 | assert kdtree.data_pts.dtype == np.float32 156 | 157 | def test3d_float32_mismatch(): 158 | 159 | 160 | #7, 93, 45 161 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 162 | [751763.125, -59925.969, 6326205.5], 163 | [769957.188, -202418.125, 6321069.5]], dtype=np.float32) 164 | 165 | kdtree = KDTree(data_pts_real) 166 | dist, idx = kdtree.query(query_pts, sqr_dists=True) 167 | 168 | def test3d_float32_mismatch2(): 169 | 170 | 171 | #7, 93, 45 172 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 173 | [751763.125, -59925.969, 6326205.5], 174 | [769957.188, -202418.125, 6321069.5]]) 175 | 176 | kdtree = KDTree(data_pts_real.astype(np.float32)) 177 | try: 178 | dist, idx = kdtree.query(query_pts, sqr_dists=True) 179 | assert False 180 | except TypeError: 181 | assert True 182 | 183 | 184 | def test3d_8n(): 185 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 186 | [751763.125, -59925.969, 6326205.5], 187 | [769957.188, -202418.125, 6321069.5]]) 188 | 189 | kdtree = KDTree(data_pts_real) 190 | dist, idx = kdtree.query(query_pts, k=8) 191 | 192 | exp_dist = np.array([[ 0.00000000e+00, 4.05250235e+03, 4.07389794e+03, 8.08201128e+03, 193 | 8.17063009e+03, 1.20904577e+04, 1.22902057e+04, 1.60775136e+04], 194 | [ 1.73205081e+00, 2.70216896e+03, 2.71431274e+03, 5.39537066e+03, 195 | 5.43793210e+03, 8.07855631e+03, 8.17119970e+03, 1.07513693e+04], 196 | [ 1.41424892e+02, 3.25500021e+03, 3.44284958e+03, 6.58019346e+03, 197 | 6.81038455e+03, 9.89140135e+03, 1.01918659e+04, 1.31892516e+04]]) 198 | 199 | exp_idx = np.array([[ 7, 8, 6, 9, 5, 10, 4, 11], 200 | [93, 94, 92, 95, 91, 96, 90, 97], 201 | [45, 46, 44, 47, 43, 48, 42, 49]]) 202 | 203 | assert np.array_equal(idx, exp_idx) 204 | assert np.allclose(dist, exp_dist) 205 | 206 | def test3d_8n_ub(): 207 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 208 | [751763.125, -59925.969, 6326205.5], 209 | [769957.188, -202418.125, 6321069.5]]) 210 | 211 | kdtree = KDTree(data_pts_real) 212 | dist, idx = kdtree.query(query_pts, k=8, distance_upper_bound=10e3, sqr_dists=False) 213 | 214 | exp_dist = 
np.array([[ 0.00000000e+00, 4.05250235e+03, 4.07389794e+03, 8.08201128e+03, 215 | 8.17063009e+03, np.Inf, np.Inf, np.Inf], 216 | [ 1.73205081e+00, 2.70216896e+03, 2.71431274e+03, 5.39537066e+03, 217 | 5.43793210e+03, 8.07855631e+03, 8.17119970e+03, np.Inf], 218 | [ 1.41424892e+02, 3.25500021e+03, 3.44284958e+03, 6.58019346e+03, 219 | 6.81038455e+03, 9.89140135e+03, np.Inf, np.Inf]]) 220 | n = 100 221 | exp_idx = np.array([[ 7, 8, 6, 9, 5, n, n, n], 222 | [93, 94, 92, 95, 91, 96, 90, n], 223 | [45, 46, 44, 47, 43, 48, n, n]]) 224 | 225 | assert np.array_equal(idx, exp_idx) 226 | assert np.allclose(dist, exp_dist) 227 | 228 | def test3d_8n_ub_leaf20(): 229 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 230 | [751763.125, -59925.969, 6326205.5], 231 | [769957.188, -202418.125, 6321069.5]]) 232 | 233 | kdtree = KDTree(data_pts_real, leafsize=20) 234 | dist, idx = kdtree.query(query_pts, k=8, distance_upper_bound=10e3, sqr_dists=False) 235 | 236 | exp_dist = np.array([[ 0.00000000e+00, 4.05250235e+03, 4.07389794e+03, 8.08201128e+03, 237 | 8.17063009e+03, np.Inf, np.Inf, np.Inf], 238 | [ 1.73205081e+00, 2.70216896e+03, 2.71431274e+03, 5.39537066e+03, 239 | 5.43793210e+03, 8.07855631e+03, 8.17119970e+03, np.Inf], 240 | [ 1.41424892e+02, 3.25500021e+03, 3.44284958e+03, 6.58019346e+03, 241 | 6.81038455e+03, 9.89140135e+03, np.Inf, np.Inf]]) 242 | n = 100 243 | exp_idx = np.array([[ 7, 8, 6, 9, 5, n, n, n], 244 | [93, 94, 92, 95, 91, 96, 90, n], 245 | [45, 46, 44, 47, 43, 48, n, n]]) 246 | 247 | assert np.array_equal(idx, exp_idx) 248 | assert np.allclose(dist, exp_dist) 249 | 250 | def test3d_8n_ub_eps(): 251 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 252 | [751763.125, -59925.969, 6326205.5], 253 | [769957.188, -202418.125, 6321069.5]]) 254 | 255 | kdtree = KDTree(data_pts_real) 256 | dist, idx = kdtree.query(query_pts, k=8, eps=0.1, distance_upper_bound=10e3, sqr_dists=False) 257 | 258 | exp_dist = np.array([[ 0.00000000e+00, 4.05250235e+03, 4.07389794e+03, 8.08201128e+03, 259 | 8.17063009e+03, np.Inf, np.Inf, np.Inf], 260 | [ 1.73205081e+00, 2.70216896e+03, 2.71431274e+03, 5.39537066e+03, 261 | 5.43793210e+03, 8.07855631e+03, 8.17119970e+03, np.Inf], 262 | [ 1.41424892e+02, 3.25500021e+03, 3.44284958e+03, 6.58019346e+03, 263 | 6.81038455e+03, 9.89140135e+03, np.Inf, np.Inf]]) 264 | n = 100 265 | exp_idx = np.array([[ 7, 8, 6, 9, 5, n, n, n], 266 | [93, 94, 92, 95, 91, 96, 90, n], 267 | [45, 46, 44, 47, 43, 48, n, n]]) 268 | 269 | assert np.array_equal(idx, exp_idx) 270 | assert np.allclose(dist, exp_dist) 271 | 272 | def test3d_large_query(): 273 | # Target idxs: 7, 93, 45 274 | query_pts = np.array([[ 787014.438, -340616.906, 6313018.], 275 | [751763.125, -59925.969, 6326205.5], 276 | [769957.188, -202418.125, 6321069.5]]) 277 | 278 | # Repeat the same points multiple times to get 60000 query points 279 | n = 20000 280 | query_pts = np.repeat(query_pts, n, axis=0) 281 | 282 | kdtree = KDTree(data_pts_real) 283 | dist, idx = kdtree.query(query_pts, sqr_dists=True) 284 | 285 | epsilon = 1e-5 286 | assert np.all(idx[:n] == 7) 287 | assert np.all(idx[n:2*n] == 93) 288 | assert np.all(idx[2*n:] == 45) 289 | assert np.all(dist[:n] == 0) 290 | assert np.all(abs(dist[n:2*n] - 3.) < epsilon * dist[n:2*n]) 291 | assert np.all(abs(dist[2*n:] - 20001.) 
< epsilon * dist[2*n:])
292 |
293 | def test_scipy_comp():
294 |
295 |     query_pts = np.array([[ 787014.438, -340616.906, 6313018.],
296 |                           [751763.125, -59925.969, 6326205.5],
297 |                           [769957.188, -202418.125, 6321069.5]])
298 |
299 |     kdtree = KDTree(data_pts_real)
300 |     assert id(kdtree.data) == id(kdtree.data_pts)
301 |
302 |
303 | def test1d_mask():
304 |     data_pts = np.arange(1000)
305 |     # put the input locations in random order
306 |     np.random.shuffle(data_pts)
307 |     bad_idx = np.nonzero(data_pts == 400)
308 |     nearest_idx_1 = np.nonzero(data_pts == 399)
309 |     nearest_idx_2 = np.nonzero(data_pts == 390)
310 |     kdtree = KDTree(data_pts, leafsize=15)
311 |     # shift the query points just a little bit for known neighbors
312 |     # we want 399 as a result, not 401, when we query for ~400
313 |     query_pts = np.arange(399.9, 299.9, -10)
314 |     query_mask = np.zeros(data_pts.shape[0]).astype(bool)
315 |     query_mask[bad_idx] = True
316 |     dist, idx = kdtree.query(query_pts, mask=query_mask)
317 |     assert idx[0] == nearest_idx_1  # 399, would be 400 if no mask
318 |     assert np.isclose(dist[0], 0.9)
319 |     assert idx[1] == nearest_idx_2  # 390
320 |     assert np.isclose(dist[1], 0.1)
321 |
322 |
323 | def test1d_all_masked():
324 |     data_pts = np.arange(1000)
325 |     np.random.shuffle(data_pts)
326 |     kdtree = KDTree(data_pts, leafsize=15)
327 |     query_pts = np.arange(400, 300, -10)
328 |     query_mask = np.ones(data_pts.shape[0]).astype(bool)
329 |     dist, idx = kdtree.query(query_pts, mask=query_mask)
330 |     # all invalid
331 |     assert np.all(idx >= 1000)
332 |     assert np.all(dist >= 1001)
333 |
334 |
335 | def test3d_mask():
336 |     #7, 93, 45
337 |     query_pts = np.array([[ 787014.438, -340616.906, 6313018.],
338 |                           [751763.125, -59925.969, 6326205.5],
339 |                           [769957.188, -202418.125, 6321069.5]])
340 |
341 |     kdtree = KDTree(data_pts_real)
342 |     query_mask = np.zeros(data_pts_real.shape[0])
343 |     query_mask[6:10] = True
344 |     dist, idx = kdtree.query(query_pts, sqr_dists=True, mask=query_mask)
345 |
346 |     epsilon = 1e-5
347 |     assert idx[0] == 5  # would be 7 if no mask
348 |     assert idx[1] == 93
349 |     assert idx[2] == 45
350 |     # would be 0 if no mask
351 |     assert abs(dist[0] - 66759196.1053) < epsilon * dist[0]
352 |     assert abs(dist[1] - 3.) < epsilon * dist[1]
353 |     assert abs(dist[2] - 20001.) < epsilon * dist[2]
354 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libkdtree/setup.cfg:
--------------------------------------------------------------------------------
1 | [bdist_rpm]
2 | requires=numpy
3 | release=1
4 |
5 |
6 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2012-2015, P. M. Neila
2 | All rights reserved.
3 |
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are met:
6 |
7 | * Redistributions of source code must retain the above copyright notice, this
8 |   list of conditions and the following disclaimer.
9 |
10 | * Redistributions in binary form must reproduce the above copyright notice,
11 |   this list of conditions and the following disclaimer in the documentation
12 |   and/or other materials provided with the distribution.
13 |
14 | * Neither the name of the copyright holder nor the names of its
15 |   contributors may be used to endorse or promote products derived from
16 |   this software without specific prior written permission.
17 |
18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
19 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
20 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
22 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
24 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
25 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
26 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
28 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/README.rst:
--------------------------------------------------------------------------------
1 | ========
2 | PyMCubes
3 | ========
4 |
5 | PyMCubes is an implementation of the marching cubes algorithm to extract
6 | isosurfaces from volumetric data. The volumetric data can be given as a
7 | three-dimensional NumPy array or as a Python function ``f(x, y, z)``. The first
8 | option is much faster, but it requires more memory and becomes infeasible for
9 | very large volumes.
10 |
11 | PyMCubes also provides a function to export the results of the marching cubes as
12 | COLLADA ``(.dae)`` files. This requires the
13 | `PyCollada <https://github.com/pycollada/pycollada>`_ library.
14 |
15 | Installation
16 | ============
17 |
18 | Just as any standard Python package, clone or download the project
19 | and run::
20 |
21 |     $ cd path/to/PyMCubes
22 |     $ python setup.py build
23 |     $ python setup.py install
24 |
25 | If you do not have write permission on the directory of Python packages,
26 | install with the ``--user`` option::
27 |
28 |     $ python setup.py install --user
29 |
30 | Example
31 | =======
32 |
33 | The following example creates a data volume with spherical isosurfaces and
34 | extracts one of them (i.e., a sphere) with PyMCubes. The result is exported as
35 | ``sphere.dae``::
36 |
37 |     >>> import numpy as np
38 |     >>> import mcubes
39 |
40 |     # Create a data volume (30 x 30 x 30)
41 |     >>> X, Y, Z = np.mgrid[:30, :30, :30]
42 |     >>> u = (X-15)**2 + (Y-15)**2 + (Z-15)**2 - 8**2
43 |
44 |     # Extract the 0-isosurface
45 |     >>> vertices, triangles = mcubes.marching_cubes(u, 0)
46 |
47 |     # Export the result to sphere.dae
48 |     >>> mcubes.export_mesh(vertices, triangles, "sphere.dae", "MySphere")
49 |
50 | The second example is very similar to the first one, but it uses a function
51 | to represent the volume instead of a NumPy array::
52 |
53 |     >>> import numpy as np
54 |     >>> import mcubes
55 |
56 |     # Create the volume
57 |     >>> f = lambda x, y, z: x**2 + y**2 + z**2
58 |
59 |     # Extract the 16-isosurface
60 |     >>> vertices, triangles = mcubes.marching_cubes_func((-10,-10,-10), (10,10,10),
61 |     ...                                                  100, 100, 100, f, 16)
62 |
63 |     # Export the result to sphere2.dae
64 |     >>> mcubes.export_mesh(vertices, triangles, "sphere2.dae", "MySphere")
65 |
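The exporter module bundled with this copy (see exporter.py below) can also write Wavefront OBJ and OFF files with the same ``(vertices, triangles)`` arguments; a minimal sketch reusing the arrays from the examples above (output filenames are arbitrary)::

    >>> mcubes.export_obj(vertices, triangles, "sphere2.obj")
    >>> mcubes.export_off(vertices, triangles, "sphere2.off")

Whether these helpers are exposed on the module you import depends on the package ``__init__``; in this repository they are re-exported from ``mesh_gen_utils.libmcubes``.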
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/__init__.py:
--------------------------------------------------------------------------------
1 | from mesh_gen_utils.libmcubes.mcubes import (
2 |     marching_cubes, marching_cubes_func
3 | )
4 | from mesh_gen_utils.libmcubes.exporter import (
5 |     export_mesh, export_obj, export_off
6 | )
7 |
8 |
9 | __all__ = [
10 |     marching_cubes, marching_cubes_func,
11 |     export_mesh, export_obj, export_off
12 | ]
13 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmcubes/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/__pycache__/exporter.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmcubes/__pycache__/exporter.cpython-36.pyc
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/exporter.py:
--------------------------------------------------------------------------------
1 |
2 | import numpy as np
3 |
4 |
5 | def export_obj(vertices, triangles, filename):
6 |     """
7 |     Exports a mesh in the (.obj) format.
8 |     """
9 |
10 |     with open(filename, 'w') as fh:
11 |
12 |         for v in vertices:
13 |             fh.write("v {} {} {}\n".format(*v))
14 |
15 |         for f in triangles:
16 |             fh.write("f {} {} {}\n".format(*(f + 1)))
17 |
18 |
19 | def export_off(vertices, triangles, filename):
20 |     """
21 |     Exports a mesh in the (.off) format.
22 |     """
23 |
24 |     with open(filename, 'w') as fh:
25 |         fh.write('OFF\n')
26 |         fh.write('{} {} 0\n'.format(len(vertices), len(triangles)))
27 |
28 |         for v in vertices:
29 |             fh.write("{} {} {}\n".format(*v))
30 |
31 |         for f in triangles:
32 |             fh.write("3 {} {} {}\n".format(*f))
33 |
34 |
35 | def export_mesh(vertices, triangles, filename, mesh_name="mcubes_mesh"):
36 |     """
37 |     Exports a mesh in the COLLADA (.dae) format.
38 |
39 |     Needs PyCollada (https://github.com/pycollada/pycollada).
40 | """ 41 | 42 | import collada 43 | 44 | mesh = collada.Collada() 45 | 46 | vert_src = collada.source.FloatSource("verts-array", vertices, ('X','Y','Z')) 47 | geom = collada.geometry.Geometry(mesh, "geometry0", mesh_name, [vert_src]) 48 | 49 | input_list = collada.source.InputList() 50 | input_list.addInput(0, 'VERTEX', "#verts-array") 51 | 52 | triset = geom.createTriangleSet(np.copy(triangles), input_list, "") 53 | geom.primitives.append(triset) 54 | mesh.geometries.append(geom) 55 | 56 | geomnode = collada.scene.GeometryNode(geom, []) 57 | node = collada.scene.Node(mesh_name, children=[geomnode]) 58 | 59 | myscene = collada.scene.Scene("mcubes_scene", [node]) 60 | mesh.scenes.append(myscene) 61 | mesh.scene = myscene 62 | 63 | mesh.write(filename) 64 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmcubes/mcubes.cpython-36m-x86_64-linux-gnu.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmcubes/mcubes.cpython-36m-x86_64-linux-gnu.so -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmcubes/mcubes.pyx: -------------------------------------------------------------------------------- 1 | 2 | # distutils: language = c++ 3 | # cython: embedsignature = True 4 | 5 | # from libcpp.vector cimport vector 6 | import numpy as np 7 | 8 | # Define PY_ARRAY_UNIQUE_SYMBOL 9 | cdef extern from "pyarray_symbol.h": 10 | pass 11 | 12 | cimport numpy as np 13 | 14 | np.import_array() 15 | 16 | cdef extern from "pywrapper.h": 17 | cdef object c_marching_cubes "marching_cubes"(np.ndarray, double) except + 18 | cdef object c_marching_cubes2 "marching_cubes2"(np.ndarray, double) except + 19 | cdef object c_marching_cubes3 "marching_cubes3"(np.ndarray, double) except + 20 | cdef object c_marching_cubes_func "marching_cubes_func"(tuple, tuple, int, int, int, object, double) except + 21 | 22 | def marching_cubes(np.ndarray volume, float isovalue): 23 | 24 | verts, faces = c_marching_cubes(volume, isovalue) 25 | verts.shape = (-1, 3) 26 | faces.shape = (-1, 3) 27 | return verts, faces 28 | 29 | def marching_cubes2(np.ndarray volume, float isovalue): 30 | 31 | verts, faces = c_marching_cubes2(volume, isovalue) 32 | verts.shape = (-1, 3) 33 | faces.shape = (-1, 3) 34 | return verts, faces 35 | 36 | def marching_cubes3(np.ndarray volume, float isovalue): 37 | 38 | verts, faces = c_marching_cubes3(volume, isovalue) 39 | verts.shape = (-1, 3) 40 | faces.shape = (-1, 3) 41 | return verts, faces 42 | 43 | def marching_cubes_func(tuple lower, tuple upper, int numx, int numy, int numz, object f, double isovalue): 44 | 45 | verts, faces = c_marching_cubes_func(lower, upper, numx, numy, numz, f, isovalue) 46 | verts.shape = (-1, 3) 47 | faces.shape = (-1, 3) 48 | return verts, faces 49 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmcubes/pyarray_symbol.h: -------------------------------------------------------------------------------- 1 | 2 | #define PY_ARRAY_UNIQUE_SYMBOL mcubes_PyArray_API 3 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmcubes/pyarraymodule.h: -------------------------------------------------------------------------------- 1 | 2 | #ifndef _EXTMODULE_H 3 | #define _EXTMODULE_H 4 | 5 | #include 6 | #include 7 | 8 | // #define 
NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
9 | #define PY_ARRAY_UNIQUE_SYMBOL mcubes_PyArray_API
10 | #define NO_IMPORT_ARRAY
11 | #include "numpy/arrayobject.h"
12 |
13 | #include <complex>
14 |
15 | template<class T>
16 | struct numpy_typemap;
17 |
18 | #define define_numpy_type(ctype, dtype) \
19 |     template<> \
20 |     struct numpy_typemap<ctype> \
21 |     {static const int type = dtype;};
22 |
23 | define_numpy_type(bool, NPY_BOOL);
24 | define_numpy_type(char, NPY_BYTE);
25 | define_numpy_type(short, NPY_SHORT);
26 | define_numpy_type(int, NPY_INT);
27 | define_numpy_type(long, NPY_LONG);
28 | define_numpy_type(long long, NPY_LONGLONG);
29 | define_numpy_type(unsigned char, NPY_UBYTE);
30 | define_numpy_type(unsigned short, NPY_USHORT);
31 | define_numpy_type(unsigned int, NPY_UINT);
32 | define_numpy_type(unsigned long, NPY_ULONG);
33 | define_numpy_type(unsigned long long, NPY_ULONGLONG);
34 | define_numpy_type(float, NPY_FLOAT);
35 | define_numpy_type(double, NPY_DOUBLE);
36 | define_numpy_type(long double, NPY_LONGDOUBLE);
37 | define_numpy_type(std::complex<float>, NPY_CFLOAT);
38 | define_numpy_type(std::complex<double>, NPY_CDOUBLE);
39 | define_numpy_type(std::complex<long double>, NPY_CLONGDOUBLE);
40 |
41 | template<class T>
42 | T PyArray_SafeGet(const PyArrayObject* aobj, const npy_intp* indaux)
43 | {
44 |     // HORROR.
45 |     npy_intp* ind = const_cast<npy_intp*>(indaux);
46 |     void* ptr = PyArray_GetPtr(const_cast<PyArrayObject*>(aobj), ind);
47 |     switch(PyArray_TYPE(aobj))
48 |     {
49 |     case NPY_BOOL:
50 |         return static_cast<T>(*reinterpret_cast<bool*>(ptr));
51 |     case NPY_BYTE:
52 |         return static_cast<T>(*reinterpret_cast<char*>(ptr));
53 |     case NPY_SHORT:
54 |         return static_cast<T>(*reinterpret_cast<short*>(ptr));
55 |     case NPY_INT:
56 |         return static_cast<T>(*reinterpret_cast<int*>(ptr));
57 |     case NPY_LONG:
58 |         return static_cast<T>(*reinterpret_cast<long*>(ptr));
59 |     case NPY_LONGLONG:
60 |         return static_cast<T>(*reinterpret_cast<long long*>(ptr));
61 |     case NPY_UBYTE:
62 |         return static_cast<T>(*reinterpret_cast<unsigned char*>(ptr));
63 |     case NPY_USHORT:
64 |         return static_cast<T>(*reinterpret_cast<unsigned short*>(ptr));
65 |     case NPY_UINT:
66 |         return static_cast<T>(*reinterpret_cast<unsigned int*>(ptr));
67 |     case NPY_ULONG:
68 |         return static_cast<T>(*reinterpret_cast<unsigned long*>(ptr));
69 |     case NPY_ULONGLONG:
70 |         return static_cast<T>(*reinterpret_cast<unsigned long long*>(ptr));
71 |     case NPY_FLOAT:
72 |         return static_cast<T>(*reinterpret_cast<float*>(ptr));
73 |     case NPY_DOUBLE:
74 |         return static_cast<T>(*reinterpret_cast<double*>(ptr));
75 |     case NPY_LONGDOUBLE:
76 |         return static_cast<T>(*reinterpret_cast<long double*>(ptr));
77 |     default:
78 |         throw std::runtime_error("data type not supported");
79 |     }
80 | }
81 |
82 | template<class T>
83 | T PyArray_SafeSet(PyArrayObject* aobj, const npy_intp* indaux, const T& value)
84 | {
85 |     // HORROR.
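    // PyArray_GetPtr is not const-correct, so the index pointer has its
    // constness cast away before the element is written below.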
86 |     npy_intp* ind = const_cast<npy_intp*>(indaux);
87 |     void* ptr = PyArray_GetPtr(aobj, ind);
88 |     switch(PyArray_TYPE(aobj))
89 |     {
90 |     case NPY_BOOL:
91 |         *reinterpret_cast<bool*>(ptr) = static_cast<bool>(value);
92 |         break;
93 |     case NPY_BYTE:
94 |         *reinterpret_cast<char*>(ptr) = static_cast<char>(value);
95 |         break;
96 |     case NPY_SHORT:
97 |         *reinterpret_cast<short*>(ptr) = static_cast<short>(value);
98 |         break;
99 |     case NPY_INT:
100 |         *reinterpret_cast<int*>(ptr) = static_cast<int>(value);
101 |         break;
102 |     case NPY_LONG:
103 |         *reinterpret_cast<long*>(ptr) = static_cast<long>(value);
104 |         break;
105 |     case NPY_LONGLONG:
106 |         *reinterpret_cast<long long*>(ptr) = static_cast<long long>(value);
107 |         break;
108 |     case NPY_UBYTE:
109 |         *reinterpret_cast<unsigned char*>(ptr) = static_cast<unsigned char>(value);
110 |         break;
111 |     case NPY_USHORT:
112 |         *reinterpret_cast<unsigned short*>(ptr) = static_cast<unsigned short>(value);
113 |         break;
114 |     case NPY_UINT:
115 |         *reinterpret_cast<unsigned int*>(ptr) = static_cast<unsigned int>(value);
116 |         break;
117 |     case NPY_ULONG:
118 |         *reinterpret_cast<unsigned long*>(ptr) = static_cast<unsigned long>(value);
119 |         break;
120 |     case NPY_ULONGLONG:
121 |         *reinterpret_cast<unsigned long long*>(ptr) = static_cast<unsigned long long>(value);
122 |         break;
123 |     case NPY_FLOAT:
124 |         *reinterpret_cast<float*>(ptr) = static_cast<float>(value);
125 |         break;
126 |     case NPY_DOUBLE:
127 |         *reinterpret_cast<double*>(ptr) = static_cast<double>(value);
128 |         break;
129 |     case NPY_LONGDOUBLE:
130 |         *reinterpret_cast<long double*>(ptr) = static_cast<long double>(value);
131 |         break;
132 |     default:
133 |         throw std::runtime_error("data type not supported");
134 |     }
135 | }
136 |
137 | #endif
138 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/pywrapper.cpp:
--------------------------------------------------------------------------------
1 |
2 | #include "pywrapper.h"
3 |
4 | #include "marchingcubes.h"
5 |
6 | #include <stdexcept>
7 |
8 | struct PythonToCFunc
9 | {
10 |     PyObject* func;
11 |     PythonToCFunc(PyObject* func) {this->func = func;}
12 |     double operator()(double x, double y, double z)
13 |     {
14 |         PyObject* res = PyObject_CallFunction(func, "(d,d,d)", x, y, z); // py::extract<double>(func(x,y,z));
15 |         if(res == NULL)
16 |             return 0.0;
17 |
18 |         double result = PyFloat_AsDouble(res);
19 |         Py_DECREF(res);
20 |         return result;
21 |     }
22 | };
23 |
24 | PyObject* marching_cubes_func(PyObject* lower, PyObject* upper,
25 |     int numx, int numy, int numz, PyObject* f, double isovalue)
26 | {
27 |     std::vector<double> vertices;
28 |     std::vector<size_t> polygons;
29 |
30 |     // Copy the lower and upper coordinates to a C array.
31 |     double lower_[3];
32 |     double upper_[3];
33 |     for(int i=0; i<3; ++i)
34 |     {
35 |         PyObject* l = PySequence_GetItem(lower, i);
36 |         if(l == NULL)
37 |             throw std::runtime_error("error");
38 |         PyObject* u = PySequence_GetItem(upper, i);
39 |         if(u == NULL)
40 |         {
41 |             Py_DECREF(l);
42 |             throw std::runtime_error("error");
43 |         }
44 |
45 |         lower_[i] = PyFloat_AsDouble(l);
46 |         upper_[i] = PyFloat_AsDouble(u);
47 |
48 |         Py_DECREF(l);
49 |         Py_DECREF(u);
50 |         if(lower_[i]==-1.0 || upper_[i]==-1.0)
51 |         {
52 |             if(PyErr_Occurred())
53 |                 throw std::runtime_error("error");
54 |         }
55 |     }
56 |
57 |     // Marching cubes.
58 |     mc::marching_cubes(lower_, upper_, numx, numy, numz, PythonToCFunc(f), isovalue, vertices, polygons);
59 |
60 |     // Copy the result to two Python ndarrays.
61 |     npy_intp size_vertices = vertices.size();
62 |     npy_intp size_polygons = polygons.size();
63 |     PyArrayObject* verticesarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_vertices, PyArray_DOUBLE));
64 |     PyArrayObject* polygonsarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_polygons, PyArray_ULONG));
65 |
66 |     std::vector<double>::const_iterator it = vertices.begin();
67 |     for(int i=0; it!=vertices.end(); ++i, ++it)
68 |         *reinterpret_cast<double*>(PyArray_GETPTR1(verticesarr, i)) = *it;
69 |     std::vector<size_t>::const_iterator it2 = polygons.begin();
70 |     for(int i=0; it2!=polygons.end(); ++i, ++it2)
71 |         *reinterpret_cast<unsigned long*>(PyArray_GETPTR1(polygonsarr, i)) = *it2;
72 |
73 |     PyObject* res = Py_BuildValue("(O,O)", verticesarr, polygonsarr);
74 |     Py_XDECREF(verticesarr);
75 |     Py_XDECREF(polygonsarr);
76 |     return res;
77 | }
78 |
79 | struct PyArrayToCFunc
80 | {
81 |     PyArrayObject* arr;
82 |     PyArrayToCFunc(PyArrayObject* arr) {this->arr = arr;}
83 |     double operator()(int x, int y, int z)
84 |     {
85 |         npy_intp c[3] = {x,y,z};
86 |         return PyArray_SafeGet<double>(arr, c);
87 |     }
88 | };
89 |
90 | PyObject* marching_cubes(PyArrayObject* arr, double isovalue)
91 | {
92 |     if(PyArray_NDIM(arr) != 3)
93 |         throw std::runtime_error("Only three-dimensional arrays are supported.");
94 |
95 |     // Prepare data.
96 |     npy_intp* shape = PyArray_DIMS(arr);
97 |     double lower[3] = {0,0,0};
98 |     double upper[3] = {shape[0]-1, shape[1]-1, shape[2]-1};
99 |     long numx = upper[0] - lower[0] + 1;
100 |     long numy = upper[1] - lower[1] + 1;
101 |     long numz = upper[2] - lower[2] + 1;
102 |     std::vector<double> vertices;
103 |     std::vector<size_t> polygons;
104 |
105 |     // Marching cubes.
106 |     mc::marching_cubes(lower, upper, numx, numy, numz, PyArrayToCFunc(arr), isovalue,
107 |                        vertices, polygons);
108 |
109 |     // Copy the result to two Python ndarrays.
110 |     npy_intp size_vertices = vertices.size();
111 |     npy_intp size_polygons = polygons.size();
112 |     PyArrayObject* verticesarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_vertices, PyArray_DOUBLE));
113 |     PyArrayObject* polygonsarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_polygons, PyArray_ULONG));
114 |
115 |     std::vector<double>::const_iterator it = vertices.begin();
116 |     for(int i=0; it!=vertices.end(); ++i, ++it)
117 |         *reinterpret_cast<double*>(PyArray_GETPTR1(verticesarr, i)) = *it;
118 |     std::vector<size_t>::const_iterator it2 = polygons.begin();
119 |     for(int i=0; it2!=polygons.end(); ++i, ++it2)
120 |         *reinterpret_cast<unsigned long*>(PyArray_GETPTR1(polygonsarr, i)) = *it2;
121 |
122 |     PyObject* res = Py_BuildValue("(O,O)", verticesarr, polygonsarr);
123 |     Py_XDECREF(verticesarr);
124 |     Py_XDECREF(polygonsarr);
125 |
126 |     return res;
127 | }
128 |
129 | PyObject* marching_cubes2(PyArrayObject* arr, double isovalue)
130 | {
131 |     if(PyArray_NDIM(arr) != 3)
132 |         throw std::runtime_error("Only three-dimensional arrays are supported.");
133 |
134 |     // Prepare data.
135 |     npy_intp* shape = PyArray_DIMS(arr);
136 |     double lower[3] = {0,0,0};
137 |     double upper[3] = {shape[0]-1, shape[1]-1, shape[2]-1};
138 |     long numx = upper[0] - lower[0] + 1;
139 |     long numy = upper[1] - lower[1] + 1;
140 |     long numz = upper[2] - lower[2] + 1;
141 |     std::vector<double> vertices;
142 |     std::vector<size_t> polygons;
143 |
144 |     // Marching cubes.
145 |     mc::marching_cubes2(lower, upper, numx, numy, numz, PyArrayToCFunc(arr), isovalue,
146 |                         vertices, polygons);
147 |
148 |     // Copy the result to two Python ndarrays.
149 |     npy_intp size_vertices = vertices.size();
150 |     npy_intp size_polygons = polygons.size();
151 |     PyArrayObject* verticesarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_vertices, PyArray_DOUBLE));
152 |     PyArrayObject* polygonsarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_polygons, PyArray_ULONG));
153 |
154 |     std::vector<double>::const_iterator it = vertices.begin();
155 |     for(int i=0; it!=vertices.end(); ++i, ++it)
156 |         *reinterpret_cast<double*>(PyArray_GETPTR1(verticesarr, i)) = *it;
157 |     std::vector<size_t>::const_iterator it2 = polygons.begin();
158 |     for(int i=0; it2!=polygons.end(); ++i, ++it2)
159 |         *reinterpret_cast<unsigned long*>(PyArray_GETPTR1(polygonsarr, i)) = *it2;
160 |
161 |     PyObject* res = Py_BuildValue("(O,O)", verticesarr, polygonsarr);
162 |     Py_XDECREF(verticesarr);
163 |     Py_XDECREF(polygonsarr);
164 |
165 |     return res;
166 | }
167 |
168 | PyObject* marching_cubes3(PyArrayObject* arr, double isovalue)
169 | {
170 |     if(PyArray_NDIM(arr) != 3)
171 |         throw std::runtime_error("Only three-dimensional arrays are supported.");
172 |
173 |     // Prepare data.
174 |     npy_intp* shape = PyArray_DIMS(arr);
175 |     double lower[3] = {0,0,0};
176 |     double upper[3] = {shape[0]-1, shape[1]-1, shape[2]-1};
177 |     long numx = upper[0] - lower[0] + 1;
178 |     long numy = upper[1] - lower[1] + 1;
179 |     long numz = upper[2] - lower[2] + 1;
180 |     std::vector<double> vertices;
181 |     std::vector<size_t> polygons;
182 |
183 |     // Marching cubes.
184 |     mc::marching_cubes3(lower, upper, numx, numy, numz, PyArrayToCFunc(arr), isovalue,
185 |                         vertices, polygons);
186 |
187 |     // Copy the result to two Python ndarrays.
188 |     npy_intp size_vertices = vertices.size();
189 |     npy_intp size_polygons = polygons.size();
190 |     PyArrayObject* verticesarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_vertices, PyArray_DOUBLE));
191 |     PyArrayObject* polygonsarr = reinterpret_cast<PyArrayObject*>(PyArray_SimpleNew(1, &size_polygons, PyArray_ULONG));
192 |
193 |     std::vector<double>::const_iterator it = vertices.begin();
194 |     for(int i=0; it!=vertices.end(); ++i, ++it)
195 |         *reinterpret_cast<double*>(PyArray_GETPTR1(verticesarr, i)) = *it;
196 |     std::vector<size_t>::const_iterator it2 = polygons.begin();
197 |     for(int i=0; it2!=polygons.end(); ++i, ++it2)
198 |         *reinterpret_cast<unsigned long*>(PyArray_GETPTR1(polygonsarr, i)) = *it2;
199 |
200 |     PyObject* res = Py_BuildValue("(O,O)", verticesarr, polygonsarr);
201 |     Py_XDECREF(verticesarr);
202 |     Py_XDECREF(polygonsarr);
203 |
204 |     return res;
205 | }
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmcubes/pywrapper.h:
--------------------------------------------------------------------------------
1 |
2 | #ifndef _PYWRAPPER_H
3 | #define _PYWRAPPER_H
4 |
5 | #include <Python.h>
6 | #include "pyarraymodule.h"
7 |
8 | #include <vector>
9 |
10 | PyObject* marching_cubes(PyArrayObject* arr, double isovalue);
11 | PyObject* marching_cubes2(PyArrayObject* arr, double isovalue);
12 | PyObject* marching_cubes3(PyArrayObject* arr, double isovalue);
13 | PyObject* marching_cubes_func(PyObject* lower, PyObject* upper,
14 |     int numx, int numy, int numz, PyObject* f, double isovalue);
15 |
16 | #endif // _PYWRAPPER_H
17 |
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmesh/__init__.py:
--------------------------------------------------------------------------------
1 | from .inside_mesh import (
2 |     check_mesh_contains, MeshIntersector, TriangleIntersector2d
3 | )
4 |
5 |
6 | __all__ = [
7 |     check_mesh_contains, MeshIntersector, TriangleIntersector2d
8 | ]
9 |
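A minimal sketch of how these helpers are typically driven (the watertight mesh, the `trimesh` loader, and the file name are illustrative assumptions, not part of this repository):

```python
import numpy as np
import trimesh

from mesh_gen_utils.libmesh import check_mesh_contains

mesh = trimesh.load('model.off')                       # any watertight mesh exposing .vertices / .faces
points = np.random.uniform(-0.5, 0.5, size=(2048, 3))  # query locations
occupancy = check_mesh_contains(mesh, points)          # boolean array of shape (2048,)
```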
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmesh/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmesh/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmesh/__pycache__/inside_mesh.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmesh/__pycache__/inside_mesh.cpython-36.pyc
--------------------------------------------------------------------------------
/CL3D/mesh_gen_utils/libmesh/inside_mesh.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from .triangle_hash import TriangleHash as _TriangleHash
3 |
4 |
5 | def check_mesh_contains(mesh, points, hash_resolution=512):
6 |     intersector = MeshIntersector(mesh, hash_resolution)
7 |     contains = intersector.query(points)
8 |     return contains
9 |
10 |
11 | class MeshIntersector:
12 |     def __init__(self, mesh, resolution=512):
13 |         triangles = mesh.vertices[mesh.faces].astype(np.float64)
14 |         n_tri = triangles.shape[0]
15 |
16 |         self.resolution = resolution
17 |         self.bbox_min = triangles.reshape(3 * n_tri, 3).min(axis=0)
18 |         self.bbox_max = triangles.reshape(3 * n_tri, 3).max(axis=0)
19 |         # Translate and scale it to [0.5, self.resolution - 0.5]^3
20 |         self.scale = (resolution - 1) / (self.bbox_max - self.bbox_min)
21 |         self.translate = 0.5 - self.scale * self.bbox_min
22 |
23 |         self._triangles = triangles = self.rescale(triangles)
24 |         # assert(np.allclose(triangles.reshape(-1, 3).min(0), 0.5))
25 |         # assert(np.allclose(triangles.reshape(-1, 3).max(0), resolution - 0.5))
26 |
27 |         triangles2d = triangles[:, :, :2]
28 |         self._tri_intersector2d = TriangleIntersector2d(
29 |             triangles2d, resolution)
30 |
31 |     def query(self, points):
32 |         # Rescale points
33 |         points = self.rescale(points)
34 |
35 |         # placeholder result with no hits we'll fill in later
36 |         contains = np.zeros(len(points), dtype=np.bool)
37 |
38 |         # cull points outside of the axis aligned bounding box
39 |         # this avoids running ray tests unless points are close
40 |         inside_aabb = np.all(
41 |             (0 <= points) & (points <= self.resolution), axis=1)
42 |         if not inside_aabb.any():
43 |             return contains
44 |
45 |         # Only consider points inside bounding box
46 |         mask = inside_aabb
47 |         points = points[mask]
48 |
49 |         # Compute intersection depth and check order
50 |         points_indices, tri_indices = self._tri_intersector2d.query(points[:, :2])
51 |
52 |         triangles_intersect = self._triangles[tri_indices]
53 |         points_intersect = points[points_indices]
54 |
55 |         depth_intersect, abs_n_2 = self.compute_intersection_depth(
56 |             points_intersect, triangles_intersect)
57 |
58 |         # Count number of intersections in both directions
59 |         smaller_depth = depth_intersect >= points_intersect[:, 2] * abs_n_2
60 |         bigger_depth = depth_intersect < points_intersect[:, 2] * abs_n_2
61 |         points_indices_0 = points_indices[smaller_depth]
62 |         points_indices_1 = points_indices[bigger_depth]
63 |
64 |         nintersect0 = np.bincount(points_indices_0, minlength=points.shape[0])
65 |         nintersect1 = np.bincount(points_indices_1, minlength=points.shape[0])
66 |
67 |         # Check if point
contained in mesh 68 | contains1 = (np.mod(nintersect0, 2) == 1) 69 | contains2 = (np.mod(nintersect1, 2) == 1) 70 | if (contains1 != contains2).any(): 71 | print('Warning: contains1 != contains2 for some points.') 72 | contains[mask] = (contains1 & contains2) 73 | return contains 74 | 75 | def compute_intersection_depth(self, points, triangles): 76 | t1 = triangles[:, 0, :] 77 | t2 = triangles[:, 1, :] 78 | t3 = triangles[:, 2, :] 79 | 80 | v1 = t3 - t1 81 | v2 = t2 - t1 82 | # v1 = v1 / np.linalg.norm(v1, axis=-1, keepdims=True) 83 | # v2 = v2 / np.linalg.norm(v2, axis=-1, keepdims=True) 84 | 85 | normals = np.cross(v1, v2) 86 | alpha = np.sum(normals[:, :2] * (t1[:, :2] - points[:, :2]), axis=1) 87 | 88 | n_2 = normals[:, 2] 89 | t1_2 = t1[:, 2] 90 | s_n_2 = np.sign(n_2) 91 | abs_n_2 = np.abs(n_2) 92 | 93 | mask = (abs_n_2 != 0) 94 | 95 | depth_intersect = np.full(points.shape[0], np.nan) 96 | depth_intersect[mask] = \ 97 | t1_2[mask] * abs_n_2[mask] + alpha[mask] * s_n_2[mask] 98 | 99 | # Test the depth: 100 | # TODO: remove and put into tests 101 | # points_new = np.concatenate([points[:, :2], depth_intersect[:, None]], axis=1) 102 | # alpha = (normals * t1).sum(-1) 103 | # mask = (depth_intersect == depth_intersect) 104 | # assert(np.allclose((points_new[mask] * normals[mask]).sum(-1), 105 | # alpha[mask])) 106 | return depth_intersect, abs_n_2 107 | 108 | def rescale(self, array): 109 | array = self.scale * array + self.translate 110 | return array 111 | 112 | 113 | class TriangleIntersector2d: 114 | def __init__(self, triangles, resolution=128): 115 | self.triangles = triangles 116 | self.tri_hash = _TriangleHash(triangles, resolution) 117 | 118 | def query(self, points): 119 | point_indices, tri_indices = self.tri_hash.query(points) 120 | point_indices = np.array(point_indices, dtype=np.int64) 121 | tri_indices = np.array(tri_indices, dtype=np.int64) 122 | points = points[point_indices] 123 | triangles = self.triangles[tri_indices] 124 | mask = self.check_triangles(points, triangles) 125 | point_indices = point_indices[mask] 126 | tri_indices = tri_indices[mask] 127 | return point_indices, tri_indices 128 | 129 | def check_triangles(self, points, triangles): 130 | contains = np.zeros(points.shape[0], dtype=np.bool) 131 | A = triangles[:, :2] - triangles[:, 2:] 132 | A = A.transpose([0, 2, 1]) 133 | y = points - triangles[:, 2] 134 | 135 | detA = A[:, 0, 0] * A[:, 1, 1] - A[:, 0, 1] * A[:, 1, 0] 136 | 137 | mask = (np.abs(detA) != 0.) 
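        # Only well-posed triangles are kept; the 2x2 barycentric system below
        # is solved solely where the edge-matrix determinant is nonzero.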
138 | A = A[mask] 139 | y = y[mask] 140 | detA = detA[mask] 141 | 142 | s_detA = np.sign(detA) 143 | abs_detA = np.abs(detA) 144 | 145 | u = (A[:, 1, 1] * y[:, 0] - A[:, 0, 1] * y[:, 1]) * s_detA 146 | v = (-A[:, 1, 0] * y[:, 0] + A[:, 0, 0] * y[:, 1]) * s_detA 147 | 148 | sum_uv = u + v 149 | contains[mask] = ( 150 | (0 < u) & (u < abs_detA) & (0 < v) & (v < abs_detA) 151 | & (0 < sum_uv) & (sum_uv < abs_detA) 152 | ) 153 | return contains 154 | 155 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmesh/triangle_hash.cpython-36m-x86_64-linux-gnu.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmesh/triangle_hash.cpython-36m-x86_64-linux-gnu.so -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmesh/triangle_hash.pyx: -------------------------------------------------------------------------------- 1 | 2 | # distutils: language=c++ 3 | import numpy as np 4 | cimport numpy as np 5 | cimport cython 6 | from libcpp.vector cimport vector 7 | from libc.math cimport floor, ceil 8 | 9 | cdef class TriangleHash: 10 | cdef vector[vector[int]] spatial_hash 11 | cdef int resolution 12 | 13 | def __cinit__(self, double[:, :, :] triangles, int resolution): 14 | self.spatial_hash.resize(resolution * resolution) 15 | self.resolution = resolution 16 | self._build_hash(triangles) 17 | 18 | @cython.boundscheck(False) # Deactivate bounds checking 19 | @cython.wraparound(False) # Deactivate negative indexing. 20 | cdef int _build_hash(self, double[:, :, :] triangles): 21 | assert(triangles.shape[1] == 3) 22 | assert(triangles.shape[2] == 2) 23 | 24 | cdef int n_tri = triangles.shape[0] 25 | cdef int bbox_min[2] 26 | cdef int bbox_max[2] 27 | 28 | cdef int i_tri, j, x, y 29 | cdef int spatial_idx 30 | 31 | for i_tri in range(n_tri): 32 | # Compute bounding box 33 | for j in range(2): 34 | bbox_min[j] = min( 35 | triangles[i_tri, 0, j], triangles[i_tri, 1, j], triangles[i_tri, 2, j] 36 | ) 37 | bbox_max[j] = max( 38 | triangles[i_tri, 0, j], triangles[i_tri, 1, j], triangles[i_tri, 2, j] 39 | ) 40 | bbox_min[j] = min(max(bbox_min[j], 0), self.resolution - 1) 41 | bbox_max[j] = min(max(bbox_max[j], 0), self.resolution - 1) 42 | 43 | # Find all voxels where bounding box intersects 44 | for x in range(bbox_min[0], bbox_max[0] + 1): 45 | for y in range(bbox_min[1], bbox_max[1] + 1): 46 | spatial_idx = self.resolution * x + y 47 | self.spatial_hash[spatial_idx].push_back(i_tri) 48 | 49 | @cython.boundscheck(False) # Deactivate bounds checking 50 | @cython.wraparound(False) # Deactivate negative indexing. 
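    # Maps each 2D query point to its grid cell and emits (point index,
    # triangle index) pairs for every triangle whose rasterized bounding
    # box covers that cell.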
51 | cpdef query(self, double[:, :] points): 52 | assert(points.shape[1] == 2) 53 | cdef int n_points = points.shape[0] 54 | 55 | cdef vector[int] points_indices 56 | cdef vector[int] tri_indices 57 | # cdef int[:] points_indices_np 58 | # cdef int[:] tri_indices_np 59 | 60 | cdef int i_point, k, x, y 61 | cdef int spatial_idx 62 | 63 | for i_point in range(n_points): 64 | x = int(points[i_point, 0]) 65 | y = int(points[i_point, 1]) 66 | if not (0 <= x < self.resolution and 0 <= y < self.resolution): 67 | continue 68 | 69 | spatial_idx = self.resolution * x + y 70 | for i_tri in self.spatial_hash[spatial_idx]: 71 | points_indices.push_back(i_point) 72 | tri_indices.push_back(i_tri) 73 | 74 | points_indices_np = np.zeros(points_indices.size(), dtype=np.int32) 75 | tri_indices_np = np.zeros(tri_indices.size(), dtype=np.int32) 76 | 77 | cdef int[:] points_indices_view = points_indices_np 78 | cdef int[:] tri_indices_view = tri_indices_np 79 | 80 | for k in range(points_indices.size()): 81 | points_indices_view[k] = points_indices[k] 82 | 83 | for k in range(tri_indices.size()): 84 | tri_indices_view[k] = tri_indices[k] 85 | 86 | return points_indices_np, tri_indices_np 87 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmise/__init__.py: -------------------------------------------------------------------------------- 1 | from .mise import MISE 2 | 3 | 4 | __all__ = [ 5 | MISE 6 | ] 7 | -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmise/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmise/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmise/mise.cpython-36m-x86_64-linux-gnu.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/mesh_gen_utils/libmise/mise.cpython-36m-x86_64-linux-gnu.so -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmise/mise.pyx: -------------------------------------------------------------------------------- 1 | # distutils: language = c++ 2 | cimport cython 3 | from cython.operator cimport dereference as dref 4 | from libcpp.vector cimport vector 5 | from libcpp.map cimport map 6 | from libc.math cimport isnan, NAN 7 | import numpy as np 8 | 9 | 10 | cdef struct Vector3D: 11 | int x, y, z 12 | 13 | 14 | cdef struct Voxel: 15 | Vector3D loc 16 | unsigned int level 17 | bint is_leaf 18 | unsigned long children[2][2][2] 19 | 20 | 21 | cdef struct GridPoint: 22 | Vector3D loc 23 | double value 24 | bint known 25 | 26 | 27 | cdef inline unsigned long vec_to_idx(Vector3D coord, long resolution): 28 | cdef unsigned long idx 29 | idx = resolution * resolution * coord.x + resolution * coord.y + coord.z 30 | return idx 31 | 32 | 33 | cdef class MISE: 34 | cdef vector[Voxel] voxels 35 | cdef vector[GridPoint] grid_points 36 | cdef map[long, long] grid_point_hash 37 | cdef readonly int resolution_0 38 | cdef readonly int depth 39 | cdef readonly double threshold 40 | cdef readonly int voxel_size_0 41 | cdef readonly int resolution 42 | 43 | def __cinit__(self, int resolution_0, int depth, double threshold): 44 | 
self.resolution_0 = resolution_0 45 | self.depth = depth 46 | self.threshold = threshold 47 | self.voxel_size_0 = (1 << depth) 48 | self.resolution = resolution_0 * self.voxel_size_0 49 | 50 | # Create initial voxels 51 | self.voxels.reserve(resolution_0 * resolution_0 * resolution_0) 52 | 53 | cdef Voxel voxel 54 | cdef GridPoint point 55 | cdef Vector3D loc 56 | cdef int i, j, k 57 | for i in range(resolution_0): 58 | for j in range(resolution_0): 59 | for k in range(resolution_0): 60 | loc = Vector3D( 61 | i * self.voxel_size_0, 62 | j * self.voxel_size_0, 63 | k * self.voxel_size_0, 64 | ) 65 | voxel = Voxel( 66 | loc=loc, 67 | level=0, 68 | is_leaf=True, 69 | ) 70 | 71 | assert(self.voxels.size() == vec_to_idx(Vector3D(i, j, k), resolution_0)) 72 | self.voxels.push_back(voxel) 73 | 74 | # Create initial grid points 75 | self.grid_points.reserve((resolution_0 + 1) * (resolution_0 + 1) * (resolution_0 + 1)) 76 | for i in range(resolution_0 + 1): 77 | for j in range(resolution_0 + 1): 78 | for k in range(resolution_0 + 1): 79 | loc = Vector3D( 80 | i * self.voxel_size_0, 81 | j * self.voxel_size_0, 82 | k * self.voxel_size_0, 83 | ) 84 | assert(self.grid_points.size() == vec_to_idx(Vector3D(i, j, k), resolution_0 + 1)) 85 | self.add_grid_point(loc) 86 | 87 | def update(self, long[:, :] points, double[:] values): 88 | """Update points and set their values. Also determine all active voxels and subdivide them.""" 89 | assert(points.shape[0] == values.shape[0]) 90 | assert(points.shape[1] == 3) 91 | cdef Vector3D loc 92 | cdef long idx 93 | cdef int i 94 | 95 | # Find the grid index of each point and set its value 96 | for i in range(points.shape[0]): 97 | loc = Vector3D(points[i, 0], points[i, 1], points[i, 2]) 98 | idx = self.get_grid_point_idx(loc) 99 | if idx == -1: 100 | raise ValueError('Point not in grid!') 101 | self.grid_points[idx].value = values[i] 102 | self.grid_points[idx].known = True 103 | # Subdivide active voxels and add new points 104 | self.subdivide_voxels() 105 | 106 | def query(self): 107 | """Query points to evaluate.""" 108 | # Find all points with unknown value 109 | cdef vector[Vector3D] points 110 | cdef int n_unknown = 0 111 | for p in self.grid_points: 112 | if not p.known: 113 | n_unknown += 1 114 | 115 | points.reserve(n_unknown) 116 | for p in self.grid_points: 117 | if not p.known: 118 | points.push_back(p.loc) 119 | 120 | # Convert to numpy 121 | points_np = np.zeros((points.size(), 3), dtype=np.int64) 122 | cdef long[:, :] points_view = points_np 123 | for i in range(points.size()): 124 | points_view[i, 0] = points[i].x 125 | points_view[i, 1] = points[i].y 126 | points_view[i, 2] = points[i].z 127 | 128 | return points_np 129 | 130 | def to_dense(self): 131 | """Output dense matrix at highest resolution.""" 132 | out_array = np.full((self.resolution + 1,) * 3, np.nan) 133 | cdef double[:, :, :] out_view = out_array 134 | cdef GridPoint point 135 | cdef int i, j, k 136 | 137 | for point in self.grid_points: 138 | # Take the voxel for which this point is the upper-left corner 139 | # assert(point.known) 140 | out_view[point.loc.x, point.loc.y, point.loc.z] = point.value 141 | 142 | # Complete along x axis 143 | for i in range(1, self.resolution + 1): 144 | for j in range(self.resolution + 1): 145 | for k in range(self.resolution + 1): 146 | if isnan(out_view[i, j, k]): 147 | out_view[i, j, k] = out_view[i-1, j, k] 148 | 149 | # Complete along y axis 150 | for i in range(self.resolution + 1): 151 | for j in range(1, self.resolution + 1): 152 | for k in range(self.resolution + 1): 153 | if isnan(out_view[i, j, k]): 154 | out_view[i, j, k] = out_view[i, j-1, k] 155 | 156 | 157 | # Complete along z axis 158 | for i in range(self.resolution + 1): 159 | for j in range(self.resolution + 1): 160 | for k in range(1, self.resolution + 1): 161 | if isnan(out_view[i, j, k]): 162 | out_view[i, j, k] = out_view[i, j, k-1] 163 | assert(not isnan(out_view[i, j, k])) 164 | return out_array 165 | 166 | def get_points(self): 167 | points_np = np.zeros((self.grid_points.size(), 3), dtype=np.int64) 168 | values_np = np.zeros((self.grid_points.size()), dtype=np.float64) 169 | 170 | cdef long[:, :] points_view = points_np 171 | cdef double[:] values_view = values_np 172 | cdef Vector3D loc 173 | cdef int i 174 | 175 | for i in range(self.grid_points.size()): 176 | loc = self.grid_points[i].loc 177 | points_view[i, 0] = loc.x 178 | points_view[i, 1] = loc.y 179 | points_view[i, 2] = loc.z 180 | values_view[i] = self.grid_points[i].value 181 | 182 | return points_np, values_np 183 | 184 | cdef void subdivide_voxels(self) except +: 185 | cdef vector[bint] next_to_positive 186 | cdef vector[bint] next_to_negative 187 | cdef int i, j, k 188 | cdef long idx 189 | cdef Vector3D loc, adj_loc 190 | 191 | # Initialize vectors 192 | next_to_positive.resize(self.voxels.size(), False) 193 | next_to_negative.resize(self.voxels.size(), False) 194 | 195 | # Iterate over grid points and mark voxels active 196 | # TODO: can move this to update operation and add attribute to voxel 197 | for grid_point in self.grid_points: 198 | loc = grid_point.loc 199 | if not grid_point.known: 200 | continue 201 | 202 | # Iterate over the 8 adjacent voxels 203 | for i in range(-1, 1): 204 | for j in range(-1, 1): 205 | for k in range(-1, 1): 206 | adj_loc = Vector3D( 207 | x=loc.x + i, 208 | y=loc.y + j, 209 | z=loc.z + k, 210 | ) 211 | idx = self.get_voxel_idx(adj_loc) 212 | if idx == -1: 213 | continue 214 | 215 | if grid_point.value >= self.threshold: 216 | next_to_positive[idx] = True 217 | if grid_point.value <= self.threshold: 218 | next_to_negative[idx] = True 219 | 220 | cdef int n_subdivide = 0 221 | 222 | for idx in range(self.voxels.size()): 223 | if not self.voxels[idx].is_leaf or self.voxels[idx].level == self.depth: 224 | continue 225 | if next_to_positive[idx] and next_to_negative[idx]: 226 | n_subdivide += 1 227 | 228 | self.voxels.reserve(self.voxels.size() + 8 * n_subdivide) 229 | self.grid_points.reserve(self.voxels.size() + 19 * n_subdivide) 230 | 231 | for idx in range(self.voxels.size()): 232 | if not self.voxels[idx].is_leaf or self.voxels[idx].level == self.depth: 233 | continue 234 | if next_to_positive[idx] and next_to_negative[idx]: 235 | self.subdivide_voxel(idx) 236 | 237 | cdef void subdivide_voxel(self, long idx): 238 | cdef Voxel voxel 239 | cdef GridPoint point 240 | cdef Vector3D loc0 = self.voxels[idx].loc 241 | cdef Vector3D loc 242 | cdef int new_level = self.voxels[idx].level + 1 243 | cdef int new_size = 1 << (self.depth - new_level) 244 | assert(new_level <= self.depth) 245 | assert(1 <= new_size <= self.voxel_size_0) 246 | 247 | # Current voxel is not leaf anymore 248 | self.voxels[idx].is_leaf = False 249 | # Add new voxels 250 | cdef int i, j, k 251 | for i in range(2): 252 | for j in range(2): 253 | for k in range(2): 254 | loc = Vector3D( 255 | x=loc0.x + i * new_size, 256 | y=loc0.y + j * new_size, 257 | z=loc0.z + k * new_size, 258 | ) 259 | voxel = Voxel( 260 | loc=loc, 261 | level=new_level, 262 | is_leaf=True 263 | ) 264 | 265 | self.voxels[idx].children[i][j][k] = 
self.voxels.size() 266 | self.voxels.push_back(voxel) 267 | 268 | # Add new grid points 269 | for i in range(3): 270 | for j in range(3): 271 | for k in range(3): 272 | loc = Vector3D( 273 | loc0.x + i * new_size, 274 | loc0.y + j * new_size, 275 | loc0.z + k * new_size, 276 | ) 277 | 278 | # Only add new grid points 279 | if self.get_grid_point_idx(loc) == -1: 280 | self.add_grid_point(loc) 281 | 282 | 283 | @cython.cdivision(True) 284 | cdef long get_voxel_idx(self, Vector3D loc) except +: 285 | """Utility function for getting voxel index corresponding to 3D coordinates.""" 286 | # Shorthands 287 | cdef long resolution = self.resolution 288 | cdef long resolution_0 = self.resolution_0 289 | cdef long depth = self.depth 290 | cdef long voxel_size_0 = self.voxel_size_0 291 | 292 | # Return -1 if point lies outside bounds 293 | if not (0 <= loc.x < resolution and 0<= loc.y < resolution and 0 <= loc.z < resolution): 294 | return -1 295 | 296 | # Coordinates in coarse voxel grid 297 | cdef Vector3D loc0 = Vector3D( 298 | x=loc.x >> depth, 299 | y=loc.y >> depth, 300 | z=loc.z >> depth, 301 | ) 302 | 303 | # Initial voxels 304 | cdef int idx = vec_to_idx(loc0, resolution_0) 305 | cdef Voxel voxel = self.voxels[idx] 306 | assert(voxel.loc.x == loc0.x * voxel_size_0) 307 | assert(voxel.loc.y == loc0.y * voxel_size_0) 308 | assert(voxel.loc.z == loc0.z * voxel_size_0) 309 | 310 | # Relative coordinates 311 | cdef Vector3D loc_rel = Vector3D( 312 | x=loc.x - (loc0.x << depth), 313 | y=loc.y - (loc0.y << depth), 314 | z=loc.z - (loc0.z << depth), 315 | ) 316 | 317 | cdef Vector3D loc_offset 318 | cdef long voxel_size = voxel_size_0 319 | 320 | while not voxel.is_leaf: 321 | voxel_size = voxel_size >> 1 322 | assert(voxel_size >= 1) 323 | 324 | # Determine child 325 | loc_offset = Vector3D( 326 | x=1 if (loc_rel.x >= voxel_size) else 0, 327 | y=1 if (loc_rel.y >= voxel_size) else 0, 328 | z=1 if (loc_rel.z >= voxel_size) else 0, 329 | ) 330 | # New voxel 331 | idx = voxel.children[loc_offset.x][loc_offset.y][loc_offset.z] 332 | voxel = self.voxels[idx] 333 | 334 | # New relative coordinates 335 | loc_rel = Vector3D( 336 | x=loc_rel.x - loc_offset.x * voxel_size, 337 | y=loc_rel.y - loc_offset.y * voxel_size, 338 | z=loc_rel.z - loc_offset.z * voxel_size, 339 | ) 340 | 341 | assert(0<= loc_rel.x < voxel_size) 342 | assert(0<= loc_rel.y < voxel_size) 343 | assert(0<= loc_rel.z < voxel_size) 344 | 345 | 346 | # Return idx 347 | return idx 348 | 349 | 350 | cdef inline void add_grid_point(self, Vector3D loc): 351 | cdef GridPoint point = GridPoint( 352 | loc=loc, 353 | value=0., 354 | known=False, 355 | ) 356 | self.grid_point_hash[vec_to_idx(loc, self.resolution + 1)] = self.grid_points.size() 357 | self.grid_points.push_back(point) 358 | 359 | cdef inline int get_grid_point_idx(self, Vector3D loc): 360 | p_idx = self.grid_point_hash.find(vec_to_idx(loc, self.resolution + 1)) 361 | if p_idx == self.grid_point_hash.end(): 362 | return -1 363 | 364 | cdef int idx = dref(p_idx).second 365 | assert(self.grid_points[idx].loc.x == loc.x) 366 | assert(self.grid_points[idx].loc.y == loc.y) 367 | assert(self.grid_points[idx].loc.z == loc.z) 368 | 369 | return idx -------------------------------------------------------------------------------- /CL3D/mesh_gen_utils/libmise/test.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from mise import MISE 3 | import time 4 | 5 | t0 = time.time() 6 | extractor = MISE(1, 2, 0.) 
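# MISE(1, 2, 0.): a 1x1x1 coarse grid refined over 2 levels with iso-threshold 0.,
# giving an effective resolution of 4, i.e. a 5x5x5 dense grid from to_dense().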
7 | 8 | p = extractor.query() 9 | i = 0 10 | 11 | while p.shape[0] != 0: 12 | print(i) 13 | print(p) 14 | v = 2 * (p.sum(axis=-1) > 2).astype(np.float64) - 1 15 | extractor.update(p, v) 16 | p = extractor.query() 17 | i += 1 18 | if (i >= 8): 19 | break 20 | 21 | print(extractor.to_dense()) 22 | # p, v = extractor.get_points() 23 | # print(p) 24 | # print(v) 25 | print('Total time: %f' % (time.time() - t0)) 26 | -------------------------------------------------------------------------------- /CL3D/model_pointcloud.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from model_shape import SDFNet 4 | 5 | 6 | def maxpool(x, dim=-1, keepdim=False): 7 | out, _ = x.max(dim=dim, keepdim=keepdim) 8 | return out 9 | 10 | class PointCloudNet(SDFNet): 11 | 12 | def __init__(self, config, input_point_dim=3, latent_dim=512, size_hidden=512, pretrained=False): 13 | super().__init__(config, input_point_dim, latent_dim, size_hidden, pretrained) 14 | self.encoder = ResnetPointnet(latent_dim=latent_dim, size_hidden=size_hidden) 15 | 16 | 17 | class ResnetPointnet(nn.Module): 18 | ''' PointNet-based encoder network with ResNet blocks 19 | 20 | Args: 21 | latent_dim: dimension of conditioned code, default to 128 22 | point_dim: input points dimension, default to 3 23 | size_hidden: dimension of points block hidden size, default to 128 24 | (note: pretrained is accepted by PointCloudNet for interface parity 25 | but is unused by this point-cloud encoder) 26 | ''' 27 | 28 | def __init__(self, latent_dim=128, point_dim=3, size_hidden=128): 29 | super().__init__() 30 | self.latent_dim = latent_dim 31 | 32 | self.fc_pos = nn.Linear(point_dim, 2*size_hidden) 33 | self.block_0 = ResnetBlockFC(2*size_hidden, size_hidden) 34 | self.block_1 = ResnetBlockFC(2*size_hidden, size_hidden) 35 | self.block_2 = ResnetBlockFC(2*size_hidden, size_hidden) 36 | self.block_3 = ResnetBlockFC(2*size_hidden, size_hidden) 37 | self.block_4 = ResnetBlockFC(2*size_hidden, size_hidden) 38 | self.fc_c = nn.Linear(size_hidden, latent_dim) 39 | 40 | self.actvn = nn.ReLU() 41 | self.pool = maxpool 42 | 43 | def forward(self, p): 44 | _, T, D = p.size() 45 | 46 | # output size: B x T x F 47 | net = self.fc_pos(p) 48 | net = self.block_0(net) 49 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 50 | net = torch.cat([net, pooled], dim=2) 51 | 52 | net = self.block_1(net) 53 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 54 | net = torch.cat([net, pooled], dim=2) 55 | 56 | net = self.block_2(net) 57 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 58 | net = torch.cat([net, pooled], dim=2) 59 | 60 | net = self.block_3(net) 61 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 62 | net = torch.cat([net, pooled], dim=2) 63 | 64 | net = self.block_4(net) 65 | 66 | # Reduce to B x F 67 | net = self.pool(net, dim=1) 68 | 69 | c = self.fc_c(self.actvn(net)) 70 | 71 | return c 72 | 73 | class ResnetBlockFC(nn.Module): 74 | ''' Fully connected ResNet Block class. 
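Computes x + fc_1(relu(fc_0(relu(x)))), with a bias-free linear shortcut on x whenever the input and output dimensions differ. fc_1's weight is zero-initialized, so at initialization the residual branch contributes only its bias term.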
75 | Args: 76 | size_in (int): input dimension 77 | size_out (int): output dimension 78 | size_h (int): hidden dimension 79 | ''' 80 | 81 | def __init__(self, size_in, size_out=None, size_h=None): 82 | super().__init__() 83 | # Attributes 84 | if size_out is None: 85 | size_out = size_in 86 | 87 | if size_h is None: 88 | size_h = min(size_in, size_out) 89 | 90 | self.size_in = size_in 91 | self.size_h = size_h 92 | self.size_out = size_out 93 | # Submodules 94 | self.fc_0 = nn.Linear(size_in, size_h) 95 | self.fc_1 = nn.Linear(size_h, size_out) 96 | self.actvn = nn.ReLU() 97 | 98 | if size_in == size_out: 99 | self.shortcut = None 100 | else: 101 | self.shortcut = nn.Linear(size_in, size_out, bias=False) 102 | # Initialization 103 | nn.init.zeros_(self.fc_1.weight) 104 | 105 | def forward(self, x): 106 | net = self.fc_0(self.actvn(x)) 107 | dx = self.fc_1(self.actvn(net)) 108 | 109 | if self.shortcut is not None: 110 | x_s = self.shortcut(x) 111 | else: 112 | x_s = x 113 | 114 | return x_s + dx 115 | -------------------------------------------------------------------------------- /CL3D/model_shape.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from torchvision import models 4 | 5 | class SDFNet(nn.Module): 6 | ''' SDFNet 3D regressor class 7 | 8 | Args: 9 | input_point_dim: dimension of input points, default to 3 10 | latent_dim: dimension of conditioned code, default to 256 11 | size_hidden: dimension of points block hidden size, default to 256 12 | pretrained: whether the encoder is ImageNet pretrained, 13 | default to False 14 | 15 | ''' 16 | def __init__(self, config, input_point_dim=3, latent_dim=256, size_hidden=256, pretrained=False): 17 | super().__init__() 18 | 19 | self.encoder = Encoder(config, latent_dim, pretrained=pretrained) 20 | self.decoder = Decoder(input_point_dim, latent_dim, size_hidden) 21 | self.config = config 22 | 23 | def forward(self, points, inputs): 24 | assert points.size(0) == inputs.size(0) 25 | batch_size = points.size(0) 26 | latent_feats = self.encoder(inputs) 27 | score = self.decoder(points, latent_feats) 28 | return score 29 | 30 | class Encoder(nn.Module): 31 | def __init__(self, config, latent_dim, pretrained): 32 | super().__init__() 33 | self.features = models.resnet18(\ 34 | pretrained=pretrained) 35 | self.config = config 36 | # Reinitialize the first conv layer for D+N inputs 37 | if self.config.path['input_image_path'] is None: 38 | self.features.conv1 = nn.Conv2d(4,\ 39 | 64,\ 40 | kernel_size=7,\ 41 | stride=2,\ 42 | padding=3,\ 43 | bias=False) 44 | self.features.fc = nn.Sequential() 45 | self.fc = nn.Linear(512, latent_dim) 46 | 47 | def forward(self, x): 48 | feat = self.features(x) 49 | latent_feat = self.fc(feat) 50 | return latent_feat 51 | 52 | class Decoder(nn.Module): 53 | def __init__(self, input_dim, latent_dim, size_hidden): 54 | super().__init__() 55 | self.fc_p = nn.Conv1d(input_dim, size_hidden, 1) 56 | 57 | self.block0 = CResnetBlockConv(latent_dim, size_hidden) 58 | self.block1 = CResnetBlockConv(latent_dim, size_hidden) 59 | self.block2 = CResnetBlockConv(latent_dim, size_hidden) 60 | self.block3 = CResnetBlockConv(latent_dim, size_hidden) 61 | self.block4 = CResnetBlockConv(latent_dim, size_hidden) 62 | 63 | self.bn = CBatchNorm(latent_dim, size_hidden) 64 | 65 | self.fc_out = nn.Conv1d(size_hidden, 1, 1) 66 | 67 | self.actvn = nn.ReLU() 68 | 69 | def forward(self, p, c): 70 | p = p.transpose(1, 2) 71 | batch_size, D, T = p.size() 72 | net 
= self.fc_p(p) 73 | 74 | net = self.block0(net, c) 75 | net = self.block1(net, c) 76 | net = self.block2(net, c) 77 | net = self.block3(net, c) 78 | net = self.block4(net, c) 79 | 80 | out = self.fc_out(self.actvn(self.bn(net, c))) 81 | out = out.squeeze(1) 82 | 83 | return out 84 | 85 | class CBatchNorm(nn.Module): 86 | def __init__(self, latent_dim, feature_dim): 87 | super().__init__() 88 | self.latent_dim = latent_dim 89 | 90 | self.feature_dim = feature_dim 91 | self.conv_gamma = nn.Conv1d(self.latent_dim, self.feature_dim, 1) 92 | self.conv_beta = nn.Conv1d(self.latent_dim, self.feature_dim, 1) 93 | self.bn = nn.BatchNorm1d(self.feature_dim, affine=False) 94 | 95 | self.reset_parameters() 96 | 97 | def reset_parameters(self): 98 | nn.init.zeros_(self.conv_gamma.weight) 99 | nn.init.zeros_(self.conv_beta.weight) 100 | nn.init.ones_(self.conv_gamma.bias) 101 | nn.init.zeros_(self.conv_beta.bias) 102 | 103 | def forward(self, x, c): 104 | latent = c 105 | assert(x.size(0) == c.size(0)) 106 | assert(c.size(1) == self.latent_dim) 107 | 108 | # c is assumed to be of size batch_size x latent_dim x T 109 | if len(c.size()) == 2: 110 | c = c.unsqueeze(2) 111 | 112 | # Affine mapping 113 | gamma = self.conv_gamma(c) 114 | beta = self.conv_beta(c) 115 | 116 | # Batchnorm 117 | net = self.bn(x) 118 | out = gamma * net + beta 119 | 120 | return out 121 | 122 | class CResnetBlockConv(nn.Module): 123 | def __init__(self, latent_dim, size_in, size_hidden=None, size_out=None): 124 | super().__init__() 125 | if size_hidden is None: 126 | size_hidden = size_in 127 | if size_out is None: 128 | size_out = size_in 129 | 130 | self.size_in = size_in 131 | self.size_hidden = size_hidden 132 | self.size_out = size_out 133 | 134 | self.bn_0 = CBatchNorm(\ 135 | latent_dim, self.size_in) 136 | self.bn_1 = CBatchNorm(\ 137 | latent_dim, self.size_hidden) 138 | 139 | self.fc_0 = nn.Conv1d(self.size_in, self.size_hidden, 1) 140 | self.fc_1 = nn.Conv1d(self.size_hidden, self.size_out, 1) 141 | self.actvn = nn.ReLU() 142 | 143 | nn.init.zeros_(self.fc_1.weight) 144 | 145 | def forward(self, x, c): 146 | net = self.fc_0(self.actvn(self.bn_0(x, c))) 147 | dx = self.fc_1(self.actvn(self.bn_1(net, c))) 148 | 149 | return x + dx 150 | 151 | -------------------------------------------------------------------------------- /CL3D/perm/rep_10_2_13.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/perm/rep_10_2_13.npz -------------------------------------------------------------------------------- /CL3D/perm/rep_1_5_55.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/CL3D/perm/rep_1_5_55.npz -------------------------------------------------------------------------------- /CL3D/plot_script_shape.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import matplotlib.pyplot as plt 3 | import matplotlib 4 | 5 | TITLE_SIZE = 28 6 | AXIS_SIZE = 24 7 | matplotlib.rcParams['font.family'] = ['serif'] 8 | matplotlib.rc('axes', titlesize=TITLE_SIZE) 9 | matplotlib.rc('axes', labelsize=AXIS_SIZE) 10 | dotted_line_width = 2.5 11 | fig, ax1 = plt.subplots() 12 | ax1.tick_params(labelsize=20) 13 | 14 | ###### File names 15 | file_list = ['test/sdfnet_55_single/eval/sdfnet_55_single/out.npz', 16 | 
'test/occnet_55_single/eval/occnet_55_single/out.npz'] 17 | 18 | ###### Curve labels 19 | labels = ['Algo 1', 'Algo 2'] 20 | ###### Curve colors 21 | colors = ['r', 'orange'] 22 | 23 | fig = plt.gcf() 24 | 25 | ###### Change this for different runs 26 | total_classes = 55 27 | n_exposures = 11 28 | n_cls_per_exposure = 5 29 | ################################################ 30 | 31 | x_axis = np.arange(n_exposures) 32 | seen_classes = (np.arange(total_classes)+1)*n_cls_per_exposure 33 | future_classes = total_classes-seen_classes 34 | 35 | ###### False: For single exposure, True for repeated exposures 36 | rep = False 37 | 38 | mean_acc = [] 39 | for i,f in enumerate(file_list): 40 | print(f) 41 | 42 | acc_matrr = np.zeros((n_exposures, total_classes)) 43 | first_exp = {} 44 | 45 | file = np.load(f, allow_pickle=True) 46 | if 'fscore' in file.files: 47 | arr_acc = np.asarray(file['fscore']) 48 | acc = [] 49 | for j,exp in enumerate(arr_acc): 50 | if type(exp) is np.ndarray or isinstance(exp, list): 51 | exp_acc = [] 52 | for k,cl in enumerate(exp): 53 | exp_acc.append(np.mean(cl,axis=0)[1]) 54 | acc_matrr[j,k] = np.mean(cl,axis=0)[1] 55 | 56 | if rep: 57 | if k not in first_exp: 58 | first_exp[k] = j 59 | acc.append(np.mean(exp_acc)) 60 | else: 61 | acc = arr_acc 62 | break 63 | else: 64 | raise Exception("fscore not in numpy file") 65 | 66 | plt.plot(x_axis,acc,'o-',label=labels[i],markersize=5,color=colors[i],linewidth=3) 67 | 68 | ####### Batch 69 | gt = ax1.plot([n_exposures-1], 0.50, 'x', color='r', mew=3, label='2.5D Inp. Batch',markersize=15,zorder=10)[0] 70 | gt.set_clip_on(False) 71 | 72 | plt.ylim(0,0.55) 73 | 74 | plt.title('Single Exposure Shape Reconstruction on ShapeNetCore.v2') 75 | 76 | plt.xlabel('Learning Exposures') 77 | plt.ylabel('Fscore@1') 78 | ax1.legend(ncol=2, numpoints=1, borderaxespad=0., fancybox=True, framealpha=0.7, fontsize=20) 79 | plt.grid() 80 | fig.set_size_inches(16, 6) 81 | plt.savefig('shape.pdf',dpi=300, bbox_inches='tight', pad_inches=0.01 ,transparent=True) 82 | plt.show() -------------------------------------------------------------------------------- /CL3D/setup.py: -------------------------------------------------------------------------------- 1 | try: 2 | from setuptools import setup 3 | except ImportError: 4 | from distutils.core import setup 5 | from distutils.extension import Extension 6 | from Cython.Build import cythonize 7 | from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension 8 | import numpy 9 | 10 | 11 | # Get the numpy include directory. 
12 | numpy_include_dir = numpy.get_include() 13 | 14 | # Extensions 15 | # pykdtree (kd tree) 16 | pykdtree = Extension( 17 | 'mesh_gen_utils.libkdtree.pykdtree.kdtree', 18 | sources=[ 19 | 'mesh_gen_utils/libkdtree/pykdtree/kdtree.c', 20 | 'mesh_gen_utils/libkdtree/pykdtree/_kdtree_core.c' 21 | ], 22 | language='c', 23 | extra_compile_args=['-std=c99', '-O3', '-fopenmp'], 24 | extra_link_args=['-lgomp'], 25 | ) 26 | 27 | # mcubes (marching cubes algorithm) 28 | mcubes_module = Extension( 29 | 'mesh_gen_utils.libmcubes.mcubes', 30 | sources=[ 31 | 'mesh_gen_utils/libmcubes/mcubes.pyx', 32 | 'mesh_gen_utils/libmcubes/pywrapper.cpp', 33 | 'mesh_gen_utils/libmcubes/marchingcubes.cpp' 34 | ], 35 | language='c++', 36 | extra_compile_args=['-std=c++11'], 37 | include_dirs=[numpy_include_dir] 38 | ) 39 | 40 | # triangle hash (efficient mesh intersection) 41 | triangle_hash_module = Extension( 42 | 'mesh_gen_utils.libmesh.triangle_hash', 43 | sources=[ 44 | 'mesh_gen_utils/libmesh/triangle_hash.pyx' 45 | ], 46 | libraries=['m'] # Unix-like specific 47 | ) 48 | 49 | # mise (efficient mesh extraction) 50 | mise_module = Extension( 51 | 'mesh_gen_utils.libmise.mise', 52 | sources=[ 53 | 'mesh_gen_utils/libmise/mise.pyx' 54 | ], 55 | ) 56 | 57 | # Gather all extension modules 58 | ext_modules = [ 59 | pykdtree, 60 | mcubes_module, 61 | triangle_hash_module, 62 | mise_module 63 | ] 64 | 65 | setup( 66 | ext_modules=cythonize(ext_modules), 67 | cmdclass={ 68 | 'build_ext': BuildExtension 69 | }, 70 | include_dirs=[numpy.get_include()] 71 | ) 72 | -------------------------------------------------------------------------------- /CL3D/utils_shape.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import os 3 | from datetime import datetime 4 | import copy 5 | import torch 6 | from mesh_gen_utils.libmise import MISE 7 | from mesh_gen_utils.libmesh import check_mesh_contains 8 | from mesh_gen_utils import libmcubes 9 | import trimesh 10 | from mesh_gen_utils.libkdtree import KDTree 11 | from torch.autograd import Variable 12 | import h5py 13 | import torch.nn as nn 14 | from PIL import Image 15 | 16 | 17 | def writelogfile(config, log_dir): 18 | log_file_name = os.path.join(log_dir, 'log.txt') 19 | with open(log_file_name, "a+") as log_file: 20 | log_string = get_log_string(config) 21 | log_file.write(log_string) 22 | 23 | def get_log_string(config): 24 | now = str(datetime.now().strftime("%H:%M %d-%m-%Y")) 25 | log_string = "" 26 | log_string += " -------- Hyperparameters and settings -------- \n" 27 | log_string += "{:25} {}\n".format('Time:', now) 28 | log_string += "{:25} {}\n".format('Mini-batch size:', \ 29 | config.training['batch_size']) 30 | log_string += "{:25} {}\n".format('Batch size eval:', \ 31 | config.training['batch_size_eval']) 32 | log_string += "{:25} {}\n".format('Num epochs:', \ 33 | config.training['num_epochs']) 34 | log_string += "{:25} {}\n".format('Out directory:', \ 35 | config.training['out_dir']) 36 | log_string += "{:25} {}\n".format('Random view:', \ 37 | config.data_setting['random_view']) 38 | log_string += "{:25} {}\n".format('Sequence length:', \ 39 | config.data_setting['seq_len']) 40 | log_string += "{:25} {}\n".format('Input size:', \ 41 | config.data_setting['input_size']) 42 | log_string += " -------- Data paths -------- \n" 43 | log_string += "{:25} {}\n".format('Dataset path', \ 44 | config.path['src_dataset_path']) 45 | log_string += "{:25} {}\n".format('Point path', \ 46 | config.path['src_pt_path']) 47 
| log_string += " ------------------------------------------------------ \n" 48 | return log_string 49 | 50 | def compute_iou(occ1, occ2): 51 | ''' Computes the Intersection over Union (IoU) value for two sets of 52 | occupancy values. 53 | 54 | Args: 55 | occ1 (tensor): first set of occupancy values 56 | occ2 (tensor): second set of occupancy values 57 | ''' 58 | occ1 = np.asarray(occ1) 59 | occ2 = np.asarray(occ2) 60 | 61 | # Put all data in second dimension 62 | # Also works for 1-dimensional data 63 | if occ1.ndim >= 2: 64 | occ1 = occ1.reshape(occ1.shape[0], -1) 65 | if occ2.ndim >= 2: 66 | occ2 = occ2.reshape(occ2.shape[0], -1) 67 | 68 | # Convert to boolean values 69 | occ1_temp = copy.deepcopy(occ1) 70 | occ2_temp = copy.deepcopy(occ2) 71 | occ1 = (occ1 >= 0.5) 72 | occ2 = (occ2 >= 0.5) 73 | 74 | # Compute IOU 75 | area_union = (occ1 | occ2).astype(np.float32).sum(axis=-1) 76 | if (area_union == 0).any(): 77 | return 0. 78 | 79 | area_intersect = (occ1 & occ2).astype(np.float32).sum(axis=-1) 80 | 81 | iou = (area_intersect / area_union) 82 | if isinstance(iou, (list,np.ndarray)): 83 | iou = np.mean(iou, axis=0) 84 | return iou 85 | 86 | def compute_acc(sdf_pred, sdf, thres=0.01, iso=0.003): 87 | # import pdb; pdb.set_trace() 88 | sdf_pred = np.asarray(sdf_pred) 89 | sdf = np.asarray(sdf) 90 | 91 | acc_sign = (((sdf_pred-iso) * (sdf-iso)) > 0).mean(axis=-1) 92 | acc_sign = np.mean(acc_sign, axis=0) 93 | 94 | occ_pred = sdf_pred <= iso 95 | occ = sdf <= iso 96 | 97 | iou = compute_iou(occ_pred, occ) 98 | 99 | acc_thres = (np.abs(sdf_pred-sdf) <= thres).mean(axis=-1) 100 | acc_thres = np.mean(acc_thres, axis=0) 101 | return acc_sign, acc_thres, iou 102 | 103 | def get_sdf_h5(sdf_h5_file): 104 | h5_f = h5py.File(sdf_h5_file, 'r') 105 | try: 106 | if ('pc_sdf_original' in h5_f.keys() 107 | and 'pc_sdf_sample' in h5_f.keys() 108 | and 'norm_params' in h5_f.keys()): 109 | ori_sdf = h5_f['pc_sdf_original'][:].astype(np.float32) 110 | sample_sdf = h5_f['pc_sdf_sample'][:].astype(np.float32) 111 | ori_pt = ori_sdf[:,:3]#, ori_sdf[:,3] 112 | ori_sdf_val = None 113 | if sample_sdf.shape[1] == 4: 114 | sample_pt, sample_sdf_val = sample_sdf[:,:3], sample_sdf[:,3] 115 | else: 116 | sample_pt, sample_sdf_val = None, sample_sdf[:, 0] 117 | norm_params = h5_f['norm_params'][:] 118 | sdf_params = h5_f['sdf_params'][:] 119 | else: 120 | raise Exception("no sdf and sample") 121 | finally: 122 | h5_f.close() 123 | return ori_pt, ori_sdf_val, sample_pt, sample_sdf_val, norm_params, sdf_params 124 | 125 | def apply_rotate(input_points, rotate_dict): 126 | theta_azim = rotate_dict['azim'] 127 | theta_elev = rotate_dict['elev'] 128 | theta_azim = np.pi+theta_azim/180*np.pi 129 | theta_elev = theta_elev/180*np.pi 130 | r_elev = np.array([[1, 0, 0], 131 | [0, np.cos(theta_elev), -np.sin(theta_elev)], 132 | [0, np.sin(theta_elev), np.cos(theta_elev)]]) 133 | r_azim = np.array([[np.cos(theta_azim), 0, np.sin(theta_azim)], 134 | [0, 1, 0], 135 | [-np.sin(theta_azim),0, np.cos(theta_azim)]]) 136 | 137 | rotated_points = r_elev@r_azim@input_points.T 138 | return rotated_points.T 139 | 140 | def sample_points(input_points, input_occs, num_points): 141 | if num_points != -1: 142 | idx = torch.randint(len(input_points), size=(num_points,)) 143 | else: 144 | idx = torch.arange(len(input_points)) 145 | selected_points = input_points[idx, :] 146 | selected_occs = input_occs[idx] 147 | return selected_points, selected_occs 148 | 149 | def normalize_imagenet(x): 150 | ''' Normalize input images according to ImageNet 
standards. 151 | Args: 152 | x (tensor): input images 153 | ''' 154 | x = x.clone() 155 | x[:, 0] = (x[:, 0] - 0.485) / 0.229 156 | x[:, 1] = (x[:, 1] - 0.456) / 0.224 157 | x[:, 2] = (x[:, 2] - 0.406) / 0.225 158 | return x 159 | 160 | def LpLoss(logits, sdf, p=1, thres=0.01, weight=4.): 161 | 162 | sdf = Variable(sdf.data, requires_grad=False).cuda() 163 | loss = torch.abs(logits-sdf).pow(p).cuda() 164 | weight_mask = torch.ones(loss.shape).cuda() 165 | weight_mask[torch.abs(sdf) < thres] =\ 166 | weight_mask[torch.abs(sdf) < thres]*weight 167 | loss = loss * weight_mask 168 | loss = torch.sum(loss, dim=-1, keepdim=False) 169 | loss = torch.mean(loss) 170 | return loss 171 | 172 | def generate_mesh(img, points, model, threshold=0.2, box_size=1.7, \ 173 | resolution0=16, upsampling_steps=2): 174 | ''' 175 | Generate mesh function for OccNet 176 | ''' 177 | model.eval() 178 | 179 | threshold = np.log(threshold) - np.log(1. - threshold) 180 | mesh_extractor = MISE( 181 | resolution0, upsampling_steps, threshold) 182 | p = mesh_extractor.query() 183 | 184 | with torch.no_grad(): 185 | feats = model.encoder(img) 186 | 187 | while p.shape[0] != 0: 188 | pq = torch.FloatTensor(p).cuda() 189 | pq = pq / mesh_extractor.resolution 190 | 191 | pq = box_size * (pq - 0.5) 192 | 193 | with torch.no_grad(): 194 | pq = pq.unsqueeze(0) 195 | occ_pred = model.decoder(pq, feats) 196 | values = occ_pred.squeeze(0).detach().cpu().numpy() 197 | values = values.astype(np.float64) 198 | mesh_extractor.update(p, values) 199 | 200 | p = mesh_extractor.query() 201 | value_grid = mesh_extractor.to_dense() 202 | 203 | mesh = extract_mesh(value_grid, feats, box_size, threshold) 204 | return mesh 205 | 206 | def extract_mesh(value_grid, feats, box_size, threshold, constant_values=-1e6): 207 | n_x, n_y, n_z = value_grid.shape 208 | value_grid_padded = np.pad( 209 | value_grid, 1, 'constant', constant_values=constant_values) 210 | vertices, triangles = libmcubes.marching_cubes( 211 | value_grid_padded, threshold) 212 | # Shift back vertices by 0.5 213 | vertices -= 0.5 214 | # Undo padding 215 | vertices -= 1 216 | # Normalize 217 | vertices /= np.array([n_x-1, n_y-1, n_z-1]) 218 | vertices = box_size * (vertices - 0.5) 219 | 220 | # Create mesh 221 | mesh = trimesh.Trimesh(vertices, triangles, process=False) 222 | 223 | return mesh 224 | 225 | def eval_mesh(mesh, pointcloud_gt, normals_gt, points, val_gt, \ 226 | num_fscore_thres=6, n_points=300000, shape_rep='occ', \ 227 | sdf_val=None, iso=0.003): 228 | 229 | if mesh is not None and type(mesh)==trimesh.base.Trimesh and len(mesh.vertices) != 0 and len(mesh.faces) != 0: 230 | pointcloud, idx = mesh.sample(n_points, return_index=True) 231 | pointcloud = pointcloud.astype(np.float32) 232 | normals = mesh.face_normals[idx] 233 | else: 234 | if shape_rep == 'occ': 235 | return {'iou': 0., 'cd': 2*np.sqrt(3), 'completeness': np.sqrt(3),\ 236 | 'accuracy': np.sqrt(3), 'normals_completeness': -1,\ 237 | 'normals_accuracy': -1, 'normals': -1, \ 238 | 'fscore': np.zeros(6, dtype=np.float32), \ 239 | 'precision': np.zeros(6, dtype=np.float32), \ 240 | 'recall': np.zeros(6, dtype=np.float32)} 241 | return {'iou': [0.,0.], 'cd': 2*np.sqrt(3), 'completeness': np.sqrt(3),\ 242 | 'accuracy': np.sqrt(3), 'normals_completeness': -1,\ 243 | 'normals_accuracy': -1, 'normals': -1, \ 244 | 'fscore': np.zeros(6, dtype=np.float32), \ 245 | 'precision': np.zeros(6, dtype=np.float32), \ 246 | 'recall': np.zeros(6, dtype=np.float32)} 247 | # Eval pointcloud 248 | pointcloud = 
np.asarray(pointcloud) 249 | pointcloud_gt = np.asarray(pointcloud_gt.squeeze(0)) 250 | normals = np.asarray(normals) 251 | normals_gt = np.asarray(normals_gt.squeeze(0)) 252 | 253 | ####### Normalize 254 | pointcloud /= (2*np.max(np.abs(pointcloud))) 255 | pointcloud_gt /= (2*np.max(np.abs(pointcloud_gt))) 256 | 257 | # Completeness: how far are the points of the target point cloud 258 | # from the predicted point cloud 259 | completeness, normals_completeness = distance_p2p( 260 | pointcloud_gt, normals_gt, pointcloud, normals) 261 | 262 | # Accuracy: how far are the points of the predicted pointcloud 263 | # from the target pointcloud 264 | accuracy, normals_accuracy = distance_p2p( 265 | pointcloud, normals, pointcloud_gt, normals_gt 266 | ) 267 | 268 | # Get fscore 269 | fscore_array, precision_array, recall_array = [], [], [] 270 | for i, thres in enumerate([0.5, 1, 2, 5, 10, 20]): 271 | fscore, precision, recall = calculate_fscore(\ 272 | accuracy, completeness, thres/100.) 273 | fscore_array.append(fscore) 274 | precision_array.append(precision) 275 | recall_array.append(recall) 276 | fscore_array = np.array(fscore_array, dtype=np.float32) 277 | precision_array = np.array(precision_array, dtype=np.float32) 278 | recall_array = np.array(recall_array, dtype=np.float32) 279 | 280 | accuracy = accuracy.mean() 281 | normals_accuracy = normals_accuracy.mean() 282 | 283 | completeness = completeness.mean() 284 | normals_completeness = normals_completeness.mean() 285 | 286 | cd = completeness + accuracy 287 | normals = 0.5*(normals_completeness+normals_accuracy) 288 | 289 | # Compute IoU 290 | if shape_rep == 'occ': 291 | occ_mesh = check_mesh_contains(mesh, points.cpu().numpy().squeeze(0)) 292 | iou = compute_iou(occ_mesh, val_gt.cpu().numpy().squeeze(0)) 293 | else: 294 | occ_mesh = check_mesh_contains(mesh, points.cpu().numpy().squeeze(0)) 295 | val_gt_np = val_gt.cpu().numpy() 296 | occ_gt = val_gt_np <= iso 297 | iou = compute_iou(occ_mesh, occ_gt) 298 | 299 | # sdf iou 300 | sdf_iou, _, _ = compute_acc(sdf_val.cpu().numpy(),\ 301 | val_gt.cpu().numpy()) 302 | iou = np.array([iou, sdf_iou]) 303 | 304 | return {'iou': iou, 'cd': cd, 'completeness': completeness,\ 305 | 'accuracy': accuracy, \ 306 | 'normals_completeness': normals_completeness,\ 307 | 'normals_accuracy': normals_accuracy, 'normals': normals, \ 308 | 'fscore': fscore_array, 'precision': precision_array,\ 309 | 'recall': recall_array} 310 | 311 | def calculate_fscore(accuracy, completeness, threshold): 312 | recall = np.sum(completeness < threshold)/len(completeness) 313 | precision = np.sum(accuracy < threshold)/len(accuracy) 314 | if precision + recall > 0: 315 | fscore = 2*recall*precision/(recall+precision) 316 | else: 317 | fscore = 0 318 | return fscore, precision, recall 319 | 320 | 321 | def distance_p2p(points_src, normals_src, points_tgt, normals_tgt): 322 | ''' Computes minimal distances of each point in points_src to points_tgt. 
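Uses a KD-tree built over points_tgt for the nearest-neighbor lookup, and also returns |<n_src, n_tgt>| at each match as a normal-consistency score (an array of NaNs when normals are not given).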
323 | 324 | Args: 325 | points_src (numpy array): source points 326 | normals_src (numpy array): source normals 327 | points_tgt (numpy array): target points 328 | normals_tgt (numpy array): target normals 329 | ''' 330 | kdtree = KDTree(points_tgt) 331 | dist, idx = kdtree.query(points_src) 332 | 333 | if normals_src is not None and normals_tgt is not None: 334 | normals_src = \ 335 | normals_src / np.linalg.norm(normals_src, axis=-1, keepdims=True) 336 | normals_tgt = \ 337 | normals_tgt / np.linalg.norm(normals_tgt, axis=-1, keepdims=True) 338 | 339 | normals_dot_product = (normals_tgt[idx] * normals_src).sum(axis=-1) 340 | # Handle normals that point in the wrong direction gracefully 341 | # (mostly due to the method not caring about this in generation) 342 | normals_dot_product = np.abs(normals_dot_product) 343 | else: 344 | normals_dot_product = np.array( 345 | [np.nan] * points_src.shape[0], dtype=np.float32) 346 | return dist, normals_dot_product 347 | 348 | 349 | def generate_mesh_mise_sdf(img, points, model, threshold=0.003, box_size=1.7, \ 350 | resolution=64, upsampling_steps=2): 351 | ''' 352 | Generates a mesh for SDF representations using the MISE algorithm 353 | ''' 354 | model.eval() 355 | 356 | resolution0 = resolution // (2**upsampling_steps) 357 | 358 | total_points = (resolution+1)**3 359 | split_size = int(np.ceil(total_points*1.0/128**3)) 360 | mesh_extractor = MISE( 361 | resolution0, upsampling_steps, threshold) 362 | p = mesh_extractor.query() 363 | with torch.no_grad(): 364 | feats = model.encoder(img) 365 | while p.shape[0] != 0: 366 | 367 | pq = p / mesh_extractor.resolution 368 | 369 | pq = box_size * (pq - 0.5) 370 | occ_pred = [] 371 | with torch.no_grad(): 372 | if pq.shape[0] > 128**3: 373 | 374 | pq = np.array_split(pq, split_size) 375 | 376 | for ind in range(split_size): 377 | 378 | occ_pred_split = model.decoder(torch.FloatTensor(pq[ind])\ 379 | .cuda().unsqueeze(0), feats) 380 | occ_pred.append(occ_pred_split.cpu().numpy().reshape(-1)) 381 | occ_pred = np.concatenate(np.asarray(occ_pred),axis=0) 382 | values = occ_pred.reshape(-1) 383 | else: 384 | pq = torch.FloatTensor(pq).cuda().unsqueeze(0) 385 | occ_pred = model.decoder(pq, feats) 386 | values = occ_pred.squeeze(0).detach().cpu().numpy() 387 | values = values.astype(np.float64) 388 | mesh_extractor.update(p, values) 389 | 390 | p = mesh_extractor.query() 391 | value_grid = mesh_extractor.to_dense() 392 | mesh = extract_mesh(value_grid, feats, box_size, threshold, constant_values=1e6) 393 | return mesh 394 | 395 | 396 | 397 | 398 | 399 | 400 | 401 | 402 | 403 | 404 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction 2 | 3 | In this work we investigate continual learning of reconstruction tasks, which surprisingly does not suffer from catastrophic forgetting and exhibits positive forward knowledge transfer. In addition, we provide a novel analysis of knowledge transfer ability in CL. We further show the potential of using the feature representation learned in 3D shape reconstruction to serve as a proxy task for classification. [Link](https://arxiv.org/abs/2101.07295) to our paper and [link](https://rehg-lab.github.io/publication-pages/CLRec/) to our project webpage. 
4 | 5 | This repository contains the code for reproducing the CL 3D shape reconstruction, proxy task, and autoencoder results in the main text, and the YASS and dynamic representation tracking results in the appendix. 6 | 7 | ## Training and evaluating Single Object 3D Shape Reconstruction and proxy task 8 | Follow the instructions in [CL3D README](https://github.com/rehg-lab/CLRec/tree/master/CL3D#readme) 9 | 10 | ## Training and evaluating autoencoder 11 | Follow the instructions in [Autoencoder README](https://github.com/rehg-lab/CLRec/blob/master/auto_enc/README.md) 12 | 13 | ## Training and evaluating YASS 14 | Follow the instructions in [YASS README](https://github.com/rehg-lab/CLRec/blob/master/YASS/README.md) 15 | 16 | ## Dynamic Representation Tracking 17 | Follow the instructions in [DyRT README](https://github.com/IsaacRe/dynamic-representation-tracking/blob/master/README.md) 18 | 19 | ## Citing 20 | ```bibtex 21 | @misc{thai2021surprising, 22 | title={The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction}, 23 | author={Anh Thai and Stefan Stojanov and Zixuan Huang and Isaac Rehg and James M. Rehg}, 24 | year={2021}, 25 | eprint={2101.07295}, 26 | archivePrefix={arXiv}, 27 | primaryClass={cs.LG} 28 | } 29 | ``` 30 | 31 | 32 | -------------------------------------------------------------------------------- /YASS/README.md: -------------------------------------------------------------------------------- 1 | The instructions in this README follow [Incremental Object Learning from Contiguous Views](https://github.com/iolfcv/experiments/blob/master/README.md) 2 | 3 | ### Environment Setup 4 | If the environment `clpy38` has not been created already, create it using [anaconda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) 5 | ```bash 6 | conda env create -f ../environment.yml 7 | ``` 8 | Otherwise, run the following line to activate the environment 9 | ```bash 10 | conda activate clpy38 11 | ``` 12 | 13 | ### Running Incremental Learning Models 14 | 15 | The main program runs separate train and test processes. Both can run simultaneously on 1 GPU provided that batch_size + batch_size_test images fit in GPU memory. By default, the train and test processes use the first and second visible GPU devices, unless the '--one_gpu' flag is used, in which case both use the first visible device. 
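For concreteness, here is a minimal sketch of the two modes (the output path `results/run.csv` is a placeholder; all flags are documented in the usage text below):

```bash
# Default: train process on the first visible GPU, test process on the second
CUDA_VISIBLE_DEVICES=0,1 python main_incr_cifar.py --outfile=results/run.csv

# Both processes share a single GPU
CUDA_VISIBLE_DEVICES=0 python main_incr_cifar.py --outfile=results/run.csv --one_gpu
```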
16 | 17 | ``` 18 | usage: main_incr_cifar.py [-h] [--outfile OUTFILE] [--save_all] 19 | [--save_all_dir SAVE_ALL_DIR] [--resume] 20 | [--resume_outfile RESUME_OUTFILE] 21 | [--init_lr INIT_LR] [--init_lr_ft INIT_LR_FT] 22 | [--num_epoch NUM_EPOCH] 23 | [--num_epoch_ft NUM_EPOCH_FT] [--lrd LRD] [--wd WD] 24 | [--batch_size BATCH_SIZE] [--llr_freq LLR_FREQ] 25 | [--batch_size_test BATCH_SIZE_TEST] 26 | [--lexp_len LEXP_LEN] [--size_test SIZE_TEST] 27 | [--num_exemplars NUM_EXEMPLARS] 28 | [--img_size IMG_SIZE] 29 | [--rendered_img_size RENDERED_IMG_SIZE] 30 | [--total_classes TOTAL_CLASSES] 31 | [--num_classes NUM_CLASSES] [--num_iters NUM_ITERS] 32 | [--algo ALGO] [--no_dist] [--pt] [--ncm] [--network] 33 | [--sample SAMPLE] [--explr_neg_sig] [--random_explr] 34 | [--loss LOSS] [--full_explr] [--diff_order] 35 | [--subset] [--no_jitter] [--h_ch H_CH] [--s_ch S_CH] 36 | [--l_ch L_CH] [--aug AUG] [--s_wo_rep] 37 | [--test_freq TEST_FREQ] [--num_workers NUM_WORKERS] 38 | [--one_gpu] 39 | 40 | Incremental learning 41 | 42 | optional arguments: 43 | -h, --help show this help message and exit 44 | --outfile OUTFILE Output file name (should have .csv extension) 45 | --save_all Option to save models after each test_freq number of 46 | learning exposures 47 | --save_all_dir SAVE_ALL_DIR 48 | Directory to store all models in 49 | --resume Resume training from checkpoint at outfile 50 | --resume_outfile RESUME_OUTFILE 51 | Output file name after resuming 52 | --init_lr INIT_LR initial learning rate 53 | --init_lr_ft INIT_LR_FT 54 | Init learning rate for balanced finetuning (for E2E) 55 | --num_epoch NUM_EPOCH 56 | Number of epochs 57 | --num_epoch_ft NUM_EPOCH_FT 58 | Number of epochs for balanced finetuning (for E2E) 59 | --lrd LRD Learning rate decrease factor 60 | --wd WD Weight decay for SGD 61 | --batch_size BATCH_SIZE 62 | Mini batch size for training 63 | --llr_freq LLR_FREQ Learning rate lowering frequency for SGD (for E2E) 64 | --batch_size_test BATCH_SIZE_TEST 65 | Mini batch size for testing 66 | --lexp_len LEXP_LEN Number of frames in Learning Exposure 67 | --size_test SIZE_TEST 68 | Number of test images per object 69 | --num_exemplars NUM_EXEMPLARS 70 | number of exemplars 71 | --img_size IMG_SIZE Size of images input to the network 72 | --rendered_img_size RENDERED_IMG_SIZE 73 | Size of rendered images 74 | --total_classes TOTAL_CLASSES 75 | Total number of classes 76 | --num_classes NUM_CLASSES 77 | Number of classes for each learning exposure 78 | --num_iters NUM_ITERS 79 | Total number of learning exposures (currently only 80 | integer multiples of args.total_classes each class 81 | seen equal number of times) 82 | --algo ALGO Algorithm to run. 
Options : icarl, e2e, lwf 83 | --no_dist Option to switch off distillation loss 84 | --pt Option to start from an ImageNet pretrained model 85 | --ncm Use nearest class mean classification (for E2E) 86 | --network Use network output to classify (for iCaRL) 87 | --sample SAMPLE Sampling mechanism to be performed 88 | --explr_neg_sig Option to use exemplars as negative signals (for 89 | iCaRL) 90 | --random_explr Option for random exemplar set 91 | --loss LOSS Loss to be used in classification 92 | --full_explr Option to use the full exemplar set 93 | --diff_order Use a random order of classes introduced 94 | --subset Use a random subset of classes 95 | --no_jitter Option for no color jittering (for iCaRL) 96 | --h_ch H_CH Color jittering : max hue change 97 | --s_ch S_CH Color jittering : max saturation change 98 | --l_ch L_CH Color jittering : max lightness change 99 | --aug AUG Data augmentation to perform on train data 100 | --s_wo_rep Sample train data without replacement 101 | --test_freq TEST_FREQ 102 | Number of iterations of training after which a test is 103 | done/model saved 104 | --num_workers NUM_WORKERS 105 | Maximum number of threads spawned at any stage of 106 | execution 107 | --one_gpu Option to run multiprocessing on 1 GPU 108 | ``` 109 | 110 | The following is an example command to run YASS on CIFAR 100 on 2 GPUs 111 | 112 | ```bash 113 | time CUDA_VISIBLE_DEVICES=0,1 python main_incr_cifar.py --outfile=results/test.csv --aug=e2e --batch_size_test=100 --num_exemplars=2000 --total_classes=100 --num_iters=100 --lexp_len=500 --network --sample=wg --loss=CE --random_explr --diff_order --full_explr --no_dist --s_wo_rep 114 | ``` 115 | 116 | This project uses code based on parts of the following repository 117 | 118 | 1. [Incremental Object Learning from Contiguous Views](https://github.com/iolfcv/experiments/) 119 | 120 | -------------------------------------------------------------------------------- /YASS/data_generator/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/YASS/data_generator/__init__.py -------------------------------------------------------------------------------- /YASS/data_generator/cifar_mean_image.npy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/YASS/data_generator/cifar_mean_image.npy -------------------------------------------------------------------------------- /YASS/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/YASS/utils/__init__.py -------------------------------------------------------------------------------- /YASS/utils/color_jitter/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/YASS/utils/color_jitter/__init__.py -------------------------------------------------------------------------------- /YASS/utils/color_jitter/cjitter.cpp: -------------------------------------------------------------------------------- 1 | #include "cjitter.h" 2 | 3 | static inline double image_hue2rgb(double p, double q, double t) { 4 | if (t < 0.) t += 1; 5 | if (t > 1.) t -= 1; 6 | if (t < 1./6) 7 | return p + (q - p) * 6. 
* t; 8 | else if (t < 1./2) 9 | return q; 10 | else if (t < 2./3) 11 | return p + (q - p) * (2./3 - t) * 6.; 12 | else 13 | return p; 14 | } 15 | 16 | /* 17 | This function randomly changes the color in each channel in HSL space. It first 18 | converts the RGB batch of images to HSL; then applies a different color 19 | jittering in each channel; and converts back to RGB space. The function expects 20 | a set of images of size [3,Nim,H,W] in the RGB range [0, 255]. It does this 21 | color jittering operation in place. 22 | */ 23 | void cjitter(float* img, int height, int width, float h_change, float s_change, float l_change){ 24 | std::random_device rd; 25 | std::mt19937 gen(rd()); 26 | std::uniform_real_distribution<> dist(-1, 1); 27 | 28 | h_change *= dist(gen); 29 | s_change *= dist(gen); 30 | l_change *= dist(gen); 31 | 32 | // std::cout<<"h_change : "< 0.5 ? d / (2 - mx - mn) : d / (mx + mn); 73 | } 74 | 75 | // rgb2hsl_t += (timenano() - tic); 76 | 77 | /**************** Change color by *_change ****************/ 78 | // tic = timenano(); 79 | 80 | h += h_change; 81 | h = fmod(h, 1.0); 82 | h = h < 0 ? 1 + h : h; 83 | s += s_change; 84 | s = s < 0 ? 0 : (s > 1 ? 1 : s); 85 | l += l_change; 86 | l = l < 0 ? 0 : (l > 1 ? 1 : l); 87 | 88 | // clrchg_t += (timenano() - tic); 89 | 90 | /**************** Finally convert back to RGB ****************/ 91 | // tic = timenano(); 92 | 93 | if(s == 0) { 94 | // achromatic 95 | r = l; 96 | g = l; 97 | b = l; 98 | } else { 99 | double q = (l < 0.5) ? (l * (1 + s)) : (l + s - l * s); 100 | double p = 2 * l - q; 101 | double hr = h + 1./3; 102 | double hg = h; 103 | double hb = h - 1./3; 104 | r = image_hue2rgb(p, q, hr); 105 | g = image_hue2rgb(p, q, hg); 106 | b = image_hue2rgb(p, q, hb); 107 | } 108 | 109 | // hsl2rgb_t += (timenano() - tic); 110 | 111 | /**************** Clamp and store values ****************/ 112 | // tic = timenano(); 113 | 114 | r *= 255; 115 | g *= 255; 116 | b *= 255; 117 | *r_ptr = r; 118 | *g_ptr = g; 119 | *b_ptr = b; 120 | 121 | ++r_ptr; 122 | ++g_ptr; 123 | ++b_ptr; 124 | 125 | // std::cout<<"Iteration : ("< 5 | 6 | #define min(a, b) (((a) < (b)) ? (a) : (b)) 7 | #define max(a, b) (((a) > (b)) ? 
(a) : (b)) 8 | 9 | void cjitter(float* img, int height, int width, float h_change, float s_change, float l_change); 10 | 11 | #endif 12 | -------------------------------------------------------------------------------- /YASS/utils/color_jitter/jitter.pyx: -------------------------------------------------------------------------------- 1 | # distutils: language = c++ 2 | 3 | import cython 4 | import numpy as np 5 | cimport numpy as np 6 | 7 | 8 | # declare the interface to the C code 9 | cdef extern from "cjitter.h": 10 | void cjitter (float* img, int height, int width, float h_change, float s_change, float l_change) 11 | 12 | @cython.boundscheck(False) 13 | @cython.wraparound(False) 14 | def jitter(np.ndarray[float, ndim=3, mode="c"] img not None, float h_change, float s_change, float l_change): 15 | cdef int height, width 16 | height, width = img.shape[1], img.shape[2] 17 | cjitter(&img[0, 0, 0], height, width, h_change, s_change, l_change) 18 | 19 | return None -------------------------------------------------------------------------------- /YASS/utils/color_jitter/setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from distutils.core import setup 4 | from distutils.extension import Extension 5 | from Cython.Distutils import build_ext 6 | 7 | import numpy 8 | 9 | setup( 10 | cmdclass = {'build_ext': build_ext}, 11 | ext_modules = [Extension("jitter", 12 | sources=["jitter.pyx", "cjitter.cpp"], 13 | include_dirs=[numpy.get_include()], 14 | language="c++", 15 | extra_compile_args=["-std=c++11"], 16 | extra_link_args=["-std=c++11"])], 17 | ) 18 | 19 | 20 | -------------------------------------------------------------------------------- /YASS/utils/get_samples.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import cv2 3 | import random 4 | 5 | def get_samples(images, bboxes, cl, size, class_map, get_negatives=True): 6 | ''' 7 | Returns bounding box cropped images (+ negative 8 | background patches depending on job), class labels, 9 | and the bounding box coordinates 10 | ''' 11 | # Constants used for overlap range 12 | e = 50 13 | resize_buffer = 20 14 | rendered_img_size = images.shape[1] 15 | ov_range = 0.15*rendered_img_size 16 | 17 | xs = [] 18 | ys = [] 19 | bbs = [] 20 | 21 | for row in range(images.shape[0]): 22 | img = images[row] 23 | x_min, x_max, y_min, y_max = bboxes[row] 24 | pos_img = img[y_min:y_max, x_min:x_max] 25 | pos_img = cv2.resize(pos_img, (size, size)) 26 | 27 | xs.append(pos_img) 28 | if class_map is None: 29 | ys.append(-2) 30 | else: 31 | ys.append(class_map[cl]) 32 | bbs.append(bboxes[row]) 33 | 34 | #if job is train then add negative samples of the data as well 35 | if get_negatives: 36 | while (True): 37 | # Get the negative bounding box from the image outside the 38 | # original bounding box but with some overlap range 39 | # Randomly select 4 points within a window specified 40 | # by rendered_img_size and e 41 | amin = random.randint(0, rendered_img_size - e) 42 | bmin = random.randint(0, rendered_img_size - e) 43 | # Adding the resize buffer to prevent resizing 44 | # errors on very thin strips 45 | amax = random.randint(amin+resize_buffer, rendered_img_size) 46 | bmax = random.randint(bmin+resize_buffer, rendered_img_size) 47 | 48 | a_ratio = float(amax-amin)/float(bmax-bmin) 49 | 50 | if 0.5 < a_ratio and a_ratio < 2: 51 | if (overlap(x_min, y_min, x_max, y_max, 52 | amin, bmin, amax, bmax, ov_range)): 53 | break 54 | 55 
| neg_img = img[amin:amax, bmin:bmax] #create negative image 56 | 57 | neg_img = cv2.resize(neg_img, (size,size)) 58 | 59 | xs.append(neg_img) 60 | ys.append(-1) # append -1 for negative sample 61 | bbs.append([bmin, bmax, amin, amax]) 62 | 63 | x = np.array(xs, dtype=np.float32) 64 | y = np.array(ys) 65 | bb = np.array(bbs) 66 | 67 | x = x.reshape((x.shape[0], size, size, 3)) 68 | 69 | # Make it 3xsizexsize 70 | x = x.transpose(0,3,1,2) 71 | 72 | return [x,y,bb] 73 | 74 | def overlap(xmin, ymin, xmax, ymax, amin, bmin, amax, bmax, ov_range): 75 | """ 76 | Checks for the rectangular collision for 77 | the two boxes with a certain overlap range. 78 | Returns True if the two boxes are valid 79 | and do not overlap more than the allowed range 80 | """ 81 | if amax < (xmin+ov_range): 82 | if bmax < (ymin+ov_range) or bmin > (ymax-ov_range): 83 | return True 84 | if amin > xmax - ov_range: 85 | if bmax < (ymin+ov_range) or bmin > (ymax-ov_range): 86 | return True 87 | return False -------------------------------------------------------------------------------- /YASS/utils/loader_utils.py: -------------------------------------------------------------------------------- 1 | # from torch._six import int_classes as _int_classes 2 | _int_classes = int 3 | import torch 4 | from torch.utils.data.sampler import Sampler 5 | from multiprocessing.dummy import Pool as ThreadPool 6 | import itertools 7 | 8 | 9 | class CustomRandomSampler(Sampler): 10 | ''' 11 | Samples elements randomly, without replacement. 12 | This sampling only shuffles within epoch intervals of the dataset 13 | Arguments: 14 | data_source (Dataset): dataset to sample from 15 | num_epochs (int) : Number of epochs in the train dataset 16 | num_workers (int) : Number of workers to use for generating iterator 17 | ''' 18 | 19 | def __init__(self, data_source, num_epochs, num_workers, weights=None, replacement=True): 20 | self.data_source = data_source 21 | self.num_epochs = num_epochs 22 | self.num_workers = num_workers 23 | self.datalen = len(data_source) 24 | self.weights = weights 25 | self.replacement = replacement 26 | 27 | def __iter__(self): 28 | iter_array = [] 29 | pool = ThreadPool(self.num_workers) 30 | 31 | def get_randperm(i): 32 | if self.weights is None: 33 | return torch.randperm(self.datalen).tolist() 34 | # self.weights = torch.tensor(self.weights, dtype=torch.double) 35 | return torch.multinomial(torch.tensor(self.weights, dtype=torch.double), self.datalen, self.replacement).tolist() 36 | iter_array = list(itertools.chain.from_iterable( 37 | pool.map(get_randperm, range(self.num_epochs)))) 38 | pool.close() 39 | pool.join() 40 | return iter(iter_array) 41 | 42 | def __len__(self): 43 | return len(self.data_source) 44 | 45 | 46 | class CustomBatchSampler(object): 47 | ''' 48 | Wraps another custom sampler with epoch intervals 49 | to yield a mini-batch of indices. 50 | 51 | Args: 52 | sampler (Sampler): Base sampler. 53 | batch_size (int): Size of mini-batch. 
54 | drop_last (bool): If ``True``, the sampler will drop the last batch if 55 | its size would be less than ``batch_size`` 56 | epoch_size : Number of items in an epoch 57 | ''' 58 | 59 | def __init__(self, sampler, batch_size, drop_last, epoch_size): 60 | if not isinstance(sampler, Sampler): 61 | raise ValueError('sampler should be an instance of ' 62 | 'torch.utils.data.Sampler, but got sampler={}' 63 | .format(sampler)) 64 | if (not isinstance(batch_size, _int_classes) 65 | or isinstance(batch_size, bool) 66 | or batch_size <= 0): 67 | raise ValueError('batch_size should be a positive integral value, ' 68 | 'but got batch_size={}'.format(batch_size)) 69 | if not isinstance(drop_last, bool): 70 | raise ValueError('drop_last should be a boolean value, but got ' 71 | 'drop_last={}'.format(drop_last)) 72 | self.sampler = sampler 73 | self.batch_size = batch_size 74 | self.drop_last = drop_last 75 | self.epoch_size = epoch_size 76 | self.num_epochs = len(self.sampler) // self.epoch_size # integer division so __len__ returns an int 77 | 78 | if self.drop_last: 79 | self.num_batches_per_epoch = self.epoch_size // self.batch_size 80 | else: 81 | self.num_batches_per_epoch = ( 82 | self.epoch_size + self.batch_size - 1) // self.batch_size 83 | 84 | def __iter__(self): 85 | batch = [] 86 | epoch_ctr = 0 87 | for idx in self.sampler: 88 | epoch_ctr += 1 89 | batch.append(int(idx)) 90 | if len(batch) == self.batch_size or epoch_ctr == self.epoch_size: 91 | yield batch 92 | batch = [] 93 | if epoch_ctr == self.epoch_size: 94 | epoch_ctr = 0 95 | 96 | if len(batch) > 0 and not self.drop_last: 97 | yield batch 98 | 99 | def __len__(self): 100 | return self.num_epochs * self.num_batches_per_epoch 101 | -------------------------------------------------------------------------------- /YASS/utils/model_utils.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from torch.autograd import Variable 4 | 5 | def kaiming_normal_init(m): 6 | ''' 7 | Initializes network parameters using Kaiming-Normal initialization 8 | ''' 9 | if isinstance(m, nn.Conv2d): 10 | nn.init.kaiming_normal_(m.weight, nonlinearity='relu') 11 | elif isinstance(m, nn.Linear): 12 | nn.init.kaiming_normal_(m.weight, nonlinearity='sigmoid') 13 | 14 | def MultiClassCrossEntropyLoss(logits, labels, T, device): 15 | ''' 16 | Cross Entropy Distillation Loss 17 | ''' 18 | labels = Variable(labels.data, requires_grad=False).cuda(device=device) 19 | outputs = torch.log_softmax(logits/T, dim=1) 20 | labels = torch.softmax(labels/T, dim=1) 21 | outputs = torch.sum(outputs * labels, dim=1, keepdim=False) 22 | outputs = -torch.mean(outputs, dim=0, keepdim=False) 23 | return Variable(outputs.data, requires_grad=True).cuda(device=device) -------------------------------------------------------------------------------- /auto_enc/README.md: -------------------------------------------------------------------------------- 1 | ### Environment Setup 2 | If the `clpy38` environment has not been created already, create it using [anaconda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) 3 | ```bash 4 | conda env create -f ../environment.yml 5 | ``` 6 | Otherwise, run the following line to activate the environment 7 | ```bash 8 | conda activate clpy38 9 | ``` 10 | 11 | ### Training autoencoder 12 | The following is an example command to train the autoencoder on CIFAR-100 on 2 GPUs 13 | ```bash 14 | CUDA_VISIBLE_DEVICES=0,1 python autoenc_incr_main.py --outfile=results/autoenc_100cls_single.csv --lexp_len=500
--img_size=32 --total_classes=100 --num_iters=100 --num_epoch=250 --num_classes=1 15 | ``` 16 | -------------------------------------------------------------------------------- /auto_enc/autoencoder.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import torch.nn.functional as F 3 | 4 | class ConvAutoencoder(nn.Module): 5 | def __init__(self): 6 | super(ConvAutoencoder, self).__init__() 7 | ## encoder layers ## 8 | self.conv1 = nn.Conv2d(3, 16, 3, padding=1) 9 | self.conv2 = nn.Conv2d(16, 16, 3, padding=1) 10 | self.conv3 = nn.Conv2d(16, 16, 3, padding=1) 11 | self.conv4 = nn.Conv2d(16, 16, 3, padding=1) 12 | 13 | self.pool = nn.MaxPool2d(2, 2) 14 | 15 | ## decoder layers ## 16 | self.t_conv1 = nn.ConvTranspose2d(16, 16, 2, stride=2) 17 | self.t_conv2 = nn.ConvTranspose2d(16, 16, 2, stride=2) 18 | self.t_conv3 = nn.ConvTranspose2d(16, 16, 2, stride=2) 19 | self.t_conv4 = nn.ConvTranspose2d(16, 3, 2, stride=2) 20 | 21 | self.classes_map = {} 22 | self.classes = [] 23 | self.n_known = 0 24 | self.n_classes = 0 25 | 26 | def forward(self, x): 27 | ## encode ## 28 | x = F.relu(self.conv1(x)) 29 | x = self.pool(x) 30 | x = F.relu(self.conv2(x)) 31 | x = self.pool(x) 32 | 33 | x = F.relu(self.conv3(x)) 34 | x = self.pool(x) 35 | 36 | x = F.relu(self.conv4(x)) 37 | x = self.pool(x) 38 | 39 | ## decode ## 40 | x = F.relu(self.t_conv1(x)) 41 | x = F.relu(self.t_conv2(x)) 42 | x = F.relu(self.t_conv3(x)) 43 | x = F.relu(self.t_conv4(x)) 44 | 45 | 46 | return x 47 | 48 | def increment_classes(self, new_classes): 49 | ''' 50 | Add new output nodes when new classes are seen and make changes to 51 | model data members 52 | ''' 53 | n = len(new_classes) 54 | 55 | self.n_classes += n 56 | 57 | for i, cl in enumerate(new_classes): 58 | self.classes_map[cl] = self.n_known + i 59 | self.classes.append(cl) 60 | -------------------------------------------------------------------------------- /auto_enc/dataset_incr_cifar_autoenc.py: -------------------------------------------------------------------------------- 1 | from torchvision.datasets import CIFAR10 2 | import torchvision.transforms as transforms 3 | import numpy as np 4 | import torch 5 | from PIL import Image 6 | import cv2 7 | import time 8 | 9 | class iCIFAR10(CIFAR10): 10 | def __init__(self, args, root, classes, 11 | train=True, 12 | transform=None, 13 | target_transform=None, 14 | download=False, 15 | mean_image=None,clb=False): 16 | """ 17 | Args : 18 | args : arguments from the argument parser 19 | classes : list of groundtruth classes 20 | train : Whether the model is training or testing 21 | transform : Image transformation performed on train set 22 | target_transform : Image transformation performed on test set 23 | download : Whether to download from the source 24 | mean_image : the mean image over the train dataset 25 | """ 26 | # Inherits from CIFAR10, where self.train_data and self.test_data 27 | # are of type uint8, dimension 32x32x3 28 | super(iCIFAR10, self).__init__(root, 29 | train=train, 30 | transform=transform, 31 | target_transform=target_transform, 32 | download=download) 33 | self.clb = clb 34 | self.img_size = args.img_size 35 | # Number of frame at each learning exposure 36 | self.num_e_frames = args.lexp_len 37 | 38 | self.num_classes = args.num_classes 39 | # Select a subset of classes for incremental training and testing 40 | if self.train: 41 | self.train_data = self.data 42 | self.train_labels = self.targets 43 | # Resize and transpose to CxWxH all train 
images 44 | resized_train_images = np.zeros((len(self.train_data), 3, \ 45 | self.img_size, self.img_size), dtype=np.uint8) 46 | for i, train_image in enumerate(self.train_data): 47 | resized_train_images[i] = cv2.resize(train_image, \ 48 | (self.img_size, self.img_size)).transpose(2,0,1) 49 | self.train_data = resized_train_images 50 | self.all_train_data = self.train_data 51 | print(self.all_train_data.shape) 52 | self.all_train_labels = self.train_labels 53 | self.all_train_coverage = np.zeros(len(self.all_train_labels), \ 54 | dtype=np.bool_) 55 | self.train_data, self.train_labels, self.train_coverage = \ 56 | [], [], [] 57 | # e_maps keeps track of new images in the current learning 58 | # exposure with regard to images from the exemplar set 59 | self.e_maps = -np.ones((self.num_e_frames*self.num_classes,2), dtype=np.int32) 60 | else: 61 | self.test_data = self.data 62 | self.test_labels = self.targets 63 | # Resize all test images 64 | resized_test_images = np.zeros((len(self.test_data), 3, \ 65 | self.img_size, self.img_size), dtype=np.uint8) 66 | for i, test_image in enumerate(self.test_data): 67 | resized_test_images[i] = cv2.resize(test_image, \ 68 | (self.img_size, self.img_size)).transpose(2,0,1) 69 | self.test_data = resized_test_images 70 | self.all_test_data = self.test_data 71 | print(self.all_test_data.shape) 72 | self.all_test_labels = self.test_labels 73 | self.test_data, self.test_labels = [], [] 74 | 75 | def __getitem__(self, index): 76 | # Data and mean image of dimension 3 x img_size x img_size, unnormalized 77 | if self.train: 78 | img = self.train_data[index] 79 | 80 | img = img/255. 81 | # Augment : Random crops and horizontal flips 82 | random_cropped = np.zeros(img.shape, dtype=np.float32) 83 | padded = np.pad(img, ((0, 0), (4, 4), (4, 4)), 84 | mode="constant") 85 | crops = np.random.randint(0, high=9, size=(1, 2)) # high is exclusive; replaces the deprecated random_integers(0, high=8) 86 | if (np.random.randint(2) > 0): 87 | random_cropped[:, :, :] = padded[:, 88 | crops[0, 0]:(crops[0, 0] + self.img_size), 89 | crops[0, 1]:(crops[0, 1] + self.img_size)] 90 | else: 91 | random_cropped[:, :, :] = padded[:, 92 | crops[0, 0]:(crops[0, 0] + self.img_size), 93 | crops[0, 1]:(crops[0, 1] + self.img_size)][:, :, ::-1] 94 | 95 | img = torch.FloatTensor(img) # NOTE: random_cropped is computed above but the unaugmented image is returned 96 | 97 | target = self.train_labels[index] 98 | else: 99 | img = self.test_data[index] 100 | img = img/255.
| 101 | 102 | target = self.test_labels[index] 103 | 104 | img = torch.FloatTensor(img) 105 | target = np.array(target) 106 | 107 | return index, img, target 108 | 109 | def __len__(self): 110 | if self.train: 111 | return len(self.train_data) 112 | else: 113 | return len(self.test_data) 114 | 115 | def load_data_class(self, classes, model_classes, iteration): 116 | """Loads train data, labels and e_maps for current learning exposure 117 | Args : 118 | classes : List of groundtruth classes 119 | model_classes : List of the classes that the model sees 120 | iteration : Learning exposure the model is on 121 | """ 122 | # called in train only 123 | if self.train: 124 | train_data = [] 125 | train_labels = [] 126 | self.e_maps = -np.ones((self.num_e_frames*self.num_classes,2), dtype=np.int32) 127 | for i,(gt_label, model_label) in enumerate(zip(classes, model_classes)): 128 | rand = np.random.choice(500, self.num_e_frames, replace=False) 129 | 130 | s_ind = np.where( \ 131 | np.array(self.all_train_labels) == gt_label)[0] 132 | 133 | s_images = self.all_train_data[s_ind[rand]] 134 | 135 | s_labels = np.array(self.all_train_labels)[s_ind[rand]] 136 | 137 | train_data.append(s_images) 138 | train_labels.append(np.array([model_label]*len(s_images))) 139 | 140 | self.all_train_coverage[s_ind[rand]] = True 141 | 142 | self.e_maps[i*len(s_images):(i+1)*len(s_images),0] = iteration 143 | self.e_maps[i*len(s_images):(i+1)*len(s_images),1] = s_ind[rand] 144 | self.train_data = np.concatenate(np.array(train_data, \ 145 | dtype=np.uint8),axis=0) 146 | self.train_labels = np.concatenate(np.array(train_labels, \ 147 | dtype=np.int32), axis=0).tolist() 148 | 149 | def expand(self, model_new_classes, gt_new_classes): 150 | """Expands current test set if new classes are seen 151 | Args : 152 | model_new_classes : List of new classes that the model sees 153 | gt_new_classes : List of the corresponding groundtruth classes 154 | """ 155 | # called in test only 156 | 157 | if not self.train: 158 | test_data = [] 159 | test_labels = [] 160 | 161 | for (mdl_label, gt_label) in \ 162 | zip(model_new_classes, gt_new_classes): 163 | s_images = self.all_test_data[\ 164 | np.array(self.all_test_labels) == gt_label] 165 | test_data.append(s_images) 166 | test_labels.append(np.array([mdl_label]*len(s_images))) 167 | 168 | 169 | if len(test_data) > 0: 170 | test_data = np.concatenate( \ 171 | np.array(test_data, dtype=np.uint8),axis=0) 172 | test_labels = np.concatenate( \ 173 | np.array(test_labels, dtype=np.uint8), axis=0) 174 | if len(self.test_data) == 0: 175 | self.test_data = test_data 176 | self.test_labels = test_labels.tolist() 177 | else: 178 | if len(test_data) > 0: 179 | self.test_data = np.concatenate( \ 180 | [self.test_data, test_data], axis=0) 181 | self.test_labels = np.concatenate( \ 182 | [self.test_labels, test_labels], axis=0).tolist() 183 | 184 | def get_train_coverage(self, label): 185 | """ 186 | Returns the coverage of the requested label.
Coverage is calculated 187 | based on the number of images the model has seen for this 188 | label / the total number of images of this label 189 | Args: 190 | label : The requested label 191 | """ 192 | num_images_label = len(self.all_train_coverage[np.array(self.all_train_labels) == label]) 193 | num_images_covered = self.all_train_coverage[np.array(self.all_train_labels) == label].sum() 194 | 195 | return num_images_covered*100./ num_images_label 196 | 197 | def get_image_class(self, label): 198 | """Returns the images and e_maps of the requested label 199 | Args: 200 | label : The requested label 201 | """ 202 | return self.train_data[ \ 203 | np.array(self.train_labels) == label], \ 204 | self.e_maps[np.array(self.train_labels) == label] 205 | 206 | def append(self, images, labels, e_map_data): 207 | """Appends dataset with images, labels and frame data from exemplars 208 | 209 | Args: 210 | images: Tensor of shape (N, C, H, W) 211 | labels: list of labels 212 | e_map_data: frame data of exemplars 213 | """ 214 | self.train_data = np.concatenate((self.train_data, images), axis=0) 215 | self.train_labels = self.train_labels + labels 216 | self.e_maps = np.concatenate((self.e_maps, e_map_data), axis=0) 217 | 218 | class iCIFAR100(iCIFAR10): 219 | base_folder = "cifar-100-python" 220 | url = "http://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz" 221 | filename = "cifar-100-python.tar.gz" 222 | tgz_md5 = "eb9058c3a382ffc7106e4002c42a8d85" 223 | train_list = [ 224 | ["train", "16019d7e3df5f24257cddd939b257f8d"], 225 | ] 226 | test_list = [ 227 | ["test", "f0ef6b0ae62326f3e7ffdfab6717acfc"], 228 | ] 229 | meta = { 230 | 'filename': 'meta', 231 | 'key': 'fine_label_names', 232 | 'md5': '7973b15100ade9c7d40fb424638fde48', 233 | } 234 | 235 | -------------------------------------------------------------------------------- /auto_enc/utils/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/auto_enc/utils/__init__.py -------------------------------------------------------------------------------- /auto_enc/utils/color_jitter/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rehg-lab/CLRec/d990d2c100bd052bb2efd03ff084c9a225eee9b4/auto_enc/utils/color_jitter/__init__.py -------------------------------------------------------------------------------- /auto_enc/utils/color_jitter/cjitter.cpp: -------------------------------------------------------------------------------- 1 | #include "cjitter.h" 2 | 3 | static inline double image_hue2rgb(double p, double q, double t) { 4 | if (t < 0.) t += 1; 5 | if (t > 1.) t -= 1; 6 | if (t < 1./6) 7 | return p + (q - p) * 6. * t; 8 | else if (t < 1./2) 9 | return q; 10 | else if (t < 2./3) 11 | return p + (q - p) * (2./3 - t) * 6.; 12 | else 13 | return p; 14 | } 15 | 16 | /* 17 | This function randomly changes the color in each channel in HSL space. It first 18 | converts the RGB batch of images to HSL; then applies a different color 19 | jittering in each channel; and converts back to RGB space. The function expects 20 | a set of images of size [3,Nim,H,W] in the RGB range [0, 255]. It does this 21 | color jittering operation in place. 
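(Worked example: the actual shifts are drawn uniformly from [-h_change, h_change], [-s_change, s_change] and [-l_change, l_change]. A hue shift of +1/3 turns a pure red pixel (255, 0, 0), i.e. HSL (0, 1, 0.5), into pure green (0, 255, 0), since hue wraps modulo 1, while saturation and lightness shifts are added and clamped to [0, 1].)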
22 | */ 23 | void cjitter(float* img, int height, int width, float h_change, float s_change, float l_change){ 24 | std::random_device rd; 25 | std::mt19937 gen(rd()); 26 | std::uniform_real_distribution<> dist(-1, 1); 27 | 28 | h_change *= dist(gen); 29 | s_change *= dist(gen); 30 | l_change *= dist(gen); 31 | 32 | // std::cout<<"h_change : "< 0.5 ? d / (2 - mx - mn) : d / (mx + mn); 73 | } 74 | 75 | // rgb2hsl_t += (timenano() - tic); 76 | 77 | /**************** Change color by *_change ****************/ 78 | // tic = timenano(); 79 | 80 | h += h_change; 81 | h = fmod(h, 1.0); 82 | h = h < 0 ? 1 + h : h; 83 | s += s_change; 84 | s = s < 0 ? 0 : (s > 1 ? 1 : s); 85 | l += l_change; 86 | l = l < 0 ? 0 : (l > 1 ? 1 : l); 87 | 88 | // clrchg_t += (timenano() - tic); 89 | 90 | /**************** Finally convert back to RGB ****************/ 91 | // tic = timenano(); 92 | 93 | if(s == 0) { 94 | // achromatic 95 | r = l; 96 | g = l; 97 | b = l; 98 | } else { 99 | double q = (l < 0.5) ? (l * (1 + s)) : (l + s - l * s); 100 | double p = 2 * l - q; 101 | double hr = h + 1./3; 102 | double hg = h; 103 | double hb = h - 1./3; 104 | r = image_hue2rgb(p, q, hr); 105 | g = image_hue2rgb(p, q, hg); 106 | b = image_hue2rgb(p, q, hb); 107 | } 108 | 109 | // hsl2rgb_t += (timenano() - tic); 110 | 111 | /**************** Clamp and store values ****************/ 112 | // tic = timenano(); 113 | 114 | r *= 255; 115 | g *= 255; 116 | b *= 255; 117 | *r_ptr = r; 118 | *g_ptr = g; 119 | *b_ptr = b; 120 | 121 | ++r_ptr; 122 | ++g_ptr; 123 | ++b_ptr; 124 | 125 | // std::cout<<"Iteration : ("< 5 | 6 | #define min(a, b) (((a) < (b)) ? (a) : (b)) 7 | #define max(a, b) (((a) > (b)) ? (a) : (b)) 8 | 9 | void cjitter(float* img, int height, int width, float h_change, float s_change, float l_change); 10 | 11 | #endif 12 | -------------------------------------------------------------------------------- /auto_enc/utils/color_jitter/jitter.pyx: -------------------------------------------------------------------------------- 1 | # distutils: language = c++ 2 | 3 | import cython 4 | import numpy as np 5 | cimport numpy as np 6 | 7 | 8 | # declare the interface to the C code 9 | cdef extern from "cjitter.h": 10 | void cjitter (float* img, int height, int width, float h_change, float s_change, float l_change) 11 | 12 | @cython.boundscheck(False) 13 | @cython.wraparound(False) 14 | def jitter(np.ndarray[float, ndim=3, mode="c"] img not None, float h_change, float s_change, float l_change): 15 | cdef int height, width 16 | height, width = img.shape[1], img.shape[2] 17 | cjitter(&img[0, 0, 0], height, width, h_change, s_change, l_change) 18 | 19 | return None -------------------------------------------------------------------------------- /auto_enc/utils/color_jitter/setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from distutils.core import setup 4 | from distutils.extension import Extension 5 | from Cython.Distutils import build_ext 6 | 7 | import numpy 8 | 9 | setup( 10 | cmdclass = {'build_ext': build_ext}, 11 | ext_modules = [Extension("jitter", 12 | sources=["jitter.pyx", "cjitter.cpp"], 13 | include_dirs=[numpy.get_include()], 14 | language="c++", 15 | extra_compile_args=["-std=c++11"], 16 | extra_link_args=["-std=c++11"])], 17 | ) 18 | 19 | 20 | -------------------------------------------------------------------------------- /auto_enc/utils/get_samples.py: -------------------------------------------------------------------------------- 1 | import 
numpy as np 2 | import cv2 3 | import random 4 | 5 | def get_samples(images, bboxes, cl, size, class_map, get_negatives=True): 6 | ''' 7 | Returns bounding box cropped images (+ negative 8 | background patches depending on job), class labels, 9 | and the bounding box coordinates 10 | ''' 11 | # Constants used for overlap range 12 | e = 50 13 | resize_buffer = 20 14 | rendered_img_size = images.shape[1] 15 | ov_range = 0.15*rendered_img_size 16 | 17 | xs = [] 18 | ys = [] 19 | bbs = [] 20 | 21 | for row in range(images.shape[0]): 22 | img = images[row] 23 | x_min, x_max, y_min, y_max = bboxes[row] 24 | pos_img = img[y_min:y_max, x_min:x_max] 25 | pos_img = cv2.resize(pos_img, (size, size)) 26 | 27 | xs.append(pos_img) 28 | if class_map is None: 29 | ys.append(-2) 30 | else: 31 | ys.append(class_map[cl]) 32 | bbs.append(bboxes[row]) 33 | 34 | #if job is train then add negative samples of the data as well 35 | if get_negatives: 36 | while (True): 37 | # Get the negative bounding box from the image outside the 38 | # original bounding box but with some overlap range 39 | # Randomly select 4 points within a window specified 40 | # by rendered_img_size and e 41 | amin = random.randint(0, rendered_img_size - e) 42 | bmin = random.randint(0, rendered_img_size - e) 43 | # Adding the resize buffer to prevent resizing 44 | # errors on very thin strips 45 | amax = random.randint(amin+resize_buffer, rendered_img_size) 46 | bmax = random.randint(bmin+resize_buffer, rendered_img_size) 47 | 48 | a_ratio = float(amax-amin)/float(bmax-bmin) 49 | 50 | if 0.5 < a_ratio and a_ratio < 2: 51 | if (overlap(x_min, y_min, x_max, y_max, 52 | amin, bmin, amax, bmax, ov_range)): 53 | break 54 | 55 | neg_img = img[amin:amax, bmin:bmax] #create negative image 56 | 57 | neg_img = cv2.resize(neg_img, (size,size)) 58 | 59 | xs.append(neg_img) 60 | ys.append(-1) # append -1 for negative sample 61 | bbs.append([bmin, bmax, amin, amax]) 62 | 63 | x = np.array(xs, dtype=np.float32) 64 | y = np.array(ys) 65 | bb = np.array(bbs) 66 | 67 | x = x.reshape((x.shape[0], size, size, 3)) 68 | 69 | # Make it 3xsizexsize 70 | x = x.transpose(0,3,1,2) 71 | 72 | return [x,y,bb] 73 | 74 | def overlap(xmin, ymin, xmax, ymax, amin, bmin, amax, bmax, ov_range): 75 | """ 76 | Checks for the rectangular collision for 77 | the two boxes with a certain overlap range. 78 | Returns True if the two boxes are valid 79 | and do not overlap more than the allowed range 80 | """ 81 | if amax < (xmin+ov_range): 82 | if bmax < (ymin+ov_range) or bmin > (ymax-ov_range): 83 | return True 84 | if amin > xmax - ov_range: 85 | if bmax < (ymin+ov_range) or bmin > (ymax-ov_range): 86 | return True 87 | return False -------------------------------------------------------------------------------- /auto_enc/utils/loader_utils.py: -------------------------------------------------------------------------------- 1 | # from torch._six import int_classes as _int_classes 2 | _int_classes = int 3 | import torch 4 | from torch.utils.data.sampler import Sampler 5 | from multiprocessing.dummy import Pool as ThreadPool 6 | import itertools 7 | 8 | 9 | class CustomRandomSampler(Sampler): 10 | ''' 11 | Samples elements randomly, without replacement. 
12 | This sampling only shuffles within epoch intervals of the dataset 13 | Arguments: 14 | data_source (Dataset): dataset to sample from 15 | num_epochs (int) : Number of epochs in the train dataset 16 | num_workers (int) : Number of workers to use for generating iterator 17 | ''' 18 | 19 | def __init__(self, data_source, num_epochs, num_workers, weights=None, replacement=True): 20 | self.data_source = data_source 21 | self.num_epochs = num_epochs 22 | self.num_workers = num_workers 23 | self.datalen = len(data_source) 24 | self.weights = weights 25 | self.replacement = replacement 26 | 27 | def __iter__(self): 28 | iter_array = [] 29 | pool = ThreadPool(self.num_workers) 30 | 31 | def get_randperm(i): 32 | if self.weights is None: 33 | return torch.randperm(self.datalen).tolist() 34 | # self.weights = torch.tensor(self.weights, dtype=torch.double) 35 | return torch.multinomial(torch.tensor(self.weights, dtype=torch.double), self.datalen, self.replacement).tolist() 36 | iter_array = list(itertools.chain.from_iterable( 37 | pool.map(get_randperm, range(self.num_epochs)))) 38 | pool.close() 39 | pool.join() 40 | return iter(iter_array) 41 | 42 | def __len__(self): 43 | return len(self.data_source) 44 | 45 | 46 | class CustomBatchSampler(object): 47 | ''' 48 | Wraps another custom sampler with epoch intervals 49 | to yield a mini-batch of indices. 50 | 51 | Args: 52 | sampler (Sampler): Base sampler. 53 | batch_size (int): Size of mini-batch. 54 | drop_last (bool): If ``True``, the sampler will drop the last batch if 55 | its size would be less than ``batch_size`` 56 | epoch_size : Number of items in an epoch 57 | ''' 58 | 59 | def __init__(self, sampler, batch_size, drop_last, epoch_size): 60 | if not isinstance(sampler, Sampler): 61 | raise ValueError('sampler should be an instance of ' 62 | 'torch.utils.data.Sampler, but got sampler={}' 63 | .format(sampler)) 64 | if (not isinstance(batch_size, _int_classes) 65 | or isinstance(batch_size, bool) 66 | or batch_size <= 0): 67 | raise ValueError('batch_size should be a positive integral value, ' 68 | 'but got batch_size={}'.format(batch_size)) 69 | if not isinstance(drop_last, bool): 70 | raise ValueError('drop_last should be a boolean value, but got ' 71 | 'drop_last={}'.format(drop_last)) 72 | self.sampler = sampler 73 | self.batch_size = batch_size 74 | self.drop_last = drop_last 75 | self.epoch_size = epoch_size 76 | self.num_epochs = len(self.sampler) // self.epoch_size # integer division so __len__ returns an int 77 | 78 | if self.drop_last: 79 | self.num_batches_per_epoch = self.epoch_size // self.batch_size 80 | else: 81 | self.num_batches_per_epoch = ( 82 | self.epoch_size + self.batch_size - 1) // self.batch_size 83 | 84 | def __iter__(self): 85 | batch = [] 86 | epoch_ctr = 0 87 | for idx in self.sampler: 88 | epoch_ctr += 1 89 | batch.append(int(idx)) 90 | if len(batch) == self.batch_size or epoch_ctr == self.epoch_size: 91 | yield batch 92 | batch = [] 93 | if epoch_ctr == self.epoch_size: 94 | epoch_ctr = 0 95 | 96 | if len(batch) > 0 and not self.drop_last: 97 | yield batch 98 | 99 | def __len__(self): 100 | return self.num_epochs * self.num_batches_per_epoch 101 | -------------------------------------------------------------------------------- /auto_enc/utils/metric.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from skimage.metrics import structural_similarity as ssim 3 | from PIL import Image 4 | 5 | 6 | def calc_ssim(gts, preds): 7 | metric = 0.5*(1+np.array([ssim(gt.transpose(1,2,0), pred.transpose(1,2,0),
multichannel=True, data_range=1) for (gt,pred) in zip(gts, preds)])) 8 | return metric 9 | 10 | 11 | def test(): 12 | img_path = '../source.png' 13 | 14 | img = Image.open(img_path).convert('RGB') 15 | img = np.array(img) 16 | 17 | img_path2 = '../test.png' 18 | 19 | img2 = Image.open(img_path2).convert('RGB') 20 | img2 = np.array(img2) 21 | 22 | metric = calc_ssim(img, img2) 23 | print(metric) 24 | 25 | -------------------------------------------------------------------------------- /auto_enc/utils/model_utils.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from torch.autograd import Variable 4 | 5 | def kaiming_normal_init(m): 6 | ''' 7 | Initializes network parameters using Kaiming-Normal initialization 8 | ''' 9 | if isinstance(m, nn.Conv2d): 10 | nn.init.kaiming_normal_(m.weight, nonlinearity='relu') 11 | elif isinstance(m, nn.Linear): 12 | nn.init.kaiming_normal_(m.weight, nonlinearity='sigmoid') 13 | 14 | def MultiClassCrossEntropyLoss(logits, labels, T, device): 15 | ''' 16 | Cross Entropy Distillation Loss 17 | ''' 18 | labels = Variable(labels.data, requires_grad=False).cuda(device=device) 19 | outputs = torch.log_softmax(logits/T, dim=1) 20 | labels = torch.softmax(labels/T, dim=1) 21 | outputs = torch.sum(outputs * labels, dim=1, keepdim=False) 22 | outputs = -torch.mean(outputs, dim=0, keepdim=False) 23 | return Variable(outputs.data, requires_grad=True).cuda(device=device) -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: clpy38 2 | channels: 3 | - pytorch 4 | - conda-forge 5 | - defaults 6 | dependencies: 7 | - _libgcc_mutex=0.1=conda_forge 8 | - _openmp_mutex=4.5=2_kmp_llvm 9 | - alsa-lib=1.2.6.1=h7f98852_0 10 | - aom=3.3.0=h27087fc_1 11 | - attr=2.5.1=h166bdaf_1 12 | - blas=1.0=mkl 13 | - brotlipy=0.7.0=py38h27cfd23_1003 14 | - bzip2=1.0.8=h7b6447c_0 15 | - c-ares=1.18.1=h7f98852_0 16 | - ca-certificates=2022.6.15=ha878542_0 17 | - cached-property=1.5.2=hd8ed1ab_1 18 | - cached_property=1.5.2=pyha770c72_1 19 | - cairo=1.16.0=ha61ee94_1012 20 | - certifi=2022.6.15=py38h578d9bd_0 21 | - cffi=1.15.1=py38h74dc2b5_0 22 | - charset-normalizer=2.0.4=pyhd3eb1b0_0 23 | - cryptography=37.0.1=py38h9ce1e76_0 24 | - cudatoolkit=10.2.89=hfd86e86_1 25 | - dbus=1.13.6=h5008d03_3 26 | - expat=2.4.8=h27087fc_0 27 | - ffmpeg=4.4.2=habc3f16_0 28 | - fftw=3.3.10=nompi_ha7695d1_103 29 | - font-ttf-dejavu-sans-mono=2.37=hab24e00_0 30 | - font-ttf-inconsolata=3.000=h77eed37_0 31 | - font-ttf-source-code-pro=2.038=h77eed37_0 32 | - font-ttf-ubuntu=0.83=hab24e00_0 33 | - fontconfig=2.14.0=h8e229c2_0 34 | - fonts-conda-ecosystem=1=0 35 | - fonts-conda-forge=1=0 36 | - freeglut=3.2.2=h9c3ff4c_1 37 | - freetype=2.11.0=h70c0345_0 38 | - gettext=0.19.8.1=h73d1719_1008 39 | - giflib=5.2.1=h7b6447c_0 40 | - glib=2.72.1=h6239696_0 41 | - glib-tools=2.72.1=h6239696_0 42 | - gmp=6.2.1=h295c915_3 43 | - gnutls=3.7.7=hf3e180e_0 44 | - graphite2=1.3.13=h58526e2_1001 45 | - gst-plugins-base=1.20.3=hf6a322e_0 46 | - gstreamer=1.20.3=hd4edc92_0 47 | - h5py=3.7.0=nompi_py38h8afedcf_100 48 | - harfbuzz=4.4.1=hf9f4e7c_0 49 | - hdf5=1.12.1=nompi_h2386368_104 50 | - icu=70.1=h27087fc_0 51 | - idna=3.3=pyhd3eb1b0_0 52 | - jack=1.9.18=h8c3723f_1002 53 | - jasper=2.0.33=ha77e612_0 54 | - jpeg=9e=h7f8727e_0 55 | - keyutils=1.6.1=h166bdaf_0 56 | - krb5=1.19.3=h3790be6_0 57 | - lame=3.100=h7b6447c_0 58 | 
- lcms2=2.12=h3be6417_0 59 | - ld_impl_linux-64=2.38=h1181459_1 60 | - lerc=3.0=h295c915_0 61 | - libblas=3.9.0=16_linux64_mkl 62 | - libcap=2.64=ha37c62d_0 63 | - libcblas=3.9.0=16_linux64_mkl 64 | - libclang=14.0.6=default_h2e3cab8_0 65 | - libclang13=14.0.6=default_h3a83d3e_0 66 | - libcups=2.3.3=h3e49a29_2 67 | - libcurl=7.83.1=h7bff187_0 68 | - libdb=6.2.32=h9c3ff4c_0 69 | - libdeflate=1.8=h7f8727e_5 70 | - libdrm=2.4.112=h166bdaf_0 71 | - libedit=3.1.20191231=he28a2e2_2 72 | - libev=4.33=h516909a_1 73 | - libevent=2.1.10=h9b69904_4 74 | - libffi=3.4.2=h7f98852_5 75 | - libflac=1.3.4=h27087fc_0 76 | - libgcc-ng=12.1.0=h8d9b700_16 77 | - libgfortran-ng=11.2.0=h00389a5_1 78 | - libgfortran5=11.2.0=h1234567_1 79 | - libglib=2.72.1=h2d90d5f_0 80 | - libglu=9.0.0=he1b5a44_1001 81 | - libiconv=1.16=h516909a_0 82 | - libidn2=2.3.2=h7f8727e_0 83 | - liblapack=3.9.0=16_linux64_mkl 84 | - liblapacke=3.9.0=16_linux64_mkl 85 | - libllvm14=14.0.6=he0ac6c6_0 86 | - libnghttp2=1.47.0=hdcd2b5c_1 87 | - libnsl=2.0.0=h7f98852_0 88 | - libogg=1.3.4=h7f98852_1 89 | - libopencv=4.6.0=py38hc65905f_0 90 | - libopus=1.3.1=h7b6447c_0 91 | - libpciaccess=0.16=h516909a_0 92 | - libpng=1.6.37=hbc83047_0 93 | - libpq=14.5=hd77ab85_0 94 | - libprotobuf=3.20.1=h4ff587b_0 95 | - libsndfile=1.0.31=h9c3ff4c_1 96 | - libssh2=1.10.0=haa6b8db_3 97 | - libstdcxx-ng=12.1.0=ha89aaad_16 98 | - libtasn1=4.18.0=h166bdaf_1 99 | - libtiff=4.4.0=hecacb30_0 100 | - libtool=2.4.6=h9c3ff4c_1008 101 | - libudev1=249=h166bdaf_4 102 | - libunistring=0.9.10=h27cfd23_0 103 | - libuuid=2.32.1=h7f98852_1000 104 | - libva=2.15.0=h166bdaf_0 105 | - libvorbis=1.3.7=h9c3ff4c_0 106 | - libvpx=1.11.0=h9c3ff4c_3 107 | - libwebp=1.2.2=h55f646e_0 108 | - libwebp-base=1.2.2=h7f8727e_0 109 | - libxcb=1.13=h7f98852_1004 110 | - libxkbcommon=1.0.3=he3ba5ed_0 111 | - libxml2=2.9.14=h22db469_4 112 | - libzlib=1.2.12=h166bdaf_2 113 | - llvm-openmp=14.0.4=he0ac6c6_0 114 | - lz4-c=1.9.3=h295c915_1 115 | - mkl=2022.1.0=h84fe81f_915 116 | - mysql-common=8.0.30=haf5c9bc_0 117 | - mysql-libs=8.0.30=h28c427c_0 118 | - ncurses=6.3=h5eee18b_3 119 | - nettle=3.8.1=hc379101_1 120 | - ninja=1.10.2=h06a4308_5 121 | - ninja-base=1.10.2=hd09550d_5 122 | - nspr=4.32=h9c3ff4c_1 123 | - nss=3.78=h2350873_0 124 | - numpy=1.23.2=py38h3a7f9d9_0 125 | - opencv=4.6.0=py38h578d9bd_0 126 | - openh264=2.1.1=h4ff587b_0 127 | - openssl=1.1.1q=h166bdaf_0 128 | - p11-kit=0.24.1=hc5aa10d_0 129 | - pcre=8.45=h9c3ff4c_0 130 | - pillow=9.2.0=py38hace64e9_1 131 | - pip=22.1.2=py38h06a4308_0 132 | - pixman=0.40.0=h36c2ea0_0 133 | - portaudio=19.6.0=h57a0ea0_5 134 | - pthread-stubs=0.4=h36c2ea0_1001 135 | - pulseaudio=14.0=h7f54b18_8 136 | - py-opencv=4.6.0=py38h7f3c49e_0 137 | - pycparser=2.21=pyhd3eb1b0_0 138 | - pyopenssl=22.0.0=pyhd3eb1b0_0 139 | - pysocks=1.7.1=py38h06a4308_0 140 | - python=3.8.13=h582c2e5_0_cpython 141 | - python_abi=3.8=2_cp38 142 | - pytorch=1.12.1=py3.8_cuda10.2_cudnn7.6.5_0 143 | - pytorch-mutex=1.0=cuda 144 | - qt-main=5.15.4=ha5833f6_2 145 | - readline=8.1.2=h7f8727e_1 146 | - requests=2.28.1=py38h06a4308_0 147 | - setuptools=61.2.0=py38h06a4308_0 148 | - sleef=3.5.1=h9b69904_2 149 | - sqlite=3.39.2=h5082296_0 150 | - svt-av1=1.1.0=h27087fc_1 151 | - tbb=2021.5.0=hd09550d_0 152 | - tk=8.6.12=h1ccaba5_0 153 | - torchaudio=0.12.1=py38_cu102 154 | - torchvision=0.13.1=py38_cu102 155 | - trimesh=3.13.5=pyh6c4a22f_0 156 | - typing_extensions=4.3.0=py38h06a4308_0 157 | - urllib3=1.26.11=py38h06a4308_0 158 | - wheel=0.37.1=pyhd3eb1b0_0 159 | - x264=1!161.3030=h7f98852_1 
160 | - x265=3.5=h924138e_3 161 | - xcb-util=0.4.0=h166bdaf_0 162 | - xcb-util-image=0.4.0=h166bdaf_0 163 | - xcb-util-keysyms=0.4.0=h166bdaf_0 164 | - xcb-util-renderutil=0.3.9=h166bdaf_0 165 | - xcb-util-wm=0.4.1=h166bdaf_0 166 | - xorg-fixesproto=5.0=h7f98852_1002 167 | - xorg-inputproto=2.3.2=h7f98852_1002 168 | - xorg-kbproto=1.0.7=h7f98852_1002 169 | - xorg-libice=1.0.10=h7f98852_0 170 | - xorg-libsm=1.2.3=hd9c2040_1000 171 | - xorg-libx11=1.7.2=h7f98852_0 172 | - xorg-libxau=1.0.9=h7f98852_0 173 | - xorg-libxdmcp=1.1.3=h7f98852_0 174 | - xorg-libxext=1.3.4=h7f98852_1 175 | - xorg-libxfixes=5.0.3=h7f98852_1004 176 | - xorg-libxi=1.7.10=h7f98852_0 177 | - xorg-libxrender=0.9.10=h7f98852_1003 178 | - xorg-renderproto=0.11.1=h7f98852_1002 179 | - xorg-xextproto=7.3.0=h7f98852_1002 180 | - xorg-xproto=7.0.31=h7f98852_1007 181 | - xz=5.2.5=h7f8727e_1 182 | - zlib=1.2.12=h7f8727e_2 183 | - zstd=1.5.2=ha4553b6_0 184 | - pip: 185 | - imageio==2.21.1 186 | - networkx==2.8.5 187 | - packaging==21.3 188 | - pyparsing==3.0.9 189 | - pywavelets==1.3.0 190 | - scikit-image==0.19.3 191 | - scipy==1.9.0 192 | - tifffile==2022.8.12 193 | - torch-scatter==2.0.9 194 | - tqdm==4.64.0 195 | prefix: /home/ant/miniconda3/envs/clpy38 196 | --------------------------------------------------------------------------------
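For reference, a minimal sketch of building and calling the `jitter` Cython extension included above (the `YASS/utils/color_jitter` and `auto_enc/utils/color_jitter` copies are identical). The array layout follows the signature in `jitter.pyx`; importing assumes the extension has been compiled in place:
```python
# Build first, inside the color_jitter directory:
#   python setup.py build_ext --inplace
import numpy as np
from jitter import jitter  # the compiled extension module

# One RGB image: channels-first, C-contiguous float32, values in [0, 255]
img = np.random.randint(0, 256, size=(3, 32, 32)).astype(np.float32)

# Jitters the image in place; each change is scaled by a uniform draw
# from [-1, 1] inside cjitter(), so these arguments are maximum offsets
jitter(img, 0.05, 0.2, 0.1)
```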
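A quick numeric check of the `overlap()` rule used when mining negative patches in `get_samples.py`; the 224-pixel render size here is illustrative, not taken from the repo:
```python
from get_samples import overlap  # assumes YASS/utils is on PYTHONPATH

ov_range = 0.15 * 224  # 33.6 pixels for an illustrative 224-pixel render

# Positive box (50, 50)-(150, 150); candidate box (0, 0)-(40, 40) clears it
# on the left by more than ov_range, so it is accepted as a negative patch
print(overlap(50, 50, 150, 150, 0, 0, 40, 40, ov_range))  # True
```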
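A hedged sketch of how `CustomRandomSampler` and `CustomBatchSampler` from `loader_utils.py` compose with a `DataLoader`; the dataset and sizes are placeholders:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from loader_utils import CustomRandomSampler, CustomBatchSampler

dataset = TensorDataset(torch.randn(1000, 3, 32, 32), torch.zeros(1000))

# One index stream spanning 5 epochs, reshuffled within each epoch
sampler = CustomRandomSampler(dataset, num_epochs=5, num_workers=2)

# Batches are cut so they never straddle an epoch_size boundary; note that
# iteration yields batches for all 5 epochs even though len() counts only
# one epoch's worth, since len(sampler) is the dataset length
batch_sampler = CustomBatchSampler(sampler, batch_size=128,
                                   drop_last=False, epoch_size=len(dataset))
loader = DataLoader(dataset, batch_sampler=batch_sampler)
```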
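For clarity, the objective computed by `MultiClassCrossEntropyLoss` in the two `model_utils.py` copies, written out as a standalone sketch (this is not the repo's API, just the same math):
```python
import torch

def distillation_loss(student_logits, teacher_logits, T):
    # Soften both distributions with temperature T, then take the
    # cross-entropy of the student against the teacher's soft targets
    log_p = torch.log_softmax(student_logits / T, dim=1)  # student
    q = torch.softmax(teacher_logits / T, dim=1)          # soft targets
    return -(q * log_p).sum(dim=1).mean()
```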
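A small sanity check of `ConvAutoencoder`'s shape flow: four stride-2 poolings compress a 32x32 CIFAR image to a 16-channel 2x2 code, and the four transposed convolutions restore 3x32x32:
```python
import torch
from autoencoder import ConvAutoencoder  # assumes auto_enc/ is on PYTHONPATH

model = ConvAutoencoder()
x = torch.randn(8, 3, 32, 32)  # a batch of 8 CIFAR-sized images
out = model(x)                 # encoder: 32 -> 16 -> 8 -> 4 -> 2 spatial
print(out.shape)               # torch.Size([8, 3, 32, 32])
```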
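Finally, a minimal sketch of `calc_ssim` from `auto_enc/utils/metric.py`, which rescales per-image SSIM from [-1, 1] to [0, 1]; inputs are CHW float arrays in [0, 1], matching `data_range=1`:
```python
import numpy as np
from metric import calc_ssim  # assumes auto_enc/utils is on PYTHONPATH

gts = np.random.rand(4, 3, 32, 32)    # ground-truth reconstruction targets
preds = np.random.rand(4, 3, 32, 32)  # autoencoder outputs

scores = calc_ssim(gts, preds)        # shape (4,), each score in [0, 1]
```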