├── .gitignore ├── README.md ├── __init__.py ├── data ├── .gitignore ├── __init__.py ├── ids.py ├── manager.py ├── point_clouds.py ├── renderings.py └── voxels.py ├── eval ├── .gitignore ├── __init__.py ├── chamfer.py ├── ffd_emd.py ├── iou.py ├── normalize.py ├── path.py ├── point_cloud.py ├── retrofit.py └── templates.py ├── ffd ├── __init__.py ├── bernstein.py ├── deform.py └── util.py ├── inference ├── .gitignore ├── __init__.py ├── clouds.py ├── meshes.py ├── path.py ├── predictions.py └── voxels.py ├── metrics ├── __init__.py ├── base.py ├── np_impl.py └── tf_impl.py ├── model ├── .gitignore ├── __init__.py ├── builder.py ├── classifier_builder.py ├── data.py ├── mobilenet │ ├── __init__.py │ ├── mobilenet_1p8.py │ └── mobilenet_old.py └── template_ffd_builder.py ├── paper ├── .gitignore ├── big_table.py ├── cdf.py ├── create_mixed_params.py ├── create_paper_params.py ├── infer_real.py ├── real_images.py ├── segment.py ├── selected_histograms.py ├── sup_vid.py └── top_k.py ├── scripts ├── .gitignore ├── chamfer.py ├── check_predictions.py ├── clear_results.py ├── create_ffd.py ├── create_split_mesh.py ├── create_voxels.py ├── eval.py ├── ffd_emd.py ├── infer.py ├── iou.py ├── profile.py ├── save_inferred_meshes.py ├── test_model.py ├── train.py ├── vis │ ├── clouds.py │ ├── meshes.py │ └── voxels.py ├── vis_inputs.py └── vis_predictions.py └── templates ├── .gitignore ├── __init__.py ├── annotations_ffd.py ├── ffd.py ├── ids.py ├── mesh.py ├── path.py └── templates.json
/.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | __pycache__/* 3 |
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # template_ffd 2 | Code for the paper [Learning Free-Form Deformations for 3D Object Reconstruction](https://arxiv.org/abs/1803.10932), available in [this repository](https://github.com/jackd/template_ffd). 3 | 4 | # Getting Started 5 | ``` 6 | cd /path/to/parent_dir 7 | git clone https://github.com/jackd/template_ffd.git 8 | # non-pip dependencies 9 | git clone https://github.com/jackd/dids.git # framework for manipulating datasets 10 | git clone https://github.com/jackd/util3d.git # general 3d object utilities 11 | git clone https://github.com/jackd/shapenet.git # dataset access 12 | git clone https://github.com/jackd/tf_nearest_neighbour.git # for chamfer loss 13 | git clone https://github.com/jackd/tf_toolbox.git # optional 14 | ``` 15 | To run, ensure the parent directory is on your `PYTHONPATH`. 16 | ``` 17 | export PYTHONPATH=$PYTHONPATH:/path/to/parent_dir 18 | ``` 19 | 20 | So long as your `PYTHONPATH` is set as above, these repositories should work 'out of the box', except for `tf_nearest_neighbour`, which requires the tensorflow op to be built. See the main [repository](https://github.com/jackd/tf_nearest_neighbour) for details. 21 | 22 | Install the pip dependencies 23 | ``` 24 | pip install h5py progress numpy pyemd 25 | ``` 26 | 27 | To use visualizations you'll also need `mayavi`. 28 | ``` 29 | pip install mayavi 30 | ``` 31 | 32 | See the [tensorflow documentation](https://www.tensorflow.org/install/) for installation instructions. A CUDA-enabled GPU is recommended. 33 | 34 | ## Data 35 | This repository depends on the Dictionary Interface to Datasets ([`dids`](https://github.com/jackd/dids.git)) repository for dataset management and [`util3d`](https://github.com/jackd/util3d.git) for various 3d utility functions.
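Concretely, a `dids` dataset behaves like a lazily-opened dictionary: it is opened and closed via the context-manager protocol and supports lazy `map`/`subset` transforms. A minimal sketch of the interface as this code base uses it (values here are illustrative; see the `dids` repository for the actual API):
```python
from shapenet.core.point_clouds import get_point_cloud_dataset

cat_id = '02691156'  # plane
dataset = get_point_cloud_dataset(cat_id, 16384)
sampled = dataset.map(lambda cloud: cloud[:1024])  # transforms are lazy
with sampled:  # datasets must be opened before access
    for example_id in sampled.keys():
        cloud = sampled[example_id]  # dict-style access by example id
```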
36 | 37 | This code base is set up to train on the [`ShapeNet`](https://www.shapenet.org/) Core dataset. We cannot provide the data for this dataset, though it is freely available for registered users. We provide functionality for rendering, loading and converting data in the [`shapenet`](https://github.com/jackd/shapenet) repository. For this project, most data accessing should "just work". There are, however, two manual steps that must be completed. 38 | 39 | 1. Add the path to your shapenet core data to the environment variable `SHAPENET_CORE_PATH`, 40 | ``` 41 | export SHAPENET_CORE_PATH=/path/to/shapenet/dataset/ShapeNetCore.v1 42 | ``` 43 | This folder should contain the `.zip` files for each category, named by the category id, e.g. all plane `obj` files should be in `02691156.zip`. 44 | 2. Render the images, 45 | ``` 46 | cd /path/to/parent_dir/shapenet/core/blender_renderings/scripts 47 | python render_cat.py plane 48 | python create_archive.py plane 49 | ``` 50 | [`Blender`](https://www.blender.org/) is required for this. The binary must either be on your path, or supplied via `render_cat.py`'s `--blender_path` argument. 51 | 52 | Other data preprocessing is required before training can begin (parsing mesh data, sampling meshes, calculating FFD decomposition), though this should be handled as the need arises. 53 | 54 | In order to evaluate IoU scores, meshes must first be converted to voxels. To allow this, make the `util3d` binvox binary executable 55 | ``` 56 | chmod +x /path/to/parent_dir/util3d/bin/binvox 57 | ``` 58 | 59 | You can force any of this data processing for any category to occur manually; see the example for generating plane data below. 60 | ``` 61 | cd /path/to/parent_dir/shapenet/core/meshes/scripts 62 | python generate_mesh_data.py plane 63 | cd ../../point_clouds/scripts 64 | python create_point_clouds.py plane 65 | cd ../../voxels/scripts 66 | # For IoU data. 67 | python create_voxels.py plane 68 | python create_archive.py plane 69 | ``` 70 | 71 | Note that evaluation of models produces a large amount of data. In particular, the inferred meshes generated for IoU evaluation can be very large for a low `edge_length_threshold`. You can safely delete any data in `inference/_inferences` or `eval/_eval`; it will be regenerated if required. 72 | 73 | ## Models 74 | Different models can be built using different hyperparameter sets. Models are built using the `model.template_ffd_builder.TemplateFfdBuilder` class. Each hyperparameter set should have a `MODEL_ID` and an associated `model/params/MODEL_ID.json` file. Default values are specified where they are used in the code. 75 | 76 | See `paper/create_paper_params.py` for the parameter sets used for the models presented in the paper. 77 | 78 | ## Training 79 | Training can be done via the `scripts/train.py` script. For example, 80 | ``` 81 | python train.py example -s 200000 82 | ``` 83 | will train the model with ID `'example'` for 200000 steps (the default is 100000). 84 | 85 | To view training summaries, run 86 | ``` 87 | tensorboard --logdir=model/_model/MODEL_ID 88 | ``` 89 | 90 | Training to 100000 steps as done in the paper takes roughly 8 hours on an NVidia GTX-1070. 91 | 92 | ## Evaluation 93 | There are a number of steps to evaluation, depending on the metrics required. 94 | * To create predictions (network outputs, deformation parameters `Delta P`), run `scripts/infer.py MODEL_ID`. 95 | * See also `scripts/iou.py`, `scripts/chamfer.py` and `scripts/ffd_emd.py` (slow); a sketch of the underlying Python calls is given below.
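These scripts are thin wrappers around functions in the `eval` package. A rough sketch of the corresponding calls, assuming the function names shown in `eval/chamfer.py` and `eval/iou.py` below (results are computed lazily and cached under `eval/_eval` on first use):
```python
from template_ffd.eval.chamfer import get_chamfer_average
from template_ffd.eval.iou import get_iou_average

model_id = 'example'  # placeholder model ID
# both helpers compute per-example values, cache them as json, and return the mean
chamfer = get_chamfer_average(model_id, pre_sampled=True, n_samples=1024)
iou = get_iou_average(model_id, edge_length_threshold=0.1, filled=True)
print(chamfer, iou)
```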
96 | 97 | ## Paper Figures 98 | See the `paper` subdirectory for various scripts used to generate the figures presented in the paper. 99 | 100 | ## Reference 101 | If you find this code useful in your research, please cite the [following paper](https://arxiv.org/abs/1803.10932): 102 | ``` 103 | @article{jack2018learning, 104 | title={Learning Free-Form Deformations for 3D Object Reconstruction}, 105 | author={Jack, Dominic and Pontes, Jhony K and Sridharan, Sridha and Fookes, Clinton and Shirazi, Sareh and Maire, Frederic and Eriksson, Anders}, 106 | journal={arXiv preprint arXiv:1803.10932}, 107 | year={2018} 108 | } 109 | ``` 110 | 111 | ## CHANGELOG 112 | Since the initial release, a small bug has been fixed where batch normalization was being applied both before and after activations in some cases. This shouldn't make a massive difference to performance, but it may mean that previously trained models can no longer be loaded properly. To revert to the old behaviour, add `'use_bn_bugged_version': true` to the params file. 113 |
-------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/__init__.py
-------------------------------------------------------------------------------- /data/.gitignore: -------------------------------------------------------------------------------- 1 | _ids 2 | _filled_voxels 3 |
-------------------------------------------------------------------------------- /data/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/data/__init__.py
-------------------------------------------------------------------------------- /data/ids.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | _ids_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), '_ids') 4 | 5 | 6 | class SplitConfig(object): 7 | def __init__(self, train_prop=0.8, seed=0): 8 | self._train_prop = train_prop 9 | self._seed = seed 10 | self._split_config_id = 's%d-%s' % (seed, train_prop) 11 | root_dir = os.path.join(_ids_dir, self._split_config_id) 12 | if not os.path.isdir(root_dir): 13 | os.makedirs(root_dir) 14 | self._root_dir = root_dir 15 | 16 | @property 17 | def root_dir(self): 18 | return self._root_dir 19 | 20 | def get_txt_path(self, cat_id, mode): 21 | return os.path.join(self._root_dir, '%s_%s.txt' % (cat_id, mode)) 22 | 23 | def _get_example_ids(self, cat_id, mode): 24 | if not self.has_split(cat_id): 25 | self.create_split(cat_id, overwrite=True) 26 | if mode in ('predict', 'infer'): 27 | mode = 'eval' 28 | with open(self.get_txt_path(cat_id, mode)) as fp: 29 | example_ids = [i.rstrip() for i in fp.readlines()] 30 | return example_ids 31 | 32 | def get_example_ids(self, cat_id, mode): 33 | if isinstance(cat_id, (list, tuple)): 34 | return tuple(self._get_example_ids(c, mode) for c in cat_id) 35 | else: 36 | return self._get_example_ids(cat_id, mode) 37 | 38 | def has_split(self, cat_id): 39 | return all(os.path.isfile(self.get_txt_path(cat_id, m)) 40 | for m in ('train', 'eval')) 41 | 42 | def create_split(self, cat_id, overwrite=False): 43 | import random 44 | from shapenet.core import get_example_ids 45 | from template_ffd.templates.ids import get_template_ids 46 | if not overwrite and
self.has_split(cat_id): 47 | return 48 | template_ids = set(get_template_ids(cat_id)) 49 | example_ids = get_example_ids(cat_id) 50 | example_ids = [i for i in example_ids if i not in template_ids] 51 | example_ids.sort() 52 | random.seed(self._seed) 53 | random.shuffle(example_ids) 54 | train_ids, eval_ids = _train_eval_partition( 55 | example_ids, self._train_prop) 56 | train_ids.sort() 57 | eval_ids.sort() 58 | for mode, ids in (('train', train_ids), ('eval', eval_ids)): 59 | with open(self.get_txt_path(cat_id, mode), 'w') as fp: 60 | fp.writelines(('%s\n' % i for i in ids)) 61 | 62 | 63 | def _train_eval_partition(example_list, train_prop=0.8): 64 | n = len(example_list) 65 | n_train = int(n*train_prop) 66 | return example_list[:n_train], example_list[n_train:] 67 | 68 | 69 | def get_example_ids(cat_id, mode, **config_kwargs): 70 | return SplitConfig(**config_kwargs).get_example_ids(cat_id, mode) 71 | 72 | 73 | if __name__ == '__main__': 74 | from template_ffd.templates.ids import get_templated_cat_ids 75 | config = SplitConfig() 76 | for cat_id in get_templated_cat_ids(): 77 | config.create_split(cat_id) 78 | -------------------------------------------------------------------------------- /data/manager.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | 3 | 4 | def base_dataset(example_ids): 5 | # indices = tf.range(len(example_ids), dtype=tf.int32) 6 | # example_ids = tf.convert_to_tensor(example_ids, tf.string) 7 | # 8 | # def map_fn(index): 9 | # return tf.gather(example_ids, index) 10 | # 11 | # return tf.data.Dataset.from_tensor_slices(indices).map(map_fn) 12 | example_ids = tf.convert_to_tensor(example_ids, tf.string) 13 | return tf.data.Dataset.from_tensor_slices(example_ids) 14 | 15 | 16 | class MapManager(object): 17 | @property 18 | def output_shape(self): 19 | raise NotImplementedError('Abstract method') 20 | 21 | @property 22 | def output_type(self): 23 | raise NotImplementedError('Abstract method') 24 | 25 | def map_np(self, example_id): 26 | raise NotImplementedError('Abstract method') 27 | 28 | def map_tf(self, example_id): 29 | return tf.py_func( 30 | self.map_np, [example_id], self.output_type, stateful=False) 31 | 32 | def get_generator_dataset(self, example_ids): 33 | def generator_fn(): 34 | for example_id in example_ids: 35 | yield self.map_np(example_id) 36 | 37 | return tf.data.Dataset.from_generator( 38 | generator_fn, self.output_type, self.output_shape) 39 | 40 | 41 | class ZippedMapManager(MapManager): 42 | def __init__(self, managers): 43 | self._managers = managers 44 | 45 | @property 46 | def output_shape(self): 47 | return tuple(m.output_shape for m in self._managers) 48 | 49 | @property 50 | def output_type(self): 51 | return tuple(m.output_type for m in self._managers) 52 | 53 | def map_np(self, example_id): 54 | return tuple(m.map_np(example_id) for m in self._managers) 55 | 56 | 57 | if __name__ == '__main__': 58 | from point_clouds import SampledPointCloudManager 59 | from renderings import RenderingsManager 60 | from shapenet.core import cat_desc_to_id, get_example_ids 61 | from shapenet.core.blender_renderings.config import RenderConfig 62 | cat_desc = 'plane' 63 | cat_id = cat_desc_to_id(cat_desc) 64 | example_ids = get_example_ids(cat_id) 65 | 66 | view_index = 5 67 | render_config = RenderConfig() 68 | renderings_manager = RenderingsManager(render_config, view_index, cat_id) 69 | 70 | n_samples = 16384 71 | n_resamples = 1024 72 | cloud_manager = SampledPointCloudManager(cat_id, 
n_samples, n_resamples) 73 | 74 | manager = ZippedMapManager((renderings_manager, cloud_manager)) 75 | 76 | dataset = base_dataset(example_ids).map(manager.map_tf) 77 | image, cloud = dataset.make_one_shot_iterator().get_next() 78 | 79 | def vis(image, cloud): 80 | import matplotlib.pyplot as plt 81 | from mayavi import mlab 82 | from util3d.mayavi_vis import vis_point_cloud 83 | plt.imshow(image) 84 | vis_point_cloud(cloud, color=(0, 0, 1), scale_factor=0.01) 85 | plt.show(block=False) 86 | mlab.show() 87 | plt.close() 88 | 89 | with tf.train.MonitoredSession() as sess: 90 | while not sess.should_stop(): 91 | im, cl = sess.run([image, cloud]) 92 | vis(im, cl) 93 | -------------------------------------------------------------------------------- /data/point_clouds.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tensorflow as tf 3 | from manager import MapManager, base_dataset 4 | from util3d.point_cloud import sample_points 5 | from shapenet.core.point_clouds import get_point_cloud_dataset 6 | 7 | 8 | class SampledPointCloudManager(MapManager): 9 | def __init__(self, cat_id, n_samples, n_resamples): 10 | self._cat_id = cat_id 11 | self._n_samples = n_samples 12 | self._n_resamples = n_resamples 13 | self._dataset = get_point_cloud_dataset(cat_id, n_samples) 14 | self._dataset.open() 15 | 16 | @property 17 | def output_shape(self): 18 | return (self._n_resamples, 3) 19 | 20 | @property 21 | def output_type(self): 22 | return tf.float32 23 | 24 | def map_np(self, example_id): 25 | points = np.array(self._dataset[example_id], dtype=np.float32) 26 | return sample_points(points, self._n_resamples, axis=0) 27 | 28 | 29 | def get_sampled_point_cloud_dataset( 30 | cat_id, example_ids, n_samples, n_resamples): 31 | manager = SampledPointCloudManager(cat_id, n_samples, n_resamples) 32 | base = base_dataset(example_ids) 33 | return base.map(manager.map_tf) 34 | 35 | 36 | if __name__ == '__main__': 37 | from mayavi import mlab 38 | from util3d.mayavi_vis import vis_point_cloud 39 | from shapenet.core import cat_desc_to_id, get_example_ids 40 | cat_desc = 'plane' 41 | n_samples = 16384 42 | # n_resamples = None 43 | n_resamples = 1024 44 | cat_id = cat_desc_to_id(cat_desc) 45 | example_ids = get_example_ids(cat_id) 46 | dataset = get_sampled_point_cloud_dataset( 47 | cat_id, example_ids, n_samples, n_resamples) 48 | pc = dataset.make_one_shot_iterator().get_next() 49 | with tf.train.MonitoredSession() as sess: 50 | while not sess.should_stop(): 51 | cloud = sess.run(pc) 52 | vis_point_cloud(cloud, color=(0, 0, 1), scale_factor=0.01) 53 | mlab.show() 54 | -------------------------------------------------------------------------------- /data/renderings.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from shapenet.image import with_background 3 | from manager import MapManager, base_dataset 4 | 5 | 6 | class RenderingsManager(MapManager): 7 | def __init__(self, render_config, view_index, cat_id): 8 | self._config = render_config 9 | self._dataset = render_config.get_dataset(cat_id, view_index) 10 | self._dataset.open() 11 | self._cat_id = cat_id 12 | 13 | def map_np(self, example_id): 14 | return with_background(self._dataset[example_id], 255) 15 | 16 | @property 17 | def output_shape(self): 18 | return self._config.shape + (3,) 19 | 20 | @property 21 | def output_type(self): 22 | return tf.uint8 23 | 24 | 25 | def get_renderings_dataset( 26 | render_config, 
view_index, cat_id, example_ids): 27 | manager = RenderingsManager(render_config, view_index, cat_id) 28 | return base_dataset(example_ids).map(manager.map_tf) 29 | 30 | 31 | if __name__ == '__main__': 32 | import matplotlib.pyplot as plt 33 | from shapenet.core import cat_desc_to_id, get_example_ids 34 | from shapenet.core.blender_renderings.config import RenderConfig 35 | cat_desc = 'plane' 36 | view_index = 5 37 | config = RenderConfig() 38 | cat_id = cat_desc_to_id(cat_desc) 39 | example_ids = get_example_ids(cat_id) 40 | dataset = get_renderings_dataset(config, view_index, cat_id, example_ids) 41 | image_tf = dataset.make_one_shot_iterator().get_next() 42 | with tf.train.MonitoredSession() as sess: 43 | while not sess.should_stop(): 44 | image = sess.run(image_tf) 45 | plt.imshow(image) 46 | plt.show() 47 |
-------------------------------------------------------------------------------- /data/voxels.py: -------------------------------------------------------------------------------- 1 | import os 2 | import util3d.voxel.dataset as bvd 3 | from shapenet.core.voxels.config import VoxelConfig 4 | from dids.core import BiKeyDataset 5 | 6 | _voxels_dir = os.path.join( 7 | os.path.realpath(os.path.dirname(__file__)), '_filled_voxels') 8 | 9 | 10 | def fill_voxels(voxels): 11 | import numpy as np 12 | from util3d.voxel.manip import filled_voxels 13 | from util3d.voxel.binvox import DenseVoxels 14 | if isinstance(voxels, np.ndarray): 15 | return filled_voxels(voxels) 16 | else: 17 | return DenseVoxels( 18 | filled_voxels(voxels.dense_data()), voxels.translate, voxels.scale) 19 | 20 | 21 | def create_filled_data(unfilled_dataset, dst, overwrite=False, message=None): 22 | src = unfilled_dataset.map(fill_voxels) 23 | with src: 24 | dst.save_dataset(src, overwrite=overwrite, message=message) 25 | 26 | 27 | def _get_filled_gt_voxel_dataset_single(cat_id, mode): 28 | folder = os.path.join(_voxels_dir, cat_id) 29 | if not os.path.isdir(folder): 30 | os.makedirs(folder) 31 | return bvd.BinvoxDataset(folder, mode=mode) 32 | 33 | 34 | def _get_filled_gt_voxel_dataset(cat_id, mode): 35 | if isinstance(cat_id, (list, tuple)): 36 | datasets = {k: _get_filled_gt_voxel_dataset_single(k, mode) 37 | for k in cat_id} 38 | return BiKeyDataset(datasets) 39 | else: 40 | return _get_filled_gt_voxel_dataset_single(cat_id, mode) 41 | 42 | 43 | def create_filled_gt_data(cat_id, overwrite=False): 44 | src = get_unfilled_gt_voxel_dataset(cat_id) 45 | dst = _get_filled_gt_voxel_dataset(cat_id, 'a') 46 | with src: 47 | with dst: 48 | create_filled_data( 49 | src, dst, overwrite=overwrite, 50 | message='Filling ground truth voxels...') 51 | 52 | 53 | def get_filled_gt_voxel_dataset(cat_id, auto_save=True, example_ids=None): 54 | if auto_save: 55 | create_filled_gt_data(cat_id)  # fills the whole category; example_ids is unused here 56 | return _get_filled_gt_voxel_dataset(cat_id, 'r') 57 | 58 | 59 | def _get_unfilled_gt_voxel_dataset_single(cat_id): 60 | return VoxelConfig().get_dataset(cat_id) 61 | 62 | 63 | def get_unfilled_gt_voxel_dataset(cat_id): 64 | if isinstance(cat_id, (list, tuple)): 65 | datasets = { 66 | k: _get_unfilled_gt_voxel_dataset_single(k) for k in cat_id} 67 | return BiKeyDataset(datasets) 68 | else: 69 | return _get_unfilled_gt_voxel_dataset_single(cat_id) 70 | 71 | 72 | def get_gt_voxel_dataset( 73 | cat_id, filled=False, auto_save=True, example_ids=None): 74 | kwargs = dict(auto_save=auto_save, example_ids=example_ids) 75 | if filled: 76 | return get_filled_gt_voxel_dataset(cat_id, **kwargs) 77 | else: 78 | return
get_unfilled_gt_voxel_dataset(cat_id)  # the unfilled variant takes no auto_save/example_ids kwargs 79 |
-------------------------------------------------------------------------------- /eval/.gitignore: -------------------------------------------------------------------------------- 1 | _eval 2 |
-------------------------------------------------------------------------------- /eval/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/eval/__init__.py
-------------------------------------------------------------------------------- /eval/chamfer.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from dids.file_io.json_dataset import JsonAutoSavingManager 3 | from template_ffd.metrics.np_impl import np_metrics 4 | 5 | import template_ffd.inference.clouds as clouds 6 | from template_ffd.model import get_builder 7 | from path import get_eval_path 8 | from point_cloud import get_lazy_evaluation_dataset 9 | 10 | 11 | def _get_lazy_chamfer_dataset(inf_cloud_dataset, cat_id, n_samples): 12 | return get_lazy_evaluation_dataset( 13 | inf_cloud_dataset, cat_id, n_samples, 14 | lambda c0, c1: np_metrics.chamfer(c0, c1) / n_samples) 15 | 16 | 17 | class _TemplateChamferAutoSavingManager(JsonAutoSavingManager): 18 | def __init__(self, model_id, n_samples=1024): 19 | self._model_id = model_id 20 | self._n_samples = n_samples 21 | 22 | @property 23 | def path(self): 24 | return get_eval_path( 25 | 'chamfer', 'template', 26 | str(self._n_samples), 27 | '%s.json' % self._model_id) 28 | 29 | @property 30 | def saving_message(self): 31 | return ('Creating chosen template Chamfer data\n' 32 | 'model_id: %s\nn_samples: %d' % 33 | (self._model_id, self._n_samples)) 34 | 35 | def get_lazy_dataset(self): 36 | from shapenet.core.point_clouds import get_point_cloud_dataset 37 | from util3d.point_cloud import sample_points 38 | from template_ffd.model import get_builder 39 | from template_ffd.inference.predictions import \ 40 | get_selected_template_idx_dataset 41 | builder = get_builder(self._model_id) 42 | cat_id = builder.cat_id 43 | template_ids = builder.template_ids 44 | clouds = [] 45 | 46 | def sample_fn(cloud): 47 | return sample_points(np.array(cloud), self._n_samples) 48 | 49 | gt_clouds = get_point_cloud_dataset( 50 | cat_id, builder.n_samples).map(sample_fn) 51 | with gt_clouds: 52 | for example_id in template_ids: 53 | clouds.append(np.array(gt_clouds[example_id])) 54 | 55 | idx_dataset = get_selected_template_idx_dataset(self._model_id) 56 | inf_cloud_ds = idx_dataset.map(lambda i: np.array(clouds[i])) 57 | return _get_lazy_chamfer_dataset(inf_cloud_ds, cat_id, self._n_samples) 58 | 59 | 60 | class _ChamferAutoSavingManager(JsonAutoSavingManager): 61 | def __init__(self, model_id, n_samples=1024, **kwargs): 62 | self._model_id = model_id 63 | self._n_samples = n_samples 64 | self._nested_depth = 3 65 | self._kwargs = kwargs 66 | 67 | @property 68 | def saving_message(self): 69 | items = ( 70 | ('model_id', self._model_id), 71 | ('n_samples', self._n_samples) 72 | ) + tuple(self._kwargs.items()) 73 | return 'Creating Chamfer data\n%s' % '\n'.join( 74 | '%s: %s' % (k, v) for k, v in items) 75 | 76 | def get_inferred_cloud_dataset(self): 77 | raise NotImplementedError('Abstract method') 78 | 79 | def get_lazy_dataset(self): 80 | inf_cloud_ds = self.get_inferred_cloud_dataset() 81 | cat_id = get_builder(self._model_id).cat_id 82 | return
_get_lazy_chamfer_dataset(inf_cloud_ds, cat_id, self._n_samples) 83 | 84 | 85 | class _PreSampledChamferAutoSavingManager(_ChamferAutoSavingManager): 86 | @property 87 | def path(self): 88 | return get_eval_path( 89 | 'chamfer', 'presampled', 90 | str(self._n_samples), 91 | '%s.json' % self._model_id) 92 | 93 | def get_inferred_cloud_dataset(self): 94 | return clouds.get_inferred_cloud_dataset( 95 | pre_sampled=True, model_id=self._model_id, 96 | n_samples=self._n_samples, **self._kwargs) 97 | 98 | 99 | class _PostSampledChamferAutoSavingManager(_ChamferAutoSavingManager): 100 | @property 101 | def path(self): 102 | return get_eval_path( 103 | 'chamfer', 'postsampled', str(self._n_samples), 104 | str(self._kwargs['edge_length_threshold']), 105 | '%s.json' % self._model_id) 106 | 107 | def get_inferred_cloud_dataset(self): 108 | return clouds.get_inferred_cloud_dataset( 109 | pre_sampled=False, model_id=self._model_id, 110 | n_samples=self._n_samples, **self._kwargs) 111 | 112 | 113 | def get_chamfer_manager(model_id, pre_sampled=True, **kwargs): 114 | if pre_sampled: 115 | return _PreSampledChamferAutoSavingManager(model_id, **kwargs) 116 | else: 117 | return _PostSampledChamferAutoSavingManager(model_id, **kwargs) 118 | 119 | 120 | def get_chamfer_average(model_id, pre_sampled=True, cat_desc=None, **kwargs): 121 | import os 122 | from shapenet.core import cat_desc_to_id 123 | manager = get_chamfer_manager(model_id, pre_sampled, **kwargs) 124 | 125 | values = None 126 | if os.path.isfile(manager.path): 127 | with manager.get_saving_dataset('r') as ds: 128 | values = np.array(tuple(ds.values())) 129 | if values is None or len(values) == 0: 130 | manager.save_all() 131 | with manager.get_saving_dataset('r') as ds: 132 | if cat_desc is not None: 133 | if not isinstance(cat_desc, (list, tuple, set)): 134 | cat_desc = [cat_desc] 135 | cat_id = set(cat_desc_to_id(cat_desc)) 136 | ds = ds.filter_keys(lambda key: key[0] in cat_id) 137 | values = np.array(tuple(ds.values())) 138 | return np.mean(values) 139 | 140 | 141 | def get_template_chamfer_manager(model_id, n_samples=1024): 142 | return _TemplateChamferAutoSavingManager(model_id, n_samples) 143 | -------------------------------------------------------------------------------- /eval/ffd_emd.py: -------------------------------------------------------------------------------- 1 | import string 2 | import numpy as np 3 | from dids.file_io.json_dataset import JsonAutoSavingManager 4 | from template_ffd.metrics.np_impl import np_metrics 5 | 6 | import template_ffd.inference.clouds as clouds 7 | from template_ffd.model import get_builder 8 | from path import get_eval_path 9 | from point_cloud import get_lazy_evaluation_dataset 10 | 11 | 12 | def _get_lazy_emd_dataset(inf_cloud_dataset, cat_id, n_samples): 13 | def eval_fn(c0, c1): 14 | return np_metrics.emd(c0, c1) 15 | return get_lazy_evaluation_dataset( 16 | inf_cloud_dataset, cat_id, n_samples, eval_fn) 17 | 18 | 19 | class _TemplateEmdAutoSavingManager(JsonAutoSavingManager): 20 | def __init__(self, model_id, n_samples=1024): 21 | self._model_id = model_id 22 | self._n_samples = n_samples 23 | self._nested_depth = 3 24 | 25 | @property 26 | def path(self): 27 | return get_eval_path( 28 | 'emd', 'template', str(self._n_samples), 29 | '%s.json' % self._model_id) 30 | 31 | @property 32 | def saving_message(self): 33 | return ('Creating chosen template EMD data\n' 34 | 'model_id: %s\nn_samples: %d' % 35 | (self._model_id, self._n_samples)) 36 | 37 | def get_lazy_dataset(self): 38 | from 
shapenet.core.point_clouds import get_point_cloud_dataset 39 | from util3d.point_cloud import sample_points 40 | from template_ffd.model import get_builder 41 | from template_ffd.inference.predictions import get_selected_template_idx_dataset 42 | builder = get_builder(self._model_id) 43 | cat_id = builder.cat_id 44 | template_ids = builder.template_ids 45 | clouds = [] 46 | 47 | def sample_fn(cloud): 48 | return sample_points(np.array(cloud), self._n_samples) 49 | 50 | gt_clouds = get_point_cloud_dataset( 51 | cat_id, builder.n_samples).map(sample_fn) 52 | with gt_clouds: 53 | for example_id in template_ids: 54 | clouds.append(np.array(gt_clouds[example_id])) 55 | 56 | idx_dataset = get_selected_template_idx_dataset( 57 | self._model_id)  # select by argmax template probability, as in chamfer.py 58 | inf_cloud_ds = idx_dataset.map(lambda i: clouds[i].copy()) 59 | return _get_lazy_emd_dataset(inf_cloud_ds, cat_id, self._n_samples) 60 | 61 | 62 | class _EmdAutoSavingManager(JsonAutoSavingManager): 63 | def __init__(self, model_id, n_samples=1024, **kwargs): 64 | self._model_id = model_id 65 | self._n_samples = n_samples 66 | self._kwargs = kwargs 67 | self._nested_depth = 3 68 | 69 | @property 70 | def saving_message(self): 71 | items = ( 72 | ('model_id', self._model_id), 73 | ('n_samples', self._n_samples) 74 | ) + tuple(self._kwargs.items()) 75 | return 'Creating EMD data\n%s' % string.join( 76 | ('%s: %s' % (k, v) for k, v in items), '\n') 77 | 78 | def get_inferred_cloud_dataset(self): 79 | raise NotImplementedError('Abstract method') 80 | 81 | def get_lazy_dataset(self): 82 | inf_cloud_ds = self.get_inferred_cloud_dataset() 83 | cat_id = get_builder(self._model_id).cat_id 84 | return _get_lazy_emd_dataset(inf_cloud_ds, cat_id, self._n_samples) 85 | 86 | 87 | class _PreSampledEmdAutoSavingManager(_EmdAutoSavingManager): 88 | @property 89 | def path(self): 90 | return get_eval_path( 91 | 'emd', 'presampled', str(self._n_samples), 92 | '%s.json' % self._model_id) 93 | 94 | def get_inferred_cloud_dataset(self): 95 | return clouds.get_inferred_cloud_dataset( 96 | pre_sampled=True, model_id=self._model_id, 97 | n_samples=self._n_samples, **self._kwargs) 98 | 99 | 100 | class _PostSampledEmdAutoSavingManager(_EmdAutoSavingManager): 101 | @property 102 | def path(self): 103 | return get_eval_path( 104 | 'emd', 'postsampled', str(self._n_samples), 105 | '%s.json' % self._model_id) 106 | 107 | def get_inferred_cloud_dataset(self): 108 | return clouds.get_inferred_cloud_dataset( 109 | pre_sampled=False, model_id=self._model_id, 110 | n_samples=self._n_samples, **self._kwargs) 111 | 112 | 113 | def get_emd_manager(model_id, pre_sampled=True, **kwargs): 114 | if pre_sampled: 115 | return _PreSampledEmdAutoSavingManager(model_id, **kwargs) 116 | else: 117 | return _PostSampledEmdAutoSavingManager(model_id, **kwargs) 118 | 119 | 120 | def get_emd_average(model_id, pre_sampled=True, **kwargs): 121 | import os 122 | manager = get_emd_manager(model_id, pre_sampled, **kwargs) 123 | values = None 124 | if os.path.isfile(manager.path): 125 | with manager.get_saving_dataset('r') as ds: 126 | values = np.array(tuple(ds.values())) 127 | if values is None: 128 | try: 129 | manager.save_all() 130 | except Exception: 131 | os.remove(manager.path) 132 | raise 133 | with manager.get_saving_dataset('r') as ds: 134 | values = np.array(tuple(ds.values())) 135 | return np.mean(values) 136 | 137 | 138 | def get_template_emd_manager(model_id, n_samples=1024): 139 | return _TemplateEmdAutoSavingManager(model_id, n_samples) 140 |
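For reference, a short sketch of how these saving managers are driven (this mirrors `get_emd_average` above; `'example'` is a placeholder model ID):
```python
import numpy as np
from template_ffd.eval.ffd_emd import get_emd_manager

manager = get_emd_manager('example', pre_sampled=True, n_samples=1024)
manager.save_all()  # computes and caches per-example EMD values as json
with manager.get_saving_dataset('r') as ds:
    values = np.array(tuple(ds.values()))
print(np.mean(values))
```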
-------------------------------------------------------------------------------- /eval/iou.py: -------------------------------------------------------------------------------- 1 | from __future__ import division 2 | import os 3 | import numpy as np 4 | from dids import Dataset 5 | from dids.file_io.json_dataset import JsonAutoSavingManager 6 | from shapenet.core.voxels.config import VoxelConfig 7 | from template_ffd.inference.voxels import get_voxel_dataset 8 | from template_ffd.model import load_params 9 | from template_ffd.data.voxels import get_gt_voxel_dataset 10 | from shapenet.core import cat_desc_to_id 11 | from path import get_eval_path 12 | from template_ffd.model import get_builder 13 | 14 | 15 | def intersection_over_union(v0, v1): 16 | intersection = np.sum(np.logical_and(v0, v1)) 17 | union = np.sum(np.logical_or(v0, v1)) 18 | return intersection / union 19 | 20 | 21 | class IouTemplateSavingManager(JsonAutoSavingManager): 22 | def __init__(self, model_id, filled=True, voxel_config=None): 23 | self._model_id = model_id 24 | self._filled = filled 25 | self._voxel_config = VoxelConfig() if voxel_config is None else \ 26 | voxel_config 27 | 28 | @property 29 | def saving_message(self): 30 | return ('Creating selected template IoU data\nmodel_id: %s\nfilled: %s' 31 | % (self._model_id, self._filled)) 32 | 33 | @property 34 | def path(self): 35 | fs = 'filled' if self._filled else 'unfilled' 36 | return get_eval_path( 37 | 'iou', 'template', 38 | self._voxel_config.voxel_id, fs, '%s.json' % self._model_id) 39 | 40 | def get_lazy_dataset(self): 41 | from template_ffd.inference.predictions import \ 42 | get_selected_template_idx_dataset 43 | builder = get_builder(self._model_id) 44 | template_ids = builder.template_ids 45 | 46 | gt_ds = get_gt_voxel_dataset( 47 | builder.cat_id, filled=self._filled, auto_save=True, 48 | example_ids=template_ids) 49 | gt_ds = gt_ds.map(lambda v: v.data) 50 | with gt_ds: 51 | template_voxels = tuple(gt_ds[tid] for tid in template_ids) 52 | 53 | selected_ds = get_selected_template_idx_dataset(self._model_id) 54 | selected_ds = selected_ds.map(lambda i: template_voxels[i]) 55 | 56 | return Dataset.zip(selected_ds, gt_ds).map( 57 | lambda v: intersection_over_union(*v)) 58 | 59 | 60 | class IouAutoSavingManager(JsonAutoSavingManager): 61 | def __init__( 62 | self, model_id, edge_length_threshold=0.1, filled=False, 63 | voxel_config=None): 64 | self._model_id = model_id 65 | self._edge_length_threshold = edge_length_threshold 66 | self._filled = filled 67 | self._voxel_config = VoxelConfig() if voxel_config is None else \ 68 | voxel_config 69 | self._nested_depth = 3 70 | 71 | @property 72 | def saving_message(self): 73 | return ('Creating IoU data\n' 74 | 'model_id: %s\n' 75 | 'edge_length_threshold: %.3f\n' 76 | 'filled: %s\n' 77 | 'voxel_config: %s' % ( 78 | self._model_id, self._edge_length_threshold, 79 | self._filled, self._voxel_config.voxel_id)) 80 | 81 | @property 82 | def path(self): 83 | fs = 'filled' if self._filled else 'unfilled' 84 | return get_eval_path( 85 | 'iou', str(self._edge_length_threshold), 86 | self._voxel_config.voxel_id, fs, '%s.json' % self._model_id) 87 | 88 | def get_lazy_dataset(self): 89 | cat_id = cat_desc_to_id(load_params(self._model_id)['cat_desc']) 90 | if not isinstance(cat_id, (list, tuple)): 91 | cat_id = [cat_id] 92 | inferred_dataset = get_voxel_dataset( 93 | self._model_id, self._edge_length_threshold, self._voxel_config, 94 | filled=self._filled) 95 | 96 | gt_dataset = get_gt_voxel_dataset(cat_id, 
filled=self._filled) 97 | gt_dataset = gt_dataset.map_keys(lambda key: key[:2]) 98 | 99 | with inferred_dataset: 100 | keys = tuple(inferred_dataset.keys()) 101 | 102 | voxel_datasets = Dataset.zip(inferred_dataset, gt_dataset) 103 | voxel_datasets = voxel_datasets.subset(keys) 104 | 105 | def map_fn(v): 106 | return intersection_over_union( 107 | v[0].dense_data(), v[1].dense_data()) 108 | 109 | iou_dataset = voxel_datasets.map(map_fn) 110 | return iou_dataset 111 | 112 | 113 | def get_iou_dataset( 114 | model_id, edge_length_threshold=0.1, filled=False, 115 | recalc=False): 116 | manager = IouAutoSavingManager( 117 | model_id=model_id, 118 | edge_length_threshold=edge_length_threshold, 119 | filled=filled 120 | ) 121 | if not recalc: 122 | if not os.path.isfile(manager.path): 123 | recalc = True 124 | else: 125 | with manager.get_saving_dataset() as ds: 126 | if len(ds) == 0: 127 | recalc = True 128 | 129 | if recalc: 130 | manager.save_all() 131 | return manager.get_saving_dataset('r') 132 | 133 | 134 | def get_iou_average( 135 | model_id, edge_length_threshold=0.1, filled=False): 136 | with get_iou_dataset(model_id, edge_length_threshold=edge_length_threshold, 137 | filled=filled) as ds: 138 | values = list(ds.values()) 139 | return np.mean(values) 140 | -------------------------------------------------------------------------------- /eval/normalize.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from dids.file_io.json_dataset import JsonAutoSavingManager 3 | from template_ffd.data.ids import get_example_ids 4 | from path import get_eval_path 5 | 6 | 7 | def get_normalization_params(vertices): 8 | from scipy.optimize import minimize 9 | vertices = np.array(vertices) 10 | vertical_offset = np.min(vertices[:, 1]) 11 | vertices[:, 1] -= vertical_offset 12 | 13 | def f(x): 14 | x = np.array([x[0], 0, x[1]]) 15 | dist2 = np.sum((vertices - x)**2, axis=-1) 16 | return np.max(dist2) 17 | 18 | opt = minimize(f, np.array([0, 0])).x 19 | offset = np.array([opt[0], vertical_offset, opt[1]], dtype=np.float32) 20 | vertices[:, [0, 2]] -= opt 21 | 22 | radius = np.sqrt(np.max(np.sum(vertices**2, axis=-1))) 23 | unit1 = 3.2 24 | scale_factor = radius / unit1 25 | return offset, scale_factor 26 | 27 | 28 | def normalized(points, offset, scale_factor): 29 | return (points - offset) / scale_factor 30 | 31 | 32 | def normalize(points, offset, scale_factor): 33 | points -= offset 34 | points /= scale_factor 35 | 36 | 37 | class _NormalizationParamsAutoSavingManager(JsonAutoSavingManager): 38 | def __init__(self, cat_id): 39 | self._cat_id = cat_id 40 | 41 | @property 42 | def saving_message(self): 43 | return ( 44 | 'Creating transform parameter dataset\ncat_id: %s' % self._cat_id) 45 | 46 | @property 47 | def path(self): 48 | return get_eval_path('transform_params', '%s.json' % self._cat_id) 49 | 50 | def get_lazy_dataset(self): 51 | from shapenet.core.meshes import get_mesh_dataset 52 | example_ids = get_example_ids(self._cat_id, 'eval') 53 | mesh_ds = get_mesh_dataset(self._cat_id).subset(example_ids) 54 | 55 | def map_fn(mesh): 56 | vertices = mesh['vertices'] 57 | offset, scale_factor = get_normalization_params(vertices) 58 | return dict( 59 | offset=[float(o) for o in offset], 60 | scale_factor=float(scale_factor)) 61 | 62 | return mesh_ds.map(map_fn) 63 | 64 | 65 | def get_normalization_params_dataset(cat_id): 66 | from dids.core import BiKeyDataset 67 | 68 | def f(c): 69 | return 
_NormalizationParamsAutoSavingManager(c).get_saved_dataset() 70 | if isinstance(cat_id, (list, tuple)): 71 | dataset = BiKeyDataset({c: f(c) for c in cat_id}) 72 | else: 73 | dataset = f(cat_id) 74 | 75 | return dataset.map( 76 | lambda x: {k: np.array(v, dtype=np.float32) for k, v in x.items()}) 77 | -------------------------------------------------------------------------------- /eval/path.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | _eval_dir = os.path.realpath(os.path.dirname(__file__)) 4 | 5 | 6 | def get_eval_dir(*args): 7 | folder = os.path.join(_eval_dir, '_eval', *args) 8 | if not os.path.isdir(folder): 9 | os.makedirs(folder) 10 | return folder 11 | 12 | 13 | def get_eval_path(*args): 14 | return os.path.join(get_eval_dir(*args[:-1]), args[-1]) 15 | -------------------------------------------------------------------------------- /eval/point_cloud.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from dids.core import Dataset 3 | # from dids.core import BiKeyDataset 4 | from shapenet.core.point_clouds import get_point_cloud_dataset 5 | from util3d.point_cloud import sample_points 6 | from normalize import get_normalization_params_dataset, normalized 7 | from template_ffd.data.ids import get_example_ids 8 | 9 | 10 | def _get_lazy_evaluation_dataset_single( 11 | inf_cloud_ds, cat_id, n_samples, eval_fn): 12 | 13 | def sample_fn(cloud): 14 | return sample_points(np.array(cloud), n_samples) 15 | 16 | example_ids = get_example_ids(cat_id, 'eval') 17 | 18 | normalization_ds = get_normalization_params_dataset(cat_id) 19 | gt_cloud_ds = get_point_cloud_dataset( 20 | cat_id, n_samples, example_ids=example_ids).map(sample_fn) 21 | 22 | with inf_cloud_ds: 23 | keys = tuple(inf_cloud_ds.keys()) 24 | 25 | normalization_ds = normalization_ds.map_keys( 26 | lambda key: key[:2]) 27 | gt_cloud_ds = gt_cloud_ds.map_keys(lambda key: key[:2]) 28 | 29 | zipped = Dataset.zip( 30 | inf_cloud_ds, gt_cloud_ds, normalization_ds).subset( 31 | keys, check_present=False) 32 | 33 | def map_fn(data): 34 | inf_cloud, gt_cloud, norm_params = data 35 | inf_cloud = normalized(inf_cloud, **norm_params) 36 | gt_cloud = normalized(gt_cloud, **norm_params) 37 | return eval_fn(inf_cloud, gt_cloud) 38 | 39 | dataset = zipped.map(map_fn) 40 | return dataset 41 | 42 | 43 | def get_lazy_evaluation_dataset(inf_cloud_ds, cat_id, n_samples, eval_fn): 44 | if not isinstance(cat_id, (list, tuple)): 45 | cat_id = [cat_id] 46 | return _get_lazy_evaluation_dataset_single( 47 | inf_cloud_ds, cat_id, n_samples, eval_fn) 48 | 49 | # if isinstance(cat_id, (list, tuple)): 50 | # return _get_lazy_evaluation_dataset_single( 51 | # inf_cloud_ds, cat_id, n_samples, eval_fn) 52 | 53 | # def f(cid): 54 | # return _get_lazy_evaluation_dataset_single( 55 | # inf_cloud_ds, cid, n_samples, eval_fn) 56 | # if isinstance(cat_id, (list, tuple)): 57 | # datasets = {k: f(k) for k in cat_id} 58 | # return BiKeyDataset(datasets) 59 | # else: 60 | # return f(cat_id) 61 | -------------------------------------------------------------------------------- /eval/retrofit.py: -------------------------------------------------------------------------------- 1 | """Code for retrofitting code designed for single views to multiple views.""" 2 | import numpy as np 3 | from template_ffd.model import get_builder 4 | 5 | 6 | def retrofit_eval_fn(original_fn): 7 | def f(model_id, *args, **kwargs): 8 | if 'view_index' in kwargs: 9 | view_index = 
kwargs['view_index'] 10 | if isinstance(view_index, int): 11 | return original_fn(model_id, *args, **kwargs) 12 | else: 13 | del kwargs['view_index'] 14 | else: 15 | view_index = None 16 | if view_index is None: 17 | view_index = get_builder(model_id).view_index 18 | if isinstance(view_index, int): 19 | return original_fn( 20 | model_id, *args, view_index=view_index, **kwargs) 21 | assert(isinstance(view_index, (list, tuple))) 22 | values = [original_fn(model_id, *args, view_index=vi, **kwargs) 23 | for vi in view_index] 24 | return np.mean(values) 25 | return f 26 | -------------------------------------------------------------------------------- /eval/templates.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from template_ffd.inference.predictions import get_predictions_dataset 3 | from template_ffd.model import get_builder 4 | 5 | 6 | def print_template_scores(model_id, by_weight=False): 7 | builder = get_builder(model_id) 8 | template_ids = builder.template_ids 9 | n = len(template_ids) 10 | counts = np.zeros((n,), dtype=np.int32) 11 | totals = np.zeros((n,), dtype=np.float32) 12 | dataset = get_predictions_dataset(model_id) 13 | 14 | with dataset: 15 | for example_id in dataset: 16 | probs = np.array(dataset[example_id]['probs']) 17 | counts[np.argmax(probs)] += 1 18 | totals += probs 19 | 20 | if by_weight: 21 | zipped = list(zip(template_ids, range(n), totals)) 22 | zipped.sort(key=lambda x: x[2], reverse=True) 23 | for rank, (k, i, p) in enumerate(zipped): 24 | print(rank, i, p, k) 25 | print([z[1] for z in zipped]) 26 | else: 27 | zipped = list(zip(template_ids, range(n), counts)) 28 | zipped.sort(key=lambda x: x[2], reverse=True) 29 | for rank, (k, i, p) in enumerate(zipped): 30 | print(rank, i, p, k) 31 | print([z[1] for z in zipped]) 32 | 33 | 34 | if __name__ == '__main__': 35 | import argparse 36 | 37 | parser = argparse.ArgumentParser() 38 | parser.add_argument( 39 | 'model_id', help='id of model defined in params') 40 | parser.add_argument('-w', '--by_weight', action='store_true') 41 | args = parser.parse_args() 42 | print_template_scores(args.model_id, args.by_weight) 43 | -------------------------------------------------------------------------------- /ffd/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/ffd/__init__.py -------------------------------------------------------------------------------- /ffd/bernstein.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy.special import comb 3 | from util import mesh3d 4 | 5 | 6 | def bernstein_poly(n, v, stu): 7 | coeff = comb(n, v) 8 | weights = coeff * ((1 - stu) ** (n - v)) * (stu ** v) 9 | return weights 10 | 11 | 12 | def trivariate_bernstein(stu, lattice): 13 | if len(lattice.shape) != 4 or lattice.shape[3] != 3: 14 | raise ValueError('lattice must have shape (L, M, N, 3)') 15 | l, m, n = (d - 1 for d in lattice.shape[:3]) 16 | lmn = np.array([l, m, n], dtype=np.int32) 17 | v = mesh3d( 18 | np.arange(l+1, dtype=np.int32), 19 | np.arange(m+1, dtype=np.int32), 20 | np.arange(n+1, dtype=np.int32), 21 | dtype=np.int32) 22 | stu = np.reshape(stu, (-1, 1, 1, 1, 3)) 23 | weights = bernstein_poly(n=lmn, v=v, stu=stu) 24 | weights = np.prod(weights, axis=-1, keepdims=True) 25 | return np.sum(weights * lattice, axis=(1, 2, 3)) 26 | 
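A quick sanity check of `trivariate_bernstein`: with an undeformed control lattice, the partition-of-unity and linear-precision properties of the Bernstein basis make the map the identity. A minimal sketch, not part of the repository:
```python
import numpy as np
from util import mesh3d
from bernstein import trivariate_bernstein

dims = (3, 3, 3)
# regular (undeformed) control lattice over the unit cube, shape (4, 4, 4, 3)
lattice = mesh3d(*(np.linspace(0, 1, d + 1) for d in dims), dtype=np.float32)
stu = np.random.uniform(size=(100, 3)).astype(np.float32)
deformed = trivariate_bernstein(stu, lattice)
print(np.allclose(deformed, stu, atol=1e-5))  # True: identity lattice -> identity map
```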
-------------------------------------------------------------------------------- /ffd/deform.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import util 3 | from bernstein import bernstein_poly, trivariate_bernstein 4 | 5 | 6 | def xyz_to_stu(xyz, origin, stu_axes): 7 | if stu_axes.shape == (3,): 8 | stu_axes = np.diag(stu_axes) 9 | # raise ValueError( 10 | # 'stu_axes should have shape (3,), got %s' % str(stu_axes.shape)) 11 | # s, t, u = np.diag(stu_axes) 12 | assert(stu_axes.shape == (3, 3)) 13 | s, t, u = stu_axes 14 | tu = np.cross(t, u) 15 | su = np.cross(s, u) 16 | st = np.cross(s, t) 17 | 18 | diff = xyz - origin 19 | 20 | # TODO: vectorize? np.dot(diff, [tu, su, st]) / ... 21 | stu = np.stack([ 22 | np.dot(diff, tu) / np.dot(s, tu), 23 | np.dot(diff, su) / np.dot(t, su), 24 | np.dot(diff, st) / np.dot(u, st) 25 | ], axis=-1) 26 | return stu 27 | 28 | 29 | def stu_to_xyz(stu_points, stu_origin, stu_axes): 30 | if stu_axes.shape != (3,): 31 | raise NotImplementedError() 32 | return stu_origin + stu_points*stu_axes 33 | 34 | 35 | def get_stu_control_points(dims): 36 | stu_lattice = util.mesh3d( 37 | *(np.linspace(0, 1, d+1) for d in dims), dtype=np.float32) 38 | stu_points = np.reshape(stu_lattice, (-1, 3)) 39 | return stu_points 40 | 41 | 42 | def get_control_points(dims, stu_origin, stu_axes): 43 | stu_points = get_stu_control_points(dims) 44 | xyz_points = stu_to_xyz(stu_points, stu_origin, stu_axes) 45 | return xyz_points 46 | 47 | 48 | def get_stu_deformation_matrix(stu, dims): 49 | v = util.mesh3d( 50 | *(np.arange(0, d+1, dtype=np.int32) for d in dims), 51 | dtype=np.int32) 52 | v = np.reshape(v, (-1, 3)) 53 | 54 | weights = bernstein_poly( 55 | n=np.array(dims, dtype=np.int32), 56 | v=v, 57 | stu=np.expand_dims(stu, axis=-2)) 58 | 59 | b = np.prod(weights, axis=-1) 60 | return b 61 | 62 | 63 | def get_deformation_matrix(xyz, dims, stu_origin, stu_axes): 64 | stu = xyz_to_stu(xyz, stu_origin, stu_axes) 65 | return get_stu_deformation_matrix(stu, dims) 66 | 67 | 68 | def get_ffd(xyz, dims, stu_origin=None, stu_axes=None): 69 | if stu_origin is None or stu_axes is None: 70 | if not (stu_origin is None and stu_axes is None): 71 | raise ValueError( 72 | 'Either both or neither of stu_origin/stu_axes must be None') 73 | stu_origin, stu_axes = get_stu_params(xyz) 74 | b = get_deformation_matrix(xyz, dims, stu_origin, stu_axes) 75 | p = get_control_points(dims, stu_origin, stu_axes) 76 | return b, p 77 | 78 | 79 | def deform_mesh(xyz, lattice): 80 | return trivariate_bernstein(xyz, lattice)  # points first, control lattice second 81 | 82 | 83 | def get_stu_params(xyz): 84 | minimum, maximum = util.extent(xyz, axis=0) 85 | stu_origin = minimum 86 | # stu_axes = np.diag(maximum - minimum) 87 | stu_axes = maximum - minimum 88 | return stu_origin, stu_axes 89 |
-------------------------------------------------------------------------------- /ffd/util.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | 4 | def mesh3d(x, y, z, dtype=np.float32): 5 | grid = np.empty(x.shape + y.shape + z.shape + (3,), dtype=dtype) 6 | grid[..., 0] = x[:, np.newaxis, np.newaxis] 7 | grid[..., 1] = y[np.newaxis, :, np.newaxis] 8 | grid[..., 2] = z[np.newaxis, np.newaxis, :] 9 | return grid 10 | 11 | 12 | def extent(x, *args, **kwargs): 13 | return np.min(x, *args, **kwargs), np.max(x, *args, **kwargs) 14 |
-------------------------------------------------------------------------------- /inference/.gitignore:
-------------------------------------------------------------------------------- 1 | _inferences 2 |
-------------------------------------------------------------------------------- /inference/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/inference/__init__.py
-------------------------------------------------------------------------------- /inference/clouds.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from util3d.mesh.sample import sample_faces 3 | from dids.file_io.hdf5 import Hdf5AutoSavingManager 4 | from path import get_inference_path 5 | from template_ffd.model import get_builder 6 | 7 | 8 | class _PostSampledCloudManager(Hdf5AutoSavingManager): 9 | def __init__(self, model_id, n_samples=1024, edge_length_threshold=0.1): 10 | self._model_id = model_id 11 | self._n_samples = n_samples 12 | self._edge_length_threshold = edge_length_threshold 13 | self._nested_depth = 3 14 | 15 | @property 16 | def path(self): 17 | return get_inference_path( 18 | 'cloud', 'postsampled', 19 | str(self._n_samples), str(self._edge_length_threshold), 20 | '%s.hdf5' % self._model_id) 21 | 22 | @property 23 | def saving_message(self): 24 | return ( 25 | 'Generating postsampled point cloud\n' 26 | 'model_id: %s\n' 27 | 'n_samples: %d\n' 28 | 'edge_length_threshold: %s' % ( 29 | self._model_id, self._n_samples, self._edge_length_threshold)) 30 | 31 | def get_lazy_dataset(self): 32 | from meshes import get_inferred_mesh_dataset 33 | 34 | def map_fn(mesh): 35 | vertices, faces = ( 36 | np.array(mesh[k]) for k in ('vertices', 'faces')) 37 | cloud = sample_faces(vertices, faces, self._n_samples) 38 | return cloud 39 | 40 | return get_inferred_mesh_dataset( 41 | self._model_id, self._edge_length_threshold).map(map_fn) 42 | 43 | 44 | class _PreSampledCloudManager(Hdf5AutoSavingManager): 45 | def __init__(self, model_id, n_samples=1024): 46 | self._model_id = model_id 47 | self._n_samples = n_samples 48 | self._nested_depth = 3 49 | 50 | @property 51 | def path(self): 52 | return get_inference_path( 53 | 'cloud', 'presampled', str(self._n_samples), 54 | '%s.hdf5' % self._model_id) 55 | 56 | @property 57 | def saving_message(self): 58 | return ( 59 | 'Generating presampled point cloud\n' 60 | 'model_id: %s\n' 61 | 'n_samples: %d' % (self._model_id, self._n_samples)) 62 | 63 | def get_lazy_dataset(self): 64 | from predictions import get_predictions_dataset 65 | builder = get_builder(self._model_id) 66 | cloud_fn = builder.get_prediction_to_cloud_fn(self._n_samples) 67 | 68 | def map_fn(predictions): 69 | return cloud_fn(**predictions)['cloud'] 70 | 71 | predictions_ds = get_predictions_dataset(self._model_id) 72 | 73 | return predictions_ds.map(map_fn) 74 | 75 | 76 | def get_cloud_manager(model_id, pre_sampled=False, **kwargs): 77 | if pre_sampled: 78 | return _PreSampledCloudManager(model_id, **kwargs) 79 | else: 80 | return _PostSampledCloudManager(model_id, **kwargs) 81 | 82 | 83 | def get_inferred_cloud_dataset(model_id, pre_sampled=False, **kwargs): 84 | return get_cloud_manager( 85 | model_id, pre_sampled, **kwargs).get_saved_dataset() 86 |
-------------------------------------------------------------------------------- /inference/meshes.py: -------------------------------------------------------------------------------- 1 | from dids.file_io.hdf5 import Hdf5AutoSavingManager 2 | from template_ffd.model import
get_builder 3 | from path import get_inference_path 4 | 5 | 6 | class InferredMeshManager(Hdf5AutoSavingManager): 7 | def __init__(self, model_id, edge_length_threshold=0.1): 8 | self._model_id = model_id 9 | self._edge_length_threshold = edge_length_threshold 10 | self._nested_depth = 3 11 | 12 | @property 13 | def path(self): 14 | elt = self._edge_length_threshold 15 | es = 'base' if elt is None else str(elt) 16 | return get_inference_path('meshes', es, '%s.hdf5' % self._model_id) 17 | 18 | @property 19 | def saving_message(self): 20 | return ( 21 | 'Saving mesh data\n' 22 | 'model_id: %s\n' 23 | 'edge_length_threshold: %s\n' % 24 | (self._model_id, self._edge_length_threshold)) 25 | 26 | def get_lazy_dataset(self): 27 | from predictions import get_predictions_dataset 28 | builder = get_builder(self._model_id) 29 | mesh_fn = builder.get_prediction_to_mesh_fn( 30 | self._edge_length_threshold) 31 | 32 | def map_fn(prediction): 33 | mesh = mesh_fn(**prediction) 34 | return {k: mesh[k] for k in ('vertices', 'faces', 'attrs')} 35 | 36 | mesh_ds = get_predictions_dataset(self._model_id).map(map_fn) 37 | return mesh_ds 38 | 39 | 40 | def get_inferred_mesh_dataset( 41 | model_id, edge_length_threshold=0.1, lazy=True): 42 | manager = InferredMeshManager(model_id, edge_length_threshold) 43 | if lazy: 44 | return manager.get_lazy_dataset() 45 | else: 46 | return manager.get_saved_dataset() 47 | -------------------------------------------------------------------------------- /inference/path.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | inference_dir = os.path.realpath(os.path.dirname(__file__)) 4 | 5 | 6 | def get_inference_subdir(*args): 7 | subdir = os.path.join(inference_dir, '_inferences', *args) 8 | if not os.path.isdir(subdir): 9 | os.makedirs(subdir) 10 | return subdir 11 | 12 | 13 | def get_inference_path(*args): 14 | return os.path.join(get_inference_subdir(*args[:-1]), args[-1]) 15 | -------------------------------------------------------------------------------- /inference/predictions.py: -------------------------------------------------------------------------------- 1 | # from dids.file_io.hdf5 import NestedHdf5Dataset 2 | from dids.file_io.hdf5 import NestedHdf5Dataset 3 | from shapenet.util import LengthedGenerator 4 | from path import get_inference_path 5 | from template_ffd.model import get_builder 6 | from template_ffd.data.ids import get_example_ids 7 | 8 | 9 | def get_predictions_data_path(model_id): 10 | return get_inference_path('predictions', '%s.hdf5' % model_id) 11 | 12 | 13 | def get_predictions_data(model_id, mode='infer'): 14 | builder = get_builder(model_id) 15 | cat_id = builder.cat_id 16 | example_ids = get_example_ids(cat_id, mode) 17 | n = len(example_ids) 18 | view_index = builder.view_index 19 | if isinstance(view_index, (list, tuple)): 20 | n *= len(view_index) 21 | 22 | estimator = builder.get_estimator() 23 | predictions = estimator.predict(builder.get_predict_inputs) 24 | return LengthedGenerator(predictions, n) 25 | 26 | 27 | def create_predictions_data(model_id, overwrite=False): 28 | def map_fn(prediction): 29 | cat_id, example_id, view_index, probs, dp = ( 30 | prediction[k] for k in ( 31 | 'cat_id', 'example_id', 'view_index', 'probs', 'dp')) 32 | return (cat_id, example_id, str(view_index)), dict(probs=probs, dp=dp) 33 | 34 | predictions = get_predictions_data(model_id) 35 | mapped = (map_fn(p) for p in predictions) 36 | gen = LengthedGenerator(mapped, len(predictions)) 37 | 38 | with 
_get_predictions_dataset(model_id, mode='a') as dataset: 39 | dataset.save_items(gen, overwrite=overwrite) 40 | 41 | 42 | def _get_predictions_dataset(model_id, mode): 43 | return NestedHdf5Dataset( 44 | depth=3, path=get_predictions_data_path(model_id), mode=mode) 45 | 46 | 47 | def get_predictions_dataset(model_id, mode='r'): 48 | import os 49 | path = get_predictions_data_path(model_id) 50 | if not os.path.isfile(path): 51 | print('Creating predictions data') 52 | create_predictions_data(model_id) 53 | return _get_predictions_dataset(model_id, mode) 54 | 55 | 56 | def get_selected_template_idx_dataset(model_id): 57 | import numpy as np 58 | 59 | def map_fn(pred): 60 | return np.argmax(np.array(pred['probs'])) 61 | 62 | return get_predictions_dataset(model_id).map(map_fn)  # public accessor supplies the read mode and creates data if missing 63 |
-------------------------------------------------------------------------------- /inference/voxels.py: -------------------------------------------------------------------------------- 1 | import os 2 | from path import get_inference_subdir 3 | import util3d.voxel.dataset as bvd 4 | import util3d.voxel.convert as bio 5 | from shapenet.core.voxels.config import VoxelConfig 6 | from meshes import get_inferred_mesh_dataset 7 | 8 | _default_config = VoxelConfig() 9 | 10 | 11 | def get_voxel_subdir(model_id, edge_length_threshold=0.1, voxel_config=None, 12 | filled=False): 13 | if voxel_config is None: 14 | voxel_config = _default_config 15 | es = 'base' if edge_length_threshold is None else \ 16 | str(edge_length_threshold) 17 | fs = 'filled' if filled else 'unfilled' 18 | args = ['voxels', es, voxel_config.voxel_id, model_id, fs] 19 | return get_inference_subdir(*args) 20 | 21 | 22 | def _get_base_voxel_dataset( 23 | model_id, edge_length_threshold=0.1, voxel_config=None, filled=False, 24 | auto_save=True): 25 | kwargs = dict( 26 | model_id=model_id, 27 | edge_length_threshold=edge_length_threshold, 28 | voxel_config=voxel_config, 29 | filled=filled 30 | ) 31 | subdir = get_voxel_subdir(**kwargs) 32 | if auto_save: 33 | create_voxel_data(overwrite=False, **kwargs) 34 | 35 | return bvd.BinvoxDataset(subdir, mode='r') 36 | 37 | 38 | def _flatten_dataset(dataset): 39 | 40 | def key_map_fn(args): 41 | folder = os.path.join(*args[:-1]) 42 | if not os.path.isdir(folder): 43 | os.makedirs(folder) 44 | return os.path.join(folder, args[-1]) 45 | 46 | def inverse_key_map_fn(subpath): 47 | return tuple(k for k in subpath.split('/') if len(k) > 0) 48 | 49 | return dataset.map_keys( 50 | key_map_fn, inverse_key_map_fn) 51 | 52 | 53 | def get_voxel_dataset( 54 | model_id, edge_length_threshold=0.1, voxel_config=None, filled=False, 55 | auto_save=True): 56 | base_dataset = _get_base_voxel_dataset( 57 | model_id, edge_length_threshold=edge_length_threshold, 58 | voxel_config=voxel_config, filled=filled, 59 | auto_save=auto_save) 60 | 61 | dataset = _flatten_dataset(base_dataset) 62 | return dataset 63 | 64 | 65 | def _create_unfilled_voxel_data( 66 | model_id, edge_length_threshold=0.1, voxel_config=None, 67 | overwrite=False): 68 | import numpy as np 69 | from progress.bar import IncrementalBar 70 | if voxel_config is None: 71 | voxel_config = _default_config 72 | mesh_dataset = get_inferred_mesh_dataset( 73 | model_id, edge_length_threshold) 74 | voxel_dataset = _get_base_voxel_dataset( 75 | model_id, edge_length_threshold, voxel_config, filled=False, 76 | auto_save=False) 77 | 78 | kwargs = dict( 79 | voxel_dim=voxel_config.voxel_dim, 80 | exact=voxel_config.exact, 81 | dc=voxel_config.dc, 82 | aw=voxel_config.aw) 83 | 84 | with
mesh_dataset:
85 |         print('Creating unfilled voxel data')
86 |         for k, v in kwargs.items():
87 |             print('%s = %s' % (k, v))
88 |         bar = IncrementalBar(max=len(mesh_dataset))
89 |         for k in mesh_dataset.keys():
90 |             binvox_path = voxel_dataset.path(os.path.join(*k))
91 |             if overwrite or not os.path.isfile(binvox_path):
92 |                 mesh = mesh_dataset[k]
93 |                 vertices, faces = (
94 |                     np.array(mesh[key]) for key in ('vertices', 'faces'))
95 |                 folder = os.path.dirname(binvox_path)
96 |                 if not os.path.isdir(folder):
97 |                     os.makedirs(folder)
98 |                 bio.mesh_to_binvox(
99 |                     vertices, faces, binvox_path, **kwargs)
100 |             bar.next()
101 |         bar.finish()
102 |
103 |
104 | def _create_filled_voxel_data(**kwargs):
105 |     from template_ffd.data.voxels import create_filled_data
106 |
107 |     overwrite = kwargs.pop('overwrite', False)
108 |     src = _get_base_voxel_dataset(filled=False, **kwargs)
109 |     dst = bvd.BinvoxDataset(
110 |         get_voxel_subdir(filled=True, **kwargs), mode='a')
111 |
112 |     src = _flatten_dataset(src)
113 |     dst = _flatten_dataset(dst)
114 |     with src:
115 |         with dst:
116 |             message = 'Creating filled voxels'
117 |             create_filled_data(
118 |                 src, dst, message=message, overwrite=overwrite)
119 |
120 |
121 | def create_voxel_data(
122 |         model_id, edge_length_threshold=0.1, voxel_config=None, filled=False,
123 |         overwrite=False):
124 |     kwargs = dict(
125 |         model_id=model_id,
126 |         edge_length_threshold=edge_length_threshold,
127 |         voxel_config=voxel_config,
128 |         overwrite=overwrite
129 |     )
130 |     if filled:
131 |         _create_filled_voxel_data(**kwargs)
132 |     else:
133 |         _create_unfilled_voxel_data(**kwargs)
134 |
-------------------------------------------------------------------------------- /metrics/__init__.py: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/metrics/__init__.py
-------------------------------------------------------------------------------- /metrics/base.py: --------------------------------------------------------------------------------
1 | class Metrics(object):
2 |     def sum(self, array, axis=None):
3 |         raise NotImplementedError('Abstract method')
4 |
5 |     def max(self, array, axis=None):
6 |         raise NotImplementedError('Abstract method')
7 |
8 |     def min(self, array, axis=None):
9 |         raise NotImplementedError('Abstract method')
10 |
11 |     def expand_dims(self, array, axis):
12 |         raise NotImplementedError('Abstract method')
13 |
14 |     def sqrt(self, x):
15 |         raise NotImplementedError('Abstract method')
16 |
17 |     def top_k(self, x, k, axis=-1):
18 |         raise NotImplementedError('Abstract method')
19 |
20 |     def _size_check(self, s1, s2):
21 |         for s1s, s2s in zip(s1.shape[:-2], s2.shape[:-2]):
22 |             if s1s != s2s and not (s1s == 1 or s2s == 1):
23 |                 raise ValueError(
24 |                     'Invalid shape for s1, s2: %s, %s'
25 |                     % (str(s1.shape), str(s2.shape)))
26 |         if s1.shape[-1] != s2.shape[-1]:
27 |             raise ValueError(
28 |                 'last dim of s1 and s2 must be same, but got %d, %d'
29 |                 % (s1.shape[-1], s2.shape[-1]))
30 |
31 |     def _dist2(self, s1, s2):
32 |         s1 = self.expand_dims(s1, axis=-2)
33 |         s2 = self.expand_dims(s2, axis=-3)
34 |         # diff = s1 - s2
35 |         # return self.sum(diff*diff, axis=-1)
36 |         return self.sum((s1 - s2)**2, axis=-1)
37 |
38 |     def _unidirectional_chamfer(self, dist2, reverse=False):
39 |         return self.sum(self.min(dist2, axis=-2 if reverse else -1), axis=-1)
40 |
41 |     def unidirectional_chamfer(self, s1, s2, reverse=False):
42 |         self._size_check(s1, s2)
43 |         dist2 = self._dist2(s1, s2)
44 |         return self._unidirectional_chamfer(dist2, reverse=reverse)
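    # Note on the convention used above: `_dist2` broadcasts s1 with shape
    # (..., n, d) against s2 with shape (..., m, d) to give squared pairwise
    # distances of shape (..., n, m); the methods below then reduce over one
    # or both point axes.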
45 |
46 |     def _bidirectional_chamfer(self, s1, s2):
47 |         dist2 = self._dist2(s1, s2)
48 |         return self._unidirectional_chamfer(dist2, reverse=False) + \
49 |             self._unidirectional_chamfer(dist2, reverse=True)
50 |
51 |     def bidirectional_chamfer(self, s1, s2):
52 |         self._size_check(s1, s2)
53 |         return self._bidirectional_chamfer(s1, s2)
54 |
55 |     def chamfer(self, s1, s2):
56 |         return self.bidirectional_chamfer(s1, s2)
57 |
58 |     def _unidirectional_n_chamfer(self, n, neg_dist2, reverse=False):
59 |         values, _ = self.top_k(neg_dist2, n, axis=-2 if reverse else -1)
60 |         return -self.sum(values, axis=(-2, -1))
61 |
62 |     def unidirectional_n_chamfer(self, n, s1, s2, reverse=False):
63 |         self._size_check(s1, s2)
64 |         neg_dist2 = -self._dist2(s1, s2)
65 |         return self._unidirectional_n_chamfer(n, neg_dist2, reverse=reverse)
66 |
67 |     def _bidirectional_n_chamfer(self, n, s1, s2):
68 |         neg_dist2 = -self._dist2(s1, s2)
69 |         return self._unidirectional_n_chamfer(n, neg_dist2, reverse=False) + \
70 |             self._unidirectional_n_chamfer(n, neg_dist2, reverse=True)
71 |
72 |     def bidirectional_n_chamfer(self, n, s1, s2):
73 |         self._size_check(s1, s2)
74 |         return self._bidirectional_n_chamfer(n, s1, s2)
75 |
76 |     def n_chamfer(self, n, s1, s2):
77 |         return self.bidirectional_n_chamfer(n, s1, s2)
78 |
79 |     def _unidirectional_hausdorff(self, dist2, reverse=False):
80 |         return self.max(self.min(dist2, axis=-2 if reverse else -1), axis=-1)
81 |
82 |     def unidirectional_hausdorff(self, s1, s2, reverse=False):
83 |         self._size_check(s1, s2)
84 |         return self._unidirectional_hausdorff(self._dist2(s1, s2), reverse=reverse)
85 |
86 |     def _bidirectional_hausdorff(self, s1, s2):
87 |         dist2 = self._dist2(s1, s2)
88 |         return max(
89 |             self._unidirectional_hausdorff(dist2, reverse=False),
90 |             self._unidirectional_hausdorff(dist2, reverse=True))
91 |
92 |     def bidirectional_hausdorff(self, s1, s2):
93 |         self._size_check(s1, s2)
94 |         return self._bidirectional_hausdorff(s1, s2)
95 |
96 |     def hausdorff(self, s1, s2):
97 |         return self.bidirectional_hausdorff(s1, s2)
98 |
99 |     def unidirectional_modified_chamfer(self, s1, s2, reverse=False):
100 |         self._size_check(s1, s2)
101 |         dist2 = self._dist2(s1, s2)
102 |         dist = self.sqrt(dist2)
103 |         return self._unidirectional_chamfer(dist, reverse=reverse)
104 |
105 |     def _bidirectional_modified_chamfer(self, s1, s2):
106 |         dist2 = self._dist2(s1, s2)
107 |         dist = self.sqrt(dist2)
108 |         return self._unidirectional_chamfer(dist, reverse=False) + \
109 |             self._unidirectional_chamfer(dist, reverse=True)
110 |
111 |     def bidirectional_modified_chamfer(self, s1, s2):
112 |         self._size_check(s1, s2)
113 |         return self._bidirectional_modified_chamfer(s1, s2)
114 |
115 |     def modified_chamfer(self, s1, s2):
116 |         return self.bidirectional_modified_chamfer(s1, s2)
117 |
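A minimal usage sketch of the `Metrics` interface above (assuming the parent
directory is on `PYTHONPATH` as in the README; `np_metrics` is the numpy
implementation defined in metrics/np_impl.py below):

    import numpy as np
    from template_ffd.metrics.np_impl import np_metrics

    s1 = np.random.uniform(size=(1024, 3)).astype(np.float32)
    s2 = np.random.uniform(size=(1024, 3)).astype(np.float32)
    # sum of squared nearest-neighbour distances in both directions
    loss = np_metrics.chamfer(s1, s2)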
-------------------------------------------------------------------------------- /metrics/np_impl.py: --------------------------------------------------------------------------------
1 | from __future__ import division
2 |
3 | import numpy as np
4 | from base import Metrics
5 |
6 |
7 | class _NumpyMetrics(Metrics):
8 |     def sum(self, array, axis=None):
9 |         return np.sum(array, axis=axis)
10 |
11 |     def max(self, array, axis=None):
12 |         return np.max(array, axis=axis)
13 |
14 |     def min(self, array, axis=None):
15 |         return np.min(array, axis=axis)
16 |
17 |     def expand_dims(self, array, axis):
18 |         return np.expand_dims(array, axis=axis)
19 |
20 |     def sqrt(self, x):
21 |         return np.sqrt(x)
22 |
23 |     def top_k(self, x, k, axis=-1):
24 |         raise NotImplementedError()
25 |
26 |     def emd(self, p0, p1):
27 |         # from emd import emd
28 |         # return emd(p0, p1)
29 |
30 |         from pyemd import emd
31 |         n = p0.shape[0]
32 |         dist = np.zeros((n*2, n*2))
33 |         d0 = np.sqrt(self._dist2(p0, p1))
34 |         dist[:n, n:] = d0
35 |         dist[n:, :n] = d0.T
36 |
37 |         assert(np.allclose(dist, dist.T))
38 |         h0 = np.zeros((2*n,), dtype=np.float64)
39 |         h0[:n] = 1.0
40 |         h1 = np.zeros((2*n,), dtype=np.float64)
41 |         h1[n:] = 1.0
42 |         return emd(h0, h1, dist) / n
43 |
44 |
45 | np_metrics = _NumpyMetrics()
46 |
-------------------------------------------------------------------------------- /metrics/tf_impl.py: --------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from base import Metrics
3 |
4 |
5 | class _TensorflowMetrics(Metrics):
6 |     def sum(self, array, axis=None):
7 |         return tf.reduce_sum(array, axis=axis)
8 |
9 |     def max(self, array, axis=None):
10 |         return tf.reduce_max(array, axis=axis)
11 |
12 |     def min(self, array, axis=None):
13 |         return tf.reduce_min(array, axis=axis)
14 |
15 |     def expand_dims(self, array, axis):
16 |         return tf.expand_dims(array, axis=axis)
17 |
18 |     def sqrt(self, x):
19 |         return tf.sqrt(x)
20 |
21 |     def top_k(self, x, k, axis=-1):
22 |         n = len(x.shape)
23 |         if axis not in [-1, n-1]:
24 |             if axis < 0:
25 |                 axis += n
26 |             x = tf.transpose(x, range(axis) + range(axis+1, n) + [axis])
27 |         return tf.nn.top_k(x, k)
28 |
29 |     # def _size_check(self, s1, s2):
30 |     #     for s1s, s2s in zip(s1.shape[:-2], s2.shape[:-2]):
31 |     #         if s1s != s2s and s1s is not None and s2s is not None:
32 |     #             raise ValueError('s1 and s2 must share same shape[:-2]')
33 |     #     if s1.shape[-1] != s2.shape[-1]:
34 |     #         raise ValueError(
35 |     #             'last dim of s1 and s2 must be same, but got %d, %d'
36 |     #             % (s1.shape[-1], s2.shape[-1]))
37 |
38 |     def _unidirectional_chamfer(self, dist2, reverse=False):
39 |         with tf.name_scope('chamfer_unidirectional'):
40 |             return super(_TensorflowMetrics, self)._unidirectional_chamfer(
41 |                 dist2, reverse=reverse)
42 |
43 |     def _bidirectional_chamfer(self, s1, s2):
44 |         from tf_nearest_neighbour import nn_distance
45 |         with tf.name_scope('chamfer'):
46 |             shape1 = s1.shape.as_list()
47 |             shape2 = s2.shape.as_list()
48 |
49 |             s1 = tf.reshape(s1, [-1] + shape1[-2:])
50 |             s2 = tf.reshape(s2, [-1] + shape2[-2:])
51 |             dist1, _, dist2, __ = nn_distance(s1, s2)
52 |             loss1 = tf.reduce_sum(dist1, axis=-1)
53 |             loss2 = tf.reduce_sum(dist2, axis=-1)
54 |             if len(shape1) > 3:
55 |                 loss1 = tf.reshape(loss1, shape1[:-2])
56 |             if len(shape2) > 3:
57 |                 loss2 = tf.reshape(loss2, shape2[:-2])
58 |             return loss1 + loss2
59 |
60 |     def _unidirectional_hausdorff(self, dist2, reverse=False):
61 |         with tf.name_scope('hausdorff_unidirectional'):
62 |             return super(_TensorflowMetrics, self)._unidirectional_hausdorff(
63 |                 dist2, reverse=reverse)
64 |
65 |     def _bidirectional_hausdorff(self, s1, s2):
66 |         with tf.name_scope('hausdorff'):
67 |             return super(_TensorflowMetrics, self)._bidirectional_hausdorff(
68 |                 s1, s2)
69 |
70 |     def unidirectional_modified_chamfer(self, s1, s2, reverse=False):
71 |         with tf.name_scope('modified_chamfer_unidirectional'):
72 |             return super(
73 |                 _TensorflowMetrics, self).unidirectional_modified_chamfer(
74 |                     s1, s2, reverse=reverse)
75 |
76 |     def _bidirectional_modified_chamfer(self, s1, s2):
77 |         with tf.name_scope('modified_chamfer'):
78 |             return super(
79 |                 _TensorflowMetrics, self)._bidirectional_modified_chamfer(
80 |                     s1,
s2) 81 | 82 | 83 | tf_metrics = _TensorflowMetrics() 84 | -------------------------------------------------------------------------------- /model/.gitignore: -------------------------------------------------------------------------------- 1 | _model/* 2 | params 3 | -------------------------------------------------------------------------------- /model/__init__.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | params_dir = os.path.join(os.path.dirname(__file__), 'params') 4 | if not os.path.isdir(params_dir): 5 | os.makedirs(params_dir) 6 | 7 | 8 | def get_params_path(model_id): 9 | return os.path.join(params_dir, '%s.json' % model_id) 10 | 11 | 12 | def load_params(model_id): 13 | import json 14 | path = get_params_path(model_id) 15 | if not os.path.isfile(path): 16 | raise ValueError('No parameter file found at %s for model %s' % 17 | (path, model_id)) 18 | with open(path, 'r') as fp: 19 | params = json.load(fp) 20 | return params 21 | 22 | 23 | def get_builder(model_id): 24 | params = load_params(model_id) 25 | family = params.get('family', 'template_ffd') 26 | if family == 'template_ffd': 27 | from template_ffd_builder import TemplateFfdBuilder 28 | return TemplateFfdBuilder(model_id, params) 29 | elif family == 'classifier': 30 | from classifier_builder import ClassifierBuilder 31 | return ClassifierBuilder(model_id, params) 32 | -------------------------------------------------------------------------------- /model/builder.py: -------------------------------------------------------------------------------- 1 | from __future__ import division 2 | import os 3 | import numpy as np 4 | import tensorflow as tf 5 | 6 | 7 | estimator_dir = os.path.join(os.path.dirname(__file__), '_model') 8 | 9 | 10 | def _tuple_generator(nested_vals): 11 | iters = tuple(iter(nested_generator(v)) for v in nested_vals) 12 | try: 13 | while True: 14 | yield tuple(next(i) for i in iters) 15 | except StopIteration: 16 | pass 17 | 18 | 19 | def _list_generator(nested_vals): 20 | iters = tuple(iter(nested_generator(v)) for v in nested_vals) 21 | try: 22 | while True: 23 | yield [next(i) for i in iters] 24 | except StopIteration: 25 | pass 26 | 27 | 28 | def _dict_generator(nested_vals): 29 | iters = {k: iter(nested_generator(v)) for k, v in nested_vals.items()} 30 | try: 31 | while True: 32 | yield {k: next(i) for k, i in iters.items()} 33 | except StopIteration: 34 | pass 35 | 36 | 37 | def nested_generator(nested_vals): 38 | if isinstance(nested_vals, np.ndarray): 39 | return nested_vals 40 | elif isinstance(nested_vals, (list, tuple)): 41 | if all(isinstance(v, str) for v in nested_vals): 42 | return nested_vals 43 | elif isinstance(nested_vals, tuple): 44 | return _tuple_generator(nested_vals) 45 | else: 46 | return _list_generator(nested_vals) 47 | elif isinstance(nested_vals, dict): 48 | return _dict_generator(nested_vals) 49 | else: 50 | raise TypeError( 51 | 'Unrecognized type for nested_generator: %s' 52 | % str(type(nested_vals))) 53 | 54 | 55 | def initialize_uninitialized_variables(sess): 56 | global_vars = tf.global_variables() 57 | is_init = sess.run( 58 | [tf.is_variable_initialized(var) for var in global_vars]) 59 | init_vars = [v for (v, i) in zip(global_vars, is_init) if not i] 60 | sess.run(tf.variables_initializer(init_vars)) 61 | 62 | 63 | class ModelBuilder(object): 64 | """ 65 | Abstract base class for building models. 
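
    A typical usage sketch ('example' being the sample model id from the
    README, looked up via `model.get_builder`):

        from template_ffd.model import get_builder
        builder = get_builder('example')
        builder.train(max_steps=100000)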
66 |
67 |     Basically an umbrella class containing required functions to build data
68 |     pipelines and `tf.estimator.Estimator`s.
69 |
70 |     Concrete implementations must implement:
71 |         * estimator construction:
72 |             * get_inference
73 |             * get_inference_loss
74 |             * get_train_op
75 |         * data pipelines:
76 |             * get_inputs
77 |
78 |     Implementations are encouraged to implement:
79 |         * get_predictions
80 |         * get_eval_metrics
81 |         * vis_input_data
82 |         * vis_prediction_data
83 |     """
84 |
85 |     def __init__(self, model_id, params):
86 |         self._model_id = model_id
87 |         self._params = params
88 |         self._initializer_run = False
89 |
90 |     @property
91 |     def initializer_run(self):
92 |         """Flag indicating an initial run prior to main training."""
93 |         return self._initializer_run
94 |
95 |     def initialize_variables(self):
96 |         model_dir = self.model_dir
97 |         if not os.path.isdir(model_dir):
98 |             os.makedirs(model_dir)
99 |         elif len(os.listdir(model_dir)) > 0:
100 |             print('Initialization already complete. Skipping.')
101 |             return
102 |         self._initializer_run = True
103 |         try:
104 |             with tf.Graph().as_default():
105 |                 with tf.Session() as sess:
106 |                     features, labels = self.get_train_inputs()
107 |                     self.get_estimator_spec(features, labels, 'train')
108 |                     initialize_uninitialized_variables(sess)
109 |                     saver = tf.train.Saver()
110 |                     save_path = os.path.join(self.model_dir, 'model')
111 |                     saver.save(sess, save_path, global_step=0)
112 |
113 |         except Exception:
114 |             self._initializer_run = False
115 |             raise
116 |         self._initializer_run = False
117 |
118 |     @property
119 |     def model_id(self):
120 |         return self._model_id
121 |
122 |     @property
123 |     def params(self):
124 |         return self._params
125 |
126 |     @property
127 |     def model_dir(self):
128 |         return os.path.join(estimator_dir, self.model_id)
129 |
130 |     @property
131 |     def batch_size(self):
132 |         return self.params['batch_size']
133 |
134 |     @property
135 |     def default_max_steps(self):
136 |         """Default maximum number of training steps."""
137 |         return self.params.get('default_max_steps', 100000)
138 |
139 |     def get_inference(self, features, mode):
140 |         """Get inferred value of the model."""
141 |         raise NotImplementedError('Abstract method')
142 |
143 |     def get_inference_loss(self, inference, labels):
144 |         """Get the loss associated with inferences."""
145 |         raise NotImplementedError('Abstract method')
146 |
147 |     def get_train_op(self, loss, step):
148 |         """
149 |         Get the train operation.
150 |
151 |         This operation is called within a `tf.control_dependencies(update_ops)`
152 |         block, so implementations do not have to worry about update ops that
153 |         are defined in the calculation of the loss, e.g. batch_normalization
154 |         update ops.
155 |         """
156 |         raise NotImplementedError('Abstract method')
157 |
158 |     def vis_example_data(self, feature_data, label_data):
159 |         """
160 |         Function for visualizing a batch of data for training or evaluation.
161 |
162 |         All inputs are numpy arrays, or nested dicts/lists of numpy arrays.
163 |
164 |         Not necessary for training/evaluation/inferring, but handy for
165 |         debugging.
166 |         """
167 |         raise NotImplementedError()
168 |
169 |     def vis_prediction_data(self, prediction_data, feature_data, label_data):
170 |         """
171 |         Function for visualizing a batch of data for training or evaluation.
172 |
173 |         All inputs are numpy arrays, or nested dicts/lists of numpy arrays.
174 |
175 |         `label_data` may be `None`.
176 |
177 |         Not necessary for training/evaluation/inferring, but handy for
178 |         debugging.
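
        For a concrete implementation, see
        `ClassifierBuilder.vis_prediction_data` in model/classifier_builder.py
        below.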
179 |         """
180 |         raise NotImplementedError()
181 |
182 |     def get_predictions(self, inferences):
183 |         """Get predictions. Defaults to the identity, returning inferences."""
184 |         return inferences
185 |
186 |     def get_eval_metric_ops(self, predictions, labels):
187 |         """Get evaluation metrics. Defaults to empty dictionary."""
188 |         return dict()
189 |
190 |     def get_total_loss(self, inference_loss):
191 |         """
192 |         Get total loss, combining inference loss and regularization losses.
193 |
194 |         If no regularization losses, just returns the inference loss.
195 |         """
196 |         reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
197 |         if len(reg_losses) > 0:
198 |             tf.summary.scalar(
199 |                 'inference_loss', inference_loss, family='sublosses')
200 |             reg_loss = tf.add_n(reg_losses)
201 |             tf.summary.scalar('reg_loss', reg_loss, family='sublosses')
202 |             loss = inference_loss + reg_loss
203 |         else:
204 |             loss = inference_loss
205 |         return loss
206 |
207 |     def get_estimator_spec(self, features, labels, mode, config=None):
208 |         """See `tf.estimator.EstimatorSpec`."""
209 |         inference = self.get_inference(features, mode)
210 |         predictions = self.get_predictions(inference)
211 |         spec_kwargs = dict(mode=mode, predictions=predictions)
212 |
213 |         if mode == tf.estimator.ModeKeys.PREDICT:
214 |             return tf.estimator.EstimatorSpec(**spec_kwargs)
215 |
216 |         inference_loss = self.get_inference_loss(inference, labels)
217 |         loss = self.get_total_loss(inference_loss)
218 |         spec_kwargs['loss'] = loss
219 |
220 |         if mode == tf.estimator.ModeKeys.EVAL:
221 |             spec_kwargs['eval_metric_ops'] = self.get_eval_metric_ops(
222 |                 predictions, labels)
223 |             return tf.estimator.EstimatorSpec(**spec_kwargs)
224 |
225 |         step = tf.train.get_or_create_global_step()
226 |         update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
227 |         with tf.control_dependencies(update_ops):
228 |             train_op = self.get_train_op(loss=loss, step=step)
229 |         spec_kwargs['train_op'] = train_op
230 |
231 |         if mode == tf.estimator.ModeKeys.TRAIN:
232 |             return tf.estimator.EstimatorSpec(**spec_kwargs)
233 |
234 |         raise ValueError('Unrecognized mode %s' % mode)
235 |
236 |     def get_train_inputs(self):
237 |         """
238 |         Get all features and labels for training.
239 |
240 |         Returns (features, labels), where each of (features, labels) can be
241 |         a tensor, or possibly nested list/tuple/dict.
242 |         """
243 |         return self.get_inputs(mode=tf.estimator.ModeKeys.TRAIN)
244 |
245 |     def get_eval_inputs(self):
246 |         """
247 |         Get all features and labels for evaluation.
248 |
249 |         Returns (features, labels), where each of (features, labels) can be
250 |         a tensor, or possibly nested list/tuple/dict.
251 |         """
252 |         return self.get_inputs(mode=tf.estimator.ModeKeys.EVAL)
253 |
254 |     def get_predict_inputs(self):
255 |         """
256 |         Abstract method that returns all features required by the model.
257 |
258 |         Returned value can be a single tensor, or possibly nested
259 |         list/tuple/dict.
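
        Used as the estimator `input_fn` for prediction, e.g. (as in
        inference/predictions.py):

            estimator.predict(builder.get_predict_inputs)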
260 |         """
261 |         return self.get_inputs(mode=tf.estimator.ModeKeys.PREDICT)
262 |
263 |     def get_inputs(self, mode, repeat=None):
264 |         """Get (features, labels) for use in training/evaluation/prediction."""
265 |         raise NotImplementedError()
266 |
267 |     def get_estimator(self, config=None):
268 |         """Get the `tf.estimator.Estimator` defined by this builder."""
269 |         return tf.estimator.Estimator(
270 |             self.get_estimator_spec, self.model_dir, config=config)
271 |
272 |     def train(self, config=None, **train_kwargs):
273 |         """Wrapper around `tf.estimator.Estimator.train`."""
274 |         estimator = self.get_estimator(config=config)
275 |         estimator.train(self.get_train_inputs, **train_kwargs)
276 |
277 |     def predict(self, config=None, **predict_kwargs):
278 |         """Wrapper around `tf.estimator.Estimator.predict`."""
279 |         estimator = self.get_estimator(config=config)
280 |         return estimator.predict(self.get_predict_inputs, **predict_kwargs)
281 |
282 |     def eval(self, config=None, **eval_kwargs):
283 |         """Wrapper around `tf.estimator.Estimator.evaluate`."""
284 |         estimator = self.get_estimator(config=config)
285 |         return estimator.evaluate(self.get_eval_inputs, **eval_kwargs)
286 |
287 |     def vis_inputs(self, mode=tf.estimator.ModeKeys.TRAIN):
288 |         """
289 |         Visualize inputs defined by this model.
290 |
291 |         Depends on `vis_example_data` implementation.
292 |         """
293 |         graph = tf.Graph()
294 |         with graph.as_default():
295 |             if mode == tf.estimator.ModeKeys.PREDICT:
296 |                 features, labels = self.get_predict_inputs()
297 |             elif mode == tf.estimator.ModeKeys.TRAIN:
298 |                 features, labels = self.get_train_inputs()
299 |
300 |             with tf.train.MonitoredSession() as sess:
301 |                 while not sess.should_stop():
302 |                     data = sess.run([features, labels])
303 |                     for record in nested_generator(data):
304 |                         self.vis_example_data(*record)
305 |
306 |     def vis_predictions(self, mode=tf.estimator.ModeKeys.PREDICT):
307 |         """
308 |         Visualize inputs and predictions defined by this model.
309 |
310 |         Depends on `vis_prediction_data` implementation.
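
        Usage sketch (assumes a trained checkpoint exists; 'example' is the
        sample model id from the README):

            from template_ffd.model import get_builder
            get_builder('example').vis_predictions()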
311 | """ 312 | graph = tf.Graph() 313 | with graph.as_default(): 314 | if mode == tf.estimator.ModeKeys.PREDICT: 315 | features, labels = self.get_predict_inputs() 316 | elif mode == tf.estimator.ModeKeys.TRAIN: 317 | features, labels = self.get_train_inputs() 318 | 319 | predictions = self.get_estimator_spec( 320 | features, labels, tf.estimator.ModeKeys.PREDICT).predictions 321 | 322 | data_tensors = [predictions, features] 323 | if labels is not None: 324 | data_tensors.append(labels) 325 | saver = tf.train.Saver() 326 | 327 | with tf.train.MonitoredSession() as sess: 328 | saver.restore( 329 | sess, tf.train.latest_checkpoint(self.model_dir)) 330 | while not sess.should_stop(): 331 | data = sess.run(data_tensors) 332 | for record in nested_generator(data): 333 | self.vis_prediction_data(*record) 334 | -------------------------------------------------------------------------------- /model/classifier_builder.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | from __future__ import division 3 | from __future__ import print_function 4 | 5 | import numpy as np 6 | import tensorflow as tf 7 | from shapenet.core import cat_desc_to_id, cat_id_to_desc 8 | from .builder import ModelBuilder 9 | from .template_ffd_builder import get_mobilenet_features 10 | from .template_ffd_builder import batch_norm_then 11 | 12 | _cat_descs5 = ( 13 | 'plane', 14 | 'bench', 15 | 'car', 16 | 'chair', 17 | 'sofa' 18 | ) 19 | 20 | _cat_descs8 = ( 21 | 'cabinet', 22 | 'monitor', 23 | 'lamp', 24 | 'speaker', 25 | 'pistol', 26 | 'table', 27 | 'cellphone', 28 | 'watercraft', 29 | ) 30 | 31 | _cat_descs13 = _cat_descs5 + _cat_descs8 32 | _cat_ids13 = tuple(cat_desc_to_id(c) for c in _cat_descs13) 33 | 34 | 35 | def get_tf_dataset( 36 | render_config, view_index, cat_ids, example_ids, num_parallel_calls=8, 37 | shuffle=False, repeat=False, batch_size=None): 38 | from .data import get_image_dataset 39 | dids_ds = get_image_dataset( 40 | cat_ids, example_ids, view_index, render_config) 41 | dids_ds.open() 42 | 43 | cat_indices, example_ids, view_indices = zip(*dids_ds.keys()) 44 | 45 | n_examples = len(view_indices) 46 | cat_indices = tf.convert_to_tensor(cat_indices, tf.int32) 47 | example_ids = tf.convert_to_tensor(example_ids, tf.string) 48 | view_indices = tf.convert_to_tensor(view_indices, tf.int32) 49 | 50 | dataset = tf.data.Dataset.from_tensor_slices( 51 | (cat_indices, example_ids, view_indices)) 52 | 53 | if repeat: 54 | dataset = dataset.repeat() 55 | if shuffle: 56 | dataset = dataset.shuffle(n_examples) 57 | 58 | def map_fn_np(cat_index, example_id, view_index): 59 | return dids_ds[cat_index, example_id, view_index] 60 | 61 | def map_fn_tf(cat_index, example_id, view_index): 62 | image = tf.py_func( 63 | map_fn_np, (cat_index, example_id, view_index), tf.uint8) 64 | image.set_shape(render_config.shape + (3,)) 65 | labels = cat_index 66 | image = tf.image.per_image_standardization(image) 67 | features = dict( 68 | image=image, 69 | example_id=example_id, 70 | view_index=view_index, 71 | cat_index=cat_index 72 | ) 73 | return features, labels 74 | 75 | dataset = dataset.map(map_fn_tf, num_parallel_calls=num_parallel_calls) 76 | if batch_size is not None: 77 | dataset = dataset.batch(batch_size) 78 | 79 | dataset = dataset.prefetch(2) 80 | return dataset 81 | 82 | 83 | class ClassifierBuilder(ModelBuilder): 84 | @property 85 | def n_classes(self): 86 | return len(self.cat_ids) 87 | 88 | def get_inference(self, features, mode): 89 | 
alpha = self.params.get('alpha', 0.25)
90 |         load_weights = self._initializer_run
91 |         image = features['image']
92 |         # mode = tf.estimator.ModeKeys.TRAIN
93 |         mobilenet_features = get_mobilenet_features(
94 |             image, mode, load_weights, alpha)
95 |         final_filters = self.params.get('final_conv_filters')
96 |         if final_filters is None:
97 |             final_features = tf.reduce_mean(mobilenet_features, axis=(1, 2))
98 |         else:
99 |             activation = batch_norm_then(
100 |                 tf.nn.relu6, training=mode == tf.estimator.ModeKeys.TRAIN)
101 |             final_features = tf.layers.conv2d(
102 |                 mobilenet_features, final_filters, 1, 1, activation=activation)
103 |             final_features = tf.layers.flatten(final_features)
104 |
105 |         logits = tf.layers.dense(final_features, self.n_classes)
106 |         return dict(
107 |             logits=logits,
108 |             cat_index=features['cat_index'],
109 |             example_id=features['example_id'],
110 |             view_index=features['view_index'])
111 |
112 |     def get_inference_loss(self, inference, labels):
113 |         """Get the loss associated with inferences."""
114 |         logits = inference['logits']
115 |         return tf.losses.sparse_softmax_cross_entropy(labels, logits)
116 |
117 |     def get_train_op(self, loss, step):
118 |         optimizer = tf.train.AdamOptimizer(
119 |             self.params.get('learning_rate', 1e-3))
120 |         return optimizer.minimize(loss, global_step=step)
121 |
122 |     @property
123 |     def cat_descs(self):
124 |         if not hasattr(self, '_cat_descs'):
125 |             self._cat_descs = [cat_id_to_desc(c) for c in self.cat_ids]
126 |         return self._cat_descs
127 |
128 |     @property
129 |     def cat_ids(self):
130 |         return self.params.get('cat_ids', _cat_ids13)
131 |
132 |     def vis_example_data(self, feature_data, label_data):
133 |         import matplotlib.pyplot as plt
134 |         image = feature_data['image']
135 |         image -= np.min(image)
136 |         image /= np.max(image)
137 |         plt.imshow(image)
138 |         plt.title(self.cat_descs[label_data])
139 |         plt.show()
140 |
141 |     def vis_prediction_data(self, prediction_data, feature_data, label_data):
142 |         import matplotlib.pyplot as plt
143 |         image = feature_data['image']
144 |         image -= np.min(image)
145 |         image /= np.max(image)
146 |         probs = prediction_data['probs']
147 |         pred = prediction_data['predictions']
148 |         plt.imshow(image)
149 |         cat_descs = self.cat_descs
150 |         for cat_desc, prob in zip(cat_descs, probs):
151 |             print('%.3f: %s' % (prob, cat_desc))
152 |         plt.title('%s, inferred %s'
153 |                   % (cat_descs[label_data], cat_descs[pred]))
154 |         plt.show()
155 |
156 |     def get_predictions(self, inferences):
157 |         preds = inferences.copy()
158 |         logits = inferences['logits']
159 |         preds['predictions'] = tf.argmax(logits, axis=-1)
160 |         preds['probs'] = tf.nn.softmax(logits)
161 |         return preds
162 |
163 |     def get_eval_metric_ops(self, predictions, labels):
164 |         accuracy = tf.metrics.accuracy(
165 |             predictions=predictions['predictions'], labels=labels)
166 |         return dict(accuracy=accuracy)
167 |
168 |     @property
169 |     def batch_size(self):
170 |         return 64
171 |
172 |     def get_inputs(self, mode, repeat=None):
173 |         from shapenet.core.blender_renderings.config import RenderConfig
174 |         from ..data.ids import get_example_ids
175 |         render_config = RenderConfig()
176 |         view_index = self.params.get(
177 |             'view_index', range(render_config.n_images))
178 |         cat_ids = self.cat_ids
179 |         example_ids = tuple(
180 |             get_example_ids(cat_id, mode) for cat_id in cat_ids)
181 |         if repeat is None:
182 |             repeat = mode == tf.estimator.ModeKeys.TRAIN
183 |         dataset = get_tf_dataset(
184 |             render_config, view_index, cat_ids, example_ids,
185 |             batch_size=self.batch_size,
shuffle=True, 186 | repeat=repeat) 187 | return dataset.make_one_shot_iterator().get_next() 188 | -------------------------------------------------------------------------------- /model/data.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | 4 | def get_image_dataset(cat_ids, example_ids, view_indices, render_config=None): 5 | from shapenet.image import with_background 6 | from dids.core import BiKeyDataset 7 | if render_config is None: 8 | from shapenet.core.blender_renderings.config import RenderConfig 9 | render_config = RenderConfig() 10 | if isinstance(cat_ids, str): 11 | cat_ids = [cat_ids] 12 | example_ids = [example_ids] 13 | if isinstance(view_indices, int): 14 | view_indices = [view_indices] 15 | datasets = { 16 | c: render_config.get_multi_view_dataset( 17 | c, view_indices=view_indices, example_ids=eid) 18 | for c, eid in zip(cat_ids, example_ids)} 19 | dataset = BiKeyDataset(datasets).map( 20 | lambda image: with_background(image, 255)) 21 | dataset = dataset.map_keys( 22 | lambda key: (key[0], (key[1], key[2])), 23 | lambda key: (key[0],) + key[1]) 24 | return dataset 25 | 26 | 27 | def get_cloud_dataset(cat_ids, example_ids, n_samples=16384, n_resamples=1024): 28 | import os 29 | from shapenet.core.point_clouds import PointCloudAutoSavingManager 30 | from util3d.point_cloud import sample_points 31 | from dids.core import BiKeyDataset 32 | if isinstance(cat_ids, str): 33 | cat_ids = [cat_ids] 34 | example_ids = [example_ids] 35 | datasets = {} 36 | for cat_id, e_ids in zip(cat_ids, example_ids): 37 | manager = PointCloudAutoSavingManager(cat_id, n_samples) 38 | if not os.path.isfile(manager.path): 39 | manager.save_all() 40 | datasets[cat_id] = manager.get_saving_dataset( 41 | mode='r').subset(e_ids) 42 | return BiKeyDataset(datasets).map( 43 | lambda x: sample_points(np.array(x, dtype=np.float32), n_resamples)) 44 | 45 | 46 | if __name__ == '__main__': 47 | from shapenet.core import cat_desc_to_id 48 | from template_ffd.data.ids import get_example_ids 49 | import random 50 | cat_ids = [cat_desc_to_id(i) for i in ('plane', 'car')] 51 | view_indices = [1, 5, 6] 52 | mode = 'train' 53 | example_ids = [get_example_ids(cat_id, mode) for cat_id in cat_ids] 54 | image_dataset = get_image_dataset(cat_ids, example_ids, view_indices) 55 | cloud_dataset = get_cloud_dataset(cat_ids, example_ids) 56 | 57 | image_dataset.open() 58 | cloud_dataset.open() 59 | 60 | keys = list((tuple(k) for k in image_dataset.keys())) 61 | random.shuffle(keys) 62 | 63 | def vis(image, cloud): 64 | import matplotlib.pyplot as plt 65 | from util3d.mayavi_vis import vis_point_cloud, mlab 66 | plt.imshow(image) 67 | plt.show(block=False) 68 | vis_point_cloud( 69 | cloud, axis_order='xzy', color=(0, 0, 1), scale_factor=0.02) 70 | mlab.show() 71 | plt.close() 72 | 73 | cat_ids, example_ids, view_indices = zip(*keys) 74 | for (cat_id, example_id, view_index) in zip( 75 | cat_ids, example_ids, view_indices): 76 | image = image_dataset[cat_id, example_id, view_index] 77 | cloud = cloud_dataset[cat_id, example_id] 78 | vis(image, cloud) 79 | -------------------------------------------------------------------------------- /model/mobilenet/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | Hacked mobilenet version implementation was changed at tensorflow v1.8 3 | 4 | Both other files in this directory are minor changes of 5 | tf.keras.applications.mobilenet 6 | with error checks on input sizes 
removed.
7 |
8 | Tensorflow versions prior to 1.8 use the old version.
9 |
10 | Note: keras doesn't play well with native tensorflow. If re-implementing,
11 | users are strongly encouraged to use `models.research.slim.nets.mobilenet`
12 | from [here](https://github.com/tensorflow/models).
13 | """
14 |
15 | try:
16 |     from mobilenet_1p8 import MobileNet
17 | except ImportError:
18 |     from mobilenet_old import MobileNet
19 |
20 | __all__ = ['MobileNet']
21 |
-------------------------------------------------------------------------------- /model/mobilenet/mobilenet_1p8.py: --------------------------------------------------------------------------------
1 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | #     http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 | # ==============================================================================
15 | # pylint: disable=invalid-name
16 | # pylint: disable=unused-import
17 | """
18 | Almost exactly the same as tf.keras.applications.mobilenet, tf version 1.8.0
19 |
20 | Changes:
21 |     * removed error checks to allow non-square input images. See # HACK lines.
22 |     * commented out unused exports
23 |     * commented out @tf_export lines
24 |
25 |
26 | MobileNet v1 models for Keras.
27 |
28 | MobileNet is a general architecture and can be used for multiple use cases.
29 | Depending on the use case, it can use different input layer size and
30 | different width factors. This allows different width models to reduce
31 | the number of multiply-adds and thereby
32 | reduce inference cost on mobile devices.
33 |
34 | MobileNets support any input size greater than 32 x 32, with larger image sizes
35 | offering better performance.
36 | The number of parameters and number of multiply-adds
37 | can be modified by using the `alpha` parameter,
38 | which increases/decreases the number of filters in each layer.
39 | By altering the image size and `alpha` parameter,
40 | all 16 models from the paper can be built, with ImageNet weights provided.
41 |
42 | The paper demonstrates the performance of MobileNets using `alpha` values of
43 | 1.0 (also called 100 % MobileNet), 0.75, 0.5 and 0.25.
44 | For each of these `alpha` values, weights for 4 different input image sizes
45 | are provided (224, 192, 160, 128).
46 | 47 | The following table describes the size and accuracy of the 100% MobileNet 48 | on size 224 x 224: 49 | ---------------------------------------------------------------------------- 50 | Width Multiplier (alpha) | ImageNet Acc | Multiply-Adds (M) | Params (M) 51 | ---------------------------------------------------------------------------- 52 | | 1.0 MobileNet-224 | 70.6 % | 529 | 4.2 | 53 | | 0.75 MobileNet-224 | 68.4 % | 325 | 2.6 | 54 | | 0.50 MobileNet-224 | 63.7 % | 149 | 1.3 | 55 | | 0.25 MobileNet-224 | 50.6 % | 41 | 0.5 | 56 | ---------------------------------------------------------------------------- 57 | 58 | The following table describes the performance of 59 | the 100 % MobileNet on various input sizes: 60 | ------------------------------------------------------------------------ 61 | Resolution | ImageNet Acc | Multiply-Adds (M) | Params (M) 62 | ------------------------------------------------------------------------ 63 | | 1.0 MobileNet-224 | 70.6 % | 529 | 4.2 | 64 | | 1.0 MobileNet-192 | 69.1 % | 529 | 4.2 | 65 | | 1.0 MobileNet-160 | 67.2 % | 529 | 4.2 | 66 | | 1.0 MobileNet-128 | 64.4 % | 529 | 4.2 | 67 | ------------------------------------------------------------------------ 68 | 69 | The weights for all 16 models are obtained and translated 70 | from TensorFlow checkpoints found at 71 | https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md 72 | 73 | # Reference 74 | - [MobileNets: Efficient Convolutional Neural Networks for 75 | Mobile Vision Applications](https://arxiv.org/pdf/1704.04861.pdf)) 76 | """ 77 | from __future__ import absolute_import 78 | from __future__ import division 79 | from __future__ import print_function 80 | 81 | import os 82 | 83 | from tensorflow.python.keras import backend as K 84 | # from tensorflow.python.keras import constraints 85 | # from tensorflow.python.keras import initializers 86 | # from tensorflow.python.keras import regularizers 87 | from tensorflow.python.keras.applications import imagenet_utils 88 | from tensorflow.python.keras.applications.imagenet_utils import _obtain_input_shape 89 | # from tensorflow.python.keras.applications.imagenet_utils import decode_predictions 90 | # from tensorflow.python.keras.engine import InputSpec 91 | from tensorflow.python.keras.engine.network import get_source_inputs 92 | from tensorflow.python.keras.layers import Activation 93 | from tensorflow.python.keras.layers import BatchNormalization 94 | from tensorflow.python.keras.layers import Conv2D 95 | from tensorflow.python.keras.layers import DepthwiseConv2D 96 | from tensorflow.python.keras.layers import Dropout 97 | from tensorflow.python.keras.layers import GlobalAveragePooling2D 98 | from tensorflow.python.keras.layers import GlobalMaxPooling2D 99 | from tensorflow.python.keras.layers import Input 100 | from tensorflow.python.keras.layers import Reshape 101 | from tensorflow.python.keras.layers import ZeroPadding2D 102 | from tensorflow.python.keras.models import Model 103 | # from tensorflow.python.keras.utils import conv_utils 104 | from tensorflow.python.keras.utils.data_utils import get_file 105 | from tensorflow.python.platform import tf_logging as logging 106 | # from tensorflow.python.util.tf_export import tf_export 107 | 108 | 109 | BASE_WEIGHT_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.6/' 110 | 111 | 112 | def relu6(x): 113 | return K.relu(x, max_value=6) 114 | 115 | 116 | # @tf_export('keras.applications.mobilenet.preprocess_input') 117 | def 
preprocess_input(x): 118 | """Preprocesses a numpy array encoding a batch of images. 119 | 120 | Arguments: 121 | x: a 4D numpy array consists of RGB values within [0, 255]. 122 | 123 | Returns: 124 | Preprocessed array. 125 | """ 126 | return imagenet_utils.preprocess_input(x, mode='tf') 127 | 128 | 129 | # @tf_export('keras.applications.MobileNet', 130 | # 'keras.applications.mobilenet.MobileNet') 131 | def MobileNet(input_shape=None, 132 | alpha=1.0, 133 | depth_multiplier=1, 134 | dropout=1e-3, 135 | include_top=True, 136 | weights='imagenet', 137 | input_tensor=None, 138 | pooling=None, 139 | classes=1000): 140 | """Instantiates the MobileNet architecture. 141 | 142 | To load a MobileNet model via `load_model`, import the custom 143 | objects `relu6` and pass them to the `custom_objects` parameter. 144 | E.g. 145 | model = load_model('mobilenet.h5', custom_objects={ 146 | 'relu6': mobilenet.relu6}) 147 | 148 | Arguments: 149 | input_shape: optional shape tuple, only to be specified 150 | if `include_top` is False (otherwise the input shape 151 | has to be `(224, 224, 3)` (with `channels_last` data format) 152 | or (3, 224, 224) (with `channels_first` data format). 153 | It should have exactly 3 inputs channels, 154 | and width and height should be no smaller than 32. 155 | E.g. `(200, 200, 3)` would be one valid value. 156 | alpha: controls the width of the network. 157 | - If `alpha` < 1.0, proportionally decreases the number 158 | of filters in each layer. 159 | - If `alpha` > 1.0, proportionally increases the number 160 | of filters in each layer. 161 | - If `alpha` = 1, default number of filters from the paper 162 | are used at each layer. 163 | depth_multiplier: depth multiplier for depthwise convolution 164 | (also called the resolution multiplier) 165 | dropout: dropout rate 166 | include_top: whether to include the fully-connected 167 | layer at the top of the network. 168 | weights: one of `None` (random initialization), 169 | 'imagenet' (pre-training on ImageNet), 170 | or the path to the weights file to be loaded. 171 | input_tensor: optional Keras tensor (i.e. output of 172 | `layers.Input()`) 173 | to use as image input for the model. 174 | pooling: Optional pooling mode for feature extraction 175 | when `include_top` is `False`. 176 | - `None` means that the output of the model 177 | will be the 4D tensor output of the 178 | last convolutional layer. 179 | - `avg` means that global average pooling 180 | will be applied to the output of the 181 | last convolutional layer, and thus 182 | the output of the model will be a 183 | 2D tensor. 184 | - `max` means that global max pooling will 185 | be applied. 186 | classes: optional number of classes to classify images 187 | into, only to be specified if `include_top` is True, and 188 | if no `weights` argument is specified. 189 | 190 | Returns: 191 | A Keras model instance. 192 | 193 | Raises: 194 | ValueError: in case of invalid argument for `weights`, 195 | or invalid input shape. 196 | RuntimeError: If attempting to run this model with a 197 | backend that does not support separable convolutions. 
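
  Example (a sketch; the non-square input shape relies on the # HACK changes
  below, and loading 'imagenet' weights downloads them on first use):

      model = MobileNet(
          input_shape=(192, 256, 3), alpha=0.25, include_top=False,
          weights='imagenet')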
198 | """ 199 | 200 | if not (weights in {'imagenet', None} or os.path.exists(weights)): 201 | raise ValueError('The `weights` argument should be either ' 202 | '`None` (random initialization), `imagenet` ' 203 | '(pre-training on ImageNet), ' 204 | 'or the path to the weights file to be loaded.') 205 | 206 | if weights == 'imagenet' and include_top and classes != 1000: 207 | raise ValueError('If using `weights` as ImageNet with `include_top` ' 208 | 'as true, `classes` should be 1000') 209 | 210 | # Determine proper input shape and default size. 211 | if input_shape is None: 212 | default_size = 224 213 | else: 214 | if K.image_data_format() == 'channels_first': 215 | rows = input_shape[1] 216 | cols = input_shape[2] 217 | else: 218 | rows = input_shape[0] 219 | cols = input_shape[1] 220 | 221 | # HACK 222 | # if rows == cols and rows in [128, 160, 192, 224]: 223 | if rows in [128, 160, 192, 224]: 224 | default_size = rows 225 | else: 226 | default_size = 224 227 | 228 | input_shape = _obtain_input_shape( 229 | input_shape, 230 | default_size=default_size, 231 | min_size=32, 232 | data_format=K.image_data_format(), 233 | require_flatten=include_top, 234 | weights=weights) 235 | 236 | if K.image_data_format() == 'channels_last': 237 | row_axis, col_axis = (0, 1) 238 | else: 239 | row_axis, col_axis = (1, 2) 240 | rows = input_shape[row_axis] 241 | cols = input_shape[col_axis] 242 | 243 | if weights == 'imagenet': 244 | if depth_multiplier != 1: 245 | raise ValueError('If imagenet weights are being loaded, ' 246 | 'depth multiplier must be 1') 247 | 248 | if alpha not in [0.25, 0.50, 0.75, 1.0]: 249 | raise ValueError('If imagenet weights are being loaded, ' 250 | 'alpha can be one of' 251 | '`0.25`, `0.50`, `0.75` or `1.0` only.') 252 | 253 | # HACK 254 | # if rows != cols or rows not in [128, 160, 192, 224]: 255 | if rows not in [128, 160, 192, 224]: 256 | if rows is None: 257 | rows = 224 258 | logging.warning('MobileNet shape is undefined.' 259 | ' Weights for input shape (224, 224) will be loaded.') 260 | else: 261 | raise ValueError('If imagenet weights are being loaded, ' 262 | 'input must have a static square shape (one of ' 263 | '(128, 128), (160, 160), (192, 192), or (224, 224)).' 264 | ' Input shape provided = %s' % (input_shape,)) 265 | 266 | if K.image_data_format() != 'channels_last': 267 | logging.warning('The MobileNet family of models is only available ' 268 | 'for the input data format "channels_last" ' 269 | '(width, height, channels). ' 270 | 'However your settings specify the default ' 271 | 'data format "channels_first" (channels, width, height).' 272 | ' You should set `image_data_format="channels_last"` ' 273 | 'in your Keras config located at ~/.keras/keras.json. 
' 274 | 'The model being returned right now will expect inputs ' 275 | 'to follow the "channels_last" data format.') 276 | K.set_image_data_format('channels_last') 277 | old_data_format = 'channels_first' 278 | else: 279 | old_data_format = None 280 | 281 | if input_tensor is None: 282 | img_input = Input(shape=input_shape) 283 | else: 284 | if not K.is_keras_tensor(input_tensor): 285 | img_input = Input(tensor=input_tensor, shape=input_shape) 286 | else: 287 | img_input = input_tensor 288 | 289 | x = _conv_block(img_input, 32, alpha, strides=(2, 2)) 290 | x = _depthwise_conv_block(x, 64, alpha, depth_multiplier, block_id=1) 291 | 292 | x = _depthwise_conv_block( 293 | x, 128, alpha, depth_multiplier, strides=(2, 2), block_id=2) 294 | x = _depthwise_conv_block(x, 128, alpha, depth_multiplier, block_id=3) 295 | 296 | x = _depthwise_conv_block( 297 | x, 256, alpha, depth_multiplier, strides=(2, 2), block_id=4) 298 | x = _depthwise_conv_block(x, 256, alpha, depth_multiplier, block_id=5) 299 | 300 | x = _depthwise_conv_block( 301 | x, 512, alpha, depth_multiplier, strides=(2, 2), block_id=6) 302 | x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=7) 303 | x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=8) 304 | x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=9) 305 | x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=10) 306 | x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=11) 307 | 308 | x = _depthwise_conv_block( 309 | x, 1024, alpha, depth_multiplier, strides=(2, 2), block_id=12) 310 | x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier, block_id=13) 311 | 312 | if include_top: 313 | if K.image_data_format() == 'channels_first': 314 | shape = (int(1024 * alpha), 1, 1) 315 | else: 316 | shape = (1, 1, int(1024 * alpha)) 317 | 318 | x = GlobalAveragePooling2D()(x) 319 | x = Reshape(shape, name='reshape_1')(x) 320 | x = Dropout(dropout, name='dropout')(x) 321 | x = Conv2D(classes, (1, 1), padding='same', name='conv_preds')(x) 322 | x = Activation('softmax', name='act_softmax')(x) 323 | x = Reshape((classes,), name='reshape_2')(x) 324 | else: 325 | if pooling == 'avg': 326 | x = GlobalAveragePooling2D()(x) 327 | elif pooling == 'max': 328 | x = GlobalMaxPooling2D()(x) 329 | 330 | # Ensure that the model takes into account 331 | # any potential predecessors of `input_tensor`. 332 | if input_tensor is not None: 333 | inputs = get_source_inputs(input_tensor) 334 | else: 335 | inputs = img_input 336 | 337 | # Create model. 
338 | model = Model(inputs, x, name='mobilenet_%0.2f_%s' % (alpha, rows)) 339 | 340 | # load weights 341 | if weights == 'imagenet': 342 | if K.image_data_format() == 'channels_first': 343 | raise ValueError('Weights for "channels_first" format ' 344 | 'are not available.') 345 | if alpha == 1.0: 346 | alpha_text = '1_0' 347 | elif alpha == 0.75: 348 | alpha_text = '7_5' 349 | elif alpha == 0.50: 350 | alpha_text = '5_0' 351 | else: 352 | alpha_text = '2_5' 353 | 354 | if include_top: 355 | model_name = 'mobilenet_%s_%d_tf.h5' % (alpha_text, rows) 356 | weigh_path = BASE_WEIGHT_PATH + model_name 357 | weights_path = get_file(model_name, weigh_path, cache_subdir='models') 358 | else: 359 | model_name = 'mobilenet_%s_%d_tf_no_top.h5' % (alpha_text, rows) 360 | weigh_path = BASE_WEIGHT_PATH + model_name 361 | weights_path = get_file(model_name, weigh_path, cache_subdir='models') 362 | model.load_weights(weights_path) 363 | elif weights is not None: 364 | model.load_weights(weights) 365 | 366 | if old_data_format: 367 | K.set_image_data_format(old_data_format) 368 | return model 369 | 370 | 371 | def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)): 372 | """Adds an initial convolution layer (with batch normalization and relu6). 373 | 374 | Arguments: 375 | inputs: Input tensor of shape `(rows, cols, 3)` 376 | (with `channels_last` data format) or 377 | (3, rows, cols) (with `channels_first` data format). 378 | It should have exactly 3 inputs channels, 379 | and width and height should be no smaller than 32. 380 | E.g. `(224, 224, 3)` would be one valid value. 381 | filters: Integer, the dimensionality of the output space 382 | (i.e. the number of output filters in the convolution). 383 | alpha: controls the width of the network. 384 | - If `alpha` < 1.0, proportionally decreases the number 385 | of filters in each layer. 386 | - If `alpha` > 1.0, proportionally increases the number 387 | of filters in each layer. 388 | - If `alpha` = 1, default number of filters from the paper 389 | are used at each layer. 390 | kernel: An integer or tuple/list of 2 integers, specifying the 391 | width and height of the 2D convolution window. 392 | Can be a single integer to specify the same value for 393 | all spatial dimensions. 394 | strides: An integer or tuple/list of 2 integers, 395 | specifying the strides of the convolution along the width and height. 396 | Can be a single integer to specify the same value for 397 | all spatial dimensions. 398 | Specifying any stride value != 1 is incompatible with specifying 399 | any `dilation_rate` value != 1. 400 | 401 | Input shape: 402 | 4D tensor with shape: 403 | `(samples, channels, rows, cols)` if data_format='channels_first' 404 | or 4D tensor with shape: 405 | `(samples, rows, cols, channels)` if data_format='channels_last'. 406 | 407 | Output shape: 408 | 4D tensor with shape: 409 | `(samples, filters, new_rows, new_cols)` if data_format='channels_first' 410 | or 4D tensor with shape: 411 | `(samples, new_rows, new_cols, filters)` if data_format='channels_last'. 412 | `rows` and `cols` values might have changed due to stride. 413 | 414 | Returns: 415 | Output tensor of block. 
416 | """ 417 | channel_axis = 1 if K.image_data_format() == 'channels_first' else -1 418 | filters = int(filters * alpha) 419 | x = ZeroPadding2D(padding=(1, 1), name='conv1_pad')(inputs) 420 | x = Conv2D( 421 | filters, 422 | kernel, 423 | padding='valid', 424 | use_bias=False, 425 | strides=strides, 426 | name='conv1')(x) 427 | x = BatchNormalization(axis=channel_axis, name='conv1_bn')(x) 428 | return Activation(relu6, name='conv1_relu')(x) 429 | 430 | 431 | def _depthwise_conv_block(inputs, 432 | pointwise_conv_filters, 433 | alpha, 434 | depth_multiplier=1, 435 | strides=(1, 1), 436 | block_id=1): 437 | """Adds a depthwise convolution block. 438 | 439 | A depthwise convolution block consists of a depthwise conv, 440 | batch normalization, relu6, pointwise convolution, 441 | batch normalization and relu6 activation. 442 | 443 | Arguments: 444 | inputs: Input tensor of shape `(rows, cols, channels)` 445 | (with `channels_last` data format) or 446 | (channels, rows, cols) (with `channels_first` data format). 447 | pointwise_conv_filters: Integer, the dimensionality of the output space 448 | (i.e. the number of output filters in the pointwise convolution). 449 | alpha: controls the width of the network. 450 | - If `alpha` < 1.0, proportionally decreases the number 451 | of filters in each layer. 452 | - If `alpha` > 1.0, proportionally increases the number 453 | of filters in each layer. 454 | - If `alpha` = 1, default number of filters from the paper 455 | are used at each layer. 456 | depth_multiplier: The number of depthwise convolution output channels 457 | for each input channel. 458 | The total number of depthwise convolution output 459 | channels will be equal to `filters_in * depth_multiplier`. 460 | strides: An integer or tuple/list of 2 integers, 461 | specifying the strides of the convolution along the width and height. 462 | Can be a single integer to specify the same value for 463 | all spatial dimensions. 464 | Specifying any stride value != 1 is incompatible with specifying 465 | any `dilation_rate` value != 1. 466 | block_id: Integer, a unique identification designating the block number. 467 | 468 | Input shape: 469 | 4D tensor with shape: 470 | `(batch, channels, rows, cols)` if data_format='channels_first' 471 | or 4D tensor with shape: 472 | `(batch, rows, cols, channels)` if data_format='channels_last'. 473 | 474 | Output shape: 475 | 4D tensor with shape: 476 | `(batch, filters, new_rows, new_cols)` if data_format='channels_first' 477 | or 4D tensor with shape: 478 | `(batch, new_rows, new_cols, filters)` if data_format='channels_last'. 479 | `rows` and `cols` values might have changed due to stride. 480 | 481 | Returns: 482 | Output tensor of block. 
483 | """ 484 | channel_axis = 1 if K.image_data_format() == 'channels_first' else -1 485 | pointwise_conv_filters = int(pointwise_conv_filters * alpha) 486 | x = ZeroPadding2D(padding=(1, 1), name='conv_pad_%d' % block_id)(inputs) 487 | x = DepthwiseConv2D( # pylint: disable=not-callable 488 | (3, 3), 489 | padding='valid', 490 | depth_multiplier=depth_multiplier, 491 | strides=strides, 492 | use_bias=False, 493 | name='conv_dw_%d' % block_id)(x) 494 | x = BatchNormalization(axis=channel_axis, name='conv_dw_%d_bn' % block_id)(x) 495 | x = Activation(relu6, name='conv_dw_%d_relu' % block_id)(x) 496 | 497 | x = Conv2D( 498 | pointwise_conv_filters, (1, 1), 499 | padding='same', 500 | use_bias=False, 501 | strides=(1, 1), 502 | name='conv_pw_%d' % block_id)( 503 | x) 504 | x = BatchNormalization(axis=channel_axis, name='conv_pw_%d_bn' % block_id)(x) 505 | return Activation(relu6, name='conv_pw_%d_relu' % block_id)(x) 506 | -------------------------------------------------------------------------------- /paper/.gitignore: -------------------------------------------------------------------------------- 1 | real_images 2 | segmentations 3 | figs 4 | top_k_results 5 | big_table_results 6 | sup_vid_results 7 | -------------------------------------------------------------------------------- /paper/big_table.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | import random 4 | import numpy as np 5 | from dids import Dataset 6 | from shapenet.core.blender_renderings.config import RenderConfig 7 | from shapenet.core.meshes import get_mesh_dataset 8 | from shapenet.core import cat_desc_to_id 9 | 10 | from template_ffd.inference.clouds import get_cloud_manager 11 | from template_ffd.inference.meshes import get_inferred_mesh_dataset 12 | from template_ffd.inference.voxels import get_voxel_dataset 13 | from template_ffd.inference.predictions import \ 14 | get_selected_template_idx_dataset 15 | from template_ffd.data.ids import get_example_ids 16 | from template_ffd.templates.ids import get_template_ids 17 | 18 | 19 | def get_ds(cat_desc, regime='e'): 20 | view_index = 5 21 | edge_length_threshold = 0.02 22 | 23 | cat_id = cat_desc_to_id(cat_desc) 24 | model_id = '%s_%s' % (regime, cat_desc) 25 | 26 | image_ds = RenderConfig().get_dataset(cat_id, view_index) 27 | cloud_ds = get_cloud_manager( 28 | model_id, pre_sampled=True, n_samples=n_samples).get_lazy_dataset() 29 | mesh_ds = get_inferred_mesh_dataset( 30 | model_id, edge_length_threshold=edge_length_threshold) 31 | gt_mesh_ds = get_mesh_dataset(cat_id) 32 | voxel_ds = get_voxel_dataset( 33 | model_id, edge_length_threshold=edge_length_threshold, filled=False) 34 | selected_template_ds = get_selected_template_idx_dataset(model_id) 35 | 36 | template_meshes = [] 37 | with gt_mesh_ds: 38 | for template_id in get_template_ids(cat_id): 39 | mesh = gt_mesh_ds[template_id] 40 | template_meshes.append( 41 | {k: np.array(mesh[k]) for k in ('vertices', 'faces')}) 42 | 43 | template_mesh_ds = selected_template_ds.map(lambda i: template_meshes[i]) 44 | 45 | return Dataset.zip( 46 | image_ds, gt_mesh_ds, cloud_ds, mesh_ds, voxel_ds, template_mesh_ds) 47 | 48 | 49 | def vis(cat_desc, regime='e', shuffle=True): 50 | import matplotlib.pyplot as plt 51 | from mayavi import mlab 52 | from util3d.mayavi_vis import vis_point_cloud, vis_voxels 53 | 54 | all_ds = get_ds(cat_desc, regime) 55 | cat_id = cat_desc_to_id(cat_desc) 56 | example_ids = list(get_example_ids(cat_id, 'eval')) 57 | 
if shuffle: random.shuffle(example_ids) 58 | 59 | def vis_mesh(mesh, include_wireframe=False, **kwargs): 60 | from util3d.mayavi_vis import vis_mesh as vm 61 | v, f = (np.array(mesh[k]) for k in ('vertices', 'faces')) 62 | vm(v, f, include_wireframe=include_wireframe, **kwargs) 63 | 64 | with all_ds: 65 | for example_id in example_ids: 66 | print(example_id) 67 | image, gt_mesh, cloud, mesh, voxels, template_mesh = \ 68 | all_ds[example_id] 69 | plt.imshow(image) 70 | mlab.figure() 71 | vis_mesh(gt_mesh, color=(0, 0, 1)) 72 | mlab.figure() 73 | vis_mesh(mesh, color=(0, 1, 0)) 74 | mlab.figure() 75 | vis_mesh(template_mesh, color=(1, 0, 0)) 76 | mlab.figure() 77 | vis_point_cloud( 78 | np.array(cloud), scale_factor=0.01, color=(0, 1, 0)) 79 | mlab.figure() 80 | vis_voxels(voxels.data, color=(0, 1, 0)) 81 | 82 | plt.show(block=False) 83 | mlab.show() 84 | plt.close() 85 | 86 | 87 | def export(cat_desc, example_ids, regime='e'): 88 | from scipy.misc import imsave 89 | from util3d.mesh.obj_io import write_obj 90 | import os 91 | all_ds = get_ds(cat_desc, regime) 92 | base = os.path.realpath(os.path.dirname(__file__)) 93 | 94 | with all_ds: 95 | for example_id in example_ids: 96 | folder = os.path.join( 97 | base, 'big_table_results', cat_desc, example_id) 98 | if not os.path.isdir(folder): 99 | os.makedirs(folder) 100 | image, gt_mesh, cloud, mesh, voxels, template_mesh = \ 101 | all_ds[example_id] 102 | imsave(os.path.join(folder, 'image.png'), image) 103 | v, f = (np.array(mesh[k]) for k in ('vertices', 'faces')) 104 | write_obj(os.path.join(folder, 'deformed.obj'), v, f) 105 | v, f = (np.array(template_mesh[k]) for k in ('vertices', 'faces')) 106 | write_obj(os.path.join(folder, 'template.obj'), v, f) 107 | v, f = (np.array(gt_mesh[k]) for k in ('vertices', 'faces')) 108 | write_obj(os.path.join(folder, 'model.obj'), v, f) 109 | np.save(os.path.join( 110 | folder, 'inferred_cloud.npy'), np.array(cloud)) 111 | path = os.path.join(folder, 'deformed.binvox') 112 | voxels.save(path) 113 | 114 | 115 | n_samples = 8192 116 | regime = 'e' 117 | 118 | # cat_desc = 'chair' 119 | # example_ids = [ 120 | # '52cfbd8c8650402ba72559fc4f86f700', 121 | # # '8590bac753fbcccb203a669367e5b2a', 122 | # '353bbd3b916426d24502f857a1cf320e', 123 | # ] 124 | 125 | # cat_desc = 'plane' 126 | # example_ids = [ 127 | # '7bc46908d079551eed02ab0379740cae', 128 | # '5aeb583ee6e0e4ea42d0e83abdfab1fd', 129 | # 'bbd8e6b06d8906d5eccd82bb51193a7f', 130 | # ] 131 | 132 | # cat_desc = 'car' 133 | # example_ids = [ 134 | # # '7d7ace3866016bf6fef78228d4881428', 135 | # '8d26c4ebd58fbe678ba7af9f04c27920', 136 | # '764f08cd895e492e5dca6305fb9f97ca', 137 | # 'e2722a39dbc33044bbecf72e56fe7e5d' 138 | # ] 139 | 140 | # cat_desc = 'sofa' 141 | # example_ids = [ 142 | # '2e5d49e60a1f3abae9deec47d8412ee', 143 | # 'db8c451f7b01ae88f91663a74ccd2338', 144 | # 'e3b28c9216617a638ab9d2d7b1d714', 145 | # ] 146 | 147 | cat_desc = 'table' 148 | example_ids = [ 149 | 'd3fd6d332e6e8bccd5382f3f8f33a9f4', 150 | '5d00596375ec8bd89940e75c3dc3e7', 151 | '5ac1ba406888f05e855931d119219022', 152 | ] 153 | 154 | # vis(cat_desc, regime) 155 | export(cat_desc, example_ids, regime) 156 | -------------------------------------------------------------------------------- /paper/cdf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from __future__ import division 4 | 5 | 6 | def analyse(cat_desc, templates=False, save=False, metric='chamfer'): 7 | if templates: 8 | n = [1, 2, 4, 8, 16, 30] 9 | model_ids = 
[('et%d_%s' % (i, cat_desc)) for i in n] 10 | model_ids[-1] = 'e_%s' % cat_desc 11 | labels = ['$T = %d$' % i for i in n] 12 | colors = [ 13 | 'orange', 14 | 'g', 15 | 'b', 16 | 'k', 17 | 'cyan', 18 | 'r', 19 | ] 20 | fig_name = '%s_%s_T' % (cat_desc, metric) 21 | if metric == 'iou': 22 | legend_size = 10 23 | else: 24 | legend_size = 10 25 | else: 26 | labels = ['b', 'w', 'e', 'r'] 27 | model_ids = ['%s_%s' % (m, cat_desc) for m in labels] 28 | colors = [ 29 | 'r', 30 | 'orange', 31 | 'g', 32 | 'b', 33 | ] 34 | fig_name = '%s_%s' % (cat_desc, metric) 35 | legend_size = 10 36 | analyse_models( 37 | model_ids, labels, colors, save, metric, fig_name, title=None, 38 | legend_size=legend_size) 39 | 40 | 41 | def analyse_models(model_ids, labels, colors, save=False, metric='chamfer', 42 | fig_name=None, title=None, legend_size=None): 43 | import numpy as np 44 | import matplotlib.pyplot as plt 45 | 46 | template_vals = [] 47 | model_vals = [] 48 | 49 | if metric == 'chamfer': 50 | from template_ffd.eval.chamfer import get_chamfer_manager 51 | from template_ffd.eval.chamfer import get_template_chamfer_manager 52 | 53 | def template_fn(i): 54 | return get_template_chamfer_manager(i).get_saved_dataset() 55 | 56 | def model_fn(i): 57 | return get_chamfer_manager(i).get_saved_dataset() 58 | 59 | plot_fn = plt.semilogx 60 | 61 | def value_map_fn(x): 62 | return x 63 | 64 | reverse = False 65 | ylabel = '$\lambda_c < X$' 66 | xlim = None 67 | neg = False 68 | loc = 'lower right' 69 | # xlim = [2e-2, 1.1] 70 | 71 | elif metric == 'iou': 72 | from template_ffd.eval.iou import get_iou_dataset 73 | from template_ffd.eval.iou import IouTemplateSavingManager 74 | 75 | def template_fn(i): 76 | return IouTemplateSavingManager(i).get_saved_dataset() 77 | 78 | def model_fn(i): 79 | return get_iou_dataset(i, filled=True, edge_length_threshold=0.02) 80 | 81 | plot_fn = plt.plot 82 | # plot_fn = plt.semilogx 83 | 84 | def value_map_fn(x): 85 | # return [1 - xi for xi in x] 86 | return x 87 | 88 | reverse = True 89 | ylabel = '$IoU > X$' 90 | xlim = [0, 1] 91 | neg = False 92 | loc = 'upper right' 93 | 94 | else: 95 | raise ValueError('metric %s not recognized' % metric) 96 | 97 | xlabel = '$X$' 98 | 99 | for model_id in model_ids: 100 | with template_fn(model_id) as ds: 101 | values = value_map_fn(list(ds.values())) 102 | values.sort(reverse=reverse) 103 | template_vals.append(values) 104 | with model_fn(model_id) as ds: 105 | values = value_map_fn(list(ds.values())) 106 | values.sort(reverse=reverse) 107 | model_vals.append(values) 108 | 109 | fig = plt.figure() 110 | for mv, tv, label, i, c in zip( 111 | model_vals, template_vals, labels, model_ids, colors): 112 | n = len(mv) 113 | cdf = (np.array(range(n)) + 1) / n 114 | if neg: 115 | cdf = 1 - cdf 116 | plot_fn(mv, cdf, color=c, label=label, linestyle='dashed') 117 | plot_fn(tv, cdf, color=c, linestyle='dotted') 118 | ax = plt.gca() 119 | plt.xlabel(xlabel, fontsize=16) 120 | plt.ylabel(ylabel, fontsize=16) 121 | if title is not None: 122 | plt.title(title) 123 | if xlim is not None: 124 | ax.set_xlim(*xlim) 125 | 126 | # if legend_size is not None: 127 | # plt.legend(prop={'size': legend_size}) 128 | ax.legend(loc=loc) 129 | 130 | if save: 131 | import os 132 | folder = os.path.join( 133 | os.path.realpath(os.path.dirname(__file__)), 'figs') 134 | if not os.path.isdir(folder): 135 | os.makedirs(folder) 136 | fn = os.path.join(folder, '%s.eps' % fig_name) 137 | fig.savefig(fn, format='eps') 138 | else: 139 | plt.show() 140 | 141 | 142 | if __name__ == 
'__main__': 143 | import argparse 144 | 145 | parser = argparse.ArgumentParser() 146 | parser.add_argument( 147 | 'cat', help='cat_desc to analyse') 148 | parser.add_argument('-s', '--save', action='store_true') 149 | parser.add_argument('-t', '--templates', action='store_true') 150 | parser.add_argument( 151 | '-m', '--metric', default='chamfer', choices=['chamfer', 'iou']) 152 | args = parser.parse_args() 153 | analyse(args.cat, args.templates, args.save, args.metric) 154 | -------------------------------------------------------------------------------- /paper/create_mixed_params.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | """Creates params file for single model trained across 13 categories.""" 4 | import os 5 | import json 6 | from progress.spinner import Spinner 7 | from template_ffd.model import load_params, get_params_path, get_builder 8 | # from shapenet.core import cat_desc_to_id 9 | 10 | path = get_params_path('e_all_v8') 11 | if os.path.isfile(path): 12 | os.remove(path) 13 | # print('Path %s already exists.') 14 | # exit() 15 | 16 | 17 | def get_template_counts(model_id): 18 | import tensorflow as tf 19 | import numpy as np 20 | print('Getting template counts for %s' % model_id) 21 | graph = tf.Graph() 22 | with graph.as_default(): 23 | builder = get_builder(model_id) 24 | features, labels = builder.get_inputs(mode='train', repeat=False) 25 | spec = builder.get_estimator_spec(features, labels, mode='eval') 26 | predictions = spec.predictions 27 | probs = predictions['probs'] 28 | counts = tf.argmax(probs, axis=-1) 29 | totals = np.zeros((builder.n_templates,), dtype=np.int32) 30 | saver = tf.train.Saver() 31 | 32 | with tf.train.MonitoredSession() as sess: 33 | saver.restore(sess, tf.train.latest_checkpoint(builder.model_dir)) 34 | spinner = Spinner() 35 | while not sess.should_stop(): 36 | c = sess.run(counts) 37 | for ci in c: 38 | totals[ci] += 1 39 | spinner.next() 40 | # break 41 | spinner.finish() 42 | return totals 43 | 44 | 45 | def get_top_k(x, k): 46 | print(x) 47 | ret = x.argsort()[-k:][::-1] 48 | print('---') 49 | print(ret) 50 | return list(ret) 51 | 52 | 53 | descs = ( 54 | 'plane', 55 | 'bench', 56 | 'car', 57 | 'chair', 58 | 'sofa', 59 | 'cabinet', 60 | 'monitor', 61 | 'lamp', 62 | 'speaker', 63 | 'pistol', 64 | 'table', 65 | 'cellphone', 66 | 'watercraft', 67 | ) 68 | 69 | model_ids = tuple('e_%s_v8' % c for c in descs) 70 | template_idx = tuple(get_top_k(get_template_counts(m), 2) for m in model_ids) 71 | params = load_params(model_ids[0]) 72 | 73 | params['cat_desc'] = descs 74 | params['template_idxs'] = template_idx 75 | params['use_bn_bugged_version'] = False 76 | 77 | with open(path, 'w') as fp: 78 | json.dump(params, fp) 79 | -------------------------------------------------------------------------------- /paper/create_paper_params.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | import os 4 | import json 5 | from template_ffd.model import get_params_path 6 | 7 | 8 | def write_params(model_id, params): 9 | path = get_params_path(model_id) 10 | if os.path.isfile(path): 11 | print('Params already exist for %s' % model_id) 12 | else: 13 | with open(path, 'w') as fp: 14 | json.dump(params, fp) 15 | print('Wrote params for %s' % model_id) 16 | 17 | 18 | cats = ( 19 | 'plane', 'car', 'bench', 'chair', 'sofa', 'table', 'cabinet', 'monitor', 20 | 'lamp', 'speaker', 'watercraft', 'cellphone', 'pistol') 21 | param_types = ('b', 
'e', 'w', 'r') 22 | use_bn_bugged_version = True 23 | 24 | params = { 25 | 'b': {}, 26 | 'e': { 27 | 'entropy_loss': { 28 | 'weight': 1e2, 29 | 'exp_annealing_rate': 1e-4 30 | } 31 | }, 32 | 'w': { 33 | 'gamma': 'log', 34 | 'prob_eps': 1e-3 35 | }, 36 | 'r': { 37 | 'dp_regularization': { 38 | 'weight': 1e0, 39 | 'exp_annealing_rate': 1e-4 40 | } 41 | }, 42 | 'rm1': { 43 | 'dp_regularization': { 44 | 'weight': 1e-1, 45 | 'exp_annealing_rate': 1e-4 46 | } 47 | } 48 | } 49 | for k, v in params.items(): 50 | v['inference_params'] = {'alpha': 0.25} 51 | v['use_bn_bugged_version'] = use_bn_bugged_version 52 | 53 | for cat in cats: 54 | for p in param_types: 55 | ps = params[p] 56 | ps['cat_desc'] = cat 57 | model_id = '%s_%s' % (p, cat) 58 | write_params(model_id, ps) 59 | 60 | # multi view param sets 61 | src = get_params_path('e_%s' % cat) 62 | model_id = 'e_%s_v8' % cat 63 | 64 | with open(src, 'r') as fp: 65 | ps = json.load(fp) 66 | 67 | ps['view_index'] = range(8) 68 | write_params(model_id, ps) 69 | del ps['inference_params'] 70 | model_id = '%s_full' % model_id 71 | write_params(model_id, ps) 72 | 73 | 74 | # # TODO: change template ids 75 | # ps = params['e'].copy() 76 | # ps['cat_desc'] = cats[:5] 77 | # write_params('e_all-5', ps) 78 | # ps['view_index'] = range(8) 79 | # write_params('e_all-5_v8', ps) 80 | # del ps['view_index'] 81 | # ps['cat_desc'] = cats 82 | # write_params('e_all-13', ps) 83 | # ps['view_index'] = range(8) 84 | # write_params('e_all-13_v8', ps) 85 | -------------------------------------------------------------------------------- /paper/infer_real.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | _paper_dir = os.path.realpath(os.path.dirname(__file__)) 4 | 5 | 6 | def get_path(cat_id, example_id, ext='png'): 7 | return os.path.join( 8 | _paper_dir, 'real_images', 'final', cat_id, 9 | '%s.%s' % (example_id, ext)) 10 | 11 | 12 | def vis_mesh(vertices, faces, original_vertices, **kwargs): 13 | from util3d.mayavi_vis import vis_mesh 14 | from mayavi import mlab 15 | mlab.figure() 16 | vis_mesh( 17 | vertices=vertices, faces=faces, color=(0, 1, 0), 18 | include_wireframe=False) 19 | mlab.figure() 20 | vis_mesh( 21 | vertices=original_vertices, faces=faces, color=(1, 0, 0), 22 | include_wireframe=False) 23 | mlab.show() 24 | 25 | 26 | def get_inference(model_id, example_id, ext='png', edge_length_threshold=0.02): 27 | import tensorflow as tf 28 | from template_ffd.model import get_builder 29 | import PIL 30 | import numpy as np 31 | from shapenet.image import with_background 32 | builder = get_builder(model_id) 33 | cat_id = builder.cat_id 34 | 35 | example_ids = [example_id] 36 | paths = [get_path(cat_id, e, ext) for e in example_ids] 37 | for path in paths: 38 | if not os.path.isfile(path): 39 | raise Exception('No file at path %s' % path) 40 | 41 | def gen(): 42 | for example_id, path in zip(example_ids, paths): 43 | image = np.array(PIL.Image.open(path)) 44 | image = with_background(image, 255) 45 | yield example_id, image 46 | 47 | render_params = builder.params.get('render_params', {}) 48 | shape = tuple(render_params.get('shape', (192, 256))) 49 | shape = shape + (3,) 50 | 51 | def input_fn(): 52 | ds = tf.data.Dataset.from_generator( 53 | gen, (tf.string, tf.uint8), ((), shape)) 54 | example_id, image = ds.make_one_shot_iterator().get_next() 55 | # image_content = tf.read_file(path) 56 | # if ext == 'png': 57 | # image = tf.image.decode_png(image_content) 58 | # elif ext == 'jpg': 59 | # image = 
tf.image.decode_jpg(image_content) 60 | # else: 61 | # raise ValueError('ext must be in ("png", "jpg")') 62 | image.set_shape((192, 256, 3)) 63 | image = tf.image.per_image_standardization(image) 64 | example_id = tf.expand_dims(example_id, axis=0) 65 | image = tf.expand_dims(image, axis=0) 66 | return dict(example_id=example_id, image=image) 67 | 68 | estimator = builder.get_estimator() 69 | mesh_fn = builder.get_prediction_to_mesh_fn(edge_length_threshold) 70 | for pred in estimator.predict(input_fn): 71 | example_id = pred.pop('example_id') 72 | mesh = mesh_fn(**pred) 73 | vis_mesh(**mesh) 74 | 75 | 76 | if __name__ == '__main__': 77 | model_id = 'b_plane' 78 | example_id = 'bomber-00' 79 | get_inference(model_id, example_id, 'png') 80 | -------------------------------------------------------------------------------- /paper/real_images.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | from scipy.misc import imread 4 | cat_desc = 'car' 5 | regime = 'r' 6 | model_id = '%s_%s' % (regime, cat_desc) 7 | 8 | folder = os.path.join( 9 | os.path.realpath( 10 | os.path.dirname(__file__)), 'real_images', 'jhonys', cat_desc) 11 | 12 | fns = [fn for fn in os.listdir(folder) if fn[-4:] == '.png'] 13 | 14 | 15 | def save(): 16 | import tensorflow as tf 17 | from util3d.mesh.obj_io import write_obj 18 | from shapenet.image import with_background 19 | from template_ffd.model import get_builder 20 | builder = get_builder(model_id) 21 | 22 | mesh_fn = builder.get_prediction_to_mesh_fn(0.02) 23 | cloud_fn = builder.get_prediction_to_cloud_fn() 24 | 25 | graph = tf.Graph() 26 | with graph.as_default(): 27 | image = tf.placeholder(shape=(192, 256, 3), dtype=tf.uint8) 28 | std_image = tf.image.per_image_standardization(image) 29 | std_image = tf.expand_dims(std_image, axis=0) 30 | example_id = tf.constant(['blah'], dtype=tf.string) 31 | spec = builder.get_estimator_spec( 32 | dict(example_id=example_id, image=std_image), 33 | None, tf.estimator.ModeKeys.PREDICT) 34 | predictions = spec.predictions 35 | probs_tf = predictions['probs'] 36 | dp_tf = predictions['dp'] 37 | saver = tf.train.Saver() 38 | 39 | with tf.Session(graph=graph) as sess: 40 | saver.restore(sess, tf.train.latest_checkpoint(builder.model_dir)) 41 | for fn in fns: 42 | path = os.path.join(folder, fn) 43 | image_data = np.array(imread(path)) 44 | if image_data.shape[-1] == 4: 45 | image_data = with_background(image_data, (255, 255, 255)) 46 | probs, dp = sess.run( 47 | [probs_tf, dp_tf], feed_dict={image: image_data}) 48 | probs = probs[0] 49 | dp = dp[0] 50 | mesh = mesh_fn(probs, dp) 51 | cloud = cloud_fn(probs, dp)['cloud'] 52 | v, ov, f = ( 53 | mesh[k] for k in('vertices', 'original_vertices', 'faces')) 54 | path = '%s.obj' % path[:-4] 55 | write_obj(path, v, f) 56 | p2 = '%s_template.obj' % path[:-4] 57 | np.save('%s_cloud.npy' % path[:-4], cloud) 58 | write_obj(p2, ov, f) 59 | 60 | 61 | def vis(): 62 | from util3d.mesh.obj_io import parse_obj 63 | from util3d.mayavi_vis import vis_mesh 64 | from mayavi import mlab 65 | import matplotlib.pyplot as plt 66 | 67 | for fn in fns: 68 | path = os.path.join(folder, fn) 69 | image = imread(path) 70 | p0 = '%s.obj' % path[:-4] 71 | vertices, faces = parse_obj(p0)[:2] 72 | p1 = '%s_template.obj' % path[:-4] 73 | tv, tf = parse_obj(p1)[:2] 74 | # cloud = np.load('%s_cloud.npy' % path[:-4]) 75 | assert(np.all(tf == faces)) 76 | print(np.max(np.abs(vertices - tv))) 77 | # mlab.figure() 78 | # vis_point_cloud(cloud, color=(0, 
1, 0), scale_factor=0.02) 79 | mlab.figure() 80 | vis_mesh(vertices, faces, include_wireframe=False, color=(0, 1, 0)) 81 | mlab.figure() 82 | vis_mesh(tv, tf, include_wireframe=False) 83 | plt.figure() 84 | plt.imshow(image) 85 | # plt.show(block=False) 86 | # mlab.show() 87 | # plt.close() 88 | plt.show(block=False) 89 | mlab.show() 90 | 91 | 92 | save() 93 | vis() 94 | -------------------------------------------------------------------------------- /paper/segment.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | import os 4 | import numpy as np 5 | from mayavi import mlab 6 | from util3d.mayavi_vis import vis_mesh 7 | from template_ffd.inference.predictions import get_predictions_dataset 8 | from template_ffd.model import get_builder 9 | from template_ffd.data.ids import get_example_ids 10 | from shapenet.core.blender_renderings.config import RenderConfig 11 | import matplotlib.pyplot as plt 12 | # from model.template import segment_faces 13 | 14 | _paper_dir = os.path.realpath(os.path.dirname(__file__)) 15 | 16 | colors = [ 17 | # (1, 1, 1), 18 | (0, 0, 1), 19 | (0, 1, 0), 20 | (1, 0, 0), 21 | (0, 1, 1), 22 | (1, 0, 1), 23 | (1, 1, 0), 24 | (0, 0, 0), 25 | ] 26 | nc = len(colors) 27 | 28 | 29 | def segmented_cloud(cloud, segmentation): 30 | assert(np.min(segmentation) >= 1) 31 | for i in range(1, np.max(segmentation)+1): 32 | yield cloud[segmentation == i] 33 | 34 | 35 | def vis_clouds(clouds): 36 | for i, cloud in enumerate(clouds): 37 | x, z, y = cloud.T 38 | mlab.points3d(x, y, z, color=colors[i % nc], scale_factor=0.01) 39 | 40 | 41 | def vis_segmented_mesh(vertices, faces, **kwargs): 42 | for i, f in enumerate(faces): 43 | vis_mesh(vertices, f, color=colors[i % nc], **kwargs) 44 | 45 | 46 | def vis_segmentations( 47 | model_id, example_ids=None, vis_mesh=False, 48 | edge_length_threshold=0.02, include_wireframe=False, 49 | save=False): 50 | from scipy.misc import imsave 51 | if save and example_ids is None: 52 | raise ValueError('Cannot save without specifying example_ids') 53 | builder = get_builder(model_id) 54 | cat_id = builder.cat_id 55 | if example_ids is None: 56 | example_ids = get_example_ids(cat_id, 'eval') 57 | if vis_mesh: 58 | segmented_fn = builder.get_segmented_mesh_fn(edge_length_threshold) 59 | else: 60 | segmented_fn = builder.get_segmented_cloud_fn() 61 | config = RenderConfig() 62 | 63 | with get_predictions_dataset(model_id) as predictions: 64 | with config.get_dataset(cat_id, builder.view_index) as image_ds: 65 | for example_id in example_ids: 66 | example = predictions[example_id] 67 | probs, dp = (np.array(example[k]) for k in ('probs', 'dp')) 68 | result = segmented_fn(probs, dp) 69 | if result is not None: 70 | image = image_ds[example_id] 71 | print(example_id) 72 | segmentation = result['segmentation'] 73 | if vis_mesh: 74 | vertices = result['vertices'] 75 | faces = result['faces'] 76 | original_points = result['original_points'] 77 | original_seg = result['original_segmentation'] 78 | f0 = mlab.figure(bgcolor=(1, 1, 1)) 79 | vis_segmented_mesh( 80 | vertices, segmented_cloud(faces, segmentation), 81 | include_wireframe=include_wireframe, 82 | opacity=0.2) 83 | f1 = mlab.figure(bgcolor=(1, 1, 1)) 84 | vis_clouds( 85 | segmented_cloud(original_points, original_seg)) 86 | else: 87 | points = result['points'] 88 | original_points = result['original_points'] 89 | f0 = mlab.figure(bgcolor=(1, 1, 1)) 90 | vis_clouds(segmented_cloud(points, segmentation)) 91 | f1 = mlab.figure(bgcolor=(1, 1, 1)) 92 | vis_clouds( 93 | segmented_cloud(original_points, segmentation)) 94 | 95 | if save: 96 | folder = os.path.join( 97 | _paper_dir, 'segmentations', model_id, example_id) 98 | if not os.path.isdir(folder): 99 | os.makedirs(folder) 100 | fn = 'inferred_%s.png' % ( 101 | 'mesh' if vis_mesh else 'cloud') 102 | p0 = os.path.join(folder, fn) 103 | mlab.savefig(p0, figure=f0) 104 | p1 = os.path.join(folder, 'annotated_cloud.png') 105 | mlab.savefig(p1, figure=f1) 106 | pi = os.path.join(folder, 'query_image.png') 107 | imsave(pi, image) 108 | mlab.close() 109 | else: 110 | plt.imshow(image) 111 | plt.show(block=False) 112 | mlab.show() 113 | plt.close() 114 | 115 | 116 | if __name__ == '__main__': 117 | import argparse 118 | 119 | parser = argparse.ArgumentParser() 120 | parser.add_argument( 121 | 'model_id', help='id of model defined in params') 122 | parser.add_argument('-i', '--example_ids', type=str, nargs='*') 123 | parser.add_argument('-m', '--mesh', action='store_true') 124 | parser.add_argument('-t', '--edge_length_threshold', type=float, default=0.02) 125 | parser.add_argument('-w', '--wireframe', action='store_true') 126 | parser.add_argument('-s', '--save', action='store_true') 127 | args = parser.parse_args() 128 | model_id = args.model_id 129 | example_ids = args.example_ids 130 | if isinstance(example_ids, (list, tuple)) and len(example_ids) == 0: 131 | example_ids = None 132 | 133 | vis_segmentations( 134 | model_id, example_ids, args.mesh, args.edge_length_threshold, 135 | args.wireframe, args.save) 136 | -------------------------------------------------------------------------------- /paper/selected_histograms.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from __future__ import division 4 | import numpy as np 5 | import matplotlib.pyplot as plt 6 | 7 | 8 | def get_hist_data(model_id, n_bins, mode): 9 | from shapenet.core import cat_desc_to_id 10 | from template_ffd.templates.ids import get_template_ids 11 | from template_ffd.model import load_params 12 | from template_ffd.inference.predictions import get_predictions_dataset 13 | cat_id = cat_desc_to_id(load_params(model_id)['cat_desc']) 14 | n_templates = len(get_template_ids(cat_id)) 15 | 16 | counts = np.zeros((n_bins,), dtype=np.int32) 17 | argmax_counts = np.zeros((n_templates,), dtype=np.int32) 18 | 19 | with get_predictions_dataset(model_id) as dataset: 20 | for example_id in dataset: 21 | probs = np.array(dataset[example_id]['probs']) 22 | counts[min(int(np.max(probs) * n_bins), n_bins - 1)] += 1  # clamp p == 1.0 into the top bin 23 | # prob_indices = np.array(0.999*probs * n_bins, dtype=np.int32) 24 | # for pi in prob_indices: 25 | # counts[pi] += 1 26 | argmax_counts[np.argmax(probs)] += 1 27 | 28 | counts = counts / np.sum(counts) 29 | argmax_counts = argmax_counts / np.sum(argmax_counts) 30 | return counts, argmax_counts 31 | 32 | 33 | def analyse(cat_desc, save): 34 | 35 | model_ids = [ 36 | 'b_', 37 | 'w_', 38 | 'e_', 39 | 'r_', 40 | ] 41 | 42 | model_ids = ['%s%s' % (m, cat_desc) for m in model_ids] 43 | 44 | labels = [ 45 | 'b', 46 | 'w', 47 | 'e', 48 | 'r', 49 | ] 50 | 51 | colors = [ 52 | 'r', 53 | 'orange', 54 | 'g', 55 | 'b', 56 | ] 57 | 58 | mode = 'eval' 59 | n_bins = 10 60 | 61 | plt.rc('text', usetex=True) 62 | plt.rc('font', family='serif') 63 | 64 | n_ids = len(model_ids) 65 | 66 | counts = [] 67 | argmax_counts = [] 68 | 69 | for model_id in model_ids: 70 | c, ac = get_hist_data(model_id, n_bins, mode) 71 | counts.append(c) 72 | argmax_counts.append(ac) 73 | 74 | 
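# Sort each model's template-selection frequencies in descending order so the
# bar chart below compares ranked frequencies rather than raw template indices.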
for i in range(n_ids): 75 | argmax_counts[i] = np.array( 76 | sorted(list(argmax_counts[i]), reverse=True)) 77 | 78 | w = 0.9 / n_ids 79 | dx = 0.05 / n_ids + np.arange(n_ids)*w - 0.3 80 | 81 | n_templates = len(argmax_counts[0]) 82 | n_templates = 15 83 | x = np.array(range(n_templates)) 84 | template_fig = plt.figure() 85 | # plt.title(cat_desc) 86 | ax = plt.gca() 87 | for i in range(n_ids): 88 | ax.bar(x + dx[i], argmax_counts[i][:n_templates], 89 | width=w, color=colors[i], align='center', label=labels[i]) 90 | plt.xlabel('Template', fontsize=20) 91 | plt.ylabel('Normalized frequency', fontsize=20) 92 | plt.xticks(np.arange(n_templates), ('',)*n_templates) 93 | 94 | ax.legend(loc='upper right') 95 | plt.legend(prop={'size': 20}) 96 | 97 | x = np.array(range(n_bins)) / n_bins 98 | w = 0.9 / (n_ids*n_bins) 99 | offset = (1./n_bins - n_ids*w) / 2 100 | dx = np.array(range(n_ids)) * w + offset 101 | # gamma_fig = plt.figure() 102 | # ax = plt.gca() 103 | # for i in range(n_ids): 104 | # ax.bar(x + dx[i], counts[i], width=w, color=colors[i], 105 | # align='edge', label=labels[i]) 106 | # plt.xlabel('$\max_t\gamma^{(t)}$') 107 | # plt.ylabel('Normalized frequency') 108 | # ax.legend(loc='upper right') 109 | 110 | if save: 111 | import os 112 | folder = os.path.join( 113 | os.path.realpath(os.path.dirname(__file__)), 'figs') 114 | if not os.path.isdir(folder): 115 | os.makedirs(folder) 116 | fn = os.path.join(folder, '%s_template_counts.eps' % cat_desc) 117 | template_fig.savefig(fn, format='eps') 118 | # gamma_fig.savefig('max_gamma_counts.eps', format='eps') 119 | else: 120 | plt.show() 121 | 122 | 123 | if __name__ == '__main__': 124 | import argparse 125 | 126 | parser = argparse.ArgumentParser() 127 | parser.add_argument( 128 | 'cat', help='cat_desc to analyse') 129 | parser.add_argument('-s', '--save', action='store_true') 130 | args = parser.parse_args() 131 | analyse(args.cat, args.save) 132 | -------------------------------------------------------------------------------- /paper/sup_vid.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | import os 4 | import random 5 | from progress.bar import IncrementalBar 6 | import numpy as np 7 | from mayavi import mlab 8 | from dids import Dataset 9 | from template_ffd.inference.predictions import get_predictions_dataset 10 | from template_ffd.model import get_builder 11 | from template_ffd.templates.ffd import get_ffd_dataset 12 | from template_ffd.templates.mesh import get_template_mesh_dataset 13 | from shapenet.core.meshes import get_mesh_dataset 14 | from shapenet.core.blender_renderings.config import RenderConfig 15 | 16 | 17 | def get_source(vertices, faces, opacity=0.2): 18 | x, z, y = vertices.T 19 | mesh = mlab.triangular_mesh( 20 | x, y, z, faces, color=(0, 0, 1), opacity=opacity) 21 | return mesh.mlab_source 22 | 23 | 24 | def update(source, vertices, angle_change=2): 25 | az, el, dist, focal = mlab.view() 26 | x, z, y = vertices.T 27 | source.set(x=x, y=y, z=z) 28 | mlab.view(az+angle_change, el, dist, focal) 29 | 30 | 31 | def get_vertices(b, p, dp, n_frames): 32 | return [np.matmul(b, p + t*dp) for t in np.linspace(0, 1, n_frames)] 33 | 34 | 35 | def vis_anim(b, p, dp, faces, duration, fps): 36 | n_frames = duration * fps 37 | delay = 1000 // fps 38 | angle_change = 360 // n_frames 39 | mlab.figure() 40 | vertices = get_vertices(b, p, dp, n_frames) 41 | source = get_source(vertices[0], faces) 42 | 43 | @mlab.animate(delay=delay) 44 | def anim(): 45 | for v in 
vertices: 46 | update(source, v, angle_change=angle_change) 47 | yield 48 | 49 | anim() 50 | 51 | 52 | def vis(b, p, dp, faces, gt_mesh, image, duration=5, fps=2): 53 | import matplotlib.pyplot as plt 54 | plt.imshow(image) 55 | mlab.figure() 56 | v, f = (np.array(gt_mesh[k]) for k in ('vertices', 'faces')) 57 | x, z, y = v.T 58 | mlab.triangular_mesh(x, y, z, f, color=(0, 0, 1), opacity=0.2) 59 | mlab.figure() 60 | vis_anim(b, p, dp, faces, duration, fps) 61 | plt.show(block=False) 62 | mlab.show() 63 | plt.close() 64 | 65 | 66 | def frame_fn(frame_index): 67 | return 'frame%04d.png' % frame_index 68 | 69 | 70 | def save_frames(source, vertices, images_dir): 71 | print('Saving frames...') 72 | if not os.path.isdir(images_dir): 73 | os.makedirs(images_dir) 74 | bar = IncrementalBar(max=len(vertices)) 75 | angle_change = 360 // len(vertices) 76 | for i, v in enumerate(vertices): 77 | update(source, v, angle_change=angle_change) 78 | mlab.savefig(filename=os.path.join(images_dir, frame_fn(i))) 79 | bar.next() 80 | bar.finish() 81 | mlab.close() 82 | 83 | 84 | def merge_frames(video_path, images_dir, fps=50): 85 | import subprocess 86 | subprocess.call([ 87 | 'ffmpeg', 88 | '-framerate', str(fps), 89 | '-i', '/%s/frame%%04d.png' % images_dir, 90 | '-c:v', 'libx264', 91 | '-profile:v', 'high', 92 | '-crf', '20', 93 | '-pix_fmt', 'yuv420p', 94 | video_path 95 | ]) 96 | 97 | 98 | def save_anim(subdir, b, p, dp, faces, duration=5, fps=50): 99 | n_frames = int(fps * duration) 100 | images_dir = os.path.join(subdir, 'video_frames') 101 | vertices = get_vertices(b, p, dp, n_frames) 102 | source = get_source(vertices[0], faces) 103 | save_frames(source, vertices, images_dir) 104 | video_path = os.path.join(subdir, 'deformation.mp4') 105 | merge_frames(video_path, images_dir, fps) 106 | 107 | 108 | def save(subdir, b, p, dp, faces, gt_mesh, image, duration=5, fps=50): 109 | from scipy.misc import imsave 110 | from util3d.mesh.obj_io import write_obj 111 | imsave(os.path.join(subdir, 'image.png'), image) 112 | v, f = (np.array(gt_mesh[k]) for k in ('vertices', 'faces')) 113 | write_obj(os.path.join(subdir, 'gt_mesh.obj'), v, f) 114 | save_anim(subdir, b, p, dp, faces, duration, fps) 115 | 116 | 117 | def get_data(model_id, example_ids=None): 118 | edge_length_threshold = 0.02 119 | builder = get_builder(model_id) 120 | cat_id = builder.cat_id 121 | 122 | with get_ffd_dataset(cat_id, edge_length_threshold=0.02) as ffd_ds: 123 | template_ids, bs, ps = zip(*builder.get_ffd_data(ffd_ds)) 124 | 125 | with get_template_mesh_dataset(cat_id, edge_length_threshold) as mesh_ds: 126 | faces = [np.array(mesh_ds[e]['faces']) for e in template_ids] 127 | 128 | predictions_ds = get_predictions_dataset(model_id) 129 | mesh_ds = get_mesh_dataset(cat_id) 130 | image_ds = RenderConfig().get_dataset(cat_id, builder.view_index) 131 | zipped = Dataset.zip(predictions_ds, mesh_ds, image_ds) 132 | with zipped: 133 | if example_ids is None: 134 | example_ids = list(predictions_ds.keys()) 135 | random.shuffle(example_ids) 136 | for example_id in example_ids: 137 | print(example_id) 138 | pred, mesh, image = zipped[example_id] 139 | i = np.argmax(pred['probs']) 140 | dp = np.array(pred['dp'][i]) 141 | b = bs[i] 142 | p = ps[i] 143 | yield example_id, b, p, dp, faces[i], mesh, image 144 | 145 | 146 | def vis_all(model_id, example_ids=None): 147 | for example_id, b, p, dp, f, mesh, image in get_data( 148 | model_id, example_ids): 149 | vis(b, p, dp, f, mesh, image) 150 | 151 | 152 | def save_all(model_id, example_ids): 153 | 
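# Frames and the encoded mp4 for each example are written under
# sup_vid_results/<model_id>/<example_id>/, alongside the query image and
# ground truth mesh.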
root_dir = os.path.join( 154 | os.path.realpath(os.path.dirname(__file__)), 'sup_vid_results', 155 | model_id) 156 | for example_id, b, p, dp, f, mesh, image in get_data( 157 | model_id, example_ids): 158 | subdir = os.path.join(root_dir, example_id) 159 | if not os.path.isdir(subdir): 160 | os.makedirs(subdir) 161 | save(subdir, b, p, dp, f, mesh, image, fps=50) 162 | 163 | 164 | # cat_desc = 'chair' 165 | # example_ids = [ 166 | # '52cfbd8c8650402ba72559fc4f86f700', 167 | # # '8590bac753fbcccb203a669367e5b2a', 168 | # '353bbd3b916426d24502f857a1cf320e', 169 | # ] 170 | 171 | # cat_desc = 'plane' 172 | # example_ids = [ 173 | # '7bc46908d079551eed02ab0379740cae', 174 | # '5aeb583ee6e0e4ea42d0e83abdfab1fd', 175 | # 'bbd8e6b06d8906d5eccd82bb51193a7f', 176 | # ] 177 | 178 | cat_desc = 'car' 179 | example_ids = [ 180 | # '7d7ace3866016bf6fef78228d4881428', 181 | '8d26c4ebd58fbe678ba7af9f04c27920', 182 | '764f08cd895e492e5dca6305fb9f97ca', 183 | 'e2722a39dbc33044bbecf72e56fe7e5d' 184 | ] 185 | 186 | # cat_desc = 'sofa' 187 | # example_ids = [ 188 | # '2e5d49e60a1f3abae9deec47d8412ee', 189 | # 'db8c451f7b01ae88f91663a74ccd2338', 190 | # 'e3b28c9216617a638ab9d2d7b1d714', 191 | # ] 192 | 193 | 194 | # cat_desc = 'table' 195 | # example_ids = [ 196 | # 'd3fd6d332e6e8bccd5382f3f8f33a9f4', 197 | # '5d00596375ec8bd89940e75c3dc3e7', 198 | # # 'df7761a3b4ac638c9eaceb124b71b7be', 199 | # # '60ef2830979fd08ec72d4ae978770752', 200 | # # '5ac1ba406888f05e855931d119219022', 201 | # ] 202 | 203 | regime = 'e' 204 | model_id = '%s_%s' % (regime, cat_desc) 205 | # vis_all(model_id) 206 | save_all(model_id, example_ids) 207 | -------------------------------------------------------------------------------- /paper/top_k.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | import random 4 | import numpy as np 5 | import matplotlib.pyplot as plt 6 | from mayavi import mlab 7 | 8 | from dids import Dataset 9 | from shapenet.core.blender_renderings.config import RenderConfig 10 | from shapenet.core.meshes import get_mesh_dataset 11 | from shapenet.core import cat_desc_to_id 12 | 13 | from template_ffd.inference.predictions import get_predictions_dataset 14 | from template_ffd.data.ids import get_example_ids 15 | from template_ffd.model import get_builder 16 | 17 | 18 | regime = 'e' 19 | cat_desc = 'chair' 20 | view_index = 5 21 | edge_length_threshold = 0.02 22 | 23 | shuffle = True 24 | k = 3 25 | 26 | cat_id = cat_desc_to_id(cat_desc) 27 | model_id = '%s_%s' % (regime, cat_desc) 28 | builder = get_builder(model_id) 29 | 30 | image_ds = RenderConfig().get_dataset(cat_id, view_index) 31 | gt_mesh_ds = get_mesh_dataset(cat_id) 32 | predictions_ds = get_predictions_dataset(model_id) 33 | 34 | top_k_mesh_fn = builder.get_prediction_to_top_k_mesh_fn( 35 | edge_length_threshold, k) 36 | 37 | all_ds = Dataset.zip(image_ds, gt_mesh_ds, predictions_ds) 38 | 39 | 40 | def vis(): 41 | 42 | def vis_mesh(mesh, include_wireframe=False, **kwargs): 43 | from util3d.mayavi_vis import vis_mesh as vm 44 | v, f = (np.array(mesh[k]) for k in ('vertices', 'faces')) 45 | vm(v, f, include_wireframe=include_wireframe, **kwargs) 46 | 47 | example_ids = list(get_example_ids(cat_id, 'eval')) 48 | random.shuffle(example_ids) 49 | 50 | with all_ds: 51 | for example_id in example_ids: 52 | print(example_id) 53 | image, gt_mesh, predictions = all_ds[example_id] 54 | meshes = top_k_mesh_fn( 55 | *(np.array(predictions[k]) for k in ('probs', 'dp'))) 56 | plt.imshow(image) 57 | 
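# One figure per mesh: ground truth in blue, then each top-k deformed mesh in
# green alongside its undeformed template in red.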
mlab.figure() 58 | vis_mesh(gt_mesh, color=(0, 0, 1)) 59 | for mesh in meshes: 60 | v, f, ov = (mesh[k] for k in 61 | ('vertices', 'faces', 'original_vertices')) 62 | mlab.figure() 63 | vis_mesh({'vertices': v, 'faces': f}, color=(0, 1, 0)) 64 | mlab.figure() 65 | vis_mesh({'vertices': ov, 'faces': f}, color=(1, 0, 0)) 66 | 67 | plt.show(block=False) 68 | mlab.show() 69 | plt.close() 70 | 71 | 72 | def export(example_id): 73 | import os 74 | from util3d.mesh.obj_io import write_obj 75 | from scipy.misc import imsave 76 | save_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), 77 | 'top_k_results', example_id) 78 | if not os.path.isdir(save_dir): 79 | os.makedirs(save_dir) 80 | with all_ds: 81 | print(example_id) 82 | image, gt_mesh, predictions = all_ds[example_id] 83 | meshes = top_k_mesh_fn( 84 | *(np.array(predictions[k]) for k in ('probs', 'dp'))) 85 | for i, mesh in enumerate(meshes): 86 | ov, v, f = ( 87 | mesh[k] for k in ('original_vertices', 'vertices', 'faces')) 88 | write_obj(os.path.join(save_dir, 'template%d.obj' % i), ov, f) 89 | write_obj(os.path.join(save_dir, 'deformed%d.obj' % i), v, f) 90 | v, f = (np.array(gt_mesh[k]) for k in ('vertices', 'faces')) 91 | write_obj(os.path.join(save_dir, 'ground_truth.obj'), v, f) 92 | imsave(os.path.join(save_dir, 'image.png'), image) 93 | 94 | 95 | # chair 96 | export('114f72b38dcabdf0823f29d871e57676') 97 | -------------------------------------------------------------------------------- /scripts/.gitignore: -------------------------------------------------------------------------------- 1 | _profiles/* 2 | -------------------------------------------------------------------------------- /scripts/chamfer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_and_report( 5 | pre_sampled, model_id, n_samples, cat_desc, edge_length_threshold, 6 | overwrite=False): 7 | import template_ffd.eval.chamfer as chamfer 8 | kwargs = dict( 9 | pre_sampled=pre_sampled, 10 | model_id=model_id, 11 | n_samples=n_samples, 12 | cat_desc=cat_desc, 13 | edge_length_threshold=edge_length_threshold, 14 | ) 15 | if pre_sampled: 16 | kwargs.pop('edge_length_threshold') 17 | mean = chamfer.get_chamfer_average(**kwargs) 18 | print(mean) 19 | 20 | 21 | if __name__ == '__main__': 22 | import argparse 23 | parser = argparse.ArgumentParser() 24 | parser.add_argument( 25 | 'model_id', help='id of model defined in params') 26 | parser.add_argument('-o', '--overwrite', action='store_true') 27 | parser.add_argument('-post', '--post_sampled', action='store_true') 28 | parser.add_argument('-n', '--n_samples', type=int, default=1024) 29 | parser.add_argument('-c', '--cat_desc', type=str, nargs='*') 30 | parser.add_argument( 31 | '-t', '--edge_length_threshold', type=float, default=0.02) 32 | args = parser.parse_args() 33 | 34 | create_and_report( 35 | not args.post_sampled, 36 | args.model_id, 37 | args.n_samples, 38 | args.cat_desc, 39 | args.edge_length_threshold, 40 | args.overwrite) 41 | -------------------------------------------------------------------------------- /scripts/check_predictions.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | """Script for checking if a given model has predictions for all example_ids.""" 3 | 4 | 5 | def check_predictions(model_id): 6 | from template_ffd.inference.predictions import get_predictions_dataset 7 | from template_ffd.model import get_builder 8 | from template_ffd.data.ids import 
get_example_ids 9 | builder = get_builder(model_id) 10 | cat_id = builder.cat_id 11 | example_ids = get_example_ids(cat_id, 'eval') 12 | 13 | missing = [] 14 | with get_predictions_dataset(model_id, 'r') as dataset: 15 | for example_id in example_ids: 16 | if example_id not in dataset: 17 | missing.append(example_id) 18 | else: 19 | example = dataset[example_id] 20 | if not all(k in example for k in ('probs', 'dp')): 21 | missing.append(example_id) 22 | 23 | if len(missing) == 0: 24 | print('No predictions missing!') 25 | else: 26 | print('%d / %d predictions missing' % (len(missing), len(example_ids))) 27 | for example_id in missing: 28 | print(example_id) 29 | 30 | 31 | if __name__ == '__main__': 32 | import argparse 33 | parser = argparse.ArgumentParser() 34 | parser.add_argument('model_id') 35 | args = parser.parse_args() 36 | check_predictions(args.model_id) 37 | -------------------------------------------------------------------------------- /scripts/clear_results.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def clear_results(model_id, actual=False): 5 | import os 6 | import shutil 7 | from template_ffd.inference.predictions import get_predictions_data_path 8 | from template_ffd.inference.meshes import InferredMeshManager 9 | from template_ffd.inference.clouds import get_cloud_manager 10 | from template_ffd.inference.voxels import get_voxel_subdir 11 | from template_ffd.eval.chamfer import \ 12 | get_chamfer_manager, get_template_chamfer_manager 13 | from template_ffd.eval.iou import IouAutoSavingManager 14 | from template_ffd.eval.ffd_emd import get_template_emd_manager 15 | from template_ffd.eval.ffd_emd import get_emd_manager 16 | 17 | def maybe_remove(path): 18 | if os.path.isfile(path): 19 | print('Removing file %s' % path) 20 | if actual: 21 | os.remove(path) 22 | elif os.path.isdir(path): 23 | print('Removing subdir %s' % path) 24 | if actual: 25 | shutil.rmtree(path) 26 | 27 | predictions_path = get_predictions_data_path(model_id) 28 | maybe_remove(predictions_path) 29 | maybe_remove(get_cloud_manager(model_id, pre_sampled=True).path) 30 | maybe_remove(get_chamfer_manager(model_id, pre_sampled=True).path) 31 | maybe_remove(get_template_chamfer_manager(model_id).path) 32 | maybe_remove(get_emd_manager(model_id, pre_sampled=True).path) 33 | maybe_remove(get_template_emd_manager(model_id).path) 34 | 35 | for elt in (None, 0.1, 0.05, 0.02, 0.01): 36 | maybe_remove(InferredMeshManager(model_id, elt).path) 37 | 38 | maybe_remove(get_cloud_manager( 39 | model_id, pre_sampled=False, edge_length_threshold=elt).path) 40 | 41 | for filled in (True, False): 42 | subdir = get_voxel_subdir(model_id, elt, filled=filled) 43 | maybe_remove(subdir) 44 | maybe_remove(IouAutoSavingManager(model_id, elt, filled).path) 45 | 46 | kwargs = dict( 47 | model_id=model_id, edge_length_threshold=elt) 48 | for ps in (True, False): 49 | maybe_remove(get_chamfer_manager(pre_sampled=ps, **kwargs).path) 50 | maybe_remove(get_emd_manager(pre_sampled=ps, **kwargs).path) 51 | 52 | if not actual: 53 | print('NOTE: this was a dry run. 
Files not actually removed') 54 | print('Use -a for actual run.') 55 | 56 | 57 | if __name__ == '__main__': 58 | import argparse 59 | 60 | parser = argparse.ArgumentParser() 61 | parser.add_argument('model_id') 62 | parser.add_argument( 63 | '-a', '--actual_run', action='store_true', help='actual run') 64 | 65 | args = parser.parse_args() 66 | clear_results(args.model_id, args.actual_run) 67 | -------------------------------------------------------------------------------- /scripts/create_ffd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_ffd( 5 | n=3, cat_descs=None, edge_length_threshold=None, n_samples=None, 6 | overwrite=False): 7 | from shapenet.core import cat_desc_to_id 8 | from template_ffd.templates.ids import get_templated_cat_ids 9 | from template_ffd.templates.ffd import create_ffd_data 10 | if cat_descs is None or len(cat_descs) == 0: 11 | cat_ids = get_templated_cat_ids() 12 | else: 13 | cat_ids = [cat_desc_to_id(c) for c in cat_descs] 14 | for cat_id in cat_ids: 15 | create_ffd_data( 16 | cat_id, n=n, edge_length_threshold=edge_length_threshold, 17 | n_samples=n_samples, overwrite=overwrite) 18 | 19 | 20 | if __name__ == '__main__': 21 | import argparse 22 | 23 | parser = argparse.ArgumentParser() 24 | parser.add_argument('-c', '--cats', default=None, nargs='*') 25 | parser.add_argument('-n', type=int, default=3) 26 | parser.add_argument( 27 | '-e', '--edge_length_threshold', default=None, type=float) 28 | parser.add_argument('-s', '--n_samples', default=None, type=int) 29 | parser.add_argument('-o', '--overwrite', action='store_true') 30 | 31 | args = parser.parse_args() 32 | create_ffd( 33 | args.n, args.cats, args.edge_length_threshold, args.n_samples, 34 | args.overwrite) 35 | -------------------------------------------------------------------------------- /scripts/create_split_mesh.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_split_mesh( 5 | cat_desc, edge_length_threshold, overwrite=False, 6 | start_threshold=None): 7 | """Create split mesh data for templates.""" 8 | from shapenet.core import cat_desc_to_id 9 | from shapenet.core.meshes.config import get_mesh_config 10 | from template_ffd.templates.ids import get_template_ids 11 | cat_id = cat_desc_to_id(cat_desc) 12 | example_ids = get_template_ids(cat_id) 13 | config = get_mesh_config(edge_length_threshold) 14 | init = None if start_threshold is None else get_mesh_config( 15 | start_threshold) 16 | config.create_cat_data(cat_id, example_ids, overwrite, init) 17 | 18 | 19 | if __name__ == '__main__': 20 | import argparse 21 | parser = argparse.ArgumentParser() 22 | parser.add_argument('cat', type=str) 23 | parser.add_argument('edge_length_threshold', type=float) 24 | parser.add_argument( 25 | '-i', '--initial_edge_length_threshold', type=float) 26 | parser.add_argument('-o', '--overwrite', action='store_true') 27 | 28 | args = parser.parse_args() 29 | create_split_mesh( 30 | args.cat, args.edge_length_threshold, args.overwrite, 31 | args.initial_edge_length_threshold) 32 | -------------------------------------------------------------------------------- /scripts/create_voxels.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_voxels( 5 | model_id, edge_length_threshold, filled, overwrite, cat_desc): 6 | if model_id is None: 7 | if cat_desc is None: 8 | raise ValueError('One of 
model_id or cat_desc must be supplied') 9 | from template_ffd.data.voxels import create_filled_gt_data 10 | from shapenet.core import cat_desc_to_id 11 | cat_id = cat_desc_to_id(cat_desc) 12 | create_filled_gt_data(cat_id, overwrite=overwrite) 13 | else: 14 | from template_ffd.inference.voxels import create_voxel_data 15 | create_voxel_data( 16 | model_id, edge_length_threshold, filled=filled, 17 | overwrite=overwrite) 18 | 19 | 20 | if __name__ == '__main__': 21 | import argparse 22 | parser = argparse.ArgumentParser() 23 | parser.add_argument('model_id', type=str, default=None, nargs='?') 24 | parser.add_argument('-c', '--cat', type=str, default=None) 25 | parser.add_argument( 26 | '-t', '--edge_length_threshold', type=float, default=0.1) 27 | parser.add_argument('-f', '--filled', action='store_true') 28 | parser.add_argument('-o', '--overwrite', action='store_true') 29 | 30 | args = parser.parse_args() 31 | create_voxels( 32 | args.model_id, args.edge_length_threshold, args.filled, args.overwrite, 33 | cat_desc=args.cat) 34 | -------------------------------------------------------------------------------- /scripts/eval.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def eval_model(model_id): 5 | import tensorflow as tf 6 | from template_ffd.model import get_builder 7 | tf.logging.set_verbosity(tf.logging.INFO) 8 | builder = get_builder(model_id) 9 | builder.initialize_variables() 10 | print(builder.eval()) 11 | 12 | 13 | if __name__ == '__main__': 14 | import argparse 15 | 16 | parser = argparse.ArgumentParser() 17 | parser.add_argument( 18 | 'model_id', help='id of model defined in params') 19 | args = parser.parse_args() 20 | eval_model(args.model_id) 21 | -------------------------------------------------------------------------------- /scripts/ffd_emd.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_and_report( 5 | pre_sampled, model_id, n_samples, edge_length_threshold, 6 | overwrite=False): 7 | import template_ffd.eval.ffd_emd as emd 8 | kwargs = dict( 9 | pre_sampled=pre_sampled, 10 | model_id=model_id, 11 | n_samples=n_samples, 12 | edge_length_threshold=edge_length_threshold, 13 | ) 14 | if pre_sampled: 15 | kwargs.pop('edge_length_threshold') 16 | print(emd.get_emd_average(**kwargs)) 17 | 18 | 19 | if __name__ == '__main__': 20 | import argparse 21 | parser = argparse.ArgumentParser() 22 | parser.add_argument( 23 | 'model_id', help='id of model defined in params') 24 | parser.add_argument('-o', '--overwrite', action='store_true') 25 | # parser.add_argument('-pre', '--pre_sampled', action='store_true') 26 | parser.add_argument('-post', '--post_sampled', action='store_true') 27 | parser.add_argument('-n', '--n_samples', type=int, default=1024) 28 | parser.add_argument( 29 | '-t', '--edge_length_threshold', type=float, default=0.02) 30 | args = parser.parse_args() 31 | 32 | create_and_report( 33 | not args.post_sampled, 34 | args.model_id, 35 | args.n_samples, 36 | args.edge_length_threshold, 37 | args.overwrite) 38 | -------------------------------------------------------------------------------- /scripts/infer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def generate_inferences( 5 | model_id, overwrite=False): 6 | from template_ffd.inference.predictions import create_predictions_data 7 | create_predictions_data(model_id, overwrite=overwrite) 8 | 9 | 10 | if 
__name__ == '__main__': 11 | import argparse 12 | parser = argparse.ArgumentParser() 13 | parser.add_argument( 14 | 'model_id', help='id of model defined in params') 15 | parser.add_argument('-v', '--view_index', default=None, type=int) 16 | parser.add_argument('-o', '--overwrite', action='store_true') 17 | args = parser.parse_args() 18 | generate_inferences( 19 | args.model_id, overwrite=args.overwrite) 20 | -------------------------------------------------------------------------------- /scripts/iou.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_and_report( 5 | model_id, edge_length_threshold, filled, overwrite=False): 6 | import template_ffd.eval.iou as iou 7 | print(iou.get_iou_average( 8 | model_id=model_id, 9 | edge_length_threshold=edge_length_threshold, 10 | filled=filled)) 11 | 12 | 13 | if __name__ == '__main__': 14 | import argparse 15 | parser = argparse.ArgumentParser() 16 | parser.add_argument( 17 | 'model_id', help='id of model defined in params') 18 | parser.add_argument('-o', '--overwrite', action='store_true') 19 | parser.add_argument( 20 | '-t', '--edge_length_threshold', type=float, default=0.02) 21 | # parser.add_argument('-f', '--filled', action='store_true') 22 | parser.add_argument('-ho', '--hollow', action='store_true') 23 | args = parser.parse_args() 24 | 25 | create_and_report( 26 | args.model_id, 27 | args.edge_length_threshold, 28 | not args.hollow, 29 | args.overwrite) 30 | -------------------------------------------------------------------------------- /scripts/profile.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def main(model_id, skip_runs=10): 5 | import os 6 | import tensorflow as tf 7 | from template_ffd.model import get_builder 8 | from tf_toolbox.profile import create_profile 9 | builder = get_builder(model_id) 10 | 11 | def graph_fn(): 12 | mode = tf.estimator.ModeKeys.TRAIN 13 | features, labels = builder.get_inputs(mode) 14 | spec = builder.get_estimator_spec(features, labels, mode) 15 | return spec.train_op 16 | 17 | folder = os.path.join( 18 | os.path.realpath(os.path.dirname(__file__)), '_profiles') 19 | if not os.path.isdir(folder): 20 | os.makedirs(folder) 21 | filename = os.path.join(folder, '%s.json' % model_id) 22 | 23 | create_profile(graph_fn, filename, skip_runs) 24 | 25 | 26 | if __name__ == '__main__': 27 | import argparse 28 | 29 | parser = argparse.ArgumentParser() 30 | parser.add_argument('model_id', help='id of model defined in params') 31 | parser.add_argument('-s', '--skip_runs', type=int, default=10) 32 | args = parser.parse_args() 33 | 34 | main(args.model_id, args.skip_runs) 35 | -------------------------------------------------------------------------------- /scripts/save_inferred_meshes.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def create_inferred_meshes(model_id, edge_length_threshold): 5 | from template_ffd.inference.meshes import get_inferred_mesh_dataset 6 | get_inferred_mesh_dataset(model_id, edge_length_threshold, lazy=False) 7 | 8 | 9 | if __name__ == '__main__': 10 | import argparse 11 | parser = argparse.ArgumentParser() 12 | parser.add_argument( 13 | 'model_id', help='id of model defined in params') 14 | # parser.add_argument('-o', '--overwrite', action='store_true') 15 | parser.add_argument( 16 | '-t', '--edge_length_threshold', type=float, default=0.02) 17 | args = parser.parse_args() 18 | 19 | 
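# NOTE: the --overwrite flag is commented out above, so existing inferred
# meshes are presumably left untouched.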
create_inferred_meshes( 20 | args.model_id, 21 | args.edge_length_threshold, 22 | # args.overwrite 23 | ) 24 | -------------------------------------------------------------------------------- /scripts/test_model.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def main(model_id): 5 | import tensorflow as tf 6 | import tf_toolbox.testing 7 | from template_ffd.model import get_builder 8 | 9 | builder = get_builder(model_id) 10 | 11 | def get_train_op(): 12 | features, labels = builder.get_train_inputs() 13 | return builder.get_estimator_spec( 14 | features, labels, tf.estimator.ModeKeys.TRAIN).train_op 15 | 16 | update_ops_run = tf_toolbox.testing.do_update_ops_run(get_train_op) 17 | tf_toolbox.testing.report_train_val_changes(get_train_op) 18 | 19 | if update_ops_run: 20 | print('Update ops run :)') 21 | else: 22 | print('Update ops not run :(') 23 | 24 | 25 | if __name__ == '__main__': 26 | import argparse 27 | 28 | parser = argparse.ArgumentParser() 29 | parser.add_argument('model_id', help='id of model defined in params') 30 | args = parser.parse_args() 31 | 32 | main(args.model_id) 33 | -------------------------------------------------------------------------------- /scripts/train.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | 4 | def train(model_id, max_steps): 5 | import tensorflow as tf 6 | from template_ffd.model import get_builder 7 | tf.logging.set_verbosity(tf.logging.INFO) 8 | builder = get_builder(model_id) 9 | builder.initialize_variables() 10 | if max_steps is None: 11 | max_steps = builder.default_max_steps 12 | builder.train(max_steps=max_steps) 13 | 14 | 15 | if __name__ == '__main__': 16 | import argparse 17 | 18 | parser = argparse.ArgumentParser() 19 | parser.add_argument( 20 | 'model_id', help='id of model defined in params') 21 | parser.add_argument('-s', '--max-steps', default=None, type=float) 22 | args = parser.parse_args() 23 | train(args.model_id, max_steps=args.max_steps) 24 | -------------------------------------------------------------------------------- /scripts/vis/clouds.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | def vis_clouds( 4 | model_id, pre_sampled=True, n_samples=1024, edge_length_threshold=0.1, 5 | shuffle=False): 6 | import random 7 | import numpy as np 8 | from mayavi import mlab 9 | import matplotlib.pyplot as plt 10 | from dids import Dataset 11 | from shapenet.core.blender_renderings.config import RenderConfig 12 | from shapenet.core.meshes import get_mesh_dataset 13 | from util3d.mayavi_vis import vis_point_cloud 14 | from util3d.mayavi_vis import vis_mesh 15 | from template_ffd.data.ids import get_example_ids 16 | from template_ffd.inference.clouds import get_inferred_cloud_dataset 17 | from template_ffd.model import get_builder 18 | builder = get_builder(model_id) 19 | cat_id = builder.cat_id 20 | kwargs = dict(model_id=model_id, n_samples=n_samples) 21 | if not pre_sampled: 22 | kwargs['edge_length_threshold'] = edge_length_threshold 23 | cloud_dataset = get_inferred_cloud_dataset( 24 | pre_sampled=pre_sampled, **kwargs) 25 | image_dataset = RenderConfig().get_dataset(cat_id, builder.view_index) 26 | 27 | example_ids = get_example_ids(cat_id, 'eval') 28 | if shuffle: 29 | example_ids = list(example_ids) 30 | random.shuffle(example_ids) 31 | mesh_dataset = get_mesh_dataset(cat_id) 32 | zipped_dataset = Dataset.zip(image_dataset, cloud_dataset, 
mesh_dataset) 33 | # zipped_dataset = Dataset.zip(image_dataset, cloud_dataset) 34 | with zipped_dataset: 35 | for example_id in example_ids: 36 | image, cloud, mesh = zipped_dataset[example_id] 37 | # image, cloud = zipped_dataset[example_id] 38 | plt.imshow(image) 39 | vis_point_cloud( 40 | np.array(cloud), color=(0, 1, 0), scale_factor=0.01) 41 | v, f = (np.array(mesh[k]) for k in ('vertices', 'faces')) 42 | vis_mesh( 43 | v, f, color=(0, 0, 1), opacity=0.1, include_wireframe=False) 44 | plt.show(block=False) 45 | mlab.show() 46 | plt.close() 47 | 48 | 49 | if __name__ == '__main__': 50 | import argparse 51 | parser = argparse.ArgumentParser() 52 | parser.add_argument( 53 | 'model_id', help='id of model defined in params') 54 | parser.add_argument('-o', '--overwrite', action='store_true') 55 | parser.add_argument('-pre', '--pre_sampled', action='store_true') 56 | parser.add_argument('-n', '--n_samples', type=int, default=1024) 57 | parser.add_argument( 58 | '-t', '--edge_length_threshold', type=float, default=0.1) 59 | parser.add_argument('-s', '--shuffle', action='store_true') 60 | args = parser.parse_args() 61 | vis_clouds( 62 | args.model_id, 63 | args.pre_sampled, 64 | args.n_samples, 65 | args.edge_length_threshold, 66 | args.shuffle 67 | ) 68 | -------------------------------------------------------------------------------- /scripts/vis/meshes.py: -------------------------------------------------------------------------------- 1 | def vis_mesh(model_id, edge_length_threshold, shuffle=False, wireframe=False): 2 | import numpy as np 3 | from mayavi import mlab 4 | from shapenet.core.meshes import get_mesh_dataset 5 | from shapenet.core import cat_desc_to_id 6 | from util3d.mayavi_vis import vis_mesh 7 | from template_ffd.inference.meshes import get_inferred_mesh_dataset 8 | from template_ffd.model import load_params 9 | import random 10 | 11 | def vis(mesh, **kwargs): 12 | v, f = (np.array(mesh[k]) for k in ('vertices', 'faces')) 13 | vis_mesh(v, f, include_wireframe=wireframe, **kwargs) 14 | 15 | cat_id = cat_desc_to_id(load_params(model_id)['cat_desc']) 16 | inf_mesh_dataset = get_inferred_mesh_dataset( 17 | model_id, edge_length_threshold) 18 | with inf_mesh_dataset: 19 | with get_mesh_dataset(cat_id) as gt_mesh_dataset: 20 | example_ids = list(inf_mesh_dataset.keys()) 21 | if shuffle: 22 | random.shuffle(example_ids) 23 | 24 | for example_id in example_ids: 25 | inf = inf_mesh_dataset[example_id] 26 | gt = gt_mesh_dataset[example_id] 27 | mlab.figure() 28 | vis(inf, color=(0, 1, 0), opacity=0.2) 29 | mlab.figure() 30 | vis(gt, opacity=0.2) 31 | mlab.show() 32 | 33 | 34 | if __name__ == '__main__': 35 | import argparse 36 | 37 | parser = argparse.ArgumentParser() 38 | parser.add_argument( 39 | 'model_id', help='id of model defined in params') 40 | parser.add_argument('-t', '--edge_length_threshold', default=None, 41 | type=float) 42 | parser.add_argument('-w', '--wireframe', action='store_true') 43 | parser.add_argument('-s', '--shuffle', action='store_true') 44 | args = parser.parse_args() 45 | vis_mesh( 46 | args.model_id, args.edge_length_threshold, args.wireframe, 47 | args.shuffle) 48 | -------------------------------------------------------------------------------- /scripts/vis/voxels.py: -------------------------------------------------------------------------------- 1 | def vis_voxels(model_id, edge_length_threshold, filled, shuffle=False): 2 | from mayavi import mlab 3 | from util3d.mayavi_vis import vis_voxels 4 | from shapenet.core import cat_desc_to_id 5 | from 

    cat_id = cat_desc_to_id(load_params(model_id)['cat_desc'])
    inf_mesh_dataset = get_inferred_mesh_dataset(
        model_id, edge_length_threshold)
    with inf_mesh_dataset:
        with get_mesh_dataset(cat_id) as gt_mesh_dataset:
            example_ids = list(inf_mesh_dataset.keys())
            if shuffle:
                random.shuffle(example_ids)

            for example_id in example_ids:
                inf = inf_mesh_dataset[example_id]
                gt = gt_mesh_dataset[example_id]
                mlab.figure()
                vis(inf, color=(0, 1, 0), opacity=0.2)
                mlab.figure()
                vis(gt, opacity=0.2)
                mlab.show()


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        'model_id', help='id of model defined in params')
    parser.add_argument('-t', '--edge_length_threshold', default=None,
                        type=float)
    parser.add_argument('-w', '--wireframe', action='store_true')
    parser.add_argument('-s', '--shuffle', action='store_true')
    args = parser.parse_args()
    vis_mesh(
        args.model_id, args.edge_length_threshold, shuffle=args.shuffle,
        wireframe=args.wireframe)
--------------------------------------------------------------------------------
/scripts/vis/voxels.py:
--------------------------------------------------------------------------------
def vis_voxels(model_id, edge_length_threshold, filled, shuffle=False):
    import random
    from mayavi import mlab
    from util3d.mayavi_vis import vis_voxels
    from shapenet.core import cat_desc_to_id
    from template_ffd.inference.voxels import get_voxel_dataset
    from template_ffd.data.voxels import get_gt_voxel_dataset
    from template_ffd.model import load_params
    from template_ffd.data.ids import get_example_ids
    cat_id = cat_desc_to_id(load_params(model_id)['cat_desc'])
    gt_ds = get_gt_voxel_dataset(cat_id, filled)
    inf_ds = get_voxel_dataset(model_id, edge_length_threshold)
    example_ids = get_example_ids(cat_id, 'eval')
    if shuffle:
        example_ids = list(example_ids)
        random.shuffle(example_ids)

    with gt_ds:
        with inf_ds:
            for example_id in example_ids:
                gt = gt_ds[example_id].data
                inf = inf_ds[example_id].data
                vis_voxels(gt, color=(0, 0, 1))
                mlab.figure()
                vis_voxels(inf, color=(0, 1, 0))
                mlab.show()


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'model_id', help='id of model defined in params')
    parser.add_argument(
        '-t', '--edge_length_threshold', type=float, default=0.1)
    parser.add_argument('-s', '--shuffle', action='store_true')
    parser.add_argument('-f', '--filled', action='store_true')
    args = parser.parse_args()
    vis_voxels(
        args.model_id, args.edge_length_threshold, args.filled, args.shuffle)
--------------------------------------------------------------------------------
/scripts/vis_inputs.py:
--------------------------------------------------------------------------------
#!/usr/bin/python


def main(model_id, mode):
    from template_ffd.model import get_builder
    builder = get_builder(model_id)
    builder.vis_inputs()  # note: `mode` is currently unused


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('model_id', help='id of model defined in params')
    parser.add_argument(
        '-m', '--mode', default='train', choices=['train', 'eval', 'infer'])
    args = parser.parse_args()

    main(args.model_id, args.mode)
--------------------------------------------------------------------------------
/scripts/vis_predictions.py:
--------------------------------------------------------------------------------
#!/usr/bin/python


def main(model_id):
    import tensorflow as tf
    from template_ffd.model import get_builder
    tf.logging.set_verbosity(tf.logging.INFO)
    builder = get_builder(model_id)
    builder.vis_predictions()


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('model_id', help='id of model defined in params')
    args = parser.parse_args()

    main(args.model_id)
--------------------------------------------------------------------------------
/templates/.gitignore:
--------------------------------------------------------------------------------
_ffd
_split_mesh
--------------------------------------------------------------------------------
/templates/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jackd/template_ffd/0f9111ffb340449ad87fbf52220273a15819ec4f/templates/__init__.py
--------------------------------------------------------------------------------
/templates/annotations_ffd.py:
--------------------------------------------------------------------------------
import dids.file_io.hdf5 as h
from ids import get_template_ids


def _calculate_ffd(n, vertices, points):
    import template_ffd.ffd.deform as ffd
    stu_origin, stu_axes = ffd.get_stu_params(vertices)
    dims = (n,) * 3
    return ffd.get_ffd(points, dims)
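
# `ffd.get_ffd` decomposes a point set into a Bernstein-polynomial basis
# matrix `b` and a lattice of control points `p` with points ~= b . p, so a
# control-point perturbation dp moves the points to b . (p + dp). A rough
# numpy sketch of the idea (not the library's exact API):
#   deformed_points = np.matmul(b, p + dp)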

class FfdAnnotations(h.Hdf5AutoSavingManager):
    def __init__(self, cat_id, n=3):
        self._cat_id = cat_id
        self._n = n

    @property
    def saving_message(self):
        return (
            'Creating annotations FFD data\n'
            'cat_id: %s\n'
            'n: %d\n' % (self._cat_id, self._n))

    @property
    def path(self):
        import os
        from path import templates_dir
        return os.path.join(
            templates_dir, '_ffd', str(self._n), 'annotations',
            '%s.hdf5' % self._cat_id)

    def get_lazy_dataset(self):
        import numpy as np
        from shapenet.core.meshes import get_mesh_dataset
        from shapenet.core.annotations.datasets import PointCloudDataset
        from dids import Dataset
        vertices_dataset = get_mesh_dataset(self._cat_id).map(
            lambda mesh: np.array(mesh['vertices']))
        points_dataset = PointCloudDataset(self._cat_id)
        zipped = Dataset.zip(vertices_dataset, points_dataset)

        def map_fn(inputs):
            vertices, points = inputs
            b, p = _calculate_ffd(self._n, vertices, points)
            return dict(b=b, p=p)

        with points_dataset:
            keys = [k for k in get_template_ids(self._cat_id)
                    if k in points_dataset]

        return zipped.map(map_fn).subset(keys)


def _get_annotations_ffd_dataset(cat_id, n=3):
    return FfdAnnotations(cat_id, n).get_saved_dataset()


def get_annotations_ffd_dataset(cat_id, n=3):
    if isinstance(cat_id, (list, tuple)):
        from dids.core import BiKeyDataset
        datasets = {
            c: _get_annotations_ffd_dataset(c, n=n) for c in cat_id}
        return BiKeyDataset(datasets)
    else:
        return _get_annotations_ffd_dataset(cat_id, n=n)
--------------------------------------------------------------------------------
/templates/ffd.py:
--------------------------------------------------------------------------------
import os
import numpy as np
import dids.file_io.hdf5 as h
from path import get_ffd_group_path
from mesh import get_template_mesh_dataset


def _calculate_ffd(vertices, faces, n=3, n_samples=None):
    import template_ffd.ffd.deform as ffd
    import util3d.mesh.sample as sample
    stu_origin, stu_axes = ffd.get_stu_params(vertices)
    if n_samples is None:
        points = vertices
    else:
        points = sample.sample_faces(vertices, faces, n_samples)
    dims = (n,) * 3
    return ffd.get_ffd(points, dims)
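
# With `n_samples=None` the decomposition is fit to the raw template
# vertices; passing `n_samples` fits it to points sampled over the faces
# instead, decoupling it from the template's tessellation. Assuming `dims`
# are per-axis Bernstein degrees, the default n=3 corresponds to a
# (3 + 1)**3 = 64 control-point lattice; e.g. (shapes are an assumption):
#   b, p = _calculate_ffd(vertices, faces, n=3, n_samples=16384)
#   # b: (16384, 64) basis weights, p: (64, 3) control points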

class FfdManager(h.Hdf5AutoSavingManager):
    def __init__(
            self, cat_id, n=3, edge_length_threshold=None, n_samples=None):
        self._cat_id = cat_id
        self._n = n
        self._edge_length_threshold = edge_length_threshold
        self._n_samples = n_samples

    @property
    def saving_message(self):
        return (
            'Creating FFD data\n'
            'cat_id: %s\n'
            'n: %d\n'
            'edge_length_threshold: %s\n'
            'n_samples: %s' % (
                self._cat_id, self._n, self._edge_length_threshold,
                self._n_samples))

    @property
    def path(self):
        return get_ffd_group_path(
            self._cat_id,
            self._n,
            self._edge_length_threshold,
            self._n_samples)

    def get_lazy_dataset(self):
        base = get_template_mesh_dataset(
            self._cat_id, self._edge_length_threshold)

        def map_fn(base):
            vertices, faces = (
                np.array(base[k]) for k in ('vertices', 'faces'))
            b, p = _calculate_ffd(vertices, faces, self._n, self._n_samples)
            return dict(b=b, p=p)

        return base.map(map_fn)


def create_ffd_data(
        cat_id, n=3, edge_length_threshold=None, n_samples=None,
        overwrite=False):
    FfdManager(cat_id, n, edge_length_threshold, n_samples).save_all(
        overwrite=overwrite)


def _get_ffd_dataset(cat_id, n=3, edge_length_threshold=None, n_samples=None):
    manager = FfdManager(
        cat_id=cat_id,
        n=n,
        edge_length_threshold=edge_length_threshold,
        n_samples=n_samples)
    if not os.path.isfile(manager.path):
        return manager.get_saved_dataset()
    else:
        return manager.get_saving_dataset()


def get_ffd_dataset(cat_ids, n=3, edge_length_threshold=None, n_samples=None):
    from dids.core import BiKeyDataset
    kwargs = dict(
        n=n, edge_length_threshold=edge_length_threshold, n_samples=n_samples)
    if isinstance(cat_ids, str):
        cat_ids = [cat_ids]
    datasets = {c: _get_ffd_dataset(c, **kwargs) for c in cat_ids}
    return BiKeyDataset(datasets)
--------------------------------------------------------------------------------
/templates/ids.py:
--------------------------------------------------------------------------------
def _get_template_ids():
    import json
    import path
    with open(path.template_ids_path, 'r') as f:
        template_ids = json.load(f)
    return template_ids


_template_ids = {k: tuple(v) for k, v in _get_template_ids().items()}


def get_template_ids(cat_id):
    return _template_ids[cat_id]


def get_templated_cat_ids():
    return _template_ids.keys()
--------------------------------------------------------------------------------
/templates/mesh.py:
--------------------------------------------------------------------------------
import numpy as np
import dids.file_io.hdf5 as h
from path import get_split_mesh_group_path
from ids import get_template_ids


class SplitTemplateMeshManager(h.Hdf5AutoSavingManager):
    def __init__(self, cat_id, edge_length_threshold, initial_thresh=None):
        self._cat_id = cat_id
        self._edge_length_threshold = edge_length_threshold
        self._initial_thresh = initial_thresh
        if initial_thresh is not None and \
                initial_thresh <= edge_length_threshold:
            raise ValueError(
                'initial_thresh must be greater than edge_length_threshold')

    @property
    def path(self):
        return get_split_mesh_group_path(
            self._edge_length_threshold, self._cat_id)

    @property
    def saving_message(self):
        return ('Creating split template mesh data\n'
                'cat_id: %s\n'
                'edge_length_threshold: %s\n' %
                (self._cat_id, self._edge_length_threshold))

    def get_lazy_dataset(self):
        from util3d.mesh.edge_splitter import split_to_threshold
        base = get_template_mesh_dataset(self._cat_id, self._initial_thresh)
        base = base.subset(get_template_ids(self._cat_id))

        def map_fn(mesh):
            vertices, faces = (
                np.array(mesh[k]) for k in ('vertices', 'faces'))
            vertices, faces = split_to_threshold(
                vertices, faces, self._edge_length_threshold)
            return dict(vertices=np.array(vertices), faces=np.array(faces))

        return base.map(map_fn)


def get_split_template_mesh_dataset(cat_id, edge_length_threshold):
    return SplitTemplateMeshManager(
        cat_id, edge_length_threshold).get_saved_dataset()
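
# `split_to_threshold` subdivides edges longer than the given threshold (as
# the name suggests), so lower thresholds yield denser template meshes and
# smoother deformed surfaces -- at the cost of larger files; see the README
# note on `edge_length_threshold` and inference data size.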

def _get_template_mesh_dataset(cat_id, edge_length_threshold=None):
    import shapenet.core.meshes as m
    if edge_length_threshold is None:
        return m.get_mesh_dataset(cat_id).subset(get_template_ids(cat_id))
    else:
        return get_split_template_mesh_dataset(
            cat_id=cat_id,
            edge_length_threshold=edge_length_threshold)


def get_template_mesh_dataset(cat_id, edge_length_threshold=None):
    if isinstance(cat_id, (list, tuple)):
        from dids.core import BiKeyDataset
        datasets = {c: _get_template_mesh_dataset(c, edge_length_threshold)
                    for c in cat_id}
        return BiKeyDataset(datasets)
    else:
        return _get_template_mesh_dataset(cat_id, edge_length_threshold)
--------------------------------------------------------------------------------
/templates/path.py:
--------------------------------------------------------------------------------
import os

templates_dir = os.path.realpath(os.path.dirname(__file__))
template_ids_path = os.path.join(templates_dir, 'templates.json')


def get_ffd_group_dir(n=3, edge_length_threshold=None, n_samples=None):
    root = os.path.join(templates_dir, '_ffd', str(n))
    if n_samples is None:
        es = 'base' if edge_length_threshold is None else \
            str(edge_length_threshold)
        return os.path.join(root, 'mesh', es)
    else:
        if edge_length_threshold is not None:
            raise ValueError(
                'Cannot have both n_samples and edge_length_threshold')
        return os.path.join(root, 'sampled', str(n_samples))


def get_ffd_group_path(
        cat_id, n=3, edge_length_threshold=None, n_samples=None):
    return os.path.join(
        get_ffd_group_dir(n, edge_length_threshold, n_samples),
        '%s.hdf5' % cat_id)


def get_split_mesh_group_dir(edge_length_threshold):
    d = os.path.join(templates_dir, '_split_mesh', str(edge_length_threshold))
    if not os.path.isdir(d):
        os.makedirs(d)
    return d


def get_split_mesh_group_path(edge_length_threshold, cat_id):
    return os.path.join(
        get_split_mesh_group_dir(edge_length_threshold), '%s.hdf5' % cat_id)
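
# For reference, the layouts these helpers produce, relative to templates/
# (hypothetical parameters, using the plane category id):
#   _ffd/3/mesh/base/02691156.hdf5      # FFD fit to raw template vertices
#   _ffd/3/mesh/0.02/02691156.hdf5      # ... fit to a split template mesh
#   _ffd/3/sampled/16384/02691156.hdf5  # ... fit to face-sampled points
#   _split_mesh/0.02/02691156.hdf5      # split template meshes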
"af55f398af2373aa18b14db3b83de9ff" 33 | ], 34 | "02958343": [ 35 | "12cd99c20b1a5a932e877e82c90c24d", 36 | "1acfbda4ce0ec524bedced414fad522f", 37 | "1d4b2404a00ef4bb627014ff98c41eb1", 38 | "373cf6c8f79376482d7d789814cae761", 39 | "3d358a98f90e8b4d5b1edf5d4f643136", 40 | "4ef6af15bcc78650bedced414fad522f", 41 | "5343e944a7753108aa69dfdc5532bb13", 42 | "63e0df089d6c1442f3aed64053e21b3c", 43 | "6b79cfceb6f614527e7afb83f93db294", 44 | "70899bf99412a69db38722563212fa4b", 45 | "78f9e32385dd7db27996cb12b5662363", 46 | "7f2b01a53684e72154b49557f8ea8b42", 47 | "857a3a01bd311511f200a72c9245aee7", 48 | "9fa56c19e4d54cca99c8d14f483ffc82", 49 | "a4fc879c642e8fc4a5a4c80d90b70728", 50 | "a88c4427e1f0e871d7755e7baabe8a6f", 51 | "aa5fac5424a05c6be092951e627bdb8b", 52 | "beedf39c8f5709bea9fe1734a6086750", 53 | "bf52cda8de7e5eff36dfef0450f0ee37", 54 | "bfa01c632be2eb06e8a3b392b986583", 55 | "c004e655af0b35e3bda72093f9b5aa73", 56 | "c59e3f28f42e6ca3186fee06ae26176f", 57 | "d1bf2bf3302c0ec0e21186de41a0101", 58 | "d224a01422266fc51b3c1c9f0ad4025", 59 | "d443e86ae023ceeb16abce8cb03e7794", 60 | "d9b2fc71e809140bbe40bb45ea25a041", 61 | "dadcc1200b43a12be8b7f81712644c1e", 62 | "db14f415a203403e2d7d789814cae761", 63 | "e3dff7195a2026dba4db43fa521d5c03", 64 | "ee0232b37ee6265bda72093f9b5aa73" 65 | ], 66 | "03001627": [ 67 | "1006be65e7bc937e9141f9b58470d646", 68 | "1007e20d5e811b308351982a6e40cf41", 69 | "100b18376b885f206ae9ad7e32c4139d", 70 | "1013f70851210a618f2e765c4a8ed3d", 71 | "1015e71a0d21b127de03ab2a27ba7531", 72 | "1016f4debe988507589aae130c1f06fb", 73 | "1022fe7dd03f6a4d4d5ad9f13ac9f4e7", 74 | "1028b32dc1873c2afe26a3ac360dbd4", 75 | "1031fc859dc3177a2f84cb7932f866fd", 76 | "1033ee86cc8bac4390962e4fb7072b86", 77 | "103a0a413d4c3353a723872ad91e4ed1", 78 | "103b75dfd146976563ed57e35c972b4b", 79 | "1055f78d441d170c4f3443b22038d340", 80 | "106c7f10c5bf5bd5f51f77a6d7299806", 81 | "1079635b3da12a812cee4bf5d0f11ffe", 82 | "10c08a28cae054e53a762233fffc49ea", 83 | "107ed94869ed6f1be13496cd332ce78f", 84 | "108238b535eb293cd79b19c7c4f0e293", 85 | "10d174a00639990492d9da2668ec34c", 86 | "10e523060bb5b51f9ee9f382b1dfb770", 87 | "ff529b9ad2d5c6abf7e98086e1ca9511", 88 | "113016635d554d5171fb733891076ecf", 89 | "11358c94662a68117e66b3e5c11f24d4", 90 | "1145248e1eba424d492d9da2668ec34c", 91 | "11525a18678f7ce6ae1e1181f20bb9c8", 92 | "117930a8f2e37f9b707cdefe012d0353", 93 | "117bd6da01905949a81116f5456ee312", 94 | "dfd7e14d25b81c1db5d9b03636b8bad3", 95 | "11d3fc4092e616a7a6fee8e2140acec9", 96 | "124ef426dfa0aa38ff6069724068a578" 97 | ], 98 | "04256520": [ 99 | "1037fd31d12178d396f164a988ef37cc", 100 | "1050790962944624febad4f49b26ec52", 101 | "105849baff12c6fc2bf2dcc31ba1713", 102 | "107637b6bdf8129d4904d89e9169817b", 103 | "107bce22d72f322eedf1bb0b62653056", 104 | "113a2544e062127d79414e04132a8bef", 105 | "1149bd16e834d8e6433619555ecca8aa", 106 | "1168fc14c294f7ac14038d588fd1342f", 107 | "118a7d6a1dfbbc14300703f05f8ccc25", 108 | "11d5e99e8faa10ff3564590844406360", 109 | "1230d31e3a6cbf309cd431573238602d", 110 | "12766a14eb23967492d9da2668ec34c", 111 | "1a4a8592046253ab5ff61a3a2a0e2484", 112 | "23eb95ad8124b45cc27ecf743c1aa320", 113 | "241876321940a2c976e9713f57a5fcb6", 114 | "2a8554af80cfa5e719fb4103277a6b93", 115 | "43720278eea721d27d18877f45b7c3cc", 116 | "438c3671222b3e6c800d7b7d07715065", 117 | "4424a906da6fd4c961bf0ba277ea473b", 118 | "44503d9ba877251a4b48718ea0a8b483", 119 | "77a5f44875119a6b5369e32fb818f337", 120 | "785ba264dfcf722bf284a86ef67b13e6", 121 | "79170ac3bea792317984fb9ec7e40829", 122 | 
"7ae657b39aa2be68ccd1bcd57588acf8", 123 | "7b0f429c12c00dcf4a06efdbafdd7ea", 124 | "7b8a8776c2bd135694e14fba4acebb36", 125 | "7b914fb42c8f2368393b1800bfc51a93", 126 | "9c1b448ec62cb9fb36dd029536673b0b", 127 | "a7e4616a2a315dfac5ddc26ef5560e77", 128 | "af0c4f45e0444ecb01c58badc8bbc39" 129 | ], 130 | "04379243": [ 131 | "1011e1c9812b84d2a9ed7bb5b55809f8", 132 | "104c8e90ecf0e5351ed672982b7954af", 133 | "104ebf7f96c77fb46a0faccc2a4015d8", 134 | "105b9a03ddfaf5c5e7828dbf1991f6a4", 135 | "109738784a0a6129a02c88fe01f2b9c1", 136 | "10a4e263f8087c4b8cf2bc41970d572a", 137 | "10b0d655bf4938eae1ab19b3beff6716", 138 | "10b5723ea035cb047464e25da6d2e90", 139 | "11103f4da22fe6604b3c42e318f3affc", 140 | "114a39da3b4bf118d42ec7e303174a87", 141 | "11520534ea9a889c7d36177f6cb74069", 142 | "1164897f678f3bd627e98d0f3d735480", 143 | "11aee78983b57cb34138477d68528833", 144 | "11b110b37b1cbfd6bdfce662c3df88af", 145 | "11c192ef34f5dea0a1bc88a716ad63b2", 146 | "11cd9cbf28d3918f1b17743c18fb63dc", 147 | "11cdaf2939502622815a10e5a35009c9", 148 | "1469f244a1968345e2d95336601deece", 149 | "16802a946bd714e819fb4103277a6b93", 150 | "1686831b1e585dd9729c5ef452d153c3", 151 | "30c88fa790ac14f750d31060ff1b5551", 152 | "3bfc7947fb8abec5d925b06c8689bfe2", 153 | "3c079f540fafa7e13b3db95ce254f64d", 154 | "3c1b4a85f3a287fe47d51fb55a1c2980", 155 | "41cdb5b619790d5a74eb542502c2205f", 156 | "41e1dd0f69afd7b093e18ebd46d61795", 157 | "41eda879e1b7eee2dec2e3fb3c73544", 158 | "41ffb4ec3d22e9dd9e7e7bd5f870f40d", 159 | "421657269cac10f7492d9da2668ec34c", 160 | "4c977a08c3969494d5883ca9b41ac387" 161 | ], 162 | "03211117": [ 163 | "d4d94e7a1f75a67e3f7b7c3393bbad8", 164 | "416674f64be11975bc4f8438441dcb1d", 165 | "a5939f4fbe1009687f2411014f221968", 166 | "93b69d3caf90a837e441f5bb6f88ca61", 167 | "d5d6824b5115b3d65167d3ead22db5b1", 168 | "5f73ccba7af987789744d3b3ee0cc03", 169 | "7467b25f70675892d50c22be0354e623", 170 | "cf4c78178c9dc8c292df4681ccc21025", 171 | "8f032c701a2d1de772167aadb6db5f77", 172 | "ba0f98e212668fdd22532be027c41b0c", 173 | "abc4a3eb2c6fbe8064d221a686772b82", 174 | "b3a975cb984a8fc6cc98452c8fce6b43", 175 | "e5dd90d78168e53741e88434245c899", 176 | "87882e55a8914e78a3cb15c59bd3ecf2", 177 | "b3ed6cea7ecd3f56e481cbc0aafd242a", 178 | "6c8f7736660f2e97e441f5bb6f88ca61", 179 | "191bc5c03d27789379857d0b1bb98706", 180 | "cc4b7dbffb52fdacaccd68c8aac6846c", 181 | "3c495b3a2c2af890acc9692a1d1e7dea", 182 | "d89cb5da6288ae91a21dea5979316c3e", 183 | "ebd183cd1d075dd3bed06f4e613c0aec", 184 | "b0cc3e614afbe6546892efe917403e6c", 185 | "3899bd2eb8f9e9d07e76a21e51d48a61", 186 | "646b0bd4e03fba8d566636e42679cc7f", 187 | "70df1655d1e766ece537be33cc045ee9", 188 | "26c4051b7dfbccf4afaac116abdd44e", 189 | "800ca9956f66a22a23d94393165a64e3", 190 | "9c23caf872048374ec8285b7fd906069", 191 | "3de6f62a6faeb80933e9820fd7ca74b3", 192 | "2971f417b08961475a4cd9b26f359d36" 193 | ], 194 | "03636649": [ 195 | "a5bfb9a3571e7e86e59f529cd1b6faa8", 196 | "13c361b7b046fe9f35b0d1c9f81f0b6c", 197 | "64fe64c30ac05282443f70ad172f4dd5", 198 | "463a3ee50280fddafcb8d8c6d4df8143", 199 | "91c55497aeec1fc55e29ce2c9d37b952", 200 | "be5b76136b37205738e43095496b061", 201 | "e98c05f4cc8c7afcf648915c85184f8c", 202 | "a90fe01c3ef3ee30fcb8d8c6d4df8143", 203 | "fd5f6ab819910a66dc7f95a5a82e36f7", 204 | "f5c61ca4acfb7f5435836c728d324152", 205 | "a68678b3e52fcda2bd239d670cf7d8dc", 206 | "47adca3b217160d4b0957d845ac33749", 207 | "85dfdbe562059fa058b65cbe3be2c45c", 208 | "39fcaf51940333b46ab88e9b8b75d248", 209 | "fd8f9cb134743e0c80bcdfbddc82df7a", 210 | "7a2362fbddbee9a4d197f67767b32741", 
211 | "c6424950ca9447627d8864caa856253b", 212 | "1b79210962721517fcddd74ee6c69025", 213 | "31a15957bd4f32f87eedf2c7d21f7cfa", 214 | "3dda46a537bc16e689ab11a408196888", 215 | "4deef34d95367b58c0d95250e682f6ee", 216 | "5270f973e56a05f12cd2160e449d45ae", 217 | "78a11c0b8e964c9b41657e31b569b105", 218 | "74dff8f0a865368b4a8e02787dff638e", 219 | "2a52bd01472ec7e1589ec67c01f5c1a7", 220 | "1f80e265e6a038ad9c5c74ff620f967b", 221 | "66cec0a2ab63d9101b6c273f8ff0e8b6", 222 | "26316fabe129210317fad902853ecfd7", 223 | "787bd789cf2aab676e0185e256a599cc", 224 | "12d03f06746eb49990c2e24416edfe5b" 225 | ], 226 | "03691459": [ 227 | "9076b1b9e23c7446d747b49524a1246e", 228 | "1eb6ae90ea03673ee792f9d89b97c271", 229 | "b3158c23dbd08f554cf39544f467e5c6", 230 | "6d755a3d6d0f265d77ea5e1afa5bfe6", 231 | "fb4c855848345ecd3e738e11bd8803f8", 232 | "4ded23cf84c993e9df3c63f2cd487888", 233 | "7fb191e5d0d7464b538cf6df9faa9b65", 234 | "bfce87b0ea79c8aa776400d171cf9dfa", 235 | "5ea3d1068a624c1da91bbba4742a1643", 236 | "60a7df9bf00844735e7cf7bd2b19c869", 237 | "a82329a937432afe8d28f674ed08c521", 238 | "f88ff1c46ccace6d5392678120123c42", 239 | "e2d6a0851b9357141574d21c0c95092f", 240 | "710014b815369e1c2bcea2cd4cc7b042", 241 | "b8410a2c19a50aa88b04a17db360913", 242 | "b03efb93acd16a49699abba79f165934", 243 | "6592d33f84263ef435cd53a06b1d2317", 244 | "481d17e1ab933142b868767ca39f1cf9", 245 | "20ac1211f88a8a1878396b03f57f644c", 246 | "e1b3bb54b9855f12d88a3e0e92891ad5", 247 | "1ba6735cd32d907ad493bfe20f94b6ab", 248 | "8aea25f1090e419c9f78b1e1185445c4", 249 | "908202c73ba60671c0d274eb53f065ff", 250 | "33b19fdf42fd767d871a975200291c6f", 251 | "ac951c58cd826af6a89585af9e32f3d7", 252 | "eadad629c581c28c6b424c689f1d711a", 253 | "c556fa897131c0c833b20ff045584bf3", 254 | "6a864ca4b19aca77645b6a2a45925e6", 255 | "2cc52cc8e9de5c12f398d0c5832df00e", 256 | "f993f348260454bb538cf6df9faa9b65" 257 | ], 258 | "04530566": [ 259 | "8790881fbc0331a87cef2df31bcf9d93", 260 | "a0f1e4ef99b57121a9142e7277ee08f1", 261 | "cfb1882ac34b81d8a357368f9af15b34", 262 | "6d0c48b62f610ec0b90142192ec795d", 263 | "1f7f1d7c3882f638fb64f9487ce62dd2", 264 | "338e37f313d48118789eecd157794d2a", 265 | "acc71731a16d074f5a11da1e572e8f01", 266 | "65b75158bb049f5af647317afa6ffdd4", 267 | "df575767acf17d7188ca49762bf17cdc", 268 | "bffc229892a3d301c8bb4876165f947c", 269 | "fcf21e1176459664806b90e3f08c9a28", 270 | "5b1ef304e7a8cebde255aabfeb1b2b82", 271 | "ccf527bf6ea742f0afe1d4530f4c6e24", 272 | "19640fee71ffa82816581cd5751ca97f", 273 | "8849abb0be0a0ca99cace9782a7cd30a", 274 | "e02d395707464e692ef42ab47be9662", 275 | "873b7ab23a5c85e365a308491a8f2afe", 276 | "33f4d31a559bc07fc1ccec171a275967", 277 | "84b75e53176c9f1fe1e2f026632da15", 278 | "8e431fd55a7aca0b124dae0a32996c4c", 279 | "6a5405246814b82281c5ee986f4484ec", 280 | "e3e2bf1879ec9298c711893477336d39", 281 | "62cebab704dbc0d02b76c9fef45435c7", 282 | "548c6234fc7c787bfeea5c85a86089b5", 283 | "863fd298e6ea46a5614edc3c9b2489f4", 284 | "4690184ef7ea805dfdd29529d1a15514", 285 | "5f7c0e4368784e795dbfbfcedb83d61", 286 | "6f61d84f9373f7c5c05557706bb20c4", 287 | "f10162679968fb0d8f21fab201b7ef8d", 288 | "249d543a30a88020be7995d5b4bc81b7" 289 | ], 290 | "02933112": [ 291 | "17d25c26485edcf94da5feafe6f1c8fc", 292 | "1ceaae0aaeeeaa1e5a8eba5f6050bab", 293 | "59b0ac376af08592824662341ce2b233", 294 | "1f4ccbdbd0162e9be3f7a74e12a274ef", 295 | "c7a7a1254c5d98b8449f1c29830da6c6", 296 | "14ef9da3809148601b17743c18fb63dc", 297 | "75c26ffff01ea7063ab3dfa44f5fab01", 298 | "4b2e20535d3ecd016b7154919b02cbec", 299 | 
"bb0255c8582c74c6557f50690310ce8d", 300 | "2856634c4c0551a814038d588fd1342f", 301 | "4c7cc0a0e83f995ad40c07d3c15cc681", 302 | "4b80db7aaf0dff0c4da5feafe6f1c8fc", 303 | "39b50a129ff530efb4ba4a53b97265b", 304 | "10798ccb7072393e86d53ab0fe94e911", 305 | "60508a8437c09eb2247353095dc395a2", 306 | "8390466432e2c364298a4bdd07dbdc0", 307 | "19dd35ef180808c38f1735145fdf5c5c", 308 | "47e09607098c43b5768049e7324c832a", 309 | "62d67406fb239e21533276a8c0b1c862", 310 | "c8f5521a1f0ddac6c59350d819542ec7", 311 | "1d93291de09fa5c876e9713f57a5fcb6", 312 | "24b3f8b6bf4a9a7391a3d45e8887248a", 313 | "ac499f75479d2e372ad490d4d7fae486", 314 | "134055516ed892913ba1c51b82b58419", 315 | "ca6712dace1e32a548d8ff57878739ca", 316 | "7ed5c429313f20e079bb09dc5605a57", 317 | "3d21c18153474a0acf004563556ddb36", 318 | "b6c1fd850c5b042c738e43095496b061", 319 | "a80ad4eafdb304edb6b975d10a10702", 320 | "4d36c59bd32fd885aadbf8208284c675" 321 | ], 322 | "02828884": [ 323 | "c8fa692760ba875848d791284650e46d", 324 | "405d1666d90df2c139842e32fb9b4e4a", 325 | "9ad5cab6ff1e45fd48113d3612de043b", 326 | "895563d304772f50ad5067eac75a07f7", 327 | "cae6c2b329bbc12de5d5fc930770c792", 328 | "5ab38425eb09fe33cac2a982f1c2a5b5", 329 | "6ee844357bbc5bddd4d8765e3910f617", 330 | "c06a17f2c79d01949c8a0ee9a6d1d4b2", 331 | "86980fcab93e60151f53db693ffe56c5", 332 | "ac4de7e08bc1f024c955e5ed03ef3a2f", 333 | "9fe85429413af216cb2a965e75be701c", 334 | "a82c14ef1540d3abf4c42dc386169bd6", 335 | "e223e77b8db4aea17d8864caa856253b", 336 | "7f7956f11a1fdfd0d81202a54291c0af", 337 | "a8c8aca72463418581faebbdea6bd9be", 338 | "6accdfe97ecfa9952056b4bd5d870b47", 339 | "17a7a1a4761e9aa1d4bf2d5f775ffe5e", 340 | "bdc3a9776cd0d69b26abe89c4547d5f1", 341 | "3b660f1b7f7f41be25ebd1cd0b422e32", 342 | "cd7689a92e0d39896812e49a5c99d0f3", 343 | "e941e1929bdc87d5ad876645af0395fd", 344 | "a58cb33e8aa8142af155d75bbf62b80", 345 | "38e367e4421ec3cbba70cedf91706353", 346 | "608af07bd357d605f155d75bbf62b80", 347 | "6e045aac2c52c7c556f6ef8b6ca8f4cc", 348 | "22031fe0420834a9ad5067eac75a07f7", 349 | "7aca4c47c6861f2122445e799be0f18", 350 | "931017bd80aa7a90edccc47bf0dcf5d3", 351 | "60757e398b7d51c5c143e86eb74c3988", 352 | "3d470843f013f9d8c9fd4914d3d18461" 353 | ], 354 | "02992529": [ 355 | "2009fed0ea8d1fc3e2dbe704eec7e8d9", 356 | "280c7836d6302b4bf3f8467cb6fe657e", 357 | "758b9da6bf573838d214322c26aa0cfd", 358 | "2c1cf34eb46756c1aa8ee930afa0ad31", 359 | "e862392921d99119ee50cfd2d11d046b", 360 | "1d390d2560fc259336eb9fe355d50fdf", 361 | "975de17c0b5d96cda38e5bd2fdb10d11", 362 | "638695025ca04d351d57a73214dd2a04", 363 | "df23ff3151aa1f0a3cb2f20e21cb06ff", 364 | "39e571ec1bcd371bb785a4ac4a0dbd73", 365 | "6449223cebdb2dc9c8c3ac9bcbcd9687", 366 | "641234f284243decea95e61f66327e28", 367 | "4b91c9711ca991768a57ed2dc0905847", 368 | "11e925e3ea180b583388c2584b2f0f90", 369 | "7bf76e92b684d32671ab0d8014490a7d", 370 | "a3f4cae960ac74babc54d4bc75a1a826", 371 | "24d22e3257420bc2b785a4ac4a0dbd73", 372 | "5fea05a3cdcc05756dba92a4e2177102", 373 | "7f5b9c8fdb46cf7c562c8e1ac545ef78", 374 | "e8d704bfd9b6c7f39af985c9d0dd6085", 375 | "2874447c7552c6b942f892024869e751", 376 | "359be05b5042209a502122ac3599bb74", 377 | "cd7e6ab98a2d090099b101e0ce243aa5", 378 | "17d4ef9fa59ede481cfeae953cc2339d", 379 | "e8db0c85b07ac581d409d3400adf2d96", 380 | "9e5b9dd860689c5c217f93816a639386", 381 | "515b5a63ebb56aad7aeb5bbdd191fd4d", 382 | "db4c6fcc45cdee08d8dc338f42ea38e9", 383 | "c8948cb8ec0f10ebc2287ee053005989", 384 | "6710c7894efe94cb918f30c146a92bd0" 385 | ], 386 | "03948459": [ 387 | 
"2137b954f778282ac24d00518a3dd6ec", 388 | "a3679104af613021912d826efe946a9f", 389 | "1226a05aba30d0987deae9192b6f5fdc", 390 | "5cc23a432b1b88dbf5029f48ea6cff14", 391 | "3da97809eca46f35a04e0afc178df76", 392 | "89bf5ef4ec5f275a70eabd84d42e060f", 393 | "b3a66094d5ee833bf4de29b99f103946", 394 | "e1c06a263876db5528881fe8c24c5c4b", 395 | "a0a1633186261a031274aa253a241db2", 396 | "f3f6678898938575575e33965575974", 397 | "ed051e6c8ac281facb14d3281c3904f0", 398 | "9b6c6048719e7e024cb47a45be6e6ae3", 399 | "f80c465c9e401dab44608b3255ca1886", 400 | "75476ba20ddf71fae868ed06b4dfef2d", 401 | "2f50338488d6254c6460d209041b501", 402 | "edec08542b9312b712b38b1d99376c0b", 403 | "6f8956455813727c3e93a3c4fff5237", 404 | "b0a050985a5ce6be25508ed649b952cb", 405 | "1660ef4b3f20b1e2a94b922b533051b7", 406 | "2a8f236c10ec9b98ba9409808fba922a", 407 | "345179cdbb6ac9da4dd752ddde80fb1", 408 | "41bca1dbde9fe5159220647403cfb896", 409 | "51c3ad915c53f9bde3f7f7749a803060", 410 | "42740af029297f1d9874fa4c7b1a4298", 411 | "6663978acbe7f2b5336eda14178e5ec4", 412 | "623068000236841ec686380962888391", 413 | "254c2310734d6a5ce3cf43faf3d38113", 414 | "1b63a4a6fad149acfa040b4405acb380", 415 | "4dcc11b6acc758b1429a1687ed6390ec", 416 | "1f646ff59cabdddcd810dcd63f342aca" 417 | ] 418 | } 419 | --------------------------------------------------------------------------------