├── doc
│   ├── Github_intro.png
│   ├── pretrained_models_guide.md
│   ├── object_segmentation_guide.md
│   ├── object_classification_guide.md
│   ├── scene_segmentation_guide.md
│   ├── visualization_guide.md
│   └── new_dataset_guide.md
├── cpp_wrappers
│   ├── compile_wrappers.sh
│   ├── cpp_subsampling
│   │   ├── setup.py
│   │   ├── grid_subsampling
│   │   │   ├── grid_subsampling.h
│   │   │   └── grid_subsampling.cpp
│   │   └── wrapper.cpp
│   └── cpp_utils
│       └── cloud
│           ├── cloud.cpp
│           └── cloud.h
├── convert.py
├── pipenvlist.txt
├── LICENSE
├── tf_custom_ops
│   ├── tf_neighbors
│   │   ├── neighbors
│   │   │   ├── neighbors.h
│   │   │   └── neighbors.cpp
│   │   ├── tf_neighbors.cpp
│   │   └── tf_batch_neighbors.cpp
│   ├── cpp_utils
│   │   └── cloud
│   │       ├── cloud.cpp
│   │       └── cloud.h
│   ├── compile_op.sh
│   ├── notes.md
│   └── tf_subsampling
│       ├── grid_subsampling
│       │   ├── grid_subsampling.h
│       │   └── grid_subsampling.cpp
│       ├── tf_subsampling.cpp
│       └── tf_batch_subsampling.cpp
├── Instruction_Manual
│   ├── setup.sh
│   ├── AWS_help.md
│   ├── Hyperparameters_help.md
│   ├── Running test and train.md
│   └── Launching and Setting up the instance.md
├── INSTALL.md
├── envlist.txt
├── Log_2020-09-29_02-19-56
│   └── F1_Score.txt
├── README.md
├── test_accuracy.py
├── aerotronic.yml
├── utils
│   ├── mesh.py
│   └── metrics.py
├── training_S3DIS.py
├── training_ShapeNetPart.py
├── training_Semantic3D.py
├── training_NPM3D.py
├── training_DALES.py
├── training_Scannet.py
├── training_ModelNet40.py
├── visualize_deformations.py
├── models
│   └── KPCNN_model.py
├── visualize_ERFs.py
├── test_any_model.py
└── kernels
    └── kernel_points.py
/doc/Github_intro.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Arjun-NA/KPConv_for_DALES/HEAD/doc/Github_intro.png
--------------------------------------------------------------------------------
/cpp_wrappers/compile_wrappers.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Compile cpp subsampling
4 | cd cpp_subsampling
5 | python3 setup.py build_ext --inplace
6 | cd ..
7 |
8 |
--------------------------------------------------------------------------------
/doc/pretrained_models_guide.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ## Test a pretrained network
4 |
5 | ### Data
6 |
7 | We provide two examples of pretrained models:
8 | - A network with rigid KPConv trained on S3DIS: link (50 MB)
9 | - A network with deformable KPConv trained on NPM3D: link (54 MB)
10 |
11 |
12 |
13 | Unzip the log folder anywhere.
14 |
15 | ### Test model
16 |
17 | In `test_any_model.py`, choose the path of the log you just unzipped with the `chosen_log` variable:
18 |
19 | chosen_log = 'path_to_pretrained_log'
20 |
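Then run the test script the same way as for any logged model (see the other guides):

    python3 test_any_model.py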
--------------------------------------------------------------------------------
/cpp_wrappers/cpp_subsampling/setup.py:
--------------------------------------------------------------------------------
1 | from distutils.core import setup, Extension
2 | import numpy.distutils.misc_util
3 |
 4 | # Building the grid_subsampling extension
 5 | # ***************************************
6 |
7 | # Adding sources of the project
8 | # *****************************
9 |
10 | m_name = "grid_subsampling"
11 |
12 | SOURCES = ["../cpp_utils/cloud/cloud.cpp",
13 | "grid_subsampling/grid_subsampling.cpp",
14 | "wrapper.cpp"]
15 |
16 | module = Extension(m_name,
17 | sources=SOURCES,
18 | extra_compile_args=['-std=c++11',
19 | '-D_GLIBCXX_USE_CXX11_ABI=0'])
20 |
21 | setup(ext_modules=[module], include_dirs=numpy.distutils.misc_util.get_numpy_include_dirs())
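# Note: this extension is normally built in place by ../compile_wrappers.sh, which runs:
#   python3 setup.py build_ext --inplace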
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
--------------------------------------------------------------------------------
/convert.py:
--------------------------------------------------------------------------------
1 | from plyfile import PlyData, PlyElement, PlyProperty, PlyListProperty
2 | from os import listdir,path
3 |
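# Convert the DALES ASCII .ply tiles found in `file_path` to binary .ply files
# prefixed with 'bin_' (see the README section "TO USE IN DALES DATASET").
# Run this script from the folder that contains the ASCII .ply files.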
4 | to_ascii = False
5 | file_path = '.'
6 | files = [f for f in listdir(file_path) if f[-4:] == '.ply']
7 | for each_file in files:
8 | print('\n Loading.... ', path.join(file_path, each_file) )
9 | data = PlyData.read(path.join(file_path, each_file) )
10 | print('\n Loaded..... ', path.join(file_path, each_file) )
11 | data.elements[0].data.dtype.names = ['x', 'y', 'z', 'reflectance', 'class']
12 | data.elements[0].properties = (PlyProperty('x', 'float'), PlyProperty('y', 'float'),
13 | PlyProperty('z', 'float'), PlyProperty('reflectance', 'int'),
14 | PlyProperty('class', 'int'))
15 | data1 = PlyData([data.elements[0]], text=to_ascii)
16 | data1.write(path.join(file_path,'bin_'+ each_file) )
17 | print('\n completed.. ', each_file)
18 | data2 = PlyData.read(path.join(file_path, 'bin_'+each_file) )
19 | print(data.elements[0])
20 | print('\n')
21 |
--------------------------------------------------------------------------------
/pipenvlist.txt:
--------------------------------------------------------------------------------
1 | absl-py==0.9.0
2 | astor==0.8.0
3 | certifi==2020.4.5.2
4 | chardet==3.0.4
5 | cycler==0.10.0
6 | gast==0.2.2
7 | google-pasta==0.2.0
8 | grpcio==1.27.2
9 | h5py==2.10.0
10 | idna==2.9
11 | imbalanced-learn==0.7.0
12 | joblib==0.16.0
13 | Keras==2.3.1
14 | Keras-Applications==1.0.8
15 | Keras-Preprocessing==1.1.0
16 | kiwisolver==1.2.0
17 | Markdown==3.1.1
18 | matplotlib==3.2.2
19 | mkl-fft==1.1.0
20 | mkl-random==1.1.1
21 | mkl-service==2.3.0
22 | numpy==1.18.1
23 | opt-einsum==3.1.0
24 | pandas==1.0.5
25 | plyfile==0.7.2
26 | protobuf==3.12.3
27 | psutil==5.7.2
28 | pyparsing==2.4.7
29 | python-dateutil==2.8.1
30 | python-mnist==0.7
31 | pytz==2020.1
32 | PyYAML==5.3.1
33 | requests==2.24.0
34 | scikit-learn==0.23.1
35 | scipy==1.4.1
36 | six==1.15.0
37 | sklearn==0.0
38 | svgpathtools==1.3.3
39 | svgwrite==1.4
40 | tensorboard==1.14.0
41 | tensorflow==1.14.0
42 | tensorflow-estimator==1.14.0
43 | termcolor==1.1.0
44 | threadpoolctl==2.1.0
45 | tqdm==4.46.1
46 | transforms3d==0.3.1
47 | urllib3==1.25.9
48 | webencodings==0.5.1
49 | Werkzeug==0.16.1
50 | wrapt==1.12.1
51 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 HuguesTHOMAS
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/tf_custom_ops/tf_neighbors/neighbors/neighbors.h:
--------------------------------------------------------------------------------
1 |
2 |
3 | #include "../../cpp_utils/cloud/cloud.h"
4 | #include "../../cpp_utils/nanoflann/nanoflann.hpp"
5 |
 6 | #include <set>
 7 | #include <cstdint>
8 |
9 | using namespace std;
10 |
11 |
12 | void ordered_neighbors(vector<PointXYZ>& queries,
13 |                        vector<PointXYZ>& supports,
14 |                        vector<int>& neighbors_indices,
15 |                        float radius);
16 |
17 | void batch_ordered_neighbors(vector<PointXYZ>& queries,
18 |                              vector<PointXYZ>& supports,
19 |                              vector<int>& q_batches,
20 |                              vector<int>& s_batches,
21 |                              vector<int>& neighbors_indices,
22 |                              float radius);
23 |
24 | void batch_nanoflann_neighbors(vector<PointXYZ>& queries,
25 |                                vector<PointXYZ>& supports,
26 |                                vector<int>& q_batches,
27 |                                vector<int>& s_batches,
28 |                                vector<int>& neighbors_indices,
29 |                                float radius);
30 |
--------------------------------------------------------------------------------
/Instruction_Manual/setup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | ## Arjun's KPConv adjustments...
3 | sudo mkdir -p -m 777 $HOME/lidar
4 | sudo chmod g+s $HOME/lidar
5 | cd $HOME/lidar
6 | git clone https://github.com/Arjun-NA/KPConv_for_DALES
7 |
8 | cd $HOME/lidar/KPConv_for_DALES
9 | if ( ! /home/ubuntu/anaconda3/bin/conda env list | grep "^aerotronic" >/dev/null 2>&1 ); then
10 | /home/ubuntu/anaconda3/bin/conda env create --file ./aerotronic.yml
11 | else
12 | echo "aerotronic conda environment is already present"
13 | echo "run conda activate aerotronic"
14 | fi
15 |
16 | eval "$(conda shell.bash hook)"
17 | conda activate aerotronic
18 | cd $HOME/lidar/KPConv_for_DALES/tf_custom_ops
19 | TF_LIB=$(python3 -c 'import tensorflow as tf;print(tf.sysconfig.get_lib())' 2>/dev/null)
20 |
21 | echo "TFLIB: "
22 | echo $TF_LIB
23 | cd $TF_LIB
24 | ls
25 | sudo cp libtensorflow_framework.so.1 libtensorflow_framework.so
26 | cd $HOME/lidar/KPConv_for_DALES/tf_custom_ops
27 | export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}${TF_LIB}
28 | sh ./compile_op.sh
29 |
30 |
31 | cd $HOME/lidar/KPConv_for_DALES/cpp_wrappers
32 | sh ./compile_wrappers.sh
33 |
34 |
35 | ## Tmux Installation
36 | sudo apt-get --assume-yes install tmux
37 |
--------------------------------------------------------------------------------
/cpp_wrappers/cpp_utils/cloud/cloud.cpp:
--------------------------------------------------------------------------------
1 | //
2 | //
3 | // 0==========================0
4 | // | Local feature test |
5 | // 0==========================0
6 | //
7 | // version 1.0 :
8 | // >
9 | //
10 | //---------------------------------------------------
11 | //
12 | // Cloud source :
13 | // Define useful Functions/Methods
14 | //
15 | //----------------------------------------------------
16 | //
17 | // Hugues THOMAS - 10/02/2017
18 | //
19 |
20 |
21 | #include "cloud.h"
22 |
23 |
24 | // Getters
25 | // *******
26 |
27 | PointXYZ max_point(std::vector<PointXYZ> points)
28 | {
29 | // Initiate limits
30 | PointXYZ maxP(points[0]);
31 |
32 | // Loop over all points
33 | for (auto p : points)
34 | {
35 | if (p.x > maxP.x)
36 | maxP.x = p.x;
37 |
38 | if (p.y > maxP.y)
39 | maxP.y = p.y;
40 |
41 | if (p.z > maxP.z)
42 | maxP.z = p.z;
43 | }
44 |
45 | return maxP;
46 | }
47 |
48 | PointXYZ min_point(std::vector<PointXYZ> points)
49 | {
50 | // Initiate limits
51 | PointXYZ minP(points[0]);
52 |
53 | // Loop over all points
54 | for (auto p : points)
55 | {
56 | if (p.x < minP.x)
57 | minP.x = p.x;
58 |
59 | if (p.y < minP.y)
60 | minP.y = p.y;
61 |
62 | if (p.z < minP.z)
63 | minP.z = p.z;
64 | }
65 |
66 | return minP;
67 | }
--------------------------------------------------------------------------------
/tf_custom_ops/cpp_utils/cloud/cloud.cpp:
--------------------------------------------------------------------------------
1 | //
2 | //
3 | // 0==========================0
4 | // | Local feature test |
5 | // 0==========================0
6 | //
7 | // version 1.0 :
8 | // >
9 | //
10 | //---------------------------------------------------
11 | //
12 | // Cloud source :
13 | // Define useful Functions/Methods
14 | //
15 | //----------------------------------------------------
16 | //
17 | // Hugues THOMAS - 10/02/2017
18 | //
19 |
20 |
21 | #include "cloud.h"
22 |
23 |
24 | // Getters
25 | // *******
26 |
27 | PointXYZ max_point(std::vector<PointXYZ> points)
28 | {
29 | // Initiate limits
30 | PointXYZ maxP(points[0]);
31 |
32 | // Loop over all points
33 | for (auto p : points)
34 | {
35 | if (p.x > maxP.x)
36 | maxP.x = p.x;
37 |
38 | if (p.y > maxP.y)
39 | maxP.y = p.y;
40 |
41 | if (p.z > maxP.z)
42 | maxP.z = p.z;
43 | }
44 |
45 | return maxP;
46 | }
47 |
48 | PointXYZ min_point(std::vector<PointXYZ> points)
49 | {
50 | // Initiate limits
51 | PointXYZ minP(points[0]);
52 |
53 | // Loop over all points
54 | for (auto p : points)
55 | {
56 | if (p.x < minP.x)
57 | minP.x = p.x;
58 |
59 | if (p.y < minP.y)
60 | minP.y = p.y;
61 |
62 | if (p.z < minP.z)
63 | minP.z = p.z;
64 | }
65 |
66 | return minP;
67 | }
--------------------------------------------------------------------------------
/tf_custom_ops/compile_op.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Get TF variables
4 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
5 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
6 | TF_LIB2='/home/ubuntu/Arjun/KPConv/tf_custom_ops'
7 | # Neighbors op
8 | g++ -std=c++11 -shared tf_neighbors/tf_neighbors.cpp tf_neighbors/neighbors/neighbors.cpp cpp_utils/cloud/cloud.cpp -o tf_neighbors.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -L$TF_LIB2 -ltensorflow_framework -O2
9 | g++ -std=c++11 -shared tf_neighbors/tf_batch_neighbors.cpp tf_neighbors/neighbors/neighbors.cpp cpp_utils/cloud/cloud.cpp -o tf_batch_neighbors.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -L$TF_LIB2 -ltensorflow_framework -O2
10 |
11 | # Subsampling op
12 | g++ -std=c++11 -shared tf_subsampling/tf_subsampling.cpp tf_subsampling/grid_subsampling/grid_subsampling.cpp cpp_utils/cloud/cloud.cpp -o tf_subsampling.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -O2
13 | g++ -std=c++11 -shared tf_subsampling/tf_batch_subsampling.cpp tf_subsampling/grid_subsampling/grid_subsampling.cpp cpp_utils/cloud/cloud.cpp -o tf_batch_subsampling.so -fPIC -I$TF_INC -I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework -O2
14 |
--------------------------------------------------------------------------------
/tf_custom_ops/notes.md:
--------------------------------------------------------------------------------
1 | # **NOTES**
2 |
 3 | ### 1. Error while executing compile_op.sh: "-ltensorflow_framework not found"
 4 | __*Reason*__ : TensorFlow's library files on Ubuntu systems are saved in .so format, but for some weird reason the installed versions are named .so.1 or .so.2, so the linker cannot find the plain .so it is asked for.
 5 | 'So' better go to the library location and see which file is actually there.
6 |
7 | To see the location use this inside python or the conda environment python (whichever you created for your project):
8 | ```
9 | import tensorflow as tf
10 | print(tf.sysconfig.get_lib())
11 | ```
12 | OR you could run the following in a normal shell:
13 | ```
14 | TF_INC=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
15 | TF_LIB=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
16 | cd $TF_LIB
17 | ```
18 | After that, the variable TF_LIB will lead you to the location.
19 | Where did I magically get these commands from? They are the same ones used in [compile_op.sh](./compile_op.sh).
20 |
21 | Inside that location, you can either create a soft link as in this [issue solution](https://github.com/HuguesTHOMAS/KPConv/issues/79),
22 | or you can create a copy of libtensorflow_framework.so.1 (or libtensorflow_framework.so.2) with the name libtensorflow_framework.so.
23 | To do that you can use:
24 | ```
25 | cp libtensorflow_framework.so.1 libtensorflow_framework.so
26 | ```
27 |
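If you prefer the soft-link route from the issue linked above, a minimal sketch (run it inside the folder printed as `TF_LIB`, and adjust the `.so.1` suffix to whatever version is actually present):
```
sudo ln -s libtensorflow_framework.so.1 libtensorflow_framework.so
```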
--------------------------------------------------------------------------------
/doc/object_segmentation_guide.md:
--------------------------------------------------------------------------------
1 |
2 | ## Object Part Segmentation on ShapeNetPart
3 |
4 | ### Data
5 |
6 | ShapeNetPart dataset can be downloaded here (635 MB). Uncompress the folder and move it to `Data/ShapeNetPart/shapenetcore_partanno_segmentation_benchmark_v0`.
7 |
8 | ### Training
9 |
10 | Simply run the following script to start the training:
11 |
12 | python3 training_ShapeNetPart.py
13 |
14 | Similarly to ModelNet40 training, the parameters can be modified in a configuration subclass called `ShapeNetPartConfig`, and the first run of this script might take some time to precompute dataset structures.
15 |
16 | ### Plot a logged training
17 |
18 | When you start a new training, it is saved in a `results` folder. A dated log folder will be created, containing a lot of information, including loss values, validation metrics, model snapshots, etc.
19 |
20 | In `plot_convergence.py`, you will find detailed comments explaining how to choose which training log you want to plot. Follow them and then run the script :
21 |
22 | python3 plot_convergence.py
23 |
24 |
25 | ### Test the trained model
26 |
27 | The test script is the same for all models (segmentation or classification). In `test_any_model.py`, you will find detailed comments explaining how to choose which logged trained model you want to test. Follow them and then run the script :
28 |
29 | python3 test_any_model.py
30 |
--------------------------------------------------------------------------------
/doc/object_classification_guide.md:
--------------------------------------------------------------------------------
1 |
2 | ## Object classification on ModelNet40
3 |
4 | ### Data
5 |
6 | Regularly sampled clouds from ModelNet40 dataset can be downloaded here (1.6 GB). Uncompress the folder and move it to `Data/ModelNet40/modelnet40_normal_resampled`.
7 |
 8 | N.B. If you want to place your data anywhere else, you just have to change the variable `self.path` of the `ModelNet40Dataset` class (in the file `datasets/ModelNet40.py`).
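For example, a hypothetical override inside `datasets/ModelNet40.py` (the path below is purely illustrative):

    self.path = '/media/data/ModelNet40/modelnet40_normal_resampled'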
9 |
10 | ### Training a model
11 |
12 | Simply run the following script to start the training:
13 |
14 | python3 training_ModelNet40.py
15 |
16 | This file contains a configuration subclass `ModelNet40Config`, inherited from the general configuration class `Config` defined in `utils/config.py`. The value of every parameter can be modified in the subclass. The first run of this script will precompute structures for the dataset which might take some time.
17 |
18 | ### Plot a logged training
19 |
20 | When you start a new training, it is saved in a `results` folder. A dated log folder will be created, containing a lot of information, including loss values, validation metrics, model snapshots, etc.
21 |
22 | In `plot_convergence.py`, you will find detailed comments explaining how to choose which training log you want to plot. Follow them and then run the script :
23 |
24 | python3 plot_convergence.py
25 |
26 |
27 | ### Test the trained model
28 |
29 | The test script is the same for all models (segmentation or classification). In `test_any_model.py`, you will find detailed comments explaining how to choose which logged trained model you want to test. Follow them and then run the script :
30 |
31 | python3 test_any_model.py
32 |
--------------------------------------------------------------------------------
/INSTALL.md:
--------------------------------------------------------------------------------
1 | ### Installation instructions for Ubuntu 16.04
2 |
3 | * Make sure CUDA and cuDNN are installed. Three configurations have been tested:
4 | - TensorFlow 1.4.1, CUDA 8.0 and cuDNN 6.0
5 | - TensorFlow 1.12.0, CUDA 9.0 and cuDNN 7.4
6 | - ~~TensorFlow 1.13.0, CUDA 10.0 and cuDNN 7.5~~ (bug found only with this version).
7 |
8 | * Ensure all python packages are installed :
9 |
10 | sudo apt update
11 | sudo apt install python3-dev python3-pip python3-tk
12 |
13 | * Follow Tensorflow installation procedure.
14 |
15 | * Install the other dependencies with pip:
16 | - numpy
17 | - scikit-learn
18 | - psutil
19 | - matplotlib (for visualization)
20 | - mayavi (for visualization)
21 | - PyQt5 (for visualization)
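    For example, one possible pip command covering the list above (mayavi and PyQt5 are only needed for the visualization scripts):

        pip3 install numpy scikit-learn psutil matplotlib mayavi PyQt5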
22 |
23 | * Compile the customized Tensorflow operators located in `tf_custom_ops`. Open a terminal in this folder, and run:
24 |
25 | sh compile_op.sh
26 |
27 | N.B. If you installed Tensorflow in a virtual environment, it needs to be activated when running these scripts
28 |
29 | * Compile the C++ extension module for python located in `cpp_wrappers`. Open a terminal in this folder, and run:
30 |
31 | sh compile_wrappers.sh
32 |
33 | You should now be able to train Kernel-Point Convolution models
34 |
35 | ### Installation instructions for Ubuntu 18.04 (Thanks to @noahtren)
36 |
37 | * Remove the `-D_GLIBCXX_USE_CXX11_ABI=0` flag for each line in `tf_custom_ops/compile_op.sh` (problem with the version of gcc). One configuration has been tested:
38 |
39 | - TensorFlow 1.12.0, CUDA 9.0 and cuDNN 7.3.1
40 |
--------------------------------------------------------------------------------
/envlist.txt:
--------------------------------------------------------------------------------
1 | # This file may be used to create an environment using:
 2 | # $ conda create --name <env> --file <this file>
3 | # platform: linux-64
4 | _libgcc_mutex=0.1=main
5 | _tflow_select=2.1.0=gpu
6 | absl-py=0.9.0=py37_0
7 | astor=0.8.0=py37_0
8 | blas=1.0=mkl
9 | c-ares=1.15.0=h7b6447c_1001
10 | ca-certificates=2020.1.1=0
11 | certifi=2020.4.5.2=py37_0
12 | cudatoolkit=9.0=h13b8566_0
13 | cudnn=7.6.5=cuda9.0_0
14 | cupti=9.0.176=0
15 | gast=0.2.2=py37_0
16 | google-pasta=0.2.0=py_0
17 | grpcio=1.27.2=py37hf8bcb03_0
18 | h5py=2.10.0=py37h7918eee_0
19 | hdf5=1.10.4=hb1b8bf9_0
20 | intel-openmp=2020.1=217
21 | keras-applications=1.0.8=py_0
22 | keras-base=2.3.1=py37_0
23 | keras-gpu=2.3.1=0
24 | keras-preprocessing=1.1.0=py_1
25 | ld_impl_linux-64=2.33.1=h53a641e_7
26 | libedit=3.1.20181209=hc058e9b_0
27 | libffi=3.3=he6710b0_1
28 | libgcc-ng=9.1.0=hdf63c60_0
29 | libgfortran-ng=7.3.0=hdf63c60_0
30 | libprotobuf=3.12.3=hd408876_0
31 | libstdcxx-ng=9.1.0=hdf63c60_0
32 | markdown=3.1.1=py37_0
33 | mkl=2020.1=217
34 | mkl-service=2.3.0=py37he904b0f_0
35 | mkl_fft=1.1.0=py37h23d657b_0
36 | mkl_random=1.1.1=py37h0573a6f_0
37 | ncurses=6.2=he6710b0_1
38 | numpy=1.18.1=py37h4f9e942_0
39 | numpy-base=1.18.1=py37hde5b4d6_1
40 | openssl=1.1.1g=h7b6447c_0
41 | opt_einsum=3.1.0=py_0
42 | pip=20.1.1=py37_1
43 | protobuf=3.12.3=py37he6710b0_0
44 | python=3.7.7=hcff3b4d_5
45 | pyyaml=5.3.1=py37h7b6447c_0
46 | readline=8.0=h7b6447c_0
47 | scipy=1.4.1=py37h0b6359f_0
48 | setuptools=47.3.0=py37_0
49 | six=1.15.0=py_0
50 | sqlite=3.31.1=h62c20be_1
51 | tensorboard=1.14.0=py37hf484d3e_0
52 | tensorflow=1.14.0=gpu_py37hae64822_0
53 | tensorflow-base=1.14.0=gpu_py37h8f37b9b_0
54 | tensorflow-estimator=1.14.0=py_0
55 | tensorflow-gpu=1.14.0=h0d30ee6_0
56 | termcolor=1.1.0=py37_1
57 | tk=8.6.8=hbc83047_0
58 | webencodings=0.5.1=py37_1
59 | werkzeug=0.16.1=py_0
60 | wheel=0.34.2=py37_0
61 | wrapt=1.12.1=py37h7b6447c_1
62 | xz=5.2.5=h7b6447c_0
63 | yaml=0.1.7=had09818_2
64 | zlib=1.2.11=h7b6447c_3
65 |
--------------------------------------------------------------------------------
/Log_2020-09-29_02-19-56/F1_Score.txt:
--------------------------------------------------------------------------------
1 | All files good
2 |
3 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5080_54400.ply
4 | micro F1: 0.971800472005263
5 | macro F1: 0.7125992781285472
6 |
7 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5080_54470.ply
8 | micro F1: 0.9712402459534125
9 | macro F1: 0.7603846950573112
10 |
11 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5100_54440.ply
12 | micro F1: 0.9339789109964945
13 | macro F1: 0.7407313819921052
14 |
15 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5100_54490.ply
16 | micro F1: 0.9733621763924409
17 | macro F1: 0.7564696469960107
18 |
19 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5120_54445.ply
20 | micro F1: 0.9724872673445031
21 | macro F1: 0.784785751017888
22 |
23 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5135_54430.ply
24 | micro F1: 0.9759974172148351
25 | macro F1: 0.764491141588723
26 |
27 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5135_54435.ply
28 | micro F1: 0.977462815230023
29 | macro F1: 0.7926956961914926
30 |
31 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5140_54390.ply
32 | micro F1: 0.964461245744877
33 | macro F1: 0.6541784907805973
34 |
35 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5150_54325.ply
36 | micro F1: 0.9740829226300796
37 | macro F1: 0.6994128385643351
38 |
39 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5155_54335.ply
40 | micro F1: 0.9722904007687808
41 | macro F1: 0.5379838415421019
42 |
43 | Loading.... ../Results_from_RED/Log_2020-09-29_02-19-56/predictions/bin_5175_54395.ply
44 | micro F1: 0.9705561295053557
45 | macro F1: 0.7616489924197007
46 | Final Avg micro : 0.9688836367078242 | Avg macro : 0.7241256140253466
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | 
3 |
4 | ### [Created by Hugues THOMAS](https://github.com/HuguesTHOMAS/KPConv)
5 |
6 |
 7 | ### Update 27/04/2020: [PyTorch implementation](https://github.com/HuguesTHOMAS/KPConv-PyTorch) available, with SemanticKitti and Windows support.
8 |
9 | This repository contains the implementation of **Kernel Point Convolution** (KPConv), a point convolution operator
10 | presented in our ICCV2019 paper ([arXiv](https://arxiv.org/abs/1904.08889)).
11 |
12 | **Update 03/05/2019, bug found with TF 1.13 and CUDA 10.**
13 | We found an internal bug inside the tf.matmul operation. It returns absurd values like 1e12, leading to the
14 | appearance of NaNs in our network. We advise using the code with CUDA 9.0 and TF 1.12.
15 | More info in [issue #15](https://github.com/HuguesTHOMAS/KPConv/issues/15)
16 |
17 |
18 | ## Installation
19 |
20 | A step-by-step installation guide for Ubuntu 16.04 is provided in [INSTALL.md](./INSTALL.md). Windows is currently
21 | not supported as the code uses tensorflow custom operations.
22 |
23 |
24 | ## TO USE IN DALES DATASET
25 |
26 | Use [convert.py](convert.py) to convert the DALES ascii ply files to binary ply files:
27 | copy convert.py to the folder containing the ascii ply files and run it.
28 | Use the [requirements](pipenvlist.txt) file for pip and the [conda_env](envlist.txt) file for conda.
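For example (the DALES folder path below is illustrative):
```
cp convert.py /path/to/DALES/ascii_ply_files/
cd /path/to/DALES/ascii_ply_files/
python3 convert.py
```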
29 | For conda env creation you can use :
30 | ```
31 | conda create --name <env_name> --file envlist.txt
32 | ```
33 | and for pip
34 | ```
35 | pip install -r pipenvlist.txt
36 | ```
37 | ## MY RESULTS IN DALES DATASET
38 | The result of the testing is uploaded [here](https://indiana-my.sharepoint.com/:u:/g/personal/arjuna_iu_edu/ERlV6lBVnQtMvAyfloE354YBNUglxQbroAUnGds8x8Rjcg?e=vIk490)
39 | and [test_accuracy.py](./test_accuracy.py) was used to calculate the F1 scores, which gave [this](./Log_2020-09-29_02-19-56/F1_Score.txt) result: a final average micro F1 of 0.97 and average macro F1 of 0.72. This result was far better than that of any other point convolution I had used.
40 |
--------------------------------------------------------------------------------
/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h:
--------------------------------------------------------------------------------
1 |
2 |
3 | #include "../../cpp_utils/cloud/cloud.h"
4 |
 5 | #include <set>
 6 | #include <cstdint>
7 |
8 | using namespace std;
9 |
10 | class SampledData
11 | {
12 | public:
13 |
14 | // Elements
15 | // ********
16 |
17 | int count;
18 | PointXYZ point;
19 | vector<float> features;
20 | vector<unordered_map<int, int>> labels;
21 |
22 |
23 | // Methods
24 | // *******
25 |
26 | // Constructor
27 | SampledData()
28 | {
29 | count = 0;
30 | point = PointXYZ();
31 | }
32 |
33 | SampledData(const size_t fdim, const size_t ldim)
34 | {
35 | count = 0;
36 | point = PointXYZ();
37 | features = vector<float>(fdim);
38 | labels = vector<unordered_map<int, int>>(ldim);
39 | }
40 |
41 | // Method Update
42 | void update_all(const PointXYZ p, vector<float>::iterator f_begin, vector<int>::iterator l_begin)
43 | {
44 | count += 1;
45 | point += p;
46 | transform (features.begin(), features.end(), f_begin, features.begin(), plus<float>());
47 | int i = 0;
48 | for(vector<int>::iterator it = l_begin; it != l_begin + labels.size(); ++it)
49 | {
50 | labels[i][*it] += 1;
51 | i++;
52 | }
53 | return;
54 | }
55 | void update_features(const PointXYZ p, vector<float>::iterator f_begin)
56 | {
57 | count += 1;
58 | point += p;
59 | transform (features.begin(), features.end(), f_begin, features.begin(), plus<float>());
60 | return;
61 | }
62 | void update_classes(const PointXYZ p, vector<int>::iterator l_begin)
63 | {
64 | count += 1;
65 | point += p;
66 | int i = 0;
67 | for(vector<int>::iterator it = l_begin; it != l_begin + labels.size(); ++it)
68 | {
69 | labels[i][*it] += 1;
70 | i++;
71 | }
72 | return;
73 | }
74 | void update_points(const PointXYZ p)
75 | {
76 | count += 1;
77 | point += p;
78 | return;
79 | }
80 | };
81 |
82 |
83 |
84 | void grid_subsampling(vector<PointXYZ>& original_points,
85 |                       vector<PointXYZ>& subsampled_points,
86 |                       vector<float>& original_features,
87 |                       vector<float>& subsampled_features,
88 |                       vector<int>& original_classes,
89 |                       vector<int>& subsampled_classes,
90 | float sampleDl,
91 | int verbose);
92 |
93 |
--------------------------------------------------------------------------------
/doc/scene_segmentation_guide.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ## Scene Segmentation on S3DIS
4 |
5 | ### Data
6 |
7 | S3DIS dataset can be downloaded here (4.8 GB). Download the file named `Stanford3dDataset_v1.2.zip`, uncompress the folder and move it to `Data/S3DIS/Stanford3dDataset_v1.2`.
8 |
9 | ### Training
10 |
11 | Simply run the following script to start the training:
12 |
13 | python3 training_S3DIS.py
14 |
15 | Similarly to ModelNet40 training, the parameters can be modified in a configuration subclass called `S3DISConfig`, and the first run of this script might take some time to precompute dataset structures.
16 |
17 |
18 | ## Scene Segmentation on Scannet
19 |
20 | Incoming
21 |
22 | ## Scene Segmentation on Semantic3D
23 |
24 | ### Data
25 |
26 | Semantic3D dataset can be found here. Download and unzip every point cloud as ascii files and place them in a folder called `Data/Semantic3D/original_data`. You also have to download and unzip the ground truth labels as ascii files in the same folder.
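The folder should end up looking something like this (the file names below are only illustrative; each cloud has its ascii point file and, for training clouds, its labels file):

    Data/Semantic3D/original_data/
        some_scene_xyz_intensity_rgb.txt
        some_scene_xyz_intensity_rgb.labels
        ...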
27 |
28 |
29 | ### Training
30 |
31 | Simply run the following script to start the training:
32 |
33 | python3 training_Semantic3D.py
34 |
35 | Similarly to ModelNet40 training, the parameters can be modified in a configuration subclass called `Semantic3DConfig`, and the first run of this script might take some time to precompute dataset structures.
36 |
37 |
38 | ## Scene Segmentation on NPM3D
39 |
40 | Incoming
41 |
42 |
43 | ## Plot and test trained models
44 |
45 | ### Plot a logged training
46 |
47 | When you start a new training, it is saved in a `results` folder. A dated log folder will be created, containing a lot of information, including loss values, validation metrics, model snapshots, etc.
48 |
49 | In `plot_convergence.py`, you will find detailed comments explaining how to choose which training log you want to plot. Follow them and then run the script :
50 |
51 | python3 plot_convergence.py
52 |
53 |
54 | ### Test the trained model
55 |
56 | The test script is the same for all models (segmentation or classification). In `test_any_model.py`, you will find detailed comments explaining how to choose which logged trained model you want to test. Follow them and then run the script :
57 |
58 | python3 test_any_model.py
59 |
--------------------------------------------------------------------------------
/tf_custom_ops/tf_subsampling/grid_subsampling/grid_subsampling.h:
--------------------------------------------------------------------------------
1 |
2 |
3 | #include "../../cpp_utils/cloud/cloud.h"
4 |
 5 | #include <set>
 6 | #include <cstdint>
7 |
8 | using namespace std;
9 |
10 | class SampledData
11 | {
12 | public:
13 |
14 | // Elements
15 | // ********
16 |
17 | int count;
18 | PointXYZ point;
19 | vector<float> features;
20 | unordered_map<int, int> labels;
21 |
22 |
23 | // Methods
24 | // *******
25 |
26 | // Constructor
27 | SampledData()
28 | {
29 | count = 0;
30 | point = PointXYZ();
31 | }
32 |
33 | SampledData(const size_t fdim)
34 | {
35 | count = 0;
36 | point = PointXYZ();
37 | features = vector<float>(fdim);
38 | }
39 |
40 | // Method Update
41 | void update_all(const PointXYZ p, std::vector<float>::iterator f_begin, const int l)
42 | {
43 | count += 1;
44 | point += p;
45 | std::transform (features.begin(), features.end(), f_begin, features.begin(), std::plus<float>());
46 | labels[l] += 1;
47 | return;
48 | }
49 | void update_features(const PointXYZ p, std::vector<float>::iterator f_begin)
50 | {
51 | count += 1;
52 | point += p;
53 | std::transform (features.begin(), features.end(), f_begin, features.begin(), std::plus<float>());
54 | return;
55 | }
56 | void update_classes(const PointXYZ p, const int l)
57 | {
58 | count += 1;
59 | point += p;
60 | labels[l] += 1;
61 | return;
62 | }
63 | void update_points(const PointXYZ p)
64 | {
65 | count += 1;
66 | point += p;
67 | return;
68 | }
69 | };
70 |
71 |
72 |
73 | void grid_subsampling(vector<PointXYZ>& original_points,
74 |                       vector<PointXYZ>& subsampled_points,
75 |                       vector<float>& original_features,
76 |                       vector<float>& subsampled_features,
77 |                       vector<int>& original_classes,
78 |                       vector<int>& subsampled_classes,
79 | float sampleDl);
80 |
81 |
82 | void batch_grid_subsampling(vector<PointXYZ>& original_points,
83 |                             vector<PointXYZ>& subsampled_points,
84 |                             vector<float>& original_features,
85 |                             vector<float>& subsampled_features,
86 |                             vector<int>& original_classes,
87 |                             vector<int>& subsampled_classes,
88 |                             vector<int>& original_batches,
89 |                             vector<int>& subsampled_batches,
90 | float sampleDl);
91 |
92 |
--------------------------------------------------------------------------------
/Instruction_Manual/AWS_help.md:
--------------------------------------------------------------------------------
1 |
2 | # AWS Commands for EC2 instance
3 |
 4 | Basic and advanced commands that are useful for setting up and running an Amazon EC2 instance.
5 |
6 | ## Managing External Storage
7 |
 8 | For a more detailed guide, [try here](https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux).
 9 | Command used to list all usable storage:
10 | ```
11 | sudo lsblk
12 | ```
13 | Some versions of lsblk will print all of this information if we type:
14 | ```
15 | sudo lsblk --fs
16 | ```
17 | Command for creating partition
18 | ```
19 | sudo parted /dev/sda mklabel gpt
20 | sudo parted -a opt /dev/sda mkpart primary ext4 0% 100%
21 | ```
22 |
23 | Create a Filesystem on the New Partition
24 | ```
25 | sudo mkfs.ext4 -L datapartition /dev/sda1
26 | ```
27 |
28 |
29 | ## Mounting an extra storage eg: HDD storage
30 |
31 | Create the mounting point using the following commands
32 | ```
33 | cd /
34 | sudo mkdir media/NewVol/
35 | ```
36 | Mount the storage on the running instance. (You should have already attached the volume to the instance from the AWS Management Console.)
37 | ```
38 | cd /
39 | sudo mount -t ext4 /dev/[storage_name] /media/NewVol/
40 | ```
41 |
42 | [storage_name]
43 | : The device name has to be found on your own: check the output of `lsblk`, or list the `/dev/` folder with `ls` before and after attaching the volume to the instance via the AWS Management Console and notice what changed.
44 |
Some common device names look like `nvme1n1`, `sdf` or `xvdf` (the final letter varies).
45 |
46 |
47 |
48 | ## Copying files from/to s3
49 | ```
50 | aws s3 cp filename_from filename_to
51 | aws s3 cp --recursive folder_from folder_to
52 | ```
53 |
54 | ## Copying folder from local storage to AWS s3
55 | Since `cp` does not help when copying a whole folder from the instance to S3, we use `sync`:
56 | ```
57 | aws s3 sync folder_to_copy S3_URL
58 | ```
59 |
60 | ## Configuring AWS credentials to access S3
61 | Fix for the error **"Unable to locate credentials"**
62 |
63 | ### Quick configuration with aws configure
64 |
65 | For general use, the aws configure command is the fastest way to set up your AWS CLI installation.
66 |
67 | The AWS CLI prompts you for four pieces of information:
68 | - Access key ID (each user can have at most 2; it can be found under IAM -> Users -> Select user -> Security credentials)
69 | - Secret access key (each user can have at most 2)
70 | - AWS Region (eg: us-east-2)
71 | - Output format (you can leave it at none)
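A minimal example session (the values shown are placeholders):
```
aws configure
# AWS Access Key ID [None]: AKIAxxxxxxxxxxxxxxxx
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-2
# Default output format [None]:
```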
72 |
73 |
74 |
75 |
76 |
77 |
78 |
--------------------------------------------------------------------------------
/test_accuracy.py:
--------------------------------------------------------------------------------
1 | import time
2 | import os
3 | import sys
4 | import matplotlib.pyplot as plt
5 | import numpy as np
6 | from sklearn.metrics import confusion_matrix, f1_score
7 |
8 | # Custom libs
9 |
10 | # Dataset
11 | from plyfile import PlyData, PlyElement
12 | labels = ['Unclassified','Gnd', 'Trees', 'Car', 'Truck', 'Wire', 'Fence', 'Poles' , 'Bldngs']
13 | ignored_labels = ['Unclassified']
14 | final_labels = labels[:]
15 | for each in ignored_labels: final_labels.remove(each)
16 | # Given below is the path to test output and ground truth files.
17 | # All files in test output file should be present in ground truth folder for successful execution of this file
18 | # This Also works in windows
19 | test_predictions_path = 'results/Log_2020-09-29_02-19-56/predictions/'
20 | test_groundtruth_path = '../Data/test_bin/'
21 | files_pred = [f for f in os.listdir(test_predictions_path) if f[-4:] == '.ply']
22 | files_ground = [f for f in os.listdir(test_groundtruth_path) if f[-4:] == '.ply']
23 | if(all(each in files_ground for each in files_pred )):
24 | print("All files good")
25 | else:
26 | print("Error some files at ",test_predictions_path, "not matching with files at",test_groundtruth_path)
27 | exit()
28 |
29 | once = False
30 | total_list_micro = list()
31 | total_list_macro = list()
32 | Cum = None
33 | for each_file in files_pred:
34 | print('\n Loading.... ', os.path.join(test_predictions_path, each_file))
35 | data_pred = PlyData.read(os.path.join(test_predictions_path, each_file))
36 | data_grtr = PlyData.read(os.path.join(test_groundtruth_path, each_file))
37 | y_true = data_grtr.elements[0]['class']
38 | y_pred = data_pred.elements[0]['preds']
39 | """ # Uncomment these lines for saving confusion matrix in a color scale in pdf format
40 | C = confusion_matrix(y_true, y_pred, normalize='pred')
41 |
42 | for l_ind, label_value in enumerate(labels):
43 | if label_value in ignored_labels:
44 | C = np.delete(C, l_ind, axis=0)
45 | C = np.delete(C, l_ind, axis=1)
46 |
47 | if not once:
48 | Cum = C
49 | else:
50 | Cum += C
51 | plt.imshow(Cum)
52 | ticks = range(len(final_labels))
53 | plt.xticks(ticks=ticks,labels=final_labels)
54 | plt.yticks(ticks=ticks,labels=final_labels)
55 | if not once: plt.colorbar()
56 | once = True
57 | plt.title(" Confusion Matrix ")
58 | plt.savefig("results/"+each_file[:-4]+'.pdf')
59 | """
60 | F1_score_micro = f1_score(y_true, y_pred, average='micro')
61 | F1_score_macro = f1_score(y_true, y_pred, average='macro')
62 | print("micro F1: \t",F1_score_micro)
63 | print("macro F1: \t",F1_score_macro)
64 | total_list_macro += [F1_score_macro]
65 | total_list_micro += [F1_score_micro]
66 | avg_micro = sum(total_list_micro)/len(total_list_micro)
67 | avg_macro = sum(total_list_macro)/len(total_list_macro)
68 | print( " Final Avg micro : ", avg_micro, "| Avg macro : ", avg_macro)
69 |
--------------------------------------------------------------------------------
/Instruction_Manual/Hyperparameters_help.md:
--------------------------------------------------------------------------------
 1 | # This file is a summary of the hyperparameters that were adjusted to improve accuracy
2 |
 3 | For KP Convolution, the following parameters change the network complexity:
 4 | 1. architecture: This directly varies the complexity of the network as required. The parameter is provided as a list of block names that the software recognizes. The words/modules that exist in this particular software are the following:
5 | - 'unary'
6 | - 'simple'
7 | - 'simple_strided'
8 | - 'resnet'
9 | - 'resnetb'
10 | - 'resnetb_light'
11 | - 'resnetb_deformable'
12 | - 'inception_deformable'
13 | - 'resnetb_strided'
14 | - 'resnetb_light_strided'
15 | - 'resnetb_deformable_strided'
16 | - 'inception_deformable_strided'
17 | - 'vgg'
18 | - 'max_pool'
19 | - 'global_average'
20 | - 'nearest_upsample'
21 | - 'simple_upsample'
22 | - 'resnetb_upsample'
23 | 2. num_kernel_points: This parameter highly influences both the accuracy and the complexity of the model. Correct tuning is required, and this parameter should be an odd number.
24 | 3. first_subsampling_dl: This parameter subsamples the input cloud so that the minimum distance between points equals the value provided.
25 | 4. in_radius: This is the radius of the sphere which is taken in for each iteration of the convolution.
26 | 5. density_parameter: Density of neighborhoods for deformable convolutions (which need bigger radii). For normal convolutions we use KP_extent.
27 | 6. KP_influence: This sets the behavior of the KPConv. Acceptable parameters are : 'linear','constant' and 'gaussian'.
28 | 7. convolution_mode: This sets another behavior of the KPConv. Acceptable parameters are: 'closest' and 'sum'.
29 | 8. batch_num: Number of point clouds per training batch (the batch size).
30 | 9. learning_rate: Learning rate of the network.
31 | 10. Augmentation of the input: This is another crucial way to improve the model. It involves several variables (a configuration sketch follows this list):
32 | - augment_scale_anisotropic = True
33 | - augment_symmetries = [True, False, False]
34 | - augment_rotation = 'vertical'
35 | - augment_scale_min = 0.9
36 | - augment_scale_max = 1.1
37 | - augment_noise = 0.01
38 | - augment_occlusion = 'none'
39 | - augment_color = 1.0
40 |
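As a rough sketch of how these parameters are set in practice (the values below are hypothetical; the real ones live in the `*Config` subclass of the corresponding `training_*.py` script, e.g. `training_DALES.py`, which inherits from the general `Config` class in `utils/config.py`):
```
from utils.config import Config

class ExampleDALESConfig(Config):
    # Network complexity (the architecture block list itself is omitted here; see the names above)
    num_kernel_points = 15          # should be an odd number
    first_subsampling_dl = 0.25     # minimum distance between points after subsampling
    in_radius = 20.0                # radius of the input sphere (20 was the optimum in my runs)
    density_parameter = 5.0         # neighborhood density for deformable convolutions
    KP_influence = 'linear'         # 'linear', 'constant' or 'gaussian'
    convolution_mode = 'sum'        # 'closest' or 'sum'

    # Training
    batch_num = 4
    learning_rate = 1e-2

    # Input augmentation
    augment_scale_anisotropic = True
    augment_symmetries = [True, False, False]
    augment_rotation = 'vertical'
    augment_scale_min = 0.9
    augment_scale_max = 1.1
    augment_noise = 0.01
```
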
41 | ## What I tried
42 |
43 | - The architecture: including Squeeze-and-Excite blocks did not provide a better result.
44 | - The architecture: reducing the number of layers decreased accuracy tremendously, so 5 is the minimum number of layers for this dataset (one layer means everything from the start of the model up to a strided layer, the exception being the first layer).
45 | - Radius of KPConv: a larger radius gave more accuracy (20 was the optimum).
46 | - density_parameter: no significant change was observed.
47 | - first_subsampling_dl: there were variations and proper tuning is needed.
48 | - num_kernel_points: increasing it was not possible due to graphics card memory limits.
49 | - KP_influence: tried gaussian but no good change was observed.
50 | - convolution_mode: changed to closest but no improvement was observed.
51 |
--------------------------------------------------------------------------------
/cpp_wrappers/cpp_utils/cloud/cloud.h:
--------------------------------------------------------------------------------
1 | //
2 | //
3 | // 0==========================0
4 | // | Local feature test |
5 | // 0==========================0
6 | //
7 | // version 1.0 :
8 | // >
9 | //
10 | //---------------------------------------------------
11 | //
12 | // Cloud header
13 | //
14 | //----------------------------------------------------
15 | //
16 | // Hugues THOMAS - 10/02/2017
17 | //
18 |
19 |
20 | # pragma once
21 |
22 | #include <vector>
23 | #include <unordered_map>
24 | #include <map>