├── figures
│   ├── alpha_over_time.gif
│   ├── gamma_over_time.gif
│   ├── kitti_teaser_image.png
│   ├── void_teaser_image.png
│   ├── kitti_teaser_pointcloud.gif
│   └── void_teaser_pointcloud.gif
├── license
└── README.md
/figures/alpha_over_time.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexklwong/adaframe-depth-completion/HEAD/figures/alpha_over_time.gif
--------------------------------------------------------------------------------
/figures/gamma_over_time.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexklwong/adaframe-depth-completion/HEAD/figures/gamma_over_time.gif
--------------------------------------------------------------------------------
/figures/kitti_teaser_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexklwong/adaframe-depth-completion/HEAD/figures/kitti_teaser_image.png
--------------------------------------------------------------------------------
/figures/void_teaser_image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexklwong/adaframe-depth-completion/HEAD/figures/void_teaser_image.png
--------------------------------------------------------------------------------
/figures/kitti_teaser_pointcloud.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexklwong/adaframe-depth-completion/HEAD/figures/kitti_teaser_pointcloud.gif
--------------------------------------------------------------------------------
/figures/void_teaser_pointcloud.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexklwong/adaframe-depth-completion/HEAD/figures/void_teaser_pointcloud.gif
--------------------------------------------------------------------------------
/license:
--------------------------------------------------------------------------------
1 | Academic Software License
2 |
3 | AdaFrame
4 |
5 | No Commercial Use
6 |
7 | This License governs use of the accompanying Software, and your use of the Software constitutes acceptance of this license.
8 |
9 | You may use this Software for any non-commercial purpose, subject to the restrictions in this license. Uses which are non-commercial include teaching, academic research, and personal experimentation.
10 |
11 | You may not use or distribute this Software or any derivative works in any form for any commercial purpose. Examples of commercial purposes would be running business operations, licensing, leasing, or selling the Software, or distributing the Software for use with commercial products.
12 |
13 | You may modify this Software and distribute the modified Software for non-commercial purposes; however, you may not grant rights to the Software or derivative works that are broader than those provided by this License. For example, you may not distribute modifications of the Software under terms that would permit commercial use, or under terms that purport to require the Software or derivative works to be sublicensed to others.
14 |
15 | You agree:
16 |
17 | Not to remove any copyright or other notices from the Software.
18 |
19 | That if you distribute the Software in source or object form, you will include a verbatim copy of this license.
20 |
21 | That if you distribute derivative works of the Software in source code form you do so only under a license that includes all of the provisions of this License, and if you distribute derivative works of the Software solely in object form you do so only under a license that complies with this License.
22 |
23 | That if you have modified the Software or created derivative works, and distribute such modifications or derivative works, you will cause the modified files to carry prominent notices so that recipients know that they are not receiving the original Software. Such notices must state: (i) that you have changed the Software; and (ii) the date of any changes.
24 |
25 | THAT THIS PRODUCT IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
26 | PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS PRODUCT, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. YOU MUST PASS THIS LIMITATION OF LIABILITY ON WHENEVER YOU DISTRIBUTE THE SOFTWARE OR DERIVATIVE WORKS.
27 |
28 | That if you sue anyone over patents that you think may apply to the Software or anyone's use of the Software, your license to the Software ends automatically.
29 |
30 | That your rights under the License end automatically if you breach it in any way.
31 |
32 | UCLA reserves all rights not expressly granted to you in this license.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # An Adaptive Framework for Learning Unsupervised Depth Completion
2 |
3 | PyTorch implementation of *An Adaptive Framework for Learning Unsupervised Depth Completion*
4 |
5 | Project AdaFrame: Ada(ptive) Frame(work) for Depth Completion
6 |
7 | Published in RA-L January 2021 and ICRA 2021
8 |
9 | [[publication]](https://ieeexplore.ieee.org/document/9351588)
10 |
11 | Models have been tested on Ubuntu 16.04 and 20.04 using Python 3.5 and 3.6 with PyTorch 1.2.0
12 |
13 | Authors: [Alex Wong](http://web.cs.ucla.edu/~alexw/), [Xiaohan Fei](https://feixh.github.io/)
14 |
15 | If this work is useful to you, please cite our paper:
16 | ```
17 | @article{wong2021adaptive,
18 | title={An Adaptive Framework for Learning Unsupervised Depth Completion},
19 | author={Wong, Alex and Fei, Xiaohan and Hong, Byung-Woo and Soatto, Stefano},
20 | journal={IEEE Robotics and Automation Letters},
21 | volume={6},
22 | number={2},
23 | pages={3120--3127},
24 | year={2021},
25 | publisher={IEEE}
26 | }
27 | ```
28 |
29 | **Table of Contents**
30 | 1. [About sparse-to-dense depth completion](#about-sparse-to-dense)
31 | 2. [About AdaFrame](#about-adaframe)
32 | 3. [Related projects](#related-projects)
33 | 4. [License and disclaimer](#license-disclaimer)
34 |
35 | ## About sparse-to-dense depth completion
36 | In the sparse-to-dense depth completion problem, we seek to infer the dense depth map of a 3-D scene from an RGB image and its associated sparse depth measurements, given as a sparse depth map obtained either from computational methods such as SfM (Structure-from-Motion) or from active sensors such as lidar or structured-light sensors.
37 |
38 | | *RGB image from the VOID dataset* | *Our densified depth map -- colored and backprojected to 3D* |
39 | | :----------------------------------------: | :--------------------------------------------------------: |
40 | | <img src="figures/void_teaser_image.png" width="400"> | <img src="figures/void_teaser_pointcloud.gif" width="400"> |
41 |
42 | | *RGB image from the KITTI dataset* | *Our densified depth map -- colored and backprojected to 3D* |
43 | | :-----------------------------------------: | :--------------------------------------------------------: |
44 | | <img src="figures/kitti_teaser_image.png" width="400"> | <img src="figures/kitti_teaser_pointcloud.gif" width="400"> |
45 |
46 | To follow the literature and benchmarks for this task, you may visit:
47 | [Awesome State of Depth Completion](https://github.com/alexklwong/awesome-state-of-depth-completion)
48 |
49 | ## About AdaFrame
50 | A number of computer vision problems can be formulated as minimizing an energy function consisting of a linear combination of a data fidelity term (fitness to data) and a regularizer (bias or prior). The data fidelity term is weighted uniformly by a scalar α and the regularizer by a scalar γ, which together determine their relative significance.
51 |
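As a minimal sketch of this formulation (the data fidelity and regularizer chosen here are illustrative stand-ins, not the paper's exact losses), the objective is a weighted sum of the two terms:

```python
import numpy as np

def energy(u, target, alpha=1.0, gamma=0.1):
    """E(u) = alpha * D(u) + gamma * R(u), with illustrative choices:
    D = mean squared residual, R = total variation (smoothness prior)."""
    data_fidelity = np.mean((u - target) ** 2)
    regularizer = (np.mean(np.abs(np.diff(u, axis=0)))
                   + np.mean(np.abs(np.diff(u, axis=1))))
    return alpha * data_fidelity + gamma * regularizer

# A constant prediction that matches the target has zero energy.
print(energy(np.ones((4, 4)), np.ones((4, 4))))  # 0.0
```

Raising γ relative to α trades fitness to the measurements for smoothness of the solution, which is exactly the balance the adaptive scheme below tunes automatically.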
52 | However, a uniform, static α does not account for visibility phenomena (occlusions), and a uniform, static γ may impose too much or too little regularization. We propose an adaptive framework for α and γ consisting of weighting schemes that vary spatially (over the image domain) and temporally (over training time) based on the residual, i.e. the fitness of the model to the data.
53 |
54 | **α** starts by weighting all pixel locations approximately uniformly and gradually downweights regions with high residual over time. α is conditioned on the mean (global) residual: as the model becomes better fitted to the data, we become more confident that high-residual regions are the result of occlusions, yielding a sharper curve over time. Here is a visualization of α:
55 |
56 | <img src="figures/alpha_over_time.gif">
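A hedged sketch of such a residual-conditioned weighting (the exponential form is a hypothetical stand-in, not the paper's exact scheme): normalizing each pixel's residual by the global mean residual makes the weights nearly uniform early in training, when all residuals are comparably high, and sharply suppresses outlier (likely occluded) regions once the mean residual drops:

```python
import numpy as np

def alpha_weights(residual):
    """Per-pixel weights conditioned on the global residual (hypothetical form).
    Early on, residuals are uniformly high, so r / mean(r) is near 1 everywhere
    and the weights are roughly uniform; as the fit improves and mean(r) drops,
    high-residual (likely occluded) pixels are downweighted more sharply."""
    mean_r = np.mean(residual) + 1e-8  # guard against division by zero
    return np.exp(-residual / mean_r)

r = np.array([[0.1, 0.1], [0.1, 1.0]])  # one high-residual (occluded) pixel
w = alpha_weights(r)
# Low-residual pixels keep high weight; the outlier is suppressed.
print(w[0, 0] > w[1, 1])  # True
```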