# Self-Supervised-depth
by kalilia.

# Contents

* [Overview](#0-depth-estimation-overview)
* [Datasets](#-datasets)
* [1-Monocular video: SfM-based monocular depth](#mono-sfm)
  * [2017](#2017)
  * [2018](#2018)
  * [2019](#2019)
  * [2020](#2020)
  * [2021](#2021)
  * [Indoor](#indoor)
* [2-Multi-view: Multi-view stereo](#3-multi-view-stereo)
* [3-Light-field-based](#light-field-based-depth)
* [i-Nighttime depth](#nighttime-depth)
* [ii-Semantic-aware depth](#semantic-aware-depth)
* [Related: monocular depth with cost volume](#1-monocular-depth-with-cost-volume)
* [Related: depth completion](#6-depth-estimation-and-completion)
* [Related: video depth](#video-depth)
* [Related: virtual-to-real depth](#virtual2real-depth)
* [Related: SLAM / visual odometry](#4-slam-visual-odometry)

# 0-depth-estimation-overview
| Conference | Title |code|Author|mark|note|
|--------------|:------------------------------------------------------------------------------------------:|----|----|----|----|
| | [Single Image Depth Estimation: An Overview](https://arxiv.org/pdf/2104.06456.pdf) ||Istanbul Technical University|:hear_no_evil:||

# *-datasets
| Title |Year|Author|mark|note|
|:------------------------------------------------------------------------------------------:|----|----|----|----|
| [Vision meets Robotics: The KITTI Dataset](http://www.cvlibs.net/publications/Geiger2013IJRR.pdf) |2013|Karlsruhe Institute of Technology|:hear_no_evil:||
| [NYU Depth v2: Indoor Segmentation and Support Inference from RGBD Images](https://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) |2012|New York University|:hear_no_evil:||
| [nuScenes: A multimodal dataset for autonomous driving](https://arxiv.org/pdf/1903.11027.pdf) |2019|nuTonomy: an APTIV company|:hear_no_evil:||

# Mono-SfM
## 2017
| Conference | Title |code|Author|mark|note|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|----|----|
| CVPR2017 |[Semi-Supervised Deep Learning for Monocular Depth Map Prediction](http://arxiv.org/abs/1702.02706) ||RWTH Aachen University|:see_no_evil:||
| CVPR2017 |[SfMLearner: Unsupervised Learning of Depth and Ego-Motion from Video](http://arxiv.org/abs/1704.07813) |[link](https://github.com/tinghuiz/SfMLearner)|UC Berkeley|:star:|[link](https://www.yuque.com/kalilia/amcd6z/qcevce)|

([Back](#contents))

## 2018
| Conference | Title |code|Author|mark|note|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|----|----|
| CVPR2018 |[DVO: Learning Depth from Monocular Videos using Direct Methods](http://arxiv.org/abs/1712.00175) ||Carnegie Mellon University|:see_no_evil:||
| CVPR2018 |[GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose](http://arxiv.org/abs/1803.02276) |[link](https://github.com/yzcjtr/GeoNet)|SenseTime Research|:see_no_evil:||
| ECCV2018 |[DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency](http://arxiv.org/abs/1809.01649) |[link](http://yuliang.vision/DF-Net/)|Virginia Tech|:see_no_evil:||
| ECCV2018 |[Supervising the new with the old: learning SFM from SFM](https://www.robots.ox.ac.uk/~vedaldi/assets/pubs/klodt18supervising.pdf) ||University of Oxford|:see_no_evil:||

([Back](#contents))
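The SfMLearner family above shares one training signal: photometric reconstruction error after differentiably warping a source frame into the target view using predicted depth and relative pose. A minimal PyTorch sketch of that warp-and-compare step, assuming a pinhole camera and ignoring occlusion/auto-masking; shapes and helper names are illustrative, not taken from any repo listed here:

```python
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift each pixel to a 3D point: X = D(u, v) * K^-1 [u, v, 1]^T."""
    b, _, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).float()   # (3, H, W)
    rays = K_inv @ pix.reshape(3, -1)                              # (3, H*W)
    return depth.reshape(b, 1, -1) * rays.unsqueeze(0)             # (B, 3, H*W)

def view_synthesis_loss(target, source, depth, T, K):
    """Warp `source` into the target view, then compare photometrically (L1)."""
    b, _, h, w = target.shape
    pts = backproject(depth, torch.linalg.inv(K))       # target-frame 3D points
    pts_src = T[:, :3, :3] @ pts + T[:, :3, 3:]         # rigid motion into source frame
    proj = K @ pts_src                                  # project with intrinsics
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)      # perspective divide
    uv = uv.reshape(b, 2, h, w).permute(0, 2, 3, 1)
    grid = torch.stack([2 * uv[..., 0] / (w - 1) - 1,   # normalise to [-1, 1]
                        2 * uv[..., 1] / (h - 1) - 1], dim=-1)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (warped - target).abs().mean()
```

Gradients flow back through `grid_sample` into both the depth and pose predictions, which is what makes the whole pipeline trainable from raw video.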
## 2019
| Conference | Title |code|Author|mark|note|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|----|----|
|2019 | [Self-Supervised 3D Keypoint Learning for Ego-motion Estimation](http://arxiv.org/abs/1912.03426)||Toyota Research Institute (TRI)|:see_no_evil:||
|ICRA2019 | [SuperDepth: Self-Supervised, Super-Resolved Monocular Depth Estimation](http://arxiv.org/abs/1810.01849)||Toyota Research Institute (TRI)|:see_no_evil:||
|AAAI2019 | [Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos](http://arxiv.org/abs/1811.06152)|[link](https://sites.google.com/view/struct2depth)|Harvard University/Google Brain|:see_no_evil:||
|ICCV2019 | [Moving Indoor: Unsupervised Video Depth Learning in Challenging Environments](https://arxiv.org/pdf/1910.08898.pdf)||Tsinghua University|:see_no_evil:||
|ICCV2019 | [Unsupervised High-Resolution Depth Learning From Videos With Dual Networks](https://arxiv.org/pdf/1910.08897.pdf)||Tsinghua University|:see_no_evil:||
|ICCV2019 | [Self-Supervised Monocular Depth Hints](https://arxiv.org/pdf/1909.09051.pdf)|[link](https://github.com/nianticlabs/depth-hints)|Niantic|:see_no_evil:||
|ICCV2019 | [Monodepth2: Digging into self-supervised monocular depth estimation](http://arxiv.org/abs/1806.01260)|[link](https://github.com/nianticlabs/monodepth2)|UCL/Niantic|:star2:||
|NeurIPS2019 | [SC-SfMLearner: Unsupervised scale-consistent depth and ego-motion learning from monocular video](http://arxiv.org/abs/1908.10553)||University of Adelaide, Australia|:see_no_evil:||
|CVPR2019 | [Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation](http://arxiv.org/abs/1805.09806)|[link](https://github.com/anuragranj/cc)|Max Planck Institute for Intelligent Systems|:see_no_evil:||
|CoRL2019 | [Robust Semi-Supervised Monocular Depth Estimation with Reprojected Distances](https://arxiv.org/pdf/1910.01765.pdf)||Toyota Research Institute (TRI)|:see_no_evil:||

([Back](#contents))
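Two Monodepth2 details recur in most later entries: an SSIM+L1 photometric error (0.85/0.15 mix, per the paper) and a per-pixel minimum of the reprojection error over source frames, which tolerates pixels occluded in one of the views. A hedged sketch with a simplified box-filter SSIM; `warped_sources` would come from view synthesis as in the previous snippet:

```python
import torch
import torch.nn.functional as F

def ssim(x, y):
    """Simplified SSIM over 3x3 windows (box filter instead of Gaussian)."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ((1 - s) / 2).clamp(0, 1)

def photometric(pred, target, alpha=0.85):
    """Monodepth2-style mix: alpha * SSIM term + (1 - alpha) * L1."""
    return alpha * ssim(pred, target).mean(1, keepdim=True) + \
           (1 - alpha) * (pred - target).abs().mean(1, keepdim=True)

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum over source views: occluded pixels pick the other view."""
    errors = torch.stack([photometric(w, target) for w in warped_sources], dim=0)
    return errors.min(dim=0).values.mean()
```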
## 2020
| Conference | Title |code|Author|mark|note|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|----|----|
|ECCV2020 | [DeepSFM: Structure From Motion Via Deep Bundle Adjustment](http://arxiv.org/abs/1912.09697)||Fudan University|:see_no_evil:||
|ECCV2020 | [P2Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation](https://arxiv.org/pdf/2007.07696.pdf)||ShanghaiTech University|:see_no_evil:||
|ECCV2020 |[Feature-metric Loss for Self-supervised Learning of Depth and Egomotion](https://arxiv.org/pdf/2007.10603.pdf)|[link](https://github.com/sconlyshootery/FeatDepth)||:see_no_evil:||
|CoRL2020 | [Unsupervised Monocular Depth Learning in Dynamic Scenes](http://arxiv.org/abs/2010.16404)||Google Research|:see_no_evil:||
|CoRL2020 | [Attentional Separation-and-Aggregation Network for Self-supervised Depth-Pose Learning in Dynamic Scenes](http://arxiv.org/abs/2011.09369)||Tsinghua University|:hear_no_evil:||
|3DV2020 | [Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion](http://arxiv.org/abs/2008.06630)||Toyota Research Institute (TRI)|||
|ICLR2020 | [Semantically-Guided Representation Learning for Self-Supervised Monocular Depth](http://arxiv.org/abs/2002.12319)||Toyota Research Institute (TRI)|||
|CVPR2020 | [On the uncertainty of self-supervised monocular depth estimation](http://arxiv.org/abs/2005.06209)|[link](https://github.com/mattpoggi/mono-uncertainty)|University of Bologna, Italy|:see_no_evil:||
|CVPR2020 | [Towards Better Generalization: Joint Depth-Pose Learning without PoseNet](http://arxiv.org/abs/2004.01314)|[link](https://github.com/B1ueber2y/TrianFlow)|Tsinghua University|:see_no_evil:|[link](https://www.yuque.com/kalilia/amcd6z/ztlpsr)|
|CVPR2020 | [3D Packing for Self-Supervised Monocular Depth Estimation](http://arxiv.org/abs/1905.02693)||Toyota Research Institute (TRI)|:star2:|[link](https://www.yuque.com/kalilia/amcd6z/sfenyx)|
|CVPR2020 | [Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume](http://arxiv.org/abs/2003.13951)||University of Adelaide|:see_no_evil:||
|2020 | [SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware Feature Extraction](http://arxiv.org/abs/2010.02893)||KAIST|:see_no_evil:||
|2020 | [Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance](http://arxiv.org/abs/2007.06936)||Technische Universität Braunschweig, Germany|:see_no_evil:||
|IROS2020 | [Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications](https://arxiv.org/pdf/2004.05560.pdf)|[link](https://github.com/TJ-IPLab/DNet)|Tongji University|:see_no_evil:||

([Back](#contents))
## 2021
| Conference | Title |code|Author|mark|note|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|----|----|
|AAAI2021|HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation|[link](https://github.com/shawLyu/HR-Depth)|Zhejiang University|:star:|[link](https://www.yuque.com/kalilia/amcd6z/ekrber)|
|AAAI2021|[Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency](http://arxiv.org/abs/2102.02629)|[link](https://github.com/SeokjuLee/Insta-DM)|KAIST|:star:|[link](https://www.yuque.com/kalilia/amcd6z/oz1hqh)|
|CVPR2021 | [Manydepth: The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth](https://arxiv.org/pdf/2104.14540.pdf)|[link](https://github.com/nianticlabs/manydepth)|Niantic|:see_no_evil:||
|CVPR2021 | [MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera](http://arxiv.org/abs/2011.11814)|[link](https://vision.in.tum.de/research/monorec)|TUM|:see_no_evil:||
|IROS2021 | [Self-Supervised Scale Recovery for Monocular Depth and Egomotion Estimation](https://arxiv.org/pdf/2009.03787.pdf)|[link](https://github.com/utiasSTARS/learned_scale_recovery)|University of Toronto|:see_no_evil:||
|2021 | [Self-supervised Depth Estimation Leveraging Global Perception and Geometric Smoothness Using On-board Videos](http://arxiv.org/abs/2106.03505)||Hong Kong Polytechnic University|:see_no_evil:||
|2021 | [Self-Supervised Structure-from-Motion through Tightly-Coupled Depth and Egomotion Networks](https://arxiv.org/pdf/2106.04007.pdf)||University of Toronto|:see_no_evil:||
|2021 | [Moving SLAM: Fully Unsupervised Deep Learning in Non-Rigid Scenes](http://arxiv.org/abs/2105.02195)||HKUST|:see_no_evil:||
|2021 | [Unsupervised Joint Learning of Depth, Optical Flow, Ego-motion from Video](https://arxiv.org/pdf/2105.14520.pdf)||Tongji University|:see_no_evil:||
|2021 | [Monocular Depth Estimation through Virtual-world Supervision and Real-world SfM Self-Supervision](https://arxiv.org/pdf/2103.12209v1.pdf)|||:see_no_evil:||
|2021 | [Self-Supervised Learning of Depth and Ego-Motion from Video by Alternative Training and Geometric Constraints from 3D to 2D](https://arxiv.org/pdf/2108.01980.pdf)|||:see_no_evil:||
||-update-time-09-13-2021-|||||
|ICCV2021 | [Fine-grained Semantics-aware Representation Enhancement for Self-supervised Monocular Depth Estimation](http://arxiv.org/abs/2108.08829)||Seoul National University|:see_no_evil:||
|ICCV2021 | Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark|[link](https://github.com/w2kun/RNW)|Nanjing University of Science and Technology|:see_no_evil:||
|ICCV2021 | [Self-supervised Monocular Depth Estimation for All Day Images using Domain Separation](http://arxiv.org/abs/2108.07628)||Zhejiang University|:see_no_evil:||
|ICCV2021 | [StructDepth: Leveraging the structural regularities for self-supervised indoor depth estimation](http://arxiv.org/abs/2108.08574)||Shanghai Jiao Tong University|:see_no_evil:||
|ICCV2021 | [MonoIndoor: Towards Good Practice of Self-Supervised Monocular Depth Estimation for Indoor Environments](https://arxiv.org/pdf/2107.12429.pdf)||OPPO US Research Center|:see_no_evil:||
|Sensors Journal 2021 | [Unsupervised Monocular Depth Perception: Focusing on Moving Objects](http://arxiv.org/abs/2108.13062)||Chinese University of Hong Kong|:see_no_evil:||
|2021 | R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes||TUM|:star:||
|2021 | [Unsupervised Monocular Depth Estimation in Highly Complex Environments](http://arxiv.org/abs/2107.13137)||East China University of Science and Technology|:see_no_evil:||
||-update-time-10-13-2021-|||||
|3DV2021 | [PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation](https://arxiv.org/pdf/2110.05839.pdf)||The Chinese University of Hong Kong|:see_no_evil:||
||-update-time-11-29-2021-|||||
|BMVC2021 |[X-Distill: Improving Self-Supervised Monocular Depth via Cross-Task Distillation](https://arxiv.org/pdf/2110.12516.pdf) |||||
|BMVC2021 |[Self-Supervised Monocular Depth Estimation with Internal Feature Fusion](https://arxiv.org/pdf/2110.09482.pdf) |||||
|3DV2021 |[Attention meets Geometry: Geometry Guided Spatial-Temporal Attention for Consistent Self-Supervised Monocular Depth Estimation](https://arxiv.org/pdf/2110.08192.pdf) |||||
|ICCV2021 | [Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation](https://arxiv.org/pdf/2109.12484.pdf)||Peking University|:see_no_evil:||
|2021 |[SUB-Depth: Self-distillation and Uncertainty Boosting Self-supervised Monocular Depth Estimation](https://arxiv.org/pdf/2111.09692.pdf) |||||

([Back](#contents))
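Almost every method above couples the photometric term with an edge-aware smoothness regulariser on mean-normalised disparity, down-weighted where the image itself has gradients (image edges are plausible depth edges). A short sketch of the exp-weighted form common to the Monodepth family; exact weights vary per paper:

```python
import torch

def edge_aware_smoothness(disp, img):
    """Penalise disparity gradients, attenuated by image gradients.

    disp: (B, 1, H, W) predicted disparity; img: (B, 3, H, W) input image.
    """
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)  # scale-invariant norm
    dx_d = (disp[:, :, :, :-1] - disp[:, :, :, 1:]).abs()
    dy_d = (disp[:, :, :-1, :] - disp[:, :, 1:, :]).abs()
    dx_i = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    dy_i = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)
    return (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()
```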
## indoor
| Conference | Title |code|Author|mark|note|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|----|----|
||-update-time-01-19-2022-|||||
|ICCV2019|[Moving Indoor: Unsupervised Video Depth Learning in Challenging Environments](https://arxiv.org/pdf/1910.08898.pdf)||Tsinghua University|||
|ECCV2020|[P2Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation](https://arxiv.org/pdf/2007.07696.pdf)||ShanghaiTech University|||
|ICCV2021|[StructDepth: Leveraging the structural regularities for self-supervised indoor depth estimation](https://arxiv.org/pdf/2108.08574.pdf)||Shanghai Jiao Tong University|||
|ICCV2021|[MonoIndoor: Towards Good Practice of Self-Supervised Monocular Depth Estimation for Indoor Environments](https://arxiv.org/pdf/2107.12429.pdf)||OPPO US Research Center|||
|3DV2021|[PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation](https://arxiv.org/pdf/2110.05839.pdf)||The Chinese University of Hong Kong|||
|TPAMI|[Auto-Rectify Network for Unsupervised Indoor Depth Estimation](https://arxiv.org/pdf/2006.02708.pdf)||University of Adelaide|||
|2022|[Toward Practical Self-Supervised Monocular Indoor Depth Estimation](https://arxiv.org/pdf/2112.02306.pdf)||University of Southern California|||

([Back](#contents))
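Most papers in these tables report the same seven Eigen-split metrics, with per-image median scaling to resolve the monocular scale ambiguity. A sketch of that evaluation in NumPy; the metric definitions and 1.25 thresholds are the standard ones, while the clamping bounds are the usual KITTI choices and may differ per benchmark:

```python
import numpy as np

def depth_metrics(gt, pred, min_depth=1e-3, max_depth=80.0):
    """Standard error/accuracy metrics with per-image median scaling."""
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], pred[mask]
    pred = pred * (np.median(gt) / np.median(pred))  # monocular scale alignment
    pred = np.clip(pred, min_depth, max_depth)
    ratio = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": np.mean(np.abs(gt - pred) / gt),
        "sq_rel": np.mean((gt - pred) ** 2 / gt),
        "rmse": np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "a1": np.mean(ratio < 1.25),
        "a2": np.mean(ratio < 1.25 ** 2),
        "a3": np.mean(ratio < 1.25 ** 3),
    }
```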
# 3-Multi-view-stereo
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
|TPAMI2008| [SGM: Stereo Processing by Semi-Global Matching and Mutual Information](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=54679C33E714E9151BE8BC102B19A29E?doi=10.1.1.386.5238&rep=rep1&type=pdf) ||German Aerospace Center|:see_no_evil:|
|ECCV2016| [Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue](http://arxiv.org/abs/1603.04992) ||University of Adelaide|:see_no_evil:|
|CVPR2017| [Monodepth: Unsupervised Monocular Depth Estimation with Left-Right Consistency](https://arxiv.org/pdf/1609.03677.pdf) ||University College London|:see_no_evil:|
|| [Cost Volume Pyramid Based Depth Inference for Multi-View Stereo](http://arxiv.org/abs/2104.04314) |[link](https://github.com/BaiFree/CVP-MVSNet)|Northwestern Polytechnical University|:see_no_evil:|
| CVPR2020 |[CVP-MVSNet: Cost Volume Pyramid Based Depth Inference for Multi-View Stereo](http://arxiv.org/abs/1912.08329) ||Australian National University|:see_no_evil:|
| AAAI2021 | [Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation](http://arxiv.org/abs/2104.05374) ||South China University of Technology|:see_no_evil:|
| CVPR2021 | [Differentiable Diffusion for Dense Depth Estimation from Multi-view Images](https://arxiv.org/pdf/2106.08917.pdf) ||Brown University|:see_no_evil:|
| ICCV2021 |[NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo](https://arxiv.org/pdf/2109.01129.pdf) ||Tsinghua University|:star:|

([Back](#contents))
# 4-SLAM-Visual-Odometry
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
| ECCV2014 | [LSD-SLAM: Large-Scale Direct Monocular SLAM](https://link.springer.com/content/pdf/10.1007%2F978-3-319-10605-2_54.pdf) ||TUM|:see_no_evil:|
| TRO2015 | [ORB-SLAM: A Versatile and Accurate Monocular SLAM System](http://arxiv.org/abs/1502.00956) ||Universidad de Zaragoza|:see_no_evil:|
| 2016 | [Direct Visual Odometry using Bit-Planes](https://arxiv.org/pdf/1604.00990.pdf) ||Carnegie Mellon University|:see_no_evil:|
| TRO2017 | [ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras](http://arxiv.org/abs/1610.06475) ||Universidad de Zaragoza|:see_no_evil:|
|2016| [A Photometrically Calibrated Benchmark For Monocular Visual Odometry](http://arxiv.org/abs/1607.02555) ||TUM|:see_no_evil:|

([Back](#contents))
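Monocular VO systems in this section recover trajectories only up to scale, so evaluations typically least-squares-align the estimate to ground truth before computing ATE. A small NumPy sketch of a scale-only alignment (not the full Umeyama similarity transform, which also solves for rotation):

```python
import numpy as np

def align_scale_and_ate(gt_xyz, est_xyz):
    """Scale-align an up-to-scale trajectory to ground truth, then report ATE RMSE.

    gt_xyz, est_xyz: (N, 3) camera positions at matching timestamps.
    """
    gt = gt_xyz - gt_xyz.mean(axis=0)              # remove translation offsets
    est = est_xyz - est_xyz.mean(axis=0)
    scale = np.sum(gt * est) / np.sum(est * est)   # closed-form least-squares scale
    err = gt - scale * est
    ate_rmse = np.sqrt((err ** 2).sum(axis=1).mean())
    return scale, ate_rmse
```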
## 2018
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
|TPAMI2018| [DSO: Direct Sparse Odometry](http://arxiv.org/abs/1607.02565) ||TUM|:see_no_evil:|
|IROS2018| [LDSO: Direct Sparse Odometry with Loop Closure](http://arxiv.org/abs/1808.01111) ||TUM|:see_no_evil:|
| ECCV2018 | [Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry](https://arxiv.org/pdf/1807.02570) ||TUM|:see_no_evil:|
|2018| [Self-improving visual odometry](http://arxiv.org/abs/1812.03245) ||Magic Leap, Inc.|:see_no_evil:|

([Back](#contents))
## 2019
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
|ICLR2019| [BA-Net: Dense Bundle Adjustment Networks](http://arxiv.org/abs/1806.04807) ||Simon Fraser University|:see_no_evil:|
|CoRL2020 | [TartanVO: A Generalizable Learning-based VO](https://arxiv.org/pdf/2011.00359.pdf) |[link](https://github.com/castacks/tartanvo)|Carnegie Mellon University|:see_no_evil:|
| IROS2020 | D2VO: Monocular Deep Direct Visual Odometry |||:see_no_evil:|

([Back](#contents))
## 2020
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
| ECCV2020 | [Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction](http://arxiv.org/abs/2004.10681) ||IIIT-Delhi|:see_no_evil:|
| CVPR2020 | [VOLDOR: Visual Odometry from Log-logistic Dense Optical flow Residuals](http://arxiv.org/abs/2104.06789) ||Stevens Institute of Technology|:see_no_evil:|
| 2021 | [Generalizing to the Open World: Deep Visual Odometry with Online Adaptation](http://arxiv.org/abs/2103.15279) ||Peking University|:see_no_evil:|
| ICRA2021 | [SA-LOAM: Semantic-aided LiDAR SLAM with Loop Closure](http://arxiv.org/abs/2106.11516) ||Zhejiang University|:see_no_evil:|

([Back](#contents))
# Semantic-aware-depth
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
|2020| [SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware Feature Extraction](http://arxiv.org/abs/2010.02893) ||KAIST||
|AAAI2021| [Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency](http://arxiv.org/abs/2102.02629) ||KAIST||
|ICCV2021| [Fine-grained Semantics-aware Representation Enhancement for Self-supervised Monocular Depth Estimation](http://arxiv.org/abs/2108.08829) ||Seoul National University||

([Back](#contents))
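A common recipe among the semantic-aware entries is to use segmentation either to enrich depth features or to mask the photometric loss on likely-moving classes, whose pixels violate the static-scene assumption. A toy sketch of the masking variant; the class IDs are hypothetical (Cityscapes-style label map assumed), and per-paper handling is more refined:

```python
import torch

# hypothetical Cityscapes-style IDs for classes that often move
DYNAMIC_CLASS_IDS = (11, 12, 13, 14, 15, 16, 17, 18)  # person, rider, car, truck, ...

def mask_dynamic_pixels(photo_error, seg):
    """Zero the photometric loss on pixels whose semantic class may be dynamic.

    photo_error: (B, 1, H, W) per-pixel loss; seg: (B, H, W) integer label map.
    """
    dynamic = torch.zeros_like(seg, dtype=torch.bool)
    for cid in DYNAMIC_CLASS_IDS:
        dynamic |= seg == cid
    keep = (~dynamic).unsqueeze(1).float()
    # average only over the kept (static) pixels
    return (photo_error * keep).sum() / keep.sum().clamp(min=1.0)
```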
# 1-Monocular-depth with Cost Volume
| Conference | Title |code|Author|mark|note|
|--------------|:------------------------------------------------------------------------------------------:|----|----|----|----|
|NeurIPS2020 | [Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes](https://arxiv.org/pdf/2008.03633.pdf) ||Korea Advanced Institute of Science and Technology|:hear_no_evil:||
| CVPR2021 | [DRO: Deep Recurrent Optimizer for Structure-from-Motion](https://arxiv.org/pdf/2103.13201.pdf) ||Alibaba A.I. Labs|:see_no_evil:||
|CVPR2021 | [The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth](https://arxiv.org/pdf/2104.14540.pdf)|[link](https://github.com/nianticlabs/manydepth)|Niantic|:see_no_evil:||
|CVPR2020 |[Self-supervised Monocular Trained Depth Estimation using Self-attention and Discrete Disparity Volume](https://arxiv.org/pdf/2003.13951.pdf)|[link](https://github.com/sjsu-smart-lab/Self-supervised-Monocular-Trained-Depth-Estimation-using-Self-attention-and-Discrete-Disparity-Volum)|Australian Institute for Machine Learning|:see_no_evil:||

([Back](#contents))
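The cost-volume methods above discretise depth into hypothesis planes, warp source features to each hypothesis, and score the match. A compact sketch of plane-sweep volume construction plus the soft-argmin readout; it reuses the warping idea from the view-synthesis sketch earlier, and `warp_fn` plus all shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def plane_sweep_volume(tgt_feat, src_feat, depths, K, T, warp_fn):
    """Build a (B, D, H, W) matching volume over candidate depth planes.

    `warp_fn(src, depth, T, K)` is assumed to warp source features into the
    target view for a constant-depth plane (same machinery as view synthesis).
    """
    costs = []
    for d in depths:                                         # e.g. torch.linspace(1, 80, 64)
        plane = torch.full_like(tgt_feat[:, :1], float(d))   # constant-depth hypothesis
        warped = warp_fn(src_feat, plane, T, K)
        # negative L1 feature distance: higher = better match at this depth
        costs.append(-(warped - tgt_feat).abs().mean(dim=1))
    return torch.stack(costs, dim=1)                         # (B, D, H, W)

def soft_argmin_depth(volume, depths):
    """Differentiable depth readout: softmax-weighted expectation over planes."""
    probs = F.softmax(volume, dim=1)                         # distribution over planes
    d = torch.as_tensor(depths, dtype=probs.dtype, device=probs.device)
    return (probs * d.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
```

The soft expectation keeps the whole readout differentiable, which is why this construction trains end-to-end where a hard argmin would not.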
# Video-depth
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
||-update-time-10-13-2021-||||
|SIGGRAPH2020| [Consistent Video Depth Estimation](https://arxiv.org/abs/2004.15021) ||University of Washington||
|CVPR2021| [Robust Consistent Video Depth Estimation](https://arxiv.org/pdf/2012.05901.pdf) ||Facebook||
|SIGGRAPH2021| [Consistent Depth of Moving Objects in Video](https://arxiv.org/pdf/2108.01166.pdf) ||||

([Back](#contents))

# Virtual2Real-depth
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|

([Back](#contents))
# NightTime-depth
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
|ICCV2021| Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark |[link](https://github.com/w2kun/RNW)|Nanjing University of Science and Technology|:see_no_evil:|
|ICCV2021| [Self-supervised Monocular Depth Estimation for All Day Images using Domain Separation](http://arxiv.org/abs/2108.07628) ||Zhejiang University|:see_no_evil:|
|2021| [Unsupervised Depth and Ego-motion Estimation for Monocular Thermal Video using Multi-spectral Consistency Loss](https://arxiv.org/abs/2103.00760) ||KAIST|:see_no_evil:|

([Back](#contents))
# Light-Field-based-depth
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
| CVPR2021 | [Differentiable Diffusion for Dense Depth Estimation from Multi-view Images](https://arxiv.org/pdf/2106.08917.pdf) ||Brown University|:see_no_evil:|
| IROS2021 | [Unsupervised Learning of Depth Estimation and Visual Odometry for Sparse Light Field Cameras](http://arxiv.org/abs/2103.11322) ||The University of Sydney|:see_no_evil:|
| 2021 | [Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields](https://arxiv.org/pdf/2106.03043.pdf) ||University of Sydney|:see_no_evil:|

([Back](#contents))
# 6-depth-estimation-and-completion
| Conference | Title |code|Author|mark|
|--------------|:-------------------------------------------------------------------------------------------:|----|-----|-----|
|| [Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion](http://arxiv.org/abs/2103.16690) ||Toyota Research Institute (TRI)|:see_no_evil:|
|3DV2019| [Enhancing self-supervised monocular depth estimation with traditional visual odometry](http://arxiv.org/abs/1908.03127) ||Univrses AB|:see_no_evil:|
|ECCV2020 |[S3Net: Semantic-aware self-supervised depth estimation with monocular videos and synthetic data](https://arxiv.org/pdf/2007.14511.pdf)||UCSD|:see_no_evil:|
|ICCV2021 |[Unsupervised Depth Completion with Calibrated Backprojection Layers](http://arxiv.org/abs/2108.10531)||UCLA|:see_no_evil:|
||-update-time-11-29-2021-||||
| |[Self-Supervised Depth Completion for Active Stereo](https://arxiv.org/pdf/2110.03234.pdf)||UCLA|:see_no_evil:|

([Back](#contents))