├── LICENSE.md
├── README.md
└── image
    ├── concept_figure.png
    └── image.md

/LICENSE.md:
--------------------------------------------------------------------------------
1 | Attribution-NonCommercial-ShareAlike 4.0 International 2 | 3 | ======================================================================= 4 | 5 | Creative Commons Corporation ("Creative Commons") is not a law firm and 6 | does not provide legal services or legal advice. Distribution of 7 | Creative Commons public licenses does not create a lawyer-client or 8 | other relationship. Creative Commons makes its licenses and related 9 | information available on an "as-is" basis. Creative Commons gives no 10 | warranties regarding its licenses, any material licensed under their 11 | terms and conditions, or any related information. Creative Commons 12 | disclaims all liability for damages resulting from their use to the 13 | fullest extent possible. 14 | 15 | Using Creative Commons Public Licenses 16 | 17 | Creative Commons public licenses provide a standard set of terms and 18 | conditions that creators and other rights holders may use to share 19 | original works of authorship and other material subject to copyright 20 | and certain other rights specified in the public license below. The 21 | following considerations are for informational purposes only, are not 22 | exhaustive, and do not form part of our licenses. 23 | 24 | Considerations for licensors: Our public licenses are 25 | intended for use by those authorized to give the public 26 | permission to use material in ways otherwise restricted by 27 | copyright and certain other rights. Our licenses are 28 | irrevocable. Licensors should read and understand the terms 29 | and conditions of the license they choose before applying it. 30 | Licensors should also secure all rights necessary before 31 | applying our licenses so that the public can reuse the 32 | material as expected. Licensors should clearly mark any 33 | material not subject to the license. This includes other CC- 34 | licensed material, or material used under an exception or 35 | limitation to copyright. More considerations for licensors: 36 | wiki.creativecommons.org/Considerations_for_licensors 37 | 38 | Considerations for the public: By using one of our public 39 | licenses, a licensor grants the public permission to use the 40 | licensed material under specified terms and conditions. If 41 | the licensor's permission is not necessary for any reason--for 42 | example, because of any applicable exception or limitation to 43 | copyright--then that use is not regulated by the license. Our 44 | licenses grant only permissions under copyright and certain 45 | other rights that a licensor has authority to grant. Use of 46 | the licensed material may still be restricted for other 47 | reasons, including because others have copyright or other 48 | rights in the material. A licensor may make special requests, 49 | such as asking that all changes be marked or described. 50 | Although not required by our licenses, you are encouraged to 51 | respect those requests where reasonable.
More_considerations 52 | for the public: 53 | wiki.creativecommons.org/Considerations_for_licensees 54 | 55 | ======================================================================= 56 | 57 | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International 58 | Public License 59 | 60 | By exercising the Licensed Rights (defined below), You accept and agree 61 | to be bound by the terms and conditions of this Creative Commons 62 | Attribution-NonCommercial-ShareAlike 4.0 International Public License 63 | ("Public License"). To the extent this Public License may be 64 | interpreted as a contract, You are granted the Licensed Rights in 65 | consideration of Your acceptance of these terms and conditions, and the 66 | Licensor grants You such rights in consideration of benefits the 67 | Licensor receives from making the Licensed Material available under 68 | these terms and conditions. 69 | 70 | 71 | Section 1 -- Definitions. 72 | 73 | a. Adapted Material means material subject to Copyright and Similar 74 | Rights that is derived from or based upon the Licensed Material 75 | and in which the Licensed Material is translated, altered, 76 | arranged, transformed, or otherwise modified in a manner requiring 77 | permission under the Copyright and Similar Rights held by the 78 | Licensor. For purposes of this Public License, where the Licensed 79 | Material is a musical work, performance, or sound recording, 80 | Adapted Material is always produced where the Licensed Material is 81 | synched in timed relation with a moving image. 82 | 83 | b. Adapter's License means the license You apply to Your Copyright 84 | and Similar Rights in Your contributions to Adapted Material in 85 | accordance with the terms and conditions of this Public License. 86 | 87 | c. BY-NC-SA Compatible License means a license listed at 88 | creativecommons.org/compatiblelicenses, approved by Creative 89 | Commons as essentially the equivalent of this Public License. 90 | 91 | d. Copyright and Similar Rights means copyright and/or similar rights 92 | closely related to copyright including, without limitation, 93 | performance, broadcast, sound recording, and Sui Generis Database 94 | Rights, without regard to how the rights are labeled or 95 | categorized. For purposes of this Public License, the rights 96 | specified in Section 2(b)(1)-(2) are not Copyright and Similar 97 | Rights. 98 | 99 | e. Effective Technological Measures means those measures that, in the 100 | absence of proper authority, may not be circumvented under laws 101 | fulfilling obligations under Article 11 of the WIPO Copyright 102 | Treaty adopted on December 20, 1996, and/or similar international 103 | agreements. 104 | 105 | f. Exceptions and Limitations means fair use, fair dealing, and/or 106 | any other exception or limitation to Copyright and Similar Rights 107 | that applies to Your use of the Licensed Material. 108 | 109 | g. License Elements means the license attributes listed in the name 110 | of a Creative Commons Public License. The License Elements of this 111 | Public License are Attribution, NonCommercial, and ShareAlike. 112 | 113 | h. Licensed Material means the artistic or literary work, database, 114 | or other material to which the Licensor applied this Public 115 | License. 116 | 117 | i. 
Licensed Rights means the rights granted to You subject to the 118 | terms and conditions of this Public License, which are limited to 119 | all Copyright and Similar Rights that apply to Your use of the 120 | Licensed Material and that the Licensor has authority to license. 121 | 122 | j. Licensor means the individual(s) or entity(ies) granting rights 123 | under this Public License. 124 | 125 | k. NonCommercial means not primarily intended for or directed towards 126 | commercial advantage or monetary compensation. For purposes of 127 | this Public License, the exchange of the Licensed Material for 128 | other material subject to Copyright and Similar Rights by digital 129 | file-sharing or similar means is NonCommercial provided there is 130 | no payment of monetary compensation in connection with the 131 | exchange. 132 | 133 | l. Share means to provide material to the public by any means or 134 | process that requires permission under the Licensed Rights, such 135 | as reproduction, public display, public performance, distribution, 136 | dissemination, communication, or importation, and to make material 137 | available to the public including in ways that members of the 138 | public may access the material from a place and at a time 139 | individually chosen by them. 140 | 141 | m. Sui Generis Database Rights means rights other than copyright 142 | resulting from Directive 96/9/EC of the European Parliament and of 143 | the Council of 11 March 1996 on the legal protection of databases, 144 | as amended and/or succeeded, as well as other essentially 145 | equivalent rights anywhere in the world. 146 | 147 | n. You means the individual or entity exercising the Licensed Rights 148 | under this Public License. Your has a corresponding meaning. 149 | 150 | 151 | Section 2 -- Scope. 152 | 153 | a. License grant. 154 | 155 | 1. Subject to the terms and conditions of this Public License, 156 | the Licensor hereby grants You a worldwide, royalty-free, 157 | non-sublicensable, non-exclusive, irrevocable license to 158 | exercise the Licensed Rights in the Licensed Material to: 159 | 160 | a. reproduce and Share the Licensed Material, in whole or 161 | in part, for NonCommercial purposes only; and 162 | 163 | b. produce, reproduce, and Share Adapted Material for 164 | NonCommercial purposes only. 165 | 166 | 2. Exceptions and Limitations. For the avoidance of doubt, where 167 | Exceptions and Limitations apply to Your use, this Public 168 | License does not apply, and You do not need to comply with 169 | its terms and conditions. 170 | 171 | 3. Term. The term of this Public License is specified in Section 172 | 6(a). 173 | 174 | 4. Media and formats; technical modifications allowed. The 175 | Licensor authorizes You to exercise the Licensed Rights in 176 | all media and formats whether now known or hereafter created, 177 | and to make technical modifications necessary to do so. The 178 | Licensor waives and/or agrees not to assert any right or 179 | authority to forbid You from making technical modifications 180 | necessary to exercise the Licensed Rights, including 181 | technical modifications necessary to circumvent Effective 182 | Technological Measures. For purposes of this Public License, 183 | simply making modifications authorized by this Section 2(a) 184 | (4) never produces Adapted Material. 185 | 186 | 5. Downstream recipients. 187 | 188 | a. Offer from the Licensor -- Licensed Material. 
Every 189 | recipient of the Licensed Material automatically 190 | receives an offer from the Licensor to exercise the 191 | Licensed Rights under the terms and conditions of this 192 | Public License. 193 | 194 | b. Additional offer from the Licensor -- Adapted Material. 195 | Every recipient of Adapted Material from You 196 | automatically receives an offer from the Licensor to 197 | exercise the Licensed Rights in the Adapted Material 198 | under the conditions of the Adapter's License You apply. 199 | 200 | c. No downstream restrictions. You may not offer or impose 201 | any additional or different terms or conditions on, or 202 | apply any Effective Technological Measures to, the 203 | Licensed Material if doing so restricts exercise of the 204 | Licensed Rights by any recipient of the Licensed 205 | Material. 206 | 207 | 6. No endorsement. Nothing in this Public License constitutes or 208 | may be construed as permission to assert or imply that You 209 | are, or that Your use of the Licensed Material is, connected 210 | with, or sponsored, endorsed, or granted official status by, 211 | the Licensor or others designated to receive attribution as 212 | provided in Section 3(a)(1)(A)(i). 213 | 214 | b. Other rights. 215 | 216 | 1. Moral rights, such as the right of integrity, are not 217 | licensed under this Public License, nor are publicity, 218 | privacy, and/or other similar personality rights; however, to 219 | the extent possible, the Licensor waives and/or agrees not to 220 | assert any such rights held by the Licensor to the limited 221 | extent necessary to allow You to exercise the Licensed 222 | Rights, but not otherwise. 223 | 224 | 2. Patent and trademark rights are not licensed under this 225 | Public License. 226 | 227 | 3. To the extent possible, the Licensor waives any right to 228 | collect royalties from You for the exercise of the Licensed 229 | Rights, whether directly or through a collecting society 230 | under any voluntary or waivable statutory or compulsory 231 | licensing scheme. In all other cases the Licensor expressly 232 | reserves any right to collect such royalties, including when 233 | the Licensed Material is used other than for NonCommercial 234 | purposes. 235 | 236 | 237 | Section 3 -- License Conditions. 238 | 239 | Your exercise of the Licensed Rights is expressly made subject to the 240 | following conditions. 241 | 242 | a. Attribution. 243 | 244 | 1. If You Share the Licensed Material (including in modified 245 | form), You must: 246 | 247 | a. retain the following if it is supplied by the Licensor 248 | with the Licensed Material: 249 | 250 | i. identification of the creator(s) of the Licensed 251 | Material and any others designated to receive 252 | attribution, in any reasonable manner requested by 253 | the Licensor (including by pseudonym if 254 | designated); 255 | 256 | ii. a copyright notice; 257 | 258 | iii. a notice that refers to this Public License; 259 | 260 | iv. a notice that refers to the disclaimer of 261 | warranties; 262 | 263 | v. a URI or hyperlink to the Licensed Material to the 264 | extent reasonably practicable; 265 | 266 | b. indicate if You modified the Licensed Material and 267 | retain an indication of any previous modifications; and 268 | 269 | c. indicate the Licensed Material is licensed under this 270 | Public License, and include the text of, or the URI or 271 | hyperlink to, this Public License. 272 | 273 | 2. 
You may satisfy the conditions in Section 3(a)(1) in any 274 | reasonable manner based on the medium, means, and context in 275 | which You Share the Licensed Material. For example, it may be 276 | reasonable to satisfy the conditions by providing a URI or 277 | hyperlink to a resource that includes the required 278 | information. 279 | 3. If requested by the Licensor, You must remove any of the 280 | information required by Section 3(a)(1)(A) to the extent 281 | reasonably practicable. 282 | 283 | b. ShareAlike. 284 | 285 | In addition to the conditions in Section 3(a), if You Share 286 | Adapted Material You produce, the following conditions also apply. 287 | 288 | 1. The Adapter's License You apply must be a Creative Commons 289 | license with the same License Elements, this version or 290 | later, or a BY-NC-SA Compatible License. 291 | 292 | 2. You must include the text of, or the URI or hyperlink to, the 293 | Adapter's License You apply. You may satisfy this condition 294 | in any reasonable manner based on the medium, means, and 295 | context in which You Share Adapted Material. 296 | 297 | 3. You may not offer or impose any additional or different terms 298 | or conditions on, or apply any Effective Technological 299 | Measures to, Adapted Material that restrict exercise of the 300 | rights granted under the Adapter's License You apply. 301 | 302 | 303 | Section 4 -- Sui Generis Database Rights. 304 | 305 | Where the Licensed Rights include Sui Generis Database Rights that 306 | apply to Your use of the Licensed Material: 307 | 308 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right 309 | to extract, reuse, reproduce, and Share all or a substantial 310 | portion of the contents of the database for NonCommercial purposes 311 | only; 312 | 313 | b. if You include all or a substantial portion of the database 314 | contents in a database in which You have Sui Generis Database 315 | Rights, then the database in which You have Sui Generis Database 316 | Rights (but not its individual contents) is Adapted Material, 317 | including for purposes of Section 3(b); and 318 | 319 | c. You must comply with the conditions in Section 3(a) if You Share 320 | all or a substantial portion of the contents of the database. 321 | 322 | For the avoidance of doubt, this Section 4 supplements and does not 323 | replace Your obligations under this Public License where the Licensed 324 | Rights include other Copyright and Similar Rights. 325 | 326 | 327 | Section 5 -- Disclaimer of Warranties and Limitation of Liability. 328 | 329 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE 330 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS 331 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF 332 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, 333 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, 334 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR 335 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, 336 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT 337 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT 338 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. 339 | 340 | b. 
TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE 341 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, 342 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, 343 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, 344 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR 345 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN 346 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR 347 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR 348 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. 349 | 350 | c. The disclaimer of warranties and limitation of liability provided 351 | above shall be interpreted in a manner that, to the extent 352 | possible, most closely approximates an absolute disclaimer and 353 | waiver of all liability. 354 | 355 | 356 | Section 6 -- Term and Termination. 357 | 358 | a. This Public License applies for the term of the Copyright and 359 | Similar Rights licensed here. However, if You fail to comply with 360 | this Public License, then Your rights under this Public License 361 | terminate automatically. 362 | 363 | b. Where Your right to use the Licensed Material has terminated under 364 | Section 6(a), it reinstates: 365 | 366 | 1. automatically as of the date the violation is cured, provided 367 | it is cured within 30 days of Your discovery of the 368 | violation; or 369 | 370 | 2. upon express reinstatement by the Licensor. 371 | 372 | For the avoidance of doubt, this Section 6(b) does not affect any 373 | right the Licensor may have to seek remedies for Your violations 374 | of this Public License. 375 | 376 | c. For the avoidance of doubt, the Licensor may also offer the 377 | Licensed Material under separate terms or conditions or stop 378 | distributing the Licensed Material at any time; however, doing so 379 | will not terminate this Public License. 380 | 381 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public 382 | License. 383 | 384 | 385 | Section 7 -- Other Terms and Conditions. 386 | 387 | a. The Licensor shall not be bound by any additional or different 388 | terms or conditions communicated by You unless expressly agreed. 389 | 390 | b. Any arrangements, understandings, or agreements regarding the 391 | Licensed Material not stated herein are separate from and 392 | independent of the terms and conditions of this Public License. 393 | 394 | 395 | Section 8 -- Interpretation. 396 | 397 | a. For the avoidance of doubt, this Public License does not, and 398 | shall not be interpreted to, reduce, limit, restrict, or impose 399 | conditions on any use of the Licensed Material that could lawfully 400 | be made without permission under this Public License. 401 | 402 | b. To the extent possible, if any provision of this Public License is 403 | deemed unenforceable, it shall be automatically reformed to the 404 | minimum extent necessary to make it enforceable. If the provision 405 | cannot be reformed, it shall be severed from this Public License 406 | without affecting the enforceability of the remaining terms and 407 | conditions. 408 | 409 | c. No term or condition of this Public License will be waived and no 410 | failure to comply consented to unless expressly agreed to by the 411 | Licensor. 412 | 413 | d. 
Nothing in this Public License constitutes or may be interpreted 414 | as a limitation upon, or waiver of, any privileges and immunities 415 | that apply to the Licensor or You, including from the legal 416 | processes of any jurisdiction or authority. 417 | 418 | ======================================================================= 419 | 420 | Creative Commons is not a party to its public 421 | licenses. Notwithstanding, Creative Commons may elect to apply one of 422 | its public licenses to material it publishes and in those instances 423 | will be considered the “Licensor.” The text of the Creative Commons 424 | public licenses is dedicated to the public domain under the CC0 Public 425 | Domain Dedication. Except for the limited purpose of indicating that 426 | material is shared under a Creative Commons public license or as 427 | otherwise permitted by the Creative Commons policies published at 428 | creativecommons.org/policies, Creative Commons does not authorize the 429 | use of the trademark "Creative Commons" or any other trademark or logo 430 | of Creative Commons without its prior written consent including, 431 | without limitation, in connection with any unauthorized modifications 432 | to any of its public licenses or any other arrangements, 433 | understandings, or agreements concerning use of licensed material. For 434 | the avoidance of doubt, this paragraph does not form part of the 435 | public licenses. 436 | 437 | Creative Commons may be contacted at creativecommons.org. 438 |
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Deep Learning for Localization and Mapping

![image](image/concept_figure.png)
This repository collects deep learning-based localization and mapping approaches. A survey on Deep Learning for Visual Localization and Mapping is presented in the following paper:

>Deep Learning for Visual Localization and Mapping: A Survey
>
>[Changhao Chen](https://changhao-chen.github.io/), [Bing Wang](https://www.cs.ox.ac.uk/people/bing.wang/), [Chris Xiaoxuan Lu](https://christopherlu.github.io/), [Niki Trigoni](https://www.cs.ox.ac.uk/people/niki.trigoni/) and [Andrew Markham](https://www.cs.ox.ac.uk/people/andrew.markham/)
>
>**IEEE Transactions on Neural Networks and Learning Systems** [[PDF](https://arxiv.org/abs/2308.14039)]

A survey on Deep Learning for Inertial Positioning is presented in the following paper:

>Deep Learning for Inertial Positioning: A Survey
>
>[Changhao Chen](https://changhao-chen.github.io/), Xianfei Pan
>
>**IEEE Transactions on Intelligent Transportation Systems** [[PDF](https://arxiv.org/abs/2303.03757)]

An earlier version of the survey:

>A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence
>
>[Changhao Chen](https://changhao-chen.github.io/), [Bing Wang](https://www.cs.ox.ac.uk/people/bing.wang/), [Chris Xiaoxuan Lu](https://christopherlu.github.io/), [Niki Trigoni](https://www.cs.ox.ac.uk/people/niki.trigoni/) and [Andrew Markham](https://www.cs.ox.ac.uk/people/andrew.markham/)
>
>**arXiv:2006.12567** [[PDF](https://arxiv.org/abs/2006.12567)]

## News
### Update: Jun-22-2020
- We released our survey paper.
### Update: Aug-30-2023
- Our survey "Deep Learning for Visual Localization and Mapping: A Survey" was accepted to IEEE TNNLS.
### Update: Mar-13-2024
- Our survey "Deep Learning for Inertial Positioning: A Survey" was accepted to IEEE TITS.

## Category
- [Odometry Estimation](#Odometry-Estimation)
  - [Visual Odometry](#Visual-Odometry)
  - [Visual-Inertial Odometry](#Visual-Inertial-Odometry)
  - [Inertial Odometry](#Inertial-Odometry)
  - [LIDAR Odometry](#LIDAR-Odometry)
- [Mapping](#Mapping)
  - [Geometric Mapping](#Geometric-Mapping)
  - [Semantic Mapping](#Semantic-Mapping)
  - [General Mapping](#General-Mapping)
- [Global Localization](#Global-Localization)
  - [2D-to-2D Localization](#2D-to-2D-Localization)
  - [2D-to-3D Localization](#2D-to-3D-Localization)
  - [3D-to-3D Localization](#3D-to-3D-Localization)
- [Simultaneous Localization and Mapping (SLAM)](#SLAM)
  - [Local Optimization](#Local-Optimization)
  - [Global Optimization](#Global-Optimization)
  - [Keyframe and Loop-closure Detection](#Keyframe-and-Loop-closure-Detection)
  - [Uncertainty Estimation](#Uncertainty-Estimation)

## If you find this repository useful, please cite our paper:

    @misc{chen2020survey,
        title={A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence},
        author={Changhao Chen and Bing Wang and Chris Xiaoxuan Lu and Niki Trigoni and Andrew Markham},
        year={2020},
        eprint={2006.12567},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }
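If you prefer to cite the published journal versions, the entries below should work as a starting point. They are assembled from the paper information listed above, and the citation keys are placeholders of our own choosing; please verify the year, volume, and page fields against IEEE Xplore before use:

    @article{chen_tnnls_survey,
        title={Deep Learning for Visual Localization and Mapping: A Survey},
        author={Changhao Chen and Bing Wang and Chris Xiaoxuan Lu and Niki Trigoni and Andrew Markham},
        journal={IEEE Transactions on Neural Networks and Learning Systems},
        year={2024},
        note={Volume and page numbers to be verified against IEEE Xplore}
    }

    @article{chen_tits_survey,
        title={Deep Learning for Inertial Positioning: A Survey},
        author={Changhao Chen and Xianfei Pan},
        journal={IEEE Transactions on Intelligent Transportation Systems},
        year={2024},
        note={Volume and page numbers to be verified against IEEE Xplore}
    }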
## Categorized by Topic
*The date in each table denotes the publication date (e.g., the date of the conference).*

### Odometry Estimation
#### Visual Odometry
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Konda et al. | 2015 | VISAPP | [Learning visual odometry with a convolutional network](https://www.iro.umontreal.ca/~memisevr/pubs/VISAPP2015.pdf) | |
| Costante et al. | 2016 | RA-L | [Exploring Representation Learning With CNNs for Frame-to-Frame Ego-Motion Estimation](https://ieeexplore.ieee.org/document/7347378) | |
| Backprop KF | 2016 | NeurIPS | [Backprop KF: Learning Discriminative Deterministic State Estimators](https://arxiv.org/abs/1605.07148) | |
| DeepVO | 2017 | ICRA | [DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks](https://arxiv.org/abs/1709.08429) | |
| SfmLearner | 2017 | CVPR | [Unsupervised Learning of Depth and Ego-Motion from Video](https://arxiv.org/abs/1704.07813) | [TF](https://github.com/tinghuiz/SfMLearner) [PT](https://github.com/ClementPinard/SfmLearner-Pytorch)|
| Yin et al. | 2017 | ICCV | [Scale Recovery for Monocular Visual Odometry Using Depth Estimated With Deep Convolutional Neural Fields](http://openaccess.thecvf.com/content_ICCV_2017/papers/Yin_Scale_Recovery_for_ICCV_2017_paper.pdf) | |
| UnDeepVO | 2018 | ICRA | [UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning](https://arxiv.org/abs/1709.06841) | |
| Barnes et al. | 2018 | ICRA | [Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments](https://arxiv.org/abs/1711.06623) | |
| GeoNet | 2018 | CVPR | [GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose](https://arxiv.org/abs/1803.02276) | [TF](https://github.com/yzcjtr/GeoNet) |
| Zhan et al. | 2018 | CVPR | [Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction](https://arxiv.org/abs/1803.03893) | [Caffe](https://github.com/Huangying-Zhan/Depth-VO-Feat) |
| DPF | 2018 | RSS | [Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors](https://arxiv.org/abs/1805.11122) | [TF](https://github.com/tu-rbo/differentiable-particle-filters) |
| Yang et al. | 2018 | ECCV | [Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry](https://arxiv.org/abs/1807.02570) | |
| Zhao et al. | 2018 | IROS | [Learning monocular visual odometry with dense 3d mapping from dense 3d flow](https://arxiv.org/abs/1803.02286) | |
| Turan et al. | 2018 | IROS | [Unsupervised Odometry and Depth Learning for Endoscopic Capsule Robots](https://arxiv.org/pdf/1803.01047.pdf) | |
| Struct2Depth | 2019 | AAAI | [Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos](https://arxiv.org/abs/1811.06152) | [TF](https://github.com/tensorflow/models/tree/master/research/struct2depth) |
| Saputra et al. | 2019 | ICRA | [Learning monocular visual odometry through geometry-aware curriculum learning](https://arxiv.org/abs/1903.10543) | |
| GANVO | 2019 | ICRA | [GANVO: Unsupervised deep monocular visual odometry and depth estimation with generative adversarial networks](https://arxiv.org/abs/1809.05786) | |
| CNN-SVO | 2019 | ICRA | [CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction](https://ieeexplore.ieee.org/document/8794425) | [ROS](https://github.com/yan99033/CNN-SVO) |
| Li et al. | 2019 | ICRA | [Pose graph optimization for unsupervised monocular visual odometry](https://arxiv.org/abs/1903.06315) | |
| Xue et al. | 2019 | CVPR | [Beyond tracking: Selecting memory and refining poses for deep visual odometry](https://arxiv.org/abs/1904.01892) | |
| Wang et al. | 2019 | CVPR | [Recurrent neural network for (un-)supervised learning of monocular video visual odometry and depth](https://arxiv.org/abs/1904.07087) | |
| Li et al. | 2019 | ICCV | [Sequential adversarial learning for self-supervised deep visual odometry](https://arxiv.org/abs/1908.08704) | |
| Saputra et al. | 2019 | ICCV | [Distilling knowledge from a deep pose regressor network](https://arxiv.org/abs/1908.00858) | |
| Gordon et al. | 2019 | ICCV | [Depth from videos in the wild: Unsupervised monocular depth learning from unknown cameras](https://arxiv.org/abs/1904.04998) | [TF](https://github.com/google-research/google-research/tree/master/depth_from_video_in_the_wild) |
| Koumis et al. | 2019 | IROS | [Estimating Metric Scale Visual Odometry from Videos using 3D Convolutional Networks](https://jpreiss.github.io/pubs/Koumis_Preiss_3DCVO_IROS2019.pdf) | |
| Bian et al. | 2019 | NeurIPS | [Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video](https://papers.nips.cc/paper/8299-unsupervised-scale-consistent-depth-and-ego-motion-learning-from-monocular-video.pdf) | [PT](https://github.com/JiawangBian/SC-SfMLearner-Release) |
| D3VO | 2020 | CVPR | [D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry](https://arxiv.org/abs/2003.01060) | |
| Jiang et al. | 2020 | CVPR | [Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level Optimization](https://arxiv.org/pdf/2002.11826.pdf) | |


#### Visual-Inertial Odometry
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| VINet | 2017 | AAAI | [VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem](https://arxiv.org/abs/1701.08376) | |
| VIOLearner | 2019 | TPAMI | [Unsupervised deep visual-inertial odometry with online error correction for rgb-d imagery](https://ieeexplore.ieee.org/document/8691513) | |
| SelectFusion | 2019 | CVPR | [Selective Sensor Fusion for Neural Visual-Inertial Odometry](https://arxiv.org/abs/1903.01534) | |
| DeepVIO | 2019 | IROS | [DeepVIO: Self-supervised deep learning of monocular visual inertial odometry using 3d geometric constraints](https://arxiv.org/abs/1906.11435) | |


#### Inertial Odometry
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| IONet | 2018 | AAAI | [IONet: Learning to Cure the Curse of Drift in Inertial Odometry](https://arxiv.org/abs/1802.02209) | |
| RIDI | 2018 | ECCV | [RIDI: Robust IMU Double Integration](https://arxiv.org/abs/1712.09004) | [Py](https://github.com/higerra/ridi_imu) |
| Wagstaff et al. | 2018 | IPIN | [LSTM-Based Zero-Velocity Detection for Robust Inertial Navigation](https://ieeexplore.ieee.org/abstract/document/8533770) | [PT](https://github.com/utiasSTARS/pyshoe) |
| Cortes et al. | 2018 | MLSP | [Deep Learning Based Speed Estimation for Constraining Strapdown Inertial Navigation on Smartphones](https://ieeexplore.ieee.org/abstract/document/8516710) | |
| MotionTransformer | 2019 | AAAI | [MotionTransformer: Transferring Neural Inertial Tracking between Domains](https://www.aaai.org/ojs/index.php/AAAI/article/view/4802) | |
| AbolDeepIO | 2019 | TITS | [AbolDeepIO: A Novel Deep Inertial Odometry Network for Autonomous Vehicles](https://ieeexplore.ieee.org/abstract/document/8693766) | |
| Brossard et al. | 2019 | ICRA | [Learning wheel odometry and imu errors for localization](https://hal.archives-ouvertes.fr/hal-01874593/document) | |
| OriNet | 2019 | RA-L | [OriNet: Robust 3-D Orientation Estimation With a Single Particular IMU](https://ieeexplore.ieee.org/abstract/document/8931590) | [PT](https://github.com/mbrossar/denoise-imu-gyro) |
| L-IONet | 2020 | IoT-J | [Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference](https://arxiv.org/abs/2001.04061) | |

#### LIDAR Odometry
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Velas et al. | 2018 | ICARSC | [CNN for IMU Assisted Odometry Estimation using Velodyne LiDAR](https://arxiv.org/abs/1712.06352) | |
| LO-Net | 2019 | CVPR | [LO-Net: Deep Real-time Lidar Odometry](https://arxiv.org/abs/1904.08242) | |
| DeepPCO | 2019 | IROS | [DeepPCO: End-to-End Point Cloud Odometry through Deep Parallel Neural Network](https://arxiv.org/abs/1910.11088) | |
| Valente et al. | 2019 | IROS | [Deep sensor fusion for real-time odometry estimation](https://ieeexplore.ieee.org/document/8967803) | |

### Mapping
#### Geometric Mapping
##### Depth Representation
* Joint learning of depth and ego-motion is covered under Visual Odometry above; although those works also produce depth representations, they are not repeated here.

| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Eigen et al. | 2014 | NeurIPS | [Depth Map Prediction from a Single Image using a Multi-Scale Deep Network](https://papers.nips.cc/paper/5539-depth-map-prediction-from-a-single-image-using-a-multi-scale-deep-network.pdf) | |
| Liu et al. | 2015 | TPAMI | [Learning depth from single monocular images using deep convolutional neural fields](https://arxiv.org/abs/1502.07411) | |
| Garg et al. | 2016 | ECCV | [Unsupervised cnn for single view depth estimation: Geometry to the rescue](https://arxiv.org/abs/1603.04992) | |
| Demon | 2017 | CVPR | [Demon: Depth and motion network for learning monocular stereo](https://arxiv.org/abs/1612.02401) | |
| Godard et al. | 2017 | CVPR | [Unsupervised monocular depth estimation with left-right consistency](https://arxiv.org/abs/1609.03677) | |
| Wang et al. | 2018 | CVPR | [Learning depth from monocular videos using direct methods](https://arxiv.org/abs/1712.00175) | |

##### Voxel Representation
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| SurfaceNet | 2017 | ICCV | [SurfaceNet: An End-to-end 3D Neural Network for Multiview Stereopsis](https://arxiv.org/abs/1708.01749) | |
| Dai et al. | 2017 | CVPR | [Shape completion using 3d-encoder-predictor cnns and shape synthesis](https://arxiv.org/abs/1612.00101) | |
| Hane et al. | 2017 | 3DV | [Hierarchical surface prediction for 3d object reconstruction](https://arxiv.org/abs/1704.00710) | |
| OctNetFusion | 2017 | 3DV | [Octnetfusion: Learning depth fusion from data](https://arxiv.org/abs/1704.01047) | |
| OGN | 2017 | ICCV | [Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs](https://arxiv.org/abs/1703.09438) | |
| Kar et al. | 2017 | NeurIPS | [Learning a multi-view stereo machine](https://arxiv.org/abs/1708.05375) | |
| RayNet | 2018 | CVPR | [RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials](https://arxiv.org/abs/1901.01535) | |


##### Point Representation
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Fan et al. | 2017 | CVPR | [A point set generation network for 3d object reconstruction from a single image](https://arxiv.org/abs/1612.00603) | |

##### Mesh Representation
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Ladicky et al. | 2017 | ICCV | [From point clouds to mesh using regression](https://ieeexplore.ieee.org/document/8237682) | |
| Mukasa et al. | 2017 | ICCVW | [3d scene mesh from cnn depth predictions and sparse monocular slam](http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w17/Mukasa_3D_Scene_Mesh_ICCV_2017_paper.pdf) | |
| Wang et al. | 2018 | ECCV | [Pixel2mesh: Generating 3d mesh models from single rgb images](https://arxiv.org/abs/1804.01654) | |
| Groueix et al. | 2018 | CVPR | [AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation](https://arxiv.org/abs/1802.05384) | |
| Scan2Mesh | 2019 | CVPR | [Scan2mesh: From unstructured range scans to 3d meshes](https://arxiv.org/abs/1811.10464) | |
| Bloesch et al. | 2019 | ICCV | [Learning meshes for dense visual SLAM](https://www.imperial.ac.uk/media/imperial-college/research-centres-and-groups/dyson-robotics-lab/mbloesch_etal_iccv2019.pdf) | |

#### Semantic Mapping
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| SemanticFusion | 2017 | ICRA | [Semanticfusion: Dense 3d semantic mapping with convolutional neural networks](https://arxiv.org/abs/1609.05130) | |
| DA-RNN | 2017 | RSS | [DA-RNN: Semantic mapping with data associated recurrent neural networks](https://arxiv.org/abs/1703.03098) | |
| Ma et al. | 2017 | IROS | [Multi-view deep learning for consistent semantic mapping with rgb-d cameras](https://arxiv.org/abs/1703.08866) | |
| Sunderhauf et al. | 2017 | IROS | [Meaningful maps with object-oriented semantic mapping](https://arxiv.org/abs/1609.07849) | |
| Fusion++ | 2018 | 3DV | [Fusion++: Volumetric object-level SLAM](https://arxiv.org/abs/1808.08378) | |
| Grinvald et al. | 2019 | RA-L | [Volumetric instance-aware semantic mapping and 3d object discovery](https://arxiv.org/abs/1903.00268) | |
| PanopticFusion | 2019 | IROS | [Panopticfusion: Online volumetric semantic mapping at the level of stuff and things](https://arxiv.org/abs/1903.01177) | |

#### General Mapping
* This category covers neural scene representations and task-driven map representations.

| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Mirowski et al. | 2017 | ICLR | [Learning to navigate in complex environments](https://arxiv.org/abs/1611.03673) | |
| Zhu et al. | 2017 | ICRA | [Target-driven visual navigation in indoor scenes using deep reinforcement learning](https://arxiv.org/abs/1609.05143) | |
| Eslami et al. | 2018 | Science | [Neural scene representation and rendering](https://science.sciencemag.org/content/360/6394/1204) | |
| CodeSLAM | 2018 | CVPR | [CodeSLAM — Learning a Compact, Optimisable Representation for Dense Visual SLAM](https://arxiv.org/abs/1804.00874) | |
| Mirowski et al. | 2018 | NeurIPS | [Learning to navigate in cities without a map](https://arxiv.org/abs/1804.00168) | |
| SRN | 2019 | NeurIPS | [Scene representation networks: Continuous 3d-structure-aware neural scene representations](https://arxiv.org/abs/1906.01618) | |
| Tobin et al. | 2019 | NeurIPS | [Geometry-aware neural rendering](https://arxiv.org/abs/1911.04554) | |
| Lim et al. | 2019 | NeurIPS | [Neural multisensory scene inference](https://arxiv.org/abs/1910.02344) | |


### Global Localization
#### 2D-to-2D Localization
##### Implicit Map Based Localization
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| PoseNet | 2015 | ICCV | [PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization](https://arxiv.org/abs/1505.07427) | |
| Bayesian PoseNet | 2016 | ICRA | [Modelling uncertainty in deep learning for camera relocalization](https://arxiv.org/abs/1509.05909) | |
| BranchNet | 2017 | ICRA | [Delving deeper into convolutional neural networks for camera relocalization](http://ieeexplore.ieee.org/document/7989663/) | |
| VidLoc | 2017 | CVPR | [VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization](https://arxiv.org/abs/1702.06521) | |
| Geometric PoseNet | 2017 | CVPR | [Geometric loss functions for camera pose regression with deep learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Kendall_Geometric_Loss_Functions_CVPR_2017_paper.html) | |
| Naseer et al. | 2017 | IROS | [Deep Regression for Monocular Camera-based 6-DoF Global Localization in Outdoor Environments](http://ais.informatik.uni-freiburg.de/publications/papers/naseer17iros.pdf) | |
| LSTM-PoseNet | 2017 | ICCV | [Image-based localization using lstms for structured feature correlation](https://arxiv.org/abs/1611.07890) | |
| Hourglass PoseNet | 2017 | ICCV Workshops | [Image-based localization using hourglass networks](http://openaccess.thecvf.com/content_ICCV_2017_workshops/w17/html/Melekhov_Image-Based_Localization_Using_ICCV_2017_paper.html) | |
| VLocNet | 2018 | ICRA | [Deep auxiliary learning for visual localization and odometry](https://arxiv.org/abs/1803.03642) | |
| MapNet | 2018 | CVPR | [Geometry-Aware Learning of Maps for Camera Localization](https://arxiv.org/abs/1712.03342) | |
| SPP-Net | 2018 | BMVC | [Synthetic view generation for absolute pose regression and image synthesis](http://bmvc2018.org/contents/papers/0221.pdf) | |
| GPoseNet | 2018 | BMVC | [A hybrid probabilistic model for camera relocalization](http://bmvc2018.org/contents/papers/0799.pdf) | |
| VLocNet++ | 2018 | RA-L | [Vlocnet++: Deep multitask learning for semantic visual localization and odometry](https://arxiv.org/abs/1804.08366) | |
| Xue et al. | 2019 | ICCV | [Local supports global: Deep camera relocalization with sequence enhancement](https://arxiv.org/abs/1908.04391) | |
| Huang et al. | 2019 | ICCV | [Prior guided dropout for robust visual localization in dynamic environments](http://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_Prior_Guided_Dropout_for_Robust_Visual_Localization_in_Dynamic_Environments_ICCV_2019_paper.pdf) | |
| Bui et al. | 2019 | ICCVW | [Adversarial networks for camera pose regression and refinement](https://arxiv.org/abs/1903.06646) | |
| GN-Net | 2020 | RA-L | [GN-Net: The Gauss-Newton Loss for Multi-Weather Relocalization](https://ieeexplore.ieee.org/abstract/document/8954808) | |
| AtLoc | 2020 | AAAI | [AtLoc: Attention Guided Camera Localization](https://arxiv.org/abs/1909.03557) | |

##### Explicit Map Based Localization
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Laskar et al. | 2017 | ICCV Workshops | [Camera Relocalization by Computing Pairwise Relative Poses Using Convolutional Neural Network](http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w17/Laskar_Camera_Relocalization_by_ICCV_2017_paper.pdf) | |
| DELS-3D | 2018 | CVPR | [Dels-3d: Deep localization and segmentation with a 3d semantic map](https://arxiv.org/abs/1805.04949) | |
| AnchorNet | 2018 | BMVC | [Improved visual relocalization by discovering anchor points](https://arxiv.org/abs/1811.04370) | |
| RelocNet | 2018 | ECCV | [RelocNet: Continuous Metric Learning Relocalisation using Neural Nets](http://openaccess.thecvf.com/content_ECCV_2018/papers/Vassileios_Balntas_RelocNet_Continous_Metric_ECCV_2018_paper.pdf) | |
| CamNet | 2019 | ICCV | [Camnet: Coarse-to-fine retrieval for camera re-localization](http://openaccess.thecvf.com/content_ICCV_2019/html/Ding_CamNet_Coarse-to-Fine_Retrieval_for_Camera_Re-Localization_ICCV_2019_paper.html) | |


#### 2D-to-3D Localization
##### Descriptor Matching
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| NetVLAD | 2016 | CVPR | [Netvlad: Cnn architecture for weakly supervised place recognition](https://arxiv.org/abs/1511.07247) | |
| DELF | 2017 | ICCV | [Large-scale image retrieval with attentive deep local features](https://arxiv.org/abs/1612.06321) | |
| InLoc | 2018/06 | CVPR | [InLoc: Indoor Visual Localization with Dense Matching and View Synthesis](http://openaccess.thecvf.com/content_cvpr_2018/papers/Taira_InLoc_Indoor_Visual_CVPR_2018_paper.pdf) | |
| Schonberger et al. | 2018/06 | CVPR | [Semantic Visual Localization](http://openaccess.thecvf.com/content_cvpr_2018/papers/Schonberger_Semantic_Visual_Localization_CVPR_2018_paper.pdf) | |
| SuperPoint | 2018 | CVPRW | [Superpoint: Self-supervised interest point detection and description](https://arxiv.org/abs/1712.07629) | |
| NC-Net | 2018 | NeurIPS | [Neighbourhood consensus networks](https://arxiv.org/abs/1810.10510) | |
| Sarlin et al. | 2019/06 | CVPR | [From Coarse to Fine: Robust Hierarchical Localization at Large Scale](http://openaccess.thecvf.com/content_CVPR_2019/papers/Sarlin_From_Coarse_to_Fine_Robust_Hierarchical_Localization_at_Large_Scale_CVPR_2019_paper.pdf) | |
| 2D3D-MatchNet | 2019 | ICRA | [2d3d-matchnet: learning to match keypoints across 2d image and 3d point cloud](https://ieeexplore.ieee.org/document/8794415) | |
| D2-Net | 2019 | CVPR | [D2-net: A trainable cnn for joint description and detection of local features](https://arxiv.org/abs/1905.03561) | |
| Speciale et al. | 2019 | CVPR | [Privacy preserving image-based localization](https://arxiv.org/abs/1903.05572) | |
| OOI-Net | 2019 | CVPR | [Visual localization by learning objects-of-interest dense match regression](https://europe.naverlabs.com/wp-content/uploads/2019/05/Visual-Localization-by-Learning-Objects-of-Interest-Dense-Match-Regression.pdf) | |
| Camposeco et al. | 2019 | CVPR | [Scene compression for visual localization](https://arxiv.org/abs/1807.07512) | |
| Cheng et al. | 2019 | ICCV | [Cascaded parallel filtering for memory-efficient image-based localization](https://arxiv.org/abs/1908.06141) | |
| Taira et al. | 2019 | ICCV | [Is this the right place? geometric-semantic pose verification for indoor visual localization](https://arxiv.org/abs/1908.04598) | |
| R2D2 | 2019 | NeurIPS | [R2d2: Repeatable and reliable detector and descriptor](https://papers.nips.cc/paper/9407-r2d2-reliable-and-repeatable-detector-and-descriptor.pdf) | |
| ASLFeat | 2020 | CVPR | [Aslfeat: Learning local features of accurate shape and localization](https://arxiv.org/abs/2003.10071) | |

##### Scene Coordinate Regression
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| DSAC | 2017/07 | CVPR | [DSAC - Differentiable RANSAC for Camera Localization](http://openaccess.thecvf.com/content_cvpr_2017/html/Brachmann_DSAC_-_Differentiable_CVPR_2017_paper.html) | |
| DSAC++ | 2018/06 | CVPR | [Learning less is more-6d camera localization via 3d surface regression](https://arxiv.org/abs/1711.10228) | |
| Dense SCR | 2018/07 | RSS | [Full-Frame Scene Coordinate Regression for Image-Based Localization](https://arxiv.org/abs/1802.03237) | |
| DSAC++ angle | 2018/09 | ECCV Workshops | [Scene coordinate regression with angle-based reprojection loss for camera relocalization](http://openaccess.thecvf.com/content_eccv_2018_workshops/w16/html/Li_Scene_Coordinate_Regression_with_Angle-Based_Reprojection_Loss_for_Camera_Relocalization_ECCVW_2018_paper.html) | |
| Confidence SCR | 2018/09 | BMVC | [Scene Coordinate and Correspondence Learning for Image-Based Localization](https://arxiv.org/abs/1805.08443) | |
| ESAC | 2019/10 | ICCV | [Expert Sample Consensus Applied to Camera Re-Localization](http://openaccess.thecvf.com/content_ICCV_2019/papers/Brachmann_Expert_Sample_Consensus_Applied_to_Camera_Re-Localization_ICCV_2019_paper.pdf) | |
| NG-RANSAC | 2019/10 | ICCV | [Neural-Guided RANSAC: Learning Where to Sample Model Hypotheses](http://openaccess.thecvf.com/content_ICCV_2019/papers/Brachmann_Neural-Guided_RANSAC_Learning_Where_to_Sample_Model_Hypotheses_ICCV_2019_paper.pdf) | |
| SANet | 2019/10 | ICCV | [SANet: scene agnostic network for camera localization](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yang_SANet_Scene_Agnostic_Network_for_Camera_Localization_ICCV_2019_paper.pdf) | |
| HSC-Net | 2020 | CVPR | [Hierarchical scene coordinate classification and regression for visual localization](https://arxiv.org/abs/1909.06216) | |
| KF-Net | 2020 | CVPR | [Kfnet: Learning temporal camera relocalization using kalman filtering](https://arxiv.org/abs/2003.10629) | |

#### 3D-to-3D Localization
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| LocNet | 2018 | IV | [Locnet: Global localization in 3d point clouds for mobile vehicles](https://arxiv.org/abs/1712.02165) | |
| PointNetVLAD | 2018 | CVPR | [Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition](https://arxiv.org/abs/1804.03492) | |
| Barsan et al. | 2018 | CoRL | [Learning to localize using a lidar intensity map](http://proceedings.mlr.press/v87/barsan18a/barsan18a.pdf) | |
| L3-Net | 2019 | CVPR | [L3-net: Towards learning based lidar localization for autonomous driving](https://songshiyu01.github.io/pdf/L3Net_W.Lu_Y.Zhou_S.Song_CVPR2019.pdf) | |
| PCAN | 2019 | CVPR | [PCAN: 3D Attention Map Learning Using Contextual Information for Point Cloud Based Retrieval](https://arxiv.org/abs/1904.09793) | |
| DeepICP | 2019 | ICCV | [Deepicp: An end-to-end deep neural network for 3d point cloud registration](https://arxiv.org/abs/1905.04153) | |
| DCP | 2019 | ICCV | [Deep closest point: Learning representations for point cloud registration](https://arxiv.org/abs/1905.03304) | |
| D3Feat | 2020 | CVPR | [D3feat: Joint learning of dense detection and description of 3d local features](https://arxiv.org/abs/2003.03164) | |

### SLAM

#### Local Optimization
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| LS-Net | 2018 | ECCV | [Learning to solve nonlinear least squares for monocular stereo](https://arxiv.org/abs/1809.02966) | |
| BA-Net | 2019 | ICLR | [BA-Net: Dense bundle adjustment network](https://arxiv.org/abs/1806.04807) | |

#### Global Optimization
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| CNN-SLAM | 2017 | CVPR | [CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction](https://arxiv.org/abs/1704.03489) | |
| Li et al. | 2019 | ICRA | [Pose graph optimization for unsupervised monocular visual odometry](https://arxiv.org/abs/1903.06315) | |
| DeepTAM | 2020 | IJCV | [DeepTAM: Deep Tracking and Mapping with Convolutional Neural Networks](https://lmb.informatik.uni-freiburg.de/Publications/2019/ZUB19a/) | |
| DeepFactors | 2020 | RA-L | [DeepFactors: Real-Time Probabilistic Dense Monocular SLAM](https://arxiv.org/abs/2001.05049) | |


#### Keyframe and Loop-closure Detection
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Sunderhauf et al. | 2015 | RSS | [Place recognition with convnet landmarks: Viewpoint-robust, condition-robust, training-free](https://nikosuenderhauf.github.io/assets/papers/rss15_placeRec.pdf) | |
| Gao et al. | 2017 | AR | [Unsupervised learning to detect loops using deep neural networks for visual slam system](https://dl.acm.org/citation.cfm?id=3040686) | |
| Huang et al. | 2018 | RSS | [Lightweight unsupervised deep loop closure](https://arxiv.org/abs/1805.07703) | |
| Sheng et al. | 2019 | ICCV | [Unsupervised Collaborative Learning of Keyframe Detection and Visual Odometry Towards Monocular Deep SLAM](http://openaccess.thecvf.com/content_ICCV_2019/html/Sheng_Unsupervised_Collaborative_Learning_of_Keyframe_Detection_and_Visual_Odometry_Towards_ICCV_2019_paper.html) | |
| Memon et al. | 2020 | RAS | [Loop closure detection using supervised and unsupervised deep neural networks for monocular slam systems](https://www.sciencedirect.com/science/article/abs/pii/S0921889019308425) | |

#### Uncertainty Estimation
| Models |Date| Publication| Paper | Code |
|----------|----|------------|------|---|
| Kendall et al. | 2016 | ICRA | [Modelling uncertainty in deep learning for camera relocalization](https://arxiv.org/abs/1509.05909) | |
| Kendall et al. | 2017 | NeurIPS | [What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?](https://arxiv.org/abs/1703.04977) | |
| VidLoc | 2017 | CVPR | [VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization](https://arxiv.org/abs/1702.06521) | |
| Wang et al. | 2018 | IJRR | [End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks](https://researchportal.hw.ac.uk/en/publications/end-to-end-sequence-to-sequence-probabilistic-visual-odometry-thr) | |
| Chen et al. | 2019 | TMC | [Deep neural network based inertial odometry using low-cost inertial measurement units](http://www.cs.ox.ac.uk/files/11501/DNN_IONet.pdf) | |

This list is maintained by [Changhao Chen](http://www.cs.ox.ac.uk/people/changhao.chen/website/) and [Bing Wang](http://www.cs.ox.ac.uk/people/bing.wang/), Department of Computer Science, University of Oxford.

Please contact them (email: changhao.chen@cs.ox.ac.uk; bing.wang@cs.ox.ac.uk) if you have any questions or would like to add your work to this list.
--------------------------------------------------------------------------------

/image/concept_figure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/changhao-chen/deep-learning-localization-mapping/e7f3a0703197712e3d362716bea1cc734e63cc3f/image/concept_figure.png
--------------------------------------------------------------------------------

/image/image.md:
--------------------------------------------------------------------------------