├── .github
│   └── workflows
│       └── pandoc_fun.yml
├── README.md
├── how_to_add.md
└── src
    ├── DynamicScenes+Rendering.bib
    ├── DynamicScenes+Rendering.md
    ├── LightFields+Plenoxels.bib
    ├── LightFields+Plenoxels.md
    ├── NeRF+Architecture_Improvements.bib
    ├── NeRF+Architecture_Improvements.md
    ├── Review_Papers.bib
    ├── Review_Papers.md
    ├── Robotics_Applications.bib
    ├── Robotics_Applications.md
    ├── Speed_Improvements.bib
    ├── Speed_Improvements.md
    ├── build.sh
    ├── contents.md
    ├── frontmatter.md
    ├── gen
    │   ├── DynamicScenes+Rendering-output.md
    │   ├── LightFields+Plenoxels-output.md
    │   ├── NeRF+Architecture_Improvements-output.md
    │   ├── Review_Papers-output.md
    │   ├── Robotics_Applications-output.md
    │   └── Speed_Improvements-output.md
    ├── generate.sh
    └── sections.csv
/.github/workflows/pandoc_fun.yml:
--------------------------------------------------------------------------------
1 | name: Pandoc Fun
2 | on:
3 |   push:
4 |     branches:
5 |       - main
6 | jobs:
7 |   normal_ci:
8 |     runs-on: ubuntu-latest
9 |     steps:
10 |       - name: Check out repository code
11 |         uses: actions/checkout@v2
12 |       - name: Set-up Pandoc
13 |         uses: r-lib/actions/setup-pandoc@v2
14 |         with:
15 |           pandoc-version: '2.14.2' # The pandoc version to download (if necessary) and use.
16 |       - name: Recent IEEE CSL
17 |         run: wget https://raw.githubusercontent.com/citation-style-language/styles/master/ieee.csl
18 |       - name: Set permissions
19 |         run: cd ${{ github.workspace }} && chmod 740 ./src/generate.sh && chmod 740 ./src/build.sh
20 |       - name: Generate files
21 |         run: ./src/generate.sh
22 |       - name: Build Readme
23 |         run: ./src/build.sh
24 |       - name: Clean up temp csl
25 |         run: rm ieee.csl
26 |       - name: Push New Readme
27 |         uses: stefanzweifel/git-auto-commit-action@v4
28 |         with:
29 |           commit_message: New papers!
--------------------------------------------------------------------------------
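The workflow above only installs pandoc, downloads an IEEE CSL file, and then calls `src/generate.sh` and `src/build.sh`; those two scripts are not reproduced in this dump. As a hedged sketch of what they most plausibly do, given that each `src/<Section>.md` stub points citeproc at the matching `.bib` via `bibliography:` and pulls in every entry via `nocite: "@*"`, that generated files live in `src/gen/`, and assuming `sections.csv` lists one section name per line/first column:

```bash
#!/usr/bin/env bash
# Hedged sketch only: src/generate.sh and src/build.sh are not shown in this dump, and
# the exact pandoc flags and the sections.csv layout are assumptions, not the repo's code.
set -euo pipefail
cd src

sections=$(cut -d, -f1 sections.csv)

# "generate" step: each <Section>.md stub declares "bibliography: <Section>.bib" and
# "nocite: @*", so citeproc emits the full IEEE-styled reference list as GitHub Markdown.
for s in $sections; do
  pandoc --citeproc --csl ../ieee.csl -t gfm "${s}.md" -o "gen/${s}-output.md"
done

# "build" step: stitch front matter, the contents list, and the generated sections into
# README.md (the per-section "## ..." headings presumably come from sections.csv or the scripts).
{
  cat frontmatter.md contents.md
  for s in $sections; do cat "gen/${s}-output.md"; done
} > ../README.md
```

The final `git-auto-commit-action` step then commits the regenerated `README.md` back to `main`.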
/README.md:
--------------------------------------------------------------------------------
1 | # Neural Fields for Robotics Resources
2 | A repo collating papers and other material related to neural radiance fields (NeRFs), neural scene representations, and associated works, with a focus on applications in robotics.
3 |
4 | This repo is maintained by the [Robotic Imaging Research Group](https://roboticimaging.org) at the [University of Sydney](https://sydney.edu.au). We are embedded within the [Australian Centre for Robotics](https://www.sydney.edu.au/engineering/our-research/robotics-and-intelligent-systems/australian-centre-for-field-robotics.html) in the Faculty of Engineering.
5 |
6 | To contribute, please see the `how_to_add.md` file.
7 | ## Contents
8 | - [Review Papers](#review-papers)
9 | - [NeRF + Architecture Improvements](#nerf--architecture-improvements)
10 | - [Light Fields + Plenoxels](#light-fields--plenoxels)
11 | - [Dynamic Scenes + Rendering](#dynamic-scenes--rendering)
12 | - [Speed Improvements](#speed-improvements)
13 | - [Robotics Applications](#robotics-applications)
14 |
15 | ## Review Papers
16 | \[1\] A. Tewari *et al.*, “State of the Art on Neural
18 | Rendering,” *Computer Graphics Forum*, Jul. 2020, Accessed: Apr. 04,
19 | 2023. \[Online\]. Available:
20 |
21 |
22 | \[2\] Y. Xie *et al.*, “Neural Fields in Visual
24 | Computing and Beyond,” *Computer Graphics Forum*, May 2022, Accessed:
25 | Apr. 04, 2023. \[Online\]. Available:
26 |
27 |
28 | \[3\] A. Tewari *et al.*, “Advances in Neural
30 | Rendering,” *arXiv:2111.05849 \[cs\]*, Nov. 2021, Accessed: Nov. 27,
31 | 2021. \[Online\]. Available:
32 |
33 | \[4\] M. Toschi, R. D. Matteo, R. Spezialetti, D. D.
35 | Gregorio, L. D. Stefano, and S. Salti, “ReLight my NeRF: A dataset for
36 | novel view synthesis and relighting of real world objects.” 2023.
37 | Available:
38 |
39 | \[5\] M. Tancik *et al.*, “Nerfstudio: A modular
41 | framework for neural radiance field development,” *arXiv preprint
42 | arXiv:2302.04264*, 2023.
43 |
44 | \[6\] M. Wallingford *et al.*, “Neural radiance field
46 | codebooks,” *arXiv preprint arXiv:2301.04101*, 2023.
47 |
48 | ## NeRF + Architecture Improvements
49 | \[1\] M. Niemeyer, J. T. Barron, B. Mildenhall, M. S.
51 | M. Sajjadi, A. Geiger, and N. Radwan, “RegNeRF: Regularizing neural
52 | radiance fields for view synthesis from sparse inputs,” in *Proc. IEEE
53 | conf. On computer vision and pattern recognition (CVPR)*, 2022.
54 | Available:
55 |
56 | \[2\] Z. Kuang, K. Olszewski, M. Chai, Z. Huang, P.
58 | Achlioptas, and S. Tulyakov, “NeROIC: Neural object capture and
59 | rendering from online image collections,” *Computing Research Repository
60 | (CoRR)*, vol. abs/2201.02533, 2022.
61 |
62 | \[3\] F. Wimbauer, S. Wu, and C. Rupprecht,
64 | “De-rendering 3D Objects in the Wild,” *arXiv:2201.02279 \[cs\]*, Jan.
65 | 2022, Accessed: Jan. 23, 2022. \[Online\]. Available:
66 |
67 |
68 | \[4\] M. Kim, S. Seo, and B. Han, “InfoNeRF: Ray
70 | Entropy Minimization for Few-Shot Neural Volume Rendering,”
71 | *arXiv:2112.15399 \[cs, eess\]*, Dec. 2021, Accessed: Jan. 23, 2022.
72 | \[Online\]. Available:
73 |
74 | \[5\] Y. Jeong, S. Ahn, C. Choy, A. Anandkumar, M. Cho,
76 | and J. Park, “Self-Calibrating Neural Radiance Fields,” in *ICCV*, 2021.
77 |
78 | \[6\] Y. Xiangli *et al.*, “CityNeRF: Building NeRF
80 | at City Scale,” *arXiv preprint arXiv:2112.05504*, 2021.
81 |
82 | \[7\] M. Tancik *et al.*, “Block-NeRF: Scalable Large
84 | Scene Neural View Synthesis,” *arXiv*, 2022.
85 |
86 | \[8\] K. Rematas, R. Martin-Brualla, and V. Ferrari,
88 | “ShaRF: Shape-conditioned Radiance Fields from a Single View.”
89 | 2021.
90 |
91 | \[9\] B. Kaya, S. Kumar, F. Sarno, V. Ferrari, and L.
93 | V. Gool, “Neural Radiance Fields Approach to Deep Multi-View Photometric
94 | Stereo.” 2021.
95 |
96 | \[10\] Q. Xu *et al.*, “Point-NeRF: Point-based Neural
98 | Radiance Fields,” *arXiv preprint arXiv:2201.08845*, 2022.
99 |
100 | \[11\] C. Xie, K. Park, R. Martin-Brualla, and M.
102 | Brown, “FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object
103 | Category Modelling,” *arXiv:2104.08418 \[cs\]*, Apr. 2021, Accessed:
104 | Sep. 25, 2021. \[Online\]. Available:
105 |
106 |
107 | \[12\] A. Yu, R. Li, M. Tancik, H. Li, R. Ng, and A.
109 | Kanazawa, “PlenOctrees for Real-time Rendering of Neural Radiance
110 | Fields,” *arXiv:2103.14024 \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021.
111 | \[Online\]. Available:
112 |
113 | \[13\] B. Mildenhall, P. P. Srinivasan, M. Tancik, J.
115 | T. Barron, R. Ramamoorthi, and R. Ng, “NeRF: Representing Scenes as
116 | Neural Radiance Fields for View Synthesis,” in *Computer Vision – ECCV
117 | 2020*, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds., in
118 | Lecture Notes in Computer Science. Cham: Springer International
119 | Publishing, 2020, pp. 405–421. doi:
120 | [gj826m](https://doi.org/gj826m).
121 |
122 | \[14\] A. Yu, V. Ye, M. Tancik, and A. Kanazawa,
124 | “pixelNeRF: Neural Radiance Fields From One
125 | or Few Images,” 2021, pp. 4578–4587. Accessed: Sep. 25, 2021.
126 | \[Online\]. Available:
127 |
128 |
129 | \[15\] R. Martin-Brualla, N. Radwan, M. S. M. Sajjadi,
131 | J. T. Barron, A. Dosovitskiy, and D. Duckworth, “NeRF in the Wild:
132 | Neural Radiance Fields for Unconstrained Photo Collections,” 2021, pp.
133 | 7210–7219. Accessed: Sep. 25, 2021. \[Online\]. Available:
134 |
135 |
136 | \[16\] L. Yen-Chen, P. Florence, J. T. Barron, A.
138 | Rodriguez, P. Isola, and T.-Y. Lin, “INeRF: Inverting Neural Radiance
139 | Fields for Pose Estimation,” *arXiv:2012.05877 \[cs\]*, Aug. 2021,
140 | Accessed: Sep. 25, 2021. \[Online\]. Available:
141 |
142 |
143 | \[17\] C. Gao, Y. Shih, W.-S. Lai, C.-K. Liang, and
145 | J.-B. Huang, “Portrait Neural Radiance Fields from a Single Image,”
146 | *arXiv:2012.05903 \[cs\]*, Apr. 2021, Accessed: Sep. 25, 2021.
147 | \[Online\]. Available:
148 |
149 | \[18\] C.-H. Lin, W.-C. Ma, A. Torralba, and S. Lucey,
151 | “BARF: Bundle-Adjusting Neural Radiance Fields,” *arXiv:2104.06405
152 | \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021. \[Online\]. Available:
153 |
154 |
155 | \[19\] K. Zhang, G. Riegler, N. Snavely, and V.
157 | Koltun, “NeRF++: Analyzing and Improving Neural Radiance Fields,”
158 | *arXiv:2010.07492 \[cs\]*, Oct. 2020, Accessed: Sep. 25, 2021.
159 | \[Online\]. Available:
160 |
161 | \[20\] C. Reiser, S. Peng, Y. Liao, and A. Geiger,
163 | “KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny
164 | MLPs,” *arXiv:2103.13744 \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021.
165 | \[Online\]. Available:
166 |
167 | \[21\] D. Rebain, W. Jiang, S. Yazdani, K. Li, K. M.
169 | Yi, and A. Tagliasacchi, “DeRF: Decomposed Radiance Fields,” 2021, pp.
170 | 14153–14161. Accessed: Sep. 25, 2021. \[Online\]. Available:
171 |
172 |
173 | \[22\] J. T. Barron, B. Mildenhall, M. Tancik, P.
175 | Hedman, R. Martin-Brualla, and P. P. Srinivasan, “Mip-NeRF: A Multiscale
176 | Representation for Anti-Aliasing Neural Radiance Fields,”
177 | *arXiv:2103.13415 \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021.
178 | \[Online\]. Available:
179 |
180 | \[23\] P. Hedman, P. P. Srinivasan, B. Mildenhall, J.
182 | T. Barron, and P. Debevec, “Baking Neural Radiance Fields for Real-Time
183 | View Synthesis,” *arXiv:2103.14645 \[cs\]*, Mar. 2021, Accessed: Sep.
184 | 25, 2021. \[Online\]. Available:
185 |
186 |
187 | \[24\] Z. Wang, S. Wu, W. Xie, M. Chen, and V. A.
189 | Prisacariu, “NeRF–: Neural Radiance Fields Without Known Camera
190 | Parameters,” *arXiv:2102.07064 \[cs\]*, Feb. 2021, Accessed: Sep. 25,
191 | 2021. \[Online\]. Available:
192 |
193 | \[25\] J. Li, Z. Feng, Q. She, H. Ding, C. Wang, and
195 | G. H. Lee, “MINE: Towards Continuous Depth MPI with NeRF for Novel View
196 | Synthesis,” *arXiv:2103.14910 \[cs\]*, Jul. 2021, Accessed: Oct. 11,
197 | 2021. \[Online\]. Available:
198 |
199 | ## Light Fields + Plenoxels
200 | \[1\] J. Ost, I. Laradji, A. Newell, Y. Bahat, and F.
202 | Heide, “Neural Point Light Fields,” *CoRR*, vol. abs/2112.01473, 2021,
203 | Available:
204 |
205 | \[2\] M. Suhail, C. Esteves, L. Sigal, and A.
207 | Makadia, “Light field neural rendering.” 2021. Available:
208 |
209 |
210 | \[3\] Alex Yu and Sara Fridovich-Keil, M. Tancik, Q.
212 | Chen, B. Recht, and A. Kanazawa, “Plenoxels: Radiance fields without
213 | neural networks.” 2021. Available:
214 |
215 |
216 | \[4\] V. Sitzmann, S. Rezchikov, W. T. Freeman, J. B.
218 | Tenenbaum, and F. Durand, “Light field networks: Neural scene
219 | representations with single-evaluation rendering,” in *Proc. NeurIPS*,
220 | 2021.
221 |
222 | ## Dynamic Scenes + Rendering
223 | \[1\] A. Pumarola, E. Corona, G. Pons-Moll, and F.
225 | Moreno-Noguer, “D-NeRF: Neural Radiance Fields for Dynamic Scenes,” in
226 | *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
227 | Recognition (CVPR)*, Jun. 2021, pp. 10318–10327.
228 |
229 | \[2\] E. Tretschk, A. Tewari, V. Golyanik, M.
231 | Zollhöfer, C. Lassner, and C. Theobalt, “Non-Rigid Neural Radiance
232 | Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From
233 | Monocular Video,” in *Proceedings of the IEEE/CVF International
234 | Conference on Computer Vision (ICCV)*, Oct. 2021, pp.
235 | 12959–12970.
236 |
237 | \[3\] Z. Li, S. Niklaus, N. Snavely, and O. Wang,
239 | “Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic
240 | Scenes,” in *Proceedings of the IEEE/CVF Conference on Computer Vision
241 | and Pattern Recognition (CVPR)*, Jun. 2021, pp. 6498–6508.
242 |
243 | \[4\] K. Park *et al.*, “Nerfies: Deformable Neural
245 | Radiance Fields,” in *Proceedings of the IEEE/CVF International
246 | Conference on Computer Vision (ICCV)*, Oct. 2021, pp. 5865–5874.
247 |
248 | \[5\] C.-Y. Weng, B. Curless, P. P. Srinivasan, J. T.
250 | Barron, and I. Kemelmacher-Shlizerman, “HumanNeRF: Free-viewpoint
251 | Rendering of Moving People from Monocular Video,” *arXiv*, 2022.
252 |
253 | \[6\] K. Park *et al.*, “HyperNeRF: A
255 | Higher-Dimensional Representation for Topologically Varying Neural
256 | Radiance Fields,” *ACM Trans. Graph.*, vol. 40, no. 6, Dec. 2021.
257 |
258 | \[7\] G. Yang, M. Vo, N. Neverova, D. Ramanan, A.
260 | Vedaldi, and H. Joo, “BANMo: Building Animatable 3D Neural Models
261 | from Many Casual Videos,” *arXiv preprint arXiv:2112.12761*,
262 | 2021.
263 |
264 | \[8\] S. Peng *et al.*, “Animatable neural radiance
266 | fields for modeling dynamic human bodies,” in *Proceedings of the
267 | IEEE/CVF international conference on computer vision (ICCV)*, 2021, pp.
268 | 14314–14323.
269 |
270 | ## Speed Improvements
271 | \[1\] T. Müller, A. Evans, C. Schied, and A. Keller,
273 | “Instant Neural Graphics Primitives with a Multiresolution Hash
274 | Encoding,” *arXiv:2201.05989*, Jan. 2022.
275 |
276 | \[2\] K. Deng, A. Liu, J.-Y. Zhu, and D. Ramanan,
278 | “Depth-supervised NeRF: Fewer Views and Faster Training for Free,”
279 | *arXiv preprint arXiv:2107.02791*, 2021.
280 |
281 | \[3\] L. Li, Z. Shen, Z. Wang, L. Shen, and L. Bo,
283 | “Compressing volumetric radiance fields to 1 MB,” *arXiv preprint
284 | arXiv:2211.16386*, 2022.
285 |
286 | \[4\] J. E. Johnson, R. Lguensat, R. Fablet, E.
288 | Cosme, and J. L. Sommer, “Neural fields for fast and scalable
289 | interpolation of geophysical ocean variables,” *arXiv preprint
290 | arXiv:2211.10444*, 2022.
291 |
292 | \[5\] K. Deng, A. Liu, J.-Y. Zhu, and D. Ramanan,
294 | “Depth-supervised NeRF: Fewer views and faster training for free,” in
295 | *Proceedings of the IEEE/CVF conference on computer vision and pattern
296 | recognition (CVPR)*, 2022.
297 |
298 | \[6\] F. Wang, S. Tan, X. Li, Z. Tian, and H. Liu,
300 | “Mixed neural voxels for fast multi-view video synthesis,” *arXiv
301 | preprint arXiv:2212.00190*, 2022.
302 |
303 | \[7\] P. Wang *et al.*, “F2-NeRF: Fast neural
305 | radiance field training with free camera trajectories,” *CVPR*,
306 | 2023.
307 |
308 | \[8\] S. Lee, G. Park, H. Son, J. Ryu, and H. J.
310 | Chae, “FastSurf: Fast neural RGB-d surface reconstruction using
311 | per-frame intrinsic refinement and TSDF fusion prior learning,” *arXiv
312 | preprint arXiv:2303.04508*, 2023.
313 |
314 | \[9\] Y. Wang, Q. Han, M. Habermann, K. Daniilidis,
316 | C. Theobalt, and L. Liu, “NeuS2: Fast learning of neural implicit
317 | surfaces for multi-view reconstruction.” arXiv, 2022. doi:
318 | [10.48550/ARXIV.2212.05231](https://doi.org/10.48550/ARXIV.2212.05231).
319 |
320 | ## Robotics Applications
321 | \[1\] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M.
323 | Toussaint, “Learning Multi-Object Dynamics with Compositional Neural
324 | Radiance Fields,” *arXiv preprint arXiv:2202.11855*, 2022.
325 |
326 | \[2\] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y.
328 | Lin, A. Rodriguez, and P. Isola, “NeRF-Supervision: Learning Dense
329 | Object Descriptors from Neural Radiance Fields,” in *IEEE Conference on
330 | Robotics and Automation (ICRA)*, 2022.
331 |
332 | \[3\] Z. Zhu *et al.*, “NICE-SLAM: Neural Implicit
334 | Scalable Encoding for SLAM,” *arXiv*, 2021.
335 |
336 | \[4\] M. Adamkiewicz *et al.*, “Vision-Only Robot
338 | Navigation in a Neural Radiance World,” *arXiv:2110.00168 \[cs\]*, Sep.
339 | 2021, Accessed: Oct. 11, 2021. \[Online\]. Available:
340 |
341 |
342 | \[5\] E. Sucar, S. Liu, J. Ortiz, and A. J. Davison,
344 | “iMAP: Implicit Mapping and Positioning in
345 | Real-Time,” in *Proceedings of the IEEE/CVF International Conference on
346 | Computer Vision (ICCV)*, Oct. 2021, pp. 6229–6238.
347 |
--------------------------------------------------------------------------------
/how_to_add.md:
--------------------------------------------------------------------------------
1 | # Adding Resources
2 | This repo is maintained by the [Robotic Imaging Research Group](https://roboticimaging.org) at the [University of Sydney](https://sydney.edu.au). We are embedded within the [Australian Centre for Field Robotics](https://www.sydney.edu.au/engineering/our-research/robotics-and-intelligent-systems/australian-centre-for-field-robotics.html) and [Sydney Institute for Robotics and Intelligent Systems](https://www.sydney.edu.au/engineering/our-research/robotics-and-intelligent-systems/sydney-institute-for-robotics-and-intelligent-systems.html).
3 |
4 | To help us keep this repo up to date, please suggest new resources by creating a pull request that adds your submission to one of the relevant `src/*.bib` files (a sketch of how to preview your change locally follows the list below).
5 | - If there are additional links (paper, project page, posters, videos, code, etc.), please add them to the `note` field of the BibTeX entry so they appear below the citation.
6 | - More recent papers should be placed at the top of each section (try to keep entries in date order).
7 | - If you see arXiv links where papers have since been published, please update them.
8 | - If new categories emerge, or if papers would fit better elsewhere, your suggestions to tidy up the document are welcome. Please use the existing sections as templates when adding new areas, or create an issue and we can help you out!
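To preview how an added entry will render before opening a pull request, something like the following mirrors the CI sketch shown after the workflow above. It is an illustrative guess (the repo's own `src/generate.sh` may use different flags), with `Robotics_Applications` used only as an example section:

```bash
# Hedged local preview of a single section; flags and paths mirror the CI assumptions above.
wget -q https://raw.githubusercontent.com/citation-style-language/styles/master/ieee.csl
cd src
pandoc --citeproc --csl ../ieee.csl -t gfm Robotics_Applications.md -o gen/Robotics_Applications-output.md
```

Open `src/gen/Robotics_Applications-output.md` to check that the new citation and its links appear as expected.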
9 |
10 | Thank you in advance for your submissions; they help us keep track of the exciting world of NeRFs/neural scenes!
11 |
--------------------------------------------------------------------------------
/src/DynamicScenes+Rendering.bib:
--------------------------------------------------------------------------------
1 |
2 | @inproceedings{pumarola_d-nerf_2021,
3 | title = {D-{NeRF}: {Neural} {Radiance} {Fields} for {Dynamic} {Scenes}},
4 | booktitle = {Proceedings of the {IEEE}/{CVF} {Conference} on {Computer} {Vision} and {Pattern} {Recognition} ({CVPR})},
5 | author = {Pumarola, Albert and Corona, Enric and Pons-Moll, Gerard and Moreno-Noguer, Francesc},
6 | month = jun,
7 | year = {2021},
8 | pages = {10318--10327},
9 | }
10 |
11 | @inproceedings{tretschk_non-rigid_2021,
12 | title = {Non-{Rigid} {Neural} {Radiance} {Fields}: {Reconstruction} and {Novel} {View} {Synthesis} of a {Dynamic} {Scene} {From} {Monocular} {Video}},
13 | booktitle = {Proceedings of the {IEEE}/{CVF} {International} {Conference} on {Computer} {Vision} ({ICCV})},
14 | author = {Tretschk, Edgar and Tewari, Ayush and Golyanik, Vladislav and Zollhöfer, Michael and Lassner, Christoph and Theobalt, Christian},
15 | month = oct,
16 | year = {2021},
17 | pages = {12959--12970},
18 | }
19 |
20 | @inproceedings{li_neural_2021,
21 | title = {Neural {Scene} {Flow} {Fields} for {Space}-{Time} {View} {Synthesis} of {Dynamic} {Scenes}},
22 | booktitle = {Proceedings of the {IEEE}/{CVF} {Conference} on {Computer} {Vision} and {Pattern} {Recognition} ({CVPR})},
23 | author = {Li, Zhengqi and Niklaus, Simon and Snavely, Noah and Wang, Oliver},
24 | month = jun,
25 | year = {2021},
26 | pages = {6498--6508},
27 | }
28 |
29 | @inproceedings{park_nerfies_2021,
30 | title = {Nerfies: {Deformable} {Neural} {Radiance} {Fields}},
31 | booktitle = {Proceedings of the {IEEE}/{CVF} {International} {Conference} on {Computer} {Vision} ({ICCV})},
32 | author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
33 | month = oct,
34 | year = {2021},
35 | pages = {5865--5874},
36 | }
37 |
38 | @article{weng_humannerf_2022,
39 | title = {{HumanNeRF}: {Free}-viewpoint {Rendering} of {Moving} {People} from {Monocular} {Video}},
40 | journal = {arXiv},
41 | author = {Weng, Chung-Yi and Curless, Brian and Srinivasan, Pratul P. and Barron, Jonathan T. and Kemelmacher-Shlizerman, Ira},
42 | year = {2022},
43 | }
44 |
45 | @article{park_hypernerf_2021,
46 | title = {{HyperNeRF}: {A} {Higher}-{Dimensional} {Representation} for {Topologically} {Varying} {Neural} {Radiance} {Fields}},
47 | volume = {40},
48 | number = {6},
49 | journal = {ACM Trans. Graph.},
50 | author = {Park, Keunhong and Sinha, Utkarsh and Hedman, Peter and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Martin-Brualla, Ricardo and Seitz, Steven M.},
51 | month = dec,
52 | year = {2021},
53 | annote = {Publisher: ACM},
54 | }
55 |
56 | @article{yang_banmo_2021,
57 | title = {{BANMo}: {Building} {Animatable} {3D} {Neural} {Models} from {Many} {Casual} {Videos}},
58 | journal = {arXiv preprint arXiv:2112.12761},
59 | author = {Yang, Gengshan and Vo, Minh and Neverova, Natalia and Ramanan, Deva and Vedaldi, Andrea and Joo, Hanbyul},
60 | year = {2021},
61 | }
62 |
63 |
64 | @InProceedings{Peng_2021_ICCV,
65 | author = {Peng, Sida and Dong, Junting and Wang, Qianqian and Zhang, Shangzhan and Shuai, Qing and Zhou, Xiaowei and Bao, Hujun},
66 | title = {Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies},
67 | booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
68 | month = {October},
69 | year = {2021},
70 | pages = {14314-14323}
71 | }
72 |
--------------------------------------------------------------------------------
/src/DynamicScenes+Rendering.md:
--------------------------------------------------------------------------------
1 | ---
2 | bibliography: DynamicScenes+Rendering.bib
3 | nocite: "@*"
4 | ---
5 |
--------------------------------------------------------------------------------
/src/LightFields+Plenoxels.bib:
--------------------------------------------------------------------------------
1 |
2 | @article{ost_neural_2021,
3 | title = {Neural {Point} {Light} {Fields}},
4 | volume = {abs/2112.01473},
5 | url = {https://arxiv.org/abs/2112.01473},
6 | journal = {CoRR},
7 | author = {Ost, Julian and Laradji, Issam and Newell, Alejandro and Bahat, Yuval and Heide, Felix},
8 | year = {2021},
9 | note = {arXiv: 2112.01473},
10 | }
11 |
12 |
13 |
14 | @misc{suhail2021light,
15 | title={Light Field Neural Rendering},
16 | author={Mohammed Suhail and Carlos Esteves and Leonid Sigal and Ameesh Makadia},
17 | year={2021},
18 | eprint={2112.09687},
19 | archivePrefix={arXiv},
20 | primaryClass={cs.CV}
21 | }
22 |
23 | @misc{yu2021plenoxels,
24 | title={Plenoxels: Radiance Fields without Neural Networks},
25 | author={{Alex Yu and Sara Fridovich-Keil} and Matthew Tancik and Qinhong Chen and Benjamin Recht and Angjoo Kanazawa},
26 | year={2021},
27 | eprint={2112.05131},
28 | archivePrefix={arXiv},
29 | primaryClass={cs.CV}
30 | }
31 |
32 | @inproceedings{sitzmann2021lfns,
33 | author = {Sitzmann, Vincent
34 | and Rezchikov, Semon
35 | and Freeman, William T.
36 | and Tenenbaum, Joshua B.
37 | and Durand, Fredo},
38 | title = {Light Field Networks: Neural Scene Representations
39 | with Single-Evaluation Rendering},
40 | booktitle = {Proc. NeurIPS},
41 | year={2021},
42 | note = {Project Page: https://www.vincentsitzmann.com/lfns, Code: https://github.com/vsitzmann/light-field-networks}
43 | }
44 |
--------------------------------------------------------------------------------
/src/LightFields+Plenoxels.md:
--------------------------------------------------------------------------------
1 | ---
2 | bibliography: LightFields+Plenoxels.bib
3 | nocite: "@*"
4 | ---
5 |
--------------------------------------------------------------------------------
/src/NeRF+Architecture_Improvements.bib:
--------------------------------------------------------------------------------
1 | @InProceedings{Niemeyer2021Regnerf,
2 | author = {Michael Niemeyer and Jonathan T. Barron and Ben Mildenhall and Mehdi S. M. Sajjadi and Andreas Geiger and Noha Radwan},
3 | title = {RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs},
4 | url = {https://arxiv.org/abs/2112.00724},
5 | booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
6 | year = {2022},
7 | note = {Project Page: https://m-niemeyer.github.io/regnerf/index.html}
8 | }
9 |
10 | @article{kuang2021neroic,
11 | author = {Kuang, Zhengfei and Olszewski, Kyle and Chai, Menglei and Huang, Zeng and Achlioptas, Panos and Tulyakov, Sergey},
12 | title = {{NeROIC}: Neural Object Capture and Rendering from Online Image Collections},
13 | journal = {Computing Research Repository (CoRR)},
14 | volume = {abs/2201.02533},
15 | year = {2022},
16 | note = {Project Page: https://formyfamily.github.io/NeROIC, Code: https://github.com/snap-research/NeROIC}
17 | }
18 |
19 | @article{wimbauer_-rendering_2022,
20 | title = {De-rendering {3D} {Objects} in the {Wild}},
21 | url = {http://arxiv.org/abs/2201.02279},
22 | abstract = {With increasing focus on augmented and virtual reality applications (XR) comes the demand for algorithms that can lift objects from images and videos into representations that are suitable for a wide variety of related 3D tasks. Large-scale deployment of XR devices and applications means that we cannot solely rely on supervised learning, as collecting and annotating data for the unlimited variety of objects in the real world is infeasible. We present a weakly supervised method that is able to decompose a single image of an object into shape (depth and normals), material (albedo, reflectivity and shininess) and global lighting parameters. For training, the method only relies on a rough initial shape estimate of the training objects to bootstrap the learning process. This shape supervision can come for example from a pretrained depth network or - more generically - from a traditional structure-from-motion pipeline. In our experiments, we show that the method can successfully de-render 2D images into a decomposed 3D representation and generalizes to unseen object categories. Since in-the-wild evaluation is difficult due to the lack of ground truth data, we also introduce a photo-realistic synthetic test set that allows for quantitative evaluation.},
23 | urldate = {2022-01-23},
24 | journal = {arXiv:2201.02279 [cs]},
25 | author = {Wimbauer, Felix and Wu, Shangzhe and Rupprecht, Christian},
26 | month = jan,
27 | year = {2022},
28 | keywords = {Computer Science - Computer Vision and Pattern Recognition},
29 | annote = {arXiv: 2201.02279},
30 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/SVVVNB5C/Wimbauer et al. - 2022 - De-rendering 3D Objects in the Wild.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/E3FGZTN2/2201.html:text/html},
31 | }
32 |
33 | @article{kim_infonerf_2021,
34 | title = {{InfoNeRF}: {Ray} {Entropy} {Minimization} for {Few}-{Shot} {Neural} {Volume} {Rendering}},
35 | shorttitle = {{InfoNeRF}},
36 | url = {http://arxiv.org/abs/2112.15399},
37 | abstract = {We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation. The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints by imposing the entropy constraint of the density in each ray. In addition, to alleviate the potential degenerate issue when all training images are acquired from almost redundant viewpoints, we further incorporate the spatially smoothness constraint into the estimated images by restricting information gains from a pair of rays with slightly different viewpoints. The main idea of our algorithm is to make reconstructed scenes compact along individual rays and consistent across rays in the neighborhood. The proposed regularizers can be plugged into most of existing neural volume rendering techniques based on NeRF in a straightforward way. Despite its simplicity, we achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks. Our project website is available at {\textbackslash}textbackslashurl\{http://cvlab.snu.ac.kr/research/InfoNeRF\}.},
38 | urldate = {2022-01-23},
39 | journal = {arXiv:2112.15399 [cs, eess]},
40 | author = {Kim, Mijeong and Seo, Seonguk and Han, Bohyung},
41 | month = dec,
42 | year = {2021},
43 | keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Electrical Engineering and Systems Science - Image and Video Processing},
44 | annote = {arXiv: 2112.15399},
45 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/SWBHFCWU/Kim et al. - 2021 - InfoNeRF Ray Entropy Minimization for Few-Shot Ne.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/IFTRRP9N/2112.html:text/html},
46 | }
47 |
48 | @inproceedings{yoonwoo_jeong_self-calibrating_2021,
49 | title = {Self-{Calibrating} {Neural} {Radiance} {Fields}},
50 | booktitle = {{ICCV}},
51 | author = {Jeong, Yoonwoo and Ahn, Seokjun and Choy, Christopher and Anandkumar, Animashree and Cho, Minsu and Park, Jaesik},
52 | year = {2021},
53 | }
54 |
55 | @article{xiangli_citynerf_2021,
56 | title = {{CityNeRF}: {Building} {NeRF} at {City} {Scale}},
57 | journal = {arXiv preprint arXiv:2112.05504},
58 | author = {Xiangli, Yuanbo and Xu, Linning and Pan, Xingang and Zhao, Nanxuan and Rao, Anyi and Theobalt, Christian and Dai, Bo and Lin, Dahua},
59 | year = {2021},
60 | }
61 |
62 | @article{tancik_block-nerf_2022,
63 | title = {Block-{NeRF}: {Scalable} {Large} {Scene} {Neural} {View} {Synthesis}},
64 | journal = {arXiv},
65 | author = {Tancik, Matthew and Casser, Vincent and Yan, Xinchen and Pradhan, Sabeek and Mildenhall, Ben and Srinivasan, Pratul and Barron, Jonathan T. and Kretzschmar, Henrik},
66 | year = {2022},
67 | }
68 |
69 | @misc{rematas_sharf_2021,
70 | title = {{ShaRF}: {Shape}-conditioned {Radiance} {Fields} from a {Single} {View}},
71 | author = {Rematas, Konstantinos and Martin-Brualla, Ricardo and Ferrari, Vittorio},
72 | year = {2021},
73 | annote = {\_eprint: 2102.08860},
74 | }
75 |
76 | @misc{kaya_neural_2021,
77 | title = {Neural {Radiance} {Fields} {Approach} to {Deep} {Multi}-{View} {Photometric} {Stereo}},
78 | author = {Kaya, Berk and Kumar, Suryansh and Sarno, Francesco and Ferrari, Vittorio and Gool, Luc Van},
79 | year = {2021},
80 | annote = {\_eprint: 2110.05594},
81 | }
82 |
83 | @article{xu_point-nerf_2022,
84 | title = {Point-{NeRF}: {Point}-based {Neural} {Radiance} {Fields}},
85 | journal = {arXiv preprint arXiv:2201.08845},
86 | author = {Xu, Qiangeng and Xu, Zexiang and Philip, Julien and Bi, Sai and Shu, Zhixin and Sunkavalli, Kalyan and Neumann, Ulrich},
87 | year = {2022},
88 | }
89 |
90 | @article{xie_fig-nerf_2021,
91 | title = {{FiG}-{NeRF}: {Figure}-{Ground} {Neural} {Radiance} {Fields} for {3D} {Object} {Category} {Modelling}},
92 | shorttitle = {{FiG}-{NeRF}},
93 | url = {http://arxiv.org/abs/2104.08418},
94 | abstract = {We investigate the use of Neural Radiance Fields (NeRF) to learn high quality 3D object category models from collections of input images. In contrast to previous work, we are able to do this whilst simultaneously separating foreground objects from their varying backgrounds. We achieve this via a 2-component NeRF model, FiG-NeRF, that prefers explanation of the scene as a geometrically constant background and a deformable foreground that represents the object category. We show that this method can learn accurate 3D object category models using only photometric supervision and casually captured images of the objects. Additionally, our 2-part decomposition allows the model to perform accurate and crisp amodal segmentation. We quantitatively evaluate our method with view synthesis and image fidelity metrics, using synthetic, lab-captured, and in-the-wild data. Our results demonstrate convincing 3D object category modelling that exceed the performance of existing methods.},
95 | language = {en},
96 | urldate = {2021-09-25},
97 | journal = {arXiv:2104.08418 [cs]},
98 | author = {Xie, Christopher and Park, Keunhong and Martin-Brualla, Ricardo and Brown, Matthew},
99 | month = apr,
100 | year = {2021},
101 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition},
102 | annote = {ZSCC: 0000001 arXiv: 2104.08418},
103 | file = {Xie et al. - 2021 - FiG-NeRF Figure-Ground Neural Radiance Fields for.pdf:/home/jack/Zotero/storage/3M4QMGRS/Xie et al. - 2021 - FiG-NeRF Figure-Ground Neural Radiance Fields for.pdf:application/pdf},
104 | }
105 |
106 | @article{yu_plenoctrees_2021,
107 | title = {{PlenOctrees} for {Real}-time {Rendering} of {Neural} {Radiance} {Fields}},
108 | url = {http://arxiv.org/abs/2103.14024},
109 | abstract = {We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800×800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve viewdependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: https://alexyu. net/plenoctrees.},
110 | language = {en},
111 | urldate = {2021-09-25},
112 | journal = {arXiv:2103.14024 [cs]},
113 | author = {Yu, Alex and Li, Ruilong and Tancik, Matthew and Li, Hao and Ng, Ren and Kanazawa, Angjoo},
114 | month = aug,
115 | year = {2021},
116 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics},
117 | annote = {Comment: ICCV 2021 (Oral)},
118 | annote = {ZSCC: 0000012 arXiv: 2103.14024},
119 | file = {Yu et al. - 2021 - PlenOctrees for Real-time Rendering of Neural Radi.pdf:/home/jack/Zotero/storage/4UDQVPAE/Yu et al. - 2021 - PlenOctrees for Real-time Rendering of Neural Radi.pdf:application/pdf},
120 | }
121 |
122 | @inproceedings{mildenhall_nerf_2020,
123 | address = {Cham},
124 | series = {Lecture {Notes} in {Computer} {Science}},
125 | title = {{NeRF}: {Representing} {Scenes} as {Neural} {Radiance} {Fields} for {View} {Synthesis}},
126 | isbn = {978-3-030-58452-8},
127 | shorttitle = {{NeRF}},
128 | doi = {10/gj826m},
129 | abstract = {We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ,ϕ)(θ,ϕ)({\textbackslash}textbackslashtextbackslashtheta ,{\textbackslash}textbackslashtextbackslashphi )) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.},
130 | language = {en},
131 | booktitle = {Computer {Vision} – {ECCV} 2020},
132 | publisher = {Springer International Publishing},
133 | author = {Mildenhall, Ben and Srinivasan, Pratul P. and Tancik, Matthew and Barron, Jonathan T. and Ramamoorthi, Ravi and Ng, Ren},
134 | editor = {Vedaldi, Andrea and Bischof, Horst and Brox, Thomas and Frahm, Jan-Michael},
135 | year = {2020},
136 | keywords = {3D deep learning, Image-based rendering, Scene representation, View synthesis, Volume rendering},
137 | pages = {405--421},
138 | annote = {ZSCC: NoCitationData[s0]},
139 | file = {Submitted Version:/home/jack/Zotero/storage/NWYLSFAT/Mildenhall et al. - 2020 - NeRF Representing Scenes as Neural Radiance Field.pdf:application/pdf},
140 | }
141 |
142 | @inproceedings{yu_pixelnerf_2021,
143 | title = {{pixelNeRF}: {Neural} {Radiance} {Fields} {From} {One} or {Few} {Images}},
144 | shorttitle = {{pixelNeRF}},
145 | url = {https://openaccess.thecvf.com/content/CVPR2021/html/Yu_pixelNeRF_Neural_Radiance_Fields_From_One_or_Few_Images_CVPR_2021_paper.html},
146 | language = {en},
147 | urldate = {2021-09-25},
148 | author = {Yu, Alex and Ye, Vickie and Tancik, Matthew and Kanazawa, Angjoo},
149 | year = {2021},
150 | pages = {4578--4587},
151 | annote = {ZSCC: 0000041},
152 | file = {Full Text PDF:/home/jack/Zotero/storage/JANH35S9/Yu et al. - 2021 - pixelNeRF Neural Radiance Fields From One or Few .pdf:application/pdf;Snapshot:/home/jack/Zotero/storage/2LMSGYRR/Yu_pixelNeRF_Neural_Radiance_Fields_From_One_or_Few_Images_CVPR_2021_paper.html:text/html},
153 | }
154 |
155 | @inproceedings{martin-brualla_nerf_2021,
156 | title = {{NeRF} in the {Wild}: {Neural} {Radiance} {Fields} for {Unconstrained} {Photo} {Collections}},
157 | shorttitle = {{NeRF} in the {Wild}},
158 | url = {https://openaccess.thecvf.com/content/CVPR2021/html/Martin-Brualla_NeRF_in_the_Wild_Neural_Radiance_Fields_for_Unconstrained_Photo_CVPR_2021_paper.html},
159 | language = {en},
160 | urldate = {2021-09-25},
161 | author = {Martin-Brualla, Ricardo and Radwan, Noha and Sajjadi, Mehdi S. M. and Barron, Jonathan T. and Dosovitskiy, Alexey and Duckworth, Daniel},
162 | year = {2021},
163 | pages = {7210--7219},
164 | annote = {ZSCC: 0000082},
165 | file = {Full Text PDF:/home/jack/Zotero/storage/ET6FIPMJ/Martin-Brualla et al. - 2021 - NeRF in the Wild Neural Radiance Fields for Uncon.pdf:application/pdf;Snapshot:/home/jack/Zotero/storage/PYVPQEEI/Martin-Brualla_NeRF_in_the_Wild_Neural_Radiance_Fields_for_Unconstrained_Photo_CVPR_2021_paper.html:text/html},
166 | }
167 |
168 | @article{yen-chen_inerf_2021,
169 | title = {{INeRF}: {Inverting} {Neural} {Radiance} {Fields} for {Pose} {Estimation}},
170 | shorttitle = {{INeRF}},
171 | url = {http://arxiv.org/abs/2012.05877},
172 | abstract = {We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural RadianceField (NeRF). NeRFs have been shown to be remarkably effective for the task of view synthesis - synthesizing photorealistic novel views of real-world scenes or objects. In this work, we investigate whether we can apply analysis-by-synthesis via NeRF for mesh-free, RGB-only 6DoF pose estimation - given an image, find the translation and rotation of a camera relative to a 3D object or scene. Our method assumes that no object mesh models are available during either training or test time. Starting from an initial pose estimate, we use gradient descent to minimize the residual between pixels rendered from a NeRF and pixels in an observed image. In our experiments, we first study 1) how to sample rays during pose refinement for iNeRF to collect informative gradients and 2) how different batch sizes of rays affect iNeRF on a synthetic dataset. We then show that for complex real-world scenes from the LLFF dataset, iNeRF can improve NeRF by estimating the camera poses of novel images and using these images as additional training data for NeRF. Finally, we show iNeRF can perform category-level object pose estimation, including object instances not seen during training, with RGB images by inverting a NeRF model inferred from a single view.},
173 | urldate = {2021-09-25},
174 | journal = {arXiv:2012.05877 [cs]},
175 | author = {Yen-Chen, Lin and Florence, Pete and Barron, Jonathan T. and Rodriguez, Alberto and Isola, Phillip and Lin, Tsung-Yi},
176 | month = aug,
177 | year = {2021},
178 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics},
179 | annote = {Comment: IROS 2021, Website: http://yenchenlin.me/inerf/},
180 | annote = {ZSCC: NoCitationData[s0] arXiv: 2012.05877},
181 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/CT7XRVWR/Yen-Chen et al. - 2021 - INeRF Inverting Neural Radiance Fields for Pose E.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/93ZRLS7C/2012.html:text/html},
182 | }
183 |
184 | @article{gao_portrait_2021,
185 | title = {Portrait {Neural} {Radiance} {Fields} from a {Single} {Image}},
186 | url = {http://arxiv.org/abs/2012.05903},
187 | abstract = {We present a method for estimating Neural Radiance Fields (NeRF) from a single headshot portrait. While NeRF has demonstrated high-quality view synthesis, it requires multiple images of static scenes and thus impractical for casual captures and moving subjects. In this work, we propose to pretrain the weights of a multilayer perceptron (MLP), which implicitly models the volumetric density and colors, with a meta-learning framework using a light stage portrait dataset. To improve the generalization to unseen faces, we train the MLP in the canonical coordinate space approximated by 3D face morphable models. We quantitatively evaluate the method using controlled captures and demonstrate the generalization to real portrait images, showing favorable results against state-of-the-arts.},
188 | urldate = {2021-09-25},
189 | journal = {arXiv:2012.05903 [cs]},
190 | author = {Gao, Chen and Shih, Yichang and Lai, Wei-Sheng and Liang, Chia-Kai and Huang, Jia-Bin},
191 | month = apr,
192 | year = {2021},
193 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition},
194 | annote = {Comment: Project webpage: https://portrait-nerf.github.io/},
195 | annote = {ZSCC: NoCitationData[s0] arXiv: 2012.05903},
196 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/7NW6B9MG/Gao et al. - 2021 - Portrait Neural Radiance Fields from a Single Imag.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/JMEHQ4CL/2012.html:text/html},
197 | }
198 |
199 | @article{lin_barf_2021,
200 | title = {{BARF}: {Bundle}-{Adjusting} {Neural} {Radiance} {Fields}},
201 | shorttitle = {{BARF}},
202 | url = {http://arxiv.org/abs/2104.06405},
203 | abstract = {Neural Radiance Fields (NeRF) have recently gained a surge of interest within the computer vision community for its power to synthesize photorealistic novel views of real-world scenes. One limitation of NeRF, however, is its requirement of accurate camera poses to learn the scene representations. In this paper, we propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect (or even unknown) camera poses – the joint problem of learning neural 3D representations and registering camera frames. We establish a theoretical connection to classical image alignment and show that coarse-to-fine registration is also applicable to NeRF. Furthermore, we show that na{\textbackslash}textbackslashtextbackslash"ively applying positional encoding in NeRF has a negative impact on registration with a synthesis-based objective. Experiments on synthetic and real-world data show that BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time. This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems (e.g. SLAM) and potential applications for dense 3D mapping and reconstruction.},
204 | urldate = {2021-09-25},
205 | journal = {arXiv:2104.06405 [cs]},
206 | author = {Lin, Chen-Hsuan and Ma, Wei-Chiu and Torralba, Antonio and Lucey, Simon},
207 | month = aug,
208 | year = {2021},
209 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Machine Learning, Computer Science - Robotics},
210 | annote = {Comment: Accepted to ICCV 2021 as oral presentation (project page \& code: https://chenhsuanlin.bitbucket.io/bundle-adjusting-NeRF)},
211 | annote = {ZSCC: 0000003 arXiv: 2104.06405},
212 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/JE6EBEIX/Lin et al. - 2021 - BARF Bundle-Adjusting Neural Radiance Fields.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/C7UT3SLU/2104.html:text/html},
213 | }
214 |
215 | @article{zhang_nerf_2020,
216 | title = {{NeRF}++: {Analyzing} and {Improving} {Neural} {Radiance} {Fields}},
217 | shorttitle = {{NeRF}++},
218 | url = {http://arxiv.org/abs/2010.07492},
219 | abstract = {Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360◦ capture of bounded scenes and forward-facing capture of bounded and unbounded scenes. NeRF fits multilayer perceptrons (MLPs) representing view-invariant opacity and view-dependent color volumes to a set of training images, and samples novel views based on volume rendering techniques. In this technical report, we first remark on radiance fields and their potential ambiguities, namely the shape-radiance ambiguity, and analyze NeRF’s success in avoiding such ambiguities. Second, we address a parametrization issue involved in applying NeRF to 360◦ captures of objects within large-scale, unbounded 3D scenes. Our method improves view synthesis fidelity in this challenging scenario. Code is available at https://github.com/Kai-46/nerfplusplus.},
220 | language = {en},
221 | urldate = {2021-09-25},
222 | journal = {arXiv:2010.07492 [cs]},
223 | author = {Zhang, Kai and Riegler, Gernot and Snavely, Noah and Koltun, Vladlen},
224 | month = oct,
225 | year = {2020},
226 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition},
227 | annote = {Comment: Code is available at https://github.com/Kai-46/nerfplusplus; fix a minor formatting issue in Fig. 4},
228 | annote = {ZSCC: NoCitationData[s0] arXiv: 2010.07492},
229 | file = {Zhang et al. - 2020 - NeRF++ Analyzing and Improving Neural Radiance Fi.pdf:/home/jack/Zotero/storage/XB9MGTMK/Zhang et al. - 2020 - NeRF++ Analyzing and Improving Neural Radiance Fi.pdf:application/pdf},
230 | }
231 |
232 | @article{reiser_kilonerf_2021,
233 | title = {{KiloNeRF}: {Speeding} up {Neural} {Radiance} {Fields} with {Thousands} of {Tiny} {MLPs}},
234 | shorttitle = {{KiloNeRF}},
235 | url = {http://arxiv.org/abs/2103.13744},
236 | abstract = {NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.},
237 | urldate = {2021-09-25},
238 | journal = {arXiv:2103.13744 [cs]},
239 | author = {Reiser, Christian and Peng, Songyou and Liao, Yiyi and Geiger, Andreas},
240 | month = aug,
241 | year = {2021},
242 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition},
243 | annote = {Comment: ICCV 2021. Code, pretrained models and an interactive viewer are available at https://github.com/creiser/kilonerf/},
244 | annote = {ZSCC: 0000006 arXiv: 2103.13744},
245 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/S7G3N9HV/Reiser et al. - 2021 - KiloNeRF Speeding up Neural Radiance Fields with .pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/YCNWFGTS/2103.html:text/html},
246 | }
247 |
248 | @inproceedings{rebain_derf_2021,
249 | title = {{DeRF}: {Decomposed} {Radiance} {Fields}},
250 | shorttitle = {{DeRF}},
251 | url = {https://openaccess.thecvf.com/content/CVPR2021/html/Rebain_DeRF_Decomposed_Radiance_Fields_CVPR_2021_paper.html},
252 | language = {en},
253 | urldate = {2021-09-25},
254 | author = {Rebain, Daniel and Jiang, Wei and Yazdani, Soroosh and Li, Ke and Yi, Kwang Moo and Tagliasacchi, Andrea},
255 | year = {2021},
256 | pages = {14153--14161},
257 | annote = {ZSCC: 0000017},
258 | file = {Full Text PDF:/home/jack/Zotero/storage/3IGZ9P5N/Rebain et al. - 2021 - DeRF Decomposed Radiance Fields.pdf:application/pdf;Snapshot:/home/jack/Zotero/storage/DW5X8L7G/Rebain_DeRF_Decomposed_Radiance_Fields_CVPR_2021_paper.html:text/html},
259 | }
260 |
261 | @article{barron_mip-nerf_2021,
262 | title = {Mip-{NeRF}: {A} {Multiscale} {Representation} for {Anti}-{Aliasing} {Neural} {Radiance} {Fields}},
263 | shorttitle = {Mip-{NeRF}},
264 | url = {http://arxiv.org/abs/2103.13415},
265 | abstract = {The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (a la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7\% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 17\% on the dataset presented with NeRF and by 60\% on a challenging multiscale variant of that dataset that we present. Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.},
266 | urldate = {2021-09-25},
267 | journal = {arXiv:2103.13415 [cs]},
268 | author = {Barron, Jonathan T. and Mildenhall, Ben and Tancik, Matthew and Hedman, Peter and Martin-Brualla, Ricardo and Srinivasan, Pratul P.},
269 | month = aug,
270 | year = {2021},
271 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics},
272 | annote = {ZSCC: 0000003 arXiv: 2103.13415},
273 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/U4P2XYSN/Barron et al. - 2021 - Mip-NeRF A Multiscale Representation for Anti-Ali.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/YMNW6YNA/2103.html:text/html},
274 | }
275 |
276 | @article{hedman_baking_2021,
277 | title = {Baking {Neural} {Radiance} {Fields} for {Real}-{Time} {View} {Synthesis}},
278 | url = {http://arxiv.org/abs/2103.14645},
279 | abstract = {Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware. To achieve this, we introduce 1) a reformulation of NeRF's architecture, and 2) a sparse voxel grid representation with learned feature vectors. The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact (averaging less than 90 MB per scene), and can be rendered in real-time (higher than 30 frames per second on a laptop GPU). Actual screen captures are shown in our video.},
280 | urldate = {2021-09-25},
281 | journal = {arXiv:2103.14645 [cs]},
282 | author = {Hedman, Peter and Srinivasan, Pratul P. and Mildenhall, Ben and Barron, Jonathan T. and Debevec, Paul},
283 | month = mar,
284 | year = {2021},
285 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics},
286 | annote = {Comment: Project page: https://nerf.live},
287 | annote = {ZSCC: 0000002 arXiv: 2103.14645},
288 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/QT2L9Q9Z/Hedman et al. - 2021 - Baking Neural Radiance Fields for Real-Time View S.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/REXBSYHG/2103.html:text/html},
289 | }
290 |
291 | @article{wang_nerf_2021,
292 | title = {{NeRF}–: {Neural} {Radiance} {Fields} {Without} {Known} {Camera} {Parameters}},
293 | shorttitle = {{NeRF}–},
294 | url = {http://arxiv.org/abs/2102.07064},
295 | abstract = {This paper tackles the problem of novel view synthesis (NVS) from 2D images without known camera poses and intrinsics. Among various NVS techniques, Neural Radiance Field (NeRF) has recently gained popularity due to its remarkable synthesis quality. Existing NeRF-based approaches assume that the camera parameters associated with each input image are either directly accessible at training, or can be accurately estimated with conventional techniques based on correspondences, such as Structure-from-Motion. In this work, we propose an end-to-end framework, termed NeRF–, for training NeRF models given only RGB images, without pre-computed camera parameters. Specifically, we show that the camera parameters, including both intrinsics and extrinsics, can be automatically discovered via joint optimisation during the training of the NeRF model. On the standard LLFF benchmark, our model achieves comparable novel view synthesis results compared to the baseline trained with COLMAP pre-computed camera parameters. We also conduct extensive analyses to understand the model behaviour under different camera trajectories, and show that in scenarios where COLMAP fails, our model still produces robust results.},
296 | language = {en},
297 | urldate = {2021-09-25},
298 | journal = {arXiv:2102.07064 [cs]},
299 | author = {Wang, Zirui and Wu, Shangzhe and Xie, Weidi and Chen, Min and Prisacariu, Victor Adrian},
300 | month = feb,
301 | year = {2021},
302 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition},
303 | annote = {Comment: project page see nerfmm.active.vision},
304 | annote = {ZSCC: 0000010 arXiv: 2102.07064},
305 | file = {Wang et al. - 2021 - NeRF-- Neural Radiance Fields Without Known Camer.pdf:/home/jack/Zotero/storage/3ZWIX5H6/Wang et al. - 2021 - NeRF-- Neural Radiance Fields Without Known Camer.pdf:application/pdf},
306 | }
307 |
308 | @article{li_mine_2021,
309 | title = {{MINE}: {Towards} {Continuous} {Depth} {MPI} with {NeRF} for {Novel} {View} {Synthesis}},
310 | shorttitle = {{MINE}},
311 | url = {http://arxiv.org/abs/2103.14910},
312 | abstract = {In this paper, we propose MINE to perform novel view synthesis and depth estimation via dense 3D reconstruction from a single image. Our approach is a continuous depth generalization of the Multiplane Images (MPI) by introducing the NEural radiance fields (NeRF). Given a single image as input, MINE predicts a 4-channel image (RGB and volume density) at arbitrary depth values to jointly reconstruct the camera frustum and fill in occluded contents. The reconstructed and inpainted frustum can then be easily rendered into novel RGB or depth views using differentiable rendering. Extensive experiments on RealEstate10K, KITTI and Flowers Light Fields show that our MINE outperforms state-of-the-art by a large margin in novel view synthesis. We also achieve competitive results in depth estimation on iBims-1 and NYU-v2 without annotated depth supervision. Our source code is available at https://github.com/vincentfung13/MINE},
313 | urldate = {2021-10-11},
314 | journal = {arXiv:2103.14910 [cs]},
315 | author = {Li, Jiaxin and Feng, Zijian and She, Qi and Ding, Henghui and Wang, Changhu and Lee, Gim Hee},
316 | month = jul,
317 | year = {2021},
318 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Machine Learning},
319 | annote = {Comment: ICCV 2021. Main paper and supplementary materials},
320 | annote = {ZSCC: 0000000 arXiv: 2103.14910},
321 | }
322 |
323 |
324 |
325 | %% Novel ideas
326 | @article{chen2023factor,
327 | title={Factor Fields: A Unified Framework for Neural Fields and Beyond},
328 | author={Chen, Anpei and Xu, Zexiang and Wei, Xinyue and Tang, Siyu and Su, Hao and Geiger, Andreas},
329 | journal={arXiv preprint arXiv:2302.01226},
330 | year={2023}
331 | }
332 |
333 | @inproceedings{
334 | ma2023image,
335 | title={Image as Set of Points},
336 | author={Xu Ma and Yuqian Zhou and Huan Wang and Can Qin and Bin Sun and Chang Liu and Yun Fu},
337 | booktitle={The Eleventh International Conference on Learning Representations },
338 | year={2023},
339 | url={https://openreview.net/forum?id=awnvqZja69}
340 | }
341 |
342 | @inproceedings{mehta2022level,
343 | title={A level set theory for neural implicit evolution under explicit flows},
344 | author={Mehta, Ishit and Chandraker, Manmohan and Ramamoorthi, Ravi},
345 | booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part II},
346 | pages={711--729},
347 | year={2022},
348 | organization={Springer}
349 | }
350 |
351 | @article{zhang2022seeing,
352 | title={Seeing a Rose in Five Thousand Ways},
353 | author={Zhang, Yunzhi and Wu, Shangzhe and Snavely, Noah and Wu, Jiajun},
354 | journal={arXiv preprint arXiv:2212.04965},
355 | year={2022}
356 | }
357 |
358 | %% Novel Approach
359 | @article{nguyen2022s4nd,
360 | title={S4ND: Modeling Images and Videos as Multidimensional Signals Using State Spaces},
361 | author={Nguyen, Eric and Goel, Karan and Gu, Albert and Downs, Gordon W and Shah, Preey and Dao, Tri and Baccus, Stephen A and R{\'e}, Christopher},
362 | journal={arXiv preprint arXiv:2210.06583},
363 | year={2022}
364 | }
365 |
366 | @article{dogaru2022sphere,
367 | title={Sphere-Guided Training of Neural Implicit Surfaces},
368 | author={Dogaru, Andreea and Ardelean, Andrei Timotei and Ignatyev, Savva and Burnaev, Evgeny and Zakharov, Egor},
369 | journal={arXiv preprint arXiv:2209.15511},
370 | year={2022}
371 | }
372 |
373 | @article{ma2022totems,
374 | author = {Ma, Jingwei and Chai, Lucy and Huh, Minyoung and Wang, Tongzhou and Lim, Ser-Nam and Isola, Phillip and Torralba, Antonio},
375 | title = {Totems: Physical Objects for Verifying Visual Integrity},
376 | journal = {ECCV},
377 | year = {2022},
378 | }
379 |
380 | @inproceedings{
381 | yang2022polynomial,
382 | title={Polynomial Neural Fields for Subband Decomposition and Manipulation},
383 | author={Guandao Yang and Sagie Benaim and Varun Jampani and Kyle Genova and Jonathan T. Barron and Thomas Funkhouser and Bharath Hariharan and Serge Belongie},
384 | booktitle={Advances in Neural Information Processing Systems},
385 | editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
386 | year={2022},
387 | url={https://openreview.net/forum?id=juE5ErmZB61}
388 | }
389 |
390 | @article{shue20223d,
391 | title={3D Neural Field Generation using Triplane Diffusion},
392 | author={Shue, J Ryan and Chan, Eric Ryan and Po, Ryan and Ankner, Zachary and Wu, Jiajun and Wetzstein, Gordon},
393 | journal={arXiv preprint arXiv:2211.16677},
394 | year={2022}
395 | }
396 |
397 | @article{wang20224k,
398 | title={4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions},
399 | author={Wang, Zhongshu and Li, Lingzhi and Shen, Zhen and Shen, Li and Bo, Liefeng},
400 | journal={arXiv preprint arXiv:2212.04701},
401 | year={2022}
402 | }
403 |
404 | @article{chung2022meil,
405 | title={MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields},
406 | author={Chung, Jaeyoung and Lee, Kanggeon and Baik, Sungyong and Lee, Kyoung Mu},
407 | journal={arXiv preprint arXiv:2212.08328},
408 | year={2022}
409 | }
410 |
411 | @article{siddiqui2022panoptic,
412 | title={Panoptic Lifting for 3D Scene Understanding with Neural Fields},
413 | author={Siddiqui, Yawar and Porzi, Lorenzo and Bul{\'o}, Samuel Rota and M{\"u}ller, Norman and Nie{\ss}ner, Matthias and Dai, Angela and Kontschieder, Peter},
414 | journal={arXiv preprint arXiv:2212.09802},
415 | year={2022}
416 | }
417 |
418 | @article{li2022climatenerf,
419 | title={ClimateNeRF: Physically-based Neural Rendering for Extreme Climate Synthesis},
420 | author={Li, Yuan and Lin, Zhi-Hao and Forsyth, David and Huang, Jia-Bin and Wang, Shenlong},
421 | journal={arXiv e-prints},
422 | pages={arXiv--2211},
423 | year={2022}
424 | }
425 |
426 | @article{zarzar2022segnerf,
427 | title={SegNeRF: 3D Part Segmentation with Neural Radiance Fields},
428 | author={Zarzar, Jesus and Rojas, Sara and Giancola, Silvio and Ghanem, Bernard},
429 | journal={arXiv preprint arXiv:2211.11215},
430 | year={2022}
431 | }
432 |
433 | @article{guo2022incremental,
434 | title={Incremental Learning for Neural Radiance Field with Uncertainty-Filtered Knowledge Distillation},
435 | author={Guo, Mengqi and Li, Chen and Lee, Gim Hee},
436 | journal={arXiv preprint arXiv:2212.10950},
437 | year={2022}
438 | }
439 |
440 | @article{weder2022removing,
441 | title={Removing Objects From Neural Radiance Fields},
442 | author={Weder, Silvan and Garcia-Hernando, Guillermo and Monszpart, Aron and Pollefeys, Marc and Brostow, Gabriel and Firman, Michael and Vicente, Sara},
443 | journal={arXiv preprint arXiv:2212.11966},
444 | year={2022}
445 | }
446 |
447 | @article{fridovich2023k,
448 | title={K-planes: Explicit radiance fields in space, time, and appearance},
449 | author={Fridovich-Keil, Sara and Meanti, Giacomo and Warburg, Frederik and Recht, Benjamin and Kanazawa, Angjoo},
450 | journal={arXiv preprint arXiv:2301.10241},
451 | year={2023}
452 | }
453 |
454 | @article{reiser2023merf,
455 | title={MERF: Memory-Efficient Radiance Fields for Real-Time View Synthesis in Unbounded Scenes},
456 | author={Reiser, Christian and Szeliski, Richard and Verbin, Dor and Srinivasan, Pratul P and Mildenhall, Ben and Geiger, Andreas and Barron, Jonathan T and Hedman, Peter},
457 | journal={arXiv preprint arXiv:2302.12249},
458 | year={2023}
459 | }
460 |
461 | @article{yariv2023bakedsdf,
462 | title={BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis},
463 | author={Yariv, Lior and Hedman, Peter and Reiser, Christian and Verbin, Dor and Srinivasan, Pratul P and Szeliski, Richard and Barron, Jonathan T and Mildenhall, Ben},
464 | journal={arXiv preprint arXiv:2302.14859},
465 | year={2023}
466 | }
467 |
468 | @article{zhang2023nerflets,
469 | title={Nerflets: Local Radiance Fields for Efficient Structure-Aware 3D Scene Representation from 2D Supervision},
470 | author={Zhang, Xiaoshuai and Kundu, Abhijit and Funkhouser, Thomas and Guibas, Leonidas and Su, Hao and Genova, Kyle},
471 | journal={arXiv preprint arXiv:2303.03361},
472 | year={2023}
473 | }
474 |
475 | @article{han2023multiscale,
476 | title={Multiscale Tensor Decomposition and Rendering Equation Encoding for View Synthesis},
477 | author={Han, Kang and Xiang, Wei},
478 | journal={arXiv preprint arXiv:2303.03808},
479 | year={2023}
480 | }
481 |
482 | @article{cai2023neuda,
483 | title={NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction},
484 | author={Cai, Bowen and Huang, Jinchi and Jia, Rongfei and Lv, Chengfei and Fu, Huan},
485 | journal={arXiv preprint arXiv:2303.02375},
486 | year={2023}
487 | }
488 |
489 | @article{bai2023self,
490 | title={Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields},
491 | author={Bai, Jiayang and Huang, Letian and Gong, Wen and Guo, Jie and Guo, Yanwen},
492 | journal={arXiv preprint arXiv:2303.05775},
493 | year={2023}
494 | }
495 |
496 | @article{moreau2023crossfire,
497 | title={CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation},
498 | author={Moreau, Arthur and Piasco, Nathan and Bennehar, Moussab and Tsishkou, Dzmitry and Stanciulescu, Bogdan and de La Fortelle, Arnaud},
499 | journal={arXiv preprint arXiv:2303.04869},
500 | year={2023}
501 | }
502 |
503 | @article{zhang2023structural,
504 | title={Structural Multiplane Image: Bridging Neural View Synthesis and 3D Reconstruction},
505 | author={Zhang, Mingfang and Wang, Jinglu and Li, Xiao and Huang, Yifei and Sato, Yoichi and Lu, Yan},
506 | journal={arXiv preprint arXiv:2303.05937},
507 | year={2023}
508 | }
509 |
510 | @article{zhou2023nerflix,
511 | title={NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer},
512 | author={Zhou, Kun and Li, Wenbo and Wang, Yi and Hu, Tao and Jiang, Nianjuan and Han, Xiaoguang and Lu, Jiangbo},
513 | journal={arXiv preprint arXiv:2303.06919},
514 | year={2023}
515 | }
516 |
517 | @article{yang2022freenerf,
518 | author = {Jiawei Yang and Marco Pavone and Yue Wang},
519 | title = {FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization},
520 | journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
521 | year = {2023},
522 | }
523 |
524 |
525 | @article{rojas2023re,
526 | title={Re-ReND: Real-time Rendering of NeRFs across Devices},
527 | author={Rojas, Sara and Zarzar, Jesus and Perez, Juan Camilo and Sanakoyeu, Artsiom and Thabet, Ali and Pumarola, Albert and Ghanem, Bernard},
528 | journal={arXiv preprint arXiv:2303.08717},
529 | year={2023}
530 | }
531 |
532 | @article{wimbauer2023behind,
533 | title={Behind the Scenes: Density Fields for Single View Reconstruction},
534 | author={Wimbauer, Felix and Yang, Nan and Rupprecht, Christian and Cremers, Daniel},
535 | journal={arXiv preprint arXiv:2301.07668},
536 | year={2023}
537 | }
538 |
539 |
540 | @article{meng2023neat,
541 | title={NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images},
542 | author={Meng, Xiaoxu and Chen, Weikai and Yang, Bo},
543 | journal={arXiv preprint arXiv:2303.12012},
544 | year={2023}
545 | }
546 |
547 | @inproceedings{choi2023CVPR,
548 | author ={Changwoon Choi and Sang Min Kim and Young Min Kim},
549 | title ={Balanced Spherical Grid for Egocentric View Synthesis},
550 | booktitle ={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
551 | month ={June},
552 | year ={2023},
553 | pages ={TBD},
554 | }
555 |
556 | @article{liang2023envidr,
557 | title={ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting},
558 | author={Liang, Ruofan and Chen, Huiting and Li, Chunlin and Chen, Fan and Panneer, Selvakumar and Vijaykumar, Nandita},
559 | journal={arXiv preprint arXiv:2303.13022},
560 | year={2023}
561 | }
562 |
563 | @inproceedings{meuleman2023localrf,
564 | author = {Meuleman, Andreas and Liu, Yu-Lun and Gao, Chen and Huang, Jia-Bin and Kim, Changil and Kim, Min H. and Kopf, Johannes},
565 | title = {Progressively Optimized Local Radiance Fields for Robust View Synthesis},
566 | booktitle = {CVPR},
567 | year = {2023},
568 | }
569 |
570 | @misc{tang2023ablenerf,
571 | title={ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field},
572 | author={Zhe Jun Tang and Tat-Jen Cham and Haiyu Zhao},
573 | year={2023},
574 | eprint={2303.13817},
575 | archivePrefix={arXiv},
576 | primaryClass={cs.CV}
577 | }
578 |
579 | @misc{hou2023neudf,
580 | title={NeUDF: Learning Unsigned Distance Fields from Multi-view Images for Reconstructing Non-watertight Models},
581 | author={Fei Hou and Jukai Deng and Xuhui Chen and Wencheng Wang and Ying He},
582 | year={2023},
583 | eprint={2303.15368},
584 | archivePrefix={arXiv},
585 | primaryClass={cs.CV}
586 | }
587 |
588 | @misc{jain2023enhanced,
589 | title={Enhanced Stable View Synthesis},
590 | author={Nishant Jain and Suryansh Kumar and Luc Van Gool},
591 | year={2023},
592 | eprint={2303.17094},
593 | archivePrefix={arXiv},
594 | primaryClass={cs.CV}
595 | }
596 |
597 | @misc{zhu2023vdnnerf,
598 | title={VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence Normalization},
599 | author={Bingfan Zhu and Yanchao Yang and Xulong Wang and Youyi Zheng and Leonidas Guibas},
600 | year={2023},
601 | eprint={2303.17968},
602 | archivePrefix={arXiv},
603 | primaryClass={cs.CV}
604 | }
605 |
606 | @misc{zhang2023nemf,
607 | title={NeMF: Inverse Volume Rendering with Neural Microflake Field},
608 | author={Youjia Zhang and Teng Xu and Junqing Yu and Yuteng Ye and Junle Wang and Yanqing Jing and Jingyi Yu and Wei Yang},
609 | year={2023},
610 | eprint={2304.00782},
611 | archivePrefix={arXiv},
612 | primaryClass={cs.CV}
613 | }
614 |
615 | @inproceedings{Liu23NeUDF,
616 | author = {Liu, Yu-Tao and Wang, Li and Yang, Jie and Chen, Weikai and Meng, Xiaoxu and Yang, Bo and Gao, Lin},
617 | title = {NeUDF: Leaning Neural Unsigned Distance Fields with Volume Rendering},
618 | booktitle={Computer Vision and Pattern Recognition (CVPR)},
619 | year = {2023},
620 | }
621 |
622 | @inproceedings{wan2023ndrf,
623 | title={Learning Neural Duplex Radiance Fields for Real-Time View Synthesis},
624 | author={Ziyu Wan and Christian Richardt and Aljaž Božič and Chao Li and Vijay Rengarajan and Seonghyeon Nam and Xiaoyu Xiang and Tuotuo Li and Bo Zhu and Rakesh Ranjan and Jing Liao},
625 | booktitle={CVPR},
626 | year={2023}
627 | }
628 |
629 | @misc{chang2023pointersect,
630 | title={Pointersect: Neural Rendering with Cloud-Ray Intersection},
631 | author={Jen-Hao Rick Chang and Wei-Yu Chen and Anurag Ranjan and Kwang Moo Yi and Oncel Tuzel},
632 | year={2023},
633 | eprint={2304.12390},
634 | archivePrefix={arXiv},
635 | primaryClass={cs.CV}
636 | }
637 |
638 | @misc{yuan2023nslfol,
639 | title={NSLF-OL: Online Learning of Neural Surface Light Fields alongside Real-time Incremental 3D Reconstruction},
640 | author={Yijun Yuan and Andreas N{\"u}chter},
641 | year={2023},
642 | eprint={2305.00282},
643 | archivePrefix={arXiv},
644 | primaryClass={cs.CV}
645 | }
646 | %%PointCloud
647 | @article{huang2022boosting,
648 | title={Boosting Point Clouds Rendering via Radiance Mapping},
649 | author={Huang, Xiaoyang and Zhang, Yi and Ni, Bingbing and Li, Teng and Chen, Kai and Zhang, Wenjun},
650 | journal={arXiv preprint arXiv:2210.15107},
651 | year={2022}
652 | }
653 |
654 | %%Indoor
655 | @article{zhu20232,
656 | title={I2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs},
657 | author={Zhu, Jingsen and Huo, Yuchi and Ye, Qi and Luan, Fujun and Li, Jifan and Xi, Dianbing and Wang, Lisha and Tang, Rui and Hua, Wei and Bao, Hujun and others},
658 | journal={arXiv preprint arXiv:2303.07634},
659 | year={2023}
660 | }
661 |
662 | @article{liang2023helixsurf,
663 | title={HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization},
664 | author={Liang, Zhihao and Huang, Zhangjin and Ding, Changxing and Jia, Kui},
665 | journal={arXiv preprint arXiv:2302.14340},
666 | year={2023}
667 | }
668 |
669 | %%Large-scale
670 | @article{zhang2023efficient,
671 | title={Efficient Large-scale Scene Representation with a Hybrid of High-resolution Grid and Plane Features},
672 | author={Zhang, Yuqi and Chen, Guanying and Cui, Shuguang},
673 | journal={arXiv preprint arXiv:2303.03003},
674 | year={2023}
675 | }
676 |
677 | @misc{xu2023gridguided,
678 | title={Grid-guided Neural Radiance Fields for Large Urban Scenes},
679 | author={Linning Xu and Yuanbo Xiangli and Sida Peng and Xingang Pan and Nanxuan Zhao and Christian Theobalt and Bo Dai and Dahua Lin},
680 | year={2023},
681 | eprint={2303.14001},
682 | archivePrefix={arXiv},
683 | primaryClass={cs.CV}
684 | }
685 |
686 | @inproceedings{mi2023switchnerf,
687 | title={Switch-NeRF: Learning Scene Decomposition with Mixture of Experts for Large-scale Neural Radiance Fields},
688 | author={Zhenxing Mi and Dan Xu},
689 | booktitle={International Conference on Learning Representations (ICLR)},
690 | year={2023},
691 | url={https://openreview.net/forum?id=PQ2zoIZqvm}
692 | }
693 |
694 | %%Fourier
695 | @article{wu2022neural,
696 | title={Neural Fourier Filter Bank},
697 | author={Wu, Zhijie and Jin, Yuhe and Moo Yi, Kwang},
698 | journal={arXiv e-prints},
699 | pages={arXiv--2212},
700 | year={2022}
701 | }
702 |
703 | @article{tancik2020fourfeat,
704 | title={Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains},
705 | author={Matthew Tancik and Pratul P. Srinivasan and Ben Mildenhall and Sara Fridovich-Keil and Nithin Raghavan and Utkarsh Singhal and Ravi Ramamoorthi and Jonathan T. Barron and Ren Ng},
706 | journal={NeurIPS},
707 | year={2020}
708 | }
709 |
710 | %%poses
711 | @article{levy2023melon,
712 | title={MELON: NeRF with Unposed Images Using Equivalence Class Estimation},
713 | author={Levy, Axel and Matthews, Mark and Sela, Matan and Wetzstein, Gordon and Lagun, Dmitry},
714 | journal={arXiv preprint arXiv:2303.08096},
715 | year={2023}
716 | }
717 |
718 | @inproceedings{bian2022nopenerf,
719 | author = {Wenjing Bian and Zirui Wang and Kejie Li and Jiawang Bian and Victor Adrian Prisacariu},
720 | title = {NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior},
721 | booktitle = {CVPR},
722 | year = {2023}
723 | }
724 |
725 | @article{sinha2022sparsepose,
726 | title={SparsePose: Sparse-View Camera Pose Regression and Refinement},
727 | author={Sinha, Samarth and Zhang, Jason Y and Tagliasacchi, Andrea and Gilitschenski, Igor and Lindell, David B},
728 | journal={arXiv preprint arXiv:2211.16991},
729 | year={2022}
730 | }
731 |
732 | @article{truong2022sparf,
733 | title={SPARF: Neural Radiance Fields from Sparse and Noisy Poses},
734 | author={Truong, Prune and Rakotosaona, Marie-Julie and Manhardt, Fabian and Tombari, Federico},
735 | journal={arXiv preprint arXiv:2211.11738},
736 | year={2022}
737 | }
738 |
739 | %% geometry
740 | @article{kulhanek2023tetranerf,
741 | title={{T}etra-{NeRF}: Representing Neural Radiance Fields Using Tetrahedra},
742 | author={Kulhanek, Jonas and Sattler, Torsten},
743 | journal={arXiv preprint arXiv:2304.09987},
744 | year={2023},
745 | }
746 |
747 | @inproceedings{wang2023fegr,
748 | title = {Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes},
749 | author = {Zian Wang and Tianchang Shen and Jun Gao and Shengyu Huang and Jacob Munkberg
750 | and Jon Hasselgren and Zan Gojcic and Wenzheng Chen and Sanja Fidler},
751 | booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
752 | month = {June},
753 | year = {2023}
754 | }
755 |
756 | @misc{fang2023evaluate,
757 | title={Evaluate Geometry of Radiance Field with Low-frequency Color Prior},
758 | author={Qihang Fang and Yafei Song and Keqiang Li and Li Shen and Huaiyu Wu and Gang Xiong and Liefeng Bo},
759 | year={2023},
760 | eprint={2304.04351},
761 | archivePrefix={arXiv},
762 | primaryClass={cs.CV}
763 | }
764 |
765 | @misc{yang2023nerfvs,
766 | title={NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds},
767 | author={Chen Yang and Peihao Li and Zanwei Zhou and Shanxin Yuan and Bingbing Liu and Xiaokang Yang and Weichao Qiu and Wei Shen},
768 | year={2023},
769 | eprint={2304.06287},
770 | archivePrefix={arXiv},
771 | primaryClass={cs.CV}
772 | }
773 |
774 | @article{rakotosaona2023nerfmeshing,
775 | title={NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes},
776 | author={Rakotosaona, Marie-Julie and Manhardt, Fabian and Arroyo, Diego Martin and Niemeyer, Michael and Kundu, Abhijit and Tombari, Federico},
777 | journal={arXiv preprint arXiv:2303.09431},
778 | year={2023}
779 | }
780 |
781 | @article{10.1145/3528223.3530140,
782 | author = {Matveev, Albert and Rakhimov, Ruslan and Artemov, Alexey and Bobrovskikh, Gleb and Egiazarian, Vage and Bogomolov, Emil and Panozzo, Daniele and Zorin, Denis and Burnaev, Evgeny},
783 | title = {DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes},
784 | year = {2022},
785 | issue_date = {July 2022},
786 | publisher = {Association for Computing Machinery},
787 | address = {New York, NY, USA},
788 | volume = {41},
789 | number = {4},
790 | issn = {0730-0301},
791 | url = {https://doi.org/10.1145/3528223.3530140},
792 | doi = {10.1145/3528223.3530140},
793 | journal = {ACM Trans. Graph.},
794 | month = {jul},
795 | articleno = {108},
796 | numpages = {22},
797 | keywords = {curve extraction, sharp geometric features, deep learning}
798 | }
799 |
800 | @article{zou2022mononeuralfusion,
801 | title={MonoNeuralFusion: Online Monocular Neural 3D Reconstruction with Geometric Priors},
802 | author={Zou, Zi-Xin and Huang, Shi-Sheng and Cao, Yan-Pei and Mu, Tai-Jiang and Shan, Ying and Fu, Hongbo},
803 | journal={arXiv preprint arXiv:2209.15153},
804 | year={2022}
805 | }
806 |
807 |
808 | %% depth
809 | @inproceedings{uy-scade-cvpr23,
810 | title = {SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates},
811 | author = {Mikaela Angelina Uy and Ricardo Martin-Brualla and Leonidas Guibas and Ke Li},
812 | booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
813 | year = {2023}
814 | }
815 |
816 | @article{wang2023sparsenerf,
817 | title={SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis},
818 | author={Guangcong and Zhaoxi Chen and Chen Change Loy and Ziwei Liu},
819 | journal={Technical Report},
820 | year={2023}
821 | }
822 |
823 | @inproceedings{bae2022irondepth,
824 | title={IronDepth: Iterative Refinement of Single-View Depth using Surface Normal and its Uncertainty},
825 | author={Bae, Gwangbin and Budvytis, Ignas and Cipolla, Roberto},
826 | booktitle={British Machine Vision Conference (BMVC)},
827 | year={2022}
828 | }
829 |
830 | %%stereo
831 | @inproceedings{du2023cross,
832 | title={Learning to Render Novel Views from Wide-Baseline Stereo Pairs},
833 | author={Du, Yilun and Smith, Cameron and Tewari, Ayush and Sitzmann, Vincent},
834 | booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
835 | year={2023}
836 | }
837 |
838 | @inproceedings{Tosi2023CVPR,
839 | author = {Tosi, Fabio and Tonioni, Alessio and De Gregorio, Daniele and Poggi, Matteo},
840 | title = {NeRF-Supervised Deep Stereo},
841 | booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
842 | year = {2023}
843 | }
844 |
845 | %% visual challenges
846 | @inproceedings{dai2023hybrid,
847 | title={Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur},
848 | author={Dai, Peng and Zhang, Yinda and Yu, Xin and Lyu, Xiaoyang and Qi, Xiaojuan},
849 | booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
850 | year={2023}
851 | }
852 |
853 | @misc{levy2023seathrunerf,
854 | title={SeaThru-NeRF: Neural Radiance Fields in Scattering Media},
855 | author={Deborah Levy and Amit Peleg and Naama Pearl and Dan Rosenbaum and Derya Akkaynak and Simon Korman and Tali Treibitz},
856 | year={2023},
857 | eprint={2304.07743},
858 | archivePrefix={arXiv},
859 | primaryClass={cs.CV}
860 | }
861 |
862 | @misc{qiu2023looking,
863 | title={Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections},
864 | author={Jiaxiong Qiu and Peng-Tao Jiang and Yifan Zhu and Ze-Xin Yin and Ming-Ming Cheng and Bo Ren},
865 | year={2023},
866 | eprint={2304.08706},
867 | archivePrefix={arXiv},
868 | primaryClass={cs.CV}
869 | }
870 |
871 | @article{Nerfbusters2023,
872 | title = {Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs},
873 | author = {Frederik Warburg* and Ethan Weber* and Matthew Tancik and Aleksander Hołyński and Angjoo Kanazawa},
874 | journal = {arXiv preprint},
875 | year = {2023},
876 | }
877 |
878 | @misc{tong2023seeing,
879 | title={Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container},
880 | author={Jinguang Tong and Sundaram Muthu and Fahira Afzal Maken and Chuong Nguyen and Hongdong Li},
881 | year={2023},
882 | eprint={2303.13805},
883 | archivePrefix={arXiv},
884 | primaryClass={cs.CV}
885 | }
886 |
887 | @misc{hu2023point2pix,
888 | title={Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields},
889 | author={Tao Hu and Xiaogang Xu and Shu Liu and Jiaya Jia},
890 | year={2023},
891 | eprint={2303.16482},
892 | archivePrefix={arXiv},
893 | primaryClass={cs.CV}
894 | }
895 |
896 | @misc{wang2022badnerf,
897 | title={BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields},
898 | author={Peng Wang and Lingzhe Zhao and Ruijie Ma and Peidong Liu},
899 | year={2022},
900 | eprint={2211.12853},
901 | archivePrefix={arXiv},
902 | primaryClass={cs.CV}
903 | }
904 |
905 | @misc{chen2023dehazenerf,
906 | title={DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields},
907 | author={Wei-Ting Chen and Wang Yifan and Sy-Yen Kuo and Gordon Wetzstein},
908 | year={2023},
909 | eprint={2303.11364},
910 | archivePrefix={arXiv},
911 | primaryClass={cs.CV}
912 | }
913 |
914 | @article{cui2023aleth,
915 | title={Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields},
916 | author={Cui, Ziteng and Gu, Lin and Sun, Xiao and Qiao, Yu and Harada, Tatsuya},
917 | journal={arXiv preprint arXiv:2303.05807},
918 | year={2023}
919 | }
920 |
921 | @article{wu2023alpha,
922 | title={$\alpha$ Surf: Implicit Surface Reconstruction for Semi-Transparent and Thin Objects with Decoupled Geometry and Opacity},
923 | author={Wu, Tianhao and Liang, Hanxue and Zhong, Fangcheng and Riegler, Gernot and Vainer, Shimon and Oztireli, Cengiz},
924 | journal={arXiv preprint arXiv:2303.10083},
925 | year={2023}
926 | }
927 |
928 | @article{lee2023extremenerf,
929 | title={ExtremeNeRF: Few-shot Neural Radiance Fields Under Unconstrained Illumination},
930 | author={Lee, SeokYeong and Choi, JunYong and Kim, Seungryong and Kim, Ig-Jae and Cho, Junghyun},
931 | journal={arXiv preprint arXiv:2303.11728},
932 | year={2023}
933 | }
934 |
942 |
943 | @inproceedings{low2022minimal,
944 | title={Minimal Neural Atlas: Parameterizing Complex Surfaces with Minimal Charts and Distortion},
945 | author={Low, Weng Fei and Lee, Gim Hee},
946 | booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part II},
947 | pages={465--481},
948 | year={2022},
949 | organization={Springer}
950 | }
951 |
952 | @article{ge2023ref,
953 | title={Ref-NeuS: Ambiguity-Reduced Neural Implicit Surface Learning for Multi-View Reconstruction with Reflection},
954 | author={Ge, Wenhang and Hu, Tao and Zhao, Haoyu and Liu, Shu and Chen, Ying-Cong},
955 | journal={arXiv preprint arXiv:2303.10840},
956 | year={2023}
957 | }
958 |
959 | @incollection{pan2022sampling,
960 | title={Sampling Neural Radiance Fields for Refractive Objects},
961 | author={Pan, Jen-I and Su, Jheng-Wei and Hsiao, Kai-Wen and Yen, Ting-Yu and Chu, Hung-Kuo},
962 | booktitle={SIGGRAPH Asia 2022 Technical Communications},
963 | pages={1--4},
964 | year={2022}
965 | }
--------------------------------------------------------------------------------
/src/NeRF+Architecture_Improvements.md:
--------------------------------------------------------------------------------
1 | ---
2 | bibliography: NeRF+Architecture_Improvements.bib
3 | nocite: "@*"
4 | ---
5 |
--------------------------------------------------------------------------------
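
Each section's `.md` file carries only a pandoc YAML header: `bibliography` names the matching `.bib`, and `nocite: "@*"` forces every entry into the rendered reference list even though nothing is cited in the body. The sketch below shows the kind of invocation that `src/generate.sh` (not reproduced in this listing) would need to run to produce the `src/gen/*-output.md` files; it is a hypothetical reconstruction, assuming pandoc's `--citeproc` with the `ieee.csl` that the workflow fetches into the repository root.

```bash
#!/bin/bash
# Hypothetical sketch only -- the real src/generate.sh is not shown in this listing.
# Assumption: each section .md is rendered with pandoc citeproc against its own .bib
# (named in its YAML header) plus the ieee.csl fetched by the CI workflow, writing the
# formatted reference list to src/gen/<name>-output.md.
cd src
for f in *.md; do
  name="${f%.md}"
  # frontmatter.md and contents.md are concatenated by build.sh, not rendered here
  case "$name" in frontmatter|contents) continue ;; esac
  pandoc "$f" --citeproc --csl ../ieee.csl -t markdown -o "gen/${name}-output.md"
done
```
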
/src/Review_Papers.bib:
--------------------------------------------------------------------------------
1 |
2 | @article{tewari_state_2020,
3 | title = {State of the {Art} on {Neural} {Rendering}},
4 | url = {https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.14022},
5 | abstract = {Efficient rendering of photo-realistic virtual worlds is a long standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning have given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photorealistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state-of-the-art report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.},
6 | language = {en},
7 | urldate = {2023-04-04},
8 | journal = {Computer Graphics Forum},
9 | author = {Tewari, Ayush and Fried, Ohad and Thies, Justus and Sitzmann, Vincent and Lombardi, Stephen and Sunkavalli, Kalyan and Martin-Brualla, Ricardo and Simon, Tomas and Saragih, Jason and Nießner, Matthias and Pandey, Rohit and Fanello, Sean and Wetzstein, Gordon and Zhu, Jun-Yan and Theobalt, Christian and Agrawala, Maneesh and Shechtman, Eli and Goldman, Dan B. and Zollhöfer, Michael},
10 | month = jul,
11 | year = {2020},
12 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics},
13 | annote = {Comment: Eurographics 2020 survey paper},
14 | annote = {ZSCC: NoCitationData[s0] arXiv: 2004.03805},
15 | }
16 |
17 | @article{xie_neural_2021,
18 | title = {Neural {Fields} in {Visual} {Computing} and {Beyond}},
19 | url = {https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.14505},
20 | abstract = {Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and image, animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress in a short time, many papers exist but a comprehensive review and formulation of the problem has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of literature on neural fields. This report covers research along two dimensions. In Part I, we focus on techniques in neural fields by identifying common components of neural field methods, including different representations, architectures, forward mapping, and generalization methods. In Part II, we focus on applications of neural fields to different problems in visual computing, and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in current incarnations, demonstrating the improved quality, flexibility, and capability brought by neural fields methods. Finally, we present a companion website that contributes a living version of this review that can be continually updated by the community.},
21 | urldate = {2023-04-04},
22 | journal = {Computer Graphics Forum},
23 | author = {Xie, Yiheng and Takikawa, Towaki and Saito, Shunsuke and Litany, Or and Yan, Shiqin and Khan, Numair and Tombari, Federico and Tompkin, James and Sitzmann, Vincent and Sridhar, Srinath},
24 | month = may,
25 | year = {2022},
26 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Machine Learning},
27 | annote = {arXiv: 2111.11426},
28 | annote = {Comment: Equal advising: Vincent Sitzmann and Srinath Sridhar},
29 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/JYWWFDUN/Xie et al. - 2021 - Neural Fields in Visual Computing and Beyond.pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/B7R4AUZY/2111.html:text/html},
30 | }
31 |
32 | @article{tewari_advances_2021,
33 | title = {Advances in {Neural} {Rendering}},
34 | url = {http://arxiv.org/abs/2111.05849},
35 | abstract = {Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanied textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling nonrigidly deforming objects and scene editing and composition. While most of these approaches are scene-specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state-ofthe-art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.},
36 | language = {en},
37 | urldate = {2021-11-27},
38 | journal = {arXiv:2111.05849 [cs]},
39 | author = {Tewari, Ayush and Thies, Justus and Mildenhall, Ben and Srinivasan, Pratul and Tretschk, Edgar and Wang, Yifan and Lassner, Christoph and Sitzmann, Vincent and Martin-Brualla, Ricardo and Lombardi, Stephen and Simon, Tomas and Theobalt, Christian and Niessner, Matthias and Barron, Jonathan T. and Wetzstein, Gordon and Zollhoefer, Michael and Golyanik, Vladislav},
40 | month = nov,
41 | year = {2021},
42 | keywords = {⛔ No DOI found, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics},
43 | annote = {arXiv: 2111.05849},
44 | annote = {Comment: 29 pages, 14 figures, 5 tables},
45 | file = {Tewari et al. - 2021 - Advances in Neural Rendering.pdf:/home/jack/Zotero/storage/UL3RGZTN/Tewari et al. - 2021 - Advances in Neural Rendering.pdf:application/pdf},
46 | }
47 |
48 |
49 | @misc{toschi2023relight,
50 | title={ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects},
51 | author={Marco Toschi and Riccardo De Matteo and Riccardo Spezialetti and Daniele De Gregorio and Luigi Di Stefano and Samuele Salti},
52 | year={2023},
53 | eprint={2304.10448},
54 | archivePrefix={arXiv},
55 | primaryClass={cs.CV}
56 | }
57 |
58 | @article{tancik2023nerfstudio,
59 | title={Nerfstudio: A modular framework for neural radiance field development},
60 | author={Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi, Brent and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin, Jake and Salahi, Kamyar and others},
61 | journal={arXiv preprint arXiv:2302.04264},
62 | year={2023}
63 | }
64 |
65 | @article{wallingford2023neural,
66 | title={Neural Radiance Field Codebooks},
67 | author={Wallingford, Matthew and Kusupati, Aditya and Fang, Alex and Ramanujan, Vivek and Kembhavi, Aniruddha and Mottaghi, Roozbeh and Farhadi, Ali},
68 | journal={arXiv preprint arXiv:2301.04101},
69 | year={2023}
70 | }
--------------------------------------------------------------------------------
/src/Review_Papers.md:
--------------------------------------------------------------------------------
1 | ---
2 | bibliography: Review_Papers.bib
3 | nocite: "@*"
4 | ---
5 |
--------------------------------------------------------------------------------
/src/Robotics_Applications.bib:
--------------------------------------------------------------------------------
1 |
2 | @article{driess_learning_2022,
3 | title = {Learning {Multi}-{Object} {Dynamics} with {Compositional} {Neural} {Radiance} {Fields}},
4 | journal = {arXiv preprint arXiv:2202.11855},
5 | author = {Driess, Danny and Huang, Zhiao and Li, Yunzhu and Tedrake, Russ and Toussaint, Marc},
6 | year = {2022},
7 | }
8 |
9 | @inproceedings{yen-chen_nerf-supervision_2022,
10 | title = {{NeRF}-{Supervision}: {Learning} {Dense} {Object} {Descriptors} from {Neural} {Radiance} {Fields}},
11 | booktitle = {{IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})},
12 | author = {Yen-Chen, Lin and Florence, Pete and Barron, Jonathan T. and Lin, Tsung-Yi and Rodriguez, Alberto and Isola, Phillip},
13 | year = {2022},
14 | }
15 |
16 |
17 |
18 | @article{zhu_nice-slam_2021,
19 | title = {{NICE}-{SLAM}: {Neural} {Implicit} {Scalable} {Encoding} for {SLAM}},
20 | journal = {arXiv},
21 | author = {Zhu, Zihan and Peng, Songyou and Larsson, Viktor and Xu, Weiwei and Bao, Hujun and Cui, Zhaopeng and Oswald, Martin R. and Pollefeys, Marc},
22 | year = {2021},
23 | }
24 |
25 | @article{adamkiewicz_vision-only_2021,
26 | title = {Vision-{Only} {Robot} {Navigation} in a {Neural} {Radiance} {World}},
27 | url = {http://arxiv.org/abs/2110.00168},
28 | abstract = {Neural Radiance Fields (NeRFs) have recently emerged as a powerful paradigm for the representation of natural, complex 3D scenes. NeRFs represent continuous volumetric density and RGB values in a neural network, and generate photo-realistic images from unseen camera viewpoints through ray tracing. We propose an algorithm for navigating a robot through a 3D environment represented as a NeRF using only an on-board RGB camera for localization. We assume the NeRF for the scene has been pre-trained offline, and the robot's objective is to navigate through unoccupied space in the NeRF to reach a goal pose. We introduce a trajectory optimization algorithm that avoids collisions with high-density regions in the NeRF based on a discrete time version of differential flatness that is amenable to constraining the robot's full pose and control inputs. We also introduce an optimization based filtering method to estimate 6DoF pose and velocities for the robot in the NeRF given only an onboard RGB camera. We combine the trajectory planner with the pose filter in an online replanning loop to give a vision-based robot navigation pipeline. We present simulation results with a quadrotor robot navigating through a jungle gym environment, the inside of a church, and Stonehenge using only an RGB camera. We also demonstrate an omnidirectional ground robot navigating through the church, requiring it to reorient to fit through the narrow gap. Videos of this work can be found at https://mikh3x4.github.io/nerf-navigation/ .},
29 | urldate = {2021-10-11},
30 | journal = {arXiv:2110.00168 [cs]},
31 | author = {Adamkiewicz, Michal and Chen, Timothy and Caccavale, Adam and Gardner, Rachel and Culbertson, Preston and Bohg, Jeannette and Schwager, Mac},
32 | month = sep,
33 | year = {2021},
34 | keywords = {Computer Science - Robotics},
35 | annote = {ZSCC: 0000000 arXiv: 2110.00168},
36 | file = {arXiv Fulltext PDF:/home/jack/Zotero/storage/GMSQPRWH/Adamkiewicz et al. - 2021 - Vision-Only Robot Navigation in a Neural Radiance .pdf:application/pdf;arXiv.org Snapshot:/home/jack/Zotero/storage/EBYTKNHK/2110.html:text/html},
37 | }
38 |
39 | @inproceedings{sucar_imap_2021,
40 | title = {{iMAP}: {Implicit} {Mapping} and {Positioning} in {Real}-{Time}},
41 | booktitle = {Proceedings of the {IEEE}/{CVF} {International} {Conference} on {Computer} {Vision} ({ICCV})},
42 | author = {Sucar, Edgar and Liu, Shikun and Ortiz, Joseph and Davison, Andrew J.},
43 | month = oct,
44 | year = {2021},
45 | pages = {6229--6238},
46 | }
47 |
48 |
49 | @misc{wang2023coslam,
50 | title={Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM},
51 | author={Hengyi Wang and Jingwen Wang and Lourdes Agapito},
52 | year={2023},
53 | eprint={2304.14377},
54 | archivePrefix={arXiv},
55 | primaryClass={cs.CV}
56 | }
57 |
58 | @misc{sandstrom2023pointslam,
59 | title={Point-SLAM: Dense Neural Point Cloud-based SLAM},
60 | author={Erik Sandström and Yue Li and Luc Van Gool and Martin R. Oswald},
61 | year={2023},
62 | eprint={2304.04278},
63 | archivePrefix={arXiv},
64 | primaryClass={cs.CV}
65 | }
66 |
67 | @article{matsuki2023newton,
68 | title={NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM},
69 | author={Matsuki, Hidenobu and Tateno, Keisuke and Niemeyer, Michael and Tombari, Federico},
70 | journal={arXiv preprint arXiv:2303.13654},
71 | year={2023}
72 | }
73 |
74 | @article{kong2023vmap,
75 | title={vMAP: Vectorised Object Mapping for Neural Field SLAM},
76 | author={Kong, Xin and Liu, Shikun and Taher, Marwan and Davison, Andrew J},
77 | journal={arXiv preprint arXiv:2302.01838},
78 | year={2023}
79 | }
80 |
81 | @article{li2023dense,
82 | title={Dense RGB SLAM with Neural Implicit Maps},
83 | author={Li, Heng and Gu, Xiaodong and Yuan, Weihao and Yang, Luwei and Dong, Zilong and Tan, Ping},
84 | journal={arXiv preprint arXiv:2301.08930},
85 | year={2023}
86 | }
87 |
88 | @article{rosinol2022nerf,
89 | title={NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields},
90 | author={Rosinol, Antoni and Leonard, John J and Carlone, Luca},
91 | journal={arXiv preprint arXiv:2210.13641},
92 | year={2022}
93 | }
94 |
95 | @article{weng2023ngdf,
96 | title={Neural Grasp Distance Fields for Robot Manipulation},
97 | author={Weng, Thomas and Held, David and Meier, Franziska and Mukadam, Mustafa},
98 | journal={IEEE International Conference on Robotics and Automation (ICRA)},
99 | year={2023}
100 | }
101 |
102 | @article{griffiths2022nocal,
103 | doi = {10.48550/ARXIV.2210.07435},
104 | author = {Griffiths, Ryan and Naylor, Jack and Dansereau, Donald G.},
105 | keywords = {Robotics (cs.RO), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
106 | title = {NOCaL: Calibration-Free Semi-Supervised Learning of Odometry and Camera Intrinsics},
107 | publisher = {arXiv},
108 | year = {2022},
109 | copyright = {arXiv.org perpetual, non-exclusive license},
110 | }
111 |
112 | @inproceedings{
113 | lin2022mira,
114 | title={{MIRA}: Mental Imagery for Robotic Affordances},
115 | author={Yen-Chen Lin and Pete Florence and Andy Zeng and Jonathan T. Barron and Yilun Du and Wei-Chiu Ma and Anthony Simeonov and Alberto Rodriguez Garcia and Phillip Isola},
116 | booktitle={6th Annual Conference on Robot Learning},
117 | year={2022},
118 | url={https://openreview.net/forum?id=AmPeAFzU3a4}
119 | }
120 |
121 | @article{driess2022reinforcement,
122 | title={Reinforcement learning with neural radiance fields},
123 | author={Driess, Danny and Schubert, Ingmar and Florence, Pete and Li, Yunzhu and Toussaint, Marc},
124 | journal={arXiv preprint arXiv:2206.01634},
125 | year={2022}
126 | }
127 |
128 | @misc{marza2023autonerf,
129 | title={AutoNeRF: Training Implicit Scene Representations with Autonomous Agents},
130 | author={Pierre Marza and Laetitia Matignon and Olivier Simonin and Dhruv Batra and Christian Wolf and Devendra Singh Chaplot},
131 | year={2023},
132 | eprint={2304.11241},
133 | archivePrefix={arXiv},
134 | primaryClass={cs.CV}
135 | }
136 |
137 | @inproceedings{
138 | yang2023neural,
139 | title={Neural Volumetric Memory for Visual Locomotion Control},
140 | author={Ruihan Yang and Ge Yang and Xiaolong Wang},
141 | booktitle={Conference on Computer Vision and Pattern Recognition 2023},
142 | year={2023},
143 | url={https://openreview.net/forum?id=JYyWCcmwDS}
144 | }
145 |
146 | @article{deng2023nerf,
147 | title={NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping},
148 | author={Deng, Junyuan and Chen, Xieyuanli and Xia, Songpengcheng and Sun, Zhen and Liu, Guoqing and Yu, Wenxian and Pei, Ling},
149 | journal={arXiv preprint arXiv:2303.10709},
150 | year={2023}
151 | }
152 |
153 | @article{chen2023refinement,
154 | title={Refinement for Absolute Pose Regression with Neural Feature Synthesis},
155 | author={Chen, Shuai and Bhalgat, Yash and Li, Xinghui and Bian, Jiawang and Li, Kejie and Wang, Zirui and Prisacariu, Victor Adrian},
156 | journal={arXiv preprint arXiv:2303.10087},
157 | year={2023}
158 | }
159 |
160 | @article{patel2023dronerf,
161 | title={DroNeRF: Real-time Multi-agent Drone Pose Optimization for Computing Neural Radiance Fields},
162 | author={Patel, Dipam and Pham, Phu and Bera, Aniket},
163 | journal={arXiv preprint arXiv:2303.04322},
164 | year={2023}
165 | }
166 |
167 | @article{yan2023render,
168 | title={Render-and-Compare: Cross-View 6 DoF Localization from Noisy Prior},
169 | author={Yan, Shen and Cheng, Xiaoya and Liu, Yuxiang and Zhu, Juelin and Wu, Rouwan and Liu, Yu and Zhang, Maojun},
170 | journal={arXiv preprint arXiv:2302.06287},
171 | year={2023}
172 | }
173 |
174 | @article{zhan2022activermap,
175 | title={ActiveRMAP: Radiance Field for Active Mapping And Planning},
176 | author={Zhan, Huangying and Zheng, Jiyang and Xu, Yi and Reid, Ian and Rezatofighi, Hamid},
177 | journal={arXiv preprint arXiv:2211.12656},
178 | year={2022}
179 | }
180 |
181 | @misc{liu2023nerfloc,
182 | title={NeRF-Loc: Visual Localization with Conditional Neural Radiance Field},
183 | author={Jianlin Liu and Qiang Nie and Yong Liu and Chengjie Wang},
184 | year={2023},
185 | eprint={2304.07979},
186 | archivePrefix={arXiv},
187 | primaryClass={cs.CV}
188 | }
--------------------------------------------------------------------------------
/src/Robotics_Applications.md:
--------------------------------------------------------------------------------
1 | ---
2 | bibliography: Robotics_Applications.bib
3 | nocite: "@*"
4 | ---
5 |
--------------------------------------------------------------------------------
/src/Speed_Improvements.bib:
--------------------------------------------------------------------------------
1 |
2 | @article{muller_instant_2022,
3 | title = {Instant {Neural} {Graphics} {Primitives} with a {Multiresolution} {Hash} {Encoding}},
4 | journal = {arXiv:2201.05989},
5 | author = {Müller, Thomas and Evans, Alex and Schied, Christoph and Keller, Alexander},
6 | month = jan,
7 | year = {2022},
8 | }
9 |
10 |
11 |
12 | @article{deng_depth-supervised_2021,
13 | title = {Depth-supervised {NeRF}: {Fewer} {Views} and {Faster} {Training} for {Free}},
14 | journal = {arXiv preprint arXiv:2107.02791},
15 | author = {Deng, Kangle and Liu, Andrew and Zhu, Jun-Yan and Ramanan, Deva},
16 | year = {2021},
17 | }
18 |
19 |
20 | @article{li2022compressing,
21 | title={Compressing Volumetric Radiance Fields to 1 MB},
22 | author={Li, Lingzhi and Shen, Zhen and Wang, Zhongshu and Shen, Li and Bo, Liefeng},
23 | journal={arXiv preprint arXiv:2211.16386},
24 | year={2022}
25 | }
26 |
27 | @article{johnson2022neural,
28 | title={Neural Fields for Fast and Scalable Interpolation of Geophysical Ocean Variables},
29 | author={Johnson, J Emmanuel and Lguensat, Redouane and Fablet, Ronan and Cosme, Emmanuel and Sommer, Julien Le},
30 | journal={arXiv preprint arXiv:2211.10444},
31 | year={2022}
32 | }
33 |
34 | @InProceedings{kangle2021dsnerf,
35 | author = {Deng, Kangle and Liu, Andrew and Zhu, Jun-Yan and Ramanan, Deva},
36 | title = {Depth-supervised {NeRF}: Fewer Views and Faster Training for Free},
37 | booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
38 | month = {June},
39 | year = {2022}
40 | }
41 |
42 | @article{wang2022mixed,
43 | title={Mixed Neural Voxels for Fast Multi-view Video Synthesis},
44 | author={Wang, Feng and Tan, Sinan and Li, Xinghang and Tian, Zeyue and Liu, Huaping},
45 | journal={arXiv preprint arXiv:2212.00190},
46 | year={2022}
47 | }
48 |
49 | @article{wang2023f2nerf,
50 | title={F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories},
51 | author={Wang, Peng and Liu, Yuan and Chen, Zhaoxi and Liu, Lingjie and Liu, Ziwei and Komura, Taku and Theobalt, Christian and Wang, Wenping},
52 | journal={CVPR},
53 | year={2023}
54 | }
55 |
56 | @article{lee2023fastsurf,
57 | title={FastSurf: Fast Neural RGB-D Surface Reconstruction using Per-Frame Intrinsic Refinement and TSDF Fusion Prior Learning},
58 | author={Lee, Seunghwan and Park, Gwanmo and Son, Hyewon and Ryu, Jiwon and Chae, Han Joo},
59 | journal={arXiv preprint arXiv:2303.04508},
60 | year={2023}
61 | }
62 |
63 | @misc{wang2022neus2,
64 | doi = {10.48550/ARXIV.2212.05231},
65 | url = {https://arxiv.org/abs/2212.05231},
66 | author = {Wang, Yiming and Han, Qin and Habermann, Marc and Daniilidis, Kostas and Theobalt, Christian and Liu, Lingjie},
67 | keywords = {Computer Vision and Pattern Recognition (cs.CV), Graphics (cs.GR), FOS: Computer and information sciences, FOS: Computer and information sciences},
68 | title = {NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction},
69 | publisher = {arXiv},
70 | year = {2022},
71 | copyright = {arXiv.org perpetual, non-exclusive license}
72 | }
--------------------------------------------------------------------------------
/src/Speed_Improvements.md:
--------------------------------------------------------------------------------
1 | ---
2 | bibliography: Speed_Improvements.bib
3 | nocite: "@*"
4 | ---
5 |
--------------------------------------------------------------------------------
/src/build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | echo "Beginning build..."
4 | sections="src/sections.csv"
5 |
6 | cat "src/frontmatter.md" > README.md
7 | cat "src/contents.md" >> README.md
8 |
9 | while IFS=, read -r filename formatted_name
10 | do
11 | echo "Working on: $filename"
12 | echo "" >> README.md
13 | echo "## $formatted_name" >> README.md
14 | cat "src/gen/$filename-output.md" >> README.md
15 | done < $sections
16 |
17 | echo "Built without errors!"
18 | echo "Neural Fields for Robotics Resources successfully updated."
--------------------------------------------------------------------------------
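
`build.sh` concatenates `src/frontmatter.md` and `src/contents.md`, then appends one `##` heading plus the corresponding generated reference list for each row of `src/sections.csv`. That CSV is not reproduced in this listing; the snippet below is a hypothetical example of the `filename,formatted_name` format the loop expects, with rows inferred from the section names in `contents.md` and the files under `src/gen/`.

```bash
# Hypothetical example only -- src/sections.csv is not shown in this listing.
# build.sh reads each line as "filename,formatted_name": filename must match
# src/gen/<filename>-output.md, and formatted_name becomes the README heading.
cat > src/sections.csv <<'EOF'
Review_Papers,Review Papers
NeRF+Architecture_Improvements,NeRF + Architecture Improvements
LightFields+Plenoxels,Light Fields + Plenoxels
DynamicScenes+Rendering,Dynamic Scenes + Rendering
Speed_Improvements,Speed Improvements
Robotics_Applications,Robotics Applications
EOF
```
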
/src/contents.md:
--------------------------------------------------------------------------------
1 | ## Contents
2 | - [Review Papers](#Review_Papers)
3 | - [NeRF + Architecture Improvements](#NeRF+Architecture_Improvements)
4 | - [Light Fields + Plenoxels](#LightFields+Plenoxels)
5 | - [Dynamic Scenes + Rendering](#DynamicScenes+Rendering)
6 | - [Speed Improvements](#Speed_Improvements)
7 | - [Robotics Applications](#Robotics_Applications)
8 |
--------------------------------------------------------------------------------
/src/frontmatter.md:
--------------------------------------------------------------------------------
1 | # Neural Fields for Robotics Resources
2 | A repo collating papers and other material related to neural radiance fields (NeRFs), neural scene representations and associated works with a focus towards applications in robotics.
3 |
4 | This repo is maintained by the [Robotic Imaging Research Group](https://roboticimaging.org) at the [University of Sydney](https://sydney.edu.au). We are embedded within the [Australian Centre for Robotics](https://www.sydney.edu.au/engineering/our-research/robotics-and-intelligent-systems/australian-centre-for-field-robotics.html) in the Faculty of Engineering.
5 |
6 | To contribute, please see the `how_to_add.md` file.
7 |
--------------------------------------------------------------------------------
/src/gen/DynamicScenes+Rendering-output.md:
--------------------------------------------------------------------------------
1 | \[1\] A. Pumarola, E. Corona, G. Pons-Moll, and F.
3 | Moreno-Noguer, “D-NeRF: Neural Radiance Fields for Dynamic Scenes,” in
4 | *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
5 | Recognition (CVPR)*, Jun. 2021, pp. 10318–10327.
6 |
7 | \[2\] E. Tretschk, A. Tewari, V. Golyanik, M.
9 | Zollhöfer, C. Lassner, and C. Theobalt, “Non-Rigid Neural Radiance
10 | Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From
11 | Monocular Video,” in *Proceedings of the IEEE/CVF International
12 | Conference on Computer Vision (ICCV)*, Oct. 2021, pp.
13 | 12959–12970.
14 |
15 | \[3\] Z. Li, S. Niklaus, N. Snavely, and O. Wang,
17 | “Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic
18 | Scenes,” in *Proceedings of the IEEE/CVF Conference on Computer Vision
19 | and Pattern Recognition (CVPR)*, Jun. 2021, pp. 6498–6508.
20 |
21 | \[4\] K. Park *et al.*, “Nerfies: Deformable Neural
23 | Radiance Fields,” in *Proceedings of the IEEE/CVF International
24 | Conference on Computer Vision (ICCV)*, Oct. 2021, pp. 5865–5874.
25 |
26 | \[5\] C.-Y. Weng, B. Curless, P. P. Srinivasan, J. T.
28 | Barron, and I. Kemelmacher-Shlizerman, “HumanNeRF: Free-viewpoint
29 | Rendering of Moving People from Monocular Video,” *arXiv*, 2022.
30 |
31 | \[6\] K. Park *et al.*, “HyperNeRF: A
33 | Higher-Dimensional Representation for Topologically Varying Neural
34 | Radiance Fields,” *ACM Trans. Graph.*, vol. 40, no. 6, Dec. 2021.
35 |
36 | \[7\] G. Yang, M. Vo, N. Natalia, D. Ramanan, V.
38 | Andrea, and J. Hanbyul, “BANMo: Building Animatable 3D Neural Models
39 | from Many Casual Videos,” *arXiv preprint arXiv:2112.12761*,
40 | 2021.
41 |
42 | \[8\] S. Peng *et al.*, “Animatable neural radiance
44 | fields for modeling dynamic human bodies,” in *Proceedings of the
45 | IEEE/CVF international conference on computer vision (ICCV)*, 2021, pp.
46 | 14314–14323.
47 |
--------------------------------------------------------------------------------
/src/gen/LightFields+Plenoxels-output.md:
--------------------------------------------------------------------------------
1 | \[1\] J. Ost, I. Laradji, A. Newell, Y. Bahat, and F.
3 | Heide, “Neural Point Light Fields,” *CoRR*, vol. abs/2112.01473, 2021,
4 | Available:
5 |
6 | \[2\] M. Suhail, C. Esteves, L. Sigal, and A.
8 | Makadia, “Light field neural rendering.” 2021. Available:
9 |
10 |
11 | \[3\] Alex Yu and Sara Fridovich-Keil, M. Tancik, Q.
13 | Chen, B. Recht, and A. Kanazawa, “Plenoxels: Radiance fields without
14 | neural networks.” 2021. Available:
15 |
16 |
17 | \[4\] V. Sitzmann, S. Rezchikov, W. T. Freeman, J. B.
19 | Tenenbaum, and F. Durand, “Light field networks: Neural scene
20 | representations with single-evaluation rendering,” in *Proc. NeurIPS*,
21 | 2021.
22 |
--------------------------------------------------------------------------------
/src/gen/NeRF+Architecture_Improvements-output.md:
--------------------------------------------------------------------------------
1 | \[1\] M. Niemeyer, J. T. Barron, B. Mildenhall, M. S.
3 | M. Sajjadi, A. Geiger, and N. Radwan, “RegNeRF: Regularizing neural
4 | radiance fields for view synthesis from sparse inputs,” in *Proc. IEEE
5 | conf. On computer vision and pattern recognition (CVPR)*, 2022.
6 | Available:
7 |
8 | \[2\] Z. Kuang, K. Olszewski, M. Chai, Z. Huang, P.
10 | Achlioptas, and S. Tulyakov, “NeROIC: Neural object capture and
11 | rendering from online image collections,” *Computing Research Repository
12 | (CoRR)*, vol. abs/2201.02533, 2022.
13 |
14 | \[3\] F. Wimbauer, S. Wu, and C. Rupprecht,
16 | “De-rendering 3D Objects in the Wild,” *arXiv:2201.02279 \[cs\]*, Jan.
17 | 2022, Accessed: Jan. 23, 2022. \[Online\]. Available:
18 |
19 |
20 | \[4\] M. Kim, S. Seo, and B. Han, “InfoNeRF: Ray
22 | Entropy Minimization for Few-Shot Neural Volume Rendering,”
23 | *arXiv:2112.15399 \[cs, eess\]*, Dec. 2021, Accessed: Jan. 23, 2022.
24 | \[Online\]. Available:
25 |
26 | \[5\] Y. Jeong, S. Ahn, C. Choy, and J. Park,
28 | “Self-Calibrating Neural Radiance Fields,” in *ICCV*, 2021.
29 |
30 | \[6\] Y. Xiangli *et al.*, “CityNeRF: Building NeRF
32 | at City Scale,” *arXiv preprint arXiv:2112.05504*, 2021.
33 |
34 | \[7\] M. Tancik *et al.*, “Block-NeRF: Scalable Large
36 | Scene Neural View Synthesis,” *arXiv*, 2022.
37 |
38 | \[8\] K. Rematas, R. Martin-Brualla, and V. Ferrari,
40 | “ShaRF: Shape-conditioned Radiance Fields from a Single View.”
41 | 2021.
42 |
43 | \[9\] B. Kaya, S. Kumar, F. Sarno, V. Ferrari, and L.
45 | V. Gool, “Neural Radiance Fields Approach to Deep Multi-View Photometric
46 | Stereo.” 2021.
47 |
48 | \[10\] Q. Xu *et al.*, “Point-NeRF: Point-based Neural
50 | Radiance Fields,” *arXiv preprint arXiv:2201.08845*, 2022.
51 |
52 | \[11\] C. Xie, K. Park, R. Martin-Brualla, and M.
54 | Brown, “FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object
55 | Category Modelling,” *arXiv:2104.08418 \[cs\]*, Apr. 2021, Accessed:
56 | Sep. 25, 2021. \[Online\]. Available:
57 |
58 |
59 | \[12\] A. Yu, R. Li, M. Tancik, H. Li, R. Ng, and A.
61 | Kanazawa, “PlenOctrees for Real-time Rendering of Neural Radiance
62 | Fields,” *arXiv:2103.14024 \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021.
63 | \[Online\]. Available:
64 |
65 | \[13\] B. Mildenhall, P. P. Srinivasan, M. Tancik, J.
67 | T. Barron, R. Ramamoorthi, and R. Ng, “NeRF: Representing Scenes as
68 | Neural Radiance Fields for View Synthesis,” in *Computer Vision – ECCV
69 | 2020*, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds., in
70 | Lecture Notes in Computer Science. Cham: Springer International
71 | Publishing, 2020, pp. 405–421. doi:
72 | [gj826m](https://doi.org/gj826m).
73 |
74 | \[14\] A. Yu, V. Ye, M. Tancik, and A. Kanazawa,
76 | “pixelNeRF: Neural Radiance Fields From One
77 | or Few Images,” 2021, pp. 4578–4587. Accessed: Sep. 25, 2021.
78 | \[Online\]. Available:
79 |
80 |
81 | \[15\] R. Martin-Brualla, N. Radwan, M. S. M. Sajjadi,
83 | J. T. Barron, A. Dosovitskiy, and D. Duckworth, “NeRF in the Wild:
84 | Neural Radiance Fields for Unconstrained Photo Collections,” 2021, pp.
85 | 7210–7219. Accessed: Sep. 25, 2021. \[Online\]. Available:
86 |
87 |
88 | \[16\] L. Yen-Chen, P. Florence, J. T. Barron, A.
90 | Rodriguez, P. Isola, and T.-Y. Lin, “INeRF: Inverting Neural Radiance
91 | Fields for Pose Estimation,” *arXiv:2012.05877 \[cs\]*, Aug. 2021,
92 | Accessed: Sep. 25, 2021. \[Online\]. Available:
93 |
94 |
95 | \[17\] C. Gao, Y. Shih, W.-S. Lai, C.-K. Liang, and
97 | J.-B. Huang, “Portrait Neural Radiance Fields from a Single Image,”
98 | *arXiv:2012.05903 \[cs\]*, Apr. 2021, Accessed: Sep. 25, 2021.
99 | \[Online\]. Available:
100 |
101 | \[18\] C.-H. Lin, W.-C. Ma, A. Torralba, and S. Lucey,
103 | “BARF: Bundle-Adjusting Neural Radiance Fields,” *arXiv:2104.06405
104 | \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021. \[Online\]. Available:
105 |
106 |
107 | \[19\] K. Zhang, G. Riegler, N. Snavely, and V.
109 | Koltun, “NeRF++: Analyzing and Improving Neural Radiance Fields,”
110 | *arXiv:2010.07492 \[cs\]*, Oct. 2020, Accessed: Sep. 25, 2021.
111 | \[Online\]. Available:
112 |
113 | \[20\] C. Reiser, S. Peng, Y. Liao, and A. Geiger,
115 | “KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny
116 | MLPs,” *arXiv:2103.13744 \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021.
117 | \[Online\]. Available:
118 |
119 | \[21\] D. Rebain, W. Jiang, S. Yazdani, K. Li, K. M.
121 | Yi, and A. Tagliasacchi, “DeRF: Decomposed Radiance Fields,” 2021, pp.
122 | 14153–14161. Accessed: Sep. 25, 2021. \[Online\]. Available:
123 |
124 |
125 | \[22\] J. T. Barron, B. Mildenhall, M. Tancik, P.
127 | Hedman, R. Martin-Brualla, and P. P. Srinivasan, “Mip-NeRF: A Multiscale
128 | Representation for Anti-Aliasing Neural Radiance Fields,”
129 | *arXiv:2103.13415 \[cs\]*, Aug. 2021, Accessed: Sep. 25, 2021.
130 | \[Online\]. Available:
131 |
132 | \[23\] P. Hedman, P. P. Srinivasan, B. Mildenhall, J.
134 | T. Barron, and P. Debevec, “Baking Neural Radiance Fields for Real-Time
135 | View Synthesis,” *arXiv:2103.14645 \[cs\]*, Mar. 2021, Accessed: Sep.
136 | 25, 2021. \[Online\]. Available:
137 |
138 |
139 | \[24\] Z. Wang, S. Wu, W. Xie, M. Chen, and V. A.
141 | Prisacariu, “NeRF–: Neural Radiance Fields Without Known Camera
142 | Parameters,” *arXiv:2102.07064 \[cs\]*, Feb. 2021, Accessed: Sep. 25,
143 | 2021. \[Online\]. Available:
144 |
145 | \[25\] J. Li, Z. Feng, Q. She, H. Ding, C. Wang, and
147 | G. H. Lee, “MINE: Towards Continuous Depth MPI with NeRF for Novel View
148 | Synthesis,” *arXiv:2103.14910 \[cs\]*, Jul. 2021, Accessed: Oct. 11,
149 | 2021. \[Online\]. Available:
150 |
--------------------------------------------------------------------------------
/src/gen/Review_Papers-output.md:
--------------------------------------------------------------------------------
1 | \[1\] A. Tewari *et al.*, “State of the Art on Neural
3 | Rendering,” *Computer Graphics Forum*, Jul. 2020, Accessed: Apr. 04,
4 | 2023. \[Online\]. Available:
5 |
6 |
7 | \[2\] Y. Xie *et al.*, “Neural Fields in Visual
9 | Computing and Beyond,” *Computer Graphics Forum*, May 2022, Accessed:
10 | Apr. 04, 2023. \[Online\]. Available:
11 |
12 |
13 | \[3\] A. Tewari *et al.*, “Advances in Neural
15 | Rendering,” *arXiv:2111.05849 \[cs\]*, Nov. 2021, Accessed: Nov. 27,
16 | 2021. \[Online\]. Available:
17 |
18 | \[4\] M. Toschi, R. D. Matteo, R. Spezialetti, D. D.
20 | Gregorio, L. D. Stefano, and S. Salti, “ReLight my NeRF: A dataset for
21 | novel view synthesis and relighting of real world objects.” 2023.
22 | Available:
23 |
24 | \[5\] M. Tancik *et al.*, “Nerfstudio: A modular
26 | framework for neural radiance field development,” *arXiv preprint
27 | arXiv:2302.04264*, 2023.
28 |
29 | \[6\] M. Wallingford *et al.*, “Neural radiance field
31 | codebooks,” *arXiv preprint arXiv:2301.04101*, 2023.
32 |
--------------------------------------------------------------------------------
/src/gen/Robotics_Applications-output.md:
--------------------------------------------------------------------------------
1 | \[1\] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M.
3 | Toussaint, “Learning Multi-Object Dynamics with Compositional Neural
4 | Radiance Fields,” *arXiv preprint arXiv:2202.11855*, 2022.
5 |
6 | \[2\] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y.
8 | Lin, A. Rodriguez, and P. Isola, “NeRF-Supervision: Learning Dense
9 | Object Descriptors from Neural Radiance Fields,” in *IEEE Conference on
10 | Robotics and Automation (ICRA)*, 2022.
11 |
12 | \[3\] Z. Zhu *et al.*, “NICE-SLAM: Neural Implicit
14 | Scalable Encoding for SLAM,” *arXiv*, 2021.
15 |
16 | \[4\] M. Adamkiewicz *et al.*, “Vision-Only Robot
18 | Navigation in a Neural Radiance World,” *arXiv:2110.00168 \[cs\]*, Sep.
19 | 2021, Accessed: Oct. 11, 2021. \[Online\]. Available:
20 |
21 |
22 | \[5\] E. Sucar, S. Liu, J. Ortiz, and A. J. Davison,
24 | “iMAP: Implicit Mapping and Positioning in
25 | Real-Time,” in *Proceedings of the IEEE/CVF International Conference on
26 | Computer Vision (ICCV)*, Oct. 2021, pp. 6229–6238.
27 |
--------------------------------------------------------------------------------
/src/gen/Speed_Improvements-output.md:
--------------------------------------------------------------------------------
1 | \[1\] T. Müller, A. Evans, C. Schied, and A. Keller,
3 | “Instant Neural Graphics Primitives with a Multiresolution Hash
4 | Encoding,” *arXiv:2201.05989*, Jan. 2022.
5 |
6 | \[2\] K. Deng, A. Liu, J.-Y. Zhu, and D. Ramanan,
8 | “Depth-supervised NeRF: Fewer Views and Faster Training for Free,”
9 | *arXiv preprint arXiv:2107.02791*, 2021.
10 |
11 | \[3\] L. Li, Z. Shen, Z. Wang, L. Shen, and L. Bo,
13 | “Compressing volumetric radiance fields to 1 MB,” *arXiv preprint
14 | arXiv:2211.16386*, 2022.
15 |
16 | \[4\] J. E. Johnson, R. Lguensat, R. Fablet, E.
18 | Cosme, and J. L. Sommer, “Neural fields for fast and scalable
19 | interpolation of geophysical ocean variables,” *arXiv preprint
20 | arXiv:2211.10444*, 2022.
21 |
22 | \[5\] K. Deng, A. Liu, J.-Y. Zhu, and D. Ramanan,
24 | “Depth-supervised NeRF: Fewer views and faster training for free,” in
25 | *Proceedings of the IEEE/CVF conference on computer vision and pattern
26 | recognition (CVPR)*, 2022.
27 |
28 | \[6\] F. Wang, S. Tan, X. Li, Z. Tian, and H. Liu,
30 | “Mixed neural voxels for fast multi-view video synthesis,” *arXiv
31 | preprint arXiv:2212.00190*, 2022.
32 |
33 | \[7\] P. Wang *et al.*, “F2-NeRF: Fast neural
35 | radiance field training with free camera trajectories,” *CVPR*,
36 | 2023.
37 |
38 | \[8\] S. Lee, G. Park, H. Son, J. Ryu, and H. J.
40 | Chae, “FastSurf: Fast neural RGB-D surface reconstruction using
41 | per-frame intrinsic refinement and TSDF fusion prior learning,” *arXiv
42 | preprint arXiv:2303.04508*, 2023.
43 |
44 | \[9\] Y. Wang, Q. Han, M. Habermann, K. Daniilidis,
46 | C. Theobalt, and L. Liu, “NeuS2: Fast learning of neural implicit
47 | surfaces for multi-view reconstruction.” arXiv, 2022. doi:
48 | [10.48550/ARXIV.2212.05231](https://doi.org/10.48550/ARXIV.2212.05231).
49 |
--------------------------------------------------------------------------------
/src/generate.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | echo "Beginning generate..."
3 | sections="./src/sections.csv"
4 |
5 | # Reset Contents File
6 | echo "## Contents" > "./src/contents.md"
7 |
8 | while IFS=, read -r filename formatted_name
9 | do
10 | echo "Working on: $filename"
11 | pandoc -t markdown_strict --citeproc --csl "./ieee.csl" "./src/$filename.md" -o "./src/gen/$filename-output.md" --bibliography "./src/$filename.bib"
12 |
13 | echo "- [$formatted_name](#$filename)" >> "./src/contents.md"
14 | done < "$sections"
15 |
16 | echo "Markdown generated w/out error!"
17 | echo "Moving to build..."
--------------------------------------------------------------------------------
/src/sections.csv:
--------------------------------------------------------------------------------
1 | Review_Papers,Review Papers
2 | NeRF+Architecture_Improvements,NeRF + Architecture Improvements
3 | LightFields+Plenoxels,Light Fields + Plenoxels
4 | DynamicScenes+Rendering,Dynamic Scenes + Rendering
5 | Speed_Improvements,Speed Improvements
6 | Robotics_Applications,Robotics Applications
7 |
--------------------------------------------------------------------------------
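Each row of `sections.csv` supplies the `filename,formatted_name` pair for one iteration of the loop in `generate.sh`. As a minimal sketch, assuming the scripts are invoked from the repository root (as the workflow does), the first row expands to:

```bash
# Hypothetical expansion of one loop iteration in generate.sh for the
# Review_Papers row (filename=Review_Papers, formatted_name="Review Papers"):
pandoc -t markdown_strict --citeproc --csl "./ieee.csl" \
  "./src/Review_Papers.md" -o "./src/gen/Review_Papers-output.md" \
  --bibliography "./src/Review_Papers.bib"

echo "- [Review Papers](#Review_Papers)" >> "./src/contents.md"
```

Here `-t markdown_strict` keeps the output as plain Markdown, `--citeproc` resolves the citations against the matching `.bib` file, and `--csl ./ieee.csl` applies the IEEE numeric style fetched by the workflow; the resulting per-section files under `src/gen/` are the reference lists shown above.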