├── LICENSE
├── .gitignore
└── README.md
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 가짜연구소 (Pseudo Lab)
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # nerds-nerf
2 |
3 | This repository is Nerd's NeRF, one of the OpenLabs of [Pseudo Lab (가짜연구소)](https://pseudo-lab.com).
4 | The goal of the Nerd's NeRF team is to build a deep understanding of NeRF and keep up with the latest related papers! We run the group as an open academy that anyone can join, aiming to realize the value of sharing in the Korean 3D computer vision and NeRF community and to grow together.
5 |
6 | [Introduction page](https://pseudo-lab.com/Nerd-s-NeRF-2efcb794acbb4a04880d09b162d123aa)
7 | How to join: every Thursday at 10 PM (KST), enter Room GH on the [Pseudo Lab Discord](https://discord.gg/sDgnqYWA3G)!
8 |
9 | ## Contributors
10 |
11 | - [김찬란 _Chanran Kim_](https://www.youtube.com/channel/UCWnc2XGGO9EqNcuXP-FVsuw) | [Github](https://github.com/seriousran) | [LinkedIn](https://www.linkedin.com/in/chanran-kim/) |
12 | - [박정현 _Junghyun Park_](https://www.youtube.com/channel/UCjNHFyqcXtSLS4vXBa3Nh6A) | [Github](https://github.com/parkjh688) | [LinkedIn](https://www.linkedin.com/in/junghyun-eden/) |
13 | - [신동원 _Dong-won Shin_](https://www.youtube.com/c/SLAMKR) | [Github](https://github.com/dong-won-shin) | [LinkedIn](https://www.linkedin.com/in/dong-won-shin-7a11b2240/) |
14 | - [이인희 _Inhee Lee_](https://www.youtube.com/@sulwon3902/featured) | [Github](https://github.com/Sulwon-0516) | [LinkedIn](https://www.linkedin.com/in/sulwon/) |
15 | - [김선호 _Sunho Kim_](https://www.youtube.com/channel/UCe8Q012lKq887dP76COM6eg) | [Github](https://github.com/Philipshrimp) | [LinkedIn](https://www.linkedin.com/in/ssunhokim/) |
16 | - 구승연 | Github | LinkedIn |
17 | - [김도연 _Doyeon Kim_](https://www.youtube.com/channel/UCwds8vDfpS0D96uidDJSZ1Q) | [Blog](https://xoft.tistory.com/) | [LinkedIn](https://www.linkedin.com/in/xoft/) |
18 | - 전승진 | Github | LinkedIn |
19 | - 김건호 | Github | LinkedIn |
20 | - 김석민 | Github | LinkedIn |
21 | - 강윤석 | Github | LinkedIn |
22 | - 강창진 | Github | LinkedIn |
23 | - 윤일승 | Github | LinkedIn |
24 | - 이인서 | Github | LinkedIn |
25 | - 김세연 | Github | LinkedIn |
26 | - 박찬민 | Github | LinkedIn |
27 | - 김성엽 | Github | LinkedIn |
28 |
29 | ## Schedule
30 |
31 | | idx | Date | Presenter | Review (Youtube) | Paper / Code |
32 | |----:|:-----------|:----------|:-----------------|:------------ |
33 | | 1 | 2022.09.15 | 김찬란 | link | [NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis](https://arxiv.org/abs/2003.08934) (ECCV 2020) |
34 | | 2 | 2022.09.22 | 신동원 | [link](https://youtu.be/-plCk0IhBGQ) | [FastNeRF: High-Fidelity Neural Rendering at 200FPS](https://ieeexplore.ieee.org/document/9710021) (ICCV 2021) |
35 | | 3 | 2022.09.22 | 박정현 | [link](https://youtu.be/JMl9zkSudyU) | [Instant NGP: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding](https://nvlabs.github.io/instant-ngp/) (SIGGRAPH 2022) |
36 | | 4 | 2022.09.29 | 김찬란 | link | [NeRF official code](https://github.com/bmild/nerf) |
37 | | 5 | 2022.10.06 | 신동원 | [link](https://youtu.be/kN7kIwRBKis) | [Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields](https://arxiv.org/abs/2209.09050) |
38 | | 6 | 2022.10.06 | 박정현 | [link](https://youtu.be/yXjVZ0tBNO8) | [NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections](https://arxiv.org/abs/2008.02268) (CVPR 2021) |
39 | | 7 | 2022.10.13 | 김찬란 | link | NeRF at ECCV 2022 preview |
40 | | 8 | 2022.10.20 | 박정현 | [link](https://youtu.be/RWOp8zGcbLI) | [pixelNeRF: Neural Radiance Fields from One or Few Images](https://arxiv.org/abs/2012.02190) (CVPR 2021) |
41 | | 9 | 2022.10.27 | 신동원 | [link](https://youtu.be/Eu7vVwnvkIU) | [Nerfies: Deformable Neural Radiance Fields](https://nerfies.github.io/) (ICCV 2021) |
42 | | 10 | 2022.11.03 | 박정현 | [link](https://www.youtube.com/watch?v=nmM8nknt_bE&t=1202s) | [Block-NeRF: Scalable Large Scene Neural View Synthesis](https://arxiv.org/abs/2202.05263) (CVPR 2022) |
43 | | 11 | 2022.11.10 | 김찬란 | link | [Plenoxels: Radiance Fields without Neural Networks](https://openaccess.thecvf.com/content/CVPR2022/papers/Fridovich-Keil_Plenoxels_Radiance_Fields_Without_Neural_Networks_CVPR_2022_paper.pdf) (CVPR 2022) |
44 | | 12 | 2022.11.17 | 신동원 | [link](https://youtu.be/C9JHhkDJSpM) | [Instant NGP](https://github.com/NVlabs/instant-ngp.git) hands on tutorial |
45 | | 13 | 2022.11.24 | 박정현 | [link](https://www.youtube.com/watch?v=xq9SNMgQGOA) | [NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images](https://openaccess.thecvf.com/content/CVPR2022/papers/Mildenhall_NeRF_in_the_Dark_High_Dynamic_Range_View_Synthesis_From_CVPR_2022_paper.pdf) (CVPR 2022) |
46 | | 14 | 2022.12.01 | 이인희 | [link](https://youtu.be/_-fuel5WXSM) | [Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields](https://openaccess.thecvf.com/content/ICCV2021/papers/Barron_Mip-NeRF_A_Multiscale_Representation_for_Anti-Aliasing_Neural_Radiance_Fields_ICCV_2021_paper.pdf) (ICCV 2021) |
47 | | 15 | 2022.12.15 | 김찬란 | link | [InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering](https://openaccess.thecvf.com/content/CVPR2022/papers/Kim_InfoNeRF_Ray_Entropy_Minimization_for_Few-Shot_Neural_Volume_Rendering_CVPR_2022_paper.pdf) (CVPR 2022) |
48 | | 16 | 2022.12.22 | 신동원 | [link](https://youtu.be/y-4vLEo7hns) | [D-NeRF: Neural Radiance Fields for Dynamic Scenes](https://openaccess.thecvf.com/content/CVPR2021/papers/Pumarola_D-NeRF_Neural_Radiance_Fields_for_Dynamic_Scenes_CVPR_2021_paper.pdf) (CVPR 2021) |
49 | | 17 | 2023.01.05 | Discussion | link | [RODIN](https://www.microsoft.com/en-us/research/publication/rodin-a-generative-model-for-sculpting-3d-digital-avatars-using-diffusion/), [NeRF-SLAM](https://github.com/ToniRV/NeRF-SLAM.git) |
50 | | 18 | 2023.01.12 | 박정현 | [link](https://www.youtube.com/watch?v=XMu2ujSM8Ik&t=1724s) | [IBRNet: Learning Multi-View Image-Based Rendering](https://arxiv.org/pdf/2102.13090.pdf) (CVPR 2021) |
51 | | 19 | 2023.01.18 | 이인희 | [link](https://youtu.be/R_eVgkBgFBM) | [Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains](https://proceedings.neurips.cc/paper/2020/file/55053683268957697aa39fba6f231c68-Paper.pdf) (NeurIPS 2020) |
52 | | 20 | 2023.01.26 | Discussion | link | [COLMAP](https://github.com/colmap/colmap.git) |
53 | | 21 | 2023.02.02 | 김선호 | [link](https://www.youtube.com/watch?v=iqEfKA7seNk) | [BARF: Bundle-Adjusting Neural Radiance Fields](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_BARF_Bundle-Adjusting_Neural_Radiance_Fields_ICCV_2021_paper.pdf) (ICCV 2021) |
54 | | 22 | 2023.02.09 | 신동원 | [link](https://youtu.be/RGHcAnBFJYg) | [NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields](https://arxiv.org/pdf/2210.13641.pdf) |
55 | | 23 | 2023.02.16 | 이인희 | link | [NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction](https://arxiv.org/abs/2106.10689) (NeurIPS 2021) |
56 | | 24 | 2023.02.23 | Discussion | link | Differentiable Rendering |
57 | | 25 | 2023.03.02 | 박정현 | [link](https://youtu.be/dysfF6As_Io) | [Depth-supervised NeRF: Fewer Views and Faster Training for Free](https://arxiv.org/pdf/2107.02791.pdf) (CVPR 2022) |
58 | | 26 | 2023.03.09 | 김선호 | [link](https://youtu.be/pcnTE3gqIoY) | [Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis](https://arxiv.org/pdf/2104.00677.pdf) (ICCV 2021) |
59 | | 27 | 2023.03.16 | 김찬란 | link | [Compressing Volumetric Radiance Fields to 1 MB](https://arxiv.org/pdf/2211.16386.pdf) (CVPR 2023) |
60 | | 28 | 2023.03.23 | 박정현 | [link](https://youtu.be/7cwLQ5_O9aw)| [LERF: Language Embedded Radiance Fields](https://arxiv.org/pdf/2303.09553.pdf) |
61 | | 29 | 2023.03.30 | Discussion | link | Discussion on recent trends and CVPR 2023 |
62 | | 30 | 2023.04.06 | 이인희 | link | [Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields](https://arxiv.org/pdf/2112.03907) (CVPR 2022) |
63 | | 31 | 2023.04.13 | 김선호 | [link](https://youtu.be/nNHqj23MBKQ)| [FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization](https://arxiv.org/pdf/2303.07418) (CVPR 2023) |
64 | | 32 | 2023.04.20 | 김찬란 | link | [Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields](https://arxiv.org/pdf/2304.06706) |
65 | | 33 | 2023.04.27 | Discussion | link | Open-topic discussion |
66 | | 34 | 2023.05.11 | 박정현 | [link](https://youtu.be/2nSeVSjBuGU) | [NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior](https://arxiv.org/pdf/2212.07388)|
67 | | 35 | 2023.05.18 | 김선호 | [link](https://youtu.be/mrRQ-6iC9xA) | [F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories](https://arxiv.org/pdf/2303.15951.pdf) (CVPR 2023)|
68 | | 36 | 2023.06.01 | Discussion | link | Sharing individual NeRF hands-on results and preferred experiment environments |
69 | | 37 | 2023.06.08 | 구승연 | [link](https://www.youtube.com/watch?v=t8YVORJJgsE) | [NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling](https://cwchenwang.github.io/NeRF-SR/)|
70 | | 38 | 2023.06.15 | 김도연 | [link](https://youtu.be/PPr9kCtsVFs) | DreamFusion: Text-to-3D using 2D Diffusion|
71 | | 39 | 2023.06.22 | 김찬란 | link | NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields|
72 | | 40 | 2023.06.29 | Discussion | link | Open-topic discussion |
73 | | 41 | 2023.07.06 | 박정현 | link | Removing Objects From Neural Radiance Fields|
74 | | 42 | 2023.07.13 | 김선호 | [link](https://youtu.be/KU4HD660kf0) | [Accelerated Coordinate Encoding: Learning to Relocalize in Minutes using RGB and Poses](https://arxiv.org/abs/2305.14059) (CVPR 2023)|
75 | | 43 | 2023.07.20 | 구승연 | link | ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field|
76 | | 44 | 2023.07.27 | 김도연 | [link](https://youtu.be/HKtBkIxMY7o) | TensoRF: Tensorial Radiance Fields |
77 | | 45 | 2023.08.10 | - | - | Open-topic discussion |
78 | | 46 | 2023.08.24 | 김찬란 | link | Reference-guided Controllable Inpainting of Neural Radiance Fields |
79 | | 47 | 2023.08.31 | - | - | Open-topic discussion (offline meetup) |
80 | | 48 | 2023.09.14 | 김선호 | [link](https://youtu.be/hrCHu5R_v8E?si=bMCH39YN-Ct78bnY) | BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects|
81 | | 49 | 2023.09.21 | 박정현 | [link](https://youtu.be/rvNCcXe3trs?si=shO-rN7oTBhxagYO) | Neuralangelo: High-Fidelity Neural Surface Reconstruction|
82 | | 50 | 2023.10.05 | 김도연 | [link](https://youtu.be/wvlgjhrrQZU) | 3D Gaussian Splatting for Real-Time Radiance Field Rendering |
83 | | 51 | 2023.10.12 | 구승연 | link | Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields |
84 | | 52 | 2023.10.19 | 김찬란 | link | 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering |
85 | | 53 | 2023.10.26 | 박정현 | [link](https://youtu.be/k3IVcdL2qX0?si=xUN1tPeBKaCVJuc5) | Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors |
86 | | 54 | 2023.11.02 | 김선호 | [link](https://youtu.be/KUQ0fvFa88I?si=NAn6Z93gqr8wlM8N) | StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views |
87 | | 55 | 2023.11.09 | 김도연 | [link](https://youtu.be/wYf-hAM3YzI) | DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation |
88 | | 56 | 2023.11.16 | 구승연 | link | HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion |
89 | | 57 | 2023.11.23 | 김찬란 | link | L2G-NeRF: Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields |
90 | | 58 | 2023.12.07 | 김선호 | [link](https://youtu.be/B7q-MRRDXm8?si=M3S93-6zNs39lk8T) | CLNeRF: Continual Learning Meets NeRF |
91 | | 59 | 2023.12.14 | 구승연 | link | Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields |
92 | | 60 | 2023.12.21 | 김도연 | [link](https://youtu.be/A0qB37P4hQg) | One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion |
93 | | 61 | 2024.02.22 | - | - | Open-topic discussion |
94 | | 62 | 2024.02.29 | 김찬란 | link | Gaussian Splatting with NeRF-based Color and Opacity |
95 | | 63 | 2024.03.07 | 김선호 | [link](https://youtu.be/xn5ssDBdZH8?si=KJka1I4Ql2R9XaOr) | COLMAP-Free 3D Gaussian Splatting |
96 | | 64 | 2024.03.14 | 김도연 | [link](https://youtu.be/fG2SNvWzz54?si=IOSrCF1lHe1oDnjE) | GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis |
97 | | - | 2024.03.14 | 전승진 | link | LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes |
98 | | 65 | 2024.03.21 | 김건호 | link | SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering |
99 | | 66 | 2024.03.28 | 김석민 | [link](https://youtu.be/aFb8gi7ywkM?si=4wQsH6dVvsnud_I8) | Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis |
100 | | - | 2024.03.28 | 강윤석 | [link](https://youtu.be/thANEeTbnfE?si=FRC2RTYEeNBUuIrL) | VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction |
101 | | 67 | 2024.04.04 | 강창진 | link | GauStudio: A Modular Framework for 3D Gaussian Splatting and Beyond |
102 | | - | 2024.04.04 | 윤일승 | link | GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting |
103 | | 68 | 2024.04.11 | - | - | Open-topic discussion (offline meetup) |
104 | | 69 | 2024.04.18 | 김찬란 | link | RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion |
105 | | 70 | 2024.04.25 | 박정현 | link | TripoSR: Fast 3D Object Reconstruction from a Single Image |
106 | | - | 2024.04.25 | 전승진 | link | DUSt3R: Geometric 3D Vision Made Easy |
107 | | 71 | 2024.05.02 | 김도연 | [link](https://youtu.be/9S2z3h2YkfM?si=PHiYmfN6Z2z8Qfpn) | InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds |
108 | | - | 2024.05.02 | 김건호 | [link](https://youtu.be/jXlbbAftQyU?si=w4lM-JPDTTeQ31UA) | Surface Reconstruction from Gaussian Splatting via Novel Stereo Views |
109 | | 72 | 2024.05.09 | 김선호 | link | FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects |
110 | | - | 2024.05.09 | 김석민 | link | High-quality Surface Reconstruction using Gaussian Surfels |
111 | | 73 | 2024.05.16 | 강창진 | link | Compact 3D Gaussian Representation for Radiance Field |
112 | | - | 2024.05.16 | 윤일승 | link | Gaussian Splatting SLAM |
113 | | 74 | 2024.05.23 | 강윤석 | [link](https://youtu.be/oJEPQoE-_Rg?si=DBAuDEsjrkQcMbhv) | EAGLES: Efficient Accelerated 3D Gaussians with Lightweight EncodingS |
114 | | - | 2024.05.23 | - | - | Open-topic discussion: issues in productizing 3DGS |
115 | | 75 | 2024.05.30 | 김찬란 | link | CAT3D: Create Anything in 3D with Multi-View Diffusion Models |
116 | | 76 | 2024.06.06 | 김도연 | link | GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting |
117 | | 77 | 2024.06.13 | 김건호 | link | Tetrahedron Splatting for 3D Generation, NeurIPS 2024 submission |
118 | | - | 2024.06.13 | 김선호 | link | 2DGS: 2D Gaussian Splatting for Geometrically Accurate Radiance Fields |
119 | | 78 | 2024.06.20 | 강윤석 | link | CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians |
120 | | 79 | 2024.06.27 | 윤일승 | link | DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes |
121 | | - | 2024.06.27 | 강창진 | link | LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching |
122 | | 80 | 2024.07.04 | 전승진 | link | 3D-HGS: 3D Half-Gaussian Splatting |
123 | | - | 2024.07.04 | 김석민 | link | Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels |
124 | | 81 | 2024.07.18 | 박정현 | link | MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images |
125 | | 82 | 2024.08.01 | 김선호 | link | SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration |
126 | | 83 | 2024.08.08 | 김찬란 | link | Global Structure-from-Motion Revisited |
127 | | - | 2024.08.08 | 김도연 | link | Grendel-GS: On Scaling Up 3D Gaussian Splatting Training |
128 | | 84 | 2024.08.15 | 김건호 | link | Self-augmented Gaussian Splatting with Structure-aware Masks for Sparse-view 3D Reconstruction |
129 | | - | 2024.08.15 | 강윤석 | link | ThermoNeRF: Multimodal Neural Radiance Fields for Thermal Novel View Synthesis |
130 | | 85 | 2024.08.22 | 김석민 | link | MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes, arXiv 2024 |
131 | | 86 | 2024.08.29 | 윤일승 | link | 3DGS-ReLoc: 3D Gaussian Splatting for Map Representation and Visual ReLocalization |
132 | | 87 | 2024.09.12 | 전승진 | link | Spann3R: 3D Reconstruction with Spatial Memory |
133 | | 88 | 2024.09.26 | 김선호 | link | 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes |
134 | | - | 2024.09.26 | 김도연 | link | Instant Facial Gaussians Translator for Relightable and Interactable Facial Rendering |
135 | | 89 | 2024.10.03 | 박정현 | link | MetaFood CVPR 2024 Challenge |
136 | | - | 2024.10.03 | 김건호 | link | PAPR: Proximity Attention Point Rendering (NeurIPS 2023 Spotlight) |
137 | | 90 | 2024.10.10 | 김석민 | link | SwinGS: Sliding Window Gaussian Splatting for Volumetric Video Streaming with Arbitrary Length |
138 | | 91 | 2024.10.17 | 윤일승 | link | GaussReg: Fast 3D Registration with Gaussian Splatting |
139 | | 92 | 2024.10.24 | 강윤석 | link | 3D-LLM: Injecting the 3D World into Large Language Models (NeurIPS 2023 Spotlight) |
140 | | 93 | 2024.10.31 | 김찬란 | link | ActiveSplat: High-Fidelity Scene Reconstruction through Active Gaussian Splatting |
141 | | 94 | 2024.11.14 | 김도연 | link | Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats |
142 | | - | 2024.11.14 | 전승진 | link | DepthSplat: Connecting Gaussian Splatting and Depth |
143 | | 95 | 2024.11.21 | 김석민 | link | GaussianBeV: 3D Gaussian Representation meets Perception Models for BeV Segmentation |
144 | | 96 | 2024.11.28 | 강윤석 | link | ThermalGaussian: Thermal 3D Gaussian Splatting |
145 | | 97 | 2024.12.05 | 김건호 | link | LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation |
146 | | 98 | 2024.12.26 | 윤일승 | link | Gassidy: Gaussian Splatting SLAM in Dynamic Environments |
147 | | 99 | 2025.01.02 | 김찬란 | link | SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian Splatting |
148 | | 100 | 2025.01.09 | 김도연 | link | DiffusionGS: Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation |
149 | | 101 | 2025.01.16 | 김석민 | link | gsplat: An Open-Source Library for Gaussian Splatting |
150 | | 102 | 2025.01.23 | 김건호 | link | Physically Compatible 3D Object Modeling from a Single Image, NeurIPS 2024 Spotlight |
151 | | 103 | 2025.02.27 | 강윤석 | link | No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images (ICLR 2025, Oral) |
152 | | 104 | 2025.03.03 | - | link | O.T. |
153 | | 105 | 2025.03.13 | 김도연 | link | Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering |
154 | | 106 | 2025.03.20 | 윤일승 | link | Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitives |
155 |
--------------------------------------------------------------------------------