├── CITATION.cff ├── LICENSE └── README.md /CITATION.cff: -------------------------------------------------------------------------------- 1 | cff-version: 1.2.0 2 | message: "If this overview is useful to you, please cite as below." 3 | authors: 4 | - family-names: Sitzmann 5 | given-names: Vincent 6 | orcid: https://orcid.org/0000-0002-0107-5704 7 | title: "Awesome Implicit Representations - A curated list of resources on implicit neural representations" 8 | version: 1.0.0 9 | url: https://github.com/vsitzmann/awesome-implicit-representations 10 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Vincent Sitzmann 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE.
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Awesome Implicit Neural Representations [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) 2 | A curated list of resources on implicit neural representations, inspired by [awesome-computer-vision](https://github.com/jbhuang0604/awesome-computer-vision). 3 | 4 | ## Hiring graduate students! 5 | I am looking for graduate students to join my new lab at MIT CSAIL in July 2022. 6 | If you are excited about neural implicit representations, neural rendering, neural scene representations, and their applications 7 | in vision, graphics, and robotics, apply [here](https://gradapply.mit.edu/eecs/apply/login/)! In the webform, you can choose me as "Potential Adviser", 8 | and in your SoP, please describe how our research interests are well-aligned. The deadline is Dec 15th! 9 | 10 | ## Disclaimer 11 | This list does __not aim to be exhaustive__, as implicit neural representations are a rapidly growing research field with 12 | hundreds of papers to date. Instead, it lists the papers that I give my students to read, which introduce key concepts & foundations of 13 | implicit neural representations across applications. I will therefore generally __not merge pull requests__. 14 | This is not an evaluation of the quality or impact of a paper, but rather the result of my and my students' research interests. 15 | 16 | However, if you see potential for another list that is broader or narrower in scope, get in touch, and I'm happy 17 | to link to it right here and contribute to it as well as I can! 18 | 19 | Disclosure: I am an author on the following papers. 
20 | * [Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations](https://vsitzmann.github.io/srns/) 21 | * [MetaSDF: Meta-Learning Signed Distance Functions](https://vsitzmann.github.io/metasdf/) 22 | * [Implicit Neural Representations with Periodic Activation Functions](https://vsitzmann.github.io/siren/) 23 | * [Inferring Semantic Information with 3D Neural Scene Representations](https://www.computationalimaging.org/publications/semantic-srn/) 24 | * [Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering](https://vsitzmann.github.io/lfns/) 25 | 26 | 27 | ## Table of contents 28 | - [What are implicit neural representations?](#what-are-implicit-neural-representations) 29 | - [Why are they interesting?](#why-are-they-interesting) 30 | - [Colabs](#colabs) 31 | - [Papers](#papers) 32 | - [Talks](#talks) 33 | 34 | 35 | ## What are implicit neural representations? 36 | Implicit Neural Representations (sometimes also referred to as coordinate-based representations) are a novel way to parameterize 37 | signals of all kinds. Conventional signal representations are usually discrete - for instance, images are discrete grids 38 | of pixels, audio signals are discrete samples of amplitudes, and 3D shapes are usually parameterized as grids of voxels, 39 | point clouds, or meshes. In contrast, Implicit Neural Representations parameterize a signal as a *continuous function* that 40 | maps the domain of the signal (i.e., a coordinate, such as a pixel coordinate for an image) to whatever is at that coordinate 41 | (for an image, an RGB color). Of course, these functions are usually not analytically tractable - it is impossible to 42 | "write down" the function that parameterizes a natural image as a mathematical formula. Implicit Neural Representations 43 | thus approximate that function via a neural network. 44 | 45 | ## Why are they interesting?
46 | Implicit Neural Representations have several benefits: First, they are not coupled to spatial resolution anymore, the way, for instance, 47 | an image is coupled to the number of pixels. This is because they are continuous functions! 48 | Thus, the memory required to parameterize the signal is *independent* of spatial 49 | resolution, and only scales with the complexity of the underlying signal. Another corollary of this is that implicit 50 | representations have "infinite resolution" - they can be sampled at arbitrary spatial resolutions. 51 | 52 | This is immediately useful for a number of applications, such as super-resolution, or in parameterizing signals in 3D and higher dimensions, 53 | where memory requirements grow intractably fast with spatial resolution. 54 | Further, generalizing across neural implicit representations amounts to learning a prior over a space of functions, implemented 55 | via learning a prior over the weights of neural networks - this is commonly referred to as meta-learning and is an extremely exciting 56 | intersection of two very active research areas! 57 | Another exciting overlap is between neural implicit representations and the study of symmetries in neural network architectures - 58 | for instance, creating a neural network architecture that is 3D rotation-equivariant immediately yields a viable path to rotation-equivariant generative models via neural implicit representations. 59 | 60 | Another key promise of implicit neural representations lies in algorithms that directly operate in the space 61 | of these representations. In other words: What's the "convolutional neural network" equivalent of a neural network 62 | operating on images represented by implicit representations? 63 | 64 | # Colabs 65 | This is a list of Google Colabs that immediately allow you to jump in and toy around with implicit neural representations!
66 | * [Implicit Neural Representations with Periodic Activation Functions](https://colab.research.google.com/github/vsitzmann/siren/blob/master/explore_siren.ipynb) 67 | shows how to fit images, audio signals, and even solve simple Partial Differential Equations with the SIREN architecture. 68 | * [Neural Radiance Fields (NeRF)](https://colab.research.google.com/github/bmild/nerf/blob/master/tiny_nerf.ipynb) 69 | shows how to fit a neural radiance field, allowing novel view synthesis of a single 3D scene. 70 | * [MetaSDF & MetaSiren](https://colab.research.google.com/github/vsitzmann/metasdf/blob/master/MetaSDF.ipynb) shows how 71 | you can leverage gradient-based meta-learning to generalize across neural implicit representations. 72 | * [Neural Descriptor Fields](https://colab.research.google.com/drive/16bFIFq_E8mnAVwZ_V2qQiKp4x4D0n1sG?usp=sharing) shows how 73 | you can use globally conditioned neural implicit representations as self-supervised correspondence learners, enabling robotics 74 | imitation tasks. 75 | 76 | # Papers 77 | ## Implicit Neural Representations of Geometry 78 | The following three papers first (and concurrently) demonstrated that implicit neural representations outperform grid-, point-, and mesh-based 79 | representations in parameterizing geometry and seamlessly allow for learning priors over shapes. 80 | * [DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation](https://arxiv.org/abs/1901.05103) (Park et al. 2019) 81 | * [Occupancy Networks: Learning 3D Reconstruction in Function Space](https://arxiv.org/abs/1812.03828) (Mescheder et al. 2019) 82 | * [IM-Net: Learning Implicit Fields for Generative Shape Modeling](https://arxiv.org/abs/1812.02822) (Chen et al. 2018) 83 | 84 | Since then, implicit neural representations have achieved state-of-the-art results in 3D computer vision: 85 | * [SAL: Sign Agnostic Learning of Shapes from Raw Data](https://github.com/matanatz/SAL) (Atzmon et al.
2019) shows how we may learn SDFs from raw data (i.e., without ground-truth signed distance values). 86 | * [Implicit Geometric Regularization for Learning Shapes](https://github.com/amosgropp/IGR) (Gropp et al. 2020) similarly learns SDFs from raw data, regularizing the learned function with an Eikonal term. 87 | * [Local Implicit Grid Representations for 3D Scenes](https://geometry.stanford.edu/papers/jsmhnf-lligrf3s-20/jsmhnf-lligrf3s-20.pdf), [Convolutional Occupancy Networks](https://arxiv.org/abs/2003.04618), [Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction](https://arxiv.org/abs/2003.10983) 88 | concurrently proposed hybrid voxelgrid/implicit representations to fit large-scale 3D scenes. 89 | * [Implicit Neural Representations with Periodic Activation Functions](https://vsitzmann.github.io/siren/) (Sitzmann et al. 2020) 90 | demonstrates how we may parameterize room-scale 3D scenes via a single implicit neural representation by leveraging sinusoidal activation functions. 91 | * [Neural Unsigned Distance Fields for Implicit Function Learning](https://arxiv.org/pdf/2010.13938.pdf) (Chibane et al. 2020) 92 | proposes to learn unsigned distance fields from raw point clouds, doing away with the requirement of watertight surfaces. 93 | 94 | ## Implicit Representations of Geometry and Appearance 95 | ### From 2D supervision only (“inverse graphics”) 96 | 3D scenes can be represented as 3D-structured neural scene representations, i.e., neural implicit representations that map a 97 | 3D coordinate to a representation of whatever is at that 3D coordinate. This then requires the formulation of a neural renderer, 98 | in particular, a ray-marcher, which performs rendering by repeatedly sampling the neural implicit representation along a ray.
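The ray-marcher described above can be sketched in a few lines. Below is a minimal sphere-tracing loop, one common ray-marching variant that applies when the representation is a signed distance function; an analytic sphere SDF stands in for a trained network, and all names are illustrative:

```python
import numpy as np

def sdf(p):
    # Placeholder scene: a unit sphere at the origin. In an implicit neural
    # representation, this would be a network mapping a 3D coordinate to a
    # signed distance.
    return np.linalg.norm(p) - 1.0

def ray_march(origin, direction, n_steps=64, tol=1e-5, max_t=10.0):
    # Sphere tracing: an SDF value lower-bounds the distance to the surface,
    # so the ray can safely advance by that amount each iteration.
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(n_steps):
        d = sdf(origin + t * direction)
        if d < tol:      # converged onto the surface
            return t
        t += d
        if t > max_t:    # ray left the scene without hitting anything
            return None
    return None

# A ray starting 3 units in front of the unit sphere hits it at t = 2.
t_hit = ray_march(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

Note that SRNs instead learn the step length with an LSTM, and NeRF replaces the hit test with volumetric integration over samples along the ray; the underlying pattern of repeatedly querying the representation along a ray is the same.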
99 | * [Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations](https://vsitzmann.github.io/srns/) proposed to learn an implicit representation 100 | of 3D shape and appearance given only 2D images, via a differentiable ray-marcher, and generalizes across 3D scenes for 101 | reconstruction from a single image via hyper-networks. This was demonstrated for single-object scenes, but also for simple room-scale scenes (see talk). 102 | * [Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision](https://github.com/autonomousvision/differentiable_volumetric_rendering) (Niemeyer et al. 2020) 103 | replaces the LSTM-based ray-marcher in SRNs with a fully-connected neural network & analytical gradients, enabling easy extraction of the final 3D geometry. 104 | * [Neural Radiance Fields (NeRF)](https://www.matthewtancik.com/nerf) (Mildenhall et al. 2020) proposes positional encodings, volumetric rendering & ray-direction conditioning for high-quality reconstruction of 105 | single scenes, and has spawned a large amount of follow-up work on volumetric rendering of 3D implicit representations. 106 | For a curated list of NeRF follow-up work specifically, see [awesome-NeRF](https://github.com/yenchenlin/awesome-NeRF). 107 | * [SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images](https://github.com/chenhsuanlin/signed-distance-SRN) (Lin et al. 2020) 108 | demonstrates how we may train Scene Representation Networks from a single observation only. 109 | * [PixelNeRF](https://alexyu.net/pixelnerf/) (Yu et al. 2020) proposes to condition a NeRF on local features lying on camera rays, 110 | extracted from context images, as proposed in PIFu (see "from 3D supervision"). 111 | * [Multiview neural surface reconstruction by disentangling geometry and appearance](https://lioryariv.github.io/idr/) (Yariv et al.
2020) 112 | demonstrates sphere-tracing with positional encodings for reconstruction of complex 3D scenes, and proposes a surface normal and view-direction 113 | dependent rendering network for capturing view-dependent effects. 114 | 115 | One may also encode geometry and appearance of a 3D scene via its 360-degree, 4D light field. This obviates the need for 116 | ray-marching and enables real-time rendering and fast training with minimal memory footprint, but requires additional machinery to ensure 117 | multi-view consistency. 118 | * [Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering](https://vsitzmann.github.io/lfns/) (Sitzmann et al. 2021) 119 | proposes to represent 3D scenes via their 360-degree light field parameterized as a neural implicit representation. 120 | 121 | ### From 3D supervision 122 | * [PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization](https://shunsukesaito.github.io/PIFu/) (Saito et al. 2019) 123 | first introduced the concept of conditioning an implicit representation on local features extracted from context images. Follow-up work 124 | achieves photo-realistic, real-time re-rendering. 125 | * [Texture Fields: Learning Texture Representations in Function Space](https://autonomousvision.github.io/texture-fields/) (Oechsle et al. 2019) 126 | 127 | ### For dynamic scenes 128 | * [Occupancy flow: 4d reconstruction by learning particle dynamics](https://avg.is.tuebingen.mpg.de/publications/niemeyer2019iccv) 129 | (Niemeyer et al. 2019) first proposed to learn a space-time neural implicit representation by representing a 4D warp field 130 | with an implicit neural representation. 131 | 132 | The following papers concurrently proposed to leverage a similar approach for the reconstruction of dynamic scenes 133 | from 2D observations only via Neural Radiance Fields.
134 | * [D-NeRF: Neural Radiance Fields for Dynamic Scenes](https://arxiv.org/abs/2011.13961) 135 | * [Deformable Neural Radiance Fields](https://nerfies.github.io/) 136 | * [Neural Radiance Flow for 4D View Synthesis and Video Processing](https://yilundu.github.io/nerflow/) 137 | * [Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes](http://www.cs.cornell.edu/~zl548/NSFF/) 138 | * [Space-time Neural Irradiance Fields for Free-Viewpoint Video](https://video-nerf.github.io/) 139 | * [Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video](https://gvv.mpi-inf.mpg.de/projects/nonrigid_nerf/) 140 | 141 | ## Symmetries in Implicit Neural Representations 142 | * [Vector Neurons: A General Framework for SO(3)-Equivariant Networks](https://cs.stanford.edu/~congyue/vnn/) (Deng et al. 2021) 143 | makes conditional implicit neural representations equivariant to SO(3), enabling the learning of a rotation-equivariant 144 | shape space and subsequent reconstruction of 3D geometry of single objects in unseen poses. 
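The core mechanism behind Vector Neurons can be illustrated compactly: features are lists of 3D vectors, and linear layers mix only the channel dimension, never the spatial axis, so rotating the input commutes with the layer. A minimal sketch (names are illustrative; the full architecture in the paper also defines equivariant nonlinearities and pooling):

```python
import numpy as np

rng = np.random.default_rng(0)

def vn_linear(V, W):
    # Vector-Neuron-style linear layer: V holds `channels` 3D vector
    # features (shape [channels, 3]); W mixes channels but never touches
    # the 3D axis, so the layer commutes with any rotation of the vectors.
    return W @ V

def random_rotation(rng):
    # Orthogonalize a random matrix; flip the sign if needed so det = +1,
    # i.e. a proper rotation in SO(3).
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))

V = rng.normal(size=(8, 3))    # 8 vector-valued input features
W = rng.normal(size=(16, 8))   # channel-mixing weights
R = random_rotation(rng)

lhs = vn_linear(V @ R.T, W)    # rotate first, then apply the layer
rhs = vn_linear(V, W) @ R.T    # apply the layer, then rotate
# lhs == rhs: the layer is SO(3)-equivariant by construction.
```

The equivariance here is exact (by associativity of matrix multiplication), which is why such layers yield rotation-equivariant shape spaces without rotation augmentation.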
145 | 146 | ## Hybrid implicit / explicit (condition implicit on local features) 147 | The following four papers concurrently proposed to condition an implicit neural representation on local features stored in a voxelgrid: 148 | * [Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion](https://virtualhumans.mpi-inf.mpg.de/papers/chibane20ifnet/chibane20ifnet.pdf) 149 | * [Local Implicit Grid Representations for 3D Scenes](https://geometry.stanford.edu/papers/jsmhnf-lligrf3s-20/jsmhnf-lligrf3s-20.pdf) 150 | * [Convolutional Occupancy Networks](https://arxiv.org/abs/2003.04618) 151 | * [Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction](https://arxiv.org/abs/2003.10983) 152 | 153 | This has since been leveraged for inverse graphics as well: 154 | * [Neural Sparse Voxel Fields](https://github.com/facebookresearch/NSVF) applies a similar concept to neural radiance fields. 155 | * [PixelNeRF](https://alexyu.net/pixelnerf/) (Yu et al. 2020) proposes to condition a NeRF on local features lying on camera rays, 156 | extracted from context images, as proposed in PIFu (see "from 3D supervision"). 157 | 158 | The following papers condition a deep signed distance function on local patches: 159 | * [Local Deep Implicit Functions for 3D Shape](https://ldif.cs.princeton.edu/) 160 | * [PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations](http://gvv.mpi-inf.mpg.de/projects/PatchNets/) 161 | 162 | ## Learning correspondence with Neural Implicit Representations 163 | * [Inferring Semantic Information with 3D Neural Scene Representations](https://www.computationalimaging.org/publications/semantic-srn/) leverages 164 | features learned by Scene Representation Networks for weakly supervised semantic segmentation of 3D objects.
165 | * [Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation](https://yilundu.github.io/ndf/) 166 | leverages features learned by occupancy networks to establish correspondence, used for robotics imitation learning. 167 | 168 | ## Robotics Applications 169 | * [3D Neural Scene Representations for Visuomotor Control](https://3d-representation-learning.github.io/nerf-dy/) 170 | learns a latent state space for robotics tasks using neural rendering, and subsequently expresses policies in that latent space. 171 | * [Full-Body Visual Self-Modeling of Robot Morphologies](https://robot-morphology.cs.columbia.edu/) 172 | uses a neural implicit geometry representation for learning a robot self-model, enabling space occupancy queries for given joint angles. 173 | * [Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation](https://yilundu.github.io/ndf/) 174 | leverages neural fields & vector neurons as an object-centric representation that enables imitation learning of pick-and-place tasks, generalizing across SE(3) poses. 175 | 176 | ## Generalization & Meta-Learning with Neural Implicit Representations 177 | * DeepSDF, Occupancy Networks, and IM-Net concurrently proposed conditioning via concatenation. 178 | * [PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization](https://shunsukesaito.github.io/PIFu/) (Saito et al. 2019) 179 | proposed to locally condition implicit representations on ray features extracted from context images. 180 | * [Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations](https://vsitzmann.github.io/srns/) (Sitzmann et al. 2019) proposed meta-learning via hypernetworks. 181 | * [MetaSDF: Meta-Learning Signed Distance Functions](https://vsitzmann.github.io/metasdf/) (Sitzmann et al.
2020) proposed gradient-based meta-learning for implicit neural representations. 182 | * [SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images](https://github.com/chenhsuanlin/signed-distance-SRN) (Lin et al. 2020) shows how to learn 3D implicit representations from single-image supervision only. 183 | * [Learned Initializations for Optimizing Coordinate-Based Neural Representations](https://www.matthewtancik.com/learnit) (Tancik et al. 2020) explored gradient-based meta-learning for NeRF. 184 | 185 | ## Fitting high-frequency detail with positional encoding & periodic nonlinearities 186 | * [Neural Radiance Fields (NeRF)](https://www.matthewtancik.com/nerf) (Mildenhall et al. 2020) proposed positional encodings. 187 | * [Implicit Neural Representations with Periodic Activation Functions](https://vsitzmann.github.io/siren/) (Sitzmann et al. 2020) proposed implicit representations with periodic nonlinearities. 188 | * [Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains](https://people.eecs.berkeley.edu/~bmild/fourfeat/) (Tancik et al. 2020) explores positional encodings in an NTK framework. 189 | 190 | ## Implicit Neural Representations of Images 191 | * [Compositional Pattern Producing Networks: A Novel Abstraction of Development](https://link.springer.com/content/pdf/10.1007/s10710-007-9028-8.pdf) (Stanley 2007) 192 | first proposed to parameterize images implicitly via neural networks. 193 | * [Implicit Neural Representations with Periodic Activation Functions](https://vsitzmann.github.io/siren/) (Sitzmann et al. 2020) proposed to generalize across implicit representations of images via hypernetworks. 194 | * [X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation](https://xfields.mpi-inf.mpg.de/) (Bemana et al. 2020) parameterizes the Jacobian of pixel position with respect to view, time, illumination, etc. to naturally interpolate images.
195 | * [Learning Continuous Image Representation with Local Implicit Image Function](https://github.com/yinboc/liif) (Chen et al. 2020) represents images as implicit functions conditioned on local deep features, enabling super-resolution at arbitrary scales. 196 | * [Alias-Free Generative Adversarial Networks (StyleGAN3)](https://nvlabs.github.io/stylegan3/) uses a FiLM-conditioned MLP 197 | as an image GAN. 198 | 199 | ## Composing implicit neural representations 200 | The following papers propose to assemble scenes from per-object 3D implicit neural representations. 201 | * [GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields](https://arxiv.org/abs/2011.12100) (Niemeyer et al. 2021) 202 | * [Object-centric Neural Rendering](https://arxiv.org/pdf/2012.08503.pdf) (Guo et al. 2020) 203 | * [Unsupervised Discovery of Object Radiance Fields](https://kovenyu.com/uorf/) (Yu et al. 2021) 204 | 205 | ## Implicit Representations for Partial Differential Equations & Boundary Value Problems 206 | * [Implicit Geometric Regularization for Learning Shapes](https://github.com/amosgropp/IGR) (Gropp et al. 2020) learns SDFs by enforcing constraints of the Eikonal equation via the loss. 207 | * [Implicit Neural Representations with Periodic Activation Functions](https://vsitzmann.github.io/siren/) (Sitzmann et al. 2020) proposes to leverage the periodic sine as an 208 | activation function, enabling the parameterization of functions with non-trivial higher-order derivatives and the solution of complicated PDEs. 209 | * [AutoInt: Automatic Integration for Fast Neural Volume Rendering](https://davidlindell.com/publications/autoint) (Lindell et al. 2020) 210 | * [MeshfreeFlowNet: Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework](http://www.maxjiang.ml/proj/meshfreeflownet) (Jiang et al. 2020) performs super-resolution for spatio-temporal flow functions using local implicit representations, with auxiliary PDE losses.
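To make the "PDE constraint as a loss" idea concrete: a valid signed distance function satisfies the Eikonal equation ‖∇f(x)‖ = 1 almost everywhere, so IGR-style training penalizes the deviation of the gradient norm from 1 at sampled points. A minimal NumPy sketch with two simplifying assumptions: finite differences stand in for automatic differentiation, and an analytic sphere SDF stands in for the network being trained:

```python
import numpy as np

def sphere_sdf(x, radius=1.0):
    # Analytic SDF of a sphere; in training this would be the network f_theta.
    return np.linalg.norm(x, axis=-1) - radius

def eikonal_loss(f, x, eps=1e-4):
    # Approximate grad f at each sample via central finite differences
    # (a real implementation would use automatic differentiation).
    grads = np.stack(
        [(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(x.shape[-1])],
        axis=-1,
    )
    # Penalize (||grad f|| - 1)^2, averaged over the sampled points.
    return np.mean((np.linalg.norm(grads, axis=-1) - 1.0) ** 2)

points = np.random.default_rng(0).normal(size=(1024, 3))
loss = eikonal_loss(sphere_sdf, points)
# For a true SDF this loss is ~0; during training it regularizes f_theta
# toward a valid distance field even without ground-truth distance values.
```

Because the constraint is evaluated only at sampled points, the same pattern extends to other PDE residuals, which is how SIREN fits e.g. Poisson and wave equations.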
211 | 212 | ## Generative Adversarial Networks with Implicit Representations 213 | ### For 3D 214 | * [Generative Radiance Fields for 3D-Aware Image Synthesis](https://autonomousvision.github.io/graf/) (Schwarz et al. 2020) 215 | * [pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis](https://arxiv.org/abs/2012.00926) (Chan et al. 2020) 216 | * [Unconstrained Scene Generation with Locally Conditioned Radiance Fields](https://arxiv.org/pdf/2104.00670.pdf) (DeVries et al. 2021) leverages a hybrid implicit-explicit representation, 217 | by generating a 2D feature grid floorplan with a classic convolutional GAN, and then conditioning a 3D neural implicit representation on these features. 218 | This enables the generation of room-scale 3D scenes. 219 | * [Alias-Free Generative Adversarial Networks (StyleGAN3)](https://nvlabs.github.io/stylegan3/) uses a FiLM-conditioned MLP 220 | as an image GAN. 221 | 222 | ### For 2D 223 | For 2D image synthesis, neural implicit representations enable the generation of high-resolution images, while also 224 | allowing the principled treatment of symmetries such as rotation and translation equivariance. 225 | * [Adversarial Generation of Continuous Images](https://arxiv.org/abs/2011.12026) (Skorokhodov et al. 2020) 226 | * [Learning Continuous Image Representation with Local Implicit Image Function](https://github.com/yinboc/liif) (Chen et al. 2020) 227 | * [Image Generators with Conditionally-Independent Pixel Synthesis](https://arxiv.org/abs/2011.13775) (Anokhin et al. 2020) 228 | * [Alias-Free GAN](https://nvlabs.github.io/alias-free-gan/) (Karras et al. 2021) 229 | 230 | ## Image-to-image translation 231 | * [Spatially-Adaptive Pixelwise Networks for Fast Image Translation](https://arxiv.org/pdf/2012.02992.pdf) (Shaham et al. 2020) 232 | leverages a hybrid implicit-explicit representation for fast high-resolution image-to-image translation.
233 | 234 | ## Articulated representations 235 | * [NASA: Neural Articulated Shape Approximation](https://virtualhumans.mpi-inf.mpg.de/papers/NASA20/NASA.pdf) (Deng et al. 2020) 236 | represents an articulated object as a composition of local, deformable implicit elements. 237 | 238 | # Talks 239 | * [Vincent Sitzmann: Implicit Neural Scene Representations (Scene Representation Networks, MetaSDF, Semantic Segmentation with Implicit Neural Representations, SIREN)](https://www.youtube.com/watch?v=__F9CCqbWQk&t=1s) 240 | * [Andreas Geiger: Neural Implicit Representations for 3D Vision (Occupancy Networks, Texture Fields, Occupancy Flow, Differentiable Volumetric Rendering, GRAF)](https://www.youtube.com/watch?v=F9mRv4v80w0) 241 | * [Gerard Pons-Moll: Shape Representations: Parametric Meshes vs Implicit Functions](https://www.youtube.com/watch?v=_4E2iEmJXW8) 242 | * [Yaron Lipman: Implicit Neural Representations](https://www.youtube.com/watch?v=rUd6qiSNwHs&list=PLat4GgaVK09e7aBNVlZelWWZIUzdq0RQ2&index=11) 243 | 244 | # Links 245 | * [awesome-NeRF](https://github.com/yenchenlin/awesome-NeRF) - A list of implicit-representation resources focused specifically on neural radiance fields (NeRF) 246 | 247 | ## License 248 | License: MIT 249 | 250 | --------------------------------------------------------------------------------