├── my_notes
│   ├── AI_Art
│   │   ├── AIArt.png
│   │   └── README.md
│   └── Hand_Mocap
│       ├── HandMocapNotes.png
│       └── README.md
├── .gitmodules
└── README.md

/my_notes/AI_Art/AIArt.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ykk648/awesome-papers-are-all-you-need/HEAD/my_notes/AI_Art/AIArt.png
--------------------------------------------------------------------------------
/my_notes/Hand_Mocap/HandMocapNotes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ykk648/awesome-papers-are-all-you-need/HEAD/my_notes/Hand_Mocap/HandMocapNotes.png
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
[submodule "Hand3DResearch"]
	path = Hand3DResearch
	url = https://github.com/SeanChenxy/Hand3DResearch.git
	branch = main
[submodule "Human-Video-Generation"]
	path = Human-Video-Generation
	url = git@github.com:yule-li/Human-Video-Generation.git
	branch = master
[submodule "hmr-survey"]
	path = hmr-survey
	url = git@github.com:tinatiansjz/hmr-survey.git
	branch = main
[submodule "HelloFace"]
	path = HelloFace
	url = git@github.com:becauseofAI/HelloFace.git
	branch = master
[submodule "awesome-NeRF"]
	path = awesome-NeRF
	url = https://github.com/koolo233/awesome-NeRF.git
	branch = main
[submodule "awesome-ai-painting"]
	path = awesome-ai-painting
	url = https://github.com/hua1995116/awesome-ai-painting.git
	branch = master
[submodule "Awesome-Face-Restoration"]
	path = Awesome-Face-Restoration
	url = https://github.com/TaoWangzj/Awesome-Face-Restoration.git
	branch = main
--------------------------------------------------------------------------------
/my_notes/Hand_Mocap/README.md:
--------------------------------------------------------------------------------
## Overview

![HandMocapNotes](./HandMocapNotes.png)

## Hand Estimation

### Param Model

- MANO
  - SMPL+H
  - 61 (3 cam + 3 rot + 15*3 + 10 shape; see the sketch below)

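Since the 61-dimensional layout above is easy to mis-count, here is a minimal sketch (plain NumPy; the constant and function names are mine, not taken from MANO, SMPL+H, or any of the codebases referenced below) of how such a hand parameter vector is typically sliced:

```python
import numpy as np

# 3 (weak-perspective camera) + 3 (global rotation) + 15*3 (finger joint
# rotations, axis-angle) + 10 (shape coefficients) = 61
CAM, GLOBAL_ROT, JOINT_POSE, SHAPE = 3, 3, 15 * 3, 10
assert CAM + GLOBAL_ROT + JOINT_POSE + SHAPE == 61


def split_hand_params(theta: np.ndarray) -> dict:
    """Split a (61,) hand parameter vector into named components."""
    assert theta.shape == (61,)
    ends = np.cumsum([CAM, GLOBAL_ROT, JOINT_POSE, SHAPE])
    return {
        "cam": theta[: ends[0]],               # scale, tx, ty
        "global_rot": theta[ends[0]:ends[1]],  # root orientation (axis-angle)
        "pose": theta[ends[1]:ends[2]],        # 15 joints x 3 (axis-angle)
        "shape": theta[ends[2]:ends[3]],       # shape betas
    }


if __name__ == "__main__":
    parts = split_hand_params(np.zeros(61))
    print({k: v.shape for k, v in parts.items()})  # (3,), (3,), (45,), (10,)
```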

### MANO params

- minimal-hand
  - novel points
    - DetNet + IKNet
    - 100 fps
  - pipe
    - image -> 2D + 3D detect -> MANO shape IK -> hand mesh
  - dataset
    - DetNet
      - CMU (CMU Panoptic dataset)
      - RHD (Rendered Handpose Dataset)
      - GAN (GANerated Hands Dataset)
    - IKNet
      - MANO mocap data + interpolation aug
- MobileHand
  - novel points
    - MobileNetV3
    - 110 Hz on a GPU or 75 Hz on a CPU
    - 23 degrees of freedom
    - replace MANO&PCA
  - pipe
    - image -> camera/MANO params -> project to 2D to compute the loss
  - dataset
    - train & val
      - FreiHand/STB
  - metrics
    - 3D kps
      - PCK/AUC
    - hand shape
      - mesh error/F-score
- S2HAND
  - novel points
    - self-supervised
    - 2D-3D consistency loss
  - dataset
    - FreiHand/HO3D

### SMPL-X params

- FrankHand
  - novel points
    - encoder (ResNet-50)-decoder, HMR-like network
  - pipe
    - image -> image feature -> SMPL-X params -> SMPL-X mesh
  - dataset
    - train
      - FreiHand/HO-3D/MTC/STB/RHD/MPII+NZSL
    - test
      - STB/RHD/MPII+NZSL

### MANO mesh

- MobRecon
  - novel points
    - lightweight
    - SpiralConv
  - pipe
    - image -> 2D detect -> 3D lifting -> regress mesh (SpiralConv)
  - dataset
    - FreiHand/Human3.6M
    - self
      - real-world test set
      - complement data
    - test
      - FreiHand/RHD/HO3Dv2
  - metrics
    - MPJPE/PA-MPJPE/Acc/AUC/F-Score
- HandOccNet
  - novel points
    - FPN + FIT (feature injecting transformer) + SET (self-enhancing transformer)
  - dataset
    - HO3D/FPHA (first-person hand action)
--------------------------------------------------------------------------------
/my_notes/AI_Art/README.md:
--------------------------------------------------------------------------------
# Overview

![AIArt](./AIArt.png)

# AI Art

## Talking Head

### HumanFace

- Image Animation
  - monkey-net
    - network
      - unsupervised kp detector
      - dense motion network
      - motion transfer network
  - FOM
    - metrics
      - same as monkey-net
    - network
      - unsupervised keypoint detector
        - jacobian
          - local affine transformations
      - dense motion network
        - transform map
        - occlusion map
    - datasets
      - Tai-Chi-HD
  - DaGAN
    - metrics
      - PSNR/SSIM/AKD/AED
      - same as MarioNETte
    - dataset
      - VoxCeleb
    - network
      - Depth-Aware
        - face depth model
      - Depth-guided Facial Keypoints Detection
      - cross-modal attention mechanism
        - face kp
        - face depth
  - vid2vid
  - TPSMM

### Anime

- TalkingHeadAnime
  - pose (6) -> face
    - left/right eye, mouth, head x/y/z
  - data
    - collected from MMD
  - network
    - face morpher
      - GANimation based
        - 2 generators, like CycleGAN
        - alpha mask + alpha blend (attention-based generator)
    - face rotator
      - EnhancedView based
- TalkingHeadAnime2
  - update
    - manual pose
    - more expressive (eyebrow)
    - iFacialMocap
  - network
    - pose 6 -> 42
    - eyebrow morpher
      - segment + remove + change
    - eye & mouth morpher
- EasyVtuber
  - MediaPipe -> face mesh -> vector -> THA
  - OBS stream
- EasyVtuber2
  - anime source
    - Waifu Labs
    - crypko.ai
  - iFacialMocap + UDP
  - OBS -> Unity Capture

### Metrics

- PSNR
- SSIM
- from monkey-net
  - AKD (Average Keypoint Distance; see the sketch after this list)
  - AED (Average Euclidean Distance)
  - MKR (Missing Keypoint Rate)
  - for video
    - L1
    - FID (Frechet Inception Distance)
- from MarioNETte
  - CSIM
    - cosine similarity, for identity
  - PRMSE
    - root mean square error of the head pose angles, for head pose
  - AUCON
    - the ratio of identical facial action unit values

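For concreteness, a minimal sketch of two of the simpler metrics listed above, AKD and the L1 reconstruction error, assuming keypoints come as (frames, K, 2) arrays and videos as (frames, H, W, 3) arrays in [0, 1]; the function names are mine, not taken from the monkey-net or FOM codebases:

```python
import numpy as np


def akd(kp_pred: np.ndarray, kp_gt: np.ndarray) -> float:
    """Average Keypoint Distance: mean Euclidean distance between
    predicted and ground-truth keypoints, shapes (frames, K, 2)."""
    return float(np.linalg.norm(kp_pred - kp_gt, axis=-1).mean())


def l1_error(video_pred: np.ndarray, video_gt: np.ndarray) -> float:
    """Mean absolute reconstruction error between two aligned videos,
    shapes (frames, H, W, 3), values in [0, 1]."""
    return float(np.abs(video_pred - video_gt).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print("AKD:", akd(rng.random((8, 10, 2)), rng.random((8, 10, 2))))
    print("L1 :", l1_error(rng.random((8, 64, 64, 3)), rng.random((8, 64, 64, 3))))
```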

## Text 2 Video

### Make-A-Video (Meta)

- pipe
  - text (CLIP) to image prior
  - spatiotemporal conv decoder
  - frame interpolation
  - SR + spatiotemporal SR
- datasets
  - LAION-5B
  - WebVid-10M
  - HD-VILA-100M
- metrics
  - FVD (Frechet Video Distance)
  - FID (Frechet Inception Distance)
  - CLIPSIM (CLIP similarity between video frames)

### Imagen Video (Google)

- pipe
  - 7 models
  - text-conditional video generation
  - spatial & temporal SR
- datasets
  - 14M internet video-text pairs
  - 60M image-text pairs
  - LAION-400M

### Phenaki (Google)

### CogVideo

- pipe
  - multi-frame-rate hierarchical training
    - frame rate/text/frame tokens
    - stage 1: sequential generation
    - stage 2: recursive interpolation
  - dual-channel attention
    - freeze CogView2
    - add spatial-temporal attention channel

## Text to Image

### Service

- image generation services
  - Midjourney
  - pornpen
  - NovelAI
    - aitags
  - Chinese
    - yige
- local/Colab/Hugging Face
  - multimodalart
  - mindseye
    - uses Colab to run different models
  - majesty-diffusion
    - Latent Diffusion
    - V-Objective Diffusion
  - DreamBooth
    - Dreambooth-Stable-Diffusion
    - fast-stable-diffusion
      - fully on Colab
  - NovelAI
    - tag generation

### Diffusion

- DALL-E
- GLIDE
- Latent Diffusion (CompVis)
  - diffusion in latent space
  - Stable Diffusion
    - pipe
      - 860M UNet
      - CLIP ViT-L/14 text encoder
    - datasets
      - LAION-5B
- disco-diffusion
- ERNIE-ViLG
- Imagen (Google)
- Dreambooth
  - use a few images to fine-tune a T2I model

### Transformer

- CogView2

## Image Generation

### NFT

- generate
  - generation & interpolation & style transfer
  - image composition
- website
  - nftcn
  - OpenSea

### AnimeFace

- StyleGAN
  - NVlabs
  - stylegan2-ada-pytorch
    - upfirdn2d
- Competitor
  - Anime
    - crypko.ai
    - waifu

### Semantic Image Synthesis

- SPADE
  - spatially-adaptive normalization
    - semantic mask -> scale/bias
  - generator
    - pix2pixHD with the encoder removed

## NeRF

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
↖ Click there to get the TOC

# Awesome-Papers-Are-All-You-Need

Papers I have read or reproduced since 2020 that have been beneficial to my work.

My paper notes: [2021](https://ykk648.github.io/posts/65074/) [2022](https://ykk648.github.io/posts/65394/)

My Xmind notes:

[AI Art](./my_notes/AI_Art) (TalkingHead/Text2Image/Text2Video etc.)
16 | 17 | [Hand Mocap](./my_notes/Hand_Mocap) 18 | 19 | Recommend: 20 | 21 | [hmr-survey](https://github.com/tinatiansjz/hmr-survey) by tinatiansjz 22 | 23 | [Hand3DResearch](https://github.com/SeanChenxy/Hand3DResearch) by SeanChenxy 24 | 25 | [Human-Video-Generation](https://github.com/yule-li/Human-Video-Generation) by yule-li. 26 | 27 | [HelloFace](https://github.com/becauseofAI/HelloFace) by becauseofAI 28 | 29 | [awesome-NeRF](https://github.com/koolo233/awesome-NeRF) by koolo233 30 | 31 | [awesome-ai-painting](https://github.com/hua1995116/awesome-ai-painting) by hua1995116 32 | 33 | [Awesome-Face-Restoration](https://github.com/TaoWangzj/Awesome-Face-Restoration) by TaoWangzj 34 | 35 | --- 36 | 37 | 38 | 39 | ### 3D Face Reconstruction 40 | 41 | | Year | Name | Paper | Codes | 42 | | ---- | --------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 43 | | 2018 | 3DDFA | [Face Alignment in Full Pose Range: A 3D Total Solution](https://arxiv.org/abs/1804.01005) | [official](https://github.com/cleardusk/3DDFA) | 44 | | 2019 | Deep3DFaceRecon | [Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set](https://arxiv.org/abs/1903.08527) | [unofficial](https://github.com/sicxu/Deep3DFaceRecon_pytorch) | 45 | | 2020 | **3DDFA_V2** | [Towards Fast, Accurate and Stable 3D Dense Face Alignment](https://guojianzhu.com/assets/pdfs/3162.pdf) | [official](https://github.com/cleardusk/3DDFA_V2) | 46 | | 2020 | **Detailed3DFace** | [FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_FaceScape_A_Large-Scale_High_Quality_3D_Face_Dataset_and_Detailed_CVPR_2020_paper.pdf) | [official](https://github.com/yanght321/Detailed3DFace) | 47 | | 2021 | DECA | [Detailed Expression Capture and Animation](https://arxiv.org/abs/2012.04012) | [official](https://github.com/YadiraF/DECA) | 48 | | | **Imperial College London** | | | 49 | | 2019 | GANFit | [Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction](https://openaccess.thecvf.com/content_CVPR_2019/papers/Gecer_GANFIT_Generative_Adversarial_Network_Fitting_for_High_Fidelity_3D_Face_CVPR_2019_paper.pdf) | [official](https://github.com/barisgecer/GANFit) | 50 | | 2021 | TBGAN | [Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks](https://barisgecer.github.io/files/gecer_tbgan_arxiv.pdf) | [official](https://github.com/barisgecer/TBGAN) | 51 | | 2021 | OSTeC | [One-Shot Texture Completion](https://openaccess.thecvf.com/content/CVPR2021/papers/Gecer_OSTeC_One-Shot_Texture_Completion_CVPR_2021_paper.pdf) | [official](https://github.com/barisgecer/OSTeC) | 52 | | 2021 | Fast-GANFit | [Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction]() | | 53 | | 2021 | AvatarMe | [AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild"](https://arxiv.org/abs/2003.13845) | [official](https://github.com/lattas/AvatarMe) | 54 | | 2021 | AvatarMe++ | [AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs](https://arxiv.org/abs/2112.05957) | | 55 | 56 | 57 | 58 | --- 59 | 60 | 61 | 62 | ### 3D Human Digitization 63 | 64 | | Year | Name | Paper | Codes | 65 | | ---- | ------------------------ | ------------------------------------------------------------ | 
------------------------------------------------------------ | 66 | | 2019 | speech2gesture | [Learning Individual Styles of Conversational Gesture](https://arxiv.org/abs/1906.04160) | [official](https://github.com/amirbar/speech2gesture) | 67 | | 2020 | Monoport | [Monoport: Monocular Volumetric Human Teleportation](https://arxiv.org/pdf/2007.13988v1.pdf) | [official](https://github.com/Project-Splinter/MonoPort) | 68 | | 2020 | PiFuHD | [PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization](https://arxiv.org/pdf/2004.00452.pdf) | [Meta](https://github.com/facebookresearch/pifuhd) | 69 | | 2021 | iPERCore | [Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis](https://arxiv.org/pdf/2011.09055.pdf) | [official](https://github.com/iPERDance/iPERCore) | 70 | | 2021 | **ContactHumanDynamics** | [Contact and Human Dynamics from Monocular Video](https://geometry.stanford.edu/projects/human-dynamics-eccv-2020/) | [Stanford](https://github.com/davrempe/contact-human-dynamics) | 71 | | 2021 | HuMoR | [HuMoR: 3D Human Motion Model for Robust Pose Estimation](https://geometry.stanford.edu/projects/humor/docs/humor.pdf) | [Stanford](https://github.com/davrempe/humor) | 72 | | 2021 | MeTRAbs | [MeTRAbs: Metric-Scale Truncation-Robust Heatmaps for Absolute 3D Human Pose Estimation](https://arxiv.org/abs/2007.07227) | [official](https://github.com/isarandi/metrabs) | 73 | | 2022 | DeepMotion | | [official](https://deepmotion.com/Animate-3D) | 74 | 75 | #### Motion Capture & Driven 76 | 77 | | Year | Name | Paper | Codes | 78 | | ---- | ------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 79 | | 2021 | ParameterizedMotion | [Learning a family of motor skills from a single motion clip](http://mrl.snu.ac.kr/research/ProjectParameterizedMotion/ParameterizedMotion.html) | [official](https://github.com/snumrl/ParameterizedMotion) | 80 | | 2021 | 1165048017 Blog | | [official](https://github.com/1165048017/BlogLearning/blob/master/BlogContents/%E8%BF%90%E5%8A%A8%E6%8D%95%E6%8D%89.md) | 81 | | 2021 | **TDPT** | | [official](https://github.com/digital-standard/ThreeDPoseTracker) | 82 | | 2021 | IK/FABRIK/CCDIK | | [UE4 doc](https://docs.unrealengine.com/4.27/zh-CN/) | 83 | 84 | #### Human Kp Estimation 85 | 86 | | Year | Name | Paper | Codes | 87 | | ---- | ------------- | ------------------------------------------------------------ | ------------------------------------------------------ | 88 | | | **2D Kp** | | | 89 | | 2018 | AlphaPose | [RMPE: Regional Multi-Person Pose Estimation](https://github.com/MVIG-SJTU/AlphaPose) | [official](https://github.com/MVIG-SJTU/AlphaPose) | 90 | | | **3D Kp** | | | 91 | | 2019 | mvpose | [Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views](https://arxiv.org/pdf/1901.04111.pdf) | [ZJU3DV](https://github.com/zju3dv/mvpose) | 92 | | 2022 | PoseTriplet | [Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision](https://arxiv.org/pdf/2203.15625) | [official](https://github.com/Garfield-kh/PoseTriplet) | 93 | | | **combine** | | | 94 | | 2021 | **MediaPipe** | | [official](https://github.com/google/mediapipe) | 95 | | 2021 | mmpose | | [official](https://github.com/open-mmlab/mmpose) | 96 | 97 | #### Human Motion Estimation 98 | 99 | [inverse dynamics](https://www.cnblogs.com/ArenAK/archive/2010/07/25/1784782.html) 100 | 101 | | Year | Name | Paper | Codes | 102 | | 
---- | -------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 103 | | |  **Tricks** | | | 104 | | 2020 | 6D rotation | [On the Continuity of Rotation Representations in Neural Networks](https://arxiv.org/abs/1812.07035) | [official](https://github.com/Janus-Shiau/6d_rot_tensorflow) | 105 | | |  **Body Model** | | | 106 | | 2015 | SMPL | [SMPL: A Skinned Multi-Person Linear Model](http://files.is.tue.mpg.de/black/papers/SMPL2015.pdf) | [official](https://github.com/CalciferZh/SMPL) | 107 | | 2019 | SMPL-X | [SMPL-X: A new joint 3D model of the human body, face and hands together](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/497/SMPL-X.pdf) | [official](https://github.com/vchoutas/smplx) | 108 | | |  **Image Based** | | | 109 | | 2017 | VNect | [VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera](https://vcai.mpi-inf.mpg.de/projects/VNect/content/VNect_SIGGRAPH2017.pdf) | Max Planck | 110 | | 2019 | SPIN | [Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop](https://arxiv.org/pdf/1909.12828.pdf) | [official](https://github.com/nkolot/SPIN) | 111 | | 2021 | PyMAF | [PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop](https://arxiv.org/pdf/2103.16507.pdf) | [official](https://github.com/HongwenZhang/PyMAF) | 112 | | 2021 | **MeshGraphormer** | [Mesh Graphormer](http://xxx.itp.ac.cn/abs/2104.00272) | [microsoft](https://github.com/microsoft/MeshGraphormer) | 113 | | 2021 | **ROMP** | [Monocular, One-stage, Regression of Multiple 3D People](https://arxiv.org/abs/2008.12272) | [official](https://github.com/Arthur151/ROMP) | 114 | | 2021 | DynaBOA | [Out-of-Domain Human Mesh Reconstruction via Dynamic Bilevel Online Adaptation](https://arxiv.org/abs/2111.04017) | [official](https://github.com/syguan96/DynaBOA) | 115 | | 2021 | PARE | [PARE: Part Attention Regressor for 3D Human Body Estimation](https://arxiv.org/abs/2104.08527) | [Max Planck](https://github.com/mkocabas/PARE) | 116 | | 2021 | PoseTriplet | [PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision](https://arxiv.org/pdf/2203.15625) | [official](https://github.com/Garfield-kh/PoseTriplet) | 117 | | 2020 | PhysCap | [PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time](https://vcai.mpi-inf.mpg.de/projects/PhysCap/data/physcap.pdf) | [Max Planck](https://github.com/soshishimada/PhysCap_demo_release/) | 118 | | 2021 | HybrIK | [HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation](https://openaccess.thecvf.com/content/CVPR2021/html/Li_HybrIK_A_Hybrid_Analytical-Neural_Inverse_Kinematics_Solution_for_3D_Human_CVPR_2021_paper.html) | [official](https://github.com/Jeff-sjtu/HybrIK) | 119 | | 2021 | Physics-based Human Motion Estimation | [Physics-based Human Motion Estimation and Synthesis from Videos](https://arxiv.org/abs/2109.09913) | Nvidia | 120 | | 2021 | SimPoE | [SimPoE: Simulated Character Control for 3D Human Pose Estimation](https://arxiv.org/pdf/2104.00683.pdf) | Meta | 121 | | 2021 | imGHUM | [imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose](https://arxiv.org/abs/2108.10842) | [Google](https://github.com/google-research/google-research/tree/master/imghum) | 122 | | 2021 | PoseAug | [PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose 
Estimation](https://arxiv.org/pdf/2105.02465.pdf) | [official](https://github.com/jfzhang95/PoseAug) | 123 | | 2022 | MuJoCo | | [deepmind](https://github.com/deepmind/mujoco) | 124 | | |  **Temporal Based** | | | 125 | | 2020 | VIBE | [VIBE: Video Inference for Human Body Pose and Shape Estimation](https://arxiv.org/abs/1912.05656) | [official](https://github.com/mkocabas/VIBE) | 126 | | 2021 | TCMR | [TCMR: Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video](https://arxiv.org/abs/2011.08627) | [official](https://github.com/hongsukchoi/TCMR_RELEASE) | 127 | | 2021 | maed | [MAED: Encoder-decoder with Multi-level Attention for 3D Human Shape and Pose Estimation](https://arxiv.org/abs/2109.02303) | [official](https://github.com/ziniuwan/maed) | 128 | | | ** Full Body** | | | 129 | | 2021 | **FrankMocap** | [A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator](https://arxiv.org/pdf/2008.08324.pdf) | [Meta](https://github.com/facebookresearch/frankmocap) | 130 | | 2021 | PIXIE | [Collaborative Regression of Expressive Bodies using Moderation](https://ps.is.mpg.de/uploads_file/attachment/attachment/667/PIXIE_3DV_CR.pdf) | [official](https://github.com/YadiraF/PIXIE) | 131 | | 2022 | Hand4Whole | [Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation](https://arxiv.org/abs/2011.11534) | [official](https://github.com/mks0601/Hand4Whole_RELEASE) | 132 | | |  **Multi Views** | | | 133 | | 2020 | 3D Human Pose Estimation | [3D Human Pose Estimation using Multi Camera](https://zenodo.org/record/4003521/files/3D_Human_Pose_Estimation_Using_Multi_Camera.pdf?download=1) | [official](https://github.com/shashikg/3D-Human-Pose-Estimation-using-Multi-Camera) | 134 | | 2020 | Learnable Triangulation | [Learnable Triangulation of Human Pose](https://arxiv.org/abs/1905.05754) | [official](https://github.com/karfly/learnable-triangulation-pytorch) | 135 | | 2020 | Epipolar Transformers | [Epipolar Transformers](https://arxiv.org/abs/2005.04551) | [official](https://github.com/yihui-he/epipolar-transformers) | 136 | | 2020 | VoxelPose | [VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment](https://arxiv.org/abs/2004.06239) | [microsoft](https://github.com/microsoft/voxelpose-pytorch) | 137 | | 2020 | **EasyMocap** | [ Motion Capture from Internet Videos](https://arxiv.org/abs/2008.07931) | [ZJU3DV](https://github.com/zju3dv/EasyMocap) | 138 | | 2021 | PlaneSweepPose | [Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo](https://arxiv.org/abs/2104.02273) | [official](https://github.com/jiahaoLjh/PlaneSweepPose) | 139 | | 2021 | freemocap | | [official](https://github.com/freemocap/freemocap) | 140 | | 2022 | Generalizable Human Pose Triangulation | [Generalizable Human Pose Triangulation](https://arxiv.org/abs/2110.00280) | [official](https://github.com/kristijanbartol/general-3d-humans) | 141 | 142 | #### Hand Estimation 143 | 144 | | Year | Name | Paper | Codes | 145 | | ---- | ---------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 146 | | 2017 | MANO | [Embodied Hands: Modeling and Capturing Hands and Bodies Together](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/392/Embodied_Hands_SiggraphAsia2017.pdf) | [official](https://mano.is.tue.mpg.de/index.html) | 147 | | 2020 | Mediapipe | [MediaPipe Hands: On-device Real-time Hand Tracking](https://arxiv.org/abs/2006.10214) | 
[Google](https://github.com/google/mediapipe) | 148 | | 2021 | MocapNETv3 | [Towards Holistic Real-time Human 3D Pose Estimation using MocapNETs](https://www.bmvc2021-virtualconference.com/assets/papers/1334.pdf) | [official](https://github.com/FORTH-ModelBasedTracker/MocapNET) | 149 | | 2021 | S2HAND | [S2HAND: Model-based 3D Hand Reconstruction via Self-Supervised Learning](https://arxiv.org/abs/2103.11703) | [Tencent](https://github.com/TerenceCYJ/S2HAND) | 150 | 151 | 152 | 153 | --- 154 | 155 | ### Classification 156 | 157 | | Year | Name | Paper | Codes | 158 | | ---- | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 159 | | 2021 | MLP-Mixer | [MLP-Mixer: An all-MLP Architecture for Vision](https://arxiv.org/abs/2105.01601) | [official](https://github.com/google-research/vision_transformer) | 160 | | 2021 | Noisy Student | [Self-training with Noisy Student improves ImageNet classification](https://arxiv.org/abs/1911.04252) | [official](https://github.com/google-research/noisystudent) | 161 | | 2021 | ImageNet-21K | [ImageNet-21K Pretraining for the Masses](https://github.com/Alibaba-MIIL/ImageNet21K) | [official](https://github.com/Alibaba-MIIL/ImageNet21K) | 162 | | 2021 | MicroNet | [MicroNet: Improving Image Recognition with Extremely Low FLOPs](https://arxiv.org/abs/2108.05894) | [official](https://github.com/liyunsheng13/micronet) | 163 | | 2021 | RepVGG | [RepVGG: Making VGG-style ConvNets Great Again ](https://arxiv.org/abs/2101.03697) | [official](https://github.com/DingXiaoH/RepVGG) | 164 | | 2022 | ConvNeXt | [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) | [official](https://github.com/facebookresearch/ConvNeXt) | 165 | 166 | 167 | 168 | --- 169 | 170 | ### Face Detection 171 | 172 | | Year | Name | Paper | Codes | 173 | | ---- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 174 | | 2016 | MTCNN | [Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks](https://arxiv.org/abs/1604.02878) | [unofficial](https://github.com/taotaonice/FaceShifter/tree/master/face_modules/mtcnn_pytorch) | 175 | | 2020 | DSFD | [DSFD: Dual Shot Face Detector](https://arxiv.org/abs/1810.10220) | [official](https://github.com/Tencent/FaceDetection-DSFD) | 176 | | 2021 | **SCRFD** | [Sample and Computation Redistribution for Efficient Face Detection](https://arxiv.org/abs/2105.04714) | [official](https://github.com/deepinsight/insightface/tree/master/detection/scrfd) | 177 | 178 | --- 179 | 180 | ### Face Swap 181 | 182 | | Year | Name | Paper | Codes | 183 | | ---- | ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 184 | | 2019 | FSGAN | [FSGAN: Subject Agnostic Face Swapping and Reenactment](https://arxiv.org/pdf/1908.05932.pdf) | [official](https://github.com/YuvalNirkin/fsgan) | 185 | | 2020 | Disney | [High-Resolution Neural Face Swapping for Visual Effects](https://s3.amazonaws.com/disney-research-data/wp-content/uploads/2020/06/18013325/High-Resolution-Neural-Face-Swapping-for-Visual-Effects.pdf) | [unofficial](https://github.com/Arthurzhangsheng/High-Resolution-Neural-Face-Swapping-for-Visual-Effects) | 186 | | 2020 | FaceShifter | [FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping](https://arxiv.org/abs/1912.13457) | 
[unofficial](https://github.com/mindslab-ai/faceshifter) | 187 | | 2021 | **SimSwap** | [SimSwap: An Efficient Framework For High Fidelity Face Swapping](https://arxiv.org/pdf/2106.06340v1.pdf) | [official](https://github.com/neuralchen/SimSwap) | 188 | | 2021 | InfoSwap | [Information Bottleneck Disentanglement for Identity Swapping](https://openaccess.thecvf.com/content/CVPR2021/papers/Gao_Information_Bottleneck_Disentanglement_for_Identity_Swapping_CVPR_2021_paper.pdf) | [official](https://github.com/GGGHSL/InfoSwap-master) | 189 | | 2021 | ShapeEditer | [ShapeEditer: a StyleGAN Encoder for Face Swapping](https://arxiv.org/abs/2106.13984) | | 190 | | 2021 | **HifiFace** | [HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping](https://arxiv.org/pdf/2106.09965) | [unofficial](https://github.com/mindslab-ai/hififace) | 191 | | 2022 | MobileFaceSwap | [MobileFaceSwap: A Lightweight Framework for Video Face Swapping](https://arxiv.org/abs/2201.03808) | [baidu](https://github.com/Seanseattle/MobileFaceSwap) | 192 | | 2022 | Stitch it in Time | [Stitch it in Time: GAN-Based Facial Editing of Real Videos](https://arxiv.org/abs/2201.08361) | [official](https://github.com/rotemtzaban/STIT) | 193 | 194 | 195 | 196 | --- 197 | 198 | ### Image2Image Translation 199 | 200 | | Year | Name | Paper | Codes | 201 | | ---- | ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 202 | | 2019 | SPADE | [Semantic Image Synthesis with Spatially-Adaptive Normalization](https://arxiv.org/abs/1903.07291) | [Nvidia](https://github.com/NVlabs/SPADE) | 203 | | 2021 | OASIS | [You Only Need Adversarial Supervision for Semantic Image Synthesis](https://arxiv.org/abs/2012.04781) | [official](https://github.com/boschresearch/OASIS) | 204 | | 2017 | pix2pix | [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004) | [official](https://github.com/phillipi/pix2pix) | 205 | | 2018 | **pix2pixHD** | [High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https://arxiv.org/abs/1711.11585v1) | [nvidia](https://github.com/NVIDIA/pix2pixHD) | 206 | | 2018 | vid2vid | [Video-to-Video Synthesis](https://arxiv.org/abs/1808.06601) | [nvidia](https://github.com/NVIDIA/vid2vid) | 207 | | | **anime face** | | | 208 | | 2019 | TalkingHeadAnime | | [official](http://github.com/dragonmeteor/talking-head-anime-demo) | 209 | | 2021 | TalkingHeadAnime2 | | [official](https://github.com/pkhungurn/talking-head-anime-2-demo) | 210 | | 2022 | EasyVtuber | | [official](https://github.com/yuyuyzl/EasyVtuber) | 211 | 212 | --- 213 | 214 | ### Image Generation 215 | 216 | | Year | Name | Paper | Codes | 217 | | ---- | ---------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 218 | | 2020 | ALAE | [Adversarial Latent Autoencoders](https://arxiv.org/abs/2004.04467) | [official](https://github.com/podgorskiy/ALAE) | 219 | | 2020 | GANSpace | [GANSpace: Discovering Interpretable GAN Controls](https://arxiv.org/abs/2004.02546) | [official](https://github.com/harskish/ganspace) | 220 | | 2021 | Cartoon-StyleGAN | [Fine-tuning StyleGAN2 for Cartoon Face Generation](https://arxiv.org/abs/2106.12445) | [official](https://github.com/happy-jihye/Cartoon-StyleGAN) | 221 | | 2021 | Barbershop | [GAN-based Image Compositing using Segmentation Masks](https://arxiv.org/abs/2106.01505) | 
[official](https://github.com/ZPdesu/Barbershop) | 222 | | 2021 | GANs N' Roses | [GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation](https://arxiv.org/abs/2106.06561) | [official](https://github.com/mchong6/GANsNRoses) | 223 | | 2021 | PTI | [PTI: Pivotal Tuning for Latent-based editing of Real Images](https://arxiv.org/abs/2106.05744) | [official](https://github.com/danielroich/PTI) | 224 | | 2021 | sefa | [Closed-Form Factorization of Latent Semantics in GANs](https://arxiv.org/pdf/2007.06600.pdf) | [Genforce](https://github.com/genforce/sefa) | 225 | | 2021 | StyleMapGAN | [StyleMapGAN: Exploiting Spatial Dimensions of Latent in GAN for Real-time Image Editing](https://arxiv.org/abs/2104.14754) | [NAVER AI](https://github.com/naver-ai/StyleMapGAN) | 226 | | 2021 | SuperStyleNet | [SuperStyleNet: Deep Image Synthesis with Superpixel Based Style Encoder](https://www.bmvc2021-virtualconference.com/assets/papers/0051.pdf) | [official](https://github.com/BenjaminJonghyun/SuperStyleNet) | 227 | | 2021 | Chunkmogrify | [Real Image Inversion via Segments](http://arxiv.org/abs/2110.06269) | [Adobe](https://github.com/futscdav/Chunkmogrify) | 228 | | 2021 | encoder4editing | [Designing an Encoder for StyleGAN Image Manipulation](https://arxiv.org/abs/2102.02766) | [official](https://github.com/omertov/encoder4editing) | 229 | | 2021 | Projected GANs | [Projected GANs Converge Faster](http://www.cvlibs.net/publications/Sauer2021NEURIPS.pdf) | [official](https://github.com/autonomousvision/projected_gan) | 230 | | 2022 | CLIPasso | [Semantically-Aware Object Sketching](https://arxiv.org/abs/2202.05822) | [official](https://github.com/yael-vinker/CLIPasso) | 231 | 232 | 233 | 234 | #### Nvidia 235 | 236 | | Year | Name | Paper | Codes | 237 | | ---- | ----------------- | ------------------------------------------------------------ | --------------------------------------------------------- | 238 | | 2019 | StyleGAN | [A Style-Based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948) | [Nvidia](https://github.com/NVlabs/stylegan) | 239 | | 2019 | **StyleGAN2** | [Analyzing and Improving the Image Quality of StyleGAN](http://arxiv.org/abs/1912.04958) | [Nvidia](https://github.com/NVlabs/stylegan2) | 240 | | 2021 | **stylegan2-ada** | [Training Generative Adversarial Networks with Limited Data]() | [Nvidia](https://github.com/NVlabs/stylegan2-ada-pytorch) | 241 | | 2021 | StyleGAN3 | [Alias-Free Generative Adversarial Networks](https://arxiv.org/abs/2106.12423) | [Nvidia](https://github.com/NVlabs/stylegan3) | 242 | | 2021 | SemanticGAN | [Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization](https://arxiv.org/abs/2104.05833) | [Nvidia](https://github.com/nv-tlabs/semanticGAN_code) | 243 | 244 | 245 | --- 246 | 247 | ### Neural Head & Body 248 | 249 | | Year | Name | Paper | Codes | 250 | | ---- | ---------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 251 | | 2020 | **FirstOrder** | [First Order Motion Model for Image Animation](https://papers.nips.cc/paper/8935-first-order-motion-model-for-image-animation) | [official](https://github.com/AliaksandrSiarohin/first-order-model) | 252 | | 2021 | NeuralRecon | [NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video](https://arxiv.org/pdf/2104.00681.pdf) | [ZJU3DV](https://github.com/zju3dv/NeuralRecon) | 253 | | 2021 | 
StyleGestures | [Style-controllable speech-driven gesture synthesis using normalising flows](https://diglib.eg.org/handle/10.1111/cgf13946) | [official](https://github.com/simonalexanderson/StyleGestures) | 254 | | 2021 | **face-vid2vid** | [One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing](https://arxiv.org/abs/2011.15126) | [Nvidia Project](https://nvlabs.github.io/face-vid2vid/) [unofficial](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis) [unofficial-2](https://github.com/zhengkw18/face-vid2vid) | 255 | | 2022 | DaGAN | [Depth-Aware Generative Adversarial Network for Talking Head Video Generation](https://arxiv.org/abs/2203.06605) | [official](https://github.com/harlanhong/CVPR2022-DaGAN) | 256 | 257 | #### NeRF 258 | 259 | | Year | Name | Paper | Codes | 260 | | ---- | ----------- | ------------------------------------------------------------ | ------------------------------------------------ | 261 | | 2020 | NeRF | [Representing Scenes as Neural Radiance Fields for View Synthesis](https://arxiv.org/abs/2003.08934) | [Berkeley](https://github.com/bmild/nerf) | 262 | | 2021 | Neural Body | [Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans](https://arxiv.org/pdf/2012.15838.pdf) | [ZJU3DV](https://github.com/zju3dv/neuralbody) | 263 | | 2021 | **AD-Nerf** | [AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis](https://arxiv.org/abs/2103.11078) | [official](https://github.com/YudongGuo/AD-NeRF) | 264 | 265 | --- 266 | 267 | ### Object Detection 268 | 269 | | Year | Name | Paper | Codes | 270 | | ---- | ----------------- | ------------------------------------------------------------ | ---------------------------------------------------------- | 271 | | 2020 | 100 Days of Hands | [Understanding Human Hands in Contact at Internet Scale](https://fouheylab.eecs.umich.edu/~dandans/projects/100DOH/file/hands.pdf) | [official](https://github.com/ddshan/hand_object_detector) | 272 | | 2021 | **YOLOX** | [YOLOX: Exceeding YOLO Series in 2021](https://arxiv.org/abs/2107.08430) | [Megvii](https://github.com/Megvii-BaseDetection/YOLOX) | 273 | 274 | 275 | 276 | --- 277 | 278 | ### Super Resolution 279 | 280 | | Year | Name | Paper | Codes | 281 | | ---- | ----------- | ------------------------------------------------------------ | -------------------------------------------------- | 282 | | 2018 | ESRGAN | [ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks](https://arxiv.org/abs/1809.00219) | [official](https://github.com/xinntao/ESRGAN) | 283 | | 2020 | DFDNet | [Blind Face Restoration via Deep Multi-scale Component Dictionaries](https://arxiv.org/pdf/2008.00418.pdf) | [official](https://github.com/csxmli2016/DFDNet) | 284 | | 2021 | Real-ESRGAN | [Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data](https://arxiv.org/abs/2107.10833) | [official](https://github.com/xinntao/Real-ESRGAN) | 285 | | 2021 | **GFPGAN** | [GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior](https://arxiv.org/abs/2101.04061) | [official](https://github.com/TencentARC/GFPGAN) | 286 | | 2021 | **GPEN** | [GAN Prior Embedded Network for Blind Face Restoration in the Wild](https://arxiv.org/abs/2105.06070) | [official](https://github.com/yangxy/GPEN) | 287 | | 2022 | SwinIR | [Image Restoration Using Swin Transformer](https://arxiv.org/abs/2108.10257) | [official](https://github.com/JingyunLiang/SwinIR) | 288 | 
| 2022 | VRT | [A Video Restoration Transformer](https://arxiv.org/abs/2201.12288) | [official](https://github.com/JingyunLiang/VRT) | 289 | 290 | --- 291 | 292 | ### ViT Transformer 293 | 294 | | Year | Name | Paper | Codes | 295 | | ---- | ------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 296 | | 2017 | google Attention | [Attention Is All You Need](https://arxiv.org/abs/1706.03762) | [official](https://github.com/jadore801120/attention-is-all-you-need-pytorch) | 297 | | 2020 | **ViT** | [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) | [Google](https://github.com/google-research/vision_transformer) | 298 | | 2021 | Token Labeling | [All Tokens Matter: Token Labeling for Training Better Vision Transformers](https://arxiv.org/abs/2104.10858) | [official](https://github.com/zihangJiang/TokenLabeling) | 299 | | 2021 | Tokens-to-Token ViT | [Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet](https://openaccess.thecvf.com/content/ICCV2021/papers/Yuan_Tokens-to-Token_ViT_Training_Vision_Transformers_From_Scratch_on_ImageNet_ICCV_2021_paper.pdf) | [official](https://github.com/yitu-opensource/T2T-ViT) | 300 | | 2021 | **MAE** | [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) | [Meta](https://github.com/facebookresearch/mae) | 301 | 302 | --- 303 | 304 | ### Metric 305 | 306 | | Year | Name | Paper | Codes | 307 | | ---- | -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 308 | | 2017 | AdaIN | [Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization](https://arxiv.org/abs/1703.06868) | [official](https://github.com/xunhuang1995/AdaIN-style) | 309 | | 2018 | lpips | [The Unreasonable Effectiveness of Deep Features as a Perceptual Metric](https://arxiv.org/abs/1801.03924) | [OpenAI](https://github.com/richzhang/PerceptualSimilarity) | 310 | | 2020 | IBA | [Restricting the Flow: Information Bottlenecks for Attribution](https://openreview.net/attachment?id=S1xWh1rYwB&name=original_pd) | [official](https://github.com/BioroboticsLab/IBA-paper-code) | 311 | | 2021 | Focal Frequency Loss | [Focal Frequency Loss for Image Reconstruction and Synthesis](https://arxiv.org/abs/2012.12821) | [official](https://github.com/EndlessSora/focal-frequency-loss) | 312 | | 2022 | ffcv | | [MIT](https://github.com/libffcv/ffcv) | 313 | 314 | --- 315 | 316 | ### Others 317 | 318 | | Year | Name | Paper | Codes | 319 | | ---- | ------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | 320 | | 2020 | 3d photo inpainting | [3D Photography using Context-aware Layered Depth Inpainting](https://arxiv.org/abs/2004.04727) | [official](https://github.com/vt-vl-lab/3d-photo-inpainting) | 321 | | 2021 | ParameterizedMotion | [Learning a family of motor skills from a single motion clip](http://mrl.snu.ac.kr/research/ProjectParameterizedMotion/ParameterizedMotion.pdf) | [official](https://github.com/snumrl/ParameterizedMotion) | 322 | | 2021 | AnimeInterp | [Deep Animation Video Interpolation in the Wild](https://arxiv.org/abs/2104.02495) | [SenseTime](https://github.com/lisiyao21/AnimeInterp) | 323 | | 2021 | DALLE | [Zero-Shot Text-to-Image Generation](https://arxiv.org/abs/2102.12092) | 
[unofficial](https://github.com/lucidrains/DALLE-pytorch) | 324 | 325 | --------------------------------------------------------------------------------