# Awesome Audio-driven Talking Face Generation

## 2D Encoder-Decoder Based

- StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN [F Yin 2022] [arXiv] [demo](https://feiiyin.github.io/StyleHEAT/) [project page](https://feiiyin.github.io/StyleHEAT/)
- Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation [Hang Zhou 2021] [CVPR] [demo](https://www.youtube.com/watch?v=lNQQHIggnUg) [project page](https://github.com/Hangz-nju-cuhk/Talking-Face_PC-AVS)
- Talking Head Generation with Audio and Speech Related Facial Action Units [S Chen 2021] [BMVC]
- Speech Driven Talking Face Generation from a Single Image and an Emotion Condition [SE Eskimez 2021] [arXiv] [project page](https://github.com/eeskimez/emotalkingface)
- HeadGAN: Video-and-Audio-Driven Talking Head Synthesis [MC Doukas 2021] [arXiv] [demo](https://crossminds.ai/video/headgan-video-and-audio-driven-talking-head-synthesis-6062842b40ac1ab106a4849e/)
- Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning [Hao Zhu 2020] [IJCAI]
- A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild [KR Prajwal 2020] [ACMMM] [demo](https://crossminds.ai/video/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild-5fecb0d974cbe5b2a4175b62/) [project page](https://github.com/Rudrabha/Wav2Lip)
- Talking Face Generation with Expression-Tailored Generative Adversarial Network [D Zeng 2020] [ACMMM]
- Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis [KR Prajwal 2020] [CVPR] [demo](https://www.youtube.com/watch?v=HziA-jmlk_4) [project page](https://github.com/Rudrabha/Lip2Wav)
- Robust One Shot Audio to Video Generation [N Kumar 2020] [CVPRW] [demo](https://www.facebook.com/wdeepvision2020/videos/925563794582962/)
- Talking Face Generation by Adversarially Disentangled Audio-Visual Representation [Hang Zhou 2019] [AAAI] [demo](https://www.youtube.com/watch?v=-J2zANwdjcQ) [project page](https://github.com/Hangz-nju-cuhk/Talking-Face-Generation-DAVS)
- Talking Face Generation by Conditional Recurrent Adversarial Network [Yang Song 2019] [IJCAI] [demo](https://www.youtube.com/watch?v=Sr4smQo5BAQ) [project page](https://github.com/susanqq/Talking_Face_Generation)
- Realistic Speech-Driven Facial Animation with GANs [Konstantinos Vougioukas 2019] [IJCV] [demo](https://sites.google.com/view/facial-animation) [project page](https://github.com/DinoMan/speech-driven-animation)
- Animating Face using Disentangled Audio Representations [G Mittal 2019] [WACV]
- Lip Movements Generation at a Glance [Lele Chen 2018] [ECCV] [demo](https://www.youtube.com/watch?v=7IX_sIL5v0c) [project page](https://github.com/lelechen63/3d_gan)
- X2Face: A Network for Controlling Face Generation Using Images, Audio, and Pose Codes [Olivia Wiles 2018] [ECCV] [demo](https://www.youtube.com/watch?v=q6dt-2izYM4) [project page](https://github.com/oawiles/X2Face)
- Generative Adversarial Talking Head: Bringing Portraits to Life with a Weakly Supervised Neural Network [HX Pham 2018] [arXiv] [demo](https://www.youtube.com/watch?v=Zr9MlAazPpo)
- You Said That? [JS Chung 2017] [BMVC] [demo](https://www.youtube.com/watch?v=lXhkxjSJ6p8) [project page](https://github.com/joonson/yousaidthat)

## Landmark Based

- Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation [Yuanxun Lu 2021] [SIGGRAPH] [demo](https://replicate.com/yuanxunlu/livespeechportraits) [project page](https://github.com/YuanxunLu/LiveSpeechPortraits)
- Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis [H Wu 2021] [ACMMM] [demo](https://github.com/wuhaozhe/style_avatar) [project page](https://github.com/wuhaozhe/style_avatar)
- MakeItTalk: Speaker-Aware Talking-Head Animation [Yang Zhou 2020] [SIGGRAPH] [demo](https://www.youtube.com/watch?v=vUMGKASgbf8) [project page](https://github.com/yzhou359/MakeItTalk)
- Speech-driven Facial Animation using Cascaded GANs for Learning of Motion and Texture [Dipanjan Das, Sandika Biswas 2020] [ECCV]
- A Neural Lip-Sync Framework for Synthesizing Photorealistic Virtual News Anchors [R Zheng 2020] [ICPR]
- Hierarchical Cross-Modal Talking Face Generation with Dynamic Pixel-Wise Loss [Lele Chen 2019] [CVPR] [demo](https://www.youtube.com/watch?v=eH7h_bDRX2Q&t=50s) [project page](https://github.com/lelechen63/ATVGnet)
- Speech-Driven Facial Reenactment Using Conditional Generative Adversarial Networks [SA Jalalifar 2018] [arXiv]
- Synthesizing Obama: Learning Lip Sync from Audio [Supasorn Suwajanakorn 2017] [SIGGRAPH] [demo](https://www.youtube.com/watch?v=9Yq67CjDqvw)

## 3D Model Based

- Everybody’s Talkin’: Let Me Talk as You Want [Linsen Song 2022] [TIFS] [demo](https://www.youtube.com/watch?v=tNPuAnvijQk)
- One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning [Suzhen Wang 2022] [AAAI] [demo](https://www.youtube.com/watch?v=HHj-XCXXePY) [project page](https://github.com/FuxiVirtualHuman/AAAI22-one-shot-talking-face)
- FaceFormer: Speech-Driven 3D Facial Animation with Transformers [Y Fan 2022] [CVPR] [demo](https://www.youtube.com/watch?v=NYms53uf9YY) [project page](https://github.com/EvelynFan/FaceFormer)
- Iterative Text-based Editing of Talking-heads Using Neural Retargeting [Xinwei Yao 2021] [ICML] [demo](https://www.youtube.com/watch?v=oo4tB0f6uqQ)
- AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis [Yudong Guo 2021] [ICCV] [demo](https://www.youtube.com/watch?v=TQO2EBYXLyU) [project page](https://github.com/YudongGuo/AD-NeRF)
- Audio-Driven Emotional Video Portraits [X Ji 2021] [CVPR] [demo](https://www.youtube.com/watch?v=o6LQfLkizbw) [project page](https://github.com/jixinya/EVP)
- FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning [C Zhang 2021] [ICCV] [demo](https://www.youtube.com/watch?v=hl9ek3bUV1E) [project page](https://github.com/zhangchenxu528/FACIAL)
- Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset [Z Zhang 2021] [CVPR] [demo](https://www.youtube.com/watch?v=uJdBgWYBTww) [project page](https://github.com/MRzzm/HDTF)
- Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion [Suzhen Wang 2021] [IJCAI] [demo](https://www.youtube.com/watch?v=xvcBJ29l8rA) [project page](https://github.com/wangsuzhen/Audio2Head)
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [A Richard 2021] [ICCV] [demo](https://www.facebook.com/MetaResearch/videos/251508987094387/) [project page](https://github.com/facebookresearch/meshtalk)
- 3D-TalkEmo: Learning to Synthesize 3D Emotional Talking Head [Q Wang 2021] [arXiv]
- Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation [L Li 2021] [AAAI] [demo](https://www.youtube.com/watch?v=weHA6LHv-Ew) [project page](https://github.com/FuxiVirtualHuman/Write-a-Speaker)
- Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary [S Zhang 2021] [ICASSP] [demo](https://twitter.com/_akhaliq/status/1389054381182570497) [project page](https://github.com/sibozhang/Text2Video)
- Neural Voice Puppetry: Audio-driven Facial Reenactment [Justus Thies 2020] [ECCV] [demo](https://www.youtube.com/watch?v=s74_yQiJMXA) [project page](https://github.com/miu200521358/NeuralVoicePuppetryMMD)
- Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose [Ran Yi 2020] [arXiv] [project page](https://github.com/yiranran/Audio-driven-TalkingFace-HeadPose)
- Talking-head Generation with Rhythmic Head Motion [Lele Chen 2020] [ECCV] [demo](https://www.youtube.com/watch?v=kToSgSFoRz8) [project page](https://github.com/lelechen63/Talking-head-Generation-with-Rhythmic-Head-Motion)
- Modality Dropout for Improved Performance-driven Talking Faces [Hussen Abdelaziz 2020] [ICMI]
- Audio- and Gaze-driven Facial Animation of Codec Avatars [A Richard 2020] [arXiv] [demo](https://www.youtube.com/watch?v=1nZjW_xoCDQ) [project page](https://research.facebook.com/videos/audio-and-gaze-driven-facial-animation-of-codec-avatars/)
- Text-based Editing of Talking-head Video [Ohad Fried 2019] [arXiv] [demo](https://www.youtube.com/watch?v=0ybLCfVeFL4)
- Capture, Learning, and Synthesis of 3D Speaking Styles [D Cudeiro 2019] [CVPR] [demo](https://www.youtube.com/watch?v=XceCxf_GyW4) [project page](https://github.com/TimoBolkart/voca)
- VisemeNet: Audio-driven Animator-centric Speech Animation [Yang Zhou 2018] [TOG] [demo](https://www.youtube.com/watch?v=kk2EnyMD3mo)
- Speech-Driven Expressive Talking Lips with Conditional Sequential Generative Adversarial Networks [N Sadoughi 2018] [TAC]
- Speech-driven 3D Facial Animation with Implicit Emotional Awareness: A Deep Learning Approach [Hai X. Pham 2017] [IEEE Trans. Syst. Man Cybern.: Syst.]
- Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion [Tero Karras 2017] [TOG] [demo](https://www.youtube.com/watch?v=lDzrfdpGqw4&t) [project page](https://research.nvidia.com/publication/2017-07_audio-driven-facial-animation-joint-end-end-learning-pose-and-emotion)
- A Deep Learning Approach for Generalized Speech Animation [Sarah Taylor 2017] [SIGGRAPH] [demo](https://www.youtube.com/watch?v=GwV1n8v_bpA)
- End-to-end Learning for 3D Facial Animation from Speech [HX Pham 2017] [ICMI]
- JALI: An Animator-Centric Viseme Model for Expressive Lip Synchronization [Pif Edwards 2016] [SIGGRAPH] [demo](https://www.youtube.com/watch?v=vniMsN53ZPI)

## Survey

- What Comprises a Good Talking-Head Video Generation?: A Survey and Benchmark [Lele Chen 2020] [paper](https://arxiv.org/abs/2005.03201)
- Deep Audio-Visual Learning: A Survey [Hao Zhu 2020] [paper](https://arxiv.org/abs/2001.04758)
- Handbook of Digital Face Manipulation and Detection [Yuxin Wang 2022] [paper](https://library.oapen.org/bitstream/handle/20.500.12657/52835/978-3-030-87664-7.pdf?sequence=1)
- Deep Learning for Visual Speech Analysis: A Survey [paper](https://arxiv.org/abs/2205.10839)

## Datasets

- GRID 2006 [project page](http://spandh.dcs.shef.ac.uk/avlombard/)
- TCD-TIMIT 2015 [project page](https://sigmedia.tcd.ie/)
- LRW 2016 [project page](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrw1.html)
- MODALITY 2017 [project page](http://www.modality-corpus.org/)
- ObamaSet 2017
- VoxCeleb1 2017 [project page](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/)
- VoxCeleb2 2018 [project page](https://www.robots.ox.ac.uk/~vgg/data/voxceleb2/)
- LRS2-BBC 2018 [project page](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html)
- LRS3-TED 2018 [project page](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs3.html)
- HDTF 2020 [project page](https://github.com/MRzzm/HDTF)
- CREMA-D 2014 [project page](https://github.com/CheyneyComputerScience/CREMA-D)
- MSP-IMPROV 2016 [project page](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Improv.html)
- RAVDESS 2018 [project page](https://sites.psychlabs.ryerson.ca/smartlab/resources/speech-song-database-ravdess/)
- MELD 2018 [project page](https://affective-meld.github.io/)
- MEAD 2020 [project page](https://wywu.github.io/projects/MEAD/MEAD.html)
- CAVSR1.0 1998
- HIT Bi-CAV 2005
- LRW-1000 2018 [project page](https://github.com/VIPL-Audio-Visual-Speech-Understanding/Lipreading-DenseNet3D)

## Metrics

| Metrics | Paper |
| ------- | ----- |
| PSNR (peak signal-to-noise ratio) | - |
| SSIM (structural similarity index measure) | Image Quality Assessment: From Error Visibility to Structural Similarity |
| CPBD (cumulative probability of blur detection) | A No-Reference Image Blur Metric Based on the Cumulative Probability of Blur Detection |
| LPIPS (learned perceptual image patch similarity) | The Unreasonable Effectiveness of Deep Features as a Perceptual Metric |
| NIQE (natural image quality evaluator) | Making a “Completely Blind” Image Quality Analyzer |
| FID (Fréchet inception distance) | GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium |
| LMD (landmark distance error) | Lip Movements Generation at a Glance |
| LRA (lip-reading accuracy) | Talking Face Generation by Conditional Recurrent Adversarial Network |
| WER (word error rate) | LipNet: End-to-End Sentence-level Lipreading |
| LSE-D (lip sync error, distance) | Out of Time: Automated Lip Sync in the Wild |
| LSE-C (lip sync error, confidence) | Out of Time: Automated Lip Sync in the Wild |
| ACD (average content distance) | FaceNet: A Unified Embedding for Face Recognition and Clustering |
| CSIM (cosine similarity) | ArcFace: Additive Angular Margin Loss for Deep Face Recognition |
| EAR (eye aspect ratio) | Real-Time Eye Blink Detection Using Facial Landmarks |
| ESD (emotion similarity distance) | What Comprises a Good Talking-Head Video Generation?: A Survey and Benchmark |
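Two of the simplest metrics in the table can be computed directly from their definitions. Below is a minimal NumPy sketch of PSNR (frame-level pixel fidelity) and CSIM (cosine similarity between identity embeddings, e.g. from an ArcFace-style face recognizer); the function names and toy inputs are illustrative, not from any cited paper:

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a generated frame."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val**2 / mse)

def csim(a, b):
    """Cosine similarity between two identity embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: two 4x4 grayscale "frames" differing by 10 at every pixel,
# so MSE = 100 and PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
ref = np.full((4, 4), 100.0)
gen = ref + 10.0
print(round(psnr(ref, gen), 2))
```

In practice these are averaged over all frames of a generated video; LPIPS, FID, and the SyncNet-based LSE-D/LSE-C additionally require pretrained networks and are usually computed with the authors' released evaluation code.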