├── .gitignore
├── README.md
└── docs
    ├── images
    │   └── teaser.jpeg
    ├── index.html
    ├── paper.pdf
    ├── static
    │   ├── css
    │   │   ├── bulma-carousel.min.css
    │   │   ├── bulma-slider.min.css
    │   │   ├── bulma.css.map.txt
    │   │   ├── bulma.min.css
    │   │   ├── fontawesome.all.min.css
    │   │   ├── index.css
    │   │   └── video-gallery.css
    │   ├── images
    │   │   └── favicon.ico
    │   └── js
    │       ├── bulma-carousel.js
    │       ├── bulma-carousel.min.js
    │       ├── bulma-slider.js
    │       ├── bulma-slider.min.js
    │       ├── fontawesome.all.min.js
    │       ├── index.js
    │       └── video-gallery.js
    ├── supplementary_material.pdf
    └── videos
        ├── expression_merged_frames.mp4
        ├── free_wild_pose_merged_frames_1.mp4
        ├── free_wild_pose_merged_frames_2.mp4
        ├── front_merged_frames_0.mp4
        ├── front_merged_frames_1.mp4
        ├── front_merged_frames_2.mp4
        ├── recon_merged_frames_0.mp4
        ├── recon_merged_frames_1.mp4
        ├── recon_merged_frames_2.mp4
        ├── reenact_merged_frames_0.mp4
        ├── reenact_merged_frames_1.mp4
        ├── reenact_merged_frames_2.mp4
        ├── reenact_wild_merged_frames_1.mp4
        └── reenact_wild_merged_frames_2.mp4

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# PECHead

[CVPR 2023] High-Fidelity and Freely Controllable Talking Head Video Generation

We don't have any plans to make the code and models publicly available.
Feel free to contact me if you have any questions about this paper.

--------------------------------------------------------------------------------
/docs/images/teaser.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ueoo/PECHead/7b0a8991d8a97762a2e239308f366f7bcbf996de/docs/images/teaser.jpeg

--------------------------------------------------------------------------------
/docs/index.html:
--------------------------------------------------------------------------------
Talking head generation aims to generate a video from a given source identity and a target motion. However, current methods face several challenges that limit the quality and controllability of the generated videos. First, the generated face often exhibits unexpected deformations and severe distortions. Second, movement-relevant information, such as pose and expression, is not explicitly disentangled in the driving image, which restricts the manipulation of different attributes during generation. Third, the generated videos tend to have flickering artifacts because the landmarks extracted from adjacent frames are inconsistent.
In this paper, we propose a novel model that produces high-fidelity talking head videos with free control over head pose and expression. Our method leverages both self-supervised learned landmarks and 3D face model-based landmarks to model the motion. We also introduce a novel motion-aware multi-scale feature alignment module that transfers the motion effectively without distorting the face. Furthermore, we enhance the smoothness of the synthesized talking head videos with a feature context adaptation and propagation module. We evaluate our model on challenging datasets and demonstrate its state-of-the-art performance.
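The paper's code is not released, so the sketch below is only an illustration of the general idea behind a multi-scale feature alignment step, not PECHead's actual module: a dense motion flow (here assumed to be predicted from the landmarks) backward-warps the generator's feature pyramid at each scale. The function names, tensor shapes, and the use of PyTorch's grid_sample are all assumptions made for this example.

import torch
import torch.nn.functional as F

def warp(feat, flow):
    # feat: (B, C, H, W) source features; flow: (B, 2, H, W) pixel offsets (dx, dy).
    _, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=feat.device),
        torch.linspace(-1.0, 1.0, w, device=feat.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2) grid in [-1, 1]
    # Convert pixel offsets to the normalized coordinates grid_sample expects.
    norm = torch.stack((flow[:, 0] * 2.0 / max(w - 1, 1),
                        flow[:, 1] * 2.0 / max(h - 1, 1)), dim=-1)
    return F.grid_sample(feat, base + norm, align_corners=True)

def align_pyramid(feats, flow):
    # feats: list of (B, C_i, H_i, W_i) maps, coarse to fine;
    # flow: a single (B, 2, H, W) field at the finest resolution.
    aligned = []
    for f in feats:
        fl = F.interpolate(flow, size=f.shape[-2:], mode="bilinear", align_corners=True)
        fl = fl * (f.shape[-1] / flow.shape[-1])  # rescale offsets to this resolution
        aligned.append(warp(f, fl))
    return aligned

# Sanity check: zero flow leaves the features (approximately) unchanged.
feats = [torch.randn(1, 64, 32, 32), torch.randn(1, 32, 64, 64)]
out = align_pyramid(feats, torch.zeros(1, 2, 64, 64))

A genuinely motion-aware module would presumably refine the resized flow per scale rather than reuse it verbatim; that refinement is omitted here for brevity.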
Using PECHead, you can create great visual results for head pose and expression editing.

Using PECHead, you can create great visual results for portrait frontalization.

Using PECHead, you can create great visual results for portrait reconstruction.

Using PECHead, you can create great visual results for portrait reenactment.

Using PECHead, you can create great visual results for expression editing.
Using PECHead, you can create great visual results for portrait reconstruction on in-the-wild identities.
@inproceedings{gao2023high,
  title={High-fidelity and freely controllable talking head video generation},
  author={Gao, Yue and Zhou, Yuan and Wang, Jinglu and Li, Xiao and Ming, Xiang and Lu, Yan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5609--5619},
  year={2023}
}