├── .gitattributes
├── .gitignore
├── LICENSE
├── README.md
├── install-dependencies.command
├── install-dependencies.sh
├── resources
│   ├── Logo_Blender.svg.png
│   ├── blender_community_badge_white.png
│   ├── install.png
│   ├── panel_dev.png
│   ├── panel_import.png
│   ├── panel_run.png
│   ├── panel_warning.png
│   ├── side_panel.png
│   ├── speech_driven_animation.gif
│   ├── uninstall.png
│   ├── voca.png
│   └── voca_blender_animation.gif
└── voca-addon
    ├── __init__.py
    ├── audio
    │   ├── sentence20.wav
    │   └── test_sentence.wav
    ├── flame
    │   └── generic_model.pkl
    ├── model
    │   ├── gstep_52280.model.data-00000-of-00001
    │   ├── gstep_52280.model.index
    │   └── gstep_52280.model.meta
    ├── operators.py
    ├── panels.py
    ├── smpl_webuser
    │   ├── LICENSE.txt
    │   ├── __init__.py
    │   ├── lbs.py
    │   ├── posemapper.py
    │   ├── serialization.py
    │   └── verts.py
    ├── template
    │   └── FLAME_sample.ply
    └── utils
        ├── audio_handler.py
        ├── ctypesloader.py
        ├── edit_sequences.py
        └── inference.py
/.gitattributes:
--------------------------------------------------------------------------------
1 | # Auto detect text files and perform LF normalization
2 | * text=auto
3 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | voca-addon/animation-output
2 | voca-addon/ds_graph/output_graph.pb
3 | .DS_Store
4 | voca-addon.zip
5 |
6 | # Byte-compiled / optimized / DLL files
7 | __pycache__/
8 | *.py[cod]
9 | *$py.class
10 |
11 | # C extensions
12 | *.so
13 |
14 | # Distribution / packaging
15 | .Python
16 | build/
17 | develop-eggs/
18 | dist/
19 | downloads/
20 | eggs/
21 | .eggs/
22 | lib/
23 | lib64/
24 | parts/
25 | sdist/
26 | var/
27 | wheels/
28 | *.egg-info/
29 | .installed.cfg
30 | *.egg
31 | MANIFEST
32 |
33 | # PyInstaller
34 | # Usually these files are written by a python script from a template
35 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
36 | *.manifest
37 | *.spec
38 |
39 | # Installer logs
40 | pip-log.txt
41 | pip-delete-this-directory.txt
42 |
43 | # Unit test / coverage reports
44 | htmlcov/
45 | .tox/
46 | .nox/
47 | .coverage
48 | .coverage.*
49 | .cache
50 | nosetests.xml
51 | coverage.xml
52 | *.cover
53 | .hypothesis/
54 | .pytest_cache/
55 |
56 | # Translations
57 | *.mo
58 | *.pot
59 |
60 | # Django stuff:
61 | *.log
62 | local_settings.py
63 | db.sqlite3
64 |
65 | # Flask stuff:
66 | instance/
67 | .webassets-cache
68 |
69 | # Scrapy stuff:
70 | .scrapy
71 |
72 | # Sphinx documentation
73 | docs/_build/
74 |
75 | # PyBuilder
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | .python-version
87 |
88 | # celery beat schedule file
89 | celerybeat-schedule
90 |
91 | # SageMath parsed files
92 | *.sage.py
93 |
94 | # Environments
95 | .env
96 | .venv
97 | env/
98 | venv/
99 | ENV/
100 | env.bak/
101 | venv.bak/
102 |
103 | # Spyder project settings
104 | .spyderproject
105 | .spyproject
106 |
107 | # Rope project settings
108 | .ropeproject
109 |
110 | # mkdocs documentation
111 | /site
112 |
113 | # mypy
114 | .mypy_cache/
115 | .dmypy.json
116 | dmypy.json
117 |
118 | # Pyre type checker
119 | .pyre/
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 Edoardo Conti
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | VOCA Blender Addon
7 |
8 | # 📝 Table of Contents
9 | - [About](#about)
10 | - [Project Topology](#project-topology)
11 | - [Installation and Usage](#ins-usage)
12 | - [Authors](#authors)
13 | - [Acknowledgments](#acknowledgement)
14 |
15 | # 📋 About
16 |
17 |
18 |
19 | VOCA is a simple and generic speech-driven facial animation framework that works across a range of identities. This add-on integrates VOCA within Blender and allows the user to:
20 |
21 | * Run VOCA and synthesize a character animation from a given speech signal. VOCA outputs a sequence of .obj meshes, which must then be imported.
22 | * Import the VOCA output meshes and generate a single animated mesh from them.
23 |
24 | For more details, please see the scientific publication of the VOCA framework [here](https://voca.is.tue.mpg.de/).
25 |
26 | The original VOCA framework repository can be found [here](https://github.com/TimoBolkart/voca).
27 |
28 | # 👩‍💻 Installation and Usage
29 | 1. The add-on works with Blender v2.92.0 (Python 3.7) and requires some dependencies that can be installed directly from the preferences panel.
30 | 2. Download the latest release.
31 | 3. Import the downloaded .zip archive in Blender (Edit > Preferences > Add-ons > Install) and enable the add-on.
32 | 4. Install the dependencies by clicking the dedicated button (a scripted alternative is sketched below).
33 | 5. The add-on options are accessible in the 3D View side panel named "VOCA".
34 | 6. (optional) If you want to uninstall the add-on, you can also uninstall the dependencies from the preferences panel.
35 |
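For headless or scripted setups, the same installer can be triggered from Blender's Python console. A minimal sketch, assuming the add-on is already enabled (`example.install_dependencies` is the operator id registered in `voca-addon/__init__.py`):

```python
import bpy

# runs the same operator as the "Install dependencies" button in the preferences panel
bpy.ops.example.install_dependencies()
```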
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 | ## To generate a new sequence of meshes:
44 | 1. Expand the 'Run VOCA Model' panel.
45 | 2. Select the path to the mesh template (.ply) to animate, the audio file containing the speech signal, and the desired output directory.
46 | 3. Hit 'Run' and wait for the process to finish (the same action can be scripted, as sketched below).
47 |
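The 'Run' button invokes the `opr.runvoca` operator, so a run can also be scripted from the Python console. A minimal sketch with placeholder paths (the scene property names come from `voca-addon/panels.py`):

```python
import bpy

scene = bpy.context.scene
scene.TemplatePath = "/path/to/FLAME_sample.ply"  # mesh template (.ply)
scene.AudioPath    = "/path/to/speech.wav"        # speech signal
scene.OutputPath   = "/path/to/output/"           # output directory
scene.Condition    = 3                            # facial expression index (0-8)

bpy.ops.opr.runvoca()  # same operator the 'Run' button calls
```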
48 |
49 |
50 |
51 | ## To import the VOCA-generated meshes and generate the animated mesh:
52 | 1. Expand the 'Import Mesh' panel.
53 | 2. Select the path to the audio file and the output directory.
54 | 3. Hit 'Import' and wait (or call the operator from the console, as sketched below).
55 |
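Equivalently from the Python console, a minimal sketch with placeholder paths (`choice=1` makes `opr.meshimport` read the Import panel's properties, see `voca-addon/operators.py`):

```python
import bpy

scene = bpy.context.scene
scene.AudioPathMesh = "/path/to/speech.wav"  # audio track added to the sequencer
scene.MeshPath      = "/path/to/meshes/"     # directory containing the VOCA .obj meshes

bpy.ops.opr.meshimport(choice=1)  # same operator the 'Import' button calls
```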
56 |
57 |
58 |
59 | ## The 'Dev' panel allows you to:
60 | * Hide/unhide non-VOCA meshes.
61 | * Remove all meshes from the scene.
62 | * Remove all non-VOCA meshes from the scene.
63 | * Edit sequences (FLAME parameters: blink, shape, head pose), as sketched below.
64 |
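Sequence editing is handled by the `opr.meshedit` operator. A minimal sketch of the blink mode, with placeholder paths (property names from `voca-addon/panels.py`; use `'Shape'` or `'Pose'` with their respective index/variation properties for the other modes):

```python
import bpy

scene = bpy.context.scene
scene.SourceMeshPath_edit = "/path/to/meshes/"            # meshes previously generated by VOCA
scene.OutputPath_edit     = "/path/to/edited/"            # where the edited sequence is written
scene.FlameModelPath_edit = "/path/to/generic_model.pkl"  # FLAME model file
scene.DropdownChoice      = 'Blink'                       # edit mode: 'Blink', 'Shape' or 'Pose'
scene.n_blink             = 2                             # number of eye blinks
scene.duration_blink      = 15                            # blink duration in frames

bpy.ops.opr.meshedit()  # same operator the panel's 'Run' button calls
```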
65 |
66 |
67 |
68 | # 🗂 Project Topology
69 | ```
70 | voca-blender/
71 | ├─ voca-addon/
72 | │ ├─ audio/
73 | │ │ ├─ sentence20.wav
74 | │ │ └─ test_sentence.wav
75 | │ ├─ flame/
76 | │ │ └─ generic_model.pkl
77 | │ ├─ model/
78 | │ │ └─ gstep.model.*
79 | │ ├─ smpl_webuser/
80 | │ │ ├─ lbs.py
81 | │ │ ├─ posemapper.py
82 | │ │ ├─ serialization.py
83 | │ │ └─ verts.py
84 | │ ├─ template/
85 | │ │ └─ FLAME_sample.ply
86 | │ ├─ utils/
87 | │  │  ├─ audio_handler.py
88 | │  │  ├─ ctypesloader.py
89 | │ │ ├─ edit_sequences.py
90 | │ │ └─ inference.py
91 | │ ├─ operators.py
92 | │ ├─ panels.py
93 | │ └─ __init__.py
94 | ├─ install-dependencies.command
95 | ├─ install-dependencies.sh
96 | ├─ LICENSE
97 | └─ README.md
98 | ```
99 |
100 | # ✍️ Authors
101 | - Conti Edoardo [@edoardo-conti](https://github.com/edoardo-conti)
102 | - Federici Lorenzo [@lorenzo-federici](https://github.com/lorenzo-federici)
103 | - Melnic Andrian [@andrian-melnic](https://github.com/andrian-melnic)
104 |
105 | # 🎉 Acknowledgements
106 | - Computer Graphics e Multimedia Class - Professor Primo Zingaretti
107 |
--------------------------------------------------------------------------------
/install-dependencies.command:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | bold=$(tput bold)
4 | normal=$(tput sgr0)
5 |
6 | SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
7 | SCRIPT_PATH="$SCRIPT_DIR/voca-addon/utils/ctypesloader.py"
8 |
9 | # go to home directory
10 | cd $HOME
11 |
12 | # update brew (install if needed)
13 | echo "${bold}> checking homebrew installation...${normal}"
14 | which -s brew
15 | if [[ $? != 0 ]] ; then
16 | # Install Homebrew
17 | /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
18 | echo " ${bold}homebrew installed${normal}"
19 | else
20 | brew update -q
21 | echo " ${bold}ok${normal}"
22 | fi
23 |
24 | # install brew packages (git, wget, pyenv)
25 | echo "\n${bold}> installing brew packages (git, wget, pyenv)...${normal}"
26 | if brew install git wget pyenv ; then
27 | echo " ${bold}ok${normal}"
28 | else
29 | echo " ${bold}failed${normal}"
30 | exit 1
31 | fi
32 |
33 | # install python 3.7.13 if not present and set it as default
34 | echo "\n${bold}> installing python version 3.7.13 (required by voca)${normal}"
35 | if ! grep -q 'eval "$(pyenv init -)"' '.zshrc' ; then
36 | echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zshrc
37 | echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zshrc
38 | echo 'eval "$(pyenv init -)"' >> ~/.zshrc
39 | . $HOME/.zshrc
40 | fi
41 |
42 | if
43 | pyenv install 3.7.13 -s &&
44 | pyenv local 3.7.13 &&
45 | . $HOME/.zshrc
46 | then
47 | echo " ${bold}ok${normal}"
48 | else
49 | echo " ${bold}failed${normal}"
50 | exit 1
51 | fi
52 |
53 | BLENDER_SCRIPTS_DIR="/Applications/Blender.app/Contents/Resources/2.92/scripts"
54 | BLENDER_MODULE_DIR="$BLENDER_SCRIPTS_DIR/modules"
55 |
56 | # create the 'modules' directory in blender if not already there
57 | mkdir -p "$BLENDER_SCRIPTS_DIR"/modules
58 | # make sure pip version is up-to-date
59 | pip install -t $BLENDER_MODULE_DIR -U pip
60 | # install dependencies
61 | echo "\n${bold}> installing pip modules...${normal}"
62 | pip install -t $BLENDER_MODULE_DIR wget numpy scipy chumpy opencv-python resampy python-speech-features tensorflow==1.15.2 scikit-learn image ipython matplotlib trimesh pyrender
63 | pip install -t $BLENDER_MODULE_DIR --upgrade protobuf==3.20.0
64 | pip install -t $BLENDER_MODULE_DIR https://github.com/MPI-IS/mesh/releases/download/v0.4/psbody_mesh-0.4-cp37-cp37m-macosx_10_9_x86_64.whl
65 | cp "$SCRIPT_PATH" "$BLENDER_MODULE_DIR/OpenGL/platform/ctypesloader.py"
66 | echo " ${bold}ok${normal}"
67 |
68 | # reset the python system version
69 | pyenv local system
70 | . $HOME/.zshrc
71 |
72 | echo "\n${bold}> blender voca addon dependencies installed successfully!${normal}"
--------------------------------------------------------------------------------
/install-dependencies.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | bold=$(tput bold)
4 | normal=$(tput sgr0)
5 |
6 | shopt -s xpg_echo
7 |
8 | # go to home directory
9 | cd $HOME
10 |
11 | # install pyenv
12 | echo "${bold}> installing pyenv...${normal}"
13 | if ! command -v pyenv &> /dev/null
14 | then
15 | sudo apt-get update
16 | sudo apt-get install make build-essential libssl-dev zlib1g-dev \
17 | libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
18 | libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev \
19 | git ffmpeg
20 | if curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash ; then
21 | #echo " ${bold}pyenv installed! please, run this script again to continue${normal}"
22 | #exec bash
23 | . $HOME/.bashrc
24 | echo " ${bold}ok${normal}"
25 | else
26 | echo " ${bold}failed${normal}"
27 | exit 1
28 | fi
29 | else
30 | echo " ${bold}skip${normal}"
31 | fi
32 |
33 |
34 | # install python 3.7.13 if not present and set it as default
35 | echo "\n${bold}> installing python version 3.7.13 (required by voca)${normal}"
36 | if ! grep -q 'eval "$(pyenv init -)"' '.bashrc' ; then
37 | echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
38 | echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
39 | echo 'eval "$(pyenv init -)"' >> ~/.bashrc
40 | . $HOME/.bashrc
41 | fi
42 |
43 | if
44 | pyenv install 3.7.13 -s &&
45 | pyenv local 3.7.13 &&
46 | . $HOME/.bashrc
47 | then
48 | echo " ${bold}ok${normal}"
49 | else
50 | echo " ${bold}failed${normal}"
51 | exit 1
52 | fi
53 |
54 | BLENDER_SCRIPTS_DIR="path_to_blender/2.92/scripts"   # <- replace 'path_to_blender' with your Blender installation path
55 | BLENDER_MODULE_DIR="$BLENDER_SCRIPTS_DIR/modules"
56 |
57 | # create the 'modules' directory in blender if not already there
58 | mkdir -p "$BLENDER_SCRIPTS_DIR"/modules
59 | # make sure pip version is up-to-date
60 | pip install -t $BLENDER_MODULE_DIR -U pip
61 | # install dependencies
62 | echo "\n${bold}> installing pip modules...${normal}"
63 | pip install -t $BLENDER_MODULE_DIR wget numpy scipy chumpy opencv-python resampy python-speech-features tensorflow==1.15.2 scikit-learn image ipython matplotlib trimesh pyrender
64 | pip install -t $BLENDER_MODULE_DIR --upgrade protobuf==3.20.0
65 | pip install -t $BLENDER_MODULE_DIR https://github.com/MPI-IS/mesh/releases/download/v0.4/psbody_mesh-0.4-cp37-cp37m-linux_x86_64.whl
66 | echo " ${bold}ok${normal}"
67 |
68 | # reset the python system version
69 | pyenv local system
70 | . $HOME/.bashrc
71 |
72 | echo "\n${bold}> blender voca addon dependencies installed successfully!${normal}"
--------------------------------------------------------------------------------
/resources/Logo_Blender.svg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/Logo_Blender.svg.png
--------------------------------------------------------------------------------
/resources/blender_community_badge_white.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/blender_community_badge_white.png
--------------------------------------------------------------------------------
/resources/install.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/install.png
--------------------------------------------------------------------------------
/resources/panel_dev.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/panel_dev.png
--------------------------------------------------------------------------------
/resources/panel_import.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/panel_import.png
--------------------------------------------------------------------------------
/resources/panel_run.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/panel_run.png
--------------------------------------------------------------------------------
/resources/panel_warning.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/panel_warning.png
--------------------------------------------------------------------------------
/resources/side_panel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/side_panel.png
--------------------------------------------------------------------------------
/resources/speech_driven_animation.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/speech_driven_animation.gif
--------------------------------------------------------------------------------
/resources/uninstall.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/uninstall.png
--------------------------------------------------------------------------------
/resources/voca.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/voca.png
--------------------------------------------------------------------------------
/resources/voca_blender_animation.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/resources/voca_blender_animation.gif
--------------------------------------------------------------------------------
/voca-addon/__init__.py:
--------------------------------------------------------------------------------
1 | bl_info = {
2 | "name": "VOCA Add-On",
3 | "author": "Sasageyo",
4 | "version": (1, 0, 0),
5 | "blender": (2, 92, 0),
6 | "location": "View3D",
7 |     "description": "Add-on for the VOCA framework",
8 | "warning": "Requires installation of dependencies",
9 | "doc_url": "",
10 | "category": "VOCA",
11 | }
12 |
13 | import bpy
14 | import os
15 | import sys
16 | import subprocess
17 | import importlib
18 | import shutil
19 | from collections import namedtuple
20 | from sys import platform
21 | import time
22 |
23 | # Declare all modules that this add-on depends on, that may need to be installed. The package and (global) name can be
24 | # set to None, if they are equal to the module name. See import_module and ensure_and_import_module for the explanation
25 | # of the arguments. DO NOT use this to import other parts of your Python add-on, import them as usual with an
26 | # "import" statement.
27 | Dependency = namedtuple("Dependency", ["module", "package", "name", "importable"])
28 | dependencies = (Dependency(module="scipy", package=None, name=None, importable=True),
29 | Dependency(module="chumpy", package=None, name="ch", importable=True),
30 | Dependency(module="cv2", package="opencv-python", name=None, importable=True),
31 | Dependency(module="resampy", package=None, name=None, importable=True),
32 | Dependency(module="python_speech_features", package=None, name=None, importable=True),
33 | Dependency(module="tensorflow", package="tensorflow==1.15.2", name="tf", importable=True),
34 | Dependency(module="sklearn", package="scikit-learn", name=None, importable=True),
35 | Dependency(module="ipython", package=None, name=None, importable=False),
36 | Dependency(module="matplotlib", package=None, name=None, importable=True),
37 | Dependency(module="trimesh", package=None, name=None, importable=True),
38 | Dependency(module="pyrender", package=None, name=None, importable=False))
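# For example, Dependency(module="cv2", package="opencv-python", name=None, importable=True)
# means: run "pip install opencv-python", then bind importlib.import_module("cv2") to
# globals()["cv2"]; entries with importable=False are installed but never imported here.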
39 |
40 | dependencies_installed = False
41 |
42 | PROP_DEP = [
43 | ('installing', bpy.props.BoolProperty(default = False)),
44 | ('uninstalling', bpy.props.BoolProperty(default = False))
45 | ]
46 |
47 | def refresh_all_areas():
48 | for wm in bpy.data.window_managers:
49 | for w in wm.windows:
50 | for area in w.screen.areas:
51 | area.tag_redraw()
52 |
53 | def install_pip():
54 | environ_copy = dict(os.environ)
55 | environ_copy["PYTHONNOUSERSITE"] = "1"
56 |
57 | try:
58 | # Check if pip is already installed
59 | subprocess.run([sys.executable, "-m", "pip", "--version"], check=True)
60 | except subprocess.CalledProcessError:
61 | # install pip
62 | import ensurepip
63 | ensurepip.bootstrap()
64 | os.environ.pop("PIP_REQ_TRACKER", None)
65 |
66 | # update pip
67 | subprocess.run([sys.executable, "-m", "pip", "install", "-U", "pip"], check=True)
68 |
69 | # some version checks before start!
70 | # fix protobuf module version
71 | subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "protobuf==3.20.0"], check=True, env=environ_copy)
72 | # todo: maybe macos only!
73 | subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "numpy==1.21.6"], check=True, env=environ_copy)
74 | subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "numba==0.55.2"], check=True, env=environ_copy)
75 |
76 |
77 | def import_module(module_name, global_name=None, importable=False, reload=True):
78 | if global_name is None:
79 | global_name = module_name
80 |
81 | if importable :
82 | if global_name in globals():
83 | importlib.reload(globals()[global_name])
84 | print(module_name + ' module already there')
85 | else:
86 | # Attempt to import the module and assign it to globals dictionary. This allow to access the module under
87 | # the given name, just like the regular import would.
88 | globals()[global_name] = importlib.import_module(module_name)
89 | print(module_name + ' module successfully imported')
90 |
91 |
92 | def install_and_import_module(module_name, package_name=None, global_name=None, importable=False):
93 | if package_name is None:
94 | package_name = module_name
95 | if global_name is None:
96 | global_name = module_name
97 |
98 | # Create a copy of the environment variables and modify them for the subprocess call
99 | environ_copy = dict(os.environ)
100 | environ_copy["PYTHONNOUSERSITE"] = "1"
101 | # launch pip install
102 | subprocess.run([sys.executable, "-m", "pip", "install", package_name], check=True, env=environ_copy)
103 |
104 | # The installation succeeded, attempt to import the module again
105 | import_module(module_name, global_name, importable)
106 |
107 |
108 | def complete_installation():
109 | # Create a copy of the environment variables and modify them for the subprocess call
110 | environ_copy = dict(os.environ)
111 | environ_copy["PYTHONNOUSERSITE"] = "1"
112 |
113 | # fix protobuf module version
114 | # subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "protobuf==3.20.0"], check=True, env=environ_copy)
115 |
116 | # install Mesh from remote wheel
117 | temp_whl = "https://github.com/MPI-IS/mesh/releases/download/v0.4/psbody_mesh-0.4-cp37-cp37m-macosx_10_9_x86_64.whl"
118 | subprocess.run([sys.executable, "-m", "pip", "install", temp_whl], check=True, env=environ_copy)
119 |
120 |     # fix the OpenGL package (only on macOS -> darwin)
121 | if platform == "darwin":
122 | # get the path to the patched python file
123 | src = bpy.utils.user_resource('SCRIPTS', 'addons') + '/voca-addon/utils/ctypesloader.py'
124 | # get the path to the python library inside the blender app
125 | dst = sys.path[next(i for i, string in enumerate(sys.path) if 'site-packages' in string)] + "/OpenGL/platform/ctypesloader.py"
126 | try:
127 | shutil.copy(src, dst)
128 | print("OpenGL fixed successfully (macOS only)")
129 | except shutil.SameFileError:
130 | print("Source and destination represents the same file.")
131 | except PermissionError:
132 | print("Permission denied.")
133 |         except Exception:
134 | print("Error occurred while copying file.")
135 |
136 |
137 | def uninstall_modules():
138 | # Create a copy of the environment variables and modify them for the subprocess call
139 | environ_copy = dict(os.environ)
140 | environ_copy["PYTHONNOUSERSITE"] = "1"
141 |
142 | for dependency in dependencies:
143 | module_to_remove = dependency.module if dependency.package is None else dependency.package
144 |
145 | subprocess.run([sys.executable, "-m", "pip", "uninstall", "-y", module_to_remove], check=True, env=environ_copy)
146 |
147 | # mode = True -> register operators and panels (False -> unregister)
148 | def custom_un_register(mode):
149 | from . panels import run_model_panel, mesh_import_panel, dev_pannel
150 | from . operators import Run_VOCA, Mesh_Import, Mesh_Edit, Mesh_Delete, Mesh_Delete_Other
151 |
152 | CLASSES = [
153 | Run_VOCA,
154 | Mesh_Import,
155 | Mesh_Edit,
156 | run_model_panel,
157 | mesh_import_panel,
158 | dev_pannel,
159 | Mesh_Delete,
160 | Mesh_Delete_Other
161 | ]
162 |     from . panels import PROPS
163 |
164 | if mode:
165 | for PROP in PROPS.values():
166 | for (prop_name, prop_value) in PROP:
167 | setattr(bpy.types.Scene, prop_name, prop_value)
168 |
169 | for cls in CLASSES:
170 | bpy.utils.register_class(cls)
171 | else:
172 | for PROP in PROPS.values():
173 | for (prop_name, prop_value) in PROP:
174 | delattr(bpy.types.Scene, prop_name)
175 |
176 | for klass in CLASSES:
177 | bpy.utils.unregister_class(klass)
178 |
179 | class EXAMPLE_PT_warning_panel(bpy.types.Panel):
180 | bl_label = "Dependencies Warning"
181 | bl_category = "VOCA"
182 | bl_space_type = "VIEW_3D"
183 | bl_region_type = "UI"
184 |
185 | @classmethod
186 | def poll(self, context):
187 | return not dependencies_installed
188 |
189 | def draw(self, context):
190 | layout = self.layout
191 |
192 | if not context.scene.installing :
193 | lines = [f"Please install the missing dependencies for the \"{bl_info.get('name')}\" add-on.",
194 | f"1. Open the preferences (Edit > Preferences > Add-ons).",
195 | f"2. Search for the \"{bl_info.get('name')}\" add-on.",
196 | f"3. Open the details section of the add-on.",
197 | f"4. Click on the \"{EXAMPLE_OT_install_dependencies.bl_label}\" button.",
198 | f" This will download and install the missing Python packages, if Blender has the required",
199 | f" permissions."]
200 | else :
201 |             lines = [f"Installing the add-on's dependencies.",
202 |                      f"Please wait until the process completes."]
203 |
204 | for line in lines:
205 | layout.label(text=line)
206 |
207 | class EXAMPLE_OT_install_dependencies(bpy.types.Operator):
208 | bl_idname = "example.install_dependencies"
209 | bl_label = "Install dependencies"
210 | bl_description = ("Downloads and installs the required python packages for this add-on. "
211 | "Internet connection is required. Blender may have to be started with "
212 | "elevated permissions in order to install the package")
213 | bl_options = {"REGISTER", "INTERNAL"}
214 |
215 | @classmethod
216 | def poll(self, context):
217 | # Deactivate when dependencies have been installed
218 | return not dependencies_installed
219 |
220 | def execute(self, context):
221 | try:
222 | # change ui and refresh
223 | context.scene.installing = True
224 | refresh_all_areas()
225 | context.area.tag_redraw()
226 | time.sleep(1)
227 | context.area.tag_redraw()
228 | # start install ->
229 | install_pip()
230 | for dependency in dependencies:
231 | install_and_import_module(module_name=dependency.module,
232 | package_name=dependency.package,
233 | global_name=dependency.name,
234 | importable=dependency.importable)
235 | complete_installation()
236 | # <- end install
237 | # change ui and refresh
238 | context.scene.installing = False
239 | refresh_all_areas()
240 |
241 | except (subprocess.CalledProcessError, ImportError) as err:
242 | self.report({"ERROR"}, str(err))
243 | return {"CANCELLED"}
244 |
245 | global dependencies_installed
246 | dependencies_installed = True
247 |
248 | # Import and register the panels and operators since dependencies are installed
249 | custom_un_register(True)
250 |
251 | return {"FINISHED"}
252 |
253 | class EXAMPLE_OT_uninstall_dependencies(bpy.types.Operator):
254 | bl_idname = "example.uninstall_dependencies"
255 | bl_label = "Uninstall dependencies"
256 | bl_description = ("Uninstalls the required python packages for this add-on. ")
257 | bl_options = {"REGISTER", "INTERNAL"}
258 |
259 | @classmethod
260 | def poll(self, context):
261 | # Deactivate when dependencies have been uninstalled
262 | return dependencies_installed
263 |
264 | def execute(self, context):
265 | try:
266 | uninstall_modules()
267 | except (subprocess.CalledProcessError, ImportError) as err:
268 | self.report({"ERROR"}, str(err))
269 | return {"CANCELLED"}
270 |
271 | global dependencies_installed
272 | dependencies_installed = False
273 |
274 |         # Unregister the panels and operators since the dependencies have been removed
275 | custom_un_register(False)
276 |
277 | return {"FINISHED"}
278 |
279 | class EXAMPLE_preferences(bpy.types.AddonPreferences):
280 | bl_idname = __name__
281 |
282 | def draw(self, context):
283 | layout = self.layout
284 | row = layout.row()
285 | row.operator(EXAMPLE_OT_install_dependencies.bl_idname, icon="CONSOLE")
286 | row.operator(EXAMPLE_OT_uninstall_dependencies.bl_idname, icon="CONSOLE")
287 |
288 | preference_classes = (EXAMPLE_PT_warning_panel,
289 | EXAMPLE_OT_install_dependencies,
290 | EXAMPLE_OT_uninstall_dependencies,
291 | EXAMPLE_preferences)
292 |
293 | # ADD-ON func ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
294 | def register():
295 | global dependencies_installed
296 | dependencies_installed = False
297 |
298 | for cls in preference_classes:
299 | bpy.utils.register_class(cls)
300 | for (prop_name, prop_value) in PROP_DEP:
301 | setattr(bpy.types.Scene, prop_name, prop_value)
302 |
303 | try:
304 | for dependency in dependencies:
305 | import_module(module_name=dependency.module, global_name=dependency.name, importable=dependency.importable)
306 | dependencies_installed = True
307 |
308 | # Import and register the panels and operators since dependencies are installed
309 | custom_un_register(True)
310 | except ModuleNotFoundError:
311 | # Don't register other panels, operators etc.
312 | print("error, some packages are missing")
313 |
314 | def unregister():
315 | for cls in preference_classes:
316 | bpy.utils.unregister_class(cls)
317 | for (prop_name, prop_value) in PROP_DEP:
318 | delattr(bpy.types.Scene, prop_name)
319 |
320 | if dependencies_installed:
321 | custom_un_register(False)
322 |
323 | if __name__ == "__main__":
324 | register()
325 |
326 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
327 |
--------------------------------------------------------------------------------
/voca-addon/audio/sentence20.wav:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/audio/sentence20.wav
--------------------------------------------------------------------------------
/voca-addon/audio/test_sentence.wav:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/audio/test_sentence.wav
--------------------------------------------------------------------------------
/voca-addon/flame/generic_model.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/flame/generic_model.pkl
--------------------------------------------------------------------------------
/voca-addon/model/gstep_52280.model.data-00000-of-00001:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/model/gstep_52280.model.data-00000-of-00001
--------------------------------------------------------------------------------
/voca-addon/model/gstep_52280.model.index:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/model/gstep_52280.model.index
--------------------------------------------------------------------------------
/voca-addon/model/gstep_52280.model.meta:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/model/gstep_52280.model.meta
--------------------------------------------------------------------------------
/voca-addon/operators.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------- #
2 | # Modules imports #
3 | # ---------------------------------------------------------------------------- #
4 | import bpy
5 | import sys
6 | sys.tracebacklimit = -1
7 | from bpy.types import Operator
8 | from bpy.props import IntProperty
9 |
10 | from os import listdir
11 | from contextlib import redirect_stdout
12 | from pathlib import Path
13 |
14 | from . utils.inference import inference
15 | from . utils.edit_sequences import add_eye_blink
16 | from . utils.edit_sequences import alter_sequence_shape
17 | from . utils.edit_sequences import alter_sequence_head_pose
18 |
19 | # ---------------------------------------------------------------------------- #
20 | # MAIN OPERATORS #
21 | # ---------------------------------------------------------------------------- #
22 |
23 | # --------------------------------- Run VOCA --------------------------------- #
24 | class Run_VOCA(Operator):
25 | """VOCA Inference"""
26 | bl_idname = "opr.runvoca"
27 | bl_label = "Run VOCA"
28 | bl_options = {'REGISTER', 'UNDO'}
29 |
30 | def execute(self, context):
31 | # get props
32 | (template_fname, audio_fname, out_path, uv_template_fname, texture_img_fname, condition_idx) = (
33 | context.scene.TemplatePath,
34 | context.scene.AudioPath,
35 | context.scene.OutputPath,
36 | context.scene.TextureObjPath,
37 | context.scene.TextureIMGPath,
38 | context.scene.Condition
39 | )
40 |
41 | # Standard VOCA's Path
42 | addondir = bpy.utils.user_resource('SCRIPTS', 'addons')
43 | tf_model_fname = addondir + '/voca-addon/model/gstep_52280.model'
44 | ds_fname = addondir + '/voca-addon/ds_graph/output_graph.pb'
45 |
46 | # Inference
47 | print("Start inference")
48 | try:
49 | inference(tf_model_fname,
50 | ds_fname,
51 | audio_fname,
52 | template_fname,
53 | condition_idx,
54 | uv_template_fname,
55 | texture_img_fname,
56 | out_path)
57 | except Exception as e:
58 |             self.report({"ERROR"}, ("Error: " + str(e)))
59 | return {"CANCELLED"}
60 | print("End inference\n")
61 |
62 | # Call Import Meshes
63 | try:
64 | bpy.ops.opr.meshimport('EXEC_DEFAULT', choice = 2)
65 | except Exception as e:
66 |             self.report({"ERROR"}, ("Error: " + str(e)))
67 |
68 | return {'FINISHED'}
69 |
70 |
71 | # ------------------------------- Import Meshes ------------------------------ #
72 |
73 | class Mesh_Import(Operator):
74 |     """Import VOCA meshes and build a single animated mesh"""
75 | bl_idname = "opr.meshimport"
76 | bl_label = "Mesh Import"
77 | bl_options = {'REGISTER', 'UNDO'}
78 |
79 |     choice : IntProperty(default = 1)  # 1 = Import Mesh panel, 2 = after Run VOCA, 3 = after Mesh Edit
80 |
81 | def create_shapekeys(self, directory):
82 | #directory = directory + "/meshes"
83 | filepaths = [Path(directory, f) for f in listdir(directory)]
84 | filepaths.sort()
85 |
86 | def import_obj(filepaths):
87 | with redirect_stdout(None):
88 | try:
89 | bpy.ops.import_scene.obj(filepath=filepaths, split_mode="OFF")
90 | except Exception as e:
91 |                     self.report({"ERROR"}, ("Error: " + str(e)))
92 |
93 |
94 |
95 |
96 | def objtodefault(obj):
97 | # set default position and rotation
98 | obj.location = [0, 0, 0]
99 | obj.rotation_euler = [0, 0, 0] # face z-axis
100 |
101 |
102 | # import first obj
103 | import_obj(str(filepaths[0]))
104 | # foreground selected object
105 | main_obj = bpy.context.selected_objects[-1]
106 | objtodefault(main_obj)
107 |
108 |         # todo: to be reviewed
109 | main_obj.shape_key_add(name=main_obj.name)
110 | # mesh smoothing
111 | for face in main_obj.data.polygons:
112 | face.use_smooth = True
113 |         # todo: add a shape key for each time step, with each mesh relative to the first one
114 | main_key = main_obj.data.shape_keys
115 | # active foreground object
116 | bpy.context.view_layer.objects.active = main_obj
117 | seq_len = len(filepaths)
118 |
119 | # import the rest
120 | try:
121 | for i, filepath in enumerate(filepaths[1:]):
122 | import_obj(str(filepath))
123 | # foreground selected object
124 | current_obj = bpy.context.selected_objects[-1]
125 | objtodefault(current_obj)
126 |
127 | # join shape keys from imported meshes into the first one
128 | bpy.ops.object.join_shapes()
129 |
130 |                 # remove the other imported meshes
131 | bpy.data.objects.remove(current_obj, do_unlink=True)
132 | except Exception as e:
133 |             self.report({"ERROR"}, ("Error: " + str(e)))
134 |
135 |
136 |         # set keyframes: each shape key ramps 0 -> 1 -> 0 over three consecutive frames, so one key is fully active per frame
137 | main_key.use_relative = True
138 | for i, key_block in enumerate(main_key.key_blocks[1:]):
139 | key_block.value = 0.0
140 | key_block.keyframe_insert("value", frame=i)
141 | key_block.value = 1.0
142 | key_block.keyframe_insert("value", frame=i+1)
143 | key_block.value = 0.0
144 | key_block.keyframe_insert("value", frame=i+2)
145 |
146 | # set start/end time
147 | bpy.context.scene.frame_start = 0
148 | bpy.context.scene.frame_end = seq_len - 1
149 | bpy.context.scene.frame_set(0)
150 |
151 | # rename obj
152 | main_obj.name = "VOCA_mesh"
153 |
154 | def add_audio(self, scene, audio_filepath):
155 | try:
156 | name_audio = 'VOCA_' + (audio_filepath.rsplit("/")[-1])
157 | scene.sequence_editor.sequences.new_sound(name_audio, audio_filepath, 1, 0)
158 | except Exception as e:
159 |             self.report({"ERROR"}, ("Error: " + str(e)))
160 |
161 |
162 | def execute(self, context):
163 | if self.choice == 1:
164 | # params of IMPORT MESHES PANEL
165 | (audio_fname, out_path) = (
166 | context.scene.AudioPathMesh,
167 | context.scene.MeshPath
168 | )
169 | self.add_audio(context.scene, audio_fname)
170 | elif self.choice == 2:
171 |             # params of RUN MODEL PANEL
172 | (audio_fname, out_path) = (
173 | context.scene.AudioPath,
174 | (context.scene.OutputPath + 'meshes/')
175 | )
176 | self.add_audio(context.scene, audio_fname)
177 | elif self.choice == 3:
178 | # params of EDIT PANEL
179 | (audio_fname, out_path) = (
180 | context.scene.AudioPath_edit,
181 | (context.scene.OutputPath_edit + 'meshes/')
182 | )
183 |
184 | list_sounds = context.scene.sequence_editor.sequences
185 | cond = any('VOCA' in strip.name for strip in list_sounds)
186 | if not cond:
187 | self.add_audio(context.scene, audio_fname)
188 |
189 | print("IMPORT\n")
190 | # IMPORTING MESHES
191 | try:
192 | self.create_shapekeys(out_path)
193 | except Exception as e:
194 |             self.report({"ERROR"}, ("Error: " + str(e)))
195 |
196 | # set the camera location and rotation
197 | context.scene.camera.rotation_euler = (0,0,0)
198 | context.scene.camera.location = (0, -0.02, 1.2)
199 |
200 | # set frame rate to 60 fps
201 | context.scene.render.fps = 60
202 |
203 | return {'FINISHED'}
204 |
205 |
206 | # ----------------------------- Edit Meshes VOCA ----------------------------- #
207 | class Mesh_Edit(Operator):
208 |     """Edit a VOCA mesh sequence (blink, shape, head pose)"""
209 | bl_idname = "opr.meshedit"
210 | bl_label = "Mesh Edit"
211 | bl_options = {'REGISTER', 'UNDO'}
212 |
213 | def execute(self, context):
214 | # get props
215 | (source_path, out_path, flame_model_path, mode, uv_template_fname, texture_img_fname) = (
216 | context.scene.SourceMeshPath_edit,
217 | context.scene.OutputPath_edit,
218 | context.scene.FlameModelPath_edit,
219 | context.scene.DropdownChoice,
220 | context.scene.TextureObjPath_edit,
221 | context.scene.TextureIMGPath_edit,
222 | )
223 |
224 | mode_edit = (
225 | context.scene.n_blink,
226 | context.scene.duration_blink,
227 | context.scene.index_shape,
228 | context.scene.maxVariation_shape,
229 | context.scene.index_pose,
230 | context.scene.maxVariation_pose
231 | )
232 |
233 | # switch modes
234 | try:
235 | if mode == 'Blink':
236 | (param_a, param_b, _, _, _, _) = mode_edit
237 | add_eye_blink(source_path, out_path, flame_model_path, param_a, param_b,
238 | uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
239 | elif mode == 'Shape':
240 | (_, _, param_a, param_b, _, _) = mode_edit
241 | alter_sequence_shape(source_path, out_path, flame_model_path, pc_idx=param_a, pc_range=(
242 | 0, param_b), uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
243 | elif mode == 'Pose':
244 | (_, _, _, _, param_a, param_b) = mode_edit
245 | alter_sequence_head_pose(source_path, out_path, flame_model_path, pose_idx=param_a,
246 | rot_angle=param_b, uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
247 | except Exception as e:
248 |             self.report({"ERROR"}, ("Error: " + str(e)))
249 |
250 | # Call Import Meshes
251 | try:
252 | bpy.ops.opr.meshimport('EXEC_DEFAULT', choice = 3)
253 | except Exception as e:
254 |             self.report({"ERROR"}, ("Error: " + str(e)))
255 |
256 | return {'FINISHED'}
257 |
258 | # ---------------------------------------------------------------------------- #
259 | # OTHER OPERATORS #
260 | # ---------------------------------------------------------------------------- #
261 |
262 | # ----------------------------- DELETE ALL MESHES ---------------------------- #
263 | class Mesh_Delete(Operator):
264 | bl_idname = 'object.delete_meshes'
265 | bl_label = 'Delete meshes'
266 | bl_description = 'Delete all meshes'
267 | bl_options = {'REGISTER', 'UNDO'}
268 |
269 | def execute(self, context):
270 | for obj in bpy.context.scene.objects:
271 | if obj.type == 'MESH':
272 | obj.select_set(True)
273 | bpy.ops.object.delete()
274 | return {'FINISHED'}
275 |
276 | # ---------------------------- DELETE OTHER MESHES --------------------------- #
277 | class Mesh_Delete_Other(Operator):
278 | bl_idname = 'object.delete_other_meshes'
279 | bl_label = 'Delete other meshes'
280 | bl_description = 'Delete all other meshes'
281 | bl_options = {'REGISTER', 'UNDO'}
282 |
283 | def execute(self, context):
284 | for obj in bpy.context.scene.objects:
285 | print(obj.name)
286 |             if obj.type == 'MESH' and ("VOCA" not in obj.name):
287 | obj.select_set(True)
288 | bpy.ops.object.delete()
289 | return {'FINISHED'}
--------------------------------------------------------------------------------
/voca-addon/panels.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------- #
2 | # Module imports #
3 | # ---------------------------------------------------------------------------- #
4 |
5 | import bpy
6 | import re
7 |
8 | from math import pi
9 | from bpy.types import Panel
10 | from bpy.props import ( StringProperty,
11 | BoolProperty,
12 | IntProperty,
13 | FloatProperty,
14 | EnumProperty,
15 | )
16 |
17 | # ---------------------------------------------------------------------------- #
18 | # Callbacks and props #
19 | # ---------------------------------------------------------------------------- #
20 |
21 |
22 | def loadUI_w_label(scene, name, box):
23 | row = box.row()
24 | # add space on var name
25 | name_string = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ',
26 | name).replace('_edit', '').replace(' Path', '')
27 | row.label(text=name_string + ': ')
28 | row = box.row()
29 | row.prop(scene, name)
30 |
31 |
32 | def loadUI_no_label(scene, box, props, check, choice_texture):
33 | for (prop_name, _) in props:
34 | row = box.row()
35 | # Disable row if not checked
36 | if prop_name != check:
37 | row = row.row()
38 | row.enabled = choice_texture
39 | row.prop(scene, prop_name)
40 |
41 |
42 | def hide_callback(self, context):
43 | for obj in bpy.context.blend_data.objects:
44 |         if obj.type == 'MESH' and ("VOCA" not in obj.name):
45 | obj.hide_viewport = context.scene.hide
46 |
47 | PROPS = {
48 | # -------------------------------- Main props -------------------------------- #
49 | 'RUN': [
50 | ('TemplatePath', StringProperty(name = "", default = "path_to_template.ply", description = "Define the root path of the Template", subtype = 'FILE_PATH')),
51 | ('AudioPath', StringProperty(name = "", default = "path_to_audio.wav", description = "Define the root path of the Audio", subtype = 'FILE_PATH')),
52 | ('OutputPath', StringProperty(name = "", default = "path_to_output_meshes/", description = "Define the root path of the Output", subtype = 'DIR_PATH')),
53 | ('Condition', IntProperty(name = "", default = 3, description = "facial expression", min = 0, max = 8))
54 | ],
55 | 'TEXTURE_RUN':[
56 | ('AddTexture', BoolProperty(name = 'Add Texture', default = False)),
57 | ('TextureObjPath', StringProperty(name = "", default = "path_to_OBJ/", description = "Define the root path of the OBJ texture", subtype = 'FILE_PATH')),
58 | ('TextureIMGPath', StringProperty(name = "", default = "path_to_IMG/", description = "Define the root path of the IMG texture", subtype = 'FILE_PATH'))
59 | ],
60 | 'MESH' : [
61 | ('AudioPathMesh', StringProperty(name = "", default = "path_to_audio.wav", description = "Define the root path of the Audio", subtype = 'FILE_PATH')),
62 | ('MeshPath', StringProperty(name = "", default = "path_to_meshes/", description = "Define the root path of the Output", subtype = 'DIR_PATH'))
63 | ],
64 | # -------------------------------- Edit props -------------------------------- #
65 | 'EDIT':[
66 | ('SourceMeshPath_edit', StringProperty(name = "", default = "path_to_source_meshes/", description = "Define the root path of the Source", subtype = 'DIR_PATH')),
67 | ('OutputPath_edit', StringProperty(name = "", default = "path_to_output_edited/", description = "Define the root path of the Output", subtype = 'DIR_PATH')),
68 | ('FlameModelPath_edit', StringProperty(name = "", default = "path_to_model.pkl", description = "Define the path of the FLAME model", subtype = 'FILE_PATH')),
69 | ('AudioPath_edit', StringProperty(name = "", default = "path_to_audio.wav", description = "Define the root path of the Audio", subtype = 'FILE_PATH')),
70 | ('DropdownChoice', EnumProperty(
71 | items=(
72 | ("Blink", "Blink", "Tooltip for Blink"),
73 | ("Shape", "Shape", "Tooltip for Shape"),
74 | ("Pose", "Pose", "Tooltip for Pose"),
75 | ),
76 | name = "Edit",
77 | default = "Blink",
78 |             description = "Dropdown to choose the edit mode"))
79 | ],
80 | 'BLINK':[
81 |         ('n_blink', IntProperty(name = "Number", default = 2, description = "Number of eye blinks", min = 0, max = 100)),
82 |         ('duration_blink', IntProperty(name = "Duration", default = 15, description = "Duration of each blink in frames", min = 5, max = 30))
83 | ],
84 | 'SHAPE':[
85 | ('index_shape', IntProperty(name = "Index", default = 0, description = "", min = 0, max = 299)),
86 | ('maxVariation_shape', IntProperty(name = "Max Variation", default = 3, description = "", min = 0, max = 3))
87 | ],
88 | 'POSE':[
89 | ('index_pose', IntProperty(name = "Index", default = 3, description = "", min = 3, max = 5)),
90 | ('maxVariation_pose', FloatProperty(name = "Max Variation", default = 0.52, description = "", min = 0.0, max = (2*pi - 0.01)))
91 | ],
92 | 'TEXTURE_EDIT':[
93 | ('AddTexture_edit', BoolProperty(name = 'Add Texture', default = False)),
94 | ('TextureObjPath_edit', StringProperty(name = "", default = "path_to_OBJ/", description = "Define the root path of the OBJ texture", subtype = 'FILE_PATH')),
95 | ('TextureIMGPath_edit', StringProperty(name = "", default = "path_to_IMG/", description = "Define the root path of the IMG texture", subtype = 'FILE_PATH'))
96 | ],
97 | # -------------------------------- Other props ------------------------------- #
98 | 'HIDE': [('hide', BoolProperty(name="Hide meshes", description="Check-box to hide no-VOCA meshes", default=False, update=hide_callback))]
99 | }
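# Each (name, property) pair above is attached to bpy.types.Scene by custom_un_register()
# in __init__.py via setattr(bpy.types.Scene, name, prop), which is why the panels below
# can read e.g. context.scene.TemplatePath or context.scene.hide.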
100 |
101 | # ---------------------------------------------------------------------------- #
102 | # Panels #
103 | # ---------------------------------------------------------------------------- #
104 |
105 | # ------------------------------ Run Voca panel ------------------------------ #
106 | class run_model_panel(Panel):
107 | bl_idname = "VOCA_PT_run_model"
108 | bl_label = 'Run VOCA Model'
109 | bl_space_type = 'VIEW_3D'
110 | bl_region_type = 'UI'
111 | bl_category = 'VOCA'
112 | # bl_options = {'DEFAULT_CLOSED'}
113 |
114 | def draw(self, context):
115 | layout = self.layout
116 |
117 | box = layout.box()
118 | for (prop_name, _) in PROPS['RUN']:
119 | loadUI_w_label(context.scene, prop_name, box)
120 |
121 | # Texture
122 | box = layout.box()
123 | choice_texture = context.scene.AddTexture
124 | loadUI_no_label(context.scene, box, PROPS['TEXTURE_RUN'], 'AddTexture', choice_texture)
125 |
126 | col = self.layout.column()
127 | col.operator('opr.runvoca', text='Run')
128 |
129 | # ---------------------------- Import meshes panel --------------------------- #
130 | class mesh_import_panel(Panel):
131 | bl_idname = "VOCA_PT_mesh_import"
132 | bl_label = 'Import Mesh'
133 | bl_space_type = 'VIEW_3D'
134 | bl_region_type = 'UI'
135 | bl_category = 'VOCA'
136 | bl_options = {'DEFAULT_CLOSED'}
137 |
138 | def draw(self, context):
139 | layout = self.layout
140 |
141 | box = layout.box()
142 | for (prop_name, _) in PROPS['MESH']:
143 | loadUI_w_label(context.scene, prop_name, box)
144 |
145 | col = self.layout.column()
146 | col.operator('opr.meshimport', text='Import')
147 |
148 | # --------------------------------- Dev Panel -------------------------------- #
149 | class dev_pannel(Panel):
150 | bl_idname = "VOCA_PT_dev_obj"
151 | bl_label = 'Dev'
152 | bl_space_type = 'VIEW_3D'
153 | bl_region_type = 'UI'
154 | bl_category = 'VOCA'
155 | bl_options = {'DEFAULT_CLOSED'}
156 |
157 | def draw(self, context):
158 | layout = self.layout
159 | col = self.layout.column()
160 | row = col.row(align=True)
161 |
162 | # ----------------------------- Clean options box ---------------------------- #
163 | box = layout.box()
164 | row = box.row()
165 | row.prop(context.scene, PROPS['HIDE'][0][0], text="Hide non-VOCA meshes")
166 | row = box.row()
167 | row.enabled = not(context.scene.hide)
168 | row.operator("object.delete_other_meshes", text="Delete non-VOCA Object")
169 | row = box.row()
170 | row.operator("object.delete_meshes", text="Delete ALL Object")
171 |
172 | # ------------------------------ Edit meshes box ----------------------------- #
173 | # add path UI
174 | box_edit = layout.box()
175 | box = box_edit.box()
176 | for (prop_name, _) in PROPS['EDIT'][0:3]:
177 | loadUI_w_label(context.scene, prop_name, box)
178 |
179 |         # Check whether a sound strip whose name contains 'VOCA' already exists;
180 |         # if not, show the audio path property for the edit
181 | list_sounds = context.scene.sequence_editor.sequences
182 | cond = any('VOCA' in strip.name for strip in list_sounds)
183 | if not cond:
184 | box = box_edit.box()
185 | loadUI_w_label(context.scene, PROPS['EDIT'][3][0], box)
186 |
187 | # ------------------------------- Add mode box ------------------------------- #
188 | box = box_edit.box()
189 | row = box.row()
190 | row.prop(context.scene, PROPS['EDIT'][4][0])
191 | mode = context.scene.DropdownChoice
192 | loadUI_no_label(context.scene, box, PROPS[mode.upper()], '', True)
193 |
194 | # --------------------------- Texture settings box --------------------------- #
195 | box = box_edit.box()
196 | choice_texture = context.scene.AddTexture_edit
197 | loadUI_no_label(context.scene, box, PROPS['TEXTURE_EDIT'], 'AddTexture_edit', choice_texture)
198 |
199 | # col = self.layout.column()
200 | col = box_edit.column()
201 | col.operator('opr.meshedit', text='Run')
202 |
--------------------------------------------------------------------------------
/voca-addon/smpl_webuser/LICENSE.txt:
--------------------------------------------------------------------------------
1 | Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the SMPL body model and software, (the "Model"), including 3D meshes, blend weights blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model, you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model.
2 |
3 | Ownership
4 | The Model has been developed at the Max Planck Institute for Intelligent Systems (hereinafter "MPI") and is owned by and proprietary material of the Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (hereinafter “MPG”; MPI and MPG hereinafter collectively “Max-Planck”).
5 |
6 | License Grant
7 | Max-Planck grants you a non-exclusive, non-transferable, free of charge right:
8 |
9 | To download the Model and use it on computers owned, leased or otherwise controlled by you and/or your organisation;
10 | To use the Model for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.
11 | Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, as training data for a commercial product, for commercial ergonomic analysis (e.g. product design, architectural design, etc.), or production of other artifacts for commercial purposes including, for example, web services, movies, television programs, mobile applications, or video games. The Model may not be used for pornographic purposes or to generate pornographic material whether commercial or not. This license also prohibits the use of the Model to train methods/algorithms/neural networks/etc. for commercial use of any kind. The Model may not be reproduced, modified and/or made available in any form to any third party without Max-Planck’s prior written permission. By downloading the Model, you agree not to reverse engineer it.
12 |
13 | Disclaimer of Representations and Warranties
14 | You expressly acknowledge and agree that the Model results from basic research, is provided “AS IS”, may contain errors, and that any use of the Model is at your sole risk. MAX-PLANCK MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE MODEL, NEITHER EXPRESS NOR IMPLIED, AND THE ABSENCE OF ANY LEGAL OR ACTUAL DEFECTS, WHETHER DISCOVERABLE OR NOT. Specifically, and not to limit the foregoing, Max-Planck makes no representations or warranties (i) regarding the merchantability or fitness for a particular purpose of the Model, (ii) that the use of the Model will not infringe any patents, copyrights or other intellectual property rights of a third party, and (iii) that the use of the Model will not cause any damage of any kind to you or a third party.
15 |
16 | Limitation of Liability
17 | Under no circumstances shall Max-Planck be liable for any incidental, special, indirect or consequential damages arising out of or relating to this license, including but not limited to, any lost profits, business interruption, loss of programs or other data, or all other commercial damages or losses, even if advised of the possibility thereof.
18 |
19 | No Maintenance Services
20 | You understand and agree that Max-Planck is under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Model. Max-Planck nevertheless reserves the right to update, modify, or discontinue the Model at any time.
21 |
22 | Publication with SMPL
23 | You agree to cite the most recent paper describing the model as specified on the download website. This website lists the most up to date bibliographic information on the about page.
24 |
25 | Media projects with SMPL
26 | When using SMPL in a media project please give credit to Max Planck Institute for Intelligent Systems. For example: SMPL was used for character animation courtesy of the Max Planck Institute for Intelligent Systems.
27 | Commercial licensing opportunities
28 | For commercial use in the fields of medicine, psychology, and biomechanics, please contact ps-license@tue.mpg.de.
29 | For commercial use in all other fields please contact Body Labs Inc at smpl@bodylabs.com
--------------------------------------------------------------------------------
/voca-addon/smpl_webuser/__init__.py:
--------------------------------------------------------------------------------
1 | '''
2 | '''
--------------------------------------------------------------------------------
/voca-addon/smpl_webuser/lbs.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here: http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This file defines linear blend skinning for the SMPL loader, i.e. the
13 | effect of bones and blendshapes on the vertices of the template mesh.
14 |
15 | Modules included:
16 | - global_rigid_transformation:
17 | computes global rotation & translation of the model
18 | - verts_core: [overloaded function inherited from verts.verts_core]
19 | computes the blending of joint-influences for each vertex based on type of skinning
20 |
21 | '''
22 |
23 | from . posemapper import posemap
24 | import chumpy
25 | import numpy as np
26 |
27 | def global_rigid_transformation(pose, J, kintree_table, xp):
28 | results = {}
29 | pose = pose.reshape((-1,3))
30 | id_to_col = {kintree_table[1,i] : i for i in range(kintree_table.shape[1])}
31 | parent = {i : id_to_col[kintree_table[0,i]] for i in range(1, kintree_table.shape[1])}
32 |
33 | if xp == chumpy:
34 | from . posemapper import Rodrigues
35 | rodrigues = lambda x : Rodrigues(x)
36 | else:
37 | import cv2
38 | rodrigues = lambda x : cv2.Rodrigues(x)[0]
39 |
40 | with_zeros = lambda x : xp.vstack((x, xp.array([[0.0, 0.0, 0.0, 1.0]])))
41 | results[0] = with_zeros(xp.hstack((rodrigues(pose[0,:]), J[0,:].reshape((3,1)))))
42 |
43 | for i in range(1, kintree_table.shape[1]):
44 | results[i] = results[parent[i]].dot(with_zeros(xp.hstack((
45 | rodrigues(pose[i,:]),
46 | ((J[i,:] - J[parent[i],:]).reshape((3,1)))
47 | ))))
48 |
49 | pack = lambda x : xp.hstack([np.zeros((4, 3)), x.reshape((4,1))])
50 |
51 | results = [results[i] for i in sorted(results.keys())]
52 | results_global = results
53 |
54 |     # Subtract the transformed rest-pose joint locations so the 4x4
55 |     # transforms act directly on vertices given in rest-pose coordinates.
56 |     results = [results[i] - pack(
57 |         results[i].dot(xp.concatenate((J[i,:], xp.zeros(1)))))  # a 0-d scalar breaks modern np.concatenate
58 |         for i in range(len(results))]
59 | result = xp.dstack(results)
60 | return result, results_global
61 |
62 |
63 | def verts_core(pose, v, J, weights, kintree_table, want_Jtr=False, xp=chumpy):
64 | A, A_global = global_rigid_transformation(pose, J, kintree_table, xp)
65 | T = A.dot(weights.T)
66 |
67 | rest_shape_h = xp.vstack((v.T, np.ones((1, v.shape[0]))))
68 |
69 |     v = (T[:,0,:] * rest_shape_h[0, :].reshape((1, -1)) +
70 | T[:,1,:] * rest_shape_h[1, :].reshape((1, -1)) +
71 | T[:,2,:] * rest_shape_h[2, :].reshape((1, -1)) +
72 | T[:,3,:] * rest_shape_h[3, :].reshape((1, -1))).T
73 |
74 | v = v[:,:3]
75 |
76 | if not want_Jtr:
77 | return v
78 | Jtr = xp.vstack([g[:3,3] for g in A_global])
79 | return (v, Jtr)
80 |
81 |
--------------------------------------------------------------------------------
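Note on usage (not part of the repo): lbs.verts_core is backend-agnostic via the xp argument, taking either chumpy (differentiable, as used by load_model) or numpy (plain evaluation; the Rodrigues conversion then comes from OpenCV). A minimal sketch with a hypothetical two-joint toy skeleton and the numpy backend, assuming the smpl_webuser package is importable and opencv-python is installed:

import numpy as np
from smpl_webuser import lbs  # hypothetical import path

K, N = 2, 4                                  # joints, vertices
pose = np.zeros(K * 3)                       # axis-angle per joint; all zeros = rest pose
pose[3] = np.pi / 4                          # bend joint 1 by 45 degrees about x
J = np.array([[0., 0., 0.], [0., 1., 0.]])   # rest-pose joint locations
kintree_table = np.array([[4294967295, 0],   # row 0: parent of each joint (root has none)
                          [0, 1]])           # row 1: joint ids
weights = np.array([[1., 0.], [1., 0.],      # rigid weights: first two vertices follow
                    [0., 1.], [0., 1.]])     # joint 0, the last two follow joint 1
v = np.array([[0., 0., 0.], [.5, 0., 0.],
              [0., 1., 0.], [.5, 1., 0.]])   # rest-pose vertices

posed = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr=False, xp=np)
print(posed.shape)  # (4, 3)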
/voca-addon/smpl_webuser/posemapper.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here: http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This module defines the mapping of joint-angles to pose-blendshapes.
13 |
14 | Modules included:
15 | - posemap:
16 | computes the joint-to-pose blend shape mapping given a mapping type as input
17 |
18 | '''
19 |
20 | import chumpy as ch
21 | import numpy as np
22 | import cv2
23 |
24 |
25 | class Rodrigues(ch.Ch):
26 | dterms = 'rt'
27 |
28 | def compute_r(self):
29 | return cv2.Rodrigues(self.rt.r)[0]
30 |
31 | def compute_dr_wrt(self, wrt):
32 | if wrt is self.rt:
33 | return cv2.Rodrigues(self.rt.r)[1].T
34 |
35 |
36 | def lrotmin(p):
37 | if isinstance(p, np.ndarray):
38 | p = p.ravel()[3:]
39 | return np.concatenate([(cv2.Rodrigues(np.array(pp))[0]-np.eye(3)).ravel() for pp in p.reshape((-1,3))]).ravel()
40 | if p.ndim != 2 or p.shape[1] != 3:
41 | p = p.reshape((-1,3))
42 | p = p[1:]
43 | return ch.concatenate([(Rodrigues(pp)-ch.eye(3)).ravel() for pp in p]).ravel()
44 |
45 | def posemap(s):
46 | if s == 'lrotmin':
47 | return lrotmin
48 | else:
49 | raise Exception('Unknown posemapping: %s' % (str(s),))
50 |
--------------------------------------------------------------------------------
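Note on usage (not part of the repo): posemap('lrotmin') returns a function that drops the three global-rotation parameters and flattens (R - I) for every remaining joint, so a pose with K joints yields 9*(K-1) pose-blendshape coefficients, all zero at rest since R = I there. A quick sketch, assuming the module is importable:

import numpy as np
from smpl_webuser.posemapper import posemap  # hypothetical import path

pose = np.zeros(5 * 3)         # 5 joints in axis-angle, rest pose
mapped = posemap('lrotmin')(pose)
print(mapped.shape)            # (36,) == 9 * (5 - 1)
print(np.allclose(mapped, 0))  # True: identity rotations give zero blendshape activation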
/voca-addon/smpl_webuser/serialization.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here: http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This file defines the serialization functions of the SMPL model.
13 |
14 | Modules included:
15 | - save_model:
16 | saves the SMPL model to a given file location as a .pkl file
17 | - load_model:
18 | loads the SMPL model from a given file location (i.e. a .pkl file location),
19 | or a dictionary object.
20 |
21 | '''
22 |
23 | __all__ = ['load_model']  # save_model is commented out below
24 |
25 | import numpy as np
26 | import _pickle as pickle
27 | import chumpy as ch
28 | from chumpy.ch import MatVecMult
29 | from . posemapper import posemap
30 | from . verts import verts_core
31 |
32 | # def save_model(model, fname):
33 | # m0 = model
34 | # trainer_dict = {'v_template': np.asarray(m0.v_template),'J': np.asarray(m0.J),'weights': np.asarray(m0.weights),'kintree_table': m0.kintree_table,'f': m0.f, 'bs_type': m0.bs_type, 'posedirs': np.asarray(m0.posedirs)}
35 | # if hasattr(model, 'J_regressor'):
36 | # trainer_dict['J_regressor'] = m0.J_regressor
37 | # if hasattr(model, 'J_regressor_prior'):
38 | # trainer_dict['J_regressor_prior'] = m0.J_regressor_prior
39 | # if hasattr(model, 'weights_prior'):
40 | # trainer_dict['weights_prior'] = m0.weights_prior
41 | # if hasattr(model, 'shapedirs'):
42 | # trainer_dict['shapedirs'] = m0.shapedirs
43 | # if hasattr(model, 'vert_sym_idxs'):
44 | # trainer_dict['vert_sym_idxs'] = m0.vert_sym_idxs
45 | # if hasattr(model, 'bs_style'):
46 | # trainer_dict['bs_style'] = model.bs_style
47 | # else:
48 | # trainer_dict['bs_style'] = 'lbs'
49 | # pickle.dump(trainer_dict, open(fname, 'w'), -1)
50 |
51 |
52 | def backwards_compatibility_replacements(dd):
53 |
54 | # replacements
55 | if 'default_v' in dd:
56 | dd['v_template'] = dd['default_v']
57 | del dd['default_v']
58 | if 'template_v' in dd:
59 | dd['v_template'] = dd['template_v']
60 | del dd['template_v']
61 | if 'joint_regressor' in dd:
62 | dd['J_regressor'] = dd['joint_regressor']
63 | del dd['joint_regressor']
64 | if 'blendshapes' in dd:
65 | dd['posedirs'] = dd['blendshapes']
66 | del dd['blendshapes']
67 | if 'J' not in dd:
68 | dd['J'] = dd['joints']
69 | del dd['joints']
70 |
71 | # defaults
72 | if 'bs_style' not in dd:
73 | dd['bs_style'] = 'lbs'
74 |
75 |
76 |
77 | def ready_arguments(fname_or_dict):
78 |
79 | if not isinstance(fname_or_dict, dict):
80 | dd = pickle.load(open(fname_or_dict, 'rb'), encoding='latin1')
81 | else:
82 | dd = fname_or_dict
83 |
84 | backwards_compatibility_replacements(dd)
85 |
86 | want_shapemodel = 'shapedirs' in dd
87 | nposeparms = dd['kintree_table'].shape[1]*3
88 |
89 | if 'trans' not in dd:
90 | dd['trans'] = np.zeros(3)
91 | if 'pose' not in dd:
92 | dd['pose'] = np.zeros(nposeparms)
93 | if 'shapedirs' in dd and 'betas' not in dd:
94 | dd['betas'] = np.zeros(dd['shapedirs'].shape[-1])
95 |
96 | for s in ['v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 'betas', 'J']:
97 | if (s in dd) and not hasattr(dd[s], 'dterms'):
98 | dd[s] = ch.array(dd[s])
99 |
100 | if want_shapemodel:
101 | dd['v_shaped'] = dd['shapedirs'].dot(dd['betas'])+dd['v_template']
102 | v_shaped = dd['v_shaped']
103 | J_tmpx = MatVecMult(dd['J_regressor'], v_shaped[:,0])
104 | J_tmpy = MatVecMult(dd['J_regressor'], v_shaped[:,1])
105 | J_tmpz = MatVecMult(dd['J_regressor'], v_shaped[:,2])
106 | dd['J'] = ch.vstack((J_tmpx, J_tmpy, J_tmpz)).T
107 | dd['v_posed'] = v_shaped + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose']))
108 | else:
109 | dd['v_posed'] = dd['v_template'] + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose']))
110 |
111 | return dd
112 |
113 |
114 |
115 | def load_model(fname_or_dict):
116 | dd = ready_arguments(fname_or_dict)
117 |
118 | args = {
119 | 'pose': dd['pose'],
120 | 'v': dd['v_posed'],
121 | 'J': dd['J'],
122 | 'weights': dd['weights'],
123 | 'kintree_table': dd['kintree_table'],
124 | 'xp': ch,
125 | 'want_Jtr': True,
126 | 'bs_style': dd['bs_style']
127 | }
128 |
129 | result, Jtr = verts_core(**args)
130 | result = result + dd['trans'].reshape((1,3))
131 | result.J_transformed = Jtr + dd['trans'].reshape((1,3))
132 |
133 | for k, v in dd.items():
134 | setattr(result, k, v)
135 |
136 | return result
137 |
138 |
--------------------------------------------------------------------------------
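Note on usage (not part of the repo): load_model returns a chumpy expression whose vertices re-evaluate lazily whenever pose, betas, or trans change; this is exactly how edit_sequences.py drives the FLAME model below. A minimal sketch, with the bundled FLAME pickle path assumed from the repo layout:

from smpl_webuser.serialization import load_model  # hypothetical import path

model = load_model('voca-addon/flame/generic_model.pkl')
model.pose[:] = 0.0        # axis-angle pose parameters
model.betas[:] = 0.0       # shape (and, for FLAME, expression) coefficients
vertices = model.r         # chumpy evaluates the posed vertices on access
print(vertices.shape, model.f.shape)  # (num_vertices, 3), (num_faces, 3)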
/voca-addon/smpl_webuser/verts.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here: http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This file defines the basic skinning modules for the SMPL loader, i.e. the
13 | effect of bones and blendshapes on the vertices of the template mesh.
14 |
15 | Modules included:
16 | - verts_decorated:
17 | creates an instance of the SMPL model which inherits model attributes from another
18 | SMPL model.
19 | - verts_core: [overloaded function inherited by lbs.verts_core]
20 | computes the blending of joint-influences for each vertex based on type of skinning
21 |
22 | '''
23 |
24 | import chumpy
25 | from . import lbs as lbs
26 | from . posemapper import posemap
27 | import scipy.sparse as sp
28 | from chumpy.ch import MatVecMult
29 |
30 | def ischumpy(x): return hasattr(x, 'dterms')
31 |
32 | def verts_decorated(trans, pose,
33 | v_template, J, weights, kintree_table, bs_style, f,
34 | bs_type=None, posedirs=None, betas=None, shapedirs=None, want_Jtr=False):
35 |
36 | for which in [trans, pose, v_template, weights, posedirs, betas, shapedirs]:
37 | if which is not None:
38 | assert ischumpy(which)
39 |
40 | v = v_template
41 |
42 | if shapedirs is not None:
43 | if betas is None:
44 | betas = chumpy.zeros(shapedirs.shape[-1])
45 | v_shaped = v + shapedirs.dot(betas)
46 | else:
47 | v_shaped = v
48 |
49 | if posedirs is not None:
50 | v_posed = v_shaped + posedirs.dot(posemap(bs_type)(pose))
51 | else:
52 | v_posed = v_shaped
53 |
54 | v = v_posed
55 |
56 | if sp.issparse(J):
57 | regressor = J
58 | J_tmpx = MatVecMult(regressor, v_shaped[:,0])
59 | J_tmpy = MatVecMult(regressor, v_shaped[:,1])
60 | J_tmpz = MatVecMult(regressor, v_shaped[:,2])
61 | J = chumpy.vstack((J_tmpx, J_tmpy, J_tmpz)).T
62 | else:
63 | assert(ischumpy(J))
64 |
65 | assert(bs_style=='lbs')
66 | result, Jtr = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr=True, xp=chumpy)
67 |
68 | tr = trans.reshape((1,3))
69 | result = result + tr
70 | Jtr = Jtr + tr
71 |
72 | result.trans = trans
73 | result.f = f
74 | result.pose = pose
75 | result.v_template = v_template
76 | result.J = J
77 | result.weights = weights
78 | result.kintree_table = kintree_table
79 | result.bs_style = bs_style
80 |     result.bs_type = bs_type
81 | if posedirs is not None:
82 | result.posedirs = posedirs
83 | result.v_posed = v_posed
84 | if shapedirs is not None:
85 | result.shapedirs = shapedirs
86 | result.betas = betas
87 | result.v_shaped = v_shaped
88 | if want_Jtr:
89 | result.J_transformed = Jtr
90 | return result
91 |
92 | def verts_core(pose, v, J, weights, kintree_table, bs_style, want_Jtr=False, xp=chumpy):
93 |
94 | if xp == chumpy:
95 | assert(hasattr(pose, 'dterms'))
96 | assert(hasattr(v, 'dterms'))
97 | assert(hasattr(J, 'dterms'))
98 | assert(hasattr(weights, 'dterms'))
99 |
100 | assert(bs_style=='lbs')
101 | result = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr, xp)
102 |
103 | return result
104 |
--------------------------------------------------------------------------------
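Note on usage (not part of the repo): verts.verts_core is the entry point load_model calls; with the chumpy backend it asserts that every input is differentiable (exposes .dterms) and that bs_style is 'lbs' before delegating to lbs.verts_core. A toy sketch under those assumptions:

import numpy as np
import chumpy as ch
from smpl_webuser import verts  # hypothetical import path

args = {
    'pose': ch.zeros(2 * 3),                      # chumpy arrays carry .dterms,
    'v': ch.array(np.random.rand(4, 3)),          # which the asserts require
    'J': ch.array([[0., 0., 0.], [0., 1., 0.]]),
    'weights': ch.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]]),
    'kintree_table': np.array([[4294967295, 0], [0, 1]]),
    'bs_style': 'lbs',
    'want_Jtr': True,
}
v_posed, Jtr = verts.verts_core(**args)           # differentiable posed vertices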
/voca-addon/template/FLAME_sample.ply:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SasageyoOrg/voca-blender/61c39846a48ce282a51347f4191a52d81199748b/voca-addon/template/FLAME_sample.ply
--------------------------------------------------------------------------------
/voca-addon/utils/audio_handler.py:
--------------------------------------------------------------------------------
1 | '''
2 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
3 | computer program.
4 |
5 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
6 | the computer program from someone who is authorized to grant you that right.
7 |
8 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
9 |
10 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
11 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
12 | All rights reserved.
13 |
14 | More information about VOCA is available at http://voca.is.tue.mpg.de.
15 | For comments or questions, please email us at voca@tue.mpg.de
16 | '''
17 |
18 | import re
19 | import copy
20 | import resampy
21 | import numpy as np
22 | import tensorflow as tf
23 | from python_speech_features import mfcc
24 |
25 |
26 | def interpolate_features(features, input_rate, output_rate, output_len=None):
27 | num_features = features.shape[1]
28 | input_len = features.shape[0]
29 | seq_len = input_len / float(input_rate)
30 | if output_len is None:
31 | output_len = int(seq_len * output_rate)
32 | input_timestamps = np.arange(input_len) / float(input_rate)
33 | output_timestamps = np.arange(output_len) / float(output_rate)
34 | output_features = np.zeros((output_len, num_features))
35 | for feat in range(num_features):
36 | output_features[:, feat] = np.interp(output_timestamps,
37 | input_timestamps,
38 | features[:, feat])
39 | return output_features
40 |
41 | class AudioHandler:
42 | def __init__(self, config):
43 | self.config = config
44 | self.audio_feature_type = config['audio_feature_type']
45 | self.num_audio_features = config['num_audio_features']
46 | self.audio_window_size = config['audio_window_size']
47 | self.audio_window_stride = config['audio_window_stride']
48 |
49 | def process(self, audio):
50 | if self.audio_feature_type.lower() == "none":
51 | return None
52 | elif self.audio_feature_type.lower() == 'deepspeech':
53 | return self.convert_to_deepspeech(audio)
54 | else:
55 | raise NotImplementedError("Audio features not supported")
56 |
57 | def convert_to_deepspeech(self, audio):
58 | def audioToInputVector(audio, fs, numcep, numcontext):
59 | # Get mfcc coefficients
60 | features = mfcc(audio, samplerate=fs, numcep=numcep)
61 |
62 | # We only keep every second feature (BiRNN stride = 2)
63 | features = features[::2]
64 |
65 | # One stride per time step in the input
66 | num_strides = len(features)
67 |
68 | # Add empty initial and final contexts
69 | empty_context = np.zeros((numcontext, numcep), dtype=features.dtype)
70 | features = np.concatenate((empty_context, features, empty_context))
71 |
72 | # Create a view into the array with overlapping strides of size
73 | # numcontext (past) + 1 (present) + numcontext (future)
74 | window_size = 2 * numcontext + 1
75 | train_inputs = np.lib.stride_tricks.as_strided(
76 | features,
77 | (num_strides, window_size, numcep),
78 | (features.strides[0], features.strides[0], features.strides[1]),
79 | writeable=False)
80 |
81 | # Flatten the second and third dimensions
82 | train_inputs = np.reshape(train_inputs, [num_strides, -1])
83 |
84 | train_inputs = np.copy(train_inputs)
85 | train_inputs = (train_inputs - np.mean(train_inputs)) / np.std(train_inputs)
86 |
87 | # Return results
88 | return train_inputs
89 |
90 |         # The handler expects a nested dict: {subject: {sequence: {'audio': ..., 'sample_rate': ...}}}
91 |         if not isinstance(audio, dict):
92 |             raise ValueError('Wrong type for audio: expected dict, got %s' % type(audio).__name__)
93 |
94 |
95 | # Load graph and place_holders
96 | #with tf.gfile.GFile(self.config['deepspeech_graph_fname'], "rb") as f:
97 | with tf.io.gfile.GFile(self.config['deepspeech_graph_fname'], "rb") as f:
98 | #graph_def = tf.GraphDef()
99 | graph_def = tf.compat.v1.GraphDef()
100 | graph_def.ParseFromString(f.read())
101 |
102 | #graph = tf.get_default_graph()
103 | graph = tf.compat.v1.get_default_graph()
104 | tf.import_graph_def(graph_def, name="deepspeech")
105 | input_tensor = graph.get_tensor_by_name('deepspeech/input_node:0')
106 | seq_length = graph.get_tensor_by_name('deepspeech/input_lengths:0')
107 | layer_6 = graph.get_tensor_by_name('deepspeech/logits:0')
108 |
109 | n_input = 26
110 | n_context = 9
111 |
112 | processed_audio = copy.deepcopy(audio)
113 | #with tf.Session(graph=graph) as sess:
114 | with tf.compat.v1.Session(graph=graph) as sess:
115 | for subj in audio.keys():
116 | for seq in audio[subj].keys():
117 | print('process audio: %s - %s' % (subj, seq))
118 |
119 | audio_sample = audio[subj][seq]['audio']
120 | sample_rate = audio[subj][seq]['sample_rate']
121 | resampled_audio = resampy.resample(audio_sample.astype(float), sample_rate, 16000)
122 | input_vector = audioToInputVector(resampled_audio.astype('int16'), 16000, n_input, n_context)
123 |
124 | network_output = sess.run(layer_6, feed_dict={input_tensor: input_vector[np.newaxis, ...],
125 | seq_length: [input_vector.shape[0]]})
126 |
127 | # Resample network output from 50 fps to 60 fps
128 | audio_len_s = float(audio_sample.shape[0]) / sample_rate
129 | num_frames = int(round(audio_len_s * 60))
130 | network_output = interpolate_features(network_output[:, 0], 50, 60,
131 | output_len=num_frames)
132 |
133 | # Make windows
134 | zero_pad = np.zeros((int(self.audio_window_size / 2), network_output.shape[1]))
135 | network_output = np.concatenate((zero_pad, network_output, zero_pad), axis=0)
136 | windows = []
137 | for window_index in range(0, network_output.shape[0] - self.audio_window_size, self.audio_window_stride):
138 | windows.append(network_output[window_index:window_index + self.audio_window_size])
139 |
140 | processed_audio[subj][seq]['audio'] = np.array(windows)
141 | return processed_audio
142 |
143 |
--------------------------------------------------------------------------------
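Note on usage (not part of the repo): interpolate_features linearly resamples each feature channel from one frame rate to another; convert_to_deepspeech uses it to bring DeepSpeech's 50 fps outputs up to VOCA's 60 fps before windowing. A toy sketch, assuming the module is importable:

import numpy as np
from utils.audio_handler import interpolate_features  # hypothetical import path

feats_50fps = np.random.rand(50, 29)   # one second of 29-dim DeepSpeech features
feats_60fps = interpolate_features(feats_50fps, input_rate=50, output_rate=60)
print(feats_60fps.shape)               # (60, 29)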
/voca-addon/utils/ctypesloader.py:
--------------------------------------------------------------------------------
1 | """ctypes abstraction layer
2 |
3 | We keep rewriting functions as the main entry points change,
4 | so let's just localise the changes here...
5 | """
6 | import ctypes, logging, os, sys
7 | _log = logging.getLogger( 'OpenGL.platform.ctypesloader' )
8 | #_log.setLevel( logging.DEBUG )
9 | ctypes_version = [
10 | int(x) for x in ctypes.__version__.split('.')
11 | ]
12 | from ctypes import util
13 | import OpenGL
14 |
15 | DLL_DIRECTORY = os.path.join( os.path.dirname( OpenGL.__file__ ), 'DLLS' )
16 |
17 | def loadLibrary( dllType, name, mode = ctypes.RTLD_GLOBAL ):
18 | """Load a given library by name with the given mode
19 |
20 | dllType -- the standard ctypes pointer to a dll type, such as
21 | ctypes.cdll or ctypes.windll or the underlying ctypes.CDLL or
22 | ctypes.WinDLL classes.
23 | name -- a short module name, e.g. 'GL' or 'GLU'
24 | mode -- ctypes.RTLD_GLOBAL or ctypes.RTLD_LOCAL,
25 | controls whether the module resolves names via other
26 | modules already loaded into this process. GL modules
27 | generally need to be loaded with GLOBAL flags
28 |
29 | returns the ctypes C-module object
30 | """
31 | if isinstance( dllType, ctypes.LibraryLoader ):
32 | dllType = dllType._dlltype
33 | fullName = None
34 | try:
35 | # fullName = util.find_library( name )
36 | if "opengl" in name.lower():
37 | fullName = '/System/Library/Frameworks/OpenGL.framework/OpenGL'
38 | elif "glut" in name.lower():
39 | fullName = '/System/Library/Frameworks/GLUT.framework/GLUT'
40 | else:
41 | fullName = util.find_library( name )
42 | if fullName is not None:
43 | name = fullName
44 | elif os.path.isfile( os.path.join( DLL_DIRECTORY, name + '.dll' )):
45 | name = os.path.join( DLL_DIRECTORY, name + '.dll' )
46 | except Exception as err:
47 | _log.info( '''Failed on util.find_library( %r ): %s''', name, err )
48 | # Should the call fail, we just try to load the base filename...
49 | pass
50 | try:
51 | return dllType( name, mode )
52 | except Exception as err:
53 | err.args += (name,fullName)
54 | raise
55 |
56 | def buildFunction( functionType, name, dll ):
57 | """Abstract away the ctypes function-creation operation"""
58 | return functionType( (name, dll), )
59 |
--------------------------------------------------------------------------------
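Note on usage (not part of the repo): this is a patched copy of PyOpenGL's ctypesloader that hardwires the macOS framework paths for OpenGL and GLUT, presumably because ctypes.util.find_library no longer resolves system frameworks on recent macOS. An illustrative call (macOS only):

import ctypes
from utils.ctypesloader import loadLibrary  # hypothetical import path

# Resolves to /System/Library/Frameworks/GLUT.framework/GLUT on macOS.
glut = loadLibrary(ctypes.cdll, 'GLUT')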
/voca-addon/utils/edit_sequences.py:
--------------------------------------------------------------------------------
1 | '''
2 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
3 | computer program.
4 |
5 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
6 | the computer program from someone who is authorized to grant you that right.
7 |
8 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
9 |
10 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
11 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
12 | All rights reserved.
13 |
14 | More information about VOCA is available at http://voca.is.tue.mpg.de.
15 | For comments or questions, please email us at voca@tue.mpg.de
16 | '''
17 |
18 | import os
19 | import glob
20 | import numpy as np
21 | from psbody.mesh import Mesh
22 | from . inference import output_sequence_meshes
23 | from .. smpl_webuser.serialization import load_model
24 |
25 | # def edit_sequences(mode, param_a, param_b, source_path, out_path, flame_model_path, uv_template_fname='', texture_img_fname=''):
26 | # if mode == 'Blink':
27 | # add_eye_blink(source_path, out_path, flame_model_path, param_a, param_b, uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
28 | # elif mode == 'Shape':
29 | # alter_sequence_shape(source_path, out_path, flame_model_path, pc_idx=param_a, pc_range=param_b, uv_template_fname='', texture_img_fname='')
30 | # elif mode == 'Pose':
31 | # alter_sequence_head_pose(source_path, out_path, flame_model_path, pose_idx=param_a, rot_angle=param_b, uv_template_fname='', texture_img_fname='')
32 |
33 | def add_eye_blink(source_path, out_path, flame_model_fname, num_blinks, blink_duration, uv_template_fname='', texture_img_fname=''):
34 | '''
35 | Load existing animation sequence in "zero pose" and add eye blinks over time
36 | :param source_path: path of the animation sequence (files must be provided in OBJ file format)
37 | :param out_path: output path of the altered sequence
38 | :param flame_model_fname: path of the FLAME model
39 | :param num_blinks: number of blinks within the sequence
40 | :param blink_duration: duration of a blink in number of frames
41 | '''
42 |
43 | if not os.path.exists(out_path):
44 | os.makedirs(out_path)
45 |
46 | # Load sequence files
47 | sequence_fnames = sorted(glob.glob(os.path.join(source_path, '*.obj')))
48 | num_frames = len(sequence_fnames)
49 | if num_frames == 0:
50 | print('No sequence meshes found')
51 | return
52 |
53 | # Load FLAME head model
54 | model = load_model(flame_model_fname)
55 |
56 | blink_exp_betas = np.array(
57 | [0.04676158497927314, 0.03758675711005459, -0.8504121184951298, 0.10082324210507627, -0.574142329926028,
58 | 0.6440016589938355, 0.36403779939335984, 0.21642312586261656, 0.6754551784690193, 1.80958618462892,
59 | 0.7790133813372259, -0.24181691256476057, 0.826280685961679, -0.013525679499256753, 1.849393698014113,
60 | -0.263035686247264, 0.42284248271332153, 0.10550891351425384, 0.6720993875023772, 0.41703592560736436,
61 | 3.308019065485072, 1.3358509602858895, 1.2997143108969278, -1.2463587328652894, -1.4818961382824924,
62 | -0.6233880069345369, 0.26812528424728455, 0.5154889093160832, 0.6116267181402183, 0.9068826814583771,
63 | -0.38869613253448576, 1.3311776710005476, -0.5802565274559162, -0.7920775624092143, -1.3278601781150017,
64 | -1.2066425872386706, 0.34250140710360893, -0.7230686724732668, -0.6859285483325263, -1.524877347586566,
65 | -1.2639479212965923, -0.019294228307535275, 0.2906175769381998, -1.4082782880837976, 0.9095436721066045,
66 | 1.6007365724960054, 2.0302381182163574, 0.5367600947801505, -0.12233184771794232, -0.506024823810769,
67 | 2.4312326730634783, 0.5622323258974669, 0.19022395712837198, -0.7729758559103581, -1.5624233513002923,
68 | 0.8275863297957926, 1.1661887586553132, 1.2299311381779416, -1.4146929897142397, -0.42980549225554004,
69 | -1.4282801579740614, 0.26172301287347266, -0.5109318114918897, -0.6399495909195524, -0.733476856285442,
70 | 1.219652074726591, 0.08194907995352405, 0.4420398361785991, -1.184769973221183, 1.5126082924326332,
71 | 0.4442281271081217, -0.005079477284341147, 1.764084274265486, 0.2815940264026848, 0.2898827213634057,
72 | -0.3686662696397026, 1.9125365942683656, 2.1801452989500274, -2.3915065327980467, 0.5794919897154226,
73 | -1.777680085517591, 2.9015718628823604, -2.0516886588315777, 0.4146899057365943, -0.29917763685660903,
74 | -0.5839240983516372, 2.1592457102697007, -0.8747902386178202, -0.5152943072876817, 0.12620001057735733,
75 | 1.3144109838803493, -0.5027032013330108, 1.2160353388774487, 0.7543834001473375, -3.512095548974531,
76 | -0.9304382646186183, -0.30102930208709433, 0.9332135959962723, -0.52926196689098, 0.23509772959302958])
77 |
78 | step = blink_duration//3
79 | blink_weights = np.hstack((np.interp(np.arange(step), [0,step], [0,1]), np.ones(step), np.interp(np.arange(step), [0,step], [1,0])))
80 |
81 | frequency = num_frames // (num_blinks+1)
82 | weights = np.zeros(num_frames)
83 | for i in range(num_blinks):
84 | x1 = (i+1)*frequency-blink_duration//2
85 | x2 = x1+3*step
86 | if x1 >= 0 and x2 < weights.shape[0]:
87 | weights[x1:x2] = blink_weights
88 |
89 | predicted_vertices = np.zeros((num_frames, model.v_template.shape[0], model.v_template.shape[1]))
90 |
91 | for frame_idx in range(num_frames):
92 | model.v_template[:] = Mesh(filename=sequence_fnames[frame_idx]).v
93 | model.betas[300:] = weights[frame_idx]*blink_exp_betas
94 | predicted_vertices[frame_idx] = model.r
95 |
96 | output_sequence_meshes(predicted_vertices, Mesh(model.v_template, model.f), out_path, uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
97 |
98 | def alter_sequence_shape(source_path, out_path, flame_model_fname, pc_idx=0, pc_range=(0,3), uv_template_fname='', texture_img_fname=''):
99 | '''
100 | Load existing animation sequence in "zero pose" and change the identity dependent shape over time.
101 | :param source_path: path of the animation sequence (files must be provided in OBJ file format)
102 | :param out_path: output path of the altered sequence
103 | :param flame_model_fname: path of the FLAME model
104 |     :param pc_idx: identity shape parameter to be varied in [0,300), as FLAME provides 300 shape parameters
105 |     :param pc_range: tuple (start/end, max/min) defining the range of the shape variation,
106 |                      e.g. (0,3) varies the shape from 0 to 3 stdev and back to 0
107 | '''
108 |
109 | if pc_idx < 0 or pc_idx >= 300:
110 | print('shape parameter index out of range [0,300)')
111 | return
112 |
113 | if not os.path.exists(out_path):
114 | os.makedirs(out_path)
115 |
116 | # Load sequence files
117 | sequence_fnames = sorted(glob.glob(os.path.join(source_path, '*.obj')))
118 | num_frames = len(sequence_fnames)
119 | if num_frames == 0:
120 | print('No sequence meshes found')
121 | return
122 |
123 |     if num_frames % 2 != 0:  # the up/down interpolation below assumes an even frame count
124 |         num_frames = num_frames - 1
125 |
126 | # Load FLAME head model
127 | model = load_model(flame_model_fname)
128 | model_parms = np.zeros((num_frames, 300))
129 |
130 | # Generate interpolated shape parameters for each frame
131 | x1, y1 = [0, num_frames/2], pc_range
132 | x2, y2 = [num_frames/2, num_frames], pc_range[::-1]
133 |
134 | xsteps1 = np.arange(0, num_frames/2)
135 | xsteps2 = np.arange(num_frames/2, num_frames)
136 |
137 | model_parms[:, pc_idx] = np.hstack((np.interp(xsteps1, x1, y1), np.interp(xsteps2, x2, y2)))
138 |
139 | predicted_vertices = np.zeros((num_frames, model.v_template.shape[0], model.v_template.shape[1]))
140 |
141 | for frame_idx in range(num_frames):
142 | model.v_template[:] = Mesh(filename=sequence_fnames[frame_idx]).v
143 | model.betas[:300] = model_parms[frame_idx]
144 | predicted_vertices[frame_idx] = model.r
145 |
146 | output_sequence_meshes(predicted_vertices, Mesh(model.v_template, model.f), out_path, uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
147 |
148 | def alter_sequence_head_pose(source_path, out_path, flame_model_fname, pose_idx=3, rot_angle=np.pi/6, uv_template_fname='', texture_img_fname=''):
149 | '''
150 | Load existing animation sequence in "zero pose" and change the head pose (i.e. rotation around the neck) over time.
151 | :param source_path: path of the animation sequence (files must be provided in OBJ file format)
152 | :param out_path: output path of the altered sequence
153 | :param flame_model_fname: path of the FLAME model
154 | :param pose_idx: head pose parameter to be varied in [3,6)
155 | :param rot_angle: maximum rotation angle in [0,2pi)
156 | '''
157 |
158 | if pose_idx < 3 or pose_idx >= 6:
159 | print('pose parameter index out of range [3,6)')
160 | return
161 |
162 | if not os.path.exists(out_path):
163 | os.makedirs(out_path)
164 |
165 | # Load sequence files
166 | sequence_fnames = sorted(glob.glob(os.path.join(source_path, '*.obj')))
167 | num_frames = len(sequence_fnames)
168 | if num_frames == 0:
169 | print('No sequence meshes found')
170 | return
171 |
172 |     if num_frames % 2 != 0:  # keep the frame count even for the four interpolation segments
173 |         num_frames = num_frames - 1
174 |
175 | # Load FLAME head model
176 | model = load_model(flame_model_fname)
177 | model_parms = np.zeros((num_frames, model.pose.shape[0]))
178 |
179 | # Generate interpolated pose parameters for each frame
180 | x1, y1 = [0, num_frames//4], [0, rot_angle]
181 | x2, y2 = [num_frames//4, num_frames//2], [rot_angle, 0]
182 | x3, y3 = [num_frames//2, 3*num_frames//4], [0, -rot_angle]
183 | x4, y4 = [3*num_frames//4, num_frames], [-rot_angle, 0]
184 |
185 | xsteps1 = np.arange(0, num_frames//4)
186 |     xsteps2 = np.arange(num_frames//4, num_frames//2)
187 | xsteps3 = np.arange(num_frames//2, 3*num_frames//4)
188 | xsteps4 = np.arange(3*num_frames//4, num_frames)
189 |
190 | model_parms[:, pose_idx] = np.hstack((np.interp(xsteps1, x1, y1),
191 | np.interp(xsteps2, x2, y2),
192 | np.interp(xsteps3, x3, y3),
193 | np.interp(xsteps4, x4, y4)))
194 |
195 | predicted_vertices = np.zeros((num_frames, model.v_template.shape[0], model.v_template.shape[1]))
196 |
197 | for frame_idx in range(num_frames):
198 | model.v_template[:] = Mesh(filename=sequence_fnames[frame_idx]).v
199 | model.pose[:] = model_parms[frame_idx]
200 | predicted_vertices[frame_idx] = model.r
201 |
202 | output_sequence_meshes(predicted_vertices, Mesh(model.v_template, model.f), out_path, uv_template_fname=uv_template_fname, texture_img_fname=texture_img_fname)
--------------------------------------------------------------------------------
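Note on the blink logic (not part of the repo): add_eye_blink builds a trapezoidal per-frame weight envelope (ramp up, hold, ramp down over blink_duration frames, evenly spaced num_blinks times) and scales the fixed blink_exp_betas offset by it, writing the result into betas[300:], FLAME's expression block. The envelope alone, reproduced standalone with toy numbers:

import numpy as np

num_frames, num_blinks, blink_duration = 120, 2, 15
step = blink_duration // 3
blink_weights = np.hstack((np.interp(np.arange(step), [0, step], [0, 1]),  # ramp up
                           np.ones(step),                                  # hold closed
                           np.interp(np.arange(step), [0, step], [1, 0]))) # ramp down
weights = np.zeros(num_frames)
frequency = num_frames // (num_blinks + 1)     # evenly space the blinks
for i in range(num_blinks):
    x1 = (i + 1) * frequency - blink_duration // 2
    x2 = x1 + 3 * step
    if 0 <= x1 and x2 < num_frames:
        weights[x1:x2] = blink_weights
print(weights.max())  # 1.0 during the hold phase of each blink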
/voca-addon/utils/inference.py:
--------------------------------------------------------------------------------
1 | '''
2 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
3 | computer program.
4 |
5 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
6 | the computer program from someone who is authorized to grant you that right.
7 |
8 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
9 |
10 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
11 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
12 | All rights reserved.
13 |
14 | More information about VOCA is available at http://voca.is.tue.mpg.de.
15 | For comments or questions, please email us at voca@tue.mpg.de
16 | '''
17 |
18 | import os
19 | import sys
20 | from pathlib import Path
21 |
22 | # # get home directory to build the absolute path to virtual environment's python3.7
23 | # homeuserdir = str(Path.home())
24 | # abs_path = homeuserdir + '/.virtualenvs/vocablender/lib/python3.7/site-packages'
25 | # # get the index of the blender's python lib in $PATH variable
26 | # sys_path_index = next(i for i, string in enumerate(sys.path) if 'site-packages' in string)
27 | # # insert the virtual environment's python into the sys.path if needed
28 | # if not any('.virtualenvs/vocablender' in string for string in sys.path) :
29 | # sys.path.insert(sys_path_index, abs_path)
30 |
31 | # print(sys.path)
32 |
33 | import tempfile
34 | import numpy as np
35 | import tensorflow as tf
36 | from scipy.io import wavfile
37 | from psbody.mesh import Mesh
38 | from . audio_handler import AudioHandler
39 |
40 | def process_audio(ds_path, audio, sample_rate):
41 | config = {}
42 | config['deepspeech_graph_fname'] = ds_path
43 | config['audio_feature_type'] = 'deepspeech'
44 | config['num_audio_features'] = 29
45 |
46 | config['audio_window_size'] = 16
47 | config['audio_window_stride'] = 1
48 |
49 | tmp_audio = {'subj': {'seq': {'audio': audio, 'sample_rate': sample_rate}}}
50 | audio_handler = AudioHandler(config)
51 |
52 | return audio_handler.process(tmp_audio)['subj']['seq']['audio']
53 |
54 |
55 | def output_sequence_meshes(sequence_vertices, template, out_path, uv_template_fname='', texture_img_fname=''):
56 | mesh_out_path = os.path.join(out_path, 'meshes')
57 | if not os.path.exists(mesh_out_path):
58 | os.makedirs(mesh_out_path)
59 |
60 | if os.path.exists(uv_template_fname):
61 | uv_template = Mesh(filename=uv_template_fname)
62 | vt, ft = uv_template.vt, uv_template.ft
63 | else:
64 | vt, ft = None, None
65 |
66 | num_frames = sequence_vertices.shape[0]
67 | for i_frame in range(num_frames):
68 | out_fname = os.path.join(mesh_out_path, '%05d.obj' % i_frame)
69 | out_mesh = Mesh(sequence_vertices[i_frame], template.f)
70 | if vt is not None and ft is not None:
71 | out_mesh.vt, out_mesh.ft = vt, ft
72 | if os.path.exists(texture_img_fname):
73 | out_mesh.set_texture_image(texture_img_fname)
74 | out_mesh.write_obj(out_fname)
75 |
76 |
77 | def inference(tf_model_fname, ds_fname, audio_fname, template_fname, condition_idx, uv_template_fname, texture_img_fname, out_path):
78 | template = Mesh(filename=template_fname)
79 |
80 | sample_rate, audio = wavfile.read(audio_fname)
81 | if audio.ndim != 1:
82 |         print('Audio has multiple channels; only the first channel is considered')
83 | audio = audio[:,0]
84 |
85 | processed_audio = process_audio(ds_fname, audio, sample_rate)
86 |
87 | # Load previously saved meta graph in the default graph
88 | #saver = tf.train.import_meta_graph(tf_model_fname + '.meta')
89 | saver = tf.compat.v1.train.import_meta_graph(tf_model_fname + '.meta')
90 | #graph = tf.get_default_graph()
91 | graph = tf.compat.v1.get_default_graph()
92 |
93 | speech_features = graph.get_tensor_by_name(u'VOCA/Inputs_encoder/speech_features:0')
94 | condition_subject_id = graph.get_tensor_by_name(u'VOCA/Inputs_encoder/condition_subject_id:0')
95 | is_training = graph.get_tensor_by_name(u'VOCA/Inputs_encoder/is_training:0')
96 | input_template = graph.get_tensor_by_name(u'VOCA/Inputs_decoder/template_placeholder:0')
97 | output_decoder = graph.get_tensor_by_name(u'VOCA/output_decoder:0')
98 |
99 | num_frames = processed_audio.shape[0]
100 | feed_dict = {speech_features: np.expand_dims(np.stack(processed_audio), -1),
101 | condition_subject_id: np.repeat(condition_idx-1, num_frames),
102 | is_training: False,
103 | input_template: np.repeat(template.v[np.newaxis, :, :, np.newaxis], num_frames, axis=0)}
104 |
105 | #with tf.Session() as session:
106 | with tf.compat.v1.Session() as session:
107 | # Restore trained model
108 | saver.restore(session, tf_model_fname)
109 | predicted_vertices = np.squeeze(session.run(output_decoder, feed_dict))
110 | output_sequence_meshes(predicted_vertices, template, out_path, uv_template_fname, texture_img_fname)
111 | #tf.reset_default_graph()
112 | tf.compat.v1.reset_default_graph()
--------------------------------------------------------------------------------
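Note on usage (not part of the repo): inference ties the pieces together: it extracts windowed DeepSpeech features, restores the VOCA TensorFlow graph, and writes one OBJ per predicted frame via output_sequence_meshes. A sketch with paths assumed from the repo layout (the DeepSpeech graph is not bundled and must be obtained separately; condition_idx is 1-based, since the feed subtracts 1):

from utils.inference import inference  # hypothetical import path

inference(
    tf_model_fname='voca-addon/model/gstep_52280.model',
    ds_fname='voca-addon/ds_graph/output_graph.pb',
    audio_fname='voca-addon/audio/test_sentence.wav',
    template_fname='voca-addon/template/FLAME_sample.ply',
    condition_idx=3,          # 1-based training-subject condition
    uv_template_fname='',
    texture_img_fname='',
    out_path='voca-addon/animation-output',
)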