├── LICENSE
├── README.md
├── config_test.py
├── demo.py
├── dynamic_contour_embedding.py
├── flame_model
│   ├── FLAME_sample.ply
│   ├── flame_dynamic_embedding.npy
│   └── flame_static_embedding.pkl
├── gif
│   └── celeba_reconstruction.gif
├── input_images
│   ├── 000001.jpg
│   └── 000013.jpg
├── requirements.txt
├── run_RingNet.py
├── smpl_webuser
│   ├── LICENSE.txt
│   ├── __init__.py
│   ├── lbs.py
│   ├── posemapper.py
│   ├── serialization.py
│   └── verts.py
└── util
    ├── __init__.py
    ├── image.py
    ├── project_on_mesh.py
    ├── renderer.py
    └── using_flame_parameters.py

/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2019 Soubhik Sanyal, Timo Bolkart, Michael J. Black
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # RingNet
2 | 
3 | ![alt text](https://github.com/soubhiksanyal/RingNet/blob/master/gif/celeba_reconstruction.gif?raw=true)
4 | 
5 | This is the official repository for the paper "Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision". The project is also referred to as RingNet. The codebase provides the inference code: given a face image, it generates a 3D mesh of the complete head, including the face region. For further details on the method, please refer to the following publication,
6 | 
7 | ```
8 | Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
9 | Soubhik Sanyal, Timo Bolkart, Haiwen Feng, Michael J. Black
10 | CVPR 2019
11 | ```
12 | 
13 | More details on our NoW benchmark dataset and the 3D face reconstruction challenge can be found on our [project page](https://ringnet.is.tue.mpg.de), where a PDF preprint is also available.
14 | 
15 | * **Update**: We have changed the license agreement for the RingNet code and pre-trained weights. Both are now available under the **MIT license**, excluding the NoW Challenge dataset.
16 | 
17 | * **Update**: We have released the **evaluation code for the NoW Benchmark challenge** [here](https://github.com/soubhiksanyal/now_evaluation).
18 | 
19 | * **Update**: Added a demo that builds a texture for the reconstructed mesh from the input image.
20 | 
21 | * **Update**: The NoW Dataset is divided into a Test set and a Validation set. **Ground truth scans** are available for the Validation set. 
Please check our [project page](https://ringnet.is.tue.mpg.de) for more details.
22 | 
23 | * **Update**: We have released a **PyTorch implementation of the FLAME decoder with dynamic contour loading**, which can be directly used for training networks. Please check [FLAME_PyTorch](https://github.com/soubhiksanyal/FLAME_PyTorch) for the code.
24 | 
25 | ## Installation
26 | 
27 | The code uses **Python 2.7** and is tested with TensorFlow-GPU 1.12.0, CUDA 9.0, and cuDNN 7.3.
28 | 
29 | ### Setup RingNet Virtual Environment
30 | 
31 | ```
32 | virtualenv --no-site-packages /.virtualenvs/RingNet
33 | source /.virtualenvs/RingNet/bin/activate
34 | pip install --upgrade pip==19.1.1
35 | ```
36 | ### Clone the project and install requirements
37 | 
38 | ```
39 | git clone https://github.com/soubhiksanyal/RingNet.git
40 | cd RingNet
41 | pip install -r requirements.txt
42 | pip install opendr==0.77
43 | mkdir model
44 | ```
45 | Install the mesh processing libraries from [MPI-IS/mesh](https://github.com/MPI-IS/mesh). (This now only works with Python 3, so do not install it.)
46 | 
47 | * Update: Please install the following [fork](https://github.com/TimoBolkart/mesh) to use the mesh processing libraries with Python 2.7.
48 | 
49 | ## Download models
50 | 
51 | * Download the pretrained RingNet weights from the [project website](https://ringnet.is.tue.mpg.de), downloads page. Copy them into the **model** folder.
52 | * Download the FLAME 2019 model from [here](http://flame.is.tue.mpg.de/). Copy it into the **flame_model** folder. This step is optional and only required if you want to use the output FLAME parameters to play with the 3D mesh, i.e., to neutralize the pose and
53 | expression and use only the shape as a template for other methods like [VOCA (Voice Operated Character Animation)](https://github.com/TimoBolkart/voca).
54 | * Download the [FLAME_texture_data](http://files.is.tue.mpg.de/tbolkart/FLAME/FLAME_texture_data.zip) and unpack it into the **flame_model** folder.
55 | 
56 | ## Demo
57 | 
58 | RingNet requires a loose crop of the face in the image. We provide two sample images in the **input_images** folder, taken from the [CelebA dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html).
59 | 
60 | #### Output predicted mesh rendering
61 | 
62 | Run the following command from the terminal to check the predictions of RingNet:
63 | ```
64 | python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output
65 | ```
66 | Provide the image path and it will output the predictions in **./RingNet_output/images/**.
67 | 
68 | #### Output predicted mesh
69 | 
70 | If you want the output mesh, run the following command:
71 | ```
72 | python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output --save_obj_file=True
73 | ```
74 | It will save a *.obj file of the predicted mesh in **./RingNet_output/mesh/**.
75 | 
76 | #### Output textured mesh
77 | 
78 | If you want to output the predicted mesh with the input image projected onto it as texture, run the following command:
79 | ```
80 | python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output --save_texture=True
81 | ```
82 | It will save a *.obj, *.mtl, and *.png file of the predicted mesh in **./RingNet_output/texture/**.
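83 | 
84 | To sanity-check the exported files, you can load the mesh back with standard tooling. A minimal sketch, assuming the texture demo above has been run (`trimesh` is already pinned in requirements.txt, and the path follows the demo's output naming):
85 | ```
86 | import trimesh
87 | 
88 | # output of the texture demo above for input image 000001.jpg
89 | mesh = trimesh.load('./RingNet_output/texture/000001.obj')
90 | print(mesh.vertices.shape)  # a FLAME mesh has 5023 vertices
91 | ```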
92 | 
93 | #### Output FLAME and camera parameters
94 | 
95 | If you want the predicted FLAME and camera parameters, run the following command:
96 | ```
97 | python -m demo --img_path ./input_images/000001.jpg --out_folder ./RingNet_output --save_obj_file=True --save_flame_parameters=True
98 | ```
99 | It will save a *.npy file with the predicted FLAME and camera parameters in **./RingNet_output/params/**. The file stores a Python dictionary with the keys `cam`, `pose`, `shape`, and `expression` (see `demo.py`).
100 | 
101 | #### Generate VOCA templates
102 | 
103 | If you want to play with the 3D mesh, i.e., neutralize the pose and expression of the 3D mesh to use it as a template in [VOCA (Voice Operated Character Animation)](https://github.com/TimoBolkart/voca), run the following command:
104 | ```
105 | python -m demo --img_path ./input_images/000013.jpg --out_folder ./RingNet_output --save_obj_file=True --save_flame_parameters=True --neutralize_expression=True
106 | ```
107 | 
108 | ## License
109 | 
110 | Free for non-commercial and scientific research purposes. By using this code, you acknowledge that you have read the license terms (https://ringnet.is.tue.mpg.de/license.html), understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code. For commercial use please check the website (https://ringnet.is.tue.mpg.de/license.html).
111 | 
112 | ## Referencing RingNet
113 | 
114 | Please cite the following paper if you use the code directly or indirectly in your research/projects.
115 | ```
116 | @inproceedings{RingNet:CVPR:2019,
117 | title = {Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision},
118 | author = {Sanyal, Soubhik and Bolkart, Timo and Feng, Haiwen and Black, Michael},
119 | booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
120 | month = jun,
121 | year = {2019},
122 | month_numeric = {6}
123 | }
124 | ```
125 | 
126 | ## Contact
127 | 
128 | If you have any questions, you can contact us at soubhik.sanyal@tuebingen.mpg.de and timo.bolkart@tuebingen.mpg.de.
129 | 
130 | ## Acknowledgement
131 | 
132 | * We thank [Ahmed Osman](https://github.com/ahmedosman) for his support with the TensorFlow implementation of FLAME.
133 | * We thank Raffi Enficiaud and Ahmed Osman for pushing the release of psbody.mesh.
134 | * We thank Benjamin Pellkofer and Jonathan Williams for helping with our [RingNet project website](https://ringnet.is.tue.mpg.de).
135 | 
--------------------------------------------------------------------------------
/config_test.py:
--------------------------------------------------------------------------------
1 | """
2 | Author: Soubhik Sanyal
3 | Copyright (c) 2019, Soubhik Sanyal
4 | All rights reserved.
5 | 
6 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
7 | computer program.
8 | 
9 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
10 | the computer program from someone who is authorized to grant you that right.
11 | 
12 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
13 | 
14 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
15 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
16 | All rights reserved.
17 | 
18 | More information about RingNet is available at https://ringnet.is.tue.mpg.de.
19 | 
20 | based on github.com/akanazawa/hmr
21 | """
22 | # Sets default args
23 | # Note all data format is NHWC because slim resnet wants NHWC.
24 | import sys
25 | from absl import flags
26 | 
27 | PRETRAINED_MODEL = './model/ring_6_68641'
28 | 
29 | flags.DEFINE_string('img_path', '/ps/project/face2d3d/face2mesh/website_release_testings/single_image_test/000001.jpg', 'Image to run')
30 | flags.DEFINE_string('out_folder', './RingNet_output',
31 |                     'The output path to store images')
32 | 
33 | flags.DEFINE_boolean('save_obj_file', False,
34 |                      'If true the output meshes will be saved')
35 | 
36 | flags.DEFINE_boolean('save_flame_parameters', False,
37 |                      'If true the camera and flame parameters will be saved')
38 | 
39 | flags.DEFINE_boolean('neutralize_expression', False,
40 |                      'If true a mesh with neutralized pose and expression will be saved')
41 | 
42 | flags.DEFINE_boolean('save_texture', False,
43 |                      'If true the texture map will be stored')
44 | 
45 | flags.DEFINE_string('flame_model_path', './flame_model/generic_model.pkl', 'path to the neutral FLAME model')
46 | 
47 | flags.DEFINE_string('flame_texture_data_path', './flame_model/texture_data_512.npy', 'path to the FLAME texture data')
48 | 
49 | 
50 | flags.DEFINE_string('load_path', PRETRAINED_MODEL, 'path to trained model')
51 | 
52 | flags.DEFINE_integer('batch_size', 1,
53 |                      'Fixed to 1 for inference')
54 | 
55 | # Don't change if testing:
56 | flags.DEFINE_integer('img_size', 224,
57 |                      'Input image size to the network after preprocessing')
58 | flags.DEFINE_string('data_format', 'NHWC', 'Data format')
59 | 
60 | # Flame parameters:
61 | flags.DEFINE_integer('pose_params', 6,
62 |                      'number of flame pose parameters')
63 | flags.DEFINE_integer('shape_params', 100,
64 |                      'number of flame shape parameters')
65 | flags.DEFINE_integer('expression_params', 50,
66 |                      'number of flame expression parameters')
67 | 
68 | def get_config():
69 |     config = flags.FLAGS
70 |     config(sys.argv)
71 |     return config
72 | 
--------------------------------------------------------------------------------
/demo.py:
--------------------------------------------------------------------------------
1 | """
2 | Author: Soubhik Sanyal
3 | Copyright (c) 2019, Soubhik Sanyal
4 | All rights reserved.
5 | 
6 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
7 | computer program.
8 | 
9 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
10 | the computer program from someone who is authorized to grant you that right.
11 | 
12 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
13 | 
14 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
15 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
16 | All rights reserved.
17 | 
18 | More information about RingNet is available at https://ringnet.is.tue.mpg.de.
19 | 
20 | based on github.com/akanazawa/hmr
21 | """
22 | ## Demo of RingNet.
23 | ## Note that RingNet requires a loose crop of the face in the image.
24 | ## Sample usage:
25 | ## Run the following command to check the RingNet predictions on loosely cropped face images
26 | # python -m demo --img_path *.jpg --out_folder ./RingNet_output
27 | ## To output the meshes run the following command
28 | # python -m demo --img_path *.jpg --out_folder ./RingNet_output --save_obj_file=True
29 | ## To output both meshes and flame parameters run the following command
30 | # python -m demo --img_path *.jpg --out_folder ./RingNet_output --save_obj_file=True --save_flame_parameters=True
31 | ## To output both meshes and flame parameters and generate a neutralized mesh run the following command
32 | # python -m demo --img_path *.jpg --out_folder ./RingNet_output --save_obj_file=True --save_flame_parameters=True --neutralize_expression=True
33 | from __future__ import absolute_import
34 | from __future__ import division
35 | from __future__ import print_function
36 | 
37 | import sys
38 | import os
39 | from absl import flags
40 | import numpy as np
41 | import skimage.io as io
42 | import cv2
43 | import matplotlib.pyplot as plt
44 | import tensorflow as tf
45 | from psbody.mesh import Mesh
46 | from smpl_webuser.serialization import load_model
47 | 
48 | from util import renderer as vis_util
49 | from util import image as img_util
50 | from util.project_on_mesh import compute_texture_map
51 | from config_test import get_config
52 | from run_RingNet import RingNet_inference
53 | 
54 | def visualize(img, proc_param, verts, cam, img_name='test_image'):
55 |     """
56 |     Renders the result in original image coordinate frame.
57 |     """
58 |     cam_for_render, vert_shifted = vis_util.get_original(
59 |         proc_param, verts, cam, img_size=img.shape[:2])
60 | 
61 |     # Render results
62 |     rend_img_overlay = renderer(
63 |         vert_shifted*1.0, cam=cam_for_render, img=img, do_alpha=True)
64 |     rend_img = renderer(
65 |         vert_shifted*1.0, cam=cam_for_render, img_size=img.shape[:2])
66 |     rend_img_vp1 = renderer.rotated(
67 |         vert_shifted, 30, cam=cam_for_render, img_size=img.shape[:2])
68 | 
69 | 
70 |     fig = plt.figure(1)
71 |     plt.clf()
72 |     plt.subplot(221)
73 |     plt.imshow(img)
74 |     plt.title('input')
75 |     plt.axis('off')
76 |     plt.subplot(222)
77 |     plt.imshow(rend_img_overlay)
78 |     plt.title('3D Mesh overlay')
79 |     plt.axis('off')
80 |     plt.subplot(223)
81 |     plt.imshow(rend_img)
82 |     plt.title('3D mesh')
83 |     plt.axis('off')
84 |     plt.subplot(224)
85 |     plt.imshow(rend_img_vp1)
86 |     plt.title('diff vp')
87 |     plt.axis('off')
88 |     plt.draw()
89 |     plt.show(block=False)
90 |     fig.savefig(img_name + '.png')
91 |     # import ipdb
92 |     # ipdb.set_trace()
93 | 
94 | 
95 | def create_texture(img, proc_param, verts, faces, cam, texture_data):
96 |     cam_for_render, vert_shifted = vis_util.get_original(proc_param, verts, cam, img_size=img.shape[:2])
97 | 
98 |     texture_map = compute_texture_map(img, vert_shifted, faces, cam_for_render, texture_data)
99 |     return texture_map
100 | 
101 | 
102 | def preprocess_image(img_path):
103 |     img = io.imread(img_path)
104 |     if np.max(img.shape[:2]) != config.img_size:
105 |         print('Resizing so the max image size is %d..' % config.img_size)
106 |         scale = (float(config.img_size) / np.max(img.shape[:2]))
107 |     else:
108 |         scale = 1.0
109 |     center = np.round(np.array(img.shape[:2]) / 2).astype(int)
110 |     # image center in (x,y)
111 |     center = center[::-1]
112 |     crop, proc_param = img_util.scale_and_crop(img, scale, center,
113 |                                                config.img_size)
114 |     # import ipdb; ipdb.set_trace()
115 |     # Normalize image to [-1, 1]
116 |     # plt.imshow(crop/255.0)
117 |     # plt.show()
118 |     crop = 2 * ((crop / 255.) - 0.5)
119 | 
120 |     return crop, proc_param, img
121 | 
122 | 
123 | def main(config, template_mesh):
124 |     sess = tf.Session()
125 |     model = RingNet_inference(config, sess=sess)
126 |     input_img, proc_param, img = preprocess_image(config.img_path)
127 |     vertices, flame_parameters = model.predict(np.expand_dims(input_img, axis=0), get_parameters=True)
128 |     cams = flame_parameters[0][:3]
129 |     visualize(img, proc_param, vertices[0], cams, img_name=config.out_folder + '/images/' + config.img_path.split('/')[-1][:-4])
130 | 
131 |     if config.save_obj_file:
132 |         if not os.path.exists(config.out_folder + '/mesh'):
133 |             os.mkdir(config.out_folder + '/mesh')
134 |         mesh = Mesh(v=vertices[0], f=template_mesh.f)
135 |         mesh.write_obj(config.out_folder + '/mesh/' + config.img_path.split('/')[-1][:-4] + '.obj')
136 | 
137 |     if config.save_flame_parameters:
138 |         if not os.path.exists(config.out_folder + '/params'):
139 |             os.mkdir(config.out_folder + '/params')
140 |         flame_parameters_ = {'cam': flame_parameters[0][:3], 'pose': flame_parameters[0][3:3+config.pose_params], 'shape': flame_parameters[0][3+config.pose_params:3+config.pose_params+config.shape_params],
141 |                              'expression': flame_parameters[0][3+config.pose_params+config.shape_params:]}
142 |         np.save(config.out_folder + '/params/' + config.img_path.split('/')[-1][:-4] + '.npy', flame_parameters_)
143 | 
144 |     if config.neutralize_expression:
145 |         from util.using_flame_parameters import make_prdicted_mesh_neutral
146 |         if not os.path.exists(config.out_folder + '/neutral_mesh'):
147 |             os.mkdir(config.out_folder + '/neutral_mesh')
148 |         neutral_mesh = make_prdicted_mesh_neutral(config.out_folder + '/params/' + config.img_path.split('/')[-1][:-4] + '.npy', config.flame_model_path)
149 |         neutral_mesh.write_obj(config.out_folder + '/neutral_mesh/' + config.img_path.split('/')[-1][:-4] + '.obj')
150 | 
151 |     if config.save_texture:
152 |         if not os.path.exists(config.flame_texture_data_path):
153 |             print('FLAME texture data not found')
154 |             return
155 |         texture_data = np.load(config.flame_texture_data_path, allow_pickle=True)[()]
156 |         texture = create_texture(img, proc_param, vertices[0], template_mesh.f, cams, texture_data)
157 | 
158 |         if not os.path.exists(config.out_folder + '/texture'):
159 |             os.mkdir(config.out_folder + '/texture')
160 | 
161 |         cv2.imwrite(config.out_folder + '/texture/' + config.img_path.split('/')[-1][:-4] + '.png', texture[:,:,::-1])
162 |         mesh = Mesh(v=vertices[0], f=template_mesh.f)
163 |         mesh.vt = texture_data['vt']
164 |         mesh.ft = texture_data['ft']
165 |         mesh.set_texture_image(config.out_folder + '/texture/' + config.img_path.split('/')[-1][:-4] + '.png')
166 |         mesh.write_obj(config.out_folder + '/texture/' + config.img_path.split('/')[-1][:-4] + '.obj')
167 | 
168 | 
169 | 
170 | if __name__ == '__main__':
171 |     config = get_config()
172 |     template_mesh = Mesh(filename='./flame_model/FLAME_sample.ply')
173 |     renderer = vis_util.SMPLRenderer(faces=template_mesh.f)
174 | 
175 |     if not os.path.exists(config.out_folder):
176 |         os.makedirs(config.out_folder)
177 | 
178 |     if not os.path.exists(config.out_folder + '/images'):
179 |         os.mkdir(config.out_folder + '/images')
180 | 
181 |     main(config, template_mesh)
182 | 
--------------------------------------------------------------------------------
/dynamic_contour_embedding.py:
--------------------------------------------------------------------------------
1 | """
2 | Author: Soubhik Sanyal
3 | Copyright (c) 2019, Soubhik Sanyal
4 | All rights reserved.
5 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
6 | computer program.
7 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
8 | the computer program from someone who is authorized to grant you that right.
9 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
10 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
11 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
12 | All rights reserved.
13 | More information about RingNet is available at https://ringnet.is.tue.mpg.de.
14 | """
15 | 
16 | ## A function to load the dynamic contour and the static landmarks on a template mesh
17 | ## Please cite the updated citation from https://ringnet.is.tue.mpg.de if you use the dynamic contour for FLAME
18 | ## The use of static and dynamic contours for any project follows the licensing from FLAME (http://flame.is.tue.mpg.de/)
19 | 
20 | import numpy as np
21 | import pyrender
22 | import trimesh
23 | from smpl_webuser.serialization import load_model
24 | from psbody.mesh import Mesh
25 | import cPickle as pickle
26 | 
27 | def load_static_embedding(static_embedding_path):
28 |     with open(static_embedding_path, 'rb') as f:
29 |         lmk_indexes_dict = pickle.load(f)
30 |     lmk_face_idx = lmk_indexes_dict['lmk_face_idx'].astype(np.uint32)
31 |     lmk_b_coords = lmk_indexes_dict['lmk_b_coords']
32 |     return lmk_face_idx, lmk_b_coords
33 | 
34 | def mesh_points_by_barycentric_coordinates(mesh_verts, mesh_faces, lmk_face_idx, lmk_b_coords):
35 |     # function: evaluate 3d points given mesh and landmark embedding
36 |     # modified from https://github.com/Rubikplayer/flame-fitting/blob/master/fitting/landmarks.py
37 |     dif1 = np.vstack([(mesh_verts[mesh_faces[lmk_face_idx], 0] * lmk_b_coords).sum(axis=1),
38 |                       (mesh_verts[mesh_faces[lmk_face_idx], 1] * lmk_b_coords).sum(axis=1),
39 |                       (mesh_verts[mesh_faces[lmk_face_idx], 2] * lmk_b_coords).sum(axis=1)]).T
40 |     return dif1
41 | 
42 | def load_dynamic_contour(template_flame_path='None', contour_embeddings_path='None', static_embedding_path='None', angle=0):
43 |     template_mesh = Mesh(filename=template_flame_path)
44 |     # the dynamic embedding maps a yaw-angle index to contour faces and barycentric coords
45 |     dynamic_lmks_embeddings = np.load(contour_embeddings_path, allow_pickle=True).item()
46 |     lmk_face_idx_static, lmk_b_coords_static = load_static_embedding(static_embedding_path)
47 |     lmk_face_idx_dynamic = dynamic_lmks_embeddings['lmk_face_idx'][angle]
48 |     lmk_b_coords_dynamic = dynamic_lmks_embeddings['lmk_b_coords'][angle]
49 |     dynamic_lmks = mesh_points_by_barycentric_coordinates(template_mesh.v, template_mesh.f, lmk_face_idx_dynamic, lmk_b_coords_dynamic)
50 |     static_lmks = mesh_points_by_barycentric_coordinates(template_mesh.v, template_mesh.f, lmk_face_idx_static, lmk_b_coords_static)
51 |     total_lmks = np.vstack([dynamic_lmks, static_lmks])
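52 |     # Shape note (assumption, following the 68-landmark convention): the static
53 |     # embedding holds 51 inner-face landmarks and the dynamic embedding 17 contour
54 |     # points, so total_lmks is expected to have shape (68, 3).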
55 | 
56 |     # Visualization of the pose dependent contour on the template mesh
57 |     vertex_colors = np.ones([template_mesh.v.shape[0], 4]) * [0.3, 0.3, 0.3, 0.8]
58 |     tri_mesh = trimesh.Trimesh(template_mesh.v, template_mesh.f,
59 |                                vertex_colors=vertex_colors)
60 |     mesh = pyrender.Mesh.from_trimesh(tri_mesh)
61 |     scene = pyrender.Scene()
62 |     scene.add(mesh)
63 |     sm = trimesh.creation.uv_sphere(radius=0.005)
64 |     sm.visual.vertex_colors = [0.9, 0.1, 0.1, 1.0]
65 |     tfs = np.tile(np.eye(4), (len(total_lmks), 1, 1))
66 |     tfs[:, :3, 3] = total_lmks
67 |     joints_pcl = pyrender.Mesh.from_trimesh(sm, poses=tfs)
68 |     scene.add(joints_pcl)
69 |     pyrender.Viewer(scene, use_raymond_lighting=True)
70 | 
71 | if __name__ == '__main__':
72 |     # angle = 35.0 #in degrees
73 |     angle = 0.0 #in degrees
74 |     # angle = -16.0 #in degrees
75 |     # negative angles are stored after the 40 non-negative ones (index = 39 - angle)
76 |     if angle < 0:
77 |         angle = 39 - angle
78 |     contour_embeddings_path = './flame_model/flame_dynamic_embedding.npy'
79 |     static_embedding_path = './flame_model/flame_static_embedding.pkl'
80 |     load_dynamic_contour(template_flame_path='./flame_model/FLAME_sample.ply', contour_embeddings_path=contour_embeddings_path, static_embedding_path=static_embedding_path, angle=int(angle))
81 | 
--------------------------------------------------------------------------------
/flame_model/FLAME_sample.ply:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/flame_model/FLAME_sample.ply
--------------------------------------------------------------------------------
/flame_model/flame_dynamic_embedding.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/flame_model/flame_dynamic_embedding.npy
--------------------------------------------------------------------------------
/flame_model/flame_static_embedding.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/flame_model/flame_static_embedding.pkl
--------------------------------------------------------------------------------
/gif/celeba_reconstruction.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/gif/celeba_reconstruction.gif
--------------------------------------------------------------------------------
/input_images/000001.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/input_images/000001.jpg
--------------------------------------------------------------------------------
/input_images/000013.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/input_images/000013.jpg
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy==1.16.3
2 | scipy==1.2.1
3 | matplotlib==2.2.3
4 | scikit-image==0.14.2
5 | chumpy==0.68
6 | opencv-python==4.0.0.21
7 | absl-py
8 | tensorflow-gpu==1.12.0
9 | ipdb
10 | pyrender==0.1.30
11 | trimesh==3.2.17
--------------------------------------------------------------------------------
/run_RingNet.py:
--------------------------------------------------------------------------------
1 | """
2 | Author: Soubhik Sanyal
3 | Copyright (c) 2019, Soubhik Sanyal
4 | 
5 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
6 | computer program.
7 | 
8 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
9 | the computer program from someone who is authorized to grant you that right.
10 | 
11 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
12 | 
13 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
14 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
15 | All rights reserved.
16 | 
17 | More information about RingNet is available at https://ringnet.is.tue.mpg.de.
18 | 
19 | 
20 | based on github.com/akanazawa/hmr
21 | """
22 | # RingNet Inference for single image.
23 | from __future__ import absolute_import
24 | from __future__ import division
25 | from __future__ import print_function
26 | 
27 | import tensorflow as tf
28 | import numpy as np
29 | from os.path import exists
30 | 
31 | class RingNet_inference(object):
32 |     def __init__(self, config, sess=None):
33 |         self.config = config
34 |         self.load_path = config.load_path
35 |         if not config.load_path:
36 |             raise Exception(
37 |                 "provide a pretrained model path"
38 |             )
39 |         if not exists(config.load_path + '.index'):
40 |             print('%s could not be found..' % config.load_path)
41 |             raise Exception(
42 |                 'pretrained model checkpoint is missing')
43 | 
44 |         # Data
45 |         self.batch_size = config.batch_size
46 |         self.img_size = config.img_size
47 |         self.data_format = config.data_format
48 |         input_size = (self.batch_size, self.img_size, self.img_size, 3)
49 |         self.images_pl = tf.placeholder(tf.float32, shape=input_size, name='input_images')
50 | 
51 |         if sess is None:
52 |             self.sess = tf.Session()
53 |         else:
54 |             self.sess = sess
55 | 
56 |         # Load graph.
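57 |         # tf.train.import_meta_graph rebuilds the saved graph definition from
58 |         # '<load_path>.meta'; prepare() below then restores the trained weights
59 |         # from the matching checkpoint files via saver.restore().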
60 |         self.saver = tf.train.import_meta_graph(self.load_path+'.meta')
61 |         self.graph = tf.get_default_graph()
62 |         self.prepare()
63 | 
64 | 
65 |     def prepare(self):
66 |         print('Restoring checkpoint %s..' % self.load_path)
67 |         self.saver.restore(self.sess, self.load_path)
68 | 
69 | 
70 |     def predict(self, images, get_parameters=False):
71 |         """
72 |         images: batch_size, img_size, img_size, 3 # Here for inference the batch size is always set to 1
73 |         Preprocessed to range [-1, 1]
74 |         """
75 |         results = self.predict_dict(images)
76 |         if get_parameters:
77 |             return results['vertices'], results['parameters']
78 |         else:
79 |             return results['vertices']
80 | 
81 | 
82 |     def predict_dict(self, images):
83 |         """
84 |         Runs the model with images.
85 |         """
86 |         # the tensor names below are fixed by the released pretrained graph
87 |         images_ip = self.graph.get_tensor_by_name(u'input_images_1:0')
88 |         params = self.graph.get_tensor_by_name(u'add_2:0')
89 |         verts = self.graph.get_tensor_by_name(u'Flamenetnormal_2/Add_9:0')
90 |         feed_dict = {
91 |             images_ip: images,
92 |         }
93 |         fetch_dict = {
94 |             'vertices': verts,
95 |             'parameters': params,
96 |         }
97 |         results = self.sess.run(fetch_dict, feed_dict)
98 |         tf.reset_default_graph()
99 |         return results
100 | 
--------------------------------------------------------------------------------
/smpl_webuser/LICENSE.txt:
--------------------------------------------------------------------------------
1 | Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the SMPL body model and software, (the "Model"), including 3D meshes, blend weights, blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model, you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model.
2 | 
3 | Ownership
4 | The Model has been developed at the Max Planck Institute for Intelligent Systems (hereinafter "MPI") and is owned by and proprietary material of the Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (hereinafter “MPG”; MPI and MPG hereinafter collectively “Max-Planck”).
5 | 
6 | License Grant
7 | Max-Planck grants you a non-exclusive, non-transferable, free of charge right:
8 | 
9 | To download the Model and use it on computers owned, leased or otherwise controlled by you and/or your organisation;
10 | To use the Model for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.
11 | Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, as training data for a commercial product, for commercial ergonomic analysis (e.g. product design, architectural design, etc.), or production of other artifacts for commercial purposes including, for example, web services, movies, television programs, mobile applications, or video games. The Model may not be used for pornographic purposes or to generate pornographic material whether commercial or not. This license also prohibits the use of the Model to train methods/algorithms/neural networks/etc. for commercial use of any kind. The Model may not be reproduced, modified and/or made available in any form to any third party without Max-Planck’s prior written permission. By downloading the Model, you agree not to reverse engineer it.
12 | 
13 | Disclaimer of Representations and Warranties
14 | You expressly acknowledge and agree that the Model results from basic research, is provided “AS IS”, may contain errors, and that any use of the Model is at your sole risk. MAX-PLANCK MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE MODEL, NEITHER EXPRESS NOR IMPLIED, AND THE ABSENCE OF ANY LEGAL OR ACTUAL DEFECTS, WHETHER DISCOVERABLE OR NOT. Specifically, and not to limit the foregoing, Max-Planck makes no representations or warranties (i) regarding the merchantability or fitness for a particular purpose of the Model, (ii) that the use of the Model will not infringe any patents, copyrights or other intellectual property rights of a third party, and (iii) that the use of the Model will not cause any damage of any kind to you or a third party.
15 | 
16 | Limitation of Liability
17 | Under no circumstances shall Max-Planck be liable for any incidental, special, indirect or consequential damages arising out of or relating to this license, including but not limited to, any lost profits, business interruption, loss of programs or other data, or all other commercial damages or losses, even if advised of the possibility thereof.
18 | 
19 | No Maintenance Services
20 | You understand and agree that Max-Planck is under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Model. Max-Planck nevertheless reserves the right to update, modify, or discontinue the Model at any time.
21 | 
22 | Publication with SMPL
23 | You agree to cite the most recent paper describing the model as specified on the download website. This website lists the most up to date bibliographic information on the about page.
24 | 
25 | Media projects with SMPL
26 | When using SMPL in a media project please give credit to Max Planck Institute for Intelligent Systems. For example: SMPL was used for character animation courtesy of the Max Planck Institute for Intelligent Systems.
27 | Commercial licensing opportunities
28 | For commercial use in the fields of medicine, psychology, and biomechanics, please contact ps-license@tue.mpg.de.
29 | For commercial use in all other fields please contact Body Labs Inc at smpl@bodylabs.com
--------------------------------------------------------------------------------
/smpl_webuser/__init__.py:
--------------------------------------------------------------------------------
1 | '''
2 | '''
--------------------------------------------------------------------------------
/smpl_webuser/lbs.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 | 
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de.
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 | 
9 | 
10 | About this file:
11 | ================
12 | This file defines linear blend skinning for the SMPL loader which
13 | defines the effect of bones and blendshapes on the vertices of the template mesh.
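14 | Linear blend skinning (see verts_core below) poses each vertex as
15 | v_i' = sum_j w_ij * T_j * v_i, a weighted blend of the joints' global transforms.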
16 | 
17 | Modules included:
18 | - global_rigid_transformation:
19 |   computes global rotation & translation of the model
20 | - verts_core: [overloaded function inherited from verts.verts_core]
21 |   computes the blending of joint-influences for each vertex based on type of skinning
22 | 
23 | '''
24 | 
25 | from posemapper import posemap
26 | import chumpy
27 | import numpy as np
28 | 
29 | def global_rigid_transformation(pose, J, kintree_table, xp):
30 |     results = {}
31 |     pose = pose.reshape((-1,3))
32 |     id_to_col = {kintree_table[1,i] : i for i in range(kintree_table.shape[1])}
33 |     parent = {i : id_to_col[kintree_table[0,i]] for i in range(1, kintree_table.shape[1])}
34 | 
35 |     if xp == chumpy:
36 |         from posemapper import Rodrigues
37 |         rodrigues = lambda x : Rodrigues(x)
38 |     else:
39 |         import cv2
40 |         rodrigues = lambda x : cv2.Rodrigues(x)[0]
41 | 
42 |     with_zeros = lambda x : xp.vstack((x, xp.array([[0.0, 0.0, 0.0, 1.0]])))
43 |     results[0] = with_zeros(xp.hstack((rodrigues(pose[0,:]), J[0,:].reshape((3,1)))))
44 | 
45 |     # walk down the kinematic tree: each joint composes its parent's transform
46 |     for i in range(1, kintree_table.shape[1]):
47 |         results[i] = results[parent[i]].dot(with_zeros(xp.hstack((
48 |             rodrigues(pose[i,:]),
49 |             ((J[i,:] - J[parent[i],:]).reshape((3,1)))
50 |         ))))
51 | 
52 |     pack = lambda x : xp.hstack([np.zeros((4, 3)), x.reshape((4,1))])
53 | 
54 |     results = [results[i] for i in sorted(results.keys())]
55 |     results_global = results
56 | 
57 |     # subtract the transformed rest-pose joint locations so transforms act on offsets
58 |     if True:
59 |         results2 = [results[i] - (pack(
60 |             results[i].dot(xp.concatenate( ( (J[i,:]), 0 ) )))
61 |         ) for i in range(len(results))]
62 |         results = results2
63 |     result = xp.dstack(results)
64 |     return result, results_global
65 | 
66 | 
67 | def verts_core(pose, v, J, weights, kintree_table, want_Jtr=False, xp=chumpy):
68 |     A, A_global = global_rigid_transformation(pose, J, kintree_table, xp)
69 |     T = A.dot(weights.T)
70 | 
71 |     rest_shape_h = xp.vstack((v.T, np.ones((1, v.shape[0]))))
72 | 
73 |     v = (T[:,0,:] * rest_shape_h[0, :].reshape((1, -1)) +
74 |          T[:,1,:] * rest_shape_h[1, :].reshape((1, -1)) +
75 |          T[:,2,:] * rest_shape_h[2, :].reshape((1, -1)) +
76 |          T[:,3,:] * rest_shape_h[3, :].reshape((1, -1))).T
77 | 
78 |     v = v[:,:3]
79 | 
80 |     if not want_Jtr:
81 |         return v
82 |     Jtr = xp.vstack([g[:3,3] for g in A_global])
83 |     return (v, Jtr)
84 | 
--------------------------------------------------------------------------------
/smpl_webuser/posemapper.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 | 
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de.
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 | 
9 | 
10 | About this file:
11 | ================
12 | This module defines the mapping of joint-angles to pose-blendshapes.
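13 | For the 'lrotmin' mapping, the pose is encoded as the concatenation of the
14 | flattened (R(theta_j) - I_3) matrices over all joints except the root.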
15 | 
16 | Modules included:
17 | - posemap:
18 |   computes the joint-to-pose blend shape mapping given a mapping type as input
19 | 
20 | '''
21 | 
22 | import chumpy as ch
23 | import numpy as np
24 | import cv2
25 | 
26 | 
27 | class Rodrigues(ch.Ch):
28 |     dterms = 'rt'
29 | 
30 |     def compute_r(self):
31 |         return cv2.Rodrigues(self.rt.r)[0]
32 | 
33 |     def compute_dr_wrt(self, wrt):
34 |         if wrt is self.rt:
35 |             return cv2.Rodrigues(self.rt.r)[1].T
36 | 
37 | 
38 | def lrotmin(p):
39 |     if isinstance(p, np.ndarray):
40 |         p = p.ravel()[3:]
41 |         return np.concatenate([(cv2.Rodrigues(np.array(pp))[0]-np.eye(3)).ravel() for pp in p.reshape((-1,3))]).ravel()
42 |     if p.ndim != 2 or p.shape[1] != 3:
43 |         p = p.reshape((-1,3))
44 |     p = p[1:]
45 |     return ch.concatenate([(Rodrigues(pp)-ch.eye(3)).ravel() for pp in p]).ravel()
46 | 
47 | def posemap(s):
48 |     if s == 'lrotmin':
49 |         return lrotmin
50 |     else:
51 |         raise Exception('Unknown posemapping: %s' % (str(s),))
52 | 
--------------------------------------------------------------------------------
/smpl_webuser/serialization.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 | 
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de.
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 | 
9 | 
10 | About this file:
11 | ================
12 | This file defines the serialization functions of the SMPL model.
13 | 
14 | Modules included:
15 | - save_model:
16 |   saves the SMPL model to a given file location as a .pkl file
17 | - load_model:
18 |   loads the SMPL model from a given file location (i.e. a .pkl file location),
19 |   or a dictionary object.
20 | 
21 | '''
22 | 
23 | __all__ = ['load_model', 'save_model']
24 | 
25 | import numpy as np
26 | import cPickle as pickle
27 | import chumpy as ch
28 | from chumpy.ch import MatVecMult
29 | from posemapper import posemap
30 | from verts import verts_core
31 | 
32 | def save_model(model, fname):
33 |     m0 = model
34 |     trainer_dict = {'v_template': np.asarray(m0.v_template),'J': np.asarray(m0.J),'weights': np.asarray(m0.weights),'kintree_table': m0.kintree_table,'f': m0.f, 'bs_type': m0.bs_type, 'posedirs': np.asarray(m0.posedirs)}
35 |     if hasattr(model, 'J_regressor'):
36 |         trainer_dict['J_regressor'] = m0.J_regressor
37 |     if hasattr(model, 'J_regressor_prior'):
38 |         trainer_dict['J_regressor_prior'] = m0.J_regressor_prior
39 |     if hasattr(model, 'weights_prior'):
40 |         trainer_dict['weights_prior'] = m0.weights_prior
41 |     if hasattr(model, 'shapedirs'):
42 |         trainer_dict['shapedirs'] = m0.shapedirs
43 |     if hasattr(model, 'vert_sym_idxs'):
44 |         trainer_dict['vert_sym_idxs'] = m0.vert_sym_idxs
45 |     if hasattr(model, 'bs_style'):
46 |         trainer_dict['bs_style'] = model.bs_style
47 |     else:
48 |         trainer_dict['bs_style'] = 'lbs'
49 |     pickle.dump(trainer_dict, open(fname, 'w'), -1)
50 | 
51 | 
52 | def backwards_compatibility_replacements(dd):
53 | 
54 |     # replacements
55 |     if 'default_v' in dd:
56 |         dd['v_template'] = dd['default_v']
57 |         del dd['default_v']
58 |     if 'template_v' in dd:
59 |         dd['v_template'] = dd['template_v']
60 |         del dd['template_v']
61 |     if 'joint_regressor' in dd:
62 |         dd['J_regressor'] = dd['joint_regressor']
63 |         del dd['joint_regressor']
64 |     if 'blendshapes' in dd:
65 |         dd['posedirs'] = dd['blendshapes']
66 |         del dd['blendshapes']
67 |     if 'J' not in dd:
68 |         dd['J'] = dd['joints']
69 |         del dd['joints']
70 | 
71 |     # defaults
72 |     if 'bs_style' not in dd:
73 |         dd['bs_style'] = 'lbs'
74 | 
75 | 
76 | 
77 | def ready_arguments(fname_or_dict):
78 | 
79 |     if not isinstance(fname_or_dict, dict):
80 |         dd = pickle.load(open(fname_or_dict))
81 |     else:
82 |         dd = fname_or_dict
83 | 
84 |     backwards_compatibility_replacements(dd)
85 | 
86 |     want_shapemodel = 'shapedirs' in dd
87 |     nposeparms = dd['kintree_table'].shape[1]*3
88 | 
89 |     if 'trans' not in dd:
90 |         dd['trans'] = np.zeros(3)
91 |     if 'pose' not in dd:
92 |         dd['pose'] = np.zeros(nposeparms)
93 |     if 'shapedirs' in dd and 'betas' not in dd:
94 |         dd['betas'] = np.zeros(dd['shapedirs'].shape[-1])
95 | 
96 |     for s in ['v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 'betas', 'J']:
97 |         if (s in dd) and not hasattr(dd[s], 'dterms'):
98 |             dd[s] = ch.array(dd[s])
99 | 
100 |     if want_shapemodel:
101 |         dd['v_shaped'] = dd['shapedirs'].dot(dd['betas'])+dd['v_template']
102 |         v_shaped = dd['v_shaped']
103 |         J_tmpx = MatVecMult(dd['J_regressor'], v_shaped[:,0])
104 |         J_tmpy = MatVecMult(dd['J_regressor'], v_shaped[:,1])
105 |         J_tmpz = MatVecMult(dd['J_regressor'], v_shaped[:,2])
106 |         dd['J'] = ch.vstack((J_tmpx, J_tmpy, J_tmpz)).T
107 |         dd['v_posed'] = v_shaped + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose']))
108 |     else:
109 |         dd['v_posed'] = dd['v_template'] + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose']))
110 | 
111 |     return dd
112 | 
113 | 
114 | 
115 | def load_model(fname_or_dict):
116 |     dd = ready_arguments(fname_or_dict)
117 | 
118 |     args = {
119 |         'pose': dd['pose'],
120 |         'v': dd['v_posed'],
121 |         'J': dd['J'],
122 |         'weights': dd['weights'],
123 |         'kintree_table': dd['kintree_table'],
124 |         'xp': ch,
125 |         'want_Jtr': True,
126 |         'bs_style': dd['bs_style']
127 |     }
128 | 
129 |     result, Jtr = verts_core(**args)
130 |     result = result + dd['trans'].reshape((1,3))
131 |     result.J_transformed = Jtr + dd['trans'].reshape((1,3))
132 | 
133 |     for k, v in dd.items():
134 |         setattr(result, k, v)
135 | 
136 |     return result
137 | 
138 | 
--------------------------------------------------------------------------------
/smpl_webuser/verts.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 | 
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de.
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 | 
9 | 
10 | About this file:
11 | ================
12 | This file defines the basic skinning modules for the SMPL loader which
13 | defines the effect of bones and blendshapes on the vertices of the template mesh.
14 | 
15 | Modules included:
16 | - verts_decorated:
17 |   creates an instance of the SMPL model which inherits model attributes from another
18 |   SMPL model.
19 | - verts_core: [overloaded function inherited by lbs.verts_core]
20 |   computes the blending of joint-influences for each vertex based on type of skinning
21 | 
22 | '''
23 | 
24 | import chumpy
25 | import lbs
26 | from posemapper import posemap
27 | import scipy.sparse as sp
28 | from chumpy.ch import MatVecMult
29 | 
30 | def ischumpy(x): return hasattr(x, 'dterms')
31 | 
32 | def verts_decorated(trans, pose,
33 |                     v_template, J, weights, kintree_table, bs_style, f,
34 |                     bs_type=None, posedirs=None, betas=None, shapedirs=None, want_Jtr=False):
35 | 
36 |     for which in [trans, pose, v_template, weights, posedirs, betas, shapedirs]:
37 |         if which is not None:
38 |             assert ischumpy(which)
39 | 
40 |     v = v_template
41 | 
42 |     if shapedirs is not None:
43 |         if betas is None:
44 |             betas = chumpy.zeros(shapedirs.shape[-1])
45 |         v_shaped = v + shapedirs.dot(betas)
46 |     else:
47 |         v_shaped = v
48 | 
49 |     if posedirs is not None:
50 |         v_posed = v_shaped + posedirs.dot(posemap(bs_type)(pose))
51 |     else:
52 |         v_posed = v_shaped
53 | 
54 |     v = v_posed
55 | 
56 |     if sp.issparse(J):
57 |         regressor = J
58 |         J_tmpx = MatVecMult(regressor, v_shaped[:,0])
59 |         J_tmpy = MatVecMult(regressor, v_shaped[:,1])
60 |         J_tmpz = MatVecMult(regressor, v_shaped[:,2])
61 |         J = chumpy.vstack((J_tmpx, J_tmpy, J_tmpz)).T
62 |     else:
63 |         assert(ischumpy(J))
64 | 
65 |     assert(bs_style=='lbs')
66 |     result, Jtr = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr=True, xp=chumpy)
67 | 
68 |     tr = trans.reshape((1,3))
69 |     result = result + tr
70 |     Jtr = Jtr + tr
71 | 
72 |     result.trans = trans
73 |     result.f = f
74 |     result.pose = pose
75 |     result.v_template = v_template
76 |     result.J = J
77 |     result.weights = weights
78 |     result.kintree_table = kintree_table
79 |     result.bs_style = bs_style
80 |     result.bs_type = bs_type
81 |     if posedirs is not None:
82 |         result.posedirs = posedirs
83 |         result.v_posed = v_posed
84 |     if shapedirs is not None:
85 |         result.shapedirs = shapedirs
86 |         result.betas = betas
87 |         result.v_shaped = v_shaped
88 |     if want_Jtr:
89 |         result.J_transformed = Jtr
90 |     return result
91 | 
92 | def verts_core(pose, v, J, weights, kintree_table, bs_style, want_Jtr=False, xp=chumpy):
93 | 
94 |     if xp == chumpy:
95 |         assert(hasattr(pose, 'dterms'))
96 |         assert(hasattr(v, 'dterms'))
97 |         assert(hasattr(J, 'dterms'))
98 |         assert(hasattr(weights, 'dterms'))
99 | 
100 |     assert(bs_style=='lbs')
101 |     result = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr, xp)
102 | 
103 |     return result
104 | 
--------------------------------------------------------------------------------
/util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/soubhiksanyal/RingNet/792e2b6ff0200394bebe1fa920008e31303862ed/util/__init__.py
--------------------------------------------------------------------------------
/util/image.py:
--------------------------------------------------------------------------------
1 | """
2 | Author: Soubhik Sanyal
3 | Copyright (c) 2019, Soubhik Sanyal
4 | All rights reserved.
5 | 
6 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
7 | computer program.
8 | 
9 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
10 | the computer program from someone who is authorized to grant you that right.
11 | 
12 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
13 | 
14 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
15 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
16 | All rights reserved.
17 | 
18 | More information about RingNet is available at https://ringnet.is.tue.mpg.de.
19 | 
20 | based on github.com/akanazawa/hmr
21 | """
22 | # Preprocessing.
23 | 
24 | import numpy as np
25 | import cv2
26 | 
27 | 
28 | def resize_img(img, scale_factor):
29 |     new_size = (np.floor(np.array(img.shape[0:2]) * scale_factor)).astype(int)
30 |     new_img = cv2.resize(img, (new_size[1], new_size[0]))
31 |     # This is scale factor of [height, width] i.e. [y, x]
32 |     actual_factor = [
33 |         new_size[0] / float(img.shape[0]), new_size[1] / float(img.shape[1])
34 |     ]
35 |     return new_img, actual_factor
36 | 
37 | 
38 | def scale_and_crop(image, scale, center, img_size):
39 |     image_scaled, scale_factors = resize_img(image, scale)
40 |     # Swap so it's [x, y]
41 |     scale_factors = [scale_factors[1], scale_factors[0]]
42 |     center_scaled = np.round(center * scale_factors).astype(np.int)
43 | 
44 |     margin = int(img_size / 2)
45 |     image_pad = np.pad(
46 |         image_scaled, ((margin, ), (margin, ), (0, )), mode='edge')
47 |     center_pad = center_scaled + margin
48 |     # figure out starting point
49 |     start_pt = center_pad - margin
50 |     end_pt = center_pad + margin
51 |     # crop:
52 |     crop = image_pad[start_pt[1]:end_pt[1], start_pt[0]:end_pt[0], :]
53 |     proc_param = {
54 |         'scale': scale,
55 |         'start_pt': start_pt,
56 |         'end_pt': end_pt,
57 |         'img_size': img_size
58 |     }
59 | 
60 |     return crop, proc_param
61 | 
--------------------------------------------------------------------------------
/util/project_on_mesh.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from psbody.mesh import Mesh
3 | from opendr.camera import ProjectPoints
4 | 
5 | def compute_texture_map(source_img, verts, faces, cam, texture_data):
6 |     '''
7 |     Given an image and a mesh aligned with the image (under scaled-orthographic projection), project the image onto the
8 |     mesh and return a texture map.
9 |     '''
10 | 
11 |     x_coords = texture_data.get('x_coords')
12 |     y_coords = texture_data.get('y_coords')
13 |     valid_pixel_ids = texture_data.get('valid_pixel_ids')
14 |     valid_pixel_3d_faces = texture_data.get('valid_pixel_3d_faces')
15 |     valid_pixel_b_coords = texture_data.get('valid_pixel_b_coords')
16 |     img_size = texture_data.get('img_size')
17 | 
18 |     pixel_3d_points = verts[valid_pixel_3d_faces[:, 0], :] * valid_pixel_b_coords[:, 0][:, np.newaxis] + \
19 |                       verts[valid_pixel_3d_faces[:, 1], :] * valid_pixel_b_coords[:, 1][:, np.newaxis] + \
20 |                       verts[valid_pixel_3d_faces[:, 2], :] * valid_pixel_b_coords[:, 2][:, np.newaxis]
21 | 
22 |     vertex_normals = Mesh(verts, faces).estimate_vertex_normals()
23 |     pixel_3d_normals = vertex_normals[valid_pixel_3d_faces[:, 0], :] * valid_pixel_b_coords[:, 0][:, np.newaxis] + \
24 |                        vertex_normals[valid_pixel_3d_faces[:, 1], :] * valid_pixel_b_coords[:, 1][:, np.newaxis] + \
25 |                        vertex_normals[valid_pixel_3d_faces[:, 2], :] * valid_pixel_b_coords[:, 2][:, np.newaxis]
26 |     n_dot_view = pixel_3d_normals[:,2]
27 | 
28 |     proj_2d_points = ProjectPoints(f=cam[0] * np.ones(2), rt=np.zeros(3), t=np.zeros(3), k=np.zeros(5), c=cam[1:3])
29 |     proj_2d_points.v = pixel_3d_points
30 |     proj_2d_points = np.round(proj_2d_points.r).astype(int)
31 | 
32 |     texture = np.zeros((img_size, img_size, 3))
33 |     for i, (x, y) in enumerate(proj_2d_points):
34 |         # skip texels whose surface normal points away from the camera
35 |         if n_dot_view[i] > 0.0:
36 |             continue
37 |         if x > 0 and x < source_img.shape[1] and y > 0 and y < source_img.shape[0]:
38 |             texture[y_coords[valid_pixel_ids[i]].astype(int), x_coords[valid_pixel_ids[i]].astype(int), :3] = source_img[y, x]
39 |     return texture
--------------------------------------------------------------------------------
/util/renderer.py:
--------------------------------------------------------------------------------
1 | """
2 | Author: Soubhik Sanyal
3 | Copyright (c) 2019, Soubhik Sanyal
4 | All rights reserved.
5 | 
6 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this
7 | computer program.
8 | 
9 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use
10 | the computer program from someone who is authorized to grant you that right.
11 | 
12 | Any use of the computer program without a valid license is prohibited and liable to prosecution.
13 | 
14 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG). acting on behalf of its
15 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics.
16 | All rights reserved.
17 | 
18 | More information about RingNet is available at https://ringnet.is.tue.mpg.de.
19 | 
20 | based on github.com/akanazawa/hmr
21 | """
22 | # Renders mesh using OpenDr for visualization.
-------------------------------------------------------------------------------- /util/renderer.py: -------------------------------------------------------------------------------- 1 | """ 2 | Author: Soubhik Sanyal 3 | Copyright (c) 2019, Soubhik Sanyal 4 | All rights reserved. 5 | 6 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this 7 | computer program. 8 | 9 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use 10 | the computer program from someone who is authorized to grant you that right. 11 | 12 | Any use of the computer program without a valid license is prohibited and liable to prosecution. 13 | 14 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG), acting on behalf of its 15 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics. 16 | All rights reserved. 17 | 18 | More information about RingNet is available at https://ringnet.is.tue.mpg.de. 19 | 20 | based on github.com/akanazawa/hmr 21 | """ 22 | # Renders meshes with OpenDR for visualization. 23 | 24 | from __future__ import absolute_import 25 | from __future__ import division 26 | from __future__ import print_function 27 | 28 | import numpy as np 29 | import cv2 30 | 31 | from opendr.camera import ProjectPoints 32 | from opendr.renderer import ColoredRenderer 33 | from opendr.lighting import LambertianPointLight 34 | 35 | colors = { 36 | # colorblind/print/copy safe: 37 | 'light_blue': [0.65098039, 0.74117647, 0.85882353], 38 | 'light_pink': [.9, .7, .7], # Used for renders without a 3D overlay 39 | } 40 | 41 | 42 | class SMPLRenderer(object): 43 | def __init__(self, 44 | img_size=256, 45 | flength=500., 46 | faces=None): 47 | self.faces = faces 48 | self.w = img_size 49 | self.h = img_size 50 | self.flength = flength 51 | 52 | def __call__(self, 53 | verts, 54 | cam=None, 55 | img=None, 56 | do_alpha=False, 57 | far=None, 58 | near=None, 59 | color_id=0, 60 | img_size=None): 61 | """ 62 | cam is 3D [f, px, py] 63 | """ 64 | if img is not None: 65 | h, w = img.shape[:2] 66 | elif img_size is not None: 67 | h = img_size[0] 68 | w = img_size[1] 69 | else: 70 | h = self.h 71 | w = self.w 72 | 73 | if cam is None: 74 | cam = [self.flength, w / 2., h / 2.] 75 | 76 | use_cam = ProjectPoints( 77 | f=cam[0] * np.ones(2), 78 | rt=np.zeros(3), 79 | t=np.zeros(3), 80 | k=np.zeros(5), 81 | c=cam[1:3]) 82 | 83 | if near is None: 84 | near = np.maximum(np.min(verts[:, 2]) - 25, 0.1) 85 | if far is None: 86 | far = np.maximum(np.max(verts[:, 2]) + 25, 25) 87 | 88 | imtmp = render_model( 89 | verts, 90 | self.faces, 91 | w, 92 | h, 93 | use_cam, 94 | do_alpha=do_alpha, 95 | img=img, 96 | far=far, 97 | near=near, 98 | color_id=color_id) 99 | 100 | return (imtmp * 255).astype('uint8') 101 | 102 | def rotated(self, 103 | verts, 104 | deg, 105 | cam=None, 106 | axis='y', 107 | img=None, 108 | do_alpha=True, 109 | far=None, 110 | near=None, 111 | color_id=0, 112 | img_size=None): 113 | import math 114 | if axis == 'y': 115 | around = cv2.Rodrigues(np.array([0, math.radians(deg), 0]))[0] 116 | elif axis == 'x': 117 | around = cv2.Rodrigues(np.array([math.radians(deg), 0, 0]))[0] 118 | else: 119 | around = cv2.Rodrigues(np.array([0, 0, math.radians(deg)]))[0] 120 | center = verts.mean(axis=0) 121 | new_v = np.dot((verts - center), around) + center 122 | 123 | return self.__call__( 124 | new_v, 125 | cam, 126 | img=img, 127 | do_alpha=do_alpha, 128 | far=far, 129 | near=near, 130 | img_size=img_size, 131 | color_id=color_id) 132 | 133 |
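A minimal rendering sketch with this class, assuming the vertices and triangulation of a predicted mesh were saved earlier (the .npy file names are placeholders):

```
# Hypothetical usage of SMPLRenderer (file names are placeholders).
import numpy as np
from util.renderer import SMPLRenderer

verts = np.load('verts.npy')  # (N, 3) predicted vertices
faces = np.load('faces.npy')  # (F, 3) triangle indices
renderer = SMPLRenderer(img_size=256, flength=500., faces=faces)
front = renderer(verts)                       # uint8 render, default camera
side = renderer.rotated(verts, 60, axis='y')  # same mesh, rotated 60 deg about y
```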
134 | def _create_renderer(w=640, 135 | h=480, 136 | rt=np.zeros(3), 137 | t=np.zeros(3), 138 | f=None, 139 | c=None, 140 | k=None, 141 | near=.5, 142 | far=10.): 143 | 144 | f = np.array([w, w]) / 2. if f is None else f 145 | c = np.array([w, h]) / 2. if c is None else c 146 | k = np.zeros(5) if k is None else k 147 | 148 | rn = ColoredRenderer() 149 | 150 | rn.camera = ProjectPoints(rt=rt, t=t, f=f, c=c, k=k) 151 | rn.frustum = {'near': near, 'far': far, 'height': h, 'width': w} 152 | return rn 153 | 154 | 155 | def _rotateY(points, angle): 156 | """Rotate the points by a specified angle around the y-axis.""" 157 | ry = np.array([[np.cos(angle), 0., np.sin(angle)], [0., 1., 0.], 158 | [-np.sin(angle), 0., np.cos(angle)]]) 159 | return np.dot(points, ry) 160 | 161 | 162 | def simple_renderer(rn, 163 | verts, 164 | faces, 165 | yrot=np.radians(120), 166 | color=colors['light_pink']): 167 | # Rendered model color 168 | rn.set(v=verts, f=faces, vc=color, bgcolor=np.ones(3)) 169 | albedo = rn.vc 170 | 171 | # Construct Back Light (on back right corner) 172 | rn.vc = LambertianPointLight( 173 | f=rn.f, 174 | v=rn.v, 175 | num_verts=len(rn.v), 176 | light_pos=_rotateY(np.array([-200, -100, -100]), yrot), 177 | vc=albedo, 178 | light_color=np.array([1, 1, 1])) 179 | 180 | # Construct Left Light 181 | rn.vc += LambertianPointLight( 182 | f=rn.f, 183 | v=rn.v, 184 | num_verts=len(rn.v), 185 | light_pos=_rotateY(np.array([800, 10, 300]), yrot), 186 | vc=albedo, 187 | light_color=np.array([1, 1, 1])) 188 | 189 | # Construct Right Light 190 | rn.vc += LambertianPointLight( 191 | f=rn.f, 192 | v=rn.v, 193 | num_verts=len(rn.v), 194 | light_pos=_rotateY(np.array([-500, 500, 1000]), yrot), 195 | vc=albedo, 196 | light_color=np.array([.7, .7, .7])) 197 | 198 | return rn.r 199 | 200 | 201 | def get_alpha(imtmp, bgval=1.): 202 | h, w = imtmp.shape[:2] 203 | alpha = (~np.all(imtmp == bgval, axis=2)).astype(imtmp.dtype) 204 | 205 | b_channel, g_channel, r_channel = cv2.split(imtmp) 206 | 207 | im_RGBA = cv2.merge((b_channel, g_channel, r_channel, alpha.astype( 208 | imtmp.dtype))) 209 | return im_RGBA 210 | 211 | 212 | def append_alpha(imtmp): 213 | alpha = np.ones_like(imtmp[:, :, 0]).astype(imtmp.dtype) 214 | if np.issubdtype(imtmp.dtype, np.uint8): 215 | alpha = alpha * 255 216 | b_channel, g_channel, r_channel = cv2.split(imtmp) 217 | im_RGBA = cv2.merge((b_channel, g_channel, r_channel, alpha)) 218 | return im_RGBA 219 | 220 | 221 | def render_model(verts, 222 | faces, 223 | w, 224 | h, 225 | cam, 226 | near=0.5, 227 | far=25, 228 | img=None, 229 | do_alpha=False, 230 | color_id=None): 231 | rn = _create_renderer( 232 | w=w, h=h, near=near, far=far, rt=cam.rt, t=cam.t, f=cam.f, c=cam.c) 233 | 234 | # Uses img as background, otherwise white background. 235 | if img is not None: 236 | rn.background_image = img / 255. if img.max() > 1 else img 237 | 238 | if color_id is None: 239 | color = colors['light_blue'] 240 | else: 241 | color_list = list(colors.values()) 242 | color = color_list[color_id % len(color_list)] 243 | 244 | imtmp = simple_renderer(rn, verts, faces, color=color) 245 | 246 | # If white bg, make transparent. 247 | if img is None and do_alpha: 248 | imtmp = get_alpha(imtmp) 249 | elif img is not None and do_alpha: 250 | imtmp = append_alpha(imtmp) 251 | 252 | return imtmp 253 | 254 | 255 | # ------------------------------ 256 | 257 |
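The helpers above compose into render_model as follows; a hedged sketch that drives it directly rather than through SMPLRenderer (camera values and file names are illustrative):

```
# Hypothetical direct call to render_model (values are illustrative).
import numpy as np
from opendr.camera import ProjectPoints
from util.renderer import render_model

w = h = 256
cam = ProjectPoints(f=500. * np.ones(2), rt=np.zeros(3), t=np.zeros(3),
                    k=np.zeros(5), c=np.array([w / 2., h / 2.]))
verts = np.load('verts.npy')  # (N, 3), assumed to lie in front of the camera
faces = np.load('faces.npy')  # (F, 3)
rgba = render_model(verts, faces, w, h, cam, do_alpha=True)  # float RGBA in [0, 1]
```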
258 | def get_original(proc_param, verts, cam, img_size): 259 | img_size = proc_param['img_size']  # note: the img_size argument is superseded by proc_param 260 | undo_scale = 1. / np.array(proc_param['scale']) 261 | 262 | cam_s = cam[0] 263 | cam_pos = cam[1:] 264 | principal_pt = np.array([img_size, img_size]) / 2. 265 | flength = 50000.0  # 500. in the original hmr code 266 | tz = flength / (0.5 * img_size * cam_s) 267 | trans = np.hstack([cam_pos, tz]) 268 | vert_shifted = verts + trans 269 | 270 | start_pt = proc_param['start_pt'] - 0.5 * img_size 271 | final_principal_pt = (principal_pt + start_pt) * undo_scale 272 | cam_for_render = np.hstack( 273 | [np.mean(flength * undo_scale), final_principal_pt]) 274 | 275 | # This is in padded image. 276 | # kp_original = (joints + proc_param['start_pt']) * undo_scale 277 | # Subtract padding from joints. 278 | margin = int(img_size / 2) 279 | # kp_original = (joints + proc_param['start_pt'] - margin) * undo_scale 280 | 281 | return cam_for_render, vert_shifted  # , kp_original 282 | -------------------------------------------------------------------------------- /util/using_flame_parameters.py: -------------------------------------------------------------------------------- 1 | """ 2 | Author: Soubhik Sanyal 3 | Copyright (c) 2019, Soubhik Sanyal 4 | All rights reserved. 5 | 6 | Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG) is holder of all proprietary rights on this 7 | computer program. 8 | 9 | You can only use this computer program if you have closed a license agreement with MPG or you get the right to use 10 | the computer program from someone who is authorized to grant you that right. 11 | 12 | Any use of the computer program without a valid license is prohibited and liable to prosecution. 13 | 14 | Copyright 2019 Max-Planck-Gesellschaft zur Foerderung der Wissenschaften e.V. (MPG), acting on behalf of its 15 | Max Planck Institute for Intelligent Systems and the Max Planck Institute for Biological Cybernetics. 16 | 17 | More information about RingNet is available at https://ringnet.is.tue.mpg.de. 18 | """ 19 | # Neutralizes the pose and expression of the predicted mesh and generates a template mesh that carries only the identity (shape) information. 20 | import numpy as np 21 | import chumpy as ch 22 | from smpl_webuser.serialization import load_model 23 | from smpl_webuser.verts import verts_decorated 24 | from psbody.mesh import Mesh 25 | 26 | 27 | def make_prdicted_mesh_neutral(predicted_params_path, flame_model_path): 28 | params = np.load(predicted_params_path, allow_pickle=True) 29 | params = params[()] 30 | pose = np.zeros(15) 31 | expression = np.zeros(100) 32 | shape = np.hstack((params['shape'], np.zeros(300-params['shape'].shape[0]))) 33 | flame_general_model = load_model(flame_model_path) 34 | generated_neutral_mesh = verts_decorated(ch.array([0.0,0.0,0.0]), 35 | ch.array(pose), 36 | ch.array(flame_general_model.r), 37 | flame_general_model.J_regressor, 38 | ch.array(flame_general_model.weights), 39 | flame_general_model.kintree_table, 40 | flame_general_model.bs_style, 41 | flame_general_model.f, 42 | bs_type=flame_general_model.bs_type, 43 | posedirs=ch.array(flame_general_model.posedirs), 44 | betas=ch.array(np.hstack((shape, expression))), 45 | shapedirs=ch.array(flame_general_model.shapedirs), 46 | want_Jtr=True) 47 | neutral_mesh = Mesh(v=generated_neutral_mesh.r, f=generated_neutral_mesh.f) 48 | return neutral_mesh 49 | --------------------------------------------------------------------------------
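A hedged usage sketch for this helper; it assumes the demo was configured to save the predicted FLAME parameters to a .npy file and that the FLAME 2019 model was downloaded as described above (both paths are assumptions):

```
# Hypothetical usage (paths are assumptions; requires the FLAME model download).
from util.using_flame_parameters import make_prdicted_mesh_neutral

neutral_mesh = make_prdicted_mesh_neutral(
    './RingNet_output/params/000001.npy',  # predicted FLAME parameters
    './flame_model/generic_model.pkl')     # FLAME 2019 model file
neutral_mesh.write_obj('./RingNet_output/neutral_mesh/000001.obj')
```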