├── .gitignore ├── LICENSE.txt ├── README.md ├── configurations ├── fwd_lbs.yml ├── inv_lbs.yml └── leap_model.yml ├── data_preparation ├── README.md ├── amass.py ├── split_movi_training.txt └── split_movi_validation.txt ├── environment.yml ├── examples ├── README.md ├── query_leap.py └── sample_smph_body.pt ├── leap ├── __init__.py ├── leap_body_model.py ├── modules │ ├── __init__.py │ ├── encoders.py │ ├── layers.py │ └── modules.py └── tools │ ├── __init__.py │ ├── libmesh │ ├── .gitignore │ ├── __init__.py │ ├── inside_mesh.py │ └── triangle_hash.pyx │ └── libmise │ ├── .gitignore │ ├── __init__.py │ ├── mise.pyx │ └── test.py ├── requirements.txt ├── setup.py └── training_code ├── checkpoints.py ├── config.py ├── datasets ├── __init__.py └── amass.py ├── evaluate_leap.py ├── train_leap.py ├── trainers ├── __init__.py └── leap_trainer.py └── utils.py /.gitignore: -------------------------------------------------------------------------------- 1 | build 2 | leap.egg-info 3 | training_code/trained_models 4 | data_preparation/tmp_data -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Copyright (c) 2021, Marko Mihajlovic, Yan Zhang, Michael J. Black, and Siyu Tang 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without 5 | modification, are permitted provided that the following conditions are met: 6 | 7 | 1. Before commercial usage of source code, the copyright holder must be contacted. 8 | 9 | 2. Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | 3. Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | 4. Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LEAP: Learning Articulated Occupancy of People 2 | [**Paper**](https://arxiv.org/pdf/2104.06849.pdf) | [**Video**](https://www.youtube.com/watch?v=UVB8A_T5e3c) | [**Project Page**](https://neuralbodies.github.io/LEAP) 3 | 4 |
5 | [teaser figure] 6 |
7 | 8 | This is the official implementation of the CVPR 2021 paper [**LEAP: Learning Articulated Occupancy of People**](https://neuralbodies.github.io/LEAP). 9 | 10 | LEAP is a neural network architecture for representing volumetric animatable human bodies. It follows traditional human body modeling techniques and leverages a statistical human prior to generalize to unseen humans. 11 | 12 | If you find our code or paper useful, please consider citing: 13 | ```bibtex 14 | @InProceedings{LEAP:CVPR:21, 15 | title = {{LEAP}: Learning Articulated Occupancy of People}, 16 | author = {Mihajlovic, Marko and Zhang, Yan and Black, Michael J and Tang, Siyu}, 17 | booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, 18 | month = {June}, 19 | year = {2021}, 20 | } 21 | ``` 22 | Contact [Marko Mihajlovic](mailto:markomih@ethz.ch) for questions or open an issue / a pull request. 23 | 24 | # Prerequisites 25 | ## 1) SMPL body model 26 | Download a SMPL body model ([**SMPL**](https://smpl.is.tue.mpg.de/), [**SMPL+H**](https://mano.is.tue.mpg.de/), [**SMPL+X**](https://smpl-x.is.tue.mpg.de/), [**MANO**](https://mano.is.tue.mpg.de/)) and store it under the `${BODY_MODELS}` directory with the following structure: 27 | ```bash 28 | ${BODY_MODELS} 29 | ├── smpl 30 | │ └── x 31 | ├── smplh 32 | │ ├── male 33 | | │ └── model.npz 34 | │ ├── female 35 | | │ └── model.npz 36 | │ └── neutral 37 | | └── model.npz 38 | ├── mano 39 | | └── x 40 | └── smplx 41 | └── x 42 | ``` 43 | 44 | NOTE: currently only the SMPL+H model is supported. Other models will be available soon. 45 | 46 | ## 2) Installation 47 | Another prerequisite is to install the Python packages specified in the `requirements.txt` file, which can be conveniently 48 | accomplished by using an [Anaconda](https://www.anaconda.com/) environment: 49 | ```bash 50 | # clone the repo 51 | git clone https://github.com/neuralbodies/leap.git 52 | cd ./leap 53 | 54 | # create environment 55 | conda env create -f environment.yml 56 | conda activate leap 57 | ``` 58 | and install the `leap` package via `pip`: 59 | ```bash 60 | # note: install the build-essential package if not already installed (`sudo apt install build-essential`) 61 | python setup.py build_ext --inplace 62 | pip install -e . 63 | ``` 64 | 65 | ## 3) (Optional) Download LEAP pretrained models 66 | Download the pretrained LEAP models from [**here**](https://drive.google.com/drive/folders/1HkkH013ErpekedqAEEifQxyMOoVu3ugg?usp=sharing) and extract them under the `${LEAP_MODELS}` directory. 67 | 68 | ## Usage 69 | Check the demo code in `examples/query_leap.py` for a demonstration of how to use LEAP for differentiable occupancy checks. 70 | 71 | ## Train your own model 72 | Follow the instructions in `data_preparation/README.md` to prepare the training data. 73 | Then, replace the placeholders for the pre-defined path variables in the configuration files (`configurations/*.yml`) and execute the `training_code/train_leap.py` script to train the neural network modules. 74 | 75 | LEAP consists of two LBS networks and one occupancy decoder.
```shell script 76 | cd training_code 77 | ``` 78 | To train the forward LBS network, execute the following command: 79 | ```shell script 80 | python train_leap.py ../configurations/fwd_lbs.yml 81 | ``` 82 | 83 | To train the inverse LBS network: 84 | ```shell script 85 | python train_leap.py ../configurations/inv_lbs.yml 86 | ``` 87 | Once the LBS networks are trained, execute the following command to train the occupancy network: 88 | ```shell script 89 | python train_leap.py ../configurations/leap_model.yml 90 | ``` 91 | 92 | See the specified yml configuration files for details about the network hyperparameters. 93 | -------------------------------------------------------------------------------- /configurations/fwd_lbs.yml: -------------------------------------------------------------------------------- 1 | method: fwd_lbs 2 | 3 | device: cuda 4 | 5 | data: 6 | dataset: amass 7 | dataset_folder: ${TRAINING_DATA_ROOT} 8 | bm_path: ${BODY_MODELS}/smplh 9 | 10 | train_split: ${TRAINING_DATA_ROOT}/split_movi_training.txt 11 | val_split: ${TRAINING_DATA_ROOT}/split_movi_validation.txt 12 | test_split: ${TRAINING_DATA_ROOT}/split_movi_validation.txt 13 | 14 | sampling_config: 15 | n_points_can: 2048 # number of points sampled in the canonical space 16 | 17 | points_uniform_ratio: 0.5 # 50% of training points are sampled uniformly and 50% around the mesh surface 18 | bbox_padding: 0 # padding for boxes around meshes 19 | points_padding: 0.1 # padding for points 20 | points_sigma: 0.01 # sampling std 21 | 22 | model: # hyperparameters for the forward LBS model 23 | hidden_size: 200 # per-layer number of neurons 24 | pn_dim: 100 # PointNet feature dimensionality 25 | 26 | training: 27 | out_dir: ./trained_models/fwd_lbs/movi_split 28 | batch_size: 30 29 | 30 | model_selection_metric: sk_loss 31 | model_selection_mode: minimize 32 | 33 | backup_every: 2000 34 | validate_every: 2000 35 | max_iterations: 500000 36 | 37 | max_epochs: -1 38 | print_every: 50 39 | -------------------------------------------------------------------------------- /configurations/inv_lbs.yml: -------------------------------------------------------------------------------- 1 | method: inv_lbs 2 | 3 | device: cuda 4 | 5 | data: 6 | dataset: amass 7 | dataset_folder: ${TRAINING_DATA_ROOT} 8 | bm_path: ${BODY_MODELS}/smplh 9 | 10 | train_split: ${TRAINING_DATA_ROOT}/split_movi_training.txt 11 | val_split: ${TRAINING_DATA_ROOT}/split_movi_validation.txt 12 | test_split: ${TRAINING_DATA_ROOT}/split_movi_validation.txt 13 | 14 | sampling_config: 15 | n_points_posed: 2048 # number of points sampled in the posed space 16 | 17 | points_uniform_ratio: 0.5 # 50% of training points are sampled uniformly and 50% around the mesh surface 18 | bbox_padding: 0 # padding for boxes around meshes 19 | points_padding: 0.1 # padding for points 20 | points_sigma: 0.01 # sampling std 21 | 22 | model: # hyperparameters for the inverse LBS model 23 | hidden_size: 200 # per-layer number of neurons 24 | pn_dim: 100 # PointNet feature dimensionality 25 | fwd_trans_cond_dim: 80 26 | 27 | training: 28 | out_dir: ./trained_models/inv_lbs/movi_split 29 | batch_size: 30 30 | 31 | model_selection_metric: sk_loss 32 | model_selection_mode: minimize 33 | 34 | backup_every: 2000 35 | validate_every: 2000 36 | max_iterations: 500000 37 | 38 | max_epochs: -1 39 | print_every: 50 40 | -------------------------------------------------------------------------------- /configurations/leap_model.yml:
-------------------------------------------------------------------------------- 1 | method: leap_model 2 | 3 | device: cuda 4 | 5 | data: 6 | dataset: amass 7 | dataset_folder: ${TRAINING_DATA_ROOT} 8 | bm_path: ${BODY_MODELS}/smplh 9 | 10 | train_split: ${TRAINING_DATA_ROOT}/split_movi_training.txt 11 | val_split: ${TRAINING_DATA_ROOT}/split_movi_validation.txt 12 | test_split: ${TRAINING_DATA_ROOT}/split_movi_validation.txt 13 | 14 | sampling_config: 15 | n_points_posed: 2048 # number of points sampled in the posed space 16 | n_points_can: 2048 # number of points sampled in the canonical space 17 | 18 | points_uniform_ratio: 0.5 # 50% of training points are sampled uniformly and 50% around the mesh surface 19 | bbox_padding: 0 # padding for boxes around meshes 20 | points_padding: 0.1 # padding for points 21 | points_sigma: 0.01 # sampling std 22 | 23 | model: # hyperparameters for the occupancy model 24 | shape_encoder: 25 | out_dim: 100 26 | hidden_size: 128 27 | 28 | structure_encoder: 29 | local_feature_size: 6 30 | 31 | pose_encoder: null 32 | 33 | onet: 34 | hidden_size: 256 35 | 36 | local_feature_encoder: 37 | point_feature_len: 120 38 | 39 | inv_lbs_model_path: ./trained_models/inv_lbs/movi_split/model_best.pt 40 | inv_lbs_model_config: 41 | hidden_size: 200 # per-layer number of neurons 42 | pn_dim: 100 # PointNet feature dimensionality 43 | fwd_trans_cond_dim: 80 44 | 45 | fwd_lbs_model_path: ./trained_models/fwd_lbs/movi_split/model_best.pt 46 | fwd_lbs_model_config: 47 | hidden_size: 200 # per-layer number of neurons 48 | pn_dim: 100 # PointNet feature dimensionality 49 | 50 | training: 51 | out_dir: ./trained_models/leap_model 52 | batch_size: 30 53 | 54 | model_selection_metric: iou 55 | model_selection_mode: maximize 56 | 57 | backup_every: 2000 58 | validate_every: 2000 59 | max_iterations: 500000 60 | 61 | max_epochs: -1 62 | print_every: 50 63 | -------------------------------------------------------------------------------- /data_preparation/README.md: -------------------------------------------------------------------------------- 1 | # Data preprocessing 2 | Code to prepare training data from the AMASS dataset for the SMPL+H body model. 3 | 4 | Download the [**AMASS**](https://amass.is.tue.mpg.de/) dataset, save it in a directory of your choice `${AMASS_ROOT}`, and then execute the following command to prepare the training files: 5 | 6 | ```shell script 7 | python amass.py --src_dataset_path ${AMASS_ROOT} --dst_dataset_path ${TRAINING_DATA_ROOT} --subsets BMLmovi --bm_dir_path ${BODY_MODELS}/smplh 8 | ``` 9 | Then, copy the reduced training/test split files for the MoVi dataset into the training directory.
10 | ```shell script 11 | cp ./split_movi_training.txt ${TRAINING_DATA_ROOT}/split_movi_training.txt 12 | cp ./split_movi_validation.txt ${TRAINING_DATA_ROOT}/split_movi_validation.txt 13 | ``` 14 | -------------------------------------------------------------------------------- /data_preparation/amass.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from glob import glob 3 | from os.path import basename, join, splitext 4 | from pathlib import Path 5 | 6 | import numpy as np 7 | import torch 8 | 9 | from leap.leap_body_model import LEAPBodyModel 10 | 11 | 12 | @torch.no_grad() 13 | def main(args): 14 | for subset in args.subsets.split(','): 15 | subset_dir = join(args.src_dataset_path, subset) 16 | subjects = [basename(s_dir) for s_dir in sorted(glob(join(subset_dir, '*')))] 17 | 18 | for subject in subjects: 19 | subject_dir = join(subset_dir, subject) 20 | shape_data = np.load(join(subject_dir, 'shape.npz')) 21 | gender = shape_data['gender'].item() 22 | bm_path = join(args.bm_dir_path, gender, 'model.npz') 23 | 24 | sequences = [basename(sn) for sn in glob(join(subject_dir, '*.npz')) if not sn.endswith('shape.npz')] 25 | for sequence in sequences: 26 | sequence_path = join(subject_dir, sequence) 27 | sequence_name = splitext(sequence)[0] 28 | data = np.load(sequence_path, allow_pickle=True) 29 | 30 | b_size = data['poses'].shape[0] 31 | leap_body_model = LEAPBodyModel(bm_path=bm_path, num_betas=data['betas'].shape[0], batch_size=b_size) 32 | 33 | leap_body_model.set_parameters( 34 | betas=torch.Tensor(data['betas']).unsqueeze(0).repeat(b_size, 1), 35 | pose_body=torch.Tensor(data['poses'][:, 3:66]), # 21 joints 36 | pose_hand=torch.Tensor(data['poses'][:, 66:])) 37 | leap_body_model.forward_parametric_model() 38 | 39 | to_save = { 40 | 'can_vertices': leap_body_model.can_vert, 41 | 'posed_vertices': leap_body_model.posed_vert, 42 | 'pose_mat': leap_body_model.pose_rot_mat, 43 | 'rel_joints': leap_body_model.rel_joints, 44 | 'fwd_transformation': leap_body_model.fwd_transformation, 45 | 'frame_name': [f'{sequence_name}_{f_idx:06d}' for f_idx in range(b_size)], 46 | 'gender': [gender] * b_size 47 | } 48 | dir_path = join(args.dst_dataset_path, subset, subject, sequence_name) 49 | Path(dir_path).mkdir(parents=True, exist_ok=True) 50 | print(f'Saving:\t{dir_path}') 51 | for b_ind in range(b_size): 52 | with open(join(dir_path, f'{b_ind:06d}.npz'), 'wb') as file: 53 | np.savez(file, **{key: to_np(val[b_ind]) for key, val in to_save.items()}) 54 | 55 | 56 | def to_np(variable): 57 | if torch.is_tensor(variable): 58 | variable = variable.detach().cpu().numpy() 59 | 60 | return variable 61 | 62 | 63 | if __name__ == '__main__': 64 | parser = argparse.ArgumentParser('Preprocess AMASS dataset.') 65 | parser.add_argument('--src_dataset_path', type=str, required=True, 66 | help='Path to AMASS dataset.') 67 | parser.add_argument('--dst_dataset_path', type=str, required=True, 68 | help='Directory path to store preprocessed dataset.') 69 | parser.add_argument('--subsets', type=str, metavar='LIST', required=True, 70 | help='Subsets of AMASS to use, separated by comma.') 71 | parser.add_argument('--bm_dir_path', type=str, required=True, 72 | help='Path to body model') 73 | 74 | main(parser.parse_args()) 75 | -------------------------------------------------------------------------------- /data_preparation/split_movi_training.txt: -------------------------------------------------------------------------------- 1 | 
BMLmovi/Subject_12_F_MoSh/Subject_12_F_1_poses 2 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_2_poses 3 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_3_poses 4 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_4_poses 5 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_5_poses 6 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_6_poses 7 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_7_poses 8 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_8_poses 9 | BMLmovi/Subject_12_F_MoSh/Subject_12_F_9_poses 10 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_1_poses 11 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_2_poses 12 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_3_poses 13 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_4_poses 14 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_5_poses 15 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_6_poses 16 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_7_poses 17 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_8_poses 18 | BMLmovi/Subject_13_F_MoSh/Subject_13_F_9_poses 19 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_1_poses 20 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_2_poses 21 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_3_poses 22 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_4_poses 23 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_5_poses 24 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_6_poses 25 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_7_poses 26 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_8_poses 27 | BMLmovi/Subject_14_F_MoSh/Subject_14_F_9_poses 28 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_1_poses 29 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_2_poses 30 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_3_poses 31 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_4_poses 32 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_5_poses 33 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_6_poses 34 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_7_poses 35 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_8_poses 36 | BMLmovi/Subject_15_F_MoSh/Subject_15_F_9_poses 37 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_1_poses 38 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_2_poses 39 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_3_poses 40 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_4_poses 41 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_5_poses 42 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_6_poses 43 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_7_poses 44 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_8_poses 45 | BMLmovi/Subject_16_F_MoSh/Subject_16_F_9_poses 46 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_1_poses 47 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_2_poses 48 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_3_poses 49 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_4_poses 50 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_5_poses 51 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_6_poses 52 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_7_poses 53 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_8_poses 54 | BMLmovi/Subject_17_F_MoSh/Subject_17_F_9_poses 55 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_1_poses 56 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_2_poses 57 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_3_poses 58 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_4_poses 59 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_5_poses 60 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_6_poses 61 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_7_poses 62 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_8_poses 63 | BMLmovi/Subject_18_F_MoSh/Subject_18_F_9_poses 64 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_1_poses 65 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_2_poses 66 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_3_poses 67 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_4_poses 68 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_5_poses 69 | 
BMLmovi/Subject_19_F_MoSh/Subject_19_F_6_poses 70 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_7_poses 71 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_8_poses 72 | BMLmovi/Subject_19_F_MoSh/Subject_19_F_9_poses 73 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_1_poses 74 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_2_poses 75 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_3_poses 76 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_4_poses 77 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_5_poses 78 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_6_poses 79 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_7_poses 80 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_8_poses 81 | BMLmovi/Subject_20_F_MoSh/Subject_20_F_9_poses 82 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_1_poses 83 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_2_poses 84 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_3_poses 85 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_4_poses 86 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_5_poses 87 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_6_poses 88 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_7_poses 89 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_8_poses 90 | BMLmovi/Subject_22_F_MoSh/Subject_22_F_9_poses 91 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_1_poses 92 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_2_poses 93 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_3_poses 94 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_4_poses 95 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_5_poses 96 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_6_poses 97 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_7_poses 98 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_8_poses 99 | BMLmovi/Subject_23_F_MoSh/Subject_23_F_9_poses 100 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_1_poses 101 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_2_poses 102 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_3_poses 103 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_4_poses 104 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_5_poses 105 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_6_poses 106 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_7_poses 107 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_8_poses 108 | BMLmovi/Subject_24_F_MoSh/Subject_24_F_9_poses 109 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_1_poses 110 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_2_poses 111 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_3_poses 112 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_4_poses 113 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_5_poses 114 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_6_poses 115 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_7_poses 116 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_8_poses 117 | BMLmovi/Subject_25_F_MoSh/Subject_25_F_9_poses 118 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_1_poses 119 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_2_poses 120 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_3_poses 121 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_4_poses 122 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_5_poses 123 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_6_poses 124 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_7_poses 125 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_8_poses 126 | BMLmovi/Subject_27_F_MoSh/Subject_27_F_9_poses 127 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_1_poses 128 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_2_poses 129 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_3_poses 130 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_4_poses 131 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_5_poses 132 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_6_poses 133 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_7_poses 134 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_8_poses 135 | BMLmovi/Subject_28_F_MoSh/Subject_28_F_9_poses 136 | 
BMLmovi/Subject_29_F_MoSh/Subject_29_F_1_poses 137 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_2_poses 138 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_3_poses 139 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_4_poses 140 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_5_poses 141 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_6_poses 142 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_7_poses 143 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_8_poses 144 | BMLmovi/Subject_29_F_MoSh/Subject_29_F_9_poses 145 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_1_poses 146 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_2_poses 147 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_3_poses 148 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_4_poses 149 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_5_poses 150 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_6_poses 151 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_7_poses 152 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_8_poses 153 | BMLmovi/Subject_2_F_MoSh/Subject_2_F_9_poses 154 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_1_poses 155 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_2_poses 156 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_3_poses 157 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_4_poses 158 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_5_poses 159 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_6_poses 160 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_7_poses 161 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_8_poses 162 | BMLmovi/Subject_30_F_MoSh/Subject_30_F_9_poses 163 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_1_poses 164 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_2_poses 165 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_3_poses 166 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_4_poses 167 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_5_poses 168 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_6_poses 169 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_7_poses 170 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_8_poses 171 | BMLmovi/Subject_32_F_MoSh/Subject_32_F_9_poses 172 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_1_poses 173 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_2_poses 174 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_3_poses 175 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_4_poses 176 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_5_poses 177 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_6_poses 178 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_7_poses 179 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_8_poses 180 | BMLmovi/Subject_33_F_MoSh/Subject_33_F_9_poses 181 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_1_poses 182 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_2_poses 183 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_3_poses 184 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_4_poses 185 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_5_poses 186 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_6_poses 187 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_7_poses 188 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_8_poses 189 | BMLmovi/Subject_34_F_MoSh/Subject_34_F_9_poses 190 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_1_poses 191 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_2_poses 192 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_3_poses 193 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_4_poses 194 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_5_poses 195 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_6_poses 196 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_7_poses 197 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_8_poses 198 | BMLmovi/Subject_35_F_MoSh/Subject_35_F_9_poses 199 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_1_poses 200 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_2_poses 201 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_3_poses 202 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_4_poses 203 | 
BMLmovi/Subject_36_F_MoSh/Subject_36_F_5_poses 204 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_6_poses 205 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_7_poses 206 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_8_poses 207 | BMLmovi/Subject_36_F_MoSh/Subject_36_F_9_poses 208 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_1_poses 209 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_2_poses 210 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_3_poses 211 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_4_poses 212 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_5_poses 213 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_6_poses 214 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_7_poses 215 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_8_poses 216 | BMLmovi/Subject_37_F_MoSh/Subject_37_F_9_poses 217 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_1_poses 218 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_2_poses 219 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_3_poses 220 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_4_poses 221 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_5_poses 222 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_6_poses 223 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_7_poses 224 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_8_poses 225 | BMLmovi/Subject_38_F_MoSh/Subject_38_F_9_poses 226 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_1_poses 227 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_2_poses 228 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_3_poses 229 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_4_poses 230 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_5_poses 231 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_6_poses 232 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_7_poses 233 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_8_poses 234 | BMLmovi/Subject_39_F_MoSh/Subject_39_F_9_poses 235 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_1_poses 236 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_2_poses 237 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_3_poses 238 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_4_poses 239 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_5_poses 240 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_6_poses 241 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_7_poses 242 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_8_poses 243 | BMLmovi/Subject_3_F_MoSh/Subject_3_F_9_poses 244 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_1_poses 245 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_2_poses 246 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_3_poses 247 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_4_poses 248 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_5_poses 249 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_6_poses 250 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_7_poses 251 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_8_poses 252 | BMLmovi/Subject_40_F_MoSh/Subject_40_F_9_poses 253 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_1_poses 254 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_2_poses 255 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_3_poses 256 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_4_poses 257 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_5_poses 258 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_6_poses 259 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_7_poses 260 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_8_poses 261 | BMLmovi/Subject_42_F_MoSh/Subject_42_F_9_poses 262 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_1_poses 263 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_2_poses 264 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_3_poses 265 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_4_poses 266 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_5_poses 267 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_6_poses 268 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_7_poses 269 | BMLmovi/Subject_43_F_MoSh/Subject_43_F_8_poses 270 | 
BMLmovi/Subject_43_F_MoSh/Subject_43_F_9_poses 271 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_1_poses 272 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_2_poses 273 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_3_poses 274 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_4_poses 275 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_5_poses 276 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_6_poses 277 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_7_poses 278 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_8_poses 279 | BMLmovi/Subject_44_F_MoSh/Subject_44_F_9_poses 280 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_1_poses 281 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_2_poses 282 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_3_poses 283 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_4_poses 284 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_5_poses 285 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_6_poses 286 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_7_poses 287 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_8_poses 288 | BMLmovi/Subject_45_F_MoSh/Subject_45_F_9_poses 289 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_1_poses 290 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_2_poses 291 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_3_poses 292 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_4_poses 293 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_5_poses 294 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_6_poses 295 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_7_poses 296 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_8_poses 297 | BMLmovi/Subject_46_F_MoSh/Subject_46_F_9_poses 298 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_1_poses 299 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_2_poses 300 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_3_poses 301 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_4_poses 302 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_5_poses 303 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_6_poses 304 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_7_poses 305 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_8_poses 306 | BMLmovi/Subject_47_F_MoSh/Subject_47_F_9_poses 307 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_1_poses 308 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_2_poses 309 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_3_poses 310 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_4_poses 311 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_5_poses 312 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_6_poses 313 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_7_poses 314 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_8_poses 315 | BMLmovi/Subject_48_F_MoSh/Subject_48_F_9_poses 316 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_1_poses 317 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_2_poses 318 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_3_poses 319 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_4_poses 320 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_5_poses 321 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_6_poses 322 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_7_poses 323 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_8_poses 324 | BMLmovi/Subject_4_F_MoSh/Subject_4_F_9_poses 325 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_1_poses 326 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_2_poses 327 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_3_poses 328 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_4_poses 329 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_5_poses 330 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_6_poses 331 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_7_poses 332 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_8_poses 333 | BMLmovi/Subject_50_F_MoSh/Subject_50_F_9_poses 334 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_1_poses 335 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_2_poses 336 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_3_poses 337 | 
BMLmovi/Subject_52_F_MoSh/Subject_52_F_4_poses 338 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_5_poses 339 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_6_poses 340 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_7_poses 341 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_8_poses 342 | BMLmovi/Subject_52_F_MoSh/Subject_52_F_9_poses 343 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_1_poses 344 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_2_poses 345 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_3_poses 346 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_4_poses 347 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_5_poses 348 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_6_poses 349 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_7_poses 350 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_8_poses 351 | BMLmovi/Subject_53_F_MoSh/Subject_53_F_9_poses 352 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_1_poses 353 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_2_poses 354 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_3_poses 355 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_4_poses 356 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_5_poses 357 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_6_poses 358 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_7_poses 359 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_8_poses 360 | BMLmovi/Subject_54_F_MoSh/Subject_54_F_9_poses 361 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_1_poses 362 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_2_poses 363 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_3_poses 364 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_4_poses 365 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_5_poses 366 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_6_poses 367 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_7_poses 368 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_8_poses 369 | BMLmovi/Subject_55_F_MoSh/Subject_55_F_9_poses 370 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_1_poses 371 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_2_poses 372 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_3_poses 373 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_4_poses 374 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_5_poses 375 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_6_poses 376 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_7_poses 377 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_8_poses 378 | BMLmovi/Subject_56_F_MoSh/Subject_56_F_9_poses 379 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_1_poses 380 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_2_poses 381 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_3_poses 382 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_4_poses 383 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_5_poses 384 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_6_poses 385 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_7_poses 386 | BMLmovi/Subject_57_F_MoSh/Subject_57_F_9_poses 387 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_1_poses 388 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_2_poses 389 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_3_poses 390 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_4_poses 391 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_5_poses 392 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_6_poses 393 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_7_poses 394 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_8_poses 395 | BMLmovi/Subject_58_F_MoSh/Subject_58_F_9_poses 396 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_1_poses 397 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_2_poses 398 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_3_poses 399 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_4_poses 400 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_5_poses 401 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_6_poses 402 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_7_poses 403 | BMLmovi/Subject_59_F_MoSh/Subject_59_F_8_poses 404 | 
BMLmovi/Subject_59_F_MoSh/Subject_59_F_9_poses 405 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_1_poses 406 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_2_poses 407 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_3_poses 408 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_4_poses 409 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_5_poses 410 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_6_poses 411 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_7_poses 412 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_8_poses 413 | BMLmovi/Subject_5_F_MoSh/Subject_5_F_9_poses 414 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_1_poses 415 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_2_poses 416 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_3_poses 417 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_4_poses 418 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_5_poses 419 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_6_poses 420 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_7_poses 421 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_8_poses 422 | BMLmovi/Subject_60_F_MoSh/Subject_60_F_9_poses 423 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_1_poses 424 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_2_poses 425 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_3_poses 426 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_4_poses 427 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_5_poses 428 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_6_poses 429 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_7_poses 430 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_8_poses 431 | BMLmovi/Subject_62_F_MoSh/Subject_62_F_9_poses 432 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_1_poses 433 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_2_poses 434 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_3_poses 435 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_4_poses 436 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_5_poses 437 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_6_poses 438 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_7_poses 439 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_8_poses 440 | BMLmovi/Subject_63_F_MoSh/Subject_63_F_9_poses 441 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_1_poses 442 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_2_poses 443 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_3_poses 444 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_4_poses 445 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_5_poses 446 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_6_poses 447 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_7_poses 448 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_8_poses 449 | BMLmovi/Subject_64_F_MoSh/Subject_64_F_9_poses 450 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_1_poses 451 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_2_poses 452 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_3_poses 453 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_4_poses 454 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_5_poses 455 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_6_poses 456 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_7_poses 457 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_8_poses 458 | BMLmovi/Subject_65_F_MoSh/Subject_65_F_9_poses 459 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_1_poses 460 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_2_poses 461 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_3_poses 462 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_4_poses 463 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_5_poses 464 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_6_poses 465 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_7_poses 466 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_8_poses 467 | BMLmovi/Subject_66_F_MoSh/Subject_66_F_9_poses 468 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_1_poses 469 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_2_poses 470 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_3_poses 471 | 
BMLmovi/Subject_67_F_MoSh/Subject_67_F_4_poses 472 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_5_poses 473 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_6_poses 474 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_7_poses 475 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_8_poses 476 | BMLmovi/Subject_67_F_MoSh/Subject_67_F_9_poses 477 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_1_poses 478 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_2_poses 479 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_3_poses 480 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_4_poses 481 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_5_poses 482 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_6_poses 483 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_7_poses 484 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_8_poses 485 | BMLmovi/Subject_68_F_MoSh/Subject_68_F_9_poses 486 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_1_poses 487 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_2_poses 488 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_3_poses 489 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_4_poses 490 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_5_poses 491 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_6_poses 492 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_7_poses 493 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_8_poses 494 | BMLmovi/Subject_69_F_MoSh/Subject_69_F_9_poses 495 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_1_poses 496 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_2_poses 497 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_3_poses 498 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_4_poses 499 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_5_poses 500 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_6_poses 501 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_7_poses 502 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_8_poses 503 | BMLmovi/Subject_6_F_MoSh/Subject_6_F_9_poses 504 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_1_poses 505 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_2_poses 506 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_3_poses 507 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_4_poses 508 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_5_poses 509 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_6_poses 510 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_7_poses 511 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_8_poses 512 | BMLmovi/Subject_70_F_MoSh/Subject_70_F_9_poses 513 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_1_poses 514 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_2_poses 515 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_3_poses 516 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_4_poses 517 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_5_poses 518 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_6_poses 519 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_7_poses 520 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_8_poses 521 | BMLmovi/Subject_72_F_MoSh/Subject_72_F_9_poses 522 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_1_poses 523 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_2_poses 524 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_3_poses 525 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_4_poses 526 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_5_poses 527 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_6_poses 528 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_7_poses 529 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_8_poses 530 | BMLmovi/Subject_73_F_MoSh/Subject_73_F_9_poses 531 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_1_poses 532 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_2_poses 533 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_3_poses 534 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_4_poses 535 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_5_poses 536 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_6_poses 537 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_7_poses 538 | 
BMLmovi/Subject_74_F_MoSh/Subject_74_F_8_poses 539 | BMLmovi/Subject_74_F_MoSh/Subject_74_F_9_poses 540 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_1_poses 541 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_2_poses 542 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_3_poses 543 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_4_poses 544 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_5_poses 545 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_6_poses 546 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_7_poses 547 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_8_poses 548 | BMLmovi/Subject_75_F_MoSh/Subject_75_F_9_poses 549 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_1_poses 550 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_2_poses 551 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_3_poses 552 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_4_poses 553 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_5_poses 554 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_6_poses 555 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_7_poses 556 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_8_poses 557 | BMLmovi/Subject_76_F_MoSh/Subject_76_F_9_poses 558 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_1_poses 559 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_2_poses 560 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_3_poses 561 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_4_poses 562 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_5_poses 563 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_6_poses 564 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_7_poses 565 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_8_poses 566 | BMLmovi/Subject_77_F_MoSh/Subject_77_F_9_poses 567 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_1_poses 568 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_2_poses 569 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_3_poses 570 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_4_poses 571 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_5_poses 572 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_6_poses 573 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_7_poses 574 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_8_poses 575 | BMLmovi/Subject_78_F_MoSh/Subject_78_F_9_poses 576 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_1_poses 577 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_2_poses 578 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_3_poses 579 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_4_poses 580 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_5_poses 581 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_6_poses 582 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_7_poses 583 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_8_poses 584 | BMLmovi/Subject_79_F_MoSh/Subject_79_F_9_poses 585 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_1_poses 586 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_2_poses 587 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_3_poses 588 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_4_poses 589 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_5_poses 590 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_6_poses 591 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_7_poses 592 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_8_poses 593 | BMLmovi/Subject_80_F_MoSh/Subject_80_F_9_poses 594 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_1_poses 595 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_2_poses 596 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_3_poses 597 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_4_poses 598 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_5_poses 599 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_6_poses 600 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_7_poses 601 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_8_poses 602 | BMLmovi/Subject_82_F_MoSh/Subject_82_F_9_poses 603 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_1_poses 604 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_2_poses 605 | 
BMLmovi/Subject_83_F_MoSh/Subject_83_F_3_poses 606 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_4_poses 607 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_5_poses 608 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_6_poses 609 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_7_poses 610 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_8_poses 611 | BMLmovi/Subject_83_F_MoSh/Subject_83_F_9_poses 612 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_1_poses 613 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_2_poses 614 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_3_poses 615 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_4_poses 616 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_5_poses 617 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_6_poses 618 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_7_poses 619 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_8_poses 620 | BMLmovi/Subject_84_F_MoSh/Subject_84_F_9_poses 621 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_1_poses 622 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_2_poses 623 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_3_poses 624 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_4_poses 625 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_5_poses 626 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_6_poses 627 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_7_poses 628 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_8_poses 629 | BMLmovi/Subject_85_F_MoSh/Subject_85_F_9_poses 630 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_1_poses 631 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_2_poses 632 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_3_poses 633 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_4_poses 634 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_5_poses 635 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_6_poses 636 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_7_poses 637 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_8_poses 638 | BMLmovi/Subject_86_F_MoSh/Subject_86_F_9_poses 639 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_1_poses 640 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_2_poses 641 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_3_poses 642 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_4_poses 643 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_5_poses 644 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_6_poses 645 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_7_poses 646 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_8_poses 647 | BMLmovi/Subject_87_F_MoSh/Subject_87_F_9_poses 648 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_1_poses 649 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_2_poses 650 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_3_poses 651 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_4_poses 652 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_5_poses 653 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_6_poses 654 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_7_poses 655 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_8_poses 656 | BMLmovi/Subject_88_F_MoSh/Subject_88_F_9_poses 657 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_1_poses 658 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_2_poses 659 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_3_poses 660 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_4_poses 661 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_5_poses 662 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_6_poses 663 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_7_poses 664 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_8_poses 665 | BMLmovi/Subject_89_F_MoSh/Subject_89_F_9_poses 666 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_1_poses 667 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_2_poses 668 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_3_poses 669 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_4_poses 670 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_5_poses 671 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_6_poses 672 | 
BMLmovi/Subject_8_F_MoSh/Subject_8_F_7_poses 673 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_8_poses 674 | BMLmovi/Subject_8_F_MoSh/Subject_8_F_9_poses 675 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_1_poses 676 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_2_poses 677 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_3_poses 678 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_4_poses 679 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_5_poses 680 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_6_poses 681 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_7_poses 682 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_8_poses 683 | BMLmovi/Subject_90_F_MoSh/Subject_90_F_9_poses 684 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_1_poses 685 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_2_poses 686 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_3_poses 687 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_4_poses 688 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_5_poses 689 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_6_poses 690 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_7_poses 691 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_8_poses 692 | BMLmovi/Subject_9_F_MoSh/Subject_9_F_9_poses 693 | -------------------------------------------------------------------------------- /data_preparation/split_movi_validation.txt: -------------------------------------------------------------------------------- 1 | BMLmovi/Subject_1_F_MoSh/Subject_1_F_1_poses 2 | BMLmovi/Subject_1_F_MoSh/Subject_1_F_2_poses 3 | BMLmovi/Subject_11_F_MoSh/Subject_11_F_1_poses 4 | BMLmovi/Subject_11_F_MoSh/Subject_11_F_2_poses 5 | BMLmovi/Subject_21_F_MoSh/Subject_21_F_1_poses 6 | BMLmovi/Subject_21_F_MoSh/Subject_21_F_2_poses 7 | BMLmovi/Subject_31_F_MoSh/Subject_31_F_1_poses 8 | BMLmovi/Subject_31_F_MoSh/Subject_31_F_2_poses 9 | BMLmovi/Subject_41_F_MoSh/Subject_41_F_1_poses 10 | BMLmovi/Subject_41_F_MoSh/Subject_41_F_2_poses 11 | BMLmovi/Subject_51_F_MoSh/Subject_51_F_1_poses 12 | BMLmovi/Subject_51_F_MoSh/Subject_51_F_2_poses 13 | BMLmovi/Subject_61_F_MoSh/Subject_61_F_1_poses 14 | BMLmovi/Subject_61_F_MoSh/Subject_61_F_2_poses 15 | BMLmovi/Subject_71_F_MoSh/Subject_71_F_1_poses 16 | BMLmovi/Subject_71_F_MoSh/Subject_71_F_2_poses 17 | BMLmovi/Subject_81_F_MoSh/Subject_81_F_1_poses 18 | BMLmovi/Subject_81_F_MoSh/Subject_81_F_2_poses 19 | -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: leap 2 | channels: 3 | - conda-forge 4 | - defaults 5 | dependencies: 6 | - _libgcc_mutex=0.1=conda_forge 7 | - _openmp_mutex=4.5=1_gnu 8 | - appdirs=1.4.4=pyh9f0ad1d_0 9 | - argon2-cffi=20.1.0=py39h3811e60_2 10 | - async_generator=1.10=py_0 11 | - attrs=21.2.0=pyhd8ed1ab_0 12 | - backcall=0.2.0=pyh9f0ad1d_0 13 | - backports=1.0=py_2 14 | - backports.functools_lru_cache=1.6.4=pyhd8ed1ab_0 15 | - bleach=3.3.0=pyh44b312d_0 16 | - blosc=1.21.0=h9c3ff4c_0 17 | - brotli=1.0.9=h9c3ff4c_4 18 | - brotlipy=0.7.0=py39h3811e60_1001 19 | - brunsli=0.1=h9c3ff4c_0 20 | - bzip2=1.0.8=h7f98852_4 21 | - ca-certificates=2020.12.5=ha878542_0 22 | - certifi=2020.12.5=py39hf3d152e_1 23 | - cffi=1.14.5=py39he32792d_0 24 | - chardet=4.0.0=py39hf3d152e_1 25 | - charls=2.2.0=h9c3ff4c_0 26 | - cloudpickle=1.6.0=py_0 27 | - cryptography=3.4.7=py39hbca0aa6_0 28 | - cycler=0.10.0=py_2 29 | - cython=0.29.23=py39he80948d_0 30 | - cytoolz=0.11.0=py39h3811e60_3 31 | - dask-core=2021.4.1=pyhd8ed1ab_0 32 | - dbus=1.13.6=h48d8840_2 33 | - decorator=4.4.2=py_0 34 | - defusedxml=0.7.1=pyhd8ed1ab_0 35 | - entrypoints=0.3=pyhd8ed1ab_1003 36 | - 
expat=2.3.0=h9c3ff4c_0 37 | - fontconfig=2.13.1=hba837de_1005 38 | - freetype=2.10.4=h0708190_1 39 | - fsspec=2021.4.0=pyhd8ed1ab_0 40 | - gettext=0.19.8.1=h0b5b191_1005 41 | - giflib=5.2.1=h36c2ea0_2 42 | - glib=2.68.2=h9c3ff4c_0 43 | - glib-tools=2.68.2=h9c3ff4c_0 44 | - gst-plugins-base=1.14.0=hbbd80ab_1 45 | - gstreamer=1.14.0=h28cd5cc_2 46 | - icu=58.2=hf484d3e_1000 47 | - idna=2.10=pyh9f0ad1d_0 48 | - imagecodecs=2021.3.31=py39h559889c_0 49 | - imageio=2.9.0=py_0 50 | - importlib-metadata=4.0.1=py39hf3d152e_0 51 | - ipykernel=5.5.5=py39hef51801_0 52 | - ipython=7.23.1=py39hef51801_0 53 | - ipython_genutils=0.2.0=py_1 54 | - ipywidgets=7.6.3=pyhd3deb0d_0 55 | - jedi=0.18.0=py39hf3d152e_2 56 | - jinja2=3.0.0=pyhd8ed1ab_0 57 | - jpeg=9d=h36c2ea0_0 58 | - jsonschema=3.2.0=pyhd8ed1ab_3 59 | - jupyter=1.0.0=py39hf3d152e_6 60 | - jupyter_client=6.1.12=pyhd8ed1ab_0 61 | - jupyter_console=6.4.0=pyhd8ed1ab_0 62 | - jupyter_core=4.7.1=py39hf3d152e_0 63 | - jupyterlab_pygments=0.1.2=pyh9f0ad1d_0 64 | - jupyterlab_widgets=1.0.0=pyhd8ed1ab_1 65 | - jxrlib=1.1=h7f98852_2 66 | - kiwisolver=1.3.1=py39h1a9c180_1 67 | - lcms2=2.12=hddcbb42_0 68 | - ld_impl_linux-64=2.35.1=hea4e1c9_2 69 | - lerc=2.2.1=h9c3ff4c_0 70 | - libaec=1.0.4=h9c3ff4c_1 71 | - libblas=3.9.0=9_openblas 72 | - libcblas=3.9.0=9_openblas 73 | - libdeflate=1.7=h7f98852_5 74 | - libffi=3.3=h58526e2_2 75 | - libgcc-ng=9.3.0=h2828fa1_19 76 | - libgfortran-ng=9.3.0=hff62375_19 77 | - libgfortran5=9.3.0=hff62375_19 78 | - libglib=2.68.2=h3e27bee_0 79 | - libgomp=9.3.0=h2828fa1_19 80 | - libiconv=1.16=h516909a_0 81 | - liblapack=3.9.0=9_openblas 82 | - libopenblas=0.3.15=pthreads_h8fe5266_0 83 | - libpng=1.6.37=h21135ba_2 84 | - libprotobuf=3.16.0=h780b84a_0 85 | - libsodium=1.0.18=h36c2ea0_1 86 | - libstdcxx-ng=9.3.0=h6de172a_19 87 | - libtiff=4.2.0=hdc55705_1 88 | - libuuid=2.32.1=h7f98852_1000 89 | - libwebp-base=1.2.0=h7f98852_2 90 | - libxcb=1.13=h7f98852_1003 91 | - libxml2=2.9.10=hb55368b_3 92 | - libzopfli=1.0.3=h9c3ff4c_0 93 | - locket=0.2.0=py_2 94 | - lz4-c=1.9.3=h9c3ff4c_0 95 | - markupsafe=2.0.0=py39h3811e60_0 96 | - matplotlib-base=3.4.2=py39h2fa2bec_0 97 | - matplotlib-inline=0.1.2=pyhd8ed1ab_2 98 | - mistune=0.8.4=py39h3811e60_1003 99 | - nbclient=0.5.3=pyhd8ed1ab_0 100 | - nbconvert=6.0.7=py39hf3d152e_3 101 | - nbformat=5.1.3=pyhd8ed1ab_0 102 | - ncurses=6.2=h58526e2_4 103 | - nest-asyncio=1.5.1=pyhd8ed1ab_0 104 | - networkx=2.5.1=pyhd8ed1ab_0 105 | - notebook=6.3.0=pyha770c72_1 106 | - numpy=1.20.2=py39hdbf815f_0 107 | - olefile=0.46=pyh9f0ad1d_1 108 | - openjpeg=2.4.0=hf7af979_0 109 | - openssl=1.1.1k=h7f98852_0 110 | - packaging=20.9=pyh44b312d_0 111 | - pandoc=2.13=h7f98852_0 112 | - pandocfilters=1.4.2=py_1 113 | - parso=0.8.2=pyhd8ed1ab_0 114 | - partd=1.2.0=pyhd8ed1ab_0 115 | - pcre=8.44=he1b5a44_0 116 | - pexpect=4.8.0=pyh9f0ad1d_2 117 | - pickleshare=0.7.5=py_1003 118 | - pillow=8.1.2=py39hf95b381_1 119 | - pip=21.1.1=pyhd8ed1ab_0 120 | - pooch=1.3.0=pyhd8ed1ab_0 121 | - prometheus_client=0.10.1=pyhd8ed1ab_0 122 | - prompt-toolkit=3.0.18=pyha770c72_0 123 | - prompt_toolkit=3.0.18=hd8ed1ab_0 124 | - protobuf=3.16.0=py39he80948d_0 125 | - pthread-stubs=0.4=h36c2ea0_1001 126 | - ptyprocess=0.7.0=pyhd3deb0d_0 127 | - pycparser=2.20=pyh9f0ad1d_2 128 | - pygments=2.9.0=pyhd8ed1ab_0 129 | - pyopenssl=20.0.1=pyhd8ed1ab_0 130 | - pyparsing=2.4.7=pyh9f0ad1d_0 131 | - pyqt=5.9.2=py39h2531618_6 132 | - pyrsistent=0.17.3=py39h3811e60_2 133 | - pysocks=1.7.1=py39hf3d152e_3 134 | - python=3.9.2=hffdb5ce_0_cpython 135 | - 
python-dateutil=2.8.1=py_0 136 | - python_abi=3.9=1_cp39 137 | - pywavelets=1.1.1=py39hce5d2b2_3 138 | - pyzmq=22.0.3=py39h37b5a0c_1 139 | - qt=5.9.7=h5867ecd_1 140 | - qtconsole=5.1.0=pyhd8ed1ab_0 141 | - qtpy=1.9.0=py_0 142 | - readline=8.1=h46c0cb4_0 143 | - requests=2.25.1=pyhd3deb0d_0 144 | - scikit-image=0.18.1=py39hde0f152_0 145 | - scipy=1.6.3=py39hee8e79c_0 146 | - send2trash=1.5.0=py_0 147 | - sip=4.19.13=py39h2531618_0 148 | - six=1.16.0=pyh6c4a22f_0 149 | - snappy=1.1.8=he1b5a44_3 150 | - sqlite=3.35.5=h74cdb3f_0 151 | - tensorboardx=2.2=pyhd8ed1ab_0 152 | - terminado=0.9.4=py39h06a4308_0 153 | - testpath=0.4.4=py_0 154 | - tifffile=2021.4.8=pyhd8ed1ab_0 155 | - tk=8.6.10=h21135ba_1 156 | - toolz=0.11.1=py_0 157 | - tornado=6.1=py39h3811e60_1 158 | - traitlets=5.0.5=py_0 159 | - tzdata=2021a=he74cb21_0 160 | - urllib3=1.26.4=pyhd8ed1ab_0 161 | - wcwidth=0.2.5=pyh9f0ad1d_2 162 | - webencodings=0.5.1=py_1 163 | - wheel=0.36.2=pyhd3deb0d_0 164 | - widgetsnbextension=3.5.1=py39hf3d152e_4 165 | - xorg-libxau=1.0.9=h7f98852_0 166 | - xorg-libxdmcp=1.1.3=h7f98852_0 167 | - xz=5.2.5=h516909a_1 168 | - yaml=0.2.5=h516909a_0 169 | - zeromq=4.3.4=h9c3ff4c_0 170 | - zfp=0.5.5=h9c3ff4c_5 171 | - zipp=3.4.1=pyhd8ed1ab_0 172 | - zlib=1.2.11=h516909a_1010 173 | - zstd=1.4.9=ha95c52a_0 174 | - pip: 175 | - freetype-py==2.2.0 176 | - opencv-python==4.5.2.54 177 | - pyglet==1.5.16 178 | - pyopengl==3.1.0 179 | - pyrender==0.1.45 180 | - pyyaml==5.4.1 181 | - setuptools==52.0.0 182 | - torch==1.8.1 183 | - torchgeometry==0.1.2 184 | - tqdm==4.60.0 185 | - trimesh==3.9.16 186 | - typing-extensions==3.10.0.0 187 | -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | # Examples 2 | Demo code in `query_leap.py` demonstrations how to use LEAP for differentiable occupancy checks. 
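In essence, `query_leap.py` builds a `LEAPBodyModel`, sets SMPL+H parameters, and queries the occupancy of arbitrary 3D points. A minimal sketch (the checkpoint and body-model paths below are placeholders, and the 0.5 threshold is the one used by the demo):

```python
import torch

from leap import LEAPBodyModel

leap_model = LEAPBodyModel(leap_path='<LEAP_MODELS>/leap_model.pt',          # placeholder path
                           bm_path='<BODY_MODELS>/smplh/neutral/model.npz',  # placeholder path
                           num_betas=16,
                           batch_size=1,
                           device='cpu')
leap_model.set_parameters(betas=torch.zeros(1, 16))  # zero shape; body and hand poses default to zero
query_points = torch.rand(1, 1000, 3)                # B x T x 3 points to test
occupancy = leap_model(query_points)                 # B x T occupancy values (differentiable)
inside = occupancy >= 0.5                            # 0.5 is the occupancy threshold
```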
3 | 4 | To run the script execute: 5 | ```shell script 6 | python query_leap.py --leap_path ${LEAP_MODELS}/smplh/sample_body.pt --bm_path ${BODY_MODELS}/smplh 7 | ``` 8 | -------------------------------------------------------------------------------- /examples/query_leap.py: -------------------------------------------------------------------------------- 1 | import os 2 | import argparse 3 | 4 | import pyrender 5 | import torch 6 | import trimesh 7 | import numpy as np 8 | 9 | from leap import LEAPBodyModel 10 | 11 | 12 | def sample_points(mesh_vert, n_points=15000): 13 | vert = mesh_vert.detach().cpu().numpy().reshape((-1, 3)) 14 | bb_min, bb_max = np.min(vert, axis=0), np.max(vert, axis=0) 15 | loc = np.array([(bb_min[0] + bb_max[0]) / 2, (bb_min[1] + bb_max[1]) / 2, (bb_min[2] + bb_max[2]) / 2]) 16 | scale = (bb_max - bb_min).max() 17 | 18 | points_uniform = np.random.rand(n_points, 3) 19 | points_uniform = 1.1 * (points_uniform - 0.5) 20 | points_uniform *= scale 21 | np_points = points_uniform + np.expand_dims(loc, axis=0) 22 | 23 | tensor_points = torch.from_numpy(np_points).to(device=mesh_vert.device, dtype=mesh_vert.dtype) 24 | tensor_points = tensor_points.unsqueeze(0) 25 | return np_points, tensor_points 26 | 27 | 28 | def vis_create_pc(pts, color=(0.0, 1.0, 0.0), radius=0.005): 29 | if torch.is_tensor(pts): 30 | pts = pts.cpu().numpy() 31 | 32 | tfs = np.tile(np.eye(4), (pts.shape[0], 1, 1)) 33 | tfs[:, :3, 3] = pts 34 | sm_in = trimesh.creation.uv_sphere(radius=radius) 35 | sm_in.visual.vertex_colors = color 36 | 37 | return pyrender.Mesh.from_trimesh(sm_in, poses=tfs) 38 | 39 | 40 | def main(leap_path, smpl_param_file, bm_path, device): 41 | # load SMPL parameters 42 | smpl_body = torch.load(smpl_param_file, map_location=torch.device('cpu')) 43 | smpl_body = {key: val.to(device=device) if torch.is_tensor(val) else val for key, val in smpl_body.items()} 44 | 45 | # load LEAP 46 | leap_model = LEAPBodyModel(leap_path, 47 | bm_path=os.path.join(bm_path, smpl_body['gender'], 'model.npz'), 48 | num_betas=smpl_body['betas'].shape[1], 49 | batch_size=smpl_body['betas'].shape[0], 50 | device=device) 51 | leap_model.set_parameters(betas=smpl_body['betas'], 52 | pose_body=smpl_body['pose_body'], 53 | pose_hand=smpl_body['pose_hand']) 54 | leap_model.forward_parametric_model() 55 | 56 | # uniform points 57 | np_query_points, tensor_query_points = sample_points(leap_model.posed_vert) 58 | occupancy = leap_model(tensor_query_points) < 0.5 59 | inside_points = (occupancy < 0.5).squeeze().detach().cpu().numpy() # 0.5 is threshold 60 | 61 | posed_mesh = leap_model.extract_posed_mesh() 62 | # can_mesh = leap_model.extract_canonical_mesh() # mesh in canonical pose 63 | 64 | # visualize 65 | scene = pyrender.Scene(ambient_light=[.1, 0.1, 0.1], bg_color=[1.0, 1.0, 1.0]) 66 | scene.add(pyrender.Mesh.from_trimesh(posed_mesh)) 67 | scene.add(vis_create_pc(np_query_points[inside_points], color=(1., 0., 0.))) # red - inside points 68 | scene.add(vis_create_pc(np_query_points[~inside_points], color=(0., 1., 0.))) # blue - outside points 69 | pyrender.Viewer(scene, use_raymond_lighting=True, run_in_thread=False) 70 | 71 | 72 | if __name__ == '__main__': 73 | parser = argparse.ArgumentParser('Visualize LEAP mesh and query points.') 74 | parser.add_argument('--leap_path', type=str, 75 | help='Path to a pretrained LEAP model.') 76 | 77 | parser.add_argument('--bm_path', type=str, 78 | help='Path to the SMPL+H body model.') 79 | 80 | parser.add_argument('--device', type=str, choices=['cuda', 'cpu'], 
default='cuda', 81 | help='Device (cuda or cpu).') 82 | 83 | args = parser.parse_args() 84 | main(args.leap_path, 85 | './sample_smph_body.pt', 86 | args.bm_path, 87 | args.device) 88 | -------------------------------------------------------------------------------- /examples/sample_smph_body.pt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuralbodies/leap/f2475588ddbc673365b429b21e4ba8f88bfd357c/examples/sample_smph_body.pt -------------------------------------------------------------------------------- /leap/__init__.py: -------------------------------------------------------------------------------- 1 | from .leap_body_model import LEAPBodyModel 2 | from .modules import ( 3 | LEAPModel, 4 | LEAPOccupancyDecoder, 5 | INVLBS, 6 | FWDLBS, 7 | ) 8 | 9 | __all__ = [ 10 | LEAPBodyModel, 11 | LEAPModel, 12 | LEAPOccupancyDecoder, 13 | INVLBS, 14 | FWDLBS, 15 | ] 16 | -------------------------------------------------------------------------------- /leap/leap_body_model.py: -------------------------------------------------------------------------------- 1 | import os.path as osp 2 | import pickle 3 | 4 | import numpy as np 5 | 6 | import torch 7 | import torch.nn as nn 8 | import torch.nn.functional as F 9 | import trimesh 10 | 11 | try: 12 | import torchgeometry 13 | from skimage import measure 14 | except Exception: 15 | pass 16 | 17 | from .modules import LEAPModel 18 | from .tools.libmise import MISE 19 | 20 | 21 | class LEAPBodyModel(nn.Module): 22 | def __init__(self, 23 | leap_path=None, 24 | bm_path=None, 25 | num_betas=16, 26 | batch_size=1, 27 | dtype=torch.float32, 28 | device='cpu'): 29 | """ Interface for LEAP body model. 30 | 31 | Args: 32 | leap_path (str): Path to pretrained LEAP model. 33 | bm_path (str): Path to a SMPL-compatible model file. 34 | num_betas (int): Number of shape coefficients for SMPL. 35 | batch_size (int): Batch size. 36 | dtype (torch.dtype): Datatype of pytorch tensors 37 | device (str): Which device to use. 
38 | """ 39 | super(LEAPBodyModel, self).__init__() 40 | 41 | self.batch_size = batch_size 42 | self.num_betas = num_betas 43 | self.dtype = dtype 44 | self.device = device 45 | 46 | # load SMPL-based model 47 | smpl_dict, self.model_type = self.load_smpl_model(bm_path) 48 | self.model = None 49 | if leap_path is not None: 50 | self.model = LEAPModel.load_from_file(leap_path) 51 | self.model = self.model.to(device=device) 52 | self.model.eval() 53 | 54 | # parse loaded model dictionary 55 | if self.num_betas < 1: 56 | self.num_betas = smpl_dict['shapedirs'].shape[-1] 57 | 58 | weights = np.repeat(smpl_dict['weights'][np.newaxis], batch_size, axis=0) 59 | kintree_table = smpl_dict['kintree_table'].astype(np.int32) 60 | v_template = np.repeat(smpl_dict['v_template'][np.newaxis], batch_size, axis=0) 61 | joint_regressor = smpl_dict['J_regressor'] # V x K 62 | pose_dirs = smpl_dict['posedirs'].reshape([smpl_dict['posedirs'].shape[0] * 3, -1]).T # 6890*30 x 207 63 | shape_dirs = smpl_dict['shapedirs'][:, :, :self.num_betas] 64 | 65 | self.v_template = torch.tensor(v_template, dtype=dtype) 66 | self.shape_dirs = torch.tensor(shape_dirs, dtype=dtype) 67 | self.pose_dirs = torch.tensor(pose_dirs, dtype=dtype) 68 | self.joint_regressor = torch.tensor(joint_regressor, dtype=dtype) 69 | self.kintree_table = torch.tensor(kintree_table, dtype=torch.int32) 70 | self.weights = torch.tensor(weights, dtype=dtype) 71 | 72 | if self.model_type == 'smplx': 73 | begin_shape_id = 300 if smpl_dict['shapedirs'].shape[-1] > 300 else 10 74 | num_expressions = 10 75 | expr_dirs = smpl_dict['shapedirs'][:, :, begin_shape_id:(begin_shape_id + num_expressions)] 76 | expr_dirs = torch.tensor(expr_dirs, dtype=dtype) 77 | self.shape_dirs = torch.cat([self.shape_dirs, expr_dirs]) 78 | 79 | # init controllable parameters 80 | self.betas = None 81 | self.root_loc = None 82 | self.root_orient = None 83 | self.pose_body = None 84 | self.pose_hand = None 85 | self.pose_jaw = None 86 | self.pose_eye = None 87 | 88 | # intermediate representations 89 | self.pose_rot_mat = None 90 | self.posed_vert = None 91 | self.can_vert = None 92 | self.rel_joints = None 93 | self.fwd_transformation = None 94 | 95 | if self.device != 'cpu': 96 | self.to_device(self.device) 97 | 98 | def to_device(self, device): 99 | """ Move pytorch tensor variables to a device. 100 | 101 | Args: 102 | device (str): PyTorch device. 103 | """ 104 | self.device = device 105 | 106 | for attr in self.__dict__.keys(): 107 | var = self.__getattribute__(attr) 108 | if torch.is_tensor(var): 109 | self.__setattr__(attr, var.to(device=self.device)) 110 | 111 | def set_grad_param(self, require_grad=True): 112 | """ Set require_grad of pytorch tensors. 113 | 114 | Args: 115 | require_grad (bool): 116 | """ 117 | for attr in self.__dict__.keys(): 118 | var = self.__getattribute__(attr) 119 | if torch.is_tensor(var): 120 | var.require_grad = require_grad 121 | 122 | def set_parameters(self, 123 | betas=None, 124 | pose_body=None, 125 | pose_hand=None, 126 | pose_jaw=None, 127 | pose_eye=None, 128 | expression=None): 129 | """ Set controllable parameters. 130 | 131 | Args: 132 | betas (torch.tensor): SMPL shape coefficients (B x betas len). 133 | pose_body (torch.tensor): Body pose parameters (B x body joints * 3). 134 | pose_hand (torch.tensor): Hand pose parameters (B x hand joints * 3). 135 | pose_jaw (torch.tensor): Jaw pose parameters (compatible with SMPL+X) (B x jaw joints * 3). 
136 | pose_eye (torch.tensor): Eye pose parameters (compatible with SMPL+X) (B x eye joints * 3). 137 | expression (torch.tensor): Expression coefficients (compatible with SMPL+X) (B x expr len). 138 | """ 139 | if betas is None: 140 | betas = torch.tensor(np.zeros((self.batch_size, self.num_betas)), dtype=self.dtype) 141 | else: 142 | betas = betas.view(self.batch_size, self.num_betas) 143 | 144 | if pose_body is None and self.model_type in ['smpl', 'smplh', 'smplx']: 145 | pose_body = torch.tensor(np.zeros((self.batch_size, 63)), dtype=self.dtype) 146 | else: 147 | pose_body = pose_body.view(self.batch_size, 63) 148 | 149 | # pose_hand 150 | if pose_hand is None: 151 | if self.model_type in ['smpl']: 152 | pose_hand = torch.tensor(np.zeros((self.batch_size, 1 * 3 * 2)), dtype=self.dtype) 153 | elif self.model_type in ['smplh', 'smplx']: 154 | pose_hand = torch.tensor(np.zeros((self.batch_size, 15 * 3 * 2)), dtype=self.dtype) 155 | elif self.model_type in ['mano']: 156 | pose_hand = torch.tensor(np.zeros((self.batch_size, 15 * 3)), dtype=self.dtype) 157 | else: 158 | pose_hand = pose_hand.view(self.batch_size, -1) 159 | 160 | # face poses 161 | if self.model_type == 'smplx': 162 | if pose_jaw is None: 163 | pose_jaw = torch.tensor(np.zeros((self.batch_size, 1 * 3)), dtype=self.dtype) 164 | else: 165 | pose_jaw = pose_jaw.view(self.batch_size, 1*3) 166 | 167 | if pose_eye is None: 168 | pose_eye = torch.tensor(np.zeros((self.batch_size, 2 * 3)), dtype=self.dtype) 169 | else: 170 | pose_eye = pose_eye.view(self.batch_size, 2*3) 171 | 172 | if expression is None: 173 | expression = torch.tensor(np.zeros((self.batch_size, self.num_expressions)), dtype=self.dtype) 174 | else: 175 | expression = expression.view(self.batch_size, self.num_expressions) 176 | 177 | betas = torch.cat([betas, expression], dim=-1) 178 | 179 | self.root_loc = torch.tensor(np.zeros((self.batch_size, 1*3)), dtype=self.dtype, device=self.device) 180 | self.root_orient = torch.tensor(np.zeros((self.batch_size, 1*3)), dtype=self.dtype, device=self.device) 181 | 182 | self.betas = betas 183 | self.pose_body = pose_body 184 | self.pose_hand = pose_hand 185 | self.pose_jaw = pose_jaw 186 | self.pose_eye = pose_eye 187 | 188 | def _get_full_pose(self): 189 | """ Concatenates joints. 190 | 191 | Returns: 192 | full_pose (torch.tensor): Full pose (B, num_joints*3) 193 | """ 194 | full_pose = [self.root_orient] 195 | if self.model_type in ['smplh', 'smpl']: 196 | full_pose.extend([self.pose_body, self.pose_hand]) 197 | elif self.model_type == 'smplx': 198 | full_pose.extend([self.pose_body, self.pose_jaw, self.pose_eye, self.pose_hand]) 199 | elif self.model_type in ['mano']: 200 | full_pose.extend([self.pose_hand]) 201 | else: 202 | raise Exception('Unsupported model type.') 203 | 204 | full_pose = torch.cat(full_pose, dim=1) 205 | return full_pose 206 | 207 | def forward(self, points): 208 | """ Checks whether given query points are located inside of a human body. 
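        Query points are first mapped to the canonical space by the inverse LBS network; their forward LBS skinning weights are then compared with the inverse ones to form a cycle-distance feature, and the occupancy decoder is evaluated on the canonical points conditioned on these quantities (see `_query_occupancy`).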
209 | 210 | Args: 211 | points (torch.tensor): Query points (B x T x 3) 212 | 213 | Returns: 214 | occupancy values (torch.tensor): (B x T) 215 | """ 216 | self.forward_parametric_model() 217 | occupancy = self._query_occupancy(points) 218 | return occupancy 219 | 220 | def _query_occupancy(self, points, canonical_points=False): 221 | if not canonical_points: 222 | # project query points to the canonical space 223 | point_weights, can_points = \ 224 | self.model.inv_lbs(points, self.can_vert, self.posed_vert, self.fwd_transformation) 225 | fwd_point_weights = self.model.fwd_lbs(can_points, self.can_vert) 226 | cycle_distance = torch.sum((point_weights - fwd_point_weights).abs(), dim=-1, keepdim=True) 227 | else: # if points are directly sampled in the canonical space 228 | can_points = points 229 | point_weights = self.model.fwd_lbs(can_points, self.can_vert) 230 | cycle_distance = torch.zeros_like(point_weights[..., :1]) 231 | 232 | # occupancy check 233 | occupancy = self.model.leap_occupancy_decoder( 234 | can_points=can_points, point_weights=point_weights, cycle_distance=cycle_distance, 235 | can_vert=self.can_vert, 236 | rot_mats=self.pose_rot_mat, rel_joints=self.rel_joints, 237 | root_loc=self.root_loc, fwd_transformation=self.fwd_transformation) 238 | return occupancy 239 | 240 | def forward_parametric_model(self): 241 | B = self.pose_body.shape[0] 242 | 243 | # pose to rot matrices 244 | full_pose = self._get_full_pose() 245 | full_pose = full_pose.view(B, -1, 3) 246 | 247 | self.pose_rot_mat = torchgeometry.angle_axis_to_rotation_matrix(full_pose.view(-1, 3))[:, :3, :3] 248 | self.pose_rot_mat = self.pose_rot_mat.view(B, -1, 3, 3) 249 | 250 | # Compute identity-dependent correctives 251 | identity_offsets = torch.einsum('bl,mkl->bmk', self.betas, self.shape_dirs) 252 | 253 | # Compute pose-dependent correctives 254 | _pose_feature = self.pose_rot_mat[:, 1:, :, :] - torch.eye(3, dtype=self.dtype, device=self.device) # (NxKx3x3) 255 | pose_offsets = torch.matmul( 256 | _pose_feature.view(B, -1), 257 | self.pose_dirs 258 | ).view(B, -1, 3) # (N x P) x (P, V * 3) -> N x V x 3 259 | 260 | self.can_vert = self.v_template + identity_offsets + pose_offsets 261 | 262 | # Regress joint locations 263 | self.can_joint_loc = torch.einsum('bik,ji->bjk', self.v_template + identity_offsets, self.joint_regressor) 264 | 265 | # Skinning 266 | self.fwd_transformation, self.rel_joints = self.batch_rigid_transform(self.pose_rot_mat, self.can_joint_loc) 267 | self.posed_vert = self.lbs_skinning(self.fwd_transformation, self.can_vert) 268 | 269 | def batch_rigid_transform(self, rot_mats, joints): 270 | """ Rigid transformations over joints 271 | 272 | Args: 273 | rot_mats (torch.tensor): Rotation matrices (BxNx3x3). 274 | joints (torch.tensor): Joint locations (BxNx3). 275 | 276 | Returns: 277 | posed_joints (torch.tensor): The locations of the joints after applying transformations (BxNx3). 278 | rel_transforms (torch.tensor): Relative wrt root joint rigid transformations (BxNx4x4). 
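                Note: the bind-pose joint locations are subtracted out (the `init_bone` term below), so the returned transformations can be applied directly to canonical (bind-pose) vertices to produce posed vertices.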
279 | """ 280 | B, K = rot_mats.shape[0], joints.shape[1] 281 | 282 | parents = self.kintree_table[0].long() 283 | 284 | joints = torch.unsqueeze(joints, dim=-1) 285 | 286 | rel_joints = joints.clone() 287 | rel_joints[:, 1:] -= joints[:, parents[1:]] 288 | 289 | transforms_mat = torch.cat([ 290 | F.pad(rot_mats.reshape(-1, 3, 3), [0, 0, 0, 1]), 291 | F.pad(rel_joints.reshape(-1, 3, 1), [0, 0, 0, 1], value=1) 292 | ], dim=2).reshape(-1, joints.shape[1], 4, 4) 293 | 294 | transform_chain = [transforms_mat[:, 0]] 295 | for i in range(1, parents.shape[0]): 296 | curr_res = torch.matmul(transform_chain[parents[i]], transforms_mat[:, i]) 297 | transform_chain.append(curr_res) 298 | 299 | transforms = torch.stack(transform_chain, dim=1) 300 | joints_hom = torch.cat([ 301 | joints, 302 | torch.zeros([B, K, 1, 1], dtype=self.dtype, device=self.device) 303 | ], dim=2) 304 | init_bone = F.pad(torch.matmul(transforms, joints_hom), [3, 0, 0, 0, 0, 0, 0, 0]) 305 | rel_transforms = transforms - init_bone 306 | 307 | return rel_transforms, rel_joints 308 | 309 | @staticmethod 310 | def load_smpl_model(bm_path): 311 | assert osp.exists(bm_path), f'File does not exist: {bm_path}' 312 | 313 | # load smpl parameters 314 | ext = osp.splitext(bm_path)[-1] 315 | if ext == '.npz': 316 | smpl_model = np.load(bm_path, allow_pickle=True) 317 | elif ext == 'pkl': 318 | with open(bm_path, 'rb') as smpl_file: 319 | smpl_model = pickle.load(smpl_file, encoding='latin1') 320 | else: 321 | raise ValueError(f'Invalid file type: {ext}') 322 | 323 | num_joints = smpl_model['posedirs'].shape[2] // 3 324 | model_type = {69: 'smpl', 153: 'smplh', 162: 'smplx', 45: 'mano'}[num_joints] 325 | 326 | return smpl_model, model_type 327 | 328 | @staticmethod 329 | def get_num_joints(bm_path): 330 | model_type = LEAPBodyModel.load_smpl_model(bm_path)[1] 331 | 332 | num_joints = { 333 | 'smpl': 24, 334 | 'smplh': 52, 335 | 'smplx': 55, 336 | 'mano': 16, 337 | }[model_type] 338 | 339 | return model_type, num_joints 340 | 341 | @staticmethod 342 | def get_parent_mapping(model_type): 343 | smplh_mappings = [ 344 | -1, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 9, 12, 13, 14, 16, 17, 18, 19, 20, 22, 23, 20, 25, 345 | 26, 20, 28, 29, 20, 31, 32, 20, 34, 35, 21, 37, 38, 21, 40, 41, 21, 43, 44, 21, 46, 47, 21, 49, 50 346 | ] 347 | 348 | if model_type == 'smpl': 349 | smpl_mappings = smplh_mappings[:22] + [smplh_mappings[25]] + [smplh_mappings[40]] 350 | return smpl_mappings 351 | elif model_type == 'smplh': 352 | return smplh_mappings 353 | else: 354 | raise NotImplementedError 355 | 356 | def parse_state_dict(self, state_dict): 357 | """ Parse state_dict of model and return scalars. 358 | 359 | Args: 360 | state_dict (dict): State dict of model 361 | """ 362 | 363 | for k, v in self.module_dict.items(): 364 | if k in state_dict: 365 | v.load_state_dict(state_dict[k]) 366 | else: 367 | print(f'Warning: Could not find {k} in checkpoint!') 368 | scalars = {k: v for k, v in state_dict.items() if k not in self.module_dict} 369 | return scalars 370 | 371 | def lbs_skinning(self, fwd_transformation, can_vert): 372 | """ Conversion of canonical vertices to posed vertices via linear blend skinning. 373 | 374 | Args: 375 | fwd_transformation (torch.tensor): Forward rigid transformation tensor (B x K x 4 x 4). 376 | can_vert (torch.tensor): Canonical vertices (B x V x 3). 377 | 378 | Returns: 379 | posed_vert (torch.tensor): Posed vertices (B x V x 3). 
380 | """ 381 | B = fwd_transformation.shape[0] 382 | 383 | _fwd_lbs_trans = torch.matmul( 384 | self.weights, 385 | fwd_transformation.view(B, -1, 16) 386 | ).view(B, -1, 4, 4) 387 | 388 | _vert_hom = torch.cat([ 389 | can_vert, 390 | torch.ones([B, can_vert.shape[1], 1], dtype=self.dtype, device=self.device) 391 | ], dim=2) 392 | posed_vert = torch.matmul(_fwd_lbs_trans, torch.unsqueeze(_vert_hom, dim=-1))[:, :, :3, 0] 393 | return posed_vert 394 | 395 | @torch.no_grad() 396 | def _extract_mesh(self, vertices, resolution0, upsampling_steps, canonical_points=False): 397 | """ Runs marching cubes to extract mesh for the occupancy representation. """ 398 | device = self.device 399 | 400 | # compute scale and loc 401 | bb_min = np.min(vertices, axis=0) 402 | bb_max = np.max(vertices, axis=0) 403 | loc = np.array([ 404 | (bb_min[0] + bb_max[0]) / 2, 405 | (bb_min[1] + bb_max[1]) / 2, 406 | (bb_min[2] + bb_max[2]) / 2 407 | ]) 408 | scale = (bb_max - bb_min).max() 409 | 410 | scale = torch.FloatTensor([scale]).to(device=device) 411 | loc = torch.from_numpy(loc).to(device=device) 412 | 413 | # create MISE 414 | threshold = 0.5 415 | padding = 0.1 416 | box_size = 1 + padding 417 | mesh_extractor = MISE(resolution0, upsampling_steps, threshold) 418 | 419 | # sample initial points 420 | points = mesh_extractor.query() 421 | while points.shape[0] != 0: 422 | sampled_points = torch.FloatTensor(points).to(device=device) # Query points 423 | sampled_points = sampled_points / mesh_extractor.resolution # Normalize to bounding box 424 | sampled_points = box_size * (sampled_points - 0.5) 425 | sampled_points *= scale 426 | sampled_points += loc 427 | 428 | # check occupancy for sampled points 429 | p_split = torch.split(sampled_points, 50000) # to prevent OOM 430 | occ_hats = [] 431 | for pi in p_split: 432 | pi = pi.unsqueeze(0).to(device=device) 433 | occ_hats.append(self._query_occupancy(pi, canonical_points).cpu().squeeze(0)) 434 | values = torch.cat(occ_hats, dim=0).numpy().astype(np.float64) 435 | 436 | # sample points again 437 | mesh_extractor.update(points, values) 438 | points = mesh_extractor.query() 439 | 440 | occ_hat = mesh_extractor.to_dense() 441 | # Some short hands 442 | n_x, n_y, n_z = occ_hat.shape 443 | box_size = 1 + padding 444 | # Make sure that mesh is watertight 445 | occ_hat_padded = np.pad(occ_hat, 1, 'constant', constant_values=-1e6) 446 | vertices, faces, _, _ = measure.marching_cubes(occ_hat_padded, level=threshold) 447 | 448 | vertices -= 1 # Undo padding 449 | # Normalize to bounding box 450 | vertices /= np.array([n_x - 1, n_y - 1, n_z - 1]) 451 | vertices = box_size * (vertices - 0.5) 452 | vertices = vertices * scale.item() 453 | vertices = vertices + loc.view(1, 3).detach().cpu().numpy() 454 | 455 | # Create mesh 456 | mesh = trimesh.Trimesh(vertices, faces, process=False) 457 | return mesh 458 | 459 | @torch.no_grad() 460 | def extract_canonical_mesh(self, resolution0=32, upsampling_steps=3): 461 | self.model.eval() 462 | self.forward_parametric_model() 463 | 464 | mesh = self._extract_mesh(self.can_vert.squeeze(0).detach().cpu().numpy(), 465 | resolution0, 466 | upsampling_steps, 467 | canonical_points=True) 468 | return mesh 469 | 470 | @torch.no_grad() 471 | def extract_posed_mesh(self, resolution0=32, upsampling_steps=3): 472 | self.model.eval() 473 | self.forward_parametric_model() 474 | 475 | mesh = self._extract_mesh(self.posed_vert.squeeze(0).detach().cpu().numpy(), 476 | resolution0, 477 | upsampling_steps, 478 | canonical_points=False) 479 | return mesh 480 
| -------------------------------------------------------------------------------- /leap/modules/__init__.py: -------------------------------------------------------------------------------- 1 | from .modules import ( 2 | LEAPModel, 3 | LEAPOccupancyDecoder, 4 | INVLBS, 5 | FWDLBS, 6 | ) 7 | 8 | __all__ = [ 9 | LEAPModel, 10 | LEAPOccupancyDecoder, 11 | INVLBS, 12 | FWDLBS, 13 | ] 14 | -------------------------------------------------------------------------------- /leap/modules/encoders.py: -------------------------------------------------------------------------------- 1 | from abc import ABCMeta, ABC 2 | from copy import deepcopy 3 | from urllib.parse import urlparse 4 | from os import path as osp 5 | 6 | import torch 7 | import torch.nn as nn 8 | import torch.nn.functional as F 9 | from torch.utils import model_zoo 10 | 11 | from .layers import CBatchNorm1d, ResnetPointnet, BoneMLP, CResnetBlockConv1d 12 | 13 | 14 | class BaseModule(nn.Module, metaclass=ABCMeta): 15 | @classmethod 16 | def load(cls, config, state_dict=None): 17 | model = cls.from_cfg(config) # create model 18 | if model is not None and state_dict is not None: 19 | model.load_state_dict(state_dict) # load weights 20 | 21 | return model 22 | 23 | @classmethod 24 | def from_cfg(cls, config): 25 | raise NotImplementedError 26 | 27 | @staticmethod 28 | def parse_pytorch_file(file_path): 29 | # parse file 30 | if urlparse(file_path).scheme in ('http', 'https'): 31 | state_dict = model_zoo.load_url(file_path, progress=True) 32 | else: 33 | assert osp.exists(file_path), f'File does not exist: {file_path}' 34 | print('=> Loading checkpoint from local file...') 35 | state_dict = torch.load(file_path, map_location=torch.device('cpu')) 36 | return state_dict 37 | 38 | 39 | class LBSNet(BaseModule, ABC): 40 | def __init__(self, num_joints, hidden_size, pn_dim): 41 | super().__init__() 42 | 43 | self.dim = 3 44 | self.pn_dim = pn_dim 45 | self.c_dim = self.get_c_dim() 46 | self.num_joints = num_joints 47 | self.hidden_size = hidden_size 48 | 49 | # create network 50 | self.point_encoder = ResnetPointnet(hidden_dim=hidden_size, out_dim=self.pn_dim) 51 | 52 | self.fc_p = nn.Conv1d(self.dim, self.hidden_size, (1,)) 53 | 54 | self.block0 = CResnetBlockConv1d(self.c_dim, self.hidden_size) 55 | self.block1 = CResnetBlockConv1d(self.c_dim, self.hidden_size) 56 | self.block2 = CResnetBlockConv1d(self.c_dim, self.hidden_size) 57 | 58 | self.bn = CBatchNorm1d(self.c_dim, self.hidden_size) 59 | 60 | self.fc_out = nn.Conv1d(self.hidden_size, self.num_joints, (1,)) 61 | 62 | self.act = F.relu 63 | 64 | def get_c_dim(self): 65 | raise NotImplementedError 66 | 67 | def _forward(self, points, cond_code): 68 | B, T, _ = points.shape 69 | p = points.transpose(1, 2) 70 | 71 | net = self.fc_p(p) # B x hidden_dim x T 72 | 73 | lbs_code = cond_code.unsqueeze(1).repeat(1, points.shape[1], 1) # B x T x c_dim 74 | 75 | c = lbs_code.transpose(1, 2) 76 | net = self.block0(net, c) 77 | net = self.block1(net, c) 78 | net = self.block2(net, c) 79 | 80 | out = self.fc_out(self.act(self.bn(net, c))) # B x K x T 81 | out = out.transpose(1, 2) # B x T x K 82 | 83 | point_weights = torch.softmax(out, dim=-1) 84 | return point_weights 85 | 86 | 87 | class ONet(BaseModule): 88 | def __init__(self, num_joints, point_feature_len, hidden_size): 89 | super().__init__() 90 | 91 | self.num_joints = num_joints 92 | self.point_feature_len = point_feature_len 93 | self.c_dim = point_feature_len + 1 # + 1 for the cycle-distance feature 94 | 95 | self.fc_p = nn.Conv1d(3, 
hidden_size, (1,)) 96 | self.fc_0 = nn.Conv1d(hidden_size, hidden_size, (1,)) 97 | self.fc_1 = nn.Conv1d(hidden_size, hidden_size, (1,)) 98 | self.fc_2 = nn.Conv1d(hidden_size, hidden_size, (1,)) 99 | self.fc_3 = nn.Conv1d(hidden_size, hidden_size, (1,)) 100 | self.fc_4 = nn.Conv1d(hidden_size, hidden_size, (1,)) 101 | 102 | self.bn_0 = CBatchNorm1d(self.c_dim, hidden_size) 103 | self.bn_1 = CBatchNorm1d(self.c_dim, hidden_size) 104 | self.bn_2 = CBatchNorm1d(self.c_dim, hidden_size) 105 | self.bn_3 = CBatchNorm1d(self.c_dim, hidden_size) 106 | self.bn_4 = CBatchNorm1d(self.c_dim, hidden_size) 107 | self.bn_5 = CBatchNorm1d(self.c_dim, hidden_size) 108 | 109 | self.fc_out = nn.Conv1d(hidden_size, 1, (1,)) 110 | 111 | self.act = F.relu 112 | 113 | def forward(self, can_points, local_point_feature, cycle_distance): 114 | can_points = can_points.transpose(1, 2) 115 | local_cond_code = torch.cat((local_point_feature, cycle_distance), dim=-1) 116 | local_cond_code = local_cond_code.transpose(1, 2) 117 | 118 | net = self.fc_p(can_points) 119 | net = self.act(self.bn_0(net, local_cond_code)) 120 | net = self.fc_0(net) 121 | net = self.act(self.bn_1(net, local_cond_code)) 122 | net = self.fc_1(net) 123 | net = self.act(self.bn_2(net, local_cond_code)) 124 | net = self.fc_2(net) 125 | net = self.act(self.bn_3(net, local_cond_code)) 126 | net = self.fc_3(net) 127 | net = self.act(self.bn_4(net, local_cond_code)) 128 | net = self.fc_4(net) 129 | net = self.act(self.bn_5(net, local_cond_code)) 130 | out = self.fc_out(net) 131 | out = out.squeeze(1) 132 | 133 | return out 134 | 135 | @classmethod 136 | def from_cfg(cls, config): 137 | return cls( 138 | num_joints=config['num_joints'], 139 | point_feature_len=config['point_feature_len'], 140 | hidden_size=config['hidden_size'], 141 | ) 142 | 143 | 144 | class ShapeEncoder(nn.Module): 145 | def __init__(self, out_dim, hidden_size): 146 | super().__init__() 147 | 148 | self.out_dim = out_dim 149 | self.hidden_dim = hidden_size 150 | self.point_encoder = ResnetPointnet(out_dim, hidden_size) 151 | 152 | def get_out_dim(self): 153 | return self.out_dim 154 | 155 | @classmethod 156 | def from_cfg(cls, config): 157 | return cls( 158 | out_dim=config['out_dim'], 159 | hidden_size=config['hidden_size'], 160 | ) 161 | 162 | def forward(self, can_vertices): 163 | """ 164 | Args: 165 | can_vertices: B x N x 3 166 | Returns: 167 | 168 | """ 169 | 170 | return self.point_encoder(can_vertices) 171 | 172 | 173 | class StructureEncoder(nn.Module): 174 | def __init__(self, local_feature_size, parent_mapping): 175 | super().__init__() 176 | 177 | self.bone_dim = 12 # 3x3 for pose and 1x3 for joint loc 178 | self.input_dim = self.bone_dim + 1 # +1 for bone length 179 | self.parent_mapping = parent_mapping 180 | self.num_joints = len(parent_mapping) 181 | self.out_dim = self.num_joints * local_feature_size 182 | 183 | self.proj_bone_prior = nn.Linear(self.num_joints * self.bone_dim, local_feature_size) 184 | self.net = nn.ModuleList([ 185 | BoneMLP(self.input_dim, local_feature_size) for _ in range(self.num_joints) 186 | ]) 187 | 188 | def get_out_dim(self): 189 | return self.out_dim 190 | 191 | @classmethod 192 | def from_cfg(cls, config): 193 | return cls( 194 | local_feature_size=config['local_feature_size'], 195 | parent_mapping=config['parent_mapping'] 196 | ) 197 | 198 | def forward(self, pose, rel_joints): 199 | """ 200 | 201 | Args: 202 | pose: B x num_joints x 3 x 3 203 | rel_joints: B x num_joints x 3 204 | """ 205 | B, K = rel_joints.shape[0], rel_joints.shape[1] 
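        # Each bone is described by 13 values: the flattened 3x3 rotation (9), the joint offset
        # relative to its parent (3), and the bone length (1). The root bone is conditioned on a
        # projection of all bone features (root_bone_prior); every other bone is conditioned on its
        # parent's feature, so information propagates down the kinematic chain.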
206 | bone_lengths = torch.norm(rel_joints.squeeze(-1), dim=-1).view(B, K, 1) # B x num_joints x 1 207 | 208 | bone_features = torch.cat((pose.contiguous().view(B, K, -1), 209 | rel_joints.contiguous().view(B, K, -1)), dim=-1) 210 | 211 | root_bone_prior = self.proj_bone_prior(bone_features.contiguous().view(B, -1)) # B, bottleneck 212 | 213 | # fwd pass through the bone encoder 214 | features = [None] * self.num_joints 215 | bone_transforms = torch.cat((bone_features, bone_lengths), dim=-1) 216 | 217 | for i, mlp in enumerate(self.net): 218 | parent = self.parent_mapping[i] 219 | if parent == -1: 220 | features[i] = mlp(bone_transforms[:, i, :], root_bone_prior) 221 | else: 222 | features[i] = mlp(bone_transforms[:, i, :], features[parent]) 223 | 224 | features = torch.cat(features, dim=-1) # B x f_len 225 | return features 226 | 227 | 228 | class PoseEncoder(BaseModule): 229 | def __init__(self, num_joints, cfg): 230 | super().__init__() 231 | self.cfg = cfg 232 | self.num_joints = num_joints 233 | 234 | def get_out_dim(self): 235 | if self.cfg is None: 236 | return 0 237 | 238 | return self.num_joints * 3 239 | 240 | @classmethod 241 | def from_cfg(cls, config): 242 | num_joints = 0 if config is None else config['num_joints'] 243 | 244 | return cls(num_joints, config) 245 | 246 | @staticmethod 247 | def forward(trans, fwd_transformation): 248 | """ 249 | 250 | Args: 251 | trans: B x 3 252 | fwd_transformation (optional): B x K x 4 x 4 253 | 254 | Returns: 255 | """ 256 | B, K = fwd_transformation.shape[0], fwd_transformation.shape[1] 257 | trans = torch.cat([ 258 | trans, 259 | torch.ones(B, 1, device=trans.device) 260 | ], dim=-1).unsqueeze(1).repeat(1, K, 1) 261 | fwd_transformation = torch.inverse(fwd_transformation).view(-1, 4, 4) 262 | 263 | root_proj_embedding = torch.matmul( 264 | fwd_transformation, 265 | trans.view(-1, 4, 1) 266 | ).view(B, K, 4)[:, :, :3].contiguous().view(B, -1) 267 | 268 | return root_proj_embedding 269 | 270 | 271 | class LocalFeatureEncoder(BaseModule): 272 | def __init__(self, num_joints, z_dim, point_feature_len): 273 | super().__init__() 274 | 275 | self.num_joints = num_joints 276 | self.z_dim = z_dim 277 | self.point_feature_len = point_feature_len 278 | 279 | self.net = nn.Conv1d( 280 | in_channels=self.num_joints * self.z_dim, 281 | out_channels=self.num_joints * self.point_feature_len, 282 | kernel_size=(1,), 283 | groups=self.num_joints 284 | ) 285 | 286 | def get_out_dim(self): 287 | return self.point_feature_len 288 | 289 | @classmethod 290 | def from_cfg(cls, config): 291 | return cls( 292 | num_joints=config['num_joints'], 293 | z_dim=config['z_dim'], 294 | point_feature_len=config['point_feature_len'] 295 | ) 296 | 297 | def forward(self, shape_code, structure_code, pose_code, lbs_weights): 298 | """ 299 | skinning_weights: B x T x K 300 | """ 301 | B, T, K = lbs_weights.shape 302 | assert K == self.num_joints 303 | 304 | # compute global feature vector 305 | global_feature = [] 306 | if shape_code is not None: 307 | global_feature.append(shape_code) 308 | if structure_code is not None: 309 | global_feature.append(structure_code) 310 | if pose_code is not None: 311 | global_feature.append(pose_code) 312 | 313 | global_feature = torch.cat(global_feature, dim=-1) # B x -1 314 | 315 | # compute per-point local feature vector 316 | global_feature = global_feature.unsqueeze(1).repeat(1, K, 1) # B x K x -1 317 | global_feature = global_feature.view(B, -1, 1) # B x -1 x 1 318 | 319 | local_feature = self.net(global_feature).view(B, K, -1) 320 | 321 
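        # (The grouped convolution above, with groups=num_joints, applies K independent linear maps
        # to K copies of the concatenated shape/structure/pose code, producing one local feature
        # vector per joint.)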
| # weighted average based on the skinning net 322 | local_feature = local_feature.unsqueeze(1).repeat(1, T, 1, 1) # B x T x K x F 323 | local_feature = local_feature.view(B * T, K, -1) # B*T x K x F 324 | lbs_weights = lbs_weights.view(B*T, 1, K) # B*T x 1 x K 325 | local_feature = torch.bmm(lbs_weights, local_feature).view(B, T, -1) # B x T x F 326 | 327 | return local_feature 328 | 329 | 330 | class LEAPOccupancyDecoder(BaseModule): 331 | def __init__(self, 332 | shape_encoder: ShapeEncoder, 333 | structure_encoder: StructureEncoder, 334 | pose_encoder: PoseEncoder, 335 | local_feature_encoder: LocalFeatureEncoder, 336 | onet: ONet): 337 | super().__init__() 338 | 339 | self.shape_encoder = shape_encoder 340 | self.structure_encoder = structure_encoder 341 | self.pose_encoder = pose_encoder 342 | 343 | self.local_feature_encoder = local_feature_encoder 344 | self.onet = onet 345 | 346 | @classmethod 347 | def from_cfg(cls, config): 348 | shape_encoder = ShapeEncoder.from_cfg(deepcopy(config['shape_encoder'])) 349 | 350 | structure_encoder_config = deepcopy(config['structure_encoder']) 351 | structure_encoder_config['parent_mapping'] = config['parent_mapping'] 352 | structure_encoder = StructureEncoder.from_cfg(structure_encoder_config) 353 | 354 | pose_encoder = PoseEncoder.from_cfg(deepcopy(config['pose_encoder'])) 355 | 356 | local_feature_encoder_config = deepcopy(config['local_feature_encoder']) 357 | local_feature_encoder_config['num_joints'] = config['num_joints'] 358 | z_dim = pose_encoder.get_out_dim() + shape_encoder.get_out_dim() + structure_encoder.get_out_dim() 359 | local_feature_encoder_config['z_dim'] = z_dim 360 | local_feature_encoder = LocalFeatureEncoder.from_cfg(local_feature_encoder_config) 361 | 362 | onet_config = deepcopy(config['onet']) 363 | onet_config['num_joints'] = config['num_joints'] 364 | onet_config['point_feature_len'] = local_feature_encoder.get_out_dim() 365 | onet = ONet.from_cfg(onet_config) 366 | 367 | return cls(shape_encoder, structure_encoder, pose_encoder, local_feature_encoder, onet) 368 | 369 | @classmethod 370 | def load_from_file(cls, file_path): 371 | state_dict = cls.parse_pytorch_file(file_path) 372 | config = state_dict['leap_occupancy_decoder_config'] 373 | model_state_dict = state_dict['leap_occupancy_decoder_weights'] 374 | return cls.load(config, model_state_dict) 375 | 376 | def forward(self, 377 | can_points, point_weights, cycle_distance, 378 | can_vert=None, # for shape code 379 | rot_mats=None, rel_joints=None, # for structure code 380 | root_loc=None, fwd_transformation=None): # for pose code 381 | shape_code, pose_code, structure_code = None, None, None 382 | 383 | if self.shape_encoder.get_out_dim() > 0: 384 | shape_code = self.shape_encoder(can_vert) 385 | 386 | if self.structure_encoder.get_out_dim() > 0: 387 | structure_code = self.structure_encoder(rot_mats, rel_joints) 388 | 389 | if self.pose_encoder.get_out_dim() > 0: 390 | pose_code = self.pose_encoder(root_loc, fwd_transformation) 391 | 392 | local_feature = self.local_feature_encoder(shape_code, structure_code, pose_code, point_weights) 393 | 394 | occupancy = self.onet(can_points, local_feature, cycle_distance) 395 | 396 | return occupancy 397 | -------------------------------------------------------------------------------- /leap/modules/layers.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | 4 | 5 | class BoneMLP(nn.Module): 6 | def __init__(self, bone_dim, bone_feature_dim): 7 | 
super(BoneMLP, self).__init__() 8 | 9 | n_features = bone_dim + bone_feature_dim 10 | self.net = nn.Sequential( 11 | nn.Linear(n_features, n_features), 12 | nn.ReLU(), 13 | nn.Linear(n_features, bone_feature_dim), 14 | nn.ReLU() 15 | ) 16 | 17 | def forward(self, bone, bone_feature): 18 | return self.net(torch.cat((bone, bone_feature), dim=-1)) 19 | 20 | 21 | class CBatchNorm1d(nn.Module): 22 | """ Conditional batch normalization layer class. 23 | 24 | Args: 25 | c_dim (int): dimension of latent conditioned code c 26 | f_dim (int): feature dimension 27 | """ 28 | 29 | def __init__(self, c_dim, f_dim): 30 | super().__init__() 31 | 32 | self.c_dim = c_dim 33 | self.f_dim = f_dim 34 | 35 | # Submodules 36 | self.conv_gamma = nn.Conv1d(c_dim, f_dim, 1) 37 | self.conv_beta = nn.Conv1d(c_dim, f_dim, 1) 38 | self.bn = nn.BatchNorm1d(f_dim, affine=False) 39 | self.reset_parameters() 40 | 41 | def reset_parameters(self): 42 | nn.init.zeros_(self.conv_gamma.weight) 43 | nn.init.zeros_(self.conv_beta.weight) 44 | nn.init.ones_(self.conv_gamma.bias) 45 | nn.init.zeros_(self.conv_beta.bias) 46 | 47 | def forward(self, x, c): 48 | assert x.size(0) == c.size(0) 49 | assert c.size(1) == self.c_dim 50 | 51 | if len(c.size()) == 2: # B x c_dim x T 52 | c = c.unsqueeze(2) 53 | 54 | # Affine mapping 55 | gamma = self.conv_gamma(c) 56 | beta = self.conv_beta(c) 57 | 58 | # Batch norm 59 | net = self.bn(x) 60 | out = gamma * net + beta 61 | 62 | return out 63 | 64 | 65 | class ResnetPointnet(nn.Module): 66 | """ PointNet-based encoder network with ResNet blocks. 67 | 68 | Args: 69 | out_dim (int): dimension of latent code c 70 | hidden_dim (int): hidden dimension of the network 71 | """ 72 | 73 | def __init__(self, out_dim, hidden_dim): 74 | super().__init__() 75 | self.out_dim = out_dim 76 | 77 | dim = 3 78 | self.fc_pos = nn.Linear(dim, 2 * hidden_dim) 79 | self.block_0 = ResnetBlockFC(2 * hidden_dim, hidden_dim) 80 | self.block_1 = ResnetBlockFC(2 * hidden_dim, hidden_dim) 81 | self.block_2 = ResnetBlockFC(2 * hidden_dim, hidden_dim) 82 | self.block_3 = ResnetBlockFC(2 * hidden_dim, hidden_dim) 83 | self.block_4 = ResnetBlockFC(2 * hidden_dim, hidden_dim) 84 | self.fc_c = nn.Linear(hidden_dim, out_dim) 85 | 86 | self.act = nn.ReLU() 87 | 88 | @staticmethod 89 | def pool(x, dim=-1, keepdim=False): 90 | return x.max(dim=dim, keepdim=keepdim)[0] 91 | 92 | def forward(self, p): 93 | # output size: B x T X F 94 | net = self.fc_pos(p) 95 | net = self.block_0(net) 96 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 97 | net = torch.cat([net, pooled], dim=2) 98 | 99 | net = self.block_1(net) 100 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 101 | net = torch.cat([net, pooled], dim=2) 102 | 103 | net = self.block_2(net) 104 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 105 | net = torch.cat([net, pooled], dim=2) 106 | 107 | net = self.block_3(net) 108 | pooled = self.pool(net, dim=1, keepdim=True).expand(net.size()) 109 | net = torch.cat([net, pooled], dim=2) 110 | 111 | net = self.block_4(net) 112 | 113 | # to B x F 114 | net = self.pool(net, dim=1) 115 | 116 | c = self.fc_c(self.act(net)) 117 | 118 | return c 119 | 120 | 121 | class ResnetBlockFC(nn.Module): 122 | """ Fully connected ResNet Block class. 
123 | 124 | Args: 125 | size_in (int): input dimension 126 | size_out (int): output dimension 127 | size_h (int): hidden dimension 128 | """ 129 | 130 | def __init__(self, size_in, size_out=None, size_h=None): 131 | super().__init__() 132 | # Attributes 133 | if size_out is None: 134 | size_out = size_in 135 | 136 | if size_h is None: 137 | size_h = min(size_in, size_out) 138 | 139 | self.size_in = size_in 140 | self.size_h = size_h 141 | self.size_out = size_out 142 | # Submodules 143 | self.fc_0 = nn.Linear(size_in, size_h) 144 | self.fc_1 = nn.Linear(size_h, size_out) 145 | self.actvn = nn.ReLU() 146 | 147 | if size_in == size_out: 148 | self.shortcut = None 149 | else: 150 | self.shortcut = nn.Linear(size_in, size_out, bias=False) 151 | # Initialization 152 | nn.init.zeros_(self.fc_1.weight) 153 | 154 | def forward(self, x): 155 | net = self.fc_0(self.actvn(x)) 156 | dx = self.fc_1(self.actvn(net)) 157 | 158 | if self.shortcut is not None: 159 | x_s = self.shortcut(x) 160 | else: 161 | x_s = x 162 | 163 | return x_s + dx 164 | 165 | 166 | class CResnetBlockConv1d(nn.Module): 167 | """ Conditional batch normalization-based Resnet block class. 168 | 169 | Args: 170 | c_dim (int): dimension of latend conditioned code c 171 | size_in (int): input dimension 172 | size_out (int): output dimension 173 | size_h (int): hidden dimension 174 | """ 175 | 176 | def __init__(self, c_dim, size_in, size_h=None, size_out=None): 177 | super().__init__() 178 | # Attributes 179 | if size_h is None: 180 | size_h = size_in 181 | if size_out is None: 182 | size_out = size_in 183 | 184 | self.size_in = size_in 185 | self.size_h = size_h 186 | self.size_out = size_out 187 | 188 | # Submodules 189 | self.bn_0 = CBatchNorm1d(c_dim, size_in) 190 | self.bn_1 = CBatchNorm1d(c_dim, size_h) 191 | 192 | self.fc_0 = nn.Conv1d(size_in, size_h, 1) 193 | self.fc_1 = nn.Conv1d(size_h, size_out, 1) 194 | self.actvn = nn.ReLU() 195 | 196 | if size_in == size_out: 197 | self.shortcut = None 198 | else: 199 | self.shortcut = nn.Conv1d(size_in, size_out, 1, bias=False) 200 | # Initialization 201 | nn.init.zeros_(self.fc_1.weight) 202 | 203 | def forward(self, x, c): 204 | net = self.fc_0(self.actvn(self.bn_0(x, c))) 205 | dx = self.fc_1(self.actvn(self.bn_1(net, c))) 206 | 207 | if self.shortcut is not None: 208 | x_s = self.shortcut(x) 209 | else: 210 | x_s = x 211 | 212 | return x_s + dx 213 | -------------------------------------------------------------------------------- /leap/modules/modules.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | 4 | from .encoders import LBSNet, LEAPOccupancyDecoder, BaseModule 5 | 6 | 7 | class INVLBS(LBSNet): 8 | def __init__(self, num_joints, hidden_size, pn_dim, fwd_trans_cond_dim): 9 | self.fwd_trans_cond_dim = fwd_trans_cond_dim 10 | super().__init__(num_joints, hidden_size, pn_dim) 11 | 12 | self.fc_fwd = nn.Sequential( 13 | nn.Linear(self.num_joints * 12, 100), nn.ReLU(), 14 | nn.Linear(100, 100), nn.ReLU(), 15 | nn.Linear(100, self.fwd_trans_cond_dim), 16 | ) 17 | 18 | def get_c_dim(self): 19 | return self.pn_dim * 2 + self.fwd_trans_cond_dim 20 | 21 | @classmethod 22 | def load_from_file(cls, file_path): 23 | state_dict = cls.parse_pytorch_file(file_path) 24 | config = state_dict['inv_lbs_config'] 25 | model_state_dict = state_dict['inv_lbs_model'] 26 | return cls.load(config, model_state_dict) 27 | 28 | @classmethod 29 | def from_cfg(cls, config): 30 | model = cls( 31 | num_joints=config['num_joints'], 
32 | hidden_size=config['hidden_size'], 33 | pn_dim=config['pn_dim'], 34 | fwd_trans_cond_dim=config['fwd_trans_cond_dim']) 35 | 36 | return model 37 | 38 | def forward(self, points, can_vertices, posed_vertices, fwd_transformation, compute_can_points=True): 39 | """ 40 | Args: 41 | points: B x T x 3 42 | can_vertices: B x N x 3 43 | posed_vertices: B x N x 3 44 | fwd_transformation (torch.tensor): Forward transformation tensor. (B x K x 4 x 4) 45 | compute_can_points (bool): Whether to return estimated canonical points. 46 | 47 | Returns: 48 | if compute_can_points is True tuple of: 49 | skinning weights (torch.Tensor): B x T x K 50 | canonical points (torch.Tensor): B x T x 3 51 | otherwise: 52 | skinning weights (torch.Tensor): B x T x K 53 | """ 54 | B, K = fwd_transformation.shape[:2] 55 | 56 | can_code = self.point_encoder(can_vertices) 57 | posed_code = self.point_encoder(posed_vertices) 58 | fwd_trans_code = self.fc_fwd(fwd_transformation[..., :3, :].reshape(B, -1)) 59 | 60 | lbs_code = torch.cat((can_code, posed_code, fwd_trans_code), dim=-1) 61 | point_weights = self._forward(points, lbs_code) 62 | 63 | if compute_can_points: 64 | can_points = self.posed2can_points(points, point_weights, fwd_transformation) 65 | ret_interface = point_weights, can_points # B x T x K 66 | else: 67 | ret_interface = point_weights 68 | 69 | return ret_interface 70 | 71 | @staticmethod 72 | def posed2can_points(points, point_weights, fwd_transformation): 73 | """ 74 | Args: 75 | points: B x T x 3 76 | point_weights: B x T x K 77 | fwd_transformation: B x K x 4 x 4 78 | 79 | Returns: 80 | canonical points: B x T x 3 81 | """ 82 | B, T, K = point_weights.shape 83 | point_weights = point_weights.view(B * T, 1, K) # B*T x 1 x K 84 | 85 | fwd_transformation = fwd_transformation.unsqueeze(1).repeat(1, T, 1, 1, 1) # B X K x 4 x 4 -> B x T x K x 4 x 4 86 | fwd_transformation = fwd_transformation.view(B * T, K, -1) # B*T x K x 16 87 | back_trans = torch.bmm(point_weights, fwd_transformation).view(B * T, 4, 4) 88 | back_trans = torch.inverse(back_trans) 89 | 90 | points = torch.cat([points, torch.ones(B, T, 1, device=points.device)], dim=-1).view(B * T, 4, 1) 91 | can_points = torch.bmm(back_trans, points)[:, :3, 0].view(B, T, 3) 92 | 93 | return can_points 94 | 95 | 96 | class FWDLBS(LBSNet): 97 | def __init__(self, num_joints, hidden_size, pn_dim): 98 | super().__init__(num_joints, hidden_size, pn_dim) 99 | 100 | def get_c_dim(self): 101 | return self.pn_dim 102 | 103 | @classmethod 104 | def load_from_file(cls, file_path): 105 | state_dict = cls.parse_pytorch_file(file_path) 106 | config = state_dict['fwd_lbs_config'] 107 | model_state_dict = state_dict['fwd_lbs_model'] 108 | return cls.load(config, model_state_dict) 109 | 110 | @classmethod 111 | def from_cfg(cls, config): 112 | model = cls( 113 | num_joints=config['num_joints'], 114 | hidden_size=config['hidden_size'], 115 | pn_dim=config['pn_dim']) 116 | 117 | return model 118 | 119 | def forward(self, points, can_vertices): 120 | """ 121 | Args: 122 | points: B x T x 3 123 | can_vertices: B x N x 3 124 | Returns: 125 | 126 | """ 127 | vert_code = self.point_encoder(can_vertices) # B x pn_dim 128 | point_weights = self._forward(points, vert_code) 129 | return point_weights # B x T x K 130 | 131 | 132 | class LEAPModel(BaseModule): 133 | def __init__(self, 134 | inv_lbs: INVLBS, 135 | fwd_lbs: FWDLBS, 136 | leap_occupancy_decoder: LEAPOccupancyDecoder): 137 | super(LEAPModel, self).__init__() 138 | 139 | # NN modules 140 | self.inv_lbs = inv_lbs 141 | 
self.fwd_lbs = fwd_lbs 142 | self.leap_occupancy_decoder = leap_occupancy_decoder 143 | 144 | @classmethod 145 | def from_cfg(cls, config): 146 | leap_model = cls( 147 | inv_lbs=INVLBS.load_from_file(config['inv_lbs_model_path']), 148 | fwd_lbs=FWDLBS.load_from_file(config['fwd_lbs_model_path']), 149 | leap_occupancy_decoder=LEAPOccupancyDecoder.from_cfg(config)) 150 | 151 | return leap_model 152 | 153 | @classmethod 154 | def load_from_file(cls, file_path): 155 | state_dict = cls.parse_pytorch_file(file_path) 156 | config = state_dict['leap_model_config'] 157 | model_state_dict = state_dict['leap_model_model'] 158 | 159 | leap_model = cls( 160 | inv_lbs=INVLBS.from_cfg(config['inv_lbs_model_config']), 161 | fwd_lbs=FWDLBS.from_cfg(config['fwd_lbs_model_config']), 162 | leap_occupancy_decoder=LEAPOccupancyDecoder.from_cfg(config)) 163 | 164 | leap_model.load_state_dict(model_state_dict) 165 | return leap_model 166 | 167 | def to(self, **kwargs): 168 | self.inv_lbs = self.inv_lbs.to(**kwargs) 169 | self.fwd_lbs = self.fwd_lbs.to(**kwargs) 170 | self.leap_occupancy_decoder = self.leap_occupancy_decoder.to(**kwargs) 171 | return self 172 | 173 | def eval(self): 174 | self.inv_lbs.eval() 175 | self.fwd_lbs.eval() 176 | self.leap_occupancy_decoder.eval() 177 | -------------------------------------------------------------------------------- /leap/tools/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuralbodies/leap/f2475588ddbc673365b429b21e4ba8f88bfd357c/leap/tools/__init__.py -------------------------------------------------------------------------------- /leap/tools/libmesh/.gitignore: -------------------------------------------------------------------------------- 1 | triangle_hash.cpp 2 | build 3 | *.so -------------------------------------------------------------------------------- /leap/tools/libmesh/__init__.py: -------------------------------------------------------------------------------- 1 | from .inside_mesh import ( 2 | check_mesh_contains, MeshIntersector, TriangleIntersector2d 3 | ) 4 | 5 | 6 | __all__ = [ 7 | check_mesh_contains, MeshIntersector, TriangleIntersector2d 8 | ] 9 | -------------------------------------------------------------------------------- /leap/tools/libmesh/inside_mesh.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from .triangle_hash import TriangleHash as _TriangleHash 3 | 4 | 5 | def check_mesh_contains(mesh, points, hash_resolution=512): 6 | intersector = MeshIntersector(mesh, hash_resolution) 7 | contains = intersector.query(points) 8 | return contains 9 | 10 | 11 | class MeshIntersector: 12 | def __init__(self, mesh, resolution=512): 13 | triangles = mesh.vertices[mesh.faces].astype(np.float64) 14 | n_tri = triangles.shape[0] 15 | 16 | self.resolution = resolution 17 | self.bbox_min = triangles.reshape(3 * n_tri, 3).min(axis=0) 18 | self.bbox_max = triangles.reshape(3 * n_tri, 3).max(axis=0) 19 | # Translate and scale it to [0.5, self.resolution - 0.5]^3 20 | self.scale = (resolution - 1) / (self.bbox_max - self.bbox_min) 21 | self.translate = 0.5 - self.scale * self.bbox_min 22 | 23 | self._triangles = triangles = self.rescale(triangles) 24 | # assert(np.allclose(triangles.reshape(-1, 3).min(0), 0.5)) 25 | # assert(np.allclose(triangles.reshape(-1, 3).max(0), resolution - 0.5)) 26 | 27 | triangles2d = triangles[:, :, :2] 28 | self._tri_intersector2d = TriangleIntersector2d( 29 | triangles2d, resolution) 30 | 
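    # How `query` below decides containment: candidate triangles for each point are fetched from
    # the 2-D spatial hash built above (triangles bucketed by their XY bounding boxes), the z-depth
    # of each candidate ray-triangle intersection is computed, and crossings above and below the
    # point are counted separately; a point is reported inside only if both counts are odd
    # (a ray-parity test along +z and -z).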
31 | def query(self, points): 32 | # Rescale points 33 | points = self.rescale(points) 34 | 35 | # placeholder result with no hits we'll fill in later 36 | contains = np.zeros(len(points), dtype=np.bool) 37 | 38 | # cull points outside of the axis aligned bounding box 39 | # this avoids running ray tests unless points are close 40 | inside_aabb = np.all( 41 | (0 <= points) & (points <= self.resolution), axis=1) 42 | if not inside_aabb.any(): 43 | return contains 44 | 45 | # Only consider points inside bounding box 46 | mask = inside_aabb 47 | points = points[mask] 48 | 49 | # Compute intersection depth and check order 50 | points_indices, tri_indices = self._tri_intersector2d.query(points[:, :2]) 51 | 52 | triangles_intersect = self._triangles[tri_indices] 53 | points_intersect = points[points_indices] 54 | 55 | depth_intersect, abs_n_2 = self.compute_intersection_depth( 56 | points_intersect, triangles_intersect) 57 | 58 | # Count number of intersections in both directions 59 | smaller_depth = depth_intersect >= points_intersect[:, 2] * abs_n_2 60 | bigger_depth = depth_intersect < points_intersect[:, 2] * abs_n_2 61 | points_indices_0 = points_indices[smaller_depth] 62 | points_indices_1 = points_indices[bigger_depth] 63 | 64 | nintersect0 = np.bincount(points_indices_0, minlength=points.shape[0]) 65 | nintersect1 = np.bincount(points_indices_1, minlength=points.shape[0]) 66 | 67 | # Check if point contained in mesh 68 | contains1 = (np.mod(nintersect0, 2) == 1) 69 | contains2 = (np.mod(nintersect1, 2) == 1) 70 | # if (contains1 != contains2).any(): print('Warning: contains1 != contains2 for some points.') 71 | contains[mask] = (contains1 & contains2) 72 | return contains 73 | 74 | def compute_intersection_depth(self, points, triangles): 75 | t1 = triangles[:, 0, :] 76 | t2 = triangles[:, 1, :] 77 | t3 = triangles[:, 2, :] 78 | 79 | v1 = t3 - t1 80 | v2 = t2 - t1 81 | # v1 = v1 / np.linalg.norm(v1, axis=-1, keepdims=True) 82 | # v2 = v2 / np.linalg.norm(v2, axis=-1, keepdims=True) 83 | 84 | normals = np.cross(v1, v2) 85 | alpha = np.sum(normals[:, :2] * (t1[:, :2] - points[:, :2]), axis=1) 86 | 87 | n_2 = normals[:, 2] 88 | t1_2 = t1[:, 2] 89 | s_n_2 = np.sign(n_2) 90 | abs_n_2 = np.abs(n_2) 91 | 92 | mask = (abs_n_2 != 0) 93 | 94 | depth_intersect = np.full(points.shape[0], np.nan) 95 | depth_intersect[mask] = \ 96 | t1_2[mask] * abs_n_2[mask] + alpha[mask] * s_n_2[mask] 97 | 98 | # Test the depth: 99 | # TODO: remove and put into tests 100 | # points_new = np.concatenate([points[:, :2], depth_intersect[:, None]], axis=1) 101 | # alpha = (normals * t1).sum(-1) 102 | # mask = (depth_intersect == depth_intersect) 103 | # assert(np.allclose((points_new[mask] * normals[mask]).sum(-1), 104 | # alpha[mask])) 105 | return depth_intersect, abs_n_2 106 | 107 | def rescale(self, array): 108 | array = self.scale * array + self.translate 109 | return array 110 | 111 | 112 | class TriangleIntersector2d: 113 | def __init__(self, triangles, resolution=128): 114 | self.triangles = triangles 115 | self.tri_hash = _TriangleHash(triangles, resolution) 116 | 117 | def query(self, points): 118 | point_indices, tri_indices = self.tri_hash.query(points) 119 | point_indices = np.array(point_indices, dtype=np.int64) 120 | tri_indices = np.array(tri_indices, dtype=np.int64) 121 | points = points[point_indices] 122 | triangles = self.triangles[tri_indices] 123 | mask = self.check_triangles(points, triangles) 124 | point_indices = point_indices[mask] 125 | tri_indices = tri_indices[mask] 126 | return 
point_indices, tri_indices 127 | 128 | def check_triangles(self, points, triangles): 129 | contains = np.zeros(points.shape[0], dtype=np.bool) 130 | A = triangles[:, :2] - triangles[:, 2:] 131 | A = A.transpose([0, 2, 1]) 132 | y = points - triangles[:, 2] 133 | 134 | detA = A[:, 0, 0] * A[:, 1, 1] - A[:, 0, 1] * A[:, 1, 0] 135 | 136 | mask = (np.abs(detA) != 0.) 137 | A = A[mask] 138 | y = y[mask] 139 | detA = detA[mask] 140 | 141 | s_detA = np.sign(detA) 142 | abs_detA = np.abs(detA) 143 | 144 | u = (A[:, 1, 1] * y[:, 0] - A[:, 0, 1] * y[:, 1]) * s_detA 145 | v = (-A[:, 1, 0] * y[:, 0] + A[:, 0, 0] * y[:, 1]) * s_detA 146 | 147 | sum_uv = u + v 148 | contains[mask] = ( 149 | (0 < u) & (u < abs_detA) & (0 < v) & (v < abs_detA) 150 | & (0 < sum_uv) & (sum_uv < abs_detA) 151 | ) 152 | return contains 153 | 154 | -------------------------------------------------------------------------------- /leap/tools/libmesh/triangle_hash.pyx: -------------------------------------------------------------------------------- 1 | 2 | # distutils: language=c++ 3 | import numpy as np 4 | cimport numpy as np 5 | cimport cython 6 | from libcpp.vector cimport vector 7 | from libc.math cimport floor, ceil 8 | 9 | cdef class TriangleHash: 10 | cdef vector[vector[int]] spatial_hash 11 | cdef int resolution 12 | 13 | def __cinit__(self, double[:, :, :] triangles, int resolution): 14 | self.spatial_hash.resize(resolution * resolution) 15 | self.resolution = resolution 16 | self._build_hash(triangles) 17 | 18 | @cython.boundscheck(False) # Deactivate bounds checking 19 | @cython.wraparound(False) # Deactivate negative indexing. 20 | cdef int _build_hash(self, double[:, :, :] triangles): 21 | assert(triangles.shape[1] == 3) 22 | assert(triangles.shape[2] == 2) 23 | 24 | cdef int n_tri = triangles.shape[0] 25 | cdef int bbox_min[2] 26 | cdef int bbox_max[2] 27 | 28 | cdef int i_tri, j, x, y 29 | cdef int spatial_idx 30 | 31 | for i_tri in range(n_tri): 32 | # Compute bounding box 33 | for j in range(2): 34 | bbox_min[j] = min( 35 | triangles[i_tri, 0, j], triangles[i_tri, 1, j], triangles[i_tri, 2, j] 36 | ) 37 | bbox_max[j] = max( 38 | triangles[i_tri, 0, j], triangles[i_tri, 1, j], triangles[i_tri, 2, j] 39 | ) 40 | bbox_min[j] = min(max(bbox_min[j], 0), self.resolution - 1) 41 | bbox_max[j] = min(max(bbox_max[j], 0), self.resolution - 1) 42 | 43 | # Find all voxels where bounding box intersects 44 | for x in range(bbox_min[0], bbox_max[0] + 1): 45 | for y in range(bbox_min[1], bbox_max[1] + 1): 46 | spatial_idx = self.resolution * x + y 47 | self.spatial_hash[spatial_idx].push_back(i_tri) 48 | 49 | @cython.boundscheck(False) # Deactivate bounds checking 50 | @cython.wraparound(False) # Deactivate negative indexing. 
51 | cpdef query(self, double[:, :] points): 52 | assert(points.shape[1] == 2) 53 | cdef int n_points = points.shape[0] 54 | 55 | cdef vector[int] points_indices 56 | cdef vector[int] tri_indices 57 | # cdef int[:] points_indices_np 58 | # cdef int[:] tri_indices_np 59 | 60 | cdef int i_point, k, x, y 61 | cdef int spatial_idx 62 | 63 | for i_point in range(n_points): 64 | x = int(points[i_point, 0]) 65 | y = int(points[i_point, 1]) 66 | if not (0 <= x < self.resolution and 0 <= y < self.resolution): 67 | continue 68 | 69 | spatial_idx = self.resolution * x + y 70 | for i_tri in self.spatial_hash[spatial_idx]: 71 | points_indices.push_back(i_point) 72 | tri_indices.push_back(i_tri) 73 | 74 | points_indices_np = np.zeros(points_indices.size(), dtype=np.int32) 75 | tri_indices_np = np.zeros(tri_indices.size(), dtype=np.int32) 76 | 77 | cdef int[:] points_indices_view = points_indices_np 78 | cdef int[:] tri_indices_view = tri_indices_np 79 | 80 | for k in range(points_indices.size()): 81 | points_indices_view[k] = points_indices[k] 82 | 83 | for k in range(tri_indices.size()): 84 | tri_indices_view[k] = tri_indices[k] 85 | 86 | return points_indices_np, tri_indices_np 87 | -------------------------------------------------------------------------------- /leap/tools/libmise/.gitignore: -------------------------------------------------------------------------------- 1 | mise.c 2 | mise.cpp 3 | mise.html 4 | *.so -------------------------------------------------------------------------------- /leap/tools/libmise/__init__.py: -------------------------------------------------------------------------------- 1 | from .mise import MISE 2 | 3 | 4 | __all__ = [ 5 | MISE 6 | ] 7 | -------------------------------------------------------------------------------- /leap/tools/libmise/mise.pyx: -------------------------------------------------------------------------------- 1 | # distutils: language = c++ 2 | cimport cython 3 | from cython.operator cimport dereference as dref 4 | from libcpp.vector cimport vector 5 | from libcpp.map cimport map 6 | from libc.math cimport isnan, NAN 7 | import numpy as np 8 | 9 | 10 | cdef struct Vector3D: 11 | int x, y, z 12 | 13 | 14 | cdef struct Voxel: 15 | Vector3D loc 16 | unsigned int level 17 | bint is_leaf 18 | unsigned long children[2][2][2] 19 | 20 | 21 | cdef struct GridPoint: 22 | Vector3D loc 23 | double value 24 | bint known 25 | 26 | 27 | cdef inline unsigned long vec_to_idx(Vector3D coord, long resolution): 28 | cdef unsigned long idx 29 | idx = resolution * resolution * coord.x + resolution * coord.y + coord.z 30 | return idx 31 | 32 | 33 | cdef class MISE: 34 | cdef vector[Voxel] voxels 35 | cdef vector[GridPoint] grid_points 36 | cdef map[long, long] grid_point_hash 37 | cdef readonly int resolution_0 38 | cdef readonly int depth 39 | cdef readonly double threshold 40 | cdef readonly int voxel_size_0 41 | cdef readonly int resolution 42 | 43 | def __cinit__(self, int resolution_0, int depth, double threshold): 44 | self.resolution_0 = resolution_0 45 | self.depth = depth 46 | self.threshold = threshold 47 | self.voxel_size_0 = (1 << depth) 48 | self.resolution = resolution_0 * self.voxel_size_0 49 | 50 | # Create initial voxels 51 | self.voxels.reserve(resolution_0 * resolution_0 * resolution_0) 52 | 53 | cdef Voxel voxel 54 | cdef GridPoint point 55 | cdef Vector3D loc 56 | cdef int i, j, k 57 | for i in range(resolution_0): 58 | for j in range(resolution_0): 59 | for k in range (resolution_0): 60 | loc = Vector3D( 61 | i * self.voxel_size_0, 62 | j * 
self.voxel_size_0, 63 | k * self.voxel_size_0, 64 | ) 65 | voxel = Voxel( 66 | loc=loc, 67 | level=0, 68 | is_leaf=True, 69 | ) 70 | 71 | assert(self.voxels.size() == vec_to_idx(Vector3D(i, j, k), resolution_0)) 72 | self.voxels.push_back(voxel) 73 | 74 | # Create initial grid points 75 | self.grid_points.reserve((resolution_0 + 1) * (resolution_0 + 1) * (resolution_0 + 1)) 76 | for i in range(resolution_0 + 1): 77 | for j in range(resolution_0 + 1): 78 | for k in range(resolution_0 + 1): 79 | loc = Vector3D( 80 | i * self.voxel_size_0, 81 | j * self.voxel_size_0, 82 | k * self.voxel_size_0, 83 | ) 84 | assert(self.grid_points.size() == vec_to_idx(Vector3D(i, j, k), resolution_0 + 1)) 85 | self.add_grid_point(loc) 86 | 87 | def update(self, long[:, :] points, double[:] values): 88 | """Update points and set their values. Also determine all active voxels and subdivide them.""" 89 | assert(points.shape[0] == values.shape[0]) 90 | assert(points.shape[1] == 3) 91 | cdef Vector3D loc 92 | cdef long idx 93 | cdef int i 94 | 95 | # Find all indices of point and set value 96 | for i in range(points.shape[0]): 97 | loc = Vector3D(points[i, 0], points[i, 1], points[i, 2]) 98 | idx = self.get_grid_point_idx(loc) 99 | if idx == -1: 100 | raise ValueError('Point not in grid!') 101 | self.grid_points[idx].value = values[i] 102 | self.grid_points[idx].known = True 103 | # Subdivide activate voxels and add new points 104 | self.subdivide_voxels() 105 | 106 | def query(self): 107 | """Query points to evaluate.""" 108 | # Find all points with unknown value 109 | cdef vector[Vector3D] points 110 | cdef int n_unknown = 0 111 | for p in self.grid_points: 112 | if not p.known: 113 | n_unknown += 1 114 | 115 | points.reserve(n_unknown) 116 | for p in self.grid_points: 117 | if not p.known: 118 | points.push_back(p.loc) 119 | 120 | # Convert to numpy 121 | points_np = np.zeros((points.size(), 3), dtype=np.int64) 122 | cdef long[:, :] points_view = points_np 123 | for i in range(points.size()): 124 | points_view[i, 0] = points[i].x 125 | points_view[i, 1] = points[i].y 126 | points_view[i, 2] = points[i].z 127 | 128 | return points_np 129 | 130 | def to_dense(self): 131 | """Output dense matrix at highest resolution.""" 132 | out_array = np.full((self.resolution + 1,) * 3, np.nan) 133 | cdef double[:, :, :] out_view = out_array 134 | cdef GridPoint point 135 | cdef int i, j, k 136 | 137 | for point in self.grid_points: 138 | # Take voxel for which points is upper left corner 139 | # assert(point.known) 140 | out_view[point.loc.x, point.loc.y, point.loc.z] = point.value 141 | 142 | # Complete along x axis 143 | for i in range(1, self.resolution + 1): 144 | for j in range(self.resolution + 1): 145 | for k in range(self.resolution + 1): 146 | if isnan(out_view[i, j, k]): 147 | out_view[i, j, k] = out_view[i-1, j, k] 148 | 149 | # Complete along y axis 150 | for i in range(self.resolution + 1): 151 | for j in range(1, self.resolution + 1): 152 | for k in range(self.resolution + 1): 153 | if isnan(out_view[i, j, k]): 154 | out_view[i, j, k] = out_view[i, j-1, k] 155 | 156 | 157 | # Complete along z axis 158 | for i in range(self.resolution + 1): 159 | for j in range(self.resolution + 1): 160 | for k in range(1, self.resolution + 1): 161 | if isnan(out_view[i, j, k]): 162 | out_view[i, j, k] = out_view[i, j, k-1] 163 | assert(not isnan(out_view[i, j, k])) 164 | return out_array 165 | 166 | def get_points(self): 167 | points_np = np.zeros((self.grid_points.size(), 3), dtype=np.int64) 168 | values_np = 
np.zeros((self.grid_points.size()), dtype=np.float64) 169 | 170 | cdef long[:, :] points_view = points_np 171 | cdef double[:] values_view = values_np 172 | cdef Vector3D loc 173 | cdef int i 174 | 175 | for i in range(self.grid_points.size()): 176 | loc = self.grid_points[i].loc 177 | points_view[i, 0] = loc.x 178 | points_view[i, 1] = loc.y 179 | points_view[i, 2] = loc.z 180 | values_view[i] = self.grid_points[i].value 181 | 182 | return points_np, values_np 183 | 184 | cdef void subdivide_voxels(self) except +: 185 | cdef vector[bint] next_to_positive 186 | cdef vector[bint] next_to_negative 187 | cdef int i, j, k 188 | cdef long idx 189 | cdef Vector3D loc, adj_loc 190 | 191 | # Initialize vectors 192 | next_to_positive.resize(self.voxels.size(), False) 193 | next_to_negative.resize(self.voxels.size(), False) 194 | 195 | # Iterate over grid points and mark voxels active 196 | # TODO: can move this to update operation and add attibute to voxel 197 | for grid_point in self.grid_points: 198 | loc = grid_point.loc 199 | if not grid_point.known: 200 | continue 201 | 202 | # Iterate over the 8 adjacent voxels 203 | for i in range(-1, 1): 204 | for j in range(-1, 1): 205 | for k in range(-1, 1): 206 | adj_loc = Vector3D( 207 | x=loc.x + i, 208 | y=loc.y + j, 209 | z=loc.z + k, 210 | ) 211 | idx = self.get_voxel_idx(adj_loc) 212 | if idx == -1: 213 | continue 214 | 215 | if grid_point.value >= self.threshold: 216 | next_to_positive[idx] = True 217 | if grid_point.value <= self.threshold: 218 | next_to_negative[idx] = True 219 | 220 | cdef int n_subdivide = 0 221 | 222 | for idx in range(self.voxels.size()): 223 | if not self.voxels[idx].is_leaf or self.voxels[idx].level == self.depth: 224 | continue 225 | if next_to_positive[idx] and next_to_negative[idx]: 226 | n_subdivide += 1 227 | 228 | self.voxels.reserve(self.voxels.size() + 8 * n_subdivide) 229 | self.grid_points.reserve(self.voxels.size() + 19 * n_subdivide) 230 | 231 | for idx in range(self.voxels.size()): 232 | if not self.voxels[idx].is_leaf or self.voxels[idx].level == self.depth: 233 | continue 234 | if next_to_positive[idx] and next_to_negative[idx]: 235 | self.subdivide_voxel(idx) 236 | 237 | cdef void subdivide_voxel(self, long idx): 238 | cdef Voxel voxel 239 | cdef GridPoint point 240 | cdef Vector3D loc0 = self.voxels[idx].loc 241 | cdef Vector3D loc 242 | cdef int new_level = self.voxels[idx].level + 1 243 | cdef int new_size = 1 << (self.depth - new_level) 244 | assert(new_level <= self.depth) 245 | assert(1 <= new_size <= self.voxel_size_0) 246 | 247 | # Current voxel is not leaf anymore 248 | self.voxels[idx].is_leaf = False 249 | # Add new voxels 250 | cdef int i, j, k 251 | for i in range(2): 252 | for j in range(2): 253 | for k in range(2): 254 | loc = Vector3D( 255 | x=loc0.x + i * new_size, 256 | y=loc0.y + j * new_size, 257 | z=loc0.z + k * new_size, 258 | ) 259 | voxel = Voxel( 260 | loc=loc, 261 | level=new_level, 262 | is_leaf=True 263 | ) 264 | 265 | self.voxels[idx].children[i][j][k] = self.voxels.size() 266 | self.voxels.push_back(voxel) 267 | 268 | # Add new grid points 269 | for i in range(3): 270 | for j in range(3): 271 | for k in range(3): 272 | loc = Vector3D( 273 | loc0.x + i * new_size, 274 | loc0.y + j * new_size, 275 | loc0.z + k * new_size, 276 | ) 277 | 278 | # Only add new grid points 279 | if self.get_grid_point_idx(loc) == -1: 280 | self.add_grid_point(loc) 281 | 282 | 283 | @cython.cdivision(True) 284 | cdef long get_voxel_idx(self, Vector3D loc) except +: 285 | """Utility function for 
getting voxel index corresponding to 3D coordinates.""" 286 | # Shorten 287 | cdef long resolution = self.resolution 288 | cdef long resolution_0 = self.resolution_0 289 | cdef long depth = self.depth 290 | cdef long voxel_size_0 = self.voxel_size_0 291 | 292 | # Return -1 if point lies outside bounds 293 | if not (0 <= loc.x < resolution and 0<= loc.y < resolution and 0 <= loc.z < resolution): 294 | return -1 295 | 296 | # Coordinates in coarse voxel grid 297 | cdef Vector3D loc0 = Vector3D( 298 | x=loc.x >> depth, 299 | y=loc.y >> depth, 300 | z=loc.z >> depth, 301 | ) 302 | 303 | # Initial voxels 304 | cdef int idx = vec_to_idx(loc0, resolution_0) 305 | cdef Voxel voxel = self.voxels[idx] 306 | assert(voxel.loc.x == loc0.x * voxel_size_0) 307 | assert(voxel.loc.y == loc0.y * voxel_size_0) 308 | assert(voxel.loc.z == loc0.z * voxel_size_0) 309 | 310 | # Relative coordinates 311 | cdef Vector3D loc_rel = Vector3D( 312 | x=loc.x - (loc0.x << depth), 313 | y=loc.y - (loc0.y << depth), 314 | z=loc.z - (loc0.z << depth), 315 | ) 316 | 317 | cdef Vector3D loc_offset 318 | cdef long voxel_size = voxel_size_0 319 | 320 | while not voxel.is_leaf: 321 | voxel_size = voxel_size >> 1 322 | assert(voxel_size >= 1) 323 | 324 | # Determine child 325 | loc_offset = Vector3D( 326 | x=1 if (loc_rel.x >= voxel_size) else 0, 327 | y=1 if (loc_rel.y >= voxel_size) else 0, 328 | z=1 if (loc_rel.z >= voxel_size) else 0, 329 | ) 330 | # New voxel 331 | idx = voxel.children[loc_offset.x][loc_offset.y][loc_offset.z] 332 | voxel = self.voxels[idx] 333 | 334 | # New relative coordinates 335 | loc_rel = Vector3D( 336 | x=loc_rel.x - loc_offset.x * voxel_size, 337 | y=loc_rel.y - loc_offset.y * voxel_size, 338 | z=loc_rel.z - loc_offset.z * voxel_size, 339 | ) 340 | 341 | assert(0<= loc_rel.x < voxel_size) 342 | assert(0<= loc_rel.y < voxel_size) 343 | assert(0<= loc_rel.z < voxel_size) 344 | 345 | 346 | # Return idx 347 | return idx 348 | 349 | 350 | cdef inline void add_grid_point(self, Vector3D loc): 351 | cdef GridPoint point = GridPoint( 352 | loc=loc, 353 | value=0., 354 | known=False, 355 | ) 356 | self.grid_point_hash[vec_to_idx(loc, self.resolution + 1)] = self.grid_points.size() 357 | self.grid_points.push_back(point) 358 | 359 | cdef inline int get_grid_point_idx(self, Vector3D loc): 360 | p_idx = self.grid_point_hash.find(vec_to_idx(loc, self.resolution + 1)) 361 | if p_idx == self.grid_point_hash.end(): 362 | return -1 363 | 364 | cdef int idx = dref(p_idx).second 365 | assert(self.grid_points[idx].loc.x == loc.x) 366 | assert(self.grid_points[idx].loc.y == loc.y) 367 | assert(self.grid_points[idx].loc.z == loc.z) 368 | 369 | return idx -------------------------------------------------------------------------------- /leap/tools/libmise/test.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from mise import MISE 3 | import time 4 | 5 | t0 = time.time() 6 | extractor = MISE(1, 2, 0.) 
7 | 8 | p = extractor.query() 9 | i = 0 10 | 11 | while p.shape[0] != 0: 12 | print(i) 13 | print(p) 14 | v = 2 * (p.sum(axis=-1) > 2).astype(np.float64) - 1 15 | extractor.update(p, v) 16 | p = extractor.query() 17 | i += 1 18 | if (i >= 8): 19 | break 20 | 21 | print(extractor.to_dense()) 22 | # p, v = extractor.get_points() 23 | # print(p) 24 | # print(v) 25 | print('Total time: %f' % (time.time() - t0)) 26 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | torch 2 | numpy 3 | trimesh 4 | PyYAML 5 | scipy 6 | setuptools 7 | tqdm 8 | leap 9 | tensorboardx 10 | cython 11 | scikit-image 12 | yaml 13 | torchgeometry 14 | pyrender 15 | setuptools 16 | opencv-python -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | try: 2 | from setuptools import setup 3 | except ImportError: 4 | from distutils.core import setup 5 | from distutils.extension import Extension 6 | from Cython.Build import cythonize 7 | from torch.utils.cpp_extension import BuildExtension 8 | import numpy 9 | 10 | # Get the numpy include directory. 11 | numpy_include_dir = numpy.get_include() 12 | 13 | # efficient mesh extraction (Occupancy networks: Learning 3d reconstruction in function space, CVPR 2019) 14 | mise_module = Extension( 15 | 'leap.tools.libmise.mise', 16 | sources=[ 17 | 'leap/tools/libmise/mise.pyx' 18 | ], 19 | ) 20 | 21 | # occupancy checks needed for training 22 | libmesh_module = Extension( 23 | 'leap.tools.libmesh.triangle_hash', 24 | sources=[ 25 | 'leap/tools/libmesh/triangle_hash.pyx' 26 | ], 27 | libraries=['m'], # Unix-like specific 28 | include_dirs=[numpy_include_dir] 29 | ) 30 | 31 | ext_modules = [ 32 | libmesh_module, 33 | mise_module, 34 | ] 35 | 36 | setup( 37 | name='leap', 38 | version='0.0.1', 39 | ext_modules=cythonize(ext_modules), 40 | cmdclass={ 41 | 'build_ext': BuildExtension 42 | }, 43 | url='https://neuralbodies.github.io/LEAP', 44 | license='', 45 | author='Marko Mihajlovic', 46 | author_email='markomih@inf.ethz.ch', 47 | description='' 48 | ) 49 | -------------------------------------------------------------------------------- /training_code/checkpoints.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | from torch.utils import model_zoo 4 | from urllib.parse import urlparse 5 | 6 | import utils 7 | 8 | 9 | class CheckpointIO: 10 | """ CheckpointIO class. 11 | 12 | It handles saving and loading checkpoints. 13 | 14 | Args: 15 | checkpoint_dir (str): path where checkpoints are saved 16 | """ 17 | 18 | def __init__(self, checkpoint_dir, model, optimizer, cfg): 19 | self.module_dict_params = { 20 | f"{cfg['method']}_model": model, 21 | f"optimizer": optimizer, 22 | f"{cfg['method']}_config": cfg['model'], 23 | } 24 | self.checkpoint_dir = checkpoint_dir 25 | utils.cond_mkdir(checkpoint_dir) 26 | 27 | def save(self, filename, **kwargs): 28 | """ Saves the current module dictionary. 
29 | 30 | Args: 31 | filename (str): name of output file 32 | """ 33 | if not os.path.isabs(filename): 34 | filename = os.path.join(self.checkpoint_dir, filename) 35 | 36 | out_dict = kwargs 37 | for k, v in self.module_dict_params.items(): 38 | out_dict[k] = v 39 | if hasattr(v, 'state_dict'): 40 | out_dict[k] = v.state_dict() 41 | 42 | torch.save(out_dict, filename) 43 | 44 | def load(self, filename): 45 | """ Loads a module dictionary from local file or url. 46 | 47 | Args: 48 | filename (str): name of saved module dictionary 49 | """ 50 | # parse file 51 | if urlparse(filename).scheme in ('http', 'https'): 52 | state_dict = model_zoo.load_url(filename, progress=True) 53 | else: 54 | if not os.path.isabs(filename): 55 | filename = os.path.join(self.checkpoint_dir, filename) 56 | 57 | if not os.path.exists(filename): 58 | raise FileExistsError 59 | 60 | state_dict = torch.load(filename) 61 | 62 | print(f'=> Loading checkpoint from: {filename}') 63 | for k, v in self.module_dict_params.items(): 64 | if hasattr(v, 'load_state_dict') and v is not None: 65 | v.load_state_dict(state_dict[k]) 66 | 67 | scalars = {k: v for k, v in state_dict.items() if k not in self.module_dict_params} 68 | return scalars 69 | -------------------------------------------------------------------------------- /training_code/config.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import yaml 4 | 5 | import datasets 6 | import leap 7 | import trainers 8 | 9 | 10 | def load_config(path): 11 | """ Loads config file. 12 | 13 | Args: 14 | path (str): path to config file 15 | """ 16 | with open(path, 'r') as f: 17 | cfg = yaml.safe_load(f) 18 | 19 | # add additional attributes 20 | bm_path = os.path.join(cfg['data']['bm_path'], 'neutral', 'model.npz') 21 | model_type, num_joints = leap.LEAPBodyModel.get_num_joints(bm_path) 22 | 23 | cfg['model']['num_joints'] = num_joints 24 | cfg['model']['model_type'] = model_type 25 | cfg['model']['parent_mapping'] = leap.LEAPBodyModel.get_parent_mapping(model_type) 26 | if cfg['method'] == 'leap_model': 27 | for key in ['inv_lbs_model_config', 'fwd_lbs_model_config']: 28 | for attr in ['num_joints', 'model_type', 'parent_mapping']: 29 | cfg['model'][key][attr] = cfg['model'][attr] 30 | 31 | return cfg 32 | 33 | 34 | def get_model(cfg): 35 | """ Returns the model instance. 36 | 37 | Args: 38 | cfg (dict): config dictionary 39 | 40 | Returns: 41 | model (torch.nn.Module) 42 | """ 43 | method = cfg['method'] 44 | 45 | assert method in ['leap_model', 'inv_lbs', 'fwd_lbs'], \ 46 | 'Not supported method type' 47 | 48 | model = { 49 | 'leap_model': leap.LEAPModel, 50 | 'inv_lbs': leap.INVLBS, 51 | 'fwd_lbs': leap.FWDLBS, 52 | }[method].from_cfg(cfg['model']) 53 | 54 | return model.to(device=cfg['device']) 55 | 56 | 57 | def get_trainer(model, optimizer, cfg): 58 | """ Returns a trainer instance. 59 | 60 | Args: 61 | model (nn.Module): the model which is used 62 | optimizer (optimizer): pytorch optimizer 63 | cfg (dict): config dictionary 64 | 65 | Returns: 66 | trainer instance (BaseTrainer) 67 | """ 68 | method = cfg['method'] 69 | 70 | assert method in ['leap_model', 'inv_lbs', 'fwd_lbs'], \ 71 | 'Not supported method type' 72 | 73 | trainer = { 74 | 'leap_model': trainers.LEAPModelTrainer, 75 | 'inv_lbs': trainers.INVLBSTrainer, 76 | 'fwd_lbs': trainers.FWDLBSTrainer, 77 | }[method](model, optimizer, cfg) 78 | 79 | return trainer 80 | 81 | 82 | def get_dataset(mode, cfg): 83 | """ Returns the dataset. 
84 | 85 | Args: 86 | mode (str): `train`, `val`, or 'test' dataset mode 87 | cfg (dict): config dictionary 88 | 89 | Returns: 90 | dataset (torch.data.utils.data.Dataset) 91 | """ 92 | method = cfg['method'] 93 | dataset_type = cfg['data']['dataset'] 94 | 95 | assert method in ['leap_model', 'inv_lbs', 'fwd_lbs'] 96 | assert dataset_type in ['amass'] 97 | assert mode in ['train', 'val', 'test'] 98 | 99 | # Create dataset 100 | if dataset_type == 'amass': 101 | dataset = { 102 | 'leap_model': datasets.AmassLEAPOccupancyDataset, 103 | 'inv_lbs': datasets.AmassINVLBSDataset, 104 | 'fwd_lbs': datasets.AmassFWDLBSDataset, 105 | }[method] 106 | else: 107 | raise NotImplementedError(f'Not supported dataset type ({dataset_type})') 108 | 109 | dataset = dataset(cfg['data'], mode) 110 | 111 | return dataset 112 | 113 | # def get_generator(model, cfg): 114 | # """ Returns a generator instance. 115 | # 116 | # Args: 117 | # model (nn.Module): the model which is used 118 | # cfg (dict): config dictionary 119 | # device (device): pytorch device 120 | # """ 121 | # assert cfg['method'] == 'leap_model' 122 | # generator = None # todo impl this 123 | # return generator 124 | 125 | 126 | # Datasets 127 | -------------------------------------------------------------------------------- /training_code/datasets/__init__.py: -------------------------------------------------------------------------------- 1 | from .amass import ( 2 | AmassLEAPOccupancyDataset, 3 | AmassINVLBSDataset, 4 | AmassFWDLBSDataset 5 | ) 6 | 7 | __all__ = [ 8 | AmassLEAPOccupancyDataset, 9 | AmassFWDLBSDataset, 10 | AmassINVLBSDataset 11 | ] 12 | -------------------------------------------------------------------------------- /training_code/datasets/amass.py: -------------------------------------------------------------------------------- 1 | import glob 2 | import os.path as osp 3 | 4 | import numpy as np 5 | from scipy.spatial import cKDTree as KDTree 6 | from torch.utils import data 7 | from trimesh import Trimesh 8 | from leap.tools.libmesh import check_mesh_contains 9 | 10 | 11 | class AmassDataset(data.Dataset): 12 | """ AMASS dataset class for occupancy training. """ 13 | 14 | def __init__(self, cfg, mode): 15 | """ Initialization of the the 3D shape dataset. 
16 | 17 | Args: 18 | cfg (dict): dataset configuration 19 | mode (str): `train`, `val`, or 'test' dataset mode 20 | """ 21 | # Attributes 22 | self.dataset_folder = cfg['dataset_folder'] 23 | self.bm_path = cfg['bm_path'] 24 | self.split_file = cfg[f'{mode}_split'] 25 | 26 | # Sampling config 27 | sampling_config = cfg.get('sampling_config', {}) 28 | self.points_uniform_ratio = sampling_config.get('points_uniform_ratio', 0.5) 29 | self.bbox_padding = sampling_config.get('bbox_padding', 0) 30 | self.points_padding = sampling_config.get('points_padding', 0.1) 31 | self.points_sigma = sampling_config.get('points_sigma', 0.01) 32 | 33 | self.n_points_posed = sampling_config.get('n_points_posed', 2048) 34 | self.n_points_can = sampling_config.get('n_points_can', 2048) 35 | 36 | # Get all models 37 | self.data = self._load_data_files() 38 | 39 | def _load_data_files(self): 40 | # load SMPL datasets 41 | # smpl_model, num_joints, model_type = BodyModel.load_smpl_model() 42 | self.faces = np.load(osp.join(self.bm_path, 'neutral', 'model.npz'))['f'] 43 | self.sk_weights = { 44 | gender: np.load(osp.join(self.bm_path, gender, 'model.npz'))['weights'] 45 | for gender in ['male', 'female', 'neutral'] 46 | } 47 | 48 | # list files 49 | data_list = [] 50 | with open(self.split_file, 'r') as f: 51 | for _sequence in f: 52 | sequence = _sequence.strip() # sequence in format dataset/subject/sequence 53 | sequence = sequence.replace('/', osp.sep) 54 | points_dir = osp.join(self.dataset_folder, sequence) 55 | data_files = sorted(glob.glob(osp.join(points_dir, '*.npz'))) 56 | data_list.extend(data_files) 57 | 58 | return data_list 59 | 60 | def __len__(self): 61 | return len(self.data) 62 | 63 | def __getitem__(self, idx): 64 | """ Returns an item of the dataset. 65 | 66 | Args: 67 | idx (int): ID of datasets point 68 | """ 69 | data_path = self.data[idx] 70 | np_data = np.load(data_path) 71 | 72 | to_ret = { 73 | 'can_vertices': np_data['can_vertices'].astype(np.float32), 74 | 'posed_vertices': np_data['posed_vertices'].astype(np.float32), 75 | 76 | 'fwd_transformation': np_data['fwd_transformation'].astype(np.float32), 77 | 'pose': np_data['pose_mat'].astype(np.float32), 78 | 'rel_joints': np_data['rel_joints'].astype(np.float32), 79 | } 80 | self.add_data_files(np_data, to_ret) 81 | 82 | return to_ret 83 | 84 | def add_data_files(self, np_data, to_ret): 85 | pass 86 | 87 | def compute_sk_weights(self, vertices, sampled_points, gender): 88 | kd_tree = KDTree(vertices) 89 | p_idx = kd_tree.query(sampled_points)[1] 90 | points_sk_weights = self.sk_weights[gender][p_idx, :] 91 | 92 | return points_sk_weights.astype(np.float32) 93 | 94 | def sample_points(self, mesh, n_points, prefix='', compute_occupancy=False): 95 | # Get extents of model. 96 | bb_min = np.min(mesh.vertices, axis=0) 97 | bb_max = np.max(mesh.vertices, axis=0) 98 | total_size = (bb_max - bb_min).max() 99 | 100 | # Scales all dimensions equally. 
101 | scale = total_size / (1 - self.bbox_padding) 102 | loc = np.array([(bb_min[0] + bb_max[0]) / 2., 103 | (bb_min[1] + bb_max[1]) / 2., 104 | (bb_min[2] + bb_max[2]) / 2.], dtype=np.float32) 105 | 106 | n_points_uniform = int(n_points * self.points_uniform_ratio) 107 | n_points_surface = n_points - n_points_uniform 108 | 109 | box_size = 1 + self.points_padding 110 | points_uniform = np.random.rand(n_points_uniform, 3) 111 | points_uniform = box_size * (points_uniform - 0.5) 112 | # Scale points in (padded) unit box back to the original space 113 | points_uniform *= scale 114 | points_uniform += loc 115 | # Sample points around posed-mesh surface 116 | n_points_surface_cloth = n_points_surface 117 | points_surface = mesh.sample(n_points_surface_cloth) 118 | 119 | points_surface = points_surface[:n_points_surface_cloth] 120 | points_surface += np.random.normal(scale=self.points_sigma, size=points_surface.shape) 121 | 122 | # Check occupancy values for sampled points 123 | query_points = np.vstack([points_uniform, points_surface]).astype(np.float32) 124 | 125 | to_ret = { 126 | f'{prefix}points': query_points, 127 | f'{prefix}loc': loc, 128 | f'{prefix}scale': np.asarray(scale), 129 | } 130 | if compute_occupancy: 131 | to_ret[f'{prefix}occ'] = check_mesh_contains(mesh, query_points).astype(np.float32) 132 | 133 | return to_ret 134 | 135 | 136 | class AmassLEAPOccupancyDataset(AmassDataset): 137 | """ AMASS dataset class for occupancy training. """ 138 | 139 | def __init__(self, cfg, mode): 140 | super().__init__(cfg, mode) 141 | 142 | def add_data_files(self, np_data, to_ret): 143 | # sample training points 144 | to_ret.update(self.sample_points( 145 | Trimesh(to_ret['posed_vertices'], self.faces), 146 | self.n_points_posed, 147 | compute_occupancy=True)) 148 | 149 | to_ret.update(self.sample_points( 150 | Trimesh(to_ret['can_vertices'], self.faces), 151 | self.n_points_can, 152 | prefix='can_', 153 | compute_occupancy=True)) 154 | 155 | to_ret['can_points_sk_weights'] = self.compute_sk_weights( 156 | to_ret['can_vertices'], 157 | to_ret['can_points'], 158 | np_data['gender'].item()) 159 | 160 | 161 | class AmassFWDLBSDataset(AmassDataset): 162 | """ AMASS dataset class for forward LBS training. """ 163 | 164 | def __init__(self, cfg, mode): 165 | super().__init__(cfg, mode) 166 | 167 | def __getitem__(self, idx): 168 | """ Returns an item of the dataset. 169 | 170 | Args: 171 | idx (int): ID of datasets point 172 | """ 173 | data_path = self.data[idx] 174 | np_data = np.load(data_path, allow_pickle=True) 175 | 176 | to_ret = { 177 | 'can_vertices': np_data['can_vertices'].astype(np.float32), 178 | } 179 | 180 | # sample points 181 | to_ret.update(self.sample_points( 182 | Trimesh(to_ret['can_vertices'], self.faces), 183 | self.n_points_can, 184 | prefix='can_')) 185 | 186 | # proxy skinning weights 187 | to_ret['can_points_sk_weights'] = self.compute_sk_weights( 188 | to_ret['can_vertices'], 189 | to_ret['can_points'], 190 | np_data['gender'].item()) 191 | 192 | return to_ret 193 | 194 | 195 | class AmassINVLBSDataset(AmassDataset): 196 | """ AMASS dataset class for inverse LBS training. 
""" 197 | 198 | def __init__(self, cfg, mode): 199 | super().__init__(cfg, mode) 200 | 201 | def __getitem__(self, idx): 202 | data_path = self.data[idx] 203 | np_data = np.load(data_path) 204 | 205 | to_ret = { 206 | 'can_vertices': np_data['can_vertices'].astype(np.float32), 207 | 'posed_vertices': np_data['posed_vertices'].astype(np.float32), 208 | 'fwd_transformation': np_data['fwd_transformation'].astype(np.float32), 209 | } 210 | 211 | # sample points 212 | to_ret.update(self.sample_points( 213 | Trimesh(to_ret['posed_vertices'], self.faces), self.n_points_posed)) 214 | 215 | # proxy skinning weights 216 | to_ret['points_sk_weights'] = self.compute_sk_weights( 217 | to_ret['posed_vertices'], 218 | to_ret['points'], 219 | np_data['gender'].item()) 220 | 221 | return to_ret 222 | -------------------------------------------------------------------------------- /training_code/evaluate_leap.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import logging 3 | import os 4 | 5 | import numpy as np 6 | import torch 7 | import yaml 8 | 9 | import checkpoints 10 | import config 11 | import utils 12 | 13 | 14 | def main(cfg, num_workers): 15 | # Shortened 16 | out_dir = cfg['training']['out_dir'] 17 | batch_size = cfg['training']['batch_size'] 18 | utils.save_config(os.path.join(out_dir, 'config.yml'), cfg) 19 | 20 | model_selection_metric = cfg['training']['model_selection_metric'] 21 | model_selection_sign = 1 if cfg['training']['model_selection_mode'] == 'maximize' else -1 22 | 23 | # Output directory 24 | utils.cond_mkdir(out_dir) 25 | 26 | # Dataset 27 | test_dataset = config.get_dataset('test', cfg) 28 | 29 | test_loader = torch.utils.data.DataLoader( 30 | test_dataset, 31 | batch_size=batch_size, 32 | num_workers=num_workers, 33 | shuffle=False) 34 | 35 | # Model 36 | model = config.get_model(cfg) 37 | trainer = config.get_trainer(model, None, cfg) 38 | 39 | # Print model 40 | print(model) 41 | logger = logging.getLogger(__name__) 42 | logger.info(f'Total number of parameters: {sum(p.numel() for p in model.parameters())}') 43 | 44 | ckp = checkpoints.CheckpointIO(out_dir, model, None, cfg) 45 | try: 46 | load_dict = ckp.load('model_best.pt') 47 | logger.info('Model loaded') 48 | except FileExistsError: 49 | logger.info('Model NOT loaded') 50 | load_dict = dict() 51 | 52 | metric_val_best = load_dict.get('loss_val_best', -model_selection_sign * np.inf) 53 | 54 | logger.info(f'Current best validation metric ({model_selection_metric}): {metric_val_best:.6f}') 55 | 56 | eval_dict = trainer.evaluate(test_loader) 57 | metric_val = eval_dict[model_selection_metric] 58 | logger.info(f'Validation metric ({model_selection_metric}): {metric_val:.8f}') 59 | 60 | eval_dict_path = os.path.join(out_dir, 'eval_dict.yml') 61 | with open(eval_dict_path, 'w') as f: 62 | yaml.dump(config, f) 63 | 64 | print(f'Results saved in {eval_dict_path}') 65 | 66 | 67 | if __name__ == '__main__': 68 | parser = argparse.ArgumentParser() 69 | parser.add_argument( 70 | 'config', 71 | type=str, 72 | help='Path to a config file.') 73 | parser.add_argument( 74 | '--num_workers', 75 | type=int, 76 | default=6, 77 | help='Number of workers for datasets loaders.') 78 | args = parser.parse_args() 79 | 80 | logging.basicConfig(level=logging.INFO) 81 | 82 | main(cfg=config.load_config(args.config), 83 | num_workers=args.num_workers) 84 | -------------------------------------------------------------------------------- /training_code/train_leap.py: 
-------------------------------------------------------------------------------- 1 | import os 2 | import argparse 3 | import logging 4 | 5 | import numpy as np 6 | import torch 7 | import torch.optim as optim 8 | import tensorboardX 9 | 10 | import config 11 | import checkpoints 12 | import utils 13 | 14 | 15 | def main(cfg, num_workers): 16 | # Shortened 17 | out_dir = cfg['training']['out_dir'] 18 | batch_size = cfg['training']['batch_size'] 19 | backup_every = cfg['training']['backup_every'] 20 | utils.save_config(os.path.join(out_dir, 'config.yml'), cfg) 21 | 22 | model_selection_metric = cfg['training']['model_selection_metric'] 23 | model_selection_sign = 1 if cfg['training']['model_selection_mode'] == 'maximize' else -1 24 | 25 | # Output directory 26 | utils.cond_mkdir(out_dir) 27 | 28 | # Dataset 29 | train_dataset = config.get_dataset('train', cfg) 30 | val_dataset = config.get_dataset('val', cfg) 31 | 32 | train_loader = torch.utils.data.DataLoader( 33 | train_dataset, 34 | batch_size=batch_size, 35 | num_workers=num_workers, 36 | shuffle=True) 37 | val_loader = torch.utils.data.DataLoader( 38 | val_dataset, 39 | batch_size=batch_size, 40 | num_workers=num_workers, 41 | shuffle=False) 42 | 43 | # Model 44 | model = config.get_model(cfg) 45 | optimizer = optim.Adam(model.parameters(), lr=1e-4) 46 | trainer = config.get_trainer(model, optimizer, cfg) 47 | 48 | # Print model 49 | print(model) 50 | logger = logging.getLogger(__name__) 51 | logger.info(f'Total number of parameters: {sum(p.numel() for p in model.parameters())}') 52 | 53 | # load pretrained model 54 | tb_logger = tensorboardX.SummaryWriter(os.path.join(out_dir, 'logs')) 55 | ckp = checkpoints.CheckpointIO(out_dir, model, optimizer, cfg) 56 | try: 57 | load_dict = ckp.load('model_best.pt') 58 | logger.info('Model loaded') 59 | except FileExistsError: 60 | logger.info('Model NOT loaded') 61 | load_dict = dict() 62 | 63 | epoch_it = load_dict.get('epoch_it', -1) 64 | it = load_dict.get('it', -1) 65 | metric_val_best = load_dict.get('loss_val_best', -model_selection_sign * np.inf) 66 | 67 | logger.info(f'Current best validation metric ({model_selection_metric}): {metric_val_best:.6f}') 68 | 69 | # Shortened 70 | print_every = cfg['training']['print_every'] 71 | validate_every = cfg['training']['validate_every'] 72 | max_iterations = cfg['training']['max_iterations'] 73 | max_epochs = cfg['training']['max_epochs'] 74 | 75 | while True: 76 | epoch_it += 1 77 | 78 | for batch in train_loader: 79 | it += 1 80 | loss_dict = trainer.train_step(batch) 81 | loss = loss_dict['total_loss'] 82 | for k, v in loss_dict.items(): 83 | tb_logger.add_scalar(f'train/{k}', v, it) 84 | 85 | # Print output 86 | if print_every > 0 and (it % print_every) == 0: 87 | logger.info(f'[Epoch {epoch_it:02d}] it={it:03d}, loss={loss:.8f}') 88 | 89 | # Backup if necessary 90 | if backup_every > 0 and (it % backup_every) == 0: 91 | logger.info('Backup checkpoint') 92 | ckp.save(f'model_{it:d}.pt', epoch_it=epoch_it, it=it, loss_val_best=metric_val_best) 93 | 94 | # Run validation 95 | if validate_every > 0 and (it % validate_every) == 0: 96 | eval_dict = trainer.evaluate(val_loader) 97 | print('eval_dict=\n', eval_dict) 98 | metric_val = eval_dict[model_selection_metric] 99 | logger.info(f'Validation metric ({model_selection_metric}): {metric_val:.8f}') 100 | 101 | for k, v in eval_dict.items(): 102 | tb_logger.add_scalar(f'val/{k}', v, it) 103 | 104 | if model_selection_sign * (metric_val - metric_val_best) > 0: 105 | metric_val_best = metric_val 106 | 
logger.info(f'New best model (loss {metric_val_best:.8f}') 107 | ckp.save('model_best.pt', epoch_it=epoch_it, it=it, loss_val_best=metric_val_best) 108 | 109 | if (0 < max_iterations <= it) or (0 < max_epochs <= epoch_it): 110 | logger.info(f'Maximum iteration/epochs ({epoch_it}/{it}) reached. Exiting.') 111 | ckp.save(f'model_{it:d}.pt', epoch_it=epoch_it, it=it, loss_val_best=metric_val_best) 112 | exit(3) 113 | 114 | 115 | if __name__ == '__main__': 116 | parser = argparse.ArgumentParser() 117 | parser.add_argument( 118 | 'config', 119 | type=str, 120 | help='Path to a config file.') 121 | parser.add_argument( 122 | '--num_workers', 123 | type=int, 124 | default=6, 125 | help='Number of workers for datasets loaders.') 126 | args = parser.parse_args() 127 | 128 | logging.basicConfig(level=logging.INFO) 129 | 130 | main(cfg=config.load_config(args.config), 131 | num_workers=args.num_workers) 132 | -------------------------------------------------------------------------------- /training_code/trainers/__init__.py: -------------------------------------------------------------------------------- 1 | from .leap_trainer import FWDLBSTrainer, INVLBSTrainer, LEAPModelTrainer 2 | 3 | __all__ = [ 4 | FWDLBSTrainer, 5 | INVLBSTrainer, 6 | LEAPModelTrainer 7 | ] 8 | 9 | -------------------------------------------------------------------------------- /training_code/trainers/leap_trainer.py: -------------------------------------------------------------------------------- 1 | from collections import defaultdict 2 | 3 | import numpy as np 4 | import torch 5 | import tqdm 6 | 7 | from leap.modules import LEAPModel, INVLBS, FWDLBS 8 | 9 | 10 | class BaseTrainer: 11 | """ Base trainers class. 12 | 13 | Args: 14 | model (torch.nn.Module): Occupancy Network model 15 | optimizer (torch.optim.Optimizer): pytorch optimizer object 16 | cfg (dict): configuration 17 | """ 18 | 19 | def __init__(self, model, optimizer, cfg): 20 | self.model = model 21 | self.optimizer = optimizer 22 | self.device = cfg['device'] 23 | 24 | def evaluate(self, val_loader): 25 | """ Performs an evaluation. 26 | Args: 27 | val_loader (torch.DataLoader): pytorch dataloader 28 | """ 29 | eval_list = defaultdict(list) 30 | 31 | for data in tqdm.tqdm(val_loader): 32 | eval_step_dict = self.eval_step(data) 33 | 34 | for k, v in eval_step_dict.items(): 35 | eval_list[k].append(v) 36 | 37 | eval_dict = {k: np.mean(v) for k, v in eval_list.items()} 38 | return eval_dict 39 | 40 | def _train_mode(self): 41 | self.model.train() 42 | 43 | def train_step(self, data): 44 | self._train_mode() 45 | self.optimizer.zero_grad() 46 | loss_dict = self.compute_loss(data) 47 | loss_dict['total_loss'].backward() 48 | self.optimizer.step() 49 | return {k: v.item() for k, v in loss_dict.items()} 50 | 51 | @torch.no_grad() 52 | def eval_step(self, data): 53 | """ Performs an evaluation step. 54 | 55 | Args: 56 | data (dict): datasets dictionary 57 | """ 58 | self.model.eval() 59 | eval_loss_dict = self.compute_eval_loss(data) 60 | return {k: v.item() for k, v in eval_loss_dict.items()} 61 | 62 | def compute_loss(self, *kwargs): 63 | """ Computes the training loss. 64 | 65 | Args: 66 | kwargs (dict): datasets dictionary 67 | """ 68 | raise NotImplementedError 69 | 70 | @torch.no_grad() 71 | def compute_eval_loss(self, data): 72 | """ Computes the validation loss. 
73 | 74 | Args: 75 | data (dict): datasets dictionary 76 | """ 77 | return self.compute_loss(data) 78 | 79 | 80 | class FWDLBSTrainer(BaseTrainer): 81 | 82 | def __init__(self, model: FWDLBS, optimizer: torch.optim.Optimizer, cfg: dict): 83 | super().__init__(model, optimizer, cfg) 84 | 85 | def compute_loss(self, data): 86 | sk_weights = self.model( 87 | data['can_points'].to(device=self.device), 88 | data['can_vertices'].to(device=self.device)) 89 | gt_sk_weights = data['can_points_sk_weights'].to(device=self.device) 90 | 91 | loss_dict = { 92 | 'sk_loss': (sk_weights - gt_sk_weights).abs().sum(-1).mean(-1).mean() 93 | } 94 | loss_dict['total_loss'] = loss_dict['sk_loss'] 95 | return loss_dict 96 | 97 | 98 | class INVLBSTrainer(BaseTrainer): 99 | def __init__(self, model: INVLBS, optimizer: torch.optim.Optimizer, cfg: dict): 100 | super().__init__(model, optimizer, cfg) 101 | 102 | def compute_loss(self, data): 103 | sk_weights = self.model( 104 | points=data['points'].to(device=self.device), 105 | can_vertices=data['can_vertices'].to(device=self.device), 106 | posed_vertices=data['posed_vertices'].to(device=self.device), 107 | fwd_transformation=data['fwd_transformation'].to(device=self.device), 108 | compute_can_points=False) 109 | gt_sk_weights = data['points_sk_weights'].to(device=self.device) 110 | 111 | loss_dict = { 112 | 'sk_loss': (sk_weights - gt_sk_weights).abs().sum(-1).mean(-1).mean() 113 | } 114 | loss_dict['total_loss'] = loss_dict['sk_loss'] 115 | return loss_dict 116 | 117 | 118 | class LEAPModelTrainer(BaseTrainer): 119 | def __init__(self, model: LEAPModel, optimizer: torch.optim.Optimizer, cfg: dict): 120 | super().__init__(model, optimizer, cfg) 121 | 122 | self._eval_lbs_mode() 123 | 124 | def _eval_lbs_mode(self): 125 | self.model.inv_lbs.requires_grad_(False) 126 | self.model.inv_lbs.eval() 127 | 128 | self.model.fwd_lbs.requires_grad_(False) 129 | self.model.fwd_lbs.eval() 130 | 131 | def _train_mode(self): 132 | self.model.train() 133 | self._eval_lbs_mode() 134 | 135 | @torch.no_grad() 136 | def compute_eval_loss(self, data): 137 | # Only evaluate uniformly sampled points 138 | n_points = data['points'].shape[1] // 2 139 | 140 | gt_occupancy = data['occ'][:, :n_points].to(device=self.device) 141 | can_vert = data['can_vertices'].to(device=self.device) 142 | point_weights, can_points = self.model.inv_lbs( 143 | points=data['points'][:, :n_points, :].to(device=self.device), 144 | can_vertices=can_vert, 145 | posed_vertices=data['posed_vertices'].to(device=self.device), 146 | fwd_transformation=data['fwd_transformation'].to(device=self.device)) 147 | 148 | fwd_point_weights = self.model.fwd_lbs(can_points, can_vert) 149 | cycle_distance = torch.sum((point_weights - fwd_point_weights).abs(), dim=-1, keepdim=True) 150 | 151 | occupancy = torch.sigmoid(self.model.leap_occupancy_decoder( 152 | can_points=can_points, point_weights=point_weights, cycle_distance=cycle_distance, can_vert=can_vert, 153 | rot_mats=data['pose'].to(device=self.device), rel_joints=data['rel_joints'].to(device=self.device))) 154 | 155 | return { 156 | 'iou': self.compute_iou(occupancy >= 0.5, gt_occupancy >= 0.5).mean() 157 | } 158 | 159 | def compute_loss(self, data): 160 | gt_occupancy = data['occ'].to(device=self.device) 161 | can_vert = data['can_vertices'].to(device=self.device) 162 | with torch.no_grad(): 163 | point_weights, can_points = self.model.inv_lbs( 164 | data['points'].to(device=self.device), 165 | can_vert, 166 | data['posed_vertices'].to(device=self.device), 167 | 
data['fwd_transformation'].to(device=self.device)) 168 | fwd_point_weights = self.model.fwd_lbs(can_points, can_vert) 169 | cycle_distance = torch.sum((point_weights - fwd_point_weights).abs(), dim=-1, keepdim=True) 170 | 171 | # handle points directly sampled in the canonical space 172 | if 'can_points' in data: 173 | can_points = torch.cat(( 174 | can_points, 175 | data['can_points'].to(device=self.device) 176 | ), dim=1) 177 | 178 | point_weights = torch.cat(( 179 | point_weights, 180 | data['can_points_sk_weights'].to(device=self.device) 181 | ), dim=1) 182 | 183 | cycle_distance = torch.cat(( 184 | cycle_distance, 185 | torch.zeros((*data['can_points'].shape[:-1], cycle_distance.shape[-1]), 186 | device=self.device, dtype=cycle_distance.dtype) 187 | ), dim=1) 188 | 189 | gt_occupancy = torch.cat(( 190 | gt_occupancy, 191 | data['can_occ'].to(device=self.device) 192 | ), dim=1) 193 | 194 | occupancy = torch.sigmoid(self.model.leap_occupancy_decoder( 195 | can_points=can_points, point_weights=point_weights, cycle_distance=cycle_distance, can_vert=can_vert, 196 | rot_mats=data['pose'].to(device=self.device), rel_joints=data['rel_joints'].to(device=self.device))) 197 | 198 | loss_dict = { 199 | 'occ_loss': ((occupancy - gt_occupancy) ** 2).sum(-1).mean(), 200 | } 201 | loss_dict['total_loss'] = loss_dict['occ_loss'] 202 | return loss_dict 203 | 204 | @staticmethod 205 | def compute_iou(occ1, occ2): 206 | """ Computes the Intersection over Union (IoU) value for two sets of 207 | occupancy values. 208 | 209 | Args: 210 | occ1 (tensor): first set of occupancy values 211 | occ2 (tensor): second set of occupancy values 212 | """ 213 | # Also works for 1-dimensional data 214 | if len(occ1.shape) >= 2: 215 | occ1 = occ1.reshape(occ1.shape[0], -1) 216 | if len(occ2.shape) >= 2: 217 | occ2 = occ2.reshape(occ2.shape[0], -1) 218 | 219 | # Convert to boolean values 220 | occ1 = (occ1 >= 0.5) 221 | occ2 = (occ2 >= 0.5) 222 | 223 | # Compute IOU 224 | area_union = (occ1 | occ2).float().sum(axis=-1) 225 | area_intersect = (occ1 & occ2).float().sum(axis=-1) 226 | 227 | iou = (area_intersect / area_union) 228 | 229 | return iou 230 | -------------------------------------------------------------------------------- /training_code/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import yaml 3 | from pathlib import Path 4 | 5 | 6 | 7 | def cond_mkdir(dir_path): 8 | Path(dir_path).mkdir(parents=True, exist_ok=True) 9 | 10 | 11 | def save_config(path, config): 12 | """ Saves config file. 13 | 14 | Args: 15 | path (str): Path to config file. 16 | config (dict): Config dictionary. 17 | """ 18 | cond_mkdir(os.path.dirname(path)) 19 | 20 | config['git_head'] = _get_git_commit_head() 21 | with open(path, 'w') as f: 22 | yaml.dump(config, f) 23 | 24 | 25 | def _get_git_commit_head(): 26 | try: 27 | import subprocess 28 | head = subprocess.check_output("git rev-parse HEAD", stderr=subprocess.DEVNULL, shell=True) 29 | return head.decode('utf-8').strip() 30 | except: 31 | return '' 32 | --------------------------------------------------------------------------------
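The compiled `libmesh` extension listed above is what `AmassDataset.sample_points` reaches through `check_mesh_contains` to label query points as inside or outside a watertight mesh. A minimal usage sketch, assuming only `trimesh` and the built extension; the unit-box mesh and the number of query points are arbitrary choices for illustration, not values from the released code:

```python
import numpy as np
import trimesh

from leap.tools.libmesh import check_mesh_contains

# Any watertight mesh works; a unit cube keeps the sketch self-contained.
mesh = trimesh.creation.box(extents=(1.0, 1.0, 1.0))

# Uniform samples in a slightly padded box around the mesh, mirroring sample_points.
query_points = 1.1 * (np.random.rand(2048, 3) - 0.5)

# Boolean occupancy labels, one per query point.
occupancy = check_mesh_contains(mesh, query_points.astype(np.float32))
print('fraction of points inside the mesh:', occupancy.astype(np.float32).mean())
```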
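`training_code/config.py` leaves `get_generator` as a commented-out TODO, so mesh extraction is not wired up in this snapshot. The sketch below shows how the `MISE` refinement loop from `leap/tools/libmise` is typically paired with an occupancy function and marching cubes; the `occupancy_fn` callable, the 0.5 threshold, the unit-cube query domain, and the use of `skimage.measure.marching_cubes` are illustrative assumptions, not part of the released code.

```python
import numpy as np
from skimage.measure import marching_cubes  # scikit-image is already in requirements.txt

from leap.tools.libmise import MISE


def extract_mesh(occupancy_fn, resolution_0=32, upsampling_steps=2, threshold=0.5):
    """Multi-resolution iso-surface extraction sketch built around MISE.

    `occupancy_fn` is an assumed callable mapping (N, 3) float points in the unit
    cube to (N,) occupancy probabilities (e.g. a wrapper around a trained decoder).
    """
    extractor = MISE(resolution_0, upsampling_steps, threshold)

    points = extractor.query()  # (N, 3) int64 grid coordinates still to be evaluated
    while points.shape[0] != 0:
        # Map integer grid coordinates to the unit cube before querying the network.
        values = occupancy_fn(points.astype(np.float64) / extractor.resolution)
        extractor.update(points, values.astype(np.float64))
        points = extractor.query()

    value_grid = extractor.to_dense()  # dense (resolution + 1)^3 occupancy grid
    vertices, faces, _, _ = marching_cubes(value_grid, level=threshold)
    vertices /= extractor.resolution  # back to unit-cube coordinates
    return vertices, faces
```

After the initial coarse pass, `MISE` only subdivides voxels that sit next to both positive and negative values, so the number of occupancy queries grows roughly with the surface area of the body rather than with the full high-resolution grid.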