├── .gitignore ├── LICENSE.md ├── README.md ├── __init__.py ├── calc_skirt_weight.py ├── count_data.py ├── global_var.py ├── pivots ├── __init__.py ├── find_pivots.py └── gen_test.py ├── simulation_pose ├── __init__.py ├── gen_motion.py ├── gen_pose.py ├── gen_pose_utils.py ├── post_proc.py └── readme.md ├── simulation_style ├── __init__.py ├── choose_avail.py ├── post_proc.py ├── readme.md ├── style_shape.py ├── style_shape_pant.py ├── style_shape_skirt.py └── vis_simulation.py ├── smpl_lib ├── __init__.py ├── ch.py ├── ch_smpl.py ├── convert_smpl_models.py ├── lbs.py ├── posemapper.py ├── serialization.py ├── smpl_paths.py └── verts.py ├── smpl_torch.py ├── style_pca ├── __init__.py ├── copy_raw.py ├── gender_pca.py ├── pca_interactive.py ├── proc_raw.py ├── proc_raw_skirt.py ├── proc_raw_skirt_ext.py ├── readme.md ├── style_pca.py └── vis_raw.py ├── utils ├── __init__.py ├── diffusion_smoothing.py ├── geodesic.py ├── geometry.py ├── ios.py ├── part_body.py ├── render_lib │ ├── __init__.py │ ├── mesh_core.cpp │ ├── mesh_core.h │ ├── mesh_core_cython.pyx │ └── setup.py ├── renderer.py ├── rotation.py └── smpl.py └── visualize_dataset.py /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | *.pyc 3 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | License 2 | Copyright (c) 2019 Gerard Pons Moll, Max-Planck-Gesellschaft 3 | 4 | **Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").** 5 | 6 | The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects. 7 | 8 | Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 11 | 12 | You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time. 13 | 14 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the **TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style** paper in documents and papers that report on research using this Software. 
15 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # TailorNet Dataset
2 | This repository is a toolbox to process and visualize the dataset for "TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style" (CVPR 2020 Oral).
3 | 
4 | [[model repository](https://github.com/chaitanya100100/TailorNet)][[arxiv](https://arxiv.org/abs/2003.04583)][[project website](https://virtualhumans.mpi-inf.mpg.de/tailornet/)][[YouTube](https://www.youtube.com/watch?v=F0O21a_fsBQ)]
5 | 
6 | ## Update
7 | 2021/2/2 dataset uploaded to Baidu Drive
8 | 2021/1/7 data generation code
9 | 2020/12/7 short pants and skirt are available
10 | 2020/7/31 pants and shirt are available
11 | 
12 | 
13 | ## Requirements
14 | python3
15 | pytorch
16 | chumpy
17 | opencv-python
18 | cython
19 | 
20 | ## SMPL model
21 | 1. Register and download the SMPL models [here](https://smpl.is.tue.mpg.de)
22 | 2. Unzip `SMPL_python_v.1.0.0.zip` and put `smpl/models/*.pkl` in `ROOT/smpl` (specify `ROOT` in `global_var.py`)
23 | 3. Run `smpl_lib/convert_smpl_models.py`
24 | 
25 | ## Data preparation
26 | All data is available at the following links:
27 | [Data](https://nextcloud.mpi-klsb.mpg.de/index.php/s/W7a57iXRG9Yms6P) or
28 | [Baidu Drive](https://pan.baidu.com/s/14roMijl36ruppRNd3tNC2g) (password: TLNT)
29 | 1. Download the meta data (dataset_meta.zip) of the dataset
30 | 
31 | 2. Download one or more sub-datasets (other garment classes are coming soon)
32 |    t-shirt_female (6.9G)
33 |    t-shirt_male (7.2G)
34 |    old-t-shirt_female (10G)
35 |    t-shirt_female_sample (19M)
36 |    shirt_female (12.7G)
37 |    shirt_male (13.5G)
38 |    pant_female (3.3G)
39 |    pant_male (3.4G)
40 |    short-pant_female (1.9G)
41 |    short-pant_male (2G)
42 |    skirt_female (5G)
43 | 
44 | 3. Specify the variable `ROOT` in `global_var.py`
45 | 4. Unzip all downloaded files to `ROOT`
46 | 
47 | ## Dataset Description
48 | Currently, we have 6 garment classes (t-shirt, shirt, pant, skirt, short-pant, old-t-shirt).
49 | In the TailorNet paper, we trained and tested our model on `old-t-shirt`.
50 | Compared to `old-t-shirt`, `t-shirt` has a different topology, higher quality and larger style variation.
51 | Use `old-t-shirt` if you want a fair comparison with the results in our paper.
52 | 
53 | The dataset structure looks like this:
54 | ```
55 | ROOT
56 | ----smpl
57 | ----apose.npy
58 | ----garment_class_info.pkl
59 | ----split_static_pose_shape.npz
60 | 
61 | ----<garment_class>_<gender> (e.g., t-shirt_female)
62 | --------pose/
63 | ------------<shape_idx>_<style_idx> (e.g., 000_023)
64 | --------shape/
65 | --------style/
66 | --------style_shape/
67 | --------avail.txt
68 | --------pivots.txt
69 | --------test.txt
70 | --------style_model.npz
71 | ```
72 | 
73 | We provide `apose.npy`, `garment_class_info.pkl` and `split_static_pose_shape.npz` separately in `dataset_meta.zip`, and each `<garment_class>_<gender>` in a separate zip file.
74 | 
75 | - `split_static_pose_shape.npz` contains a dictionary `{'train': <train_idx>, 'test': <test_idx>}` where `<train_idx>` and `<test_idx>` are np arrays specifying the indices of the poses which go into the train and test sets respectively.
76 | - `garment_class_info.pkl` contains a dictionary `{<garment_class>: {'f': <f>, 'vert_indices': <vert_indices>}}` where `<vert_indices>` denotes the vertex indices of the high-resolution SMPL body template which define the garment topology of `<garment_class>`, and `<f>` denotes the faces of the template garment mesh.
77 | - `apose.npy` contains the thetas of the A-pose on which the garment style space is modeled.
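The following is a minimal sketch of one way to inspect these meta files; it assumes only the layout described above and `ROOT` from `global_var.py` (the `latin-1` pickle encoding mirrors how `simulation_style/post_proc.py` loads the same file).

```python
import os.path as osp
import pickle

import numpy as np

from global_var import ROOT

# pose indices defining the train/test split, shared across garment classes
split = np.load(osp.join(ROOT, 'split_static_pose_shape.npz'))
print('train poses:', split['train'].shape, 'test poses:', split['test'].shape)

# per-class garment topology and correspondence to the hi-res SMPL template
with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
    class_info = pickle.load(f, encoding='latin-1')
for garment_class, info in class_info.items():
    print(garment_class, 'faces:', info['f'].shape,
          'vert_indices:', np.asarray(info['vert_indices']).shape)

# axis-angle thetas of the A-pose on which the style space is modeled
print('apose:', np.load(osp.join(ROOT, 'apose.npy')).shape)
```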
78 | 
79 | - For each `<garment_class>_<gender>`,
80 |   - The `shape` directory contains uniformly chosen shape (beta) parameters.
81 | 
82 |   - `style_model.npz` contains a dictionary with these variables: `pca_w`, `mean`, `coeff_mean`, `coeff_range`. For a given style `gamma`, garment vertices can be obtained using the following equation (see the worked sketch after the Dataset Generation section below):
83 |     - `pca_w * (gamma + coeff_mean) + mean`
84 |   - The `style` directory contains uniformly chosen style (gamma) parameters.
85 |   - All styles are simulated on all shapes in A-pose, and the results are stored in the `style_shape` directory. Out of those, the shape_style pairs (also called pivots) with feasible simulation results are listed in `avail.txt`.
86 |   - `pivots.txt` lists those pivots which are chosen, as per the algorithm described in the paper's subsection "Choosing K Style-Shape Prototypes", to simulate the training data. `test.txt` lists additional pivots chosen to generate the testing data.
87 |   - Each chosen pivot, denoted as `<shape_idx>_<style_idx>`, is simulated in a few pose sequences. Simulation results are stored in the `pose/<shape_idx>_<style_idx>` directory as unposed garment displacements. (Garment displacements are added onto the unposed template before applying standard SMPL skinning to get the final garment. See the paper for details.)
88 |   - `pose/<shape_idx>_<style_idx>` also contains displacements for the smoothed unposed garment.
89 | 
90 | ## Usage
91 | If you want to convert the data to a mesh format (e.g., .obj), please check Line 44 to Line 64 in `visualize_dataset.py`. This code converts the TailorNet data sequences into meshes (`gar_v` and `gar_f` are the vertices and faces of the garment) and renders them.
92 | 
93 | ## Visualize the dataset
94 | 1. Install the renderer
95 | ```
96 | cd utils/render_lib
97 | python setup.py build_ext -i
98 | ```
99 | 2. Run the visualizer
100 | ```
101 | python visualize_dataset.py
102 | ```
103 | 
104 | ## Dataset Generation
105 | Download datagen_assets.zip and unzip it to `ROOT`. Please check the readme.md in each directory for details.
106 | 1. style_pca
107 |    Scripts that process garment registrations and model the garment style space.
108 | 2. simulation_style
109 |    Simulate all (style, shape) combinations in A-pose.
110 | 3. pivots
111 |    Generate pivots and the test set.
112 | 4. simulation_pose
113 |    Simulate different poses for the pivot and test (style, shape) pairs.
114 | 
115 | Since our raw data is not public and simulation in Marvelous Designer cannot be scripted,
116 | these scripts are for reference only. If you want to simulate your own data,
117 | make sure you understand most of the code and the paper,
118 | so that you can modify the parameters that are highly dependent on the data.
119 | 
120 | [Here](https://nextcloud.mpi-klsb.mpg.de/index.php/s/W7a57iXRG9Yms6P?path=%2F&openfile=10944907) is an example video of a simulation in Marvelous Designer.
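As a worked example of the style equation above, the sketch below reconstructs garment vertices for one style `gamma` and dumps them as a mesh. It is a minimal sketch, not part of the toolbox: it assumes `pca_w` is stored as an `(n_coeff, 3 * n_verts)` matrix (as its use in `calc_skirt_weight.py` suggests), and the output filename is arbitrary.

```python
import os.path as osp
import pickle

import numpy as np

from global_var import ROOT

garment_class, gender = 't-shirt', 'female'  # any downloaded sub-dataset

# style model: verts = pca_w * (gamma + coeff_mean) + mean
sm = np.load(osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'style_model.npz'))
gamma = np.zeros(sm['pca_w'].shape[0])  # the all-zero gamma gives the mean style
verts = (np.dot(gamma + sm['coeff_mean'], sm['pca_w']) + sm['mean']).reshape(-1, 3)

# faces of the template garment mesh
with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
    faces = pickle.load(f, encoding='latin-1')[garment_class]['f']

# minimal .obj export (faces are 0-indexed here, 1-indexed in .obj)
with open('mean_style.obj', 'w') as f:
    for v in verts:
        f.write('v {} {} {}\n'.format(v[0], v[1], v[2]))
    for face in faces:
        f.write('f {} {} {}\n'.format(face[0] + 1, face[1] + 1, face[2] + 1))
```

Note that this yields the garment in the canonical A-pose style space; draping it on a posed body additionally requires the unposed displacements and SMPL skinning described in the Dataset Description above.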
121 | 
122 | 
123 | ## Count the dataset
124 | ```
125 | python count_data.py
126 | ——————————————————————————————————————————————————————————————————————————————
127 | |          |          |    train style_shape|     test style_shape|          |
128 | |     class|    gender|train pose| test pose|train pose| test pose|     total|
129 | ——————————————————————————————————————————————————————————————————————————————
130 | |   t-shirt|    female|     14589|      3309|       776|       224|     18898|
131 | |   t-shirt|      male|     14397|      3353|       815|       185|     18750|
132 | |     shirt|    female|     14553|      3342|       856|       144|     18895|
133 | |     shirt|      male|     14322|      3328|       831|       169|     18650|
134 | |      pant|    female|     14569|      3430|       805|       195|     18999|
135 | |      pant|      male|     14562|      3423|       793|       203|     18981|
136 | |short-pant|    female|     14546|      3451|       804|       196|     18997|
137 | |short-pant|      male|     14563|      3426|       796|       203|     18988|
138 | |     skirt|    female|     14554|      3444|       803|       197|     18998|
139 | |                total|    130655|     30506|      7279|      1716|    170156|
140 | ——————————————————————————————————————————————————————————————————————————————
141 | ```
142 | 
143 | ## TODO
144 | - [x] Dataset generation code
145 | - [x] Style space visualizer
146 | - [ ] Blender visualizer
147 | - [x] Shirt, pants, skirt
148 | - [x] T-shirt
149 | - [x] Basic visualizer
150 | 
151 | ## Citation
152 | Cite us:
153 | ```
154 | @inproceedings{patel20tailornet,
155 |   title = {TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style},
156 |   author = {Patel, Chaitanya and Liao, Zhouyingcheng and Pons-Moll, Gerard},
157 |   booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)},
158 |   month = {jun},
159 |   organization = {{IEEE}},
160 |   year = {2020},
161 | }
162 | ```
163 | 
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/__init__.py
--------------------------------------------------------------------------------
/calc_skirt_weight.py:
--------------------------------------------------------------------------------
1 | # find correspondence between the skirt and the SMPL body
2 | import pickle
3 | import os.path as osp
4 | import numpy as np
5 | import cv2
6 | from utils.rotation import get_Apose
7 | from global_var import ROOT
8 | from smpl_torch import SMPLNP, TorchSMPL4Garment
9 | from sklearn.decomposition import PCA
10 | from utils.renderer import Renderer
11 | 
12 | 
13 | # load template skirt
14 | style_model = np.load(osp.join(ROOT, 'skirt_female', 'style_model.npz'))
15 | pca = PCA(n_components=4)
16 | pca.components_ = style_model['pca_w']
17 | pca.mean_ = style_model['mean']
18 | skirt_v = pca.inverse_transform(np.zeros([1, 4])).reshape([-1, 3])
19 | 
20 | # move the skirt to the right position
21 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
22 |     garment_meta = pickle.load(f)
23 | skirt_f = garment_meta['skirt']['f']
24 | vert_indices = garment_meta['pant']['vert_indices']
25 | up_bnd_inds = np.load(osp.join(ROOT, 'skirt_upper_boundary.npy'))
26 | pant_up_bnd_inds = np.load(osp.join(ROOT, 'pant_upper_boundary.npy'))
27 | waist_body_inds = vert_indices[pant_up_bnd_inds]
28 | 
29 | smpl = SMPLNP(gender='female')
30 | apose = get_Apose()
31 | body_v, _ = smpl(np.zeros([300]), apose, None, None)
32 | trans = np.mean(body_v[waist_body_inds], 0, keepdims=True) - np.mean(skirt_v[up_bnd_inds], 0, keepdims=True)
33 | skirt_v = skirt_v + trans
34 | 
35 | skirt_v[:, 0] -= 0.01
36 | 
37 | p = 1
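# p and K parameterize the inverse-distance weighting below: each skirt vertex is
# attached to its K nearest body vertices with weights proportional to 1/dist**p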
38 | K = 100
39 | 
40 | # find closest vertices
41 | dist = np.sqrt(np.sum(np.square(skirt_v[:, None] - body_v[None]), 2))  # n_skirt, n_body
42 | body_ind = np.argsort(dist, 1)[:, :K]
43 | body_dist = np.sort(dist, 1)[:, :K]
44 | # Inverse distance weighting
45 | w = 1/(body_dist**p)
46 | w = w / np.sum(w, 1, keepdims=True)
47 | n_skirt = len(skirt_v)
48 | n_body = len(body_v)
49 | skirt_weight = np.zeros([n_skirt, n_body], dtype=np.float32)
50 | skirt_weight[np.tile(np.arange(n_skirt)[:, None], (1, K)), body_ind] = w
51 | np.savez_compressed(osp.join(ROOT, 'skirt_weight.npz'), w=skirt_weight)
52 | 
53 | exit()
54 | 
55 | 
56 | # test
57 | renderer = Renderer(512)
58 | smpl = SMPLNP(gender='female', skirt=True)
59 | smpl_torch = TorchSMPL4Garment('female')
60 | 
61 | import torch
62 | disp = smpl_torch.forward_unpose_deformation(torch.from_numpy(np.zeros([1, 72])).float(), torch.from_numpy(np.zeros([1, 300])).float(),
63 |                                              torch.from_numpy(skirt_v)[None].float())
64 | disp = disp.detach().cpu().numpy()[0]
65 | 
66 | for t in np.linspace(0, 1, 20):
67 |     theta = np.zeros([72])
68 |     theta[5] = t
69 |     theta[8] = -t
70 |     body_v, gar_v = smpl(np.zeros([300]), theta, disp, 'skirt')
71 |     img = renderer([body_v, gar_v], [smpl.base.faces, skirt_f],
72 |                    [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], trans=[1, 0, 0])
73 |     cv2.imshow('a', img)
74 |     cv2.waitKey()
75 | 
76 | # disp = skirt_v - body_v[skirt_ind]
77 | # betas = np.zeros([10, 300])
78 | # betas[:, 1] = np.linspace(-2, 2, 10)
79 | # body_vs, _ = smpl(betas, np.tile(apose[None], (10, 1)), None, None, batch=True)
80 | # gar_vs = body_vs[:, skirt_ind] + disp[None]
81 | # 
82 | # for i, (body_v, gar_v) in enumerate(zip(body_vs, gar_vs)):
83 | #     img = renderer([body_v, gar_v], [smpl.base.faces, skirt_f],
84 | #                    [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], trans=[1, 0, 0])
85 | #     cv2.imwrite('a{}.jpg'.format(i), img)
86 | # 
--------------------------------------------------------------------------------
/count_data.py:
--------------------------------------------------------------------------------
1 | import os
2 | import os.path as osp
3 | import numpy as np
4 | from global_var import ROOT
5 | 
6 | 
7 | if __name__ == '__main__':
8 |     all_garment_class = ['t-shirt', 'shirt', 'pant', 'short-pant', 'skirt']
9 |     genders = ['female', 'male']
10 |     splits = ['train', 'test']
11 |     img_size = 512
12 | 
13 |     split_idx = np.load(osp.join(ROOT, 'split_static_pose_shape.npz'))
14 |     train_idx = split_idx['train']
15 |     test_idx = split_idx['test']
16 | 
17 |     print("—" * 78)
18 |     print("|{:>10}|{:>10}|{:>21}|{:>21}|{:>10}|".format('', '', 'train style_shape', 'test style_shape', ''))
19 |     print("|{:>10}|".format('class'), end='')
20 |     print("{:>10}|".format('gender'), end='')
21 |     print("{:>10}|{:>10}|".format('train pose', 'test pose'), end='')
22 |     print("{:>10}|{:>10}|".format('train pose', 'test pose'), end='')
23 |     print("{:>10}|".format('total'))
24 |     print("—" * 78)
25 | 
26 |     count_split = np.array([0, 0, 0, 0])
27 |     for garment_class in all_garment_class:
28 |         for gender in genders:
29 |             if garment_class == 'skirt' and gender == 'male':
30 |                 continue
31 |             print("|{:>10}|".format(garment_class), end='')
32 |             print("{:>10}|".format(gender), end='')
33 |             count_one_class = 0
34 |             for split in splits:
35 |                 pose_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'pose')
36 |                 if split == 'train':
37 |                     ss_path = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'pivots.txt')
38 |                 else:
39 |                     ss_path = osp.join(ROOT,
'{}_{}'.format(garment_class, gender), 'test.txt') 40 | with open(ss_path) as f: 41 | all_ss = f.read().strip().splitlines() 42 | train_num = 0 43 | test_num = 0 44 | for ss in all_ss: 45 | pose_ss_dir = osp.join(pose_dir, ss) 46 | if not osp.exists(pose_ss_dir): 47 | continue 48 | unpose_names = [k for k in os.listdir(pose_ss_dir) if k.startswith('unposed') and k.endswith('.npy')] 49 | for unpose_name in unpose_names: 50 | seq_str = unpose_name.replace('unposed_', '').replace('.npy', '') 51 | pose_path = osp.join(pose_ss_dir, 'poses_{}.npz'.format(seq_str)) 52 | 53 | pose_idx = np.load(pose_path)['pose_order'] 54 | train_num += np.sum(np.in1d(pose_idx, train_idx)) 55 | test_num += np.sum(np.in1d(pose_idx, test_idx)) 56 | print("{:>10}|".format(train_num), end='') 57 | print("{:>10}|".format(test_num), end='') 58 | count_one_class += train_num 59 | count_one_class += test_num 60 | if split == 'train': 61 | count_split[0] += train_num 62 | count_split[1] += test_num 63 | else: 64 | count_split[2] += train_num 65 | count_split[3] += test_num 66 | print("{:>10}|".format(count_one_class)) 67 | print("|{:>21}|".format('total'), end='') 68 | for n in count_split: 69 | print("{:>10}|".format(n), end='') 70 | print("{:>10}|".format(np.sum(count_split))) 71 | print("—" * 78) 72 | 73 | 74 | -------------------------------------------------------------------------------- /global_var.py: -------------------------------------------------------------------------------- 1 | import os 2 | os.environ['KMP_DUPLICATE_LIB_OK'] = "TRUE" 3 | 4 | PROJ_ROOT = os.path.dirname(os.path.abspath(__file__)) 5 | ROOT = r'D:\data\v3' # PLEASE replace this by your own data directory! 6 | SMPL_PATH_MALE = os.path.join(ROOT, 'smpl', 'basicmodel_m_lbs_10_207_0_v1.0.0.pkl') 7 | SMPL_PATH_FEMALE = os.path.join(ROOT, 'smpl', 'basicModel_f_lbs_10_207_0_v1.0.0.pkl') 8 | -------------------------------------------------------------------------------- /pivots/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/pivots/__init__.py -------------------------------------------------------------------------------- /pivots/find_pivots.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | from tqdm import tqdm 5 | import torch 6 | import torch.nn as nn 7 | from global_var import ROOT 8 | 9 | 10 | def calc_dist(v1, v2): 11 | if isinstance(v1, np.ndarray): 12 | return np.mean(np.sqrt(np.sum(np.square(v1 - v2), -1))) 13 | elif isinstance(v1, torch.Tensor): 14 | return torch.mean(torch.sqrt(torch.sum(torch.pow(v1 - v2, 2), -1))) 15 | 16 | 17 | class Comb(nn.Module): 18 | def __init__(self, nbasis): 19 | super(Comb, self).__init__() 20 | self.nbasis = nbasis 21 | self.softmax = nn.Softmax() 22 | self.w = nn.Parameter(torch.zeros(nbasis), requires_grad=True) 23 | 24 | def forward(self, x): 25 | # x: (N, v, 3) 26 | norm_w = self.softmax(self.w) 27 | return torch.einsum('n,nvt->vt', norm_w, x) 28 | 29 | def reset(self): 30 | self.w.data *= 0 31 | 32 | @property 33 | def norm_w(self): 34 | return self.softmax(self.w.data.detach()) 35 | 36 | 37 | class WOptimizer(object): 38 | """ 39 | Use this class to optimize the coefficients of a linear combination 40 | """ 41 | def __init__(self, basis, lr=1e-1, max_iter=100): 42 | if isinstance(basis, list) or isinstance(basis, tuple): 43 | basis = torch.stack(basis, 0) 44 
| self.basis = basis 45 | self.nbasis = basis.shape[0] 46 | self.comb = Comb(self.nbasis) 47 | self.lr = lr 48 | self.max_iter = max_iter 49 | self.criterons = torch.nn.L1Loss() 50 | 51 | def __call__(self, x): 52 | self.comb.reset() 53 | x = x.detach() 54 | optimizer = torch.optim.Adam(self.comb.parameters(), self.lr) 55 | dist = np.inf 56 | comb_result = None 57 | for i in range(self.max_iter): 58 | optimizer.zero_grad() 59 | comb_result = self.comb(self.basis) 60 | loss = self.criterons(comb_result, x) 61 | loss.backward() 62 | optimizer.step() 63 | d = calc_dist(comb_result, x).detach().cpu().item() 64 | if d < dist: 65 | dist = d 66 | return comb_result.detach(), self.comb.norm_w, dist 67 | 68 | def cuda(self): 69 | self.comb.cuda() 70 | 71 | class Runner(object): 72 | def __init__(self, garment_class, gender, num_pivots=20): 73 | self.garment_class = garment_class 74 | self.gender = gender 75 | self.K = num_pivots 76 | self.lr = 1e-1 77 | self.max_iter = 100 78 | self.criterons = torch.nn.L1Loss() 79 | 80 | self.data_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender)) 81 | self.ss_dir = osp.join(self.data_dir, 'style_shape') 82 | 83 | self.verts_dict = {} 84 | with open(osp.join(self.data_dir, 'avail.txt')) as f: 85 | all_ss = f.read().strip().splitlines() 86 | for ss in all_ss: 87 | beta_str, gamma_str = ss.split('_') 88 | d = np.load(osp.join(self.ss_dir, 'beta{}_gamma{}.npy'.format(beta_str, gamma_str))) 89 | self.verts_dict["{}_{}".format(beta_str, gamma_str)] = torch.from_numpy(d.astype(np.float32)) 90 | 91 | self.current_basis = [] 92 | 93 | def iter(self): 94 | nbasis = len(self.current_basis) 95 | basis = [] 96 | for bname in self.current_basis: 97 | basis.append(self.verts_dict[bname]) 98 | basis = torch.stack(basis, 0) 99 | woptim = WOptimizer(basis) 100 | 101 | names, dists = [], [] 102 | for name, verts in tqdm(self.verts_dict.items()): 103 | comb_results, _, dist = woptim(verts) 104 | # print("{} dist: {:.4f} mm".format(name, dist*1000)) 105 | names.append(name) 106 | dists.append(dist) 107 | dists = np.array(dists) 108 | mean_dist = np.mean(dists) 109 | max_i = np.argmax(dists) 110 | print("basis num: {}".format(nbasis)) 111 | print("max dist: {} mm".format(dists[max_i] * 1000.)) 112 | print("mean dist: {} mm".format(mean_dist * 1000.)) 113 | self.current_basis.append(names[int(max_i)]) 114 | 115 | def baseline(self): 116 | import random 117 | all_names = list(self.verts_dict.keys()) 118 | random.shuffle(all_names) 119 | basis = [] 120 | for name in all_names[:self.K]: 121 | basis.append(self.verts_dict[name]) 122 | basis = torch.stack(basis, 0) 123 | nbasis = basis.shape[0] 124 | woptim = WOptimizer(basis) 125 | 126 | names, dists = [], [] 127 | for name, verts in tqdm(self.verts_dict.items()): 128 | comb_results, _, dist = woptim(verts) 129 | # print("{} dist: {:.4f} mm".format(name, dist*1000)) 130 | names.append(name) 131 | dists.append(dist) 132 | dists = np.array(dists) 133 | mean_dist = np.mean(dists) 134 | max_i = np.argmax(dists) 135 | print("Baseline---basis num: {}".format(nbasis)) 136 | print("max dist: {} mm".format(dists[max_i] * 1000.)) 137 | print("mean dist: {} mm".format(mean_dist * 1000.)) 138 | 139 | def baseline_kmeans(self, k=None): 140 | from sklearn.cluster import KMeans 141 | kmeans = KMeans(n_clusters=self.K if k is None else k) 142 | all_verts = [] 143 | all_names = [] 144 | for name, verts in self.verts_dict.items(): 145 | all_verts.append(verts.numpy()) 146 | all_names.append(name) 147 | all_verts = np.stack(all_verts, 
0).reshape(len(all_verts), -1) 148 | kmeans.fit(all_verts) 149 | centers = kmeans.cluster_centers_ 150 | basis = [] 151 | basis_names = [] 152 | for i in range(len(centers)): 153 | center = centers[i: i+1] 154 | dist = np.mean(np.square(all_verts - center), -1) 155 | min_idx = np.argmin(dist) 156 | one_basis = all_verts[min_idx].reshape([-1, 3]) 157 | basis.append(one_basis) 158 | basis_names.append(all_names[min_idx]) 159 | basis = torch.from_numpy(np.array(basis)) 160 | 161 | woptim = WOptimizer(basis) 162 | 163 | names, dists = [], [] 164 | for name, verts in tqdm(self.verts_dict.items()): 165 | comb_results, _, dist = woptim(verts) 166 | # print("{} dist: {:.4f} mm".format(name, dist*1000)) 167 | names.append(name) 168 | dists.append(dist) 169 | dists = np.array(dists) 170 | mean_dist = np.mean(dists) 171 | max_i = np.argmax(dists) 172 | print("KMeans---basis num: {}".format(k)) 173 | print("max dist: {} mm".format(dists[max_i] * 1000.)) 174 | print("mean dist: {} mm".format(mean_dist * 1000.)) 175 | return basis_names 176 | 177 | 178 | def save_basis(self): 179 | with open(os.path.join(self.data_dir, 'pivots.txt'), 'w') as f: 180 | for bname in self.current_basis: 181 | f.write(bname+'\n') 182 | 183 | 184 | 185 | if __name__ == '__main__': 186 | garment_class = 'skirt' 187 | gender = 'female' 188 | num_pivots = 20 189 | num_kmeans = 10 190 | runner = Runner(garment_class, gender, num_pivots) 191 | # test kmeans 192 | # runner.baseline_kmeans() 193 | 194 | # kmeans + greedy 195 | runner.current_basis.extend(runner.baseline_kmeans(num_kmeans)) 196 | left_k = runner.K - len(runner.current_basis) 197 | for _ in range(left_k): 198 | runner.iter() 199 | runner.save_basis() 200 | runner.iter() 201 | -------------------------------------------------------------------------------- /pivots/gen_test.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | from global_var import ROOT 4 | 5 | 6 | def read_ss(path): 7 | with open(path) as f: 8 | c = f.read().strip().splitlines() 9 | return c 10 | 11 | if __name__ == '__main__': 12 | garment_class = 'skirt' 13 | gender = 'female' 14 | test_num = 20 15 | 16 | data_dir = os.path.join(ROOT, f'{garment_class}_{gender}') 17 | pivots = read_ss(os.path.join(data_dir, 'pivots.txt')) 18 | all_ss = read_ss(os.path.join(data_dir, 'avail.txt')) 19 | other_ss = [] 20 | for ss in all_ss: 21 | if ss not in pivots: 22 | other_ss.append(ss) 23 | ind = np.arange(len(other_ss)) 24 | np.random.shuffle(ind) 25 | ind = ind[:test_num] 26 | other_ss = np.array(other_ss) 27 | test_ss = other_ss[ind] 28 | with open(os.path.join(data_dir, 'test.txt'), 'w') as f: 29 | for ss in test_ss: 30 | f.write(ss) 31 | f.write('\n') -------------------------------------------------------------------------------- /simulation_pose/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/simulation_pose/__init__.py -------------------------------------------------------------------------------- /simulation_pose/gen_motion.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | 5 | from global_var import ROOT 6 | from utils.rotation import get_Apose, interpolate_pose 7 | from smpl_torch import SMPLNP_Lres 8 | from utils.ios import save_pc2, read_pc2 9 | from utils.part_body import 
part_body_faces
10 | 
11 | 
12 | if __name__ == '__main__':
13 |     garment_class = 'skirt'
14 |     gender = 'female'
15 | 
16 |     TEST = True
17 |     STABILITY_FRAMES = 2
18 |     if TEST:
19 |         batch_num = 1
20 |     else:
21 |         batch_num = 18
22 |     # vertical offset: body verts are shifted up by -lowest for simulation (undone in post_proc.py)
23 |     lowest = -2
24 | 
25 |     smpl = SMPLNP_Lres(gender=gender)
26 |     apose = get_Apose()
27 |     num_betas = 10 if gender == 'neutral' else 300
28 |     vcanonical = smpl(np.zeros([num_betas]), apose)
29 | 
30 |     root_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender))
31 |     ss_dir = osp.join(root_dir, 'style_shape')
32 | 
33 |     # read pivots
34 |     if TEST:
35 |         pivots_path = osp.join(root_dir, 'test.txt')
36 |     else:
37 |         pivots_path = osp.join(root_dir, 'pivots.txt')
38 |     with open(pivots_path) as f:
39 |         pivots = f.read().strip().splitlines()
40 |     pivots = [k.split('_') for k in pivots]
41 | 
42 |     # read betas and gammas
43 |     betas = np.load(osp.join(root_dir, 'shape', 'betas.npy'))
44 |     gammas = np.load(osp.join(root_dir, 'style', 'gammas.npy'))
45 | 
46 | 
47 |     # pivots = [['000', '000'], ['001', '000']]
48 |     for beta_str, gamma_str in pivots:
49 |         beta_gamma = "{}_{}".format(beta_str, gamma_str)
50 |         save_dir = os.path.join(root_dir, 'pose', beta_gamma)
51 |         if not os.path.exists(save_dir):
52 |             os.makedirs(save_dir)
53 |         beta = betas[int(beta_str)]
54 |         gamma = gammas[int(gamma_str)]
55 |         vbeta = smpl(beta, apose)
56 | 
57 |         res_dir = os.path.join(save_dir, 'res')
58 |         if not os.path.isdir(res_dir):
59 |             os.makedirs(res_dir)
60 |             os.chmod(res_dir, 0o774)
61 | 
62 |         # shape transition, read from style_shape
63 |         if garment_class in ['pant', 'skirt', 'short-pant']:
64 |             transition_path = osp.join(ss_dir, f'motion_beta{beta_str}_gamma{gamma_str}.pc2')
65 |         else:
66 |             transition_path = osp.join(ss_dir, f'motion_{beta_str}.pc2')
67 |         transition_verts = read_pc2(transition_path)
68 | 
69 |         for bi in range(batch_num):
70 |             pose_label_path = os.path.join(save_dir, 'poses_{:03d}.npz'.format(bi))
71 |             if not osp.exists(pose_label_path):
72 |                 print("{} doesn't exist. Skip it".format(pose_label_path))
73 |                 continue
74 |             dst_path = os.path.join(save_dir, 'motion_{:03d}.pc2'.format(bi))
75 |             if osp.exists(dst_path):
76 |                 print("{} already exists.
Skip it".format(dst_path)) 77 | continue 78 | pose_label = np.load(pose_label_path) 79 | 80 | # poses 81 | thetas = pose_label['thetas'] 82 | verts = smpl(np.tile(np.expand_dims(beta, 0), [thetas.shape[0], 1]), thetas, batch=True) 83 | verts = np.tile(np.expand_dims(verts, 1), [1, STABILITY_FRAMES + 1, 1, 1]) 84 | verts = np.reshape(verts, [-1, 6890, 3]) 85 | verts[:, :, 1] -= lowest 86 | 87 | verts = np.concatenate([transition_verts, verts], 0) 88 | save_pc2(verts, dst_path) 89 | -------------------------------------------------------------------------------- /simulation_pose/gen_pose.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | from tqdm import tqdm 5 | 6 | from global_var import ROOT 7 | from utils.rotation import get_Apose, interpolate_pose 8 | from smpl_torch import SMPLNP_Lres 9 | import shutil 10 | import time 11 | from simulation_pose.gen_pose_utils import \ 12 | Dataset, is_intersec, find_index, calc_dis 13 | from utils.part_body import part_body_faces 14 | 15 | 16 | def gen_pose(pose_num, betas, nfold=1, div_thresh=0.1, DEBUG=False, gender='neutral', 17 | intersection=True, garment_class=None): 18 | """ 19 | if beta is not None, we will discard frames with self-intersection 20 | """ 21 | t0 = time.time() 22 | apose = get_Apose() 23 | smpl = SMPLNP_Lres(gender=gender) 24 | if garment_class is None: 25 | faces = smpl.base.faces 26 | else: 27 | faces = part_body_faces(garment_class) 28 | 29 | dataset = Dataset() 30 | data_num = len(dataset) 31 | poses = np.copy(dataset.poses) 32 | 33 | beta_num = len(betas) 34 | if intersection: 35 | all_verts = [] 36 | for beta in betas: 37 | verts = smpl(np.tile(np.expand_dims(beta, 0), [poses.shape[0], 1]), poses, batch=True) 38 | all_verts.append(verts) 39 | good_poses = [] 40 | for pi in tqdm(range(len(poses))): 41 | p = poses[pi] 42 | no_intersec = True 43 | for beta_idx in range(beta_num): 44 | v = all_verts[beta_idx][pi] 45 | if is_intersec(v, faces): 46 | no_intersec = False 47 | break 48 | if no_intersec: 49 | good_poses.append(p) 50 | poses = np.array(good_poses) 51 | data_num = poses.shape[0] 52 | 53 | if 0 < pose_num < data_num: 54 | data_num = pose_num*nfold 55 | random_idx = np.arange(data_num) 56 | np.random.shuffle(random_idx) 57 | random_idx = random_idx[:data_num] 58 | poses = poses[random_idx] 59 | 60 | all_poses = np.copy(poses) 61 | all_data_num = data_num 62 | npose_pfold = int(np.ceil(1.*all_data_num / nfold)) 63 | all_thetas = [] 64 | all_pose_orders = [] 65 | for fold_i in range(nfold): 66 | poses = np.copy(all_poses[fold_i*npose_pfold: (fold_i+1)*npose_pfold]) 67 | data_num = len(poses) 68 | verts, joints = smpl(np.tile(np.expand_dims(betas[0]*0, 0), [data_num, 1]), poses, batch=True, return_J=True) 69 | 70 | _, apose_joints = smpl(betas[0]*0, apose, return_J=True) 71 | 72 | pose_num = joints.shape[0] 73 | chosen_mask = np.zeros([joints.shape[0]], dtype=np.bool) 74 | dist = calc_dis(joints, joints, garment_class) 75 | 76 | a_dist = calc_dis(np.expand_dims(apose_joints, 0), joints, garment_class)[0] 77 | closest = np.argmin(a_dist) 78 | chosen_mask[closest] = True 79 | pose_order = [-2, closest] 80 | dist_path = [a_dist[closest]] 81 | thetas = [apose, poses[closest]] 82 | 83 | print(pose_num) 84 | for c in tqdm(range(pose_num - 1)): 85 | last_idx = pose_order[-1] 86 | cur_dist = dist[last_idx] 87 | cur_dist[np.where(chosen_mask)] = np.inf 88 | closest = np.argmin(cur_dist) 89 | d = cur_dist[closest] 90 | finished = 
False 91 | if d > div_thresh: 92 | if not intersection: 93 | div_num = int(d / div_thresh) 94 | inter_poses = interpolate_pose(poses[last_idx], 95 | poses[closest], div_num) 96 | inter_poses = inter_poses[1: -1] 97 | else: 98 | cur_dist_copy = np.copy(cur_dist) 99 | while True: 100 | closest = np.argmin(cur_dist_copy) 101 | d = cur_dist_copy[closest] 102 | if np.isinf(d): 103 | finished = True 104 | break 105 | cur_dist_copy[closest] = np.inf 106 | # interpolate and see if there's self-intersection 107 | div_num = int(d / div_thresh) 108 | inter_poses = interpolate_pose(poses[last_idx], 109 | poses[closest], div_num) 110 | inter_poses = inter_poses[1: -1] 111 | 112 | all_verts = [] 113 | for beta in betas: 114 | vs = smpl(np.tile(np.expand_dims(beta, 0), [len(inter_poses), 1]), inter_poses, batch=True) 115 | all_verts.append(vs) 116 | intersec = False 117 | for vs in all_verts: 118 | for v in vs: 119 | if is_intersec(v, faces): 120 | intersec = True 121 | break 122 | if intersec: 123 | break 124 | # if there's self-intersection, we should find a second closest pose. Otherwise, break the loop 125 | if not intersec: 126 | break 127 | # print(c) 128 | if not finished: 129 | for dd in range(div_num): 130 | pose_order.append(-1) 131 | dist_path.append(-1) 132 | thetas.append(inter_poses[dd]) 133 | if not finished: 134 | chosen_mask[closest] = True 135 | pose_order.append(closest) 136 | dist_path.append(cur_dist[closest]) 137 | thetas.append(poses[closest]) 138 | else: 139 | break 140 | pose_order = np.array(pose_order) 141 | thetas = np.array(thetas) 142 | dur = time.time() - t0 143 | print("theta num: {}".format(thetas.shape[0])) 144 | print("time: {} s".format(dur)) 145 | 146 | glob_pose_order = find_index(dataset.poses, thetas) 147 | all_thetas.append(thetas) 148 | all_pose_orders.append(glob_pose_order) 149 | return all_thetas, all_pose_orders 150 | 151 | 152 | if __name__ == '__main__': 153 | garment_class = 'skirt' 154 | genders = ['female'] 155 | TEST = True 156 | for gender in genders: 157 | 158 | GEN_POSE = True 159 | POSE_NUM = 50 160 | STABILITY_FRAMES = 2 161 | if TEST: 162 | batch_num = 1 163 | else: 164 | batch_num = 18 165 | # 0.1 for upper clothes. 
0.05 for lower clothes
166 |         DIV_THRESH = 0.1
167 |         lowest = -2
168 | 
169 |         smpl = SMPLNP_Lres(gender=gender)
170 |         A_theta = get_Apose()
171 | 
172 |         num_betas = 10 if gender == 'neutral' else 300
173 |         root_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender))
174 | 
175 |         # read betas and gammas
176 |         betas = np.load(osp.join(root_dir, 'shape', 'betas.npy'))
177 |         gammas = np.load(osp.join(root_dir, 'style', 'gammas.npy'))
178 | 
179 |         for beta_i, beta in enumerate(betas):
180 | 
181 |             save_dir = os.path.join(root_dir, 'pose', 'beta_{:03d}'.format(beta_i))
182 |             if not os.path.exists(save_dir):
183 |                 os.makedirs(save_dir)
184 |             pose_label_paths = [os.path.join(save_dir, 'poses_{:03d}.npz'.format(k)) for k in range(batch_num)]
185 |             exist = True
186 |             for pose_label_path in pose_label_paths:
187 |                 if not os.path.exists(pose_label_path):
188 |                     exist = False
189 |             if TEST:
190 |                 assert exist, pose_label_paths
191 |             if not exist:
192 |                 all_thetas, all_pose_order = gen_pose(POSE_NUM, beta[None], nfold=batch_num, div_thresh=DIV_THRESH,
193 |                                                       gender=gender, garment_class=garment_class)
194 |                 for pose_label_path, thetas, pose_order in zip(pose_label_paths, all_thetas, all_pose_order):
195 |                     np.savez(pose_label_path, pose_order=pose_order, thetas=thetas.astype(np.float32))
196 | 
197 |             # read pivots
198 |             if TEST:
199 |                 pivots_path = osp.join(root_dir, 'test.txt')
200 |             else:
201 |                 pivots_path = osp.join(root_dir, 'pivots.txt')
202 |             if osp.exists(pivots_path):
203 |                 with open(pivots_path) as f:
204 |                     pivots = f.read().strip().splitlines()
205 |                 pivots = [k.split('_') for k in pivots]
206 |             else:
207 |                 continue
208 | 
209 |             for beta_str, gamma_str in pivots:
210 |                 if int(beta_str) != beta_i:
211 |                     continue
212 |                 pivots_pose_dir = os.path.join(root_dir, 'pose', '{}_{}'.format(beta_str, gamma_str))
213 |                 if not os.path.exists(pivots_pose_dir):
214 |                     os.makedirs(pivots_pose_dir)
215 |                 for bi, pose_label_path in enumerate(pose_label_paths):
216 |                     pivots_pose_path = os.path.join(pivots_pose_dir, 'poses_{:03d}.npz'.format(bi))
217 |                     if not os.path.exists(pivots_pose_path):
218 |                         shutil.copy(pose_label_path, pivots_pose_path)
219 | 
220 | 
--------------------------------------------------------------------------------
/simulation_pose/gen_pose_utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 | import cv2
4 | import time
5 | import pickle
6 | 
7 | from utils.smpl import SMPLNP
8 | from global_var import ROOT
9 | from utils.rotation import normalize_y_rotation, flip_theta
10 | from utils.rotation import get_Apose, interpolate_pose
11 | # from utils.renderer import Renderer
12 | # from utils.geometry import get_selfintersections
13 | try:
14 |     import pymesh
15 |     from pymesh.selfintersection import detect_self_intersection
16 |     geodis = np.load(os.path.join(ROOT, 'smpl', 'smpl_geodesic.npy'))
17 | except:
18 |     print("Warning: failed to import pymesh or load smpl_geodesic.npy; self-intersection checks are unavailable")
19 | 
20 | 
21 | PAIR_THRESH1 = 160
22 | PAIR_THRESH2 = 140
23 | 
24 | GEO_THRESH = 0.25
25 | 
26 | EPS = 1e-6
27 | 
28 | def find_index(orig_poses, target_poses):
29 |     orig_poses_ext = np.expand_dims(orig_poses, 1)
30 |     target_poses_ext = np.expand_dims(target_poses, 0)
31 |     # (1782, 3003)
32 |     diff = np.sum(np.abs(orig_poses_ext - target_poses_ext), axis=2)
33 |     pose_order = np.argmin(diff, axis=0)
34 |     pose_order[np.where(np.min(diff, axis=0) > EPS)] = -1
35 |     pose_order[0] = -2
36 |     return pose_order
37 | 
38 | 
39 | def get_selfintersections(v, f):
40 |     mspy = pymesh.form_mesh(v, f)
41 |     face_pairs = 
detect_self_intersection(mspy) 42 | mspy.add_attribute('face_area') 43 | face_areas = mspy.get_attribute('face_area') 44 | intersecting_area = face_areas[np.unique(face_pairs.ravel())].sum() 45 | return face_pairs, intersecting_area 46 | 47 | 48 | def is_intersec(v, f, thresh=GEO_THRESH): 49 | pair, _ = get_selfintersections(v, f) 50 | max_dis = 0 51 | for p in pair: 52 | i1 = f[p[0], 0] 53 | i2 = f[p[1], 0] 54 | # print(i1) 55 | # print(i2) 56 | g = geodis[i1, i2] 57 | max_dis = np.maximum(g, max_dis) 58 | return max_dis > thresh 59 | 60 | class Dataset(object): 61 | def __init__(self, split=None): 62 | """ 63 | pose number: 1782 64 | if beta is not None, we will discard frames with self-intersection 65 | """ 66 | pose_npy_path = os.path.join(ROOT, 'pose/pose_norm.npy') 67 | raw_pose_path = os.path.join(ROOT, 'pose/pose_raw.npy') 68 | if os.path.exists(pose_npy_path): 69 | self.poses = np.load(pose_npy_path) 70 | else: 71 | pose_dir = os.path.join(ROOT, 'pose', 'SMPL') 72 | with open(os.path.join(pose_dir, 'male.pkl')) as f: 73 | mpose = pickle.load(f) 74 | raw_poses = np.array(mpose) 75 | 76 | flip_poses = flip_theta(raw_poses, batch=True) 77 | raw_poses = np.stack([raw_poses, flip_poses], 0).reshape((-1, 72)) 78 | raw_poses_copy = np.copy(raw_poses) 79 | for i, raw_pose in enumerate(raw_poses_copy): 80 | raw_pose[:3] = normalize_y_rotation(raw_pose[:3]) 81 | norm_poses = raw_poses_copy 82 | np.save(raw_pose_path, raw_poses) 83 | np.save(pose_npy_path, norm_poses) 84 | self.poses = norm_poses 85 | self.poses_nosplit = np.array(self.poses) 86 | if split is not None: 87 | assert split in ['train', 'test'] 88 | split_idx = np.load(os.path.join(ROOT, 'pose/split.npz')) 89 | self.poses = self.poses[split_idx[split]] 90 | 91 | def get_item(self, idx): 92 | return self.poses[idx] 93 | 94 | def __len__(self): 95 | return self.poses.shape[0] 96 | 97 | def debug(self): 98 | smpl = SMPLNP() 99 | renderer = Renderer(img_size=512) 100 | for p in self.poses: 101 | v, _, _ = smpl(np.zeros(10), p) 102 | img = renderer(v, smpl.base.faces) 103 | cv2.imshow('img', img) 104 | cv2.waitKey() 105 | 106 | 107 | def calc_dis(j1, j2, garment_class=None): 108 | valid_joints = { 109 | # "smooth_TShirtNoCoat": [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12], 110 | # "smooth_ShirtNoCoat": [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12], 111 | # "smooth_Pants": [0, 1, 2, 3, 4, 5], 112 | "t-shirt": [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 16, 17, 18, 19], 113 | "shirt": [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23], 114 | "pant": [0, 1, 2, 3, 4, 5, 7, 8, 10, 11] 115 | } 116 | j1s = np.expand_dims(j1, 1) # (bs, 1, jnum, 3) 117 | j2s = np.expand_dims(j2, 0) # (1, bs, jnum, 3) 118 | if garment_class is not None: 119 | valid_j = valid_joints[garment_class] 120 | j1s = j1s[:, :, valid_j] 121 | j2s = j2s[:, :, valid_j] 122 | dist = np.sqrt(np.sum(np.square(j1s - j2s), -1)) 123 | dist = np.max(dist, -1) 124 | # (j1_bs, j2_bs) 125 | return dist 126 | 127 | 128 | def gen_pose(pose_num, nfold=1, div_thresh=0.1, DEBUG=False, beta=None, gender='neutral', 129 | intersection=True, garment_class=None): 130 | """ 131 | if beta is not None, we will discard frames with self-intersection 132 | """ 133 | t0 = time.time() 134 | apose = get_Apose() 135 | smpl = SMPLNP(gender=gender) 136 | if beta is None: 137 | beta = np.zeros([10]) 138 | 139 | dataset = Dataset() 140 | data_num = len(dataset) 141 | poses = np.copy(dataset.poses) 142 | 143 | if intersection: 144 | verts, _, _ = smpl(np.tile(np.expand_dims(beta, 0), [poses.shape[0], 1]), poses, 
batch=True) 145 | good_poses = [] 146 | for v, p in zip(verts, poses): 147 | if not is_intersec(v, smpl.base.faces): 148 | good_poses.append(p) 149 | poses = np.array(good_poses) 150 | data_num = poses.shape[0] 151 | 152 | if 0 < pose_num < data_num: 153 | data_num = pose_num*nfold 154 | random_idx = np.arange(data_num) 155 | np.random.shuffle(random_idx) 156 | random_idx = random_idx[:data_num] 157 | poses = poses[random_idx] 158 | 159 | all_poses = np.copy(poses) 160 | all_data_num = data_num 161 | npose_pfold = int(np.ceil(1.*all_data_num / nfold)) 162 | all_thetas = [] 163 | all_pose_orders = [] 164 | for fold_i in range(nfold): 165 | poses = np.copy(all_poses[fold_i*npose_pfold: (fold_i+1)*npose_pfold]) 166 | data_num = len(poses) 167 | verts, joints, _ = smpl(np.tile(np.expand_dims(beta, 0), [data_num, 1]), poses, batch=True) 168 | 169 | _, apose_joints, _ = smpl(beta, apose) 170 | 171 | pose_num = joints.shape[0] 172 | chosen_mask = np.zeros([joints.shape[0]], dtype=np.bool) 173 | dist = calc_dis(joints, joints, garment_class) 174 | 175 | a_dist = calc_dis(np.expand_dims(apose_joints, 0), joints, garment_class)[0] 176 | closest = np.argmin(a_dist) 177 | chosen_mask[closest] = True 178 | pose_order = [-2, closest] 179 | dist_path = [a_dist[closest]] 180 | thetas = [apose, poses[closest]] 181 | 182 | for c in range(pose_num - 1): 183 | last_idx = pose_order[-1] 184 | cur_dist = dist[last_idx] 185 | cur_dist[np.where(chosen_mask)] = np.inf 186 | closest = np.argmin(cur_dist) 187 | d = cur_dist[closest] 188 | finished = False 189 | if d > div_thresh: 190 | if not intersection: 191 | div_num = int(d / div_thresh) 192 | inter_poses = interpolate_pose(poses[last_idx], 193 | poses[closest], div_num) 194 | inter_poses = inter_poses[1: -1] 195 | else: 196 | cur_dist_copy = np.copy(cur_dist) 197 | while True: 198 | closest = np.argmin(cur_dist_copy) 199 | d = cur_dist_copy[closest] 200 | if np.isinf(d): 201 | finished = True 202 | break 203 | cur_dist_copy[closest] = np.inf 204 | # interpolate and see if there's self-intersection 205 | div_num = int(d / div_thresh) 206 | inter_poses = interpolate_pose(poses[last_idx], 207 | poses[closest], div_num) 208 | inter_poses = inter_poses[1: -1] 209 | vs, _, _ = smpl(np.tile(np.expand_dims(beta, 0), [len(inter_poses), 1]), inter_poses, batch=True) 210 | intersec = False 211 | for v in vs: 212 | if is_intersec(v, smpl.base.faces): 213 | intersec = True 214 | break 215 | # if there's self-intersection, we should find a second closest pose. 
Otherwise, break the loop 216 | if not intersec: 217 | break 218 | # print(c) 219 | if not finished: 220 | for dd in range(div_num): 221 | pose_order.append(-1) 222 | dist_path.append(-1) 223 | thetas.append(inter_poses[dd]) 224 | if not finished: 225 | chosen_mask[closest] = True 226 | pose_order.append(closest) 227 | dist_path.append(cur_dist[closest]) 228 | thetas.append(poses[closest]) 229 | else: 230 | break 231 | pose_order = np.array(pose_order) 232 | thetas = np.array(thetas) 233 | dur = time.time() - t0 234 | print("theta num: {}".format(thetas.shape[0])) 235 | print("time: {} s".format(dur)) 236 | if DEBUG: 237 | # renderer = Renderer(img_size=448) 238 | for i, po in enumerate(pose_order): 239 | if i < 2067: 240 | continue 241 | if po >= 0: 242 | v = verts[po] 243 | else: 244 | v, _, _ = smpl(np.zeros([10], np.float32), thetas[i]) 245 | 246 | # img = renderer(v, smpl.base.faces) 247 | pair, _ = get_selfintersections(v, smpl.base.faces) 248 | # cv2.putText(img, "{} {} {}".format(i, pair.shape[0], po >= 0), (10, 10), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 2) 249 | print("{} {} {}".format(i, pair.shape[0], po >= 0)) 250 | # cv2.imshow('img', img) 251 | # cv2.waitKey() 252 | 253 | glob_pose_order = find_index(dataset.poses, thetas) 254 | all_thetas.append(thetas) 255 | all_pose_orders.append(glob_pose_order) 256 | all_thetas = np.concatenate(all_thetas, 0) 257 | all_pose_orders = np.concatenate(all_pose_orders, 0) 258 | return all_thetas, all_pose_orders 259 | 260 | 261 | def gen_pose_two_shapes(pose_num, beta1, beta2, nfold=1, div_thresh=0.1, DEBUG=False, gender='neutral', 262 | intersection=True, garment_class=None, pose_split=None): 263 | """ 264 | if beta is not None, we will discard frames with self-intersection 265 | """ 266 | t0 = time.time() 267 | apose = get_Apose() 268 | smpl = SMPLNP(gender=gender) 269 | 270 | dataset = Dataset(split=pose_split) 271 | data_num = len(dataset) 272 | poses = np.copy(dataset.poses) 273 | 274 | if intersection: 275 | verts1, _, _ = smpl(np.tile(np.expand_dims(beta1, 0), [poses.shape[0], 1]), poses, batch=True) 276 | verts2, _, _ = smpl(np.tile(np.expand_dims(beta2, 0), [poses.shape[0], 1]), poses, batch=True) 277 | good_poses = [] 278 | for v1, v2, p in zip(verts1, verts2, poses): 279 | if not is_intersec(v1, smpl.base.faces) and not is_intersec(v2, smpl.base.faces): 280 | good_poses.append(p) 281 | poses = np.array(good_poses) 282 | data_num = poses.shape[0] 283 | 284 | if 0 < pose_num < data_num: 285 | random_idx = np.arange(data_num) 286 | np.random.shuffle(random_idx) 287 | data_num = pose_num * nfold 288 | random_idx = random_idx[:data_num] 289 | poses = poses[random_idx] 290 | 291 | all_poses = np.copy(poses) 292 | all_data_num = data_num 293 | npose_pfold = int(np.ceil(1.*all_data_num / nfold)) 294 | all_thetas = [] 295 | all_pose_orders = [] 296 | for fold_i in range(nfold): 297 | poses = np.copy(all_poses[fold_i*npose_pfold: (fold_i+1)*npose_pfold]) 298 | data_num = len(poses) 299 | verts, joints, _ = smpl(np.tile(np.expand_dims(beta1, 0), [data_num, 1]), poses, batch=True) 300 | 301 | _, apose_joints, _ = smpl(beta1, apose) 302 | 303 | pose_num = joints.shape[0] 304 | chosen_mask = np.zeros([joints.shape[0]], dtype=np.bool) 305 | dist = calc_dis(joints, joints, garment_class) 306 | 307 | try: 308 | a_dist = calc_dis(np.expand_dims(apose_joints, 0), joints, garment_class)[0] 309 | except IndexError as e: 310 | import IPython; IPython.embed() 311 | closest = np.argmin(a_dist) 312 | chosen_mask[closest] = True 313 | pose_order = 
[-2, closest] 314 | dist_path = [a_dist[closest]] 315 | thetas = [apose, poses[closest]] 316 | 317 | for c in range(pose_num - 1): 318 | last_idx = pose_order[-1] 319 | cur_dist = dist[last_idx] 320 | cur_dist[np.where(chosen_mask)] = np.inf 321 | closest = np.argmin(cur_dist) 322 | d = cur_dist[closest] 323 | finished = False 324 | if d > div_thresh: 325 | if not intersection: 326 | div_num = int(d / div_thresh) 327 | inter_poses = interpolate_pose(poses[last_idx], 328 | poses[closest], div_num) 329 | inter_poses = inter_poses[1: -1] 330 | else: 331 | cur_dist_copy = np.copy(cur_dist) 332 | while True: 333 | closest = np.argmin(cur_dist_copy) 334 | d = cur_dist_copy[closest] 335 | if np.isinf(d): 336 | finished = True 337 | break 338 | cur_dist_copy[closest] = np.inf 339 | # interpolate and see if there's self-intersection 340 | div_num = int(d / div_thresh) 341 | inter_poses = interpolate_pose(poses[last_idx], 342 | poses[closest], div_num) 343 | inter_poses = inter_poses[1: -1] 344 | vs1, _, _ = smpl(np.tile(np.expand_dims(beta1, 0), [len(inter_poses), 1]), inter_poses, batch=True) 345 | vs2, _, _ = smpl(np.tile(np.expand_dims(beta2, 0), [len(inter_poses), 1]), inter_poses, batch=True) 346 | intersec = False 347 | for v1, v2 in zip(vs1, vs2): 348 | if is_intersec(v1, smpl.base.faces) or is_intersec(v2, smpl.base.faces): 349 | intersec = True 350 | break 351 | # if there's self-intersection, we should find a second closest pose. Otherwise, break the loop 352 | if not intersec: 353 | break 354 | # print(c) 355 | if not finished: 356 | for dd in range(div_num): 357 | pose_order.append(-1) 358 | dist_path.append(-1) 359 | thetas.append(inter_poses[dd]) 360 | if not finished: 361 | chosen_mask[closest] = True 362 | pose_order.append(closest) 363 | dist_path.append(cur_dist[closest]) 364 | thetas.append(poses[closest]) 365 | else: 366 | break 367 | pose_order = np.array(pose_order) 368 | thetas = np.array(thetas) 369 | dur = time.time() - t0 370 | print("theta num: {}".format(thetas.shape[0])) 371 | print("time: {} s".format(dur)) 372 | if DEBUG: 373 | # renderer = Renderer(img_size=448) 374 | for i, po in enumerate(pose_order): 375 | if i < 2067: 376 | continue 377 | if po >= 0: 378 | v = verts[po] 379 | else: 380 | v, _, _ = smpl(np.zeros([10], np.float32), thetas[i]) 381 | 382 | # img = renderer(v, smpl.base.faces) 383 | pair, _ = get_selfintersections(v, smpl.base.faces) 384 | # cv2.putText(img, "{} {} {}".format(i, pair.shape[0], po >= 0), (10, 10), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 2) 385 | print("{} {} {}".format(i, pair.shape[0], po >= 0)) 386 | # cv2.imshow('img', img) 387 | # cv2.waitKey() 388 | 389 | glob_pose_order = find_index(dataset.poses_nosplit, thetas) 390 | all_thetas.append(thetas) 391 | all_pose_orders.append(glob_pose_order) 392 | all_thetas = np.concatenate(all_thetas, 0) 393 | all_pose_orders = np.concatenate(all_pose_orders, 0) 394 | return all_thetas, all_pose_orders 395 | 396 | 397 | if __name__ == '__main__': 398 | div_thresh = 0.15 399 | DEBUG = False 400 | 401 | # people_name = 'rp_scott_rigged_005_zup_a' 402 | # people_dir = os.path.join(global_var.DATA_DIR, 'neutral_cloth_test', people_name) 403 | # label = np.load(os.path.join(people_dir, 'labels.npz')) 404 | # beta = label['betas'] 405 | 406 | thetas, pose_order = gen_pose(50, nfold=1, div_thresh=div_thresh, DEBUG=DEBUG, intersection=True, garment_class='smooth_TShirtNoCoat') 407 | # if not DEBUG: 408 | # np.savez(os.path.join(global_var.DATA_DIR, 'closest_static_pose.npz'), 409 | # 
pose_order=pose_order, thetas=thetas) -------------------------------------------------------------------------------- /simulation_pose/post_proc.py: -------------------------------------------------------------------------------- 1 | """ 2 | translate and unpose simulation results 3 | """ 4 | import os 5 | import os.path as osp 6 | import torch 7 | import numpy as np 8 | import trimesh 9 | from smpl_torch import SMPLNP 10 | from utils.rotation import get_Apose 11 | from utils.ios import read_pc2 12 | from global_var import ROOT 13 | 14 | 15 | if __name__ == '__main__': 16 | garment_class = 'skirt' 17 | gender = 'female' 18 | lowest = -2 19 | STABILITY_FRAMES = 2 20 | 21 | smpl = SMPLNP(gender) 22 | apose = torch.from_numpy(get_Apose().astype(np.float32)) 23 | data_root = osp.join(ROOT, '{}_{}'.format(garment_class, gender)) 24 | pose_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'pose') 25 | ss_dir = osp.join(data_root, 'style_shape') 26 | shape_dir = osp.join(data_root, 'shape') 27 | 28 | beta_strs = [k.replace('.obj', '') for k in os.listdir(shape_dir) if k.endswith('.obj')] 29 | betas = np.load(osp.join(data_root, 'shape', 'betas.npy')) 30 | 31 | all_ss = [k for k in os.listdir(pose_dir) if len(k) == 7] 32 | for ss in all_ss: 33 | beta_str, gamma_str = ss.split('_') 34 | pose_ss_dir = osp.join(pose_dir, ss) 35 | 36 | if garment_class in ['pant', 'skirt', 'short-pant']: 37 | transition_path = osp.join(ss_dir, f'motion_beta{beta_str}_gamma{gamma_str}.pc2') 38 | else: 39 | transition_path = osp.join(ss_dir, f'motion_{beta_str}.pc2') 40 | transition_verts = read_pc2(transition_path) 41 | transition_num = transition_verts.shape[0] 42 | 43 | result_names = [k for k in os.listdir(pose_ss_dir) if k.startswith('result_') and k.endswith('.pc2')] 44 | for result_name in result_names: 45 | batch_str = result_name.replace('result_', '').replace('.pc2', '') 46 | garment_path = osp.join(pose_ss_dir, result_name) 47 | save_path = osp.join(pose_ss_dir, 'unposed_{}.npy'.format(batch_str)) 48 | if os.path.exists(save_path): 49 | continue 50 | theta_path = osp.join(pose_ss_dir, 'poses_{}.npz'.format(batch_str)) 51 | thetas = np.load(theta_path)['thetas'].astype(np.float32) 52 | garment_vs = read_pc2(garment_path) 53 | if garment_vs is None: 54 | print("{} is broken".format(result_name)) 55 | continue 56 | garment_vs[:, :, 1] += lowest 57 | beta = betas[int(beta_str)] 58 | 59 | all_unposed = [] 60 | for theta_i, theta in enumerate(thetas): 61 | ii = len(transition_verts)+theta_i*(STABILITY_FRAMES+1)+STABILITY_FRAMES 62 | v = garment_vs[ii] 63 | unposed_v = smpl.base.forward_unpose_deformation(torch.from_numpy(theta).unsqueeze(0), torch.from_numpy(beta).unsqueeze(0), 64 | torch.from_numpy(v.astype(np.float32)).unsqueeze(0), garment_class) 65 | unposed_v = unposed_v[0].detach().cpu().numpy() 66 | all_unposed.append(unposed_v) 67 | all_unposed = np.array(all_unposed, dtype=np.float32) 68 | np.save(save_path, all_unposed) 69 | print(save_path) 70 | -------------------------------------------------------------------------------- /simulation_pose/readme.md: -------------------------------------------------------------------------------- 1 | ## usage 2 | 1. gen_pose.py 3 | 2. gen_motion.py 4 | 3. Cloth simulation in Marvelous Designer 5 | ``` 6 | avatar: ROOT/style_shape/up_beta{}.obj 7 | garment: ROOT/style_shape/input_beta{}_gamma{}.obj 8 | motion: ROOT/pose/{beta}_{gamma}/motion_{}.pc2 9 | 10 | export: ROOT/pose/{beta}_{gamma}/result_{}.pc2 11 | ``` 12 | 4. 
post_proc.py -------------------------------------------------------------------------------- /simulation_style/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/simulation_style/__init__.py -------------------------------------------------------------------------------- /simulation_style/choose_avail.py: -------------------------------------------------------------------------------- 1 | import os 2 | from global_var import ROOT 3 | 4 | if __name__ == '__main__': 5 | gender = 'female' 6 | gc = 'skirt' 7 | data_dir = os.path.join(ROOT, '{}_{}'.format(gc, gender)) 8 | bad_dir = os.path.join(data_dir, 'style_shape_vis', 'bad') 9 | ss_dir = os.path.join(data_dir, 'style_shape') 10 | 11 | all_ss = [k.replace('.obj', '') for k in os.listdir(ss_dir) if k.endswith('.obj') and k.startswith('beta')] 12 | all_bad = [k.replace('.jpg', '') for k in os.listdir(bad_dir)] 13 | 14 | with open(os.path.join(data_dir, 'avail.txt'), 'w') as f: 15 | for ss in all_ss: 16 | if ss in all_bad: 17 | continue 18 | ss_item = ss.replace('beta', '').replace('gamma', '') 19 | f.write(ss_item+'\n') 20 | -------------------------------------------------------------------------------- /simulation_style/post_proc.py: -------------------------------------------------------------------------------- 1 | """ 2 | translate and unpose simulation results 3 | """ 4 | import os 5 | import os.path as osp 6 | import torch 7 | import numpy as np 8 | import trimesh 9 | from smpl_torch import SMPLNP 10 | from utils.rotation import get_Apose 11 | from utils.ios import write_obj 12 | from global_var import ROOT 13 | 14 | if __name__ == '__main__': 15 | garment_class = 'skirt' 16 | gender = 'female' 17 | lowest = -2 18 | 19 | import pickle 20 | with open(os.path.join(ROOT, 'garment_class_info.pkl'), 'rb') as f: 21 | class_info = pickle.load(f, encoding='latin-1') 22 | garment_f = class_info[garment_class]['f'] 23 | 24 | smpl = SMPLNP(gender) 25 | apose = torch.from_numpy(get_Apose().astype(np.float32)) 26 | data_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'style_shape') 27 | shape_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'shape') 28 | 29 | betas = np.load(osp.join(shape_dir, 'betas.npy')) 30 | 31 | 32 | with open(osp.join(data_dir, '../avail.txt')) as f: 33 | all_ss = f.read().strip().splitlines() 34 | for ss in all_ss: 35 | beta_str, gamma_str = ss.split('_') 36 | style_shape = f'beta{beta_str}_gamma{gamma_str}' 37 | style_shape_path = osp.join(data_dir, style_shape+'.obj') 38 | style_shape_save_path = osp.join(data_dir, style_shape+'.npy') 39 | beta = betas[int(beta_str)] 40 | 41 | m = trimesh.load(style_shape_path, process=False) 42 | v = m.vertices 43 | v[:, 1] += lowest 44 | 45 | unposed_v = smpl.base.forward_unpose_deformation(apose.unsqueeze(0), torch.from_numpy(beta).unsqueeze(0), 46 | torch.from_numpy(v.astype(np.float32)).unsqueeze(0), garment_class) 47 | unposed_v = unposed_v[0].detach().cpu().numpy() 48 | np.save(style_shape_save_path, unposed_v) 49 | -------------------------------------------------------------------------------- /simulation_style/readme.md: -------------------------------------------------------------------------------- 1 | ## usage 2 | 1. style_shape.py 3 | prepare for simulation 4 | 2. 
Cloth simulation in Marvelous Designer
5 | ```
6 | avatar: ROOT/style_shape/up_beta{}.obj
7 | garment: ROOT/style_shape/input_beta{}_gamma{}.obj
8 | motion: ROOT/style_shape/motion_{}.pc2
9 | 
10 | export: ROOT/style_shape/beta{}_gamma{}.obj
11 | ```
12 | 
13 | 3. vis_simulation.py
14 | visualize simulation results
15 | 4. choose_avail.py
16 | mark bad results (e.g., a shirt that is too big for a small body) by moving their renderings into style_shape_vis/bad/,
17 | then run choose_avail.py to write avail.txt with the remaining style-shape combinations
18 | 5. post_proc.py
--------------------------------------------------------------------------------
/simulation_style/style_shape.py:
--------------------------------------------------------------------------------
1 | """
2 | simulate different style-shape combinations in A-pose
3 | """
4 | import os
5 | import os.path as osp
6 | import trimesh
7 | import numpy as np
8 | from utils.ios import save_pc2
9 | from utils.rotation import interpolate_pose
10 | import pickle
11 | from smpl_torch import SMPLNP_Lres, SMPLNP
12 | from utils.rotation import get_Apose
13 | from sklearn.decomposition import PCA
14 | from utils.part_body import part_body_faces
15 | from utils.diffusion_smoothing import DiffusionSmoothing as DS
16 | 
17 | from global_var import ROOT
18 | 
19 | APOSE = get_Apose()
20 | 
21 | def gamma_transform(gamma, coeff_mean, coeff_range):
22 | return coeff_mean + gamma * coeff_range
23 | 
24 | def retarget(beta, theta, garment_verts, vert_indices, smpl_hres):
25 | smpl_hres.pose[:] = theta
26 | smpl_hres.betas[:] = 0
27 | zero_body = np.array(smpl_hres.r)
28 | 
29 | smpl_hres.betas[:] = beta
30 | reg_body = np.array(smpl_hres.r)
31 | retarget_d = (zero_body - reg_body)[vert_indices]
32 | retarget_verts = garment_verts + retarget_d
33 | return retarget_verts
34 | 
35 | 
36 | def find_perpendicular_foot(x0, x1, x2):
37 | # x0, x1 define a line. 
x2 is an arbitrary point
38 | x10 = x1 - x0
39 | x02 = x0 - x2
40 | return x0 - (x10) * np.dot(x10, x02) / np.dot(x10, x10)
41 | 
42 | 
43 | class GarmentAlign(object):
44 | def __init__(self, gender):
45 | model = np.load(os.path.join(ROOT, 'smpl/smpl_hres_{}.npz'.format(gender)))
46 | model_lres = np.load(os.path.join(ROOT, 'smpl/smpl_{}.npz'.format(gender)))
47 | with open(os.path.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
48 | self.class_info = pickle.load(f, encoding='latin-1')
49 | self.w = np.array(model['J_regressor'], dtype=np.float).T
50 | self.w_lres = np.array(model_lres['J_regressor'], dtype=np.float).T
51 | 
52 | 
53 | def align(self, gar_v, body_v, gc):
54 | vert_indices = self.class_info[gc]['vert_indices']
55 | gar_smpl = np.zeros([self.w.shape[0], 3])
56 | gar_smpl[vert_indices] = gar_v
57 | gar_j = np.einsum('vt,vj->jt', gar_smpl, self.w)
58 | body_j = np.einsum('vt,vj->jt', body_v, self.w_lres)
59 | if gc == 't-shirt':
60 | return np.mean(body_j[[16, 17]] - gar_j[[16, 17]], 0, keepdims=True)
61 | elif gc == 'shirt':
62 | left_b = np.load(os.path.join(ROOT, 'shirt_left_boundary.npy'))
63 | right_b = np.load(os.path.join(ROOT, 'shirt_right_boundary.npy'))
64 | left_b_center = np.mean(gar_v[left_b], 0)
65 | right_b_center = np.mean(gar_v[right_b], 0)
66 | # left_trans = find_perpendicular_foot(body_j[18], body_j[20], left_b_center) - left_b_center
67 | # right_trans = find_perpendicular_foot(body_j[19], body_j[21], right_b_center) - right_b_center
68 | # final_trans = ((left_trans + right_trans) / 2.)[None]
69 | # return final_trans
70 | 
71 | import torch
72 | trans = torch.zeros(3, requires_grad=True)
73 | lbc_torch = torch.tensor(left_b_center.astype(np.float32), requires_grad=True)
74 | rbc_torch = torch.tensor(right_b_center.astype(np.float32), requires_grad=True)
75 | 
76 | body_j_torch = torch.tensor(body_j.astype(np.float32), requires_grad=True)
77 | gar_j_torch = torch.tensor(gar_j.astype(np.float32))
78 | 
79 | def dist(x0, x1, x2):
80 | x10 = x1 - x0
81 | return torch.norm(x2 - x0 - x10 * torch.dot(x2-x0, x10) / torch.dot(x10, x10))
82 | 
83 | optim = torch.optim.SGD([trans], lr=1e-3)
84 | for i in range(50):
85 | optim.zero_grad()
86 | loss = dist(body_j_torch[18], body_j_torch[20], lbc_torch+trans) + \
87 | dist(body_j_torch[19], body_j_torch[21], rbc_torch+trans)
88 | 
89 | loss.backward()
90 | optim.step()
91 | # print(f"{i}\t{loss.cpu().detach().numpy()}")
92 | 
93 | return trans.detach().numpy()[None]
94 | return np.mean(body_j[[18, 19]] - gar_j[[18, 19]], 0, keepdims=True)
95 | 
96 | 
97 | 
98 | 
99 | if __name__ == '__main__':
100 | # STAGE1_FNUM = 5
101 | # STAGE2_FNUM = 5
102 | # STAGE3_FNUM = 5
103 | SM_FNUM = 7
104 | STABLE_FNUM = 6
105 | END_FNUM = 2
106 | lowest = -2
107 | apose = get_Apose()
108 | 
109 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
110 | garment_meta = pickle.load(f)
111 | for gender in ['neutral', 'female', 'male']:
112 | smpl = SMPLNP_Lres(gender=gender)
113 | smpl_hres = SMPLNP(gender=gender)
114 | gar_align = GarmentAlign(gender)
115 | num_betas = 10 if gender == 'neutral' else 300
116 | betas = np.zeros([9, num_betas], dtype=np.float32)
117 | betas[0, 0] = 2
118 | betas[1, 0] = -2
119 | betas[2, 1] = 2
120 | betas[3, 1] = -2
121 | betas[4, 0] = 1
122 | betas[5, 0] = -1
123 | betas[6, 1] = 1
124 | betas[7, 1] = -1
125 | vcanonical = smpl(np.zeros_like(betas[0]), apose)
126 | 
127 | for gc in ['shirt']:
128 | gc_gender_dir = osp.join(ROOT, '{}_{}'.format(gc, gender))
129 | shape_dir = osp.join(gc_gender_dir, 'shape') 
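# The rest of this loop prepares all Marvelous Designer inputs for one garment class:
# shape/ gets the target bodies (betas.npy, beta_XXX.npy, XXX.obj), style/ gets the
# garment meshes decoded from the style PCA (gammas.npy, gamma_XXX.npy, XXX.obj), and
# style_shape/ gets the simulation inputs (up_betaXXX.obj start avatars,
# motion_XXX.pc2 body sequences, input_betaXXX_gammaXXX.obj aligned garments).
# A rough sketch of the style decoding used further below, assuming style_model.npz
# stores the PCA basis, mean, and coefficient normalization exactly as loaded here:
#   coeff = coeff_mean + gamma * coeff_range          # gamma_transform()
#   verts = pca.inverse_transform(coeff[None]).reshape(-1, 3)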
130 | save_dir = osp.join(gc_gender_dir, 'style_shape') 131 | style_dir = osp.join(gc_gender_dir, 'style') 132 | if not osp.exists(shape_dir): 133 | os.makedirs(shape_dir) 134 | if not osp.exists(save_dir): 135 | os.makedirs(save_dir) 136 | if not osp.exists(style_dir): 137 | os.makedirs(style_dir) 138 | 139 | # shape 140 | np.save(osp.join(shape_dir, 'betas.npy'), betas) 141 | part_faces = part_body_faces(gc) 142 | start_body_list = [] 143 | ds = DS(smpl.base.np_v_template, part_faces) 144 | for i, beta in enumerate(betas): 145 | np.save(osp.join(shape_dir, 'beta_{:03d}.npy'.format(i)), beta) 146 | 147 | vbeta = smpl(beta, apose) 148 | 149 | # smoothing 150 | sm_body_list = [] 151 | sm_body = np.copy(vbeta) 152 | for sm_i in range(SM_FNUM): 153 | sm_body = ds.smooth(sm_body, smoothness=0.2) 154 | sm_body = ds.smooth(sm_body, smoothness=0.2) 155 | sm_body_list.append(sm_body) 156 | sm_body_list = np.array(sm_body_list)[::-1] 157 | 158 | vstart = sm_body_list[0] 159 | if np.mean(np.abs(vcanonical)) > np.mean(np.abs(vbeta)): # thin body 160 | # if True: 161 | vbody = np.concatenate((sm_body_list, np.tile(vbeta[None], [STABLE_FNUM, 1, 1])), 0) 162 | else: # big body 163 | vbody = np.concatenate((np.tile(vstart[None], [STABLE_FNUM, 1, 1]), sm_body_list), 0) 164 | vbody = np.concatenate((vbody, np.tile(vbeta[None], (END_FNUM, 1, 1))), 0) 165 | vbody[:, :, 1] -= lowest 166 | start_body_list.append(np.copy(vbody[0])) 167 | m = trimesh.Trimesh(vertices=vbody[0], faces=part_faces, process=False) 168 | m.export(osp.join(save_dir, 'up_beta{:03d}.obj'.format(i))) 169 | save_pc2(vbody, os.path.join(save_dir, 'motion_{:03d}.pc2'.format(i))) 170 | # save body 171 | vv = vbody[-1] 172 | vv[:, 1] += lowest 173 | mbody = trimesh.Trimesh(vertices=vv, faces=smpl.base.faces, process=False) 174 | mbody.export(osp.join(shape_dir, '{:03d}.obj'.format(i))) 175 | 176 | # gamma 177 | gammas = [] 178 | if gc in ['t-shirt', 'shirt', 'pant']: 179 | gammas.append([0., 0., 0., 0.]) 180 | for x1 in np.arange(-1, 1.01, 0.5): 181 | for x2 in np.arange(-1, 1.01, 0.5): 182 | if x1 == 0 and x2 == 0: 183 | continue 184 | gammas.append([x1, x2, 0, 0]) 185 | gammas = np.array(gammas, dtype=np.float32) 186 | np.save(osp.join(style_dir, 'gammas.npy'), gammas) 187 | 188 | style_model = np.load(osp.join(gc_gender_dir, 'style_model.npz')) 189 | pca = PCA(n_components=4) 190 | pca.components_ = style_model['pca_w'] 191 | pca.mean_ = style_model['mean'] 192 | coeff_mean = style_model['coeff_mean'] 193 | coeff_range = style_model['coeff_range'] 194 | # trans = style_model['trans'] 195 | faces = garment_meta[gc]['f'] 196 | vert_indices = garment_meta[gc]['vert_indices'] 197 | # upper_boundary = np.load(osp.join(ROOT, '{}_upper_boundary.npy'.format(gc))) 198 | 199 | for i, gamma in enumerate(gammas): 200 | np.save(osp.join(style_dir, 'gamma_{:03d}.npy'.format(i)), gamma) 201 | gamma = gamma_transform(gamma, coeff_mean, coeff_range) 202 | v = pca.inverse_transform(gamma[None]).reshape([-1, 3]) 203 | v[:, 1] -= lowest 204 | notrans_m = trimesh.Trimesh(vertices=v, faces=faces, process=False) 205 | notrans_m.export(osp.join(style_dir, '{:03d}.obj'.format(i))) 206 | 207 | for j, beta in enumerate(betas): 208 | # body_v, _ = smpl_hres(, apose, None, None) 209 | body_v = np.copy(start_body_list[j]) 210 | gar_v = np.copy(v) 211 | trans = gar_align.align(gar_v, body_v, gc) 212 | # trans = np.mean(body_v[vert_indices][upper_boundary] - v[upper_boundary], axis=0, keepdims=True) 213 | gar_v += trans 214 | m = trimesh.Trimesh(vertices=gar_v, 
faces=faces, process=False)
215 | m.export(osp.join(save_dir, 'input_beta{:03d}_gamma{:03d}.obj'.format(j, i)))
216 | 
217 | 
--------------------------------------------------------------------------------
/simulation_style/style_shape_pant.py:
--------------------------------------------------------------------------------
1 | """
2 | simulate different style-shape combinations in A-pose
3 | """
4 | import os
5 | import os.path as osp
6 | import trimesh
7 | import numpy as np
8 | from utils.ios import save_pc2
9 | from utils.rotation import interpolate_pose
10 | import pickle
11 | from smpl_torch import SMPLNP_Lres, SMPLNP
12 | from utils.rotation import get_Apose
13 | from sklearn.decomposition import PCA
14 | from utils.part_body import part_body_faces
15 | from utils.diffusion_smoothing import DiffusionSmoothing as DS
16 | 
17 | from global_var import ROOT
18 | 
19 | APOSE = get_Apose()
20 | 
21 | def gamma_transform(gamma, coeff_mean, coeff_range):
22 | return coeff_mean + gamma * coeff_range
23 | 
24 | def retarget(beta, theta, garment_verts, vert_indices, smpl_hres):
25 | smpl_hres.pose[:] = theta
26 | smpl_hres.betas[:] = 0
27 | zero_body = np.array(smpl_hres.r)
28 | 
29 | smpl_hres.betas[:] = beta
30 | reg_body = np.array(smpl_hres.r)
31 | retarget_d = (zero_body - reg_body)[vert_indices]
32 | retarget_verts = garment_verts + retarget_d
33 | return retarget_verts
34 | 
35 | 
36 | def find_perpendicular_foot(x0, x1, x2):
37 | # x0, x1 define a line. x2 is an arbitrary point
38 | x10 = x1 - x0
39 | x02 = x0 - x2
40 | return x0 - (x10) * np.dot(x10, x02) / np.dot(x10, x10)
41 | 
42 | 
43 | class GarmentAlign(object):
44 | def __init__(self, gender):
45 | model = np.load(os.path.join(ROOT, 'smpl/smpl_hres_{}.npz'.format(gender)))
46 | model_lres = np.load(os.path.join(ROOT, 'smpl/smpl_{}.npz'.format(gender)))
47 | with open(os.path.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
48 | self.class_info = pickle.load(f, encoding='latin-1')
49 | self.w = np.array(model['J_regressor'], dtype=np.float).T
50 | self.w_lres = np.array(model_lres['J_regressor'], dtype=np.float).T
51 | 
52 | 
53 | def align(self, gar_v, body_v, gc):
54 | vert_indices = self.class_info[gc]['vert_indices']
55 | gar_smpl = np.zeros([self.w.shape[0], 3])
56 | gar_smpl[vert_indices] = gar_v
57 | gar_j = np.einsum('vt,vj->jt', gar_smpl, self.w)
58 | body_j = np.einsum('vt,vj->jt', body_v, self.w_lres)
59 | if gc == 't-shirt':
60 | return np.mean(body_j[[16, 17]] - gar_j[[16, 17]], 0, keepdims=True)
61 | elif gc == 'shirt':
62 | left_b = np.load(os.path.join(ROOT, 'shirt_left_boundary.npy'))
63 | right_b = np.load(os.path.join(ROOT, 'shirt_right_boundary.npy'))
64 | left_b_center = np.mean(gar_v[left_b], 0)
65 | right_b_center = np.mean(gar_v[right_b], 0)
66 | # left_trans = find_perpendicular_foot(body_j[18], body_j[20], left_b_center) - left_b_center
67 | # right_trans = find_perpendicular_foot(body_j[19], body_j[21], right_b_center) - right_b_center
68 | # final_trans = ((left_trans + right_trans) / 2.)[None]
69 | # return final_trans
70 | 
71 | import torch
72 | trans = torch.zeros(3, requires_grad=True)
73 | lbc_torch = torch.tensor(left_b_center.astype(np.float32), requires_grad=True)
74 | rbc_torch = torch.tensor(right_b_center.astype(np.float32), requires_grad=True)
75 | 
76 | body_j_torch = torch.tensor(body_j.astype(np.float32), requires_grad=True)
77 | gar_j_torch = torch.tensor(gar_j.astype(np.float32))
78 | 
79 | def dist(x0, x1, x2):
80 | x10 = x1 - x0
81 | return torch.norm(x2 - x0 - x10 * torch.dot(x2-x0, x10) / 
torch.dot(x10, x10)) 82 | 83 | optim = torch.optim.SGD([trans], lr=1e-3) 84 | for i in range(50): 85 | optim.zero_grad() 86 | loss = dist(body_j_torch[18], body_j_torch[20], lbc_torch+trans) + \ 87 | dist(body_j_torch[19], body_j_torch[21], rbc_torch+trans) 88 | 89 | loss.backward() 90 | optim.step() 91 | # print(f"{i}\t{loss.cpu().detach().numpy()}") 92 | 93 | return trans.detach().numpy()[None] 94 | return np.mean(body_j[[18, 19]] - gar_j[[18, 19]], 0, keepdims=True) 95 | elif gc == 'pant': 96 | return np.mean(body_j[[1, 2]] - gar_j[[1, 2]], 0, keepdims=True) 97 | 98 | 99 | def polygon_area(v): 100 | x = v[:, 0].copy() 101 | y = v[:, 2].copy() 102 | return 0.5*np.abs(np.dot(x,np.roll(y,1))-np.dot(y,np.roll(x,1))) 103 | 104 | 105 | 106 | if __name__ == '__main__': 107 | # STAGE1_FNUM = 5 108 | # STAGE2_FNUM = 5 109 | # STAGE3_FNUM = 5 110 | SM_FNUM = 6 111 | STABLE_FNUM = 4 112 | END_FNUM = 5 113 | lowest = -2 114 | apose = get_Apose() 115 | 116 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f: 117 | garment_meta = pickle.load(f) 118 | for gender in ['female', 'male']: 119 | smpl = SMPLNP_Lres(gender=gender) 120 | smpl_hres = SMPLNP(gender=gender) 121 | gar_align = GarmentAlign(gender) 122 | num_betas = 10 if gender == 'neutral' else 300 123 | betas = np.zeros([9, num_betas], dtype=np.float32) 124 | betas[0, 0] = 2 125 | betas[1, 0] = -2 126 | betas[2, 1] = 2 127 | betas[3, 1] = -2 128 | betas[4, 0] = 1 129 | betas[5, 0] = -1 130 | betas[6, 1] = 1 131 | betas[7, 1] = -1 132 | vcanonical = smpl(np.zeros_like(betas[0]), apose) 133 | 134 | for gc in ['short-pant']: 135 | gc_gender_dir = osp.join(ROOT, '{}_{}'.format(gc, gender)) 136 | shape_dir = osp.join(gc_gender_dir, 'shape') 137 | save_dir = osp.join(gc_gender_dir, 'style_shape') 138 | style_dir = osp.join(gc_gender_dir, 'style') 139 | if not osp.exists(shape_dir): 140 | os.makedirs(shape_dir) 141 | if not osp.exists(save_dir): 142 | os.makedirs(save_dir) 143 | if not osp.exists(style_dir): 144 | os.makedirs(style_dir) 145 | part_faces = part_body_faces(gc) 146 | ds = DS(smpl.base.np_v_template, part_faces) 147 | 148 | # gamma 149 | gammas = [] 150 | if gc in ['t-shirt', 'shirt', 'pant', 'short-pant']: 151 | gammas.append([0., 0., 0., 0.]) 152 | for x1 in np.arange(-1, 1.01, 0.5): 153 | for x2 in np.arange(-1, 1.01, 0.5): 154 | if x1 == 0 and x2 == 0: 155 | continue 156 | gammas.append([x1, x2, 0, 0]) 157 | gammas = np.array(gammas, dtype=np.float32) 158 | np.save(osp.join(style_dir, 'gammas.npy'), gammas) 159 | 160 | style_model = np.load(osp.join(gc_gender_dir, 'style_model.npz')) 161 | pca = PCA(n_components=4) 162 | pca.components_ = style_model['pca_w'] 163 | pca.mean_ = style_model['mean'] 164 | coeff_mean = style_model['coeff_mean'] 165 | coeff_range = style_model['coeff_range'] 166 | # trans = style_model['trans'] 167 | faces = garment_meta[gc]['f'] 168 | vert_indices = garment_meta[gc]['vert_indices'] 169 | # upper_boundary = np.load(osp.join(ROOT, '{}_upper_boundary.npy'.format(gc))) 170 | 171 | up_bnd_inds = np.load(osp.join(ROOT, '{}_upper_boundary.npy'.format(gc))) 172 | waist_body_inds = vert_indices[up_bnd_inds] 173 | betas_big2thin = np.zeros([20, num_betas]) 174 | betas_big2thin[:, 1] = np.linspace(-3, 3, 20) 175 | bodies_big2thin_unsmooth, _ = smpl_hres(betas_big2thin, np.tile(apose[None], (20, 1)), None, None, batch=True) 176 | bodies_big2thin = [] 177 | for unsmooth_body in bodies_big2thin_unsmooth: 178 | smooth_body = unsmooth_body 179 | for smooth_i in range(SM_FNUM): 180 | smooth_body = 
ds.smooth(smooth_body, smoothness=0.1) 181 | bodies_big2thin.append(smooth_body) 182 | bodies_big2thin = np.array(bodies_big2thin) 183 | 184 | start_betas = [] 185 | for gamma_i, gamma in enumerate(gammas): 186 | np.save(osp.join(style_dir, 'gamma_{:03d}.npy'.format(gamma_i)), gamma) 187 | gamma = gamma_transform(gamma, coeff_mean, coeff_range) 188 | v = pca.inverse_transform(gamma[None]).reshape([-1, 3]) 189 | v[:, 1] -= lowest 190 | notrans_m = trimesh.Trimesh(vertices=v, faces=faces, process=False) 191 | notrans_m.export(osp.join(style_dir, '{:03d}.obj'.format(gamma_i))) 192 | 193 | 194 | # for each gamma, find a proper beta that fits it well, so that 195 | # the waist band can be well pinned in MD 196 | pant_area = polygon_area(v[up_bnd_inds]) 197 | for body_i, body_v in enumerate(bodies_big2thin): 198 | body_area = polygon_area(body_v[waist_body_inds]) 199 | if gc == 'short-pant' and gender == 'male': 200 | if body_area * 1.2 < pant_area: 201 | break 202 | else: 203 | if body_area * 1.1 < pant_area: 204 | break 205 | start_beta = betas_big2thin[body_i].copy() 206 | start_betas.append(start_beta) 207 | 208 | # initial garment mesh 209 | # body_v, _ = smpl_hres(, apose, None, None) 210 | body_v = np.copy(bodies_big2thin[body_i]) 211 | gar_v = np.copy(v) 212 | trans = np.mean(body_v[waist_body_inds]-gar_v[up_bnd_inds], 0, keepdims=True) 213 | gar_v += trans 214 | gar_v[:, 1] -= lowest 215 | m = trimesh.Trimesh(vertices=gar_v, faces=faces, process=False) 216 | m.export(osp.join(save_dir, 'input_gamma{:03d}.obj'.format(gamma_i))) 217 | 218 | # initial body mesh 219 | start_body = smpl(start_beta, apose) 220 | start_body[:, 1] -= lowest 221 | for smooth_i in range(SM_FNUM): 222 | start_body = ds.smooth(start_body, smoothness=0.1) 223 | m = trimesh.Trimesh(vertices=start_body, faces=part_faces, process=False) 224 | m.export(osp.join(save_dir, 'up_gamma{:03d}.obj'.format(gamma_i))) 225 | 226 | # shape 227 | np.save(osp.join(shape_dir, 'betas.npy'), betas) 228 | for beta_i, beta in enumerate(betas): 229 | np.save(osp.join(shape_dir, 'beta_{:03d}.npy'.format(beta_i)), beta) 230 | 231 | final_body = smpl(beta, apose) 232 | final_body[:, 1] -= lowest 233 | trans_body = interpolate_pose(start_body, final_body, SM_FNUM-2) 234 | vbody = np.concatenate((np.tile(start_body[None], (STABLE_FNUM, 1, 1)), trans_body, 235 | np.tile(final_body[None], (END_FNUM, 1, 1))), 0) 236 | 237 | save_pc2(vbody, os.path.join(save_dir, 'motion_beta{:03d}_gamma{:03d}.pc2'.format(beta_i, gamma_i))) 238 | # save body 239 | vv = vbody[-1] 240 | vv[:, 1] += lowest 241 | mbody = trimesh.Trimesh(vertices=vv, faces=smpl.base.faces, process=False) 242 | mbody.export(osp.join(shape_dir, '{:03d}.obj'.format(beta_i))) 243 | 244 | -------------------------------------------------------------------------------- /simulation_style/style_shape_skirt.py: -------------------------------------------------------------------------------- 1 | """ 2 | simulate different style-shape combinations in A-pose 3 | """ 4 | import os 5 | import os.path as osp 6 | import trimesh 7 | import numpy as np 8 | from utils.ios import save_pc2 9 | from utils.rotation import interpolate_pose 10 | import pickle 11 | from smpl_torch import SMPLNP_Lres, SMPLNP 12 | from utils.rotation import get_Apose 13 | from sklearn.decomposition import PCA 14 | from utils.part_body import part_body_faces 15 | from utils.diffusion_smoothing import DiffusionSmoothing as DS 16 | 17 | from global_var import ROOT 18 | 19 | APOSE = get_Apose() 20 | 21 | def gamma_transform(gamma, 
coeff_mean, coeff_range):
22 | return coeff_mean + gamma * coeff_range
23 | 
24 | 
25 | def find_perpendicular_foot(x0, x1, x2):
26 | # x0, x1 define a line. x2 is an arbitrary point
27 | x10 = x1 - x0
28 | x02 = x0 - x2
29 | return x0 - (x10) * np.dot(x10, x02) / np.dot(x10, x10)
30 | 
31 | 
32 | def polygon_area(v):
33 | x = v[:, 0].copy()
34 | y = v[:, 2].copy()
35 | return 0.5*np.abs(np.dot(x,np.roll(y,1))-np.dot(y,np.roll(x,1)))
36 | 
37 | 
38 | 
39 | if __name__ == '__main__':
40 | # STAGE1_FNUM = 5
41 | # STAGE2_FNUM = 5
42 | # STAGE3_FNUM = 5
43 | SM_FNUM = 6
44 | STABLE_FNUM = 4
45 | END_FNUM = 5
46 | lowest = -2
47 | apose = get_Apose()
48 | 
49 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
50 | garment_meta = pickle.load(f)
51 | for gender in ['female']:
52 | smpl = SMPLNP_Lres(gender=gender)
53 | smpl_hres = SMPLNP(gender=gender)
54 | num_betas = 10 if gender == 'neutral' else 300
55 | betas = np.zeros([9, num_betas], dtype=np.float32)
56 | betas[0, 0] = 2
57 | betas[1, 0] = -2
58 | betas[2, 1] = 2
59 | betas[3, 1] = -2
60 | betas[4, 0] = 1
61 | betas[5, 0] = -1
62 | betas[6, 1] = 1
63 | betas[7, 1] = -1
64 | vcanonical = smpl(np.zeros_like(betas[0]), apose)
65 | 
66 | for gc in ['skirt']:
67 | gc_gender_dir = osp.join(ROOT, '{}_{}'.format(gc, gender))
68 | shape_dir = osp.join(gc_gender_dir, 'shape')
69 | save_dir = osp.join(gc_gender_dir, 'style_shape')
70 | style_dir = osp.join(gc_gender_dir, 'style')
71 | if not osp.exists(shape_dir):
72 | os.makedirs(shape_dir)
73 | if not osp.exists(save_dir):
74 | os.makedirs(save_dir)
75 | if not osp.exists(style_dir):
76 | os.makedirs(style_dir)
77 | part_faces = part_body_faces(gc)
78 | ds = DS(smpl.base.np_v_template, part_faces)
79 | 
80 | # gamma
81 | gammas = []
82 | gammas.append([0., 0., 0., 0.])
83 | # for x1 in np.linspace(-1, 1, 11):
84 | # if x1 == 0:
85 | # continue
86 | # gammas.append([x1, 0, 0, 0])
87 | for x1 in np.linspace(-1, 1, 7):
88 | for x2 in np.linspace(-1, 1, 3):
89 | if x1 == 0 and x2 == 0:
90 | continue
91 | gammas.append([x1, x2, 0, 0])
92 | gammas = np.array(gammas, dtype=np.float32)
93 | np.save(osp.join(style_dir, 'gammas.npy'), gammas)
94 | 
95 | style_model = np.load(osp.join(gc_gender_dir, 'style_model.npz'))
96 | pca = PCA(n_components=4)
97 | pca.components_ = style_model['pca_w']
98 | pca.mean_ = style_model['mean']
99 | coeff_mean = style_model['coeff_mean']
100 | coeff_range = style_model['coeff_range']
101 | # trans = style_model['trans']
102 | faces = garment_meta[gc]['f']
103 | vert_indices = garment_meta['pant']['vert_indices']
104 | # upper_boundary = np.load(osp.join(ROOT, '{}_upper_boundary.npy'.format(gc)))
105 | 
106 | up_bnd_inds = np.load(osp.join(ROOT, '{}_upper_boundary.npy'.format(gc)))
107 | pant_up_bnd_inds = np.load(osp.join(ROOT, 'pant_upper_boundary.npy'))
108 | 
109 | waist_body_inds = vert_indices[pant_up_bnd_inds]
110 | betas_big2thin = np.zeros([20, num_betas])
111 | betas_big2thin[:, 1] = np.linspace(-3, 3, 20)
112 | bodies_big2thin_unsmooth, _ = smpl_hres(betas_big2thin, np.tile(apose[None], (20, 1)), None, None, batch=True)
113 | bodies_big2thin = []
114 | for unsmooth_body in bodies_big2thin_unsmooth:
115 | smooth_body = unsmooth_body
116 | for smooth_i in range(SM_FNUM):
117 | smooth_body = ds.smooth(smooth_body, smoothness=0.1)
118 | bodies_big2thin.append(smooth_body)
119 | bodies_big2thin = np.array(bodies_big2thin)
120 | 
121 | start_betas = []
122 | for gamma_i, gamma in enumerate(gammas):
123 | np.save(osp.join(style_dir, 'gamma_{:03d}.npy'.format(gamma_i)), 
gamma) 124 | gamma = gamma_transform(gamma, coeff_mean, coeff_range) 125 | v = pca.inverse_transform(gamma[None]).reshape([-1, 3]) 126 | v[:, 1] -= lowest 127 | notrans_m = trimesh.Trimesh(vertices=v, faces=faces, process=False) 128 | notrans_m.export(osp.join(style_dir, '{:03d}.obj'.format(gamma_i))) 129 | 130 | 131 | # for each gamma, find a proper beta that fits it well, so that 132 | # the waist band can be well pinned in MD 133 | pant_area = polygon_area(v[up_bnd_inds]) 134 | for body_i, body_v in enumerate(bodies_big2thin): 135 | body_area = polygon_area(body_v[waist_body_inds]) 136 | if body_area * 1.1 < pant_area: 137 | break 138 | start_beta = betas_big2thin[body_i].copy() 139 | start_betas.append(start_beta) 140 | 141 | # initial garment mesh 142 | # body_v, _ = smpl_hres(, apose, None, None) 143 | body_v = np.copy(bodies_big2thin[body_i]) 144 | gar_v = np.copy(v) 145 | # trans = np.mean(body_v[waist_body_inds]-gar_v[up_bnd_inds], 0, keepdims=True) 146 | trans = np.mean(body_v[waist_body_inds], 0, keepdims=True) - np.mean(gar_v[up_bnd_inds], 0, keepdims=True) 147 | gar_v += trans 148 | gar_v[:, 1] -= lowest 149 | m = trimesh.Trimesh(vertices=gar_v, faces=faces, process=False) 150 | m.export(osp.join(save_dir, 'input_gamma{:03d}.obj'.format(gamma_i))) 151 | 152 | # initial body mesh 153 | start_body = smpl(start_beta, apose) 154 | start_body[:, 1] -= lowest 155 | for smooth_i in range(SM_FNUM): 156 | start_body = ds.smooth(start_body, smoothness=0.1) 157 | m = trimesh.Trimesh(vertices=start_body, faces=part_faces, process=False) 158 | m.export(osp.join(save_dir, 'up_gamma{:03d}.obj'.format(gamma_i))) 159 | 160 | # shape 161 | np.save(osp.join(shape_dir, 'betas.npy'), betas) 162 | for beta_i, beta in enumerate(betas): 163 | np.save(osp.join(shape_dir, 'beta_{:03d}.npy'.format(beta_i)), beta) 164 | 165 | final_body = smpl(beta, apose) 166 | final_body[:, 1] -= lowest 167 | trans_body = interpolate_pose(start_body, final_body, SM_FNUM-2) 168 | vbody = np.concatenate((np.tile(start_body[None], (STABLE_FNUM, 1, 1)), trans_body, 169 | np.tile(final_body[None], (END_FNUM, 1, 1))), 0) 170 | 171 | save_pc2(vbody, os.path.join(save_dir, 'motion_beta{:03d}_gamma{:03d}.pc2'.format(beta_i, gamma_i))) 172 | # save body 173 | vv = vbody[-1] 174 | vv[:, 1] += lowest 175 | mbody = trimesh.Trimesh(vertices=vv, faces=smpl.base.faces, process=False) 176 | mbody.export(osp.join(shape_dir, '{:03d}.obj'.format(beta_i))) 177 | 178 | -------------------------------------------------------------------------------- /simulation_style/vis_simulation.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import cv2 4 | import numpy as np 5 | import trimesh 6 | from utils.renderer import Renderer 7 | from global_var import ROOT 8 | 9 | 10 | if __name__ == '__main__': 11 | garment_class = 'skirt' 12 | gender = 'female' 13 | renderer = Renderer(512) 14 | 15 | data_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'style_shape') 16 | shape_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'shape') 17 | save_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'style_shape_vis') 18 | os.makedirs(save_dir, exist_ok=True) 19 | 20 | betas = [k.replace('.obj', '') for k in os.listdir(shape_dir) if k.endswith('.obj')] 21 | all_vbody = {} 22 | body_f = None 23 | for beta in betas: 24 | m = trimesh.load(osp.join(shape_dir, '{}.obj'.format(beta)), process=False) 25 | all_vbody[beta] = np.array(m.vertices) 26 | body_f = 
np.array(m.faces) 27 | 28 | all_style_shape = [k for k in os.listdir(data_dir) if k.endswith('.obj') and k.startswith('beta')] 29 | for style_shape in all_style_shape: 30 | style_shape_path = osp.join(data_dir, style_shape) 31 | style_shape_save_path = osp.join(save_dir, style_shape) 32 | beta = style_shape.split('_gamma')[0].replace('beta', '') 33 | 34 | garment_m = trimesh.load(style_shape_path, process=False) 35 | garment_v = garment_m.vertices 36 | garment_v[:, 1] -= 2 37 | img = renderer([all_vbody[beta], garment_v], [body_f, garment_m.faces], 38 | [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], trans=[1, 0, 0]) 39 | img_back = renderer([all_vbody[beta], garment_v], [body_f, garment_m.faces], 40 | [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], trans=[1, 0, 0], euler=[180, 0, 0]) 41 | cv2.imwrite(osp.join(save_dir, style_shape.replace('.obj', '.jpg')), np.concatenate((img, img_back), 1)) 42 | -------------------------------------------------------------------------------- /smpl_lib/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/smpl_lib/__init__.py -------------------------------------------------------------------------------- /smpl_lib/ch.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # -*- coding: utf-8 -*- 3 | 4 | import numpy as np 5 | import chumpy as ch 6 | import scipy.sparse as sp 7 | 8 | from chumpy.utils import col 9 | 10 | 11 | class sp_dot(ch.Ch): 12 | terms = 'a', 13 | dterms = 'b', 14 | 15 | def on_changed(self, which): 16 | if 'a' in which: 17 | a_csr = sp.csr_matrix(self.a) 18 | # To stay consistent with numpy, we must upgrade 1D arrays to 2D 19 | self.ar = sp.csr_matrix((a_csr.data, a_csr.indices, a_csr.indptr), 20 | shape=(max(np.sum(a_csr.shape[:-1]), 1), a_csr.shape[-1])) 21 | 22 | if 'b' in which: 23 | self.br = col(self.b.r) if len(self.b.r.shape) < 2 else self.b.r.reshape((self.b.r.shape[0], -1)) 24 | 25 | if 'a' in which or 'b' in which: 26 | self.k = sp.kron(self.ar, sp.eye(self.br.shape[1], self.br.shape[1])) 27 | 28 | def compute_r(self): 29 | return self.a.dot(self.b.r) 30 | 31 | def compute(self): 32 | if self.br.ndim <= 1: 33 | return self.ar 34 | elif self.br.ndim <= 2: 35 | return self.k 36 | else: 37 | raise NotImplementedError 38 | 39 | def compute_dr_wrt(self, wrt): 40 | if wrt is self.b: 41 | return self.compute() 42 | 43 | class PReLU(ch.Ch): 44 | terms = 'p' 45 | dterms = 'x' 46 | 47 | def compute_r(self): 48 | r = self.x.r.copy() 49 | r[r < 0] *= self.p 50 | 51 | return r 52 | 53 | def compute_dr_wrt(self, wrt): 54 | if wrt is not self.x: 55 | return None 56 | 57 | dr = np.zeros(self.x.r.shape) 58 | dr[self.x.r > 0] = 1 59 | dr[self.x.r < 0] = self.p 60 | 61 | return sp.diags([dr.ravel()], [0]) 62 | 63 | ## RelU is PRelU with p=0 64 | class Clamp(ch.Ch): 65 | dterms = 'x' 66 | terms = 'c' 67 | 68 | def compute_r(self): 69 | r = self.x.r.copy() 70 | r[r > self.c] = self.c 71 | 72 | return r 73 | 74 | def compute_dr_wrt(self, wrt): 75 | if wrt is not self.x: 76 | return None 77 | 78 | dr = np.zeros(self.x.r.shape) 79 | dr[self.x.r < self.c] = 1 80 | 81 | return sp.diags([dr.ravel()], [0]) -------------------------------------------------------------------------------- /smpl_lib/ch_smpl.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # -*- coding: utf-8 
-*- 3 | 4 | import numpy as np 5 | import chumpy as ch 6 | import cPickle as pkl 7 | import scipy.sparse as sp 8 | from chumpy.ch import Ch 9 | from posemapper import posemap, Rodrigues 10 | from serialization import backwards_compatibility_replacements 11 | 12 | from lib.ch import sp_dot 13 | 14 | 15 | class Smpl(Ch): 16 | """ 17 | Class to store SMPL object with slightly improved code and access to more matrices 18 | """ 19 | terms = 'model', 20 | dterms = 'trans', 'betas', 'pose', 'v_personal', 'v_template' 21 | 22 | def __init__(self, *args, **kwargs): 23 | self.on_changed(self._dirty_vars) 24 | 25 | def on_changed(self, which): 26 | if 'model' in which: 27 | if not isinstance(self.model, dict): 28 | dd = pkl.load(open(self.model)) 29 | else: 30 | dd = self.model 31 | 32 | backwards_compatibility_replacements(dd) 33 | 34 | # for s in ['v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 'betas', 'J']: 35 | for s in ['posedirs', 'shapedirs']: 36 | if (s in dd) and not hasattr(dd[s], 'dterms'): 37 | dd[s] = ch.array(dd[s]) 38 | 39 | self.f = dd['f'] 40 | self.shapedirs = dd['shapedirs'] 41 | self.J_regressor = dd['J_regressor'] 42 | if 'J_regressor_prior' in dd: 43 | self.J_regressor_prior = dd['J_regressor_prior'] 44 | self.bs_type = dd['bs_type'] 45 | self.bs_style = dd['bs_style'] 46 | self.weights = ch.array(dd['weights']) 47 | if 'vert_sym_idxs' in dd: 48 | self.vert_sym_idxs = dd['vert_sym_idxs'] 49 | if 'weights_prior' in dd: 50 | self.weights_prior = dd['weights_prior'] 51 | self.kintree_table = dd['kintree_table'] 52 | self.posedirs = dd['posedirs'] 53 | 54 | if not hasattr(self, 'betas'): 55 | self.betas = ch.zeros(self.shapedirs.shape[-1]) 56 | 57 | if not hasattr(self, 'trans'): 58 | self.trans = ch.zeros(3) 59 | 60 | if not hasattr(self, 'pose'): 61 | self.pose = ch.zeros(72) 62 | 63 | if not hasattr(self, 'v_template'): 64 | self.v_template = ch.array(dd['v_template']) 65 | 66 | if not hasattr(self, 'v_personal'): 67 | self.v_personal = ch.zeros_like(self.v_template) 68 | 69 | self._set_up() 70 | 71 | def _set_up(self): 72 | self.v_shaped = self.shapedirs.dot(self.betas) + self.v_template 73 | 74 | self.v_shaped_personal = self.v_shaped + self.v_personal 75 | if sp.issparse(self.J_regressor): 76 | self.J = sp_dot(self.J_regressor, self.v_shaped) 77 | else: 78 | self.J = ch.sum(self.J_regressor.T.reshape(-1, 1, 24) * self.v_shaped.reshape(-1, 3, 1), axis=0).T 79 | self.v_posevariation = self.posedirs.dot(posemap(self.bs_type)(self.pose)) 80 | self.v_poseshaped = self.v_shaped_personal + self.v_posevariation 81 | 82 | self.A, A_global = self._global_rigid_transformation() 83 | self.Jtr = ch.vstack([g[:3, 3] for g in A_global]) 84 | self.J_transformed = self.Jtr + self.trans.reshape((1, 3)) 85 | 86 | self.V = self.A.dot(self.weights.T) 87 | 88 | rest_shape_h = ch.hstack((self.v_poseshaped, ch.ones((self.v_poseshaped.shape[0], 1)))) 89 | self.v_posed = ch.sum(self.V.T * rest_shape_h.reshape(-1, 4, 1), axis=1)[:, :3] 90 | self.v = self.v_posed + self.trans 91 | 92 | def _global_rigid_transformation(self): 93 | results = {} 94 | pose = self.pose.reshape((-1, 3)) 95 | parent = {i: self.kintree_table[0, i] for i in range(1, self.kintree_table.shape[1])} 96 | 97 | with_zeros = lambda x: ch.vstack((x, ch.array([[0.0, 0.0, 0.0, 1.0]]))) 98 | pack = lambda x: ch.hstack([ch.zeros((4, 3)), x.reshape((4, 1))]) 99 | 100 | results[0] = with_zeros(ch.hstack((Rodrigues(pose[0, :]), self.J[0, :].reshape((3, 1))))) 101 | 102 | for i in range(1, self.kintree_table.shape[1]): 103 | 
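# Forward kinematics along the SMPL kinematic tree: each joint's global transform is
# its parent's global transform composed with a local one, i.e. roughly
#   A_global[i] = A_global[parent[i]] @ with_zeros([R(pose[i]) | J[i] - J[parent[i]]])
# where R() is the Rodrigues rotation for the joint's axis-angle parameters and
# J[i] - J[parent[i]] is the bone offset in the rest pose.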
results[i] = results[parent[i]].dot(with_zeros(ch.hstack(( 104 | Rodrigues(pose[i, :]), # rotation around bone endpoint 105 | (self.J[i, :] - self.J[parent[i], :]).reshape((3, 1)) # bone 106 | )))) 107 | 108 | results = [results[i] for i in sorted(results.keys())] 109 | results_global = results 110 | 111 | # subtract rotated J position 112 | results2 = [results[i] - (pack( 113 | results[i].dot(ch.concatenate((self.J[i, :], [0])))) 114 | ) for i in range(len(results))] 115 | result = ch.dstack(results2) 116 | 117 | return result, results_global 118 | 119 | def compute_r(self): 120 | return self.v.r 121 | 122 | def compute_dr_wrt(self, wrt): 123 | if wrt is not self.trans and wrt is not self.betas and wrt is not self.pose and wrt is not self.v_personal and wrt is not self.v_template: 124 | return None 125 | 126 | return self.v.dr_wrt(wrt) 127 | 128 | 129 | if __name__ == '__main__': 130 | from utils.smpl_paths import SmplPaths 131 | 132 | dp = SmplPaths(gender='neutral') 133 | 134 | smpl = Smpl(dp.get_smpl_file()) 135 | 136 | from psbody.mesh.meshviewer import MeshViewer 137 | from psbody.mesh import Mesh 138 | mv = MeshViewer() 139 | mv.set_static_meshes([Mesh(smpl.r, smpl.f)]) 140 | 141 | raw_input("Press Enter to continue...") -------------------------------------------------------------------------------- /smpl_lib/convert_smpl_models.py: -------------------------------------------------------------------------------- 1 | """ 2 | Save SMPL model weights in numpy format 3 | """ 4 | import os 5 | import numpy as np 6 | import pickle as pkl 7 | from smpl_lib.smpl_paths import SmplPaths 8 | import global_var 9 | 10 | 11 | def proc_data(k, v): 12 | if k == 'J_regressor': 13 | v = v.todense() 14 | if k == 'kintree_table': 15 | return np.array(v)[0].astype(np.int) 16 | else: 17 | return np.array(v, dtype=np.float32) 18 | 19 | if __name__ == '__main__': 20 | genders = ['male', 'female'] 21 | keys = ['f', 'v_template', 'shapedirs', 'J_regressor', 'posedirs', 22 | 'kintree_table', 'weights', 'J'] 23 | SAVE_DIR = os.path.join(global_var.ROOT, 'smpl') 24 | if not os.path.exists(SAVE_DIR): 25 | os.makedirs(SAVE_DIR) 26 | for gender in genders: 27 | low_res = SmplPaths(gender=gender) 28 | low_res_model = pkl.load(open(low_res.get_smpl_file(), 'rb'), encoding='latin1') 29 | print(low_res_model.keys()) 30 | high_res_model = low_res.get_hres_smpl_model_data() 31 | low_res_dict, high_res_dict = {}, {} 32 | for k in keys: 33 | low_res_dict[k] = proc_data(k, low_res_model[k]) 34 | high_res_dict[k] = proc_data(k, high_res_model[k]) 35 | np.savez(os.path.join(SAVE_DIR, 'smpl_{}.npz'.format(gender)), **low_res_dict) 36 | np.savez(os.path.join(SAVE_DIR, 'smpl_hres_{}.npz'.format(gender)), **high_res_dict) -------------------------------------------------------------------------------- /smpl_lib/lbs.py: -------------------------------------------------------------------------------- 1 | ## This function is copied from https://github.com/Rubikplayer/flame-fitting 2 | 3 | ''' 4 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved. 5 | This software is provided for research purposes only. 6 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license 7 | More information about SMPL is available here http://smpl.is.tue.mpg. 
8 | For comments or questions, please email us at: smpl@tuebingen.mpg.de 9 | About this file: 10 | ================ 11 | This file defines linear blend skinning for the SMPL loader which 12 | defines the effect of bones and blendshapes on the vertices of the template mesh. 13 | Modules included: 14 | - global_rigid_transformation: 15 | computes global rotation & translation of the model 16 | - verts_core: [overloaded function inherited from verts.verts_core] 17 | computes the blending of joint-influences for each vertex based on type of skinning 18 | ''' 19 | 20 | from .posemapper import posemap 21 | import chumpy 22 | import numpy as np 23 | 24 | 25 | def global_rigid_transformation(pose, J, kintree_table, xp): 26 | results = {} 27 | pose = pose.reshape((-1, 3)) 28 | id_to_col = {kintree_table[1, i]: i for i in range(kintree_table.shape[1])} 29 | parent = {i: id_to_col[kintree_table[0, i]] for i in range(1, kintree_table.shape[1])} 30 | 31 | if xp == chumpy: 32 | from posemapper import Rodrigues 33 | rodrigues = lambda x: Rodrigues(x) 34 | else: 35 | import cv2 36 | rodrigues = lambda x: cv2.Rodrigues(x)[0] 37 | 38 | with_zeros = lambda x: xp.vstack((x, xp.array([[0.0, 0.0, 0.0, 1.0]]))) 39 | results[0] = with_zeros(xp.hstack((rodrigues(pose[0, :]), J[0, :].reshape((3, 1))))) 40 | 41 | for i in range(1, kintree_table.shape[1]): 42 | results[i] = results[parent[i]].dot(with_zeros(xp.hstack(( 43 | rodrigues(pose[i, :]), 44 | ((J[i, :] - J[parent[i], :]).reshape((3, 1))) 45 | )))) 46 | 47 | pack = lambda x: xp.hstack([np.zeros((4, 3)), x.reshape((4, 1))]) 48 | 49 | results = [results[i] for i in sorted(results.keys())] 50 | results_global = results 51 | 52 | if True: 53 | results2 = [results[i] - (pack( 54 | results[i].dot(xp.concatenate(((J[i, :]), 0)))) 55 | ) for i in range(len(results))] 56 | results = results2 57 | result = xp.dstack(results) 58 | return result, results_global 59 | 60 | 61 | def verts_core(pose, v, J, weights, kintree_table, want_Jtr=False, xp=chumpy): 62 | A, A_global = global_rigid_transformation(pose, J, kintree_table, xp) 63 | T = A.dot(weights.T) 64 | 65 | rest_shape_h = xp.vstack((v.T, np.ones((1, v.shape[0])))) 66 | 67 | v = (T[:, 0, :] * rest_shape_h[0, :].reshape((1, -1)) + 68 | T[:, 1, :] * rest_shape_h[1, :].reshape((1, -1)) + 69 | T[:, 2, :] * rest_shape_h[2, :].reshape((1, -1)) + 70 | T[:, 3, :] * rest_shape_h[3, :].reshape((1, -1))).T 71 | 72 | v = v[:, :3] 73 | 74 | if not want_Jtr: 75 | return v 76 | Jtr = xp.vstack([g[:3, 3] for g in A_global]) 77 | return (v, Jtr) -------------------------------------------------------------------------------- /smpl_lib/posemapper.py: -------------------------------------------------------------------------------- 1 | ## This function is copied from https://github.com/Rubikplayer/flame-fitting 2 | 3 | ''' 4 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved. 5 | This software is provided for research purposes only. 6 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license 7 | More information about SMPL is available here http://smpl.is.tue.mpg. 8 | For comments or questions, please email us at: smpl@tuebingen.mpg.de 9 | About this file: 10 | ================ 11 | This module defines the mapping of joint-angles to pose-blendshapes. 
12 | Modules included: 13 | - posemap: 14 | computes the joint-to-pose blend shape mapping given a mapping type as input 15 | ''' 16 | 17 | import chumpy as ch 18 | import numpy as np 19 | import cv2 20 | 21 | 22 | class Rodrigues(ch.Ch): 23 | dterms = 'rt' 24 | 25 | def compute_r(self): 26 | return cv2.Rodrigues(self.rt.r)[0] 27 | 28 | def compute_dr_wrt(self, wrt): 29 | if wrt is self.rt: 30 | return cv2.Rodrigues(self.rt.r)[1].T 31 | 32 | 33 | def lrotmin(p): 34 | if isinstance(p, np.ndarray): 35 | p = p.ravel()[3:] 36 | return np.concatenate( 37 | [(cv2.Rodrigues(np.array(pp))[0] - np.eye(3)).ravel() for pp in p.reshape((-1, 3))]).ravel() 38 | if p.ndim != 2 or p.shape[1] != 3: 39 | p = p.reshape((-1, 3)) 40 | p = p[1:] 41 | return ch.concatenate([(Rodrigues(pp) - ch.eye(3)).ravel() for pp in p]).ravel() 42 | 43 | 44 | def posemap(s): 45 | if s == 'lrotmin': 46 | return lrotmin 47 | else: 48 | raise Exception('Unknown posemapping: %s' % (str(s),)) -------------------------------------------------------------------------------- /smpl_lib/serialization.py: -------------------------------------------------------------------------------- 1 | ## This function is copied from https://github.com/Rubikplayer/flame-fitting 2 | 3 | ''' 4 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved. 5 | This software is provided for research purposes only. 6 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license 7 | More information about SMPL is available here http://smpl.is.tue.mpg. 8 | For comments or questions, please email us at: smpl@tuebingen.mpg.de 9 | About this file: 10 | ================ 11 | This file defines the serialization functions of the SMPL model. 12 | Modules included: 13 | - save_model: 14 | saves the SMPL model to a given file location as a .pkl file 15 | - load_model: 16 | loads the SMPL model from a given file location (i.e. a .pkl file location), 17 | or a dictionary object. 
18 | ''' 19 | # import cPickle as pickle 20 | import pickle 21 | import numpy as np 22 | import chumpy as ch 23 | from chumpy.ch import MatVecMult 24 | from .verts import verts_core 25 | from .posemapper import posemap 26 | 27 | def backwards_compatibility_replacements(dd): 28 | # replacements 29 | if 'default_v' in dd: 30 | dd['v_template'] = dd['default_v'] 31 | del dd['default_v'] 32 | if 'template_v' in dd: 33 | dd['v_template'] = dd['template_v'] 34 | del dd['template_v'] 35 | if 'joint_regressor' in dd: 36 | dd['J_regressor'] = dd['joint_regressor'] 37 | del dd['joint_regressor'] 38 | if 'blendshapes' in dd: 39 | dd['posedirs'] = dd['blendshapes'] 40 | del dd['blendshapes'] 41 | if 'J' not in dd: 42 | dd['J'] = dd['joints'] 43 | del dd['joints'] 44 | 45 | # defaults 46 | if 'bs_style' not in dd: 47 | dd['bs_style'] = 'lbs' 48 | 49 | 50 | def ready_arguments(fname_or_dict): 51 | if not isinstance(fname_or_dict, dict): 52 | dd = pickle.load(open(fname_or_dict)) 53 | else: 54 | dd = fname_or_dict 55 | 56 | backwards_compatibility_replacements(dd) 57 | 58 | want_shapemodel = 'shapedirs' in dd 59 | nposeparms = dd['kintree_table'].shape[1] * 3 60 | 61 | if 'trans' not in dd: 62 | dd['trans'] = np.zeros(3) 63 | if 'pose' not in dd: 64 | dd['pose'] = np.zeros(nposeparms) 65 | if 'shapedirs' in dd and 'betas' not in dd: 66 | dd['betas'] = np.zeros(dd['shapedirs'].shape[-1]) 67 | 68 | for s in ['v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 'betas', 'J']: 69 | if (s in dd) and not hasattr(dd[s], 'dterms'): 70 | dd[s] = ch.array(dd[s]) 71 | 72 | if want_shapemodel: 73 | dd['v_shaped'] = dd['shapedirs'].dot(dd['betas']) + dd['v_template'] 74 | v_shaped = dd['v_shaped'] 75 | J_tmpx = MatVecMult(dd['J_regressor'], v_shaped[:, 0]) 76 | J_tmpy = MatVecMult(dd['J_regressor'], v_shaped[:, 1]) 77 | J_tmpz = MatVecMult(dd['J_regressor'], v_shaped[:, 2]) 78 | dd['J'] = ch.vstack((J_tmpx, J_tmpy, J_tmpz)).T 79 | dd['v_posed'] = v_shaped + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose'])) 80 | else: 81 | dd['v_posed'] = dd['v_template'] + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose'])) 82 | 83 | return dd 84 | 85 | def load_model(fname_or_dict): 86 | dd = ready_arguments(fname_or_dict) 87 | 88 | args = { 89 | 'pose': dd['pose'], 90 | 'v': dd['v_posed'], 91 | 'J': dd['J'], 92 | 'weights': dd['weights'], 93 | 'kintree_table': dd['kintree_table'], 94 | 'xp': ch, 95 | 'want_Jtr': True, 96 | 'bs_style': dd['bs_style'] 97 | } 98 | 99 | result, Jtr = verts_core(**args) 100 | result = result + dd['trans'].reshape((1, 3)) 101 | result.J_transformed = Jtr + dd['trans'].reshape((1, 3)) 102 | 103 | for k, v in dd.items(): 104 | setattr(result, k, v) 105 | 106 | return result -------------------------------------------------------------------------------- /smpl_lib/smpl_paths.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | # import cPickle as pkl 3 | import pickle as pkl 4 | from smpl_lib.serialization import backwards_compatibility_replacements, load_model 5 | import scipy.sparse as sp 6 | from global_var import SMPL_PATH_MALE, SMPL_PATH_FEMALE 7 | 8 | 9 | def row(A): 10 | return A.reshape((1, -1)) 11 | 12 | 13 | def col(A): 14 | return A.reshape((-1, 1)) 15 | 16 | 17 | def get_vert_connectivity(mesh_v, mesh_f): 18 | """Returns a sparse matrix (of size #verts x #verts) where each nonzero 19 | element indicates a neighborhood relation. 
For example, if there is a
20 | nonzero element in position (15,12), that means vertex 15 is connected
21 | by an edge to vertex 12."""
22 | 
23 | vpv = sp.csc_matrix((len(mesh_v),len(mesh_v)))
24 | 
25 | # for each column in the faces...
26 | for i in range(3):
27 | IS = mesh_f[:,i]
28 | JS = mesh_f[:,(i+1)%3]
29 | data = np.ones(len(IS))
30 | ij = np.vstack((row(IS.flatten()), row(JS.flatten())))
31 | mtx = sp.csc_matrix((data, ij), shape=vpv.shape)
32 | vpv = vpv + mtx + mtx.T
33 | 
34 | return vpv
35 | 
36 | 
37 | def get_vert_opposites_per_edge(mesh_v, mesh_f):
38 | """Returns a dictionary from vertidx-pairs to opposites.
39 | For example, a key might be (4, 5), meaning the edge between
40 | vertices 4 and 5, and its value might be [10, 11], which are the indices
41 | of the vertices opposing this edge."""
42 | result = {}
43 | for f in mesh_f:
44 | for i in range(3):
45 | key = [f[i], f[(i+1)%3]]
46 | key.sort()
47 | key = tuple(key)
48 | val = f[(i+2)%3]
49 | 
50 | if key in result:
51 | result[key].append(val)
52 | else:
53 | result[key] = [val]
54 | return result
55 | 
56 | def get_vertices_per_edge(mesh_v, mesh_f):
57 | """Returns an Ex2 array of adjacencies between vertices, where
58 | each element in the array is a vertex index. Each edge is included
59 | only once. If the output of get_faces_per_edge is provided, it is used to
60 | avoid a call to get_vert_connectivity()"""
61 | 
62 | vc = sp.coo_matrix(get_vert_connectivity(mesh_v, mesh_f))
63 | result = np.hstack((col(vc.row), col(vc.col)))
64 | result = result[result[:,0] < result[:,1]] # for uniqueness
65 | 
66 | return result
67 | 
68 | def loop_subdivider(mesh_v, mesh_f):
69 | 
70 | IS = []
71 | JS = []
72 | data = []
73 | 
74 | vc = get_vert_connectivity(mesh_v, mesh_f)
75 | ve = get_vertices_per_edge(mesh_v, mesh_f)
76 | vo = get_vert_opposites_per_edge(mesh_v, mesh_f)
77 | 
78 | if True:
79 | # New values for each vertex
80 | for idx in range(len(mesh_v)):
81 | 
82 | # find neighboring vertices
83 | nbrs = np.nonzero(vc[:,idx])[0]
84 | 
85 | nn = len(nbrs)
86 | 
87 | if nn < 3:
88 | wt = 0.
89 | elif nn == 3:
90 | wt = 3./16.
91 | elif nn > 3:
92 | wt = 3. / (8. * nn)
93 | else:
94 | raise Exception('nn should be 3 or more')
95 | if wt > 0.:
96 | for nbr in nbrs:
97 | IS.append(idx)
98 | JS.append(nbr)
99 | data.append(wt)
100 | 
101 | JS.append(idx)
102 | IS.append(idx)
103 | data.append(1. 
- (wt * nn)) 104 | 105 | start = len(mesh_v) 106 | edge_to_midpoint = {} 107 | 108 | if True: 109 | # New values for each edge: 110 | # new edge verts depend on the verts they span 111 | for idx, vs in enumerate(ve): 112 | 113 | vsl = list(vs) 114 | vsl.sort() 115 | IS.append(start + idx) 116 | IS.append(start + idx) 117 | JS.append(vsl[0]) 118 | JS.append(vsl[1]) 119 | data.append(3./8) 120 | data.append(3./8) 121 | 122 | opposites = vo[(vsl[0], vsl[1])] 123 | for opp in opposites: 124 | IS.append(start + idx) 125 | JS.append(opp) 126 | data.append(2./8./len(opposites)) 127 | 128 | edge_to_midpoint[(vsl[0], vsl[1])] = start + idx 129 | edge_to_midpoint[(vsl[1], vsl[0])] = start + idx 130 | 131 | f = [] 132 | 133 | for f_i, old_f in enumerate(mesh_f): 134 | ff = np.concatenate((old_f, old_f)) 135 | 136 | for i in range(3): 137 | v0 = edge_to_midpoint[(ff[i], ff[i+1])] 138 | v1 = ff[i+1] 139 | v2 = edge_to_midpoint[(ff[i+1], ff[i+2])] 140 | f.append(row(np.array([v0,v1,v2]))) 141 | v0 = edge_to_midpoint[(ff[0], ff[1])] 142 | v1 = edge_to_midpoint[(ff[1], ff[2])] 143 | v2 = edge_to_midpoint[(ff[2], ff[3])] 144 | f.append(row(np.array([v0,v1,v2]))) 145 | 146 | f = np.vstack(f) 147 | 148 | IS = np.array(IS, dtype=np.uint32) 149 | JS = np.array(JS, dtype=np.uint32) 150 | 151 | if True: # for x,y,z coords 152 | IS = np.concatenate((IS*3, IS*3+1, IS*3+2)) 153 | JS = np.concatenate((JS*3, JS*3+1, JS*3+2)) 154 | data = np.concatenate ((data,data,data)) 155 | 156 | ij = np.vstack((IS.flatten(), JS.flatten())) 157 | mtx = sp.csc_matrix((data, ij)) 158 | 159 | return mtx, f 160 | 161 | def get_hres(v, f): 162 | """ 163 | Get an upsampled version of the mesh. 164 | OUTPUT: 165 | - nv: new vertices 166 | - nf: faces of the upsampled 167 | - mapping: mapping from low res to high res 168 | """ 169 | (mapping, nf) = loop_subdivider(v, f) 170 | nv = mapping.dot(v.ravel()).reshape(-1, 3) 171 | return (nv, nf, mapping) 172 | 173 | 174 | # smpl_vt_ft_path = '/BS/bharat/work/MGN_final_release/assets/smpl_vt_ft.pkl' 175 | class SmplPaths: 176 | def __init__(self, project_dir='', exp_name='', gender='neutral', garment=''): 177 | self.project_dir = project_dir 178 | # experiments name 179 | self.exp_name = exp_name 180 | self.gender = gender 181 | self.garment = garment 182 | 183 | def get_smpl_file(self): 184 | if self.gender == 'male': 185 | return SMPL_PATH_MALE 186 | elif self.gender == 'female': 187 | return SMPL_PATH_FEMALE 188 | else: 189 | print("Get proper neutral model and check for neutral betas.") 190 | raise NotImplemented 191 | 192 | def get_smpl(self): 193 | smpl_m = load_model(self.get_smpl_file()) 194 | smpl_m.gender = self.gender 195 | return smpl_m 196 | 197 | def get_hres_smpl_model_data(self): 198 | 199 | dd = pkl.load(open(self.get_smpl_file(), 'rb'), encoding='latin1') 200 | backwards_compatibility_replacements(dd) 201 | 202 | hv, hf, mapping = get_hres(dd['v_template'], dd['f']) 203 | 204 | num_betas = dd['shapedirs'].shape[-1] 205 | J_reg = dd['J_regressor'].asformat('csr') 206 | 207 | model = { 208 | 'v_template': hv, 209 | 'weights': np.hstack([ 210 | np.expand_dims( 211 | np.mean( 212 | mapping.dot(np.repeat(np.expand_dims(dd['weights'][:, i], -1), 3)).reshape(-1, 3) 213 | , axis=1), 214 | axis=-1) 215 | for i in range(24) 216 | ]), 217 | 'posedirs': mapping.dot(dd['posedirs'].reshape((-1, 207))).reshape(-1, 3, 207), 218 | 'shapedirs': mapping.dot(dd['shapedirs'].reshape((-1, num_betas))).reshape(-1, 3, num_betas), 219 | 'J_regressor': sp.csr_matrix((J_reg.data, J_reg.indices, 
J_reg.indptr), shape=(24, hv.shape[0])), 220 | 'kintree_table': dd['kintree_table'], 221 | 'bs_type': dd['bs_type'], 222 | 'bs_style': dd['bs_style'], 223 | 'J': dd['J'], 224 | 'f': hf, 225 | } 226 | 227 | return model 228 | 229 | def get_hres_smpl(self): 230 | smpl_m = load_model(self.get_hres_smpl_model_data()) 231 | smpl_m.gender = self.gender 232 | return smpl_m 233 | 234 | # @staticmethod 235 | # def get_vt_ft(): 236 | # vt, ft = pkl.load(open(smpl_vt_ft_path)) 237 | # return vt, ft 238 | # 239 | # @staticmethod 240 | # def get_vt_ft_hres(): 241 | # vt, ft = SmplPaths.get_vt_ft() 242 | # vt, ft, _ = get_hres(np.hstack((vt, np.ones((vt.shape[0], 1)))), ft) 243 | # return vt[:, :2], ft 244 | 245 | 246 | if __name__ == '__main__': 247 | dp = SmplPaths(gender='female') 248 | smpl_file = dp.get_smpl_file() 249 | smpl = dp.get_smpl() 250 | smpl_hres = dp.get_hres_smpl() 251 | print(smpl_file) -------------------------------------------------------------------------------- /smpl_lib/verts.py: -------------------------------------------------------------------------------- 1 | ## This function is copied from https://github.com/Rubikplayer/flame-fitting 2 | 3 | ''' 4 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved. 5 | This software is provided for research purposes only. 6 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license 7 | More information about SMPL is available here http://smpl.is.tue.mpg. 8 | For comments or questions, please email us at: smpl@tuebingen.mpg.de 9 | About this file: 10 | ================ 11 | This file defines the basic skinning modules for the SMPL loader which 12 | defines the effect of bones and blendshapes on the vertices of the template mesh. 13 | Modules included: 14 | - verts_decorated: 15 | creates an instance of the SMPL model which inherits model attributes from another 16 | SMPL model. 17 | - verts_core: [overloaded function inherited by lbs.verts_core] 18 | computes the blending of joint-influences for each vertex based on type of skinning 19 | ''' 20 | 21 | import chumpy 22 | from . 
import lbs 23 | from .posemapper import posemap 24 | import scipy.sparse as sp 25 | from chumpy.ch import MatVecMult 26 | 27 | 28 | def ischumpy(x): return hasattr(x, 'dterms') 29 | 30 | 31 | def verts_decorated(trans, pose, 32 | v_template, J, weights, kintree_table, bs_style, f, 33 | bs_type=None, posedirs=None, betas=None, shapedirs=None, want_Jtr=False): 34 | for which in [trans, pose, v_template, weights, posedirs, betas, shapedirs]: 35 | if which is not None: 36 | assert ischumpy(which) 37 | 38 | v = v_template 39 | 40 | if shapedirs is not None: 41 | if betas is None: 42 | betas = chumpy.zeros(shapedirs.shape[-1]) 43 | v_shaped = v + shapedirs.dot(betas) 44 | else: 45 | v_shaped = v 46 | 47 | if posedirs is not None: 48 | v_posed = v_shaped + posedirs.dot(posemap(bs_type)(pose)) 49 | else: 50 | v_posed = v_shaped 51 | 52 | v = v_posed 53 | 54 | if sp.issparse(J): 55 | regressor = J 56 | J_tmpx = MatVecMult(regressor, v_shaped[:, 0]) 57 | J_tmpy = MatVecMult(regressor, v_shaped[:, 1]) 58 | J_tmpz = MatVecMult(regressor, v_shaped[:, 2]) 59 | J = chumpy.vstack((J_tmpx, J_tmpy, J_tmpz)).T 60 | else: 61 | assert (ischumpy(J)) 62 | 63 | assert (bs_style == 'lbs') 64 | result, Jtr = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr=True, xp=chumpy) 65 | 66 | tr = trans.reshape((1, 3)) 67 | result = result + tr 68 | Jtr = Jtr + tr 69 | 70 | result.trans = trans 71 | result.f = f 72 | result.pose = pose 73 | result.v_template = v_template 74 | result.J = J 75 | result.weights = weights 76 | result.kintree_table = kintree_table 77 | result.bs_style = bs_style 78 | result.bs_type = bs_type 79 | if posedirs is not None: 80 | result.posedirs = posedirs 81 | result.v_posed = v_posed 82 | if shapedirs is not None: 83 | result.shapedirs = shapedirs 84 | result.betas = betas 85 | result.v_shaped = v_shaped 86 | if want_Jtr: 87 | result.J_transformed = Jtr 88 | return result 89 | 90 | 91 | def verts_core(pose, v, J, weights, kintree_table, bs_style, want_Jtr=False, xp=chumpy): 92 | if xp == chumpy: 93 | assert (hasattr(pose, 'dterms')) 94 | assert (hasattr(v, 'dterms')) 95 | assert (hasattr(J, 'dterms')) 96 | assert (hasattr(weights, 'dterms')) 97 | 98 | assert (bs_style == 'lbs') 99 | result = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr, xp) 100 | 101 | return result -------------------------------------------------------------------------------- /style_pca/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/style_pca/__init__.py -------------------------------------------------------------------------------- /style_pca/copy_raw.py: -------------------------------------------------------------------------------- 1 | """ 2 | Go through the scanning dataset and find available registrations for each garment class 3 | save their displacement (in one .npy file) and meta information (triangulation, mapping with SMPL) 4 | INPUT: registration mesh under /BS/bharat-2/work/data/renderings 5 | OUTPUT: {ROOT}/raw_data/{garment_class}.npy 6 | {ROOT}/garment_class_info.pkl 7 | """ 8 | import os 9 | import json 10 | import numpy as np 11 | import pickle 12 | from utils.rotation import get_Apose 13 | import trimesh 14 | 15 | APOSE = get_Apose() 16 | 17 | upper_lower_dict = {'t-shirt': 'UpperClothes', 'pant': 'Pants', 'shirt': 'UpperClothes', 'short-pant': 'Pants' 18 | } 19 | v_num = {'t-shirt': 7702, 'shirt': 9723, 'pant': 4718, 'short-pant': 2710, 
'coat': 10116, 'skirt': 7130} 20 | 21 | 22 | def sort_types(RAW_DIR): 23 | set_name = os.listdir(RAW_DIR) 24 | people_name = {} 25 | garment_types = {} 26 | garment_paths = {} 27 | genders = {} 28 | for sn in set_name: 29 | names = [] 30 | for n in os.listdir(os.path.join(RAW_DIR, sn)): 31 | people_dir = os.path.join(RAW_DIR, sn, n) 32 | if not os.path.isdir(people_dir): 33 | continue 34 | json_path = os.path.join(people_dir, "{}_annotations.json".format(n)) 35 | if not os.path.exists(json_path): 36 | continue 37 | with open(json_path) as jf: 38 | annotations = json.load(jf) 39 | gtypes = annotations['garments'] 40 | 41 | # statistics 42 | for gt in gtypes: 43 | if gt not in garment_types: 44 | garment_types[gt] = 1 45 | garment_paths[gt] = [people_dir] 46 | else: 47 | garment_types[gt] += 1 48 | garment_paths[gt].append(people_dir) 49 | 50 | gender = annotations['gender'] 51 | if gender not in genders: 52 | genders[gender] = 1 53 | else: 54 | genders[gender] += 1 55 | 56 | print(garment_types) 57 | print(genders) 58 | return garment_paths 59 | 60 | 61 | if __name__ == '__main__': 62 | garment_class = 'short-pant' 63 | upper_lower = upper_lower_dict[garment_class] 64 | RAW_DIR = '/BS/bharat-2/work/data/renderings' 65 | SAVE_DIR = '/BS/cloth-anim/static00/tailor_data/raw_data' 66 | 67 | garment_paths = sort_types(RAW_DIR) 68 | garment_paths = garment_paths[garment_class] 69 | 70 | all_disp = [] 71 | betas = [] 72 | vert_indices, faces = None, None 73 | for garment_path in garment_paths: 74 | people_name = os.path.split(garment_path)[1] 75 | garment_ply_path = os.path.join(garment_path, 'temp20', '{}_unposed.ply'.format(upper_lower)) 76 | garment_pkl_path = os.path.join(garment_path, 'temp20', '{}_unposed.pkl'.format(upper_lower)) 77 | if not (os.path.exists(garment_ply_path) and os.path.exists(garment_pkl_path)): 78 | continue 79 | with open(garment_pkl_path, 'rb') as f: 80 | garment_info = pickle.load(f, encoding='latin-1') 81 | 82 | displacement = garment_info['garment_offsets'] 83 | beta = garment_info['betas'] 84 | assert beta.shape == (10,) 85 | if len(displacement) != v_num[garment_class]: 86 | continue 87 | all_disp.append(displacement) 88 | betas.append(beta) 89 | all_disp = np.array(all_disp).astype(np.float32) 90 | np.save(os.path.join(SAVE_DIR, '{}.npy'.format(garment_class)), all_disp) 91 | betas = np.array(betas).astype(np.float32) 92 | np.save(os.path.join(SAVE_DIR, '{}_betas.npy'.format(garment_class)), betas) 93 | 94 | -------------------------------------------------------------------------------- /style_pca/gender_pca.py: -------------------------------------------------------------------------------- 1 | import os 2 | import os.path as osp 3 | import numpy as np 4 | from global_var import ROOT 5 | 6 | 7 | pca_range = { 8 | 't-shirt': { 9 | 'neutral': [[-0.4, 0, 0, 0], [2, 1.5, 0, 0], -0.004], 10 | 'male': [[0.6, 0, 0, 0], [2, 1.5, 0, 0], 0.008], 11 | 'female': [[-1, 0, 0, 0], [2, 1.5, 0, 0], -0.016] 12 | }, 13 | # 'shirt': { 14 | # 'neutral': [[0.5, 0, 0, 0], [1.5, 2., 0, 0]], 15 | # 'male': [[2, 0, 0, 0], [1.5, 2, 0, 0], 0.004], 16 | # 'female': [[-0.4, 0, 0, 0], [1.5, 2, 0, 0], -0.008 ] 17 | # }, 18 | 'shirt': { 19 | 'neutral': [[0., 0, 0, 0], [1, 1.5, 0, 0]], 20 | 'male': [[1.5, 0, 0, 0], [1, 1.5, 0, 0]], 21 | 'female': [[-0.5, 0, 0, 0], [1, 1.5, 0, 0]] 22 | }, 23 | 'pant': { 24 | 'neutral': [[0.5, 0.5, 0, 0], [1, 0.5, 0, 0], -0.014], 25 | 'male': [[1, 0.5, 0, 0], [1, 0.5, 0, 0], -0.03], 26 | 'female': [[0, 0.5, 0, 0], [1, 0.5, 0, 0], 0.01] 27 | }, 28 | 'skirt_orig': { 
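    # each entry reads [coeff_mean, coeff_range(, extra scalar)]; only the first
    # two 4-vectors are written into style_model.npz in the loop below, so the
    # optional third value is presumably a per-gender offset consumed elsewhere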
29 | 'female': [[-2, 0, 0, 0], [2, 0, 0, 0], 0] 30 | }, 31 | 'skirt': { 32 | 'female': [[-3, -0.6, 0, 0], [3, 0.6, 0, 0], 0] 33 | }, 34 | 'short-pant': { 35 | 'female': [[0, -0.6, 0, 0], [3, 1, 0, 0]], 36 | 'male': [[-1, 0, 0, 0], [2, 1, 0, 0]], 37 | } 38 | 39 | } 40 | 41 | 42 | if __name__ == '__main__': 43 | for gc in ['skirt']: 44 | orig_pca_path = osp.join(ROOT, 'pca', 'style_model_{}.npz'.format(gc)) 45 | orig_pca = np.load(orig_pca_path) 46 | for gender in ['female']: 47 | pca_r = pca_range[gc][gender] 48 | save_dir = osp.join(ROOT, '{}_{}'.format(gc, gender)) 49 | os.makedirs(save_dir, exist_ok=True) 50 | np.savez(osp.join(save_dir, 'style_model.npz'), 51 | pca_w=orig_pca['pca_w'], mean=orig_pca['mean'], 52 | coeff_mean=np.array(pca_r[0], dtype=np.float32), 53 | coeff_range=np.array(pca_r[1], dtype=np.float32)) -------------------------------------------------------------------------------- /style_pca/pca_interactive.py: -------------------------------------------------------------------------------- 1 | """ 2 | Interactively visualize style PCA 3 | """ 4 | import os 5 | import os.path as osp 6 | import cv2 7 | import pickle 8 | import numpy as np 9 | import trimesh 10 | from utils.renderer import Renderer 11 | from sklearn.decomposition import PCA 12 | import global_var 13 | 14 | 15 | class Controller(object): 16 | def __init__(self, gc, gender): 17 | data_dir = osp.join(global_var.ROOT, 'pca') 18 | style_model = np.load(osp.join(data_dir, 'style_model_{}.npz'.format(gc))) 19 | 20 | self.renderer = Renderer(512) 21 | self.img = None 22 | self.body = trimesh.load(osp.join(global_var.ROOT, 'smpl', 'hres_{}.obj'.format(gender)), process=False) 23 | 24 | with open(osp.join(global_var.ROOT, 'garment_class_info.pkl'), 'rb') as f: 25 | garment_meta = pickle.load(f) 26 | # self.vert_indcies = garment_meta[gc]['vert_indices'] 27 | self.f = garment_meta[gc]['f'] 28 | 29 | self.gamma = np.zeros([1, 4], dtype=np.float32) 30 | self.pca = PCA(n_components=4) 31 | self.pca.components_ = style_model['pca_w'] 32 | self.pca.mean_ = style_model['mean'] 33 | win_name = '{}_{}'.format(gc, gender) 34 | cv2.namedWindow(win_name) 35 | cv2.createTrackbar('0', win_name, 100, 200, self.value_change) 36 | cv2.createTrackbar('1', win_name, 100, 200, self.value_change) 37 | cv2.createTrackbar('2', win_name, 100, 200, self.value_change) 38 | cv2.createTrackbar('3', win_name, 100, 200, self.value_change) 39 | cv2.createTrackbar('trans', win_name, 100, 200, self.value_change) 40 | self.trans = 0 41 | self.win_name = win_name 42 | 43 | self.render() 44 | 45 | def value_change(self, x): 46 | g0 = cv2.getTrackbarPos('0', self.win_name) 47 | g1 = cv2.getTrackbarPos('1', self.win_name) 48 | g2 = cv2.getTrackbarPos('2', self.win_name) 49 | g3 = cv2.getTrackbarPos('3', self.win_name) 50 | trans = cv2.getTrackbarPos('trans', self.win_name) 51 | self.gamma[0] = [self.convert(g0), self.convert(g1), self.convert(g2), self.convert(g3)] 52 | self.trans = (trans - 100) / 1000. 53 | self.render() 54 | print(f"{self.gamma[0]}\t{self.trans*100}") 55 | 56 | def convert(self, v): 57 | return (v - 100.) / 20. 
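    # worked example of the slider mapping: each trackbar spans [0, 200] with 100
    # at the center, so convert() yields PCA coefficients in [-5, 5]
    # (convert(140) == 2.0, convert(60) == -2.0); the 'trans' slider similarly
    # maps to a vertical offset in [-0.1, 0.1] via (trans - 100) / 1000.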
58 | 59 | def render(self): 60 | v = self.pca.inverse_transform(self.gamma).reshape([-1, 3]) 61 | v[:, 1] += self.trans 62 | self.img = self.renderer([v, self.body.vertices], [self.f, self.body.faces], 63 | [np.array([0.8, 0.5, 0.3]), np.array([0.6, 0.6, 0.9])], trans=(1, 0, 0.3)) 64 | 65 | 66 | if __name__ == '__main__': 67 | gender = 'female' 68 | garment_class = 'skirt' 69 | controller = Controller(garment_class, gender) 70 | while True: 71 | cv2.imshow(controller.win_name, controller.img) 72 | k = cv2.waitKey(10) 73 | if k == ord('q'): 74 | break 75 | cv2.destroyAllWindows() 76 | -------------------------------------------------------------------------------- /style_pca/proc_raw.py: -------------------------------------------------------------------------------- 1 | """ 2 | 1. Convert raw garment displacement to mesh 3 | (the same displacement to different meshes of different genders) 4 | Here, we use chest-flattened SMPL body, so that the garment mesh doesn't have chest bump 5 | 2. Smooth all garments and save their mesh and displacement 6 | INPUT: {ROOT}/raw_data/{garment_class}.npy 7 | OUTPUT: {ROOT}/{garment_class}_{gender}/pca/raw.npy 8 | smooth.npy 9 | smooth_disp.npy 10 | """ 11 | import os 12 | import os.path as osp 13 | import pickle 14 | import numpy as np 15 | from tqdm import tqdm 16 | from smpl_torch import SMPLNP 17 | from utils.rotation import get_Apose 18 | from utils.diffusion_smoothing import DiffusionSmoothing as DS 19 | from global_var import ROOT 20 | 21 | 22 | if __name__ == '__main__': 23 | garment_class = 'pant' 24 | raw_dir = osp.join(ROOT, 'raw_data') 25 | # save_dir = osp.join(ROOT, '{}_{}/pca'.format(garment_class, gender)) 26 | 27 | smpl = SMPLNP('neutral') 28 | apose = get_Apose() 29 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f: 30 | class_info = pickle.load(f, encoding='latin-1') 31 | vert_indices = class_info[garment_class]['vert_indices'] 32 | 33 | raw_path = osp.join(raw_dir, '{}.npy'.format(garment_class)) 34 | beta_path = osp.join(raw_dir, '{}_betas.npy'.format(garment_class)) 35 | all_disp = np.load(raw_path) 36 | betas = np.load(beta_path) 37 | 38 | data_num = len(all_disp) 39 | vbody, vcloth = smpl(betas, np.tile(apose[None], [data_num, 1]), 40 | all_disp, garment_class, batch=True) 41 | canonical_body, _ = smpl(np.zeros([10]), apose, None, None) 42 | trans = np.mean((canonical_body[None] - vbody)[:, vert_indices, :], 1, keepdims=True) 43 | trans_cloth = vcloth + trans 44 | 45 | # if you use your own data, make sure trans_cloth is a garment that aligns canonical_body (a-pose, zero beta) 46 | 47 | np.save(osp.join(raw_dir, '{}_trans.npy'.format(garment_class)), trans_cloth) 48 | 49 | # smoothing 50 | print("Start smoothing") 51 | ds = DS(trans_cloth[0], class_info[garment_class]['f']) 52 | smooth_step = 100 53 | smooth_vs = [] 54 | for unsmooth_v in tqdm(trans_cloth): 55 | smooth_v = unsmooth_v.copy() 56 | for _ in range(smooth_step): 57 | smooth_v = ds.smooth(smooth_v, smoothness=0.03) 58 | smooth_vs.append(smooth_v) 59 | smooth_vs = np.array(smooth_vs, dtype=np.float32) 60 | 61 | np.save(osp.join(raw_dir, '{}_smooth.npy'.format(garment_class)), smooth_vs) 62 | -------------------------------------------------------------------------------- /style_pca/proc_raw_skirt.py: -------------------------------------------------------------------------------- 1 | """ 2 | Go through the scanning dataset and find available registrations for each garment class 3 | save their displacement (in one .npy file) and meta information 
(triangulation, mapping with SMPL) 4 | INPUT: registration mesh under /BS/bharat-2/work/data/renderings 5 | OUTPUT: {ROOT}/raw_data/{garment_class}.npy 6 | {ROOT}/garment_class_info.pkl 7 | """ 8 | import os 9 | import os.path as osp 10 | import json 11 | import numpy as np 12 | import pickle 13 | from utils.ios import read_obj 14 | from smpl_torch import SMPLNP 15 | from tqdm import tqdm 16 | from utils.rotation import get_Apose 17 | from utils.diffusion_smoothing import DiffusionSmoothing as DS 18 | from global_var import * 19 | 20 | APOSE = get_Apose() 21 | 22 | v_num = {'t-shirt': 7702, 'shirt': 9723, 'pant': 4718, 'short-pant': 2710, 'coat': 10116, 'skirt': 7130} 23 | 24 | 25 | def get_verts(RAW_DIR): 26 | verts = [] 27 | people_names = [os.path.splitext(k)[0] for k in os.listdir(RAW_DIR)] 28 | people_names = list(set(people_names)) 29 | faces = None 30 | for people_name in people_names: 31 | obj_path = osp.join(RAW_DIR, '{}.obj'.format(people_name)) 32 | 33 | if not os.path.exists(obj_path): 34 | continue 35 | v, faces_ = read_obj(obj_path) 36 | if faces is None: 37 | faces = faces_ 38 | verts.append(v) 39 | 40 | return people_names, verts, faces 41 | 42 | 43 | if __name__ == '__main__': 44 | garment_class = 'skirt' 45 | RAW_DIR = osp.join(ROOT, 'skirt_reg') 46 | SAVE_DIR = osp.join(ROOT, 'raw_data') 47 | 48 | people_names, verts, faces = get_verts(RAW_DIR) 49 | n_verts = verts[0].shape[0] 50 | 51 | smpl = SMPLNP('female') 52 | apose = get_Apose() 53 | canonical_body, _ = smpl(np.zeros([300]), apose, None, None) 54 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f: 55 | class_info = pickle.load(f, encoding='latin-1') 56 | pant_ind = class_info['pant']['vert_indices'] 57 | pant_bnd = np.load(osp.join(ROOT, 'pant_upper_boundary.npy')) 58 | skirt_bnd = np.load(osp.join(ROOT, 'skirt_upper_boundary.npy')) 59 | pant_bnd_loc = np.mean(canonical_body[pant_ind][pant_bnd], 0) 60 | 61 | all_v = [] 62 | for people_name, v in zip(people_names, verts): 63 | skirt_bnd_loc = np.mean(v[skirt_bnd], 0) 64 | trans = (pant_bnd_loc - skirt_bnd_loc)[None] 65 | trans_v = v + trans 66 | all_v.append(trans_v) 67 | all_v = np.array(all_v).astype(np.float32) 68 | np.save(os.path.join(SAVE_DIR, '{}.npy'.format(garment_class)), all_v) 69 | np.save(os.path.join(SAVE_DIR, '{}_trans.npy'.format(garment_class)), all_v) 70 | 71 | # smoothing 72 | ds = DS(all_v[0], faces) 73 | smooth_step = 100 74 | smooth_vs = [] 75 | for unsmooth_v in tqdm(all_v): 76 | smooth_v = unsmooth_v.copy() 77 | for _ in range(smooth_step): 78 | smooth_v = ds.smooth(smooth_v, smoothness=0.03) 79 | smooth_vs.append(smooth_v) 80 | smooth_vs = np.array(smooth_vs, dtype=np.float32) 81 | 82 | np.save(osp.join(SAVE_DIR, '{}_smooth.npy'.format(garment_class)), smooth_vs) 83 | 84 | # save garment meta information (triangulation and vert_indices for each garment class) 85 | garment_meta_path = os.path.join(SAVE_DIR, '..', 'garment_class_info.pkl') 86 | if os.path.exists(garment_meta_path): 87 | with open(garment_meta_path, 'rb') as f: 88 | garment_meta = pickle.load(f, encoding='latin-1') 89 | else: 90 | garment_meta = {} 91 | if garment_class not in garment_meta: 92 | garment_meta[garment_class] = {'vert_indices': [-1]*n_verts, 'f': faces} 93 | with open(garment_meta_path, 'wb') as f: 94 | pickle.dump(garment_meta, f) 95 | -------------------------------------------------------------------------------- /style_pca/proc_raw_skirt_ext.py: -------------------------------------------------------------------------------- 1 | """ 2 | Go 
through the scanning dataset and find available registrations for each garment class 3 | save their displacement (in one .npy file) and meta information (triangulation, mapping with SMPL) 4 | INPUT: registration mesh under /BS/bharat-2/work/data/renderings 5 | OUTPUT: {ROOT}/raw_data/{garment_class}.npy 6 | {ROOT}/garment_class_info.pkl 7 | """ 8 | import os 9 | import os.path as osp 10 | import json 11 | import numpy as np 12 | import pickle 13 | from utils.ios import read_obj 14 | from smpl_torch import SMPLNP 15 | from tqdm import tqdm 16 | from utils.rotation import get_Apose 17 | from utils.diffusion_smoothing import DiffusionSmoothing as DS 18 | from global_var import * 19 | 20 | APOSE = get_Apose() 21 | 22 | v_num = {'t-shirt': 7702, 'shirt': 9723, 'pant': 4718, 'short-pant': 2710, 'coat': 10116, 'skirt': 7130} 23 | 24 | 25 | def get_verts(RAW_DIR): 26 | verts = [] 27 | people_names = [os.path.splitext(k)[0] for k in os.listdir(RAW_DIR)] 28 | people_names = list(set(people_names)) 29 | faces = None 30 | for people_name in people_names: 31 | obj_path = osp.join(RAW_DIR, '{}.obj'.format(people_name)) 32 | 33 | if not os.path.exists(obj_path): 34 | continue 35 | v, faces_ = read_obj(obj_path) 36 | if faces is None: 37 | faces = faces_ 38 | verts.append(v) 39 | verts = np.array(verts) 40 | 41 | return people_names, verts, faces 42 | 43 | 44 | def get_orig_verts(RAW_DIR, bad_path): 45 | verts = [] 46 | people_names = [os.path.splitext(k)[0] for k in os.listdir(RAW_DIR)] 47 | people_names = list(set(people_names)) 48 | faces = None 49 | for people_name in people_names: 50 | obj_path = osp.join(RAW_DIR, '{}.obj'.format(people_name)) 51 | 52 | if not os.path.exists(obj_path): 53 | continue 54 | v, faces_ = read_obj(obj_path) 55 | if faces is None: 56 | faces = faces_ 57 | verts.append(v) 58 | 59 | with open(bad_path) as f: 60 | bad_idx = f.read().splitlines() 61 | valid_vs = [] 62 | bad_idx = [int(k) for k in bad_idx] 63 | for i in range(len(verts)): 64 | if i not in bad_idx: 65 | valid_vs.append(verts[i]) 66 | verts = np.array(valid_vs) 67 | 68 | return people_names, verts, faces 69 | 70 | 71 | 72 | if __name__ == '__main__': 73 | garment_class = 'skirt' 74 | RAW_DIR = osp.join(ROOT, 'skirt_orig_reg') 75 | RAW_EXT_DIR = osp.join(ROOT, 'skirt_reg') 76 | SAVE_DIR = osp.join(ROOT, 'raw_data') 77 | # SAVE_DIR = '/BS/cloth-anim/static00/tailor_data/raw_data' 78 | 79 | people_names, verts, faces = get_orig_verts(RAW_DIR, osp.join(ROOT, 'skirt_orig_bad.txt')) 80 | people_names_ext, verts_ext, _ = get_verts(RAW_EXT_DIR) 81 | verts_ext /= 1.15 82 | verts = np.concatenate((verts, verts_ext), 0) 83 | people_names = people_names + people_names_ext 84 | n_verts = verts[0].shape[0] 85 | 86 | smpl = SMPLNP('female') 87 | apose = get_Apose() 88 | canonical_body, _ = smpl(np.zeros([300]), apose, None, None) 89 | with open(osp.join(ROOT, 'garment_class_info.pkl'), 'rb') as f: 90 | class_info = pickle.load(f, encoding='latin-1') 91 | pant_ind = class_info['pant']['vert_indices'] 92 | pant_bnd = np.load(osp.join(ROOT, 'pant_upper_boundary.npy')) 93 | skirt_bnd = np.load(osp.join(ROOT, 'skirt_upper_boundary.npy')) 94 | pant_bnd_loc = np.mean(canonical_body[pant_ind][pant_bnd], 0) 95 | 96 | all_v = [] 97 | for people_name, v in zip(people_names, verts): 98 | skirt_bnd_loc = np.mean(v[skirt_bnd], 0) 99 | trans = (pant_bnd_loc - skirt_bnd_loc)[None] 100 | trans_v = v + trans 101 | all_v.append(trans_v) 102 | all_v = np.array(all_v).astype(np.float32) 103 | np.save(os.path.join(SAVE_DIR, 
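            # note: both saves below receive the same pant-aligned array, so
            # '{gc}.npy' and '{gc}_trans.npy' are identical for the skirt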
'{}.npy'.format(garment_class)), all_v)
104 |     np.save(os.path.join(SAVE_DIR, '{}_trans.npy'.format(garment_class)), all_v)
105 | 
106 |     # smoothing
107 |     ds = DS(all_v[0], faces)
108 |     smooth_step = 100
109 |     smooth_vs = []
110 |     for unsmooth_v in tqdm(all_v):
111 |         smooth_v = unsmooth_v.copy()
112 |         for _ in range(smooth_step):
113 |             smooth_v = ds.smooth(smooth_v, smoothness=0.03)
114 |         smooth_vs.append(smooth_v)
115 |     smooth_vs = np.array(smooth_vs, dtype=np.float32)
116 | 
117 |     np.save(osp.join(SAVE_DIR, '{}_smooth.npy'.format(garment_class)), smooth_vs)
118 | 
119 | 
--------------------------------------------------------------------------------
/style_pca/readme.md:
--------------------------------------------------------------------------------
1 | ## usage
2 | Note: Since our raw data is not public, the scripts in this directory are only for reference.
3 | If you want to model the style space yourself, you can skip `copy_raw.py` and start from line 45 of `proc_raw.py`.
4 | Also make sure you understand all parameters in `style_pca.py` and `gender_pca.py` and set them yourself,
5 | because those parameters may be highly dependent on the raw data.
6 | 1. copy_raw.py
7 | copy registration meshes and do some processing
8 | 2. proc_raw.py
9 | 3. vis_raw.py
10 | visualize raw/smooth garments
11 | 4. manually pick out noisy garments
12 | 5. style_pca.py
13 | 6. pca_interactive.py
14 | adjust PCA coefficients and translation for each gender.
15 | Hardcode the resulting parameter ranges in gender_pca.py
16 | 7. gender_pca.py
17 | 
18 | ## extending skirt space
19 | after running the above procedure,
20 | rename the following files/folders (a sketch follows):
21 | skirt_reg->skirt_orig_reg
22 | skirt_female->skirt_orig_female
23 | pca/...skirt...->pca/...skirt_orig...
24 | raw_data/...skirt...->raw_data/...skirt_orig...
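For reference, a minimal sketch of that renaming step in Python (assumptions: everything lives directly under `ROOT` from `global_var.py`, and the `*skirt*` glob patterns match only the files you intend to move):
```
import glob
import os
import os.path as osp
from global_var import ROOT

# rename the skirt registration and per-gender output folders
os.rename(osp.join(ROOT, 'skirt_reg'), osp.join(ROOT, 'skirt_orig_reg'))
os.rename(osp.join(ROOT, 'skirt_female'), osp.join(ROOT, 'skirt_orig_female'))

# rename every skirt-related file in pca/ and raw_data/
for d in ('pca', 'raw_data'):
    for path in glob.glob(osp.join(ROOT, d, '*skirt*')):
        head, tail = osp.split(path)
        os.rename(path, osp.join(head, tail.replace('skirt', 'skirt_orig')))
```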
25 | 26 | then, run 27 | proc_raw_skirt_ext.py 28 | style_pca.py 29 | gender_pca.py 30 | -------------------------------------------------------------------------------- /style_pca/style_pca.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pickle 3 | import numpy as np 4 | import skimage.io as sio 5 | from utils.renderer import Renderer 6 | from sklearn.decomposition import PCA 7 | from utils.rotation import get_Apose 8 | from global_var import * 9 | 10 | if __name__ == '__main__': 11 | garment_class = 'skirt' 12 | n_components = 4 13 | raw_dir = os.path.join(ROOT, 'raw_data') 14 | data_dir = os.path.join(ROOT, 'pca') 15 | os.makedirs(data_dir, exist_ok=True) 16 | with open(os.path.join(ROOT, 'garment_class_info.pkl'), 'rb') as f: 17 | class_info = pickle.load(f, encoding='latin-1') 18 | faces = class_info[garment_class]['f'] 19 | vis_dir = os.path.join(data_dir, 'vis_pca_{}'.format(garment_class)) 20 | os.makedirs(vis_dir, exist_ok=True) 21 | vs = np.load(os.path.join(raw_dir, '{}_smooth.npy'.format(garment_class))) 22 | bad_path = os.path.join(raw_dir, '{}_bad.txt'.format(garment_class)) 23 | if os.path.exists(bad_path): 24 | with open(bad_path) as f: 25 | bad_idx = f.read().splitlines() 26 | valid_vs = [] 27 | bad_idx = [int(k) for k in bad_idx] 28 | for i in range(len(vs)): 29 | if i not in bad_idx: 30 | valid_vs.append(vs[i]) 31 | vs = np.array(valid_vs) 32 | if 'skirt' in garment_class: 33 | skirt_bnd = np.load(os.path.join(ROOT, 'skirt_upper_boundary.npy')) 34 | skirt_bnd_loc = np.mean(vs[0, skirt_bnd], 0, keepdims=True) 35 | vs = vs * 1.15 36 | all_vs = [vs] 37 | for _ in range(20): 38 | random_scale_xz = np.random.uniform(-0.2, 0.2, (len(vs), 1, 1)) + 1 39 | random_scale_y = np.random.uniform(-0.7, 0.7, (len(vs), 1, 1)) + 1 40 | random_scale = np.concatenate((random_scale_xz, random_scale_y, random_scale_xz), 2) 41 | all_vs.append(random_scale*vs) 42 | vs = np.concatenate(all_vs, 0) 43 | vs = vs - np.mean(vs[:, skirt_bnd], 1, keepdims=True) + skirt_bnd_loc[None] 44 | if garment_class == 'short-pant': 45 | vs = vs * 1.1 46 | data_num = len(vs) 47 | print(data_num) 48 | apose = get_Apose() 49 | pca = PCA(n_components=n_components) 50 | pca.fit(vs.reshape([data_num, -1])) 51 | pca_coeff = pca.transform(vs.reshape([data_num, -1])) 52 | coeff_mean = np.mean(pca_coeff, 0) 53 | coeff_std = np.std(pca_coeff, 0) 54 | print(coeff_mean) 55 | print(coeff_std) 56 | np.savez(os.path.join(data_dir, 'style_model_{}.npz'.format(garment_class)), 57 | pca_w=pca.components_, mean=np.mean(vs.reshape([data_num, -1]), axis=0), coeff_mean=coeff_mean, coeff_std=coeff_std) 58 | 59 | # # sample and visualize 60 | # r = Renderer(512) 61 | # for i1, e1 in enumerate(np.arange(-2., 2.01, 0.5)): 62 | # for i2, e2 in enumerate(np.arange(-1.5, 1.51, 0.5)): 63 | # std_coeff = np.array([e1, e2, 0, 0], dtype=np.float32) 64 | # coeff = (std_coeff + coeff_mean) * coeff_std 65 | # rec_verts = pca.inverse_transform(coeff[None])[0].reshape([-1, 3]) 66 | # # _, rec_mesh = smpl(np.zeros([10]), apose, rec_verts, garment_class) 67 | # img = r(rec_verts, faces, trans=(2., 0, 0)) 68 | # sio.imsave(os.path.join(vis_dir, '{}_{}.jpg'.format(i1, i2)), img) -------------------------------------------------------------------------------- /style_pca/vis_raw.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pickle 3 | import numpy as np 4 | import skimage.io as sio 5 | from smpl_torch import SMPLNP 6 | from 
utils.renderer import Renderer
7 | from utils.rotation import get_Apose
8 | 
9 | 
10 | if __name__ == '__main__':
11 |     garment_class = 'skirt'
12 |     data_dir = 'C:/data/v3/raw_data'
13 |     filenames = ['{}_trans.npy'.format(garment_class), '{}_smooth.npy'.format(garment_class)]
14 |     r = Renderer(512)
15 |     apose = get_Apose()
16 |     with open('C:/data/v3/garment_class_info.pkl', 'rb') as f:
17 |         class_info = pickle.load(f, encoding='latin-1')
18 | 
19 |     for filename in filenames:
20 |         save_dir = os.path.join(data_dir, filename.replace('.npy', ''))
21 |         all_v = np.load(os.path.join(data_dir, filename))
22 | 
23 |         if not os.path.exists(save_dir):
24 |             os.makedirs(save_dir)
25 | 
26 |         for i, v in enumerate(all_v):
27 |             # vbody, vcloth = smpl(np.zeros([300]), apose, disp, garment_class)
28 |             img = r(v, class_info[garment_class]['f'], trans=(2., 0, 0))
29 |             sio.imsave(os.path.join(save_dir, '{}.jpg'.format(i)), img)
--------------------------------------------------------------------------------
/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/zycliao/TailorNet_dataset/c4ce37cf321608b4f6e2907fe6ac6b79fa931175/utils/__init__.py
--------------------------------------------------------------------------------
/utils/diffusion_smoothing.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import scipy
3 | import scipy.sparse as sp
4 | from psbody.mesh import Mesh
5 | import stat
6 | 
7 | 
8 | def get_edges2face(faces):
9 |     from itertools import combinations
10 |     from collections import OrderedDict
11 |     # Returns a structure that contains the faces corresponding to every edge
12 |     edges = OrderedDict()
13 |     for iface, f in enumerate(faces):
14 |         sorted_face_edges = tuple(combinations(sorted(f), 2))
15 |         for sorted_face_edge in sorted_face_edges:
16 |             if sorted_face_edge in edges:
17 |                 edges[sorted_face_edge].faces.add(iface)
18 |             else:
19 |                 edges[sorted_face_edge] = lambda:0  # lambda object used as a mutable attribute container
20 |                 edges[sorted_face_edge].faces = set([iface])
21 |     return edges
22 | 
23 | 
24 | def get_boundary_verts(verts, faces, connected_boundaries=True, connected_faces=False):
25 |     """
26 |     Given a mesh, returns its boundary vertices
27 |     if connected_boundaries is True it returns a list of lists
28 |     OUTPUT:
29 |         boundary_verts: list of verts
30 |         cnct_bound_verts: list of lists containing the N ordered rings of the mesh
31 |     """
32 |     MIN_NUM_VERTS_RING = 10
33 |     # Ordered dictionary
34 |     edge_dict = get_edges2face(faces)
35 |     boundary_verts = []
36 |     boundary_edges = []
37 |     boundary_faces = []
38 |     for edge, (key, val) in enumerate(edge_dict.items()):
39 |         if len(val.faces) == 1:
40 |             boundary_verts += list(key)
41 |             boundary_edges.append(edge)
42 |             for face_id in val.faces:
43 |                 boundary_faces.append(face_id)
44 |     boundary_verts = list(set(boundary_verts))
45 |     if not connected_boundaries:
46 |         return boundary_verts
47 |     n_removed_verts = 0
48 |     if connected_boundaries:
49 |         edge_mat = np.array(list(edge_dict.keys()))
50 |         # Edges on the boundary
51 |         edge_mat = edge_mat[np.array(boundary_edges)]
52 | 
53 |         # check that every vertex is shared by only two edges
54 |         for v in boundary_verts:
55 |             if np.sum(edge_mat == v) != 2:
56 | 
57 |                 raise ValueError('The boundary edges are not closed loops!')
58 | 
59 |         cnct_bound_verts = []
60 |         while len(edge_mat) > 0:
61 |             # boundary verts, indices of connected boundary verts in order
62 |             bverts = []
63 |             orig_vert = edge_mat[0, 0]
64 | 
bverts.append(orig_vert) 65 | vert = edge_mat[0, 1] 66 | edge = 0 67 | while orig_vert != vert: 68 | bverts.append(vert) 69 | # remove edge from queue 70 | edge_mask = np.ones(edge_mat.shape[0], dtype=bool) 71 | edge_mask[edge] = False 72 | edge_mat = edge_mat[edge_mask] 73 | edge = np.where(np.sum(edge_mat == vert, axis=1) > 0)[0] 74 | tmp = edge_mat[edge] 75 | vert = tmp[tmp != vert][0] 76 | # remove the last edge 77 | edge_mask = np.ones(edge_mat.shape[0], dtype=bool) 78 | edge_mask[edge] = False 79 | edge_mat = edge_mat[edge_mask] 80 | if len(bverts) > MIN_NUM_VERTS_RING: 81 | # add ring to the list 82 | cnct_bound_verts.append(bverts) 83 | else: 84 | n_removed_verts += len(bverts) 85 | count = 0 86 | for ring in cnct_bound_verts: count += len(ring) 87 | assert(len(boundary_verts) - n_removed_verts == count), "Error computing boundary rings !!" 88 | 89 | if connected_faces: 90 | return (boundary_verts, boundary_faces, cnct_bound_verts) 91 | else: 92 | return (boundary_verts, cnct_bound_verts) 93 | 94 | 95 | def numpy_laplacian_uniform(v, f): 96 | """ Compute laplacian operator on part_mesh. This can be cached. 97 | """ 98 | import scipy.sparse as sp 99 | from sklearn.preprocessing import normalize 100 | from psbody.mesh.topology.connectivity import get_vert_connectivity 101 | 102 | connectivity = get_vert_connectivity(Mesh(v=v, f=f)) 103 | # connectivity is a sparse matrix, and np.clip can not applied directly on 104 | # a sparse matrix. 105 | connectivity.data = np.clip(connectivity.data, 0, 1) 106 | lap = normalize(connectivity, norm='l1', axis=1) 107 | lap = lap - sp.eye(connectivity.shape[0]) 108 | 109 | return lap 110 | 111 | 112 | def numpy_laplacian_cot(v, f): 113 | n = len(v) 114 | 115 | v_a = f[:, 0] 116 | v_b = f[:, 1] 117 | v_c = f[:, 2] 118 | 119 | ab = v[v_a] - v[v_b] 120 | bc = v[v_b] - v[v_c] 121 | ca = v[v_c] - v[v_a] 122 | 123 | cot_a = -1 * (ab * ca).sum(axis=1) / (np.sqrt(np.sum(np.cross(ab, ca) ** 2, axis=-1)) + 1.e-10) 124 | cot_b = -1 * (bc * ab).sum(axis=1) / (np.sqrt(np.sum(np.cross(bc, ab) ** 2, axis=-1)) + 1.e-10) 125 | cot_c = -1 * (ca * bc).sum(axis=1) / (np.sqrt(np.sum(np.cross(ca, bc) ** 2, axis=-1)) + 1.e-10) 126 | 127 | I = np.concatenate((v_a, v_c, v_a, v_b, v_b, v_c)) 128 | J = np.concatenate((v_c, v_a, v_b, v_a, v_c, v_b)) 129 | W = 0.5 * np.concatenate((cot_b, cot_b, cot_c, cot_c, cot_a, cot_a)) 130 | 131 | L = sp.csr_matrix((W, (I, J)), shape=(n, n)) 132 | L = L - sp.spdiags(L * np.ones(n), 0, n, n) 133 | 134 | return L 135 | 136 | 137 | class DiffusionSmoothing(object): 138 | 139 | def __init__(self, v, f, Ltype="cotangent"): 140 | 141 | assert(Ltype in ["cotangent", "uniform"]) 142 | self.Ltype = Ltype 143 | self.num_v = v.shape[0] 144 | self.f = f 145 | self.set_boundary_ids_and_mats(v, f) 146 | self.L = None if self.Ltype == "cotangent" else self.get_uniform_lap_smoothing(v) 147 | 148 | def get_uniform_lap_smoothing(self, v): 149 | L = numpy_laplacian_uniform(v, self.f) 150 | 151 | # remove rows corresponding to boundary vertices 152 | for row in self.b_ids: 153 | L.data[L.indptr[row]:L.indptr[row + 1]] = 0 154 | L.eliminate_zeros() 155 | 156 | num_b = self.b_ids.shape[0] 157 | I = np.tile(self.b_ids, 3) 158 | J = np.hstack(( 159 | self.b_ids, 160 | self.b_ids[self.l_ids], 161 | self.b_ids[self.r_ids], 162 | )) 163 | W = np.hstack(( 164 | -1 * np.ones(num_b), 165 | 0.5 * np.ones(num_b), 166 | 0.5 * np.ones(num_b), 167 | )) 168 | mat = sp.csr_matrix((W, (I, J)), shape=(self.num_v, self.num_v)) 169 | L = L + mat 170 | return L 171 | 172 | def 
set_boundary_ids_and_mats(self, v, f):
173 |         _, b_rings = get_boundary_verts(v, f)
174 | 
175 |         def shift_left(ls, k):
176 |             return ls[k:] + ls[:k]
177 | 
178 |         b_ids = []
179 |         l_ids = []
180 |         r_ids = []
181 |         for rg in b_rings:
182 |             tmp = list(range(len(b_ids), len(b_ids) + len(rg)))
183 |             ltmp = shift_left(tmp, 1)
184 |             rtmp = shift_left(tmp, -1)
185 |             l_ids.extend(ltmp)
186 |             r_ids.extend(rtmp)
187 | 
188 |             b_ids.extend(rg)
189 | 
190 |         b_ids = np.asarray(b_ids)
191 |         num_b = b_ids.shape[0]
192 |         m_ids = np.arange(num_b)
193 |         l_ids = np.asarray(l_ids)
194 |         r_ids = np.asarray(r_ids)
195 | 
196 |         self.right_edge_mat = sp.csr_matrix((
197 |             np.hstack((-1*np.ones(num_b), np.ones(num_b))),
198 |             (np.hstack((m_ids, m_ids)), np.hstack((m_ids, r_ids)))
199 |             ), shape=(num_b, num_b)
200 |         )
201 | 
202 |         self.left_edge_mat = sp.csr_matrix((
203 |             np.hstack((-1 * np.ones(num_b), np.ones(num_b))),
204 |             (np.hstack((m_ids, m_ids)), np.hstack((m_ids, l_ids)))
205 |             ), shape=(num_b, num_b)
206 |         )
207 | 
208 |         self.b_ids = b_ids
209 |         self.l_ids = l_ids
210 |         self.r_ids = r_ids
211 | 
212 |     def smooth_cotlap(self, verts, smoothness=0.03):
213 |         L = numpy_laplacian_cot(verts, self.f)
214 |         new_verts = verts + smoothness * L.dot(verts)
215 | 
216 |         b_verts = verts[self.b_ids]
217 |         le = 1. / (np.linalg.norm(self.left_edge_mat.dot(b_verts), axis=-1) + 1.0e-10)
218 |         ri = 1. / (np.linalg.norm(self.right_edge_mat.dot(b_verts), axis=-1) + 1.0e-10)
219 | 
220 |         num_b = b_verts.shape[0]
221 |         I = np.tile(np.arange(num_b), 3)
222 |         J = np.hstack((
223 |             np.arange(num_b),
224 |             self.l_ids,
225 |             self.r_ids,
226 |         ))
227 |         W = np.hstack((
228 |             -1*np.ones(num_b),
229 |             le / (le + ri),
230 |             ri / (le + ri),
231 |         ))
232 |         mat = sp.csr_matrix((W, (I, J)), shape=(num_b, num_b))
233 |         new_verts[self.b_ids] = verts[self.b_ids] + smoothness * mat.dot(verts[self.b_ids])
234 |         return new_verts
235 | 
236 |     def smooth_uniform(self, verts, smoothness=0.03):
237 |         new_verts = verts + smoothness * self.L.dot(verts)
238 |         return new_verts
239 | 
240 |     def smooth(self, verts, smoothness=0.03):
241 |         return self.smooth_uniform(verts, smoothness) if self.Ltype == "uniform" else self.smooth_cotlap(verts, smoothness)
242 | 
243 | 
244 | if __name__ == "__main__":
245 |     import os
246 |     import global_var
247 |     from utils.args import get_args
248 |     from tqdm import tqdm
249 | 
250 |     garment_class = 'smooth_TShirtNoCoat'
251 |     gender, list_name = get_args()
252 | 
253 |     with open(os.path.join(global_var.ROOT, '{}_{}.txt').format(gender, list_name)) as f:
254 |         avail_items = f.read().splitlines()
255 |     avail_items = [k.split('\t') for k in avail_items]
256 |     people_names = [k[0] for k in avail_items if k[1] == garment_class]
257 |     shape_root = os.path.join(global_var.ROOT, 'neutral_shape_static_pose_new')
258 |     smoothing = None
259 | 
260 |     shape_names = ["{:02d}".format(k) for k in range(0, 100)]
261 | 
262 |     for people_name, garment_class in tqdm(avail_items):
263 |         shape_static_pose_people = os.path.join(shape_root, people_name)
264 |         for shape_name in shape_names:
265 |             garment_path = os.path.join(shape_static_pose_people, '{}_{}.obj'.format(shape_name, garment_class))
266 |             if not os.path.exists(garment_path):
267 |                 print("{} doesn't exist".format(garment_path)); continue
268 |             try:
269 |                 m = Mesh(filename=garment_path)
270 |             except AttributeError as e:
271 |                 print(e)
272 |                 print(garment_path)
273 |                 exit()
274 | 
275 |             if smoothing is None:
276 |                 smoothing = DiffusionSmoothing(m.v, m.f, Ltype="cotangent")
277 |                 steps = [30]
278 | 
279 |             verts_smooth = m.v.copy()
280 |             for i, step in enumerate(steps):
281 |                 smooth_name = '_sm{}'.format(step)
282 |                 dst_path = os.path.join(shape_static_pose_people, '{}{}_{}.obj'.format(shape_name, smooth_name, garment_class))
283 |                 if os.path.exists(dst_path):
284 |                     print("{} exists. Skip".format(dst_path)); continue
285 |                 for _ in range(step):
286 |                     verts_smooth = smoothing.smooth(verts_smooth, smoothness=0.03)
287 |                 ms_smooth = Mesh(v=verts_smooth, f=m.f)
288 | 
289 |                 ms_smooth.write_obj(dst_path)
290 |                 os.chmod(dst_path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IROTH)
--------------------------------------------------------------------------------
/utils/geodesic.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy import sparse
3 | from scipy.sparse.linalg import splu
4 | import os
5 | import global_var
6 | 
7 | def veclen(vectors):
8 |     """ return L2 norm (vector length) along the last axis, for example to compute the length of an array of vectors """
9 |     return np.sqrt(np.sum(vectors**2, axis=-1))
10 | 
11 | def normalized(vectors):
12 |     """ normalize array of vectors along the last axis """
13 |     return vectors / veclen(vectors)[..., np.newaxis]
14 | 
15 | def compute_mesh_laplacian(verts, tris):
16 |     """
17 |     computes a sparse matrix representing the discretized laplace-beltrami operator of the mesh
18 |     given by n vertex positions ("verts") and m triangles ("tris")
19 | 
20 |     verts: (n, 3) array (float)
21 |     tris: (m, 3) array (int) - indices into the verts array
22 | 
23 |     computes the conformal weights ("cotangent weights") for the mesh, i.e.:
24 |     w_ij = - .5 * (cot \alpha + cot \beta)
25 | 
26 |     See:
27 |         Olga Sorkine, "Laplacian Mesh Processing"
28 |     and for theoretical comparison of different discretizations, see
29 |         Max Wardetzky et al., "Discrete Laplace operators: No free lunch"
30 | 
31 |     returns matrix L that computes the laplacian coordinates, e.g. L * x = delta
32 |     """
33 |     n = len(verts)
34 |     W_ij = np.empty(0)
35 |     I = np.empty(0, np.int32)
36 |     J = np.empty(0, np.int32)
37 |     for i1, i2, i3 in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]: # for edge i2 --> i3 facing vertex i1
38 |         vi1 = tris[:, i1] # vertex index of i1
39 |         vi2 = tris[:, i2]
40 |         vi3 = tris[:, i3]
41 |         # vertex vi1 faces the edge between vi2--vi3
42 |         # compute the angle at v1
43 |         # add cotangent angle at v1 to opposite edge v2--v3
44 |         # the cotangent weights are symmetric
45 |         u = verts[vi2] - verts[vi1]
46 |         v = verts[vi3] - verts[vi1]
47 |         cotan = (u * v).sum(axis=1) / veclen(np.cross(u, v))
48 |         W_ij = np.append(W_ij, 0.5 * cotan)
49 |         I = np.append(I, vi2)
50 |         J = np.append(J, vi3)
51 |         W_ij = np.append(W_ij, 0.5 * cotan)
52 |         I = np.append(I, vi3)
53 |         J = np.append(J, vi2)
54 |     L = sparse.csr_matrix((W_ij, (I, J)), shape=(n, n))
55 |     # compute diagonal entries
56 |     L = L - sparse.spdiags(L * np.ones(n), 0, n, n)
57 |     L = L.tocsr()
58 |     # area matrix
59 |     e1 = verts[tris[:, 1]] - verts[tris[:, 0]]
60 |     e2 = verts[tris[:, 2]] - verts[tris[:, 0]]
61 |     n = np.cross(e1, e2)
62 |     triangle_area = .5 * veclen(n)
63 |     # compute per-vertex area
64 |     vertex_area = np.zeros(len(verts))
65 |     ta3 = triangle_area / 3
66 |     for i in range(tris.shape[1]):
67 |         bc = np.bincount(tris[:, i].astype(int), ta3)
68 |         vertex_area[:len(bc)] += bc
69 |     VA = sparse.spdiags(vertex_area, 0, len(verts), len(verts))
70 |     return L, VA
71 | 
72 | 
73 | class GeodesicDistanceComputation(object):
74 |     """
75 |     Computation of geodesic distances on triangle meshes using the heat method from the impressive paper
76 | 
77 |         Geodesics in Heat: A New Approach to Computing Distance Based on Heat Flow
78 |         Keenan Crane, Clarisse Weischedel, Max Wardetzky
79 |         ACM Transactions on Graphics (SIGGRAPH 2013)
80 | 
81 |     Example usage:
82 |         >>> compute_distance = GeodesicDistanceComputation(vertices, triangles)
83 |         >>> distance_of_each_vertex_to_vertex_0 = compute_distance(0)
84 | 
85 |     .. note: Sometimes you may get a RuntimeError: 'Factor is exactly singular'
86 |        As discussed in https://groups.google.com/forum/#!topic/dedalus-users/mCYcalFpnoo
87 |        just multiplying the input vertices by 1e6 might solve the problem
88 | 
89 |     """
90 | 
91 |     def __init__(self, verts, tris, m=10.0):
92 |         self._verts = verts
93 |         self._tris = tris
94 |         # precompute some stuff needed later on
95 |         e01 = verts[tris[:, 1]] - verts[tris[:, 0]]
96 |         e12 = verts[tris[:, 2]] - verts[tris[:, 1]]
97 |         e20 = verts[tris[:, 0]] - verts[tris[:, 2]]
98 |         self._triangle_area = .5 * veclen(np.cross(e01, e12))
99 |         unit_normal = normalized(np.cross(normalized(e01), normalized(e12)))
100 |         self._unit_normal_cross_e01 = np.cross(unit_normal, e01)
101 |         self._unit_normal_cross_e12 = np.cross(unit_normal, e12)
102 |         self._unit_normal_cross_e20 = np.cross(unit_normal, e20)
103 |         # parameters for heat method
104 |         h = np.mean([veclen(e) for e in (e01, e12, e20)])
105 |         t = m * h ** 2
106 |         # pre-factorize poisson systems
107 |         Lc, A = compute_mesh_laplacian(verts, tris)
108 |         self._factored_AtLc = splu((A - t * Lc).tocsc()).solve
109 |         self._factored_L = splu(Lc.tocsc()).solve
110 | 
111 |     def __call__(self, idx):
112 |         """
113 |         computes geodesic distances to all vertices in the mesh
114 |         idx can be either an integer (single vertex index) or a list of vertex indices
115 |         or an array of bools of length n (with n the number of vertices in the mesh)
116 |         """
117 |         u0 = np.zeros(len(self._verts))
118 |         u0[idx] = 1.0
119 |         # heat method, step 1
120 |         u = self._factored_AtLc(u0).ravel()
121 |         # heat method step 2
122 |         grad_u = 1 / (2 * self._triangle_area)[:, np.newaxis] * (
123 |             self._unit_normal_cross_e01 * u[self._tris[:, 2]][:, np.newaxis]
124 |             + self._unit_normal_cross_e12 * u[self._tris[:, 0]][:, np.newaxis]
125 |             + self._unit_normal_cross_e20 * u[self._tris[:, 1]][:, np.newaxis]
126 |         )
127 |         X = -grad_u / veclen(grad_u)[:, np.newaxis]
128 |         # heat method step 3
129 |         div_Xs = np.zeros(len(self._verts))
130 |         for i1, i2, i3 in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]: # for edge i2 --> i3 facing vertex i1
131 |             vi1, vi2, vi3 = self._tris[:, i1], self._tris[:, i2], self._tris[:, i3]
132 |             e1 = self._verts[vi2] - self._verts[vi1]
133 |             e2 = self._verts[vi3] - self._verts[vi1]
134 |             e_opp = self._verts[vi3] - self._verts[vi2]
135 |             cot1 = 1 / np.tan(np.arccos(
136 |                 (normalized(-e2) * normalized(-e_opp)).sum(axis=1)))
137 |             cot2 = 1 / np.tan(np.arccos(
138 |                 (normalized(-e1) * normalized(e_opp)).sum(axis=1)))
139 |             div_Xs += np.bincount(
140 |                 vi1.astype(int),
141 |                 0.5 * (cot1 * (e1 * X).sum(axis=1) + cot2 * (e2 * X).sum(axis=1)),
142 |                 minlength=len(self._verts))
143 |         phi = self._factored_L(div_Xs).ravel()
144 |         phi -= phi.min()
145 |         return phi
146 | 
147 | 
148 | if __name__ == '__main__':
149 |     from utils.smpl import SMPLNP
150 |     smpl = SMPLNP()
151 |     v, _, _ = smpl(np.zeros([10]), np.zeros([72]))
152 |     runner = GeodesicDistanceComputation(v, smpl.base.faces)
153 |     geodis = []
154 |     for i in range(6890):
155 |         phi = runner(i)
156 |         geodis.append(phi)
157 |     geodis = np.array(geodis)
158 |     np.save(os.path.join(global_var.ROOT, 'smpl_geodesic.npy'), geodis)
159 | 
--------------------------------------------------------------------------------
/utils/geometry.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import chumpy as ch
3 | from psbody.mesh import Mesh
4 | 
5 | 
6 | def delete_verts(faces, delete_mat):
7 |     new_faces = []
8 |     for face in faces:
9 |         to_delete = False
10 |         for fidx in face:
11 |             if delete_mat[fidx]:
12 |                 to_delete = True
13 |                 break
14 |         if not to_delete:
15 |             new_faces.append(face)
16 |     if len(new_faces) > 0:
17 |         new_faces = np.stack(new_faces, 0)
18 |     else:
19 |         new_faces = None
20 |     return new_faces
21 | 
22 | 
23 | def divide_obj(obj_path):
24 |     with open(obj_path) as f:
25 |         lines = f.read().splitlines()
26 |     items = {'v': [], 'vn': [], 'f': []}
27 |     last_type = ''
28 |     for line in lines:
29 |         line_type = line.split(' ')[0]
30 |         if line_type in items:
31 |             if line_type == 'v' or line_type == 'vn':
32 |                 if line_type != last_type:
33 |                     items[line_type].append([line+'\n'])
34 |                 else:
35 |                     items[line_type][-1].append(line+'\n')
36 |             if line_type == 'f':
37 |                 single_face = line.split(' ')[1:]
38 |                 for fi, sf in enumerate(single_face):
39 |                     single_face[fi] = sf.split('//')
40 |                 single_face = np.array(single_face).astype(int)
41 |                 if line_type != last_type:
42 |                     items['f'].append([single_face])
43 |                 else:
44 |                     items['f'][-1].append(single_face)
45 |         last_type = line_type
46 |     v_num, vn_num = 0, 0
47 |     for obj_idx, one_obj_faces in enumerate(items['f']):
48 |         if obj_idx >= 1:
49 |             v_num += len(items['v'][obj_idx-1])
50 |             vn_num += len(items['vn'][obj_idx-1])
51 |         for fi, one_obj_face in enumerate(one_obj_faces):
52 |             one_obj_face[:, 0] -= v_num
53 |             one_obj_face[:, 1] -= vn_num
54 |             new_line = 'f'
55 |             for face_v in one_obj_face:
56 |                 new_line += " {}//{}".format(face_v[0], face_v[1])
57 |             new_line += '\n'
58 |             one_obj_faces[fi] = new_line
59 |     return items
60 | 
61 | 
62 | def unpose_garment(smpl, v_free, vert_indices=None):
63 |     smpl.v_personal[:] = 0
64 |     c = smpl[vert_indices]
65 |     E = {
66 |         'v_personal_high': c - v_free
67 |     }
68 |     ch.minimize(E, x0=[smpl.v_personal], options={'e_3': .00001})
69 |     smpl.pose[:] = 0
70 |     smpl.trans[:] = 0
71 | 
72 |     return Mesh(smpl.r, smpl.f).keep_vertices(vert_indices), np.array(smpl.v_personal)
73 | 
74 | 
75 | def unpose_skirt(smpl, verts):
76 |     rotmat = smpl.A.r[:, :, 0]
77 |     # inv_rotmat = np.linalg.inv(rotmat)
78 |     verts_homo = np.hstack((verts, np.ones((verts.shape[0], 1))))
79 |     verts = verts_homo.dot(np.linalg.inv(rotmat.T))[:, :3]
80 |     return verts
81 | 
82 | 
83 | def get_selfintersections(v, f):
84 |     import pymesh
85 |     from pymesh.selfintersection import detect_self_intersection
86 |     import numpy as np
87 |     mspy = pymesh.form_mesh(v, f)
88 |     face_pairs = detect_self_intersection(mspy)
89 |     mspy.add_attribute('face_area')
90 |     face_areas = mspy.get_attribute('face_area')
91 |     intersecting_area = face_areas[np.unique(face_pairs.ravel())].sum()
92 |     return face_pairs, intersecting_area
93 | 
94 | 
95 | if __name__ == '__main__':
96 |     import global_var
97 |     import os
98 |     items = divide_obj(os.path.join(global_var.ROOT, 'static_pose', 'rp_aaron_posed_006_30k', 'result_7.obj'))
99 |     obj_num = len(items['f'])
100 |     for obj_idx in range(obj_num):
101 |         with open("{}.obj".format(obj_idx), 'w') as f:
102 |             f.writelines(items['v'][obj_idx])
103 |             f.writelines(items['vn'][obj_idx])
104 |             f.writelines(items['f'][obj_idx])
--------------------------------------------------------------------------------
/utils/ios.py:
--------------------------------------------------------------------------------
1 | import os
2 | from os.path import join as opj
3 | import shutil
4 | from datetime import datetime
5 | import numpy as np
6 | from plyfile import PlyData
7 | import struct
8 | import global_var
9 | 
10 | 
11 | def read_ply(fname):
12 |     plydata = PlyData.read(fname)
13 |     vert_ele = plydata['vertex']
14 |     verts = np.stack([vert_ele['x'], vert_ele['y'], vert_ele['z']], 1)
15 |     faces = np.stack(list(plydata['face']['vertex_indices']), 0)
16 |     return verts, faces
17 | 
18 | 
19 | def read_obj(path):
20 |     """
21 |     read verts and faces from obj file. This func will convert quad mesh to triangle mesh
22 |     """
23 |     with open(path) as f:
24 |         lines = f.read().splitlines()
25 |     verts = []
26 |     faces = []
27 |     for line in lines:
28 |         if line.startswith('v '):
29 |             verts.append(np.array([float(k) for k in line.split(' ')[1:]]))
30 |         elif line.startswith('f '):
31 |             try:
32 |                 onef = np.array([int(k) for k in line.split(' ')[1:]])
33 |             except ValueError:
34 |                 continue
35 |             if len(onef) == 4:
36 |                 faces.append(onef[[0, 1, 2]])
37 |                 faces.append(onef[[0, 2, 3]])
38 |             elif len(onef) > 4:
39 |                 pass
40 |             else:
41 |                 faces.append(onef)
42 |     if len(faces) == 0:
43 |         return np.stack(verts), None
44 |     else:
45 |         return np.stack(verts), np.stack(faces)-1
46 | 
47 | 
48 | def write_obj(verts, faces, path, color_idx=None):
49 |     faces = faces + 1
50 |     with open(path, 'w') as f:
51 |         for vidx, v in enumerate(verts):
52 |             if color_idx is not None and color_idx[vidx]:
53 |                 f.write("v {:.5f} {:.5f} {:.5f} 1 0 0\n".format(v[0], v[1], v[2]))
54 |             else:
55 |                 f.write("v {:.5f} {:.5f} {:.5f}\n".format(v[0], v[1], v[2]))
56 |         for fa in faces:
57 |             f.write("f {:d} {:d} {:d}\n".format(fa[0], fa[1], fa[2]))
58 | 
59 | 
60 | def save_pc2(vertices, path):
61 |     # vertices: (N, V, 3), N is the number of frames, V is the number of vertices
62 |     # path: a .pc2 file
63 |     nframes, nverts, _ = vertices.shape
64 |     with open(path, 'wb') as f:
65 |         headerStr = struct.pack('<12siiffi', b'POINTCACHE2\0',
66 |                                 1, nverts, 1, 1, nframes)
67 |         f.write(headerStr)
68 |         v = vertices.reshape(-1, 3).astype(np.float32)
69 |         for v_ in v:
70 |             f.write(struct.pack('<fff', v_[0], v_[1], v_[2]))
--------------------------------------------------------------------------------
/utils/render_lib/mesh_core.cpp:
--------------------------------------------------------------------------------
1 | /*
2 | functions that cannot be optimized by vectorization in python.
3 | 1. rasterization. (need to process each triangle)
4 | 2. normal of each vertex. (use one-ring, need to process each vertex)
5 | 3. write obj (... --> however, why writing in c++ is still slow?)
6 | 
7 | Author: Yao Feng
8 | Mail: yaofeng1995@gmail.com
9 | */
10 | 
11 | #include "mesh_core.h"
12 | 
13 | 
14 | /* Judge whether the point is in the triangle
15 | Method:
16 |     http://blackpawn.com/texts/pointinpoly/
17 | Args:
18 |     point: [x, y]
19 |     tri_points: three vertices(2d points) of a triangle. 
2 coords x 3 vertices 20 | Returns: 21 | bool: true for in triangle 22 | */ 23 | bool isPointInTri(point p, point p0, point p1, point p2) 24 | { 25 | // vectors 26 | point v0, v1, v2; 27 | v0 = p2 - p0; 28 | v1 = p1 - p0; 29 | v2 = p - p0; 30 | 31 | // dot products 32 | float dot00 = v0.dot(v0); //v0.x * v0.x + v0.y * v0.y //np.dot(v0.T, v0) 33 | float dot01 = v0.dot(v1); //v0.x * v1.x + v0.y * v1.y //np.dot(v0.T, v1) 34 | float dot02 = v0.dot(v2); //v0.x * v2.x + v0.y * v2.y //np.dot(v0.T, v2) 35 | float dot11 = v1.dot(v1); //v1.x * v1.x + v1.y * v1.y //np.dot(v1.T, v1) 36 | float dot12 = v1.dot(v2); //v1.x * v2.x + v1.y * v2.y//np.dot(v1.T, v2) 37 | 38 | // barycentric coordinates 39 | float inverDeno; 40 | if(dot00*dot11 - dot01*dot01 == 0) 41 | inverDeno = 0; 42 | else 43 | inverDeno = 1/(dot00*dot11 - dot01*dot01); 44 | 45 | float u = (dot11*dot02 - dot01*dot12)*inverDeno; 46 | float v = (dot00*dot12 - dot01*dot02)*inverDeno; 47 | 48 | // check if point in triangle 49 | return (u >= 0) && (v >= 0) && (u + v < 1); 50 | } 51 | 52 | 53 | void get_point_weight(float* weight, point p, point p0, point p1, point p2) 54 | { 55 | // vectors 56 | point v0, v1, v2; 57 | v0 = p2 - p0; 58 | v1 = p1 - p0; 59 | v2 = p - p0; 60 | 61 | // dot products 62 | float dot00 = v0.dot(v0); //v0.x * v0.x + v0.y * v0.y //np.dot(v0.T, v0) 63 | float dot01 = v0.dot(v1); //v0.x * v1.x + v0.y * v1.y //np.dot(v0.T, v1) 64 | float dot02 = v0.dot(v2); //v0.x * v2.x + v0.y * v2.y //np.dot(v0.T, v2) 65 | float dot11 = v1.dot(v1); //v1.x * v1.x + v1.y * v1.y //np.dot(v1.T, v1) 66 | float dot12 = v1.dot(v2); //v1.x * v2.x + v1.y * v2.y//np.dot(v1.T, v2) 67 | 68 | // barycentric coordinates 69 | float inverDeno; 70 | if(dot00*dot11 - dot01*dot01 == 0) 71 | inverDeno = 0; 72 | else 73 | inverDeno = 1/(dot00*dot11 - dot01*dot01); 74 | 75 | float u = (dot11*dot02 - dot01*dot12)*inverDeno; 76 | float v = (dot00*dot12 - dot01*dot02)*inverDeno; 77 | 78 | // weight 79 | weight[0] = 1 - u - v; 80 | weight[1] = v; 81 | weight[2] = u; 82 | } 83 | 84 | 85 | void _get_normal_core( 86 | float* normal, float* tri_normal, int* triangles, 87 | int ntri) 88 | { 89 | int i, j; 90 | int tri_p0_ind, tri_p1_ind, tri_p2_ind; 91 | 92 | for(i = 0; i < ntri; i++) 93 | { 94 | tri_p0_ind = triangles[3*i]; 95 | tri_p1_ind = triangles[3*i + 1]; 96 | tri_p2_ind = triangles[3*i + 2]; 97 | 98 | for(j = 0; j < 3; j++) 99 | { 100 | normal[3*tri_p0_ind + j] = normal[3*tri_p0_ind + j] + tri_normal[3*i + j]; 101 | normal[3*tri_p1_ind + j] = normal[3*tri_p1_ind + j] + tri_normal[3*i + j]; 102 | normal[3*tri_p2_ind + j] = normal[3*tri_p2_ind + j] + tri_normal[3*i + j]; 103 | } 104 | } 105 | } 106 | 107 | 108 | void _rasterize_triangles_core( 109 | float* vertices, int* triangles, 110 | float* depth_buffer, int* triangle_buffer, float* barycentric_weight, 111 | int nver, int ntri, 112 | int h, int w) 113 | { 114 | int i; 115 | int x, y, k; 116 | int tri_p0_ind, tri_p1_ind, tri_p2_ind; 117 | point p0, p1, p2, p; 118 | int x_min, x_max, y_min, y_max; 119 | float p_depth, p0_depth, p1_depth, p2_depth; 120 | float weight[3]; 121 | 122 | for(i = 0; i < ntri; i++) 123 | { 124 | tri_p0_ind = triangles[3*i]; 125 | tri_p1_ind = triangles[3*i + 1]; 126 | tri_p2_ind = triangles[3*i + 2]; 127 | 128 | p0.x = vertices[3*tri_p0_ind]; p0.y = vertices[3*tri_p0_ind + 1]; p0_depth = vertices[3*tri_p0_ind + 2]; 129 | p1.x = vertices[3*tri_p1_ind]; p1.y = vertices[3*tri_p1_ind + 1]; p1_depth = vertices[3*tri_p1_ind + 2]; 130 | p2.x = vertices[3*tri_p2_ind]; p2.y = 
vertices[3*tri_p2_ind + 1]; p2_depth = vertices[3*tri_p2_ind + 2]; 131 | 132 | x_min = max((int)ceil(min(p0.x, min(p1.x, p2.x))), 0); 133 | x_max = min((int)floor(max(p0.x, max(p1.x, p2.x))), w - 1); 134 | 135 | y_min = max((int)ceil(min(p0.y, min(p1.y, p2.y))), 0); 136 | y_max = min((int)floor(max(p0.y, max(p1.y, p2.y))), h - 1); 137 | 138 | if(x_max < x_min || y_max < y_min) 139 | { 140 | continue; 141 | } 142 | 143 | for(y = y_min; y <= y_max; y++) //h 144 | { 145 | for(x = x_min; x <= x_max; x++) //w 146 | { 147 | p.x = x; p.y = y; 148 | if(p.x < 2 || p.x > w - 3 || p.y < 2 || p.y > h - 3 || isPointInTri(p, p0, p1, p2)) 149 | { 150 | get_point_weight(weight, p, p0, p1, p2); 151 | p_depth = weight[0]*p0_depth + weight[1]*p1_depth + weight[2]*p2_depth; 152 | 153 | if((p_depth > depth_buffer[y*w + x])) 154 | { 155 | depth_buffer[y*w + x] = p_depth; 156 | triangle_buffer[y*w + x] = i; 157 | for(k = 0; k < 3; k++) 158 | { 159 | barycentric_weight[y*w*3 + x*3 + k] = weight[k]; 160 | } 161 | } 162 | } 163 | } 164 | } 165 | } 166 | } 167 | 168 | 169 | void _render_colors_core( 170 | float* image, float* vertices, int* triangles, 171 | float* colors, 172 | float* depth_buffer, 173 | int nver, int ntri, 174 | int h, int w, int c) 175 | { 176 | int i; 177 | int x, y, k; 178 | int tri_p0_ind, tri_p1_ind, tri_p2_ind; 179 | point p0, p1, p2, p; 180 | int x_min, x_max, y_min, y_max; 181 | float p_depth, p0_depth, p1_depth, p2_depth; 182 | float p_color, p0_color, p1_color, p2_color; 183 | float weight[3]; 184 | 185 | for(i = 0; i < ntri; i++) 186 | { 187 | tri_p0_ind = triangles[3*i]; 188 | tri_p1_ind = triangles[3*i + 1]; 189 | tri_p2_ind = triangles[3*i + 2]; 190 | 191 | p0.x = vertices[3*tri_p0_ind]; p0.y = vertices[3*tri_p0_ind + 1]; p0_depth = vertices[3*tri_p0_ind + 2]; 192 | p1.x = vertices[3*tri_p1_ind]; p1.y = vertices[3*tri_p1_ind + 1]; p1_depth = vertices[3*tri_p1_ind + 2]; 193 | p2.x = vertices[3*tri_p2_ind]; p2.y = vertices[3*tri_p2_ind + 1]; p2_depth = vertices[3*tri_p2_ind + 2]; 194 | 195 | x_min = max((int)ceil(min(p0.x, min(p1.x, p2.x))), 0); 196 | x_max = min((int)floor(max(p0.x, max(p1.x, p2.x))), w - 1); 197 | 198 | y_min = max((int)ceil(min(p0.y, min(p1.y, p2.y))), 0); 199 | y_max = min((int)floor(max(p0.y, max(p1.y, p2.y))), h - 1); 200 | 201 | if(x_max < x_min || y_max < y_min) 202 | { 203 | continue; 204 | } 205 | 206 | for(y = y_min; y <= y_max; y++) //h 207 | { 208 | for(x = x_min; x <= x_max; x++) //w 209 | { 210 | p.x = x; p.y = y; 211 | if(p.x < 2 || p.x > w - 3 || p.y < 2 || p.y > h - 3 || isPointInTri(p, p0, p1, p2)) 212 | { 213 | get_point_weight(weight, p, p0, p1, p2); 214 | p_depth = weight[0]*p0_depth + weight[1]*p1_depth + weight[2]*p2_depth; 215 | 216 | if((p_depth > depth_buffer[y*w + x])) 217 | { 218 | for(k = 0; k < c; k++) // c 219 | { 220 | p0_color = colors[c*tri_p0_ind + k]; 221 | p1_color = colors[c*tri_p1_ind + k]; 222 | p2_color = colors[c*tri_p2_ind + k]; 223 | 224 | p_color = weight[0]*p0_color + weight[1]*p1_color + weight[2]*p2_color; 225 | image[y*w*c + x*c + k] = p_color; 226 | } 227 | 228 | depth_buffer[y*w + x] = p_depth; 229 | } 230 | } 231 | } 232 | } 233 | } 234 | } 235 | 236 | 237 | void _render_texture_core( 238 | float* image, float* vertices, int* triangles, 239 | float* texture, float* tex_coords, int* tex_triangles, 240 | float* depth_buffer, 241 | int nver, int tex_nver, int ntri, 242 | int h, int w, int c, 243 | int tex_h, int tex_w, int tex_c, 244 | int mapping_type) 245 | { 246 | int i; 247 | int x, y, k; 248 | int tri_p0_ind, 
tri_p1_ind, tri_p2_ind;
249 |     int tex_tri_p0_ind, tex_tri_p1_ind, tex_tri_p2_ind;
250 |     point p0, p1, p2, p;
251 |     point tex_p0, tex_p1, tex_p2, tex_p;
252 |     int x_min, x_max, y_min, y_max;
253 |     float weight[3];
254 |     float p_depth, p0_depth, p1_depth, p2_depth;
255 |     float xd, yd;
256 |     float ul, ur, dl, dr;
257 |     for(i = 0; i < ntri; i++)
258 |     {
259 |         // mesh
260 |         tri_p0_ind = triangles[3*i];
261 |         tri_p1_ind = triangles[3*i + 1];
262 |         tri_p2_ind = triangles[3*i + 2];
263 | 
264 |         p0.x = vertices[3*tri_p0_ind]; p0.y = vertices[3*tri_p0_ind + 1]; p0_depth = vertices[3*tri_p0_ind + 2];
265 |         p1.x = vertices[3*tri_p1_ind]; p1.y = vertices[3*tri_p1_ind + 1]; p1_depth = vertices[3*tri_p1_ind + 2];
266 |         p2.x = vertices[3*tri_p2_ind]; p2.y = vertices[3*tri_p2_ind + 1]; p2_depth = vertices[3*tri_p2_ind + 2];
267 | 
268 |         // texture
269 |         tex_tri_p0_ind = tex_triangles[3*i];
270 |         tex_tri_p1_ind = tex_triangles[3*i + 1];
271 |         tex_tri_p2_ind = tex_triangles[3*i + 2];
272 | 
273 |         tex_p0.x = tex_coords[3*tex_tri_p0_ind]; tex_p0.y = tex_coords[3*tex_tri_p0_ind + 1];
274 |         tex_p1.x = tex_coords[3*tex_tri_p1_ind]; tex_p1.y = tex_coords[3*tex_tri_p1_ind + 1];
275 |         tex_p2.x = tex_coords[3*tex_tri_p2_ind]; tex_p2.y = tex_coords[3*tex_tri_p2_ind + 1];
276 | 
277 | 
278 |         x_min = max((int)ceil(min(p0.x, min(p1.x, p2.x))), 0);
279 |         x_max = min((int)floor(max(p0.x, max(p1.x, p2.x))), w - 1);
280 | 
281 |         y_min = max((int)ceil(min(p0.y, min(p1.y, p2.y))), 0);
282 |         y_max = min((int)floor(max(p0.y, max(p1.y, p2.y))), h - 1);
283 | 
284 | 
285 |         if(x_max < x_min || y_max < y_min)
286 |         {
287 |             continue;
288 |         }
289 | 
290 |         for(y = y_min; y <= y_max; y++) //h
291 |         {
292 |             for(x = x_min; x <= x_max; x++) //w
293 |             {
294 |                 p.x = x; p.y = y;
295 |                 if(p.x < 2 || p.x > w - 3 || p.y < 2 || p.y > h - 3 || isPointInTri(p, p0, p1, p2))
296 |                 {
297 |                     get_point_weight(weight, p, p0, p1, p2);
298 |                     p_depth = weight[0]*p0_depth + weight[1]*p1_depth + weight[2]*p2_depth;
299 | 
300 |                     if((p_depth > depth_buffer[y*w + x]))
301 |                     {
302 |                         // -- color from texture
303 |                         // cal weight in mesh tri
304 |                         get_point_weight(weight, p, p0, p1, p2);
305 |                         // cal coord in texture
306 |                         tex_p = tex_p0*weight[0] + tex_p1*weight[1] + tex_p2*weight[2];
307 |                         tex_p.x = max(min(tex_p.x, float(tex_w - 1)), float(0));
308 |                         tex_p.y = max(min(tex_p.y, float(tex_h - 1)), float(0));
309 | 
310 |                         yd = tex_p.y - floor(tex_p.y);
311 |                         xd = tex_p.x - floor(tex_p.x);
312 |                         for(k = 0; k < c; k++)
313 |                         {
314 |                             if(mapping_type==0)// nearest
315 |                             {
316 |                                 image[y*w*c + x*c + k] = texture[int(round(tex_p.y))*tex_w*tex_c + int(round(tex_p.x))*tex_c + k];
317 |                             }
318 |                             else//bilinear interp
319 |                             {
320 |                                 ul = texture[(int)floor(tex_p.y)*tex_w*tex_c + (int)floor(tex_p.x)*tex_c + k];
321 |                                 ur = texture[(int)floor(tex_p.y)*tex_w*tex_c + (int)ceil(tex_p.x)*tex_c + k];
322 |                                 dl = texture[(int)ceil(tex_p.y)*tex_w*tex_c + (int)floor(tex_p.x)*tex_c + k];
323 |                                 dr = texture[(int)ceil(tex_p.y)*tex_w*tex_c + (int)ceil(tex_p.x)*tex_c + k];
324 | 
325 |                                 image[y*w*c + x*c + k] = ul*(1-xd)*(1-yd) + ur*xd*(1-yd) + dl*(1-xd)*yd + dr*xd*yd;
326 |                             }
327 | 
328 |                         }
329 | 
330 |                         depth_buffer[y*w + x] = p_depth;
331 |                     }
332 |                 }
333 |             }
334 |         }
335 |     }
336 | }
337 | 
338 | 
339 | 
340 | // ------------------------------------------------- write
341 | // obj write
342 | // Ref: https://github.com/patrikhuber/eos/blob/master/include/eos/core/Mesh.hpp
343 | void _write_obj_with_colors_texture(string filename, string mtl_name,
344 |     float* vertices, int* triangles, float* colors, float* uv_coords,
346 | {
347 |     int i;
348 | 
349 |     ofstream obj_file(filename);
350 | 
351 |     // first line of the obj file: the mtl name
352 |     obj_file << "mtllib " << mtl_name << endl;
353 | 
354 |     // write vertices (position followed by per-vertex color)
355 |     for (i = 0; i < nver; ++i)
356 |     {
357 |         obj_file << "v " << vertices[3*i] << " " << vertices[3*i + 1] << " " << vertices[3*i + 2] << " " << colors[3*i] << " " << colors[3*i + 1] << " " << colors[3*i + 2] << endl;
358 |     }
359 | 
360 |     // write uv coordinates
361 |     for (i = 0; i < ntexver; ++i)
362 |     {
363 |         //obj_file << "vt " << uv_coords[2*i] << " " << (1 - uv_coords[2*i + 1]) << endl;
364 |         obj_file << "vt " << uv_coords[2*i] << " " << uv_coords[2*i + 1] << endl;
365 |     }
366 | 
367 |     obj_file << "usemtl FaceTexture" << endl;
368 |     // write triangles
369 |     for (i = 0; i < ntri; ++i)
370 |     {
371 |         // obj_file << "f " << triangles[3*i] << "/" << triangles[3*i] << " " << triangles[3*i + 1] << "/" << triangles[3*i + 1] << " " << triangles[3*i + 2] << "/" << triangles[3*i + 2] << endl;
372 |         obj_file << "f " << triangles[3*i + 2] << "/" << triangles[3*i + 2] << " " << triangles[3*i + 1] << "/" << triangles[3*i + 1] << " " << triangles[3*i] << "/" << triangles[3*i] << endl;
373 |     }
374 | 
375 | }
--------------------------------------------------------------------------------
/utils/render_lib/mesh_core.h:
--------------------------------------------------------------------------------
1 | #ifndef MESH_CORE_HPP_
2 | #define MESH_CORE_HPP_
3 | 
4 | #include <stdio.h>
5 | #include <cmath>
6 | #include <algorithm>
7 | #include <string>
8 | #include <iostream>
9 | #include <fstream>
10 | 
11 | using namespace std;
12 | 
13 | class point
14 | {
15 |  public:
16 |     float x;
17 |     float y;
18 | 
19 |     float dot(point p)
20 |     {
21 |         return this->x * p.x + this->y * p.y;
22 |     }
23 | 
24 |     point operator-(const point& p)
25 |     {
26 |         point np;
27 |         np.x = this->x - p.x;
28 |         np.y = this->y - p.y;
29 |         return np;
30 |     }
31 | 
32 |     point operator+(const point& p)
33 |     {
34 |         point np;
35 |         np.x = this->x + p.x;
36 |         np.y = this->y + p.y;
37 |         return np;
38 |     }
39 | 
40 |     point operator*(float s)
41 |     {
42 |         point np;
43 |         np.x = s * this->x;
44 |         np.y = s * this->y;
45 |         return np;
46 |     }
47 | };
48 | 
49 | 
50 | bool isPointInTri(point p, point p0, point p1, point p2);
51 | void get_point_weight(float* weight, point p, point p0, point p1, point p2);
52 | 
53 | void _get_normal_core(
54 |     float* normal, float* tri_normal, int* triangles,
55 |     int ntri);
56 | 
57 | void _rasterize_triangles_core(
58 |     float* vertices, int* triangles,
59 |     float* depth_buffer, int* triangle_buffer, float* barycentric_weight,
60 |     int nver, int ntri,
61 |     int h, int w);
62 | 
63 | void _render_colors_core(
64 |     float* image, float* vertices, int* triangles,
65 |     float* colors,
66 |     float* depth_buffer,
67 |     int nver, int ntri,
68 |     int h, int w, int c);
69 | 
70 | void _render_texture_core(
71 |     float* image, float* vertices, int* triangles,
72 |     float* texture, float* tex_coords, int* tex_triangles,
73 |     float* depth_buffer,
74 |     int nver, int tex_nver, int ntri,
75 |     int h, int w, int c,
76 |     int tex_h, int tex_w, int tex_c,
77 |     int mapping_type);
78 | 
79 | void _write_obj_with_colors_texture(string filename, string mtl_name,
80 |     float* vertices, int* triangles, float* colors, float* uv_coords,
81 |     int nver, int ntri, int ntexver);
82 | 
83 | #endif
--------------------------------------------------------------------------------
/utils/render_lib/mesh_core_cython.pyx:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | cimport numpy as np
3 | from libcpp.string cimport string
4 | 
5 | # use the Numpy-C-API from Cython
6 | np.import_array()
7 | 
8 | # cdefine the signature of our c function
9 | cdef extern from "mesh_core.h":
10 |     void _rasterize_triangles_core(
11 |         float* vertices, int* triangles,
12 |         float* depth_buffer, int* triangle_buffer, float* barycentric_weight,
13 |         int nver, int ntri,
14 |         int h, int w)
15 | 
16 |     void _render_colors_core(
17 |         float* image, float* vertices, int* triangles,
18 |         float* colors,
19 |         float* depth_buffer,
20 |         int nver, int ntri,
21 |         int h, int w, int c)
22 | 
23 |     void _render_texture_core(
24 |         float* image, float* vertices, int* triangles,
25 |         float* texture, float* tex_coords, int* tex_triangles,
26 |         float* depth_buffer,
27 |         int nver, int tex_nver, int ntri,
28 |         int h, int w, int c,
29 |         int tex_h, int tex_w, int tex_c,
30 |         int mapping_type)
31 | 
32 |     void _get_normal_core(
33 |         float* normal, float* tri_normal, int* triangles,
34 |         int ntri)
35 | 
36 |     void _write_obj_with_colors_texture(string filename, string mtl_name,
37 |         float* vertices, int* triangles, float* colors, float* uv_coords,
38 |         int nver, int ntri, int ntexver)
39 | 
40 | def get_normal_core(np.ndarray[float, ndim=2, mode = "c"] normal not None,
41 |                     np.ndarray[float, ndim=2, mode = "c"] tri_normal not None,
42 |                     np.ndarray[int, ndim=2, mode="c"] triangles not None,
43 |                     int ntri
44 |                     ):
45 |     _get_normal_core(
46 |         <float*> np.PyArray_DATA(normal), <float*> np.PyArray_DATA(tri_normal), <int*> np.PyArray_DATA(triangles),
47 |         ntri)
48 | 
49 | def rasterize_triangles_core(
50 |         np.ndarray[float, ndim=2, mode = "c"] vertices not None,
51 |         np.ndarray[int, ndim=2, mode="c"] triangles not None,
52 |         np.ndarray[float, ndim=2, mode = "c"] depth_buffer not None,
53 |         np.ndarray[int, ndim=2, mode = "c"] triangle_buffer not None,
54 |         np.ndarray[float, ndim=2, mode = "c"] barycentric_weight not None,
55 |         int nver, int ntri,
56 |         int h, int w
57 |         ):
58 |     _rasterize_triangles_core(
59 |         <float*> np.PyArray_DATA(vertices), <int*> np.PyArray_DATA(triangles),
60 |         <float*> np.PyArray_DATA(depth_buffer), <int*> np.PyArray_DATA(triangle_buffer), <float*> np.PyArray_DATA(barycentric_weight),
61 |         nver, ntri,
62 |         h, w)
63 | 
64 | def render_colors_core(np.ndarray[float, ndim=3, mode = "c"] image not None,
65 |                        np.ndarray[float, ndim=2, mode = "c"] vertices not None,
66 |                        np.ndarray[int, ndim=2, mode="c"] triangles not None,
67 |                        np.ndarray[float, ndim=2, mode = "c"] colors not None,
68 |                        np.ndarray[float, ndim=2, mode = "c"] depth_buffer not None,
69 |                        int nver, int ntri,
70 |                        int h, int w, int c
71 |                        ):
72 |     _render_colors_core(
73 |         <float*> np.PyArray_DATA(image), <float*> np.PyArray_DATA(vertices), <int*> np.PyArray_DATA(triangles),
74 |         <float*> np.PyArray_DATA(colors),
75 |         <float*> np.PyArray_DATA(depth_buffer),
76 |         nver, ntri,
77 |         h, w, c)
78 | 
79 | def render_texture_core(np.ndarray[float, ndim=3, mode = "c"] image not None,
80 |                         np.ndarray[float, ndim=2, mode = "c"] vertices not None,
81 |                         np.ndarray[int, ndim=2, mode="c"] triangles not None,
82 |                         np.ndarray[float, ndim=3, mode = "c"] texture not None,
83 |                         np.ndarray[float, ndim=2, mode = "c"] tex_coords not None,
84 |                         np.ndarray[int, ndim=2, mode="c"] tex_triangles not None,
85 |                         np.ndarray[float, ndim=2, mode = "c"] depth_buffer not None,
86 |                         int nver, int tex_nver, int ntri,
87 |                         int h, int w, int c,
88 |                         int tex_h, int tex_w, int tex_c,
89 |                         int mapping_type
90 |                         ):
91 |     _render_texture_core(
92 |         <float*> np.PyArray_DATA(image), <float*> np.PyArray_DATA(vertices), <int*> np.PyArray_DATA(triangles),
93 |         <float*> np.PyArray_DATA(texture), <float*> np.PyArray_DATA(tex_coords), <int*> np.PyArray_DATA(tex_triangles),
94 |         <float*> np.PyArray_DATA(depth_buffer),
95 |         nver, tex_nver, ntri,
96 |         h, w, c,
97 |         tex_h, tex_w, tex_c,
98 |         mapping_type)
99 | 
100 | def write_obj_with_colors_texture_core(string filename, string mtl_name,
101 |                                        np.ndarray[float, ndim=2, mode = "c"] vertices not None,
102 |                                        np.ndarray[int, ndim=2, mode="c"] triangles not None,
103 |                                        np.ndarray[float, ndim=2, mode = "c"] colors not None,
104 |                                        np.ndarray[float, ndim=2, mode = "c"] uv_coords not None,
105 |                                        int nver, int ntri, int ntexver
106 |                                        ):
107 |     _write_obj_with_colors_texture(filename, mtl_name,
108 |                                    <float*> np.PyArray_DATA(vertices), <int*> np.PyArray_DATA(triangles), <float*> np.PyArray_DATA(colors), <float*> np.PyArray_DATA(uv_coords),
109 |                                    nver, ntri, ntexver)
110 | 
--------------------------------------------------------------------------------
/utils/render_lib/setup.py:
--------------------------------------------------------------------------------
1 | '''
2 | python setup.py build_ext -i
3 | to compile
4 | '''
5 | 
6 | # setup.py
7 | from distutils.core import setup, Extension
8 | from Cython.Build import cythonize
9 | from Cython.Distutils import build_ext
10 | import numpy
11 | 
12 | setup(
13 |     name = 'mesh_core_cython',
14 |     cmdclass={'build_ext': build_ext},
15 |     ext_modules=[Extension("mesh_core_cython",
16 |                            sources=["mesh_core_cython.pyx", "mesh_core.cpp"],
17 |                            language='c++',
18 |                            include_dirs=[numpy.get_include()])],
19 | )
20 | 
--------------------------------------------------------------------------------
/utils/rotation.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import os
3 | import pickle
4 | import numpy as np
5 | 
6 | import global_var
7 | 
8 | 
9 | def expmap2rotmat(r):
10 |     """
11 |     :param r: Axis-angle, Nx3
12 |     :return: Rotation matrix, Nx3x3
13 |     """
14 |     EPS = 1e-8
15 |     assert r.shape[1] == 3
16 |     bs = r.shape[0]
17 |     theta = np.sqrt(np.sum(np.square(r), 1, keepdims=True))
18 |     cos_theta = np.expand_dims(np.cos(theta), -1)
19 |     sin_theta = np.expand_dims(np.sin(theta), -1)
20 |     eye = np.tile(np.expand_dims(np.eye(3), 0), (bs, 1, 1))
21 |     norm_r = r / (theta + EPS)
22 |     r_1 = np.expand_dims(norm_r, 2)  # N, 3, 1
23 |     r_2 = np.expand_dims(norm_r, 1)  # N, 1, 3
24 |     zero_col = np.zeros([bs, 1]).astype(r.dtype)
25 |     skew_sym = np.concatenate([zero_col, -norm_r[:, 2:3], norm_r[:, 1:2], norm_r[:, 2:3], zero_col,
26 |                                -norm_r[:, 0:1], -norm_r[:, 1:2], norm_r[:, 0:1], zero_col], 1)
27 |     skew_sym = skew_sym.reshape(bs, 3, 3)
28 |     R = cos_theta*eye + (1-cos_theta)*np.einsum('npq,nqu->npu', r_1, r_2) + sin_theta*skew_sym
29 |     return R
30 | 
31 | 
32 | def rotmat2expmap(R):
33 |     """
34 |     :param R: Rotation matrix, Nx3x3
35 |     :return: r: Rotation vector, Nx3
36 |     """
37 |     assert R.shape[1] == R.shape[2] == 3
38 |     theta = np.arccos(np.clip((R[:, 0, 0] + R[:, 1, 1] + R[:, 2, 2] - 1) / 2, -1., 1.)).reshape([-1, 1])
39 |     r = np.stack((R[:, 2, 1]-R[:, 1, 2], R[:, 0, 2]-R[:, 2, 0], R[:, 1, 0]-R[:, 0, 1]), 1) / (2*np.sin(theta))
40 |     r_norm = r / np.sqrt(np.sum(np.square(r), 1, keepdims=True))
41 |     return theta * r_norm
42 | 
43 | 
44 | def interpolate_pose(pose1, pose2, inter_num):
45 |     """
46 |     linear interpolation between two axis-angle
47 |     """
48 |     p1 = np.expand_dims(pose1, 0)
49 |     delta = (pose2-pose1) / (inter_num + 1)
50 |     linspace = np.arange(0, inter_num+2)
51 |     for _ in range(pose1.ndim):
52 |         linspace = np.expand_dims(linspace, -1)
53 |     return linspace * delta + p1
54 | 
55 | 
56 | def flip_theta(theta, batch=False):
57 |     """
58 |     flip SMPL theta about the y-z plane
59 |     if batch is True, theta shape is Nx72, otherwise 72
60 |     """
61 |     exg_idx = [0, 2, 1, 3, 5, 4, 6, 8, 7, 9, 11, 10, 12, 14, 13, 15, 17, 16, 19, 18, 21, 20, 23, 22]
62 |     if batch:
63 |         new_theta = np.reshape(theta, [-1, 24, 3])
64 |         new_theta = new_theta[:, exg_idx]
65 |         new_theta[:, :, 1:3] *= -1
66 |     else:
67 |         new_theta = np.reshape(theta, [24, 3])
68 |         new_theta = new_theta[exg_idx]
69 |         new_theta[:, 1:3] *= -1
70 |     new_theta = new_theta.reshape(theta.shape)
71 |     return new_theta
72 | 
73 | 
74 | def root_rotation(theta):
75 |     """
76 |     rotate the root joint by 180 degrees around the x axis
77 |     """
78 |     rotmat = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=theta.dtype)
79 |     x = cv2.Rodrigues(theta[:3])[0]
80 |     y = cv2.Rodrigues(np.matmul(x, rotmat))[0][:, 0]
81 |     return np.concatenate([y, theta[3:]], 0)
82 | 
83 | 
84 | import sys
85 | def get_Apose():
86 |     if sys.version_info.major == 3:
87 |         with open(os.path.join(global_var.ROOT, 'apose.pkl'), 'rb') as f:
88 |             APOSE = np.array(pickle.load(f, encoding='latin-1')['pose']).astype(np.float32)
89 |     else:
90 |         with open(os.path.join(global_var.ROOT, 'apose.pkl'), 'rb') as f:
91 |             APOSE = np.array(pickle.load(f)['pose']).astype(np.float32)
92 |     flip_pose = flip_theta(APOSE)
93 |     APOSE[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15]] = 0
94 |     APOSE[[14, 17, 19, 21, 23]] = flip_pose[[14, 17, 19, 21, 23]]
95 |     APOSE = APOSE.reshape([72])
96 |     return APOSE
97 | 
98 | 
99 | def normalize_y_rotation(raw_theta):
100 |     """
101 |     rotate along y axis so that root rotation can always face the camera
102 |     theta should be a [3] or [72] numpy array
103 |     """
104 |     only_global = True
105 |     if raw_theta.shape == (72,):
106 |         theta = raw_theta[:3]
107 |         only_global = False
108 |     else:
109 |         theta = raw_theta[:]
110 |     raw_rot = cv2.Rodrigues(theta)[0]
111 |     rot_z = raw_rot[:, 2]
112 |     # we should rotate along y axis counter-clockwise for t rads to make the object face the camera
113 |     if rot_z[2] == 0:
114 |         t = (rot_z[0] / np.abs(rot_z[0])) * np.pi / 2
115 |     elif rot_z[2] > 0:
116 |         t = np.arctan(rot_z[0]/rot_z[2])
117 |     else:
118 |         t = np.arctan(rot_z[0]/rot_z[2]) + np.pi
119 |     cost, sint = np.cos(t), np.sin(t)
120 |     norm_rot = np.array([[cost, 0, -sint], [0, 1, 0], [sint, 0, cost]])
121 |     final_rot = np.matmul(norm_rot, raw_rot)
122 |     final_theta = cv2.Rodrigues(final_rot)[0][:, 0]
123 |     if not only_global:
124 |         return np.concatenate([final_theta, raw_theta[3:]], 0)
125 |     else:
126 |         return final_theta
127 | 
128 | 
129 | def diff_y_rotation(theta1, theta2):
130 |     """
131 |     difference between the y rotations of theta2 and theta1
132 |     return value is an angle which lies in [-pi, pi]
133 |     """
134 |     if theta1.shape == (72,):
135 |         theta1 = theta1[:3]
136 |     if theta2.shape == (72,):
137 |         theta2 = theta2[:3]
138 |     rot_z_1 = cv2.Rodrigues(theta1)[0][:, 2]
139 |     rot_z_2 = cv2.Rodrigues(theta2)[0][:, 2]
140 |     rot_z_1[1] = 0
141 |     rot_z_2[1] = 0
142 |     rot_z_1 = rot_z_1 / np.linalg.norm(rot_z_1)
143 |     rot_z_2 = rot_z_2 / np.linalg.norm(rot_z_2)
144 |     dtheta = np.arccos(np.clip(np.dot(rot_z_1, rot_z_2), a_min=-1, a_max=1))
145 |     if np.cross(rot_z_1, rot_z_2)[1] < 0:
146 |         dtheta *= -1
147 |     return dtheta
148 | 
149 | 
150 | def rotate_y(raw_theta, deg):
151 |     """
152 |     rotate around the y axis by the given angle (in radians, despite the name `deg`)
153 |     """
154 |     only_global = True
155 |     if raw_theta.shape == (72,):
156 |         theta = raw_theta[:3]
157 |         only_global = False
158 |     else:
159 |         theta = raw_theta[:]
160 |     raw_rot = cv2.Rodrigues(theta)[0]
161 |     cost, sint = np.cos(deg), np.sin(deg)
162 |     y_rot = np.array([[cost, 0, -sint], [0, 1, 0], [sint, 0, cost]])
163 |     final_rot = np.matmul(y_rot, raw_rot)
164 |     final_theta = cv2.Rodrigues(final_rot)[0][:, 0]
165 |     if not only_global:
166 |         return np.concatenate([final_theta, raw_theta[3:]], 0)
167 |     else:
168 |         return final_theta
169 | 
170 | 
171 | def angle_y(raw_theta):
172 |     """
173 |     calculate the y-axis angle t (radians) by which the root rotation would have to be rotated to face the camera
174 |     """
175 |     if raw_theta.shape == (72,):
176 |         theta = raw_theta[:3]
177 |     else:
178 |         theta = raw_theta[:]
179 |     raw_rot = cv2.Rodrigues(theta)[0]
180 |     rot_z = raw_rot[:, 2]
181 |     # we should rotate along y axis counter-clockwise for t rads to make the object face the camera
182 |     if rot_z[2] == 0:
183 |         t = (rot_z[0] / np.abs(rot_z[0])) * np.pi / 2
184 |     elif rot_z[2] > 0:
185 |         t = np.arctan(rot_z[0]/rot_z[2])
186 |     else:
187 |         t = np.arctan(rot_z[0]/rot_z[2]) + np.pi
188 |     return t
189 | 
190 | 
191 | if __name__ == '__main__':
192 |     apose = get_Apose().astype(np.float32)
193 |     np.save(os.path.join(global_var.ROOT, 'apose.npy'), apose)
--------------------------------------------------------------------------------
/utils/smpl.py:
--------------------------------------------------------------------------------
1 | '''
2 | file: SMPL.py
3 | 
4 | date: 2018_05_03
5 | author: zhangxiong(1025679612@qq.com)
6 | note: the algorithm is adapted from the original SMPL implementation
7 | 
8 | modified by Zhouyingcheng Liao(zycliao@gmail.com)
9 | '''
10 | import torch
11 | import os
12 | import sys
13 | import numpy as np
14 | import torch.nn as nn
15 | import torch.nn.functional as F
16 | 
17 | import global_var
18 | 
19 | 
20 | def batch_rodrigues(theta):
21 |     # theta N x 3
22 |     l1norm = torch.norm(theta + 1e-8, p=2, dim=1)
23 |     angle = torch.unsqueeze(l1norm, -1)
24 |     normalized = torch.div(theta, angle)
25 |     angle = angle * 0.5
26 |     v_cos = torch.cos(angle)
27 |     v_sin = torch.sin(angle)
28 |     quat = torch.cat([v_cos, v_sin * normalized], dim=1)
29 | 
30 |     return quat2mat(quat)
31 | 
32 | 
33 | def quat2mat(quat):
34 |     """Convert quaternion coefficients to rotation matrix.
35 |     Args:
36 |         quat: size = [B, 4] 4 <===>(w, x, y, z)
37 |     Returns:
38 |         Rotation matrix corresponding to the quaternion -- size = [B, 3, 3]
39 |     """
40 |     norm_quat = quat
41 |     norm_quat = norm_quat / norm_quat.norm(p=2, dim=1, keepdim=True)
42 |     w, x, y, z = norm_quat[:, 0], norm_quat[:, 1], norm_quat[:, 2], norm_quat[:, 3]
43 | 
44 |     B = quat.size(0)
45 | 
46 |     w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2)
47 |     wx, wy, wz = w * x, w * y, w * z
48 |     xy, xz, yz = x * y, x * z, y * z
49 | 
50 |     rotMat = torch.stack([w2 + x2 - y2 - z2, 2 * xy - 2 * wz, 2 * wy + 2 * xz,
51 |                           2 * wz + 2 * xy, w2 - x2 + y2 - z2, 2 * yz - 2 * wx,
52 |                           2 * xz - 2 * wy, 2 * wx + 2 * yz, w2 - x2 - y2 + z2], dim=1).view(B, 3, 3)
53 |     return rotMat
54 | 
55 | 
56 | def batch_global_rigid_transformation(Rs, Js, parent, device, rotate_base=False):
57 |     N = Rs.shape[0]
58 |     if rotate_base:
59 |         np_rot_x = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=np.float64)
60 |         np_rot_x = np.reshape(np.tile(np_rot_x, [N, 1]), [N, 3, 3])
61 |         rot_x = torch.from_numpy(np_rot_x).float().to(device)
62 |         root_rotation = torch.matmul(Rs[:, 0, :, :], rot_x)
63 |     else:
64 |         root_rotation = Rs[:, 0, :, :]
65 |     Js = torch.unsqueeze(Js, -1)
66 | 
67 |     def make_A(R, t):
68 |         R_homo = F.pad(R, [0, 0, 0, 1, 0, 0])
69 |         t_homo = torch.cat([t, torch.ones(N, 1, 1).to(device)], dim=1)
70 |         return torch.cat([R_homo, t_homo], 2)
71 | 
72 |     A0 = make_A(root_rotation, Js[:, 0])
73 |     results = [A0]
74 | 
75 |     for i in range(1, parent.shape[0]):
76 |         j_here = Js[:, i] - Js[:, parent[i]]
77 |         A_here = make_A(Rs[:, i], j_here)
78 |         res_here = torch.matmul(results[parent[i]], A_here)
79 |         results.append(res_here)
80 | 
81 |     results = torch.stack(results, dim=1)
82 | 
83 |     new_J = results[:, :, :3, 3]
84 |     Js_w0 = torch.cat([Js, torch.zeros(N, 24, 1, 1).to(device)], dim=2)
85 |     init_bone = torch.matmul(results, Js_w0)
86 |     init_bone = F.pad(init_bone, [3, 0, 0, 0, 0, 0, 0, 0])
87 |     A = results - init_bone
88 | 
89 |     return new_J, A
90 | 
91 | 
92 | def batch_lrotmin(theta):
93 |     theta = theta[:, 3:].contiguous()
94 |     Rs = batch_rodrigues(theta.view(-1, 3))
95 | 
96 |     e = torch.eye(3).float()
97 |     Rs = Rs - e  # subtract identity: R - I per joint rotation
98 | 
99 |     return Rs.view(-1, 23 * 9)
100 | 
101 | 
102 | def batch_orth_proj(X, camera):
103 |     '''
104 |     X is N x num_points x 3
105 |     '''
106 |     camera = camera.view(-1, 1, 3)
107 |     X_trans = X[:, :, :2] + camera[:, :, 1:]
108 |     shape = X_trans.shape
109 |     return (camera[:, :, 0] * X_trans.view(shape[0], -1)).view(shape)
110 | 
111 | 
112 | class SMPL(nn.Module):
113 |     def __init__(self, gender='neutral', joint_type='cocoplus'):
114 |         """
115 |         cocoplus joints order:
116 |         0 R ankle      9 L shoulder
117 |         1 R knee       10 L elbow
118 |         2 R hip        11 L wrist
119 |         3 L hip        12 neck
120 |         4 L knee       13 head *
121 |         5 L ankle      14 nose
122 |         6 R wrist      15 L eye
123 |         7 R elbow      16 R eye
124 |         8 R shoulder   17 L ear
125 |                        18 R ear
126 |         """
127 |         super(SMPL, self).__init__()
128 |         model_path = os.path.join(global_var.RES_DIR, 'smpl_model_{}.npz').format(gender)
129 |         if joint_type not in ['cocoplus', 'lsp', 'h36m']:
130 |             msg = 'unknown joint type: {}, it must be either "cocoplus" or "lsp" or "h36m"'.format(joint_type)
131 |             sys.exit(msg)
132 | 
133 |         self.model_path = model_path
134 |         self.joint_type = joint_type
135 |         # with open(model_path, 'rb') as reader:
136 |         #     model = pickle.load(reader, encoding='iso-8859-1')
137 |         model = np.load(model_path)
138 | 
139 |         self.faces = model['f']
140 | 
141 |         np_v_template = np.array(model['v_template'], dtype=np.float64)
142 | 
143 |         self.register_buffer('v_template', torch.from_numpy(np_v_template).float())
144 |         self.size = [np_v_template.shape[0], 3]
145 | 
146 |         np_shapedirs = np.array(model['shapedirs'], dtype=np.float64)
147 |         self.num_betas = np_shapedirs.shape[-1]
148 |         np_shapedirs = np.reshape(np_shapedirs, [-1, self.num_betas]).T
149 |         self.register_buffer('shapedirs', torch.from_numpy(np_shapedirs).float())
150 | 
151 |         np_J_regressor = np.array(model['J_regressor'], dtype=np.float64)
152 |         self.register_buffer('J_regressor', torch.from_numpy(np_J_regressor).float())
153 | 
154 |         np_posedirs = np.array(model['posedirs'], dtype=np.float64)
155 |         num_pose_basis = np_posedirs.shape[-1]
156 |         np_posedirs = np.reshape(np_posedirs, [-1, num_pose_basis]).T
157 |         self.register_buffer('posedirs', torch.from_numpy(np_posedirs).float())
158 | 
159 |         self.parents = np.array(model['kintree_table'])[0].astype(np.int32)
160 | 
161 |         if joint_type == 'h36m':
162 |             np_joint_regressor = np.array(model['h36m_regressor'], dtype=np.float64)
163 |         else:
164 |             np_joint_regressor = np.array(model['cocoplus_regressor'], dtype=np.float64)
165 |         if joint_type == 'lsp':
166 |             self.register_buffer('joint_regressor', torch.from_numpy(np_joint_regressor[:, :14]).float())
167 |         else:
168 |             self.register_buffer('joint_regressor', torch.from_numpy(np_joint_regressor).float())
169 | 
170 |         np_weights = np.array(model['weights'], dtype=np.float64)
171 | 
172 |         vertex_count = np_weights.shape[0]
173 |         vertex_component = np_weights.shape[1]
174 | 
175 |         self.register_buffer('weight', torch.from_numpy(np_weights).float().reshape(1, vertex_count, vertex_component))
176 | 
177 |         self.register_buffer('e3', torch.eye(3).float())
178 |         self.seg_score = torch.nn.Parameter(torch.normal(torch.zeros(13776, 14, dtype=torch.float), 1.), requires_grad=True)
179 |         self.cur_device = None
180 | 
181 |     def save_obj(self, verts, obj_mesh_name):
182 |         if self.faces is None:
183 |             msg = 'obj not saveable!'
184 |             sys.exit(msg)
185 | 
186 |         with open(obj_mesh_name, 'w') as fp:
187 |             for v in verts:
188 |                 fp.write('v %f %f %f\n' % (v[0], v[1], v[2]))
189 | 
190 |             for f in self.faces:  # Faces are 1-based, not 0-based in obj files
191 |                 fp.write('f %d %d %d\n' % (f[0] + 1, f[1] + 1, f[2] + 1))
192 | 
193 |     def forward(self, beta, theta, get_skin=True, rotate_base=False):
194 |         if not self.cur_device:
195 |             device = beta.device
196 |             self.cur_device = torch.device(device.type, device.index)
197 | 
198 |         num_batch = beta.shape[0]
199 | 
200 |         v_shaped = torch.matmul(beta, self.shapedirs).view(-1, self.size[0], self.size[1]) + self.v_template
201 |         Jx = torch.matmul(v_shaped[:, :, 0], self.J_regressor)
202 |         Jy = torch.matmul(v_shaped[:, :, 1], self.J_regressor)
203 |         Jz = torch.matmul(v_shaped[:, :, 2], self.J_regressor)
204 |         J = torch.stack([Jx, Jy, Jz], dim=2)
205 | 
206 |         Rs = batch_rodrigues(theta.contiguous().view(-1, 3)).view(-1, 24, 3, 3)
207 |         pose_feature = (Rs[:, 1:, :, :] - self.e3).view(-1, 207)  # (R - I) for the 23 non-root joints
208 |         v_posed = torch.matmul(pose_feature, self.posedirs).view(-1, self.size[0], self.size[1]) + v_shaped
209 |         self.J_transformed, A = batch_global_rigid_transformation(Rs, J, self.parents, self.cur_device, rotate_base=rotate_base)
210 | 
211 |         W = self.weight.view(1, 6890, 24).repeat(num_batch, 1, 1)
212 |         T = torch.matmul(W, A.view(num_batch, 24, 16)).view(num_batch, -1, 4, 4)
213 | 
214 |         v_posed_homo = torch.cat([v_posed, torch.ones(num_batch, v_posed.shape[1], 1, device=self.cur_device)], dim=2)
215 |         v_homo = torch.matmul(T, torch.unsqueeze(v_posed_homo, -1))
216 | 
217 |         verts = v_homo[:, :, :3, 0]
218 | 
219 |         joint_x = torch.matmul(verts[:, :, 0], self.joint_regressor)
220 |         joint_y = torch.matmul(verts[:, :, 1], self.joint_regressor)
221 |         joint_z = torch.matmul(verts[:, :, 2], self.joint_regressor)
222 | 
223 |         joints = torch.stack([joint_x, joint_y, joint_z], dim=2)
224 |         self.A = A
225 | 
226 |         if get_skin:
227 |             return verts, joints, Rs
228 |         else:
229 |             return joints
230 | 
231 | 
232 | class SMPLNP(object):
233 |     def __init__(self, gender='neutral', cuda=False):
234 |         self.base = SMPL(gender)
235 |         if cuda:
236 |             self.cuda = True
237 |             self.base.cuda()
238 |         else:
239 |             self.cuda = False
240 | 
241 |     def __call__(self, beta, theta, batch=False):
242 |         if not batch:
243 |             beta = np.expand_dims(beta, 0)
244 |             theta = np.expand_dims(theta, 0)
245 |         beta = torch.FloatTensor(beta)
246 |         theta = torch.FloatTensor(theta)
247 |         if self.cuda:
248 |             beta = beta.cuda()
249 |             theta = theta.cuda()
250 |         vs, js, rs = [], [], []
251 |         N = beta.shape[0]
252 |         iter_num = int(np.ceil(np.true_divide(N, 100)))
253 |         for i in range(iter_num):
254 |             lp = i*100
255 |             rp = np.minimum((i+1)*100, N)
256 |             v, j, r = self.base.forward(beta[lp: rp], theta[lp: rp])
257 |             vs.append(v.detach().cpu().numpy())
258 |             js.append(j.detach().cpu().numpy())
259 |             rs.append(r.detach().cpu().numpy())
260 |         vs = np.concatenate(vs, 0)
261 |         js = np.concatenate(js, 0)
262 |         rs = np.concatenate(rs, 0)
263 |         if not batch:
264 |             vs = vs[0]
265 |             js = js[0]
266 |             rs = rs[0]
267 |         return vs, js, rs
268 | 
--------------------------------------------------------------------------------
/visualize_dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import os.path as osp
3 | import cv2
4 | import numpy as np
5 | import pickle
6 | from utils.renderer import Renderer
7 | from smpl_torch import SMPLNP
8 | from global_var import ROOT
9 | 
10 | 
11 | if __name__ == '__main__':
12 |     garment_class = 't-shirt'
13 |     gender = 'female'
14 |     img_size = 512
15 |     renderer = Renderer(img_size)
16 |     smpl = SMPLNP(gender=gender, cuda=False)
17 | 
18 |     pose_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'pose')
19 |     shape_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'shape')
20 |     ss_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'style_shape')
21 |     pose_vis_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'pose_vis')
22 |     ss_vis_dir = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'style_shape_vis')
23 |     pivots_path = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'pivots.txt')
24 |     avail_path = osp.join(ROOT, '{}_{}'.format(garment_class, gender), 'avail.txt')
25 |     os.makedirs(pose_vis_dir, exist_ok=True)
26 |     os.makedirs(ss_vis_dir, exist_ok=True)
27 | 
28 |     with open(os.path.join(ROOT, 'garment_class_info.pkl'), 'rb') as f:
29 |         class_info = pickle.load(f, encoding='latin-1')
30 |     body_f = smpl.base.faces
31 |     garment_f = class_info[garment_class]['f']
32 | 
33 |     # 1. Visualize pivots data
34 |     with open(pivots_path) as f:
35 |         all_pivots = f.read().strip().splitlines()
36 |     for ss in all_pivots:
37 |         beta_str, gamma_str = ss.split('_')
38 |         pose_ss_dir = osp.join(pose_dir, ss)
39 |         if not osp.exists(pose_ss_dir):
40 |             continue
41 |         unpose_names = [k for k in os.listdir(pose_ss_dir) if k.startswith('unposed') and k.endswith('.npy')]
42 |         beta = np.load(osp.join(shape_dir, 'beta_{}.npy'.format(beta_str)))
43 |         for unpose_name in unpose_names:
44 |             seq_str = unpose_name.replace('unposed_', '').replace('.npy', '')
45 |             pose_path = osp.join(pose_ss_dir, 'poses_{}.npz'.format(seq_str))
46 |             unpose_path = osp.join(pose_ss_dir, unpose_name)
47 |             save_path = osp.join(pose_vis_dir, ss + '_{}.mp4'.format(seq_str))
48 | 
49 |             unpose_v = np.load(unpose_path)
50 |             thetas = np.load(pose_path)['thetas']
51 |             n = thetas.shape[0]
52 | 
53 |             all_body, all_gar = smpl(np.tile(beta[None], [n, 1]), thetas, unpose_v, garment_class, batch=True)
54 | 
55 |             video_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'XVID'), 6., (img_size*2, img_size))
56 | 
57 |             for i, (body_v, gar_v) in enumerate(zip(all_body, all_gar)):
58 |                 img = renderer([body_v, gar_v], [body_f, garment_f],
59 |                                [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], center=True)
60 |                 img_back = renderer([body_v, gar_v], [body_f, garment_f],
61 |                                     [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], center=True, euler=(180, 0, 0))
62 |                 video_writer.write(np.concatenate((img, img_back), 1))
63 |             video_writer.release()
64 |             print("{} written".format(save_path))
65 | 
66 |     # 2. Visualize all style_shape in canonical pose
67 |     with open(avail_path) as f:
68 |         all_ss = f.read().strip().splitlines()
69 |     apose = np.load(osp.join(ROOT, 'apose.npy'))
70 |     for ss in all_ss:
71 |         beta_str, gamma_str = ss.split('_')
72 |         unpose_path = osp.join(ss_dir, 'beta{}_gamma{}.npy'.format(beta_str, gamma_str))
73 |         if not osp.exists(unpose_path):
74 |             continue
75 | 
76 |         save_path = osp.join(ss_vis_dir, 'beta{}_gamma{}.jpg'.format(beta_str, gamma_str))
77 |         beta = np.load(osp.join(shape_dir, 'beta_{}.npy'.format(beta_str)))
78 |         unpose_v = np.load(unpose_path)
79 |         body_v, gar_v = smpl(beta, apose, unpose_v, garment_class, batch=False)
80 | 
81 |         img = renderer([body_v, gar_v], [body_f, garment_f],
82 |                        [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], center=True)
83 |         img_back = renderer([body_v, gar_v], [body_f, garment_f],
84 |                             [np.array([0.6, 0.6, 0.9]), np.array([0.8, 0.5, 0.3])], center=True, euler=(180, 0, 0))
85 |         cv2.imwrite(save_path, np.concatenate((img, img_back), 1))
86 |         print("{} written".format(save_path))
87 | 
88 | 
--------------------------------------------------------------------------------
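A quick way to sanity-check the compiled renderer (a minimal sketch, not a file from the repository): after running `python setup.py build_ext -i` inside `utils/render_lib`, the module can be called directly with C-contiguous arrays whose shapes follow the `.pyx` signatures above. The single triangle below is an arbitrary illustrative input.

```
import numpy as np
import mesh_core_cython  # built in utils/render_lib via `python setup.py build_ext -i`

h, w = 64, 64
# per-vertex (x, y, depth) in pixel coordinates
vertices = np.array([[10., 10., 0.5],
                     [50., 12., 0.5],
                     [30., 50., 0.5]], dtype=np.float32)
triangles = np.array([[0, 1, 2]], dtype=np.int32)

# the C kernel keeps the largest depth per pixel, so initialize very low
depth_buffer = np.full((h, w), -99999., dtype=np.float32)
triangle_buffer = -np.ones((h, w), dtype=np.int32)           # triangle id per pixel
barycentric_weight = np.zeros((h, 3 * w), dtype=np.float32)  # 3 weights per pixel

mesh_core_cython.rasterize_triangles_core(
    vertices, triangles,
    depth_buffer, triangle_buffer, barycentric_weight,
    vertices.shape[0], triangles.shape[0], h, w)

print('covered pixels:', (triangle_buffer >= 0).sum())
```

Note the z-test direction: a pixel is overwritten when the incoming depth is greater than the stored one (`p_depth > depth_buffer[y*w + x]`), so larger depth values are treated as closer to the camera.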