├── .gitignore
├── LICENSE.md
├── README.md
├── generate_toy_dataset.py
├── inverse_rig.py
├── logo
│   └── SEED.jpg
├── model.py
├── requirements.txt
├── rig.py
└── train_rig_approximation.py
/.gitignore:
--------------------------------------------------------------------------------
1 | checkpoints
2 | dataset
3 | __pycache__
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | Copyright (c) 2023 Electronic Arts Inc. All rights reserved.
2 |
3 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
4 |
5 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
6 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
7 | 3. Neither the name of Electronic Arts, Inc. ("EA") nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
8 |
9 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
10 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Rig Inversion by Training a Differentiable Rig Function
2 |
3 | This code serves as an example of the technique described in the paper *Rig Inversion by Training a Differentiable Rig Function*, published at SIGGRAPH Asia 2022.
4 |
5 | [Rig Inversion by Training a Differentiable Rig Function](https://arxiv.org/abs/2301.09567)
6 |
7 | Rig inversion is demonstrated using a toy rig.
8 |
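The toy rig (`rig.py`) maps six rig parameters in `[0, 1]` to the eight vertices of a deformed cube. A minimal illustration of calling it:

```python
import numpy as np
from rig import rig_model

mesh = rig_model(np.random.rand(6))  # six rig parameters, each in [0, 1]
print(mesh.shape)  # (8, 3): the deformed cube's vertex positions
```
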
9 | ## How to use
10 |
11 | ### Step 1: Generate a training dataset using your rig
12 |
13 | python generate_toy_dataset.py
14 |
15 | ### Step 2: Train a model to approximate the rig and test it using animations
16 |
17 | python train_rig_approximation.py
18 |
19 | ### Step 3: Invert the rig using the rig approximation trained in Step 2
20 |
21 | python inverse_rig.py
22 |
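Steps 1 and 2 write their outputs to `dataset/` and `checkpoints/` respectively; both folders are listed in `.gitignore`.
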
23 | ## Contents
24 |
25 | - `generate_toy_dataset.py`: Generates a dataset for training the rig approximation of our toy rig
26 | - `inverse_rig.py`: Inverts the rig for the test mesh data using a trained rig approximation (see the sketch below)
27 | - `model.py`: Model definition for rig approximation and rig inversion
28 | - `rig.py`: Definition of our toy rig function
29 | - `train_rig_approximation.py`: Trains a rig approximation using a dataset of rig function data points
30 |
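Because the trained rig approximation is differentiable, the inversion encoder can be trained with a purely mesh-to-mesh loss that back-propagates through the frozen approximation. Below is a condensed, illustrative sketch of the core training step from `inverse_rig.py`; the random tensors stand in for real capture data:

```python
import torch
import torch.nn as nn
from model import make_rig_2_mesh_model, make_mesh_2_rig_model

num_ctrl, num_vertex_values = 6, 8 * 3  # toy rig: 6 parameters, 8 vertices * xyz

decoder = make_rig_2_mesh_model(num_ctrl, [1024, 1024], num_vertex_values)  # rig approximation
decoder.eval()  # pretrained in the real pipeline; only the encoder is optimized
encoder = make_mesh_2_rig_model(num_ctrl, [512], num_vertex_values)  # rig inversion

vertices = torch.rand(32, num_vertex_values)          # stand-in batch of captured meshes
rig_ctrl = encoder(vertices)                          # mesh -> predicted rig parameters
approx_mesh = decoder(rig_ctrl)                       # parameters -> approximated mesh
loss = nn.functional.mse_loss(approx_mesh, vertices)  # mesh-to-mesh loss only
loss.backward()  # gradients reach the encoder through the differentiable rig
```
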
31 | ## References
32 |
33 | If you use this technique, please cite the paper:
34 |
35 | > *Marquis Bolduc, Mathieu and Phan, Hau Nghiep*. **[Rig Inversion by Training a Differentiable Rig Function](https://arxiv.org/abs/2301.09567)**. SIGGRAPH Asia 2022 Technical Communications.
36 |
37 | BibTeX:
38 |
39 | ```
40 | @inproceedings{10.1145/3550340.3564218,
41 | author = {Marquis Bolduc, Mathieu and Phan, Hau Nghiep},
42 | title = {Rig Inversion by Training a Differentiable Rig Function},
43 | year = {2022},
44 | isbn = {9781450394659},
45 | publisher = {Association for Computing Machinery},
46 | address = {New York, NY, USA},
47 | url = {https://doi.org/10.1145/3550340.3564218},
48 | doi = {10.1145/3550340.3564218},
49 | abstract = {Rig inversion is the problem of creating a method that can find the rig parameter vector that best approximates a given input mesh. In this paper we propose to solve this problem by first obtaining a differentiable rig function by training a multi layer perceptron to approximate the rig function. This differentiable rig function can then be used to train a deep learning model of rig inversion.},
50 | booktitle = {SIGGRAPH Asia 2022 Technical Communications},
51 | articleno = {15},
52 | numpages = {4},
53 | keywords = {computer animation, neural networks, rig inversion},
54 | location = {Daegu, Republic of Korea},
55 | series = {SA '22}
56 | }
57 | ```
58 |
59 | ## Authors
60 |
61 | ![SEED logo](logo/SEED.jpg)
62 | Search for Extraordinary Experiences Division (SEED) - Electronic Arts - http://seed.ea.com
63 | We are a cross-disciplinary team within EA Worldwide Studios.
64 | Our mission is to explore, build and help define the future of interactive entertainment.
65 |
66 | This technique was created by Mathieu Marquis Bolduc and Hau Nghiep Phan.
67 |
68 | ## Licenses
69 |
70 | - The source code is released under the *BSD 3-Clause License*, as detailed in [LICENSE.md](LICENSE.md)
71 |
--------------------------------------------------------------------------------
/generate_toy_dataset.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright (c) 2021-2024 Electronic Arts Inc. All Rights Reserved
3 | #
4 |
5 | import os
6 | import numpy as np
7 | import pathlib
8 | from rig import rig_model
9 |
10 | def generate_dataset(num_samples=1000):
11 |     ###
12 |     ### Generate a simple toy dataset of vertices by applying random transformations to a unit cube
13 |     ### These transformations define a toy rig mapping rig parameters to vertex positions
14 |     ### In a real use case this dataset would be generated by running the actual animation rig
15 |     ###
16 |
17 | current_folder = pathlib.Path(__file__).parent.resolve()
18 | rig_parameters_dataset = []
19 | mesh_vertices_dataset = []
20 |
21 | for sample in range(num_samples):
22 |
23 |         # All rig parameters of our toy rig are in [0, 1]
24 | rig_parameters = np.random.rand(6)
25 |
26 | mesh = rig_model(rig_parameters)
27 |
28 | rig_parameters_dataset.append(rig_parameters)
29 | mesh_vertices_dataset.append(mesh)
30 |
31 | dataset_folder = os.path.join(current_folder, 'dataset')
32 | os.makedirs(dataset_folder, exist_ok=True)
33 | np.savez(os.path.join(dataset_folder, 'dataset.npz'), rig_parameters_dataset, mesh_vertices_dataset)
34 |
35 |     ###
36 |     ### We also generate a smooth synthetic animation
37 |     ### This would normally be captured or generated animation we wish to find rig parameters for.
38 |     ###
39 |
40 | anim_length = 128
41 |
42 | start = np.random.rand(6)
43 | end = np.random.rand(6)
44 | deltas = end-start
45 | anim = [start + (x/anim_length)*deltas for x in range(anim_length)]
46 | anim_4D = [rig_model(rig_parameters) for rig_parameters in anim]
47 |
48 | np.save(os.path.join(dataset_folder, 'anim.npy'), anim)
49 | np.save(os.path.join(dataset_folder, 'anim_4D.npy'), anim_4D)
50 |
51 | if __name__ == "__main__":
52 |
53 | generate_dataset()
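54 | 
55 |     # Illustrative sanity check: reload the saved arrays and report their shapes.
56 |     # np.savez stored the positional arrays under the default keys 'arr_0' and 'arr_1'.
57 |     dataset_folder = os.path.join(pathlib.Path(__file__).parent.resolve(), 'dataset')
58 |     data = np.load(os.path.join(dataset_folder, 'dataset.npz'))
59 |     print('rig parameters:', data['arr_0'].shape, 'meshes:', data['arr_1'].shape)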
--------------------------------------------------------------------------------
/inverse_rig.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright (c) 2021-2024 Electronic Arts Inc. All Rights Reserved
3 | #
4 |
5 | import os
6 | from tqdm import tqdm
7 | import numpy as np
8 | import pathlib
9 | import torch
10 | import torch.nn as nn
11 | from torch.utils.data import DataLoader
12 | from model import make_rig_2_mesh_model, make_mesh_2_rig_model
13 | from rig import unit_cube, rig_model
14 |
15 | def train():
16 |
17 | #
18 |     # Train an encoder to invert the rig in a self-supervised fashion
19 | # for a particular set of data
20 | # using a pretrained differentiable rig approximation
21 | #
22 |
23 | current_folder = pathlib.Path(__file__).parent.resolve()
24 | dataset_folder = os.path.join(current_folder, 'dataset')
25 | checkpoint_folder = os.path.join(current_folder, 'checkpoints')
26 | rig2mesh_checkpoint_path = os.path.join(checkpoint_folder, 'rig2mesh.pth.tar')
27 |
28 | model_shape = [512]
29 |
30 | rig2mesh_checkpoint = torch.load(rig2mesh_checkpoint_path)
31 | rig2mesh_model_shape = rig2mesh_checkpoint['model_shape']
32 | num_ctrl = rig2mesh_checkpoint['num_ctrl']
33 | num_vertices = rig2mesh_checkpoint['num_vertices']
34 |
35 | decoder_model = make_rig_2_mesh_model(num_ctrl, rig2mesh_model_shape, num_vertices*3)
36 | decoder_model.load_state_dict(rig2mesh_checkpoint['model_state_dict'])
37 | decoder_model.eval()
38 | print('Decoder/rig approximation model')
39 | print(decoder_model)
40 |
41 | encoder_model = make_mesh_2_rig_model(num_ctrl, model_shape, num_vertices*3)
42 | encoder_model.train()
43 | print('Encoder/rig inversion model')
44 | print(encoder_model)
45 |
46 | optimizer = torch.optim.Adam(encoder_model.parameters(), lr=1e-5, betas=(0.9, 0.999))
47 | criterion = nn.MSELoss()
48 | lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=20, eps=1e-6)
49 |
50 | ## This is the "capture" data we want to find rig parameters for
51 | anim_4D = np.load(os.path.join(dataset_folder, 'anim_4D.npy'))
52 | anim_4D_flatten = np.asarray([x.flatten() for x in anim_4D])
53 |
54 | dataloader = DataLoader(anim_4D_flatten, 32, num_workers=0, pin_memory=True, drop_last=False, shuffle=True)
55 |
56 | epoch = 0
57 | best_loss = 1000000
58 | early_stop_count = 0
59 | early_stop_patience = 50
60 |
61 | while True:
62 | num_batch = 0
63 | sum_loss = 0
64 |
65 | for batch in tqdm(dataloader):
66 |
67 | # Train the encoder in a self-supervised fashion
68 | vertices = batch.float()
69 | optimizer.zero_grad()
70 | rig_ctrl = encoder_model(vertices)
71 | rig_output = decoder_model(rig_ctrl)
72 | loss = criterion(rig_output, vertices).mean() # loss is mesh to mesh, there is no loss on the rig parameters here.
73 | sum_loss += loss.item()
74 | num_batch += len(vertices)
75 | loss.backward()
76 | optimizer.step()
77 |
78 |             # This is an improvement not published in the paper. For most rigs, zero rig parameters are expected to produce a known neutral pose, and vice versa.
79 |             # We can use this to regularize the training by feeding the neutral mesh to the encoder and expecting zero rig parameters.
80 | if epoch > 0:
81 | optimizer.zero_grad()
82 | zero_output = encoder_model(torch.tensor(np.expand_dims(unit_cube.flatten(),0)).float())
83 | loss = zero_output.mean()
84 | loss.backward()
85 | optimizer.step()
86 |
87 | sum_loss /= num_batch
88 |
89 | if (sum_loss) < best_loss:
90 | early_stop_count = 0
91 | best_loss = (sum_loss)
92 | elif early_stop_count >= early_stop_patience:
93 | break
94 | else:
95 | early_stop_count += 1
96 |
97 | print(epoch, "training loss", sum_loss, 'lr', optimizer.param_groups[0]['lr'], 'early_stop_count', early_stop_count)
98 | lr_scheduler.step(sum_loss)
99 |
100 | epoch += 1
101 |
102 | return encoder_model
103 |
104 | def test(encoder_model):
105 |
106 | #
107 |     # Test how successful the rig inversion was by running the rig parameters produced by the encoder through the *actual* rig.
108 | #
109 |
110 | encoder_model.eval()
111 |
112 | current_folder = pathlib.Path(__file__).parent.resolve()
113 | dataset_folder = os.path.join(current_folder, 'dataset')
114 | anim_4D = np.load(os.path.join(dataset_folder, 'anim_4D.npy'))
115 |
116 |     # Use the encoder to get rig parameters
117 | found_anim = np.array([encoder_model(torch.tensor(mesh.flatten()).float()).detach().numpy() for mesh in anim_4D])
118 |
119 | # to test we apply these rig parameters to the REAL rig
120 | found_4d = np.array([rig_model(frame) for frame in found_anim])
121 |
122 |     average_vertex_error = np.linalg.norm(found_4d - anim_4D, axis=-1).mean()
123 |
124 |     print('Euclidean rig inversion error:', average_vertex_error)
125 |     print('Unlike the decoder model, which approximates the rig, the encoder model that inverts the rig is thrown away. A new model should be trained to invert the rig on new data.')
126 |
127 | if __name__ == "__main__":
128 |
129 | encoder_model = train()
130 | test(encoder_model)
--------------------------------------------------------------------------------
/logo/SEED.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/electronicarts/rig-inversion/25b49f89f57afd2bb2df80fa728021ef5b80bc68/logo/SEED.jpg
--------------------------------------------------------------------------------
/model.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright (c) 2021-2024 Electronic Arts Inc. All Rights Reserved
3 | #
4 |
5 | import torch.nn as nn
6 | import numpy as np
7 |
8 | def make_rig_2_mesh_model(num_input, shape, num_output):
9 |
10 | nn_layers = [nn.Linear(num_input, shape[0]), nn.LeakyReLU()]
11 | for layer0, layer1 in zip(shape[:-1], shape[1:]):
12 | nn_layers.append(nn.Linear(layer0, layer1))
13 | nn_layers.append(nn.LeakyReLU())
14 |
15 | nn_layers.append(nn.Linear(shape[-1], num_output))
16 |
17 |     # Mesh values are unbounded, so the last layer has no activation
18 |
19 | return nn.Sequential(*nn_layers)
20 |
21 | def make_mesh_2_rig_model(num_parameters, shape, num_vertices_values):
22 |
23 | nn_layers = [nn.Linear(num_vertices_values, shape[0]), nn.LeakyReLU()]
24 | for layer0, layer1 in zip(shape[:-1], shape[1:]):
25 | nn_layers.append(nn.Linear(layer0, layer1))
26 | nn_layers.append(nn.LeakyReLU())
27 |
28 | nn_layers.append(nn.Linear(shape[-1], num_parameters))
29 |
30 |     # Rig parameter values are in [0, 1] for the toy rig, hence the sigmoid
31 | nn_layers.append(nn.Sigmoid())
32 |
33 | return nn.Sequential(*nn_layers)
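34 | 
35 | if __name__ == "__main__":
36 |     import torch
37 | 
38 |     # Illustrative smoke test (not part of the training pipeline); the
39 |     # dimensions match the toy rig: 6 parameters and 8 vertices * 3 coordinates.
40 |     rig2mesh = make_rig_2_mesh_model(6, [1024, 1024], 8 * 3)
41 |     mesh2rig = make_mesh_2_rig_model(6, [512], 8 * 3)
42 |     mesh = rig2mesh(torch.rand(1, 6))
43 |     print('mesh:', mesh.shape, 'recovered parameters:', mesh2rig(mesh).shape)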
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | scipy
3 | torch
4 | tqdm
--------------------------------------------------------------------------------
/rig.py:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # Copyright (c) 2021-2024 Electronic Arts Inc. All Rights Reserved
4 | #
5 |
6 | from scipy.spatial.transform import Rotation as R
7 | import numpy as np
8 |
9 | unit_cube = np.asarray([ [-1, -1, -1],
10 |                          [-1, -1, 1],
11 |                          [-1, 1, -1],
12 |                          [-1, 1, 1],
13 |                          [1, -1, -1],
14 |                          [1, -1, 1],
15 |                          [1, 1, -1],
16 |                          [1, 1, 1]])
17 |
18 | def rig_model(rig_parameters):
19 |
20 |     # Return vertex positions from the rig parameters.
21 |     # This toy rig is a simple cube mesh deformed by per-axis scaling and rotations driven by the parameters.
22 |
23 | sx = R.from_euler('x', (rig_parameters[0]-0.5)*90, degrees=True)
24 | sy = R.from_euler('y', (rig_parameters[1]-0.5)*90, degrees=True)
25 | sz = R.from_euler('z', (rig_parameters[2]-0.5)*90, degrees=True)
26 |
27 | scale = np.zeros((3,3))
28 | scale[0,0] = (rig_parameters[3]*2.5)-0.5
29 | scale[1,1] = (rig_parameters[4]*2.5)-0.5
30 | scale[2,2] = (rig_parameters[5]*2.5)-0.5
31 |
32 |
33 |     # The order of operations is unimportant here since this is a toy black-box rig
34 | mesh = unit_cube
35 | mesh = np.matmul(mesh, scale)
36 | mesh = sx.apply(mesh)
37 | mesh = sy.apply(mesh)
38 | mesh = sz.apply(mesh)
39 |
40 | return mesh
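41 | 
42 | if __name__ == "__main__":
43 | 
44 |     # Illustrative check (not used by the pipeline): mid-range parameters
45 |     # (all 0.5) give zero rotation and a uniform scale of 0.75.
46 |     print(rig_model(np.full(6, 0.5)))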
--------------------------------------------------------------------------------
/train_rig_approximation.py:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # Copyright (c) 2021-2024 Electronic Arts Inc. All Rights Reserved
4 | #
5 |
6 | import os
7 | from tqdm import tqdm
8 | import numpy as np
9 | import pathlib
10 | import torch
11 | import torch.nn as nn
12 | from torch.utils.data import DataLoader
13 | from model import make_rig_2_mesh_model
14 |
15 | class ToyDataset():
16 | def __init__(self, rig_controls, vertices):
17 |
18 | self.rig_controls = rig_controls
19 | self.vertices = vertices
20 |
21 | assert len(rig_controls) == len(vertices)
22 |
23 | self.length = len(rig_controls)
24 | self.num_vertices = len(vertices[0])
25 | self.num_ctrl = len(rig_controls[0])
26 |
27 | def __getitem__(self, index):
28 |
29 | return torch.tensor(self.rig_controls[index]).float(), torch.tensor(self.vertices[index]).flatten().float()
30 |
31 | def __len__(self):
32 |
33 | return self.length
34 |
35 | def train():
36 | #
37 | # Train a rig approximation model using the dataset created in generate_toy_dataset.py
38 | # This model will be used to inverse the rig for a test animation in inverse_rig.py
39 | #
40 |
41 | current_folder = pathlib.Path(__file__).parent.resolve()
42 | checkpoint_folder = os.path.join(current_folder, 'checkpoints')
43 | os.makedirs(checkpoint_folder, exist_ok=True)
44 | checkpoint_path = os.path.join(checkpoint_folder, 'rig2mesh.pth.tar')
45 |
46 | model_shape = [1024, 1024]
47 |
48 | dataset_folder = os.path.join(current_folder, 'dataset')
49 | data = np.load(os.path.join(dataset_folder, 'dataset.npz'))
50 |     # We keep the last 100 samples (10% of the dataset) as a validation set
51 | dataset = ToyDataset(data['arr_0'][:-100], data['arr_1'][:-100])
52 | validate_dataset = ToyDataset(data['arr_0'][-100:], data['arr_1'][-100:])
53 |
54 | print(len(dataset), 'training samples', len(validate_dataset), 'validation samples')
55 |
56 |     # Some PyTorch versions have a slow DataLoader for in-memory datasets, so we don't use any workers
57 | dataloader = DataLoader(dataset, 32, num_workers=0, pin_memory=True, drop_last=False, shuffle=True)
58 | validate_dataloader = DataLoader(validate_dataset, 32, num_workers=0, pin_memory=True, drop_last=False, shuffle=False)
59 |
60 | model = make_rig_2_mesh_model(dataset.num_ctrl, model_shape, dataset.num_vertices*3)
61 | model.train()
62 |
63 | print('Rig Approximation model')
64 | print(model)
65 |
66 | optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
67 | criterion = nn.MSELoss()
68 | lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=20, eps=1e-9)
69 |
70 | epoch = 0
71 | best_val = 1000000
72 | early_stop_count = 0
73 | early_stop_patience = 50
74 |
75 | def save_checkpoint():
76 | torch.save({
77 | 'model_state_dict': model.state_dict(),
78 | 'model_shape': model_shape,
79 | 'num_ctrl': dataset.num_ctrl,
80 | 'num_vertices': dataset.num_vertices
81 | }, checkpoint_path)
82 |
83 | while True:
84 | num_batch = 0
85 | sum_loss = 0
86 | sum_loss_validate = 0
87 | num_batch_validate = 0
88 | model.train()
89 |
90 | for batch in tqdm(dataloader):
91 |
92 | ctrl_batch, vertex_batch = batch
93 |
94 | optimizer.zero_grad()
95 | out_vertex = model(ctrl_batch)
96 |
97 | loss = criterion(out_vertex, vertex_batch).mean()
98 | sum_loss += loss.item()
99 | num_batch += len(ctrl_batch)
100 |
101 | loss.backward()
102 | optimizer.step()
103 |
104 | with torch.no_grad():
105 | model.eval()
106 | for batch in tqdm(validate_dataloader):
107 |
108 | ctrl_batch, vertex_batch = batch
109 |
110 | out_vertex = model(ctrl_batch)
111 | loss = criterion(out_vertex, vertex_batch).mean()
112 |
113 | sum_loss_validate += loss.item()
114 | num_batch_validate += len(ctrl_batch)
115 |
116 | sum_loss /= num_batch
117 | sum_loss_validate /= num_batch_validate
118 |
119 | if (sum_loss_validate) < best_val:
120 | early_stop_count = 0
121 | best_val = (sum_loss_validate)
122 | save_checkpoint()
123 | elif early_stop_count >= early_stop_patience:
124 | break
125 | else:
126 | early_stop_count += 1
127 |
128 | print(epoch, "training loss", sum_loss, "validate loss", sum_loss_validate, optimizer.param_groups[0]['lr'], 'early_stop_count', early_stop_count)
129 | lr_scheduler.step(sum_loss)
130 |
131 | epoch += 1
132 |
133 | def test():
134 |
135 |     # Always test using animations rather than isolated random samples
136 | current_folder = pathlib.Path(__file__).parent.resolve()
137 | checkpoint_path = os.path.join(current_folder, 'checkpoints', 'rig2mesh.pth.tar')
138 | dataset_folder = os.path.join(current_folder, 'dataset')
139 |
140 | #re-load saved decoder model
141 | rig2mesh_checkpoint = torch.load(checkpoint_path)
142 | model_shape = rig2mesh_checkpoint['model_shape']
143 | num_ctrl = rig2mesh_checkpoint['num_ctrl']
144 | num_vertices = rig2mesh_checkpoint['num_vertices']
145 | model = make_rig_2_mesh_model(num_ctrl, model_shape, num_vertices*3)
146 | model.load_state_dict(rig2mesh_checkpoint['model_state_dict'])
147 | model.eval()
148 |
149 | #load test animation and ground truth
150 | anim = np.load(os.path.join(dataset_folder, 'anim.npy'))
151 | anim_4D = np.load(os.path.join(dataset_folder, 'anim_4D.npy'))
152 |
153 | #run animation through decoder model that learned to approximate the rig
154 | with torch.no_grad():
155 | found_4d = np.asarray([model(torch.tensor(frame).float()).detach().numpy().reshape(-1, 3) for frame in anim])
156 |
157 | average_vertex_error = np.linalg.norm(found_4d-anim_4D, axis=-1).mean()
158 |
159 |     print('Euclidean rig approximation error:', average_vertex_error)
160 |     print('This error bounds the rig inversion error,')
161 |     print("so it's important to get the rig approximation as accurate as possible.")
162 |     print('Fortunately, you should be able to use the rig to create as much training data as desired.')
163 |
164 | if __name__ == "__main__":
165 |
166 | train()
167 | test()
--------------------------------------------------------------------------------