├── File Description
├── README.md
├── Reports
│   ├── .~lock.file_description.odt#
│   ├── GR_report.pdf
│   ├── error data spreadsheet.ods
│   └── file_description.odt
├── Root
│   ├── Attention.py
│   ├── Config.py
│   ├── Config.pyc
│   ├── DIP_IMU_NN_BiRNN.py
│   ├── DIP_IMU_NN_MLP.py
│   ├── DIP_calib_BiRNN.py
│   ├── IMUDataset.py
│   ├── IMUDataset.pyc
│   ├── Network.py
│   ├── Network.pyc
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── Attention.cpython-35.pyc
│   │   ├── Config.cpython-35.pyc
│   │   ├── DIP_IMU_NN_BiRNN.cpython-35.pyc
│   │   ├── DIP_IMU_NN_MLP.cpython-35.pyc
│   │   ├── IMUDataset.cpython-35.pyc
│   │   ├── Network.cpython-35.pyc
│   │   ├── eval_model.cpython-35.pyc
│   │   ├── myUtil.cpython-35.pyc
│   │   └── train_BiRNN.cpython-35.pyc
│   ├── analyseData.py
│   ├── calculateDIPLoss
│   ├── compareDIPFiles.py
│   ├── createData.py
│   ├── createVideo.py
│   ├── duplicate.py
│   ├── eval_dip_nn.py
│   ├── eval_model.py
│   ├── eval_ori_pose.py
│   ├── generateLossFiles
│   ├── myUtil.py
│   ├── myUtil.pyc
│   ├── new_createData.py
│   ├── new_prepareData.py
│   ├── parseJson.py
│   ├── plotGraph.py
│   ├── prepareData.py
│   ├── pytorch_modelsummary.py
│   ├── pytorch_modelsummary.pyc
│   ├── saveVideo.py
│   ├── test_MLP.py
│   ├── test_attn.py
│   ├── train_BiLSTM.py
│   ├── train_BiRNN.py
│   ├── train_BiRNN_Full.py
│   ├── train_MLP.py
│   ├── train_attn.py
│   ├── train_attn_greedy.py
│   ├── train_correctNetwork.py
│   ├── viewSMPL_copy.py
│   ├── visualize.py
│   ├── visualize_DFKI.py
│   ├── visualize_dip_own.py
│   └── visualize_dip_syn.py
├── S11_WalkTogether_gt_WBMGO56A_tcVI.mp4
├── chumpy
│   ├── .gitignore
│   ├── .hgignore
│   ├── .travis.yml
│   ├── LICENSE.txt
│   ├── MANIFEST.in
│   ├── Makefile
│   ├── README.md
│   ├── chumpy
│   │   ├── __init__.py
│   │   ├── api_compatibility.py
│   │   ├── ch.py
│   │   ├── ch_ops.py
│   │   ├── ch_random.py
│   │   ├── extras.py
│   │   ├── linalg.py
│   │   ├── logic.py
│   │   ├── monitor.py
│   │   ├── optimization.py
│   │   ├── optimization_internal.py
│   │   ├── optional_test_performance.py
│   │   ├── reordering.py
│   │   ├── test_ch.py
│   │   ├── test_inner_composition.py
│   │   ├── test_linalg.py
│   │   ├── test_optimization.py
│   │   ├── testing.py
│   │   ├── utils.py
│   │   └── version.py
│   ├── circle.yml
│   ├── requirements.txt
│   └── setup.py
├── output
│   ├── 2.2.PNG
│   ├── 2.4.PNG
│   ├── 3_5.PNG
│   ├── alldipmodel.png
│   ├── approaches.png
│   ├── dipmodel.png
│   ├── mergeourdip.PNG
│   ├── onevsall.png
│   └── out.txt
└── smpl
    ├── models
    │   ├── SMPL_sampleShape1_f.json
    │   ├── SMPL_sampleShape1_m.json
    │   ├── SMPL_sampleShape2_f.json
    │   ├── SMPL_sampleShape2_m.json
    │   ├── basicModel_f_lbs_10_207_0_v1.0.0.pkl
    │   └── basicModel_m_lbs_10_207_0_v1.0.0.pkl
    └── smpl_webuser
        ├── lbs.py
        ├── lbs.pyc
        ├── posemapper.py
        ├── posemapper.pyc
        ├── serialization.py
        ├── serialization.pyc
        ├── verts.py
        └── verts.pyc
/File Description:
--------------------------------------------------------------------------------
1 | human-pose
2 |
3 | This file gives a brief overview of each file included in this project.
4 |
5 | File description:
6 |
7 | analyseData: • reads raw orientation from synthetic and DIP_IMU data • reads ground-truth pose from synthetic and DIP_IMU data • reads calibrated orientation from the prepared synthetic and DIP_IMU datasets • visualizes the data distribution as box plots
8 | calculateDIPLoss: • calculates the per-joint, per-frame loss in Euler angle degrees of the provided DIP model from the npz files generated by the run_evaluation file of DIP_IMU • saves the losses in a text file
9 | compareDIPFiles: • reads DIP_IMU_nn • reads DIP_IMU and calibrates it according to our 1st understanding • compares values between the above two
10 | Config: • contains common configuration parameters used by multiple files
11 | createData: • calibrates DIP_IMU raw data following the 1st approach • saves it in separate files per activity
12 | DIP_calib_BiRNN: • trains a short BiLSTM on our calibrated DIP_IMU
13 | eval_dip_nn: • evaluates our own trained models on DIP_IMU_nn data • use the testWindow method to evaluate the short BiLSTM • use the test method to evaluate the MLP
14 | eval_model: • evaluates all BiLSTM models trained on synthetic data
15 | eval_ori_pose: • evaluates forward kinematics • gets the predicted orientation • feeds the forward-kinematics results to the trained model to get the pose
16 | IMUDataset: • reads data from the generated calibrated files • multiple methods to create batches on demand based on different strategies – to be called during training • reads a single file – to be called during testing
17 | myUtil: • contains some generic utility methods
18 | Network: • contains the architectures of the different networks we experimented with
19 | new_createData: • calibrates DIP_IMU raw data following the last approach • saves it in separate files per activity
20 | new_prepareData: • calibrates synthetic raw data following the last approach • saves it in separate files per activity, in separate folders for the different datasets
21 | parseJson: • parses JSON files • calibrates the data and stores it in the desired format • written to process our own recordings
22 | plotGraph: • generates the figures given in the report from loss files
23 | prepareData: • calibrates synthetic raw data following the 1st approach • saves it in separate files per activity, in separate folders for the different datasets
24 | saveVideo: • creates a video from frames stored in a folder • used to create videos of activities for target and prediction
25 | test_MLP: • tests a trained MLP model • datapath and modelpath need to be specified in the file as required
26 | train_BiLSTM: • trains an RNN model with the sequential stateful strategy
27 | train_BiRNN: • trains an RNN model with the random stateless strategy
28 | train_BiRNN_Full: • trains an RNN model with the random stateless strategy on multiple datasets
29 | train_MLP: • trains an MLP model
30 | visualize: • visualizes the results generated by eval_model • depends on the SMPL Python package • can also store the frames for making a video with the saveVideo file
31 |
32 |
--------------------------------------------------------------------------------
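
A minimal sketch of how the calibrated npz files written by createData.py can be inspected; the file name below is hypothetical, the shapes follow from what calibrateRawIMU returns:

    import numpy as np

    # each calibrated file stores per-frame acceleration, orientation and pose
    data = np.load('/data/Guha/GR/Dataset/DIP_IMU2/s_10_01.npz')
    print(data['acc'].shape)   # (n_frames, 5, 3)    acceleration of the 5 non-root sensors
    print(data['ori'].shape)   # (n_frames, 5, 3, 3) root-normalized orientation matrices
    print(data['pose'].shape)  # (n_frames, 135)     15 SMPL major joints as flattened 3x3 matrices
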
/README.md:
--------------------------------------------------------------------------------
1 |
2 | This repository contains my guided research work, which is based on the original paper [Deep Inertial Poser](http://dip.is.tuebingen.mpg.de/assets/dip.pdf).
3 |
4 | For the full report of my guided research, see [Deep Learning Precise 3D Human Pose from Sparse IMUs](Reports/GR_report.pdf).
5 |
6 | ### Dataset
7 | Data can be downloaded from http://dip.is.tue.mpg.de/downloads
8 | ### Results
9 | ##### Some of the qualitative results are given below
10 | 
11 |
12 | ##### Some of the quantitative comparisons are given below
13 | 
14 | 
15 | 
16 |
--------------------------------------------------------------------------------
/Reports/.~lock.file_description.odt#:
--------------------------------------------------------------------------------
1 | ,guha,pc-2162.kl.dfki.de,27.09.2019 22:57,file:///home/guha/.config/libreoffice/4;
--------------------------------------------------------------------------------
/Reports/GR_report.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Reports/GR_report.pdf
--------------------------------------------------------------------------------
/Reports/error data spreadsheet.ods:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Reports/error data spreadsheet.ods
--------------------------------------------------------------------------------
/Reports/file_description.odt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Reports/file_description.odt
--------------------------------------------------------------------------------
/Root/Config.py:
--------------------------------------------------------------------------------
1 | ori_dim = 5*4      # 5 IMU sensors x 4 quaternion components
2 | acc_dim = 5*3      # 5 IMU sensors x 3 acceleration components
3 | input_dim = ori_dim
4 | output_dim = 15*4  # 15 SMPL major joints x 4 quaternion components
5 | hid_dim = 512
6 |
7 | batch_len = 10
8 | seq_len = 200
9 |
10 | n_layers = 2
11 | dropout = 0.3
12 |
13 | traindata_path = '/data/Guha/GR/Dataset/Train'
14 | testdata_path = '/data/Guha/GR/Dataset/Test/'
15 | validdata_path = '/data/Guha/GR/Dataset/Validation'
16 | train_dip_path = '/data/Guha/GR/Dataset/DIP_IMU/Train'
17 | valid_dip_path = '/data/Guha/GR/Dataset/DIP_IMU/Validation'
18 | human36_path = '/data/Guha/GR/Dataset/H36'
19 | use_cuda = True
20 |
--------------------------------------------------------------------------------
/Root/Config.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/Config.pyc
--------------------------------------------------------------------------------
/Root/DIP_IMU_NN_MLP.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.optim as optim
4 | import Config as cfg
5 | import numpy as np
6 | from pyquaternion import Quaternion
7 | import itertools
8 | import random
9 |
10 | class InverseKinematic(nn.Module):
11 |     def __init__(self):
12 |         super(InverseKinematic, self).__init__()
13 |         self.net = nn.Sequential(
14 |             nn.Linear(20, 60), nn.ReLU(), nn.Linear(60, 180), nn.ReLU(), nn.Linear(180, 60)
15 |         )
16 |     def forward(self, input):
17 |         out = self.net(input)
18 |         return out
19 |
20 |
21 | def _loss_impl(predicted, expected):
22 |     L1 = predicted - expected
23 |     return torch.mean(torch.norm(L1, 2, 1))
24 |
25 | def preparedata(path):
26 |     data_dict = dict(np.load(path, encoding='latin1'))
27 |     oriList = data_dict['orientation']
28 |     poseList = data_dict['smpl_pose']
29 |     batch_ori = []
30 |     batch_pose = []
31 |     for i in range(len(oriList)):
32 |         ori = oriList[i].reshape(-1, 5, 3, 3)
33 |         ori_quat = np.asarray([Quaternion(matrix=ori[k, j, :, :]).elements for k, j in
34 |                                itertools.product(range(ori.shape[0]), range(5))])
35 |         ori_quat = ori_quat.reshape(-1, 5 * 4)
36 |         pose = poseList[i].reshape(-1, 15, 3, 3)
37 |         pose_quat = np.asarray([Quaternion(matrix=pose[k, j, :, :]).elements for k, j in
38 |                                 itertools.product(range(pose.shape[0]), range(15))])
39 |         pose_quat = pose_quat.reshape(-1, 15 * 4)
40 |
41 |         batch_ori.append(ori_quat)
42 |         batch_pose.append(pose_quat)
43 |
44 |     return batch_ori, batch_pose
45 |
46 | def train(basepath):
47 |
48 |     model = InverseKinematic().cuda()
49 |     modelPath = basepath
50 |     optimizer = optim.Adam(model.parameters(), lr=.001)
51 |     trainpath = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU_nn/imu_own_training.npz'
52 |     validpath = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU_nn/imu_own_validation.npz'
53 |     # trainpath = '/data/Guha/GR/dataset/DIP_IMU_nn/imu_own_training.npz'
54 |     # validpath = '/data/Guha/GR/dataset/DIP_IMU_nn/imu_own_validation.npz'
55 |     f = open(modelPath + 'model_details', 'w')
56 |     f.write(' comments: dip_imu_nn training on MLP')
57 |     f.write(str(model))
58 |     f.write('\n')
59 |     epoch_loss = {'train': [], 'validation': []}
60 |     batch_ori, batch_pose = preparedata(trainpath)
61 |
62 |     print('no of batches--', len(batch_ori))
63 |     f.write('no of batches-- {} \n'.format(len(batch_ori)))
64 |
65 |     min_valid_loss = 0.0
66 |     for epoch in range(50):
67 |         ############### training #############
68 |         train_loss = []
69 |         model.train()
70 |         for input, target in zip(batch_ori, batch_pose):
71 |             input = torch.FloatTensor(input).cuda()
72 |             target = torch.FloatTensor(target).cuda()
73 |             output = model(input)
74 |             loss = _loss_impl(output, target)
75 |             optimizer.zero_grad()
76 |             loss.backward()
77 |             optimizer.step()
78 |             loss.detach()
79 |             train_loss.append(loss.item())
80 |
81 |         train_loss = torch.mean(torch.FloatTensor(train_loss))
82 |         print('epoch no ----------> {} training loss {} '.format(epoch, train_loss.item()))
83 |         f.write('epoch no ----------> {} training loss {} '.format(epoch, train_loss.item()))
84 |         epoch_loss['train'].append(train_loss)
85 |         # we save the model after each epoch : epoch_{}.pth.tar
86 |         state = {
87 |             'epoch': epoch + 1,
88 |             'state_dict': model.state_dict(),
89 |             'epoch_loss': train_loss
90 |         }
91 |         torch.save(state, modelPath + 'epoch_{}.pth.tar'.format(epoch + 1))
92 |         ############### validation ###############
93 |         data_dict = dict(np.load(validpath, encoding='latin1'))
94 |         oriList = data_dict['orientation']
95 |         poseList = data_dict['smpl_pose']
96 |
97 |         model.eval()
98 |         valid_loss = []
99 |         for i in range(len(oriList)):
100 |             ori = oriList[i].reshape(-1, 5, 3, 3)
101 |             ori_quat = np.asarray([Quaternion(matrix=ori[k, j, :, :]).elements for k, j in
102 |                                    itertools.product(range(ori.shape[0]), range(5))])
103 |             ori_quat = ori_quat.reshape(-1, 5 * 4)
104 |             pose = poseList[i].reshape(-1, 15, 3, 3)
105 |             pose_quat = np.asarray([Quaternion(matrix=pose[k, j, :, :]).elements for k, j in
106 |                                     itertools.product(range(pose.shape[0]), range(15))])
107 |             pose_quat = pose_quat.reshape(-1, 15 * 4)
108 |
109 |             input = torch.FloatTensor(ori_quat).cuda()
110 |             target = torch.FloatTensor(pose_quat).cuda()
111 |             output = model(input)
112 |             loss = _loss_impl(output, target)
113 |             valid_loss.append(loss.item())
114 |         valid_loss = torch.mean(torch.FloatTensor(valid_loss))
115 |         # we save the model if current validation loss is less than prev : validation.pth.tar
116 |         if (min_valid_loss == 0 or valid_loss < min_valid_loss):
117 |             min_valid_loss = valid_loss
118 |             state = {
119 |                 'epoch': epoch + 1,
120 |                 'state_dict': model.state_dict(),
121 |                 'validation_loss': valid_loss
122 |             }
123 |             torch.save(state, modelPath + 'validation.pth.tar')
124 |         print('epoch no ----------> {} validation loss {} '.format(epoch, valid_loss.item()))
125 |         f.write('epoch no ----------> {} validation loss {} '.format(epoch, valid_loss.item()))
126 |         epoch_loss['validation'].append(valid_loss)
127 |
128 |     f.close()
129 |     plotGraph(epoch_loss, basepath)
130 |
131 | def plotGraph(epoch_loss, basepath):
132 |     import matplotlib.pyplot as plt
133 |     fig = plt.figure(1)
134 |     trainloss = epoch_loss['train']
135 |     validloss = epoch_loss['validation']
136 |
137 |     plt.plot(np.arange(len(trainloss)), trainloss, 'r--', label='training loss')
138 |     plt.plot(np.arange(len(validloss)), validloss, 'g--', label='validation loss')
139 |     plt.legend()
140 |     plt.savefig(basepath + '.png')
141 |     plt.show()
142 |
143 | if __name__ == "__main__":
144 |     train('/data/Guha/GR/model/dip_nn_mlp/')
145 |
146 |
147 |
148 |
149 |
--------------------------------------------------------------------------------
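
The training loop above saves a dict with the keys 'epoch', 'state_dict' and the loss to epoch_{}.pth.tar / validation.pth.tar. A minimal sketch of restoring the best model from such a checkpoint (path taken from the train() call above):

    import torch
    from DIP_IMU_NN_MLP import InverseKinematic

    model = InverseKinematic().cuda()
    checkpoint = torch.load('/data/Guha/GR/model/dip_nn_mlp/validation.pth.tar')
    model.load_state_dict(checkpoint['state_dict'])
    model.eval()  # maps the 20-dim orientation input to the 60-dim pose output
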
/Root/DIP_calib_BiRNN.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.optim as optim
4 | import Config as cfg
5 | import numpy as np
6 | from time import time
7 | import random
8 | from IMUDataset import IMUDataset
9 |
10 | class BiRNN(nn.Module):
11 |     def __init__(self):
12 |         super(BiRNN, self).__init__()
13 |
14 |         self.input_dim = cfg.input_dim
15 |         self.hid_dim = 256
16 |         self.n_layers = cfg.n_layers
17 |         self.dropout = cfg.dropout
18 |
19 |         self.relu = nn.ReLU()
20 |         self.pre_fc = nn.Linear(cfg.input_dim, 256)
21 |         self.lstm = nn.LSTM(256, 256, cfg.n_layers, batch_first=True, dropout=cfg.dropout, bidirectional=True)
22 |         self.post_fc = nn.Linear(256 * 2, cfg.output_dim)
23 |         self.dropout = nn.Dropout(cfg.dropout)
24 |
25 |     def forward(self, X):
26 |         # src = [batch size, seq len, input dim]
27 |         batch_size = X.shape[0]
28 |         seq_len = X.shape[1]
29 |         input_dim = X.shape[2]
30 |
31 |         X = X.view(-1, input_dim)
32 |         X = self.pre_fc(X)
33 |         X = self.relu(X)
34 |         X = X.view(batch_size, seq_len, -1)
35 |         lstm_out, (_, _) = self.lstm(X)
36 |
37 |         """lstm_out : [batch size, src sent len, hid dim * n directions]
38 |         hidden : [n layers * n directions, batch size, hid dim]
39 |         cell : [n layers * n directions, batch size, hid dim]
40 |         lstm_out is always from the top hidden layer """
41 |
42 |         fc_out = self.post_fc(lstm_out)
43 |         return fc_out
44 |
45 | class TrainingEngine:
46 |     def __init__(self):
47 |         self.datapath = '/data/Guha/GR/Dataset/DIP_IMU2/'
48 |         self.modelPath = '/data/Guha/GR/model/dip_calib/'
49 |         self.model = BiRNN().cuda()
50 |         # baseModelPath = '/data/Guha/GR/model/13/epoch_5.pth.tar'
51 |         #
52 |         # with open(baseModelPath, 'rb') as tar:
53 |         #     checkpoint = torch.load(tar)
54 |         #     model_weights = checkpoint['state_dict']
55 |         #     self.model.load_state_dict(model_weights)
56 |
57 |     def _loss_impl(self, predicted, expected):
58 |         L1 = predicted - expected
59 |         return torch.mean(torch.norm(L1, 2, 2))
60 |
61 |     def train(self, n_epochs):
62 |         f = open(self.modelPath + 'model_details', 'w')
63 |         f.write(str(self.model))
64 |         f.write('\n')
65 |
66 |         np.random.seed(1234)
67 |         lr = 0.001
68 |         gradient_clip = 0.1
69 |         optimizer = optim.Adam(self.model.parameters(), lr=lr)
70 |
71 |         print('Training for %d epochs' % (n_epochs))
72 |
73 |         print('batch size--> {}, Seq len--> {}'.format(cfg.batch_len, cfg.seq_len))
74 |         f.write('batch size--> {}, Seq len--> {} \n'.format(cfg.batch_len, cfg.seq_len))
75 |         epoch_loss = {'train': [], 'validation': []}
76 |         self.dataset = IMUDataset(self.datapath, ['train'])
77 |         min_valid_loss = 0.0
78 |         for epoch in range(0, n_epochs):
79 |             train_loss = []
80 |             start_time = time()
81 |             self.dataset.loadfiles(self.datapath, ['train'])
82 |             ####################### training #######################
83 |             while (len(self.dataset.files) > 0):
84 |                 # Pick a random chunk from each sequence
85 |                 self.dataset.createbatch_no_replacement()
86 |                 inputs = torch.FloatTensor(self.dataset.input).cuda()
87 |                 outputs = torch.FloatTensor(self.dataset.target).cuda()
88 |                 chunk_in = list(torch.split(inputs, cfg.seq_len))[:-1]
89 |                 chunk_out = list(torch.split(outputs, cfg.seq_len))[:-1]
90 |                 if (len(chunk_in) == 0):
91 |                     continue
92 |                 data = [(chunk_in[i], chunk_out[i]) for i in range(len(chunk_in))]
93 |                 random.shuffle(data)
94 |                 X, Y = zip(*data)
95 |                 chunk_in = torch.stack(X, dim=0)
96 |                 chunk_out = torch.stack(Y, dim=0)
97 |                 print('no of chunks %d \n' % (len(chunk_in)))
98 |                 f.write('no of chunks %d \n' % (len(chunk_in)))
99 |                 self.model.train()
100 |                 optimizer.zero_grad()
101 |                 predictions = self.model(chunk_in)
102 |
103 |                 loss = self._loss_impl(predictions, chunk_out)
104 |                 loss.backward()
105 |                 nn.utils.clip_grad_norm_(self.model.parameters(), gradient_clip)
106 |                 optimizer.step()
107 |                 loss.detach()
108 |
109 |                 train_loss.append(loss.item())
110 |
111 |             train_loss = torch.mean(torch.FloatTensor(train_loss))
112 |             # we save the model after each epoch : epoch_{}.pth.tar
113 |             state = {
114 |                 'epoch': epoch + 1,
115 |                 'state_dict': self.model.state_dict(),
116 |                 'epoch_loss': train_loss
117 |             }
118 |             torch.save(state, self.modelPath + 'epoch_{}.pth.tar'.format(epoch + 1))
119 |             # logging to track
120 |             debug_string = 'epoch No {}, training loss {} , Time taken {} \n'.format(
121 |                 epoch + 1, train_loss, time() - start_time
122 |             )
123 |             print(debug_string)
124 |             f.write(debug_string)
125 |             f.write('\n')
126 |             epoch_loss['train'].append(train_loss)
127 |             ####################### Validation #######################
128 |             valid_loss = []
129 |             self.model.eval()
130 |             self.dataset.loadfiles(self.datapath, ['validation'])
131 |             for file in self.dataset.files:
132 |                 self.dataset.readfile(file)
133 |                 input = torch.FloatTensor(self.dataset.input).unsqueeze(0).cuda()
134 |                 target = torch.FloatTensor(self.dataset.target).unsqueeze(0).cuda()
135 |
136 |                 output = self.model(input)
137 |                 loss = self._loss_impl(output, target)
138 |                 valid_loss.append(loss)
139 |             valid_loss = torch.mean(torch.FloatTensor(valid_loss))
140 |             # we save the model if current validation loss is less than prev : validation.pth.tar
141 |             if (min_valid_loss == 0 or valid_loss < min_valid_loss):
142 |                 min_valid_loss = valid_loss
143 |                 state = {
144 |                     'epoch': epoch + 1,
145 |                     'state_dict': self.model.state_dict(),
146 |                     'validation_loss': valid_loss
147 |                 }
148 |                 torch.save(state, self.modelPath + 'validation.pth.tar')
149 |
150 |             # logging to track
151 |             debug_string = 'epoch No {}, valid loss {} , Time taken {} \n'.format(
152 |                 epoch + 1, valid_loss, time() - start_time
153 |             )
154 |             print(debug_string)
155 |             f.write(debug_string)
156 |             f.write('\n')
157 |             epoch_loss['validation'].append(valid_loss)
158 |
159 |         f.write(str(epoch_loss))
160 |         f.close()
161 |         plotGraph(epoch_loss, self.modelPath)
162 |
163 |
164 | def plotGraph(epoch_loss, basepath):
165 |     import matplotlib.pyplot as plt
166 |     fig = plt.figure(1)
167 |     trainloss = epoch_loss['train']
168 |     validloss = epoch_loss['validation']
169 |
170 |     plt.plot(np.arange(len(trainloss)), trainloss, 'r--', label='training loss')
171 |     plt.plot(np.arange(len(validloss)), validloss, 'g--', label='validation loss')
172 |     plt.legend()
173 |     plt.savefig(basepath + '.png')
174 |     plt.show()
175 |
176 | if __name__ == "__main__":
177 |     trainingEngine = TrainingEngine()
178 |     trainingEngine.train(n_epochs=50)
179 |
180 |
181 |
182 |
183 |
--------------------------------------------------------------------------------
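
The "random stateless" batching used above splits each sequence into fixed-length windows, shuffles the (input, target) windows jointly, and stacks them into one batch. A standalone sketch of that pattern with dummy tensors:

    import random
    import torch

    seq_len = 200                      # window length, as in Config.seq_len
    inputs = torch.randn(1000, 20)     # a full sequence of 5x4 orientation quaternions
    targets = torch.randn(1000, 60)    # the matching 15x4 pose quaternions

    # split into windows and drop the last one (possibly shorter), as the training loop does
    chunk_in = list(torch.split(inputs, seq_len))[:-1]
    chunk_out = list(torch.split(targets, seq_len))[:-1]
    # shuffle input/target windows together so pairs stay aligned
    pairs = list(zip(chunk_in, chunk_out))
    random.shuffle(pairs)
    X, Y = zip(*pairs)
    batch_in = torch.stack(X, dim=0)   # [n_windows, seq_len, 20]
    batch_out = torch.stack(Y, dim=0)  # [n_windows, seq_len, 60]
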
/Root/IMUDataset.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/IMUDataset.pyc
--------------------------------------------------------------------------------
/Root/Network.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import Config as cfg
4 |
5 |
6 | ############# BiRNN network - random stateless ###############
7 | class BiRNN(nn.Module):
8 |     def __init__(self):
9 |         super(BiRNN, self).__init__()
10 |
11 |         self.input_dim = cfg.input_dim
12 |         self.hid_dim = cfg.hid_dim
13 |         self.n_layers = cfg.n_layers
14 |         self.dropout = cfg.dropout
15 |
16 |         self.relu = nn.ReLU()
17 |         self.pre_fc = nn.Linear(cfg.input_dim, cfg.hid_dim)
18 |         self.lstm = nn.LSTM(cfg.hid_dim, cfg.hid_dim, cfg.n_layers, batch_first=True, dropout=cfg.dropout, bidirectional=True)
19 |         self.post_fc = nn.Linear(cfg.hid_dim * 2, cfg.output_dim)
20 |         self.dropout = nn.Dropout(cfg.dropout)
21 |
22 |     def forward(self, X):
23 |         # src = [batch size, seq len, input dim]
24 |         batch_size = X.shape[0]
25 |         seq_len = X.shape[1]
26 |         input_dim = X.shape[2]
27 |         #X = torch.Tensor(src)
28 |         X = X.view(-1, input_dim)
29 |         X = self.pre_fc(X)
30 |         X = self.relu(X)
31 |         X = X.view(batch_size, seq_len, -1)
32 |         lstm_out, (hidden, cell) = self.lstm(X)
33 |
34 |         """lstm_out : [batch size, src sent len, hid dim * n directions]
35 |         hidden : [n layers * n directions, batch size, hid dim]
36 |         cell : [n layers * n directions, batch size, hid dim]
37 |         lstm_out is always from the top hidden layer """
38 |
39 |         fc_out = self.post_fc(lstm_out)
40 |
41 |         return fc_out
42 |
43 | ############# BiRNN network - sequential stateful ###############
44 | class BiLSTM(nn.Module):
45 |     def __init__(self):
46 |         super(BiLSTM, self).__init__()
47 |         self.relu = nn.ReLU()
48 |         self.dropout = nn.Dropout(cfg.dropout)
49 |         self.pre_fc = nn.Sequential(nn.Linear(cfg.input_dim, cfg.hid_dim), self.relu)
50 |         self.lstm = nn.LSTM(cfg.hid_dim, cfg.hid_dim, cfg.n_layers, batch_first=True, dropout=cfg.dropout, bidirectional=True)
51 |         self.post_fc = nn.Linear(cfg.hid_dim * 2, cfg.output_dim)
52 |
53 |
54 |     def forward(self, X, h_0, c_0):
55 |         X = self.pre_fc(X)
56 |         """lstm_out : [batch size, src sent len, hid dim * n directions]
57 |         hidden : [n layers * n directions, batch size, hid dim]
58 |         cell : [n layers * n directions, batch size, hid dim]
59 |         lstm_out is always from the top hidden layer """
60 |
61 |         lstm_out, (hidden, cell) = self.lstm(X, (h_0, c_0))
62 |         hidden, cell = (hidden, cell)
63 |         fc_out = self.post_fc(lstm_out)
64 |         return fc_out, hidden, cell
65 |
66 | ############# MLP predicting orientation from pose ###############
67 | class ForwardKinematic(nn.Module):
68 |     def __init__(self):
69 |         super(ForwardKinematic, self).__init__()
70 |         self.net = nn.Sequential(
71 |             nn.Linear(60, 120), nn.ReLU(), nn.Linear(120, 180), nn.Dropout(0.3),
72 |             nn.ReLU(), nn.Linear(180, 60), nn.Dropout(0.3), nn.ReLU(), nn.Linear(60, 20)
73 |         )
74 |     def forward(self, input):
75 |         out = self.net(input)
76 |         return out
77 | ############# MLP predicting pose from orientation ###############
78 | class InverseKinematic(nn.Module):
79 |     def __init__(self):
80 |         super(InverseKinematic, self).__init__()
81 |         self.net = nn.Sequential(
82 |             nn.Linear(20, 60), nn.ReLU(), nn.Linear(60, 180), nn.ReLU(), nn.Linear(180, 60)
83 |         )
84 |     def forward(self, input):
85 |         out = self.net(input)
86 |         return out
--------------------------------------------------------------------------------
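
With the values from Config.py (input_dim = 20, output_dim = 60), the BiRNN maps a [batch, seq, 20] orientation tensor to a [batch, seq, 60] pose tensor. A quick shape check on dummy data:

    import torch
    import Config as cfg
    from Network import BiRNN

    model = BiRNN()
    x = torch.randn(cfg.batch_len, cfg.seq_len, cfg.input_dim)      # [10, 200, 20]
    y = model(x)
    assert y.shape == (cfg.batch_len, cfg.seq_len, cfg.output_dim)  # [10, 200, 60]
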
/Root/Network.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/Network.pyc
--------------------------------------------------------------------------------
/Root/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__init__.py
--------------------------------------------------------------------------------
/Root/__pycache__/Attention.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/Attention.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/Config.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/Config.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/DIP_IMU_NN_BiRNN.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/DIP_IMU_NN_BiRNN.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/DIP_IMU_NN_MLP.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/DIP_IMU_NN_MLP.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/IMUDataset.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/IMUDataset.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/Network.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/Network.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/eval_model.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/eval_model.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/myUtil.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/myUtil.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/__pycache__/train_BiRNN.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/__pycache__/train_BiRNN.cpython-35.pyc
--------------------------------------------------------------------------------
/Root/analyseData.py:
--------------------------------------------------------------------------------
1 | """
2 | This file generates box plot for orientation,pose and calibrated orientation at first frame.
3 | raw orientation are read from raw files
4 | pose and calibrated orientation are read from our generated calibrated datasets
5 | """
6 | import os
7 | import numpy as np
8 | import myUtil
9 | import transforms3d
10 | import quaternion
11 |
12 |
13 | x_labels = ['AMASS_ACCAD', 'AMASS_BioMotion', 'AMASS_CMU_Kitchen', 'AMASS_Eyes', 'AMASS_MIXAMO',
14 | 'AMASS_SSM', 'AMASS_Transition', 'CMU', 'H36','AMASS_HDM05', 'HEva', 'JointLimit','DIP_IMU']
15 |
16 | trainset = ['AMASS_ACCAD', 'AMASS_BioMotion', 'AMASS_CMU_Kitchen', 'AMASS_Eyes', 'AMASS_MIXAMO',
17 | 'AMASS_SSM', 'AMASS_Transition', 'CMU', 'H36','AMASS_HDM05', 'HEva', 'JointLimit','DIP_IMU']
18 |
19 |
20 | # joints index predicted by model.Indices are aligned with SMPL indices
21 | SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
22 | SMPL_NR_JOINTS = 24
23 |
24 | boneSMPLDict = {'L_Knee': 4,
25 | 'R_Knee': 5,
26 | 'Head': 15,
27 | 'L_Shoulder': 16,
28 | 'R_Shoulder': 17
29 | }
30 | # only 'L_Knee','R_Knee','Head','L_Shoulder','R_Shoulder' are plotted
31 | jointsToPlot = [3,4,10,11,12]
32 |
33 | #datapath = '/data/Guha/GR/synthetic60FPS/' ######## raw dataset path
34 | datapath ='/data/Guha/GR/Dataset/' ######## calibrated dataset path
35 |
36 |
37 | poses = []
38 |
39 | # loop through all datasets and read pose/orientation/calib orientation- should be handled once at a time
40 | for t in trainset:
41 | path = datapath+t
42 | dset_joints = []
43 | for f in os.listdir(path):
44 | filepath = os.path.join(path,f)
45 | data_dict = np.load(filepath, encoding='latin1')
46 | # sample_pose = myUtil.smpl_reduced_to_full(np.asarray(data_dict['poses']).reshape(-1,15*3*3)).reshape(-1,24,3,3)
47 | # sample_pose = np.asarray(data_dict['poses']).reshape(-1,15,3,3)[0,jointsToPlot,:,:]
48 | if (len(data_dict['ori']) == 0):
49 | continue
50 | sample_ori = np.asarray(data_dict['ori']).reshape(-1, 5, 3, 3)[0, :, :, :]
51 |
52 | #pose_x,pose_y,pose_z = transforms3d.euler.mat2euler(sample_pose[0,0,:,:], axes='sxyz')
53 |
54 | #pose_x,pose_y,pose_z = transforms3d.euler.mat2euler(quaternion.as_rotation_matrix(quaternion.from_rotation_vector(sample_pose[0,15,:])))
55 | #ori_x,ori_y,ori_z = transforms3d.euler.mat2euler(sample_ori[0,2,:,:], axes='sxyz')
56 | euler_joints = [list(transforms3d.euler.mat2euler(sample_ori[i, :, :], axes='sxyz')) for i in range(5)]
57 | dset_joints.append(euler_joints)
58 |
59 | poses.append(np.asarray(dset_joints))
60 | ############### for raw data of dip imu #####################
61 | # trainset = ['s_01', 's_02','s_03', 's_04', 's_05','s_06','s_07','s_08','s_09','s_10']
62 | # datapath = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/'
63 | # jointsToPlot = [4,5,15,16,17]
64 | # dip_sensor_idx = [7, 8, 11, 12, 0, 2]
65 | # dset_joints = []
66 | # for t in trainset:
67 | # path = datapath+t
68 | # for f in os.listdir(path):
69 | # filepath = os.path.join(path,f)
70 | # data_dict = np.load(filepath, encoding='latin1')
71 | # if (len(data_dict['imu']) == 0):
72 | # continue
73 | # imu_ori = data_dict['imu'][:, :, 0:9]
74 | # imu_ori = np.asarray([imu_ori[:, k, :] for k in dip_sensor_idx])
75 | # sample_ori = imu_ori.reshape(-1, 6, 3, 3)[0,:,:,:]
76 | # print('files--',path,f)
77 | # # sample_pose = myUtil.smpl_reduced_to_full(np.asarray(data_dict['poses']).reshape(-1,15*3*3)).reshape(-1,24,3,3)
78 | # #sample_pose = np.asarray(data_dict['gt']).reshape(-1,24,3)[0,jointsToPlot,:]
79 | #
80 | # if(len(sample_ori)==0):
81 | # continue
82 | # #euler_joints = [list(transforms3d.euler.mat2euler(quaternion.as_rotation_matrix(quaternion.from_rotation_vector(sample_ori[i,:,:])))) for i in range(6)]
83 | # euler_joints = [list(transforms3d.euler.mat2euler(sample_ori[i, :, :], axes='sxyz')) for i in range(6)]
84 | # dset_joints.append(euler_joints)
85 | #
86 | # poses.append(np.asarray(dset_joints))
87 |
88 |
89 | ################# plots boxplot for each joint in euler angle seperately for x,y,z################
90 | euler_dict = {0:'x axis',1:'y axis',2:'z axis'}
91 | oriDict = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head']
92 | poseDict = ['L_Knee','R_Knee','Head','L_Shoulder','R_Shoulder']
93 | poses = np.asarray(poses)
94 | import matplotlib.pyplot as plt
95 | for i,key in enumerate(oriDict):
96 | if(i!=1):
97 | continue
98 | for j in range(3):
99 | title = key + ' ' + euler_dict[j]
100 | plt.figure(title)
101 | temp = []
102 | for d in range(len(poses)):
103 | temp.append(poses[d][:,i,j])
104 | plt.boxplot(temp)
105 | plt.xticks(np.arange(len(x_labels)),x_labels,rotation=50)
106 | plt.title(title)
107 | plt.tight_layout()
108 | plt.show()
109 | #plt.savefig('/data/Guha/GR/Output/graphs/pose/'+title+'.png')
110 |
111 |
112 |
113 |
114 |
115 |
116 |
117 |
118 |
--------------------------------------------------------------------------------
/Root/calculateDIPLoss:
--------------------------------------------------------------------------------
1 | """ this file calculates per joint per frame loss in euler angle degrees of provided DIP_model from generated npz files by run_evaluation file of DIP_IMU
2 | """
3 |
4 |
5 |
6 | import matplotlib.pyplot as plt
7 | import numpy as np
8 | import transforms3d
9 | import itertools
10 |
11 | ########## loss file to save data
12 | loss_file = open('/data/Guha/GR/Output/loss_dipsynthetic.txt', 'w')
13 |
14 | ########### calculates mean square error per joint per frame ############
15 | def loss_impl(predicted, expected):
16 | error = predicted - expected
17 | error_norm = np.linalg.norm(error, axis=2)
18 | error_per_joint = np.mean(error_norm,axis=1)
19 | error_per_frame_per_joint = np.mean(error_per_joint, axis=0)
20 | return error_per_frame_per_joint
21 |
22 | import quaternion
23 | SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
24 | file_path = '/data/Guha/GR/Output/TestSet/dip_own/test_synthetic.npz'
25 | with open(file_path, 'rb') as file:
26 | data_dict = dict(np.load(file))
27 | gt_list = data_dict['gt']
28 | pred_list = data_dict['prediction']
29 |
30 | for act in range(18):
31 | ######### read grount truth and prediction for each activity ##############
32 | gt = gt_list[act].reshape(-1,24,3)
33 | pred = pred_list[act].reshape(-1, 24,3)
34 | seq_len = len(pred)
35 |
36 | ############# convert prediction given in axis-angle to euler angle
37 | pred_aa = pred[:, SMPL_MAJOR_JOINTS, :]
38 | pred_quat = quaternion.as_float_array(quaternion.from_rotation_vector(pred_aa))
39 | pred_euler = np.asarray([transforms3d.euler.quat2euler(pred_quat[k,j]) for k, j in
40 | itertools.product(range(seq_len), range(15))])
41 | pred_euler = (pred_euler * 180)/ np.pi
42 |
43 | ############# convert ground truth given in axis-angle to euler angle
44 | gt_aa = gt[:, SMPL_MAJOR_JOINTS, :]
45 | gt_quat = quaternion.as_float_array(quaternion.from_rotation_vector(gt_aa))
46 | gt_euler = np.asarray([transforms3d.euler.quat2euler(gt_quat[k, j]) for k, j in
47 | itertools.product(range(seq_len), range(15))])
48 | gt_euler = (gt_euler * 180) / np.pi
49 |
50 | ########### calculate loss and store in file
51 | loss = loss_impl(pred_euler.reshape(seq_len,15,3), gt_euler.reshape(seq_len,15,3))
52 | loss_file.write('{}-- {}\n'.format(act, loss))
53 |
54 | loss_file.close()
--------------------------------------------------------------------------------
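
The conversion chain used above (axis-angle -> quaternion -> Euler angles in degrees), shown for a single joint rotation as a standalone sketch with a made-up rotation vector:

    import numpy as np
    import quaternion
    import transforms3d

    aa = np.array([0.1, -0.3, 0.2])  # one joint in axis-angle (rotation vector)
    q = quaternion.as_float_array(quaternion.from_rotation_vector(aa))
    euler_deg = np.asarray(transforms3d.euler.quat2euler(q)) * 180.0 / np.pi
    print(euler_deg)                 # x, y, z angles in degrees
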
/Root/compareDIPFiles.py:
--------------------------------------------------------------------------------
1 | import sys
2 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/smpl_webuser')
3 | import numpy as np
4 | from opendr.renderer import ColoredRenderer
5 | from opendr.lighting import LambertianPointLight
6 | from opendr.camera import ProjectPoints
7 | from serialization import load_model
8 | import os
9 | import myUtil
10 | import cv2
11 | import matplotlib.pyplot as plt
12 | import itertools
13 |
14 | ####################################3 Load SMPL model (here we load the female model) ######################################
15 | m1 = load_model('../models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
16 | m1.betas[:] = np.random.rand(m1.betas.size) * .03
17 |
18 | m2 = load_model('../models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
19 | m2.betas[:] = np.random.rand(m2.betas.size) * .03
20 | ## Create OpenDR renderer
21 | rn1 = ColoredRenderer()
22 | rn2 = ColoredRenderer()
23 |
24 | ## Assign attributes to renderer
25 | w, h = (640, 480)
26 |
27 | rn1.camera = ProjectPoints(v=m1, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
28 | c=np.array([w, h]) / 2., k=np.zeros(5))
29 | rn1.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
30 | rn1.set(v=m1, f=m1.f, bgcolor=np.zeros(3))
31 |
32 | rn2.camera = ProjectPoints(v=m2, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
33 | c=np.array([w, h]) / 2., k=np.zeros(5))
34 | rn2.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
35 | rn2.set(v=m2, f=m2.f, bgcolor=np.zeros(3))
36 |
37 | ## Construct point light source
38 | rn1.vc = LambertianPointLight(
39 | f=m1.f,
40 | v=rn1.v,
41 | num_verts=len(m1),
42 | light_pos=np.array([-1000, -1000, -2000]),
43 | vc=np.ones_like(m1) * .9,
44 | light_color=np.array([1., 1., 1.]))
45 |
46 | rn2.vc = LambertianPointLight(
47 | f=m2.f,
48 | v=rn2.v,
49 | num_verts=len(m2),
50 | light_pos=np.array([-1000, -1000, -2000]),
51 | vc=np.ones_like(m2) * .9,
52 | light_color=np.array([1., 1., 1.]))
53 | ####################################### finish of adapting SMPL python initialization ########################
54 |
55 | ####################################### read the two files #############################
56 |
57 | ############### raw file
58 | DIPPath = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/s_10'
59 | imu_order = ['head', 'spine2', 'belly', 'lchest', 'rchest', 'lshoulder', 'rshoulder', 'lelbow', 'relbow', 'lhip', 'rhip', 'lknee', 'rknee', 'lwrist', 'lwrist', 'lankle', 'rankle']
60 | SENSORS = [ 'lknee', 'rknee','lelbow', 'relbow', 'head','belly']
61 | SMPL_SENSOR = ['L_Knee','R_Knee','L_Elbow','R_Elbow','Head','Pelvis']
62 | sensor_idx = [11,12,7,8,0,2]
63 |
64 |
65 | ############# calirated file
66 | SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
67 | file_path1 = '/data/Guha/GR/code/dip18/train_and_eval/data/dipIMU/imu_own_validation.npz'
68 | file_path2 = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/s_10'
69 | fileList = os.listdir(file_path2)
70 | dippath = os.path.join(file_path2,fileList[4])
71 | with open(file_path1, 'rb') as file1, open(dippath, 'rb') as file2:
72 | data_dict_nn = dict(np.load(file1))
73 | smpl_nn = data_dict_nn['smpl_pose'][1]
74 | smpl_nn_full = myUtil.smpl_reduced_to_full(smpl_nn)
75 | aa_nn = myUtil.rot_matrix_to_aa(smpl_nn_full)
76 |
77 | data_dict_imu = dict(np.load(file2))
78 | smpl_imu = data_dict_imu['gt']
79 |
80 | seq_len = aa_nn.shape[0]
81 | print (aa_nn - smpl_imu)
82 |
83 | for seq_num in range(seq_len):
84 | pose1 = aa_nn[seq_num]
85 | pose2 = smpl_imu[seq_num]
86 |
87 | m1.pose[:] = (pose1).reshape(72)
88 | m1.pose[0] = np.pi
89 |
90 | m2.pose[:] = (pose2).reshape(72)
91 | m2.pose[0] = np.pi
92 |
93 | cv2.imshow('GT', rn1.r)
94 | cv2.imshow('Prediction', rn2.r)
95 | #Press Q on keyboard to exit
96 | if cv2.waitKey(1) & 0xFF == ord('q'):
97 | break
98 |
99 | # nn_img = rn1.r * 255
100 | # imu_img = rn2.r * 255
101 | # cv2.imwrite('/data/Guha/GR/Output/folder1/' + str(seq_num) + '.png', nn_img)
102 | # cv2.imwrite('/data/Guha/GR/Output/folder2/' + str(seq_num) + '.png', imu_img)
103 |
104 |
105 | ########################## our calibration code #######################
106 |
107 | imu_ori = data_dict_imu['imu'][:, :, 0:9]
108 | frames2del = np.unique(np.where(np.isnan(imu_ori) == True)[0])
109 | imu_ori = np.delete(imu_ori, frames2del, 0)
110 | imu_ori = np.asarray([imu_ori[:, k, :] for k in sensor_idx])
111 | imu_ori = imu_ori.reshape(-1, 6, 3, 3)
112 |
113 | imu_acc = data_dict_imu['imu'][:, :, 9:12]
114 | imu_acc = np.delete(imu_acc, frames2del, 0)
115 | imu_acc = np.asarray([imu_acc[:, k, :] for k in sensor_idx])
116 | imu_acc = imu_acc.reshape(-1, 6, 3)
117 |
118 | seq_len = imu_acc.shape[0]
119 | print('seq len:', seq_len)
120 |
121 | head_0_frame = imu_ori[0, 0, :].reshape(3, 3)
122 | norm_ori_1 = np.asarray(
123 | [np.dot(np.linalg.inv(head_0_frame), imu_ori[k, j, :].reshape(3, 3)) for k, j in
124 | itertools.product(range(seq_len), range(6))])
125 | norm_ori_1 = norm_ori_1.reshape(-1, 6, 3, 3)
126 |
127 | R_TS_0 = norm_ori_1[0, 0, :, :]
128 | R_TS_1 = norm_ori_1[0, 1, :, :]
129 | # inv_R_TB_0 = np.asarray([[0,-1,0],[1,0,0],[0,0,1]])
130 | inv_R_TB_0 = np.asarray([[-0.0254474, -0.98612685, 0.16403132],
131 | [0.99638575, -0.01171777, 0.08413162],
132 | [-0.08104237, 0.1655794, 0.98286092]])
133 | # inv_R_TB_1 = np.asarray([[0,1,0],[-1,0,0],[0,0,1]])
134 | inv_R_TB_1 = np.asarray([[-0.01307566, 0.97978612, - 0.1996201],
135 | [-0.97720228, 0.02978678, 0.21021048],
136 | [0.21190735,
137 | 0.19781786,
138 | 0.95705975]])
139 |
140 | R_BS_0 = np.dot(inv_R_TB_0, R_TS_0)
141 | R_BS_1 = np.dot(inv_R_TB_1, R_TS_1)
142 |
143 | bone2S = norm_ori_1[0, :, :, :]
144 |
145 | bone2S[0, :, :] = R_BS_0
146 | bone2S[1, :, :] = R_BS_1
147 |
148 | # bone_ori = np.asarray([np.dot(norm_ori_1[k, j, :, :], np.linalg.inv(bone2S[j])) for k,j in itertools.product(range(seq_len),range(6))])
149 |
150 | bone_ori = np.asarray([np.dot(bone2S[j], norm_ori_1[k, j, :, :]) for k, j in
151 | itertools.product(range(seq_len), range(6))])
152 |
153 | bone_ori = bone_ori.reshape(-1, 6, 3, 3)
154 |
155 | norm_ori_2 = np.asarray(
156 | [np.dot(np.linalg.inv(bone_ori[i, 5, :, :]), bone_ori[i, j, :, :]) for i, j in
157 | itertools.product(range(seq_len), range(6))])
158 |
159 | norm_ori_2 = norm_ori_2.reshape(-1, 6, 3, 3)
160 |
161 | # acceleration
162 | norm_acc_1 = np.asarray(
163 | [np.dot(norm_ori_1[k, j, :].reshape(3, 3), imu_acc[k, j, :]) for k, j in
164 | itertools.product(range(seq_len), range(6))])
165 | norm_acc_1 = norm_acc_1.reshape(-1, 6, 3)
166 |
167 | norm_acc_2 = np.asarray(
168 | [np.dot(np.linalg.inv(norm_ori_2[k, 5, :, :]), (norm_acc_1[k, j, :] - norm_acc_1[k, 5, :])) for k, j in
169 | itertools.product(range(seq_len), range(6))])
170 | norm_acc_2 = norm_acc_2.reshape(-1, 6, 3)
171 |
172 | ori1 = data_dict_nn['orientation'][1]
173 | ori1 = np.delete(ori1, frames2del, 0)
174 | ori1 = ori1[:, :].reshape(-1, 5, 9)
175 | diff_ori = norm_ori_2[:, 0:5, :, :].reshape(-1, 5, 9) - ori1
176 |
177 | acc1 = data_dict_nn['acceleration'][1]
178 | acc1 = np.delete(acc1, frames2del, 0)
179 | acc1 = acc1[:, :].reshape(-1, 5, 3)
180 | diff_acc = norm_acc_2[:, 0:5, :].reshape(-1, 5, 3) - acc1
181 | for i in range(5):
182 | plt.figure('Difference of Orientation: {}'.format(SENSORS[i]))
183 | plt.title('Difference of Orientation: {} between DIP_IMU_nn & our calibration'.format(SENSORS[i]))
184 | plt.boxplot(diff_ori[:, i, :])
185 |
186 | plt.figure('Difference of Acceleration: {}'.format(SENSORS[i]))
187 | plt.title('Difference of Acceleration: {} between DIP_IMU_nn & our calibration'.format(SENSORS[i]))
188 | plt.boxplot(diff_acc[:, i, :])
189 | plt.show()
190 |
191 |
192 |
--------------------------------------------------------------------------------
/Root/createData.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import transforms3d
4 | import cv2
5 | import pickle as pkl
6 | import matplotlib.pyplot as plt
7 | import copy
8 | import pdb
9 | import quaternion
10 | import myUtil
11 | import itertools
12 | import h5py
13 |
14 |
15 | def calibrateRawIMU(acc, ori, pose):
16 |
17 |     ######### ********** the order of the IMUs is: [left_lower_wrist, right_lower_wrist, left_lower_leg, right_lower_leg, head, back]
18 |     #SMPL_SENSOR = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
19 |     SMPL_SENSOR = ['L_Elbow', 'R_Elbow', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
20 |
21 |     # safety check if any frame has NaN values
22 |     frames2del = np.unique(np.where(np.isnan(ori) == True)[0])
23 |     ori = np.delete(ori, frames2del, 0)
24 |     acc = np.delete(acc, frames2del, 0)
25 |     pose = np.delete(pose, frames2del, 0)
26 |     pose = pose.reshape(-1, 24, 3)
27 |     seq_len = len(ori)
28 |
29 |     # calibration of pose parameters
30 |     # ---------------------------------------------
31 |     SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
32 |     pose = pose[:, SMPL_MAJOR_JOINTS, :]
33 |     qs = quaternion.from_rotation_vector(pose)
34 |     pose_rot = np.reshape(quaternion.as_rotation_matrix(qs), [seq_len, 15, 9])
35 |     pose = np.reshape(pose_rot, [seq_len, 9 * 15])
36 |
37 |     ############### calculation in rotation matrices #########################
38 |
39 |     # head sensor for the first frame
40 |     head_quat = ori[0, 4, :, :]
41 |     Q = quaternion.as_rotation_matrix(np.quaternion(0.5, 0.5, 0.5, 0.5))
42 |     # calib: R(T_I) which is constant over the frames
43 |     calib = np.linalg.inv(np.dot(head_quat, Q))
44 |     #calib = np.linalg.inv(head_quat)
45 |
46 |     # bone2sensor: R(B_S) calculated once for each sensor and constant over the frames
47 |     bone2sensor = {}
48 |     for i in range(len(SMPL_SENSOR)):
49 |         sensorid = SMPL_SENSOR[i]
50 |         qbone = quaternion.as_rotation_matrix(quaternion.from_rotation_vector(myUtil.getGlobalBoneOri(sensorid)))
51 |         #qbone = myUtil.getGlobalBoneOriFromPose(pose_rot, sensorid)
52 |         qsensor = np.dot(calib, ori[0, i, :, :])
53 |         boneQuat = np.dot(np.linalg.inv(qsensor), qbone)
54 |         bone2sensor[i] = boneQuat
55 |
56 |     # calibrated_ori: R(T_B) calculated as calib * sensor data (changes every frame) * bone2sensor (corresponding sensor)
57 |     calibrated_ori = np.asarray(
58 |         [np.linalg.multi_dot([calib, ori[k, j, :, :], bone2sensor[j]]) for
59 |          k, j in itertools.product(range(seq_len), range(6))]
60 |     )
61 |     calibrated_ori = calibrated_ori.reshape(-1, 6, 3, 3)
62 |     root_inv = np.linalg.inv(calibrated_ori[:, 5, :, :])
63 |     # root_inv = np.transpose(calibrated_ori[:, 5], [0, 2, 1])
64 |     norm_ori = np.asarray(
65 |         [np.matmul(root_inv[k], calibrated_ori[k, j, :, :]) for k, j in itertools.product(range(seq_len), range(6))])
66 |     norm_ori = norm_ori.reshape(-1, 6, 3, 3)
67 |
68 |     # calibration of acceleration
69 |     acc_1 = np.asarray(
70 |         [np.dot(ori[k, j, :], acc[k, j, :]) for k, j in
71 |          itertools.product(range(seq_len), range(6))])
72 |     acc_1 = acc_1.reshape(-1, 6, 3)
73 |     calib_acc = np.asarray([np.dot(calib, acc_1[k, j, :]) for k, j in
74 |                             itertools.product(range(seq_len), range(6))])
75 |     calib_acc = calib_acc.reshape(-1, 6, 3)
76 |     norm_acc = np.asarray(
77 |         [np.dot(root_inv[k], (calib_acc[k, j, :] - calib_acc[k, 5, :])) for k, j in
78 |          itertools.product(range(seq_len), range(6))])
79 |     norm_acc = norm_acc.reshape(-1, 6, 3)
80 |
81 |
82 |     #return norm_acc[:,-1,:], ori_quat[:,-1,:], pose_quat
83 |     return norm_acc[:, 0:5, :], norm_ori[:, 0:5, :], pose
84 |
85 | if __name__ == '__main__':
86 |     ######################## read DataFile - DIP_IMU ###########################
87 |     DIPPath = '/data/Guha/GR/Dataset/DIP_IMU2/{}'
88 |     imu_order = ['head', 'spine2', 'belly', 'lchest', 'rchest', 'lshoulder', 'rshoulder', 'lelbow', 'relbow', 'lhip',
89 |                  'rhip', 'lknee', 'rknee', 'lwrist', 'rwrist', 'lankle', 'rankle']
90 |     SENSORS = ['lelbow', 'relbow', 'lknee', 'rknee', 'head', 'belly']
91 |     #SENSORS = ['lwrist', 'rwrist', 'lknee', 'rknee', 'head', 'belly']
92 |     sensor_idx = [7, 8, 11, 12, 0, 2]
93 |     FolderPath = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/s_{}'
94 |     for sub in range(10, 11):
95 |         path = FolderPath.format(sub)
96 |         for f in os.listdir(path):
97 |             with open(os.path.join(path, f), 'rb') as file:
98 |                 data_dict = pkl.load(file, encoding='latin1')
99 |                 if (len(data_dict['imu']) == 0):
100 |                     continue
101 |                 imu_ori = data_dict['imu'][:, :, 0:9]
102 |                 #frames2del = np.unique(np.where(np.isnan(imu_ori) == True)[0])
103 |                 #imu_ori = np.delete(imu_ori, frames2del, 0)
104 |                 imu_ori = np.asarray([imu_ori[:, k, :] for k in sensor_idx])
105 |                 raw_ori_dip = imu_ori.reshape(-1, 6, 3, 3)
106 |
107 |                 imu_acc = data_dict['imu'][:, :, 9:12]
108 |                 #imu_acc = np.delete(imu_acc, frames2del, 0)
109 |                 imu_acc = np.asarray([imu_acc[:, k, :] for k in sensor_idx])
110 |                 raw_acc_dip = imu_acc.reshape(-1, 6, 3)
111 |                 raw_pose_dip = data_dict['gt']
112 |
113 |                 # calibrate data
114 |                 acc_dip, ori_dip, pose_dip = calibrateRawIMU(raw_acc_dip, raw_ori_dip, raw_pose_dip)
115 |                 savepath = DIPPath.format(path.split('/')[-1] + '_' + f.split('.pkl')[0])
116 |
117 |
118 |                 #content = {"acc": acc, "ori": ori, "pose": pose}
119 |                 np.savez_compressed(savepath, acc=acc_dip, ori=ori_dip, pose=pose_dip)
120 |                 print('save path-- ', f)
121 |                 #break
122 |
123 |
124 |
125 |
126 |
127 |
128 |
--------------------------------------------------------------------------------
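
The heart of calibrateRawIMU, distilled to a single frame: R(T_I) (calib) and R(B_S) (bone2sensor) are computed once from the first frame, then each raw sensor reading R(I_S) is mapped into the SMPL frame and normalized by the root sensor. A sketch of that per-frame step (helper name is my own):

    import numpy as np

    def calibrate_frame(calib, ori_t, bone2sensor, root_idx=5):
        """Sketch: ori_t is the (6, 3, 3) raw sensor orientation of one frame."""
        # R(T_B) = R(T_I) * R(I_S) * R(B_S), per sensor
        calibrated = np.asarray([np.linalg.multi_dot([calib, ori_t[j], bone2sensor[j]])
                                 for j in range(6)])
        # express every bone relative to the root (pelvis) sensor
        root_inv = np.linalg.inv(calibrated[root_idx])
        return np.asarray([root_inv @ calibrated[j] for j in range(6)])
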
/Root/createVideo.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | import glob
4 | import scipy.io as sio
5 |
6 | import numpy as np
7 | from opendr.renderer import ColoredRenderer
8 | from opendr.lighting import LambertianPointLight
9 | from opendr.camera import ProjectPoints
10 | from GR19.smpl.smpl_webuser.serialization import load_model
11 | import os
12 | import transforms3d
13 | import cv2
14 | import pickle as pkl
15 | import matplotlib.pyplot as plt
16 | import copy
17 | import pdb
18 |
19 | ## Load SMPL model (here we load the male model)
20 | m1 = load_model('../models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
21 | m1.betas[:] = np.random.rand(m1.betas.size) * .03
22 |
23 | m2 = load_model('../models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
24 | m2.betas[:] = np.random.rand(m2.betas.size) * .03
25 | ## Create OpenDR renderer
26 | rn1 = ColoredRenderer()
27 | rn2 = ColoredRenderer()
28 |
29 | ## Assign attributes to renderer
30 | w, h = (640, 480)
31 |
32 | rn1.camera = ProjectPoints(v=m1, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
33 |                            c=np.array([w, h]) / 2., k=np.zeros(5))
34 | rn1.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
35 | rn1.set(v=m1, f=m1.f, bgcolor=np.zeros(3))
36 |
37 | rn2.camera = ProjectPoints(v=m2, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
38 |                            c=np.array([w, h]) / 2., k=np.zeros(5))
39 | rn2.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
40 | rn2.set(v=m2, f=m2.f, bgcolor=np.zeros(3))
41 |
42 | ## Construct point light source
43 | rn1.vc = LambertianPointLight(
44 |     f=m1.f,
45 |     v=rn1.v,
46 |     num_verts=len(m1),
47 |     light_pos=np.array([-1000, -1000, -2000]),
48 |     vc=np.ones_like(m1) * .9,
49 |     light_color=np.array([1., 1., 1.]))
50 |
51 | rn2.vc = LambertianPointLight(
52 |     f=m2.f,
53 |     v=rn2.v,
54 |     num_verts=len(m2),
55 |     light_pos=np.array([-1000, -1000, -2000]),
56 |     vc=np.ones_like(m2) * .9,
57 |     light_color=np.array([1., 1., 1.]))
58 |
59 |
60 | file_path = '/data/Guha/GR/code/dip18/train_and_eval/evaluation_results/validate_our_data_all_frames.npz'
61 | with open(file_path, 'rb') as file:
62 |     data_dict = dict(np.load(file))
63 |     gt = data_dict['gt']
64 |     pred = data_dict['prediction']
65 | gt_arr = []
66 | pred_arr = []
67 | height, width, layers = (640, 480, 3)
68 | size = (640, 480)
69 |
70 | act = 1
71 | gt = gt[act].reshape(-1, 24, 3)
72 | pred = pred[act].reshape(-1, 24, 3)
73 | print('activity no: ', act)
74 | seq_len = gt.shape[0]
75 | print('seq len:', seq_len)
76 | for seq_num in range(seq_len):
77 |     pose1 = gt[seq_num]
78 |     pose2 = pred[seq_num]
79 |
80 |     m1.pose[:] = (pose1).reshape(72)
81 |     m1.pose[0] = np.pi
82 |
83 |     m2.pose[:] = (pose2).reshape(72)
84 |     m2.pose[0] = np.pi
85 |
86 |     # cv2.imshow('GT', rn1.r)
87 |     # cv2.imshow('Prediction', rn2.r)
88 |     # Press Q on keyboard to exit
89 |     # if cv2.waitKey(1) & 0xFF == ord('q'):
90 |     #     break
91 |
92 |     gt_img = rn1.r * 255
93 |     pred_img = rn2.r * 255
94 |     cv2.imwrite('/data/Guha/GR/Output/GT/' + str(seq_num) + '.png', gt_img)
95 |     cv2.imwrite('/data/Guha/GR/Output/Prediction/' + str(seq_num) + '.png', pred_img)
96 |     # gt_arr.append(gt_img)
97 |     # pred_arr.append(pred_img)
98 |
99 |
100 | # #out = cv2.VideoWriter('AW7.mp4',cv2.VideoWriter_fourcc(*'DIVX'), 15, size))
101 | # fourcc = cv2.VideoWriter_fourcc(*'mp4v')
102 | # gt_out = cv2.VideoWriter('Act_GT:'+act,fourcc, 20.0, size)
103 | # pred_out = cv2.VideoWriter('Act_Pred:'+act,fourcc, 20.0, size)
104 | #
105 | # for i in range(len(gt_arr)):
106 | # gt_out.write(gt_arr[i])
107 | # print ('gt finished')
108 | # for i in range(len(pred_arr)):
109 | # pred_out.write(gt_arr[i])
110 | # print ('pred finished')
111 | # gt_out.release()
112 | # pred_out.release()
113 |
--------------------------------------------------------------------------------
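
The commented-out tail of createVideo.py sketches how the dumped frames could be stitched into a clip with OpenCV; a working version of that idea (paths hypothetical, frame size matching the renderer above):

    import os
    import cv2

    frame_dir = '/data/Guha/GR/Output/GT/'
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter('gt.mp4', fourcc, 20.0, (640, 480))
    # frames were written as 0.png, 1.png, ... so sort them numerically
    for name in sorted(os.listdir(frame_dir), key=lambda s: int(s.split('.')[0])):
        out.write(cv2.imread(os.path.join(frame_dir, name)))
    out.release()
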
/Root/duplicate.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from Network import BiRNN
3 | import torch.optim as optim
4 | from IMUDataset import IMUDataset
5 | from time import time
6 | import torch
7 | import Config as cfg
8 | import torch.nn as nn
9 | import random
10 | import matplotlib.pyplot as plt
11 |
12 | class TrainingEngine:
13 | def __init__(self):
14 | self.trainset = ['AMASS_ACCAD', 'AMASS_BioMotion', 'AMASS_CMU_Kitchen', 'AMASS_Eyes', 'AMASS_MIXAMO',
15 | 'AMASS_SSM', 'AMASS_Transition', 'CMU', 'H36']
16 | self.testSet = ['AMASS_HDM05', 'HEva', 'JointLimit']
17 | self.datapath = '/data/Guha/GR/Dataset'
18 | self.dataset = IMUDataset(self.datapath,self.trainset)
19 | self.use_cuda =True
20 | self.modelPath = '/data/Guha/GR/model/13/'
21 | self.model = BiRNN().cuda()
22 | self.mseloss = nn.MSELoss()
23 | baseModelPath = '/data/Guha/GR/model/13/epoch_1.pth.tar'
24 |
25 | with open(baseModelPath, 'rb') as tar:
26 | checkpoint = torch.load(tar)
27 | model_weights = checkpoint['state_dict']
28 | self.model.load_state_dict(model_weights)
29 |
30 | def train(self,n_epochs):
31 | f = open(self.modelPath+'model_details','w')
32 | f.write(str(self.model))
33 | f.write('\n')
34 |
35 | np.random.seed(1234)
36 | lr = 0.001
37 | gradient_clip = 0.1
38 | optimizer = optim.Adam(self.model.parameters(),lr=lr)
39 |
40 | print('Training for %d epochs' % (n_epochs))
41 | no_of_trainbatch = int(len(self.dataset.files) / cfg.batch_len)
42 |
43 | print('batch size--> %d, Seq len--> %d, no of batches--> %d' % (cfg.batch_len, cfg.seq_len, no_of_trainbatch))
44 | f.write('batch size--> %d, Seq len--> %d, no of batches--> %d \n' % (cfg.batch_len, cfg.seq_len, no_of_trainbatch))
45 |
46 | min_batch_loss = 0.0
47 | min_valid_loss = 0.0
48 |
49 | try:
50 | for epoch in range(1,n_epochs):
51 | epoch_loss = []
52 | start_time = time()
53 | self.dataset.loadfiles(self.datapath,self.trainset)
54 | ####################### training #######################
55 | # while(len(self.dataset.files) > 0):
56 | # # Pick a random chunk from each sequence
57 | # self.dataset.createbatch_no_replacement()
58 | # inputs = torch.FloatTensor(self.dataset.input)
59 | # outputs = torch.FloatTensor(self.dataset.target)
60 | #
61 | # if self.use_cuda:
62 | # inputs = inputs.cuda()
63 | # outputs = outputs.cuda()
64 | # self.model.cuda()
65 | #
66 | # chunk_in = list(torch.split(inputs, cfg.seq_len))[:-1]
67 | # chunk_out = list(torch.split(outputs, cfg.seq_len))[:-1]
68 | # random.shuffle(chunk_in)
69 | # random.shuffle(chunk_out)
70 | # chunk_in = torch.stack(chunk_in, dim=0)
71 | # chunk_out = torch.stack(chunk_out, dim=0)
72 | # print('no of chunks %d \n' % (len(chunk_in)))
73 | # f.write('no of chunks %d \n' % (len(chunk_in)))
74 | # self.model.train()
75 | # optimizer.zero_grad()
76 | # predictions = self.model(chunk_in)
77 | #
78 | # loss = self._loss_impl(predictions, chunk_out)
79 | # loss.backward()
80 | # nn.utils.clip_grad_norm_(self.model.parameters(), gradient_clip)
81 | # optimizer.step()
82 | # loss.detach()
83 | #
84 | # epoch_loss.append(loss.item())
85 | # if (min_batch_loss == 0 or loss < min_batch_loss):
86 | # min_batch_loss = loss
87 | # print ('training loss %f ' % (loss.item()))
88 | # f.write('training loss %f \n' % (loss.item()))
89 | #
90 | # epoch_loss = torch.mean(torch.FloatTensor(epoch_loss))
91 | # # we save the model after each epoch : epoch_{}.pth.tar
92 | # state = {
93 | # 'epoch': epoch + 1,
94 | # 'state_dict': self.model.state_dict(),
95 | # 'epoch_loss': epoch_loss
96 | # }
97 | # torch.save(state, self.modelPath + 'epoch_{}.pth.tar'.format(epoch+1))
98 |
99 | ####################### Validation #######################
100 | valid_meanloss = []
101 | valid_maxloss = []
102 | data_to_plot = []
103 | self.model.eval()
104 |
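    |             # validation: each test file is run as one full sequence; per-frame L2
    |             # errors are collected per dataset and box-plotted after every epoch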
105 | for d in self.testSet:
106 | self.dataset.loadfiles(self.datapath, [d])
107 | dset_loss = []
108 |                 for fname in self.dataset.files:
109 |                     self.dataset.readfile(fname)
110 | input = torch.FloatTensor(self.dataset.input)
111 | input = torch.unsqueeze(input, 0)
112 | target = torch.FloatTensor(self.dataset.target)
113 | if self.use_cuda:
114 | input = input.cuda()
115 |
116 | prediction = self.model(input)
117 | prediction = prediction.detach().reshape_as(target).cpu()
118 | loss = torch.norm((prediction-target),2,1)
119 | mean_loss = torch.mean(loss)
120 | max_loss = torch.max(loss)
121 | dset_loss.extend(loss.numpy())
122 | valid_meanloss.append(mean_loss)
123 | valid_maxloss.append(max_loss)
124 | print(
125 | 'mean loss %f, max loss %f, \n' % (
126 | mean_loss, max_loss))
127 |
128 | data_to_plot.append(dset_loss)
129 |
130 | # save box plots of three dataset
131 | fig = plt.figure('epoch: '+str(epoch))
132 | # Create an axes instance
133 | ax = fig.add_subplot(111)
134 | # Create the boxplot
135 | ax.boxplot(data_to_plot)
136 | ax.set_xticklabels(self.testSet)
137 | # Save the figure
138 | fig.savefig(self.modelPath+'epoch: '+str(epoch)+'.png', bbox_inches='tight')
139 |
140 | mean_valid_loss = torch.mean(torch.FloatTensor(valid_meanloss))
141 | max_valid_loss = torch.max(torch.FloatTensor(valid_maxloss))
142 | # we save the model if current validation loss is less than prev : validation.pth.tar
143 | if (min_valid_loss == 0 or mean_valid_loss < min_valid_loss):
144 | min_valid_loss = mean_valid_loss
145 | state = {
146 | 'epoch': epoch + 1,
147 | 'state_dict': self.model.state_dict(),
148 | 'validation_loss': mean_valid_loss
149 | }
150 | torch.save(state, self.modelPath + 'validation.pth.tar')
151 |
152 | # logging to track
153 |             print ('epoch No %d, epoch loss %f , validation mean loss %f, validation max loss %f, Time taken %f \n' % (
154 |                 epoch + 1, np.mean(epoch_loss) if epoch_loss else float('nan'), mean_valid_loss.item(), max_valid_loss.item(), time() - start_time))
155 |             f.write('epoch No %d, epoch loss %f , validation mean loss %f, validation max loss %f, Time taken %f \n' % (
156 |                 epoch + 1, np.mean(epoch_loss) if epoch_loss else float('nan'), mean_valid_loss.item(), max_valid_loss.item(), time() - start_time))
157 |
158 | f.close()
159 | except KeyboardInterrupt:
160 | print('Training aborted.')
161 |
162 | def _loss_impl(self, predicted, expected):
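    |         # predicted/expected are (batch, seq_len, 60); the 2-norm over the last
    |         # dim gives a per-frame pose error, summed over frames and batch and
    |         # then divided by the batch size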
163 | L1 = predicted - expected
164 | batch_size = predicted.shape[0]
165 | dist = torch.sum(torch.norm(L1, 2, 2))
166 | return dist/ batch_size
167 |
168 |
169 |
170 | if __name__ == '__main__':
171 | trainingEngine = TrainingEngine()
172 | trainingEngine.train(n_epochs=30)
--------------------------------------------------------------------------------
/Root/eval_dip_nn.py:
--------------------------------------------------------------------------------
1 | # from Network import BiLSTM
2 | # from Network import BiRNN
3 | from DIP_IMU_NN_MLP import InverseKinematic
4 | from DIP_IMU_NN_BiRNN import BiRNN
5 | import numpy as np
6 | from IMUDataset import IMUDataset
7 | import torch
8 | import Config as cfg
9 | import itertools
10 | import transforms3d
11 |
12 | class TestEngine:
13 | def __init__(self):
14 | self.testModel = InverseKinematic().cuda()
15 | #self.testModel = BiRNN().cuda()
16 |
17 | modelPath = '/data/Guha/GR/model/dip_nn_mlp/validation.pth.tar'
18 | self.base = '/data/Guha/GR/Output/TestSet/dip_nn/'
19 | with open(modelPath , 'rb') as tar:
20 | checkpoint = torch.load(tar)
21 | model_weights = checkpoint['state_dict']
22 | #epoch_loss = checkpoint['validation_loss']
23 | self.testModel.load_state_dict(model_weights)
24 | self.loss_file = open('/data/Guha/GR/Output/loss/loss_dip_mlp.txt', 'w')
25 |
26 | # test the whole activity at once
27 | def test(self):
28 | from pyquaternion import Quaternion
29 | file = '/data/Guha/GR/code/dip18/train_and_eval/data/dipIMU/imu_own_test.npz'
30 | data_dict = dict(np.load(file))
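    |         # imu_own_test.npz (assumed layout, matching the reshapes below) holds 18
    |         # recordings: per frame, 15 SMPL joint rotation matrices ('smpl_pose') and
    |         # 5 sensor orientation matrices ('orientation'); both are converted to
    |         # quaternions before being fed to the network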
31 | for act in range(18):
32 | sample_pose = data_dict['smpl_pose'][act].reshape(-1, 15, 3, 3)
33 | sample_ori = data_dict['orientation'][act].reshape(-1, 5, 3, 3)
34 | act = data_dict['file_id'][act]
35 | seq_len = sample_ori.shape[0]
36 | ori_quat = np.asarray([Quaternion(matrix=sample_ori[k, j, :, :]).elements for k, j in
37 | itertools.product(range(seq_len), range(5))])
38 | ori_quat = ori_quat.reshape(-1, 5 * 4)
39 | pose_quat = np.asarray([Quaternion(matrix=sample_pose[k, j, :, :]).elements for k, j in
40 | itertools.product(range(seq_len), range(15))])
41 |
42 | pose_quat = pose_quat.reshape(-1, 15, 4)
43 |
44 |             input = torch.FloatTensor(ori_quat).cuda()  # model weights live on the GPU
45 | target = pose_quat
46 |
47 | # bilstm
48 | # initialize hidden and cell state at each new batch
49 | hidden = torch.zeros(cfg.n_layers * 2, 1, cfg.hid_dim, dtype=torch.double).cuda()
50 | cell = torch.zeros(cfg.n_layers * 2, 1, cfg.hid_dim, dtype=torch.double).cuda()
51 | # prediction,_,_ = self.testModel(input,hidden,cell)
52 |
53 | # birnn
54 | #prediction = self.testModel(input)
55 |
56 | # mlp
57 | prediction = self.testModel(input)
58 | prediction = prediction.detach().cpu().numpy().reshape(-1, 15, 4)
59 |
60 |             # Renormalize prediction: divide each quaternion by its own norm
61 |             predictions = np.asarray(prediction)
62 |             norms = np.linalg.norm(predictions, axis=2)
63 |             predictions = np.asarray(
64 |                 [predictions[k, j, :] / norms[k, j] for k, j in itertools.product(range(seq_len), range(15))])
65 | predictions = predictions.reshape(seq_len, 15, 4)
66 | ################### convert to euler
67 | target_euler = np.asarray([transforms3d.euler.quat2euler(target[k, j]) for k, j in
68 | itertools.product(range(seq_len), range(15))])
69 | target_euler = (target_euler * 180) / np.pi
70 | pred_euler = np.asarray([transforms3d.euler.quat2euler(predictions[k, j]) for k, j in
71 | itertools.product(range(seq_len), range(15))])
72 | pred_euler = (pred_euler * 180) / np.pi
73 | ##################calculate loss
74 | loss = self.loss_impl(target_euler.reshape(-1, 15, 3), pred_euler.reshape(-1, 15, 3))
75 | # loss_file.write('{}-- {}\n'.format(f,loss))
76 | # print(f+'-------'+str(loss))
77 | print(act + '-------' + str(loss))
78 | self.loss_file.write('{}\n'.format(loss))
79 | # save GT and prediction
80 | #np.savez_compressed(self.base + act , target=target.cpu().numpy(), predictions=prediction)
81 |
82 | # test each timestep within a window
83 | def testWindow(self,len_past,len_future):
84 | from pyquaternion import Quaternion
85 | file = '/data/Guha/GR/code/dip18/train_and_eval/data/dipIMU/imu_own_test.npz'
86 | data_dict = dict(np.load(file))
87 | for act in range(18):
88 | sample_pose = data_dict['smpl_pose'][act].reshape(-1, 15, 3, 3)
89 | sample_ori = data_dict['orientation'][act].reshape(-1, 5, 3, 3)
90 | act = data_dict['file_id'][act]
91 | seq_len = sample_ori.shape[0]
92 | ori_quat = np.asarray([Quaternion(matrix=sample_ori[k, j, :, :]).elements for k, j in
93 | itertools.product(range(seq_len), range(5))])
94 | ori_quat = ori_quat.reshape(-1, 5 * 4)
95 | pose_quat = np.asarray([Quaternion(matrix=sample_pose[k, j, :, :]).elements for k, j in
96 | itertools.product(range(seq_len), range(15))])
97 |
98 | pose_quat = pose_quat.reshape(-1, 15 , 4)
99 |
100 | input = torch.FloatTensor(ori_quat)
101 | target = pose_quat
102 |
103 | # bilstm
104 | # initialize hidden and cell state at each new batch
105 | hidden = torch.zeros(cfg.n_layers * 2, 1, cfg.hid_dim, dtype=torch.double).cuda()
106 | cell = torch.zeros(cfg.n_layers * 2, 1, cfg.hid_dim, dtype=torch.double).cuda()
107 |
108 | predictions = []
109 | # loop over all frames in input. take the window to predict each timestep t
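    |             # the window spans up to len_past frames of history and len_future
    |             # frames of lookahead, clamped at the sequence boundaries;
    |             # prediction_step then picks frame t inside the (possibly clipped) window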
110 | for step in range(seq_len):
111 | start_idx = max(step - len_past, 0)
112 | end_idx = min(step + len_future + 1, seq_len)
113 | in_window = input[start_idx:end_idx]
114 | in_window = torch.FloatTensor(in_window).unsqueeze(0).cuda()
115 | # target_window = target[start_idx:end_idx]
116 | # bilstm
117 | # output,_,_ = self.testModel(in_window,hidden,cell)
118 |
119 | # birnn
120 | output = self.testModel(in_window)
121 |
122 | prediction_step = min(step, len_past)
123 | pred = output[:, prediction_step:prediction_step + 1].detach().cpu().numpy().reshape(15, 4)
124 | predictions.append(pred)
125 |
126 |             # Renormalize prediction: divide each quaternion by its own norm
127 |             predictions = np.asarray(predictions)
128 |             norms = np.linalg.norm(predictions, axis=2)
129 |             predictions = np.asarray(
130 |                 [predictions[k, j, :] / norms[k, j] for k, j in itertools.product(range(seq_len), range(15))])
131 | predictions = predictions.reshape(seq_len, 15, 4)
132 | ################### convert to euler
133 | target_euler = np.asarray([transforms3d.euler.quat2euler(target[k, j]) for k, j in
134 | itertools.product(range(seq_len), range(15))])
135 | target_euler = (target_euler * 180) / np.pi
136 | pred_euler = np.asarray([transforms3d.euler.quat2euler(predictions[k, j]) for k, j in
137 | itertools.product(range(seq_len), range(15))])
138 | pred_euler = (pred_euler * 180) / np.pi
139 | ##################calculate loss
140 | loss = self.loss_impl(target_euler.reshape(-1, 15, 3), pred_euler.reshape(-1, 15, 3))
141 | # loss_file.write('{}-- {}\n'.format(f,loss))
142 | # print(f+'-------'+str(loss))
143 | print(act + '-------' + str(loss))
144 | self.loss_file.write('{}\n'.format(loss))
145 | # save GT and prediction
146 | #np.savez_compressed(self.base + act, target=target, predictions=predictions)
147 |
148 | self.loss_file.close()
149 |
150 | def loss_impl(self, predicted, expected):
151 | error = predicted - expected
152 | error_norm = np.linalg.norm(error, axis=2)
153 | error_per_joint = np.mean(error_norm, axis=1)
154 | error_per_frame_per_joint = np.mean(error_per_joint, axis=0)
155 | return error_per_frame_per_joint
156 |
157 |
158 | if __name__ == '__main__':
159 | testEngine = TestEngine()
160 | testEngine.testWindow(20,5)
161 | #testEngine.test()
162 |
163 |
164 |
165 |
--------------------------------------------------------------------------------
/Root/generateLossFiles:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 |
4 |
5 | file_path1 = '/data/Guha/GR/Output/loss_9_AMASS_Transition.txt'
6 | file_path2 = '/data/Guha/GR/Output/loss_13_AMASS_Transition.txt'
7 | file_path3 = '/data/Guha/GR/Output/loss_9.txt'
8 | file_path4 = '/data/Guha/GR/Output/loss_13_s11.txt'
9 | selected = [0,2,4,6,9,10,11,14,16]
10 | loss1,loss2, loss3, loss4 = [],[],[],[]
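    | # each line of a loss file is assumed to look like '<activity> <loss>';
    | # only the numeric second field is kept for plotting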
11 | with open(file_path1, 'r') as file1, open(file_path2, 'r') as file2, open(file_path3, 'r') as file3, open(file_path4, 'r') as file4:
12 |     for line1,line2,line3,line4 in zip(file1.readlines(),file2.readlines(),file3.readlines(),file4.readlines()):
13 | loss1.append(float(line1.split(' ')[1]))
14 | loss2.append(float(line2.split(' ')[1]))
15 | loss3.append(float(line3.split(' ')[1]))
16 | loss4.append(float(line4.split(' ')[1]))
17 |
18 | # loss1 = [loss1[i] for i in selected]
19 | # loss2 = [loss2[i] for i in selected]
20 | x_ind = np.arange(len(loss1))
21 | # width = 0.18
22 | # r1 = np.arange(len(loss1))
23 | # r2 = [x + width for x in r1]
24 | # r3 = [x + width for x in r2]
25 | # r4 = [x + width for x in r3]
26 | fig, ax = plt.subplots(figsize=(10,7))
27 | # rects1 = ax.bar(r1, loss1, width, edgecolor='white',label='model:BiRNN',align='edge')
28 | # rects2 = ax.bar(r2, loss2, width, edgecolor='white', label='model: MLP',align='edge')
29 | # rects1 = ax.bar(r3, loss3, width, edgecolor='white',label='DIP:Synthetic',align='edge')
30 | # rects2 = ax.bar(r4, loss4, width, edgecolor='white',label='DIP:Fine-tuned',align='edge')
31 |
32 | ax.plot(np.arange(len(loss1)),loss1,label='model a: AMASS_Transition',dashes=[4,2], color='brown')
33 | ax.plot(np.arange(len(loss2)),loss2,label='model b: AMASS_Transition',dashes=[4,2])
34 | ax.plot(np.arange(len(loss1)),loss3,label='model a: H36')
35 | ax.plot(np.arange(len(loss2)),loss4,label='model b: H36')
36 |
37 | # Add some text for labels, title and custom x-axis tick labels, etc.
38 | ax.set_ylabel('Mean Pose Error (quaternion)')
39 | ax.set_xlabel('test activities')
40 | ax.set_title('comparison of models trained on H36 vs all dataset')
41 | ax.set_xticks(x_ind)
42 | #ax.set_xticklabels(('G1', 'G2', 'G3', 'G4', 'G5'))
43 | ax.legend()
44 | plt.show()
45 |
46 | # loss_file = open('/data/Guha/GR/Output/loss_dip_fine2.txt', 'w')
47 | # def loss_impl(predicted, expected):
48 | # L1 = np.abs(predicted.reshape(-1, 60)) - np.abs(expected.reshape(-1, 60))
49 | # return np.mean((np.linalg.norm(L1, 2, 1)))
50 | #
51 | # import quaternion
52 | # SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
53 | # file_path = '/data/Guha/GR/Output/TestSet/dip_own/test_finetuned.npz'
54 | # with open(file_path, 'rb') as file:
55 | # data_dict = dict(np.load(file))
56 | # gt_list = data_dict['gt']
57 | # pred_list = data_dict['prediction']
58 | #
59 | # for act in range(18):
60 | # gt = gt_list[act].reshape(-1,24,3)
61 | # pred = pred_list[act].reshape(-1, 24,3)
62 | #
63 | # pred = pred[:, SMPL_MAJOR_JOINTS, :]
64 | # pred = quaternion.as_float_array(quaternion.from_rotation_vector(pred))
65 | #
66 | # gt = gt[:, SMPL_MAJOR_JOINTS, :]
67 | # gt = quaternion.as_float_array(quaternion.from_rotation_vector(gt))
68 | #
69 | # loss = loss_impl(pred, gt)
70 | # loss_file.write('{}-- {}\n'.format(act, loss))
71 | #
72 | # loss_file.close()
--------------------------------------------------------------------------------
/Root/myUtil.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/myUtil.pyc
--------------------------------------------------------------------------------
/Root/new_createData.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import transforms3d
4 | import cv2
5 | import pickle as pkl
6 | import matplotlib.pyplot as plt
7 | import copy
8 | import pdb
9 | import quaternion
10 | import myUtil
11 | import itertools
12 | import h5py
13 |
14 |
15 | def calibrateRawIMU(acc,ori,pose):
16 |
17 |     ######### ********** the order of IMUs is: [left_lower_wrist, right_lower_wrist, left_lower_leg, right_lower_leg, head, back]
18 | #SMPL_SENSOR = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
19 | SMPL_SENSOR = ['L_Elbow', 'R_Elbow', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
20 |
21 | # safety check if any frame has NAN Values
22 | frames2del = np.unique(np.where(np.isnan(ori) == True)[0])
23 | ori = np.delete(ori, frames2del, 0)
24 | acc = np.delete(acc, frames2del, 0)
25 | pose = np.delete(pose, frames2del, 0)
26 | pose = pose.reshape(-1,24,3)
27 | seq_len = len(ori)
28 |
29 | # calibration of pose parameters
30 | # ---------------------------------------------
31 | SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
32 | pose = pose[:, SMPL_MAJOR_JOINTS, :]
33 | qs = quaternion.from_rotation_vector(pose)
34 | pose_rot = np.reshape(quaternion.as_rotation_matrix(qs), [seq_len, 15, 9])
35 | pose = pose_rot.reshape(-1,15,3,3)
36 |
37 | ############### calculation in Rotation Matrix #########################
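    |     # notation (as the numbered step comments below suggest): R(X_Y) rotates
    |     # coordinates from frame Y into frame X, with I = global inertial frame,
    |     # S = sensor frame, B = bone frame and T = the SMPL tracking frame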
38 | # 1. calib: R(T_I) = inv( R(I_S) * R(S_B) * R(B_T) ) [sensor 4: head for 1st frame]. rti is constant for all frames
39 | ris_head = ori[0, 4, :, :]
40 | rsb_head = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
41 | rbt_head = np.identity(3)
42 | rti = np.linalg.inv(np.linalg.multi_dot([ris_head, rsb_head, rbt_head]))
43 | # 2. R(T_S) = R(T_I) * R(I_S) for all sensors for all frames
44 | seq_len = len(ori)
45 | rts = np.asarray(
46 | [np.dot(rti, ori[k, j, :, :]) for
47 | k, j in itertools.product(range(seq_len), range(6))]
48 | ).reshape(-1,6,3,3)
49 | # 3. calculate R(B_T) for all 6 joints
50 | rbt = myUtil.getGlobalBoneOriFromPose(pose[0], SMPL_SENSOR)
51 |
52 | # 4. calculate R(B_S) = R(B_T) * R(T_S) for the first frame which will be constant across all frames
53 | rbs = np.array([np.dot(rbt[k], rts[0, k]) for k in range(6)])
54 |
55 | # 5. calculate bone2smpl (all frames) : R(T_B) = R(T_S) * inv(R(B_S))
56 | rtb = np.asarray(
57 | [np.dot(rts[k, j, :, :], rbs[j]) for
58 | k, j in itertools.product(range(seq_len), range(6))]
59 | )
60 | calibrated_ori = rtb.reshape(-1, 6, 3, 3)
61 |
62 | # 6. normalize respect to root
63 | root_inv = np.linalg.inv(calibrated_ori[:, 5, :, :])
64 | # root_inv = np.transpose(calibrated_ori[:, 5], [0, 2, 1])
65 | norm_ori = np.asarray(
66 | [np.matmul(root_inv[k], calibrated_ori[k, j, :, :]) for k, j in itertools.product(range(seq_len), range(6))])
67 | norm_ori = norm_ori.reshape(-1, 6, 3, 3)
68 |
69 | #return norm_acc[:,-1,:],ori_quat[:,-1,:],pose_quat
70 | return norm_ori[:,0:5,:],pose
71 |
72 | if __name__ == '__main__':
73 | ######################## read DataFile - DIP_IMU ###########################
74 | DIPPath = '/data/Guha/GR/Dataset/DIP_IMU/{}'
75 | imu_order = ['head', 'spine2', 'belly', 'lchest', 'rchest', 'lshoulder', 'rshoulder', 'lelbow', 'relbow', 'lhip',
76 | 'rhip', 'lknee', 'rknee', 'lwrist', 'rwrist', 'lankle', 'rankle']
77 | SENSORS = ['lelbow', 'relbow', 'lknee', 'rknee', 'head', 'belly']
78 | #SENSORS = ['lwrist', 'rwrist', 'lknee', 'rknee', 'head', 'belly']
79 | sensor_idx = [7, 8, 11, 12, 0, 2]
80 |     FolderPath = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/s_0{}'
81 |     for sub in range(1,10):
82 |         path = FolderPath.format(sub)
83 | for f in os.listdir(path):
84 | with open(os.path.join(path,f),'rb') as file:
85 | data_dict = pkl.load(file,encoding='latin1')
86 | if (len(data_dict['imu']) == 0):
87 | continue
88 | imu_ori = data_dict['imu'][:, :, 0:9]
89 | #frames2del = np.unique(np.where(np.isnan(imu_ori) == True)[0])
90 | #imu_ori = np.delete(imu_ori, frames2del, 0)
91 | imu_ori = np.asarray([imu_ori[:, k, :] for k in sensor_idx])
92 | raw_ori_dip = imu_ori.reshape(-1, 6, 3, 3)
93 |
94 | imu_acc = data_dict['imu'][:, :, 9:12]
95 | #imu_acc = np.delete(imu_acc, frames2del, 0)
96 | imu_acc = np.asarray([imu_acc[:, k, :] for k in sensor_idx])
97 | raw_acc_dip = imu_acc.reshape(-1, 6, 3)
98 | raw_pose_dip = data_dict['gt']
99 |
100 | # calibrate data
101 | ori_dip, pose_dip = calibrateRawIMU(raw_acc_dip, raw_ori_dip, raw_pose_dip)
102 | savepath = DIPPath.format(path.split('/')[-1]+'_'+f.split('.pkl')[0])
103 |
104 |
105 | #content = {"acc": acc,"ori": ori,"pose":pose}
106 | np.savez_compressed(savepath,ori = ori_dip,pose = pose_dip)
107 |             print ('save path-- ', savepath)
108 | #break
109 |
110 |
111 |
112 |
113 |
114 |
115 |
--------------------------------------------------------------------------------
/Root/new_prepareData.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import pickle as pkl
4 | import quaternion
5 | import myUtil
6 | import itertools
7 | import h5py
8 |
9 |
10 | def calibrateRawIMU(acc,ori,pose):
11 |
12 |     ######### ********** the order of IMUs is: [left_lower_wrist, right_lower_wrist, left_lower_leg, right_lower_leg, head, back]
13 | #SMPL_SENSOR = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
14 | SMPL_SENSOR = ['L_Elbow', 'R_Elbow', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
15 |
16 | # safety check if any frame has NAN Values
17 | frames2del = np.unique(np.where(np.isnan(ori) == True)[0])
18 | ori = np.delete(ori, frames2del, 0)
19 | acc = np.delete(acc, frames2del, 0)
20 | pose = np.delete(pose, frames2del, 0)
21 | pose = pose.reshape(-1, 15, 3, 3)
22 | seq_len = len(ori)
23 |
24 | ############### calculation in Rotation Matrix #########################
25 | # 1. calib: R(T_I) = inv( R(I_S) * R(S_B) * R(B_T) ) [sensor 4: head for 1st frame]. rti is constant for all frames
26 | ris_head = ori[0, 4, :, :]
27 | rsb_head = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
28 | rbt_head = np.identity(3)
29 | rti = np.linalg.inv(np.linalg.multi_dot([ris_head, rsb_head, rbt_head]))
30 | # 2. R(T_S) = R(T_I) * R(I_S) for all sensors for all frames
31 | seq_len = len(ori)
32 | rts = np.asarray(
33 | [np.dot(rti, ori[k, j, :, :]) for
34 | k, j in itertools.product(range(seq_len), range(6))]
35 | ).reshape(-1,6,3,3)
36 | # 3. calculate R(B_T) for all 6 joints
37 | rbt = myUtil.getGlobalBoneOriFromPose(pose[0], SMPL_SENSOR)
38 |
39 | # 4. calculate R(B_S) = R(B_T) * R(T_S) for the first frame which will be constant across all frames
40 | rbs = np.array([np.dot(rbt[k], rts[0, k]) for k in range(6)])
41 |
42 | # 5. calculate bone2smpl (all frames) : R(T_B) = R(T_S) * inv(R(B_S))
43 | rtb = np.asarray(
44 | [np.dot(rts[k, j, :, :], rbs[j]) for
45 | k, j in itertools.product(range(seq_len), range(6))]
46 | )
47 | calibrated_ori = rtb.reshape(-1, 6, 3, 3)
48 |
49 | # 6. normalize respect to root
50 | root_inv = np.linalg.inv(calibrated_ori[:, 5, :, :])
51 | # root_inv = np.transpose(calibrated_ori[:, 5], [0, 2, 1])
52 | norm_ori = np.asarray(
53 | [np.matmul(root_inv[k], calibrated_ori[k, j, :, :]) for k, j in itertools.product(range(seq_len), range(6))])
54 | norm_ori = norm_ori.reshape(-1, 6, 3, 3)
55 |
56 | #return norm_acc[:,-1,:],ori_quat[:,-1,:],pose_quat
57 | return norm_ori[:,0:5,:],pose
58 |
59 | ##################### synthetic data - ###########################
60 |
61 | if __name__ == '__main__':
62 | trainset = ['AMASS_MIXAMO','AMASS_SSM', 'AMASS_Transition', 'CMU','AMASS_HDM05', 'HEva', 'JointLimit']
63 | for dset in trainset:
64 | dataset = dset
65 | train_path = '/data/Guha/GR/Dataset/{}'.format(dataset)
66 | test_path = '/data/Guha/GR/Dataset/Test/{}'
67 | valid_path = '/data/Guha/GR/Dataset/Validation/{}'
68 |
69 | if(not os.path.exists(train_path)):
70 | os.makedirs(train_path)
71 |
72 |         FolderPath = '/data/Guha/GR/synthetic60FPS/{}'.format(dataset)
73 |         fileList = os.listdir(FolderPath)
74 | train_idx = len(fileList)
75 | valid_idx = len(fileList) * 0.85
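    |         # NOTE: with train_idx = len(fileList), the elif/else branches below never
    |         # fire, so every file of this dataset lands in the training split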
76 |
77 | for idx,file_num in enumerate(fileList):
78 |             path = os.path.join(FolderPath,file_num)
79 | #path = '/data/Guha/GR/synthetic60FPS/H36/S5_Walking.pkl'
80 | with open(path , 'rb') as f:
81 | print (path)
82 | data_dict = pkl.load(f,encoding='latin1')
83 | raw_acc_syn= np.asarray(data_dict['acc'])
84 | raw_ori_syn = np.asarray(data_dict['ori'])
85 | raw_pose_syn = np.asarray(data_dict['poses'])
86 |
87 | if(len(raw_acc_syn) == 0):
88 | continue
89 |
90 | # calibrate data
91 | ori, pose = calibrateRawIMU(raw_acc_syn, raw_ori_syn, raw_pose_syn)
92 | # acc1, ori1, pose1 = calibrateRawIMU1(raw_acc_syn, raw_ori_syn, raw_pose_syn)
93 | # acc2, ori2, pose2 = calibrateRawIMU2(raw_acc_syn, raw_ori_syn, raw_pose_syn)
94 | if (idx <= train_idx):
95 | #path = train_path.format(file_num.split('.pkl')[0])
96 | savepath = train_path+'/'+file_num.split('.pkl')[0]
97 | # train_meta.write(str(len(acc))+' '+file_num.split('.pkl')[0]+'\n')
98 | # train_size += len(acc)
99 |             elif (idx >= train_idx and idx < valid_idx):
100 |                 savepath = valid_path.format(file_num.split('.pkl')[0])
101 |                 # valid_meta.write(str(len(acc))+' '+file_num.split('.pkl')[0]+'\n')
102 |                 # valid_size += len(acc)
103 |             else:
104 |                 savepath = test_path.format(file_num.split('.pkl')[0])
105 |                 # test_meta.write(str(len(acc))+' '+file_num.split('.pkl')[0]+'\n')
106 |                 # test_size += len(acc)
107 |             np.savez_compressed(savepath,ori = ori,pose = pose)
108 |             print ('save path-- ', savepath)
109 | # break
110 | # train_meta.write(str(train_size)+' total_frames'+'\n')
111 | # valid_meta.write(str(valid_size)+' total_frames'+ '\n')
112 | # test_meta.write(str(test_size)+ ' total_frames'+'\n')
113 | # train_meta.close()
114 | # valid_meta.close()
115 | # test_meta.close()
116 |
117 |
118 |
119 |
120 |
121 |
122 |
--------------------------------------------------------------------------------
/Root/parseJson.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | def parseJson(filepath):
4 | import json
5 | SMPL_SENSOR = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
6 |     sensorDict = {'L_Shoulder':30,'R_Shoulder':25,'L_Knee':10,'R_Knee':12,'Head':20,'Pelvis':15}
7 | sensorIds = ['30','25','10','12','20','15']
8 | #sensorIds = ['25', '30', '12', '11', '20', '16']
9 | rawOri = []
10 | with open(filepath, "r") as read_file:
11 | lines = read_file.readlines()
12 | for i,line in enumerate(lines):
13 | print(i)
14 | jsondict = json.loads(line)
15 | imus = jsondict['data']['imu']
16 |
17 | # loop through 6 sensor ids #
18 | ori_t=[]
19 | for id in sensorIds:
20 | if(id in imus):
21 | ori_t.append(imus[id]['orientation'])
22 | else:
23 | ori_t.append([0,0,0,0])
24 |
25 | rawOri.append(ori_t)
26 | rawOri = np.array(rawOri)
27 |
28 | ## impute missing sensor values with previous timestep ##
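    |     ## a missing reading was stored as [0,0,0,0]; ~np.any(rawOri, axis=2) flags such
    |     ## all-zero quaternions, and each one is copied from the previous timestep; the
    |     ## loop repeats so runs of consecutive gaps are filled as well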
29 | while(np.any(~np.any(rawOri,axis=2))):
30 | m_rows,m_cols = np.where(~np.any(rawOri,axis=2))
31 | rawOri[m_rows,m_cols] = rawOri[(m_rows-1),m_cols]
32 | return rawOri
33 |
34 |
35 | def calibrateRawIMU(ori):
36 | import quaternion
37 | import itertools
38 | import myUtil
39 |     ######### ********** the order of IMUs is: [left_lower_wrist, right_lower_wrist, left_lower_leg, right_lower_leg, head, back]
40 | SMPL_SENSOR = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
41 |
42 | ############### calculation in Rotation Matrix #########################
43 | seq_len = len(ori)
44 |     # head sensor reading; frame 100 is used as the calibration reference
45 | head_quat = quaternion.from_float_array( ori[100, 4, :])
46 | Q = np.quaternion(0.5, 0.5, 0.5, 0.5)
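    |     # (0.5, 0.5, 0.5, 0.5) is the quaternion form of the sensor-to-bone offset
    |     # [[0,0,1],[1,0,0],[0,1,0]] used in the matrix-based calibration scripts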
47 | # calib: R(T_I) which is constant over the frames
48 | calib = np.linalg.inv(quaternion.as_rotation_matrix( np.dot(head_quat, Q)))
49 |
50 |     # bone2sensor: R(B_S) is computed once per sensor and stays constant over all frames; it is reused below
51 | bone2sensor = {}
52 | for i in range(len(SMPL_SENSOR)):
53 | sensorid = SMPL_SENSOR[i]
54 | qbone = quaternion.as_rotation_matrix(quaternion.from_rotation_vector(myUtil.getGlobalBoneOri(sensorid)))
55 | qsensor = np.dot(calib, quaternion.as_rotation_matrix(quaternion.from_float_array(ori[100, i, :])))
56 | boneQuat = np.dot(np.linalg.inv(qsensor), qbone)
57 | bone2sensor[i] = boneQuat
58 |
59 | # calibrated_ori: R(T_B) calculated as calib * sensor data(changes every frame) * bone2sensor(corresponding sensor)
60 | calibrated_ori = np.asarray(
61 | [np.linalg.multi_dot([calib, quaternion.as_rotation_matrix(quaternion.from_float_array(ori[k, j, :] )), bone2sensor[j]]) for
62 | k, j in itertools.product(range(seq_len), range(6))]
63 | )
64 | calibrated_ori = calibrated_ori.reshape(-1, 6, 3,3)
65 | root_inv = np.linalg.inv(calibrated_ori[:, 5, :,:])
66 | # root_inv = np.transpose(calibrated_ori[:, 5], [0, 2, 1])
67 | norm_ori = np.asarray(
68 | [np.matmul(root_inv[k], calibrated_ori[k, j, :,:]) for k, j in itertools.product(range(seq_len), range(6))])
69 | norm_ori = norm_ori.reshape(-1, 6, 3,3)
70 |
71 | return norm_ori[:,0:5,:,:]
72 |
73 | if __name__ == '__main__':
74 | filepath = "/data/Guha/GR/Dataset/DFKI/walking_1.json"
75 | ori = parseJson(filepath)
76 | #calibrated = calibrateRawIMU(ori)
77 | np.savez_compressed( "/data/Guha/GR/Dataset/DFKI/walking_1", ori=ori)
78 |
--------------------------------------------------------------------------------
/Root/plotGraph.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 |
4 | ############# file paths of loss files #############
5 | file_path1 = '/data/Guha/GR/Output/loss/loss_dipf9.txt'
6 | file_path2 = '/data/Guha/GR/Output/loss/loss_dipsynthetic.txt'
7 | file_path3 = '/data/Guha/GR/Output/loss/loss_dip_birnn.txt'
8 | file_path4 = '/data/Guha/GR/Output/loss/loss_dip_mlp.txt'
9 |
10 | loss1,loss2, loss3, loss4 = [],[],[],[]
11 | with open(file_path1, 'r') as file1, open(file_path2, 'r') as file2, open(file_path3, 'r') as file3, open(file_path4, 'r') as file4:
12 |     for line1,line2,line3,line4 in zip(file1.readlines(),file2.readlines(),file3.readlines(),file4.readlines()):
13 | loss1.append(float(line1))
14 | loss2.append(float(line2))
15 | loss3.append(float(line3))
16 | loss4.append(float(line4))
17 |
18 | x_ind = np.arange(len(loss1))
19 | fig, ax = plt.subplots(figsize=(10,7))
20 |
21 | ################### to plot bar plots ####################
22 | # width = 0.18
23 | # r1 = np.arange(len(loss1))
24 | # r2 = [x + width for x in r1]
25 | # r3 = [x + width for x in r2]
26 | # r4 = [x + width for x in r3]
27 | # rects1 = ax.bar(r1, loss1, width, edgecolor='white',label='model:BiRNN',align='edge')
28 | # rects2 = ax.bar(r2, loss2, width, edgecolor='white', label='model: MLP',align='edge')
29 | # rects1 = ax.bar(r3, loss3, width, edgecolor='white',label='DIP:Synthetic',align='edge')
30 | # rects2 = ax.bar(r4, loss4, width, edgecolor='white',label='DIP:Fine-tuned',align='edge')
31 |
32 | ############# to plot line graph ###############
33 | ax.plot(np.arange(len(loss1)),loss1,'b8:',label='DIP:synthetic')
34 | ax.plot(np.arange(len(loss2)),loss2,'Pg-',label='DIP:finetuned')
35 | ax.plot(np.arange(len(loss1)),loss3,'rs--',label='Our Short BiLSTM',dashes=[2,2])
36 | ax.plot(np.arange(len(loss2)),loss4,'>c-.',label='Our MLP')
37 |
38 | # Add some text for labels, title and custom x-axis tick labels, etc.
39 | ax.set_ylabel('mean joint angle error (euler in degrees)')
40 | ax.set_xlabel('test activities')
41 | ax.set_title('State-of-the-art models vs our trained models')
42 | ax.set_xticks(x_ind)
43 | #ax.set_xticklabels(('G1', 'G2', 'G3', 'G4', 'G5'))
44 | ax.legend()
45 | plt.show()
46 |
47 |
--------------------------------------------------------------------------------
/Root/prepareData.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import pickle as pkl
4 | import quaternion
5 | import myUtil
6 | import itertools
7 | import h5py
8 |
9 | def getGlobalBoneOriFromPose(pose,boneName):
10 | pose = pose.reshape(1,15*3*3)
11 | fullPose = myUtil.smpl_reduced_to_full(pose).reshape(24,3,3)
12 | return fullPose[myUtil.boneSMPLDict[boneName]]
13 |
14 | def calibrateRawIMU(acc,ori,pose):
15 |
16 |     ######### ********** the order of IMUs is: [left_lower_wrist, right_lower_wrist, left_lower_leg, right_lower_leg, head, back]
17 | SMPL_SENSOR = ['L_Shoulder', 'R_Shoulder', 'L_Knee', 'R_Knee', 'Head', 'Pelvis']
18 |
19 | # safety check if any frame has NAN Values
20 | frames2del = np.unique(np.where(np.isnan(ori) == True)[0])
21 | ori = np.delete(ori, frames2del, 0)
22 | acc = np.delete(acc, frames2del, 0)
23 | pose = np.delete(pose, frames2del, 0)
24 | pose = pose.reshape(-1,15,3,3)
25 |
26 | ############### calculation in Rotation Matrix #########################
27 | seq_len = len(ori)
28 |     # head sensor reading; frame index 1 is used as the calibration reference
29 | head_quat = ori[1, 4, :, :]
30 | Q = quaternion.as_rotation_matrix(np.quaternion(0.5, 0.5, 0.5, 0.5))
31 | #Q = np.array([[0,0,1],[0,1,0],[-1,0,0]])
32 | # calib: R(T_I) which is constant over the frames
33 | calib = np.linalg.inv(np.dot(head_quat, Q))
34 | #calib = np.linalg.inv(head_quat)
35 |
36 |     # bone2sensor: R(B_S) is computed once per sensor and stays constant over all frames; it is reused below
37 | bone2sensor = {}
38 | for i in range(len(SMPL_SENSOR)):
39 | sensorid = SMPL_SENSOR[i]
40 | #qbone = quaternion.as_rotation_matrix(quaternion.from_rotation_vector(myUtil.getGlobalBoneOri(sensorid)))
41 | qbone = myUtil.getGlobalBoneOriFromPose(pose[0],sensorid)
42 | qsensor = np.dot(calib, ori[1, i, :, :])
43 | boneQuat = np.dot(np.linalg.inv(qsensor), qbone)
44 | bone2sensor[i] = boneQuat
45 |
46 | # calibrated_ori: R(T_B) calculated as calib * sensor data(changes every frame) * bone2sensor(corresponding sensor)
47 | calibrated_ori = np.asarray(
48 | [np.linalg.multi_dot([calib, ori[k, j, :, :], bone2sensor[j]]) for
49 | k, j in itertools.product(range(seq_len), range(6))]
50 | )
51 | calibrated_ori = calibrated_ori.reshape(-1, 6, 3, 3)
52 | root_inv = np.linalg.inv(calibrated_ori[:, 5, :, :])
53 | # root_inv = np.transpose(calibrated_ori[:, 5], [0, 2, 1])
54 | norm_ori = np.asarray(
55 | [np.matmul(root_inv[k], calibrated_ori[k, j, :, :]) for k, j in itertools.product(range(seq_len), range(6))])
56 | norm_ori = norm_ori.reshape(-1, 6, 3, 3)
57 |
58 | # calibration of acceleration
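    |         # rotate each sensor-local acceleration into the inertial frame (ori),
    |         # apply the head-derived calibration, subtract the root (index 5)
    |         # acceleration, and express the residual in the root frame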
59 | acc_1 = np.asarray(
60 | [np.dot(ori[k, j, :], acc[k, j, :]) for k, j in
61 | itertools.product(range(seq_len), range(6))])
62 | acc_1 = acc_1.reshape(-1, 6, 3)
63 | calib_acc = np.asarray([np.dot(calib, acc_1[k, j, :]) for k, j in
64 | itertools.product(range(seq_len), range(6))])
65 | calib_acc = calib_acc.reshape(-1, 6, 3)
66 | norm_acc = np.asarray(
67 | [np.dot(root_inv[k], (calib_acc[k, j, :] - calib_acc[k, 5, :])) for k, j in
68 | itertools.product(range(seq_len), range(6))])
69 | norm_acc = norm_acc.reshape(-1, 6, 3)
70 |
71 | # calibration of pose parameters
72 | # ---------------------------------------------
73 |
74 | #return norm_acc[:,-1,:],ori_quat[:,-1,:],pose_quat
75 | return norm_acc[:,0:5,:],norm_ori[:,0:5,:],pose
76 |
77 | ##################### synthetic data - ###########################
78 |
79 | if __name__ == '__main__':
80 | dataset = 'AMASS_HDM05'
81 | train_path = '/data/Guha/GR/Dataset/{}'.format(dataset)
82 | test_path = '/data/Guha/GR/Dataset/Test/{}'
83 | valid_path = '/data/Guha/GR/Dataset/Validation/{}'
84 |
85 | #os.makedirs(train_path)
86 |
87 |     FolderPath = '/data/Guha/GR/synthetic60FPS/{}'.format(dataset)
88 |     fileList = os.listdir(FolderPath)
89 | train_idx = len(fileList)
90 | valid_idx = len(fileList) * 0.85
91 |
92 | # train_meta = open(train_path.format('meta.txt'),'wb')
93 | # test_meta = open(test_path.format('meta.txt'), 'wb')
94 | # valid_meta = open(valid_path.format('meta.txt'), 'wb')
95 | # train_size = 0
96 | # valid_size = 0
97 | # test_size = 0
98 |
99 | for idx,file_num in enumerate(fileList):
100 |         path = os.path.join(FolderPath,file_num)
101 | #path = '/data/Guha/GR/synthetic60FPS/H36/S5_Walking.pkl'
102 | with open(path , 'rb') as f:
103 | print (path)
104 | data_dict = pkl.load(f,encoding='latin1')
105 | raw_acc_syn= np.asarray(data_dict['acc'])
106 | raw_ori_syn = np.asarray(data_dict['ori'])
107 | raw_pose_syn = np.asarray(data_dict['poses'])
108 |
109 | if(len(raw_acc_syn) == 0):
110 | continue
111 |
112 | # calibrate data
113 | acc, ori, pose = calibrateRawIMU(raw_acc_syn, raw_ori_syn, raw_pose_syn)
114 | # acc1, ori1, pose1 = calibrateRawIMU1(raw_acc_syn, raw_ori_syn, raw_pose_syn)
115 | # acc2, ori2, pose2 = calibrateRawIMU2(raw_acc_syn, raw_ori_syn, raw_pose_syn)
116 | if (idx <= train_idx):
117 | #path = train_path.format(file_num.split('.pkl')[0])
118 | savepath = train_path+'/'+file_num.split('.pkl')[0]
119 | # train_meta.write(str(len(acc))+' '+file_num.split('.pkl')[0]+'\n')
120 | # train_size += len(acc)
121 |             elif (idx >= train_idx and idx < valid_idx):
122 |                 savepath = valid_path.format(file_num.split('.pkl')[0])
123 |                 # valid_meta.write(str(len(acc))+' '+file_num.split('.pkl')[0]+'\n')
124 |                 # valid_size += len(acc)
125 |             else:
126 |                 savepath = test_path.format(file_num.split('.pkl')[0])
127 |                 # test_meta.write(str(len(acc))+' '+file_num.split('.pkl')[0]+'\n')
128 |                 # test_size += len(acc)
129 |             np.savez_compressed(savepath,acc = acc,ori = ori,pose = pose)
130 |             print ('save path-- ', savepath)
131 | # break
132 | # train_meta.write(str(train_size)+' total_frames'+'\n')
133 | # valid_meta.write(str(valid_size)+' total_frames'+ '\n')
134 | # test_meta.write(str(test_size)+ ' total_frames'+'\n')
135 | # train_meta.close()
136 | # valid_meta.close()
137 | # test_meta.close()
138 |
139 |
140 |
141 |
142 |
143 |
144 |
--------------------------------------------------------------------------------
/Root/pytorch_modelsummary.py:
--------------------------------------------------------------------------------
1 | '''
2 | Generates a summary of a model's layers and dimensionality
3 | '''
4 |
5 | import torch
6 | from torch import nn
7 | from torch.autograd import Variable
8 | import numpy as np
9 | import pandas as pd
10 |
11 | class ModelSummary(object):
12 |
13 | def __init__(self, model, input_size=(1,1,256,256)):
14 | '''
15 | Generates summaries of model layers and dimensions.
16 | '''
17 | self.model = model
18 | self.input_size = input_size
19 |
20 | self.summarize()
21 | print(self.summary)
22 |
23 | def get_variable_sizes(self):
24 | '''Run sample input through each layer to get output sizes'''
25 |         input_ = torch.zeros(*self.input_size)  # a plain tensor tracks no grad; Variable(..., volatile=True) is deprecated
26 | mods = list(self.model.modules())
27 | in_sizes = []
28 | out_sizes = []
29 | for i in range(1, len(mods)):
30 | m = mods[i]
31 | out = m(input_)
32 | in_sizes.append(np.array(input_.size()))
33 | out_sizes.append(np.array(out.size()))
34 | input_ = out
35 |
36 | self.in_sizes = in_sizes
37 | self.out_sizes = out_sizes
38 | return
39 |
40 | def get_layer_names(self):
41 | '''Collect Layer Names'''
42 | mods = list(self.model.named_modules())
43 | names = []
44 | layers = []
45 | for m in mods[1:]:
46 | names += [m[0]]
47 | layers += [str(m[1].__class__)]
48 |
49 | layer_types = [x.split('.')[-1][:-2] for x in layers]
50 |
51 | self.layer_names = names
52 | self.layer_types = layer_types
53 | return
54 |
55 | def get_parameter_sizes(self):
56 | '''Get sizes of all parameters in `model`'''
57 | mods = list(self.model.modules())
58 | sizes = []
59 |
60 | for i in range(1,len(mods)):
61 | m = mods[i]
62 | p = list(m.parameters())
63 | modsz = []
64 | for j in range(len(p)):
65 | modsz.append(np.array(p[j].size()))
66 | sizes.append(modsz)
67 |
68 | self.param_sizes = sizes
69 | return
70 |
71 | def get_parameter_nums(self):
72 | '''Get number of parameters in each layer'''
73 | param_nums = []
74 | for mod in self.param_sizes:
75 | all_params = 0
76 | for p in mod:
77 | all_params += np.prod(p)
78 | param_nums.append(all_params)
79 | self.param_nums = param_nums
80 | return
81 |
82 | def make_summary(self):
83 | '''
84 | Makes a summary listing with:
85 |
86 | Layer Name, Layer Type, Input Size, Output Size, Number of Parameters
87 | '''
88 |
89 | df = pd.DataFrame( np.zeros( (len(self.layer_names), 5) ) )
90 | df.columns = ['Name', 'Type', 'InSz', 'OutSz', 'Params']
91 |
92 | df['Name'] = self.layer_names
93 | df['Type'] = self.layer_types
94 | df['InSz'] = self.in_sizes
95 | df['OutSz'] = self.out_sizes
96 | df['Params'] = self.param_nums
97 |
98 | self.summary = df
99 | return
100 |
101 | def summarize(self):
102 | self.get_variable_sizes()
103 | self.get_layer_names()
104 | self.get_parameter_sizes()
105 | self.get_parameter_nums()
106 | self.make_summary()
107 |
108 | return
109 |
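    | if __name__ == '__main__':
    |     # minimal usage sketch; the tiny sequential CNN and input size are
    |     # illustrative only (any purely sequential module works)
    |     net = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Conv2d(8, 4, 3))
    |     ModelSummary(net, input_size=(1, 1, 32, 32))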
--------------------------------------------------------------------------------
/Root/pytorch_modelsummary.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/Root/pytorch_modelsummary.pyc
--------------------------------------------------------------------------------
/Root/saveVideo.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import os
3 |
4 | gt_arr = []
5 | pred_arr = []
6 |
7 | ############ size of video ############
8 | height, width, layers = (640, 480, 3)
9 | size = (640, 480)
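    | # NOTE: cv2.VideoWriter silently skips frames whose size differs from `size`,
    | # so the PNGs read below are assumed to be 640x480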
10 |
11 | ############### read each frame in loop and append in list ###########
12 | files = os.listdir('/data/Guha/GR/Output/Prediction/')
13 | for i in range(len(files)):
14 | #file = glob.('/data/Guha/Tour20/frames/BF6.mp4/frame{}.jpg'.format(i))
15 | gt_img = cv2.imread('/data/Guha/GR/Output/GT/{}.png'.format(i))
16 | pred_img = cv2.imread('/data/Guha/GR/Output/Prediction/{}.png'.format(i))
17 |
18 | gt_arr.append(gt_img)
19 | pred_arr.append(pred_img)
20 |
21 | ############ create videowriter object ##########
22 | #out = cv2.VideoWriter('AW7.mp4',cv2.VideoWriter_fourcc(*'DIVX'), 15, size))
23 | fourcc = cv2.VideoWriter_fourcc(*'mp4v')
24 | gt_out = cv2.VideoWriter('/data/Guha/GR/Output/TestSet/13/'+'gt.mp4',fourcc, 30.0, size)
25 | pred_out = cv2.VideoWriter('/data/Guha/GR/Output/TestSet/13/'+'pred.mp4',fourcc, 30.0, size)
26 |
27 | #################### create the video and save ###########
28 | for i in range(len(gt_arr)):
29 | gt_out.write(gt_arr[i])
30 | print ('gt finished')
31 | for i in range(len(pred_arr)):
32 | pred_out.write(pred_arr[i])
33 | print ('pred finished')
34 | gt_out.release()
35 | pred_out.release()
36 |
--------------------------------------------------------------------------------
/Root/test_MLP.py:
--------------------------------------------------------------------------------
1 | from Network import InverseKinematic
2 | import numpy as np
3 | from IMUDataset import IMUDataset
4 | import torch
5 | import Config as cfg
6 | import itertools
7 | import transforms3d
8 |
9 | class TestEngine:
10 | def __init__(self):
11 | self.testSet = ['s_11']
12 | self.datapath = '/data/Guha/GR/Dataset.old/'
13 | self.dataset = IMUDataset(self.datapath,self.testSet)
14 | self.use_cuda =True
15 | self.testModel = InverseKinematic().cuda()
16 |
17 | modelPath = '/data/Guha/GR/model/14/validation.pth.tar'
18 | self.base = '/data/Guha/GR/Output/TestSet/14/'
19 | with open(modelPath , 'rb') as tar:
20 | checkpoint = torch.load(tar)
21 | model_weights = checkpoint['state_dict']
22 | self.testModel.load_state_dict(model_weights)
23 |
24 | def test(self):
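    |         # evaluation protocol: run each test file through the MLP, renormalize
    |         # the predicted quaternions, convert prediction and target to Euler
    |         # degrees, and log the mean per-joint per-frame error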
25 | valid_meanloss = []
26 | valid_maxloss = []
27 | # data_to_plot = []
28 | self.testModel.eval()
29 | loss_file = open('/data/Guha/GR/Output/loss_14.txt', 'w')
30 | for d in self.testSet:
31 | self.dataset.loadfiles(self.datapath, [d])
32 | dset_loss = []
33 | for file in self.dataset.files:
34 | #file = '/data/Guha/GR/Dataset.old/s_11/S11_WalkTogether.npz'
35 | self.dataset.readfile(file)
36 | input = torch.FloatTensor(self.dataset.input)
37 | seq_len = len(input)
38 | target = torch.FloatTensor(self.dataset.target)
39 | target = target.reshape(seq_len,15,4)
40 | if self.use_cuda:
41 | input = input.cuda()
42 |
43 | predictions = self.testModel(input)
44 |
45 |                 ################## Renormalize prediction: divide each quaternion by its own norm
46 |                 predictions = predictions.detach().reshape_as(target).cpu()
47 |                 predictions = np.asarray(predictions)
48 |                 norms = np.linalg.norm(predictions, axis=2)
49 |                 predictions = np.asarray(
50 |                     [predictions[k, j, :] / norms[k, j] for k, j in itertools.product(range(seq_len), range(15))])
51 | predictions = predictions.reshape(seq_len, 15, 4)
52 |
53 | ################### convert to euler
54 | target_euler = np.asarray([transforms3d.euler.quat2euler(target[k, j]) for k, j in
55 | itertools.product(range(seq_len), range(15))])
56 | target_euler = (target_euler * 180) / np.pi
57 | pred_euler = np.asarray([transforms3d.euler.quat2euler(predictions[k, j]) for k, j in
58 | itertools.product(range(seq_len), range(15))])
59 | pred_euler = (pred_euler * 180) / np.pi
60 |
61 |
62 | loss = self.loss_impl(pred_euler.reshape(-1,15,3), target_euler.reshape(-1,15,3))
63 | loss_file.write('{}\n'.format(loss))
64 | # mean_loss = torch.mean(loss)
65 | # max_loss = torch.max(loss)
66 | # dset_loss.extend(loss.numpy())
67 | # valid_meanloss.append(mean_loss)
68 | # valid_maxloss.append(max_loss)
69 | # print('mean loss {}, max loss {}, \n'.format(mean_loss, max_loss))
70 | # np.savez_compressed(self.base + file.split('/')[-1], target=target.cpu().numpy(), predictions=prediction)
71 |
72 | loss_file.close()
73 |
74 | def loss_impl(self, predicted, expected):
75 | error = predicted - expected
76 | error_norm = np.linalg.norm(error, axis=2)
77 | error_per_joint = np.mean(error_norm, axis=1)
78 | error_per_frame_per_joint = np.mean(error_per_joint, axis=0)
79 | return error_per_frame_per_joint
80 |
81 | if __name__ == '__main__':
82 | testEngine = TestEngine()
83 | testEngine.test()
84 |
85 |
86 |
87 |
--------------------------------------------------------------------------------
/Root/test_attn.py:
--------------------------------------------------------------------------------
1 | from Attention import RawDataset
2 | from Attention import Encoder
3 | from Attention import Decoder
4 | import torch
5 | import Config as cfg
6 | import matplotlib.pyplot as plt
7 | import numpy as np
8 | import myUtil
9 | #import quaternion
10 | import itertools
11 |
12 | class TestEngine:
13 | def __init__(self):
14 |
15 | self.datapath = '/data/Guha/GR/synthetic60FPS/'
16 | self.dataset = RawDataset()
17 | #self.modelPath = '/data/Guha/GR/model/18/'
18 | self.encoder = Encoder(input_dim=24,enc_units=256).cuda()
19 | self.decoder = Decoder(output_dim=60, dec_units=256, enc_units=256).cuda()
20 | baseModelPath = '/data/Guha/GR/model/attn_greedy/epoch_6.pth.tar'
21 | self.base = '/data/Guha/GR/Output/TestSet/attn/'
22 |
23 | with open(baseModelPath, 'rb') as tar:
24 | checkpoint = torch.load(tar)
25 | self.encoder.load_state_dict(checkpoint['encoder_dict'])
26 | self.decoder.load_state_dict(checkpoint['decoder_dict'])
27 |
28 | def test(self):
29 | try:
30 | dset = ['AMASS_ACCAD', 'AMASS_BioMotion', 'AMASS_CMU_Kitchen', 'CMU', 'HEva', 'H36']
31 |
32 | ####################### Validation #######################
33 | self.dataset.loadfiles(self.datapath, ['H36'])
34 | valid_loss = []
35 | for file in self.dataset.files:
36 | # Pick a random sequence
37 | # file = '/data/Guha/GR/Dataset/DFKI/walking_1.npz'
38 | # file = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/s_01/01.pkl'
39 |
40 | ################# test on synthetic data #########################
41 | chunk_in, chunk_target = self.dataset.readfile(file,cfg.seq_len)
42 |
43 | ################# test on DIP_IMU data #########################
44 | #input_batches, output_batches = self.dataset.readDIPfile(file,cfg.seq_len)
45 |
46 | ################# test on DFKI data #########################
47 | #input_batches = self.dataset.readDFKIfile(file, cfg.seq_len)
48 |
49 | self.encoder.eval()
50 | self.decoder.eval()
51 | if (len(chunk_in) == 0):
52 | continue
53 | print('chunk list size', len(chunk_in))
54 | # pass all the chunks through encoder and accumulate c_out and c_hidden in a list
55 | enc_output = []
56 | enc_hidden = []
57 | for c_in in chunk_in:
58 | # chunk_in: (batch_sz:10, seq_len: 200, in_dim: 20)
59 | # chunk_out: (batch_sz:10, seq_len: 200, out_dim: 60)
60 | c_in = c_in.unsqueeze(0)
61 | c_enc_out, c_enc_hidden = self.encoder(c_in)
62 | enc_output.append(c_enc_out)
63 | enc_hidden.append(c_enc_hidden)
64 |
65 | # decoder input for the first timestep
66 | batch_sz = 1
67 | tpose = np.array([[1, 0, 0, 0] * 15] * batch_sz)
68 | dec_input = torch.FloatTensor(tpose.reshape(batch_sz, 1, 60)).cuda()
69 |
70 | # ########### for start with Ipose ###################
71 | # SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]
72 | # ipose = myUtil.Ipose.reshape(-1, 24, 3)[:, SMPL_MAJOR_JOINTS, :]
73 | # qs = quaternion.from_rotation_vector(ipose)
74 | # qs = quaternion.as_float_array(qs)
75 | # dec_input = torch.FloatTensor(qs.reshape(1, 1, 60)).cuda()
76 |             dec_input = chunk_target[0][0, :].reshape(1,1,60)  # overrides the T-pose/I-pose init above with the ground-truth first frame
77 |
78 | # pass all chunks to the decoder and predict for each timestep for all chunks sequentially
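    |             # greedy autoregressive decoding: each timestep's prediction is fed
    |             # back as the decoder input for the next timestep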
79 | predictions = []
80 | for c_enc_out, c_enc_hidden, c_target in zip(enc_output, enc_hidden, chunk_target):
81 | dec_hidden = c_enc_hidden
82 | loss = 0.0
83 | for t in range(c_target.shape[0]):
84 | pred_t, dec_hidden = self.decoder(dec_input, dec_hidden, c_enc_out)
85 | dec_input = pred_t.unsqueeze(1)
86 | #loss += self._loss_impl(c_target[t], pred_t)
87 | predictions.append(pred_t.detach().cpu().numpy())
88 |
89 |
90 | target = torch.cat(chunk_target).detach().cpu().numpy()
91 | predictions = np.asarray(predictions).reshape(-1,15,4)
92 | norms = np.linalg.norm(predictions, axis=2)
93 | predictions = np.asarray(
94 |                 [predictions[k, j, :] / norms[k, j] for k, j in itertools.product(range(predictions.shape[0]), range(15))])
95 | np.savez_compressed(self.base + file.split('/')[-1], target=target, predictions=predictions)
96 | #np.savez_compressed(self.base + file.split('/')[-1], predictions=pred)
97 | #break
98 |
99 | except KeyboardInterrupt:
100 | print('Testing aborted.')
101 |
102 | def _loss_impl(self, predicted, expected):
103 | L1 = predicted - expected
104 | return torch.mean((torch.norm(L1, 2, 1)))
105 |
106 |
107 |
108 | if __name__ == '__main__':
109 | trainingEngine = TestEngine()
110 | trainingEngine.test()
--------------------------------------------------------------------------------
/Root/train_BiLSTM.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from Network import BiLSTM
3 | import torch.optim as optim
4 | from IMUDataset import IMUDataset
5 | from time import time
6 | import torch
7 | import Config as cfg
8 | import torch.nn as nn
9 |
10 | class TrainingEngine:
11 | def __init__(self):
12 | self.dataPath = '/data/Guha/GR/Dataset.old/'
13 | self.trainset = ['s1_s9']
14 | self.testset = ['s_11']
15 | self.train_dataset = IMUDataset(self.dataPath, self.trainset)
16 | self.valid_dataset = IMUDataset(self.dataPath, self.testset)
17 | self.modelPath = '/data/Guha/GR/model/sequential/'
18 | self.model = BiLSTM().cuda()
19 | #self.mseloss = nn.MSELoss(reduction='sum')
20 |
21 | def train(self,n_epochs):
22 |
23 | f = open(self.modelPath+'model_details','w')
24 | f.write(str(self.model))
25 | f.write('\n')
26 |
27 | np.random.seed(1234)
28 | lr = 0.001
29 | gradient_clip = 0.1
30 | optimizer = optim.Adam(self.model.parameters(),lr=lr)
31 |
32 | print('Training for %d epochs' % (n_epochs))
33 | no_of_trainbatch = int(len(self.train_dataset.files)/ cfg.batch_len)
34 | no_of_validbatch = int(len(self.valid_dataset.files)/ cfg.batch_len)
35 | print('batch size--> %d, seq len %d, no of batches--> %d' % (cfg.batch_len,cfg.seq_len, no_of_trainbatch))
36 | f.write('batch size--> %d, seq len %d, no of batches--> %d \n' % (cfg.batch_len,cfg.seq_len, no_of_trainbatch))
37 |
38 | min_batch_loss = 0.0
39 | min_valid_loss = 0.0
40 |
41 | try:
42 | ################ epoch loop ###################
43 | epoch_loss = {'train': [], 'validation': []}
44 | for epoch in range(n_epochs):
45 | train_loss = []
46 | start_time = time()
47 | ####################### training #######################
48 | self.train_dataset.loadfiles( self.dataPath, self.trainset)
49 | self.model.train()
50 | while (len(self.train_dataset.files) > 0):
51 | self.train_dataset.prepareBatchOfMotion(10)
52 | inputs = self.train_dataset.input
53 | outputs = self.train_dataset.target
54 | ##################### divide the data into chunk of seq len
55 | chunk_in = list(torch.split(inputs,cfg.seq_len,dim=1))
56 | chunk_out = list(torch.split(outputs, cfg.seq_len, dim=1))
57 | if (len(chunk_in) == 0):
58 | continue
59 | print('chunk list size',len(chunk_in))
60 | # initialize hidden and cell state at each new batch
61 | hidden = torch.zeros(cfg.n_layers * 2, chunk_in[0].shape[0], cfg.hid_dim, dtype=torch.double).cuda()
62 | cell = torch.zeros(cfg.n_layers * 2, chunk_in[0].shape[0], cfg.hid_dim, dtype=torch.double).cuda()
63 |
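    |                     # truncated BPTT: the LSTM state is carried across consecutive
    |                     # chunks but detached each step, so gradients stay within a chunk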
64 | for c_in,c_out in zip(chunk_in,chunk_out):
65 | optimizer.zero_grad()
66 | c_pred,hidden,cell = self.model(c_in,hidden,cell)
67 |                         hidden = hidden.detach()  # cut the graph between chunks so
68 |                         cell = cell.detach()      # each backward() stays local
69 |                         loss = self._loss_impl(c_pred,c_out)
70 |                         loss.backward()
71 | nn.utils.clip_grad_norm_(self.model.parameters(), gradient_clip)
72 | optimizer.step()
73 | loss.detach()
74 | train_loss.append(loss.item())
75 |
76 | train_loss = torch.mean(torch.FloatTensor(train_loss))
77 | epoch_loss['train'].append(train_loss)
78 | # we save the model after each epoch : epoch_{}.pth.tar
79 | state = {
80 | 'epoch': epoch + 1,
81 | 'state_dict': self.model.state_dict(),
82 | 'epoch_loss': train_loss
83 | }
84 | torch.save(state, self.modelPath + 'epoch_{}.pth.tar'.format(epoch+1))
85 | debug_string = 'epoch No {}, epoch loss {}, Time taken {} \n'.format(
86 | epoch + 1, train_loss, start_time - time()
87 | )
88 | print(debug_string)
89 | f.write(debug_string)
90 | f.write('\n')
91 | ####################### Validation #######################
92 | self.model.eval()
93 | self.valid_dataset.loadfiles( self.dataPath, self.testset)
94 | valid_loss = []
95 | while (len(self.valid_dataset.files) > 0):
96 | self.valid_dataset.prepareBatchOfMotion(10)
97 | inputs = self.valid_dataset.input
98 | outputs = self.valid_dataset.target
99 | # initialize hidden and cell state at each new batch
100 | hidden = torch.zeros(cfg.n_layers * 2, cfg.batch_len, cfg.hid_dim, dtype=torch.double).cuda()
101 | cell = torch.zeros(cfg.n_layers * 2, cfg.batch_len, cfg.hid_dim, dtype=torch.double).cuda()
102 |
103 | # divide the data into chunk of seq len
104 | chunk_in = list(torch.split(inputs, cfg.seq_len, dim=1))
105 | chunk_out = list(torch.split(outputs, cfg.seq_len, dim=1))
106 | if (len(chunk_in) == 0):
107 | continue
108 | for c_in, c_out in zip(chunk_in, chunk_out):
109 |
110 | c_pred,hidden,cell = self.model(c_in,hidden,cell)
111 | hidden = hidden.detach()
112 | cell = cell.detach()
113 | loss = self._loss_impl(c_pred,c_out).detach()
114 | valid_loss.append(loss.item())
115 |
116 | valid_loss = torch.mean(torch.FloatTensor(valid_loss))
117 | epoch_loss['validation'].append(valid_loss)
118 | # we save the model if current validation loss is less than prev : validation.pth.tar
119 | if (min_valid_loss == 0 or valid_loss < min_valid_loss):
120 | min_valid_loss = valid_loss
121 | state = {
122 | 'epoch': epoch + 1,
123 | 'state_dict': self.model.state_dict(),
124 | 'validation_loss': valid_loss
125 | }
126 | torch.save(state, self.modelPath + 'validation.pth.tar')
127 |
128 | # logging to track
129 | debug_string = 'epoch No {}, validation loss {}, Time taken {} \n'.format(
130 | epoch + 1, valid_loss, start_time - time()
131 | )
132 | print(debug_string)
133 | f.write(debug_string)
134 | f.write('\n')
135 |
136 | f.write('{}'.format(epoch_loss))
137 | f.close()
138 | except KeyboardInterrupt:
139 | print('Training aborted.')
140 |
141 | def _loss_impl(self, predicted, expected):
142 | L1 = predicted - expected
143 | return torch.mean((torch.norm(L1, 2, 2)))
144 |
145 |
146 |
147 | if __name__ == '__main__':
148 | trainingEngine = TrainingEngine()
149 | trainingEngine.train(n_epochs=50)
--------------------------------------------------------------------------------
/Root/train_BiRNN.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.optim as optim
4 | import Config as cfg
5 | import numpy as np
6 | from time import time
7 | import random
8 | from IMUDataset import IMUDataset
9 |
10 | class BiRNN(nn.Module):
11 | def __init__(self):
12 | super(BiRNN,self).__init__()
13 |
14 | self.input_dim = cfg.input_dim
15 | self.hid_dim = 256
16 | self.n_layers = cfg.n_layers
17 | self.dropout = cfg.dropout
18 |
19 | self.relu = nn.ReLU()
20 | self.pre_fc = nn.Linear(cfg.input_dim , 256)
21 | self.lstm = nn.LSTM(256, 256, cfg.n_layers, batch_first=True, dropout=cfg.dropout,bidirectional=True)
22 | self.post_fc = nn.Linear(256*2,cfg.output_dim)
23 | self.dropout = nn.Dropout(cfg.dropout)
24 |
25 | def forward(self, X):
26 | # src = [ batch size, seq len, input dim]
27 | batch_size = X.shape[0]
28 | seq_len = X.shape[1]
29 | input_dim = X.shape[2]
30 |
31 | X = X.view(-1,input_dim)
32 | X = self.pre_fc(X)
33 | X = self.relu(X)
34 | X = X.view(batch_size,seq_len, -1)
35 | lstm_out, (_, _) = self.lstm(X)
36 |
37 | """lstm_out : [batch size, src sent len, hid dim * n directions]
38 | hidden : [n layers * n directions, batch size, hid dim]
39 | cell : [n layers * n directions,batch size, hid dim]
40 | lstm_out are always from the top hidden layer """
41 |
42 | fc_out = self.post_fc(lstm_out)
43 | return fc_out
44 |
45 | class TrainingEngine:
46 | def __init__(self):
47 | self.datapath = '/data/Guha/GR/Dataset/'
48 | self.modelPath = '/data/Guha/GR/model/H36_DIP/'
49 | self.model = BiRNN().cuda()
50 | self.trainset = ['H36','DIP_IMU/train']
51 | self.testset = ['DIP_IMU/validation']
52 | # baseModelPath = '/data/Guha/GR/model/13/epoch_5.pth.tar'
53 | #
54 | # with open(baseModelPath, 'rb') as tar:
55 | # checkpoint = torch.load(tar)
56 | # model_weights = checkpoint['state_dict']
57 | # self.model.load_state_dict(model_weights)
58 |
59 | def _loss_impl(self,predicted, expected):
60 | L1 = predicted - expected
61 | return torch.mean((torch.norm(L1, 2, 2)))
62 |
63 | def train(self,n_epochs):
64 | f = open(self.modelPath+'model_details','w')
65 | f.write(str(self.model))
66 | f.write('\n')
67 |
68 | np.random.seed(1234)
69 | lr = 0.001
70 | gradient_clip = 0.1
71 | optimizer = optim.Adam(self.model.parameters(),lr=lr)
72 |
73 | print('Training for %d epochs' % (n_epochs))
74 |
75 | print('batch size--> {}, Seq len--> {}'.format(cfg.batch_len, cfg.seq_len))
76 | f.write('batch size--> {}, Seq len--> {} \n'.format(cfg.batch_len, cfg.seq_len))
77 | epoch_loss = {'train': [], 'validation': []}
78 | self.dataset = IMUDataset(self.datapath, self.trainset)
79 | min_valid_loss = 0.0
80 | for epoch in range(0, n_epochs):
81 | train_loss = []
82 | start_time = time()
83 | self.dataset.loadfiles(self.datapath,self.trainset)
84 | ####################### training #######################
85 | while (len(self.dataset.files) > 0):
86 | # Pick a random chunk from each sequence
87 | self.dataset.createbatch_no_replacement()
88 | inputs = torch.FloatTensor(self.dataset.input).cuda()
89 | outputs = torch.FloatTensor(self.dataset.target).cuda()
90 | chunk_in = list(torch.split(inputs, cfg.seq_len))[:-1]
91 | chunk_out = list(torch.split(outputs, cfg.seq_len))[:-1]
92 | if (len(chunk_in) == 0):
93 | continue
94 | data = [(chunk_in[i], chunk_out[i]) for i in range(len(chunk_in))]
95 | random.shuffle(data)
96 | X, Y = zip(*data)
97 | chunk_in = torch.stack(X, dim=0)
98 | chunk_out = torch.stack(Y, dim=0)
99 | print('no of chunks %d \n' % (len(chunk_in)))
100 | f.write('no of chunks %d \n' % (len(chunk_in)))
101 | self.model.train()
102 | optimizer.zero_grad()
103 | predictions = self.model(chunk_in)
104 |
105 | loss = self._loss_impl(predictions, chunk_out)
106 | loss.backward()
107 | nn.utils.clip_grad_norm_(self.model.parameters(), gradient_clip)
108 | optimizer.step()
109 |                 loss = loss.detach()  # detach() is not in-place; bind the result
110 |
111 | train_loss.append(loss.item())
112 |
113 | train_loss = torch.mean(torch.FloatTensor(train_loss))
114 | # we save the model after each epoch : epoch_{}.pth.tar
115 | state = {
116 | 'epoch': epoch + 1,
117 | 'state_dict': self.model.state_dict(),
118 | 'epoch_loss': train_loss
119 | }
120 | torch.save(state, self.modelPath + 'epoch_{}.pth.tar'.format(epoch + 1))
121 | # logging to track
122 | debug_string = 'epoch No {}, training loss {} , Time taken {} \n'.format(
123 |                 epoch + 1, train_loss, time() - start_time
124 | )
125 | print(debug_string)
126 | f.write(debug_string)
127 | f.write('\n')
128 | epoch_loss['train'].append(train_loss)
129 | ####################### Validation #######################
130 | valid_loss = []
131 | self.model.eval()
132 | self.dataset.loadfiles(self.datapath, self.testset)
133 | for file in self.dataset.files:
134 | self.dataset.readfile(file)
135 | input = torch.FloatTensor(self.dataset.input).unsqueeze(0).cuda()
136 | target = torch.FloatTensor(self.dataset.target).unsqueeze(0).cuda()
137 |
138 | output = self.model(input)
139 | loss = self._loss_impl(output, target)
140 |                 valid_loss.append(loss.item())
141 | valid_loss = torch.mean(torch.FloatTensor(valid_loss))
142 | # we save the model if current validation loss is less than prev : validation.pth.tar
143 | if (min_valid_loss == 0 or valid_loss < min_valid_loss):
144 | min_valid_loss = valid_loss
145 | state = {
146 | 'epoch': epoch + 1,
147 | 'state_dict': self.model.state_dict(),
148 | 'validation_loss': valid_loss
149 | }
150 | torch.save(state, self.modelPath + 'validation.pth.tar')
151 |
152 | # logging to track
153 | debug_string = 'epoch No {}, valid loss {} ,Time taken {} \n'.format(
154 |                 epoch + 1, valid_loss, time() - start_time
155 | )
156 | print(debug_string)
157 | f.write(debug_string)
158 | f.write('\n')
159 | epoch_loss['validation'].append(valid_loss)
160 |
161 | f.write(str(epoch_loss))
162 | f.close()
163 |         plotGraph(epoch_loss, self.modelPath)  # module-level helper defined below
164 |
165 |
166 | def plotGraph(epoch_loss,basepath):
167 | import matplotlib.pyplot as plt
168 | fig = plt.figure(1)
169 | trainloss = epoch_loss['train']
170 | validloss = epoch_loss['validation']
171 |
172 |     plt.plot(np.arange(len(trainloss)), trainloss, 'r--', label='training loss')
173 |     plt.plot(np.arange(len(validloss)), validloss, 'g--', label='validation loss')
174 | plt.legend()
175 |     plt.savefig(basepath + 'loss.png')  # basepath is a directory, so name the file
176 | plt.show()
177 |
178 | if __name__ == "__main__":
179 | trainingEngine = TrainingEngine()
180 | trainingEngine.train(n_epochs=50)
--------------------------------------------------------------------------------
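All of these training scripts checkpoint by saving a dict of 'epoch', 'state_dict' and a loss value, and the commented-out blocks restore a model by loading 'state_dict' back. A minimal sketch of that round trip; nn.Linear and the /tmp path are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder for BiRNN()

# save, mirroring the scripts' checkpoint layout
state = {'epoch': 1, 'state_dict': model.state_dict(), 'epoch_loss': 0.0}
torch.save(state, '/tmp/epoch_1.pth.tar')

# resume, mirroring the commented-out loading blocks
with open('/tmp/epoch_1.pth.tar', 'rb') as tar:
    checkpoint = torch.load(tar)
model.load_state_dict(checkpoint['state_dict'])
start_epoch = checkpoint['epoch']
```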
/Root/train_MLP.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from Network import ForwardKinematic
3 | import torch.optim as optim
4 | from IMUDataset import IMUDataset
5 | from time import time
6 | import torch
7 | import Config as cfg
8 | import torch.nn as nn
9 | import random
10 | import matplotlib.pyplot as plt
11 |
12 | class TrainingEngine:
13 | def __init__(self):
14 | # self.trainset = ['AMASS_ACCAD', 'AMASS_BioMotion', 'AMASS_CMU_Kitchen', 'AMASS_Eyes', 'AMASS_MIXAMO',
15 | # 'AMASS_SSM', 'AMASS_Transition', 'CMU', 'H36']
16 | self.trainset = ['H36','DIP_IMU/train']
17 | #self.testSet = ['AMASS_HDM05', 'HEva', 'JointLimit']
18 | self.testset = ['DIP_IMU/validation']
19 | self.dataPath = '/data/Guha/GR/Dataset/'
20 | self.train_dataset = IMUDataset(self.dataPath, self.trainset)
21 | self.valid_dataset = IMUDataset(self.dataPath, self.testset)
22 | self.modelPath = '/data/Guha/GR/model/forward/'
23 | self.model = ForwardKinematic().cuda()
24 | self.mseloss = nn.MSELoss(reduction='sum')
25 | # baseModelPath = '/data/Guha/GR/model/16/validation.pth.tar'
26 | #
27 | # with open(baseModelPath, 'rb') as tar:
28 | # checkpoint = torch.load(tar)
29 | # model_weights = checkpoint['state_dict']
30 | # self.model.load_state_dict(model_weights)
31 |
32 | def train(self,n_epochs):
33 | f = open(self.modelPath+'model_details','a')
34 | f.write(str(self.model))
35 | f.write('\n')
36 |
37 | np.random.seed(1234)
38 | lr = 0.001
39 | optimizer = optim.Adam(self.model.parameters(),lr=lr)
40 |
41 | print('Training for %d epochs' % (n_epochs))
42 | no_of_trainbatch = int(self.train_dataset.total_frames / 200)
43 |
44 | print('batch size--> {}, no of batches--> {}'.format(200, no_of_trainbatch))
45 | f.write('batch size--> {}, no of batches--> {} \n'.format(200, no_of_trainbatch))
46 |
47 | min_valid_loss = 0.0
48 |
49 | try:
50 | ################ epoch loop ###################
51 | epoch_loss = {'train': [], 'validation': []}
52 | for epoch in range(n_epochs):
53 | train_loss = []
54 | start_time = time()
55 | ####################### training #######################
56 | self.train_dataset.loadfiles(self.dataPath, self.trainset)
57 | self.model.train()
58 | while(len(self.train_dataset.files) > 0):
59 | # Pick a random chunk from each sequence
60 | self.train_dataset.createbatch_no_replacement()
61 |                     outputs = self.train_dataset.input   # note the deliberate swap: the forward-kinematic
62 |                     inputs = self.train_dataset.target   # net maps dataset targets (poses) to dataset inputs (orientations)
63 | data = [(inputs[i], outputs[i]) for i in range(len(inputs))]
64 | random.shuffle(data)
65 | X, Y = zip(*data)
66 | X = torch.FloatTensor(list(X)).cuda()
67 | Y = torch.FloatTensor(list(Y)).cuda()
68 | X_list = list(torch.split(X, 200))
69 | Y_list = list(torch.split(Y, 200))
70 | for x,y in zip(X_list,Y_list):
71 |
72 | optimizer.zero_grad()
73 | predictions = self.model(x)
74 |
75 | loss = self._loss_impl(predictions, y)
76 | loss.backward()
77 | optimizer.step()
78 |                         loss = loss.detach()  # detach() is not in-place; bind the result
79 | train_loss.append(loss.item())
80 |
81 | train_loss = torch.mean(torch.FloatTensor(train_loss))
82 | epoch_loss['train'].append(train_loss)
83 | # we save the model after each epoch : epoch_{}.pth.tar
84 | state = {
85 | 'epoch': epoch + 1,
86 | 'state_dict': self.model.state_dict(),
87 | 'epoch_loss': epoch_loss
88 | }
89 | torch.save(state, self.modelPath + 'epoch_{}.pth.tar'.format(epoch+1))
90 | debug_string = 'epoch No {}, epoch loss {}, Time taken {} \n'.format(
91 |                     epoch + 1, train_loss, time() - start_time
92 | )
93 | print(debug_string)
94 | f.write(debug_string)
95 | f.write('\n')
96 | ####################### Validation #######################
97 | self.model.eval()
98 | self.valid_dataset.loadfiles(self.dataPath, self.testset)
99 | valid_loss = []
100 | for file in self.valid_dataset.files:
101 | self.valid_dataset.readfile(file)
102 | target = torch.FloatTensor(self.valid_dataset.input)
103 | input = torch.FloatTensor(self.valid_dataset.target)
104 | input = input.cuda()
105 |
106 | prediction = self.model(input)
107 | prediction = prediction.detach().reshape_as(target).cpu()
108 | loss = self._loss_impl(prediction,target)
109 | # loss = (prediction - target)
110 | valid_loss.append(loss.item())
111 |
112 | valid_loss = torch.mean(torch.FloatTensor(valid_loss))
113 | epoch_loss['validation'].append(valid_loss)
114 | # we save the model if current validation loss is less than prev : validation.pth.tar
115 | if (min_valid_loss == 0 or valid_loss < min_valid_loss):
116 | min_valid_loss = valid_loss
117 | state = {
118 | 'epoch': epoch + 1,
119 | 'state_dict': self.model.state_dict(),
120 | 'validation_loss': valid_loss
121 | }
122 | torch.save(state, self.modelPath + 'validation.pth.tar')
123 |
124 |
125 | # save box plots of three dataset
126 | # fig = plt.figure('epoch: '+str(epoch))
127 | # # Create an axes instance
128 | # ax = fig.add_subplot(111)
129 | # # Create the boxplot
130 | # ax.boxplot(dset_loss)
131 | # ax.set_xticklabels(self.testSet)
132 | # # Save the figure
133 | # fig.savefig(self.modelPath+'epoch: '+str(epoch)+'.png', bbox_inches='tight')
134 |
135 | # logging to track
136 | debug_string = 'epoch No {}, validation loss {}, Time taken {} \n'.format(
137 |                     epoch + 1, valid_loss, time() - start_time
138 | )
139 | print(debug_string)
140 | f.write(debug_string)
141 | f.write('\n')
142 |
143 | f.write('{}'.format(epoch_loss))
144 | f.close()
145 | except KeyboardInterrupt:
146 | print('Training aborted.')
147 |
148 | def _loss_impl(self, predicted, expected):
149 | L1 = predicted - expected
150 | return torch.mean((torch.norm(L1, 2, 1)))
151 |
152 |
153 | if __name__ == '__main__':
154 | trainingEngine = TrainingEngine()
155 | trainingEngine.train(n_epochs=30)
--------------------------------------------------------------------------------
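_loss_impl in these scripts is the mean Euclidean norm of the prediction error: torch.norm(L1, 2, d) takes the per-step L2 norm over the feature dimension (d=2 for the [batch, seq, features] tensors of the RNN scripts, d=1 for the [batch, features] tensors here), and torch.mean averages over the remaining dimensions. A small worked example:

```python
import torch

pred = torch.tensor([[[1.0, 2.0], [0.0, 0.0]]])  # [batch=1, seq=2, feat=2]
gt   = torch.tensor([[[0.0, 0.0], [3.0, 4.0]]])
per_step = torch.norm(pred - gt, 2, 2)  # ||(1,2)|| = sqrt(5), ||(-3,-4)|| = 5
loss = torch.mean(per_step)             # (sqrt(5) + 5) / 2 ≈ 3.618
print(loss)
```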
/Root/train_correctNetwork.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from Network import BiRNN,CorrectPose
3 | import torch.optim as optim
4 | from IMUDataset import IMUDataset
5 | from time import time
6 | import torch
7 | import Config as cfg
8 | import torch.nn as nn
9 | from eval_model import TestEngine
10 |
11 |
12 | class TrainingEngine:
13 | def __init__(self):
14 | self.train_dataset = IMUDataset(cfg.traindata_path)
15 | self.valid_dataset = IMUDataset(cfg.validdata_path)
16 | self.use_cuda =True
17 | self.modelPath = '/data/Guha/GR/model/11/'
18 | self.poseEstimator = BiRNN()
19 | self.mseloss = nn.MSELoss()
20 | baseModelPath = '/data/Guha/GR/model/9/validation.pth.tar'
21 |
22 | with open(baseModelPath, 'rb') as tar:
23 | checkpoint = torch.load(tar)
24 | model_weights = checkpoint['state_dict']
25 |
26 | self.poseEstimator.load_state_dict(model_weights)
27 | self.poseCorrector = CorrectPose(cfg.input_dim+cfg.output_dim,cfg.output_dim)
28 | self.poseEstimator.cuda().eval()
29 | self.poseCorrector.cuda()
30 |
31 |
32 |
33 | def train(self,n_epochs):
34 |
35 | f = open(self.modelPath+'model_details','w')
36 | f.write(str(self.poseCorrector))
37 | f.write('\n')
38 |
39 | np.random.seed(1234)
40 | lr = 0.001
41 | optimizer = optim.Adam(self.poseCorrector.parameters(),lr=lr)
42 |
43 | print('Training for %d epochs' % (n_epochs))
44 |
45 | min_batch_loss = 0.0
46 | min_valid_loss = 0.0
47 |
48 | try:
49 | for epoch in range(n_epochs):
50 | epoch_loss = []
51 | start_time = time()
52 | self.poseCorrector.train()
53 | for tf in self.train_dataset.files:
54 | self.train_dataset.readfile(tf)
55 | input = torch.FloatTensor(self.train_dataset.input)
56 | input = torch.unsqueeze(input, 0)
57 | target = torch.FloatTensor(self.train_dataset.target)
58 | if self.use_cuda:
59 | input = input.cuda()
60 | target = target.cuda()
61 | # pose prediction
62 | pose_prediction = self.poseEstimator(input).squeeze(0).detach()
63 | #target = target.reshape_as(pose_prediction)
64 | in_correct = torch.cat((input.squeeze(0),pose_prediction),1)
65 | # pose correction
66 | pose_correction = self.poseCorrector(in_correct)
67 | pose_correction = pose_correction.reshape_as(target)
68 | loss = self._loss_impl(pose_correction, target)
69 | loss.backward()
70 | optimizer.step()
71 |                     optimizer.zero_grad()  # clear gradients so they do not accumulate across steps
72 | epoch_loss.append(loss.item())
73 | if (min_batch_loss == 0 or loss < min_batch_loss):
74 | min_batch_loss = loss
75 |                         print('file ----------> {} training loss {}'.format(tf, loss.item()))
76 |                         f.write('file ----------> {} training loss {} \n'.format(tf, loss.item()))
77 |
78 | epoch_loss = torch.mean(torch.FloatTensor(epoch_loss))
79 | # we save the model after each epoch : epoch_{}.pth.tar
80 | state = {
81 | 'epoch': epoch + 1,
82 | 'state_dict': self.poseCorrector.state_dict(),
83 | 'epoch_loss': epoch_loss
84 | }
85 | torch.save(state, self.modelPath + 'epoch_{}.pth.tar'.format(epoch+1))
86 |
87 | ####################### Validation #######################
88 | valid_loss = []
89 | self.poseCorrector.eval()
90 | for vf in self.valid_dataset.files:
91 | self.valid_dataset.readfile(vf)
92 | input = torch.FloatTensor(self.valid_dataset.input)
93 | input = torch.unsqueeze(input, 0)
94 | target = torch.FloatTensor(self.valid_dataset.target)
95 | if self.use_cuda:
96 | input = input.cuda()
97 | target = target.cuda()
98 | # pose prediction
99 |                     pose_prediction = self.poseEstimator(input).squeeze(0).detach()
100 | #target = target.reshape_as(pose_prediction)
101 | in_correct = torch.cat((input.squeeze(0),pose_prediction),1)
102 | # pose correction
103 | pose_correction = self.poseCorrector(in_correct)
104 | pose_correction = pose_correction.reshape_as(target)
105 | loss = self._loss_impl(pose_correction, target)
106 |                     loss = loss.detach()
107 | valid_loss.append(loss.item())
108 |
109 | valid_loss = torch.mean(torch.FloatTensor(valid_loss))
110 | # we save the model if current validation loss is less than prev : validation.pth.tar
111 | if (min_valid_loss == 0 or valid_loss < min_valid_loss):
112 | min_valid_loss = valid_loss
113 | state = {
114 | 'epoch': epoch + 1,
115 | 'state_dict': self.poseCorrector.state_dict(),
116 | 'validation_loss': valid_loss
117 | }
118 | torch.save(state, self.modelPath + 'validation.pth.tar')
119 |
120 | # logging to track
121 |                 print('epoch No {}, epoch loss {}, validation loss {}, Time taken {} \n'.format(
122 |                     epoch + 1, epoch_loss, valid_loss, time() - start_time))
123 |                 f.write('epoch No {}, epoch loss {}, validation loss {}, Time taken {} \n'.format(
124 |                     epoch + 1, epoch_loss, valid_loss, time() - start_time))
125 |
126 | f.close()
127 | except KeyboardInterrupt:
128 | print('Training aborted.')
129 |
130 | def _loss_impl(self, predicted, expected):
131 | L1 = predicted - expected
132 | dist = torch.sum(torch.norm(L1, 2, 2))
133 | return dist/ predicted.shape[0]
134 |
135 |
136 |
137 | if __name__ == '__main__':
138 | trainingEngine = TrainingEngine()
139 | trainingEngine.train(n_epochs=30)
--------------------------------------------------------------------------------
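The corrector network above never sees the input alone: the frozen pose estimator's (detached) prediction is concatenated with the raw IMU input, and the second stage regresses a refined pose. A minimal sketch of that cascade with placeholder dimensions and nn.Linear stand-ins for the real networks:

```python
import torch
import torch.nn as nn

input_dim, output_dim = 30, 60                 # placeholder sizes, not cfg values
estimator = nn.Linear(input_dim, output_dim)   # stands in for the frozen BiRNN
corrector = nn.Linear(input_dim + output_dim, output_dim)
estimator.eval()

x = torch.randn(100, input_dim)                # [frames, input_dim]
rough = estimator(x).detach()                  # first stage, gradients blocked
refined = corrector(torch.cat((x, rough), dim=1))  # second stage refines the pose
```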
/Root/visualize.py:
--------------------------------------------------------------------------------
1 | ############## add SMPL python folder in path ########
2 | import sys
3 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/smpl_webuser')
4 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/models')
5 |
6 | import shutil
7 | import os
8 | import numpy as np
9 | import myUtil
10 | from opendr.renderer import ColoredRenderer
11 | from opendr.lighting import LambertianPointLight
12 | from opendr.camera import ProjectPoints
13 | from serialization import load_model
14 | import cv2
15 |
16 | ############################ Use SMPL python library packages to instantiate SMPL body model ############
17 | ## Load SMPL model (here we load the male model)
18 | m1 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
19 | m1.betas[:] = np.random.rand(m1.betas.size) * .03
20 |
21 | m2 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
22 | m2.betas[:] = np.random.rand(m2.betas.size) * .03
23 | ## Create OpenDR renderer
24 | rn1 = ColoredRenderer()
25 | rn2 = ColoredRenderer()
26 |
27 | ## Assign attributes to renderer
28 | w, h = (640, 480)
29 |
30 | rn1.camera = ProjectPoints(v=m1, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
31 | c=np.array([w, h]) / 2., k=np.zeros(5))
32 | rn1.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
33 | rn1.set(v=m1, f=m1.f, bgcolor=np.zeros(3))
34 |
35 | rn2.camera = ProjectPoints(v=m2, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
36 | c=np.array([w, h]) / 2., k=np.zeros(5))
37 | rn2.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
38 | rn2.set(v=m2, f=m2.f, bgcolor=np.zeros(3))
39 |
40 | ## Construct point light source
41 | rn1.vc = LambertianPointLight(
42 | f=m1.f,
43 | v=rn1.v,
44 | num_verts=len(m1),
45 | light_pos=np.array([-1000, -1000, -2000]),
46 | vc=np.ones_like(m1) * .9,
47 | light_color=np.array([1., 1., 1.]))
48 |
49 | rn2.vc = LambertianPointLight(
50 | f=m2.f,
51 | v=rn2.v,
52 | num_verts=len(m2),
53 | light_pos=np.array([-1000, -1000, -2000]),
54 | vc=np.ones_like(m2) * .9,
55 | light_color=np.array([1., 1., 1.]))
56 | ###################### End of SMPL body model initialization #############
57 |
58 | ########## path of the input file
59 | Result_path = '/data/Guha/GR/Output/TestSet/13/'
60 | fileList = os.listdir(Result_path)
61 | print (fileList)
62 | path = os.path.join(Result_path,'mazen_c3dairkick_jumpinplace.npz')
63 |
64 | ############ the code below is required to create a video - it saves each frame in a folder ######
65 | # shutil.rmtree('/data/Guha/GR/Output/GT/')
66 | # shutil.rmtree('/data/Guha/GR/Output/Prediction/')
67 | # os.mkdir('/data/Guha/GR/Output/GT/')
68 | # os.mkdir('/data/Guha/GR/Output/Prediction/')
69 |
70 |
71 |
72 | with open(path, 'rb') as file:
73 | print (path)
74 | data_dict = dict(np.load(file))
75 | gt = data_dict['target']
76 | pred = data_dict['predictions']
77 |
78 | # for SMPL 15 to 24 joints
79 | gt_full = myUtil.smpl_reduced_to_full(gt.reshape(-1,15*4))
80 | pred_full = myUtil.smpl_reduced_to_full(pred.reshape(-1,15*4))
81 |
82 | #from quat to axis-angle
83 | gt_aa = myUtil.quat_to_aa_representation(gt_full,24)
84 | pred_aa = myUtil.quat_to_aa_representation(pred_full,24)
85 |
86 | seq_len = pred_aa.shape[0]
87 | print('seq len:', seq_len)
88 | ######### loop through the Sequence
89 | for seq_num in range(seq_len):
90 |
91 | pose1 = gt_aa[seq_num]
92 | pose2 = pred_aa[seq_num]
93 |
94 | ############ update SMPL model with ground truth pose parameters
95 | m1.pose[:] = pose1
96 | m1.pose[0] = np.pi
97 |
98 | ############ update SMPL model with prediction pose parameters
99 | m2.pose[:] = pose2
100 | m2.pose[0] = np.pi
101 |
102 | ####################to visualize runtime demo########################
103 | cv2.imshow('GT', rn1.r)
104 | cv2.imshow('Prediction', rn2.r)
105 | # Press Q on keyboard to exit
106 | if cv2.waitKey(1) & 0xFF == ord('q'):
107 | break
108 | print('gt values--',pose1)
109 | print('pred values--', pose2)
110 |
111 | ################## while saving frames to create video ##################
112 | # gt_img = rn1.r * 255
113 | # pred_img = rn2.r * 255
114 | # cv2.imwrite('/data/Guha/GR/Output/GT/' + str(seq_num) + '.png', gt_img)
115 | # cv2.imwrite('/data/Guha/GR/Output/Prediction/' + str(seq_num) + '.png', pred_img)
116 |
117 |
--------------------------------------------------------------------------------
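The commented-out blocks in this script dump each rendered frame as a numbered PNG; turning those frames into a clip is then a matter of feeding them to cv2.VideoWriter in order (presumably what saveVideo.py/createVideo.py do). A hedged sketch, with the codec, frame rate and directory as assumptions:

```python
import os
import cv2

frame_dir = '/data/Guha/GR/Output/Prediction/'   # where the frames were saved
names = sorted(os.listdir(frame_dir), key=lambda f: int(os.path.splitext(f)[0]))

first = cv2.imread(os.path.join(frame_dir, names[0]))
h, w = first.shape[:2]
writer = cv2.VideoWriter('prediction.avi', cv2.VideoWriter_fourcc(*'XVID'), 30, (w, h))
for name in names:
    writer.write(cv2.imread(os.path.join(frame_dir, name)))
writer.release()
```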
/Root/visualize_DFKI.py:
--------------------------------------------------------------------------------
1 | import sys
2 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/smpl_webuser')
3 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/models')
4 |
5 | import shutil
6 | import os
7 | import numpy as np
8 | import myUtil
9 | from opendr.renderer import ColoredRenderer
10 | from opendr.lighting import LambertianPointLight
11 | from opendr.camera import ProjectPoints
12 | from serialization import load_model
13 | import cv2
14 |
15 | ## Load SMPL model (here we load the male model)
16 | m1 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
17 | m1.betas[:] = np.random.rand(m1.betas.size) * .03
18 |
19 | ## Create OpenDR renderer
20 | rn1 = ColoredRenderer()
21 |
22 | ## Assign attributes to renderer
23 | w, h = (640, 480)
24 |
25 | rn1.camera = ProjectPoints(v=m1, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
26 | c=np.array([w, h]) / 2., k=np.zeros(5))
27 | rn1.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
28 | rn1.set(v=m1, f=m1.f, bgcolor=np.zeros(3))
29 |
30 | ## Construct point light source
31 | rn1.vc = LambertianPointLight(
32 | f=m1.f,
33 | v=rn1.v,
34 | num_verts=len(m1),
35 | light_pos=np.array([-1000, -1000, -2000]),
36 | vc=np.ones_like(m1) * .9,
37 | light_color=np.array([1., 1., 1.]))
38 |
39 | Result_path = '/data/Guha/GR/Output/ValidationSet/18/'
40 | fileList = os.listdir(Result_path)
41 | print (fileList)
42 | shutil.rmtree('/data/Guha/GR/Output/GT/')
43 | shutil.rmtree('/data/Guha/GR/Output/Prediction/')
44 | os.mkdir('/data/Guha/GR/Output/GT/')
45 | os.mkdir('/data/Guha/GR/Output/Prediction/')
46 |
47 | path = os.path.join(Result_path,'walking_1.npz')
48 | with open(path, 'rb') as file:
49 | print (path)
50 | data_dict = dict(np.load(file))
51 | pred = data_dict['predictions']
52 |
53 | # for SMPL 15 to 24 joints
54 | pred_full = myUtil.smpl_reduced_to_full(pred.reshape(-1,15*4))
55 |
56 | #from quat to axis-angle
57 | pred_aa = myUtil.quat_to_aa_representation(pred_full,24)
58 |
59 | seq_len = pred_aa.shape[0]
60 | print('seq len:', seq_len)
61 | for seq_num in range(seq_len):
62 |
63 | pose1 = pred_aa[seq_num]
64 |
65 | m1.pose[:] = pose1
66 | m1.pose[0] = np.pi
67 |
68 | #to visualize demo
69 | cv2.imshow('Prediction', rn1.r)
70 | # Press Q on keyboard to exit
71 | if cv2.waitKey(1) & 0xFF == ord('q'):
72 | break
73 |
74 | # # while saving frames to create video
75 | # pred_img = rn1.r * 255
76 | # cv2.imwrite('/data/Guha/GR/Output/Prediction/' + str(seq_num) + '.png', pred_img)
77 |
78 |
--------------------------------------------------------------------------------
/Root/visualize_dip_own.py:
--------------------------------------------------------------------------------
1 | import sys
2 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/smpl_webuser')
3 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/models')
4 |
5 | import shutil
6 | import os
7 | import numpy as np
8 | import myUtil
9 | from opendr.renderer import ColoredRenderer
10 | from opendr.lighting import LambertianPointLight
11 | from opendr.camera import ProjectPoints
12 | from serialization import load_model
13 | import cv2
14 |
15 | ############################ Use SMPL python library packages to instantiate SMPL body model ############
16 | ## Load SMPL model (here we load the male model)
17 | m1 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
18 | m1.betas[:] = np.random.rand(m1.betas.size) * .03
19 |
20 | m2 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
21 | m2.betas[:] = np.random.rand(m2.betas.size) * .03
22 | ## Create OpenDR renderer
23 | rn1 = ColoredRenderer()
24 | rn2 = ColoredRenderer()
25 |
26 | ## Assign attributes to renderer
27 | w, h = (640, 480)
28 |
29 | rn1.camera = ProjectPoints(v=m1, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
30 | c=np.array([w, h]) / 2., k=np.zeros(5))
31 | rn1.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
32 | rn1.set(v=m1, f=m1.f, bgcolor=np.zeros(3))
33 |
34 | rn2.camera = ProjectPoints(v=m2, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
35 | c=np.array([w, h]) / 2., k=np.zeros(5))
36 | rn2.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
37 | rn2.set(v=m2, f=m2.f, bgcolor=np.zeros(3))
38 |
39 | ## Construct point light source
40 | rn1.vc = LambertianPointLight(
41 | f=m1.f,
42 | v=rn1.v,
43 | num_verts=len(m1),
44 | light_pos=np.array([-1000, -1000, -2000]),
45 | vc=np.ones_like(m1) * .9,
46 | light_color=np.array([1., 1., 1.]))
47 |
48 | rn2.vc = LambertianPointLight(
49 | f=m2.f,
50 | v=rn2.v,
51 | num_verts=len(m2),
52 | light_pos=np.array([-1000, -1000, -2000]),
53 | vc=np.ones_like(m2) * .9,
54 | light_color=np.array([1., 1., 1.]))
55 | ###################### End of SMPL body model initialization #############
56 |
57 | ############ the code below is required to create a video - it saves each frame in a folder ######
58 | # shutil.rmtree('/data/Guha/GR/Output/GT/')
59 | # shutil.rmtree('/data/Guha/GR/Output/Prediction/')
60 | # os.mkdir('/data/Guha/GR/Output/GT/')
61 | # os.mkdir('/data/Guha/GR/Output/Prediction/')
62 |
63 | file_path = '/data/Guha/GR/Output/TestSet/dip_own/test_our_data_all_frames.npz'
64 | with open(file_path, 'rb') as file:
65 | data_dict = dict(np.load(file))
66 | gt = data_dict['gt']
67 | pred = data_dict['prediction']
68 |
69 | for act in range(18):
70 |     # act = 16   # pin a single activity here while debugging, if desired
71 |     gt_act = gt[act].reshape(-1, 24*3)
72 |     pred_act = pred[act].reshape(-1, 24*3)
73 |     print('activity no: ', act)
74 |     seq_len = gt_act.shape[0]
75 |     print('seq len:', seq_len)
76 |     for seq_num in range(seq_len):
77 |         pose1 = gt_act[seq_num]
78 |         pose2 = pred_act[seq_num]
79 |
80 | m1.pose[:] = pose1
81 | m1.pose[0] = np.pi
82 |
83 | m2.pose[:] = pose2
84 | m2.pose[0] = np.pi
85 |
86 | ####################to visualize runtime demo########################
87 | cv2.imshow('GT', rn1.r)
88 | cv2.imshow('Prediction', rn2.r)
89 | # Press Q on keyboard to exit
90 | if cv2.waitKey(1) & 0xFF == ord('q'):
91 | break
92 | print('gt values--', pose1)
93 | print('pred values--', pose2)
94 |
95 | ################## while saving frames to create video ##################
96 | # gt_img = rn1.r * 255
97 | # pred_img = rn2.r * 255
98 | # cv2.imwrite('/data/Guha/GR/Output/GT/' + str(seq_num) + '.png', gt_img)
99 | # cv2.imwrite('/data/Guha/GR/Output/Prediction/' + str(seq_num) + '.png', pred_img)
--------------------------------------------------------------------------------
/Root/visualize_dip_syn.py:
--------------------------------------------------------------------------------
1 | import sys
2 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/smpl_webuser')
3 | sys.path.insert(0, '/data/Guha/GR/code/GR19/smpl/models')
4 |
5 | import shutil
6 | import os
7 | import numpy as np
8 | import myUtil
9 | from opendr.renderer import ColoredRenderer
10 | from opendr.lighting import LambertianPointLight
11 | from opendr.camera import ProjectPoints
12 | from serialization import load_model
13 | import cv2
14 |
15 | ## Load SMPL model (here we load the male model)
16 | m1 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
17 | m1.betas[:] = np.random.rand(m1.betas.size) * .03
18 |
19 | m2 = load_model('/data/Guha/GR/code/GR19/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl')
20 | m2.betas[:] = np.random.rand(m2.betas.size) * .03
21 | ## Create OpenDR renderer
22 | rn1 = ColoredRenderer()
23 | rn2 = ColoredRenderer()
24 |
25 | ## Assign attributes to renderer
26 | w, h = (640, 480)
27 |
28 | rn1.camera = ProjectPoints(v=m1, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
29 | c=np.array([w, h]) / 2., k=np.zeros(5))
30 | rn1.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
31 | rn1.set(v=m1, f=m1.f, bgcolor=np.zeros(3))
32 |
33 | rn2.camera = ProjectPoints(v=m2, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w, w]) / 2.,
34 | c=np.array([w, h]) / 2., k=np.zeros(5))
35 | rn2.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
36 | rn2.set(v=m2, f=m2.f, bgcolor=np.zeros(3))
37 |
38 | ## Construct point light source
39 | rn1.vc = LambertianPointLight(
40 | f=m1.f,
41 | v=rn1.v,
42 | num_verts=len(m1),
43 | light_pos=np.array([-1000, -1000, -2000]),
44 | vc=np.ones_like(m1) * .9,
45 | light_color=np.array([1., 1., 1.]))
46 |
47 | rn2.vc = LambertianPointLight(
48 | f=m2.f,
49 | v=rn2.v,
50 | num_verts=len(m2),
51 | light_pos=np.array([-1000, -1000, -2000]),
52 | vc=np.ones_like(m2) * .9,
53 | light_color=np.array([1., 1., 1.]))
54 |
55 |
56 |
57 | Result_path = '/data/Guha/GR/Output/DIP_IMU/'
58 | fileList = os.listdir(Result_path)
59 | print (fileList)
60 | shutil.rmtree('/data/Guha/GR/Output/GT/')
61 | shutil.rmtree('/data/Guha/GR/Output/Prediction/')
62 | os.mkdir('/data/Guha/GR/Output/GT/')
63 | os.mkdir('/data/Guha/GR/Output/Prediction/')
64 |
65 | path1 = '/data/Guha/GR/DIPIMUandOthers/DIP_IMU_and_Others/DIP_IMU/s_10/01_b.pkl'
66 | path2 = '/data/Guha/GR/synthetic60FPS/H36/S8_WalkDog.pkl'
67 | with open(path1, 'rb') as file1, open(path2, 'rb') as file2:
68 | print (path1)
69 | data_dict1 = np.load(file1, encoding='latin1')
70 | sample_pose1 = data_dict1['poses'].reshape(-1,135)
71 | sample_ori1 = data_dict1['ori'].reshape(-1, 5, 3,3)
72 | sample_acc1= data_dict1['acc'].reshape(-1, 5, 3)
73 |
74 | print(path2)
75 | data_dict2 = np.load(file2, encoding='latin1')
76 | sample_pose2 = data_dict2['poses'].reshape(-1,135)
77 | sample_ori2 = data_dict2['ori'].reshape(-1, 5, 3,3)
78 | sample_acc2 = data_dict2['acc'].reshape(-1, 5, 3)
79 |
80 | # for SMPL 15 to 24 joints
81 | dip = myUtil.smpl_reduced_to_full(sample_pose1)
82 | syn = myUtil.smpl_reduced_to_full(sample_pose2)
83 |
84 |     #from rotation matrix to axis-angle
85 | dip_aa = myUtil.rot_matrix_to_aa(dip)
86 | syn_aa = myUtil.rot_matrix_to_aa(syn)
87 |
88 | seq_len = dip_aa.shape[0]
89 | print('seq len:', seq_len)
90 | for seq_num in range(seq_len):
91 |
92 | pose1 = dip_aa[seq_num]
93 | pose2 = syn_aa[seq_num]
94 |
95 | m1.pose[:] = pose1
96 | m1.pose[0] = np.pi
97 |
98 | m2.pose[:] = pose2
99 | m2.pose[0] = np.pi
100 |
101 |
102 | #to visualize demo
103 | cv2.imshow('Dip', rn1.r)
104 | cv2.imshow('synthetic', rn2.r)
105 | # Press Q on keyboard to exit
106 | if cv2.waitKey(1) & 0xFF == ord('q'):
107 | break
108 |
109 | # # while saving frames to create video
110 | # gt_img = rn1.r * 255
111 | # pred_img = rn2.r * 255
112 | # cv2.imwrite('/data/Guha/GR/Output/GT/' + str(seq_num) + '.png', gt_img)
113 | # cv2.imwrite('/data/Guha/GR/Output/Prediction/' + str(seq_num) + '.png', pred_img)
114 |
115 |
--------------------------------------------------------------------------------
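myUtil.smpl_reduced_to_full, used throughout these visualization scripts, expands the 15 tracked joints back to the full 24-joint SMPL pose before rendering. A hedged sketch of what such an expansion looks like for quaternion data; the joint index list and the identity-padding are assumptions about myUtil's actual implementation:

```python
import numpy as np

# Assumed indices of the 15 tracked joints within the 24-joint SMPL skeleton;
# the real mapping lives in myUtil.smpl_reduced_to_full.
SMPL_MAJOR_JOINTS = [1, 2, 3, 4, 5, 6, 9, 12, 13, 14, 15, 16, 17, 18, 19]

def smpl_reduced_to_full_sketch(reduced):
    """reduced: [frames, 15*4] quaternions -> [frames, 24*4] quaternions."""
    frames = reduced.shape[0]
    full = np.zeros((frames, 24, 4))
    full[:, :, 0] = 1.0  # identity quaternion (w=1, x=y=z=0) for untracked joints
    full[:, SMPL_MAJOR_JOINTS] = reduced.reshape(frames, 15, 4)
    return full.reshape(frames, 24 * 4)
```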
/S11_WalkTogether_gt_WBMGO56A_tcVI.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/S11_WalkTogether_gt_WBMGO56A_tcVI.mp4
--------------------------------------------------------------------------------
/chumpy/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 |
5 | # C extensions
6 | *.so
7 |
8 | # Distribution / packaging
9 | .Python
10 | env/
11 | bin/
12 | build/
13 | develop-eggs/
14 | dist/
15 | eggs/
16 | lib/
17 | lib64/
18 | parts/
19 | sdist/
20 | var/
21 | *.egg-info/
22 | .installed.cfg
23 | *.egg
24 |
25 | # Installer logs
26 | pip-log.txt
27 | pip-delete-this-directory.txt
28 |
29 | # Unit test / coverage reports
30 | htmlcov/
31 | .tox/
32 | .coverage
33 | .cache
34 | nosetests.xml
35 | coverage.xml
36 |
37 | # Translations
38 | *.mo
39 |
40 | # Mr Developer
41 | .mr.developer.cfg
42 | .project
43 | .pydevproject
44 |
45 | # Rope
46 | .ropeproject
47 |
48 | # Django stuff:
49 | *.log
50 | *.pot
51 |
52 | # Sphinx documentation
53 | docs/_build/
54 |
55 | .DS_Store
--------------------------------------------------------------------------------
/chumpy/.hgignore:
--------------------------------------------------------------------------------
1 | syntax: glob
2 | *.pyc
--------------------------------------------------------------------------------
/chumpy/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 | python:
3 | - '2.7'
4 | virtualenv:
5 | system_site_packages: true
6 | before_install:
7 | - sudo cat /etc/issue.net
8 | - sudo apt-get update -q
9 | - sudo apt-get install -qq python-dev gfortran pkg-config liblapack-dev
10 | install:
11 | - pip install --upgrade pip
12 | - travis_wait pip install -r requirements.txt
13 | script: make test
14 |
--------------------------------------------------------------------------------
/chumpy/LICENSE.txt:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2014 Max-Planck-Gesellschaft
4 | Copyright (c) 2014 Matthew Loper
5 |
6 | Permission is hereby granted, free of charge, to any person obtaining a copy
7 | of this software and associated documentation files (the "Software"), to deal
8 | in the Software without restriction, including without limitation the rights
9 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 | copies of the Software, and to permit persons to whom the Software is
11 | furnished to do so, subject to the following conditions:
12 |
13 | The above copyright notice and this permission notice shall be included in
14 | all copies or substantial portions of the Software.
15 |
16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 | THE SOFTWARE.
23 |
--------------------------------------------------------------------------------
/chumpy/MANIFEST.in:
--------------------------------------------------------------------------------
1 | global-include . *.py *.c *.h Makefile *.pyx requirements.txt
2 | global-exclude chumpy/optional_test_performance.py
3 | prune dist
4 |
5 |
6 |
--------------------------------------------------------------------------------
/chumpy/Makefile:
--------------------------------------------------------------------------------
1 | all:
2 |
3 | upload:
4 | python setup.py sdist
5 | twine upload dist/*
6 | #sdist:
7 | # python setup.py sdist && rsync -avz dist/chumpy-0.5.tar.gz files:~/chumpy/latest.tgz && python ./api_compatibility.py && rsync -avz ./api_compatibility.html files:~/chumpy/
8 |
9 | clean:
10 |
11 | tidy:
12 |
13 | test: clean qtest
14 | qtest: all
15 | # For some reason the import changes for Python 3 caused the Python 2 test
16 | # loader to give up without loading any tests. So we discover them ourselves.
17 | # python -m unittest
18 | find chumpy -name 'test_*.py' | sed -e 's/\.py$$//' -e 's/\//./' | xargs python -m unittest
19 |
20 | coverage: clean qcov
21 | qcov: all
22 | env LD_PRELOAD=$(PRELOADED) coverage run --source=. -m unittest discover -s .
23 | coverage html
24 | coverage report -m
25 |
26 |
--------------------------------------------------------------------------------
/chumpy/README.md:
--------------------------------------------------------------------------------
1 | chumpy
2 | ======
3 |
4 | Chumpy is a Python-based framework designed to handle the **auto-differentiation** problem,
5 | which is to evaluate an expression and its derivatives with respect to its inputs, by use of the chain rule.
6 |
7 | Chumpy is intended to make construction and local
8 | minimization of objectives easier.
9 |
10 | Specifically, it provides:
11 |
12 | - Easy problem construction by using Numpy’s application interface
13 | - Easy access to derivatives via auto differentiation
14 | - Easy local optimization methods (12 of them: most of which use the derivatives)
15 |
16 | Chumpy comes with its own demos, which can be seen by typing the following:
17 |
18 | ```python
19 | import chumpy
20 | chumpy.demo() # prints out a list of possible demos
21 | ```
22 |
23 | Licensing is specified in the attached LICENSE.txt.
24 |
--------------------------------------------------------------------------------
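The 'scalar' demo bundled in chumpy/__init__.py (reproduced later in this dump) shows the core auto-differentiation interface in a few lines; in Python 3 syntax:

```python
import chumpy as ch

x1, x2, x3 = ch.array(10), ch.array(20), ch.array(30)
result = x1 + x2 + x3
print(result)             # [ 60.]
print(result.dr_wrt(x1))  # 1, the derivative of the sum w.r.t. x1
```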
/chumpy/chumpy/__init__.py:
--------------------------------------------------------------------------------
1 | from .ch import *
2 | from .logic import *
3 |
4 | from .optimization import minimize
5 | from . import extras
6 | from . import testing
7 | from .version import version as __version__
8 |
9 | from .version import version as __version__
10 |
11 | from numpy import bool, int, float, complex, object, unicode, str, nan, inf
12 |
13 | def test():
14 | from os.path import split
15 | import unittest
16 | test_loader= unittest.TestLoader()
17 | test_loader = test_loader.discover(split(__file__)[0])
18 | test_runner = unittest.TextTestRunner()
19 | test_runner.run( test_loader )
20 |
21 |
22 | demos = {}
23 |
24 | demos['scalar'] = """
25 | import chumpy as ch
26 |
27 | [x1, x2, x3] = ch.array(10), ch.array(20), ch.array(30)
28 | result = x1+x2+x3
29 | print result # prints [ 60.]
30 | print result.dr_wrt(x1) # prints 1
31 | """
32 |
33 | demos['show_tree'] = """
34 | import chumpy as ch
35 |
36 | [x1, x2, x3] = ch.array(10), ch.array(20), ch.array(30)
37 | for i in range(3): x2 = x1 + x2 + x3
38 |
39 | x2.dr_wrt(x1) # pull cache
40 | x2.dr_wrt(x3) # pull cache
41 | x1.label='x1' # for clarity in show_tree()
42 | x2.label='x2' # for clarity in show_tree()
43 | x3.label='x3' # for clarity in show_tree()
44 | x2.show_tree(cachelim=1e-4) # in MB
45 | """
46 |
47 | demos['matrix'] = """
48 | import chumpy as ch
49 |
50 | x1, x2, x3, x4 = ch.eye(10), ch.array(1), ch.array(5), ch.array(10)
51 | y = x1*(x2-x3)+x4
52 | print y
53 | print y.dr_wrt(x2)
54 | """
55 |
56 | demos['linalg'] = """
57 | import chumpy as ch
58 |
59 | m = [ch.random.randn(100).reshape((10,10)) for i in range(3)]
60 | y = m[0].dot(m[1]).dot(ch.linalg.inv(m[2])) * ch.linalg.det(m[0])
61 | print y.shape
62 | print y.dr_wrt(m[0]).shape
63 | """
64 |
65 | demos['inheritance'] = """
66 | import chumpy as ch
67 | import numpy as np
68 |
69 | class Sin(ch.Ch):
70 |
71 | dterms = ('x',)
72 |
73 | def compute_r(self):
74 | return np.sin(self.x.r)
75 |
76 | def compute_dr_wrt(self, wrt):
77 | import scipy.sparse
78 | if wrt is self.x:
79 | result = np.cos(self.x.r)
80 | return scipy.sparse.diags([result.ravel()], [0]) if len(result)>1 else np.atleast_2d(result)
81 |
82 | x1 = Ch([10,20,30])
83 | result = Sin(x1) # or "result = Sin(x=x1)"
84 | print result.r
85 | print result.dr_wrt(x1)
86 | """
87 |
88 | demos['optimization'] = """
89 | import chumpy as ch
90 |
91 | x = ch.zeros(10)
92 | y = ch.zeros(10)
93 |
94 | # Beale's function
95 | e1 = 1.5 - x + x*y
96 | e2 = 2.25 - x + x*(y**2)
97 | e3 = 2.625 - x + x*(y**3)
98 |
99 | objective = {'e1': e1, 'e2': e2, 'e3': e3}
100 | ch.minimize(objective, x0=[x,y], method='dogleg')
101 | print x # should be all 3.0
102 | print y # should be all 0.5
103 | """
104 |
105 |
106 |
107 |
108 | def demo(which=None):
109 | if which not in demos:
110 | print('Please indicate which demo you want, as follows:')
111 | for key in demos:
112 | print("\tdemo('%s')" % (key,))
113 | return
114 |
115 | print('- - - - - - - - - - - - - - - - - - - - - - -')
116 | print(demos[which])
117 |     print('- - - - - - - - - - - - - - - - - - - - - - -\n')
118 | exec('global np\n' + demos[which], globals(), locals())
119 |
--------------------------------------------------------------------------------
/chumpy/chumpy/ch_random.py:
--------------------------------------------------------------------------------
1 | """
2 | Author(s): Matthew Loper
3 |
4 | See LICENCE.txt for licensing and contact information.
5 | """
6 |
7 | import numpy.random
8 | from .ch import Ch
9 |
10 | api_not_implemented = ['choice','bytes','shuffle','permutation']
11 |
12 | api_wrapped_simple = [
13 | # simple random data
14 | 'rand','randn','randint','random_integers','random_sample','random','ranf','sample',
15 |
16 | # distributions
17 | 'beta','binomial','chisquare','dirichlet','exponential','f','gamma','geometric','gumbel','hypergeometric',
18 | 'laplace','logistic','lognormal','logseries','multinomial','multivariate_normal','negative_binomial',
19 | 'noncentral_chisquare','noncentral_f','normal','pareto','poisson','power','rayleigh','standard_cauchy',
20 | 'standard_exponential','standard_gamma','standard_normal','standard_t','triangular','uniform','vonmises',
21 | 'wald','weibull','zipf']
22 |
23 | api_wrapped_direct = ['seed', 'get_state', 'set_state']
24 |
25 | for rtn in api_wrapped_simple:
26 | exec('def %s(*args, **kwargs) : return Ch(numpy.random.%s(*args, **kwargs))' % (rtn, rtn))
27 |
28 | for rtn in api_wrapped_direct:
29 | exec('%s = numpy.random.%s' % (rtn, rtn))
30 |
31 | __all__ = api_wrapped_simple + api_wrapped_direct
32 |
33 |
--------------------------------------------------------------------------------
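The exec loop in ch_random.py stamps out one thin wrapper per numpy.random routine; for 'rand', for instance, it generates the equivalent of:

```python
import numpy.random
from chumpy.ch import Ch

def rand(*args, **kwargs):
    # evaluate the numpy routine, then wrap the result in a differentiable Ch node
    return Ch(numpy.random.rand(*args, **kwargs))
```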
/chumpy/chumpy/extras.py:
--------------------------------------------------------------------------------
1 | __author__ = 'matt'
2 |
3 | from . import ch
4 | import numpy as np
5 | from .utils import row, col
6 | import scipy.sparse as sp
7 | import scipy.special
8 |
9 | class Interp3D(ch.Ch):
10 | dterms = 'locations'
11 | terms = 'image'
12 |
13 | def on_changed(self, which):
14 | if 'image' in which:
15 | self.gx, self.gy, self.gz = np.gradient(self.image)
16 |
17 | def compute_r(self):
18 | locations = self.locations.r.copy()
19 | for i in range(3):
20 | locations[:,i] = np.clip(locations[:,i], 0, self.image.shape[i]-1)
21 | locs = np.floor(locations).astype(np.uint32)
22 | result = self.image[locs[:,0], locs[:,1], locs[:,2]]
23 | offset = (locations - locs)
24 | dr = self.dr_wrt(self.locations).dot(offset.ravel())
25 | return result + dr
26 |
27 | def compute_dr_wrt(self, wrt):
28 | if wrt is self.locations:
29 | locations = self.locations.r.copy()
30 | for i in range(3):
31 | locations[:,i] = np.clip(locations[:,i], 0, self.image.shape[i]-1)
32 | locations = locations.astype(np.uint32)
33 |
34 | xc = col(self.gx[locations[:,0], locations[:,1], locations[:,2]])
35 | yc = col(self.gy[locations[:,0], locations[:,1], locations[:,2]])
36 | zc = col(self.gz[locations[:,0], locations[:,1], locations[:,2]])
37 |
38 | data = np.vstack([xc.ravel(), yc.ravel(), zc.ravel()]).T.copy()
39 | JS = np.arange(locations.size)
40 | IS = JS // 3
41 |
42 | return sp.csc_matrix((data.ravel(), (IS, JS)))
43 |
44 |
45 | class gamma(ch.Ch):
46 | dterms = 'x',
47 |
48 | def compute_r(self):
49 | return scipy.special.gamma(self.x.r)
50 |
51 | def compute_dr_wrt(self, wrt):
52 | if wrt is self.x:
53 | d = scipy.special.polygamma(0, self.x.r)*self.r
54 | return sp.diags([d.ravel()], [0])
55 |
56 | # This function is based directly on the "moment" function
57 | # in scipy, specifically in mstats_basic.py.
58 | def moment(a, moment=1, axis=0):
59 | if moment == 1:
60 | # By definition the first moment about the mean is 0.
61 | shape = list(a.shape)
62 | del shape[axis]
63 | if shape:
64 | # return an actual array of the appropriate shape
65 | return ch.zeros(shape, dtype=float)
66 | else:
67 | # the input was 1D, so return a scalar instead of a rank-0 array
68 | return np.float64(0.0)
69 | else:
70 | mn = ch.expand_dims(a.mean(axis=axis), axis)
71 | s = ch.power((a-mn), moment)
72 | return s.mean(axis=axis)
73 |
--------------------------------------------------------------------------------
/chumpy/chumpy/logic.py:
--------------------------------------------------------------------------------
1 | """
2 | Author(s): Matthew Loper
3 |
4 | See LICENCE.txt for licensing and contact information.
5 | """
6 |
7 | __author__ = 'matt'
8 |
9 |
10 | __all__ = [] # added to incrementally below
11 |
12 | from . import ch
13 | from .ch import Ch
14 | import numpy as np
15 |
16 | class LogicFunc(Ch):
17 | dterms = 'a' # we keep this here so that changes to children of "a" will trigger cache changes
18 | terms = 'args', 'kwargs', 'funcname'
19 |
20 | def compute_r(self):
21 | arr = self.a
22 | fn = getattr(np, self.funcname)
23 | return fn(arr, *self.args, **self.kwargs)
24 |
25 | def compute_dr_wrt(self, wrt):
26 | pass
27 |
28 |
29 | unaries = 'all', 'any', 'isfinite', 'isinf', 'isnan', 'isneginf', 'isposinf', 'logical_not'
30 | for unary in unaries:
31 | exec("def %s(a, *args, **kwargs): return LogicFunc(a=a, args=args, kwargs=kwargs, funcname='%s')" % (unary, unary))
32 | __all__ += unaries
33 |
34 |
35 |
36 | if __name__ == '__main__':
37 | from . import ch
38 | print(all(np.array([1,2,3])))
39 | print(isinf(np.array([0,2,3])))
40 |
--------------------------------------------------------------------------------
/chumpy/chumpy/monitor.py:
--------------------------------------------------------------------------------
1 | '''
2 | Logging service for tracking dr tree changes from root objective
3 | and record every step that incrementally changes the dr tree
4 |
5 | '''
6 | import os, sys, time
7 | import json
8 | import psutil
9 |
10 | import scipy.sparse as sp
11 | import numpy as np
12 | from . import reordering
13 |
14 | _TWO_20 = float(2 **20)
15 |
16 | '''
17 | memory utils
18 |
19 | '''
20 | def pdb_mem():
21 | from .monitor import get_current_memory
22 | mem = get_current_memory()
23 | if mem > 7000:
24 | import pdb;pdb.set_trace()
25 |
26 | def get_peak_mem():
27 | '''
28 | this returns peak memory use since process starts till the moment its called
29 | '''
30 | import resource
31 | rusage_denom = 1024.
32 | if sys.platform == 'darwin':
33 | # ... it seems that in OSX the output is different units ...
34 | rusage_denom = rusage_denom * rusage_denom
35 | mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / rusage_denom
36 | return mem
37 |
38 | def get_current_memory():
39 | p = psutil.Process(os.getpid())
40 | mem = p.memory_info()[0]/_TWO_20
41 |
42 | return mem
43 |
44 | '''
45 | Helper for Profiler
46 | '''
47 |
48 | def build_cache_info(k, v, info_dict):
49 | if v is not None:
50 | issparse = sp.issparse(v)
51 | size = v.size
52 | if issparse:
53 | nonzero = len(v.data)
54 | else:
55 | nonzero = np.count_nonzero(v)
56 | info_dict[k.short_name] = {
57 | 'sparse': issparse,
58 | 'size' : str(size),
59 | 'nonzero' : nonzero,
60 | }
61 |
62 |
63 | def cache_info(ch_node):
64 | result = {}
65 | if isinstance(ch_node, reordering.Concatenate) and hasattr(ch_node, 'dr_cached') and len(ch_node.dr_cached) > 0:
66 | for k, v in ch_node.dr_cached.items():
67 | build_cache_info(k, v, result)
68 | elif len(ch_node._cache['drs']) > 0:
69 | for k, v in ch_node._cache['drs'].items():
70 | build_cache_info(k, v, result)
71 |
72 | return result
73 |
74 | class DrWrtProfiler(object):
75 | base_path = os.path.abspath('profiles')
76 |
77 | def __init__(self, root, base_path=None):
78 | self.root = root.obj
79 | self.history = []
80 |
81 | ts = time.time()
82 | if base_path:
83 | self.base_path = base_path
84 |
85 | self.path = os.path.join(self.base_path, 'profile_%s.json' % str(ts))
86 | self.root_path = os.path.join(self.base_path, 'root_%s.json' % str(ts))
87 |
88 |
89 | with open(self.root_path, 'w') as f:
90 | json.dump(self.dump_tree(self.root), f, indent=4)
91 |
92 | def dump_tree(self, node):
93 | if not hasattr(node, 'dterms'):
94 | return []
95 |
96 | node_dict = self.serialize_node(node, verbose=False)
97 | if hasattr(node, 'visited') and node.visited:
98 | node_dict.update({'indirect':True})
99 | return node_dict
100 |
101 | node.visited = True
102 | children_list = []
103 | for dterm in node.dterms:
104 | if hasattr(node, dterm):
105 | child = getattr(node, dterm)
106 | if hasattr(child, 'dterms') or hasattr(child, 'terms'):
107 | children_list.append(self.dump_tree(child))
108 | node_dict.update({'children':children_list})
109 | return node_dict
110 |
111 | def serialize_node(self, ch_node, verbose=True):
112 | node_id = id(ch_node)
113 | name = ch_node.short_name
114 | ts = time.time()
115 | status = ch_node._status
116 | mem = get_current_memory()
117 | node_cache_info = cache_info(ch_node)
118 |
119 | rec = {
120 | 'id': str(node_id),
121 | 'indirect' : False,
122 | }
123 | if verbose:
124 | rec.update({
125 | 'name':name,
126 | 'ts' : ts,
127 | 'status':status,
128 | 'mem': mem,
129 | 'cache': node_cache_info,
130 | })
131 | return rec
132 |
133 | def show_tree(self, label):
134 | '''
135 | show tree from the root node
136 | '''
137 | self.root.show_tree_cache(label)
138 |
139 | def record(self, ch_node):
140 | '''
141 | Incremental changes
142 | '''
143 | rec = self.serialize_node(ch_node)
144 | self.history.append(rec)
145 |
146 | def harvest(self):
147 | print('collecting and dump to file %s' % self.path)
148 | with open(self.path, 'w') as f:
149 | json.dump(self.history, f, indent=4)
--------------------------------------------------------------------------------
/chumpy/chumpy/optimization.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | """
4 | Author(s): Matthew Loper
5 |
6 | See LICENCE.txt for licensing and contact information.
7 | """
8 |
9 | __all__ = ['minimize']
10 |
11 | import numpy as np
12 | from . import ch
13 | import scipy.sparse as sp
14 | import scipy.optimize
15 |
16 | from .optimization_internal import minimize_dogleg
17 |
18 | #from memory_profiler import profile, memory_usage
19 |
20 | # def disable_cache_for_single_parent_node(node):
21 | # if hasattr(node, '_parents') and len(node._parents.keys()) == 1:
22 | # node.want_cache = False
23 |
24 |
25 | # Nelder-Mead
26 | # Powell
27 | # CG
28 | # BFGS
29 | # Newton-CG
30 | # Anneal
31 | # L-BFGS-B
32 | # TNC
33 | # COBYLA
34 | # SLSQP
35 | # dogleg
36 | # trust-ncg
37 | def minimize(fun, x0, method='dogleg', bounds=None, constraints=(), tol=None, callback=None, options=None):
38 |
39 | if method == 'dogleg':
40 | if options is None: options = {}
41 | return minimize_dogleg(fun, free_variables=x0, on_step=callback, **options)
42 |
43 | if isinstance(fun, list) or isinstance(fun, tuple):
44 | fun = ch.concatenate([f.ravel() for f in fun])
45 | if isinstance(fun, dict):
46 | fun = ch.concatenate([f.ravel() for f in list(fun.values())])
47 | obj = fun
48 | free_variables = x0
49 |
50 |
51 | from .ch import SumOfSquares
52 |
53 | hessp = None
54 | hess = None
55 | if obj.size == 1:
56 | obj_scalar = obj
57 | else:
58 | obj_scalar = SumOfSquares(obj)
59 |
60 | def hessp(vs, p,obj, obj_scalar, free_variables):
61 | changevars(vs,obj,obj_scalar,free_variables)
62 | if not hasattr(hessp, 'vs'):
63 | hessp.vs = vs*0+1e16
64 | if np.max(np.abs(vs-hessp.vs)) > 0:
65 |
66 | J = ns_jacfunc(vs,obj,obj_scalar,free_variables)
67 | hessp.J = J
68 | hessp.H = 2. * J.T.dot(J)
69 | hessp.vs = vs
70 | return np.array(hessp.H.dot(p)).ravel()
71 | #return 2*np.array(hessp.J.T.dot(hessp.J.dot(p))).ravel()
72 |
73 | if method.lower() != 'newton-cg':
74 | def hess(vs, obj, obj_scalar, free_variables):
75 | changevars(vs,obj,obj_scalar,free_variables)
76 | if not hasattr(hessp, 'vs'):
77 | hessp.vs = vs*0+1e16
78 | if np.max(np.abs(vs-hessp.vs)) > 0:
79 | J = ns_jacfunc(vs,obj,obj_scalar,free_variables)
80 | hessp.H = 2. * J.T.dot(J)
81 | return hessp.H
82 |
83 | def changevars(vs, obj, obj_scalar, free_variables):
84 | cur = 0
85 | changed = False
86 | for idx, freevar in enumerate(free_variables):
87 | sz = freevar.r.size
88 | newvals = vs[cur:cur+sz].copy().reshape(free_variables[idx].shape)
89 | if np.max(np.abs(newvals-free_variables[idx]).ravel()) > 0:
90 | free_variables[idx][:] = newvals
91 | changed = True
92 |
93 | cur += sz
94 |
95 | methods_without_callback = ('anneal', 'powell', 'cobyla', 'slsqp')
96 | if callback is not None and changed and method.lower() in methods_without_callback:
97 | callback(None)
98 |
99 | return changed
100 |
101 | def residuals(vs,obj, obj_scalar, free_variables):
102 | changevars(vs, obj, obj_scalar, free_variables)
103 | residuals = obj_scalar.r.ravel()[0]
104 | return residuals
105 |
106 | def scalar_jacfunc(vs,obj, obj_scalar, free_variables):
107 | if not hasattr(scalar_jacfunc, 'vs'):
108 | scalar_jacfunc.vs = vs*0+1e16
109 | if np.max(np.abs(vs-scalar_jacfunc.vs)) == 0:
110 | return scalar_jacfunc.J
111 |
112 | changevars(vs, obj, obj_scalar, free_variables)
113 |
114 | if True: # faster, at least on some problems
115 | result = np.concatenate([np.array(obj_scalar.lop(wrt, np.array([[1]]))).ravel() for wrt in free_variables])
116 | else:
117 | jacs = [obj_scalar.dr_wrt(wrt) for wrt in free_variables]
118 | for idx, jac in enumerate(jacs):
119 | if sp.issparse(jac):
120 | jacs[idx] = jacs[idx].todense()
121 | result = np.concatenate([jac.ravel() for jac in jacs])
122 |
123 | scalar_jacfunc.J = result
124 | scalar_jacfunc.vs = vs
125 | return result.ravel()
126 |
127 | def ns_jacfunc(vs,obj, obj_scalar, free_variables):
128 | if not hasattr(ns_jacfunc, 'vs'):
129 | ns_jacfunc.vs = vs*0+1e16
130 | if np.max(np.abs(vs-ns_jacfunc.vs)) == 0:
131 | return ns_jacfunc.J
132 |
133 | changevars(vs, obj, obj_scalar, free_variables)
134 | jacs = [obj.dr_wrt(wrt) for wrt in free_variables]
135 | result = hstack(jacs)
136 |
137 | ns_jacfunc.J = result
138 | ns_jacfunc.vs = vs
139 | return result
140 |
141 |
142 | x1 = scipy.optimize.minimize(
143 | method=method,
144 | fun=residuals,
145 | callback=callback,
146 | x0=np.concatenate([free_variable.r.ravel() for free_variable in free_variables]),
147 | jac=scalar_jacfunc,
148 | hessp=hessp, hess=hess, args=(obj, obj_scalar, free_variables),
149 | bounds=bounds, constraints=constraints, tol=tol, options=options).x
150 |
151 | changevars(x1, obj, obj_scalar, free_variables)
152 | return free_variables
153 |
154 |
155 | def main():
156 | pass
157 |
158 |
159 | if __name__ == '__main__':
160 | main()
161 |
162 |
--------------------------------------------------------------------------------
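The hess/hessp helpers above use the Gauss-Newton approximation: for a sum-of-squares objective f(x) = ||r(x)||^2 the Hessian is taken as 2·JᵀJ, with J the Jacobian of the residuals and the second-order term dropped. For linear residuals the approximation is exact, which makes for an easy numeric sanity check:

```python
import numpy as np

# r(x) = A x - b, so f(x) = ||r(x)||^2 has exact Hessian 2 A^T A
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
J = A                      # Jacobian of the residuals w.r.t. x
H = 2.0 * J.T @ J          # the 2 J^T J used by hessp/hess
print(H)                   # [[ 70.  88.] [ 88. 112.]]
```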
/chumpy/chumpy/optional_test_performance.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 | """
4 | Author(s): Matthew Loper
5 |
6 | See LICENCE.txt for licensing and contact information.
7 | """
8 |
9 | import unittest
10 | import numpy as np
11 | from functools import reduce
12 |
13 |
14 | has_ressources = True
15 | try:
16 | import resource
17 |
18 | def abstract_ressource_timer():
19 | return resource.getrusage(resource.RUSAGE_SELF)
20 | def abstract_ressource_counter(r1, r2):
21 | _r1 = r1.ru_stime + r1.ru_utime
22 | _r2 = r2.ru_stime + r2.ru_utime
23 |
24 | return _r2 - _r1
25 | except ImportError:
26 | has_ressources = False
27 | pass
28 |
29 |
30 | if not has_ressources:
31 | try:
32 | from ctypes import *
33 |
34 |
35 |
36 | def abstract_ressource_timer():
37 | val = c_int64()
38 | windll.Kernel32.QueryPerformanceCounter(byref(val))
39 | return val
40 | def abstract_ressource_counter(r1, r2):
41 | """Returns the elapsed time between r2 and r1 (r2 > r1) in milliseconds"""
42 | val = c_int64()
43 | windll.Kernel32.QueryPerformanceFrequency(byref(val))
44 |
45 | return (1000*float(r2.value-r1.value))/val.value
46 |
47 | except ImportError:
48 | has_win32api = False
49 |
50 |
51 |
52 |
53 | from . import ch
54 |
55 |
56 |
57 | class Timer(object):
58 |
59 | def __enter__(self):
60 | self.r1 = abstract_ressource_timer()
61 |
62 | def __exit__(self, exception_type, exception_value, traceback):
63 | self.r2 = abstract_ressource_timer()
64 |
65 | self.elapsed = abstract_ressource_counter(self.r1, self.r2)
66 |
67 | # def timer():
68 | # tm = resource.getrusage(resource.RUSAGE_SELF)
69 | # return tm.ru_stime + tm.ru_utime
70 | #
71 | # svd1
72 |
73 | def timer(setup, go, n):
74 | tms = []
75 | for i in range(n):
76 | if setup is not None:
77 | setup()
78 |
79 | tm0 = abstract_ressource_timer()
80 |
81 | # if False:
82 | # from body.misc.profdot import profdot
83 | # profdot('go()', globals(), locals())
84 | # import pdb; pdb.set_trace()
85 |
86 | go()
87 | tm1 = abstract_ressource_timer()
88 |
89 | tms.append(abstract_ressource_counter(tm0, tm1))
90 |
91 | #raw_input(tms)
92 | return np.mean(tms) # see docs for timeit, which recommend getting minimum
93 |
94 |
95 |
96 | import timeit
97 | class TestPerformance(unittest.TestCase):
98 |
99 | def setUp(self):
100 | np.random.seed(0)
101 | self.mtx_10 = ch.array(np.random.randn(100).reshape((10,10)))
102 | self.mtx_1k = ch.array(np.random.randn(1000000).reshape((1000,1000)))
103 |
104 | def compute_binary_ratios(self, vecsize, numvecs):
105 |
106 | ratio = {}
107 | for funcname in ['add', 'subtract', 'multiply', 'divide', 'power']:
108 | for xp in ch, np:
109 | func = getattr(xp, funcname)
110 | vecs = [xp.random.rand(vecsize) for i in range(numvecs)]
111 |
112 | if xp is ch:
113 | f = reduce(lambda x, y : func(x,y), vecs)
114 | def go():
115 | for v in vecs:
116 | v.x *= -1
117 | _ = f.r
118 |
119 | tm_ch = timer(None, go, 10)
120 | else: # xp is np
121 | def go():
122 | for v in vecs:
123 | v *= -1
124 | _ = reduce(lambda x, y : func(x,y), vecs)
125 |
126 | tm_np = timer(None, go, 10)
127 |
128 | ratio[funcname] = tm_ch / tm_np
129 |
130 | return ratio
131 |
132 | def test_binary_ratios(self):
133 | ratios = self.compute_binary_ratios(vecsize=5000, numvecs=100)
134 | tol = 1e-1
135 | self.assertLess(ratios['add'], 8+tol)
136 | self.assertLess(ratios['subtract'], 8+tol)
137 | self.assertLess(ratios['multiply'], 8+tol)
138 | self.assertLess(ratios['divide'], 4+tol)
139 | self.assertLess(ratios['power'], 2+tol)
140 | #print ratios
141 |
142 |
143 | def test_svd(self):
144 | mtx = ch.array(np.random.randn(100).reshape((10,10)))
145 |
146 |
147 | # Get times for svd
148 | from .linalg import svd
149 | u, s, v = svd(mtx)
150 | def setup():
151 | mtx.x = -mtx.x
152 |
153 | def go_r():
154 | _ = u.r
155 | _ = s.r
156 | _ = v.r
157 |
158 | def go_dr():
159 | _ = u.dr_wrt(mtx)
160 | _ = s.dr_wrt(mtx)
161 | _ = v.dr_wrt(mtx)
162 |
163 | cht_r = timer(setup, go_r, 20)
164 | cht_dr = timer(setup, go_dr, 1)
165 |
166 | # Get times for numpy svd
167 | def go():
168 | u,s,v = np.linalg.svd(mtx.x)
169 | npt = timer(setup = None, go = go, n = 20)
170 |
171 | # Compare
172 | #print cht_r / npt
173 | #print cht_dr / npt
174 | self.assertLess(cht_r / npt, 3.3)
175 | self.assertLess(cht_dr / npt, 2700)
176 |
177 |
178 |
179 |
180 |
181 | if __name__ == '__main__':
182 | suite = unittest.TestLoader().loadTestsFromTestCase(TestPerformance)
183 | unittest.TextTestRunner(verbosity=2).run(suite)
184 |
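A usage note on the `Timer` context manager above: its `__enter__` returns `None`, so `with Timer() as t` would bind `None`; keep a reference to the instance instead. A hedged sketch, assuming this module is importable as part of the chumpy package:

from chumpy.optional_test_performance import Timer

t = Timer()
with t:  # __enter__ stores the start counter on t.r1
    _ = sum(i * i for i in range(10**6))
print(t.elapsed)  # CPU seconds via `resource`, or milliseconds on the Windows fallback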
--------------------------------------------------------------------------------
/chumpy/chumpy/test_inner_composition.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 | """
4 | Author(s): Matthew Loper
5 |
6 | See LICENCE.txt for licensing and contact information.
7 | """
8 |
9 | import unittest
10 | from .ch import Ch, depends_on
11 |
12 | class TestInnerComposition(unittest.TestCase):
13 |
14 | def test_ic(self):
15 | child = Child(a=Ch(10))
16 | parent = Parent(child=child, aliased=Ch(50))
17 |
18 | junk = [parent.aliased_dependency for k in range(3)]
19 | self.assertTrue(parent.dcount == 1)
20 | self.assertTrue(parent.ocount == 0)
21 | self.assertTrue(parent.rcount == 0)
22 |
23 | junk = [parent.r for k in range(3)]
24 | self.assertTrue(parent.dcount == 1)
25 | self.assertTrue(parent.ocount == 1)
26 | self.assertTrue(parent.rcount == 1)
27 |
28 | parent.aliased = Ch(20)
29 | junk = [parent.aliased_dependency for k in range(3)]
30 | self.assertTrue(parent.dcount == 2)
31 | self.assertTrue(parent.ocount == 1)
32 | self.assertTrue(parent.rcount == 1)
33 |
34 | junk = [parent.r for k in range(3)]
35 | self.assertTrue(parent.dcount == 2)
36 | self.assertTrue(parent.ocount == 2)
37 | self.assertTrue(parent.rcount == 2)
38 |
39 | class Parent(Ch):
40 | dterms = ('aliased', 'child')
41 |
42 | def __init__(self, *args, **kwargs):
43 | self.dcount = 0
44 | self.ocount = 0
45 | self.rcount = 0
46 |
47 |
48 | def on_changed(self, which):
49 | assert('aliased' in which and 'child' in which)
50 | if 'aliased' in which:
51 | self.ocount += 1
52 |
53 | @depends_on('aliased')
54 | def aliased_dependency(self):
55 | self.dcount += 1
56 |
57 | @property
58 | def aliased(self):
59 | return self.child.a
60 |
61 | @aliased.setter
62 | def aliased(self, val):
63 | self.child.a = val
64 |
65 | def compute_r(self):
66 | self.rcount += 1
67 | return 0
68 |
69 | def compute_dr_wrt(self, wrt):
70 | pass
71 |
72 |
73 | class Child(Ch):
74 | dterms = ('a',)
75 |
76 |
77 |
78 | if __name__ == '__main__':
79 | suite = unittest.TestLoader().loadTestsFromTestCase(TestInnerComposition)
80 | unittest.TextTestRunner(verbosity=2).run(suite)
81 |
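The dcount/ocount/rcount checks above pin down the `depends_on` contract: the decorated value is recomputed only after its named dterm changes, however often it is read. A pure-Python analogue of that caching contract (a sketch, not chumpy's implementation):

class CachedDep(object):
    def __init__(self, a):
        self._a, self._dirty, self.dcount = a, True, 0

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, val):
        self._a, self._dirty = val, True  # invalidate on change

    @property
    def aliased_dependency(self):
        if self._dirty:  # recompute at most once per change
            self.dcount += 1
            self._cache, self._dirty = self._a * 2, False
        return self._cache

d = CachedDep(10)
_ = [d.aliased_dependency for _ in range(3)]
assert d.dcount == 1  # mirrors the dcount checks in the test above
d.a = 20
_ = [d.aliased_dependency for _ in range(3)]
assert d.dcount == 2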
--------------------------------------------------------------------------------
/chumpy/chumpy/test_optimization.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 | """
4 | Author(s): Matthew Loper
5 |
6 | See LICENCE.txt for licensing and contact information.
7 | """
8 |
9 | import time
10 | from numpy import *
11 | import unittest
12 | from . import ch
13 | from .optimization import minimize
14 | from .ch import Ch
15 | import numpy as np
16 | from scipy.optimize import rosen, rosen_der
17 | from .utils import row, col
18 |
19 |
20 | visualize = False
21 |
22 |
23 | def Rosen():
24 |
25 | args = {
26 | 'x1': Ch(-120.),
27 | 'x2': Ch(-100.)
28 | }
29 | r1 = Ch(lambda x1, x2 : (x2 - x1**2.) * 10., args)
30 | r2 = Ch(lambda x1 : x1 * -1. + 1, args)
31 |
32 | func = [r1, r2]
33 |
34 | return func, [args['x1'], args['x2']]
35 |
36 | class Madsen(Ch):
37 | dterms = ('x',)
38 | def compute_r(self):
39 | x1 = self.x.r[0]
40 | x2 = self.x.r[1]
41 | result = np.array((
42 | x1**2 + x2**2 + x1 * x2,
43 | np.sin(x1),
44 | np.cos(x2)
45 | ))
46 | return result
47 |
48 | def compute_dr_wrt(self, wrt):
49 | if wrt is not self.x:
50 | return None
51 | jac = np.zeros((3,2))
52 | x1 = self.x.r[0]
53 | x2 = self.x.r[1]
54 | jac[0,0] = 2. * x1 + x2
55 | jac[0,1] = 2. * x2 + x1
56 |
57 | jac[1,0] = np.cos(x1)
58 | jac[1,1] = 0
59 |
60 | jac[2,0] = 0
61 | jac[2,1] = -np.sin(x2)
62 |
63 | return jac
64 |
65 |
66 | def set_and_get_r(self, x_in):
67 | self.x = Ch(x_in)
68 | return col(self.r)
69 |
70 | def set_and_get_dr(self, x_in):
71 | self.x = Ch(x_in)
72 | return self.dr_wrt(self.x)
73 |
74 |
75 |
76 |
77 | class RosenCh(Ch):
78 | dterms = ('x',)
79 | def compute_r(self):
80 |
81 |         result = np.array(rosen(self.x.r))
82 |
83 | return result
84 |
85 | def set_and_get_r(self, x_in):
86 | self.x = Ch(x_in)
87 | return col(self.r)
88 |
89 | def set_and_get_dr(self, x_in):
90 | self.x = Ch(x_in)
91 | return self.dr_wrt(self.x).flatten()
92 |
93 |
94 | def compute_dr_wrt(self, wrt):
95 | if wrt is self.x:
96 | if visualize:
97 | import matplotlib.pyplot as plt
98 | residuals = np.sum(self.r**2)
99 | print('------> RESIDUALS %.2e' % (residuals,))
100 | print('------> CURRENT GUESS %s' % (str(self.x.r),))
101 | plt.figure(123)
102 |
103 | if not hasattr(self, 'vs'):
104 | self.vs = []
105 | self.xs = []
106 | self.ys = []
107 | self.vs.append(residuals)
108 | self.xs.append(self.x.r[0])
109 | self.ys.append(self.x.r[1])
110 |                 plt.clf()
111 | plt.subplot(1,2,1)
112 | plt.plot(self.vs)
113 | plt.subplot(1,2,2)
114 | plt.plot(self.xs, self.ys)
115 | plt.draw()
116 |
117 |
118 | return row(rosen_der(self.x.r))
119 |
120 |
121 |
122 | class TestOptimization(unittest.TestCase):
123 |
124 | def test_dogleg_rosen(self):
125 | obj, freevars = Rosen()
126 | minimize(fun=obj, x0=freevars, method='dogleg', options={'maxiter': 337, 'disp': False})
127 | self.assertTrue(freevars[0].r[0]==1.)
128 | self.assertTrue(freevars[1].r[0]==1.)
129 |
130 | def test_dogleg_madsen(self):
131 | obj = Madsen(x = Ch(np.array((3.,1.))))
132 | minimize(fun=obj, x0=[obj.x], method='dogleg', options={'maxiter': 34, 'disp': False})
133 | self.assertTrue(np.sum(obj.r**2)/2 < 0.386599528247)
134 |
135 | @unittest.skip('negative sign in exponent screws with reverse mode')
136 | def test_bfgs_rosen(self):
137 | from .optimization import minimize_bfgs_lsq
138 | obj, freevars = Rosen()
139 | minimize_bfgs_lsq(obj=obj, niters=421, verbose=False, free_variables=freevars)
140 | self.assertTrue(freevars[0].r[0]==1.)
141 | self.assertTrue(freevars[1].r[0]==1.)
142 |
143 | def test_bfgs_madsen(self):
144 | from .ch import SumOfSquares
145 | import scipy.optimize
146 | obj = Ch(lambda x : SumOfSquares(Madsen(x = x)) )
147 |
148 | def errfunc(x):
149 | obj.x = Ch(x)
150 | return obj.r
151 |
152 | def gradfunc(x):
153 | obj.x = Ch(x)
154 | return obj.dr_wrt(obj.x).ravel()
155 |
156 | x0 = np.array((3., 1.))
157 |
158 | # Optimize with built-in bfgs.
159 | # Note: with 8 iters, this actually requires 14 gradient evaluations.
160 | # This can be verified by setting "disp" to 1.
161 | #tm = time.time()
162 | x1 = scipy.optimize.fmin_bfgs(errfunc, x0, fprime=gradfunc, maxiter=8, disp=0)
163 | #print 'forward: took %.es' % (time.time() - tm,)
164 | self.assertLess(obj.r/2., 0.4)
165 |
166 | # Optimize with chumpy's minimize (which uses scipy's bfgs).
167 | obj.x = x0
168 | minimize(fun=obj, x0=[obj.x], method='bfgs', options={'maxiter': 8, 'disp': False})
169 | self.assertLess(obj.r/2., 0.4)
170 |
171 | def test_nested_select(self):
172 | def beales(x, y):
173 | e1 = 1.5 - x + x*y
174 | e2 = 2.25 - x + x*(y**2)
175 | e3 = 2.625 - x + x*(y**3)
176 | return {'e1': e1, 'e2': e2, 'e3': e3}
177 |
178 | x1 = ch.zeros(10)
179 | y1 = ch.zeros(10)
180 |
181 | # With a single select this worked
182 | minimize(beales(x1, y1), x0=[x1[1:4], y1], method='dogleg', options={'disp': False})
183 |
184 | x2 = ch.zeros(10)
185 | y2 = ch.zeros(10)
186 |
187 | # But this used to raise `AttributeError: 'Select' object has no attribute 'x'`
188 | minimize(beales(x2, y2), x0=[x2[1:8][:3], y2], method='dogleg', options={'disp': False})
189 | np.testing.assert_array_equal(x1, x2)
190 | np.testing.assert_array_equal(y1, y2)
191 |
192 |
193 | suite = unittest.TestLoader().loadTestsFromTestCase(TestOptimization)
194 |
195 | if __name__ == '__main__':
196 |
197 | if False: # show rosen
198 | import matplotlib.pyplot as plt
199 | visualize = True
200 | plt.ion()
201 | unittest.main()
202 | import pdb; pdb.set_trace()
203 | else:
204 | unittest.main()
205 |
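For reference, the `Rosen` residuals above can be driven outside the test harness. A minimal standalone sketch, assuming chumpy is installed; `fun` takes the list of residual terms and dogleg minimizes their sum of squares:

import chumpy as ch
from chumpy.optimization import minimize

x1, x2 = ch.Ch(-120.), ch.Ch(-100.)
r1 = (x2 - x1 ** 2.) * 10.  # same residuals as Rosen() above
r2 = x1 * -1. + 1.
minimize(fun=[r1, r2], x0=[x1, x2], method='dogleg', options={'disp': False})
print(x1.r, x2.r)  # both should approach the optimum at (1, 1)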
--------------------------------------------------------------------------------
/chumpy/chumpy/testing.py:
--------------------------------------------------------------------------------
1 | from . import ch
2 | import numpy as np
3 |
4 | fn1 = 'assert_allclose', 'assert_almost_equal', 'assert_approx_equal', 'assert_array_almost_equal', 'assert_array_almost_equal_nulp', 'assert_array_equal', 'assert_array_less', 'assert_array_max_ulp', 'assert_equal', 'assert_no_warnings', 'assert_string_equal'
5 | fn2 = 'assert_raises', 'assert_warns'
6 |
7 | # These are unhandled
8 | fn3 = 'build_err_msg', 'dec', 'decorate_methods', 'decorators', 'division', 'importall', 'jiffies', 'measure', 'memusage', 'nosetester', 'numpytest', 'print_assert_equal', 'print_function', 'raises', 'rand', 'run_module_suite', 'rundocs', 'runstring', 'test', 'utils', 'verbose'
9 |
10 | __all__ = fn1 + fn2
11 |
12 | for rtn in fn1:
13 | exec('def %s(*args, **kwargs) : return np.testing.%s(np.asarray(args[0]), np.asarray(args[1]), *args[2:], **kwargs)' % (rtn, rtn))
14 |
15 | for rtn in fn2:
16 | exec('def %s(*args, **kwargs) : return np.testing.%s(*args, **kwargs)' % (rtn, rtn))
17 |
18 |
19 |
20 | if __name__ == '__main__':
21 |     pass  # bug fix: this previously called main(), which is not defined anywhere in this module
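The `exec` loop above stamps out thin delegates to `numpy.testing`, coercing the first two arguments with `np.asarray` so chumpy objects compare by value. An exec-free equivalent (a sketch, not the module's actual code) builds the same wrappers with a closure:

import functools
import numpy as np

def _wrap_pairwise(name):
    np_func = getattr(np.testing, name)
    @functools.wraps(np_func)
    def wrapper(*args, **kwargs):
        # Coerce the two arrays under test so chumpy objects compare by value.
        return np_func(np.asarray(args[0]), np.asarray(args[1]),
                       *args[2:], **kwargs)
    return wrapper

for _name in ('assert_array_equal', 'assert_allclose'):  # subset of fn1 above
    globals()[_name] = _wrap_pairwise(_name)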
--------------------------------------------------------------------------------
/chumpy/chumpy/utils.py:
--------------------------------------------------------------------------------
1 | """
2 | Author(s): Matthew Loper
3 |
4 | See LICENCE.txt for licensing and contact information.
5 | """
6 | import scipy.sparse as sp
7 | import numpy as np
8 |
9 | def row(A):
10 | return A.reshape((1, -1))
11 |
12 |
13 | def col(A):
14 | return A.reshape((-1, 1))
15 |
16 | class timer(object):
17 | def time(self):
18 | import time
19 | return time.time()
20 | def __init__(self):
21 | self._elapsed = 0
22 | self._start = self.time()
23 | def __call__(self):
24 | if self._start is not None:
25 | return self._elapsed + self.time() - self._start
26 | else:
27 | return self._elapsed
28 | def pause(self):
29 | assert self._start is not None
30 | self._elapsed += self.time() - self._start
31 | self._start = None
32 | def resume(self):
33 | assert self._start is None
34 | self._start = self.time()
35 |
36 | def dfs_do_func_on_graph(node, func, *args, **kwargs):
37 | '''
38 | invoke func on each node of the dr graph
39 | '''
40 | for _node in node.tree_iterator():
41 | func(_node, *args, **kwargs)
42 |
43 |
44 | def sparse_is_desireable(lhs, rhs):
45 | '''
46 | Examines a pair of matrices and determines if the result of their multiplication should be sparse or not.
47 | '''
48 |     return False  # sparse outputs are currently disabled; everything below is unreachable
49 | if len(lhs.shape) == 1:
50 | return False
51 | else:
52 | lhs_rows, lhs_cols = lhs.shape
53 |
54 | if len(rhs.shape) == 1:
55 | rhs_rows = 1
56 | rhs_cols = rhs.size
57 | else:
58 | rhs_rows, rhs_cols = rhs.shape
59 |
60 | result_size = lhs_rows * rhs_cols
61 |
62 | if sp.issparse(lhs) and sp.issparse(rhs):
63 | return True
64 | elif sp.issparse(lhs):
65 | lhs_zero_rows = lhs_rows - np.unique(lhs.nonzero()[0]).size
66 | rhs_zero_cols = np.all(rhs==0, axis=0).sum()
67 |
68 | elif sp.issparse(rhs):
69 | lhs_zero_rows = np.all(lhs==0, axis=1).sum()
70 |         rhs_zero_cols = rhs_cols - np.unique(rhs.nonzero()[1]).size
71 | else:
72 | lhs_zero_rows = np.all(lhs==0, axis=1).sum()
73 | rhs_zero_cols = np.all(rhs==0, axis=0).sum()
74 |
75 | num_zeros = lhs_zero_rows * rhs_cols + rhs_zero_cols * lhs_rows - lhs_zero_rows * rhs_zero_cols
76 |
77 |     # A sparse matrix uses roughly 16 bytes per nonzero element (8 for the value plus two 4-byte indices), while a dense matrix uses 8 bytes per element, so the break-even point for sparsity is 50% nonzero. In practice the compression in a csc or csr matrix moves break-even to ~65% nonzero, which makes 50% a conservative, worst-case cutoff.
78 |     return (float(num_zeros) / float(result_size)) >= 0.5  # bug fix: was float(size), but `size` is undefined; result_size is the intended denominator
79 |
80 |
81 | def convert_inputs_to_sparse_if_necessary(lhs, rhs):
82 | '''
83 | This function checks to see if a sparse output is desireable given the inputs and if so, casts the inputs to sparse in order to make it so.
84 | '''
85 | if not sp.issparse(lhs) or not sp.issparse(rhs):
86 | if sparse_is_desireable(lhs, rhs):
87 | if not sp.issparse(lhs):
88 | lhs = sp.csc_matrix(lhs)
89 | #print "converting lhs into sparse matrix"
90 | if not sp.issparse(rhs):
91 | rhs = sp.csc_matrix(rhs)
92 | #print "converting rhs into sparse matrix"
93 | return lhs, rhs
94 |
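A usage sketch for the pausable `timer` above, assuming chumpy is importable; the sleeps are stand-ins for real work:

import time
from chumpy.utils import timer

t = timer()  # starts running immediately
t.pause()
time.sleep(0.1)  # setup: excluded while paused
t.resume()
time.sleep(0.2)  # workload: measured
print('elapsed: %.2fs' % t())  # ~0.2s, not ~0.3s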
--------------------------------------------------------------------------------
/chumpy/chumpy/version.py:
--------------------------------------------------------------------------------
1 | version = '0.67.6'
2 | short_version = version
3 | full_version = version
4 |
--------------------------------------------------------------------------------
/chumpy/circle.yml:
--------------------------------------------------------------------------------
1 | general:
2 | branches:
3 | ignore:
4 | - /zz.*/ # Don't run tests on deprecated branches.
5 |
6 | machine:
7 | environment:
8 | PYTHONPATH: /usr/local/lib/python2.7/dist-packages
9 |
10 | dependencies:
11 | pre:
12 | - sudo apt-get update
13 | # Is gfortran needed? This was copied from `.travis.yml` which included it.
14 | - sudo apt-get install -qq python-dev gfortran pkg-config liblapack-dev
15 |
16 | test:
17 | override:
18 | - make test
19 |
--------------------------------------------------------------------------------
/chumpy/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy>=1.8.1
2 | scipy>=0.13.0
3 | six>=1.11.0
4 |
--------------------------------------------------------------------------------
/chumpy/setup.py:
--------------------------------------------------------------------------------
1 | """
2 | Author(s): Matthew Loper
3 |
4 | See LICENCE.txt for licensing and contact information.
5 | """
6 |
7 | from distutils.core import setup
8 | try: # for pip >= 10
9 | from pip._internal.req import parse_requirements
10 | except ImportError: # for pip <= 9.0.3
11 | from pip.req import parse_requirements
12 | from runpy import run_path
13 |
14 | install_reqs = parse_requirements('requirements.txt', session=False)
15 | install_requires = [str(ir.req) for ir in install_reqs]
16 |
17 | def get_version():
18 | namespace = run_path('chumpy/version.py')
19 | return namespace['version']
20 |
21 | setup(name='chumpy',
22 | version=get_version(),
23 | packages = ['chumpy'],
24 | author='Matthew Loper',
25 | author_email='matt.loper@gmail.com',
26 | url='https://github.com/mattloper/chumpy',
27 | description='chumpy',
28 | license='MIT',
29 | install_requires=install_requires,
30 |
31 | # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
32 | classifiers=[
33 | # How mature is this project? Common values are
34 | # 3 - Alpha
35 | # 4 - Beta
36 | # 5 - Production/Stable
37 | 'Development Status :: 4 - Beta',
38 |
39 | # Indicate who your project is intended for
40 | 'Intended Audience :: Science/Research',
41 | 'Topic :: Scientific/Engineering :: Mathematics',
42 |
43 | # Pick your license as you wish (should match "license" above)
44 | 'License :: OSI Approved :: MIT License',
45 |
46 | # Specify the Python versions you support here. In particular, ensure
47 | # that you indicate whether you support Python 2, Python 3 or both.
48 | 'Programming Language :: Python :: 2',
49 | 'Programming Language :: Python :: 2.7',
50 |
51 | 'Operating System :: MacOS :: MacOS X',
52 | 'Operating System :: POSIX :: Linux'
53 | ],
54 | )
55 |
56 |
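A compatibility note on the setup script above: `parse_requirements` is a pip internal, and pip 20+ changed the returned objects again (`.req` became `.requirement`), so the try/except import dance keeps breaking. A pip-independent sketch (a suggested alternative, not the project's code) reads the file directly:

def load_requirements(path='requirements.txt'):
    # Skip blank lines and comments; each remaining line is one requirement.
    with open(path) as f:
        return [ln.strip() for ln in f if ln.strip() and not ln.startswith('#')]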
--------------------------------------------------------------------------------
/output/2.2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/2.2.PNG
--------------------------------------------------------------------------------
/output/2.4.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/2.4.PNG
--------------------------------------------------------------------------------
/output/3_5.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/3_5.PNG
--------------------------------------------------------------------------------
/output/alldipmodel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/alldipmodel.png
--------------------------------------------------------------------------------
/output/approaches.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/approaches.png
--------------------------------------------------------------------------------
/output/dipmodel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/dipmodel.png
--------------------------------------------------------------------------------
/output/mergeourdip.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/mergeourdip.PNG
--------------------------------------------------------------------------------
/output/onevsall.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/output/onevsall.png
--------------------------------------------------------------------------------
/output/out.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/smpl/models/SMPL_sampleShape1_f.json:
--------------------------------------------------------------------------------
1 | {
2 | "betas": [ 1.69672052, -1.02610688, 1.80937542, 0.22764221, 0.06701877,
3 | 0.55952842, -0.23857718, 0.91494987, -1.26867501, -1.53360225]
4 | }
--------------------------------------------------------------------------------
/smpl/models/SMPL_sampleShape1_m.json:
--------------------------------------------------------------------------------
1 | {
2 | "betas": [ 0.55717977, -1.81291238, -0.54321285, 0.23705893, -0.50107065,
3 | 1.24639222, 0.43375487, 0.15281353, -0.23500944, 0.10896058]
4 | }
--------------------------------------------------------------------------------
/smpl/models/SMPL_sampleShape2_f.json:
--------------------------------------------------------------------------------
1 | {
2 | "betas": [-1.50338909, 0.41133214, -0.31445075, -0.90729174, 0.89161303,
3 | -1.1674648 , -0.36843207, -0.42175958, 1.00391208, 1.2608627 ]
4 | }
--------------------------------------------------------------------------------
/smpl/models/SMPL_sampleShape2_m.json:
--------------------------------------------------------------------------------
1 | {
2 | "betas": [ 0.0067062 , 1.70791948, 0.99016648, 0.49893772, -0.93875966,
3 | 0.80808619, -0.02349927, -1.32182901, 1.70036501, -0.149897 ]
4 | }
--------------------------------------------------------------------------------
/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl
--------------------------------------------------------------------------------
/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl
--------------------------------------------------------------------------------
/smpl/smpl_webuser/lbs.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This file defines linear blend skinning for the SMPL loader which
13 | defines the effect of bones and blendshapes on the vertices of the template mesh.
14 |
15 | Modules included:
16 | - global_rigid_transformation:
17 | computes global rotation & translation of the model
18 | - verts_core: [overloaded function inherited from verts.verts_core]
19 | computes the blending of joint-influences for each vertex based on type of skinning
20 |
21 | '''
22 |
23 | from posemapper import posemap
24 | import chumpy
25 | import numpy as np
26 |
27 | def global_rigid_transformation(pose, J, kintree_table, xp):
28 | results = {}
29 | pose = pose.reshape((-1,3))
30 | id_to_col = {kintree_table[1,i] : i for i in range(kintree_table.shape[1])}
31 | parent = {i : id_to_col[kintree_table[0,i]] for i in range(1, kintree_table.shape[1])}
32 |
33 | if xp == chumpy:
34 | from posemapper import Rodrigues
35 | rodrigues = lambda x : Rodrigues(x)
36 | else:
37 | import cv2
38 | rodrigues = lambda x : cv2.Rodrigues(x)[0]
39 |
40 | with_zeros = lambda x : xp.vstack((x, xp.array([[0.0, 0.0, 0.0, 1.0]])))
41 | results[0] = with_zeros(xp.hstack((rodrigues(pose[0,:]), J[0,:].reshape((3,1)))))
42 |
43 | for i in range(1, kintree_table.shape[1]):
44 | results[i] = results[parent[i]].dot(with_zeros(xp.hstack((
45 | rodrigues(pose[i,:]),
46 | ((J[i,:] - J[parent[i],:]).reshape((3,1)))
47 | ))))
48 |
49 | pack = lambda x : xp.hstack([np.zeros((4, 3)), x.reshape((4,1))])
50 |
51 | results = [results[i] for i in sorted(results.keys())]
52 | results_global = results
53 |
54 | if True:
55 | results2 = [results[i] - (pack(
56 | results[i].dot(xp.concatenate( ( (J[i,:]), 0 ) )))
57 | ) for i in range(len(results))]
58 | results = results2
59 | result = xp.dstack(results)
60 | return result, results_global
61 |
62 |
63 | def verts_core(pose, v, J, weights, kintree_table, want_Jtr=False, xp=chumpy):
64 | A, A_global = global_rigid_transformation(pose, J, kintree_table, xp)
65 | T = A.dot(weights.T)
66 |
67 | rest_shape_h = xp.vstack((v.T, np.ones((1, v.shape[0]))))
68 |
69 | v =(T[:,0,:] * rest_shape_h[0, :].reshape((1, -1)) +
70 | T[:,1,:] * rest_shape_h[1, :].reshape((1, -1)) +
71 | T[:,2,:] * rest_shape_h[2, :].reshape((1, -1)) +
72 | T[:,3,:] * rest_shape_h[3, :].reshape((1, -1))).T
73 |
74 | v = v[:,:3]
75 |
76 | if not want_Jtr:
77 | return v
78 | Jtr = xp.vstack([g[:3,3] for g in A_global])
79 | return (v, Jtr)
80 |
81 |
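A plain-numpy illustration of the two helpers above (arrays only, no chumpy): `with_zeros` completes a 3x4 [R | t] block into a homogeneous 4x4 transform, and `pack` places a 4-vector in the last column of an otherwise-zero 4x4 matrix, which is how the rest-pose joint offsets are subtracted out:

import numpy as np

with_zeros = lambda x: np.vstack((x, np.array([[0., 0., 0., 1.]])))
pack = lambda x: np.hstack([np.zeros((4, 3)), x.reshape((4, 1))])

R = np.eye(3)               # stand-in joint rotation
j = np.array([1., 2., 3.])  # stand-in joint location
A = with_zeros(np.hstack((R, j.reshape((3, 1)))))    # 4x4 rigid transform
A_rel = A - pack(A.dot(np.concatenate((j, [0.]))))   # remove rest-pose offset
print(A_rel[:3, 3])  # translation relative to the joint; zeros for identity R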
--------------------------------------------------------------------------------
/smpl/smpl_webuser/lbs.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/smpl/smpl_webuser/lbs.pyc
--------------------------------------------------------------------------------
/smpl/smpl_webuser/posemapper.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This module defines the mapping of joint-angles to pose-blendshapes.
13 |
14 | Modules included:
15 | - posemap:
16 | computes the joint-to-pose blend shape mapping given a mapping type as input
17 |
18 | '''
19 |
20 | import chumpy as ch
21 | import numpy as np
22 | import cv2
23 |
24 |
25 | class Rodrigues(ch.Ch):
26 | dterms = 'rt'
27 |
28 | def compute_r(self):
29 | return cv2.Rodrigues(self.rt.r)[0]
30 |
31 | def compute_dr_wrt(self, wrt):
32 | if wrt is self.rt:
33 | return cv2.Rodrigues(self.rt.r)[1].T
34 |
35 |
36 | def lrotmin(p):
37 | if isinstance(p, np.ndarray):
38 | p = p.ravel()[3:]
39 | return np.concatenate([(cv2.Rodrigues(np.array(pp))[0]-np.eye(3)).ravel() for pp in p.reshape((-1,3))]).ravel()
40 | if p.ndim != 2 or p.shape[1] != 3:
41 | p = p.reshape((-1,3))
42 | p = p[1:]
43 | return ch.concatenate([(Rodrigues(pp)-ch.eye(3)).ravel() for pp in p]).ravel()
44 |
45 | def posemap(s):
46 | if s == 'lrotmin':
47 | return lrotmin
48 | else:
49 | raise Exception('Unknown posemapping: %s' % (str(s),))
50 |
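A worked check of `lrotmin` on a plain-numpy pose (run with `lrotmin` as defined above in scope; assumes OpenCV is installed): the root joint is skipped, and each remaining joint contributes the 9 entries of R - I, so a 24-joint zero pose yields 207 zeros, matching the `..._207_...` in the bundled model filenames.

import numpy as np

pose = np.zeros(24 * 3)        # SMPL-style axis-angle pose, 24 joints
feats = lrotmin(pose)          # lrotmin as defined above
print(feats.shape)             # (207,) == 23 non-root joints * 9 entries
print(np.allclose(feats, 0.))  # True: zero rotations give R - I == 0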
--------------------------------------------------------------------------------
/smpl/smpl_webuser/posemapper.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/smpl/smpl_webuser/posemapper.pyc
--------------------------------------------------------------------------------
/smpl/smpl_webuser/serialization.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This file defines the serialization functions of the SMPL model.
13 |
14 | Modules included:
15 | - save_model:
16 | saves the SMPL model to a given file location as a .pkl file
17 | - load_model:
18 | loads the SMPL model from a given file location (i.e. a .pkl file location),
19 | or a dictionary object.
20 |
21 | '''
22 |
23 | __all__ = ['load_model', 'save_model']
24 |
25 | import numpy as np
26 | import pickle
27 | import chumpy as ch
28 | from chumpy.ch import MatVecMult
29 | from posemapper import posemap
30 | from verts import verts_core
31 |
32 | def save_model(model, fname):
33 | m0 = model
34 | trainer_dict = {'v_template': np.asarray(m0.v_template),'J': np.asarray(m0.J),'weights': np.asarray(m0.weights),'kintree_table': m0.kintree_table,'f': m0.f, 'bs_type': m0.bs_type, 'posedirs': np.asarray(m0.posedirs)}
35 | if hasattr(model, 'J_regressor'):
36 | trainer_dict['J_regressor'] = m0.J_regressor
37 | if hasattr(model, 'J_regressor_prior'):
38 | trainer_dict['J_regressor_prior'] = m0.J_regressor_prior
39 | if hasattr(model, 'weights_prior'):
40 | trainer_dict['weights_prior'] = m0.weights_prior
41 | if hasattr(model, 'shapedirs'):
42 | trainer_dict['shapedirs'] = m0.shapedirs
43 | if hasattr(model, 'vert_sym_idxs'):
44 | trainer_dict['vert_sym_idxs'] = m0.vert_sym_idxs
45 | if hasattr(model, 'bs_style'):
46 | trainer_dict['bs_style'] = model.bs_style
47 | else:
48 | trainer_dict['bs_style'] = 'lbs'
49 |     pickle.dump(trainer_dict, open(fname, 'wb'), -1)  # bug fix: pickled output is binary, so the file must be opened in 'wb' mode
50 |
51 |
52 | def backwards_compatibility_replacements(dd):
53 |
54 | # replacements
55 | if 'default_v' in dd:
56 | dd['v_template'] = dd['default_v']
57 | del dd['default_v']
58 | if 'template_v' in dd:
59 | dd['v_template'] = dd['template_v']
60 | del dd['template_v']
61 | if 'joint_regressor' in dd:
62 | dd['J_regressor'] = dd['joint_regressor']
63 | del dd['joint_regressor']
64 | if 'blendshapes' in dd:
65 | dd['posedirs'] = dd['blendshapes']
66 | del dd['blendshapes']
67 | if 'J' not in dd:
68 | dd['J'] = dd['joints']
69 | del dd['joints']
70 |
71 | # defaults
72 | if 'bs_style' not in dd:
73 | dd['bs_style'] = 'lbs'
74 |
75 |
76 |
77 | def ready_arguments(fname_or_dict):
78 | print(fname_or_dict)
79 | if not isinstance(fname_or_dict, dict):
80 | #dd = pickle.load(open(fname_or_dict,'rb'),encoding='latin1')
81 | dd = pickle.load(open(fname_or_dict, 'rb'))
82 | else:
83 | dd = fname_or_dict
84 |
85 | backwards_compatibility_replacements(dd)
86 |
87 | want_shapemodel = 'shapedirs' in dd
88 | nposeparms = dd['kintree_table'].shape[1]*3
89 |
90 | if 'trans' not in dd:
91 | dd['trans'] = np.zeros(3)
92 | if 'pose' not in dd:
93 | dd['pose'] = np.zeros(nposeparms)
94 | if 'shapedirs' in dd and 'betas' not in dd:
95 | dd['betas'] = np.zeros(dd['shapedirs'].shape[-1])
96 |
97 | for s in ['v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 'betas', 'J']:
98 | if (s in dd) and not hasattr(dd[s], 'dterms'):
99 | dd[s] = ch.array(dd[s])
100 |
101 | if want_shapemodel:
102 | dd['v_shaped'] = dd['shapedirs'].dot(dd['betas'])+dd['v_template']
103 | v_shaped = dd['v_shaped']
104 | J_tmpx = MatVecMult(dd['J_regressor'], v_shaped[:,0])
105 | J_tmpy = MatVecMult(dd['J_regressor'], v_shaped[:,1])
106 | J_tmpz = MatVecMult(dd['J_regressor'], v_shaped[:,2])
107 | dd['J'] = ch.vstack((J_tmpx, J_tmpy, J_tmpz)).T
108 | dd['v_posed'] = v_shaped + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose']))
109 | else:
110 | dd['v_posed'] = dd['v_template'] + dd['posedirs'].dot(posemap(dd['bs_type'])(dd['pose']))
111 |
112 | return dd
113 |
114 |
115 |
116 | def load_model(fname_or_dict):
117 | dd = ready_arguments(fname_or_dict)
118 |
119 | args = {
120 | 'pose': dd['pose'],
121 | 'v': dd['v_posed'],
122 | 'J': dd['J'],
123 | 'weights': dd['weights'],
124 | 'kintree_table': dd['kintree_table'],
125 | 'xp': ch,
126 | 'want_Jtr': True,
127 | 'bs_style': dd['bs_style']
128 | }
129 |
130 | result, Jtr = verts_core(**args)
131 | result = result + dd['trans'].reshape((1,3))
132 | result.J_transformed = Jtr + dd['trans'].reshape((1,3))
133 |
134 | for k, v in dd.items():
135 | setattr(result, k, v)
136 |
137 | return result
138 |
139 |
140 | #m= load_model( '../models/basicModel_f_lbs_10_207_0_v1.0.0.pkl' )
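The commented line above hints at usage. A slightly fuller sketch (the model path is illustrative, and this runs with `load_model` as defined above, under the Python 2 environment its implicit relative imports assume):

import numpy as np

m = load_model('../models/basicModel_f_lbs_10_207_0_v1.0.0.pkl')
m.pose[:] = np.zeros(m.pose.size)   # rest pose; assign here to drive the model
verts = m.r                         # (6890, 3) posed vertex positions
joints = m.J_transformed.r          # (24, 3) posed joint locations
print(verts.shape, joints.shape)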
--------------------------------------------------------------------------------
/smpl/smpl_webuser/serialization.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/smpl/smpl_webuser/serialization.pyc
--------------------------------------------------------------------------------
/smpl/smpl_webuser/verts.py:
--------------------------------------------------------------------------------
1 | '''
2 | Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved.
3 | This software is provided for research purposes only.
4 | By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license
5 |
6 | More information about SMPL is available here http://smpl.is.tue.mpg.de
7 | For comments or questions, please email us at: smpl@tuebingen.mpg.de
8 |
9 |
10 | About this file:
11 | ================
12 | This file defines the basic skinning modules for the SMPL loader which
13 | defines the effect of bones and blendshapes on the vertices of the template mesh.
14 |
15 | Modules included:
16 | - verts_decorated:
17 | creates an instance of the SMPL model which inherits model attributes from another
18 | SMPL model.
19 | - verts_core: [overloaded function inherited by lbs.verts_core]
20 | computes the blending of joint-influences for each vertex based on type of skinning
21 |
22 | '''
23 |
24 | import chumpy
25 | import lbs
26 | from posemapper import posemap
27 | import scipy.sparse as sp
28 | from chumpy.ch import MatVecMult
29 |
30 | def ischumpy(x): return hasattr(x, 'dterms')
31 |
32 | def verts_decorated(trans, pose,
33 | v_template, J, weights, kintree_table, bs_style, f,
34 | bs_type=None, posedirs=None, betas=None, shapedirs=None, want_Jtr=False):
35 |
36 | for which in [trans, pose, v_template, weights, posedirs, betas, shapedirs]:
37 | if which is not None:
38 | assert ischumpy(which)
39 |
40 | v = v_template
41 |
42 | if shapedirs is not None:
43 | if betas is None:
44 | betas = chumpy.zeros(shapedirs.shape[-1])
45 | v_shaped = v + shapedirs.dot(betas)
46 | else:
47 | v_shaped = v
48 |
49 | if posedirs is not None:
50 | v_posed = v_shaped + posedirs.dot(posemap(bs_type)(pose))
51 | else:
52 | v_posed = v_shaped
53 |
54 | v = v_posed
55 |
56 | if sp.issparse(J):
57 | regressor = J
58 | J_tmpx = MatVecMult(regressor, v_shaped[:,0])
59 | J_tmpy = MatVecMult(regressor, v_shaped[:,1])
60 | J_tmpz = MatVecMult(regressor, v_shaped[:,2])
61 | J = chumpy.vstack((J_tmpx, J_tmpy, J_tmpz)).T
62 | else:
63 | assert(ischumpy(J))
64 |
65 | assert(bs_style=='lbs')
66 | result, Jtr = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr=True, xp=chumpy)
67 |
68 | tr = trans.reshape((1,3))
69 | result = result + tr
70 | Jtr = Jtr + tr
71 |
72 | result.trans = trans
73 | result.f = f
74 | result.pose = pose
75 | result.v_template = v_template
76 | result.J = J
77 | result.weights = weights
78 | result.kintree_table = kintree_table
79 | result.bs_style = bs_style
80 | result.bs_type =bs_type
81 | if posedirs is not None:
82 | result.posedirs = posedirs
83 | result.v_posed = v_posed
84 | if shapedirs is not None:
85 | result.shapedirs = shapedirs
86 | result.betas = betas
87 | result.v_shaped = v_shaped
88 | if want_Jtr:
89 | result.J_transformed = Jtr
90 | return result
91 |
92 | def verts_core(pose, v, J, weights, kintree_table, bs_style, want_Jtr=False, xp=chumpy):
93 |
94 | if xp == chumpy:
95 | assert(hasattr(pose, 'dterms'))
96 | assert(hasattr(v, 'dterms'))
97 | assert(hasattr(J, 'dterms'))
98 | assert(hasattr(weights, 'dterms'))
99 |
100 | assert(bs_style=='lbs')
101 | result = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr, xp)
102 |
103 | return result
104 |
--------------------------------------------------------------------------------
/smpl/smpl_webuser/verts.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/crazy-bot/Human-Motion-Estimation-from-sparse-IMUs/09782b1144059e451d1128d07782a7da9186a11a/smpl/smpl_webuser/verts.pyc
--------------------------------------------------------------------------------