├── .idea
│   ├── .gitignore
│   ├── Automated-Gait-Analysis.iml
│   ├── inspectionProfiles
│   │   └── profiles_settings.xml
│   ├── misc.xml
│   ├── modules.xml
│   ├── other.xml
│   └── vcs.xml
├── README.md
├── __pycache__
│   └── visualizer.cpython-37.pyc
├── demo
│   ├── example.gif
│   ├── example2.gif
│   ├── example3_1.png
│   ├── example3_2.png
│   └── example3_3.png
├── gc_classification.py
├── gc_comparison.py
├── gc_pigparsing.py
├── gc_preprocessing.py
├── kinematics_extraction.py
├── kinematics_processing.py
├── pose_estimation.py
└── visualizer.py
/.idea/.gitignore:
--------------------------------------------------------------------------------
1 |
2 | # Default ignored files
3 | /workspace.xml
--------------------------------------------------------------------------------
/.idea/Automated-Gait-Analysis.iml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.idea/inspectionProfiles/profiles_settings.xml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.idea/misc.xml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.idea/modules.xml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.idea/other.xml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.idea/vcs.xml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Automated-Gait-Analysis
2 | A human pose estimation system for videos that aims to extract features describing a gait (walk) and deploy classifiers to detect abnormalities and more, with respect to kinematics.
3 | This is my final-year project for my BSc Honours in Artificial Intelligence.
4 |
5 | ![](demo/example.gif)
6 | First step: Extract keypoints from synchronized video sequences using pre-trained AI models: the object detector YOLOv3 and the pose estimator AlphaPose. A keypoint represents the position of a joint as [x, y]. A pose is a list of 17 keypoints. The data is a list of n poses, where n is the number of frames in the video. These are stored in a JSON file.
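7 |
8 | A rough sketch of one capture entry in the resulting JSON (the field names follow what kinematics_extraction.py reads; the coordinate values here are placeholders):
9 |
10 | ```python
11 | # One capture entry, as later read by kinematics_extraction.py.
12 | # A keypoint is [x, y]; [-1, -1] marks a joint that was not detected.
13 | capture = {
14 |     'dataS': [[[-1, -1]] * 17],  # side view: one list of 17 keypoints per frame
15 |     'dataF': [[[-1, -1]] * 17],  # front view: same layout
16 |     'dimS': [1920, 1080],        # side-view frame dimensions (placeholder values)
17 |     'lenS': 1,                   # frame counts of each view
18 |     'lenF': 1,
19 | }
20 | ```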
21 | ![](demo/example2.gif)
22 |
23 |
24 |
25 | Second step: Extract raw angle kinematics. As demonstrated above, knee flexion and extension are computed using the side view. Hip flexion and extension are also computed using the side view, while hip and knee abduction/adduction are computed using the front view.
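26 |
27 | For intuition, a minimal sketch of the side-view knee angle (mirroring calc_knee_angle_S in kinematics_extraction.py, minus its walking-direction sign handling and missing-keypoint checks):
28 |
29 | ```python
30 | import numpy as np
31 |
32 | # Angle at the knee between the thigh line extended past the knee
33 | # and the knee-to-ankle vector, in degrees.
34 | def knee_angle_side(hip, knee, ankle):
35 |     b = np.array(knee)
36 |     m_ba = b - np.array(hip)  # hip-to-knee direction, extended beyond the knee
37 |     bc = np.array(ankle) - b  # knee-to-ankle
38 |     cosine = np.dot(m_ba, bc) / (np.linalg.norm(m_ba) * np.linalg.norm(bc))
39 |     return float(np.degrees(np.arccos(cosine)))
40 |
41 | print(knee_angle_side([0, 10], [0, 0], [3, -9]))  # ~18.4 degrees of flexion
42 | ```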
43 |
44 | ![](demo/example3_1.png)
45 | ![](demo/example3_2.png)
46 | ![](demo/example3_3.png)
47 |
48 |
49 |
50 |
51 |
52 |
53 | Third step: Process the kinematics. The processing pipeline follows: gap filling, smoothing, gait cycle slicing, resampling and finally averaging. Demonstrated above are a smoothed gait cycle, an average of all gait cycles in one capture, and an average of all gait cycles in six captures (three walking left to right and another three walking right to left).
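54 |
55 | A condensed sketch of the per-signal processing (mirroring kinematics_processing.py; the sample values are illustrative):
56 |
57 | ```python
58 | import numpy as np
59 | import pandas as pd
60 | from scipy import signal
61 |
62 | # Gap-fill (linear), smooth (exponential moving average, weight 0.8),
63 | # then resample each sliced gait cycle to 101 points (0..100% of the cycle).
64 | angles = [10.0, None, 14.0, 20.0, 18.0, 12.0, 9.0, 10.5]  # raw joint angles
65 | filled = pd.Series(angles, dtype=float).interpolate(method='linear').tolist()
66 |
67 | smoothed, last = [], filled[0]
68 | for a in filled:
69 |     last = last * 0.8 + 0.2 * a  # as in smooth()
70 |     smoothed.append(last)
71 |
72 | cycles = [signal.resample(smoothed, 101)]  # in practice: one entry per sliced gait cycle
73 | mean_cycle = np.mean(cycles, axis=0)       # the averaged, time-normalized gait cycle
74 | ```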
75 |
76 |
--------------------------------------------------------------------------------
/__pycache__/visualizer.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RussellSB/automated-gait-analysis/8e1cf4ed6902d66208d4140bf4589edb179c110a/__pycache__/visualizer.cpython-37.pyc
--------------------------------------------------------------------------------
/demo/example.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RussellSB/automated-gait-analysis/8e1cf4ed6902d66208d4140bf4589edb179c110a/demo/example.gif
--------------------------------------------------------------------------------
/demo/example2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RussellSB/automated-gait-analysis/8e1cf4ed6902d66208d4140bf4589edb179c110a/demo/example2.gif
--------------------------------------------------------------------------------
/demo/example3_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RussellSB/automated-gait-analysis/8e1cf4ed6902d66208d4140bf4589edb179c110a/demo/example3_1.png
--------------------------------------------------------------------------------
/demo/example3_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RussellSB/automated-gait-analysis/8e1cf4ed6902d66208d4140bf4589edb179c110a/demo/example3_2.png
--------------------------------------------------------------------------------
/demo/example3_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RussellSB/automated-gait-analysis/8e1cf4ed6902d66208d4140bf4589edb179c110a/demo/example3_3.png
--------------------------------------------------------------------------------
/gc_classification.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # GC CLASSIFICATION
3 | #----------------------------------------------------------------------------------
4 | # Input: Pre-processed gait cycles, Output: LR, SVM & CNN Summaries
5 | # Trains on a training split and evaluates each classifier on a testing split
6 | #==================================================================================
7 | # Imports
8 | #==================================================================================
9 | import pickle
10 | import keras
11 | import numpy as np
12 | import matplotlib.pyplot as plt
13 | from sklearn.model_selection import train_test_split
14 | from sklearn.dummy import DummyClassifier
15 | from keras.models import Sequential
16 | from keras.layers import Dense, Dropout, GlobalAveragePooling1D
17 | from keras.layers.convolutional import Conv1D, MaxPooling1D
18 | from sklearn import metrics
19 | from sklearn.metrics import ConfusionMatrixDisplay
20 | from sklearn.linear_model import LogisticRegression
21 | from sklearn.svm import LinearSVC
22 | import random
23 | from sklearn.preprocessing import LabelEncoder
24 | from statistics import mean, stdev
25 | from tqdm import trange
26 |
27 | #==================================================================================
28 | # Constants
29 | #==================================================================================
30 | # General hyper-parameters
31 | LABEL = 'abnormality' # options: 'abnormality', 'gender', 'age', 'id'
32 | BINARY = True
33 | SPLIT_BY_ID = True
34 | SHOW_TEST_IDs = True
35 | TEST_SIZE = 0.2
36 | REPEAT = True
37 | REPEAT_AMOUNT = 50
38 | SEED = random.randint(1, 1000)
39 |
40 | if(REPEAT):
41 | acc_dc = []
42 | acc_lr = []
43 | acc_svm = []
44 | acc_cnn = []
45 |
46 | # Logistic regression hyper-parameter
47 | solver = 'liblinear' # for small datasets, improves performance
48 |
49 | # CNN hyper-parameters
50 | epochs = 100
51 | filter1 = 101
52 | filter2 = 162
53 | kernel = 10
54 | dropout_rate = 0.5
55 |
56 | # more epochs for id than others
57 | # NOTEWORTHY SEEDS
58 | ### V. GOOD
59 | # id: 182, 495 (81%, 81%, 72%), 179, 988 # identification needs more epochs and test size
60 | # gender: 238 (91%, 94%, 66%)
61 | # abnormality: 168 (97%, 97%, 100%)
62 | # abnormality: 899 (70%, 70%, 74% - all of what's in the test set is normal)
63 | # abnormality: 577 (98%, 98%, 73%), 588
64 |
65 | ### V. BAD
66 | # abnormality: 854, 791, 267, 617, 476, 609, 83, 482, 820, 112... etc (CNN classifies everything as normal -
67 | # except SVM and LR classify at 70%)
68 | # age: all.... (around 30%)
69 |
70 | #==================================================================================
71 | # Methods
72 | #==================================================================================
73 | # Evaluates the sklearn model w.r.t confusion matrix and ground-truth metrics
74 | def evaluate_sk_summary(classifier, X_test, y_test, sklearnType, cmap):
75 | score = classifier.score(X_test, y_test)
76 | if(REPEAT):
77 | if(sklearnType == 'dc'): acc_dc.append(score)
78 | if (sklearnType == 'lr'): acc_lr.append(score)
79 | if (sklearnType == 'svm'): acc_svm.append(score)
80 |
81 | if(not REPEAT):
82 | predicted = classifier.predict(X_test)
83 | print("Classification report for", sklearnType, "classifier %s:\n%s\n"
84 | % (classifier, metrics.classification_report(y_test, predicted)))
85 | disp = metrics.plot_confusion_matrix(classifier, X_test, y_test,
86 | include_values=True, cmap=cmap)
87 | disp.figure_.suptitle("")
88 | print("Confusion matrix for " + sklearnType + ":\n%s" % disp.confusion_matrix)
89 | plt.xlabel('Predicted ' + LABEL)
90 | plt.ylabel('True ' + LABEL)
91 | plt.title('Confusion Matrix (Accuracy: {:.2f})'.format(score))
92 | plt.show()
93 | plt.close()
94 |
95 | # Plots confusion matrix without requiring a classifier
96 | def plot_cm(cm, labels, name, cmap, score):
97 | disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
98 | disp.plot(include_values=True, cmap=cmap)
99 | plt.xlabel('Predicted ' + LABEL)
100 | plt.ylabel('True ' + LABEL)
101 | plt.title('Confusion Matrix (Accuracy: {:.2f})'.format(score)) # name
102 | plt.show()
103 |
104 | # Evaluates the keras neural network model w.r.t confusion matrix and ground-truth metrics
105 | def evaluate_nn_summary(model, X_test, y_test, disp_labels):
106 | pred = model.predict_classes(X_test)
107 | score = sum(y_test == pred) / len(y_test)
108 |
109 | if(REPEAT):
110 | acc_cnn.append(score)
111 |
112 | if(not REPEAT):
113 | print('Classification Report CNN:')
114 | print(metrics.classification_report(y_test, pred))
115 | cm = metrics.confusion_matrix(y_test, pred)
116 | print('Confusion matrix for CNN:\n', cm)
117 | plot_cm(cm, disp_labels, 'CNN', plt.cm.Reds, score)
118 |
119 | # Define and Evaluate a baseline model (Dummy classifier)
120 | def dc(X_train, X_test, y_train, y_test):
121 | classifier = DummyClassifier(strategy="most_frequent")
122 | classifier.fit(X_train, y_train)
123 | evaluate_sk_summary(classifier, X_test, y_test, 'dc', plt.cm.Reds)
124 | return classifier
125 |
126 | # Define and Evaluate a Logistic Regression Model
127 | def lr(X_train, X_test, y_train, y_test):
128 | classifier = LogisticRegression(solver=solver, random_state=SEED)
129 | classifier.fit(X_train, y_train)
130 | evaluate_sk_summary(classifier, X_test, y_test, 'lr', plt.cm.Reds)
131 | return classifier
132 |
133 | # Define and Evaluate an SVM Model
134 | def svm(X_train, X_test, y_train, y_test):
135 | classifier = LinearSVC(random_state=SEED)
136 | classifier.fit(X_train, y_train)
137 | evaluate_sk_summary(classifier, X_test, y_test, 'svm', plt.cm.Reds)
138 | return classifier
139 |
140 | # Flattens each 8x101 sample to one single long 1x808 vector
141 | def flattenData(data):
142 | X = []
143 | for d in data:
144 | x = []
145 | for i in range(0, 8):
146 | x.extend(d[i].tolist())
147 | X.append(x)
148 | X = np.array(X)
149 | return X
150 |
151 | def splitById(X, y, labels_id):
152 | id_count = len(np.unique(labels_id))
153 | test_size = int(TEST_SIZE * id_count)
154 |
155 | test_ids = []
156 | random.seed(SEED)
157 |
158 | # Ensures that participants in test set are different from training
159 | # and that the test set has a variety - and doesn't always have the same
160 | # type of target
161 | for i in range(0, test_size):
162 | while True:
163 | random_id = random.randint(1, id_count-1)
164 | if(i > 0):
165 | id_prev = test_ids[-1]
166 | for j in range(0, len(X)):
167 | if(labels_id[j] == id_prev):
168 | label_prev = y[j]
169 | break
170 | for j in range(0, len(X)):
171 | if(labels_id[j] == random_id):
172 | label_curr = y[j]
173 | break
174 | if(i == 0 or (random_id != test_ids[-1] and label_curr != label_prev)): # accept the first pick; afterwards require a new id whose label differs from the previous pick
175 | test_ids.append(random_id)
176 | break
177 |
178 | X_train, y_train, X_test, y_test = [], [], [], []
179 | for i in range(0, len(X)):
180 | test = False
181 | for x in test_ids:
182 | if(labels_id[i] == x):
183 | test = True
184 | break
185 |
186 | if(test):
187 | X_test.append(X[i])
188 | y_test.append(y[i])
189 | else:
190 | X_train.append(X[i])
191 | y_train.append(y[i])
192 |
193 | if(not REPEAT):
194 | print('==========TRAINING==========')
195 | for label in set(y):
196 | print(str(label) + ':', y_train.count(label))
197 | print('==========TESTING==========')
198 | for label in set(y):
199 | print(str(label) + ':', y_test.count(label))
200 |
201 | if (SHOW_TEST_IDs):
202 | print('(Test IDs:', np.unique(test_ids),')')
203 |
204 | train_set = list(zip(X_train, y_train))
205 | random.shuffle(train_set)
206 | X_train, y_train = zip(*train_set)
207 |
208 | test_set = list(zip(X_test, y_test))
209 | random.shuffle(test_set)
210 | X_test, y_test = zip(*test_set)
211 | print(len(y_test))
212 | return (X_train, X_test, y_train, y_test)
213 |
214 | def mlModels(data_train_test):
215 | X_train = flattenData(data_train_test[0])
216 | X_test = flattenData(data_train_test[1])
217 | y_train = np.array(data_train_test[2])
218 | y_test = np.array(data_train_test[3])
219 |
220 | model_dc = dc(X_train, X_test, y_train, y_test)
221 | model_lr = lr(X_train, X_test, y_train, y_test)
222 | model_svm = svm(X_train, X_test, y_train, y_test)
223 |
224 | return model_dc, model_lr, model_svm
225 |
226 | def nn(data_train_test):
227 | X_train = data_train_test[0]
228 | X_test = data_train_test[1]
229 | y_train = data_train_test[2]
230 | y_test = data_train_test[3]
231 |
232 | # Pre-process to (samples, time-steps, features)
233 | X_train = np.array([np.array(x).transpose() for x in X_train])
234 | X_test = np.array([np.array(x).transpose() for x in X_test])
235 |
236 | y = [*y_train, *y_test]
237 | label_encoder = LabelEncoder().fit(y) # fit once on all labels so train and test share one encoding
238 | disp_labels = sorted(list(dict.fromkeys(y).keys()))
239 | y_test = label_encoder.transform(y_test)
240 | y_train = label_encoder.transform(y_train)
241 | y = [*y_train, *y_test]
242 | n_outputs = len(np.unique(y))
243 | y_train = keras.utils.to_categorical(y_train, num_classes=n_outputs)
244 | n_timesteps, n_features = X_train.shape[1], X_train.shape[2]
245 |
246 | model_m = Sequential()
247 | model_m.add(Conv1D(filter1, kernel, activation='relu', input_shape=(n_timesteps, n_features)))
248 | model_m.add(MaxPooling1D(3))
249 | model_m.add(Conv1D(filter2, kernel, activation='relu'))
250 | model_m.add(GlobalAveragePooling1D())
251 | model_m.add(Dropout(dropout_rate))
252 | model_m.add(Dense(n_outputs, activation='softmax'))
253 | #print(model_m.summary())
254 | if(BINARY): loss = 'binary_crossentropy'
255 | else: loss = 'categorical_crossentropy'
256 | model_m.compile(loss=loss,
257 | optimizer='adam', metrics=['accuracy'])
258 |
259 | model_m.fit(X_train, y_train, epochs=epochs, verbose=0) # validation_split=TEST_SIZE
260 | evaluate_nn_summary(model_m, X_test, y_test, disp_labels)
261 | return model_m
262 |
263 | #==================================================================================
264 | # Main
265 | #==================================================================================
266 | def main():
267 | DATA = 'data' if LABEL != 'abnormality' else 'data_na'
268 | ID = 'labels_id' if LABEL != 'abnormality' else 'labels_id_na'
269 |
270 | with open('..\\classifier_data\\' + DATA + '.pickle', 'rb') as f:
271 | data = pickle.load(f)
272 | with open('..\\classifier_data\\labels_' + LABEL + '.pickle', 'rb') as f:
273 | labels = pickle.load(f)
274 | with open('..\\classifier_data\\' + ID + '.pickle', 'rb') as f:
275 | labels_id = pickle.load(f)
276 |
277 | n = REPEAT_AMOUNT if(REPEAT) else 1
278 | for _ in trange(n, ncols=100):
279 | print()
280 | if(REPEAT): global SEED; SEED = random.randint(1, 1000) # rebind the module-level SEED so splitById draws a fresh split each repeat
281 | if(SPLIT_BY_ID):
282 | data_train_test = splitById(data, labels, labels_id)
283 | else:
284 | data_train_test = train_test_split(data, labels, test_size=TEST_SIZE, shuffle=True, random_state=SEED)
285 |
286 | mlModels(data_train_test)
287 | nn(data_train_test)
288 | if(not REPEAT): print('SEED:', SEED)
289 |
290 | if(REPEAT):
291 | print("\n,\n,\n,\n,\n, \n,\n,\n,\n,\n, \n,\n,\n,\n,\n")
292 | print('==== Average of ', n, ' times ========== ')
293 | print('Base classifier: {:.4f} % accuracy with {:.4f} % deviance'.format(mean(acc_dc) * 100, stdev(acc_dc) * 100))
294 | print('Logistic Regression: {:.4f} % accuracy with {:.4f} % deviance'.format(mean(acc_lr) * 100, stdev(acc_lr) * 100))
295 | print('Support Vector Machine: {:.4f} % accuracy with {:.4f} % deviance'.format(mean(acc_svm) * 100, stdev(acc_svm) * 100))
296 | print('Convolutional Neural Network: {:.4f} % accuracy with {:.4f} % deviance'.format(mean(acc_cnn) * 100, stdev(acc_cnn) * 100))
297 |
298 | if __name__ == '__main__':
299 | main()
300 |
--------------------------------------------------------------------------------
/gc_comparison.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # COMPARISON
3 | #----------------------------------------------------------------------------------
4 | # Input: Gait cycles, Output: Similarity
5 | # Compares the kinematics of my automated system with
6 | # the lab's Vicon Plug-In-Gait System
7 | #==================================================================================
8 | # Imports
9 | #==================================================================================
10 | import json
11 | import matplotlib.pyplot as plt
12 | from scipy import stats
13 | import numpy as np
14 | import math
15 | from statistics import mean
16 |
17 | #==================================================================================
18 | # Constants
19 | #==================================================================================
20 | red = "#ff002b" # "#FF4A7E"# "#E0082D"
21 | blue = "#0077ff" # "#72B6E9" # "#55BED7"#
22 | red2 = "#383838" #"#682c8e" #"#ff4800"# "#ff002b"
23 | blue2 = "#383838" # "#682c8e" # "#0077ff"
24 |
25 | #==================================================================================
26 | # Methods
27 | #==================================================================================
28 |
29 | # Compare all visually through graph plots
30 | def compare_visually_all(gc_PE, gc_PIG, code, name):
31 | dictAvg = {}
32 | dictAvg['pe_knee_L'] = gc_PE['knee_' + code + '_avg']['gcL_avg']
33 | dictAvg['pig_knee_L'] = gc_PIG['knee_' + code + '_avg']['gcL_avg']
34 | dictAvg['pe_knee_R'] = gc_PE['knee_' + code + '_avg']['gcR_avg']
35 | dictAvg['pig_knee_R'] = gc_PIG['knee_' + code + '_avg']['gcR_avg']
36 |
37 | dictAvg['pe_hip_L'] = gc_PE['hip_' + code + '_avg']['gcL_avg']
38 | dictAvg['pig_hip_L'] = gc_PIG['hip_' + code + '_avg']['gcL_avg']
39 | dictAvg['pe_hip_R'] = gc_PE['hip_' + code + '_avg']['gcR_avg']
40 | dictAvg['pig_hip_R'] = gc_PIG['hip_' + code + '_avg']['gcR_avg']
41 |
42 | dictSTD = {}
43 | dictSTD['pe_knee_L'] = gc_PE['knee_' + code + '_avg']['gcL_std']
44 | dictSTD['pig_knee_L'] = gc_PIG['knee_' + code + '_avg']['gcL_std']
45 | dictSTD['pe_knee_R'] = gc_PE['knee_' + code + '_avg']['gcR_std']
46 | dictSTD['pig_knee_R'] = gc_PIG['knee_' + code + '_avg']['gcR_std']
47 |
48 | dictSTD['pe_hip_L'] = gc_PE['hip_' + code + '_avg']['gcL_std']
49 | dictSTD['pig_hip_L'] = gc_PIG['hip_' + code + '_avg']['gcL_std']
50 | dictSTD['pe_hip_R'] = gc_PE['hip_' + code + '_avg']['gcR_std']
51 | dictSTD['pig_hip_R'] = gc_PIG['hip_' + code + '_avg']['gcR_std']
52 |
53 | print("PE: {:.0f}L, {:.0f}R".format(
54 | gc_PE['knee_' + code + '_avg']['gcL_count'], gc_PE['knee_' + code + '_avg']['gcR_count']))
55 | print("PIG: {:.0f}L, {:.0f}R".format(
56 | gc_PIG['knee_' + code + '_avg']['gcL_count'], gc_PIG['knee_' + code + '_avg']['gcR_count']))
57 |
58 | isLeft = True
59 | color=red
60 | color2=red2
61 | i = 0
62 | for key in dictAvg:
63 | if (i % 2): # odd iteration: PIG entry, compared against the PE entry stored on the previous iteration
64 | y1 = dictAvg[key]
65 | y1STD = dictSTD[key]
66 |
67 | fig, ax = plt.subplots()
68 | side = 'Left ' if(isLeft) else 'Right '
69 | title = side + (key.split('_')[1]).capitalize() + ' ' + name
70 | ax.set_title(title)
71 | ax.set_xlabel('Time (%)')
72 | ax.set_ylabel(r"${\Theta}$ (degrees)")
73 |
74 | ax.plot(y0, color=color, label='Automated') # mean PE
75 | std1 = (np.array(y0) + np.array(y0STD)).tolist()
76 | std2 = (np.array(y0) - np.array(y0STD)).tolist()
77 | ax.plot(std1, '--', color=color, alpha=0)
78 | ax.plot(std2, '--', color=color, alpha=0)
79 | ax.fill_between(range(0,101), std1, std2, color=color, alpha=0.15)
80 |
81 | ax.plot(y1, color=color2, label='Marker-based') # mean PIG
82 | std1 = (np.array(y1) + np.array(y1STD)).tolist()
83 | std2 = (np.array(y1) - np.array(y1STD)).tolist()
84 | ax.plot(std1, '--', color=color2, alpha=0)
85 | ax.plot(std2, '--', color=color2, alpha=0)
86 | ax.fill_between(range(0,101), std1, std2, color=color2, alpha=0.15)
87 |
88 | plt.xlim(0, 100)
89 | ax.legend()
90 | plt.show()
91 | plt.close()
92 |
93 | isLeft = not isLeft
94 | color = red if (isLeft) else blue
95 | color2 = red2 if (isLeft) else blue2
96 | else:
97 | y0 = dictAvg[key] # even iteration: PE entry, stashed for the comparison on the next iteration
98 | y0STD = dictSTD[key]
99 | i += 1
100 |
101 | def errors_all(gc_PE, gc_PIG, code, name):
102 | dictAvg = {}
103 | dictAvg['pe_knee_L'] = gc_PE['knee_' + code + '_avg']['gcL_avg']
104 | dictAvg['pig_knee_L'] = gc_PIG['knee_' + code + '_avg']['gcL_avg']
105 | dictAvg['pe_knee_R'] = gc_PE['knee_' + code + '_avg']['gcR_avg']
106 | dictAvg['pig_knee_R'] = gc_PIG['knee_' + code + '_avg']['gcR_avg']
107 |
108 | dictAvg['pe_hip_L'] = gc_PE['hip_' + code + '_avg']['gcL_avg']
109 | dictAvg['pig_hip_L'] = gc_PIG['hip_' + code + '_avg']['gcL_avg']
110 | dictAvg['pe_hip_R'] = gc_PE['hip_' + code + '_avg']['gcR_avg']
111 | dictAvg['pig_hip_R'] = gc_PIG['hip_' + code + '_avg']['gcR_avg']
112 |
113 | i = 0
114 | overall_error = []
115 | for key in dictAvg:
116 | if (i % 2): # COMPARE
117 | y1 = dictAvg[key] # MB
118 |
119 | error = []
120 | for n in range(len(y1)):
121 | error.append(abs(y1[n] - y0[n]))
122 | overall_error += error
123 | print("{}\t\t{:.2f}\t{:.2f}\t{:.2f}".format(code+' '+key[4:], min(error), mean(error), max(error)))
124 |
125 | else:
126 | y0 = dictAvg[key] # PE
127 | i += 1
128 |
129 | return overall_error
130 |
131 | #==================================================================================
132 | # Main
133 | #==================================================================================
134 | def main():
135 | i = '14' # Set participant you want to compare here (Either 1, 3, 5 or 14)
136 | filePath = '..\\Part'+ i + '\\'
137 | filePE = filePath + 'Part' + i + '_gc.json'
138 | filePIG = filePath + 'Part' + i + '_gc_pig.json'
139 | with open(filePE, 'r') as f:
140 | gc_PE = json.load(f)
141 | with open(filePIG, 'r') as f:
142 | gc_PIG = json.load(f)
143 |
144 | # Simply displaying them visually by mean and standard deviation
145 | compare_visually_all(gc_PE, gc_PIG, 'FlexExt', 'Flexion and Extension')
146 | compare_visually_all(gc_PE, gc_PIG, 'AbdAdd', 'Abduction and Adduction')
147 |
148 | print('KinematicVariable \tMin \tAvg \tMax')
149 | overall_error = []
150 | overall_error += errors_all(gc_PE, gc_PIG, 'FlexExt', 'Flexion and Extension')
151 | overall_error += errors_all(gc_PE, gc_PIG, 'AbdAdd', 'Abduction and Adduction')
152 | print('=======================================')
153 | print("Average angle error overall: {:.2f}".format(mean(overall_error)))
154 |
155 | if __name__ == '__main__':
156 | main()
--------------------------------------------------------------------------------
/gc_pigparsing.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # PLUG IN-GAIT PARSING
3 | #----------------------------------------------------------------------------------
4 | # Input: Excel data, Output: JSON
5 | # Parses gait cycle excel data processed using
6 | # Vicon Plug-In-Gait model with marker-based motion capture
7 | #==================================================================================
8 | # Imports
9 | #==================================================================================
10 | import json
11 | import numpy as np
12 | import pandas as pd
13 |
14 | #==================================================================================
15 | # Methods
16 | #==================================================================================
17 | # Filling in gaps, to cater for ray occlusions in plug in gait
18 | def gapfill(angleList):
19 | df = pd.DataFrame({'ang': angleList})
20 | df['ang'].interpolate(method='linear', inplace=True)
21 | angleList = df['ang'].tolist()
22 | for i in range(0, len(angleList)):
23 | if(np.isnan(angleList[i])):
24 | angleList[i] = angleList[i + 1]
25 | return angleList
26 |
27 | # Returns average of left and right gait cycles respectively
28 | def avg_gcLR(gcLR):
29 | gcL = np.array(gcLR[0]) # list of left gait cycles
30 | gcR = np.array(gcLR[1]) # list of right gait cycles
31 |
32 | gcL_avg = np.mean(gcL, axis=0)
33 | gcL_std = np.std(gcL, axis=0)
34 |
35 | gcR_avg = np.mean(gcR, axis=0)
36 | gcR_std = np.std(gcR, axis=0)
37 |
38 | avg_gcLR = {
39 | 'gcL_avg' : gcL_avg.tolist(),
40 | 'gcL_std' : gcL_std.tolist(),
41 | 'gcR_avg': gcR_avg.tolist(),
42 | 'gcR_std': gcR_std.tolist(),
43 | 'gcL_count' : len(gcL),
44 | 'gcR_count' : len(gcR)
45 | }
46 | return avg_gcLR
47 | #==================================================================================
48 | # Main
49 | #==================================================================================
50 | def main():
51 | i = '05' # Select participant with marker-based kinematic data
52 | filePath = '..\\Part'+ i + '\\'
53 | filePIG = filePath + 'Part' + i + '_gc_pig.xlsx'
54 | writeFile = filePath + 'Part' + i + '_gc_pig.json'
55 |
56 | # In the same structure as kinematics_processing.py
57 | knee_FlexExt_gc = [[], []]
58 | hip_FlexExt_gc = [[], []]
59 | knee_AbdAdd_gc = [[], []]
60 | hip_AbdAdd_gc = [[], []]
61 |
62 | xls = pd.ExcelFile(filePIG)
63 |
64 | print('Parsing the PiG sheets...')
65 | for sheet_name in xls.sheet_names:
66 | df_walk = pd.read_excel(filePIG, sheet_name=sheet_name, header=None)
67 | df_gc = df_walk.itertuples()
68 |
69 | isDetected = []
70 | gaitCycleData = []
71 | for rows in df_gc:
72 | isDetected.append(rows[4] == 'deg')
73 | gaitCycleData.append(list(rows[5:106]))
74 |
75 | i = 0
76 | # Batch parsing gait cycles
77 | while(True):
78 | try:
79 | isDetected[i]
80 | except IndexError:
81 | break
82 |
83 | if(isDetected[i]):
84 | gc = gapfill(gaitCycleData[i])
85 | hip_FlexExt_gc[1].append(gc)
86 | if (isDetected[i + 1]):
87 | gc = gapfill(gaitCycleData[i + 1])
88 | hip_FlexExt_gc[0].append(gc)
89 | if (isDetected[i + 2]):
90 | gc = gapfill(gaitCycleData[i + 2])
91 | knee_FlexExt_gc[0].append(gc)
92 | if (isDetected[i + 3]):
93 | gc = gapfill(gaitCycleData[i + 3])
94 | knee_FlexExt_gc[1].append(gc)
95 | if (isDetected[i + 4]):
96 | gc = gapfill(gaitCycleData[i + 4])
97 | hip_AbdAdd_gc[0].append(gc)
98 | if (isDetected[i + 5]):
99 | gc = gapfill(gaitCycleData[i + 5])
100 | hip_AbdAdd_gc[1].append(gc)
101 | if (isDetected[i + 6]):
102 | gc = gapfill(gaitCycleData[i + 6])
103 | knee_AbdAdd_gc[0].append(gc)
104 |
105 | # Try catch because if this is the last batch, 7th line can be cut short
106 | try:
107 | isDetected[i+7]
108 | except IndexError:
109 | break
110 |
111 | if (isDetected[i + 7]):
112 | gc = gapfill(gaitCycleData[i + 7])
113 | knee_AbdAdd_gc[1].append(gc)
114 |
115 | i += 9
116 |
117 | # Averaging
118 | knee_FlexExt_avg = avg_gcLR(knee_FlexExt_gc)
119 | hip_FlexExt_avg = avg_gcLR(hip_FlexExt_gc)
120 | knee_AbdAdd_avg = avg_gcLR(knee_AbdAdd_gc)
121 | hip_AbdAdd_avg = avg_gcLR(hip_AbdAdd_gc)
122 |
123 | jsonDict = {
124 | 'knee_FlexExt_avg': knee_FlexExt_avg,
125 | 'hip_FlexExt_avg': hip_FlexExt_avg,
126 | 'knee_AbdAdd_avg': knee_AbdAdd_avg,
127 | 'hip_AbdAdd_avg': hip_AbdAdd_avg,
128 |
129 | 'knee_FlexExt_gc': knee_FlexExt_gc,
130 | 'hip_FlexExt_gc': hip_FlexExt_gc,
131 | 'knee_AbdAdd_gc': knee_AbdAdd_gc,
132 | 'hip_AbdAdd_gc': hip_AbdAdd_gc,
133 | }
134 |
135 | with open(writeFile, 'w') as outfile:
136 | json.dump(jsonDict, outfile, separators=(',', ':'))
137 |
138 | print('Finished!')
139 |
140 | if __name__ == '__main__':
141 | main()
--------------------------------------------------------------------------------
/gc_preprocessing.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # DATA PREPROCESSING
3 | #----------------------------------------------------------------------------------
4 | # Input: Gait cycles, Output: Datasets
5 | # Prepares dataset from the kinematics, in a format
6 | # ready for classification of gait cycles.
7 | #==================================================================================
8 | # Imports
9 | #==================================================================================
10 | import numpy as np
11 | import json
12 | import pickle
13 |
14 | #==================================================================================
15 | # Constants
16 | #==================================================================================
17 | # Normality, Age, Gender
18 | partInfo = {
19 | # Normal participants
20 | '1':('N', 23, 'M'), '2':('N', 20, 'M'), '3':('N', 20, 'M'),
21 | '4':('N', 20, 'F'), '5':('N', 22, 'M'), '6':('N', 19, 'M'),
22 | '7':('N', 20, 'F'), '8':('N', 20, 'M'), '9':('N', 22, 'F'),
23 | '10':('N', 22, 'F'), '11':('N', 19, 'F'), '12':('N', 20, 'M'),
24 | '13':('N', 20, 'F'), '14':('N', 20, 'M'), '15':('N', 78, 'F'),
25 | '16':('N', 80, 'M'), '17':('N', 20, 'F'),
26 | # Abnormal participants (simulations from Part14)
27 | '18':('A', 20, 'M'), '19':('A', 20, 'M'), '20':('A', 20, 'M'),
28 | '21':('A', 20, 'M')
29 | }
30 |
31 | #==================================================================================
32 | # Methods
33 | #==================================================================================
34 | # Returns list of 2d arrays of all of the participant's gait cycles
35 | def getgc_glob(gc_PE):
36 | # Left
37 | knee_FlexExt_L = gc_PE['knee_FlexExt_gc'][0]
38 | hip_FlexExt_L = gc_PE['hip_FlexExt_gc'][0]
39 | knee_AbdAdd_L = gc_PE['knee_AbdAdd_gc'][0]
40 | hip_AbdAdd_L = gc_PE['hip_AbdAdd_gc'][0]
41 | len_gcL = len(knee_FlexExt_L)
42 |
43 | # Right
44 | knee_FlexExt_R = gc_PE['knee_FlexExt_gc'][1]
45 | hip_FlexExt_R = gc_PE['hip_FlexExt_gc'][1]
46 | knee_AbdAdd_R = gc_PE['knee_AbdAdd_gc'][1]
47 | hip_AbdAdd_R = gc_PE['hip_AbdAdd_gc'][1]
48 | len_gcR = len(knee_FlexExt_R)
49 |
50 | len_min = min(len_gcL, len_gcR) # must set a minimum for matching and consistent L and R gait cycles
51 |
52 | # Reversing order of gait cycles so that they are consistent with each other when normalized
53 | knee_FlexExt_L.reverse()
54 | knee_AbdAdd_L.reverse()
55 | hip_FlexExt_L.reverse()
56 | hip_AbdAdd_L.reverse()
57 | knee_FlexExt_R.reverse()
58 | hip_FlexExt_R.reverse()
59 | knee_AbdAdd_R.reverse()
60 | hip_AbdAdd_R.reverse()
61 |
62 | kinematics = []
63 | for i in range(0, len_min):
64 | arr2d = []
65 |
66 | arr2d.append(knee_FlexExt_L[i])
67 | arr2d.append(knee_FlexExt_R[i])
68 | arr2d.append(hip_FlexExt_L[i])
69 | arr2d.append(hip_FlexExt_R[i])
70 |
71 | arr2d.append(knee_AbdAdd_L[i])
72 | arr2d.append(knee_AbdAdd_R[i])
73 | arr2d.append(hip_AbdAdd_L[i])
74 | arr2d.append(hip_AbdAdd_R[i])
75 |
76 | arr2d = np.array(arr2d)
77 | kinematics.append(arr2d)
78 | print(len(kinematics))
79 | return kinematics
80 |
81 | #==================================================================================
82 | # Main
83 | #==================================================================================
84 | def main():
85 | data = []
86 | labels_id = []
87 | labels_age = []
88 | labels_gen = []
89 |
90 | data_na = []
91 | labels_na = []
92 | labels_id_na = []
93 |
94 | # Prepares gait data collected from the lab
95 | for i in range(1, 22):
96 | id = str(i)
97 | id = '0' + id if len(id) < 2 else id
98 | part = 'Part' + str(id)
99 |
100 | file = '..\\' + part + '\\' + part + '_gc.json'
101 | with open(file, 'r') as f:
102 | gc_PE = json.load(f)
103 |
104 | kinematics = getgc_glob(gc_PE)
105 |
106 | for gc in kinematics:
107 | na = partInfo[str(i)][0]
108 | #na = 0 if na == 'N' else 1
109 | na = 'Normal' if na == 'N' else 'Abnormal'
110 |
111 | gen = partInfo[str(i)][2]
112 | #gen = 0 if gen == 'F' else 1
113 | gen = 'Female' if gen == 'F' else 'Male'
114 |
115 | age = partInfo[str(i)][1]
116 |
117 | # Separate abnormal/normal set from normal set
118 | if(na=='Abnormal'):
119 | data_na.append(gc)
120 | labels_na.append(na)
121 | labels_id_na.append(i)
122 | else:
123 | data.append(gc)
124 | labels_id.append(i)
125 | labels_age.append(age)
126 | labels_gen.append(gen)
127 |
128 | data_na.append(gc)
129 | labels_na.append(na)
130 | labels_id_na.append(i)
131 |
132 | with open('..\\classifier_data\\data.pickle', 'wb') as f:
133 | pickle.dump(data, f)
134 | with open('..\\classifier_data\\labels_id.pickle', 'wb') as f:
135 | pickle.dump(labels_id, f)
136 | with open('..\\classifier_data\\labels_age.pickle', 'wb') as f:
137 | pickle.dump(labels_age, f)
138 | with open('..\\classifier_data\\labels_gender.pickle', 'wb') as f:
139 | pickle.dump(labels_gen, f)
140 |
141 | with open('..\\classifier_data\\data_na.pickle', 'wb') as f:
142 | pickle.dump(data_na, f)
143 | with open('..\\classifier_data\\labels_id_na.pickle', 'wb') as f:
144 | pickle.dump(labels_id_na, f)
145 | with open('..\\classifier_data\\labels_abnormality.pickle', 'wb') as f:
146 | pickle.dump(labels_na, f)
147 |
148 | if __name__ == '__main__':
149 | main()
--------------------------------------------------------------------------------
/kinematics_extraction.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # KINEMATICS_EXTRACTION
3 | #----------------------------------------------------------------------------------
4 | # Input: Pose sequence, Output: Raw kinematics
5 | # Given a JSON describing poses throughout two video views,
6 | # extracts kinematics by computing joint angles
7 | #==================================================================================
8 | # Imports
9 | #==================================================================================
10 | import numpy as np
11 | import json
12 | import time
13 |
14 | #==================================================================================
15 | # Constants
16 | #==================================================================================
17 | ptID = {
18 | 'nose': 0,
19 | 'eye_L': 1,'eye_R': 2,
20 | 'ear_L': 3,'ear_R': 4,
21 | 'shoulder_L': 5, 'shoulder_R': 6,
22 | 'elbow_L': 7, 'elbow_R': 8,
23 | 'wrist_L': 9, 'wrist_R': 10,
24 | 'hip_L': 11, 'hip_R': 12,
25 | 'knee_L': 13, 'knee_R': 14,
26 | 'ankle_L': 15, 'ankle_R': 16
27 | }
28 |
29 | #==================================================================================
30 | # Methods
31 | #==================================================================================
32 | # Calculates joint angle of knee in Side view
33 | def calc_knee_angle_S(hip, knee, ankle, isRightToLeft):
34 | if (hip == [-1, -1] or knee == [-1, -1] or ankle == [-1,-1]):
35 | return None # returns this value as error code for no keypoint detection
36 |
37 | # Identifying joint positions
38 | a = np.array(hip)
39 | b = np.array(knee)
40 | c = np.array(ankle)
41 |
42 | # Compute vectors from main joint
43 | ba = a - b
44 | m_ba = - ba
45 | bc = c - b
46 |
47 | cosine_angle = np.dot(m_ba, bc) / (np.linalg.norm(m_ba) * np.linalg.norm(bc))
48 | angle = np.arccos(cosine_angle)
49 | angle = np.degrees(angle)
50 |
51 | if (isRightToLeft and bc[0] < m_ba[0]): angle = - angle # Check if angle should be negative when walking <---
52 | elif (not isRightToLeft and bc[0] > m_ba[0]): angle = - angle # Check if angle should be negative when walking --->
53 | return angle.tolist()
54 |
55 | # Calculates joint angle of hip in Side view
56 | def calc_hip_angle_S(hip, knee, isRightToLeft):
57 | if(hip == [-1,-1] or knee == [-1,-1]):
58 | return None # returns this value as error code for no keypoint detection
59 |
60 | # Identifying joint positions
61 | a = np.array(hip) # Main joint
62 | b = np.array(knee)
63 |
64 | # Compute vectors from joints
65 | ab = b - a
66 | m_N = np.array([0,-1])
67 |
68 | cosine_angle = np.dot(ab, m_N) / (np.linalg.norm(ab) * np.linalg.norm(m_N))
69 | angle = np.arccos(cosine_angle)
70 | angle = np.degrees(angle)
71 |
72 | if (isRightToLeft and ab[0] > m_N[0]): angle = - angle
73 | elif (not isRightToLeft and ab[0] < m_N[0]): angle = - angle
74 | return angle.tolist()
75 |
76 | # Traversing through pose to compute kinematics in sideView
77 | def raw_angles_S(data, isRightToLeft=False, limit=10000):
78 | knee_ang_L = []
79 | knee_ang_R = []
80 | hip_ang_L = []
81 | hip_ang_R = []
82 |
83 | count = 1
84 | for pose in data:
85 | #Left
86 | knee_L = pose[ptID['knee_L']]
87 | ankle_L = pose[ptID['ankle_L']]
88 | hip_L = pose[ptID['hip_L']]
89 |
90 | angle = calc_knee_angle_S(hip_L, knee_L, ankle_L, isRightToLeft)
91 | knee_ang_L.append(angle)
92 | angle = calc_hip_angle_S(hip_L, knee_L, isRightToLeft)
93 | hip_ang_L.append(angle)
94 |
95 | #Right
96 | knee_R = pose[ptID['knee_R']]
97 | ankle_R = pose[ptID['ankle_R']]
98 | hip_R = pose[ptID['hip_R']]
99 |
100 | angle = calc_knee_angle_S(hip_R, knee_R, ankle_R, isRightToLeft)
101 | knee_ang_R.append(angle)
102 | angle = calc_hip_angle_S(hip_R, knee_R, isRightToLeft)
103 | hip_ang_R.append(angle)
104 |
105 | if(count == limit): break
106 | count += 1
107 |
108 | knee_ang = [knee_ang_L, knee_ang_R]
109 | hip_ang = [hip_ang_L, hip_ang_R]
110 |
111 | return knee_ang, hip_ang
112 |
113 | # Calculates joint angle of knee in Front view
114 | def calc_knee_angle_F(hip, knee, ankle, isRightToLeft):
115 | if (hip == [-1, -1] or knee == [-1, -1] or ankle == [-1,-1]):
116 | return None # returns this value as error code for no keypoint detection
117 |
118 | # Identifying joint positions
119 | a = np.array(hip)
120 | b = np.array(knee)
121 | c = np.array(ankle)
122 |
123 | # Compute vectors from main joint
124 | ba = a - b
125 | m_ba = - ba
126 | bc = c - b
127 |
128 | cosine_angle = np.dot(m_ba, bc) / (np.linalg.norm(m_ba) * np.linalg.norm(bc))
129 | angle = np.arccos(cosine_angle)
130 | angle = np.degrees(angle)
131 |
132 | if (isRightToLeft and bc[0] > m_ba[0]): angle = - angle
133 | elif (not isRightToLeft and bc[0] < m_ba[0]): angle = - angle
134 |
135 | angle = angle + 5 # Heuristic catering for perpendicular of pelvis
136 | return angle.tolist()
137 |
138 | # Calculates joint angle of hip in Front view
139 | def calc_hip_angle_F(hip, knee, isRightToLeft):
140 | if(hip == [-1,-1] or knee == [-1,-1]):
141 | return None # returns this value as error code for no keypoint detection
142 |
143 | # Identifying joint positions
144 | a = np.array(hip) # Main joint
145 | b = np.array(knee)
146 |
147 | # Compute vectors from joints
148 | ab = b - a
149 | m_N = np.array([0,-1])
150 |
151 | cosine_angle = np.dot(ab, m_N) / (np.linalg.norm(ab) * np.linalg.norm(m_N))
152 | angle = np.arccos(cosine_angle)
153 | angle = np.degrees(angle)
154 |
155 | if (isRightToLeft and ab[0] > m_N[0]): angle = - angle
156 | elif (not isRightToLeft and ab[0] < m_N[0]): angle = - angle
157 |
158 | angle = angle * 4/3 - 5 # A heuristic for catering for forward/backward pelvic tilt and perpendicular of pelvis
159 | return angle.tolist()
160 |
161 | # Traversing through pose to compute kinematics in Front view
162 | def raw_angles_F(data, isRightToLeft=False, limit=10000):
163 |
164 | knee_ang_L = []
165 | knee_ang_R = []
166 | hip_ang_L = []
167 | hip_ang_R = []
168 |
169 | count = 1
170 | for pose in data:
171 | #Left
172 | knee_L = pose[ptID['knee_L']]
173 | ankle_L = pose[ptID['ankle_L']]
174 | hip_L = pose[ptID['hip_L']]
175 |
176 | angle = calc_knee_angle_F(hip_L, knee_L, ankle_L, isRightToLeft)
177 | knee_ang_L.append(angle)
178 | angle = calc_hip_angle_F(hip_L, knee_L, isRightToLeft)
179 | hip_ang_L.append(angle)
180 |
181 | #Right
182 | knee_R = pose[ptID['knee_R']]
183 | ankle_R = pose[ptID['ankle_R']]
184 | hip_R = pose[ptID['hip_R']]
185 |
186 | angle = calc_knee_angle_F(hip_R, knee_R, ankle_R, not isRightToLeft)
187 | knee_ang_R.append(angle)
188 |
189 | angle = calc_hip_angle_F(hip_R, knee_R, not isRightToLeft)
190 | hip_ang_R.append(angle)
191 |
192 | if(count == limit): break
193 | count += 1
194 |
195 | knee_ang = [knee_ang_L, knee_ang_R]
196 | hip_ang = [hip_ang_L, hip_ang_R]
197 |
198 | return knee_ang, hip_ang
199 |
200 | # Checks which direction the gait is from the side view (affects how angles in the sagittal plane are calculated)
201 | def checkGaitDirectionS(dataS, dimS):
202 | # Finds first instance of ankle in video
203 | for pose in dataS:
204 | ankle_L = pose[ptID['ankle_L']]
205 | ankle_R = pose[ptID['ankle_R']]
206 | if(ankle_L != [-1,-1]):
207 | ankle_init = ankle_L
208 | break
209 | if (ankle_R != [-1, -1]):
210 | ankle_init = ankle_R
211 | break
212 |
213 | init_x = ankle_init[0]
214 | max_x = dimS[0]
215 | if (init_x > max_x / 2):
216 | return True
217 | else:
218 | return False
219 |
220 | # Computes and saves kinematics (joint angles) from poses
221 | def kinematics_extract(readFile, writeFile):
222 | with open(readFile, 'r') as f:
223 | jsonPose = json.load(f)
224 |
225 | jsonList = []
226 | for cap in jsonPose:
227 | dataS = cap['dataS']
228 | dimS = cap['dimS']
229 | dataF = cap['dataF']
230 | lenS = cap['lenS']
231 | lenF = cap['lenF']
232 |
233 | limit = max(lenF, lenS) # Can be set to min(lenF, lenS) to cap both views at the same length
234 | isRightToLeft = checkGaitDirectionS(dataS, dimS) # True: Right to Left, False: Left to Right
235 |
236 | knee_FlexExt, hip_FlexExt = raw_angles_S(dataS, isRightToLeft, limit) # Sagittal plane (side view)
237 | knee_AbdAdd, hip_AbdAdd = raw_angles_F(dataF, isRightToLeft, limit) # Coronal plane (front view)
238 | jsonDict = {
239 | 'knee_FlexExt' : knee_FlexExt,
240 | 'hip_FlexExt' : hip_FlexExt,
241 | 'knee_AbdAdd' : knee_AbdAdd,
242 | 'hip_AbdAdd' : hip_AbdAdd
243 | }
244 | jsonList.append(jsonDict)
245 |
246 | with open(writeFile, 'w') as outfile:
247 | json.dump(jsonList, outfile, separators=(',', ':'))
248 |
249 | #==================================================================================
250 | # Main
251 | #==================================================================================
252 | def main():
253 | for i in range(1, 22):
254 | if(len(str(i)) < 2): i = '0' + str(i)
255 | path = '..\\Part' + str(i) + '\\'
256 | readFile = path + 'Part' + str(i) + '_pose.json'
257 | writeFile = path + 'Part' + str(i) + '_angles.json'
258 | start_time = time.time()
259 | kinematics_extract(readFile, writeFile)
260 | print('Kinematics extracted and saved in', '\"'+writeFile+'\"', '[Time:', '{0:.2f}'.format(time.time() - start_time), 's]')
261 |
262 | if __name__ == '__main__':
263 | main()
264 |
--------------------------------------------------------------------------------
/kinematics_processing.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # KINEMATICS_PROCESSING
3 | #----------------------------------------------------------------------------------
4 | # Input: Pose Json and Raw angles, Output: Gait Cycle graphs
5 | # Given a JSON describing angles of joints throughout a walk,
6 | # smooths the kinematics and averages them into one standard gait cycle.
7 | #==================================================================================
8 | # Imports
9 | #==================================================================================
10 | # import matplotlib.pyplot as plt # for debugging
11 | # from visualizer import plot_raw_all, plot_gcLR # for debugging
12 | from statistics import mean
13 | from scipy import signal
14 | import pandas as pd
15 | import numpy as np
16 | import json
17 | import math
18 | import time
19 |
20 | #==================================================================================
21 | # Constants
22 | #==================================================================================
23 | ptID = {
24 | 'nose': 0,
25 | 'eye_L': 1,'eye_R': 2,
26 | 'ear_L': 3,'ear_R': 4,
27 | 'shoulder_L': 5, 'shoulder_R': 6,
28 | 'elbow_L': 7, 'elbow_R': 8,
29 | 'wrist_L': 9, 'wrist_R': 10,
30 | 'hip_L': 11, 'hip_R': 12,
31 | 'knee_L': 13, 'knee_R': 14,
32 | 'ankle_L': 15, 'ankle_R': 16
33 | }
34 |
35 | red = "#FF4A7E"
36 | blue = "#72B6E9"
37 |
38 | #==================================================================================
39 | # Methods
40 | #==================================================================================
41 | # Filling in gaps, to cater for low confidence in estimation
42 | def gapfill(angleList):
43 | df = pd.DataFrame({'ang': angleList})
44 | df['ang'].interpolate(method='linear', inplace=True)
45 | return df['ang'].tolist()
46 |
47 | # Fills gaps of left and right kinematics
48 | def gapfillLR(angLR):
49 | angL = angLR[0]
50 | angR = angLR[1]
51 |
52 | filledL = gapfill(angL)
53 | filledR = gapfill(angR)
54 | angLR_filled = [filledL, filledR]
55 | return angLR_filled
56 |
57 | # Exponential moving average for a list (naive smoothing)
58 | def smooth(angle_list, weight): # Weight between 0 and 1
59 | last = angle_list[0] # First value in the plot (first timestep)
60 | smoothed = []
61 | for angle in angle_list:
62 | if(math.isnan(angle) or math.isnan(last)): # Caters for no person detection, which shouldn't occur with this pipeline due to gap filling
63 | smoothed.append(None)
64 | last = angle
65 | else:
66 | smoothed_val = last * weight + (1 - weight) * angle # Calculate smoothed value
67 | smoothed.append(smoothed_val)
68 | last = smoothed_val # Anchor the last smoothed value
69 | return smoothed
70 |
71 | def smoothLR(angles_list, weight):
72 | angles_L = angles_list[0]
73 | angles_R = angles_list[1]
74 | smooth_L = smooth(angles_L, weight)
75 | smooth_R = smooth(angles_R, weight)
76 | smoothed_LR = [smooth_L, smooth_R]
77 |
78 | return smoothed_LR
79 |
80 | # Returns a list of frames where a step-on of the given leg occurs
81 | def getStepOnFrames(dataS, L_or_R, diff_thresh, N, avg_thresh):
82 | ankle_points = []
83 | isGrounded_srs = []
84 | stepOnFrames = []
85 | seekStepOn = True
86 |
87 | for i in range(len(dataS)):
88 | pose = dataS[i]
89 | isGrounded = False
90 |
91 | ankle_pos = pose[ptID['ankle_' + L_or_R]]
92 | ankle_X = ankle_pos[0]
93 | ankle_Y = ankle_pos[1]
94 |
95 | # first frame neglected as the algorithm checks the previous frame every time
96 | if (i > 0 and (ankle_pos != [-1,-1] and ankle_points[-1] != [-1,-1]) ):
97 | ankle_pos_prev = ankle_points[-1]
98 | ankle_X_prev = ankle_pos_prev[0]
99 | ankle_Y_prev = ankle_pos_prev[1]
100 |
101 | X_diff = ankle_X - ankle_X_prev
102 | Y_diff = ankle_Y - ankle_Y_prev
103 |
104 | diff = math.hypot(X_diff, Y_diff) # Euclidean distance the ankle moved since the previous frame
105 | if (diff < diff_thresh): isGrounded = True
106 |
107 | isGrounded_recent = isGrounded_srs[-N:]
108 | isGrounded_avg = sum(isGrounded_recent)/len(isGrounded_recent)
109 |
110 | # print(i, ankle_pos, abs_diff, isGrounded, isGrounded_avg)
111 |
112 | if(seekStepOn):
113 | if(isGrounded_avg > avg_thresh):
114 | stepOnFrames.append(i-N)
115 | seekStepOn = False
116 | else:
117 | if(isGrounded_avg == 0):
118 | seekStepOn = True
119 | ankle_points.append(ankle_pos)
120 | isGrounded_srs.append(isGrounded)
121 | return stepOnFrames
122 |
123 | # Returns set of subsets for gait cycles
124 | def gaitCycle_filter(angle_list, stepOnFrames):
125 | gc = [] # gait cycle list to store subsets
126 | for i in range(len(stepOnFrames) - 1, 0, -1):
127 | end = stepOnFrames[i] - 1
128 | start = stepOnFrames[i-1]
129 |
130 | if(start >= 0):
131 | subset = angle_list[start:end]
132 | gc.append(subset)
133 | return gc
134 |
135 | # Returns right and left gait cycles of angle list
136 | def gcLR(angleList, stepOnFrames_L, stepOnFrames_R):
137 | gc_L = gaitCycle_filter(angleList[0], stepOnFrames_L)
138 | gc_R = gaitCycle_filter(angleList[1], stepOnFrames_R)
139 | gc = [gc_L, gc_R]
140 | return gc
141 |
142 | # Removes gait cycles that are much shorter or longer than the mean cycle length
143 | def gcLR_removeShort(gcLR1, gcLR2, gcLR3, gcLR4):
144 | len_gc_L = [len(x) for x in gcLR1[0]]
145 | len_gc_R = [len(x) for x in gcLR1[1]]
146 |
147 | thresh_gc_LR_short = [0.7 * mean(len_gc_L), 0.7 * mean(len_gc_R)]
148 | thresh_gc_LR_long = [1.3 * mean(len_gc_L), 1.3 * mean(len_gc_R)]
149 |
150 | # Removes from left then right
151 | for h in range(0, 2):
152 | i = 0
153 | limit = len(gcLR1[h])
154 | while True:
155 | len_gc = len(gcLR1[h][i])
156 | if(len_gc <= thresh_gc_LR_short[h] or len_gc >= thresh_gc_LR_long[h]):
157 | del gcLR1[h][i]
158 | del gcLR2[h][i]
159 | del gcLR3[h][i]
160 | del gcLR4[h][i]
161 | i -= 1
162 | limit -= 1
163 |
164 | i += 1
165 | if(i >= limit): break
166 |
167 |
168 | return gcLR1, gcLR2, gcLR3, gcLR4
169 |
170 | # Normalizes the xrange to a sample of N data points
171 | def resample_gcLR(gcLR, N):
172 | gcL = gcLR[0]
173 | gcR = gcLR[1]
174 | gcLR_resampled = [[], []]
175 |
176 | for angleList in gcL:
177 | for i in range(0,len(angleList)):
178 | if(angleList[i] is None):
179 | angleList[i] = 0
180 | angleListL = signal.resample(angleList, N)
181 | gcLR_resampled[0].append(angleListL.tolist())
182 |
183 | for angleList in gcR:
184 | for i in range(0,len(angleList)):
185 | if(angleList[i] is None):
186 | angleList[i] = 0
187 | angleListR = signal.resample(angleList, N)
188 | gcLR_resampled[1].append(angleListR.tolist())
189 |
190 | return gcLR_resampled
191 |
192 | # Returns average of left and right gait cycles respectively
193 | def avg_gcLR(gcLR):
194 | gcL = np.array(gcLR[0]) # list of left gait cycles
195 | gcR = np.array(gcLR[1]) # list of right gait cycles
196 |
197 | gcL_avg = np.mean(gcL, axis=0)
198 | gcL_std = np.std(gcL, axis=0)
199 |
200 | gcR_avg = np.mean(gcR, axis=0)
201 | gcR_std = np.std(gcR, axis=0)
202 |
203 | avg_gcLR = {
204 | 'gcL_avg' : gcL_avg.tolist(),
205 | 'gcL_std' : gcL_std.tolist(),
206 | 'gcR_avg': gcR_avg.tolist(),
207 | 'gcR_std': gcR_std.tolist(),
208 | 'gcL_count' : len(gcL),
209 | 'gcR_count' : len(gcR)
210 | }
211 | return avg_gcLR
212 |
213 | def kinematics_process(poseFile, anglesFile, writeFile):
214 | with open(poseFile, 'r') as f:
215 | jsonPose = json.load(f)
216 | with open(anglesFile, 'r') as f:
217 | jsonAngles = json.load(f)
218 |
219 | len1 = len(jsonPose)
220 | len2 = len(jsonAngles)
221 | if (len1 != len2):
222 | print('Error: jsonPose of len', len1, 'does not match jsonAngles of len', len2)
223 | exit()
224 |
225 | knee_FlexExt_gc = [[], []]
226 | hip_FlexExt_gc = [[], []]
227 | knee_AbdAdd_gc = [[], []]
228 | hip_AbdAdd_gc = [[], []]
229 |
230 | # Traverse through each capture of the participant's gait
231 | for i in range(0, len1):
232 | pose_srs = jsonPose[i]
233 | dataS = pose_srs['dataS']
234 |
235 | raw_angles = jsonAngles[i]
236 | knee_FlexExt = raw_angles['knee_FlexExt']
237 | hip_FlexExt = raw_angles['hip_FlexExt']
238 | knee_AbdAdd = raw_angles['knee_AbdAdd']
239 | hip_AbdAdd = raw_angles['hip_AbdAdd']
240 |
241 | # Gap filling
242 | knee_FlexExt0 = gapfillLR(knee_FlexExt)
243 | hip_FlexExt0 = gapfillLR(hip_FlexExt)
244 | knee_AbdAdd0 = gapfillLR(knee_AbdAdd)
245 | hip_AbdAdd0 = gapfillLR(hip_AbdAdd)
246 |
247 | # Smoothing
248 | weight = 0.8
249 | knee_FlexExt1 = smoothLR(knee_FlexExt0, weight)
250 | hip_FlexExt1 = smoothLR(hip_FlexExt0, weight)
251 | knee_AbdAdd1 = smoothLR(knee_AbdAdd0, weight)
252 | hip_AbdAdd1 = smoothLR(hip_AbdAdd0, weight)
253 |
254 | #plot_raw_all(knee_FlexExt1, hip_FlexExt1, knee_AbdAdd1, hip_AbdAdd1) # for debugging
255 |
256 | # Slicing into gait cycles
257 | stepOnFrames_L = getStepOnFrames(dataS, 'L', 2.2, 8, 0.8) # 8
258 | stepOnFrames_R = getStepOnFrames(dataS, 'R', 2.2, 8, 0.8)
259 | knee_FlexExt2 = gcLR(knee_FlexExt1, stepOnFrames_L, stepOnFrames_R)
260 | hip_FlexExt2 = gcLR(hip_FlexExt1, stepOnFrames_L, stepOnFrames_R)
261 | knee_AbdAdd2 = gcLR(knee_AbdAdd1, stepOnFrames_L, stepOnFrames_R)
262 | hip_AbdAdd2 = gcLR(hip_AbdAdd1, stepOnFrames_L, stepOnFrames_R)
263 |
264 | # Removing gait cycles that are relatively too short or too long to be correct
265 | knee_FlexExt2, hip_FlexExt2, knee_AbdAdd2, hip_AbdAdd2 = gcLR_removeShort(knee_FlexExt2, hip_FlexExt2,
266 | knee_AbdAdd2, hip_AbdAdd2)
267 |
268 | # Resampling to 101 points (0 to 100 inclusive)
269 | knee_FlexExt3 = resample_gcLR(knee_FlexExt2, 101)
270 | hip_FlexExt3 = resample_gcLR(hip_FlexExt2, 101)
271 | knee_AbdAdd3 = resample_gcLR(knee_AbdAdd2, 101)
272 | hip_AbdAdd3 = resample_gcLR(hip_AbdAdd2, 101)
273 |
274 | #plot_gcLR(hip_FlexExt2, 'hip flex/ext') # for debugging
275 |
276 | # Adding to global gait cycle instances list
277 | for gc in knee_FlexExt3[0]: knee_FlexExt_gc[0].append(gc)
278 | for gc in knee_FlexExt3[1]: knee_FlexExt_gc[1].append(gc)
279 |
280 | for gc in hip_FlexExt3[0]: hip_FlexExt_gc[0].append(gc)
281 | for gc in hip_FlexExt3[1]: hip_FlexExt_gc[1].append(gc)
282 |
283 | for gc in knee_AbdAdd3[0]: knee_AbdAdd_gc[0].append(gc)
284 | for gc in knee_AbdAdd3[1]: knee_AbdAdd_gc[1].append(gc)
285 |
286 | for gc in hip_AbdAdd3[0]: hip_AbdAdd_gc[0].append(gc)
287 | for gc in hip_AbdAdd3[1]: hip_AbdAdd_gc[1].append(gc)
288 |
289 | # Averaging
290 | knee_FlexExt_avg = avg_gcLR(knee_FlexExt_gc)
291 | hip_FlexExt_avg = avg_gcLR(hip_FlexExt_gc)
292 | knee_AbdAdd_avg = avg_gcLR(knee_AbdAdd_gc)
293 | hip_AbdAdd_avg = avg_gcLR(hip_AbdAdd_gc)
294 |
295 | jsonDict = {
296 | 'knee_FlexExt_avg': knee_FlexExt_avg,
297 | 'hip_FlexExt_avg': hip_FlexExt_avg,
298 | 'knee_AbdAdd_avg': knee_AbdAdd_avg,
299 | 'hip_AbdAdd_avg': hip_AbdAdd_avg,
300 |
301 | 'knee_FlexExt_gc': knee_FlexExt_gc,
302 | 'hip_FlexExt_gc': hip_FlexExt_gc,
303 | 'knee_AbdAdd_gc': knee_AbdAdd_gc,
304 | 'hip_AbdAdd_gc': hip_AbdAdd_gc,
305 | }
306 |
307 | with open(writeFile, 'w') as outfile:
308 | json.dump(jsonDict, outfile, separators=(',', ':'))
309 |
310 | #==================================================================================
311 | # Main
312 | #==================================================================================
313 | def main():
314 | for i in range(1, 22):
315 | if(len(str(i)) < 2): i = '0' + str(i)
316 | path = '..\\Part' + str(i) + '\\'
317 | poseFile = path + 'Part' + str(i) + '_pose.json'
318 | anglesFile = path + 'Part' + str(i) + '_angles.json'
319 | writeFile = path + 'Part' + str(i) + '_gc.json'
320 | start_time = time.time()
321 | kinematics_process(poseFile, anglesFile, writeFile)
322 | print('Kinematics processed and saved in', '\"'+writeFile+'\"', '[Time:', '{0:.2f}'.format(time.time() - start_time), 's]')
323 |
324 | if __name__ == '__main__':
325 | main()
326 |
--------------------------------------------------------------------------------
/pose_estimation.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # POSE ESTIMATION
3 | #----------------------------------------------------------------------------------
4 | # Input: Video x 2, Output: JSON
5 | # Given a front/back view and side view video of someone
6 | # walking, this will generate a JSON describing the pose
7 | # via key-points in graph form, throughout every frame.
8 | #==================================================================================
9 |
10 | #==================================================================================
11 | # Imports
12 | #==================================================================================
13 | from __future__ import division
14 | from gluoncv.data.transforms.pose import detector_to_alpha_pose, heatmap_to_coord_alpha_pose
15 | from gluoncv.model_zoo import get_model
16 | from tqdm import tqdm
17 | import gluoncv as gcv
18 | import mxnet as mx
19 | import time, cv2
20 | import json
21 | import glob
22 | import os
23 |
24 | #==================================================================================
25 | # AI Detectors
26 | # Person detector: YOLOv3, run on the GPU.
27 | # Pose estimator: AlphaPose, run on the CPU (no GPU support).
28 | #==================================================================================
29 | ctx = mx.gpu(0)
30 | detector = get_model('yolo3_mobilenet1.0_coco', pretrained=True, ctx=ctx)
31 | detector.reset_class(classes=['person'], reuse_weights={'person': 'person'})
32 | estimator = get_model('alpha_pose_resnet101_v1b_coco', pretrained='de56b871')
33 | detector.hybridize()
34 | estimator.hybridize()
35 |
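# --- Illustrative sketch, not part of the original file ---
# A hedged single-image smoke test for the models loaded above, mirroring the
# per-frame calls made in video_to_listPose below; 'frame.png' is a placeholder path.
def smoke_test(img_path='frame.png'):
    frame = mx.nd.array(cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)).astype('uint8')
    x, frame = gcv.data.transforms.presets.yolo.transform_test(frame)
    class_IDs, scores, bounding_boxs = detector(x.as_in_context(ctx))
    pose_input, upscale_bbox = detector_to_alpha_pose(frame, class_IDs, scores, bounding_boxs,
                                                      output_shape=(320, 256))
    if upscale_bbox is None:
        print('No person detected')
        return
    coords, confidence = heatmap_to_coord_alpha_pose(estimator(pose_input), upscale_bbox)
    print('Estimated', coords.shape[1], 'keypoints per detected pose')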
36 | #==================================================================================
37 | # Methods
38 | #==================================================================================
39 | # Returns the keypoints of the most confident pose in the current frame
40 | def curr_pose(img, coords, confidence, scores, keypoint_thresh=0.2):
41 | i = scores.argmax() # gets index of most confident bbox estimation
42 | pts = coords[i] # coords of most confident pose in frame
43 |
44 | pose_data = []
45 |
46 | for j in range(0, len(pts)):
47 | x = -1 # [-1, -1] marks an undetected / low-confidence keypoint
48 | y = -1
49 | if(confidence[i][j] > keypoint_thresh):
50 | x = int(pts[j][0])
51 | y = int(img.shape[0] - pts[j][1]) # flip y so the origin is bottom-left for plotting
52 | pose_data.append([x,y])
53 |
54 | return pose_data
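
# --- Illustrative sketch, not part of the original file ---
# Downstream code tests keypoints against the [-1, -1] sentinel produced above;
# is_detected is a hypothetical helper that makes the convention explicit.
def is_detected(keypoint):
    return keypoint != [-1, -1]  # True only if the keypoint passed the confidence threshold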
55 |
56 | # Given one video, returns list of pose information in preparation for json file
57 | def video_to_listPose(vid):
58 | cap = cv2.VideoCapture(vid) # load video
59 | if not cap.isOpened(): # check that the video opened successfully
60 | print("Error opening video stream or file")
61 | return (0, 0), [] # two-value return keeps the caller's unpacking from failing
62 |
63 | frame_count = 0
64 | pose_data_vid = []
65 | dimensions = (0, 0)
66 | frame_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
67 |
68 | pbar = tqdm(total=frame_length, ncols=100, desc='.')
69 |
70 |
71 | # Iterate through every frame in video
72 | while(cap.isOpened()):
73 | ret, frame = cap.read() # read current frame
74 | if (frame is None):
75 | break # no frame returned: reached the end of the video
76 | frame = mx.nd.array(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).astype('uint8') # mxnet readable
77 |
78 | # Person detection
79 | x, frame = gcv.data.transforms.presets.yolo.transform_test(frame) # short=406, max_size=1024
80 | class_IDs, scores, bounding_boxs = detector(x.as_in_context(ctx))
81 |
82 | # Pose estimation
83 | pose_input, upscale_bbox = detector_to_alpha_pose(frame, class_IDs, scores, bounding_boxs,
84 | output_shape=(320, 256))
85 | # Gets current pose keypoints
86 | if (upscale_bbox is None): # no person detected in this frame
87 | pbar.set_description_str('Skipping ')
88 | pose_data_curr = [[-1, -1] for j in range(0, 17)]
89 | else: # person detected; estimate the pose
90 | pbar.set_description_str('Processing')
91 | predicted_heatmap = estimator(pose_input)
92 | pred_coords, confidence = heatmap_to_coord_alpha_pose(predicted_heatmap, upscale_bbox)
93 |
94 | scores = scores.asnumpy()
95 | confidence = confidence.asnumpy()
96 | pred_coords = pred_coords.asnumpy()
97 |
98 | # Preparing for json
99 | pose_data_curr = curr_pose(frame, pred_coords, confidence, scores, keypoint_thresh=0.2)
100 | pose_data_vid.append(pose_data_curr)
101 |
102 | if (frame_count == 0):
103 | dimensions = [frame.shape[1], frame.shape[0]]
104 | frame_count += 1
105 | pbar.update(1)
106 | cap.release()
107 | pbar.close()
108 |
109 | return dimensions, pose_data_vid
110 |
111 | # Given the side and front videos of a capture, returns the json dict describing all poses in both
112 | def videos_to_jsonPose(vidSide, vidFront, partId, capId):
113 | dimensions_side, pose_vid_side = video_to_listPose(vidSide)
114 | dimensions_front, pose_vid_front = video_to_listPose(vidFront)
115 |
116 | if(dimensions_side != dimensions_front):
117 | print('Warning: side video', dimensions_side, 'and front video', dimensions_front, 'have different dimensions')
118 |
119 | if (len(pose_vid_side) != len(pose_vid_front)):
120 | print('Warning: side video', len(pose_vid_side), 'and front video', len(pose_vid_front), 'have different frame counts')
121 |
122 | jsonPose_dict = {
123 | 'partId': partId,
124 | 'capId': capId,
125 | 'dimS': dimensions_side,
126 | 'lenS' : len(pose_vid_side),
127 | 'dimF' : dimensions_front,
128 | 'lenF' : len(pose_vid_front),
129 | 'dataS' : pose_vid_side,
130 | 'dataF' : pose_vid_front
131 | }
132 | return jsonPose_dict
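
# --- Illustrative sketch, not part of the original file ---
# A hedged consumer of the dict built above; describe_capture is hypothetical,
# but the keys are exactly those of jsonPose_dict.
def describe_capture(jsonPose_dict):
    print('Capture', jsonPose_dict['partId'] + '-' + jsonPose_dict['capId'] + ':',
          jsonPose_dict['lenS'], 'side frames at', jsonPose_dict['dimS'], 'px,',
          jsonPose_dict['lenF'], 'front frames at', jsonPose_dict['dimF'], 'px')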
133 |
134 | def estimate_poses(path, writeFile):
135 | jsonPose_list = []
136 | fs_pair = [] # [front, side] video pair of the current capture (glob is assumed to list them adjacently)
137 | i = 0
138 | for filename in glob.glob(os.path.join(path, '*.avi')):
139 | fs_pair.append(filename)
140 | if (i % 2): # every second file completes a front/side pair
141 | print('Capture pair', '('+ str(int((i+1)/2)) +'/6)', ':', '\"'+fs_pair[1]+'\"', ',', '\"'+fs_pair[0]+'\"')
142 | capId = fs_pair[0].split('-')[1]
143 | partId = fs_pair[0].split('-')[0].split('\\')[2]
144 | jsonPose_dict = videos_to_jsonPose(fs_pair[1], fs_pair[0], partId, capId)
145 | jsonPose_list.append(jsonPose_dict)
146 | fs_pair.clear()
147 | i += 1
148 |
149 | with open(writeFile, 'w') as outfile:
150 | json.dump(jsonPose_list, outfile, separators=(',', ':'))
151 |
152 | #==================================================================================
153 | # Main
154 | #==================================================================================
155 | def main():
156 | for i in range(1, 22):
157 | i = str(i).zfill(2) # zero-pad the participant number to two digits
158 | path = '..\\Part' + str(i) + '\\'
159 | writeFile = path + 'Part' + str(i) + '_pose.json'
160 | start_time = time.time()
161 | estimate_poses(path, writeFile)
162 | print('Poses estimated and saved in', '\"'+writeFile+'\"', '[Time:', '{0:.2f}'.format(time.time() - start_time), 's]')
163 |
164 | if __name__ == '__main__':
165 | main()
--------------------------------------------------------------------------------
/visualizer.py:
--------------------------------------------------------------------------------
1 | #==================================================================================
2 | # VISUALIZER
3 | #----------------------------------------------------------------------------------
4 | # Input: JSON, Output: Debugging plots / Gifs
5 | # Visualizes the saved graph structure of poses, as well as
6 | # the saved raw kinematics and processed kinematics
7 | #==================================================================================
8 | # Imports
9 | #==================================================================================
10 | import numpy as np
11 | import matplotlib.pyplot as plt
12 | import json
13 | import io
14 | from PIL import Image
15 | import imageio
16 | from tqdm import trange
17 | import matplotlib.gridspec as gridspec
18 |
19 | #==================================================================================
20 | # Constants
21 | #==================================================================================
22 | joint_pairs = [[0, 1], [1, 3], [0, 2], [2, 4],
23 | [5, 6], [5, 7], [7, 9], [6, 8], [8, 10],
24 | [5, 11], [6, 12], [11, 12],
25 | [11, 13], [12, 14], [13, 15], [14, 16]]
26 | colormap_index = np.linspace(0, 1, len(joint_pairs))
27 |
28 | ptID = {
29 | 'nose': 0,
30 | 'eye_L': 1,'eye_R': 2,
31 | 'ear_L': 3,'ear_R': 4,
32 | 'shoulder_L': 5, 'shoulder_R': 6,
33 | 'elbow_L': 7, 'elbow_R': 8,
34 | 'wrist_L': 9, 'wrist_R': 10,
35 | 'hip_L': 11, 'hip_R': 12,
36 | 'knee_L': 13, 'knee_R': 14,
37 | 'ankle_L': 15, 'ankle_R': 16
38 | }
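
# --- Illustrative sketch, not part of the original file ---
# ptID maps joint names to keypoint indices. As a hedged example of how a joint
# angle can be derived from one pose (the project's kinematics_extraction.py may
# compute it differently), take the angle at the knee between the knee->hip and
# knee->ankle vectors; assumes all three keypoints were detected.
def knee_angle_sketch(pose, L_or_R='L'):
    hip = np.array(pose[ptID['hip_' + L_or_R]], dtype=float)
    knee = np.array(pose[ptID['knee_' + L_or_R]], dtype=float)
    ankle = np.array(pose[ptID['ankle_' + L_or_R]], dtype=float)
    u, v = hip - knee, ankle - knee
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))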
39 |
40 | red = "#FF4A7E"
41 | blue = "#72B6E9"
42 | #==================================================================================
43 | # Methods
44 | #==================================================================================
45 | # Re-saves the gif at 30 fps to speed up playback
46 | def gif_speedup(filename):
47 | gif = imageio.mimread(filename, memtest=False)
48 | imageio.mimsave(filename, gif, duration=1/30)
49 |
50 | # Saves gif of pose estimation in a capture
51 | def gif_pose(poseFile, i, outpath):
52 | with open(poseFile, 'r') as f:
53 | jsonPose = json.load(f)
54 |
55 | dataS = jsonPose[i]['dataS']
56 | dimS = jsonPose[i]['dimS']
57 | dataF = jsonPose[i]['dataF']
58 | dimF = jsonPose[i]['dimF']
59 | capId = jsonPose[i]['capId']
60 | partId = jsonPose[i]['partId']
61 |
62 | filename = outpath + partId + '-' + capId + '-PE.gif'
63 | ims = [] # List of images for gif
64 |
65 | print('Visualizing poses...')
66 | for i in trange(len(dataS), ncols=100):
67 | fig, (axF, axS) = plt.subplots(1, 2, figsize=(10, 4), constrained_layout=True)
68 | fig.suptitle('Pose estimation of \"' + partId + '-' + capId + '\"')
69 |
70 | axF.set_xlabel('Front view')
71 | axF.set(xlim=(0, dimF[0]), ylim=(0, dimF[1]))
72 |
73 | axS.set_xlabel('Side view')
74 | axS.set(xlim=(0, dimS[0]), ylim=(0, dimS[1]))
75 |
76 | # Front view
77 | pose = dataF[i]
78 | for cm_ind, jp in zip(colormap_index, joint_pairs):
79 | joint1 = pose[jp[0]]
80 | joint2 = pose[jp[1]]
81 | if (joint1 != [-1, -1] and joint2 != [-1, -1]): # draw only if both keypoints were detected
82 | x = [joint1[0], joint2[0]]
83 | y = [joint1[1], joint2[1]]
84 | axF.plot(x, y, linewidth=3.0, alpha=0.7, color=plt.cm.cool(cm_ind))
85 | axF.scatter(x, y, s=20)
86 |
87 | # Side view
88 | pose = dataS[i]
89 | for cm_ind, jp in zip(colormap_index, joint_pairs):
90 | joint1 = pose[jp[0]]
91 | joint2 = pose[jp[1]]
92 | if (joint1 != [-1, -1] and joint2 != [-1, -1]): # draw only if both keypoints were detected
93 | x = [joint1[0], joint2[0]]
94 | y = [joint1[1], joint2[1]]
95 | axS.plot(x, y, linewidth=3.0, alpha=0.7, color=plt.cm.cool(cm_ind))
96 | axS.scatter(x, y, s=20)
97 |
98 | buf = io.BytesIO()
99 | plt.savefig(buf, format='png')
100 | buf.seek(0)
101 | im = Image.open(buf)
102 | im = im.convert('RGB')
103 | ims.append(im)
104 | plt.close()
105 | im.save(filename, save_all=True, append_images=ims, duration=0, loop=0)
106 | buf.close()
107 | gif_speedup(filename)
108 | print('Saved as', '\"'+filename+'\"')
109 |
110 | # Returns the x and y lists of a leg's hip/knee/ankle points, skipping undetected keypoints
111 | def leg_points(pose, L_or_R):
112 | x, y = [], []
113 |
114 | hip = pose[ptID['hip_' + L_or_R]]
115 | knee = pose[ptID['knee_' + L_or_R]]
116 | ankle = pose[ptID['ankle_' + L_or_R]]
117 |
118 | if (hip != [-1, -1]):
119 | x.append(hip[0])
120 | y.append(hip[1])
121 | if (knee != [-1, -1]):
122 | x.append(knee[0])
123 | y.append(knee[1])
124 | if (ankle != [-1, -1]):
125 | x.append(ankle[0])
126 | y.append(ankle[1])
127 |
128 | return x, y
129 |
130 | # Saves gif describing flexion/extension angle extraction from side view
131 | def gif_flexext(poseFile, anglesFile, i, outpath):
132 | with open(poseFile, 'r') as f:
133 | jsonPose = json.load(f)
134 | with open(anglesFile, 'r') as f:
135 | jsonAngles = json.load(f)
136 |
137 | dataS = jsonPose[i]['dataS']
138 | dimS = jsonPose[i]['dimS']
139 | capId = jsonPose[i]['capId']
140 | partId = jsonPose[i]['partId']
141 | knee_FlexExt = jsonAngles[i]['knee_FlexExt']
142 | hip_FlexExt = jsonAngles[i]['hip_FlexExt']
143 |
144 | filename = outpath + partId + '-' + capId + '-FE.gif'
145 | ims = [] # List of images for gif
146 | gs = gridspec.GridSpec(2, 2)
147 |
148 | print('Visualizing flexion and extension...')
149 | for i in trange(len(dataS), ncols=100):
150 | fig = plt.figure(figsize=(12, 6))
151 | ax1 = fig.add_subplot(gs[:, 0])
152 | ax2 = fig.add_subplot(gs[0, 1])
153 | ax3 = fig.add_subplot(gs[1, 1])
154 |
155 | # ax1: Leg poses
156 | ax1.set_title('Flexion and Extension from Side View')
157 | ax1.set(xlim=(0, dimS[0]), ylim=(0, dimS[1]))
158 | pose = dataS[i]
159 | x_L, y_L = leg_points(pose, 'L')
160 | x_R, y_R = leg_points(pose, 'R')
161 | ax1.scatter(x_L, y_L, s=20, color=red)
162 | ax1.scatter(x_R, y_R, s=20, color=blue)
163 | ax1.plot(x_L, y_L, color=red)
164 | ax1.plot(x_R, y_R, color=blue)
165 |
166 | # ax2: Knee flexion / extension
167 | ax2.set_title('Knee Flexion/Extension')
168 | ax2.set_ylabel(r"${\Theta}$ (degrees)")
169 | ax2.set(xlim=(0, len(dataS)), ylim=(-20, 80))
170 | ax2.plot(knee_FlexExt[0][0:i], color=red)
171 | ax2.plot(knee_FlexExt[1][0:i], color=blue)
172 |
173 | # ax3: Hip flexion / extension
174 | ax3.set_title('Hip Flexion/Extension')
175 | ax3.set_ylabel(r"${\Theta}$ (degrees)")
176 | ax3.set_xlabel('Frame (count)')
177 | ax3.set(xlim=(0, len(dataS)), ylim=(-30, 60))
178 | ax3.plot(hip_FlexExt[0][0:i], color=red)
179 | ax3.plot(hip_FlexExt[1][0:i], color=blue)
180 |
181 | buf = io.BytesIO()
182 | plt.tight_layout()
183 | plt.savefig(buf, format='png')
184 | buf.seek(0)
185 | im = Image.open(buf)
186 | im = im.convert('RGB')
187 | ims.append(im)
188 | plt.close()
189 | im.save(filename, save_all=True, append_images=ims, duration=0, loop=0)
190 | buf.close()
191 | gif_speedup(filename)
192 | print('Saved as', '\"' + filename + '\"')
193 |
194 |
195 | # Saves gif describing abduction/adduction angle extraction from front view
196 | def gif_abdadd(poseFile, anglesFile, i, outpath):
197 | with open(poseFile, 'r') as f:
198 | jsonPose = json.load(f)
199 | with open(anglesFile, 'r') as f:
200 | jsonAngles = json.load(f)
201 |
202 | dataF = jsonPose[i]['dataF']
203 | dimF = jsonPose[i]['dimF']
204 | capId = jsonPose[i]['capId']
205 | partId = jsonPose[i]['partId']
206 | knee_AbdAdd = jsonAngles[i]['knee_AbdAdd']
207 | hip_AbdAdd = jsonAngles[i]['hip_AbdAdd']
208 |
209 | filename = outpath + partId + '-' + capId + '-AA.gif'
210 | ims = [] # List of images for gif
211 | gs = gridspec.GridSpec(2, 2)
212 |
213 | print('Visualizing abduction and adduction...')
214 | for i in trange(len(dataF), ncols=100):
215 | fig = plt.figure(figsize=(12, 6))
216 | ax1 = fig.add_subplot(gs[:, 0])
217 | ax2 = fig.add_subplot(gs[0, 1])
218 | ax3 = fig.add_subplot(gs[1, 1])
219 |
220 | # ax1: Leg poses
221 | ax1.set_title('Abduction and Adduction from Front View')
222 | ax1.set(xlim=(0, dimF[0]), ylim=(0, dimF[1]))
223 | pose = dataF[i]
224 | x_L, y_L = leg_points(pose, 'L')
225 | x_R, y_R = leg_points(pose, 'R')
226 | ax1.scatter(x_L, y_L, s=20, color=red)
227 | ax1.scatter(x_R, y_R, s=20, color=blue)
228 | ax1.plot(x_L, y_L, color=red)
229 | ax1.plot(x_R, y_R, color=blue)
230 |
231 | # ax2: Knee abduction / adduction
232 | ax2.set_title('Knee Abduction/Adduction')
233 | ax2.set_ylabel(r"${\Theta}$ (degrees)")
234 | ax2.set(xlim=(0, len(dataF)), ylim=(-20, 20))
235 | ax2.plot(knee_AbdAdd[0][0:i], color=red)
236 | ax2.plot(knee_AbdAdd[1][0:i], color=blue)
237 |
238 | # ax3: Hip abduction / adduction
239 | ax3.set_title('Hip Abduction/Adduction')
240 | ax3.set_xlabel('Frame (count)')
241 | ax3.set_ylabel(r"${\Theta}$ (degrees)")
242 | ax3.set(xlim=(0, len(dataF)), ylim=(-20, 30))
243 | ax3.plot(hip_AbdAdd[0][0:i], color=red)
244 | ax3.plot(hip_AbdAdd[1][0:i], color=blue)
245 |
246 | buf = io.BytesIO()
247 | plt.tight_layout()
248 | plt.savefig(buf, format='png')
249 | buf.seek(0)
250 | im = Image.open(buf)
251 | im = im.convert('RGB')
252 | ims.append(im)
253 | plt.close()
254 | im.save(filename, save_all=True, append_images=ims, duration=0, loop=0)
255 | buf.close()
256 | gif_speedup(filename)
257 | print('Saved as', '\"' + filename + '\"')
258 |
259 | # Plots kinematics of left or right leg, used for viewing all gait cycles
260 | def plot_angles(angleList, title, isRed):
261 | if(isRed): color = red
262 | else: color = blue
263 | fig, ax = plt.subplots()
264 | ax.set_title(title)
265 | ax.set_xlabel('Time (%)')
266 | ax.set_ylabel(r"${\Theta}$ (degrees)")
267 | ax.plot(angleList, color=color)
268 | plt.show()
269 | plt.close()
270 |
271 | # Plots kinematics of the left and right legs together on one axis
272 | def plot_anglesLR(angleList, title, xlabel):
273 | fig, ax = plt.subplots()
274 | ax.set_title(title)
275 | ax.set_xlabel(xlabel)
276 | ax.set_ylabel(r"${\Theta}$ (degrees)")
277 | ax.plot(angleList[0], color=red, label='Left')
278 | ax.plot(angleList[1], color=blue, label='Right')
279 | ax.legend()
280 | plt.show()
281 | plt.close()
282 |
283 |
284 | # Plots each gait cycle (angle list) in the given list
285 | def plot_gc(gc, title, isRed):
286 | for angleList in gc:
287 | plot_angles(angleList, title, isRed)
288 |
289 | # Plots left and right gait cycles
290 | def plot_gcLR(gcLR, title):
291 | plot_gc(gcLR[0], title, True)
292 | plot_gc(gcLR[1], title, False)
293 |
294 | # Plots average as well as standard deviation
295 | def plot_avg(avg, std, title, N, isRed):
296 | if (isRed):
297 | color = red
298 | else:
299 | color = blue
300 |
301 | fig, ax = plt.subplots()
302 | ax.set_title(title + ' (' + str(N) + ' Gait Cycles)')
303 | ax.set_xlabel('Time (%)')
304 | ax.set_ylabel(r"${\Theta}$ (degrees)")
305 | ax.set_xlim(0, 100)
306 | ax.plot(avg, color=color)
307 |
308 | # Dashed curves mark one standard deviation above and below the average
309 | std_upper = (np.array(avg) + np.array(std)).tolist()
310 | std_lower = (np.array(avg) - np.array(std)).tolist()
311 | ax.plot(std_upper, '--', color=color)
312 | ax.plot(std_lower, '--', color=color)
313 |
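# --- Illustrative sketch, not part of the original file ---
# An alternative presentation to the dashed std curves in plot_avg: shade the
# +/- 1 std band with fill_between. plot_avg_band is a hypothetical helper.
def plot_avg_band(avg, std, title, N, isRed):
    color = red if isRed else blue
    avg, std = np.array(avg), np.array(std)
    fig, ax = plt.subplots()
    ax.set_title(title + ' (' + str(N) + ' Gait Cycles)')
    ax.set_xlabel('Time (%)')
    ax.set_ylabel(r"${\Theta}$ (degrees)")
    ax.set_xlim(0, 100)
    ax.plot(avg, color=color)
    ax.fill_between(np.arange(len(avg)), avg - std, avg + std, color=color, alpha=0.2)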
314 | # Plots left and right average as well as standard deviation
315 | def plot_avg_gcLR(avg_LR, title, plotSep):
316 | avg_gcL = avg_LR['gcL_avg']
317 | avg_gcR = avg_LR['gcR_avg']
318 | std_gcL = avg_LR['gcL_std']
319 | std_gcR = avg_LR['gcR_std']
320 | N_L = avg_LR['gcL_count']
321 | N_R = avg_LR['gcR_count']
322 |
323 | if(not plotSep):
324 | # Overlay the left and right averages on one axis;
325 | # the x-axis spans 0-100% of the gait cycle
326 |
327 | fig, ax = plt.subplots()
328 | ax.set_title(title + ' (' + str(N_L) + 'L, ' + str(N_R) + 'R Gait Cycles)')
329 | ax.set_xlabel('Time (%)')
330 | ax.set_ylabel(r"${\Theta}$ (degrees)")
331 | ax.plot(avg_gcL, color=red)
332 | ax.plot(avg_gcR, color=blue)
333 | ax.set_xlim(0, 100)
334 | plt.show()
335 | plt.close()
336 | else:
337 | plot_avg(avg_gcL, std_gcL, title, N_L, isRed=True)
338 | plot_avg(avg_gcR, std_gcR, title, N_R, isRed=False)
339 | plt.show()
340 | plt.close()
341 |
342 | def plot_avg_gcLR_all(gcFile):
343 | with open(gcFile, 'r') as f:
344 | gc = json.load(f)
345 |
346 | knee_FlexExt_avg = gc['knee_FlexExt_avg']
347 | hip_FlexExt_avg = gc['hip_FlexExt_avg']
348 | knee_AbdAdd_avg = gc['knee_AbdAdd_avg']
349 | hip_AbdAdd_avg = gc['hip_AbdAdd_avg']
350 |
351 | plot_avg_gcLR(knee_FlexExt_avg, 'Knee Flexion/Extension', plotSep=False)
352 | plot_avg_gcLR(hip_FlexExt_avg, 'Hip Flexion/Extension', plotSep=False)
353 | plot_avg_gcLR(knee_AbdAdd_avg, 'Knee Abduction/Adduction', plotSep=False)
354 | plot_avg_gcLR(hip_AbdAdd_avg, 'Hip Abduction/Adduction', plotSep=False)
355 |
356 | # Uncomment what is necessary for gait cycle display
357 | #plot_gcLR(gc['knee_FlexExt_gc'], 'Knee Flexion/Extension')
358 | #plot_gcLR(gc['hip_FlexExt_gc'], 'Hip Flexion/Extension')
359 | #plot_gcLR(gc['knee_AbdAdd_gc'], 'Knee Abduction/Adduction')
360 | #plot_gcLR(gc['hip_AbdAdd_gc'], 'Hip Abduction/Adduction')
361 |
362 | def plot_raw_all(kneeFlexExt, hipFlexExt, kneeAbdAdd, hipAbdAdd):
363 | plot_anglesLR(kneeFlexExt, 'Knee Flexion/Extension', 'Frame')
364 | plot_anglesLR(hipFlexExt, 'Hip Flexion/Extension', 'Frame')
365 | plot_anglesLR(kneeAbdAdd, 'Knee Abduction/Adduction', 'Frame')
366 | plot_anglesLR(hipAbdAdd, 'Hip Abduction/Adduction', 'Frame')
367 |
368 | def plot_raw_all_file(anglesFile, i):
369 | with open(anglesFile, 'r') as f:
370 | jsonAngles = json.load(f)
371 | plot_raw_all(jsonAngles[i]['knee_FlexExt'], jsonAngles[i]['hip_FlexExt'], jsonAngles[i]['knee_AbdAdd'], jsonAngles[i]['hip_AbdAdd'])
372 |
373 | def main():
374 | i = '05' # input('Enter participant code')
375 | path = '..\\Part' + str(i) + '\\'
376 | poseFile = path + 'Part' + str(i) + '_pose.json'
377 | anglesFile = path + 'Part' + str(i) + '_angles.json'
378 | #gcFile = path + 'Part' + str(i) + '_gc.json'
379 | #plot_avg_gcLR_all(gcFile)
380 | #plot_raw_all_file(anglesFile, 2)
381 |
382 | i = 1 # index of the capture within the pose json
383 | gif_pose(poseFile, i, path)
384 | gif_flexext(poseFile, anglesFile, i, path)
385 | gif_abdadd(poseFile, anglesFile, i, path)
386 |
387 | #==================================================================================
388 | # Main
389 | #==================================================================================
390 | if __name__ == '__main__':
391 | main()
--------------------------------------------------------------------------------