├── README.md
├── Tutorial_Estimating_Energy_Expenditure_During_Exercise.ipynb
├── Tutorial_Pose_Estimation_for_Biomechanics.ipynb
└── Tutorial_Shoulder_Rhythm_Personalization.ipynb
/README.md:
--------------------------------------------------------------------------------
1 | Our tutorials aim to get individuals started with biomechanical modeling and machine learning approaches to study human movement.
2 |
3 | Learn more about the Mobilize Center (https://mobilize.stanford.edu), a Biomedical Technology Resource Center at Stanford University, funded by the NIH National Institute of Biomedical Imaging and Bioengineering through Grant P41EB027060, and the Restore Center (https://restore.stanford.edu), a Medical Rehabilitation Research Resource Network Center supported by the Eunice Kennedy Shriver National Institute of Child Health & Human Development and the National Institute of Neurological Disorders and Stroke under Grant P2CHD101913.
4 |
5 | ------------------------------------------------------------------------------------------------------
6 |
7 | Pose Estimation for Biomechanics - This tutorial demonstrates how to use the OpenPose open-source software to extract knee flexion curves and gait cycles from a video of a
8 | subject walking.
9 |
10 | Energy Expenditure During Exercise - This tutorial demonstrates how to estimate energy expenditure during various exercises from inertial measurement units (IMUs) worn on the leg.
11 |
12 | OpenCap Visualization Notebook - This tutorial demonstrates how to perform kinematic analyses for data collected with OpenCap. The [Google Colab notebook](https://github.com/stanfordnmbl/opencap-processing/blob/main/example.ipynb) can be found in the [OpenCap Processing Github repository](https://github.com/stanfordnmbl/opencap-processing).
13 |
--------------------------------------------------------------------------------
/Tutorial_Estimating_Energy_Expenditure_During_Exercise.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Tutorial: Estimating Energy Expenditure During Exercise",
7 | "provenance": [],
8 | "collapsed_sections": [
9 | "PBzVrpDXG2s5"
10 | ],
11 | "include_colab_link": true
12 | },
13 | "kernelspec": {
14 | "name": "python3",
15 | "display_name": "Python 3"
16 | }
17 | },
18 | "cells": [
19 | {
20 | "cell_type": "markdown",
21 | "metadata": {
22 | "id": "view-in-github",
23 | "colab_type": "text"
24 | },
25 | "source": [
26 | ""
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "metadata": {
32 | "id": "X38L6tanrnrB"
33 | },
34 | "source": [
35 | "### Mobilize Center & Restore Center @ Stanford Tutorial\n",
36 | "# Estimating Energy Expenditure During Exercise \n",
37 | "\n",
38 | "Physical inactivity is the fourth leading cause of global mortality. Health organizations have requested a tool to objectively measure physical activity. Respirometry and doubly labeled water accurately estimate energy expenditure, but are infeasible for everyday use. Smartwatches are portable, but have significant errors. Existing wearable methods poorly estimate time-varying activity, which comprises 40% of daily steps. We developed a Wearable System that estimates metabolic energy expenditure in real-time during common steady-state and time-varying activities with substantially lower error than state-of-the-art methods. \n",
39 | "\n",
40 | "The Wearable System uses inertial measurement units worn on the shank and thigh as they distinguish lower-limb activity better than wrist or trunk kinematics and converge more quickly than physiological signals. When evaluated with a diverse group of new subjects, the Wearable System has a cumulative error of 13% across common activities, significantly less than 42% for a smartwatch and 44% for an activity-specific smartwatch. This approach enables accurate physical activity monitoring which could enable new energy balance systems for weight management or large-scale activity monitoring. We will walk through how we process data from the IMUs and perform estimation of energy expenditure.\n",
41 | "\n",
42 | "##Tutorial Overview\n",
43 | "**In this notebook we illustrate how to analyze human motion measured by two inertial measurement units worn on one leg to estimate the calories burned during exercise.** The notebook provides a pipeline that you can adapt to get started with IMU-based energy estimation projects.\n",
44 | "\n",
45 | "This work is designed to capture energy expenditure during cyclic motions, for example, walking, running, stair climbing, and biking. These cyclic motions allow for understanding periodic and repetitive movements associated with energy expenditure. For tasks with one-off movements like weight lifting, this approach is not well-suited. Here we focus on lower-limb movement, but this approach could capture upper-limb movement with additional sensors worn on the appropriate limb. We also are only able to estimate aerobic activities because we rely on respirometry data as a ground truth signal, which is inaccurate for measuring for anaerobic activites such as sprinting or weight lifting.\n",
46 | "\n",
47 | "The notebook is a part of the [Mobilize Center](https://mobilize.stanford.edu) webinar series, and is jointly offered with the [Restore Center](https://restore.stanford.edu). The Mobilize Center is an NIH-funded Biomedical Technology Resource Center which provides tools and training to help researchers produce insights from wearables, video, medical images, and other data sources. The Restore Center is an NIH Medical Rehabilitation Research Resource Network Center which is creating a worldwide collaboration to advance the use of real-world data in rehabilitation outcomes for those with movement impairments.\n",
48 | "\n",
49 | "##Background and Citation\n",
50 | "Most of the [code](https://github.com/pslade2/EnergyExpenditure) in this notebook comes from our study on estimating energy expenditure during exercise. To cite our work, please use:\n",
51 | "\n",
52 | "*Slade, P., Kochenderfer, M.J., Delp, S.L., Collins, S.. Sensing leg movement enhances wearable monitoring of energy expenditure. Nat Commun 12, 4312 (2021).*\n",
53 | "\n",
54 | "Learn more about this work:\n",
55 | "* [Read our publication](https://www.nature.com/articles/s41467-021-24173-x)\n",
56 | "* [See our hardware documentation](https://spectrum.ieee.org/a-diy-calorie-counter-more-accurate-than-a-smartphone)"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {
62 | "id": "PBzVrpDXG2s5"
63 | },
64 | "source": [
65 | "## Learning Goals\n",
66 | "\n",
67 | "In this tutorial notebook, you will learn how to:\n",
68 | "\n",
69 | "* Perform the data collection protocol for this approach\n",
70 | " - Select sensors\n",
71 | " - Place sensors\n",
72 | "* Interpret inertial measurement unit data for lower-extremity movement\n",
73 | "* Preprocess movement data for the energy expenditure algorithm\n",
74 | " - Segment cyclic movement by gait cycle\n",
75 | " - Filter lower-limb movement data\n",
76 | " - Format data for each gait cycle\n",
77 | "* Estimate energy expenditure\n",
78 | " - Use the model to estimate energy expenditure\n",
79 | " - Compare estimates against ground truth\n",
80 | " - Develop intuition into the model \n",
81 | " - Explore time-varying estimation\n",
82 | " "
83 | ]
84 | },
85 | {
86 | "cell_type": "markdown",
87 | "metadata": {
88 | "id": "xM_Vf8l4f5iE"
89 | },
90 | "source": [
91 | "# Setup\n",
92 | "\n"
93 | ]
94 | },
95 | {
96 | "cell_type": "markdown",
97 | "metadata": {
98 | "id": "GsH_4sdoQPgB"
99 | },
100 | "source": [
101 | "\n",
102 | "## Introduction to Google Colab\n",
103 | "\n",
104 | "Google Colab is a cloud-based environment for running Python code interactively (via Jupyter notebooks, for those who are familiar with those). If you are new to Colab, you can learn about the key features in [this tutorial](https://colab.research.google.com/notebooks/basic_features_overview.ipynb). For the purposes of our tutorial, you only need to understand how to interact with the \"code cells.\" "
105 | ]
106 | },
107 | {
108 | "cell_type": "markdown",
109 | "source": [
110 | "\n",
111 | "## Installation\n",
112 | "Run the code below to install and load the necessary packages and the data. The data for this tutorial are stored in a [Github repository](https://github.com/stanfordnmbl/mobilize-tutorials-data). It is done when you see `Unpacking objects: 100% (19/19), done.`"
113 | ],
114 | "metadata": {
115 | "id": "DQ8KtFKspj10"
116 | }
117 | },
118 | {
119 | "cell_type": "code",
120 | "source": [
121 | "# Installing and loading necessary packages and data files\n",
122 | "from scipy import signal\n",
123 | "import numpy as np\n",
124 | "import time\n",
125 | "import os\n",
126 | "import sys\n",
127 | "import matplotlib.pyplot as plt\n",
128 | "%matplotlib inline\n",
129 | "from google.colab import drive\n",
130 | "!git clone https://github.com/stanfordnmbl/mobilize-tutorials-data.git\n",
131 | "drive_path = '/content/mobilize-tutorials-data/' # path to folder containing code and data"
132 | ],
133 | "metadata": {
134 | "id": "QqLcHEddgJlE"
135 | },
136 | "execution_count": null,
137 | "outputs": []
138 | },
139 | {
140 | "cell_type": "markdown",
141 | "source": [
142 | "# Protocol for Collecting Data"
143 | ],
144 | "metadata": {
145 | "id": "R84zIq7dgjmk"
146 | }
147 | },
148 | {
149 | "cell_type": "markdown",
150 | "source": [
151 | "## Sensors and Hardware\n",
152 | "\n",
153 | "The wearable system we used to estimate energy expenditure consisted of two inertial measurement unit (IMU) sensors, a microcontroller (Raspberry Pi), and rechargeable battery. The details of the components we selected are available in [this article](https://spectrum.ieee.org/a-diy-calorie-counter-more-accurate-than-a-smartphone). \n",
154 | "\n",
155 | "The components we have selected can be replaced with others, but may require modifications to the code we provide. In order to provide sufficient sensing to capture dynamic movement like running, you should select inertial measurement units that can measure +/-16G of acceleration, +/- 2000 degrees per second of rotation, and that can sample data at a rate of at least 50 Hz.\n",
156 | "\n",
157 | "## Placement of Sensors\n",
158 | "\n",
159 | "The IMUs were placed on the lateral side of the left leg, one in the middle of the thigh, and the other on the middle of the shank. The inertial measurement units consist of a three-axis accelerometer (accel) and a three-axis gyroscope (gyro). The IMUs were oriented so that the X, Y, and Z directions represent the fore-aft, mediolateral, and vertical axes, respectively. These orientations are assigned during quiet standing, but the sensors are attached to different body parts that move relative to the global reference frame. \n",
160 | "\n",
161 | "It is important to note that the accuracy of the calorie estimates are sensitive to the placement of the IMUs. For example, misalignment of the three axes changed estimates by a large amount, so the training data for the data-driven model was augmented with random small rotations to provide a model that was more robust to small errors in placement. Translation of the sensor along the vertical axis resulted in only very small changes in error. \n",
162 | "\n",
163 | "For use with other patient populations, we recommend following the axes alignment as closely as possible. Making distal or proximal translations to sensor placement should have little impact on the device performance. "
164 | ],
165 | "metadata": {
166 | "id": "FuEnMjXMkSne"
167 | }
168 | },
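{
"cell_type": "markdown",
"source": [
"The optional cell below is an illustrative sketch of the rotation-augmentation idea mentioned above: it applies a small random rotation to synthetic three-axis IMU data, mimicking a slightly misaligned sensor. The ±5 degree range and the synthetic data are assumptions for illustration only; they are not the exact augmentation used in the study."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional sketch: simulate a small sensor misalignment by rotating synthetic 3-axis IMU data\n",
"# (the +/- 5 degree range and the random data below are illustrative assumptions)\n",
"import numpy as np\n",
"from scipy.spatial.transform import Rotation as R\n",
"\n",
"rng = np.random.default_rng(0)\n",
"synthetic_gyro = rng.normal(size=(1000, 3))  # placeholder for (samples x 3 axes) gyroscope data\n",
"\n",
"small_rotation = R.from_euler('xyz', rng.uniform(-5, 5, size=3), degrees=True)\n",
"augmented_gyro = small_rotation.apply(synthetic_gyro)  # rotated copy, as if the sensor were slightly misaligned\n",
"print(augmented_gyro.shape)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},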
169 | {
170 | "cell_type": "markdown",
171 | "source": [
172 | "# Interpreting Inertial Measurement Unit Data for Lower-Extremity Movement\n",
173 | "\n"
174 | ],
175 | "metadata": {
176 | "id": "w4EW9wzigqlE"
177 | }
178 | },
179 | {
180 | "cell_type": "markdown",
181 | "source": [
182 | "## Loading and Formatting Time Series IMU Data\n",
183 | "\n",
184 | "We will now load pre-recorded IMU data into the variable `raw_IMU_data`. This data is the raw time series data recorded at 100 Hz during walking. We recommend a minimum sampling rate of 50 Hz to capture dynamic activities like running. The data should be stored in the following format:"
185 | ],
186 | "metadata": {
187 | "id": "pHFF-akq8-wv"
188 | }
189 | },
190 | {
191 | "cell_type": "markdown",
192 | "source": [
193 | ""
194 | ],
195 | "metadata": {
196 | "id": "qqWn6Ayl7hUz"
197 | }
198 | },
199 | {
200 | "cell_type": "code",
201 | "source": [
202 | "# You can edit this first variable to choose which activity data you want to run (walk, run, bike, stairs)\n",
203 | "# Options for dataset_name in the line below: [walk_sample.csv, run_sample.csv, stairs_sample.csv, bike_sample.csv]\n",
204 | "dataset_name = 'walk_sample.csv'\n",
205 | "data_path = drive_path+'energyExpenditureData/' + dataset_name\n",
206 | "raw_IMU_data = np.loadtxt(data_path, delimiter=',')\n",
207 | "print(raw_IMU_data.shape)"
208 | ],
209 | "metadata": {
210 | "id": "cs5H1YH3NHKn"
211 | },
212 | "execution_count": null,
213 | "outputs": []
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "source": [
218 | "Now we have loaded the CSV file containing the walking data sample. \n",
219 | "\n",
220 | "We have also printed out the size of the matrix containing the data. The size indicates that there are 13 time-series signals (columns) with 35,500 samples (rows) for each signal. These signals, in order, are:\n",
221 | "- The three signals from the gyroscope on the shank (x,y,z axes signals)\n",
222 | "- The three signals from the gyroscope on the thigh (x,y,z axes signals)\n",
223 | "- The three signals from the accelerometer on the thigh (x,y,z axes signals) \n",
224 | "- The three signals from the accelerometer on the shank (x,y,z axes signals)\n",
225 | "- One signal to indicate corrupted messages \n",
226 | "\n",
227 | "Since we sampled data at 100 Hz (or 100 samples/sec), that means this file is approximately 355 seconds or about 6 minutes. \n",
228 | "\n",
229 | "If you would like to use sensor data from a different sensor, format the samples and signals into the same rows and columns format defined here to be able to use the following processing code."
230 | ],
231 | "metadata": {
232 | "id": "iRwsQrItNHaX"
233 | }
234 | },
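{
"cell_type": "markdown",
"source": [
"The optional cell below sketches one way to assemble your own recordings into the (samples x 13) layout described above. The arrays (`my_shank_gyro`, `my_thigh_gyro`, and so on) are hypothetical placeholders; replace them with your own sensor readings in the units and axis conventions described earlier."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional sketch: assemble your own recordings into the (samples x 13) layout used in this tutorial\n",
"# (all arrays below are hypothetical placeholders)\n",
"import numpy as np\n",
"\n",
"n_samples = 1000\n",
"my_shank_gyro = np.zeros((n_samples, 3))   # x, y, z rates of rotation (deg/s)\n",
"my_thigh_gyro = np.zeros((n_samples, 3))\n",
"my_thigh_accel = np.zeros((n_samples, 3))  # x, y, z accelerations (m/s^2)\n",
"my_shank_accel = np.zeros((n_samples, 3))\n",
"corrupted_flag = np.zeros((n_samples, 1))  # final column flags corrupted messages\n",
"\n",
"my_IMU_data = np.hstack([my_shank_gyro, my_thigh_gyro, my_thigh_accel, my_shank_accel, corrupted_flag])\n",
"print(my_IMU_data.shape)  # (1000, 13), matching the format expected by the processing code below"
],
"metadata": {},
"execution_count": null,
"outputs": []
},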
235 | {
236 | "cell_type": "markdown",
237 | "source": [
238 | "## Understanding IMU Signals During Lower-limb Movement\n",
239 | "\n",
240 | "Now we will plot the different signals to visualize the raw data. "
241 | ],
242 | "metadata": {
243 | "id": "20HT8mkq9GoW"
244 | }
245 | },
246 | {
247 | "cell_type": "code",
248 | "source": [
249 | "plot_start = 5000 # Sample number to start plotting from\n",
250 | "plot_length = 500 # Number of samples to plot (seconds * 100)\n",
251 | "\n",
252 | "plt.figure() \n",
253 | "plt.title('Gyroscope signals on the shank')\n",
254 | "plt.plot(raw_IMU_data[plot_start:plot_start+plot_length,0:3])\n",
255 | "plt.xlabel('Samples (100 Hz)')\n",
256 | "plt.ylabel('Rate of rotation (deg/s)')\n",
257 | "plt.legend(['x','y','z'])\n",
258 | "plt.show()\n",
259 | "\n",
260 | "plt.figure() \n",
261 | "plt.title('Gyroscope signals on the thigh')\n",
262 | "plt.plot(raw_IMU_data[plot_start:plot_start+plot_length,3:6])\n",
263 | "plt.xlabel('Samples (100 Hz)')\n",
264 | "plt.ylabel('Rate of rotation (deg/s)')\n",
265 | "plt.legend(['x','y','z'])\n",
266 | "plt.show()\n",
267 | "\n",
268 | "plt.figure() \n",
269 | "plt.title('Accelerometer signals on the thigh')\n",
270 | "plt.plot(raw_IMU_data[plot_start:plot_start+plot_length,6:9])\n",
271 | "plt.xlabel('Samples (100 Hz)')\n",
272 | "plt.ylabel('Accelerations (m/s^2)')\n",
273 | "plt.legend(['x','y','z'])\n",
274 | "plt.show()\n",
275 | "\n",
276 | "plt.figure() \n",
277 | "plt.title('Accelerometer signals on the shank')\n",
278 | "plt.plot(raw_IMU_data[plot_start:plot_start+plot_length,9:12])\n",
279 | "plt.xlabel('Samples (100 Hz)')\n",
280 | "plt.ylabel('Accelerations (m/s^2)')\n",
281 | "plt.legend(['x','y','z'])\n",
282 | "plt.show()"
283 | ],
284 | "metadata": {
285 | "id": "DL3F5d_WD7df"
286 | },
287 | "execution_count": null,
288 | "outputs": []
289 | },
290 | {
291 | "cell_type": "markdown",
292 | "source": [
293 | "Notice that the signals are cyclic, repeating approximately every 120 samples. This is because as the person walks, their leg moves in the same way with each step. Motion in the sagittal plane is associated with the z-axis measurements. We can see that these signals have the largest rates of rotation. The vertical axis measurements of acceleration (y-axis) show large spikes when the leg comes in contact with the ground."
294 | ],
295 | "metadata": {
296 | "id": "ShHpfOoOtFVc"
297 | }
298 | },
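{
"cell_type": "markdown",
"source": [
"The optional cell below is a quick sanity check of that period: it computes a simple autocorrelation of the shank gyroscope z-axis signal and reports the lag of the strongest repeat. The 0.5-4 second lag search range is an assumption chosen for walking data sampled at 100 Hz."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional sketch: estimate the gait-cycle period of the shank gyro z-axis via autocorrelation\n",
"# (the 0.5-4 s lag search range is an assumption suited to walking at 100 Hz)\n",
"segment = raw_IMU_data[plot_start:plot_start + 2000, 2]  # shank gyroscope z-axis\n",
"segment = segment - segment.mean()\n",
"autocorr = np.correlate(segment, segment, mode='full')[len(segment) - 1:]\n",
"lag = np.argmax(autocorr[50:400]) + 50  # skip the zero-lag peak\n",
"print(f'Estimated gait cycle duration: ~{lag} samples ({lag / 100:.2f} s at 100 Hz)')"
],
"metadata": {},
"execution_count": null,
"outputs": []
},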
299 | {
300 | "cell_type": "markdown",
301 | "source": [
302 | "# Preprocessing Movement Data\n",
303 | "\n",
304 | "Next we will look at steps to preprocess our IMU data to extract the essential information about the movements. This will include filtering the signals to remove noise and segmenting the periodic motions into strides, also called gait cycles."
305 | ],
306 | "metadata": {
307 | "id": "vtDDm_QBglnM"
308 | }
309 | },
310 | {
311 | "cell_type": "markdown",
312 | "source": [
313 | "## Filter Lower-Limb Movement Data\n",
314 | "\n",
315 | "You'll notice that the raw IMU signals do not appear very smooth. Most have small perturbations, which is often called sensor noise. This noise is due to many factors including the sensitivity of the sensors (even resting sensors have noise) and high frequency components of motion, such as slight vibration or shifting of the sensor when in contact with the person due to shifting clothing or skin.\n",
316 | "\n",
317 | "We will use a low-pass filter to eliminate these higher frequencies of noise and isolate only a portion of lower-frequency signal associated with movement. Here, we use a common low-pass Butterworth filter to isolate a signal of 6 Hz or less, which is standard for filtering movement data for walking or running. This cutoff works well for our tasks of walking, running, biking, and stair climbing. You may want to pick a higher value for higher speed tasks like sprinting."
318 | ],
319 | "metadata": {
320 | "id": "kifafd1e9q_m"
321 | }
322 | },
323 | {
324 | "cell_type": "code",
325 | "source": [
326 | "# Design the filter\n",
327 | "wn = 6 # Crossover frequency for low-pass filter (Hz)\n",
328 | "filt_order = 4 # Fourth order filter\n",
329 | "sample_freq = 100 # in Hz *Change this value if you use a different sample frequency*\n",
330 | "b,a = signal.butter(filt_order, wn, fs=sample_freq) # params for low-pass filter\n",
331 | "\n",
332 | "# Filter data\n",
333 | "filtered_data = signal.filtfilt(b,a,raw_IMU_data[:,:-1], axis=0)\n",
334 | "\n",
335 | "# Plot filtered and raw gyroscope shank z-axis signal\n",
336 | "shank_gyro_z_index = 2 # column number for z-axis of shank gyroscope readings used to segment gait cycle\n",
337 | "\n",
338 | "plt.figure() \n",
339 | "plt.title('Gyroscope shank z-axis signal')\n",
340 | "plt.plot(raw_IMU_data[plot_start:plot_start+plot_length,shank_gyro_z_index])\n",
341 | "plt.plot(filtered_data[plot_start:plot_start+plot_length,shank_gyro_z_index])\n",
342 | "plt.xlabel('Samples (100 Hz)')\n",
343 | "plt.ylabel('Rate of rotation (deg/s)')\n",
344 | "plt.legend(['raw','filtered'])\n",
345 | "plt.show()"
346 | ],
347 | "metadata": {
348 | "id": "PM0iXYRO9tt_"
349 | },
350 | "execution_count": null,
351 | "outputs": []
352 | },
353 | {
354 | "cell_type": "markdown",
355 | "source": [
356 | "\n",
357 | "Notice in the plot above, the filtered signal (in orange) appears much smoother but retains the same general shape of the unfiltered signal (in blue)."
358 | ],
359 | "metadata": {
360 | "id": "WkvYwVnZwjt9"
361 | }
362 | },
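{
"cell_type": "markdown",
"source": [
"The optional cell below plots the frequency response of the Butterworth filter designed above, to show which frequencies the 6 Hz cutoff keeps and which it attenuates. (Because `filtfilt` applies the filter forward and backward, the effective attenuation in the filtered data is roughly double, in dB, what a single pass shows.)"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional sketch: frequency response of the low-pass filter (single pass)\n",
"w, h = signal.freqz(b, a, fs=sample_freq)\n",
"plt.figure()\n",
"plt.plot(w, 20 * np.log10(np.maximum(np.abs(h), 1e-10)))  # magnitude in dB\n",
"plt.axvline(wn, color='k', ls='--')  # 6 Hz cutoff\n",
"plt.xlabel('Frequency (Hz)')\n",
"plt.ylabel('Gain (dB)')\n",
"plt.title('Butterworth low-pass filter response')\n",
"plt.show()"
],
"metadata": {},
"execution_count": null,
"outputs": []
},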
363 | {
364 | "cell_type": "markdown",
365 | "source": [
366 | "## Segmenting Cyclic Movement by Gait Cycle\n",
367 | "\n",
368 | "Next, we'll segment periodic movements by gait cycle. This is often done with a measure of ground reaction forces when the heel of the foot strikes the ground. \n",
369 | "\n",
370 | "With wearable sensors like IMUs, we typically do not have that data. We will instead use change in rotation of the leg right before heelstrike, specifically, the sagittal plane rotation of the shank IMU (the gyroscope z-axis signal shown in the plot above). Immediately before heel strike the leg stops swinging forward and starts to move backwards. This change in rotational direction creates a peak in the angular velocity of the shank.\n",
371 | "\n",
372 | "To detect the peak, we use a simple `find_peaks` function. The function depends on two parameters to automatically segment this movement:\n",
373 | "\n",
374 | "**Threshold selection:** We selected a threshold of 70 deg/s which reliably segmented gait cycles from slow walking at 0.7 m/s to running at 3.25 m/s and worked well for cycling and stair climbing. If you plan to use this approach for participants moving very slowly, you may need to adjust this threshold. \n",
375 | "\n",
376 | "**Minimum distance between peaks:** The minimum distance between peaks corresponds to the fastest expected gait cycle. Here we selected a minimum gait cycle of 0.6 seconds, which was a smaller value than the shortest gait cycle we recorded. If you plan to evaluate motion that is much faster or slower, you may need to adjust this value. \n",
377 | "\n",
378 | "Visual inspection was performed on motion data for all the activities we intended to estimate energy expenditure. If you would like to include additional activities, we recommend using trial and error to tune the threshold and minimum distance between peaks to determine what parameters work best for segmenting gait cycles."
379 | ],
380 | "metadata": {
381 | "id": "D-0fXeGr9lwP"
382 | }
383 | },
384 | {
385 | "cell_type": "code",
386 | "source": [
387 | "peak_height_thresh = 70 # minimum value of the shank IMU gyro-Z reading in deg/s\n",
388 | "peak_min_dist = int(0.6*sample_freq) # min number of samples between peaks\n",
389 | "\n",
390 | "# Find peaks\n",
391 | "peak_index_list = signal.find_peaks(filtered_data[:,shank_gyro_z_index], height=peak_height_thresh, distance=peak_min_dist)[0]\n",
392 | "peak_index_list_plot = signal.find_peaks(filtered_data[plot_start:plot_start+plot_length,shank_gyro_z_index], height=peak_height_thresh, distance=peak_min_dist)[0]\n",
393 | "\n",
394 | "# Plot\n",
395 | "plt.figure() \n",
396 | "plt.title('Gyroscope shank z-axis signal')\n",
397 | "plt.plot(filtered_data[plot_start:plot_start+plot_length,shank_gyro_z_index])\n",
398 | "axes = plt.gca()\n",
399 | "ymin, ymax = axes.get_ylim()\n",
400 | "xmin, xmax = axes.get_xlim()\n",
401 | "plt.plot([xmin,xmax],[peak_height_thresh, peak_height_thresh], color='k',ls='--')\n",
402 | "for i in peak_index_list_plot:\n",
403 | " plt.plot([i,i],[ymin,ymax],color='k')\n",
404 | "plt.xlabel('Samples (100 Hz)')\n",
405 | "plt.ylabel('Rate of rotation (deg/s)')\n",
406 | "plt.legend(['filtered signal','threshold','detected heelstrikes'],loc='lower right')\n",
407 | "plt.show()"
408 | ],
409 | "metadata": {
410 | "id": "T3QP__Pk9qsn"
411 | },
412 | "execution_count": null,
413 | "outputs": []
414 | },
415 | {
416 | "cell_type": "markdown",
417 | "source": [
418 | "## Format Data for Each Gait Cycle\n",
419 | "\n",
420 | "Now that we have segmented movement by gait cycle, we will format the data to pass into our estimation model. The duration of gait cycles changes depending on walking speed and activity, but our estimation model uses a fixed input size. We perform a time-invariant formatting to adjust varying lengths of gait cycles into one fixed size, by averaging the measurements across a gait cycle into a set number of discrete bins. \n",
421 | "\n",
422 | "In this case we use 30 bins to capture sufficient detail about the movement. Our experiments found that increasing the number of bins did not improve model performance for our application. Fewer bins resulted in worse performance, likely due to a loss of detailed information about movements.\n",
423 | "\n",
424 | "We also use a special condition where a gait cycle is ignored if it takes longer than the 4 second time window used to detect gait cycles. This was to prevent fringe cases where motion was not cyclic, such as someone pausing mid-step for a long time, or outside the activities we intended to capture. In this case, the slowest speed of stair walking had a gait cycle of about 2.4 seconds, so we were able to detect all gait cycles within 4 seconds.\n",
425 | "\n",
426 | "Run the following cell to process the time-series data into a time-invariant format. "
427 | ],
428 | "metadata": {
429 | "id": "4T-1yTgJ9uGG"
430 | }
431 | },
432 | {
433 | "cell_type": "code",
434 | "source": [
435 | "# Variables to change depending on your data\n",
436 | "weight = 68 # participant bodyweight in kg\n",
437 | "height = 1.74 # participant height in m\n",
438 | "num_bins = 30 # number of bins to discretize one gait cycle of data from each signal into a fixed size\n",
439 | "stride_detect_window = 4*sample_freq # maximum time length for strides (seconds * samples/second)\n",
440 | "\n",
441 | "# Initializing variables and useful parameters\n",
442 | "gait_cnt = 0 # keep track of the number of gait cycles\n",
443 | "gait_data = np.zeros((3+12*num_bins, len(peak_index_list)-1)) # storage for gait cycles of formatted data \n",
444 | "deg2rad = 0.0174533 # rad / deg\n",
445 | "\n",
446 | "# process time series data, format units, and discretize each gait cycle into a fixed number of discrete bins\n",
447 | "def processRawGait(data_array, start_ind, end_ind, b, a, weight, height, num_bins=30):\n",
448 | " gait_data = data_array[start_ind:end_ind, :] # crop to the gait cycle\n",
449 | " gait_data = gait_data*np.array([deg2rad,-deg2rad,-deg2rad,deg2rad,-deg2rad,-deg2rad,1,-1,-1,1,-1,-1]) # flip y & z, convert to rad/s\n",
450 | " bin_gait = signal.resample(gait_data, num_bins, axis=0) # resample gait cycle of data into fixed number of bins along axis = 0 by default\n",
451 | " shift_flip_bin_gait = bin_gait.transpose() # get in shape of [feats x bins] for correct flattening\n",
452 | " model_input = shift_flip_bin_gait.flatten()\n",
453 | " model_input = np.insert(model_input, 0, [1.0, weight, height]) # adding a 1 for the bias term at start\n",
454 | " return model_input\n",
455 | "\n",
456 | "for i in range(len(peak_index_list)-1): # looping through each gait cycle of data\n",
457 | " gait_start_index = peak_index_list[i] # index at start of gait\n",
458 | " gait_stop_index = peak_index_list[i+1] # index at end of gait\n",
459 | " if (gait_stop_index - gait_start_index) <= stride_detect_window: # if gait cycle within maximum time allowed\n",
460 | " gait_data[:,gait_cnt] = processRawGait(filtered_data, gait_start_index, gait_stop_index, b, a, weight, height, num_bins)\n",
461 | " gait_cnt += 1 # increment number of gait cycles of data stored\n",
462 | "gait_data = gait_data[:,:gait_cnt] # get rid of empty storage\n",
463 | "\n",
464 | "# Plotting discretized and continuous sampled data for one gait cycle\n",
465 | "gait_cycle_to_plot = 1\n",
466 | "plt.figure() \n",
467 | "plt.title('Discretized and sampled data for gyroscope shank z-axis signal')\n",
468 | "start_gait_index = peak_index_list[gait_cycle_to_plot]\n",
469 | "stop_gait_index = peak_index_list[gait_cycle_to_plot+1]\n",
470 | "plt.plot(filtered_data[start_gait_index:stop_gait_index,shank_gyro_z_index])\n",
471 | "\n",
472 | "# plotting discretized data\n",
473 | "gait_length = stop_gait_index - start_gait_index\n",
474 | "x = np.linspace(start=0,stop=gait_length,num=num_bins+1)\n",
475 | "bin = 2\n",
476 | "plt.plot(x[:-1], gait_data[3+num_bins*bin:3+num_bins*(bin+1),gait_cycle_to_plot]*(-1/deg2rad))\n",
477 | "plt.xticks(ticks=[0,gait_length],labels=[0,100])\n",
478 | "plt.xlabel('Gait cycle (%)')\n",
479 | "plt.ylabel('Rate of rotation (deg/s)')\n",
480 | "plt.legend(['sampled data', 'discretized data'])\n",
481 | "plt.show()\n",
482 | "\n",
483 | "print(\"Format for processed gait data:\",gait_data.shape)"
484 | ],
485 | "metadata": {
486 | "id": "jQGbRGFwuZ5C"
487 | },
488 | "execution_count": null,
489 | "outputs": []
490 | },
491 | {
492 | "cell_type": "markdown",
493 | "source": [
494 | "Notice the shape of the processed gait data is (363, X). The length of processed data for each gait cycle is 363 values. The first three are the model bias parameter, the subject weight, and the subject height. Then the next 360 values are comprised of the 30 bins recording one gait cycle of data for the 12 signals of the thigh and shank IMUs. The value for X is the number of gait cycles recorded during the condition."
495 | ],
496 | "metadata": {
497 | "id": "1tIcMsqbPjbY"
498 | }
499 | },
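{
"cell_type": "markdown",
"source": [
"As a quick illustration of this layout, the sketch below slices out the 30 bins of one signal for one gait cycle, using the signal ordering listed earlier. The thigh accelerometer y-axis is chosen here purely as an example."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: recover one signal's 30 bins from the 363-value gait-cycle vector\n",
"# Signal order: shank gyro x,y,z (0-2), thigh gyro x,y,z (3-5), thigh accel x,y,z (6-8), shank accel x,y,z (9-11)\n",
"signal_index = 7  # thigh accelerometer y-axis, chosen here just as an example\n",
"cycle_index = 0   # first stored gait cycle\n",
"one_signal_bins = gait_data[3 + num_bins * signal_index : 3 + num_bins * (signal_index + 1), cycle_index]\n",
"print(one_signal_bins.shape)  # (30,)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},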
500 | {
501 | "cell_type": "markdown",
502 | "source": [
503 | "# Estimating Energy Expenditure\n",
504 | "\n",
505 | "We will now load in the model weights and pass the formatted data into the data-driven model to estimate the energy expended with each gait cycle."
506 | ],
507 | "metadata": {
508 | "id": "WiVPg349gtnE"
509 | }
510 | },
511 | {
512 | "cell_type": "code",
513 | "source": [
514 | "# Load in model weights\n",
515 | "model_dir = drive_path+'energyExpenditureData/full_fold_aug/' # path to model weights\n",
516 | "model_weights = np.loadtxt(model_dir + 'weights.csv', delimiter=',') # model weight vector\n",
517 | "weights_unnorm = model_weights[3:] # get rid of bias and height/weight offset\n",
518 | "unnorm_weights = np.reshape(weights_unnorm,(-1,num_bins))\n",
519 | "\n",
520 | "# Apply model to data to estimate energy expended\n",
521 | "estimates = np.zeros(gait_cnt)\n",
522 | "for i in range(gait_cnt):\n",
523 | " estimates[i] = np.dot(gait_data[:,i], model_weights)\n",
524 | "\n",
525 | "# Plot energy estimates\n",
526 | "plt.figure()\n",
527 | "plt.plot(estimates)\n",
528 | "plt.title(\"Energy expenditure per gait cycle\")\n",
529 | "plt.xlabel(\"Gait cycles\")\n",
530 | "plt.ylabel(\"Energy expenditure (W)\")\n",
531 | "plt.show()"
532 | ],
533 | "metadata": {
534 | "id": "5mchOYxn-PxO"
535 | },
536 | "execution_count": null,
537 | "outputs": []
538 | },
539 | {
540 | "cell_type": "markdown",
541 | "source": [
542 | "We will now compare the energy expenditure estimates to ground truth respirometry measurements collected for walking in the lab. Note that if you re-run with another activity of data this plot will still compare it to the respirometry recorded during walking."
543 | ],
544 | "metadata": {
545 | "id": "gKW3rGGuTzOg"
546 | }
547 | },
548 | {
549 | "cell_type": "code",
550 | "source": [
551 | "metabolics_breaths = np.loadtxt(drive_path+'energyExpenditureData/walking_metabolics.csv', delimiter=',', skiprows=1) \n",
552 | "avg_metabolics = np.mean(metabolics_breaths[len(metabolics_breaths)//2:])\n",
553 | "\n",
554 | "plt.figure()\n",
555 | "plt.plot(estimates)\n",
556 | "plt.title(\"Energy expenditure per gait cycle\")\n",
557 | "plt.xlabel(\"Gait cycles\")\n",
558 | "plt.ylabel(\"Energy expenditure (W)\")\n",
559 | "plt.plot(np.linspace(start=0,stop=len(estimates),num=len(metabolics_breaths)),metabolics_breaths, c='k', alpha=0.5)\n",
560 | "plt.plot([0,len(estimates)],[avg_metabolics, avg_metabolics], c='k')\n",
561 | "plt.legend(['estimates per gait cycle', 'respirometry per breath','steady-state respirometry'])\n",
562 | "avg_energy_expenditure_estimate = np.mean(estimates)\n",
563 | "plt.plot([0,len(estimates)],[avg_energy_expenditure_estimate, avg_energy_expenditure_estimate], c='b')\n",
564 | "plt.legend(['estimates per gait cycle', 'respirometry per breath','steady-state respirometry', 'steady-state estimates'])\n",
565 | "plt.show()"
566 | ],
567 | "metadata": {
568 | "id": "Pj6y4X4tzPMk"
569 | },
570 | "execution_count": null,
571 | "outputs": []
572 | },
573 | {
574 | "cell_type": "markdown",
575 | "source": [
576 | "We see that the steady-state respirometry (in black) is quite close to the estimated energy expenditure from our model (in blue). You will also notice a slight delay in the respirometry. "
577 | ],
578 | "metadata": {
579 | "id": "pqHzrtqx48Eb"
580 | }
581 | },
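{
"cell_type": "markdown",
"source": [
"The sketch below puts \"quite close\" in numbers by computing the percent difference between the steady-state model estimate and the steady-state respirometry average, both of which were computed in the cells above."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: percent error of the steady-state estimate relative to steady-state respirometry\n",
"percent_error = 100 * abs(avg_energy_expenditure_estimate - avg_metabolics) / avg_metabolics\n",
"print(f'Steady-state estimation error: {percent_error:.1f}%')"
],
"metadata": {},
"execution_count": null,
"outputs": []
},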
582 | {
583 | "cell_type": "markdown",
584 | "source": [
585 | "## Intuition Behind the Data-Driven Model\n",
586 | "\n",
587 | "The Wearable System used a linear regression model to estimate energy expenditure with inputs from the IMU signals worn on the shank and thigh. A larger magnitude weight indicated a more informative input. The input signals are shown in descending order of importance based on contribution to total model weight. Gyroscope inputs were more informative than accelerations. The X, Y, and Z directions represent the fore-aft, mediolateral, and vertical axes, respectively. Run the following cell to visualize the model weights."
588 | ],
589 | "metadata": {
590 | "id": "aZRdjfLdgYT0"
591 | }
592 | },
593 | {
594 | "cell_type": "code",
595 | "source": [
596 | "# @title Plotting model weights\n",
597 | "pelv_x_dir = \" Z\"# dir\"\n",
598 | "pelv_y_dir = \" Y\"# dir\"\n",
599 | "pelv_z_dir = \" -X\"# dir\"\n",
600 | "thigh_x_dir = \" X\"# dir\"\n",
601 | "thigh_y_dir = \" -Z\"#-dir\"\n",
602 | "thigh_z_dir = \" Y\"#-dir\"\n",
603 | "shank_x_dir = \" X\"#-dir\"\n",
604 | "shank_y_dir = \" -Z\"#-dir\"\n",
605 | "shank_z_dir = \" Y\"#-dir\"\n",
606 | "thigh = \"Thigh\"\n",
607 | "shank = \"Shank\"\n",
608 | "accel = \" accel\"\n",
609 | "gryo = \" gyro\"\n",
610 | "\n",
611 | "full_features = ['bias','weight','height',\n",
612 | " shank+gryo+shank_x_dir,shank+gryo+shank_y_dir,shank+gryo+shank_z_dir,\n",
613 | " thigh+gryo+thigh_x_dir,thigh+gryo+thigh_y_dir,thigh+gryo+thigh_z_dir,\n",
614 | " thigh+accel+thigh_x_dir,thigh+accel+thigh_y_dir,thigh+accel+thigh_z_dir,\n",
615 | " shank+accel+shank_x_dir,shank+accel+shank_y_dir,shank+accel+shank_z_dir]\n",
616 | "\n",
617 | "plt.figure()\n",
618 | "plt.imshow(unnorm_weights, cmap='RdBu')\n",
619 | "\n",
620 | "xticks = [0,6.5,14,21.5,29]\n",
621 | "yticks=[0,1,2,3,4,5,6,7,8,9,10,11]\n",
622 | "gait_labels = ['0','25','50','75','100']\n",
623 | "plt.yticks(ticks=yticks, labels=full_features[3:])\n",
624 | "plt.xticks(ticks=xticks, labels=gait_labels)\n",
625 | "plt.ylabel(\"Input signals\")\n",
626 | "plt.xlabel(\"Gait cycle (%)\")\n",
627 | "plt.title(\"Estimation model weights\")\n",
628 | "plt.show()"
629 | ],
630 | "metadata": {
631 | "id": "C2u1KMgF-GH_"
632 | },
633 | "execution_count": null,
634 | "outputs": []
635 | },
636 | {
637 | "cell_type": "markdown",
638 | "source": [
639 | "Here you can see how different signals are weighted during the gait cycle. For example, the thigh gyro (z-direction) has a large negative weight (red) during stance (around 25% of the gait cycle), but large positive weight (blue) during swing (around 75% of the gait cycle)."
640 | ],
641 | "metadata": {
642 | "id": "BEWCGaMnNmXu"
643 | }
644 | },
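{
"cell_type": "markdown",
"source": [
"As a rough proxy for the importance ranking described above, the sketch below sums the absolute model weights across the gait cycle for each input signal and sorts the signals from most to least influential. It uses only the `unnorm_weights` and `full_features` variables defined earlier."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: rank input signals by the sum of absolute model weights across the gait cycle\n",
"signal_importance = np.sum(np.abs(unnorm_weights), axis=1)  # one value per input signal\n",
"ranked = np.argsort(signal_importance)[::-1]  # indices sorted from most to least influential\n",
"for idx in ranked:\n",
"    print(f'{full_features[3 + idx]:<16s} {signal_importance[idx]:.3f}')"
],
"metadata": {},
"execution_count": null,
"outputs": []
},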
645 | {
646 | "cell_type": "markdown",
647 | "source": [
648 | "## Exploring Time-Varying Activities\n",
649 | "\n",
650 | "This model can be used as a person transitions between different activities. In this example, we'll specifically look at a time-varying condition where participants transitioned between walking and running on a treadmill, following a step function speed profile with a period of 30 seconds. Run the code block below to plot the continous measurement of energy expenditure. "
651 | ],
652 | "metadata": {
653 | "id": "BvJ1dkOd-Kjl"
654 | }
655 | },
656 | {
657 | "cell_type": "code",
658 | "source": [
659 | "# @title Energy expenditure when transitioning between walking and running\n",
660 | "walking_data_path = drive_path+'energyExpenditureData/walk_to_run_sample.csv'\n",
661 | "raw_IMU_data = np.loadtxt(walking_data_path, delimiter=',')\n",
662 | "\n",
663 | "b,a = signal.butter(filt_order, wn, fs=sample_freq) # params for low-pass filter\n",
664 | "filtered_data = signal.filtfilt(b,a,raw_IMU_data[:,:-1], axis=0)\n",
665 | "shank_gyro_z_index = 2 # column number for z-axis of shank gyroscope readings used to segment gait cycle\n",
666 | "\n",
667 | "peak_index_list = signal.find_peaks(filtered_data[:,shank_gyro_z_index], height=peak_height_thresh, distance=peak_min_dist)[0]\n",
668 | "peak_index_list_plot = signal.find_peaks(filtered_data[plot_start:plot_start+plot_length,shank_gyro_z_index], height=peak_height_thresh, distance=peak_min_dist)[0]\n",
669 | "\n",
670 | "gait_data = np.zeros((3+12*num_bins, 1000)) # storage for up to 1000 gait cycles of formatted data \n",
671 | "gait_cnt = 0\n",
672 | "for i in range(len(peak_index_list)-1): # looping through each gait cycle of data\n",
673 | " gait_start_index = peak_index_list[i] # index at start of gait\n",
674 | " gait_stop_index = peak_index_list[i+1] # index at end of gait\n",
675 | " if (gait_stop_index - gait_start_index) <= stride_detect_window: # if gait cycle within maximum time allowed\n",
676 | " gait_data[:,gait_cnt] = processRawGait(filtered_data, gait_start_index, gait_stop_index, b, a, weight, height, num_bins)\n",
677 | " gait_cnt += 1 # increment number of gait cycles of data stored\n",
678 | "gait_data = gait_data[:,:gait_cnt] # get rid of empty storage\n",
679 | "\n",
680 | "estimates = np.zeros(gait_cnt)\n",
681 | "for i in range(gait_cnt):\n",
682 | " estimates[i] = np.dot(gait_data[:,i], model_weights)\n",
683 | "\n",
684 | "plt.figure()\n",
685 | "plt.plot(estimates)\n",
686 | "plt.plot([0,len(estimates)],[300,300])\n",
687 | "plt.plot([0,len(estimates)],[750,750])\n",
688 | "\n",
689 | "plt.title(\"Energy expenditure per gait cycle\")\n",
690 | "plt.xlabel(\"Gait cycles\")\n",
691 | "plt.ylabel(\"Energy expenditure (W)\")\n",
692 | "plt.legend(['Per step estimates','Steady-state walking energy','Steady-state running energy'])\n",
693 | "plt.show()"
694 | ],
695 | "metadata": {
696 | "id": "QuoM8cee-P-n"
697 | },
698 | "execution_count": null,
699 | "outputs": []
700 | },
701 | {
702 | "cell_type": "markdown",
703 | "source": [
704 | "Notice how the energy expenditure changes smoothly between between walking and running states."
705 | ],
706 | "metadata": {
707 | "id": "DEl1VQNYQOM2"
708 | }
709 | },
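{
"cell_type": "markdown",
"source": [
"The sketch below builds a time axis from the start index of each detected gait cycle (keeping only the cycles that passed the 4-second window check, so the axis lines up with the stored estimates) and re-plots the estimates against time in seconds."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: re-plot the estimates against time (seconds) instead of gait-cycle index\n",
"# Keep only the start indices of gait cycles that passed the 4-second window check,\n",
"# so the time axis lines up with the stored estimates.\n",
"kept_starts = [peak_index_list[i] for i in range(len(peak_index_list) - 1)\n",
"               if peak_index_list[i + 1] - peak_index_list[i] <= stride_detect_window]\n",
"times_s = np.array(kept_starts) / sample_freq\n",
"\n",
"plt.figure()\n",
"plt.plot(times_s, estimates)\n",
"plt.xlabel('Time (s)')\n",
"plt.ylabel('Energy expenditure (W)')\n",
"plt.title('Energy expenditure estimates over time')\n",
"plt.show()"
],
"metadata": {},
"execution_count": null,
"outputs": []
},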
710 | {
711 | "cell_type": "markdown",
712 | "source": [
713 | "# Exploring the Code on Your Own\n",
714 | "1. What happens if you filter the data at 20 Hz, instead of 6 Hz?\n",
715 | "2. What is the minimum threshold you can use to separate the gait cycles during walking?\n",
716 | "3. Now you can test out data from other activities. Go up to the \"Loading and Formatting Time Series IMU Data\" cell and edit the variable named 'dataset_name' to select running, biking, or stair climbing data. You can change the threshold to separate gait cycles and the filter value to explore how those impact the movement data of other activities."
717 | ],
718 | "metadata": {
719 | "id": "M7ZPIxcY3Rq6"
720 | }
721 | },
722 | {
723 | "cell_type": "markdown",
724 | "source": [
725 | "# Open-Ended Questions\n",
726 | "\n",
727 | "1. How does the model depend on subject specific parameters like height and weight? \n",
728 | "2. How could you use the model to determine the importance of a single signal?\n",
729 | "3. What would happen if two activities had the exact same motion (e.g., biking with a different resistance level but at the same pedaling frequency)?\n",
730 | "4. What additional information could you incorporate to differentiate between activities with similar motions? \n",
731 | "5. How might you include that additional information in the model?\n",
732 | "\n"
733 | ],
734 | "metadata": {
735 | "id": "egrL0leI-Qie"
736 | }
737 | },
738 | {
739 | "cell_type": "markdown",
740 | "metadata": {
741 | "id": "JE3IfCEDo8qo"
742 | },
743 | "source": [
744 | "#Feedback\n",
745 | "\n",
746 | "This notebook is a work-in-progress and we welcome your feedback on how to increase its usefulness. Email comments to us at [mobilize-center@stanford.edu](mailto:mobilize-center@stanford.edu) or submit an issue on GitHub."
747 | ]
748 | },
749 | {
750 | "cell_type": "markdown",
751 | "metadata": {
752 | "id": "ifTa38Spblu9"
753 | },
754 | "source": [
755 | "\n",
756 | "\n",
757 | "---\n",
758 | "\n"
759 | ]
760 | },
761 | {
762 | "cell_type": "markdown",
763 | "metadata": {
764 | "id": "j9gidtZddPYg"
765 | },
766 | "source": [
767 | "*Version* 1.05\n",
768 | "\n",
769 | "Creator: Patrick Slade | Contributors: Patrick Slade, Joy Ku, Matt Petrucci \n",
770 | "Last Updated on May 4, 2022\n",
771 | "\n",
772 | "This notebook is made available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0)."
773 | ]
774 | }
775 | ]
776 | }
--------------------------------------------------------------------------------
/Tutorial_Pose_Estimation_for_Biomechanics.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "Tutorial: Pose Estimation for Biomechanics (Mobilize & Restore Centers @ Stanford)",
7 | "provenance": [],
8 | "collapsed_sections": [
9 | "PBzVrpDXG2s5",
10 | "GsH_4sdoQPgB",
11 | "qRWUKVuuBqqT"
12 | ],
13 | "include_colab_link": true
14 | },
15 | "kernelspec": {
16 | "name": "python3",
17 | "display_name": "Python 3"
18 | },
19 | "accelerator": "GPU"
20 | },
21 | "cells": [
22 | {
23 | "cell_type": "markdown",
24 | "metadata": {
25 | "id": "view-in-github",
26 | "colab_type": "text"
27 | },
28 | "source": [
29 | ""
30 | ]
31 | },
32 | {
33 | "cell_type": "markdown",
34 | "metadata": {
35 | "id": "X38L6tanrnrB"
36 | },
37 | "source": [
38 | "### Mobilize Center & Restore Center @ Stanford Tutorial\n",
39 | "# Pose Estimation for Biomechanics \n",
40 | "\n",
41 | "Quantitative motion analysis is important for the diagnostics of movement disorders and for research. State-of-the-art measurements involve optical motion capture using reflective markers and expensive cameras to capture trajectories of these markers. Though the technique provides high frequency data for assessment, the cost, skills, and time involved limit wide adoption.\n",
42 | "\n",
43 | "Recent advancements in deep learning allow us to very robustly detect body landmarks (such as toes, hips, shoulders, etc.) in images from commodity cameras, such as found in smartphones. We can apply these techniques to videos to derive trajectories of landmarks in time. Recent studies show that these trajectories can be used for some clinical applications, potentially reducing the cost of movement analyses by orders of magnitude and facilitating more frequent assessments.\n",
44 | "\n",
45 | "##Tutorial Overview\n",
46 | "**In this notebook we illustrate how to use one deep learning algorithm to analyze human motion. Specifically, we will extract knee flexion curves and gait cycles from a video of a subject walking.** The notebook is for illustrative purposes only, and multiple improvements should be incorporated to obtain accurate knee flexion angles. However, the notebook does provide a pipeline that you can adapt to get started with video-based pose estimation projects.\n",
47 | "\n",
48 | "The notebook is a part of the [Mobilize Center](https://mobilize.stanford.edu) webinar series, and is jointly offered with the [Restore Center](https://restore.stanford.edu). The Mobilize Center is an NIH-funded Biomedical Technology Resource Center which provides tools and training to help researchers produce insights from wearables, video, medical images, and other data sources. The Restore Center is an NIH Medical Rehabilitation Research Resource Network Center which is creating a worldwide collaboration to advance the use of real-world data in rehabilitation outcomes for those with movement impairments.\n",
49 | "\n",
50 | "##Background and Citation\n",
51 | "Most of the [code](https://github.com/stanfordnmbl/mobile-gaitlab) in this notebook comes from our study on gait analysis in the cerebral palsy population. To cite our work, please use:\n",
52 | "\n",
53 | "*Łukasz Kidziński, Bryan Yang, Jennifer L. Hicks, Apoorva Rajagopal, Scott L. Delp, and Michael H. Schwartz. \"Deep neural networks enable quantitative movement analysis using single-camera videos.\" Nature communications 11, no. 1 (2020): 1-10.*\n",
54 | "\n",
55 | "Learn more about this work:\n",
56 | "* [Read our publication](https://www.nature.com/articles/s41467-020-17807-z)\n",
57 | "* Watch the video abstract of this study below"
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "metadata": {
63 | "id": "Lm4GnG_xGI1b",
64 | "colab": {
65 | "base_uri": "https://localhost:8080/",
66 | "height": 320
67 | },
68 | "outputId": "527fe632-a5e7-44cd-dad7-e95e6406d668"
69 | },
70 | "source": [
71 | "from IPython.display import YouTubeVideo\n",
72 | "YouTubeVideo('pb4WvAhsRe4')"
73 | ],
74 | "execution_count": null,
75 | "outputs": [
76 | {
77 | "output_type": "execute_result",
78 | "data": {
79 | "text/html": [
80 | "\n",
81 | " \n",
88 | " "
89 | ],
90 | "text/plain": [
91 | ""
92 | ],
93 | "image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEABALDBoYFhsaGBoeHRsfISclHyAiIiUlJycmLicxMi4nLS01PVBCNzhLOS0tRmFFS1NWW1xbMkFlbWRYbFBZW1cBERISGRYZMBsbMFc9NT1XV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXWFdXV1dXV1dXV1dXXVdXV1dXV1ddV//AABEIAWgB4AMBIgACEQEDEQH/xAAbAAEAAgMBAQAAAAAAAAAAAAAAAwUBAgQGB//EAEcQAAIBAgMEBQYMBQMEAgMAAAABAgMRBBIhBTFBURMiMmFxUoGRobHRBhQVFkJTYnKSosHhIzM0VPBzgrIkwtLxQ5MlRGP/xAAZAQEBAQEBAQAAAAAAAAAAAAAAAQIDBAX/xAAnEQEBAAIBBAICAgIDAAAAAAAAAQIREgMTITFBUYHwMmGxwQQicf/aAAwDAQACEQMRAD8A+fgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAtlsKbnVSnBQp1MjlJ2u+Nl4GMbshQrwpU6qnnTabjJWSbV3p3MCqBdUfg5Vc1Fyhe8laLu9LpPXSzdlv4kcthzyZ4zg1GLcm3ZZlKayq/dACpAAAAAAAAAAAAAAAAAAAAAAAAAAAGYq7sdHxKXNesbWY2+nMDrWz5P6UfWb/JkvKj6zPKNdvL6cIO9bKl5UfWZ+SZ+VH1jnj9nby+leC1obCqTvacNFfj7jVbEn5cPX7iyy+mbLPasBafIVTy4ev3GfkKp5cPX7ioqgWvyDU8uHr9xn5BqeXD1+4ujapBbfIFTy4fm9xn5v1PLh+b3DRtUAuPm9U8uH5vcU5DYDphhLpO+9X3EiwH2vUTcbnTyriB3rZv2/UbLZf2/UTnivay+lcCy+Svt+oko7FzzUektfu/cncxO1kqQXE9gtSa6RaPyf3MfIT+sX4f3NuaoBcfIL+sX4f3M/ID+tX4f3LpNqYFz8gP6xfh/cfID+tX4f3GjamBdfN9/Wr8P7mfm8/rV+H9xo2pAdGOwvQ1HC+ayTva285yKkp0ZT7MXLhor6/4mb4mkouNuMIvztGtLETgrRk1rfTn/jO+lhVWqQi279FTa71pdei78wFdKlJKLcWlLs3T18OZLRwVWdR04wlnW+LVmvG5b0qvSzlJ/wD605zikurkUdI27nCPpZJs7H5YrFyWZ06fRz3XfXhlfe7N69xqSMXKqvE43FKcoVK1XMurLry1s36dbjCrFVutSdSTpXaak+rm5Pm9dOJZVdoVY4alXTSqOMad0tXllUve/NWObbLyxp5LwjUvVypu3WUWvQNQ5VAo4ySzp15Zm43Tk3e+qfHf6zSFXEzVVqdVqMbVOs+zd6NX11b9ZriNpVJ1M6eR5cvV004+ltvzljiMR0cKVZaOtOM5WS62WNp+ltk0u6qKdBOOZya1srK+5L3mzowW+bX+x+8tcdhlCMKeeNO0q3aurpySVrJ8Eiur0lGP82nPui5N+OqRdEu0WJwzp2vfe004uLTST3PxRAd+04yTtKeeSnK8lxeWBwGWgAAADbLz0A1OzC4WEqc6lWUoxjKMVkipNuSb4tWWhrToJaz0424v3Is6VScKbVOrGhFNZ5PNbM7uMVZNt6NtgVONwzo1Z0275Xa/MgO/FwlnlGrrUi+s073fPvTOOdO3euYGgAAAAAAAAAA2p9peKLNFZT7S8UWiRnJ36KSBLEiiSxOOT0pImUaotcHsWdSLnJqEU7NyT9pyW2RBs92zvuDVmTww3ROazKXgRT3nfpPL1mDKMI2R6HnZMowjJWWbGUEZAyjwx7lHhiVcVlS7MfBewmiyKl2Y+C9hKjhX0MPSaJJEiiSI5VtsT4L+ZH/OB07N2TPEPR2Vr3tfjbmSzwMKVRONVSVuCsZiWxFW7TfNmiN6poezD08GXtsjKMIyaYZMhGSoGQEVHmdvf1D+7H2FcWW3v6h/dj7CvjBvcm7clcxW4lw9aMF1qanqnq+Fmmt3en5jpeKlRq0qkEs0acbX1WsbP2nPhqEJ3zVFDVLVX0aeu/uXpJMTTzSgrr+VB+rcJLb4K0w+NlT6XLZ9JBwlfk97XeawxMo050k+pNxcl3xvb2mkIXjJ8rGVSvG6u25W08CzGnhJPGN0IUbK0Zylfjqlp6n6TGKxcqqpqVv4cFBW5Ln3nPYE2aDZzbSTbsty5eBqCK7ZbSqSnGcmnZQTVlqopLjxaRMtqL4z0mRdFmv0do9nlexWAu00lrV5Td5W42SSSVyIAihd7IoQqRpxUKU802q2dpTjHS2TW+69ra38xSF1srAxmqT6Oc3ObTqQduitaz8eOulvOBC8HBN5IucVfrOWiX2rK689jalCyTpwpJX1qSvlXhmepY42Lnarh1FSbee1r5vKS4t7/Ocyi5Sj00Ollz4RS3ynJaPwA3w8WutdtcLRUU3/AMn6ifEYio6fRU6dOpZqVR1ctot3UVeTSvvIWvpQlmlZKMZLLJ77K27vsvG2450nFSoyo1MRNyVSaWZZWrrSy13u/qAixcnKUqlRWne1VW7Mvc+HoOWdOz0e/g9z7r7n4ndLERrVJVMqjUek6b3PhbXwXn4nPUpZNEs8HeyeklzTXCS/zeBySpck0/Jf6cyGx1yVtzbXf+q/U68FgnVzzlRnPJBySV05u6Vue5t6cgKkHftPBqnJZIyipQjKUG7uDf0X7deZwAAAAAAG1PtLxRaoqqfaXii0RnJ6OikiSRI4kkTjXpSR3rxPR4jFSlhpxp6Jtcba3R5xMusDiYOi1NXbfOxz+Uym1f101d+skOmMY3eVEFRdZno6ft5erNCNkaoyjs87ZGTCMoqMmQCjKPDHuUeGRnJcVnS7MfBewmiQ0uzHwXsJkcK+hh6SI3RGiRHKuj0WyMS1SUY71F39JT9HNLV387OjZmKyZrq/V0Xfc1VJSk7t6braWN9PGWbt0zenuXJhXtqZRtUjlUVdvve81R6MXgy9t0ZRhGTTDJkADJlGDJUeZ29/UP7sfYT7Ag8td62y2dvPa+mniQbe/qH4R9hPsBX6ZPVKF7a+fx8N/IxW4q6dCc1eMW0t9vBv9GdFaVpU1pZ04Xv4HPSxE4LqSa46c/8AGT4h9el9yHsGN1So6fYl5/Vx9ZJhldR3b7vW3B7yN7vx71p5vQS4XVx3aRb1W7rP0now/lI55ekVaNo
U07bm7eJHVp5bb9Un6UdFd5lGNnpZW83MxXglUStor3s9bL9iZ4TzZ/SyuQ3VNtN8Fa/nJpSU466yte/hw9BrSXUkufp019Ghz4TbW2nR9TN329QnTtGL539pv/8AF576ejUVl1Y+/Xdctxmvwm0dWGV28DQkrdp+PAjOeXtqBa7PwkWqearUhKtLLDIrpWaV5a66vd5yqLfZkKyhFQxCo9LK0Iu95Pc2rLq8r6X3EFjhanWlCKtF6Se+8lxb3L9zSeFjCyUX1pRWWKtnlbRW4R49243oUYylaWjTs7Jay5vv8LnXWqvJG0lGTsrrrZla+SL+1vvzXeBX1XOOik2+Mr697UbevTluIq2NpRpdDUnVvmU80Mt1ZO0Xr3336d5HXkoWyrV7llV1bm+Mly4GcPiowjNdLGjWcovO4uV4q94uydnez7+IHHiMR01edSEHeTuorV/udUZwyONZ9Wy0i81SD4NPl4s1xVaFac+hcowb7KUYq3N+3VmscOodZSyNf/JOLX4Y7/PYDpnS6JrRNN9WespO+53+jfla5NVknGcM8o5IqdaUFd2ukoJ+Mk7kuGhGnC91dp5c2id9ey7Xvv17PnIXUqWlkn0GS8p31Sje3BO921p3+IFTi8M6c4ypylJTipxb0lZ339+jIXP6yOvPc/3OnG4OtKeeT6RSSlGpeyceG+1uVu4hVGUVrVhHuz5vVG4EXRJ9hp9z0ZHKLTs1Z952JQ41c3hT99jowmLoQqQzqUop8VFpaaO3Gz1t3AVQLHamJU4006iq1E5ZpqLjo7ZY6pN2s+HErgNqfaXii1RVU+0vFFojOT0dFJE3RGiRHGvS7cBgamInkpRzPe+SXNvgX2G+DU6cXKpJXX0VJW/9lZsXEShTxGRtO0HfuWa/6HqNnQz0E5SzZtTDGVu1CqFpNcL6ElTZt1dPXk2kWNWgk78nY4sVXSnFLezeN16c8ptVWs7GUSYqNqj79SNHpl3Hkymq2RkwZRplkyYMlK2R4U9yjw6M5LisqXZj4L2EqIqXZj4L2EqOFfQw9JEdGGoSqzjCCcpSdkkcyLPYdXLXVt7jNK3PKzlW/wDxcYP4LTTbqzgtN0Za+c58Rhcley3OPrLXZdCM3Nzlma3q7vfma4vDwhWp5dHaWvmM45ePBPmf04vk9TXFO3gV9ei4ScWXWJxuSN23bj4FdtGnZp+a516Wd5ccniznxXIjZGqNj1OLJlGEZKMmUYMhl5nb39Q/ux9hw0cROm26c5Qb35ZNew7dvf1D8I+wrjFbjpw+JjG+emqjcr3b7np6zbFPrUv9OHsFDDUpQvKsk7vq2enVb496S85jG76f+nD2CKT7aWu579d9+BmlLLU10yx48NDWTvVWXmtzv/jIa3afDU7XLj5n2xrbooVNZS4K77tUzRO8bu26XdvsYo9ip4L2mFJdG1fW+63r9Q5eJv6pry1pO0l77ElNWVtO01bdw5nOdM3Zx5Zr93Axg1WIq8LfZb5bnfzmKm62m+P/AB5m0t6X2Xv3bnuMx1k1w6u5d6X6m/6RBW7UvFmhmTu2zBxvtoLnY/xhqCpxpNZ30cqmW6lpfJd620dtSmLbZdSclBRw8qrpSvCSzWi272lZaq+vpILmpjIdZQeZRdpWdlKXs1f/ALIKVZt5W4zd1njKKhJab1b6a36PgQ1nBN9H21e8Vq9e01zv6e44pTUoqVs0I+Q9afer6teIHfXUbytDuqO7UlLXW3Jrj5mctCnBKaUKTrJxsq045XDW7i3ZN7vMTRrtwUr5oRtGSXFX074tPv5G0qVOpTl/AnWqQlFJQk1eEk7S0Wq0XnYHLXrQ6WUMPC0E9HFt+jS/gS0YdXNlU9+VSbcE+M5Sejt6OGupiv0VKdSnCP8ADpu0pN3cn5Pe/VoQVqspLr3bluguC4QSW7n4eIHRVxKzZpTzPWzSVr+Vmaur8lf1EsMROrGXRZG0v4inbLOGnabelnZ6sqKkdb1JKP2Vq13Jbl5zNDFwg5Lo80JRyzTk02rpqzW7VJ7gG0XUdT+KoxaisqjbLl4ZbaNHLou8mxmJ6WUWo5Yxioxje9ku/jq2c4GXIwAAAAG9LtR8UWd9XYq6faXii0M5O/RbJkiI4ki7jlXqW2ypKNHEc2qcV6ZN+wvNgU74XqvXPO/pK7E4SOGw0IN/xHadV/atpHzXLH4PU28HKS+lVaXhbU5Vi1ive7V+8rMdHSL4p3OypdSaZyYzWN+83j7c8qYlqUITXgznRPhdaVSPddHOmenD6efP7bI2NUZNuTMpW1tfuNsyeqVk+BrczcG2x4Y9wmeHRMlxWdNrLG3kr2EiI6XZj4L2G6OFfQw9JUyz2DZYmMn9GMn+V+8rEek2Rgo0sLLE1LZqitC/CPPz+w49S6jVunRsNqVfEc2o28zN9opxqU1fVt+wg+DEXKtXmuzGlbzt6exm204uNWm3xf6GZ4XDzb+f8IcVC8JJ8jFKXSYbL9KHsNq13GXgQbLlapbyk0dLPHL6ePq3U2gTMiccsmuTYR655cbNNkZNUZNMsxq5l2bNesyYMoQrzO3v6h+C9hXFjt3+ofgvYVxi+20scPNpNRbT3ei5Jjd9P/Th7COGJqRVozklyTfgdLhCrHNmmujpxUrQT3WXlIg4kzDOiFKk2kpzu3ZdSK/7yTFYSFKbhOdRNfYj/wCYHIn6zBYR2anS6VOpk11yR4Xv9PuI8PhYVJZYTm39yK/7wOM3lUbtfgrI6K+Gp05uEp1E1/8Azj4+WSz2co0lVbnket8kNz3Ptl2Obpc1RN6aq99fE3pPNVdrvfu03f8Ao3w2DhVbUJTdld9SP/maTpU4ScXOopLR9SP6TNzP7TTlB3YzZ7pQUnm13XjFL0qT4HCc1C0wG0KcI0nJ1FKjNyioWSle2/k9LN8irAHW60JO+XK730k1r3PUn6aDac3OE1uqKz/Fbf8A5vK02jNrc/NwAuYNQkpNxs11lZ2a5pq6a8dxJiKNFUs05ySjJKLp6txmm+PCy9pVYfF5dLaXvbh424PvRb0Jp0ekpYhUMrUXe/0rtR0Wtmm0+Gu4CHE06dKcoauNLRcE29de9/sVtfGSle3VT4L9XxJNpuUKjpNt5Hq27uUnvk33nEAAAAAAAAAAAG1LtLxRaFXT7S8UWbM5O/RbRLfYeHTqOtP+XS1ffL6Mf18xw7MwqrVcrllileUtNFdLj3tFjtLaEMnQUI5aUX55Pymzjk9NQbRxjrVHyv6S12btWFLC06bnaUZTk14ybXqPPU2syvu4lhisYqqyu0tzcktyS7K9/MzY49XPjNRPLasG22yKrtGE1ljfV7+BxNwf0es3uW5Ll4nXVoumkmkrq+jvbuNenny6lrowj6zXNMiRtgnmqRt3mKitJrvO+N8nvFlGTRM3R0c2wMGSo2R4c9wjw5nJcVnS7MfBew3RHT7MfBewmw9JznGEd8nbXcca+hh6dmzMFLEVo047nrJ8orey0+EG0oykqNL+XTWXu00MVcdDCQ6LDayklnqvVvuXJFI5Xdzhcd5bq6XWwsfChCtmllz5Ut
9tL3185rjNqxnOLzXUX+hjC4mLiqb6yUZWiuGl3JvnvscdadHTLGV09b217/8AORZPLzT/AJGuprTqntSFmldt9ww07SjLvRpGnGNNSUWr7tVfTi0aQlfRHWY68fbhzufiuzGRtVl4kSJ8f20+a18TmOvT/jNmrJNt0bI0RsbRkyjVGyKlea27/UPwXsK4sdu/1D8F7CuMX23AypNXV9Hv7zAIMpiU23dtt83qYAG/SytlzSy8ru3oLai8FaDzzpyVOObLn1l9Jbv8ucOAwMq7koyhHLFyeZ2vbgjkAssWsK4SlCdR1NLKV3x525HA6smrOUmuV3Y0AG0Kko3yyavvs7GJSbd27swAN51ZS7Um/FtmgAAAAAAAOjC4yVNSjlhOMrNxnG6ur2fjq/Sc4Akr1pVJynN3lJ3bIwAAAAAAAAAAAA2p9peKLMrKfaXiiyZnJ26PtLRqyi7xdrqz71yNpzzO7IUbJmLHqb5lGzazLir2v3XLPGYpYeOSNo1JRjmimpZFvs9NHu9JUtX3ktXCNxzK7fFttkmvlx6uFvlrDFOndxSu1va1XgZWLlky7vJd+PFvxOeNGTaVuKRfYrYMaUP4jvPLfR6LuN/9ZXCYVybIrO08z4NJ97OytPNK5y04qKslYlizevO2eXjTdGyNUbI2w2Mo1TMlRujxB7ZHiTOSxZU+zHwRJTm4tOLs1uZHDsx8F7DZHGvfh6TzquW81bNbgzpte7LlGNONVRUG1PIs127Kzla3PiUscW21UazPf1tx2bJoXle70hJJX3XKpUpJWsakj59wszy/H+3ZTxjeeT1vv8+5LwGz676ZXd1x8zO7CbH/AIVOdTSNS7UVvsuL9JDToRhdR5vXznXU1P35Xjp3Vqt/S7eF9CNEaJEJNJlltsmbGqM3NMtjNzUyio85t3+ofgvYVxYbc/qH4L2FeYvtsABAAAE1DDVKmbo4SllV5WV7LmQktDE1KebJNxzK0rO11yIgAAAAAAAAAAAAAAAAAAAAAAAAAAAAADan2l4osmVtPtLxRZMzXbpCNjUXMvTK3LSnHLTj9qKZVItEurB/YRjIrmpL+NFfbS9Z6HbGIjPOo8Ek/E8zTlapF8c1/WW9aFlN89RryzlfGnKjaJqmbI9MeFImZRqjZFRsbI1RlFRsjxR7VHijOSxYw7MfBGxrDsx8F7DY5Pbh6bGbmtzKI6bW+x6bs5LXtK3+25xYnf5jt2SnKO/jP/iV+JfW9BjTlP55fh6Ppo/F8LH6WS/ma0KjiSRbtQfdl9CI3vZ3k1J+/Ln1GyN0RokRtwb3MpmqNohGxkwCo85tz+ofgvYV5Ybc/qH4L2FeYrcAAQAAB1YPHSoqooxg+kjleaN7LuOUtqmwKsYxaalmjmSs032dFfeuutdxpHYVZqHY60rLrx3dW0tHquut24CsB3R2VUcKkk4vo55GlJa6Ntx56J7jXG7Mq0bua6qla6afFpNq91ez38gOMAAAAAAAAAAAAAAAAAAAAAAAAAAbU+0vFFiyupLrR8UXPxOXNGcrI69OucyT/E59w+J1OS9JncdplEJaYeV6cO5W9DOL4nU8n1o7MLSkqdpJq0uJmtco4lrNW8r9S8xnY/2op4UJqSbhLeuHeXla+WNuKQ+WcvKsRlM7FfkzZLu9Rvm8/BypmyZ16Ph6jKhHki9yJwrlNkdKox5G/QR5Gu5E4Vyo8Ue/WHj3ngByl9Jx0sIdmPgjYlo4WThF6axXsNvic+70nPcenG+EJkl+KVPJv5zPxWp5D9KDpMosdhS1a8X+X9jhxK63mR27EpzjVeaLScXqctalJydoya5pE+HLGzuZfj/bup/y8P4v/iQJnTTj/Co30alu/wBv7G6vyOly1jj+/NZzm3KbJnVbu9Rurcl6Cdxz4OZMydKhHkjboYvgXuROFcyMo6lQjy9Y+Lx7y9yHCvJbc/qH4L2FeWfwhgo4mSXkx9hWBAlxFJQaS4xT9KM0sNOcXKKuk7b1vs37Eb45daP3I+wDmBvShmkluu0r+J10cLlxcaU7NKooyvuav3cCW6S5SJKG0sS7uMm1FKUtI6RWVctN0Rits1ZzUo2pqKtGMUrJdXu+zH0GcCv4codW9RyjaS06sG1aW692iDG7PlRjGTlGUZNpOLfC19/C/HuG/OmZnN6rWO0aqbakleTk7Rius003a3Jskxu1KteKjNq2l7JLM9dW7d703HCCtpVR6qblGN91836Jmehj9ZD0T/8AE66EajpRVOcYppqWapCF+s/KauJRr9NTbqQlUclkaqQn1tLdlu3Aib8uKrSy21Turpq/O3HwIzqx18yzNN2d2tzeeVzlKoAAAAAAAAAAAAAAAAAAAAAkoduP3l7T1ioM8rhP5tP78fafQlhzn1I6YKtUDdUSzWGNnh7cG/A56dFZ0RtiaOSms29u9uNjuWdPqU14yTZyYrBTqyvUlLwWiNzC05SNYx01O7GYbL0fJxXsOJbP8fO2T4rH1qq6L4slFaKebluZq9OxmZQVM3jR7jnWElx09JuqMl9Jm+0583SqPcbKj3HOoT8tmUp+Wx2qdyOlYdckZ+LrkjmUp+V6iSNR816CdunciZYaJ8uPp6q+HrPmKLxuKXLb0+Dot0qen0I+w6Y0Dr2bQvh6L504f8Udawx57PLvPStVE36Esvi1jWMnB3jC7+1dIsx2mVsnj2gw+HyrPN5Y7lzfgiLoHHR+Z81zJMbGtW0lLIuUVw85DHANJJSl52dZhuOU3Lu1vi45YYdJXc6kte6MdfaSxpHLjZVYxpU6cJ9RS6ys7uVr7/A3pYadk3dednS4S44yfvlq5adSo9xIqXcc8aMl9N+lm6VTyzParPNOqC5Gyw65EF6nlmc8+aHapzif4uuQWFRGqku71kkavgTt1eceJ+FUMuMkvsx9hTlx8KnfGS+7H2FOX0w2jNrc2vOT47tR+5H2GMPg51IuUFdLfr3X/Q6pYGdaeWms0o04PLxd7LT0oFukOy3FVHKe6MW+zm1W6yvvLPDW6WrVSWZVKcouCz2vGTdly3X7yLD7FxMFV6kndOFoWlduz9HeWmD2XWyTvFpuMXZWi7wppNPz3XfvOWTy9XKebtSY6pKlKGRyj25ptZdZSaul4JLzEU6/SYdReVOk4pc2nm492hpi8RPETzWbyxStrK0Vx8Lv1k2z8JUqUqqgnaUqcForOUpaJt7jevDtrWMt9q5gtsTsKov5LVaUV/FhDWUJcU1y7yOOxqmV5mo1crlCi1LpJJb3a2mik++xprlHHCu0kssWluur2O7ASUk704OTeWCWkk7N3XoS8520fg5TqNwhiI9KklKDa0klaSfhKyXNXM0tm0+jpqGKpSlncoLWKk76Jy4aJ+drxM2sZdSXxFBVqOW+ysrJLcjQnxtNQrVIxaajJpNXta/AgNOoAAAAAAAAAAAAAAAAAAAAAmwX86n9+PtR9QUD5hgv51P78faj6ap6nPN0wTRgSRiiBTN1Mw6OmNiRM5FUJIzA6Mqe8OjF8ERRmSxkVGrwcHwD2dSfB+klUkZzovKs2RA9l
U+cvSaPZEfLfqOtVDZVDXPJOMV8tjcp+o0ex58JR9ZbKoZUy9zJOEUvyXVXBelHyY+6Zj4Wa5XL2zcZPT6VsmH/AEuH/wBKH/FHcoHBsmX/AEuH/wBKH/FHapnC+3aek8YIkikjnUzZVAu3UrGckXvS9BAqhuplRI8PF8F6DDwcHwMxkSJou6l0g+TKb5+kx8kw5y9R1KSM50Xnl9s8Y4nshcJ+ojexnwkvQWaqGyqGu5knCKd7IqcGn5zR7Lqr6N/Bovc5nMXu1OEfJfhbScMbJSVnlj7DGwYPo8Q9bZLaNXvZ6f5yZ1fD1/8A5Gf3If8AE5NhXcK64KDbXmf+foXe2fSoUnzLGEpLEU1G/WhGMkmleMo2lHzq5zRwFRwU7KzvbXkr/odCX/UUr27Me07Ls8WS+ky9OvE4uX8WNNyVJUkowTsoRck7a79TuweMmrycqjdSnd3so5o01aWn3Soqv+E2/qYJZtH2vo80c9XF3p04wzRcU1J5nZ3fBcNDHHc04Xp8pp1Q29Vi7xjSi9FJxgouUb6xfc+JPHa9STrulnVONHJTi5X6OHVXpaWtijOnB2fSJ2/lytd21Wvp0N11yxntAm07rRo2hXnGSlGUlJbpJtNeDI2TYWmpVIp3s3rZNu3H1Fbrs2U+vKcm98U21ddaa7XNEOIVqNNdbtTtfdvW7iTSSoyq2TSjWgld6pXk1pueiNdp08qhvtLNKLvvjJpp24GPlyl3l+/Tnx0k6s3Fppu6aVlryRzgG3STU0AAKAAAAAAAAAAAAAAAAAACbB/zqf34+1H0m+p8zpVMsoyW+LT9DL3521vqqf5veZym2sbp7FM2UjxvztrfVU/ze8yvhfW+qpfm95jjWuUe0TJEzxHzxrfVUvze8z88q31VL83vHGnKPcKRupnhfnnW+qpfm95n56V/qqX5veXjTlHulUM9IeE+etf6ql+b3j561/qqX5veXjU5R7xVDKmeD+e1f6ql+f3mfnxX+ppfn94405PfKbN1M+f/AD5r/U0vz+8z8+q/1NL8/vHGnKPoKqHxQ9V8+8R9TR/P7zyxZNJbt9E2Y/8ApqH+lD/ijqUjyOG+FDp04Q6FPJFRvm32Vr7iX53P6hfj/YxcbtrlHrFI3izyK+GD+oX4/wBjZfDJ/wBuvx/sONXlHsEzdSPG/PR/26/G/cZ+ej/t1+P9i8anKPZqZt0h4r57P+3X437jPz2l/br8f7DjTlHtekCqHivns/7dfjfuMr4bv+3X4/2Gqco9t0jNozZ4hfDl/wBuvxv3GV8O5f26/G/cNHKPcqobqoeE+fj/ALZf/Y/cZXw8l/bL/wCx+4apyiv+HLvtCf3IewpMNi6lJt05uN99jp21tP43XdZwyXSVr33LmcB0jnW3SPV3eu8sUoqUpuai4UVlTV8zcctl6bnNDAVJQU1Hqu9tVwV/0MY5daP3I+wlSzc0nWLvhZU5O8oyioJ27Lu3bzpek4GYAk0SSBNhauSpGXJq+l9OOhCZRV9pq2HaqunG7d7R0s3fdoWFdK6qQUssKc45tFe3VTst3aiRpPp1Oz0pxn2tf5a1v4kEKkFh5Rds+bTfezyvlb6PrMOV3dJcY10MWst5tN2vdNRs9/D9TmxGIzxprKlkjlbX0tXq/V6CKdWUlFOTajpFN7vA0NSNzHQACtAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAANlNrc2vOT43tR+5H2HMd0owqRc3KayRipJQi+7fmQHCDpp0aUpRipzvJpLqR3v/cbYjDU6c3Ccqikt/Ujyv5QHIZR3y2clS6Vupket8kdz08vmaYbBwqtqEqjsrvqR/wDMDepboc3UvKEIab+07vufVV+5rmV521ssV0UqlS0G+rkjZN7/AKRvX2aqcFOTqKLtZ5I8f95IkmleDvw2BjVvklN5bX6keP8Av7mRdDTzZc9S97diO/8AGVXKDsx+BdC1893ftRit2/dJnGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMqTs1ffvMADMXZ3WjEpNu7bb5swAN3Vk1ZyduV3Yt3LBXm4zqU1ZJRi2s2+63dy9JSgCwxscNkzUp1HUctVLXTXW9lfh6TinVlLtSb8W2aADeFWUVZSaXJNo1UrO6evMwANpzct7b8Xc1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAXcNiKrTwCpX6TEdJnbeiy1Gr9yUUUh6SntqlT2XCnB3xX8SnuayUpyzSadrXdkvOwJMVsHDxxzjFy+JwoxrznfXJlvo+bdkvErKmwa/RyqqMYxyuoqbnHpFT35su+1ix2ztujLDUKdHrSlCl8Yumv5a6tPVaq936Dr2jtug5VsRTq0nKcGqUY0P4yco5Wpyasklpo9QPObH2c8ViadGLSzyV22lpxtfe7cCzxuwekrT+KwhTowlkUp1otTnvVpc2nHRbji+D2Lp0MQ6tSVslObho3ebi1FaeJbYHaOHlhMNTnOhDoc6qRqUXUk80r5qbs1d6KztuApcNsavUnVhlUOidqrnJRjB3as2/BmY7ErvESoZFnjHNJuSyqFk8+bdl1WveXlDbdKvTrRqVKVOcsQ6rdek5xlDKoqyinaSS3cbkVTalGusXRdZU4zhRhRqSg4xyUnrFqCdk76acAK/aeyFhsLSlU/nVKk8rjJSi6UYxs1bfdveSYfYObCUq7ks1WvGnCKlHs3s3bfe7WnAi2/iqU/i9OhUdSnRoxjmcXG83JuTs/FegscHj8LBbNcqqth+knUhlnmVRvMuFmm1ADj29sCVCdadNLoKc8ts6lOKeiclvV2jnxGwMRTpSqSjHqpSnDMnOMXulKO9LVHTgtqU6eHbm81Wpi6dSrGz1pwvK192snzLDa216WXEzp1qM517xh0dDLPJJ9bpJSW+2mnGzApYbBxEqPSqMbZOkUM8c7h5ajvsZobAxFSkqkYx60XKEHJKc4re4x3taFticbhvi04OvCtCNLLhrwmsRCVl1XJJRy7+Lutx1VduUM6xMatJONOKp01QvWjNQSy5mrZb31vuA8ps7BvEV6dGLSc5JXbSRaYj4NVXXrxoZZUqU8nSSqRS1Ts293D1o5fg5iKdLG0alaWSEW25Wbs8ry6LXtWJsXjafxWlQjPO3XqVK+VSV9VGDV0t6zPzgaL4O4nJnyxvlc1DPHpHBfSUd9ramKeyZ1KVBQpN1aznJSzLL0cdNVws03dl/wDKmFpuvKjUo2lRnGhGNGfSXcbJVJtb9/H0IzVx1PD1ZYacoQksHSoqdSDnBSvnlGaXB3a3cgPNYnY1enOnDKpuq7U3Tkpxk72aTXG51R+DOI6WlB9HapPJmjOMkpcYuz324FnDbVCFejDpI5IU6qdSnSyQhVqRtmikszSstTn2RUwuFxGHbxTk4ucpySn0MZ5WoWWW7eursBy1/g9N160aLh0NOeVVJ1IqLfBZt13yOejsHEzqVaahaVG3SZpK
Kjd2u29Lcb8jsjLD1sJQozxMaPQVKme8aj6RSatOFlq7K1nY32rtmlVo4ro21KtWpxUWnfoaULRbe67fADlnsSbjRpwp3qzdRuaqRcHCMst+5Jp6veQVtiV4VaVPLGbq/wAtwkpRlrZ2ktNOJfUNs4dTqUY1IKPxWlRpVKkJSheLUpxlG17Nt624EE9p0XXowjiFThRhKUatKjlgq8vs2u4aJPS4FLtDZNXDxjKeWUJtpThJTjdb43XElwWzJVaF403KdStGlSlmSSdm2mvRqdHwhxNGoqPRypur1nXdGMo0m7rK0ml1rXu0uR27J2tQo08InUs6Kr1ZLLL+bLSnHdrpbuArK3wfxEIqUlCzmoPrxeWT3KXk+c6cf8GpwxEqNKUZKnCLqVJTioxbWt3w1ukt+hnZePoQw9GnVqNOWLjUraSdqcUrN6a6uW4nxGKo14Yui8TCEp4rp+kcamSpFp9XRXur3SaAosfgamHqdHVSTspJpppxe6Sa3o5i027jKdWdKFFuVKhSjThJqzla7crcLtvQqwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAE+Dwk69SNKlHNOW5fq+4CAFliNh1odHZ06kak+jjKnNTWfyW1uZY4X4KTXTPEVKUeipybiqsb5r2ipck2B5wXPT7L2DTqODqxUYww/TTUqyj0ma/R/dWiv4nJidjupOnCjRVNypyrScqqlBU3JqMnJ7lb03AowXFPYs6dWSrQVSEKEq14VElKFrKSlx63DuNH8H66pub6NNU+ldNzXSKna+bJvtbUCqBZY7BQpYXCTs+lrKc5a6ZVPLHTzNkmE+DuIrQpzjkjGrfo881FyadrJPjoBUgscHsStVTl1KcVPJepNQTn5KvvZHPZdaMa8pRyqhKMaib1UpOyS57gOIFnR2DiJyjGMVmdFVu0laDdrvkyWHwcrt9qilmywk6sUqktLqD+lvS8QKzDYmdKaqU5OM47pLejWtWlOTnOTlKTu5Sbbb5tlktmqnhMTUrRaqwrQowV90tXO/PREOB2RVxFOdSGRU4SSlKclFK6e9vwA4AWfyFX6edG0U4RUpzckoKDV1Jy3Wd0dWzfg5KeMhQrSjGDh0jnGcWpU1xg9z/ZgUQLSGwMROtTowjGUqsZSg1JOLjFyTeb/AGv1cyOjsWvUVNxgn0k5Qgrq7cO0/urmBXgs6+w60HSSyVFVnkhKnNSjnvbK2tzOyl8HnSWIeJyt06MmowmnJVHNQgpJbrt7gKAFotgV+kdN9HFxgp1HKaSpp8Jvg+45doYCph6nR1Er2Uk07xlF7pJ8UBygAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFnsGrQhVm8RlX8OSpucZTgp8HKK1aKwAeyW28Mng06sJKjUqVamSjKEXJR6iSS3d/sKbBYum8HjIVKuWtWlB6xk86UnJ6pb723lMAPT7Q2vQksaqc9JqhRorLL+VDtPdpuWm83xW1MNWeLo9N0dOcaEaVRwm45aSV4tJXV3d7jyoA9CsXhKVLE06dSpUjPoYLMmpShGeao46WirrRMsKu1sJCGLVKrStOlOFCEKEoySlpac2r5rN8bHjgBZ7fxdOrWgqLzUqVGnTg7NXyxV9H9pyLOG1cOsdg59JejhqEYp5ZdtQk91vLaVzzIA9ZhNr0J4fDxqVadN03PpIzoOrKWaWa9N2aTffbcjnq7QoYnD1o1a7pTniVVd6bblBQcYpZVa+vGyPNgD0+1NsUJLG9DN9eNGjRWWSbpQs5O9tN1vOTbJ2hg6Swr6SlBRUemi6Ep1XNSu+s01l8NUjyQAu9rbSp1cNCEZZpyr1q1TRq13aO/fp7SL41S+IUsOp2lPEOpV0fVikox8eL0KkAevW3KE6mNj0kKcak6XQyqUpVIOFJOKTik2tLNaEcdsUHWqx6ZRhHCzpUZ9Fljnm05NRirpO8rX87PKAD1mzdvUKOGwizPpoyUKryy6lFVc+jtrfTdyNHtqhLEVqam4Yd0J0KNTLJ2u7ubS16zv6TywA9Ts/aWFw3xekqvSRp1J16lTJNRdRQahCKav52jn2Lt2OEoVJq08RUrQbUot9SPWcr7rts88APTYbF4alXxM6OKyZp9TpKcp0qlKWrhOOVyunbXuK7a7w1SrXqUJqMVkVKmoytK6Wdxv2Yp3aT5lUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP//Z\n"
94 | },
95 | "metadata": {
96 | "tags": []
97 | },
98 | "execution_count": 9
99 | }
100 | ]
101 | },
102 | {
103 | "cell_type": "markdown",
104 | "metadata": {
105 | "id": "PBzVrpDXG2s5"
106 | },
107 | "source": [
108 | "## Learning Goals\n",
109 | "\n",
110 | "In this tutorial notebook, you will learn how to:\n",
111 | "\n",
112 | "* Set up a Google Colab notebook to run OpenPose\n",
113 | " - Download OpenPose\n",
114 | " - Download a sample video\n",
115 | "* Run OpenPose on a sample video and work with its output\n",
116 | " - Interpret output JSON files\n",
117 | " - Convert files to a numpy time series\n",
118 | "* Preprocess data\n",
119 | " - Fix missing data with interpolation\n",
120 | " - Fix noise with a Gaussian filter\n",
121 | " - Fix bias with normalization\n",
122 | "* Perform the analysis\n",
123 | " - Derive joint angles\n",
124 | " - Find gait cycles\n",
125 | " - Report results\n",
126 | "\n",
127 | "\n",
128 | "\n",
129 |         "**Prefer to follow along via video?** [Access a recording](https://www.youtube.com/watch?v=Js0uRTLfm6s&t=4s) of Łukasz Kidziński guiding users through this tutorial."
130 | ]
131 | },
132 | {
133 | "cell_type": "markdown",
134 | "metadata": {
135 | "id": "xM_Vf8l4f5iE"
136 | },
137 | "source": [
138 | "# Setup\n",
139 | "\n"
140 | ]
141 | },
142 | {
143 | "cell_type": "markdown",
144 | "metadata": {
145 | "id": "GsH_4sdoQPgB"
146 | },
147 | "source": [
148 | "\n",
149 | "## Introduction to Google Colab\n",
150 | "\n",
151 |         "Google Colab is a cloud-based environment for running Python code interactively (via Jupyter notebooks, for those familiar with them). If you are new to Colab, you can learn about the key features in [this tutorial](https://colab.research.google.com/notebooks/basic_features_overview.ipynb). For the purposes of our tutorial, you only need to understand how to interact with the \"code cells.\" In the [webinar recording](https://www.youtube.com/watch?v=Js0uRTLfm6s&t=4s), we also provide a quick overview of how to work with this Colab notebook."
152 | ]
153 | },
154 | {
155 | "cell_type": "markdown",
156 | "metadata": {
157 | "id": "VNlQfYZ6Bgm7"
158 | },
159 | "source": [
160 | "\n",
161 | "## Installation\n",
162 | "\n",
163 | "Run the code below to download and compile OpenPose within your Google Colab environment (based on [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks)). **This takes around 1 minute.** It is done when you see `[100%] Built target openpose_net`."
164 | ]
165 | },
166 | {
167 | "cell_type": "code",
168 | "metadata": {
169 | "id": "FOdkDhb6ga6N"
170 | },
171 | "source": [
172 | "import os\n",
173 | "if not os.path.exists(\"openpose\"):\n",
174 | " !apt-get -qq install -y libatlas-base-dev libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler libgflags-dev libgoogle-glog-dev liblmdb-dev opencl-headers ocl-icd-opencl-dev libviennacl-dev\n",
175 | " !pip install -q youtube-dl\n",
176 | " !wget https://mc-motionlab-storage.s3-us-west-2.amazonaws.com/openpose.tar.gz\n",
177 | " !tar -zxvf openpose.tar.gz\n",
178 | " !wget -q https://cmake.org/files/v3.13/cmake-3.13.0-Linux-x86_64.tar.gz\n",
179 | " !tar xfz cmake-3.13.0-Linux-x86_64.tar.gz --strip-components=1 -C /usr/local\n",
180 | " !cd openpose/build && cmake .. && make -j`nproc`"
181 | ],
182 | "execution_count": null,
183 | "outputs": []
184 | },
185 | {
186 | "cell_type": "markdown",
187 | "metadata": {
188 | "id": "n5L3Z5YVrZ2R"
189 | },
190 | "source": [
191 | "## Detect poses in a test video\n",
192 | "\n",
193 | "We are going to detect poses on a sample video from a clinic. In this case, the video is stored at https://github.com/stanfordnmbl/mobile-gaitlab/raw/master/demo/in/input.mp4.\n",
194 | "\n",
195 |         "Click the Files folder icon in the left-hand column to see the current files in your virtual Colab environment. After you run the code below, refresh the folder by clicking on the folder icon with the circular arrow. You should see a new file called \"input.mp4\" in your list of files."
196 | ]
197 | },
198 | {
199 | "cell_type": "code",
200 | "metadata": {
201 | "id": "wMx1U-q1bLxy"
202 | },
203 | "source": [
204 | "# Download the video\r\n",
205 | "!wget https://github.com/stanfordnmbl/mobile-gaitlab/raw/master/demo/in/input.mp4"
206 | ],
207 | "execution_count": null,
208 | "outputs": []
209 | },
210 | {
211 | "cell_type": "markdown",
212 | "metadata": {
213 | "id": "KVfcqKtQbSv_"
214 | },
215 | "source": [
216 | "Let's view the video to see what we will be analyzing."
217 | ]
218 | },
219 | {
220 | "cell_type": "code",
221 | "metadata": {
222 | "id": "oNASdyyiO65I"
223 | },
224 | "source": [
225 | "# Define function to display the video\n",
226 | "def show_local_mp4_video(file_name, width=640, height=480):\n",
227 | " import io\n",
228 | " import base64\n",
229 | " from IPython.display import HTML\n",
230 | " video_encoded = base64.b64encode(io.open(file_name, 'rb').read())\n",
231 |         "  return HTML(data='''<video width=\"{0}\" height=\"{1}\" controls>\n",
232 |         "                      <source src=\"data:video/mp4;base64,{2}\" type=\"video/mp4\" />\n",
233 |         "                      </video>'''.format(width, height, video_encoded.decode('ascii')))\n",
234 | "\n",
235 | "# Display the video\n",
236 | "show_local_mp4_video('input.mp4', width=960, height=720)"
237 | ],
238 | "execution_count": null,
239 | "outputs": []
240 | },
241 | {
242 | "cell_type": "markdown",
243 | "metadata": {
244 | "id": "G38yLmDWcnfY"
245 | },
246 | "source": [
247 | "We will now run OpenPose, specifying our input video file (`input.mp4`) and asking it to output the following:\r\n",
248 | "\r\n",
249 | "* The body keypoints in a JSON format in the directory `openpose/output` and \r\n",
250 | "* A video of the keypoints (`openpose.avi`)\r\n",
251 | "\r\n",
252 | "Several other input, output, and run options are available and are described in the [OpenPose documentation](https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/).\r\n",
253 | "\r\n",
254 | "In the code below, we also convert the output video into an MP4 video (`output.mp4`) for viewing.\r\n",
255 | "\r\n",
256 | "NOTE: For these examples, the openpose command must be run from the openpose folder, so that it can locate the supporting files. "
257 | ]
258 | },
259 | {
260 | "cell_type": "code",
261 | "metadata": {
262 | "id": "k4n8Hp7R8j2h"
263 | },
264 | "source": [
265 |         "# delete output files from previous runs of this cell (-f avoids an error on the first run)\n",
266 |         "!rm -f openpose.avi\n",
267 | "\n",
268 | "# detect poses in these video frames using OpenPose\n",
269 | "!cd openpose && ./build/examples/openpose/openpose.bin --video ../input.mp4 --write_json ./output/ --display 0 --write_video ../openpose.avi\n",
270 | "\n",
271 | "# convert the video output result into MP4 so we can visualize it using the 'show_local_mp4_video' function we defined above\n",
272 | "!ffmpeg -y -loglevel info -i openpose.avi output.mp4"
273 | ],
274 | "execution_count": null,
275 | "outputs": []
276 | },
277 | {
278 | "cell_type": "markdown",
279 | "metadata": {
280 | "id": "kDDkgCCSrFTv"
281 | },
282 | "source": [
283 | "TROUBLESHOOTING: If you get an error message about `Cuda check failed`, this is likely because Google Colab cannot allocate a GPU for you to run OpenPose. Make sure you have set the notebook to use the GPU: \r\n",
284 | "\r\n",
285 | "\r\n",
286 | "1. Select Runtime->Change runtime type \r\n",
287 | "2. Choose `GPU` from the pull-down\r\n",
288 |         "3. Click `SAVE`\r\n",
289 | "\r\n",
290 | "If the setting is correct, try running the notebook at another time.\r\n",
291 | "\r\n",
292 | "\r\n",
293 | "Finally, we can visualize the results: "
294 | ]
295 | },
296 | {
297 | "cell_type": "code",
298 | "metadata": {
299 | "id": "nZ3Ud9zLgOoQ"
300 | },
301 | "source": [
302 | "show_local_mp4_video('output.mp4', width=960, height=720)"
303 | ],
304 | "execution_count": null,
305 | "outputs": []
306 | },
307 | {
308 | "cell_type": "markdown",
309 | "metadata": {
310 | "id": "A_xXtok-6DrM"
311 | },
312 | "source": [
313 | "## View and interpret OpenPose JSON output\n",
314 | "\n",
315 | "The output of OpenPose for each frame is saved in a JSON file. \n",
316 | "\n",
317 | "You can view the list of output files by clicking on the Files folder openpose -> output. There you will see a list of files named using the format `input_###_keypoints.json` where ### is a number. \n",
318 | "\n",
319 | "Let's print out one of these files to see its structure."
320 | ]
321 | },
322 | {
323 | "cell_type": "code",
324 | "metadata": {
325 | "id": "k3wLiLy_6req"
326 | },
327 | "source": [
328 | "# Load JSON file\n",
329 | "import json\n",
330 | "\n",
331 | "with open('openpose/output/input_000000000100_keypoints.json') as json_file:\n",
332 | " data = json.load(json_file)\n",
333 | "\n",
334 | "print(json.dumps(data, indent=4))"
335 | ],
336 | "execution_count": null,
337 | "outputs": []
338 | },
339 | {
340 | "cell_type": "markdown",
341 | "metadata": {
342 | "id": "iVoWBFGlgUWg"
343 | },
344 | "source": [
345 |         "The JSON file is loaded into a Python dictionary named `data`. For each frame, OpenPose tries to detect all the people in it. The coordinates of each person are stored in the `data[\"people\"]` list. You access the keypoints in this way: `data[\"people\"][i][\"pose_keypoints_2d\"]`, where `i` is the index of the person (with the first person having an index of 0). Each person is encoded as a 75-dimensional vector, 3 values for each of 25 different keypoints. Entries with indexes `3*k, 3*k+1` and `3*k+2` correspond to `X, Y` and `confidence` of the given keypoint, where `k` is the index of the keypoint (where the first keypoint has an index of 0). Please refer to the figure below for interpretation of the index `k`.\r\n",
346 | "\r\n",
347 | "Example: To access the `X` value of the second keypoint (k=1), you would use the following: `data[\"people\"][0][\"pose_keypoints_2d\"][3]` "
348 | ]
349 | },
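    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick illustration of this indexing scheme, the short sketch below pulls out the `X`, `Y`, and `confidence` values of one keypoint (here `k = 1`, which corresponds to the neck in the 25-keypoint layout shown below) from the `data` dictionary we loaded above. The choice of keypoint is arbitrary and only meant as an example."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Example (illustration only): read one keypoint from the loaded JSON data.\n",
        "# We assume a single detected person, as in the rest of this tutorial.\n",
        "k = 1  # keypoint index; index 1 corresponds to the neck\n",
        "keypoints = data[\"people\"][0][\"pose_keypoints_2d\"]\n",
        "x, y, confidence = keypoints[3*k], keypoints[3*k + 1], keypoints[3*k + 2]\n",
        "print(\"keypoint\", k, \"-> x:\", x, \"y:\", y, \"confidence:\", confidence)"
      ],
      "execution_count": null,
      "outputs": []
    },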
350 | {
351 | "cell_type": "markdown",
352 | "metadata": {
353 | "id": "ha8OWw6a9ZES"
354 | },
355 | "source": [
356 |         "*(Figure: diagram of the 25 OpenPose body keypoints and their indices `k`; see the keypoint name constants defined in the next cell and the [OpenPose repository](https://github.com/CMU-Perceptual-Computing-Lab/openpose) for the full layout.)*"
357 | ]
358 | },
359 | {
360 | "cell_type": "markdown",
361 | "metadata": {
362 | "id": "0Cr6x-G7hflR"
363 | },
364 | "source": [
365 | "For clarity, we name all indices using short keywords, for example, an index of 2 refers to the right shoulder (RSHO):"
366 | ]
367 | },
368 | {
369 | "cell_type": "code",
370 | "metadata": {
371 | "id": "vEWMi8efBGnU"
372 | },
373 | "source": [
374 | "NOSE = 0\n",
375 | "NECK = 1\n",
376 | "RSHO = 2\n",
377 | "RELB = 3\n",
378 | "RWRI = 4\n",
379 | "LSHO = 5\n",
380 | "LELB = 6\n",
381 | "LWRI = 7\n",
382 | "MHIP = 8\n",
383 | "RHIP = 9\n",
384 | "RKNE = 10\n",
385 | "RANK = 11\n",
386 | "LHIP = 12\n",
387 | "LKNE = 13\n",
388 | "LANK = 14\n",
389 | "REYE = 15\n",
390 | "LEYE = 16\n",
391 | "REAR = 17\n",
392 | "LEAR = 18\n",
393 | "LBTO = 19\n",
394 | "LSTO = 20\n",
395 | "LHEL = 21\n",
396 | "RBTO = 22\n",
397 | "RSTO = 23\n",
398 | "RHEL = 24\n",
399 | "\n",
400 | "print(\"DONE: Short keywords assigned\")"
401 | ],
402 | "execution_count": null,
403 | "outputs": []
404 | },
405 | {
406 | "cell_type": "markdown",
407 | "metadata": {
408 | "id": "o7W9XCM391C-"
409 | },
410 | "source": [
411 | "## Convert JSON files to a time series of keypoints\n",
412 | "\n",
413 | "To run our analysis, we want to load data from all the JSON files into memory. In this case, we name that data array `res`. We will loop through all the files and use the functions from the numpy (np) package to store the subject's data as an array with 344 rows (one for each file/frame) and 75 columns. In our script, we determine the number of frames by counting the number of files in the directory.\n",
414 | "\n",
415 | "NOTE: We are using our knowledge/assumption that there is only one person visible in each frame."
416 | ]
417 | },
418 | {
419 | "cell_type": "code",
420 | "metadata": {
421 | "id": "SgVcs41--1OM"
422 | },
423 | "source": [
424 | "import numpy as np\n",
425 | "import pandas as pd\n",
426 | "\n",
427 | "def convert_json2csv(json_directory):\n",
428 | " # determine the number of frames\n",
429 | " nframes = len(os.listdir(json_directory))\n",
430 | " \n",
431 | " # initialize res to be array of NaN\n",
432 | " res = np.zeros((nframes,75))\n",
433 | " res[:] = np.nan\n",
434 | " \n",
435 | " # read in JSON files\n",
436 | " for frame in range(0,nframes):\n",
437 | " test_image_json = '%sinput_%s_keypoints.json' % (json_directory, str(frame).zfill(12))\n",
438 | "\n",
439 | " if not os.path.isfile(test_image_json):\n",
440 | " break\n",
441 | " with open(test_image_json) as data_file: \n",
442 | " data = json.load(data_file)\n",
443 | "\n",
444 | " for person in data['people']:\n",
445 | " keypoints = person['pose_keypoints_2d']\n",
446 |         "            # keep the keypoints of the first (and, by our assumption,\n",
447 |         "            # only) detected person for this frame\n",
448 | " res[frame,:] = keypoints\n",
449 | " break\n",
450 | "\n",
451 | " return res\n",
452 | "\n",
453 | "res = convert_json2csv(\"openpose/output/\")\n",
454 | "pd.DataFrame(res) # only for clean display"
455 | ],
456 | "execution_count": null,
457 | "outputs": []
458 | },
459 | {
460 | "cell_type": "markdown",
461 | "metadata": {
462 | "id": "pjsZ6Kf0q4wZ"
463 | },
464 | "source": [
465 | "# Preprocessing\n"
466 | ]
467 | },
468 | {
469 | "cell_type": "markdown",
470 | "metadata": {
471 | "id": "SclqW9CDBj-b"
472 | },
473 | "source": [
474 | "\n",
475 | "## Diagnostic plots\n",
476 | "\n",
477 | "Now we plot some of the trajectories to see whether we need to clean them up using signal processing techniques."
478 | ]
479 | },
480 | {
481 | "cell_type": "code",
482 | "metadata": {
483 | "id": "tu0n0YZP_isd"
484 | },
485 | "source": [
486 | "import matplotlib.pyplot as plt\n",
487 | "\n",
488 | "plt.plot(res[:,(NOSE*3)])\n",
489 | "plt.plot(res[:,(NOSE*3+1)])\n",
490 | "plt.xlabel(\"video frame\")\n",
491 | "plt.ylabel(\"nose position\")\n",
492 | "plt.legend(('x','y'))"
493 | ],
494 | "execution_count": null,
495 | "outputs": []
496 | },
497 | {
498 | "cell_type": "markdown",
499 | "metadata": {
500 | "id": "BFYxW5tKiYgf"
501 | },
502 | "source": [
503 | "Above we plotted the trajectory of the NOSE keypoint (index = 0). We see that around frame 150, there is some discontinuity. Also, some high frequency noise is present in the signals. Let's check if the same problems show up in curves that we are interested in for computing knee flexion, i.e., trajectories of the right ankle, knee, and hip."
504 | ]
505 | },
506 | {
507 | "cell_type": "code",
508 | "metadata": {
509 | "id": "8MJs9rxSBDl8"
510 | },
511 | "source": [
512 | "# Features to plot for diagnostics\n",
513 | "PLOT_COLS = {\n",
514 | " \"Right ankle\": RANK,\n",
515 | " \"Right knee\": RKNE,\n",
516 | " \"Right hip\": RHIP,\n",
517 | "}\n",
518 | "\n",
519 | "# The show_plots function displays a set of curves for each keypoint.\n",
520 | "# We will use the show_plots again with different data arrays, \n",
521 | "# so we include the cols_per_point argument to provide flexibility \n",
522 | "# in specifying how many columns of the array are associated with a \n",
523 | "# given keypoint.\n",
524 | "def show_plots(keypoint_array, cols_per_point=3):\n",
525 | " for name, col in PLOT_COLS.items():\n",
526 | " fig, ax = plt.subplots(figsize=(5, 5))\n",
527 | " ax.spines['right'].set_visible(False)\n",
528 | " ax.spines['top'].set_visible(False)\n",
529 | " plt.title(name,fontsize=24)\n",
530 | " plt.xlabel(\"video frame\",fontsize=17)\n",
531 | " plt.ylabel(\"position\",fontsize=17)\n",
532 | "\n",
533 | " plt.plot(keypoint_array[:,[col*cols_per_point,]], linestyle=\"-\", linewidth=2.5)\n",
534 | " plt.plot(keypoint_array[:,[col*cols_per_point+1,]], linestyle=\"-\", linewidth=2.5)\n",
535 | " plt.legend(['x', 'y'],loc=1)\n",
536 | "\n",
537 | "show_plots(res)"
538 | ],
539 | "execution_count": null,
540 | "outputs": []
541 | },
542 | {
543 | "cell_type": "markdown",
544 | "metadata": {
545 | "id": "dEqgsntfjElt"
546 | },
547 | "source": [
548 | "We also see discontinuity and some high frequency noise in these signals, so we will clean them up using signal processing techniques."
549 | ]
550 | },
551 | {
552 | "cell_type": "markdown",
553 | "metadata": {
554 | "id": "nXNlVmiurDkL"
555 | },
556 | "source": [
557 | "## Clean up signals\n",
558 | "\n",
559 | "**Discontinuities and Missing Data:** We will treat the discontinuities like missing data and use linear interpolation to fill in the missing data. There are several other ways to deal with missing data, and we encourage you to explore these for your research projects. You can find many resources online on this topic (search for \"missing data in time series\"), so we will not go over this here. We use linear interpolation because it is straightforward. \n",
560 | "\n",
561 | "For the interpolation process, we first want to simplify the data we use. We will copy the JSON output data into the `res_processed` variable, drop the confidence column which we don't need for interpolation, and replace values for undetected keypoints with NaN."
562 | ]
563 | },
564 | {
565 | "cell_type": "code",
566 | "metadata": {
567 | "id": "b8SA_S9_nNOb"
568 | },
569 | "source": [
570 | "# The 3rd column associated with each keypoint in the OpenPose output \n",
571 | "# is the confidence score. We won't be using it in this notebook so \n",
572 | "# we can drop it\n",
573 | "\n",
574 | "def drop_confidence_cols(keypoint_array):\n",
575 | " num_parts = keypoint_array.shape[1]/3 # should give 25 (# of OpenPose keypoints)\n",
576 | " processed_cols = [True,True,False] * int(num_parts) \n",
577 | " return keypoint_array[:,processed_cols]\n",
578 | "\n",
579 | "res_processed = res.copy()\n",
580 | "res_processed = drop_confidence_cols(res_processed)\n",
581 | "\n",
582 | "# if keypoint is not detected, OpenPose returns 0. For undetected keypoints,\n",
583 |         "# we change their values to NaN. We do it for all dimensions of all keypoints\n",
584 |         "# in all frames at once\n",
585 |         "res_processed[res_processed < 0.0001] = np.nan\n",
586 | "\n",
587 | "# Note that we use res_processed < 0.0001 instead of res_processed == 0.0.\n",
588 | "# While in the case of this video it makes no difference, it is usually\n",
589 | "# a good practice in computer science to account for the fact that \n",
590 |         "# 0.0 might actually be represented as a number slightly greater than\n",
591 | "# 0. See more on this topic here\n",
592 | "# https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/\n",
593 | "\n",
594 | "pd.DataFrame(res_processed)"
595 | ],
596 | "execution_count": null,
597 | "outputs": []
598 | },
599 | {
600 | "cell_type": "markdown",
601 | "metadata": {
602 | "id": "nSDGuN8IjdaI"
603 | },
604 | "source": [
605 |         "You will see that this new data array only has 50 columns, compared with the original 75, reflecting the deletion of the columns with the confidence values. Also, notice that for keypoint 17, the first time point (columns 34 and 35 in row 0) now holds NaN values, replacing the zeros that were originally in columns 51 and 52 of that row.\r\n",
606 | "\r\n",
607 | "Next, we replace NaNs with linearly interpolated values and we plot the results to see that the discontinuities are no longer present.\r\n",
608 | "\r\n",
609 | "For those interested in understanding the details of the code, the key functions here are:\r\n",
610 | "\r\n",
611 | "* `interpolate.interp1d`, which performs the 1D interpolation and is provided through SciPy. The \r\n",
612 | "full documentation for this function is located [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html).\r\n",
613 | "* `np.nanmean`, which computes the mean for an array (ignoring NaN) and is provided through the NumPy package for scientific computing. The full documentation for this function is located [here](https://numpy.org/doc/stable/reference/generated/numpy.nanmean.html?highlight=nanmean#numpy.nanmean)."
614 | ]
615 | },
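    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before applying this to the full keypoint array, here is a minimal, self-contained sketch (with made-up values) of how `interpolate.interp1d` fills a NaN gap in a 1D signal. It mirrors what the `fill_nan` function in the next cell does for every column."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy example with hypothetical values: fill a NaN gap by linear interpolation\n",
        "import numpy as np\n",
        "from scipy import interpolate\n",
        "\n",
        "toy = np.array([1.0, 2.0, np.nan, np.nan, 5.0, 6.0])\n",
        "idx = np.arange(len(toy))\n",
        "ok = np.isfinite(toy)               # indices where we have valid data\n",
        "f = interpolate.interp1d(idx[ok], toy[ok], kind=\"linear\", bounds_error=False)\n",
        "print(np.where(ok, toy, f(idx)))    # -> [1. 2. 3. 4. 5. 6.]"
      ],
      "execution_count": null,
      "outputs": []
    },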
616 | {
617 | "cell_type": "code",
618 | "metadata": {
619 | "id": "0ntqO_d0su1S"
620 | },
621 | "source": [
622 | "from scipy import interpolate\n",
623 | "\n",
624 | "# interpolate to fill nan values\n",
625 | "def fill_nan(A):\n",
626 | " inds = np.arange(A.shape[0]) \n",
627 | " good = np.where(np.isfinite(A))\n",
628 | " if(len(good[0]) <= 1):\n",
629 | " return A\n",
630 | " \n",
631 |         "    # linearly interpolate and then fill the extremes with the mean\n",
632 |         "    # (roughly similar to what a Kalman smoother would do)\n",
633 | " f = interpolate.interp1d(inds[good], A[good],kind=\"linear\",bounds_error=False)\n",
634 | " B = np.where(np.isfinite(A),A,f(inds))\n",
635 | " B = np.where(np.isfinite(B),B,np.nanmean(B))\n",
636 | " return B\n",
637 | " \n",
638 | "def impute_frames(frames):\n",
639 | " return np.apply_along_axis(fill_nan,arr=frames,axis=0)\n",
640 | "\n",
641 | "res_processed = impute_frames(res_processed)\n",
642 | "show_plots(res_processed, 2)"
643 | ],
644 | "execution_count": null,
645 | "outputs": []
646 | },
647 | {
648 | "cell_type": "markdown",
649 | "metadata": {
650 | "id": "gBxTemzxjmrP"
651 | },
652 | "source": [
653 | "**Noisy Data:** We remove high frequency noise by smoothing the data with a Gaussian filter. Since the observed noise is of low magnitude and high frequency, it is relatively easy to filter out this noise with many popular methods. We chose the Gaussian filter in this tutorial for its simplicity of use.\r\n",
654 | "\r\n",
655 | "There are a number of other approaches for removing high frequency noise in a signal. Which is appropriate depends on the nature of your data. These [course notes](https://ggbaker.ca/data-science/content/filtering.html) from Greg Baker at Simon Fraser University provide a useful explanation of some of these other options. \r\n",
656 | "\r\n",
657 | "In this example, we use a Gaussian kernel with a standard deviation of 1. The documentation for the `gaussian_filter1d` function we use is [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_filter1d.html). \r\n"
658 | ]
659 | },
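    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an optional aside, the small sketch below illustrates how the standard deviation of the Gaussian kernel controls the amount of smoothing, using the raw x coordinate of the nose from `res` as an example signal. Larger values smooth more aggressively (this also relates to one of the exercises at the end of the notebook)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "from scipy.ndimage import gaussian_filter1d\n",
        "\n",
        "# Illustration only: compare two kernel widths on one raw trajectory\n",
        "sig = res[:, NOSE*3]  # raw x coordinate of the nose\n",
        "plt.plot(sig, label=\"raw\")\n",
        "plt.plot(gaussian_filter1d(sig, 1), label=\"sd = 1\")\n",
        "plt.plot(gaussian_filter1d(sig, 5), label=\"sd = 5\")\n",
        "plt.xlabel(\"video frame\")\n",
        "plt.ylabel(\"nose x position\")\n",
        "plt.legend()"
      ],
      "execution_count": null,
      "outputs": []
    },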
660 | {
661 | "cell_type": "code",
662 | "metadata": {
663 | "id": "wJZJ593ys1Bv"
664 | },
665 | "source": [
666 |         "from scipy.ndimage import gaussian_filter1d\n",
667 | "\n",
668 | "# Remove high frequency noise\n",
669 | "def filter_frames(frames, sd=1):\n",
670 | " return np.apply_along_axis(lambda x: gaussian_filter1d(x,sd),\n",
671 | " arr = frames,\n",
672 | " axis = 0)\n",
673 | "\n",
674 | "res_processed = filter_frames(res_processed, 1)\n",
675 | "show_plots(res_processed, 2)"
676 | ],
677 | "execution_count": null,
678 | "outputs": []
679 | },
680 | {
681 | "cell_type": "markdown",
682 | "metadata": {
683 | "id": "IEJ1fk28jtJr"
684 | },
685 | "source": [
686 | "If you compare these curves with the ones produced in the previous cell, you will notice that these curves are much smoother.\r\n",
687 | "\r\n",
688 |         "**Bias:** Bias refers to a systematic error. In our case, the manner in which the video was recorded made the subject appear systematically smaller, then larger, and then smaller again as the camera angle and the distance to the subject changed. The resulting apparent segment lengths therefore did not accurately reflect reality.\r\n",
689 |         "\r\n",
690 |         "To compensate for this bias, we center each frame on the mid-hip position and then normalize it by (approximately) the length of the subject's femur. The premise is to choose a reference segment and divide all coordinates by its length. As a result, each pose has exactly the same normalized femur length, and the other segments have lengths defined relative to the femur. The length of each segment should then be fairly constant from frame to frame, since body proportions relative to the femur are fixed. We chose the femur because it is relatively long, which makes its measured length more robust to noise."
691 | ]
692 | },
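    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In equation form, the normalization performed in the next cell can be written, for every keypoint $p_k = (x_k, y_k)$ in a frame, as\n",
        "\n",
        "$$\\tilde{p}_k = \\frac{p_k - p_{\\text{mid-hip}}}{\\ell_{\\text{femur}}},$$\n",
        "\n",
        "where $p_{\\text{mid-hip}}$ is the average of the left and right hip keypoints and $\\ell_{\\text{femur}}$ is the average of the right and left hip-to-knee distances in that frame."
      ]
    },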
693 | {
694 | "cell_type": "code",
695 | "metadata": {
696 | "id": "-H2-qrrJs6yR"
697 | },
698 | "source": [
699 |         "# Find the mid-hip position and the femur-length scale factor used for normalization\n",
700 | "def find_mid_sd(signal_array):\n",
701 | " num_parts = signal_array.shape[1]/2 \n",
702 | "\n",
703 | " # Derive mid hip position\n",
704 | " mhip_x = ((signal_array[:,2*RHIP] + signal_array[:,2*LHIP])/2).reshape(-1,1)\n",
705 | " mhip_y = ((signal_array[:,2*RHIP+1] + signal_array[:,2*LHIP+1])/2).reshape(-1,1)\n",
706 | " mhip_coords = np.hstack([mhip_x,mhip_y]*int(num_parts))\n",
707 | "\n",
708 | " # Normalize to hip-knee distance\n",
709 | " topoint = lambda x: range(2*x,2*x+2)\n",
710 | " scale_vector_R = np.apply_along_axis(lambda x: np.linalg.norm(x[topoint(RHIP)] -\n",
711 | " x[topoint(RKNE)]),1,signal_array)\n",
712 | " scale_vector_L = np.apply_along_axis(lambda x: np.linalg.norm(x[topoint(LHIP)] -\n",
713 | " x[topoint(LKNE)]),1,signal_array)\n",
714 | " scale_vector = ((scale_vector_R + scale_vector_L)/2.0).reshape(-1,1)\n",
715 | " return mhip_coords, scale_vector\n",
716 | "\n",
717 | "res_mhip_coords, res_scale_vector = find_mid_sd(res_processed)\n",
718 | "res_processed = (res_processed-res_mhip_coords)/res_scale_vector\n",
719 | "show_plots(res_processed, 2)"
720 | ],
721 | "execution_count": null,
722 | "outputs": []
723 | },
724 | {
725 | "cell_type": "markdown",
726 | "metadata": {
727 | "id": "4PstpdXmkPTj"
728 | },
729 | "source": [
730 | "Note how the y coordinate (orange) in the normalized curve varies around a fairly constant position, compared to the plots in the previous cell without normalization! In the previous cell, the y position (orange) displayed an underlying \"u\" shape."
731 | ]
732 | },
733 | {
734 | "cell_type": "markdown",
735 | "metadata": {
736 | "id": "nrz7dmyNrG_a"
737 | },
738 | "source": [
739 | "# Analysis\n"
740 | ]
741 | },
742 | {
743 | "cell_type": "markdown",
744 | "metadata": {
745 | "id": "nB_DJxlaBm6K"
746 | },
747 | "source": [
748 | "\n",
749 | "## Derive joint angles\n",
750 | "\n",
751 |         "As we are interested in the knee flexion angle, we will look at the angle formed by the keypoints at the ankle, knee, and hip. We take the arccosine of the normalized dot product between the knee-to-ankle and knee-to-hip vectors, which gives the interior angle at the knee, and then subtract it from 180 degrees to obtain the knee flexion angle."
752 | ]
753 | },
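    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In equation form, with $A$, $B$, and $C$ denoting the 2D positions of the ankle, knee, and hip keypoints, the next cell computes\n",
        "\n",
        "$$\\theta_{\\text{knee}} = 180^{\\circ} - \\arccos\\left(\\frac{(A - B)\\cdot(C - B)}{\\lVert A - B\\rVert \\, \\lVert C - B\\rVert}\\right).$$"
      ]
    },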
754 | {
755 | "cell_type": "code",
756 | "metadata": {
757 | "id": "PHb3WTa4tmgF"
758 | },
759 | "source": [
760 | "def get_angle(A,B,C,centered_filtered):\n",
761 | " \n",
762 | " # finds the angle ABC, assumes that confidence columns have been removed\n",
763 | " # A,B and C are integers corresponding to different keypoints\n",
764 | " \n",
765 | " p_A = np.array([centered_filtered[:,2*A],centered_filtered[:,2*A+1]]).T\n",
766 | " p_B = np.array([centered_filtered[:,2*B],centered_filtered[:,2*B+1]]).T\n",
767 | " p_C = np.array([centered_filtered[:,2*C],centered_filtered[:,2*C+1]]).T\n",
768 | " p_BA = p_A - p_B\n",
769 | " p_BC = p_C - p_B\n",
770 | " dot_products = np.sum(p_BA*p_BC,axis=1)\n",
771 | " norm_products = np.linalg.norm(p_BA,axis=1)*np.linalg.norm(p_BC,axis=1)\n",
772 | " return np.arccos(dot_products/norm_products)\n",
773 | "\n",
774 | "# get the knee angle and convert from radians to degrees\n",
775 | "knee_angle = (np.pi - get_angle(RANK, RKNE, RHIP, res_processed))*180/np.pi\n",
776 | "\n",
777 | "# plot the knee angle\n",
778 | "plt.plot(knee_angle)\n",
779 | "plt.xlabel(\"video frame\")\n",
780 | "plt.ylabel(\"knee angle (degrees)\")"
781 | ],
782 | "execution_count": null,
783 | "outputs": []
784 | },
785 | {
786 | "cell_type": "markdown",
787 | "metadata": {
788 | "id": "t_uxItEDrMkJ"
789 | },
790 | "source": [
791 | "## Divide time series into gait cycles\n",
792 | "\n",
793 |         "As we are interested in the knee flexion over a gait cycle, we will divide the time series into segments. For simplicity, we will define a gait cycle as a *segment between timepoints when a) the distance between the toes on the right and left legs is at a maximum **and** b) the right toes are on the right and the left toes are on the left in the sagittal plane view*.\n",
794 | "\n",
795 | "First, we will compute the distance between the toes on the right and left legs and store that in the variable `dst_org`. We then plot that distance."
796 | ]
797 | },
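    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Concretely, the next cell computes a *signed* toe-to-toe distance for each frame $t$:\n",
        "\n",
        "$$d_t = \\lVert p_{\\text{RBTO}}(t) - p_{\\text{LBTO}}(t)\\rVert \\cdot \\operatorname{sign}\\bigl(x_{\\text{LBTO}}(t) - x_{\\text{RBTO}}(t)\\bigr),$$\n",
        "\n",
        "where $x$ denotes the horizontal image coordinate of each big toe. The norm captures condition a) above, and the sign flips depending on which big toe is farther to one side of the image, so the peaks of $d_t$ pick out only the step configuration described in condition b)."
      ]
    },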
798 | {
799 | "cell_type": "code",
800 | "metadata": {
801 | "id": "lPsLZgRDu99d"
802 | },
803 | "source": [
804 | "def get_distance(A,B,centered_filtered):\n",
805 | " p_A = np.array([centered_filtered[:,2*A],centered_filtered[:,2*A+1]]).T\n",
806 | " p_B = np.array([centered_filtered[:,2*B],centered_filtered[:,2*B+1]]).T \n",
807 | " p_BA = p_A - p_B\n",
808 | " return np.linalg.norm(p_BA,axis=1)\n",
809 | "\n",
810 | "dst_org = get_distance(RBTO, LBTO, res_processed) * np.sign(res_processed[:,LBTO*2] - res_processed[:,RBTO*2])\n",
811 | "plt.plot(dst_org)\n",
812 | "plt.xlabel(\"video frame\")\n",
813 | "plt.ylabel(\"distance between right and left toes\")"
814 | ],
815 | "execution_count": null,
816 | "outputs": []
817 | },
818 | {
819 | "cell_type": "markdown",
820 | "metadata": {
821 | "id": "WQgKfAWwmUYc"
822 | },
823 | "source": [
824 |         "In the plot above, there is some noise on either side of frame 100, so we'll smooth the signal using a Gaussian filter as we did previously. The smoothed distance data is stored in the variable `dst`."
825 | ]
826 | },
827 | {
828 | "cell_type": "code",
829 | "metadata": {
830 | "id": "INXjrbTdwBvE"
831 | },
832 | "source": [
833 | "dst = gaussian_filter1d(dst_org.copy(),5)\n",
834 | "plt.plot(dst)\n",
835 | "plt.xlabel(\"video frame\")\n",
836 | "plt.ylabel(\"distance between right and left toes, smoothed\")"
837 | ],
838 | "execution_count": null,
839 | "outputs": []
840 | },
841 | {
842 | "cell_type": "markdown",
843 | "metadata": {
844 | "id": "J7lY33JDmawM"
845 | },
846 | "source": [
847 | "Now we need to detect the maximum distance between the toes. We will do that by finding the peaks, which are clearly visible. We will use a simple algorithm for peak detection and store the indices for the peaks, along with the peak values, in a two-column variable called `maxs`."
848 | ]
849 | },
850 | {
851 | "cell_type": "code",
852 | "metadata": {
853 | "id": "I5mI0O-UwWqM"
854 | },
855 | "source": [
856 |         "import sys\n",
857 |         "\n",
858 |         "# Peak detection script converted from MATLAB script\n",
857 | "# at http://billauer.co.il/peakdet.html\n",
858 | "# \n",
859 | "# Returns two arrays\n",
860 | "# \n",
861 | "# function [maxtab, mintab]=peakdet(v, delta, x)\n",
862 | "# PEAKDET Detect peaks in a vector\n",
863 | "# [MAXTAB, MINTAB] = PEAKDET(V, DELTA) finds the local\n",
864 | "# maxima and minima (\"peaks\") in the vector V.\n",
865 | "# MAXTAB and MINTAB consists of two columns. Column 1\n",
866 | "# contains indices in V, and column 2 the found values.\n",
867 | "# \n",
868 | "# With [MAXTAB, MINTAB] = PEAKDET(V, DELTA, X) the indices\n",
869 | "# in MAXTAB and MINTAB are replaced with the corresponding\n",
870 | "# X-values.\n",
871 | "#\n",
872 | "# A point is considered a maximum peak if it has the maximal\n",
873 | "# value, and was preceded (to the left) by a value lower by\n",
874 | "# DELTA.\n",
875 | "#\n",
876 | "# Eli Billauer, 3.4.05 (Explicitly not copyrighted).\n",
877 | "# This function is released to the public domain; Any use is allowed.\n",
878 | "def peakdet(v, delta, x = None):\n",
879 | " maxtab = []\n",
880 | " mintab = []\n",
881 | " \n",
882 | " if x is None:\n",
883 | " x = np.arange(len(v))\n",
884 | " \n",
885 | " v = np.asarray(v)\n",
886 | " \n",
887 | " if len(v) != len(x):\n",
888 | " sys.exit('Input vectors v and x must have same length')\n",
889 | " \n",
890 | " if not np.isscalar(delta):\n",
891 | " sys.exit('Input argument delta must be a scalar')\n",
892 | " \n",
893 | " if delta <= 0:\n",
894 | " sys.exit('Input argument delta must be positive')\n",
895 | " \n",
896 |         "    mn, mx = np.inf, -np.inf\n",
897 |         "    mnpos, mxpos = np.nan, np.nan\n",
898 | " \n",
899 | " lookformax = True\n",
900 | " \n",
901 | " for i in np.arange(len(v)):\n",
902 | " this = v[i]\n",
903 | " if this > mx:\n",
904 | " mx = this\n",
905 | " mxpos = x[i]\n",
906 | " if this < mn:\n",
907 | " mn = this\n",
908 | " mnpos = x[i]\n",
909 | " \n",
910 | " if lookformax:\n",
911 | " if this < mx-delta:\n",
912 | " maxtab.append((mxpos, mx))\n",
913 | " mn = this\n",
914 | " mnpos = x[i]\n",
915 | " lookformax = False\n",
916 | " else:\n",
917 | " if this > mn+delta:\n",
918 | " mintab.append((mnpos, mn))\n",
919 | " mx = this\n",
920 | " mxpos = x[i]\n",
921 | " lookformax = True\n",
922 | "\n",
923 | " return np.array(maxtab), np.array(mintab)\n",
924 | "\n",
925 | "maxs, mins = peakdet(dst, 0.5)\n",
926 | "\n",
927 | "print(\"DONE: maximums and minimums identified\")"
928 | ],
929 | "execution_count": null,
930 | "outputs": []
931 | },
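    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a side note, SciPy also ships a built-in peak finder, `scipy.signal.find_peaks`. The minimal sketch below compares its output to the `peakdet` result; the `prominence` argument plays a role loosely similar to `delta` above, and the value 0.5 is chosen only to match that threshold."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "from scipy.signal import find_peaks\n",
        "\n",
        "# Optional cross-check: peak indices from SciPy's built-in routine\n",
        "peaks, _ = find_peaks(dst, prominence=0.5)\n",
        "print(\"find_peaks indices:\", peaks)\n",
        "print(\"peakdet indices:   \", maxs[:, 0].astype(int))"
      ],
      "execution_count": null,
      "outputs": []
    },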
932 | {
933 | "cell_type": "markdown",
934 | "metadata": {
935 | "id": "8Zs2eCdcnRiI"
936 | },
937 | "source": [
938 | "We now add vertical lines to the identified peak locations in the plot below. This will allow us to do a visual check to see if the peaks are identified correctly. "
939 | ]
940 | },
941 | {
942 | "cell_type": "code",
943 | "metadata": {
944 | "id": "8T1XDsTQxKFq"
945 | },
946 | "source": [
947 | "# plot the smoothed distance curve\n",
948 | "plt.plot(dst)\n",
949 | "plt.xlabel(\"video frame\")\n",
950 | "plt.ylabel(\"distance between right and left toes, smoothed\")\n",
951 | "\n",
952 | "# plot a vertical line at each point identified to have a maximum\n",
953 | "for mm in maxs.tolist():\n",
954 | " plt.axvline(x=mm[0], color=\"orange\")"
955 | ],
956 | "execution_count": null,
957 | "outputs": []
958 | },
959 | {
960 | "cell_type": "markdown",
961 | "metadata": {
962 | "id": "1Xsqafugq7pg"
963 | },
964 | "source": [
965 | "The peaks appear to be identified correctly, so we will use \r\n",
966 | "these video frames to similarly mark the split between gait cycles on the knee angle time series."
967 | ]
968 | },
969 | {
970 | "cell_type": "code",
971 | "metadata": {
972 | "id": "I6V7po2Dq65Y"
973 | },
974 | "source": [
975 | "# plot the knee angle curve\n",
976 | "plt.plot(knee_angle)\n",
977 | "plt.xlabel(\"video frame\")\n",
978 | "plt.ylabel(\"knee angle (degrees)\")\n",
979 | "\n",
980 | "# plot a vertical line at each split between gait cycles\n",
981 | "for mm in maxs.tolist():\n",
982 | " plt.axvline(x=mm[0], color=\"orange\")"
983 | ],
984 | "execution_count": null,
985 | "outputs": []
986 | },
987 | {
988 | "cell_type": "markdown",
989 | "metadata": {
990 | "id": "2U8YESy5rBoW"
991 | },
992 | "source": [
993 | "We will extract the knee angles for the first 4 full segments, or gait cycles in this case, and store them in the variable `segments` to be used for further analysis and visualization."
994 | ]
995 | },
996 | {
997 | "cell_type": "code",
998 | "metadata": {
999 | "id": "tEm5CK17xu6e"
1000 | },
1001 | "source": [
1002 | "segments = []\n",
1003 | "sstart = int(maxs[0][0])\n",
1004 | "for i in range(1,5):\n",
1005 | " end = int(maxs[i][0])\n",
1006 | " segments.append(knee_angle[sstart:end])\n",
1007 | " print(\"Segment from {} to {}\".format(sstart,end))\n",
1008 | " sstart = end"
1009 | ],
1010 | "execution_count": null,
1011 | "outputs": []
1012 | },
1013 | {
1014 | "cell_type": "markdown",
1015 | "metadata": {
1016 | "id": "-4wlnidotp-q"
1017 | },
1018 | "source": [
1019 | "# Report results\n"
1020 | ]
1021 | },
1022 | {
1023 | "cell_type": "markdown",
1024 | "metadata": {
1025 | "id": "Xsd0q6FkBoPg"
1026 | },
1027 | "source": [
1028 | "\n",
1029 | "## Plot knee angle for four gait cycles and export the plot\n",
1030 | "\n",
1031 |         "We can now visually compare the knee angle over the first four gait cycles. After you run the cell below, you should see a plot with four curves, one for each gait cycle. Using the `plt.savefig` command, we also export this plot to a file called \"curves.png\", which should appear in your Files folder to the left."
1032 | ]
1033 | },
1034 | {
1035 | "cell_type": "code",
1036 | "metadata": {
1037 | "id": "lQ2NiU-eyy10"
1038 | },
1039 | "source": [
1040 | "fig, ax = plt.subplots(figsize=(7, 5))\n",
1041 | "ax.spines['right'].set_visible(False)\n",
1042 | "ax.spines['top'].set_visible(False)\n",
1043 | "plt.title(\"Approx. planar knee angle\",fontsize=24)\n",
1044 | "plt.xlabel(\"gait cycle (in %)\",fontsize=17)\n",
1045 | "plt.ylabel(\"angle (in degrees)\",fontsize=17)\n",
1046 | "\n",
1047 | "# Because each gait cycle has a slightly different number of points, \n",
1048 | "# we must interpolate the data for each gait cycle so that\n",
1049 | "# each gait cycle has the same number of points in order to plot\n",
1050 | "# them together. Here we choose 100 points.\n",
1051 | "grid = np.linspace(0.0, 1.0, num=100)\n",
1052 | "curves = np.zeros([4,100])\n",
1053 | "\n",
1054 | "for i in range(4):\n",
1055 | " org_space = np.linspace(0.0, 1.0, num=int(segments[i].shape[0]))\n",
1056 | " f = interpolate.interp1d(org_space, segments[i], kind=\"linear\")\n",
1057 | " plt.plot(grid*100, f(grid), linestyle=\"-\", linewidth=2.5)\n",
1058 | " plt.legend([\"Cycle {}\".format(k) for k in range(1,5)],loc=1)\n",
1059 | " curves[i,:] = f(grid)\n",
1060 | "\n",
1061 | "plt.savefig(\"curves.png\")"
1062 | ],
1063 | "execution_count": null,
1064 | "outputs": []
1065 | },
1066 | {
1067 | "cell_type": "markdown",
1068 | "metadata": {
1069 | "id": "GdSAYpaydoJy"
1070 | },
1071 | "source": [
1072 | "## Export cleaned up trajectories"
1073 | ]
1074 | },
1075 | {
1076 | "cell_type": "markdown",
1077 | "metadata": {
1078 | "id": "g6DsZkhsa0BO"
1079 | },
1080 | "source": [
1081 | "We can also save any of our data to a text file. Below we save the trajectories from the first four gait cycles to a file called \"trajectories.csv.\" After running the cell below, you should see that file in your Files folder to the left."
1082 | ]
1083 | },
1084 | {
1085 | "cell_type": "code",
1086 | "metadata": {
1087 | "id": "4NxvMUOrdwrP"
1088 | },
1089 | "source": [
1090 | "np.savetxt(\"trajectories.csv\", curves, delimiter=\",\")"
1091 | ],
1092 | "execution_count": null,
1093 | "outputs": []
1094 | },
1095 | {
1096 | "cell_type": "markdown",
1097 | "metadata": {
1098 | "id": "qRWUKVuuBqqT"
1099 | },
1100 | "source": [
1101 | "# Experiment with the Code\n",
1102 | "\n",
1103 | "Congratulations! You've completed your first video analysis with OpenPose. Here are a few activities you can try to increase your familiarity with the code. They are ordered from simplest to most complex.\n",
1104 | "\n",
1105 |         "* Change the standard deviation of the Gaussian smoothing filter to 5. How does this change the calculated joint angles?\n",
1106 |         "* Plot 4 gait cycles of the ankle flexion angle\n",
1107 |         "* Compute the mean and variance of peak knee flexion across 4 gait cycles (a starter sketch is provided after this list)\n",
1108 | "* Run the scripts on another video https://github.com/stanfordnmbl/mobile-gaitlab/blob/master/demo/in/input1.mp4?raw=true. Note that because of people in the background, this video needs more preprocessing. Some of the assumptions we made in the code above will no longer be true.\n",
1109 | "* Install keras and run neural networks from our study (https://github.com/stanfordnmbl/mobile-gaitlab/) to get better predictors of gait parameters from videos. "
1110 | ]
1111 | },
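    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For the peak knee flexion exercise above, here is one possible, non-authoritative starting point. It assumes the `curves` array (4 gait cycles x 100 points) produced earlier in this notebook; feel free to solve the exercise differently."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Starter sketch for the peak knee flexion exercise (one possible approach)\n",
        "peak_flexion = curves.max(axis=1)  # peak knee angle in each of the 4 cycles\n",
        "print(\"peak knee flexion per cycle (deg):\", np.round(peak_flexion, 1))\n",
        "print(\"mean (deg):\", np.round(peak_flexion.mean(), 1))\n",
        "print(\"variance (deg^2):\", np.round(peak_flexion.var(ddof=1), 1))"
      ],
      "execution_count": null,
      "outputs": []
    },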
1112 | {
1113 | "cell_type": "markdown",
1114 | "metadata": {
1115 | "id": "rO7S3Zu2EIgM"
1116 | },
1117 | "source": [
1118 | "# Other resources\n",
1119 | "\n",
1120 |         "* Our publication on gait analysis in a cerebral palsy population: https://www.nature.com/articles/s41467-020-17807-z\n",
1121 | "* Code from our study and demo of neural networks: https://github.com/stanfordnmbl/mobile-gaitlab\n",
1122 | "* OpenPose repository: https://github.com/CMU-Perceptual-Computing-Lab/openpose\n",
1123 |         "* Facebook's software for estimating 3D poses in videos: https://github.com/facebookresearch/VideoPose3D\n",
1124 | "* DeepLabCut, software for creating virtual markers for different body parts in different species (not just humans) from videos: http://www.mousemotorlab.org/deeplabcut\n"
1125 | ]
1126 | },
1127 | {
1128 | "cell_type": "markdown",
1129 | "metadata": {
1130 | "id": "JE3IfCEDo8qo"
1131 | },
1132 | "source": [
1133 |         "# Feedback\r\n",
1134 | "\r\n",
1135 | "This notebook is a work-in-progress and we welcome your feedback on how to increase its usefulness. Email comments to us at [mobilize-center@stanford.edu](mailto:mobilize-center@stanford.edu) or submit an issue on GitHub."
1136 | ]
1137 | },
1138 | {
1139 | "cell_type": "markdown",
1140 | "metadata": {
1141 | "id": "ifTa38Spblu9"
1142 | },
1143 | "source": [
1144 | "\r\n",
1145 | "\r\n",
1146 | "---\r\n",
1147 | "\r\n"
1148 | ]
1149 | },
1150 | {
1151 | "cell_type": "markdown",
1152 | "metadata": {
1153 | "id": "j9gidtZddPYg"
1154 | },
1155 | "source": [
1156 | "Version 1.04\r\n",
1157 | "\r\n",
1158 | "Creator: Lukasz Kidzinski | Contributors: Lukasz Kidzinski, Joy Ku \r\n",
1159 | "Last Updated on Mar. 10, 2021\r\n",
1160 | "\r\n",
1161 | "This notebook is made available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).\r\n",
1162 | "\r\n",
1163 |         "Note that software used within the notebook might have different licenses. Please refer to the documentation for individual software and packages. In particular, OpenPose is free only for non-commercial use (see its [license](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/LICENSE) for details). For an open-source alternative, see [Detectron2](https://github.com/facebookresearch/detectron2) for a keypoint detection algorithm released under the Apache 2.0 license."
1164 | ]
1165 | }
1166 | ]
1167 | }
1168 |
--------------------------------------------------------------------------------