├── .gitignore
├── DaphnetGait
└── dataset_fog_release
│ ├── .gitignore
│ ├── dataProcessing.py
│ └── models.py
├── LICENSE
├── Opportunity
├── OpportunityUCIDataset
│ ├── .gitignore
│ ├── col_names
│ ├── dataProcessing.py
│ ├── models.py
│ └── results
│ │ ├── cnn
│ │ ├── high_level_activity
│ │ │ ├── cnn_50ep_acc.png
│ │ │ ├── cnn_50ep_cm.png
│ │ │ ├── cnn_50ep_loss.png
│ │ │ └── cnn_50ep_output.txt
│ │ └── locomotion
│ │ │ ├── cnn_50_ep_cm.png
│ │ │ ├── cnn_50ep_acc.png
│ │ │ ├── cnn_50ep_loss.png
│ │ │ └── cnn_50ep_output.txt
│ │ └── rnn
│ │ ├── high_level_activity
│ │ ├── rnn_50ep_acc.png
│ │ ├── rnn_50ep_cm.png
│ │ ├── rnn_50ep_loss.png
│ │ └── rnn_50ep_output.txt
│ │ └── locomotion
│ │ ├── rnn_50ep_acc.png
│ │ ├── rnn_50ep_cm.png
│ │ ├── rnn_50ep_loss.png
│ │ └── rnn_50ep_output.txt
└── summary_table.md
├── PAMAP2
└── PAMAP2_Dataset
│ ├── .gitignore
│ ├── ankle_acc_1.png
│ ├── chest_acc_1.png
│ ├── dataProcessing.py
│ ├── handacc_1.png
│ ├── models.py
│ ├── pamap_scaled.h5
│ ├── pamap_scaled2.h5
│ ├── results
│ ├── cnn_res
│ │ ├── cnn_50ep_output.txt
│ │ ├── cnn_50epo_accuracy.png
│ │ ├── cnn_50epo_cm.png
│ │ └── cnn_50epo_loss.png
│ ├── dnn_res
│ │ ├── dnn_50ep_acc.png
│ │ ├── dnn_50ep_cm.png
│ │ ├── dnn_50ep_loss.png
│ │ └── dnn_50ep_output.txt
│ ├── normalized_data
│ │ ├── standard_cnn_50epo_result.txt
│ │ ├── standard_cnn_accuracy.png
│ │ ├── standard_cnn_cm.png
│ │ └── standard_cnn_loss.png
│ └── rnn_res
│ │ ├── rnn_50ep_acc.png
│ │ ├── rnn_50ep_cm.png
│ │ ├── rnn_50ep_loss.png
│ │ └── rnn_50ep_output.txt
│ └── summary_table.md
├── SHLDataset
├── .gitignore
├── data_fusion.py
├── data_fusion2.py
├── data_fusion3.py
├── figures
│ ├── FFNN (4).png
│ ├── RNN2.png
│ ├── cnn.png
│ ├── datafusion_loss_acc.png
│ ├── dataprocessing.png
│ ├── early_late_cm.png
│ ├── earlyfusion.png
│ ├── earlyvslate.drawio
│ ├── earlyvslate.png
│ ├── fusion_cm.drawio
│ ├── fusion_cm.png
│ ├── fusion_model.png
│ ├── latefusion.png
│ ├── sensors.jpg
│ └── three_models.png
├── fusion_output
│ ├── bag_video
│ │ ├── cnn_50_accuracy.png
│ │ ├── cnn_50_cm.png
│ │ ├── cnn_50_loss.png
│ │ └── cnn_50_output.txt
│ └── phone_and_video
│ │ ├── cnn_50_accuracy.png
│ │ ├── cnn_50_cm.png
│ │ ├── cnn_50_loss.png
│ │ └── cnn_50_output.txt
├── image_processing_fusion.py
├── motion_modeling.py
├── motion_output
│ ├── bag_motion
│ │ ├── cnn
│ │ │ ├── cnn_50_accuracy.png
│ │ │ ├── cnn_50_cm.png
│ │ │ ├── cnn_50_loss.png
│ │ │ └── cnn_50_output.txt
│ │ ├── dnn
│ │ │ ├── dnn_50_accuracy.png
│ │ │ ├── dnn_50_cm.png
│ │ │ ├── dnn_50_loss.png
│ │ │ └── dnn_50_output.txt
│ │ └── rnn
│ │ │ ├── rnn_50_accuracy.png
│ │ │ ├── rnn_50_cm.png
│ │ │ ├── rnn_50_loss.png
│ │ │ └── rnn_50_output.txt
│ ├── hand_motion
│ │ ├── cnn_50_accuracy.png
│ │ ├── cnn_50_cm.png
│ │ ├── cnn_50_loss.png
│ │ └── cnn_50_output.txt
│ ├── hip_motion
│ │ ├── cnn_50_accuracy.png
│ │ ├── cnn_50_cm.png
│ │ ├── cnn_50_loss.png
│ │ └── cnn_50_output.txt
│ ├── motion_fusion
│ │ ├── earlyfusion
│ │ │ ├── cnn_50_accuracy.png
│ │ │ ├── cnn_50_cm.png
│ │ │ ├── cnn_50_loss.png
│ │ │ └── cnn_50_output.txt
│ │ └── latefusion
│ │ │ ├── Figure_3.png
│ │ │ ├── cnn_50_accuracy.png
│ │ │ ├── cnn_50_loss.png
│ │ │ └── cnn_50_output.txt
│ └── torso_motion
│ │ ├── cnn_50_accuracy.png
│ │ ├── cnn_50_cm.png
│ │ ├── cnn_50_cm.txt
│ │ └── cnn_50_loss.png
├── motion_processing.py
├── preprocessing.py
├── readme.md
├── video_modeling.py
└── video_processing.py
├── Sphere
├── .footer.html
├── .gitignore
├── .header.html
├── README.txt
├── cnn
│ ├── accuracy.png
│ ├── confusion_matrix.png
│ ├── loss.png
│ └── result.txt
├── dataProcessing.py
└── models.py
├── UCI HAR Dataset
└── UCI HAR Dataset
│ ├── .gitignore
│ ├── dataProcessing.py
│ ├── models.py
│ ├── results
│ ├── cnn
│ │ ├── cnn_50ep_acc.png
│ │ ├── cnn_50ep_cm.png
│ │ ├── cnn_50ep_loss.png
│ │ └── cnn_50ep_output.txt
│ ├── dnn
│ │ ├── dnn_50ep_acc.png
│ │ ├── dnn_50ep_cm.png
│ │ ├── dnn_50ep_loss.png
│ │ └── dnn_50ep_result.txt
│ └── rnn
│ │ ├── rnn_50ep_acc.png
│ │ ├── rnn_50ep_cm.png
│ │ ├── rnn_50ep_loss.png
│ │ └── rnn_50ep_output.txt
│ └── summary_table.md
└── readme.md
/.gitignore:
--------------------------------------------------------------------------------
1 | across_sensor_har
2 | cnn.py
3 | read_files.py
4 | result.pptx
--------------------------------------------------------------------------------
/DaphnetGait/dataset_fog_release/.gitignore:
--------------------------------------------------------------------------------
1 | dataset
2 | doc
3 | scripts
4 | Dap.h5
5 | README
--------------------------------------------------------------------------------
/DaphnetGait/dataset_fog_release/dataProcessing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Fri Jun 26 23:09:38 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | # This file is for Daphnet Freezing of Gait (FoG) data processing
9 | from matplotlib import pyplot as plt
10 | import pandas as pd
11 | import numpy as np
12 | import h5py
13 | import os
14 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
15 | activities = {0: 'None', # 0: not part of the experiment. For instance the sensors are installed on the user or the user is performing activities unrelated to the experimental protocol, such as debriefing
16 | 1: 'experiment', # 1: experiment, no freeze (can be any of stand, walk, turn)
17 | 2: 'freeze' }
18 |
19 | def read_files():
20 |
21 | files = os.listdir("./dataset")
22 | col_names = ["timestamp", "ankle_acc1", "ankle_acc2", "ankle_acc3", "upper_leg_acc1", "upper_leg_acc2","upper_leg_acc3", "Trunk_acc1", "Trunk_acc2", "Trunk_acc3", "annotation"]
23 | dataCollection = pd.DataFrame()
24 | for i, file in enumerate(files):
25 | print(file," is reading...")
26 | relative_file = "./dataset/"+file
27 | procData = pd.read_table(relative_file, header=None, sep='\s+')
28 | procData.columns = col_names
29 | procData['file_index'] = i # put the file index at the end of the row
30 | dataCollection = dataCollection.append(procData, ignore_index=True)
31 | #break; # for testing short version, need to delete later
32 | dataCollection.reset_index(drop=True, inplace=True)
33 | #print(dataCollection.shape)
34 | return dataCollection
35 |
36 | def dataCleaning(dataCollection):
37 | dataCollection = dataCollection.drop(['timestamp'],axis = 1) # drop the timestamp column as it is not needed for classification
38 | dataCollection = dataCollection.drop(dataCollection[dataCollection.annotation == 0].index) # drop rows with annotation 0, which are not part of the experiment
39 | dataCollection = dataCollection.apply(pd.to_numeric, errors = 'coerce') #removal of non numeric data in cells
40 | print(dataCollection.isna().sum().sum())#count all NaN
41 | print(dataCollection.shape)
42 | #dataCollection = dataCollection.dropna()
43 | #dataCollection = dataCollection.interpolate()
44 | #removal of any remaining NaN value cells by constructing new data points in known set of data points
45 | #for i in range(0,4):
46 | # dataCollection["heartrate"].iloc[i]=100 # only 4 cells are Nan value, change them manually
47 | print("data cleaned!")
48 | return dataCollection
49 |
50 | def reset_label(dataCollection):
51 | # Convert original labels {1, 2} to new labels.
52 | mapping = {2:0,1:1} # old activity Id to new activity Id
53 | for i in [2]:
54 | dataCollection.loc[dataCollection.annotation == i, 'annotation'] = mapping[i]
55 |
56 | return dataCollection
57 |
58 | def segment(data, window_size): # data is numpy array
59 | n = len(data)
60 | X = []
61 | y = []
62 | start = 0
63 | end = 0
64 | while start + window_size - 1 < n:
65 | end = start + window_size-1
66 | if data[start][-2] == data[end][-2] and data[start][-1] == data[end][-1] : # if the frame contains the same activity and from the same file
67 | X.append(data[start:(end+1),0:-2])
68 | y.append(data[start][-2])
69 | start += window_size//2 # 50% overlap
70 | else: # if the frame spans different activities or different files, move the start point forward
71 | while start + window_size-1 < n:
72 | if data[start][-2] != data[start+1][-2]:
73 | break
74 | start += 1
75 | start += 1
76 | print(np.asarray(X).shape, np.asarray(y).shape)
77 | return {'inputs' : np.asarray(X), 'labels': np.asarray(y,dtype=int)}
78 |
79 | def downsize(data):# data is numpy array
80 | downsample_size = 2
81 | data = data[::downsample_size,:]
82 | return data
83 | '''
84 | def scale_data(data): # data is a 2D numpy array; returns a standardized copy
85 | scaler = StandardScaler()
86 | data = scaler.fit_transform(data)
87 | return data
88 | '''
89 | def save_data(data,file_name): # save the data in h5 format
90 | f = h5py.File(file_name,'w')
91 | for key in data:
92 | print(key)
93 | f.create_dataset(key,data = data[key])
94 | f.close()
95 | print('Done.')
96 |
97 | if __name__ == "__main__":
98 | file_name = 'Dap.h5'
99 | window_size = 25
100 | data = read_files()
101 | data = dataCleaning(data)
102 | data = reset_label(data)
103 | #print(set(data.annotation))
104 | numpy_data = data.to_numpy()
105 | numpy_data = downsize(numpy_data) # downsample by a factor of 2 (keep every other row)
106 | segment_data = segment(numpy_data, window_size)
107 | save_data(segment_data, file_name)
108 |
109 |
110 |
111 |
112 |
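113 | 
114 | # --- Optional sanity check (an illustrative sketch, not part of the original pipeline) ---
115 | # Re-open the file written by save_data() above and print the shapes and label
116 | # distribution of the saved windows. With window_size = 25 and the 9 accelerometer
117 | # columns kept by segment(), 'inputs' should have shape (num_windows, 25, 9).
118 | # It only relies on the 'inputs'/'labels' keys created above.
119 | check = h5py.File('Dap.h5', 'r')
120 | inputs = np.array(check.get('inputs'))
121 | labels = np.array(check.get('labels'))
122 | check.close()
123 | print('windows:', inputs.shape, 'labels:', labels.shape)
124 | print('label counts (0 = freeze, 1 = no freeze):', dict(zip(*np.unique(labels, return_counts=True))))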
--------------------------------------------------------------------------------
/DaphnetGait/dataset_fog_release/models.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Mon Jun 29 16:41:56 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | '''
9 | Apply different deep learning models to the Daphnet Freezing of Gait dataset.
10 | DNN, CNN and RNN are applied.
11 |
12 | '''
13 | import pandas as pd
14 | import numpy as np
15 | import matplotlib.pyplot as plt
16 | from scipy import stats
17 | import tensorflow as tf
18 | from sklearn import metrics
19 | import h5py
20 | import matplotlib.pyplot as plt
21 | from tensorflow.keras import regularizers
22 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization
23 | from tensorflow.keras.models import Model
24 | from tensorflow.keras.optimizers import Adam
25 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
26 | from sklearn.model_selection import train_test_split
27 | from sklearn.metrics import confusion_matrix
28 | import itertools
29 |
30 |
31 | class models():
32 | def __init__(self, path):
33 | self.path = path
34 |
35 |
36 | def read_h5(self):
37 | f = h5py.File(self.path, 'r') # use the path passed to the constructor
38 | X = f.get('inputs')
39 | y = f.get('labels')
40 |
41 | self.X = np.array(X)
42 | self.y = np.array(y)
43 | print(self.X[0][0])
44 | self.data_scale()
45 | print(self.X[0][0])
46 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(self.X, self.y, test_size=0.4, random_state = 11)
47 |
48 | print("X = ", self.X.shape)
49 | print("y =",self.y.shape)
50 |
51 | #print(self.x_train.shape, self.x_test.shape, self.y_train.shape, self.y_test.shape)
52 | #print(self.x_train[0].shape)
53 | #return X,y
54 |
55 | def data_scale(self):
56 | #sklearn scalers only operate on 2D arrays, so reshape 3D -> 2D here, scale, then reshape back to 3D
57 | dim_0 = self.X.shape[0]
58 | dim_1 = self.X.shape[1]
59 | temp = self.X.reshape(dim_0,-1)
60 | #scaler = MinMaxScaler()
61 | scaler = StandardScaler()
62 | scaler.fit(temp)
63 | temp = scaler.transform(temp)
64 | self.X = temp.reshape(dim_0,dim_1,-1)
65 |
66 |
67 | def cnn_model(self):
68 | #K = len(set(self.y_train))
69 | K = 1
70 | #print(K)
71 | self.x_train = np.expand_dims(self.x_train, -1)
72 | self.x_test = np.expand_dims(self.x_test,-1)
73 | #print(self.x_train, self.x_test)
74 | i = Input(shape=self.x_train[0].shape)
75 | #Tested with several hidden layers, but since the data has only 9 features, multiple hidden layers are not necessary
76 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
77 | x = BatchNormalization()(x)
78 | #x = MaxPooling2D((2,2))(x)
79 | #x = Dropout(0.2)(x)
80 | #x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
81 | #x = BatchNormalization()(x)
82 | #x = Dropout(0.4)(x)
83 | #x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
84 | #x = BatchNormalization()(x)
85 | #x = MaxPooling2D((2,2))(x)
86 | #x = Dropout(0.2)(x)
87 | x = Flatten()(x)
88 | x = Dropout(0.2)(x)
89 | x = Dense(128,activation = 'relu')(x)
90 | x = Dropout(0.2)(x)
91 | x = Dense(K, activation = 'sigmoid')(x) # single sigmoid unit for binary crossentropy (softmax over one unit always outputs 1)
92 | self.model = Model(i,x)
93 | self.model.compile(optimizer = Adam(lr=0.0005),
94 | loss = 'binary_crossentropy',
95 | metrics = ['accuracy'])
96 |
97 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
98 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 64 )
99 | print(self.model.summary())
100 | # Splitting the data ourselves beforehand works better than letting Keras do the splitting
101 | return self.r
102 |
103 | def dnn_model(self):
104 | K = 1
105 | i = Input(shape=self.x_train[0].shape)
106 | x = Flatten()(i)
107 | x = Dense(32,activation = 'relu')(x)
108 | #x = Dense(128,activation = 'relu')(x)
109 | #x = Dropout(0.2)(x)
110 | #x = Dense(256,activation = 'relu')(x)
111 | #x = Dropout(0.2)(x)
112 | #x = Dense(128,activation = 'relu')(x)
113 | x = Dense(K,activation = 'sigmoid')(x) # sigmoid (not softmax) for the single binary output unit
114 | self.model = Model(i,x)
115 | self.model.compile(optimizer = Adam(lr=0.001),
116 | loss = 'binary_crossentropy',
117 | metrics = ['accuracy'])
118 |
119 | '''
120 | model = tf.keras.models.Sequential([
121 | tf.keras.layers.Flatten(input_shape=self.x_train[0].shape),
122 | tf.keras.layers.Dense(256, activation = 'relu'),
123 | tf.keras.layers.Dropout(0.5),
124 | tf.keras.layers.Dense(256, activation = 'relu'),
125 | tf.keras.layers.Dropout(0.2),
126 | tf.keras.layers.Dense(K,activation = 'softmax')
127 | ])
128 | model.compile(optimizer = Adam(lr=0.0005),
129 | loss = 'binary_crossentropy',
130 | metrics = ['accuracy'])
131 | '''
132 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
133 | print(self.model.summary())
134 | return self.r
135 |
136 |
137 | def rnn_model(self):
138 | K = 1
139 | i = Input(shape = self.x_train[0].shape)
140 | x = LSTM(64)(i)
141 | x = Dense(32,activation = 'relu')(x)
142 | x = Dense(K,activation = 'sigmoid')(x) # sigmoid (not softmax) for the single binary output unit
143 | self.model = Model(i,x)
144 | self.model.compile(optimizer = Adam(lr=0.001),
145 | loss = 'binary_crossentropy',
146 | metrics = ['accuracy'])
147 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
148 | print(self.model.summary())
149 | #self.r = model.fit(X, y, validation_split = 0.2, epochs = 10, batch_size = 32 )
150 | return self.r
151 |
152 | def draw(self):
153 | f1 = plt.figure(1)
154 | plt.title('Loss')
155 | plt.plot(self.r.history['loss'], label = 'loss')
156 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
157 | plt.legend()
158 | f1.show()
159 |
160 | f2 = plt.figure(2)
161 | plt.plot(self.r.history['acc'], label = 'accuracy')
162 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
163 | plt.legend()
164 | f2.show()
165 |
166 | # summary, confusion matrix and heatmap
167 | def con_matrix(self):
168 | K = len(set(self.y_train))
169 | self.y_pred = (self.model.predict(self.x_test) > 0.5).astype(int).ravel() # threshold the sigmoid output at 0.5 (argmax over a single column is always 0)
170 | cm = confusion_matrix(self.y_test,self.y_pred)
171 | self.plot_confusion_matrix(cm,list(range(K)))
172 |
173 |
174 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
175 | if normalize:
176 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
177 | print("Normalized confusion matrix")
178 | else:
179 | print("Confusion matrix, without normalization")
180 | print(cm)
181 | f3 = plt.figure(3)
182 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
183 | plt.title(title)
184 | plt.colorbar()
185 | tick_marks = np.arange(len(classes))
186 | plt.xticks(tick_marks, classes, rotation=45)
187 | plt.yticks(tick_marks, classes)
188 |
189 | fmt = '.2f' if normalize else 'd'
190 | thresh = cm.max()/2.
191 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
192 | plt.text(j, i, format(cm[i, j], fmt),
193 | horizontalalignment = "center",
194 | color = "white" if cm[i, j] > thresh else "black")
195 | plt.tight_layout()
196 | plt.ylabel('True label')
197 | plt.xlabel('predicted label')
198 | f3.show()
199 |
200 |
201 |
202 | if __name__ == "__main__":
203 | model_name = "cnn" # can be cnn/dnn/rnn
204 | path = "./Dap.h5"
205 | dap = models(path)
206 | print("read h5 file....")
207 | dap.read_h5()
208 |
209 | if model_name == "cnn":
210 | dap.cnn_model()
211 | elif model_name == "dnn":
212 | dap.dnn_model()
213 | elif model_name == "rnn":
214 | dap.rnn_model()
215 | dap.draw()
216 |
217 |
218 |
219 |
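220 | 
221 | # --- Illustrative per-class evaluation (an added sketch, not from the original script) ---
222 | # Overall accuracy can look good even if the rarer 'freeze' windows are missed, so
223 | # print precision/recall per class as well. Assumes the __main__ block above has just
224 | # trained a model into dap.model (label 0 = freeze, 1 = no freeze after reset_label()).
225 | y_prob = dap.model.predict(dap.x_test) # predicted probabilities, shape (n, 1)
226 | y_pred = (y_prob > 0.5).astype(int).ravel() # threshold the sigmoid output at 0.5
227 | print(metrics.classification_report(dap.y_test, y_pred, target_names=['freeze', 'no freeze']))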
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 JenHu
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/.gitignore:
--------------------------------------------------------------------------------
1 | dataset
2 | doc
3 | scripts
4 | hl.h5
5 | hl_2.h5
6 | loco.h5
7 | loco_2.h5
8 | README
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/col_names:
--------------------------------------------------------------------------------
1 | MILLISEC
2 | Acc-RKN^-accX
3 | Acc-RKN^-accY
4 | Acc-RKN^-accZ
5 | Acc-HIP-accX
6 | Acc-HIP-accY
7 | Acc-HIP-accZ
8 | Acc-LUA^-accX
9 | Acc-LUA^-accY
10 | Acc-LUA^-accZ
11 | Acc-RUA_-accX
12 | Acc-RUA_-accY
13 | Acc-RUA_-accZ
14 | Acc-LH-accX
15 | Acc-LH-accY
16 | Acc-LH-accZ
17 | Acc-BACK-accX
18 | Acc-BACK-accY
19 | Acc-BACK-accZ
20 | Acc-RKN_-accX
21 | Acc-RKN_-accY
22 | Acc-RKN_-accZ
23 | Acc-RWR-accX
24 | Acc-RWR-accY
25 | Acc-RWR-accZ
26 | Acc-RUA^-accX
27 | Acc-RUA^-accY
28 | Acc-RUA^-accZ
29 | Acc-LUA_-accX
30 | Acc-LUA_-accY
31 | Acc-LUA_-accZ
32 | Acc-LWR-accX
33 | Acc-LWR-accY
34 | Acc-LWR-accZ
35 | Acc-RH-accX
36 | Acc-RH-accY
37 | Acc-RH-accZ
38 | IMU-BACK-accX
39 | IMU-BACK-accY
40 | IMU-BACK-accZ
41 | IMU-BACK-gyroX
42 | IMU-BACK-gyroY
43 | IMU-BACK-gyroZ
44 | IMU-BACK-magneticX
45 | IMU-BACK-magneticY
46 | IMU-BACK-magneticZ
47 | IMU-BACK-Quaternion1
48 | IMU-BACK-Quaternion2
49 | IMU-BACK-Quaternion3
50 | IMU-BACK-Quaternion4
51 | IMU-RUA-accX
52 | IMU-RUA-accY
53 | IMU-RUA-accZ
54 | IMU-RUA-gyroX
55 | IMU-RUA-gyroY
56 | IMU-RUA-gyroZ
57 | IMU-RUA-magneticX
58 | IMU-RUA-magneticY
59 | IMU-RUA-magneticZ
60 | IMU-RUA-Quaternion1
61 | IMU-RUA-Quaternion2
62 | IMU-RUA-Quaternion3
63 | IMU-RUA-Quaternion4
64 | IMU-RLA-accX
65 | IMU-RLA-accY
66 | IMU-RLA-accZ
67 | IMU-RLA-gyroX
68 | IMU-RLA-gyroY
69 | IMU-RLA-gyroZ
70 | IMU-RLA-magneticX
71 | IMU-RLA-magneticY
72 | IMU-RLA-magneticZ
73 | IMU-RLA-Quaternion1
74 | IMU-RLA-Quaternion2
75 | IMU-RLA-Quaternion3
76 | IMU-RLA-Quaternion4
77 | IMU-LUA-accX
78 | IMU-LUA-accY
79 | IMU-LUA-accZ
80 | IMU-LUA-gyroX
81 | IMU-LUA-gyroY
82 | IMU-LUA-gyroZ
83 | IMU-LUA-magneticX
84 | IMU-LUA-magneticY
85 | IMU-LUA-magneticZ
86 | IMU-LUA-Quaternion1
87 | IMU-LUA-Quaternion2
88 | IMU-LUA-Quaternion3
89 | IMU-LUA-Quaternion4
90 | IMU-LLA-accX
91 | IMU-LLA-accY
92 | IMU-LLA-accZ
93 | IMU-LLA-gyroX
94 | IMU-LLA-gyroY
95 | IMU-LLA-gyroZ
96 | IMU-LLA-magneticX
97 | IMU-LLA-magneticY
98 | IMU-LLA-magneticZ
99 | IMU-LLA-Quaternion1
100 | IMU-LLA-Quaternion2
101 | IMU-LLA-Quaternion3
102 | IMU-LLA-Quaternion4
103 | IMU-L-SHOE-EuX
104 | IMU-L-SHOE-EuY
105 | IMU-L-SHOE-EuZ
106 | IMU-L-SHOE-Nav_Ax
107 | IMU-L-SHOE-Nav_Ay
108 | IMU-L-SHOE-Nav_Az
109 | IMU-L-SHOE-Body_Ax
110 | IMU-L-SHOE-Body_Ay
111 | IMU-L-SHOE-Body_Az
112 | IMU-L-SHOE-AngVelBodyFrameX
113 | IMU-L-SHOE-AngVelBodyFrameY
114 | IMU-L-SHOE-AngVelBodyFrameZ
115 | IMU-L-SHOE-AngVelNavFrameX
116 | IMU-L-SHOE-AngVelNavFrameY
117 | IMU-L-SHOE-AngVelNavFrameZ
118 | IMU-L-SHOE-Compass
119 | IMU-R-SHOE-EuX
120 | IMU-R-SHOE-EuY
121 | IMU-R-SHOE-EuZ
122 | IMU-R-SHOE-Nav_Ax
123 | IMU-R-SHOE-Nav_Ay
124 | IMU-R-SHOE-Nav_Az
125 | IMU-R-SHOE-Body_Ax
126 | IMU-R-SHOE-Body_Ay
127 | IMU-R-SHOE-Body_Az
128 | IMU-R-SHOE-AngVelBodyFrameX
129 | IMU-R-SHOE-AngVelBodyFrameY
130 | IMU-R-SHOE-AngVelBodyFrameZ
131 | IMU-R-SHOE-AngVelNavFrameX
132 | IMU-R-SHOE-AngVelNavFrameY
133 | IMU-R-SHOE-AngVelNavFrameZ
134 | IMU-R-SHOE-Compass
135 | Acc-CUP-accX
136 | Acc-CUP-accY
137 | Acc-CUP-accZ
138 | Acc-CUP-gyroX
139 | Acc-CUP-gyroY
140 | Acc-SALAMI-accX
141 | Acc-SALAMI-accY
142 | Acc-SALAMI-accZ
143 | Acc-SALAMI-gyroX
144 | Acc-SALAMI-gyroY
145 | Acc-WATER-accX
146 | Acc-WATER-accY
147 | Acc-WATER-accZ
148 | Acc-WATER-gyroX
149 | Acc-WATER-gyroY
150 | Acc-CHEESE-accX
151 | Acc-CHEESE-accY
152 | Acc-CHEESE-accZ
153 | Acc-CHEESE-gyroX
154 | Acc-CHEESE-gyroY
155 | Acc-BREAD-accX
156 | Acc-BREAD-accY
157 | Acc-BREAD-accZ
158 | Acc-BREAD-gyroX
159 | Acc-BREAD-gyroY
160 | Acc-KNIFE1-accX
161 | Acc-KNIFE1-accY
162 | Acc-KNIFE1-accZ
163 | Acc-KNIFE1-gyroX
164 | Acc-KNIFE1-gyroY
165 | Acc-MILK-accX
166 | Acc-MILK-accY
167 | Acc-MILK-accZ
168 | Acc-MILK-gyroX
169 | Acc-MILK-gyroY
170 | Acc-SPOON-accX
171 | Acc-SPOON-accY
172 | Acc-SPOON-accZ
173 | Acc-SPOON-gyroX
174 | Acc-SPOON-gyroY
175 | Acc-SUGAR-accX
176 | Acc-SUGAR-accY
177 | Acc-SUGAR-accZ
178 | Acc-SUGAR-gyroX
179 | Acc-SUGAR-gyroY
180 | Acc-KNIFE2-accX
181 | Acc-KNIFE2-accY
182 | Acc-KNIFE2-accZ
183 | Acc-KNIFE2-gyroX
184 | Acc-KNIFE2-gyroY
185 | Acc-PLATE-accX
186 | Acc-PLATE-accY
187 | Acc-PLATE-accZ
188 | Acc-PLATE-gyroX
189 | Acc-PLATE-gyroY
190 | Acc-GLASS-accX
191 | Acc-GLASS-accY
192 | Acc-GLASS-accZ
193 | Acc-GLASS-gyroX
194 | Acc-GLASS-gyroY
195 | REED-SWITCH-DISHWASHER-S1
196 | REED-SWITCH-FRIDGE-S3
197 | REED-SWITCH-FRIDGE-S2
198 | REED-SWITCH-FRIDGE-S1
199 | REED-SWITCH-MIDDLEDRAWER-S1
200 | REED-SWITCH-MIDDLEDRAWER-S2
201 | REED-SWITCH-MIDDLEDRAWER-S3
202 | REED-SWITCH-LOWERDRAWER-S3
203 | REED-SWITCH-LOWERDRAWER-S2
204 | REED-SWITCH-UPPERDRAWER
205 | REED-SWITCH-DISHWASHER-S3
206 | REED-SWITCH-LOWERDRAWER-S1
207 | REED-SWITCH-DISHWASHER-S2
208 | Acc-DOOR1-accX
209 | Acc-DOOR1-accY
210 | Acc-DOOR1-accZ
211 | Acc-LAZYCHAIR-accX
212 | Acc-LAZYCHAIR-accY
213 | Acc-LAZYCHAIR-accZ
214 | Acc-DOOR2-accX
215 | Acc-DOOR2-accY
216 | Acc-DOOR2-accZ
217 | Acc-DISHWASHER-accX
218 | Acc-DISHWASHER-accY
219 | Acc-DISHWASHER-accZ
220 | Acc-UPPERDRAWER-accX
221 | Acc-UPPERDRAWER-accY
222 | Acc-UPPERDRAWER-accZ
223 | Acc-LOWERDRAWER-accX
224 | Acc-LOWERDRAWER-accY
225 | Acc-LOWERDRAWER-accZ
226 | Acc-MIDDLEDRAWER-accX
227 | Acc-MIDDLEDRAWER-accY
228 | Acc-MIDDLEDRAWER-accZ
229 | Acc-FRIDGE-accX
230 | Acc-FRIDGE-accY
231 | Acc-FRIDGE-accZ
232 | LOCATION-TAG1-X
233 | LOCATION-TAG1-Y
234 | LOCATION-TAG1-Z
235 | LOCATION-TAG2-X
236 | LOCATION-TAG2-Y
237 | LOCATION-TAG2-Z
238 | LOCATION-TAG3-X
239 | LOCATION-TAG3-Y
240 | LOCATION-TAG3-Z
241 | LOCATION-TAG4-X
242 | LOCATION-TAG4-Y
243 | LOCATION-TAG4-Z
244 | Locomotion
245 | HL_Activity
246 | LL_Left_Arm
247 | LL_Left_Arm_Object
248 | LL_Right_Arm
249 | LL_Right_Arm_Object
250 | ML_Both_Arms
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/models.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Mon Jun 29 16:41:56 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | '''
9 | Apply different deep learning models to the Opportunity dataset.
10 | DNN, CNN and RNN are applied.
11 |
12 | '''
13 | import pandas as pd
14 | import numpy as np
15 | import matplotlib.pyplot as plt
16 | from scipy import stats
17 | import tensorflow as tf
18 | from sklearn import metrics
19 | import h5py
20 | import matplotlib.pyplot as plt
21 | from tensorflow.keras import regularizers
22 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization
23 | from tensorflow.keras.models import Model
24 | from tensorflow.keras.optimizers import Adam
25 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
26 | from sklearn.model_selection import train_test_split
27 | from sklearn.metrics import confusion_matrix
28 | import itertools
29 |
30 |
31 | class models():
32 | def __init__(self, path):
33 | self.path = path
34 |
35 |
36 | def read_h5(self):
37 | f = h5py.File(self.path, 'r') # use the path passed to the constructor
38 | X = f.get('inputs')
39 | y = f.get('labels')
40 | #print(type(X))
41 | #print(type(y))
42 | self.X = np.array(X)
43 | self.y = np.array(y)
44 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(self.X, self.y, test_size=0.4, random_state = 1)
45 |
46 | print("X = ", self.X.shape)
47 | print("y =",self.y.shape)
48 | print(set(self.y))
49 | #return X,y
50 |
51 | def cnn_model(self):
52 | # K = len(set(y_train))
53 | #print(K)
54 | K = len(set(self.y))
55 | #X = np.expand_dims(X, -1)
56 | self.x_train = np.expand_dims(self.x_train, -1)
57 | self.x_test = np.expand_dims(self.x_test,-1)
58 | #print(X)
59 | #print(X[0].shape)
60 | #i = Input(shape=X[0].shape)
61 | i = Input(shape=self.x_train[0].shape)
62 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
63 | x = BatchNormalization()(x)
64 | x = MaxPooling2D((2,2))(x)
65 | x = Dropout(0.2)(x)
66 | x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
67 | x = BatchNormalization()(x)
68 | x = Dropout(0.4)(x)
69 | x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
70 | x = BatchNormalization()(x)
71 | x = MaxPooling2D((2,2))(x)
72 | x = Dropout(0.2)(x)
73 | x = Flatten()(x)
74 | x = Dropout(0.2)(x)
75 | x = Dense(1024,activation = 'relu')(x)
76 | x = Dropout(0.2)(x)
77 | x = Dense(K, activation = 'softmax')(x)
78 | self.model = Model(i,x)
79 | self.model.compile(optimizer = Adam(lr=0.001),
80 | loss = 'sparse_categorical_crossentropy',
81 | metrics = ['accuracy'])
82 |
83 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
84 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
85 | print(self.model.summary())
86 | # Splitting the data ourselves beforehand works better than letting Keras do the splitting
87 | return self.r
88 |
89 | def dnn_model(self):
90 | # K = len(set(y_train))
91 | #print(K)
92 | K = len(set(self.y))
93 | print(self.x_train[0].shape)
94 | i = Input(shape=self.x_train[0].shape)
95 | x = Flatten()(i)
96 | x = Dense(128,activation = 'relu')(x)
97 | x = Dense(128,activation = 'relu')(x)
98 | x = Dropout(0.2)(x)
99 | x = Dense(256,activation = 'relu')(x)
100 | x = Dense(256,activation = 'relu')(x)
101 | x = Dense(256,activation = 'relu')(x)
102 | #x = Dropout(0.2)(x)
103 | x = Dense(1024,activation = 'relu')(x)
104 | x = Dense(K,activation = 'softmax')(x)
105 | self.model = Model(i,x)
106 | self.model.compile(optimizer = Adam(lr=0.001),
107 | loss = 'sparse_categorical_crossentropy',
108 | metrics = ['accuracy'])
109 |
110 | '''
111 | K = len(set(self.y))
112 | model = tf.keras.models.Sequential([
113 | tf.keras.layers.Flatten(input_shape=self.x_train[0].shape),
114 | tf.keras.layers.Dense(256, activation = 'relu'),
115 | tf.keras.layers.Dropout(0.5),
116 | tf.keras.layers.Dense(256, activation = 'relu'),
117 | tf.keras.layers.Dropout(0.2),
118 | tf.keras.layers.Dense(K,activation = 'softmax')
119 | ])
120 | model.compile(optimizer = Adam(lr=0.0005),
121 | loss = 'sparse_categorical_crossentropy',
122 | metrics = ['accuracy'])
123 | '''
124 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50 )
125 | print(self.model.summary())
126 | return self.r
127 |
128 |
129 | def rnn_model(self):
130 | K = len(set(self.y))
131 | i = Input(shape = self.x_train[0].shape)
132 | x = LSTM(256, return_sequences=True)(i)
133 | x = Dense(128,activation = 'relu')(x)
134 | x = GlobalMaxPooling1D()(x)
135 | x = Dense(K,activation = 'softmax')(x)
136 | self.model = Model(i,x)
137 | self.model.compile(optimizer = Adam(lr=0.001),
138 | loss = 'sparse_categorical_crossentropy',
139 | metrics = ['accuracy'])
140 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
141 | #self.r = model.fit(X, y, validation_split = 0.2, epochs = 10, batch_size = 32 )
142 | print(self.model.summary())
143 | return self.r
144 |
145 | def draw(self):
146 | f1 = plt.figure(1)
147 | plt.title('Loss')
148 | plt.plot(self.r.history['loss'], label = 'loss')
149 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
150 | plt.legend()
151 | f1.show()
152 |
153 | f2 = plt.figure(2)
154 | plt.plot(self.r.history['acc'], label = 'accuracy')
155 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
156 | plt.legend()
157 | f2.show()
158 |
159 | # summary, confusion matrix and heatmap
160 | def con_matrix(self):
161 | K = len(set(self.y_train))
162 | self.y_pred = self.model.predict(self.x_test).argmax(axis=1)
163 | cm = confusion_matrix(self.y_test,self.y_pred)
164 | self.plot_confusion_matrix(cm,list(range(K)))
165 |
166 |
167 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
168 | if normalize:
169 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
170 | print("Normalized confusion matrix")
171 | else:
172 | print("Confusion matrix, without normalization")
173 | print(cm)
174 | f3 = plt.figure(3)
175 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
176 | plt.title(title)
177 | plt.colorbar()
178 | tick_marks = np.arange(len(classes))
179 | plt.xticks(tick_marks, classes, rotation=45)
180 | plt.yticks(tick_marks, classes)
181 |
182 | fmt = '.2f' if normalize else 'd'
183 | thresh = cm.max()/2.
184 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
185 | plt.text(j, i, format(cm[i, j], fmt),
186 | horizontalalignment = "center",
187 | color = "white" if cm[i, j] > thresh else "black")
188 | plt.tight_layout()
189 | plt.ylabel('True label')
190 | plt.xlabel('predicted label')
191 | f3.show()
192 |
193 |
194 | if __name__ == "__main__":
195 | model_name = "rnn" # can be cnn/dnn/rnn
196 | loco = False # True is to use locomotion as labels. False is to use high level activities as labels
197 | path = ""
198 | if loco:
199 | path = "loco_2.h5"
200 | else:
201 | path = "hl_2.h5"
202 |
203 | oppo = models(path)
204 |
205 | print("read h5 file....")
206 | oppo.read_h5()
207 | if model_name == "cnn":
208 | oppo.cnn_model()
209 | elif model_name == "dnn":
210 | oppo.dnn_model()
211 | elif model_name == "rnn":
212 | oppo.rnn_model()
213 | oppo.draw()
214 | oppo.con_matrix()
215 |
216 |
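217 | 
218 | # --- Optional normalized confusion matrix (an illustrative sketch, not in the original script) ---
219 | # The class counts are uneven (see the confusion matrices in results/), so a
220 | # row-normalized matrix is often easier to read than raw counts. This reuses the
221 | # plot_confusion_matrix() method defined above and the predictions computed by
222 | # oppo.con_matrix() in the __main__ block.
223 | plt.close(3) # plot_confusion_matrix() always draws into figure 3, so clear the raw-count plot first
224 | cm_norm = confusion_matrix(oppo.y_test, oppo.y_pred)
225 | oppo.plot_confusion_matrix(cm_norm, list(range(len(set(oppo.y_train)))), normalize=True, title='Normalized confusion matrix')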
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/cnn/high_level_activity/cnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/cnn/high_level_activity/cnn_50ep_acc.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/cnn/high_level_activity/cnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/cnn/high_level_activity/cnn_50ep_cm.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/cnn/high_level_activity/cnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/cnn/high_level_activity/cnn_50ep_loss.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/cnn/locomotion/cnn_50_ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/cnn/locomotion/cnn_50_ep_cm.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/cnn/locomotion/cnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/cnn/locomotion/cnn_50ep_acc.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/cnn/locomotion/cnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/cnn/locomotion/cnn_50ep_loss.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_acc.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_cm.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_loss.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/high_level_activity/rnn_50ep_output.txt:
--------------------------------------------------------------------------------
1 | Train on 20508 samples, validate on 13673 samples
2 | Epoch 1/50
3 | 20508/20508 [==============================] - 130s 6ms/sample - loss: 0.8564 - acc: 0.6688 - val_loss: 0.7751 - val_acc: 0.6999
4 | Epoch 2/50
5 | 20508/20508 [==============================] - 127s 6ms/sample - loss: 0.7048 - acc: 0.7266 - val_loss: 0.6803 - val_acc: 0.7430
6 | Epoch 3/50
7 | 20508/20508 [==============================] - 134s 7ms/sample - loss: 0.6656 - acc: 0.7407 - val_loss: 0.5913 - val_acc: 0.7770
8 | Epoch 4/50
9 | 20508/20508 [==============================] - 116s 6ms/sample - loss: 0.6391 - acc: 0.7502 - val_loss: 0.6473 - val_acc: 0.7456
10 | Epoch 5/50
11 | 20508/20508 [==============================] - 119s 6ms/sample - loss: 0.6922 - acc: 0.7316 - val_loss: 0.8129 - val_acc: 0.7062
12 | Epoch 6/50
13 | 20508/20508 [==============================] - 119s 6ms/sample - loss: 0.7159 - acc: 0.7207 - val_loss: 0.7045 - val_acc: 0.7175
14 | Epoch 7/50
15 | 20508/20508 [==============================] - 107s 5ms/sample - loss: 0.6830 - acc: 0.7343 - val_loss: 0.7128 - val_acc: 0.7188
16 | Epoch 8/50
17 | 20508/20508 [==============================] - 104s 5ms/sample - loss: 0.6235 - acc: 0.7549 - val_loss: 0.6308 - val_acc: 0.7596
18 | Epoch 9/50
19 | 20508/20508 [==============================] - 104s 5ms/sample - loss: 0.6544 - acc: 0.7406 - val_loss: 0.6171 - val_acc: 0.7561
20 | Epoch 10/50
21 | 20508/20508 [==============================] - 122s 6ms/sample - loss: 0.6242 - acc: 0.7597 - val_loss: 0.6278 - val_acc: 0.7722
22 | Epoch 11/50
23 | 20508/20508 [==============================] - 124s 6ms/sample - loss: 0.6045 - acc: 0.7665 - val_loss: 0.6542 - val_acc: 0.7453
24 | Epoch 12/50
25 | 20508/20508 [==============================] - 128s 6ms/sample - loss: 0.6191 - acc: 0.7580 - val_loss: 0.6699 - val_acc: 0.7470
26 | Epoch 13/50
27 | 20508/20508 [==============================] - 115s 6ms/sample - loss: 0.6243 - acc: 0.7595 - val_loss: 0.6367 - val_acc: 0.7443
28 | Epoch 14/50
29 | 20508/20508 [==============================] - 116s 6ms/sample - loss: 0.6260 - acc: 0.7588 - val_loss: 0.6637 - val_acc: 0.7488
30 | Epoch 15/50
31 | 20508/20508 [==============================] - 118s 6ms/sample - loss: 0.6116 - acc: 0.7637 - val_loss: 0.5891 - val_acc: 0.7867
32 | Epoch 16/50
33 | 20508/20508 [==============================] - 122s 6ms/sample - loss: 0.6058 - acc: 0.7686 - val_loss: 0.6183 - val_acc: 0.7670
34 | Epoch 17/50
35 | 20508/20508 [==============================] - 121s 6ms/sample - loss: 0.5757 - acc: 0.7835 - val_loss: 0.5594 - val_acc: 0.7932
36 | Epoch 18/50
37 | 20508/20508 [==============================] - 116s 6ms/sample - loss: 0.5845 - acc: 0.7776 - val_loss: 0.6014 - val_acc: 0.7598
38 | Epoch 19/50
39 | 20508/20508 [==============================] - 126s 6ms/sample - loss: 0.5914 - acc: 0.7684 - val_loss: 0.6052 - val_acc: 0.7680
40 | Epoch 20/50
41 | 20508/20508 [==============================] - 115s 6ms/sample - loss: 0.6024 - acc: 0.7658 - val_loss: 0.6015 - val_acc: 0.7654
42 | Epoch 21/50
43 | 20508/20508 [==============================] - 114s 6ms/sample - loss: 0.5897 - acc: 0.7727 - val_loss: 0.5557 - val_acc: 0.7880
44 | Epoch 22/50
45 | 20508/20508 [==============================] - 119s 6ms/sample - loss: 0.5728 - acc: 0.7769 - val_loss: 0.5966 - val_acc: 0.7684
46 | Epoch 23/50
47 | 20508/20508 [==============================] - 122s 6ms/sample - loss: 0.5801 - acc: 0.7777 - val_loss: 0.6277 - val_acc: 0.7593
48 | Epoch 24/50
49 | 20508/20508 [==============================] - 120s 6ms/sample - loss: 0.5993 - acc: 0.7679 - val_loss: 0.6447 - val_acc: 0.7582
50 | Epoch 25/50
51 | 20508/20508 [==============================] - 112s 5ms/sample - loss: 0.6151 - acc: 0.7664 - val_loss: 0.6869 - val_acc: 0.7222
52 | Epoch 26/50
53 | 20508/20508 [==============================] - 110s 5ms/sample - loss: 0.6205 - acc: 0.7583 - val_loss: 0.6515 - val_acc: 0.7494
54 | Epoch 27/50
55 | 20508/20508 [==============================] - 115s 6ms/sample - loss: 0.6482 - acc: 0.7451 - val_loss: 0.6779 - val_acc: 0.7331
56 | Epoch 28/50
57 | 20508/20508 [==============================] - 114s 6ms/sample - loss: 0.6167 - acc: 0.7594 - val_loss: 0.5736 - val_acc: 0.7821
58 | Epoch 29/50
59 | 20508/20508 [==============================] - 116s 6ms/sample - loss: 0.5949 - acc: 0.7652 - val_loss: 0.5802 - val_acc: 0.7663
60 | Epoch 30/50
61 | 20508/20508 [==============================] - 123s 6ms/sample - loss: 0.5841 - acc: 0.7729 - val_loss: 0.5849 - val_acc: 0.7677
62 | Epoch 31/50
63 | 20508/20508 [==============================] - 107s 5ms/sample - loss: 0.5558 - acc: 0.7881 - val_loss: 0.5765 - val_acc: 0.7736
64 | Epoch 32/50
65 | 20508/20508 [==============================] - 103s 5ms/sample - loss: 0.5615 - acc: 0.7831 - val_loss: 0.5435 - val_acc: 0.7908
66 | Epoch 33/50
67 | 20508/20508 [==============================] - 105s 5ms/sample - loss: 0.5515 - acc: 0.7851 - val_loss: 0.5687 - val_acc: 0.7736
68 | Epoch 34/50
69 | 20508/20508 [==============================] - 105s 5ms/sample - loss: 0.5349 - acc: 0.7901 - val_loss: 0.5197 - val_acc: 0.8034
70 | Epoch 35/50
71 | 20508/20508 [==============================] - 104s 5ms/sample - loss: 0.5243 - acc: 0.7976 - val_loss: 0.5674 - val_acc: 0.7808
72 | Epoch 36/50
73 | 20508/20508 [==============================] - 115s 6ms/sample - loss: 0.5369 - acc: 0.7955 - val_loss: 0.5716 - val_acc: 0.7926
74 | Epoch 37/50
75 | 20508/20508 [==============================] - 106s 5ms/sample - loss: 0.5642 - acc: 0.7805 - val_loss: 0.6299 - val_acc: 0.7509
76 | Epoch 38/50
77 | 20508/20508 [==============================] - 107s 5ms/sample - loss: 0.5895 - acc: 0.7741 - val_loss: 0.5804 - val_acc: 0.7868
78 | Epoch 39/50
79 | 20508/20508 [==============================] - 107s 5ms/sample - loss: 0.5665 - acc: 0.7885 - val_loss: 0.5721 - val_acc: 0.7766
80 | Epoch 40/50
81 | 20508/20508 [==============================] - 103s 5ms/sample - loss: 0.5416 - acc: 0.7907 - val_loss: 0.5702 - val_acc: 0.7785
82 | Epoch 41/50
83 | 20508/20508 [==============================] - 109s 5ms/sample - loss: 0.5105 - acc: 0.8077 - val_loss: 0.5047 - val_acc: 0.8131
84 | Epoch 42/50
85 | 20508/20508 [==============================] - 106s 5ms/sample - loss: 0.5207 - acc: 0.7996 - val_loss: 0.5472 - val_acc: 0.7911
86 | Epoch 43/50
87 | 20508/20508 [==============================] - 106s 5ms/sample - loss: 0.5624 - acc: 0.7883 - val_loss: 0.5121 - val_acc: 0.8139
88 | Epoch 44/50
89 | 20508/20508 [==============================] - 112s 5ms/sample - loss: 0.5093 - acc: 0.8098 - val_loss: 0.5132 - val_acc: 0.8160
90 | Epoch 45/50
91 | 20508/20508 [==============================] - 120s 6ms/sample - loss: 0.4905 - acc: 0.8195 - val_loss: 0.5064 - val_acc: 0.8111
92 | Epoch 46/50
93 | 20508/20508 [==============================] - 111s 5ms/sample - loss: 0.5065 - acc: 0.8035 - val_loss: 0.5669 - val_acc: 0.7669
94 | Epoch 47/50
95 | 20508/20508 [==============================] - 119s 6ms/sample - loss: 0.5270 - acc: 0.7928 - val_loss: 0.5351 - val_acc: 0.7957
96 | Epoch 48/50
97 | 20508/20508 [==============================] - 127s 6ms/sample - loss: 0.5013 - acc: 0.8059 - val_loss: 0.5121 - val_acc: 0.7991
98 | Epoch 49/50
99 | 20508/20508 [==============================] - 118s 6ms/sample - loss: 0.4915 - acc: 0.8125 - val_loss: 0.5571 - val_acc: 0.7930
100 | Epoch 50/50
101 | 20508/20508 [==============================] - 121s 6ms/sample - loss: 0.5058 - acc: 0.8022 - val_loss: 0.5867 - val_acc: 0.7598
102 | _________________________________________________________________
103 | Layer (type) Output Shape Param #
104 | =================================================================
105 | input_6 (InputLayer) (None, 25, 220) 0
106 | _________________________________________________________________
107 | lstm_1 (LSTM) (None, 25, 256) 488448
108 | _________________________________________________________________
109 | dense_15 (Dense) (None, 25, 128) 32896
110 | _________________________________________________________________
111 | global_max_pooling1d_1 (Glob (None, 128) 0
112 | _________________________________________________________________
113 | dense_16 (Dense) (None, 5) 645
114 | =================================================================
115 | Total params: 521,989
116 | Trainable params: 521,989
117 | Non-trainable params: 0
118 | _________________________________________________________________
119 | None
120 | Confusion matrix, without normalization
121 | [[ 385 0 11 79 0]
122 | [ 0 659 704 29 1087]
123 | [ 0 67 3581 18 83]
124 | [ 6 41 113 1662 386]
125 | [ 0 129 390 141 4102]]
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_acc.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_cm.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_loss.png
--------------------------------------------------------------------------------
/Opportunity/OpportunityUCIDataset/results/rnn/locomotion/rnn_50ep_output.txt:
--------------------------------------------------------------------------------
1 | Epoch 1/50
2 | 18625/18625 [==============================] - 78s 4ms/sample - loss: 0.3531 - acc: 0.8763 - val_loss: 0.2700 - val_acc: 0.9100
3 | Epoch 2/50
4 | 18625/18625 [==============================] - 80s 4ms/sample - loss: 0.2838 - acc: 0.9029 - val_loss: 0.2617 - val_acc: 0.9114
5 | Epoch 3/50
6 | 18625/18625 [==============================] - 86s 5ms/sample - loss: 0.3007 - acc: 0.8925 - val_loss: 0.2768 - val_acc: 0.8982
7 | Epoch 4/50
8 | 18625/18625 [==============================] - 101s 5ms/sample - loss: 0.2704 - acc: 0.9054 - val_loss: 0.2571 - val_acc: 0.9116
9 | Epoch 5/50
10 | 18625/18625 [==============================] - 103s 6ms/sample - loss: 0.2764 - acc: 0.9047 - val_loss: 0.3100 - val_acc: 0.8991
11 | Epoch 6/50
12 | 18625/18625 [==============================] - 98s 5ms/sample - loss: 0.3109 - acc: 0.8923 - val_loss: 0.2606 - val_acc: 0.9117
13 | Epoch 7/50
14 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2635 - acc: 0.9099 - val_loss: 0.2397 - val_acc: 0.9166
15 | Epoch 8/50
16 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2544 - acc: 0.9113 - val_loss: 0.2839 - val_acc: 0.9018
17 | Epoch 9/50
18 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2725 - acc: 0.9052 - val_loss: 0.2353 - val_acc: 0.9167
19 | Epoch 10/50
20 | 18625/18625 [==============================] - 92s 5ms/sample - loss: 0.2468 - acc: 0.9144 - val_loss: 0.2551 - val_acc: 0.9117
21 | Epoch 11/50
22 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2601 - acc: 0.9060 - val_loss: 0.2406 - val_acc: 0.9164
23 | Epoch 12/50
24 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2771 - acc: 0.9038 - val_loss: 0.2731 - val_acc: 0.9000
25 | Epoch 13/50
26 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2787 - acc: 0.9021 - val_loss: 0.4054 - val_acc: 0.8642
27 | Epoch 14/50
28 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2885 - acc: 0.8961 - val_loss: 0.2351 - val_acc: 0.9111
29 | Epoch 15/50
30 | 18625/18625 [==============================] - 95s 5ms/sample - loss: 0.2397 - acc: 0.9152 - val_loss: 0.2282 - val_acc: 0.9176
31 | Epoch 16/50
32 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2315 - acc: 0.9174 - val_loss: 0.2389 - val_acc: 0.9156
33 | Epoch 17/50
34 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2212 - acc: 0.9217 - val_loss: 0.2123 - val_acc: 0.9222
35 | Epoch 18/50
36 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2386 - acc: 0.9144 - val_loss: 0.2246 - val_acc: 0.9216
37 | Epoch 19/50
38 | 18625/18625 [==============================] - 95s 5ms/sample - loss: 0.2351 - acc: 0.9177 - val_loss: 0.2258 - val_acc: 0.9203
39 | Epoch 20/50
40 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2472 - acc: 0.9099 - val_loss: 0.2479 - val_acc: 0.9082
41 | Epoch 21/50
42 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2548 - acc: 0.9064 - val_loss: 0.2401 - val_acc: 0.9147
43 | Epoch 22/50
44 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2349 - acc: 0.9173 - val_loss: 0.2301 - val_acc: 0.9156
45 | Epoch 23/50
46 | 18625/18625 [==============================] - 97s 5ms/sample - loss: 0.2310 - acc: 0.9185 - val_loss: 0.2188 - val_acc: 0.9202
47 | Epoch 24/50
48 | 18625/18625 [==============================] - 97s 5ms/sample - loss: 0.2334 - acc: 0.9144 - val_loss: 0.2477 - val_acc: 0.9082
49 | Epoch 25/50
50 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2394 - acc: 0.9136 - val_loss: 0.2373 - val_acc: 0.9187
51 | Epoch 26/50
52 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2346 - acc: 0.9159 - val_loss: 0.2535 - val_acc: 0.9075
53 | Epoch 27/50
54 | 18625/18625 [==============================] - 97s 5ms/sample - loss: 0.2204 - acc: 0.9219 - val_loss: 0.2179 - val_acc: 0.9201
55 | Epoch 28/50
56 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2210 - acc: 0.9192 - val_loss: 0.2412 - val_acc: 0.9099
57 | Epoch 29/50
58 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2279 - acc: 0.9152 - val_loss: 0.2207 - val_acc: 0.9196
59 | Epoch 30/50
60 | 18625/18625 [==============================] - 92s 5ms/sample - loss: 0.2161 - acc: 0.9252 - val_loss: 0.2118 - val_acc: 0.9295
61 | Epoch 31/50
62 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2130 - acc: 0.9243 - val_loss: 0.2137 - val_acc: 0.9228
63 | Epoch 32/50
64 | 18625/18625 [==============================] - 94s 5ms/sample - loss: 0.2177 - acc: 0.9233 - val_loss: 0.2208 - val_acc: 0.9233
65 | Epoch 33/50
66 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2499 - acc: 0.9106 - val_loss: 0.2456 - val_acc: 0.9118
67 | Epoch 34/50
68 | 18625/18625 [==============================] - 95s 5ms/sample - loss: 0.2507 - acc: 0.9117 - val_loss: 0.2307 - val_acc: 0.9195
69 | Epoch 35/50
70 | 18625/18625 [==============================] - 97s 5ms/sample - loss: 0.2349 - acc: 0.9164 - val_loss: 0.2547 - val_acc: 0.9084
71 | Epoch 36/50
72 | 18625/18625 [==============================] - 95s 5ms/sample - loss: 0.2392 - acc: 0.9137 - val_loss: 0.2207 - val_acc: 0.9215
73 | Epoch 37/50
74 | 18625/18625 [==============================] - 97s 5ms/sample - loss: 0.2220 - acc: 0.9216 - val_loss: 0.2288 - val_acc: 0.9164
75 | Epoch 38/50
76 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2269 - acc: 0.9192 - val_loss: 0.2113 - val_acc: 0.9251
77 | Epoch 39/50
78 | 18625/18625 [==============================] - 96s 5ms/sample - loss: 0.2214 - acc: 0.9224 - val_loss: 0.2357 - val_acc: 0.9197
79 | Epoch 40/50
80 | 18625/18625 [==============================] - 95s 5ms/sample - loss: 0.2392 - acc: 0.9163 - val_loss: 0.2412 - val_acc: 0.9161
81 | Epoch 41/50
82 | 18625/18625 [==============================] - 93s 5ms/sample - loss: 0.2344 - acc: 0.9166 - val_loss: 0.2411 - val_acc: 0.9141
83 | Epoch 42/50
84 | 18625/18625 [==============================] - 117s 6ms/sample - loss: 0.2417 - acc: 0.9146 - val_loss: 0.2594 - val_acc: 0.9121
85 | Epoch 43/50
86 | 18625/18625 [==============================] - 114s 6ms/sample - loss: 0.2476 - acc: 0.9153 - val_loss: 0.2373 - val_acc: 0.9162
87 | Epoch 44/50
88 | 18625/18625 [==============================] - 129s 7ms/sample - loss: 0.2404 - acc: 0.9140 - val_loss: 0.2364 - val_acc: 0.9182
89 | Epoch 45/50
90 | 18625/18625 [==============================] - 117s 6ms/sample - loss: 0.2263 - acc: 0.9225 - val_loss: 0.2154 - val_acc: 0.9258
91 | Epoch 46/50
92 | 18625/18625 [==============================] - 109s 6ms/sample - loss: 0.2210 - acc: 0.9205 - val_loss: 0.2166 - val_acc: 0.9255
93 | Epoch 47/50
94 | 18625/18625 [==============================] - 226s 12ms/sample - loss: 0.2189 - acc: 0.9223 - val_loss: 0.2205 - val_acc: 0.9213
95 | Epoch 48/50
96 | 18625/18625 [==============================] - 125s 7ms/sample - loss: 0.2128 - acc: 0.9226 - val_loss: 0.2038 - val_acc: 0.9269
97 | Epoch 49/50
98 | 18625/18625 [==============================] - 126s 7ms/sample - loss: 0.2122 - acc: 0.9234 - val_loss: 0.2197 - val_acc: 0.9238
99 | Epoch 50/50
100 | 18625/18625 [==============================] - 126s 7ms/sample - loss: 0.2149 - acc: 0.9249 - val_loss: 0.2088 - val_acc: 0.9287
101 | _________________________________________________________________
102 | Layer (type) Output Shape Param #
103 | =================================================================
104 | input_5 (InputLayer) (None, 25, 220) 0
105 | _________________________________________________________________
106 | lstm (LSTM) (None, 25, 256) 488448
107 | _________________________________________________________________
108 | dense_13 (Dense) (None, 25, 128) 32896
109 | _________________________________________________________________
110 | global_max_pooling1d (Global (None, 128) 0
111 | _________________________________________________________________
112 | dense_14 (Dense) (None, 4) 516
113 | =================================================================
114 | Total params: 521,860
115 | Trainable params: 521,860
116 | Non-trainable params: 0
117 | _________________________________________________________________
118 | None
119 | Confusion matrix, without normalization
120 | [[ 606 0 0 18]
121 | [ 0 5168 366 78]
122 | [ 0 328 2640 14]
123 | [ 7 40 34 3118]]
--------------------------------------------------------------------------------
/Opportunity/summary_table.md:
--------------------------------------------------------------------------------
1 | | activities | models | train accuracy | train loss | test accuracy | test loss |
2 | |------------|--------|----------------|------------|---------------|-----------|
3 | | high_level | CNN | 98.64% | 0.1549 | 99.37% | 0.1381 |
4 | | high_level | RNN | 80.22% | 0.5058 | 80.71% | 0.5867 |
5 | | locomotion | CNN | 97.23% | 0.1527 | 96.96% | 0.1826 |
6 | | locomotion | RNN | 92.49% | 0.2149 | 92.87% | 0.2088 |
7 |
8 | * training : validation = 6 : 4
9 | * 50 epochs run for each model
10 |
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/.gitignore:
--------------------------------------------------------------------------------
1 | Optional
2 | Protocol
3 | DataCollectionProtocol.pdf
4 | DescriptionOfActivities.pdf
5 | pamap.h5
6 | pamap1.h5
7 | pamap2.h5
8 | PerformedActivitiesSummary.pdf
9 | readme.pdf
10 | subjectInformation.pdf
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/ankle_acc_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/ankle_acc_1.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/chest_acc_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/chest_acc_1.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/dataProcessing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Fri Jun 26 23:09:38 2020
4 |
5 | """
6 |
7 | # This file is for PAMAP2 data processing
8 | from matplotlib import pyplot as plt
9 | import pandas as pd
10 | import numpy as np
11 | import math
12 | import h5py
13 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
14 |
15 | activityIDdict = {0: 'transient',
16 | 1: 'lying',# no change in index
17 | 2: 'sitting',# no change in index
18 | 3: 'standing',# no change in index
19 | 4: 'walking',# no change in index
20 | 5: 'running',# no change in index
21 | 6: 'cycling',# no change in index
22 | 7: 'Nordic_walking',# no change in index
23 | 9: 'watching_TV', # not in dataset
24 | 10: 'computer_work',# not in dataset
25 | 11: 'car driving', # not in dataset
26 | 12: 'ascending_stairs', # new index:8
27 | 13: 'descending_stairs', # new index:9
28 | 16: 'vacuum_cleaning', # new index:10
29 | 17: 'ironing', # new index:11
30 | 18: 'folding_laundry',# not in dataset
31 | 19: 'house_cleaning', # not in dataset
32 | 20: 'playing_soccer', # not in dataset
33 | 24: 'rope_jumping' # new index: 0
34 | }
35 | #{24:0,1:1,2:2,3:3,4:4,5:5,6:6,7:7,12:8,13:9,16:10,17:11}
36 |
37 | def read_files():
38 | list_of_files = ['./Protocol/subject101.dat',
39 | './Protocol/subject102.dat',
40 | './Protocol/subject103.dat',
41 | './Protocol/subject104.dat',
42 | './Protocol/subject105.dat',
43 | './Protocol/subject106.dat',
44 | './Protocol/subject107.dat',
45 | './Protocol/subject108.dat',
46 | './Protocol/subject109.dat' ]
47 |
48 | subjectID = [1,2,3,4,5,6,7,8,9]
49 |
50 |
51 |
52 | colNames = ["timestamp", "activityID","heartrate"]
53 |
54 | IMUhand = ['handTemperature',
55 | 'handAcc16_1', 'handAcc16_2', 'handAcc16_3',
56 | 'handAcc6_1', 'handAcc6_2', 'handAcc6_3',
57 | 'handGyro1', 'handGyro2', 'handGyro3',
58 | 'handMagne1', 'handMagne2', 'handMagne3',
59 | 'handOrientation1', 'handOrientation2', 'handOrientation3', 'handOrientation4']
60 |
61 | IMUchest = ['chestTemperature',
62 | 'chestAcc16_1', 'chestAcc16_2', 'chestAcc16_3',
63 | 'chestAcc6_1', 'chestAcc6_2', 'chestAcc6_3',
64 | 'chestGyro1', 'chestGyro2', 'chestGyro3',
65 | 'chestMagne1', 'chestMagne2', 'chestMagne3',
66 | 'chestOrientation1', 'chestOrientation2', 'chestOrientation3', 'chestOrientation4']
67 |
68 |
69 | IMUankle = ['ankleTemperature',
70 | 'ankleAcc16_1', 'ankleAcc16_2', 'ankleAcc16_3',
71 | 'ankleAcc6_1', 'ankleAcc6_2', 'ankleAcc6_3',
72 | 'ankleGyro1', 'ankleGyro2', 'ankleGyro3',
73 | 'ankleMagne1', 'ankleMagne2', 'ankleMagne3',
74 | 'ankleOrientation1', 'ankleOrientation2', 'ankleOrientation3', 'ankleOrientation4']
75 |
76 | columns = colNames + IMUhand + IMUchest + IMUankle
77 |
78 | dataCollection = pd.DataFrame()
79 | for file in list_of_files:
80 |         print(file, " is being read...")
81 | procData = pd.read_table(file, header=None, sep='\s+')
82 | procData.columns = columns
83 | procData['subject_id'] = int(file[-5])
84 | dataCollection = dataCollection.append(procData, ignore_index=True)
85 |
86 | #break; # for testing short version, need to delete later
87 |
88 | dataCollection.reset_index(drop=True, inplace=True)
89 |
90 | return dataCollection
91 |
92 | def scale(df):#pandas dataframe
93 | #scaler = MinMaxScaler()
94 | scaler = StandardScaler()
95 |     df.iloc[:,1:-1] = scaler.fit_transform(df.iloc[:,1:-1]) # scale feature columns of index [1:-1), leaving activityID and subject_id unscaled
96 | return df
97 |
98 |
99 | def dataCleaning(dataCollection):
100 | dataCollection = dataCollection.drop(['timestamp', 'handOrientation1', 'handOrientation2', 'handOrientation3', 'handOrientation4',
101 | 'chestOrientation1', 'chestOrientation2', 'chestOrientation3', 'chestOrientation4',
102 | 'ankleOrientation1', 'ankleOrientation2', 'ankleOrientation3', 'ankleOrientation4'],
103 | axis = 1) # removal of orientation columns as they are not needed
104 |     dataCollection = dataCollection.drop(dataCollection[dataCollection.activityID == 0].index) # remove rows with activityID 0 (transient activity, not used)
105 |     dataCollection = dataCollection.apply(pd.to_numeric, errors = 'coerce') # coerce non-numeric cells to NaN (dropped below)
106 | dataCollection = dataCollection.dropna()
107 | dataCollection = scale(dataCollection)
108 | #dataCollection = dataCollection.interpolate()
109 | #removal of any remaining NaN value cells by constructing new data points in known set of data points
110 | #for i in range(0,4):
111 | # dataCollection["heartrate"].iloc[i]=100 # only 4 cells are Nan value, change them manually
112 | print("data cleaned!")
113 | return dataCollection
114 |
115 | def reset_label(dataCollection):
116 | # Convert original labels {1, 2, 3, 4, 5, 6, 7, 12, 13, 16, 17, 24} to new labels.
117 | mapping = {24:0,1:1,2:2,3:3,4:4,5:5,6:6,7:7,12:8,13:9,16:10,17:11} # old activity Id to new activity Id
118 | for i in [24,12,13,16,17]:
119 | dataCollection.loc[dataCollection.activityID == i, 'activityID'] = mapping[i]
120 |
121 | return dataCollection
122 |
123 | def segment(data, window_size): # data is numpy array
124 | n = len(data)
125 | X = []
126 | y = []
127 | start = 0
128 | end = 0
129 | while start + window_size - 1 < n:
130 | end = start + window_size-1
131 |         if data[start][0] == data[end][0] and data[start][-1] == data[end][-1] : # if the window contains a single activity from a single subject
132 | X.append(data[start:(end+1),1:-1])
133 | y.append(data[start][0])
134 | start += window_size//2 # 50% overlap
135 |         else: # if the window spans different activities or different subjects, find the next start point
136 | while start + window_size-1 < n:
137 | if data[start][0] != data[start+1][0]:
138 | break
139 | start += 1
140 | start += 1
141 | print(np.asarray(X).shape, np.asarray(y).shape)
142 | return {'inputs' : np.asarray(X), 'labels': np.asarray(y,dtype=int)}
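# Note on the windowing above: with window_size = 25 the hop is 25 // 2 = 12 rows, so
# consecutive windows start at rows 0, 12, 24, ... and share 13 rows (roughly 50% overlap).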
143 |
144 | def downsize(data):# data is numpy array
145 | downsample_size = 3
146 | data = data[::downsample_size,:]
147 | return data
148 |
149 | def save_data(data,file_name): # save the data in h5 format
150 | f = h5py.File(file_name,'w')
151 | for key in data:
152 | print(key)
153 | f.create_dataset(key,data = data[key])
154 | f.close()
155 | print('Done.')
156 |
157 |
158 | def plot_series(df, colname, act, subject, start, end):
159 | unit='ms^-2'
160 | #pylim =(-25,25)
161 | #print(df.head())
162 | df1 = df[(df.activityID ==act) & (df.subject_id == subject)]
163 | if df1.shape[0] < 1:
164 | print("Didn't find the region. Please reset activityID and subject_id")
165 | return
166 | df_len = df1.shape[0]
167 | if df_len > start and df_len > end:
168 | df1 = df1[start:end]
169 | elif df_len > start and df_len <= end:
170 | df1 = df1[start:df_len]
171 | else:
172 | print("Out of boundary, please reset the start and end points")
173 | print(df1.shape)
174 | #print(df1.head(10))
175 | plottitle = colname +' - ' + str(act)
176 | #plotx = colname
177 | fig = df1[colname].plot()
178 | #print(df.index)
179 | #ax1 = df1.plot(x=df.index,y=plotx, color='r', figsize=(12,5), ylim=pylim)
180 | fig.set_title(plottitle)
181 | fig.set_xlabel('window')
182 | fig.set_ylabel(unit)
183 | #fig.show()
184 |
185 | #visualize the curve in a given window
186 | #with same subject and same activity
187 | #feat:[]
188 | #def visualize(act_id,)
189 |
190 |
191 |
192 | if __name__ == "__main__":
193 | file_name = 'pamap_scaled2.h5'
194 | window_size = 25
195 | data = read_files()
196 | data = dataCleaning(data)
197 | #plot_series(data,'handAcc16_1',1,1,400,500)
198 | #plot_series(data,'chestAcc16_1',1,1,400,500)
199 | #plot_series(data,'ankleAcc16_1',1,1,400,500)
200 | data = reset_label(data)
201 | numpy_data = data.to_numpy()
202 |     numpy_data = downsize(numpy_data) # keep every 3rd row (downsample to ~33%)
203 |
204 | segment_data = segment(numpy_data, window_size)
205 | save_data(segment_data, file_name)
206 |
207 |
208 |
209 |
210 |
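The windows written by save_data() can be loaded back with h5py. A minimal read-back sketch (illustration only), assuming the 'inputs'/'labels' keys created above and the pamap_scaled2.h5 file name used in __main__:

import h5py
import numpy as np

with h5py.File('pamap_scaled2.h5', 'r') as f:
    X = np.array(f.get('inputs'))   # (num_windows, 25, 40): 25-sample windows, 40 feature columns
    y = np.array(f.get('labels'))   # (num_windows,): activity labels 0-11 after reset_label()
print(X.shape, y.shape)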
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/handacc_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/handacc_1.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/pamap_scaled.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/pamap_scaled.h5
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/pamap_scaled2.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/pamap_scaled2.h5
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/cnn_res/cnn_50epo_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/cnn_res/cnn_50epo_accuracy.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/cnn_res/cnn_50epo_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/cnn_res/cnn_50epo_cm.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/cnn_res/cnn_50epo_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/cnn_res/cnn_50epo_loss.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_acc.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_cm.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_loss.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/dnn_res/dnn_50ep_output.txt:
--------------------------------------------------------------------------------
1 | Train on 32178 samples, validate on 21452 samples
2 | Epoch 1/50
3 | 32178/32178 [==============================] - 5s 150us/sample - loss: 0.7167 - acc: 0.7997 - val_loss: 0.3609 - val_acc: 0.8895
4 | Epoch 2/50
5 | 32178/32178 [==============================] - 4s 136us/sample - loss: 0.3272 - acc: 0.8968 - val_loss: 0.2982 - val_acc: 0.9059
6 | Epoch 3/50
7 | 32178/32178 [==============================] - 5s 150us/sample - loss: 0.2471 - acc: 0.9207 - val_loss: 0.2973 - val_acc: 0.9052
8 | Epoch 4/50
9 | 32178/32178 [==============================] - 6s 173us/sample - loss: 0.2240 - acc: 0.9265 - val_loss: 0.2389 - val_acc: 0.9227
10 | Epoch 5/50
11 | 32178/32178 [==============================] - 5s 152us/sample - loss: 0.2016 - acc: 0.9345 - val_loss: 0.2180 - val_acc: 0.9368
12 | Epoch 6/50
13 | 32178/32178 [==============================] - 5s 163us/sample - loss: 0.1802 - acc: 0.9437 - val_loss: 0.1825 - val_acc: 0.9440
14 | Epoch 7/50
15 | 32178/32178 [==============================] - 5s 166us/sample - loss: 0.1604 - acc: 0.9490 - val_loss: 0.1532 - val_acc: 0.9525
16 | Epoch 8/50
17 | 32178/32178 [==============================] - 5s 159us/sample - loss: 0.1558 - acc: 0.9491 - val_loss: 0.2483 - val_acc: 0.9331
18 | Epoch 9/50
19 | 32178/32178 [==============================] - 5s 154us/sample - loss: 0.1360 - acc: 0.9565 - val_loss: 0.2080 - val_acc: 0.9405
20 | Epoch 10/50
21 | 32178/32178 [==============================] - 5s 154us/sample - loss: 0.1231 - acc: 0.9616 - val_loss: 0.1959 - val_acc: 0.9478
22 | Epoch 11/50
23 | 32178/32178 [==============================] - 5s 167us/sample - loss: 0.1320 - acc: 0.9585 - val_loss: 0.1970 - val_acc: 0.9445
24 | Epoch 12/50
25 | 32178/32178 [==============================] - 5s 156us/sample - loss: 0.1115 - acc: 0.9647 - val_loss: 0.1321 - val_acc: 0.9600
26 | Epoch 13/50
27 | 32178/32178 [==============================] - 5s 153us/sample - loss: 0.1191 - acc: 0.9626 - val_loss: 0.1498 - val_acc: 0.9592
28 | Epoch 14/50
29 | 32178/32178 [==============================] - 5s 153us/sample - loss: 0.1148 - acc: 0.9638 - val_loss: 0.1554 - val_acc: 0.9576
30 | Epoch 15/50
31 | 32178/32178 [==============================] - 5s 154us/sample - loss: 0.1151 - acc: 0.9649 - val_loss: 0.1437 - val_acc: 0.9607
32 | Epoch 16/50
33 | 32178/32178 [==============================] - 5s 157us/sample - loss: 0.1073 - acc: 0.9682 - val_loss: 0.1727 - val_acc: 0.9502
34 | Epoch 17/50
35 | 32178/32178 [==============================] - 6s 180us/sample - loss: 0.0958 - acc: 0.9699 - val_loss: 0.1261 - val_acc: 0.9663
36 | Epoch 18/50
37 | 32178/32178 [==============================] - 7s 227us/sample - loss: 0.0927 - acc: 0.9704 - val_loss: 0.1489 - val_acc: 0.9607
38 | Epoch 19/50
39 | 32178/32178 [==============================] - 6s 187us/sample - loss: 0.1075 - acc: 0.9684 - val_loss: 0.1476 - val_acc: 0.9621
40 | Epoch 20/50
41 | 32178/32178 [==============================] - 7s 212us/sample - loss: 0.0926 - acc: 0.9711 - val_loss: 0.1411 - val_acc: 0.9649
42 | Epoch 21/50
43 | 32178/32178 [==============================] - 6s 186us/sample - loss: 0.0958 - acc: 0.9709 - val_loss: 0.1360 - val_acc: 0.9649
44 | Epoch 22/50
45 | 32178/32178 [==============================] - 6s 185us/sample - loss: 0.0848 - acc: 0.9737 - val_loss: 0.1632 - val_acc: 0.9620
46 | Epoch 23/50
47 | 32178/32178 [==============================] - 7s 204us/sample - loss: 0.0949 - acc: 0.9737 - val_loss: 0.1283 - val_acc: 0.9688
48 | Epoch 24/50
49 | 32178/32178 [==============================] - 8s 241us/sample - loss: 0.0862 - acc: 0.9765 - val_loss: 0.2207 - val_acc: 0.9461
50 | Epoch 25/50
51 | 32178/32178 [==============================] - 6s 188us/sample - loss: 0.0886 - acc: 0.9744 - val_loss: 0.1279 - val_acc: 0.9688
52 | Epoch 26/50
53 | 32178/32178 [==============================] - 6s 176us/sample - loss: 0.0801 - acc: 0.9764 - val_loss: 0.1580 - val_acc: 0.9632
54 | Epoch 27/50
55 | 32178/32178 [==============================] - 6s 189us/sample - loss: 0.1091 - acc: 0.9695 - val_loss: 0.1971 - val_acc: 0.9584
56 | Epoch 28/50
57 | 32178/32178 [==============================] - 6s 179us/sample - loss: 0.0806 - acc: 0.9779 - val_loss: 0.4314 - val_acc: 0.9263
58 | Epoch 29/50
59 | 32178/32178 [==============================] - 6s 180us/sample - loss: 0.1083 - acc: 0.9709 - val_loss: 0.1434 - val_acc: 0.9676
60 | Epoch 30/50
61 | 32178/32178 [==============================] - 6s 174us/sample - loss: 0.0800 - acc: 0.9787 - val_loss: 0.2239 - val_acc: 0.9570
62 | Epoch 31/50
63 | 32178/32178 [==============================] - 6s 178us/sample - loss: 0.1026 - acc: 0.9720 - val_loss: 0.1338 - val_acc: 0.9695
64 | Epoch 32/50
65 | 32178/32178 [==============================] - 6s 177us/sample - loss: 0.0826 - acc: 0.9776 - val_loss: 0.1960 - val_acc: 0.9535
66 | Epoch 33/50
67 | 32178/32178 [==============================] - 7s 221us/sample - loss: 0.0682 - acc: 0.9799 - val_loss: 0.1609 - val_acc: 0.9661
68 | Epoch 34/50
69 | 32178/32178 [==============================] - 6s 202us/sample - loss: 0.0770 - acc: 0.9787 - val_loss: 0.1398 - val_acc: 0.9696
70 | Epoch 35/50
71 | 32178/32178 [==============================] - 6s 196us/sample - loss: 0.0696 - acc: 0.9801 - val_loss: 0.1367 - val_acc: 0.9711
72 | Epoch 36/50
73 | 32178/32178 [==============================] - 6s 187us/sample - loss: 0.0825 - acc: 0.9782 - val_loss: 0.1501 - val_acc: 0.9674
74 | Epoch 37/50
75 | 32178/32178 [==============================] - 7s 216us/sample - loss: 0.0967 - acc: 0.9754 - val_loss: 0.1927 - val_acc: 0.9591
76 | Epoch 38/50
77 | 32178/32178 [==============================] - 6s 185us/sample - loss: 0.0802 - acc: 0.9777 - val_loss: 0.1723 - val_acc: 0.9649
78 | Epoch 39/50
79 | 32178/32178 [==============================] - 6s 187us/sample - loss: 0.0734 - acc: 0.9792 - val_loss: 0.2651 - val_acc: 0.9421
80 | Epoch 40/50
81 | 32178/32178 [==============================] - 7s 209us/sample - loss: 0.0729 - acc: 0.9799 - val_loss: 0.1780 - val_acc: 0.9621
82 | Epoch 41/50
83 | 32178/32178 [==============================] - 7s 206us/sample - loss: 0.0985 - acc: 0.9755 - val_loss: 0.1193 - val_acc: 0.9759
84 | Epoch 42/50
85 | 32178/32178 [==============================] - 7s 221us/sample - loss: 0.0718 - acc: 0.9810 - val_loss: 0.1387 - val_acc: 0.9694
86 | Epoch 43/50
87 | 32178/32178 [==============================] - 8s 259us/sample - loss: 0.0724 - acc: 0.9812 - val_loss: 0.1234 - val_acc: 0.9749
88 | Epoch 44/50
89 | 32178/32178 [==============================] - 6s 189us/sample - loss: 0.0751 - acc: 0.9797 - val_loss: 0.1709 - val_acc: 0.9652
90 | Epoch 45/50
91 | 32178/32178 [==============================] - 6s 198us/sample - loss: 0.1206 - acc: 0.9740 - val_loss: 0.1759 - val_acc: 0.9663
92 | Epoch 46/50
93 | 32178/32178 [==============================] - 7s 216us/sample - loss: 0.1110 - acc: 0.9744 - val_loss: 0.1395 - val_acc: 0.9701
94 | Epoch 47/50
95 | 32178/32178 [==============================] - 6s 195us/sample - loss: 0.0778 - acc: 0.9806 - val_loss: 0.1445 - val_acc: 0.9690
96 | Epoch 48/50
97 | 32178/32178 [==============================] - 6s 195us/sample - loss: 0.0591 - acc: 0.9836 - val_loss: 0.2389 - val_acc: 0.9523
98 | Epoch 49/50
99 | 32178/32178 [==============================] - 6s 199us/sample - loss: 0.0833 - acc: 0.9792 - val_loss: 0.1290 - val_acc: 0.9751
100 | Epoch 50/50
101 | 32178/32178 [==============================] - 6s 199us/sample - loss: 0.0650 - acc: 0.9831 - val_loss: 0.1793 - val_acc: 0.9683
102 | _________________________________________________________________
103 | Layer (type) Output Shape Param #
104 | =================================================================
105 | input_3 (InputLayer) (None, 25, 40) 0
106 | _________________________________________________________________
107 | flatten_2 (Flatten) (None, 1000) 0
108 | _________________________________________________________________
109 | dense_10 (Dense) (None, 128) 128128
110 | _________________________________________________________________
111 | dense_11 (Dense) (None, 256) 33024
112 | _________________________________________________________________
113 | dense_12 (Dense) (None, 256) 65792
114 | _________________________________________________________________
115 | dense_13 (Dense) (None, 128) 32896
116 | _________________________________________________________________
117 | dense_14 (Dense) (None, 12) 1548
118 | =================================================================
119 | Total params: 261,388
120 | Trainable params: 261,388
121 | Non-trainable params: 0
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/normalized_data/standard_cnn_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/normalized_data/standard_cnn_accuracy.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/normalized_data/standard_cnn_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/normalized_data/standard_cnn_cm.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/normalized_data/standard_cnn_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/normalized_data/standard_cnn_loss.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/rnn_res/rnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/rnn_res/rnn_50ep_acc.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/rnn_res/rnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/rnn_res/rnn_50ep_cm.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/results/rnn_res/rnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/PAMAP2/PAMAP2_Dataset/results/rnn_res/rnn_50ep_loss.png
--------------------------------------------------------------------------------
/PAMAP2/PAMAP2_Dataset/summary_table.md:
--------------------------------------------------------------------------------
1 | | models | train accuracy | train loss | test accuracy | test loss |
2 | |--------|----------------|------------|---------------|-----------|
3 | | CNN | 97.25% | 0.1647 | 97.47% | 0.1743 |
4 | | DNN | 98.31% | 0.0650 | 96.83% | 0.1793 |
5 | | RNN | 99.20% | 0.0227 | 98.28% | 0.0599 |
6 |
7 | * training : validation = 6 : 4
8 | * 50 epochs run for each model
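The figures in this table are the final-epoch metrics from each run's Keras history (the DNN row matches the Epoch 50/50 line of dnn_50ep_output.txt above). A minimal sketch of extracting them, assuming a History object r as returned by model.fit() in the model scripts (summary_row is a hypothetical helper, not part of the repo):

def summary_row(r):
    # last-epoch train/validation accuracy and loss from a Keras History object
    h = r.history
    return h['acc'][-1], h['loss'][-1], h['val_acc'][-1], h['val_loss'][-1]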
--------------------------------------------------------------------------------
/SHLDataset/.gitignore:
--------------------------------------------------------------------------------
1 | picture1
2 | picture2
3 | picture3
4 | SHLDataset_preview_v1
5 | video_output
6 | bag.h5
7 | hand.h5
8 | hip.h5
9 | image_for_fusion.h5
10 | torso.h5
11 | video.h5
12 |
--------------------------------------------------------------------------------
/SHLDataset/data_fusion2.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Thu Aug 20 21:24:39 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 | # Data fusion strategy 2
8 | # Merge the data during modeling: one CNN branch per input, branch outputs concatenated
9 | # Only motion data are fused here
10 |
11 | import pandas as pd
12 | import numpy as np
13 | import matplotlib.pyplot as plt
14 | from scipy import stats
15 | import tensorflow as tf
16 | from sklearn import metrics
17 | import h5py
18 | import matplotlib.pyplot as plt
19 | from tensorflow.keras import regularizers
20 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization, concatenate
21 | from tensorflow.keras.models import Model
22 | from tensorflow.keras.optimizers import Adam, SGD
23 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
24 | from sklearn.model_selection import train_test_split
25 | from sklearn.metrics import confusion_matrix
26 | import itertools
27 | from keras.utils.vis_utils import plot_model
28 |
29 | class models():
30 | #def __init__(self):
31 |
32 |
33 | def read_h5(self, path_array):
34 | split_array = []
35 | l = len(path_array)
36 | for i, path in enumerate(path_array):
37 | f = h5py.File(path, 'r')
38 | X = f.get('inputs')
39 | y = f.get('labels')
40 |
41 | X = np.array(X)
42 | y = np.array(y)
43 | split_array.append(X) # add X to array for split
44 | if i == l - 1:
45 | split_array.append(y) # add y to the last
46 |
47 | self.split = train_test_split(*split_array,test_size=0.2, random_state = 1)
48 | '''
49 | print(len(split))
50 | print(split[0].shape) # data1_train_x
51 | print(split[1].shape) # data1_test_x
52 | print(split[2].shape) # data2_train_x
53 | print(split[3].shape) # data2_test_x
54 | print(split[4].shape) # y_train
55 | print(split[5].shape) # y_test
56 | '''
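        # In general, for n input files train_test_split(*split_array) returns 2*(n+1) arrays:
        # even indices (0, 2, ...) are the training inputs, odd indices are the test inputs,
        # split[-2] is y_train and split[-1] is y_test. The `i % 2 == 0` / `i % 2 != 0`
        # filters in merge_models() below rely on this ordering.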
57 | return self.split
58 |
59 | # K is the number of classes
60 | def create_cnn(self, input_shape, K):
61 | i = Input(shape = input_shape)
62 | x = Conv2D(16, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
63 | x = BatchNormalization()(x)
64 | #x = MaxPooling2D((2,2))(x)
65 | x = Dropout(0.2)(x)
66 |
67 | #x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
68 | #x = BatchNormalization()(x)
69 | #x = MaxPooling2D((2,2))(x)
70 | #x = Dropout(0.2)(x)
71 | #x = Conv2D(256, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
72 | #x = BatchNormalization()(x)
73 | #x = MaxPooling2D((2,2))(x)
74 | #x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
75 | #x = BatchNormalization()(x)
76 | x = Flatten()(x)
77 | x = Dropout(0.2)(x)
78 | x = Dense(128,activation = 'relu')(x)
79 | x = BatchNormalization()(x)
80 | x = Dropout(0.2)(x)
81 | x = Dense(K,activation = 'relu')(x)
82 | model = Model(i, x)
83 | return model
84 |
85 | # merge n cnn models
86 | def merge_models(self,n):
87 | input_shape = np.expand_dims(self.split[0], -1)[0].shape
88 | K = len(set(self.split[-2]))
89 | print(input_shape)
90 | cnns = [] # save all cnn models
91 | for i in range(n):
92 | cnn_i = self.create_cnn(input_shape,K)
93 | cnns.append(cnn_i)
94 | #cnn1 = self.create_cnn(input_shape, K)
95 | #cnn2 = self.create_cnn(input_shape, K)
96 | #combinedInput = concatenate([cnn1.output, cnn2.output])
97 | combinedInput = concatenate([c.output for c in cnns])
98 | x = Dense(K,activation='softmax')(combinedInput)
99 | self.mix_model = Model(inputs = [c.input for c in cnns], outputs = x)
100 | #model = Model(inputs = [cnn1.input, cnn2.input], outputs = x)
101 | self.mix_model.compile(optimizer = Adam(lr=0.0005),loss = 'sparse_categorical_crossentropy',metrics = ['accuracy'])
102 | self.r = self.mix_model.fit(x = [np.expand_dims(self.split[i],-1) for i in range(2*n) if i % 2 == 0],
103 | y = self.split[-2], validation_data = ([np.expand_dims(self.split[i],-1) for i in range(2*n) if i % 2 != 0],self.split[-1]),
104 | epochs = 50, batch_size = 128 )
105 | print(self.mix_model.summary())
106 | return self.r
107 |
108 | #r = model.fit(x = [np.expand_dims(self.split[0],-1),np.expand_dims(self.split[2],-1)], y = self.split[4], validation_data = ([np.expand_dims(self.split[1],-1),np.expand_dims(self.split[3],-1)],self.split[5]), epochs = 50, batch_size = 32 )
109 |
110 |
111 | def draw(self):
112 | f1 = plt.figure(1)
113 | plt.title('Loss')
114 | plt.plot(self.r.history['loss'], label = 'loss')
115 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
116 | plt.legend()
117 | f1.show()
118 |
119 | f2 = plt.figure(2)
120 | plt.plot(self.r.history['acc'], label = 'accuracy')
121 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
122 | plt.legend()
123 | f2.show()
124 |
125 | # summary, confusion matrix and heatmap
126 | def con_matrix(self,n):
127 | K = len(set(self.split[-2]))
128 | self.y_pred = self.mix_model.predict([np.expand_dims(self.split[i],-1) for i in range(2*n) if i % 2 != 0]).argmax(axis=1)
129 | cm = confusion_matrix(self.split[-1],self.y_pred)
130 | self.plot_confusion_matrix(cm,list(range(K)))
131 |
132 |
133 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
134 | if normalize:
135 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
136 | print("Normalized confusion matrix")
137 | else:
138 | print("Confusion matrix, without normalization")
139 | print(cm)
140 | f3 = plt.figure(3)
141 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
142 | plt.title(title)
143 | plt.colorbar()
144 | tick_marks = np.arange(len(classes))
145 | plt.xticks(tick_marks, classes, rotation=45)
146 | plt.yticks(tick_marks, classes)
147 |
148 | fmt = '.2f' if normalize else 'd'
149 | thresh = cm.max()/2.
150 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
151 | plt.text(j, i, format(cm[i, j], fmt),
152 | horizontalalignment = "center",
153 | color = "white" if cm[i, j] > thresh else "black")
154 | plt.tight_layout()
155 | plt.ylabel('True label')
156 | plt.xlabel('predicted label')
157 | f3.show()
158 |
159 |
160 |
161 | if __name__ == "__main__":
162 | model_name = "cnn" # can be cnn/dnn/rnn
163 | paths = ["./bag.h5", "./hand.h5", "./hip.h5","./torso.h5"]
164 | motion = models()
165 | print("read h5 file....")
166 | data_array = motion.read_h5(paths)
167 | motion.merge_models(len(paths))
168 | motion.draw()
169 | motion.con_matrix(len(paths))
170 | '''
171 | motion.merge(data_array)
172 |
173 |
174 | if model_name == "cnn":
175 | motion.cnn_model()
176 | elif model_name == "dnn":
177 | motion.dnn_model()
178 | elif model_name == "rnn":
179 | motion.rnn_model()
180 | motion.draw()
181 | motion.con_matrix()
182 | '''
--------------------------------------------------------------------------------
/SHLDataset/data_fusion3.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Fri Aug 21 21:38:36 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | # Deep learning on fused motion and video data
9 |
10 | import pandas as pd
11 | import numpy as np
12 | import matplotlib.pyplot as plt
13 | from scipy import stats
14 | import tensorflow as tf
15 | from sklearn import metrics
16 | import h5py
17 | import matplotlib.pyplot as plt
18 | from tensorflow.keras import regularizers
19 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization, concatenate
20 | from tensorflow.keras.models import Model
21 | from tensorflow.keras.optimizers import Adam, SGD
22 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
23 | from sklearn.model_selection import train_test_split
24 | from sklearn.metrics import confusion_matrix
25 | import itertools
26 | from keras.utils.vis_utils import plot_model
27 |
28 | class models():
29 | #def __init__(self):
30 |
31 |
32 | def read_h5(self, path_array):
33 | split_array = []
34 | l = len(path_array)
35 | for i, path in enumerate(path_array):
36 | f = h5py.File(path, 'r')
37 | X = f.get('inputs')
38 | y = f.get('labels')
39 |
40 | X = np.array(X)
41 | y = np.array(y)
42 | split_array.append(X) # add X to array for split
43 | if i == l - 1:
44 | split_array.append(y) # add y to the last
45 |
46 | self.split = train_test_split(*split_array,test_size=0.2, random_state = 1)
47 | '''
48 | print(len(split))
49 | print(split[0].shape) # data1_train_x
50 | print(split[1].shape) # data1_test_x
51 | print(split[2].shape) # data2_train_x
52 | print(split[3].shape) # data2_test_x
53 | print(split[4].shape) # y_train
54 | print(split[5].shape) # y_test
55 | '''
56 | return self.split
57 |
58 | # K is the number of classes
59 | def create_motion_cnn(self, input_shape, K):
60 | i = Input(shape = input_shape)
61 | x = Conv2D(16, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
62 | x = BatchNormalization()(x)
63 | #x = MaxPooling2D((2,2))(x)
64 | x = Dropout(0.2)(x)
65 |
66 | #x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
67 | #x = BatchNormalization()(x)
68 | #x = MaxPooling2D((2,2))(x)
69 | #x = Dropout(0.2)(x)
70 | #x = Conv2D(256, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
71 | #x = BatchNormalization()(x)
72 | #x = MaxPooling2D((2,2))(x)
73 | #x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
74 | #x = BatchNormalization()(x)
75 | x = Flatten()(x)
76 | x = Dropout(0.2)(x)
77 | x = Dense(128,activation = 'relu')(x)
78 | x = BatchNormalization()(x)
79 | x = Dropout(0.2)(x)
80 | x = Dense(K,activation = 'relu')(x)
81 | model = Model(i, x)
82 | return model
83 |
84 | def create_img_cnn(self, input_shape, K):
85 | i = Input(shape = input_shape)
86 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
87 | x = BatchNormalization()(x)
88 | x = MaxPooling2D((2,2))(x)
89 | x = Dropout(0.2)(x)
90 |
91 | x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
92 | x = BatchNormalization()(x)
93 | x = MaxPooling2D((2,2))(x)
94 | x = Dropout(0.4)(x)
95 |
96 | x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
97 | x = BatchNormalization()(x)
98 | #x = MaxPooling2D((2,2))(x)
99 | x = Dropout(0.5)(x)
100 | #x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
101 | #x = BatchNormalization()(x)
102 | x = Flatten()(x)
103 | #x = Dropout(0.2)(x)
104 | x = Dense(256,activation = 'relu')(x)
105 | x = BatchNormalization()(x)
106 | x = Dropout(0.2)(x)
107 | x = Dense(K,activation = 'relu')(x)
108 | model = Model(i, x)
109 | return model
110 | # merge n cnn models
111 | def merge_models(self,n):
112 | motion_input_shape = np.expand_dims(self.split[0], -1)[0].shape
113 | K = len(set(self.split[-2]))
114 | print(motion_input_shape)
115 | cnns = [] # save all cnn models
116 | for i in range(n-1):
117 | cnn_i = self.create_motion_cnn(motion_input_shape,K)
118 | cnns.append(cnn_i)
119 |         img_input_shape = np.expand_dims(self.split[-4], -1)[0].shape # split[-4] is the training split of the last input file, i.e. the image data
120 | print(img_input_shape)
121 | img_cnn = self.create_img_cnn(img_input_shape, K)
122 | cnns.append(img_cnn)
123 | #cnn1 = self.create_cnn(input_shape, K)
124 | #cnn2 = self.create_cnn(input_shape, K)
125 | #combinedInput = concatenate([cnn1.output, cnn2.output])
126 | combinedInput = concatenate([c.output for c in cnns])
127 | x = Dense(K,activation='softmax')(combinedInput)
128 | self.mix_model = Model(inputs = [c.input for c in cnns], outputs = x)
129 | #model = Model(inputs = [cnn1.input, cnn2.input], outputs = x)
130 | self.mix_model.compile(optimizer = Adam(lr=0.0005),loss = 'sparse_categorical_crossentropy',metrics = ['accuracy'])
131 | #self.r = self.mix_model.fit(x = [np.expand_dims(self.split[0],-1),self.split[]])
132 | self.r = self.mix_model.fit(x = [np.expand_dims(self.split[i],-1) for i in range(2*n) if i % 2 == 0],
133 | y = self.split[-2], validation_data = ([np.expand_dims(self.split[i],-1) for i in range(2*n) if i % 2 != 0],self.split[-1]),
134 | epochs = 50, batch_size = 256 )
135 | print(self.mix_model.summary())
136 | return self.r
137 |
138 | #r = model.fit(x = [np.expand_dims(self.split[0],-1),np.expand_dims(self.split[2],-1)], y = self.split[4], validation_data = ([np.expand_dims(self.split[1],-1),np.expand_dims(self.split[3],-1)],self.split[5]), epochs = 50, batch_size = 32 )
139 |
140 |
141 | def draw(self):
142 | f1 = plt.figure(1)
143 | plt.title('Loss')
144 | plt.plot(self.r.history['loss'], label = 'loss')
145 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
146 | plt.legend()
147 | f1.show()
148 |
149 | f2 = plt.figure(2)
150 | plt.plot(self.r.history['acc'], label = 'accuracy')
151 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
152 | plt.legend()
153 | f2.show()
154 |
155 | # summary, confusion matrix and heatmap
156 | def con_matrix(self,n):
157 | K = len(set(self.split[-2]))
158 | self.y_pred = self.mix_model.predict([np.expand_dims(self.split[i],-1) for i in range(2*n) if i % 2 != 0]).argmax(axis=1)
159 | cm = confusion_matrix(self.split[-1],self.y_pred)
160 | self.plot_confusion_matrix(cm,list(range(K)))
161 |
162 |
163 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
164 | if normalize:
165 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
166 | print("Normalized confusion matrix")
167 | else:
168 | print("Confusion matrix, without normalization")
169 | print(cm)
170 | f3 = plt.figure(3)
171 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
172 | plt.title(title)
173 | plt.colorbar()
174 | tick_marks = np.arange(len(classes))
175 | plt.xticks(tick_marks, classes, rotation=45)
176 | plt.yticks(tick_marks, classes)
177 |
178 | fmt = '.2f' if normalize else 'd'
179 | thresh = cm.max()/2.
180 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
181 | plt.text(j, i, format(cm[i, j], fmt),
182 | horizontalalignment = "center",
183 | color = "white" if cm[i, j] > thresh else "black")
184 | plt.tight_layout()
185 | plt.ylabel('True label')
186 | plt.xlabel('predicted label')
187 | f3.show()
188 |
189 |
190 |
191 | if __name__ == "__main__":
192 | model_name = "cnn" # can be cnn/dnn/rnn
193 | paths = ["./bag.h5","./image_for_fusion.h5"] # a motion data fuses with video data
194 | #paths = ["./bag.h5", "./hand.h5", "./hip.h5","./torso.h5", "./image_for_fusion.h5"]
195 | mix = models()
196 | print("read h5 file....")
197 | data_array = mix.read_h5(paths)
198 | mix.merge_models(len(paths))
199 | mix.draw()
200 | mix.con_matrix(len(paths))
201 |
202 |
--------------------------------------------------------------------------------
/SHLDataset/figures/FFNN (4).png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/FFNN (4).png
--------------------------------------------------------------------------------
/SHLDataset/figures/RNN2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/RNN2.png
--------------------------------------------------------------------------------
/SHLDataset/figures/cnn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/cnn.png
--------------------------------------------------------------------------------
/SHLDataset/figures/datafusion_loss_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/datafusion_loss_acc.png
--------------------------------------------------------------------------------
/SHLDataset/figures/dataprocessing.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/dataprocessing.png
--------------------------------------------------------------------------------
/SHLDataset/figures/early_late_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/early_late_cm.png
--------------------------------------------------------------------------------
/SHLDataset/figures/earlyfusion.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/earlyfusion.png
--------------------------------------------------------------------------------
/SHLDataset/figures/earlyvslate.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/earlyvslate.png
--------------------------------------------------------------------------------
/SHLDataset/figures/fusion_cm.drawio:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/SHLDataset/figures/fusion_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/fusion_cm.png
--------------------------------------------------------------------------------
/SHLDataset/figures/fusion_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/fusion_model.png
--------------------------------------------------------------------------------
/SHLDataset/figures/latefusion.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/latefusion.png
--------------------------------------------------------------------------------
/SHLDataset/figures/sensors.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/sensors.jpg
--------------------------------------------------------------------------------
/SHLDataset/figures/three_models.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/figures/three_models.png
--------------------------------------------------------------------------------
/SHLDataset/fusion_output/bag_video/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/fusion_output/bag_video/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/fusion_output/bag_video/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/fusion_output/bag_video/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/fusion_output/bag_video/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/fusion_output/bag_video/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/fusion_output/phone_and_video/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/fusion_output/phone_and_video/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/fusion_output/phone_and_video/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/fusion_output/phone_and_video/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/fusion_output/phone_and_video/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/fusion_output/phone_and_video/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/image_processing_fusion.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Fri Aug 21 13:25:27 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | #image processing for data fusion
9 | import numpy as np
10 | import h5py
11 | import math
12 | from keras.preprocessing.image import load_img, img_to_array
13 | from collections import Counter
14 |
15 | # The four *_Motion.txt files share the same timestamps, so they do not need to be synchronized with one another.
16 | # The motion data are not synchronized with the video frames, so the first step is to map a frame number to every timestamp in the motion data.
17 | # Based on the dataset documentation, ts = offset1 + speedup * tv
18 | # fps = 0.5 (one frame every two seconds), so frame0 = 0 ms, frame1 = 2000 ms, frame2 = 4000 ms, ..., frameN = 2000*N ms
19 | # Hence ts = offset1 + speedup * frameNo * 2000  =>  frameNo = (ts - offset1) / speedup / 2000
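# Worked example with hypothetical numbers (illustration only): if videooffset.txt gave
# offset1 = 100000 ms and videospeedup.txt gave speedup = 1, a motion timestamp
# ts = 108000 ms would map to frameNo = (108000 - 100000) / 1 / 2000 = 4, i.e. frame4.jpg.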
20 | def add_frame():
21 |
22 | bag_motion = ["./SHLDataset_preview_v1/User1/220617/Bag_Motion.txt","./SHLDataset_preview_v1/User1/260617/Bag_Motion.txt","./SHLDataset_preview_v1/User1/270617/Bag_Motion.txt"]
23 | labels = ["./SHLDataset_preview_v1/User1/220617/Label.txt","./SHLDataset_preview_v1/User1/260617/Label.txt","./SHLDataset_preview_v1/User1/270617/Label.txt"]
24 | offset_paths = ["./SHLDataset_preview_v1/User1/220617/videooffset.txt","./SHLDataset_preview_v1/User1/260617/videooffset.txt","./SHLDataset_preview_v1/User1/270617/videooffset.txt"]
25 | speedup_paths = ["./SHLDataset_preview_v1/User1/220617/videospeedup.txt","./SHLDataset_preview_v1/User1/260617/videospeedup.txt","./SHLDataset_preview_v1/User1/270617/videospeedup.txt"]
26 |
27 | data = np.array([])
28 | for i, file in enumerate(bag_motion):
29 | np_motion = np.loadtxt(file, usecols = [0]).reshape(-1,1)
30 | np_motion = downsize(np_motion).astype(np.int64)
31 |
32 | # attach frame No
33 | # get offset1 and speedup value
34 | offset_path = offset_paths[i]
35 | speedup_path = speedup_paths[i]
36 | offset1 = get_offset1(offset_path)
37 | speedup = get_speedup(speedup_path)
38 | frame_No = np.apply_along_axis(lambda x: int((x[0] - offset1)/speedup/2000), 1, np_motion).reshape(-1,1)
39 |
40 | # attach label
41 | start = np_motion[0,0].astype(np.int64)
42 | end = np_motion[-1,0].astype(np.int64)
43 | label = labels[i]
44 | np_label = np.loadtxt(label)
45 | start_index = np.where(np_label == start)[0][0] # find the row index of start timestamp
46 |         end_index = np.where(np_label == end)[0][0] # find the row index of the end timestamp
47 | np_label = find_labels(np_label,start_index,end_index)
48 |
49 | folder_id = np.full(np_label.shape, i+1) # put folder_id in the last column
50 |
51 | concatenate = np.concatenate((np_motion,frame_No, np_label,folder_id),axis=1)
52 | if i == 0:
53 | data = concatenate
54 | else:
55 | data = np.concatenate((data,concatenate))
56 | #break; #testing
57 | print(data.shape)
58 | return data
59 |
60 | #parse the videooffset.txt
61 | def get_offset1(file):
62 | f = open(file,'r')
63 | line = f.readline()
64 | p = line.split(" ")
65 | offset1 = int(float(p[0].rstrip("\n")))
66 | f.close()
67 | return offset1
68 |
69 | #parse the videospeedup.txt
70 | def get_speedup(file):
71 | f = open(file,'r')
72 | speedup = int(f.readline())
73 | f.close()
74 | return speedup
75 |
76 |
77 | def find_labels(labels,start_index, end_index):
78 | interval = 10
79 |     label_col_index = 1 # the 2nd column of Label.txt holds the activity label
80 | data = labels[start_index: end_index+1:interval, label_col_index].reshape(-1,1) # need to be the shape like (n,1), so that it can be concatenate later
81 | return data
82 |
83 | def downsize(data):# data is numpy array
84 | downsample_size = 10
85 | data = data[::downsample_size,:]
86 | return data
87 |
88 | def save_data(data,file_name): # save the data in h5 format
89 | f = h5py.File(file_name,'w')
90 | for key in data:
91 | print(key)
92 | f.create_dataset(key,data = data[key])
93 | f.close()
94 | print('Done.')
95 |
96 | # The same algorithm as motion data segmentation so that the images map the same labels with motion data
97 | def segment(data, window_size): # data is numpy array
98 | n = len(data)
99 | X = []
100 | y = []
101 | start = 0
102 | end = 0
103 | while start + window_size - 1 < n:
104 | end = start + window_size-1
105 | if data[start][-2]!=0 and data[start][-2] == data[end][-2] and data[start][-1] == data[end][-1] : # if the frame contains the same activity and from the same object
106 | X.append(data[start:(end+1),[1,-1]])
107 | y_label = data[start][-2]
108 | if y_label == 8:
109 | y.append(0) # change label 8 to 0
110 | else:
111 | y.append(data[start][-2])
112 | start += window_size//2 # 50% overlap
113 | else: # if the frame contains different activities or from different objects, find the next start point
114 | while start + window_size-1 < n:
115 | if data[start][-2] == 0 or data[start][-2] != data[start+1][-2]:
116 | break
117 | start += 1
118 | start += 1
119 | #print(np.asarray(X).shape)
120 | return (X, y)
121 |
122 | # load the images based on segmentation
123 | # feed in segmented data
124 | # i.e. (X,y)
125 | # get the frame_no from the first entry in the segment
126 | # Other strategy: find the most frequent frame_no in the segment
127 | def load_images(data):
128 | X = data[0]
129 | y = data[1]
130 | images = []
131 | for seg in X:
132 | #frame_num = int(seg[0][0])
133 | frame_num = int(Counter(seg[:,0]).most_common(1)[0][0])
134 | folder_id = int(seg[0][1])
135 | image_path = "./picture{}/frame{}.jpg".format(folder_id,frame_num)
136 | img = load_img(image_path,color_mode='grayscale',target_size=(100,100)) # make size smaller to save memory
137 | img = img_to_array(img).astype('float32')/255
138 | img = img.reshape(img.shape[0],img.shape[1])
139 | images.append(img)
140 | images = np.asarray(images)
141 | y = np.asarray(y, dtype=int)
142 | print(images.shape)
143 | print(y.shape)
144 | return {'inputs' : images, 'labels': y}
145 |
146 | #print(np.asarray(X).shape, np.asarray(y).shape)
147 | #return {'inputs' : np.asarray(X), 'labels': np.asarray(y,dtype=int)}
148 | if __name__ == "__main__":
149 | file_name = "image_for_fusion.h5"
150 | data = add_frame()
151 | seg = segment(data,20)
152 | img = load_images(seg)
153 | #data = prepare_dataset()
154 | save_data(img,file_name)
--------------------------------------------------------------------------------
/SHLDataset/motion_modeling.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Thu Aug 20 10:32:03 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | import pandas as pd
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 | from scipy import stats
12 | import tensorflow as tf
13 | from sklearn import metrics
14 | import h5py
15 | import matplotlib.pyplot as plt
16 | from tensorflow.keras import regularizers
17 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization
18 | from tensorflow.keras.models import Model
19 | from tensorflow.keras.optimizers import Adam, SGD
20 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
21 | from sklearn.model_selection import train_test_split
22 | from sklearn.metrics import confusion_matrix
23 | import itertools
24 | from keras.utils.vis_utils import plot_model
25 |
26 | class models():
27 | def __init__(self, path):
28 | self.path = path
29 |
30 |
31 | def read_h5(self):
32 |         f = h5py.File(self.path, 'r')
33 | X = f.get('inputs')
34 | y = f.get('labels')
35 | #print(type(X))
36 | #print(type(y))
37 |
38 | X = np.array(X)
39 | y = np.array(y)
40 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(X, y, test_size=0.2, random_state = 1)
41 |
42 | #print("X = ", X.shape)
43 | #print("y =",y.shape)
44 | #print(self.x_train.shape)
45 | #print(self.x_train.shape)
46 |
47 | '''
48 | self.X = np.array(X)
49 | self.y = np.array(y)
50 | print(self.X[0][0])
51 | self.data_scale()
52 | print(self.X[0][0])
53 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(self.X, self.y, test_size=0.2, random_state = 11)
54 |
55 | print("X = ", self.X.shape)
56 | print("y =",self.y.shape)
57 | '''
58 |
59 |
60 |
61 | def cnn_model(self):
62 | K = len(set(self.y_train))
63 | print(K)
64 | #print(K)
65 | #K = 12
66 | #X = np.expand_dims(X, -1)
67 | self.x_train = np.expand_dims(self.x_train, -1)
68 | self.x_test = np.expand_dims(self.x_test,-1)
69 | #print(X)
70 | #print(X[0].shape)
71 | #i = Input(shape=X[0].shape)
72 | i = Input(shape=self.x_train[0].shape)
73 | x = Conv2D(16, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
74 | x = BatchNormalization()(x)
75 | #x = MaxPooling2D((2,2))(x)
76 | x = Dropout(0.2)(x)
77 |
78 | #x = Conv2D(16, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
79 | #x = BatchNormalization()(x)
80 | #x = MaxPooling2D((2,2))(x)
81 | #x = Dropout(0.4)(x)
82 | #x = Conv2D(256, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
83 | #x = BatchNormalization()(x)
84 | #x = MaxPooling2D((2,2))(x)
85 | #x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
86 | #x = BatchNormalization()(x)
87 | x = Flatten()(x)
88 | x = Dropout(0.2)(x)
89 | x = Dense(128,activation = 'relu')(x)
90 | x = BatchNormalization()(x)
91 | x = Dropout(0.2)(x)
92 | x = Dense(K, activation = 'softmax')(x)
93 | self.model = Model(i,x)
94 | self.model.compile(optimizer = Adam(lr=0.0005),
95 | loss = 'sparse_categorical_crossentropy',
96 | metrics = ['accuracy'])
97 |
98 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
99 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 128 )
100 | print(self.model.summary())
101 | # Splitting the data ourselves beforehand works better than letting Keras do the validation split
102 | return self.r
103 |
104 | def dnn_model(self):
105 | K = len(set(self.y_train))
106 | #print(K)
107 | #K = 12
108 | print(self.x_train[0].shape)
109 | i = Input(shape=self.x_train[0].shape)
110 | x = Flatten()(i)
111 | x = Dense(128,activation = 'relu')(x)
112 | x = Dense(256,activation = 'relu')(x)
113 | x = Dense(256,activation = 'relu')(x)
114 | x = Dropout(0.2)(x)
115 | #x = Dense(256,activation = 'sigmoid')(x)
116 | #x = Dense(256,activation = 'sigmoid')(x)
117 | #x = Dense(256,activation = 'sigmoid')(x)
118 | #x = Dropout(0.2)(x)
119 | x = Dense(512,activation = 'relu')(x)
120 | x = Dense(K,activation = 'softmax')(x)
121 | self.model = Model(i,x)
122 | self.model.compile(optimizer = Adam(lr=0.0001),
123 | loss = 'sparse_categorical_crossentropy',
124 | metrics = ['accuracy'])
125 |
126 | '''
127 | model = tf.keras.models.Sequential([
128 | tf.keras.layers.Flatten(input_shape=self.x_train[0].shape),
129 | tf.keras.layers.Dense(256, activation = 'relu'),
130 | tf.keras.layers.Dropout(0.5),
131 | tf.keras.layers.Dense(256, activation = 'relu'),
132 | tf.keras.layers.Dropout(0.2),
133 | tf.keras.layers.Dense(K,activation = 'softmax')
134 | ])
135 | model.compile(optimizer = Adam(lr=0.0005),
136 | loss = 'sparse_categorical_crossentropy',
137 | metrics = ['accuracy'])
138 | '''
139 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 64 )
140 | print(self.model.summary())
141 | return self.r
142 |
143 |
144 | def rnn_model(self):
145 | K = len(set(self.y_train))
146 | i = Input(shape = self.x_train[0].shape)
147 | x = LSTM(128, return_sequences=True)(i)
148 | #x = LSTM(128, return_sequences=True)(i)
149 | x = Dense(128,activation = 'relu')(x)
150 | x = GlobalMaxPooling1D()(x)
151 | x = Dense(K,activation = 'softmax')(x)
152 | self.model = Model(i,x)
153 | self.model.compile(optimizer = Adam(lr=0.0005),
154 | loss = 'sparse_categorical_crossentropy',
155 | metrics = ['accuracy'])
156 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 64)
157 | #self.r = model.fit(X, y, validation_split = 0.2, epochs = 10, batch_size = 32 )
158 | print(self.model.summary())
159 | return self.r
160 |
161 | def draw(self):
162 | f1 = plt.figure(1)
163 | plt.title('Loss')
164 | plt.plot(self.r.history['loss'], label = 'loss')
165 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
166 | plt.legend()
167 | f1.show()
168 |
169 | f2 = plt.figure(2)
170 | plt.plot(self.r.history['acc'], label = 'accuracy')
171 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
172 | plt.legend()
173 | f2.show()
174 |
175 | # summary, confusion matrix and heatmap
176 | def con_matrix(self):
177 | K = len(set(self.y_train))
178 | self.y_pred = self.model.predict(self.x_test).argmax(axis=1)
179 | cm = confusion_matrix(self.y_test,self.y_pred)
180 | self.plot_confusion_matrix(cm,list(range(K)))
181 |
182 |
183 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
184 | if normalize:
185 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
186 | print("Normalized confusion matrix")
187 | else:
188 | print("Confusion matrix, without normalization")
189 | print(cm)
190 | f3 = plt.figure(3)
191 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
192 | plt.title(title)
193 | plt.colorbar()
194 | tick_marks = np.arange(len(classes))
195 | plt.xticks(tick_marks, classes, rotation=45)
196 | plt.yticks(tick_marks, classes)
197 |
198 | fmt = '.2f' if normalize else 'd'
199 | thresh = cm.max()/2.
200 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
201 | plt.text(j, i, format(cm[i, j], fmt),
202 | horizontalalignment = "center",
203 | color = "white" if cm[i, j] > thresh else "black")
204 | plt.tight_layout()
205 | plt.ylabel('True label')
206 | plt.xlabel('Predicted label')
207 | f3.show()
208 |
209 |
210 |
211 | if __name__ == "__main__":
212 | model_name = "cnn" # can be cnn/dnn/rnn
213 | path = "./bag.h5"
214 | motion = models(path)
215 | print("read h5 file....")
216 | motion.read_h5()
217 |
218 | if model_name == "cnn":
219 | motion.cnn_model()
220 | elif model_name == "dnn":
221 | motion.dnn_model()
222 | elif model_name == "rnn":
223 | motion.rnn_model()
224 | motion.draw()
225 | motion.con_matrix()
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/cnn/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/cnn/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/cnn/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/cnn/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/cnn/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/cnn/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/dnn/dnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/dnn/dnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/dnn/dnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/dnn/dnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/dnn/dnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/dnn/dnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/dnn/dnn_50_output.txt:
--------------------------------------------------------------------------------
1 | Epoch 1/50
2 | 48636/48636 [==============================] - 5s 113us/sample - loss: 10.2832 - acc: 0.3276 - val_loss: 7.3720 - val_acc: 0.4857
3 | Epoch 2/50
4 | 48636/48636 [==============================] - 5s 94us/sample - loss: 2.0765 - acc: 0.6685 - val_loss: 0.4243 - val_acc: 0.8717
5 | Epoch 3/50
6 | 48636/48636 [==============================] - 5s 97us/sample - loss: 0.4623 - acc: 0.8400 - val_loss: 0.3047 - val_acc: 0.8987
7 | Epoch 4/50
8 | 48636/48636 [==============================] - 5s 100us/sample - loss: 0.3686 - acc: 0.8721 - val_loss: 0.2598 - val_acc: 0.9122
9 | Epoch 5/50
10 | 48636/48636 [==============================] - 5s 110us/sample - loss: 0.3148 - acc: 0.8906 - val_loss: 0.2527 - val_acc: 0.9083
11 | Epoch 6/50
12 | 48636/48636 [==============================] - 5s 108us/sample - loss: 0.2835 - acc: 0.8995 - val_loss: 0.2202 - val_acc: 0.9278
13 | Epoch 7/50
14 | 48636/48636 [==============================] - 5s 112us/sample - loss: 0.2548 - acc: 0.9096 - val_loss: 0.1981 - val_acc: 0.9297
15 | Epoch 8/50
16 | 48636/48636 [==============================] - 5s 108us/sample - loss: 0.2322 - acc: 0.9195 - val_loss: 0.1722 - val_acc: 0.9426
17 | Epoch 9/50
18 | 48636/48636 [==============================] - 5s 109us/sample - loss: 0.2157 - acc: 0.9236 - val_loss: 0.1536 - val_acc: 0.9484
19 | Epoch 10/50
20 | 48636/48636 [==============================] - 5s 108us/sample - loss: 0.2014 - acc: 0.9293 - val_loss: 0.1855 - val_acc: 0.9402
21 | Epoch 11/50
22 | 48636/48636 [==============================] - 5s 108us/sample - loss: 0.1898 - acc: 0.9338 - val_loss: 0.1578 - val_acc: 0.9494
23 | Epoch 12/50
24 | 48636/48636 [==============================] - 5s 108us/sample - loss: 0.1820 - acc: 0.9348 - val_loss: 0.1819 - val_acc: 0.9329
25 | Epoch 13/50
26 | 48636/48636 [==============================] - 5s 109us/sample - loss: 0.1740 - acc: 0.9374 - val_loss: 0.1409 - val_acc: 0.9557
27 | Epoch 14/50
28 | 48636/48636 [==============================] - 5s 109us/sample - loss: 0.1663 - acc: 0.9410 - val_loss: 0.1359 - val_acc: 0.9528
29 | Epoch 15/50
30 | 48636/48636 [==============================] - 5s 109us/sample - loss: 0.1583 - acc: 0.9434 - val_loss: 0.1228 - val_acc: 0.9585
31 | Epoch 16/50
32 | 48636/48636 [==============================] - 5s 110us/sample - loss: 0.1521 - acc: 0.9459 - val_loss: 0.1818 - val_acc: 0.9317
33 | Epoch 17/50
34 | 48636/48636 [==============================] - 5s 110us/sample - loss: 0.1496 - acc: 0.9485 - val_loss: 0.1273 - val_acc: 0.9579
35 | Epoch 18/50
36 | 48636/48636 [==============================] - 5s 110us/sample - loss: 0.1426 - acc: 0.9497 - val_loss: 0.1189 - val_acc: 0.9571
37 | Epoch 19/50
38 | 48636/48636 [==============================] - 5s 112us/sample - loss: 0.1371 - acc: 0.9502 - val_loss: 0.1077 - val_acc: 0.9632
39 | Epoch 20/50
40 | 48636/48636 [==============================] - 5s 112us/sample - loss: 0.1370 - acc: 0.9508 - val_loss: 0.1067 - val_acc: 0.9643
41 | Epoch 21/50
42 | 48636/48636 [==============================] - 5s 111us/sample - loss: 0.1292 - acc: 0.9525 - val_loss: 0.1151 - val_acc: 0.9596
43 | Epoch 22/50
44 | 48636/48636 [==============================] - 5s 112us/sample - loss: 0.1275 - acc: 0.9547 - val_loss: 0.0953 - val_acc: 0.9686
45 | Epoch 23/50
46 | 48636/48636 [==============================] - 5s 112us/sample - loss: 0.1211 - acc: 0.9573 - val_loss: 0.1109 - val_acc: 0.9600
47 | Epoch 24/50
48 | 48636/48636 [==============================] - 5s 113us/sample - loss: 0.1191 - acc: 0.9562 - val_loss: 0.1313 - val_acc: 0.9562
49 | Epoch 25/50
50 | 48636/48636 [==============================] - 6s 113us/sample - loss: 0.1189 - acc: 0.9567 - val_loss: 0.1033 - val_acc: 0.9665
51 | Epoch 26/50
52 | 48636/48636 [==============================] - 6s 113us/sample - loss: 0.1108 - acc: 0.9604 - val_loss: 0.0939 - val_acc: 0.9686
53 | Epoch 27/50
54 | 48636/48636 [==============================] - 6s 114us/sample - loss: 0.1142 - acc: 0.9587 - val_loss: 0.0989 - val_acc: 0.9676
55 | Epoch 28/50
56 | 48636/48636 [==============================] - 6s 114us/sample - loss: 0.1088 - acc: 0.9599 - val_loss: 0.1069 - val_acc: 0.9619
57 | Epoch 29/50
58 | 48636/48636 [==============================] - 6s 115us/sample - loss: 0.1064 - acc: 0.9619 - val_loss: 0.0880 - val_acc: 0.9705
59 | Epoch 30/50
60 | 48636/48636 [==============================] - 6s 115us/sample - loss: 0.1052 - acc: 0.9623 - val_loss: 0.0868 - val_acc: 0.9733
61 | Epoch 31/50
62 | 48636/48636 [==============================] - 6s 118us/sample - loss: 0.1001 - acc: 0.9634 - val_loss: 0.1010 - val_acc: 0.9665
63 | Epoch 32/50
64 | 48636/48636 [==============================] - 6s 117us/sample - loss: 0.1023 - acc: 0.9619 - val_loss: 0.1019 - val_acc: 0.9662
65 | Epoch 33/50
66 | 48636/48636 [==============================] - 6s 117us/sample - loss: 0.1005 - acc: 0.9633 - val_loss: 0.0869 - val_acc: 0.9716
67 | Epoch 34/50
68 | 48636/48636 [==============================] - 6s 117us/sample - loss: 0.1020 - acc: 0.9625 - val_loss: 0.0819 - val_acc: 0.9734
69 | Epoch 35/50
70 | 48636/48636 [==============================] - 6s 118us/sample - loss: 0.0958 - acc: 0.9648 - val_loss: 0.0952 - val_acc: 0.9683
71 | Epoch 36/50
72 | 48636/48636 [==============================] - 6s 118us/sample - loss: 0.0998 - acc: 0.9635 - val_loss: 0.0902 - val_acc: 0.9689
73 | Epoch 37/50
74 | 48636/48636 [==============================] - 6s 118us/sample - loss: 0.0914 - acc: 0.9663 - val_loss: 0.0791 - val_acc: 0.9745
75 | Epoch 38/50
76 | 48636/48636 [==============================] - 6s 119us/sample - loss: 0.0926 - acc: 0.9669 - val_loss: 0.0868 - val_acc: 0.9704
77 | Epoch 39/50
78 | 48636/48636 [==============================] - 6s 119us/sample - loss: 0.0892 - acc: 0.9674 - val_loss: 0.0938 - val_acc: 0.9701
79 | Epoch 40/50
80 | 48636/48636 [==============================] - 6s 120us/sample - loss: 0.0895 - acc: 0.9670 - val_loss: 0.0794 - val_acc: 0.9731
81 | Epoch 41/50
82 | 48636/48636 [==============================] - 6s 120us/sample - loss: 0.0895 - acc: 0.9676 - val_loss: 0.0842 - val_acc: 0.9729
83 | Epoch 42/50
84 | 48636/48636 [==============================] - 6s 121us/sample - loss: 0.0876 - acc: 0.9681 - val_loss: 0.0808 - val_acc: 0.9743
85 | Epoch 43/50
86 | 48636/48636 [==============================] - 6s 125us/sample - loss: 0.0854 - acc: 0.9687 - val_loss: 0.0884 - val_acc: 0.9703
87 | Epoch 44/50
88 | 48636/48636 [==============================] - 6s 122us/sample - loss: 0.0842 - acc: 0.9695 - val_loss: 0.0790 - val_acc: 0.9733
89 | Epoch 45/50
90 | 48636/48636 [==============================] - 6s 123us/sample - loss: 0.0848 - acc: 0.9688 - val_loss: 0.0744 - val_acc: 0.9761
91 | Epoch 46/50
92 | 48636/48636 [==============================] - 6s 124us/sample - loss: 0.0862 - acc: 0.9681 - val_loss: 0.0742 - val_acc: 0.9752
93 | Epoch 47/50
94 | 48636/48636 [==============================] - 6s 124us/sample - loss: 0.0784 - acc: 0.9718 - val_loss: 0.0755 - val_acc: 0.9758
95 | Epoch 48/50
96 | 48636/48636 [==============================] - 6s 126us/sample - loss: 0.0862 - acc: 0.9691 - val_loss: 0.0828 - val_acc: 0.9726
97 | Epoch 49/50
98 | 48636/48636 [==============================] - 6s 125us/sample - loss: 0.0775 - acc: 0.9709 - val_loss: 0.0771 - val_acc: 0.9750
99 | Epoch 50/50
100 | 48636/48636 [==============================] - 6s 126us/sample - loss: 0.0826 - acc: 0.9697 - val_loss: 0.0907 - val_acc: 0.9710
101 | _________________________________________________________________
102 | Layer (type) Output Shape Param #
103 | =================================================================
104 | input_18 (InputLayer) (None, 20, 22) 0
105 | _________________________________________________________________
106 | flatten_16 (Flatten) (None, 440) 0
107 | _________________________________________________________________
108 | dense_68 (Dense) (None, 128) 56448
109 | _________________________________________________________________
110 | dense_69 (Dense) (None, 256) 33024
111 | _________________________________________________________________
112 | dense_70 (Dense) (None, 256) 65792
113 | _________________________________________________________________
114 | dropout_13 (Dropout) (None, 256) 0
115 | _________________________________________________________________
116 | dense_71 (Dense) (None, 512) 131584
117 | _________________________________________________________________
118 | dense_72 (Dense) (None, 8) 4104
119 | =================================================================
120 | Total params: 290,952
121 | Trainable params: 290,952
122 | Non-trainable params: 0
123 | _________________________________________________________________
124 | None
125 | Confusion matrix, without normalization
126 | [[1693 25 12 0 3 0 0 58]
127 | [ 13 1702 2 0 0 0 12 1]
128 | [ 34 7 1537 4 5 0 31 4]
129 | [ 0 0 2 268 0 0 0 0]
130 | [ 10 1 9 1 913 0 2 2]
131 | [ 0 0 1 0 0 2008 5 0]
132 | [ 1 5 16 0 4 10 1939 1]
133 | [ 66 3 2 0 0 1 0 1746]]
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/rnn/rnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/rnn/rnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/rnn/rnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/rnn/rnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/rnn/rnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/bag_motion/rnn/rnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/bag_motion/rnn/rnn_50_output.txt:
--------------------------------------------------------------------------------
1 | read h5 file....
2 | Train on 48636 samples, validate on 12159 samples
3 | Epoch 1/50
4 | 48636/48636 [==============================] - 31s 647us/sample - loss: 1.0728 - acc: 0.6437 - val_loss: 0.5603 - val_acc: 0.8177
5 | Epoch 2/50
6 | 48636/48636 [==============================] - 36s 736us/sample - loss: 0.4213 - acc: 0.8661 - val_loss: 0.3159 - val_acc: 0.8984
7 | Epoch 3/50
8 | 48636/48636 [==============================] - 43s 888us/sample - loss: 0.2737 - acc: 0.9140 - val_loss: 0.2621 - val_acc: 0.9175
9 | Epoch 4/50
10 | 48636/48636 [==============================] - 34s 694us/sample - loss: 0.2215 - acc: 0.9287 - val_loss: 0.1905 - val_acc: 0.9447
11 | Epoch 5/50
12 | 48636/48636 [==============================] - 34s 694us/sample - loss: 0.1941 - acc: 0.9376 - val_loss: 0.1804 - val_acc: 0.9410
13 | Epoch 6/50
14 | 48636/48636 [==============================] - 35s 723us/sample - loss: 0.1781 - acc: 0.9412 - val_loss: 0.1687 - val_acc: 0.9480
15 | Epoch 7/50
16 | 48636/48636 [==============================] - 36s 736us/sample - loss: 0.1670 - acc: 0.9451 - val_loss: 0.1493 - val_acc: 0.9511
17 | Epoch 8/50
18 | 48636/48636 [==============================] - 36s 750us/sample - loss: 0.1578 - acc: 0.9480 - val_loss: 0.1519 - val_acc: 0.9521
19 | Epoch 9/50
20 | 48636/48636 [==============================] - 39s 803us/sample - loss: 0.1479 - acc: 0.9517 - val_loss: 0.1445 - val_acc: 0.9538
21 | Epoch 10/50
22 | 48636/48636 [==============================] - 38s 785us/sample - loss: 0.1422 - acc: 0.9532 - val_loss: 0.1394 - val_acc: 0.9551
23 | Epoch 11/50
24 | 48636/48636 [==============================] - 37s 756us/sample - loss: 0.1356 - acc: 0.9560 - val_loss: 0.1288 - val_acc: 0.9590
25 | Epoch 12/50
26 | 48636/48636 [==============================] - 36s 741us/sample - loss: 0.1328 - acc: 0.9557 - val_loss: 0.1419 - val_acc: 0.9538
27 | Epoch 13/50
28 | 48636/48636 [==============================] - 38s 773us/sample - loss: 0.1252 - acc: 0.9583 - val_loss: 0.1234 - val_acc: 0.9606
29 | Epoch 14/50
30 | 48636/48636 [==============================] - 37s 759us/sample - loss: 0.1208 - acc: 0.9591 - val_loss: 0.1331 - val_acc: 0.9559
31 | Epoch 15/50
32 | 48636/48636 [==============================] - 37s 757us/sample - loss: 0.1158 - acc: 0.9610 - val_loss: 0.1192 - val_acc: 0.9610
33 | Epoch 16/50
34 | 48636/48636 [==============================] - 38s 789us/sample - loss: 0.1122 - acc: 0.9615 - val_loss: 0.1229 - val_acc: 0.9588
35 | Epoch 17/50
36 | 48636/48636 [==============================] - 37s 764us/sample - loss: 0.1106 - acc: 0.9626 - val_loss: 0.1017 - val_acc: 0.9681
37 | Epoch 18/50
38 | 48636/48636 [==============================] - 38s 784us/sample - loss: 0.1098 - acc: 0.9623 - val_loss: 0.1249 - val_acc: 0.9583
39 | Epoch 19/50
40 | 48636/48636 [==============================] - 37s 764us/sample - loss: 0.1053 - acc: 0.9641 - val_loss: 0.1013 - val_acc: 0.9655
41 | Epoch 20/50
42 | 48636/48636 [==============================] - 44s 895us/sample - loss: 0.1062 - acc: 0.9641 - val_loss: 0.1043 - val_acc: 0.9655
43 | Epoch 21/50
44 | 48636/48636 [==============================] - 41s 842us/sample - loss: 0.1020 - acc: 0.9653 - val_loss: 0.0994 - val_acc: 0.9675
45 | Epoch 22/50
46 | 48636/48636 [==============================] - 38s 773us/sample - loss: 0.0980 - acc: 0.9666 - val_loss: 0.1003 - val_acc: 0.9666
47 | Epoch 23/50
48 | 48636/48636 [==============================] - 40s 823us/sample - loss: 0.0969 - acc: 0.9663 - val_loss: 0.1143 - val_acc: 0.9585
49 | Epoch 24/50
50 | 48636/48636 [==============================] - 38s 781us/sample - loss: 0.0972 - acc: 0.9661 - val_loss: 0.0984 - val_acc: 0.9667
51 | Epoch 25/50
52 | 48636/48636 [==============================] - 41s 842us/sample - loss: 0.0926 - acc: 0.9676 - val_loss: 0.0923 - val_acc: 0.9678
53 | Epoch 26/50
54 | 48636/48636 [==============================] - 40s 823us/sample - loss: 0.0911 - acc: 0.9681 - val_loss: 0.0873 - val_acc: 0.9715
55 | Epoch 27/50
56 | 48636/48636 [==============================] - 37s 769us/sample - loss: 0.0926 - acc: 0.9679 - val_loss: 0.0977 - val_acc: 0.9658
57 | Epoch 28/50
58 | 48636/48636 [==============================] - 39s 798us/sample - loss: 0.0896 - acc: 0.9692 - val_loss: 0.0939 - val_acc: 0.9675
59 | Epoch 29/50
60 | 48636/48636 [==============================] - 38s 787us/sample - loss: 0.0909 - acc: 0.9686 - val_loss: 0.1051 - val_acc: 0.9647
61 | Epoch 30/50
62 | 48636/48636 [==============================] - 38s 785us/sample - loss: 0.0884 - acc: 0.9692 - val_loss: 0.1105 - val_acc: 0.9630
63 | Epoch 31/50
64 | 48636/48636 [==============================] - 38s 787us/sample - loss: 0.0841 - acc: 0.9708 - val_loss: 0.1063 - val_acc: 0.9632
65 | Epoch 32/50
66 | 48636/48636 [==============================] - 39s 794us/sample - loss: 0.0865 - acc: 0.9692 - val_loss: 0.0988 - val_acc: 0.9646
67 | Epoch 33/50
68 | 48636/48636 [==============================] - 38s 782us/sample - loss: 0.0792 - acc: 0.9718 - val_loss: 0.0864 - val_acc: 0.9711
69 | Epoch 34/50
70 | 48636/48636 [==============================] - 38s 792us/sample - loss: 0.0861 - acc: 0.9697 - val_loss: 0.1079 - val_acc: 0.9601
71 | Epoch 35/50
72 | 48636/48636 [==============================] - 39s 806us/sample - loss: 0.0813 - acc: 0.9712 - val_loss: 0.0897 - val_acc: 0.9692
73 | Epoch 36/50
74 | 48636/48636 [==============================] - 38s 791us/sample - loss: 0.0812 - acc: 0.9709 - val_loss: 0.1003 - val_acc: 0.9662
75 | Epoch 37/50
76 | 48636/48636 [==============================] - 44s 898us/sample - loss: 0.0762 - acc: 0.9730 - val_loss: 0.0824 - val_acc: 0.9725
77 | Epoch 38/50
78 | 48636/48636 [==============================] - 40s 814us/sample - loss: 0.0811 - acc: 0.9711 - val_loss: 0.0910 - val_acc: 0.9682
79 | Epoch 39/50
80 | 48636/48636 [==============================] - 38s 785us/sample - loss: 0.0781 - acc: 0.9721 - val_loss: 0.0857 - val_acc: 0.9711
81 | Epoch 40/50
82 | 48636/48636 [==============================] - 39s 804us/sample - loss: 0.0765 - acc: 0.9731 - val_loss: 0.0857 - val_acc: 0.9690
83 | Epoch 41/50
84 | 48636/48636 [==============================] - 40s 814us/sample - loss: 0.0767 - acc: 0.9721 - val_loss: 0.0828 - val_acc: 0.9732
85 | Epoch 42/50
86 | 48636/48636 [==============================] - 38s 773us/sample - loss: 0.0747 - acc: 0.9728 - val_loss: 0.0859 - val_acc: 0.9715
87 | Epoch 43/50
88 | 48636/48636 [==============================] - 39s 810us/sample - loss: 0.0750 - acc: 0.9744 - val_loss: 0.1001 - val_acc: 0.9658
89 | Epoch 44/50
90 | 48636/48636 [==============================] - 39s 797us/sample - loss: 0.0734 - acc: 0.9741 - val_loss: 0.0842 - val_acc: 0.9696
91 | Epoch 45/50
92 | 48636/48636 [==============================] - 39s 811us/sample - loss: 0.0726 - acc: 0.9742 - val_loss: 0.0832 - val_acc: 0.9711
93 | Epoch 46/50
94 | 48636/48636 [==============================] - 39s 792us/sample - loss: 0.0695 - acc: 0.9751 - val_loss: 0.0835 - val_acc: 0.9717
95 | Epoch 47/50
96 | 48636/48636 [==============================] - 39s 797us/sample - loss: 0.0706 - acc: 0.9746 - val_loss: 0.0716 - val_acc: 0.9761
97 | Epoch 48/50
98 | 48636/48636 [==============================] - 40s 831us/sample - loss: 0.0680 - acc: 0.9756 - val_loss: 0.0811 - val_acc: 0.9716
99 | Epoch 49/50
100 | 48636/48636 [==============================] - 38s 789us/sample - loss: 0.0684 - acc: 0.9755 - val_loss: 0.0827 - val_acc: 0.9708
101 | Epoch 50/50
102 | 48636/48636 [==============================] - 40s 821us/sample - loss: 0.0671 - acc: 0.9762 - val_loss: 0.0877 - val_acc: 0.9705
103 | _________________________________________________________________
104 | Layer (type) Output Shape Param #
105 | =================================================================
106 | input_64 (InputLayer) (None, 20, 22) 0
107 | _________________________________________________________________
108 | lstm_5 (LSTM) (None, 20, 128) 77312
109 | _________________________________________________________________
110 | dense_168 (Dense) (None, 20, 128) 16512
111 | _________________________________________________________________
112 | global_max_pooling1d_3 (Glob (None, 128) 0
113 | _________________________________________________________________
114 | dense_169 (Dense) (None, 8) 1032
115 | =================================================================
116 | Total params: 94,856
117 | Trainable params: 94,856
118 | Non-trainable params: 0
119 | _________________________________________________________________
120 | None
121 | Confusion matrix, without normalization
122 | [[1626 53 3 0 3 1 1 104]
123 | [ 6 1722 2 0 0 0 0 0]
124 | [ 3 0 1601 3 5 0 8 2]
125 | [ 0 0 2 268 0 0 0 0]
126 | [ 7 3 9 0 899 0 19 1]
127 | [ 0 0 1 0 0 2004 9 0]
128 | [ 3 11 5 0 0 19 1938 0]
129 | [ 55 17 3 0 0 1 0 1742]]
--------------------------------------------------------------------------------
/SHLDataset/motion_output/hand_motion/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/hand_motion/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/hand_motion/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/hand_motion/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/hand_motion/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/hand_motion/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/hip_motion/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/hip_motion/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/hip_motion/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/hip_motion/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/hip_motion/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/hip_motion/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/motion_fusion/earlyfusion/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/motion_fusion/earlyfusion/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/motion_fusion/earlyfusion/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/motion_fusion/earlyfusion/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/motion_fusion/earlyfusion/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/motion_fusion/earlyfusion/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/motion_fusion/latefusion/Figure_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/motion_fusion/latefusion/Figure_3.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/motion_fusion/latefusion/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/motion_fusion/latefusion/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/motion_fusion/latefusion/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/motion_fusion/latefusion/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/torso_motion/cnn_50_accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/torso_motion/cnn_50_accuracy.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/torso_motion/cnn_50_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/torso_motion/cnn_50_cm.png
--------------------------------------------------------------------------------
/SHLDataset/motion_output/torso_motion/cnn_50_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/SHLDataset/motion_output/torso_motion/cnn_50_loss.png
--------------------------------------------------------------------------------
/SHLDataset/motion_processing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Wed Aug 19 17:46:54 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | # motion preprocessing
9 | #1. downsize the data
10 | #2. find the label for each timestamp
11 | #3. remove null labels
12 | #4. slice windows
13 | import numpy as np
14 | import h5py
15 | # For each type of motion, read *_Motion.txt and Label.txt together
16 | # Find the label rows that correspond to the start and end timestamps of the motion table, truncate the label data to that range and concatenate it onto the motion data
17 | def read_motion(chose):
18 | motions =[]
19 | bag_motion = ["./SHLDataset_preview_v1/User1/220617/Bag_Motion.txt","./SHLDataset_preview_v1/User1/260617/Bag_Motion.txt","./SHLDataset_preview_v1/User1/270617/Bag_Motion.txt"]
20 | hand_motion = ["./SHLDataset_preview_v1/User1/220617/Hand_Motion.txt","./SHLDataset_preview_v1/User1/260617/Hand_Motion.txt","./SHLDataset_preview_v1/User1/270617/Hand_Motion.txt"]
21 | hip_motion = ["./SHLDataset_preview_v1/User1/220617/Hips_Motion.txt","./SHLDataset_preview_v1/User1/260617/Hips_Motion.txt","./SHLDataset_preview_v1/User1/270617/Hips_Motion.txt"]
22 | torso_motion = ["./SHLDataset_preview_v1/User1/220617/Torso_Motion.txt","./SHLDataset_preview_v1/User1/260617/Torso_Motion.txt","./SHLDataset_preview_v1/User1/270617/Torso_Motion.txt"]
23 | labels = ["./SHLDataset_preview_v1/User1/220617/Label.txt","./SHLDataset_preview_v1/User1/260617/Label.txt","./SHLDataset_preview_v1/User1/270617/Label.txt"]
24 |
25 | if chose == 1:
26 | motions = bag_motion
27 | elif chose == 2:
28 | motions = hand_motion
29 | elif chose == 3:
30 | motions = hip_motion
31 | else:
32 | motions = torso_motion
33 |
34 | data = np.array([])
35 | for i, file in enumerate(motions):
36 | np_motion = np.loadtxt(file)
37 | np_motion = downsize(np_motion)
38 | print(np_motion.shape)
39 | start = np_motion[0,0].astype(np.int64)
40 | end = np_motion[-1,0].astype(np.int64)
41 | label = labels[i]
42 | np_label = np.loadtxt(label)
43 | start_index = np.where(np_label == start)[0][0] # find the row index of start timestamp
44 | end_index = np.where(np_label == end)[0][0] # find the row index of the end timestamp
45 | np_label = find_labels(np_label,start_index,end_index)
46 | folder_id = np.full(np_label.shape, i) # put folder_id in the last column
47 | concatenate = np.concatenate((np_motion,np_label,folder_id),axis=1)
48 | if i == 0:
49 | data = concatenate
50 | else:
51 | data = np.concatenate((data,concatenate))
52 | #break; #testing
53 | print(data.shape)
54 | return data
55 |
56 | '''
57 | test = "./SHLDataset_preview_v1/User1/220617/Hand_Motion.txt"
58 | np_motion = np.loadtxt(test)
59 | np_motion = downsize(np_motion)
60 | print(np_motion.shape)
61 | start = np_motion[0,0].astype(np.int64)
62 | end = np_motion[-1,0].astype(np.int64)
63 |
64 | print(start)
65 | print(end)
66 |
67 | label = "./SHLDataset_preview_v1/User1/220617/Label.txt"
68 | np_label = np.loadtxt(label)
69 | start_index = np.where(np_label == start)[0][0] # find the row index of start timestamp
70 | end_index = np.where(np_label == end)[0][0] # find the row index of the end timestamp
71 | np_label = find_labels(np_label,start_index,end_index)
72 | print(np_label.shape)
73 |
74 | concate = np.concatenate((np_motion,np_label),axis=1)
75 | print(concate.shape)
76 |
77 | '''
78 | def segment(data, window_size): # data is numpy array
79 | n = len(data)
80 | X = []
81 | y = []
82 | start = 0
83 | end = 0
84 | while start + window_size - 1 < n:
85 | end = start + window_size-1
86 | if data[start][-2]!=0 and data[start][-2] == data[end][-2] and data[start][-1] == data[end][-1] : # keep the window only if it contains a single labelled activity from a single recording folder
87 | X.append(data[start:(end+1),1:-2])
88 | y_label = data[start][-2]
89 | if y_label == 8:
90 | y.append(0) # change label 8 to 0
91 | else:
92 | y.append(data[start][-2])
93 | start += window_size//2 # 50% overlap
94 | else: # if the window mixes activities or recording folders, advance to the next valid start point
95 | while start + window_size-1 < n:
96 | if data[start][-2] == 0 or data[start][-2] != data[start+1][-2]:
97 | break
98 | start += 1
99 | start += 1
100 | print(np.asarray(X).shape, np.asarray(y).shape)
101 | return {'inputs' : np.asarray(X), 'labels': np.asarray(y,dtype=int)}
102 |
103 |
104 | def find_labels(labels,start_index, end_index):
105 | interval = 10
106 | label_col_index = 1 # the 2nd column of Label.txt
107 | data = labels[start_index: end_index+1:interval, label_col_index].reshape(-1,1) # reshape to (n,1) so it can be concatenated later
108 | return data
109 |
110 | def downsize(data):# data is numpy array
111 | downsample_size = 10
112 | data = data[::downsample_size,:]
113 | return data
114 |
115 | def save_data(data,file_name): # save the data in h5 format
116 | f = h5py.File(file_name,'w')
117 | for key in data:
118 | print(key)
119 | f.create_dataset(key,data = data[key])
120 | f.close()
121 | print('Done.')
122 |
123 |
124 | if __name__ == "__main__":
125 | # The sensors are located at different body positions. Choose one body position whose sensor data to preprocess
126 | # 1 : bag ; 2: hand; 3:hip; 4: torso
127 | chose = 4
128 | file = ""
129 | if chose == 1:
130 | file = "bag.h5"
131 | elif chose == 2:
132 | file = "hand.h5"
133 | elif chose == 3:
134 | file = "hip.h5"
135 | else:
136 | file = "torso.h5"
137 |
138 | data = read_motion(chose)
139 | segment_data = segment(data,20)
140 | #save_data(segment_data, file)
141 |
--------------------------------------------------------------------------------
/SHLDataset/preprocessing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Mon Aug 17 14:31:33 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 | #1. convert the videos to grayscale images and rescale them
8 | #2. find the label for each frame and save the labels to a file in the same folder
9 | import cv2
10 | import os
11 |
12 | #read videos and convert each frame into an image
13 | #grayscale and resize the image
14 | #save the images
15 | def read_videos():
16 | video_path = ["./SHLDataset_preview_v1/User1/220617/timelapse.avi","./SHLDataset_preview_v1/User1/260617/timelapse.avi","./SHLDataset_preview_v1/User1/270617/timelapse.avi"]
17 | folder_count = 1
18 | for path in video_path:
19 | save_path = "./picture%d"%folder_count
20 | if not os.path.exists(save_path):
21 | os.mkdir(save_path)
22 | vidcap = cv2.VideoCapture(path)
23 | fps = vidcap.get(cv2.CAP_PROP_FPS)
24 | interval = 1/fps * 1000 # For example, if fps = 0.5, there is one frame every 2 seconds, so the interval between two frames is 2000 milliseconds
25 | #print("video FPS {}".format(vidcap.get(cv2.CAP_PROP_FPS)))
26 | # read frames and save to images at fps_save
27 | success,image = vidcap.read()
28 | count = 0
29 | while success:
30 | vidcap.set(cv2.CAP_PROP_POS_MSEC,(count*interval))
31 | success,image = vidcap.read()
32 | if success:
33 | img_gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) #grayscale
34 | img_resized =cv2.resize(img_gray,(100,100)) #resize
35 | print ('Read a new frame: ', success)
36 | cv2.imwrite( save_path + "\\frame%d.jpg" % count, img_resized) # save frame as JPEG file
37 | count = count + 1
38 | frame_num = open(save_path+"\\frame_number.txt","w")
39 | frame_num.write(str(count)) # save the total frame number
40 | frame_num.close()
41 | folder_count += 1
42 |
43 | #read label.txt and save it as a map
44 | #line number as key, label as value
45 | def line_map(file):
46 | #file = "./SHLDataset_preview_v1/User1/220617/Label.txt"
47 | f = open(file,'r')
48 | lines = f.readlines()
49 | line_num = 1
50 | line_map = dict()
51 | for line in lines:
52 | p = line.split(" ")
53 | label = int(p[1]) # use the 2nd column as label
54 | line_map[line_num] = label
55 | line_num += 1
56 | return line_map
57 |
58 | # from the dataset documentation: label line = offset + speedup*tv/10, where tv is the video time in milliseconds
59 | def cal_line(offset,speedup,frame_num):
60 | # fps = 0.5, one frame in every 2 seconds
61 | interval = 2000
62 | tv = frame_num * interval #start from frame 0
63 | return int(offset+ speedup*tv/10)
64 |
65 | #create a label.txt file in each picture folder
66 | #label.txt saves the label for each frame
67 | def save_labels():
68 | label_paths = ["./SHLDataset_preview_v1/User1/220617/label.txt","./SHLDataset_preview_v1/User1/260617/label.txt","./SHLDataset_preview_v1/User1/270617/label.txt"]
69 | picture_paths = ["./picture1","./picture2","./picture3"]
70 | offset_paths = ["./SHLDataset_preview_v1/User1/220617/videooffset.txt","./SHLDataset_preview_v1/User1/260617/videooffset.txt","./SHLDataset_preview_v1/User1/270617/videooffset.txt"]
71 | speedup_paths = ["./SHLDataset_preview_v1/User1/220617/videospeedup.txt","./SHLDataset_preview_v1/User1/260617/videospeedup.txt","./SHLDataset_preview_v1/User1/270617/videospeedup.txt"]
72 | i = 0
73 | for path in label_paths:
74 | pic_path = picture_paths[i]
75 | offset_path = offset_paths[i]
76 | speedup_path = speedup_paths[i]
77 |
78 | offset = get_offset2(offset_path)
79 | speedup = get_speedup(speedup_path)
80 |
81 | frame_file = open(pic_path+"\\frame_number.txt","r")
82 | frame_num = int(frame_file.readline())
83 | label_map = line_map(path)
84 |
85 | f = open(pic_path + "\\labels.txt","w")
86 | for frame in range(frame_num):
87 | line_num = cal_line(offset,speedup,frame)
88 | label = label_map[line_num]
89 | f.write(str(label)+"\n")
90 | f.close()
91 | frame_file.close()
92 | i += 1
93 |
94 | #parse the videooffset.txt
95 | def get_offset2(file):
96 | f = open(file,'r')
97 | line = f.readline()
98 | p = line.split(" ")
99 | offset2 = int(float(p[1].rstrip("\n")))
100 | f.close()
101 | return offset2
102 |
103 | #parse the videospeedup.txt
104 | def get_speedup(file):
105 | f = open(file,'r')
106 | speedup = int(f.readline())
107 | f.close()
108 | return speedup
109 |
110 | if __name__ == '__main__':
111 | read_videos()
112 | save_labels()
113 |
114 |
115 |
116 |
117 |
118 |
119 |
--------------------------------------------------------------------------------
/SHLDataset/readme.md:
--------------------------------------------------------------------------------
1 | ## Dataset
2 |
3 | The University of Sussex-Huawei Locomotion (SHL) dataset was used in our research. It is a versatile annotated dataset of modes of locomotion and transportation of mobile users. The dataset can be downloaded from http://www.shl-dataset.org/download/. The dataset was recorded over 7 months by 3 users engaging in 8 different transportation modes: still, walk, run, bike, car, bus, train and subway. The dataset contains multi-modal data from a body-worn camera and from 4 smartphones, carried simultaneously at four body locations.
4 |
5 |
6 | ## Methods and Results
7 | 1. The proposed data processing pipeline involves video conversion, data labeling and data segmentation (see the segmentation sketch under Data preprocessing below).
8 | 2. Three deep learning models (FFNN, CNN, RNN) were evaluated on the HAR data. All models worked well and achieved high accuracy on the dataset, with the CNN performing slightly better than the other two.
9 | 3. Two data fusion strategies, early and late fusion, were compared using the multimodal smartphone data (see the sketch under Data fusion model below). Early fusion performed slightly better than late fusion.
10 | 4. A CNN-based model was trained on both smartphone motion data and image data derived from the videos. Performance improved greatly (99.92% accuracy) after 50 epochs of training, and the model is stable and less sensitive to noise. Moreover, fusing the image data with only one motion sensor also achieved very high accuracy (99.97%), indicating that these two modalities alone are sufficient for good performance on this dataset.
11 |
12 | ## Data preprocessing
13 | 
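
A minimal sketch of the windowing step, simplified from `motion_processing.py` (the row layout `[timestamp, features..., label, folder_id]`, the window size of 20 and the label remapping are taken from that script; the helper name `sliding_windows` is only illustrative):

```python
import numpy as np

def sliding_windows(data, window_size=20):
    """Slice rows of [timestamp, features..., label, folder_id] into fixed-length
    windows with 50% overlap, keeping only windows that contain a single
    labelled activity from a single recording folder."""
    X, y = [], []
    start, n = 0, len(data)
    while start + window_size - 1 < n:
        end = start + window_size - 1
        same_label = data[start, -2] != 0 and data[start, -2] == data[end, -2]
        same_folder = data[start, -1] == data[end, -1]
        if same_label and same_folder:
            X.append(data[start:end + 1, 1:-2])       # drop timestamp, label, folder_id
            y.append(0 if data[start, -2] == 8 else int(data[start, -2]))  # remap label 8 to 0
            start += window_size // 2                 # 50% overlap
        else:
            start += 1                                # slide until a clean window is found
    return np.asarray(X), np.asarray(y, dtype=int)
```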
14 |
15 | ## Data fusion model
16 | 
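
For reference, the difference between the two fusion strategies can be sketched as below. This is illustrative only: the 20x22 motion window, the 100x100 grayscale frame, the 8 output classes and the layer sizes are assumptions for the sketch, not the exact architectures used in `data_fusion.py`.

```python
from tensorflow.keras.layers import Input, Dense, Flatten, Concatenate
from tensorflow.keras.models import Model

motion_in = Input(shape=(20, 22))       # one window of motion data (assumed shape)
image_in = Input(shape=(100, 100, 1))   # one grayscale video frame (assumed shape)

# Early fusion: merge the modalities first, then learn one joint representation.
early = Concatenate()([Flatten()(motion_in), Flatten()(image_in)])
early = Dense(128, activation='relu')(early)
early_out = Dense(8, activation='softmax')(early)
early_model = Model([motion_in, image_in], early_out)

# Late fusion: give each modality its own branch and merge just before the classifier.
m = Dense(128, activation='relu')(Flatten()(motion_in))
v = Dense(128, activation='relu')(Flatten()(image_in))
late_out = Dense(8, activation='softmax')(Concatenate()([m, v]))
late_model = Model([motion_in, image_in], late_out)
```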
17 |
18 |
19 |
20 |
21 |
--------------------------------------------------------------------------------
/SHLDataset/video_modeling.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Wed Aug 19 11:32:42 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | import pandas as pd
9 | import numpy as np
10 | import matplotlib.pyplot as plt
11 | from scipy import stats
12 | import tensorflow as tf
13 | from sklearn import metrics
14 | import h5py
15 | import matplotlib.pyplot as plt
16 | from tensorflow.keras import regularizers
17 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization
18 | from tensorflow.keras.models import Model
19 | from tensorflow.keras.optimizers import Adam, SGD
20 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
21 | from sklearn.model_selection import train_test_split
22 | from sklearn.metrics import confusion_matrix
23 | import itertools
24 | from keras.utils.vis_utils import plot_model
25 |
26 | class models():
27 | def __init__(self, path):
28 | self.path = path
29 |
30 |
31 | def read_h5(self):
32 | f = h5py.File(self.path, 'r') # use the path stored on the instance
33 | X = f.get('inputs')
34 | y = f.get('labels')
35 | #print(type(X))
36 | #print(type(y))
37 |
38 | X = np.array(X)
39 | y = np.array(y)
40 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(X, y, test_size=0.2, random_state = 1)
41 |
42 | #print("X = ", X.shape)
43 | #print("y =",y.shape)
44 | print(self.x_train.shape)
45 | print(self.x_test.shape)
46 |
47 | '''
48 | self.X = np.array(X)
49 | self.y = np.array(y)
50 | print(self.X[0][0])
51 | self.data_scale()
52 | print(self.X[0][0])
53 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(self.X, self.y, test_size=0.2, random_state = 11)
54 |
55 | print("X = ", self.X.shape)
56 | print("y =",self.y.shape)
57 | '''
58 |
59 |
60 |
61 | def cnn_model(self):
62 | K = len(set(self.y_train)) # no 0 in label
63 | print(K)
64 | #print(K)
65 | #K = 12
66 | #X = np.expand_dims(X, -1)
67 | #self.x_train = np.expand_dims(self.x_train, -1)
68 | #self.x_test = np.expand_dims(self.x_test,-1)
69 | #print(X)
70 | #print(X[0].shape)
71 | #i = Input(shape=X[0].shape)
72 | i = Input(shape=self.x_train[0].shape)
73 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
74 | x = BatchNormalization()(x)
75 | x = MaxPooling2D((2,2))(x)
76 | x = Dropout(0.4)(x)
77 | x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
78 | x = BatchNormalization()(x)
79 | x = Dropout(0.4)(x)
80 | x = Conv2D(256, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
81 | x = BatchNormalization()(x)
82 | x = MaxPooling2D((2,2))(x)
83 | x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
84 | x = BatchNormalization()(x)
85 | #x = MaxPooling2D((2,2))(x)
86 | #x = Dropout(0.2)(x)
87 | x = Flatten()(x)
88 | x = Dropout(0.4)(x)
89 | x = Dense(1024,activation = 'relu')(x)
90 | x = Dropout(0.4)(x)
91 | x = Dense(K, activation = 'softmax')(x)
92 | self.model = Model(i,x)
93 | self.model.compile(optimizer = Adam(lr=0.0005),
94 | loss = 'sparse_categorical_crossentropy',
95 | metrics = ['accuracy'])
96 |
97 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
98 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 100, batch_size = 32 )
99 | print(self.model.summary())
100 | # Splitting the data ourselves beforehand works better than letting Keras do the validation split
101 | return self.r
102 |
103 | # CNN model for the segmented image data
104 |
105 | def cnn_segment_model(self):
106 | K = len(set(self.y_train)) # no 0 in label
107 | print(K)
108 | print(self.x_train.shape)
109 | self.x_train = np.expand_dims(self.x_train, -1)
110 | self.x_test = np.expand_dims(self.x_test,-1)
111 | print(self.x_train[0].shape)
112 | i = Input(shape = self.x_train[0].shape)
113 |
114 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
115 | x = BatchNormalization()(x)
116 | #x = MaxPooling2D((2,2))(x)
117 | x = Dropout(0.2)(x)
118 |
119 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
120 | x = BatchNormalization()(x)
121 | #x = MaxPooling2D((2,2))(x)
122 | x = Dropout(0.2)(x)
123 | #x = Conv2D(256, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
124 | #x = BatchNormalization()(x)
125 | #x = MaxPooling2D((2,2))(x)
126 | #x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
127 | #x = BatchNormalization()(x)
128 | x = Flatten()(x)
129 | x = Dropout(0.2)(x)
130 | x = Dense(128,activation = 'relu')(x)
131 | x = BatchNormalization()(x)
132 | x = Dropout(0.2)(x)
133 | x = Dense(K, activation = 'softmax')(x)
134 | self.model = Model(i, x)
135 | self.model.compile(optimizer = Adam(lr=0.005),
136 | loss = 'sparse_categorical_crossentropy',
137 | metrics = ['accuracy'])
138 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
139 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
140 | print(self.model.summary())
141 | # Splitting the data ourselves beforehand works better than letting Keras do the validation split
142 | return self.r
143 |
144 |
145 | def draw(self):
146 | f1 = plt.figure(1)
147 | plt.title('Loss')
148 | plt.plot(self.r.history['loss'], label = 'loss')
149 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
150 | plt.legend()
151 | f1.show()
152 |
153 | f2 = plt.figure(2)
154 | plt.plot(self.r.history['acc'], label = 'accuracy')
155 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
156 | plt.legend()
157 | f2.show()
158 |
159 | # summary, confusion matrix and heatmap
160 | def con_matrix(self):
161 | K = len(set(self.y_train))
162 | self.y_pred = self.model.predict(self.x_test).argmax(axis=1)
163 | cm = confusion_matrix(self.y_test,self.y_pred)
164 | self.plot_confusion_matrix(cm,list(range(K)))
165 |
166 |
167 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
168 | if normalize:
169 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
170 | print("Normalized confusion matrix")
171 | else:
172 | print("Confusion matrix, without normalization")
173 | print(cm)
174 | f3 = plt.figure(3)
175 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
176 | plt.title(title)
177 | plt.colorbar()
178 | tick_marks = np.arange(len(classes))
179 | plt.xticks(tick_marks, classes, rotation=45)
180 | plt.yticks(tick_marks, classes)
181 |
182 | fmt = '.2f' if normalize else 'd'
183 | thresh = cm.max()/2.
184 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
185 | plt.text(j, i, format(cm[i, j], fmt),
186 | horizontalalignment = "center",
187 | color = "white" if cm[i, j] > thresh else "black")
188 | plt.tight_layout()
189 | plt.ylabel('True label')
190 | plt.xlabel('Predicted label')
191 | f3.show()
192 |
193 |
194 |
195 | if __name__ == "__main__":
196 | model_name = "cnn"
197 | #path = "./video.h5"
198 | path = "./image_for_fusion.h5" #segmentation
199 | video = models(path)
200 | print("read h5 file....")
201 | video.read_h5()
202 |
203 | if model_name == "cnn":
204 | video.cnn_segment_model()
205 | video.draw()
206 | video.con_matrix()
--------------------------------------------------------------------------------
/SHLDataset/video_processing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Tue Aug 18 21:57:10 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | #split the images into training and testing sets
9 | #save them in h5 format for modeling
10 |
11 | import os
12 | import h5py
13 | import numpy as np
14 | from keras.preprocessing.image import load_img, img_to_array
15 | import natsort
16 |
17 | #prepare training and testing dataset from images
18 | #No time series
19 | #reset label 8 to label 0
20 | #Original labels are Null=0, Still=1, Walking=2, Run=3, Bike=4, Car=5, Bus=6, Train=7, Subway=8
21 | #New labels: Subway=0, Still=1, Walking=2, Run=3, Bike=4, Car=5, Bus=6, Train=7
22 | def prepare_dataset():
23 | paths = ["./picture1","./picture2","./picture3"]
24 | X = []
25 | y = []
26 | for path in paths:
27 | dir = os.listdir(path)
28 | label_path = path + "/labels.txt"
29 | #print(label_path)
30 | np_labels = np.loadtxt(label_path)
31 | np_labels = np_labels.astype(np.int64)
32 | i = 0
33 | print(dir)
34 | for file in natsort.natsorted(dir): # order the files in ascending order
35 | if file.endswith("jpg"):
36 | print(file)
37 | file_path = path + "/" +file
38 | img = load_img(file_path,color_mode='grayscale')
39 | img = img_to_array(img).astype('float32')/255
40 | img = img.reshape(img.shape[0],img.shape[1],1)
41 | l = np_labels[i]
42 | if l != 0: # 0 is the Null label; exclude it from the dataset
43 | X.append(img)
44 | if l == 8:
45 | y.append(0) # reset label 8 to 0
46 | else:
47 | y.append(l)
48 | i += 1
49 |
50 | print(set(y))
51 | X = np.asarray(X)
52 | y = np.asarray(y,dtype=int)
53 | print(X.shape)
54 | print(y.shape)
55 | return {'inputs' : X, 'labels': y}
56 |
57 | def save_data(data,file_name): # save the data in h5 format
58 | f = h5py.File(file_name,'w')
59 | for key in data:
60 | print(key)
61 | f.create_dataset(key,data = data[key])
62 | f.close()
63 | print('Done.')
64 |
65 | if __name__ == "__main__":
66 | file_name = "video.h5"
67 | data = prepare_dataset()
68 | save_data(data,file_name)
69 |
70 |
--------------------------------------------------------------------------------
/Sphere/.footer.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
38 |
39 |
--------------------------------------------------------------------------------
/Sphere/.gitignore:
--------------------------------------------------------------------------------
1 | metadata
2 | supplementary
3 | test
4 | train
5 | .footer
6 | .header
7 | .info.rdf
8 | README
9 | sample_submission.csv
10 | sphere.h5
11 | sphere_t1.h5
--------------------------------------------------------------------------------
/Sphere/.header.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
8 |
9 |
10 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
The SPHERE Challenge: Activity Recognition with Multimodal Sensor Data
32 |
33 |
Data for the SPHERE Challenge that will take place in conjunction with ECML-PKDD 2016
34 |
35 |
36 |
37 |
--------------------------------------------------------------------------------
/Sphere/cnn/accuracy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Sphere/cnn/accuracy.png
--------------------------------------------------------------------------------
/Sphere/cnn/confusion_matrix.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Sphere/cnn/confusion_matrix.png
--------------------------------------------------------------------------------
/Sphere/cnn/loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/Sphere/cnn/loss.png
--------------------------------------------------------------------------------
/Sphere/dataProcessing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Mon Aug 10 09:49:36 2020
4 |
5 | @author: Jieyun Hu
6 | """
7 |
8 | from matplotlib import pyplot as plt
9 | import pandas as pd
10 | import numpy as np
11 | import math
12 | import h5py
13 | import os
14 | import csv
15 | import math
16 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
17 |
18 | #data fusion of video data, acceleration data and labels.
19 | def read_files():
20 | #print(pd.__version__)
21 | #Only use the training dataset, since the test dataset has no labels to evaluate against
22 | data_dir = os.listdir("./train/train")
23 | col_names=["t","centre_2d_x","centre_2d_y","bb_2d_br_x","bb_2d_br_y","bb_2d_tl_x","bb_2d_tl_y","centre_3d_x","centre_3d_y","centre_3d_z","bb_3d_brb_x","bb_3d_brb_y","bb_3d_brb_z","bb_3d_flt_x","bb_3d_flt_y","bb_3d_flt_z"]
24 | res = pd.DataFrame()
25 | #First, build a map (bucket array) from timestamp (int) to activity label
26 | for sub_dir in data_dir:
27 | #print(sub_dir)
28 | df = pd.DataFrame()
29 | bucket = [] #use a bucket array to save labels at each timestamp(roughly)
30 | init = False # bucket not initialized yet
31 | folder = os.listdir("./train/train/"+sub_dir)
32 | for file in folder:
33 | if "annotations_0" in file:
34 | file_name = "./train/train/"+sub_dir+"/"+file
35 | with open(file_name,'r') as csv_file:
36 | reader = csv.reader(csv_file)
37 | next(reader) #skip first row
38 | for row in reversed(list(reader)):#Read from the end of the file, so that the bucket array can be initialized at the beginning
39 | start = math.ceil(float(row[0])*10) # *10 to get a finer map
40 | end = math.ceil(float(row[1])*10)
41 | label = int(row[3])
42 | if init is False: # size the bucket from the last row's end time
43 | bucket = [-1 for x in range(end)] # -1 means no label for this timestamp
44 | init = True
45 | for i in range(start,end):
46 | bucket[i] = label
47 | #Second, combine the video files, add an activity label to each row, and drop rows without a label
48 | for file in folder:
49 | #print(file)
50 | if "video" in file:
51 | file_name = "./train/train/"+sub_dir+"/"+file
52 | procData = pd.read_table(file_name, sep=',')
53 | procData.columns = col_names
54 | procData['label'] = procData["t"].apply(lambda x: -1 if x*10 >= len(bucket) else bucket[math.floor(x*10)])
55 | # add the loc column for the videos location
56 | if "hallway" in file:
57 | procData['loc'] = 1
58 | elif "kitchen" in file:
59 | procData['loc'] = 2
60 | else:
61 | procData['loc'] = 3
62 |
63 | df = df.append(procData, ignore_index=True)
64 |
65 | df = df[df.label!=-1] #drop all rows with label = -1
66 | df.sort_values(by=['t'], inplace=True)
67 |
68 | #Last, add acceleration data
69 | for file in folder:
70 | if "acceleration" in file:
71 | file_name = "./train/train/"+sub_dir+"/"+file
72 | accData = pd.read_table(file_name, sep=',')
73 | accData.fillna(0,inplace=True)
74 | accData[['Kitchen_AP','Lounge_AP','Upstairs_AP','Study_AP']] = accData[['Kitchen_AP','Lounge_AP','Upstairs_AP','Study_AP']].apply(lambda x: x*(-1)) # convert the negative signal to positive. Not necessary.
75 | df = pd.merge_asof(left=df,right=accData,on='t',tolerance=1)
76 | df.dropna(subset=['label'],inplace=True) #remove the rows with no labels
77 | df.dropna(subset=['x','y','z'],inplace=True)
78 | df['folder_id'] = int(sub_dir)
79 | res = res.append(df, ignore_index=True)
80 |
81 | res['activities'] = res['label']#move the label to the last column and change the column name to activities
82 | res = res.drop(['t','label'],axis=1)#remove timestamp and label
83 | #res = res.drop(['Kitchen_AP','Lounge_AP','Upstairs_AP','Study_AP'],axis=1)#remove timestamp and label
84 |
85 | #print(set(res['activities']))
86 | res = scale(res)
87 |
88 | #res.to_csv('prepocessed_1.csv')
89 | return res
90 |
91 | def scale(df):#pandas dataframe
92 | col_names = ["centre_2d_x","centre_2d_y","bb_2d_br_x","bb_2d_br_y","bb_2d_tl_x","bb_2d_tl_y","centre_3d_x","centre_3d_y","centre_3d_z","bb_3d_brb_x","bb_3d_brb_y","bb_3d_brb_z","bb_3d_flt_x","bb_3d_flt_y","bb_3d_flt_z",'x','y','z','Kitchen_AP','Lounge_AP','Upstairs_AP','Study_AP']
93 | scaler = MinMaxScaler()
94 | #scaler = StandardScaler()
95 | df[col_names] = scaler.fit_transform(df[col_names])
96 | return df
97 |
98 | def segment(data, window_size): # data is numpy array
99 | n = len(data)
100 | X = []
101 | y = []
102 | start = 0
103 | end = 0
104 | while start + window_size - 1 < n:
105 | end = start + window_size-1
106 | if data[start][-2] == data[end][-2] and data[start][-1] == data[end][-1] : # keep the window only if it spans a single activity and comes from a single folder
107 | X.append(data[start:(end+1),0:-2])
108 | y.append(data[start][-1])
109 | start += window_size//2 # 50% overlap
110 | else: # the window spans different activities or folders; advance to the next activity boundary
111 | while start + window_size-1 < n:
112 | if data[start][-1] != data[start+1][-1]:
113 | break
114 | start += 1
115 | start += 1
116 | #print(np.asarray(y))
117 | print(np.asarray(X).shape, np.asarray(y).shape)
118 | return {'inputs' : np.asarray(X), 'labels': np.asarray(y,dtype=int)}
119 |
120 | def save_data(data,file_name): # save the data in h5 format
121 | f = h5py.File(file_name,'w')
122 | for key in data:
123 | print(key)
124 | f.create_dataset(key,data = data[key])
125 | f.close()
126 | print('Done.')
127 |
128 | if __name__ == "__main__":
129 | file_name = 'sphere.h5'
130 | window_size = 20
131 | data = read_files()
132 | numpy_data = data.to_numpy()
133 | #numpy_data = downsize(numpy_data)
134 | segment_data = segment(numpy_data, window_size)
135 | save_data(segment_data, file_name)
--------------------------------------------------------------------------------
/Sphere/models.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 |
4 | #%%
5 | import pandas as pd
6 | import numpy as np
7 | import matplotlib.pyplot as plt
8 | from scipy import stats
9 | import tensorflow as tf
10 | from sklearn import metrics
11 | import h5py
12 | import matplotlib.pyplot as plt
13 | from tensorflow.keras import regularizers
14 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization
15 | from tensorflow.keras.models import Model
16 | from tensorflow.keras.optimizers import Adam, SGD
17 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
18 | from sklearn.model_selection import train_test_split
19 | from sklearn.metrics import confusion_matrix
20 | import itertools
21 | from keras.utils.vis_utils import plot_model
22 |
23 | class models():
24 | def __init__(self, path):
25 | self.path = path
26 |
27 |
28 | def read_h5(self):
29 | f = h5py.File(self.path, 'r')  # use the path stored on the instance
30 | X = f.get('inputs')
31 | y = f.get('labels')
32 | #print(type(X))
33 | #print(type(y))
34 |
35 | X = np.array(X)
36 | y = np.array(y)
37 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(X, y, test_size=0.2, random_state = 1)
38 |
39 | #print("X = ", X.shape)
40 | #print("y =",y.shape)
41 | #print(self.x_train.shape)
42 | #print(self.x_train.shape)
43 |
44 | '''
45 | self.X = np.array(X)
46 | self.y = np.array(y)
47 | print(self.X[0][0])
48 | self.data_scale()
49 | print(self.X[0][0])
50 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(self.X, self.y, test_size=0.2, random_state = 11)
51 |
52 | print("X = ", self.X.shape)
53 | print("y =",self.y.shape)
54 | '''
55 |
56 |
57 |
58 | def cnn_model(self):
59 | K = len(set(self.y_train))
60 | print(K)
61 | #print(K)
62 | #K = 12
63 | #X = np.expand_dims(X, -1)
64 | self.x_train = np.expand_dims(self.x_train, -1)
65 | self.x_test = np.expand_dims(self.x_test,-1)
66 | #print(X)
67 | #print(X[0].shape)
68 | #i = Input(shape=X[0].shape)
69 | i = Input(shape=self.x_train[0].shape)
70 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
71 | x = BatchNormalization()(x)
72 | x = MaxPooling2D((2,2))(x)
73 | x = Dropout(0.2)(x)
74 | x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
75 | x = BatchNormalization()(x)
76 | x = Dropout(0.4)(x)
77 | x = Conv2D(256, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
78 | x = BatchNormalization()(x)
79 | x = MaxPooling2D((2,2))(x)
80 | x = Conv2D(128, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
81 | x = BatchNormalization()(x)
82 | #x = MaxPooling2D((2,2))(x)
83 | #x = Dropout(0.2)(x)
84 | x = Flatten()(x)
85 | x = Dropout(0.2)(x)
86 | x = Dense(1024,activation = 'relu')(x)
87 | x = Dropout(0.2)(x)
88 | x = Dense(K, activation = 'softmax')(x)
89 | self.model = Model(i,x)
90 | self.model.compile(optimizer = Adam(lr=0.005),
91 | loss = 'sparse_categorical_crossentropy',
92 | metrics = ['accuracy'])
93 |
94 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
95 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 200, batch_size = 128 )
96 | print(self.model.summary())
97 | # Splitting with train_test_split beforehand works better than letting Keras split via validation_split
98 | return self.r
99 |
100 | def dnn_model(self):
101 | K = len(set(self.y_train))
102 | #print(K)
103 | #K = 12
104 | print(self.x_train[0].shape)
105 | i = Input(shape=self.x_train[0].shape)
106 | x = Flatten()(i)
107 | x = Dense(128,activation = 'sigmoid')(x)
108 | #x = Dense(128,activation = 'relu')(x)
109 | #x = Dropout(0.2)(x)
110 | #x = Dense(256,activation = 'sigmoid')(x)
111 | #x = Dense(256,activation = 'sigmoid')(x)
112 | #x = Dense(256,activation = 'sigmoid')(x)
113 | #x = Dropout(0.2)(x)
114 | #x = Dense(128,activation = 'tanh')(x)
115 | x = Dense(K,activation = 'softmax')(x)
116 | self.model = Model(i,x)
117 | self.model.compile(optimizer = Adam(lr=0.001),
118 | loss = 'sparse_categorical_crossentropy',
119 | metrics = ['accuracy'])
120 |
121 | '''
122 | model = tf.keras.models.Sequential([
123 | tf.keras.layers.Flatten(input_shape=self.x_train[0].shape),
124 | tf.keras.layers.Dense(256, activation = 'relu'),
125 | tf.keras.layers.Dropout(0.5),
126 | tf.keras.layers.Dense(256, activation = 'relu'),
127 | tf.keras.layers.Dropout(0.2),
128 | tf.keras.layers.Dense(K,activation = 'softmax')
129 | ])
130 | model.compile(optimizer = Adam(lr=0.0005),
131 | loss = 'sparse_categorical_crossentropy',
132 | metrics = ['accuracy'])
133 | '''
134 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 10, batch_size = 32 )
135 | print(self.model.summary())
136 | return self.r
137 |
138 |
139 | def rnn_model(self):
140 | K = len(set(self.y_train))
141 | i = Input(shape = self.x_train[0].shape)
142 | x = LSTM(128, return_sequences=True)(i)
143 | x = Dense(128,activation = 'relu')(x)
144 | x = GlobalMaxPooling1D()(x)
145 | x = Dense(K,activation = 'softmax')(x)
146 | self.model = Model(i,x)
147 | self.model.compile(optimizer = Adam(lr=0.005),
148 | loss = 'sparse_categorical_crossentropy',
149 | metrics = ['accuracy'])
150 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 128)
151 | #self.r = model.fit(X, y, validation_split = 0.2, epochs = 10, batch_size = 32 )
152 | print(self.model.summary())
153 | return self.r
154 |
155 | def draw(self):
156 | f1 = plt.figure(1)
157 | plt.title('Loss')
158 | plt.plot(self.r.history['loss'], label = 'loss')
159 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
160 | plt.legend()
161 | f1.show()
162 |
163 | f2 = plt.figure(2)
164 | plt.plot(self.r.history['acc'], label = 'accuracy')
165 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
166 | plt.legend()
167 | f2.show()
168 |
169 | # summary, confusion matrix and heatmap
170 | def con_matrix(self):
171 | K = len(set(self.y_train))
172 | self.y_pred = self.model.predict(self.x_test).argmax(axis=1)
173 | cm = confusion_matrix(self.y_test,self.y_pred)
174 | self.plot_confusion_matrix(cm,list(range(K)))
175 |
176 |
177 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
178 | if normalize:
179 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
180 | print("Normalized confusion matrix")
181 | else:
182 | print("Confusion matrix, without normalization")
183 | print(cm)
184 | f3 = plt.figure(3)
185 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
186 | plt.title(title)
187 | plt.colorbar()
188 | tick_marks = np.arange(len(classes))
189 | plt.xticks(tick_marks, classes, rotation=45)
190 | plt.yticks(tick_marks, classes)
191 |
192 | fmt = '.2f' if normalize else 'd'
193 | thresh = cm.max()/2.
194 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
195 | plt.text(j, i, format(cm[i, j], fmt),
196 | horizontalalignment = "center",
197 | color = "white" if cm[i, j] > thresh else "black")
198 | plt.tight_layout()
199 | plt.ylabel('True label')
200 | plt.xlabel('Predicted label')
201 | f3.show()
202 |
203 |
204 |
205 | if __name__ == "__main__":
206 | model_name = "cnn" # can be cnn/dnn/rnn
207 | path = "./sphere.h5"
208 | sphere = models(path)
209 | print("read h5 file....")
210 | sphere.read_h5()
211 |
212 | if model_name == "cnn":
213 | sphere.cnn_model()
214 | elif model_name == "dnn":
215 | sphere.dnn_model()
216 | elif model_name == "rnn":
217 | sphere.rnn_model()
218 | sphere.draw()
219 | sphere.con_matrix()
220 |
221 |
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/.gitignore:
--------------------------------------------------------------------------------
1 | test
2 | train
3 | .DS_Store
4 | activity_labels.txt
5 | features.txt
6 | features_info.txt
7 | README.txt
8 | uci_har.h5
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/dataProcessing.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Fri Jun 26 23:09:38 2020
4 |
5 | """
6 |
7 | # This file is for UCI HAR data processing
8 | from matplotlib import pyplot as plt
9 | import pandas as pd
10 | import numpy as np
11 | import math
12 | import h5py
13 | import os
14 | activityIDdict = {
15 | 1: 'walking',
16 | 2: 'walking_upstairs',
17 | 3: 'walking_downstairs',
18 | 4: 'sitting',
19 | 5: 'standing',
20 | 0: 'laying',# originally was 6
21 | }
22 | colNames = ['body_acc_x','body_acc_y','body_acc_z',
23 | 'body_gyro_x','body_gyro_y','body_gyro_z',
24 | 'total_acc_x','total_acc_y','total_acc_z']
25 |
26 | def read_files():
27 | list_of_Xs = ['./test/Inertial Signals',
28 | './train/Inertial Signals']
29 |
30 | list_of_ys = ['./test/y_test.txt',
31 | './train/y_train.txt']
32 |
33 |
34 |
35 | # To merge the data from nine features, first convert each txt to numpy array, then dstack them
36 | train_X_array = []
37 | files = os.listdir(list_of_Xs[1])
38 | for file in files:
39 | print(file," is reading...")
40 | data = np.loadtxt(list_of_Xs[1]+'/'+file)
41 | #print("data shape is:", data.shape)
42 | train_X_array.append(data)
43 |
44 | # merge the test files
45 | test_X_array = []
46 | files = os.listdir(list_of_Xs[0])
47 | for file in files:
48 | print(file," is reading...")
49 | data = np.loadtxt(list_of_Xs[0]+'/'+file)
50 | # print("data shape is:", data.shape)
51 | test_X_array.append(data)
52 |
53 | train_X = np.dstack(train_X_array)
54 | print(train_X.shape)
55 |
56 | test_X = np.dstack(test_X_array)
57 | print(test_X.shape)
58 |
59 | train_y = np.loadtxt(list_of_ys[1])
60 | test_y = np.loadtxt(list_of_ys[0])
61 | print(train_y.shape)
62 | print(test_y.shape)
63 |
64 | # merge trainX and testX, the data will be split to train and test set during model training
65 | X = np.vstack((train_X, test_X))
66 | # merge train_y and test_y
67 | y = np.hstack((train_y,test_y)).astype(np.int64) # convert labels to integers
68 | y = np.where(y==6,0,y) # if the label is 6, replace with 0, otherwise still y
69 |
70 | print("X shape is ", X.shape)
71 | print("y shape is ", y.shape)
72 | print(set(y))
73 | return [X,y]
74 | #return {'inputs' : X, 'labels': y}
75 |
76 |
77 | def save_data(arr,file_name): # save the data in h5 format
78 | dict_ = {'inputs' : arr[0], 'labels': arr[1]}
79 | f = h5py.File(file_name,'w')
80 | for key in dict_:
81 | print(key)
82 | f.create_dataset(key,data = dict_[key])
83 | f.close()
84 | print('Done.')
85 |
86 | #Only plot a single window.
87 | #The data has already been processed into windows and shuffled into train and test sets.
88 | #It is sequential within each window.
89 | def window_plot(X, y, col, y_index):
90 | unit='ms^-2'
91 | #print(y.shape)
92 | #print(X.shape)
93 | x_seq = X[y_index][:,col]
94 | #print(x_seq.shape)
95 | #print(y[y_index])
96 | plottitle = colNames[col]+' - '+ activityIDdict[y[y_index]]
97 | plt.plot(x_seq)
98 | plt.title(plottitle)
99 | plt.xlabel('window')
100 | plt.ylabel(unit)
101 | plt.show()
102 |
103 | if __name__ == "__main__":
104 | file_name = 'uci_har.h5'
105 | arr = read_files()
106 | #check the waveforms in a single window
107 | window_plot(arr[0],arr[1],2,500)
108 | save_data(arr,file_name)
109 |
110 |
111 |
112 |
113 |
114 |
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/models.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | """
3 | Created on Mon Jun 29 16:41:56 2020
4 |
5 | """
6 |
7 | '''
8 | Apply different deep learning models to the UCI HAR dataset.
9 | DNN, CNN and RNN were applied.
10 |
11 | '''
12 | #%%
13 | import pandas as pd
14 | import numpy as np
15 | import matplotlib.pyplot as plt
16 | from scipy import stats
17 | import tensorflow as tf
18 | from sklearn import metrics
19 | import h5py
20 | import matplotlib.pyplot as plt
21 | from tensorflow.keras import regularizers
22 | from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, SimpleRNN, GRU, LSTM, GlobalMaxPooling1D,GlobalMaxPooling2D,MaxPooling2D,BatchNormalization
23 | from tensorflow.keras.models import Model
24 | from tensorflow.keras.optimizers import Adam
25 | from sklearn.preprocessing import MinMaxScaler, StandardScaler
26 | from sklearn.model_selection import train_test_split
27 | from sklearn.metrics import confusion_matrix
28 | import itertools
29 | from keras.utils.vis_utils import plot_model
30 |
31 | class models():
32 | def __init__(self, path):
33 | self.path = path
34 |
35 |
36 | def read_h5(self):
37 | f = h5py.File(self.path, 'r')  # use the path stored on the instance
38 | X = f.get('inputs')
39 | y = f.get('labels')
40 | #print(type(X))
41 | #print(type(y))
42 | X = np.array(X)
43 | y = np.array(y)
44 | self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(X, y, test_size=0.4, random_state = 1)
45 |
46 | #print("X = ", X.shape)
47 | #print("y =",y.shape)
48 | print(self.x_train.shape)
49 | #print(self.x_train.shape)
50 | #return X,y
51 |
52 | def cnn_model(self):
53 | K = len(set(self.y_train))
54 | #print(K)
55 | #K = 12
56 | #X = np.expand_dims(X, -1)
57 | self.x_train = np.expand_dims(self.x_train, -1)
58 | self.x_test = np.expand_dims(self.x_test,-1)
59 | #print(X)
60 | #print(X[0].shape)
61 | #i = Input(shape=X[0].shape)
62 | i = Input(shape=self.x_train[0].shape)
63 | x = Conv2D(32, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(i)
64 | x = BatchNormalization()(x)
65 | #x = MaxPooling2D((2,2))(x)
66 | x = Dropout(0.2)(x)
67 | x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
68 | x = BatchNormalization()(x)
69 | x = Dropout(0.4)(x)
70 | x = Conv2D(64, (3,3), strides = 2, activation = 'relu',padding='same',kernel_regularizer=regularizers.l2(0.0005))(x)
71 | x = BatchNormalization()(x)
72 | #x = MaxPooling2D((2,2))(x)
73 | x = Dropout(0.2)(x)
74 | x = Flatten()(x)
75 | x = Dropout(0.2)(x)
76 | x = Dense(512,activation = 'relu')(x)
77 | x = Dropout(0.2)(x)
78 | x = Dense(K, activation = 'softmax')(x)
79 | self.model = Model(i,x)
80 | self.model.compile(optimizer = Adam(lr=0.001),
81 | loss = 'sparse_categorical_crossentropy',
82 | metrics = ['accuracy'])
83 |
84 | #self.r = model.fit(X, y, validation_split = 0.4, epochs = 50, batch_size = 32 )
85 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
86 | print(self.model.summary())
87 | # Splitting with train_test_split beforehand works better than letting Keras split via validation_split
88 | return self.r
89 |
90 | def dnn_model(self):
91 | K = len(set(self.y_train))
92 | #print(K)
93 | #K = 12
94 | print(self.x_train[0].shape)
95 | i = Input(shape=self.x_train[0].shape)
96 | x = Flatten()(i)
97 | x = Dense(64,activation = 'relu')(x)
98 | #x = Dense(128,activation = 'relu')(x)
99 | x = Dropout(0.2)(x)
100 | x = Dense(128,activation = 'relu')(x)
101 | x = Dropout(0.5)(x)
102 | x = Dense(64,activation = 'relu')(x)
103 | x = Dropout(0.2)(x)
104 | x = Dense(K,activation = 'softmax')(x)
105 | self.model = Model(i,x)
106 | self.model.compile(optimizer = Adam(lr=0.001),
107 | loss = 'sparse_categorical_crossentropy',
108 | metrics = ['accuracy'])
109 |
110 | '''
111 | model = tf.keras.models.Sequential([
112 | tf.keras.layers.Flatten(input_shape=self.x_train[0].shape),
113 | tf.keras.layers.Dense(256, activation = 'relu'),
114 | tf.keras.layers.Dropout(0.5),
115 | tf.keras.layers.Dense(256, activation = 'relu'),
116 | tf.keras.layers.Dropout(0.2),
117 | tf.keras.layers.Dense(K,activation = 'softmax')
118 | ])
119 | model.compile(optimizer = Adam(lr=0.0005),
120 | loss = 'sparse_categorical_crossentropy',
121 | metrics = ['accuracy'])
122 | '''
123 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
124 | print(self.model.summary())
125 | return self.r
126 |
127 |
128 | def rnn_model(self):
129 | K = len(set(self.y_train))
130 | #K = 12
131 | i = Input(shape = self.x_train[0].shape)
132 | x = LSTM(128, return_sequences=True)(i)
133 | x = Dense(64,activation = 'relu')(x)
134 | x = GlobalMaxPooling1D()(x)
135 | x = Dense(K,activation = 'softmax')(x)
136 | self.model = Model(i,x)
137 | self.model.compile(optimizer = Adam(lr=0.001),
138 | loss = 'sparse_categorical_crossentropy',
139 | metrics = ['accuracy'])
140 | self.r = self.model.fit(self.x_train, self.y_train, validation_data = (self.x_test, self.y_test), epochs = 50, batch_size = 32 )
141 | #self.r = model.fit(X, y, validation_split = 0.2, epochs = 10, batch_size = 32 )
142 | print(self.model.summary())
143 | return self.r
144 |
145 | def draw(self):
146 | f1 = plt.figure(1)
147 | plt.title('Loss')
148 | plt.plot(self.r.history['loss'], label = 'loss')
149 | plt.plot(self.r.history['val_loss'], label = 'val_loss')
150 | plt.legend()
151 | f1.show()
152 |
153 | f2 = plt.figure(2)
154 | plt.plot(self.r.history['acc'], label = 'accuracy')
155 | plt.plot(self.r.history['val_acc'], label = 'val_accuracy')
156 | plt.legend()
157 | f2.show()
158 |
159 | # summary, confusion matrix and heatmap
160 | def con_matrix(self):
161 | K = len(set(self.y_train))
162 | self.y_pred = self.model.predict(self.x_test).argmax(axis=1)
163 | cm = confusion_matrix(self.y_test,self.y_pred)
164 | self.plot_confusion_matrix(cm,list(range(K)))
165 |
166 |
167 | def plot_confusion_matrix(self, cm, classes, normalize = False, title='Confusion matrix', cmap=plt.cm.Blues):
168 | if normalize:
169 | cm = cm.astype('float') / cm.sum(axis=1)[:,np.newaxis]
170 | print("Normalized confusion matrix")
171 | else:
172 | print("Confusion matrix, without normalization")
173 | print(cm)
174 | f3 = plt.figure(3)
175 | plt.imshow(cm, interpolation='nearest', cmap=cmap)
176 | plt.title(title)
177 | plt.colorbar()
178 | tick_marks = np.arange(len(classes))
179 | plt.xticks(tick_marks, classes, rotation=45)
180 | plt.yticks(tick_marks, classes)
181 |
182 | fmt = '.2f' if normalize else 'd'
183 | thresh = cm.max()/2.
184 | for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
185 | plt.text(j, i, format(cm[i, j], fmt),
186 | horizontalalignment = "center",
187 | color = "white" if cm[i, j] > thresh else "black")
188 | plt.tight_layout()
189 | plt.ylabel('True label')
190 | plt.xlabel('Predicted label')
191 | f3.show()
192 |
193 |
194 |
195 | if __name__ == "__main__":
196 | model_name = "cnn" # can be cnn/dnn/rnn
197 | path = "./uci_har.h5"
198 | har = models(path)
199 | print("read h5 file....")
200 | har.read_h5()
201 |
202 | if model_name == "cnn":
203 | har.cnn_model()
204 | elif model_name == "dnn":
205 | har.dnn_model()
206 | elif model_name == "rnn":
207 | har.rnn_model()
208 | har.draw()
209 | har.con_matrix()
210 |
211 |
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/cnn/cnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/cnn/cnn_50ep_acc.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/cnn/cnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/cnn/cnn_50ep_cm.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/cnn/cnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/cnn/cnn_50ep_loss.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_acc.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_cm.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_loss.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/dnn/dnn_50ep_result.txt:
--------------------------------------------------------------------------------
1 | Train on 6179 samples, validate on 4120 samples
2 | Epoch 1/50
3 | 6179/6179 [==============================] - 2s 244us/sample - loss: 1.1270 - acc: 0.5266 - val_loss: 0.6002 - val_acc: 0.7934
4 | Epoch 2/50
5 | 6179/6179 [==============================] - 1s 130us/sample - loss: 0.6366 - acc: 0.7572 - val_loss: 0.3882 - val_acc: 0.8670
6 | Epoch 3/50
7 | 6179/6179 [==============================] - 1s 123us/sample - loss: 0.4598 - acc: 0.8322 - val_loss: 0.2681 - val_acc: 0.8966
8 | Epoch 4/50
9 | 6179/6179 [==============================] - 1s 122us/sample - loss: 0.3616 - acc: 0.8684 - val_loss: 0.2429 - val_acc: 0.9155
10 | Epoch 5/50
11 | 6179/6179 [==============================] - 1s 125us/sample - loss: 0.3268 - acc: 0.8836 - val_loss: 0.2066 - val_acc: 0.9257
12 | Epoch 6/50
13 | 6179/6179 [==============================] - 1s 126us/sample - loss: 0.2813 - acc: 0.8956 - val_loss: 0.2204 - val_acc: 0.9170
14 | Epoch 7/50
15 | 6179/6179 [==============================] - 1s 124us/sample - loss: 0.2343 - acc: 0.9128 - val_loss: 0.2197 - val_acc: 0.9153
16 | Epoch 8/50
17 | 6179/6179 [==============================] - 1s 125us/sample - loss: 0.2395 - acc: 0.9120 - val_loss: 0.1961 - val_acc: 0.9199
18 | Epoch 9/50
19 | 6179/6179 [==============================] - 1s 124us/sample - loss: 0.2305 - acc: 0.9157 - val_loss: 0.1926 - val_acc: 0.9274
20 | Epoch 10/50
21 | 6179/6179 [==============================] - 1s 126us/sample - loss: 0.2062 - acc: 0.9181 - val_loss: 0.1856 - val_acc: 0.9316
22 | Epoch 11/50
23 | 6179/6179 [==============================] - 1s 150us/sample - loss: 0.2069 - acc: 0.9188 - val_loss: 0.2014 - val_acc: 0.9282
24 | Epoch 12/50
25 | 6179/6179 [==============================] - 1s 175us/sample - loss: 0.1882 - acc: 0.9283 - val_loss: 0.1960 - val_acc: 0.9218
26 | Epoch 13/50
27 | 6179/6179 [==============================] - 1s 178us/sample - loss: 0.1980 - acc: 0.9244 - val_loss: 0.1847 - val_acc: 0.9333
28 | Epoch 14/50
29 | 6179/6179 [==============================] - 1s 176us/sample - loss: 0.1874 - acc: 0.9280 - val_loss: 0.1722 - val_acc: 0.9291
30 | Epoch 15/50
31 | 6179/6179 [==============================] - 1s 176us/sample - loss: 0.1862 - acc: 0.9246 - val_loss: 0.1690 - val_acc: 0.9347
32 | Epoch 16/50
33 | 6179/6179 [==============================] - 1s 174us/sample - loss: 0.1781 - acc: 0.9319 - val_loss: 0.1753 - val_acc: 0.9301
34 | Epoch 17/50
35 | 6179/6179 [==============================] - 1s 175us/sample - loss: 0.1822 - acc: 0.9298 - val_loss: 0.1820 - val_acc: 0.9274
36 | Epoch 18/50
37 | 6179/6179 [==============================] - 1s 181us/sample - loss: 0.1771 - acc: 0.9283 - val_loss: 0.1749 - val_acc: 0.9342
38 | Epoch 19/50
39 | 6179/6179 [==============================] - 1s 182us/sample - loss: 0.1714 - acc: 0.9336 - val_loss: 0.1865 - val_acc: 0.9296
40 | Epoch 20/50
41 | 6179/6179 [==============================] - 1s 178us/sample - loss: 0.1727 - acc: 0.9325 - val_loss: 0.1685 - val_acc: 0.9320
42 | Epoch 21/50
43 | 6179/6179 [==============================] - 1s 179us/sample - loss: 0.1579 - acc: 0.9396 - val_loss: 0.1673 - val_acc: 0.9364
44 | Epoch 22/50
45 | 6179/6179 [==============================] - 1s 188us/sample - loss: 0.1563 - acc: 0.9358 - val_loss: 0.1860 - val_acc: 0.9335
46 | Epoch 23/50
47 | 6179/6179 [==============================] - 1s 177us/sample - loss: 0.1628 - acc: 0.9361 - val_loss: 0.1712 - val_acc: 0.9308
48 | Epoch 24/50
49 | 6179/6179 [==============================] - 1s 182us/sample - loss: 0.1543 - acc: 0.9404 - val_loss: 0.1800 - val_acc: 0.9284
50 | Epoch 25/50
51 | 6179/6179 [==============================] - 1s 181us/sample - loss: 0.1496 - acc: 0.9380 - val_loss: 0.1739 - val_acc: 0.9320
52 | Epoch 26/50
53 | 6179/6179 [==============================] - 1s 174us/sample - loss: 0.1580 - acc: 0.9391 - val_loss: 0.1685 - val_acc: 0.9362
54 | Epoch 27/50
55 | 6179/6179 [==============================] - 1s 179us/sample - loss: 0.1554 - acc: 0.9413 - val_loss: 0.1670 - val_acc: 0.9306
56 | Epoch 28/50
57 | 6179/6179 [==============================] - 1s 179us/sample - loss: 0.1409 - acc: 0.9445 - val_loss: 0.1653 - val_acc: 0.9398
58 | Epoch 29/50
59 | 6179/6179 [==============================] - 1s 176us/sample - loss: 0.1460 - acc: 0.9456 - val_loss: 0.1810 - val_acc: 0.9308
60 | Epoch 30/50
61 | 6179/6179 [==============================] - 1s 173us/sample - loss: 0.1624 - acc: 0.9366 - val_loss: 0.1685 - val_acc: 0.9333
62 | Epoch 31/50
63 | 6179/6179 [==============================] - 1s 176us/sample - loss: 0.1492 - acc: 0.9413 - val_loss: 0.1675 - val_acc: 0.9325
64 | Epoch 32/50
65 | 6179/6179 [==============================] - 1s 176us/sample - loss: 0.1482 - acc: 0.9401 - val_loss: 0.1726 - val_acc: 0.9342
66 | Epoch 33/50
67 | 6179/6179 [==============================] - 1s 175us/sample - loss: 0.1407 - acc: 0.9440 - val_loss: 0.1732 - val_acc: 0.9364
68 | Epoch 34/50
69 | 6179/6179 [==============================] - 1s 176us/sample - loss: 0.1418 - acc: 0.9390 - val_loss: 0.1940 - val_acc: 0.9388
70 | Epoch 35/50
71 | 6179/6179 [==============================] - 1s 175us/sample - loss: 0.1501 - acc: 0.9443 - val_loss: 0.1597 - val_acc: 0.9352
72 | Epoch 36/50
73 | 6179/6179 [==============================] - 1s 178us/sample - loss: 0.1519 - acc: 0.9387 - val_loss: 0.1675 - val_acc: 0.9357
74 | Epoch 37/50
75 | 6179/6179 [==============================] - 1s 177us/sample - loss: 0.1432 - acc: 0.9435 - val_loss: 0.1707 - val_acc: 0.9362
76 | Epoch 38/50
77 | 6179/6179 [==============================] - 1s 189us/sample - loss: 0.1444 - acc: 0.9408 - val_loss: 0.1698 - val_acc: 0.9313
78 | Epoch 39/50
79 | 6179/6179 [==============================] - 1s 177us/sample - loss: 0.1469 - acc: 0.9427 - val_loss: 0.1619 - val_acc: 0.9369
80 | Epoch 40/50
81 | 6179/6179 [==============================] - 1s 183us/sample - loss: 0.1455 - acc: 0.9422 - val_loss: 0.1754 - val_acc: 0.9337
82 | Epoch 41/50
83 | 6179/6179 [==============================] - 1s 175us/sample - loss: 0.1506 - acc: 0.9398 - val_loss: 0.1646 - val_acc: 0.9342
84 | Epoch 42/50
85 | 6179/6179 [==============================] - 1s 174us/sample - loss: 0.1277 - acc: 0.9495 - val_loss: 0.1627 - val_acc: 0.9386
86 | Epoch 43/50
87 | 6179/6179 [==============================] - 1s 177us/sample - loss: 0.1220 - acc: 0.9490 - val_loss: 0.1673 - val_acc: 0.9369
88 | Epoch 44/50
89 | 6179/6179 [==============================] - 1s 178us/sample - loss: 0.1338 - acc: 0.9434 - val_loss: 0.1675 - val_acc: 0.9391
90 | Epoch 45/50
91 | 6179/6179 [==============================] - 1s 183us/sample - loss: 0.1436 - acc: 0.9443 - val_loss: 0.1941 - val_acc: 0.9303
92 | Epoch 46/50
93 | 6179/6179 [==============================] - 1s 179us/sample - loss: 0.1407 - acc: 0.9466 - val_loss: 0.1715 - val_acc: 0.9383
94 | Epoch 47/50
95 | 6179/6179 [==============================] - 1s 178us/sample - loss: 0.1461 - acc: 0.9448 - val_loss: 0.1803 - val_acc: 0.9340
96 | Epoch 48/50
97 | 6179/6179 [==============================] - 1s 182us/sample - loss: 0.1376 - acc: 0.9468 - val_loss: 0.1627 - val_acc: 0.9400
98 | Epoch 49/50
99 | 6179/6179 [==============================] - 1s 198us/sample - loss: 0.1353 - acc: 0.9453 - val_loss: 0.1698 - val_acc: 0.9405
100 | Epoch 50/50
101 | 6179/6179 [==============================] - 1s 192us/sample - loss: 0.1480 - acc: 0.9459 - val_loss: 0.1671 - val_acc: 0.9388
102 | _________________________________________________________________
103 | Layer (type) Output Shape Param #
104 | =================================================================
105 | input_8 (InputLayer) (None, 128, 9) 0
106 | _________________________________________________________________
107 | flatten_5 (Flatten) (None, 1152) 0
108 | _________________________________________________________________
109 | dense_18 (Dense) (None, 64) 73792
110 | _________________________________________________________________
111 | dropout_4 (Dropout) (None, 64) 0
112 | _________________________________________________________________
113 | dense_19 (Dense) (None, 128) 8320
114 | _________________________________________________________________
115 | dropout_5 (Dropout) (None, 128) 0
116 | _________________________________________________________________
117 | dense_20 (Dense) (None, 64) 8256
118 | _________________________________________________________________
119 | dropout_6 (Dropout) (None, 64) 0
120 | _________________________________________________________________
121 | dense_21 (Dense) (None, 6) 390
122 | =================================================================
123 | Total params: 90,758
124 | Trainable params: 90,758
125 | Non-trainable params: 0
126 | _________________________________________________________________
127 | None
128 | Confusion matrix, without normalization
129 | [[767 0 0 0 0 0]
130 | [ 0 661 8 12 1 2]
131 | [ 0 8 584 9 0 1]
132 | [ 0 3 12 554 0 0]
133 | [ 0 0 0 0 626 99]
134 | [ 0 0 2 1 94 676]]
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_acc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_acc.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_cm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_cm.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jenhuluck/Deep-Learning-in-Human-Activity-Recognition-/c6f70ed2c845253698f8bf8dcdd47a8e74b30fd2/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_loss.png
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/results/rnn/rnn_50ep_output.txt:
--------------------------------------------------------------------------------
1 | Train on 6179 samples, validate on 4120 samples
2 | Epoch 1/50
3 | 6179/6179 [==============================] - 44s 7ms/sample - loss: 0.6343 - acc: 0.7791 - val_loss: 0.3347 - val_acc: 0.8655
4 | Epoch 2/50
5 | 6179/6179 [==============================] - 44s 7ms/sample - loss: 0.2198 - acc: 0.9144 - val_loss: 0.1851 - val_acc: 0.9316
6 | Epoch 3/50
7 | 6179/6179 [==============================] - 41s 7ms/sample - loss: 0.1498 - acc: 0.9414 - val_loss: 0.1318 - val_acc: 0.9393
8 | Epoch 4/50
9 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.1432 - acc: 0.9404 - val_loss: 0.2105 - val_acc: 0.9187
10 | Epoch 5/50
11 | 6179/6179 [==============================] - 41s 7ms/sample - loss: 0.1367 - acc: 0.9435 - val_loss: 0.1210 - val_acc: 0.9478
12 | Epoch 6/50
13 | 6179/6179 [==============================] - 45s 7ms/sample - loss: 0.1195 - acc: 0.9487 - val_loss: 0.1197 - val_acc: 0.9498
14 | Epoch 7/50
15 | 6179/6179 [==============================] - 44s 7ms/sample - loss: 0.1084 - acc: 0.9545 - val_loss: 0.1161 - val_acc: 0.9488
16 | Epoch 8/50
17 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.1082 - acc: 0.9548 - val_loss: 0.1476 - val_acc: 0.9505
18 | Epoch 9/50
19 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.1783 - acc: 0.9408 - val_loss: 0.1263 - val_acc: 0.9527
20 | Epoch 10/50
21 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.1137 - acc: 0.9527 - val_loss: 0.1083 - val_acc: 0.9570
22 | Epoch 11/50
23 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.1075 - acc: 0.9566 - val_loss: 0.1055 - val_acc: 0.9583
24 | Epoch 12/50
25 | 6179/6179 [==============================] - 43s 7ms/sample - loss: 0.0993 - acc: 0.9597 - val_loss: 0.1117 - val_acc: 0.9583
26 | Epoch 13/50
27 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.1113 - acc: 0.9552 - val_loss: 0.1165 - val_acc: 0.9532
28 | Epoch 14/50
29 | 6179/6179 [==============================] - 45s 7ms/sample - loss: 0.0930 - acc: 0.9591 - val_loss: 0.0878 - val_acc: 0.9660
30 | Epoch 15/50
31 | 6179/6179 [==============================] - 49s 8ms/sample - loss: 0.0879 - acc: 0.9636 - val_loss: 0.0867 - val_acc: 0.9655
32 | Epoch 16/50
33 | 6179/6179 [==============================] - 44s 7ms/sample - loss: 0.0816 - acc: 0.9675 - val_loss: 0.0980 - val_acc: 0.9595
34 | Epoch 17/50
35 | 6179/6179 [==============================] - 43s 7ms/sample - loss: 0.0846 - acc: 0.9670 - val_loss: 0.0860 - val_acc: 0.9665
36 | Epoch 18/50
37 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.0782 - acc: 0.9676 - val_loss: 0.1115 - val_acc: 0.9568
38 | Epoch 19/50
39 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.0973 - acc: 0.9629 - val_loss: 0.0787 - val_acc: 0.9667
40 | Epoch 20/50
41 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.0759 - acc: 0.9688 - val_loss: 0.0761 - val_acc: 0.9701
42 | Epoch 21/50
43 | 6179/6179 [==============================] - 43s 7ms/sample - loss: 0.0735 - acc: 0.9710 - val_loss: 0.0725 - val_acc: 0.9721
44 | Epoch 22/50
45 | 6179/6179 [==============================] - 43s 7ms/sample - loss: 0.0729 - acc: 0.9717 - val_loss: 0.0935 - val_acc: 0.9626
46 | Epoch 23/50
47 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.0701 - acc: 0.9704 - val_loss: 0.0862 - val_acc: 0.9653
48 | Epoch 24/50
49 | 6179/6179 [==============================] - 43s 7ms/sample - loss: 0.0598 - acc: 0.9765 - val_loss: 0.0771 - val_acc: 0.9704
50 | Epoch 25/50
51 | 6179/6179 [==============================] - 43s 7ms/sample - loss: 0.0637 - acc: 0.9752 - val_loss: 0.0782 - val_acc: 0.9716
52 | Epoch 26/50
53 | 6179/6179 [==============================] - 42s 7ms/sample - loss: 0.0713 - acc: 0.9728 - val_loss: 0.0747 - val_acc: 0.9716
54 | Epoch 27/50
55 | 6179/6179 [==============================] - 44s 7ms/sample - loss: 0.0618 - acc: 0.9777 - val_loss: 0.0605 - val_acc: 0.9782
56 | Epoch 28/50
57 | 6179/6179 [==============================] - 50s 8ms/sample - loss: 0.0558 - acc: 0.9794 - val_loss: 0.0795 - val_acc: 0.9672
58 | Epoch 29/50
59 | 6179/6179 [==============================] - 54s 9ms/sample - loss: 0.0529 - acc: 0.9804 - val_loss: 0.0832 - val_acc: 0.9655
60 | Epoch 30/50
61 | 6179/6179 [==============================] - 53s 9ms/sample - loss: 0.0505 - acc: 0.9807 - val_loss: 0.0694 - val_acc: 0.9728
62 | Epoch 31/50
63 | 6179/6179 [==============================] - 53s 9ms/sample - loss: 0.0582 - acc: 0.9775 - val_loss: 0.0643 - val_acc: 0.9735
64 | Epoch 32/50
65 | 6179/6179 [==============================] - 59s 9ms/sample - loss: 0.0476 - acc: 0.9812 - val_loss: 0.0563 - val_acc: 0.9772
66 | Epoch 33/50
67 | 6179/6179 [==============================] - 61s 10ms/sample - loss: 0.0565 - acc: 0.9791 - val_loss: 0.0598 - val_acc: 0.9752
68 | Epoch 34/50
69 | 6179/6179 [==============================] - 53s 8ms/sample - loss: 0.0552 - acc: 0.9782 - val_loss: 0.0838 - val_acc: 0.9675
70 | Epoch 35/50
71 | 6179/6179 [==============================] - 55s 9ms/sample - loss: 0.0482 - acc: 0.9830 - val_loss: 0.0610 - val_acc: 0.9752
72 | Epoch 36/50
73 | 6179/6179 [==============================] - 55s 9ms/sample - loss: 0.0446 - acc: 0.9820 - val_loss: 0.0557 - val_acc: 0.9784
74 | Epoch 37/50
75 | 6179/6179 [==============================] - 56s 9ms/sample - loss: 0.0444 - acc: 0.9841 - val_loss: 0.0580 - val_acc: 0.9769
76 | Epoch 38/50
77 | 6179/6179 [==============================] - 57s 9ms/sample - loss: 0.0447 - acc: 0.9822 - val_loss: 0.0511 - val_acc: 0.9794
78 | Epoch 39/50
79 | 6179/6179 [==============================] - 55s 9ms/sample - loss: 0.0377 - acc: 0.9861 - val_loss: 0.0567 - val_acc: 0.9789
80 | Epoch 40/50
81 | 6179/6179 [==============================] - 45s 7ms/sample - loss: 0.0423 - acc: 0.9835 - val_loss: 0.0692 - val_acc: 0.9740
82 | Epoch 41/50
83 | 6179/6179 [==============================] - 46s 7ms/sample - loss: 0.0344 - acc: 0.9862 - val_loss: 0.0507 - val_acc: 0.9799
84 | Epoch 42/50
85 | 6179/6179 [==============================] - 63s 10ms/sample - loss: 0.0331 - acc: 0.9879 - val_loss: 0.0450 - val_acc: 0.9840
86 | Epoch 43/50
87 | 6179/6179 [==============================] - 67s 11ms/sample - loss: 0.0337 - acc: 0.9872 - val_loss: 0.0479 - val_acc: 0.9811
88 | Epoch 44/50
89 | 6179/6179 [==============================] - 53s 9ms/sample - loss: 0.0320 - acc: 0.9885 - val_loss: 0.0498 - val_acc: 0.9799
90 | Epoch 45/50
91 | 6179/6179 [==============================] - 45s 7ms/sample - loss: 0.0312 - acc: 0.9880 - val_loss: 0.0622 - val_acc: 0.9723
92 | Epoch 46/50
93 | 6179/6179 [==============================] - 46s 7ms/sample - loss: 0.0296 - acc: 0.9877 - val_loss: 0.0465 - val_acc: 0.9833
94 | Epoch 47/50
95 | 6179/6179 [==============================] - 57s 9ms/sample - loss: 0.0328 - acc: 0.9867 - val_loss: 0.0531 - val_acc: 0.9813
96 | Epoch 48/50
97 | 6179/6179 [==============================] - 54s 9ms/sample - loss: 0.0295 - acc: 0.9879 - val_loss: 0.0538 - val_acc: 0.9801
98 | Epoch 49/50
99 | 6179/6179 [==============================] - 52s 8ms/sample - loss: 0.0430 - acc: 0.9848 - val_loss: 0.0564 - val_acc: 0.9799
100 | Epoch 50/50
101 | 6179/6179 [==============================] - 48s 8ms/sample - loss: 0.0387 - acc: 0.9888 - val_loss: 0.0474 - val_acc: 0.9825
102 | _________________________________________________________________
103 | Layer (type) Output Shape Param #
104 | =================================================================
105 | input_12 (InputLayer) (None, 128, 9) 0
106 | _________________________________________________________________
107 | lstm_1 (LSTM) (None, 128, 128) 70656
108 | _________________________________________________________________
109 | dense_28 (Dense) (None, 128, 64) 8256
110 | _________________________________________________________________
111 | global_max_pooling1d_1 (Glob (None, 64) 0
112 | _________________________________________________________________
113 | dense_29 (Dense) (None, 6) 390
114 | =================================================================
115 | Total params: 79,302
116 | Trainable params: 79,302
117 | Non-trainable params: 0
118 | _________________________________________________________________
119 | None
120 | Confusion matrix, without normalization
121 | [[766 0 0 0 1 0]
122 | [ 0 681 3 0 0 0]
123 | [ 0 0 602 0 0 0]
124 | [ 0 0 0 569 0 0]
125 | [ 0 0 0 0 692 33]
126 | [ 0 0 0 0 35 738]]
--------------------------------------------------------------------------------
/UCI HAR Dataset/UCI HAR Dataset/summary_table.md:
--------------------------------------------------------------------------------
1 | | Models | train accuracy | train loss | test accuracy | test loss |
2 | |--------|----------------|------------|---------------|-----------|
3 | | CNN | 97.49% | 0.1328 | 96.99% | 0.1423 |
4 | | DNN | 94.53% | 0.1353 | 94.05% | 0.1698 |
5 | | RNN | 98.88% | 0.0387 | 98.25% | 0.0474 |
6 |
7 | * training : validation = 6 : 4
8 | * 50 epochs run for each model
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | ## Introduction
2 | This repository applies deep learning models to Human Activity Recognition (HAR) / Activities of Daily Living (ADL) datasets. Three deep learning models, Convolutional Neural Networks (CNN), Deep Feed-Forward Neural Networks (DNN) and Recurrent Neural Networks (RNN), were applied to six HAR/ADL benchmark datasets. The goal is to gain experience in handling sensor data and in classifying human activities with deep learning.
3 |
4 | ## Benchmark datasets
5 | * [PAMAP2 dataset](https://archive.ics.uci.edu/ml/datasets/PAMAP2+Physical+Activity+Monitoring) contains data of 18 different physical activities (such as walking, cycling, playing soccer, etc.), performed by 9 subjects wearing 3 inertial measurement units and a heart rate monitor.
6 | * [OPPORTUNITY dataset](https://archive.ics.uci.edu/ml/datasets/opportunity+activity+recognition) contains data of 35 ADL activities (13 low-level, 17 mid-level and 5 high-level), collected through 23 body-worn sensors, 12 object sensors and 21 ambient sensors.
7 | * [Daphnet Gait dataset](https://archive.ics.uci.edu/ml/datasets/Daphnet+Freezing+of+Gait) contains the annotated readings of 3 acceleration sensors at the hip and leg of Parkinson's disease patients that experience freezing of gait (FoG) during walking tasks.
8 | * [UCI HAR dataset](https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones) contains data of 6 different physical activities (walking, walking upstairs, walking downstairs, sitting, standing and laying), performed by 30 subjects wearing a smartphone (Samsung Galaxy S II) on the waist.
9 | * [Sphere dataset](https://www.irc-sphere.ac.uk/sphere-challenge/home) contains data collected from three sensing modalities (wrist-worn accelerometer, RGB-D cameras, passive environmental sensors). 20 ADL activities, including static and transition activities, were labeled.
10 | * [SHL dataset](http://www.shl-dataset.org/) contains multi-modal data from a body-worn camera and from 4 smartphones, carried simultaneously at typical body locations. The SHL dataset contains 750 hours of labelled locomotion data: Car (88 h), Bus (107 h), Train (115 h), Subway (89 h), Walk (127 h), Run (21 h), Bike (79 h), and Still (127 h).
11 |
12 | ## Approach
13 | * For each dataset, a sliding window approach was applied to segment the data. Each segment contains a series of sequential data points (usually 25), and two consecutive windows overlap by 50% (see the sketch after this list).
14 | * After data preprocessing, which includes reading files, data cleaning, data visualization, relabeling and data segmentation, the data was saved into hdf5 files.
15 | * Deep learning models including CNN, DNN and RNN were applied. For each model in each dataset, hyperparameters were optimized to get the best performance.
16 | * To combine data from multiple modalities, different data fusion methods were applied to the Sphere and SHL datasets.
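
The sliding-window segmentation boils down to a few lines. Below is a minimal sketch, assuming the preprocessed data is a 2-D NumPy array whose last column holds the integer activity label; the function and variable names are illustrative, not the exact code from the dataProcessing.py scripts.

```python
import numpy as np

def sliding_windows(data, window_size=25):
    # Minimal sketch: fixed-size windows with 50% overlap; assumes the
    # last column of `data` is the integer activity label.
    X, y = [], []
    step = window_size // 2                    # 50% overlap between consecutive windows
    for start in range(0, len(data) - window_size + 1, step):
        window = data[start:start + window_size]
        if len(set(window[:, -1])) == 1:       # keep only single-activity windows
            X.append(window[:, :-1])           # sensor channels
            y.append(int(window[0, -1]))       # the window's label
    return np.asarray(X), np.asarray(y, dtype=int)
```
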
17 | ## Dependencies
18 | * Python 3.7
19 | * tensorflow 1.13.1
20 |
21 | ## Usage
22 | First download the dataset and put dataProcessing.py and models.py in the same directory. Then run dataProcessing.py to generate the h5 file. Finally, switch the model type in models.py and run the different deep learning models on the generated h5 file (see the example below).
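
As a quick sanity check before training, the generated h5 file can be inspected directly; each dataProcessing.py script stores the windows under the key `inputs` and the labels under `labels`. A minimal sketch follows; the file name depends on the dataset (e.g. uci_har.h5).

```python
import h5py
import numpy as np

# Minimal sketch: load a generated h5 file and check its contents.
with h5py.File('uci_har.h5', 'r') as f:
    X = np.array(f['inputs'])   # shape: (num_windows, window_size, channels)
    y = np.array(f['labels'])   # one integer activity label per window

print(X.shape, y.shape, sorted(set(y)))
```
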
23 |
24 | ## Note
25 | I am still working on tuning the hyperparameters of the models on certain datasets. There will be more updates.
26 |
27 |
--------------------------------------------------------------------------------