├── trained_model_RRM.pkl
├── model_test_on_cspc2020.py
├── result_of_the_model&manully_labeled
│   ├── CSPC2020label_02_by_manurally.mat
│   ├── CSPC2020label_04_by_manurally.mat
│   ├── CSPC2020label_02_result_by_the_RRM_model.mat
│   └── CSPC2020label_04_result_by_the_RRM_model.mat
├── preprocessing_devided_into_10s_part.m
├── README.md
└── ecgsignalqulity__model_training.py

/trained_model_RRM.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiangyuzhangseu/signal-quality-assessment-by-residual-recurrent-based-deep-learning-method/HEAD/trained_model_RRM.pkl
--------------------------------------------------------------------------------
/model_test_on_cspc2020.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiangyuzhangseu/signal-quality-assessment-by-residual-recurrent-based-deep-learning-method/HEAD/model_test_on_cspc2020.py
--------------------------------------------------------------------------------
/result_of_the_model&manully_labeled/CSPC2020label_02_by_manurally.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiangyuzhangseu/signal-quality-assessment-by-residual-recurrent-based-deep-learning-method/HEAD/result_of_the_model&manully_labeled/CSPC2020label_02_by_manurally.mat
--------------------------------------------------------------------------------
/result_of_the_model&manully_labeled/CSPC2020label_04_by_manurally.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiangyuzhangseu/signal-quality-assessment-by-residual-recurrent-based-deep-learning-method/HEAD/result_of_the_model&manully_labeled/CSPC2020label_04_by_manurally.mat
--------------------------------------------------------------------------------
/result_of_the_model&manully_labeled/CSPC2020label_02_result_by_the_RRM_model.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiangyuzhangseu/signal-quality-assessment-by-residual-recurrent-based-deep-learning-method/HEAD/result_of_the_model&manully_labeled/CSPC2020label_02_result_by_the_RRM_model.mat
--------------------------------------------------------------------------------
/result_of_the_model&manully_labeled/CSPC2020label_04_result_by_the_RRM_model.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiangyuzhangseu/signal-quality-assessment-by-residual-recurrent-based-deep-learning-method/HEAD/result_of_the_model&manully_labeled/CSPC2020label_04_result_by_the_RRM_model.mat
--------------------------------------------------------------------------------
/preprocessing_devided_into_10s_part.m:
--------------------------------------------------------------------------------
% Split each record into non-overlapping 10 s segments (4000 samples at 400 Hz),
% one segment per column of ecgpart.
clear all; close all; clc;
load A02.mat
% load A04.mat
number_seg = floor(length(ecg)/4000);
ecgpart = zeros(4000, number_seg);
for i = 1:number_seg
    ecgpart(1:4000, i) = ecg(1+4000*(i-1):4000*i);
end
save ecgpart_02.mat ecgpart

ecgpart = [];
load A04.mat
number_seg = floor(length(ecg)/4000);
ecgpart = zeros(4000, number_seg);
for i = 1:number_seg
    ecgpart(1:4000, i) = ecg(1+4000*(i-1):4000*i);
end
save ecgpart_04.mat ecgpart

--------------------------------------------------------------------------------
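For reference, the same segmentation step can be written in Python. The sketch below is not part of the repository; it assumes each record stores its signal in a variable named ecg, as in the MATLAB script above (records saved in MATLAB v7.3 format would need h5py instead of scipy.io.loadmat):

# Unofficial Python equivalent of preprocessing_devided_into_10s_part.m:
# split one record into non-overlapping 10 s segments (4000 samples at 400 Hz).
import numpy as np
import scipy.io as sio

def segment_record(mat_path, seg_len=4000):
    ecg = np.ravel(sio.loadmat(mat_path)['ecg'])   # variable name taken from the MATLAB script
    n_seg = len(ecg) // seg_len
    # one column per 10 s segment, matching the MATLAB 'ecgpart' layout
    return ecg[:n_seg * seg_len].reshape(n_seg, seg_len).T

if __name__ == '__main__':
    sio.savemat('ecgpart_02.mat', {'ecgpart': segment_record('A02.mat')})
    sio.savemat('ecgpart_04.mat', {'ecgpart': segment_record('A04.mat')})
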
/README.md:
--------------------------------------------------------------------------------
# signal-quality-assessment-by-residual-recurrent-based-deep-learning-method
This model was designed to classify the signal quality of dynamic ECGs collected by wearable devices. To avoid the misjudgments caused by manually designed features on ECG data and to improve the classification accuracy, we designed a deep learning model that combines a residual module and a recurrent module. The signal quality is classified into three categories: good (label: 0), medium (label: 1) and bad (label: 2).
The model was trained on the wearable ECG segments with the ten-fold cross-validation method. The collected dynamic ECG segments were randomly divided into 10 parts; in each fold, nine parts were used as training data and the remaining part was used as validation data.


The ECG data were collected by wearable ECG monitoring devices, which record standard lead-I and lead-II ECG signals with a sampling frequency of 400 Hz and 16-bit resolution over a range of ±10 mV. The data were collected from twenty patients who have or had cardiovascular disease, and the records contain a variety of abnormal heart rhythms, such as premature ventricular beats (PVC), premature atrial beats (PAC) and atrial fibrillation (AF). A fixed time window of 10 s was used to segment the ECG episodes.

The collected ECG segments were classified into three categories according to the complexity of the noise in the segments and its influence on the signal feature points. A good-quality signal indicates that the ECG segment is subject to only a small amount of noise interference and contains clear QRS waveforms. Feature points such as the T and P waves are clearly visible and can be easily recognized.

A medium-quality signal indicates that the ECG segment is interfered with by various noises but contains no large-scale pulse artifacts, and feature points such as the P and T waves cannot be easily located. However, the QRS waves in the segment can still be discriminated accurately, which means the noise in the segment does not affect the judgment of the heart rate information.

A bad-quality signal indicates that the ECG segment contains little useful information, such as a lead-detachment signal or white noise, or that the segment is interfered with by strong noise. Some signals may contain large pulse interference that affects the detection of the QRS complexes, and some segments may contain large baseline drift, resulting in extreme changes in the overall amplitude and the inability to locate the QRS positions.

# codes
The model was trained with PyTorch. The Python training code is available, and the trained model is saved as "trained_model_RRM.pkl". The test code is also available.
We also provide the signal quality results of the model for patients 02 and 04 of the CSPC 2020 database; the manual labels are also provided for assessing the accuracy of the proposed model.

The model was developed by Xiangyu Zhang (230179249@seu.edu.cn, Southeast University, China).

Further details can be found in our paper, "Deep-learning based signal quality assessment for wearable ECGs".
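
The snippet below is a minimal, unofficial usage sketch rather than one of the repository scripts. It assumes that the network class definitions (`model_1d` and its building blocks) from `ecgsignalqulity__model_training.py` are already defined in the current session (importing that file directly would also run the training code), that `ecgpart_02.mat` was produced by the preprocessing script, and that `torch.load` is allowed to unpickle a full model (recent PyTorch versions may require `weights_only=False`):

```python
import numpy as np
import scipy.io as sio
import scipy.signal as signal
import torch

def preprocess(segment):
    # same per-segment preprocessing as the training script:
    # z-score normalization followed by zero-phase high-pass filtering
    seg = segment - np.mean(segment)
    std = np.std(segment)
    if std != 0:
        seg = seg / std
    b = np.array([0.995155038209359, -1.99031007641872, 0.995155038209359])
    a = np.array([1, -1.99028660262621, 0.990333550211225])
    return signal.filtfilt(b, a, seg)

ecgpart = sio.loadmat('ecgpart_02.mat')['ecgpart']   # 10 s segments, one per column
segment = preprocess(ecgpart[:, 0])                  # first segment (4000 samples)

model = torch.load('trained_model_RRM.pkl', map_location='cpu')
model.eval()
x = torch.FloatTensor(segment.copy()).view(1, 1, -1)
with torch.no_grad():
    quality = int(torch.argmax(model(x), dim=1))     # 0: good, 1: medium, 2: bad
print(quality)
```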

--------------------------------------------------------------------------------
/ecgsignalqulity__model_training.py:
--------------------------------------------------------------------------------

import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torchsummary
import scipy.io as sio
from torch.utils.data import DataLoader, Dataset, TensorDataset
import numpy as np
import h5py
from sklearn.preprocessing import minmax_scale
import torch.optim as optim
from keras.utils import to_categorical
from sklearn.model_selection import StratifiedKFold
from scipy.fftpack import fft, ifft
import scipy.signal as signal


def zscore(data):
    # Z-score normalization of one ECG segment (plain mean removal if the
    # segment is constant).
    data_mean = np.mean(data)
    data_std = np.std(data, axis=0)
    if data_std != 0:
        data = (data - data_mean) / data_std
    else:
        data = data - data_mean
    return data


def butthigh(ecg, fs):
    # Zero-phase high-pass IIR filtering (filtfilt) with fixed coefficients,
    # used to suppress baseline wander before feeding segments to the network.
    b1 = np.array([0.995155038209359, -1.99031007641872, 0.995155038209359])
    a1 = np.array([1, -1.99028660262621, 0.990333550211225])
    ecg_copy = np.copy(ecg)
    ecg1 = signal.filtfilt(b1, a1, ecg_copy)
    return ecg1


def hobalka(ecg1, fs, fmin, fmax):
    # Frequency-domain envelope of the signal between fmin and fmax
    # (Hamming-weighted band selection); not used in the training pipeline below.
    ecg = np.copy(ecg1)
    n = len(ecg)
    ecg_fil = fft(ecg)
    if fmin > 0:
        imin = int(fmin / (fs / n))
    else:
        imin = 1
    ecg_fil[0] = ecg_fil[0] / 2
    if fmax < fs / 2:
        imax = int(fmax / float(fs / n))
    else:
        imax = int(n / 2)
    hamwindow = np.hamming(imax - imin)
    hamsize = len(hamwindow)
    yy = np.zeros(len(ecg_fil), dtype=complex)
    istred = int((imax + imin) / 2)
    dolni = np.arange(istred - 1, imax)
    ld = len(dolni)
    yy[0: ld] = np.multiply(ecg_fil[dolni - 1], hamwindow[int(np.floor(hamsize / 2)) - 1: hamsize])
    horni = np.arange(imin - 1, istred - 1)
    lh = len(horni)
    end = len(yy)
    yy[end - lh - 1: end - 1] = np.multiply(ecg_fil[horni], hamwindow[0: int(np.floor(hamsize / 2))])
    ecg_fil = abs(ifft(yy)) * 2
    return ecg_fil


class conv1d_inception_block(nn.Module):
    # Inception-style 1-D convolution block: parallel kernels of size 3, 5 and 7
    # whose summed outputs are fused by a 1x1 convolution.

    def __init__(self, in_ch, out_ch):
        super(conv1d_inception_block, self).__init__()

        self.conv1_1 = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
            nn.Dropout(0.3))
        self.conv1_3 = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=5, stride=1, padding=2, bias=True),
            nn.Dropout(0.3))
        self.conv1_5 = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=7, stride=1, padding=3, bias=True),
            nn.Dropout(0.3))
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=1, padding=0, bias=True),
            #nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=1, padding=0, bias=True),
            nn.Dropout(0.3),
            nn.BatchNorm1d(out_ch),
            nn.ReLU())

    def forward(self, x):
        x1 = self.conv1_1(x)
        x3 = self.conv1_3(x)
        x5 = self.conv1_5(x)
        return self.conv(x1 + x3 + x5)


class Recurrent_block(nn.Module):
    # Recurrent module: the inception block output is added back to the input
    # and fused by a 1x1 convolution (t controls the number of iterations).

    def __init__(self, out_ch, t=2):
        super(Recurrent_block, self).__init__()
        #self.drop_layer = nn.Dropout(0.5)
        self.t = t
        self.out_ch = out_ch
        self.conv = nn.Sequential(
            conv1d_inception_block(out_ch, out_ch),
            nn.Dropout(0.3),
            nn.BatchNorm1d(out_ch),
            nn.ReLU()
        )
        self.conv1_1 = nn.Sequential(
            nn.Conv1d(out_ch, out_ch, kernel_size=1, stride=1, padding=0, bias=True))

    def forward(self, x):
        for i in range(self.t):
            if i == 0:
                x1 = self.conv(x)
            out = self.conv1_1(x1 + x)
        return out


class Residual_block(nn.Module):
    # Residual module: two stacked inception blocks with a skip connection
    # fused by a 1x1 convolution.

    def __init__(self, out_ch, t=2):
        super(Residual_block, self).__init__()
        #self.drop_layer = nn.Dropout(0.5)
        self.t = t
        self.out_ch = out_ch
        self.conv = nn.Sequential(
            conv1d_inception_block(out_ch, out_ch),
            nn.Dropout(0.3),
            nn.BatchNorm1d(out_ch),
            nn.ReLU()
        )
        self.conv1_1 = nn.Sequential(
            nn.Conv1d(out_ch, out_ch, kernel_size=1, stride=1, padding=0, bias=True))

    def forward(self, x):
        x1 = self.conv(x)
        x1 = self.conv(x1)
        out = self.conv1_1(x1 + x)
        return out


class R_1Dcnn_RCNN_block(nn.Module):
    # Residual-recurrent module (RRM): the input is projected to out_ch channels
    # and passed through the recurrent, inception and residual branches in
    # parallel; the branch outputs are summed and fused by a 1x1 convolution.

    def __init__(self, in_ch, out_ch, t=2):
        super(R_1Dcnn_RCNN_block, self).__init__()

        self.RCNN1 = nn.Sequential(
            Recurrent_block(out_ch, t=t))

        self.RCNN2 = nn.Sequential(
            conv1d_inception_block(out_ch, out_ch))

        self.RCNN3 = nn.Sequential(
            Residual_block(out_ch, out_ch))

        self.Conv = nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=1, padding=0)
        self.Conv1_1 = nn.Sequential(
            nn.Conv1d(out_ch, out_ch, kernel_size=1, stride=1, padding=0, bias=True),
            nn.BatchNorm1d(out_ch),
            nn.ReLU())

    def forward(self, x):
        x = self.Conv(x)
        x1 = self.RCNN3(x)
        x2 = self.RCNN2(x)
        x3 = self.RCNN1(x)
        out = self.Conv1_1(x3 + x2 + x1)
        return out


'''
class Attention_block_self(nn.Module):

    def __init__(self, F_l, F_int):
        super(Attention_block_self, self).__init__()

        self.W_g = nn.Sequential(
            nn.Conv1d(F_l, F_int, kernel_size=1, stride=1, padding=0, bias=True),
            nn.BatchNorm1d(F_int)
        )

        self.psi = nn.Sequential(
            nn.Conv1d(F_int, 1, kernel_size=1, stride=1, padding=0, bias=True),
            nn.BatchNorm1d(1),
            nn.Sigmoid()
        )

        self.Tanh = nn.Tanh()

    def forward(self, x):
        x1 = self.W_g(x)
        psi = self.Tanh(x1)
        psi = self.psi(psi)
        out = x * psi
        return out
'''


class model_1d(nn.Module):
    # Full classifier: three stacked RRM blocks with max-pooling in between,
    # followed by a fully connected layer and log-softmax over the three
    # quality classes (0: good, 1: medium, 2: bad).

    def __init__(self, img_ch=1, output_ch=1, t=1):
        super(model_1d, self).__init__()

        n1 = 6
        filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]
        self.Maxpool0 = nn.MaxPool1d(kernel_size=2, stride=2)
        self.Maxpool1 = nn.MaxPool1d(kernel_size=8, stride=8)
        self.Maxpool2 = nn.MaxPool1d(kernel_size=4, stride=4)
        self.RRCNN1 = R_1Dcnn_RCNN_block(img_ch, filters[3], t=t)
        self.RRCNN2 = R_1Dcnn_RCNN_block(filters[3], filters[2], t=t)
        self.RRCNN3 = R_1Dcnn_RCNN_block(filters[2], filters[2], t=t)
        self.Softmax = nn.LogSoftmax(dim=1)
        # 168 = 24 channels x 7 time steps left after pooling a 4000-sample input
        self.fc1 = nn.Linear(168, 3)

    def forward(self, x):
        x = self.Maxpool0(x)
        e1 = self.RRCNN1(x)
        e2 = self.Maxpool2(e1)
        e2 = self.RRCNN2(e2)
        e3 = self.Maxpool2(e2)
        e3 = self.RRCNN3(e3)
        e3 = self.Maxpool2(e3)
        e4 = self.Maxpool2(e3)
        e7 = e4.view(e4.size(0), e4.size(1) * e4.size(2))
        out = self.fc1(e7)
        out = self.Softmax(out)
        return out


def train(x, model, los_train, acc_train):
    # One training epoch; x is the epoch index used to store the epoch loss and
    # accuracy in los_train / acc_train. Uses the global trainloader and
    # criterion defined in the cross-validation loop below. Note that a fresh
    # SGD optimizer is created every epoch, so its momentum buffer is reset.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.0050, momentum=0.90,
                                dampening=0, weight_decay=0.0001, nesterov=False)
    epoch = x
    j = x
    model.train()
    running_loss = 0.0
    train_correct = 0.0

    for i, data in enumerate(trainloader):
        inputs, labels = data
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
        optimizer.zero_grad()  # zero the gradient buffers
        output = model(inputs)
        accuracy = np.mean((torch.argmax(output.cpu(), 1) == torch.argmax(labels.cpu(), 1)).numpy())
        loss = criterion(output, torch.argmax(labels, dim=1))
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        train_correct += accuracy

        if i % np.ceil(len(trainloader.dataset) / 128) == np.ceil(len(trainloader.dataset) / 128) - 1:
            # print once per epoch, after the last mini-batch
            print('[%d, %5d] loss: %.3f train_acc:%.4f' %
                  (epoch + 1, i + 1, running_loss / (len(trainloader.dataset) / 128),
                   train_correct / (len(trainloader.dataset) / 128)))
            los_train[j] = running_loss / (len(trainloader.dataset) / 128)
            acc_train[j] = train_correct / (len(trainloader.dataset) / 128)
            running_loss = 0.0
            train_correct = 0.0
    return los_train, acc_train


def validation(j, model, los_test, acc_test):
    # Evaluate on the held-out fold and store the epoch loss / accuracy.
    model.eval()
    test_loss = 0.0
    test_correct = 0.0
    for inputs1, labels1 in testloader:
        inputs1, labels1 = Variable(inputs1.cuda()), Variable(labels1.cuda())
        output = model(inputs1)
        accuracy1 = (torch.argmax(output.cpu(), 1) == torch.argmax(labels1.cpu(), 1)).numpy().sum()
        loss1 = criterion(output, torch.argmax(labels1, dim=1))
        test_loss += loss1.item()
        test_correct += accuracy1
    test_loss /= (len(testloader.dataset) / 128)
    test_correct /= (len(testloader.dataset))
    los_test[j] = test_loss
    acc_test[j] = test_correct
    print('test_loss:%.4f, test_correct%.4f' % (test_loss, test_correct))


def test():
    # Same as validation() but without storing the per-epoch curves.
    model.eval()
    test_loss = 0.0
    test_correct = 0.0
    for inputs1, labels1 in testloader:
        inputs1, labels1 = Variable(inputs1.cuda()), Variable(labels1.cuda())
        output = model(inputs1)
        accuracy1 = (torch.argmax(output.cpu(), 1) == torch.argmax(labels1.cpu(), 1)).numpy().sum()
        loss1 = criterion(output, torch.argmax(labels1, dim=1))
        test_loss += loss1.item()
        test_correct += accuracy1
    test_loss /= (len(testloader.dataset) / 128)
    test_correct /= (len(testloader.dataset))
    print('test_loss:%.4f, test_correct%.4f' % (test_loss, test_correct))


# Load the labelled 10 s segments: A, B and C are expected to hold the good,
# medium and bad quality segments (3000 segments each, matching the label
# vectors built below).
data = h5py.File('ecgs.mat', 'r')
A = data['A']
B = data['B']
C = data['C']

ecga = np.vstack((A, B, C))
label1 = np.zeros((3000, 1))
label2 = np.zeros((3000, 1)) + 1
label3 = np.zeros((3000, 1)) + 2
labela = np.vstack((label1, label2, label3))
label1 = []
label2 = []
label3 = []
A = []
B = []
C = []

# Ten-fold cross-validation: in each fold, nine parts are used for training and
# the remaining part for validation.
file_number = 0
sfolder = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
for traindata, testdata in sfolder.split(ecga, labela):
    print(testdata)
    file_number = file_number + 1
    ecgc = ecga[traindata]
    ecgt = ecga[testdata]
    labelc = labela[traindata]
    labelt = labela[testdata]
    # Per-segment preprocessing: z-score normalization followed by zero-phase
    # high-pass filtering (fs = 400 Hz).
    for FF in range(len(ecgc)):
        ecgc[FF, :] = butthigh(zscore(ecgc[FF, :]), 400)
    for FF1 in range(len(ecgt)):
        ecgt[FF1, :] = butthigh(zscore(ecgt[FF1, :]), 400)

    ecgc = torch.FloatTensor(ecgc)
    ecgc = ecgc.unsqueeze(1)
    labelc = to_categorical(labelc)
    labelc = torch.FloatTensor(labelc)
    deal_dataset = TensorDataset(ecgc, labelc)
    trainloader = DataLoader(dataset=deal_dataset, batch_size=128, shuffle=True, num_workers=0)

    ecgt = torch.FloatTensor(ecgt)
    ecgt = ecgt.unsqueeze(1)
    labelt = to_categorical(labelt)
    labelt = torch.FloatTensor(labelt)
    deal_test_dataset = TensorDataset(ecgt, labelt)
    testloader = DataLoader(dataset=deal_test_dataset, batch_size=128, shuffle=True, num_workers=0)

    model = model_1d()
    model = model.cuda()
    #torchsummary.summary(model.cuda(), input_size=(1,512,241))
    criterion = nn.CrossEntropyLoss()  # note: the model already ends in LogSoftmax, so log-softmax is applied twice during training
    #optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)
    los_train = np.zeros(200)
    acc_train = np.zeros(200)
    los_test = np.zeros(200)
    acc_test = np.zeros(200)
    for x in range(200):
        los_train, acc_train = train(x, model, los_train, acc_train)
        validation(x, model, los_test, acc_test)
        # Save a checkpoint whenever the validation accuracy exceeds 0.85.
        if acc_test[x] > 0.85:
            matname = 'conmodel_' + str(file_number) + '_epoch' + str(x) + '.pkl'
            torch.save(model, matname)
    # Save the training / validation curves of this fold.
    filename_save = 'c11inception_10foldaccloss_model' + str(file_number) + '.mat'
    sio.savemat(filename_save, {'los_train': los_train, 'acc_train': acc_train, 'los_test': los_test, 'acc_test': acc_test})

--------------------------------------------------------------------------------
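The sketch below is one possible way to compare the released model outputs with the manual labels for record 02 (it is not part of the repository). The variable names stored inside the .mat files are not documented above, so the key lookup is an assumption and may need to be adjusted; swap in the _04 files for record 04:

# Unofficial comparison of the RRM model output with the manual labels.
import numpy as np
import scipy.io as sio

def load_labels(path):
    mat = sio.loadmat(path)                                # use h5py instead if the file is v7.3
    key = [k for k in mat if not k.startswith('__')][0]    # assumed: one label vector per file
    return np.ravel(mat[key])

manual = load_labels('result_of_the_model&manully_labeled/CSPC2020label_02_by_manurally.mat')
predicted = load_labels('result_of_the_model&manully_labeled/CSPC2020label_02_result_by_the_RRM_model.mat')
print('agreement with the manual labels: %.4f' % np.mean(manual == predicted))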