├── .vscode
│   └── settings.json
├── README.md
├── best_model.01-0.48.h5
├── best_model.01-0.60.h5
├── best_model.02-0.49.h5
├── best_model.03-0.56.h5
├── best_model.04-0.69.h5
├── best_model.05-0.77.h5
├── best_model.07-0.81.h5
├── best_model.09-0.85.h5
├── best_model.10-0.87.h5
├── best_model.15-0.88.h5
├── main.py
└── sample_codes
    ├── README.txt
    ├── answers.txt
    ├── answers1.txt
    ├── preliminary_challenge.py
    ├── run.sh
    └── setup.sh

/.vscode/settings.json:
--------------------------------------------------------------------------------
{
    "python.pythonPath": "C:\\Files\\APPs\\RuanJian\\Miniconda3\\envs\\TF_GPU\\python.exe"
}
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

## [Deep Learning in Practice] · ECG Classification with a Keras Conv1D Model: An Open-Source Hands-On Tutorial

> Personal homepage: [http://www.yansongsong.cn/](https://link.zhihu.com/?target=http%3A//www.yansongsong.cn/)
> Project GitHub repo: [https://github.com/xiaosongshine/preliminary_challenge_baseline_keras](https://link.zhihu.com/?target=https%3A//github.com/xiaosongshine/preliminary_challenge_baseline_keras)

## Overview

This tutorial grew out of my entry in the First China ECG Intelligence Challenge, whose preliminary round asked contestants to design an algorithm that automatically classifies ECG waveforms. I built a Conv1D-based model with Keras and open-sourced the code as a baseline. The tutorial covers data preprocessing, model construction, network training, and model application; using nothing more than plain one-dimensional convolutions, the baseline reaches 88% test accuracy. Several teams tuned this baseline, achieved strong results, and advanced to the semifinal.

## About the competition

In support of the national Healthy China strategy and of policies promoting the integration of healthcare and big data, the First China ECG Intelligence Challenge has officially launched, co-hosted by the School of Clinical Medicine and the Institute for Data Science of Tsinghua University, the Jingjin Gaocun Science and Technology Innovation Park of Wuqing District, Tianjin, and several major hospitals. Global registration is open until 24:00 on March 31, 2019, and the total prize pool is expected to reach one million RMB. The official registration site is already online; universities, hospitals, startup teams, and anyone committed to advancing ECG AI in China are welcome to take part.

Official registration site for the First China ECG Intelligence Challenge >> [http://mdi.ids.tsinghua.edu.cn](https://link.zhihu.com/?target=http%3A//mdi.ids.tsinghua.edu.cn/)

## The data

The full training and test sets contain 1,000 routine ECG recordings in total: 600 in the training set and 400 in the test set, collected from several public datasets. Teams must design and implement their algorithm on the training set, which carries normal/abnormal labels, and then predict on the unlabeled test set.

The ECG signals are sampled at 500 Hz. So that teams can read the data from any programming language, every recording is stored in MAT format, and each file holds the voltage signals of 12 leads. The training labels are stored in a txt file, where 0 denotes normal and 1 denotes abnormal.

## Problem analysis

In short: the preliminary dataset has 1,000 samples. The 600 training examples are labeled and can be used to train a model, while the 400 test examples are unlabeled and must be predicted with the trained model. The task is therefore a binary classification problem, and the solution breaks down into the following steps (a quick data sanity check is sketched right after this list):

1. Data loading and preprocessing
2. Model construction
3. Model training
4. Inference and submission of the predictions
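Before writing any pipeline code, it is worth checking the description above against the raw files. The following is a minimal sanity-check sketch of my own, not part of the original baseline; it assumes the training data is unpacked to `preliminary/TRAIN/`, that the labels sit in `preliminary/reference.txt` as tab-separated name/label pairs, and that each MAT file stores its signal under the key `"data"` (the same conventions the loading code later in this tutorial relies on):

```python
from scipy.io import loadmat
import pandas as pd

# Peek at one training record: expect 12 leads x 5000 samples (10 s at 500 Hz)
mat = loadmat("preliminary/TRAIN/TRAIN101.mat")
print(mat["data"].shape)

# Peek at the labels: one "name<TAB>label" row per record, 0 = normal, 1 = abnormal
labels = pd.read_csv("preliminary/reference.txt", sep="\t", header=None)
print(labels.head())
print(labels[1].value_counts())  # rough class balance of the 600 training examples
```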
## Implementation

Having analyzed the problem, we split the work into the four subtasks above. The first step is:

### 1. Data loading and preprocessing

Recall the data description: the ECG signals are sampled at 500 Hz; every recording is stored in MAT format so that teams can read it from any programming language; each file holds the voltage signals of 12 leads; and the training labels are stored in a txt file, where 0 denotes normal and 1 denotes abnormal.

From this description we learn that:

- The data is saved in MAT files (**this dictates how we will read it later**).
- The sampling rate is 500 Hz (this fact is barely used, but it is worth knowing: 500 points are captured per second, and since each record turns out to contain 5,000 points per lead, every sample is a 10-second ECG strip).
- Each file stores the voltage signals of 12 leads (that is, 12 electrode placements; think of it as measuring a temperature with 12 thermometers to obtain more reliable information. The figure below gives a brief overview of the lead system. **Note that since 12 leads are provided, we should use all of them: training and predicting on a single lead would also work, but experience says that more features yield better results.**)

![](https://pic4.zhimg.com/80/v2-f657755203861f52f5b03023900a2087_hd.jpg)

**Data-processing helpers:**

```python
import keras
from scipy.io import loadmat
import matplotlib.pyplot as plt
import glob
import numpy as np
import pandas as pd
import math
import os
from keras.layers import *
from keras.models import *
from keras.objectives import *


BASE_DIR = "preliminary/TRAIN/"

# Per-lead normalization: subtract the mean and divide by the max
# (the 2e-12 guards against division by zero)
def normalize(v):
    return (v - v.mean(axis=1).reshape((v.shape[0], 1))) / (v.max(axis=1).reshape((v.shape[0], 1)) + 2e-12)

# Load one MAT file and return a normalized (5000, 12) array
def get_feature(wav_file, Lens=12, BASE_DIR=BASE_DIR):
    mat = loadmat(BASE_DIR + wav_file)
    dat = mat["data"]
    feature = dat[0:12]
    return normalize(feature).transpose()

# Convert a label index into a one-hot vector
def convert2oneHot(index, Lens):
    hot = np.zeros((Lens,))
    hot[index] = 1
    return hot

TXT_DIR = "preliminary/reference.txt"
MANIFEST_DIR = "preliminary/reference.csv"
```

**Load one record and plot it**

```python
if __name__ == "__main__":
    dat1 = get_feature("TRAIN101.mat")
    print(dat1.shape)
    # one data shape is (5000, 12)
    plt.plot(dat1[:, 0])
    plt.show()
```

![](https://pic4.zhimg.com/80/v2-b218248bc5935f2b7f77afa575fb8a63_hd.jpg)

From the output we can see that each lead is a sequence of 5,000 points, so with 12 leads every sample is a 12×5000 matrix, much like a photo with a 12×5000 resolution (after the transpose in `get_feature`, the network receives it as a (5000, 12) array). All we need to do is read each record, normalize it, and feed it to the network for training.

**Label handling**

```python
def create_csv(TXT_DIR=TXT_DIR):
    lists = pd.read_csv(TXT_DIR, sep="\t", header=None)
    lists = lists.sample(frac=1)  # shuffle the rows
    lists.to_csv(MANIFEST_DIR, index=None)
    print("Finish save csv")
```

Here I read reference.txt, shuffle it, and save the result to reference.csv. **Be sure to shuffle the data, or training will go badly: in the original file all the leading labels are 1 and all the trailing labels are 0.**

**Batch generator**

```python
Batch_size = 20

def xs_gen(path=MANIFEST_DIR, batch_size=Batch_size, train=True):
    img_list = pd.read_csv(path)
    if train:
        img_list = np.array(img_list)[:500]  # first 500 shuffled records for training
        print("Found %s train items." % len(img_list))
        print("list 1 is", img_list[0])
        steps = math.ceil(len(img_list) / batch_size)  # batches per epoch
    else:
        img_list = np.array(img_list)[500:]  # remaining 100 records for validation
        print("Found %s test items." % len(img_list))
        print("list 1 is", img_list[0])
        steps = math.ceil(len(img_list) / batch_size)  # batches per epoch
    while True:
        for i in range(steps):
            batch_list = img_list[i * batch_size : i * batch_size + batch_size]
            np.random.shuffle(batch_list)
            batch_x = np.array([get_feature(file) for file in batch_list[:, 0]])
            batch_y = np.array([convert2oneHot(label, 2) for label in batch_list[:, 1]])
            yield batch_x, batch_y
```

I read the data through a generator, which loads it batch by batch and speeds up training; loading everything into memory at once also works, so it comes down to personal preference.
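To confirm the generator behaves as expected, you can pull a single batch and inspect the shapes. This is a small verification sketch of my own, not part of the original baseline, and it assumes `reference.csv` has already been created by `create_csv()`:

```python
gen = xs_gen(train=True)
batch_x, batch_y = next(gen)
print(batch_x.shape)  # (20, 5000, 12): Batch_size x TIME_PERIODS x num_sensors
print(batch_y.shape)  # (20, 2): one-hot labels
```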
### 2. Model construction

With the data pipeline in place, the next step is building the model. I use Keras because it is simple and convenient; TensorFlow, PyTorch, or scikit-learn work just as well if you prefer them.

The architecture could be a CNN, an RNN, an attention model, or a fusion of several models. To get the ball rolling, this baseline uses a one-dimensional CNN; see [this introduction to 1D CNNs](https://link.zhihu.com/?target=https%3A//blog.csdn.net/xiaosongshine/article/details/88614450).

**Building the model**

```python
TIME_PERIODS = 5000
num_sensors = 12

def build_model(input_shape=(TIME_PERIODS, num_sensors), num_classes=2):
    model = Sequential()
    #model.add(Reshape((TIME_PERIODS, num_sensors), input_shape=input_shape))
    model.add(Conv1D(16, 16, strides=2, activation='relu', input_shape=input_shape))
    model.add(Conv1D(16, 16, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(64, 8, strides=2, activation='relu', padding="same"))
    model.add(Conv1D(64, 8, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(128, 4, strides=2, activation='relu', padding="same"))
    model.add(Conv1D(128, 4, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(256, 2, strides=1, activation='relu', padding="same"))
    model.add(Conv1D(256, 2, strides=1, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(GlobalAveragePooling1D())
    model.add(Dropout(0.3))
    model.add(Dense(num_classes, activation='softmax'))
    return model
```
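The output lengths that appear in the summary below follow from standard convolution arithmetic: a Conv1D with `padding='valid'` produces floor((L − k) / s) + 1 time steps, one with `padding='same'` produces ceil(L / s), and each `MaxPooling1D(2)` halves the length (rounding down). The following sketch, added here purely for verification and not part of the original baseline, reproduces the lengths by hand:

```python
import math

def valid_len(L, k, s):  # Conv1D, padding='valid'
    return (L - k) // s + 1

def same_len(L, s):      # Conv1D, padding='same'
    return math.ceil(L / s)

L = 5000
L = valid_len(L, 16, 2)  # conv1d_1 -> 2493
L = same_len(L, 2)       # conv1d_2 -> 1247
L = L // 2               # max_pooling1d_1 -> 623
L = same_len(L, 2)       # conv1d_3 -> 312
L = same_len(L, 2)       # conv1d_4 -> 156
L = L // 2               # max_pooling1d_2 -> 78
L = same_len(L, 2)       # conv1d_5 -> 39
L = same_len(L, 2)       # conv1d_6 -> 20
L = L // 2               # max_pooling1d_3 -> 10
print(L)                 # 10, matching the summary below
```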
**The architecture printed by `model.summary()`:**

```bash
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
reshape_1 (Reshape)          (None, 5000, 12)          0
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 2493, 16)          3088
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 1247, 16)          4112
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 623, 16)           0
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 312, 64)           8256
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 156, 64)           32832
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 78, 64)            0
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 39, 128)           32896
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 20, 128)           65664
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 10, 128)           0
_________________________________________________________________
conv1d_7 (Conv1D)            (None, 10, 256)           65792
_________________________________________________________________
conv1d_8 (Conv1D)            (None, 10, 256)           131328
_________________________________________________________________
max_pooling1d_4 (MaxPooling1 (None, 5, 256)            0
_________________________________________________________________
global_average_pooling1d_1 ( (None, 256)               0
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 514
=================================================================
Total params: 344,482
Trainable params: 344,482
Non-trainable params: 0
_________________________________________________________________
```

The model has relatively few trainable parameters, so feel free to modify the architecture to your own ideas.

### 3. Training

**Training script**

```python
if __name__ == "__main__":
    """dat1 = get_feature("TRAIN101.mat")
    print("one data shape is", dat1.shape)
    # one data shape is (5000, 12)
    plt.plot(dat1[:, 0])
    plt.show()"""
    if not os.path.exists(MANIFEST_DIR):
        create_csv()
    train_iter = xs_gen(train=True)
    test_iter = xs_gen(train=False)
    model = build_model()
    print(model.summary())
    # Save a checkpoint whenever validation accuracy improves
    ckpt = keras.callbacks.ModelCheckpoint(
        filepath='best_model.{epoch:02d}-{val_acc:.2f}.h5',
        monitor='val_acc', save_best_only=True, verbose=1)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam', metrics=['accuracy'])
    model.fit_generator(
        generator=train_iter,
        steps_per_epoch=500 // Batch_size,
        epochs=20,
        initial_epoch=0,
        validation_data=test_iter,
        validation_steps=100 // Batch_size,
        callbacks=[ckpt],
    )
```

**Training log (best epoch: loss: 0.0565 - acc: 0.9820 - val_loss: 0.8307 - val_acc: 0.8800):**

```bash
Epoch 10/20
25/25 [==============================] - 1s 37ms/step - loss: 0.2329 - acc: 0.9040 - val_loss: 0.4041 - val_acc: 0.8700

Epoch 00010: val_acc improved from 0.85000 to 0.87000, saving model to best_model.10-0.87.h5
Epoch 11/20
25/25 [==============================] - 1s 38ms/step - loss: 0.1633 - acc: 0.9380 - val_loss: 0.5277 - val_acc: 0.8300

Epoch 00011: val_acc did not improve from 0.87000
Epoch 12/20
25/25 [==============================] - 1s 40ms/step - loss: 0.1394 - acc: 0.9500 - val_loss: 0.4916 - val_acc: 0.7400

Epoch 00012: val_acc did not improve from 0.87000
Epoch 13/20
25/25 [==============================] - 1s 38ms/step - loss: 0.1746 - acc: 0.9220 - val_loss: 0.5208 - val_acc: 0.8100

Epoch 00013: val_acc did not improve from 0.87000
Epoch 14/20
25/25 [==============================] - 1s 38ms/step - loss: 0.1009 - acc: 0.9720 - val_loss: 0.5513 - val_acc: 0.8000

Epoch 00014: val_acc did not improve from 0.87000
Epoch 15/20
25/25 [==============================] - 1s 38ms/step - loss: 0.0565 - acc: 0.9820 - val_loss: 0.8307 - val_acc: 0.8800

Epoch 00015: val_acc improved from 0.87000 to 0.88000, saving model to best_model.15-0.88.h5
Epoch 16/20
25/25 [==============================] - 1s 38ms/step - loss: 0.0261 - acc: 0.9920 - val_loss: 0.6443 - val_acc: 0.8400

Epoch 00016: val_acc did not improve from 0.88000
Epoch 17/20
25/25 [==============================] - 1s 38ms/step - loss: 0.0178 - acc: 0.9960 - val_loss: 0.7773 - val_acc: 0.8700

Epoch 00017: val_acc did not improve from 0.88000
Epoch 18/20
25/25 [==============================] - 1s 38ms/step - loss: 0.0082 - acc: 0.9980 - val_loss: 0.8875 - val_acc: 0.8600

Epoch 00018: val_acc did not improve from 0.88000
Epoch 19/20
25/25 [==============================] - 1s 37ms/step - loss: 0.0045 - acc: 1.0000 - val_loss: 1.0057 - val_acc: 0.8600

Epoch 00019: val_acc did not improve from 0.88000
Epoch 20/20
25/25 [==============================] - 1s 37ms/step - loss: 0.0012 - acc: 1.0000 - val_loss: 1.1088 - val_acc: 0.8600

Epoch 00020: val_acc did not improve from 0.88000
```
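The log shows classic overfitting after epoch 15: training accuracy reaches 1.0 while validation loss keeps climbing. A natural next experiment is early stopping plus learning-rate decay. The sketch below is one untested possibility (the `patience` and `factor` values are guesses, not tuned on this data):

```python
# Stop once val_acc has stopped improving for a while
early_stop = keras.callbacks.EarlyStopping(monitor='val_acc', patience=8, verbose=1)
# Halve the learning rate when val_loss plateaus
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                              patience=3, verbose=1)
# Then pass callbacks=[ckpt, early_stop, reduce_lr] to model.fit_generator above.
```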
### 4. Inference and submitting predictions

**Predicting on the test set**

```python
if __name__ == "__main__":

    PRE_DIR = "sample_codes/answers.txt"
    model = load_model("best_model.15-0.88.h5")
    pre_lists = pd.read_csv(PRE_DIR, sep=" ", header=None)
    print(pre_lists.head())
    pre_datas = np.array([get_feature(item, BASE_DIR="preliminary/TEST/") for item in pre_lists[0]])
    pre_result = model.predict_classes(pre_datas)  # predicted class (0 or 1) per record
    print(pre_result.shape)
    pre_lists[1] = pre_result
    pre_lists.to_csv("sample_codes/answers1.txt", index=None, header=None)
    print("predict finish")
```

The first ten predictions:

```bash
TEST394,0
TEST313,1
TEST484,0
TEST288,0
TEST261,1
TEST310,0
TEST286,1
TEST367,1
TEST149,1
TEST160,1
```

## **Outlook**

Using nothing more than plain one-dimensional convolutions, this baseline reaches 88% test accuracy (the figure may fluctuate slightly with random initialization). Architectures such as GRUs, attention, or ResNet-style networks are all worth trying and should push test accuracy past 90%; a starting-point sketch for the GRU direction follows. My abilities are limited, so criticism and corrections are very welcome.
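As one possible starting point for the GRU direction, here is a minimal, untested sketch of my own that drops into the same pipeline (it relies on the wildcard `keras.layers` import from earlier; the layer sizes are arbitrary guesses, and the convolution/pooling front end is there to shorten the 5,000-step sequence before the recurrent layer):

```python
def build_gru_model(input_shape=(TIME_PERIODS, num_sensors), num_classes=2):
    model = Sequential()
    # Convolutional front end: shrink 5000 steps to a length a GRU can digest
    model.add(Conv1D(32, 16, strides=4, activation='relu', input_shape=input_shape))
    model.add(MaxPooling1D(4))
    model.add(Conv1D(64, 8, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(GRU(64))  # summarize the remaining steps into one vector
    model.add(Dropout(0.3))
    model.add(Dense(num_classes, activation='softmax'))
    return model
```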
> Personal homepage: [http://www.yansongsong.cn/](https://link.zhihu.com/?target=http%3A//www.yansongsong.cn/)
> Project GitHub repo: [https://github.com/xiaosongshine/preliminary_challenge_baseline_keras](https://link.zhihu.com/?target=https%3A//github.com/xiaosongshine/preliminary_challenge_baseline_keras)

**Forks and stars are welcome; if you find this useful, a little encouragement goes a long way ><**
--------------------------------------------------------------------------------
/best_model.01-0.48.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.01-0.48.h5
--------------------------------------------------------------------------------
/best_model.01-0.60.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.01-0.60.h5
--------------------------------------------------------------------------------
/best_model.02-0.49.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.02-0.49.h5
--------------------------------------------------------------------------------
/best_model.03-0.56.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.03-0.56.h5
--------------------------------------------------------------------------------
/best_model.04-0.69.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.04-0.69.h5
--------------------------------------------------------------------------------
/best_model.05-0.77.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.05-0.77.h5
--------------------------------------------------------------------------------
/best_model.07-0.81.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.07-0.81.h5
--------------------------------------------------------------------------------
/best_model.09-0.85.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.09-0.85.h5
--------------------------------------------------------------------------------
/best_model.10-0.87.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.10-0.87.h5
--------------------------------------------------------------------------------
/best_model.15-0.88.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xiaosongshine/ECG_challenge_baseline_keras/1611da8eca0c84a7eef3df74b5c06be23d239521/best_model.15-0.88.h5
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import keras
from scipy.io import loadmat
import matplotlib.pyplot as plt
import glob
import numpy as np
import pandas as pd
import math
import os
from keras.layers import *
from keras.models import *
from keras.objectives import *


BASE_DIR = "preliminary/TRAIN/"

# Per-lead normalization: subtract the mean and divide by the max
# (the 2e-12 guards against division by zero)
def normalize(v):
    return (v - v.mean(axis=1).reshape((v.shape[0], 1))) / (v.max(axis=1).reshape((v.shape[0], 1)) + 2e-12)

# Load one MAT file and return a normalized (5000, 12) array
def get_feature(wav_file, Lens=12, BASE_DIR=BASE_DIR):
    mat = loadmat(BASE_DIR + wav_file)
    dat = mat["data"]
    feature = dat[0:12]
    return normalize(feature).transpose()

# Convert a label index into a one-hot vector
def convert2oneHot(index, Lens):
    hot = np.zeros((Lens,))
    hot[index] = 1
    return hot

TXT_DIR = "preliminary/reference.txt"
MANIFEST_DIR = "preliminary/reference.csv"

# Read reference.txt, shuffle the rows, and save them to reference.csv
def create_csv(TXT_DIR=TXT_DIR):
    lists = pd.read_csv(TXT_DIR, sep="\t", header=None)
    lists = lists.sample(frac=1)
    lists.to_csv(MANIFEST_DIR, index=None)
    print("Finish save csv")

Batch_size = 20
def xs_gen(path=MANIFEST_DIR, batch_size=Batch_size, train=True):

    img_list = pd.read_csv(path)
    if train:
        img_list = np.array(img_list)[:500]  # first 500 shuffled records for training
        print("Found %s train items." % len(img_list))
        print("list 1 is", img_list[0])
        steps = math.ceil(len(img_list) / batch_size)  # batches per epoch
    else:
        img_list = np.array(img_list)[500:]  # remaining 100 records for validation
        print("Found %s test items." % len(img_list))
        print("list 1 is", img_list[0])
        steps = math.ceil(len(img_list) / batch_size)  # batches per epoch
    while True:
        for i in range(steps):

            batch_list = img_list[i * batch_size : i * batch_size + batch_size]
            np.random.shuffle(batch_list)
            batch_x = np.array([get_feature(file) for file in batch_list[:, 0]])
            batch_y = np.array([convert2oneHot(label, 2) for label in batch_list[:, 1]])

            yield batch_x, batch_y

TIME_PERIODS = 5000
num_sensors = 12
def build_model(input_shape=(TIME_PERIODS, num_sensors), num_classes=2):
    model = Sequential()
    # The Reshape layer is no longer needed: the generator already yields
    # inputs of shape (TIME_PERIODS, num_sensors)
    #model.add(Reshape((TIME_PERIODS, num_sensors), input_shape=input_shape))
    model.add(Conv1D(16, 16, strides=2, activation='relu', input_shape=input_shape))
    model.add(Conv1D(16, 16, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(64, 8, strides=2, activation='relu', padding="same"))
    model.add(Conv1D(64, 8, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(128, 4, strides=2, activation='relu', padding="same"))
    model.add(Conv1D(128, 4, strides=2, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(Conv1D(256, 2, strides=1, activation='relu', padding="same"))
    model.add(Conv1D(256, 2, strides=1, activation='relu', padding="same"))
    model.add(MaxPooling1D(2))
    model.add(GlobalAveragePooling1D())
    model.add(Dropout(0.3))
    model.add(Dense(num_classes, activation='softmax'))
    return model

if __name__ == "__main__":
    """dat1 = get_feature("TRAIN101.mat")
    print("one data shape is", dat1.shape)
    # one data shape is (5000, 12)
    plt.plot(dat1[:, 0])
    plt.show()"""

    if not os.path.exists(MANIFEST_DIR):
        create_csv()
    train_iter = xs_gen(train=True)
    test_iter = xs_gen(train=False)
    model = build_model()
    print(model.summary())
    # Save a checkpoint whenever validation accuracy improves
    ckpt = keras.callbacks.ModelCheckpoint(
        filepath='best_model.{epoch:02d}-{val_acc:.2f}.h5',
        monitor='val_acc', save_best_only=True, verbose=1)
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam', metrics=['accuracy'])
    model.fit_generator(
        generator=train_iter,
        steps_per_epoch=500 // Batch_size,
        epochs=20,
        initial_epoch=0,
        validation_data=test_iter,
        validation_steps=100 // Batch_size,
        callbacks=[ckpt],
    )
    """PRE_DIR = "sample_codes/answers.txt"
    model = load_model("best_model.15-0.88.h5")
    pre_lists = pd.read_csv(PRE_DIR, sep=" ", header=None)
    print(pre_lists.head())
    pre_datas = np.array([get_feature(item, BASE_DIR="preliminary/TEST/") for item in pre_lists[0]])
    pre_result = model.predict_classes(pre_datas)  # predicted class (0 or 1) per record
    print(pre_result.shape)
    pre_lists[1] = pre_result
    pre_lists.to_csv("sample_codes/answers1.txt", index=None, header=None)
    print("predict finish")"""
--------------------------------------------------------------------------------
/sample_codes/README.txt:
--------------------------------------------------------------------------------
1 | The Preliminary_Sample.zip contains the following components:
2 | 
3 | * Python scripts:
4 | -- preliminary_challenge.py (necessary) - add your codes to classify normal and abnormal; NOTE you need to write the results into "answers.txt", and the dataset "TEST" should be in the same folder with this script.
5 | 6 | * BASH scripts: 7 | -- setup.sh (optional) - a bash script runs once before any other code from the entry; use this to compile your code as needed and to install additional packages 8 | -- run.sh (necessary) - a script first calls "setup.sh" to compile your code, then calls "preliminary_challenge.py" to generate "answers.txt" 9 | 10 | * Other files: 11 | -- answers.txt (necessary) - a text file containing the results of running your program on each record in test set. 12 | 13 | * README.txt - this file 14 | 15 | We verify that your code is working as you intended, by running "run.sh" on the test set, then comparing the answers.txt file that you submit with your 16 | entry with answers produced by your code running in our test environment using the same records. 17 | 18 | -------------------------------------------------------------------------------- /sample_codes/answers.txt: -------------------------------------------------------------------------------- 1 | TEST394 1 2 | TEST313 1 3 | TEST484 0 4 | TEST288 1 5 | TEST261 0 6 | TEST310 0 7 | TEST286 1 8 | TEST367 0 9 | TEST149 1 10 | TEST160 1 11 | TEST339 0 12 | TEST452 0 13 | TEST299 1 14 | TEST155 0 15 | TEST127 1 16 | TEST463 0 17 | TEST426 1 18 | TEST211 0 19 | TEST350 1 20 | TEST354 1 21 | TEST449 1 22 | TEST404 1 23 | TEST421 0 24 | TEST311 1 25 | TEST201 1 26 | TEST314 1 27 | TEST148 0 28 | TEST203 0 29 | TEST447 1 30 | TEST424 1 31 | TEST467 1 32 | TEST226 0 33 | TEST353 1 34 | TEST208 1 35 | TEST374 0 36 | TEST293 0 37 | TEST497 0 38 | TEST396 0 39 | TEST102 1 40 | TEST365 0 41 | TEST408 0 42 | TEST292 0 43 | TEST162 0 44 | TEST131 0 45 | TEST252 1 46 | TEST466 0 47 | TEST464 0 48 | TEST386 1 49 | TEST320 0 50 | TEST336 1 51 | TEST264 0 52 | TEST181 0 53 | TEST240 0 54 | TEST248 0 55 | TEST156 1 56 | TEST419 1 57 | TEST500 0 58 | TEST420 0 59 | TEST256 1 60 | TEST412 0 61 | TEST422 1 62 | TEST199 0 63 | TEST441 1 64 | TEST276 1 65 | TEST403 1 66 | TEST112 0 67 | TEST107 1 68 | TEST164 0 69 | TEST105 0 70 | TEST111 1 71 | TEST457 0 72 | TEST157 0 73 | TEST395 1 74 | TEST485 1 75 | TEST232 1 76 | TEST402 0 77 | TEST246 1 78 | TEST330 0 79 | TEST121 0 80 | TEST202 0 81 | TEST442 0 82 | TEST219 1 83 | TEST346 0 84 | TEST450 0 85 | TEST255 0 86 | TEST295 1 87 | TEST322 0 88 | TEST410 1 89 | TEST378 0 90 | TEST223 0 91 | TEST416 1 92 | TEST443 0 93 | TEST333 1 94 | TEST183 1 95 | TEST113 0 96 | TEST434 1 97 | TEST265 0 98 | TEST384 0 99 | TEST312 0 100 | TEST418 0 101 | TEST106 0 102 | TEST214 0 103 | TEST233 0 104 | TEST251 0 105 | TEST458 0 106 | TEST391 1 107 | TEST182 0 108 | TEST145 0 109 | TEST494 0 110 | TEST239 1 111 | TEST491 0 112 | TEST390 1 113 | TEST117 0 114 | TEST110 1 115 | TEST137 1 116 | TEST294 1 117 | TEST315 1 118 | TEST431 0 119 | TEST393 0 120 | TEST185 1 121 | TEST304 0 122 | TEST266 0 123 | TEST103 1 124 | TEST220 1 125 | TEST228 1 126 | TEST397 1 127 | TEST238 1 128 | TEST453 1 129 | TEST302 1 130 | TEST348 0 131 | TEST460 0 132 | TEST207 1 133 | TEST263 0 134 | TEST212 1 135 | TEST495 0 136 | TEST142 0 137 | TEST498 0 138 | TEST445 0 139 | TEST136 1 140 | TEST473 0 141 | TEST167 0 142 | TEST279 0 143 | TEST361 0 144 | TEST176 1 145 | TEST283 1 146 | TEST438 0 147 | TEST446 0 148 | TEST429 1 149 | TEST224 0 150 | TEST237 0 151 | TEST133 0 152 | TEST284 0 153 | TEST230 1 154 | TEST319 0 155 | TEST480 1 156 | TEST383 0 157 | TEST291 1 158 | TEST143 1 159 | TEST115 1 160 | TEST196 1 161 | TEST254 0 162 | TEST479 0 163 | TEST328 0 164 | TEST122 0 165 | TEST200 0 166 | TEST363 0 167 | TEST364 0 168 | 
TEST432 0 169 | TEST437 0 170 | TEST301 1 171 | TEST204 0 172 | TEST435 0 173 | TEST366 0 174 | TEST129 1 175 | TEST349 1 176 | TEST347 1 177 | TEST335 0 178 | TEST234 1 179 | TEST297 1 180 | TEST125 0 181 | TEST327 0 182 | TEST359 0 183 | TEST153 0 184 | TEST188 1 185 | TEST213 0 186 | TEST186 0 187 | TEST177 1 188 | TEST362 0 189 | TEST179 0 190 | TEST209 0 191 | TEST178 0 192 | TEST267 1 193 | TEST282 0 194 | TEST141 0 195 | TEST198 1 196 | TEST108 1 197 | TEST352 1 198 | TEST332 0 199 | TEST399 0 200 | TEST243 0 201 | TEST451 0 202 | TEST387 0 203 | TEST368 1 204 | TEST221 0 205 | TEST476 0 206 | TEST307 0 207 | TEST409 1 208 | TEST388 0 209 | TEST158 1 210 | TEST462 1 211 | TEST134 0 212 | TEST190 1 213 | TEST180 0 214 | TEST270 1 215 | TEST499 1 216 | TEST483 1 217 | TEST197 1 218 | TEST356 0 219 | TEST235 0 220 | TEST277 0 221 | TEST154 0 222 | TEST244 1 223 | TEST340 0 224 | TEST488 0 225 | TEST101 0 226 | TEST474 0 227 | TEST285 0 228 | TEST306 1 229 | TEST305 0 230 | TEST469 1 231 | TEST465 1 232 | TEST119 0 233 | TEST470 1 234 | TEST262 0 235 | TEST375 1 236 | TEST415 1 237 | TEST170 1 238 | TEST329 1 239 | TEST406 1 240 | TEST355 1 241 | TEST171 0 242 | TEST272 0 243 | TEST161 1 244 | TEST417 1 245 | TEST472 1 246 | TEST172 0 247 | TEST454 0 248 | TEST193 1 249 | TEST379 0 250 | TEST271 1 251 | TEST206 1 252 | TEST250 1 253 | TEST280 0 254 | TEST128 1 255 | TEST358 1 256 | TEST377 1 257 | TEST317 0 258 | TEST175 0 259 | TEST140 1 260 | TEST455 0 261 | TEST120 0 262 | TEST173 0 263 | TEST373 1 264 | TEST253 0 265 | TEST184 0 266 | TEST281 1 267 | TEST227 0 268 | TEST376 1 269 | TEST413 1 270 | TEST398 0 271 | TEST308 1 272 | TEST369 1 273 | TEST423 1 274 | TEST130 1 275 | TEST163 1 276 | TEST217 1 277 | TEST496 0 278 | TEST104 1 279 | TEST370 1 280 | TEST331 0 281 | TEST385 0 282 | TEST430 0 283 | TEST268 0 284 | TEST471 1 285 | TEST218 1 286 | TEST260 0 287 | TEST401 1 288 | TEST189 1 289 | TEST152 0 290 | TEST174 0 291 | TEST287 0 292 | TEST144 0 293 | TEST425 1 294 | TEST334 0 295 | TEST338 1 296 | TEST436 0 297 | TEST166 0 298 | TEST147 1 299 | TEST241 1 300 | TEST114 1 301 | TEST258 0 302 | TEST341 1 303 | TEST400 1 304 | TEST323 1 305 | TEST381 0 306 | TEST274 1 307 | TEST382 1 308 | TEST487 0 309 | TEST468 0 310 | TEST427 1 311 | TEST159 0 312 | TEST343 0 313 | TEST448 0 314 | TEST326 1 315 | TEST439 1 316 | TEST428 0 317 | TEST321 0 318 | TEST278 0 319 | TEST318 1 320 | TEST138 0 321 | TEST139 1 322 | TEST231 0 323 | TEST225 0 324 | TEST490 1 325 | TEST477 1 326 | TEST405 1 327 | TEST245 1 328 | TEST303 0 329 | TEST151 0 330 | TEST165 0 331 | TEST135 1 332 | TEST371 1 333 | TEST169 0 334 | TEST215 0 335 | TEST289 1 336 | TEST344 0 337 | TEST357 1 338 | TEST216 0 339 | TEST316 1 340 | TEST222 0 341 | TEST482 0 342 | TEST414 1 343 | TEST259 0 344 | TEST433 1 345 | TEST168 1 346 | TEST298 0 347 | TEST116 1 348 | TEST481 0 349 | TEST210 1 350 | TEST492 0 351 | TEST351 1 352 | TEST461 1 353 | TEST257 1 354 | TEST489 1 355 | TEST337 0 356 | TEST247 0 357 | TEST324 1 358 | TEST191 0 359 | TEST342 1 360 | TEST345 0 361 | TEST459 1 362 | TEST325 1 363 | TEST150 0 364 | TEST192 0 365 | TEST269 0 366 | TEST456 0 367 | TEST493 0 368 | TEST126 1 369 | TEST296 0 370 | TEST194 0 371 | TEST407 1 372 | TEST309 1 373 | TEST478 1 374 | TEST372 0 375 | TEST300 1 376 | TEST392 0 377 | TEST118 0 378 | TEST440 1 379 | TEST124 1 380 | TEST187 0 381 | TEST380 0 382 | TEST389 0 383 | TEST360 1 384 | TEST249 0 385 | TEST290 0 386 | TEST486 0 387 | TEST229 1 388 | TEST275 1 389 | TEST123 0 390 | 
TEST205 0 391 | TEST444 1 392 | TEST132 0 393 | TEST475 0 394 | TEST273 1 395 | TEST195 0 396 | TEST242 1 397 | TEST411 1 398 | TEST109 1 399 | TEST146 1 400 | TEST236 0 401 | -------------------------------------------------------------------------------- /sample_codes/answers1.txt: -------------------------------------------------------------------------------- 1 | TEST394,0 2 | TEST313,1 3 | TEST484,0 4 | TEST288,0 5 | TEST261,1 6 | TEST310,0 7 | TEST286,1 8 | TEST367,1 9 | TEST149,1 10 | TEST160,1 11 | TEST339,0 12 | TEST452,0 13 | TEST299,1 14 | TEST155,0 15 | TEST127,1 16 | TEST463,1 17 | TEST426,1 18 | TEST211,1 19 | TEST350,0 20 | TEST354,0 21 | TEST449,0 22 | TEST404,0 23 | TEST421,0 24 | TEST311,0 25 | TEST201,1 26 | TEST314,0 27 | TEST148,1 28 | TEST203,1 29 | TEST447,0 30 | TEST424,0 31 | TEST467,0 32 | TEST226,1 33 | TEST353,0 34 | TEST208,0 35 | TEST374,0 36 | TEST293,1 37 | TEST497,0 38 | TEST396,1 39 | TEST102,0 40 | TEST365,0 41 | TEST408,0 42 | TEST292,0 43 | TEST162,1 44 | TEST131,1 45 | TEST252,1 46 | TEST466,0 47 | TEST464,1 48 | TEST386,0 49 | TEST320,0 50 | TEST336,0 51 | TEST264,0 52 | TEST181,1 53 | TEST240,0 54 | TEST248,1 55 | TEST156,0 56 | TEST419,0 57 | TEST500,0 58 | TEST420,0 59 | TEST256,1 60 | TEST412,0 61 | TEST422,0 62 | TEST199,1 63 | TEST441,0 64 | TEST276,1 65 | TEST403,0 66 | TEST112,1 67 | TEST107,0 68 | TEST164,0 69 | TEST105,1 70 | TEST111,1 71 | TEST457,1 72 | TEST157,1 73 | TEST395,0 74 | TEST485,0 75 | TEST232,1 76 | TEST402,0 77 | TEST246,1 78 | TEST330,0 79 | TEST121,1 80 | TEST202,1 81 | TEST442,0 82 | TEST219,0 83 | TEST346,0 84 | TEST450,0 85 | TEST255,1 86 | TEST295,1 87 | TEST322,0 88 | TEST410,0 89 | TEST378,1 90 | TEST223,1 91 | TEST416,1 92 | TEST443,0 93 | TEST333,0 94 | TEST183,1 95 | TEST113,1 96 | TEST434,0 97 | TEST265,0 98 | TEST384,0 99 | TEST312,0 100 | TEST418,0 101 | TEST106,1 102 | TEST214,1 103 | TEST233,1 104 | TEST251,1 105 | TEST458,0 106 | TEST391,0 107 | TEST182,1 108 | TEST145,1 109 | TEST494,0 110 | TEST239,1 111 | TEST491,1 112 | TEST390,0 113 | TEST117,1 114 | TEST110,1 115 | TEST137,0 116 | TEST294,1 117 | TEST315,0 118 | TEST431,0 119 | TEST393,0 120 | TEST185,1 121 | TEST304,0 122 | TEST266,1 123 | TEST103,1 124 | TEST220,0 125 | TEST228,0 126 | TEST397,0 127 | TEST238,0 128 | TEST453,0 129 | TEST302,0 130 | TEST348,0 131 | TEST460,0 132 | TEST207,1 133 | TEST263,0 134 | TEST212,0 135 | TEST495,0 136 | TEST142,1 137 | TEST498,1 138 | TEST445,0 139 | TEST136,1 140 | TEST473,0 141 | TEST167,1 142 | TEST279,1 143 | TEST361,0 144 | TEST176,1 145 | TEST283,1 146 | TEST438,0 147 | TEST446,0 148 | TEST429,0 149 | TEST224,1 150 | TEST237,0 151 | TEST133,1 152 | TEST284,1 153 | TEST230,1 154 | TEST319,0 155 | TEST480,0 156 | TEST383,0 157 | TEST291,1 158 | TEST143,1 159 | TEST115,1 160 | TEST196,1 161 | TEST254,1 162 | TEST479,0 163 | TEST328,0 164 | TEST122,1 165 | TEST200,1 166 | TEST363,1 167 | TEST364,1 168 | TEST432,1 169 | TEST437,0 170 | TEST301,0 171 | TEST204,1 172 | TEST435,0 173 | TEST366,0 174 | TEST129,1 175 | TEST349,1 176 | TEST347,1 177 | TEST335,0 178 | TEST234,1 179 | TEST297,1 180 | TEST125,1 181 | TEST327,1 182 | TEST359,0 183 | TEST153,1 184 | TEST188,1 185 | TEST213,1 186 | TEST186,1 187 | TEST177,1 188 | TEST362,0 189 | TEST179,1 190 | TEST209,1 191 | TEST178,1 192 | TEST267,0 193 | TEST282,1 194 | TEST141,1 195 | TEST198,1 196 | TEST108,1 197 | TEST352,1 198 | TEST332,0 199 | TEST399,0 200 | TEST243,1 201 | TEST451,0 202 | TEST387,1 203 | TEST368,1 204 | TEST221,1 205 | TEST476,0 206 | TEST307,0 
207 | TEST409,0 208 | TEST388,1 209 | TEST158,1 210 | TEST462,0 211 | TEST134,1 212 | TEST190,1 213 | TEST180,1 214 | TEST270,1 215 | TEST499,0 216 | TEST483,0 217 | TEST197,1 218 | TEST356,1 219 | TEST235,0 220 | TEST277,0 221 | TEST154,1 222 | TEST244,0 223 | TEST340,0 224 | TEST488,0 225 | TEST101,0 226 | TEST474,1 227 | TEST285,1 228 | TEST306,0 229 | TEST305,0 230 | TEST469,0 231 | TEST465,1 232 | TEST119,1 233 | TEST470,1 234 | TEST262,1 235 | TEST375,0 236 | TEST415,0 237 | TEST170,1 238 | TEST329,0 239 | TEST406,0 240 | TEST355,1 241 | TEST171,1 242 | TEST272,0 243 | TEST161,1 244 | TEST417,1 245 | TEST472,0 246 | TEST172,1 247 | TEST454,1 248 | TEST193,1 249 | TEST379,0 250 | TEST271,1 251 | TEST206,1 252 | TEST250,1 253 | TEST280,1 254 | TEST128,1 255 | TEST358,0 256 | TEST377,0 257 | TEST317,1 258 | TEST175,1 259 | TEST140,1 260 | TEST455,0 261 | TEST120,1 262 | TEST173,1 263 | TEST373,1 264 | TEST253,0 265 | TEST184,1 266 | TEST281,1 267 | TEST227,1 268 | TEST376,1 269 | TEST413,0 270 | TEST398,1 271 | TEST308,1 272 | TEST369,1 273 | TEST423,1 274 | TEST130,1 275 | TEST163,1 276 | TEST217,1 277 | TEST496,0 278 | TEST104,1 279 | TEST370,0 280 | TEST331,1 281 | TEST385,0 282 | TEST430,0 283 | TEST268,1 284 | TEST471,0 285 | TEST218,1 286 | TEST260,1 287 | TEST401,0 288 | TEST189,0 289 | TEST152,1 290 | TEST174,1 291 | TEST287,1 292 | TEST144,1 293 | TEST425,0 294 | TEST334,0 295 | TEST338,0 296 | TEST436,0 297 | TEST166,1 298 | TEST147,1 299 | TEST241,1 300 | TEST114,1 301 | TEST258,1 302 | TEST341,0 303 | TEST400,0 304 | TEST323,0 305 | TEST381,0 306 | TEST274,1 307 | TEST382,0 308 | TEST487,0 309 | TEST468,0 310 | TEST427,0 311 | TEST159,0 312 | TEST343,0 313 | TEST448,0 314 | TEST326,0 315 | TEST439,1 316 | TEST428,0 317 | TEST321,1 318 | TEST278,0 319 | TEST318,0 320 | TEST138,1 321 | TEST139,1 322 | TEST231,1 323 | TEST225,1 324 | TEST490,0 325 | TEST477,0 326 | TEST405,1 327 | TEST245,1 328 | TEST303,0 329 | TEST151,1 330 | TEST165,1 331 | TEST135,1 332 | TEST371,0 333 | TEST169,1 334 | TEST215,1 335 | TEST289,1 336 | TEST344,0 337 | TEST357,0 338 | TEST216,1 339 | TEST316,0 340 | TEST222,1 341 | TEST482,0 342 | TEST414,1 343 | TEST259,1 344 | TEST433,1 345 | TEST168,0 346 | TEST298,1 347 | TEST116,1 348 | TEST481,1 349 | TEST210,1 350 | TEST492,0 351 | TEST351,0 352 | TEST461,1 353 | TEST257,1 354 | TEST489,0 355 | TEST337,0 356 | TEST247,0 357 | TEST324,1 358 | TEST191,1 359 | TEST342,0 360 | TEST345,0 361 | TEST459,1 362 | TEST325,0 363 | TEST150,1 364 | TEST192,1 365 | TEST269,1 366 | TEST456,0 367 | TEST493,0 368 | TEST126,0 369 | TEST296,0 370 | TEST194,1 371 | TEST407,0 372 | TEST309,0 373 | TEST478,0 374 | TEST372,1 375 | TEST300,1 376 | TEST392,0 377 | TEST118,1 378 | TEST440,0 379 | TEST124,1 380 | TEST187,1 381 | TEST380,0 382 | TEST389,1 383 | TEST360,0 384 | TEST249,1 385 | TEST290,1 386 | TEST486,1 387 | TEST229,1 388 | TEST275,1 389 | TEST123,1 390 | TEST205,0 391 | TEST444,0 392 | TEST132,0 393 | TEST475,0 394 | TEST273,1 395 | TEST195,1 396 | TEST242,1 397 | TEST411,0 398 | TEST109,1 399 | TEST146,1 400 | TEST236,1 401 | -------------------------------------------------------------------------------- /sample_codes/preliminary_challenge.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | import numpy as np 4 | import scipy.io as sio 5 | import random 6 | from decimal import Decimal 7 | 8 | # Usage: python preliminary_challenge.py test_file_path 9 | def main(): 10 | ## Add your codes to 
classify normal and abnormal.
11 | 
12 | 
13 | 
14 | 
15 |     ## Classify the samples of the test set and write the results into answers.txt,
16 |     ## with each row representing a prediction of one sample.
17 |     ## Here we use random numbers as prediction labels as an example and
18 |     ## you should replace them with your own results.
19 | 
20 |     test_set = os.getcwd()+'/TEST'
21 |     f_w = open('answers.txt', 'w')
22 |     for root, subdirs, files in os.walk(test_set):
23 |         if files:
24 |             for records in files:
25 | 
26 |                 if records.endswith('.mat'):
27 |                     # records[:-4] drops the '.mat' suffix (str.strip would remove characters, not the suffix)
28 |                     line = records[:-4] + ' ' + str(random.randint(0, 1))
29 |                     f_w.write(line + '\n')
30 |     f_w.close()
31 | 
32 | 
33 | if __name__ == "__main__":
34 |     main()
--------------------------------------------------------------------------------
/sample_codes/run.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # file: run.sh
4 | # This bash script first calls "setup.sh" to compile your code, then calls "preliminary_challenge.py" to generate "answers.txt".
5 | 
6 | # echo "============ running setup script ============"
7 | 
8 | # bash setup.sh
9 | 
10 | echo "==== running entry script on the test set ===="
11 | # Clear previous answers.txt
12 | rm -f answers.txt
13 | # Generate new answers.txt
14 | python preliminary_challenge.py
15 | echo "=================== Done! ===================="
--------------------------------------------------------------------------------
/sample_codes/setup.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 | #
3 | # file: setup.sh
4 | #
5 | # This bash script performs any setup necessary in order to test your
6 | # entry. It is run only once, before running any other code belonging
7 | # to your entry.
8 | 
9 | set -e
10 | set -o pipefail
11 | 
12 | #pip install --user numpy
--------------------------------------------------------------------------------