├── EEGNET_guanwang_tf ├── EEGModels.py ├── LICENSE.txt ├── README.md └── examples │ ├── EEGNet-8-2-weights.h5 │ ├── ERP.py │ └── LICENSE ├── EEGNet A Compact Convolutional Neural Network for EEG-based Brain-Computer Interfaces——2018.pdf ├── EEGNet_Pytorch ├── EEGNet.py ├── LoadData.py ├── README.md ├── __pycache__ │ ├── EEGNet.cpython-37.pyc │ ├── LoadData.cpython-37.pyc │ ├── model.cpython-37.pyc │ ├── model.cpython-38.pyc │ ├── utils.cpython-37.pyc │ └── utils.cpython-38.pyc ├── best_model.pth ├── main.ipynb ├── main.py └── model.py └── README.md /EEGNET_guanwang_tf/EEGModels.py: -------------------------------------------------------------------------------- 1 | """ 2 | ARL_EEGModels - A collection of Convolutional Neural Network models for EEG 3 | Signal Processing and Classification, using Keras and Tensorflow 4 | 5 | Requirements: 6 | (1) tensorflow == 2.X (as of this writing, 2.0 - 2.3 have been verified 7 | as working) 8 | 9 | To run the EEG/MEG ERP classification sample script, you will also need 10 | 11 | (4) mne >= 0.17.1 12 | (5) PyRiemann >= 0.2.5 13 | (6) scikit-learn >= 0.20.1 14 | (7) matplotlib >= 2.2.3 15 | 16 | To use: 17 | 18 | (1) Place this file in the PYTHONPATH variable in your IDE (i.e.: Spyder) 19 | (2) Import the model as 20 | 21 | from EEGModels import EEGNet 22 | 23 | model = EEGNet(nb_classes = ..., Chans = ..., Samples = ...) 24 | 25 | (3) Then compile and fit the model 26 | 27 | model.compile(loss = ..., optimizer = ..., metrics = ...) 28 | fitted = model.fit(...) 29 | predicted = model.predict(...) 30 | 31 | Portions of this project are works of the United States Government and are not 32 | subject to domestic copyright protection under 17 USC Sec. 105. Those 33 | portions are released world-wide under the terms of the Creative Commons Zero 34 | 1.0 (CC0) license. 35 | 36 | Other portions of this project are subject to domestic copyright protection 37 | under 17 USC Sec. 105. Those portions are licensed under the Apache 2.0 38 | license. The complete text of the license governing this material is in 39 | the file labeled LICENSE.TXT that is a part of this project's official 40 | distribution. 41 | """ 42 | 43 | from tensorflow.keras.models import Model 44 | from tensorflow.keras.layers import Dense, Activation, Permute, Dropout 45 | from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D 46 | from tensorflow.keras.layers import SeparableConv2D, DepthwiseConv2D 47 | from tensorflow.keras.layers import BatchNormalization 48 | from tensorflow.keras.layers import SpatialDropout2D 49 | from tensorflow.keras.regularizers import l1_l2 50 | from tensorflow.keras.layers import Input, Flatten 51 | from tensorflow.keras.constraints import max_norm 52 | from tensorflow.keras import backend as K 53 | 54 | 55 | def EEGNet(nb_classes, Chans = 64, Samples = 128, 56 | dropoutRate = 0.5, kernLength = 64, F1 = 8, 57 | D = 2, F2 = 16, norm_rate = 0.25, dropoutType = 'Dropout'): 58 | """ Keras Implementation of EEGNet 59 | http://iopscience.iop.org/article/10.1088/1741-2552/aace8c/meta 60 | 61 | Note that this implements the newest version of EEGNet and NOT the earlier 62 | version (version v1 and v2 on arxiv). We strongly recommend using this 63 | architecture as it performs much better and has nicer properties than 64 | our earlier version. For example: 65 | 66 | 1. Depthwise Convolutions to learn spatial filters within a 67 | temporal convolution. 
The use of the depth_multiplier option maps 68 | exactly to the number of spatial filters learned within a temporal 69 | filter. This matches the setup of algorithms like FBCSP which learn 70 | spatial filters within each filter in a filter-bank. This also limits 71 | the number of free parameters to fit when compared to a fully-connected 72 | convolution. 73 | 74 | 2. Separable Convolutions to learn how to optimally combine spatial 75 | filters across temporal bands. Separable Convolutions are Depthwise 76 | Convolutions followed by (1x1) Pointwise Convolutions. 77 | 78 | 79 | While the original paper used Dropout, we found that SpatialDropout2D 80 | sometimes produced slightly better results for classification of ERP 81 | signals. However, SpatialDropout2D significantly reduced performance 82 | on the Oscillatory dataset (SMR, BCI-IV Dataset 2A). We recommend using 83 | the default Dropout in most cases. 84 | 85 | Assumes the input signal is sampled at 128Hz. If you want to use this model 86 | for any other sampling rate you will need to modify the lengths of temporal 87 | kernels and average pooling size in blocks 1 and 2 as needed (double the 88 | kernel lengths for double the sampling rate, etc). Note that we haven't 89 | tested the model performance with this rule so this may not work well. 90 | 91 | The model with default parameters gives the EEGNet-8,2 model as discussed 92 | in the paper. This model should do pretty well in general, although it is 93 | advised to do some model searching to get optimal performance on your 94 | particular dataset. 95 | 96 | We set F2 = F1 * D (number of input filters = number of output filters) for 97 | the SeparableConv2D layer. We haven't extensively tested other values of this 98 | parameter (say, F2 < F1 * D for compressed learning, and F2 > F1 * D for 99 | overcomplete). We believe the main parameters to focus on are F1 and D. 100 | 101 | Inputs: 102 | 103 | nb_classes : int, number of classes to classify 104 | Chans, Samples : number of channels and time points in the EEG data 105 | dropoutRate : dropout fraction 106 | kernLength : length of temporal convolution in first layer. We found 107 | that setting this to be half the sampling rate worked 108 | well in practice. For the SMR dataset in particular 109 | since the data was high-passed at 4Hz we used a kernel 110 | length of 32. 111 | F1, F2 : number of temporal filters (F1) and number of pointwise 112 | filters (F2) to learn. Default: F1 = 8, F2 = F1 * D. 113 | D : number of spatial filters to learn within each temporal 114 | convolution. Default: D = 2 115 | dropoutType : Either SpatialDropout2D or Dropout, passed as a string. 
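      Example (a minimal sketch, not from the original docstring; the Chans and
      Samples values below are placeholders): following the scaling rule above,
      data sampled at 250 Hz would use roughly double the default temporal
      kernel length, e.g.

          model = EEGNet(nb_classes = 4, Chans = 22, Samples = 500,
                         kernLength = 125, F1 = 8, D = 2, F2 = 16)
          model.compile(loss = 'categorical_crossentropy', optimizer = 'adam',
                        metrics = ['accuracy'])

      The average pooling sizes in blocks 1 and 2 are not exposed as arguments,
      so for large changes in sampling rate they would need to be edited in the
      function body as well.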
116 | 117 | """ 118 | 119 | if dropoutType == 'SpatialDropout2D': 120 | dropoutType = SpatialDropout2D 121 | elif dropoutType == 'Dropout': 122 | dropoutType = Dropout 123 | else: 124 | raise ValueError('dropoutType must be one of SpatialDropout2D ' 125 | 'or Dropout, passed as a string.') 126 | 127 | input1 = Input(shape = (Chans, Samples, 1)) 128 | 129 | ################################################################## 130 | block1 = Conv2D(F1, (1, kernLength), padding = 'same', 131 | input_shape = (Chans, Samples, 1), 132 | use_bias = False)(input1) 133 | block1 = BatchNormalization()(block1) 134 | block1 = DepthwiseConv2D((Chans, 1), use_bias = False, 135 | depth_multiplier = D, 136 | depthwise_constraint = max_norm(1.))(block1) 137 | block1 = BatchNormalization()(block1) 138 | block1 = Activation('elu')(block1) 139 | block1 = AveragePooling2D((1, 4))(block1) 140 | block1 = dropoutType(dropoutRate)(block1) 141 | 142 | block2 = SeparableConv2D(F2, (1, 16), 143 | use_bias = False, padding = 'same')(block1) 144 | block2 = BatchNormalization()(block2) 145 | block2 = Activation('elu')(block2) 146 | block2 = AveragePooling2D((1, 8))(block2) 147 | block2 = dropoutType(dropoutRate)(block2) 148 | 149 | flatten = Flatten(name = 'flatten')(block2) 150 | 151 | dense = Dense(nb_classes, name = 'dense', 152 | kernel_constraint = max_norm(norm_rate))(flatten) 153 | softmax = Activation('softmax', name = 'softmax')(dense) 154 | 155 | return Model(inputs=input1, outputs=softmax) 156 | 157 | 158 | 159 | 160 | def EEGNet_SSVEP(nb_classes = 12, Chans = 8, Samples = 256, 161 | dropoutRate = 0.5, kernLength = 256, F1 = 96, 162 | D = 1, F2 = 96, dropoutType = 'Dropout'): 163 | """ SSVEP Variant of EEGNet, as used in [1]. 164 | 165 | Inputs: 166 | 167 | nb_classes : int, number of classes to classify 168 | Chans, Samples : number of channels and time points in the EEG data 169 | dropoutRate : dropout fraction 170 | kernLength : length of temporal convolution in first layer 171 | F1, F2 : number of temporal filters (F1) and number of pointwise 172 | filters (F2) to learn. 173 | D : number of spatial filters to learn within each temporal 174 | convolution. 175 | dropoutType : Either SpatialDropout2D or Dropout, passed as a string. 176 | 177 | 178 | [1]. Waytowich, N. et. al. (2018). Compact Convolutional Neural Networks 179 | for Classification of Asynchronous Steady-State Visual Evoked Potentials. 180 | Journal of Neural Engineering vol. 15(6). 
181 | http://iopscience.iop.org/article/10.1088/1741-2552/aae5d8 182 | 183 | """ 184 | 185 | if dropoutType == 'SpatialDropout2D': 186 | dropoutType = SpatialDropout2D 187 | elif dropoutType == 'Dropout': 188 | dropoutType = Dropout 189 | else: 190 | raise ValueError('dropoutType must be one of SpatialDropout2D ' 191 | 'or Dropout, passed as a string.') 192 | 193 | input1 = Input(shape = (Chans, Samples, 1)) 194 | 195 | ################################################################## 196 | block1 = Conv2D(F1, (1, kernLength), padding = 'same', 197 | input_shape = (Chans, Samples, 1), 198 | use_bias = False)(input1) 199 | block1 = BatchNormalization()(block1) 200 | block1 = DepthwiseConv2D((Chans, 1), use_bias = False, 201 | depth_multiplier = D, 202 | depthwise_constraint = max_norm(1.))(block1) 203 | block1 = BatchNormalization()(block1) 204 | block1 = Activation('elu')(block1) 205 | block1 = AveragePooling2D((1, 4))(block1) 206 | block1 = dropoutType(dropoutRate)(block1) 207 | 208 | block2 = SeparableConv2D(F2, (1, 16), 209 | use_bias = False, padding = 'same')(block1) 210 | block2 = BatchNormalization()(block2) 211 | block2 = Activation('elu')(block2) 212 | block2 = AveragePooling2D((1, 8))(block2) 213 | block2 = dropoutType(dropoutRate)(block2) 214 | 215 | flatten = Flatten(name = 'flatten')(block2) 216 | 217 | dense = Dense(nb_classes, name = 'dense')(flatten) 218 | softmax = Activation('softmax', name = 'softmax')(dense) 219 | 220 | return Model(inputs=input1, outputs=softmax) 221 | 222 | 223 | 224 | def EEGNet_old(nb_classes, Chans = 64, Samples = 128, regRate = 0.0001, 225 | dropoutRate = 0.25, kernels = [(2, 32), (8, 4)], strides = (2, 4)): 226 | """ Keras Implementation of EEGNet_v1 (https://arxiv.org/abs/1611.08024v2) 227 | 228 | This model is the original EEGNet model proposed on arxiv 229 | https://arxiv.org/abs/1611.08024v2 230 | 231 | with a few modifications: we use striding instead of max-pooling as this 232 | helped slightly in classification performance while also providing a 233 | computational speed-up. 234 | 235 | Note that we no longer recommend the use of this architecture, as the new 236 | version of EEGNet performs much better overall and has nicer properties. 
237 | 238 | Inputs: 239 | 240 | nb_classes : total number of final categories 241 | Chans, Samples : number of EEG channels and samples, respectively 242 | regRate : regularization rate for L1 and L2 regularizations 243 | dropoutRate : dropout fraction 244 | kernels : the 2nd and 3rd layer kernel dimensions (default is 245 | the [2, 32] x [8, 4] configuration) 246 | strides : the stride size (note that this replaces the max-pool 247 | used in the original paper) 248 | 249 | """ 250 | 251 | # start the model 252 | input_main = Input((Chans, Samples)) 253 | layer1 = Conv2D(16, (Chans, 1), input_shape=(Chans, Samples, 1), 254 | kernel_regularizer = l1_l2(l1=regRate, l2=regRate))(input_main) 255 | layer1 = BatchNormalization()(layer1) 256 | layer1 = Activation('elu')(layer1) 257 | layer1 = Dropout(dropoutRate)(layer1) 258 | 259 | permute_dims = 2, 1, 3 260 | permute1 = Permute(permute_dims)(layer1) 261 | 262 | layer2 = Conv2D(4, kernels[0], padding = 'same', 263 | kernel_regularizer=l1_l2(l1=0.0, l2=regRate), 264 | strides = strides)(permute1) 265 | layer2 = BatchNormalization()(layer2) 266 | layer2 = Activation('elu')(layer2) 267 | layer2 = Dropout(dropoutRate)(layer2) 268 | 269 | layer3 = Conv2D(4, kernels[1], padding = 'same', 270 | kernel_regularizer=l1_l2(l1=0.0, l2=regRate), 271 | strides = strides)(layer2) 272 | layer3 = BatchNormalization()(layer3) 273 | layer3 = Activation('elu')(layer3) 274 | layer3 = Dropout(dropoutRate)(layer3) 275 | 276 | flatten = Flatten(name = 'flatten')(layer3) 277 | 278 | dense = Dense(nb_classes, name = 'dense')(flatten) 279 | softmax = Activation('softmax', name = 'softmax')(dense) 280 | 281 | return Model(inputs=input_main, outputs=softmax) 282 | 283 | 284 | 285 | def DeepConvNet(nb_classes, Chans = 64, Samples = 256, 286 | dropoutRate = 0.5): 287 | """ Keras implementation of the Deep Convolutional Network as described in 288 | Schirrmeister et. al. (2017), Human Brain Mapping. 289 | 290 | This implementation assumes the input is a 2-second EEG signal sampled at 291 | 128Hz, as opposed to signals sampled at 250Hz as described in the original 292 | paper. We also perform temporal convolutions of length (1, 5) as opposed 293 | to (1, 10) due to this sampling rate difference. 294 | 295 | Note that we use the max_norm constraint on all convolutional layers, as 296 | well as the classification layer. We also change the defaults for the 297 | BatchNormalization layer. We used this based on a personal communication 298 | with the original authors. 299 | 300 | ours original paper 301 | pool_size 1, 2 1, 3 302 | strides 1, 2 1, 3 303 | conv filters 1, 5 1, 10 304 | 305 | Note that this implementation has not been verified by the original 306 | authors. 
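    A minimal usage sketch (parameter values are placeholders, not from the
    original docstring): for 2-second trials sampled at 128 Hz, as assumed
    above,

        model = DeepConvNet(nb_classes = 4, Chans = 64, Samples = 256,
                            dropoutRate = 0.5)

    For other sampling rates, the (1, 5) temporal kernels and (1, 2) pooling
    would need to be edited in the function body, analogous to the scaling
    rule described in the EEGNet docstring above.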
307 | 308 | """ 309 | 310 | # start the model 311 | input_main = Input((Chans, Samples, 1)) 312 | block1 = Conv2D(25, (1, 5), 313 | input_shape=(Chans, Samples, 1), 314 | kernel_constraint = max_norm(2., axis=(0,1,2)))(input_main) 315 | block1 = Conv2D(25, (Chans, 1), 316 | kernel_constraint = max_norm(2., axis=(0,1,2)))(block1) 317 | block1 = BatchNormalization(epsilon=1e-05, momentum=0.9)(block1) 318 | block1 = Activation('elu')(block1) 319 | block1 = MaxPooling2D(pool_size=(1, 2), strides=(1, 2))(block1) 320 | block1 = Dropout(dropoutRate)(block1) 321 | 322 | block2 = Conv2D(50, (1, 5), 323 | kernel_constraint = max_norm(2., axis=(0,1,2)))(block1) 324 | block2 = BatchNormalization(epsilon=1e-05, momentum=0.9)(block2) 325 | block2 = Activation('elu')(block2) 326 | block2 = MaxPooling2D(pool_size=(1, 2), strides=(1, 2))(block2) 327 | block2 = Dropout(dropoutRate)(block2) 328 | 329 | block3 = Conv2D(100, (1, 5), 330 | kernel_constraint = max_norm(2., axis=(0,1,2)))(block2) 331 | block3 = BatchNormalization(epsilon=1e-05, momentum=0.9)(block3) 332 | block3 = Activation('elu')(block3) 333 | block3 = MaxPooling2D(pool_size=(1, 2), strides=(1, 2))(block3) 334 | block3 = Dropout(dropoutRate)(block3) 335 | 336 | block4 = Conv2D(200, (1, 5), 337 | kernel_constraint = max_norm(2., axis=(0,1,2)))(block3) 338 | block4 = BatchNormalization(epsilon=1e-05, momentum=0.9)(block4) 339 | block4 = Activation('elu')(block4) 340 | block4 = MaxPooling2D(pool_size=(1, 2), strides=(1, 2))(block4) 341 | block4 = Dropout(dropoutRate)(block4) 342 | 343 | flatten = Flatten()(block4) 344 | 345 | dense = Dense(nb_classes, kernel_constraint = max_norm(0.5))(flatten) 346 | softmax = Activation('softmax')(dense) 347 | 348 | return Model(inputs=input_main, outputs=softmax) 349 | 350 | 351 | # need these for ShallowConvNet 352 | def square(x): 353 | return K.square(x) 354 | 355 | def log(x): 356 | return K.log(K.clip(x, min_value = 1e-7, max_value = 10000)) 357 | 358 | 359 | def ShallowConvNet(nb_classes, Chans = 64, Samples = 128, dropoutRate = 0.5): 360 | """ Keras implementation of the Shallow Convolutional Network as described 361 | in Schirrmeister et. al. (2017), Human Brain Mapping. 362 | 363 | Assumes the input is a 2-second EEG signal sampled at 128Hz. Note that in 364 | the original paper, they do temporal convolutions of length 25 for EEG 365 | data sampled at 250Hz. We instead use length 13 since the sampling rate is 366 | roughly half of the 250Hz which the paper used. The pool_size and stride 367 | in later layers is also approximately half of what is used in the paper. 368 | 369 | Note that we use the max_norm constraint on all convolutional layers, as 370 | well as the classification layer. We also change the defaults for the 371 | BatchNormalization layer. We used this based on a personal communication 372 | with the original authors. 373 | 374 | ours original paper 375 | pool_size 1, 35 1, 75 376 | strides 1, 7 1, 15 377 | conv filters 1, 13 1, 25 378 | 379 | Note that this implementation has not been verified by the original 380 | authors. We do note that this implementation reproduces the results in the 381 | original paper with minor deviations. 
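    As a reading aid: with the square/log activations defined above, block 1
    effectively computes log(mean(x**2)) over each average-pooling window --
    i.e. the log-variance (log band power) of the spatially filtered signal --
    with K.clip guarding against taking the log of zero.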
382 | """ 383 | 384 | # start the model 385 | input_main = Input((Chans, Samples, 1)) 386 | block1 = Conv2D(40, (1, 13), 387 | input_shape=(Chans, Samples, 1), 388 | kernel_constraint = max_norm(2., axis=(0,1,2)))(input_main) 389 | block1 = Conv2D(40, (Chans, 1), use_bias=False, 390 | kernel_constraint = max_norm(2., axis=(0,1,2)))(block1) 391 | block1 = BatchNormalization(epsilon=1e-05, momentum=0.9)(block1) 392 | block1 = Activation(square)(block1) 393 | block1 = AveragePooling2D(pool_size=(1, 35), strides=(1, 7))(block1) 394 | block1 = Activation(log)(block1) 395 | block1 = Dropout(dropoutRate)(block1) 396 | flatten = Flatten()(block1) 397 | dense = Dense(nb_classes, kernel_constraint = max_norm(0.5))(flatten) 398 | softmax = Activation('softmax')(dense) 399 | 400 | return Model(inputs=input_main, outputs=softmax) 401 | 402 | 403 | -------------------------------------------------------------------------------- /EEGNET_guanwang_tf/LICENSE.txt: -------------------------------------------------------------------------------- 1 | Notes regarding the Creative Commons Zero (CC0) License 2 | ------------------------------------------------------- 3 | 4 | Portions of this work do not have copyright attached within the jurisdiction 5 | of the United States of America. Those portions are distributed world-wide 6 | under the Creative Commons Zero (CC0) 1.0 Universal license, the text of which 7 | is provided below, with the following modification: 8 | 9 | Grant of Patent License. Subject to the terms and conditions of this License, 10 | each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, 11 | no-charge, royalty-free, irrevocable (except as stated in this section) patent 12 | license to make, have made, use, offer to sell, sell, import, and otherwise 13 | transfer the Work, where such license applies only to those patent claims 14 | licensable by such Contributor that are necessarily infringed by their 15 | Contribution(s) alone or by combination of their Contribution(s) with the Work 16 | to which such Contribution(s) was submitted. If You institute patent 17 | litigation against any entity (including a cross-claim or counterclaim in a 18 | lawsuit) alleging that the Work or a Contribution incorporated within the Work 19 | constitutes direct or contributory patent infringement, then any patent 20 | licenses granted to You under this License for that Work shall terminate as of 21 | the date such litigation is filed. 22 | 23 | Creative Commons Legal Code 24 | 25 | CC0 1.0 Universal 26 | 27 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 28 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 29 | ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS 30 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 31 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 32 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 33 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 34 | HEREUNDER. 35 | 36 | Statement of Purpose 37 | 38 | The laws of most jurisdictions throughout the world automatically confer 39 | exclusive Copyright and Related Rights (defined below) upon the creator 40 | and subsequent owner(s) (each and all, an "owner") of an original work of 41 | authorship and/or a database (each, a "Work"). 
42 | 43 | Certain owners wish to permanently relinquish those rights to a Work for 44 | the purpose of contributing to a commons of creative, cultural and 45 | scientific works ("Commons") that the public can reliably and without fear 46 | of later claims of infringement build upon, modify, incorporate in other 47 | works, reuse and redistribute as freely as possible in any form whatsoever 48 | and for any purposes, including without limitation commercial purposes. 49 | These owners may contribute to the Commons to promote the ideal of a free 50 | culture and the further production of creative, cultural and scientific 51 | works, or to gain reputation or greater distribution for their Work in 52 | part through the use and efforts of others. 53 | 54 | For these and/or other purposes and motivations, and without any 55 | expectation of additional consideration or compensation, the person 56 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 57 | is an owner of Copyright and Related Rights in the Work, voluntarily 58 | elects to apply CC0 to the Work and publicly distribute the Work under its 59 | terms, with knowledge of his or her Copyright and Related Rights in the 60 | Work and the meaning and intended legal effect of CC0 on those rights. 61 | 62 | 1. Copyright and Related Rights. A Work made available under CC0 may be 63 | protected by copyright and related or neighboring rights ("Copyright and 64 | Related Rights"). Copyright and Related Rights include, but are not 65 | limited to, the following: 66 | 67 | i. the right to reproduce, adapt, distribute, perform, display, 68 | communicate, and translate a Work; 69 | ii. moral rights retained by the original author(s) and/or performer(s); 70 | iii. publicity and privacy rights pertaining to a person's image or 71 | likeness depicted in a Work; 72 | iv. rights protecting against unfair competition in regards to a Work, 73 | subject to the limitations in paragraph 4(a), below; 74 | v. rights protecting the extraction, dissemination, use and reuse of data 75 | in a Work; 76 | vi. database rights (such as those arising under Directive 96/9/EC of the 77 | European Parliament and of the Council of 11 March 1996 on the legal 78 | protection of databases, and under any national implementation 79 | thereof, including any amended or successor version of such 80 | directive); and 81 | vii. other similar, equivalent or corresponding rights throughout the 82 | world based on applicable law or treaty, and any national 83 | implementations thereof. 84 | 85 | 2. Waiver. To the greatest extent permitted by, but not in contravention 86 | of, applicable law, Affirmer hereby overtly, fully, permanently, 87 | irrevocably and unconditionally waives, abandons, and surrenders all of 88 | Affirmer's Copyright and Related Rights and associated claims and causes 89 | of action, whether now known or unknown (including existing as well as 90 | future claims and causes of action), in the Work (i) in all territories 91 | worldwide, (ii) for the maximum duration provided by applicable law or 92 | treaty (including future time extensions), (iii) in any current or future 93 | medium and for any number of copies, and (iv) for any purpose whatsoever, 94 | including without limitation commercial, advertising or promotional 95 | purposes (the "Waiver"). 
Affirmer makes the Waiver for the benefit of each 96 | member of the public at large and to the detriment of Affirmer's heirs and 97 | successors, fully intending that such Waiver shall not be subject to 98 | revocation, rescission, cancellation, termination, or any other legal or 99 | equitable action to disrupt the quiet enjoyment of the Work by the public 100 | as contemplated by Affirmer's express Statement of Purpose. 101 | 102 | 3. Public License Fallback. Should any part of the Waiver for any reason 103 | be judged legally invalid or ineffective under applicable law, then the 104 | Waiver shall be preserved to the maximum extent permitted taking into 105 | account Affirmer's express Statement of Purpose. In addition, to the 106 | extent the Waiver is so judged Affirmer hereby grants to each affected 107 | person a royalty-free, non transferable, non sublicensable, non exclusive, 108 | irrevocable and unconditional license to exercise Affirmer's Copyright and 109 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 110 | maximum duration provided by applicable law or treaty (including future 111 | time extensions), (iii) in any current or future medium and for any number 112 | of copies, and (iv) for any purpose whatsoever, including without 113 | limitation commercial, advertising or promotional purposes (the 114 | "License"). The License shall be deemed effective as of the date CC0 was 115 | applied by Affirmer to the Work. Should any part of the License for any 116 | reason be judged legally invalid or ineffective under applicable law, such 117 | partial invalidity or ineffectiveness shall not invalidate the remainder 118 | of the License, and in such case Affirmer hereby affirms that he or she 119 | will not (i) exercise any of his or her remaining Copyright and Related 120 | Rights in the Work or (ii) assert any associated claims and causes of 121 | action with respect to the Work, in either case contrary to Affirmer's 122 | express Statement of Purpose. 123 | 124 | 4. Limitations and Disclaimers. 125 | 126 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 127 | surrendered, licensed or otherwise affected by this document. 128 | b. Affirmer offers the Work as-is and makes no representations or 129 | warranties of any kind concerning the Work, express, implied, 130 | statutory or otherwise, including without limitation warranties of 131 | title, merchantability, fitness for a particular purpose, non 132 | infringement, or the absence of latent or other defects, accuracy, or 133 | the present or absence of errors, whether or not discoverable, all to 134 | the greatest extent permissible under applicable law. 135 | c. Affirmer disclaims responsibility for clearing rights of other persons 136 | that may apply to the Work or any use thereof, including without 137 | limitation any person's Copyright and Related Rights in the Work. 138 | Further, Affirmer disclaims responsibility for obtaining any necessary 139 | consents, permissions or other rights required for any use of the 140 | Work. 141 | d. Affirmer understands and acknowledges that Creative Commons is not a 142 | party to this document and has no duty or obligation with respect to 143 | this CC0 or use of the Work. 144 | 145 | Notes regarding the Apache License Version 2.0 146 | ---------------------------------------------- 147 | 148 | Portions of this work have copyright attached within the jurisdiction of the 149 | United States of America. 
Those portions are distributed world-wide under the 150 | Apache License version 2.0, the text of which is provided below. 151 | 152 | Apache License 153 | Version 2.0, January 2004 154 | http://www.apache.org/licenses/ 155 | 156 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 157 | 158 | 1. Definitions. 159 | 160 | "License" shall mean the terms and conditions for use, reproduction, 161 | and distribution as defined by Sections 1 through 9 of this document. 162 | 163 | "Licensor" shall mean the copyright owner or entity authorized by 164 | the copyright owner that is granting the License. 165 | 166 | "Legal Entity" shall mean the union of the acting entity and all 167 | other entities that control, are controlled by, or are under common 168 | control with that entity. For the purposes of this definition, 169 | "control" means (i) the power, direct or indirect, to cause the 170 | direction or management of such entity, whether by contract or 171 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 172 | outstanding shares, or (iii) beneficial ownership of such entity. 173 | 174 | "You" (or "Your") shall mean an individual or Legal Entity 175 | exercising permissions granted by this License. 176 | 177 | "Source" form shall mean the preferred form for making modifications, 178 | including but not limited to software source code, documentation 179 | source, and configuration files. 180 | 181 | "Object" form shall mean any form resulting from mechanical 182 | transformation or translation of a Source form, including but 183 | not limited to compiled object code, generated documentation, 184 | and conversions to other media types. 185 | 186 | "Work" shall mean the work of authorship, whether in Source or 187 | Object form, made available under the License, as indicated by a 188 | copyright notice that is included in or attached to the work 189 | (an example is provided in the Appendix below). 190 | 191 | "Derivative Works" shall mean any work, whether in Source or Object 192 | form, that is based on (or derived from) the Work and for which the 193 | editorial revisions, annotations, elaborations, or other modifications 194 | represent, as a whole, an original work of authorship. For the purposes 195 | of this License, Derivative Works shall not include works that remain 196 | separable from, or merely link (or bind by name) to the interfaces of, 197 | the Work and Derivative Works thereof. 198 | 199 | "Contribution" shall mean any work of authorship, including 200 | the original version of the Work and any modifications or additions 201 | to that Work or Derivative Works thereof, that is intentionally 202 | submitted to Licensor for inclusion in the Work by the copyright owner 203 | or by an individual or Legal Entity authorized to submit on behalf of 204 | the copyright owner. For the purposes of this definition, "submitted" 205 | means any form of electronic, verbal, or written communication sent 206 | to the Licensor or its representatives, including but not limited to 207 | communication on electronic mailing lists, source code control systems, 208 | and issue tracking systems that are managed by, or on behalf of, the 209 | Licensor for the purpose of discussing and improving the Work, but 210 | excluding communication that is conspicuously marked or otherwise 211 | designated in writing by the copyright owner as "Not a Contribution." 
212 | 213 | "Contributor" shall mean Licensor and any individual or Legal Entity 214 | on behalf of whom a Contribution has been received by Licensor and 215 | subsequently incorporated within the Work. 216 | 217 | 2. Grant of Copyright License. Subject to the terms and conditions of 218 | this License, each Contributor hereby grants to You a perpetual, 219 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 220 | copyright license to reproduce, prepare Derivative Works of, 221 | publicly display, publicly perform, sublicense, and distribute the 222 | Work and such Derivative Works in Source or Object form. 223 | 224 | 3. Grant of Patent License. Subject to the terms and conditions of 225 | this License, each Contributor hereby grants to You a perpetual, 226 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 227 | (except as stated in this section) patent license to make, have made, 228 | use, offer to sell, sell, import, and otherwise transfer the Work, 229 | where such license applies only to those patent claims licensable 230 | by such Contributor that are necessarily infringed by their 231 | Contribution(s) alone or by combination of their Contribution(s) 232 | with the Work to which such Contribution(s) was submitted. If You 233 | institute patent litigation against any entity (including a 234 | cross-claim or counterclaim in a lawsuit) alleging that the Work 235 | or a Contribution incorporated within the Work constitutes direct 236 | or contributory patent infringement, then any patent licenses 237 | granted to You under this License for that Work shall terminate 238 | as of the date such litigation is filed. 239 | 240 | 4. Redistribution. You may reproduce and distribute copies of the 241 | Work or Derivative Works thereof in any medium, with or without 242 | modifications, and in Source or Object form, provided that You 243 | meet the following conditions: 244 | 245 | (a) You must give any other recipients of the Work or 246 | Derivative Works a copy of this License; and 247 | 248 | (b) You must cause any modified files to carry prominent notices 249 | stating that You changed the files; and 250 | 251 | (c) You must retain, in the Source form of any Derivative Works 252 | that You distribute, all copyright, patent, trademark, and 253 | attribution notices from the Source form of the Work, 254 | excluding those notices that do not pertain to any part of 255 | the Derivative Works; and 256 | 257 | (d) If the Work includes a "NOTICE" text file as part of its 258 | distribution, then any Derivative Works that You distribute must 259 | include a readable copy of the attribution notices contained 260 | within such NOTICE file, excluding those notices that do not 261 | pertain to any part of the Derivative Works, in at least one 262 | of the following places: within a NOTICE text file distributed 263 | as part of the Derivative Works; within the Source form or 264 | documentation, if provided along with the Derivative Works; or, 265 | within a display generated by the Derivative Works, if and 266 | wherever such third-party notices normally appear. The contents 267 | of the NOTICE file are for informational purposes only and 268 | do not modify the License. You may add Your own attribution 269 | notices within Derivative Works that You distribute, alongside 270 | or as an addendum to the NOTICE text from the Work, provided 271 | that such additional attribution notices cannot be construed 272 | as modifying the License. 
273 | 274 | You may add Your own copyright statement to Your modifications and 275 | may provide additional or different license terms and conditions 276 | for use, reproduction, or distribution of Your modifications, or 277 | for any such Derivative Works as a whole, provided Your use, 278 | reproduction, and distribution of the Work otherwise complies with 279 | the conditions stated in this License. 280 | 281 | 5. Submission of Contributions. Unless You explicitly state otherwise, 282 | any Contribution intentionally submitted for inclusion in the Work 283 | by You to the Licensor shall be under the terms and conditions of 284 | this License, without any additional terms or conditions. 285 | Notwithstanding the above, nothing herein shall supersede or modify 286 | the terms of any separate license agreement you may have executed 287 | with Licensor regarding such Contributions. 288 | 289 | 6. Trademarks. This License does not grant permission to use the trade 290 | names, trademarks, service marks, or product names of the Licensor, 291 | except as required for reasonable and customary use in describing the 292 | origin of the Work and reproducing the content of the NOTICE file. 293 | 294 | 7. Disclaimer of Warranty. Unless required by applicable law or 295 | agreed to in writing, Licensor provides the Work (and each 296 | Contributor provides its Contributions) on an "AS IS" BASIS, 297 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 298 | implied, including, without limitation, any warranties or conditions 299 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 300 | PARTICULAR PURPOSE. You are solely responsible for determining the 301 | appropriateness of using or redistributing the Work and assume any 302 | risks associated with Your exercise of permissions under this License. 303 | 304 | 8. Limitation of Liability. In no event and under no legal theory, 305 | whether in tort (including negligence), contract, or otherwise, 306 | unless required by applicable law (such as deliberate and grossly 307 | negligent acts) or agreed to in writing, shall any Contributor be 308 | liable to You for damages, including any direct, indirect, special, 309 | incidental, or consequential damages of any character arising as a 310 | result of this License or out of the use or inability to use the 311 | Work (including but not limited to damages for loss of goodwill, 312 | work stoppage, computer failure or malfunction, or any and all 313 | other commercial damages or losses), even if such Contributor 314 | has been advised of the possibility of such damages. 315 | 316 | 9. Accepting Warranty or Additional Liability. While redistributing 317 | the Work or Derivative Works thereof, You may choose to offer, 318 | and charge a fee for, acceptance of support, warranty, indemnity, 319 | or other liability obligations and/or rights consistent with this 320 | License. However, in accepting such obligations, You may act only 321 | on Your own behalf and on Your sole responsibility, not on behalf 322 | of any other Contributor, and only if You agree to indemnify, 323 | defend, and hold each Contributor harmless for any liability 324 | incurred by, or claims asserted against, such Contributor by reason 325 | of your accepting any such warranty or additional liability. 
326 | 327 | END OF TERMS AND CONDITIONS 328 | -------------------------------------------------------------------------------- /EEGNET_guanwang_tf/README.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | This is the Army Research Laboratory (ARL) EEGModels project: A Collection of Convolutional Neural Network (CNN) models for EEG signal processing and classification, written in Keras and Tensorflow. The aim of this project is to 3 | 4 | - provide a set of well-validated CNN models for EEG signal processing and classification 5 | - facilitate reproducible research and 6 | - enable other researchers to use and compare these models as easy as possible on their data 7 | 8 | # Requirements 9 | 10 | - Python == 3.7 or 3.8 11 | - tensorflow == 2.X (verified working with 2.0 - 2.3, both for CPU and GPU) 12 | 13 | To run the EEG/MEG ERP classification sample script, you will also need 14 | 15 | - mne >= 0.17.1 16 | - PyRiemann >= 0.2.5 17 | - scikit-learn >= 0.20.1 18 | - matplotlib >= 2.2.3 19 | 20 | # Models Implemented 21 | 22 | - EEGNet [[1]](http://stacks.iop.org/1741-2552/15/i=5/a=056013). Both the original model and the revised model are implemented. 23 | - EEGNet variant used for classification of Steady State Visual Evoked Potential (SSVEP) Signals [[2]](http://iopscience.iop.org/article/10.1088/1741-2552/aae5d8) 24 | - DeepConvNet [[3]](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730) 25 | - ShallowConvNet [[3]](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730) 26 | 27 | 28 | # Usage 29 | 30 | To use this package, place the contents of this folder in your PYTHONPATH environment variable. Then, one can simply import any model and configure it as 31 | 32 | 33 | ```python 34 | 35 | from EEGModels import EEGNet, ShallowConvNet, DeepConvNet 36 | 37 | model = EEGNet(nb_classes = ..., Chans = ..., Samples = ...) 38 | 39 | model2 = ShallowConvNet(nb_classes = ..., Chans = ..., Samples = ...) 40 | 41 | model3 = DeepConvNet(nb_classes = ..., Chans = ..., Samples = ...) 42 | 43 | ``` 44 | 45 | Compile the model with the associated loss function and optimizer (in our case, the categorical cross-entropy and Adam optimizer, respectively). Then fit the model and predict on new test data. 46 | 47 | ```python 48 | 49 | model.compile(loss = 'categorical_crossentropy', optimizer = 'adam') 50 | fittedModel = model.fit(...) 51 | predicted = model.predict(...) 52 | 53 | ``` 54 | 55 | # EEGNet Feature Explainability 56 | 57 | Note: Please see https://github.com/vlawhern/arl-eegmodels/issues/29 for additional steps needed to get this to work with Tensorflow 2. 58 | 59 | To reproduce the EEGNet single-trial feature relevance results as we reported in [[1]](http://stacks.iop.org/1741-2552/15/i=5/a=056013), download and install DeepExplain located [[here]](https://github.com/marcoancona/DeepExplain), which implements a variety of relevance attribution methods (both gradient-based and perturbation-based). A sketch of how to use it is given below: 60 | 61 | ```python 62 | from EEGModels import EEGNet 63 | from tensorflow.keras.models import Model 64 | from deepexplain.tensorflow import DeepExplain 65 | from tensorflow.keras import backend as K 66 | 67 | # configure, compile and fit the model 68 | 69 | model = EEGNet(nb_classes = ..., Chans = ..., Samples = ...) 70 | model.compile(loss = 'categorical_crossentropy', optimizer = 'adam') 71 | fittedModel = model.fit(...) 
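# Note (a sketch only -- see the Tensorflow 2 issue linked above for the
# authoritative steps): DeepExplain relies on the TF1-style session returned by
# K.get_session() (used below), which under Tensorflow 2.x is only reachable via
# the compat.v1 layer. A common workaround is to disable eager execution before
# building the model, e.g.:
#
#   import tensorflow.compat.v1 as tf_v1
#   tf_v1.disable_eager_execution()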
72 | 73 | # use DeepExplain to get individual trial feature relevances for some test data (X_test, Y_test). 74 | # Note that model.layers[-2] points to the dense layer prior to softmax activation. Also, we use 75 | # the DeepLIFT method in the paper, although other options, including epsilon-LRP, are available. 76 | # This works with all implemented models. 77 | 78 | # here, Y_test and X_test are the one-hot encodings of the class labels and 79 | # the data, respectively. 80 | 81 | with DeepExplain(session = K.get_session()) as de: 82 | input_tensor = model.layers[0].input 83 | fModel = Model(inputs = input_tensor, outputs = model.layers[-2].output) 84 | target_tensor = fModel(input_tensor) 85 | 86 | # can use epsilon-LRP as well if you like. 87 | attributions = de.explain('deeplift', target_tensor * Y_test, input_tensor, X_test) 88 | # attributions = de.explain('elrp', target_tensor * Y_test, input_tensor, X_test) 89 | 90 | 91 | ``` 92 | 93 | 94 | # Paper Citation 95 | 96 | If you use the EEGNet model in your research and found it helpful, please cite the following paper: 97 | 98 | ``` 99 | @article{Lawhern2018, 100 | author={Vernon J Lawhern and Amelia J Solon and Nicholas R Waytowich and Stephen M Gordon and Chou P Hung and Brent J Lance}, 101 | title={EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces}, 102 | journal={Journal of Neural Engineering}, 103 | volume={15}, 104 | number={5}, 105 | pages={056013}, 106 | url={http://stacks.iop.org/1741-2552/15/i=5/a=056013}, 107 | year={2018} 108 | } 109 | ``` 110 | 111 | If you use the SSVEP variant of the EEGNet model in your research and found it helpful, please cite the following paper: 112 | 113 | ``` 114 | @article{Waytowich2018, 115 | author={Nicholas Waytowich and Vernon J Lawhern and Javier O Garcia and Jennifer Cummings and Josef Faller and Paul Sajda and Jean M 116 | Vettel}, 117 | title={Compact convolutional neural networks for classification of asynchronous steady-state visual evoked potentials}, 118 | journal={Journal of Neural Engineering}, 119 | volume={15}, 120 | number={6}, 121 | pages={066031}, 122 | url={http://stacks.iop.org/1741-2552/15/i=6/a=066031}, 123 | year={2018} 124 | } 125 | 126 | ``` 127 | 128 | Similarly, if you use the ShallowConvNet or DeepConvNet models and found them helpful, please cite the following paper: 129 | 130 | ``` 131 | @article{hbm23730, 132 | author = {Schirrmeister Robin Tibor and 133 | Springenberg Jost Tobias and 134 | Fiederer Lukas Dominique Josef and 135 | Glasstetter Martin and 136 | Eggensperger Katharina and 137 | Tangermann Michael and 138 | Hutter Frank and 139 | Burgard Wolfram and 140 | Ball Tonio}, 141 | title = {Deep learning with convolutional neural networks for EEG decoding and visualization}, 142 | journal = {Human Brain Mapping}, 143 | volume = {38}, 144 | number = {11}, 145 | pages = {5391-5420}, 146 | keywords = {electroencephalography, EEG analysis, machine learning, end‐to‐end learning, brain–machine interface, brain–computer interface, model interpretability, brain mapping}, 147 | doi = {10.1002/hbm.23730}, 148 | url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.23730} 149 | } 150 | ``` 151 | 152 | # Legal Disclaimer 153 | 154 | This project is governed by the terms of the Creative Commons Zero 1.0 Universal (CC0 1.0) Public Domain Dedication (the Agreement). You should have received a copy of the Agreement with a copy of this software. If not, see https://github.com/USArmyResearchLab/ARLDCCSO. 
Your use or distribution of ARL EEGModels, in both source and binary form, in whole or in part, implies your agreement to abide by the terms set forth in the Agreement in full. 155 | 156 | Other portions of this project are subject to domestic copyright protection under 17 USC Sec. 105. Those portions are licensed under the Apache 2.0 license. The complete text of the license governing this material is in the file labeled LICENSE.TXT that is a part of this project's official distribution. 157 | 158 | arl-eegmodels is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 159 | 160 | You may find the full license in the file LICENSE in this directory. 161 | 162 | # Contributions 163 | 164 | Due to legal issues, every contributor will need to have a signed Contributor License Agreement on file. The ARL Contributor License Agreement (ARL Form 266) can be found [here](https://github.com/USArmyResearchLab/ARL-Open-Source-Guidance-and-Instructions/blob/master/ARL%20Form%20-%20266.pdf). 165 | 166 | Each external contributor must execute and return a copy for each project that he or she intends to contribute to. 167 | 168 | Once ARL receives the executed form, it will remain in force permanently. 169 | 170 | Thus, external contributors need only execute the form once for each project that they plan on contributing to. 171 | 172 | 173 | -------------------------------------------------------------------------------- /EEGNET_guanwang_tf/examples/EEGNet-8-2-weights.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNET_guanwang_tf/examples/EEGNet-8-2-weights.h5 -------------------------------------------------------------------------------- /EEGNET_guanwang_tf/examples/ERP.py: -------------------------------------------------------------------------------- 1 | """ 2 | Sample script using EEGNet to classify Event-Related Potential (ERP) EEG data 3 | from a four-class classification task, using the sample dataset provided in 4 | the MNE [1, 2] package: 5 | https://martinos.org/mne/stable/manual/sample_dataset.html#ch-sample-data 6 | 7 | The four classes used from this dataset are: 8 | LA: Left-ear auditory stimulation 9 | RA: Right-ear auditory stimulation 10 | LV: Left visual field stimulation 11 | RV: Right visual field stimulation 12 | 13 | The code to process, filter and epoch the data are originally from Alexandre 14 | Barachant's PyRiemann [3] package, released under the BSD 3-clause. A copy of 15 | the BSD 3-clause license has been provided together with this software to 16 | comply with software licensing requirements. 17 | 18 | When you first run this script, MNE will download the dataset and prompt you 19 | to confirm the download location (defaults to ~/mne_data). Follow the prompts 20 | to continue. The dataset size is approx. 1.5GB download. 21 | 22 | For comparative purposes you can also compare EEGNet performance to using 23 | Riemannian geometric approaches with xDAWN spatial filtering [4-8] using 24 | PyRiemann (code provided below). 25 | 26 | [1] A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, 27 | L. Parkkonen, M. Hämäläinen, MNE software for processing MEG and EEG data, 28 | NeuroImage, Volume 86, 1 February 2014, Pages 446-460, ISSN 1053-8119. 29 | 30 | [2] A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. 
Strohmeier, C. Brodbeck, 31 | R. Goj, M. Jas, T. Brooks, L. Parkkonen, M. Hämäläinen, MEG and EEG data 32 | analysis with MNE-Python, Frontiers in Neuroscience, Volume 7, 2013. 33 | 34 | [3] https://github.com/alexandrebarachant/pyRiemann. 35 | 36 | [4] A. Barachant, M. Congedo ,"A Plug&Play P300 BCI Using Information Geometry" 37 | arXiv:1409.0107. link 38 | 39 | [5] M. Congedo, A. Barachant, A. Andreev ,"A New generation of Brain-Computer 40 | Interface Based on Riemannian Geometry", arXiv: 1310.8115. 41 | 42 | [6] A. Barachant and S. Bonnet, "Channel selection procedure using riemannian 43 | distance for BCI applications," in 2011 5th International IEEE/EMBS 44 | Conference on Neural Engineering (NER), 2011, 348-351. 45 | 46 | [7] A. Barachant, S. Bonnet, M. Congedo and C. Jutten, “Multiclass 47 | Brain-Computer Interface Classification by Riemannian Geometry,” in IEEE 48 | Transactions on Biomedical Engineering, vol. 59, no. 4, p. 920-928, 2012. 49 | 50 | [8] A. Barachant, S. Bonnet, M. Congedo and C. Jutten, “Classification of 51 | covariance matrices using a Riemannian-based kernel for BCI applications“, 52 | in NeuroComputing, vol. 112, p. 172-178, 2013. 53 | 54 | 55 | Portions of this project are works of the United States Government and are not 56 | subject to domestic copyright protection under 17 USC Sec. 105. Those 57 | portions are released world-wide under the terms of the Creative Commons Zero 58 | 1.0 (CC0) license. 59 | 60 | Other portions of this project are subject to domestic copyright protection 61 | under 17 USC Sec. 105. Those portions are licensed under the Apache 2.0 62 | license. The complete text of the license governing this material is in 63 | the file labeled LICENSE.TXT that is a part of this project's official 64 | distribution. 
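For orientation (a summary of the script below, not part of the original
docstring): the epoched data ends up with shape (trials, 60 channels,
151 samples), is split roughly 50/25/25 percent into train/validation/test
sets, and is reshaped to NHWC before being fed to EEGNet-8,2; the script then
prints the classification accuracy of both EEGNet and the xDAWN + Riemannian
geometry pipeline and plots a confusion matrix for each.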
65 | """ 66 | 67 | import numpy as np 68 | 69 | # mne imports 70 | import mne 71 | from mne import io 72 | from mne.datasets import sample 73 | 74 | # EEGNet-specific imports 75 | from EEGModels import EEGNet 76 | from tensorflow.keras import utils as np_utils 77 | from tensorflow.keras.callbacks import ModelCheckpoint 78 | from tensorflow.keras import backend as K 79 | 80 | # PyRiemann imports 81 | from pyriemann.estimation import XdawnCovariances 82 | from pyriemann.tangentspace import TangentSpace 83 | from pyriemann.utils.viz import plot_confusion_matrix 84 | from sklearn.pipeline import make_pipeline 85 | from sklearn.linear_model import LogisticRegression 86 | 87 | # tools for plotting confusion matrices 88 | from matplotlib import pyplot as plt 89 | 90 | # while the default tensorflow ordering is 'channels_last' we set it here 91 | # to be explicit in case if the user has changed the default ordering 92 | K.set_image_data_format('channels_last') 93 | 94 | ##################### Process, filter and epoch the data ###################### 95 | data_path = sample.data_path() 96 | 97 | # Set parameters and read data 98 | raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' 99 | event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' 100 | tmin, tmax = -0., 1 101 | event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4) 102 | 103 | # Setup for reading the raw data 104 | raw = io.Raw(raw_fname, preload=True, verbose=False) 105 | raw.filter(2, None, method='iir') # replace baselining with high-pass 106 | events = mne.read_events(event_fname) 107 | 108 | raw.info['bads'] = ['MEG 2443'] # set bad channels 109 | picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, 110 | exclude='bads') 111 | 112 | # Read epochs 113 | epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False, 114 | picks=picks, baseline=None, preload=True, verbose=False) 115 | labels = epochs.events[:, -1] 116 | 117 | # extract raw data. scale by 1000 due to scaling sensitivity in deep learning 118 | X = epochs.get_data()*1000 # format is in (trials, channels, samples) 119 | y = labels 120 | 121 | kernels, chans, samples = 1, 60, 151 122 | 123 | # take 50/25/25 percent of the data to train/validate/test 124 | X_train = X[0:144,] 125 | Y_train = y[0:144] 126 | X_validate = X[144:216,] 127 | Y_validate = y[144:216] 128 | X_test = X[216:,] 129 | Y_test = y[216:] 130 | 131 | ############################# EEGNet portion ################################## 132 | 133 | # convert labels to one-hot encodings. 134 | Y_train = np_utils.to_categorical(Y_train-1) 135 | Y_validate = np_utils.to_categorical(Y_validate-1) 136 | Y_test = np_utils.to_categorical(Y_test-1) 137 | 138 | # convert data to NHWC (trials, channels, samples, kernels) format. Data 139 | # contains 60 channels and 151 time-points. Set the number of kernels to 1. 
140 | X_train = X_train.reshape(X_train.shape[0], chans, samples, kernels) 141 | X_validate = X_validate.reshape(X_validate.shape[0], chans, samples, kernels) 142 | X_test = X_test.reshape(X_test.shape[0], chans, samples, kernels) 143 | 144 | print('X_train shape:', X_train.shape) 145 | print(X_train.shape[0], 'train samples') 146 | print(X_test.shape[0], 'test samples') 147 | 148 | # configure the EEGNet-8,2,16 model with kernel length of 32 samples (other 149 | # model configurations may do better, but this is a good starting point) 150 | model = EEGNet(nb_classes = 4, Chans = chans, Samples = samples, 151 | dropoutRate = 0.5, kernLength = 32, F1 = 8, D = 2, F2 = 16, 152 | dropoutType = 'Dropout') 153 | 154 | # compile the model and set the optimizers 155 | model.compile(loss='categorical_crossentropy', optimizer='adam', 156 | metrics = ['accuracy']) 157 | 158 | # count number of parameters in the model 159 | numParams = model.count_params() 160 | 161 | # set a valid path for your system to record model checkpoints 162 | checkpointer = ModelCheckpoint(filepath='/tmp/checkpoint.h5', verbose=1, 163 | save_best_only=True) 164 | 165 | ############################################################################### 166 | # if the classification task was imbalanced (significantly more trials in one 167 | # class versus the others) you can assign a weight to each class during 168 | # optimization to balance it out. This data is approximately balanced so we 169 | # don't need to do this, but is shown here for illustration/completeness. 170 | ############################################################################### 171 | 172 | # the syntax is {class_1:weight_1, class_2:weight_2,...}. Here just setting 173 | # the weights all to be 1 174 | class_weights = {0:1, 1:1, 2:1, 3:1} 175 | 176 | ################################################################################ 177 | # fit the model. Due to very small sample sizes this can get 178 | # pretty noisy run-to-run, but most runs should be comparable to xDAWN + 179 | # Riemannian geometry classification (below) 180 | ################################################################################ 181 | fittedModel = model.fit(X_train, Y_train, batch_size = 16, epochs = 300, 182 | verbose = 2, validation_data=(X_validate, Y_validate), 183 | callbacks=[checkpointer], class_weight = class_weights) 184 | 185 | # load optimal weights 186 | model.load_weights('/tmp/checkpoint.h5') 187 | 188 | ############################################################################### 189 | # can alternatively used the weights provided in the repo. If so it should get 190 | # you 93% accuracy. Change the WEIGHTS_PATH variable to wherever it is on your 191 | # system. 192 | ############################################################################### 193 | 194 | # WEIGHTS_PATH = /path/to/EEGNet-8-2-weights.h5 195 | # model.load_weights(WEIGHTS_PATH) 196 | 197 | ############################################################################### 198 | # make prediction on test set. 
199 | ############################################################################### 200 | 201 | probs = model.predict(X_test) 202 | preds = probs.argmax(axis = -1) 203 | acc = np.mean(preds == Y_test.argmax(axis=-1)) 204 | print("Classification accuracy: %f " % (acc)) 205 | 206 | 207 | ############################# PyRiemann Portion ############################## 208 | 209 | # code is taken from PyRiemann's ERP sample script, which is decoding in 210 | # the tangent space with a logistic regression 211 | 212 | n_components = 2 # pick some components 213 | 214 | # set up sklearn pipeline 215 | clf = make_pipeline(XdawnCovariances(n_components), 216 | TangentSpace(metric='riemann'), 217 | LogisticRegression()) 218 | 219 | preds_rg = np.zeros(len(Y_test)) 220 | 221 | # reshape back to (trials, channels, samples) 222 | X_train = X_train.reshape(X_train.shape[0], chans, samples) 223 | X_test = X_test.reshape(X_test.shape[0], chans, samples) 224 | 225 | # train a classifier with xDAWN spatial filtering + Riemannian Geometry (RG) 226 | # labels need to be back in single-column format 227 | clf.fit(X_train, Y_train.argmax(axis = -1)) 228 | preds_rg = clf.predict(X_test) 229 | 230 | # Printing the results 231 | acc2 = np.mean(preds_rg == Y_test.argmax(axis = -1)) 232 | print("Classification accuracy: %f " % (acc2)) 233 | 234 | # plot the confusion matrices for both classifiers 235 | names = ['audio left', 'audio right', 'vis left', 'vis right'] 236 | plt.figure(0) 237 | plot_confusion_matrix(preds, Y_test.argmax(axis = -1), names, title = 'EEGNet-8,2') 238 | 239 | plt.figure(1) 240 | plot_confusion_matrix(preds_rg, Y_test.argmax(axis = -1), names, title = 'xDAWN + RG') 241 | 242 | 243 | 244 | -------------------------------------------------------------------------------- /EEGNET_guanwang_tf/examples/LICENSE: -------------------------------------------------------------------------------- 1 | Copyright © 2015, authors of pyRiemann 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without 5 | modification, are permitted provided that the following conditions are met: 6 | * Redistributions of source code must retain the above copyright 7 | notice, this list of conditions and the following disclaimer. 8 | * Redistributions in binary form must reproduce the above copyright 9 | notice, this list of conditions and the following disclaimer in the 10 | documentation and/or other materials provided with the distribution. 11 | * Neither the names of pyriemann authors nor the names of any 12 | contributors may be used to endorse or promote products derived from 13 | this software without specific prior written permission. 14 | 15 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 16 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 17 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 18 | DISCLAIMED. IN NO EVENT SHALL COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY 19 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 20 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 21 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 22 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 23 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 24 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
25 | -------------------------------------------------------------------------------- /EEGNet A Compact Convolutional Neural Network for EEG-based Brain-Computer Interfaces——2018.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet A Compact Convolutional Neural Network for EEG-based Brain-Computer Interfaces——2018.pdf -------------------------------------------------------------------------------- /EEGNet_Pytorch/EEGNet.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | 6 | 7 | 8 | class Conv2dWithConstraint(nn.Conv2d): 9 | def __init__(self, *args, max_norm=1, **kwargs): 10 | self.max_norm = max_norm 11 | super(Conv2dWithConstraint, self).__init__(*args, **kwargs) 12 | 13 | def forward(self, x): 14 | self.weight.data = torch.renorm( 15 | self.weight.data, p=2, dim=0, maxnorm=self.max_norm 16 | ) 17 | return super(Conv2dWithConstraint, self).forward(x) 18 | 19 | class EEGNet(nn.Module): 20 | def __init__(self, n_classes=4, channels=60, samples=151, 21 | dropoutRate=0.5, kernelLength=64, kernelLength2=16, 22 | F1=8, D=2, F2=16): 23 | super(EEGNet, self).__init__() 24 | self.F1 = F1 25 | self.F2 = F2 26 | self.D = D 27 | self.samples = samples 28 | self.n_classes = n_classes 29 | self.channels = channels 30 | self.kernelLength = kernelLength 31 | self.kernelLength2 = kernelLength2 32 | self.drop_out = dropoutRate 33 | 34 | block1 = nn.Sequential( 35 | # nn.ZeroPad2d([31, 32, 0, 0]), # Pads the input tensor boundaries with zero. [left, right, top, bottom] 36 | # input shape (1, C, T) 37 | nn.Conv2d( 38 | in_channels=1, 39 | out_channels=self.F1, # F1 40 | kernel_size=(1, self.kernelLength), # (1, half the sampling rate) 41 | stride=1, 42 | padding=(0, self.kernelLength//2), 43 | bias=False 44 | ), # output shape (F1, C, T) 45 | nn.BatchNorm2d(num_features=self.F1) 46 | # output shape (F1, C, T) 47 | ) 48 | 49 | block2 = nn.Sequential( 50 | # input shape (F1, C, T) 51 | # Conv2dWithConstraint(self.F1, self.F1 * self.D, (self.channels, 1), max_norm=1, stride=1, padding=(0, 0), 52 | # groups=self.F1, bias=False), 53 | nn.Conv2d( 54 | in_channels=self.F1, 55 | out_channels=self.F1*self.D, # D*F1 56 | kernel_size=(self.channels, 1), # (C, 1) 57 | groups=self.F1, # When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”. 
58 | bias=False 59 | ), # output shape (self.F1*self.D, 1, T) 60 | nn.BatchNorm2d(num_features=self.F1*self.D), 61 | nn.ELU(), 62 | nn.AvgPool2d( 63 | kernel_size=(1, 4), 64 | stride=4,), # output shape (self.F1*self.D, 1, T/4) 65 | nn.Dropout(p=self.drop_out) 66 | # output shape (self.F1*self.D, 1, T/4) 67 | ) 68 | 69 | block3 = nn.Sequential( 70 | # nn.ZeroPad2d((7, 8, 0, 0)), 71 | # input shape (self.F1*self.D, 1, T/4) 72 | nn.Conv2d( 73 | in_channels=self.F2, 74 | out_channels=self.F2, # F2 = D*F1 75 | kernel_size=(1, self.kernelLength2), 76 | stride=1, 77 | padding=(0, self.kernelLength2//2), 78 | groups=self.F1*self.D, 79 | bias=False 80 | ), # output shape (self.F2, 1, T/4) 81 | # input shape (self.F2, 1, T/4) 82 | nn.Conv2d( 83 | in_channels=self.F1*self.D, 84 | out_channels=self.F2, # F2 = D*F1 85 | kernel_size=(1, 1), 86 | stride=1, 87 | bias=False 88 | ), # output shape (self.F2, 1, T/4) 89 | nn.BatchNorm2d(num_features=self.F2), 90 | nn.ELU(), 91 | nn.AvgPool2d( 92 | kernel_size=(1, 8), 93 | stride=8), # output shape (self.F2, 1, T/4/8) 94 | nn.Dropout(p=self.drop_out) 95 | # output shape (self.F2, 1, T/32) 96 | ) 97 | 98 | self.EEGNetLayer = nn.Sequential(block1, block2, block3) 99 | 100 | self.ClassifierBlock = nn.Sequential(nn.Linear(in_features=self.F2*round(round(self.samples//4)//8), 101 | out_features=self.n_classes, 102 | bias=False), 103 | nn.Softmax(dim=1)) 104 | 105 | def forward(self, x): 106 | if len(x.shape) is not 4: 107 | x = torch.unsqueeze(x, 1) 108 | x = self.EEGNetLayer(x) 109 | x = x.view(x.size()[0], -1) # Flatten # [N, self.F2*1*T/32] 110 | x = self.ClassifierBlock(x) 111 | 112 | return x 113 | 114 | 115 | def main(): 116 | input = torch.randn(32, 1, 60, 1120) 117 | model = EEGNet(samples=1120) 118 | out = model(input) 119 | print('===============================================================') 120 | print('out', out) 121 | print('model', model) 122 | 123 | 124 | if __name__ == "__main__": 125 | main() -------------------------------------------------------------------------------- /EEGNet_Pytorch/LoadData.py: -------------------------------------------------------------------------------- 1 | import mne 2 | import os 3 | import glob 4 | import numpy as np 5 | import torch 6 | from sklearn.model_selection import StratifiedKFold 7 | from torch.utils.data import Dataset, DataLoader, TensorDataset 8 | 9 | 10 | class LoadData: 11 | def __init__(self,eeg_file_path: str): 12 | self.eeg_file_path = eeg_file_path 13 | 14 | def load_raw_data_gdf(self,file_to_load): 15 | self.raw_eeg_subject = mne.io.read_raw_gdf(self.eeg_file_path + '/' + file_to_load) 16 | return self 17 | 18 | def load_raw_data_mat(self,file_to_load): 19 | import scipy.io as sio 20 | self.raw_eeg_subject = sio.loadmat(self.eeg_file_path + '/' + file_to_load) 21 | 22 | def get_all_files(self,file_path_extension: str =''): 23 | if file_path_extension: 24 | return glob.glob(self.eeg_file_path+'/'+file_path_extension) 25 | return os.listdir(self.eeg_file_path) 26 | 27 | class LoadBCIC(LoadData): 28 | '''Subclass of LoadData for loading BCI Competition IV Dataset 2a''' 29 | def __init__(self, file_to_load, *args): 30 | self.stimcodes=('769','770','771','772') 31 | # self.epoched_data={} 32 | self.file_to_load = file_to_load 33 | self.channels_to_remove = ['EOG-left', 'EOG-central', 'EOG-right'] 34 | super(LoadBCIC,self).__init__(*args) 35 | 36 | def get_epochs(self, tmin=-0., tmax=2, baseline=None, downsampled=None): 37 | self.load_raw_data_gdf(self.file_to_load) 38 | raw_data = 
self.raw_eeg_subject 39 | if downsampled is not None: 40 | raw_data.resample(sfreq=downsampled) 41 | self.fs = raw_data.info.get('sfreq') 42 | events, event_ids = mne.events_from_annotations(raw_data) 43 | stims = [value for key, value in event_ids.items() if key in self.stimcodes] 44 | epochs = mne.Epochs(raw_data, events, event_id=stims, tmin=tmin, tmax=tmax, event_repeated='drop', 45 | baseline=baseline, preload=True, proj=False, reject_by_annotation=False) 46 | epochs = epochs.drop_channels(self.channels_to_remove) 47 | self.y_labels = epochs.events[:, -1] - min(epochs.events[:, -1]) 48 | self.x_data = epochs.get_data()*1e6 49 | eeg_data={'x_data':self.x_data[:,:,:-1], 50 | 'y_labels':self.y_labels, 51 | 'fs':self.fs} 52 | return eeg_data 53 | 54 | 55 | def cross_validate_sequential_split(x_data, y_labels, kfold): 56 | train_indices = {} 57 | eval_indices = {} 58 | skf = StratifiedKFold(n_splits=kfold, shuffle=False) 59 | i = 0 60 | for train_idx, eval_idx in skf.split(x_data, y_labels): 61 | train_indices.update({i: train_idx}) 62 | eval_indices.update({i: eval_idx}) 63 | i += 1 64 | return train_indices, eval_indices 65 | 66 | def split_xdata(eeg_data, train_idx, eval_idx): 67 | x_train=np.copy(eeg_data[train_idx,:,:]) 68 | x_eval=np.copy(eeg_data[eval_idx,:,:]) 69 | x_train = torch.from_numpy(x_train).to(torch.float32) 70 | x_eval = torch.from_numpy(x_eval).to(torch.float32) 71 | return x_train, x_eval 72 | 73 | def split_ydata(y_true, train_idx, eval_idx): 74 | y_train = np.copy(y_true[train_idx]) 75 | y_eval = np.copy(y_true[eval_idx]) 76 | y_train = torch.from_numpy(y_train) 77 | y_eval = torch.from_numpy(y_eval) 78 | return y_train, y_eval 79 | 80 | 81 | def BCICDataLoader(x_train, y_train, batch_size=64, num_workers=2, shuffle=True): 82 | 83 | data = TensorDataset(x_train, y_train) 84 | 85 | train_data = DataLoader(dataset=data, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers) 86 | 87 | return train_data -------------------------------------------------------------------------------- /EEGNet_Pytorch/README.md: -------------------------------------------------------------------------------- 1 | ### This is a PyTorch implementation of EEGNet, ported from the [original Keras version](https://github.com/vlawhern/arl-eegmodels). 2 | 3 | ### Dependencies 4 | PyTorch 1.6 (the newest release at the time of writing) and the mne package, inside an Anaconda virtual environment. They can be installed with the following commands. 
5 | 6 | ``` 7 | conda install pytorch torchvision cudatoolkit=10.2 -c pytorch 8 | pip install mne 9 | ``` 10 | -------------------------------------------------------------------------------- /EEGNet_Pytorch/__pycache__/EEGNet.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/__pycache__/EEGNet.cpython-37.pyc -------------------------------------------------------------------------------- /EEGNet_Pytorch/__pycache__/LoadData.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/__pycache__/LoadData.cpython-37.pyc -------------------------------------------------------------------------------- /EEGNet_Pytorch/__pycache__/model.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/__pycache__/model.cpython-37.pyc -------------------------------------------------------------------------------- /EEGNet_Pytorch/__pycache__/model.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/__pycache__/model.cpython-38.pyc -------------------------------------------------------------------------------- /EEGNet_Pytorch/__pycache__/utils.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/__pycache__/utils.cpython-37.pyc -------------------------------------------------------------------------------- /EEGNet_Pytorch/__pycache__/utils.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/__pycache__/utils.cpython-38.pyc -------------------------------------------------------------------------------- /EEGNet_Pytorch/best_model.pth: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/QQ7778/eegnet-pytorch/a5a1fcba7756c03a2b0918754c5bb9a64f57cf59/EEGNet_Pytorch/best_model.pth -------------------------------------------------------------------------------- /EEGNet_Pytorch/main.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 35, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import os\n", 10 | "import numpy as np\n", 11 | "import torch\n", 12 | "import torch.nn as nn\n", 13 | "from EEGNet import EEGNet\n", 14 | "from tqdm import tqdm\n", 15 | "import LoadData as loaddata\n", 16 | "from sklearn.model_selection import train_test_split" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": 36, 22 | "metadata": {}, 23 | "outputs": [], 24 | "source": [ 25 | "###============================ Sets the seed for random numbers ============================###\n", 26 | "torch.manual_seed(1234)\n", 27 | "np.random.seed(1234)" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": 37, 
33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "###============================ Use the GPU to train ============================###\n", 37 | "use_cuda = torch.cuda.is_available()\n", 38 | "device = torch.device('cuda:0' if use_cuda else 'cpu')" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": 38, 44 | "metadata": {}, 45 | "outputs": [], 46 | "source": [ 47 | "###============================ Setup for saving the model ============================###\n", 48 | "current_working_dir = os.getcwd()\n", 49 | "filename = current_working_dir + '/best_model.pth'" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 39, 55 | "metadata": {}, 56 | "outputs": [ 57 | { 58 | "name": "stdout", 59 | "output_type": "stream", 60 | "text": [ 61 | "Extracting EDF parameters from /home/pytorch/LiangXiaohan/MI_Dataverse/BCICIV_2a_gdf/A03T.gdf...\n", 62 | "GDF file detected\n", 63 | "Setting channel info structure...\n", 64 | "Could not determine channel type of the following channels, they will be set as EEG:\n", 65 | "EEG-Fz, EEG, EEG, EEG, EEG, EEG, EEG, EEG-C3, EEG, EEG-Cz, EEG, EEG-C4, EEG, EEG, EEG, EEG, EEG, EEG, EEG, EEG-Pz, EEG, EEG, EOG-left, EOG-central, EOG-right\n", 66 | "Creating raw.info structure...\n", 67 | "Used Annotations descriptions: ['1023', '1072', '276', '277', '32766', '768', '769', '770', '771', '772']\n", 68 | "Not setting metadata\n", 69 | "288 matching events found\n", 70 | "No baseline correction applied\n", 71 | "Loading data for 288 events and 251 original time points ...\n" 72 | ] 73 | }, 74 | { 75 | "name": "stderr", 76 | "output_type": "stream", 77 | "text": [ 78 | "/home/pytorch/anaconda3/envs/pytorch_env/lib/python3.7/site-packages/mne/io/edf/edf.py:1155: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead\n", 79 | " etmode = np.fromstring(etmode, UINT8).tolist()[0]\n", 80 | "/home/pytorch/anaconda3/envs/pytorch_env/lib/python3.7/contextlib.py:119: RuntimeWarning: Channel names are not unique, found duplicates for: {'EEG'}. 
Applying running numbers for duplicates.\n", 81 | " next(self.gen)\n" 82 | ] 83 | }, 84 | { 85 | "name": "stdout", 86 | "output_type": "stream", 87 | "text": [ 88 | "0 bad epochs dropped\n" 89 | ] 90 | } 91 | ], 92 | "source": [ 93 | "###============================ Load data ============================###\n", 94 | "data_path = \"/home/pytorch/LiangXiaohan/MI_Dataverse/BCICIV_2a_gdf\"\n", 95 | "persion_data = 'A03T.gdf'\n", 96 | "\n", 97 | "'''for BCIC Dataset'''\n", 98 | "bcic_data = loaddata.LoadBCIC(persion_data, data_path)\n", 99 | "eeg_data = bcic_data.get_epochs(tmin=-0., tmax=1, baseline=None, downsampled=None) # {'x_data':, 'y_labels':, 'fs':}\n", 100 | "\n", 101 | "X = eeg_data.get('x_data')\n", 102 | "Y = eeg_data.get('y_labels')" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": 40, 108 | "metadata": {}, 109 | "outputs": [], 110 | "source": [ 111 | "###============================ Initialization parameters ============================###\n", 112 | "chans = X.shape[1]\n", 113 | "samples = X.shape[2]\n", 114 | "kernelLength = 64\n", 115 | "kernelLength2 = 16\n", 116 | "F1 = 8\n", 117 | "D = 2\n", 118 | "F2 = F1 * D\n", 119 | "dropoutRate = 0.5\n", 120 | "n_classes = 4\n", 121 | "training_epochs = 5\n", 122 | "batch_size = 32\n", 123 | "kfolds = 10\n", 124 | "test_size = 0.2\n", 125 | "best_acc = 0" 126 | ] 127 | }, 128 | { 129 | "cell_type": "code", 130 | "execution_count": 41, 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [ 134 | "###============================ Split data & Cross validate ============================###\n", 135 | "X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=0, shuffle=True)\n", 136 | "\n", 137 | "train_indices, eval_indices = loaddata.cross_validate_sequential_split(X_train, Y_train, kfold=kfolds)" 138 | ] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "execution_count": 42, 143 | "metadata": {}, 144 | "outputs": [ 145 | { 146 | "name": "stdout", 147 | "output_type": "stream", 148 | "text": [ 149 | "EEGNet(\n", 150 | " (EEGNetLayer): Sequential(\n", 151 | " (0): Sequential(\n", 152 | " (0): Conv2d(1, 8, kernel_size=(1, 64), stride=(1, 1), padding=(0, 32), bias=False)\n", 153 | " (1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", 154 | " )\n", 155 | " (1): Sequential(\n", 156 | " (0): Conv2d(8, 16, kernel_size=(22, 1), stride=(1, 1), groups=8, bias=False)\n", 157 | " (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", 158 | " (2): ELU(alpha=1.0)\n", 159 | " (3): AvgPool2d(kernel_size=(1, 4), stride=4, padding=0)\n", 160 | " (4): Dropout(p=0.5, inplace=False)\n", 161 | " )\n", 162 | " (2): Sequential(\n", 163 | " (0): Conv2d(16, 16, kernel_size=(1, 16), stride=(1, 1), padding=(0, 8), groups=16, bias=False)\n", 164 | " (1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)\n", 165 | " (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n", 166 | " (3): ELU(alpha=1.0)\n", 167 | " (4): AvgPool2d(kernel_size=(1, 8), stride=8, padding=0)\n", 168 | " (5): Dropout(p=0.5, inplace=False)\n", 169 | " )\n", 170 | " )\n", 171 | " (ClassifierBlock): Sequential(\n", 172 | " (0): Linear(in_features=128, out_features=4, bias=False)\n", 173 | " (1): Softmax(dim=1)\n", 174 | " )\n", 175 | ")\n" 176 | ] 177 | } 178 | ], 179 | "source": [ 180 | "###============================ Initialization model ============================###\n", 181 | "eegnet = EEGNet(n_classes, chans, 
samples, dropoutRate, kernelLength, kernelLength2, F1, D, F2)\n", 182 | "eegnet = eegnet.to(device)\n", 183 | "print(eegnet)\n", 184 | "optimizer = torch.optim.Adam(eegnet.parameters(), lr=0.001, betas=(0.9, 0.999))\n", 185 | "criterion = nn.CrossEntropyLoss().to(device)" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": 43, 191 | "metadata": {}, 192 | "outputs": [ 193 | { 194 | "name": "stderr", 195 | "output_type": "stream", 196 | "text": [ 197 | "100%|██████████| 7/7 [00:00<00:00, 30.13it/s]\n", 198 | "100%|██████████| 7/7 [00:00<00:00, 32.81it/s]\n", 199 | "100%|██████████| 7/7 [00:00<00:00, 30.57it/s]\n", 200 | "100%|██████████| 7/7 [00:00<00:00, 27.11it/s]\n", 201 | "100%|██████████| 7/7 [00:00<00:00, 31.41it/s]\n", 202 | "100%|██████████| 7/7 [00:00<00:00, 32.57it/s]\n", 203 | "100%|██████████| 7/7 [00:00<00:00, 33.66it/s]\n", 204 | "100%|██████████| 7/7 [00:00<00:00, 32.32it/s]\n", 205 | "100%|██████████| 7/7 [00:00<00:00, 31.76it/s]\n", 206 | "100%|██████████| 7/7 [00:00<00:00, 31.44it/s]\n" 207 | ] 208 | }, 209 | { 210 | "name": "stdout", 211 | "output_type": "stream", 212 | "text": [ 213 | "Eval : Epoch : 1 - epoch_val_acc: 0.4652\n", 214 | "\n", 215 | "best_acc=0.4652\n", 216 | "\n" 217 | ] 218 | }, 219 | { 220 | "name": "stderr", 221 | "output_type": "stream", 222 | "text": [ 223 | "100%|██████████| 7/7 [00:00<00:00, 32.56it/s]\n", 224 | "100%|██████████| 7/7 [00:00<00:00, 32.01it/s]\n", 225 | "100%|██████████| 7/7 [00:00<00:00, 33.66it/s]\n", 226 | "100%|██████████| 7/7 [00:00<00:00, 32.29it/s]\n", 227 | "100%|██████████| 7/7 [00:00<00:00, 33.55it/s]\n", 228 | "100%|██████████| 7/7 [00:00<00:00, 32.68it/s]\n", 229 | "100%|██████████| 7/7 [00:00<00:00, 32.51it/s]\n", 230 | "100%|██████████| 7/7 [00:00<00:00, 30.11it/s]\n", 231 | "100%|██████████| 7/7 [00:00<00:00, 31.16it/s]\n", 232 | "100%|██████████| 7/7 [00:00<00:00, 29.96it/s]\n" 233 | ] 234 | }, 235 | { 236 | "name": "stdout", 237 | "output_type": "stream", 238 | "text": [ 239 | "Eval : Epoch : 2 - epoch_val_acc: 0.6304\n", 240 | "\n", 241 | "best_acc=0.6304\n", 242 | "\n" 243 | ] 244 | }, 245 | { 246 | "name": "stderr", 247 | "output_type": "stream", 248 | "text": [ 249 | "100%|██████████| 7/7 [00:00<00:00, 31.81it/s]\n", 250 | "100%|██████████| 7/7 [00:00<00:00, 31.44it/s]\n", 251 | "100%|██████████| 7/7 [00:00<00:00, 31.50it/s]\n", 252 | "100%|██████████| 7/7 [00:00<00:00, 31.39it/s]\n", 253 | "100%|██████████| 7/7 [00:00<00:00, 31.70it/s]\n", 254 | "100%|██████████| 7/7 [00:00<00:00, 32.35it/s]\n", 255 | "100%|██████████| 7/7 [00:00<00:00, 31.81it/s]\n", 256 | "100%|██████████| 7/7 [00:00<00:00, 31.80it/s]\n", 257 | "100%|██████████| 7/7 [00:00<00:00, 31.36it/s]\n", 258 | "100%|██████████| 7/7 [00:00<00:00, 30.34it/s]\n" 259 | ] 260 | }, 261 | { 262 | "name": "stdout", 263 | "output_type": "stream", 264 | "text": [ 265 | "Eval : Epoch : 3 - epoch_val_acc: 0.7043\n", 266 | "\n", 267 | "best_acc=0.7043\n", 268 | "\n" 269 | ] 270 | }, 271 | { 272 | "name": "stderr", 273 | "output_type": "stream", 274 | "text": [ 275 | "100%|██████████| 7/7 [00:00<00:00, 29.02it/s]\n", 276 | "100%|██████████| 7/7 [00:00<00:00, 33.61it/s]\n", 277 | "100%|██████████| 7/7 [00:00<00:00, 33.11it/s]\n", 278 | "100%|██████████| 7/7 [00:00<00:00, 32.35it/s]\n", 279 | "100%|██████████| 7/7 [00:00<00:00, 32.78it/s]\n", 280 | "100%|██████████| 7/7 [00:00<00:00, 31.74it/s]\n", 281 | "100%|██████████| 7/7 [00:00<00:00, 32.50it/s]\n", 282 | "100%|██████████| 7/7 [00:00<00:00, 32.26it/s]\n", 283 | "100%|██████████| 7/7 
[00:00<00:00, 32.89it/s]\n", 284 | "100%|██████████| 7/7 [00:00<00:00, 32.36it/s]\n" 285 | ] 286 | }, 287 | { 288 | "name": "stdout", 289 | "output_type": "stream", 290 | "text": [ 291 | "Eval : Epoch : 4 - epoch_val_acc: 0.7130\n", 292 | "\n", 293 | "best_acc=0.7130\n", 294 | "\n" 295 | ] 296 | }, 297 | { 298 | "name": "stderr", 299 | "output_type": "stream", 300 | "text": [ 301 | "100%|██████████| 7/7 [00:00<00:00, 31.70it/s]\n", 302 | "100%|██████████| 7/7 [00:00<00:00, 32.75it/s]\n", 303 | "100%|██████████| 7/7 [00:00<00:00, 32.62it/s]\n", 304 | "100%|██████████| 7/7 [00:00<00:00, 33.25it/s]\n", 305 | "100%|██████████| 7/7 [00:00<00:00, 29.35it/s]\n", 306 | "100%|██████████| 7/7 [00:00<00:00, 31.27it/s]\n", 307 | "100%|██████████| 7/7 [00:00<00:00, 32.03it/s]\n", 308 | "100%|██████████| 7/7 [00:00<00:00, 33.06it/s]\n", 309 | "100%|██████████| 7/7 [00:00<00:00, 33.41it/s]\n", 310 | "100%|██████████| 7/7 [00:00<00:00, 33.16it/s]" 311 | ] 312 | }, 313 | { 314 | "name": "stdout", 315 | "output_type": "stream", 316 | "text": [ 317 | "Eval : Epoch : 5 - epoch_val_acc: 0.7609\n", 318 | "\n", 319 | "best_acc=0.7609\n", 320 | "\n" 321 | ] 322 | }, 323 | { 324 | "name": "stderr", 325 | "output_type": "stream", 326 | "text": [ 327 | "\n" 328 | ] 329 | } 330 | ], 331 | "source": [ 332 | "###============================ Train model ============================###\n", 333 | "for iter in range(0, training_epochs):\n", 334 | " epoch_val_acc = 0\n", 335 | "\n", 336 | " for kfold in range(kfolds):\n", 337 | " train_idx = train_indices.get(kfold)\n", 338 | " eval_idx = eval_indices.get(kfold)\n", 339 | " # print(f'epoch {str(iter)}, Fold {str(kfold+1)}\\n')\n", 340 | " x_train, x_eval = loaddata.split_xdata(X_train, train_idx, eval_idx)\n", 341 | " y_train, y_eval = loaddata.split_ydata(Y_train, train_idx, eval_idx)\n", 342 | "\n", 343 | " train_data = loaddata.BCICDataLoader(x_train, y_train, batch_size=batch_size)\n", 344 | "\n", 345 | " running_loss = 0\n", 346 | " running_accuracy = 0\n", 347 | "\n", 348 | " # create a minibatch\n", 349 | " eegnet.train()\n", 350 | " for inputs, target in tqdm(train_data):\n", 351 | " inputs = inputs.to(device).requires_grad_() # ([batch_size, chans, samples])\n", 352 | " target = target.to(device)\n", 353 | "\n", 354 | " output = eegnet(inputs)\n", 355 | "\n", 356 | " # update the weights\n", 357 | " loss = criterion(output, target)\n", 358 | "\n", 359 | " optimizer.zero_grad()\n", 360 | " loss.backward()\n", 361 | " optimizer.step()\n", 362 | "\n", 363 | " acc = (output.argmax(dim=1) == target).float().mean()\n", 364 | " running_accuracy += acc / len(train_data)\n", 365 | " running_loss += loss.detach().item() / len(train_data)\n", 366 | "\n", 367 | " # print(f\"Train : Epoch : {iter} - kfold : {kfold+1} - acc: {running_accuracy:.4f} - loss : {running_loss:.4f}\\n\")\n", 368 | "\n", 369 | " # validation\n", 370 | " eegnet.eval()\n", 371 | " inputs = x_eval.to(device).requires_grad_()\n", 372 | " val_probs = eegnet(inputs)\n", 373 | " val_acc = (val_probs.argmax(dim=1) == y_eval.to(device)).float().mean()\n", 374 | " # print(f\"Eval : Epoch : {iter} - kfold : {kfold+1} - acc: {val_acc:.4f}\\n\")\n", 375 | " epoch_val_acc += val_acc\n", 376 | "\n", 377 | " epoch_val_acc = epoch_val_acc/kfolds\n", 378 | " print(f\"Eval : Epoch : {iter+1} - epoch_val_acc: {epoch_val_acc:.4f}\\n\")\n", 379 | "\n", 380 | " if epoch_val_acc > best_acc:\n", 381 | " torch.save({\n", 382 | " 'model_state_dict': eegnet.state_dict(),\n", 383 | " 'optimizer_state_dict': 
optimizer.state_dict(),\n", 384 | " }, filename)\n", 385 | " best_acc = epoch_val_acc\n", 386 | " print(f\"best_acc={best_acc:.4f}\\n\")" 387 | ] 388 | }, 389 | { 390 | "cell_type": "code", 391 | "execution_count": 44, 392 | "metadata": {}, 393 | "outputs": [ 394 | { 395 | "name": "stdout", 396 | "output_type": "stream", 397 | "text": [ 398 | "Classification accuracy: 0.793103 \n" 399 | ] 400 | } 401 | ], 402 | "source": [ 403 | "###============================ Test model ============================###\n", 404 | "X_test = torch.from_numpy(X_test).to(device).to(torch.float32).requires_grad_()\n", 405 | "Y_test = torch.from_numpy(Y_test).to(device)\n", 406 | "\n", 407 | "checkpoint = torch.load(filename)\n", 408 | "eegnet.load_state_dict(checkpoint['model_state_dict'])\n", 409 | "optimizer.load_state_dict((checkpoint['optimizer_state_dict']))\n", 410 | "eegnet.eval()\n", 411 | "probs = eegnet(X_test)\n", 412 | "acc = (probs.argmax(dim=1) == Y_test).float().mean()\n", 413 | "print(\"Classification accuracy: %f \" % (acc))" 414 | ] 415 | }, 416 | { 417 | "cell_type": "code", 418 | "execution_count": 45, 419 | "metadata": {}, 420 | "outputs": [ 421 | { 422 | "data": { 423 | "text/plain": [ 424 | "Text(0.5, 1.0, 'Confusion Matrix')" 425 | ] 426 | }, 427 | "execution_count": 45, 428 | "metadata": {}, 429 | "output_type": "execute_result" 430 | }, 431 | { 432 | "data": { 433 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAhoAAAHFCAYAAAC0OVBBAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8qNh9FAAAACXBIWXMAAA9hAAAPYQGoP6dpAABQz0lEQVR4nO3de1wUVf8H8M+AsFwXBUVAQbwr3kARw0tiaoZm2k1LUzBF826aopGCliJWhmli+hRQmdovL1mZaSlm3hIVr4hhGGggXkFAkcv5/eHDPq6AsewuO+x+3r3m9WLOzDnznZF2v5xzZkYSQggQERER6YGZoQMgIiIi48VEg4iIiPSGiQYRERHpDRMNIiIi0hsmGkRERKQ3TDSIiIhIb5hoEBERkd4w0SAiIiK9YaJBREREesNEg6gaTp06hTFjxqBp06awsrKCnZ0dOnfujGXLluHmzZt6PfaJEyfQu3dvODg4QJIkREdH6/wYkiQhIiJC5+3+m7i4OEiSBEmSkJCQUG67EAItWrSAJEkICAio1jFWr16NuLg4jeokJCRUGhMRPV4dQwdAVNusW7cOkyZNQuvWrTF79mx4eXmhqKgIiYmJWLNmDQ4dOoStW7fq7fivv/468vPzsXHjRtSrVw+enp46P8ahQ4fQuHFjnbdbVfb29vjss8/KJRP79u3DxYsXYW9vX+22V69ejfr16yM4OLjKdTp37oxDhw7By8ur2sclMlVMNIg0cOjQIUycOBH9+/fHtm3boFAoVNv69++PWbNmYefOnXqN4cyZMwgJCUFgYKDejvHEE0/ore2qGD58ONavX49PPvkESqVSVf7ZZ5/B398fubm5NRJHUVERJEmCUqk0+DUhqq04dEKkgSVLlkCSJKxdu1YtyShjaWmJ5557TrVeWlqKZcuWoU2bNlAoFHB2dsbo0aNx+fJltXoBAQFo3749jh49il69esHGxgbNmjXD0qVLUVpaCuB/wwrFxcWIiYlRDTEAQEREhOrnh5XVuXTpkqpsz549CAgIgJOTE6ytreHh4YEXX3wRBQUFqn0qGjo5c+YMhgwZgnr16sHKygre3t6Ij49X26dsiGHDhg0ICwuDm5sblEol+vXrh5SUlKpdZACvvvoqAGDDhg2qspycHGzevBmvv/56hXUWLlyIbt26wdHREUqlEp07d8Znn32Gh98b6enpibNnz2Lfvn2q61fWI1QW+5dffolZs2ahUaNGUCgUSE1NLTd0cv36dbi7u6N79+4oKipStX/u3DnY2tpi1KhRVT5XImPHRIOoikpKSrBnzx506dIF7u7uVaozceJEhIaGon///ti+fTveffdd7Ny5E927d8f169fV9s3KysLIkSPx2muvYfv27QgMDMS8efPw1VdfAQAGDRqEQ4cOAQBeeuklHDp0SLVeVZcuXcKgQYNgaWmJzz//HDt37sTSpUtha2uL+/fvV1ovJSUF3bt3x9mzZ/Hxxx9jy5Yt8PLyQnBwMJYtW1Zu/7fffht///03/vOf/2Dt2rX4888/MXjwYJSUlFQpTqVSiZdeegmff/65qmzDhg0wMzPD8OHDKz23CRMm4JtvvsGWLVvwwgsvYOrUqXj33XdV+2zduhXNmjWDj4+P6vo9Osw1b948pKenY82aNfj+++/h7Oxc7lj169fHxo0bcfToUYSGhgIACgoK8PLLL8PDwwNr1qyp0nkSmQRBRFWSlZUlAIhXXnmlSvsnJycLAGLSpElq5UeOHBEAxNtvv60q6927twAgjhw5oravl5eXGDBggFoZADF58mS1svDwcFHR/86xsbECgEhLSxNCCPHtt98KACIpKemxsQMQ4eHhqvVXXnlFKBQKkZ6errZfYGCgsLGxEbdv3xZCCLF3714BQAwcOFBtv2+++UYAEIcOHXrsccviPXr0qKqtM2fOCCGE6Nq1qwgODhZCCNGuXTvRu3fvStspKSkRRUVFYtGiRcLJyUmUlpaqtlVWt+x4Tz75ZKXb9u7dq1YeFRUlAIitW7eKoKAgYW1tLU6dOvXYcyQyNezRINKTvXv3AkC5SYd+fn5o27Ytfv31V7VyFxcX+Pn5qZV17NgRf//9t85i8vb2hqWlJcaPH4/4+Hj89ddfVaq3Z88e9O3bt1xPTnBwMAoK
Csr1rDw8fAQ8OA8AGp1L79690bx5c3z++ec4ffo0jh49WumwSVmM/fr1g4ODA8zNzWFhYYEFCxbgxo0byM7OrvJxX3zxxSrvO3v2bAwaNAivvvoq4uPjsXLlSnTo0KHK9YlMARMNoiqqX78+bGxskJaWVqX9b9y4AQBwdXUtt83NzU21vYyTk1O5/RQKBe7evVuNaCvWvHlz/PLLL3B2dsbkyZPRvHlzNG/eHCtWrHhsvRs3blR6HmXbH/bouZTNZ9HkXCRJwpgxY/DVV19hzZo1aNWqFXr16lXhvn/88QeefvppAA/uCjpw4ACOHj2KsLAwjY9b0Xk+Lsbg4GDcu3cPLi4unJtBVAEmGkRVZG5ujr59++LYsWPlJnNWpOzLNjMzs9y2f/75B/Xr19dZbFZWVgCAwsJCtfJH54EAQK9evfD9998jJycHhw8fhr+/P2bMmIGNGzdW2r6Tk1Ol5wFAp+fysODgYFy/fh1r1qzBmDFjKt1v48aNsLCwwA8//IBhw4ahe/fu8PX1rdYxK5pUW5nMzExMnjwZ3t7euHHjBt56661qHZPImDHRINLAvHnzIIRASEhIhZMni4qK8P333wMAnnrqKQBQTeYsc/ToUSQnJ6Nv3746i6vszolTp06plZfFUhFzc3N069YNn3zyCQDg+PHjle7bt29f7NmzR5VYlPniiy9gY2Ojt1s/GzVqhNmzZ2Pw4MEICgqqdD9JklCnTh2Ym5uryu7evYsvv/yy3L666iUqKSnBq6++CkmS8NNPPyEyMhIrV67Eli1btG6byJjwORpEGvD390dMTAwmTZqELl26YOLEiWjXrh2Kiopw4sQJrF27Fu3bt8fgwYPRunVrjB8/HitXroSZmRkCAwNx6dIlzJ8/H+7u7njzzTd1FtfAgQPh6OiIsWPHYtGiRahTpw7i4uKQkZGhtt+aNWuwZ88eDBo0CB4eHrh3757qzo5+/fpV2n54eDh++OEH9OnTBwsWLICjoyPWr1+PH3/8EcuWLYODg4POzuVRS5cu/dd9Bg0ahOXLl2PEiBEYP348bty4gQ8++KDCW5A7dOiAjRs3YtOmTWjWrBmsrKyqNa8iPDwc+/fvx65du+Di4oJZs2Zh3759GDt2LHx8fNC0aVON2yQyRkw0iDQUEhICPz8/fPTRR4iKikJWVhYsLCzQqlUrjBgxAlOmTFHtGxMTg+bNm+Ozzz7DJ598AgcHBzzzzDOIjIyscE5GdSmVSuzcuRMzZszAa6+9hrp162LcuHEIDAzEuHHjVPt5e3tj165dCA8PR1ZWFuzs7NC+fXts375dNcehIq1bt8bBgwfx9ttvY/Lkybh79y7atm2L2NhYjZ6wqS9PPfUUPv/8c0RFRWHw4MFo1KgRQkJC4OzsjLFjx6rtu3DhQmRmZiIkJAR37txBkyZN1J4zUhW7d+9GZGQk5s+fr9YzFRcXBx8fHwwfPhy///47LC0tdXF6RLWaJMRDT7MhIiIi0iHO0SAiIiK9YaJBREREesNEg4iIiPSGiQYRERHpDRMNIiIi0hsmGkRERKQ3fI6GHpWWluKff/6Bvb29Ro81JiIieRBC4M6dO3Bzc4OZmf7+Nr93716FTxvWlKWlpeqVBHLBREOP/vnnn3JvuyQiotonIyMDjRs31kvb9+7dg5O1DQqg/WOtXFxckJaWJqtkg4mGHtnb2wMAdro0gq0eM2H6n/Y71hs6BCK9MmvoaegQTErunTtwb9VO9XmuD/fv30cBBEbCFpaofu/3fQisz8rC/fv3mWiYirLhElszM9gx0agRSns7Q4dApFdmSqWhQzBJNTH8bQVJq0RDrt8yTDSIiIhkwAwSzLRIaMxk+kIRJhpEREQyYAbteiXk2qMh17iIiIjICLBHg4iISAYkCTDTYiqIBEAHN67oHBMNIiIiGeDQCREREZGG2KNBREQkA2aSlnedABw6ISIioopx6ISIiIhIQ+zRICIikgEzLe86kWvPARMNIiIiGeDQCREREZGG2KNBREQkA5IkafXyNv2/9q16mGgQERHJgLEOnTDRICIikgFjnQwq17iIiIjICLBHg4iISAYkaPfXv1znaLBHg4iISAbKHkGuzaKJ3377DYMHD4abmxskScK2bdsq3XfChAmQJAnR0dGan5fGNYiIiKjWy8/PR6dOnbBq1arH7rdt2zYcOXIEbm5u1ToOh06IiIhkoKbvOgkMDERgYOBj97ly5QqmTJmCn3/+GYMGDapWXEw0iIiIZEBud52UlpZi1KhRmD17Ntq1a1ftdphoEBERGZHc3Fy1dYVCAYVCoXE7UVFRqFOnDqZNm6ZVPJyjQUREJANmOlgAwN3dHQ4ODqolMjJS41iOHTuGFStWIC4uTqunlQLs0SAiIpIFM0gw0+Im1bJEIyMjA0qlUlVend6M/fv3Izs7Gx4eHqqykpISzJo1C9HR0bh06VKV22KiQUREZESUSqVaolEdo0aNQr9+/dTKBgwYgFGjRmHMmDEatcVEg4iISAZqejJoXl4eUlNTVetpaWlISkqCo6MjPDw84OTkpLa/hYUFXFxc0Lp1a42Ow0SDiIhIBmr69tbExET06dNHtT5z5kwAQFBQEOLi4rSIRB0TDSIiIhmo6R6NgIAACCGqvL8m8zIexrtOiIiISG/Yo0FERCQDD16qVv0uDQlV752oSUw0iIiIZEBuTwbVFbnGRUREREaAPRpEREQyUNN3ndQUJhpEREQywKETIiIiIg2xR4OIiEgGtH/XiXYvP9MXJhpEREQywKETIiIiIg2ZRKIREBCAGTNmVHn/bdu2oUWLFjA3N9eoXm1j69sZTWM+Rrv9u+GdchIOffuobXfo3xfN/hOD9ocT4J1yEtZtNHuRDj3ezpivsHToeLzZ8RnM6ToEayaE4epf6YYOy6jxmhtGwtp4hHl1xxTHFljSYyD+PHDE0CHJkqSDRY5MItHQ1IQJE/DSSy8hIyMD7777LoKDgzF06FBDh6VzZjbWuJuSgsuLlla6Pf9EEv75YEUNR2YaUo+cRO/Xnsfsb2Mw7YsPUVpSgpVBb6Gw4K6hQzNavOY1L/Hb7fi/OQsROGcqwg7+hBbd/bDq+dG4mXHF0KHJTtnQiTaLHHGOxiPy8vKQnZ2NAQMGwM3NzdDh6NWd3w7gzm8HKt1+67sfAACWjYz7OhjKlLj31dZHRc1FqN8QpJ+5gJZ+nQwUlXHjNa95v6xchx5Bw9Ez+FUAwLD3I3Du133Yt+5LPL9oroGjkxdjnQxqcj0a9+/fx5w5c9CoUSPY2tqiW7duSEhIAAAkJCTA3t4eAPDUU09BkiQEBAQgPj4e3333HSRJgiRJqv2JdOnunTwAgK2DvYEjMR285vpVfP8+0k+cRtu+T6qVt33qSfx1JNFAUVFNM7kejTFjxuDSpUvYuHEj3NzcsHXrVjzzzDM4ffo0unfvjpSUFLRu3RqbN29G9+7dYWNjg5CQEOTm5iI2NhYA4OjoWGHbhYWFKCwsVK3n5ubWyDlR7SeEwOYln6C5bwe4tW5m6HBMAq+5/uXduInSkhIonRu
olSsb1kfuL9cMFJV8GetdJyaVaFy8eBEbNmzA5cuXVcMib731Fnbu3InY2FgsWbIEzs7OAB4kEy4uLgAAa2trFBYWqtYrExkZiYULF+r3JMgobYqIxpXzf2HWppWGDsVk8JrXHElS//YUQgCSPLv5DenB21u1qy9HJpVoHD9+HEIItGrVSq28sLAQTk5OWrc/b948zJw5U7Wem5sLd3d3rdsl47YpIhqnfjmAmRtXop6rs6HDMQm85jXDzskRZubmyLmarVZ+J/sGlM71DRQV1TSTSjRKS0thbm6OY8eOwdzcXG2bnZ2d1u0rFAooFAqt2yHTIITANwtXIGnXfry5fgXqu7saOiSjx2tes+pYWsLDpwOS9+yHz3OBqvLkvfvRadDTBoxMnrS9RZU9GjLg4+ODkpISZGdno1evXlWuZ2lpiZKSEj1GZhhmNtZQeHio1i0bN4J1m9YozslBUWYWzB2UsHR1RZ3/jq8qmnoCAIquX0fx9RuGCNmobAz/CInbf8WETxdDYWeNnGsPrqm1vR0srZiw6gOvec3rNzUEseNmoIlPRzTr1gX7P1+PWxlX8OS41wwdmuyYSRLMtBhSkutdJyaVaLRq1QojR47E6NGj8eGHH8LHxwfXr1/Hnj170KFDBwwcOLDCep6envj555+RkpICJycnODg4wMLCooaj1z2b9u3Q4svPVOuN3p4NALi55Tukz1sAh6cC4LH0XdV2z+hlAICslTHIWrWmZoM1QvvXfwcAiB4xXa18VNRc+L8UWFEV0hKvec3zfek55N28hR+XrkBuVjbcvFpjypZ4OHk0NnRoVENMKtEAgNjYWLz33nuYNWsWrly5AicnJ/j7+1eaZABASEgIEhIS4Ovri7y8POzduxcBAQE1F7Se5P2RiKTWlT874ObW7bi5dXsNRmRaVl/cZ+gQTA6vuWEEjA9CwPggQ4che8Y6dCIJIYShgzBWubm5cHBwwH43d9iZyfXGI+PScd9WQ4dApFdmLrwVtybl5ubCwdUDOTk5UCqV+juGgwPiHOrDRqr+d0WBKEVwznW9xlod/PYjIiIivTG5oRMiIiI5MtahEyYaREREMlD2motq15dpqsFEg4iISAaMtUeDczSIiIhIb9ijQUREJANm0O6vf7n2HDDRICIikgFJ0u5dcxw6ISIiIpPDHg0iIiIZkP77nzb15YiJBhERkQzwrhMiIiIiDbFHg4iISAaMtUeDiQYREZEMmAEw0yJbMJPpK1I5dEJERER6wx4NIiIiGeBdJ0RERKRX8kwVtMNEg4iISAa0fjKoTLMUztEgIiIyQb/99hsGDx4MNzc3SJKEbdu2qbYVFRUhNDQUHTp0gK2tLdzc3DB69Gj8888/Gh+HiQYREZEMSDpYNJGfn49OnTph1apV5bYVFBTg+PHjmD9/Po4fP44tW7bgwoULeO655zQ+Lw6dEBERyYAZJJhpMUtD07qBgYEIDAyscJuDgwN2796tVrZy5Ur4+fkhPT0dHh4eVT4OEw0iIiIjkpubq7auUCigUCi0bjcnJweSJKFu3boa1ePQCRERkQzoaujE3d0dDg4OqiUyMlLr2O7du4e5c+dixIgRUCqVGtVljwYREZEM6Oquk4yMDLVkQNvejKKiIrzyyisoLS3F6tWrNa7PRIOIiMiIKJVKjXsdKlNUVIRhw4YhLS0Ne/bsqVa7TDSIiIhkQG4vVStLMv7880/s3bsXTk5O1WqHiQYREZEM1PQjyPPy8pCamqpaT0tLQ1JSEhwdHeHm5oaXXnoJx48fxw8//ICSkhJkZWUBABwdHWFpaVnl4zDRICIiMkGJiYno06ePan3mzJkAgKCgIERERGD79u0AAG9vb7V6e/fuRUBAQJWPw0SDiIhIBswkLV8Tr2HdgIAACFH5u+Uft00TTDSIiIhkQG5zNHSFiQYREZEMGGuiwQd2ERERkd6wR4OIiEgGavquk5rCRIOIiEgGdPVkULnh0AkRERHpDXs0iIiIZMAM2v31L9eeAyYaREREMsC7ToiIiIg0xB4NIiIiOZAkSEY4G5SJBhERkQwY69AJE40a0H7Heijt7Qwdhkko3fwfQ4dgcsyD5hg6BJMi8m4bOgSTIvLvGDqEWo+JBhERkQywR4OIiIj0RtJyjoZW8zv0iIkGERGRDNT0a+JrCm9vJSIiIr1hjwYREZEMSGYSJC26JfhSNSIiIqoUX6pGREREpCH2aBAREcmAsfZoMNEgIiKSAWO9vZVDJ0RERKQ37NEgIiKSAQ6dEBERkd5w6ISIiIhIQ+zRICIikgEOnRAREZHemEkSzLTIFrSpq09MNIiIiGTAWHs0OEeDiIiI9IY9GkRERDIgQcu7TvhSNSIiIqqMZPZgqXZ9obtYdIlDJ0RERKQ37NEgIiKSAy0f2CXX2aBMNIiIiGSAd50QERERaYg9GkRERDLwoEdDm3ed6DAYHWKiQUREJAMcOiEiIiLSEBMNIiIiGSh714k2iyZ+++03DB48GG5ubpAkCdu2bVPbLoRAREQE3NzcYG1tjYCAAJw9e1bz89K4BhEREelc2dCJNosm8vPz0alTJ6xatarC7cuWLcPy5cuxatUqHD16FC4uLujfvz/u3Lmj0XE4R4OIiEgGJC2fo6Fp3cDAQAQGBla4TQiB6OhohIWF4YUXXgAAxMfHo2HDhvj6668xYcKEKh+HPRpERERGJDc3V20pLCzUuI20tDRkZWXh6aefVpUpFAr07t0bBw8e1KgtJhpEREQyoKuhE3d3dzg4OKiWyMhIjWPJysoCADRs2FCtvGHDhqptVcWhEyIiIhnQ1e2tGRkZUCqVqnKFQqFFm+oBCSE0HqJhokFERGRElEqlWqJRHS4uLgAe9Gy4urqqyrOzs8v1cvwbDp0QERHJgGQmab3oStOmTeHi4oLdu3eryu7fv499+/ahe/fuGrXFHg0iIiIZqOkng+bl5SE1NVW1npaWhqSkJDg6OsLDwwMzZszAkiVL0LJlS7Rs2RJLliyBjY0NRowYodFxmGgQERGZoMTERPTp00e1PnPmTABAUFAQ4uLiMGfOHNy9exeTJk3CrVu30K1bN+zatQv29vYaHYeJBhERkQxU5+mej9bXREBAAIQQlW6XJAkRERGIiIiodkwAEw0iIiJZ4EvViIiIiDTEHg0iIiIZqOlHkNcUo0s0JEnC1q1bMXTo0Crtn5CQgD59+uDWrVuoW7euXmOTu50xXyHp599w9a90WCgUaNa5PZ4PnYCGzTwMHZrRup1TgO92HMfZlCsoKiqBc30lRr7sD4/GToYOzSj9eSgRuz+JQ/qpc8i5eg0TYqPhPbCvocMyarzmVSdBy6ETnUWiW0Y3dJKZmVnpS2KqKyIiAt7e3jptU45Sj5xE79eex+xvYzDtiw9RWlKClUFvobDgrqFDM0oFBYVYvnonzMzNMOn1vnhn1nN44dkusLa2NHRoRquw4C4atWuF4ZFvGzoUk8FrXnVlPRraLHJkVD0a9+/fVz3NjDQ3Je59tfVRUXMR6jcE6WcuoKVfJwNFZbx2J5xFPQdbjBr2v4
ffODnaGTAi49e+by+079vL0GGYFF5zqtU9GgEBAZgyZQpmzpyJ+vXro3///pAkCdu2bVPtc/DgQXh7e8PKygq+vr7Ytm0bJElCUlKSWlvHjh2Dr68vbGxs0L17d6SkpAAA4uLisHDhQpw8eVKVMcbFxdXcSRrQ3Tt5AABbB83umaaqOX3uMjwaO+KzL/dh7sJvsDT6Bxw48qehwyIiQ9H2hWry7NCo3YkGAMTHx6NOnTo4cOAAPv30U7Vtd+7cweDBg9GhQwccP34c7777LkJDQytsJywsDB9++CESExNRp04dvP766wCA4cOHY9asWWjXrh0yMzORmZmJ4cOH6/28DE0Igc1LPkFz3w5wa93M0OEYpes372D/4QtoUF+JyeP6oecTrfDtd0dx5NhFQ4dGRAbAoROZatGiBZYtW1bhtvXr10OSJKxbtw5WVlbw8vLClStXEBISUm7fxYsXo3fv3gCAuXPnYtCgQbh37x6sra1hZ2eHOnXq/OuwTGFhIQoLC1Xrubm5WpyZYW2KiMaV839h1qaVhg7FaAkBeDR2wnOBPgAA90aOyLx6G/sPXUC3Ls0NHB0RkW7U+h4NX1/fSrelpKSgY8eOsLKyUpX5+flVuG/Hjh1VP5e9qS47O1ujWCIjI+Hg4KBa3N3dNaovF5sionHqlwOYsT4a9VydDR2O0VLaW8PF2UGtzMXZAbdu5xsoIiIyJMlM+0WOZBpW1dna2la6TQhRriupssetWlhYqH4uq1NaWqpRLPPmzUNOTo5qycjI0Ki+oQkhsCkiGkm79mPGV9Go7+7675Wo2pp5NkD2NfVer+xruXCsxwmhRKbIWIdOan2i8Tht2rTBqVOn1IYzEhMTNW7H0tISJSUl/7qfQqGAUqlUW2qTjeEf4Y9tuzHmo/lQ2Fkj59oN5Fy7gfv3Cv+9MmnsqV5tkZZ+DT/vOY1r13Nx9EQaDhz5E0/6tzJ0aEbrXn4BMs6cR8aZ8wCAG+lXkHHmPG5ezjRwZMaL15xq/RyNxxkxYgTCwsIwfvx4zJ07F+np6fjggw8AaPYENU9PT9Xrcxs3bgx7e3soFAp9hW0w+9d/BwCIHjFdrXxU1Fz4v6TbZ5MQ0MS9PkJGB2D7zhP46ZdTcHK0w4vPdUXXzpx8qy/pSWfx0Quvq9a/DX9wS/cTw59D0MeLDRWWUeM114CZ9GDRpr4MGXWioVQq8f3332PixInw9vZGhw4dsGDBAowYMUJt3sa/efHFF7Flyxb06dMHt2/fRmxsLIKDg/UXuIGsvrjP0CGYnA5ejdHBq7GhwzAZrXp0RczV04YOw6TwmmvASN+qVqsTjYSEhHJlj87B6N69O06ePKlaX79+PSwsLODh8eCx2hW9Jtfb21utTKFQ4Ntvv9Vh5EREROr4rpNa6osvvkCzZs3QqFEjnDx5EqGhoRg2bBisra0NHRoREZHRM/pEIysrCwsWLEBWVhZcXV3x8ssvY/FijgsSEZHMcI5G7TRnzhzMmTPH0GEQERE9npHO0TDq21uJiIjIsIy+R4OIiKg2kMwkSFoMf2hTV5+YaBAREckBh06IiIiINMMeDSIiIhmQJC2HTmTao1GlROPjjz+ucoPTpk2rdjBEREQmy0iHTqqUaHz00UdVakySJCYaREREpFKlRCMtLU3fcRAREZk2M2j5wC6dRaJT1Q7r/v37SElJQXFxsS7jISIiMkll7zrRZpEjjRONgoICjB07FjY2NmjXrh3S09MBPJibsXTpUp0HSEREZBLKHkGuzSJDGica8+bNw8mTJ5GQkKD2qvV+/fph06ZNOg2OiIiIajeNb2/dtm0bNm3ahCeeeEKtm8bLywsXL17UaXBEREQmw5TvOnnYtWvX4OzsXK48Pz9ftuNDREREcieZPVi0qS9HGofVtWtX/Pjjj6r1suRi3bp18Pf3111kREREVOtp3KMRGRmJZ555BufOnUNxcTFWrFiBs2fP4tChQ9i3b58+YiQiIjJ+Rjp0onGPRvfu3XHgwAEUFBSgefPm2LVrFxo2bIhDhw6hS5cu+oiRiIjI6JW9vVWbRY6q9a6TDh06ID4+XtexEBERkZGpVqJRUlKCrVu3Ijk5GZIkoW3bthgyZAjq1OE72oiIiKrFSIdONM4Mzpw5gyFDhiArKwutW7cGAFy4cAENGjTA9u3b0aFDB50HSUREZPS0feiWTIdONJ6jMW7cOLRr1w6XL1/G8ePHcfz4cWRkZKBjx44YP368PmIkIiKiWkrjROPkyZOIjIxEvXr1VGX16tXD4sWLkZSUpMvYiIiITEZNv+ukuLgY77zzDpo2bQpra2s0a9YMixYtQmlpqU7PS+Ohk9atW+Pq1ato166dWnl2djZatGihs8CIiIhMSg0PnURFRWHNmjWIj49Hu3btkJiYiDFjxsDBwQHTp0+vfhyPqFKikZubq/p5yZIlmDZtGiIiIvDEE08AAA4fPoxFixYhKipKZ4ERERGZFi0ng0KzuocOHcKQIUMwaNAgAICnpyc2bNiAxMRELWIor0qJRt26ddW6ZIQQGDZsmKpMCAEAGDx4MEpKSnQaIBEREelez549sWbNGly4cAGtWrXCyZMn8fvvvyM6Olqnx6lSorF3716dHpSIiIjUVWeexaP1AfVRCABQKBRQKBTl9g8NDUVOTg7atGkDc3NzlJSUYPHixXj11VerHUNFqpRo9O7dW6cHJSIiokfoaI6Gu7u7WnF4eDgiIiLK7b5p0yZ89dVX+Prrr9GuXTskJSVhxowZcHNzQ1BQUPXjeES1n7BVUFCA9PR03L9/X628Y8eOWgdFRERE1ZORkQGlUqlar6g3AwBmz56NuXPn4pVXXgHw4Knff//9NyIjIw2baFy7dg1jxozBTz/9VOF2ztEgIiLSnK6GTpRKpVqiUZmCggKYmak/5cLc3Fznt7dq/ByNGTNm4NatWzh8+DCsra2xc+dOxMfHo2XLlti+fbtOgyMiIjIZZUMn2iwaGDx4MBYvXowff/wRly5dwtatW7F8+XI8//zzOj0tjXs09uzZg++++w5du3aFmZkZmjRpgv79+0OpVCIyMlJ1mwwRERHJ18qVKzF//nxMmjQJ2dnZcHNzw4QJE7BgwQKdHkfjRCM/Px/Ozs4AAEdHR1y7dg2tWrVChw4dcPz4cZ0GR0REZDJq+KVq9vb2iI6O1vntrI/SeOikdevWSElJAQB4e3vj008/xZUrV7BmzRq4urrqPEAiIiJTIJlJWi9ypHGPxowZM5CZmQngwS0zAwYMwPr162FpaYm4uDhdx0dERES1mMaJxsiRI1U/+/j44NKlSzh//jw8PDxQv359nQZHRERkMmp46KSmVPs5GmVsbGzQuXNnXcRCRERkusyg5QO7dBaJTlUp0Zg5c2aVG1y+fHm1gyEiIjJVunqOhtxUKdE4ceJElRqT60kSERGRYfClajXArKEnzKrwlDbSnhQ0x9AhmJyJDdsbOgSTsib/sqFDMClSaQ2OR+joXSdyo/UcDSIiItIBI50MKtOpI0RERGQM2KNBREQkB0bao8FEg4iISBa0T
DQgz0SDQydERESkN9VKNL788kv06NEDbm5u+PvvvwEA0dHR+O6773QaHBERkckwM9N+kSGNo4qJicHMmTMxcOBA3L59GyUlJQCAunXr6v0NcEREREarbI6GNosMaZxorFy5EuvWrUNYWBjMzc1V5b6+vjh9+rROgyMiIqLaTePJoGlpafDx8SlXrlAokJ+fr5OgiIiITI6R3nWicY9G06ZNkZSUVK78p59+gpeXly5iIiIiMj1GOnSicY/G7NmzMXnyZNy7dw9CCPzxxx/YsGEDIiMj8Z///EcfMRIRERk/bSd0ynQyqMaJxpgxY1BcXIw5c+agoKAAI0aMQKNGjbBixQq88sor+oiRiIiIaqlqPbArJCQEISEhuH79OkpLS+Hs7KzruIiIiEyLkc7R0OrJoPXr19dVHERERKaNicYDTZs2hfSYk/nrr7+0CoiIiIiMh8aJxowZM9TWi4qKcOLECezcuROzZ8/WVVxERESmhT0aD0yfPr3C8k8++QSJiYlaB0RERGSSjPSuE51FFRgYiM2bN+uqOSIiIjICOntN/LfffgtHR0ddNUdERGRaOHTygI+Pj9pkUCEEsrKycO3aNaxevVqnwREREZkMCVomGjqLRKc0TjSGDh2qtm5mZoYGDRogICAAbdq00VVcREREZAQ0SjSKi4vh6emJAQMGwMXFRV8xERERmR4jHTrRaDJonTp1MHHiRBQWFuorHiIiIpMkmZlpvciRxlF169YNJ06c0EcsREREJkzbN7fKs0dD4zkakyZNwqxZs3D58mV06dIFtra2ats7duyos+CIiIiodqtyovH6668jOjoaw4cPBwBMmzZNtU2SJAghIEkSSkpKdB8lERGRsTPSORpVTjTi4+OxdOlSpKWl6TMeIiIi02TqiYYQAgDQpEkTvQVDRERExkWjORqPe2srERERacFI33WiUaLRqlWrf002bt68qVVAREREJsnUh04AYOHChXBwcNBXLERERGRkNEo0XnnlFTg7O+srFiIiItNlpD0aVR7Q4fwMIiIiPdLmYV3VTFKuXLmC1157DU5OTrCxsYG3tzeOHTum09PS+K4TIiIiqv1u3bqFHj16oE+fPvjpp5/g7OyMixcvom7dujo9TpUTjdLSUp0emIiIiB5Sw3edREVFwd3dHbGxsaoyT0/P6h+/EvK8F4aIiMjU6GjoJDc3V22p7EWo27dvh6+vL15++WU4OzvDx8cH69at0/lpMdEgIiKSAx0lGu7u7nBwcFAtkZGRFR7ur7/+QkxMDFq2bImff/4Zb7zxBqZNm4YvvvhCp6el8UvViIiISL4yMjKgVCpV6wqFosL9SktL4evriyVLlgAAfHx8cPbsWcTExGD06NE6i4eJBhERkRzoaI6GUqlUSzQq4+rqCi8vL7Wytm3bYvPmzdWPoaKwdNoaGYWEtfEI8+qOKY4tsKTHQPx54IihQzJafx5KxOrXpmBux6cwsWEHJO341dAhGZUWPbph0v/FYmlqItbkX0anZweobQ/6dDnW5F9WW+bs3W6gaI0XP1OqSIKWQyeaHa5Hjx5ISUlRK7tw4YLO32lmdImGEALjx4+Ho6MjJElCUlKSoUOqVRK/3Y7/m7MQgXOmIuzgT2jR3Q+rnh+NmxlXDB2aUSosuItG7VpheOTbhg7FKClsbXD59DlsnDm/0n3O7NqLOc18VMuqF3TXZUz8TJGzN998E4cPH8aSJUuQmpqKr7/+GmvXrsXkyZN1ehyjGzrZuXMn4uLikJCQgGbNmqF+/fpatxkQEABvb29ER0drH6DM/bJyHXoEDUfP4FcBAMPej8C5X/dh37ov8fyiuQaOzvi079sL7fv2MnQYRuvsrr04u2vvY/cpLixE7tVrNRSR6eFnigZq+MmgXbt2xdatWzFv3jwsWrQITZs2RXR0NEaOHFn9GCpgdInGxYsX4erqiu7duxs6lFqn+P59pJ84jQGzJqmVt33qSfx1JNFAURHpV6te/lh2KQl3b+fiz98P47uFUbhz7YahwzIK/EzRkAEeQf7ss8/i2Wefrf4xq8Cohk6Cg4MxdepUpKenQ5IkeHp6orCwENOmTYOzszOsrKzQs2dPHD16VK3evn374OfnB4VCAVdXV8ydOxfFxcWqNvft24cVK1ZAkiRIkoRLly4Z4Oz0L+/GTZSWlEDp3ECtXNmwPv/iI6N0ZtdefP76VEQPHI5v5y1Cky6dMGPHJtSxtDR0aEaBnykEGFmPxooVK9C8eXOsXbsWR48ehbm5OebMmYPNmzcjPj4eTZo0wbJlyzBgwACkpqbC0dERV65cwcCBAxEcHIwvvvgC58+fR0hICKysrBAREYEVK1bgwoULaN++PRYtWgQAaNCgQYXHLywsVHswSm5ubo2ct649+l4bIYRsX9ZDpI1jm79X/fzPuRT8feIUliQfRvtn+iJp+08GjMy48DOliiQt7zqR5Nl3IM+oqsnBwQH29vYwNzeHi4sLbGxsEBMTg/fffx+BgYHw8vLCunXrYG1tjc8++wwAsHr1ari7u2PVqlVo06YNhg4dioULF+LDDz9EaWkpHBwcYGlpCRsbG7i4uMDFxQXm5uYVHj8yMlLtISnu7u41efpas3NyhJm5OXKuZquV38m+AaWz9nNdiOQuNysbN9OvwLlFU0OHYhT4maIhA7xUrSYYVaLxqIsXL6KoqAg9evRQlVlYWMDPzw/JyckAgOTkZPj7+6tl3D169EBeXh4uX76s0fHmzZuHnJwc1ZKRkaGbE6khdSwt4eHTAcl79quVJ+/dj2bdfA0UFVHNsXWsi3qNXZGTddXQoRgFfqYQYGRDJ48qe+NsRd12ZWUP//xv9f6NQqGo9AlstUW/qSGIHTcDTXw6olm3Ltj/+XrcyriCJ8e9ZujQjNK9/AJcS0tXrd9Iv4KMM+dhW9cBjo1dDRiZcVDY2qBBc0/Ven1PdzTu6IX8m7dRcOs2ng2biePbdiA3KxtOTdwxJCIUeTduIWn7TsMFbWT4maIBA0wGrQlGnWi0aNEClpaW+P333zFixAgAQFFRERITEzFjxgwAgJeXFzZv3qyWcBw8eBD29vZo1KgRAMDS0hIlJSUGOYea5vvSc8i7eQs/Ll2B3KxsuHm1xpQt8XDyaGzo0IxSetJZfPTC66r1b8PfBwA8Mfw5BH282FBhGY0mnTth5s7/U62/HBUBADj01Tf4evrbcGvXBt1GvAQbByVysrJx4beD+M/oiSjMyzdQxMaHnykakMy0m2ch0zkaRp1o2NraYuLEiZg9ezYcHR3h4eGBZcuWoaCgAGPHjgUATJo0CdHR0Zg6dSqmTJmClJQUhIeHY+bMmTD776QcT09PHDlyBJcuXYKdnR0cHR1V24xRwPggBIwPMnQYJqFVj66IuXra0GEYrQv7D+EN28q/0FYO4V/VNYGfKVVkJj1YtKkvQ0adaADA0qVLUVpailGjRuHOnTvw9fXFzz//jHr16gEAGjVqhB07dmD27Nno1KkTHB0dMXbsWLzzzjuqNt566y0EBQXBy8sLd+/eRVpaGjw9PQ10RkRE
RLWHJMomJJDO5ebmwsHBATmZ6VV6wQ1pT+TdNnQIJmdiw/aGDsGkrMnXbJI6aSc3NxcOrh7IycnR2+d42XfFzY9mQmld/Xl+uXcL4fjmcr3GWh1G36NBRERUKxjpZFDjnWhAREREBsceDSIiIjkw0/LJoDK9SYGJBhERkRxw6ISIiIhIM+zRICIikgM+sIuIiIj0RoKWQyc6i0Sn5Jn+EBERkVFgjwYREZEc8K4TIiIi0hsjveuEiQYREZEcGOlkUHlGRUREREaBPRpERERyIGn5mngOnRAREVGlOHRCREREpBn2aBAREckB7zohIiIiveHQCREREZFm2KNBREQkB2Za3nWiTV09YqJBREQkB0Y6R4NDJ0RERKQ37NEgIiKSAyOdDMpEg4iISA44R4OIiIj0RpK07NGQZ6Ihz34WIiIiMgrs0SAiIpIDI73rhIkGERGRHBjpZFB5RkVERERGgT0aREREcmCkd52wR4OIiEgOyoZOtFmqKTIyEpIkYcaMGbo7n/9iokFERGTCjh49irVr16Jjx456aZ+JBhERkRyU3XWizaKhvLw8jBw5EuvWrUO9evX0cFJMNIiIiOTBzEz7BUBubq7aUlhYWOkhJ0+ejEGDBqFfv376Oy29tUxEREQ1zt3dHQ4ODqolMjKywv02btyI48ePV7pdV3jXCRERkSxo+cAuPKibkZEBpVKpKlUoFOX2zMjIwPTp07Fr1y5YWVlpccx/x0SDiIhIDnT0wC6lUqmWaFTk2LFjyM7ORpcuXVRlJSUl+O2337Bq1SoUFhbC3Ny8+rE8hIkGERGRHNTgI8j79u2L06dPq5WNGTMGbdq0QWhoqM6SDICJBhERkcmxt7dH+/bt1cpsbW3h5ORUrlxbTDSIiIjk4KE7R6pdX4aYaNQAkZ8DYVZq6DBMgsi7aegQTM6a/MuGDsGkRDk1NXQIJuWeEDV3MAO/vTUhIUGr+pWRZ/pDRERERoE9GkRERHIgSVredSLPl6ox0SAiIpIDAw+d6AuHToiIiEhv2KNBREQkBzp6YJfcMNEgIiKSAzPpwaJNfRmSZ/pDRERERoE9GkRERHLAoRMiIiLSGyO964SJBhERkRwYaY+GPKMiIiIio8AeDSIiIhmQJAmSFsMf2tTVJyYaREREcsChEyIiIiLNsEeDiIhIDoy0R4OJBhERkRxIWj4ZVKZzNOSZ/hAREZFRYI8GERGRHHDohIiIiPTGSJ8MKs/0h4iIiIwCezSIiIjkQJK0HDqRZ48GEw0iIiI5MNKhEyYaREREcmCkk0HlGRUREREZBfZoEBERyYGZlg/s0qauHjHRICIikgMOnRARERFphj0aREREcsC7ToiIiEhvOHRCREREpBn2aBAREckBh06IiIhIbzh0QkRERKQZ9mgQERHJgZnZg0Wb+jLERIOIiEgGJEmCpMU8C23q6hMTDSIiIjkw0tfEy7OfhYiIiIwCezSIiIjkwEhvb2WPBhERkSyY/e8W1+osGn6lR0ZGomvXrrC3t4ezszOGDh2KlJQUfZwVERERmZp9+/Zh8uTJOHz4MHbv3o3i4mI8/fTTyM/P1+lxOHRCKn8eSsTuT+KQfuoccq5ew4TYaHgP7GvosIzWzpivkPTzb7j6VzosFAo069wez4dOQMNmHoYOzeglrI3H7uhPkZOVDbe2rfDysnC07NHN0GHVeo39/dBtyng09G4Pe5eG2DJqPP7csbvCfQd8uBjewSPw69uLkPhpbA1HKlM1PHSyc+dOtfXY2Fg4Ozvj2LFjePLJJ6sfxyMM2qMREBCAGTNmGDIEekhhwV00atcKwyPfNnQoJiH1yEn0fu15zP42BtO++BClJSVYGfQWCgvuGjo0o5b47Xb835yFCJwzFWEHf0KL7n5Y9fxo3My4YujQaj1LG2tkn03GL6Hhj92v5cD+cO3ijTuZWTUUWS1R9hwNbRYAubm5akthYWGVDp+TkwMAcHR01O1p6bQ1qtXa9+2FIfOmwWdQP0OHYhKmxL0P/5cC4daqKRq3bYFRUXNx85+rSD9zwdChGbVfVq5Dj6Dh6Bn8KlzbtMSw9yNQr7Eb9q370tCh1Xp//boP+5d8iAs//FzpPnauDdE/aiF+mDADpUXFNRid6XB3d4eDg4NqiYyM/Nc6QgjMnDkTPXv2RPv27XUaj8ESjeDgYOzbtw8rVqxQPaTk0qVL2LdvH/z8/KBQKODq6oq5c+eiuPh/v4wBAQGYNm0a5syZA0dHR7i4uCAiIkKt7fPnz6Nnz56wsrKCl5cXfvnlF0iShG3btgEAEhISIEkSbt++raqTlJSkiqHMwYMH8eSTT8La2hru7u6YNm2azseuiMrcvZMHALB1sDdwJMar+P59pJ84jbZ91buF2z71JP46kmigqEyIJOHZmOU4snItrqf8aeho5Kds6ESbBUBGRgZycnJUy7x58/710FOmTMGpU6ewYcMGnZ+WwRKNFStWwN/fHyEhIcjMzERmZiYsLCwwcOBAdO3aFSdPnkRMTAw+++wzvPfee2p14+PjYWtriyNHjmDZsmVYtGgRdu9+MA5YWlqKoUOHwsbGBkeOHMHatWsRFhamcXynT5/GgAED8MILL+DUqVPYtGkTfv/9d0yZMkUn50/0MCEENi/5BM19O8CtdTNDh2O08m7cRGlJCZTODdTKlQ3rI/fqNQNFZTqemP4GSotLcGxtnKFDkSdt7jh56IVsSqVSbVEoFI897NSpU7F9+3bs3bsXjRs31vlpGWwyqIODAywtLWFjYwMXFxcAQFhYGNzd3bFq1SpIkoQ2bdrgn3/+QWhoKBYsWACz/44/dezYEeHhD8YAW7ZsiVWrVuHXX39F//79sWvXLly8eBEJCQmqdhcvXoz+/ftrFN/777+PESNGqOaQtGzZEh9//DF69+6NmJgYWFlZlatTWFioNhaWm5ur8XUh07QpIhpXzv+FWZtWGjoUk/Doo5qFELJ9BoGxaNipPbqMH4P4p541dCj0X0IITJ06FVu3bkVCQgKaNm2ql+PI6q6T5ORk+Pv7q30I9OjRA3l5ebh8+TI8PB7Mxu/YsaNaPVdXV2RnZwMAUlJS4O7urkoyAMDPz0/jWI4dO4bU1FSsX79eVSaEQGlpKdLS0tC2bdtydSIjI7Fw4UKNj0WmbVNENE79cgAzN65EPVdnQ4dj1OycHGFmbo6cq9lq5Xeyb0DpXN9AUZkG9ye6wraBEyaePKAqM6tTB33eDYPvG69jjU8vA0YnEzV818nkyZPx9ddf47vvvoO9vT2ysh5MznVwcIC1tXX143iErBINIUTFf2lA/S8QCwsLtX0kSUJpaWmlbTyqrGekrG0AKCoqUtuntLQUEyZMwLRp08rVL0t4HjVv3jzMnDlTtZ6bmwt3d/fHxkKmSwiBbxauQNKu/Xhz/QrUd3c1dEhGr46lJTx8OiB5z374PBeoKk/eux+dBj1twMiM35lvtuLSvgNqZcO+jcfZb7bi9NffGigquZH+u2hTv+piYmIAPJj7+LDY2FgEBwdrEYc6gyYalpaWKCkpUa17eXlh8+bNasn
CwYMHYW9vj0aNGlWpzTZt2iA9PR1Xr15Fw4YNAQBHjx5V26dBgwfjs5mZmahXrx6AB5NBH9a5c2ecPXsWLVq0qPL5KBSKfx0Lk7N7+QW4lpauWr+RfgUZZ87Dtq4DHBvzS1DXNoZ/hMTtv2LCp4uhsLNGzrUbAABreztYWtXe3yO56zc1BLHjZqCJT0c069YF+z9fj1sZV/DkuNcMHVqtZ2Frg3pNm6jWHTzc4dy+Le7eysGdK//g3q3bavuXFhUj/+o13Ez9q4Yjlaka7tF4+I9tfTJoouHp6YkjR47g0qVLsLOzw6RJkxAdHY2pU6diypQpSElJQXh4OGbOnKnqhfg3/fv3R/PmzREUFIRly5bhzp07qsmgZclLixYt4O7ujoiICLz33nv4888/8eGHH6q1ExoaiieeeAKTJ09GSEgIbG1tkZycjN27d2PlSuMcR09POouPXnhdtf5t+PsAgCeGP4egjxcbKiyjtX/9dwCA6BHT1cpHRc2F/0uBFVUhHfB96Tnk3byFH5euQG5WNty8WmPKlng4eeh+EpypcfHugBHbN6rW+y6eDwA4veFb7Jgy21BhkYEZNNF46623EBQUBC8vL9y9exdpaWnYsWMHZs+ejU6dOsHR0RFjx47FO++8U+U2zc3NsW3bNowbNw5du3ZFs2bN8P7772Pw4MGqCZwWFhbYsGEDJk6ciE6dOqFr165477338PLLL6va6dixI/bt24ewsDD06tULQgg0b94cw4cP1/l1kItWPboi5uppQ4dhMlZf3GfoEExWwPggBIwPMnQYRifjwBFEOVV9QiHnZTzCSF+qJoma6jsxoAMHDqBnz55ITU1F8+bNa+y4ubm5cHBwwO3U01Da89kINUHk3TR0CCbHzIW349YkTb7ISXv3hEBE4W3k5ORAqVTq5Riq74pzR6G0t6t+O3fyUNerq15jrQ5ZTQbVla1bt8LOzg4tW7ZEamoqpk+fjh49etRokkFERERGmmjcuXMHc+bMQUZGBurXr49+/fqVm4NBREQkK0Y6dGKUicbo0aMxevRoQ4dBRERUdTV7d2uN4UvViIiISG+MskeDiIio9jHOLg0mGkRERHJgpHM0OHRCREREesMeDSIiIjmQoGWPhs4i0SkmGkRERLLAORpERESkL5yjQURERKQZ9mgQERHJAodOiIiISF84dEJERESkGfZoEBERyYGR9mgw0SAiIpIF45yjwaETIiIi0hv2aBAREcmAJEmQtBj+0KauPjHRICIikgMjnaPBoRMiIiLSG/ZoEBERyYJxTgZlokFERCQLWg6dMNEgIiKiSnGOBhEREZFm2KNBREQkC5yjQURERPrCoRMiIiIizbBHg4iISA6Mc+SEiQYREZE8GGemwaETIiIi0hv2aBAREcmBkU4GZaJBREQkB0aaaHDohIiIiPSGPRpERESyYJyTQZloEBERyYEELYdOdBaJTnHohIiISA7K5mhos1TD6tWr0bRpU1hZWaFLly7Yv3+/Tk+LiQYREZGJ2rRpE2bMmIGwsDCcOHECvXr1QmBgINLT03V2DCYaREREsiDpYNHM8uXLMXbsWIwbNw5t27ZFdHQ03N3dERMTo4PzeYCJBhERkRzU8NDJ/fv3cezYMTz99NNq5U8//TQOHjyos9PiZFA9EkIAAHLv5Bk4EtMh8nita5qZTa6hQzAp9/77uUI1o+x6ixq47rl37uikfm6u+v+TCoUCCoWi3P7Xr19HSUkJGjZsqFbesGFDZGVlaRXLw5ho6NGd//6je/j4GzgSIiLSxp07d+Dg4KCXti0tLeHi4gL3Vu20bsvOzg7u7u5qZeHh4YiIiKi0jvRIT4gQolyZNpho6JGbmxsyMjJgb2+v0380fcvNzYW7uzsyMjKgVCoNHY5J4DWvWbzeNa+2XnMhBO7cuQM3Nze9HcPKygppaWm4f/++1m1VlCRU1JsBAPXr14e5uXm53ovs7OxyvRzaYKKhR2ZmZmjcuLGhw6g2pVJZqz4QjAGvec3i9a55tfGa66sn42FWVlawsrLS+3EeZmlpiS5dumD37t14/vnnVeW7d+/GkCFDdHYcJhpEREQmaubMmRg1ahR8fX3h7++PtWvXIj09HW+88YbOjsFEg4iIyEQNHz4cN27cwKJFi5CZmYn27dtjx44daNKkic6OwUSDylEoFAgPD690XI90j9e8ZvF61zxec/maNGkSJk2apLf2JVET9+wQERGRSeIDu4iIiEhvmGgQERGR3jDRICIiIr1homFiAgICMGPGjCrvv23bNrRo0QLm5uYa1aP/kSQJ27Ztq/L+CQkJkCQJt2/f1ltMxkgIgfHjx8PR0RGSJCEpKcnQIRERmGjQv5gwYQJeeuklZGRk4N1330VwcDCGDh1q6LBqlczMTAQGBuq0zYiICHh7e+u0zdpu586diIuLww8//KC6TU9bmibmtZ2pnS/VDN7eSpXKy8tDdnY2BgwYoNfH7xqz+/fvw8XFxdBhmISLFy/C1dUV3bt3N3QoRPQwQSald+/eYvr06UIIIQoLC8Xs2bOFm5ubsLGxEX5+fmLv3r1CCCH27t0rAKgtvXv3LldWtj890Lt3bzF58mTx5ptvCicnJ/Hkk08KAGLr1q2qfQ4cOCA6deokFAqF6NKli9i6dasAIE6cOCGE+N+1/+WXX0SXLl2EtbW18Pf3F+fPnxdCCBEbG1vu3yE2NrbmT1ZGgoKC1K5HkyZNxL1798TUqVNFgwYNhEKhED169BB//PGHWr2EhATRtWtXYWlpKVxcXERoaKgoKiqqsE0AIi0tzQBnVzMqO9/HXSMhHvzOT506VcyePVvUq1dPNGzYUISHh6u1nZycLHr06CEUCoVo27at2L17t9r/F2W/87du3VLVOXHiRLlrfuDAAdGrVy9hZWUlGjduLKZOnSry8vL0eFVIF5homJiHE40RI0aI7t27i99++02kpqaK999/XygUCnHhwgVRWFgoUlJSBACxefNmkZmZKXJycsSwYcPEM888IzIzM0VmZqYoLCw07AnJTO/evYWdnZ2YPXu2OH/+vEhOTlb7QM3NzRWOjo7itddeE2fPnhU7duwQrVq1qjDR6Natm0hISBBnz54VvXr1Et27dxdCCFFQUCBmzZol2rVrp/p3KCgoMNAZy8Pt27fFokWLROPGjUVmZqbIzs4W06ZNE25ubmLHjh3i7NmzIigoSNSrV0/cuHFDCCHE5cuXhY2NjZg0aZJITk4WW7duFfXr11d9Sd6+fVv4+/uLkJAQ1XUuLi424FnqV0Xn+2/XSIgHv/NKpVJERESICxcuiPj4eCFJkti1a5cQQoiSkhLRunVr0b9/f5GUlCT2798v/Pz8NE40Tp06Jezs7MRHH30kLly4IA4cOCB8fHxEcHBwDV0hqi4mGiamLNFITU0VkiSJK1euqG3v27evmDdvnhBCiFu3bpXrtQgKChJDhgypwYhrl969ewtvb2+1soc/UGNiYoSTk5O4e/euavu6desq7dEo8+OPPwoAqnrh4eGiU6dOej2X2uajjz4STZo0EUIIkZ
eXJywsLMT69etV2+/fvy/c3NzEsmXLhBBCvP3226J169aitLRUtc8nn3wi7OzsRElJiRBCPTE3BY+eb1WvUc+ePdXa6dq1qwgNDRVCCPHTTz+JOnXqiMzMTNX26vRojBo1SowfP17tOPv37xdmZmZq/z+R/HCOhok6fvw4hBBo1aqVWnlhYSGcnJwMFJVx8PX1rXRbSkoKOnbsqPaWRj8/vwr37dixo+pnV1dXAA9e3+zh4aGjSI3XxYsXUVRUhB49eqjKLCws4Ofnh+TkZABAcnIy/P391V6p3aNHD+Tl5eHy5cu8zqj6NXr4dxV48PuanZ0N4MHvvLu7u9pcpcp+5x/n2LFjSE1Nxfr161VlQgiUlpYiLS0Nbdu21bhNqhlMNExUaWkpzM3NcezYMZibm6tts7OzM1BUxsHW1rbSbUIItQ/tsrKKWFhYqH4uq1NaWqqDCI1f2TWt6FqXlT3u3+LRclNV1Wv08O9q2bay39WK2niUmZmZWtsAUFRUpLZPaWkpJkyYgGnTppWrz6RQ3nh7q4ny8fFBSUkJsrOz0aJFC7XlcXdJWFpaoqSkpAYjNS5t2rTBqVOnUFhYqCpLTEzUuB3+OzxeixYtYGlpid9//11VVlRUhMTERNVfvl5eXjh48KDal9vBgwdhb2+PRo0aATC96/zo+VblGv2bNm3aID09HVevXlWVHT16VG2fBg0aAHhwK3iZR5+D0rlzZ5w9e7bc51XZvzXJFxMNE9WqVSuMHDkSo0ePxpYtW5CWloajR48iKioKO3bsqLSep6cnTp06hZSUFFy/fr3cXx30eCNGjEBpaSnGjx+P5ORk/Pzzz/jggw8AaPZXtKenJ9LS0pCUlITr16+rJS70oFdp4sSJmD17Nnbu3Ilz584hJCQEBQUFGDt2LIAHb6zMyMjA1KlTcf78eXz33XcIDw/HzJkzVX9he3p64siRI7h06RKuX79u9D1Kj55vVa7Rv+nfvz+aN2+OoKAgnDp1CgcOHEBYWBiA//3Ot2jRAu7u7oiIiMCFCxfw448/4sMPP1RrJzQ0FIcOHcLkyZORlJSEP//8E9u3b8fUqVN1exFI55homLDY2FiMHj0as2bNQuvWrfHcc8/hyJEjcHd3r7ROSEgIWrduDV9fXzRo0AAHDhyowYhrP6VSie+//x5JSUnw9vZGWFgYFixYAABq8zb+zYsvvohnnnkGffr0QYMGDbBhwwZ9hVxrLV26FC+++CJGjRqFzp07IzU1FT///DPq1asHAGjUqBF27NiBP/74A506dcIbb7yBsWPH4p133lG18dZbb8Hc3BxeXl5o0KAB0tPTDXU6NeLR8y0qKvrXa/RvzM3NsW3bNuTl5aFr164YN26cqn7Z77yFhQU2bNiA8+fPo1OnToiKisJ7772n1k7Hjh2xb98+/Pnnn+jVqxd8fHwwf/581fwlki++Jp7IwNavX48xY8YgJycH1tbWhg6HSO8OHDiAnj17IjU1Fc2bNzd0OKRnnAxKVMO++OILNGvWDI0aNcLJkycRGhqKYcOGMckgo7V161bY2dmhZcuWSE1NxfTp09GjRw8mGSaCiQZRDcvKysKCBQuQlZUFV1dXvPzyy1i8eLGhwyLSmzt37mDOnDnIyMhA/fr10a9fv3JzMMh4ceiEiIiI9IaTQYmIiEhvmGgQERGR3jDRICIiIr1hokFERER6w0SDyMhFRETA29tbtR4cHIyhQ4fWeByXLl2CJEnlHi39ME9PT0RHR1e5zbi4ONStW1fr2CRJwrZt27Ruh4jKY6JBZADBwcGQJAmSJMHCwgLNmjXDW2+9hfz8fL0fe8WKFYiLi6vSvlVJDoiIHofP0SAykGeeeQaxsbEoKirC/v37MW7cOOTn5yMmJqbcvkVFReXekFldDg4OOmmHiKgq2KNBZCAKhQIuLi5wd3fHiBEjMHLkSFX3fdlwx+eff45mzZpBoVBACIGcnByMHz8ezs7OUCqVeOqpp3Dy5Em1dpcuXYqGDRvC3t4eY8eOxb1799S2Pzp0UlpaiqioKLRo0QIKhQIeHh6qB4g1bdoUwIO3/UqShICAAFW92NhYtG3bFlZWVmjTpg1Wr16tdpw//vgDPj4+sLKygq+vL06cOKHxNVq+fDk6dOgAW1tbuLu7Y9KkScjLyyu337Zt29CqVStYWVmhf//+yMjIUNv+/fffo0uXLrCyskKzZs2wcOFCFBcXaxwPEWmOiQaRTFhbW6u9DTc1NRXffPMNNm/erBq6GDRoELKysrBjxw4cO3YMnTt3Rt++fXHz5k0AwDfffIPw8HAsXrwYiYmJcHV1LZcAPGrevHmIiorC/Pnzce7cOXz99ddo2LAhgAfJAgD88ssvyMzMxJYtWwAA69atQ1hYGBYvXozk5GQsWbIE8+fPR3x8PAAgPz8fzz77LFq3bo1jx44hIiICb731lsbXxMzMDB9//DHOnDmD+Ph47NmzB3PmzFHbp6CgAIsXL0Z8fDwOHDiA3NxcvPLKK6rtP//8M1577TVMmzYN586dw6effoq4uDg+jZWopggiqnFBQUFiyJAhqvUjR44IJycnMWzYMCGEEOHh4cLCwkJkZ2er9vn111+FUqkU9+7dU2urefPm4tNPPxVCCOHv7y/eeOMNte3dunUTnTp1qvDYubm5QqFQiHXr1lUYZ1pamgAgTpw4oVbu7u4uvv76a7Wyd999V/j7+wshhPj000+Fo6OjyM/PV22PiYmpsK2HNWnSRHz00UeVbv/mm2+Ek5OTaj02NlYAEIcPH1aVJScnCwDiyJEjQgghevXqJZYsWaLWzpdffilcXV1V6wDE1q1bKz0uEVUf52gQGcgPP/wAOzs7FBcXo6ioCEOGDMHKlStV25s0aYIGDRqo1o8dO4a8vDw4OTmptXP37l1cvHgRAJCcnIw33nhDbbu/vz/27t1bYQzJyckoLCxE3759qxz3tWvXkJGRgbFjxyIkJERVXlxcrJr/kZycjE6dOsHGxkYtDk3t3bsXS5Yswblz55Cbm4vi4mLcu3cP+fn5sLW1BQDUqVMHvr6+qjpt2rRB3bp1kZycDD8/Pxw7dgxHjx5V68EoKSnBvXv3UFBQoBYjEekeEw0iA+nTpw9iYmJgYWEBNze3cpM9y75Iy5SWlsLV1RUJCQnl2qruLZ7VeWNsaWkpgAfDJ926dVPbZm5uDgAQOniF0t9//42BAwfijTfewLvvvgtHR0f8/vvvGDt2rNoQE/Dg9tRHlZWVlpZi4cKFeOGFF8rtY2VlpXWcRPR4TDSIDMTW1hYtWrSo8v6dO3dGVlYW6tSpA09Pzwr3adu2LQ4fPozRo0eryg4fPlxpmy1btoS1tTV+/fVXjBs3rtx2S0tLAA96AMo0bNgQjRo1wl9//YWRI0dW2K6Xlxe+/PJL3L17V5XMPC6OiiQmJqK4uBgffvghzMweTCf75ptvyu1XXFyMxMRE+Pn5AQBSUlJw+/ZttGnTBsCD65aSkqLRtSYi3WGiQVRL9OvXD/7+/hg6dCiioqLQunVr/PPPP9ixYweGDh0KX19fTJ8+H
UFBQfD19UXPnj2xfv16nD17Fs2aNauwTSsrK4SGhmLOnDmwtLREjx49cO3aNZw9exZjx46Fs7MzrK2tsXPnTjRu3BhWVlZwcHBAREQEpk2bBqVSicDAQBQWFiIxMRG3bt3CzJkzMWLECISFhWHs2LF45513cOnSJXzwwQcanW/z5s1RXFyMlStXYvDgwThw4ADWrFlTbj8LCwtMnToVH3/8MSwsLDBlyhQ88cQTqsRjwYIFePbZZ+Hu7o6XX34ZZmZmOHXqFE6fPo333ntP838IItII7zohqiUkScKOHTvw5JNP4vXXX0erVq3wyiuv4NKlS6q7RIYPH44FCxYgNDQUXbp0wd9//42JEyc+tt358+dj1qxZWLBgAdq2bYvhw4cjOzsbwIP5Dx9//DE+/fRTuLm5YciQIQCAcePG4T//+Q/i4uLQoUMH9O7dG3FxcarbYe3s7PD999/j3Llz8PHxQVhYGKKiojQ6X29vbyxfvhxRUVFo37491q9fj8jIyHL72djYIDQ0FCNGjIC/vz+sra2xceNG1fYBAwbghx9+wO7du9G1a1c88cQTWL58OZo0aaJRPERUPZLQxWAqERERUQXYo0FERER6w0SDiIiI9IaJBhEREekNEw0iIiLSGyYaREREpDdMNIiIiEhvmGgQERGR3jDRICIiIr1hokFERER6w0SDiIiI9IaJBhEREekNEw0iIiLSm/8HfMvKYZN4Y0IAAAAASUVORK5CYII=", 434 | "text/plain": [ 435 | "
" 436 | ] 437 | }, 438 | "metadata": {}, 439 | "output_type": "display_data" 440 | } 441 | ], 442 | "source": [ 443 | "###============================ plot ============================###\n", 444 | "import matplotlib.pyplot as plt\n", 445 | "from sklearn.metrics import confusion_matrix\n", 446 | "from sklearn.metrics import ConfusionMatrixDisplay\n", 447 | "ConfusionMatrixDisplay.from_predictions(Y_test.cpu().numpy().tolist(), \n", 448 | " probs.argmax(dim=1).cpu().numpy().tolist(), \n", 449 | " display_labels=[\"left\", \"right\", \"foot\", \"tongue\"], \n", 450 | " cmap=plt.cm.Reds, \n", 451 | " colorbar=True)\n", 452 | "plt.title(\"Confusion Matrix\")" 453 | ] 454 | } 455 | ], 456 | "metadata": { 457 | "kernelspec": { 458 | "display_name": "Python 3.7.13 ('pytorch_env')", 459 | "language": "python", 460 | "name": "python3" 461 | }, 462 | "language_info": { 463 | "codemirror_mode": { 464 | "name": "ipython", 465 | "version": 3 466 | }, 467 | "file_extension": ".py", 468 | "mimetype": "text/x-python", 469 | "name": "python", 470 | "nbconvert_exporter": "python", 471 | "pygments_lexer": "ipython3", 472 | "version": "3.7.13" 473 | }, 474 | "orig_nbformat": 4, 475 | "vscode": { 476 | "interpreter": { 477 | "hash": "1430a93da5d83533894d453237c5709323ba9929f781aaee31a933aa40e9a862" 478 | } 479 | } 480 | }, 481 | "nbformat": 4, 482 | "nbformat_minor": 2 483 | } 484 | -------------------------------------------------------------------------------- /EEGNet_Pytorch/main.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | from EEGNet import EEGNet 6 | from tqdm import tqdm 7 | import LoadData as loaddata 8 | from sklearn.model_selection import train_test_split 9 | 10 | 11 | ###============================ Sets the seed for random numbers ============================### 12 | torch.manual_seed(1234) 13 | np.random.seed(1234) 14 | 15 | ###============================ Use the GPU to train ============================### 16 | use_cuda = torch.cuda.is_available() 17 | device = torch.device('cuda:0' if use_cuda else 'cpu') 18 | 19 | ###============================ Setup for saving the model ============================### 20 | current_working_dir = os.getcwd() 21 | filename = current_working_dir + '/best_model.pth' 22 | 23 | ###============================ Load data ============================### 24 | data_path = "/home/pytorch/LiangXiaohan/MI_Dataverse/BCICIV_2a_gdf" 25 | persion_data = 'A03T.gdf' 26 | 27 | '''for BCIC Dataset''' 28 | bcic_data = loaddata.LoadBCIC(persion_data, data_path) 29 | eeg_data = bcic_data.get_epochs(tmin=0.5, tmax=2.5, baseline=None, downsampled=None) # {'x_data':, 'y_labels':, 'fs':} 30 | 31 | X = eeg_data.get('x_data') 32 | Y = eeg_data.get('y_labels') 33 | 34 | ###============================ Initialization parameters ============================### 35 | chans = X.shape[1] 36 | samples = X.shape[2] 37 | kernelLength = 64 38 | kernelLength2 = 16 39 | F1 = 8 40 | D = 2 41 | F2 = F1 * D 42 | dropoutRate = 0.5 43 | n_classes = 4 44 | training_epochs = 15 45 | batch_size = 32 46 | kfolds = 10 47 | test_size = 0.1 48 | 49 | ###============================ Split data & Cross validate ============================### 50 | X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=0) 51 | 52 | train_indices, eval_indices = loaddata.cross_validate_sequential_split(X_train, Y_train, kfold=kfolds) 53 | 54 | ###============================ 
55 | eegnet = EEGNet(n_classes, chans, samples, dropoutRate, kernelLength, kernelLength2, F1, D, F2)
56 | eegnet = eegnet.to(device)
57 | print(eegnet)
58 | optimizer = torch.optim.Adam(eegnet.parameters(), lr=0.001, betas=(0.9, 0.999))
59 | criterion = nn.CrossEntropyLoss().to(device)
60 | 
61 | ###============================ Train model ============================###
62 | best_acc = 0
63 | for kfold in range(kfolds):
64 |     train_idx = train_indices.get(kfold)
65 |     eval_idx = eval_indices.get(kfold)
66 |     # print(f'epoch {str(iter)}, Fold {str(kfold+1)}\n')
67 |     x_train, x_eval = loaddata.split_xdata(X_train, train_idx, eval_idx)
68 |     y_train, y_eval = loaddata.split_ydata(Y_train, train_idx, eval_idx)
69 | 
70 |     optimizer = torch.optim.Adam(eegnet.parameters(), lr=0.001, betas=(0.9, 0.999))
71 |     criterion = nn.CrossEntropyLoss().to(device)
72 | 
73 |     epoch_val_acc = 0
74 |     for iter in range(0, training_epochs):
75 | 
76 |         train_data = loaddata.BCICDataLoader(x_train, y_train, batch_size=batch_size)
77 | 
78 |         running_loss = 0
79 |         running_accuracy = 0
80 | 
81 |         # create a minibatch
82 |         eegnet.train()
83 |         for inputs, target in tqdm(train_data):
84 |             inputs = inputs.to(device).requires_grad_() # ([batch_size, chans, samples])
85 |             target = target.to(device)
86 | 
87 |             output = eegnet(inputs)
88 | 
89 |             # update the weights
90 |             loss = criterion(output, target)
91 | 
92 |             optimizer.zero_grad()
93 |             loss.backward()
94 |             optimizer.step()
95 | 
96 |             acc = (output.argmax(dim=1) == target).float().mean()
97 |             running_accuracy += acc / len(train_data)
98 |             running_loss += loss.detach().item() / len(train_data)
99 | 
100 |         print(f"Train : Epoch : {iter} - kfold : {kfold+1} - acc: {running_accuracy:.4f} - loss : {running_loss:.4f}\n")
101 | 
102 |         # validation
103 |         eegnet.eval()
104 |         inputs = x_eval.to(device).requires_grad_()
105 |         val_probs = eegnet(inputs)
106 |         val_acc = (val_probs.argmax(dim=1) == y_eval.to(device)).float().mean()
107 |         print(f"Eval : Epoch : {iter} - kfold : {kfold+1} - acc: {val_acc:.4f}\n")
108 |         # epoch_val_acc += val_acc
109 | 
110 |     # epoch_val_acc = epoch_val_acc/kfolds
111 |     # print(f"Eval : Epoch : {iter+1} - epoch_val_acc: {epoch_val_acc:.4f}\n")
112 | 
113 |         if val_acc > best_acc:
114 |             torch.save({
115 |                 'model_state_dict': eegnet.state_dict(),
116 |                 'optimizer_state_dict': optimizer.state_dict(),
117 |             }, filename)
118 |             best_acc = val_acc
119 |             print(f"best_acc={best_acc:.4f}\n")
120 | 
121 | 
122 | ###============================ Test model ============================###
123 | X_test = torch.from_numpy(X_test).to(device).to(torch.float32).requires_grad_()
124 | Y_test = torch.from_numpy(Y_test).to(device)
125 | 
126 | checkpoint = torch.load(filename)
127 | eegnet.load_state_dict(checkpoint['model_state_dict'])
128 | optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
129 | eegnet.eval()
130 | probs = eegnet(X_test)
131 | acc = (probs.argmax(dim=1) == Y_test).float().mean()
132 | print("Classification accuracy: %f " % (acc))
133 | 
134 | 
135 | ###============================ plot ============================###
136 | import matplotlib.pyplot as plt
137 | from sklearn.metrics import confusion_matrix
138 | from sklearn.metrics import ConfusionMatrixDisplay
139 | ConfusionMatrixDisplay.from_predictions(Y_test.cpu().numpy().tolist(), 
140 |                                         probs.argmax(dim=1).cpu().numpy().tolist(), 
141 |                                         display_labels=["left", "right", "foot", "tongue"], 
142 |                                         cmap=plt.cm.Reds, 
143 |                                         colorbar=True)
144 | plt.show()
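
A note on the loss used in `main.py`: PyTorch's `nn.CrossEntropyLoss` expects raw, unnormalized logits and applies `log_softmax` internally. The `EEGNet` class in `model.py` (listed below) ends in an `nn.Softmax(dim=1)` layer, so if the `EEGNet.py` module imported above follows the same design, the loss is computed on probabilities rather than logits; training still works, but gradients tend to shrink. The snippet below is a minimal, self-contained sketch of the canonical logits-based pattern; the feature size, batch size, and class count are placeholders chosen for illustration, not values taken from this repository.

```python
import torch
import torch.nn as nn

# Sketch of the usual logits + CrossEntropyLoss pairing (placeholder sizes only).
head = nn.Linear(128, 4, bias=False)      # classifier head returning raw logits, no Softmax
criterion = nn.CrossEntropyLoss()         # applies log_softmax + NLL internally

features = torch.randn(32, 128)           # dummy batch of flattened features
targets = torch.randint(0, 4, (32,))      # dummy integer class labels

logits = head(features)
loss = criterion(logits, targets)         # loss computed directly from the logits
loss.backward()

preds = logits.argmax(dim=1)              # argmax of logits equals argmax of Softmax outputs
```

Predictions are unchanged either way, since Softmax is monotonic; only the loss and its gradients differ.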
--------------------------------------------------------------------------------
/EEGNet_Pytorch/model.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch
3 | # torch.set_default_tensor_type(torch.cuda.FloatTensor)
4 | import torch.nn as nn
5 | from torch.nn.modules.module import _addindent
6 | 
7 | 
8 | class Conv2dWithConstraint(nn.Conv2d):
9 |     def __init__(self, *args, max_norm=1, **kwargs):
10 |         self.max_norm = max_norm
11 |         super(Conv2dWithConstraint, self).__init__(*args, **kwargs)
12 | 
13 |     def forward(self, x):
14 |         self.weight.data = torch.renorm(
15 |             self.weight.data, p=2, dim=0, maxnorm=self.max_norm
16 |         )
17 |         return super(Conv2dWithConstraint, self).forward(x)
18 | 
19 | 
20 | class EEGNet(nn.Module):
21 |     def __init__(self, n_classes=4, channels=60, samples=151,
22 |                  dropoutRate=0.5, kernelLength=64, kernelLength2=16, F1=8,
23 |                  D=2, F2=16):
24 |         super(EEGNet, self).__init__()
25 |         self.F1 = F1
26 |         self.F2 = F2
27 |         self.D = D
28 |         self.samples = samples
29 |         self.n_classes = n_classes
30 |         self.channels = channels
31 |         self.kernelLength = kernelLength
32 |         self.kernelLength2 = kernelLength2
33 |         self.dropoutRate = dropoutRate
34 | 
35 |         self.blocks = self.InitialBlocks(dropoutRate)
36 |         self.blockOutputSize = self.CalculateOutSize(self.blocks, channels, samples)
37 |         self.classifierBlock = self.ClassifierBlock(self.F2 * self.blockOutputSize[1], n_classes)
38 | 
39 |     def InitialBlocks(self, dropoutRate, *args, **kwargs):
40 |         block1 = nn.Sequential(
41 |             nn.Conv2d(1, self.F1, (1, self.kernelLength), stride=1, padding=(0, self.kernelLength // 2), bias=False),
42 |             nn.BatchNorm2d(self.F1, momentum=0.01, affine=True, eps=1e-3),
43 | 
44 |             # DepthwiseConv2D =======================
45 |             Conv2dWithConstraint(self.F1, self.F1 * self.D, (self.channels, 1), max_norm=1, stride=1, padding=(0, 0),
46 |                                  groups=self.F1, bias=False),
47 |             # ========================================
48 | 
49 |             nn.BatchNorm2d(self.F1 * self.D, momentum=0.01, affine=True, eps=1e-3),
50 |             nn.ELU(),
51 |             nn.AvgPool2d((1, 4), stride=4),
52 |             nn.Dropout(p=dropoutRate))
53 |         block2 = nn.Sequential(
54 |             # SeparableConv2D =======================
55 |             nn.Conv2d(self.F1 * self.D, self.F1 * self.D, (1, self.kernelLength2), stride=1,
56 |                       padding=(0, self.kernelLength2 // 2), bias=False, groups=self.F1 * self.D),
57 |             nn.Conv2d(self.F1 * self.D, self.F2, 1, padding=(0, 0), groups=1, bias=False, stride=1),
58 |             # ========================================
59 | 
60 |             nn.BatchNorm2d(self.F2, momentum=0.01, affine=True, eps=1e-3),
61 |             nn.ELU(),
62 |             nn.AvgPool2d((1, 8), stride=8),
63 |             nn.Dropout(p=dropoutRate))
64 |         return nn.Sequential(block1, block2)
65 | 
66 | 
67 |     def ClassifierBlock(self, inputSize, n_classes):
68 |         return nn.Sequential(
69 |             nn.Linear(inputSize, n_classes, bias=False),
70 |             nn.Softmax(dim=1))
71 | 
72 |     def CalculateOutSize(self, model, channels, samples):
73 |         '''
74 |         Calculate the output size of the convolutional blocks for an input of
75 |         shape (1, 1, channels, samples); model is an nn.Module.
76 |         '''
77 |         data = torch.rand(1, 1, channels, samples)
78 |         model.eval()
79 |         out = model(data).shape
80 |         return out[2:]
81 | 
82 |     def forward(self, x):
83 |         x = self.blocks(x)
84 |         x = x.view(x.size()[0], -1)  # Flatten
85 |         x = self.classifierBlock(x)
86 | 
87 |         return x
88 | 
89 | def categorical_cross_entropy(y_pred, y_true):
90 |     # y_pred = y_pred.cuda()
91 |     # y_true = y_true.cuda()
92 |     y_pred = torch.clamp(y_pred, 1e-9, 1 - 1e-9)
93 |     return -(y_true * torch.log(y_pred)).sum(dim=1).mean()
94 | 
95 | def torch_summarize(model, show_weights=True, show_parameters=True):
96 |     """Summarizes torch model by showing trainable parameters and weights."""
97 |     tmpstr = model.__class__.__name__ + ' (\n'
98 |     for key, module in model._modules.items():
99 |         # if it contains layers, call it recursively to get params and weights
100 |         if type(module) in [
101 |             torch.nn.modules.container.Container,
102 |             torch.nn.modules.container.Sequential
103 |         ]:
104 |             modstr = torch_summarize(module)
105 |         else:
106 |             modstr = module.__repr__()
107 |         modstr = _addindent(modstr, 2)
108 | 
109 |         params = sum([np.prod(p.size()) for p in module.parameters()])
110 |         weights = tuple([tuple(p.size()) for p in module.parameters()])
111 | 
112 |         tmpstr += ' (' + key + '): ' + modstr
113 |         if show_weights:
114 |             tmpstr += ', weights={}'.format(weights)
115 |         if show_parameters:
116 |             tmpstr += ', parameters={}'.format(params)
117 |         tmpstr += '\n'
118 | 
119 |     tmpstr = tmpstr + ')'
120 |     return tmpstr
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # eegnet-pytorch
2 | EEGNet implemented in PyTorch, to make debugging easier.
--------------------------------------------------------------------------------
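
As a quick-start complement to the listings above, here is a minimal usage sketch for the `EEGNet_Pytorch/model.py` implementation. It assumes it is run from inside `EEGNet_Pytorch/` so that `model.py` is importable; the 22-channel, 256-sample input shape is only a placeholder and should be replaced by whatever the epoched EEG data actually provides.

```python
import torch
from model import EEGNet  # the EEGNet_Pytorch/model.py implementation listed above

# Placeholder dimensions: 22 EEG channels, 256 time samples, 4 motor-imagery classes.
net = EEGNet(n_classes=4, channels=22, samples=256,
             dropoutRate=0.5, kernelLength=64, kernelLength2=16, F1=8, D=2, F2=16)
net.eval()

x = torch.rand(8, 1, 22, 256)    # (batch, 1, channels, samples)
with torch.no_grad():
    probs = net(x)               # (8, 4) class probabilities from the Softmax head
print(probs.argmax(dim=1))       # predicted class index per trial
```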