├── requirements.txt ├── README.md ├── Opening and sorting the files.py └── Inception classifier.py /requirements.txt: -------------------------------------------------------------------------------- 1 | Bottleneck==1.2.1 2 | checkbox-support==0.22 3 | command-not-found==0.3 4 | CommonMark==0.7.5 5 | feedparser==5.1.3 6 | h5py==2.8.0 7 | imageio==2.5.0 8 | ipython==2.4.1 9 | Keras==2.2.4 10 | Keras-Applications==1.0.6 11 | keras-contrib==2.0.8 12 | Keras-Preprocessing==1.0.5 13 | keyring==12.0.1 14 | keyrings.alt==3.0 15 | lxml==3.5.0 16 | matplotlib==1.5.1 17 | mne==1.2.3 18 | numpy==1.15.4 19 | Orange3==3.13.0 20 | pandas==0.23.0 21 | pep8==1.7.0 22 | pexpect==4.0.1 23 | Pillow==3.1.2 24 | protobuf==3.6.1 25 | psutil==3.4.2 26 | pycups==1.9.73 27 | pycurl==7.43.0 28 | pydot==1.2.3 29 | pyflakes==1.1.0 30 | Pygments==2.1 31 | pygobject==3.20.0 32 | pyparsing==2.2.0 33 | python-apt==1.1.0b1+ubuntu0.16.4.2 34 | python-dateutil==2.7.3 35 | python-debian==0.1.27 36 | python-systemd==231 37 | pytz==2018.4 38 | pyxdg==0.25 39 | PyYAML==5.1 40 | pyzmq==15.2.0 41 | reportlab==3.3.0 42 | requests==2.18.4 43 | scikit-learn==0.19.1 44 | scipy==1.4.0.dev0+1953c76 45 | seaborn==0.8.1 46 | SecretStorage==2.3.1 47 | sklearn==0.0 48 | spyder==2.3.8 49 | ssh-import-id==5.5 50 | system-service==0.3 51 | tables==3.2.2 52 | tensorboard==1.12.0 53 | tensorflow-gpu==1.12.0 54 | termcolor==1.1.0 55 | tslearn==0.1.28.1 56 | ubuntu-drivers-common==0.0.0 57 | virtualenv==15.0.1 58 | Werkzeug==0.15.3 59 | xdiagnose==3.8.4.1 60 | xkit==0.0.0 61 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # MDD-Detection-with-EEG-Signals-using-a-Time-Series-Approach 2 | This is a repository for our paper titled [Automated Detection of Major Depressive Disorder with EEG Signals: A Time Series Classification Using Deep Learning](https://ieeexplore.ieee.org/document/9828387) published in [IEEE Access](https://ieeexplore.ieee.org/document/9828387) 3 | This study focuses on the automated detection of MDD using EEG data and deep neural network architecture. For this aim, first, a customized InceptionTime model is recruited to detect MDD individuals via 19-channel raw EEG signals. Then, a channel-selection strategy, which comprises three channel-selection steps, is conducted to omit redundant channels. 4 | 5 | The original InceptionTime paper also is available on [here](https://arxiv.org/pdf/1909.04939.pdf). 6 | 7 | 8 | ## The proposed Inception network architecture 9 | ![comgit](https://user-images.githubusercontent.com/96019816/162617323-416d4fec-b6ad-4a6e-afba-396e6b837392.jpg) 10 | 11 | ## Data 12 | The data used in this project comes from the [MDD Patients and Healthy Controls EEG Data](https://figshare.com/articles/dataset/EEG_Data_New/4244171). 13 | 14 | 15 | ## Requirements 16 | You will need to install the following packages present in the [requirements.txt](https://github.com/AlirezaRafiei9/Detection-of-MDD-with-EEG-Signals-using-InceptionTIme-model/blob/master/requirements.txt) file. 17 | 18 | ## Code 19 | The code is divided as follows: 20 | * The [Inception classifier](https://https://github.com/AlirezaRafiei9/Detection-of-MDD-with-EEG-Signals-using-InceptionTIme-model/blob/main/Inception%20classifier) python file contains the Inception module python code using Keras library. 21 | * The [Opening and sorting the files](https://https://github.com/AlirezaRafiei9/Detection-of-MDD-with-EEG-Signals-using-InceptionTIme-model/blob/main/Opening%20and%20sorting%20the%20files) python folder contains the steps of opening and labeling the files. 22 | * The [Channel selection](https://https://github.com/AlirezaRafiei9/Detection-of-MDD-with-EEG-Signals-using-InceptionTIme-model/blob/main/Channel%20selection) python file involves general concepts of the channel selections approaches. 23 | 24 | 25 | ## Reference 26 | 27 | If you are interested in this work, please cite: 28 | 29 | ``` 30 | @ARTICLE{9828387, 31 | author={Rafiei, Alireza and Zahedifar, Rasoul and Sitaula, Chiranjibi and Marzbanrad, Faezeh}, 32 | journal={IEEE Access}, 33 | title={Automated Detection of Major Depressive Disorder With EEG Signals: A Time Series Classification Using Deep Learning}, 34 | year={2022}, 35 | volume={10}, 36 | number={}, 37 | pages={73804-73817}, 38 | doi={10.1109/ACCESS.2022.3190502}} 39 | ``` 40 | -------------------------------------------------------------------------------- /Opening and sorting the files.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import mne 3 | from os.path import exists 4 | from sklearn import preprocessing 5 | 6 | # Define the number of healthy and MDD (Major Depressive Disorder) cases 7 | H_num = # Number of healthy cases 8 | MDD_num = # Number of MDD cases 9 | 10 | # Initialize lists to store data and labels 11 | v = [] 12 | Data = [] 13 | Label = [] 14 | 15 | # Define the categories for healthy and MDD cases 16 | HorMDD = ['H', 'MDD'] 17 | # Define the eye states: Open (O) and Closed (C) 18 | eye = ['O', 'C'] 19 | 20 | # Iterate over each category (healthy and MDD) 21 | for x in HorMDD: 22 | # Iterate over each eye state 23 | for y in eye: 24 | # Process healthy cases 25 | if x == 'H': 26 | for i in range(1, H_num): 27 | # Check if the file exists 28 | if exists('/your directory/H S{} E{}.edf'.format(i, y)): 29 | file = '/your directory/H S{} E{}.edf'.format(i, y) 30 | 31 | # Read the EEG data from the .edf file 32 | data = mne.io.read_raw_edf(file) 33 | raw_data = data.get_data() 34 | num_rows, num_cols = raw_data.shape 35 | 36 | # Ensure the number of rows is correct 37 | if num_rows > 19: 38 | print(num_rows) 39 | raw_data = np.delete(raw_data, 19, 0) 40 | 41 | # Ensure the data length is sufficient 42 | if len(raw_data[1]) < 61440: 43 | v.append(file) 44 | continue 45 | 46 | # Trim the data to the desired length 47 | raw_data = raw_data[:, :61440] 48 | Data.append(raw_data) 49 | Label.append(0) # Label 0 for healthy cases 50 | 51 | # Save the processed data and labels 52 | np.save('/your directory/Data/H S{} E{}.npy'.format(i, y), raw_data) 53 | np.save('/your directory/Label/H S{} E{}.npy'.format(i, y), 0) 54 | else: 55 | continue 56 | 57 | # Process MDD cases 58 | if x == 'MDD': 59 | for j in range(1, MDD_num): 60 | # Check if the file exists 61 | if exists('/your directory/MDD S{} E{}.edf'.format(j, y)): 62 | file = '/your directory/MDD S{} E{}.edf'.format(j, y) 63 | 64 | # Read the EEG data from the .edf file 65 | data = mne.io.read_raw_edf(file) 66 | raw_data = data.get_data() 67 | num_rows, num_cols = raw_data.shape 68 | 69 | # Ensure the number of rows is correct 70 | if num_rows > 19: 71 | print(num_rows) 72 | raw_data = np.delete(raw_data, 19, 0) 73 | 74 | # Ensure the data length is sufficient 75 | if len(raw_data[1]) < 61440: 76 | v.append(file) 77 | continue 78 | 79 | # Trim the data to the desired length 80 | raw_data = raw_data[:, :61440] 81 | Data.append(raw_data) 82 | Label.append(1) # Label 1 for MDD cases 83 | 84 | # Save the processed data and labels 85 | np.save('/your directory/Data/MDD S{} E{}.npy'.format(j, y), raw_data) 86 | np.save('/your directory/Label/MDD S{} E{}.npy'.format(j, y), 1) 87 | else: 88 | continue 89 | 90 | # Convert the lists to numpy arrays 91 | Data = np.asarray(Data) 92 | Label = np.asarray(Label) 93 | -------------------------------------------------------------------------------- /Inception classifier.py: -------------------------------------------------------------------------------- 1 | # Import necessary libraries 2 | import keras 3 | import numpy as np 4 | 5 | # Define constants for model configuration 6 | output_directory = '/output' # Directory to save output files 7 | input_shape = [19, 61440] # Input shape: 19 channels and 4-minute records 8 | nb_classes = 2 # Number of classes: MDD or Healthy 9 | nb_filters = 64 # Number of filters in Conv layers 10 | verbose = True # Verbose output during training 11 | use_residual = True # Use residual connections 12 | use_bottleneck = True # Use bottleneck layers 13 | depth = 6 # Depth of the model 14 | kernel_size = 41 - 1 # Kernel size for Conv layers 15 | callbacks = None # Callbacks for training 16 | batch_size = 32 # Batch size for training 17 | mini_batch_size = 32 # Mini-batch size for training 18 | bottleneck_size = 57 # Bottleneck size 19 | nb_epochs = 1500 # Number of epochs for training 20 | 21 | # Define the Inception module 22 | def _inception_module(input_tensor, stride=1, activation='linear'): 23 | """ 24 | Create an inception module. 25 | 26 | Args: 27 | input_tensor: Input tensor for the inception module. 28 | stride: Stride value for the convolutions (default is 1). 29 | activation: Activation function to use (default is 'linear'). 30 | 31 | Returns: 32 | x: Output tensor after applying inception module. 33 | """ 34 | if use_bottleneck and int(input_tensor.shape[-1]) > 1: 35 | input_inception = keras.layers.Conv1D(filters=bottleneck_size, kernel_size=1, 36 | padding='same', activation=activation, use_bias=False)(input_tensor) 37 | else: 38 | input_inception = input_tensor 39 | 40 | kernel_size_s = [kernel_size // (2 ** i) for i in range(3)] 41 | 42 | conv_list = [] 43 | 44 | for i in range(len(kernel_size_s)): 45 | conv_list.append(keras.layers.Conv1D(filters=nb_filters, kernel_size=kernel_size_s[i], 46 | strides=stride, padding='same', activation=activation, use_bias=False)( 47 | input_inception)) 48 | 49 | max_pool_1 = keras.layers.MaxPool1D(pool_size=3, strides=stride, padding='same')(input_tensor) 50 | 51 | conv_6 = keras.layers.Conv1D(filters=nb_filters, kernel_size=1, 52 | padding='same', activation=activation, use_bias=False)(max_pool_1) 53 | 54 | conv_list.append(conv_6) 55 | 56 | x = keras.layers.Concatenate(axis=2)(conv_list) 57 | x = keras.layers.BatchNormalization()(x) 58 | x = keras.layers.Activation(activation='relu')(x) 59 | return x 60 | 61 | # Define the shortcut layer for residual connections 62 | def _shortcut_layer(input_tensor, out_tensor): 63 | """ 64 | Create a shortcut (residual) connection between two tensors. 65 | 66 | Args: 67 | input_tensor: Input tensor to apply the shortcut to. 68 | out_tensor: Output tensor to add the shortcut to. 69 | 70 | Returns: 71 | x: Output tensor after adding the shortcut connection. 72 | """ 73 | shortcut_y = keras.layers.Conv1D(filters=int(out_tensor.shape[-1]), kernel_size=1, 74 | padding='same', use_bias=False)(input_tensor) 75 | shortcut_y = keras.layers.BatchNormalization()(shortcut_y) 76 | 77 | x = keras.layers.Add()([shortcut_y, out_tensor]) 78 | x = keras.layers.Activation('relu')(x) 79 | return x 80 | 81 | # Build the model using the defined Inception and shortcut layers 82 | def build_model(input_shape, nb_classes): 83 | """ 84 | Build the neural network model. 85 | 86 | Args: 87 | input_shape: Shape of the input data. 88 | nb_classes: Number of output classes. 89 | 90 | Returns: 91 | model: Compiled Keras model. 92 | """ 93 | input_layer = keras.layers.Input(input_shape) 94 | 95 | x = input_layer 96 | input_res = input_layer 97 | 98 | for d in range(depth): 99 | x = _inception_module(x) 100 | 101 | if use_residual and d % 3 == 2: 102 | x = _shortcut_layer(input_res, x) 103 | input_res = x 104 | 105 | gap_layer = keras.layers.GlobalAveragePooling1D()(x) 106 | 107 | output_layer = keras.layers.Dense(nb_classes, activation='softmax')(gap_layer) 108 | 109 | model = keras.models.Model(inputs=input_layer, outputs=output_layer) 110 | 111 | model.compile(loss='mean_squared_error', optimizer=keras.optimizer_v2.gradient_descent.SGD(learning_rate=0.01, momentum=0.9, nesterov=True), 112 | metrics=['accuracy']) 113 | 114 | # Reduce learning rate on plateau callback 115 | reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=50, min_lr=0.0001) 116 | 117 | file_path = output_directory + '/best_model.hdf5' 118 | 119 | # Model checkpoint callback to save the best model 120 | model_checkpoint = keras.callbacks.ModelCheckpoint(filepath=file_path, monitor='loss', 121 | save_best_only=True) 122 | 123 | callbacks = [reduce_lr, model_checkpoint] 124 | 125 | return model 126 | 127 | # Build and compile the model 128 | model = build_model(input_shape, nb_classes) 129 | model.summary() 130 | 131 | # Save the initial weights of the model 132 | model.save_weights(output_directory + '/model_init.hdf5') 133 | 134 | # Train the model and save the training history 135 | hist = model.fit(X_train, y_train, batch_size=mini_batch_size, epochs=nb_epochs, validation_split=0.15, shuffle=True, 136 | verbose=verbose, callbacks=callbacks) 137 | 138 | # Save the final model 139 | model.save(output_directory + '/last_model.hdf5') 140 | 141 | # Extract and store training history 142 | History = hist.history 143 | losses = History['loss'] 144 | accuracies = History['accuracy'] 145 | --------------------------------------------------------------------------------