├── architecture.png
├── README.md
└── ResBiLSTM_model.py

/architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/snailpt/ResBiLSTM/HEAD/architecture.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# ResBiLSTM [[Paper](https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2024.1415967/full)]
## Residual and bidirectional LSTM for epileptic seizure detection

## Abstract
Electroencephalogram (EEG) plays a pivotal role in the detection and analysis of epileptic seizures, which affect over 70 million people worldwide. Nonetheless, the visual interpretation of EEG signals for epilepsy detection is laborious and time-consuming. To tackle this open challenge, we introduce a straightforward yet efficient hybrid deep learning approach, named ResBiLSTM, for detecting epileptic seizures from EEG signals. First, a one-dimensional residual neural network (ResNet) is tailored to extract the local spatial features of EEG signals. The acquired features are then fed into a bidirectional long short-term memory (BiLSTM) layer to model temporal dependencies, and the BiLSTM outputs are passed through two fully connected layers to produce the final seizure prediction. The performance of ResBiLSTM is assessed on the epileptic seizure datasets provided by the University of Bonn and Temple University Hospital (TUH). ResBiLSTM achieves seizure detection accuracies of 98.88–100% in binary and ternary classification on the Bonn dataset. For seizure recognition across seven seizure types on the TUH seizure corpus (TUSZ), ResBiLSTM attains a classification accuracy of 95.03% and a weighted F1-score of 95.03% with 10-fold cross-validation. These findings show that ResBiLSTM outperforms several recent state-of-the-art deep learning approaches.

## Overall Framework:
![architecture of ResBiLSTM](https://raw.githubusercontent.com/snailpt/ResBiLSTM/main/architecture.png)

## The TUSZ Dataset
The TUSZ dataset is one of the largest and most widely acknowledged open-source epilepsy EEG datasets available to researchers, offering detailed clinical case descriptions. It includes annotations of the timing and type of each epileptic seizure, as well as comprehensive patient information such as sex, age, medications, clinical history, seizure event count, and duration. Our study used the May 2020 release of the corpus (V1.5.2), comprising 3050 seizure cases across eight distinct seizure types, recorded at various sampling frequencies and montages. The seizure types are Focal Non-Specific Seizure (FNSZ), Generalized Non-Specific Seizure (GNSZ), Absence Seizure (ABSZ), Complex Partial Seizure (CPSZ), Tonic Clonic Seizure (TCSZ), Tonic Seizure (TNSZ), Simple Partial Seizure (SPSZ), and Myoclonic Seizure (MYSZ), as detailed in Table 1 of the paper. Because of the limited number of MYSZ events, we excluded this type and focused on the remaining seven seizure categories for analysis; a sketch of the resulting label mapping is shown below.
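For reference, the following is a minimal sketch of how the seven retained seizure types could be mapped to integer class labels for training. The index order and the helper name `encode_label` are illustrative assumptions made for this sketch; the paper does not prescribe a particular mapping here.

```python
# Illustrative mapping of the seven retained TUSZ seizure types (MYSZ excluded)
# to class indices. The ordering is an assumption made for this sketch only.
SEIZURE_LABELS = {
    "FNSZ": 0,  # Focal Non-Specific Seizure
    "GNSZ": 1,  # Generalized Non-Specific Seizure
    "ABSZ": 2,  # Absence Seizure
    "CPSZ": 3,  # Complex Partial Seizure
    "TCSZ": 4,  # Tonic Clonic Seizure
    "TNSZ": 5,  # Tonic Seizure
    "SPSZ": 6,  # Simple Partial Seizure
}

def encode_label(seizure_type: str) -> int:
    """Map a TUSZ seizure-type code to an integer class index (MYSZ is not used)."""
    return SEIZURE_LABELS[seizure_type.upper()]
```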
### Citation
Hope this code can be useful. I would appreciate you citing us in your paper. 😊

Zhao W, Wang W-F, Patnaik LM, Zhang B-C, Weng S-J, Xiao S-X, Wei D-Z and Zhou H-F (2024) Residual and bidirectional LSTM for epileptic seizure detection. Front. Comput. Neurosci. 18:1415967. doi: 10.3389/fncom.2024.1415967

### Communication
QQ discussion group (Motor imagery, seizure detection, and seizure type classification [TUSZ]): 837800443

Email: zhaowei701@163.com
--------------------------------------------------------------------------------
/ResBiLSTM_model.py:
--------------------------------------------------------------------------------
'''
Zhao W, Wang W F, Patnaik L M, et al. Residual and bidirectional LSTM for epileptic seizure detection[J].
Frontiers in Computational Neuroscience, 2024, 18: 1415967.

Author: zhaowei701@163.com
'''

import torch
import torch.nn as nn
import torch.nn.functional as F  # required by the F.relu / F.dropout calls in the forward pass


class ResnetBasicBlock(nn.Module):
    '''
    Basic convolutional block of the residual network.
    If the input and output differ in channel count or temporal resolution,
    the shortcut branch applies a 1x1 convolution (with batch normalization)
    so that the two tensors match before the addition.
    '''
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.stride = stride

        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=5, padding=2, stride=stride, bias=False)
        self.bn1 = nn.BatchNorm1d(out_channels)

        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=5, padding=2, stride=1, bias=False)
        self.bn2 = nn.BatchNorm1d(out_channels)

        # Projection shortcut, used only when the identity mapping does not match in shape
        self.conv1x1 = nn.Conv1d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False)
        self.bn1x1 = nn.BatchNorm1d(out_channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.stride != 1 or self.in_channels != self.out_channels:
            residual = self.bn1x1(self.conv1x1(x))
        else:
            residual = x
        out += residual

        return torch.relu(out)
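
# Shape sketch (illustrative): for a single-channel EEG segment of length L,
# ResnetBasicBlock(1, 64, stride=2) maps (batch, 1, L) to (batch, 64, ceil(L / 2)),
# because kernel_size=5 with padding=2 preserves the length and only the stride
# downsamples the temporal axis. For example:
#     block = ResnetBasicBlock(1, 64, stride=2)
#     y = block(torch.randn(8, 1, 1024))   # y.shape == (8, 64, 512)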


class ResBiLSTMNet(nn.Module):
    def __init__(self, classify_number, number_RnnCell, number_fc1):
        # For the model hyperparameter settings, please refer to the description in the paper.
        super().__init__()

        # in_channels of the first block (here 1) should be changed to match the
        # number of EEG channels in your data.
        self.block_1 = nn.Sequential(ResnetBasicBlock(1, 64, 2),
                                     nn.Dropout(p=0.2)
                                     )

        self.block_2 = nn.Sequential(ResnetBasicBlock(64, 64, 1),
                                     nn.Dropout(p=0.2)
                                     )

        self.block_3 = nn.Sequential(ResnetBasicBlock(64, 128, 2),
                                     nn.Dropout(p=0.2)
                                     )

        self.lstm = nn.LSTM(128, number_RnnCell, batch_first=True, num_layers=1, bidirectional=True)
        self.fc1 = nn.Linear(number_RnnCell * 2, number_fc1)
        self.fc2 = nn.Linear(number_fc1, classify_number)

    def forward(self, x):
        x = self.block_1(x)
        x = self.block_2(x)
        x = self.block_3(x)

        # Rearrange (batch, channels, time) to (batch, time, channels) for the LSTM
        x = x.permute(0, 2, 1)
        # BiLSTM
        x_out, (h, c) = self.lstm(x)
        # Concatenate the final hidden states of the forward and backward directions
        x = torch.cat([h[0], h[1]], dim=1)
        x = F.dropout(x, p=0.2, training=self.training)
        x = F.dropout(F.relu(self.fc1(x)), p=0.5, training=self.training)
        x = self.fc2(x)
        return x
--------------------------------------------------------------------------------
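A minimal usage sketch of `ResBiLSTMNet` follows. The hyperparameter values, the single-channel input, and the segment length of 1024 samples are illustrative assumptions for this example, not the settings reported in the paper.

```python
import torch
from ResBiLSTM_model import ResBiLSTMNet

# Assumed, illustrative hyperparameters; see the paper for the actual settings.
model = ResBiLSTMNet(classify_number=7,   # e.g. the seven retained TUSZ seizure types
                     number_RnnCell=64,   # BiLSTM hidden size (assumption)
                     number_fc1=32)       # width of the first fully connected layer (assumption)

# Dummy batch: 16 single-channel EEG segments of 1024 samples each.
x = torch.randn(16, 1, 1024)

model.eval()
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # torch.Size([16, 7])
```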