├── .gitignore
├── .vscode
│   └── settings.json
├── LICENSE
├── README.md
├── __init__.py
├── architectures
│   ├── __init__.py
│   ├── build_architecture.py
│   └── segformer3d.py
├── augmentations
│   ├── __init__.py
│   └── augmentations.py
├── cvprw_poster.pdf
├── data
│   ├── brats2017_seg
│   │   ├── brats2017_raw_data
│   │   │   ├── brats2017_seg_preprocess.py
│   │   │   └── datameta_generator
│   │   │       ├── create_train_val_kfold_csv.py
│   │   │       ├── create_train_val_test_csv.py
│   │   │       ├── nnformer_train_test_split.py
│   │   │       ├── train.csv
│   │   │       ├── train_fold_1.csv
│   │   │       ├── train_fold_2.csv
│   │   │       ├── train_fold_3.csv
│   │   │       ├── train_fold_4.csv
│   │   │       ├── train_fold_5.csv
│   │   │       ├── validation.csv
│   │   │       ├── validation_fold_1.csv
│   │   │       ├── validation_fold_2.csv
│   │   │       ├── validation_fold_3.csv
│   │   │       ├── validation_fold_4.csv
│   │   │       └── validation_fold_5.csv
│   │   ├── train.csv
│   │   └── validation.csv
│   └── brats2021_seg
│       ├── brats2021_raw_data
│       │   ├── brats2021_seg_preprocess.py
│       │   └── datameta_generator
│       │       ├── create_train_val_kfold_csv.py
│       │       ├── create_train_val_test_csv.py
│       │       ├── create_train_val_test_csv_v2.py
│       │       ├── train_v2.csv
│       │       └── validation_v2.csv
│       ├── train.csv
│       └── validation.csv
├── dataloaders
│   ├── __init__.py
│   ├── brats2017_seg.py
│   ├── brats2021_seg.py
│   └── build_dataset.py
├── experiments
│   └── brats_2017
│       ├── best_brats_2017_exp_dice_82.07
│       │   ├── config.yaml
│       │   ├── run_experiment.py
│       │   └── single_gpu_accelerate.yaml
│       └── template_experiment
│           ├── config.yaml
│           ├── gpu_accelerate.yaml
│           └── run_experiment.py
├── losses
│   ├── __init__.py
│   └── losses.py
├── metrics
│   └── segmentation_metrics.py
├── notebooks
│   └── model_profiler.ipynb
├── optimizers
│   ├── __init__.py
│   ├── optimizers.py
│   └── schedulers.py
├── requirements.txt
├── resources
│   ├── acdc_quant.png
│   ├── acdc_segformer_3D.png
│   ├── brats_plot.png
│   ├── brats_quant.png
│   ├── brats_segformer_3D.png
│   ├── param_count_table.PNG
│   ├── segformer_3D.png
│   ├── synapse_quant.png
│   └── synapse_segformer_3D.png
└── train_scripts
    ├── __init__.py
    ├── trainer_ddp.py
    └── utils.py
/.gitignore:
--------------------------------------------------------------------------------
1 | /data/brats2017_seg/BraTS2017_Training_Data/*
2 | /data/brats2017_seg/brats2017_raw_data/train/*
3 |
4 | /data/brats2021_seg/BraTS2021_Training_Data/*
5 | /data/brats2021_seg/brats2021_raw_data/train
6 |
7 |
8 | /visualization/*
9 |
10 |
11 | **/wandb
12 | **/__pycache__
13 | **.pth
14 | **.pt
15 | **.model
16 | **.zip
17 | **.pkl
--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "ros.distro": "foxy"
3 | }
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SegFormer3D
2 | ## An Efficient Transformer for 3D Medical Image Segmentation
3 |
4 |
5 | 
6 | 
7 | 
8 |
9 |
10 | SegFormer3D is a lightweight and efficient hierarchical Transformer designed for 3D volumetric segmentation. It computes attention across multiscale volumetric features and avoids complex decoders, using instead a simple yet effective all-MLP decoder to aggregate local and global attention features into highly accurate segmentation masks. Published at the DEF-AI-MIA workshop at CVPR 2024.
11 |
12 | The implementation of the SegFormer3D architecture is in `architectures/segformer3d.py`. The experimental setup and implementation details are thoroughly explained in our [paper](https://arxiv.org/abs/2404.10156).
13 |
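For orientation, here is a minimal sketch of a forward pass (it mirrors the `__main__` block at the bottom of `architectures/segformer3d.py`, run on CPU):
```python
import torch
from architectures.segformer3d import SegFormer3D

model = SegFormer3D()                  # defaults: 4 input channels, 3 output classes
x = torch.randn(1, 4, 128, 128, 128)   # (batch, modalities, depth, height, width)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 3, 128, 128, 128])
```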
14 | A step-by-step guideline for BraTS is provided below. Other datasets are coming soon.
15 |
16 | Contributions are highly encouraged 🙂
17 |
18 | ## :rocket: News
19 | * **(April 15, 2024):** SegFormer3D code & weights are released for BraTS.
20 |
21 |
22 | ## Table of Contents
23 | Follow steps 1-3 to run our pipeline.
24 | 1. [Installation](#Installation)
25 | 2. [Prepare the Dataset](#Prepare-the-Dataset)
26 | - [preprocessing](#preprocessing)
27 | - [data split (optional)](#data-split-Optional)
28 | 3. [Run Your Experiment](#Run-Your-Experiment)
29 | 4. [Visualization](#Visualization)
30 | 5. [Results](#Results)
31 | - [BraTs](#Brain-Tumor-Segmentation-Dataset-BraTs)
32 | - [Synapse](#Multi-Organ-CT-Segmentation-Dataset-Synapse)
33 | - [ACDC](#Automated-Cardiac-Diagnosis-Dataset-ACDC)
34 |
35 | ## Installation
36 | Make sure you have [conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html#regular-installation) installed and run the following commands in your terminal:
37 | ```shell
38 | git clone https://github.com/OSUPCVLab/SegFormer3D.git
39 | cd SegFormer3D
40 | conda create -n "segformer3d" python=3.11.7 ipython==8.12.0
41 | conda activate segformer3d
42 | conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=11.8 -c pytorch -c nvidia
43 | # make sure pip points to the created conda environment before installing the requirements
44 | python -m pip install -r requirements.txt
45 | ```
46 | For convenience, we call SegFormer3D repo's home directory `SEGFORMER3D_HOME`.
47 | ## Prepare the Dataset
48 | 🚨 Note: The processing code for both ACDC and Synapse datasets was implemented using the [nnFormer](https://github.com/282857341/nnFormer) repository. We chose to use this codebase to ensure a fair comparison with baseline architectures like nnFormer, which apply specific techniques during test-time data processing. Given the complexity of the nnFormer pipeline, offering a simplified, standalone guide for testing would have essentially required building a separate repository on top of SegFormer3D—something we opted not to do.
49 |
50 | ### preprocessing
51 | First we need to make two folders (I call them `train` and `BraTS2017_Training_Data`) using the terminal:
52 | ```shell
53 | cd SEGFORMER3D_HOME
54 | mkdir "./data/brats2017_seg/brats2017_raw_data/train"
55 | mkdir "./data/brats2017_seg/BraTS2017_Training_Data"
56 | ```
57 | Download [Brats2017](https://drive.google.com/file/d/1LMrJRpcMjhsAT6tbstgB1GTdZBk5yKU8/view?usp=sharing). Open the zip file and extract the content into the `train` directory of the repo. Once the files are moved properly, you should see the following folder structure:
58 |
59 | ```
60 | SEGFORMER3D_HOME/data/brats2017_seg/brats2017_raw_data/
61 | │
62 | ├───train
63 | │ ├──imageTr
64 | │ │ └──BRATS_001_0000.nii.gz
65 | │ │ └──BRATS_001_0001.nii.gz
66 | │ │ └──BRATS_001_0002.nii.gz
67 | │ │ └──BRATS_001_0003.nii.gz
68 | │ │ └──BRATS_002_0000.nii.gz
69 | │ │ └──...
70 | │ ├──labelsTr
71 | │ │ └──BRATS_001.nii.gz
72 | │ │ └──BRATS_002.nii.gz
73 | │ │ └──...
74 | │ ├──imageTs
75 | │ │ └──BRATS_485_000.nii.gz
76 | │ │ └──BRATS_485_001.nii.gz
77 | │ │ └──BRATS_485_002.nii.gz
78 | │ │ └──BRATS_485_003.nii.gz
79 | │ │ └──BRATS_486_000.nii.gz
80 | │ │ └──...
81 | ```
82 | The `train` folder contains the 484 MRI scans of the raw BraTS2017 data.
83 |
84 | To accelerate the training process, we perform all the base preprocessing (zero cropping, collating the modalities, etc.) prior to actually training the model. Run the
85 | `brats2017_seg_preprocess.py` script, which saves the preprocessed volumes into the `BraTS2017_Training_Data` folder. Having the data preprocessed saves a lot of time during training.
86 |
87 | Simply run (this will take a while):
88 | ```shell
89 | cd SEGFORMER3D_HOME
90 | # assuming conda environment is still activated
91 | python ./data/brats2017_seg/brats2017_raw_data/brats2017_seg_preprocess.py
92 | # you need around 22 GB to store the preprocessed data
93 | ```
94 | After this step, the dataset is saved in `BraTS2017_Training_Data` and is ready to be used for training.
95 |
96 | Note: The BraTS2021 dataloader and preprocessor are also available in our repo. You can use them the same way as explained for BraTS2017.
97 | ### data split (optional)
98 | Inside `SEGFORMER3D_HOME/data/brats2017_seg` you will see two csv files called `train.csv` and `validation.csv`, which define the training and validation splits for your experiment. To generate different splits, go to `SEGFORMER3D_HOME/data/brats2017_seg/brats2017_raw_data/datameta_generator`.
99 |
100 | You can also find k-fold splits inside `SEGFORMER3D_HOME/data/brats2017_seg/brats2017_raw_data/datameta_generator`. For a k-fold experiment, change the `fold_id` inside the `train_dataset_args` and `val_dataset_args` sections of your experiment's `config.yaml`, using the same fold id for both. The default value is `fold_id: null`, which flags the dataloader to read `train.csv` and `validation.csv`. You can change it to your desired fold id; for example, setting `fold_id: 1` in `train_dataset_args` and `val_dataset_args` will make the dataloader read `train_fold_1.csv` and `validation_fold_1.csv` during training, as sketched below. Just make sure your desired splits are present in `SEGFORMER3D_HOME/data/brats2017_seg`.
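For example, a minimal sketch of the relevant `config.yaml` fields (only `fold_id`, `train_dataset_args`, and `val_dataset_args` are taken from this guide; the surrounding layout follows the template experiment and may differ):
```yaml
train_dataset_args:
  fold_id: 1   # reads train_fold_1.csv (fold_id: null -> train.csv)

val_dataset_args:
  fold_id: 1   # reads validation_fold_1.csv (fold_id: null -> validation.csv)
```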
101 |
102 | ## Run Your Experiment
103 | In order to run an experiment, we provide a template folder under `SEGFORMER3D_HOME/experiments/brats_2017/template_experiment` that you can use to set up your experiment. Simply copy the template folder (maybe rename it to something more meaningful) and run your experiment on a single GPU with:
104 | ```shell
105 | cd SEGFORMER3D_HOME
106 | cd ./experiments/brats_2017/your_experiment_folder_name/
107 | # the default gpu device is set to cuda:0 (you can change it)
108 | accelerate launch --config_file ./gpu_accelerate.yaml run_experiment.py
109 | # you can modify the cuda device id by changing the value of gpu_ids attribute in the gpu_accelerate.yaml
110 | # for example gpu_ids: '1' will run the experiment on cuda:1
111 | ```
112 | You might want to change the hyperparameters (batch size, learning rate, weight decay etc.) of your experiment. For that you need to edit the `config.yaml` file inside your experiment folder.
113 |
114 | As the experiment runs, the logs (train loss, validation loss and Dice score) are written to the terminal. You can log your experiment to [wandb](https://wandb.ai/site)
115 | (you need to set up an account there) by setting `mode: "online"` in the `wandb_parameters` section of the `config.yaml`. The default value is `mode: "offline"`. If you want to log results to your wandb account, put your wandb info into the `wandb_parameters` section of the `config.yaml`, and your entire experiment will be logged under your wandb entity (e.g. `pcvlab`) page, as sketched below.
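A hedged sketch of that section (only `mode` and the entity value are described above; the remaining field names are assumptions modeled on typical wandb configs, so check the template `config.yaml` for the exact keys):
```yaml
wandb_parameters:
  mode: "offline"                    # set to "online" to sync runs to wandb.ai
  entity: "pcvlab"                   # your wandb entity (team or username)
  project: "segformer3d_brats2017"   # hypothetical project name
```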
116 |
117 | During the experiment, the model checkpoints (optimizer state, model parameters, etc.) are saved in `SEGFORMER3D_HOME/experiments/brats_2017/your_experiment_folder_name/model_checkpoints/best_dice_checkpoint/` upon each improvement of the metric (Dice score). The model parameters are saved as `pytorch_model.bin`.
118 |
119 | You can download the SegFormer3D model weights trained on BraTS2017 from [[GoogleDrive]](https://drive.google.com/file/d/1MfcyyS6yEEC2-wQ5SHgC3v9sUVo285-I/view?usp=sharing).
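A minimal sketch for loading the downloaded weights (it assumes the checkpoint's state dict matches the default `SegFormer3D()` configuration; the file path is a placeholder):
```python
import torch
from architectures.segformer3d import SegFormer3D

model = SegFormer3D()  # default BraTS configuration: 4 input channels, 3 output classes
state_dict = torch.load("pytorch_model.bin", map_location="cpu")  # placeholder path
model.load_state_dict(state_dict)
model.eval()
```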
120 | ## Visualization
121 | You can download SegFormer3D visualization results from this [link](https://drive.google.com/file/d/1QXAcZbOAdMDOkQXAAXHGLl6Y52j8-Aok/view?usp=sharing). This is a standalone folder that you might want to tinker with. `viz_meta.yaml` contains the hex color code for each dataset and other meta information for `viz.py` to visualize the data. By changing the random seed in `viz.py` you will get different frames visualized from the 3D volume input. We did not set any specific policy for choosing which frames to visualize, because the volumetric inputs differ considerably from each other. Alternatively, you can modify `viz.py` according to your preference.
122 |
123 | We got the visualizations for nnFormer and UNETR from [nnFormer visualization results](https://drive.google.com/file/d/1Lb4rIkwIpuJS3tomBiKl7FBtNF2dv_6M/view).
124 | ## Results
125 | We benchmark SegFormer3D both qualitatively and quantitatively against current state-of-the-art models such as [nnFormer](https://github.com/282857341/nnFormer), [UNETR](https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/unetr_btcv_segmentation_3d.ipynb) and several other models on three widely used medical datasets: Synapse, BraTS2017, and ACDC. The quantitative results are based on the reported average Dice score.
126 | ### Brain Tumor Segmentation Dataset (BraTs)
127 | [BraTS](https://www.med.upenn.edu/sbia/brats2017/data.html) is a collection of Magnetic Resonance Imaging (MRI) scans containing 484 MRI images with four modalities: FLAIR, T1w, T1gd, and T2w. Data were collected from 19 institutions, with ground-truth labels for three types of tumor subregions: edema (ED), enhancing tumor (ET) and non-enhancing tumor (NET).
128 |
129 |
130 | 
131 | 
132 | 
133 |
134 |
135 | ### Multi-Organ CT Segmentation Dataset (Synapse)
136 | The [Synapse](https://www.synapse.org/#!Synapse:syn3193805/wiki/217789) dataset provides 30 annotated CT images with thirteen abdominal organs.
137 |
138 |
139 | 
140 | 
141 | 
142 |
143 |
144 | ### Automated Cardiac Diagnosis Dataset (ACDC)
145 | [ACDC](https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html) is a dataset of 100 patients used for 3D volumetric segmentation of the left (LV) and right (RV) cardiac ventricles and the myocardium (Myo).
146 |
147 |
148 | 
149 | 
150 | 
151 |
152 |
153 | ## Citation
154 | If you liked our paper, please consider citing it:
155 | ```bibtex
156 | @inproceedings{perera2024segformer3d,
157 | title={SegFormer3D: an Efficient Transformer for 3D Medical Image Segmentation},
158 | author={Perera, Shehan and Navard, Pouyan and Yilmaz, Alper},
159 | booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
160 | pages={4981--4988},
161 | year={2024}
162 | }
163 | ```
164 | ```bibtex
165 | @article{perera2024segformer3d,
166 | title={SegFormer3D: an Efficient Transformer for 3D Medical Image Segmentation},
167 | author={Perera, Shehan and Navard, Pouyan and Yilmaz, Alper},
168 | journal={arXiv preprint arXiv:2404.10156},
169 | year={2024}
170 | }
171 | ```
172 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/__init__.py
--------------------------------------------------------------------------------
/architectures/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/architectures/__init__.py
--------------------------------------------------------------------------------
/architectures/build_architecture.py:
--------------------------------------------------------------------------------
1 | """
2 | To select the architecture based on a config file we need to ensure
3 | we import each of the architectures into this file. Once we have that
4 | we can use a keyword from the config file to build the model.
5 | """
6 | ######################################################################
7 | def build_architecture(config):
8 | if config["model_name"] == "segformer3d":
9 | from .segformer3d import build_segformer3d_model
10 |
11 | model = build_segformer3d_model(config)
12 |
13 | return model
14 | else:
15 |         raise ValueError(
16 | "specified model not supported, edit build_architecture.py file"
17 | )
18 |
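# Example usage (a sketch; the "model_parameters" keys mirror the arguments read in
# build_segformer3d_model inside segformer3d.py and the experiment config.yaml):
#   config = {"model_name": "segformer3d", "model_parameters": {...}}
#   model = build_architecture(config)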
--------------------------------------------------------------------------------
/architectures/segformer3d.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import math
3 | import copy
4 | from torch import nn
5 | from einops import rearrange
6 | from functools import partial
7 |
8 | def build_segformer3d_model(config=None):
9 | model = SegFormer3D(
10 | in_channels=config["model_parameters"]["in_channels"],
11 | sr_ratios=config["model_parameters"]["sr_ratios"],
12 | embed_dims=config["model_parameters"]["embed_dims"],
13 | patch_kernel_size=config["model_parameters"]["patch_kernel_size"],
14 | patch_stride=config["model_parameters"]["patch_stride"],
15 | patch_padding=config["model_parameters"]["patch_padding"],
16 | mlp_ratios=config["model_parameters"]["mlp_ratios"],
17 | num_heads=config["model_parameters"]["num_heads"],
18 | depths=config["model_parameters"]["depths"],
19 | decoder_head_embedding_dim=config["model_parameters"][
20 | "decoder_head_embedding_dim"
21 | ],
22 | num_classes=config["model_parameters"]["num_classes"],
23 | decoder_dropout=config["model_parameters"]["decoder_dropout"],
24 | )
25 | return model
26 |
27 |
28 | class SegFormer3D(nn.Module):
29 | def __init__(
30 | self,
31 | in_channels: int = 4,
32 | sr_ratios: list = [4, 2, 1, 1],
33 | embed_dims: list = [32, 64, 160, 256],
34 | patch_kernel_size: list = [7, 3, 3, 3],
35 | patch_stride: list = [4, 2, 2, 2],
36 | patch_padding: list = [3, 1, 1, 1],
37 | mlp_ratios: list = [4, 4, 4, 4],
38 | num_heads: list = [1, 2, 5, 8],
39 | depths: list = [2, 2, 2, 2],
40 | decoder_head_embedding_dim: int = 256,
41 | num_classes: int = 3,
42 | decoder_dropout: float = 0.0,
43 | ):
44 | """
45 | in_channels: number of the input channels
46 | img_volume_dim: spatial resolution of the image volume (Depth, Width, Height)
47 | sr_ratios: the rates at which to down sample the sequence length of the embedded patch
48 | embed_dims: hidden size of the PatchEmbedded input
49 | patch_kernel_size: kernel size for the convolution in the patch embedding module
50 | patch_stride: stride for the convolution in the patch embedding module
51 | patch_padding: padding for the convolution in the patch embedding module
52 |         mlp_ratios: the rates at which the mlp increases the projection dim of the hidden_state
53 | num_heads: number of attention heads
54 | depths: number of attention layers
55 | decoder_head_embedding_dim: projection dimension of the mlp layer in the all-mlp-decoder module
56 | num_classes: number of the output channel of the network
57 | decoder_dropout: dropout rate of the concatenated feature maps
58 |
59 | """
60 | super().__init__()
61 | self.segformer_encoder = MixVisionTransformer(
62 | in_channels=in_channels,
63 | sr_ratios=sr_ratios,
64 | embed_dims=embed_dims,
65 | patch_kernel_size=patch_kernel_size,
66 | patch_stride=patch_stride,
67 | patch_padding=patch_padding,
68 | mlp_ratios=mlp_ratios,
69 | num_heads=num_heads,
70 | depths=depths,
71 | )
72 | # decoder takes in the feature maps in the reversed order
73 | reversed_embed_dims = embed_dims[::-1]
74 | self.segformer_decoder = SegFormerDecoderHead(
75 | input_feature_dims=reversed_embed_dims,
76 | decoder_head_embedding_dim=decoder_head_embedding_dim,
77 | num_classes=num_classes,
78 | dropout=decoder_dropout,
79 | )
80 | self.apply(self._init_weights)
81 |
82 | def _init_weights(self, m):
83 | if isinstance(m, nn.Linear):
84 | nn.init.trunc_normal_(m.weight, std=0.02)
85 | if isinstance(m, nn.Linear) and m.bias is not None:
86 | nn.init.constant_(m.bias, 0)
87 | elif isinstance(m, nn.LayerNorm):
88 | nn.init.constant_(m.bias, 0)
89 | nn.init.constant_(m.weight, 1.0)
90 | elif isinstance(m, nn.BatchNorm2d):
91 | nn.init.constant_(m.bias, 0)
92 | nn.init.constant_(m.weight, 1.0)
93 | elif isinstance(m, nn.BatchNorm3d):
94 | nn.init.constant_(m.bias, 0)
95 | nn.init.constant_(m.weight, 1.0)
96 | elif isinstance(m, nn.Conv2d):
97 | fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
98 | fan_out //= m.groups
99 | m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
100 | if m.bias is not None:
101 | m.bias.data.zero_()
102 | elif isinstance(m, nn.Conv3d):
103 | fan_out = m.kernel_size[0] * m.kernel_size[1] * m.kernel_size[2] * m.out_channels
104 | fan_out //= m.groups
105 | m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
106 | if m.bias is not None:
107 | m.bias.data.zero_()
108 |
109 |
110 | def forward(self, x):
111 | # embedding the input
112 | x = self.segformer_encoder(x)
113 |         # unpacking the embedded features generated by the transformer
114 | c1 = x[0]
115 | c2 = x[1]
116 | c3 = x[2]
117 | c4 = x[3]
118 | # decoding the embedded features
119 | x = self.segformer_decoder(c1, c2, c3, c4)
120 | return x
121 |
122 | # ----------------------------------------------------- encoder -----------------------------------------------------
123 | class PatchEmbedding(nn.Module):
124 | def __init__(
125 | self,
126 | in_channel: int = 4,
127 | embed_dim: int = 768,
128 | kernel_size: int = 7,
129 | stride: int = 4,
130 | padding: int = 3,
131 | ):
132 | """
133 | in_channels: number of the channels in the input volume
134 |         embed_dim: embedding dimension of the patch
135 | """
136 | super().__init__()
137 | self.patch_embeddings = nn.Conv3d(
138 | in_channel,
139 | embed_dim,
140 | kernel_size=kernel_size,
141 | stride=stride,
142 | padding=padding,
143 | )
144 | self.norm = nn.LayerNorm(embed_dim)
145 |
146 | def forward(self, x):
147 | # standard embedding patch
148 | patches = self.patch_embeddings(x)
149 | patches = patches.flatten(2).transpose(1, 2)
150 | patches = self.norm(patches)
151 | return patches
152 |
153 |
154 | class SelfAttention(nn.Module):
155 | def __init__(
156 | self,
157 | embed_dim: int = 768,
158 | num_heads: int = 8,
159 | sr_ratio: int = 2,
160 | qkv_bias: bool = False,
161 | attn_dropout: float = 0.0,
162 | proj_dropout: float = 0.0,
163 | ):
164 | """
165 | embed_dim : hidden size of the PatchEmbedded input
166 | num_heads: number of attention heads
167 | sr_ratio: the rate at which to down sample the sequence length of the embedded patch
168 | qkv_bias: whether or not the linear projection has bias
169 | attn_dropout: the dropout rate of the attention component
170 | proj_dropout: the dropout rate of the final linear projection
171 | """
172 | super().__init__()
173 | assert (
174 | embed_dim % num_heads == 0
175 | ), "Embedding dim should be divisible by number of heads!"
176 |
177 | self.num_heads = num_heads
178 |         # embedding dimension of each attention head
179 | self.attention_head_dim = embed_dim // num_heads
180 |
181 | # The same input is used to generate the query, key, and value,
182 | # (batch_size, num_patches, hidden_size) -> (batch_size, num_patches, attention_head_size)
183 | self.query = nn.Linear(embed_dim, embed_dim, bias=qkv_bias)
184 | self.key_value = nn.Linear(embed_dim, 2 * embed_dim, bias=qkv_bias)
185 | self.attn_dropout = nn.Dropout(attn_dropout)
186 | self.proj = nn.Linear(embed_dim, embed_dim)
187 | self.proj_dropout = nn.Dropout(proj_dropout)
188 |
189 | self.sr_ratio = sr_ratio
190 | if sr_ratio > 1:
191 | self.sr = nn.Conv3d(
192 | embed_dim, embed_dim, kernel_size=sr_ratio, stride=sr_ratio
193 | )
194 | self.sr_norm = nn.LayerNorm(embed_dim)
195 |
196 | def forward(self, x):
197 | # (batch_size, num_patches, hidden_size)
198 | B, N, C = x.shape
199 |
200 | # (batch_size, num_head, sequence_length, embed_dim)
201 | q = (
202 | self.query(x)
203 | .reshape(B, N, self.num_heads, self.attention_head_dim)
204 | .permute(0, 2, 1, 3)
205 | )
206 |
207 | if self.sr_ratio > 1:
208 | n = cube_root(N)
209 | # (batch_size, sequence_length, embed_dim) -> (batch_size, embed_dim, patch_D, patch_H, patch_W)
210 | x_ = x.permute(0, 2, 1).reshape(B, C, n, n, n)
211 | # (batch_size, embed_dim, patch_D, patch_H, patch_W) -> (batch_size, embed_dim, patch_D/sr_ratio, patch_H/sr_ratio, patch_W/sr_ratio)
212 | x_ = self.sr(x_).reshape(B, C, -1).permute(0, 2, 1)
213 | # (batch_size, embed_dim, patch_D/sr_ratio, patch_H/sr_ratio, patch_W/sr_ratio) -> (batch_size, sequence_length, embed_dim)
214 | # normalizing the layer
215 | x_ = self.sr_norm(x_)
216 | # (batch_size, num_patches, hidden_size)
217 | kv = (
218 | self.key_value(x_)
219 | .reshape(B, -1, 2, self.num_heads, self.attention_head_dim)
220 | .permute(2, 0, 3, 1, 4)
221 | )
222 | # (2, batch_size, num_heads, num_sequence, attention_head_dim)
223 | else:
224 | # (batch_size, num_patches, hidden_size)
225 | kv = (
226 | self.key_value(x)
227 | .reshape(B, -1, 2, self.num_heads, self.attention_head_dim)
228 | .permute(2, 0, 3, 1, 4)
229 | )
230 | # (2, batch_size, num_heads, num_sequence, attention_head_dim)
231 |
232 | k, v = kv[0], kv[1]
233 |
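        # note: scores are scaled by 1/sqrt(num_heads) here; standard scaled
        # dot-product attention divides by sqrt(attention_head_dim) instead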
234 | attention_score = (q @ k.transpose(-2, -1)) / math.sqrt(self.num_heads)
235 |         attention_prob = attention_score.softmax(dim=-1)
236 |         attention_prob = self.attn_dropout(attention_prob)
237 |         out = (attention_prob @ v).transpose(1, 2).reshape(B, N, C)
238 | out = self.proj(out)
239 | out = self.proj_dropout(out)
240 | return out
241 |
242 |
243 | class TransformerBlock(nn.Module):
244 | def __init__(
245 | self,
246 | embed_dim: int = 768,
247 | mlp_ratio: int = 2,
248 | num_heads: int = 8,
249 | sr_ratio: int = 2,
250 | qkv_bias: bool = False,
251 | attn_dropout: float = 0.0,
252 | proj_dropout: float = 0.0,
253 | ):
254 | """
255 | embed_dim : hidden size of the PatchEmbedded input
256 |         mlp_ratio: the rate at which the _MLP component increases the projection dim of the embedded patch
257 | num_heads: number of attention heads
258 | sr_ratio: the rate at which to down sample the sequence length of the embedded patch
259 | qkv_bias: whether or not the linear projection has bias
260 | attn_dropout: the dropout rate of the attention component
261 | proj_dropout: the dropout rate of the final linear projection
262 | """
263 | super().__init__()
264 | self.norm1 = nn.LayerNorm(embed_dim)
265 | self.attention = SelfAttention(
266 | embed_dim=embed_dim,
267 | num_heads=num_heads,
268 | sr_ratio=sr_ratio,
269 | qkv_bias=qkv_bias,
270 | attn_dropout=attn_dropout,
271 | proj_dropout=proj_dropout,
272 | )
273 | self.norm2 = nn.LayerNorm(embed_dim)
274 | self.mlp = _MLP(in_feature=embed_dim, mlp_ratio=mlp_ratio, dropout=0.0)
275 |
276 | def forward(self, x):
277 | x = x + self.attention(self.norm1(x))
278 | x = x + self.mlp(self.norm2(x))
279 | return x
280 |
281 |
282 | class MixVisionTransformer(nn.Module):
283 | def __init__(
284 | self,
285 | in_channels: int = 4,
286 | sr_ratios: list = [8, 4, 2, 1],
287 | embed_dims: list = [64, 128, 320, 512],
288 | patch_kernel_size: list = [7, 3, 3, 3],
289 | patch_stride: list = [4, 2, 2, 2],
290 | patch_padding: list = [3, 1, 1, 1],
291 | mlp_ratios: list = [2, 2, 2, 2],
292 | num_heads: list = [1, 2, 5, 8],
293 | depths: list = [2, 2, 2, 2],
294 | ):
295 | """
296 | in_channels: number of the input channels
297 | img_volume_dim: spatial resolution of the image volume (Depth, Width, Height)
298 | sr_ratios: the rates at which to down sample the sequence length of the embedded patch
299 | embed_dims: hidden size of the PatchEmbedded input
300 | patch_kernel_size: kernel size for the convolution in the patch embedding module
301 | patch_stride: stride for the convolution in the patch embedding module
302 | patch_padding: padding for the convolution in the patch embedding module
303 |         mlp_ratios: the rates at which the mlp increases the projection dim of the hidden_state
304 |         num_heads: number of attention heads
305 |         depths: number of attention layers
306 | """
307 | super().__init__()
308 |
309 | # patch embedding at different Pyramid level
310 | self.embed_1 = PatchEmbedding(
311 | in_channel=in_channels,
312 | embed_dim=embed_dims[0],
313 | kernel_size=patch_kernel_size[0],
314 | stride=patch_stride[0],
315 | padding=patch_padding[0],
316 | )
317 | self.embed_2 = PatchEmbedding(
318 | in_channel=embed_dims[0],
319 | embed_dim=embed_dims[1],
320 | kernel_size=patch_kernel_size[1],
321 | stride=patch_stride[1],
322 | padding=patch_padding[1],
323 | )
324 | self.embed_3 = PatchEmbedding(
325 | in_channel=embed_dims[1],
326 | embed_dim=embed_dims[2],
327 | kernel_size=patch_kernel_size[2],
328 | stride=patch_stride[2],
329 | padding=patch_padding[2],
330 | )
331 | self.embed_4 = PatchEmbedding(
332 | in_channel=embed_dims[2],
333 | embed_dim=embed_dims[3],
334 | kernel_size=patch_kernel_size[3],
335 | stride=patch_stride[3],
336 | padding=patch_padding[3],
337 | )
338 |
339 | # block 1
340 | self.tf_block1 = nn.ModuleList(
341 | [
342 | TransformerBlock(
343 | embed_dim=embed_dims[0],
344 | num_heads=num_heads[0],
345 | mlp_ratio=mlp_ratios[0],
346 | sr_ratio=sr_ratios[0],
347 | qkv_bias=True,
348 | )
349 | for _ in range(depths[0])
350 | ]
351 | )
352 | self.norm1 = nn.LayerNorm(embed_dims[0])
353 |
354 | # block 2
355 | self.tf_block2 = nn.ModuleList(
356 | [
357 | TransformerBlock(
358 | embed_dim=embed_dims[1],
359 | num_heads=num_heads[1],
360 | mlp_ratio=mlp_ratios[1],
361 | sr_ratio=sr_ratios[1],
362 | qkv_bias=True,
363 | )
364 | for _ in range(depths[1])
365 | ]
366 | )
367 | self.norm2 = nn.LayerNorm(embed_dims[1])
368 |
369 | # block 3
370 | self.tf_block3 = nn.ModuleList(
371 | [
372 | TransformerBlock(
373 | embed_dim=embed_dims[2],
374 | num_heads=num_heads[2],
375 | mlp_ratio=mlp_ratios[2],
376 | sr_ratio=sr_ratios[2],
377 | qkv_bias=True,
378 | )
379 | for _ in range(depths[2])
380 | ]
381 | )
382 | self.norm3 = nn.LayerNorm(embed_dims[2])
383 |
384 | # block 4
385 | self.tf_block4 = nn.ModuleList(
386 | [
387 | TransformerBlock(
388 | embed_dim=embed_dims[3],
389 | num_heads=num_heads[3],
390 | mlp_ratio=mlp_ratios[3],
391 | sr_ratio=sr_ratios[3],
392 | qkv_bias=True,
393 | )
394 | for _ in range(depths[3])
395 | ]
396 | )
397 | self.norm4 = nn.LayerNorm(embed_dims[3])
398 |
399 | def forward(self, x):
400 | out = []
401 | # at each stage these are the following mappings:
402 | # (batch_size, num_patches, hidden_state)
403 | # (num_patches,) -> (D, H, W)
404 | # (batch_size, num_patches, hidden_state) -> (batch_size, hidden_state, D, H, W)
405 |
406 | # stage 1
407 | x = self.embed_1(x)
408 | B, N, C = x.shape
409 | n = cube_root(N)
410 | for i, blk in enumerate(self.tf_block1):
411 | x = blk(x)
412 | x = self.norm1(x)
413 | # (B, N, C) -> (B, D, H, W, C) -> (B, C, D, H, W)
414 | x = x.reshape(B, n, n, n, -1).permute(0, 4, 1, 2, 3).contiguous()
415 | out.append(x)
416 |
417 | # stage 2
418 | x = self.embed_2(x)
419 | B, N, C = x.shape
420 | n = cube_root(N)
421 | for i, blk in enumerate(self.tf_block2):
422 | x = blk(x)
423 | x = self.norm2(x)
424 | # (B, N, C) -> (B, D, H, W, C) -> (B, C, D, H, W)
425 | x = x.reshape(B, n, n, n, -1).permute(0, 4, 1, 2, 3).contiguous()
426 | out.append(x)
427 |
428 | # stage 3
429 | x = self.embed_3(x)
430 | B, N, C = x.shape
431 | n = cube_root(N)
432 | for i, blk in enumerate(self.tf_block3):
433 | x = blk(x)
434 | x = self.norm3(x)
435 | # (B, N, C) -> (B, D, H, W, C) -> (B, C, D, H, W)
436 | x = x.reshape(B, n, n, n, -1).permute(0, 4, 1, 2, 3).contiguous()
437 | out.append(x)
438 |
439 | # stage 4
440 | x = self.embed_4(x)
441 | B, N, C = x.shape
442 | n = cube_root(N)
443 | for i, blk in enumerate(self.tf_block4):
444 | x = blk(x)
445 | x = self.norm4(x)
446 | # (B, N, C) -> (B, D, H, W, C) -> (B, C, D, H, W)
447 | x = x.reshape(B, n, n, n, -1).permute(0, 4, 1, 2, 3).contiguous()
448 | out.append(x)
449 |
450 | return out
451 |
452 |
453 | class _MLP(nn.Module):
454 | def __init__(self, in_feature, mlp_ratio=2, dropout=0.0):
455 | super().__init__()
456 | out_feature = mlp_ratio * in_feature
457 | self.fc1 = nn.Linear(in_feature, out_feature)
458 | self.dwconv = DWConv(dim=out_feature)
459 | self.fc2 = nn.Linear(out_feature, in_feature)
460 | self.act_fn = nn.GELU()
461 | self.dropout = nn.Dropout(dropout)
462 |
463 | def forward(self, x):
464 | x = self.fc1(x)
465 | x = self.dwconv(x)
466 | x = self.act_fn(x)
467 | x = self.dropout(x)
468 | x = self.fc2(x)
469 | x = self.dropout(x)
470 | return x
471 |
472 |
473 | class DWConv(nn.Module):
474 | def __init__(self, dim=768):
475 | super().__init__()
476 | self.dwconv = nn.Conv3d(dim, dim, 3, 1, 1, bias=True, groups=dim)
477 | # added batchnorm (remove it ?)
478 | self.bn = nn.BatchNorm3d(dim)
479 |
480 | def forward(self, x):
481 | B, N, C = x.shape
482 | # (batch, patch_cube, hidden_size) -> (batch, hidden_size, D, H, W)
483 | # assuming D = H = W, i.e. cube root of the patch is an integer number!
484 | n = cube_root(N)
485 | x = x.transpose(1, 2).view(B, C, n, n, n)
486 | x = self.dwconv(x)
487 | # added batchnorm (remove it ?)
488 | x = self.bn(x)
489 | x = x.flatten(2).transpose(1, 2)
490 | return x
491 |
492 | ###################################################################################
493 | def cube_root(n):
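    # assumes N is a perfect cube (D = H = W); round() guards against
    # floating-point error in n ** (1/3)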
494 | return round(math.pow(n, (1 / 3)))
495 |
496 |
497 | ###################################################################################
498 | # ----------------------------------------------------- decoder -------------------
499 | class MLP_(nn.Module):
500 | """
501 | Linear Embedding
502 | """
503 |
504 | def __init__(self, input_dim=2048, embed_dim=768):
505 | super().__init__()
506 | self.proj = nn.Linear(input_dim, embed_dim)
507 | self.bn = nn.LayerNorm(embed_dim)
508 |
509 | def forward(self, x):
510 | x = x.flatten(2).transpose(1, 2).contiguous()
511 | x = self.proj(x)
512 | # added batchnorm (remove it ?)
513 | x = self.bn(x)
514 | return x
515 |
516 |
517 | ###################################################################################
518 | class SegFormerDecoderHead(nn.Module):
519 | """
520 | SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
521 | """
522 |
523 | def __init__(
524 | self,
525 | input_feature_dims: list = [512, 320, 128, 64],
526 | decoder_head_embedding_dim: int = 256,
527 | num_classes: int = 3,
528 | dropout: float = 0.0,
529 | ):
530 | """
531 | input_feature_dims: list of the output features channels generated by the transformer encoder
532 | decoder_head_embedding_dim: projection dimension of the mlp layer in the all-mlp-decoder module
533 | num_classes: number of the output channels
534 | dropout: dropout rate of the concatenated feature maps
535 | """
536 | super().__init__()
537 | self.linear_c4 = MLP_(
538 | input_dim=input_feature_dims[0],
539 | embed_dim=decoder_head_embedding_dim,
540 | )
541 | self.linear_c3 = MLP_(
542 | input_dim=input_feature_dims[1],
543 | embed_dim=decoder_head_embedding_dim,
544 | )
545 | self.linear_c2 = MLP_(
546 | input_dim=input_feature_dims[2],
547 | embed_dim=decoder_head_embedding_dim,
548 | )
549 | self.linear_c1 = MLP_(
550 | input_dim=input_feature_dims[3],
551 | embed_dim=decoder_head_embedding_dim,
552 | )
553 | # convolution module to combine feature maps generated by the mlps
554 | self.linear_fuse = nn.Sequential(
555 | nn.Conv3d(
556 | in_channels=4 * decoder_head_embedding_dim,
557 | out_channels=decoder_head_embedding_dim,
558 | kernel_size=1,
559 | stride=1,
560 | bias=False,
561 | ),
562 | nn.BatchNorm3d(decoder_head_embedding_dim),
563 | nn.ReLU(),
564 | )
565 | self.dropout = nn.Dropout(dropout)
566 |
567 | # final linear projection layer
568 | self.linear_pred = nn.Conv3d(
569 | decoder_head_embedding_dim, num_classes, kernel_size=1
570 | )
571 |
572 | # segformer decoder generates the final decoded feature map size at 1/4 of the original input volume size
573 | self.upsample_volume = nn.Upsample(
574 | scale_factor=4.0, mode="trilinear", align_corners=False
575 | )
576 |
577 | def forward(self, c1, c2, c3, c4):
578 | ############## _MLP decoder on C1-C4 ###########
579 | n, _, _, _, _ = c4.shape
580 |
581 | _c4 = (
582 | self.linear_c4(c4)
583 | .permute(0, 2, 1)
584 | .reshape(n, -1, c4.shape[2], c4.shape[3], c4.shape[4])
585 | .contiguous()
586 | )
587 | _c4 = torch.nn.functional.interpolate(
588 | _c4,
589 | size=c1.size()[2:],
590 | mode="trilinear",
591 | align_corners=False,
592 | )
593 |
594 | _c3 = (
595 | self.linear_c3(c3)
596 | .permute(0, 2, 1)
597 | .reshape(n, -1, c3.shape[2], c3.shape[3], c3.shape[4])
598 | .contiguous()
599 | )
600 | _c3 = torch.nn.functional.interpolate(
601 | _c3,
602 | size=c1.size()[2:],
603 | mode="trilinear",
604 | align_corners=False,
605 | )
606 |
607 | _c2 = (
608 | self.linear_c2(c2)
609 | .permute(0, 2, 1)
610 | .reshape(n, -1, c2.shape[2], c2.shape[3], c2.shape[4])
611 | .contiguous()
612 | )
613 | _c2 = torch.nn.functional.interpolate(
614 | _c2,
615 | size=c1.size()[2:],
616 | mode="trilinear",
617 | align_corners=False,
618 | )
619 |
620 | _c1 = (
621 | self.linear_c1(c1)
622 | .permute(0, 2, 1)
623 | .reshape(n, -1, c1.shape[2], c1.shape[3], c1.shape[4])
624 | .contiguous()
625 | )
626 |
627 | _c = self.linear_fuse(torch.cat([_c4, _c3, _c2, _c1], dim=1))
628 |
629 | x = self.dropout(_c)
630 | x = self.linear_pred(x)
631 | x = self.upsample_volume(x)
632 | return x
633 |
634 | ###################################################################################
635 | if __name__ == "__main__":
636 | input = torch.randint(
637 | low=0,
638 | high=255,
639 | size=(1, 4, 128, 128, 128),
640 | dtype=torch.float,
641 | )
642 | input = input.to("cuda:0")
643 | segformer3D = SegFormer3D().to("cuda:0")
644 | output = segformer3D(input)
645 | print(output.shape)
646 |
647 |
648 | ###################################################################################
649 |
--------------------------------------------------------------------------------
/augmentations/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/augmentations/__init__.py
--------------------------------------------------------------------------------
/augmentations/augmentations.py:
--------------------------------------------------------------------------------
1 | import monai.transforms as transforms
2 |
3 | #######################################################################################
4 | def build_augmentations(train: bool = True):
5 | if train:
6 | train_transform = [
7 | transforms.RandSpatialCropSamplesd(keys=["image", "label"], roi_size=(96, 96, 96), num_samples=4, random_center=True, random_size=False),
8 | transforms.RandFlipd(keys=["image", "label"], prob=0.30, spatial_axis=1),
9 | transforms.RandRotated(keys=["image", "label"], prob=0.50, range_x=0.36, range_y=0.0, range_z=0.0),
10 | transforms.RandCoarseDropoutd(keys=["image", "label"], holes=20, spatial_size=(-1, 7, 7), fill_value=0, prob=0.5),
11 | transforms.GibbsNoised(keys=["image"]),
12 | transforms.EnsureTyped(keys=["image", "label"], track_meta=False),
13 | ]
14 | return transforms.Compose(train_transform)
15 | else:
16 | val_transform = [
17 | transforms.EnsureTyped(keys=["image", "label"], track_meta=False),
18 | ]
19 | return transforms.Compose(val_transform)
20 |
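# Example usage (a sketch; the dict keys match the transforms above):
#   transform = build_augmentations(train=True)
#   samples = transform({"image": image_tensor, "label": label_tensor})
#   # RandSpatialCropSamplesd(num_samples=4) makes `samples` a list of four cropped dicts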
--------------------------------------------------------------------------------
/cvprw_poster.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/cvprw_poster.pdf
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/brats2017_seg_preprocess.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import nibabel
4 | import numpy as np
5 | from tqdm import tqdm
6 | from joblib import Parallel, delayed
7 | import matplotlib.pyplot as plt
8 | from matplotlib import animation
9 | from monai.data import MetaTensor
10 | from multiprocessing import Process, Pool
11 | from sklearn.preprocessing import MinMaxScaler
12 | from monai.transforms import (
13 | Orientation,
14 | EnsureType,
15 | )
16 |
17 | # whoever wrote this code knew what he was doing (hint: It was me!)
18 |
19 | """
20 | data
21 | │
22 | ├───train
23 | │ ├──imageTr
24 | │ │ └──BRATS_001_0000.nii.gz
25 | │ │ └──BRATS_001_0001.nii.gz
26 | │ │ └──BRATS_001_0002.nii.gz
27 | │ │ └──BRATS_001_0003.nii.gz
28 | │ │ └──BRATS_002_0000.nii.gz
29 | │ │ └──...
30 | │ ├──labelsTr
31 | │ │ └──BRATS_001.nii.gz
32 | │ │ └──BRATS_002.nii.gz
33 | │ │ └──...
34 | │ ├──imageTs
35 | │ │ └──BRATS_485_000.nii.gz
36 | │ │ └──BRATS_485_001.nii.gz
37 | │ │ └──BRATS_485_002.nii.gz
38 | │ │ └──BRATS_485_003.nii.gz
39 | │ │ └──BRATS_486_000.nii.gz
40 | │ │ └──...
41 |
42 | """
43 | class ConvertToMultiChannelBasedOnBrats2017Classes(object):
44 | """
45 | Convert labels to multi channels based on brats17 classes:
46 | "0": "background",
47 | "1": "edema",
48 | "2": "non-enhancing tumor",
49 | "3": "enhancing tumour"
50 |     Annotations comprise the peritumoral edema (ED, label 1), the non-enhancing tumor (NET, label 2),
51 |     and the GD-enhancing tumor (ET, label 3)
52 | """
53 | def __call__(self, img):
54 | # if img has channel dim, squeeze it
55 | if img.ndim == 4 and img.shape[0] == 1:
56 | img = img.squeeze(0)
57 |
58 | result = [(img == 2) | (img == 3), (img == 2) | (img == 3) | (img == 1), img == 3]
59 |         # result[0]: TC (tumor core) = non-enhancing (2) + enhancing (3)
60 |         # result[1]: WT (whole tumor) = edema (1) + non-enhancing (2) + enhancing (3); result[2]: ET = enhancing (3)
61 | return torch.stack(result, dim=0) if isinstance(img, torch.Tensor) else np.stack(result, axis=0)
62 |
63 | class Brats2017Task1Preprocess:
64 | def __init__(
65 | self,
66 | root_dir: str,
67 | train_folder_name: str = "train",
68 | save_dir: str = "../BraTS2017_Training_Data",
69 | ):
70 | """
71 | root_dir: path to the data folder where the raw train folder is
73 | train_folder_name: name of the folder of the training data
74 | save_dir: path to directory where each case is going to be saved as a single file containing four modalities
75 | """
76 |
77 | self.train_folder_dir = os.path.join(root_dir, train_folder_name)
78 | label_folder_dir = os.path.join(root_dir, train_folder_name, "labelsTr")
79 | assert os.path.exists(self.train_folder_dir)
80 | assert os.path.exists(label_folder_dir)
81 |
82 | self.save_dir = save_dir
83 | # we only care about case names for which we have label!
84 | self.case_name = next(os.walk(label_folder_dir), (None, None, []))[2]
85 |
86 | # MRI type
87 | self.MRI_CODE = {"Flair": "0000", "T1w": "0001", "T1gd": "0002", "T2w": "0003", "label": None}
88 |
89 |
90 | def __len__(self):
91 | return self.case_name.__len__()
92 |
93 | def normalize(self, x:np.ndarray)->np.ndarray:
94 | # Transform features by scaling each feature to a given range.
95 | scaler = MinMaxScaler(feature_range=(0, 1))
96 | # (H, W, D) -> (H * W, D)
97 | normalized_1D_array = scaler.fit_transform(x.reshape(-1, x.shape[-1]))
98 | normalized_data = normalized_1D_array.reshape(x.shape)
99 | return normalized_data
100 |
101 | def orient(self, x: MetaTensor) -> MetaTensor:
102 | # orient the array to be in (Right, Anterior, Superior) scanner coordinate systems
103 | assert type(x) == MetaTensor
104 | return Orientation(axcodes="RAS")(x)
105 |
106 | def detach_meta(self, x: MetaTensor) -> np.ndarray:
107 | assert type(x) == MetaTensor
108 | return EnsureType(data_type="numpy", track_meta=False)(x)
109 |
110 | def crop_brats2021_zero_pixels(self, x: np.ndarray)->np.ndarray:
111 |         # crop away the zero-valued border of the MRI scan, keeping the informative central region
112 | # crop (240, 240, 155) to (128, 128, 128)
113 | return x[:, 56:184, 56:184, 13:141]
114 |
115 | def remove_case_name_artifact(self, case_name: str)->str:
116 | # BRATS_066.nii.gz -> BRATS_066
117 | return case_name.rsplit(".")[0]
118 |
119 | def get_modality_fp(self, case_name: str, folder: str, mri_code: str = None):
120 | """
121 | return the modality file path
122 | case_name: patient ID
123 | folder: either [imagesTr, labelsTr]
124 | mri_code: code of any of the ["Flair", "T1w", "T1gd", "T2w"]
125 | """
126 | if mri_code:
127 | f_name = f"{case_name}_{mri_code}.nii.gz"
128 | else:
129 | f_name = f"{case_name}.nii.gz"
130 |
131 | modality_fp = os.path.join(
132 | self.train_folder_dir,
133 | folder,
134 | f_name,
135 | )
136 | return modality_fp
137 |
138 | def load_nifti(self, fp):
139 | """
140 | load a nifti file
141 | fp: path to the nifti file with (nii or nii.gz) extension
142 | """
143 | nifti_data = nibabel.load(fp)
144 | # get the floating point array
145 | nifti_scan = nifti_data.get_fdata()
146 | # get affine matrix
147 | affine = nifti_data.affine
148 | return nifti_scan, affine
149 |
150 | def _2metaTensor(self, nifti_data: np.ndarray, affine_mat: np.ndarray):
151 | """
152 | convert a nifti data to meta tensor
153 | nifti_data: floating point array of the raw nifti object
154 | affine_mat: affine matrix to be appended to the meta tensor for later application such as transformation
155 | """
156 | # creating a meta tensor in which affine matrix is stored for later uses(i.e. transformation)
157 | scan = MetaTensor(x=nifti_data, affine=affine_mat)
158 | # adding a new axis
159 | D, H, W = scan.shape
160 | # adding new axis
161 | scan = scan.view(1, D, H, W)
162 | return scan
163 |
164 | def preprocess_brats_modality(self, data_fp: str, is_label: bool = False)->np.ndarray:
165 | """
166 | apply preprocess stage to the modality
167 | data_fp: directory to the modality
168 | """
169 | data, affine = self.load_nifti(data_fp)
170 |         # labels do not need to be normalized
171 | if is_label:
172 | # Binary mask does not need to be float64! For saving storage purposes!
173 | data = data.astype(np.uint8)
174 | # categorical -> one-hot-encoded
175 | # (240, 240, 155) -> (3, 240, 240, 155)
176 | data = ConvertToMultiChannelBasedOnBrats2017Classes()(data)
177 | else:
178 | data = self.normalize(x=data)
179 | # (240, 240, 155) -> (1, 240, 240, 155)
180 | data = data[np.newaxis, ...]
181 |
182 | data = MetaTensor(x=data, affine=affine)
183 |         # for orienting the coordinate system we need the affine matrix
184 | data = self.orient(data)
185 | # detaching the meta values from the oriented array
186 | data = self.detach_meta(data)
187 | # (240, 240, 155) -> (128, 128, 128)
188 | data = self.crop_brats2021_zero_pixels(data)
189 | return data
190 |
191 | def __getitem__(self, idx):
192 |         # e.g. BRATS_001.nii.gz (case names come from labelsTr)
193 |         case_name = self.case_name[idx]
194 |         # e.g. BRATS_001
195 | case_name = self.remove_case_name_artifact(case_name)
196 |
197 |
198 | # preprocess Flair modality
199 | code = self.MRI_CODE["Flair"]
200 | flair = self.get_modality_fp(case_name, "imagesTr", code)
201 | Flair = self.preprocess_brats_modality(flair, is_label=False)
202 | flair_transv = Flair.swapaxes(1, 3) # transverse plane
203 |
204 |
205 | # preprocess T1w modality
206 | code = self.MRI_CODE["T1w"]
207 | t1w = self.get_modality_fp(case_name, "imagesTr", code)
208 | t1w = self.preprocess_brats_modality(t1w, is_label=False)
209 | t1w_transv = t1w.swapaxes(1, 3) # transverse plane
210 |
211 | # preprocess T1gd modality
212 | code = self.MRI_CODE["T1gd"]
213 | t1gd = self.get_modality_fp(case_name, "imagesTr", code)
214 | t1gd = self.preprocess_brats_modality(t1gd, is_label=False)
215 | t1gd_transv = t1gd.swapaxes(1, 3) # transverse plane
216 |
217 |
218 | # preprocess T2w
219 | code = self.MRI_CODE["T2w"]
220 | t2w = self.get_modality_fp(case_name, "imagesTr", code)
221 | t2w = self.preprocess_brats_modality(t2w, is_label=False)
222 | t2w_transv = t2w.swapaxes(1, 3) # transverse plane
223 |
224 |
225 | # preprocess segmentation label
226 | code = self.MRI_CODE["label"]
227 | label = self.get_modality_fp(case_name, "labelsTr", code)
228 | label = self.preprocess_brats_modality(label, is_label=True)
229 | label = label.swapaxes(1, 3) # transverse plane
230 |
231 | # stack modalities (4, D, H, W)
232 | modalities = np.concatenate(
233 | (flair_transv, t1w_transv, t1gd_transv, t2w_transv),
234 | axis=0,
235 | dtype=np.float32,
236 | )
237 |
238 | return modalities, label, case_name
239 |
240 |
241 | def __call__(self):
242 | print("started preprocessing Brats2017...")
243 | with Pool(processes=os.cpu_count()) as multi_p:
244 | multi_p.map_async(func=self.process, iterable=range(self.__len__()))
245 | multi_p.close()
246 | multi_p.join()
247 | print("finished preprocessing Brats2017...")
248 |
249 | def process(self, idx):
250 | if not os.path.exists(self.save_dir):
251 | os.makedirs(self.save_dir)
252 | modalities, label, case_name = self.__getitem__(idx)
253 | # creating the folder for the current case id
254 | data_save_path = os.path.join(self.save_dir, case_name)
255 | if not os.path.exists(data_save_path):
256 | os.makedirs(data_save_path)
257 | modalities_fn = data_save_path + f"/{case_name}_modalities.pt"
258 | label_fn = data_save_path + f"/{case_name}_label.pt"
259 | torch.save(modalities, modalities_fn)
260 | torch.save(label, label_fn)
261 |
262 |
263 |
264 | def animate(input_1, input_2):
265 | """animate pairs of image sequences of the same length on two conjugate axis"""
266 |     assert len(input_1) == len(
267 |         input_2
268 |     ), f"two inputs should have the same number of frames, but the first input had {len(input_1)} and the second {len(input_2)}"
269 | # set the figure and axis
270 | fig, axis = plt.subplots(1, 2, figsize=(8, 8))
271 | axis[0].set_axis_off()
272 | axis[1].set_axis_off()
273 | sequence_length = input_1.__len__()
274 | sequence = []
275 | for i in range(sequence_length):
276 | im_1 = axis[0].imshow(input_1[i], cmap="bone", animated=True)
277 | im_2 = axis[1].imshow(input_2[i], cmap="bone", animated=True)
278 | if i == 0:
279 | axis[0].imshow(input_1[i], cmap="bone") # show an initial one first
280 | axis[1].imshow(input_2[i], cmap="bone") # show an initial one first
281 |
282 | sequence.append([im_1, im_2])
283 | return animation.ArtistAnimation(
284 | fig,
285 | sequence,
286 | interval=25,
287 | blit=True,
288 | repeat_delay=100,
289 | )
290 |
291 | def viz(volume_indx: int = 1, label_indx: int = 1)->None:
292 | """
293 | pair visualization of the volume and label
294 | volume_indx: index for the volume. ["Flair", "t1", "t1ce", "t2"]
295 | label_indx: index for the label segmentation ["TC" (Tumor core), "WT" (Whole tumor), "ET" (Enhancing tumor)]
296 | """
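    # note: `volume` and `label` are read from module scope; define them first
    # (see the commented-out example under __main__ below)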
297 | assert volume_indx in [0, 1, 2, 3]
298 | assert label_indx in [0, 1, 2]
299 | x = volume[volume_indx, ...]
300 | y = label[label_indx, ...]
301 | ani = animate(input_1=x, input_2=y)
302 | plt.show()
303 |
304 |
305 | if __name__ == "__main__":
306 | brats2017_task1_prep = Brats2017Task1Preprocess(root_dir="./",
307 | train_folder_name = "train",
308 | save_dir="../BraTS2017_Training_Data"
309 | )
310 | # run the preprocessing pipeline
311 | brats2017_task1_prep()
312 |
313 | # in case you want to visualize the data you can uncomment the following. Change the index to see different data
314 | # volume, label, case_name = brats2017_task1_prep[400]
315 | # viz(volume_indx = 3, label_indx = 1)
316 |
317 |
318 |
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/create_train_val_kfold_csv.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import numpy as np
4 | import pandas as pd
5 | from sklearn.model_selection import KFold
6 |
7 |
8 | def create_pandas_df(data_dict: dict) -> pd.DataFrame:
9 | """
10 | create a pandas dataframe out of data dictionary
11 | data_dict: key values of the data to be data-framed
12 | """
13 | data_frame = pd.DataFrame(
14 | data=data_dict,
15 | index=None,
16 | columns=None,
17 | )
18 | return data_frame
19 |
20 |
21 | def save_pandas_df(dataframe: pd.DataFrame, save_path: str, header: list) -> None:
22 | """
23 | save a dataframe to the save_dir with specified header
24 | dataframe: pandas dataframe to be saved
25 | save_path: the directory in which the dataframe is going to be saved
26 | header: list of headers of the to be saved csv file
27 | """
28 | assert save_path.endswith("csv")
29 | assert isinstance(dataframe, pd.DataFrame)
30 | assert (dataframe.columns.__len__() == header.__len__())
31 | dataframe.to_csv(path_or_buf=save_path, header=header, index=False)
32 |
33 |
34 | def create_train_val_kfold_csv_from_data_folder(
35 | folder_dir: str,
36 | append_dir: str = "",
37 | save_dir: str = "./",
38 | n_k_fold: int = 5,
39 | random_state: int = 42,
40 | ) -> None:
41 | """
42 | create k fold train validation csv files
43 | folder_dir: path to the whole corpus of the data
44 |     append_dir: path to be appended to the beginning of the directory field in the csv file
45 | save_dir: directory to which save the csv files
46 | n_k_fold: number of folds
47 | random_state: random seed ID
48 | """
49 | assert os.path.exists(folder_dir), f"{folder_dir} does not exist"
50 |
51 | header = ["data_path", "case_name"]
52 |
53 | # iterate through the folder to list all the filenames
54 | case_name = next(os.walk(folder_dir), (None, None, []))[1]
55 | case_name = np.array(case_name)
56 | np.random.seed(random_state)
57 | np.random.shuffle(case_name)
58 |
59 | # setting up k-fold module
60 | kfold = KFold(n_splits=n_k_fold, random_state=random_state, shuffle=True)
61 | # generating k-fold train and validation set
62 | for i, (train_fold_id, validation_fold_id) in enumerate(kfold.split(case_name)):
63 | # getting the corresponding case out of the fold index
64 | train_fold_cn = case_name[train_fold_id]
65 | valid_fold_cn = case_name[validation_fold_id]
66 | # create data path pointing to the case name
67 | train_dp = [
68 | os.path.join(append_dir, case).replace("\\", "/") for case in train_fold_cn
69 | ]
70 | valid_dp = [
71 | os.path.join(append_dir, case).replace("\\", "/") for case in valid_fold_cn
72 | ]
73 |         # dictionary objects to be converted to dataframes
74 | train_data = {"data_path": train_dp, "case_name": train_fold_cn}
75 | valid_data = {"data_path": valid_dp, "case_name": valid_fold_cn}
76 |
77 | train_df = create_pandas_df(train_data)
78 | valid_df = create_pandas_df(valid_data)
79 |
80 |         save_pandas_df(
81 |             dataframe=train_df,
82 |             save_path=os.path.join(save_dir, f"train_fold_{i+1}.csv"),
83 |             header=header,
84 |         )
85 |         save_pandas_df(
86 |             dataframe=valid_df,
87 |             save_path=os.path.join(save_dir, f"validation_fold_{i+1}.csv"),
88 |             header=header,
89 |         )
90 |
91 |
92 | if __name__ == "__main__":
93 | create_train_val_kfold_csv_from_data_folder(
94 | # path to the raw train data folder
95 | folder_dir="../../BraTS2017_Training_Data",
96 | # this is inferred from where the actual experiments are run relative to the data folder
97 | append_dir="../../../data/brats2017_seg/BraTS2017_Training_Data/",
98 |         # where to save the fold csv files relative to the current directory
99 |         save_dir="./",
100 | )
101 |
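# Resulting csv layout (illustrative; each data_path is append_dir + case name):
#   data_path,case_name
#   ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_001,BRATS_001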
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/create_train_val_test_csv.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import numpy as np
4 | import pandas as pd
5 |
6 |
7 | def create_train_val_test_csv_from_data_folder(
8 | folder_dir: str,
9 | append_dir: str = "",
10 | save_dir: str = "./",
11 | train_split_perc: float = 0.80,
12 | val_split_perc: float = 0.05,
13 | ) -> None:
14 | """
15 | create train/validation/test csv file out of the given directory such that each csv file has its split percentage count
16 | folder_dir: path to the whole corpus of the data
17 |     append_dir: path to be appended to the beginning of the directory field in the csv file
18 | save_dir: directory to which save the csv files
19 | train_split_perc: the percentage of the train set by which split the data
20 | val_split_perc: the percentage of the validation set by which split the data
21 | """
22 | assert os.path.exists(folder_dir), f"{folder_dir} does not exist"
23 | assert (
24 | train_split_perc < 1.0 and train_split_perc > 0.0
25 | ), "train split should be between 0 and 1"
26 |     assert (
27 |         val_split_perc < 1.0 and val_split_perc > 0.0
28 |     ), "validation split should be between 0 and 1"
29 |
30 | # set the seed
31 | np.random.seed(0)
32 | random.seed(0)
33 |
34 | # iterate through the folder to list all the filenames
35 | case_name = next(os.walk(folder_dir), (None, None, []))[1]
36 |     corpus_sample_count = case_name.__len__()
37 |
38 | # appending append_dir to the case name
39 | data_dir = []
40 | for case in case_name:
41 | data_dir.append(os.path.join(append_dir, case))
42 |
43 |     idx = np.arange(0, corpus_sample_count)
44 | # shuffling idx (inplace operation)
45 | np.random.shuffle(idx)
46 |
47 |     # splitting the data into train/val/test percentages (the test split is inferred automatically)
48 |     train_idx, val_idx, test_idx = np.split(
49 |         idx,
50 |         [
51 |             int(train_split_perc * corpus_sample_count),
52 |             int((train_split_perc + val_split_perc) * corpus_sample_count),
53 |         ],
54 |     )
55 |
56 | # get the corresponding id from the train,validation and test set
57 | train_sample_base_dir = np.array(data_dir)[train_idx]
58 | train_sample_case_name = np.array(case_name)[train_idx]
59 |
60 | # we do not need test split so we can merge it with validation
61 | val_idx = np.concatenate((val_idx, test_idx), axis=0)
62 | validation_sample_base_dir = np.array(data_dir)[val_idx]
63 | validation_sample_case_name = np.array(case_name)[val_idx]
64 |
65 | # create a pandas data frame
66 | train_df = pd.DataFrame(
67 | data={"base_dir": train_sample_base_dir, "case_name": train_sample_case_name},
68 | index=None,
69 | columns=None,
70 | )
71 |
72 | validation_df = pd.DataFrame(
73 | data={
74 | "base_dir": validation_sample_base_dir,
75 | "case_name": validation_sample_case_name,
76 | },
77 | index=None,
78 | columns=None,
79 | )
80 |
81 | # write the csv files to disk
82 | train_df.to_csv(
83 | save_dir + "/train.csv",
84 | header=["data_path", "case_name"],
85 | index=False,
86 | )
87 | validation_df.to_csv(
88 | save_dir + "/validation.csv",
89 | header=["data_path", "case_name"],
90 | index=False,
91 | )
92 |
93 |
94 |
95 | if __name__ == "__main__":
96 | create_train_val_test_csv_from_data_folder(
97 | # path to the train data folder
98 | folder_dir="../../BraTS2017_Training_Data",
99 | # this is inferred from where the actual experiments are run relative to the data folder
100 | append_dir="../../data/brats2017_seg/BraTS2017_Training_Data/",
101 | # where to save the train, val and test csv file relative to the current directory
102 | save_dir=".",
103 | train_split_perc=0.85,
104 | val_split_perc=0.10,
105 | )
106 |
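107 | # ---------------------------------------------------------------------------
108 | # Worked example of the split above (a sketch, assuming the 484-case BraTS 2017
109 | # training corpus): with train_split_perc=0.85 and val_split_perc=0.10, np.split
110 | # cuts the shuffled index array at int(0.85 * 484) = 411 and
111 | # int(0.95 * 484) = 459, yielding 411 train / 48 validation / 25 test indices;
112 | # the test indices are then folded back into validation for 73 cases in total.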
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/nnformer_train_test_split.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import numpy as np
4 | import pandas as pd
5 |
6 |
7 | append_dir = "../../../data/brats2017_seg/BraTS2017_Training_Data/"
8 | save_dir = "./"
9 |
10 | train_split = [
11 | "BRATS_001",
12 | "BRATS_002",
13 | "BRATS_003",
14 | "BRATS_004",
15 | "BRATS_005",
16 | "BRATS_006",
17 | "BRATS_007",
18 | "BRATS_008",
19 | "BRATS_009",
20 | "BRATS_010",
21 | "BRATS_013",
22 | "BRATS_014",
23 | "BRATS_015",
24 | "BRATS_016",
25 | "BRATS_017",
26 | "BRATS_019",
27 | "BRATS_022",
28 | "BRATS_023",
29 | "BRATS_024",
30 | "BRATS_025",
31 | "BRATS_026",
32 | "BRATS_027",
33 | "BRATS_030",
34 | "BRATS_031",
35 | "BRATS_033",
36 | "BRATS_035",
37 | "BRATS_037",
38 | "BRATS_038",
39 | "BRATS_039",
40 | "BRATS_040",
41 | "BRATS_042",
42 | "BRATS_043",
43 | "BRATS_044",
44 | "BRATS_045",
45 | "BRATS_046",
46 | "BRATS_048",
47 | "BRATS_050",
48 | "BRATS_051",
49 | "BRATS_052",
50 | "BRATS_054",
51 | "BRATS_055",
52 | "BRATS_060",
53 | "BRATS_061",
54 | "BRATS_062",
55 | "BRATS_063",
56 | "BRATS_064",
57 | "BRATS_065",
58 | "BRATS_066",
59 | "BRATS_067",
60 | "BRATS_068",
61 | "BRATS_070",
62 | "BRATS_072",
63 | "BRATS_073",
64 | "BRATS_074",
65 | "BRATS_075",
66 | "BRATS_078",
67 | "BRATS_079",
68 | "BRATS_080",
69 | "BRATS_081",
70 | "BRATS_082",
71 | "BRATS_083",
72 | "BRATS_084",
73 | "BRATS_085",
74 | "BRATS_086",
75 | "BRATS_087",
76 | "BRATS_088",
77 | "BRATS_091",
78 | "BRATS_093",
79 | "BRATS_094",
80 | "BRATS_096",
81 | "BRATS_097",
82 | "BRATS_098",
83 | "BRATS_100",
84 | "BRATS_101",
85 | "BRATS_102",
86 | "BRATS_104",
87 | "BRATS_108",
88 | "BRATS_110",
89 | "BRATS_111",
90 | "BRATS_112",
91 | "BRATS_115",
92 | "BRATS_116",
93 | "BRATS_117",
94 | "BRATS_119",
95 | "BRATS_120",
96 | "BRATS_121",
97 | "BRATS_122",
98 | "BRATS_123",
99 | "BRATS_125",
100 | "BRATS_126",
101 | "BRATS_127",
102 | "BRATS_128",
103 | "BRATS_129",
104 | "BRATS_130",
105 | "BRATS_131",
106 | "BRATS_132",
107 | "BRATS_133",
108 | "BRATS_134",
109 | "BRATS_135",
110 | "BRATS_136",
111 | "BRATS_137",
112 | "BRATS_138",
113 | "BRATS_140",
114 | "BRATS_141",
115 | "BRATS_142",
116 | "BRATS_143",
117 | "BRATS_144",
118 | "BRATS_146",
119 | "BRATS_148",
120 | "BRATS_149",
121 | "BRATS_150",
122 | "BRATS_153",
123 | "BRATS_154",
124 | "BRATS_155",
125 | "BRATS_158",
126 | "BRATS_159",
127 | "BRATS_160",
128 | "BRATS_162",
129 | "BRATS_163",
130 | "BRATS_164",
131 | "BRATS_165",
132 | "BRATS_166",
133 | "BRATS_167",
134 | "BRATS_168",
135 | "BRATS_169",
136 | "BRATS_170",
137 | "BRATS_171",
138 | "BRATS_173",
139 | "BRATS_174",
140 | "BRATS_175",
141 | "BRATS_177",
142 | "BRATS_178",
143 | "BRATS_179",
144 | "BRATS_180",
145 | "BRATS_182",
146 | "BRATS_183",
147 | "BRATS_184",
148 | "BRATS_185",
149 | "BRATS_186",
150 | "BRATS_187",
151 | "BRATS_188",
152 | "BRATS_189",
153 | "BRATS_191",
154 | "BRATS_192",
155 | "BRATS_193",
156 | "BRATS_195",
157 | "BRATS_197",
158 | "BRATS_199",
159 | "BRATS_200",
160 | "BRATS_201",
161 | "BRATS_202",
162 | "BRATS_203",
163 | "BRATS_206",
164 | "BRATS_207",
165 | "BRATS_208",
166 | "BRATS_210",
167 | "BRATS_211",
168 | "BRATS_212",
169 | "BRATS_213",
170 | "BRATS_214",
171 | "BRATS_215",
172 | "BRATS_216",
173 | "BRATS_217",
174 | "BRATS_218",
175 | "BRATS_219",
176 | "BRATS_222",
177 | "BRATS_223",
178 | "BRATS_224",
179 | "BRATS_225",
180 | "BRATS_226",
181 | "BRATS_228",
182 | "BRATS_229",
183 | "BRATS_230",
184 | "BRATS_231",
185 | "BRATS_232",
186 | "BRATS_233",
187 | "BRATS_236",
188 | "BRATS_237",
189 | "BRATS_238",
190 | "BRATS_239",
191 | "BRATS_241",
192 | "BRATS_243",
193 | "BRATS_244",
194 | "BRATS_246",
195 | "BRATS_247",
196 | "BRATS_248",
197 | "BRATS_249",
198 | "BRATS_251",
199 | "BRATS_252",
200 | "BRATS_253",
201 | "BRATS_254",
202 | "BRATS_255",
203 | "BRATS_258",
204 | "BRATS_259",
205 | "BRATS_261",
206 | "BRATS_262",
207 | "BRATS_263",
208 | "BRATS_264",
209 | "BRATS_265",
210 | "BRATS_266",
211 | "BRATS_267",
212 | "BRATS_268",
213 | "BRATS_272",
214 | "BRATS_273",
215 | "BRATS_274",
216 | "BRATS_275",
217 | "BRATS_276",
218 | "BRATS_277",
219 | "BRATS_278",
220 | "BRATS_279",
221 | "BRATS_280",
222 | "BRATS_283",
223 | "BRATS_284",
224 | "BRATS_285",
225 | "BRATS_286",
226 | "BRATS_288",
227 | "BRATS_290",
228 | "BRATS_293",
229 | "BRATS_294",
230 | "BRATS_296",
231 | "BRATS_297",
232 | "BRATS_298",
233 | "BRATS_299",
234 | "BRATS_300",
235 | "BRATS_301",
236 | "BRATS_302",
237 | "BRATS_303",
238 | "BRATS_304",
239 | "BRATS_306",
240 | "BRATS_307",
241 | "BRATS_308",
242 | "BRATS_309",
243 | "BRATS_311",
244 | "BRATS_312",
245 | "BRATS_313",
246 | "BRATS_315",
247 | "BRATS_316",
248 | "BRATS_317",
249 | "BRATS_318",
250 | "BRATS_319",
251 | "BRATS_320",
252 | "BRATS_321",
253 | "BRATS_322",
254 | "BRATS_324",
255 | "BRATS_326",
256 | "BRATS_328",
257 | "BRATS_329",
258 | "BRATS_332",
259 | "BRATS_334",
260 | "BRATS_335",
261 | "BRATS_336",
262 | "BRATS_338",
263 | "BRATS_339",
264 | "BRATS_340",
265 | "BRATS_341",
266 | "BRATS_342",
267 | "BRATS_343",
268 | "BRATS_344",
269 | "BRATS_345",
270 | "BRATS_347",
271 | "BRATS_348",
272 | "BRATS_349",
273 | "BRATS_351",
274 | "BRATS_353",
275 | "BRATS_354",
276 | "BRATS_355",
277 | "BRATS_356",
278 | "BRATS_357",
279 | "BRATS_358",
280 | "BRATS_359",
281 | "BRATS_360",
282 | "BRATS_363",
283 | "BRATS_364",
284 | "BRATS_365",
285 | "BRATS_366",
286 | "BRATS_367",
287 | "BRATS_368",
288 | "BRATS_369",
289 | "BRATS_370",
290 | "BRATS_371",
291 | "BRATS_372",
292 | "BRATS_373",
293 | "BRATS_374",
294 | "BRATS_375",
295 | "BRATS_376",
296 | "BRATS_377",
297 | "BRATS_378",
298 | "BRATS_379",
299 | "BRATS_380",
300 | "BRATS_381",
301 | "BRATS_383",
302 | "BRATS_384",
303 | "BRATS_385",
304 | "BRATS_386",
305 | "BRATS_387",
306 | "BRATS_388",
307 | "BRATS_390",
308 | "BRATS_391",
309 | "BRATS_392",
310 | "BRATS_393",
311 | "BRATS_394",
312 | "BRATS_395",
313 | "BRATS_396",
314 | "BRATS_398",
315 | "BRATS_399",
316 | "BRATS_401",
317 | "BRATS_403",
318 | "BRATS_404",
319 | "BRATS_405",
320 | "BRATS_407",
321 | "BRATS_408",
322 | "BRATS_409",
323 | "BRATS_410",
324 | "BRATS_411",
325 | "BRATS_412",
326 | "BRATS_413",
327 | "BRATS_414",
328 | "BRATS_415",
329 | "BRATS_417",
330 | "BRATS_418",
331 | "BRATS_419",
332 | "BRATS_420",
333 | "BRATS_421",
334 | "BRATS_422",
335 | "BRATS_423",
336 | "BRATS_424",
337 | "BRATS_426",
338 | "BRATS_428",
339 | "BRATS_429",
340 | "BRATS_430",
341 | "BRATS_431",
342 | "BRATS_433",
343 | "BRATS_434",
344 | "BRATS_435",
345 | "BRATS_436",
346 | "BRATS_437",
347 | "BRATS_438",
348 | "BRATS_439",
349 | "BRATS_441",
350 | "BRATS_442",
351 | "BRATS_443",
352 | "BRATS_444",
353 | "BRATS_445",
354 | "BRATS_446",
355 | "BRATS_449",
356 | "BRATS_451",
357 | "BRATS_452",
358 | "BRATS_453",
359 | "BRATS_454",
360 | "BRATS_455",
361 | "BRATS_457",
362 | "BRATS_458",
363 | "BRATS_459",
364 | "BRATS_460",
365 | "BRATS_463",
366 | "BRATS_464",
367 | "BRATS_466",
368 | "BRATS_467",
369 | "BRATS_468",
370 | "BRATS_469",
371 | "BRATS_470",
372 | "BRATS_472",
373 | "BRATS_475",
374 | "BRATS_477",
375 | "BRATS_478",
376 | "BRATS_481",
377 | "BRATS_482",
378 | "BRATS_483",
379 | "BRATS_400",
380 | "BRATS_402",
381 | "BRATS_406",
382 | "BRATS_416",
383 | "BRATS_427",
384 | "BRATS_440",
385 | "BRATS_447",
386 | "BRATS_448",
387 | "BRATS_456",
388 | "BRATS_461",
389 | "BRATS_462",
390 | "BRATS_465",
391 | "BRATS_471",
392 | "BRATS_473",
393 | "BRATS_474",
394 | "BRATS_476",
395 | "BRATS_479",
396 | "BRATS_480",
397 | "BRATS_484",
398 | "BRATS_011",
399 | "BRATS_012",
400 | "BRATS_018",
401 | "BRATS_020",
402 | "BRATS_021",
403 | "BRATS_028",
404 | "BRATS_029",
405 | "BRATS_032",
406 | "BRATS_034",
407 | "BRATS_036",
408 | "BRATS_041",
409 | "BRATS_047",
410 | "BRATS_049",
411 | "BRATS_053",
412 | "BRATS_056",
413 | "BRATS_057",
414 | "BRATS_069",
415 | "BRATS_071",
416 | "BRATS_089",
417 | "BRATS_090",
418 | "BRATS_092",
419 | "BRATS_095",
420 | "BRATS_103",
421 | "BRATS_105",
422 | "BRATS_106",
423 | "BRATS_107",
424 | "BRATS_109",
425 | "BRATS_118",
426 | "BRATS_145",
427 | "BRATS_147",
428 | "BRATS_156",
429 | "BRATS_161",
430 | "BRATS_172",
431 | "BRATS_176",
432 | "BRATS_181",
433 | "BRATS_194",
434 | "BRATS_196",
435 | "BRATS_198",
436 | "BRATS_204",
437 | "BRATS_205",
438 | "BRATS_209",
439 | "BRATS_220",
440 | "BRATS_221",
441 | "BRATS_227",
442 | "BRATS_234",
443 | "BRATS_235",
444 | "BRATS_245",
445 | "BRATS_250",
446 | "BRATS_256",
447 | "BRATS_257",
448 | "BRATS_260",
449 | "BRATS_269",
450 | "BRATS_270",
451 | "BRATS_271",
452 | "BRATS_281",
453 | "BRATS_282",
454 | "BRATS_287",
455 | "BRATS_289",
456 | "BRATS_291",
457 | "BRATS_292",
458 | "BRATS_310",
459 | "BRATS_314",
460 | "BRATS_323",
461 | "BRATS_327",
462 | "BRATS_330",
463 | "BRATS_333",
464 | "BRATS_337",
465 | "BRATS_346",
466 | "BRATS_350",
467 | "BRATS_352",
468 | "BRATS_361",
469 | "BRATS_382",
470 | "BRATS_397",
471 | ]
472 | test_split = [
473 | "BRATS_058",
474 | "BRATS_059",
475 | "BRATS_076",
476 | "BRATS_077",
477 | "BRATS_099",
478 | "BRATS_113",
479 | "BRATS_114",
480 | "BRATS_124",
481 | "BRATS_139",
482 | "BRATS_151",
483 | "BRATS_152",
484 | "BRATS_157",
485 | "BRATS_190",
486 | "BRATS_240",
487 | "BRATS_242",
488 | "BRATS_295",
489 | "BRATS_305",
490 | "BRATS_325",
491 | "BRATS_331",
492 | "BRATS_362",
493 | "BRATS_389",
494 | "BRATS_425",
495 | "BRATS_432",
496 | "BRATS_450"
497 |
498 | ]
499 |
500 | train = []
501 | for train_s in train_split:
502 | train.append(append_dir + train_s)
503 | test = []
504 | for test_s in test_split:
505 | test.append(append_dir + test_s)
506 |
507 | # create a pandas data frame
508 | train_df = pd.DataFrame(
509 | data={"base_dir": train, "case_name": train_split},
510 | index=None,
511 | columns=None,
512 | )
513 |
514 | # create a pandas data frame for the held-out test cases (saved as validation)
515 | validation_df = pd.DataFrame(
516 | data={"base_dir": test, "case_name": test_split},
517 | index=None,
518 | columns=None,
519 | )
520 |
521 |
522 | # write the csv files to disk
523 | train_df.to_csv(
524 | save_dir + "/train.csv",
525 | header=["data_path", "case_name"],
526 | index=False,
527 | )
528 | validation_df.to_csv(
529 | save_dir + "/validation.csv",
530 | header=["data_path", "case_name"],
531 | index=False,
532 | )
533 |
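534 | # ---------------------------------------------------------------------------
535 | # Optional sanity check (a minimal sketch, not part of the original script):
536 | # the hard-coded nnFormer split should cover each case exactly once,
537 | # i.e. 460 train + 24 test = 484 cases with no overlap:
538 | #
539 | #   assert len(train_split) == 460 and len(test_split) == 24
540 | #   assert set(train_split).isdisjoint(set(test_split))
541 | #   assert len(set(train_split) | set(test_split)) == 484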
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/validation.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_377,BRATS_377
3 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_434,BRATS_434
4 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_043,BRATS_043
5 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_106,BRATS_106
6 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_389,BRATS_389
7 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_451,BRATS_451
8 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_274,BRATS_274
9 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_450,BRATS_450
10 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_039,BRATS_039
11 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_390,BRATS_390
12 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_054,BRATS_054
13 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_421,BRATS_421
14 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_425,BRATS_425
15 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_129,BRATS_129
16 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_291,BRATS_291
17 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_029,BRATS_029
18 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_184,BRATS_184
19 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_371,BRATS_371
20 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_164,BRATS_164
21 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_152,BRATS_152
22 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_245,BRATS_245
23 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_203,BRATS_203
24 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_032,BRATS_032
25 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_033,BRATS_033
26 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_128,BRATS_128
27 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_186,BRATS_186
28 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_456,BRATS_456
29 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_289,BRATS_289
30 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_424,BRATS_424
31 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_399,BRATS_399
32 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_443,BRATS_443
33 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_148,BRATS_148
34 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_286,BRATS_286
35 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_453,BRATS_453
36 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_178,BRATS_178
37 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_100,BRATS_100
38 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_339,BRATS_339
39 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_449,BRATS_449
40 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_432,BRATS_432
41 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_336,BRATS_336
42 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_198,BRATS_198
43 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_244,BRATS_244
44 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_116,BRATS_116
45 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_405,BRATS_405
46 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_266,BRATS_266
47 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_073,BRATS_073
48 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_334,BRATS_334
49 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_026,BRATS_026
50 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_166,BRATS_166
51 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_338,BRATS_338
52 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_469,BRATS_469
53 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_175,BRATS_175
54 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_471,BRATS_471
55 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_040,BRATS_040
56 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_194,BRATS_194
57 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_315,BRATS_315
58 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_397,BRATS_397
59 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_089,BRATS_089
60 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_071,BRATS_071
61 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_088,BRATS_088
62 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_293,BRATS_293
63 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_243,BRATS_243
64 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_278,BRATS_278
65 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_212,BRATS_212
66 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_010,BRATS_010
67 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_360,BRATS_360
68 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_196,BRATS_196
69 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_252,BRATS_252
70 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_324,BRATS_324
71 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_193,BRATS_193
72 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_118,BRATS_118
73 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_048,BRATS_048
74 | ../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_173,BRATS_173
75 |
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/validation_fold_1.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_419,BRATS_419
3 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_228,BRATS_228
4 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_176,BRATS_176
5 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_428,BRATS_428
6 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_239,BRATS_239
7 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_091,BRATS_091
8 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_074,BRATS_074
9 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_079,BRATS_079
10 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_391,BRATS_391
11 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_432,BRATS_432
12 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_210,BRATS_210
13 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_269,BRATS_269
14 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_468,BRATS_468
15 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_267,BRATS_267
16 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_077,BRATS_077
17 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_400,BRATS_400
18 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_080,BRATS_080
19 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_363,BRATS_363
20 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_196,BRATS_196
21 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_177,BRATS_177
22 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_334,BRATS_334
23 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_337,BRATS_337
24 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_426,BRATS_426
25 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_378,BRATS_378
26 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_323,BRATS_323
27 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_394,BRATS_394
28 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_232,BRATS_232
29 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_415,BRATS_415
30 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_114,BRATS_114
31 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_133,BRATS_133
32 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_025,BRATS_025
33 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_421,BRATS_421
34 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_451,BRATS_451
35 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_117,BRATS_117
36 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_061,BRATS_061
37 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_173,BRATS_173
38 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_068,BRATS_068
39 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_209,BRATS_209
40 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_305,BRATS_305
41 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_110,BRATS_110
42 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_153,BRATS_153
43 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_348,BRATS_348
44 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_075,BRATS_075
45 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_200,BRATS_200
46 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_140,BRATS_140
47 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_229,BRATS_229
48 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_007,BRATS_007
49 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_151,BRATS_151
50 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_011,BRATS_011
51 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_440,BRATS_440
52 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_003,BRATS_003
53 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_070,BRATS_070
54 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_184,BRATS_184
55 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_341,BRATS_341
56 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_385,BRATS_385
57 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_374,BRATS_374
58 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_259,BRATS_259
59 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_401,BRATS_401
60 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_353,BRATS_353
61 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_013,BRATS_013
62 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_036,BRATS_036
63 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_029,BRATS_029
64 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_368,BRATS_368
65 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_179,BRATS_179
66 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_377,BRATS_377
67 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_005,BRATS_005
68 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_379,BRATS_379
69 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_261,BRATS_261
70 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_065,BRATS_065
71 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_157,BRATS_157
72 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_445,BRATS_445
73 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_280,BRATS_280
74 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_338,BRATS_338
75 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_296,BRATS_296
76 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_473,BRATS_473
77 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_202,BRATS_202
78 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_162,BRATS_162
79 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_092,BRATS_092
80 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_014,BRATS_014
81 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_089,BRATS_089
82 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_167,BRATS_167
83 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_329,BRATS_329
84 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_055,BRATS_055
85 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_059,BRATS_059
86 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_314,BRATS_314
87 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_192,BRATS_192
88 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_258,BRATS_258
89 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_309,BRATS_309
90 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_150,BRATS_150
91 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_100,BRATS_100
92 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_373,BRATS_373
93 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_088,BRATS_088
94 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_122,BRATS_122
95 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_484,BRATS_484
96 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_189,BRATS_189
97 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_072,BRATS_072
98 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_107,BRATS_107
99 |
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/validation_fold_2.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_447,BRATS_447
3 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_149,BRATS_149
4 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_387,BRATS_387
5 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_085,BRATS_085
6 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_031,BRATS_031
7 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_282,BRATS_282
8 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_464,BRATS_464
9 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_034,BRATS_034
10 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_154,BRATS_154
11 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_476,BRATS_476
12 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_071,BRATS_071
13 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_462,BRATS_462
14 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_118,BRATS_118
15 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_056,BRATS_056
16 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_064,BRATS_064
17 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_069,BRATS_069
18 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_262,BRATS_262
19 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_194,BRATS_194
20 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_138,BRATS_138
21 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_083,BRATS_083
22 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_313,BRATS_313
23 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_076,BRATS_076
24 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_322,BRATS_322
25 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_272,BRATS_272
26 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_409,BRATS_409
27 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_395,BRATS_395
28 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_023,BRATS_023
29 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_318,BRATS_318
30 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_032,BRATS_032
31 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_273,BRATS_273
32 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_095,BRATS_095
33 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_046,BRATS_046
34 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_347,BRATS_347
35 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_004,BRATS_004
36 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_030,BRATS_030
37 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_156,BRATS_156
38 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_027,BRATS_027
39 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_441,BRATS_441
40 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_315,BRATS_315
41 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_369,BRATS_369
42 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_433,BRATS_433
43 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_142,BRATS_142
44 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_450,BRATS_450
45 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_332,BRATS_332
46 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_423,BRATS_423
47 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_197,BRATS_197
48 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_336,BRATS_336
49 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_185,BRATS_185
50 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_104,BRATS_104
51 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_039,BRATS_039
52 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_199,BRATS_199
53 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_164,BRATS_164
54 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_382,BRATS_382
55 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_093,BRATS_093
56 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_389,BRATS_389
57 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_358,BRATS_358
58 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_203,BRATS_203
59 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_471,BRATS_471
60 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_224,BRATS_224
61 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_180,BRATS_180
62 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_165,BRATS_165
63 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_357,BRATS_357
64 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_317,BRATS_317
65 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_134,BRATS_134
66 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_446,BRATS_446
67 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_086,BRATS_086
68 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_171,BRATS_171
69 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_143,BRATS_143
70 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_416,BRATS_416
71 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_214,BRATS_214
72 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_216,BRATS_216
73 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_033,BRATS_033
74 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_434,BRATS_434
75 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_163,BRATS_163
76 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_136,BRATS_136
77 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_301,BRATS_301
78 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_456,BRATS_456
79 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_304,BRATS_304
80 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_351,BRATS_351
81 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_270,BRATS_270
82 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_191,BRATS_191
83 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_310,BRATS_310
84 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_260,BRATS_260
85 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_106,BRATS_106
86 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_390,BRATS_390
87 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_050,BRATS_050
88 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_206,BRATS_206
89 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_458,BRATS_458
90 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_242,BRATS_242
91 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_316,BRATS_316
92 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_477,BRATS_477
93 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_051,BRATS_051
94 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_175,BRATS_175
95 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_049,BRATS_049
96 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_236,BRATS_236
97 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_253,BRATS_253
98 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_022,BRATS_022
99 |
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/validation_fold_3.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_182,BRATS_182
3 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_381,BRATS_381
4 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_010,BRATS_010
5 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_105,BRATS_105
6 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_361,BRATS_361
7 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_342,BRATS_342
8 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_469,BRATS_469
9 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_193,BRATS_193
10 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_453,BRATS_453
11 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_219,BRATS_219
12 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_127,BRATS_127
13 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_413,BRATS_413
14 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_043,BRATS_043
15 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_155,BRATS_155
16 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_427,BRATS_427
17 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_312,BRATS_312
18 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_058,BRATS_058
19 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_287,BRATS_287
20 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_407,BRATS_407
21 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_067,BRATS_067
22 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_006,BRATS_006
23 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_448,BRATS_448
24 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_204,BRATS_204
25 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_311,BRATS_311
26 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_405,BRATS_405
27 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_109,BRATS_109
28 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_115,BRATS_115
29 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_359,BRATS_359
30 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_169,BRATS_169
31 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_238,BRATS_238
32 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_146,BRATS_146
33 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_249,BRATS_249
34 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_300,BRATS_300
35 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_235,BRATS_235
36 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_430,BRATS_430
37 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_291,BRATS_291
38 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_211,BRATS_211
39 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_354,BRATS_354
40 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_350,BRATS_350
41 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_178,BRATS_178
42 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_370,BRATS_370
43 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_454,BRATS_454
44 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_417,BRATS_417
45 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_148,BRATS_148
46 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_144,BRATS_144
47 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_290,BRATS_290
48 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_220,BRATS_220
49 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_130,BRATS_130
50 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_281,BRATS_281
51 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_437,BRATS_437
52 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_302,BRATS_302
53 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_472,BRATS_472
54 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_299,BRATS_299
55 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_321,BRATS_321
56 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_466,BRATS_466
57 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_482,BRATS_482
58 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_198,BRATS_198
59 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_233,BRATS_233
60 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_116,BRATS_116
61 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_339,BRATS_339
62 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_128,BRATS_128
63 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_243,BRATS_243
64 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_160,BRATS_160
65 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_096,BRATS_096
66 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_241,BRATS_241
67 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_042,BRATS_042
68 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_403,BRATS_403
69 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_257,BRATS_257
70 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_099,BRATS_099
71 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_048,BRATS_048
72 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_268,BRATS_268
73 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_201,BRATS_201
74 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_460,BRATS_460
75 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_129,BRATS_129
76 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_461,BRATS_461
77 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_009,BRATS_009
78 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_449,BRATS_449
79 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_393,BRATS_393
80 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_217,BRATS_217
81 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_422,BRATS_422
82 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_208,BRATS_208
83 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_054,BRATS_054
84 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_002,BRATS_002
85 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_264,BRATS_264
86 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_340,BRATS_340
87 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_346,BRATS_346
88 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_265,BRATS_265
89 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_274,BRATS_274
90 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_307,BRATS_307
91 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_320,BRATS_320
92 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_244,BRATS_244
93 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_364,BRATS_364
94 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_161,BRATS_161
95 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_277,BRATS_277
96 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_152,BRATS_152
97 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_459,BRATS_459
98 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_331,BRATS_331
99 |
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/validation_fold_4.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_298,BRATS_298
3 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_480,BRATS_480
4 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_479,BRATS_479
5 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_212,BRATS_212
6 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_470,BRATS_470
7 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_078,BRATS_078
8 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_439,BRATS_439
9 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_012,BRATS_012
10 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_132,BRATS_132
11 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_223,BRATS_223
12 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_019,BRATS_019
13 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_125,BRATS_125
14 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_399,BRATS_399
15 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_308,BRATS_308
16 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_181,BRATS_181
17 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_087,BRATS_087
18 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_094,BRATS_094
19 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_016,BRATS_016
20 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_335,BRATS_335
21 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_026,BRATS_026
22 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_397,BRATS_397
23 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_326,BRATS_326
24 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_047,BRATS_047
25 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_396,BRATS_396
26 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_018,BRATS_018
27 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_404,BRATS_404
28 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_174,BRATS_174
29 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_024,BRATS_024
30 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_250,BRATS_250
31 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_246,BRATS_246
32 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_017,BRATS_017
33 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_008,BRATS_008
34 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_038,BRATS_038
35 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_119,BRATS_119
36 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_442,BRATS_442
37 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_254,BRATS_254
38 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_263,BRATS_263
39 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_365,BRATS_365
40 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_205,BRATS_205
41 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_120,BRATS_120
42 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_343,BRATS_343
43 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_266,BRATS_266
44 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_278,BRATS_278
45 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_112,BRATS_112
46 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_319,BRATS_319
47 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_159,BRATS_159
48 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_383,BRATS_383
49 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_275,BRATS_275
50 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_082,BRATS_082
51 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_306,BRATS_306
52 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_168,BRATS_168
53 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_285,BRATS_285
54 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_438,BRATS_438
55 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_090,BRATS_090
56 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_097,BRATS_097
57 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_240,BRATS_240
58 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_098,BRATS_098
59 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_123,BRATS_123
60 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_292,BRATS_292
61 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_366,BRATS_366
62 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_435,BRATS_435
63 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_356,BRATS_356
64 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_113,BRATS_113
65 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_330,BRATS_330
66 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_234,BRATS_234
67 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_121,BRATS_121
68 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_225,BRATS_225
69 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_284,BRATS_284
70 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_411,BRATS_411
71 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_333,BRATS_333
72 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_362,BRATS_362
73 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_207,BRATS_207
74 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_283,BRATS_283
75 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_062,BRATS_062
76 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_328,BRATS_328
77 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_231,BRATS_231
78 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_063,BRATS_063
79 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_015,BRATS_015
80 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_041,BRATS_041
81 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_380,BRATS_380
82 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_443,BRATS_443
83 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_252,BRATS_252
84 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_044,BRATS_044
85 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_402,BRATS_402
86 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_429,BRATS_429
87 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_474,BRATS_474
88 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_035,BRATS_035
89 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_367,BRATS_367
90 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_053,BRATS_053
91 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_135,BRATS_135
92 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_188,BRATS_188
93 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_170,BRATS_170
94 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_345,BRATS_345
95 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_414,BRATS_414
96 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_344,BRATS_344
97 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_349,BRATS_349
98 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_436,BRATS_436
99 |
--------------------------------------------------------------------------------
/data/brats2017_seg/brats2017_raw_data/datameta_generator/validation_fold_5.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_475,BRATS_475
3 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_324,BRATS_324
4 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_248,BRATS_248
5 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_478,BRATS_478
6 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_001,BRATS_001
7 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_463,BRATS_463
8 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_457,BRATS_457
9 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_040,BRATS_040
10 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_431,BRATS_431
11 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_186,BRATS_186
12 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_251,BRATS_251
13 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_384,BRATS_384
14 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_102,BRATS_102
15 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_073,BRATS_073
16 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_141,BRATS_141
17 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_375,BRATS_375
18 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_020,BRATS_020
19 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_057,BRATS_057
20 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_221,BRATS_221
21 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_325,BRATS_325
22 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_295,BRATS_295
23 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_376,BRATS_376
24 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_398,BRATS_398
25 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_425,BRATS_425
26 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_226,BRATS_226
27 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_111,BRATS_111
28 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_406,BRATS_406
29 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_158,BRATS_158
30 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_452,BRATS_452
31 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_145,BRATS_145
32 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_183,BRATS_183
33 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_355,BRATS_355
34 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_352,BRATS_352
35 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_084,BRATS_084
36 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_230,BRATS_230
37 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_166,BRATS_166
38 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_245,BRATS_245
39 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_371,BRATS_371
40 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_372,BRATS_372
41 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_037,BRATS_037
42 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_455,BRATS_455
43 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_060,BRATS_060
44 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_256,BRATS_256
45 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_288,BRATS_288
46 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_195,BRATS_195
47 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_147,BRATS_147
48 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_279,BRATS_279
49 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_124,BRATS_124
50 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_424,BRATS_424
51 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_420,BRATS_420
52 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_247,BRATS_247
53 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_297,BRATS_297
54 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_276,BRATS_276
55 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_126,BRATS_126
56 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_408,BRATS_408
57 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_410,BRATS_410
58 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_303,BRATS_303
59 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_137,BRATS_137
60 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_392,BRATS_392
61 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_286,BRATS_286
62 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_108,BRATS_108
63 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_412,BRATS_412
64 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_045,BRATS_045
65 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_066,BRATS_066
66 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_187,BRATS_187
67 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_222,BRATS_222
68 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_052,BRATS_052
69 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_255,BRATS_255
70 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_483,BRATS_483
71 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_101,BRATS_101
72 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_227,BRATS_227
73 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_172,BRATS_172
74 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_293,BRATS_293
75 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_028,BRATS_028
76 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_289,BRATS_289
77 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_139,BRATS_139
78 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_327,BRATS_327
79 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_418,BRATS_418
80 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_237,BRATS_237
81 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_213,BRATS_213
82 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_218,BRATS_218
83 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_081,BRATS_081
84 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_388,BRATS_388
85 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_465,BRATS_465
86 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_190,BRATS_190
87 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_481,BRATS_481
88 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_444,BRATS_444
89 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_386,BRATS_386
90 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_294,BRATS_294
91 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_131,BRATS_131
92 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_360,BRATS_360
93 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_215,BRATS_215
94 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_467,BRATS_467
95 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_021,BRATS_021
96 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_271,BRATS_271
97 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_103,BRATS_103
98 |
--------------------------------------------------------------------------------
/data/brats2017_seg/validation.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_377,BRATS_377
3 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_434,BRATS_434
4 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_043,BRATS_043
5 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_106,BRATS_106
6 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_389,BRATS_389
7 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_451,BRATS_451
8 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_274,BRATS_274
9 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_450,BRATS_450
10 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_039,BRATS_039
11 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_390,BRATS_390
12 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_054,BRATS_054
13 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_421,BRATS_421
14 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_425,BRATS_425
15 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_129,BRATS_129
16 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_291,BRATS_291
17 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_029,BRATS_029
18 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_184,BRATS_184
19 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_371,BRATS_371
20 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_164,BRATS_164
21 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_152,BRATS_152
22 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_245,BRATS_245
23 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_203,BRATS_203
24 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_032,BRATS_032
25 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_033,BRATS_033
26 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_128,BRATS_128
27 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_186,BRATS_186
28 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_456,BRATS_456
29 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_289,BRATS_289
30 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_424,BRATS_424
31 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_399,BRATS_399
32 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_443,BRATS_443
33 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_148,BRATS_148
34 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_286,BRATS_286
35 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_453,BRATS_453
36 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_178,BRATS_178
37 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_100,BRATS_100
38 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_339,BRATS_339
39 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_449,BRATS_449
40 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_432,BRATS_432
41 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_336,BRATS_336
42 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_198,BRATS_198
43 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_244,BRATS_244
44 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_116,BRATS_116
45 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_405,BRATS_405
46 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_266,BRATS_266
47 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_073,BRATS_073
48 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_334,BRATS_334
49 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_026,BRATS_026
50 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_166,BRATS_166
51 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_338,BRATS_338
52 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_469,BRATS_469
53 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_175,BRATS_175
54 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_471,BRATS_471
55 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_040,BRATS_040
56 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_194,BRATS_194
57 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_315,BRATS_315
58 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_397,BRATS_397
59 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_089,BRATS_089
60 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_071,BRATS_071
61 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_088,BRATS_088
62 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_293,BRATS_293
63 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_243,BRATS_243
64 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_278,BRATS_278
65 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_212,BRATS_212
66 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_010,BRATS_010
67 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_360,BRATS_360
68 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_196,BRATS_196
69 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_252,BRATS_252
70 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_324,BRATS_324
71 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_193,BRATS_193
72 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_118,BRATS_118
73 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_048,BRATS_048
74 | ../../../data/brats2017_seg/BraTS2017_Training_Data/BRATS_173,BRATS_173
75 |
--------------------------------------------------------------------------------
/data/brats2021_seg/brats2021_raw_data/brats2021_seg_preprocess.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import nibabel
4 | import numpy as np
5 | from tqdm import tqdm
6 | import matplotlib.pyplot as plt
7 | from matplotlib import animation
8 | from monai.data import MetaTensor
9 | from multiprocessing import Process, Pool
10 | from sklearn.preprocessing import MinMaxScaler
11 | from monai.transforms import (
12 | Orientation,
13 | EnsureType,
14 | ConvertToMultiChannelBasedOnBratsClasses,
15 | )
16 |
17 | # whoever wrote this knew what he was doing (hint: It was me!)
18 |
19 | """
20 | data
21 | │
22 | ├───train
23 | │ ├──BraTS2021_00000
24 | │ │ └──BraTS2021_00000_flair.nii.gz
25 | │ │ └──BraTS2021_00000_t1.nii.gz
26 | │ │ └──BraTS2021_00000_t1ce.nii.gz
27 | │ │ └──BraTS2021_00000_t2.nii.gz
28 | │ │ └──BraTS2021_00000_seg.nii.gz
29 | │ ├──BraTS2021_00002
30 | │ │ └──BraTS2021_00002_flair.nii.gz
31 | │ ... └──...
32 | """
33 |
34 |
35 | class Brats2021Task1Preprocess:
36 | def __init__(
37 | self,
38 | root_dir: str,
39 | train_folder_name: str = "train",
40 | save_dir: str = "../BraTS2021_Training_Data",
41 | ):
42 | """
43 | root_dir: path to the data folder where the raw train folder is
44 | train_folder_name: name of the folder of the training data
45 |         save_dir: path to directory where each case is going to be saved as a single file containing the stacked modalities
46 | """
47 | self.train_folder_dir = os.path.join(root_dir, train_folder_name)
48 | assert os.path.exists(self.train_folder_dir)
49 | # walking through the raw training data and list all the folder names, i.e. case name
50 | self.case_name = next(os.walk(self.train_folder_dir), (None, None, []))[1]
51 | # MRI type
52 | self.MRI_TYPE = ["flair", "t1", "t1ce", "t2", "seg"]
53 | self.save_dir = save_dir
54 |
55 | def __len__(self):
56 | return self.case_name.__len__()
57 |
58 | def get_modality_fp(self, case_name: str, mri_type: str)->str:
59 | """
60 | return the modality file path
61 | case_name: patient ID
62 | mri_type: any of the ["flair", "t1", "t1ce", "t2", "seg"]
63 | """
64 | modality_fp = os.path.join(
65 | self.train_folder_dir,
66 | case_name,
67 | case_name + f"_{mri_type}.nii.gz",
68 | )
69 | return modality_fp
70 |
71 |     def load_nifti(self, fp) -> tuple:
72 | """
73 | load a nifti file
74 | fp: path to the nifti file with (nii or nii.gz) extension
75 | """
76 | nifti_data = nibabel.load(fp)
77 | # get the floating point array
78 | nifti_scan = nifti_data.get_fdata()
79 | # get affine matrix
80 | affine = nifti_data.affine
81 | return nifti_scan, affine
82 |
83 | def normalize(self, x:np.ndarray)->np.ndarray:
84 | # Transform features by scaling each feature to a given range.
85 | scaler = MinMaxScaler(feature_range=(0, 1))
86 | # (H, W, D) -> (H * W, D)
87 | normalized_1D_array = scaler.fit_transform(x.reshape(-1, x.shape[-1]))
88 | normalized_data = normalized_1D_array.reshape(x.shape)
89 | return normalized_data
90 |
91 | def orient(self, x: MetaTensor) -> MetaTensor:
92 | # orient the array to be in (Right, Anterior, Superior) scanner coordinate systems
93 | assert type(x) == MetaTensor
94 | return Orientation(axcodes="RAS")(x)
95 |
96 | def detach_meta(self, x: MetaTensor) -> np.ndarray:
97 | assert type(x) == MetaTensor
98 | return EnsureType(data_type="numpy", track_meta=False)(x)
99 |
100 | def crop_brats2021_zero_pixels(self, x: np.ndarray)->np.ndarray:
101 |         # crop away the mostly-zero border of the mri scan, keeping the useful region
102 | # crop (1, 240, 240, 155) to (1, 128, 128, 128)
103 | return x[:, 56:184, 56:184, 13:141]
104 |
105 | def preprocess_brats_modality(self, data_fp: str, is_label: bool = False)->np.ndarray:
106 | """
107 | apply preprocess stage to the modality
108 | data_fp: directory to the modality
109 | """
110 | data, affine = self.load_nifti(data_fp)
111 |         # labels do not need to be normalized
112 | if is_label:
113 | # Binary mask does not need to be float64! For saving storage purposes!
114 | data = data.astype(np.uint8)
115 | # categorical -> one-hot-encoded
116 | # (240, 240, 155) -> (3, 240, 240, 155)
117 | data = ConvertToMultiChannelBasedOnBratsClasses()(data)
118 | else:
119 | data = self.normalize(x=data)
120 | # (240, 240, 155) -> (1, 240, 240, 155)
121 | data = data[np.newaxis, ...]
122 |
123 | data = MetaTensor(x=data, affine=affine)
124 |         # for orienting the coordinate system we need the affine matrix
125 | data = self.orient(data)
126 | # detaching the meta values from the oriented array
127 | data = self.detach_meta(data)
128 |         # (C, 240, 240, 155) -> (C, 128, 128, 128)
129 | data = self.crop_brats2021_zero_pixels(data)
130 | return data
131 |
132 | def __getitem__(self, idx):
133 | case_name = self.case_name[idx]
134 | # e.g: train/BraTS2021_00000/BraTS2021_00000_flair.nii.gz
135 |
136 | # preprocess Flair modality
137 | FLAIR = self.get_modality_fp(case_name, self.MRI_TYPE[0])
138 | flair = self.preprocess_brats_modality(data_fp=FLAIR, is_label=False)
139 | flair_transv = flair.swapaxes(1, 3) # transverse plane
140 |
141 | # # preprocess T1 modality
142 | # T1 = self.get_modality_fp(case_name, self.MRI_TYPE[1])
143 | # t1 = self.preprocess_brats_modality(data_fp=T1, is_label=False)
144 | # t1_transv = t1.swapaxes(1, 3) # transverse plane
145 |
146 | # preprocess T1ce modality
147 | T1ce = self.get_modality_fp(case_name, self.MRI_TYPE[2])
148 | t1ce = self.preprocess_brats_modality(data_fp=T1ce, is_label=False)
149 | t1ce_transv = t1ce.swapaxes(1, 3) # transverse plane
150 |
151 | # preprocess T2
152 | T2 = self.get_modality_fp(case_name, self.MRI_TYPE[3])
153 | t2 = self.preprocess_brats_modality(data_fp=T2, is_label=False)
154 | t2_transv = t2.swapaxes(1, 3) # transverse plane
155 |
156 | # preprocess segmentation label
157 | Label = self.get_modality_fp(case_name, self.MRI_TYPE[4])
158 | label = self.preprocess_brats_modality(data_fp=Label, is_label=True)
159 | label_transv = label.swapaxes(1, 3) # transverse plane
160 |
161 | # stack modalities along the first dimension
162 | modalities = np.concatenate(
163 | (flair_transv, t1ce_transv, t2_transv),
164 | axis=0,
165 | )
166 | label = label_transv
167 | return modalities, label, case_name
168 |
169 | def __call__(self):
170 | print("started preprocessing brats2021...")
171 | with Pool(processes=os.cpu_count()) as multi_p:
172 | multi_p.map_async(func=self.process, iterable=range(self.__len__()))
173 | multi_p.close()
174 | multi_p.join()
175 | print("finished preprocessing brats2021...")
176 |
177 |
178 | def process(self, idx):
179 | if not os.path.exists(self.save_dir):
180 | os.makedirs(self.save_dir)
181 | # get the 4D modalities along with the label
182 | modalities, label, case_name = self.__getitem__(idx)
183 | # creating the folder for the current case id
184 | data_save_path = os.path.join(self.save_dir, case_name)
185 | if not os.path.exists(data_save_path):
186 | os.makedirs(data_save_path)
187 | # saving the preprocessed 4D modalities containing all the modalities to save path
188 | modalities_fn = data_save_path + f"/{case_name}_modalities.pt"
189 | torch.save(modalities, modalities_fn)
190 | # saving the preprocessed segmentation label to save path
191 | label_fn = data_save_path + f"/{case_name}_label.pt"
192 | torch.save(label, label_fn)
193 |
194 |
195 |
196 | def animate(input_1, input_2):
197 |     """animate a pair of image sequences of the same length on two side-by-side axes"""
198 | assert len(input_1) == len(
199 | input_2
200 |     ), f"two inputs should have the same number of frames but the first input had {len(input_1)} and the second {len(input_2)}"
201 | # set the figure and axis
202 | fig, axis = plt.subplots(1, 2, figsize=(8, 8))
203 | axis[0].set_axis_off()
204 | axis[1].set_axis_off()
205 | sequence_length = input_1.__len__()
206 | sequence = []
207 | for i in range(sequence_length):
208 | im_1 = axis[0].imshow(input_1[i], cmap="gray", animated=True)
209 | im_2 = axis[1].imshow(input_2[i], cmap="gray", animated=True)
210 | if i == 0:
211 | axis[0].imshow(input_1[i], cmap="gray") # show an initial one first
212 | axis[1].imshow(input_2[i], cmap="gray") # show an initial one first
213 |
214 | sequence.append([im_1, im_2])
215 | return animation.ArtistAnimation(
216 | fig,
217 | sequence,
218 | interval=25,
219 | blit=True,
220 | repeat_delay=100,
221 | )
222 |
223 | def viz(volume_indx: int = 1, label_indx: int = 1)->None:
224 | """
225 | pair visualization of the volume and label
226 |     volume_indx: index into the stacked modalities ["flair", "t1ce", "t2"]
227 | label_indx: index for the label segmentation ["TC" (Tumor core), "WT" (Whole tumor), "ET" (Enhancing tumor)]
228 | """
229 | assert volume_indx in [0, 1, 2]
230 | assert label_indx in [0, 1, 2]
231 | x = volume[volume_indx, ...]
232 | y = label[label_indx, ...]
233 | ani = animate(input_1=x, input_2=y)
234 | plt.show()
235 |
236 |
237 | if __name__ == "__main__":
238 | brats2021_task1_prep = Brats2021Task1Preprocess(
239 | root_dir="./",
240 | save_dir="../BraTS2021_Training_Data"
241 | )
242 | # start preprocessing
243 | brats2021_task1_prep()
244 |
245 | # visualization
246 | # volume, label, _ = brats2021_task1_prep[100]
247 | # viz(volume_indx = 0, label_indx = 2)
248 |
249 |
--------------------------------------------------------------------------------
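For a quick sanity check of the preprocessing output, a minimal sketch along these lines can load one saved case back; the case id and relative path are assumptions based on the save layout used by `Brats2021Task1Preprocess` above:

```python
# Sketch: load one preprocessed case back and check its shape.
# The case id and folder are assumptions based on the save layout above.
import torch

case_dir = "../BraTS2021_Training_Data/BraTS2021_00000"  # assumed case folder
modalities = torch.load(f"{case_dir}/BraTS2021_00000_modalities.pt")
label = torch.load(f"{case_dir}/BraTS2021_00000_label.pt")

print(modalities.shape)  # expected (3, 128, 128, 128): flair, t1ce, t2
print(label.shape)       # expected (3, 128, 128, 128): TC, WT, ET channels
```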
/data/brats2021_seg/brats2021_raw_data/datameta_generator/create_train_val_kfold_csv.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import numpy as np
4 | import pandas as pd
5 | from sklearn.model_selection import KFold
6 |
7 |
8 | def create_pandas_df(data_dict: dict) -> pd.DataFrame:
9 | """
10 | create a pandas dataframe out of data dictionary
11 | data_dict: key values of the data to be data-framed
12 | """
13 | data_frame = pd.DataFrame(
14 | data=data_dict,
15 | index=None,
16 | columns=None,
17 | )
18 | return data_frame
19 |
20 |
21 | def save_pandas_df(dataframe: pd.DataFrame, save_path: str, header: list) -> None:
22 | """
23 | save a dataframe to the save_dir with specified header
24 | dataframe: pandas dataframe to be saved
25 | save_path: the directory in which the dataframe is going to be saved
26 | header: list of headers of the to be saved csv file
27 | """
28 | assert save_path.endswith("csv")
29 | assert isinstance(dataframe, pd.DataFrame)
30 | assert (dataframe.columns.__len__() == header.__len__())
31 | dataframe.to_csv(path_or_buf=save_path, header=header, index=False)
32 |
33 |
34 | def create_train_val_kfold_csv_from_data_folder(
35 | folder_dir: str,
36 | append_dir: str = "",
37 | save_dir: str = "./",
38 | n_k_fold: int = 5,
39 | random_state: int = 42,
40 | ) -> None:
41 | """
42 | create k fold train validation csv files
43 | folder_dir: path to the whole corpus of the data
44 |     append_dir: path to be appended to the beginning of the data_path field in the csv file
45 |     save_dir: directory in which to save the csv files
46 | n_k_fold: number of folds
47 | random_state: random seed ID
48 | """
49 | assert os.path.exists(folder_dir), f"{folder_dir} does not exist"
50 |
51 | header = ["data_path", "case_name"]
52 |
53 | # iterate through the folder to list all the filenames
54 | case_name = next(os.walk(folder_dir), (None, None, []))[1]
55 | case_name = np.array(case_name)
56 | np.random.seed(random_state)
57 | np.random.shuffle(case_name)
58 |
59 | # setting up k-fold module
60 | kfold = KFold(n_splits=n_k_fold, random_state=random_state, shuffle=True)
61 | # generating k-fold train and validation set
62 | for i, (train_fold_id, validation_fold_id) in enumerate(kfold.split(case_name)):
63 | # getting the corresponding case out of the fold index
64 | train_fold_cn = case_name[train_fold_id]
65 | valid_fold_cn = case_name[validation_fold_id]
66 | # create data path pointing to the case name
67 | train_dp = [
68 | os.path.join(append_dir, case).replace("\\", "/") for case in train_fold_cn
69 | ]
70 | valid_dp = [
71 | os.path.join(append_dir, case).replace("\\", "/") for case in valid_fold_cn
72 | ]
73 |         # dictionary object to be converted to a dataframe
74 | train_data = {"data_path": train_dp, "case_name": train_fold_cn}
75 | valid_data = {"data_path": valid_dp, "case_name": valid_fold_cn}
76 |
77 | train_df = create_pandas_df(train_data)
78 | valid_df = create_pandas_df(valid_data)
79 |
80 | save_pandas_df(
81 | dataframe=train_df,
82 | save_path=f"./train_fold_{i+1}.csv",
83 | header=header,
84 | )
85 | save_pandas_df(
86 | dataframe=valid_df,
87 | save_path=f"./validation_fold_{i+1}.csv",
88 | header=header,
89 | )
90 |
91 |
92 | if __name__ == "__main__":
93 | create_train_val_kfold_csv_from_data_folder(
94 | # path to the raw train data folder
95 | folder_dir="../train",
96 | # this is inferred from where the actual experiments are run relative to the data folder
97 | append_dir="../../../data/brats2021_seg/BraTS2021_Training_Data/",
98 | # where to save the train, val and test csv file relative to the current directory
99 | save_dir="../../",
100 | )
101 |
--------------------------------------------------------------------------------
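Since the fold CSVs are written to the working directory, a small sketch like the following can verify that a generated fold pair is disjoint (file names assume the script above has already been run):

```python
# Sketch: check that train/validation case names in fold 1 do not overlap.
import pandas as pd

train = pd.read_csv("train_fold_1.csv")
valid = pd.read_csv("validation_fold_1.csv")

overlap = set(train["case_name"]) & set(valid["case_name"])
assert not overlap, f"leaky folds: {overlap}"
print(f"fold 1: {len(train)} train cases, {len(valid)} validation cases")
```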
/data/brats2021_seg/brats2021_raw_data/datameta_generator/create_train_val_test_csv.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import numpy as np
4 | import pandas as pd
5 |
6 | def create_pandas_df(data_dict: dict) -> pd.DataFrame:
7 | """
8 | create a pandas dataframe out of data dictionary
9 | data_dict: key values of the data to be data-framed
10 | """
11 | data_frame = pd.DataFrame(
12 | data=data_dict,
13 | index=None,
14 | columns=None,
15 | )
16 | return data_frame
17 |
18 |
19 | def save_pandas_df(dataframe: pd.DataFrame, save_path: str, header: list) -> None:
20 | """
21 | save a dataframe to the save_dir with specified header
22 | dataframe: pandas dataframe to be saved
23 | save_path: the directory in which the dataframe is going to be saved
24 | header: list of headers of the to be saved csv file
25 | """
26 | assert save_path.endswith("csv")
27 | assert isinstance(dataframe, pd.DataFrame)
28 | assert (dataframe.columns.__len__() == header.__len__())
29 | dataframe.to_csv(path_or_buf=save_path, header=header, index=False)
30 |
31 |
32 | def create_train_val_test_csv_from_data_folder(
33 | folder_dir: str,
34 | append_dir: str = "",
35 | save_dir: str = "./",
36 | train_split_perc: float = 0.80,
37 | val_split_perc: float = 0.05,
38 | ) -> None:
39 | """
40 | create train/validation/test csv file out of the given directory such that each csv file has its split percentage count
41 | folder_dir: path to the whole corpus of the data
42 |     append_dir: path to be appended to the beginning of the data_path field in the csv file
43 |     save_dir: directory in which to save the csv files
44 | train_split_perc: the percentage of the train set by which split the data
45 | val_split_perc: the percentage of the validation set by which split the data
46 | """
47 | assert os.path.exists(folder_dir), f"{folder_dir} does not exist"
48 | assert (
49 | train_split_perc < 1.0 and train_split_perc > 0.0
50 | ), "train split should be between 0 and 1"
51 | assert (
52 | val_split_perc < 1.0 and val_split_perc > 0.0
53 |     ), "validation split should be between 0 and 1"
54 |
55 | # set the seed
56 | np.random.seed(0)
57 | random.seed(0)
58 |
59 | # iterate through the folder to list all the filenames
60 | case_name = next(os.walk(folder_dir), (None, None, []))[1]
61 |     corpus_sample_count = case_name.__len__()
62 |
63 | # appending append_dir to the case name
64 | data_dir = []
65 | for case in case_name:
66 | data_dir.append(os.path.join(append_dir, case))
67 |
68 |     idx = np.arange(0, corpus_sample_count)
69 | # shuffling idx (inplace operation)
70 | np.random.shuffle(idx)
71 |
72 |     # splitting the data into train/val/test according to the split percentages (the test portion is inferred automatically)
73 | train_idx, val_idx, test_idx = np.split(
74 | idx,
75 | [
76 |             int(train_split_perc * corpus_sample_count),
77 |             int((train_split_perc + val_split_perc) * corpus_sample_count),
78 | ],
79 | )
80 |
81 |     # get the corresponding ids from the train, validation and test sets
82 | train_sample_base_dir = np.array(data_dir)[train_idx]
83 | train_sample_case_name = np.array(case_name)[train_idx]
84 |
85 | # we do not need test split so we can merge it with validation
86 | val_idx = np.concatenate((val_idx, test_idx), axis=0)
87 | validation_sample_base_dir = np.array(data_dir)[val_idx]
88 | validation_sample_case_name = np.array(case_name)[val_idx]
89 |
90 |
91 |     # create a pandas data frame
92 |     train_df = pd.DataFrame(
93 |         data={"base_dir": train_sample_base_dir, "case_name": train_sample_case_name},
94 |         index=None,
95 |         columns=None,
96 |     )
97 | 
98 |     validation_df = pd.DataFrame(
99 |         data={
100 |             "base_dir": validation_sample_base_dir,
101 |             "case_name": validation_sample_case_name,
102 |         },
103 |         index=None,
104 |         columns=None,
105 |     )
106 | 
107 |     # write csv files to the drive!
108 |     train_df.to_csv(
109 |         save_dir + "/train.csv",
110 |         header=["data_path", "case_name"],
111 |         index=False,
112 |     )
113 |     validation_df.to_csv(
114 |         save_dir + "/validation.csv",
115 |         header=["data_path", "case_name"],
116 |         index=False,
117 |     )
118 | 
119 | 
120 | if __name__ == "__main__":
121 |     create_train_val_test_csv_from_data_folder(
122 |         # path to the raw train data folder
123 |         folder_dir="../train",
124 |         # this is inferred from where the actual experiments are run relative to the data folder
125 |         append_dir="../../../data/brats2021_seg/BraTS2021_Training_Data/",
126 |         # where to save the train, val and test csv file relative to the current directory
127 |         save_dir="../../",
128 |         train_split_perc=0.85,
129 |         val_split_perc=0.10,
130 |     )
131 | 
--------------------------------------------------------------------------------
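The two cut points handed to `np.split` are cumulative fractions, so `train_split_perc=0.85` and `val_split_perc=0.10` yield 85%/10%/5% chunks, with the trailing 5% test chunk merged back into validation. A toy sketch of the same mechanics:

```python
# Sketch: np.split with cumulative cut points, as used in the script above.
import numpy as np

idx = np.arange(100)                   # pretend corpus of 100 cases
np.random.default_rng(0).shuffle(idx)  # shuffle in place
train, val, test = np.split(idx, [int(0.85 * 100), int(0.95 * 100)])
val = np.concatenate((val, test))      # fold the test chunk into validation
print(len(train), len(val))            # 85 15
```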
/data/brats2021_seg/brats2021_raw_data/datameta_generator/create_train_val_test_csv_v2.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import numpy as np
4 | import pandas as pd
5 |
6 |
7 | def create_pandas_df(data_dict: dict) -> pd.DataFrame:
8 | """
9 | create a pandas dataframe out of data dictionary
10 | data_dict: key values of the data to be data-framed
11 | """
12 | data_frame = pd.DataFrame(
13 | data=data_dict,
14 | index=None,
15 | columns=None,
16 | )
17 | return data_frame
18 |
19 |
20 | def save_pandas_df(dataframe: pd.DataFrame, save_path: str, header: list) -> None:
21 | """
22 | save a dataframe to the save_dir with specified header
23 | dataframe: pandas dataframe to be saved
24 | save_path: the directory in which the dataframe is going to be saved
25 | header: list of headers of the to be saved csv file
26 | """
27 | assert save_path.endswith("csv")
28 | assert isinstance(dataframe, pd.DataFrame)
29 | assert (dataframe.columns.__len__() == header.__len__())
30 | dataframe.to_csv(path_or_buf=save_path, header=header, index=False)
31 |
32 |
33 | def create_train_val_test_csv_from_data_folder(
34 | folder_dir: str,
35 | append_dir: str = "",
36 | save_dir: str = "./",
37 | train_split_perc: float = 0.80,
38 | val_split_perc: float = 0.05,
39 | ) -> None:
40 | """
41 | create train/validation/test csv file out of the given directory such that each csv file has its split percentage count
42 | folder_dir: path to the whole corpus of the data
43 |     append_dir: path to be appended to the beginning of the data_path field in the csv file
44 |     save_dir: directory in which to save the csv files
45 | train_split_perc: the percentage of the train set by which split the data
46 | val_split_perc: the percentage of the validation set by which split the data
47 | """
48 | assert os.path.exists(folder_dir), f"{folder_dir} does not exist"
49 | assert (
50 | train_split_perc < 1.0 and train_split_perc > 0.0
51 | ), "train split should be between 0 and 1"
52 | assert (
53 | val_split_perc < 1.0 and val_split_perc > 0.0
54 |     ), "validation split should be between 0 and 1"
55 |
56 | # set the seed
57 | np.random.seed(0)
58 | random.seed(0)
59 |
60 | # iterate through the folder to list all the filenames
61 | case_name = next(os.walk(folder_dir), (None, None, []))[1]
62 |
63 |     # appending append_dir to the case name based on the anatomical planes
64 | planes = ["sagittal", "coronal", "transverse"]
65 | data_fp = []
66 | label_fp = []
67 | for case in case_name:
68 | for plane_name in planes:
69 | case_data = f"{case}_{plane_name}_modalities.pt"
70 | case_label = f"{case}_{plane_name}_label.pt"
71 | # BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_sagittal_modalities.pt
72 | data_fp.append(os.path.join(append_dir, case, case_data).replace("\\", "/"))
73 | # BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_sagittal_label.pt
74 | label_fp.append(os.path.join(append_dir, case, case_label).replace("\\", "/"))
75 |
76 |     # we have three anatomical planes for each case
77 |     corpus_sample_count = case_name.__len__() * 3
78 |
79 |     idx = np.arange(0, corpus_sample_count)
80 | # shuffling idx (inplace operation)
81 | np.random.shuffle(idx)
82 |
83 |     # splitting the data into train/val/test according to the split percentages (the test portion is inferred automatically)
84 | train_idx, val_idx, test_idx = np.split(
85 | idx,
86 | [
87 |             int(train_split_perc * corpus_sample_count),
88 |             int((train_split_perc + val_split_perc) * corpus_sample_count),
89 | ],
90 | )
91 |
92 |     # get the corresponding ids from the train, validation and test sets
93 | train_sample_data_fp = np.array(data_fp)[train_idx]
94 | train_sample_label_fp = np.array(label_fp)[train_idx]
95 |
96 | # we do not need test split so we can merge it with validation
97 | val_idx = np.concatenate((val_idx, test_idx), axis=0)
98 | validation_sample_data_fp = np.array(data_fp)[val_idx]
99 | validation_sample_label_fp = np.array(label_fp)[val_idx]
100 |
101 |
102 |     # dictionary object to be converted to a dataframe
103 | train_data = {"data_path": train_sample_data_fp, "label_path": train_sample_label_fp}
104 | valid_data = {"data_path": validation_sample_data_fp, "label_path": validation_sample_label_fp}
105 |
106 | train_df = create_pandas_df(train_data)
107 | valid_df = create_pandas_df(valid_data)
108 |
109 | save_pandas_df(
110 | dataframe=train_df,
111 |         save_path="./train_v2.csv",
112 | header=list(train_data.keys()),
113 | )
114 | save_pandas_df(
115 | dataframe=valid_df,
116 |         save_path="./validation_v2.csv",
117 | header=list(valid_data.keys()),
118 | )
119 |
120 |
121 | if __name__ == "__main__":
122 | create_train_val_test_csv_from_data_folder(
123 | # path to the raw train data folder
124 | folder_dir="../train",
125 | # this is inferred from where the actual experiments are run relative to the data folder
126 | append_dir="../../../data/brats2021_seg/BraTS2021_Training_Data/",
127 | # where to save the train, val and test csv file relative to the current directory
128 | save_dir="../../",
129 | train_split_perc=0.85,
130 | val_split_perc=0.10,
131 | )
132 |
--------------------------------------------------------------------------------
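Because the v2 CSVs index per-plane files rather than case folders, a quick consistency sketch (assuming `train_v2.csv` sits next to the script above) is to check that every modalities path has a matching label path:

```python
# Sketch: each data_path should pair with the label_path of the same case/plane.
import pandas as pd

df = pd.read_csv("train_v2.csv")
for data_path, label_path in zip(df["data_path"], df["label_path"]):
    assert data_path.replace("_modalities.pt", "") == label_path.replace("_label.pt", "")
print(f"{len(df)} consistent rows")
```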
/data/brats2021_seg/validation.csv:
--------------------------------------------------------------------------------
1 | data_path,case_name
2 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01223,BraTS2021_01223
3 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00788,BraTS2021_00788
4 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01365,BraTS2021_01365
5 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01546,BraTS2021_01546
6 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01237,BraTS2021_01237
7 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01079,BraTS2021_01079
8 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01537,BraTS2021_01537
9 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01039,BraTS2021_01039
10 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00299,BraTS2021_00299
11 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01451,BraTS2021_01451
12 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01544,BraTS2021_01544
13 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01570,BraTS2021_01570
14 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01143,BraTS2021_01143
15 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01117,BraTS2021_01117
16 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01438,BraTS2021_01438
17 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01608,BraTS2021_01608
18 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00328,BraTS2021_00328
19 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01423,BraTS2021_01423
20 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01394,BraTS2021_01394
21 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00101,BraTS2021_00101
22 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00033,BraTS2021_00033
23 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00304,BraTS2021_00304
24 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01228,BraTS2021_01228
25 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01151,BraTS2021_01151
26 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00152,BraTS2021_00152
27 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00236,BraTS2021_00236
28 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01606,BraTS2021_01606
29 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00293,BraTS2021_00293
30 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00388,BraTS2021_00388
31 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00061,BraTS2021_00061
32 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01007,BraTS2021_01007
33 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00751,BraTS2021_00751
34 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00405,BraTS2021_00405
35 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00663,BraTS2021_00663
36 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00166,BraTS2021_00166
37 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01662,BraTS2021_01662
38 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01344,BraTS2021_01344
39 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01089,BraTS2021_01089
40 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01302,BraTS2021_01302
41 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01372,BraTS2021_01372
42 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00035,BraTS2021_00035
43 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00211,BraTS2021_00211
44 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01341,BraTS2021_01341
45 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01112,BraTS2021_01112
46 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01259,BraTS2021_01259
47 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01305,BraTS2021_01305
48 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01014,BraTS2021_01014
49 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01436,BraTS2021_01436
50 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00195,BraTS2021_00195
51 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01165,BraTS2021_01165
52 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01420,BraTS2021_01420
53 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00456,BraTS2021_00456
54 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00254,BraTS2021_00254
55 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00636,BraTS2021_00636
56 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01023,BraTS2021_01023
57 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01459,BraTS2021_01459
58 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00417,BraTS2021_00417
59 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00132,BraTS2021_00132
60 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01547,BraTS2021_01547
61 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01053,BraTS2021_01053
62 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00404,BraTS2021_00404
63 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00692,BraTS2021_00692
64 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00624,BraTS2021_00624
65 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00389,BraTS2021_00389
66 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01222,BraTS2021_01222
67 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00049,BraTS2021_00049
68 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01520,BraTS2021_01520
69 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00433,BraTS2021_00433
70 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01186,BraTS2021_01186
71 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00017,BraTS2021_00017
72 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01288,BraTS2021_01288
73 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01483,BraTS2021_01483
74 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00122,BraTS2021_00122
75 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01194,BraTS2021_01194
76 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01476,BraTS2021_01476
77 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01655,BraTS2021_01655
78 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01498,BraTS2021_01498
79 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01481,BraTS2021_01481
80 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01198,BraTS2021_01198
81 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00806,BraTS2021_00806
82 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00296,BraTS2021_00296
83 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01155,BraTS2021_01155
84 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01010,BraTS2021_01010
85 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00170,BraTS2021_00170
86 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01150,BraTS2021_01150
87 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01347,BraTS2021_01347
88 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00137,BraTS2021_00137
89 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00542,BraTS2021_00542
90 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00184,BraTS2021_00184
91 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00032,BraTS2021_00032
92 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01636,BraTS2021_01636
93 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00799,BraTS2021_00799
94 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00530,BraTS2021_00530
95 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00549,BraTS2021_00549
96 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00286,BraTS2021_00286
97 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01333,BraTS2021_01333
98 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00528,BraTS2021_00528
99 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01389,BraTS2021_01389
100 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00554,BraTS2021_00554
101 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01232,BraTS2021_01232
102 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00360,BraTS2021_00360
103 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01535,BraTS2021_01535
104 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00239,BraTS2021_00239
105 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00804,BraTS2021_00804
106 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00537,BraTS2021_00537
107 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01325,BraTS2021_01325
108 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01135,BraTS2021_01135
109 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01207,BraTS2021_01207
110 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01370,BraTS2021_01370
111 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01583,BraTS2021_01583
112 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01284,BraTS2021_01284
113 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01388,BraTS2021_01388
114 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01571,BraTS2021_01571
115 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01664,BraTS2021_01664
116 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01487,BraTS2021_01487
117 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01380,BraTS2021_01380
118 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01190,BraTS2021_01190
119 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00457,BraTS2021_00457
120 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00289,BraTS2021_00289
121 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00120,BraTS2021_00120
122 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01049,BraTS2021_01049
123 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01324,BraTS2021_01324
124 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01027,BraTS2021_01027
125 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00569,BraTS2021_00569
126 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00128,BraTS2021_00128
127 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01522,BraTS2021_01522
128 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00607,BraTS2021_00607
129 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01195,BraTS2021_01195
130 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01559,BraTS2021_01559
131 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00410,BraTS2021_00410
132 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00088,BraTS2021_00088
133 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00364,BraTS2021_00364
134 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01304,BraTS2021_01304
135 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01630,BraTS2021_01630
136 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01482,BraTS2021_01482
137 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00555,BraTS2021_00555
138 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00480,BraTS2021_00480
139 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00387,BraTS2021_00387
140 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01172,BraTS2021_01172
141 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01493,BraTS2021_01493
142 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01620,BraTS2021_01620
143 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00178,BraTS2021_00178
144 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01218,BraTS2021_01218
145 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00044,BraTS2021_00044
146 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01623,BraTS2021_01623
147 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01298,BraTS2021_01298
148 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00730,BraTS2021_00730
149 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01091,BraTS2021_01091
150 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00214,BraTS2021_00214
151 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00343,BraTS2021_00343
152 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01130,BraTS2021_01130
153 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00781,BraTS2021_00781
154 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00782,BraTS2021_00782
155 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01055,BraTS2021_01055
156 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01113,BraTS2021_01113
157 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00598,BraTS2021_00598
158 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01326,BraTS2021_01326
159 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01587,BraTS2021_01587
160 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01075,BraTS2021_01075
161 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01213,BraTS2021_01213
162 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01644,BraTS2021_01644
163 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01617,BraTS2021_01617
164 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00138,BraTS2021_00138
165 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01266,BraTS2021_01266
166 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00630,BraTS2021_00630
167 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01171,BraTS2021_01171
168 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01392,BraTS2021_01392
169 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00157,BraTS2021_00157
170 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01193,BraTS2021_01193
171 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00106,BraTS2021_00106
172 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01261,BraTS2021_01261
173 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00773,BraTS2021_00773
174 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01265,BraTS2021_01265
175 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00246,BraTS2021_00246
176 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00123,BraTS2021_00123
177 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00793,BraTS2021_00793
178 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01121,BraTS2021_01121
179 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00445,BraTS2021_00445
180 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01016,BraTS2021_01016
181 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01510,BraTS2021_01510
182 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01015,BraTS2021_01015
183 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00391,BraTS2021_00391
184 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01449,BraTS2021_01449
185 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01179,BraTS2021_01179
186 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01251,BraTS2021_01251
187 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01632,BraTS2021_01632
188 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_00803,BraTS2021_00803
189 | ../../../data/brats2021_seg/BraTS2021_Training_Data/BraTS2021_01100,BraTS2021_01100
190 |
--------------------------------------------------------------------------------
/dataloaders/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/dataloaders/__init__.py
--------------------------------------------------------------------------------
/dataloaders/brats2017_seg.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import pandas as pd
4 | from torch.utils.data import Dataset
5 |
6 |
7 | class Brats2017Task1Dataset(Dataset):
8 | """
9 |     Brats2017 task 1 is the segmentation corpus of the challenge. This dataset class performs dataloading
10 |     on already-preprocessed BraTS2017 data which has been resized, normalized and oriented in the (Right, Anterior, Superior) format.
11 |     The csv file associated with the data has two columns: [data_path, case_name]
12 |     MRI_TYPEs are ["Flair", "T1w", "T1gd", "T2w"] and the segmentation label is stored separately
13 | """
14 |
15 | def __init__(
16 | self, root_dir: str, is_train: bool = True, transform=None, fold_id: int = None
17 | ):
18 | """
19 |         root_dir: path to the data folder holding the train/validation csv files (e.g. data/brats2017_seg)
20 |         is_train: whether this is the train split or the validation split
21 |         transform: composition of the pytorch transforms
22 |         fold_id: k-fold index of the held-out fold (None to use the default train/validation split)
23 | """
24 | super().__init__()
25 | if fold_id is not None:
26 | csv_name = (
27 | f"train_fold_{fold_id}.csv"
28 | if is_train
29 | else f"validation_fold_{fold_id}.csv"
30 | )
31 | csv_fp = os.path.join(root_dir, csv_name)
32 | assert os.path.exists(csv_fp)
33 | else:
34 | csv_name = "train.csv" if is_train else "validation.csv"
35 | csv_fp = os.path.join(root_dir, csv_name)
36 | assert os.path.exists(csv_fp)
37 |
38 | self.csv = pd.read_csv(csv_fp)
39 | self.transform = transform
40 |
41 | def __len__(self):
42 | return self.csv.__len__()
43 |
44 | def __getitem__(self, idx):
45 | data_path = self.csv["data_path"][idx]
46 | case_name = self.csv["case_name"][idx]
47 | # e.g, BRATS_001_modalities.pt
48 | # e.g, BRATS_001_label.pt
49 | volume_fp = os.path.join(data_path, f"{case_name}_modalities.pt")
50 | label_fp = os.path.join(data_path, f"{case_name}_label.pt")
51 | # load the preprocessed tensors
52 | volume = torch.load(volume_fp)
53 | label = torch.load(label_fp)
54 | data = {"image": torch.from_numpy(volume).float(), "label": torch.from_numpy(label).float()}
55 |
56 | if self.transform:
57 | data = self.transform(data)
58 |
59 | return data
60 |
--------------------------------------------------------------------------------
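A minimal usage sketch for this dataset (the path is illustrative, and in the experiments the class is constructed through `build_dataset`/`build_dataloader` below):

```python
# Sketch: instantiate the BraTS2017 dataset directly and pull one batch.
from torch.utils.data import DataLoader
from dataloaders.brats2017_seg import Brats2017Task1Dataset

dataset = Brats2017Task1Dataset(
    root_dir="data/brats2017_seg",  # folder holding train.csv / validation.csv
    is_train=True,
    transform=None,
)
loader = DataLoader(dataset, batch_size=2, shuffle=True)
batch = next(iter(loader))
# expected roughly image (2, 4, 128, 128, 128) and label (2, 3, 128, 128, 128)
print(batch["image"].shape, batch["label"].shape)
```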
/dataloaders/brats2021_seg.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import pandas as pd
4 | from torch.utils.data import Dataset
5 |
6 | class Brats2021Task1Dataset(Dataset):
7 | """
8 |     Brats2021 task 1 is the segmentation corpus of the challenge. This dataset class performs dataloading
9 |     on already-preprocessed brats2021 data which has been resized, normalized and oriented in the (Right, Anterior, Superior) format.
10 |     The csv file associated with the data has two columns: [data_path, case_name]
11 |     MRI_TYPEs are "FLAIR", "T1", "T1CE", "T2" and the segmentation label is stored separately
12 | """
13 | def __init__(self, root_dir: str, is_train: bool = True, transform = None, fold_id: int = None):
14 | """
15 |         root_dir: path to the data folder holding the train/validation csv files (e.g. data/brats2021_seg)
16 |         is_train: whether this is the train split or the validation split
17 |         transform: composition of the pytorch transforms
18 |         fold_id: k-fold index of the held-out fold (None to use the default train/validation split)
19 | """
20 | super().__init__()
21 | if fold_id is not None:
22 | csv_name = f"train_fold_{fold_id}.csv" if is_train else f"validation_fold_{fold_id}.csv"
23 | csv_fp = os.path.join(root_dir, csv_name)
24 | assert os.path.exists(csv_fp)
25 | else:
26 | csv_name = "train.csv" if is_train else "validation.csv"
27 | csv_fp = os.path.join(root_dir, csv_name)
28 | assert os.path.exists(csv_fp)
29 |
30 | self.csv = pd.read_csv(csv_fp)
31 | self.transform = transform
32 |
33 | def __len__(self):
34 | return self.csv.__len__()
35 |
36 | def __getitem__(self, idx):
37 | data_path = self.csv["data_path"][idx]
38 | case_name = self.csv["case_name"][idx]
39 |         # e.g, BraTS2021_00000_transverse_modalities.pt
40 |         # e.g, BraTS2021_00000_transverse_label.pt
41 | volume_fp = os.path.join(data_path, f"{case_name}_modalities.pt")
42 | label_fp = os.path.join(data_path, f"{case_name}_label.pt")
43 | # load the preprocessed tensors
44 | volume = torch.load(volume_fp)
45 | label = torch.load(label_fp)
46 | data = {"image": torch.from_numpy(volume).float(), "label": torch.from_numpy(label).float()}
47 |
48 | if self.transform:
49 | data = self.transform(data)
50 |
51 | return data
--------------------------------------------------------------------------------
/dataloaders/build_dataset.py:
--------------------------------------------------------------------------------
1 | import sys
2 |
3 | sys.path.append("../")
4 |
5 | from typing import Dict
6 | from monai.data import DataLoader
7 | from augmentations.augmentations import build_augmentations
8 |
9 |
10 | ######################################################################
11 | def build_dataset(dataset_type: str, dataset_args: Dict):
12 | if dataset_type == "brats2021_seg":
13 | from .brats2021_seg import Brats2021Task1Dataset
14 |
15 | dataset = Brats2021Task1Dataset(
16 | root_dir=dataset_args["root"],
17 | is_train=dataset_args["train"],
18 | transform=build_augmentations(dataset_args["train"]),
19 | fold_id=dataset_args["fold_id"],
20 | )
21 | return dataset
22 | elif dataset_type == "brats2017_seg":
23 | from .brats2017_seg import Brats2017Task1Dataset
24 |
25 | dataset = Brats2017Task1Dataset(
26 | root_dir=dataset_args["root"],
27 | is_train=dataset_args["train"],
28 | transform=build_augmentations(dataset_args["train"]),
29 | fold_id=dataset_args["fold_id"],
30 | )
31 | return dataset
32 | else:
33 | raise ValueError(
34 |             "only brats2021 and brats2017 segmentation are currently supported!"
35 | )
36 |
37 |
38 | ######################################################################
39 | def build_dataloader(
40 | dataset, dataloader_args: Dict, config: Dict = None, train: bool = True
41 | ) -> DataLoader:
42 |     """builds the dataloader for a given dataset
43 | 
44 |     Args:
45 |         dataset (Dataset): dataset to wrap
46 |         dataloader_args (Dict): batch_size, shuffle, num_workers and drop_last settings
47 |         config (Dict, optional): experiment config (currently unused here). Defaults to None.
48 |         train (bool, optional): whether this loader is for training (currently unused here). Defaults to True.
49 | 
50 |     Returns:
51 |         DataLoader: monai DataLoader wrapping the dataset
52 |     """
53 | dataloader = DataLoader(
54 | dataset=dataset,
55 | batch_size=dataloader_args["batch_size"],
56 | shuffle=dataloader_args["shuffle"],
57 | num_workers=dataloader_args["num_workers"],
58 | drop_last=dataloader_args["drop_last"],
59 | pin_memory=True,
60 | )
61 | return dataloader
62 |
--------------------------------------------------------------------------------
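For reference, a sketch of how the two builders are typically chained; the argument values are illustrative, and the real ones come from the experiment config below:

```python
# Sketch: build a training dataset and wrap it in a dataloader.
from dataloaders.build_dataset import build_dataset, build_dataloader

trainset = build_dataset(
    dataset_type="brats2017_seg",
    dataset_args={"root": "data/brats2017_seg", "train": True, "fold_id": None},
)
trainloader = build_dataloader(
    dataset=trainset,
    dataloader_args={
        "batch_size": 2,
        "shuffle": True,
        "num_workers": 8,
        "drop_last": True,
    },
)
```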
/experiments/brats_2017/best_brats_2017_exp_dice_82.07/config.yaml:
--------------------------------------------------------------------------------
1 | # Config File
2 | # wandb parameters
3 | project: segfmr3d
4 | wandb_parameters:
5 | entity: pcvlab
6 | group: brats2017
7 | name: segformer_scratch_2LayNormEncDec_complicatedAug
8 | mode: "online"
9 | resume: False
10 | #tags: ["pcvlab", "dice", "compicated_aug", "b0_model", "layerNorminEncoderandDecoder"]
11 |
12 | # model parameters
13 | model_name: segformer3d
14 | model_parameters:
15 | in_channels: 4
16 | sr_ratios: [4, 2, 1, 1]
17 | embed_dims: [32, 64, 160, 256]
18 | patch_kernel_size: [7, 3, 3, 3]
19 | patch_stride: [4, 2, 2, 2]
20 | patch_padding: [3, 1, 1, 1]
21 | mlp_ratios: [4, 4, 4, 4]
22 | num_heads: [1, 2, 5, 8]
23 | depths: [2, 2, 2, 2]
24 | num_classes: 3
25 | decoder_dropout: 0.0
26 | decoder_head_embedding_dim: 256
27 |
28 | # loss function
29 | loss_fn:
30 | loss_type: "dice"
31 | loss_args: None
32 |
33 | # optimizer
34 | optimizer:
35 | optimizer_type: "adamw"
36 | optimizer_args:
37 | lr: 0.0001
38 | weight_decay: 0.01
39 |
40 | # schedulers
41 | warmup_scheduler:
42 |   enabled: True # should always be true
43 | warmup_epochs: 20
44 |
45 | train_scheduler:
46 | scheduler_type: 'cosine_annealing_wr'
47 | scheduler_args:
48 | t_0_epochs: 400
49 | t_mult: 1
50 | min_lr: 0.000006
51 |
52 | # exponential moving average
53 | ema:
54 | enabled: False
55 | ema_decay: 0.999
56 | val_ema_every: 1
57 |
58 | sliding_window_inference:
59 | sw_batch_size: 4
60 | roi: [128, 128, 128]
61 |
62 | # gradient clipping (not implemented yet)
63 | clip_gradients:
64 | enabled: False
65 | clip_gradients_value: 0.1
66 |
67 | # training hyperparameters
68 | training_parameters:
69 | seed: 42
70 | num_epochs: 800
71 | cutoff_epoch: 400
72 | load_optimizer: False
73 | print_every: 200
74 | calculate_metrics: True
75 | grad_accumulate_steps: 1 # default: 1
76 | checkpoint_save_dir: "model_checkpoints/best_dice_checkpoint"
77 | load_checkpoint: # not implemented yet
78 | load_full_checkpoint: False
79 | load_model_only: False
80 | load_checkpoint_path: None
81 |
82 | # dataset args
83 | dataset_parameters:
84 | dataset_type: "brats2017_seg"
85 | train_dataset_args:
86 | root: "../../../data/brats2017_seg"
87 | train: True
88 | fold_id: null
89 |
90 | val_dataset_args:
91 | root: "../../../data/brats2017_seg"
92 | train: False
93 | fold_id: null
94 |
95 | train_dataloader_args:
96 | batch_size: 2
97 | shuffle: True
98 | num_workers: 8
99 | drop_last: True
100 |
101 | val_dataloader_args:
102 | batch_size: 1
103 | shuffle: False
104 | num_workers: 6
105 | drop_last: False
--------------------------------------------------------------------------------
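run_experiment.py reads this file through its `load_config` helper; a minimal sketch of an equivalent load (assuming it does little more than parse the YAML):

```python
# Sketch: load the experiment config (assumed equivalent to load_config).
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(config["model_name"])                      # segformer3d
print(config["model_parameters"]["embed_dims"])  # [32, 64, 160, 256]
```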
/experiments/brats_2017/best_brats_2017_exp_dice_82.07/run_experiment.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import random
4 |
5 | sys.path.append("../../../")
6 |
7 | import yaml
8 | import torch
9 | import argparse
10 | import numpy as np
11 | from typing import Dict
12 | from termcolor import colored
13 | from accelerate import Accelerator
14 | from losses.losses import build_loss_fn
15 | from optimizers.optimizers import build_optimizer
16 | from optimizers.schedulers import build_scheduler
17 | from train_scripts.trainer_ddp import Segmentation_Trainer
18 | from architectures.build_architecture import build_architecture
19 | from dataloaders.build_dataset import build_dataset, build_dataloader
20 |
21 |
22 | ##################################################################################################
23 | def launch_experiment(config_path: str) -> None:
24 |     """
25 |     Builds and runs the experiment
26 |     Args:
27 |         config_path (str): path to the experiment configuration file
28 | 
29 |     Returns:
30 |         None
31 |     """
32 | # load config
33 | config = load_config(config_path)
34 |
35 | # set seed
36 | seed_everything(config)
37 |
38 | # build directories
39 | build_directories(config)
40 |
41 | # build training dataset & training data loader
42 | trainset = build_dataset(
43 | dataset_type=config["dataset_parameters"]["dataset_type"],
44 | dataset_args=config["dataset_parameters"]["train_dataset_args"],
45 | )
46 | trainloader = build_dataloader(
47 | dataset=trainset,
48 | dataloader_args=config["dataset_parameters"]["train_dataloader_args"],
49 | config=config,
50 | train=True,
51 | )
52 |
53 |     # build validation dataset & validation data loader
54 | valset = build_dataset(
55 | dataset_type=config["dataset_parameters"]["dataset_type"],
56 | dataset_args=config["dataset_parameters"]["val_dataset_args"],
57 | )
58 | valloader = build_dataloader(
59 | dataset=valset,
60 | dataloader_args=config["dataset_parameters"]["val_dataloader_args"],
61 | config=config,
62 | train=False,
63 | )
64 |
65 | # build the Model
66 | model = build_architecture(config)
67 |
68 | # set up the loss function
69 | criterion = build_loss_fn(
70 | loss_type=config["loss_fn"]["loss_type"],
71 | loss_args=config["loss_fn"]["loss_args"],
72 | )
73 |
74 | # set up the optimizer
75 | optimizer = build_optimizer(
76 | model=model,
77 | optimizer_type=config["optimizer"]["optimizer_type"],
78 | optimizer_args=config["optimizer"]["optimizer_args"],
79 | )
80 |
81 | # set up schedulers
82 | warmup_scheduler = build_scheduler(
83 | optimizer=optimizer, scheduler_type="warmup_scheduler", config=config
84 | )
85 | training_scheduler = build_scheduler(
86 | optimizer=optimizer,
87 | scheduler_type="training_scheduler",
88 | config=config,
89 | )
90 |
91 |     # use accelerate
92 | accelerator = Accelerator(
93 | log_with="wandb",
94 | gradient_accumulation_steps=config["training_parameters"][
95 | "grad_accumulate_steps"
96 | ],
97 | )
98 | accelerator.init_trackers(
99 | project_name=config["project"],
100 | config=config,
101 | init_kwargs={"wandb": config["wandb_parameters"]},
102 | )
103 |
104 | # display experiment info
105 | display_info(config, accelerator, trainset, valset, model)
106 |
107 | # convert all components to accelerate
108 | model = accelerator.prepare_model(model=model)
109 | optimizer = accelerator.prepare_optimizer(optimizer=optimizer)
110 | trainloader = accelerator.prepare_data_loader(data_loader=trainloader)
111 | valloader = accelerator.prepare_data_loader(data_loader=valloader)
112 | warmup_scheduler = accelerator.prepare_scheduler(scheduler=warmup_scheduler)
113 | training_scheduler = accelerator.prepare_scheduler(scheduler=training_scheduler)
114 |
115 | # create a single dict to hold all parameters
116 | storage = {
117 | "model": model,
118 | "trainloader": trainloader,
119 | "valloader": valloader,
120 | "criterion": criterion,
121 | "optimizer": optimizer,
122 | "warmup_scheduler": warmup_scheduler,
123 | "training_scheduler": training_scheduler,
124 | }
125 |
126 | # set up trainer
127 | trainer = Segmentation_Trainer(
128 | config=config,
129 | model=storage["model"],
130 | optimizer=storage["optimizer"],
131 | criterion=storage["criterion"],
132 | train_dataloader=storage["trainloader"],
133 | val_dataloader=storage["valloader"],
134 | warmup_scheduler=storage["warmup_scheduler"],
135 | training_scheduler=storage["training_scheduler"],
136 | accelerator=accelerator,
137 | )
138 |
139 | # run train
140 | trainer.train()
141 |
142 |
143 | ##################################################################################################
144 | def seed_everything(config) -> None:
145 | seed = config["training_parameters"]["seed"]
146 | os.environ["PYTHONHASHSEED"] = str(seed)
147 | random.seed(seed)
148 | np.random.seed(seed)
149 | torch.manual_seed(seed)
150 | torch.cuda.manual_seed(seed)
151 | torch.cuda.manual_seed_all(seed)
152 | torch.backends.cudnn.deterministic = True
153 | torch.backends.cudnn.benchmark = False
154 |
155 |
156 | ##################################################################################################
157 | def load_config(config_path: str) -> Dict:
158 | """loads the yaml config file
159 |
160 | Args:
161 |         config_path (str): path to the yaml config file
162 | 
163 |     Returns:
164 |         Dict: parsed configuration dictionary
165 | """
166 | with open(config_path, "r") as file:
167 | config = yaml.safe_load(file)
168 | return config
169 |
170 |
171 | ##################################################################################################
172 | def build_directories(config: Dict) -> None:
173 | # create necessary directories
174 | if not os.path.exists(config["training_parameters"]["checkpoint_save_dir"]):
175 | os.makedirs(config["training_parameters"]["checkpoint_save_dir"])
176 |
177 | if os.listdir(config["training_parameters"]["checkpoint_save_dir"]):
178 |         raise ValueError("checkpoint exists -- preventing file override -- rename the checkpoint directory")
179 |
180 |
181 | ##################################################################################################
182 | def display_info(config, accelerator, trainset, valset, model):
183 | # print experiment info
184 | accelerator.print(f"-------------------------------------------------------")
185 | accelerator.print(f"[info]: Experiment Info")
186 | accelerator.print(
187 | f"[info] ----- Project: {colored(config['project'], color='red')}"
188 | )
189 | accelerator.print(
190 | f"[info] ----- Group: {colored(config['wandb_parameters']['group'], color='red')}"
191 | )
192 | accelerator.print(
193 | f"[info] ----- Name: {colored(config['wandb_parameters']['name'], color='red')}"
194 | )
195 | accelerator.print(
196 | f"[info] ----- Batch Size: {colored(config['dataset_parameters']['train_dataloader_args']['batch_size'], color='red')}"
197 | )
198 | accelerator.print(
199 | f"[info] ----- Num Epochs: {colored(config['training_parameters']['num_epochs'], color='red')}"
200 | )
201 | accelerator.print(
202 | f"[info] ----- Loss: {colored(config['loss_fn']['loss_type'], color='red')}"
203 | )
204 | accelerator.print(
205 | f"[info] ----- Optimizer: {colored(config['optimizer']['optimizer_type'], color='red')}"
206 | )
207 | accelerator.print(
208 | f"[info] ----- Train Dataset Size: {colored(len(trainset), color='red')}"
209 | )
210 | accelerator.print(
211 |         f"[info] ----- Val Dataset Size: {colored(len(valset), color='red')}"
212 | )
213 |
214 | pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
215 | accelerator.print(
216 | f"[info] ----- Distributed Training: {colored('True' if torch.cuda.device_count() > 1 else 'False', color='red')}"
217 | )
218 | accelerator.print(
219 |         f"[info] ----- Num Classes: {colored(config['model_parameters']['num_classes'], color='red')}"
220 | )
221 | accelerator.print(
222 | f"[info] ----- EMA: {colored(config['ema']['enabled'], color='red')}"
223 | )
224 | accelerator.print(
225 | f"[info] ----- Load From Checkpoint: {colored(config['training_parameters']['load_checkpoint']['load_full_checkpoint'], color='red')}"
226 | )
227 | accelerator.print(
228 | f"[info] ----- Params: {colored(pytorch_total_params, color='red')}"
229 | )
230 | accelerator.print(f"-------------------------------------------------------")
231 |
232 |
233 | ##################################################################################################
234 | if __name__ == "__main__":
235 |     parser = argparse.ArgumentParser(description="Simple example of a training script.")
236 | parser.add_argument(
237 | "--config", type=str, default="config.yaml", help="path to yaml config file"
238 | )
239 | args = parser.parse_args()
240 | launch_experiment(args.config)
241 |
--------------------------------------------------------------------------------
/experiments/brats_2017/best_brats_2017_exp_dice_82.07/single_gpu_accelerate.yaml:
--------------------------------------------------------------------------------
1 | compute_environment: LOCAL_MACHINE
2 | debug: false
3 | distributed_type: 'NO'
4 | downcast_bf16: 'no'
5 | gpu_ids: '0'
6 | machine_rank: 0
7 | main_training_function: main
8 | mixed_precision: 'no'
9 | num_machines: 1
10 | num_processes: 1
11 | rdzv_backend: static
12 | same_network: true
13 | tpu_env: []
14 | tpu_use_cluster: false
15 | tpu_use_sudo: false
16 | use_cpu: false
17 |
--------------------------------------------------------------------------------
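
Note: with this file in place, a typical single-GPU launch from inside the experiment
folder (assuming the standard Hugging Face Accelerate CLI) is:
accelerate launch --config_file single_gpu_accelerate.yaml run_experiment.py --config config.yaml
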
/experiments/brats_2017/template_experiment/config.yaml:
--------------------------------------------------------------------------------
1 | # wandb parameters
2 | project: segfmr3d
3 | wandb_parameters:
4 | mode: "offline" # set this to "online" if you want to log to wandb
5 | entity: pcvlab
6 | group: brats2017
7 | name: segformer3d_adamw_batch2_diceloss
8 | resume: False
9 | tags: ["pcvlab", "dice", "b0_model", "adamw"]
10 |
11 | # model parameters
12 | model_name: segformer3d
13 | model_parameters:
14 | in_channels: 4
15 | sr_ratios: [4, 2, 1, 1]
16 | embed_dims: [32, 64, 160, 256]
17 | patch_kernel_size: [7, 3, 3, 3]
18 | patch_stride: [4, 2, 2, 2]
19 | patch_padding: [3, 1, 1, 1]
20 | mlp_ratios: [4, 4, 4, 4]
21 | num_heads: [1, 2, 5, 8]
22 | depths: [2, 2, 2, 2]
23 | num_classes: 3
24 | decoder_dropout: 0.0
25 | decoder_head_embedding_dim: 256
26 |
27 | # loss function
28 | loss_fn:
29 | loss_type: "dice"
30 |   loss_args: null # yaml null rather than the string "None"; the value is currently unused
31 |
32 | # optimizer
33 | optimizer:
34 | optimizer_type: "adamw"
35 | optimizer_args:
36 | lr: 0.0001
37 | weight_decay: 0.01
38 |
39 | # schedulers
40 | warmup_scheduler:
41 |   enabled: True # should always be true
42 | warmup_epochs: 20
43 |
44 | train_scheduler:
45 | scheduler_type: 'cosine_annealing_wr'
46 | scheduler_args:
47 | t_0_epochs: 400
48 | t_mult: 1
49 | min_lr: 0.000006
50 |
51 | # exponential moving average (not fully implemented yet)
52 | ema:
53 | enabled: False
54 | ema_decay: 0.999
55 | val_ema_every: 1
56 |
57 | sliding_window_inference:
58 | sw_batch_size: 4
59 | roi: [128, 128, 128]
60 |
61 | # gradient clipping (not implemented yet)
62 | clip_gradients:
63 | enabled: False
64 | clip_gradients_value: 0.1
65 |
66 | # training hyperparameters
67 | training_parameters:
68 | seed: 42
69 | num_epochs: 800
70 | cutoff_epoch: 400
71 | load_optimizer: False
72 | print_every: 200
73 | calculate_metrics: True
74 | grad_accumulate_steps: 1 # default: 1
75 | checkpoint_save_dir: "model_checkpoints/best_dice_checkpoint"
76 | load_checkpoint: # not implemented yet
77 | load_full_checkpoint: False
78 | load_model_only: False
79 | load_checkpoint_path: None
80 |
81 | # dataset args
82 | dataset_parameters:
83 | dataset_type: "brats2017_seg"
84 | train_dataset_args:
85 | root: "../../../data/brats2017_seg"
86 | train: True
87 |     # in case you have k-fold train and validation csv files
88 |     fold_id: null
89 |     # for example, fold_id: 1 will load train_fold_1.csv
90 |     # the default fold_id is null, which loads train.csv
91 |     # the csv files are under ../../../data/brats2017_seg/brats2017_raw_data/datameta_generator;
92 |     # copy the one you need to ../../../data/brats2017_seg and set fold_id accordingly (see the sketch after this file)
93 |
94 | val_dataset_args:
95 | root: "../../../data/brats2017_seg"
96 | train: False
97 |     # in case you have k-fold train and validation csv files
98 |     fold_id: null
99 |     # for example, fold_id: 1 will load validation_fold_1.csv
100 |     # the default fold_id is null, which loads validation.csv
101 |     # the csv files are under ../../../data/brats2017_seg/brats2017_raw_data/datameta_generator;
102 |     # copy the one you need to ../../../data/brats2017_seg and set fold_id accordingly
103 |
104 | train_dataloader_args:
105 | batch_size: 2
106 | shuffle: True
107 | num_workers: 8
108 | drop_last: True
109 |
110 | val_dataloader_args:
111 | batch_size: 1
112 | shuffle: False
113 | num_workers: 6
114 | drop_last: False
--------------------------------------------------------------------------------
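
The fold_id comments above amount to a simple filename convention. A tiny
illustration (hypothetical helper, not part of the repo) of how fold_id maps to
the csv files produced by datameta_generator:

def train_csv_name(fold_id=None) -> str:
    # fold_id None -> "train.csv"; fold_id k -> "train_fold_k.csv"
    return "train.csv" if fold_id is None else f"train_fold_{fold_id}.csv"

assert train_csv_name() == "train.csv"
assert train_csv_name(1) == "train_fold_1.csv"
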
/experiments/brats_2017/template_experiment/gpu_accelerate.yaml:
--------------------------------------------------------------------------------
1 | compute_environment: LOCAL_MACHINE
2 | debug: false
3 | distributed_type: 'NO'
4 | downcast_bf16: 'no'
5 | gpu_ids: '0'
6 | machine_rank: 0
7 | main_training_function: main
8 | mixed_precision: 'no'
9 | num_machines: 1
10 | num_processes: 1
11 | rdzv_backend: static
12 | same_network: true
13 | tpu_env: []
14 | tpu_use_cluster: false
15 | tpu_use_sudo: false
16 | use_cpu: false
17 |
--------------------------------------------------------------------------------
/experiments/brats_2017/template_experiment/run_experiment.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import random
4 |
5 | sys.path.append("../../../")
6 |
7 | import yaml
8 | import torch
9 | import argparse
10 | import numpy as np
11 | from typing import Dict
12 | from termcolor import colored
13 | from accelerate import Accelerator
14 | from losses.losses import build_loss_fn
15 | from optimizers.optimizers import build_optimizer
16 | from optimizers.schedulers import build_scheduler
17 | from train_scripts.trainer_ddp import Segmentation_Trainer
18 | from architectures.build_architecture import build_architecture
19 | from dataloaders.build_dataset import build_dataset, build_dataloader
20 |
21 |
22 | ##################################################################################################
23 | def launch_experiment(config_path: str) -> None:
24 |     """
25 |     Builds and runs the experiment described by the yaml config
26 |     Args:
27 |         config_path (str): path to the yaml configuration file
28 | 
29 |     Returns:
30 |         None
31 |     """
32 | # load config
33 | config = load_config(config_path)
34 |
35 | # set seed
36 | seed_everything(config)
37 |
38 | # build directories
39 | build_directories(config)
40 |
41 | # build training dataset & training data loader
42 | trainset = build_dataset(
43 | dataset_type=config["dataset_parameters"]["dataset_type"],
44 | dataset_args=config["dataset_parameters"]["train_dataset_args"],
45 | )
46 | trainloader = build_dataloader(
47 | dataset=trainset,
48 | dataloader_args=config["dataset_parameters"]["train_dataloader_args"],
49 | config=config,
50 | train=True,
51 | )
52 |
53 |     # build validation dataset & validation data loader
54 | valset = build_dataset(
55 | dataset_type=config["dataset_parameters"]["dataset_type"],
56 | dataset_args=config["dataset_parameters"]["val_dataset_args"],
57 | )
58 | valloader = build_dataloader(
59 | dataset=valset,
60 | dataloader_args=config["dataset_parameters"]["val_dataloader_args"],
61 | config=config,
62 | train=False,
63 | )
64 |
65 | # build the Model
66 | model = build_architecture(config)
67 |
68 | # set up the loss function
69 | criterion = build_loss_fn(
70 | loss_type=config["loss_fn"]["loss_type"],
71 | loss_args=config["loss_fn"]["loss_args"],
72 | )
73 |
74 | # set up the optimizer
75 | optimizer = build_optimizer(
76 | model=model,
77 | optimizer_type=config["optimizer"]["optimizer_type"],
78 | optimizer_args=config["optimizer"]["optimizer_args"],
79 | )
80 |
81 | # set up schedulers
82 | warmup_scheduler = build_scheduler(
83 | optimizer=optimizer, scheduler_type="warmup_scheduler", config=config
84 | )
85 | training_scheduler = build_scheduler(
86 | optimizer=optimizer,
87 | scheduler_type="training_scheduler",
88 | config=config,
89 | )
90 |
91 |     # use accelerate
92 | accelerator = Accelerator(
93 | log_with="wandb",
94 | gradient_accumulation_steps=config["training_parameters"][
95 | "grad_accumulate_steps"
96 | ],
97 | )
98 | accelerator.init_trackers(
99 | project_name=config["project"],
100 | config=config,
101 | init_kwargs={"wandb": config["wandb_parameters"]},
102 | )
103 |
104 | # display experiment info
105 | display_info(config, accelerator, trainset, valset, model)
106 |
107 | # convert all components to accelerate
108 | model = accelerator.prepare_model(model=model)
109 | optimizer = accelerator.prepare_optimizer(optimizer=optimizer)
110 | trainloader = accelerator.prepare_data_loader(data_loader=trainloader)
111 | valloader = accelerator.prepare_data_loader(data_loader=valloader)
112 | warmup_scheduler = accelerator.prepare_scheduler(scheduler=warmup_scheduler)
113 | training_scheduler = accelerator.prepare_scheduler(scheduler=training_scheduler)
114 |
115 | # create a single dict to hold all parameters
116 | storage = {
117 | "model": model,
118 | "trainloader": trainloader,
119 | "valloader": valloader,
120 | "criterion": criterion,
121 | "optimizer": optimizer,
122 | "warmup_scheduler": warmup_scheduler,
123 | "training_scheduler": training_scheduler,
124 | }
125 |
126 | # set up trainer
127 | trainer = Segmentation_Trainer(
128 | config=config,
129 | model=storage["model"],
130 | optimizer=storage["optimizer"],
131 | criterion=storage["criterion"],
132 | train_dataloader=storage["trainloader"],
133 | val_dataloader=storage["valloader"],
134 | warmup_scheduler=storage["warmup_scheduler"],
135 | training_scheduler=storage["training_scheduler"],
136 | accelerator=accelerator,
137 | )
138 |
139 | # run train
140 | trainer.train()
141 |
142 |
143 | ##################################################################################################
144 | def seed_everything(config) -> None:
145 | seed = config["training_parameters"]["seed"]
146 | os.environ["PYTHONHASHSEED"] = str(seed)
147 | random.seed(seed)
148 | np.random.seed(seed)
149 | torch.manual_seed(seed)
150 | torch.cuda.manual_seed(seed)
151 | torch.cuda.manual_seed_all(seed)
152 | torch.backends.cudnn.deterministic = True
153 | torch.backends.cudnn.benchmark = False
154 |
155 |
156 | ##################################################################################################
157 | def load_config(config_path: str) -> Dict:
158 | """loads the yaml config file
159 |
160 | Args:
161 |         config_path (str): path to the yaml config file
162 | 
163 |     Returns:
164 |         Dict: parsed configuration dictionary
165 | """
166 | with open(config_path, "r") as file:
167 | config = yaml.safe_load(file)
168 | return config
169 |
170 |
171 | ##################################################################################################
172 | def build_directories(config: Dict) -> None:
173 | # create necessary directories
174 | if not os.path.exists(config["training_parameters"]["checkpoint_save_dir"]):
175 | os.makedirs(config["training_parameters"]["checkpoint_save_dir"])
176 |
177 | if os.listdir(config["training_parameters"]["checkpoint_save_dir"]):
178 |         raise ValueError("checkpoint exists -- preventing file override -- rename the checkpoint directory")
179 |
180 |
181 | ##################################################################################################
182 | def display_info(config, accelerator, trainset, valset, model):
183 | # print experiment info
184 | accelerator.print(f"-------------------------------------------------------")
185 | accelerator.print(f"[info]: Experiment Info")
186 | accelerator.print(
187 | f"[info] ----- Project: {colored(config['project'], color='red')}"
188 | )
189 | accelerator.print(
190 | f"[info] ----- Group: {colored(config['wandb_parameters']['group'], color='red')}"
191 | )
192 | accelerator.print(
193 | f"[info] ----- Name: {colored(config['wandb_parameters']['name'], color='red')}"
194 | )
195 | accelerator.print(
196 | f"[info] ----- Batch Size: {colored(config['dataset_parameters']['train_dataloader_args']['batch_size'], color='red')}"
197 | )
198 | accelerator.print(
199 | f"[info] ----- Num Epochs: {colored(config['training_parameters']['num_epochs'], color='red')}"
200 | )
201 | accelerator.print(
202 | f"[info] ----- Loss: {colored(config['loss_fn']['loss_type'], color='red')}"
203 | )
204 | accelerator.print(
205 | f"[info] ----- Optimizer: {colored(config['optimizer']['optimizer_type'], color='red')}"
206 | )
207 | accelerator.print(
208 | f"[info] ----- Train Dataset Size: {colored(len(trainset), color='red')}"
209 | )
210 | accelerator.print(
211 |         f"[info] ----- Val Dataset Size: {colored(len(valset), color='red')}"
212 | )
213 |
214 | pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
215 | accelerator.print(
216 | f"[info] ----- Distributed Training: {colored('True' if torch.cuda.device_count() > 1 else 'False', color='red')}"
217 | )
218 | accelerator.print(
219 |         f"[info] ----- Num Classes: {colored(config['model_parameters']['num_classes'], color='red')}"
220 | )
221 | accelerator.print(
222 | f"[info] ----- EMA: {colored(config['ema']['enabled'], color='red')}"
223 | )
224 | accelerator.print(
225 | f"[info] ----- Load From Checkpoint: {colored(config['training_parameters']['load_checkpoint']['load_full_checkpoint'], color='red')}"
226 | )
227 | accelerator.print(
228 | f"[info] ----- Params: {colored(pytorch_total_params, color='red')}"
229 | )
230 | accelerator.print(f"-------------------------------------------------------")
231 |
232 |
233 | ##################################################################################################
234 | if __name__ == "__main__":
235 |     parser = argparse.ArgumentParser(description="Simple example of a training script.")
236 | parser.add_argument(
237 | "--config", type=str, default="config.yaml", help="path to yaml config file"
238 | )
239 | args = parser.parse_args()
240 | launch_experiment(args.config)
241 |
--------------------------------------------------------------------------------
/losses/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/losses/__init__.py
--------------------------------------------------------------------------------
/losses/losses.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import monai
3 | import torch.nn as nn
4 | from typing import Dict
5 | from monai import losses
6 |
7 | class CrossEntropyLoss(nn.Module):
8 | def __init__(self):
9 | super().__init__()
10 | self._loss = nn.CrossEntropyLoss(reduction="mean")
11 |
12 | def forward(self, predictions, targets):
13 | loss = self._loss(predictions, targets)
14 | return loss
15 |
16 |
17 | ###########################################################################
18 | class BinaryCrossEntropyWithLogits(nn.Module):
19 | def __init__(self):
20 | super().__init__()
21 | self._loss = nn.BCEWithLogitsLoss(reduction="mean")
22 |
23 |     def forward(self, predictions, targets):
24 |         loss = self._loss(predictions, targets)
25 | return loss
26 | ###########################################################################
27 | class DiceLoss(nn.Module):
28 | def __init__(self):
29 | super().__init__()
30 | self._loss = losses.DiceLoss(to_onehot_y=False, sigmoid=True)
31 |
32 | def forward(self, predicted, target):
33 | loss = self._loss(predicted, target)
34 | return loss
35 |
36 |
37 | ###########################################################################
38 | class DiceCELoss(nn.Module):
39 | def __init__(self):
40 | super().__init__()
41 | self._loss = losses.DiceCELoss(to_onehot_y=False, sigmoid=True)
42 |
43 | def forward(self, predicted, target):
44 | loss = self._loss(predicted, target)
45 | return loss
46 |
47 |
48 | ###########################################################################
49 | def build_loss_fn(loss_type: str, loss_args: Dict = None):
50 | if loss_type == "crossentropy":
51 | return CrossEntropyLoss()
52 |
53 | elif loss_type == "binarycrossentropy":
54 | return BinaryCrossEntropyWithLogits()
55 |
56 | elif loss_type == "dice":
57 | return DiceLoss()
58 |
59 | elif loss_type == "diceCE":
60 | return DiceCELoss()
61 |
62 | else:
63 |         raise ValueError("loss_type must be one of: crossentropy, binarycrossentropy, dice, diceCE")
64 |
--------------------------------------------------------------------------------
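
A minimal usage sketch for build_loss_fn above (shapes are illustrative): the
"dice" loss applies a sigmoid internally, so predictions are raw logits and
targets are multi-channel binary masks (e.g. the three BraTS channels TC/WT/ET):

import torch
from losses.losses import build_loss_fn

criterion = build_loss_fn(loss_type="dice")
logits = torch.randn(2, 3, 128, 128, 128)  # (B, C, D, H, W) raw network outputs
masks = torch.randint(0, 2, (2, 3, 128, 128, 128)).float()  # binary target masks
loss = criterion(logits, masks)  # scalar dice loss
print(loss.item())
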
/metrics/segmentation_metrics.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | from typing import Dict
4 | from monai.metrics import DiceMetric
5 | from monai.transforms import Compose
6 | from monai.data import decollate_batch
7 | from monai.transforms import Activations
8 | from monai.transforms import AsDiscrete
9 | from monai.inferers import sliding_window_inference
10 |
11 |
12 | ################################################################################
13 | class SlidingWindowInference:
14 | def __init__(self, roi: tuple, sw_batch_size: int):
15 | self.dice_metric = DiceMetric(
16 | include_background=True, reduction="mean_batch", get_not_nans=False
17 | )
18 | self.post_transform = Compose(
19 | [
20 | Activations(sigmoid=True),
21 | AsDiscrete(argmax=False, threshold=0.5),
22 | ]
23 | )
24 | self.sw_batch_size = sw_batch_size
25 | self.roi = roi
26 |
27 | def __call__(
28 | self, val_inputs: torch.Tensor, val_labels: torch.Tensor, model: nn.Module
29 | ):
30 | self.dice_metric.reset()
31 | logits = sliding_window_inference(
32 | inputs=val_inputs,
33 | roi_size=self.roi,
34 | sw_batch_size=self.sw_batch_size,
35 | predictor=model,
36 | overlap=0.5,
37 | )
38 | val_labels_list = decollate_batch(val_labels)
39 | val_outputs_list = decollate_batch(logits)
40 | val_output_convert = [
41 | self.post_transform(val_pred_tensor) for val_pred_tensor in val_outputs_list
42 | ]
43 | self.dice_metric(y_pred=val_output_convert, y=val_labels_list)
44 |         # compute dice score per channel
45 | acc = self.dice_metric.aggregate().cpu().numpy()
46 | avg_acc = acc.mean()
47 | # To access individual metric
48 | # TC acc: acc[0]
49 | # WT acc: acc[1]
50 | # ET acc: acc[2]
51 | return avg_acc * 100
52 |
53 |
54 | def build_metric_fn(metric_type: str, metric_arg: Dict = None):
55 | if metric_type == "sliding_window_inference":
56 | return SlidingWindowInference(
57 | roi=metric_arg["roi"],
58 | sw_batch_size=metric_arg["sw_batch_size"],
59 | )
60 | else:
61 |         raise ValueError("metric_type must be sliding_window_inference for now!")
62 |
--------------------------------------------------------------------------------
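
Usage sketch for the metric above (the stand-in predictor and shapes are
illustrative only; roi and sw_batch_size mirror the sliding_window_inference
block of the experiment configs):

import torch
import torch.nn as nn
from metrics.segmentation_metrics import build_metric_fn

metric_fn = build_metric_fn(
    metric_type="sliding_window_inference",
    metric_arg={"roi": (128, 128, 128), "sw_batch_size": 4},
)
model = nn.Conv3d(4, 3, kernel_size=1)  # stand-in predictor for illustration
inputs = torch.randn(1, 4, 128, 128, 128)
labels = torch.randint(0, 2, (1, 3, 128, 128, 128)).float()
with torch.no_grad():
    mean_dice = metric_fn(inputs, labels, model)  # mean dice over TC/WT/ET, in percent
print(mean_dice)
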
/notebooks/model_profiler.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {},
7 | "outputs": [],
8 | "source": [
9 | "import os\n",
10 | "import sys\n",
11 | "import random\n",
12 | "\n",
13 | "sys.path.append(\"../../../\")\n",
14 | "\n",
15 | "import yaml\n",
16 | "import torch\n",
17 | "import argparse\n",
18 | "import numpy as np\n",
19 | "from typing import Dict\n",
20 | "from termcolor import colored\n",
21 | "from accelerate import Accelerator\n",
22 | "from losses.losses import build_loss_fn\n",
23 | "from optimizers.optimizers import build_optimizer\n",
24 | "from optimizers.schedulers import build_scheduler\n",
25 | "from train_scripts.trainer_ddp import Segmentation_Trainer\n",
26 | "from architectures.build_architecture import build_architecture\n",
27 | "from dataloaders.build_dataset import build_dataset, build_dataloader\n",
28 | "\n",
29 | "\n",
30 | "from sklearn.metrics import (\n",
31 | " jaccard_score,\n",
32 | " accuracy_score,\n",
33 | " f1_score,\n",
34 | " recall_score,\n",
35 | " precision_score,\n",
36 | " confusion_matrix,\n",
37 | ")"
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "execution_count": 2,
43 | "metadata": {},
44 | "outputs": [],
45 | "source": [
46 | "def load_config(config_path: str) -> Dict:\n",
47 | " \"\"\"loads the yaml config file\n",
48 | "\n",
49 | " Args:\n",
50 | " config_path (str): _description_\n",
51 | "\n",
52 | " Returns:\n",
53 | " Dict: _description_\n",
54 | " \"\"\"\n",
55 | " with open(config_path, \"r\") as file:\n",
56 | " config = yaml.safe_load(file)\n",
57 | " return config"
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": 3,
63 | "metadata": {},
64 | "outputs": [],
65 | "source": [
66 | "config = load_config(\"config.yaml\")"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": 4,
72 | "metadata": {},
73 | "outputs": [],
74 | "source": [
75 | "# build validation dataset & validataion data loader\n",
76 | "valset = build_dataset(\n",
77 | " dataset_type=config[\"dataset_parameters\"][\"dataset_type\"],\n",
78 | " dataset_args=config[\"dataset_parameters\"][\"val_dataset_args\"],\n",
79 | ")\n",
80 | "valloader = build_dataloader(\n",
81 | " dataset=valset,\n",
82 | " dataloader_args=config[\"dataset_parameters\"][\"val_dataloader_args\"],\n",
83 | " config=config,\n",
84 | " train=False,\n",
85 | ")"
86 | ]
87 | },
88 | {
89 | "cell_type": "code",
90 | "execution_count": 10,
91 | "metadata": {},
92 | "outputs": [],
93 | "source": [
94 | "model = build_architecture(config)\n",
95 | "model = model.to(\"cuda:2\")\n",
96 | "model = model.eval()"
97 | ]
98 | },
99 | {
100 | "cell_type": "code",
101 | "execution_count": 11,
102 | "metadata": {},
103 | "outputs": [
104 | {
105 | "name": "stdout",
106 | "output_type": "stream",
107 | "text": [
108 | "Computational complexity: 12.8 GMac\n",
109 | "Number of parameters: 4.51 M \n"
110 | ]
111 | }
112 | ],
113 | "source": [
114 | "import torchvision.models as models\n",
115 | "import torch\n",
116 | "from ptflops import get_model_complexity_info\n",
117 | "\n",
118 | "with torch.cuda.device(0):\n",
119 | " net = model\n",
120 | " macs, params = get_model_complexity_info(\n",
121 | " net, (4, 128, 128, 128), as_strings=True, print_per_layer_stat=False, verbose=False\n",
122 | " )\n",
123 | " print(\"{:<30} {:<8}\".format(\"Computational complexity: \", macs))\n",
124 | " print(\"{:<30} {:<8}\".format(\"Number of parameters: \", params))"
125 | ]
126 | },
127 | {
128 | "cell_type": "code",
129 | "execution_count": 12,
130 | "metadata": {},
131 | "outputs": [],
132 | "source": [
133 | "import torch\n",
134 | "from torch.profiler import profile, record_function, ProfilerActivity"
135 | ]
136 | },
137 | {
138 | "cell_type": "code",
139 | "execution_count": 13,
140 | "metadata": {},
141 | "outputs": [
142 | {
143 | "name": "stdout",
144 | "output_type": "stream",
145 | "text": [
146 | "\n",
147 | "model parameter count = 4511939\n"
148 | ]
149 | }
150 | ],
151 | "source": [
152 | "inputs = torch.randn(1, 4, 128, 128, 128)\n",
153 | "\n",
154 | "pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
155 | "print(\"\\nmodel parameter count = \", pytorch_total_params)"
156 | ]
157 | },
158 | {
159 | "cell_type": "code",
160 | "execution_count": 14,
161 | "metadata": {},
162 | "outputs": [
163 | {
164 | "data": {
165 | "text/plain": [
166 | "18072064"
167 | ]
168 | },
169 | "execution_count": 14,
170 | "metadata": {},
171 | "output_type": "execute_result"
172 | }
173 | ],
174 | "source": [
175 | "def estimate_memory_inference(\n",
176 | " model, sample_input, batch_size=1, use_amp=False, device=0\n",
177 | "):\n",
178 | " \"\"\"Predict the maximum memory usage of the model.\n",
179 | " Args:\n",
180 | " optimizer_type (Type): the class name of the optimizer to instantiate\n",
181 | " model (nn.Module): the neural network model\n",
182 | " sample_input (torch.Tensor): A sample input to the network. It should be\n",
183 | " a single item, not a batch, and it will be replicated batch_size times.\n",
184 | " batch_size (int): the batch size\n",
185 | " use_amp (bool): whether to estimate based on using mixed precision\n",
186 | " device (torch.device): the device to use\n",
187 | " \"\"\"\n",
188 | " # Reset model and optimizer\n",
189 | " model.cpu()\n",
190 | " a = torch.cuda.memory_allocated(device)\n",
191 | " model.to(device)\n",
192 | " b = torch.cuda.memory_allocated(device)\n",
193 | " model_memory = b - a\n",
194 | " model_input = sample_input # .unsqueeze(0).repeat(batch_size, 1)\n",
195 | " output = model(model_input.to(device)).sum()\n",
196 | " total_memory = model_memory\n",
197 | "\n",
198 | " return total_memory\n",
199 | "\n",
200 | "\n",
201 | "estimate_memory_inference(model, inputs)"
202 | ]
203 | },
204 | {
205 | "cell_type": "code",
206 | "execution_count": 19,
207 | "metadata": {},
208 | "outputs": [
209 | {
210 | "name": "stderr",
211 | "output_type": "stream",
212 | "text": [
213 | "STAGE:2024-01-17 05:53:07 119795:119795 ActivityProfilerController.cpp:312] Completed Stage: Warm Up\n"
214 | ]
215 | },
216 | {
217 | "name": "stdout",
218 | "output_type": "stream",
219 | "text": [
220 | "-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ \n",
221 | " Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls \n",
222 | "-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ \n",
223 | " model_inference 39.36% 348.252ms 100.00% 884.701ms 884.701ms 1 \n",
224 | " aten::conv3d 0.20% 1.735ms 23.09% 204.275ms 11.349ms 18 \n",
225 | " aten::convolution 0.05% 430.000us 23.07% 204.130ms 11.341ms 18 \n",
226 | " aten::_convolution 0.03% 296.000us 23.02% 203.700ms 11.317ms 18 \n",
227 | " aten::mkldnn_convolution 22.67% 200.561ms 22.71% 200.906ms 14.350ms 14 \n",
228 | " aten::copy_ 13.06% 115.551ms 13.06% 115.551ms 1.179ms 98 \n",
229 | " aten::linear 0.07% 605.000us 12.99% 114.891ms 2.611ms 44 \n",
230 | " aten::addmm 1.79% 15.842ms 10.67% 94.431ms 3.373ms 28 \n",
231 | " aten::matmul 0.06% 498.000us 8.36% 73.987ms 2.312ms 32 \n",
232 | " aten::bmm 6.09% 53.876ms 6.09% 53.876ms 3.367ms 16 \n",
233 | "-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ \n",
234 | "Self CPU time total: 884.701ms\n",
235 | "\n"
236 | ]
237 | },
238 | {
239 | "name": "stderr",
240 | "output_type": "stream",
241 | "text": [
242 | "STAGE:2024-01-17 05:53:08 119795:119795 ActivityProfilerController.cpp:318] Completed Stage: Collection\n",
243 | "STAGE:2024-01-17 05:53:08 119795:119795 ActivityProfilerController.cpp:322] Completed Stage: Post Processing\n"
244 | ]
245 | }
246 | ],
247 | "source": [
248 | "model.cpu()\n",
249 | "with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:\n",
250 | " with record_function(\"model_inference\"):\n",
251 | " model(inputs)\n",
252 | "\n",
253 | "print(prof.key_averages().table(sort_by=\"cpu_time_total\", row_limit=10))"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": 25,
259 | "metadata": {},
260 | "outputs": [
261 | {
262 | "name": "stderr",
263 | "output_type": "stream",
264 | "text": [
265 | "STAGE:2024-01-17 05:54:34 119795:119795 ActivityProfilerController.cpp:312] Completed Stage: Warm Up\n",
266 | "STAGE:2024-01-17 05:54:34 119795:119795 ActivityProfilerController.cpp:318] Completed Stage: Collection\n",
267 | "STAGE:2024-01-17 05:54:34 119795:119795 ActivityProfilerController.cpp:322] Completed Stage: Post Processing\n"
268 | ]
269 | },
270 | {
271 | "name": "stdout",
272 | "output_type": "stream",
273 | "text": [
274 | "------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \n",
275 | " Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls \n",
276 | "------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \n",
277 | " model_inference 26.86% 4.067ms 99.83% 15.114ms 15.114ms 0.000us 0.00% 7.423ms 7.423ms 1 \n",
278 | " aten::convolution 1.41% 214.000us 20.34% 3.080ms 171.111us 0.000us 0.00% 1.993ms 110.722us 18 \n",
279 | " aten::_convolution 1.16% 175.000us 18.93% 2.866ms 159.222us 0.000us 0.00% 1.993ms 110.722us 18 \n",
280 | " aten::conv3d 1.27% 193.000us 20.73% 3.139ms 174.389us 0.000us 0.00% 1.930ms 107.222us 18 \n",
281 | " aten::cudnn_convolution 8.88% 1.344ms 15.32% 2.319ms 231.900us 1.495ms 20.14% 1.517ms 151.700us 10 \n",
282 | " aten::linear 2.11% 320.000us 15.77% 2.387ms 54.250us 0.000us 0.00% 1.136ms 25.818us 44 \n",
283 | " aten::addmm 5.18% 785.000us 6.88% 1.041ms 37.179us 1.043ms 14.05% 1.043ms 37.250us 28 \n",
284 | " aten::matmul 1.66% 251.000us 9.93% 1.503ms 46.969us 0.000us 0.00% 916.000us 28.625us 32 \n",
285 | " aten::bmm 2.15% 326.000us 2.83% 429.000us 26.812us 808.000us 10.89% 808.000us 50.500us 16 \n",
286 | " aten::native_layer_norm 3.99% 604.000us 7.20% 1.090ms 38.929us 758.000us 10.21% 758.000us 27.071us 28 \n",
287 | "------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ \n",
288 | "Self CPU time total: 15.140ms\n",
289 | "Self CUDA time total: 7.423ms\n",
290 | "\n"
291 | ]
292 | }
293 | ],
294 | "source": [
295 | "model = model.to(\"cuda:2\")\n",
296 | "inputs = torch.randn(1, 4, 128, 128, 128).to(\"cuda:2\")\n",
297 | "\n",
298 | "with profile(\n",
299 | " activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], record_shapes=True\n",
300 | ") as prof:\n",
301 | " with record_function(\"model_inference\"):\n",
302 | " model(inputs)\n",
303 | "\n",
304 | "print(prof.key_averages().table(sort_by=\"cuda_time_total\", row_limit=10))"
305 | ]
306 | },
307 | {
308 | "cell_type": "code",
309 | "execution_count": null,
310 | "metadata": {},
311 | "outputs": [],
312 | "source": []
313 | },
314 | {
315 | "cell_type": "code",
316 | "execution_count": null,
317 | "metadata": {},
318 | "outputs": [],
319 | "source": []
320 | }
321 | ],
322 | "metadata": {
323 | "kernelspec": {
324 | "display_name": "corev2",
325 | "language": "python",
326 | "name": "python3"
327 | },
328 | "language_info": {
329 | "codemirror_mode": {
330 | "name": "ipython",
331 | "version": 3
332 | },
333 | "file_extension": ".py",
334 | "mimetype": "text/x-python",
335 | "name": "python",
336 | "nbconvert_exporter": "python",
337 | "pygments_lexer": "ipython3",
338 | "version": "3.11.7"
339 | }
340 | },
341 | "nbformat": 4,
342 | "nbformat_minor": 2
343 | }
344 |
--------------------------------------------------------------------------------
/optimizers/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/optimizers/__init__.py
--------------------------------------------------------------------------------
/optimizers/optimizers.py:
--------------------------------------------------------------------------------
1 | from typing import Dict
2 | import torch.optim as optim
3 |
4 |
5 | ######################################################################
6 | def optim_adam(model, optimizer_args):
7 | adam = optim.Adam(
8 | model.parameters(),
9 | lr=optimizer_args["lr"],
10 |         weight_decay=optimizer_args.get("weight_decay", 0.0),
11 | )
12 | return adam
13 |
14 |
15 | ######################################################################
16 | def optim_sgd(model, optimizer_args):
17 |     sgd = optim.SGD(
18 |         model.parameters(),
19 |         lr=optimizer_args["lr"],
20 |         weight_decay=optimizer_args.get("weight_decay", 0.0),
21 |         momentum=optimizer_args.get("momentum", 0.0),
22 |     )
23 |     return sgd
24 |
25 |
26 | ######################################################################
27 | def optim_adamw(model, optimizer_args):
28 |     adamw = optim.AdamW(
29 |         model.parameters(),
30 |         lr=optimizer_args["lr"],
31 |         weight_decay=optimizer_args["weight_decay"],
32 |         # amsgrad=True,
33 |     )
34 |     return adamw
35 |
36 |
37 | ######################################################################
38 | def build_optimizer(model, optimizer_type: str, optimizer_args: Dict):
39 | if optimizer_type == "adam":
40 | return optim_adam(model, optimizer_args)
41 | elif optimizer_type == "adamw":
42 | return optim_adamw(model, optimizer_args)
43 | elif optimizer_type == "sgd":
44 | return optim_sgd(model, optimizer_args)
45 | else:
46 |         raise ValueError("optimizer_type must be adam, adamw, or sgd for now")
47 |
--------------------------------------------------------------------------------
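
Usage sketch matching the optimizer block of the experiment configs (the
stand-in model is illustrative only):

import torch.nn as nn
from optimizers.optimizers import build_optimizer

model = nn.Linear(8, 3)  # stand-in model
optimizer = build_optimizer(
    model=model,
    optimizer_type="adamw",
    optimizer_args={"lr": 0.0001, "weight_decay": 0.01},
)
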
/optimizers/schedulers.py:
--------------------------------------------------------------------------------
1 | from typing import Dict
2 | import torch.optim as optim
3 | from torch.optim.lr_scheduler import LRScheduler
4 |
5 |
6 | ##################################################################################################
7 | def warmup_lr_scheduler(config, optimizer):
8 | """
9 | Linearly ramps up the learning rate within X
10 | number of epochs to the working epoch.
11 | Args:
12 | optimizer (_type_): _description_
13 | warmup_epochs (_type_): _description_
14 | warmup_lr (_type_): warmup lr should be the starting lr we want.
15 | """
16 | lambda1 = lambda epoch: (
17 | (epoch + 1) * 1.0 / config["warmup_scheduler"]["warmup_epochs"]
18 | )
19 | scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1, verbose=False)
20 | return scheduler
21 |
22 |
23 | ##################################################################################################
24 | def training_lr_scheduler(config, optimizer):
25 | """
26 |     Wraps a standard pytorch scheduler selected by train_scheduler.scheduler_type
27 | """
28 | scheduler_type = config["train_scheduler"]["scheduler_type"]
29 | if scheduler_type == "reducelronplateau":
30 | scheduler = optim.lr_scheduler.ReduceLROnPlateau(
31 | optimizer,
32 | factor=0.1,
33 | mode=config["train_scheduler"]["mode"],
34 | patience=config["train_scheduler"]["patience"],
35 | verbose=False,
36 | min_lr=config["train_scheduler"]["scheduler_args"]["min_lr"],
37 | )
38 | return scheduler
39 | elif scheduler_type == "cosine_annealing_wr":
40 | scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(
41 | optimizer,
42 | T_0=config["train_scheduler"]["scheduler_args"]["t_0_epochs"],
43 | T_mult=config["train_scheduler"]["scheduler_args"]["t_mult"],
44 | eta_min=config["train_scheduler"]["scheduler_args"]["min_lr"],
45 | last_epoch=-1,
46 | verbose=False,
47 | )
48 | return scheduler
49 | elif scheduler_type == "poly_lr":
50 | scheduler = optim.lr_scheduler.PolynomialLR(
51 | optimizer=optimizer,
52 | total_iters=5,
53 | power=config["train_scheduler"]["scheduler_args"]["power"],
54 | last_epoch=-1,
55 | )
56 | return scheduler
57 | else:
58 | raise NotImplementedError("Specified Scheduler Is Not Implemented")
59 |
60 |
61 | ##################################################################################################
62 | def build_scheduler(
63 | optimizer: optim.Optimizer, scheduler_type: str, config
64 | ) -> LRScheduler:
65 | """generates the learning rate scheduler
66 |
67 | Args:
68 | optimizer (optim.Optimizer): pytorch optimizer
69 | scheduler_type (str): type of scheduler
70 |
71 | Returns:
72 |         LRScheduler: the configured learning rate scheduler
73 | """
74 | if scheduler_type == "warmup_scheduler":
75 | scheduler = warmup_lr_scheduler(config=config, optimizer=optimizer)
76 | return scheduler
77 | elif scheduler_type == "training_scheduler":
78 | scheduler = training_lr_scheduler(config=config, optimizer=optimizer)
79 | return scheduler
80 | else:
81 | raise ValueError("Invalid Input -- Check scheduler_type")
82 |
--------------------------------------------------------------------------------
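
Usage sketch: build both schedulers from a config dict shaped like the
experiment configs (values copied from them), then step the warmup scheduler
during the first warmup_epochs epochs and the training scheduler afterwards,
as in the plain-PyTorch sketch after the best-experiment config above:

import torch.nn as nn
import torch.optim as optim
from optimizers.schedulers import build_scheduler

config = {
    "warmup_scheduler": {"enabled": True, "warmup_epochs": 20},
    "train_scheduler": {
        "scheduler_type": "cosine_annealing_wr",
        "scheduler_args": {"t_0_epochs": 400, "t_mult": 1, "min_lr": 0.000006},
    },
}
optimizer = optim.AdamW(nn.Linear(8, 3).parameters(), lr=0.0001)  # stand-in
warmup = build_scheduler(optimizer=optimizer, scheduler_type="warmup_scheduler", config=config)
training = build_scheduler(optimizer=optimizer, scheduler_type="training_scheduler", config=config)
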
/requirements.txt:
--------------------------------------------------------------------------------
1 | accelerate==0.23.0
2 | datasets==2.14.5
3 | evaluate==0.4.1
4 | einops==0.7.0
5 | h5py==3.9.0
6 | imageio==2.33.0
7 | kornia==0.7.2
8 | kornia-rs==0.1.3
9 | lightning==2.0.9
10 | matplotlib==3.7.1
11 | matplotlib-inline==0.1.6
12 | monai==1.2.0
13 | nibabel==5.1.0
14 | opencv-contrib-python==4.7.0.72
15 | opencv-python==4.7.0.72
16 | opencv-python-headless==4.7.0.72
17 | opt-einsum==3.3.0
18 | protobuf==4.22.3
19 | requests==2.28.1
20 | safetensors==0.3.1
21 | scikit-image==0.23.1
22 | scikit-learn==1.2.2
23 | scipy==1.13.0
24 | termcolor==2.3.0
25 | timm==0.6.13
26 | torch-geometric==2.3.0
27 | torch-summary==1.4.5
28 | torchmetrics==0.11.4
29 | torchsummary==1.5.1
30 | tqdm==4.65.0
31 | typing_extensions==4.8.0
32 | wandb==0.15.12
33 |
34 |
--------------------------------------------------------------------------------
/resources/acdc_quant.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/acdc_quant.png
--------------------------------------------------------------------------------
/resources/acdc_segformer_3D.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/acdc_segformer_3D.png
--------------------------------------------------------------------------------
/resources/brats_plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/brats_plot.png
--------------------------------------------------------------------------------
/resources/brats_quant.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/brats_quant.png
--------------------------------------------------------------------------------
/resources/brats_segformer_3D.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/brats_segformer_3D.png
--------------------------------------------------------------------------------
/resources/param_count_table.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/param_count_table.PNG
--------------------------------------------------------------------------------
/resources/segformer_3D.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/segformer_3D.png
--------------------------------------------------------------------------------
/resources/synapse_quant.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/synapse_quant.png
--------------------------------------------------------------------------------
/resources/synapse_segformer_3D.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/resources/synapse_segformer_3D.png
--------------------------------------------------------------------------------
/train_scripts/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OSUPCVLab/SegFormer3D/a74cddda15806cc8a4428f9a36bd301a69ddb440/train_scripts/__init__.py
--------------------------------------------------------------------------------
/train_scripts/utils.py:
--------------------------------------------------------------------------------
1 | import random
2 | import numpy as np
3 | import torch
4 | import wandb
5 | import cv2
6 | import os
7 |
8 | """
9 | Utils File Used for Training/Validation/Testing
10 | """
11 |
12 |
13 | ##################################################################################################
14 | def log_metrics(**kwargs) -> None:
15 | # data to be logged
16 | log_data = {}
17 | log_data.update(kwargs)
18 |
19 | # log the data
20 | wandb.log(log_data)
21 |
22 |
23 | ##################################################################################################
24 | def save_checkpoint(model, optimizer, filename="my_checkpoint.pth.tar") -> None:
25 | # print("=> Saving checkpoint")
26 | checkpoint = {
27 | "state_dict": model.state_dict(),
28 | "optimizer": optimizer.state_dict(),
29 | }
30 | torch.save(checkpoint, filename)
31 |
32 |
33 | ##################################################################################################
34 | def load_checkpoint(config, model, optimizer, load_optimizer=True):
35 | print("=> Loading checkpoint")
36 | checkpoint = torch.load(config.checkpoint_file_name, map_location=config.device)
37 | model.load_state_dict(checkpoint["state_dict"])
38 |
39 | if load_optimizer:
40 | optimizer.load_state_dict(checkpoint["optimizer"])
41 |
42 |     # If we don't do this, the optimizer will keep the old checkpoint's
43 |     # learning rate, which can lead to many hours of debugging :(
44 | for param_group in optimizer.param_groups:
45 | param_group["lr"] = config.learning_rate
46 |
47 | return model, optimizer
48 |
49 |
50 | ##################################################################################################
51 | def seed_everything(seed: int = 42) -> None:
52 | os.environ["PYTHONHASHSEED"] = str(seed)
53 | random.seed(seed)
54 | np.random.seed(seed)
55 | torch.manual_seed(seed)
56 | torch.cuda.manual_seed(seed)
57 | torch.cuda.manual_seed_all(seed)
58 | torch.backends.cudnn.deterministic = True
59 | torch.backends.cudnn.benchmark = False
60 |
61 |
62 | ##################################################################################################
63 | def random_translate(images, dataset="cifar10"):
64 | """
65 |     This function takes multiple images and translates each one randomly by at most a quarter of the image size.
66 | """
67 |
68 | (N, C, H, W) = images.shape
69 |
70 | min_pixel = torch.min(images).item()
71 |
72 | new_images = []
73 | for i in range(images.shape[0]):
74 | img = images[i].numpy() # [C,H,W]
75 | img = np.transpose(img, (1, 2, 0)) # [H,W,C]
76 |
77 | dx = random.randrange(-8, 9, 1)
78 | dy = random.randrange(-8, 9, 1)
79 |
80 | M = np.float32([[1, 0, dx], [0, 1, dy]])
81 | image_trans = cv2.warpAffine(img, M, (H, W)).reshape(H, W, C)
82 |
83 | image_trans = np.transpose(image_trans, (2, 0, 1)) # [C,H,W]
84 | new_images.append(image_trans)
85 |
86 | new_images = torch.tensor(np.stack(new_images, axis=0), dtype=torch.float32)
87 |
88 | return new_images
89 |
90 |
91 | ##################################################################################################
92 | def initialize_weights(m):
93 | if isinstance(m, torch.nn.Conv2d):
94 | torch.nn.init.xavier_uniform_(m.weight)
95 | if m.bias is not None:
96 | torch.nn.init.constant_(m.bias.data, 0)
97 | elif isinstance(m, torch.nn.BatchNorm2d):
98 | torch.nn.init.constant_(m.weight.data, 1)
99 | if m.bias is not None:
100 | torch.nn.init.constant_(m.bias.data, 0)
101 | elif isinstance(m, torch.nn.Linear):
102 | torch.nn.init.xavier_uniform_(m.weight)
103 | if m.bias is not None:
104 | torch.nn.init.constant_(m.bias.data, 0)
105 |
106 |
107 | ##################################################################################################
108 | def save_and_print(
109 | config,
110 | model,
111 | optimizer,
112 | epoch,
113 | train_loss,
114 | val_loss,
115 | accuracy,
116 | best_val_acc,
117 | save_acc: bool = True,
118 | ) -> None:
119 | """_summary_
120 |
121 | Args:
122 | model (_type_): _description_
123 | optimizer (_type_): _description_
124 | epoch (_type_): _description_
125 | train_loss (_type_): _description_
126 | val_loss (_type_): _description_
127 | accuracy (_type_): _description_
128 | best_val_acc (_type_): _description_
129 | """
130 |
131 | if save_acc:
132 | if accuracy > best_val_acc:
133 | # change path name based on cutoff epoch
134 | if epoch <= config.cutoff_epoch:
135 | save_path = os.path.join(
136 | config.checkpoint_save_dir, "best_acc_model.pth"
137 | )
138 | else:
139 | save_path = os.path.join(
140 | config.checkpoint_save_dir, "best_acc_model_post_cutoff.pth"
141 | )
142 |
143 | # save checkpoint and log
144 | save_checkpoint(model, optimizer, save_path)
145 | print(
146 | f"=> epoch -- {epoch} || train loss -- {train_loss:.4f} || val loss -- {val_loss:.4f} || val acc -- {accuracy:.4f} -- saved"
147 | )
148 | else:
149 | save_path = os.path.join(config.checkpoint_save_dir, "checkpoint.pth")
150 | save_checkpoint(model, optimizer, save_path)
151 | print(
152 | f"=> epoch -- {epoch} || train loss -- {train_loss:.4f} || val loss -- {val_loss:.4f} || val acc -- {accuracy:.4f}"
153 | )
154 | else:
155 | # change path name based on cutoff epoch
156 | if epoch <= config.cutoff_epoch:
157 | save_path = os.path.join(config.checkpoint_save_dir, "best_loss_model.pth")
158 | else:
159 | save_path = os.path.join(
160 | config.checkpoint_save_dir, "best_loss_model_post_cutoff.pth"
161 | )
162 |
163 | # save checkpoint and log
164 | save_checkpoint(model, optimizer, save_path)
165 | print(
166 | f"=> epoch -- {epoch} || train loss -- {train_loss:.4f} || val loss -- {val_loss:.4f}"
167 | )
168 |
--------------------------------------------------------------------------------
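
Round-trip sketch for the checkpoint format used by save_checkpoint /
load_checkpoint above (stand-in model, optimizer, and filename are
illustrative only):

import torch
import torch.nn as nn

model = nn.Linear(8, 3)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0001)

# save_checkpoint writes exactly this dict layout
checkpoint = {"state_dict": model.state_dict(), "optimizer": optimizer.state_dict()}
torch.save(checkpoint, "my_checkpoint.pth.tar")

# restoring mirrors load_checkpoint
restored = torch.load("my_checkpoint.pth.tar", map_location="cpu")
model.load_state_dict(restored["state_dict"])
optimizer.load_state_dict(restored["optimizer"])
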