├── .gitattributes
├── requirements.txt
├── assets
│   ├── memory.png
│   ├── method.png
│   ├── single.png
│   ├── forgetting.png
│   └── sequential.png
├── pretrained_weights
│   └── txt_encoding.pth
├── LICENSE
├── README.md
├── model
│   ├── Unet.py
│   └── Universal_model.py
├── utils
│   ├── loss.py
│   └── utils.py
├── dataset
│   ├── utils.py
│   ├── dataloader.py
│   ├── mysampler.py
│   └── dataset_list
│       └── PAOT_test.txt
├── test.py
└── train.py
/.gitattributes:
--------------------------------------------------------------------------------
1 | # Auto detect text files and perform LF normalization
2 | * text=auto
3 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | connected-components-3d
2 | h5py==3.6.0
3 | tqdm
4 | fastremap
5 | simpleitk
--------------------------------------------------------------------------------
/assets/memory.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MrGiovanni/OnlineLearning/HEAD/assets/memory.png
--------------------------------------------------------------------------------
/assets/method.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MrGiovanni/OnlineLearning/HEAD/assets/method.png
--------------------------------------------------------------------------------
/assets/single.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MrGiovanni/OnlineLearning/HEAD/assets/single.png
--------------------------------------------------------------------------------
/assets/forgetting.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MrGiovanni/OnlineLearning/HEAD/assets/forgetting.png
--------------------------------------------------------------------------------
/assets/sequential.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MrGiovanni/OnlineLearning/HEAD/assets/sequential.png
--------------------------------------------------------------------------------
/pretrained_weights/txt_encoding.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MrGiovanni/OnlineLearning/HEAD/pretrained_weights/txt_encoding.pth
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Zongwei Zhou
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Embracing Massive Medical Data
2 |
3 | ## Paper
4 | Embracing Massive Medical Data
5 | [Yu-Cheng Chou](https://scholar.google.com/citations?user=YVNRBTcAAAAJ), [Zongwei Zhou](https://www.zongweiz.com/), and [Alan L. Yuille](https://www.cs.jhu.edu/~ayuille/)
6 | Johns Hopkins University
7 | MICCAI 2024
8 | [paper](https://www.cs.jhu.edu/~alanlab/Pubs24/chou2024embracing.pdf) | [code](https://github.com/MrGiovanni/OnlineLearning)
9 |
10 |
11 |
12 |
13 |
14 |
15 | Figure 1: Different Training Methods. Linear memory stores only a few recent samples, causing significant forgetting. Dynamic memory adapts to varying data distributions by retaining unique samples, while selective memory further identifies and selects challenging samples, including those that might be duplicated, ensuring they are not missed by dynamic memory.
16 |
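The sketch below is only a toy illustration of how the three memory policies decide which streaming samples to keep; the actual samplers live in `dataset/mysampler.py` and operate on model features and per-sample loss, so the names and details here are simplified assumptions.

```python
import random

def update_memory(memory, sample, policy, capacity):
    """Toy memory update. `sample` is a dict with a scalar 'feature' fingerprint
    and an 'entropy' (model uncertainty); `memory` is a plain Python list."""
    if policy == 'LM':        # linear memory: first-in, first-out over recent samples
        memory.append(sample)
        if len(memory) > capacity:
            memory.pop(0)
    elif policy == 'DM':      # dynamic memory: only keep samples not already represented
        if all(sample['feature'] != m['feature'] for m in memory):
            memory.append(sample)
            if len(memory) > capacity:
                memory.pop(random.randrange(len(memory)))
    elif policy == 'SM':      # selective memory: keep the most challenging (high-entropy) samples
        memory.append(sample)
        memory.sort(key=lambda m: m['entropy'], reverse=True)
        del memory[capacity:]
    return memory
```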
17 |
18 |
19 | ## 0. Installation
20 | ```bash
21 | conda create -n massive python=3.9
22 | source activate massive
23 | pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
24 | pip install monai[all]==0.9.0
25 | pip install -r requirements.txt
26 |
27 | wget https://www.dropbox.com/s/lh5kuyjxwjsxjpl/Genesis_Chest_CT.pt
28 | ```
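By default, `train.py` looks for this checkpoint at `pretrained_weights/Genesis_Chest_CT.pt` (the `--pretrain` argument), so move the downloaded file there:

```bash
mkdir -p pretrained_weights
mv Genesis_Chest_CT.pt pretrained_weights/
```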
29 | ## 1. Dataset
30 |
31 | We adopt two large-scale CT datasets in our experiments, including a [single-site private dataset](https://www.medrxiv.org/content/medrxiv/early/2022/09/25/2022.09.24.22280071.full.pdf) and a [sequential-site dataset](https://github.com/ljwztc/CLIP-Driven-Universal-Model.git). To download the sequential-site dataset, please see the [Datasets](https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/49715510829946f09f8330bd3a6e7b02e9fd51de/README.md?plain=1#L35) and [Dataset Pre-Process](https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/49715510829946f09f8330bd3a6e7b02e9fd51de/README.md?plain=1#L83) sections to create the training and testing data.
32 |
33 | ## 2. Train the model
34 | ```bash
35 | CUDA_VISIBLE_DEVICES=0,1,2,3 python -W ignore -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 train.py
36 | ```
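The memory policy, loss, and memory size can be chosen on the command line; the argument names below come from `train.py`, while the data paths are placeholders for your own layout:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python -W ignore -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 train.py \
    --memory SM --loss_type SM --memory_size 128 --sampling_rate 100 \
    --dataset_list PAOT --data_root_path /path/to/data/ --label_root_path /path/to/labels/
```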
37 |
38 |
39 | ## 3. Test the model
40 | ```bash
41 | CUDA_VISIBLE_DEVICES=0 python -W ignore test.py
42 | ```
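`test.py` derives the log name and epoch tag from the checkpoint path, so `--resume` should point at a file under `out/<dataset>/<log_name>/`; the checkpoint name and data paths below are placeholders:

```bash
CUDA_VISIBLE_DEVICES=0 python -W ignore test.py \
    --resume out/PAOT/unet-SM-x100-mem128-lossSM/epoch_1.pth \
    --dataset_list PAOT --data_root_path /path/to/data/ --label_root_path /path/to/labels/ \
    --store_result
```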
43 |
44 | ## 4. Results
45 |
46 |
47 |
48 |
49 |
50 | Table 1: Data Efficiency. The results demonstrate that linear memory trained on continual data streams achieves performance comparable to the prevalent training paradigm that trains models repeatedly for 100 epochs. Linear memory enables training without the need to revisit old data, thereby enhancing data efficiency.
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 | Table 2: Dynamic Adaptation. Under the varying distributions in the streaming source, Dynamic Memory (DM) and Selective Memory (SM) identify the most significant samples and thereby enhance segmentation performance.
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 | Figure 2: Catastrophic Forgetting. To evaluate forgetting, we calculate the relative Dice drop after training on the incoming sub-datasets. Both Dynamic Memory (DM) and Selective Memory (SM) store samples from previous sub-datasets, thereby alleviating the forgetting observed with Linear Memory (LM).
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 | Figure 3: Diverse Memory. We visualize the memory to demonstrate the diversity of stored samples from previous $D_d$. Both Dynamic Memory (DM) and Selective Memory (SM) can retain the samples from previous sub-datasets. Selective Memory (SM) can further identify samples with higher uncertainty.
76 |
77 |
78 |
79 |
80 | ## Acknowledgement
81 |
82 | This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and partially by the Patrick J. McGovern Foundation Award.
--------------------------------------------------------------------------------
/model/Unet.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 |
6 | class ContBatchNorm3d(nn.modules.batchnorm._BatchNorm):
7 | def _check_input_dim(self, input):
8 |
9 | if input.dim() != 5:
10 | raise ValueError('expected 5D input (got {}D input)'.format(input.dim()))
11 |
12 | def forward(self, input):
13 | self._check_input_dim(input)
14 | return F.batch_norm(
15 | input, self.running_mean, self.running_var, self.weight, self.bias,
16 | True, self.momentum, self.eps)
17 |
18 |
19 | class LUConv(nn.Module):
20 | def __init__(self, in_chan, out_chan, act):
21 | super(LUConv, self).__init__()
22 | self.conv1 = nn.Conv3d(in_chan, out_chan, kernel_size=3, padding=1)
23 | self.bn1 = ContBatchNorm3d(out_chan)
24 |
25 | if act == 'relu':
26 | self.activation = nn.ReLU(inplace=True)
27 | elif act == 'prelu':
28 | self.activation = nn.PReLU(out_chan)
29 | elif act == 'elu':
30 | self.activation = nn.ELU(inplace=True)
31 | else:
32 | raise ValueError('Unsupported activation: {}'.format(act))
33 |
34 | def forward(self, x):
35 | out = self.activation(self.bn1(self.conv1(x)))
36 | return out
37 |
38 |
39 | def _make_nConv(in_channel, depth, act, double_chnnel=False):
40 | if double_chnnel:
41 | layer1 = LUConv(in_channel, 32 * (2 ** (depth+1)),act)
42 | layer2 = LUConv(32 * (2 ** (depth+1)), 32 * (2 ** (depth+1)),act)
43 | else:
44 | layer1 = LUConv(in_channel, 32*(2**depth),act)
45 | layer2 = LUConv(32*(2**depth), 32*(2**depth)*2,act)
46 |
47 | return nn.Sequential(layer1,layer2)
48 |
49 |
50 | class DownTransition(nn.Module):
51 | def __init__(self, in_channel,depth, act):
52 | super(DownTransition, self).__init__()
53 | self.ops = _make_nConv(in_channel, depth,act)
54 | self.maxpool = nn.MaxPool3d(2)
55 | self.current_depth = depth
56 |
57 | def forward(self, x):
58 | if self.current_depth == 3:
59 | out = self.ops(x)
60 | out_before_pool = out
61 | else:
62 | out_before_pool = self.ops(x)
63 | out = self.maxpool(out_before_pool)
64 | return out, out_before_pool
65 |
66 | class UpTransition(nn.Module):
67 | def __init__(self, inChans, outChans, depth,act):
68 | super(UpTransition, self).__init__()
69 | self.depth = depth
70 | self.up_conv = nn.ConvTranspose3d(inChans, outChans, kernel_size=2, stride=2)
71 | self.ops = _make_nConv(inChans+ outChans//2,depth, act, double_chnnel=True)
72 |
73 | def forward(self, x, skip_x):
74 | out_up_conv = self.up_conv(x)
75 | concat = torch.cat((out_up_conv,skip_x),1)
76 | out = self.ops(concat)
77 | return out
78 |
79 |
80 | class OutputTransition(nn.Module):
81 | def __init__(self, inChans, n_labels):
82 |
83 | super(OutputTransition, self).__init__()
84 | self.final_conv = nn.Conv3d(inChans, n_labels, kernel_size=1)
85 | self.sigmoid = nn.Sigmoid()
86 |
87 | def forward(self, x):
88 | out = self.sigmoid(self.final_conv(x))
89 | return out
90 |
91 | class UNet3D(nn.Module):
92 | def __init__(self, n_class=1, act='relu'):
93 | super(UNet3D, self).__init__()
94 |
95 | self.down_tr64 = DownTransition(1,0,act)
96 | self.down_tr128 = DownTransition(64,1,act)
97 | self.down_tr256 = DownTransition(128,2,act)
98 | self.down_tr512 = DownTransition(256,3,act)
99 |
100 | self.up_tr256 = UpTransition(512, 512,2,act)
101 | self.up_tr128 = UpTransition(256,256, 1,act)
102 | self.up_tr64 = UpTransition(128,128,0,act)
103 |
104 | def forward(self, x):
105 | self.out64, self.skip_out64 = self.down_tr64(x)
106 | self.out128,self.skip_out128 = self.down_tr128(self.out64)
107 | self.out256,self.skip_out256 = self.down_tr256(self.out128)
108 | self.out512,self.skip_out512 = self.down_tr512(self.out256)
109 |
110 | self.out_up_256 = self.up_tr256(self.out512,self.skip_out256)
111 | self.out_up_128 = self.up_tr128(self.out_up_256, self.skip_out128)
112 | self.out_up_64 = self.up_tr64(self.out_up_128, self.skip_out64)
113 |
114 | return self.out512, self.out_up_64
115 |
116 |
--------------------------------------------------------------------------------
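A quick shape check for `UNet3D` (a sketch; the 96³ patch matches the default `--roi_x/y/z` of 96):

```python
import torch
from model.Unet import UNet3D

net = UNet3D().eval()
x = torch.randn(1, 1, 96, 96, 96)          # (B, C, D, H, W) CT patch
with torch.no_grad():
    bottleneck, features = net(x)
print(bottleneck.shape)  # torch.Size([1, 512, 12, 12, 12]) -> pooled by the GAP branch in Universal_model
print(features.shape)    # torch.Size([1, 64, 96, 96, 96])  -> fed to the per-class dynamic heads
```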
/model/Universal_model.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 | from model.Unet import UNet3D
6 |
7 |
8 | class Universal_model(nn.Module):
9 | def __init__(self, out_channels):
10 | super().__init__()
11 |
12 | self.backbone = UNet3D()
13 | self.precls_conv = nn.Sequential(
14 | nn.GroupNorm(16, 64),
15 | nn.ReLU(inplace=True),
16 | nn.Conv3d(64, 8, kernel_size=1)
17 | )
18 | self.GAP = nn.Sequential(
19 | nn.GroupNorm(16, 512),
20 | nn.ReLU(inplace=True),
21 | torch.nn.AdaptiveAvgPool3d((1,1,1)),
22 | nn.Conv3d(512, 256, kernel_size=1, stride=1, padding=0)
23 | )
24 |
25 | weight_nums, bias_nums = [], []
26 | weight_nums.append(8*8)
27 | weight_nums.append(8*8)
28 | weight_nums.append(8*1)
29 | bias_nums.append(8)
30 | bias_nums.append(8)
31 | bias_nums.append(1)
32 | self.weight_nums = weight_nums
33 | self.bias_nums = bias_nums
34 | self.controller = nn.Conv3d(256+256, sum(weight_nums+bias_nums), kernel_size=1, stride=1, padding=0)
35 | self.register_buffer('organ_embedding', torch.randn(out_channels, 512))
36 | self.text_to_vision = nn.Linear(512, 256)
37 | self.class_num = out_channels
38 |
39 | def load_params(self, model_dict):
40 | store_dict = self.backbone.state_dict()
41 | for key in model_dict.keys():
42 | if 'out_tr' not in key:
43 | store_dict[key.replace("module.", "")] = model_dict[key]
44 | self.backbone.load_state_dict(store_dict)
45 |
46 | def encoding_task(self, task_id):
47 | N = task_id.shape[0]
48 | task_encoding = torch.zeros(size=(N, 7))
49 | for i in range(N):
50 | task_encoding[i, task_id[i]]=1
51 | return task_encoding.cuda()
52 |
53 | def parse_dynamic_params(self, params, channels, weight_nums, bias_nums):
54 | assert params.dim() == 2
55 | assert len(weight_nums) == len(bias_nums)
56 | assert params.size(1) == sum(weight_nums) + sum(bias_nums)
57 |
58 | num_insts = params.size(0)
59 | num_layers = len(weight_nums)
60 |
61 | params_splits = list(torch.split_with_sizes(
62 | params, weight_nums + bias_nums, dim=1
63 | ))
64 |
65 | weight_splits = params_splits[:num_layers]
66 | bias_splits = params_splits[num_layers:]
67 |
68 | for l in range(num_layers):
69 | if l < num_layers - 1:
70 | weight_splits[l] = weight_splits[l].reshape(num_insts * channels, -1, 1, 1, 1)
71 | bias_splits[l] = bias_splits[l].reshape(num_insts * channels)
72 | else:
73 | weight_splits[l] = weight_splits[l].reshape(num_insts * 1, -1, 1, 1, 1)
74 | bias_splits[l] = bias_splits[l].reshape(num_insts * 1)
75 |
76 | return weight_splits, bias_splits
77 |
78 | def heads_forward(self, features, weights, biases, num_insts):
79 | assert features.dim() == 5
80 | n_layers = len(weights)
81 | x = features
82 | for i, (w, b) in enumerate(zip(weights, biases)):
83 |
84 | x = F.conv3d(
85 | x, w, bias=b,
86 | stride=1, padding=0,
87 | groups=num_insts
88 | )
89 | if i < n_layers - 1:
90 | x = F.relu(x)
91 | return x
92 |
93 | def forward(self, x_in):
94 | dec4, out = self.backbone(x_in)
95 |
96 | task_encoding = F.relu(self.text_to_vision(self.organ_embedding))
97 | task_encoding = task_encoding.unsqueeze(2).unsqueeze(2).unsqueeze(2)
98 |
99 | x_feat = self.GAP(dec4)
100 | b = x_feat.shape[0]
101 | logits_array = []
102 | for i in range(b):
103 | x_cond = torch.cat([x_feat[i].unsqueeze(0).repeat(self.class_num,1,1,1,1), task_encoding], 1)
104 | params = self.controller(x_cond)
105 | params.squeeze_(-1).squeeze_(-1).squeeze_(-1)
106 |
107 | head_inputs = self.precls_conv(out[i].unsqueeze(0))
108 | head_inputs = head_inputs.repeat(self.class_num,1,1,1,1)
109 | N, _, D, H, W = head_inputs.size()
110 | head_inputs = head_inputs.reshape(1, -1, D, H, W)
111 |
112 | weights, biases = self.parse_dynamic_params(params, 8, self.weight_nums, self.bias_nums)
113 |
114 | logits = self.heads_forward(head_inputs, weights, biases, N)
115 | logits_array.append(logits.reshape(1, -1, D, H, W))
116 |
117 | out = torch.cat(logits_array,dim=0)
118 |
119 | return out, x_feat
--------------------------------------------------------------------------------
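A minimal forward-pass sketch for `Universal_model`; the class count is an assumption here (the repository takes `NUM_CLASS` from `utils/utils.py`, which is not shown above):

```python
import torch
from model.Universal_model import Universal_model

NUM_CLASS = 32  # assumption; must match the first dimension of txt_encoding.pth
model = Universal_model(out_channels=NUM_CLASS)

# As in train.py, the pre-computed text embedding replaces the randomly initialised buffer.
word_embedding = torch.load('./pretrained_weights/txt_encoding.pth', map_location='cpu')
model.organ_embedding.data = word_embedding.float()   # expected shape (NUM_CLASS, 512)

x = torch.randn(1, 1, 96, 96, 96)
logits, feat = model(x)
print(logits.shape)  # (1, NUM_CLASS, 96, 96, 96): per-class segmentation logits
print(feat.shape)    # (1, 256, 1, 1, 1): global feature that train.py stores for the memory sampler
```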
/utils/loss.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | import torch.nn as nn
4 |
5 | class BinaryDiceLoss(nn.Module):
6 | def __init__(self, smooth=1, p=2, reduction='mean'):
7 | super(BinaryDiceLoss, self).__init__()
8 | self.smooth = smooth
9 | self.p = p
10 | self.reduction = reduction
11 |
12 | def forward(self, predict, target):
13 | assert predict.shape[0] == target.shape[0], "predict & target batch size don't match"
14 | predict = predict.contiguous().view(predict.shape[0], -1)
15 | target = target.contiguous().view(target.shape[0], -1)
16 |
17 | num = torch.sum(torch.mul(predict, target), dim=1)
18 | den = torch.sum(predict, dim=1) + torch.sum(target, dim=1) + self.smooth
19 |
20 | dice_score = 2*num / den
21 | dice_loss = 1 - dice_score
22 |
23 | dice_loss_avg = dice_loss[target[:,0]!=-1].sum() / dice_loss[target[:,0]!=-1].shape[0]
24 |
25 | return dice_loss_avg
26 |
27 |
28 | class DiceLoss_SM(nn.Module):
29 | def __init__(self, weight=None, ignore_index=None, num_classes=3, **kwargs):
30 | super(DiceLoss_SM, self).__init__()
31 | self.kwargs = kwargs
32 | self.weight = weight
33 | self.ignore_index = ignore_index
34 | self.num_classes = num_classes
35 | self.dice = BinaryDiceLoss(**self.kwargs)
36 |
37 | def forward(self, predict, target, name, ratio, TEMPLATE):
38 |
39 | total_loss = []
40 | batch_loss = []
41 | predict = F.sigmoid(predict)
42 | B = predict.shape[0]
43 |
44 | for b in range(B):
45 | dataset_index = int(name[b][0:2])
46 | if dataset_index == 10:
47 | template_key = name[b][0:2] + '_' + name[b][17:19]
48 | elif dataset_index == 1:
49 | if int(name[b][-2:]) >= 60:
50 | template_key = '01_2'
51 | else:
52 | template_key = '01'
53 | else:
54 | template_key = name[b][0:2]
55 | organ_list = TEMPLATE[template_key]
56 | for organ in organ_list:
57 | dice_loss = self.dice(predict[b, organ-1], target[b, organ-1])
58 | batch_loss.append(dice_loss*ratio[b][organ-1])
59 |
60 | total_loss.append(torch.stack(batch_loss).mean())
61 |
62 | return torch.stack(total_loss)
63 |
64 |
65 | class Multi_BCELoss_SM(nn.Module):
66 | def __init__(self, ignore_index=None, num_classes=3, **kwargs):
67 | super(Multi_BCELoss_SM, self).__init__()
68 | self.kwargs = kwargs
69 | self.num_classes = num_classes
70 | self.ignore_index = ignore_index
71 | self.criterion = nn.BCEWithLogitsLoss()
72 |
73 | def forward(self, predict, target, name, ratio, TEMPLATE):
74 | assert predict.shape[2:] == target.shape[2:], 'predict & target shape do not match'
75 |
76 | total_loss = []
77 | batch_loss = []
78 | B = predict.shape[0]
79 |
80 | for b in range(B):
81 | dataset_index = int(name[b][0:2])
82 | if dataset_index == 10:
83 | template_key = name[b][0:2] + '_' + name[b][17:19]
84 | elif dataset_index == 1:
85 | if int(name[b][-2:]) >= 60:
86 | template_key = '01_2'
87 | else:
88 | template_key = '01'
89 | else:
90 | template_key = name[b][0:2]
91 | organ_list = TEMPLATE[template_key]
92 | for organ in organ_list:
93 | ce_loss = self.criterion(predict[b, organ-1], target[b, organ-1])
94 | batch_loss.append(ce_loss*ratio[b][organ-1])
95 | total_loss.append(torch.stack(batch_loss).mean())
96 |
97 | return torch.stack(total_loss)
98 |
99 |
100 | class DiceLoss(nn.Module):
101 | def __init__(self, weight=None, ignore_index=None, num_classes=3, **kwargs):
102 | super(DiceLoss, self).__init__()
103 | self.kwargs = kwargs
104 | self.weight = weight
105 | self.ignore_index = ignore_index
106 | self.num_classes = num_classes
107 | self.dice = BinaryDiceLoss(**self.kwargs)
108 |
109 | def forward(self, predict, target, name, ratio, TEMPLATE):
110 |
111 | total_loss = []
112 | batch_loss = []
113 | predict = F.sigmoid(predict)
114 | B = predict.shape[0]
115 |
116 | for b in range(B):
117 | dataset_index = int(name[b][0:2])
118 | if dataset_index == 10:
119 | template_key = name[b][0:2] + '_' + name[b][17:19]
120 | elif dataset_index == 1:
121 | if int(name[b][-2:]) >= 60:
122 | template_key = '01_2'
123 | else:
124 | template_key = '01'
125 | else:
126 | template_key = name[b][0:2]
127 | organ_list = TEMPLATE[template_key]
128 | for organ in organ_list:
129 | dice_loss = self.dice(predict[b, organ-1], target[b, organ-1])
130 | batch_loss.append(dice_loss)
131 |
132 | total_loss.append(torch.stack(batch_loss).mean())
133 |
134 | return torch.stack(total_loss)
135 |
136 |
137 | class Multi_BCELoss(nn.Module):
138 | def __init__(self, ignore_index=None, num_classes=3, **kwargs):
139 | super(Multi_BCELoss, self).__init__()
140 | self.kwargs = kwargs
141 | self.num_classes = num_classes
142 | self.ignore_index = ignore_index
143 | self.criterion = nn.BCEWithLogitsLoss()
144 |
145 | def forward(self, predict, target, name, ratio, TEMPLATE):
146 | assert predict.shape[2:] == target.shape[2:], 'predict & target shape do not match'
147 |
148 | total_loss = []
149 | batch_loss = []
150 | B = predict.shape[0]
151 |
152 | for b in range(B):
153 | dataset_index = int(name[b][0:2])
154 | if dataset_index == 10:
155 | template_key = name[b][0:2] + '_' + name[b][17:19]
156 | elif dataset_index == 1:
157 | if int(name[b][-2:]) >= 60:
158 | template_key = '01_2'
159 | else:
160 | template_key = '01'
161 | else:
162 | template_key = name[b][0:2]
163 | organ_list = TEMPLATE[template_key]
164 | for organ in organ_list:
165 | ce_loss = self.criterion(predict[b, organ-1], target[b, organ-1])
166 | batch_loss.append(ce_loss)
167 | total_loss.append(torch.stack(batch_loss).mean())
168 |
169 | return torch.stack(total_loss)
--------------------------------------------------------------------------------
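A small numerical check of `BinaryDiceLoss` (with the default `smooth=1`), the building block of both Dice losses above:

```python
import torch
from utils.loss import BinaryDiceLoss

dice = BinaryDiceLoss(smooth=1)
pred = torch.tensor([[0.9, 0.1, 0.8, 0.7]])    # sigmoid probabilities, shape (B, N)
target = torch.tensor([[1.0, 0.0, 1.0, 0.0]])  # binary mask
# dice score = 2 * (0.9 + 0.8) / (2.5 + 2.0 + 1) ≈ 0.618, so the loss is ≈ 0.382
print(dice(pred, target))
```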
/dataset/utils.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import h5py
3 | import numpy as np
4 | from typing import Optional, Union
5 |
6 | import torch
7 |
8 | from monai.config import DtypeLike, KeysCollection
9 | from monai.transforms.transform import MapTransform
10 | from monai.transforms.io.array import LoadImage
11 | from monai.utils import ensure_tuple, ensure_tuple_rep
12 | from monai.data.image_reader import ImageReader
13 | from monai.utils.enums import PostFix
14 | from monai.transforms import (
15 | RandCropByPosNegLabeld,
16 | RandZoomd,
17 | RandCropByLabelClassesd,
18 | )
19 |
20 |
21 | sys.path.append("..")
22 | from utils.utils import get_key, TEMPLATE, NUM_CLASS
23 |
24 |
25 | DEFAULT_POST_FIX = PostFix.meta()
26 |
27 | class LoadImageh5d_train(MapTransform):
28 | def __init__(
29 | self,
30 | keys: KeysCollection,
31 | reader: Optional[Union[ImageReader, str]] = None,
32 | dtype: DtypeLike = np.float32,
33 | meta_keys: Optional[KeysCollection] = None,
34 | meta_key_postfix: str = DEFAULT_POST_FIX,
35 | overwriting: bool = False,
36 | image_only: bool = False,
37 | ensure_channel_first: bool = False,
38 | simple_keys: bool = False,
39 | allow_missing_keys: bool = False,
40 | *args,
41 | **kwargs,
42 | ) -> None:
43 | super().__init__(keys, allow_missing_keys)
44 | self._loader = LoadImage(reader, image_only, dtype, ensure_channel_first, simple_keys, *args, **kwargs)
45 | if not isinstance(meta_key_postfix, str):
46 | raise TypeError(f"meta_key_postfix must be a str but is {type(meta_key_postfix).__name__}.")
47 | self.meta_keys = ensure_tuple_rep(None, len(self.keys)) if meta_keys is None else ensure_tuple(meta_keys)
48 | if len(self.keys) != len(self.meta_keys):
49 | raise ValueError("meta_keys should have the same length as keys.")
50 | self.meta_key_postfix = ensure_tuple_rep(meta_key_postfix, len(self.keys))
51 | self.overwriting = overwriting
52 |
53 |
54 | def register(self, reader: ImageReader):
55 | self._loader.register(reader)
56 |
57 |
58 | def __call__(self, data, reader: Optional[ImageReader] = None):
59 | d = dict(data)
60 | for key, meta_key, meta_key_postfix in self.key_iterator(d, self.meta_keys, self.meta_key_postfix):
61 | data = self._loader(d[key], reader)
62 | if self._loader.image_only:
63 | d[key] = data
64 | else:
65 | if not isinstance(data, (tuple, list)):
66 | raise ValueError("loader must return a tuple or list (because image_only=False was used).")
67 | d[key] = data[0]
68 | if not isinstance(data[1], dict):
69 | raise ValueError("metadata must be a dict.")
70 | meta_key = meta_key or f"{key}_{meta_key_postfix}"
71 | if meta_key in d and not self.overwriting:
72 | raise KeyError(f"Metadata with key {meta_key} already exists and overwriting=False.")
73 | d[meta_key] = data[1]
74 | post_label_pth = d['post_label']
75 | with h5py.File(post_label_pth, 'r') as hf:
76 | data = hf['post_label'][()]
77 | d['post_label'] = data[0]
78 |
79 | # Get the importance (all_size/size) of each organ
80 | key = get_key(d['name'])
81 | organ_list = np.array(TEMPLATE[key])-1
82 | organs_pixelsum = data[0][organ_list].sum()
83 | organs_ratio = torch.zeros(NUM_CLASS, dtype=torch.float32)
84 |
85 | for organ in organ_list:
86 | organ_num = data[0][organ].sum()
87 | if organ_num == 0:
88 | organs_ratio[organ] = 0
89 | else:
90 | organs_ratio[organ] = organs_pixelsum/organ_num
91 |
92 | organs_pixelnum_ratio = organs_ratio/organs_ratio.sum()*len(TEMPLATE[key]) # sum as num of labels
93 | organs_pixelnum_ratio[(organs_pixelnum_ratio > 0) & (organs_pixelnum_ratio < 1)] = 1
94 |
95 | d['organs_ratio'] = organs_pixelnum_ratio
96 | return d
97 |
98 | class LoadImageh5d_test(MapTransform):
99 | def __init__(
100 | self,
101 | keys: KeysCollection,
102 | reader: Optional[Union[ImageReader, str]] = None,
103 | dtype: DtypeLike = np.float32,
104 | meta_keys: Optional[KeysCollection] = None,
105 | meta_key_postfix: str = DEFAULT_POST_FIX,
106 | overwriting: bool = False,
107 | image_only: bool = False,
108 | ensure_channel_first: bool = False,
109 | simple_keys: bool = False,
110 | allow_missing_keys: bool = False,
111 | *args,
112 | **kwargs,
113 | ) -> None:
114 | super().__init__(keys, allow_missing_keys)
115 | self._loader = LoadImage(reader, image_only, dtype, ensure_channel_first, simple_keys, *args, **kwargs)
116 | if not isinstance(meta_key_postfix, str):
117 | raise TypeError(f"meta_key_postfix must be a str but is {type(meta_key_postfix).__name__}.")
118 | self.meta_keys = ensure_tuple_rep(None, len(self.keys)) if meta_keys is None else ensure_tuple(meta_keys)
119 | if len(self.keys) != len(self.meta_keys):
120 | raise ValueError("meta_keys should have the same length as keys.")
121 | self.meta_key_postfix = ensure_tuple_rep(meta_key_postfix, len(self.keys))
122 | self.overwriting = overwriting
123 |
124 |
125 | def register(self, reader: ImageReader):
126 | self._loader.register(reader)
127 |
128 |
129 | def __call__(self, data, reader: Optional[ImageReader] = None):
130 | d = dict(data)
131 | for key, meta_key, meta_key_postfix in self.key_iterator(d, self.meta_keys, self.meta_key_postfix):
132 | data = self._loader(d[key], reader)
133 | if self._loader.image_only:
134 | d[key] = data
135 | else:
136 | if not isinstance(data, (tuple, list)):
137 | raise ValueError("loader must return a tuple or list (because image_only=False was used).")
138 | d[key] = data[0]
139 | if not isinstance(data[1], dict):
140 | raise ValueError("metadata must be a dict.")
141 | meta_key = meta_key or f"{key}_{meta_key_postfix}"
142 | if meta_key in d and not self.overwriting:
143 | raise KeyError(f"Metadata with key {meta_key} already exists and overwriting=False.")
144 | d[meta_key] = data[1]
145 | post_label_pth = d['post_label']
146 | with h5py.File(post_label_pth, 'r') as hf:
147 | data = hf['post_label'][()]
148 | d['post_label'] = data[0]
149 | return d
150 |
151 | class RandZoomd_select(RandZoomd):
152 | def __call__(self, data):
153 | d = dict(data)
154 | name = d['name']
155 | key = get_key(name)
156 | if (key not in ['10_03', '10_06', '10_07', '10_08', '10_09', '10_10']):
157 | return d
158 | d = super().__call__(d)
159 | return d
160 |
161 | class RandCropByPosNegLabeld_select(RandCropByPosNegLabeld):
162 | def __call__(self, data):
163 | d = dict(data)
164 | name = d['name']
165 | key = get_key(name)
166 | if key in ['10_03', '10_07', '10_08', '04']:
167 | return d
168 | d = super().__call__(d)
169 | return d
170 |
171 | class RandCropByLabelClassesd_select(RandCropByLabelClassesd):
172 | def __call__(self, data):
173 | d = dict(data)
174 | name = d['name']
175 | key = get_key(name)
176 | if key not in ['10_03', '10_07', '10_08', '04']:
177 | return d
178 | d = super().__call__(d)
179 | return d
180 |
181 |
--------------------------------------------------------------------------------
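The `organs_ratio` weighting in `LoadImageh5d_train` up-weights small organs relative to large ones; the snippet below reproduces the same arithmetic on made-up voxel counts:

```python
import torch

counts = torch.tensor([1000., 100., 10.])   # labelled voxels for three organs in one crop (toy numbers)
ratio = counts.sum() / counts               # rarer organs get larger raw ratios
weight = ratio / ratio.sum() * len(counts)  # normalise so the weights sum to the number of organs
weight[(weight > 0) & (weight < 1)] = 1     # large organs are never down-weighted below 1
print(weight)                               # tensor([1.0000, 1.0000, 2.7027])
```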
/test.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 | import numpy as np
4 | from tqdm import tqdm
5 |
6 | import torch
7 | import torch.nn.functional as F
8 |
9 | from monai.inferers import sliding_window_inference
10 |
11 | from model.Universal_model import Universal_model
12 | from dataset.dataloader import get_loader
13 | from utils.utils import dice_score, threshold_organ, visualize_label, merge_label, get_key
14 | from utils.utils import TEMPLATE, ORGAN_NAME, NUM_CLASS
15 | from utils.utils import organ_post_process, threshold_organ
16 |
17 | torch.multiprocessing.set_sharing_strategy('file_system')
18 |
19 |
20 | def test(model, ValLoader, val_transforms, args):
21 | save_dir = args.save_dir + '/' + args.log_name + f'/test_healthp_{args.epoch}'
22 | if not os.path.isdir(save_dir):
23 | os.makedirs(save_dir)
24 | os.makedirs(save_dir+'/predict')
25 | model.eval()
26 | dice_list = {}
27 | for key in TEMPLATE.keys():
28 | dice_list[key] = np.zeros((2, NUM_CLASS))
29 | for index, batch in enumerate(tqdm(ValLoader)):
30 | image, label, name = batch["image"].cuda(), batch["post_label"], batch["name"]
31 | with torch.no_grad():
32 | pred, z = sliding_window_inference(image, (args.roi_x, args.roi_y, args.roi_z), 1, model, overlap=0.5, mode='gaussian')
33 | pred_sigmoid = F.sigmoid(pred)
34 |
35 | pred_hard = threshold_organ(pred_sigmoid)
36 | pred_hard = pred_hard.cpu()
37 | torch.cuda.empty_cache()
38 |
39 | B = pred_hard.shape[0]
40 | for b in range(B):
41 | content = 'case%s| '%(name[b])
42 | template_key = get_key(name[b])
43 | organ_list = TEMPLATE[template_key]
44 | pred_hard_post = organ_post_process(pred_hard.numpy(), organ_list, args.log_name+'/'+name[0].split('/')[0]+'/'+name[0].split('/')[-1],args)
45 | pred_hard_post = torch.tensor(pred_hard_post)
46 |
47 | for organ in organ_list:
48 | if torch.sum(label[b,organ-1,:,:,:].cuda()) != 0:
49 | dice_organ, recall, precision = dice_score(pred_hard_post[b,organ-1,:,:,:].cuda(), label[b,organ-1,:,:,:].cuda())
50 | dice_list[template_key][0][organ-1] += dice_organ.item()
51 | dice_list[template_key][1][organ-1] += 1
52 | content += '%s: %.4f, '%(ORGAN_NAME[organ-1], dice_organ.item())
53 | print('%s: dice %.4f, recall %.4f, precision %.4f.'%(ORGAN_NAME[organ-1], dice_organ.item(), recall.item(), precision.item()))
54 | print(content)
55 |
56 | if args.store_result:
57 | pred_sigmoid_store = (pred_sigmoid.cpu().numpy() * 255).astype(np.uint8)
58 | label_store = (label.numpy()).astype(np.uint8)
59 | np.savez_compressed(save_dir + '/predict/' + name[0].split('/')[0] + name[0].split('/')[-1],
60 | pred=pred_sigmoid_store, label=label_store)
61 | ### testing phase for this function
62 | one_channel_label_v1, one_channel_label_v2 = merge_label(pred_hard_post, name)
63 | batch['one_channel_label_v1'] = one_channel_label_v1.cpu()
64 | batch['one_channel_label_v2'] = one_channel_label_v2.cpu()
65 |
66 | _, split_label = merge_label(batch["post_label"], name)
67 | batch['split_label'] = split_label.cpu()
68 |
69 | visualize_label(batch, save_dir + '/output/' + name[0].split('/')[0] , val_transforms)
70 |
71 |
72 | torch.cuda.empty_cache()
73 |
74 | ave_organ_dice = np.zeros((2, NUM_CLASS))
75 |
76 | with open(args.save_dir+'/'+args.log_name+f'/test_{args.epoch}.txt', 'w') as f:
77 | for key in TEMPLATE.keys():
78 | organ_list = TEMPLATE[key]
79 | content = 'Task%s| '%(key)
80 | for organ in organ_list:
81 |
82 | dice = dice_list[key][0][organ-1] / dice_list[key][1][organ-1]
83 | content += '%s: %.4f, '%(ORGAN_NAME[organ-1], dice)
84 | ave_organ_dice[0][organ-1] += dice_list[key][0][organ-1]
85 | ave_organ_dice[1][organ-1] += dice_list[key][1][organ-1]
86 | print(content)
87 | f.write(content)
88 | f.write('\n')
89 | content = 'Average | '
90 | for i in range(NUM_CLASS):
91 | content += '%s: %.4f, '%(ORGAN_NAME[i], ave_organ_dice[0][i] / ave_organ_dice[1][i])
92 | print(content)
93 | f.write(content)
94 | f.write('\n')
95 | print(np.mean(ave_organ_dice[0] / ave_organ_dice[1]))
96 | f.write('%s: %.4f, '%('average', np.mean(ave_organ_dice[0] / ave_organ_dice[1])))
97 | f.write('\n')
98 |
99 |
100 |
101 | def main():
102 | parser = argparse.ArgumentParser()
103 | ## Distributed training
104 | parser.add_argument("--epoch", default=0)
105 |
106 | ## Logging
107 | parser.add_argument('--log_name', default=None, help='Log name (derived from the --resume checkpoint path)')
108 | parser.add_argument('--save_dir', default='./out/{}', help='Output directory template, filled with the dataset name')
109 |
110 | ## Model
111 | parser.add_argument('--resume', default='out/PAOT/PATH_TO_CHECKPOINT', help='The path resume from checkpoint')
112 | parser.add_argument('--backbone', default='unet', help='backbone [swinunetr or unet]')
113 |
114 | ## Hyperparameters
115 | parser.add_argument('--phase', default='test', help='train or test')
116 | parser.add_argument('--store_result', action="store_true", default=True, help='whether save prediction result')
117 |
118 | ## Dataset
119 | parser.add_argument('--dataset_list', nargs='+', default=['PAOT'], choices=['PAOT', 'felix'])
120 | parser.add_argument('--data_root_path', default='', help='data root path')
121 | parser.add_argument('--label_root_path', default='', help='label root path')
122 | parser.add_argument('--data_txt_path', default='./dataset/dataset_list/', help='data txt path')
123 | parser.add_argument('--batch_size', default=1, type=int, help='batch size')
124 | parser.add_argument('--num_workers', default=8, type=int, help='number of workers for DataLoader')
125 |
126 | parser.add_argument('--a_min', default=-175, type=float, help='a_min in ScaleIntensityRanged')
127 | parser.add_argument('--a_max', default=250, type=float, help='a_max in ScaleIntensityRanged')
128 | parser.add_argument('--b_min', default=0.0, type=float, help='b_min in ScaleIntensityRanged')
129 | parser.add_argument('--b_max', default=1.0, type=float, help='b_max in ScaleIntensityRanged')
130 | parser.add_argument('--space_x', default=1.5, type=float, help='spacing in x direction')
131 | parser.add_argument('--space_y', default=1.5, type=float, help='spacing in y direction')
132 | parser.add_argument('--space_z', default=1.5, type=float, help='spacing in z direction')
133 | parser.add_argument('--roi_x', default=96, type=int, help='roi size in x direction')
134 | parser.add_argument('--roi_y', default=96, type=int, help='roi size in y direction')
135 | parser.add_argument('--roi_z', default=96, type=int, help='roi size in z direction')
136 | parser.add_argument('--num_samples', default=1, type=int, help='sample number in each ct')
137 |
138 | args = parser.parse_args()
139 | args.log_name = args.resume.split('/')[2]
140 | args.save_dir = args.save_dir.format(args.dataset_list[0])
141 | args.epoch = args.resume.split('/')[-1].split('.')[0]
142 |
143 | # prepare the 3D model
144 | model = Universal_model(out_channels=NUM_CLASS)
145 |
146 | #Load pre-trained weights
147 | store_dict = model.state_dict()
148 | checkpoint = torch.load(args.resume)
149 | load_dict = checkpoint['state_dict']
150 |
151 | for key, value in load_dict.items():
152 | if 'swinViT' in key or 'encoder' in key or 'decoder' in key:
153 | name = '.'.join(key.split('.')[1:])
154 | name = 'backbone.' + name
155 | else:
156 | name = '.'.join(key.split('.')[1:])
157 | store_dict[name] = value
158 |
159 |
160 | model.load_state_dict(store_dict)
161 | print('Use pretrained weights')
162 | model.cuda()
163 |
164 | torch.backends.cudnn.benchmark = True
165 |
166 | test_loader, test_transforms = get_loader(args)
167 |
168 | test(model, test_loader, test_transforms, args)
169 |
170 | if __name__ == "__main__":
171 | main()
172 |
173 |
--------------------------------------------------------------------------------
/dataset/dataloader.py:
--------------------------------------------------------------------------------
1 | import sys
2 |
3 | import torch
4 |
5 | from monai.data import DataLoader, Dataset, list_data_collate
6 | from monai.transforms import (
7 | AddChanneld,
8 | Compose,
9 | CropForegroundd,
10 | Orientationd,
11 | RandShiftIntensityd,
12 | ScaleIntensityRanged,
13 | Spacingd,
14 | RandRotate90d,
15 | ToTensord,
16 | SpatialPadd,
17 | apply_transform,
18 | )
19 |
20 |
21 | sys.path.append("..")
22 | from dataset.mysampler import ResumableDistributedSampler
23 | from dataset.utils import LoadImageh5d_train, LoadImageh5d_test
24 | from dataset.utils import RandZoomd_select, RandCropByPosNegLabeld_select, RandCropByLabelClassesd_select
25 |
26 |
27 | class MyDataset(Dataset):
28 | """Dataset that reads videos"""
29 | def __init__(self,
30 | data_dict,
31 | num_files,
32 | transforms=None):
33 | super().__init__(data=data_dict, transform=transforms)
34 | self.data_dict = data_dict
35 | self.num_files = num_files
36 | self.filelist_mmap = None
37 | self.transforms = transforms
38 |
39 | def _transform(self, data_i):
40 | return apply_transform(self.transforms, data_i) if self.transforms is not None else data_i
41 |
42 | def __getitem__(self, index):
43 | data = self.data_dict[index]
44 | data['index'] = index
45 | data_i = self._transform(data)
46 |
47 | return data_i
48 |
49 | def __len__(self):
50 | return self.num_files
51 |
52 |
53 | def get_loader(args):
54 |
55 | if args.phase == 'test':
56 | test_transforms = Compose(
57 | [
58 | LoadImageh5d_test(keys=["image", "label"]),
59 | AddChanneld(keys=["image", "label"]),
60 | Orientationd(keys=["image", "label"], axcodes="RAS"),
61 | Spacingd(
62 | keys=["image", "label"],
63 | pixdim=(args.space_x, args.space_y, args.space_z),
64 | mode=("bilinear", "nearest"),
65 | ),
66 | ScaleIntensityRanged(
67 | keys=["image"],
68 | a_min=args.a_min,
69 | a_max=args.a_max,
70 | b_min=args.b_min,
71 | b_max=args.b_max,
72 | clip=True,
73 | ),
74 | CropForegroundd(keys=["image", "label", "post_label"], source_key="image"),
75 | ToTensord(keys=["image", "label", "post_label"]),
76 | ]
77 | )
78 |
79 | test_img = []
80 | test_lbl = []
81 | test_post_lbl = []
82 | test_name = []
83 | for item in args.dataset_list:
84 | for line in open(args.data_txt_path + item +'_test.txt'):
85 | name = line.strip().split()[1].split('.')[0]
86 | test_img.append(args.data_root_path + line.strip().split()[0])
87 | test_lbl.append(args.label_root_path + line.strip().split()[1])
88 | test_post_lbl.append(args.data_root_path + name.replace('label', 'post_label_32cls') + '.h5')
89 | test_name.append(name)
90 | data_dicts_test = [{'image': image, 'label': label, 'post_label': post_label, 'name': name}
91 | for image, label, post_label, name in zip(test_img, test_lbl, test_post_lbl, test_name)]
92 | print('test len {}'.format(len(data_dicts_test)))
93 |
94 | test_dataset = Dataset(data=data_dicts_test, transform=test_transforms)
95 | test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False, num_workers=4, collate_fn=list_data_collate)
96 | return test_loader, test_transforms
97 |
98 | elif args.phase == 'train':
99 | train_transforms = Compose(
100 | [
101 | LoadImageh5d_train(keys=["image", "label"]),
102 | AddChanneld(keys=["image", "label"]),
103 | Orientationd(keys=["image", "label"], axcodes="RAS"),
104 | Spacingd(
105 | keys=["image", "label"],
106 | pixdim=(args.space_x, args.space_y, args.space_z),
107 | mode=("bilinear", "nearest"),
108 | ),
109 | ScaleIntensityRanged(
110 | keys=["image"],
111 | a_min=args.a_min,
112 | a_max=args.a_max,
113 | b_min=args.b_min,
114 | b_max=args.b_max,
115 | clip=True,
116 | ),
117 | CropForegroundd(keys=["image", "label", "post_label"], source_key="image"),
118 | SpatialPadd(keys=["image", "label", "post_label"], spatial_size=(args.roi_x, args.roi_y, args.roi_z), mode='constant'),
119 | RandZoomd_select(keys=["image", "label", "post_label"], prob=0.3, min_zoom=1.3, max_zoom=1.5, mode=['area', 'nearest', 'nearest']),
120 | RandCropByPosNegLabeld_select(
121 | keys=["image", "label", "post_label"],
122 | label_key="label",
123 | spatial_size=(args.roi_x, args.roi_y, args.roi_z),
124 | pos=2,
125 | neg=1,
126 | num_samples=args.num_samples,
127 | image_key="image",
128 | image_threshold=0,
129 | ),
130 | RandCropByLabelClassesd_select(
131 | keys=["image", "label", "post_label"],
132 | label_key="label",
133 | spatial_size=(args.roi_x, args.roi_y, args.roi_z),
134 | ratios=[1, 1, 5],
135 | num_classes=3,
136 | num_samples=args.num_samples,
137 | image_key="image",
138 | image_threshold=0,
139 | ),
140 | RandRotate90d(
141 | keys=["image", "label", "post_label"],
142 | prob=0.10,
143 | max_k=3,
144 | ),
145 | RandShiftIntensityd(
146 | keys=["image"],
147 | offsets=0.10,
148 | prob=0.20,
149 | ),
150 | ToTensord(keys=["image", "label", "post_label"]),
151 | ]
152 | )
153 |
154 | train_img = []
155 | train_lbl = []
156 | train_post_lbl = []
157 | train_name = []
158 | for item in args.dataset_list:
159 | for line in open(args.data_txt_path + item +'_train.txt'):
160 | name = line.strip().split()[1].split('.')[0]
161 | train_img.append(args.data_root_path + line.strip().split()[0])
162 | train_lbl.append(args.label_root_path + line.strip().split()[1])
163 | train_post_lbl.append(args.data_root_path + name.replace('label', 'post_label_32cls') + '.h5')
164 | train_name.append(name)
165 | data_dicts_train = [{'image': image, 'label': label, 'post_label': post_label, 'name': name}
166 | for image, label, post_label, name in zip(train_img, train_lbl, train_post_lbl, train_name)]
167 | if args.local_rank == 0:
168 | print('train len {}'.format(len(data_dicts_train)))
169 |
170 | train_dataset = MyDataset(
171 | data_dicts_train,
172 | num_files=len(data_dicts_train),
173 | transforms=train_transforms)
174 |
175 | if args.local_rank == 0:
176 | print(f'Dataset: {len(train_dataset)}')
177 |
178 | train_sampler = ResumableDistributedSampler(
179 | dataset=train_dataset,
180 | shuffle=args.shuffle,
181 | batch_size=args.batch_size,
182 | drop_last=True)
183 |
184 | if args.local_rank == 0:
185 | print(f'Sampler: {len(train_sampler)}')
186 |
187 | if args.memory == 'LM':
188 | world_size = torch.distributed.get_world_size() if torch.distributed.is_initialized() else 1
189 | memory_size = int(args.memory_size / world_size)
190 | from dataset.mysampler import LMBatchSampler
191 | batch_sampler = LMBatchSampler(
192 | memory_size=memory_size,
193 | repeat=args.sampling_rate,
194 | sampler=train_sampler,
195 | batch_size=args.batch_size,
196 | drop_last=True)
197 | elif args.memory == 'DM':
198 | world_size = torch.distributed.get_world_size() if torch.distributed.is_initialized() else 1
199 | memory_size = int(args.memory_size/ world_size)
200 | from dataset.mysampler import DMBatchSampler
201 | batch_sampler = DMBatchSampler(
202 | memory_size=memory_size,
203 | repeat=args.sampling_rate,
204 | sampler=train_sampler,
205 | batch_size=args.batch_size,
206 | drop_last=True)
207 | elif args.memory == 'SM':
208 | world_size = torch.distributed.get_world_size() if torch.distributed.is_initialized() else 1
209 | memory_size = int(args.memory_size/ world_size)
210 | from dataset.mysampler import SMBatchSampler
211 | batch_sampler = SMBatchSampler(
212 | memory_size=memory_size,
213 | repeat=args.sampling_rate,
214 | sampler=train_sampler,
215 | batch_size=args.batch_size,
216 | top_k_entropy=args.top_k_entropy,
217 | drop_last=True)
218 | else:
219 | raise NotImplementedError
220 | if args.local_rank == 0:
221 | print(f'Batch Sampler: {len(batch_sampler)}')
222 |
223 | train_loader = DataLoader(
224 | dataset=train_dataset,
225 | batch_sampler=batch_sampler,
226 | collate_fn=list_data_collate,
227 | num_workers=args.num_workers,
228 | pin_memory=True,
229 | prefetch_factor=1)
230 |
231 | if args.local_rank == 0:
232 | print(f'Train loader: {len(train_loader)}')
233 |
234 | return train_loader, batch_sampler
235 |
236 |
--------------------------------------------------------------------------------
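A sketch of building the test loader directly through `get_loader`; the attribute names mirror the `test.py` parser and the data paths are placeholders:

```python
from types import SimpleNamespace
from dataset.dataloader import get_loader

args = SimpleNamespace(
    phase='test', dataset_list=['PAOT'],
    data_txt_path='./dataset/dataset_list/',
    data_root_path='/path/to/data/', label_root_path='/path/to/labels/',
    space_x=1.5, space_y=1.5, space_z=1.5,
    a_min=-175, a_max=250, b_min=0.0, b_max=1.0,
)
test_loader, test_transforms = get_loader(args)  # batch_size is fixed to 1 in the test branch
```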
/train.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 | import time
4 | import warnings
5 | warnings.filterwarnings("ignore")
6 |
7 | import torch
8 | import torch.distributed as dist
9 | from torch.nn.parallel import DistributedDataParallel
10 | from tensorboardX import SummaryWriter
11 |
12 | from model.Universal_model import Universal_model
13 | from dataset.dataloader import get_loader
14 |
15 | from utils.loss import DiceLoss_SM, Multi_BCELoss_SM, DiceLoss, Multi_BCELoss
16 | from utils.utils import adjust_learning_rate, calculate_remaining_time, TEMPLATE, NUM_CLASS
17 | from utils.utils import AverageMeter, WindowAverageMeter, ProgressMeter, CheckpointManager
18 |
19 | torch.multiprocessing.set_sharing_strategy('file_system')
20 |
21 |
22 | def train(args, train_loader, model, optimizer, loss_seg_DICE, loss_seg_CE, epoch, writer, ckpt_manager):
23 |
24 | batch_time = WindowAverageMeter('Time', fmt=':6.3f')
25 | data_time = WindowAverageMeter('Data', fmt=':6.3f')
26 | losses = AverageMeter('Loss', ':.4e')
27 | lr_meter = AverageMeter('LR', ':.4e')
28 | buff_meters = []
29 |
30 | num_seen = AverageMeter('#Seen', ':6.3f')
31 | num_seen_max = AverageMeter('#Seen Max', ':6.3f')
32 | similarity = AverageMeter('Memory Sim', ':6.3f')
33 | neig_similarity = AverageMeter('Memory Neig Sim', ':6.3f')
34 | buff_meters = [num_seen, num_seen_max, similarity,
35 | neig_similarity]
36 | progress = ProgressMeter(len(train_loader),
37 | [batch_time, data_time, lr_meter] + buff_meters + [losses],
38 | prefix="Epoch: [{}]".format(epoch),
39 | tbwriter=writer,
40 | rank=args.local_rank)
41 |
42 | model.train()
43 |
44 | end = time.time()
45 | start_time = time.time()
46 | world_size = torch.distributed.get_world_size() if torch.distributed.is_initialized() else 1
47 |
48 | for data in train_loader:
49 | batch_i = train_loader.batch_sampler.advance_batches_seen()
50 | effective_epoch = epoch + (batch_i / len(train_loader))
51 | lr = adjust_learning_rate(optimizer,
52 | effective_epoch,
53 | args,
54 | epoch_size=len(train_loader))
55 | lr_meter.update(lr)
56 |
57 | x, y, ratio, name = data['image'], data["post_label"], data['organs_ratio'], data['name']
58 | data_time.update(time.time() - end)
59 |
60 | x, y, ratio = x.to(args.device), y.float().to(args.device), ratio.float().to(args.device)
61 | logit_map, z = model(x)
62 |
63 | term_seg_Dice = loss_seg_DICE.forward(logit_map, y, name, ratio, TEMPLATE)
64 | term_seg_BCE = loss_seg_CE.forward(logit_map, y, name, ratio, TEMPLATE)
65 | loss_per_sample = term_seg_BCE + term_seg_Dice
66 | loss = loss_per_sample.mean()
67 | losses.update(loss.item(), x.size(0))
68 |
69 | with torch.no_grad():
70 | data['feature'] = z.squeeze(dim=-1).squeeze(dim=-1).squeeze(dim=-1).detach()
71 | data['entropy'] = term_seg_BCE.detach()
72 | data['loss'] = loss_per_sample.detach()
73 |
74 | stats = train_loader.batch_sampler.update_sample_stats(data)
75 | if 'num_seen' in stats:
76 | num_seen.update(stats['num_seen'].float().mean().item(),
77 | stats['num_seen'].shape[0])
78 | num_seen_max.update(stats['num_seen'].float().max().item(),
79 | stats['num_seen'].shape[0])
80 | if 'similarity' in stats:
81 | similarity.update(stats['similarity'].float().mean().item(),
82 | stats['similarity'].shape[0])
83 | if 'neighbor_similarity' in stats:
84 | neig_similarity.update(
85 | stats['neighbor_similarity'].float().mean().item(),
86 | stats['neighbor_similarity'].shape[0])
87 |
88 | # compute gradient and do SGD step
89 | optimizer.zero_grad()
90 | loss.backward()
91 | optimizer.step()
92 |
93 | # Create checkpoints
94 | if ckpt_manager is not None:
95 | ckpt_manager.checkpoint(epoch=epoch,
96 | batch_i=batch_i,
97 | save_dict={
98 | 'epoch': epoch,
99 | 'batch_i': batch_i,
100 | 'arch': args.backbone,
101 | })
102 |
103 | # measure elapsed time
104 | batch_time.update(time.time() - end)
105 | # measure eta time
106 | if batch_i % args.print_freq == 0 and args.local_rank == 0:
107 | days, hours, minutes, seconds = calculate_remaining_time(start_time, batch_i, len(train_loader))
108 | print(f"ETA: {days} DAY {hours} HR {minutes} MIN {seconds} SEC")
109 |
110 | end = time.time()
111 |
112 | # Log
113 | if batch_i % args.print_freq == 0:
114 | tb_step = (
115 | epoch * len(train_loader.dataset) // args.batch_size +
116 | batch_i * world_size)
117 | progress.display(batch_i)
118 | progress.tbwrite(tb_step)
119 |
120 |
121 |
122 |
123 | def process(args):
124 | rank = 0
125 |
126 | dist.init_process_group(backend="nccl", init_method="env://")
127 | rank = args.local_rank
128 | args.device = torch.device(f"cuda:{rank}")
129 | torch.cuda.set_device(args.device)
130 |
131 | # prepare the 3D model
132 | model = Universal_model(out_channels=NUM_CLASS)
133 |
134 | #Load pre-trained weights
135 | if args.pretrain is not None:
136 | model.load_params(torch.load(args.pretrain)["state_dict"])
137 | if rank == 0:
138 | print('load pretrain')
139 |
140 | word_embedding = torch.load(args.word_embedding)
141 | model.organ_embedding.data = word_embedding.float()
142 | if rank == 0:
143 | print('load word embedding')
144 |
145 | model.to(args.device)
146 |
147 |
148 | model = DistributedDataParallel(model, device_ids=[args.device])
149 |
150 | # criterion and optimizer
151 | if args.loss_type == 'SM':
152 | loss_seg_DICE = DiceLoss_SM(num_classes=NUM_CLASS).to(args.device)
153 | loss_seg_CE = Multi_BCELoss_SM(num_classes=NUM_CLASS).to(args.device)
154 | else:
155 | loss_seg_DICE = DiceLoss(num_classes=NUM_CLASS).to(args.device)
156 | loss_seg_CE = Multi_BCELoss(num_classes=NUM_CLASS).to(args.device)
157 |
158 | optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
159 |
160 | writer = None
161 | if rank == 0:
162 | writer = SummaryWriter(log_dir=args.save_dir+'/' + args.log_name)
163 | print('Writing Tensorboard logs to ', args.save_dir+'/' + args.log_name)
164 |
165 | torch.backends.cudnn.benchmark = True
166 |
167 | train_loader, train_sampler = get_loader(args)
168 |
169 | modules = {
170 | 'state_dict': model,
171 | 'optimizer': optimizer,
172 | 'sampler': train_loader.batch_sampler
173 | }
174 | ckpt_manager = CheckpointManager(
175 | modules=modules,
176 | ckpt_dir=os.path.join(args.save_dir, args.log_name),
177 | epoch_size=len(train_loader),
178 | epochs=args.max_epoch,
179 | save_freq=args.store_num,
180 | save_freq_mints=args.store_num_mints)
181 | if args.resume:
182 | args.start_epoch = ckpt_manager.resume()
183 |
184 | for epoch in range(args.start_epoch, args.max_epoch):
185 |
186 | dist.barrier()
187 | train_sampler.set_epoch(epoch)
188 |
189 | train(args, train_loader, model, optimizer, loss_seg_DICE, loss_seg_CE, epoch, writer, ckpt_manager)
190 |
191 | ckpt_manager.checkpoint(epoch=epoch + 1,
192 | save_dict={
193 | 'epoch': epoch + 1,
194 | 'batch_i': 0,
195 | 'arch': args.backbone,
196 | })
197 | train_sampler.init_from_ckpt = False
198 |
199 | dist.destroy_process_group()
200 |
201 | def main():
202 | parser = argparse.ArgumentParser()
203 | ## Distributed training
204 | parser.add_argument("--local_rank", type=int)
205 | parser.add_argument("--device")
206 |
207 | ## Logging
208 | parser.add_argument('--print_freq', default=10, type=int, help='How often (in iterations) to print and log progress')
209 | parser.add_argument('--log_name', default='{}-{}-x{}-mem{}-loss{}', help='Log name template, filled with backbone, memory type, sampling rate, memory size, and loss type')
210 | parser.add_argument('--save_dir', default='./out/{}', help='Output directory template, filled with the dataset name')
211 |
212 | ## Model
213 | parser.add_argument('--phase', default='train', help='train or validation or test')
214 | parser.add_argument('--backbone', default='unet')
215 | parser.add_argument('--resume', default=False, help='The path resume from checkpoint')
216 | parser.add_argument('--pretrain', default='pretrained_weights/Genesis_Chest_CT.pt')
217 | parser.add_argument('--word_embedding', default='./pretrained_weights/txt_encoding.pth',
218 | help='The path of word embedding')
219 |
220 | ## Hyperparameter
221 | parser.add_argument("--start_epoch", default=0)
222 | parser.add_argument('--max_epoch', default=1, type=int, help='Number of training epoches')
223 | parser.add_argument('--store_num', default=50, type=int, help='Store model how often')
224 | parser.add_argument('--store_num_mints', default=30, type=int, help='Store model how often (minutes)')
225 |
226 | ## Optimizer
227 | parser.add_argument('--lr', default=1e-4, type=float, help='Learning rate')
228 | parser.add_argument('--weight_decay', default=1e-5, help='Weight Decay')
229 | parser.add_argument('--lr_schedule', default='cos', help='memmap: constant, other: cos', choices=['constant', 'cos', 'triangle'])
230 | parser.add_argument('--lr_schedule_period', default=3000, type=int, help='Learning rate schedule period (iterations)')
231 | parser.add_argument('--max_lr', default=0.003, type=float, help='Maximum learning rate')
232 | parser.add_argument('--exit_decay', default=0.0, type=float, help='Learning rate')
233 |
234 | ## Memory
235 | parser.add_argument('--loss_type', default='SM', choices=['Simple', 'SM'])
236 | parser.add_argument('--memory', default='SM', choices=['LM', 'DM', 'SM'])
237 | parser.add_argument('--sampling_rate', default=100)
238 | parser.add_argument('--memory_size', default=128)
239 | parser.add_argument('--top_k_entropy', default=4, help='memory()/k')
240 | parser.add_argument('--shuffle', default=False)
241 |
242 | ## Dataset
243 | parser.add_argument('--dataset_list', nargs='+', default=['PAOT'])
244 | parser.add_argument('--data_root_path', default='', help='data root path')
245 | parser.add_argument('--label_root_path', default='', help='label root path')
246 | parser.add_argument('--data_txt_path', default='./dataset/dataset_list/', help='data txt path')
247 | parser.add_argument('--batch_size', default=1, help='batch size')
248 | parser.add_argument('--num_workers', default=8, type=int, help='number of workers for DataLoader')
249 |
250 | parser.add_argument('--a_min', default=-175, type=float, help='a_min in ScaleIntensityRanged')
251 | parser.add_argument('--a_max', default=250, type=float, help='a_max in ScaleIntensityRanged')
252 | parser.add_argument('--b_min', default=0.0, type=float, help='b_min in ScaleIntensityRanged')
253 | parser.add_argument('--b_max', default=1.0, type=float, help='b_max in ScaleIntensityRanged')
254 | parser.add_argument('--space_x', default=1.5, type=float, help='spacing in x direction')
255 | parser.add_argument('--space_y', default=1.5, type=float, help='spacing in y direction')
256 | parser.add_argument('--space_z', default=1.5, type=float, help='spacing in z direction')
257 | parser.add_argument('--roi_x', default=96, type=int, help='roi size in x direction')
258 | parser.add_argument('--roi_y', default=96, type=int, help='roi size in y direction')
259 | parser.add_argument('--roi_z', default=96, type=int, help='roi size in z direction')
260 | parser.add_argument('--num_samples', default=1, type=int, help='sample number in each ct')
261 |
262 |
263 | args = parser.parse_args()
264 | args.log_name = args.log_name.format(args.backbone, args.memory, args.sampling_rate, args.memory_size, args.loss_type)
265 | args.save_dir = args.save_dir.format(args.dataset_list[0])
266 |
267 | os.makedirs(os.path.join(args.save_dir, args.log_name), exist_ok=True)
268 | message = '\n'.join([f'{k:<20}: {v}' for k, v in vars(args).items()])
269 | with(open(os.path.join(args.save_dir, args.log_name, 'args.txt'), 'w')) as f:
270 | f.write(message)
271 |
272 | process(args=args)
273 |
274 | if __name__ == "__main__":
275 | main()
--------------------------------------------------------------------------------
/dataset/mysampler.py:
--------------------------------------------------------------------------------
1 | import math
2 | import random
3 | import numpy as np
4 | from collections import deque
5 | from typing import Optional
6 |
7 | import torch.distributed
8 | from torch.utils.data.distributed import DistributedSampler
9 | from torch.utils.data.dataset import Dataset
10 | from torch.utils.data import Sampler
11 | from torch import Generator
12 | from torch.nn import functional as F
13 |
14 |
15 | # Modification of DistributedSampler that distributes samples per gpu in a batchwise fashion.
16 | # This allows us to do true sequential sampling of a dataset, when shuffle is set to false.
17 | class MyDistributedSampler(DistributedSampler):
18 | def __init__(self,
19 | dataset: Dataset,
20 | batch_size: int = 1,
21 | num_replicas: Optional[int] = None,
22 | rank: Optional[int] = None,
23 | shuffle: bool = True,
24 | seed: int = 0,
25 | drop_last: bool = False) -> None:
26 | super().__init__(dataset=dataset,
27 | num_replicas=num_replicas,
28 | rank=rank,
29 | shuffle=shuffle,
30 | seed=seed,
31 | drop_last=drop_last)
32 | self.batch_size = batch_size
33 |
34 | # If the dataset length is evenly divisible by # of replicas, then there
35 | # is no need to drop any data, since the dataset will be split equally.
36 | db_size = len(self.dataset) // (
37 | batch_size * self.num_replicas) * batch_size * self.num_replicas
38 | if self.drop_last and db_size % self.num_replicas != 0: # type: ignore[arg-type]
39 | # Split to nearest available length that is evenly divisible.
40 | # This is to ensure each rank receives the same amount of data when
41 | # using this Sampler.
42 | self.num_samples = math.ceil(
43 | # `type:ignore` is required because Dataset cannot provide a default __len__
44 | # see NOTE in pytorch/torch/utils/data/sampler.py
45 | (db_size - self.num_replicas) /
46 | self.num_replicas # type: ignore[arg-type]
47 | )
48 | else:
49 | self.num_samples = math.ceil(
50 | db_size / self.num_replicas) # type: ignore[arg-type]
51 | self.total_size = self.num_samples * self.num_replicas
52 |
53 | def __iter__(self):
54 | if self.shuffle:
55 | # deterministically shuffle based on epoch and seed
56 | g = torch.Generator()
57 | g.manual_seed(self.seed + self.epoch)
58 | indices = torch.randperm(len(
59 | self.dataset), generator=g).tolist() # type: ignore[arg-type]
60 | else:
61 | indices = list(range(len(self.dataset))) # type: ignore[arg-type]
62 |
63 | if not self.drop_last:
64 | # add extra samples to make it evenly divisible
65 | padding_size = self.total_size - len(indices)
66 | if padding_size <= len(indices):
67 | indices += indices[:padding_size]
68 | else:
69 | indices += (
70 | indices *
71 | math.ceil(padding_size / len(indices)))[:padding_size]
72 | else:
73 | # remove tail of data to make it evenly divisible.
74 | indices = indices[:self.total_size]
75 | assert len(indices) == self.total_size
76 |
77 | # subsample
78 | if self.batch_size == 1:
79 | indices = indices[self.rank:self.total_size:self.num_replicas]
80 | assert len(indices) == self.num_samples
81 | else:
82 | batches = [
83 | indices[i:i + self.batch_size]
84 | for i in range(0, len(indices), self.batch_size)
85 | ]
86 | batches = batches[self.rank:len(batches):self.num_replicas]
87 | indices = [i for b in batches for i in b]
88 | assert len(
89 | indices
90 | ) == self.num_samples, f"{len(indices)} {self.num_samples}"
91 |
92 | return iter(indices)
93 |
94 | def __len__(self) -> int:
95 | return self.num_samples
96 |
97 | def set_epoch(self, epoch: int) -> None:
98 | self.epoch = epoch
99 |
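# Behaviour sketch (assumes a toy list-like dataset and no initialized process
# group, so num_replicas/rank are passed explicitly). With shuffle=False and
# batch_size=2, each rank receives contiguous batches in database order instead
# of the interleaved indices a vanilla DistributedSampler would produce:
#
#   ds = list(range(8))
#   for r in range(2):
#       s = MyDistributedSampler(ds, batch_size=2, num_replicas=2, rank=r,
#                                shuffle=False)
#       print(r, list(s))   # rank 0 -> [0, 1, 4, 5]; rank 1 -> [2, 3, 6, 7]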
100 |
101 | class ResumableDistributedSampler(Sampler):
102 | def __init__(self,
103 | dataset: Dataset,
104 | batch_size: int = None,
105 | num_replicas: Optional[int] = None,
106 | rank: Optional[int] = None,
107 | shuffle: bool = True,
108 | seed: int = 0,
109 | drop_last: bool = False,
110 | n_seq_samples: int = -1) -> None:
111 | self.sampler = MyDistributedSampler(
112 | dataset=dataset,
113 | batch_size=batch_size,
114 | num_replicas=num_replicas,
115 | rank=rank,
116 | shuffle=shuffle,
117 | seed=seed,
118 | drop_last=drop_last,
119 | )
120 | self.n_seq_samples = n_seq_samples
121 | self.start_idx = 0
122 | self.num_replicas = self.sampler.num_replicas
123 | self.rank = self.sampler.rank
124 |
125 | def __iter__(self):
126 | indices = list(self.sampler)
127 | if self.n_seq_samples > 0:
128 | start_inds = torch.tensor(
129 | [i for i in indices if i % self.n_seq_samples == 0]).cuda()
130 | start_inds_all = gather(start_inds, distributed=True)
131 | start_inds_all = torch.cat(start_inds_all).cpu()
132 |
133 | num_start = math.ceil(
134 | (len(start_inds_all) - self.sampler.num_replicas) /
135 | self.sampler.num_replicas)
136 | total_num_start = self.sampler.num_replicas * num_start
137 | start_inds_all = start_inds_all[:total_num_start]
138 | start_inds = start_inds_all[self.sampler.rank::self.sampler.
139 | num_replicas]
140 | print('creating inds')
141 | indices = np.arange(self.n_seq_samples)[np.newaxis, :].repeat(
142 | len(start_inds), axis=0)
143 | indices = indices + start_inds.cpu().numpy()[:, np.newaxis].repeat(
144 | self.n_seq_samples, axis=1)
145 | indices = indices.reshape(-1)
146 | print(len(indices))
147 |
148 | return iter(indices[self.start_idx:])
149 |
150 | def __len__(self) -> int:
151 | return len(self.sampler) - self.start_idx
152 |
153 | def set_epoch(self, epoch: int, instance: int = 0) -> None:
154 | self.sampler.set_epoch(epoch)
155 | world_size = torch.distributed.get_world_size()
156 | self.start_idx = instance // world_size
157 |
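# Resumption sketch (assumes an initialized process group, since set_epoch
# queries the world size). If a run stopped after `instance` samples had been
# consumed globally, each rank skips instance // world_size of its own indices
# on the next pass:
#
#   sampler = ResumableDistributedSampler(dataset, batch_size=1, shuffle=False)
#   sampler.set_epoch(epoch=2, instance=640)  # with 4 ranks: resume at index 160 per rank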
158 |
159 | class LMBatchSampler(Sampler):
160 | def __init__(self, memory_size: int, repeat: int, sampler: Sampler,
161 | batch_size: int, drop_last: bool) -> None:
162 | # Since collections.abc.Iterable does not check for `__getitem__`, which
163 | # is one way for an object to be an iterable, we don't do an `isinstance`
164 | # check here.
165 | if not isinstance(batch_size, int) or isinstance(batch_size, bool) or \
166 | batch_size <= 0:
167 | raise ValueError("batch_size should be a positive integer value, "
168 | "but got batch_size={}".format(batch_size))
169 | if not isinstance(drop_last, bool):
170 | raise ValueError("drop_last should be a boolean value, but got "
171 | "drop_last={}".format(drop_last))
172 | self.sampler = sampler
173 | self.batch_size = batch_size
174 | self.drop_last = drop_last
175 | self.memory_size = memory_size
176 | self.repeat = repeat
177 |
178 | self.seed_base = 93823982
179 | self.epoch = 0
180 | assert drop_last
181 |
182 | self.distributed = torch.distributed.is_available(
183 | ) and torch.distributed.is_initialized()
184 | self.rank = torch.distributed.get_rank() if self.distributed else 0
185 |
186 | self.memory = deque(maxlen=self.memory_size)
187 | self.db_head = 0
188 | self.num_batches_seen = 0
189 | self.num_batches_yielded = 0
190 | self.batch_history = deque(maxlen=128)
191 | self.init_from_ckpt = False
192 |
193 | def state_dict(self):
194 | batch_history = gather(torch.tensor(self.batch_history),
195 | self.distributed)
196 | memory = gather_memory(self.memory, self.distributed)
197 | return {
198 | 'memory': memory,
199 | 'db_head': self.db_head,
200 | 'num_batches_seen': self.num_batches_seen,
201 | 'num_batches_yielded': self.num_batches_yielded,
202 | 'batch_history': batch_history
203 | }
204 |
205 | def load_state_dict(self, state_dict):
206 | self.memory = deque(reverse_tensorized_memory(state_dict['memory'],
207 | self.rank),
208 | maxlen=self.memory_size)
209 | self.db_head = state_dict['db_head']
210 | self.num_batches_seen = state_dict['num_batches_seen']
211 | self.num_batches_yielded = state_dict['num_batches_yielded']
212 | self.init_from_ckpt = True
213 |
214 | batch_history = state_dict['batch_history'][self.rank]
215 | batch_history = deque([b.tolist() for b in batch_history], maxlen=128)
216 | self.batch_history = batch_history
217 |
218 | def advance_batches_seen(self):
219 | self.num_batches_seen += 1
220 | return self.num_batches_seen
221 |
222 | def sample_k(self, q, k):
223 | # import random
224 | if k < len(q):
225 | return random.sample(q, k=k)
226 | elif k == len(q):
227 | return q
228 | else:
229 | return random.choices(q, k=k)
230 |
231 | def update_sample_stats(self, sample_info):
232 | db2buff = {b['idx']: i for i, b in enumerate(self.memory)}
233 | sample_index = sample_info['index'].detach()
234 | sample_loss = sample_info['loss'].detach().cpu()
235 | for i in range(self.batch_size):
236 | db_idx = sample_index[i].item()
237 | if db_idx in db2buff:
238 | b = self.memory[db2buff[db_idx]]
239 | b['loss'] = sample_loss[i]
240 | b['seen'] = True
241 | b['num_seen'] += 1
242 | samples = [
243 | self.memory[db2buff[idx]] for idx in sample_index.tolist()
244 | if idx in db2buff
245 | ]
246 | if not samples:
247 | return {}
248 | else:
249 | return tensorize_memory(samples)
250 |
251 | def __iter__(self):
252 | from collections import deque
253 | self.generator = Generator()
254 | self.generator.manual_seed(self.seed_base + self.epoch)
255 | random.seed(self.seed_base + self.epoch)
256 |
257 | if not self.init_from_ckpt:
258 | self.db_head = 0
259 | self.num_batches_seen = 0
260 | self.num_batches_yielded = 0
261 | self.batch_history = deque(maxlen=128)
262 |
263 | # Resubmit batches not seen by the model
264 | for i in range(self.num_batches_yielded - self.num_batches_seen, 0,
265 | -1):
266 | yield self.batch_history[-i]
267 |
268 | all_indices = list(self.sampler)
269 | while self.num_batches_yielded < len(self):
270 | if self.db_head < len(all_indices):
271 | indices = all_indices[self.db_head:self.db_head +
272 | self.batch_size]
273 | self.memory += [{
274 | 'idx': idx,
275 | 'lifespan': 0,
276 | 'loss': None,
277 | 'seen': False,
278 | 'num_seen': 0
279 | } for idx in indices]
280 | self.db_head += len(indices)
281 | if len(indices) > 0 and len(self.memory) < self.memory_size:
282 | continue
283 | for j in range(self.repeat):
284 | batch = self.sample_k(self.memory, self.batch_size)
285 | batch_idx = [b['idx'] for b in batch]
286 | self.batch_history += [batch_idx]
287 | self.num_batches_yielded += 1
288 | yield batch_idx
289 |
290 | self.init_from_ckpt = False
291 |
292 | def __len__(self) -> int:
293 | return len(self.sampler) * self.repeat // self.batch_size
294 |
295 | def set_epoch(self, epoch: int) -> None:
296 | self.epoch = epoch
297 | self.sampler.set_epoch(epoch=epoch)
298 |
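# FIFO-memory sketch for LM (assumes a single process). The memory is a deque
# of the most recent `memory_size` stream indices; once it is full, every new
# batch_size-sized chunk of the stream evicts the oldest entries and `repeat`
# random batches are replayed from the current memory. advance_batches_seen()
# and update_sample_stats() are intended to be called from the training loop.
#
#   base = MyDistributedSampler(list(range(16)), batch_size=2, num_replicas=1,
#                               rank=0, shuffle=False)
#   lm = LMBatchSampler(memory_size=4, repeat=2, sampler=base, batch_size=2,
#                       drop_last=True)
#   for batch_idx in lm:
#       pass    # each yield is a list of `batch_size` dataset indices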
299 |
300 | def tensorize_memory(memory):
301 | memory_tensor = {}
302 | for k in memory[0]:
303 | tens_list = [s[k] for s in memory]
304 | if all(t is None for t in tens_list):
305 | continue
306 | dummy = [t for t in tens_list if t is not None][0] * 0.
307 | tens_list = [t if t is not None else dummy for t in tens_list]
308 | try:
309 | if isinstance(tens_list[0], torch.Tensor):
310 | tens = torch.stack(tens_list)
311 | elif isinstance(tens_list[0], (int, bool, float)):
312 | tens = torch.tensor(tens_list)
313 | else:
314 | tens = torch.tensor(tens_list)
315 | memory_tensor[k] = tens
316 | except Exception as e:
317 | print(tens_list)
318 | print(e)
319 | return memory_tensor
320 |
321 |
322 | def reverse_tensorized_memory(memory_tensor, rank=0):
323 | memory = []
324 | keys = list(memory_tensor.keys())
325 | siz = memory_tensor[keys[0]][rank].shape[0]
326 | for i in range(siz):
327 | memory += [{
328 | k: memory_tensor[k][rank][i].item() if k in {
329 | 'idx', 'lifespan', 'seen', 'num_seen'
330 | } else memory_tensor[k][rank][i].cpu()
331 | for k in keys
332 | }]
333 | return memory
334 |
335 |
336 | def gather(tensor, distributed=False):
337 | if not distributed:
338 | return [tensor]
339 | else:
340 | world_size = torch.distributed.get_world_size()
341 | size = tuple(tensor.shape)
342 | size_all = [size for _ in range(world_size)]
343 | torch.distributed.all_gather_object(size_all, size)
344 |
345 | tensor = tensor.cuda()
346 | max_sz = max([sz[0] for sz in size_all])
347 | expand_sz = tuple([max_sz] + list(size)[1:])
348 | tensor_all = [
349 | torch.zeros(size=expand_sz, dtype=tensor.dtype).cuda()
350 | for _ in range(world_size)
351 | ]
352 | if tensor.shape[0] < max_sz:
353 | pad = [0] * (2 * len(size))
354 | pad[-1] = max_sz - tensor.shape[0]
355 | tensor = F.pad(tensor, pad=pad)
356 | torch.distributed.all_gather(tensor_all, tensor)
357 | return [
358 | tensor_all[r][:size_all[r][0]].cpu() for r in range(world_size)
359 | ]
360 |
361 |
362 | def gather_memory(memory, distributed=False):
363 | memory_tensor = tensorize_memory(memory)
364 | for k in memory_tensor:
365 | memory_tensor[k] = gather(memory_tensor[k], distributed)
366 | return memory_tensor
367 |
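# Padding note for gather(): ranks may hold different numbers of memory
# entries, so each tensor is zero-padded along its first dimension to the
# largest size before all_gather and trimmed back per rank afterwards
# (assumes CUDA plus an initialized process group), e.g.
#
#   rank 0 holds (3, d), rank 1 holds (5, d)
#   -> both padded to (5, d) for the collective
#   -> gather() returns [(3, d) from rank 0, (5, d) from rank 1] on every rank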
368 |
369 | class DMBatchSampler(Sampler):
370 | def __init__(self,
371 | memory_size: int,
372 | repeat: int,
373 | sampler: Sampler,
374 | batch_size: int,
375 | limit_num_seen_coeff: int = -1,
376 | drop_last: bool = True,
377 | rank: int = None) -> None:
378 | # Since collections.abc.Iterable does not check for `__getitem__`, which
379 | # is one way for an object to be an iterable, we don't do an `isinstance`
380 | # check here.
381 | if not isinstance(batch_size, int) or isinstance(batch_size, bool) or \
382 | batch_size <= 0:
383 | raise ValueError("batch_size should be a positive integer value, "
384 | "but got batch_size={}".format(batch_size))
385 | if not isinstance(drop_last, bool):
386 | raise ValueError("drop_last should be a boolean value, but got "
387 | "drop_last={}".format(drop_last))
388 | self.sampler = sampler
389 | self.batch_size = batch_size
390 | self.drop_last = drop_last
391 | self.memory_size = memory_size
392 | self.repeat = repeat
393 | self.gamma = 0.5 # polyak average coeff
394 | self.feat_dim = 2048
395 | self.limit_num_seen_coeff = limit_num_seen_coeff
396 |
397 | self.seed_base = 93823982
398 | self.epoch = 0
399 | assert drop_last
400 |
401 | self.distributed = torch.distributed.is_available(
402 | ) and torch.distributed.is_initialized()
403 | if rank is None:
404 | rank = torch.distributed.get_rank() if self.distributed else 0
405 | self.rank = rank
406 |
407 | # Init memory
408 | self.memory = []
409 | self.db_head = 0
410 | self.num_batches_seen = 0
411 | self.num_batches_yielded = 0
412 | self.batch_history = deque(maxlen=128)
413 | self.init_from_ckpt = False
414 |
415 | def state_dict(self):
416 | batch_history = gather(torch.tensor(self.batch_history),
417 | self.distributed)
418 | memory = gather_memory(self.memory, self.distributed)
419 | state_dict = {
420 | 'memory': memory,
421 | 'db_head': self.db_head,
422 | 'num_batches_seen': self.num_batches_seen,
423 | 'num_batches_yielded': self.num_batches_yielded,
424 | 'batch_history': batch_history,
425 | }
426 | return state_dict
427 |
428 | def load_state_dict(self, state_dict):
429 | self.memory = reverse_tensorized_memory(state_dict['memory'],
430 | self.rank)
431 | if torch.distributed.is_initialized():
432 | for b in self.memory:
433 | b['feature'] = b['feature'].cuda()
434 | # b['similarity'] = b['similarity'].cpu()
435 |
436 | self.db_head = state_dict['db_head']
437 | self.num_batches_seen = state_dict['num_batches_seen']
438 | self.num_batches_yielded = state_dict['num_batches_yielded']
439 | self.init_from_ckpt = True
440 |
441 | batch_history = state_dict['batch_history'][self.rank]
442 | batch_history = deque([b.tolist() for b in batch_history], maxlen=128)
443 | self.batch_history = batch_history
444 |
445 | keys2reset = [
446 | k for k in self.memory[0]
447 | if k not in {'idx', 'lifespan', 'seen', 'num_seen'}
448 | ]
449 | for b in self.memory:
450 | if not b['seen']:
451 | for k in keys2reset:
452 | b[k] = None
453 |
454 | # If saved at end of epoch, signal that next epoch should start from the top.
455 | if self.num_batches_yielded == len(self):
456 | self.init_from_ckpt = False
457 |
458 | def advance_batches_seen(self):
459 | self.num_batches_seen += 1
460 | return self.num_batches_seen
461 |
462 | def sample_k(self, q, k):
463 | if k <= len(q):
464 | return random.sample(q, k=k)
465 | else:
466 | return random.choices(q, k=k)
467 |
468 | def add_to_memory(self, n):
469 | if self.db_head >= len(self.all_indices):
470 | return True
471 |
472 | # Add indices to memory
473 | indices_to_add = self.all_indices[self.db_head:self.db_head + n]
474 | for idx in indices_to_add:
475 | self.memory += [{
476 | 'idx': idx,
477 | 'lifespan': 0,
478 | 'loss': None,
479 | 'neighbor_similarity': None,
480 | 'feature': None,
481 | 'num_seen': 0,
482 | 'seen': False,
483 | }]
484 | self.db_head += len(indices_to_add)
485 |
486 | # Increase lifespan count
487 | for b in self.memory:
488 | b['lifespan'] += 1
489 |
490 | return False
491 |
492 | def resize_memory(self, n):
493 | n2rm = len(self.memory) - n
494 | if n2rm <= 0:
495 | return
496 |
497 | def max_coverage_reduction(x, n2rm):
498 | # removes samples 1 by 1 that are most similar to currently selected.
499 | sim = (torch.einsum('ad,bd->ab', x, x) + 1) / 2
500 | sim.fill_diagonal_(-10.)
501 | idx2rm = []
502 | for i in range(n2rm):
503 | neig_sim = sim.max(dim=1)[0]
504 | most_similar_idx = torch.argmax(neig_sim)
505 | idx2rm += [most_similar_idx.item()]
506 | sim.index_fill_(0, most_similar_idx, -10.)
507 | sim.index_fill_(1, most_similar_idx, -10.)
508 | return idx2rm
509 |
510 | # Only remove samples that have already been evaluated
511 | memory = [(b, i) for i, b in enumerate(self.memory) if b['seen']]
512 | if len(memory) < 2 * n2rm:
513 | lifespans = [b['lifespan'] for b in self.memory]
514 | idx2rm = torch.tensor(lifespans).argsort(
515 | descending=True)[:n2rm].tolist()
516 |
517 | else:
518 | feats = torch.stack([b['feature'] for b, i in memory], 0)
519 | idx2rm = max_coverage_reduction(feats, n2rm)
520 | idx2rm = [memory[i][1] for i in idx2rm]
521 |
522 | # Remove samples from memory
523 | idx2rm = set(idx2rm)
524 | self.memory = [b for i, b in enumerate(self.memory) if i not in idx2rm]
525 |
526 | # Recompute nearest neighbor similarity for tracking
527 | if any(b['seen'] for b in self.memory):
528 | feats = torch.stack(
529 | [b['feature'] for b in self.memory if b['seen']], 0)
530 | feats = feats.cuda() if torch.cuda.is_available() else feats
531 | if feats.shape[0] > 1:
532 | feats_sim = torch.einsum('ad,bd->ab', feats, feats)
533 | neig_sim = torch.topk(feats_sim, k=2, dim=-1,
534 | sorted=False)[0][:, 1:].mean(dim=1).cpu()
535 | i = 0
536 | for b in self.memory:
537 | if b['seen']:
538 | b['neighbor_similarity'] = neig_sim[i]
539 | i += 1
540 |
541 | def update_sample_stats(self, sample_info):
542 | db2buff = {b['idx']: i for i, b in enumerate(self.memory)}
543 | sample_loss = sample_info['loss'].detach().cpu()
544 | sample_index = sample_info['index'].detach().cpu()
545 | sample_features = F.normalize(sample_info['feature'].detach(), p=2, dim=-1)
546 |
547 | def polyak_avg(val, avg, gamma):
548 | return (1 - gamma) * val + gamma * avg
549 |
550 | for i in range(self.batch_size):
551 | db_idx = sample_index[i].item()
552 | if db_idx in db2buff:
553 | b = self.memory[db2buff[db_idx]]
554 | if not b['seen']:
555 | b['loss'] = sample_loss[i]
556 | b['feature'] = sample_features[i]
557 | else:
558 | b['loss'] = polyak_avg(b['loss'], sample_loss[i],
559 | self.gamma)
560 | b['feature'] = F.normalize(polyak_avg(
561 | b['feature'], sample_features[i], self.gamma),
562 | p=2,
563 | dim=-1)
564 | b['num_seen'] += 1
565 | b['seen'] = True
566 |
567 | if self.limit_num_seen_coeff > 0:
568 | max_n_seen = self.limit_num_seen_coeff * self.repeat
569 | self.memory = [
570 | b for b in self.memory if b['num_seen'] < max_n_seen
571 | ]
572 | db2buff = {b['idx']: i for i, b in enumerate(self.memory)}
573 |
574 | samples = [
575 | self.memory[db2buff[idx]] for idx in sample_index.tolist()
576 | if idx in db2buff
577 | ]
578 | if not samples:
579 | return {}
580 | else:
581 | return tensorize_memory(samples)
582 |
583 | def __iter__(self):
584 | random.seed(self.seed_base + self.rank * 1000 + self.epoch)
585 |
586 | self.all_indices = list(self.sampler)
587 | if not self.init_from_ckpt:
588 | self.db_head = 0
589 | self.num_batches_seen = 0
590 | self.num_batches_yielded = 0
591 | self.batch_history = deque(maxlen=128)
592 |
593 | # Resubmit batches not seen by the model
594 | for i in range(self.num_batches_yielded - self.num_batches_seen, 0,
595 | -1):
596 | yield self.batch_history[-i]
597 |
598 | assert self.memory_size <= len(self.all_indices)
599 | while self.num_batches_yielded < len(self):
600 | done = self.add_to_memory(self.batch_size)
601 | if not done and len(self.memory) < self.memory_size:
602 | continue # keep adding until memory is full
603 |
604 | self.resize_memory(self.memory_size)
605 | for j in range(self.repeat):
606 | batch = self.sample_k(self.memory, self.batch_size)
607 | batch_idx = [b['idx'] for b in batch]
608 | self.num_batches_yielded += 1
609 | self.batch_history += [batch_idx]
610 | yield batch_idx
611 |
612 | self.init_from_ckpt = False
613 |
614 | def __len__(self) -> int:
615 | return len(self.sampler) * self.repeat // self.batch_size
616 |
617 | def set_epoch(self, epoch: int) -> None:
618 | self.epoch = epoch
619 | self.sampler.set_epoch(epoch=epoch)
620 |
621 |
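# Max-coverage sketch for DM (diversity memory): when the buffer overflows,
# resize_memory() repeatedly drops the sample whose feature is most similar to
# its nearest neighbour, so the survivors spread out in feature space. A tiny
# worked case with three (approximately) unit features, where a and b are
# near-duplicates:
#
#   a = [1.0, 0.0], b = [0.99, 0.14], c = [0.0, 1.0]   # cos(a, b) ~ 0.99
#   # removing one sample drops a or b, never c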
622 | class SMBatchSampler(Sampler):
623 | def __init__(self,
624 | memory_size: int,
625 | repeat: int,
626 | sampler: Sampler,
627 | batch_size: int,
628 | top_k_entropy: int,
629 | limit_num_seen_coeff: int = -1,
630 | drop_last: bool = True,
631 | rank: int = None) -> None:
632 | # Since collections.abc.Iterable does not check for `__getitem__`, which
633 | # is one way for an object to be an iterable, we don't do an `isinstance`
634 | # check here.
635 | if not isinstance(batch_size, int) or isinstance(batch_size, bool) or \
636 | batch_size <= 0:
637 | raise ValueError("batch_size should be a positive integer value, "
638 | "but got batch_size={}".format(batch_size))
639 | if not isinstance(drop_last, bool):
640 | raise ValueError("drop_last should be a boolean value, but got "
641 | "drop_last={}".format(drop_last))
642 | self.sampler = sampler
643 | self.batch_size = batch_size
644 | self.drop_last = drop_last
645 | self.memory_size = memory_size
646 | self.repeat = repeat
647 | self.gamma = 0.5 # polyak average coeff
648 | self.feat_dim = 2048
649 | self.limit_num_seen_coeff = limit_num_seen_coeff
650 |
651 | self.top_k_entropy = top_k_entropy
652 |
653 | self.seed_base = 93823982
654 | self.epoch = 0
655 | assert drop_last
656 |
657 | self.distributed = torch.distributed.is_available(
658 | ) and torch.distributed.is_initialized()
659 | if rank is None:
660 | rank = torch.distributed.get_rank() if self.distributed else 0
661 | self.rank = rank
662 |
663 | # Init memory
664 | self.memory = []
665 | self.db_head = 0
666 | self.num_batches_seen = 0
667 | self.num_batches_yielded = 0
668 | self.batch_history = deque(maxlen=128)
669 | self.init_from_ckpt = False
670 |
671 | def state_dict(self):
672 | batch_history = gather(torch.tensor(self.batch_history),
673 | self.distributed)
674 | memory = gather_memory(self.memory, self.distributed)
675 | state_dict = {
676 | 'memory': memory,
677 | 'db_head': self.db_head,
678 | 'num_batches_seen': self.num_batches_seen,
679 | 'num_batches_yielded': self.num_batches_yielded,
680 | 'batch_history': batch_history,
681 | }
682 | return state_dict
683 |
684 | def load_state_dict(self, state_dict):
685 | self.memory = reverse_tensorized_memory(state_dict['memory'],
686 | self.rank)
687 | if torch.distributed.is_initialized():
688 | for b in self.memory:
689 | b['feature'] = b['feature'].cuda()
690 | # b['similarity'] = b['similarity'].cpu()
691 |
692 | self.db_head = state_dict['db_head']
693 | self.num_batches_seen = state_dict['num_batches_seen']
694 | self.num_batches_yielded = state_dict['num_batches_yielded']
695 | self.init_from_ckpt = True
696 |
697 | batch_history = state_dict['batch_history'][self.rank]
698 | batch_history = deque([b.tolist() for b in batch_history], maxlen=128)
699 | self.batch_history = batch_history
700 |
701 | keys2reset = [
702 | k for k in self.memory[0]
703 | if k not in {'idx', 'lifespan', 'seen', 'num_seen'}
704 | ]
705 | for b in self.memory:
706 | if not b['seen']:
707 | for k in keys2reset:
708 | b[k] = None
709 |
710 | # If saved at end of epoch, signal that next epoch should start from the top.
711 | if self.num_batches_yielded == len(self):
712 | self.init_from_ckpt = False
713 |
714 | def advance_batches_seen(self):
715 | self.num_batches_seen += 1
716 | return self.num_batches_seen
717 |
718 | def sample_k(self, q, k):
719 | if k <= len(q):
720 | return random.sample(q, k=k)
721 | else:
722 | return random.choices(q, k=k)
723 |
724 | def add_to_memory(self, n):
725 | if self.db_head >= len(self.all_indices):
726 | return True
727 |
728 | # Add indices to memory
729 | indices_to_add = self.all_indices[self.db_head:self.db_head + n]
730 | for idx in indices_to_add:
731 | self.memory += [{
732 | 'idx': idx,
733 | 'lifespan': 0,
734 | 'loss': None,
735 | # 'similarity': None,
736 | 'neighbor_similarity': None,
737 | 'feature': None,
738 | 'entropy': None,
739 | 'num_seen': 0,
740 | 'seen': False,
741 | }]
742 | self.db_head += len(indices_to_add)
743 |
744 | # Increase lifespan count
745 | for b in self.memory:
746 | b['lifespan'] += 1
747 |
748 | return False
749 |
750 | def resize_memory(self, n):
751 | n2rm = len(self.memory) - n
752 | if n2rm <= 0:
753 | return
754 |
755 | def max_coverage_reduction(x, n2rm):
756 | # removes samples 1 by 1 that are most similar to currently selected.
757 | sim = (torch.einsum('ad,bd->ab', x, x) + 1) / 2
758 | sim.fill_diagonal_(-10.)
759 | idx2rm = []
760 | for i in range(n2rm):
761 | neig_sim = sim.max(dim=1)[0]
762 | most_similar_idx = torch.argmax(neig_sim)
763 | idx2rm += [most_similar_idx.item()]
764 | sim.index_fill_(0, most_similar_idx, -10.)
765 | sim.index_fill_(1, most_similar_idx, -10.)
766 | return idx2rm
767 |
768 | # Only remove samples that have already been evaluated
769 | memory = [(b, i) for i, b in enumerate(self.memory) if b['seen']]
770 |
771 | # Only remove samples from the ones under top N//k entropy
772 | if len(memory) >= self.top_k_entropy:
773 | entropies = torch.stack([b['entropy'] for b, i in memory], 0)
774 | top_k_n = int(len(memory)//(self.top_k_entropy))
775 | top_n_entropies = torch.topk(entropies, k=top_k_n, largest=True).indices
776 | memory = [buff for index, buff in enumerate(memory) if index not in top_n_entropies]
777 |
778 | if len(memory) < 2 * n2rm:
779 | lifespans = [b['lifespan'] for b in self.memory]
780 | idx2rm = torch.tensor(lifespans).argsort(
781 | descending=True)[:n2rm].tolist()
782 |
783 | else:
784 | # Compute top 5 neighbor average similarity
785 | feats = torch.stack([b['feature'] for b, i in memory], 0)
786 | idx2rm = max_coverage_reduction(feats, n2rm)
787 | # idx2rm = neig_sim.argsort(descending=True)[:n2rm]
788 | idx2rm = [memory[i][1] for i in idx2rm]
789 |
790 | # Remove samples from memory
791 | idx2rm = set(idx2rm)
792 | self.memory = [b for i, b in enumerate(self.memory) if i not in idx2rm]
793 |
794 | # Recompute nearest neighbor similarity for tracking
795 | if any(b['seen'] for b in self.memory):
796 | feats = torch.stack(
797 | [b['feature'] for b in self.memory if b['seen']], 0)
798 | feats = feats.cuda() if torch.cuda.is_available() else feats
799 | if feats.shape[0] > 1:
800 | feats_sim = torch.einsum('ad,bd->ab', feats, feats)
801 | neig_sim = torch.topk(feats_sim, k=2, dim=-1,
802 | sorted=False)[0][:, 1:].mean(dim=1).cpu()
803 | i = 0
804 | for b in self.memory:
805 | if b['seen']:
806 | b['neighbor_similarity'] = neig_sim[i]
807 | i += 1
808 |
809 | def update_sample_stats(self, sample_info):
810 | # device = sample_info['loss'].device
811 | db2buff = {b['idx']: i for i, b in enumerate(self.memory)}
812 | sample_loss = sample_info['loss'].detach().cpu()
813 | sample_index = sample_info['index'].detach().cpu()
814 | sample_entropy = sample_info['entropy'].detach().cpu()
815 | sample_features = F.normalize(sample_info['feature'].detach(), p=2, dim=-1)
816 |
817 | def polyak_avg(val, avg, gamma):
818 | return (1 - gamma) * val + gamma * avg
819 |
820 | for i in range(self.batch_size):
821 | db_idx = sample_index[i].item()
822 | if db_idx in db2buff:
823 | b = self.memory[db2buff[db_idx]]
824 | if not b['seen']:
825 | b['loss'] = sample_loss[i]
826 | b['feature'] = sample_features[i]
827 | b['entropy'] = sample_entropy[i]
828 | else:
829 | b['loss'] = polyak_avg(b['loss'], sample_loss[i],
830 | self.gamma)
831 | b['entropy'] = polyak_avg(b['entropy'], sample_entropy[i],
832 | self.gamma)
833 |
834 | b['feature'] = F.normalize(polyak_avg(
835 | b['feature'], sample_features[i], self.gamma),
836 | p=2,
837 | dim=-1)
838 |
839 | b['num_seen'] += 1
840 | b['seen'] = True
841 |
842 | if self.limit_num_seen_coeff > 0:
843 | max_n_seen = self.limit_num_seen_coeff * self.repeat
844 | self.memory = [
845 | b for b in self.memory if b['num_seen'] < max_n_seen
846 | ]
847 | db2buff = {b['idx']: i for i, b in enumerate(self.memory)}
848 |
849 | samples = [
850 | self.memory[db2buff[idx]] for idx in sample_index.tolist()
851 | if idx in db2buff
852 | ]
853 | if not samples:
854 | return {}
855 | else:
856 | return tensorize_memory(samples)
857 |
858 | def __iter__(self):
859 | random.seed(self.seed_base + self.rank * 1000 + self.epoch)
860 |
861 | self.all_indices = list(self.sampler)
862 | if not self.init_from_ckpt:
863 | self.db_head = 0
864 | self.num_batches_seen = 0
865 | self.num_batches_yielded = 0
866 | self.batch_history = deque(maxlen=128)
867 |
868 | # Resubmit batches not seen by the model
869 | for i in range(self.num_batches_yielded - self.num_batches_seen, 0,
870 | -1):
871 | yield self.batch_history[-i]
872 |
873 | assert self.memory_size <= len(self.all_indices)
874 | while self.num_batches_yielded < len(self):
875 | done = self.add_to_memory(self.batch_size)
876 | if not done and len(self.memory) < self.memory_size:
877 | continue # keep adding until memory is full
878 |
879 | self.resize_memory(self.memory_size)
880 | for j in range(self.repeat):
881 | batch = self.sample_k(self.memory, self.batch_size)
882 | batch_idx = [b['idx'] for b in batch]
883 | self.num_batches_yielded += 1
884 | self.batch_history += [batch_idx]
885 | yield batch_idx
886 |
887 | self.init_from_ckpt = False
888 |
889 | def __len__(self) -> int:
890 | return len(self.sampler) * self.repeat // self.batch_size
891 |
892 | def set_epoch(self, epoch: int) -> None:
893 | self.epoch = epoch
894 | self.sampler.set_epoch(epoch=epoch)
895 |
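# SM (the default in train.py) differs from DM only in how eviction candidates
# are chosen: the top len(memory) // top_k_entropy samples by (polyak-averaged)
# prediction entropy are exempt from removal, so the most uncertain samples
# stay in memory longer. For example, with 128 seen samples and top_k_entropy=4,
# the 32 highest-entropy samples are protected and pruning runs on the
# remaining 96.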
--------------------------------------------------------------------------------
/utils/utils.py:
--------------------------------------------------------------------------------
1 | import os, sys, math, cc3d, fastremap, csv, time
2 | import numpy as np
3 | import pandas as pd
4 | import matplotlib.pyplot as plt
5 | from scipy import ndimage
6 |
7 | import torch
8 | import torch.nn.functional as F
9 |
10 | from monai.transforms import Compose
11 | from monai.data import decollate_batch
12 | from monai.transforms import Invertd, SaveImaged
13 |
14 | NUM_CLASS = 32
15 |
16 |
17 | TEMPLATE={
18 | '01': [1,2,3,4,5,6,7,8,9,10,11,12,13,14],
19 | '01_2': [1,3,4,5,6,7,11,14],
20 | '02': [1,3,4,5,6,7,11,14],
21 | '03': [6],
22 | '04': [6,27], # post process
23 | '05': [2,3,26,32], # post process
24 | '06': [1,2,3,4,6,7,11,16,17],
25 | '07': [6,1,3,2,7,4,5,11,14,18,19,12,13,20,21,23,24],
26 | '08': [6, 2, 3, 1, 11],
27 | '09': [1,2,3,4,5,6,7,8,9,11,12,13,14,21,22],
28 | '12': [6,21,16,17,2,3],
29 | '13': [6,2,3,1,11,8,9,7,4,5,12,13,25],
30 | '14': [8,12,13,25,18,14,4,9,3,2,6,11,19,1,7],
31 | '10_03': [6, 27], # post process
32 | '10_06': [30],
33 | '10_07': [11, 28], # post process
34 | '10_08': [15, 29], # post process
35 | '10_09': [1],
36 | '10_10': [31],
37 | '15': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] ## total segmentation
38 | }
39 |
40 | ORGAN_NAME = ['Spleen', 'Right Kidney', 'Left Kidney', 'Gall Bladder', 'Esophagus',
41 | 'Liver', 'Stomach', 'Aorta', 'Postcava', 'Portal Vein and Splenic Vein',
42 | 'Pancreas', 'Right Adrenal Gland', 'Left Adrenal Gland', 'Duodenum', 'Hepatic Vessel',
43 | 'Right Lung', 'Left Lung', 'Colon', 'Intestine', 'Rectum',
44 | 'Bladder', 'Prostate', 'Left Head of Femur', 'Right Head of Femur', 'Celiac Truck',
45 | 'Kidney Tumor', 'Liver Tumor', 'Pancreas Tumor', 'Hepatic Vessel Tumor', 'Lung Tumor', 'Colon Tumor', 'Kidney Cyst']
46 |
47 | ## mapping to original setting
48 | MERGE_MAPPING_v1 = {
49 | '01': [(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10), (11,11), (12,12), (13,13), (14,14)],
50 | '02': [(1,1), (3,3), (4,4), (5,5), (6,6), (7,7), (11,11), (14,14)],
51 | '03': [(6,1)],
52 | '04': [(6,1), (27,2)],
53 | '05': [(2,1), (3,1), (26, 2), (32,3)],
54 | '06': [(1,1), (2,2), (3,3), (4,4), (6,5), (7,6), (11,7), (16,8), (17,9)],
55 | '07': [(1,2), (2,4), (3,3), (4,6), (5,7), (6,1), (7,5), (11,8), (12,12), (13,12), (14,9), (18,10), (19,11), (20,13), (21,14), (23,15), (24,16)],
56 | '08': [(1,3), (2,2), (3,2), (6,1), (11,4)],
57 | '09': [(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (11,10), (12,11), (13,12), (14,13), (21,14), (22,15)],
58 | '10_03': [(6,1), (27,2)],
59 | '10_06': [(30,1)],
60 | '10_07': [(11,1), (28,2)],
61 | '10_08': [(15,1), (29,2)],
62 | '10_09': [(1,1)],
63 | '10_10': [(31,1)],
64 | '12': [(2,4), (3,4), (21,2), (6,1), (16,3), (17,3)],
65 | '13': [(1,3), (2,2), (3,2), (4,8), (5,9), (6,1), (7,7), (8,5), (9,6), (11,4), (12,10), (13,11), (25,12)],
66 | '14': [(1,18), (2,11), (3,10), (4,8), (6,12), (7,19), (8,1), (9,5), (11,13), (12,2), (13,2), (14,7), (18,6), (19,16), (25,5)],
67 | '15': [(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10), (11,11), (12,12), (13,13), (14,14), (16,16), (17,17), (18,18)],
68 | }
69 |
70 | ## split left and right organ more than dataset defined
71 | ## expand on the original class number
72 | MERGE_MAPPING_v2 = {
73 | '01': [(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10), (11,11), (12,12), (13,13), (14,14)],
74 | '02': [(1,1), (3,3), (4,4), (5,5), (6,6), (7,7), (11,11), (14,14)],
75 | '03': [(6,1)],
76 | '04': [(6,1), (27,2)],
77 | '05': [(2,1), (3,3), (26, 2), (32,3)],
78 | '06': [(1,1), (2,2), (3,3), (4,4), (6,5), (7,6), (11,7), (16,8), (17,9)],
79 | '07': [(1,2), (2,4), (3,3), (4,6), (5,7), (6,1), (7,5), (11,8), (12,12), (13,17), (14,9), (18,10), (19,11), (20,13), (21,14), (23,15), (24,16)],
80 | '08': [(1,3), (2,2), (3,5), (6,1), (11,4)],
81 | '09': [(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (11,10), (12,11), (13,12), (14,13), (21,14), (22,15)],
82 | '10_03': [(6,1), (27,2)],
83 | '10_06': [(30,1)],
84 | '10_07': [(11,1), (28,2)],
85 | '10_08': [(15,1), (29,2)],
86 | '10_09': [(1,1)],
87 | '10_10': [(31,1)],
88 | '12': [(2,4), (3,5), (21,2), (6,1), (16,3), (17,6)],
89 | '13': [(1,3), (2,2), (3,13), (4,8), (5,9), (6,1), (7,7), (8,5), (9,6), (11,4), (12,10), (13,11), (25,12)],
90 | '14': [(1,18), (2,11), (3,10), (4,8), (6,12), (7,19), (8,1), (9,5), (11,13), (12,2), (13,25), (14,7), (18,6), (19,16), (25,5)],
91 | '15': [(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10), (11,11), (12,12), (13,13), (14,14), (16,16), (17,17), (18,18)],
92 | }
93 |
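# Reading the mapping tables: each (src, tgt) pair sends universal class `src`
# (1-indexed into ORGAN_NAME) to the label id `tgt` used by that dataset, e.g.
#
#   for src, tgt in MERGE_MAPPING_v1['04']:
#       print(ORGAN_NAME[src - 1], '->', tgt)   # Liver -> 1, Liver Tumor -> 2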
94 | THRESHOLD_DIC = {
95 | 'Spleen': 0.5,
96 | 'Right Kidney': 0.5,
97 | 'Left Kidney': 0.5,
98 | 'Gall Bladder': 0.5,
99 | 'Esophagus': 0.5,
100 | 'Liver': 0.5,
101 | 'Stomach': 0.5,
102 | 'Aorta': 0.5,
103 | 'Postcava': 0.5,
104 | 'Portal Vein and Splenic Vein': 0.5,
105 | 'Pancreas': 0.5,
106 | 'Right Adrenal Gland': 0.5,
107 | 'Left Adrenal Gland': 0.5,
108 | 'Duodenum': 0.5,
109 | 'Hepatic Vessel': 0.5,
110 | 'Right Lung': 0.5,
111 | 'Left Lung': 0.5,
112 | 'Colon': 0.5,
113 | 'Intestine': 0.5,
114 | 'Rectum': 0.5,
115 | 'Bladder': 0.5,
116 | 'Prostate': 0.5,
117 | 'Left Head of Femur': 0.5,
118 | 'Right Head of Femur': 0.5,
119 | 'Celiac Truck': 0.5,
120 | 'Kidney Tumor': 0.5,
121 | 'Liver Tumor': 0.5,
122 | 'Pancreas Tumor': 0.5,
123 | 'Hepatic Vessel Tumor': 0.5,
124 | 'Lung Tumor': 0.5,
125 | 'Colon Tumor': 0.5,
126 | 'Kidney Cyst': 0.5
127 | }
128 |
129 |
130 |
131 | TUMOR_ORGAN = {
132 | 'Kidney Tumor': [2,3],
133 | 'Liver Tumor': [6],
134 | 'Pancreas Tumor': [11],
135 | 'Hepatic Vessel Tumor': [15],
136 | 'Lung Tumor': [16,17],
137 | 'Colon Tumor': [18],
138 | 'Kidney Cyst': [2,3]
139 | }
140 |
141 | def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
142 | distributed = torch.distributed.is_available(
143 | ) and torch.distributed.is_initialized()
144 | if not distributed or (distributed and torch.distributed.get_rank() == 0):
145 | torch.save(state, filename)
146 | print("=> saved checkpoint '{}' (epoch {})".format(
147 | filename, state['epoch']))
148 |
149 |
150 | def organ_post_process(pred_mask, organ_list, save_dir, args):
151 | post_pred_mask = np.zeros(pred_mask.shape)
152 | plot_save_path = save_dir
153 | log_path = args.log_name
154 | dataset_id = save_dir.split('/')[-2]
155 | case_id = save_dir.split('/')[-1]
156 | if not os.path.isdir(plot_save_path):
157 | os.makedirs(plot_save_path)
158 | for b in range(pred_mask.shape[0]):
159 | for organ in organ_list:
160 | if organ == 11: # process both the pancreas and the portal vein / splenic vein
161 | post_pred_mask[b,10] = extract_topk_largest_candidates(pred_mask[b,10], 1) # for pancreas
162 | if 10 in organ_list:
163 | post_pred_mask[b,9] = PSVein_post_process(pred_mask[b,9], post_pred_mask[b,10])
164 | elif organ == 16:
165 | try:
166 | left_lung_mask, right_lung_mask = lung_post_process(pred_mask[b])
167 | post_pred_mask[b,16] = left_lung_mask
168 | post_pred_mask[b,15] = right_lung_mask
169 | except IndexError:
170 | print('this case does not have lungs!')
171 | shape_temp = post_pred_mask[b,16].shape
172 | post_pred_mask[b,16] = np.zeros(shape_temp)
173 | post_pred_mask[b,15] = np.zeros(shape_temp)
174 | with open(log_path + '/' + dataset_id +'/anomaly.csv','a',newline='') as f:
175 | writer = csv.writer(f)
176 | content = case_id
177 | writer.writerow([content])
178 |
179 | right_lung_size = np.sum(post_pred_mask[b,15],axis=(0,1,2))
180 | left_lung_size = np.sum(post_pred_mask[b,16],axis=(0,1,2))
181 |
182 | print('left lung size: '+str(left_lung_size))
183 | print('right lung size: '+str(right_lung_size))
184 |
185 | #knn_model = KNN(n_neighbors=5,contamination=0.00001)
186 | right_lung_save_path = plot_save_path+'/right_lung.png'
187 | left_lung_save_path = plot_save_path+'/left_lung.png'
188 | total_anomly_slice_number=0
189 |
190 | if right_lung_size>left_lung_size:
191 | if right_lung_size/left_lung_size > 4:
192 | mid_point = int(right_lung_mask.shape[0]/2)
193 | left_region = np.sum(right_lung_mask[:mid_point,:,:],axis=(0,1,2))
194 | right_region = np.sum(right_lung_mask[mid_point:,:,:],axis=(0,1,2))
195 |
196 | if (right_region+1)/(left_region+1)>4:
197 | print('this case only has right lung')
198 | post_pred_mask[b,15] = right_lung_mask
199 | post_pred_mask[b,16] = np.zeros(right_lung_mask.shape)
200 | elif (left_region+1)/(right_region+1)>4:
201 | print('this case only has left lung')
202 | post_pred_mask[b,16] = right_lung_mask
203 | post_pred_mask[b,15] = np.zeros(right_lung_mask.shape)
204 | else:
205 | print('need anomaly detection')
206 | print('start anomaly detection at right lung')
207 | try:
208 | left_lung_mask,right_lung_mask,total_anomly_slice_number = anomly_detection(
209 | pred_mask,post_pred_mask[b,15],right_lung_save_path,b,total_anomly_slice_number)
210 | post_pred_mask[b,16] = left_lung_mask
211 | post_pred_mask[b,15] = right_lung_mask
212 | right_lung_size = np.sum(post_pred_mask[b,15],axis=(0,1,2))
213 | left_lung_size = np.sum(post_pred_mask[b,16],axis=(0,1,2))
214 | while right_lung_size/left_lung_size>4 or left_lung_size/right_lung_size>4:
215 | print('still need anomaly detection')
216 | if right_lung_size>left_lung_size:
217 | left_lung_mask,right_lung_mask,total_anomly_slice_number = anomly_detection(
218 | pred_mask,post_pred_mask[b,15],right_lung_save_path,b,total_anomly_slice_number)
219 | else:
220 | left_lung_mask,right_lung_mask,total_anomly_slice_number = anomly_detection(
221 | pred_mask,post_pred_mask[b,16],right_lung_save_path,b,total_anomly_slice_number)
222 | post_pred_mask[b,16] = left_lung_mask
223 | post_pred_mask[b,15] = right_lung_mask
224 | right_lung_size = np.sum(post_pred_mask[b,15],axis=(0,1,2))
225 | left_lung_size = np.sum(post_pred_mask[b,16],axis=(0,1,2))
226 | print('lung separation complete')
227 | except IndexError:
228 | left_lung_mask, right_lung_mask = lung_post_process(pred_mask[b])
229 | post_pred_mask[b,16] = left_lung_mask
230 | post_pred_mask[b,15] = right_lung_mask
231 | print("cannot seperate two lungs, writing csv")
232 | with open(log_path + '/' + dataset_id +'/anomaly.csv','a',newline='') as f:
233 | writer = csv.writer(f)
234 | content = case_id
235 | writer.writerow([content])
236 | else:
237 | if left_lung_size/right_lung_size > 4:
238 | mid_point = int(left_lung_mask.shape[0]/2)
239 | left_region = np.sum(left_lung_mask[:mid_point,:,:],axis=(0,1,2))
240 | right_region = np.sum(left_lung_mask[mid_point:,:,:],axis=(0,1,2))
241 | if (right_region+1)/(left_region+1)>4:
242 | print('this case only has right lung')
243 | post_pred_mask[b,15] = left_lung_mask
244 | post_pred_mask[b,16] = np.zeros(left_lung_mask.shape)
245 | elif (left_region+1)/(right_region+1)>4:
246 | print('this case only has left lung')
247 | post_pred_mask[b,16] = left_lung_mask
248 | post_pred_mask[b,15] = np.zeros(left_lung_mask.shape)
249 | else:
250 |
251 | print('need anomaly detection')
252 | print('start anomaly detection at left lung')
253 | try:
254 | left_lung_mask,right_lung_mask,total_anomly_slice_number = anomly_detection(
255 | pred_mask,post_pred_mask[b,16],left_lung_save_path,b,total_anomly_slice_number)
256 | post_pred_mask[b,16] = left_lung_mask
257 | post_pred_mask[b,15] = right_lung_mask
258 | right_lung_size = np.sum(post_pred_mask[b,15],axis=(0,1,2))
259 | left_lung_size = np.sum(post_pred_mask[b,16],axis=(0,1,2))
260 | while right_lung_size/left_lung_size>4 or left_lung_size/right_lung_size>4:
261 | print('still need anomaly detection')
262 | if right_lung_size>left_lung_size:
263 | left_lung_mask,right_lung_mask,total_anomly_slice_number = anomly_detection(
264 | pred_mask,post_pred_mask[b,15],right_lung_save_path,b,total_anomly_slice_number)
265 | else:
266 | left_lung_mask,right_lung_mask,total_anomly_slice_number = anomly_detection(
267 | pred_mask,post_pred_mask[b,16],right_lung_save_path,b,total_anomly_slice_number)
268 | post_pred_mask[b,16] = left_lung_mask
269 | post_pred_mask[b,15] = right_lung_mask
270 | right_lung_size = np.sum(post_pred_mask[b,15],axis=(0,1,2))
271 | left_lung_size = np.sum(post_pred_mask[b,16],axis=(0,1,2))
272 |
273 | print('lung separation complete')
274 | except IndexError:
275 | left_lung_mask, right_lung_mask = lung_post_process(pred_mask[b])
276 | post_pred_mask[b,16] = left_lung_mask
277 | post_pred_mask[b,15] = right_lung_mask
278 | print("cannot seperate two lungs, writing csv")
279 | with open(log_path + '/' + dataset_id +'/anomaly.csv','a',newline='') as f:
280 | writer = csv.writer(f)
281 | content = case_id
282 | writer.writerow([content])
283 | print('number of anomaly slices found: '+str(total_anomly_slice_number))
284 | elif organ == 17:
285 | continue ## left lung (17) is handled together with the right lung under organ 16
286 | elif organ in [1,2,3,4,5,6,7,8,9,12,13,14,18,19,20,21,22,23,24,25]: ## rest organ index
287 | post_pred_mask[b,organ-1] = extract_topk_largest_candidates(pred_mask[b,organ-1], 1)
288 | # elif organ in [28,29,30,31,32]:
289 | # post_pred_mask[b,organ-1] = extract_topk_largest_candidates(pred_mask[b,organ-1], TUMOR_NUM[ORGAN_NAME[organ-1]], area_least=TUMOR_SIZE[ORGAN_NAME[organ-1]])
290 | elif organ in [26,27]:
291 | organ_mask = merge_and_top_organ(pred_mask[b], TUMOR_ORGAN[ORGAN_NAME[organ-1]])
292 | post_pred_mask[b,organ-1] = organ_region_filter_out(pred_mask[b,organ-1], organ_mask)
293 | # post_pred_mask[b,organ-1] = extract_topk_largest_candidates(post_pred_mask[b,organ-1], TUMOR_NUM[ORGAN_NAME[organ-1]], area_least=TUMOR_SIZE[ORGAN_NAME[organ-1]])
294 | else:
295 | post_pred_mask[b,organ-1] = pred_mask[b,organ-1]
296 | return post_pred_mask
297 |
298 | def lung_overlap_post_process(pred_mask):
299 | new_mask = np.zeros(pred_mask.shape, np.uint8)
300 | new_mask[pred_mask==1] = 1
301 | label_out = cc3d.connected_components(new_mask, connectivity=26)
302 |
303 | areas = {}
304 | for label, extracted in cc3d.each(label_out, binary=True, in_place=True):
305 | areas[label] = fastremap.foreground(extracted)
306 | candidates = sorted(areas.items(), key=lambda item: item[1], reverse=True)
307 | num_candidates = len(candidates)
308 | if num_candidates!=1:
309 | print('start separating two lungs!')
310 | ONE = int(candidates[0][0])
311 | TWO = int(candidates[1][0])
312 |
313 |
314 | print('number of connected components:'+str(len(candidates)))
315 | a1,b1,c1 = np.where(label_out==ONE)
316 | a2,b2,c2 = np.where(label_out==TWO)
317 |
318 | left_lung_mask = np.zeros(label_out.shape)
319 | right_lung_mask = np.zeros(label_out.shape)
320 |
321 | if np.mean(a1) < np.mean(a2):
322 | left_lung_mask[label_out==ONE] = 1
323 | right_lung_mask[label_out==TWO] = 1
324 | else:
325 | right_lung_mask[label_out==ONE] = 1
326 | left_lung_mask[label_out==TWO] = 1
327 | erosion_left_lung_size = np.sum(left_lung_mask,axis=(0,1,2))
328 | erosion_right_lung_size = np.sum(right_lung_mask,axis=(0,1,2))
329 | print('erosion left lung size:'+str(erosion_left_lung_size))
330 | print('erosion right lung size:'+ str(erosion_right_lung_size))
331 | return num_candidates,left_lung_mask, right_lung_mask
332 | else:
333 | print('current iteration cannot separate lungs, erosion iteration + 1')
334 | ONE = int(candidates[0][0])
335 | print('number of connected components:'+str(len(candidates)))
336 | lung_mask = np.zeros(label_out.shape)
337 | lung_mask[label_out == ONE]=1
338 | lung_overlapped_mask_size = np.sum(lung_mask,axis=(0,1,2))
339 | print('lung overlapped mask size:' + str(lung_overlapped_mask_size))
340 |
341 | return num_candidates,lung_mask
342 |
343 | def find_best_iter_and_masks(lung_mask):
344 | iter=1
345 | print('current iteration:' + str(iter))
346 | struct2 = ndimage.generate_binary_structure(3, 3)
347 | erosion_mask= ndimage.binary_erosion(lung_mask, structure=struct2,iterations=iter)
348 | candidates_and_masks = lung_overlap_post_process(erosion_mask)
349 | while candidates_and_masks[0]==1:
350 | iter +=1
351 | print('current iteration:' + str(iter))
352 | erosion_mask= ndimage.binary_erosion(lung_mask, structure=struct2,iterations=iter)
353 | candidates_and_masks = lung_overlap_post_process(erosion_mask)
354 | print('check if components are valid')
355 | left_lung_erosion_mask = candidates_and_masks[1]
356 | right_lung_erosion_mask = candidates_and_masks[2]
357 | left_lung_erosion_mask_size = np.sum(left_lung_erosion_mask,axis = (0,1,2))
358 | right_lung_erosion_mask_size = np.sum(right_lung_erosion_mask,axis = (0,1,2))
359 | while left_lung_erosion_mask_size/right_lung_erosion_mask_size>4 or right_lung_erosion_mask_size/left_lung_erosion_mask_size>4:
360 | print('components still have large difference, erosion iteration + 1')
361 | iter +=1
362 | print('current iteration:' + str(iter))
363 | erosion_mask= ndimage.binary_erosion(lung_mask, structure=struct2,iterations=iter)
364 | candidates_and_masks = lung_overlap_post_process(erosion_mask)
365 | while candidates_and_masks[0]==1:
366 | iter +=1
367 | print('current iteration:' + str(iter))
368 | erosion_mask= ndimage.binary_erosion(lung_mask, structure=struct2,iterations=iter)
369 | candidates_and_masks = lung_overlap_post_process(erosion_mask)
370 | left_lung_erosion_mask = candidates_and_masks[1]
371 | right_lung_erosion_mask = candidates_and_masks[2]
372 | left_lung_erosion_mask_size = np.sum(left_lung_erosion_mask,axis = (0,1,2))
373 | right_lung_erosion_mask_size = np.sum(right_lung_erosion_mask,axis = (0,1,2))
374 | print('erosion done, best iteration: '+str(iter))
375 |
376 |
377 |
378 | print('start dilation')
379 | left_lung_erosion_mask = candidates_and_masks[1]
380 | right_lung_erosion_mask = candidates_and_masks[2]
381 |
382 | erosion_part_mask = lung_mask - left_lung_erosion_mask - right_lung_erosion_mask
383 | left_lung_dist = np.ones(left_lung_erosion_mask.shape)
384 | right_lung_dist = np.ones(right_lung_erosion_mask.shape)
385 | left_lung_dist[left_lung_erosion_mask==1]=0
386 | right_lung_dist[right_lung_erosion_mask==1]=0
387 | left_lung_dist_map = ndimage.distance_transform_edt(left_lung_dist)
388 | right_lung_dist_map = ndimage.distance_transform_edt(right_lung_dist)
389 | left_lung_dist_map[erosion_part_mask==0]=0
390 | right_lung_dist_map[erosion_part_mask==0]=0
391 | left_lung_adding_map = left_lung_dist_map < right_lung_dist_map
392 | right_lung_adding_map = right_lung_dist_map < left_lung_dist_map
393 |
394 | left_lung_erosion_mask[left_lung_adding_map==1]=1
395 | right_lung_erosion_mask[right_lung_adding_map==1]=1
396 |
397 | left_lung_mask = left_lung_erosion_mask
398 | right_lung_mask = right_lung_erosion_mask
399 | # left_lung_mask = ndimage.binary_dilation(left_lung_erosion_mask, structure=struct2,iterations=iter)
400 | # right_lung_mask = ndimage.binary_dilation(right_lung_erosion_mask, structure=struct2,iterations=iter)
401 | print('dilation complete')
402 | left_lung_mask_fill_hole = ndimage.binary_fill_holes(left_lung_mask)
403 | right_lung_mask_fill_hole = ndimage.binary_fill_holes(right_lung_mask)
404 | left_lung_size = np.sum(left_lung_mask_fill_hole,axis=(0,1,2))
405 | right_lung_size = np.sum(right_lung_mask_fill_hole,axis=(0,1,2))
406 | print('new left lung size:'+str(left_lung_size))
407 | print('new right lung size:' + str(right_lung_size))
408 | return left_lung_mask_fill_hole,right_lung_mask_fill_hole
409 |
410 | def anomly_detection(pred_mask, post_pred_mask, save_path, batch, anomly_num):
411 | total_anomly_slice_number = anomly_num
412 | df = get_dataframe(post_pred_mask)
413 | # lung_pred_df = fit_model(model,lung_df)
414 | lung_df = df[df['array_sum']!=0].copy()
415 | lung_df['SMA20'] = lung_df['array_sum'].rolling(20,min_periods=1,center=True).mean()
416 | lung_df['STD20'] = lung_df['array_sum'].rolling(20,min_periods=1,center=True).std()
417 | lung_df['SMA7'] = lung_df['array_sum'].rolling(7,min_periods=1,center=True).mean()
418 | lung_df['upper_bound'] = lung_df['SMA20']+2*lung_df['STD20']
419 | lung_df['Predictions'] = lung_df['array_sum']>lung_df['upper_bound']
420 | lung_df['Predictions'] = lung_df['Predictions'].astype(int)
421 | lung_df.dropna(inplace=True)
422 | anomly_df = lung_df[lung_df['Predictions']==1]
423 | anomly_slice = anomly_df['slice_index'].to_numpy()
424 | anomly_value = anomly_df['array_sum'].to_numpy()
425 | anomly_SMA7 = anomly_df['SMA7'].to_numpy()
426 |
427 | print('decision made')
428 | if len(anomly_df)!=0:
429 | print('anomaly point detected')
430 | print('check if the anomaly points are real')
431 | real_anomly_slice = []
432 | for i in range(len(anomly_df)):
433 | if anomly_value[i] > anomly_SMA7[i]+200:
434 | print('the anomaly point is real')
435 | real_anomly_slice.append(anomly_slice[i])
436 | total_anomly_slice_number+=1
437 |
438 | if len(real_anomly_slice)!=0:
439 |
440 |
441 | plot_anomalies(lung_df,save_dir=save_path)
442 | print('anomaly detection plot created')
443 | for s in real_anomly_slice:
444 | pred_mask[batch,15,:,:,s]=0
445 | pred_mask[batch,16,:,:,s]=0
446 | left_lung_mask, right_lung_mask = lung_post_process(pred_mask[batch])
447 | left_lung_size = np.sum(left_lung_mask,axis=(0,1,2))
448 | right_lung_size = np.sum(right_lung_mask,axis=(0,1,2))
449 | print('new left lung size:'+str(left_lung_size))
450 | print('new right lung size:' + str(right_lung_size))
451 | return left_lung_mask,right_lung_mask,total_anomly_slice_number
452 | else:
453 | print('the anomaly point is not real, start separating the overlap')
454 | left_lung_mask,right_lung_mask = find_best_iter_and_masks(post_pred_mask)
455 | return left_lung_mask,right_lung_mask,total_anomly_slice_number
456 |
457 |
458 | print('overlap detected, start erosion and dilation')
459 | left_lung_mask,right_lung_mask = find_best_iter_and_masks(post_pred_mask)
460 |
461 | return left_lung_mask,right_lung_mask,total_anomly_slice_number
462 |
463 | def get_dataframe(post_pred_mask):
464 | target_array = post_pred_mask
465 | target_array_sum = np.sum(target_array,axis=(0,1))
466 | slice_index = np.arange(target_array.shape[-1])
467 | df = pd.DataFrame({'slice_index':slice_index,'array_sum':target_array_sum})
468 | return df
469 |
470 | def plot_anomalies(df, x='slice_index', y='array_sum',save_dir=None):
471 | # categories holds 0 (normal) or 1 (anomaly) per slice
472 | # and is used to index into the colormap below
473 | categories = df['Predictions'].to_numpy()
474 | colormap = np.array(['g', 'r'])
475 |
476 | f = plt.figure(figsize=(12, 4))
477 | f = plt.plot(df[x],df['SMA20'],'b')
478 | f = plt.plot(df[x],df['upper_bound'],'y')
479 | f = plt.scatter(df[x], df[y], c=colormap[categories],alpha=0.3)
480 | f = plt.xlabel(x)
481 | f = plt.ylabel(y)
482 | plt.legend(['Simple moving average','upper bound','predictions'])
483 | if save_dir is not None:
484 | plt.savefig(save_dir)
485 | plt.clf()
486 |
487 | def merge_and_top_organ(pred_mask, organ_list):
488 | ## merge
489 | out_mask = np.zeros(pred_mask.shape[1:], np.uint8)
490 | for organ in organ_list:
491 | out_mask = np.logical_or(out_mask, pred_mask[organ-1])
492 | ## select the top k components, for the right/left organ case
493 | out_mask = extract_topk_largest_candidates(out_mask, len(organ_list))
494 |
495 | return out_mask
496 |
497 | def organ_region_filter_out(tumor_mask, organ_mask):
498 | ## closing and dilation
499 | organ_mask = ndimage.binary_closing(organ_mask, structure=np.ones((5,5,5)))
500 | organ_mask = ndimage.binary_dilation(organ_mask, structure=np.ones((5,5,5)))
501 | ## filter out
502 | tumor_mask = organ_mask * tumor_mask
503 |
504 | return tumor_mask
505 |
506 |
507 | def PSVein_post_process(PSVein_mask, pancreas_mask):
508 | xy_sum_pancreas = pancreas_mask.sum(axis=0).sum(axis=0)
509 | z_non_zero = np.nonzero(xy_sum_pancreas)
510 | z_value = np.min(z_non_zero) ## the lowest slice of the pancreas
511 | new_PSVein = PSVein_mask.copy()
512 | new_PSVein[:,:,:z_value] = 0
513 | return new_PSVein
514 |
515 | def lung_post_process(pred_mask):
516 | new_mask = np.zeros(pred_mask.shape[1:], np.uint8)
517 | new_mask[pred_mask[15] == 1] = 1
518 | new_mask[pred_mask[16] == 1] = 1
519 | label_out = cc3d.connected_components(new_mask, connectivity=26)
520 |
521 | areas = {}
522 | for label, extracted in cc3d.each(label_out, binary=True, in_place=True):
523 | areas[label] = fastremap.foreground(extracted)
524 | candidates = sorted(areas.items(), key=lambda item: item[1], reverse=True)
525 |
526 | ONE = int(candidates[0][0])
527 | TWO = int(candidates[1][0])
528 |
529 | a1,b1,c1 = np.where(label_out==ONE)
530 | a2,b2,c2 = np.where(label_out==TWO)
531 |
532 | left_lung_mask = np.zeros(label_out.shape)
533 | right_lung_mask = np.zeros(label_out.shape)
534 |
535 | if np.mean(a1) < np.mean(a2):
536 | left_lung_mask[label_out==ONE] = 1
537 | right_lung_mask[label_out==TWO] = 1
538 | else:
539 | right_lung_mask[label_out==ONE] = 1
540 | left_lung_mask[label_out==TWO] = 1
541 |
542 | return left_lung_mask, right_lung_mask
543 |
544 | def extract_topk_largest_candidates(npy_mask, organ_num, area_least=0):
545 | ## npy_mask: w, h, d
546 | ## organ_num: the maximum number of connected components to keep
547 | out_mask = np.zeros(npy_mask.shape, np.uint8)
548 | t_mask = npy_mask.copy()
549 | keep_topk_largest_connected_object(t_mask, organ_num, area_least, out_mask, 1)
550 |
551 | return out_mask
552 |
553 |
554 | def keep_topk_largest_connected_object(npy_mask, k, area_least, out_mask, out_label):
555 | labels_out = cc3d.connected_components(npy_mask, connectivity=26)
556 | areas = {}
557 | for label, extracted in cc3d.each(labels_out, binary=True, in_place=True):
558 | areas[label] = fastremap.foreground(extracted)
559 | candidates = sorted(areas.items(), key=lambda item: item[1], reverse=True)
560 |
561 | for i in range(min(k, len(candidates))):
562 | if candidates[i][1] > area_least:
563 | out_mask[labels_out == int(candidates[i][0])] = out_label
564 |
565 | def threshold_organ(data, organ=None, threshold=None):
566 | ### threshold the sigmoid value to hard label
567 | ## data: sigmoid value
568 | ## threshold_list: a list of organ threshold
569 | B = data.shape[0]
570 | threshold_list = []
571 | if organ:
572 | THRESHOLD_DIC[organ] = threshold
573 | for key, value in THRESHOLD_DIC.items():
574 | threshold_list.append(value)
575 | threshold_list = torch.tensor(threshold_list).repeat(B, 1).reshape(B,len(threshold_list),1,1,1).cuda()
576 | pred_hard = data > threshold_list
577 | return pred_hard
578 |
579 |
580 | def visualize_label(batch, save_dir, input_transform):
581 | ### function: save the prediction result into dir
582 | ## Input
583 | ## batch: the batch dict output from the monai dataloader
584 | ## one_channel_label: the predicted result with the same shape as the label
585 | ## save_dir: the directory for saving
586 | ## input_transform: the dataloader transform
587 | post_transforms = Compose([
588 | Invertd(
589 | keys=["label", 'one_channel_label_v1', 'one_channel_label_v2'], #, 'split_label'
590 | transform=input_transform,
591 | orig_keys="image",
592 | nearest_interp=True,
593 | to_tensor=True,
594 | ),
595 | SaveImaged(keys="label",
596 | meta_keys="label_meta_dict" ,
597 | output_dir=save_dir,
598 | output_postfix="gt",
599 | resample=False
600 | ),
601 | SaveImaged(keys='one_channel_label_v1',
602 | meta_keys="label_meta_dict" ,
603 | output_dir=save_dir,
604 | output_postfix="result_v1",
605 | resample=False
606 | ),
607 | SaveImaged(keys='one_channel_label_v2',
608 | meta_keys="label_meta_dict" ,
609 | output_dir=save_dir,
610 | output_postfix="result_v2",
611 | resample=False
612 | ),
613 | ])
614 |
615 | batch = [post_transforms(i) for i in decollate_batch(batch)]
616 |
617 |
618 | def merge_label(pred_bmask, name):
619 | B, C, W, H, D = pred_bmask.shape
620 | merged_label_v1 = torch.zeros(B,1,W,H,D).cuda()
621 | merged_label_v2 = torch.zeros(B,1,W,H,D).cuda()
622 | for b in range(B):
623 | template_key = get_key(name[b])
624 | transfer_mapping_v1 = MERGE_MAPPING_v1[template_key]
625 | transfer_mapping_v2 = MERGE_MAPPING_v2[template_key]
626 | organ_index = []
627 | for item in transfer_mapping_v1:
628 | src, tgt = item
629 | merged_label_v1[b][0][pred_bmask[b][src-1]==1] = tgt
630 | for item in transfer_mapping_v2:
631 | src, tgt = item
632 | merged_label_v2[b][0][pred_bmask[b][src-1]==1] = tgt
633 | # organ_index.append(src-1)
634 | # organ_index = torch.tensor(organ_index).cuda()
635 | # predicted_prob = pred_sigmoid[b][organ_index]
636 | return merged_label_v1, merged_label_v2
637 |
638 |
639 | def get_key(name):
640 |     ## input: case name / relative path, as listed in dataset/dataset_list/*.txt
641 |     ## output: the corresponding template key (dataset id, plus task id for dataset 10)
642 | dataset_index = int(name[0:2])
643 | if dataset_index == 10:
644 | template_key = name[0:2] + '_' + name[17:19]
645 | else:
646 | template_key = name[0:2]
647 | return template_key
648 |
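# Illustrative sketch (editorial addition): assuming the name strings follow the format of
# dataset/dataset_list/PAOT_test.txt, the first two characters identify the dataset and,
# for dataset 10 (Decathlon), characters 17:19 identify the task, e.g.
#   get_key('01_Multi-Atlas_Labeling/img/img0001.nii.gz')          -> '01'
#   get_key('10_Decathlon/Task03_Liver/imagesTr/liver_10.nii.gz')  -> '10_03'
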
649 | def dice_score(preds, labels, spe_sen=False):
650 | assert preds.shape[0] == labels.shape[0], "predict & target batch size don't match"
651 | preds = torch.where(preds > 0.5, 1., 0.)
652 | predict = preds.contiguous().view(1, -1)
653 | target = labels.contiguous().view(1, -1)
654 |
655 | tp = torch.sum(torch.mul(predict, target))
656 | fn = torch.sum(torch.mul(predict!=1, target))
657 | fp = torch.sum(torch.mul(predict, target!=1))
658 | tn = torch.sum(torch.mul(predict!=1, target!=1))
659 |
660 | den = torch.sum(predict) + torch.sum(target) + 1
661 |
662 | dice = 2 * tp / den
663 | recall = tp/(tp+fn)
664 | precision = tp/(tp+fp)
665 | specificity = tn/(fp + tn)
666 |
667 | if spe_sen:
668 | return dice, recall, precision, specificity
669 | else:
670 | return dice, recall, precision
671 |
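# Illustrative worked example (editorial addition): dice_score binarises preds at 0.5 and
# computes overlap statistics; note the +1 in the denominator acts as smoothing.
def _example_dice_score():
    import torch
    pred  = torch.tensor([[1., 1., 0., 0.]])
    label = torch.tensor([[1., 0., 1., 0.]])
    dice, recall, precision = dice_score(pred, label)
    # tp = 1, |pred| = 2, |label| = 2  ->  dice = 2 * 1 / (2 + 2 + 1) = 0.4
    # recall = tp / (tp + fn) = 0.5, precision = tp / (tp + fp) = 0.5
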
672 | class AverageMeter(object):
673 | """Computes and stores the average and current value"""
674 | def __init__(self, name, fmt=':f'):
675 | self.name = name
676 | self.fmt = fmt
677 | self.reset()
678 |
679 | def reset(self):
680 | self.val = 0
681 | self.avg = 0
682 | self.sum = 0
683 | self.count = 0
684 |
685 | def update(self, val, n=1):
686 | self.val = val
687 | self.sum += val * n
688 | self.count += n
689 | self.avg = self.sum / self.count
690 |
691 | def __str__(self):
692 | fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
693 | return fmtstr.format(**self.__dict__)
694 |
695 | class WindowAverageMeter(object):
696 |     """Computes and stores the current value and a running average over the last k updates"""
697 | def __init__(self, name, k=250, fmt=':f'):
698 | self.name = name
699 | self.fmt = fmt
700 | self.k = k
701 | self.reset()
702 |
703 | def reset(self):
704 | from collections import deque
705 | self.vals = deque(maxlen=self.k)
706 | self.counts = deque(maxlen=self.k)
707 | self.val = 0
708 | self.avg = 0
709 |
710 | def update(self, val, n=1):
711 | self.vals.append(val)
712 | self.counts.append(n)
713 | self.val = val
714 | self.avg = sum([v * c for v, c in zip(self.vals, self.counts)]) / sum(
715 | self.counts)
716 |
717 | def __str__(self):
718 | fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
719 | return fmtstr.format(**self.__dict__)
720 |
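# Illustrative sketch (editorial addition): unlike AverageMeter, which averages over the
# whole run, WindowAverageMeter only averages the k most recent updates.
def _example_window_meter():
    meter = WindowAverageMeter('loss', k=3, fmt=':.2f')
    for v in [1.0, 2.0, 3.0, 10.0]:
        meter.update(v)
    # only the last k = 3 values (2.0, 3.0, 10.0) remain, so meter.avg == 5.0
    assert abs(meter.avg - 5.0) < 1e-6
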
721 | class ProgressMeter(object):
722 | def __init__(self, num_batches, meters, prefix="", tbwriter=None, rank=0):
723 | self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
724 | self.meters = meters
725 | self.prefix = prefix
726 | self.tbwriter = tbwriter
727 | self.rank = rank
728 |
729 | def display(self, batch):
730 | entries = [self.prefix + self.batch_fmtstr.format(batch)]
731 | entries += [str(meter) for meter in self.meters]
732 | if self.rank == 0:
733 | print('\t'.join(entries))
734 | sys.stdout.flush()
735 |
736 | def _get_batch_fmtstr(self, num_batches):
737 |         num_digits = len(str(num_batches))
738 | fmt = '{:' + str(num_digits) + 'd}'
739 | return '[' + fmt + '/' + fmt.format(num_batches) + ']'
740 |
741 | def tbwrite(self, batch):
742 | if self.tbwriter is None:
743 | return
744 | scalar_dict = self.tb_scalar_dict()
745 | for k, v in scalar_dict.items():
746 | self.tbwriter.add_scalar(k, v, batch)
747 |
748 | def tb_scalar_dict(self):
749 | out = {}
750 | for meter in self.meters:
751 | val = meter.avg
752 | tag = meter.name
753 | sclrval = val
754 | out[tag] = sclrval
755 | return out
756 |
757 | class CheckpointManager:
758 | def __init__(self,
759 | modules,
760 | ckpt_dir,
761 | epoch_size,
762 | epochs,
763 | save_freq=None,
764 | save_freq_mints=None):
765 | self.modules = modules
766 | self.ckpt_dir = ckpt_dir
767 | self.epoch_size = epoch_size
768 | self.epochs = epochs
769 | self.save_freq = save_freq
770 | self.save_freq_mints = save_freq_mints
771 | self.retain_num_ckpt = 0
772 |
773 | self.time = time.time()
774 | self.distributed = torch.distributed.is_available(
775 | ) and torch.distributed.is_initialized()
776 | self.world_size = torch.distributed.get_world_size(
777 | ) if self.distributed else 1
778 | self.rank = torch.distributed.get_rank() if self.distributed else 0
779 |
780 |         os.makedirs(self.ckpt_dir, exist_ok=True)
781 |
782 | def resume(self):
783 | ckpt_fname = os.path.join(self.ckpt_dir, 'checkpoint_latest.pth')
784 | start_epoch = 0
785 | if os.path.isfile(ckpt_fname):
786 | checkpoint = torch.load(ckpt_fname, map_location='cpu')
787 |
788 | # Load state dict
789 | for k in self.modules:
790 | self.modules[k].load_state_dict(checkpoint[k])
791 | start_epoch = checkpoint['epoch']
792 | print("=> loaded checkpoint '{}' (epoch {})".format(
793 | ckpt_fname, checkpoint['epoch']))
794 | return start_epoch
795 |
796 | def timed_checkpoint(self, save_dict=None):
797 | t = time.time() - self.time
798 | t_all = [t for _ in range(self.world_size)]
799 | if self.world_size > 1:
800 | torch.distributed.all_gather_object(t_all, t)
801 | if min(t_all) > self.save_freq_mints * 60:
802 | self.time = time.time()
803 | ckpt_fname = os.path.join(self.ckpt_dir, 'checkpoint_latest.pth')
804 |
805 | state = self.create_state_dict(save_dict)
806 | if self.rank == 0:
807 | save_checkpoint(state, is_best=False, filename=ckpt_fname)
808 |
809 | def midway_epoch_checkpoint(self, epoch, batch_i, save_dict=None):
810 | # if ((batch_i + 1) / float(self.epoch_size) % self.save_freq) < (
811 | # batch_i / float(self.epoch_size) % self.save_freq):
812 | if batch_i % self.save_freq==0:
813 | ckpt_fname = os.path.join(self.ckpt_dir,
814 | 'checkpoint_{:010.4f}.pth')
815 | ckpt_fname = ckpt_fname.format(epoch +
816 | batch_i / float(self.epoch_size))
817 |
818 | state = self.create_state_dict(save_dict)
819 | if self.rank == 0:
820 | save_checkpoint(state, is_best=False, filename=ckpt_fname)
821 | ckpt_fname = os.path.join(self.ckpt_dir,
822 | 'checkpoint_latest.pth')
823 | save_checkpoint(state, is_best=False, filename=ckpt_fname)
824 |
825 | def end_epoch_checkpoint(self, epoch, save_dict=None):
826 | if (epoch % self.save_freq
827 | == 0) or self.save_freq < 1 or epoch == self.epochs:
828 | ckpt_fname = os.path.join(self.ckpt_dir, 'checkpoint_{:04d}.pth')
829 | ckpt_fname = ckpt_fname.format(epoch)
830 |
831 | state = self.create_state_dict(save_dict)
832 | if self.rank == 0:
833 | save_checkpoint(state, is_best=False, filename=ckpt_fname)
834 | ckpt_fname = os.path.join(self.ckpt_dir,
835 | 'checkpoint_latest.pth')
836 | save_checkpoint(state, is_best=False, filename=ckpt_fname)
837 |
838 | if self.retain_num_ckpt > 0:
839 | ckpt_fname = os.path.join(self.ckpt_dir,
840 | 'checkpoint_{:04d}.pth')
841 | ckpt_fname = ckpt_fname.format(epoch - self.save_freq *
842 | (self.retain_num_ckpt + 1))
843 | if os.path.exists(ckpt_fname):
844 |                 os.remove(ckpt_fname)
845 |
846 | def create_state_dict(self, save_dict):
847 | state = {k: self.modules[k].state_dict() for k in self.modules}
848 | if save_dict is not None:
849 | state.update(save_dict)
850 | return state
851 |
852 | def checkpoint(self, epoch, batch_i=None, save_dict=None):
853 | if batch_i is None:
854 | self.end_epoch_checkpoint(epoch, save_dict)
855 | else:
856 | if batch_i % 100 == 0:
857 | self.timed_checkpoint(save_dict)
858 | self.midway_epoch_checkpoint(epoch, batch_i, save_dict=save_dict)
859 |
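# Illustrative usage sketch (editorial addition): the manager keys the checkpoint by the
# names given in `modules`, so resume() can restore them symmetrically. The names below
# (model, optimizer, loader) are placeholders, and saving goes through the save_checkpoint
# helper this class relies on (defined earlier in this file).
def _example_checkpoint_manager(model, optimizer, loader, ckpt_dir, epochs=100):
    ckpt = CheckpointManager(
        modules={'state_dict': model, 'optimizer': optimizer},
        ckpt_dir=ckpt_dir,
        epoch_size=len(loader),
        epochs=epochs,
        save_freq=10,            # every 10 epochs (and every 10th batch mid-epoch)
        save_freq_mints=30)      # plus a time-based checkpoint every 30 minutes
    start_epoch = ckpt.resume()  # loads checkpoint_latest.pth if one exists
    for epoch in range(start_epoch, epochs):
        for batch_i, batch in enumerate(loader):
            ...                  # one optimisation step
            ckpt.checkpoint(epoch, batch_i, save_dict={'epoch': epoch})
        ckpt.checkpoint(epoch, save_dict={'epoch': epoch + 1})  # resume() restarts from the next epoch
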
860 | def adjust_learning_rate(optimizer, epoch, args, epoch_size=None):
861 |     """Set the learning rate for the current epoch according to the chosen schedule (constant / cos / triangle)"""
862 | init_lr = args.lr
863 | if args.lr_schedule == 'constant':
864 | cur_lr = init_lr
865 |
866 | elif args.lr_schedule == 'cos':
867 | cur_lr = init_lr * 0.5 * (1. + math.cos(math.pi * epoch / args.max_epoch))
868 |
869 | elif args.lr_schedule == 'triangle':
870 | T = args.lr_schedule_period
871 | t = (epoch * epoch_size) % T
872 | if t < T / 2:
873 | cur_lr = args.lr + t / (T / 2.) * (args.max_lr - args.lr)
874 | else:
875 | cur_lr = args.lr + (T-t) / (T / 2.) * (args.max_lr - args.lr)
876 |
877 | else:
878 | raise ValueError('LR schedule unknown.')
879 |
880 | if args.exit_decay > 0:
881 | start_decay_epoch = args.max_epoch * (1. - args.exit_decay)
882 | if epoch > start_decay_epoch:
883 | mult = 0.5 * (1. + math.cos(math.pi * (epoch - start_decay_epoch) / (args.max_epoch - start_decay_epoch)))
884 | cur_lr = cur_lr * mult
885 |
886 | for param_group in optimizer.param_groups:
887 | if 'fix_lr' in param_group and param_group['fix_lr']:
888 | param_group['lr'] = init_lr
889 | else:
890 | param_group['lr'] = cur_lr
891 | return cur_lr
892 |
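# Illustrative worked example (editorial addition): with the 'triangle' schedule and, say,
# lr = 1e-4, max_lr = 1e-3 and lr_schedule_period T = 1000, the learning rate ramps
# linearly up and back down once every T optimisation steps (epoch * epoch_size):
#   step t = 250 (t < T/2):  cur_lr = 1e-4 + 250 / 500 * (1e-3 - 1e-4) = 5.5e-4
#   step t = 750 (t >= T/2): cur_lr = 1e-4 + (1000 - 750) / 500 * (1e-3 - 1e-4) = 5.5e-4
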
893 | def seconds_to_hms(seconds):
894 | days, remainder = divmod(seconds, 86400) # 1 day = 24 * 60 * 60 seconds
895 | hours, remainder = divmod(remainder, 3600) # 1 hour = 60 * 60 seconds
896 | minutes, seconds = divmod(remainder, 60) # 1 minute = 60 seconds
897 | return int(days), int(hours), int(minutes), int(seconds)
898 |
899 | def calculate_remaining_time(start_time, current_step, total_steps):
900 | elapsed_time = time.time() - start_time
901 | time_per_step = elapsed_time / current_step
902 | remaining_steps = total_steps - current_step
903 | remaining_time = remaining_steps * time_per_step
904 | days, hours, minutes, seconds = seconds_to_hms(remaining_time)
905 | return days, hours, minutes, seconds
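
# Illustrative usage sketch (editorial addition): ETA reporting inside a training loop.
# total_steps and the loop body are placeholders.
def _example_eta_logging(total_steps=1000):
    start = time.time()
    for step in range(1, total_steps + 1):
        ...  # one training step
        if step % 100 == 0:
            d, h, m, s = calculate_remaining_time(start, step, total_steps)
            print('estimated time remaining: {}d {}h {}m {}s'.format(d, h, m, s))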
--------------------------------------------------------------------------------
/dataset/dataset_list/PAOT_test.txt:
--------------------------------------------------------------------------------
1 | 01_Multi-Atlas_Labeling/img/img0001.nii.gz 01_Multi-Atlas_Labeling/label/label0001.nii.gz
2 | 01_Multi-Atlas_Labeling/img/img0003.nii.gz 01_Multi-Atlas_Labeling/label/label0003.nii.gz
3 | 01_Multi-Atlas_Labeling/img/img0004.nii.gz 01_Multi-Atlas_Labeling/label/label0004.nii.gz
4 | 01_Multi-Atlas_Labeling/img/img0006.nii.gz 01_Multi-Atlas_Labeling/label/label0006.nii.gz
5 | 01_Multi-Atlas_Labeling/img/img0008.nii.gz 01_Multi-Atlas_Labeling/label/label0008.nii.gz
6 | 01_Multi-Atlas_Labeling/img/img0030.nii.gz 01_Multi-Atlas_Labeling/label/label0030.nii.gz
7 | 01_Multi-Atlas_Labeling/img/img0033.nii.gz 01_Multi-Atlas_Labeling/label/label0033.nii.gz
8 | 01_Multi-Atlas_Labeling/img/img0037.nii.gz 01_Multi-Atlas_Labeling/label/label0037.nii.gz
9 | 01_Multi-Atlas_Labeling/img/img0068.nii.gz 01_Multi-Atlas_Labeling/label/label0068.nii.gz
10 | 01_Multi-Atlas_Labeling/img/img0074.nii.gz 01_Multi-Atlas_Labeling/label/label0074.nii.gz
11 | 01_Multi-Atlas_Labeling/img/img0076.nii.gz 01_Multi-Atlas_Labeling/label/label0076.nii.gz
12 | 01_Multi-Atlas_Labeling/img/img0078.nii.gz 01_Multi-Atlas_Labeling/label/label0078.nii.gz
13 | 02_TCIA_Pancreas-CT/img/PANCREAS_0005.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0005.nii.gz
14 | 02_TCIA_Pancreas-CT/img/PANCREAS_0006.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0006.nii.gz
15 | 02_TCIA_Pancreas-CT/img/PANCREAS_0009.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0009.nii.gz
16 | 02_TCIA_Pancreas-CT/img/PANCREAS_0012.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0012.nii.gz
17 | 02_TCIA_Pancreas-CT/img/PANCREAS_0014.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0014.nii.gz
18 | 02_TCIA_Pancreas-CT/img/PANCREAS_0017.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0017.nii.gz
19 | 02_TCIA_Pancreas-CT/img/PANCREAS_0020.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0020.nii.gz
20 | 02_TCIA_Pancreas-CT/img/PANCREAS_0032.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0032.nii.gz
21 | 02_TCIA_Pancreas-CT/img/PANCREAS_0040.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0040.nii.gz
22 | 02_TCIA_Pancreas-CT/img/PANCREAS_0042.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0042.nii.gz
23 | 02_TCIA_Pancreas-CT/img/PANCREAS_0045.nii.gz 02_TCIA_Pancreas-CT/multiorgan_label/label0045.nii.gz
24 | 03_CHAOS/ct/img/18_image.nii.gz 03_CHAOS/ct/liver_label/18_segmentation.nii.gz
25 | 03_CHAOS/ct/img/21_image.nii.gz 03_CHAOS/ct/liver_label/21_segmentation.nii.gz
26 | 03_CHAOS/ct/img/23_image.nii.gz 03_CHAOS/ct/liver_label/23_segmentation.nii.gz
27 | 03_CHAOS/ct/img/26_image.nii.gz 03_CHAOS/ct/liver_label/26_segmentation.nii.gz
28 | 04_LiTS/img/liver_0.nii.gz 04_LiTS/label/liver_0.nii.gz
29 | 04_LiTS/img/liver_105.nii.gz 04_LiTS/label/liver_105.nii.gz
30 | 04_LiTS/img/liver_109.nii.gz 04_LiTS/label/liver_109.nii.gz
31 | 04_LiTS/img/liver_11.nii.gz 04_LiTS/label/liver_11.nii.gz
32 | 04_LiTS/img/liver_118.nii.gz 04_LiTS/label/liver_118.nii.gz
33 | 04_LiTS/img/liver_127.nii.gz 04_LiTS/label/liver_127.nii.gz
34 | 04_LiTS/img/liver_130.nii.gz 04_LiTS/label/liver_130.nii.gz
35 | 04_LiTS/img/liver_28.nii.gz 04_LiTS/label/liver_28.nii.gz
36 | 04_LiTS/img/liver_41.nii.gz 04_LiTS/label/liver_41.nii.gz
37 | 04_LiTS/img/liver_53.nii.gz 04_LiTS/label/liver_53.nii.gz
38 | 04_LiTS/img/liver_58.nii.gz 04_LiTS/label/liver_58.nii.gz
39 | 04_LiTS/img/liver_60.nii.gz 04_LiTS/label/liver_60.nii.gz
40 | 04_LiTS/img/liver_62.nii.gz 04_LiTS/label/liver_62.nii.gz
41 | 04_LiTS/img/liver_70.nii.gz 04_LiTS/label/liver_70.nii.gz
42 | 04_LiTS/img/liver_73.nii.gz 04_LiTS/label/liver_73.nii.gz
43 | 04_LiTS/img/liver_79.nii.gz 04_LiTS/label/liver_79.nii.gz
44 | 04_LiTS/img/liver_82.nii.gz 04_LiTS/label/liver_82.nii.gz
45 | 04_LiTS/img/liver_83.nii.gz 04_LiTS/label/liver_83.nii.gz
46 | 04_LiTS/img/liver_85.nii.gz 04_LiTS/label/liver_85.nii.gz
47 | 04_LiTS/img/liver_96.nii.gz 04_LiTS/label/liver_96.nii.gz
48 | 05_KiTS/img/img0010.nii.gz 05_KiTS/label/label0010.nii.gz
49 | 05_KiTS/img/img0032.nii.gz 05_KiTS/label/label0032.nii.gz
50 | 05_KiTS/img/img0033.nii.gz 05_KiTS/label/label0033.nii.gz
51 | 05_KiTS/img/img0034.nii.gz 05_KiTS/label/label0034.nii.gz
52 | 05_KiTS/img/img0037.nii.gz 05_KiTS/label/label0037.nii.gz
53 | 05_KiTS/img/img0043.nii.gz 05_KiTS/label/label0043.nii.gz
54 | 05_KiTS/img/img0045.nii.gz 05_KiTS/label/label0045.nii.gz
55 | 05_KiTS/img/img0050.nii.gz 05_KiTS/label/label0050.nii.gz
56 | 05_KiTS/img/img0051.nii.gz 05_KiTS/label/label0051.nii.gz
57 | 05_KiTS/img/img0053.nii.gz 05_KiTS/label/label0053.nii.gz
58 | 05_KiTS/img/img0056.nii.gz 05_KiTS/label/label0056.nii.gz
59 | 05_KiTS/img/img0059.nii.gz 05_KiTS/label/label0059.nii.gz
60 | 05_KiTS/img/img0063.nii.gz 05_KiTS/label/label0063.nii.gz
61 | 05_KiTS/img/img0071.nii.gz 05_KiTS/label/label0071.nii.gz
62 | 05_KiTS/img/img0078.nii.gz 05_KiTS/label/label0078.nii.gz
63 | 05_KiTS/img/img0087.nii.gz 05_KiTS/label/label0087.nii.gz
64 | 05_KiTS/img/img0088.nii.gz 05_KiTS/label/label0088.nii.gz
65 | 05_KiTS/img/img0090.nii.gz 05_KiTS/label/label0090.nii.gz
66 | 05_KiTS/img/img0092.nii.gz 05_KiTS/label/label0092.nii.gz
67 | 05_KiTS/img/img0093.nii.gz 05_KiTS/label/label0093.nii.gz
68 | 05_KiTS/img/img0101.nii.gz 05_KiTS/label/label0101.nii.gz
69 | 05_KiTS/img/img0110.nii.gz 05_KiTS/label/label0110.nii.gz
70 | 05_KiTS/img/img0112.nii.gz 05_KiTS/label/label0112.nii.gz
71 | 05_KiTS/img/img0114.nii.gz 05_KiTS/label/label0114.nii.gz
72 | 05_KiTS/img/img0116.nii.gz 05_KiTS/label/label0116.nii.gz
73 | 05_KiTS/img/img0117.nii.gz 05_KiTS/label/label0117.nii.gz
74 | 05_KiTS/img/img0120.nii.gz 05_KiTS/label/label0120.nii.gz
75 | 05_KiTS/img/img0124.nii.gz 05_KiTS/label/label0124.nii.gz
76 | 05_KiTS/img/img0125.nii.gz 05_KiTS/label/label0125.nii.gz
77 | 05_KiTS/img/img0131.nii.gz 05_KiTS/label/label0131.nii.gz
78 | 05_KiTS/img/img0142.nii.gz 05_KiTS/label/label0142.nii.gz
79 | 05_KiTS/img/img0148.nii.gz 05_KiTS/label/label0148.nii.gz
80 | 05_KiTS/img/img0150.nii.gz 05_KiTS/label/label0150.nii.gz
81 | 05_KiTS/img/img0156.nii.gz 05_KiTS/label/label0156.nii.gz
82 | 05_KiTS/img/img0160.nii.gz 05_KiTS/label/label0160.nii.gz
83 | 05_KiTS/img/img0162.nii.gz 05_KiTS/label/label0162.nii.gz
84 | 05_KiTS/img/img0164.nii.gz 05_KiTS/label/label0164.nii.gz
85 | 05_KiTS/img/img0177.nii.gz 05_KiTS/label/label0177.nii.gz
86 | 05_KiTS/img/img0183.nii.gz 05_KiTS/label/label0183.nii.gz
87 | 05_KiTS/img/img0191.nii.gz 05_KiTS/label/label0191.nii.gz
88 | 05_KiTS/img/img0194.nii.gz 05_KiTS/label/label0194.nii.gz
89 | 05_KiTS/img/img0195.nii.gz 05_KiTS/label/label0195.nii.gz
90 | 05_KiTS/img/img0199.nii.gz 05_KiTS/label/label0199.nii.gz
91 | 05_KiTS/img/img0202.nii.gz 05_KiTS/label/label0202.nii.gz
92 | 05_KiTS/img/img0207.nii.gz 05_KiTS/label/label0207.nii.gz
93 | 05_KiTS/img/img0208.nii.gz 05_KiTS/label/label0208.nii.gz
94 | 05_KiTS/img/img0210.nii.gz 05_KiTS/label/label0210.nii.gz
95 | 05_KiTS/img/img0214.nii.gz 05_KiTS/label/label0214.nii.gz
96 | 05_KiTS/img/img0217.nii.gz 05_KiTS/label/label0217.nii.gz
97 | 05_KiTS/img/img0220.nii.gz 05_KiTS/label/label0220.nii.gz
98 | 05_KiTS/img/img0225.nii.gz 05_KiTS/label/label0225.nii.gz
99 | 05_KiTS/img/img0228.nii.gz 05_KiTS/label/label0228.nii.gz
100 | 05_KiTS/img/img0229.nii.gz 05_KiTS/label/label0229.nii.gz
101 | 05_KiTS/img/img0232.nii.gz 05_KiTS/label/label0232.nii.gz
102 | 05_KiTS/img/img0243.nii.gz 05_KiTS/label/label0243.nii.gz
103 | 05_KiTS/img/img0245.nii.gz 05_KiTS/label/label0245.nii.gz
104 | 05_KiTS/img/img0251.nii.gz 05_KiTS/label/label0251.nii.gz
105 | 05_KiTS/img/img0252.nii.gz 05_KiTS/label/label0252.nii.gz
106 | 05_KiTS/img/img0253.nii.gz 05_KiTS/label/label0253.nii.gz
107 | 05_KiTS/img/img0256.nii.gz 05_KiTS/label/label0256.nii.gz
108 | 05_KiTS/img/img0257.nii.gz 05_KiTS/label/label0257.nii.gz
109 | 05_KiTS/img/img0263.nii.gz 05_KiTS/label/label0263.nii.gz
110 | 05_KiTS/img/img0275.nii.gz 05_KiTS/label/label0275.nii.gz
111 | 05_KiTS/img/img0281.nii.gz 05_KiTS/label/label0281.nii.gz
112 | 05_KiTS/img/img0285.nii.gz 05_KiTS/label/label0285.nii.gz
113 | 05_KiTS/img/img0299.nii.gz 05_KiTS/label/label0299.nii.gz
114 | 07_WORD/img/word_0004.nii.gz 07_WORD/label/word_0004.nii.gz
115 | 07_WORD/img/word_0010.nii.gz 07_WORD/label/word_0010.nii.gz
116 | 07_WORD/img/word_0011.nii.gz 07_WORD/label/word_0011.nii.gz
117 | 07_WORD/img/word_0028.nii.gz 07_WORD/label/word_0028.nii.gz
118 | 07_WORD/img/word_0036.nii.gz 07_WORD/label/word_0036.nii.gz
119 | 07_WORD/img/word_0038.nii.gz 07_WORD/label/word_0038.nii.gz
120 | 07_WORD/img/word_0041.nii.gz 07_WORD/label/word_0041.nii.gz
121 | 07_WORD/img/word_0049.nii.gz 07_WORD/label/word_0049.nii.gz
122 | 07_WORD/img/word_0056.nii.gz 07_WORD/label/word_0056.nii.gz
123 | 07_WORD/img/word_0062.nii.gz 07_WORD/label/word_0062.nii.gz
124 | 07_WORD/img/word_0064.nii.gz 07_WORD/label/word_0064.nii.gz
125 | 07_WORD/img/word_0068.nii.gz 07_WORD/label/word_0068.nii.gz
126 | 07_WORD/img/word_0070.nii.gz 07_WORD/label/word_0070.nii.gz
127 | 07_WORD/img/word_0078.nii.gz 07_WORD/label/word_0078.nii.gz
128 | 07_WORD/img/word_0079.nii.gz 07_WORD/label/word_0079.nii.gz
129 | 07_WORD/img/word_0080.nii.gz 07_WORD/label/word_0080.nii.gz
130 | 07_WORD/img/word_0084.nii.gz 07_WORD/label/word_0084.nii.gz
131 | 07_WORD/img/word_0091.nii.gz 07_WORD/label/word_0091.nii.gz
132 | 07_WORD/img/word_0096.nii.gz 07_WORD/label/word_0096.nii.gz
133 | 07_WORD/img/word_0111.nii.gz 07_WORD/label/word_0111.nii.gz
134 | 07_WORD/img/word_0116.nii.gz 07_WORD/label/word_0116.nii.gz
135 | 07_WORD/img/word_0117.nii.gz 07_WORD/label/word_0117.nii.gz
136 | 07_WORD/img/word_0121.nii.gz 07_WORD/label/word_0121.nii.gz
137 | 07_WORD/img/word_0138.nii.gz 07_WORD/label/word_0138.nii.gz
138 | 07_WORD/img/word_0142.nii.gz 07_WORD/label/word_0142.nii.gz
139 | 07_WORD/img/word_0147.nii.gz 07_WORD/label/word_0147.nii.gz
140 | 08_AbdomenCT-1K/img/Case_00006_0000.nii.gz 08_AbdomenCT-1K/label/Case_00006.nii.gz
141 | 08_AbdomenCT-1K/img/Case_00022_0000.nii.gz 08_AbdomenCT-1K/label/Case_00022.nii.gz
142 | 08_AbdomenCT-1K/img/Case_00031_0000.nii.gz 08_AbdomenCT-1K/label/Case_00031.nii.gz
143 | 08_AbdomenCT-1K/img/Case_00037_0000.nii.gz 08_AbdomenCT-1K/label/Case_00037.nii.gz
144 | 08_AbdomenCT-1K/img/Case_00040_0000.nii.gz 08_AbdomenCT-1K/label/Case_00040.nii.gz
145 | 08_AbdomenCT-1K/img/Case_00044_0000.nii.gz 08_AbdomenCT-1K/label/Case_00044.nii.gz
146 | 08_AbdomenCT-1K/img/Case_00050_0000.nii.gz 08_AbdomenCT-1K/label/Case_00050.nii.gz
147 | 08_AbdomenCT-1K/img/Case_00053_0000.nii.gz 08_AbdomenCT-1K/label/Case_00053.nii.gz
148 | 08_AbdomenCT-1K/img/Case_00079_0000.nii.gz 08_AbdomenCT-1K/label/Case_00079.nii.gz
149 | 08_AbdomenCT-1K/img/Case_00086_0000.nii.gz 08_AbdomenCT-1K/label/Case_00086.nii.gz
150 | 08_AbdomenCT-1K/img/Case_00087_0000.nii.gz 08_AbdomenCT-1K/label/Case_00087.nii.gz
151 | 08_AbdomenCT-1K/img/Case_00093_0000.nii.gz 08_AbdomenCT-1K/label/Case_00093.nii.gz
152 | 08_AbdomenCT-1K/img/Case_00106_0000.nii.gz 08_AbdomenCT-1K/label/Case_00106.nii.gz
153 | 08_AbdomenCT-1K/img/Case_00107_0000.nii.gz 08_AbdomenCT-1K/label/Case_00107.nii.gz
154 | 08_AbdomenCT-1K/img/Case_00114_0000.nii.gz 08_AbdomenCT-1K/label/Case_00114.nii.gz
155 | 08_AbdomenCT-1K/img/Case_00118_0000.nii.gz 08_AbdomenCT-1K/label/Case_00118.nii.gz
156 | 08_AbdomenCT-1K/img/Case_00122_0000.nii.gz 08_AbdomenCT-1K/label/Case_00122.nii.gz
157 | 08_AbdomenCT-1K/img/Case_00123_0000.nii.gz 08_AbdomenCT-1K/label/Case_00123.nii.gz
158 | 08_AbdomenCT-1K/img/Case_00125_0000.nii.gz 08_AbdomenCT-1K/label/Case_00125.nii.gz
159 | 08_AbdomenCT-1K/img/Case_00132_0000.nii.gz 08_AbdomenCT-1K/label/Case_00132.nii.gz
160 | 08_AbdomenCT-1K/img/Case_00140_0000.nii.gz 08_AbdomenCT-1K/label/Case_00140.nii.gz
161 | 08_AbdomenCT-1K/img/Case_00144_0000.nii.gz 08_AbdomenCT-1K/label/Case_00144.nii.gz
162 | 08_AbdomenCT-1K/img/Case_00149_0000.nii.gz 08_AbdomenCT-1K/label/Case_00149.nii.gz
163 | 08_AbdomenCT-1K/img/Case_00163_0000.nii.gz 08_AbdomenCT-1K/label/Case_00163.nii.gz
164 | 08_AbdomenCT-1K/img/Case_00164_0000.nii.gz 08_AbdomenCT-1K/label/Case_00164.nii.gz
165 | 08_AbdomenCT-1K/img/Case_00166_0000.nii.gz 08_AbdomenCT-1K/label/Case_00166.nii.gz
166 | 08_AbdomenCT-1K/img/Case_00171_0000.nii.gz 08_AbdomenCT-1K/label/Case_00171.nii.gz
167 | 08_AbdomenCT-1K/img/Case_00194_0000.nii.gz 08_AbdomenCT-1K/label/Case_00194.nii.gz
168 | 08_AbdomenCT-1K/img/Case_00198_0000.nii.gz 08_AbdomenCT-1K/label/Case_00198.nii.gz
169 | 08_AbdomenCT-1K/img/Case_00207_0000.nii.gz 08_AbdomenCT-1K/label/Case_00207.nii.gz
170 | 08_AbdomenCT-1K/img/Case_00208_0000.nii.gz 08_AbdomenCT-1K/label/Case_00208.nii.gz
171 | 08_AbdomenCT-1K/img/Case_00220_0000.nii.gz 08_AbdomenCT-1K/label/Case_00220.nii.gz
172 | 08_AbdomenCT-1K/img/Case_00221_0000.nii.gz 08_AbdomenCT-1K/label/Case_00221.nii.gz
173 | 08_AbdomenCT-1K/img/Case_00223_0000.nii.gz 08_AbdomenCT-1K/label/Case_00223.nii.gz
174 | 08_AbdomenCT-1K/img/Case_00225_0000.nii.gz 08_AbdomenCT-1K/label/Case_00225.nii.gz
175 | 08_AbdomenCT-1K/img/Case_00230_0000.nii.gz 08_AbdomenCT-1K/label/Case_00230.nii.gz
176 | 08_AbdomenCT-1K/img/Case_00231_0000.nii.gz 08_AbdomenCT-1K/label/Case_00231.nii.gz
177 | 08_AbdomenCT-1K/img/Case_00234_0000.nii.gz 08_AbdomenCT-1K/label/Case_00234.nii.gz
178 | 08_AbdomenCT-1K/img/Case_00243_0000.nii.gz 08_AbdomenCT-1K/label/Case_00243.nii.gz
179 | 08_AbdomenCT-1K/img/Case_00244_0000.nii.gz 08_AbdomenCT-1K/label/Case_00244.nii.gz
180 | 08_AbdomenCT-1K/img/Case_00248_0000.nii.gz 08_AbdomenCT-1K/label/Case_00248.nii.gz
181 | 08_AbdomenCT-1K/img/Case_00256_0000.nii.gz 08_AbdomenCT-1K/label/Case_00256.nii.gz
182 | 08_AbdomenCT-1K/img/Case_00262_0000.nii.gz 08_AbdomenCT-1K/label/Case_00262.nii.gz
183 | 08_AbdomenCT-1K/img/Case_00267_0000.nii.gz 08_AbdomenCT-1K/label/Case_00267.nii.gz
184 | 08_AbdomenCT-1K/img/Case_00268_0000.nii.gz 08_AbdomenCT-1K/label/Case_00268.nii.gz
185 | 08_AbdomenCT-1K/img/Case_00272_0000.nii.gz 08_AbdomenCT-1K/label/Case_00272.nii.gz
186 | 08_AbdomenCT-1K/img/Case_00275_0000.nii.gz 08_AbdomenCT-1K/label/Case_00275.nii.gz
187 | 08_AbdomenCT-1K/img/Case_00302_0000.nii.gz 08_AbdomenCT-1K/label/Case_00302.nii.gz
188 | 08_AbdomenCT-1K/img/Case_00306_0000.nii.gz 08_AbdomenCT-1K/label/Case_00306.nii.gz
189 | 08_AbdomenCT-1K/img/Case_00317_0000.nii.gz 08_AbdomenCT-1K/label/Case_00317.nii.gz
190 | 08_AbdomenCT-1K/img/Case_00337_0000.nii.gz 08_AbdomenCT-1K/label/Case_00337.nii.gz
191 | 08_AbdomenCT-1K/img/Case_00355_0000.nii.gz 08_AbdomenCT-1K/label/Case_00355.nii.gz
192 | 08_AbdomenCT-1K/img/Case_00360_0000.nii.gz 08_AbdomenCT-1K/label/Case_00360.nii.gz
193 | 08_AbdomenCT-1K/img/Case_00362_0000.nii.gz 08_AbdomenCT-1K/label/Case_00362.nii.gz
194 | 08_AbdomenCT-1K/img/Case_00368_0000.nii.gz 08_AbdomenCT-1K/label/Case_00368.nii.gz
195 | 08_AbdomenCT-1K/img/Case_00377_0000.nii.gz 08_AbdomenCT-1K/label/Case_00377.nii.gz
196 | 08_AbdomenCT-1K/img/Case_00395_0000.nii.gz 08_AbdomenCT-1K/label/Case_00395.nii.gz
197 | 08_AbdomenCT-1K/img/Case_00400_0000.nii.gz 08_AbdomenCT-1K/label/Case_00400.nii.gz
198 | 08_AbdomenCT-1K/img/Case_00407_0000.nii.gz 08_AbdomenCT-1K/label/Case_00407.nii.gz
199 | 08_AbdomenCT-1K/img/Case_00409_0000.nii.gz 08_AbdomenCT-1K/label/Case_00409.nii.gz
200 | 08_AbdomenCT-1K/img/Case_00426_0000.nii.gz 08_AbdomenCT-1K/label/Case_00426.nii.gz
201 | 08_AbdomenCT-1K/img/Case_00431_0000.nii.gz 08_AbdomenCT-1K/label/Case_00431.nii.gz
202 | 08_AbdomenCT-1K/img/Case_00432_0000.nii.gz 08_AbdomenCT-1K/label/Case_00432.nii.gz
203 | 08_AbdomenCT-1K/img/Case_00433_0000.nii.gz 08_AbdomenCT-1K/label/Case_00433.nii.gz
204 | 08_AbdomenCT-1K/img/Case_00435_0000.nii.gz 08_AbdomenCT-1K/label/Case_00435.nii.gz
205 | 08_AbdomenCT-1K/img/Case_00438_0000.nii.gz 08_AbdomenCT-1K/label/Case_00438.nii.gz
206 | 08_AbdomenCT-1K/img/Case_00442_0000.nii.gz 08_AbdomenCT-1K/label/Case_00442.nii.gz
207 | 08_AbdomenCT-1K/img/Case_00443_0000.nii.gz 08_AbdomenCT-1K/label/Case_00443.nii.gz
208 | 08_AbdomenCT-1K/img/Case_00460_0000.nii.gz 08_AbdomenCT-1K/label/Case_00460.nii.gz
209 | 08_AbdomenCT-1K/img/Case_00461_0000.nii.gz 08_AbdomenCT-1K/label/Case_00461.nii.gz
210 | 08_AbdomenCT-1K/img/Case_00463_0000.nii.gz 08_AbdomenCT-1K/label/Case_00463.nii.gz
211 | 08_AbdomenCT-1K/img/Case_00471_0000.nii.gz 08_AbdomenCT-1K/label/Case_00471.nii.gz
212 | 08_AbdomenCT-1K/img/Case_00477_0000.nii.gz 08_AbdomenCT-1K/label/Case_00477.nii.gz
213 | 08_AbdomenCT-1K/img/Case_00482_0000.nii.gz 08_AbdomenCT-1K/label/Case_00482.nii.gz
214 | 08_AbdomenCT-1K/img/Case_00489_0000.nii.gz 08_AbdomenCT-1K/label/Case_00489.nii.gz
215 | 08_AbdomenCT-1K/img/Case_00494_0000.nii.gz 08_AbdomenCT-1K/label/Case_00494.nii.gz
216 | 08_AbdomenCT-1K/img/Case_00499_0000.nii.gz 08_AbdomenCT-1K/label/Case_00499.nii.gz
217 | 08_AbdomenCT-1K/img/Case_00501_0000.nii.gz 08_AbdomenCT-1K/label/Case_00501.nii.gz
218 | 08_AbdomenCT-1K/img/Case_00508_0000.nii.gz 08_AbdomenCT-1K/label/Case_00508.nii.gz
219 | 08_AbdomenCT-1K/img/Case_00513_0000.nii.gz 08_AbdomenCT-1K/label/Case_00513.nii.gz
220 | 08_AbdomenCT-1K/img/Case_00516_0000.nii.gz 08_AbdomenCT-1K/label/Case_00516.nii.gz
221 | 08_AbdomenCT-1K/img/Case_00527_0000.nii.gz 08_AbdomenCT-1K/label/Case_00527.nii.gz
222 | 08_AbdomenCT-1K/img/Case_00532_0000.nii.gz 08_AbdomenCT-1K/label/Case_00532.nii.gz
223 | 08_AbdomenCT-1K/img/Case_00533_0000.nii.gz 08_AbdomenCT-1K/label/Case_00533.nii.gz
224 | 08_AbdomenCT-1K/img/Case_00544_0000.nii.gz 08_AbdomenCT-1K/label/Case_00544.nii.gz
225 | 08_AbdomenCT-1K/img/Case_00547_0000.nii.gz 08_AbdomenCT-1K/label/Case_00547.nii.gz
226 | 08_AbdomenCT-1K/img/Case_00555_0000.nii.gz 08_AbdomenCT-1K/label/Case_00555.nii.gz
227 | 08_AbdomenCT-1K/img/Case_00557_0000.nii.gz 08_AbdomenCT-1K/label/Case_00557.nii.gz
228 | 08_AbdomenCT-1K/img/Case_00563_0000.nii.gz 08_AbdomenCT-1K/label/Case_00563.nii.gz
229 | 08_AbdomenCT-1K/img/Case_00564_0000.nii.gz 08_AbdomenCT-1K/label/Case_00564.nii.gz
230 | 08_AbdomenCT-1K/img/Case_00568_0000.nii.gz 08_AbdomenCT-1K/label/Case_00568.nii.gz
231 | 08_AbdomenCT-1K/img/Case_00570_0000.nii.gz 08_AbdomenCT-1K/label/Case_00570.nii.gz
232 | 08_AbdomenCT-1K/img/Case_00580_0000.nii.gz 08_AbdomenCT-1K/label/Case_00580.nii.gz
233 | 08_AbdomenCT-1K/img/Case_00583_0000.nii.gz 08_AbdomenCT-1K/label/Case_00583.nii.gz
234 | 08_AbdomenCT-1K/img/Case_00590_0000.nii.gz 08_AbdomenCT-1K/label/Case_00590.nii.gz
235 | 08_AbdomenCT-1K/img/Case_00598_0000.nii.gz 08_AbdomenCT-1K/label/Case_00598.nii.gz
236 | 08_AbdomenCT-1K/img/Case_00600_0000.nii.gz 08_AbdomenCT-1K/label/Case_00600.nii.gz
237 | 08_AbdomenCT-1K/img/Case_00601_0000.nii.gz 08_AbdomenCT-1K/label/Case_00601.nii.gz
238 | 08_AbdomenCT-1K/img/Case_00612_0000.nii.gz 08_AbdomenCT-1K/label/Case_00612.nii.gz
239 | 08_AbdomenCT-1K/img/Case_00626_0000.nii.gz 08_AbdomenCT-1K/label/Case_00626.nii.gz
240 | 08_AbdomenCT-1K/img/Case_00627_0000.nii.gz 08_AbdomenCT-1K/label/Case_00627.nii.gz
241 | 08_AbdomenCT-1K/img/Case_00628_0000.nii.gz 08_AbdomenCT-1K/label/Case_00628.nii.gz
242 | 08_AbdomenCT-1K/img/Case_00639_0000.nii.gz 08_AbdomenCT-1K/label/Case_00639.nii.gz
243 | 08_AbdomenCT-1K/img/Case_00642_0000.nii.gz 08_AbdomenCT-1K/label/Case_00642.nii.gz
244 | 08_AbdomenCT-1K/img/Case_00649_0000.nii.gz 08_AbdomenCT-1K/label/Case_00649.nii.gz
245 | 08_AbdomenCT-1K/img/Case_00652_0000.nii.gz 08_AbdomenCT-1K/label/Case_00652.nii.gz
246 | 08_AbdomenCT-1K/img/Case_00656_0000.nii.gz 08_AbdomenCT-1K/label/Case_00656.nii.gz
247 | 08_AbdomenCT-1K/img/Case_00657_0000.nii.gz 08_AbdomenCT-1K/label/Case_00657.nii.gz
248 | 08_AbdomenCT-1K/img/Case_00661_0000.nii.gz 08_AbdomenCT-1K/label/Case_00661.nii.gz
249 | 08_AbdomenCT-1K/img/Case_00677_0000.nii.gz 08_AbdomenCT-1K/label/Case_00677.nii.gz
250 | 08_AbdomenCT-1K/img/Case_00678_0000.nii.gz 08_AbdomenCT-1K/label/Case_00678.nii.gz
251 | 08_AbdomenCT-1K/img/Case_00684_0000.nii.gz 08_AbdomenCT-1K/label/Case_00684.nii.gz
252 | 08_AbdomenCT-1K/img/Case_00695_0000.nii.gz 08_AbdomenCT-1K/label/Case_00695.nii.gz
253 | 08_AbdomenCT-1K/img/Case_00696_0000.nii.gz 08_AbdomenCT-1K/label/Case_00696.nii.gz
254 | 08_AbdomenCT-1K/img/Case_00702_0000.nii.gz 08_AbdomenCT-1K/label/Case_00702.nii.gz
255 | 08_AbdomenCT-1K/img/Case_00708_0000.nii.gz 08_AbdomenCT-1K/label/Case_00708.nii.gz
256 | 08_AbdomenCT-1K/img/Case_00716_0000.nii.gz 08_AbdomenCT-1K/label/Case_00716.nii.gz
257 | 08_AbdomenCT-1K/img/Case_00729_0000.nii.gz 08_AbdomenCT-1K/label/Case_00729.nii.gz
258 | 08_AbdomenCT-1K/img/Case_00734_0000.nii.gz 08_AbdomenCT-1K/label/Case_00734.nii.gz
259 | 08_AbdomenCT-1K/img/Case_00744_0000.nii.gz 08_AbdomenCT-1K/label/Case_00744.nii.gz
260 | 08_AbdomenCT-1K/img/Case_00751_0000.nii.gz 08_AbdomenCT-1K/label/Case_00751.nii.gz
261 | 08_AbdomenCT-1K/img/Case_00757_0000.nii.gz 08_AbdomenCT-1K/label/Case_00757.nii.gz
262 | 08_AbdomenCT-1K/img/Case_00759_0000.nii.gz 08_AbdomenCT-1K/label/Case_00759.nii.gz
263 | 08_AbdomenCT-1K/img/Case_00771_0000.nii.gz 08_AbdomenCT-1K/label/Case_00771.nii.gz
264 | 08_AbdomenCT-1K/img/Case_00776_0000.nii.gz 08_AbdomenCT-1K/label/Case_00776.nii.gz
265 | 08_AbdomenCT-1K/img/Case_00780_0000.nii.gz 08_AbdomenCT-1K/label/Case_00780.nii.gz
266 | 08_AbdomenCT-1K/img/Case_00788_0000.nii.gz 08_AbdomenCT-1K/label/Case_00788.nii.gz
267 | 08_AbdomenCT-1K/img/Case_00794_0000.nii.gz 08_AbdomenCT-1K/label/Case_00794.nii.gz
268 | 08_AbdomenCT-1K/img/Case_00803_0000.nii.gz 08_AbdomenCT-1K/label/Case_00803.nii.gz
269 | 08_AbdomenCT-1K/img/Case_00806_0000.nii.gz 08_AbdomenCT-1K/label/Case_00806.nii.gz
270 | 08_AbdomenCT-1K/img/Case_00811_0000.nii.gz 08_AbdomenCT-1K/label/Case_00811.nii.gz
271 | 08_AbdomenCT-1K/img/Case_00819_0000.nii.gz 08_AbdomenCT-1K/label/Case_00819.nii.gz
272 | 08_AbdomenCT-1K/img/Case_00822_0000.nii.gz 08_AbdomenCT-1K/label/Case_00822.nii.gz
273 | 08_AbdomenCT-1K/img/Case_00827_0000.nii.gz 08_AbdomenCT-1K/label/Case_00827.nii.gz
274 | 08_AbdomenCT-1K/img/Case_00833_0000.nii.gz 08_AbdomenCT-1K/label/Case_00833.nii.gz
275 | 08_AbdomenCT-1K/img/Case_00839_0000.nii.gz 08_AbdomenCT-1K/label/Case_00839.nii.gz
276 | 08_AbdomenCT-1K/img/Case_00841_0000.nii.gz 08_AbdomenCT-1K/label/Case_00841.nii.gz
277 | 08_AbdomenCT-1K/img/Case_00861_0000.nii.gz 08_AbdomenCT-1K/label/Case_00861.nii.gz
278 | 08_AbdomenCT-1K/img/Case_00864_0000.nii.gz 08_AbdomenCT-1K/label/Case_00864.nii.gz
279 | 08_AbdomenCT-1K/img/Case_00866_0000.nii.gz 08_AbdomenCT-1K/label/Case_00866.nii.gz
280 | 08_AbdomenCT-1K/img/Case_00871_0000.nii.gz 08_AbdomenCT-1K/label/Case_00871.nii.gz
281 | 08_AbdomenCT-1K/img/Case_00877_0000.nii.gz 08_AbdomenCT-1K/label/Case_00877.nii.gz
282 | 08_AbdomenCT-1K/img/Case_00878_0000.nii.gz 08_AbdomenCT-1K/label/Case_00878.nii.gz
283 | 08_AbdomenCT-1K/img/Case_00880_0000.nii.gz 08_AbdomenCT-1K/label/Case_00880.nii.gz
284 | 08_AbdomenCT-1K/img/Case_00883_0000.nii.gz 08_AbdomenCT-1K/label/Case_00883.nii.gz
285 | 08_AbdomenCT-1K/img/Case_00898_0000.nii.gz 08_AbdomenCT-1K/label/Case_00898.nii.gz
286 | 08_AbdomenCT-1K/img/Case_00899_0000.nii.gz 08_AbdomenCT-1K/label/Case_00899.nii.gz
287 | 08_AbdomenCT-1K/img/Case_00901_0000.nii.gz 08_AbdomenCT-1K/label/Case_00901.nii.gz
288 | 08_AbdomenCT-1K/img/Case_00909_0000.nii.gz 08_AbdomenCT-1K/label/Case_00909.nii.gz
289 | 08_AbdomenCT-1K/img/Case_00914_0000.nii.gz 08_AbdomenCT-1K/label/Case_00914.nii.gz
290 | 08_AbdomenCT-1K/img/Case_00916_0000.nii.gz 08_AbdomenCT-1K/label/Case_00916.nii.gz
291 | 08_AbdomenCT-1K/img/Case_00917_0000.nii.gz 08_AbdomenCT-1K/label/Case_00917.nii.gz
292 | 08_AbdomenCT-1K/img/Case_00934_0000.nii.gz 08_AbdomenCT-1K/label/Case_00934.nii.gz
293 | 08_AbdomenCT-1K/img/Case_00948_0000.nii.gz 08_AbdomenCT-1K/label/Case_00948.nii.gz
294 | 08_AbdomenCT-1K/img/Case_00950_0000.nii.gz 08_AbdomenCT-1K/label/Case_00950.nii.gz
295 | 08_AbdomenCT-1K/img/Case_00956_0000.nii.gz 08_AbdomenCT-1K/label/Case_00956.nii.gz
296 | 08_AbdomenCT-1K/img/Case_00957_0000.nii.gz 08_AbdomenCT-1K/label/Case_00957.nii.gz
297 | 08_AbdomenCT-1K/img/Case_00969_0000.nii.gz 08_AbdomenCT-1K/label/Case_00969.nii.gz
298 | 08_AbdomenCT-1K/img/Case_00974_0000.nii.gz 08_AbdomenCT-1K/label/Case_00974.nii.gz
299 | 08_AbdomenCT-1K/img/Case_00977_0000.nii.gz 08_AbdomenCT-1K/label/Case_00977.nii.gz
300 | 08_AbdomenCT-1K/img/Case_00985_0000.nii.gz 08_AbdomenCT-1K/label/Case_00985.nii.gz
301 | 08_AbdomenCT-1K/img/Case_00990_0000.nii.gz 08_AbdomenCT-1K/label/Case_00990.nii.gz
302 | 08_AbdomenCT-1K/img/Case_00996_0000.nii.gz 08_AbdomenCT-1K/label/Case_00996.nii.gz
303 | 08_AbdomenCT-1K/img/Case_01006_0000.nii.gz 08_AbdomenCT-1K/label/Case_01006.nii.gz
304 | 08_AbdomenCT-1K/img/Case_01010_0000.nii.gz 08_AbdomenCT-1K/label/Case_01010.nii.gz
305 | 08_AbdomenCT-1K/img/Case_01011_0000.nii.gz 08_AbdomenCT-1K/label/Case_01011.nii.gz
306 | 08_AbdomenCT-1K/img/Case_01014_0000.nii.gz 08_AbdomenCT-1K/label/Case_01014.nii.gz
307 | 08_AbdomenCT-1K/img/Case_01018_0000.nii.gz 08_AbdomenCT-1K/label/Case_01018.nii.gz
308 | 08_AbdomenCT-1K/img/Case_01022_0000.nii.gz 08_AbdomenCT-1K/label/Case_01022.nii.gz
309 | 08_AbdomenCT-1K/img/Case_01028_0000.nii.gz 08_AbdomenCT-1K/label/Case_01028.nii.gz
310 | 08_AbdomenCT-1K/img/Case_01030_0000.nii.gz 08_AbdomenCT-1K/label/Case_01030.nii.gz
311 | 08_AbdomenCT-1K/img/Case_01034_0000.nii.gz 08_AbdomenCT-1K/label/Case_01034.nii.gz
312 | 08_AbdomenCT-1K/img/Case_01035_0000.nii.gz 08_AbdomenCT-1K/label/Case_01035.nii.gz
313 | 08_AbdomenCT-1K/img/Case_01036_0000.nii.gz 08_AbdomenCT-1K/label/Case_01036.nii.gz
314 | 08_AbdomenCT-1K/img/Case_01042_0000.nii.gz 08_AbdomenCT-1K/label/Case_01042.nii.gz
315 | 08_AbdomenCT-1K/img/Case_01043_0000.nii.gz 08_AbdomenCT-1K/label/Case_01043.nii.gz
316 | 08_AbdomenCT-1K/img/Case_01047_0000.nii.gz 08_AbdomenCT-1K/label/Case_01047.nii.gz
317 | 08_AbdomenCT-1K/img/Case_01048_0000.nii.gz 08_AbdomenCT-1K/label/Case_01048.nii.gz
318 | 08_AbdomenCT-1K/img/Case_01049_0000.nii.gz 08_AbdomenCT-1K/label/Case_01049.nii.gz
319 | 09_AMOS/img/amos_0004.nii.gz 09_AMOS/label/amos_0004.nii.gz
320 | 09_AMOS/img/amos_0007.nii.gz 09_AMOS/label/amos_0007.nii.gz
321 | 09_AMOS/img/amos_0019.nii.gz 09_AMOS/label/amos_0019.nii.gz
322 | 09_AMOS/img/amos_0024.nii.gz 09_AMOS/label/amos_0024.nii.gz
323 | 09_AMOS/img/amos_0030.nii.gz 09_AMOS/label/amos_0030.nii.gz
324 | 09_AMOS/img/amos_0048.nii.gz 09_AMOS/label/amos_0048.nii.gz
325 | 09_AMOS/img/amos_0058.nii.gz 09_AMOS/label/amos_0058.nii.gz
326 | 09_AMOS/img/amos_0083.nii.gz 09_AMOS/label/amos_0083.nii.gz
327 | 09_AMOS/img/amos_0086.nii.gz 09_AMOS/label/amos_0086.nii.gz
328 | 09_AMOS/img/amos_0097.nii.gz 09_AMOS/label/amos_0097.nii.gz
329 | 09_AMOS/img/amos_0103.nii.gz 09_AMOS/label/amos_0103.nii.gz
330 | 09_AMOS/img/amos_0111.nii.gz 09_AMOS/label/amos_0111.nii.gz
331 | 09_AMOS/img/amos_0115.nii.gz 09_AMOS/label/amos_0115.nii.gz
332 | 09_AMOS/img/amos_0124.nii.gz 09_AMOS/label/amos_0124.nii.gz
333 | 09_AMOS/img/amos_0126.nii.gz 09_AMOS/label/amos_0126.nii.gz
334 | 09_AMOS/img/amos_0131.nii.gz 09_AMOS/label/amos_0131.nii.gz
335 | 09_AMOS/img/amos_0142.nii.gz 09_AMOS/label/amos_0142.nii.gz
336 | 09_AMOS/img/amos_0160.nii.gz 09_AMOS/label/amos_0160.nii.gz
337 | 09_AMOS/img/amos_0185.nii.gz 09_AMOS/label/amos_0185.nii.gz
338 | 09_AMOS/img/amos_0190.nii.gz 09_AMOS/label/amos_0190.nii.gz
339 | 09_AMOS/img/amos_0196.nii.gz 09_AMOS/label/amos_0196.nii.gz
340 | 09_AMOS/img/amos_0197.nii.gz 09_AMOS/label/amos_0197.nii.gz
341 | 09_AMOS/img/amos_0239.nii.gz 09_AMOS/label/amos_0239.nii.gz
342 | 09_AMOS/img/amos_0264.nii.gz 09_AMOS/label/amos_0264.nii.gz
343 | 09_AMOS/img/amos_0279.nii.gz 09_AMOS/label/amos_0279.nii.gz
344 | 09_AMOS/img/amos_0281.nii.gz 09_AMOS/label/amos_0281.nii.gz
345 | 09_AMOS/img/amos_0320.nii.gz 09_AMOS/label/amos_0320.nii.gz
346 | 09_AMOS/img/amos_0336.nii.gz 09_AMOS/label/amos_0336.nii.gz
347 | 09_AMOS/img/amos_0341.nii.gz 09_AMOS/label/amos_0341.nii.gz
348 | 09_AMOS/img/amos_0376.nii.gz 09_AMOS/label/amos_0376.nii.gz
349 | 09_AMOS/img/amos_0398.nii.gz 09_AMOS/label/amos_0398.nii.gz
350 | 09_AMOS/img/amos_0400.nii.gz 09_AMOS/label/amos_0400.nii.gz
351 | 10_Decathlon/Task03_Liver/imagesTr/liver_10.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_10.nii.gz
352 | 10_Decathlon/Task03_Liver/imagesTr/liver_104.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_104.nii.gz
353 | 10_Decathlon/Task03_Liver/imagesTr/liver_112.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_112.nii.gz
354 | 10_Decathlon/Task03_Liver/imagesTr/liver_113.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_113.nii.gz
355 | 10_Decathlon/Task03_Liver/imagesTr/liver_119.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_119.nii.gz
356 | 10_Decathlon/Task03_Liver/imagesTr/liver_120.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_120.nii.gz
357 | 10_Decathlon/Task03_Liver/imagesTr/liver_126.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_126.nii.gz
358 | 10_Decathlon/Task03_Liver/imagesTr/liver_129.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_129.nii.gz
359 | 10_Decathlon/Task03_Liver/imagesTr/liver_130.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_130.nii.gz
360 | 10_Decathlon/Task03_Liver/imagesTr/liver_15.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_15.nii.gz
361 | 10_Decathlon/Task03_Liver/imagesTr/liver_17.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_17.nii.gz
362 | 10_Decathlon/Task03_Liver/imagesTr/liver_20.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_20.nii.gz
363 | 10_Decathlon/Task03_Liver/imagesTr/liver_22.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_22.nii.gz
364 | 10_Decathlon/Task03_Liver/imagesTr/liver_42.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_42.nii.gz
365 | 10_Decathlon/Task03_Liver/imagesTr/liver_43.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_43.nii.gz
366 | 10_Decathlon/Task03_Liver/imagesTr/liver_44.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_44.nii.gz
367 | 10_Decathlon/Task03_Liver/imagesTr/liver_48.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_48.nii.gz
368 | 10_Decathlon/Task03_Liver/imagesTr/liver_5.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_5.nii.gz
369 | 10_Decathlon/Task03_Liver/imagesTr/liver_50.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_50.nii.gz
370 | 10_Decathlon/Task03_Liver/imagesTr/liver_54.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_54.nii.gz
371 | 10_Decathlon/Task03_Liver/imagesTr/liver_55.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_55.nii.gz
372 | 10_Decathlon/Task03_Liver/imagesTr/liver_56.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_56.nii.gz
373 | 10_Decathlon/Task03_Liver/imagesTr/liver_72.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_72.nii.gz
374 | 10_Decathlon/Task03_Liver/imagesTr/liver_76.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_76.nii.gz
375 | 10_Decathlon/Task03_Liver/imagesTr/liver_77.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_77.nii.gz
376 | 10_Decathlon/Task03_Liver/imagesTr/liver_78.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_78.nii.gz
377 | 10_Decathlon/Task03_Liver/imagesTr/liver_81.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_81.nii.gz
378 | 10_Decathlon/Task03_Liver/imagesTr/liver_82.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_82.nii.gz
379 | 10_Decathlon/Task03_Liver/imagesTr/liver_85.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_85.nii.gz
380 | 10_Decathlon/Task03_Liver/imagesTr/liver_88.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_88.nii.gz
381 | 10_Decathlon/Task03_Liver/imagesTr/liver_89.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_89.nii.gz
382 | 10_Decathlon/Task03_Liver/imagesTr/liver_92.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_92.nii.gz
383 | 10_Decathlon/Task03_Liver/imagesTr/liver_93.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_93.nii.gz
384 | 10_Decathlon/Task03_Liver/imagesTr/liver_94.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_94.nii.gz
385 | 10_Decathlon/Task03_Liver/imagesTr/liver_99.nii.gz 10_Decathlon/Task03_Liver/labelsTr/liver_99.nii.gz
386 | 10_Decathlon/Task06_Lung/imagesTr/lung_001.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_001.nii.gz
387 | 10_Decathlon/Task06_Lung/imagesTr/lung_026.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_026.nii.gz
388 | 10_Decathlon/Task06_Lung/imagesTr/lung_031.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_031.nii.gz
389 | 10_Decathlon/Task06_Lung/imagesTr/lung_034.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_034.nii.gz
390 | 10_Decathlon/Task06_Lung/imagesTr/lung_038.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_038.nii.gz
391 | 10_Decathlon/Task06_Lung/imagesTr/lung_046.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_046.nii.gz
392 | 10_Decathlon/Task06_Lung/imagesTr/lung_048.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_048.nii.gz
393 | 10_Decathlon/Task06_Lung/imagesTr/lung_069.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_069.nii.gz
394 | 10_Decathlon/Task06_Lung/imagesTr/lung_070.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_070.nii.gz
395 | 10_Decathlon/Task06_Lung/imagesTr/lung_080.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_080.nii.gz
396 | 10_Decathlon/Task06_Lung/imagesTr/lung_083.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_083.nii.gz
397 | 10_Decathlon/Task06_Lung/imagesTr/lung_084.nii.gz 10_Decathlon/Task06_Lung/labelsTr/lung_084.nii.gz
398 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_005.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_005.nii.gz
399 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_006.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_006.nii.gz
400 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_010.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_010.nii.gz
401 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_015.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_015.nii.gz
402 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_028.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_028.nii.gz
403 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_040.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_040.nii.gz
404 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_046.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_046.nii.gz
405 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_048.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_048.nii.gz
406 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_051.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_051.nii.gz
407 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_071.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_071.nii.gz
408 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_081.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_081.nii.gz
409 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_083.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_083.nii.gz
410 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_088.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_088.nii.gz
411 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_095.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_095.nii.gz
412 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_099.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_099.nii.gz
413 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_103.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_103.nii.gz
414 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_109.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_109.nii.gz
415 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_117.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_117.nii.gz
416 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_120.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_120.nii.gz
417 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_122.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_122.nii.gz
418 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_127.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_127.nii.gz
419 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_130.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_130.nii.gz
420 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_147.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_147.nii.gz
421 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_155.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_155.nii.gz
422 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_167.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_167.nii.gz
423 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_199.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_199.nii.gz
424 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_204.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_204.nii.gz
425 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_215.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_215.nii.gz
426 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_217.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_217.nii.gz
427 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_222.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_222.nii.gz
428 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_230.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_230.nii.gz
429 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_231.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_231.nii.gz
430 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_256.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_256.nii.gz
431 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_261.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_261.nii.gz
432 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_262.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_262.nii.gz
433 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_265.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_265.nii.gz
434 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_274.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_274.nii.gz
435 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_275.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_275.nii.gz
436 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_277.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_277.nii.gz
437 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_291.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_291.nii.gz
438 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_295.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_295.nii.gz
439 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_304.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_304.nii.gz
440 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_313.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_313.nii.gz
441 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_321.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_321.nii.gz
442 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_346.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_346.nii.gz
443 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_347.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_347.nii.gz
444 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_360.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_360.nii.gz
445 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_370.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_370.nii.gz
446 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_372.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_372.nii.gz
447 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_376.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_376.nii.gz
448 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_398.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_398.nii.gz
449 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_400.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_400.nii.gz
450 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_410.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_410.nii.gz
451 | 10_Decathlon/Task07_Pancreas/imagesTr/pancreas_414.nii.gz 10_Decathlon/Task07_Pancreas/labelsTr/pancreas_414.nii.gz
452 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_005.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_005.nii.gz
453 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_007.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_007.nii.gz
454 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_010.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_010.nii.gz
455 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_018.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_018.nii.gz
456 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_030.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_030.nii.gz
457 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_031.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_031.nii.gz
458 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_049.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_049.nii.gz
459 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_066.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_066.nii.gz
460 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_075.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_075.nii.gz
461 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_078.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_078.nii.gz
462 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_080.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_080.nii.gz
463 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_081.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_081.nii.gz
464 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_086.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_086.nii.gz
465 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_087.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_087.nii.gz
466 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_089.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_089.nii.gz
467 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_091.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_091.nii.gz
468 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_092.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_092.nii.gz
469 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_115.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_115.nii.gz
470 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_133.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_133.nii.gz
471 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_157.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_157.nii.gz
472 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_165.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_165.nii.gz
473 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_166.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_166.nii.gz
474 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_184.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_184.nii.gz
475 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_193.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_193.nii.gz
476 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_194.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_194.nii.gz
477 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_195.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_195.nii.gz
478 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_208.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_208.nii.gz
479 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_209.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_209.nii.gz
480 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_210.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_210.nii.gz
481 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_214.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_214.nii.gz
482 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_217.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_217.nii.gz
483 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_218.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_218.nii.gz
484 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_223.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_223.nii.gz
485 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_231.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_231.nii.gz
486 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_262.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_262.nii.gz
487 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_282.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_282.nii.gz
488 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_287.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_287.nii.gz
489 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_318.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_318.nii.gz
490 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_319.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_319.nii.gz
491 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_320.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_320.nii.gz
492 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_322.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_322.nii.gz
493 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_324.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_324.nii.gz
494 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_325.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_325.nii.gz
495 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_333.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_333.nii.gz
496 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_340.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_340.nii.gz
497 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_341.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_341.nii.gz
498 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_358.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_358.nii.gz
499 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_359.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_359.nii.gz
500 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_367.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_367.nii.gz
501 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_369.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_369.nii.gz
502 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_375.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_375.nii.gz
503 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_383.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_383.nii.gz
504 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_384.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_384.nii.gz
505 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_396.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_396.nii.gz
506 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_397.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_397.nii.gz
507 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_406.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_406.nii.gz
508 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_423.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_423.nii.gz
509 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_429.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_429.nii.gz
510 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_432.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_432.nii.gz
511 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_438.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_438.nii.gz
512 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_446.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_446.nii.gz
513 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_454.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_454.nii.gz
514 | 10_Decathlon/Task08_HepaticVessel/imagesTr/hepaticvessel_456.nii.gz 10_Decathlon/Task08_HepaticVessel/labelsTr/hepaticvessel_456.nii.gz
515 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_13.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_13.nii.gz
516 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_14.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_14.nii.gz
517 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_17.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_17.nii.gz
518 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_2.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_2.nii.gz
519 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_21.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_21.nii.gz
520 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_28.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_28.nii.gz
521 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_41.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_41.nii.gz
522 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_46.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_46.nii.gz
523 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_56.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_56.nii.gz
524 | 10_Decathlon/Task09_Spleen/imagesTr/spleen_6.nii.gz 10_Decathlon/Task09_Spleen/labelsTr/spleen_6.nii.gz
525 | 10_Decathlon/Task10_Colon/imagesTr/colon_005.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_005.nii.gz
526 | 10_Decathlon/Task10_Colon/imagesTr/colon_007.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_007.nii.gz
527 | 10_Decathlon/Task10_Colon/imagesTr/colon_022.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_022.nii.gz
528 | 10_Decathlon/Task10_Colon/imagesTr/colon_031.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_031.nii.gz
529 | 10_Decathlon/Task10_Colon/imagesTr/colon_033.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_033.nii.gz
530 | 10_Decathlon/Task10_Colon/imagesTr/colon_059.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_059.nii.gz
531 | 10_Decathlon/Task10_Colon/imagesTr/colon_069.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_069.nii.gz
532 | 10_Decathlon/Task10_Colon/imagesTr/colon_072.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_072.nii.gz
533 | 10_Decathlon/Task10_Colon/imagesTr/colon_077.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_077.nii.gz
534 | 10_Decathlon/Task10_Colon/imagesTr/colon_081.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_081.nii.gz
535 | 10_Decathlon/Task10_Colon/imagesTr/colon_095.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_095.nii.gz
536 | 10_Decathlon/Task10_Colon/imagesTr/colon_096.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_096.nii.gz
537 | 10_Decathlon/Task10_Colon/imagesTr/colon_107.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_107.nii.gz
538 | 10_Decathlon/Task10_Colon/imagesTr/colon_114.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_114.nii.gz
539 | 10_Decathlon/Task10_Colon/imagesTr/colon_120.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_120.nii.gz
540 | 10_Decathlon/Task10_Colon/imagesTr/colon_163.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_163.nii.gz
541 | 10_Decathlon/Task10_Colon/imagesTr/colon_187.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_187.nii.gz
542 | 10_Decathlon/Task10_Colon/imagesTr/colon_214.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_214.nii.gz
543 | 10_Decathlon/Task10_Colon/imagesTr/colon_218.nii.gz 10_Decathlon/Task10_Colon/labelsTr/colon_218.nii.gz
544 | 12_CT-ORG/img/volume-101.nii.gz 12_CT-ORG/label/labels-101.nii.gz
545 | 12_CT-ORG/img/volume-105.nii.gz 12_CT-ORG/label/labels-105.nii.gz
546 | 12_CT-ORG/img/volume-11.nii.gz 12_CT-ORG/label/labels-11.nii.gz
547 | 12_CT-ORG/img/volume-111.nii.gz 12_CT-ORG/label/labels-111.nii.gz
548 | 12_CT-ORG/img/volume-112.nii.gz 12_CT-ORG/label/labels-112.nii.gz
549 | 12_CT-ORG/img/volume-118.nii.gz 12_CT-ORG/label/labels-118.nii.gz
550 | 12_CT-ORG/img/volume-120.nii.gz 12_CT-ORG/label/labels-120.nii.gz
551 | 12_CT-ORG/img/volume-122.nii.gz 12_CT-ORG/label/labels-122.nii.gz
552 | 12_CT-ORG/img/volume-124.nii.gz 12_CT-ORG/label/labels-124.nii.gz
553 | 12_CT-ORG/img/volume-125.nii.gz 12_CT-ORG/label/labels-125.nii.gz
554 | 12_CT-ORG/img/volume-126.nii.gz 12_CT-ORG/label/labels-126.nii.gz
555 | 12_CT-ORG/img/volume-16.nii.gz 12_CT-ORG/label/labels-16.nii.gz
556 | 12_CT-ORG/img/volume-17.nii.gz 12_CT-ORG/label/labels-17.nii.gz
557 | 12_CT-ORG/img/volume-21.nii.gz 12_CT-ORG/label/labels-21.nii.gz
558 | 12_CT-ORG/img/volume-28.nii.gz 12_CT-ORG/label/labels-28.nii.gz
559 | 12_CT-ORG/img/volume-35.nii.gz 12_CT-ORG/label/labels-35.nii.gz
560 | 12_CT-ORG/img/volume-36.nii.gz 12_CT-ORG/label/labels-36.nii.gz
561 | 12_CT-ORG/img/volume-37.nii.gz 12_CT-ORG/label/labels-37.nii.gz
562 | 12_CT-ORG/img/volume-42.nii.gz 12_CT-ORG/label/labels-42.nii.gz
563 | 12_CT-ORG/img/volume-45.nii.gz 12_CT-ORG/label/labels-45.nii.gz
564 | 12_CT-ORG/img/volume-48.nii.gz 12_CT-ORG/label/labels-48.nii.gz
565 | 12_CT-ORG/img/volume-54.nii.gz 12_CT-ORG/label/labels-54.nii.gz
566 | 12_CT-ORG/img/volume-56.nii.gz 12_CT-ORG/label/labels-56.nii.gz
567 | 12_CT-ORG/img/volume-57.nii.gz 12_CT-ORG/label/labels-57.nii.gz
568 | 12_CT-ORG/img/volume-59.nii.gz 12_CT-ORG/label/labels-59.nii.gz
569 | 12_CT-ORG/img/volume-6.nii.gz 12_CT-ORG/label/labels-6.nii.gz
570 | 12_CT-ORG/img/volume-60.nii.gz 12_CT-ORG/label/labels-60.nii.gz
571 | 12_CT-ORG/img/volume-61.nii.gz 12_CT-ORG/label/labels-61.nii.gz
572 | 12_CT-ORG/img/volume-76.nii.gz 12_CT-ORG/label/labels-76.nii.gz
573 | 12_CT-ORG/img/volume-8.nii.gz 12_CT-ORG/label/labels-8.nii.gz
574 | 12_CT-ORG/img/volume-82.nii.gz 12_CT-ORG/label/labels-82.nii.gz
575 | 12_CT-ORG/img/volume-90.nii.gz 12_CT-ORG/label/labels-90.nii.gz
576 | 12_CT-ORG/img/volume-93.nii.gz 12_CT-ORG/label/labels-93.nii.gz
577 | 12_CT-ORG/img/volume-97.nii.gz 12_CT-ORG/label/labels-97.nii.gz
578 | 12_CT-ORG/img/volume-99.nii.gz 12_CT-ORG/label/labels-99.nii.gz
579 | 13_AbdomenCT-12organ/img/Organ12_0008_0000.nii.gz 13_AbdomenCT-12organ/label/Organ12_0008.nii.gz
580 | 13_AbdomenCT-12organ/img/Organ12_0020_0000.nii.gz 13_AbdomenCT-12organ/label/Organ12_0020.nii.gz
581 | 13_AbdomenCT-12organ/img/Organ12_0021_0000.nii.gz 13_AbdomenCT-12organ/label/Organ12_0021.nii.gz
582 | 13_AbdomenCT-12organ/img/Organ12_0028_0000.nii.gz 13_AbdomenCT-12organ/label/Organ12_0028.nii.gz
583 | 13_AbdomenCT-12organ/img/Organ12_0030_0000.nii.gz 13_AbdomenCT-12organ/label/Organ12_0030.nii.gz
--------------------------------------------------------------------------------