├── MySampler.py
├── README.md
├── binary.py
├── bounds.py
├── dataloader.py
├── dice3d.py
├── ivd.make
├── layers.py
├── lib.py
├── losses.py
├── main.py
├── main_sfda.py
├── networks.py
├── prostate.make
├── results
│   ├── ivd
│   │   └── cesource
│   │       └── last.pkl
│   ├── prostate
│   │   └── cesource
│   │       └── last.pkl
│   └── whs
│       └── cesource
│           └── last.pkl
├── scheduler.py
├── seg_pro3.png
├── sizes
│   ├── ivd.csv
│   ├── prostate.csv
│   ├── prostate_.csv
│   ├── readme.md
│   ├── whs.csv
│   └── whs_.csv
├── utils.py
└── whs.make

/MySampler.py:
--------------------------------------------------------------------------------
1 | import torch
2 | #from torch._six import int_classes as _int_classes
3 | 
4 | 
5 | class Sampler(object):
6 |     r"""Base class for all Samplers.
7 |     Every Sampler subclass has to provide an __iter__ method, providing a way
8 |     to iterate over indices of dataset elements, and a __len__ method that
9 |     returns the length of the returned iterator.
10 |     """
11 | 
12 |     def __init__(self, data_source):
13 |         pass
14 | 
15 |     def __iter__(self):
16 |         raise NotImplementedError
17 | 
18 |     def __len__(self):
19 |         raise NotImplementedError
20 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Source-Free Domain Adaptation
2 | **New**: Please check out [SFDA](https://github.com/mathilde-b/SFDA), the repository of our Source-Free Domain Adaptation method, which will keep being updated.
3 | 
4 | [Mathilde Bateson](https://github.com/mathilde-b), [Hoel Kervadec](https://github.com/HKervadec), [Jose Dolz](https://github.com/josedolz), Hervé Lombaert, Ismail Ben Ayed @ETS Montréal
5 | 
6 | Code of our submission at [MICCAI 2020](https://link.springer.com/chapter/10.1007/978-3-030-59710-8_48) and its ongoing journal extension. A video of the MICCAI talk is available:
7 | https://www.youtube.com/watch?v=ALYaa5xrxbQ&ab_channel=MB
8 | 
9 | * [MICCAI 2020 Proceedings](https://link.springer.com/chapter/10.1007/978-3-030-59710-8_48)
10 | * [arXiv preprint](https://arxiv.org/abs/2005.03697)
11 | 
12 | Please cite our paper if you find it useful for your research.
13 | 
14 | ```
15 | 
16 | @inproceedings{BatesonSFDA,
17 |     Author = {Bateson, Mathilde and Kervadec, Hoel and Dolz, Jose and Lombaert, Herv{\'e} and Ben Ayed, Ismail},
18 |     Booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2020},
19 |     Pages = {490--499},
20 |     Publisher = {Springer International Publishing},
21 |     Title = {Source-Relaxed Domain Adaptation for Image Segmentation},
22 |     Year = {2020},
23 |     Address = {Cham}}
24 | 
25 | ```
26 | 
27 | ![Visual comparison](seg_pro3.png)
28 | 
29 | 
30 | ## Requirements
31 | Non-exhaustive list:
32 | * python3.6+
33 | * PyTorch 1.0
34 | * nibabel
35 | * Scipy
36 | * NumPy
37 | * Matplotlib
38 | * scikit-image
39 | * zsh
40 | * tqdm
41 | * pandas
42 | 
43 | ## Data scheme
44 | ### datasets
45 | For instance:
46 | ```
47 | data
48 |     prostate_source/
49 |         train/
50 |             IMG/
51 |                 Case10_0.png
52 |                 ...
53 |             GT/
54 |                 Case10_0.png
55 |                 ...
56 |             ...
57 |         val/
58 |             IMG/
59 |                 Case11_0.png
60 |                 ...
61 |             GT/
62 |                 Case11_0.png
63 |                 ...
64 |             ...
65 |     prostate_target/
66 |         train/
67 |             IMG/
68 |                 Case10_0.png
69 |                 ...
70 |             GT/
71 |                 Case10_0.png
72 |                 ...
73 |             ...
74 |         val/
75 |             IMG/
76 |                 Case11_0.png
77 |                 ...
78 |             GT/
79 |                 Case11_0.png
80 |                 ...
81 |             ...
82 | ```
83 | The network takes png or nii files as input.
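For reference, a minimal data-preparation sketch (hypothetical, not part of this repository: it assumes `nibabel` and `imageio` are installed, and the helper name `volume_to_slices` is invented) that slices a nii volume and its label map into the layout above:

```
# Hypothetical sketch: slice a 3D volume and its label map into per-slice pngs.
from pathlib import Path

import imageio
import nibabel as nib
import numpy as np


def volume_to_slices(img_nii: str, gt_nii: str, out_dir: str, case: str) -> None:
    img = nib.load(img_nii).get_fdata()
    gt = nib.load(gt_nii).get_fdata()
    for sub in ("IMG", "GT"):
        Path(out_dir, sub).mkdir(parents=True, exist_ok=True)
    for z in range(img.shape[-1]):
        slice_ = img[..., z]
        # rescale the image slice to [0, 255] before saving it as png
        slice_ = (255 * (slice_ - slice_.min()) / (np.ptp(slice_) + 1e-8)).astype(np.uint8)
        imageio.imwrite(Path(out_dir, "IMG", f"{case}_{z}.png"), slice_)
        # the gray level of a GT pixel is directly its class index (see below)
        imageio.imwrite(Path(out_dir, "GT", f"{case}_{z}.png"), gt[..., z].astype(np.uint8))
```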
84 | The GT folder contains gray-scale ground-truth images, where the gray level of each pixel is its class index (0, 1, ..., K).
85 | 
86 | ### Class-ratio (sizes) prior
87 | The class-ratio prior is estimated from anatomical knowledge for each application. In our implementation, it is estimated for each slice in the target-domain training and validation sets. It is estimated once, before the start of the adaptation phase, and saved in a csv file.
88 | 
89 | Scheme:
90 | ```
91 | sizes/
92 |     prostate.csv
93 |     whs.csv
94 |     ivd.csv
95 | ```
96 | The size csv file should be organized as follows:
97 | 
98 | | val_ids | dumbpredwtags |
99 | | ------------- | ------------- |
100 | | Case00_0.nii | [Estimated_Size_class0, Estimated_Size_class1, ..., Estimated_Size_classk] |
101 | 
102 | Sample from sizes/prostate.csv:
103 | 
104 | | val_ids | val_gt_size | dumbpredwtags |
105 | | ------------- | ------------- | ------------- |
106 | | Case00_0.nii | [147398.0, 827.0] | [140225, 6905] |
107 | | Case00_1.nii | [147080.0, 1145.0] | [140225, 6905] |
108 | | Case00_14.nii | [148225.0, 0.0] | [148225, 0] |
109 | 
110 | NB 1: there should be no overlap between the names of the slices in the training and validation sets (Case00_0.nii, ...).
111 | 
112 | NB 2: in our implementation, the csv file contains the size priors in pixels, and the KL divergence loss divides the size in pixels by w*h, the width and height of the slice, to obtain the class-ratio prior.
113 | 
114 | NB 3: Estimated_Size_class0 + Estimated_Size_class1 + ... + Estimated_Size_classk = w*h
115 | 
116 | NB 4: the true val_gt_size is unknown, so it is not directly used in our proposed SFDA. However, in our framework an image-level annotation is available for the target training dataset: the "tag" of each class k, indicating the presence or absence of class k in the slice. Therefore, Estimated_Size_classk = 0 if val_gt_size_k = 0, and Estimated_Size_classk > 0 if val_gt_size_k > 0.
117 | 
118 | NB 5: to get an idea of the capacity of the SFDA model in the ideal case where the ground-truth class-ratio prior is known, it is useful to run the upper-bound model SFDA_TrueSize, by choosing the column "val_gt_size" instead of "dumbpredwtags". This can be changed in the makefile:
119 | 
120 | ```
121 | results/sa/SFDA_TrueSize: OPT = --target_losses="[('EntKLProp', {'lamb_se':1,'lamb_consprior':1,'ivd':True,'weights_se':[0.1,0.9],'idc_c': [1],'curi':True,'power': 1},'PredictionBounds', \
122 |     {'margin':0,'dir':'high','idc':[0,1],'predcol':'val_gt_size','power': 1, 'mode':'percentage','sizefile':'sizes/prostate.csv'},'norm_soft_size',1)]" \
123 |     --val_target_folders="$(TT_DATA)" --l_rate 0.000001 --n_epoch 100 --lr_decay 0.9 --batch_size 10 --target_folders="$(TT_DATA)" --model_weights="$(M_WEIGHTS_ul)" \
124 | ```
125 | 
126 | NB 6: if you change the names of the columns (val_ids, dumbpredwtags) in the size file, you should change them in the [`bounds.py`](bounds.py) file as well as in the [`prostate.make`](prostate.make) makefile.
127 | 
128 | ### results
129 | ```
130 | results/
131 |     prostate/
132 |         fs/
133 |             best_epoch_3d/
134 |                 val/
135 |                     Case11_0.png
136 |                     ...
137 |             iter000/
138 |                 val/
139 |                     ...
140 |         sfda/
141 |             ...
142 |         params.txt # saves all the argparse parameters of the model
143 |         best_3d.pkl # best model saved
144 |         last.pkl # last epoch
145 |         IMG_target_metrics.csv # metrics over time, csv
146 |         3dbestepoch.txt # number and 3D Dice of the best epoch
147 |         ...
148 |     whs/
149 |         ...
150 | archives/
151 |     $(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-sfda.tar.gz
152 |     $(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-prostate.tar.gz
153 | ```
154 | ## Interesting bits
155 | The losses are defined in the [`losses.py`](losses.py) file.
156 | Below is the general structure of the `--target_losses` parameter; the lambda weight of each loss is its last element:
157 | ```
158 | parser.add_argument("--target_losses", type=str, required=True,
159 |                     help="List of (loss_name, loss_params, bounds_name, bounds_params, fn, weight)")
160 | ```
161 | 
162 | Example for the supervised Cross-Entropy loss:
163 | ```
164 | target_losses="[('CrossEntropy', {'idc': [0,1], 'weights':[1,1]}, None, None, None, 1)]"
165 | ```
166 | Example for our SFDA loss on prostate:
167 | ```
168 | target_losses="[('EntKLProp', {'lamb_se':1, 'lamb_consprior':1,'ivd':True,'weights_se':[0.1,0.9],
169 | 'idc_c': [1],'curi':True,'power': 1},'PredictionBounds', \
170 | {'margin':0,'dir':'high','idc':[0,1],'predcol':'dumbpredwtags','power':1,
171 | 'mode':'percentage','sep':';','sizefile':'sizes/prostate.csv'},'norm_soft_size',1)]"
172 | ```
173 | 
174 | ## Running our main experiment
175 | Once you have downloaded the data and organized it as in the scheme above:
176 | 
177 | In the makefiles, change the path to match the base folder containing the target dataset: T_FOLD = "your/path/to/the/target/dataset"
178 | 
179 | Run the main experiments as follows:
180 | 
181 | ```
182 | make -f prostate.make
183 | make -f ivd.make
184 | make -f whs.make
185 | ```
186 | This will first run the SFDA model, which will be saved in results/<dataset>/sfda. It will be initialized with the network weights obtained by training on the source only, which we made available. Optionally, you can re-run the source training model, which will be saved in results/prostate/cesource.
187 | 
188 | ## Cool tricks
189 | Remove all assertions from the code to speed it up. Usually done after making sure it does not crash for one complete epoch:
190 | ```
191 | make -f prostate.make CFLAGS=-O
192 | ```
193 | 
194 | Use a specific python executable:
195 | ```
196 | make -f prostate.make CC=/path/to/the/executable
197 | ```
198 | 
199 | Train for only 5 epochs, with a dummy network, and only 10 images per data loader. Useful for debugging:
200 | ```
201 | make -f prostate.make NET=Dimwit EPC=5 DEBUG=--debug
202 | ```
203 | 
204 | Rebuild everything even if it already exists:
205 | ```
206 | make -f prostate.make -B
207 | ```
208 | 
209 | Only print the commands that will be run (useful to check that recipes are properly defined):
210 | ```
211 | make -f prostate.make -n
212 | ```
213 | 
214 | ## Tip for implementing on a new dataset
215 | This is a list of the parameters and hyperparameters which influence the model's quality:
216 | - the class-ratio anatomical prior (defined in --target_losses)
217 | - the lambda weight for the KL loss (defined in --target_losses)
218 | - the class weights for the KL loss (defined in --target_losses)
219 | - the class weights for the entropy minimization loss (defined in --target_losses)
220 | - the learning rate, and the learning rate decay
221 | 
222 | Regarding the class-ratio anatomical prior: using the ground-truth slice-based class-ratio prior (val_gt_size in the size file) is useful to gauge the capability of SFDA on your application (it is an upper bound for our SFDA). Our main SFDA method relies on obtaining an estimate of the class-ratio of each structure of interest; a minimal sketch of building such a size file is shown below.
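For illustration, a minimal, hypothetical sketch of writing such a size file (not part of the repository: the column names val_ids/val_gt_size/dumbpredwtags match the scheme above, while `slice_gts`, `coarse_fg_size` and `build_size_file` are assumed names; the example is for a single foreground class):

```
# Hypothetical sketch: build a sizes csv compatible with the scheme above.
# The prior respects the image-level tag: 0 for absent classes, a coarse
# (over)estimate for present ones, with class 0 taking the rest of w*h.
import pandas as pd


def build_size_file(slice_gts: dict, w: int, h: int, coarse_fg_size: int, out_csv: str) -> None:
    # slice_gts maps a slice name (e.g. "Case00_0.nii") to its 2D ground-truth array.
    rows = []
    for name, gt in slice_gts.items():
        tag = int((gt == 1).sum() > 0)  # image-level tag: is class 1 present?
        fg = coarse_fg_size * tag       # coarse (over)estimated size, 0 if absent
        rows.append({"val_ids": name,
                     "val_gt_size": [float((gt == 0).sum()), float((gt == 1).sum())],
                     # NB 3: the estimated sizes must sum to w*h
                     "dumbpredwtags": [w * h - fg, fg]})
    pd.DataFrame(rows).to_csv(out_csv, index=False)
```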
223 | Empirically, we found that an *over*estimation of the size of a structure yielded better quantitative and qualitative results than an *under*estimation^{1}. Therefore, on a new application, obtaining a coarse upper bound of an anatomical structure's pixel size should be a good starting point.
224 | 
225 | ## Related Implementation and Dataset
226 | * [Mathilde Bateson](https://github.com/mathilde-b), [Hoel Kervadec](https://github.com/HKervadec), [Jose Dolz](https://github.com/josedolz), Hervé Lombaert, Ismail Ben Ayed. Constrained Domain Adaptation for Image Segmentation. In IEEE Transactions on Medical Imaging, 2021. [[paper]](https://ieeexplore.ieee.org/document/9382339) [[implementation]](https://github.com/mathilde-b/CDA)
227 | * [Hoel Kervadec](https://github.com/HKervadec), [Jose Dolz](https://github.com/josedolz), Meng Tang, Eric Granger, Yuri Boykov, Ismail Ben Ayed. Constrained-CNN losses for weakly supervised segmentation. In Medical Image Analysis, 2019. [[paper]](https://www.sciencedirect.com/science/article/pii/S1361841518306145?via%3Dihub) [[code]](https://github.com/LIVIAETS/SizeLoss_WSS)
228 | * Prostate Dataset and details: https://raw.githubusercontent.com/liuquande/SAML/. The SA site was used as the target domain, the SB site as the source domain. For both datasets, we use 20 scans for training, and the remaining 10 scans for validation.
229 | * Heart Dataset and details: We used the preprocessed dataset from Dou et al.: https://github.com/carrenD/Medical-Cross-Modality-Domain-Adaptation. The data is in tfrecords format; it should be converted to nii or png before running the makefile.
230 | * Spine Dataset and details: https://ivdm3seg.weebly.com/ . From the original coronal view, we transposed the slices to the transverse view in our experiments. We set the water modality (Wat) as the source and the in-phase (IP) modality as the target domain. From this dataset, 13 scans are used for training, and the remaining 3 scans for validation.
231 | 
232 | 
233 | ## Note
234 | The model and code are available for non-commercial research purposes only.
235 | 
236 | {1} This could be traced to the well-known *under*segmentation bias of entropy minimization alone (see our paper), which seems better compensated when the KL matching has an *over*segmentation bias. Note that the common Dice metric also favors *over*segmentation over *under*segmentation (see https://openreview.net/pdf?id=76X9Mthzv4X for an example).
237 | 
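Before diving into the code: conceptually, the adaptation objective combines entropy minimization with a KL divergence between the predicted class proportions and the csv prior. A condensed, illustrative sketch of that KL term (simplified from EntKLProp in losses.py below; not the repository's exact code):

```
import torch


def kl_prop(probs: torch.Tensor, gt_prop: torch.Tensor) -> torch.Tensor:
    # probs: (b, c, w, h) softmax predictions; gt_prop: (b, c) class-ratio prior
    b, c, w, h = probs.shape
    est_prop = probs.sum(dim=(2, 3)) / (w * h)  # predicted class ratios, (b, c)
    # KL(est_prop || gt_prop), averaged over the batch
    return (est_prop * ((est_prop + 1e-10).log() - (gt_prop + 1e-10).log())).sum() / b
```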
--------------------------------------------------------------------------------
/bounds.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3.6
2 | 
3 | from typing import Any, List
4 | 
5 | import torch
6 | from torch import Tensor
7 | import pandas as pd
8 | from utils import eq
9 | 
10 | 
11 | class ConstantBounds():
12 |     def __init__(self, **kwargs):
13 |         self.C: int = kwargs['C']
14 |         self.const: Tensor = torch.zeros((self.C, 1, 2), dtype=torch.float32)
15 | 
16 |         for i, (low, high) in kwargs['values'].items():
17 |             self.const[i, 0, 0] = low
18 |             self.const[i, 0, 1] = high
19 | 
20 |         print(f"Initialized {self.__class__.__name__} with {kwargs}")
21 | 
22 |     def __call__(self, image: Tensor, target: Tensor, weak_target: Tensor, filename: str) -> Tensor:
23 |         return self.const
24 | 
25 | class PredictionBounds():
26 |     def __init__(self, **kwargs):
27 |         self.margin: float = kwargs['margin']
28 |         self.dir: str = kwargs['dir']
29 |         self.mode = "percentage"
30 |         self.sizefile: str = kwargs['sizefile']
31 |         self.sep = kwargs['sep']
32 |         self.sizes = pd.read_csv(self.sizefile, sep=self.sep)
33 |         self.predcol: str = kwargs['predcol']
34 | 
35 |         print(f"Initialized {self.__class__.__name__} with {kwargs}")
36 | 
37 |     def __call__(self, image: Tensor, target: Tensor, weak_target: Tensor, filename: str) -> Tensor:
38 |         c, w, h = target.shape
39 |         pred_size_col = self.predcol
40 |         try:
41 |             value = eval(self.sizes.loc[self.sizes.val_ids == filename, pred_size_col].values[0])
42 |         except Exception:  # the csv cell may already be numeric rather than a stringified list
43 |             print('not eval')
44 |             #print(self.sizes[pred_size_col])
45 |             #print(self.sizes.columns)
46 |             #print(filename)
47 |             value = self.sizes.loc[self.sizes.val_ids == filename, pred_size_col].values[0]
48 |         value = torch.tensor([value]).squeeze(0)
49 |         #with_margin: Tensor = torch.stack([value, value], dim=-1)
50 |         #assert with_margin.shape == (*value.shape, 2), with_margin.shape
51 | 
52 |         margin: Tensor
53 |         if self.mode == "percentage":
54 |             margin = value * self.margin
55 |         else:
56 |             raise ValueError("mode")
57 | 
58 |         if self.dir == "both":
59 |             with_margin: Tensor = torch.stack([value - margin, value + margin], dim=-1)
60 |         elif self.dir == "high":
61 |             with_margin: Tensor = torch.stack([value, value + margin], dim=-1)
62 |         elif self.dir == "low":
63 |             with_margin: Tensor = torch.stack([value - margin, value], dim=-1)
64 |         assert with_margin.shape == (*value.shape, 2), with_margin.shape
65 | 
66 |         #res = torch.max(with_margin, torch.zeros(*value.shape, 2)).type(torch.float32)
67 |         res = torch.max(with_margin, torch.zeros(*value.shape, 2).type(torch.long)).type(torch.float32)
68 |         #print(res.shape,'res.shape')
69 |         return res
70 | 
71 | 
72 | class PreciseBounds():
73 |     def __init__(self, **kwargs):
74 |         self.margin: float = kwargs['margin']
75 |         self.mode: str = kwargs['mode']
76 |         self.namefun: str = kwargs['fn']
77 |         self.power: int = kwargs['power']
78 |         self.__fn__ = getattr(__import__('utils'), kwargs['fn'])
79 | 
80 |         print(f"Initialized {self.__class__.__name__} with {kwargs}")
81 | 
82 |     def __call__(self, image: Tensor, target: Tensor, weak_target: Tensor, filename: str) -> Tensor:
83 |         if self.namefun == "norm_soft_size":
84 |             value: Tensor = self.__fn__(target[None, ...].type(torch.float32), self.power)[0].type(torch.float32)  # cwh and not bcwh
85 |         else:
86 |             #value: Tensor = self.__fn__(target[None, ...])[0].type(torch.float32)  # cwh and not bcwh
87 |             value: Tensor = self.__fn__(target[None, ...].type(torch.float32), self.power)[0].type(torch.float32)  # cwh and not bcwh
88 |         margin: Tensor
89 |         if self.mode == "percentage":
90 |             margin = value * self.margin
91 |         elif self.mode == "abs":
92 |             margin = torch.ones_like(value) * self.margin
93 |         else:
94 |             raise ValueError("mode")
95 | 
96 |         with_margin: Tensor = torch.stack([value - margin, value + margin], dim=-1)
97 |         assert with_margin.shape == (*value.shape, 2), with_margin.shape
98 | 
99 |         res = torch.max(with_margin, torch.zeros(*value.shape, 2)).type(torch.float32)
100 |         #print(res.shape)
101 |         return res
102 | 
103 | 
104 | class PreciseTags(PreciseBounds):
105 |     def __init__(self, **kwargs):
106 |         super().__init__(**kwargs)
107 | 
108 |         self.neg_value: List = kwargs['neg_value']
109 | 
110 |     def __call__(self, image: Tensor, target: Tensor, weak_target: Tensor, filename: str) -> Tensor:
111 |         positive_class: Tensor = torch.einsum("cwh->c", [target]) > 0
112 | 
113 |         res = super().__call__(image, target, weak_target, filename)
114 | 
115 |         masked = res[...]
116 |         masked[positive_class == 0] = torch.Tensor(self.neg_value)
117 | 
118 |         return masked
119 | 
120 | 
121 | class BoxBounds():
122 |     def __init__(self, **kwargs):
123 |         self.margins: Tensor = torch.Tensor(kwargs['margins'])
124 |         assert len(self.margins) == 2
125 |         assert self.margins[0] <= self.margins[1]
126 | 
127 |     def __call__(self, image: Tensor, target: Tensor, weak_target: Tensor, filename: str) -> Tensor:
128 |         c = len(weak_target)
129 |         box_sizes: Tensor = torch.einsum("cwh->c", [weak_target])[..., None].type(torch.float32)
130 | 
131 |         bounds: Tensor = box_sizes * self.margins
132 | 
133 |         res = bounds[:, None, :]
134 |         assert res.shape == (c, 1, 2)
135 |         assert (res[..., 0] <= res[..., 1]).all()
136 |         return res
137 | 
138 | 
139 | def CheckBounds(**kwargs):
140 |     sizefile: str = kwargs['sizefile']
141 |     sizes = pd.read_csv(sizefile, sep=kwargs['sep'])
142 |     predcol: str = kwargs['predcol']
143 |     #print(predcol, 'pred_size_col')
144 |     #print(sizes.columns, 'self.sizes.columns')
145 |     if predcol in sizes.columns:
146 |         return True
147 |     else:
148 |         print('size pred not in file')
149 |         print(sizes.columns, 'self.sizes.columns')
150 |         print(sizes.shape, "size file shape")
151 |         return False
152 | 
--------------------------------------------------------------------------------
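Usage note (an illustrative sketch, not from the repository: it assumes sizes/prostate.csv is present and semicolon-separated, as in the README's prostate example): a bounds generator is built once from the loss parameters, then queried per slice by the dataloader that follows:

```
import torch

from bounds import PredictionBounds

# Parameters mirror the README's prostate example.
bounds_gen = PredictionBounds(margin=0, dir='high', idc=[0, 1],
                              predcol='dumbpredwtags', power=1,
                              mode='percentage', sep=';',
                              sizefile='sizes/prostate.csv')

# For a given slice, __call__ looks up its row by filename and returns a
# (c, 2) tensor of [low, high] size bounds for the KL proportion prior.
dummy = torch.zeros((2, 384, 384))  # (c, w, h) placeholder target
print(bounds_gen(dummy, dummy, dummy, 'Case00_0.nii').shape)  # torch.Size([2, 2])
```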
/dataloader.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3.6
2 | 
3 | import io
4 | import re
5 | import random
6 | from operator import itemgetter
7 | from pathlib import Path
8 | from itertools import repeat
9 | from functools import partial
10 | from typing import Any, Callable, BinaryIO, Dict, List, Match, Pattern, Tuple, Union
11 | import csv
12 | from multiprocessing import cpu_count
13 | import torch
14 | import numpy as np
15 | from torch import Tensor
16 | from PIL import Image
17 | from torchvision import transforms
18 | from torch.utils.data import Dataset, DataLoader
19 | import os, bisect, warnings  # bisect and warnings are used by ConcatDataset below
20 | from MySampler import Sampler
21 | from utils import id_, map_, class2one_hot, augment, read_nii_image, read_unknownformat_image
22 | from utils import simplex, sset, one_hot, pad_to, remap
23 | 
24 | F = Union[Path, BinaryIO]
25 | D = Union[Image.Image, np.ndarray, Tensor]
26 | 
27 | 
28 | def nii_transform(resolution: Tuple[float, ...], K: int) -> Callable[[D], Tensor]:
29 |     return transforms.Compose([
30 |         lambda nd: ((nd + 4) / 8),  # max <= 1
31 |         lambda nd: torch.tensor(nd, dtype=torch.float32),
32 |     ])
33 | 
34 | def nii_gt_transform(resolution: Tuple[float, ...], K: int) -> Callable[[D], Tensor]:
35 |     return transforms.Compose([
36 |         lambda nd: torch.tensor(nd, dtype=torch.int64),  # need to add one dimension to simulate batch
37 |         partial(class2one_hot, K=K),
38 |         itemgetter(0),  # then pop the element to go back to img shape
39 |     ])
40 | 
41 | def dummy_transform(resolution: Tuple[float, ...], K: int) -> Callable[[D], Tensor]:
42 |     return transforms.Compose([
43 |         lambda nd: torch.tensor(nd, dtype=torch.int64),
44 |         lambda t: torch.zeros_like(t),
45 |         partial(class2one_hot, K=K),
46 |         itemgetter(0)  # then pop the element to go back to img shape
47 |     ])
48 | 
49 | def get_loaders(args, data_folder: str, subfolders: str,
50 |                 batch_size: int, n_class: int,
51 |                 debug: bool, in_memory: bool, dtype, shuffle: bool, mode: str, val_subfolders: str = "") -> Tuple[DataLoader, DataLoader]:
52 | 
53 |     nii_transform2 = transforms.Compose([
54 |         lambda nd: torch.tensor(nd, dtype=torch.float32),
55 |         lambda nd: nd[:, 0:384, 0:384],
56 |         #lambda nd: print(nd.shape),
57 |     ])
58 | 
59 |     nii_gt_transform2 = transforms.Compose([
60 |         lambda nd: torch.tensor(nd, dtype=torch.int64),
61 |         partial(class2one_hot, C=n_class),
62 |         lambda nd: nd[:, :, 0:384, 0:384],
63 |         itemgetter(0),
64 |         #lambda nd: print(nd.shape, "nii gt trans")
65 |     ])
66 | 
67 | 
68 |     nii_transform = transforms.Compose([
69 |         lambda nd: torch.tensor(nd, dtype=torch.float32),
70 |         lambda nd: (nd + 4) / 8.5,  # max <= 1
71 |         #lambda nd: print(nd.shape),
72 |     ])
73 | 
74 |     nii_gt_transform = transforms.Compose([
75 |         lambda nd: torch.tensor(nd, dtype=torch.int64),
76 |         partial(class2one_hot, C=n_class),
77 |         itemgetter(0),
78 |         #lambda nd: print(nd.shape, "nii gt trans")
79 |     ])
80 | 
81 |     png_transform = transforms.Compose([
82 |         lambda img: np.array(img)[np.newaxis, ...],
83 |         lambda nd: nd / 255,  # max <= 1
84 |         # lambda nd: np.pad(nd, [(0,0), (0,0), (110,110)], 'constant'),
85 |         #lambda nd: pad_to(nd, 256, 256),
86 |         lambda nd: torch.tensor(nd, dtype=dtype)
87 |     ])
88 |     imnpy_transform = transforms.Compose([
89 |         lambda nd: nd / 255,  # max <= 1
90 |         # lambda nd: np.pad(nd, [(0,0), (0,0), (110,110)], 'constant'),
91 |         #lambda nd: pad_to(nd, 256, 256),
92 |         lambda nd: torch.tensor(nd, dtype=dtype)
93 |     ])
94 |     npy_transform = transforms.Compose([
95 |         lambda img: np.array(img)[np.newaxis, ...],
96 |         lambda nd: nd / 255,  # max <= 1
97 |         #lambda nd: np.pad(nd, [(0,0), (0,0), (110,110)], 'constant'),
98 |         #lambda nd: pad_to(nd, 256, 256),
99 |         lambda nd: torch.tensor(nd, dtype=dtype)
100 |     ])
101 |     gtnpy_transform = transforms.Compose([
102 |         lambda img: np.array(img)[np.newaxis, ...],
103 |         #lambda nd: np.pad(nd, [(0, 0), (0, 0), (110, 110)], 'constant'),
104 |         #lambda nd: pad_to(nd, 256, 256),
105 |         lambda nd: torch.tensor(nd, dtype=torch.int64),
106 |         #lambda nd: remap({0:0, 36:4, 72:0, 109:1, 145:0, 182:2, 218:3, 255:0}, nd),
107 |         partial(class2one_hot, C=n_class),
108 |         itemgetter(0)
109 |     ])
110 |     gt_transform = transforms.Compose([
111 |         #lambda img: np.array(img)[np.newaxis, ...],
112 |         #lambda nd: np.pad(nd, [(0, 0), (0, 0), (110, 110)], 'constant'),
113 |         #lambda nd: pad_to(nd, 256, 256),
114 |         lambda nd: torch.tensor(nd, dtype=torch.int64),
115 |         #lambda nd: print(nd.shape, "nd in gt transform"),
116 |         partial(class2one_hot, C=n_class),
117 |         itemgetter(0),
118 |     ])
119 |     gtpng_transform = transforms.Compose([
120 |         lambda img: np.array(img)[np.newaxis, ...],
121 |         #lambda nd: np.pad(nd, [(0, 0), (0, 0), (110, 110)], 'constant'),
122 |         #lambda nd: pad_to(nd, 256, 256),
123 | 
lambda nd: torch.tensor(nd, dtype=torch.int64), 124 | #lambda nd: print(nd.shape,"nd in gt transform"), 125 | partial(class2one_hot, C=n_class), 126 | itemgetter(0), 127 | ]) 128 | 129 | 130 | if mode == "target": 131 | losses = eval(args.target_losses) 132 | else: 133 | losses = eval(args.source_losses) 134 | 135 | bounds_generators: List[Callable] = [] 136 | for _, _, bounds_name, bounds_params, fn, _ in losses: 137 | if bounds_name is None: 138 | bounds_generators.append(lambda *a: torch.zeros(n_class, 1, 2)) 139 | continue 140 | bounds_class = getattr(__import__('bounds'), bounds_name) 141 | bounds_generators.append(bounds_class(C=args.n_class, fn=fn, **bounds_params)) 142 | folders_list = eval(subfolders) 143 | val_folders_list = eval(subfolders) 144 | if val_subfolders !="": 145 | val_folders_list = eval(val_subfolders) 146 | 147 | # print(folders_list) 148 | folders, trans, are_hots = zip(*folders_list) 149 | valfolders, val_trans, val_are_hots = zip(*val_folders_list) 150 | # Create partial functions: Easier for readability later (see the difference between train and validation) 151 | gen_dataset = partial(SliceDataset, 152 | transforms=trans, 153 | are_hots=are_hots, 154 | debug=debug, 155 | C=n_class, 156 | in_memory=in_memory, augment=args.augment, 157 | bounds_generators=bounds_generators) 158 | valgen_dataset = partial(SliceDataset, 159 | transforms=val_trans, 160 | are_hots=val_are_hots, 161 | debug=debug, 162 | C=n_class, 163 | in_memory=in_memory, augment=args.augment, 164 | bounds_generators=bounds_generators) 165 | 166 | data_loader = partial(DataLoader, 167 | num_workers=4, 168 | #num_workers=min(cpu_count(), batch_size + 4), 169 | #num_workers=1, 170 | pin_memory=True) 171 | 172 | # Prepare the datasets and dataloaders 173 | train_folders: List[Path] = [Path(data_folder, "train", f) for f in folders] 174 | if args.trainval: 175 | train_folders: List[Path] = [Path(data_folder, "trainval", f) for f in folders] 176 | elif args.valonly: 177 | train_folders: List[Path] = [Path(data_folder, "val", f) for f in folders] 178 | #if args.ontrain1: 179 | # train_folders: List[Path] = [Path(data_folder, "train1", f) for f in folders] 180 | # I assume all files have the same name inside their folder: makes things much easier 181 | train_names: List[str] = map_(lambda p: str(p.name), train_folders[0].glob("*.png")) 182 | if len(train_names)==0: 183 | train_names: List[str] = map_(lambda p: str(p.name), train_folders[0].glob("*.nii")) 184 | if len(train_names)==0: 185 | train_names: List[str] = map_(lambda p: str(p.name), train_folders[0].glob("*.npy")) 186 | #train_names.sort() 187 | 188 | train_set = gen_dataset(train_names, 189 | train_folders) 190 | #if fix_size!=[0,0] and len(train_set) None: 246 | self.folders: List[Path] = folders 247 | self.transforms: List[Callable[[D], Tensor]] = transforms 248 | assert len(self.transforms) == len(self.folders) 249 | 250 | self.are_hots: List[bool] = are_hots 251 | self.filenames: List[str] = filenames 252 | self.debug = debug 253 | self.C: int = C # Number of classes 254 | self.in_memory: bool = in_memory 255 | self.bounds_generators: List[Callable] = bounds_generators 256 | self.augment: bool = augment 257 | 258 | #print("self.folders",self.folders) 259 | #print("self.filenames[:10]",self.filenames[:10]) 260 | if self.debug: 261 | self.filenames = self.filenames[:10] 262 | 263 | assert self.check_files() # Make sure all file exists 264 | 265 | # Load things in memory if needed 266 | self.files: List[List[F]] = 
SliceDataset.load_images(self.folders, self.filenames, self.in_memory)
267 |         assert len(self.files) == len(self.folders)
268 |         for files in self.files:
269 |             assert len(files) == len(self.filenames)
270 | 
271 |         print(f"Initialized {self.__class__.__name__} with {len(self.filenames)} images")
272 | 
273 |     def check_files(self) -> bool:
274 |         for folder in self.folders:
275 |             #print(folder)
276 |             if not Path(folder).exists():
277 |                 print(folder, "does not exist")
278 |                 return False
279 | 
280 |             for f_n in self.filenames:
281 |                 #print(f_n)
282 |                 if not Path(folder, f_n).exists():
283 |                     print(folder, f_n, "does not exist")
284 |                     return False
285 | 
286 |         return True
287 | 
288 |     @staticmethod
289 |     def load_images(folders: List[Path], filenames: List[str], in_memory: bool) -> List[List[F]]:
290 |         def load(folder: Path, filename: str) -> F:
291 |             p: Path = Path(folder, filename)
292 |             if in_memory:
293 |                 with open(p, 'rb') as data:
294 |                     res = io.BytesIO(data.read())
295 |                 return res
296 |             return p
297 |         if in_memory:
298 |             print("Loading the data in memory...")
299 | 
300 |         files: List[List[F]] = [[load(f, im) for im in filenames] for f in folders]
301 | 
302 |         return files
303 | 
304 |     def __len__(self):
305 |         return len(self.filenames)
306 | 
307 |     def __getitem__(self, index: int) -> List[Any]:
308 |         filename: str = self.filenames[index]
309 |         path_name: Path = Path(filename)
310 |         images: List[D]
311 |         #print('get', self.folders, filename)
312 |         files = SliceDataset.load_images(self.folders, [filename], self.in_memory)
313 |         #print('files', files, self.bounds_generators)
314 |         #print('old files', self.files[0][index])
315 |         #print(self.files, filename)
316 |         #print(path_name)
317 |         if path_name.suffix == ".png":
318 |             images = [Image.open(files[index]).convert('L') for files in self.files]
319 |         elif path_name.suffix == ".nii":
320 |             #print(files)
321 |             try:
322 |                 images = [read_nii_image(f[0]) for f in files]
323 |                 #print("nm", [i.shape for i in images])
324 |             except Exception:  # fall back for nii files with non-standard formatting
325 |                 images = [read_unknownformat_image(f[0]) for f in files]
326 |                 #print("em", [i.shape for i in images])
327 |         elif path_name.suffix == ".npy":
328 |             images = [np.load(files[index]) for files in self.files]
329 |         else:
330 |             raise ValueError(filename)
331 |         if self.augment:
332 |             images = augment(*images)
333 | 
334 |         assert self.check_files()  # Make sure all files exist
335 |         # Final transforms and assertions
336 |         t_tensors: List[Tensor] = [tr(e) for (tr, e) in zip(self.transforms, images)]
337 | 
338 |         assert 0 <= t_tensors[0].min() and t_tensors[0].max() <= 1, t_tensors[0].max()  # main image is between 0 and 1
339 |         #print(t_tensors[0].max())
340 |         _, w, h = t_tensors[0].shape
341 | 
342 |         for ttensor, is_hot in zip(t_tensors[1:], self.are_hots):  # All masks (ground truths) are class encoded
343 |             if is_hot:
344 |                 assert one_hot(ttensor, axis=0)
345 |                 #assert ttensor.shape == (self.C, w, h)
346 | 
347 |         img, gt = t_tensors[:2]
348 |         #print(gt.shape)
349 |         try:
350 |             bounds = [f(img, gt, t, filename) for f, t in zip(self.bounds_generators, t_tensors[2:])]
351 |         except Exception:
352 |             print(self.folders, filename, self.bounds_generators)
353 |         # return t_tensors + [filename] + bounds
354 |         return [filename] + t_tensors + bounds
355 | 
356 | 
357 | class PatientSampler(Sampler):
358 |     def __init__(self, dataset: SliceDataset, grp_regex, shuffle=False) -> None:
359 |         filenames: List[str] = dataset.filenames
360 |         # Might be needed in case of escape sequence fuckups
361 |         # self.grp_regex = bytes(grp_regex, "utf-8").decode('unicode_escape')
362 | 
self.grp_regex = grp_regex 363 | 364 | # Configure the shuffling function 365 | self.shuffle: bool = shuffle 366 | self.shuffle_fn: Callable = (lambda x: random.sample(x, len(x))) if self.shuffle else id_ 367 | 368 | print(f"Grouping using {self.grp_regex} regex") 369 | # assert grp_regex == "(patient\d+_\d+)_\d+" 370 | # grouping_regex: Pattern = re.compile("grp_regex") 371 | grouping_regex: Pattern = re.compile(self.grp_regex) 372 | 373 | stems: List[str] = [Path(filename).stem for filename in filenames] # avoid matching the extension 374 | matches: List[Match] = map_(grouping_regex.match, stems) 375 | patients: List[str] = [match.group(0) for match in matches] 376 | 377 | unique_patients: List[str] = list(set(patients)) 378 | assert len(unique_patients) <= len(filenames) 379 | print(f"Found {len(unique_patients)} unique patients out of {len(filenames)} images") 380 | 381 | self.idx_map: Dict[str, List[int]] = dict(zip(unique_patients, repeat(None))) 382 | for i, patient in enumerate(patients): 383 | if not self.idx_map[patient]: 384 | self.idx_map[patient] = [] 385 | 386 | self.idx_map[patient] += [i] 387 | # print(self.idx_map) 388 | assert sum(len(self.idx_map[k]) for k in unique_patients) == len(filenames) 389 | 390 | print("Patient to slices mapping done") 391 | 392 | def __len__(self): 393 | return len(self.idx_map.keys()) 394 | 395 | def __iter__(self): 396 | values = list(self.idx_map.values()) 397 | shuffled = self.shuffle_fn(values) 398 | return iter(shuffled) 399 | 400 | 401 | class RandomSampler(Sampler): 402 | r"""Samples elements randomly. If without replacement, then sample from a shuffled dataset. 403 | If with replacement, then user can specify ``num_samples`` to draw. 404 | 405 | Arguments: 406 | data_source (Dataset): dataset to sample from 407 | replacement (bool): samples are drawn with replacement if ``True``, default=``False`` 408 | num_samples (int): number of samples to draw, default=`len(dataset)`. This argument 409 | is supposed to be specified only when `replacement` is ``True``. 410 | """ 411 | 412 | def __init__(self, data_source, replacement=False, num_samples=None): 413 | self.data_source = data_source 414 | self.replacement = replacement 415 | self._num_samples = num_samples 416 | 417 | if not isinstance(self.replacement, bool): 418 | raise ValueError("replacement should be a boolean value, but got " 419 | "replacement={}".format(self.replacement)) 420 | 421 | if self._num_samples is not None and not replacement: 422 | raise ValueError("With replacement=False, num_samples should not be specified, " 423 | "since a random permute will be performed.") 424 | 425 | if not isinstance(self.num_samples, int) or self.num_samples <= 0: 426 | raise ValueError("num_samples should be a positive integer " 427 | "value, but got num_samples={}".format(self.num_samples)) 428 | 429 | @property 430 | def num_samples(self): 431 | # dataset size might change at runtime 432 | if self._num_samples is None: 433 | return len(self.data_source) 434 | return self._num_samples 435 | 436 | def __iter__(self): 437 | n = len(self.data_source) 438 | if self.replacement: 439 | return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist()) 440 | return iter(torch.randperm(n).tolist()) 441 | 442 | def __len__(self): 443 | return self.num_samples 444 | 445 | 446 | 447 | class ConcatDataset(Dataset): 448 | """ 449 | Dataset to concatenate multiple datasets. 
450 | Purpose: useful to assemble different existing datasets, possibly 451 | large-scale datasets as the concatenation operation is done in an 452 | on-the-fly manner. 453 | 454 | Arguments: 455 | datasets (sequence): List of datasets to be concatenated 456 | """ 457 | 458 | @staticmethod 459 | def cumsum(sequence): 460 | r, s = [], 0 461 | for e in sequence: 462 | l = len(e) 463 | r.append(l + s) 464 | s += l 465 | return r 466 | 467 | def __init__(self, datasets): 468 | super(ConcatDataset, self).__init__() 469 | assert len(datasets) > 0, 'datasets should not be an empty iterable' 470 | self.datasets = list(datasets) 471 | self.cumulative_sizes = self.cumsum(self.datasets) 472 | 473 | def __len__(self): 474 | return self.cumulative_sizes[-1] 475 | 476 | def __getitem__(self, idx): 477 | if idx < 0: 478 | if -idx > len(self): 479 | raise ValueError("absolute value of index should not exceed dataset length") 480 | idx = len(self) + idx 481 | dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) 482 | if dataset_idx == 0: 483 | sample_idx = idx 484 | else: 485 | sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] 486 | return self.datasets[dataset_idx][sample_idx] 487 | 488 | @property 489 | def cummulative_sizes(self): 490 | warnings.warn("cummulative_sizes attribute is renamed to " 491 | "cumulative_sizes", DeprecationWarning, stacklevel=2) 492 | return self.cumulative_sizes 493 | 494 | 495 | class Concat(Dataset): 496 | 497 | def __init__(self, datasets): 498 | self.datasets = datasets 499 | self.lengths = [len(d) for d in datasets] 500 | self.offsets = np.cumsum(self.lengths) 501 | self.length = np.sum(self.lengths) 502 | 503 | def __getitem__(self, index): 504 | for i, offset in enumerate(self.offsets): 505 | if index < offset: 506 | if i > 0: 507 | index -= self.offsets[i-1] 508 | return self.datasets[i][index] 509 | raise IndexError(f'{index} exceeds {self.length}') 510 | 511 | def __len__(self): 512 | return self.length 513 | -------------------------------------------------------------------------------- /dice3d.py: -------------------------------------------------------------------------------- 1 | #!/usr/env/bin python3.6 2 | import numpy 3 | import io 4 | import re 5 | import random 6 | from operator import itemgetter 7 | from pathlib import Path 8 | from itertools import repeat 9 | from functools import partial 10 | from typing import Any, Callable, BinaryIO, Dict, List, Match, Pattern, Tuple, Union 11 | from binary import hd, ravd, hd95, hd_var, assd, asd 12 | import torch 13 | import numpy as np 14 | from torch import Tensor 15 | from PIL import Image 16 | from torchvision import transforms 17 | from torch.utils.data import Dataset, DataLoader 18 | import os 19 | from utils import id_, map_, class2one_hot, resize_im, soft_size 20 | from utils import simplex, sset, one_hot, dice_batch 21 | from argparse import Namespace 22 | import os 23 | import pandas as pd 24 | import imageio 25 | 26 | def dice3d(all_grp,all_inter_card,all_card_gt,all_card_pred,all_pred,all_gt,all_pnames,metric_axis,pprint=False,do_hd=0,do_asd=0,best_epoch_val=0): 27 | _,C = all_card_gt.shape 28 | unique_patients = torch.unique(all_grp) 29 | list(filter(lambda a: a != 0.0, unique_patients)) 30 | unique_patients = [u.item() for u in unique_patients] 31 | batch_dice = torch.zeros((len(unique_patients), C)) 32 | batch_hd = torch.zeros((len(unique_patients), C)) 33 | batch_asd = torch.zeros((len(unique_patients), C)) 34 | if do_hd>0 or do_asd>0: 35 | all_pred = all_pred.cpu().numpy() 36 | all_gt 
= all_gt.cpu().numpy() 37 | # do DICE 38 | for i, p in enumerate(unique_patients): 39 | inter_card_p = torch.einsum("bc->c", [torch.masked_select(all_inter_card, all_grp == p).reshape((-1, C))]) 40 | card_gt_p= torch.einsum("bc->c", [torch.masked_select(all_card_gt, all_grp == p).reshape((-1, C))]) 41 | card_pred_p= torch.einsum("bc->c", [torch.masked_select(all_card_pred, all_grp == p).reshape((-1, C))]) 42 | dice_3d = (2 * inter_card_p + 1e-8) / ((card_pred_p + card_gt_p)+ 1e-8) 43 | batch_dice[i,...] = dice_3d 44 | if pprint: 45 | print(p,dice_3d.cpu()) 46 | indices = torch.tensor(metric_axis) 47 | dice_3d = torch.index_select(batch_dice, 1, indices) 48 | dice_3d_mean = dice_3d.mean(dim=0) 49 | dice_3d_mean = torch.round(dice_3d_mean * 10**4) / (10**4) 50 | dice_3d_sd = dice_3d.std(dim=0) 51 | dice_3d_sd = torch.round(dice_3d_sd * 10**4) / (10**4) 52 | 53 | # do HD and / or ASD 54 | #if dice_3d_mean.mean()>best_epoch_val: 55 | if dice_3d_mean.mean()>0: 56 | for i, p in enumerate(unique_patients): 57 | root_name = [re.split('(\d+)', x.item())[0] for x in all_pnames][0] 58 | bool_p = [int(re.split('_',re.split(root_name,x.item())[1])[0])==p for x in all_pnames] 59 | slices_p = all_pnames[bool_p] 60 | #if do_hd >0 or dice_3d_mean.mean()>best_epoch_val: 61 | if do_hd> 0 or do_asd >0 : 62 | all_gt_p = all_gt[bool_p,:] 63 | all_pred_p = all_pred[bool_p,:] 64 | sn_p = [int(re.split('_',x)[1]) for x in slices_p] 65 | ord_p = np.argsort(sn_p) 66 | label_gt = all_gt_p[ord_p,...] 67 | label_pred = all_pred_p[ord_p,...] 68 | asd_3d_var_vec = [None] * C 69 | hd_3d_var_vec= [None] * C 70 | for j in range(0,C): 71 | label_pred_c = numpy.copy(label_pred) 72 | label_pred_c[label_pred_c!=j]=0 73 | label_pred_c[label_pred_c==j]=1 74 | label_gt_c = numpy.copy(label_gt) 75 | label_gt_c[label_gt!=j]=0 76 | label_gt_c[label_gt==j]=1 77 | if len(np.unique(label_pred_c))>1: # len(np.unique(label_gt_c))>1 should always be true... 78 | if do_hd>0: 79 | if root_name=="Case" or root_name=="slice": 80 | hd_3d_var_vec[j] = hd95(label_pred_c, label_gt_c,[0.6,0.44,0.44]).item() 81 | elif root_name=="Subj_": 82 | hd_3d_var_vec[j] = hd95(label_pred_c, label_gt_c,[1.25,1.25,2]).item() 83 | if do_asd > 0: 84 | asd_3d_var_vec[j] = assd(label_pred_c, label_gt_c).item() #ASD IN VOXEL TO COMPARE TO PNP PAPER 85 | else: 86 | #print(len(np.unique(label_pred_c)),len(np.unique(label_gt_c))) 87 | hd_3d_var_vec[j]=np.NaN 88 | asd_3d_var_vec[j] = np.NaN 89 | if do_hd>0: 90 | hd_3d_var = torch.from_numpy(np.asarray(hd_3d_var_vec)) # np.nanmean(hd_3d_var_vec) 91 | batch_hd[i,...] = hd_3d_var 92 | if do_asd>0: 93 | asd_3d_var = torch.from_numpy(np.asarray(asd_3d_var_vec)) # np.nanmean(hd_3d_var_vec) 94 | batch_asd[i,...] 
= asd_3d_var 95 | 96 | [hd_3d, hd_3d_sd] = get_mean_sd(batch_hd,indices) 97 | [asd_3d, asd_3d_sd] = get_mean_sd(batch_asd,indices) 98 | [dice_3d, dice_3d_sd] = map_(lambda t: t.mean(), [dice_3d_mean.cpu().numpy(), dice_3d_sd.cpu().numpy()]) 99 | if pprint: 100 | print('asd_3d_mean',asd_3d, "asd_3d_sd", asd_3d_sd, "hd_3d_mean", hd_3d, "hd_3d_sd", hd_3d_sd) 101 | [return_asd,return_asd_sd] = [asd_3d.item(),asd_3d_sd.item()] if do_asd >0 else [0,0] 102 | [return_hd,return_hd_sd] = [hd_3d.item(),hd_3d_sd.item()] if do_hd >0 else [0,0] 103 | return dice_3d.item(), dice_3d_sd.item(), return_asd, return_asd_sd,return_hd,return_hd_sd 104 | 105 | 106 | def get_mean_sd(x,indices): 107 | x_ind = torch.index_select(x, 1, indices) 108 | x_mean = x_ind.mean(dim=0) 109 | x_mean = torch.round(x_mean * 10**4) / (10**4) 110 | x_std = x_ind.std(dim=0) 111 | x_std = torch.round(x_std * 10**4) / (10**4) 112 | x_mean, x_std= map_(lambda t: t.mean(), [x_mean,x_std]) 113 | return x_mean,x_std 114 | -------------------------------------------------------------------------------- /ivd.make: -------------------------------------------------------------------------------- 1 | CC = python 2 | SHELL = bash 3 | PP = PYTHONPATH="$(PYTHONPATH):." 4 | 5 | .PHONY: all view plot report 6 | 7 | CFLAGS = -O 8 | #DEBUG = --debug 9 | 10 | #the regex of the subjects in the target dataset 11 | #for the ivd 12 | G_RGX = Subj_\d+ 13 | 14 | TT_DATA = [('Inn', png_transform, False), ('GT', gtpng_transform, False),('GT', gtpng_transform, False)] 15 | S_DATA = [('Wat', png_transform, False), ('GT', gtpng_transform, False),('GT', gtpng_transform, False)] 16 | L_OR = [('CrossEntropy', {'idc': [0,1], 'weights':[1,1]}, None, None, None, 1)] 17 | NET = UNet 18 | 19 | # the folder containing the datasets 20 | B_FOLD = data/ivd_transverse/ 21 | 22 | #the network weights used as initialization of the adaptation 23 | M_WEIGHTS_ul = results/ivd/cesource/last.pkl 24 | 25 | #run the main experiment 26 | TRN = results/ivd/sfda 27 | 28 | REPO = $(shell basename `git rev-parse --show-toplevel`) 29 | DATE = $(shell date +"%y%m%d") 30 | HASH = $(shell git rev-parse --short HEAD) 31 | HOSTNAME = $(shell hostname) 32 | PBASE = archives 33 | PACK = $(PBASE)/$(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-CSize.tar.gz 34 | 35 | all: pack 36 | 37 | plot: $(PLT) 38 | 39 | pack: $(PACK) report 40 | $(PACK): $(TRN) $(INF_0) $(TRN_1) $(INF_1) $(TRN_2) $(TRN_3) $(TRN_4) 41 | mkdir -p $(@D) 42 | tar cf - $^ | pigz > $@ 43 | chmod -w $@ 44 | # tar -zc -f $@ $^ # Use if pigz is not available 45 | 46 | # first train on the source dataset only: 47 | results/ivd/cesource: OPT = --target_losses="$(L_OR)" --target_folders="$(S_DATA)" --val_target_folders="$(S_DATA)" \ 48 | --network UNet --model_weights="" --lr_decay 1 --l_rate 5e-4 \ 49 | 50 | # full supervision 51 | results/ivd/fs: OPT = --target_losses="$(L_OR)" \ 52 | --network UNet --model_weights="$(M_WEIGHTS_uce)" --lr_decay 1 \ 53 | 54 | # SFDA. 
Remove --saveim True --entmap --do_asd 1 --do_hd 1 to speed up 55 | results/ivd/sfda: OPT = --target_losses="[('EntKLProp', {'lamb_se':1, 'lamb_consprior':1,'ivd':True,'weights_se':[0.1,0.9],'idc_c': [1],'curi':True,'power': 1},'PredictionBounds', \ 56 | {'margin':0,'dir':'high','idc':[0,1],'predcol':'dumbpredwtags','power': 1, 'mode':'percentage','sep':',','sizefile':'sizes/ivd.csv'},'norm_soft_size',1)]" --lr_decay 0.2 \ 57 | #--saveim True --entmap --do_asd 1 --do_hd 1 \ 58 | 59 | #inference mode : saves the segmentation masks + entropy masks for a specific model saved as pkl file (ex. "$(M_WEIGHTS_ul)" below): 60 | results/ivd/cesourceim: OPT = --target_losses="$(L_OR)" \ 61 | --mode makeim --batch_size 1 --l_rate 0 --pprint --n_epoch 1 --saveim True --entmap \ 62 | 63 | $(TRN) : 64 | $(CC) $(CFLAGS) main_sfda.py --batch_size 22 --n_class 2 --workdir $@_tmp --target_dataset "$(B_FOLD)" \ 65 | --grp_regex="$(G_RGX)" --target_folders="$(TT_DATA)" --val_target_folders="$(TT_DATA)"\ 66 | --model_weights="$(M_WEIGHTS_ul)" --network=$(NET) \ 67 | --lr_decay 0.9 --metric_axis 1 --n_epoch 150 --dice_3d --l_rate 3e-5 --lr_decay_epoch 50 --weight_decay 1e-4 $(OPT) $(DEBUG)\ 68 | 69 | mv $@_tmp $@ 70 | 71 | 72 | -------------------------------------------------------------------------------- /layers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3.6 2 | 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | 6 | 7 | def convBatch(nin, nout, kernel_size=3, stride=1, padding=1, bias=False, layer=nn.Conv2d, dilation=1): 8 | return nn.Sequential( 9 | layer(nin, nout, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias, dilation=dilation), 10 | nn.BatchNorm2d(nout), 11 | nn.PReLU() 12 | ) 13 | 14 | 15 | def downSampleConv(nin, nout, kernel_size=3, stride=2, padding=1, bias=False): 16 | return nn.Sequential( 17 | convBatch(nin, nout, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias), 18 | ) 19 | 20 | 21 | class interpolate(nn.Module): 22 | def __init__(self, scale_factor, mode='nearest'): 23 | super().__init__() 24 | 25 | self.scale_factor = scale_factor 26 | self.mode = mode 27 | 28 | def forward(self, cin): 29 | return F.interpolate(cin, mode=self.mode, scale_factor=self.scale_factor) 30 | 31 | 32 | def upSampleConv(nin, nout, kernel_size=3, upscale=2, padding=1, bias=False): 33 | return nn.Sequential( 34 | # nn.Upsample(scale_factor=upscale), 35 | interpolate(mode='nearest', scale_factor=upscale), 36 | convBatch(nin, nout, kernel_size=kernel_size, stride=1, padding=padding, bias=bias), 37 | convBatch(nout, nout, kernel_size=3, stride=1, padding=1, bias=bias), 38 | ) 39 | 40 | 41 | def conv_block(in_dim, out_dim, act_fn, kernel_size=3, stride=1, padding=1, dilation=1): 42 | model = nn.Sequential( 43 | nn.Conv2d(in_dim, out_dim, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation), 44 | nn.BatchNorm2d(out_dim), 45 | act_fn, 46 | ) 47 | return model 48 | 49 | 50 | def conv_block_1(in_dim, out_dim): 51 | model = nn.Sequential( 52 | nn.Conv2d(in_dim, out_dim, kernel_size=1), 53 | nn.BatchNorm2d(out_dim), 54 | nn.PReLU(), 55 | ) 56 | return model 57 | 58 | 59 | def conv_block_Asym(in_dim, out_dim, kernelSize): 60 | model = nn.Sequential( 61 | nn.Conv2d(in_dim, out_dim, kernel_size=[kernelSize, 1], padding=tuple([2, 0])), 62 | nn.Conv2d(out_dim, out_dim, kernel_size=[1, kernelSize], padding=tuple([0, 2])), 63 | nn.BatchNorm2d(out_dim), 64 | nn.PReLU(), 65 | ) 66 | return model 67 
| 68 | 69 | def conv_block_3_3(in_dim, out_dim): 70 | model = nn.Sequential( 71 | nn.Conv2d(in_dim, out_dim, kernel_size=3, padding=1), 72 | nn.BatchNorm2d(out_dim), 73 | nn.PReLU(), 74 | ) 75 | return model 76 | 77 | 78 | def conv_block_3(in_dim, out_dim, act_fn): 79 | model = nn.Sequential( 80 | conv_block(in_dim, out_dim, act_fn), 81 | conv_block(out_dim, out_dim, act_fn), 82 | nn.Conv2d(out_dim, out_dim, kernel_size=3, stride=1, padding=1), 83 | nn.BatchNorm2d(out_dim), 84 | ) 85 | return model 86 | 87 | 88 | def conv(nin, nout, kernel_size=3, stride=1, padding=1, bias=False, layer=nn.Conv2d, 89 | BN=False, ws=False, activ=nn.LeakyReLU(0.2), gainWS=2): 90 | convlayer = layer(nin, nout, kernel_size, stride=stride, padding=padding, bias=bias) 91 | layers = [] 92 | if BN: 93 | layers.append(nn.BatchNorm2d(nout)) 94 | if activ is not None: 95 | if activ == nn.PReLU: 96 | # to avoid sharing the same parameter, activ must be set to nn.PReLU (without '()') 97 | layers.append(activ(num_parameters=1)) 98 | else: 99 | # if activ == nn.PReLU(), the parameter will be shared for the whole network ! 100 | layers.append(activ) 101 | layers.insert(ws, convlayer) 102 | return nn.Sequential(*layers) 103 | 104 | 105 | # TODO: Change order of block: BN + Activation + Conv 106 | def conv_decod_block(in_dim, out_dim, act_fn): 107 | model = nn.Sequential( 108 | nn.ConvTranspose2d(in_dim, out_dim, kernel_size=3, stride=2, padding=1, output_padding=1), 109 | nn.BatchNorm2d(out_dim), 110 | act_fn, 111 | ) 112 | return model 113 | 114 | 115 | def maxpool(): 116 | pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) 117 | return pool 118 | 119 | 120 | # For UNet 121 | class residualConv(nn.Module): 122 | def __init__(self, nin, nout): 123 | super(residualConv, self).__init__() 124 | self.convs = nn.Sequential( 125 | convBatch(nin, nout), 126 | nn.Conv2d(nout, nout, kernel_size=3, stride=1, padding=1), 127 | nn.BatchNorm2d(nout) 128 | ) 129 | self.res = nn.Sequential() 130 | if nin != nout: 131 | self.res = nn.Sequential( 132 | nn.Conv2d(nin, nout, kernel_size=1, bias=False), 133 | nn.BatchNorm2d(nout) 134 | ) 135 | 136 | def forward(self, input): 137 | out = self.convs(input) 138 | return F.leaky_relu(out + self.res(input), 0.2) 139 | -------------------------------------------------------------------------------- /lib.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | #import tensorflow as tf 4 | import nibabel as nib 5 | 6 | #### Helpers for file IOs 7 | def _read_lists(fid): 8 | """ 9 | Read all kinds of lists from text file to python lists 10 | """ 11 | if not os.path.isfile(fid): 12 | return None 13 | with open(fid,'r') as fd: 14 | _list = fd.readlines() 15 | 16 | my_list = [] 17 | for _item in _list: 18 | if len(_item) < 3: 19 | _list.remove(_item) 20 | my_list.append(_item.split('\n')[0]) 21 | return my_list 22 | 23 | def _save(sess, model_path, global_step): 24 | """ 25 | Saves the current session to a checkpoint 26 | """ 27 | saver = tf.train.Saver() 28 | save_path = saver.save(sess, model_path, global_step = global_step) 29 | return save_path 30 | 31 | def _save_nii_prediction(gth, comp_pred, ref_fid, out_folder, out_bname, debug = False): 32 | """ 33 | save prediction, sample and gth to nii file given a reference 34 | """ 35 | # first write prediction 36 | ref_obj = read_nii_object(ref_fid) 37 | ref_affine = ref_obj.get_affine() 38 | out_bname = out_bname.split(".")[0] + ".nii.gz" 39 | write_nii(comp_pred, out_bname, 
out_folder, affine = ref_affine) 40 | 41 | # then write sample 42 | _local_gth = gth.copy() 43 | _local_gth[_local_gth > self.num_cls - 1] = 0 44 | out_label_bname = "gth_" + out_bname 45 | write_nii(_local_gth, out_label_bname, out_folder, affine = ref_affine) 46 | 47 | def write_nii(array_data, filename, path = "", affine = None): 48 | """write np array into nii file""" 49 | #print(array_data.shape) 50 | if affine is None: 51 | print("No information about the global coordinate system") 52 | affine = np.diag([1,1,1,1]) 53 | #pdb.set_trace() 54 | #TODO: to check if it works 55 | # array_data = np.int16(array_data) 56 | array_img = nib.Nifti1Image(array_data, affine) 57 | save_fid = os.path.join(path,filename) 58 | try: 59 | array_img.to_filename(save_fid) 60 | print("Nii object %s has been saved!"%save_fid) 61 | except: 62 | raise Exception("file %s cannot be saved!"%save_fid) 63 | return save_fid 64 | 65 | def read_nii_image(input_fid): 66 | """read the nii image data into numpy array""" 67 | img = nib.load(input_fid) 68 | return img.get_data() 69 | 70 | def read_nii_object(input_fid): 71 | """ directly read the nii object """ 72 | #pdb.set_trace() 73 | return nib.load(input_fid) 74 | 75 | #### Helpers for evaluations 76 | def _label_decomp(num_cls, label_vol): 77 | """ 78 | decompose label for softmax classifier 79 | original labels are batchsize * W * H * 1, with label values 0,1,2,3... 80 | this function decompse it to one hot, e.g.: 0,0,0,1,0,0 in channel dimension 81 | numpy version of tf.one_hot 82 | """ 83 | _batch_shape = list(label_vol.shape) 84 | _vol = np.zeros(_batch_shape) 85 | _vol[label_vol == 0] = 1 86 | _vol = _vol[..., np.newaxis] 87 | for i in range(num_cls): 88 | if i == 0: 89 | continue 90 | _n_slice = np.zeros(label_vol.shape) 91 | _n_slice[label_vol == i] = 1 92 | _vol = np.concatenate( (_vol, _n_slice[..., np.newaxis]), axis = 3 ) 93 | return np.float32(_vol) 94 | 95 | 96 | 97 | def _dice_eval(compact_pred, labels, n_class): 98 | """ 99 | calculate standard dice for evaluation, here uses the class prediction, not the probability 100 | """ 101 | dice_arr = [] 102 | dice = 0 103 | eps = 1e-7 104 | pred = tf.one_hot(compact_pred, depth = n_class, axis = -1) 105 | for i in range(n_class): 106 | inse = tf.reduce_sum(pred[:, :, :, i] * labels[:, :, :, i]) 107 | union = tf.reduce_sum(pred[:, :, :, i]) + tf.reduce_sum(labels[:, :, :, i]) 108 | dice = dice + 2.0 * inse / (union + eps) 109 | dice_arr.append(2.0 * inse / (union + eps)) 110 | 111 | return 1.0 * dice / n_class, dice_arr 112 | 113 | 114 | def _inverse_lookup(my_dict, _value): 115 | 116 | for key, dic_value in list(my_dict.items()): 117 | if dic_value == _value: 118 | return key 119 | return None 120 | 121 | 122 | def _jaccard(conf_matrix): 123 | """ 124 | calculate jaccard similarity from confusion_matrix 125 | """ 126 | num_cls = conf_matrix.shape[0] 127 | jac = np.zeros(num_cls) 128 | for ii in range(num_cls): 129 | pp = np.sum(conf_matrix[:,ii]) 130 | gp = np.sum(conf_matrix[ii,:]) 131 | hit = conf_matrix[ii,ii] 132 | if (pp + gp -hit) == 0: 133 | jac[ii] = 0 134 | else: 135 | jac[ii] = hit * 1.0 / (pp + gp - hit ) 136 | return jac 137 | 138 | 139 | def _dice(conf_matrix): 140 | """ 141 | calculate dice coefficient from confusion_matrix 142 | """ 143 | num_cls = conf_matrix.shape[0] 144 | dic = np.zeros(num_cls) 145 | for ii in range(num_cls): 146 | pp = np.sum(conf_matrix[:,ii]) 147 | gp = np.sum(conf_matrix[ii,:]) 148 | hit = conf_matrix[ii,ii] 149 | if (pp + gp) == 0: 150 | dic[ii] = 0 151 | else: 152 | 
dic[ii] = 2.0 * hit / (pp + gp) 153 | return dic 154 | 155 | 156 | def _indicator_eval(cm): 157 | """ 158 | Decompose confusion matrix and get statistics 159 | """ 160 | contour_map = { # a map used for mapping label value to its name, used for output 161 | "bg": 0, 162 | "la_myo": 1, 163 | "la_blood": 2, 164 | "lv_blood": 3, 165 | "aa": 4 166 | } 167 | 168 | dice = _dice(cm) 169 | jaccard = _jaccard(cm) 170 | print(cm) 171 | for organ, ind in list(contour_map.items()): 172 | print(( "organ: %s"%organ )) 173 | print(( "dice: %s"%(dice[int(ind)] ) )) 174 | print(( "jaccard: %s"%(jaccard[int(ind)] ) )) 175 | 176 | return dice, jaccard 177 | 178 | -------------------------------------------------------------------------------- /losses.py: -------------------------------------------------------------------------------- 1 | #!/usr/env/bin python3.6 2 | 3 | from typing import List, Tuple 4 | # from functools import reduce 5 | from operator import add 6 | from functools import reduce 7 | import numpy as np 8 | import torch 9 | from torch import einsum 10 | from torch import Tensor 11 | import pandas as pd 12 | 13 | from utils import simplex, sset, probs2one_hot 14 | import torch.nn.modules.padding 15 | from torch.nn import BCEWithLogitsLoss 16 | 17 | class DiceLoss(): 18 | def __init__(self, **kwargs): 19 | # Self.idc is used to filter out some classes of the target mask. Use fancy indexing 20 | self.idc: List[int] = kwargs["idc"] 21 | #self.nd: str = kwargs["nd"] 22 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 23 | 24 | def __call__(self, probs: Tensor, target: Tensor, _: Tensor) -> Tensor: 25 | assert simplex(probs) and simplex(target) 26 | 27 | pc = probs[:, self.idc, ...].type(torch.float32) 28 | tc = target[:, self.idc, ...].type(torch.float32) 29 | 30 | intersection: Tensor = einsum(f"bcwh,bcwh->bc", pc, tc) 31 | union: Tensor = (einsum(f"bkwh->bk", pc) + einsum(f"bkwh->bk", tc)) 32 | 33 | divided: Tensor = torch.ones_like(intersection) - (2 * intersection + 1e-10) / (union + 1e-10) 34 | 35 | loss = divided.mean() 36 | 37 | return loss 38 | 39 | class EntKLProp(): 40 | """ 41 | Entropy minimization with KL proportion regularisation 42 | """ 43 | def __init__(self, **kwargs): 44 | self.power: int = kwargs["power"] 45 | self.__fn__ = getattr(__import__('utils'), kwargs['fn']) 46 | self.curi: bool = kwargs["curi"] 47 | self.idc: bool = kwargs["idc_c"] 48 | self.ivd: bool = kwargs["ivd"] 49 | self.weights: List[float] = kwargs["weights_se"] 50 | self.lamb_se: float = kwargs["lamb_se"] 51 | self.lamb_consprior: float = kwargs["lamb_consprior"] 52 | 53 | def __call__(self, probs: Tensor, target: Tensor, bounds) -> Tensor: 54 | assert simplex(probs) # and simplex(target) # Actually, does not care about second part 55 | b, _, w, h = probs.shape # type: Tuple[int, int, int, int] 56 | predicted_mask = probs2one_hot(probs).detach() 57 | est_prop_mask = self.__fn__(predicted_mask,self.power).squeeze(2) 58 | est_prop: Tensor = self.__fn__(probs,self.power) 59 | if self.curi: 60 | if self.ivd: 61 | bounds = bounds[:,:,0] 62 | bounds= bounds.unsqueeze(2) 63 | gt_prop = torch.ones_like(est_prop)*bounds/(w*h) 64 | gt_prop = gt_prop[:,:,0] 65 | else: 66 | gt_prop: Tensor = self.__fn__(target,self.power) # the power here is actually useless if we have 0/1 gt labels 67 | if not self.curi: 68 | gt_prop = gt_prop.squeeze(2) 69 | est_prop = est_prop.squeeze(2) 70 | log_est_prop: Tensor = (est_prop + 1e-10).log() 71 | 72 | log_gt_prop: Tensor = (gt_prop + 1e-10).log() 73 | log_est_prop_mask: 
Tensor = (est_prop_mask + 1e-10).log() 74 | 75 | loss_cons_prior = - torch.einsum("bc,bc->", [est_prop, log_gt_prop]) + torch.einsum("bc,bc->", [est_prop, log_est_prop]) 76 | # Adding division by batch_size to normalise 77 | loss_cons_prior /= b 78 | log_p: Tensor = (probs + 1e-10).log() 79 | mask: Tensor = probs.type((torch.float32)) 80 | mask_weighted = torch.einsum("bcwh,c->bcwh", [mask, Tensor(self.weights).to(mask.device)]) 81 | loss_se = - torch.einsum("bcwh,bcwh->", [mask_weighted, log_p]) 82 | loss_se /= mask.sum() + 1e-10 83 | 84 | assert loss_se.requires_grad == probs.requires_grad # Handle the case for validation 85 | 86 | return self.lamb_se*loss_se, self.lamb_consprior*loss_cons_prior,est_prop 87 | 88 | 89 | class SelfEntropy(): 90 | def __init__(self, **kwargs): 91 | # Self.idc is used to filter out some classes of the target mask. Use fancy indexing 92 | self.idc: List[int] = kwargs["idc"] 93 | self.weights: List[float] = kwargs["weights"] 94 | self.dtype = kwargs["dtype"] 95 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 96 | 97 | def __call__(self, probs: Tensor, target: Tensor, bounds: Tensor) -> Tensor: 98 | assert simplex(probs) and simplex(target) 99 | 100 | log_p: Tensor = (probs[:, self.idc, ...] + 1e-10).log() 101 | mask: Tensor = probs[:, self.idc, ...].type((torch.float32)) 102 | mask_weighted = torch.einsum("bcwh,c->bcwh", [mask, Tensor(self.weights).to(mask.device)]) 103 | loss = - torch.einsum("bcwh,bcwh->", [mask_weighted, log_p]) 104 | loss /= mask.sum() + 1e-10 105 | 106 | return loss 107 | 108 | 109 | class CrossEntropy(): 110 | def __init__(self, **kwargs): 111 | # Self.idc is used to filter out some classes of the target mask. Use fancy indexing 112 | self.idc: List[int] = kwargs["idc"] 113 | self.weights: List[float] = kwargs["weights"] 114 | self.dtype = kwargs["dtype"] 115 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 116 | 117 | def __call__(self, probs: Tensor, target: Tensor, bounds: Tensor) -> Tensor: 118 | assert simplex(probs) and simplex(target) 119 | 120 | log_p: Tensor = (probs[:, self.idc, ...] + 1e-10).log() 121 | mask: Tensor = target[:, self.idc, ...].type((torch.float32)) 122 | mask_weighted = torch.einsum("bcwh,c->bcwh", [mask, Tensor(self.weights).to(mask.device)]) 123 | loss = - torch.einsum("bcwh,bcwh->", [mask_weighted, log_p]) 124 | loss /= mask.sum() + 1e-10 125 | return loss 126 | 127 | 128 | class BCELoss(): 129 | def __init__(self, **kwargs): 130 | # Self.idc is used to filter out some classes of the target mask. 
Use fancy indexing 131 | self.idc: List[int] = kwargs["idc"] 132 | self.dtype = kwargs["dtype"] 133 | #print(f"Initialized {self.__class__.__name__} with {kwargs}") 134 | 135 | def __call__(self, d_out: Tensor, label: float): 136 | bce_loss = torch.nn.BCEWithLogitsLoss() 137 | loss = bce_loss(d_out,Tensor(d_out.data.size()).fill_(label).to(d_out.device)) 138 | return loss 139 | 140 | 141 | class BCEGDice(): 142 | def __init__(self, **kwargs): 143 | self.idc: List[int] = kwargs["idc"] 144 | self.lamb: List[int] = kwargs["lamb"] 145 | self.weights: List[float] = kwargs["weights"] 146 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 147 | def __call__(self, probs: Tensor, target: Tensor, _: Tensor) -> Tensor: 148 | assert simplex(probs) and simplex(target) 149 | 150 | pc = probs[:, self.idc, ...].type(torch.float32) 151 | tc = target[:, self.idc, ...].type(torch.float32) 152 | 153 | w: Tensor = 1 / ((einsum("bcwh->bc", tc).type(torch.float32) + 1e-10) ** 2) 154 | intersection: Tensor = w * einsum("bcwh,bcwh->bc", pc, tc) 155 | union: Tensor = w * (einsum("bcwh->bc", pc) + einsum("bcwh->bc", tc)) 156 | 157 | divided: Tensor = 1 - 2 * (einsum("bc->b", intersection) + 1e-10) / (einsum("bc->b", union) + 1e-10) 158 | 159 | loss_gde = divided.mean() 160 | 161 | log_p: Tensor = (probs[:, self.idc, ...] + 1e-10).log() 162 | mask_weighted = torch.einsum("bcwh,c->bcwh", [tc, Tensor(self.weights).to(tc.device)]) 163 | loss_ce = - torch.einsum("bcwh,bcwh->", [mask_weighted, log_p]) 164 | loss_ce /= tc.sum() + 1e-10 165 | loss = loss_ce + self.lamb*loss_gde 166 | 167 | return loss 168 | 169 | 170 | 171 | class GeneralizedDice(): 172 | def __init__(self, **kwargs): 173 | # Self.idc is used to filter out some classes of the target mask. Use fancy indexing 174 | self.idc: List[int] = kwargs["idc"] 175 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 176 | 177 | def __call__(self, probs: Tensor, target: Tensor, _: Tensor) -> Tensor: 178 | assert simplex(probs) and simplex(target) 179 | 180 | pc = probs[:, self.idc, ...].type(torch.float32) 181 | tc = target[:, self.idc, ...].type(torch.float32) 182 | 183 | w: Tensor = 1 / ((einsum("bcwh->bc", tc).type(torch.float32) + 1e-10) ** 2) 184 | intersection: Tensor = w * einsum("bcwh,bcwh->bc", pc, tc) 185 | union: Tensor = w * (einsum("bcwh->bc", pc) + einsum("bcwh->bc", tc)) 186 | 187 | divided: Tensor = 1 - 2 * (einsum("bc->b", intersection) + 1e-10) / (einsum("bc->b", union) + 1e-10) 188 | 189 | loss = divided.mean() 190 | 191 | return loss 192 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | #/usr/bin/env python3.6 2 | import re 3 | import argparse 4 | import warnings 5 | from pathlib import Path 6 | from operator import itemgetter 7 | from shutil import copytree, rmtree 8 | import typing 9 | from typing import Any, Callable, List, Tuple 10 | import matplotlib.pyplot as plt 11 | import torch 12 | import numpy as np 13 | import pandas as pd 14 | import torch.nn.functional as F 15 | from torch import Tensor 16 | from torch.utils.data import DataLoader 17 | from dice3d import dice3d, dice3dn 18 | from networks import weights_init 19 | from dataloader import get_loaders 20 | from utils import map_, save_dict_to_file 21 | from utils import dice_coef, dice_batch, save_images,save_images_p, tqdm_ 22 | from utils import probs2one_hot, probs2class, mask_resize, resize, haussdorf 23 | from utils import 
exp_lr_scheduler
 24 | import datetime
 25 | from itertools import cycle
 26 | import os
 27 | from time import sleep
 28 | 
 29 | import matplotlib.pyplot as plt
 30 | 
 31 | 
 32 | def setup(args, n_class, dtype) -> Tuple[Any, Any, Any, List[Callable], List[float], List[Callable], List[float], Callable]:
 33 |     print(">>> Setting up")
 34 |     cpu: bool = args.cpu or not torch.cuda.is_available()
 35 |     device = torch.device("cpu") if cpu else torch.device("cuda")
 36 | 
 37 |     if args.model_weights:
 38 |         if cpu:
 39 |             net = torch.load(args.model_weights, map_location='cpu')
 40 |         else:
 41 |             net = torch.load(args.model_weights)
 42 |     else:
 43 |         net_class = getattr(__import__('networks'), args.network)
 44 |         net = net_class(1, n_class).type(dtype).to(device)
 45 |         net.apply(weights_init)
 46 |     net.to(device)
 47 | 
 48 |     optimizer = torch.optim.Adam(net.parameters(), lr=args.l_rate, betas=(0.9, 0.999))
 49 | 
 50 |     print(args.target_losses)
 51 |     losses = eval(args.target_losses)
 52 |     loss_fns: List[Callable] = []
 53 |     for loss_name, loss_params, _, _, fn, _ in losses:
 54 |         loss_class = getattr(__import__('losses'), loss_name)
 55 |         loss_fns.append(loss_class(**loss_params, dtype=dtype, fn=fn))
 56 | 
 57 |     loss_weights = map_(itemgetter(5), losses)
 58 | 
 59 |     print(args.source_losses)
 60 |     losses_source = eval(args.source_losses)
 61 |     loss_fns_source: List[Callable] = []
 62 |     for loss_name, loss_params, _, _, fn, _ in losses_source:
 63 |         loss_class = getattr(__import__('losses'), loss_name)
 64 |         loss_fns_source.append(loss_class(**loss_params, dtype=dtype, fn=fn))
 65 | 
 66 |     loss_weights_source = map_(itemgetter(5), losses_source)
 67 | 
 68 |     if args.scheduler:
 69 |         scheduler = getattr(__import__('scheduler'), args.scheduler)(**eval(args.scheduler_params))
 70 |     else:
 71 |         scheduler = ''
 72 | 
 73 |     return net, optimizer, device, loss_fns, loss_weights, loss_fns_source, loss_weights_source, scheduler
 74 | 
 75 | 
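The `--target_losses` and `--source_losses` flags carry Python literals that `setup()` evaluates and turns into loss objects, as in the makefiles. A minimal sketch of that parsing step, with `DummyCE` as a stand-in defined here (it is not a class from `losses.py`):

```python
# Sketch of how setup() maps a loss-spec string to instantiated losses.
# Spec format, from the --*_losses help string:
#   (loss_name, loss_params, bounds_name, bounds_params, fn, weight)
from operator import itemgetter

class DummyCE:
    def __init__(self, **kwargs):
        print(f"Initialized {self.__class__.__name__} with {kwargs}")

spec = "[('DummyCE', {'idc': [0, 1], 'weights': [1, 1]}, None, None, None, 1)]"
losses = eval(spec)  # the repo trusts its own makefiles here

# setup() resolves loss_name via getattr(__import__('losses'), loss_name);
# globals() plays that role in this self-contained sketch.
loss_fns = [globals()[name](**params, dtype='torch.float32', fn=fn)
            for name, params, _, _, fn, _ in losses]
loss_weights = list(map(itemgetter(5), losses))
print(loss_weights)  # -> [1]
```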
 76 | def do_epoch(args, mode: str, net: Any, device: Any, loader: DataLoader, epc: int,
 77 |              loss_fns: List[Callable], loss_weights: List[float], loss_fns_source: List[Callable],
 78 |              loss_weights_source: List[float], new_w: int, num_steps: int, C: int, metric_axis: List[int], savedir: str = "",
 79 |              optimizer: Any = None, target_loader: Any = None):
 80 | 
 81 |     assert mode in ["train", "val"]
 82 |     L: int = len(loss_fns)
 83 |     indices = torch.tensor(metric_axis, device=device)
 84 |     if mode == "train":
 85 |         net.train()
 86 |         desc = f">> Training ({epc})"
 87 |     elif mode == "val":
 88 |         net.eval()
 89 |         # net.train()
 90 |         desc = f">> Validation ({epc})"
 91 | 
 92 |     total_it_s, total_images = len(loader), len(loader.dataset)
 93 |     total_it_t, total_images_t = len(target_loader), len(target_loader.dataset)
 94 |     total_iteration = max(total_it_s, total_it_t)
 95 |     # Lazy add lines below because we will be cycling until the biggest length is reached
 96 |     total_images = max(total_images, total_images_t)
 97 |     total_images_t = total_images
 98 | 
 99 |     pho = 1
100 |     dtype = eval(args.dtype)
101 | 
102 |     all_dices: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
103 |     all_inter_card: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
104 |     all_card_gt: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
105 |     all_card_pred: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
106 |     loss_log: Tensor = torch.zeros((total_images), dtype=dtype, device=device)
107 |     loss_inf: Tensor = torch.zeros((total_images), dtype=dtype, device=device)
108 |     loss_cons: Tensor = torch.zeros((total_images), dtype=dtype, device=device)
109 |     loss_fs: Tensor = torch.zeros((total_images), dtype=dtype, device=device)
110 |     posim_log: Tensor = torch.zeros((total_images), dtype=dtype, device=device)
111 |     haussdorf_log: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
112 |     all_grp: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
113 |     dice_3d_log: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
114 |     dice_3d_sd_log: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
115 | 
116 |     if args.source_metrics == True:
117 |         all_dices_s: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
118 |         all_inter_card_s: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
119 |         all_card_gt_s: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
120 |         all_card_pred_s: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
121 |         all_grp_s: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
122 |         dice_3d_s_log: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
123 |         dice_3d_s_sd_log: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device)
124 |     # if len(loader)>len(target_loader):
125 |     #     tq_iter = tqdm_(enumerate(zip(loader, cycle(target_loader))), total=total_iteration, desc=desc)
126 |     # elif len(loader) < len(target_loader):
127 |     #     tq_iter = tqdm_(enumerate(zip(cycle(loader), target_loader)), total=total_iteration, desc=desc)
128 | 
129 |     tq_iter = tqdm_(enumerate(zip(loader, cycle(target_loader))), total=total_iteration, desc=desc)
130 |     done: int = 0
131 | 
132 |     n_warmup = 0
133 | 
134 |     mult_lw = [pho ** (epc - n_warmup + 1)] * len(loss_weights)
135 | 
136 |     # if epc > 100:
137 |     #     mult_lw = [pho ** 100] * len(loss_weights)
138 |     mult_lw[0] = 1
139 |     loss_weights = [a * b for a, b in zip(loss_weights, mult_lw)]
140 |     losses_vec, source_vec, target_vec, baseline_target_vec = [], [], [], []
141 |     pen_count = 0
142 |     with warnings.catch_warnings():
143 |         warnings.simplefilter("ignore")
144 |         for j, (source_data, target_data) in tq_iter:
145 |             #for j, target_data in tq_iter:
146 |             source_data[1:] = [e.to(device) for e in source_data[1:]]  # Move all tensors to device
147 |             filenames_source, source_image, source_gt = source_data[:3]
148 |             target_data[1:] = [e.to(device) for e in target_data[1:]]  # Move all tensors to device
149 |             filenames_target, target_image, target_gt = target_data[:3]
150 |             labels = target_data[3:3 + L]
151 |             labels_source = source_data[3:3 + L]
152 |             bounds = target_data[3 + L:]
153 |             bounds_source = source_data[3 + L:]
154 | 
155 |             assert len(labels) == len(bounds), len(bounds)
156 |             if args.mix == False:
157 |                 assert filenames_source == filenames_target
158 |                 #print(filenames_source,filenames_target)
159 |             B = len(target_image)
160 |             # Reset gradients
161 |             if optimizer:
162 |                 #adjust_learning_rate(optimizer, 1, args.l_rate, num_steps, args.power)
163 |                 optimizer.zero_grad()
164 | 
165 |             # Forward
166 |             with torch.set_grad_enabled(mode == "train"):
167 |                 pred_logits: Tensor = net(target_image)
168 |                 pred_logits_source: Tensor = net(source_image)
169 |                 pred_probs: Tensor = F.softmax(pred_logits, dim=1)
170 |                 pred_probs_source: Tensor = F.softmax(pred_logits_source, dim=1)
171 |                 if new_w > 0:
172 |                     pred_probs = resize(pred_probs, new_w)
173 |                     labels = [resize(label, new_w) for label in labels]
174 |                     target_gt = resize(target_gt, new_w)  # keep the ground truth at the same resolution as the predictions
175 |                 predicted_mask: Tensor = probs2one_hot(pred_probs)  # Used only for dice computation
176 |                 predicted_mask_source: Tensor = probs2one_hot(pred_probs_source)  # Used only for dice computation
177 |                 #print(torch.sum(predicted_mask, dim=[2,3]).cpu().numpy())
178 |                 #print(list(map(lambda n: [int(f) for f in n], np.around(torch.sum(pred_probs, dim=[2,3]).detach().cpu().numpy()))))
179 |                 assert len(bounds) == len(loss_fns) == len(loss_weights)
180 |                 if epc < n_warmup:
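                    # Warm-up: while epc < n_warmup every loss weight is zeroed, so
                    # early epochs can run forward passes and metrics without applying
                    # any adaptation signal.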
181 | loss_weights = [0]*len(loss_weights) 182 | loss: Tensor = torch.zeros(1, requires_grad=True).to(device) 183 | loss_vec = [] 184 | loss_k = [] 185 | for loss_fn,label, w, bound in zip(loss_fns,labels, loss_weights, bounds): 186 | if w > 0: 187 | if args.lin_aug_w: 188 | if epc<70: 189 | w=w*(epc+1)/70 190 | loss_b = loss_fn(pred_probs, label, bound) 191 | loss = loss + w * loss_b 192 | #pen_count += count_b.detach() 193 | #print(count_b.detach()) 194 | loss_k.append(w*loss_b.detach()) 195 | #for loss_fn, label, w, bound in zip(loss_fns_source, [source_gt], loss_weights_source, torch.randn(1)): 196 | #for loss_fn, label, w, bound in zip(loss_fns_source, labels_source, loss_weights_source, torch.randn(1)): 197 | for loss_fn, label, w, bound in zip(loss_fns_source, labels_source, loss_weights_source, bounds_source): 198 | if w > 0: 199 | loss_b = loss_fn(pred_probs_source, label, bound) 200 | loss = loss+ w * loss_b 201 | loss_k.append(w*loss_b.detach()) 202 | #print(loss_k) 203 | # Backward 204 | if optimizer: 205 | loss.backward() 206 | optimizer.step() 207 | 208 | # Compute and log metrics 209 | #dices: Tensor = dice_coef(predicted_mask.detach(), target.detach()) 210 | # baseline_dices: Tensor = dice_coef(labels[0].detach(), target.detach()) 211 | #batch_dice: Tensor = dice_batch(predicted_mask.detach(), target.detach()) 212 | # assert batch_dice.shape == (C,) and dices.shape == (B, C), (batch_dice.shape, dices.shape, B, C) 213 | dices, inter_card, card_gt, card_pred = dice_coef(predicted_mask.detach(), target_gt.detach()) 214 | assert dices.shape == (B, C), (dices.shape, B, C) 215 | 216 | sm_slice = slice(done, done + B) # Values only for current batch 217 | all_dices[sm_slice, ...] = dices 218 | # # for 3D dice 219 | all_grp[sm_slice, ...] = int(re.split('_', filenames_target[0])[1]) * torch.ones([1, C]) 220 | all_inter_card[sm_slice, ...] = inter_card 221 | all_card_gt[sm_slice, ...] = card_gt 222 | all_card_pred[sm_slice, ...] = card_pred 223 | 224 | # 3D dice on source 225 | if args.source_metrics ==True: 226 | dices_s, inter_card_s, card_gt_s, card_pred_s = dice_coef(predicted_mask_source.detach(), source_gt.detach()) 227 | all_grp_s[sm_slice, ...] = int(re.split('_', filenames_source[0])[1]) * torch.ones([1, C]) 228 | all_inter_card_s[sm_slice, ...] = inter_card_s 229 | all_card_gt_s[sm_slice, ...] = card_gt_s 230 | all_card_pred_s[sm_slice, ...] 
= card_pred_s 231 | 232 | #loss_log[sm_slice] = loss.detach() 233 | 234 | loss_inf[sm_slice] = loss_k[0] 235 | if len(loss_k)>1: 236 | loss_cons[sm_slice] = loss_k[1] 237 | else: 238 | loss_cons[sm_slice] = 0 239 | if len(loss_k)>2: 240 | loss_fs[sm_slice] = loss_k[2] 241 | else: 242 | loss_fs[sm_slice] = 0 243 | #posim_log[sm_slice] = torch.einsum("bcwh->b", [target_gt[:, 1:, :, :]]).detach() > 0 244 | 245 | #haussdorf_res: Tensor = haussdorf(predicted_mask.detach(), target_gt.detach(), dtype) 246 | #assert haussdorf_res.shape == (B, C) 247 | #haussdorf_log[sm_slice] = haussdorf_res 248 | 249 | # # Save images 250 | if savedir: 251 | with warnings.catch_warnings(): 252 | warnings.filterwarnings("ignore", category=UserWarning) 253 | warnings.simplefilter("ignore") 254 | predicted_class: Tensor = probs2class(pred_probs) 255 | #save_images_p(predicted_class, filenames_target, args.dataset, mode, epc, False) 256 | save_images(predicted_class, filenames_target, savedir, mode, epc, True) 257 | 258 | # Logging 259 | big_slice = slice(0, done + B) # Value for current and previous batches 260 | stat_dict = {"dice": torch.index_select(all_dices, 1, indices).mean(), 261 | "loss": loss_log[big_slice].mean()} 262 | nice_dict = {k: f"{v:.4f}" for (k, v) in stat_dict.items()} 263 | 264 | done += B 265 | tq_iter.set_postfix(nice_dict) 266 | print(f"{desc} " + ', '.join(f"{k}={v}" for (k, v) in nice_dict.items())) 267 | #dice_posim = torch.masked_select(all_dices[:, -1], posim_log.type(dtype=torch.uint8)).mean() 268 | # dice3D gives back the 3d dice mai on images 269 | # if not args.debug: 270 | # dice_3d_log_o, dice_3d_sd_log_o = dice3d(args.workdir, f"iter{epc:03d}", mode, "Subj_\\d+_",args.dataset + mode + '/CT_GT', C) 271 | 272 | dice_3d_log, dice_3d_sd_log = dice3dn(all_grp, all_inter_card, all_card_gt, all_card_pred,metric_axis,True) 273 | if args.source_metrics ==True: 274 | dice_3d_s_log, dice_3d_s_sd_log = dice3dn(all_grp_s, all_inter_card_s, all_card_gt_s, all_card_pred_s,metric_axis,True) 275 | print("mean 3d_dice over all patients:",dice_3d_log) 276 | #source_vec = [ dice_3d_s, dice_3d_sd_s, haussdorf_log_s] 277 | dice_2d = torch.index_select(all_dices, 1, indices).mean().cpu().numpy() 278 | target_vec = [ dice_3d_log, dice_3d_sd_log, dice_2d] 279 | if args.source_metrics ==True: 280 | source_vec = [ dice_3d_s_log, dice_3d_s_sd_log] 281 | else: 282 | source_vec = [0,0] 283 | #losses_vec = [loss_log.mean().item()] 284 | losses_vec = [loss_inf.mean().item(),loss_cons.mean().item(),loss_fs.mean().item()] 285 | return losses_vec, target_vec,source_vec 286 | 287 | 288 | def run(args: argparse.Namespace) -> None: 289 | # save args to dict 290 | d = vars(args) 291 | d['time'] = str(datetime.datetime.now()) 292 | save_dict_to_file(d,args.workdir) 293 | 294 | temperature: float = 0.1 295 | n_class: int = args.n_class 296 | metric_axis: List = args.metric_axis 297 | lr: float = args.l_rate 298 | dtype = eval(args.dtype) 299 | 300 | # Proper params 301 | savedir: str = args.workdir 302 | n_epoch: int = args.n_epoch 303 | 304 | net, optimizer, device, loss_fns, loss_weights, loss_fns_source, loss_weights_source, scheduler = setup(args, n_class, dtype) 305 | print(f'> Loss weights cons: {loss_weights}, Loss weights source:{loss_weights_source}') 306 | shuffle = False 307 | #if args.mix: 308 | # shuffle = True 309 | #print("args.dataset",args.dataset) 310 | loader, loader_val = get_loaders(args, args.dataset,args.source_folders, 311 | args.batch_size, n_class, 312 | args.debug, args.in_memory, dtype, 
False,fix_size=[0,0]) 313 | 314 | target_loader, target_loader_val = get_loaders(args, args.target_dataset,args.target_folders, 315 | args.batch_size, n_class, 316 | args.debug, args.in_memory, dtype, shuffle,fix_size=[0,0]) 317 | 318 | num_steps = n_epoch * len(loader) 319 | #print(num_steps) 320 | print("metric axis",metric_axis) 321 | best_dice_pos: Tensor = np.zeros(1) 322 | best_dice: Tensor = np.zeros(1) 323 | best_2d_dice: Tensor = np.zeros(1) 324 | best_3d_dice: Tensor = np.zeros(1) 325 | best_3d_dice_source: Tensor = np.zeros(1) 326 | 327 | print("Results saved in ", savedir) 328 | print(">>> Starting the training") 329 | for i in range(n_epoch): 330 | 331 | tra_losses_vec, tra_target_vec,tra_source_vec = do_epoch(args, "train", net, device, 332 | loader, i, loss_fns, 333 | loss_weights, 334 | loss_fns_source, 335 | loss_weights_source, 336 | args.resize, 337 | num_steps, n_class, metric_axis, 338 | savedir="", 339 | optimizer=optimizer, 340 | target_loader=target_loader) 341 | 342 | with torch.no_grad(): 343 | val_losses_vec, val_target_vec,val_source_vec = do_epoch(args, "val", net, device, 344 | loader_val, i, loss_fns, 345 | loss_weights, 346 | loss_fns_source, 347 | loss_weights_source, 348 | args.resize, 349 | num_steps, n_class,metric_axis, 350 | savedir=savedir, 351 | target_loader=target_loader_val) 352 | 353 | #if i == 0: 354 | # keep_tra_baseline_target_vec = tra_baseline_target_vec 355 | # keep_val_baseline_target_vec = val_baseline_target_vec 356 | # print(keep_val_baseline_target_vec) 357 | 358 | # print(val_target_vec) 359 | # df_t_tmp = pd.DataFrame({ 360 | # "val_dice_3d": [val_target_vec[0]], 361 | # "val_dice_3d_sd": [val_target_vec[1]]}) 362 | 363 | df_s_tmp = pd.DataFrame({ 364 | "tra_dice_3d": [tra_source_vec[0]], 365 | "tra_dice_3d_sd": [tra_source_vec[1]], 366 | "val_dice_3d": [val_source_vec[0]], 367 | "val_dice_3d_sd": [val_source_vec[1]]}) 368 | 369 | if i == 0: 370 | df_s = df_s_tmp 371 | else: 372 | df_s = df_s.append(df_s_tmp) 373 | 374 | df_s.to_csv(Path(savedir, "_".join((args.source_folders.split("'")[1],"source", args.csv))), float_format="%.4f", index_label="epoch") 375 | 376 | 377 | df_t_tmp = pd.DataFrame({ 378 | "tra_loss_inf":[tra_losses_vec[0]], 379 | "tra_loss_cons":[tra_losses_vec[1]], 380 | "tra_loss_fs":[tra_losses_vec[2]], 381 | "val_loss_inf":[val_losses_vec[0]], 382 | "val_loss_cons":[val_losses_vec[1]], 383 | "val_loss_fs":[val_losses_vec[2]], 384 | "tra_dice_3d": [tra_target_vec[0]], 385 | "tra_dice_3d_sd": [tra_target_vec[1]], 386 | "tra_dice": [tra_target_vec[2]], 387 | "val_dice_3d": [val_target_vec[0]], 388 | "val_dice_3d_sd": [val_target_vec[1]], 389 | 'val_dice': [val_target_vec[2]]}) 390 | 391 | if i == 0: 392 | df_t = df_t_tmp 393 | else: 394 | df_t = df_t.append(df_t_tmp) 395 | 396 | df_t.to_csv(Path(savedir, "_".join((args.target_folders.split("'")[1],"target", args.csv))), float_format="%.4f", index_label="epoch") 397 | 398 | # Save model if better 399 | current_val_target_2d_dice = val_target_vec[2] 400 | ''' 401 | if current_val_target_2d_dice > best_2d_dice: 402 | best_epoch = i 403 | best_2d_dice = current_val_target_2d_dice 404 | with open(Path(savedir, "best_epoch_2.txt"), 'w') as f: 405 | f.write(str(i)) 406 | best_folder_2d = Path(savedir, "best_epoch_2d") 407 | if best_folder_2d.exists(): 408 | rmtree(best_folder_2d) 409 | copytree(Path(savedir, f"iter{i:03d}"), Path(best_folder_2d)) 410 | torch.save(net, Path(savedir, "best_2d.pkl")) 411 | ''' 412 | current_val_target_3d_dice = val_target_vec[0] 413 | 414 | if 
current_val_target_3d_dice > best_3d_dice:
415 |             best_epoch = i
416 |             best_3d_dice = current_val_target_3d_dice
417 |             with open(Path(savedir, "best_epoch_3d.txt"), 'w') as f:
418 |                 f.write(str(i))
419 |             best_folder_3d = Path(savedir, "best_epoch_3d")
420 |             if best_folder_3d.exists():
421 |                 rmtree(best_folder_3d)
422 |             copytree(Path(savedir, f"iter{i:03d}"), Path(best_folder_3d))
423 |             torch.save(net, Path(savedir, "best_3d.pkl"))
424 | 
425 |         # Save source model if better
426 |         current_val_source_3d_dice = val_source_vec[0]
427 | 
428 |         if current_val_source_3d_dice > best_3d_dice_source:
429 |             best_epoch = i
430 |             best_3d_dice_source = current_val_source_3d_dice  # update the running best so later epochs are compared against it
431 |             with open(Path(savedir, "best_epoch_3d_source.txt"), 'w') as f:
432 |                 f.write(str(i))
433 |             torch.save(net, Path(savedir, "best_3d_source.pkl"))
434 | 
435 |         if i == n_epoch - 1:
436 |             with open(Path(savedir, "last_epoch.txt"), 'w') as f:
437 |                 f.write(str(i))
438 |             last_folder = Path(savedir, "last_epoch")
439 |             if last_folder.exists():
440 |                 rmtree(last_folder)
441 |             copytree(Path(savedir, f"iter{i:03d}"), Path(last_folder))
442 |             torch.save(net, Path(savedir, "last.pkl"))
443 | 
444 |         # remove images from iteration
445 |         rmtree(Path(savedir, f"iter{i:03d}"))
446 | 
447 |         if args.flr == False:
448 |             #adjust_learning_rate(optimizer, i, args.l_rate, n_epoch, 0.9)
449 |             exp_lr_scheduler(optimizer, i, args.lr_decay)
450 |     print("Results saved in ", savedir)
451 | 
452 | 
453 | def get_args() -> argparse.Namespace:
454 |     parser = argparse.ArgumentParser(description='Hyperparams')
455 |     parser.add_argument('--dataset', type=str, required=True)
456 |     parser.add_argument('--target_dataset', type=str, required=True)
457 |     # parser.add_argument('--weak_subfolder', type=str, required=True)
458 |     parser.add_argument("--workdir", type=str, required=True)
459 |     parser.add_argument("--target_losses", type=str, required=True,
460 |                         help="List of (loss_name, loss_params, bounds_name, bounds_params, fn, weight)")
461 |     parser.add_argument("--source_losses", type=str, required=True,
462 |                         help="List of (loss_name, loss_params, bounds_name, bounds_params, fn, weight)")
463 |     parser.add_argument("--source_folders", type=str, required=True,
464 |                         help="List of (subfolder, transform, is_hot)")
465 |     parser.add_argument("--target_folders", type=str, required=True,
466 |                         help="List of (subfolder, transform, is_hot)")
467 |     parser.add_argument("--network", type=str, required=True, help="The network to use")
468 |     parser.add_argument("--grp_regex", type=str, required=True)
469 |     parser.add_argument("--n_class", type=int, required=True)
470 | 
471 |     parser.add_argument("--lin_aug_w", action="store_true")
472 |     parser.add_argument("--flr", action="store_true")
473 |     parser.add_argument("--augment", action="store_true")
474 |     parser.add_argument("--mix", type=bool, default=True)
475 |     parser.add_argument("--debug", action="store_true")
476 |     parser.add_argument("--csv", type=str, default='metrics.csv')
477 |     parser.add_argument("--source_metrics", action="store_true")
478 |     parser.add_argument("--model_weights", type=str, default='')
479 |     parser.add_argument("--cpu", action='store_true')
480 |     parser.add_argument("--in_memory", action='store_true')
481 |     parser.add_argument("--resize", type=int, default=0)
482 |     # parser.add_argument("--weak", action="store_true")
483 |     parser.add_argument("--pho", nargs='?', type=float, default=1,
484 |                         help='augment')
485 |     parser.add_argument('--n_epoch', nargs='?', type=int, default=200,
486 |                         help='# of the epochs')
487 |     
parser.add_argument('--l_rate', nargs='?', type=float, default=5e-4, 488 | help='Learning Rate') 489 | parser.add_argument('--lr_decay', nargs='?', type=float, default=0.7), 490 | parser.add_argument('--weight_decay', nargs='?', type=float, default=1e-5, 491 | help='L2 regularisation of network weights') 492 | parser.add_argument('--batch_size', type=int, default=1) 493 | parser.add_argument("--dtype", type=str, default="torch.float32") 494 | parser.add_argument("--scheduler", type=str, default="DummyScheduler") 495 | parser.add_argument("--scheduler_params", type=str, default="{}") 496 | parser.add_argument("--bounds_on_fgt", type=bool, default=False) 497 | parser.add_argument("--bounds_on_train_stats", type=str, default='') 498 | parser.add_argument("--power",type=float, default=0.9) 499 | parser.add_argument("--metric_axis",type=int, nargs='*', required=True, help="Classes to display metrics. \ 500 | Display only the average of everything if empty") 501 | args = parser.parse_args() 502 | print(args) 503 | 504 | return args 505 | 506 | 507 | if __name__ == '__main__': 508 | 509 | # for i in range(0,15): 510 | 511 | # args=argparse.Namespace(augment=False, batch_size=12, bounds_on_fgt=False, bounds_on_train_stats='', cpu=False, csv='metrics.csv', dataset='data/all_transverse', debug=False, dtype='torch.float32', flr=False, grp_regex='Subj_\\d+_\\d+', in_memory=False, l_rate=0.0005, lin_aug_w=False, metric_axis=[1], mix=True, model_weights='results/Inn/JCESource/best_3d.pkl', n_class=2, n_epoch=10, network='ENet', pho=1, power=0.9, resize=0, scheduler='DummyScheduler', scheduler_params='{}', source_folders="[('Inn', png_transform, False), ('GT', gt_transform, True),('WatonInn_pjce', gt_transform, True)]", source_losses="[('CrossEntropy', {'idc': [0,1], 'weights':[1,1]}, None, None, None, 1)]", source_metrics=False, target_dataset='data/all_transverse', target_folders="[('Inn', png_transform, False), ('GT', gt_transform, True),('WatonInn_pjce', gt_transform, True)]", target_losses="[('NaivePenalty', {'idc': [1],'power': 1},'PreciseBoundsOnWeakWTags', {'margin':0.1,'idc':[1], 'power': 1, 'mode':'percentage'},'soft_size',80)]", weight_decay=0.0001, workdir='results/Inn/PWSizeLossNs_jce/') 512 | # args = argparse.Namespace(batch_size=4,cpu=False, csv='metrics.csv', dataset='data/all_transverse', 513 | # target_dataset='data/all_transverse', mix=True, metric_axis=[1], augment=False, 514 | # debug=False, dtype='torch.float32', power=0.9,lin_aug_w=False, 515 | # bounds_on_fgt=False, bounds_on_train_stats='', 516 | # folders="[('Wat', png_transform, False), ('GT', gt_transform, False)," 517 | # "('GT', gt_transform, False)]",flr=False, 518 | # target_folders="[('Inn', png_transform, False), ('GT', gt_transform, False)]+" 519 | # "[('GT', gt_transform, False),('GT', gt_transform, False),('GT', gt_transform, False)]", 520 | # grp_regex='Subj_\\d+_\\d+', in_memory=False, l_rate=0.0005, weight_decay=1e-4, 521 | # losses="[('NaivePenalty', {'idc': [1]},'PredictionBoundswTags', " 522 | # " {'margin':0.1,'idc':[1], 'mode':'percentage','net': 'results/ls_winr2/pred_size40.pkl'} , 'soft_size',1),('SelfEntropy', {'idc': [0,1], 'weights':[1,1]}, None, None, None,0 )," 523 | # " ('CEProp', {'fgt':True, 'power': 3}, 'PredictionValues',{'margin':0.1,'mode':'percentage','idc':[1],'sizefile':'results/trainval_size_Inn/ls_winr2_40/trainvalreg_metrics_C2.csv'}, 'norm_soft_size', 1)]", 524 | # losses_source="[('CrossEntropy', {'idc': [0,1], 'weights':[1,1]}, None, None, None, 1)]", 525 | # 
model_weights='results/all_transverse/fse/best_3d.pkl', n_class=2, n_epoch=150, network='ENet', pho=1.0, resize=0, 526 | # scheduler='DummyScheduler', scheduler_params='{}', workdir='results/Inn/foo') 527 | # 528 | run(get_args()) 529 | # run(args) 530 | -------------------------------------------------------------------------------- /main_sfda.py: -------------------------------------------------------------------------------- 1 | #/usr/bin/env python3.6 2 | import math 3 | import re 4 | import argparse 5 | import warnings 6 | warnings.filterwarnings("ignore") 7 | from pathlib import Path 8 | from operator import itemgetter 9 | from shutil import copytree, rmtree 10 | import typing 11 | from typing import Any, Callable, List, Tuple 12 | import matplotlib.pyplot as plt 13 | import torch 14 | import numpy as np 15 | import pandas as pd 16 | import torch.nn.functional as F 17 | from torch import Tensor 18 | from torch.utils.data import DataLoader 19 | from dice3d import dice3d 20 | from networks import weights_init 21 | from dataloader import get_loaders 22 | from utils import map_, save_dict_to_file,get_subj_nb 23 | from utils import dice_coef, dice_batch, save_images,save_images_p,save_be_images, tqdm_, save_images_ent 24 | from utils import probs2one_hot, probs2class, mask_resize, resize, haussdorf 25 | from utils import exp_lr_scheduler 26 | import datetime 27 | from itertools import cycle 28 | import os 29 | from time import sleep 30 | from bounds import CheckBounds 31 | import matplotlib.pyplot as plt 32 | from itertools import chain 33 | import platform 34 | 35 | 36 | 37 | def setup(args, n_class, dtype) -> Tuple[Any, Any, Any, List[Callable], List[float],List[Callable], List[float], Callable]: 38 | print(">>> Setting up") 39 | cpu: bool = args.cpu or not torch.cuda.is_available() 40 | if cpu: 41 | print("WARNING CUDA NOT AVAILABLE") 42 | device = torch.device("cpu") if cpu else torch.device("cuda") 43 | n_epoch = args.n_epoch 44 | if args.model_weights: 45 | if cpu: 46 | net = torch.load(args.model_weights, map_location='cpu') 47 | else: 48 | net = torch.load(args.model_weights) 49 | else: 50 | net_class = getattr(__import__('networks'), args.network) 51 | net = net_class(1, n_class).type(dtype).to(device) 52 | net.apply(weights_init) 53 | net.to(device) 54 | if args.saveim: 55 | print("WARNING: Saving masks at each epc") 56 | 57 | optimizer = torch.optim.Adam(net.parameters(), lr=args.l_rate, betas=(0.9, 0.999),weight_decay=args.weight_decay) 58 | if args.adamw: 59 | optimizer = torch.optim.AdamW(net.parameters(), lr=args.l_rate, betas=(0.9, 0.999)) 60 | 61 | print(args.target_losses) 62 | losses = eval(args.target_losses) 63 | loss_fns: List[Callable] = [] 64 | for loss_name, loss_params, _, bounds_params, fn, _ in losses: 65 | loss_class = getattr(__import__('losses'), loss_name) 66 | loss_fns.append(loss_class(**loss_params, dtype=dtype, fn=fn)) 67 | print("bounds_params", bounds_params) 68 | if bounds_params!=None: 69 | bool_predexist = CheckBounds(**bounds_params) 70 | print(bool_predexist,"size predictor") 71 | if not bool_predexist: 72 | n_epoch = 0 73 | 74 | loss_weights = map_(itemgetter(5), losses) 75 | 76 | if args.scheduler: 77 | scheduler = getattr(__import__('scheduler'), args.scheduler)(**eval(args.scheduler_params)) 78 | else: 79 | scheduler = '' 80 | 81 | return net, optimizer, device, loss_fns, loss_weights, scheduler, n_epoch 82 | 83 | 84 | def do_epoch(args, mode: str, net: Any, device: Any, epc: int, 85 | loss_fns: List[Callable], loss_weights: List[float], 86 | 
new_w:int, C: int, metric_axis:List[int], savedir: str = "", 87 | optimizer: Any = None, target_loader: Any = None, best_dice3d_val:Any=None): 88 | 89 | assert mode in ["train", "val"] 90 | L: int = len(loss_fns) 91 | indices = torch.tensor(metric_axis,device=device) 92 | if mode == "train": 93 | net.train() 94 | desc = f">> Training ({epc})" 95 | elif mode == "val": 96 | net.eval() 97 | # net.train() 98 | desc = f">> Validation ({epc})" 99 | 100 | total_it_t, total_images_t = len(target_loader), len(target_loader.dataset) 101 | total_iteration = total_it_t 102 | total_images = total_images_t 103 | 104 | if args.debug: 105 | total_iteration = 10 106 | pho=1 107 | dtype = eval(args.dtype) 108 | 109 | all_dices: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 110 | all_sizes: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 111 | all_sizes: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 112 | all_gt_sizes: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 113 | all_sizes2: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 114 | all_inter_card: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 115 | all_card_gt: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 116 | all_card_pred: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 117 | all_gt = [] 118 | all_pred = [] 119 | if args.do_hd: 120 | all_gt: Tensor = torch.zeros((total_images, args.wh, args.wh), dtype=dtype) 121 | all_pred: Tensor = torch.zeros((total_images, args.wh, args.wh), dtype=dtype) 122 | loss_log: Tensor = torch.zeros((total_images), dtype=dtype, device=device) 123 | loss_cons: Tensor = torch.zeros((total_images), dtype=dtype, device=device) 124 | loss_se: Tensor = torch.zeros((total_images), dtype=dtype, device=device) 125 | loss_tot: Tensor = torch.zeros((total_images), dtype=dtype, device=device) 126 | posim_log: Tensor = torch.zeros((total_images), dtype=dtype, device=device) 127 | haussdorf_log: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 128 | all_grp: Tensor = torch.zeros((total_images, C), dtype=dtype, device=device) 129 | all_pnames = np.zeros([total_images]).astype('U256') 130 | #dice_3d_log: Tensor = torch.zeros((1, C), dtype=dtype, device=device) 131 | dice_3d_log, dice_3d_sd_log = 0, 0 132 | #dice_3d_sd_log: Tensor = torch.zeros((1, C), dtype=dtype, device=device) 133 | hd_3d_log, asd_3d_log, hd_3d_sd_log, asd_3d_sd_log= 0, 0, 0, 0 134 | tq_iter = tqdm_(enumerate(target_loader), total=total_iteration, desc=desc) 135 | done: int = 0 136 | n_warmup = args.n_warmup 137 | mult_lw = [pho ** (epc - n_warmup + 1)] * len(loss_weights) 138 | mult_lw[0] = 1 139 | loss_weights = [a * b for a, b in zip(loss_weights, mult_lw)] 140 | losses_vec, source_vec, target_vec, baseline_target_vec = [], [], [], [] 141 | pen_count = 0 142 | with warnings.catch_warnings(): 143 | warnings.simplefilter("ignore") 144 | count_losses = 0 145 | for j, target_data in tq_iter: 146 | target_data[1:] = [e.to(device) for e in target_data[1:]] # Move all tensors to device 147 | filenames_target, target_image, target_gt = target_data[:3] 148 | #print("target", filenames_target) 149 | labels = target_data[3:3+L] 150 | bounds = target_data[3+L:] 151 | filenames_target = [f.split('.nii')[0] for f in filenames_target] 152 | assert len(labels) == len(bounds), len(bounds) 153 | B = len(target_image) 154 | # Reset gradients 155 | if optimizer: 156 | optimizer.zero_grad() 157 | 158 | 
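            # The forward pass below runs with gradients only in "train" mode. For
            # EntKLProp the loss comes back in two parts, an entropy term and a KL
            # class-ratio prior term (see losses.py); they are logged separately
            # further down as loss_se and loss_cons.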
# Forward 159 | with torch.set_grad_enabled(mode == "train"): 160 | pred_logits: Tensor = net(target_image) 161 | pred_probs: Tensor = F.softmax(pred_logits, dim=1) 162 | if new_w > 0: 163 | pred_probs = resize(pred_probs, new_w) 164 | labels = [resize(label, new_w) for label in labels] 165 | target = resize(target, new_w) 166 | predicted_mask: Tensor = probs2one_hot(pred_probs) # Used only for dice computation 167 | assert len(bounds) == len(loss_fns) == len(loss_weights) 168 | if epc < n_warmup: 169 | loss_weights = [0]*len(loss_weights) 170 | loss: Tensor = torch.zeros(1, requires_grad=True).to(device) 171 | loss_vec = [] 172 | loss_kw = [] 173 | for loss_fn,label, w, bound in zip(loss_fns,labels, loss_weights, bounds): 174 | if w > 0: 175 | if eval(args.target_losses)[0][0]=="EntKLProp": 176 | loss_1, loss_cons_prior,est_prop = loss_fn(pred_probs, label, bound) 177 | loss = loss_1 + loss_cons_prior 178 | else: 179 | loss = loss_fn(pred_probs, label, bound) 180 | loss = w*loss 181 | loss_1 = loss 182 | loss_kw.append(loss_1.detach()) 183 | # Backward 184 | if optimizer: 185 | loss.backward() 186 | optimizer.step() 187 | dices, inter_card, card_gt, card_pred = dice_coef(predicted_mask.detach(), target_gt.detach()) 188 | assert dices.shape == (B, C), (dices.shape, B, C) 189 | sm_slice = slice(done, done + B) # Values only for current batch 190 | all_dices[sm_slice, ...] = dices 191 | if eval(args.target_losses)[0][0] in ["EntKLProp"]: 192 | all_sizes[sm_slice, ...] = torch.round(est_prop.detach()*target_image.shape[2]*target_image.shape[3]) 193 | all_sizes2[sm_slice, ...] = torch.sum(predicted_mask,dim=(2,3)) 194 | all_gt_sizes[sm_slice, ...] = torch.sum(target_gt,dim=(2,3)) 195 | all_grp[sm_slice, ...] = torch.FloatTensor(get_subj_nb(filenames_target)).unsqueeze(1).repeat(1,C) 196 | all_pnames[sm_slice] = filenames_target 197 | all_inter_card[sm_slice, ...] = inter_card 198 | all_card_gt[sm_slice, ...] = card_gt 199 | all_card_pred[sm_slice, ...] = card_pred 200 | if args.do_hd: 201 | all_pred[sm_slice, ...] = probs2class(predicted_mask[:,:,:,:]).cpu().detach() 202 | all_gt[sm_slice, ...] 
= probs2class(target_gt).detach() 203 | loss_se[sm_slice] = loss_kw[0] 204 | if len(loss_kw)>1: 205 | loss_cons[sm_slice] = loss_kw[1] 206 | loss_tot[sm_slice] = loss_kw[1]+loss_kw[0] 207 | else: 208 | loss_cons[sm_slice] = 0 209 | loss_tot[sm_slice] = loss_kw[0] 210 | # # Save images 211 | if savedir and args.saveim and mode =="val": 212 | with warnings.catch_warnings(): 213 | warnings.filterwarnings("ignore", category=UserWarning) 214 | warnings.simplefilter("ignore") 215 | predicted_class: Tensor = probs2class(pred_probs) 216 | save_images(predicted_class, filenames_target, savedir, mode, epc, False) 217 | if args.entmap: 218 | ent_map = torch.einsum("bcwh,bcwh->bwh", [-pred_probs, (pred_probs+1e-10).log()]) 219 | save_images_ent(ent_map, filenames_target, savedir,'ent_map', epc) 220 | 221 | # Logging 222 | big_slice = slice(0, done + B) # Value for current and previous batches 223 | stat_dict = {**{f"DSC{n}": all_dices[big_slice, n].mean() for n in metric_axis}, 224 | **{f"SZ{n}": all_sizes[big_slice, n].mean() for n in metric_axis}, 225 | **({f"DSC_source{n}": all_dices_s[big_slice, n].mean() for n in metric_axis} 226 | if args.source_metrics else {})} 227 | 228 | size_dict = {**{f"SZ{n}": all_sizes[big_slice, n].mean() for n in metric_axis}} 229 | nice_dict = {k: f"{v:.4f}" for (k, v) in stat_dict.items()} 230 | done += B 231 | tq_iter.set_postfix(nice_dict) 232 | if args.dice_3d and (mode == 'val'): 233 | dice_3d_log, dice_3d_sd_log,asd_3d_log, asd_3d_sd_log,hd_3d_log, hd_3d_sd_log = dice3d(all_grp, all_inter_card, all_card_gt, all_card_pred,all_pred,all_gt,all_pnames,metric_axis,args.pprint,args.do_hd,args.do_asd,best_dice3d_val) 234 | dice_2d = torch.index_select(all_dices, 1, indices).mean().cpu().numpy().item() 235 | target_vec = [dice_3d_log, dice_3d_sd_log,asd_3d_log, asd_3d_sd_log,hd_3d_log,hd_3d_sd_log,dice_2d] 236 | size_mean = torch.index_select(all_sizes2, 1, indices).mean(dim=0).cpu().numpy() 237 | size_gt_mean = torch.index_select(all_gt_sizes, 1, indices).mean(dim=0).cpu().numpy() 238 | mask_pos = torch.index_select(all_sizes2, 1, indices)!=0 239 | gt_pos = torch.index_select(all_gt_sizes, 1, indices)!=0 240 | size_mean_pos = torch.index_select(all_sizes2, 1, indices).sum(dim=0).cpu().numpy()/mask_pos.sum(dim=0).cpu().numpy() 241 | gt_size_mean_pos = torch.index_select(all_gt_sizes, 1, indices).sum(dim=0).cpu().numpy()/gt_pos.sum(dim=0).cpu().numpy() 242 | size_mean2 = torch.index_select(all_sizes2, 1, indices).mean(dim=0).cpu().numpy() 243 | losses_vec = [loss_se.mean().item(),loss_cons.mean().item(),loss_tot.mean().item(),size_mean.mean(),size_mean_pos.mean(),size_gt_mean.mean(),gt_size_mean_pos.mean()] 244 | if not epc%50: 245 | df_t = pd.DataFrame({ 246 | "val_ids":all_pnames, 247 | "proposal_size":all_sizes2.cpu()}) 248 | df_t.to_csv(Path(savedir,mode+str(epc)+"sizes.csv"), float_format="%.4f", index_label="epoch") 249 | return losses_vec, target_vec,source_vec 250 | 251 | 252 | 253 | def run(args: argparse.Namespace) -> None: 254 | # save args to dict 255 | d = vars(args) 256 | d['time'] = str(datetime.datetime.now()) 257 | d['server']=platform.node() 258 | save_dict_to_file(d,args.workdir) 259 | 260 | temperature: float = 0.1 261 | n_class: int = args.n_class 262 | metric_axis: List = args.metric_axis 263 | lr: float = args.l_rate 264 | dtype = eval(args.dtype) 265 | 266 | # Proper params 267 | savedir: str = args.workdir 268 | n_epoch: int = args.n_epoch 269 | 270 | net, optimizer, device, loss_fns, loss_weights, scheduler, n_epoch = setup(args, n_class, dtype) 
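The `EntKLProp` objective selected in `prostate.make` is easiest to see on toy numbers. Below is a minimal sketch of its proportion term; the `est_prop` values are invented, while the prior matches a `dumbpredwtags` row of `sizes/prostate.csv` for a 385x385 slice (8000/148225 foreground):

```python
# Toy illustration (not repo code) of the KL proportion term of EntKLProp,
# for a single 2-class "image" (batch size b = 1).
import torch

est_prop = torch.tensor([[0.93, 0.07]])   # class proportions of the softmax prediction (made up)
gt_prop = torch.tensor([[140225. / 148225., 8000. / 148225.]])  # class-ratio prior

# cross-entropy(est, prior) minus entropy(est) == KL(est_prop || gt_prop)
loss_cons_prior = (-(est_prop * (gt_prop + 1e-10).log()).sum()
                   + (est_prop * (est_prop + 1e-10).log()).sum())
print(loss_cons_prior)  # small and >= 0; exactly 0 iff the proportions match the prior
```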
271 | shuffle = True 272 | print(args.target_folders) 273 | target_loader, target_loader_val = get_loaders(args, args.target_dataset,args.target_folders, 274 | args.batch_size, n_class, 275 | args.debug, args.in_memory, dtype, shuffle, "target", args.val_target_folders) 276 | 277 | print("metric axis",metric_axis) 278 | best_dice_pos: Tensor = np.zeros(1) 279 | best_dice: Tensor = np.zeros(1) 280 | best_hd3d_dice: Tensor = np.zeros(1) 281 | best_3d_dice: Tensor = 0 282 | best_2d_dice: Tensor = 0 283 | print("Results saved in ", savedir) 284 | print(">>> Starting the training") 285 | for i in range(n_epoch): 286 | 287 | if args.mode =="makeim": 288 | with torch.no_grad(): 289 | 290 | val_losses_vec, val_target_vec,val_source_vec = do_epoch(args, "val", net, device, 291 | i, loss_fns, 292 | loss_weights, 293 | args.resize, 294 | n_class,metric_axis, 295 | savedir=savedir, 296 | target_loader=target_loader_val, best_dice3d_val=best_3d_dice) 297 | tra_losses_vec = val_losses_vec 298 | tra_target_vec = val_target_vec 299 | tra_source_vec = val_source_vec 300 | else: 301 | tra_losses_vec, tra_target_vec,tra_source_vec = do_epoch(args, "train", net, device, 302 | i, loss_fns, 303 | loss_weights, 304 | args.resize, 305 | n_class, metric_axis, 306 | savedir=savedir, 307 | optimizer=optimizer, 308 | target_loader=target_loader, best_dice3d_val=best_3d_dice) 309 | 310 | with torch.no_grad(): 311 | val_losses_vec, val_target_vec,val_source_vec = do_epoch(args, "val", net, device, 312 | i, loss_fns, 313 | loss_weights, 314 | args.resize, 315 | n_class,metric_axis, 316 | savedir=savedir, 317 | target_loader=target_loader_val, best_dice3d_val=best_3d_dice) 318 | 319 | current_val_target_3d_dice = val_target_vec[0] 320 | if args.dice_3d: 321 | if current_val_target_3d_dice > best_3d_dice: 322 | best_3d_dice = current_val_target_3d_dice 323 | with open(Path(savedir, "3dbestepoch.txt"), 'w') as f: 324 | f.write(str(i)+','+str(best_3d_dice)) 325 | best_folder_3d = Path(savedir, "best_epoch_3d") 326 | if best_folder_3d.exists(): 327 | rmtree(best_folder_3d) 328 | if args.saveim: 329 | copytree(Path(savedir, f"iter{i:03d}"), Path(best_folder_3d)) 330 | torch.save(net, Path(savedir, "best_3d.pkl")) 331 | 332 | 333 | if not(i % 10) : 334 | print("epoch",str(i),savedir,'best 3d dice',best_3d_dice) 335 | torch.save(net, Path(savedir, "epoch_"+str(i)+".pkl")) 336 | 337 | if i == n_epoch - 1: 338 | with open(Path(savedir, "last_epoch.txt"), 'w') as f: 339 | f.write(str(i)) 340 | last_folder = Path(savedir, "last_epoch") 341 | if last_folder.exists(): 342 | rmtree(last_folder) 343 | if args.saveim: 344 | copytree(Path(savedir, f"iter{i:03d}"), Path(last_folder)) 345 | torch.save(net, Path(savedir, "last.pkl")) 346 | 347 | # remove images from iteration 348 | if args.saveim: 349 | rmtree(Path(savedir, f"iter{i:03d}")) 350 | 351 | if args.source_metrics: 352 | df_s_tmp = pd.DataFrame({ 353 | "val_dice_3d": [val_source_vec[0]], 354 | "val_dice_3d_sd": [val_source_vec[1]], 355 | "val_dice_2d": [val_source_vec[2]]}) 356 | if i == 0: 357 | df_s = df_s_tmp 358 | else: 359 | df_s = df_s.append(df_s_tmp) 360 | df_s.to_csv(Path(savedir, "_".join((args.source_folders.split("'")[1],"source", args.csv))), float_format="%.4f", index_label="epoch") 361 | df_t_tmp = pd.DataFrame({ 362 | "epoch":i, 363 | "tra_loss_s":[tra_losses_vec[0]], 364 | "tra_loss_cons":[tra_losses_vec[1]], 365 | "tra_loss_tot":[tra_losses_vec[2]], 366 | "tra_size_mean":[tra_losses_vec[3]], 367 | "tra_size_mean_pos":[tra_losses_vec[4]], 368 | 
"val_loss_s":[val_losses_vec[0]], 369 | "val_loss_cons":[val_losses_vec[1]], 370 | "val_loss_tot":[val_losses_vec[2]], 371 | "val_size_mean":[val_losses_vec[3]], 372 | "val_size_mean_pos":[val_losses_vec[4]], 373 | "val_gt_size_mean":[val_losses_vec[5]], 374 | "val_gt_size_mean_pos":[val_losses_vec[6]], 375 | 'tra_dice': [tra_target_vec[4]], 376 | 'val_asd': [val_target_vec[2]], 377 | 'val_asd_sd': [val_target_vec[3]], 378 | 'val_hd': [val_target_vec[4]], 379 | 'val_hd_sd': [val_target_vec[5]], 380 | 'val_dice': [val_target_vec[6]], 381 | "val_dice_3d_sd": [val_target_vec[1]], 382 | "val_dice_3d": [val_target_vec[0]]}) 383 | 384 | if i == 0: 385 | df_t = df_t_tmp 386 | else: 387 | df_t = df_t.append(df_t_tmp) 388 | 389 | df_t.to_csv(Path(savedir, "_".join((args.target_folders.split("'")[1],"target", args.csv))), float_format="%.4f", index=False) 390 | 391 | if args.flr==False: 392 | exp_lr_scheduler(optimizer, i, args.lr_decay,args.lr_decay_epoch) 393 | print("Results saved in ", savedir, "best 3d dice",best_3d_dice) 394 | 395 | 396 | def get_args() -> argparse.Namespace: 397 | parser = argparse.ArgumentParser(description='Hyperparams') 398 | parser.add_argument('--target_dataset', type=str, required=True) 399 | parser.add_argument("--workdir", type=str, required=True) 400 | parser.add_argument("--target_losses", type=str, required=True, 401 | help="List of (loss_name, loss_params, bounds_name, bounds_params, fn, weight)") 402 | parser.add_argument("--target_folders", type=str, required=True, 403 | help="List of (subfolder, transform, is_hot)") 404 | parser.add_argument("--val_target_folders", type=str, required=True, 405 | help="List of (subfolder, transform, is_hot)") 406 | parser.add_argument("--network", type=str, required=True, help="The network to use") 407 | parser.add_argument("--grp_regex", type=str, required=True) 408 | parser.add_argument("--n_class", type=int, required=True) 409 | parser.add_argument("--mode", type=str, default="learn") 410 | parser.add_argument("--lin_aug_w", action="store_true") 411 | parser.add_argument("--both", action="store_true") 412 | parser.add_argument("--trainval", action="store_true") 413 | parser.add_argument("--valonly", action="store_true") 414 | parser.add_argument("--flr", action="store_true") 415 | parser.add_argument("--augment", action="store_true") 416 | parser.add_argument("--do_hd", type=bool, default=False) 417 | parser.add_argument("--do_asd", type=bool, default=False) 418 | parser.add_argument("--saveim", type=bool, default=False) 419 | parser.add_argument("--debug", action="store_true") 420 | parser.add_argument("--csv", type=str, default='metrics.csv') 421 | parser.add_argument("--source_metrics", action="store_true") 422 | parser.add_argument("--adamw", action="store_true") 423 | parser.add_argument("--dice_3d", action="store_true") 424 | parser.add_argument("--ontest", action="store_true") 425 | parser.add_argument("--ontrain", action="store_true") 426 | parser.add_argument("--pprint", action="store_true") 427 | parser.add_argument("--entmap", action="store_true") 428 | parser.add_argument("--model_weights", type=str, default='') 429 | parser.add_argument("--cpu", action='store_true') 430 | parser.add_argument("--in_memory", action='store_true') 431 | parser.add_argument("--resize", type=int, default=0) 432 | parser.add_argument("--pho", nargs='?', type=float, default=1, 433 | help='augment') 434 | parser.add_argument("--n_warmup", type=int, default=0) 435 | parser.add_argument("--wh", type=int, default=256) 436 | 
parser.add_argument('--n_epoch', nargs='?', type=int, default=200, 437 | help='# of the epochs') 438 | parser.add_argument('--l_rate', nargs='?', type=float, default=5e-4, 439 | help='Learning Rate') 440 | parser.add_argument('--lr_decay', nargs='?', type=float, default=0.7), 441 | parser.add_argument('--lr_decay_epoch', nargs='?', type=float, default=20), 442 | parser.add_argument('--weight_decay', nargs='?', type=float, default=1e-5, 443 | help='L2 regularisation of network weights') 444 | parser.add_argument('--batch_size', type=int, default=1) 445 | parser.add_argument("--dtype", type=str, default="torch.float32") 446 | parser.add_argument("--scheduler", type=str, default="DummyScheduler") 447 | parser.add_argument("--scheduler_params", type=str, default="{}") 448 | parser.add_argument("--power",type=float, default=0.9) 449 | parser.add_argument("--metric_axis",type=int, nargs='*', required=True, help="Classes to display metrics. \ 450 | Display only the average of everything if empty") 451 | args = parser.parse_args() 452 | print(args) 453 | 454 | return args 455 | 456 | 457 | if __name__ == '__main__': 458 | run(get_args()) 459 | -------------------------------------------------------------------------------- /prostate.make: -------------------------------------------------------------------------------- 1 | CC = python 2 | SHELL = bash 3 | PP = PYTHONPATH="$(PYTHONPATH):." 4 | 5 | .PHONY: all view plot report 6 | 7 | CFLAGS = -O 8 | #DEBUG = --debug 9 | 10 | #the regex of the slices in the target dataset 11 | #for the prostate 12 | G_RGX = Case\d+_ 13 | 14 | TT_DATA = [('IMG', nii_transform2, False), ('GT', nii_gt_transform2, False), ('GT', nii_gt_transform2, False)] 15 | L_OR = [('CrossEntropy', {'idc': [0,1], 'weights':[1,1]}, None, None, None, 1)] 16 | NET = UNet 17 | 18 | # the folder containing the target dataset - site A is the target dataset and site B is the source one 19 | T_FOLD = data/prostate/SA 20 | 21 | #the network weights used as initialization of the adaptation 22 | M_WEIGHTS_ul = results/prostate/cesource/last.pkl 23 | 24 | #run the main experiment 25 | TRN = results/prostate/sfda 26 | 27 | REPO = $(shell basename `git rev-parse --show-toplevel`) 28 | DATE = $(shell date +"%y%m%d") 29 | HASH = $(shell git rev-parse --short HEAD) 30 | HOSTNAME = $(shell hostname) 31 | PBASE = archives 32 | PACK = $(PBASE)/$(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-CSize.tar.gz 33 | 34 | all: pack 35 | plot: $(PLT) 36 | 37 | pack: $(PACK) report 38 | $(PACK): $(TRN) $(INF_0) $(TRN_1) $(INF_1) $(TRN_2) $(TRN_3) $(TRN_4) 39 | mkdir -p $(@D) 40 | tar cf - $^ | pigz > $@ 41 | chmod -w $@ 42 | # tar -zc -f $@ $^ # Use if pigz is not available 43 | 44 | # first train on the source dataset only: 45 | results/prostate/cesource: OPT = --target_losses="$(L_OR)" --target_dataset "data/prostate/SB" \ 46 | --network UNet --model_weights="" --lr_decay 1 \ 47 | 48 | # full supervision 49 | results/prostate/fs: OPT = --target_losses="$(L_OR)" \ 50 | --network UNet --lr_decay 1 \ 51 | 52 | # SFDA. 
Remove --saveim True --entmap --do_asd 1 --do_hd 1 to speed up 53 | results/prostate/sfda: OPT = --target_losses="[('EntKLProp', {'lamb_se':1, 'lamb_consprior':1,'ivd':True,'weights_se':[0.1,0.9],'idc_c': [1],'curi':True,'power': 1},'PredictionBounds', \ 54 | {'margin':0,'dir':'high','idc':[0,1],'predcol':'dumbpredwtags','power': 1, 'mode':'percentage','sep':',','sizefile':'sizes/prostate.csv'},'norm_soft_size',1)]" \ 55 | --l_rate 0.000001 \ 56 | 57 | #inference mode : saves the segmentation masks for a specific model saved as pkl file (ex. "results/prostate/cesource/last.pkl" below): 58 | results/prostate/cesourceim: OPT = --target_losses="$(L_OR)" \ 59 | --mode makeim --batch_size 1 --l_rate 0 --model_weights="$(M_WEIGHTS_ul)" --pprint --n_epoch 1 --saveim True --entmap\ 60 | 61 | results/prostate/sfdaim: OPT = --target_losses="$(L_OR)" \ 62 | --mode makeim --saveim True --entmap --do_asd 1 --do_hd 1 --batch_size 1 --l_rate 0 --model_weights="results/prostate/sfda/best_3d.pkl" --pprint --n_epoch 1 --saveim True --entmap\ 63 | 64 | 65 | $(TRN) : 66 | $(CC) $(CFLAGS) main_sfda.py --batch_size 4 --n_class 2 --workdir $@_tmp --target_dataset "$(T_FOLD)" \ 67 | --wh 384 --metric_axis 1 --n_epoch 150 --dice_3d --l_rate 5e-4 --weight_decay 1e-4 --grp_regex="$(G_RGX)" --network=$(NET) --val_target_folders="$(TT_DATA)"\ 68 | --lr_decay 0.9 --model_weights="$(M_WEIGHTS_ul)" --target_folders="$(TT_DATA)" $(OPT) $(DEBUG) 69 | mv $@_tmp $@ 70 | 71 | 72 | -------------------------------------------------------------------------------- /results/ivd/cesource/last.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRDAMICCAI/Source-Relaxed-Domain-Adaptation/8fcac4725b1d4207b96d5f5d4e18391b561c72bc/results/ivd/cesource/last.pkl -------------------------------------------------------------------------------- /results/prostate/cesource/last.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRDAMICCAI/Source-Relaxed-Domain-Adaptation/8fcac4725b1d4207b96d5f5d4e18391b561c72bc/results/prostate/cesource/last.pkl -------------------------------------------------------------------------------- /results/whs/cesource/last.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRDAMICCAI/Source-Relaxed-Domain-Adaptation/8fcac4725b1d4207b96d5f5d4e18391b561c72bc/results/whs/cesource/last.pkl -------------------------------------------------------------------------------- /scheduler.py: -------------------------------------------------------------------------------- 1 | from typing import Any, Callable, List, Tuple 2 | from operator import add 3 | 4 | from utils import map_, uc_ 5 | 6 | 7 | class DummyScheduler(object): 8 | def __call__(self, epoch: int, optimizer: Any, loss_fns: List[Callable], loss_weights: List[float]) \ 9 | -> Tuple[float, List[Callable], List[float]]: 10 | return optimizer, loss_fns, loss_weights 11 | 12 | 13 | class AddWeightLoss(): 14 | def __init__(self, to_add: List[float]): 15 | self.to_add: List[float] = to_add 16 | 17 | def __call__(self, epoch: int, optimizer: Any, loss_fns: List[Callable], loss_weights: List[float]) \ 18 | -> Tuple[float, List[Callable], List[float]]: 19 | assert len(self.to_add) == len(loss_weights) 20 | new_weights: List[float] = map_(uc_(add), zip(loss_weights, self.to_add)) 21 | 22 | print(f"Loss weights went from {loss_weights} to {new_weights}") 23 | 24 | return 
optimizer, loss_fns, new_weights 25 | 26 | 27 | class StealWeight(): 28 | def __init__(self, to_steal: float): 29 | self.to_steal: float = to_steal 30 | 31 | def __call__(self, epoch: int, optimizer: Any, loss_fns: List[Callable], loss_weights: List[float]) \ 32 | -> Tuple[float, List[Callable], List[float]]: 33 | a, b = loss_weights 34 | new_weights: List[float] = [max(0.1, a - self.to_steal), b + self.to_steal] 35 | 36 | print(f"Loss weights went from {loss_weights} to {new_weights}") 37 | 38 | return optimizer, loss_fns, new_weights -------------------------------------------------------------------------------- /seg_pro3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SRDAMICCAI/Source-Relaxed-Domain-Adaptation/8fcac4725b1d4207b96d5f5d4e18391b561c72bc/seg_pro3.png -------------------------------------------------------------------------------- /sizes/prostate.csv: -------------------------------------------------------------------------------- 1 | val_ids,dumbpredwtags,val_gt_size 2 | Case00_0.nii,"[140225, 8000]","[147398.0, 827.0]" 3 | Case00_1.nii,"[140225, 8000]","[147080.0, 1145.0]" 4 | Case00_10.nii,"[140225, 8000]","[143808.0, 4417.0]" 5 | Case00_11.nii,"[140225, 8000]","[144898.0, 3327.0]" 6 | Case00_12.nii,"[140225, 8000]","[146525.0, 1700.0]" 7 | Case00_13.nii,"[140225, 8000]","[147374.0, 851.0]" 8 | Case00_14.nii,"[148225, 0]","[148225.0, 0.0]" 9 | Case00_2.nii,"[140225, 8000]","[145619.0, 2606.0]" 10 | Case00_3.nii,"[140225, 8000]","[144725.0, 3500.0]" 11 | Case00_4.nii,"[140225, 8000]","[144219.0, 4006.0]" 12 | Case00_5.nii,"[140225, 8000]","[143765.0, 4460.0]" 13 | Case00_6.nii,"[140225, 8000]","[143302.0, 4923.0]" 14 | Case00_7.nii,"[140225, 8000]","[142995.0, 5230.0]" 15 | Case00_8.nii,"[140225, 8000]","[143221.0, 5004.0]" 16 | Case00_9.nii,"[140225, 8000]","[143655.0, 4570.0]" 17 | Case01_0.nii,"[148225, 0]","[148225.0, 0.0]" 18 | Case01_1.nii,"[148225, 0]","[148225.0, 0.0]" 19 | Case01_10.nii,"[140225, 8000]","[139714.0, 8511.0]" 20 | Case01_11.nii,"[140225, 8000]","[139540.0, 8685.0]" 21 | Case01_12.nii,"[140225, 8000]","[139793.0, 8432.0]" 22 | Case01_13.nii,"[140225, 8000]","[139554.0, 8671.0]" 23 | Case01_14.nii,"[140225, 8000]","[143678.0, 4547.0]" 24 | Case01_15.nii,"[140225, 8000]","[145351.0, 2874.0]" 25 | Case01_16.nii,"[140225, 8000]","[145994.0, 2231.0]" 26 | Case01_17.nii,"[140225, 8000]","[147152.0, 1073.0]" 27 | Case01_18.nii,"[148225, 0]","[148225.0, 0.0]" 28 | Case01_19.nii,"[148225, 0]","[148225.0, 0.0]" 29 | Case01_2.nii,"[148225, 0]","[148225.0, 0.0]" 30 | Case01_3.nii,"[140225, 8000]","[147247.0, 978.0]" 31 | Case01_4.nii,"[140225, 8000]","[146148.0, 2077.0]" 32 | Case01_5.nii,"[140225, 8000]","[145587.0, 2638.0]" 33 | Case01_6.nii,"[140225, 8000]","[144189.0, 4036.0]" 34 | Case01_7.nii,"[140225, 8000]","[142942.0, 5283.0]" 35 | Case01_8.nii,"[140225, 8000]","[141197.0, 7028.0]" 36 | Case01_9.nii,"[140225, 8000]","[140373.0, 7852.0]" 37 | Case02_0.nii,"[148225, 0]","[148225.0, 0.0]" 38 | Case02_1.nii,"[148225, 0]","[148225.0, 0.0]" 39 | Case02_10.nii,"[140225, 8000]","[141742.0, 6483.0]" 40 | Case02_11.nii,"[140225, 8000]","[141050.0, 7175.0]" 41 | Case02_12.nii,"[140225, 8000]","[140924.0, 7301.0]" 42 | Case02_13.nii,"[140225, 8000]","[140805.0, 7420.0]" 43 | Case02_14.nii,"[140225, 8000]","[143122.0, 5103.0]" 44 | Case02_15.nii,"[140225, 8000]","[143423.0, 4802.0]" 45 | Case02_16.nii,"[140225, 8000]","[144806.0, 3419.0]" 46 | Case02_17.nii,"[140225, 8000]","[145318.0, 
2907.0]" 47 | Case02_18.nii,"[140225, 8000]","[145633.0, 2592.0]" 48 | Case02_19.nii,"[140225, 8000]","[145914.0, 2311.0]" 49 | Case02_2.nii,"[148225, 0]","[148225.0, 0.0]" 50 | Case02_20.nii,"[140225, 8000]","[146706.0, 1519.0]" 51 | Case02_21.nii,"[148225, 0]","[148225.0, 0.0]" 52 | Case02_22.nii,"[148225, 0]","[148225.0, 0.0]" 53 | Case02_23.nii,"[148225, 0]","[148225.0, 0.0]" 54 | Case02_3.nii,"[140225, 8000]","[147436.0, 789.0]" 55 | Case02_4.nii,"[140225, 8000]","[147143.0, 1082.0]" 56 | Case02_5.nii,"[140225, 8000]","[146584.0, 1641.0]" 57 | Case02_6.nii,"[140225, 8000]","[145410.0, 2815.0]" 58 | Case02_7.nii,"[140225, 8000]","[144210.0, 4015.0]" 59 | Case02_8.nii,"[140225, 8000]","[143207.0, 5018.0]" 60 | Case02_9.nii,"[140225, 8000]","[142318.0, 5907.0]" 61 | Case03_0.nii,"[148225, 0]","[148225.0, 0.0]" 62 | Case03_1.nii,"[148225, 0]","[148225.0, 0.0]" 63 | Case03_10.nii,"[140225, 8000]","[140021.0, 8204.0]" 64 | Case03_11.nii,"[140225, 8000]","[139283.0, 8942.0]" 65 | Case03_12.nii,"[140225, 8000]","[139164.0, 9061.0]" 66 | Case03_13.nii,"[140225, 8000]","[139425.0, 8800.0]" 67 | Case03_14.nii,"[140225, 8000]","[139884.0, 8341.0]" 68 | Case03_15.nii,"[140225, 8000]","[141583.0, 6642.0]" 69 | Case03_16.nii,"[140225, 8000]","[144992.0, 3233.0]" 70 | Case03_17.nii,"[140225, 8000]","[147116.0, 1109.0]" 71 | Case03_18.nii,"[148225, 0]","[148225.0, 0.0]" 72 | Case03_19.nii,"[148225, 0]","[148225.0, 0.0]" 73 | Case03_2.nii,"[148225, 0]","[148225.0, 0.0]" 74 | Case03_3.nii,"[148225, 0]","[148225.0, 0.0]" 75 | Case03_4.nii,"[140225, 8000]","[146810.0, 1415.0]" 76 | Case03_5.nii,"[140225, 8000]","[145226.0, 2999.0]" 77 | Case03_6.nii,"[140225, 8000]","[143816.0, 4409.0]" 78 | Case03_7.nii,"[140225, 8000]","[142831.0, 5394.0]" 79 | Case03_8.nii,"[140225, 8000]","[141631.0, 6594.0]" 80 | Case03_9.nii,"[140225, 8000]","[140547.0, 7678.0]" 81 | Case04_0.nii,"[148225, 0]","[148225.0, 0.0]" 82 | Case04_1.nii,"[148225, 0]","[148225.0, 0.0]" 83 | Case04_10.nii,"[140225, 8000]","[142755.0, 5470.0]" 84 | Case04_11.nii,"[140225, 8000]","[142180.0, 6045.0]" 85 | Case04_12.nii,"[140225, 8000]","[141812.0, 6413.0]" 86 | Case04_13.nii,"[140225, 8000]","[141878.0, 6347.0]" 87 | Case04_14.nii,"[140225, 8000]","[143928.0, 4297.0]" 88 | Case04_15.nii,"[140225, 8000]","[145170.0, 3055.0]" 89 | Case04_16.nii,"[140225, 8000]","[146002.0, 2223.0]" 90 | Case04_17.nii,"[140225, 8000]","[147234.0, 991.0]" 91 | Case04_18.nii,"[140225, 8000]","[147666.0, 559.0]" 92 | Case04_19.nii,"[148225, 0]","[148225.0, 0.0]" 93 | Case04_2.nii,"[148225, 0]","[148225.0, 0.0]" 94 | Case04_3.nii,"[148225, 0]","[148225.0, 0.0]" 95 | Case04_4.nii,"[148225, 0]","[148225.0, 0.0]" 96 | Case04_5.nii,"[148225, 0]","[148225.0, 0.0]" 97 | Case04_6.nii,"[140225, 8000]","[147241.0, 984.0]" 98 | Case04_7.nii,"[140225, 8000]","[146269.0, 1956.0]" 99 | Case04_8.nii,"[140225, 8000]","[145065.0, 3160.0]" 100 | Case04_9.nii,"[140225, 8000]","[143756.0, 4469.0]" 101 | Case05_0.nii,"[148225, 0]","[148225.0, 0.0]" 102 | Case05_1.nii,"[148225, 0]","[148225.0, 0.0]" 103 | Case05_10.nii,"[140225, 8000]","[142048.0, 6177.0]" 104 | Case05_11.nii,"[140225, 8000]","[143776.0, 4449.0]" 105 | Case05_12.nii,"[140225, 8000]","[144531.0, 3694.0]" 106 | Case05_13.nii,"[140225, 8000]","[144719.0, 3506.0]" 107 | Case05_14.nii,"[140225, 8000]","[144231.0, 3994.0]" 108 | Case05_15.nii,"[140225, 8000]","[145386.0, 2839.0]" 109 | Case05_16.nii,"[140225, 8000]","[146602.0, 1623.0]" 110 | Case05_17.nii,"[140225, 8000]","[147130.0, 1095.0]" 111 | Case05_18.nii,"[148225, 
0]","[148225.0, 0.0]" 112 | Case05_19.nii,"[148225, 0]","[148225.0, 0.0]" 113 | Case05_2.nii,"[148225, 0]","[148225.0, 0.0]" 114 | Case05_3.nii,"[140225, 8000]","[146878.0, 1347.0]" 115 | Case05_4.nii,"[140225, 8000]","[145260.0, 2965.0]" 116 | Case05_5.nii,"[140225, 8000]","[144385.0, 3840.0]" 117 | Case05_6.nii,"[140225, 8000]","[143069.0, 5156.0]" 118 | Case05_7.nii,"[140225, 8000]","[142546.0, 5679.0]" 119 | Case05_8.nii,"[140225, 8000]","[142335.0, 5890.0]" 120 | Case05_9.nii,"[140225, 8000]","[141831.0, 6394.0]" 121 | Case06_0.nii,"[148225, 0]","[148225.0, 0.0]" 122 | Case06_1.nii,"[148225, 0]","[148225.0, 0.0]" 123 | Case06_10.nii,"[140225, 8000]","[147496.0, 729.0]" 124 | Case06_11.nii,"[140225, 8000]","[147141.0, 1084.0]" 125 | Case06_12.nii,"[140225, 8000]","[146017.0, 2208.0]" 126 | Case06_13.nii,"[140225, 8000]","[144795.0, 3430.0]" 127 | Case06_14.nii,"[140225, 8000]","[144588.0, 3637.0]" 128 | Case06_15.nii,"[140225, 8000]","[145484.0, 2741.0]" 129 | Case06_16.nii,"[140225, 8000]","[146254.0, 1971.0]" 130 | Case06_17.nii,"[140225, 8000]","[147385.0, 840.0]" 131 | Case06_18.nii,"[148225, 0]","[148225.0, 0.0]" 132 | Case06_19.nii,"[148225, 0]","[148225.0, 0.0]" 133 | Case06_2.nii,"[148225, 0]","[148225.0, 0.0]" 134 | Case06_3.nii,"[148225, 0]","[148225.0, 0.0]" 135 | Case06_4.nii,"[148225, 0]","[148225.0, 0.0]" 136 | Case06_5.nii,"[148225, 0]","[148225.0, 0.0]" 137 | Case06_6.nii,"[148225, 0]","[148225.0, 0.0]" 138 | Case06_7.nii,"[148225, 0]","[148225.0, 0.0]" 139 | Case06_8.nii,"[148225, 0]","[148225.0, 0.0]" 140 | Case06_9.nii,"[140225, 8000]","[147646.0, 579.0]" 141 | Case07_0.nii,"[148225, 0]","[148225.0, 0.0]" 142 | Case07_1.nii,"[148225, 0]","[148225.0, 0.0]" 143 | Case07_10.nii,"[140225, 8000]","[142247.0, 5978.0]" 144 | Case07_11.nii,"[140225, 8000]","[143812.0, 4413.0]" 145 | Case07_12.nii,"[140225, 8000]","[145264.0, 2961.0]" 146 | Case07_13.nii,"[148225, 0]","[148225.0, 0.0]" 147 | Case07_14.nii,"[148225, 0]","[148225.0, 0.0]" 148 | Case07_2.nii,"[140225, 8000]","[147151.0, 1074.0]" 149 | Case07_3.nii,"[140225, 8000]","[146208.0, 2017.0]" 150 | Case07_4.nii,"[140225, 8000]","[145217.0, 3008.0]" 151 | Case07_5.nii,"[140225, 8000]","[144229.0, 3996.0]" 152 | Case07_6.nii,"[140225, 8000]","[142732.0, 5493.0]" 153 | Case07_7.nii,"[140225, 8000]","[142417.0, 5808.0]" 154 | Case07_8.nii,"[140225, 8000]","[141316.0, 6909.0]" 155 | Case07_9.nii,"[140225, 8000]","[141087.0, 7138.0]" 156 | Case08_0.nii,"[140225, 8000]","[147504.0, 721.0]" 157 | Case08_1.nii,"[140225, 8000]","[145394.0, 2831.0]" 158 | Case08_10.nii,"[140225, 8000]","[136606.0, 11619.0]" 159 | Case08_11.nii,"[140225, 8000]","[137272.0, 10953.0]" 160 | Case08_12.nii,"[140225, 8000]","[139394.0, 8831.0]" 161 | Case08_13.nii,"[140225, 8000]","[142201.0, 6024.0]" 162 | Case08_14.nii,"[140225, 8000]","[143654.0, 4571.0]" 163 | Case08_15.nii,"[140225, 8000]","[145214.0, 3011.0]" 164 | Case08_16.nii,"[140225, 8000]","[146091.0, 2134.0]" 165 | Case08_2.nii,"[140225, 8000]","[142923.0, 5302.0]" 166 | Case08_3.nii,"[140225, 8000]","[140304.0, 7921.0]" 167 | Case08_4.nii,"[140225, 8000]","[139011.0, 9214.0]" 168 | Case08_5.nii,"[140225, 8000]","[137748.0, 10477.0]" 169 | Case08_6.nii,"[140225, 8000]","[136902.0, 11323.0]" 170 | Case08_7.nii,"[140225, 8000]","[136255.0, 11970.0]" 171 | Case08_8.nii,"[140225, 8000]","[135761.0, 12464.0]" 172 | Case08_9.nii,"[140225, 8000]","[136098.0, 12127.0]" 173 | Case09_0.nii,"[148225, 0]","[148225.0, 0.0]" 174 | Case09_1.nii,"[148225, 0]","[148225.0, 0.0]" 175 | Case09_10.nii,"[140225, 
8000]","[141711.0, 6514.0]" 176 | Case09_11.nii,"[140225, 8000]","[141987.0, 6238.0]" 177 | Case09_12.nii,"[140225, 8000]","[143343.0, 4882.0]" 178 | Case09_13.nii,"[140225, 8000]","[143994.0, 4231.0]" 179 | Case09_14.nii,"[140225, 8000]","[147037.0, 1188.0]" 180 | Case09_15.nii,"[140225, 8000]","[147741.0, 484.0]" 181 | Case09_16.nii,"[148225, 0]","[148225.0, 0.0]" 182 | Case09_17.nii,"[148225, 0]","[148225.0, 0.0]" 183 | Case09_18.nii,"[148225, 0]","[148225.0, 0.0]" 184 | Case09_19.nii,"[148225, 0]","[148225.0, 0.0]" 185 | Case09_2.nii,"[148225, 0]","[148225.0, 0.0]" 186 | Case09_3.nii,"[148225, 0]","[148225.0, 0.0]" 187 | Case09_4.nii,"[140225, 8000]","[146343.0, 1882.0]" 188 | Case09_5.nii,"[140225, 8000]","[145568.0, 2657.0]" 189 | Case09_6.nii,"[140225, 8000]","[144556.0, 3669.0]" 190 | Case09_7.nii,"[140225, 8000]","[143660.0, 4565.0]" 191 | Case09_8.nii,"[140225, 8000]","[142545.0, 5680.0]" 192 | Case09_9.nii,"[140225, 8000]","[141745.0, 6480.0]" 193 | Case10_0.nii,"[148225, 0]","[148225.0, 0.0]" 194 | Case10_1.nii,"[148225, 0]","[148225.0, 0.0]" 195 | Case10_10.nii,"[140225, 8000]","[142235.0, 5990.0]" 196 | Case10_11.nii,"[140225, 8000]","[141964.0, 6261.0]" 197 | Case10_12.nii,"[140225, 8000]","[142328.0, 5897.0]" 198 | Case10_13.nii,"[140225, 8000]","[143271.0, 4954.0]" 199 | Case10_14.nii,"[140225, 8000]","[145067.0, 3158.0]" 200 | Case10_15.nii,"[140225, 8000]","[146058.0, 2167.0]" 201 | Case10_16.nii,"[140225, 8000]","[147767.0, 458.0]" 202 | Case10_17.nii,"[148225, 0]","[148225.0, 0.0]" 203 | Case10_18.nii,"[148225, 0]","[148225.0, 0.0]" 204 | Case10_19.nii,"[148225, 0]","[148225.0, 0.0]" 205 | Case10_2.nii,"[148225, 0]","[148225.0, 0.0]" 206 | Case10_3.nii,"[148225, 0]","[148225.0, 0.0]" 207 | Case10_4.nii,"[148225, 0]","[148225.0, 0.0]" 208 | Case10_5.nii,"[140225, 8000]","[147268.0, 957.0]" 209 | Case10_6.nii,"[140225, 8000]","[146143.0, 2082.0]" 210 | Case10_7.nii,"[140225, 8000]","[144526.0, 3699.0]" 211 | Case10_8.nii,"[140225, 8000]","[143465.0, 4760.0]" 212 | Case10_9.nii,"[140225, 8000]","[142894.0, 5331.0]" 213 | Case11_0.nii,"[148225, 0]","[148225.0, 0.0]" 214 | Case11_1.nii,"[148225, 0]","[148225.0, 0.0]" 215 | Case11_10.nii,"[140225, 8000]","[140880.0, 7345.0]" 216 | Case11_11.nii,"[140225, 8000]","[139883.0, 8342.0]" 217 | Case11_12.nii,"[140225, 8000]","[139925.0, 8300.0]" 218 | Case11_13.nii,"[140225, 8000]","[140159.0, 8066.0]" 219 | Case11_14.nii,"[140225, 8000]","[143073.0, 5152.0]" 220 | Case11_15.nii,"[140225, 8000]","[144629.0, 3596.0]" 221 | Case11_16.nii,"[140225, 8000]","[146693.0, 1532.0]" 222 | Case11_17.nii,"[148225, 0]","[148225.0, 0.0]" 223 | Case11_18.nii,"[148225, 0]","[148225.0, 0.0]" 224 | Case11_19.nii,"[148225, 0]","[148225.0, 0.0]" 225 | Case11_2.nii,"[148225, 0]","[148225.0, 0.0]" 226 | Case11_3.nii,"[140225, 8000]","[147037.0, 1188.0]" 227 | Case11_4.nii,"[140225, 8000]","[146409.0, 1816.0]" 228 | Case11_5.nii,"[140225, 8000]","[145233.0, 2992.0]" 229 | Case11_6.nii,"[140225, 8000]","[144417.0, 3808.0]" 230 | Case11_7.nii,"[140225, 8000]","[143132.0, 5093.0]" 231 | Case11_8.nii,"[140225, 8000]","[142459.0, 5766.0]" 232 | Case11_9.nii,"[140225, 8000]","[141386.0, 6839.0]" 233 | Case12_0.nii,"[148225, 0]","[148225.0, 0.0]" 234 | Case12_1.nii,"[140225, 8000]","[146805.0, 1420.0]" 235 | Case12_10.nii,"[140225, 8000]","[137336.0, 10889.0]" 236 | Case12_11.nii,"[140225, 8000]","[136984.0, 11241.0]" 237 | Case12_12.nii,"[140225, 8000]","[140722.0, 7503.0]" 238 | Case12_13.nii,"[140225, 8000]","[142767.0, 5458.0]" 239 | Case12_14.nii,"[140225, 
8000]","[144997.0, 3228.0]" 240 | Case12_2.nii,"[140225, 8000]","[145909.0, 2316.0]" 241 | Case12_3.nii,"[140225, 8000]","[143552.0, 4673.0]" 242 | Case12_4.nii,"[140225, 8000]","[140377.0, 7848.0]" 243 | Case12_5.nii,"[140225, 8000]","[138671.0, 9554.0]" 244 | Case12_6.nii,"[140225, 8000]","[137384.0, 10841.0]" 245 | Case12_7.nii,"[140225, 8000]","[136405.0, 11820.0]" 246 | Case12_8.nii,"[140225, 8000]","[135452.0, 12773.0]" 247 | Case12_9.nii,"[140225, 8000]","[135981.0, 12244.0]" 248 | Case13_0.nii,"[148225, 0]","[148225.0, 0.0]" 249 | Case13_1.nii,"[148225, 0]","[148225.0, 0.0]" 250 | Case13_10.nii,"[140225, 8000]","[143463.0, 4762.0]" 251 | Case13_11.nii,"[140225, 8000]","[142957.0, 5268.0]" 252 | Case13_12.nii,"[140225, 8000]","[144496.0, 3729.0]" 253 | Case13_13.nii,"[140225, 8000]","[144428.0, 3797.0]" 254 | Case13_14.nii,"[140225, 8000]","[146525.0, 1700.0]" 255 | Case13_15.nii,"[148225, 0]","[148225.0, 0.0]" 256 | Case13_16.nii,"[148225, 0]","[148225.0, 0.0]" 257 | Case13_17.nii,"[148225, 0]","[148225.0, 0.0]" 258 | Case13_18.nii,"[148225, 0]","[148225.0, 0.0]" 259 | Case13_19.nii,"[148225, 0]","[148225.0, 0.0]" 260 | Case13_2.nii,"[148225, 0]","[148225.0, 0.0]" 261 | Case13_3.nii,"[148225, 0]","[148225.0, 0.0]" 262 | Case13_4.nii,"[148225, 0]","[148225.0, 0.0]" 263 | Case13_5.nii,"[140225, 8000]","[147123.0, 1102.0]" 264 | Case13_6.nii,"[140225, 8000]","[146762.0, 1463.0]" 265 | Case13_7.nii,"[140225, 8000]","[146231.0, 1994.0]" 266 | Case13_8.nii,"[140225, 8000]","[145448.0, 2777.0]" 267 | Case13_9.nii,"[140225, 8000]","[144593.0, 3632.0]" 268 | Case14_0.nii,"[148225, 0]","[148225.0, 0.0]" 269 | Case14_1.nii,"[148225, 0]","[148225.0, 0.0]" 270 | Case14_10.nii,"[140225, 8000]","[142279.0, 5946.0]" 271 | Case14_11.nii,"[140225, 8000]","[142080.0, 6145.0]" 272 | Case14_12.nii,"[140225, 8000]","[142275.0, 5950.0]" 273 | Case14_13.nii,"[140225, 8000]","[142716.0, 5509.0]" 274 | Case14_14.nii,"[140225, 8000]","[143491.0, 4734.0]" 275 | Case14_15.nii,"[140225, 8000]","[144726.0, 3499.0]" 276 | Case14_16.nii,"[148225, 0]","[148225.0, 0.0]" 277 | Case14_17.nii,"[148225, 0]","[148225.0, 0.0]" 278 | Case14_18.nii,"[148225, 0]","[148225.0, 0.0]" 279 | Case14_19.nii,"[148225, 0]","[148225.0, 0.0]" 280 | Case14_2.nii,"[148225, 0]","[148225.0, 0.0]" 281 | Case14_3.nii,"[148225, 0]","[148225.0, 0.0]" 282 | Case14_4.nii,"[140225, 8000]","[146687.0, 1538.0]" 283 | Case14_5.nii,"[140225, 8000]","[146541.0, 1684.0]" 284 | Case14_6.nii,"[140225, 8000]","[146139.0, 2086.0]" 285 | Case14_7.nii,"[140225, 8000]","[144891.0, 3334.0]" 286 | Case14_8.nii,"[140225, 8000]","[143691.0, 4534.0]" 287 | Case14_9.nii,"[140225, 8000]","[142958.0, 5267.0]" 288 | Case15_0.nii,"[148225, 0]","[148225.0, 0.0]" 289 | Case15_1.nii,"[148225, 0]","[148225.0, 0.0]" 290 | Case15_10.nii,"[140225, 8000]","[140264.0, 7961.0]" 291 | Case15_11.nii,"[140225, 8000]","[140247.0, 7978.0]" 292 | Case15_12.nii,"[140225, 8000]","[140569.0, 7656.0]" 293 | Case15_13.nii,"[140225, 8000]","[142834.0, 5391.0]" 294 | Case15_14.nii,"[140225, 8000]","[144597.0, 3628.0]" 295 | Case15_15.nii,"[140225, 8000]","[147813.0, 412.0]" 296 | Case15_16.nii,"[148225, 0]","[148225.0, 0.0]" 297 | Case15_17.nii,"[148225, 0]","[148225.0, 0.0]" 298 | Case15_18.nii,"[148225, 0]","[148225.0, 0.0]" 299 | Case15_19.nii,"[148225, 0]","[148225.0, 0.0]" 300 | Case15_2.nii,"[148225, 0]","[148225.0, 0.0]" 301 | Case15_3.nii,"[140225, 8000]","[147338.0, 887.0]" 302 | Case15_4.nii,"[140225, 8000]","[146703.0, 1522.0]" 303 | Case15_5.nii,"[140225, 8000]","[145497.0, 
2728.0]" 304 | Case15_6.nii,"[140225, 8000]","[144050.0, 4175.0]" 305 | Case15_7.nii,"[140225, 8000]","[143167.0, 5058.0]" 306 | Case15_8.nii,"[140225, 8000]","[142042.0, 6183.0]" 307 | Case15_9.nii,"[140225, 8000]","[141095.0, 7130.0]" 308 | Case16_0.nii,"[140225, 8000]","[141606.0, 6619.0]" 309 | Case16_1.nii,"[140225, 8000]","[138737.0, 9488.0]" 310 | Case16_10.nii,"[140225, 8000]","[128120.0, 20105.0]" 311 | Case16_11.nii,"[140225, 8000]","[127035.0, 21190.0]" 312 | Case16_12.nii,"[140225, 8000]","[126971.0, 21254.0]" 313 | Case16_13.nii,"[140225, 8000]","[129396.0, 18829.0]" 314 | Case16_14.nii,"[140225, 8000]","[129711.0, 18514.0]" 315 | Case16_15.nii,"[140225, 8000]","[130404.0, 17821.0]" 316 | Case16_16.nii,"[140225, 8000]","[130862.0, 17363.0]" 317 | Case16_17.nii,"[140225, 8000]","[132060.0, 16165.0]" 318 | Case16_18.nii,"[140225, 8000]","[135385.0, 12840.0]" 319 | Case16_19.nii,"[140225, 8000]","[139298.0, 8927.0]" 320 | Case16_2.nii,"[140225, 8000]","[135793.0, 12432.0]" 321 | Case16_3.nii,"[140225, 8000]","[135327.0, 12898.0]" 322 | Case16_4.nii,"[140225, 8000]","[133541.0, 14684.0]" 323 | Case16_5.nii,"[140225, 8000]","[132209.0, 16016.0]" 324 | Case16_6.nii,"[140225, 8000]","[130679.0, 17546.0]" 325 | Case16_7.nii,"[140225, 8000]","[129703.0, 18522.0]" 326 | Case16_8.nii,"[140225, 8000]","[128784.0, 19441.0]" 327 | Case16_9.nii,"[140225, 8000]","[128423.0, 19802.0]" 328 | Case17_0.nii,"[148225, 0]","[148225.0, 0.0]" 329 | Case17_1.nii,"[148225, 0]","[148225.0, 0.0]" 330 | Case17_10.nii,"[140225, 8000]","[142984.0, 5241.0]" 331 | Case17_11.nii,"[140225, 8000]","[143106.0, 5119.0]" 332 | Case17_12.nii,"[140225, 8000]","[143545.0, 4680.0]" 333 | Case17_13.nii,"[140225, 8000]","[144014.0, 4211.0]" 334 | Case17_14.nii,"[140225, 8000]","[145408.0, 2817.0]" 335 | Case17_15.nii,"[148225, 0]","[148225.0, 0.0]" 336 | Case17_16.nii,"[148225, 0]","[148225.0, 0.0]" 337 | Case17_17.nii,"[148225, 0]","[148225.0, 0.0]" 338 | Case17_18.nii,"[148225, 0]","[148225.0, 0.0]" 339 | Case17_19.nii,"[148225, 0]","[148225.0, 0.0]" 340 | Case17_2.nii,"[148225, 0]","[148225.0, 0.0]" 341 | Case17_3.nii,"[148225, 0]","[148225.0, 0.0]" 342 | Case17_4.nii,"[148225, 0]","[148225.0, 0.0]" 343 | Case17_5.nii,"[140225, 8000]","[147202.0, 1023.0]" 344 | Case17_6.nii,"[140225, 8000]","[146471.0, 1754.0]" 345 | Case17_7.nii,"[140225, 8000]","[145260.0, 2965.0]" 346 | Case17_8.nii,"[140225, 8000]","[144245.0, 3980.0]" 347 | Case17_9.nii,"[140225, 8000]","[143621.0, 4604.0]" 348 | Case18_0.nii,"[148225, 0]","[148225.0, 0.0]" 349 | Case18_1.nii,"[140225, 8000]","[145759.0, 2466.0]" 350 | Case18_10.nii,"[140225, 8000]","[139249.0, 8976.0]" 351 | Case18_11.nii,"[140225, 8000]","[139771.0, 8454.0]" 352 | Case18_12.nii,"[140225, 8000]","[140754.0, 7471.0]" 353 | Case18_13.nii,"[140225, 8000]","[141539.0, 6686.0]" 354 | Case18_14.nii,"[140225, 8000]","[142813.0, 5412.0]" 355 | Case18_15.nii,"[140225, 8000]","[143693.0, 4532.0]" 356 | Case18_16.nii,"[140225, 8000]","[144723.0, 3502.0]" 357 | Case18_17.nii,"[140225, 8000]","[146785.0, 1440.0]" 358 | Case18_2.nii,"[140225, 8000]","[144515.0, 3710.0]" 359 | Case18_3.nii,"[140225, 8000]","[143128.0, 5097.0]" 360 | Case18_4.nii,"[140225, 8000]","[141943.0, 6282.0]" 361 | Case18_5.nii,"[140225, 8000]","[141571.0, 6654.0]" 362 | Case18_6.nii,"[140225, 8000]","[140166.0, 8059.0]" 363 | Case18_7.nii,"[140225, 8000]","[139305.0, 8920.0]" 364 | Case18_8.nii,"[140225, 8000]","[139184.0, 9041.0]" 365 | Case18_9.nii,"[140225, 8000]","[138926.0, 9299.0]" 366 | Case19_0.nii,"[148225, 
0]","[148225.0, 0.0]" 367 | Case19_1.nii,"[148225, 0]","[148225.0, 0.0]" 368 | Case19_10.nii,"[140225, 8000]","[144827.0, 3398.0]" 369 | Case19_11.nii,"[140225, 8000]","[144798.0, 3427.0]" 370 | Case19_12.nii,"[140225, 8000]","[145542.0, 2683.0]" 371 | Case19_13.nii,"[140225, 8000]","[147504.0, 721.0]" 372 | Case19_14.nii,"[148225, 0]","[148225.0, 0.0]" 373 | Case19_15.nii,"[148225, 0]","[148225.0, 0.0]" 374 | Case19_16.nii,"[148225, 0]","[148225.0, 0.0]" 375 | Case19_17.nii,"[148225, 0]","[148225.0, 0.0]" 376 | Case19_18.nii,"[148225, 0]","[148225.0, 0.0]" 377 | Case19_19.nii,"[148225, 0]","[148225.0, 0.0]" 378 | Case19_2.nii,"[148225, 0]","[148225.0, 0.0]" 379 | Case19_3.nii,"[148225, 0]","[148225.0, 0.0]" 380 | Case19_4.nii,"[140225, 8000]","[147577.0, 648.0]" 381 | Case19_5.nii,"[140225, 8000]","[147104.0, 1121.0]" 382 | Case19_6.nii,"[140225, 8000]","[146232.0, 1993.0]" 383 | Case19_7.nii,"[140225, 8000]","[145012.0, 3213.0]" 384 | Case19_8.nii,"[140225, 8000]","[144493.0, 3732.0]" 385 | Case19_9.nii,"[140225, 8000]","[144639.0, 3586.0]" 386 | Case20_0.nii,"[148225, 0]","[148225.0, 0.0]" 387 | Case20_1.nii,"[148225, 0]","[148225.0, 0.0]" 388 | Case20_10.nii,"[140225, 8000]","[138477.0, 9748.0]" 389 | Case20_11.nii,"[140225, 8000]","[138637.0, 9588.0]" 390 | Case20_12.nii,"[140225, 8000]","[139566.0, 8659.0]" 391 | Case20_13.nii,"[140225, 8000]","[141129.0, 7096.0]" 392 | Case20_14.nii,"[140225, 8000]","[142369.0, 5856.0]" 393 | Case20_15.nii,"[140225, 8000]","[144469.0, 3756.0]" 394 | Case20_16.nii,"[140225, 8000]","[146094.0, 2131.0]" 395 | Case20_17.nii,"[140225, 8000]","[146483.0, 1742.0]" 396 | Case20_18.nii,"[148225, 0]","[148225.0, 0.0]" 397 | Case20_19.nii,"[148225, 0]","[148225.0, 0.0]" 398 | Case20_2.nii,"[140225, 8000]","[146468.0, 1757.0]" 399 | Case20_3.nii,"[140225, 8000]","[144805.0, 3420.0]" 400 | Case20_4.nii,"[140225, 8000]","[143895.0, 4330.0]" 401 | Case20_5.nii,"[140225, 8000]","[142410.0, 5815.0]" 402 | Case20_6.nii,"[140225, 8000]","[141351.0, 6874.0]" 403 | Case20_7.nii,"[140225, 8000]","[140239.0, 7986.0]" 404 | Case20_8.nii,"[140225, 8000]","[139799.0, 8426.0]" 405 | Case20_9.nii,"[140225, 8000]","[139009.0, 9216.0]" 406 | Case21_0.nii,"[148225, 0]","[148225.0, 0.0]" 407 | Case21_1.nii,"[148225, 0]","[148225.0, 0.0]" 408 | Case21_10.nii,"[140225, 8000]","[141541.0, 6684.0]" 409 | Case21_11.nii,"[140225, 8000]","[141751.0, 6474.0]" 410 | Case21_12.nii,"[140225, 8000]","[143771.0, 4454.0]" 411 | Case21_13.nii,"[140225, 8000]","[144807.0, 3418.0]" 412 | Case21_14.nii,"[140225, 8000]","[145714.0, 2511.0]" 413 | Case21_15.nii,"[140225, 8000]","[147213.0, 1012.0]" 414 | Case21_16.nii,"[140225, 8000]","[147852.0, 373.0]" 415 | Case21_17.nii,"[148225, 0]","[148225.0, 0.0]" 416 | Case21_18.nii,"[148225, 0]","[148225.0, 0.0]" 417 | Case21_19.nii,"[148225, 0]","[148225.0, 0.0]" 418 | Case21_2.nii,"[148225, 0]","[148225.0, 0.0]" 419 | Case21_3.nii,"[148225, 0]","[148225.0, 0.0]" 420 | Case21_4.nii,"[140225, 8000]","[146801.0, 1424.0]" 421 | Case21_5.nii,"[140225, 8000]","[144883.0, 3342.0]" 422 | Case21_6.nii,"[140225, 8000]","[143482.0, 4743.0]" 423 | Case21_7.nii,"[140225, 8000]","[143093.0, 5132.0]" 424 | Case21_8.nii,"[140225, 8000]","[142173.0, 6052.0]" 425 | Case21_9.nii,"[140225, 8000]","[141814.0, 6411.0]" 426 | Case22_0.nii,"[148225, 0]","[148225.0, 0.0]" 427 | Case22_1.nii,"[148225, 0]","[148225.0, 0.0]" 428 | Case22_10.nii,"[140225, 8000]","[140066.0, 8159.0]" 429 | Case22_11.nii,"[140225, 8000]","[140487.0, 7738.0]" 430 | Case22_12.nii,"[140225, 
8000]","[141531.0, 6694.0]" 431 | Case22_13.nii,"[140225, 8000]","[142428.0, 5797.0]" 432 | Case22_14.nii,"[140225, 8000]","[144157.0, 4068.0]" 433 | Case22_15.nii,"[140225, 8000]","[145949.0, 2276.0]" 434 | Case22_16.nii,"[140225, 8000]","[146499.0, 1726.0]" 435 | Case22_17.nii,"[140225, 8000]","[147785.0, 440.0]" 436 | Case22_18.nii,"[148225, 0]","[148225.0, 0.0]" 437 | Case22_19.nii,"[148225, 0]","[148225.0, 0.0]" 438 | Case22_2.nii,"[140225, 8000]","[146798.0, 1427.0]" 439 | Case22_3.nii,"[140225, 8000]","[146514.0, 1711.0]" 440 | Case22_4.nii,"[140225, 8000]","[145512.0, 2713.0]" 441 | Case22_5.nii,"[140225, 8000]","[144185.0, 4040.0]" 442 | Case22_6.nii,"[140225, 8000]","[142764.0, 5461.0]" 443 | Case22_7.nii,"[140225, 8000]","[141557.0, 6668.0]" 444 | Case22_8.nii,"[140225, 8000]","[140378.0, 7847.0]" 445 | Case22_9.nii,"[140225, 8000]","[139992.0, 8233.0]" 446 | Case23_0.nii,"[148225, 0]","[148225.0, 0.0]" 447 | Case23_1.nii,"[140225, 8000]","[147104.0, 1121.0]" 448 | Case23_10.nii,"[140225, 8000]","[140251.0, 7974.0]" 449 | Case23_11.nii,"[140225, 8000]","[140431.0, 7794.0]" 450 | Case23_12.nii,"[140225, 8000]","[140708.0, 7517.0]" 451 | Case23_13.nii,"[140225, 8000]","[140546.0, 7679.0]" 452 | Case23_14.nii,"[140225, 8000]","[141519.0, 6706.0]" 453 | Case23_15.nii,"[140225, 8000]","[142721.0, 5504.0]" 454 | Case23_16.nii,"[140225, 8000]","[143118.0, 5107.0]" 455 | Case23_17.nii,"[140225, 8000]","[143949.0, 4276.0]" 456 | Case23_18.nii,"[140225, 8000]","[144157.0, 4068.0]" 457 | Case23_19.nii,"[140225, 8000]","[145888.0, 2337.0]" 458 | Case23_2.nii,"[140225, 8000]","[146105.0, 2120.0]" 459 | Case23_3.nii,"[140225, 8000]","[144130.0, 4095.0]" 460 | Case23_4.nii,"[140225, 8000]","[142788.0, 5437.0]" 461 | Case23_5.nii,"[140225, 8000]","[141996.0, 6229.0]" 462 | Case23_6.nii,"[140225, 8000]","[141156.0, 7069.0]" 463 | Case23_7.nii,"[140225, 8000]","[140520.0, 7705.0]" 464 | Case23_8.nii,"[140225, 8000]","[140195.0, 8030.0]" 465 | Case23_9.nii,"[140225, 8000]","[140283.0, 7942.0]" 466 | Case24_0.nii,"[148225, 0]","[148225.0, 0.0]" 467 | Case24_1.nii,"[148225, 0]","[148225.0, 0.0]" 468 | Case24_10.nii,"[140225, 8000]","[141231.0, 6994.0]" 469 | Case24_11.nii,"[140225, 8000]","[141441.0, 6784.0]" 470 | Case24_12.nii,"[140225, 8000]","[142004.0, 6221.0]" 471 | Case24_13.nii,"[140225, 8000]","[144190.0, 4035.0]" 472 | Case24_14.nii,"[140225, 8000]","[144288.0, 3937.0]" 473 | Case24_15.nii,"[140225, 8000]","[145158.0, 3067.0]" 474 | Case24_16.nii,"[140225, 8000]","[145880.0, 2345.0]" 475 | Case24_17.nii,"[140225, 8000]","[147101.0, 1124.0]" 476 | Case24_18.nii,"[148225, 0]","[148225.0, 0.0]" 477 | Case24_2.nii,"[140225, 8000]","[146959.0, 1266.0]" 478 | Case24_3.nii,"[140225, 8000]","[145953.0, 2272.0]" 479 | Case24_4.nii,"[140225, 8000]","[145703.0, 2522.0]" 480 | Case24_5.nii,"[140225, 8000]","[144610.0, 3615.0]" 481 | Case24_6.nii,"[140225, 8000]","[143618.0, 4607.0]" 482 | Case24_7.nii,"[140225, 8000]","[142514.0, 5711.0]" 483 | Case24_8.nii,"[140225, 8000]","[142304.0, 5921.0]" 484 | Case24_9.nii,"[140225, 8000]","[141422.0, 6803.0]" 485 | Case25_0.nii,"[148225, 0]","[148225.0, 0.0]" 486 | Case25_1.nii,"[140225, 8000]","[147723.0, 502.0]" 487 | Case25_10.nii,"[140225, 8000]","[140640.0, 7585.0]" 488 | Case25_11.nii,"[140225, 8000]","[140911.0, 7314.0]" 489 | Case25_12.nii,"[140225, 8000]","[141458.0, 6767.0]" 490 | Case25_13.nii,"[140225, 8000]","[141866.0, 6359.0]" 491 | Case25_14.nii,"[140225, 8000]","[142278.0, 5947.0]" 492 | Case25_15.nii,"[140225, 8000]","[142131.0, 6094.0]" 493 
| Case25_16.nii,"[140225, 8000]","[142826.0, 5399.0]" 494 | Case25_17.nii,"[140225, 8000]","[143487.0, 4738.0]" 495 | Case25_18.nii,"[148225, 0]","[148225.0, 0.0]" 496 | Case25_19.nii,"[148225, 0]","[148225.0, 0.0]" 497 | Case25_2.nii,"[140225, 8000]","[146722.0, 1503.0]" 498 | Case25_3.nii,"[140225, 8000]","[145506.0, 2719.0]" 499 | Case25_4.nii,"[140225, 8000]","[144214.0, 4011.0]" 500 | Case25_5.nii,"[140225, 8000]","[143118.0, 5107.0]" 501 | Case25_6.nii,"[140225, 8000]","[142077.0, 6148.0]" 502 | Case25_7.nii,"[140225, 8000]","[140277.0, 7948.0]" 503 | Case25_8.nii,"[140225, 8000]","[139760.0, 8465.0]" 504 | Case25_9.nii,"[140225, 8000]","[138849.0, 9376.0]" 505 | Case26_0.nii,"[148225, 0]","[148225.0, 0.0]" 506 | Case26_1.nii,"[148225, 0]","[148225.0, 0.0]" 507 | Case26_10.nii,"[140225, 8000]","[139476.0, 8749.0]" 508 | Case26_11.nii,"[140225, 8000]","[139412.0, 8813.0]" 509 | Case26_12.nii,"[140225, 8000]","[139819.0, 8406.0]" 510 | Case26_13.nii,"[140225, 8000]","[141639.0, 6586.0]" 511 | Case26_14.nii,"[140225, 8000]","[144290.0, 3935.0]" 512 | Case26_15.nii,"[140225, 8000]","[145252.0, 2973.0]" 513 | Case26_16.nii,"[140225, 8000]","[147659.0, 566.0]" 514 | Case26_17.nii,"[148225, 0]","[148225.0, 0.0]" 515 | Case26_18.nii,"[148225, 0]","[148225.0, 0.0]" 516 | Case26_19.nii,"[148225, 0]","[148225.0, 0.0]" 517 | Case26_2.nii,"[140225, 8000]","[146979.0, 1246.0]" 518 | Case26_3.nii,"[140225, 8000]","[146354.0, 1871.0]" 519 | Case26_4.nii,"[140225, 8000]","[145420.0, 2805.0]" 520 | Case26_5.nii,"[140225, 8000]","[144513.0, 3712.0]" 521 | Case26_6.nii,"[140225, 8000]","[143369.0, 4856.0]" 522 | Case26_7.nii,"[140225, 8000]","[141401.0, 6824.0]" 523 | Case26_8.nii,"[140225, 8000]","[140625.0, 7600.0]" 524 | Case26_9.nii,"[140225, 8000]","[139936.0, 8289.0]" 525 | Case27_0.nii,"[140225, 8000]","[144806.0, 3419.0]" 526 | Case27_1.nii,"[140225, 8000]","[143518.0, 4707.0]" 527 | Case27_10.nii,"[140225, 8000]","[142750.0, 5475.0]" 528 | Case27_11.nii,"[140225, 8000]","[144828.0, 3397.0]" 529 | Case27_12.nii,"[140225, 8000]","[146658.0, 1567.0]" 530 | Case27_13.nii,"[140225, 8000]","[147256.0, 969.0]" 531 | Case27_14.nii,"[148225, 0]","[148225.0, 0.0]" 532 | Case27_2.nii,"[140225, 8000]","[141441.0, 6784.0]" 533 | Case27_3.nii,"[140225, 8000]","[139098.0, 9127.0]" 534 | Case27_4.nii,"[140225, 8000]","[138013.0, 10212.0]" 535 | Case27_5.nii,"[140225, 8000]","[136858.0, 11367.0]" 536 | Case27_6.nii,"[140225, 8000]","[136785.0, 11440.0]" 537 | Case27_7.nii,"[140225, 8000]","[136524.0, 11701.0]" 538 | Case27_8.nii,"[140225, 8000]","[137318.0, 10907.0]" 539 | Case27_9.nii,"[140225, 8000]","[141160.0, 7065.0]" 540 | Case28_0.nii,"[148225, 0]","[148225.0, 0.0]" 541 | Case28_1.nii,"[148225, 0]","[148225.0, 0.0]" 542 | Case28_10.nii,"[140225, 8000]","[139969.0, 8256.0]" 543 | Case28_11.nii,"[140225, 8000]","[140482.0, 7743.0]" 544 | Case28_12.nii,"[140225, 8000]","[141041.0, 7184.0]" 545 | Case28_13.nii,"[140225, 8000]","[143828.0, 4397.0]" 546 | Case28_14.nii,"[140225, 8000]","[144480.0, 3745.0]" 547 | Case28_15.nii,"[140225, 8000]","[145614.0, 2611.0]" 548 | Case28_16.nii,"[140225, 8000]","[145757.0, 2468.0]" 549 | Case28_17.nii,"[140225, 8000]","[147272.0, 953.0]" 550 | Case28_18.nii,"[140225, 8000]","[147298.0, 927.0]" 551 | Case28_19.nii,"[140225, 8000]","[147636.0, 589.0]" 552 | Case28_2.nii,"[148225, 0]","[148225.0, 0.0]" 553 | Case28_3.nii,"[148225, 0]","[148225.0, 0.0]" 554 | Case28_4.nii,"[140225, 8000]","[147262.0, 963.0]" 555 | Case28_5.nii,"[140225, 8000]","[145835.0, 2390.0]" 556 | 
Case28_6.nii,"[140225, 8000]","[143685.0, 4540.0]" 557 | Case28_7.nii,"[140225, 8000]","[142307.0, 5918.0]" 558 | Case28_8.nii,"[140225, 8000]","[141339.0, 6886.0]" 559 | Case28_9.nii,"[140225, 8000]","[140605.0, 7620.0]" 560 | Case29_0.nii,"[148225, 0]","[148225.0, 0.0]" 561 | Case29_1.nii,"[148225, 0]","[148225.0, 0.0]" 562 | Case29_10.nii,"[140225, 8000]","[142841.0, 5384.0]" 563 | Case29_11.nii,"[140225, 8000]","[142024.0, 6201.0]" 564 | Case29_12.nii,"[140225, 8000]","[141592.0, 6633.0]" 565 | Case29_13.nii,"[140225, 8000]","[141553.0, 6672.0]" 566 | Case29_14.nii,"[140225, 8000]","[143027.0, 5198.0]" 567 | Case29_15.nii,"[140225, 8000]","[147455.0, 770.0]" 568 | Case29_16.nii,"[148225, 0]","[148225.0, 0.0]" 569 | Case29_17.nii,"[148225, 0]","[148225.0, 0.0]" 570 | Case29_18.nii,"[148225, 0]","[148225.0, 0.0]" 571 | Case29_19.nii,"[148225, 0]","[148225.0, 0.0]" 572 | Case29_2.nii,"[148225, 0]","[148225.0, 0.0]" 573 | Case29_3.nii,"[148225, 0]","[148225.0, 0.0]" 574 | Case29_4.nii,"[148225, 0]","[148225.0, 0.0]" 575 | Case29_5.nii,"[148225, 0]","[148225.0, 0.0]" 576 | Case29_6.nii,"[148225, 0]","[148225.0, 0.0]" 577 | Case29_7.nii,"[140225, 8000]","[146339.0, 1886.0]" 578 | Case29_8.nii,"[140225, 8000]","[145225.0, 3000.0]" 579 | Case29_9.nii,"[140225, 8000]","[144336.0, 3889.0]" 580 | -------------------------------------------------------------------------------- /sizes/prostate_.csv: -------------------------------------------------------------------------------- 1 | val_ids;val_gt_size;dumbpredwtags 2 | Case00_0.nii;[147398.0, 827.0];[140493, 6905] 3 | Case00_1.nii;[147080.0, 1145.0];[140493, 6905] 4 | Case00_10.nii;[143808.0, 4417.0];[140493, 6905] 5 | Case00_11.nii;[144898.0, 3327.0];[140493, 6905] 6 | Case00_12.nii;[146525.0, 1700.0];[140493, 6905] 7 | Case00_13.nii;[147374.0, 851.0];[140493, 6905] 8 | Case00_14.nii;[148225.0, 0.0];[148225, 0] 9 | Case00_2.nii;[145619.0, 2606.0];[140493, 6905] 10 | Case00_3.nii;[144725.0, 3500.0];[140493, 6905] 11 | Case00_4.nii;[144219.0, 4006.0];[140493, 6905] 12 | Case00_5.nii;[143765.0, 4460.0];[140493, 6905] 13 | Case00_6.nii;[143302.0, 4923.0];[140493, 6905] 14 | Case00_7.nii;[142995.0, 5230.0];[140493, 6905] 15 | Case00_8.nii;[143221.0, 5004.0];[140493, 6905] 16 | Case00_9.nii;[143655.0, 4570.0];[140493, 6905] 17 | Case01_0.nii;[148225.0, 0.0];[148225, 0] 18 | Case01_1.nii;[148225.0, 0.0];[148225, 0] 19 | Case01_10.nii;[139714.0, 8511.0];[140493, 6905] 20 | Case01_11.nii;[139540.0, 8685.0];[140493, 6905] 21 | Case01_12.nii;[139793.0, 8432.0];[140493, 6905] 22 | Case01_13.nii;[139554.0, 8671.0];[140493, 6905] 23 | Case01_14.nii;[143678.0, 4547.0];[140493, 6905] 24 | Case01_15.nii;[145351.0, 2874.0];[140493, 6905] 25 | Case01_16.nii;[145994.0, 2231.0];[140493, 6905] 26 | Case01_17.nii;[147152.0, 1073.0];[140493, 6905] 27 | Case01_18.nii;[148225.0, 0.0];[148225, 0] 28 | Case01_19.nii;[148225.0, 0.0];[148225, 0] 29 | Case01_2.nii;[148225.0, 0.0];[148225, 0] 30 | Case01_3.nii;[147247.0, 978.0];[140493, 6905] 31 | Case01_4.nii;[146148.0, 2077.0];[140493, 6905] 32 | Case01_5.nii;[145587.0, 2638.0];[140493, 6905] 33 | Case01_6.nii;[144189.0, 4036.0];[140493, 6905] 34 | Case01_7.nii;[142942.0, 5283.0];[140493, 6905] 35 | Case01_8.nii;[141197.0, 7028.0];[140493, 6905] 36 | Case01_9.nii;[140373.0, 7852.0];[140493, 6905] 37 | Case02_0.nii;[148225.0, 0.0];[148225, 0] 38 | Case02_1.nii;[148225.0, 0.0];[148225, 0] 39 | Case02_10.nii;[141742.0, 6483.0];[140493, 6905] 40 | Case02_11.nii;[141050.0, 7175.0];[140493, 6905] 41 | Case02_12.nii;[140924.0, 
7301.0];[140493, 6905] 42 | Case02_13.nii;[140805.0, 7420.0];[140493, 6905] 43 | Case02_14.nii;[143122.0, 5103.0];[140493, 6905] 44 | Case02_15.nii;[143423.0, 4802.0];[140493, 6905] 45 | Case02_16.nii;[144806.0, 3419.0];[140493, 6905] 46 | Case02_17.nii;[145318.0, 2907.0];[140493, 6905] 47 | Case02_18.nii;[145633.0, 2592.0];[140493, 6905] 48 | Case02_19.nii;[145914.0, 2311.0];[140493, 6905] 49 | Case02_2.nii;[148225.0, 0.0];[148225, 0] 50 | Case02_20.nii;[146706.0, 1519.0];[140493, 6905] 51 | Case02_21.nii;[148225.0, 0.0];[148225, 0] 52 | Case02_22.nii;[148225.0, 0.0];[148225, 0] 53 | Case02_23.nii;[148225.0, 0.0];[148225, 0] 54 | Case02_3.nii;[147436.0, 789.0];[140493, 6905] 55 | Case02_4.nii;[147143.0, 1082.0];[140493, 6905] 56 | Case02_5.nii;[146584.0, 1641.0];[140493, 6905] 57 | Case02_6.nii;[145410.0, 2815.0];[140493, 6905] 58 | Case02_7.nii;[144210.0, 4015.0];[140493, 6905] 59 | Case02_8.nii;[143207.0, 5018.0];[140493, 6905] 60 | Case02_9.nii;[142318.0, 5907.0];[140493, 6905] 61 | Case03_0.nii;[148225.0, 0.0];[148225, 0] 62 | Case03_1.nii;[148225.0, 0.0];[148225, 0] 63 | Case03_10.nii;[140021.0, 8204.0];[140493, 6905] 64 | Case03_11.nii;[139283.0, 8942.0];[140493, 6905] 65 | Case03_12.nii;[139164.0, 9061.0];[140493, 6905] 66 | Case03_13.nii;[139425.0, 8800.0];[140493, 6905] 67 | Case03_14.nii;[139884.0, 8341.0];[140493, 6905] 68 | Case03_15.nii;[141583.0, 6642.0];[140493, 6905] 69 | Case03_16.nii;[144992.0, 3233.0];[140493, 6905] 70 | Case03_17.nii;[147116.0, 1109.0];[140493, 6905] 71 | Case03_18.nii;[148225.0, 0.0];[148225, 0] 72 | Case03_19.nii;[148225.0, 0.0];[148225, 0] 73 | Case03_2.nii;[148225.0, 0.0];[148225, 0] 74 | Case03_3.nii;[148225.0, 0.0];[148225, 0] 75 | Case03_4.nii;[146810.0, 1415.0];[140493, 6905] 76 | Case03_5.nii;[145226.0, 2999.0];[140493, 6905] 77 | Case03_6.nii;[143816.0, 4409.0];[140493, 6905] 78 | Case03_7.nii;[142831.0, 5394.0];[140493, 6905] 79 | Case03_8.nii;[141631.0, 6594.0];[140493, 6905] 80 | Case03_9.nii;[140547.0, 7678.0];[140493, 6905] 81 | Case04_0.nii;[148225.0, 0.0];[148225, 0] 82 | Case04_1.nii;[148225.0, 0.0];[148225, 0] 83 | Case04_10.nii;[142755.0, 5470.0];[140493, 6905] 84 | Case04_11.nii;[142180.0, 6045.0];[140493, 6905] 85 | Case04_12.nii;[141812.0, 6413.0];[140493, 6905] 86 | Case04_13.nii;[141878.0, 6347.0];[140493, 6905] 87 | Case04_14.nii;[143928.0, 4297.0];[140493, 6905] 88 | Case04_15.nii;[145170.0, 3055.0];[140493, 6905] 89 | Case04_16.nii;[146002.0, 2223.0];[140493, 6905] 90 | Case04_17.nii;[147234.0, 991.0];[140493, 6905] 91 | Case04_18.nii;[147666.0, 559.0];[140493, 6905] 92 | Case04_19.nii;[148225.0, 0.0];[148225, 0] 93 | Case04_2.nii;[148225.0, 0.0];[148225, 0] 94 | Case04_3.nii;[148225.0, 0.0];[148225, 0] 95 | Case04_4.nii;[148225.0, 0.0];[148225, 0] 96 | Case04_5.nii;[148225.0, 0.0];[148225, 0] 97 | Case04_6.nii;[147241.0, 984.0];[140493, 6905] 98 | Case04_7.nii;[146269.0, 1956.0];[140493, 6905] 99 | Case04_8.nii;[145065.0, 3160.0];[140493, 6905] 100 | Case04_9.nii;[143756.0, 4469.0];[140493, 6905] 101 | Case05_0.nii;[148225.0, 0.0];[148225, 0] 102 | Case05_1.nii;[148225.0, 0.0];[148225, 0] 103 | Case05_10.nii;[142048.0, 6177.0];[140493, 6905] 104 | Case05_11.nii;[143776.0, 4449.0];[140493, 6905] 105 | Case05_12.nii;[144531.0, 3694.0];[140493, 6905] 106 | Case05_13.nii;[144719.0, 3506.0];[140493, 6905] 107 | Case05_14.nii;[144231.0, 3994.0];[140493, 6905] 108 | Case05_15.nii;[145386.0, 2839.0];[140493, 6905] 109 | Case05_16.nii;[146602.0, 1623.0];[140493, 6905] 110 | Case05_17.nii;[147130.0, 1095.0];[140493, 6905] 111 | 
Case05_18.nii;[148225.0, 0.0];[148225, 0] 112 | Case05_19.nii;[148225.0, 0.0];[148225, 0] 113 | Case05_2.nii;[148225.0, 0.0];[148225, 0] 114 | Case05_3.nii;[146878.0, 1347.0];[140493, 6905] 115 | Case05_4.nii;[145260.0, 2965.0];[140493, 6905] 116 | Case05_5.nii;[144385.0, 3840.0];[140493, 6905] 117 | Case05_6.nii;[143069.0, 5156.0];[140493, 6905] 118 | Case05_7.nii;[142546.0, 5679.0];[140493, 6905] 119 | Case05_8.nii;[142335.0, 5890.0];[140493, 6905] 120 | Case05_9.nii;[141831.0, 6394.0];[140493, 6905] 121 | Case06_0.nii;[148225.0, 0.0];[148225, 0] 122 | Case06_1.nii;[148225.0, 0.0];[148225, 0] 123 | Case06_10.nii;[147496.0, 729.0];[140493, 6905] 124 | Case06_11.nii;[147141.0, 1084.0];[140493, 6905] 125 | Case06_12.nii;[146017.0, 2208.0];[140493, 6905] 126 | Case06_13.nii;[144795.0, 3430.0];[140493, 6905] 127 | Case06_14.nii;[144588.0, 3637.0];[140493, 6905] 128 | Case06_15.nii;[145484.0, 2741.0];[140493, 6905] 129 | Case06_16.nii;[146254.0, 1971.0];[140493, 6905] 130 | Case06_17.nii;[147385.0, 840.0];[140493, 6905] 131 | Case06_18.nii;[148225.0, 0.0];[148225, 0] 132 | Case06_19.nii;[148225.0, 0.0];[148225, 0] 133 | Case06_2.nii;[148225.0, 0.0];[148225, 0] 134 | Case06_3.nii;[148225.0, 0.0];[148225, 0] 135 | Case06_4.nii;[148225.0, 0.0];[148225, 0] 136 | Case06_5.nii;[148225.0, 0.0];[148225, 0] 137 | Case06_6.nii;[148225.0, 0.0];[148225, 0] 138 | Case06_7.nii;[148225.0, 0.0];[148225, 0] 139 | Case06_8.nii;[148225.0, 0.0];[148225, 0] 140 | Case06_9.nii;[147646.0, 579.0];[140493, 6905] 141 | Case07_0.nii;[148225.0, 0.0];[148225, 0] 142 | Case07_1.nii;[148225.0, 0.0];[148225, 0] 143 | Case07_10.nii;[142247.0, 5978.0];[140493, 6905] 144 | Case07_11.nii;[143812.0, 4413.0];[140493, 6905] 145 | Case07_12.nii;[145264.0, 2961.0];[140493, 6905] 146 | Case07_13.nii;[148225.0, 0.0];[148225, 0] 147 | Case07_14.nii;[148225.0, 0.0];[148225, 0] 148 | Case07_2.nii;[147151.0, 1074.0];[140493, 6905] 149 | Case07_3.nii;[146208.0, 2017.0];[140493, 6905] 150 | Case07_4.nii;[145217.0, 3008.0];[140493, 6905] 151 | Case07_5.nii;[144229.0, 3996.0];[140493, 6905] 152 | Case07_6.nii;[142732.0, 5493.0];[140493, 6905] 153 | Case07_7.nii;[142417.0, 5808.0];[140493, 6905] 154 | Case07_8.nii;[141316.0, 6909.0];[140493, 6905] 155 | Case07_9.nii;[141087.0, 7138.0];[140493, 6905] 156 | Case08_0.nii;[147504.0, 721.0];[140493, 6905] 157 | Case08_1.nii;[145394.0, 2831.0];[140493, 6905] 158 | Case08_10.nii;[136606.0, 11619.0];[140493, 6905] 159 | Case08_11.nii;[137272.0, 10953.0];[140493, 6905] 160 | Case08_12.nii;[139394.0, 8831.0];[140493, 6905] 161 | Case08_13.nii;[142201.0, 6024.0];[140493, 6905] 162 | Case08_14.nii;[143654.0, 4571.0];[140493, 6905] 163 | Case08_15.nii;[145214.0, 3011.0];[140493, 6905] 164 | Case08_16.nii;[146091.0, 2134.0];[140493, 6905] 165 | Case08_2.nii;[142923.0, 5302.0];[140493, 6905] 166 | Case08_3.nii;[140304.0, 7921.0];[140493, 6905] 167 | Case08_4.nii;[139011.0, 9214.0];[140493, 6905] 168 | Case08_5.nii;[137748.0, 10477.0];[140493, 6905] 169 | Case08_6.nii;[136902.0, 11323.0];[140493, 6905] 170 | Case08_7.nii;[136255.0, 11970.0];[140493, 6905] 171 | Case08_8.nii;[135761.0, 12464.0];[140493, 6905] 172 | Case08_9.nii;[136098.0, 12127.0];[140493, 6905] 173 | Case09_0.nii;[148225.0, 0.0];[148225, 0] 174 | Case09_1.nii;[148225.0, 0.0];[148225, 0] 175 | Case09_10.nii;[141711.0, 6514.0];[140493, 6905] 176 | Case09_11.nii;[141987.0, 6238.0];[140493, 6905] 177 | Case09_12.nii;[143343.0, 4882.0];[140493, 6905] 178 | Case09_13.nii;[143994.0, 4231.0];[140493, 6905] 179 | Case09_14.nii;[147037.0, 
1188.0];[140493, 6905] 180 | Case09_15.nii;[147741.0, 484.0];[140493, 6905] 181 | Case09_16.nii;[148225.0, 0.0];[148225, 0] 182 | Case09_17.nii;[148225.0, 0.0];[148225, 0] 183 | Case09_18.nii;[148225.0, 0.0];[148225, 0] 184 | Case09_19.nii;[148225.0, 0.0];[148225, 0] 185 | Case09_2.nii;[148225.0, 0.0];[148225, 0] 186 | Case09_3.nii;[148225.0, 0.0];[148225, 0] 187 | Case09_4.nii;[146343.0, 1882.0];[140493, 6905] 188 | Case09_5.nii;[145568.0, 2657.0];[140493, 6905] 189 | Case09_6.nii;[144556.0, 3669.0];[140493, 6905] 190 | Case09_7.nii;[143660.0, 4565.0];[140493, 6905] 191 | Case09_8.nii;[142545.0, 5680.0];[140493, 6905] 192 | Case09_9.nii;[141745.0, 6480.0];[140493, 6905] 193 | Case10_0.nii;[148225.0, 0.0];[148225, 0] 194 | Case10_1.nii;[148225.0, 0.0];[148225, 0] 195 | Case10_10.nii;[142235.0, 5990.0];[140493, 6905] 196 | Case10_11.nii;[141964.0, 6261.0];[140493, 6905] 197 | Case10_12.nii;[142328.0, 5897.0];[140493, 6905] 198 | Case10_13.nii;[143271.0, 4954.0];[140493, 6905] 199 | Case10_14.nii;[145067.0, 3158.0];[140493, 6905] 200 | Case10_15.nii;[146058.0, 2167.0];[140493, 6905] 201 | Case10_16.nii;[147767.0, 458.0];[140493, 6905] 202 | Case10_17.nii;[148225.0, 0.0];[148225, 0] 203 | Case10_18.nii;[148225.0, 0.0];[148225, 0] 204 | Case10_19.nii;[148225.0, 0.0];[148225, 0] 205 | Case10_2.nii;[148225.0, 0.0];[148225, 0] 206 | Case10_3.nii;[148225.0, 0.0];[148225, 0] 207 | Case10_4.nii;[148225.0, 0.0];[148225, 0] 208 | Case10_5.nii;[147268.0, 957.0];[140493, 6905] 209 | Case10_6.nii;[146143.0, 2082.0];[140493, 6905] 210 | Case10_7.nii;[144526.0, 3699.0];[140493, 6905] 211 | Case10_8.nii;[143465.0, 4760.0];[140493, 6905] 212 | Case10_9.nii;[142894.0, 5331.0];[140493, 6905] 213 | Case11_0.nii;[148225.0, 0.0];[148225, 0] 214 | Case11_1.nii;[148225.0, 0.0];[148225, 0] 215 | Case11_10.nii;[140880.0, 7345.0];[140493, 6905] 216 | Case11_11.nii;[139883.0, 8342.0];[140493, 6905] 217 | Case11_12.nii;[139925.0, 8300.0];[140493, 6905] 218 | Case11_13.nii;[140159.0, 8066.0];[140493, 6905] 219 | Case11_14.nii;[143073.0, 5152.0];[140493, 6905] 220 | Case11_15.nii;[144629.0, 3596.0];[140493, 6905] 221 | Case11_16.nii;[146693.0, 1532.0];[140493, 6905] 222 | Case11_17.nii;[148225.0, 0.0];[148225, 0] 223 | Case11_18.nii;[148225.0, 0.0];[148225, 0] 224 | Case11_19.nii;[148225.0, 0.0];[148225, 0] 225 | Case11_2.nii;[148225.0, 0.0];[148225, 0] 226 | Case11_3.nii;[147037.0, 1188.0];[140493, 6905] 227 | Case11_4.nii;[146409.0, 1816.0];[140493, 6905] 228 | Case11_5.nii;[145233.0, 2992.0];[140493, 6905] 229 | Case11_6.nii;[144417.0, 3808.0];[140493, 6905] 230 | Case11_7.nii;[143132.0, 5093.0];[140493, 6905] 231 | Case11_8.nii;[142459.0, 5766.0];[140493, 6905] 232 | Case11_9.nii;[141386.0, 6839.0];[140493, 6905] 233 | Case12_0.nii;[148225.0, 0.0];[148225, 0] 234 | Case12_1.nii;[146805.0, 1420.0];[140493, 6905] 235 | Case12_10.nii;[137336.0, 10889.0];[140493, 6905] 236 | Case12_11.nii;[136984.0, 11241.0];[140493, 6905] 237 | Case12_12.nii;[140722.0, 7503.0];[140493, 6905] 238 | Case12_13.nii;[142767.0, 5458.0];[140493, 6905] 239 | Case12_14.nii;[144997.0, 3228.0];[140493, 6905] 240 | Case12_2.nii;[145909.0, 2316.0];[140493, 6905] 241 | Case12_3.nii;[143552.0, 4673.0];[140493, 6905] 242 | Case12_4.nii;[140377.0, 7848.0];[140493, 6905] 243 | Case12_5.nii;[138671.0, 9554.0];[140493, 6905] 244 | Case12_6.nii;[137384.0, 10841.0];[140493, 6905] 245 | Case12_7.nii;[136405.0, 11820.0];[140493, 6905] 246 | Case12_8.nii;[135452.0, 12773.0];[140493, 6905] 247 | Case12_9.nii;[135981.0, 12244.0];[140493, 6905] 248 | 
Case13_0.nii;[148225.0, 0.0];[148225, 0] 249 | Case13_1.nii;[148225.0, 0.0];[148225, 0] 250 | Case13_10.nii;[143463.0, 4762.0];[140493, 6905] 251 | Case13_11.nii;[142957.0, 5268.0];[140493, 6905] 252 | Case13_12.nii;[144496.0, 3729.0];[140493, 6905] 253 | Case13_13.nii;[144428.0, 3797.0];[140493, 6905] 254 | Case13_14.nii;[146525.0, 1700.0];[140493, 6905] 255 | Case13_15.nii;[148225.0, 0.0];[148225, 0] 256 | Case13_16.nii;[148225.0, 0.0];[148225, 0] 257 | Case13_17.nii;[148225.0, 0.0];[148225, 0] 258 | Case13_18.nii;[148225.0, 0.0];[148225, 0] 259 | Case13_19.nii;[148225.0, 0.0];[148225, 0] 260 | Case13_2.nii;[148225.0, 0.0];[148225, 0] 261 | Case13_3.nii;[148225.0, 0.0];[148225, 0] 262 | Case13_4.nii;[148225.0, 0.0];[148225, 0] 263 | Case13_5.nii;[147123.0, 1102.0];[140493, 6905] 264 | Case13_6.nii;[146762.0, 1463.0];[140493, 6905] 265 | Case13_7.nii;[146231.0, 1994.0];[140493, 6905] 266 | Case13_8.nii;[145448.0, 2777.0];[140493, 6905] 267 | Case13_9.nii;[144593.0, 3632.0];[140493, 6905] 268 | Case14_0.nii;[148225.0, 0.0];[148225, 0] 269 | Case14_1.nii;[148225.0, 0.0];[148225, 0] 270 | Case14_10.nii;[142279.0, 5946.0];[140493, 6905] 271 | Case14_11.nii;[142080.0, 6145.0];[140493, 6905] 272 | Case14_12.nii;[142275.0, 5950.0];[140493, 6905] 273 | Case14_13.nii;[142716.0, 5509.0];[140493, 6905] 274 | Case14_14.nii;[143491.0, 4734.0];[140493, 6905] 275 | Case14_15.nii;[144726.0, 3499.0];[140493, 6905] 276 | Case14_16.nii;[148225.0, 0.0];[148225, 0] 277 | Case14_17.nii;[148225.0, 0.0];[148225, 0] 278 | Case14_18.nii;[148225.0, 0.0];[148225, 0] 279 | Case14_19.nii;[148225.0, 0.0];[148225, 0] 280 | Case14_2.nii;[148225.0, 0.0];[148225, 0] 281 | Case14_3.nii;[148225.0, 0.0];[148225, 0] 282 | Case14_4.nii;[146687.0, 1538.0];[140493, 6905] 283 | Case14_5.nii;[146541.0, 1684.0];[140493, 6905] 284 | Case14_6.nii;[146139.0, 2086.0];[140493, 6905] 285 | Case14_7.nii;[144891.0, 3334.0];[140493, 6905] 286 | Case14_8.nii;[143691.0, 4534.0];[140493, 6905] 287 | Case14_9.nii;[142958.0, 5267.0];[140493, 6905] 288 | Case15_0.nii;[148225.0, 0.0];[148225, 0] 289 | Case15_1.nii;[148225.0, 0.0];[148225, 0] 290 | Case15_10.nii;[140264.0, 7961.0];[140493, 6905] 291 | Case15_11.nii;[140247.0, 7978.0];[140493, 6905] 292 | Case15_12.nii;[140569.0, 7656.0];[140493, 6905] 293 | Case15_13.nii;[142834.0, 5391.0];[140493, 6905] 294 | Case15_14.nii;[144597.0, 3628.0];[140493, 6905] 295 | Case15_15.nii;[147813.0, 412.0];[140493, 6905] 296 | Case15_16.nii;[148225.0, 0.0];[148225, 0] 297 | Case15_17.nii;[148225.0, 0.0];[148225, 0] 298 | Case15_18.nii;[148225.0, 0.0];[148225, 0] 299 | Case15_19.nii;[148225.0, 0.0];[148225, 0] 300 | Case15_2.nii;[148225.0, 0.0];[148225, 0] 301 | Case15_3.nii;[147338.0, 887.0];[140493, 6905] 302 | Case15_4.nii;[146703.0, 1522.0];[140493, 6905] 303 | Case15_5.nii;[145497.0, 2728.0];[140493, 6905] 304 | Case15_6.nii;[144050.0, 4175.0];[140493, 6905] 305 | Case15_7.nii;[143167.0, 5058.0];[140493, 6905] 306 | Case15_8.nii;[142042.0, 6183.0];[140493, 6905] 307 | Case15_9.nii;[141095.0, 7130.0];[140493, 6905] 308 | Case16_0.nii;[141606.0, 6619.0];[140493, 6905] 309 | Case16_1.nii;[138737.0, 9488.0];[140493, 6905] 310 | Case16_10.nii;[128120.0, 20105.0];[140493, 6905] 311 | Case16_11.nii;[127035.0, 21190.0];[140493, 6905] 312 | Case16_12.nii;[126971.0, 21254.0];[140493, 6905] 313 | Case16_13.nii;[129396.0, 18829.0];[140493, 6905] 314 | Case16_14.nii;[129711.0, 18514.0];[140493, 6905] 315 | Case16_15.nii;[130404.0, 17821.0];[140493, 6905] 316 | Case16_16.nii;[130862.0, 17363.0];[140493, 6905] 317 | 
Case16_17.nii;[132060.0, 16165.0];[140493, 6905] 318 | Case16_18.nii;[135385.0, 12840.0];[140493, 6905] 319 | Case16_19.nii;[139298.0, 8927.0];[140493, 6905] 320 | Case16_2.nii;[135793.0, 12432.0];[140493, 6905] 321 | Case16_3.nii;[135327.0, 12898.0];[140493, 6905] 322 | Case16_4.nii;[133541.0, 14684.0];[140493, 6905] 323 | Case16_5.nii;[132209.0, 16016.0];[140493, 6905] 324 | Case16_6.nii;[130679.0, 17546.0];[140493, 6905] 325 | Case16_7.nii;[129703.0, 18522.0];[140493, 6905] 326 | Case16_8.nii;[128784.0, 19441.0];[140493, 6905] 327 | Case16_9.nii;[128423.0, 19802.0];[140493, 6905] 328 | Case17_0.nii;[148225.0, 0.0];[148225, 0] 329 | Case17_1.nii;[148225.0, 0.0];[148225, 0] 330 | Case17_10.nii;[142984.0, 5241.0];[140493, 6905] 331 | Case17_11.nii;[143106.0, 5119.0];[140493, 6905] 332 | Case17_12.nii;[143545.0, 4680.0];[140493, 6905] 333 | Case17_13.nii;[144014.0, 4211.0];[140493, 6905] 334 | Case17_14.nii;[145408.0, 2817.0];[140493, 6905] 335 | Case17_15.nii;[148225.0, 0.0];[148225, 0] 336 | Case17_16.nii;[148225.0, 0.0];[148225, 0] 337 | Case17_17.nii;[148225.0, 0.0];[148225, 0] 338 | Case17_18.nii;[148225.0, 0.0];[148225, 0] 339 | Case17_19.nii;[148225.0, 0.0];[148225, 0] 340 | Case17_2.nii;[148225.0, 0.0];[148225, 0] 341 | Case17_3.nii;[148225.0, 0.0];[148225, 0] 342 | Case17_4.nii;[148225.0, 0.0];[148225, 0] 343 | Case17_5.nii;[147202.0, 1023.0];[140493, 6905] 344 | Case17_6.nii;[146471.0, 1754.0];[140493, 6905] 345 | Case17_7.nii;[145260.0, 2965.0];[140493, 6905] 346 | Case17_8.nii;[144245.0, 3980.0];[140493, 6905] 347 | Case17_9.nii;[143621.0, 4604.0];[140493, 6905] 348 | Case18_0.nii;[148225.0, 0.0];[148225, 0] 349 | Case18_1.nii;[145759.0, 2466.0];[140493, 6905] 350 | Case18_10.nii;[139249.0, 8976.0];[140493, 6905] 351 | Case18_11.nii;[139771.0, 8454.0];[140493, 6905] 352 | Case18_12.nii;[140754.0, 7471.0];[140493, 6905] 353 | Case18_13.nii;[141539.0, 6686.0];[140493, 6905] 354 | Case18_14.nii;[142813.0, 5412.0];[140493, 6905] 355 | Case18_15.nii;[143693.0, 4532.0];[140493, 6905] 356 | Case18_16.nii;[144723.0, 3502.0];[140493, 6905] 357 | Case18_17.nii;[146785.0, 1440.0];[140493, 6905] 358 | Case18_2.nii;[144515.0, 3710.0];[140493, 6905] 359 | Case18_3.nii;[143128.0, 5097.0];[140493, 6905] 360 | Case18_4.nii;[141943.0, 6282.0];[140493, 6905] 361 | Case18_5.nii;[141571.0, 6654.0];[140493, 6905] 362 | Case18_6.nii;[140166.0, 8059.0];[140493, 6905] 363 | Case18_7.nii;[139305.0, 8920.0];[140493, 6905] 364 | Case18_8.nii;[139184.0, 9041.0];[140493, 6905] 365 | Case18_9.nii;[138926.0, 9299.0];[140493, 6905] 366 | Case19_0.nii;[148225.0, 0.0];[148225, 0] 367 | Case19_1.nii;[148225.0, 0.0];[148225, 0] 368 | Case19_10.nii;[144827.0, 3398.0];[140493, 6905] 369 | Case19_11.nii;[144798.0, 3427.0];[140493, 6905] 370 | Case19_12.nii;[145542.0, 2683.0];[140493, 6905] 371 | Case19_13.nii;[147504.0, 721.0];[140493, 6905] 372 | Case19_14.nii;[148225.0, 0.0];[148225, 0] 373 | Case19_15.nii;[148225.0, 0.0];[148225, 0] 374 | Case19_16.nii;[148225.0, 0.0];[148225, 0] 375 | Case19_17.nii;[148225.0, 0.0];[148225, 0] 376 | Case19_18.nii;[148225.0, 0.0];[148225, 0] 377 | Case19_19.nii;[148225.0, 0.0];[148225, 0] 378 | Case19_2.nii;[148225.0, 0.0];[148225, 0] 379 | Case19_3.nii;[148225.0, 0.0];[148225, 0] 380 | Case19_4.nii;[147577.0, 648.0];[140493, 6905] 381 | Case19_5.nii;[147104.0, 1121.0];[140493, 6905] 382 | Case19_6.nii;[146232.0, 1993.0];[140493, 6905] 383 | Case19_7.nii;[145012.0, 3213.0];[140493, 6905] 384 | Case19_8.nii;[144493.0, 3732.0];[140493, 6905] 385 | Case19_9.nii;[144639.0, 
3586.0];[140493, 6905] 386 | Case20_0.nii;[148225.0, 0.0];[148225, 0] 387 | Case20_1.nii;[148225.0, 0.0];[148225, 0] 388 | Case20_10.nii;[138477.0, 9748.0];[140493, 6905] 389 | Case20_11.nii;[138637.0, 9588.0];[140493, 6905] 390 | Case20_12.nii;[139566.0, 8659.0];[140493, 6905] 391 | Case20_13.nii;[141129.0, 7096.0];[140493, 6905] 392 | Case20_14.nii;[142369.0, 5856.0];[140493, 6905] 393 | Case20_15.nii;[144469.0, 3756.0];[140493, 6905] 394 | Case20_16.nii;[146094.0, 2131.0];[140493, 6905] 395 | Case20_17.nii;[146483.0, 1742.0];[140493, 6905] 396 | Case20_18.nii;[148225.0, 0.0];[148225, 0] 397 | Case20_19.nii;[148225.0, 0.0];[148225, 0] 398 | Case20_2.nii;[146468.0, 1757.0];[140493, 6905] 399 | Case20_3.nii;[144805.0, 3420.0];[140493, 6905] 400 | Case20_4.nii;[143895.0, 4330.0];[140493, 6905] 401 | Case20_5.nii;[142410.0, 5815.0];[140493, 6905] 402 | Case20_6.nii;[141351.0, 6874.0];[140493, 6905] 403 | Case20_7.nii;[140239.0, 7986.0];[140493, 6905] 404 | Case20_8.nii;[139799.0, 8426.0];[140493, 6905] 405 | Case20_9.nii;[139009.0, 9216.0];[140493, 6905] 406 | Case21_0.nii;[148225.0, 0.0];[148225, 0] 407 | Case21_1.nii;[148225.0, 0.0];[148225, 0] 408 | Case21_10.nii;[141541.0, 6684.0];[140493, 6905] 409 | Case21_11.nii;[141751.0, 6474.0];[140493, 6905] 410 | Case21_12.nii;[143771.0, 4454.0];[140493, 6905] 411 | Case21_13.nii;[144807.0, 3418.0];[140493, 6905] 412 | Case21_14.nii;[145714.0, 2511.0];[140493, 6905] 413 | Case21_15.nii;[147213.0, 1012.0];[140493, 6905] 414 | Case21_16.nii;[147852.0, 373.0];[140493, 6905] 415 | Case21_17.nii;[148225.0, 0.0];[148225, 0] 416 | Case21_18.nii;[148225.0, 0.0];[148225, 0] 417 | Case21_19.nii;[148225.0, 0.0];[148225, 0] 418 | Case21_2.nii;[148225.0, 0.0];[148225, 0] 419 | Case21_3.nii;[148225.0, 0.0];[148225, 0] 420 | Case21_4.nii;[146801.0, 1424.0];[140493, 6905] 421 | Case21_5.nii;[144883.0, 3342.0];[140493, 6905] 422 | Case21_6.nii;[143482.0, 4743.0];[140493, 6905] 423 | Case21_7.nii;[143093.0, 5132.0];[140493, 6905] 424 | Case21_8.nii;[142173.0, 6052.0];[140493, 6905] 425 | Case21_9.nii;[141814.0, 6411.0];[140493, 6905] 426 | Case22_0.nii;[148225.0, 0.0];[148225, 0] 427 | Case22_1.nii;[148225.0, 0.0];[148225, 0] 428 | Case22_10.nii;[140066.0, 8159.0];[140493, 6905] 429 | Case22_11.nii;[140487.0, 7738.0];[140493, 6905] 430 | Case22_12.nii;[141531.0, 6694.0];[140493, 6905] 431 | Case22_13.nii;[142428.0, 5797.0];[140493, 6905] 432 | Case22_14.nii;[144157.0, 4068.0];[140493, 6905] 433 | Case22_15.nii;[145949.0, 2276.0];[140493, 6905] 434 | Case22_16.nii;[146499.0, 1726.0];[140493, 6905] 435 | Case22_17.nii;[147785.0, 440.0];[140493, 6905] 436 | Case22_18.nii;[148225.0, 0.0];[148225, 0] 437 | Case22_19.nii;[148225.0, 0.0];[148225, 0] 438 | Case22_2.nii;[146798.0, 1427.0];[140493, 6905] 439 | Case22_3.nii;[146514.0, 1711.0];[140493, 6905] 440 | Case22_4.nii;[145512.0, 2713.0];[140493, 6905] 441 | Case22_5.nii;[144185.0, 4040.0];[140493, 6905] 442 | Case22_6.nii;[142764.0, 5461.0];[140493, 6905] 443 | Case22_7.nii;[141557.0, 6668.0];[140493, 6905] 444 | Case22_8.nii;[140378.0, 7847.0];[140493, 6905] 445 | Case22_9.nii;[139992.0, 8233.0];[140493, 6905] 446 | Case23_0.nii;[148225.0, 0.0];[148225, 0] 447 | Case23_1.nii;[147104.0, 1121.0];[140493, 6905] 448 | Case23_10.nii;[140251.0, 7974.0];[140493, 6905] 449 | Case23_11.nii;[140431.0, 7794.0];[140493, 6905] 450 | Case23_12.nii;[140708.0, 7517.0];[140493, 6905] 451 | Case23_13.nii;[140546.0, 7679.0];[140493, 6905] 452 | Case23_14.nii;[141519.0, 6706.0];[140493, 6905] 453 | Case23_15.nii;[142721.0, 
5504.0];[140493, 6905] 454 | Case23_16.nii;[143118.0, 5107.0];[140493, 6905] 455 | Case23_17.nii;[143949.0, 4276.0];[140493, 6905] 456 | Case23_18.nii;[144157.0, 4068.0];[140493, 6905] 457 | Case23_19.nii;[145888.0, 2337.0];[140493, 6905] 458 | Case23_2.nii;[146105.0, 2120.0];[140493, 6905] 459 | Case23_3.nii;[144130.0, 4095.0];[140493, 6905] 460 | Case23_4.nii;[142788.0, 5437.0];[140493, 6905] 461 | Case23_5.nii;[141996.0, 6229.0];[140493, 6905] 462 | Case23_6.nii;[141156.0, 7069.0];[140493, 6905] 463 | Case23_7.nii;[140520.0, 7705.0];[140493, 6905] 464 | Case23_8.nii;[140195.0, 8030.0];[140493, 6905] 465 | Case23_9.nii;[140283.0, 7942.0];[140493, 6905] 466 | Case24_0.nii;[148225.0, 0.0];[148225, 0] 467 | Case24_1.nii;[148225.0, 0.0];[148225, 0] 468 | Case24_10.nii;[141231.0, 6994.0];[140493, 6905] 469 | Case24_11.nii;[141441.0, 6784.0];[140493, 6905] 470 | Case24_12.nii;[142004.0, 6221.0];[140493, 6905] 471 | Case24_13.nii;[144190.0, 4035.0];[140493, 6905] 472 | Case24_14.nii;[144288.0, 3937.0];[140493, 6905] 473 | Case24_15.nii;[145158.0, 3067.0];[140493, 6905] 474 | Case24_16.nii;[145880.0, 2345.0];[140493, 6905] 475 | Case24_17.nii;[147101.0, 1124.0];[140493, 6905] 476 | Case24_18.nii;[148225.0, 0.0];[148225, 0] 477 | Case24_2.nii;[146959.0, 1266.0];[140493, 6905] 478 | Case24_3.nii;[145953.0, 2272.0];[140493, 6905] 479 | Case24_4.nii;[145703.0, 2522.0];[140493, 6905] 480 | Case24_5.nii;[144610.0, 3615.0];[140493, 6905] 481 | Case24_6.nii;[143618.0, 4607.0];[140493, 6905] 482 | Case24_7.nii;[142514.0, 5711.0];[140493, 6905] 483 | Case24_8.nii;[142304.0, 5921.0];[140493, 6905] 484 | Case24_9.nii;[141422.0, 6803.0];[140493, 6905] 485 | Case25_0.nii;[148225.0, 0.0];[148225, 0] 486 | Case25_1.nii;[147723.0, 502.0];[140493, 6905] 487 | Case25_10.nii;[140640.0, 7585.0];[140493, 6905] 488 | Case25_11.nii;[140911.0, 7314.0];[140493, 6905] 489 | Case25_12.nii;[141458.0, 6767.0];[140493, 6905] 490 | Case25_13.nii;[141866.0, 6359.0];[140493, 6905] 491 | Case25_14.nii;[142278.0, 5947.0];[140493, 6905] 492 | Case25_15.nii;[142131.0, 6094.0];[140493, 6905] 493 | Case25_16.nii;[142826.0, 5399.0];[140493, 6905] 494 | Case25_17.nii;[143487.0, 4738.0];[140493, 6905] 495 | Case25_18.nii;[148225.0, 0.0];[148225, 0] 496 | Case25_19.nii;[148225.0, 0.0];[148225, 0] 497 | Case25_2.nii;[146722.0, 1503.0];[140493, 6905] 498 | Case25_3.nii;[145506.0, 2719.0];[140493, 6905] 499 | Case25_4.nii;[144214.0, 4011.0];[140493, 6905] 500 | Case25_5.nii;[143118.0, 5107.0];[140493, 6905] 501 | Case25_6.nii;[142077.0, 6148.0];[140493, 6905] 502 | Case25_7.nii;[140277.0, 7948.0];[140493, 6905] 503 | Case25_8.nii;[139760.0, 8465.0];[140493, 6905] 504 | Case25_9.nii;[138849.0, 9376.0];[140493, 6905] 505 | Case26_0.nii;[148225.0, 0.0];[148225, 0] 506 | Case26_1.nii;[148225.0, 0.0];[148225, 0] 507 | Case26_10.nii;[139476.0, 8749.0];[140493, 6905] 508 | Case26_11.nii;[139412.0, 8813.0];[140493, 6905] 509 | Case26_12.nii;[139819.0, 8406.0];[140493, 6905] 510 | Case26_13.nii;[141639.0, 6586.0];[140493, 6905] 511 | Case26_14.nii;[144290.0, 3935.0];[140493, 6905] 512 | Case26_15.nii;[145252.0, 2973.0];[140493, 6905] 513 | Case26_16.nii;[147659.0, 566.0];[140493, 6905] 514 | Case26_17.nii;[148225.0, 0.0];[148225, 0] 515 | Case26_18.nii;[148225.0, 0.0];[148225, 0] 516 | Case26_19.nii;[148225.0, 0.0];[148225, 0] 517 | Case26_2.nii;[146979.0, 1246.0];[140493, 6905] 518 | Case26_3.nii;[146354.0, 1871.0];[140493, 6905] 519 | Case26_4.nii;[145420.0, 2805.0];[140493, 6905] 520 | Case26_5.nii;[144513.0, 3712.0];[140493, 6905] 521 | 
Case26_6.nii;[143369.0, 4856.0];[140493, 6905] 522 | Case26_7.nii;[141401.0, 6824.0];[140493, 6905] 523 | Case26_8.nii;[140625.0, 7600.0];[140493, 6905] 524 | Case26_9.nii;[139936.0, 8289.0];[140493, 6905] 525 | Case27_0.nii;[144806.0, 3419.0];[140493, 6905] 526 | Case27_1.nii;[143518.0, 4707.0];[140493, 6905] 527 | Case27_10.nii;[142750.0, 5475.0];[140493, 6905] 528 | Case27_11.nii;[144828.0, 3397.0];[140493, 6905] 529 | Case27_12.nii;[146658.0, 1567.0];[140493, 6905] 530 | Case27_13.nii;[147256.0, 969.0];[140493, 6905] 531 | Case27_14.nii;[148225.0, 0.0];[148225, 0] 532 | Case27_2.nii;[141441.0, 6784.0];[140493, 6905] 533 | Case27_3.nii;[139098.0, 9127.0];[140493, 6905] 534 | Case27_4.nii;[138013.0, 10212.0];[140493, 6905] 535 | Case27_5.nii;[136858.0, 11367.0];[140493, 6905] 536 | Case27_6.nii;[136785.0, 11440.0];[140493, 6905] 537 | Case27_7.nii;[136524.0, 11701.0];[140493, 6905] 538 | Case27_8.nii;[137318.0, 10907.0];[140493, 6905] 539 | Case27_9.nii;[141160.0, 7065.0];[140493, 6905] 540 | Case28_0.nii;[148225.0, 0.0];[148225, 0] 541 | Case28_1.nii;[148225.0, 0.0];[148225, 0] 542 | Case28_10.nii;[139969.0, 8256.0];[140493, 6905] 543 | Case28_11.nii;[140482.0, 7743.0];[140493, 6905] 544 | Case28_12.nii;[141041.0, 7184.0];[140493, 6905] 545 | Case28_13.nii;[143828.0, 4397.0];[140493, 6905] 546 | Case28_14.nii;[144480.0, 3745.0];[140493, 6905] 547 | Case28_15.nii;[145614.0, 2611.0];[140493, 6905] 548 | Case28_16.nii;[145757.0, 2468.0];[140493, 6905] 549 | Case28_17.nii;[147272.0, 953.0];[140493, 6905] 550 | Case28_18.nii;[147298.0, 927.0];[140493, 6905] 551 | Case28_19.nii;[147636.0, 589.0];[140493, 6905] 552 | Case28_2.nii;[148225.0, 0.0];[148225, 0] 553 | Case28_3.nii;[148225.0, 0.0];[148225, 0] 554 | Case28_4.nii;[147262.0, 963.0];[140493, 6905] 555 | Case28_5.nii;[145835.0, 2390.0];[140493, 6905] 556 | Case28_6.nii;[143685.0, 4540.0];[140493, 6905] 557 | Case28_7.nii;[142307.0, 5918.0];[140493, 6905] 558 | Case28_8.nii;[141339.0, 6886.0];[140493, 6905] 559 | Case28_9.nii;[140605.0, 7620.0];[140493, 6905] 560 | Case29_0.nii;[148225.0, 0.0];[148225, 0] 561 | Case29_1.nii;[148225.0, 0.0];[148225, 0] 562 | Case29_10.nii;[142841.0, 5384.0];[140493, 6905] 563 | Case29_11.nii;[142024.0, 6201.0];[140493, 6905] 564 | Case29_12.nii;[141592.0, 6633.0];[140493, 6905] 565 | Case29_13.nii;[141553.0, 6672.0];[140493, 6905] 566 | Case29_14.nii;[143027.0, 5198.0];[140493, 6905] 567 | Case29_15.nii;[147455.0, 770.0];[140493, 6905] 568 | Case29_16.nii;[148225.0, 0.0];[148225, 0] 569 | Case29_17.nii;[148225.0, 0.0];[148225, 0] 570 | Case29_18.nii;[148225.0, 0.0];[148225, 0] 571 | Case29_19.nii;[148225.0, 0.0];[148225, 0] 572 | Case29_2.nii;[148225.0, 0.0];[148225, 0] 573 | Case29_3.nii;[148225.0, 0.0];[148225, 0] 574 | Case29_4.nii;[148225.0, 0.0];[148225, 0] 575 | Case29_5.nii;[148225.0, 0.0];[148225, 0] 576 | Case29_6.nii;[148225.0, 0.0];[148225, 0] 577 | Case29_7.nii;[146339.0, 1886.0];[140493, 6905] 578 | Case29_8.nii;[145225.0, 3000.0];[140493, 6905] 579 | Case29_9.nii;[144336.0, 3889.0];[140493, 6905] 580 | -------------------------------------------------------------------------------- /sizes/readme.md: -------------------------------------------------------------------------------- 1 | This folder should contain the size prior files in csv format for each application. These files contain the size in pixels of each structure of interest + background. Beware of the separator ";" or ",". This can be parametrized in the makefile. 
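For illustration, here is a minimal parsing sketch (not part of the repository; the helper name `load_size_priors` and the use of pandas are assumptions) showing how such a size file can be read back, whichever separator it uses:

```python
# Hypothetical helper: read a size-prior csv (e.g. sizes/prostate.csv) into a
# dict mapping each slice id to its list of per-class pixel counts. Column
# names follow the files above ("val_ids", "dumbpredwtags").
import ast
import pandas as pd

def load_size_priors(csv_path: str, sep: str = ",", col: str = "dumbpredwtags") -> dict:
    df = pd.read_csv(csv_path, sep=sep)
    # Each cell stores a stringified list such as "[140225, 8000]".
    return {row["val_ids"]: ast.literal_eval(row[col]) for _, row in df.iterrows()}

priors = load_size_priors("sizes/prostate.csv", sep=",")  # prostate_.csv would need sep=";"
counts = priors["Case00_0.nii"]                           # e.g. [140225, 8000]
ratios = [c / sum(counts) for c in counts]                # counts sum to the slice area (148225 = 385*385 here)
```

Whichever separator a size file uses must also be the one declared in the makefile (see the `'sep':','` entry of the `EntKLProp` target loss in prostate.make).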
2 | 
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3.6
2 | 
3 | from random import random, randint, uniform  # randint and uniform are used by augment() below
4 | from pathlib import Path
5 | from multiprocessing.pool import Pool
6 | import os
7 | from typing import Any, Callable, Iterable, List, Set, Tuple, TypeVar, Union
8 | import scipy as sp
9 | import torch
10 | import numpy as np
11 | from tqdm import tqdm
12 | from torch import einsum
13 | from torch import Tensor
14 | from functools import partial, reduce
15 | from skimage.io import imsave, imread
16 | from PIL import Image, ImageOps
17 | from scipy.spatial.distance import directed_hausdorff
18 | import torch.nn as nn
19 | #import pydensecrf.densecrf as dcrf
20 | #from pydensecrf.utils import unary_from_labels
21 | #from pydensecrf.utils import unary_from_softmax
22 | import matplotlib.pyplot as plt
23 | import matplotlib.gridspec as gridspec
24 | from typing import Dict
25 | from PIL import Image, ImageOps
26 | import nibabel as nib
27 | import warnings
28 | import re
29 | import argparse  # used by str2bool() below
30 | # functions redefinitions
31 | tqdm_ = partial(tqdm, ncols=125,
32 |                 leave=False,
33 |                 bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [' '{rate_fmt}{postfix}]')
34 | 
35 | A = TypeVar("A")
36 | B = TypeVar("B")
37 | T = TypeVar("T", Tensor, np.ndarray)
38 | 
39 | def get_subj_nb(filenames_vec):
40 |     subj_nb = [int(re.split(r'(\d+)', x)[1]) for x in filenames_vec]
41 |     return subj_nb
42 | 
43 | 
44 | 
45 | def get_weights(list1):
46 |     if len(list1) == 5:
47 |         m0, m1, m2, m3, m4 = list1
48 |     elif len(list1) == 4:
49 |         m1, m2, m3, m4 = list1
50 |         m0 = 256*256 - m1 - m2 - m3 - m4
51 |         list1 = [m0] + list1
52 |     elif len(list1) == 1:
53 |         m1 = list1[0]
54 |         m0 = 256*256 - m1  # assuming 256x256 slices, as in the branch above (source had "256*36")
55 |         list1 = [m0] + list1
56 |     inv = [1/m for m in list1]
57 |     N = np.sum(inv)
58 |     w_vec = [i/N for i in inv]
59 |     print(np.sum(w_vec))
60 |     return np.round(w_vec, 2)
61 | 
62 | 
63 | def Nmaxelements(list1, N):
64 |     final_list = []
65 | 
66 |     for i in range(0, N):
67 |         max1 = 0
68 | 
69 |         for j in range(len(list1)):
70 |             if list1[j] > max1:
71 |                 max1 = list1[j]
72 | 
73 |         list1.remove(max1)
74 |         final_list.append(max1)
75 | 
76 |     return final_list
77 | 
78 | def get_optimal_crop(size_in_mr, size_in_ct):
79 |     a = 256*256
80 |     area = (size_in_mr * a) / size_in_ct
81 |     length = np.sqrt(area)
82 |     crop = np.round((256-length)/2, 0)
83 |     return crop
84 | 
85 | 
86 | def read_anyformat_image(filename):
87 |     with warnings.catch_warnings():
88 |         warnings.filterwarnings("ignore", category=UserWarning)
89 |         #print(filename)
90 |         #acc = imread(filename)
91 | 
92 |         if Path(filename).suffix == ".png":
93 |             acc: np.ndarray = imread(filename)
94 |         elif Path(filename).suffix == ".npy":
95 |             acc: np.ndarray = np.load(filename)
96 |         elif Path(filename).suffix == ".nii":
97 |             acc: np.ndarray = read_nii_image(filename)
98 |         acc = np.squeeze(acc)
99 |         return acc
100 | 
101 | 
102 | def read_unknownformat_image(filename):
103 |     with warnings.catch_warnings():
104 |         warnings.filterwarnings("ignore", category=UserWarning)
105 |         #acc = imread(filename)
106 |         print(filename)
107 |         try:
108 |             if Path(filename).suffix == ".png":
109 |                 acc: np.ndarray = imread(filename)
110 |             elif Path(filename).suffix == ".npy":
111 |                 acc: np.ndarray = np.load(filename)
112 |             elif Path(filename).suffix == ".nii":
113 |                 acc: np.ndarray = read_nii_image(filename)
114 |             #acc = np.squeeze(acc)
115 |         except Exception:  # reading failed: fall back to a .png with the same stem
116 |             #print('changing extension')
117 |             filename = os.path.splitext(filename)[0] + ".png"
118 |             acc: np.ndarray = imread(filename)
119 |             acc = np.expand_dims(acc, 0)
120 |         return acc
121 | 
122 | 
123 | 
124 | 
125 | def read_nii_image(input_fid):
126 |     """read the nii image data into numpy array"""
127 |     img = nib.load(str(input_fid))
128 |     return np.asanyarray(img.dataobj)  # dtype-preserving replacement for the deprecated img.get_data()
129 | 
130 | def exp_lr_scheduler(optimizer, epoch, lr_decay=0.1, lr_decay_epoch=20):
131 |     """Decay learning rate by a factor of lr_decay every lr_decay_epoch epochs"""
132 |     if (epoch % lr_decay_epoch) or epoch == 0:
133 |         return optimizer
134 | 
135 |     for param_group in optimizer.param_groups:
136 |         param_group['lr'] *= lr_decay
137 |     return optimizer
138 | 
139 | 
140 | def depth(e: List) -> int:
141 |     """
142 |     Compute the depth of nested lists
143 |     """
144 |     if isinstance(e, list) and e:
145 |         return 1 + depth(e[0])
146 | 
147 |     return 0
148 | 
149 | def iIoU(pred: Tensor, target: Tensor) -> Tensor:
150 |     IoUs = inter_sum(pred, target) / (union_sum(pred, target) + 1e-10)
151 |     assert IoUs.shape == pred.shape[:2]
152 | 
153 |     return IoUs
154 | 
155 | def inter_sum(a: Tensor, b: Tensor) -> Tensor:
156 |     return einsum("bcwh->bc", intersection(a, b).type(torch.float32))
157 | 
158 | 
159 | def union_sum(a: Tensor, b: Tensor) -> Tensor:
160 |     return einsum("bcwh->bc", union(a, b).type(torch.float32))
161 | 
162 | def compose(fns, init):
163 |     return reduce(lambda acc, f: f(acc), fns, init)
164 | 
165 | 
166 | def compose_acc(fns, init):
167 |     return reduce(lambda acc, f: acc + [f(acc[-1])], fns, [init])
168 | 
169 | 
170 | def map_(fn: Callable[[A], B], iter: Iterable[A]) -> List[B]:
171 |     return list(map(fn, iter))
172 | 
173 | 
174 | def mmap_(fn: Callable[[A], B], iter: Iterable[A]) -> List[B]:
175 |     return Pool().map(fn, iter)
176 | 
177 | 
178 | def uc_(fn: Callable) -> Callable:
179 |     return partial(uncurry, fn)
180 | 
181 | 
182 | def uncurry(fn: Callable, args: List[Any]) -> Any:
183 |     return fn(*args)
184 | 
185 | 
186 | def id_(x):
187 |     return x
188 | 
189 | 
190 | # fns
191 | def soft_size(a: Tensor, power: int) -> Tensor:
192 |     return torch.einsum("bcwh->bc", [a])[..., None]  # `power` is unused; kept for signature parity with norm_soft_size
193 | 
194 | def norm_soft_size(a: Tensor, power: int) -> Tensor:
195 |     b, c, w, h = a.shape
196 |     sl_sz = w*h
197 |     amax = a.max(dim=1, keepdim=True)[0] + 1e-10
198 |     #amax = torch.cat(c*[amax], dim=1)
199 |     resp = (torch.div(a, amax))**power
200 |     ress = einsum("bcwh->bc", [resp]).type(torch.float32)
201 |     ress_norm = ress/(torch.sum(ress, dim=1, keepdim=True) + 1e-10)
202 |     #print(torch.sum(ress,dim=1))
203 |     return ress_norm.unsqueeze(2)
204 | 
205 | 
206 | def batch_soft_size(a: Tensor) -> Tensor:
207 |     return torch.einsum("bcwh->c", [a])[..., None]
208 | 
209 | 
210 | def soft_centroid(a: Tensor) -> Tensor:
211 |     b, c, w, h = a.shape
212 | 
213 |     ws, hs = map_(lambda e: Tensor(e).to(a.device).type(torch.float32), np.mgrid[0:w, 0:h])
214 |     assert ws.shape == hs.shape == (w, h)
215 | 
216 |     flotted = a.type(torch.float32)
217 |     tot = einsum("bcwh->bc", [a]).type(torch.float32) + 1e-10
218 | 
219 |     cw = einsum("bcwh,wh->bc", [flotted, ws]) / tot
220 |     ch = einsum("bcwh,wh->bc", [flotted, hs]) / tot
221 | 
222 |     res = torch.stack([cw, ch], dim=2)
223 |     assert res.shape == (b, c, 2)
224 | 
225 |     return res
226 | 
227 | 
228 | def soft_centroid2(a: Tensor) -> Tensor:
229 |     b, c, w, h = a.shape
230 | 
231 |     #ws, hs = map_(lambda e: Tensor(e).to(a.device).type(torch.float32), np.mgrid[0:w, 0:h])  # dead code: overwritten by the centred grid below
232 |     ws, hs = index2cartesian(a)
233 |     assert ws.shape == hs.shape == (w, h)
234 | 
235 |     flotted = a.type(torch.float32)
236 |     tot = einsum("bcwh->bc", [a]).type(torch.float32) + 1e-10
237 | 
238 |     cw = einsum("bcwh,wh->bc", [flotted, ws]) / tot
239 |     ch = einsum("bcwh,wh->bc", [flotted, hs]) / tot
240 | 
241 |     res = torch.stack([cw, ch], dim=2)
242 |     assert res.shape == (b, c, 2)
243 |     return res
244 | 
245 | 
246 | def index2cartesian(t: Tensor) -> Tensor:
247 |     b, c, h, w = t.shape
248 |     grid = np.indices((h, w))
249 |     cart = torch.Tensor(2, h, w).to(t.device).type(torch.float32)
250 |     cart[0, :] = torch.from_numpy(grid[1] - np.floor(w / 2))  # x coord
251 |     cart[1, :] = torch.from_numpy(-grid[0] + np.floor(h / 2))  # y coord
252 |     return cart
253 | 
254 | 
255 | def soft_intensity(a: Tensor, im: Tensor) -> Tensor:
256 |     #b, c, w, h = a.shape
257 | 
258 |     flotted = a.type(torch.float32)
259 |     tot = einsum("bcwh->bc", [a]).type(torch.float32) + 1e-10
260 | 
261 |     si = einsum("bcwh,wh->bc", [flotted, im]) / tot
262 | 
263 |     return si
264 | 
265 | 
266 | # Assert utils
267 | def uniq(a: Tensor) -> Set:
268 |     return set(torch.unique(a.cpu()).numpy())
269 | 
270 | 
271 | def sset(a: Tensor, sub: Iterable) -> bool:
272 |     return uniq(a).issubset(sub)
273 | 
274 | 
275 | def eq(a: Tensor, b) -> bool:
276 |     return torch.eq(a, b).all()
277 | 
278 | 
279 | def simplex(t: Tensor, axis=1, dtype=torch.float32) -> bool:
280 |     _sum = t.sum(axis).type(dtype)
281 |     _ones = torch.ones_like(_sum, dtype=_sum.dtype)
282 |     return torch.allclose(_sum, _ones)
283 | 
284 | 
285 | def one_hot(t: Tensor, axis=1, dtype=torch.float32) -> bool:
286 |     return simplex(t, axis, dtype) and sset(t, [0, 1])
287 | 
288 | 
289 | # Metrics
290 | def meta_dice(sum_str: str, label: Tensor, pred: Tensor, smooth: float = 1e-8, dtype=torch.float32):
291 |     assert label.shape == pred.shape, (label.shape, pred.shape)
292 |     assert one_hot(label)
293 |     assert one_hot(pred)
294 | 
295 |     inter_size: Tensor = einsum(sum_str, [intersection(label, pred)]).type(dtype)
296 |     sum_sizes_label: Tensor = einsum(sum_str, [label]).type(dtype)
297 |     sum_sizes_pred: Tensor = einsum(sum_str, [pred]).type(dtype)
298 | 
299 |     dices: Tensor = (2 * inter_size + smooth) / ((sum_sizes_label + sum_sizes_pred).type(dtype) + smooth)
300 | 
301 |     return dices, inter_size, sum_sizes_label, sum_sizes_pred
302 | 
303 | 
304 | dice_coef = partial(meta_dice, "bcwh->bc")
305 | dice_batch = partial(meta_dice, "bcwh->c")  # used for 3d dice
306 | 
307 | 
308 | def intersection(a: Tensor, b: Tensor) -> Tensor:
309 |     assert a.shape == b.shape
310 |     assert sset(a, [0, 1])
311 |     assert sset(b, [0, 1])
312 |     return a & b
313 | 
314 | 
315 | def union(a: Tensor, b: Tensor) -> Tensor:
316 |     assert a.shape == b.shape
317 |     assert sset(a, [0, 1])
318 |     assert sset(b, [0, 1])
319 |     return a | b
320 | 
321 | 
322 | # switch between representations
323 | def probs2class(probs: Tensor) -> Tensor:
324 |     b, _, w, h = probs.shape  # type: Tuple[int, int, int, int]
325 |     assert simplex(probs)
326 | 
327 |     res = probs.argmax(dim=1)
328 |     assert res.shape == (b, w, h)
329 | 
330 |     return res
331 | 
332 | 
333 | def class2one_hot(seg: Tensor, C: int) -> Tensor:
334 |     if len(seg.shape) == 2:  # Only w, h, used by the dataloader
335 |         seg = seg.unsqueeze(dim=0)
336 |     #print('range classes:',list(range(C)))
337 |     #print('unique seg:',torch.unique(seg))
338 |     #print("setdiff:",set(torch.unique(seg)).difference(list(range(C))))
339 |     assert sset(seg, list(range(C)))
340 | 
341 |     b, w, h = seg.shape  # type: Tuple[int, int, int]
342 | 
343 |     res = torch.stack([seg == c for c in range(C)], dim=1).type(torch.int32)
344 |     assert res.shape == (b, C, w, h)
345 |     assert one_hot(res)
346 | 
347 |     return res
348 | 
349 | 
350 | def probs2one_hot(probs: Tensor) -> Tensor:
351 |     _, C, _, _ = probs.shape
352 |     assert simplex(probs)
353 | 
354 |     res = class2one_hot(probs2class(probs), C)
355 |     assert res.shape == probs.shape
356 |     assert one_hot(res)
357 | 
358 |     return res
359 | 
360 | 
361 | # Misc utils
362 | def save_images_p(segs: Tensor, names: Iterable[str], root: str, mode: str, iter: int, remap: bool = True) -> None:
363 |     b, w, h = segs.shape  # Since we have the class numbers, we do not need a C axis
364 | 
365 |     for seg, name in zip(segs, names):
366 |         seg = seg.cpu().numpy()
367 |         if remap:
368 |             #assert sset(seg, list(range(2)))
369 |             seg[seg == 1] = 255
370 |         #save_path = Path(root, f"iter{iter:03d}", mode, name).with_suffix(".png")
371 |         save_path = Path(root, mode, "WatonInn_pjce", name).with_suffix(".png")
372 |         save_path.parent.mkdir(parents=True, exist_ok=True)
373 |         imsave(str(save_path), seg.astype('uint8'))
374 | 
375 | # Misc utils
376 | def save_be_images(segs: Tensor, names: Iterable[str], root: str, mode: str, iter: int, remap: bool = True) -> None:
377 |     b, w, h = segs.shape  # Since we have the class numbers, we do not need a C axis
378 | 
379 |     for seg, name in zip(segs, names):
380 |         seg = seg.cpu().numpy()
381 |         if remap:
382 |             #assert sset(seg, list(range(2)))
383 |             seg[seg == 1] = 255
384 |         save_path = Path(root, "best", name).with_suffix(".png")
385 |         save_path.parent.mkdir(parents=True, exist_ok=True)
386 |         imsave(str(save_path), seg.astype('uint8'))
387 | 
388 | 
389 | # Misc utils
390 | def save_images(segs: Tensor, names: Iterable[str], root: str, mode: str, iter: int, remap: bool = True) -> None:
391 |     b, w, h = segs.shape  # Since we have the class numbers, we do not need a C axis
392 |     for seg, name in zip(segs, names):
393 |         #print(root,iter,mode,name)
394 |         seg = seg.cpu().numpy()
395 |         if remap:
396 |             #assert sset(seg, list(range(2)))
397 |             seg[seg == 1] = 255
398 |         save_path = Path(root, f"iter{iter:03d}", mode, name).with_suffix(".png")
399 |         save_path.parent.mkdir(parents=True, exist_ok=True)
400 | 
401 |         imsave(str(save_path), seg.astype('uint8'))
402 | 
403 | 
404 | def save_images_ent(segs: Tensor, names: Iterable[str], root: str, mode: str, iter: int) -> None:
405 |     b, w, h = segs.shape  # Since we have the class numbers, we do not need a C axis
406 |     for seg, name in zip(segs, names):
407 |         #print(root,iter,mode,name)
408 |         seg = seg.cpu().numpy()*255  # entropy is smaller than 1
409 |         save_path = Path(root, f"iter{iter:03d}", mode, name).with_suffix(".png")
410 |         save_path.parent.mkdir(parents=True, exist_ok=True)
411 |         imsave(str(save_path), seg.astype('uint8'))
412 | 
413 | # Misc utils
414 | 
415 | def save_images_inf(segs: Tensor, names: Iterable[str], root: str, mode: str, remap: bool = True) -> None:
416 |     b, w, h = segs.shape  # Since we have the class numbers, we do not need a C axis
417 | 
418 |     for seg, name in zip(segs, names):
419 |         seg = seg.cpu().numpy()
420 |         seg = np.round(seg, 1)
421 |         #print(np.unique(seg))
422 |         if remap:
423 |             #assert sset(seg, list(range(2)))
424 |             seg[seg == 1] = 255
425 |         save_path = Path(root, mode, name).with_suffix(".png")
426 |         #save_path = Path(root, mode, name).with_suffix(".npy")
427 |         save_path.parent.mkdir(parents=True, exist_ok=True)
428 | 
429 |         imsave(str(save_path), seg.astype('uint8'))
430 |         #np.save(str(save_path), seg)
431 | 
432 | 
433 | def augment(*arrs: Union[np.ndarray, Image.Image], rotate_angle: float = 45,
434 |             flip: bool = True, mirror: bool = True,
435 |             rotate: bool = True, scale: bool = False) -> List[Image.Image]:
436 |     imgs: List[Image.Image] = map_(Image.fromarray, arrs) if isinstance(arrs[0], np.ndarray) else list(arrs)
437 | 
438 |     if flip and random() > 0.5:
439 |         imgs = map_(ImageOps.flip, imgs)
440 |     if mirror and random() > 0.5:
441 |         imgs = map_(ImageOps.mirror, imgs)
442 |     if rotate and random() > 0.5:
443 |         angle: float = uniform(-rotate_angle, rotate_angle)
444 |         imgs = map_(lambda e: e.rotate(angle), imgs)
445 |     if scale and random() > 0.5:
446 |         scale_factor: float = uniform(1, 1.2)
447 |         w, h = imgs[0].size  # Tuple[int, int]
448 |         nw, nh = int(w * scale_factor), int(h * scale_factor)  # Tuple[int, int]
449 | 
450 |         # Resize
451 |         imgs = map_(lambda i: i.resize((nw, nh)), imgs)
452 | 
453 |         # Now need to crop to original size
454 |         bw, bh = randint(0, nw - w), randint(0, nh - h)  # Tuple[int, int]
455 | 
456 |         imgs = map_(lambda i: i.crop((bw, bh, bw + w, bh + h)), imgs)
457 |         assert all(i.size == (w, h) for i in imgs)
458 | 
459 |     return imgs
460 | 
461 | 
462 | def augment_arr(*arrs_a: np.ndarray) -> List[np.ndarray]:
463 |     arrs = list(arrs_a)  # defensive copy / type normalization
464 | 
465 |     if random() > 0.5:
466 |         arrs = map_(np.flip, arrs)
467 |     if random() > 0.5:
468 |         arrs = map_(np.fliplr, arrs)
469 |     # if random() > 0.5:
470 |     #     orig_shape = arrs[0].shape
471 | 
472 |     #     angle = random() * 90 - 45
473 |     #     arrs = map_(lambda e: sp.ndimage.rotate(e, angle, order=1), arrs)
474 | 
475 |     #     arrs = get_center(orig_shape, *arrs)
476 | 
477 |     return arrs
478 | 
479 | 
480 | 
481 | def get_center(shape: Tuple, *arrs: np.ndarray) -> List[np.ndarray]:
482 |     def g_center(arr):
483 |         if arr.shape == shape:
484 |             return arr
485 | 
486 |         dx = (arr.shape[0] - shape[0]) // 2
487 |         dy = (arr.shape[1] - shape[1]) // 2
488 | 
489 |         if dx == 0 or dy == 0:
490 |             return arr[:shape[0], :shape[1]]
491 | 
492 |         res = arr[dx:-dx, dy:-dy][:shape[0], :shape[1]]  # Deal with off-by-one errors
493 |         assert res.shape == shape, (res.shape, shape, dx, dy)
494 | 
495 |         return res
496 | 
497 |     return [g_center(arr) for arr in arrs]
498 | 
499 | 
500 | def pad_to(img: np.ndarray, new_h, new_w) -> np.ndarray:
501 |     if len(img.shape) == 2:
502 |         h, w = img.shape
503 |     else:
504 |         b, h, w = img.shape
505 |     padd_lr = int((new_w - w) / 2)
506 |     if (new_w - w) / 2 == padd_lr:
507 |         padd_l = padd_lr
508 |         padd_r = padd_lr
509 |     else:  # not divisible by 2
510 |         padd_l = padd_lr
511 |         padd_r = padd_lr + 1
512 |     padd_ud = int((new_h - h) / 2)
513 |     if (new_h - h) / 2 == padd_ud:
514 |         padd_u = padd_ud
515 |         padd_d = padd_ud
516 |     else:  # not divisible by 2
517 |         padd_u = padd_ud
518 |         padd_d = padd_ud + 1
519 | 
520 |     if len(img.shape) == 2:
521 |         new_im = np.pad(img, [(padd_u, padd_d), (padd_l, padd_r)], 'constant')
522 |     else:
523 |         new_im = np.pad(img, [(0, 0), (padd_u, padd_d), (padd_l, padd_r)], 'constant')
524 | 
525 |     #print(img.shape, '-->', new_im.shape)
526 |     return new_im
527 | 
528 | 
529 | def mask_resize(t, new_w):
530 |     b, c, h, w = t.shape
531 |     new_t = t
532 |     if w != new_w:
533 |         device = t.device  # device is an attribute, not a method
534 |         dtype = t.dtype  # likewise for dtype
535 |         padd_lr = int((w - int(new_w)) / 2)
536 |         # pad the (new_w, h) mask back to (w, h); F.pad is used since nn.ZeroPad2d expects a batched input
537 |         mask_resize = torch.ones([new_w, h], dtype=dtype)
538 |         mask_resize_fg = torch.nn.functional.pad(mask_resize, (0, 0, padd_lr, padd_lr))
539 |         mask_resize_bg = 1 - mask_resize_fg
540 |         new_t = torch.einsum('wh,bcwh->bcwh', [mask_resize_fg, t]).to(device)  # zero out the padded border
541 |     return new_t
542 | 
543 | def resize(t, new_w):
544 |     b, c, h, w = t.shape
545 |     new_t = t
546 |     if w != new_w:
547 |         padd_lr = int((w - int(new_w)) / 2)
548 |         new_t = t[:, :, :, padd_lr-1:padd_lr+new_w-1]
549 |     return new_t
550 | 
551 | 
552 | def resize_im(t, new_w):
553 |     w, h = t.shape
554 |     padd_lr = int((w - int(new_w)) / 2)
555 |     new_t = t[:, padd_lr-1:padd_lr+new_w-1]
556 |     return new_t
557 | 
558 | def haus_p(haus_s, all_p):
559 |     _, C = haus_s.shape
560 |     unique_p = torch.unique(all_p)
561 |     haus_all_p = []
562 |     for p in unique_p:
563 |         haus_p = torch.masked_select(haus_s, all_p == p).reshape((-1, C))
564 |         haus_all_p.append(haus_p.mean().cpu().numpy())
565 |     return np.mean(haus_all_p), np.std(haus_all_p)
566 | 
567 | 
568 | def haussdorf(preds: Tensor, target: Tensor, dtype=torch.float32) -> Tensor:
569 |     assert preds.shape == target.shape
570 |     assert one_hot(preds)
571 |     assert one_hot(target)
572 | 
573 |     B, C, _, _ = preds.shape
574 | 
575 |     res = torch.zeros((B, C), dtype=dtype, device=preds.device)
576 |     n_pred = preds.cpu().numpy()
577 |     n_target = target.cpu().numpy()
578 | 
579 |     for b in range(B):
580 |         for c in range(C):
581 |             res[b, c] = numpy_haussdorf(n_pred[b, c], n_target[b, c])
582 | 
583 |     return res
584 | 
585 | def haussdorf_asd(preds: Tensor, target: Tensor, dtype=torch.float32) -> Tensor:
586 |     assert preds.shape == target.shape
587 |     assert one_hot(preds)
588 |     assert one_hot(target)
589 | 
590 |     B, C, _, _ = preds.shape
591 | 
592 |     res = torch.zeros((B, C), dtype=dtype, device=preds.device)
593 |     res2 = torch.zeros((B, C), dtype=dtype, device=preds.device)
594 |     n_pred = preds.cpu().numpy()
595 |     n_target = target.cpu().numpy()
596 | 
597 |     for b in range(B):
598 |         for c in range(C):
599 |             res[b, c] = numpy_haussdorf(n_pred[b, c], n_target[b, c])
600 |             res2[b, c] = numpy_asd(n_pred[b, c], n_target[b, c])
601 | 
602 |     return res, res2  # assumption: both distances are wanted; res2 was computed but never returned
603 | 
604 | def numpy_haussdorf(pred: np.ndarray, target: np.ndarray) -> float:
605 |     assert len(pred.shape) == 2
606 |     assert pred.shape == target.shape
607 | 
608 |     return max(directed_hausdorff(pred, target)[0], directed_hausdorff(target, pred)[0])
609 | 
610 | 
611 | def numpy_asd(pred: np.ndarray, target: np.ndarray):
612 |     assert len(pred.shape) == 2
613 |     assert pred.shape == target.shape
614 |     res = directed_hausdorff(pred, target)
615 |     res = res[0]
616 |     return res  # NB: this is the directed Hausdorff distance, used here as a surface-distance proxy
617 | 
618 | 
619 | def run_CRF(preds: Tensor, nit: int):
620 |     # preds is either probability or hard labeling
621 |     # here done in the two-class case only; requires the pydensecrf imports commented out at the top of the file
622 |     b, c, w, h = preds.shape
623 |     dtype = torch.float32
624 |     output = torch.zeros((b, c, w, h), dtype=dtype, device=preds.device)
625 |     for i in range(0, b):
626 |         im = preds[i, :, :, :]
627 |         hard = sset(im, [0, 1])  # hard labels (one-hot or class form) rather than soft probabilities
628 |         if hard:
629 |             if c > 1:
630 |                 im = class2one_hot(im, c)
631 |             im = im.cpu().numpy()
632 |             u = unary_from_labels(im, c, 0.5, zero_unsure=False)
633 |         else:  # labels in a softmax
634 |             im = im.cpu().numpy()
635 |             u = unary_from_softmax(im)
636 |         d = dcrf.DenseCRF2D(w, h, c)  # width, height, nlabels
637 |         #print(u.shape)
638 |         d.setUnaryEnergy(u)
639 |         # This adds the color-independent term, features are the locations only.
640 |         d.addPairwiseGaussian(sxy=(3, 3), compat=3, kernel=dcrf.DIAG_KERNEL, normalization=dcrf.NORMALIZE_SYMMETRIC)
641 |         Q = d.inference(nit)
642 |         new_im = np.array(Q).reshape((c, w, h))  # soft output, back to (c, w, h)
643 |         if hard:
644 |             new_im = np.argmax(Q, axis=0).reshape((w, h))
645 |         new_im = torch.from_numpy(new_im)
646 |         output[i, :, :, :] = new_im
647 | 
648 |     return output
649 | 
650 | 
651 | def run_CRF_im(im: np.ndarray, nit: int, conf: float = 0.5):
652 |     # preds is either probability or hard labeling
653 |     # here done in the two-class case only
654 |     w, h = im.shape
655 |     output = im
656 |     if set(np.unique(im)) > {0}:
657 |         hard = set(np.unique(im)) <= {0, 255} or set(np.unique(im)) <= {0, 1}  # hard labels in one-hot or class form
658 |         if hard:
659 |             im[im == 255] = 1
660 |             u = unary_from_labels(im, 2, conf, zero_unsure=False)
661 |         else:  # labels in a softmax
662 |             u = unary_from_softmax(im)
663 |         d = dcrf.DenseCRF2D(w, h, 2)  # width, height, nlabels
664 |         #print(u.shape)
665 |         d.setUnaryEnergy(u)
666 |         # This adds the color-independent term, features are the locations only.
667 |         d.addPairwiseGaussian(sxy=(3, 3), compat=3, kernel=dcrf.DIAG_KERNEL, normalization=dcrf.NORMALIZE_SYMMETRIC)
668 |         Q = d.inference(nit)
669 |         new_im = np.array(Q)
670 |         if hard:
671 |             new_im = np.argmax(Q, axis=0).reshape((w, h))
672 |             new_im[new_im == 1] = 255
673 |         output = new_im
674 | 
675 |     return output
676 | 
677 | 
678 | def interp(input):
679 |     _, _, w, h = input.shape
680 |     return nn.functional.interpolate(input, size=(h, w), mode='bilinear')  # nn.Upsample is a module, not a function of the input
681 | 
682 | 
683 | def interp_target(input):
684 |     _, _, w, h = input.shape
685 |     return nn.Upsample(size=(h, w), mode='bilinear')
686 | 
687 | 
688 | def plot_t(input):
689 |     _, c, w, h = input.shape
690 |     axis_to_plot = 1
691 |     if c == 1:
692 |         axis_to_plot = 0
693 |     if input.requires_grad:
694 |         im = input[0, axis_to_plot, :, :].detach().cpu().numpy()
695 |     else:
696 |         im = input[0, axis_to_plot, :, :].cpu().numpy()
697 |     plt.close("all")
698 |     plt.imshow(im, cmap='gray')
699 |     plt.title('plotting on channel:' + str(axis_to_plot))
700 |     plt.colorbar()
701 | 
702 | 
703 | def plot_all(gt_seg, s_seg, t_seg, disc_t):
704 |     _, c, w, h = s_seg.shape
705 |     axis_to_plot = 1
706 |     if c == 1:
707 |         axis_to_plot = 0
708 |     s_seg = s_seg[0, axis_to_plot, :, :].detach().cpu().numpy()
709 |     t_seg = t_seg[0, axis_to_plot, :, :].detach().cpu().numpy()
710 |     gt_seg = gt_seg[0, axis_to_plot, :, :].cpu().numpy()
711 |     disc_t = disc_t[0, 0, :, :].detach().cpu().numpy()
712 |     plt.close("all")
713 |     plt.subplot(141)
714 |     plt.imshow(gt_seg, cmap='gray')
715 |     plt.subplot(142)
716 |     plt.imshow(s_seg, cmap='gray')
717 |     plt.subplot(143)
718 |     plt.imshow(t_seg, cmap='gray')
719 |     plt.subplot(144)
720 |     plt.imshow(disc_t, cmap='gray')
721 |     plt.suptitle('gt, source seg, target seg, disc_t', fontsize=12)
722 |     plt.colorbar()
723 | 
724 | 
725 | def plot_as_viewer(gt_seg, s_seg, t_seg, s_im, t_im):
726 |     _, c, w, h = s_seg.shape
727 |     axis_to_plot = 1
728 |     if c == 1:
729 |         axis_to_plot = 0
730 | 
731 |     s_seg = s_seg[0, axis_to_plot, :, :].detach().cpu().numpy()
732 |     t_seg = t_seg[0, axis_to_plot, :, :].detach().cpu().numpy()
733 |     s_im = s_im[0, 0, :, :].detach().cpu().numpy()
734 |     s_im = resize_im(s_im, s_seg.shape[1])
735 |     t_im = t_im[0, 0, :, :].detach().cpu().numpy()
736 |     t_im = resize_im(t_im, t_seg.shape[1])
737 |     gt_seg = gt_seg[0, axis_to_plot, :, :].cpu().numpy()
738 | 
739 |     plt.close("all")
740 |     fig = plt.figure()
741 |     gs = gridspec.GridSpec(1, 3)
742 | 
743 |     axe = fig.add_subplot(gs[0, 0])
744 |     axe.imshow(gt_seg, cmap='gray')
745 |     axe = fig.add_subplot(gs[0, 1])
746 |     display_item(axe, s_im, s_seg, True)  # display_item is expected to be provided by the viewer utilities
747 |     axe = fig.add_subplot(gs[0, 2])
748 |     display_item(axe, t_im, t_seg, True)
749 |     #fig.show()
750 | 
751 |     fig.suptitle('gt, source seg, target seg', fontsize=12)
752 | 
753 | 
754 | def save_dict_to_file(dic, workdir):
755 |     save_path = Path(workdir, 'params.txt')
756 |     save_path.parent.mkdir(parents=True, exist_ok=True)
757 |     with open(save_path, 'w') as f:
758 |         f.write(str(dic))
759 | 
760 | 
761 | 
762 | def load_dict_from_file(workdir):
763 |     with open(workdir + '/params.txt', 'r') as f:
764 |         data = f.read()
765 | 
766 |     return eval(data)  # params.txt is written by save_dict_to_file above, so its content is trusted
767 | 
768 | 
769 | def remap(changes: Dict[int, int], im):
770 |     assert set(np.unique(im)).issubset(changes), (set(changes), np.unique(im))
771 |     for a, b in changes.items():
772 |         im[im == a] = b
773 |     return im
774 | 
775 | 
776 | def get_remaining_time(done, total, its):
777 |     time_s = (total - done) / its
778 |     time_m = round(time_s / 60, 0)
779 |     return time_m
780 | 
781 | def lr_poly(base_lr, iter, max_iter, power):
782 |     return base_lr * ((1 - float(iter) / max_iter) ** (power))
783 | 
784 | def adjust_learning_rate(optimizer, i_iter, learning_rate, num_steps, power):
785 |     lr = lr_poly(learning_rate, i_iter, num_steps, power)
786 |     optimizer.param_groups[0]['lr'] = lr
787 |     if len(optimizer.param_groups) > 1:
788 |         optimizer.param_groups[1]['lr'] = lr * 10
789 | 
790 | 
791 | def loss_calc(pred, label, device):
792 |     """
793 |     This function returns cross entropy loss for semantic segmentation
794 |     """
795 |     # out shape batch_size x channels x h x w -> batch_size x channels x h x w
796 |     # label shape h x w x 1 x batch_size -> batch_size x 1 x h x w
797 |     label = label.long().to(device=device)  # torch.autograd.Variable is deprecated: plain tensors suffice
798 |     from losses import CrossEntropy  # assumption: the CrossEntropy loss class lives in the repository's losses.py
799 |     loss_params = {'idc': [0, 1], 'weights': [9216/9099, 9216/117]}
800 |     bounds = torch.randn(1)
801 |     criterion = CrossEntropy(**loss_params, dtype="torch.float32")
802 |     return criterion(pred, label, bounds)
803 | 
804 | def d_loss_calc(pred, label):
805 |     #label = Variable(label.long()).to(device=device)
806 |     loss_params = {'idc': [0, 1]}
807 |     criterion = BCELoss(**loss_params, dtype="torch.float32")
808 |     return criterion(pred, label)
809 | 
810 | 
811 | class BCELoss:
812 |     def __init__(self, **kwargs):
813 |         # self.idc is used to filter out some classes of the target mask; use fancy indexing
814 |         self.idc: List[int] = kwargs["idc"]
815 |         self.dtype = kwargs["dtype"]
816 |         #print(f"Initialized {self.__class__.__name__} with {kwargs}")
817 | 
818 |     def __call__(self, d_out: Tensor, label: float):
819 |         bce_loss = torch.nn.BCEWithLogitsLoss()
820 |         loss = bce_loss(d_out, torch.full_like(d_out, label))
821 |         return loss
822 | 
823 | 
824 | 
825 | def adjust_learning_rate_D(optimizer, i_iter, learning_rate_D, num_steps, power):
826 |     lr = lr_poly(learning_rate_D, i_iter, num_steps, power)
827 |     optimizer.param_groups[0]['lr'] = lr
828 |     print(f'> New learning rate: {lr}')
829 |     if len(optimizer.param_groups) > 1:
830 |         optimizer.param_groups[1]['lr'] = lr * 10
831 | 
832 | 
833 | #### Helpers for file IOs
834 | def _read_lists(fid):
835 |     """
836 |     Read all kinds of lists from text file to python lists
837 |     """
838 |     if not os.path.isfile(fid):
839 |         return None
840 |     with open(fid, 'r') as fd:
841 |         _list = fd.readlines()
842 | 
843 |     my_list = []
844 |     for _item in _list:
845 |         if len(_item) < 3:  # skip (near-)empty lines instead of removing them while iterating
846 |             continue
847 |         my_list.append(_item.split('\n')[0])
848 |     return my_list
849 | 
850 | def str2bool(v):
851 |     if v.lower() in ('yes', 'true', 't', 'y', '1'):
852 |         return True
853 |     elif v.lower() in ('no', 'false', 'f', 'n', '0'):
854 |         return False
855 |     else:
856 |         raise argparse.ArgumentTypeError('Boolean value expected.')
857 | 
--------------------------------------------------------------------------------
/whs.make:
--------------------------------------------------------------------------------
1 | CC = python
2 | SHELL = bash
3 | PP = PYTHONPATH="$(PYTHONPATH):."
4 | 
5 | .PHONY: all view plot report
6 | 
7 | CFLAGS = -O
8 | #DEBUG = --debug
9 | 
10 | #the regex of the slices in the target dataset
11 | #for the heart -- the $(TRN) rule below reads it as $(G_RGX)
12 | G_RGX = slice\d+_\d+
13 | 
14 | TT_DATA = [('IMG', nii_transform, False), ('GT', nii_gt_transform, False), ('GT', nii_gt_transform, False)]
15 | L_OR = [('CrossEntropy', {'idc': [0,1,2,3,4], 'weights':[1,1,1,1,1]}, None, None, None, 1)]
16 | NET = UNet
17 | 
18 | 
19 | # the folder containing the target dataset
20 | T_FOLD = data/whs/ct
21 | 
22 | #the network weights used as initialization of the adaptation
23 | M_WEIGHTS_ul = results/whs/cesource/last.pkl
24 | 
25 | #run the main experiment
26 | TRN = results/whs/sfda
27 | 
28 | REPO = $(shell basename `git rev-parse --show-toplevel`)
29 | DATE = $(shell date +"%y%m%d")
30 | HASH = $(shell git rev-parse --short HEAD)
31 | HOSTNAME = $(shell hostname)
32 | PBASE = archives
33 | PACK = $(PBASE)/$(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-CSize.tar.gz
34 | 
35 | all: pack
36 | 
37 | plot: $(PLT)
38 | 
39 | pack: $(PACK) report
40 | $(PACK): $(TRN) $(INF_0) $(TRN_1) $(INF_1) $(TRN_2) $(TRN_3) $(TRN_4)
41 | 	mkdir -p $(@D)
42 | 	tar cf - $^ | pigz > $@
43 | 	chmod -w $@
44 | # tar -zc -f $@ $^ # Use if pigz is not available
45 | 
46 | # first train on the source dataset only:
47 | results/whs/cesource: OPT = --target_losses="$(L_OR)" --target_dataset "data/mr" \
48 | 	--network UNet --model_weights="" --lr_decay 1 \
49 | 
50 | # full supervision
51 | results/whs/fs: OPT = --target_losses="$(L_OR)" \
52 | 	--network UNet --lr_decay 1 \
53 | 
54 | # SFDA. Remove --saveim True --entmap --do_asd 1 --do_hd 1 to speed up
55 | results/whs/sfda: OPT = --target_losses="[('EntKLProp', {'curi':True,'lamb_se':1,'lamb_consprior':1, 'ivd':False,'weights_se':[0.02, 0.27, 0.18, 0.21, 0.32],'idc_c': [1,2,3,4],'power': 1},'PredictionBounds', \
56 | 	{'margin':0,'dir':'high','idc':[1],'predcol':'dumbpredwtags', 'power': 1,'fake':False, 'mode':'percentage','prop':False,'sep':';','sizefile':'sizes/whs.csv'},'norm_soft_size',50)]" \
57 | 	--ontest --l_rate 0.000001 --lr_decay 0.7 --weight_decay 1e-3 \
58 | 	--saveim True --entmap --do_asd 1 --do_hd 1 \
59 | 
60 | #inference mode: saves the segmentation masks for a specific model saved as a pkl file (e.g. "results/whs/cesource/last.pkl" below):
61 | results/whs/cesourceim: OPT = --target_losses="$(L_OR)" \
62 | 	--mode makeim --batch_size 1 --l_rate 0 --pprint --lr_decay 1 --n_epoch 1 --saveim True \
63 | 
64 | $(TRN):
65 | 	$(CC) $(CFLAGS) main_sfda.py --batch_size 22 --n_class 5 --workdir $@_tmp --target_dataset "$(T_FOLD)" \
66 | 	--metric_axis 1 2 3 4 --n_epoch 150 --dice_3d --l_rate 5e-4 --weight_decay 1e-4 --grp_regex="$(G_RGX)" --network=$(NET) --val_target_folders="$(TT_DATA)" \
67 | 	--lr_decay 0.9 --model_weights="$(M_WEIGHTS_ul)" --target_folders="$(TT_DATA)" $(OPT) $(DEBUG)
68 | 	mv $@_tmp $@
69 | 
70 | 
--------------------------------------------------------------------------------
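As a closing usage note: the generic `$(TRN)` rule above builds whichever experiment `TRN` points to, so individual targets can be selected by overriding `TRN` on the make command line. The invocations below are an illustrative sketch, not documented repository commands, and assume GNU make:

```
# Illustrative only: train the source model first, then run the SFDA adaptation.
make -f whs.make TRN=results/whs/cesource results/whs/cesource
make -f whs.make results/whs/sfda   # TRN already defaults to results/whs/sfda
```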