├── .gitignore
├── BraTS2019_example.zip
├── LICENSE
├── __init__.py
├── experiments
│   ├── __init__.py
│   ├── config_files
│   │   ├── config_fno.ini
│   │   ├── config_fnoseg.ini
│   │   ├── config_hartleymha.ini
│   │   ├── config_hnoseg.ini
│   │   ├── config_inference_fno.ini
│   │   ├── config_inference_fnoseg.ini
│   │   ├── config_inference_hartleymha.ini
│   │   ├── config_inference_hnoseg.ini
│   │   ├── config_inference_vnet-ds.ini
│   │   └── config_vnet-ds.ini
│   ├── data_io
│   │   ├── __init__.py
│   │   ├── dataset.py
│   │   └── input_data.py
│   ├── data_split
│   │   ├── config_files
│   │   │   └── config_partitioning.ini
│   │   ├── partitioning.py
│   │   └── split_examples
│   │       ├── inference
│   │       │   ├── flair_test-1.0.txt
│   │       │   ├── t1_test-1.0.txt
│   │       │   ├── t1ce_test-1.0.txt
│   │       │   └── t2_test-1.0.txt
│   │       └── training
│   │           ├── flair_train-0.9.txt
│   │           ├── flair_valid-0.1.txt
│   │           ├── seg_train-0.9.txt
│   │           ├── seg_valid-0.1.txt
│   │           ├── t1_train-0.9.txt
│   │           ├── t1_valid-0.1.txt
│   │           ├── t1ce_train-0.9.txt
│   │           ├── t1ce_valid-0.1.txt
│   │           ├── t2_train-0.9.txt
│   │           └── t2_valid-0.1.txt
│   ├── inference.py
│   ├── run.py
│   ├── train_test.py
│   └── utils.py
├── nets
│   ├── __init__.py
│   ├── architectures.py
│   ├── custom_losses.py
│   ├── custom_objects.py
│   ├── dht.py
│   ├── fourier_operator.py
│   ├── hartley_mha.py
│   ├── hartley_operator.py
│   └── nets_utils.py
├── readme.md
└── troubleshooting.md

/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | *-checkpoint.ipynb
3 | .ipynb_checkpoints/
4 | *~
5 | *.idea*
--------------------------------------------------------------------------------
/BraTS2019_example.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/IBM/multimodal-3d-image-segmentation/c6ac4e1aa6c8d0448429e99105a6046000e3795b/BraTS2019_example.zip
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |                                  Apache License
2 |                            Version 2.0, January 2004
3 |                         http://www.apache.org/licenses/
4 |
5 |    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 |    1. Definitions.
8 |
9 |       "License" shall mean the terms and conditions for use, reproduction,
10 |       and distribution as defined by Sections 1 through 9 of this document.
11 |
12 |       "Licensor" shall mean the copyright owner or entity authorized by
13 |       the copyright owner that is granting the License.
14 |
15 |       "Legal Entity" shall mean the union of the acting entity and all
16 |       other entities that control, are controlled by, or are under common
17 |       control with that entity. For the purposes of this definition,
18 |       "control" means (i) the power, direct or indirect, to cause the
19 |       direction or management of such entity, whether by contract or
20 |       otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 |       outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 |       "You" (or "Your") shall mean an individual or Legal Entity
24 |       exercising permissions granted by this License.
25 |
26 |       "Source" form shall mean the preferred form for making modifications,
27 |       including but not limited to software source code, documentation
28 |       source, and configuration files.
29 |
30 |       "Object" form shall mean any form resulting from mechanical
31 |       transformation or translation of a Source form, including but
32 |       not limited to compiled object code, generated documentation,
33 |       and conversions to other media types.
34 |
35 |       "Work" shall mean the work of authorship, whether in Source or
36 |       Object form, made available under the License, as indicated by a
37 |       copyright notice that is included in or attached to the work
38 |       (an example is provided in the Appendix below).
39 |
40 |       "Derivative Works" shall mean any work, whether in Source or Object
41 |       form, that is based on (or derived from) the Work and for which the
42 |       editorial revisions, annotations, elaborations, or other modifications
43 |       represent, as a whole, an original work of authorship. For the purposes
44 |       of this License, Derivative Works shall not include works that remain
45 |       separable from, or merely link (or bind by name) to the interfaces of,
46 |       the Work and Derivative Works thereof.
47 |
48 |       "Contribution" shall mean any work of authorship, including
49 |       the original version of the Work and any modifications or additions
50 |       to that Work or Derivative Works thereof, that is intentionally
51 |       submitted to Licensor for inclusion in the Work by the copyright owner
52 |       or by an individual or Legal Entity authorized to submit on behalf of
53 |       the copyright owner. For the purposes of this definition, "submitted"
54 |       means any form of electronic, verbal, or written communication sent
55 |       to the Licensor or its representatives, including but not limited to
56 |       communication on electronic mailing lists, source code control systems,
57 |       and issue tracking systems that are managed by, or on behalf of, the
58 |       Licensor for the purpose of discussing and improving the Work, but
59 |       excluding communication that is conspicuously marked or otherwise
60 |       designated in writing by the copyright owner as "Not a Contribution."
61 |
62 |       "Contributor" shall mean Licensor and any individual or Legal Entity
63 |       on behalf of whom a Contribution has been received by Licensor and
64 |       subsequently incorporated within the Work.
65 |
66 |    2. Grant of Copyright License. Subject to the terms and conditions of
67 |       this License, each Contributor hereby grants to You a perpetual,
68 |       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 |       copyright license to reproduce, prepare Derivative Works of,
70 |       publicly display, publicly perform, sublicense, and distribute the
71 |       Work and such Derivative Works in Source or Object form.
72 |
73 |    3. Grant of Patent License. Subject to the terms and conditions of
74 |       this License, each Contributor hereby grants to You a perpetual,
75 |       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 |       (except as stated in this section) patent license to make, have made,
77 |       use, offer to sell, sell, import, and otherwise transfer the Work,
78 |       where such license applies only to those patent claims licensable
79 |       by such Contributor that are necessarily infringed by their
80 |       Contribution(s) alone or by combination of their Contribution(s)
81 |       with the Work to which such Contribution(s) was submitted. If You
82 |       institute patent litigation against any entity (including a
83 |       cross-claim or counterclaim in a lawsuit) alleging that the Work
84 |       or a Contribution incorporated within the Work constitutes direct
85 |       or contributory patent infringement, then any patent licenses
86 |       granted to You under this License for that Work shall terminate
87 |       as of the date such litigation is filed.
88 |
89 |    4. Redistribution. You may reproduce and distribute copies of the
90 |       Work or Derivative Works thereof in any medium, with or without
91 |       modifications, and in Source or Object form, provided that You
92 |       meet the following conditions:
93 |
94 |       (a) You must give any other recipients of the Work or
95 |           Derivative Works a copy of this License; and
96 |
97 |       (b) You must cause any modified files to carry prominent notices
98 |           stating that You changed the files; and
99 |
100 |       (c) You must retain, in the Source form of any Derivative Works
101 |           that You distribute, all copyright, patent, trademark, and
102 |           attribution notices from the Source form of the Work,
103 |           excluding those notices that do not pertain to any part of
104 |           the Derivative Works; and
105 |
106 |       (d) If the Work includes a "NOTICE" text file as part of its
107 |           distribution, then any Derivative Works that You distribute must
108 |           include a readable copy of the attribution notices contained
109 |           within such NOTICE file, excluding those notices that do not
110 |           pertain to any part of the Derivative Works, in at least one
111 |           of the following places: within a NOTICE text file distributed
112 |           as part of the Derivative Works; within the Source form or
113 |           documentation, if provided along with the Derivative Works; or,
114 |           within a display generated by the Derivative Works, if and
115 |           wherever such third-party notices normally appear. The contents
116 |           of the NOTICE file are for informational purposes only and
117 |           do not modify the License. You may add Your own attribution
118 |           notices within Derivative Works that You distribute, alongside
119 |           or as an addendum to the NOTICE text from the Work, provided
120 |           that such additional attribution notices cannot be construed
121 |           as modifying the License.
122 |
123 |       You may add Your own copyright statement to Your modifications and
124 |       may provide additional or different license terms and conditions
125 |       for use, reproduction, or distribution of Your modifications, or
126 |       for any such Derivative Works as a whole, provided Your use,
127 |       reproduction, and distribution of the Work otherwise complies with
128 |       the conditions stated in this License.
129 |
130 |    5. Submission of Contributions. Unless You explicitly state otherwise,
131 |       any Contribution intentionally submitted for inclusion in the Work
132 |       by You to the Licensor shall be under the terms and conditions of
133 |       this License, without any additional terms or conditions.
134 |       Notwithstanding the above, nothing herein shall supersede or modify
135 |       the terms of any separate license agreement you may have executed
136 |       with Licensor regarding such Contributions.
137 |
138 |    6. Trademarks. This License does not grant permission to use the trade
139 |       names, trademarks, service marks, or product names of the Licensor,
140 |       except as required for reasonable and customary use in describing the
141 |       origin of the Work and reproducing the content of the NOTICE file.
142 |
143 |    7. Disclaimer of Warranty. Unless required by applicable law or
144 |       agreed to in writing, Licensor provides the Work (and each
145 |       Contributor provides its Contributions) on an "AS IS" BASIS,
146 |       WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 |       implied, including, without limitation, any warranties or conditions
148 |       of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 |       PARTICULAR PURPOSE. You are solely responsible for determining the
150 |       appropriateness of using or redistributing the Work and assume any
151 |       risks associated with Your exercise of permissions under this License.
152 |
153 |    8. Limitation of Liability. In no event and under no legal theory,
154 |       whether in tort (including negligence), contract, or otherwise,
155 |       unless required by applicable law (such as deliberate and grossly
156 |       negligent acts) or agreed to in writing, shall any Contributor be
157 |       liable to You for damages, including any direct, indirect, special,
158 |       incidental, or consequential damages of any character arising as a
159 |       result of this License or out of the use or inability to use the
160 |       Work (including but not limited to damages for loss of goodwill,
161 |       work stoppage, computer failure or malfunction, or any and all
162 |       other commercial damages or losses), even if such Contributor
163 |       has been advised of the possibility of such damages.
164 |
165 |    9. Accepting Warranty or Additional Liability. While redistributing
166 |       the Work or Derivative Works thereof, You may choose to offer,
167 |       and charge a fee for, acceptance of support, warranty, indemnity,
168 |       or other liability obligations and/or rights consistent with this
169 |       License. However, in accepting such obligations, You may act only
170 |       on Your own behalf and on Your sole responsibility, not on behalf
171 |       of any other Contributor, and only if You agree to indemnify,
172 |       defend, and hold each Contributor harmless for any liability
173 |       incurred by, or claims asserted against, such Contributor by reason
174 |       of your accepting any such warranty or additional liability.
175 |
176 |    END OF TERMS AND CONDITIONS
177 |
178 |    APPENDIX: How to apply the Apache License to your work.
179 |
180 |       To apply the Apache License to your work, attach the following
181 |       boilerplate notice, with the fields enclosed by brackets "{}"
182 |       replaced with your own identifying information. (Don't include
183 |       the brackets!) The text should be enclosed in the appropriate
184 |       comment syntax for the file format. We also recommend that a
185 |       file or class name and description of purpose be included on the
186 |       same "printed page" as the copyright notice for easier
187 |       identification within third-party archives.
188 |
189 |    Copyright {yyyy} {name of copyright owner}
190 |
191 |    Licensed under the Apache License, Version 2.0 (the "License");
192 |    you may not use this file except in compliance with the License.
193 |    You may obtain a copy of the License at
194 |
195 |        http://www.apache.org/licenses/LICENSE-2.0
196 |
197 |    Unless required by applicable law or agreed to in writing, software
198 |    distributed under the License is distributed on an "AS IS" BASIS,
199 |    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 |    See the License for the specific language governing permissions and
201 |    limitations under the License.
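
The Python sources in this repository apply this license in the compact SPDX form rather than the full appendix boilerplate; each module opens with a header like the following (the copyright year varies by file):

#
# Copyright 2024 IBM Inc. All rights reserved
# SPDX-License-Identifier: Apache-2.0
#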
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/IBM/multimodal-3d-image-segmentation/c6ac4e1aa6c8d0448429e99105a6046000e3795b/__init__.py
--------------------------------------------------------------------------------
/experiments/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/IBM/multimodal-3d-image-segmentation/c6ac4e1aa6c8d0448429e99105a6046000e3795b/experiments/__init__.py
--------------------------------------------------------------------------------
/experiments/config_files/config_fno.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | output_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/fno'
3 | is_train = True
4 | is_test = True
5 | is_statistics = True
6 | visible_devices = '0' # Which GPU is used
7 |
8 | [input_lists]
9 | data_dir = '~/BraTS2019/Data/120x120x78' # Where the images are stored
10 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
11 | data_lists_train_paths = [
12 |     ${list_dir}'/t1_train-0.9.txt',
13 |     ${list_dir}'/t1ce_train-0.9.txt',
14 |     ${list_dir}'/t2_train-0.9.txt',
15 |     ${list_dir}'/flair_train-0.9.txt',
16 |     ${list_dir}'/seg_train-0.9.txt',
17 |     ]
18 | data_lists_valid_paths = [
19 |     ${list_dir}'/t1_valid-0.1.txt',
20 |     ${list_dir}'/t1ce_valid-0.1.txt',
21 |     ${list_dir}'/t2_valid-0.1.txt',
22 |     ${list_dir}'/flair_valid-0.1.txt',
23 |     ${list_dir}'/seg_valid-0.1.txt',
24 |     ]
25 | data_lists_test_paths = [
26 |     ${list_dir}'/t1_valid-0.1.txt',
27 |     ${list_dir}'/t1ce_valid-0.1.txt',
28 |     ${list_dir}'/t2_valid-0.1.txt',
29 |     ${list_dir}'/flair_valid-0.1.txt',
30 |     ${list_dir}'/seg_valid-0.1.txt',
31 |     ]
32 |
33 | [input_args]
34 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
35 | idx_y_modalities = [4] # Indexes to y modalities in the data lists
36 | batch_size = 1
37 | max_queue_size = 5
38 | workers = 5 # Must be >= 1 to use threading
39 | use_data_normalization = True
40 |
41 | [augmentation]
42 | rotation_range = [30, 0, 0] # Along the depth, height, width axis
43 | shift_range = [0.2, 0.2, 0.2]
44 | zoom_range = [0.8, 1.2]
45 | augmentation_probability = 0.8
46 |
47 | [optimizer]
48 | optimizer_name = 'Adamax'
49 |
50 | [scheduler]
51 | scheduler_name = 'CosineDecayRestarts'
52 | decay_epochs = 100 # Used to compute `decay_steps` of the scheduler
53 | initial_learning_rate = 1e-2
54 | t_mul = 1.0
55 | m_mul = 1.0
56 | alpha = 1e-1
57 |
58 | [model]
59 | builder_name = 'NeuralOperatorSeg'
60 | filters = 12
61 | num_transform_blocks = 32
62 | num_output_channels = 4
63 | num_modes = [10, 14, 14]
64 | transform_type = 'Fourier'
65 | transform_weights_type = 'individual'
66 | use_resize = True
67 | merge_method = None
68 | use_deep_supervision = False
69 | loss = 'PCCLoss'
70 |
71 | [train]
72 | label_mapping = {4: 3} # Maps ground-truth label 4 to predicted label 3 (BraTS specific)
73 | num_epochs = 100
74 | selection_epoch_portion = 0.5
75 | is_save_model = True
76 | is_plot_model = True # If you cannot install graphviz, change it to False
77 | is_print = True
78 |
79 | [test]
80 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
81 | output_folder = 'test' # A subfolder under `output_dir`
82 |
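
Configs like this one are presumably consumed by experiments/run.py (and the inference configs by experiments/inference.py), with the config path passed as the first command-line argument, as experiments/data_split/partitioning.py does: `python experiments/run.py experiments/config_files/config_fno.ini`. The BraTS-specific `label_mapping` entries exist because BraTS uses labels 0, 1, 2, and 4 (there is no label 3), while the network predicts 4 consecutive channels. The code that applies the mapping lives elsewhere in the repository (not shown in this excerpt), but a minimal equivalent looks like:

import numpy as np

def apply_label_mapping(seg, mapping):
    out = seg.copy()
    for src, dst in mapping.items():
        out[seg == src] = dst  # Mask on the original so mappings cannot chain
    return out

seg = np.array([0, 1, 2, 4])
print(apply_label_mapping(seg, {4: 3}))  # [0 1 2 3]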
--------------------------------------------------------------------------------
/experiments/config_files/config_fnoseg.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | output_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/fnoseg'
3 | is_train = True
4 | is_test = True
5 | is_statistics = True
6 | visible_devices = '0' # Which GPU is used
7 |
8 | [input_lists]
9 | data_dir = '~/BraTS2019/Data/120x120x78' # Where the images are stored
10 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
11 | data_lists_train_paths = [
12 |     ${list_dir}'/t1_train-0.9.txt',
13 |     ${list_dir}'/t1ce_train-0.9.txt',
14 |     ${list_dir}'/t2_train-0.9.txt',
15 |     ${list_dir}'/flair_train-0.9.txt',
16 |     ${list_dir}'/seg_train-0.9.txt',
17 |     ]
18 | data_lists_valid_paths = [
19 |     ${list_dir}'/t1_valid-0.1.txt',
20 |     ${list_dir}'/t1ce_valid-0.1.txt',
21 |     ${list_dir}'/t2_valid-0.1.txt',
22 |     ${list_dir}'/flair_valid-0.1.txt',
23 |     ${list_dir}'/seg_valid-0.1.txt',
24 |     ]
25 | data_lists_test_paths = [
26 |     ${list_dir}'/t1_valid-0.1.txt',
27 |     ${list_dir}'/t1ce_valid-0.1.txt',
28 |     ${list_dir}'/t2_valid-0.1.txt',
29 |     ${list_dir}'/flair_valid-0.1.txt',
30 |     ${list_dir}'/seg_valid-0.1.txt',
31 |     ]
32 |
33 | [input_args]
34 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
35 | idx_y_modalities = [4] # Indexes to y modalities in the data lists
36 | batch_size = 1
37 | max_queue_size = 5
38 | workers = 5 # Must be >= 1 to use threading
39 | use_data_normalization = True
40 |
41 | [augmentation]
42 | rotation_range = [30, 0, 0] # Along the depth, height, width axis
43 | shift_range = [0.2, 0.2, 0.2]
44 | zoom_range = [0.8, 1.2]
45 | augmentation_probability = 0.8
46 |
47 | [optimizer]
48 | optimizer_name = 'Adamax'
49 |
50 | [scheduler]
51 | scheduler_name = 'CosineDecayRestarts'
52 | decay_epochs = 100 # Used to compute `decay_steps` of the scheduler
53 | initial_learning_rate = 1e-2
54 | t_mul = 1.0
55 | m_mul = 1.0
56 | alpha = 1e-1
57 |
58 | [model]
59 | builder_name = 'NeuralOperatorSeg'
60 | filters = 12
61 | num_transform_blocks = 32
62 | num_output_channels = 4
63 | num_modes = [10, 14, 14]
64 | transform_type = 'Fourier'
65 | transform_weights_type = 'shared'
66 | use_resize = True
67 | merge_method = 'add'
68 | use_deep_supervision = True
69 | loss = 'PCCLoss'
70 |
71 | [train]
72 | label_mapping = {4: 3} # Maps ground-truth label 4 to predicted label 3 (BraTS specific)
73 | num_epochs = 100
74 | selection_epoch_portion = 0.5
75 | is_save_model = True
76 | is_plot_model = True # If you cannot install graphviz, change it to False
77 | is_print = True
78 |
79 | [test]
80 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
81 | output_folder = 'test' # A subfolder under `output_dir`
82 |
--------------------------------------------------------------------------------
/experiments/config_files/config_hartleymha.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | output_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/hartleymha'
3 | is_train = True
4 | is_test = True
5 | is_statistics = True
6 | visible_devices = '0' # Which GPU is used
7 |
8 | [input_lists]
9 | data_dir = '~/BraTS2019/Data/120x120x78' # Where the images are stored
10 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
11 | data_lists_train_paths = [
12 |     ${list_dir}'/t1_train-0.9.txt',
13 |     ${list_dir}'/t1ce_train-0.9.txt',
14 |     ${list_dir}'/t2_train-0.9.txt',
15 |     ${list_dir}'/flair_train-0.9.txt',
16 |     ${list_dir}'/seg_train-0.9.txt',
17 |     ]
18 | data_lists_valid_paths = [
19 |     ${list_dir}'/t1_valid-0.1.txt',
20 |     ${list_dir}'/t1ce_valid-0.1.txt',
21 |     ${list_dir}'/t2_valid-0.1.txt',
22 |     ${list_dir}'/flair_valid-0.1.txt',
23 |     ${list_dir}'/seg_valid-0.1.txt',
24 |     ]
25 | data_lists_test_paths = [
26 |     ${list_dir}'/t1_valid-0.1.txt',
27 |     ${list_dir}'/t1ce_valid-0.1.txt',
28 |     ${list_dir}'/t2_valid-0.1.txt',
29 |     ${list_dir}'/flair_valid-0.1.txt',
30 |     ${list_dir}'/seg_valid-0.1.txt',
31 |     ]
32 |
33 | [input_args]
34 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
35 | idx_y_modalities = [4] # Indexes to y modalities in the data lists
36 | batch_size = 1
37 | max_queue_size = 5
38 | workers = 5 # Must be >= 1 to use threading
39 | use_data_normalization = True
40 |
41 | [augmentation]
42 | rotation_range = [30, 0, 0] # Along the depth, height, width axis
43 | shift_range = [0.2, 0.2, 0.2]
44 | zoom_range = [0.8, 1.2]
45 | augmentation_probability = 0.8
46 |
47 | [optimizer]
48 | optimizer_name = 'Adamax'
49 |
50 | [scheduler]
51 | scheduler_name = 'CosineDecayRestarts'
52 | decay_epochs = 100 # Used to compute `decay_steps` of the scheduler
53 | initial_learning_rate = 1e-2
54 | t_mul = 1.0
55 | m_mul = 1.0
56 | alpha = 1e-1
57 |
58 | [model]
59 | builder_name = 'HartleyMHASeg'
60 | filters = 12
61 | num_transform_blocks = 16
62 | num_output_channels = 4
63 | num_heads = 4
64 | num_modes = [10, 14, 14]
65 | patch_size = [2, 2, 2]
66 | use_resize = True
67 | merge_method = 'add'
68 | use_deep_supervision = True
69 | loss = 'PCCLoss'
70 |
71 | [train]
72 | label_mapping = {4: 3} # Maps ground-truth label 4 to predicted label 3 (BraTS specific)
73 | num_epochs = 100
74 | selection_epoch_portion = 0.5
75 | is_save_model = True
76 | is_plot_model = True # If you cannot install graphviz, change it to False
77 | is_print = True
78 |
79 | [test]
80 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
81 | output_folder = 'test' # A subfolder under `output_dir`
82 |
--------------------------------------------------------------------------------
/experiments/config_files/config_hnoseg.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | output_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/hnoseg'
3 | is_train = True
4 | is_test = True
5 | is_statistics = True
6 | visible_devices = '0' # Which GPU is used
7 |
8 | [input_lists]
9 | data_dir = '~/BraTS2019/Data/120x120x78' # Where the images are stored
10 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
11 | data_lists_train_paths = [
12 |     ${list_dir}'/t1_train-0.9.txt',
13 |     ${list_dir}'/t1ce_train-0.9.txt',
14 |     ${list_dir}'/t2_train-0.9.txt',
15 |     ${list_dir}'/flair_train-0.9.txt',
16 |     ${list_dir}'/seg_train-0.9.txt',
17 |     ]
18 | data_lists_valid_paths = [
19 |     ${list_dir}'/t1_valid-0.1.txt',
20 |     ${list_dir}'/t1ce_valid-0.1.txt',
21 |     ${list_dir}'/t2_valid-0.1.txt',
22 |     ${list_dir}'/flair_valid-0.1.txt',
23 |     ${list_dir}'/seg_valid-0.1.txt',
24 |     ]
25 | data_lists_test_paths = [
26 |     ${list_dir}'/t1_valid-0.1.txt',
27 |     ${list_dir}'/t1ce_valid-0.1.txt',
28 |     ${list_dir}'/t2_valid-0.1.txt',
29 |     ${list_dir}'/flair_valid-0.1.txt',
30 |     ${list_dir}'/seg_valid-0.1.txt',
31 |     ]
32 |
33 | [input_args]
34 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
35 | idx_y_modalities = [4] # Indexes to y modalities in the data lists
36 | batch_size = 1
37 | max_queue_size = 5
38 | workers = 5 # Must be >= 1 to use threading
39 | use_data_normalization = True
40 |
41 | [augmentation]
42 | rotation_range = [30, 0, 0] # Along the depth, height, width axis
43 | shift_range = [0.2, 0.2, 0.2]
44 | zoom_range = [0.8, 1.2]
45 | augmentation_probability = 0.8
46 |
47 | [optimizer]
48 | optimizer_name = 'Adamax'
49 |
50 | [scheduler]
51 | scheduler_name = 'CosineDecayRestarts'
52 | decay_epochs = 100 # Used to compute `decay_steps` of the scheduler
53 | initial_learning_rate = 1e-2
54 | t_mul = 1.0
55 | m_mul = 1.0
56 | alpha = 1e-1
57 |
58 | [model]
59 | builder_name = 'NeuralOperatorSeg'
60 | filters = 12
61 | num_transform_blocks = 32
62 | num_output_channels = 4
63 | num_modes = [10, 14, 14]
64 | transform_type = 'Hartley'
65 | transform_weights_type = 'shared'
66 | use_resize = True
67 | merge_method = 'add'
68 | use_deep_supervision = True
69 | loss = 'PCCLoss'
70 |
71 | [train]
72 | label_mapping = {4: 3} # Maps ground-truth label 4 to predicted label 3 (BraTS specific)
73 | num_epochs = 100
74 | selection_epoch_portion = 0.5
75 | is_save_model = True
76 | is_plot_model = True # If you cannot install graphviz, change it to False
77 | is_print = True
78 |
79 | [test]
80 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
81 | output_folder = 'test' # A subfolder under `output_dir`
82 |
--------------------------------------------------------------------------------
/experiments/config_files/config_inference_fno.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | target_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/fno'
3 | visible_devices = '0'
4 |
5 | [input_lists]
6 | data_dir = '~/BraTS2019/Data/240x240x155' # The original resolution should be used during inference
7 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
8 | data_lists_test_paths = [
9 |     ${list_dir}'/t1_test-1.0.txt',
10 |     ${list_dir}'/t1ce_test-1.0.txt',
11 |     ${list_dir}'/t2_test-1.0.txt',
12 |     ${list_dir}'/flair_test-1.0.txt',
13 |     ]
14 |
15 | [input_args]
16 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
17 | batch_size = 1
18 | max_queue_size = 5
19 | workers = 5 # Must be >= 1 to use threading
20 | use_data_normalization = True
21 |
22 | [model]
23 | builder_name = 'NeuralOperatorSeg'
24 | filters = 12
25 | num_transform_blocks = 32
26 | num_output_channels = 4
27 | num_modes = [10, 14, 14]
28 | transform_type = 'Fourier'
29 | transform_weights_type = 'individual'
30 | use_resize = True
31 | merge_method = None
32 | use_deep_supervision = False
33 | loss = 'PCCLoss'
34 |
35 | [test]
36 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
37 | output_folder = 'inference_240x240x155' # A subfolder under `target_dir`
38 |
--------------------------------------------------------------------------------
/experiments/config_files/config_inference_fnoseg.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | target_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/fnoseg'
3 | visible_devices = '0'
4 |
5 | [input_lists]
6 | data_dir = '~/BraTS2019/Data/240x240x155' # The original resolution should be used during inference
7 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
8 | data_lists_test_paths = [
9 |     ${list_dir}'/t1_test-1.0.txt',
10 |     ${list_dir}'/t1ce_test-1.0.txt',
11 |     ${list_dir}'/t2_test-1.0.txt',
12 |     ${list_dir}'/flair_test-1.0.txt',
13 |     ]
14 |
15 | [input_args]
16 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
17 | batch_size = 1
18 | max_queue_size = 5
19 | workers = 5 # Must be >= 1 to use threading
20 | use_data_normalization = True
21 |
22 | [model]
23 | builder_name = 'NeuralOperatorSeg'
24 | filters = 12
25 | num_transform_blocks = 32
26 | num_output_channels = 4
27 | num_modes = [10, 14, 14]
28 | transform_type = 'Fourier'
29 | transform_weights_type = 'shared'
30 | use_resize = True
31 | merge_method = 'add'
32 | use_deep_supervision = True
33 | loss = 'PCCLoss'
34 |
35 | [test]
36 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
37 | output_folder = 'inference_240x240x155' # A subfolder under `target_dir`
38 |
--------------------------------------------------------------------------------
/experiments/config_files/config_inference_hartleymha.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | target_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/hartleymha'
3 | visible_devices = '0'
4 |
5 | [input_lists]
6 | data_dir = '~/BraTS2019/Data/240x240x155' # The original resolution should be used during inference
7 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
8 | data_lists_test_paths = [
9 |     ${list_dir}'/t1_test-1.0.txt',
10 |     ${list_dir}'/t1ce_test-1.0.txt',
11 |     ${list_dir}'/t2_test-1.0.txt',
12 |     ${list_dir}'/flair_test-1.0.txt',
13 |     ]
14 |
15 | [input_args]
16 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
17 | batch_size = 1
18 | max_queue_size = 5
19 | workers = 5 # Must be >= 1 to use threading
20 | use_data_normalization = True
21 |
22 | [model]
23 | builder_name = 'HartleyMHASeg'
24 | filters = 12
25 | num_transform_blocks = 16
26 | num_output_channels = 4
27 | num_heads = 4
28 | num_modes = [10, 14, 14]
29 | patch_size = [2, 2, 2]
30 | use_resize = True
31 | merge_method = 'add'
32 | use_deep_supervision = True
33 | loss = 'PCCLoss'
34 |
35 | [test]
36 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
37 | output_folder = 'inference_240x240x155' # A subfolder under `target_dir`
38 |
--------------------------------------------------------------------------------
/experiments/config_files/config_inference_hnoseg.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | target_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/hnoseg'
3 | visible_devices = '0'
4 |
5 | [input_lists]
6 | data_dir = '~/BraTS2019/Data/240x240x155' # The original resolution should be used during inference
7 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
8 | data_lists_test_paths = [
9 |     ${list_dir}'/t1_test-1.0.txt',
10 |     ${list_dir}'/t1ce_test-1.0.txt',
11 |     ${list_dir}'/t2_test-1.0.txt',
12 |     ${list_dir}'/flair_test-1.0.txt',
13 |     ]
14 |
15 | [input_args]
16 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
17 | batch_size = 1
18 | max_queue_size = 5
19 | workers = 5 # Must be >= 1 to use threading
20 | use_data_normalization = True
21 |
22 | [model]
23 | builder_name = 'NeuralOperatorSeg'
24 | filters = 12
25 | num_transform_blocks = 32
26 | num_output_channels = 4
27 | num_modes = [10, 14, 14]
28 | transform_type = 'Hartley'
29 | transform_weights_type = 'shared'
30 | use_resize = True
31 | merge_method = 'add'
32 | use_deep_supervision = True
33 | loss = 'PCCLoss'
34 |
35 | [test]
36 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
37 | output_folder = 'inference_240x240x155' # A subfolder under `target_dir`
38 |
--------------------------------------------------------------------------------
/experiments/config_files/config_inference_vnet-ds.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | target_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/vnet-ds'
3 | visible_devices = '0'
4 |
5 | [input_lists]
6 | data_dir = '~/BraTS2019/Data/240x240x155' # The original resolution should be used during inference
7 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
8 | data_lists_test_paths = [
9 |     ${list_dir}'/t1_test-1.0.txt',
10 |     ${list_dir}'/t1ce_test-1.0.txt',
11 |     ${list_dir}'/t2_test-1.0.txt',
12 |     ${list_dir}'/flair_test-1.0.txt',
13 |     ]
14 |
15 | [input_args]
16 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
17 | batch_size = 1
18 | max_queue_size = 5
19 | workers = 5 # Must be >= 1 to use threading
20 | use_data_normalization = True
21 |
22 | [model]
23 | builder_name = 'VNetDS'
24 | base_num_filters = 12
25 | num_blocks = [1, 2, 3, 3, 3]
26 | num_output_channels = 4
27 | use_resize = True
28 | right_leg_indexes = [0, 1, 2, 3, 4]
29 | loss = 'PCCLoss'
30 |
31 | [test]
32 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
33 | output_folder = 'inference_240x240x155' # A subfolder under `target_dir`
34 |
--------------------------------------------------------------------------------
/experiments/config_files/config_vnet-ds.ini:
--------------------------------------------------------------------------------
1 | [main]
2 | output_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/outputs_120x120x78/vnet-ds'
3 | is_train = True
4 | is_test = True
5 | is_statistics = True
6 | visible_devices = '1' # Which GPU is used
7 |
8 | [input_lists]
9 | data_dir = '~/BraTS2019/Data/120x120x78' # Where the images are stored
10 | list_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs'
11 | data_lists_train_paths = [
12 |     ${list_dir}'/t1_train-0.9.txt',
13 |     ${list_dir}'/t1ce_train-0.9.txt',
14 |     ${list_dir}'/t2_train-0.9.txt',
15 |     ${list_dir}'/flair_train-0.9.txt',
16 |     ${list_dir}'/seg_train-0.9.txt',
17 |     ]
18 | data_lists_valid_paths = [
19 |     ${list_dir}'/t1_valid-0.1.txt',
20 |     ${list_dir}'/t1ce_valid-0.1.txt',
21 |     ${list_dir}'/t2_valid-0.1.txt',
22 |     ${list_dir}'/flair_valid-0.1.txt',
23 |     ${list_dir}'/seg_valid-0.1.txt',
24 |     ]
25 | data_lists_test_paths = [
26 |     ${list_dir}'/t1_valid-0.1.txt',
27 |     ${list_dir}'/t1ce_valid-0.1.txt',
28 |     ${list_dir}'/t2_valid-0.1.txt',
29 |     ${list_dir}'/flair_valid-0.1.txt',
30 |     ${list_dir}'/seg_valid-0.1.txt',
31 |     ]
32 |
33 | [input_args]
34 | idx_x_modalities = [0, 1, 2, 3] # Indexes to x modalities in the data lists
35 | idx_y_modalities = [4] # Indexes to y modalities in the data lists
36 | batch_size = 1
37 | max_queue_size = 5
38 | workers = 5 # Must be >= 1 to use threading
39 | use_data_normalization = True
40 |
41 | [augmentation]
42 | rotation_range = [30, 0, 0] # Along the depth, height, width axis
43 | shift_range = [0.2, 0.2, 0.2]
44 | zoom_range = [0.8, 1.2]
45 | augmentation_probability = 0.8
46 |
47 | [optimizer]
48 | optimizer_name = 'Adamax'
49 |
50 | [scheduler]
51 | scheduler_name = 'CosineDecayRestarts'
52 | decay_epochs = 100 # Used to compute `decay_steps` of the scheduler
53 | initial_learning_rate = 1e-2
54 | t_mul = 1.0
55 | m_mul = 1.0
56 | alpha = 1e-1
57 |
58 | [model]
59 | builder_name = 'VNetDS'
60 | base_num_filters = 12
61 | num_blocks = [1, 2, 3, 3, 3]
62 | num_output_channels = 4
63 | use_resize = True
64 | right_leg_indexes = [0, 1, 2, 3, 4]
65 | loss = 'PCCLoss'
66 |
67 | [train]
68 | label_mapping = {4: 3} # Maps ground-truth label 4 to predicted label 3 (BraTS specific)
69 | num_epochs = 100
70 | selection_epoch_portion = 0.5
71 | is_save_model = True
72 | is_plot_model = True # If you cannot install graphviz, change it to False
73 | is_print = True
74 |
75 | [test]
76 | label_mapping = {3: 4} # Maps predicted label 3 to ground-truth label 4 (BraTS specific)
77 | output_folder = 'test' # A subfolder under `output_dir`
78 |
--------------------------------------------------------------------------------
/experiments/data_io/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/IBM/multimodal-3d-image-segmentation/c6ac4e1aa6c8d0448429e99105a6046000e3795b/experiments/data_io/__init__.py
--------------------------------------------------------------------------------
/experiments/data_io/dataset.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2024 IBM Inc. All rights reserved
3 | # SPDX-License-Identifier: Apache-2.0
4 | #
5 |
6 | import numpy as np
7 | import math
8 | import SimpleITK as sitk
9 |
10 | from keras.utils import PyDataset
11 |
12 | __author__ = 'Ken C. L. Wong'
13 |
14 |
15 | class MultimodalImageDataset(PyDataset):
16 |     """For multimodal image data.
17 |
18 |     Args:
19 |         data_lists: A list of lists, each list contains the samples of a modality to be read.
20 |         batch_size: Batch size.
21 |         reader: For reading a sample as a Numpy array. If None (default), a dummy reader = lambda x: x is used.
22 |         idx_x_modalities: Indexes to x (input) modalities (default: None).
23 |         idx_y_modalities: Indexes to y (output) modalities (default: None).
24 |         x_processing: A function that performs custom processing on x, e.g. data normalization (default: None).
25 |         transform: A function that performs data transformation (default: None).
26 |         workers: Number of workers to use in multithreading or multiprocessing (default: 1).
27 |         use_multiprocessing: Whether to use Python multiprocessing for
28 |             parallelism. Setting this to `True` means that your
29 |             dataset will be replicated in multiple forked processes.
30 |             This is necessary to gain compute-level (rather than I/O level)
31 |             benefits from parallelism. However, it can only be set to
32 |             `True` if your dataset can be safely pickled (default: False).
33 |         max_queue_size: Maximum number of batches to keep in the queue
34 |             when iterating over the dataset in a multithreaded or multiprocessed setting.
35 |             Reduce this value to reduce the CPU memory consumption of your dataset (default: 10).
36 | """ 37 | def __init__(self, 38 | data_lists, 39 | batch_size, 40 | reader=None, 41 | idx_x_modalities=None, 42 | idx_y_modalities=None, 43 | x_processing=None, 44 | transform=None, 45 | workers=1, 46 | use_multiprocessing=False, 47 | max_queue_size=10, 48 | ): 49 | super().__init__(workers, use_multiprocessing, max_queue_size) 50 | 51 | self.data_lists = data_lists 52 | self.batch_size = batch_size 53 | self.reader = reader or (lambda x: x) 54 | self.idx_x_modalities = idx_x_modalities 55 | self.idx_y_modalities = idx_y_modalities 56 | self.x_processing = x_processing 57 | self.transform = transform 58 | 59 | if self.idx_x_modalities is None: 60 | assert self.idx_y_modalities is None 61 | self.idx_x_modalities = list(range(len(self.data_lists))) 62 | 63 | def _get_info(self, list_of_data, reader=None): 64 | # Ensure all modalities have the same num_samples 65 | num_samples = len(list_of_data[0]) 66 | for data in list_of_data: 67 | assert num_samples == len(data) 68 | 69 | # Create a dummy reader if needed 70 | reader = reader or (lambda a: a) 71 | 72 | num_modalities = len(list_of_data) 73 | 74 | x_shape = reader(list_of_data[self.idx_x_modalities[0]][0]).shape 75 | y_shape = reader(list_of_data[self.idx_y_modalities[0]][0]).shape if self.idx_y_modalities else None 76 | 77 | return num_samples, num_modalities, x_shape, y_shape 78 | 79 | def __len__(self): 80 | """Return number of batches.""" 81 | return math.ceil(len(self.data_lists[0]) / self.batch_size) 82 | 83 | def __getitem__(self, index): 84 | """Return a batch.""" 85 | low = index * self.batch_size 86 | # Cap upper bound at array length; the last batch may be smaller 87 | # if the total number of items is not a multiple of batch size. 88 | high = min(low + self.batch_size, len(self.data_lists[0])) 89 | 90 | batch_x = [] 91 | batch_y = [] 92 | for idx in range(low, high): 93 | xy = self._get_sample(idx) 94 | if isinstance(xy, (list, tuple)): 95 | batch_x.append(xy[0]) 96 | batch_y.append(xy[1]) 97 | else: 98 | batch_x.append(xy) 99 | batch_x = np.stack(batch_x) 100 | batch_y = np.stack(batch_y) if batch_y else None 101 | 102 | if batch_y is None: 103 | return batch_x 104 | return batch_x, batch_y 105 | 106 | def _get_sample(self, idx): 107 | x = np.stack([self.reader(self.data_lists[m][idx]) for m in self.idx_x_modalities], axis=-1) 108 | if self.x_processing is not None: 109 | x = self.x_processing(x) 110 | 111 | if self.idx_y_modalities is not None: 112 | y = np.stack([self.reader(self.data_lists[m][idx]) for m in self.idx_y_modalities], axis=-1) 113 | if self.transform is not None: 114 | x, y = self.transform(x, y) 115 | return x, y 116 | 117 | if self.transform is not None: 118 | x = self.transform(x) 119 | return x 120 | 121 | 122 | class ImageTransform: 123 | """For transforming a 2D or 3D image with shape (H, W, C) or (D, H, W, C). 124 | For the arguments, the default value of None means no action is performed. 125 | 126 | Args: 127 | rotation_range: A scalar for 2D images and a list of length 3 (depth, height, width) for 3D images, 128 | in degrees (0 to 180) (default: None). 129 | shift_range: A list of length N for ND images, fraction of total size (default: None). 130 | zoom_range: Amount of zoom. A sequence of two (e.g., [0.7, 1.2]) (default: None). 131 | flip: A list of length N for ND images, boolean values indicating whether to randomly flip the 132 | corresponding axes or not (default: None). 133 | cval: Value used for points outside the boundaries after transformation (default: 0). 
134 | augmentation_probability: Probability of performing augmentation for each sample (default: 1.0). 135 | seed: Random seed (default: None). 136 | """ 137 | def __init__(self, 138 | rotation_range=None, 139 | shift_range=None, 140 | zoom_range=None, 141 | flip=None, 142 | cval=0., 143 | augmentation_probability=1.0, 144 | seed=None, 145 | ): 146 | self.rotation_range = rotation_range 147 | self.shift_range = shift_range 148 | self.zoom_range = zoom_range 149 | self.flip = flip 150 | self.cval = cval 151 | self.augmentation_probability = augmentation_probability 152 | self.rng = np.random.default_rng(seed) 153 | 154 | def __call__(self, x, y=None): 155 | """Randomly augments a single image tensor. 156 | 157 | Args: 158 | x: 3D or 4D tensor, a single image with channels, e.g. (D, H, W, C). 159 | y: The corresponding observation (default: None). 160 | 161 | Returns: 162 | A randomly transformed version of the input (same shape). 163 | """ 164 | img_size_axis = np.arange(x.ndim)[:-1] 165 | 166 | if self.rng.binomial(1, self.augmentation_probability): 167 | # Rotation 168 | theta = None 169 | if self.rotation_range is not None: 170 | if np.isscalar(self.rotation_range): 171 | assert x.ndim == 3 172 | if self.rotation_range: 173 | theta = np.pi / 180 * self.rng.uniform(-self.rotation_range, self.rotation_range) 174 | else: 175 | theta = 0 176 | else: 177 | assert len(self.rotation_range) == 3 178 | theta = [] 179 | for rot in self.rotation_range: 180 | theta.append(np.pi / 180 * self.rng.uniform(-rot, rot) if rot else 0) 181 | 182 | # Shift 183 | shift = None 184 | if self.shift_range is not None: 185 | assert len(self.shift_range) == x.ndim - 1 186 | shift = [] 187 | for i, s in enumerate(self.shift_range): 188 | shift.append(self.rng.uniform(-s, s) * x.shape[img_size_axis[i]] if s else 0) 189 | 190 | # Zoom 191 | zoom = None 192 | if self.zoom_range is not None: 193 | zoom = self.rng.uniform(self.zoom_range[0], self.zoom_range[1]) 194 | 195 | # Create transformation matrix 196 | 197 | transform_matrix = None 198 | 199 | # Rotation 200 | if theta is not None: 201 | if np.isscalar(theta) and theta != 0: 202 | rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0], 203 | [np.sin(theta), np.cos(theta), 0], 204 | [0, 0, 1]]) 205 | transform_matrix = rotation_matrix 206 | elif any(th != 0 for th in theta): 207 | theta = theta[::-1] # As sitk uses (x, y, z) 208 | cd = np.cos(theta[0]) 209 | sd = np.sin(theta[0]) 210 | ch = np.cos(theta[1]) 211 | sh = np.sin(theta[1]) 212 | cw = np.cos(theta[2]) 213 | sw = np.sin(theta[2]) 214 | rotation_matrix = np.array( 215 | [[ch * cw, -cd * sw + sd * sh * cw, sd * sw + cd * sh * cw, 0], 216 | [ch * sw, cd * cw + sd * sh * sw, -sd * cw + cd * sh * sw, 0], 217 | [-sh, sd * ch, cd * ch, 0], 218 | [0, 0, 0, 1]] 219 | ) 220 | transform_matrix = rotation_matrix 221 | 222 | # Shift 223 | if shift is not None and any(sh != 0 for sh in shift): 224 | shift = shift[::-1] # As sitk uses (x, y, z) 225 | shift = np.asarray(shift) 226 | shift_matrix = np.eye(x.ndim) 227 | shift_matrix[:-1, -1] = shift 228 | transform_matrix = shift_matrix if transform_matrix is None else np.dot(shift_matrix, transform_matrix) 229 | 230 | # Zoom 231 | if zoom is not None and zoom != 1: 232 | zoom_matrix = np.eye(x.ndim) 233 | zoom_matrix[:-1, :-1] = np.eye(x.ndim - 1) * zoom 234 | transform_matrix = zoom_matrix if transform_matrix is None else np.dot(zoom_matrix, transform_matrix) 235 | 236 | if transform_matrix is not None: 237 | x = apply_transform(x, transform_matrix, self.cval) 
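
A minimal usage sketch of the two classes above. The file names and the SimpleITK-based reader are illustrative placeholders, not part of this module (the augmentation values are taken from config_fno.ini):

import SimpleITK as sitk
from experiments.data_io.dataset import MultimodalImageDataset, ImageTransform

def read_image(path):
    return sitk.GetArrayFromImage(sitk.ReadImage(path))  # (D, H, W) Numpy array

t1 = ['case0/case0_t1.nii.gz']    # One list of file paths per modality
seg = ['case0/case0_seg.nii.gz']  # Label images are just another modality
transform = ImageTransform(rotation_range=[30, 0, 0],
                           shift_range=[0.2, 0.2, 0.2],
                           zoom_range=[0.8, 1.2],
                           augmentation_probability=0.8)
dataset = MultimodalImageDataset([t1, seg], batch_size=1, reader=read_image,
                                 idx_x_modalities=[0], idx_y_modalities=[1],
                                 transform=transform)
x, y = dataset[0]  # x: (1, D, H, W, 1), y: (1, D, H, W, 1)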
--------------------------------------------------------------------------------
/experiments/data_io/input_data.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2023 IBM Inc. All rights reserved
3 | # SPDX-License-Identifier: Apache-2.0
4 | #
5 |
6 | import math
7 |
8 | from keras.src.trainers.data_adapters.py_dataset_adapter import PyDatasetAdapter
9 |
10 | from .dataset import MultimodalImageDataset, ImageTransform
11 |
12 |
13 | __author__ = 'Ken C. L. Wong'
14 |
15 |
16 | class InputData(object):
17 |     """Organized input data for creating generators.
18 |     With workers >= 1, the generators are threaded and thus run in parallel
19 |     with other processes.
20 |     Note that the label images are also considered as a modality associated with `idx_y_modalities`.
21 |
22 |     Args:
23 |         reader: For reading a sample as a Numpy array. If None (default), a dummy reader = lambda x: x is used.
24 |         data_lists_train: A list of filename lists. Each filename list contains the full paths
25 |             to the data samples of a modality (default: None).
26 |         data_lists_valid: A list of filename lists. Each filename list contains the full paths
27 |             to the data samples of a modality (default: None).
28 |         data_lists_test: A list of filename lists. Each filename list contains the full paths
29 |             to the data samples of a modality (default: None).
30 |         idx_x_modalities: Indexes to x (input) modalities in data lists (default: None).
31 |         idx_y_modalities: Indexes to y (output) modalities in data lists (default: None).
32 |             If it is None, the generators only generate x.
33 |         x_processing: A function that performs custom processing on x, e.g. data normalization (default: None).
34 |         batch_size: Batch size (default: 1).
35 |         max_queue_size: Maximum queue size (default: 1).
36 |         workers: Number of parallel processes (default: 1).
37 |             If 0, no threading or multiprocessing is used.
38 |         use_multiprocessing: Use multiprocessing if True, otherwise threading (default: False).
39 |         transform_kwargs: Dict of keyword arguments for ImageTransform (default: None).
40 |             See `.dataset.ImageTransform` for details.
41 |     """
42 |     def __init__(self,
43 |                  reader=None,
44 |                  data_lists_train=None,
45 |                  data_lists_valid=None,
46 |                  data_lists_test=None,
47 |                  idx_x_modalities=None,
48 |                  idx_y_modalities=None,
49 |                  x_processing=None,
50 |                  batch_size=1,
51 |                  max_queue_size=1,
52 |                  workers=1,
53 |                  use_multiprocessing=False,
54 |                  transform_kwargs=None,
55 |                  ):
56 |         self.reader = reader or (lambda x: x)
57 |         self.data_lists_train = data_lists_train
58 |         self.data_lists_valid = data_lists_valid
59 |         self.data_lists_test = data_lists_test
60 |         self.idx_x_modalities = idx_x_modalities
61 |         self.idx_y_modalities = idx_y_modalities
62 |         self.x_processing = x_processing
63 |         self.batch_size = batch_size
64 |         self.max_queue_size = max_queue_size
65 |         self.workers = workers
66 |         self.use_multiprocessing = use_multiprocessing
67 |         self.transform_kwargs = transform_kwargs
68 |
69 |     def _get_flow(self, data_lists, shuffle=False, transform_kwargs=None):
70 |         transform = ImageTransform(**transform_kwargs) if transform_kwargs is not None else None
71 |         dataset = MultimodalImageDataset(
72 |             data_lists,
73 |             self.batch_size,
74 |             reader=self.reader,
75 |             idx_x_modalities=self.idx_x_modalities,
76 |             idx_y_modalities=self.idx_y_modalities,
77 |             x_processing=self.x_processing,
78 |             transform=transform,
79 |             workers=self.workers,
80 |             use_multiprocessing=self.use_multiprocessing,
81 |             max_queue_size=self.max_queue_size,
82 |         )
83 |
84 |         adapter = PyDatasetAdapter(
85 |             dataset,
86 |             shuffle=shuffle
87 |         )
88 |
89 |         return adapter
90 |
91 |     def get_train_flow(self, shuffle=True):
92 |         """Gets the PyDatasetAdapter for training.
93 |
94 |         Args:
95 |             shuffle: If True (default), the data are shuffled in each epoch.
96 |
97 |         Returns:
98 |             A PyDatasetAdapter that can create an iterator for one epoch by get_numpy_iterator().
99 |         """
100 |         return self._get_flow(self.data_lists_train, shuffle=shuffle, transform_kwargs=self.transform_kwargs)
101 |
102 |     def get_valid_flow(self):
103 |         """Gets the PyDatasetAdapter for validation.
104 |         No data shuffling or augmentation.
105 |
106 |         Returns:
107 |             A PyDatasetAdapter that can create an iterator for one epoch by get_numpy_iterator().
108 |         """
109 |         return self._get_flow(self.data_lists_valid)
110 |
111 |     def get_test_flow(self):
112 |         """Gets the PyDatasetAdapter for testing.
113 |         No data shuffling or augmentation.
114 |
115 |         Returns:
116 |             A PyDatasetAdapter that can create an iterator for one epoch by get_numpy_iterator().
117 |         """
118 |         return self._get_flow(self.data_lists_test)
119 |
120 |     def _get_num_batches(self, data):
121 |         if data is None:
122 |             return 0
123 |
124 |         num_samples = len(data[0])
125 |         return math.ceil(num_samples / self.batch_size)
126 |
127 |     def get_train_num_batches(self):
128 |         data = self.data_lists_train
129 |         return self._get_num_batches(data)
130 |
131 |     def get_valid_num_batches(self):
132 |         data = self.data_lists_valid
133 |         return self._get_num_batches(data)
134 |
135 |     def get_test_num_batches(self):
136 |         data = self.data_lists_test
137 |         return self._get_num_batches(data)
138 |
139 |     def _get_image_size(self, data):
140 |         if data is None:
141 |             return None
142 |         return self.reader(data[0][0]).shape
143 |
144 |     def get_train_image_size(self):
145 |         return self._get_image_size(self.data_lists_train)
146 |
147 |     def get_valid_image_size(self):
148 |         return self._get_image_size(self.data_lists_valid)
149 |
150 |     def get_test_image_size(self):
151 |         return self._get_image_size(self.data_lists_test)
152 |
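
Because `reader=None` falls back to an identity function, in-memory arrays can stand in for file paths, which allows a self-contained sketch of how InputData is wired up (the array shape below mirrors the 120x120x78 training resolution from the configs, in (D, H, W) order):

import numpy as np
from experiments.data_io.input_data import InputData

t1 = [np.zeros((78, 120, 120))]   # One list per modality; arrays pass through the dummy reader
seg = [np.zeros((78, 120, 120))]
input_data = InputData(data_lists_train=[t1, seg],
                       idx_x_modalities=[0],
                       idx_y_modalities=[1],
                       batch_size=1)
flow = input_data.get_train_flow()
for x, y in flow.get_numpy_iterator():  # One epoch
    print(x.shape, y.shape)  # (1, 78, 120, 120, 1) (1, 78, 120, 120, 1)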
--------------------------------------------------------------------------------
/experiments/data_split/config_files/config_partitioning.ini:
--------------------------------------------------------------------------------
1 | [io]
2 | output_dir = '~/BraTS2019/Experiments/train-0.9_valid-0.1/inputs' # Where to save the output files
3 |
4 | [partitioning]
5 | base_paths = [ # List of str, each is a full path that contains required data.
6 |     '~/BraTS2019/Data/120x120x78/MICCAI_BraTS_2019_Data_Training/HGG',
7 |     '~/BraTS2019/Data/120x120x78/MICCAI_BraTS_2019_Data_Training/LGG',
8 |     ]
9 | train_fraction = 0.9 # Fraction for training in [0, 1].
10 | valid_fraction = 0.1 # Fraction for validation in [0, 1].
11 | test_fraction = 0.0 # Fraction for testing in [0, 1].
12 | modalities = ['t1', 't1ce', 't2', 'flair', 'seg'] # List of str of modalities.
13 | ext = 'nii.gz' # Image file extension, e.g. nii.gz
14 | remove_str = '~/BraTS2019/Data/120x120x78/' # String to be removed from the file paths in the lists
15 | seed = 100 # For the random number generator to produce the same randomization, if not None.
--------------------------------------------------------------------------------
/experiments/data_split/partitioning.py:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2023 IBM Inc. All rights reserved
3 | # SPDX-License-Identifier: Apache-2.0
4 | #
5 |
6 | import os
7 | import sys
8 | import copy
9 | import numpy as np
10 | from natsort import os_sorted
11 |
12 | sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../..'))
13 | from experiments.utils import get_config, save_config
14 |
15 | __author__ = 'Ken C. L. Wong'
16 |
17 |
18 | def partitioning(
19 |         base_path: str,
20 |         modalities,
21 |         ext,
22 |         train_fraction=0.,
23 |         valid_fraction=0.,
24 |         test_fraction=0.,
25 |         remove_str='',
26 |         seed=None
27 | ):
28 |     """Creates the data lists for training, validation, and testing for each modality.
29 |     It is assumed that all modalities share the same sample IDs.
30 |
31 |     This version is for BraTS 2019.
32 |
33 |     Args:
34 |         base_path: The full path that contains all required data.
35 |         modalities: List of str of modalities.
36 |         ext: Image file extension, e.g. nii.gz
37 |         train_fraction: Fraction for training in [0, 1] (default: 0).
38 | valid_fraction: Fraction for validation in [0, 1] (default: 0). 39 | test_fraction: Fraction for testing in [0, 1] (default: 0). 40 | remove_str: String to be removed from the file paths in the lists (default: ''). 41 | seed: For the random number generator to produce the same randomization, if not None (default: None). 42 | 43 | Returns: 44 | dict for training, validation, and testing. Key for modality; val for list of file paths. 45 | val can be an empty list if the corresponding fraction is 0. 46 | 47 | """ 48 | assert 0.9999 < train_fraction + valid_fraction + test_fraction < 1.0001 49 | 50 | # In case '~' is used to indicate the home directory 51 | base_path = os.path.expanduser(base_path) 52 | remove_str = os.path.expanduser(remove_str) 53 | 54 | # In BraTS'19, the folder names are the patient IDs 55 | ids = os_sorted(os.listdir(base_path)) 56 | ids = [i for i in ids if os.path.isdir(os.path.join(base_path, i))] 57 | num_samples = len(ids) 58 | 59 | # Get the ids for different partitions 60 | thres1 = round(train_fraction * num_samples) 61 | thres2 = round((train_fraction + valid_fraction) * num_samples) 62 | rng = np.random.default_rng(seed) 63 | ids = rng.permutation(ids) 64 | train_ids = os_sorted(ids[:thres1]) 65 | valid_ids = os_sorted(ids[thres1:thres2]) 66 | test_ids = os_sorted(ids[thres2:]) 67 | 68 | prefix = base_path.replace(remove_str, '') 69 | train_dict = {} 70 | valid_dict = {} 71 | test_dict = {} 72 | for m in modalities: 73 | # For BraTS'19, ids are the folder names and file name prefixes 74 | train_partition = [os.path.join(prefix, i, f'{i}_{m}.{ext}') for i in train_ids] 75 | valid_partition = [os.path.join(prefix, i, f'{i}_{m}.{ext}') for i in valid_ids] 76 | test_partition = [os.path.join(prefix, i, f'{i}_{m}.{ext}') for i in test_ids] 77 | 78 | # Ensure no overlapping between partitions 79 | assert np.all(np.isin(train_partition, valid_partition, invert=True)) 80 | assert np.all(np.isin(train_partition, test_partition, invert=True)) 81 | assert np.all(np.isin(test_partition, valid_partition, invert=True)) 82 | 83 | train_dict[m] = train_partition 84 | valid_dict[m] = valid_partition 85 | test_dict[m] = test_partition 86 | 87 | return train_dict, valid_dict, test_dict 88 | 89 | 90 | def merge_dict(dict_all, adict): 91 | if dict_all is None: 92 | dict_all = adict 93 | else: 94 | for m, ls in adict.items(): 95 | dict_all[m] = dict_all[m] + ls 96 | return dict_all 97 | 98 | 99 | def save_files(dict_all, output_dir, suffix): 100 | for m, ls in dict_all.items(): 101 | if not ls: 102 | continue 103 | output_path = os.path.join(output_dir, f'{m}_{suffix}.txt') 104 | ls = [ln + '\n' for ln in ls] 105 | with open(output_path, 'w') as f: 106 | f.writelines(ls) 107 | 108 | 109 | def main(config_file): 110 | config_args = get_config(config_file) 111 | 112 | partition_args = copy.deepcopy(config_args['partitioning']) 113 | base_paths = partition_args.pop('base_paths') 114 | 115 | train_dict_all = None 116 | valid_dict_all = None 117 | test_dict_all = None 118 | for base_path in base_paths: 119 | train_dict, valid_dict, test_dict = partitioning(base_path, **partition_args) 120 | train_dict_all = merge_dict(train_dict_all, train_dict) 121 | valid_dict_all = merge_dict(valid_dict_all, valid_dict) 122 | test_dict_all = merge_dict(test_dict_all, test_dict) 123 | 124 | output_dir = os.path.expanduser(config_args['io']['output_dir']) 125 | os.makedirs(output_dir, exist_ok=True) 126 | save_config(config_args, output_dir) 127 | 128 | train_fraction = 
130 | train_fraction = partition_args['train_fraction'] 131 | save_files(train_dict_all, output_dir, f'train-{train_fraction}') 132 | 133 | valid_fraction = partition_args['valid_fraction'] 134 | save_files(valid_dict_all, output_dir, f'valid-{valid_fraction}') 135 | 136 | test_fraction = partition_args['test_fraction'] 137 | save_files(test_dict_all, output_dir, f'test-{test_fraction}') 138 | 139 | print('Done!\n') 140 | 141 | 142 | if __name__ == '__main__': 143 | main(sys.argv[1]) 144 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/inference/flair_test-1.0.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AAM_1/BraTS19_CBICA_AAM_1_flair.nii.gz 2 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ABT_1/BraTS19_CBICA_ABT_1_flair.nii.gz 3 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALA_1/BraTS19_CBICA_ALA_1_flair.nii.gz 4 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALT_1/BraTS19_CBICA_ALT_1_flair.nii.gz 5 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALV_1/BraTS19_CBICA_ALV_1_flair.nii.gz 6 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALZ_1/BraTS19_CBICA_ALZ_1_flair.nii.gz 7 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMF_1/BraTS19_CBICA_AMF_1_flair.nii.gz 8 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMU_1/BraTS19_CBICA_AMU_1_flair.nii.gz 9 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ANK_1/BraTS19_CBICA_ANK_1_flair.nii.gz 10 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_APM_1/BraTS19_CBICA_APM_1_flair.nii.gz 11 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AQE_1/BraTS19_CBICA_AQE_1_flair.nii.gz 12 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ARR_1/BraTS19_CBICA_ARR_1_flair.nii.gz 13 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ATW_1/BraTS19_CBICA_ATW_1_flair.nii.gz 14 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUC_1/BraTS19_CBICA_AUC_1_flair.nii.gz 15 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUE_1/BraTS19_CBICA_AUE_1_flair.nii.gz 16 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AZA_1/BraTS19_CBICA_AZA_1_flair.nii.gz 17 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHF_1/BraTS19_CBICA_BHF_1_flair.nii.gz 18 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHN_1/BraTS19_CBICA_BHN_1_flair.nii.gz 19 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BKY_1/BraTS19_CBICA_BKY_1_flair.nii.gz 20 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLI_1/BraTS19_CBICA_BLI_1_flair.nii.gz 21 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLK_1/BraTS19_CBICA_BLK_1_flair.nii.gz 22 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_907_1/BraTS19_MDA_907_1_flair.nii.gz 23 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_922_1/BraTS19_MDA_922_1_flair.nii.gz 24 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_958_1/BraTS19_MDA_958_1_flair.nii.gz 25 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_959_1/BraTS19_MDA_959_1_flair.nii.gz 26 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1012_1/BraTS19_MDA_1012_1_flair.nii.gz 27 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1015_1/BraTS19_MDA_1015_1_flair.nii.gz 28 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1060_1/BraTS19_MDA_1060_1_flair.nii.gz 29 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1081_1/BraTS19_MDA_1081_1_flair.nii.gz 30 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1089_1/BraTS19_MDA_1089_1_flair.nii.gz 31 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1123_1/BraTS19_MDA_1123_1_flair.nii.gz 32 |
MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1124_1/BraTS19_MDA_1124_1_flair.nii.gz 33 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_215_1/BraTS19_TCIA01_215_1_flair.nii.gz 34 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_341_1/BraTS19_TCIA01_341_1_flair.nii.gz 35 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_454_1/BraTS19_TCIA01_454_1_flair.nii.gz 36 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_139_1/BraTS19_TCIA02_139_1_flair.nii.gz 37 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_210_1/BraTS19_TCIA02_210_1_flair.nii.gz 38 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_229_1/BraTS19_TCIA02_229_1_flair.nii.gz 39 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_230_1/BraTS19_TCIA02_230_1_flair.nii.gz 40 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_273_1/BraTS19_TCIA02_273_1_flair.nii.gz 41 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_294_1/BraTS19_TCIA02_294_1_flair.nii.gz 42 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_350_1/BraTS19_TCIA02_350_1_flair.nii.gz 43 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_355_1/BraTS19_TCIA02_355_1_flair.nii.gz 44 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_382_1/BraTS19_TCIA02_382_1_flair.nii.gz 45 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_400_1/BraTS19_TCIA02_400_1_flair.nii.gz 46 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_457_1/BraTS19_TCIA02_457_1_flair.nii.gz 47 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_496_1/BraTS19_TCIA02_496_1_flair.nii.gz 48 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_216_1/BraTS19_TCIA03_216_1_flair.nii.gz 49 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_250_1/BraTS19_TCIA03_250_1_flair.nii.gz 50 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_288_1/BraTS19_TCIA03_288_1_flair.nii.gz 51 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_295_1/BraTS19_TCIA03_295_1_flair.nii.gz 52 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_303_1/BraTS19_TCIA03_303_1_flair.nii.gz 53 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_304_1/BraTS19_TCIA03_304_1_flair.nii.gz 54 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_313_1/BraTS19_TCIA03_313_1_flair.nii.gz 55 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_318_1/BraTS19_TCIA03_318_1_flair.nii.gz 56 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_464_1/BraTS19_TCIA03_464_1_flair.nii.gz 57 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_604_1/BraTS19_TCIA03_604_1_flair.nii.gz 58 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_212_1/BraTS19_TCIA04_212_1_flair.nii.gz 59 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_253_1/BraTS19_TCIA04_253_1_flair.nii.gz 60 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_456_1/BraTS19_TCIA05_456_1_flair.nii.gz 61 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_484_1/BraTS19_TCIA05_484_1_flair.nii.gz 62 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA06_497_1/BraTS19_TCIA06_497_1_flair.nii.gz 63 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_600_1/BraTS19_TCIA07_600_1_flair.nii.gz 64 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_601_1/BraTS19_TCIA07_601_1_flair.nii.gz 65 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_602_1/BraTS19_TCIA07_602_1_flair.nii.gz 66 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_148_1/BraTS19_TCIA09_148_1_flair.nii.gz 67 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_225_1/BraTS19_TCIA09_225_1_flair.nii.gz 68 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_248_1/BraTS19_TCIA09_248_1_flair.nii.gz 69 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_381_1/BraTS19_TCIA09_381_1_flair.nii.gz 70 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_385_1/BraTS19_TCIA09_385_1_flair.nii.gz 71 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_106_1/BraTS19_TCIA10_106_1_flair.nii.gz 72 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_127_1/BraTS19_TCIA10_127_1_flair.nii.gz 73 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_172_1/BraTS19_TCIA10_172_1_flair.nii.gz 74 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_195_1/BraTS19_TCIA10_195_1_flair.nii.gz 75 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_220_1/BraTS19_TCIA10_220_1_flair.nii.gz 76 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_232_1/BraTS19_TCIA10_232_1_flair.nii.gz 77 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_236_1/BraTS19_TCIA10_236_1_flair.nii.gz 78 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_239_1/BraTS19_TCIA10_239_1_flair.nii.gz 79 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_271_1/BraTS19_TCIA10_271_1_flair.nii.gz 80 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_311_1/BraTS19_TCIA10_311_1_flair.nii.gz 81 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_467_1/BraTS19_TCIA10_467_1_flair.nii.gz 82 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_609_1/BraTS19_TCIA10_609_1_flair.nii.gz 83 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_614_1/BraTS19_TCIA10_614_1_flair.nii.gz 84 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_627_1/BraTS19_TCIA10_627_1_flair.nii.gz 85 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_631_1/BraTS19_TCIA10_631_1_flair.nii.gz 86 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_635_1/BraTS19_TCIA10_635_1_flair.nii.gz 87 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_647_1/BraTS19_TCIA10_647_1_flair.nii.gz 88 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA11_612_1/BraTS19_TCIA11_612_1_flair.nii.gz 89 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_146_1/BraTS19_TCIA12_146_1_flair.nii.gz 90 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_339_1/BraTS19_TCIA12_339_1_flair.nii.gz 91 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_613_1/BraTS19_TCIA12_613_1_flair.nii.gz 92 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_641_1/BraTS19_TCIA12_641_1_flair.nii.gz 93 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_610_1/BraTS19_TCIA13_610_1_flair.nii.gz 94 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_611_1/BraTS19_TCIA13_611_1_flair.nii.gz 95 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_616_1/BraTS19_TCIA13_616_1_flair.nii.gz 96 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_617_1/BraTS19_TCIA13_617_1_flair.nii.gz 97 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_619_1/BraTS19_TCIA13_619_1_flair.nii.gz 98 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_622_1/BraTS19_TCIA13_622_1_flair.nii.gz 99 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_626_1/BraTS19_TCIA13_626_1_flair.nii.gz 100 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_636_1/BraTS19_TCIA13_636_1_flair.nii.gz 101 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_638_1/BraTS19_TCIA13_638_1_flair.nii.gz 102 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_643_1/BraTS19_TCIA13_643_1_flair.nii.gz 103 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_646_1/BraTS19_TCIA13_646_1_flair.nii.gz 104 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_648_1/BraTS19_TCIA13_648_1_flair.nii.gz 105 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_649_1/BraTS19_TCIA13_649_1_flair.nii.gz 106 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_651_1/BraTS19_TCIA13_651_1_flair.nii.gz 107 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_652_1/BraTS19_TCIA13_652_1_flair.nii.gz 108 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_655_1/BraTS19_TCIA13_655_1_flair.nii.gz 109 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3446_1/BraTS19_UAB_3446_1_flair.nii.gz 110 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3448_1/BraTS19_UAB_3448_1_flair.nii.gz 111 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3449_1/BraTS19_UAB_3449_1_flair.nii.gz 112 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3454_1/BraTS19_UAB_3454_1_flair.nii.gz 113 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3455_1/BraTS19_UAB_3455_1_flair.nii.gz 114 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3456_1/BraTS19_UAB_3456_1_flair.nii.gz 115 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3490_1/BraTS19_UAB_3490_1_flair.nii.gz 116 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3498_1/BraTS19_UAB_3498_1_flair.nii.gz 117 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3499_1/BraTS19_UAB_3499_1_flair.nii.gz 118 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S036_1/BraTS19_WashU_S036_1_flair.nii.gz 119 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S037_1/BraTS19_WashU_S037_1_flair.nii.gz 120 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S040_1/BraTS19_WashU_S040_1_flair.nii.gz 121 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S041_1/BraTS19_WashU_S041_1_flair.nii.gz 122 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W033_1/BraTS19_WashU_W033_1_flair.nii.gz 123 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W038_1/BraTS19_WashU_W038_1_flair.nii.gz 124 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W047_1/BraTS19_WashU_W047_1_flair.nii.gz 125 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W053_1/BraTS19_WashU_W053_1_flair.nii.gz 126 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/inference/t1_test-1.0.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AAM_1/BraTS19_CBICA_AAM_1_t1.nii.gz 2 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ABT_1/BraTS19_CBICA_ABT_1_t1.nii.gz 3 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALA_1/BraTS19_CBICA_ALA_1_t1.nii.gz 4 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALT_1/BraTS19_CBICA_ALT_1_t1.nii.gz 5 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALV_1/BraTS19_CBICA_ALV_1_t1.nii.gz 6 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALZ_1/BraTS19_CBICA_ALZ_1_t1.nii.gz 7 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMF_1/BraTS19_CBICA_AMF_1_t1.nii.gz 8 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMU_1/BraTS19_CBICA_AMU_1_t1.nii.gz 9 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ANK_1/BraTS19_CBICA_ANK_1_t1.nii.gz 10 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_APM_1/BraTS19_CBICA_APM_1_t1.nii.gz 11 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AQE_1/BraTS19_CBICA_AQE_1_t1.nii.gz 12 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ARR_1/BraTS19_CBICA_ARR_1_t1.nii.gz 13 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ATW_1/BraTS19_CBICA_ATW_1_t1.nii.gz 14 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUC_1/BraTS19_CBICA_AUC_1_t1.nii.gz 15 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUE_1/BraTS19_CBICA_AUE_1_t1.nii.gz 16 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AZA_1/BraTS19_CBICA_AZA_1_t1.nii.gz 17 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHF_1/BraTS19_CBICA_BHF_1_t1.nii.gz 18 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHN_1/BraTS19_CBICA_BHN_1_t1.nii.gz 19 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BKY_1/BraTS19_CBICA_BKY_1_t1.nii.gz 20 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLI_1/BraTS19_CBICA_BLI_1_t1.nii.gz 21 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLK_1/BraTS19_CBICA_BLK_1_t1.nii.gz 22 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_907_1/BraTS19_MDA_907_1_t1.nii.gz 23 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_922_1/BraTS19_MDA_922_1_t1.nii.gz 24 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_958_1/BraTS19_MDA_958_1_t1.nii.gz 25 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_959_1/BraTS19_MDA_959_1_t1.nii.gz 26 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1012_1/BraTS19_MDA_1012_1_t1.nii.gz 27 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1015_1/BraTS19_MDA_1015_1_t1.nii.gz 28 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1060_1/BraTS19_MDA_1060_1_t1.nii.gz 29 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1081_1/BraTS19_MDA_1081_1_t1.nii.gz 30 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1089_1/BraTS19_MDA_1089_1_t1.nii.gz 31 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1123_1/BraTS19_MDA_1123_1_t1.nii.gz 32 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1124_1/BraTS19_MDA_1124_1_t1.nii.gz 33 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_215_1/BraTS19_TCIA01_215_1_t1.nii.gz 34 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_341_1/BraTS19_TCIA01_341_1_t1.nii.gz 35 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_454_1/BraTS19_TCIA01_454_1_t1.nii.gz 36 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_139_1/BraTS19_TCIA02_139_1_t1.nii.gz 37 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_210_1/BraTS19_TCIA02_210_1_t1.nii.gz 38 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_229_1/BraTS19_TCIA02_229_1_t1.nii.gz 39 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_230_1/BraTS19_TCIA02_230_1_t1.nii.gz 40 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_273_1/BraTS19_TCIA02_273_1_t1.nii.gz 41 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_294_1/BraTS19_TCIA02_294_1_t1.nii.gz 42 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_350_1/BraTS19_TCIA02_350_1_t1.nii.gz 43 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_355_1/BraTS19_TCIA02_355_1_t1.nii.gz 44 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_382_1/BraTS19_TCIA02_382_1_t1.nii.gz 45 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_400_1/BraTS19_TCIA02_400_1_t1.nii.gz 46 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_457_1/BraTS19_TCIA02_457_1_t1.nii.gz 47 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_496_1/BraTS19_TCIA02_496_1_t1.nii.gz 48 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_216_1/BraTS19_TCIA03_216_1_t1.nii.gz 49 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_250_1/BraTS19_TCIA03_250_1_t1.nii.gz 50 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_288_1/BraTS19_TCIA03_288_1_t1.nii.gz 51 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_295_1/BraTS19_TCIA03_295_1_t1.nii.gz 52 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_303_1/BraTS19_TCIA03_303_1_t1.nii.gz 53 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_304_1/BraTS19_TCIA03_304_1_t1.nii.gz 54 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_313_1/BraTS19_TCIA03_313_1_t1.nii.gz 55 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_318_1/BraTS19_TCIA03_318_1_t1.nii.gz 56 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_464_1/BraTS19_TCIA03_464_1_t1.nii.gz 57 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_604_1/BraTS19_TCIA03_604_1_t1.nii.gz 58 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_212_1/BraTS19_TCIA04_212_1_t1.nii.gz 59 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_253_1/BraTS19_TCIA04_253_1_t1.nii.gz 60 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_456_1/BraTS19_TCIA05_456_1_t1.nii.gz 61 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_484_1/BraTS19_TCIA05_484_1_t1.nii.gz 62 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA06_497_1/BraTS19_TCIA06_497_1_t1.nii.gz 63 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_600_1/BraTS19_TCIA07_600_1_t1.nii.gz 64 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_601_1/BraTS19_TCIA07_601_1_t1.nii.gz 65 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_602_1/BraTS19_TCIA07_602_1_t1.nii.gz 66 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_148_1/BraTS19_TCIA09_148_1_t1.nii.gz 67 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_225_1/BraTS19_TCIA09_225_1_t1.nii.gz 68 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_248_1/BraTS19_TCIA09_248_1_t1.nii.gz 69 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_381_1/BraTS19_TCIA09_381_1_t1.nii.gz 70 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_385_1/BraTS19_TCIA09_385_1_t1.nii.gz 71 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_106_1/BraTS19_TCIA10_106_1_t1.nii.gz 72 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_127_1/BraTS19_TCIA10_127_1_t1.nii.gz 73 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_172_1/BraTS19_TCIA10_172_1_t1.nii.gz 74 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_195_1/BraTS19_TCIA10_195_1_t1.nii.gz 75 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_220_1/BraTS19_TCIA10_220_1_t1.nii.gz 76 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_232_1/BraTS19_TCIA10_232_1_t1.nii.gz 77 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_236_1/BraTS19_TCIA10_236_1_t1.nii.gz 78 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_239_1/BraTS19_TCIA10_239_1_t1.nii.gz 79 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_271_1/BraTS19_TCIA10_271_1_t1.nii.gz 80 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_311_1/BraTS19_TCIA10_311_1_t1.nii.gz 81 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_467_1/BraTS19_TCIA10_467_1_t1.nii.gz 82 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_609_1/BraTS19_TCIA10_609_1_t1.nii.gz 83 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_614_1/BraTS19_TCIA10_614_1_t1.nii.gz 84 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_627_1/BraTS19_TCIA10_627_1_t1.nii.gz 85 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_631_1/BraTS19_TCIA10_631_1_t1.nii.gz 86 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_635_1/BraTS19_TCIA10_635_1_t1.nii.gz 87 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_647_1/BraTS19_TCIA10_647_1_t1.nii.gz 88 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA11_612_1/BraTS19_TCIA11_612_1_t1.nii.gz 89 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_146_1/BraTS19_TCIA12_146_1_t1.nii.gz 90 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_339_1/BraTS19_TCIA12_339_1_t1.nii.gz 91 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_613_1/BraTS19_TCIA12_613_1_t1.nii.gz 92 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_641_1/BraTS19_TCIA12_641_1_t1.nii.gz 93 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_610_1/BraTS19_TCIA13_610_1_t1.nii.gz 94 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_611_1/BraTS19_TCIA13_611_1_t1.nii.gz 95 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_616_1/BraTS19_TCIA13_616_1_t1.nii.gz 96 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_617_1/BraTS19_TCIA13_617_1_t1.nii.gz 97 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_619_1/BraTS19_TCIA13_619_1_t1.nii.gz 98 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_622_1/BraTS19_TCIA13_622_1_t1.nii.gz 99 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_626_1/BraTS19_TCIA13_626_1_t1.nii.gz 100 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_636_1/BraTS19_TCIA13_636_1_t1.nii.gz 101 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_638_1/BraTS19_TCIA13_638_1_t1.nii.gz 102 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_643_1/BraTS19_TCIA13_643_1_t1.nii.gz 103 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_646_1/BraTS19_TCIA13_646_1_t1.nii.gz 104 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_648_1/BraTS19_TCIA13_648_1_t1.nii.gz 105 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_649_1/BraTS19_TCIA13_649_1_t1.nii.gz 106 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_651_1/BraTS19_TCIA13_651_1_t1.nii.gz 107 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_652_1/BraTS19_TCIA13_652_1_t1.nii.gz 108 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_655_1/BraTS19_TCIA13_655_1_t1.nii.gz 109 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3446_1/BraTS19_UAB_3446_1_t1.nii.gz 110 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3448_1/BraTS19_UAB_3448_1_t1.nii.gz 111 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3449_1/BraTS19_UAB_3449_1_t1.nii.gz 112 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3454_1/BraTS19_UAB_3454_1_t1.nii.gz 113 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3455_1/BraTS19_UAB_3455_1_t1.nii.gz 114 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3456_1/BraTS19_UAB_3456_1_t1.nii.gz 115 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3490_1/BraTS19_UAB_3490_1_t1.nii.gz 116 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3498_1/BraTS19_UAB_3498_1_t1.nii.gz 117 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3499_1/BraTS19_UAB_3499_1_t1.nii.gz 118 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S036_1/BraTS19_WashU_S036_1_t1.nii.gz 119 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S037_1/BraTS19_WashU_S037_1_t1.nii.gz 120 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S040_1/BraTS19_WashU_S040_1_t1.nii.gz 121 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S041_1/BraTS19_WashU_S041_1_t1.nii.gz 122 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W033_1/BraTS19_WashU_W033_1_t1.nii.gz 123 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W038_1/BraTS19_WashU_W038_1_t1.nii.gz 124 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W047_1/BraTS19_WashU_W047_1_t1.nii.gz 125 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W053_1/BraTS19_WashU_W053_1_t1.nii.gz 126 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/inference/t1ce_test-1.0.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AAM_1/BraTS19_CBICA_AAM_1_t1ce.nii.gz 2 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ABT_1/BraTS19_CBICA_ABT_1_t1ce.nii.gz 3 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALA_1/BraTS19_CBICA_ALA_1_t1ce.nii.gz 4 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALT_1/BraTS19_CBICA_ALT_1_t1ce.nii.gz 5 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALV_1/BraTS19_CBICA_ALV_1_t1ce.nii.gz 6 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALZ_1/BraTS19_CBICA_ALZ_1_t1ce.nii.gz 7 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMF_1/BraTS19_CBICA_AMF_1_t1ce.nii.gz 8 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMU_1/BraTS19_CBICA_AMU_1_t1ce.nii.gz 9 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ANK_1/BraTS19_CBICA_ANK_1_t1ce.nii.gz 10 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_APM_1/BraTS19_CBICA_APM_1_t1ce.nii.gz 11 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AQE_1/BraTS19_CBICA_AQE_1_t1ce.nii.gz 12 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ARR_1/BraTS19_CBICA_ARR_1_t1ce.nii.gz 13 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ATW_1/BraTS19_CBICA_ATW_1_t1ce.nii.gz 14 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUC_1/BraTS19_CBICA_AUC_1_t1ce.nii.gz 15 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUE_1/BraTS19_CBICA_AUE_1_t1ce.nii.gz 16 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AZA_1/BraTS19_CBICA_AZA_1_t1ce.nii.gz 17 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHF_1/BraTS19_CBICA_BHF_1_t1ce.nii.gz 18 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHN_1/BraTS19_CBICA_BHN_1_t1ce.nii.gz 19 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BKY_1/BraTS19_CBICA_BKY_1_t1ce.nii.gz 20 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLI_1/BraTS19_CBICA_BLI_1_t1ce.nii.gz 21 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLK_1/BraTS19_CBICA_BLK_1_t1ce.nii.gz 22 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_907_1/BraTS19_MDA_907_1_t1ce.nii.gz 23 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_922_1/BraTS19_MDA_922_1_t1ce.nii.gz 24 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_958_1/BraTS19_MDA_958_1_t1ce.nii.gz 25 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_959_1/BraTS19_MDA_959_1_t1ce.nii.gz 26 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1012_1/BraTS19_MDA_1012_1_t1ce.nii.gz 27 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1015_1/BraTS19_MDA_1015_1_t1ce.nii.gz 28 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1060_1/BraTS19_MDA_1060_1_t1ce.nii.gz 29 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1081_1/BraTS19_MDA_1081_1_t1ce.nii.gz 30 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1089_1/BraTS19_MDA_1089_1_t1ce.nii.gz 31 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1123_1/BraTS19_MDA_1123_1_t1ce.nii.gz 32 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1124_1/BraTS19_MDA_1124_1_t1ce.nii.gz 33 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_215_1/BraTS19_TCIA01_215_1_t1ce.nii.gz 34 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_341_1/BraTS19_TCIA01_341_1_t1ce.nii.gz 35 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_454_1/BraTS19_TCIA01_454_1_t1ce.nii.gz 36 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_139_1/BraTS19_TCIA02_139_1_t1ce.nii.gz 37 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_210_1/BraTS19_TCIA02_210_1_t1ce.nii.gz 38 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_229_1/BraTS19_TCIA02_229_1_t1ce.nii.gz 39 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_230_1/BraTS19_TCIA02_230_1_t1ce.nii.gz 40 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_273_1/BraTS19_TCIA02_273_1_t1ce.nii.gz 41 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_294_1/BraTS19_TCIA02_294_1_t1ce.nii.gz 42 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_350_1/BraTS19_TCIA02_350_1_t1ce.nii.gz 43 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_355_1/BraTS19_TCIA02_355_1_t1ce.nii.gz 44 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_382_1/BraTS19_TCIA02_382_1_t1ce.nii.gz 45 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_400_1/BraTS19_TCIA02_400_1_t1ce.nii.gz 46 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_457_1/BraTS19_TCIA02_457_1_t1ce.nii.gz 47 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_496_1/BraTS19_TCIA02_496_1_t1ce.nii.gz 48 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_216_1/BraTS19_TCIA03_216_1_t1ce.nii.gz 49 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_250_1/BraTS19_TCIA03_250_1_t1ce.nii.gz 50 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_288_1/BraTS19_TCIA03_288_1_t1ce.nii.gz 51 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_295_1/BraTS19_TCIA03_295_1_t1ce.nii.gz 52 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_303_1/BraTS19_TCIA03_303_1_t1ce.nii.gz 53 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_304_1/BraTS19_TCIA03_304_1_t1ce.nii.gz 54 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_313_1/BraTS19_TCIA03_313_1_t1ce.nii.gz 55 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_318_1/BraTS19_TCIA03_318_1_t1ce.nii.gz 56 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_464_1/BraTS19_TCIA03_464_1_t1ce.nii.gz 57 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_604_1/BraTS19_TCIA03_604_1_t1ce.nii.gz 58 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_212_1/BraTS19_TCIA04_212_1_t1ce.nii.gz 59 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_253_1/BraTS19_TCIA04_253_1_t1ce.nii.gz 60 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_456_1/BraTS19_TCIA05_456_1_t1ce.nii.gz 61 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_484_1/BraTS19_TCIA05_484_1_t1ce.nii.gz 62 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA06_497_1/BraTS19_TCIA06_497_1_t1ce.nii.gz 63 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_600_1/BraTS19_TCIA07_600_1_t1ce.nii.gz 64 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_601_1/BraTS19_TCIA07_601_1_t1ce.nii.gz 65 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_602_1/BraTS19_TCIA07_602_1_t1ce.nii.gz 66 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_148_1/BraTS19_TCIA09_148_1_t1ce.nii.gz 67 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_225_1/BraTS19_TCIA09_225_1_t1ce.nii.gz 68 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_248_1/BraTS19_TCIA09_248_1_t1ce.nii.gz 69 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_381_1/BraTS19_TCIA09_381_1_t1ce.nii.gz 70 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_385_1/BraTS19_TCIA09_385_1_t1ce.nii.gz 71 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_106_1/BraTS19_TCIA10_106_1_t1ce.nii.gz 72 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_127_1/BraTS19_TCIA10_127_1_t1ce.nii.gz 73 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_172_1/BraTS19_TCIA10_172_1_t1ce.nii.gz 74 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_195_1/BraTS19_TCIA10_195_1_t1ce.nii.gz 75 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_220_1/BraTS19_TCIA10_220_1_t1ce.nii.gz 76 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_232_1/BraTS19_TCIA10_232_1_t1ce.nii.gz 77 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_236_1/BraTS19_TCIA10_236_1_t1ce.nii.gz 78 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_239_1/BraTS19_TCIA10_239_1_t1ce.nii.gz 79 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_271_1/BraTS19_TCIA10_271_1_t1ce.nii.gz 80 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_311_1/BraTS19_TCIA10_311_1_t1ce.nii.gz 81 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_467_1/BraTS19_TCIA10_467_1_t1ce.nii.gz 82 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_609_1/BraTS19_TCIA10_609_1_t1ce.nii.gz 83 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_614_1/BraTS19_TCIA10_614_1_t1ce.nii.gz 84 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_627_1/BraTS19_TCIA10_627_1_t1ce.nii.gz 85 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_631_1/BraTS19_TCIA10_631_1_t1ce.nii.gz 86 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_635_1/BraTS19_TCIA10_635_1_t1ce.nii.gz 87 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_647_1/BraTS19_TCIA10_647_1_t1ce.nii.gz 88 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA11_612_1/BraTS19_TCIA11_612_1_t1ce.nii.gz 89 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_146_1/BraTS19_TCIA12_146_1_t1ce.nii.gz 90 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_339_1/BraTS19_TCIA12_339_1_t1ce.nii.gz 91 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_613_1/BraTS19_TCIA12_613_1_t1ce.nii.gz 92 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_641_1/BraTS19_TCIA12_641_1_t1ce.nii.gz 93 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_610_1/BraTS19_TCIA13_610_1_t1ce.nii.gz 94 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_611_1/BraTS19_TCIA13_611_1_t1ce.nii.gz 95 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_616_1/BraTS19_TCIA13_616_1_t1ce.nii.gz 96 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_617_1/BraTS19_TCIA13_617_1_t1ce.nii.gz 97 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_619_1/BraTS19_TCIA13_619_1_t1ce.nii.gz 98 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_622_1/BraTS19_TCIA13_622_1_t1ce.nii.gz 99 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_626_1/BraTS19_TCIA13_626_1_t1ce.nii.gz 100 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_636_1/BraTS19_TCIA13_636_1_t1ce.nii.gz 101 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_638_1/BraTS19_TCIA13_638_1_t1ce.nii.gz 102 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_643_1/BraTS19_TCIA13_643_1_t1ce.nii.gz 103 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_646_1/BraTS19_TCIA13_646_1_t1ce.nii.gz 104 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_648_1/BraTS19_TCIA13_648_1_t1ce.nii.gz 105 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_649_1/BraTS19_TCIA13_649_1_t1ce.nii.gz 106 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_651_1/BraTS19_TCIA13_651_1_t1ce.nii.gz 107 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_652_1/BraTS19_TCIA13_652_1_t1ce.nii.gz 108 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_655_1/BraTS19_TCIA13_655_1_t1ce.nii.gz 109 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3446_1/BraTS19_UAB_3446_1_t1ce.nii.gz 110 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3448_1/BraTS19_UAB_3448_1_t1ce.nii.gz 111 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3449_1/BraTS19_UAB_3449_1_t1ce.nii.gz 112 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3454_1/BraTS19_UAB_3454_1_t1ce.nii.gz 113 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3455_1/BraTS19_UAB_3455_1_t1ce.nii.gz 114 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3456_1/BraTS19_UAB_3456_1_t1ce.nii.gz 115 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3490_1/BraTS19_UAB_3490_1_t1ce.nii.gz 116 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3498_1/BraTS19_UAB_3498_1_t1ce.nii.gz 117 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3499_1/BraTS19_UAB_3499_1_t1ce.nii.gz 118 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S036_1/BraTS19_WashU_S036_1_t1ce.nii.gz 119 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S037_1/BraTS19_WashU_S037_1_t1ce.nii.gz 120 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S040_1/BraTS19_WashU_S040_1_t1ce.nii.gz 121 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S041_1/BraTS19_WashU_S041_1_t1ce.nii.gz 122 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W033_1/BraTS19_WashU_W033_1_t1ce.nii.gz 123 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W038_1/BraTS19_WashU_W038_1_t1ce.nii.gz 124 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W047_1/BraTS19_WashU_W047_1_t1ce.nii.gz 125 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W053_1/BraTS19_WashU_W053_1_t1ce.nii.gz 126 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/inference/t2_test-1.0.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AAM_1/BraTS19_CBICA_AAM_1_t2.nii.gz 2 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ABT_1/BraTS19_CBICA_ABT_1_t2.nii.gz 3 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALA_1/BraTS19_CBICA_ALA_1_t2.nii.gz 4 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALT_1/BraTS19_CBICA_ALT_1_t2.nii.gz 5 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALV_1/BraTS19_CBICA_ALV_1_t2.nii.gz 6 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ALZ_1/BraTS19_CBICA_ALZ_1_t2.nii.gz 7 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMF_1/BraTS19_CBICA_AMF_1_t2.nii.gz 8 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AMU_1/BraTS19_CBICA_AMU_1_t2.nii.gz 9 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ANK_1/BraTS19_CBICA_ANK_1_t2.nii.gz 10 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_APM_1/BraTS19_CBICA_APM_1_t2.nii.gz 11 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AQE_1/BraTS19_CBICA_AQE_1_t2.nii.gz 12 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ARR_1/BraTS19_CBICA_ARR_1_t2.nii.gz 13 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_ATW_1/BraTS19_CBICA_ATW_1_t2.nii.gz 14 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUC_1/BraTS19_CBICA_AUC_1_t2.nii.gz 15 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AUE_1/BraTS19_CBICA_AUE_1_t2.nii.gz 16 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_AZA_1/BraTS19_CBICA_AZA_1_t2.nii.gz 17 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHF_1/BraTS19_CBICA_BHF_1_t2.nii.gz 18 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BHN_1/BraTS19_CBICA_BHN_1_t2.nii.gz 19 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BKY_1/BraTS19_CBICA_BKY_1_t2.nii.gz 20 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLI_1/BraTS19_CBICA_BLI_1_t2.nii.gz 21 | MICCAI_BraTS_2019_Data_Validation/BraTS19_CBICA_BLK_1/BraTS19_CBICA_BLK_1_t2.nii.gz 22 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_907_1/BraTS19_MDA_907_1_t2.nii.gz 23 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_922_1/BraTS19_MDA_922_1_t2.nii.gz 24 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_958_1/BraTS19_MDA_958_1_t2.nii.gz 25 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_959_1/BraTS19_MDA_959_1_t2.nii.gz 26 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1012_1/BraTS19_MDA_1012_1_t2.nii.gz 27 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1015_1/BraTS19_MDA_1015_1_t2.nii.gz 28 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1060_1/BraTS19_MDA_1060_1_t2.nii.gz 29 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1081_1/BraTS19_MDA_1081_1_t2.nii.gz 30 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1089_1/BraTS19_MDA_1089_1_t2.nii.gz 31 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1123_1/BraTS19_MDA_1123_1_t2.nii.gz 32 | MICCAI_BraTS_2019_Data_Validation/BraTS19_MDA_1124_1/BraTS19_MDA_1124_1_t2.nii.gz 33 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_215_1/BraTS19_TCIA01_215_1_t2.nii.gz 34 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_341_1/BraTS19_TCIA01_341_1_t2.nii.gz 35 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA01_454_1/BraTS19_TCIA01_454_1_t2.nii.gz 36 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_139_1/BraTS19_TCIA02_139_1_t2.nii.gz 37 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_210_1/BraTS19_TCIA02_210_1_t2.nii.gz 38 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_229_1/BraTS19_TCIA02_229_1_t2.nii.gz 39 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_230_1/BraTS19_TCIA02_230_1_t2.nii.gz 40 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_273_1/BraTS19_TCIA02_273_1_t2.nii.gz 41 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_294_1/BraTS19_TCIA02_294_1_t2.nii.gz 42 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_350_1/BraTS19_TCIA02_350_1_t2.nii.gz 43 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_355_1/BraTS19_TCIA02_355_1_t2.nii.gz 44 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_382_1/BraTS19_TCIA02_382_1_t2.nii.gz 45 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_400_1/BraTS19_TCIA02_400_1_t2.nii.gz 46 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_457_1/BraTS19_TCIA02_457_1_t2.nii.gz 47 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA02_496_1/BraTS19_TCIA02_496_1_t2.nii.gz 48 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_216_1/BraTS19_TCIA03_216_1_t2.nii.gz 49 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_250_1/BraTS19_TCIA03_250_1_t2.nii.gz 50 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_288_1/BraTS19_TCIA03_288_1_t2.nii.gz 51 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_295_1/BraTS19_TCIA03_295_1_t2.nii.gz 52 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_303_1/BraTS19_TCIA03_303_1_t2.nii.gz 53 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_304_1/BraTS19_TCIA03_304_1_t2.nii.gz 54 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_313_1/BraTS19_TCIA03_313_1_t2.nii.gz 55 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_318_1/BraTS19_TCIA03_318_1_t2.nii.gz 56 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_464_1/BraTS19_TCIA03_464_1_t2.nii.gz 57 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA03_604_1/BraTS19_TCIA03_604_1_t2.nii.gz 58 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_212_1/BraTS19_TCIA04_212_1_t2.nii.gz 59 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA04_253_1/BraTS19_TCIA04_253_1_t2.nii.gz 60 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_456_1/BraTS19_TCIA05_456_1_t2.nii.gz 61 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA05_484_1/BraTS19_TCIA05_484_1_t2.nii.gz 62 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA06_497_1/BraTS19_TCIA06_497_1_t2.nii.gz 63 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_600_1/BraTS19_TCIA07_600_1_t2.nii.gz 64 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_601_1/BraTS19_TCIA07_601_1_t2.nii.gz 65 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA07_602_1/BraTS19_TCIA07_602_1_t2.nii.gz 66 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_148_1/BraTS19_TCIA09_148_1_t2.nii.gz 67 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_225_1/BraTS19_TCIA09_225_1_t2.nii.gz 68 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_248_1/BraTS19_TCIA09_248_1_t2.nii.gz 69 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_381_1/BraTS19_TCIA09_381_1_t2.nii.gz 70 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA09_385_1/BraTS19_TCIA09_385_1_t2.nii.gz 71 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_106_1/BraTS19_TCIA10_106_1_t2.nii.gz 72 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_127_1/BraTS19_TCIA10_127_1_t2.nii.gz 73 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_172_1/BraTS19_TCIA10_172_1_t2.nii.gz 74 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_195_1/BraTS19_TCIA10_195_1_t2.nii.gz 75 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_220_1/BraTS19_TCIA10_220_1_t2.nii.gz 76 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_232_1/BraTS19_TCIA10_232_1_t2.nii.gz 77 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_236_1/BraTS19_TCIA10_236_1_t2.nii.gz 78 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_239_1/BraTS19_TCIA10_239_1_t2.nii.gz 79 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_271_1/BraTS19_TCIA10_271_1_t2.nii.gz 80 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_311_1/BraTS19_TCIA10_311_1_t2.nii.gz 81 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_467_1/BraTS19_TCIA10_467_1_t2.nii.gz 82 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_609_1/BraTS19_TCIA10_609_1_t2.nii.gz 83 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_614_1/BraTS19_TCIA10_614_1_t2.nii.gz 84 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_627_1/BraTS19_TCIA10_627_1_t2.nii.gz 85 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_631_1/BraTS19_TCIA10_631_1_t2.nii.gz 86 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_635_1/BraTS19_TCIA10_635_1_t2.nii.gz 87 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA10_647_1/BraTS19_TCIA10_647_1_t2.nii.gz 88 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA11_612_1/BraTS19_TCIA11_612_1_t2.nii.gz 89 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_146_1/BraTS19_TCIA12_146_1_t2.nii.gz 90 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_339_1/BraTS19_TCIA12_339_1_t2.nii.gz 91 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_613_1/BraTS19_TCIA12_613_1_t2.nii.gz 92 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA12_641_1/BraTS19_TCIA12_641_1_t2.nii.gz 93 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_610_1/BraTS19_TCIA13_610_1_t2.nii.gz 94 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_611_1/BraTS19_TCIA13_611_1_t2.nii.gz 95 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_616_1/BraTS19_TCIA13_616_1_t2.nii.gz 96 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_617_1/BraTS19_TCIA13_617_1_t2.nii.gz 97 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_619_1/BraTS19_TCIA13_619_1_t2.nii.gz 98 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_622_1/BraTS19_TCIA13_622_1_t2.nii.gz 99 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_626_1/BraTS19_TCIA13_626_1_t2.nii.gz 100 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_636_1/BraTS19_TCIA13_636_1_t2.nii.gz 101 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_638_1/BraTS19_TCIA13_638_1_t2.nii.gz 102 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_643_1/BraTS19_TCIA13_643_1_t2.nii.gz 103 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_646_1/BraTS19_TCIA13_646_1_t2.nii.gz 104 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_648_1/BraTS19_TCIA13_648_1_t2.nii.gz 105 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_649_1/BraTS19_TCIA13_649_1_t2.nii.gz 106 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_651_1/BraTS19_TCIA13_651_1_t2.nii.gz 107 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_652_1/BraTS19_TCIA13_652_1_t2.nii.gz 108 | MICCAI_BraTS_2019_Data_Validation/BraTS19_TCIA13_655_1/BraTS19_TCIA13_655_1_t2.nii.gz 109 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3446_1/BraTS19_UAB_3446_1_t2.nii.gz 110 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3448_1/BraTS19_UAB_3448_1_t2.nii.gz 111 | 
MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3449_1/BraTS19_UAB_3449_1_t2.nii.gz 112 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3454_1/BraTS19_UAB_3454_1_t2.nii.gz 113 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3455_1/BraTS19_UAB_3455_1_t2.nii.gz 114 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3456_1/BraTS19_UAB_3456_1_t2.nii.gz 115 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3490_1/BraTS19_UAB_3490_1_t2.nii.gz 116 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3498_1/BraTS19_UAB_3498_1_t2.nii.gz 117 | MICCAI_BraTS_2019_Data_Validation/BraTS19_UAB_3499_1/BraTS19_UAB_3499_1_t2.nii.gz 118 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S036_1/BraTS19_WashU_S036_1_t2.nii.gz 119 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S037_1/BraTS19_WashU_S037_1_t2.nii.gz 120 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S040_1/BraTS19_WashU_S040_1_t2.nii.gz 121 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_S041_1/BraTS19_WashU_S041_1_t2.nii.gz 122 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W033_1/BraTS19_WashU_W033_1_t2.nii.gz 123 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W038_1/BraTS19_WashU_W038_1_t2.nii.gz 124 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W047_1/BraTS19_WashU_W047_1_t2.nii.gz 125 | MICCAI_BraTS_2019_Data_Validation/BraTS19_WashU_W053_1/BraTS19_WashU_W053_1_t2.nii.gz 126 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/training/flair_valid-0.1.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_2013_7_1/BraTS19_2013_7_1_flair.nii.gz 2 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ABB_1/BraTS19_CBICA_ABB_1_flair.nii.gz 3 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ASH_1/BraTS19_CBICA_ASH_1_flair.nii.gz 4 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATB_1/BraTS19_CBICA_ATB_1_flair.nii.gz 5 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATD_1/BraTS19_CBICA_ATD_1_flair.nii.gz 6 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXJ_1/BraTS19_CBICA_AXJ_1_flair.nii.gz 7 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXL_1/BraTS19_CBICA_AXL_1_flair.nii.gz 8 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXQ_1/BraTS19_CBICA_AXQ_1_flair.nii.gz 9 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AZH_1/BraTS19_CBICA_AZH_1_flair.nii.gz 10 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGE_1/BraTS19_CBICA_BGE_1_flair.nii.gz 11 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGT_1/BraTS19_CBICA_BGT_1_flair.nii.gz 12 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BNR_1/BraTS19_CBICA_BNR_1_flair.nii.gz 13 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA01_221_1/BraTS19_TCIA01_221_1_flair.nii.gz 14 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_473_1/BraTS19_TCIA02_473_1_flair.nii.gz 15 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_605_1/BraTS19_TCIA02_605_1_flair.nii.gz 16 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_606_1/BraTS19_TCIA02_606_1_flair.nii.gz 17 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_133_1/BraTS19_TCIA03_133_1_flair.nii.gz 18 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_138_1/BraTS19_TCIA03_138_1_flair.nii.gz 19 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_257_1/BraTS19_TCIA03_257_1_flair.nii.gz 20 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA04_192_1/BraTS19_TCIA04_192_1_flair.nii.gz 21 | 
MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA05_478_1/BraTS19_TCIA05_478_1_flair.nii.gz 22 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_184_1/BraTS19_TCIA06_184_1_flair.nii.gz 23 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_211_1/BraTS19_TCIA06_211_1_flair.nii.gz 24 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_105_1/BraTS19_TCIA08_105_1_flair.nii.gz 25 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_436_1/BraTS19_TCIA08_436_1_flair.nii.gz 26 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TMC_21360_1/BraTS19_TMC_21360_1_flair.nii.gz 27 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_0_1/BraTS19_2013_0_1_flair.nii.gz 28 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_9_1/BraTS19_2013_9_1_flair.nii.gz 29 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_24_1/BraTS19_2013_24_1_flair.nii.gz 30 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_152_1/BraTS19_TCIA10_152_1_flair.nii.gz 31 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_266_1/BraTS19_TCIA10_266_1_flair.nii.gz 32 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_413_1/BraTS19_TCIA10_413_1_flair.nii.gz 33 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA13_633_1/BraTS19_TCIA13_633_1_flair.nii.gz 34 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TMC_09043_1/BraTS19_TMC_09043_1_flair.nii.gz 35 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/training/seg_valid-0.1.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_2013_7_1/BraTS19_2013_7_1_seg.nii.gz 2 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ABB_1/BraTS19_CBICA_ABB_1_seg.nii.gz 3 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ASH_1/BraTS19_CBICA_ASH_1_seg.nii.gz 4 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATB_1/BraTS19_CBICA_ATB_1_seg.nii.gz 5 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATD_1/BraTS19_CBICA_ATD_1_seg.nii.gz 6 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXJ_1/BraTS19_CBICA_AXJ_1_seg.nii.gz 7 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXL_1/BraTS19_CBICA_AXL_1_seg.nii.gz 8 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXQ_1/BraTS19_CBICA_AXQ_1_seg.nii.gz 9 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AZH_1/BraTS19_CBICA_AZH_1_seg.nii.gz 10 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGE_1/BraTS19_CBICA_BGE_1_seg.nii.gz 11 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGT_1/BraTS19_CBICA_BGT_1_seg.nii.gz 12 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BNR_1/BraTS19_CBICA_BNR_1_seg.nii.gz 13 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA01_221_1/BraTS19_TCIA01_221_1_seg.nii.gz 14 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_473_1/BraTS19_TCIA02_473_1_seg.nii.gz 15 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_605_1/BraTS19_TCIA02_605_1_seg.nii.gz 16 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_606_1/BraTS19_TCIA02_606_1_seg.nii.gz 17 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_133_1/BraTS19_TCIA03_133_1_seg.nii.gz 18 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_138_1/BraTS19_TCIA03_138_1_seg.nii.gz 19 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_257_1/BraTS19_TCIA03_257_1_seg.nii.gz 20 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA04_192_1/BraTS19_TCIA04_192_1_seg.nii.gz 21 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA05_478_1/BraTS19_TCIA05_478_1_seg.nii.gz 22 | 
MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_184_1/BraTS19_TCIA06_184_1_seg.nii.gz 23 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_211_1/BraTS19_TCIA06_211_1_seg.nii.gz 24 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_105_1/BraTS19_TCIA08_105_1_seg.nii.gz 25 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_436_1/BraTS19_TCIA08_436_1_seg.nii.gz 26 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TMC_21360_1/BraTS19_TMC_21360_1_seg.nii.gz 27 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_0_1/BraTS19_2013_0_1_seg.nii.gz 28 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_9_1/BraTS19_2013_9_1_seg.nii.gz 29 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_24_1/BraTS19_2013_24_1_seg.nii.gz 30 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_152_1/BraTS19_TCIA10_152_1_seg.nii.gz 31 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_266_1/BraTS19_TCIA10_266_1_seg.nii.gz 32 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_413_1/BraTS19_TCIA10_413_1_seg.nii.gz 33 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA13_633_1/BraTS19_TCIA13_633_1_seg.nii.gz 34 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TMC_09043_1/BraTS19_TMC_09043_1_seg.nii.gz 35 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/training/t1_valid-0.1.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_2013_7_1/BraTS19_2013_7_1_t1.nii.gz 2 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ABB_1/BraTS19_CBICA_ABB_1_t1.nii.gz 3 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ASH_1/BraTS19_CBICA_ASH_1_t1.nii.gz 4 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATB_1/BraTS19_CBICA_ATB_1_t1.nii.gz 5 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATD_1/BraTS19_CBICA_ATD_1_t1.nii.gz 6 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXJ_1/BraTS19_CBICA_AXJ_1_t1.nii.gz 7 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXL_1/BraTS19_CBICA_AXL_1_t1.nii.gz 8 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXQ_1/BraTS19_CBICA_AXQ_1_t1.nii.gz 9 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AZH_1/BraTS19_CBICA_AZH_1_t1.nii.gz 10 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGE_1/BraTS19_CBICA_BGE_1_t1.nii.gz 11 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGT_1/BraTS19_CBICA_BGT_1_t1.nii.gz 12 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BNR_1/BraTS19_CBICA_BNR_1_t1.nii.gz 13 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA01_221_1/BraTS19_TCIA01_221_1_t1.nii.gz 14 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_473_1/BraTS19_TCIA02_473_1_t1.nii.gz 15 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_605_1/BraTS19_TCIA02_605_1_t1.nii.gz 16 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_606_1/BraTS19_TCIA02_606_1_t1.nii.gz 17 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_133_1/BraTS19_TCIA03_133_1_t1.nii.gz 18 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_138_1/BraTS19_TCIA03_138_1_t1.nii.gz 19 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_257_1/BraTS19_TCIA03_257_1_t1.nii.gz 20 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA04_192_1/BraTS19_TCIA04_192_1_t1.nii.gz 21 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA05_478_1/BraTS19_TCIA05_478_1_t1.nii.gz 22 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_184_1/BraTS19_TCIA06_184_1_t1.nii.gz 23 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_211_1/BraTS19_TCIA06_211_1_t1.nii.gz 24 | 
MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_105_1/BraTS19_TCIA08_105_1_t1.nii.gz 25 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_436_1/BraTS19_TCIA08_436_1_t1.nii.gz 26 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TMC_21360_1/BraTS19_TMC_21360_1_t1.nii.gz 27 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_0_1/BraTS19_2013_0_1_t1.nii.gz 28 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_9_1/BraTS19_2013_9_1_t1.nii.gz 29 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_24_1/BraTS19_2013_24_1_t1.nii.gz 30 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_152_1/BraTS19_TCIA10_152_1_t1.nii.gz 31 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_266_1/BraTS19_TCIA10_266_1_t1.nii.gz 32 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_413_1/BraTS19_TCIA10_413_1_t1.nii.gz 33 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA13_633_1/BraTS19_TCIA13_633_1_t1.nii.gz 34 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TMC_09043_1/BraTS19_TMC_09043_1_t1.nii.gz 35 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/training/t1ce_valid-0.1.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_2013_7_1/BraTS19_2013_7_1_t1ce.nii.gz 2 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ABB_1/BraTS19_CBICA_ABB_1_t1ce.nii.gz 3 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ASH_1/BraTS19_CBICA_ASH_1_t1ce.nii.gz 4 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATB_1/BraTS19_CBICA_ATB_1_t1ce.nii.gz 5 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATD_1/BraTS19_CBICA_ATD_1_t1ce.nii.gz 6 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXJ_1/BraTS19_CBICA_AXJ_1_t1ce.nii.gz 7 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXL_1/BraTS19_CBICA_AXL_1_t1ce.nii.gz 8 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXQ_1/BraTS19_CBICA_AXQ_1_t1ce.nii.gz 9 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AZH_1/BraTS19_CBICA_AZH_1_t1ce.nii.gz 10 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGE_1/BraTS19_CBICA_BGE_1_t1ce.nii.gz 11 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGT_1/BraTS19_CBICA_BGT_1_t1ce.nii.gz 12 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BNR_1/BraTS19_CBICA_BNR_1_t1ce.nii.gz 13 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA01_221_1/BraTS19_TCIA01_221_1_t1ce.nii.gz 14 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_473_1/BraTS19_TCIA02_473_1_t1ce.nii.gz 15 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_605_1/BraTS19_TCIA02_605_1_t1ce.nii.gz 16 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_606_1/BraTS19_TCIA02_606_1_t1ce.nii.gz 17 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_133_1/BraTS19_TCIA03_133_1_t1ce.nii.gz 18 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_138_1/BraTS19_TCIA03_138_1_t1ce.nii.gz 19 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_257_1/BraTS19_TCIA03_257_1_t1ce.nii.gz 20 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA04_192_1/BraTS19_TCIA04_192_1_t1ce.nii.gz 21 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA05_478_1/BraTS19_TCIA05_478_1_t1ce.nii.gz 22 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_184_1/BraTS19_TCIA06_184_1_t1ce.nii.gz 23 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_211_1/BraTS19_TCIA06_211_1_t1ce.nii.gz 24 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_105_1/BraTS19_TCIA08_105_1_t1ce.nii.gz 25 | 
MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_436_1/BraTS19_TCIA08_436_1_t1ce.nii.gz 26 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TMC_21360_1/BraTS19_TMC_21360_1_t1ce.nii.gz 27 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_0_1/BraTS19_2013_0_1_t1ce.nii.gz 28 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_9_1/BraTS19_2013_9_1_t1ce.nii.gz 29 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_24_1/BraTS19_2013_24_1_t1ce.nii.gz 30 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_152_1/BraTS19_TCIA10_152_1_t1ce.nii.gz 31 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_266_1/BraTS19_TCIA10_266_1_t1ce.nii.gz 32 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_413_1/BraTS19_TCIA10_413_1_t1ce.nii.gz 33 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA13_633_1/BraTS19_TCIA13_633_1_t1ce.nii.gz 34 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TMC_09043_1/BraTS19_TMC_09043_1_t1ce.nii.gz 35 | -------------------------------------------------------------------------------- /experiments/data_split/split_examples/training/t2_valid-0.1.txt: -------------------------------------------------------------------------------- 1 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_2013_7_1/BraTS19_2013_7_1_t2.nii.gz 2 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ABB_1/BraTS19_CBICA_ABB_1_t2.nii.gz 3 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ASH_1/BraTS19_CBICA_ASH_1_t2.nii.gz 4 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATB_1/BraTS19_CBICA_ATB_1_t2.nii.gz 5 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_ATD_1/BraTS19_CBICA_ATD_1_t2.nii.gz 6 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXJ_1/BraTS19_CBICA_AXJ_1_t2.nii.gz 7 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXL_1/BraTS19_CBICA_AXL_1_t2.nii.gz 8 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AXQ_1/BraTS19_CBICA_AXQ_1_t2.nii.gz 9 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_AZH_1/BraTS19_CBICA_AZH_1_t2.nii.gz 10 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGE_1/BraTS19_CBICA_BGE_1_t2.nii.gz 11 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BGT_1/BraTS19_CBICA_BGT_1_t2.nii.gz 12 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_CBICA_BNR_1/BraTS19_CBICA_BNR_1_t2.nii.gz 13 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA01_221_1/BraTS19_TCIA01_221_1_t2.nii.gz 14 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_473_1/BraTS19_TCIA02_473_1_t2.nii.gz 15 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_605_1/BraTS19_TCIA02_605_1_t2.nii.gz 16 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA02_606_1/BraTS19_TCIA02_606_1_t2.nii.gz 17 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_133_1/BraTS19_TCIA03_133_1_t2.nii.gz 18 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_138_1/BraTS19_TCIA03_138_1_t2.nii.gz 19 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA03_257_1/BraTS19_TCIA03_257_1_t2.nii.gz 20 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA04_192_1/BraTS19_TCIA04_192_1_t2.nii.gz 21 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA05_478_1/BraTS19_TCIA05_478_1_t2.nii.gz 22 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_184_1/BraTS19_TCIA06_184_1_t2.nii.gz 23 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA06_211_1/BraTS19_TCIA06_211_1_t2.nii.gz 24 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_105_1/BraTS19_TCIA08_105_1_t2.nii.gz 25 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TCIA08_436_1/BraTS19_TCIA08_436_1_t2.nii.gz 26 | MICCAI_BraTS_2019_Data_Training/HGG/BraTS19_TMC_21360_1/BraTS19_TMC_21360_1_t2.nii.gz 27 | 
MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_0_1/BraTS19_2013_0_1_t2.nii.gz 28 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_9_1/BraTS19_2013_9_1_t2.nii.gz 29 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_2013_24_1/BraTS19_2013_24_1_t2.nii.gz 30 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_152_1/BraTS19_TCIA10_152_1_t2.nii.gz 31 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_266_1/BraTS19_TCIA10_266_1_t2.nii.gz 32 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA10_413_1/BraTS19_TCIA10_413_1_t2.nii.gz 33 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TCIA13_633_1/BraTS19_TCIA13_633_1_t2.nii.gz 34 | MICCAI_BraTS_2019_Data_Training/LGG/BraTS19_TMC_09043_1/BraTS19_TMC_09043_1_t2.nii.gz 35 | -------------------------------------------------------------------------------- /experiments/inference.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """This module is for performing inference of a trained model. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | import copy 12 | import sys 13 | import os 14 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # noqa: E402 15 | import numpy as np 16 | import time 17 | from functools import partial 18 | 19 | import tensorflow as tf 20 | from keras.models import load_model 21 | 22 | sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..')) # noqa: E402 23 | from data_io.input_data import InputData 24 | import nets 25 | from nets.custom_objects import custom_objects 26 | from utils import remap_labels, normalize_modalities, save_output 27 | from utils import get_config, save_config, get_data_lists, read_img 28 | 29 | __author__ = 'Ken C. L. Wong' 30 | 31 | 32 | def run(config_args): 33 | """Runs an experiment. 34 | Using the hyperparameters provided in `config_args`, this function reads a pre-trained model 35 | and performs inference on testing data. 36 | 37 | Args: 38 | config_args: A dict of configurations. 39 | """ 40 | target_dir = os.path.expanduser(config_args['main']['target_dir']) 41 | 42 | os.environ['CUDA_VISIBLE_DEVICES'] = config_args['main']['visible_devices'] 43 | for gpu in tf.config.list_physical_devices('GPU'): 44 | tf.config.experimental.set_memory_growth(gpu, True) 45 | 46 | # 47 | # Create InputData 48 | 49 | input_lists = copy.deepcopy(config_args['input_lists']) 50 | data_dir = os.path.expanduser(input_lists.get('data_dir')) 51 | data_lists_test = get_data_lists(input_lists.get('data_lists_test_paths'), data_dir) 52 | 53 | input_args = copy.deepcopy(config_args['input_args']) 54 | if input_args.pop('use_data_normalization', True): 55 | x_processing = partial(normalize_modalities, mask_val=0) # Assume background value of 0 56 | else: 57 | x_processing = None 58 | 59 | input_data = InputData(reader=read_img, 60 | data_lists_test=data_lists_test, 61 | x_processing=x_processing, 62 | **input_args) 63 | 64 | # 65 | # Load trained model 66 | 67 | model_path = os.path.join(target_dir, 'model/model.keras') 68 | model = load_model(model_path, custom_objects=custom_objects) 69 | 70 | # If testing image size is different from the model's, 71 | # we create a new model with the same model parameters. 
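# Added note (an assumption based on the operators in `nets`, not a statement from the original authors):
# rebuilding with the same weights is possible because convolutional and Fourier/Hartley layers have
# weight shapes that do not depend on the spatial input size.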
72 | num_input_channels = len(input_args['idx_x_modalities']) 73 | test_image_size = input_data.get_test_image_size() 74 | if test_image_size != model.input_shape[1:-1]: 75 | model_args = copy.deepcopy(config_args['model']) 76 | model_args['image_size'] = test_image_size 77 | model_new = create_model(model_args, num_input_channels) 78 | model_new.set_weights(model.get_weights()) 79 | model = model_new 80 | print(f'\nModel is rebuilt for image size {test_image_size}.\n') 81 | 82 | # 83 | # Inference 84 | 85 | output_dir = os.path.join(target_dir, config_args['test']['output_folder']) 86 | os.makedirs(output_dir, exist_ok=True) 87 | save_config(config_args, output_dir) 88 | 89 | label_mapping = config_args['test'].get('label_mapping') 90 | output_origin = config_args['test'].get('output_origin') 91 | inference(model, input_data, output_dir, label_mapping, output_origin) 92 | 93 | 94 | def inference( 95 | model, 96 | input_data: InputData, 97 | output_dir, 98 | label_mapping=None, 99 | output_origin=None, 100 | ): 101 | """This function performs prediction on testing data. 102 | 103 | Args: 104 | model: A trained model. 105 | input_data: InputData. 106 | output_dir: Output directory. 107 | label_mapping: A dict for label mapping if given (default: None). 108 | output_origin: Output origin (default: None). 109 | """ 110 | test_num_batches = input_data.get_test_num_batches() 111 | 112 | @tf.function 113 | def test_step(inputs): 114 | return model(inputs, training=False) 115 | 116 | print('test_num_batches:', test_num_batches) 117 | print() 118 | print('Testing started') 119 | 120 | start_time = time.time() 121 | 122 | assert input_data.batch_size == 1, 'A batch size of 1 is required to save the outputs one-by-one.' 123 | 124 | predict_times = [] 125 | data_lists_test = input_data.data_lists_test 126 | for i, x in enumerate(input_data.get_test_flow().get_numpy_iterator()): 127 | x = x[0] # x is always a tuple because of PyDatasetAdapter 128 | 129 | s_time = time.time() 130 | y_pred = test_step(x).numpy() 131 | e_time = time.time() 132 | 133 | if i != 0: 134 | predict_times.append(e_time - s_time) 135 | 136 | y_pred = y_pred.argmax(-1).astype(np.int16)[0] 137 | if label_mapping is not None: 138 | y_pred = remap_labels(y_pred, label_mapping) 139 | 140 | save_output(y_pred, data_lists_test, i, output_dir, output_origin) 141 | 142 | end_time = time.time() 143 | 144 | print() 145 | print(output_dir) 146 | print(f'Time used: {end_time - start_time:.2f} seconds.') 147 | print(f'Average prediction time: {np.mean(predict_times)}') 148 | with open(os.path.join(output_dir, 'time_used.txt'), 'w') as f: 149 | print(f'Time used: {end_time - start_time:.2f} seconds.', file=f) 150 | print(f'Average prediction time: {np.mean(predict_times)}', file=f) 151 | 152 | 153 | def create_model(model_args, num_input_channels): 154 | """Creates a model from hyperparameters. 155 | 156 | Args: 157 | model_args: Model specific hyperparameters (dict). 158 | num_input_channels: The number of input channels obtained from InputData. 159 | 160 | Returns: 161 | A compiled Keras model. 
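Example (illustrative sketch only; 'NeuralOperatorSeg' is one of the
    builders exported by `nets`, but the builder-specific hyperparameters
    come from the [model] section of a config file and are omitted here):

        model_args = {
            'builder_name': 'NeuralOperatorSeg',
            'image_size': (160, 192, 128),  # hypothetical testing image size
            # ... builder-specific hyperparameters from the config file ...
        }
        model = create_model(model_args, num_input_channels=4)  # four MRI modalities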
162 | """ 163 | model_args = copy.deepcopy(model_args) 164 | model_args['num_input_channels'] = num_input_channels 165 | builder_name = model_args.pop('builder_name') 166 | model_builder = getattr(nets, builder_name) 167 | model_args['optimizer'] = 'Adamax' # Optimizer is no use in testing 168 | model = model_builder(**model_args)() 169 | return model 170 | 171 | 172 | if __name__ == '__main__': 173 | run(get_config(sys.argv[1])) 174 | -------------------------------------------------------------------------------- /experiments/run.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """This module contains the procedures of training and testing a model. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | import copy 12 | import sys 13 | import os 14 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # noqa: E402 15 | import numpy as np 16 | from functools import partial 17 | 18 | import tensorflow as tf 19 | # tf.compat.v1.disable_eager_execution() # noqa: E402 Run faster without memory leak 20 | from keras import optimizers 21 | from keras.models import load_model 22 | from keras.optimizers import schedules 23 | 24 | sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..')) # noqa: E402 25 | from data_io.input_data import InputData 26 | import nets 27 | from nets.custom_objects import custom_objects 28 | from experiments.train_test import training, testing, statistics, statistics_regional 29 | from utils import get_config, save_config, get_data_lists, normalize_modalities, read_img 30 | 31 | __author__ = 'Ken C. L. Wong' 32 | 33 | 34 | def create_model(model_args, num_input_channels, optimizer_name, optimizer_args, scheduler_args=None, 35 | steps_per_epoch=None): 36 | """Creates a model from hyperparameters. 37 | 38 | Args: 39 | model_args: Model specific hyperparameters (dict). 40 | num_input_channels: The number of input channels obtained from InputData. 41 | optimizer_name: The name of the optimizer (str). 42 | optimizer_args: Optimizer arguments (dict). 43 | scheduler_args: Learning rate scheduler arguments (dict). 44 | `scheduler_args['scheduler_name']` contains the name of the scheduler, e.g., 'CosineDecayRestarts'. 45 | If None (default), no scheduler is used and 'learning_rate' should be specified in `optimizer_args`. 46 | steps_per_epoch: Steps per epoch (default: None), should be the same as the number of training batches 47 | per epoch. Used to compute the `decay_steps` of the scheduler. 48 | 49 | Returns: 50 | A compiled Keras model. 51 | """ 52 | model_args = copy.deepcopy(model_args) 53 | model_args['num_input_channels'] = num_input_channels 54 | builder_name = model_args.pop('builder_name') 55 | model_builder = getattr(nets, builder_name) 56 | model_args['optimizer'] = get_optimizer(optimizer_name, optimizer_args, scheduler_args, steps_per_epoch) 57 | model = model_builder(**model_args)() 58 | return model 59 | 60 | 61 | def get_optimizer(optimizer_name, optimizer_args, scheduler_args=None, steps_per_epoch=None): 62 | """Gets the optimizer. 63 | 64 | Args: 65 | optimizer_name: The name of the optimizer (str). 66 | optimizer_args: Optimizer arguments (dict). 67 | scheduler_args: Learning rate scheduler arguments (dict). 68 | `scheduler_args['scheduler_name']` contains the name of the scheduler, e.g., 'CosineDecayRestarts'. 
69 | If None (default), no scheduler is used and 'learning_rate' should be specified in `optimizer_args`. 70 | steps_per_epoch: Steps per epoch (default: None), should be the same as the number of training batches 71 | per epoch. Used to compute the `decay_steps` of the scheduler. 72 | 73 | Returns: 74 | An optimizer. 75 | """ 76 | if scheduler_args is not None: 77 | scheduler = get_scheduler(scheduler_args, steps_per_epoch) 78 | optimizer_args = copy.deepcopy(optimizer_args) 79 | optimizer_args['learning_rate'] = scheduler 80 | return getattr(optimizers, optimizer_name)(**optimizer_args) 81 | 82 | 83 | def get_scheduler(scheduler_args, steps_per_epoch): 84 | """Gets the learning rate scheduler. 85 | 86 | Args: 87 | scheduler_args: Learning rate scheduler arguments (dict). 88 | `scheduler_args['scheduler_name']` contains the name of the scheduler, e.g., 'CosineDecayRestarts'. 89 | steps_per_epoch: Steps per epoch, should be the same as the number of training batches 90 | per epoch. Used with `scheduler_args['decay_epochs']` to compute the `decay_steps` of the scheduler. 91 | 92 | Returns: 93 | A learning rate scheduler. 94 | """ 95 | scheduler_args = copy.deepcopy(scheduler_args) 96 | scheduler = scheduler_args.pop('scheduler_name') 97 | decay_epochs = scheduler_args.pop('decay_epochs', None) 98 | if decay_epochs is not None: 99 | assert steps_per_epoch is not None 100 | decay_steps = decay_epochs * steps_per_epoch 101 | if scheduler == 'CosineDecayRestarts': 102 | scheduler_args['first_decay_steps'] = decay_steps 103 | else: 104 | scheduler_args['decay_steps'] = decay_steps 105 | return getattr(schedules, scheduler)(**scheduler_args) 106 | 107 | 108 | def run(config_args): 109 | """Runs an experiment. 110 | Using the hyperparameters provided in `config_args`, this function trains a model or reads 111 | a pre-trained model. Testing is performed on the trained model and the results statistics 112 | are computed. 113 | 114 | Args: 115 | config_args: A dict of configurations. 
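Typical invocation (the INI files under experiments/config_files are the
    intended inputs, read via `get_config(sys.argv[1])` at the bottom of this module):

        python experiments/run.py experiments/config_files/config_fnoseg.ini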
116 | """ 117 | output_dir = os.path.expanduser(config_args['main']['output_dir']) 118 | 119 | os.environ['CUDA_VISIBLE_DEVICES'] = config_args['main']['visible_devices'] 120 | for gpu in tf.config.list_physical_devices('GPU'): 121 | tf.config.experimental.set_memory_growth(gpu, True) 122 | 123 | # 124 | # Create InputData as a sample generator 125 | 126 | input_lists = copy.deepcopy(config_args['input_lists']) 127 | data_dir = os.path.expanduser(input_lists.get('data_dir')) 128 | data_lists_train = get_data_lists(input_lists.get('data_lists_train_paths'), data_dir) 129 | data_lists_valid = get_data_lists(input_lists.get('data_lists_valid_paths'), data_dir) 130 | data_lists_test = get_data_lists(input_lists.get('data_lists_test_paths'), data_dir) 131 | 132 | input_args = copy.deepcopy(config_args['input_args']) 133 | if input_args.pop('use_data_normalization', True): 134 | x_processing = partial(normalize_modalities, mask_val=0) # Assume background value of 0 135 | else: 136 | x_processing = None 137 | 138 | input_data = None 139 | transform_kwargs = config_args.get('augmentation') 140 | if config_args['main']['is_train'] or config_args['main']['is_test']: 141 | input_data = InputData(reader=read_img, 142 | data_lists_train=data_lists_train, 143 | data_lists_valid=data_lists_valid, 144 | data_lists_test=data_lists_test, 145 | x_processing=x_processing, 146 | transform_kwargs=transform_kwargs, 147 | **input_args) 148 | 149 | # 150 | # Train or read model 151 | 152 | num_input_channels = len(input_args['idx_x_modalities']) 153 | model = None 154 | if config_args['main']['is_train']: 155 | # To avoid accidental overwriting of an existing model 156 | if os.path.exists(output_dir): 157 | raise RuntimeError(f'output_dir already exists! \n{output_dir}') 158 | 159 | os.makedirs(output_dir) 160 | save_config(config_args, output_dir) 161 | 162 | optimizer_args = copy.deepcopy(config_args['optimizer']) 163 | optimizer_name = optimizer_args.pop('optimizer_name') 164 | 165 | # Learning rate scheduler 166 | scheduler_args = steps_per_epoch = None 167 | if 'scheduler' in config_args: 168 | scheduler_args = copy.deepcopy(config_args['scheduler']) 169 | steps_per_epoch = input_data.get_train_num_batches() 170 | 171 | model_args = copy.deepcopy(config_args['model']) 172 | model_args['image_size'] = input_data.get_train_image_size() 173 | model = create_model(model_args, num_input_channels, optimizer_name, optimizer_args, scheduler_args, 174 | steps_per_epoch) 175 | 176 | train_args = copy.deepcopy(config_args['train']) 177 | train_args['model'] = model 178 | train_args['input_data'] = input_data 179 | train_args['output_dir'] = output_dir 180 | 181 | # Train model 182 | model = training(**train_args) 183 | 184 | elif config_args['main']['is_test']: 185 | model_path = os.path.join(output_dir, 'model/model.keras') 186 | model = load_model(model_path, custom_objects=custom_objects) 187 | 188 | # If testing image size is different from the model's 189 | test_image_size = input_data.get_test_image_size() 190 | if test_image_size != model.input_shape[1:-1]: 191 | model_args = copy.deepcopy(config_args['model']) 192 | model_args['image_size'] = test_image_size 193 | model_new = create_model(model_args, num_input_channels, 'Adamax', {}) # Optimizer is no use in testing 194 | model_new.set_weights(model.get_weights()) 195 | model = model_new 196 | print(f'\nModel is rebuilt for image size {test_image_size}.\n') 197 | 198 | if not config_args['main']['is_test'] and not config_args['main']['is_statistics']: 199 | 
return 200 | 201 | # 202 | # Testing 203 | 204 | test_args = copy.deepcopy(config_args['test']) 205 | test_dir = os.path.join(output_dir, test_args.pop('output_folder', 'test')) 206 | if 'is_print' not in test_args and 'train' in config_args: 207 | is_print = config_args['train'].get('is_print', True) 208 | else: 209 | is_print = test_args.get('is_print', True) 210 | 211 | y_true = None 212 | y_pred = None 213 | if config_args['main']['is_test']: 214 | test_args['model'] = model 215 | test_args['input_data'] = input_data 216 | test_args['output_dir'] = test_dir 217 | test_args['is_print'] = is_print 218 | y_true, y_pred = testing(**test_args) 219 | 220 | if config_args['main']['is_statistics']: 221 | idx_y_modalities = input_args.get('idx_y_modalities') 222 | if idx_y_modalities: 223 | if not config_args['main']['is_test']: # Load from existing test results 224 | results = np.load(os.path.join(str(test_dir), 'y_true_pred.npz')) 225 | y_true, y_pred = results['y_true'], results['y_pred'] 226 | idx_y = idx_y_modalities[0] 227 | statistics(y_true, y_pred, data_lists_test[idx_y], test_dir, is_print) 228 | statistics_regional(y_true, y_pred, data_lists_test[idx_y], test_dir, is_print) 229 | else: 230 | print('Statistics cannot be computed without valid idx_y_modalities (ground truths).') 231 | 232 | 233 | if __name__ == '__main__': 234 | run(get_config(sys.argv[1])) 235 | -------------------------------------------------------------------------------- /experiments/train_test.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """This module contains functions for model training and testing. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | import os 12 | import matplotlib 13 | if 'DISPLAY' not in os.environ: 14 | matplotlib.use('Agg') 15 | import numpy as np 16 | import matplotlib.pyplot as plt 17 | import time 18 | from os.path import join 19 | import pandas as pd 20 | 21 | from keras.utils import plot_model 22 | import tensorflow as tf 23 | 24 | from data_io.input_data import InputData 25 | from utils import remap_labels, to_categorical, save_output, save_model_summary 26 | 27 | 28 | __author__ = 'Ken C. L. Wong' 29 | 30 | 31 | def training( 32 | model, 33 | input_data: InputData, 34 | output_dir, 35 | label_mapping=None, 36 | num_epochs=100, 37 | selection_epoch_portion=0.8, 38 | is_save_model=True, 39 | is_plot_model=False, 40 | is_print=True, 41 | plot_epoch_portion=None, 42 | ): 43 | """Trains a model. 44 | 45 | Args: 46 | model: A model to be trained. 47 | input_data: InputData. 48 | output_dir: Output directory, should already be created by the calling function. 49 | label_mapping: A dict for label mapping if given (default: None). 50 | num_epochs: Number of epochs (default: 100). 51 | selection_epoch_portion: The models after this portion of num_epochs are 52 | candidates for the final model (default: 0.8). 53 | is_save_model: The trained model is saved if True (default). 54 | is_plot_model: Plots the model architecture if True (default: False). 55 | is_print: Print info or not (default: True). 56 | plot_epoch_portion: The losses after this portion of num_epochs are plotted if not None (default: None). 57 | 58 | Returns: 59 | The trained model. 
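Selection rule (a sketch of the logic implemented below): with
        num_epochs=100 and selection_epoch_portion=0.8,

            selection_epoch = int(100 * 0.8)  # 80

        an epoch's weights can be kept only if epoch > 80 (epochs 81..99 here;
        the final epoch is always eligible), and among those the lowest
        validation loss wins.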
60 | """ 61 | if os.path.exists(join(output_dir, 'stdout.txt')): 62 | raise RuntimeError('stdout.txt already exists!') # Avoid accidents 63 | 64 | num_epochs = int(num_epochs) 65 | train_num_batches = input_data.get_train_num_batches() 66 | valid_num_batches = input_data.get_valid_num_batches() 67 | 68 | if is_print: 69 | print('\ntrain_num_batches:', train_num_batches) 70 | print('valid_num_batches:', valid_num_batches) 71 | print() 72 | with open(join(output_dir, 'stdout.txt'), 'a') as f: 73 | print('train_num_batches:', train_num_batches, file=f) 74 | print('valid_num_batches:', valid_num_batches, file=f) 75 | print(file=f) 76 | 77 | # Save model summary 78 | save_model_summary(model, join(output_dir, 'model_summary.txt')) 79 | if is_plot_model: 80 | plot_model(model, show_shapes=True, show_layer_names=True, to_file=join(output_dir, 'model.pdf')) 81 | 82 | train_flow = input_data.get_train_flow() 83 | valid_flow = input_data.get_valid_flow() 84 | 85 | num_labels = model.output_shape[-1] 86 | 87 | @tf.function 88 | def train_step(inputs, y_true): 89 | with tf.GradientTape() as tape: 90 | y_pred = model(inputs, training=True) 91 | loss_val = model.loss(y_true, y_pred) 92 | grads = tape.gradient(loss_val, model.trainable_weights) 93 | model.optimizer.apply(grads, model.trainable_weights) 94 | return loss_val 95 | 96 | @tf.function 97 | def test_step(inputs, y_true): 98 | y_pred = model(inputs, training=False) 99 | return model.loss(y_true, y_pred) 100 | 101 | if is_print: 102 | print('Training started') 103 | 104 | start_time = time.time() 105 | 106 | # Epoch average loss 107 | train_loss = [] 108 | valid_loss = [] 109 | 110 | min_loss = float('inf') 111 | best_epoch = None 112 | best_weights = None 113 | for epoch in range(num_epochs): 114 | # 115 | # Training phase 116 | 117 | train_loss_epoch = [] 118 | for x, y in train_flow.get_numpy_iterator(): # This iterator only works for this loop 119 | if label_mapping is not None: 120 | y = remap_labels(y, label_mapping) 121 | y = to_categorical(y, num_labels) 122 | loss = train_step(x, y) 123 | train_loss_epoch.append(float(loss)) 124 | train_loss.append(np.mean(train_loss_epoch)) 125 | 126 | if is_print: 127 | print('\n-------------------------') 128 | print(f'Epoch: {epoch}') 129 | print(f'train_loss: {train_loss[-1]}') 130 | with open(join(output_dir, 'stdout.txt'), 'a') as f: 131 | print('\n-------------------------', file=f) 132 | print(f'Epoch: {epoch}', file=f) 133 | print(f'train_loss: {train_loss[-1]}', file=f) 134 | 135 | # 136 | # Validation phase 137 | 138 | valid_loss_epoch = [] 139 | for x, y in valid_flow.get_numpy_iterator(): # This iterator only works for this loop 140 | if label_mapping is not None: 141 | y = remap_labels(y, label_mapping) 142 | y = to_categorical(y, num_labels) 143 | loss = test_step(x, y) 144 | valid_loss_epoch.append(float(loss)) 145 | valid_loss.append(np.mean(valid_loss_epoch)) 146 | 147 | if is_print: 148 | print(f'valid_loss: {valid_loss[-1]}') 149 | with open(join(output_dir, 'stdout.txt'), 'a') as f: 150 | print(f'valid_loss: {valid_loss[-1]}', file=f) 151 | 152 | selection_epoch = int(num_epochs * selection_epoch_portion) 153 | if (epoch > selection_epoch or epoch == num_epochs - 1) and valid_loss[-1] < min_loss: 154 | min_loss = valid_loss[-1] 155 | best_epoch = epoch 156 | best_weights = model.get_weights() 157 | if is_save_model: 158 | save_model(model, join(output_dir, 'model', 'model.keras')) 159 | 160 | end_time = time.time() 161 | 162 | if best_weights is not None: 163 | 
model.set_weights(best_weights) 164 | else: # num_epochs == 0, i.e., no training 165 | if is_save_model: 166 | save_model(model, join(output_dir, 'model', 'model.keras')) 167 | 168 | # Plot losses 169 | start_epoch = int(num_epochs * plot_epoch_portion) if plot_epoch_portion is not None else 0 170 | losses = [train_loss, valid_loss] 171 | styles = ['r', 'b--'] 172 | labels = ['Train loss', 'Valid loss'] 173 | output_file = join(output_dir, 'plot_loss.pdf') 174 | plot_losses(num_epochs, start_epoch, losses, styles, labels, output_file) 175 | 176 | if is_print: 177 | print(f'\nTime used: {end_time - start_time:.2f} seconds.') 178 | print(f'Best epoch: {best_epoch}') 179 | print(f'Min loss: {min_loss}') 180 | with open(join(output_dir, 'stdout.txt'), 'a') as f: 181 | print(f'\nTime used: {end_time - start_time:.2f} seconds.', file=f) 182 | print(f'Best epoch: {best_epoch}', file=f) 183 | print(f'Min loss: {min_loss}', file=f) 184 | 185 | return model 186 | 187 | 188 | def plot_losses(num_epochs, start_epoch, losses, styles, labels, output_file): 189 | """Plots the evolutions of losses.""" 190 | fig, ax = plt.subplots() 191 | fig.set_size_inches(10, 5) 192 | 193 | x = np.arange(num_epochs)[start_epoch:] 194 | for i in range(len(losses)): 195 | ax.plot(x, losses[i][start_epoch:], styles[i], label=labels[i]) 196 | 197 | plt.xlabel('Epoch') 198 | plt.ylabel('Value') 199 | ax.xaxis.label.set_fontsize(20) 200 | ax.yaxis.label.set_fontsize(20) 201 | ax.tick_params(labelsize=20) 202 | plt.grid(which='both') 203 | 204 | # Now add the legend with some customizations. 205 | legend = ax.legend(loc='upper right', fancybox=True, framealpha=0.8, ncol=1) 206 | for label in legend.get_texts(): 207 | label.set_fontsize(20) 208 | for label in legend.get_lines(): 209 | label.set_linewidth(1.5) # the legend line width 210 | 211 | fig.savefig(output_file, bbox_inches='tight') 212 | plt.close(fig) 213 | 214 | 215 | def testing( 216 | model, 217 | input_data: InputData, 218 | output_dir, 219 | label_mapping=None, 220 | save_image=False, 221 | output_origin=None, 222 | is_print=True, 223 | ): 224 | """Performs prediction on testing data. 225 | 226 | Args: 227 | model: A trained model. 228 | input_data: InputData. 229 | output_dir: Output directory (full path). 230 | label_mapping: A dict for label mapping (default: None). 231 | save_image: True if saving images (default: False). 232 | output_origin: Output origin for nifty saving (default: None). 233 | is_print: Print info or not (default: True). 234 | 235 | Returns: 236 | All ground truths (y_true) and predictions (y_pred). 
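Output shapes (a sketch; 'test_out' is a hypothetical directory): for N test
        samples of spatial size (D, H, W),

            y_true, y_pred = testing(model, input_data, output_dir='test_out')

        returns y_pred as an (N, D, H, W) int16 array of label maps (argmax over
        the channel axis) and y_true as an (N, D, H, W) int16 array, or None when
        no ground-truth modality is supplied.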
237 | """ 238 | os.makedirs(output_dir, exist_ok=True) 239 | 240 | test_num_batches = input_data.get_test_num_batches() 241 | data_lists_test = input_data.data_lists_test 242 | 243 | if is_print: 244 | print('test_num_batches:', test_num_batches) 245 | print() 246 | 247 | test_flow = input_data.get_test_flow() 248 | 249 | @tf.function 250 | def test_step(inputs): 251 | return model(inputs, training=False) 252 | 253 | if is_print: 254 | print('Testing started') 255 | 256 | start_time = time.time() 257 | 258 | predict_times = [] 259 | y_true = [] 260 | y_pred = [] 261 | for xy in test_flow.get_numpy_iterator(): # xy is always a tuple because of PyDatasetAdapter 262 | if len(xy) == 2: 263 | x, y = xy 264 | y_true.append(np.asarray(y, dtype=np.int16)[..., 0]) # Last dimension of size 1 is ignored 265 | else: 266 | x = xy[0] 267 | 268 | s_time = time.time() 269 | yp = test_step(x).numpy() 270 | e_time = time.time() 271 | if y_pred: # Skip the first iteration which involves model initialization 272 | predict_times.append(e_time - s_time) 273 | y_pred.append(yp) 274 | 275 | end_time = time.time() 276 | 277 | y_true = np.concatenate(y_true) if y_true else None 278 | y_pred = np.concatenate(y_pred) 279 | 280 | # Change to int label and remap if needed 281 | y_pred = y_pred.argmax(-1).astype(np.int16) # Last dimension is gone 282 | if label_mapping is not None: 283 | y_pred = remap_labels(y_pred, label_mapping) 284 | 285 | if save_image: 286 | for i, y in enumerate(y_pred): 287 | save_output(y, data_lists_test, i, os.path.join(output_dir, 'images'), output_origin, '_pred') 288 | if y_true is not None: 289 | for i, y in enumerate(y_true): 290 | save_output(y, data_lists_test, i, os.path.join(output_dir, 'images'), output_origin, '_true') 291 | 292 | np.savez_compressed(join(output_dir, 'y_true_pred.npz'), y_true=y_true, y_pred=y_pred) 293 | 294 | if is_print: 295 | print(f'\nTime used: {end_time - start_time:.2f} seconds.') 296 | print(f'Average prediction time: {np.mean(predict_times)}') 297 | 298 | with open(os.path.join(output_dir, 'prediction_time.txt'), 'w') as f: 299 | print(f'Average prediction time: {np.mean(predict_times)}', file=f) 300 | 301 | return y_true, y_pred 302 | 303 | 304 | def statistics(y_true, y_pred, y_list_test, output_dir, is_print=True): 305 | """Computes and saves the statistics on given predictions and ground truths. 306 | Sample-wise results are saved to a csv file, while average results are saved to a txt file. 307 | 308 | Args: 309 | y_true: Ground truth labels. 310 | y_pred: Predicted labels. 311 | y_list_test: List of filenames correspond to the samples. 312 | output_dir: Output directory (full path). 313 | is_print: Print info if True (default). 
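Layout of the resulting results.csv (one Dice column per label found in
        y_true; for BraTS'19 the labels are 0, 1, 2, and 4; columns are shown
        aligned here for readability but the actual separator is a tab, and the
        values are placeholders, not real results):

            ID                  Label 0   Label 1   Label 2   Label 4
            <sample file name>  <dice>    <dice>    <dice>    <dice>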
314 | """ 315 | dice_all = dice_coef(y_true, y_pred) # (num_samples, num_labels) 316 | 317 | num_labels = dice_all.shape[-1] 318 | ids = pd.DataFrame([os.path.basename(fn) for fn in y_list_test]) 319 | df = [ids] + [pd.DataFrame(dice_all[:, i]) for i in range(num_labels)] 320 | header = ['ID'] + [f'Label {lab}' for lab in np.unique(y_true)] 321 | 322 | output_file = os.path.join(output_dir, 'results.csv') 323 | pd.concat(df, axis=1).to_csv(output_file, sep=str('\t'), header=header, index=False, float_format=str('%.6f')) 324 | 325 | dice_all = np.ma.array(dice_all, mask=np.isnan(dice_all)) 326 | dice_mean = list(dice_all.mean(0).filled(np.nan)) 327 | dice_std = list(dice_all.std(0).filled(np.nan)) 328 | 329 | if is_print: 330 | print() 331 | print('-------- Result statistics --------') 332 | print(f'dice_mean: {dice_mean}') 333 | print(f'dice_std: {dice_std}') 334 | 335 | with open(os.path.join(output_dir, 'average_results.txt'), 'w') as f: 336 | print('-------- Result statistics --------', file=f) 337 | print(f'dice_mean: {dice_mean}', file=f) 338 | print(f'dice_std: {dice_std}', file=f) 339 | print(file=f) 340 | 341 | 342 | def statistics_regional(y_true, y_pred, y_list_test, output_dir, is_print=True): 343 | """Computes and saves the statistics on given predictions and ground truths. 344 | Labels are grouped into BraTS regions of 'whole tumor', 'tumor core', and 'enhancing tumor'. 345 | Sample-wise results are saved to a csv file, while average results are saved to a txt file. 346 | 347 | Args: 348 | y_true: Ground truth labels. 349 | y_pred: Predicted labels. 350 | y_list_test: List of filenames correspond to the samples. 351 | output_dir: Output directory (full path). 352 | is_print: Print info if True (default). 353 | """ 354 | region_names = ['background', 'whole tumor', 'tumor core', 'enhancing tumor'] 355 | region_labels = [ 356 | [0], 357 | [1, 2, 4], 358 | [1, 4], 359 | [4], 360 | ] 361 | 362 | def get_labels_union(y, target_labels): 363 | output = None 364 | for lab in target_labels: 365 | if output is None: 366 | output = (y == lab) 367 | else: 368 | output = output | (y == lab) 369 | return np.asarray(output, dtype=int) 370 | 371 | dice_all = [] 372 | for labs in region_labels: 373 | yt = get_labels_union(y_true, labs) 374 | yp = get_labels_union(y_pred, labs) 375 | dice_all.append(dice_coef(yt, yp, labels=[1])) 376 | dice_all = np.concatenate(dice_all, axis=1) # (num_samples, num_labels) 377 | 378 | num_labels = dice_all.shape[-1] 379 | ids = pd.DataFrame([os.path.basename(fn) for fn in y_list_test]) 380 | df = [ids] + [pd.DataFrame(dice_all[:, i]) for i in range(num_labels)] 381 | header = ['ID'] + region_names 382 | 383 | output_file = os.path.join(output_dir, 'results_regional.csv') 384 | pd.concat(df, axis=1).to_csv(output_file, sep=str('\t'), header=header, index=False, float_format=str('%.6f')) 385 | 386 | dice_all = np.ma.array(dice_all, mask=np.isnan(dice_all)) 387 | dice_mean = list(dice_all.mean(0).filled(np.nan)) 388 | dice_std = list(dice_all.std(0).filled(np.nan)) 389 | 390 | if is_print: 391 | print() 392 | print('-------- Regional result statistics --------') 393 | print(f'region_names: {region_names}') 394 | print(f'dice_mean: {dice_mean}') 395 | print(f'dice_std: {dice_std}') 396 | 397 | with open(os.path.join(output_dir, 'average_results_regional.txt'), 'w') as f: 398 | print('-------- Regional result statistics --------', file=f) 399 | print(f'region_names: {region_names}', file=f) 400 | print(f'dice_mean: {dice_mean}', file=f) 401 | print(f'dice_std: 
{dice_std}', file=f) 402 | print(file=f) 403 | 404 | 405 | def dice_coef(y_true, y_pred, labels=None, is_average=False): 406 | """Computes the Dice coefficients of specified labels. 407 | 408 | Args: 409 | y_true: Ground truths. Can be bhw or bdhw with or without the last channel of size 1. 410 | y_pred: Predictions. Can be bhw or bdhw with or without last channel of size 1. 411 | labels: Labels for which the Dice coefficients are computed. 412 | is_average: If True, the averaged Dice coefficients are returned with shape (num_labels,). 413 | Otherwise, returns a 2D array of shape (b, num_labels) (default: False). 414 | 415 | Returns: 416 | The Dice coefficients of the labels. 417 | """ 418 | y_true = y_true.reshape(len(y_true), -1) # (b, num_pixels) 419 | y_pred = y_pred.reshape(len(y_pred), -1) # (b, num_pixels) 420 | assert y_true.shape == y_pred.shape 421 | 422 | if labels is None: 423 | labels = np.unique(y_true) 424 | 425 | # Compute Dice coefficients 426 | dice_all = [] 427 | for y_true_img, y_pred_img in zip(y_true, y_pred): # Loop through images 428 | dice = [] 429 | for label in labels: 430 | y_true_bin = (y_true_img == label) 431 | y_pred_bin = (y_pred_img == label) 432 | intersection = np.count_nonzero(y_true_bin & y_pred_bin) 433 | y_true_count = np.count_nonzero(y_true_bin) 434 | y_pred_count = np.count_nonzero(y_pred_bin) 435 | if y_true_count: 436 | dice.append(2 * intersection / (y_true_count + y_pred_count)) 437 | else: 438 | dice.append(np.nan) # label does not exist in y_true 439 | dice_all.append(dice) 440 | 441 | dice_all = np.ma.array(dice_all, mask=np.isnan(dice_all)) # (b, num_labels) 442 | 443 | if is_average: 444 | return dice_all.mean(0).filled(np.nan) 445 | else: 446 | return dice_all.filled(np.nan) 447 | 448 | 449 | def save_model(model, output_path): 450 | """Saves a Keras model. 451 | 452 | Args: 453 | model: The model to be saved. 454 | output_path: The full file path. 455 | """ 456 | dirname = os.path.dirname(output_path) 457 | os.makedirs(dirname, exist_ok=True) 458 | 459 | if os.path.exists(output_path): 460 | os.remove(output_path) # To avoid occasional crashing when overwriting 461 | 462 | model.save(str(output_path)) 463 | -------------------------------------------------------------------------------- /experiments/utils.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """Utils for training and testing. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | import os 12 | import sys 13 | import numpy as np 14 | import SimpleITK as sitk 15 | from configparser import ConfigParser, ExtendedInterpolation 16 | import ast 17 | from collections import OrderedDict 18 | from io import StringIO 19 | 20 | __author__ = 'Ken C. L. Wong' 21 | 22 | 23 | def normalize_modalities(data, mask_val=None): 24 | """Normalizes a multichannel input with each channel as a modality. 25 | Each modality is normalized separately. 26 | Note that the channel-last format is assumed. 27 | 28 | Args: 29 | data: A multichannel input with each channel as a modality (channel-last). 30 | mask_val: If not None, the intensities of mask_val are not used to compute mean and std (default: None). 31 | 32 | Returns: 33 | Normalized data. 
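Example (a sketch): for a channel-last BraTS volume x of shape (D, H, W, 4),
        each of the 4 modalities is z-scored using only the voxels that differ
        from mask_val, and masked voxels keep their original value:

            x_norm = normalize_modalities(x, mask_val=0)  # background stays 0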
34 | """ 35 | data = np.moveaxis(data, -1, 0) # (modality, ) 36 | data = [normalize_data(da, mask_val=mask_val) for da in data] 37 | data = np.stack(data, -1) 38 | return data 39 | 40 | 41 | def normalize_data(data, mask_val=None): 42 | """Normalizes data of a single modality. 43 | 44 | Args: 45 | data: The data of a single modality. 46 | mask_val: If not None, the intensities of mask_val are not used to compute mean and std (default: None). 47 | 48 | Returns: 49 | Normalized data. 50 | """ 51 | data = np.asarray(data, dtype=np.float32) 52 | if mask_val is not None: 53 | data = np.ma.array(data, mask=(data == mask_val)) 54 | 55 | mean = data.mean() 56 | std = data.std() 57 | 58 | data = (data - mean) / std 59 | 60 | if mask_val is not None: 61 | data = data.filled(mask_val) 62 | 63 | return data 64 | 65 | 66 | def to_categorical(y, num_classes=None): 67 | """Converts an int label tensor to one-hot. 68 | 69 | Args: 70 | y: Input label tensor with shape (B, D, H, W, 1). 71 | num_classes: Number of classes. 72 | 73 | Returns: 74 | A one-hot tensor of y with shape (B, D, H, W, num_classes). 75 | """ 76 | assert y.shape[-1] == 1, 'Can only handle single label per pixel.' 77 | y = np.asarray(y, dtype=int)[..., 0] 78 | input_shape = y.shape 79 | y = y.ravel() 80 | if not num_classes: 81 | num_classes = np.max(y) + 1 82 | n = y.shape[0] 83 | categorical = np.zeros((n, num_classes), dtype=np.float32) 84 | categorical[np.arange(n), y] = 1 85 | output_shape = input_shape + (num_classes,) 86 | categorical = np.reshape(categorical, output_shape) 87 | return categorical 88 | 89 | 90 | def remap_labels(label, mapping): 91 | """Remaps labels. 92 | 93 | Args: 94 | label: The labels need to be remapped. 95 | mapping: A dict of mapping. Keys: old labels; values: new labels. 96 | 97 | Returns: 98 | A copy of remapped labels. 99 | """ 100 | label = np.asarray(label) 101 | label_cp = label.copy() 102 | for k, v in mapping.items(): 103 | label_cp[label == k] = v 104 | return label_cp 105 | 106 | 107 | def save_model_summary(model, path): 108 | """Saves model summary to a text file. 109 | 110 | Args: 111 | model: A Keras model. 112 | path: A full output file path. 113 | """ 114 | with open(path, 'w') as f: 115 | current_stdout = sys.stdout 116 | sys.stdout = f 117 | print(model.summary()) 118 | sys.stdout = current_stdout 119 | 120 | 121 | def get_config(config_file, source=None): 122 | """Get configurations from a file or a StringIO object. 123 | 124 | Args: 125 | config_file: A full file path or a StringIO object. 126 | source: A string specifying the file name to which the configurations 127 | are saved using the function save_config. It is overwritten by 128 | config_file if config_file is a file path (default: None). 129 | 130 | Returns: 131 | A dict of configurations. 
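Example (an illustrative fragment; `output_dir` and `visible_devices` are
        keys read by run.py, and values must be Python literals because they are
        parsed with ast.literal_eval, hence the quoted strings):

            # config_fnoseg.ini (fragment)
            [main]
            output_dir = '~/results/fnoseg'
            visible_devices = '0'

            # usage
            config_args = get_config('config_fnoseg.ini')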
132 | 133 | """ 134 | # Read config file 135 | config = ConfigParser(interpolation=ExtendedInterpolation()) 136 | if isinstance(config_file, StringIO): 137 | config.read_file(config_file, source) # Read as a file obj 138 | else: 139 | config.read(config_file) # Read as a file name 140 | source = config_file 141 | 142 | # Output is a dict of dict, format = {section: {key: val}} 143 | output = OrderedDict() 144 | for section in config.sections(): 145 | output[section] = OrderedDict() 146 | for k, v in config.items(section): 147 | try: 148 | output[section][k] = ast.literal_eval(v) 149 | except ValueError as e: 150 | raise ValueError(str(e) + '\n%s: %s' % (k, v)) 151 | 152 | output['config_file'] = os.path.basename(source) if source is not None else None 153 | output['config'] = StringIO() 154 | config.write(output['config']) 155 | 156 | return output 157 | 158 | 159 | def save_config(config_args, output_dir): 160 | """Saves configurations. 161 | 162 | Args: 163 | config_args: The configurations. 164 | output_dir: The directory where the config file is saved. 165 | Note that the file basename is determined in config_args['config_file']. 166 | """ 167 | with open(os.path.join(output_dir, config_args['config_file']), 'w') as f: 168 | f.write(config_args['config'].getvalue()) 169 | 170 | 171 | def get_data_lists(data_lists_paths, data_dir=None): 172 | """Creates a multimodal data list for file reading. 173 | 174 | Args: 175 | data_lists_paths: A list of paths, each is a text file containing the list of filenames of a modality. 176 | data_dir: If not None, it is a str attached to the beginning of each filename. 177 | It is the directory that contains all input data when the filenames are relative paths. 178 | 179 | Returns: 180 | A list of filename lists, each filename list is for a modality. 181 | """ 182 | if data_lists_paths is None: 183 | return None 184 | data_dir = data_dir or '' 185 | data_lists = [] 186 | for dl_path in data_lists_paths: 187 | dl_path = os.path.expanduser(dl_path) 188 | with open(dl_path) as f: 189 | a_list = f.read().splitlines() 190 | a_list = [os.path.join(data_dir, fname) for fname in a_list] 191 | data_lists.append(a_list) 192 | return data_lists 193 | 194 | 195 | def save_output(y, data_lists_test, idx_sample, output_dir, output_origin=None, suffix=''): 196 | """Saves a label map to a nii.gz file. 197 | Warning: this function is hard-coded for our BraTS'19 experiments. 198 | 199 | Args: 200 | y: A predicted or ground-truth label map. 201 | data_lists_test: A list of filename lists, each filename list is for a modality. 202 | We use data_lists_test[0][idx_sample] to get the patient ID. 203 | idx_sample: Index to the sample in data_lists_test. 204 | output_dir: The output directory. 205 | output_origin: If not None, it is the "image origin" of the output (default: None). 206 | See ITK for the details of "image origin". 207 | suffix: The suffix attached to the output filename (default: ''). 
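Worked example of the hard-coded naming: if data_lists_test[0][idx_sample]
        ends in 'BraTS19_2013_0_1_t2.nii.gz', splitting the basename on '_' and
        dropping the last token gives pid = 'BraTS19_2013_0_1', so the file is
        written to <output_dir>/BraTS19_2013_0_1<suffix>.nii.gz.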
208 | """ 209 | y = np.asarray(y, dtype=np.int16) 210 | y = sitk.GetImageFromArray(y) 211 | if output_origin is not None: 212 | y.SetOrigin(output_origin) 213 | 214 | fname = data_lists_test[0][idx_sample] 215 | fname = os.path.basename(fname) 216 | pid = '_'.join(fname.split('_')[:-1]) # Extracts the patient ID according to BraTS'19 naming format 217 | fname = os.path.join(output_dir, f'{pid}{suffix}.nii.gz') 218 | os.makedirs(os.path.dirname(fname), exist_ok=True) 219 | sitk.WriteImage(y, fname, True) 220 | 221 | 222 | def read_img(filename): 223 | """Reads an image file to produce a Numpy array using SimpleITK. 224 | 225 | Args: 226 | filename: Image file name (full path). 227 | 228 | Returns: 229 | A Numpy array of the image. 230 | """ 231 | img = sitk.ReadImage(filename) 232 | return sitk.GetArrayFromImage(img).astype(np.float32) 233 | -------------------------------------------------------------------------------- /nets/__init__.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """A collection of network architectures. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | from nets.architectures import VNetDS, NeuralOperatorSeg, HartleyMHASeg 12 | 13 | __author__ = 'Ken C. L. Wong' 14 | -------------------------------------------------------------------------------- /nets/custom_losses.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """Custom Keras loss functions. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | from keras.src.losses.losses import LossFunctionWrapper, Loss 12 | import tensorflow as tf 13 | 14 | __author__ = 'Ken C. L. Wong' 15 | 16 | 17 | def corrcoef(y_true, y_pred): 18 | """Computes the Pearson's correlation coefficients. 19 | 20 | Args: 21 | y_true: One-hot ground truth labels. 22 | y_pred: Prediction scores. 23 | 24 | Returns: 25 | Pearson's correlation coefficients with shape (batch_size, num_labels). 26 | """ 27 | ndim = y_true.ndim 28 | axis = list(range(ndim))[1:-1] # Spatial dimensions 29 | 30 | assert ndim in [3, 4, 5] 31 | 32 | y_true = y_true - tf.reduce_mean(y_true, axis=axis, keepdims=True) 33 | y_pred = y_pred - tf.reduce_mean(y_pred, axis=axis, keepdims=True) 34 | 35 | tp = tf.reduce_sum(y_true * y_pred, axis=axis) 36 | tt = tf.reduce_sum(tf.square(y_true), axis=axis) 37 | pp = tf.reduce_sum(tf.square(y_pred), axis=axis) 38 | 39 | output = tp / tf.sqrt(tt * pp + 1e-7) 40 | 41 | return output 42 | 43 | 44 | def pcc_loss(y_true, y_pred): 45 | """Loss function based on the Pearson's correlation coefficient (PCC). 46 | 47 | Please refer to our MLMI 2022 paper for more details: 48 | Wong, K.C.L., Moradi, M. (2022). 3D Segmentation with Fully Trainable Gabor Kernels 49 | and Pearson’s Correlation Coefficient. In: Machine Learning in Medical Imaging. MLMI 2022. 50 | https://doi.org/10.1007/978-3-031-21014-3_6 51 | 52 | Args: 53 | y_true: One-hot ground truth labels. 54 | y_pred: Prediction scores. 55 | 56 | Returns: 57 | The PCC loss with shape (batch_size,). 58 | """ 59 | output = corrcoef(y_true, y_pred) # (-1, 1) 60 | output = (output + 1) * 0.5 # (0, 1) 61 | output = 1 - output 62 | return tf.reduce_mean(output, axis=-1) 63 | 64 | 65 | class PCCLoss(LossFunctionWrapper): 66 | """Loss function based on the Pearson's correlation coefficient (PCC). 
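As in `pcc_loss` above, the per-label PCC in [-1, 1] is rescaled to
    (pcc + 1) / 2 in [0, 1], and the loss is one minus its mean over the
    labels, so perfectly correlated predictions give a loss of 0.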
67 | 68 | Please refer to our MLMI 2022 paper for more details: 69 | Wong, K.C.L., Moradi, M. (2022). 3D Segmentation with Fully Trainable Gabor Kernels 70 | and Pearson’s Correlation Coefficient. In: Machine Learning in Medical Imaging. MLMI 2022. 71 | https://doi.org/10.1007/978-3-031-21014-3_6 72 | 73 | Args: 74 | reduction: Type of reduction to apply to the loss. In almost all cases 75 | this should be `"sum_over_batch_size"`. 76 | Supported options are `"sum"`, `"sum_over_batch_size"` or `None`. 77 | name: Optional name for the instance. 78 | """ 79 | def __init__(self, 80 | reduction='sum_over_batch_size', 81 | name='pcc_loss'): 82 | super().__init__( 83 | pcc_loss, 84 | name=name, 85 | reduction=reduction 86 | ) 87 | 88 | def get_config(self): 89 | return Loss.get_config(self) 90 | 91 | 92 | def dice_coef(y_true, y_pred): 93 | """Computes the (soft) Dice coefficients. 94 | 95 | Args: 96 | y_true: One-hot ground truth labels. 97 | y_pred: Prediction scores. 98 | 99 | Returns: 100 | The (soft) Dice coefficients with shape (batch_size, num_labels). 101 | """ 102 | ndim = y_true.ndim 103 | axis = list(range(ndim))[1:-1] # Spatial dimensions 104 | 105 | assert ndim in [3, 4, 5] 106 | 107 | intersection = tf.reduce_sum(y_true * y_pred, axis=axis) 108 | union = tf.reduce_sum(y_true + y_pred, axis=axis) 109 | return 2. * intersection / (union + 1e-7) 110 | 111 | 112 | def dice_loss(y_true, y_pred): 113 | """Dice loss. 114 | 115 | Args: 116 | y_true: One-hot ground truth labels. 117 | y_pred: Prediction scores. 118 | 119 | Returns: 120 | The Dice loss with shape (batch_size,). 121 | """ 122 | output = dice_coef(y_true, y_pred) 123 | output = 1 - output 124 | return tf.reduce_mean(output, axis=-1) 125 | 126 | 127 | class DiceLoss(LossFunctionWrapper): 128 | """Dice loss. 129 | 130 | Args: 131 | reduction: Type of reduction to apply to the loss. In almost all cases 132 | this should be `"sum_over_batch_size"`. 133 | Supported options are `"sum"`, `"sum_over_batch_size"` or `None`. 134 | name: Optional name for the instance. 135 | """ 136 | def __init__(self, 137 | reduction='sum_over_batch_size', 138 | name='dice_loss'): 139 | super().__init__( 140 | dice_loss, 141 | name=name, 142 | reduction=reduction 143 | ) 144 | 145 | def get_config(self): 146 | return Loss.get_config(self) 147 | -------------------------------------------------------------------------------- /nets/custom_objects.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """Collections of different custom objects for Keras. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | from nets.custom_losses import PCCLoss, DiceLoss 12 | from nets.fourier_operator import FourierOperator 13 | from nets.hartley_operator import HartleyOperator 14 | from nets.hartley_mha import HartleyMultiHeadAttention 15 | 16 | __author__ = 'Ken C. L. Wong' 17 | 18 | 19 | custom_objects = { 20 | 'PCCLoss': PCCLoss, 21 | 'DiceLoss': DiceLoss, 22 | 'FourierOperator': FourierOperator, 23 | 'HartleyOperator': HartleyOperator, 24 | 'HartleyMultiHeadAttention': HartleyMultiHeadAttention, 25 | } 26 | -------------------------------------------------------------------------------- /nets/dht.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. 
All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """Discrete Hartley transforms implemented by FFT. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | import numpy as np 12 | import tensorflow as tf 13 | 14 | __author__ = 'Ken C. L. Wong' 15 | 16 | 17 | def standardize_type_fft(x): 18 | """Standardizes data for TensorFlow FFT. 19 | 20 | Args: 21 | x: An ndarray or TensorFlow tensor to be standardized. 22 | 23 | Returns: 24 | A complex64 tensor or a float32 ndarray. 25 | """ 26 | if tf.is_tensor(x): 27 | if x.dtype in ['float32', 'float64']: # Must be converted to complex for TF tensor 28 | x = tf.complex(x, tf.zeros_like(x)) 29 | x = tf.cast(x, tf.complex64) 30 | else: 31 | x = np.asarray(x, dtype=np.float32) # TF fft converts ndarray automatically 32 | return x 33 | 34 | 35 | def dht2d(x, is_inverse=False): 36 | """Computes discrete Hartley transform over the innermost dimensions. 37 | 38 | The FFT of TensorFlow is used. Note that inverse DHT is DHT divided by the number of 39 | elements in the innermost dimensions. 40 | 41 | Args: 42 | x: Input tensor/ndarray. 43 | is_inverse: True if inverse DHT is desired. 44 | 45 | Returns: 46 | The DHT output. 47 | """ 48 | x = standardize_type_fft(x) 49 | x_fft = tf.signal.fft2d(x) 50 | x_hart = tf.math.real(x_fft) - tf.math.imag(x_fft) 51 | 52 | if is_inverse: 53 | x_hart = x_hart / tf.cast(tf.reduce_prod(x_hart.shape[-2:]), tf.float32) 54 | 55 | return x_hart 56 | 57 | 58 | def dht3d(x, is_inverse=False): 59 | """Computes discrete Hartley transform over the innermost dimensions. 60 | 61 | The FFT of TensorFlow is used. Note that inverse DHT is DHT divided by the number of 62 | elements in the innermost dimensions. 63 | 64 | Args: 65 | x: Input tensor/ndarray. 66 | is_inverse: True if inverse DHT is desired. 67 | 68 | Returns: 69 | The DHT output. 70 | """ 71 | x = standardize_type_fft(x) 72 | x_fft = tf.signal.fft3d(x) 73 | x_hart = tf.math.real(x_fft) - tf.math.imag(x_fft) 74 | 75 | if is_inverse: 76 | x_hart = x_hart / tf.cast(tf.reduce_prod(x_hart.shape[-3:]), tf.float32) 77 | 78 | return x_hart 79 | -------------------------------------------------------------------------------- /nets/fourier_operator.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | import tensorflow as tf 7 | from keras.layers import Layer, InputSpec 8 | from keras import constraints 9 | from keras import initializers 10 | from keras import regularizers 11 | 12 | import numpy as np 13 | 14 | __author__ = 'Ken C. L. Wong' 15 | 16 | 17 | class FourierOperator(Layer): 18 | """A Keras layer that applies the convolution theorem through the Fourier transform. 19 | The input tensor is Fourier transformed, modified by the learnable weights in the 20 | frequency domain, and inverse Fourier transformed back to the spatial domain. 21 | 22 | Args: 23 | filters: Number of output channels of this layer. 24 | num_modes: Number of frequency modes (k_max). Can be an int or a list of int. 25 | Note that `num_modes` must be smaller than half of the input spatial size in each dimension. 26 | use_bias: If True, learned bias is added to the output tensor (default: False). 27 | kernel_initializer: Kernel weights initializer (default: 'glorot_uniform'). 28 | bias_initializer: Bias weights initializer (default: 'zeros'). 29 | kernel_regularizer: Kernel regularizer (default: None). 
30 | bias_regularizer: Bias regularizer (default: None). 31 | kernel_constraint: Kernel constraint (default: None). 32 | bias_constraint: Bias constraint (default: None). 33 | weights_type: Type of weights in the frequency domain. 34 | Must be 'individual' or 'shared' (default). 35 | trainable: If True (default), the layer is trainable. 36 | name: Optional name for the instance (default: None). 37 | **kwargs: Optional keyword arguments. 38 | """ 39 | def __init__(self, 40 | filters, 41 | num_modes, 42 | use_bias=False, 43 | kernel_initializer='glorot_uniform', 44 | bias_initializer='zeros', 45 | kernel_regularizer=None, 46 | bias_regularizer=None, 47 | kernel_constraint=None, 48 | bias_constraint=None, 49 | weights_type='shared', 50 | trainable=True, 51 | name=None, 52 | **kwargs): 53 | super().__init__(trainable=trainable, name=name, **kwargs) 54 | 55 | self.filters = filters 56 | self.num_modes = num_modes 57 | self.use_bias = use_bias 58 | self.kernel_initializer = initializers.get(kernel_initializer) 59 | self.bias_initializer = initializers.get(bias_initializer) 60 | self.kernel_regularizer = regularizers.get(kernel_regularizer) 61 | self.bias_regularizer = regularizers.get(bias_regularizer) 62 | self.kernel_constraint = constraints.get(kernel_constraint) 63 | self.bias_constraint = constraints.get(bias_constraint) 64 | 65 | self.weights_type = weights_type 66 | assert self.weights_type in ['individual', 'shared'] 67 | 68 | # Keras does not support weights in complex numbers 69 | self.kernel_real = None 70 | self.kernel_img = None 71 | 72 | self.bias = None 73 | 74 | def build(self, input_shape): 75 | ndim = len(input_shape) 76 | channel_axis = ndim - 1 77 | num_input_channels = input_shape[channel_axis] 78 | 79 | if np.isscalar(self.num_modes): 80 | self.num_modes = (self.num_modes,) * (ndim - 2) 81 | else: 82 | assert len(self.num_modes) == ndim - 2 83 | self.num_modes = tuple(self.num_modes) 84 | 85 | if self.weights_type == 'shared': 86 | kernel_shape = (num_input_channels, self.filters) 87 | else: 88 | if ndim == 4: 89 | # Each dimension in the frequency domain contains two parts, 0 to pi and pi to 2 * pi, 90 | # except the innermost dimension only has 0 to pi. 
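# Sketch of the 2D case: rfft2d keeps only the non-negative frequencies of
# the innermost axis, so the kernel spans 2 * num_modes[0] entries of
# axis 0 (the num_modes[0] lowest plus the num_modes[0] highest
# frequencies) but only the first num_modes[1] entries of axis 1.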
91 | kernel_shape = (2 * self.num_modes[0], self.num_modes[1], num_input_channels, self.filters) 92 | else: # rfft3d and irfft3d cannot compute the gradient, so can only use fft3d and ifft3d 93 | kernel_shape = tuple(np.array(self.num_modes) * 2) + (num_input_channels, self.filters) 94 | 95 | self.kernel_real = self.add_weight( 96 | name='kernel_real', 97 | shape=kernel_shape, 98 | initializer=self.kernel_initializer, 99 | regularizer=self.kernel_regularizer, 100 | constraint=self.kernel_constraint) 101 | 102 | self.kernel_img = self.add_weight( 103 | name='kernel_img', 104 | shape=kernel_shape, 105 | initializer=self.kernel_initializer, 106 | regularizer=self.kernel_regularizer, 107 | constraint=self.kernel_constraint) 108 | 109 | if self.use_bias: 110 | self.bias = self.add_weight( 111 | name='bias', 112 | shape=(self.filters,), 113 | initializer=self.bias_initializer, 114 | regularizer=self.bias_regularizer, 115 | constraint=self.bias_constraint) 116 | 117 | self.input_spec = InputSpec(ndim=ndim, 118 | axes={channel_axis: num_input_channels}) 119 | self.built = True 120 | 121 | def call(self, inputs): 122 | if inputs.ndim == 4: 123 | x = self._call2d(inputs) 124 | else: 125 | x = self._call3d(inputs) 126 | 127 | if self.use_bias: 128 | x = tf.nn.bias_add(x, self.bias) 129 | 130 | return x 131 | 132 | def _call2d(self, inputs): 133 | s0, s1 = inputs.shape[1:-1] # Spatial size 134 | modes_0, modes_1 = self.num_modes 135 | ndim = inputs.ndim 136 | 137 | assert s0 >= 2 * modes_0 138 | 139 | # Convert to channel-first as fft only works on the innermost dimensions 140 | perm = [0, ndim - 1] + list(range(1, ndim - 1)) # (b, c, spatial) 141 | x = tf.transpose(inputs, perm=perm) 142 | 143 | x_fft = tf.signal.rfft2d(x) 144 | 145 | kernel = tf.complex(self.kernel_real, self.kernel_img) 146 | 147 | if self.weights_type == 'shared': 148 | equation = 'io,bihw->bohw' 149 | low = tf.einsum( 150 | equation, 151 | kernel, x_fft[..., :modes_0, :modes_1] 152 | ) 153 | high = tf.einsum( 154 | equation, 155 | kernel, x_fft[..., -modes_0:, :modes_1] 156 | ) 157 | else: 158 | equation = 'hwio,bihw->bohw' 159 | low = tf.einsum( 160 | equation, 161 | kernel[:modes_0], x_fft[..., :modes_0, :modes_1] 162 | ) 163 | high = tf.einsum( 164 | equation, 165 | kernel[-modes_0:], x_fft[..., -modes_0:, :modes_1] 166 | ) 167 | 168 | # Padding needs to be done manually as ifft only pads at the end 169 | pad_shape = [tf.shape(x_fft)[0], self.filters, s0 - 2 * modes_0, modes_1] 170 | pad_zeros = tf.zeros(pad_shape, dtype=x_fft.dtype) 171 | out_fft = tf.concat([low, pad_zeros, high], axis=2) 172 | 173 | x = tf.signal.irfft2d(out_fft, fft_length=tuple(inputs.shape[1:-1])) 174 | 175 | # Convert back to channel-last 176 | perm = [0] + list(range(2, ndim)) + [1] # (b, spatial, c) 177 | x = tf.transpose(x, perm=perm) 178 | 179 | return x 180 | 181 | def _call3d(self, inputs): 182 | s0, s1, s2 = inputs.shape[1:-1] # Spatial size 183 | modes_0, modes_1, modes_2 = self.num_modes 184 | ndim = inputs.ndim 185 | 186 | assert s0 >= 2 * modes_0 and s1 >= 2 * modes_1 and s2 >= 2 * modes_2 187 | 188 | # Convert to channel-first as fft only works on the innermost dimensions 189 | perm = [0, ndim - 1] + list(range(1, ndim - 1)) # (b, c, spatial) 190 | x = tf.transpose(inputs, perm=perm) 191 | 192 | x = tf.complex(x, tf.zeros_like(x)) 193 | 194 | x_fft = tf.signal.fft3d(x) # Cannot use rfft3d as gradient is not registered 195 | 196 | kernel = tf.complex(self.kernel_real, self.kernel_img) 197 | 198 | if self.weights_type == 'shared': 199 | 
equation = 'io,bidhw->bodhw' 200 | lll = tf.einsum( 201 | equation, 202 | kernel, x_fft[..., :modes_0, :modes_1, :modes_2] 203 | ) 204 | lhl = tf.einsum( 205 | equation, 206 | kernel, x_fft[..., :modes_0, -modes_1:, :modes_2] 207 | ) 208 | hll = tf.einsum( 209 | equation, 210 | kernel, x_fft[..., -modes_0:, :modes_1, :modes_2] 211 | ) 212 | hhl = tf.einsum( 213 | equation, 214 | kernel, x_fft[..., -modes_0:, -modes_1:, :modes_2] 215 | ) 216 | llh = tf.einsum( 217 | equation, 218 | kernel, x_fft[..., :modes_0, :modes_1, -modes_2:] 219 | ) 220 | lhh = tf.einsum( 221 | equation, 222 | kernel, x_fft[..., :modes_0, -modes_1:, -modes_2:] 223 | ) 224 | hlh = tf.einsum( 225 | equation, 226 | kernel, x_fft[..., -modes_0:, :modes_1, -modes_2:] 227 | ) 228 | hhh = tf.einsum( 229 | equation, 230 | kernel, x_fft[..., -modes_0:, -modes_1:, -modes_2:] 231 | ) 232 | else: 233 | equation = 'dhwio,bidhw->bodhw' 234 | lll = tf.einsum( 235 | equation, 236 | kernel[:modes_0, :modes_1, :modes_2], x_fft[..., :modes_0, :modes_1, :modes_2] 237 | ) 238 | lhl = tf.einsum( 239 | equation, 240 | kernel[:modes_0, -modes_1:, :modes_2], x_fft[..., :modes_0, -modes_1:, :modes_2] 241 | ) 242 | hll = tf.einsum( 243 | equation, 244 | kernel[-modes_0:, :modes_1, :modes_2], x_fft[..., -modes_0:, :modes_1, :modes_2] 245 | ) 246 | hhl = tf.einsum( 247 | equation, 248 | kernel[-modes_0:, -modes_1:, :modes_2], x_fft[..., -modes_0:, -modes_1:, :modes_2] 249 | ) 250 | llh = tf.einsum( 251 | equation, 252 | kernel[:modes_0, :modes_1, -modes_2:], x_fft[..., :modes_0, :modes_1, -modes_2:] 253 | ) 254 | lhh = tf.einsum( 255 | equation, 256 | kernel[:modes_0, -modes_1:, -modes_2:], x_fft[..., :modes_0, -modes_1:, -modes_2:] 257 | ) 258 | hlh = tf.einsum( 259 | equation, 260 | kernel[-modes_0:, :modes_1, -modes_2:], x_fft[..., -modes_0:, :modes_1, -modes_2:] 261 | ) 262 | hhh = tf.einsum( 263 | equation, 264 | kernel[-modes_0:, -modes_1:, -modes_2:], x_fft[..., -modes_0:, -modes_1:, -modes_2:] 265 | ) 266 | 267 | # Padding needs to be done manually as ifft only pads at the end 268 | 269 | # Padding along spatial dim 2, shape = (b, c, modes_0, modes_1, s2) 270 | pad_shape = [tf.shape(x_fft)[0], self.filters, modes_0, modes_1, s2 - 2 * modes_2] 271 | pad_zeros = tf.zeros(pad_shape, dtype=x_fft.dtype) 272 | ll = tf.concat([lll, pad_zeros, llh], axis=-1) 273 | lh = tf.concat([lhl, pad_zeros, lhh], axis=-1) 274 | hl = tf.concat([hll, pad_zeros, hlh], axis=-1) 275 | hh = tf.concat([hhl, pad_zeros, hhh], axis=-1) 276 | 277 | # Padding along spatial dim 1, shape = (b, c, modes_0, s1, s2) 278 | pad_shape = [tf.shape(x_fft)[0], self.filters, modes_0, s1 - 2 * modes_1, s2] 279 | pad_zeros = tf.zeros(pad_shape, dtype=x_fft.dtype) 280 | low = tf.concat([ll, pad_zeros, lh], axis=-2) 281 | high = tf.concat([hl, pad_zeros, hh], axis=-2) 282 | 283 | # Padding along spatial dim 0, shape = (b, c, s0, s1, s2) 284 | pad_shape = [tf.shape(x_fft)[0], self.filters, s0 - 2 * modes_0, s1, s2] 285 | pad_zeros = tf.zeros(pad_shape, dtype=x_fft.dtype) 286 | out_fft = tf.concat([low, pad_zeros, high], axis=-3) 287 | 288 | x = tf.signal.ifft3d(out_fft) 289 | x = tf.math.real(x) 290 | 291 | # Convert back to channel-last 292 | perm = [0] + list(range(2, ndim)) + [1] # (b, spatial, c) 293 | x = tf.transpose(x, perm=perm) 294 | 295 | return x 296 | 297 | def get_config(self): 298 | config = { 299 | 'filters': self.filters, 300 | 'num_modes': self.num_modes, 301 | 'use_bias': self.use_bias, 302 | 'kernel_initializer': initializers.serialize(self.kernel_initializer), 303 | 
'bias_initializer': initializers.serialize(self.bias_initializer), 304 | 'kernel_regularizer': regularizers.serialize(self.kernel_regularizer), 305 | 'bias_regularizer': regularizers.serialize(self.bias_regularizer), 306 | 'kernel_constraint': constraints.serialize(self.kernel_constraint), 307 | 'bias_constraint': constraints.serialize(self.bias_constraint), 308 | 'weights_type': self.weights_type, 309 | } 310 | base_config = super().get_config() 311 | return {**base_config, **config} 312 | -------------------------------------------------------------------------------- /nets/hartley_operator.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | import tensorflow as tf 7 | from keras.layers import Layer, InputSpec 8 | from keras import constraints 9 | from keras import initializers 10 | from keras import regularizers 11 | 12 | import numpy as np 13 | 14 | from nets.dht import dht2d, dht3d 15 | 16 | __author__ = 'Ken C. L. Wong' 17 | 18 | 19 | class HartleyOperator(Layer): 20 | """A Keras layer that applies the convolution theorem through the Hartley transform. 21 | The input tensor is Hartley transformed, modified by the learnable weights in the 22 | frequency domain, and inverse Hartley transformed back to the spatial domain. 23 | 24 | Args: 25 | filters: Number of output channels of this layer. 26 | num_modes: Number of frequency modes (k_max). Can be an int or a list of int. 27 | Note that `num_modes` must be smaller than half of the input spatial size in each dimension. 28 | use_bias: If True, learned bias is added to the output tensor (default: False). 29 | kernel_initializer: Kernel weights initializer (default: 'glorot_uniform'). 30 | bias_initializer: Bias weights initializer (default: 'zeros'). 31 | kernel_regularizer: Kernel regularizer (default: None). 32 | bias_regularizer: Bias regularizer (default: None). 33 | kernel_constraint: Kernel constraint (default: None). 34 | bias_constraint: Bias constraint (default: None). 35 | weights_type: Type of weights in the frequency domain. 36 | Must be 'individual' or 'shared' (default). 37 | trainable: If True (default), the layer is trainable. 38 | name: Optional name for the instance (default: None). 39 | **kwargs: Optional keyword arguments. 
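Note: unlike the Fourier operator, the Hartley transform is real-valued, so this layer learns a single real kernel instead of separate real and imaginary parts. Illustrative usage (shapes are assumptions): y = HartleyOperator(filters=32, num_modes=12)(x) for a channel-last 3D input x of shape (batch, d, h, w, c) with each spatial size at least 24.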
40 | """ 41 | def __init__(self, 42 | filters, 43 | num_modes, 44 | use_bias=False, 45 | kernel_initializer='glorot_uniform', 46 | bias_initializer='zeros', 47 | kernel_regularizer=None, 48 | bias_regularizer=None, 49 | kernel_constraint=None, 50 | bias_constraint=None, 51 | weights_type='shared', 52 | trainable=True, 53 | name=None, 54 | **kwargs): 55 | super().__init__(trainable=trainable, name=name, **kwargs) 56 | 57 | self.filters = filters 58 | self.num_modes = num_modes 59 | self.use_bias = use_bias 60 | self.kernel_initializer = initializers.get(kernel_initializer) 61 | self.bias_initializer = initializers.get(bias_initializer) 62 | self.kernel_regularizer = regularizers.get(kernel_regularizer) 63 | self.bias_regularizer = regularizers.get(bias_regularizer) 64 | self.kernel_constraint = constraints.get(kernel_constraint) 65 | self.bias_constraint = constraints.get(bias_constraint) 66 | 67 | self.weights_type = weights_type 68 | assert self.weights_type in ['individual', 'shared'] 69 | 70 | self.kernel = None 71 | self.bias = None 72 | 73 | def build(self, input_shape): 74 | ndim = len(input_shape) 75 | channel_axis = ndim - 1 76 | num_input_channels = input_shape[channel_axis] 77 | 78 | if np.isscalar(self.num_modes): 79 | self.num_modes = (self.num_modes,) * (ndim - 2) 80 | else: 81 | assert len(self.num_modes) == ndim - 2 82 | self.num_modes = tuple(self.num_modes) 83 | 84 | if self.weights_type == 'shared': 85 | kernel_shape = (num_input_channels, self.filters) 86 | else: 87 | kernel_shape = tuple(np.array(self.num_modes) * 2) + (num_input_channels, self.filters) 88 | 89 | self.kernel = self.add_weight( 90 | name='kernel', 91 | shape=kernel_shape, 92 | initializer=self.kernel_initializer, 93 | regularizer=self.kernel_regularizer, 94 | constraint=self.kernel_constraint) 95 | 96 | if self.use_bias: 97 | self.bias = self.add_weight( 98 | name='bias', 99 | shape=(self.filters,), 100 | initializer=self.bias_initializer, 101 | regularizer=self.bias_regularizer, 102 | constraint=self.bias_constraint) 103 | 104 | self.input_spec = InputSpec(ndim=ndim, 105 | axes={channel_axis: num_input_channels}) 106 | self.built = True 107 | 108 | def call(self, inputs): 109 | if inputs.ndim == 4: 110 | x = self._call2d(inputs) 111 | else: 112 | x = self._call3d(inputs) 113 | 114 | if self.use_bias: 115 | x = tf.nn.bias_add(x, self.bias) 116 | 117 | return x 118 | 119 | def _call2d(self, inputs): 120 | s0, s1 = inputs.shape[1:-1] # Spatial size 121 | modes_0, modes_1 = self.num_modes 122 | ndim = inputs.ndim 123 | 124 | assert s0 >= 2 * modes_0 and s1 >= 2 * modes_1 125 | 126 | # Convert to channel-first as dht(fft) only works on the innermost dimensions 127 | perm = [0, ndim - 1] + list(range(1, ndim - 1)) # (b, c, spatial) 128 | x = tf.transpose(inputs, perm=perm) 129 | 130 | x = dht2d(x) 131 | 132 | if self.weights_type == 'shared': 133 | kernel = self.kernel 134 | equation = 'io,bihw->bohw' 135 | ll = tf.einsum(equation, kernel, x[..., :modes_0, :modes_1]) 136 | lh = tf.einsum(equation, kernel, x[..., :modes_0, -modes_1:]) 137 | hl = tf.einsum(equation, kernel, x[..., -modes_0:, :modes_1]) 138 | hh = tf.einsum(equation, kernel, x[..., -modes_0:, -modes_1:]) 139 | else: 140 | kernel = self.kernel 141 | kernel_reverse = get_reverse(kernel, [0, 1]) 142 | x_reverse = get_reverse(x, [-2, -1]) 143 | equation = 'hwio,bihw->bohw' 144 | ll = hartley_conv(equation, 145 | kernel[:modes_0, :modes_1], kernel_reverse[:modes_0, :modes_1], 146 | x[..., :modes_0, :modes_1], x_reverse[..., :modes_0, :modes_1]) 147 | 
lh = hartley_conv(equation, 148 | kernel[:modes_0, -modes_1:], kernel_reverse[:modes_0, -modes_1:], 149 | x[..., :modes_0, -modes_1:], x_reverse[..., :modes_0, -modes_1:]) 150 | hl = hartley_conv(equation, 151 | kernel[-modes_0:, :modes_1], kernel_reverse[-modes_0:, :modes_1], 152 | x[..., -modes_0:, :modes_1], x_reverse[..., -modes_0:, :modes_1]) 153 | hh = hartley_conv(equation, 154 | kernel[-modes_0:, -modes_1:], kernel_reverse[-modes_0:, -modes_1:], 155 | x[..., -modes_0:, -modes_1:], x_reverse[..., -modes_0:, -modes_1:]) 156 | 157 | # Padding 158 | pad_shape = [tf.shape(x)[0], self.filters, modes_0, s1 - 2 * modes_1] 159 | pad_zeros = tf.zeros(pad_shape, dtype=x.dtype) 160 | low = tf.concat([ll, pad_zeros, lh], axis=-1) 161 | high = tf.concat([hl, pad_zeros, hh], axis=-1) 162 | 163 | pad_shape = [tf.shape(x)[0], self.filters, s0 - 2 * modes_0, s1] 164 | pad_zeros = tf.zeros(pad_shape, dtype=x.dtype) 165 | x = tf.concat([low, pad_zeros, high], axis=-2) 166 | 167 | x = dht2d(x, is_inverse=True) 168 | 169 | # Convert back to channel-last 170 | perm = [0] + list(range(2, ndim)) + [1] # (b, spatial, c) 171 | x = tf.transpose(x, perm=perm) 172 | 173 | return x 174 | 175 | def _call3d(self, inputs): 176 | s0, s1, s2 = inputs.shape[1:-1] # Spatial size 177 | modes_0, modes_1, modes_2 = self.num_modes 178 | ndim = inputs.ndim 179 | 180 | assert s0 >= 2 * modes_0 and s1 >= 2 * modes_1 and s2 >= 2 * modes_2 181 | 182 | # Convert to channel-first as dht(fft) only works on the innermost dimensions 183 | perm = [0, ndim - 1] + list(range(1, ndim - 1)) # (b, c, spatial) 184 | x = tf.transpose(inputs, perm=perm) 185 | 186 | x = dht3d(x) 187 | 188 | if self.weights_type == 'shared': 189 | kernel = self.kernel 190 | equation = 'io,bidhw->bodhw' 191 | lll = tf.einsum(equation, kernel, x[..., :modes_0, :modes_1, :modes_2]) 192 | lhl = tf.einsum(equation, kernel, x[..., :modes_0, -modes_1:, :modes_2]) 193 | hll = tf.einsum(equation, kernel, x[..., -modes_0:, :modes_1, :modes_2]) 194 | hhl = tf.einsum(equation, kernel, x[..., -modes_0:, -modes_1:, :modes_2]) 195 | llh = tf.einsum(equation, kernel, x[..., :modes_0, :modes_1, -modes_2:]) 196 | lhh = tf.einsum(equation, kernel, x[..., :modes_0, -modes_1:, -modes_2:]) 197 | hlh = tf.einsum(equation, kernel, x[..., -modes_0:, :modes_1, -modes_2:]) 198 | hhh = tf.einsum(equation, kernel, x[..., -modes_0:, -modes_1:, -modes_2:]) 199 | else: 200 | kernel = self.kernel 201 | kernel_reverse = get_reverse(kernel, [0, 1, 2]) 202 | x_reverse = get_reverse(x, [-3, -2, -1]) 203 | equation = 'dhwio,bidhw->bodhw' 204 | lll = hartley_conv( 205 | equation, 206 | kernel[:modes_0, :modes_1, :modes_2], kernel_reverse[:modes_0, :modes_1, :modes_2], 207 | x[..., :modes_0, :modes_1, :modes_2], x_reverse[..., :modes_0, :modes_1, :modes_2] 208 | ) 209 | lhl = hartley_conv( 210 | equation, 211 | kernel[:modes_0, -modes_1:, :modes_2], kernel_reverse[:modes_0, -modes_1:, :modes_2], 212 | x[..., :modes_0, -modes_1:, :modes_2], x_reverse[..., :modes_0, -modes_1:, :modes_2] 213 | ) 214 | hll = hartley_conv( 215 | equation, 216 | kernel[-modes_0:, :modes_1, :modes_2], kernel_reverse[-modes_0:, :modes_1, :modes_2], 217 | x[..., -modes_0:, :modes_1, :modes_2], x_reverse[..., -modes_0:, :modes_1, :modes_2] 218 | ) 219 | hhl = hartley_conv( 220 | equation, 221 | kernel[-modes_0:, -modes_1:, :modes_2], kernel_reverse[-modes_0:, -modes_1:, :modes_2], 222 | x[..., -modes_0:, -modes_1:, :modes_2], x_reverse[..., -modes_0:, -modes_1:, :modes_2] 223 | ) 224 | llh = hartley_conv( 225 | equation, 226 
| kernel[:modes_0, :modes_1, -modes_2:], kernel_reverse[:modes_0, :modes_1, -modes_2:], 227 | x[..., :modes_0, :modes_1, -modes_2:], x_reverse[..., :modes_0, :modes_1, -modes_2:] 228 | ) 229 | lhh = hartley_conv( 230 | equation, 231 | kernel[:modes_0, -modes_1:, -modes_2:], kernel_reverse[:modes_0, -modes_1:, -modes_2:], 232 | x[..., :modes_0, -modes_1:, -modes_2:], x_reverse[..., :modes_0, -modes_1:, -modes_2:] 233 | ) 234 | hlh = hartley_conv( 235 | equation, 236 | kernel[-modes_0:, :modes_1, -modes_2:], kernel_reverse[-modes_0:, :modes_1, -modes_2:], 237 | x[..., -modes_0:, :modes_1, -modes_2:], x_reverse[..., -modes_0:, :modes_1, -modes_2:] 238 | ) 239 | hhh = hartley_conv( 240 | equation, 241 | kernel[-modes_0:, -modes_1:, -modes_2:], kernel_reverse[-modes_0:, -modes_1:, -modes_2:], 242 | x[..., -modes_0:, -modes_1:, -modes_2:], x_reverse[..., -modes_0:, -modes_1:, -modes_2:] 243 | ) 244 | 245 | # Padding needs to be done manually as ifft only pads at the end 246 | 247 | # Padding along spatial dim 2, shape = (b, c, modes_0, modes_1, s2) 248 | pad_shape = [tf.shape(x)[0], self.filters, modes_0, modes_1, s2 - 2 * modes_2] 249 | pad_zeros = tf.zeros(pad_shape, dtype=x.dtype) 250 | ll = tf.concat([lll, pad_zeros, llh], axis=-1) 251 | lh = tf.concat([lhl, pad_zeros, lhh], axis=-1) 252 | hl = tf.concat([hll, pad_zeros, hlh], axis=-1) 253 | hh = tf.concat([hhl, pad_zeros, hhh], axis=-1) 254 | 255 | # Padding along spatial dim 1, shape = (b, c, modes_0, s1, s2) 256 | pad_shape = [tf.shape(x)[0], self.filters, modes_0, s1 - 2 * modes_1, s2] 257 | pad_zeros = tf.zeros(pad_shape, dtype=x.dtype) 258 | low = tf.concat([ll, pad_zeros, lh], axis=-2) 259 | high = tf.concat([hl, pad_zeros, hh], axis=-2) 260 | 261 | # Padding along spatial dim 0, shape = (b, c, s0, s1, s2) 262 | pad_shape = [tf.shape(x)[0], self.filters, s0 - 2 * modes_0, s1, s2] 263 | pad_zeros = tf.zeros(pad_shape, dtype=x.dtype) 264 | x = tf.concat([low, pad_zeros, high], axis=-3) 265 | 266 | x = dht3d(x, is_inverse=True) 267 | 268 | # Convert back to channel-last 269 | perm = [0] + list(range(2, ndim)) + [1] # (b, spatial, c) 270 | x = tf.transpose(x, perm=perm) 271 | 272 | return x 273 | 274 | def get_config(self): 275 | config = { 276 | 'filters': self.filters, 277 | 'num_modes': self.num_modes, 278 | 'use_bias': self.use_bias, 279 | 'kernel_initializer': initializers.serialize(self.kernel_initializer), 280 | 'bias_initializer': initializers.serialize(self.bias_initializer), 281 | 'kernel_regularizer': regularizers.serialize(self.kernel_regularizer), 282 | 'bias_regularizer': regularizers.serialize(self.bias_regularizer), 283 | 'kernel_constraint': constraints.serialize(self.kernel_constraint), 284 | 'bias_constraint': constraints.serialize(self.bias_constraint), 285 | 'weights_type': self.weights_type, 286 | } 287 | base_config = super().get_config() 288 | return {**base_config, **config} 289 | 290 | 291 | def hartley_conv(equation, kernel, kernel_reverse, x, x_reverse): 292 | """Applies Hartley convolution theorem in the frequency domain. 293 | 294 | Args: 295 | equation: An equation for tf.einsum which describes how kernel and data (x) interact. 296 | kernel: A kernel. 297 | kernel_reverse: kernel with the frequency axes reversed. 298 | x: Data 299 | x_reverse: x with the frequency axes reversed. 300 | 301 | Returns: 302 | A tensor in the frequency domain. 
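This implements the Hartley convolution theorem: multiplication of spectra in the Fourier domain corresponds to (K(k) * [X(k) + X(N-k)] + K(N-k) * [X(k) - X(N-k)]) / 2 in the Hartley domain, which is what the two einsum terms below compute.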
303 | """ 304 | h1 = tf.einsum(equation, kernel, x + x_reverse) 305 | h2 = tf.einsum(equation, kernel_reverse, x - x_reverse) 306 | return (h1 + h2) * 0.5 307 | 308 | 309 | def get_reverse(x, axis): 310 | """Get x[N-k] by 'reverse then roll by 1'. 311 | 'reverse' converts x[k] to x[N-1-k], then 'roll by 1' changes x[N-1-k] to x[N-k] as x[0] = x[N]. 312 | 313 | Args: 314 | x: Input tensor. 315 | axis: Which axes to reverse and roll. Must be a list or tuple. 316 | 317 | Returns: 318 | x[N-k] 319 | """ 320 | assert isinstance(axis, (list, tuple)) 321 | shift = [1] * len(axis) 322 | return tf.roll(tf.reverse(x, axis), shift, axis) 323 | -------------------------------------------------------------------------------- /nets/nets_utils.py: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2023 IBM Inc. All rights reserved 3 | # SPDX-License-Identifier: Apache2.0 4 | # 5 | 6 | """Helper functions for creating architectures. 7 | 8 | Author: Ken C. L. Wong 9 | """ 10 | 11 | from keras.layers import Cropping2D, ZeroPadding2D, Cropping3D, ZeroPadding3D 12 | from keras import initializers, regularizers, constraints, losses 13 | 14 | import numpy as np 15 | from typing import List, Tuple, Union 16 | 17 | __author__ = 'Ken C. L. Wong' 18 | 19 | 20 | def spatial_padcrop(x, target_shape): 21 | """Performs spatial cropping and/or padding. 22 | Nothing is done if the shapes already match. 23 | 24 | Args: 25 | x: The tensor to be reshaped. 26 | target_shape: Target shape. 27 | 28 | Returns: 29 | A reshaped tensor. 30 | """ 31 | ndim = x.ndim 32 | assert ndim in (3, 4, 5) and ndim == len(target_shape) + 2 33 | padding, cropping = get_spatial_padcrop(x, target_shape) 34 | 35 | if np.sum(padding) != 0: 36 | op = ZeroPadding2D if ndim == 4 else ZeroPadding3D 37 | x = op(padding)(x) 38 | 39 | if np.sum(cropping) != 0: 40 | op = Cropping2D if ndim == 4 else Cropping3D 41 | x = op(cropping)(x) 42 | 43 | return x 44 | 45 | 46 | def get_spatial_padcrop(x, target_shape): 47 | """Computes the amounts of padding and cropping needed. 48 | 49 | Args: 50 | x: The tensor to be reshaped. 51 | target_shape: Target shape. 52 | 53 | Returns: 54 | The padding and cropping lists. 55 | """ 56 | shape = np.array(tuple(x.shape[1:-1])) 57 | 58 | ndim = len(shape) 59 | zeros = (0, 0) # Lower and upper 60 | 61 | if np.array_equal(target_shape, shape): 62 | return [zeros] * ndim, [zeros] * ndim 63 | 64 | diff = target_shape - shape 65 | 66 | # Regardless of dimension, at most one padding and one cropping is enough 67 | padding = [] 68 | cropping = [] 69 | for d in diff: 70 | if d >= 0: 71 | cropping.append(zeros) 72 | q = d // 2 73 | if d % 2 == 0: 74 | padding.append((q, q)) 75 | else: 76 | padding.append((q, q + 1)) 77 | else: 78 | padding.append(zeros) 79 | d = -d 80 | q = d // 2 81 | if d % 2 == 0: 82 | cropping.append((q, q)) 83 | else: 84 | cropping.append((q, q + 1)) 85 | 86 | return padding, cropping 87 | 88 | 89 | def get_loss(loss, loss_args: Union[dict, Tuple[dict], List[dict]] = None, custom_objects=None): 90 | """Gets callable loss functions. 91 | This is needed to process custom losses. 92 | 93 | Args: 94 | loss: Loss function(s). Can be a str, class, function, or a list of them. 95 | loss_args: Optional loss function arguments. 96 | If it is a list, each element is only used if it is a dict. 97 | custom_objects: Custom objects. 98 | 99 | Returns: 100 | A callable loss function, or a list of callable loss functions.
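Example (illustrative; MyCustomLoss is a hypothetical class name): get_loss('CategoricalCrossentropy', loss_args={'label_smoothing': 0.1}) returns a keras.losses.CategoricalCrossentropy instance, while get_loss(MyCustomLoss, loss_args={'smooth': 1.0}) instantiates the class with the given arguments.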
101 | """ 102 | # Convert to list 103 | if not isinstance(loss, (list, tuple)): 104 | loss = [loss] 105 | if loss_args is not None: 106 | assert not isinstance(loss_args, (list, tuple)) 107 | loss_args = [loss_args] 108 | 109 | loss_fns = [] 110 | for i, ls in enumerate(loss): 111 | if loss_args is not None and isinstance(loss_args[i], dict): 112 | ls_args = loss_args[i] 113 | else: 114 | ls_args = {} 115 | 116 | if ls.__class__ == type: # If it is a class 117 | ls = ls(**ls_args) 118 | elif isinstance(ls, str): 119 | ls = {'class_name': ls, 'config': ls_args} 120 | 121 | loss_fns.append(get_single_loss(ls, custom_objects)) 122 | 123 | if len(loss_fns) == 1: 124 | return loss_fns[0] 125 | else: 126 | return loss_fns 127 | 128 | 129 | def get_single_loss(identifier, custom_objects=None): 130 | return get_deserialized_object(identifier, 'loss', custom_objects=custom_objects) 131 | 132 | 133 | def get_deserialized_object(identifier, module_name, custom_objects=None): 134 | assert module_name in ['initializer', 'regularizer', 'constraint', 'loss'] 135 | if module_name == 'initializer': 136 | deserialize = initializers.deserialize 137 | elif module_name == 'regularizer': 138 | deserialize = regularizers.deserialize 139 | elif module_name == 'constraint': 140 | deserialize = constraints.deserialize 141 | else: 142 | deserialize = losses.deserialize 143 | 144 | if identifier is None: 145 | return None 146 | if isinstance(identifier, dict): 147 | return deserialize(identifier, custom_objects=custom_objects) 148 | elif isinstance(identifier, str): 149 | return deserialize(identifier, custom_objects=custom_objects) 150 | elif callable(identifier): 151 | return identifier 152 | else: 153 | raise ValueError('Could not interpret %s identifier: ' % module_name + 154 | str(identifier)) 155 | -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # Multimodal Image Segmentation 2 | 3 | This repository contains our proposed image segmentation frameworks applicable to both 2D and 3D segmentation. These include architectures and losses of: 4 | 5 | 1. **HartleyMHA** 6 | 7 | Ken C. L. Wong, Hongzhi Wang, and Tanveer Syeda-Mahmood, “HartleyMHA: self-attention in frequency domain for resolution-robust and parameter-efficient 3D image segmentation,” in *International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)*, 2023, pp. 364–373. [[pdf](https://arxiv.org/pdf/2310.04466.pdf)] 8 | ``` 9 | @inproceedings{Conference:Wong:MICCAI2023:hartleymha, 10 | title = {{HartleyMHA}: self-attention in frequency domain for resolution-robust and parameter-efficient {3D} image segmentation}, 11 | author = {Wong, Ken C. L. and Wang, Hongzhi and Syeda-Mahmood, Tanveer}, 12 | booktitle = {International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)}, 13 | pages = {364--373}, 14 | year = {2023}, 15 | } 16 | ``` 17 | 18 | 2. **FNOSeg3D** 19 | 20 | Ken C. L. Wong, Hongzhi Wang, and Tanveer Syeda-Mahmood, “FNOSeg3D: resolution-robust 3D image segmentation with Fourier neural operator,” in *IEEE International Symposium on Biomedical Imaging (ISBI)*, 2023, pp. 1–5. [[pdf](https://arxiv.org/pdf/2310.03872.pdf)] 21 | ``` 22 | @inproceedings{Conference:Wong:ISBI2023:fnoseg3d, 23 | title = {{FNOSeg3D}: resolution-robust {3D} image segmentation with {Fourier} neural operator}, 24 | author = {Wong, Ken C. L. 
and Wang, Hongzhi and Syeda-Mahmood, Tanveer}, 25 | booktitle = {IEEE International Symposium on Biomedical Imaging (ISBI)}, 26 | pages = {1--5}, 27 | year = {2023}, 28 | } 29 | ``` 30 | 31 | 3. **V-Net-DS (V-Net with deep supervision)** 32 | 33 | Ken C. L. Wong, Mehdi Moradi, Hui Tang, and Tanveer Syeda-Mahmood, “3D segmentation with exponential logarithmic loss for highly unbalanced object sizes,” in *International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)*, 2018, pp. 612–619. [[pdf](https://arxiv.org/pdf/1809.00076.pdf)] 34 | ``` 35 | @inproceedings{Conference:Wong:MICCAI2018:3d, 36 | title = {{3D} segmentation with exponential logarithmic loss for highly unbalanced object sizes}, 37 | author = {Wong, Ken C. L. and Moradi, Mehdi and Tang, Hui and Syeda-Mahmood, Tanveer}, 38 | booktitle = {International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)}, 39 | pages = {612--619}, 40 | year = {2018}, 41 | } 42 | ``` 43 | 44 | 4. **Pearson’s Correlation Coefficient (PCC) loss** 45 | 46 | Ken C. L. Wong and Mehdi Moradi, “3D segmentation with fully trainable Gabor kernels and Pearson’s correlation coefficient,” in *Machine Learning in Medical Imaging*, 2022, pp. 53–61. [[pdf](https://arxiv.org/pdf/2201.03644.pdf)] 47 | ``` 48 | @inproceedings{Workshop:Wong:MLMI2022:3d, 49 | title = {{3D} segmentation with fully trainable {Gabor} kernels and {Pearson's} correlation coefficient}, 50 | author = {Wong, Ken C. L. and Moradi, Mehdi}, 51 | booktitle = {Machine Learning in Medical Imaging}, 52 | pages = {53--61}, 53 | year = {2022}, 54 | } 55 | ``` 56 | 57 | ## Technical Details 58 | 59 | The code is developed with Python 3.10.12 and Keras in TensorFlow 2.16.1, and the "channel-last" format is assumed. If you are only interested in the architectures, the `nets` module is all you need, though the hyperparameters are stored under [experiments/config_files](experiments/config_files) as they are dataset-specific. The experimental setups, such as data splits and the training procedure, are in the `experiments` module. The `nets` module is dataset-independent, while some functions in the `experiments` module (e.g., dataset partitioning) are written exclusively for BraTS'19. 60 | 61 | In experiments, parameters and arguments are provided through a config file using Python's `ConfigParser` module. The config file is saved to the output directory for future reference. Examples of the config files used in our experiments are provided under [experiments/config_files](experiments/config_files) for reproducibility. 62 | 63 | For your convenience, we include an example of our experimental setup in [BraTS2019_example.zip](BraTS2019_example.zip), which contains the necessary folders and config files for testing. To perform experiments on the BraTS'19 dataset, please obtain the dataset from [CBICA](https://ipp.cbica.upenn.edu/). In our example, we assume that the images are downsampled by nearest-neighbor interpolation to 120x120x78 for training, and the original data hierarchy remains unchanged. For inference, the official validation dataset with the original image size of 240x240x155 should be used. 64 | 65 | 66 | ### Setting Up the Virtual Environment 67 | 68 | Several Python packages are required to run the code. You can install them with the following steps: 69 | 70 | 1. Create a virtual environment (https://docs.python.org/3/library/venv.html). Note that the default `python` in your system may be Python 2.
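You can check which interpreter `python3` refers to by running `python3 --version`.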
You can use the following command to ensure Python 3 is used: 71 | ``` 72 | python3 -m venv /path/to/new/virtual/environment 73 | ``` 74 | 75 | 2. Upgrade `pip` in the *activated* virtual environment: 76 | ``` 77 | pip install --upgrade pip 78 | ``` 79 | It is important to upgrade `pip`, as the installed version can be outdated and the next step may fail. 80 | 81 | 3. Install the required Python packages using: 82 | ``` 83 | pip install tensorflow[and-cuda] natsort SimpleITK matplotlib pandas pydot 84 | ``` 85 | > **_Note:_** The Linux (not Python) library `graphviz` is required by the function `keras.utils.plot_model`. If you encounter the corresponding runtime error, you can either install `graphviz` with `sudo apt-get install graphviz` if you have sudo privileges, or set `is_plot_model = False` in the training config file to skip `plot_model`. 86 | 87 | For more information on troubleshooting, see [Troubleshooting](troubleshooting.md). 88 | 89 | 90 | ### Data Partitioning 91 | 92 | The [experiments/data_split](experiments/data_split) folder contains the script and config files for partitioning the BraTS'19 dataset. The program goes through the dataset folders to extract the patient IDs and groups them into training, validation, and testing sets. The resulting lists of file paths are saved as txt files. To run the script, we first modify the `config_partitioning.ini` config file, then use the command line: 93 | ``` 94 | python partitioning.py /path/to/config_partitioning.ini 95 | ``` 96 | The split examples used in our experiments are provided under [`split_examples`](experiments/data_split/split_examples). They are also included in [BraTS2019_example.zip](BraTS2019_example.zip). 97 | 98 | 99 | ### Training 100 | 101 | To perform training, we first modify the `config_<architecture>.ini` file, then run: 102 | ``` 103 | python run.py /path/to/config_<architecture>.ini 104 | ``` 105 | where `<architecture>` stands for an architecture name (e.g., fnoseg). The config files of different architectures differ only in the `[model]` section and `output_dir`. Note that in our example, the validation and testing sets are the same. The config files used in our experiments can be found under [experiments/config_files](experiments/config_files). They are also included in [BraTS2019_example.zip](BraTS2019_example.zip). 106 | 107 | 108 | ### Inference 109 | 110 | To perform inference, we first modify the `config_inference_<architecture>.ini` file, then run: 111 | ``` 112 | python inference.py /path/to/config_inference_<architecture>.ini 113 | ``` 114 | where `<architecture>` stands for an architecture name (e.g., fnoseg). The segmentation results can be uploaded to [CBICA](https://ipp.cbica.upenn.edu/) for the official performance validation. The config files used in our experiments can be found under [experiments/config_files](experiments/config_files). They are also included in [BraTS2019_example.zip](BraTS2019_example.zip). 115 | 116 | 117 | ### Results Statistics 118 | 119 | We find that in the official validation results, the "enhancing tumor" (ET) region has a sensitivity of NaN. We also find that `Hausdorff95_ET` = NaN when `Sensitivity_ET` = 1. These indicate that there may be no positives for ET in some cases. Therefore, when computing the means and variances for the ET region (e.g., `Dice_ET`, `Hausdorff95_ET`), those cases with `Sensitivity_ET` equal to NaN or 1 are ignored. 120 | 121 | ## Updates 122 | 123 | ### 2025-03 124 | 125 | 1. The PyTorch implementation with new architectures will be available soon. 126 | 127 | ### 2024-04 128 | 129 | 1.
Updated the code for the most recent version of TensorFlow (2.16.1). 130 | 2. The `datagenerator.py` module is replaced by the `dataset.py` module, which uses `PyDataset` in Keras 3. As `PyDataset` is new in Keras 3 and thus TensorFlow 2.16.1, the `InputData` class is not backward compatible. 131 | 132 | ## Contact Information 133 | 134 | Ken C. L. Wong () 135 | -------------------------------------------------------------------------------- /troubleshooting.md: -------------------------------------------------------------------------------- 1 | # Troubleshooting 2 | 3 | The following solutions are based on our experience with Ubuntu 18.04, Python 3.6.9, and Keras in TensorFlow 2.6.2. 4 | 5 | ## TensorFlow 6 | 7 | Runtime errors are common with TensorFlow and are usually caused by a missing or mismatched CUDA or cuDNN library. Note that `sudo` privileges are usually required when handling these library issues. The TensorFlow dependencies on CUDA and cuDNN can be found at:\ 8 | https://www.tensorflow.org/install/source#gpu 9 | 10 | 11 | ### Installing CUDA Toolkit (not the driver) 12 | 13 | A specific version of the CUDA Toolkit may be required. To install the CUDA Toolkit without messing up the existing driver, we use the runfile installation:\ 14 | https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#runfile-installation \ 15 | As we *do not* install the driver, steps 1-4 in Section 8.2 are unnecessary. The runfile can be found online; for example, the one for CUDA 11.2 is at:\ 16 | https://developer.nvidia.com/cuda-11-2-0-download-archive \ 17 | After running the runfile and getting inside the user interface: 18 | 1. Choose `Continue` for the “driver found” warning. 19 | 2. In the “CUDA Installer” section, uncheck everything except `CUDA Toolkit`. 20 | 3. Also in the “CUDA Installer” section, select `Options` → `Toolkit Options` → uncheck everything → `Done` → `Done` → `Install`. 21 | 22 | Wait until the installation finishes and the summary is shown. 23 | 24 | ### Installing cuDNN 25 | 26 | Multiple runtime issues can be solved by installing cuDNN. The official installation guide is provided at:\ 27 | https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html \ 28 | The different versions of the cuDNN installers are available at:\ 29 | https://developer.nvidia.com/rdp/cudnn-archive 30 | > **_Note:_** The installer used must be consistent with the TensorFlow dependencies. For example, if the TensorFlow version requires CUDA 11, you should use the cuDNN installer for CUDA 11 but not for CUDA 12. 31 | 32 | In this [Stack Overflow post](https://stackoverflow.com/questions/66977227/could-not-load-dynamic-library-libcudnn-so-8-when-running-tensorflow-on-ubun), a Linux user reported that the installation can be achieved by: 33 | ``` 34 | sudo apt-get install libcudnn8 35 | ``` 36 | 37 | ### Runtime error: "Could not load dynamic library 'libcudnn.so.8'" 38 | 39 | This may happen when importing TensorFlow. First we can check if the file exists by using: 40 | ``` 41 | find /usr -name "libcudnn.so.8" 42 | ``` 43 | If it does not exist, cuDNN should be installed. If you know that the file exists in an uncommon location, you can modify (or create) `~/.bash_profile` with the line: 44 | ``` 45 | export LD_LIBRARY_PATH=/directory/containing/the/file:$LD_LIBRARY_PATH 46 | ``` 47 | so that the library file can be found. 48 | 49 | 50 | ### Runtime error: "Could not load dynamic library 'libcudart.so.11.0'" 51 | 52 | This may happen when importing TensorFlow.
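Since `libcudart` is the CUDA runtime library that ships with the CUDA Toolkit, this usually indicates a missing or mismatched Toolkit rather than a driver problem.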
First we can check if the file exists by using: 53 | ``` 54 | find /usr -name "libcudart.so.11.0" 55 | ``` 56 | If it does not exist, the CUDA Toolkit should be installed. If you know that the file exists in an uncommon location, you can modify (or create) `~/.bash_profile` with the line: 57 | ``` 58 | export LD_LIBRARY_PATH=/directory/containing/the/file:$LD_LIBRARY_PATH 59 | ``` 60 | so that the library file can be found. 61 | 62 | 63 | ### Runtime error: "Unknown: Failed to get convolution algorithm" 64 | 65 | This may happen when training starts and may be caused by an incompatible cuDNN version, for example, when cuDNN for CUDA 12 is installed while TensorFlow requires cuDNN for CUDA 11. You can find the required version using the following Python code under the activated virtual environment: 66 | ```python 67 | import tensorflow as tf 68 | print(tf.sysconfig.get_build_info()) 69 | ``` 70 | 71 | 72 | ### The program is running but no GPU is used 73 | 74 | This may happen after training starts and may be caused by a missing or incompatible cuDNN library. First we can check if TensorFlow can find the GPUs using the following Python code under the activated virtual environment: 75 | ```python 76 | from tensorflow.python.client import device_lib 77 | print(device_lib.list_local_devices()) 78 | ``` 79 | If TensorFlow cannot find the GPUs, a compatible cuDNN needs to be installed. 80 | 81 | 82 | ## GraphViz 83 | 84 | You may encounter an error message that ends with: 85 | >ImportError: ('You must install pydot (\`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.') 86 | 87 | The Linux (not Python) library `graphviz` is required by the function `tensorflow.keras.utils.plot_model`. If you encounter the corresponding runtime error, you can install `graphviz` if you have `sudo` privileges: 88 | ``` 89 | sudo apt-get install graphviz 90 | ``` 91 | Or you can set `is_plot_model = False` in the training config file to skip `plot_model`. --------------------------------------------------------------------------------