├── .gitignore ├── Agreements ├── PARSE2022_agreement.pdf └── parse2022_agreements.pdf ├── LICENSE ├── README.md ├── baseline ├── config.py ├── dataset.py ├── evalu.py ├── feature.py ├── losses.py ├── model.py ├── params.pkl ├── readme.md ├── requirements.txt ├── requirements.yaml ├── submit.py └── train.py ├── build-docker-image-example ├── Dockerfile ├── U_net_Model.py ├── main.py └── requirements.txt ├── docker_rules └── docker-submission-rules.md └── parse_result.xlsx /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | pip-wheel-metadata/ 24 | share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | *.py,cover 51 | .hypothesis/ 52 | .pytest_cache/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # IPython 81 | profile_default/ 82 | ipython_config.py 83 | 84 | # pyenv 85 | .python-version 86 | 87 | # pipenv 88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 91 | # install all needed dependencies. 92 | #Pipfile.lock 93 | 94 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow 95 | __pypackages__/ 96 | 97 | # Celery stuff 98 | celerybeat-schedule 99 | celerybeat.pid 100 | 101 | # SageMath parsed files 102 | *.sage.py 103 | 104 | # Environments 105 | .env 106 | .venv 107 | env/ 108 | venv/ 109 | ENV/ 110 | env.bak/ 111 | venv.bak/ 112 | 113 | # Spyder project settings 114 | .spyderproject 115 | .spyproject 116 | 117 | # Rope project settings 118 | .ropeproject 119 | 120 | # mkdocs documentation 121 | /site 122 | 123 | # mypy 124 | .mypy_cache/ 125 | .dmypy.json 126 | dmypy.json 127 | 128 | # Pyre type checker 129 | .pyre/ 130 | -------------------------------------------------------------------------------- /Agreements/PARSE2022_agreement.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerceptionComputingLab/PARSE2022/cd772c5247729de0a6ae9a09b9f0940e765c71d4/Agreements/PARSE2022_agreement.pdf -------------------------------------------------------------------------------- /Agreements/parse2022_agreements.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerceptionComputingLab/PARSE2022/cd772c5247729de0a6ae9a09b9f0940e765c71d4/Agreements/parse2022_agreements.pdf -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. 
For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 
202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PARSE2022 2 | Official repository of MICCAI 2022 Parse challenge 3 | -------------------------------------------------------------------------------- /baseline/config.py: -------------------------------------------------------------------------------- 1 | class DefaultConfig (object) : 2 | 3 | root_raw_train_data = 'path to be filled' # downloaded training data root 4 | root_raw_eval_data = 'path to be filled' # downloaded evaluation/testing data root 5 | 6 | root_dataset_file = './dataset/' # preprocessed dataset root 7 | root_train_volume = './dataset/train/' # preprocessed training dataset root 8 | root_eval_volume = './dataset/eval/' # preprocessed evaluation/testing dataset root 9 | 10 | root_exp_file = './exp/' # training exp root 11 | root_submit_file = './submit/' # submit file root 12 | 13 | root_pred_dcm = 'path to be filled' # prediction array root 14 | root_pred_save = 'path to be filled' # prediction result root 15 | 16 | root_model_param = './params.pkl' # reference model parameter root 17 | 18 | use_gpu = True 19 | batch_size = 2 20 | max_epoch = 1 21 | learning_rate = 1e-3 22 | decay_LR = (5, 0.1) 23 | 24 | opt = DefaultConfig() -------------------------------------------------------------------------------- /baseline/dataset.py: -------------------------------------------------------------------------------- 1 | import os 2 | import torch 3 | import numpy as np 4 | from torch.utils import data 5 | import nibabel as nib 6 | from config import opt 7 | from tqdm import tqdm 8 | import feature as feat 9 | 10 | 11 | def folder_creation(): 12 | 13 | if not os.path.exists(opt.root_dataset_file): 14 | os.makedirs(opt.root_dataset_file) 15 | if not os.path.exists(opt.root_train_volume): 16 | os.makedirs(opt.root_train_volume) 17 | 18 | return 19 | 20 | 21 | def get_cup_extre(image): 22 | cup_extre = np.zeros([3, 2], dtype=np.int16) 23 | flag = False 24 | for i in range(image.shape[0]): 25 | tmp = np.sum(image[i, :, :]) 26 | if tmp > 0: 27 | cup_extre[0, 1] = i 28 | if flag == False: 29 | flag = True 30 | cup_extre[0, 0] = i 31 | cup_extre[0, 0] = max(0, cup_extre[0, 0] - 5) 32 | cup_extre[0, 1] = min(image.shape[0], cup_extre[0, 1] + 6) 33 | flag = False 34 | for i in range(image.shape[1]): 35 | tmp = np.sum(image[:, i, :]) 36 | if tmp > 0: 37 | cup_extre[1, 1] = i 38 | if flag == False: 39 | flag = True 40 | cup_extre[1, 0] = i 41 | cup_extre[1, 0] = max(0, cup_extre[1, 0] - 5) 42 | cup_extre[1, 1] = min(image.shape[1], cup_extre[1, 1] + 6) 43 | flag = False 44 | for i in range(image.shape[2]): 45 | tmp = np.sum(image[:, :, i]) 46 | if tmp > 0: 47 | cup_extre[2, 1] = i 48 | if flag == False: 49 | flag = True 50 | cup_extre[2, 0] = i 51 | cup_extre[2, 0] = max(0, cup_extre[2, 0] - 5) 52 | cup_extre[2, 1] = min(image.shape[2], cup_extre[2, 1] + 6) 53 | return cup_extre 54 | 55 | 56 | def rawdata_loading(): 57 | root_rawfile = opt.root_raw_train_data 58 | root_volume_file = opt.root_train_volume 59 | fileList = os.listdir(root_rawfile) 60 | for f in tqdm(fileList, total=len(fileList)): 61 | tmp_dcmfile, tmp_labelfile = f'{root_rawfile}{f}/image/', f'{root_rawfile}{f}/label/' 62 | root_dcmfile, root_labelfile = f'{tmp_dcmfile}{os.listdir(tmp_dcmfile)[0]}', f'{tmp_labelfile}{os.listdir(tmp_labelfile)[0]}' 63 | 64 | dcm_img, label_img = np.array(nib.load(root_dcmfile).dataobj), 
np.array(nib.load(root_labelfile).dataobj) 65 | dcm_img, label_img = np.swapaxes(dcm_img, 0, 2), np.swapaxes(label_img, 0, 2) 66 | 67 | cup_extre = get_cup_extre(label_img) 68 | dcm_img = dcm_img[cup_extre[0, 0]:cup_extre[0, 1], cup_extre[1, 0]:cup_extre[1, 1], 69 | cup_extre[2, 0]:cup_extre[2, 1]] 70 | label_img = label_img[cup_extre[0, 0]:cup_extre[0, 1], cup_extre[1, 0]:cup_extre[1, 1], 71 | cup_extre[2, 0]:cup_extre[2, 1]] 72 | dcm_img = feat.add_window(dcm_img) 73 | label_img = label_img.astype(np.bool) 74 | root_save = f'{root_volume_file}{f}/' 75 | if not os.path.exists(root_save): 76 | os.makedirs(root_save) 77 | np.save(f'{root_save}/dcm.npy', dcm_img) 78 | np.save(f'{root_save}/label.npy', label_img) 79 | 80 | 81 | def volume_resize(): 82 | root_volumefile = opt.root_train_volume 83 | fileList = os.listdir(root_volumefile) 84 | 85 | dcm_array, label_array = np.zeros([100, 96 * 3, 96 * 3, 96 * 4], dtype=np.uint8), np.zeros( 86 | [100, 96 * 3, 96 * 3, 96 * 4], dtype=np.bool) 87 | 88 | tot_file = 0 89 | for f in tqdm(fileList, total=len(fileList)): 90 | tmp_dcmfile, tmp_labelfile = f'{root_volumefile}{f}/dcm.npy', f'{root_volumefile}{f}/label.npy' 91 | dcmfile, labelfile = np.load(tmp_dcmfile), np.load(tmp_labelfile) 92 | dcmfile = dcmfile[:min(dcmfile.shape[0], 96 * 3), :min(dcmfile.shape[1], 96 * 3), 93 | :min(dcmfile.shape[2], 96 * 4)] 94 | labelfile = labelfile[:min(labelfile.shape[0], 96 * 3), :min(labelfile.shape[1], 96 * 3), 95 | :min(labelfile.shape[2], 96 * 4)] 96 | 97 | dcmfile = np.pad(dcmfile, ( 98 | (0, 96 * 3 - dcmfile.shape[0]), (0, 96 * 3 - dcmfile.shape[1]), (0, 96 * 4 - dcmfile.shape[2])), 99 | mode='symmetric') 100 | labelfile = np.pad(labelfile, ( 101 | (0, 96 * 3 - labelfile.shape[0]), (0, 96 * 3 - labelfile.shape[1]), (0, 96 * 4 - labelfile.shape[2])), 102 | mode='symmetric') 103 | 104 | dcm_array[tot_file, :, :, :] = dcmfile[:, :, :] 105 | label_array[tot_file, :, :, :] = labelfile[:, :, :] 106 | 107 | tot_file += 1 108 | np.save(f'{opt.root_dataset_file}dcm_volume_array.npy', dcm_array) 109 | np.save(f'{opt.root_dataset_file}label_volume_array.npy', label_array) 110 | 111 | 112 | def check_volume_array(check_id=0): 113 | root_volume = opt.root_dataset_file 114 | dcm_array = np.load(f'{root_volume}dcm_volume_array.npy') 115 | label_array = np.load(f'{root_volume}label_volume_array.npy') 116 | 117 | print(dcm_array.shape, label_array.shape) 118 | np.save(f'{root_volume}0D.npy', dcm_array[check_id]) 119 | np.save(f'{root_volume}0L.npy', label_array[check_id]) 120 | return 121 | 122 | 123 | def dataset_preprocessing(): 124 | folder_creation() 125 | print('folder creation finish!') 126 | rawdata_loading() 127 | print('rawdata loading finish!') 128 | volume_resize() 129 | print('volume creation finish!') 130 | return 131 | 132 | 133 | class volume_dataset(data.Dataset): 134 | def __init__(self, range_data=None): 135 | if range_data is None: 136 | range_data = [0, 1] 137 | self.num_epoch = 0 138 | dcm_array, label_array = np.load('./dataset/dcm_volume_array.npy'), np.load('./dataset/label_volume_array.npy') 139 | self.len_volume = dcm_array.shape[0] 140 | 141 | self.load_dcm_array = dcm_array[int(self.len_volume * range_data[0]): int(self.len_volume * range_data[1])] 142 | self.load_label_array = label_array[int(self.len_volume * range_data[0]): int(self.len_volume * range_data[1])] 143 | return 144 | 145 | def update_num_epoch(self, num_epoch): 146 | self.num_epoch = num_epoch 147 | return 148 | 149 | def __getitem__(self, index): 150 | 151 | num_volume, num_patch 
= int(index / (3 * 3 * 4)), index % (3 * 3 * 4) 152 | dcm_volume, label_volume = self.load_dcm_array[num_volume], self.load_label_array[num_volume] 153 | dcm_volume = dcm_volume.astype(np.float32) 154 | image_mean, image_std = np.mean(dcm_volume), np.std(dcm_volume) 155 | dcm_volume = (dcm_volume - image_mean) / image_std 156 | 157 | numx = int(num_patch / 12) 158 | numy = int((num_patch - numx * 12) / 4) 159 | numz = num_patch - numx * 12 - numy * 4 160 | ret_dcm = dcm_volume[numx * 96:(numx + 1) * 96, numy * 96:(numy + 1) * 96, numz * 96:(numz + 1) * 96] 161 | ret_label = label_volume[numx * 96:(numx + 1) * 96, numy * 96:(numy + 1) * 96, numz * 96:(numz + 1) * 96] 162 | 163 | ret_dcm = torch.tensor(ret_dcm, dtype=torch.float32) 164 | ret_label = torch.tensor(ret_label, dtype=torch.long) 165 | ret_dcm, ret_label = torch.unsqueeze(ret_dcm, 0), torch.unsqueeze(ret_label, 0) 166 | 167 | return ret_dcm, ret_label 168 | 169 | def __len__(self): 170 | 171 | return self.load_dcm_array.shape[0] * 3 * 3 * 4 172 | -------------------------------------------------------------------------------- /baseline/evalu.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from config import opt 3 | import feature as feat 4 | from tqdm import tqdm 5 | from model import UNet3D 6 | import torch 7 | import dataset 8 | from torch.utils.data import DataLoader 9 | from datetime import datetime 10 | 11 | 12 | def dice_coefficient(y_pred, y_true): 13 | smooth = 0.00001 14 | y_true_f, y_pred_f = y_true.flatten(), y_pred.flatten() 15 | intersection = np.logical_and(y_pred_f, y_true_f) 16 | return (2. * intersection.sum() + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth) 17 | 18 | def pred2seg(pred, target): 19 | if opt.use_gpu: 20 | pred, target = pred.cpu(), target.cpu() 21 | pred, target = pred.detach().numpy(), target.detach().numpy() 22 | pred[pred <= 0.5], pred[pred > 0.5] = 0, 1 23 | pred, target = pred.astype(np.uint8), target.astype(np.uint8) 24 | return pred, target 25 | 26 | def evalu_dice(evalu_model, eval_dataset, eval_dataLoader, eval_total=1.0): 27 | 28 | dice_counter = feat.Counter() 29 | eval_total, tmp_total = int(eval_total * len(eval_dataset)), 0 30 | for batch_id, (dcm_image, label_image) in tqdm(enumerate(eval_dataLoader), 31 | total=int(eval_total / opt.batch_size)): 32 | if (tmp_total + 1) * opt.batch_size > eval_total: 33 | break 34 | if dcm_image.shape[0] < opt.batch_size and eval_total < tmp_total: 35 | continue 36 | Input, target = dcm_image.requires_grad_(), label_image 37 | if opt.use_gpu is True: 38 | Input, target = Input.cuda(), target.cuda() 39 | pred = evalu_model(Input) 40 | pred, target = pred2seg(pred, target) 41 | dice = dice_coefficient(pred, target) 42 | dice_counter.updata(dice) 43 | tmp_total += 1 44 | return round(dice_counter.avg * 100, 1) 45 | 46 | def train_dice_check(): 47 | 48 | Unet = UNet3D(1, 1) 49 | Unet.load_state_dict(torch.load(opt.root_model_param)) 50 | if opt.use_gpu is True : 51 | Unet = Unet.cuda() 52 | Unet.eval() 53 | train_dice_dataset = dataset.volume_dataset() 54 | train_dice_dataLoader = DataLoader(train_dice_dataset, batch_size=opt.batch_size, shuffle=False) 55 | print('train dice loading finish') 56 | 57 | print(evalu_dice(Unet, train_dice_dataset, train_dice_dataLoader)) 58 | return 59 | 60 | def pred_dcm_array(): 61 | 62 | dcm_array = np.load(opt.root_pred_dcm) 63 | print(dcm_array.shape) 64 | 65 | dcm_volume = dcm_array.astype(np.float32) 66 | image_mean, image_std = 
np.mean(dcm_volume), np.std(dcm_volume) 67 | dcm_volume = (dcm_volume - image_mean) / image_std 68 | 69 | znum, xnum, ynum = int(dcm_volume.shape[0] / 92), int(dcm_volume.shape[1] / 92), int(dcm_volume.shape[2] / 92) 70 | if dcm_volume.shape[0] % 92: 71 | znum += 1 72 | if dcm_volume.shape[1] % 92: 73 | xnum += 1 74 | if dcm_volume.shape[2] % 92: 75 | ynum += 1 76 | 77 | print(znum*96, xnum*96, ynum*96) 78 | 79 | volume_array, pred_array = np.zeros((znum * xnum * ynum, 96, 96, 96), dtype=np.float32), np.zeros( 80 | (dcm_volume.shape[0], dcm_volume.shape[1], dcm_volume.shape[2]), dtype=np.uint8) 81 | for i in range(znum): 82 | for j in range(xnum): 83 | for k in range(ynum): 84 | tmp_array = dcm_volume[i * 96: min(dcm_volume.shape[0], (i + 1) * 96), 85 | j * 96: min(dcm_volume.shape[1], (j + 1) * 96), 86 | k * 96: min(dcm_volume.shape[2], (k + 1) * 96)] 87 | 88 | id = i * xnum * ynum + j * ynum + k 89 | volume_array[id, :tmp_array.shape[0], :tmp_array.shape[1], :tmp_array.shape[2]] = tmp_array[:, :, :] 90 | 91 | Unet = UNet3D(1, 1) 92 | Unet.load_state_dict(torch.load(opt.root_model_param)) 93 | 94 | if opt.use_gpu is True: 95 | Unet = Unet.cuda() 96 | 97 | Unet.eval() 98 | 99 | for i in range(volume_array.shape[0]): 100 | 101 | tmp_volume = torch.tensor(volume_array[i], dtype=torch.float32) 102 | tmp_volume = torch.unsqueeze(torch.unsqueeze(tmp_volume, 0), 0) 103 | 104 | Input = tmp_volume.requires_grad_() 105 | if opt.use_gpu is True: 106 | Input = Input.cuda() 107 | 108 | pred = Unet(Input) 109 | 110 | if opt.use_gpu: 111 | pred = pred.cpu() 112 | 113 | pred = torch.squeeze(torch.squeeze(pred, 0), 0) 114 | 115 | pred = pred.detach().numpy() 116 | pred[pred <= 0.5], pred[pred > 0.5] = 0, 1 117 | pred = pred.astype(np.uint8) 118 | 119 | tznum = int(i / (xnum * ynum)) 120 | txnum = int((i - tznum * (xnum * ynum)) / ynum) 121 | tynum = i - tznum * (xnum * ynum) - txnum * ynum 122 | 123 | pred_array[tznum * 96:min(dcm_volume.shape[0], (tznum + 1) * 96), 124 | txnum * 96:min(dcm_volume.shape[1], (txnum + 1) * 96), 125 | tynum * 96:min(dcm_volume.shape[2], (tynum + 1) * 96)] = pred[: min(dcm_volume.shape[0], (tznum + 1) * 96) - tznum * 96, 126 | : min(dcm_volume.shape[1], (txnum + 1) * 96) - txnum * 96, 127 | : min(dcm_volume.shape[2], (tynum + 1) * 96) - tynum * 96] 128 | 129 | time = datetime.now() 130 | np.save( 131 | f'{opt.root_pred_save}{str(time.month).zfill(2)}{str(time.day).zfill(2)}_{str(time.hour).zfill(2)}{str(time.minute).zfill(2)}.npy',pred_array) 132 | 133 | return -------------------------------------------------------------------------------- /baseline/feature.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import numpy as np 3 | import os 4 | import pandas as pd 5 | 6 | def info_data(matrix_data): 7 | type_data = type(matrix_data) 8 | if type(matrix_data) is torch.Tensor: 9 | if matrix_data.is_cuda: 10 | matrix_data = matrix_data.detach().cpu() 11 | matrix_data = np.array(matrix_data) 12 | print('data_type/dtype/shape: ', type_data, matrix_data.dtype, matrix_data.shape) 13 | print('min/max: ', np.min(matrix_data), np.max(matrix_data), 'numzero: ', np.sum(matrix_data < 0), 14 | np.sum(matrix_data == 0), np.sum(matrix_data > 0)) 15 | return 16 | 17 | def create_root(root_name): 18 | if not os.path.exists(root_name): 19 | os.makedirs(root_name) 20 | return 21 | 22 | class Counter: 23 | def __init__(self): 24 | self.count, self.sum, self.avg = 0, 0, 0 25 | return 26 | 27 | def updata(self, value, num_updata=1): 28 | 
self.count += num_updata 29 | self.sum += value * num_updata 30 | self.avg = self.sum / self.count 31 | return 32 | 33 | def clear(self): 34 | self.count, self.sum, self.avg = 0, 0, 0 35 | return 36 | 37 | def read_csv(root_csv): 38 | df = pd.read_csv(root_csv, sep='\n') 39 | df = df.values 40 | df = np.squeeze(df) 41 | return df 42 | 43 | def add_window(image, WL=-600, WW=1600): 44 | WLL = WL - (WW / 2) 45 | image = (image - WLL) / WW * 255 46 | image[image < 0] = 0 47 | image[image > 255] = 255 48 | image2 = np.zeros([image.shape[0], image.shape[1], image.shape[2]], dtype=np.uint8) 49 | image2[:, :, :] = image[:, :, :] 50 | return image2 51 | -------------------------------------------------------------------------------- /baseline/losses.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | from torch import nn as nn 4 | from torch.autograd import Variable 5 | from torch.nn import MSELoss, SmoothL1Loss, L1Loss 6 | 7 | 8 | def expand_as_one_hot(input, C, ignore_index=None): 9 | """ 10 | Converts NxSPATIAL label image to NxCxSPATIAL, where each label gets converted to its corresponding one-hot vector. 11 | It is assumed that the batch dimension is present. 12 | Args: 13 | input (torch.Tensor): 3D/4D input image 14 | C (int): number of channels/labels 15 | ignore_index (int): ignore index to be kept during the expansion 16 | Returns: 17 | 4D/5D output torch.Tensor (NxCxSPATIAL) 18 | """ 19 | assert input.dim() == 4 20 | 21 | # expand the input tensor to Nx1xSPATIAL before scattering 22 | input = input.unsqueeze(1) 23 | # create output tensor shape (NxCxSPATIAL) 24 | shape = list(input.size()) 25 | shape[1] = C 26 | 27 | if ignore_index is not None: 28 | # create ignore_index mask for the result 29 | mask = input.expand(shape) == ignore_index 30 | # clone the src tensor and zero out ignore_index in the input 31 | input = input.clone() 32 | input[input == ignore_index] = 0 33 | # scatter to get the one-hot tensor 34 | result = torch.zeros(shape).to(input.device).scatter_(1, input, 1) 35 | # bring back the ignore_index in the result 36 | result[mask] = ignore_index 37 | return result 38 | else: 39 | # scatter to get the one-hot tensor 40 | return torch.zeros(shape).to(input.device).scatter_(1, input, 1) 41 | 42 | 43 | def compute_per_channel_dice(input, target, epsilon=1e-6, weight=None): 44 | """ 45 | Computes DiceCoefficient as defined in https://arxiv.org/abs/1606.04797 given a multi channel input and target. 46 | Assumes the input is a normalized probability, e.g. a result of Sigmoid or Softmax function. 
47 | 48 | Args: 49 | input (torch.Tensor): NxCxSpatial input tensor 50 | target (torch.Tensor): NxCxSpatial target tensor 51 | epsilon (float): prevents division by zero 52 | weight (torch.Tensor): Cx1 tensor of weight per channel/class 53 | """ 54 | 55 | # input and target shapes must match 56 | assert input.size() == target.size(), "'input' and 'target' must have the same shape" 57 | 58 | input = flatten(input) 59 | target = flatten(target) 60 | target = target.float() 61 | 62 | # compute per channel Dice Coefficient 63 | intersect = (input * target).sum(-1) 64 | if weight is not None: 65 | intersect = weight * intersect 66 | 67 | # here we can use standard dice (input + target).sum(-1) or extension (see V-Net) (input^2 + target^2).sum(-1) 68 | denominator = (input * input).sum(-1) + (target * target).sum(-1) 69 | return 2 * (intersect / denominator.clamp(min=epsilon)) 70 | 71 | 72 | class _MaskingLossWrapper(nn.Module): 73 | """ 74 | Loss wrapper which prevents the gradient of the loss to be computed where target is equal to `ignore_index`. 75 | """ 76 | 77 | def __init__(self, loss, ignore_index): 78 | super(_MaskingLossWrapper, self).__init__() 79 | assert ignore_index is not None, 'ignore_index cannot be None' 80 | self.loss = loss 81 | self.ignore_index = ignore_index 82 | 83 | def forward(self, input, target): 84 | mask = target.clone().ne_(self.ignore_index) 85 | mask.requires_grad = False 86 | 87 | # mask out input/target so that the gradient is zero where on the mask 88 | input = input * mask 89 | target = target * mask 90 | 91 | # forward masked input and target to the loss 92 | return self.loss(input, target) 93 | 94 | 95 | class SkipLastTargetChannelWrapper(nn.Module): 96 | """ 97 | Loss wrapper which removes additional target channel 98 | """ 99 | 100 | def __init__(self, loss, squeeze_channel=False): 101 | super(SkipLastTargetChannelWrapper, self).__init__() 102 | self.loss = loss 103 | self.squeeze_channel = squeeze_channel 104 | 105 | def forward(self, input, target): 106 | assert target.size(1) > 1, 'Target tensor has a singleton channel dimension, cannot remove channel' 107 | 108 | # skips last target channel if needed 109 | target = target[:, :-1, ...] 110 | 111 | if self.squeeze_channel: 112 | # squeeze channel dimension if singleton 113 | target = torch.squeeze(target, dim=1) 114 | return self.loss(input, target) 115 | 116 | 117 | class _AbstractDiceLoss(nn.Module): 118 | """ 119 | Base class for different implementations of Dice loss. 120 | """ 121 | 122 | def __init__(self, weight=None, normalization='sigmoid'): 123 | super(_AbstractDiceLoss, self).__init__() 124 | self.register_buffer('weight', weight) 125 | # The output from the network during training is assumed to be un-normalized probabilities and we would 126 | # like to normalize the logits. Since Dice (or soft Dice in this case) is usually used for binary data, 127 | # normalizing the channels with Sigmoid is the default choice even for multi-class segmentation problems. 
128 | # However if one would like to apply Softmax in order to get the proper probability distribution from the 129 | # output, just specify `normalization=Softmax` 130 | assert normalization in ['sigmoid', 'softmax', 'none'] 131 | if normalization == 'sigmoid': 132 | self.normalization = nn.Sigmoid() 133 | elif normalization == 'softmax': 134 | self.normalization = nn.Softmax(dim=1) 135 | else: 136 | self.normalization = lambda x: x 137 | 138 | def dice(self, input, target, weight): 139 | # actual Dice score computation; to be implemented by the subclass 140 | raise NotImplementedError 141 | 142 | def forward(self, input, target): 143 | # get probabilities from logits 144 | input = self.normalization(input) 145 | 146 | # compute per channel Dice coefficient 147 | per_channel_dice = self.dice(input, target, weight=self.weight) 148 | 149 | # average Dice score across all channels/classes 150 | return 1. - torch.mean(per_channel_dice) 151 | 152 | 153 | class DiceLoss(_AbstractDiceLoss): 154 | """Computes Dice Loss according to https://arxiv.org/abs/1606.04797. 155 | For multi-class segmentation `weight` parameter can be used to assign different weights per class. 156 | The input to the loss function is assumed to be a logit and will be normalized by the Sigmoid function. 157 | """ 158 | 159 | def __init__(self, weight=None, normalization='sigmoid'): 160 | super().__init__(weight, normalization) 161 | 162 | def dice(self, input, target, weight): 163 | return compute_per_channel_dice(input, target, weight=self.weight) 164 | 165 | 166 | class GeneralizedDiceLoss(_AbstractDiceLoss): 167 | """Computes Generalized Dice Loss (GDL) as described in https://arxiv.org/pdf/1707.03237.pdf. 168 | """ 169 | 170 | def __init__(self, normalization='sigmoid', epsilon=1e-6): 171 | super().__init__(weight=None, normalization=normalization) 172 | self.epsilon = epsilon 173 | 174 | def dice(self, input, target, weight): 175 | assert input.size() == target.size(), "'input' and 'target' must have the same shape" 176 | 177 | input = flatten(input) 178 | target = flatten(target) 179 | target = target.float() 180 | 181 | if input.size(0) == 1: 182 | # for GDL to make sense we need at least 2 channels (see https://arxiv.org/pdf/1707.03237.pdf) 183 | # put foreground and background voxels in separate channels 184 | input = torch.cat((input, 1 - input), dim=0) 185 | target = torch.cat((target, 1 - target), dim=0) 186 | 187 | # GDL weighting: the contribution of each label is corrected by the inverse of its volume 188 | w_l = target.sum(-1) 189 | w_l = 1 / (w_l * w_l).clamp(min=self.epsilon) 190 | w_l.requires_grad = False 191 | 192 | intersect = (input * target).sum(-1) 193 | intersect = intersect * w_l 194 | 195 | denominator = (input + target).sum(-1) 196 | denominator = (denominator * w_l).clamp(min=self.epsilon) 197 | 198 | return 2 * (intersect.sum() / denominator.sum()) 199 | 200 | 201 | class BCEDiceLoss(nn.Module): 202 | """Linear combination of BCE and Dice losses""" 203 | 204 | def __init__(self, alpha, beta): 205 | super(BCEDiceLoss, self).__init__() 206 | self.alpha = alpha 207 | self.bce = nn.BCEWithLogitsLoss() 208 | self.beta = beta 209 | self.dice = DiceLoss() 210 | 211 | def forward(self, input, target): 212 | return self.alpha * self.bce(input, target) + self.beta * self.dice(input, target) 213 | 214 | 215 | class WeightedCrossEntropyLoss(nn.Module): 216 | """WeightedCrossEntropyLoss (WCE) as described in https://arxiv.org/pdf/1707.03237.pdf 217 | """ 218 | 219 | def __init__(self, ignore_index=-1): 220 
| super(WeightedCrossEntropyLoss, self).__init__() 221 | self.ignore_index = ignore_index 222 | 223 | def forward(self, input, target): 224 | weight = self._class_weights(input) 225 | return F.cross_entropy(input, target, weight=weight, ignore_index=self.ignore_index) 226 | 227 | @staticmethod 228 | def _class_weights(input): 229 | # normalize the input first 230 | input = F.softmax(input, dim=1) 231 | flattened = flatten(input) 232 | nominator = (1. - flattened).sum(-1) 233 | denominator = flattened.sum(-1) 234 | class_weights = Variable(nominator / denominator, requires_grad=False) 235 | return class_weights 236 | 237 | 238 | class PixelWiseCrossEntropyLoss(nn.Module): 239 | def __init__(self, class_weights=None, ignore_index=None): 240 | super(PixelWiseCrossEntropyLoss, self).__init__() 241 | self.register_buffer('class_weights', class_weights) 242 | self.ignore_index = ignore_index 243 | self.log_softmax = nn.LogSoftmax(dim=1) 244 | 245 | def forward(self, input, target, weights): 246 | assert target.size() == weights.size() 247 | # normalize the input 248 | log_probabilities = self.log_softmax(input) 249 | # standard CrossEntropyLoss requires the target to be (NxDxHxW), so we need to expand it to (NxCxDxHxW) 250 | target = expand_as_one_hot(target, C=input.size()[1], ignore_index=self.ignore_index) 251 | # expand weights 252 | weights = weights.unsqueeze(1) 253 | weights = weights.expand_as(input) 254 | 255 | # create default class_weights if None 256 | if self.class_weights is None: 257 | class_weights = torch.ones(input.size()[1]).float().to(input.device) 258 | else: 259 | class_weights = self.class_weights 260 | 261 | # resize class_weights to be broadcastable into the weights 262 | class_weights = class_weights.view(1, -1, 1, 1, 1) 263 | 264 | # multiply weights tensor by class weights 265 | weights = class_weights * weights 266 | 267 | # compute the losses 268 | result = -weights * target * log_probabilities 269 | # average the losses 270 | return result.mean() 271 | 272 | 273 | class WeightedSmoothL1Loss(nn.SmoothL1Loss): 274 | def __init__(self, threshold, initial_weight, apply_below_threshold=True): 275 | super().__init__(reduction="none") 276 | self.threshold = threshold 277 | self.apply_below_threshold = apply_below_threshold 278 | self.weight = initial_weight 279 | 280 | def forward(self, input, target): 281 | l1 = super().forward(input, target) 282 | 283 | if self.apply_below_threshold: 284 | mask = target < self.threshold 285 | else: 286 | mask = target >= self.threshold 287 | 288 | l1[mask] = l1[mask] * self.weight 289 | 290 | return l1.mean() 291 | 292 | 293 | def flatten(tensor): 294 | """Flattens a given tensor such that the channel axis is first. 
295 | The shapes are transformed as follows: 296 | (N, C, D, H, W) -> (C, N * D * H * W) 297 | """ 298 | # number of channels 299 | C = tensor.size(1) 300 | # new axis order 301 | axis_order = (1, 0) + tuple(range(2, tensor.dim())) 302 | # Transpose: (N, C, D, H, W) -> (C, N, D, H, W) 303 | transposed = tensor.permute(axis_order) 304 | # Flatten: (C, N, D, H, W) -> (C, N * D * H * W) 305 | return transposed.contiguous().view(C, -1) 306 | 307 | 308 | def get_loss_criterion(config): 309 | """ 310 | Returns the loss function based on provided configuration 311 | :param config: (dict) a top level configuration object containing the 'loss' key 312 | :return: an instance of the loss function 313 | """ 314 | assert 'loss' in config, 'Could not find loss function configuration' 315 | loss_config = config['loss'] 316 | name = loss_config.pop('name') 317 | 318 | ignore_index = loss_config.pop('ignore_index', None) 319 | skip_last_target = loss_config.pop('skip_last_target', False) 320 | weight = loss_config.pop('weight', None) 321 | 322 | if weight is not None: 323 | # convert to cuda tensor if necessary 324 | weight = torch.tensor(weight).to(config['device']) 325 | 326 | pos_weight = loss_config.pop('pos_weight', None) 327 | if pos_weight is not None: 328 | # convert to cuda tensor if necessary 329 | pos_weight = torch.tensor(pos_weight).to(config['device']) 330 | 331 | loss = _create_loss(name, loss_config, weight, ignore_index, pos_weight) 332 | 333 | if not (ignore_index is None or name in ['CrossEntropyLoss', 'WeightedCrossEntropyLoss']): 334 | # use MaskingLossWrapper only for non-cross-entropy losses, since CE losses allow specifying 'ignore_index' directly 335 | loss = _MaskingLossWrapper(loss, ignore_index) 336 | 337 | if skip_last_target: 338 | loss = SkipLastTargetChannelWrapper(loss, loss_config.get('squeeze_channel', False)) 339 | 340 | return loss 341 | 342 | 343 | def _create_loss(name, loss_config, weight, ignore_index, pos_weight): 344 | if name == 'BCEWithLogitsLoss': 345 | return nn.BCEWithLogitsLoss(pos_weight=pos_weight) 346 | elif name == 'BCEDiceLoss': 347 | alpha = loss_config.get('alphs', 1.) 348 | beta = loss_config.get('beta', 1.) 
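        # Editor's note (hedged): the config key 'alphs' in the lookup above looks like a
        # typo for 'alpha'; as written, a user-supplied `alpha` in the loss config would be
        # ignored and the BCE term would silently keep its default weight of 1.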
349 | return BCEDiceLoss(alpha, beta) 350 | elif name == 'CrossEntropyLoss': 351 | if ignore_index is None: 352 | ignore_index = -100 # use the default 'ignore_index' as defined in the CrossEntropyLoss 353 | return nn.CrossEntropyLoss(weight=weight, ignore_index=ignore_index) 354 | elif name == 'WeightedCrossEntropyLoss': 355 | if ignore_index is None: 356 | ignore_index = -100 # use the default 'ignore_index' as defined in the CrossEntropyLoss 357 | return WeightedCrossEntropyLoss(ignore_index=ignore_index) 358 | elif name == 'PixelWiseCrossEntropyLoss': 359 | return PixelWiseCrossEntropyLoss(class_weights=weight, ignore_index=ignore_index) 360 | elif name == 'GeneralizedDiceLoss': 361 | normalization = loss_config.get('normalization', 'sigmoid') 362 | return GeneralizedDiceLoss(normalization=normalization) 363 | elif name == 'DiceLoss': 364 | normalization = loss_config.get('normalization', 'sigmoid') 365 | return DiceLoss(weight=weight, normalization=normalization) 366 | elif name == 'MSELoss': 367 | return MSELoss() 368 | elif name == 'SmoothL1Loss': 369 | return SmoothL1Loss() 370 | elif name == 'L1Loss': 371 | return L1Loss() 372 | elif name == 'WeightedSmoothL1Loss': 373 | return WeightedSmoothL1Loss(threshold=loss_config['threshold'], 374 | initial_weight=loss_config['initial_weight'], 375 | apply_below_threshold=loss_config.get('apply_below_threshold', True)) 376 | else: 377 | raise RuntimeError(f"Unsupported loss function: '{name}'") 378 | -------------------------------------------------------------------------------- /baseline/model.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from torch.nn import functional as F 4 | from functools import partial 5 | 6 | 7 | def number_of_features_per_level(init_channel_number, num_levels): 8 | return [init_channel_number * 2 ** k for k in range(num_levels)] 9 | 10 | 11 | def conv3d(in_channels, out_channels, kernel_size, bias, padding): 12 | return nn.Conv3d(in_channels, out_channels, kernel_size, padding=padding, bias=bias) 13 | 14 | 15 | def create_conv(in_channels, out_channels, kernel_size, order, num_groups, padding): 16 | assert 'c' in order, "Conv layer MUST be present" 17 | assert order[0] not in 'rle', 'Non-linearity cannot be the first operation in the layer' 18 | 19 | modules = [] 20 | for i, char in enumerate(order): 21 | if char == 'r': 22 | modules.append(('ReLU', nn.ReLU(inplace=True))) 23 | elif char == 'l': 24 | modules.append(('LeakyReLU', nn.LeakyReLU(inplace=True))) 25 | elif char == 'e': 26 | modules.append(('ELU', nn.ELU(inplace=True))) 27 | elif char == 'c': 28 | # add learnable bias only in the absence of batchnorm/groupnorm 29 | bias = not ('g' in order or 'b' in order) 30 | modules.append(('conv', conv3d(in_channels, out_channels, kernel_size, bias, padding=padding))) 31 | elif char == 'g': 32 | is_before_conv = i < order.index('c') 33 | if is_before_conv: 34 | num_channels = in_channels 35 | else: 36 | num_channels = out_channels 37 | 38 | # use only one group if the given number of groups is greater than the number of channels 39 | if num_channels < num_groups: 40 | num_groups = 1 41 | 42 | assert num_channels % num_groups == 0, f'Expected number of channels in input to be divisible by num_groups. 
num_channels={num_channels}, num_groups={num_groups}' 43 | modules.append(('groupnorm', nn.GroupNorm(num_groups=num_groups, num_channels=num_channels))) 44 | elif char == 'b': 45 | is_before_conv = i < order.index('c') 46 | if is_before_conv: 47 | modules.append(('batchnorm', nn.BatchNorm3d(in_channels))) 48 | else: 49 | modules.append(('batchnorm', nn.BatchNorm3d(out_channels))) 50 | else: 51 | raise ValueError(f"Unsupported layer type '{char}'. MUST be one of ['b', 'g', 'r', 'l', 'e', 'c']") 52 | 53 | return modules 54 | 55 | 56 | class SingleConv(nn.Sequential): 57 | 58 | def __init__(self, in_channels, out_channels, kernel_size=3, order='gcr', num_groups=8, padding=1): 59 | super(SingleConv, self).__init__() 60 | 61 | for name, module in create_conv(in_channels, out_channels, kernel_size, order, num_groups, padding=padding): 62 | self.add_module(name, module) 63 | 64 | 65 | class DoubleConv(nn.Sequential): 66 | 67 | def __init__(self, in_channels, out_channels, encoder, kernel_size=3, order='gcr', num_groups=8, padding=1): 68 | super(DoubleConv, self).__init__() 69 | if encoder: 70 | # we're in the encoder path 71 | conv1_in_channels = in_channels 72 | conv1_out_channels = out_channels // 2 73 | if conv1_out_channels < in_channels: 74 | conv1_out_channels = in_channels 75 | conv2_in_channels, conv2_out_channels = conv1_out_channels, out_channels 76 | else: 77 | # we're in the decoder path, decrease the number of channels in the 1st convolution 78 | conv1_in_channels, conv1_out_channels = in_channels, out_channels 79 | conv2_in_channels, conv2_out_channels = out_channels, out_channels 80 | 81 | # conv1 82 | self.add_module('SingleConv1', 83 | SingleConv(conv1_in_channels, conv1_out_channels, kernel_size, order, num_groups, 84 | padding=padding)) 85 | # conv2 86 | self.add_module('SingleConv2', 87 | SingleConv(conv2_in_channels, conv2_out_channels, kernel_size, order, num_groups, 88 | padding=padding)) 89 | 90 | 91 | class Encoder(nn.Module): 92 | 93 | def __init__(self, in_channels, out_channels, conv_kernel_size=3, apply_pooling=True, 94 | pool_kernel_size=2, pool_type='max', basic_module=DoubleConv, conv_layer_order='gcr', 95 | num_groups=8, padding=1): 96 | super(Encoder, self).__init__() 97 | assert pool_type in ['max', 'avg'] 98 | if apply_pooling: 99 | if pool_type == 'max': 100 | self.pooling = nn.MaxPool3d(kernel_size=pool_kernel_size) 101 | else: 102 | self.pooling = nn.AvgPool3d(kernel_size=pool_kernel_size) 103 | else: 104 | self.pooling = None 105 | 106 | self.basic_module = basic_module(in_channels, out_channels, 107 | encoder=True, 108 | kernel_size=conv_kernel_size, 109 | order=conv_layer_order, 110 | num_groups=num_groups, 111 | padding=padding) 112 | 113 | def forward(self, x): 114 | if self.pooling is not None: 115 | x = self.pooling(x) 116 | x = self.basic_module(x) 117 | return x 118 | 119 | 120 | def create_encoders(in_channels, f_maps, basic_module, conv_kernel_size, conv_padding, layer_order, num_groups, 121 | pool_kernel_size): 122 | # create encoder path consisting of Encoder modules. Depth of the encoder is equal to `len(f_maps)` 123 | encoders = [] 124 | for i, out_feature_num in enumerate(f_maps): 125 | if i == 0: 126 | encoder = Encoder(in_channels, out_feature_num, 127 | apply_pooling=False, # skip pooling in the firs encoder 128 | basic_module=basic_module, 129 | conv_layer_order=layer_order, 130 | conv_kernel_size=conv_kernel_size, 131 | num_groups=num_groups, 132 | padding=conv_padding) 133 | else: 134 | # TODO: adapt for anisotropy in the data, i.e. 
use proper pooling kernel to make the data isotropic after 1-2 pooling operations 135 | encoder = Encoder(f_maps[i - 1], out_feature_num, 136 | basic_module=basic_module, 137 | conv_layer_order=layer_order, 138 | conv_kernel_size=conv_kernel_size, 139 | num_groups=num_groups, 140 | pool_kernel_size=pool_kernel_size, 141 | padding=conv_padding) 142 | 143 | encoders.append(encoder) 144 | 145 | return nn.ModuleList(encoders) 146 | 147 | 148 | class AbstractUpsampling(nn.Module): 149 | 150 | def __init__(self, upsample): 151 | super(AbstractUpsampling, self).__init__() 152 | self.upsample = upsample 153 | 154 | def forward(self, encoder_features, x): 155 | # get the spatial dimensions of the output given the encoder_features 156 | output_size = encoder_features.size()[2:] 157 | # upsample the input and return 158 | return self.upsample(x, output_size) 159 | 160 | 161 | class InterpolateUpsampling(AbstractUpsampling): 162 | 163 | def __init__(self, mode='nearest'): 164 | upsample = partial(self._interpolate, mode=mode) 165 | super().__init__(upsample) 166 | 167 | @staticmethod 168 | def _interpolate(x, size, mode): 169 | return F.interpolate(x, size=size, mode=mode) 170 | 171 | 172 | class TransposeConvUpsampling(AbstractUpsampling): 173 | 174 | def __init__(self, in_channels=None, out_channels=None, kernel_size=3, scale_factor=(2, 2, 2)): 175 | # make sure that the output size reverses the MaxPool3d from the corresponding encoder 176 | upsample = nn.ConvTranspose3d(in_channels, out_channels, kernel_size=kernel_size, stride=scale_factor, 177 | padding=1) 178 | super().__init__(upsample) 179 | 180 | 181 | class NoUpsampling(AbstractUpsampling): 182 | def __init__(self): 183 | super().__init__(self._no_upsampling) 184 | 185 | @staticmethod 186 | def _no_upsampling(x, size): 187 | return x 188 | 189 | 190 | class Decoder(nn.Module): 191 | 192 | def __init__(self, in_channels, out_channels, conv_kernel_size=3, scale_factor=(2, 2, 2), basic_module=DoubleConv, 193 | conv_layer_order='gcr', num_groups=8, mode='nearest', padding=1, upsample=True): 194 | super(Decoder, self).__init__() 195 | 196 | if upsample: 197 | if basic_module == DoubleConv: 198 | # if DoubleConv is the basic_module use interpolation for upsampling and concatenation joining 199 | self.upsampling = InterpolateUpsampling(mode=mode) 200 | # concat joining 201 | self.joining = partial(self._joining, concat=True) 202 | else: 203 | # if basic_module=ExtResNetBlock use transposed convolution upsampling and summation joining 204 | self.upsampling = TransposeConvUpsampling(in_channels=in_channels, out_channels=out_channels, 205 | kernel_size=conv_kernel_size, scale_factor=scale_factor) 206 | # sum joining 207 | self.joining = partial(self._joining, concat=False) 208 | # adapt the number of in_channels for the ExtResNetBlock 209 | in_channels = out_channels 210 | else: 211 | # no upsampling 212 | self.upsampling = NoUpsampling() 213 | # concat joining 214 | self.joining = partial(self._joining, concat=True) 215 | 216 | self.basic_module = basic_module(in_channels, out_channels, 217 | encoder=False, 218 | kernel_size=conv_kernel_size, 219 | order=conv_layer_order, 220 | num_groups=num_groups, 221 | padding=padding) 222 | 223 | def forward(self, encoder_features, x): 224 | x = self.upsampling(encoder_features=encoder_features, x=x) 225 | x = self.joining(encoder_features, x) 226 | x = self.basic_module(x) 227 | return x 228 | 229 | @staticmethod 230 | def _joining(encoder_features, x, concat): 231 | if concat: 232 | return 
torch.cat((encoder_features, x), dim=1) 233 | else: 234 | return encoder_features + x 235 | 236 | 237 | def create_decoders(f_maps, basic_module, conv_kernel_size, conv_padding, layer_order, num_groups, upsample): 238 | # create decoder path consisting of the Decoder modules. The length of the decoder list is equal to `len(f_maps) - 1` 239 | decoders = [] 240 | reversed_f_maps = list(reversed(f_maps)) 241 | for i in range(len(reversed_f_maps) - 1): 242 | if basic_module == DoubleConv: 243 | in_feature_num = reversed_f_maps[i] + reversed_f_maps[i + 1] 244 | else: 245 | in_feature_num = reversed_f_maps[i] 246 | 247 | out_feature_num = reversed_f_maps[i + 1] 248 | 249 | # TODO: if non-standard pooling was used, make sure to use correct striding for transpose conv 250 | # currently strides with a constant stride: (2, 2, 2) 251 | 252 | _upsample = True 253 | if i == 0: 254 | # upsampling can be skipped only for the 1st decoder, afterwards it should always be present 255 | _upsample = upsample 256 | 257 | decoder = Decoder(in_feature_num, out_feature_num, 258 | basic_module=basic_module, 259 | conv_layer_order=layer_order, 260 | conv_kernel_size=conv_kernel_size, 261 | num_groups=num_groups, 262 | padding=conv_padding, 263 | upsample=_upsample) 264 | decoders.append(decoder) 265 | return nn.ModuleList(decoders) 266 | 267 | 268 | class Abstract3DUNet(nn.Module): 269 | 270 | def __init__(self, in_channels, out_channels, final_sigmoid, basic_module, f_maps=64, layer_order='gcr', 271 | num_groups=8, num_levels=4, is_segmentation=True, conv_kernel_size=3, pool_kernel_size=2, 272 | conv_padding=1, **kwargs): 273 | super(Abstract3DUNet, self).__init__() 274 | 275 | if isinstance(f_maps, int): 276 | f_maps = number_of_features_per_level(f_maps, num_levels=num_levels) 277 | 278 | assert isinstance(f_maps, list) or isinstance(f_maps, tuple) 279 | assert len(f_maps) > 1, "Required at least 2 levels in the U-Net" 280 | 281 | self.encoders = create_encoders(in_channels, f_maps, basic_module, conv_kernel_size, conv_padding, layer_order, 282 | num_groups, pool_kernel_size) 283 | 284 | self.decoders = create_decoders(f_maps, basic_module, conv_kernel_size, conv_padding, layer_order, num_groups, 285 | upsample=True) 286 | 287 | self.final_conv = nn.Conv3d(f_maps[0], out_channels, 1) 288 | 289 | if is_segmentation: 290 | if final_sigmoid: 291 | self.final_activation = nn.Sigmoid() 292 | else: 293 | self.final_activation = nn.Softmax(dim=1) 294 | else: 295 | self.final_activation = None 296 | 297 | def forward(self, x): 298 | # encoder part 299 | encoders_features = [] 300 | for encoder in self.encoders: 301 | x = encoder(x) 302 | encoders_features.insert(0, x) 303 | 304 | encoders_features = encoders_features[1:] 305 | 306 | for decoder, encoder_features in zip(self.decoders, encoders_features): 307 | x = decoder(encoder_features, x) 308 | 309 | x = self.final_conv(x) 310 | 311 | if not self.training and self.final_activation is not None: 312 | x = self.final_activation(x) 313 | 314 | return x 315 | 316 | 317 | class UNet3D(Abstract3DUNet): 318 | 319 | def __init__(self, in_channels, out_channels, final_sigmoid=True, f_maps=64, layer_order='gcr', 320 | num_groups=8, num_levels=4, is_segmentation=True, conv_padding=1, **kwargs): 321 | super(UNet3D, self).__init__(in_channels=in_channels, 322 | out_channels=out_channels, 323 | final_sigmoid=final_sigmoid, 324 | basic_module=DoubleConv, 325 | f_maps=f_maps, 326 | layer_order=layer_order, 327 | num_groups=num_groups, 328 | num_levels=num_levels, 329 | 
is_segmentation=is_segmentation, 330 | conv_padding=conv_padding, 331 | **kwargs) 332 | -------------------------------------------------------------------------------- /baseline/params.pkl: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerceptionComputingLab/PARSE2022/cd772c5247729de0a6ae9a09b9f0940e765c71d4/baseline/params.pkl -------------------------------------------------------------------------------- /baseline/readme.md: -------------------------------------------------------------------------------- 1 | Baseline of PARSE 2022 challenge 2 | ============================== 3 | 4 | The repository gives an example of how to process the PARSE challenge data (implementation based on 3D-Unet). Any other preprocessing is welcomed and any framework can be used for the challenge, the only requirement is to submit `nii.gz` files with result shape / origin / spacing / direction consistent with the original CT images (please refer to `numpy2niigz()` in [`submit.py`](submit.py)). This repository also contains the code used to prepare the data of the challenge (data preprocessing, model training and submission result creation). 5 | 6 | Requirements 7 | ------------ 8 | Python 3.6, PyTorch 1.6 and other common packages are listed in [`requirements.txt`](requirements.txt) or [`requirements.yaml`](requirements.yaml). 9 | 10 | Organization 11 | ------------ 12 | Folders that aren't in the repository can be created by running the corresponding function. 13 | 14 | │ (3dunet) 15 | ├── README.md 16 | ├── params.pkl <- Example parameters for 3D-Unet 17 | ├── requirements.txt <- Txt requirements file of data processing 18 | ├── requirements.yaml <- Yaml requirements file of data processing 19 | ├── dataset 20 | │   ├── train <- Preprocessed training dataset folder 21 | │   │   ├── PA000005 <- training data (numpy array) 22 | │   │  ...  ├── dcm.npy 23 | │   │   └── label.npy 24 | │   ├── eval <- Preprocessed testing dataset folder 25 | │   │   ├── PA000013/dcm.npy <- testing data (numpy array) 26 | │   │  ... 27 | │ ├── dcm_volume_array.npy <- Image array of training dataset 28 | │   └── label_volume_array.npy <- label array of training dataset 29 | ├── submit 30 | │   ├── npy <- Prediction result folder (.npy) 31 | │   │   ├── PA000013.npy 32 | │   │ ... 33 | │   └── nii <- Submission result folder (.nii.gz) 34 | │      ├── PA000013.nii.gz 35 | │   ... 36 | ├── exp 37 | │   ├── XXYY_XXYY <- Training exp folder 38 | │  ... 39 | ├── feature.py <- Available functions 40 | ├── dataset.py <- Data preprocessing and loading 41 | ├── model.py <- 3D-Unet network model 42 | ├── losses.py <- Loss functions 43 | ├── train.py <- Model training 44 | ├── evalu.py <- Model validating and testing 45 | ├── submit.py <- Submit file creation 46 | └── config.py <- Setting of file paths and training parameters 47 | 48 | Usage 49 | ------------ 50 | The path of raw data (`root_raw_train_data` and `root_raw_eval_data`) needs to be set in [`config.py`](config.py), where raw data files need to be organized in the following way: 51 | 52 | │ (parse2022) 53 | ├── train <- root_raw_train_data is set to the path 54 | │   ├── PA000005 55 | │  ...  ├── image/PA000005.nii.gz 56 | │    └── label/PA000005.nii.gz 57 | └── validate <- root_raw_eval_data is set to the path 58 |    ├── PA000013/image/PA000013.nii.gz 59 |   ... 60 | 61 | 62 | `dataset_preprocessing()` in [`dataset.py`](dataset.py) reads and preprocesses raw data for model training and testing. 
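For reference, the whole baseline pipeline described in this section can be driven by a short script like the sketch below. This is only a sketch: `dataset.py` is not reproduced here, so the exact signature of `dataset_preprocessing()` is an assumption (it appears to take no arguments and to read the paths configured in [`config.py`](config.py)); `training()` and `submit_pred()` are described in the following paragraphs.

```python
# Sketch of running the baseline end to end, assuming all paths are set in
# config.py and dataset_preprocessing() takes no arguments (check dataset.py).
from dataset import dataset_preprocessing
from train import training
from submit import submit_pred

dataset_preprocessing()        # write dataset/train and dataset/eval numpy volumes
training()                     # train the 3D-Unet; parameters are saved under exp/<time>/param/
submit_pred(dice_calcu=False)  # predict the eval data and create submit/nii/*.nii.gz
```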
63 | 64 | `training()` in [`train.py`](train.py) trains the model and saves the parameters, and training-related hyperparameters can be set in [`config.py`](config.py). 65 | 66 | After `root_model_param` in [`config.py`](config.py) setting, `submit_pred()` in [`submit.py`](submit.py) can read testing data and creat submission results. 67 | 68 | Acknowledgements 69 | ------------ 70 | - Pytorch-3dunet: https://github.com/wolny/pytorch-3dunet 71 | - PARSE: https://github.com/XinghuaMa/PARSE 72 | -------------------------------------------------------------------------------- /baseline/requirements.txt: -------------------------------------------------------------------------------- 1 | absl-py==0.11.0 2 | astor==0.8.1 3 | bleach==1.5.0 4 | certifi==2020.12.5 5 | cffi==1.15.0 6 | cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1598884132938/work 7 | cycler==0.10.0 8 | cytoolz==0.11.0 9 | dask @ file:///tmp/build/80754af9/dask-core_1602083700509/work 10 | decorator==4.4.2 11 | einops @ file:///home/conda/feedstock_root/build_artifacts/einops_1602792478173/work 12 | gast==0.4.0 13 | grpcio==1.34.0 14 | h5py==2.8.0 15 | html5lib==0.9999999 16 | imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work 17 | importlib-metadata==3.3.0 18 | joblib @ file:///tmp/build/80754af9/joblib_1601912903842/work 19 | kiwisolver @ file:///tmp/build/80754af9/kiwisolver_1604014557000/work 20 | Markdown==3.3.3 21 | matplotlib @ file:///tmp/build/80754af9/matplotlib-base_1603373816490/work 22 | mkl-fft==1.2.0 23 | mkl-random==1.1.1 24 | mkl-service==2.3.0 25 | networkx @ file:///tmp/build/80754af9/networkx_1598376031484/work 26 | nibabel==3.2.2 27 | numpy @ file:///tmp/build/80754af9/numpy_and_numpy_base_1603487797006/work 28 | olefile==0.46 29 | packaging==21.3 30 | pandas @ file:///tmp/build/80754af9/pandas_1602088135163/work 31 | Pillow @ file:///tmp/build/80754af9/pillow_1603822238230/work 32 | protobuf==3.14.0 33 | pycparser==2.21 34 | pydicom @ file:///home/conda/feedstock_root/build_artifacts/pydicom_1604367895328/work 35 | pynrrd==0.4.2 36 | pyparsing==2.4.7 37 | python-dateutil==2.8.1 38 | pytz==2020.1 39 | PyWavelets @ file:///tmp/build/80754af9/pywavelets_1601658315997/work 40 | PyYAML==5.3.1 41 | scikit-image==0.17.2 42 | scikit-learn @ file:///tmp/build/80754af9/scikit-learn_1598379690588/work 43 | scipy @ file:///tmp/build/80754af9/scipy_1597686625380/work 44 | SimpleITK==2.1.1 45 | six @ file:///tmp/build/80754af9/six_1605205335545/work 46 | SoundFile==0.10.3.post1 47 | tensorboard==1.7.0 48 | tensorboardX==2.1 49 | tensorflow==1.7.0 50 | termcolor==1.1.0 51 | threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl 52 | tifffile==2020.10.1 53 | toolz @ file:///tmp/build/80754af9/toolz_1601054250827/work 54 | torch==1.6.0 55 | torchvision==0.7.0 56 | tornado==6.0.4 57 | tqdm @ file:///tmp/build/80754af9/tqdm_1605303662894/work 58 | typing-extensions==3.7.4.3 59 | Werkzeug==1.0.1 60 | zipp==3.4.0 61 | -------------------------------------------------------------------------------- /baseline/requirements.yaml: -------------------------------------------------------------------------------- 1 | name: deeplearning 2 | channels: 3 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch 4 | - conda-forge 5 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge 6 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main 7 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/ 8 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ 9 | - 
https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ 10 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/ 11 | - defaults 12 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/fastai/ 13 | - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/ 14 | dependencies: 15 | - _libgcc_mutex=0.1=main 16 | - blas=1.0=mkl 17 | - bzip2=1.0.8=h7b6447c_0 18 | - ca-certificates=2021.1.19=h06a4308_1 19 | - cairo=1.14.12=h8948797_3 20 | - certifi=2020.12.5=py36h06a4308_0 21 | - cloudpickle=1.6.0=py_0 22 | - cudatoolkit=10.1.243=h6bb024c_0 23 | - cycler=0.10.0=py36_0 24 | - cytoolz=0.11.0=py36h7b6447c_0 25 | - dask-core=2.30.0=py_0 26 | - decorator=4.4.2=py_0 27 | - einops=0.3.0=py_0 28 | - ffmpeg=4.0=hcdf2ecd_0 29 | - fontconfig=2.13.0=h9420a91_0 30 | - freeglut=3.0.0=hf484d3e_5 31 | - freetype=2.10.4=h5ab3b9f_0 32 | - glib=2.66.1=h92f7085_0 33 | - graphite2=1.3.14=h23475e2_0 34 | - h5py=2.8.0=py36h989c5e5_3 35 | - harfbuzz=1.8.8=hffaf4a1_0 36 | - hdf5=1.10.2=hba1933b_1 37 | - icu=58.2=he6710b0_3 38 | - imageio=2.9.0=py_0 39 | - intel-openmp=2020.2=254 40 | - jasper=2.0.14=h07fcdf6_1 41 | - joblib=0.17.0=py_0 42 | - jpeg=9b=h024ee3a_2 43 | - kiwisolver=1.3.0=py36h2531618_0 44 | - lcms2=2.11=h396b838_0 45 | - ld_impl_linux-64=2.33.1=h53a641e_7 46 | - libedit=3.1.20191231=h14c3975_1 47 | - libffi=3.3=he6710b0_2 48 | - libgcc-ng=9.1.0=hdf63c60_0 49 | - libgfortran-ng=7.3.0=hdf63c60_0 50 | - libglu=9.0.0=hf484d3e_1 51 | - libopencv=3.4.2=hb342d67_1 52 | - libopus=1.3.1=h7b6447c_0 53 | - libpng=1.6.37=hbc83047_0 54 | - libstdcxx-ng=9.1.0=hdf63c60_0 55 | - libtiff=4.1.0=h2733197_1 56 | - libuuid=1.0.3=h1bed415_2 57 | - libvpx=1.7.0=h439df22_0 58 | - libxcb=1.14=h7b6447c_0 59 | - libxml2=2.9.10=hb55368b_3 60 | - lz4-c=1.9.2=heb0550a_3 61 | - matplotlib-base=3.3.2=py36h817c723_0 62 | - mkl=2020.2=256 63 | - mkl-service=2.3.0=py36he904b0f_0 64 | - mkl_fft=1.2.0=py36h23d657b_0 65 | - mkl_random=1.1.1=py36h0573a6f_0 66 | - ncurses=6.2=he6710b0_1 67 | - networkx=2.5=py_0 68 | - ninja=1.10.1=py36hfd86e86_0 69 | - numpy=1.19.2=py36h54aff64_0 70 | - numpy-base=1.19.2=py36hfa32c7d_0 71 | - olefile=0.46=py36_0 72 | - opencv=3.4.2=py36h6fd60c2_1 73 | - openssl=1.1.1j=h27cfd23_0 74 | - pandas=1.1.3=py36he6710b0_0 75 | - pcre=8.44=he6710b0_0 76 | - pillow=8.0.1=py36he98fc37_0 77 | - pip=20.2.4=py36h06a4308_0 78 | - pixman=0.40.0=h7b6447c_0 79 | - py-opencv=3.4.2=py36hb342d67_1 80 | - pydicom=2.1.0=pyhd3deb0d_0 81 | - pyparsing=2.4.7=py_0 82 | - python=3.6.12=hcff3b4d_2 83 | - python-dateutil=2.8.1=py_0 84 | - python_abi=3.6=1_cp36m 85 | - pytorch=1.6.0=py3.6_cuda10.1.243_cudnn7.6.3_0 86 | - pytz=2020.1=py_0 87 | - pywavelets=1.1.1=py36h7b6447c_2 88 | - pyyaml=5.3.1=py36h7b6447c_1 89 | - readline=8.0=h7b6447c_0 90 | - scikit-image=0.17.2=py36hdf5156a_0 91 | - scikit-learn=0.23.2=py36h0573a6f_0 92 | - scipy=1.5.2=py36h0b6359f_0 93 | - setuptools=50.3.1=py36h06a4308_1 94 | - six=1.15.0=py36h06a4308_0 95 | - sqlite=3.33.0=h62c20be_0 96 | - threadpoolctl=2.1.0=pyh5ca1d4c_0 97 | - tifffile=2020.10.1=py36hdd07704_2 98 | - tk=8.6.10=hbc83047_0 99 | - toolz=0.11.1=py_0 100 | - torchvision=0.7.0=py36_cu101 101 | - tornado=6.0.4=py36h7b6447c_1 102 | - tqdm=4.51.0=pyhd3eb1b0_0 103 | - wheel=0.35.1=py_0 104 | - xz=5.2.5=h7b6447c_0 105 | - yaml=0.2.5=h7b6447c_0 106 | - zlib=1.2.11=h7b6447c_3 107 | - zstd=1.4.5=h9ceee32_0 108 | - pip: 109 | - absl-py==0.11.0 110 | - astor==0.8.1 111 | - bleach==1.5.0 112 | - cffi==1.15.0 113 | - gast==0.4.0 114 | - grpcio==1.34.0 115 | - html5lib==0.9999999 116 | - 
importlib-metadata==3.3.0 117 | - markdown==3.3.3 118 | - nibabel==3.2.2 119 | - packaging==21.3 120 | - protobuf==3.14.0 121 | - pycparser==2.21 122 | - pynrrd==0.4.2 123 | - simpleitk==2.1.1 124 | - soundfile==0.10.3.post1 125 | - tensorboard==1.7.0 126 | - tensorboardx==2.1 127 | - tensorflow==1.7.0 128 | - termcolor==1.1.0 129 | - typing-extensions==3.7.4.3 130 | - werkzeug==1.0.1 131 | - zipp==3.4.0 132 | prefix: /home/mxh/.conda/envs/deeplearning 133 | 134 | -------------------------------------------------------------------------------- /baseline/submit.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from config import opt 3 | import feature as feat 4 | from tqdm import tqdm 5 | from model import UNet3D 6 | import torch 7 | import dataset 8 | from torch.utils.data import DataLoader 9 | from datetime import datetime 10 | import nibabel as nib 11 | import os 12 | import SimpleITK as sitk 13 | import evalu 14 | 15 | 16 | def eval_data_maker(label_mask=False, lung_mask=False): 17 | 18 | root_eval_data = opt.root_raw_eval_data 19 | root_eval_volume = opt.root_eval_volume 20 | 21 | if not os.path.exists(root_eval_volume): 22 | os.makedirs(root_eval_volume) 23 | 24 | fileList = os.listdir(root_eval_data) 25 | for i in tqdm(range(len(fileList))): 26 | root_data = f'{root_eval_data}{fileList[i]}/' 27 | root_dcm = f'{root_data}image/{fileList[i]}.nii.gz' 28 | 29 | dcm_img = np.swapaxes(np.array(nib.load(root_dcm).dataobj), 0, 2) 30 | dcm_img = feat.add_window(dcm_img) 31 | 32 | root_save = f'{root_eval_volume}{fileList[i]}/' 33 | if not os.path.exists(root_save): 34 | os.makedirs(root_save) 35 | np.save(f'{root_save}/dcm.npy', dcm_img) 36 | 37 | # feat.info_data(dcm_img) 38 | # feat.info_data(label_img) 39 | if label_mask: 40 | root_label = f'{root_data}label/{fileList[i]}.nii.gz' 41 | label_img = np.swapaxes(np.array(nib.load(root_label).dataobj), 0, 2) 42 | label_img = label_img.astype(np.uint8) 43 | np.save(f'{root_save}/label.npy', label_img) 44 | 45 | if lung_mask: 46 | root_lung = f'{root_data}lung/{fileList[i]}.nii.gz' 47 | lung_img = np.swapaxes(np.array(nib.load(root_lung).dataobj), 0, 2) 48 | np.save(f'{root_save}/lung.npy', lung_img) 49 | feat.info_data(lung_img) 50 | 51 | 52 | def pred_folder_creation(): 53 | 54 | if not os.path.exists(opt.root_eval_volume): 55 | os.makedirs(opt.root_eval_volume) 56 | 57 | root_submit_file = opt.root_submit_file 58 | if not os.path.exists(opt.root_submit_file): 59 | os.makedirs(opt.root_submit_file) 60 | if not os.path.exists(f'{root_submit_file}npy/'): 61 | os.makedirs(f'{root_submit_file}npy/') 62 | if not os.path.exists(f'{root_submit_file}nii/'): 63 | os.makedirs(f'{root_submit_file}nii/') 64 | 65 | 66 | def eval_data(eval_model, save_pred=False, root_save=None, dice_calcu=False): 67 | 68 | dice_counter=None 69 | if dice_calcu: 70 | dice_counter = feat.Counter() 71 | 72 | root_eval = opt.root_eval_volume 73 | fileList = os.listdir(root_eval) 74 | 75 | for i in tqdm(range(len(fileList))): 76 | 77 | root_data = f'{root_eval}{fileList[i]}/' 78 | root_dcm = f'{root_data}dcm.npy' 79 | array_dcm = np.load(root_dcm) 80 | array_pred = pred_volume(eval_model, array_dcm) 81 | # print(array_pred.shape, array_label.shape) 82 | if dice_calcu: 83 | root_label = f'{root_data}label.npy' 84 | array_label = np.load(root_label) 85 | dice = evalu.dice_coefficient(array_pred, array_label) 86 | dice_counter.updata(dice) 87 | 88 | if save_pred: 89 | np.save(f'{root_save}{fileList[i]}.npy', array_pred) 
90 | 91 | if dice_calcu: 92 | print(f"prediction averaged dice:", round(dice_counter.avg * 100, 1)) 93 | return 94 | 95 | 96 | 97 | def pred_volume(pred_model, dcm_array): 98 | 99 | dcm_volume = dcm_array.astype(np.float32) 100 | image_mean, image_std = np.mean(dcm_volume), np.std(dcm_volume) 101 | dcm_volume = (dcm_volume - image_mean) / image_std 102 | 103 | znum, xnum, ynum = int(dcm_volume.shape[0] / 96), int(dcm_volume.shape[1] / 96), int(dcm_volume.shape[2] / 96) 104 | if dcm_volume.shape[0] % 96: 105 | znum += 1 106 | if dcm_volume.shape[1] % 96: 107 | xnum += 1 108 | if dcm_volume.shape[2] % 96: 109 | ynum += 1 110 | 111 | volume_array, pred_array = np.zeros((znum * xnum * ynum, 96, 96, 96), dtype=np.float32), np.zeros( 112 | (dcm_volume.shape[0], dcm_volume.shape[1], dcm_volume.shape[2]), dtype=np.uint8) 113 | for i in range(znum): 114 | for j in range(xnum): 115 | for k in range(ynum): 116 | tmp_array = dcm_volume[i * 96: min(dcm_volume.shape[0], (i + 1) * 96), 117 | j * 96: min(dcm_volume.shape[1], (j + 1) * 96), 118 | k * 96: min(dcm_volume.shape[2], (k + 1) * 96)] 119 | 120 | id = i * xnum * ynum + j * ynum + k 121 | volume_array[id, :tmp_array.shape[0], :tmp_array.shape[1], :tmp_array.shape[2]] = tmp_array[:, :, :] 122 | 123 | if opt.use_gpu is True: 124 | pred_model = pred_model.cuda() 125 | 126 | pred_model.eval() 127 | 128 | for i in range(volume_array.shape[0]): 129 | 130 | tmp_volume = torch.tensor(volume_array[i], dtype=torch.float32) 131 | tmp_volume = torch.unsqueeze(torch.unsqueeze(tmp_volume, 0), 0) 132 | 133 | Input = tmp_volume.requires_grad_() 134 | if opt.use_gpu is True: 135 | Input = Input.cuda() 136 | 137 | pred = pred_model(Input) 138 | 139 | if opt.use_gpu: 140 | pred = pred.cpu() 141 | 142 | pred = torch.squeeze(torch.squeeze(pred, 0), 0) 143 | 144 | pred = pred.detach().numpy() 145 | pred[pred <= 0.5], pred[pred > 0.5] = 0, 1 146 | pred = pred.astype(np.uint8) 147 | 148 | tznum = int(i / (xnum * ynum)) 149 | txnum = int((i - tznum * (xnum * ynum)) / ynum) 150 | tynum = i - tznum * (xnum * ynum) - txnum * ynum 151 | 152 | pred_array[tznum * 96:min(dcm_volume.shape[0], (tznum + 1) * 96), 153 | txnum * 96:min(dcm_volume.shape[1], (txnum + 1) * 96), 154 | tynum * 96:min(dcm_volume.shape[2], (tynum + 1) * 96)] = pred[: min(dcm_volume.shape[0], (tznum + 1) * 96) - tznum * 96, 155 | : min(dcm_volume.shape[1], (txnum + 1) * 96) - txnum * 96, 156 | : min(dcm_volume.shape[2], (tynum + 1) * 96) - tynum * 96] 157 | 158 | return pred_array 159 | 160 | 161 | def numpy2niigz(root_numpy, root_niigz): 162 | fileList = os.listdir(root_numpy) 163 | for i in range(len(fileList)): 164 | root_data = f'{root_numpy}{fileList[i]}' 165 | file_name = str(fileList[i]).split('.')[0] 166 | 167 | root_tmp_data = f'{opt.root_raw_eval_data}{file_name}/image/{file_name}.nii.gz' 168 | dicom = sitk.ReadImage(root_tmp_data) 169 | out = sitk.GetImageFromArray(np.load(root_data)) 170 | out.SetOrigin(dicom.GetOrigin()) 171 | out.SetSpacing(dicom.GetSpacing()) 172 | out.SetDirection(dicom.GetDirection()) 173 | 174 | sitk.WriteImage(out, f'{root_niigz}/{file_name}.nii.gz') 175 | 176 | 177 | def submit_pred(dice_calcu=False): 178 | 179 | pred_folder_creation() 180 | print('folder creation finish!') 181 | eval_data_maker(label_mask=dice_calcu) 182 | 183 | root_submit_file = opt.root_submit_file 184 | Unet = UNet3D(1, 1) 185 | Unet.load_state_dict(torch.load(opt.root_model_param)) 186 | print('model loading finish!') 187 | 188 | pred_numpy_array = f'{root_submit_file}npy/' 189 | pred_nii_array = 
f'{root_submit_file}nii/' 190 | 191 | eval_data(Unet, save_pred=True, root_save=pred_numpy_array, dice_calcu=dice_calcu) 192 | print('volume prediction finish!') 193 | numpy2niigz(pred_numpy_array, pred_nii_array) 194 | print('niigz changing finish!') 195 | 196 | return -------------------------------------------------------------------------------- /baseline/train.py: -------------------------------------------------------------------------------- 1 | from torch.utils.data import DataLoader 2 | import numpy as np 3 | import logging 4 | import torch 5 | from tqdm import tqdm 6 | from datetime import datetime 7 | 8 | from config import opt 9 | import dataset 10 | import feature as feat 11 | from model import UNet3D 12 | import losses 13 | import evalu 14 | import pandas as pd 15 | import os 16 | 17 | def get_logger(filename, verbosity=1, name=None): 18 | level_dict = {0: logging.DEBUG, 1: logging.INFO, 2: logging.WARNING} 19 | formatter = logging.Formatter( 20 | "[%(asctime)s][%(filename)s][line:%(lineno)d][%(levelname)s] %(message)s" 21 | ) 22 | logger = logging.getLogger(name) 23 | logger.setLevel(level_dict[verbosity]) 24 | 25 | fh = logging.FileHandler(filename, "w") 26 | fh.setFormatter(formatter) 27 | logger.addHandler(fh) 28 | 29 | sh = logging.StreamHandler() 30 | sh.setFormatter(formatter) 31 | logger.addHandler(sh) 32 | 33 | return logger 34 | 35 | 36 | class Loss_Saver: 37 | def __init__(self, moving=False): 38 | self.loss_list, self.last_loss = [], 0.0 39 | self.moving = moving 40 | return 41 | 42 | def updata(self, value): 43 | 44 | if not self.moving: 45 | self.loss_list += [value] 46 | elif not self.loss_list: 47 | self.loss_list += [value] 48 | self.last_loss = value 49 | else: 50 | update_val = self.last_loss * 0.9 + value * 0.1 51 | self.loss_list += [update_val] 52 | self.last_loss = update_val 53 | return 54 | 55 | def loss_drawing(self, root_file): 56 | 57 | loss_array = np.array(self.loss_list) 58 | colname = ['loss'] 59 | listPF = pd.DataFrame(columns=colname, data=loss_array) 60 | listPF.to_csv(f'{root_file}loss.csv', encoding='gbk') 61 | 62 | return 63 | 64 | 65 | def training(): 66 | 67 | Unet = UNet3D(1, 1) 68 | Unet.load_state_dict(torch.load(opt.root_model_param)) 69 | 70 | if opt.use_gpu is True : 71 | Unet = Unet.cuda() 72 | 73 | loss_fn = losses.DiceLoss() 74 | optimizer = torch.optim.Adam(Unet.parameters(), lr=opt.learning_rate) 75 | scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=opt.decay_LR[0], gamma=opt.decay_LR[1]) 76 | 77 | train_dataset = dataset.volume_dataset(range_data=[0, 0.01]) 78 | train_dataLoader = DataLoader(train_dataset, batch_size=opt.batch_size, shuffle=True) 79 | print('train dataset loading finish!') 80 | 81 | eval_dataset = dataset.volume_dataset(range_data=[0.99, 1]) 82 | eval_dataLoader = DataLoader(eval_dataset, batch_size=opt.batch_size, shuffle=False) 83 | print('eval dataset loading finish!') 84 | 85 | if not os.path.exists(opt.root_exp_file): 86 | os.makedirs(opt.root_exp_file) 87 | print('training exp ready!') 88 | 89 | time = datetime.now() 90 | name_exp = f'{str(time.month).zfill(2)}{str(time.day).zfill(2)}_{str(time.hour).zfill(2)}' \ 91 | f'{str(time.minute).zfill(2)}' 92 | 93 | root_nowexp = f'{opt.root_exp_file}/{name_exp}/' 94 | feat.create_root(root_nowexp) 95 | root_param = f'{root_nowexp}param/' 96 | feat.create_root(root_param) 97 | logger = get_logger(f'{root_nowexp}exp.log') 98 | logger.info('start training!') 99 | losssaver, max_acc = Loss_Saver(), 0.0 100 | for epoch in range(opt.max_epoch): 101 
| 102 | Unet.train() 103 | epoch_loss = feat.Counter() 104 | train_dataset.update_num_epoch(epoch) 105 | for batch_id, (dcm_image, label_image) in tqdm(enumerate(train_dataLoader), 106 | total=int(len(train_dataset) / opt.batch_size)): 107 | if dcm_image.shape[0] < opt.batch_size: 108 | continue 109 | 110 | Input, target = dcm_image.requires_grad_(), label_image 111 | if opt.use_gpu is True: 112 | Input, target = Input.cuda(), target.cuda() 113 | 114 | pred = Unet(Input) 115 | loss = loss_fn(pred, target) 116 | 117 | epoch_loss.updata(float(loss.item())) 118 | 119 | optimizer.zero_grad() 120 | loss.backward() 121 | optimizer.step() 122 | 123 | losssaver.updata(epoch_loss.avg) 124 | 125 | Unet.eval() 126 | 127 | # dice = evalu.eval_data(Unet, save_pred=False) 128 | train_dice = evalu.evalu_dice(Unet, train_dataset, train_dataLoader, eval_total=0.2) 129 | evalu_dice = evalu.evalu_dice(Unet, eval_dataset, eval_dataLoader) 130 | 131 | logger.info('Epoch:[{}/{}]\t loss={:.5f}\t train dice:{:.1f} \t evalu dice:{:.1f}'.format(epoch, opt.max_epoch, 132 | epoch_loss.avg, 133 | train_dice, 134 | evalu_dice)) 135 | scheduler.step() 136 | 137 | torch.save(Unet.state_dict(), f'{root_param}EP{epoch}_Dice{evalu_dice}.pkl') 138 | 139 | losssaver.loss_drawing(f'{opt.root_exp_file}/{name_exp}/') 140 | logger.info('finish training!') 141 | return 142 | -------------------------------------------------------------------------------- /build-docker-image-example/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime 2 | 3 | WORKDIR /workspace 4 | 5 | COPY . /workspace 6 | 7 | RUN pip install -r requirements.txt 8 | 9 | CMD ["python","main.py","--input_dir=/input","--predict_dir=/predict"] -------------------------------------------------------------------------------- /build-docker-image-example/U_net_Model.py: -------------------------------------------------------------------------------- 1 | from collections import OrderedDict 2 | import torch 3 | import torch.nn as nn 4 | 5 | kernel_size = 3 6 | 7 | 8 | class UNet(nn.Module): 9 | 10 | def __init__(self, in_channels=2, out_channels=2, init_features=16): 11 | super(UNet, self).__init__() 12 | 13 | features = init_features 14 | self.encoder1 = UNet._block(in_channels, features, name="enc1") 15 | self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) 16 | self.encoder2 = UNet._block(features, features * 2, name="enc2") 17 | self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) 18 | self.encoder3 = UNet._block(features * 2, features * 4, name="enc3") 19 | self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2) 20 | self.encoder4 = UNet._block(features * 4, features * 8, name="enc4") 21 | self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2) 22 | 23 | self.bottleneck = UNet._block(features * 8, features * 16, name="bottleneck") 24 | 25 | self.upconv4 = nn.ConvTranspose2d( 26 | features * 16, features * 8, kernel_size=2, stride=2 27 | ) 28 | self.decoder4 = UNet._block((features * 8) * 2, features * 8, name="dec4") 29 | self.upconv3 = nn.ConvTranspose2d( 30 | features * 8, features * 4, kernel_size=2, stride=2 31 | ) 32 | self.decoder3 = UNet._block((features * 4) * 2, features * 4, name="dec3") 33 | self.upconv2 = nn.ConvTranspose2d( 34 | features * 4, features * 2, kernel_size=2, stride=2 35 | ) 36 | self.decoder2 = UNet._block((features * 2) * 2, features * 2, name="dec2") 37 | self.upconv1 = nn.ConvTranspose2d( 38 | features * 2, features, kernel_size=2, stride=2 39 | ) 40 | 
self.decoder1 = UNet._block(features * 2, features, name="dec1") 41 | 42 | self.conv = nn.Conv2d( 43 | in_channels=features, out_channels=out_channels, kernel_size=1 44 | ) 45 | 46 | def forward(self, x): 47 | enc1 = self.encoder1(x) 48 | enc2 = self.encoder2(self.pool1(enc1)) 49 | enc3 = self.encoder3(self.pool2(enc2)) 50 | enc4 = self.encoder4(self.pool3(enc3)) 51 | 52 | bottleneck = self.bottleneck(self.pool4(enc4)) 53 | 54 | dec4 = self.upconv4(bottleneck) 55 | dec4 = torch.cat((dec4, enc4), dim=1) 56 | dec4 = self.decoder4(dec4) 57 | dec3 = self.upconv3(dec4) 58 | dec3 = torch.cat((dec3, enc3), dim=1) 59 | dec3 = self.decoder3(dec3) 60 | dec2 = self.upconv2(dec3) 61 | dec2 = torch.cat((dec2, enc2), dim=1) 62 | dec2 = self.decoder2(dec2) 63 | dec1 = self.upconv1(dec2) 64 | dec1 = torch.cat((dec1, enc1), dim=1) 65 | dec1 = self.decoder1(dec1) 66 | return self.conv(dec1) #torch.sigmoid(self.conv(dec1)) 67 | 68 | @staticmethod 69 | def _block(in_channels, features, name): 70 | return nn.Sequential( 71 | OrderedDict( 72 | [ 73 | ( 74 | name + "conv1", 75 | nn.Conv2d( 76 | in_channels=in_channels, 77 | out_channels=features, 78 | kernel_size=3, 79 | padding=1, 80 | bias=False, 81 | ), 82 | ), 83 | (name + "norm1", nn.BatchNorm2d(num_features=features)), 84 | (name + "relu1", nn.ReLU(inplace=True)), 85 | ( 86 | name + "conv2", 87 | nn.Conv2d( 88 | in_channels=features, 89 | out_channels=features, 90 | kernel_size=3, 91 | padding=1, 92 | bias=False, 93 | ), 94 | ), 95 | (name + "norm2", nn.BatchNorm2d(num_features=features)), 96 | (name + "relu2", nn.ReLU(inplace=True)), 97 | ] 98 | ) 99 | ) 100 | 101 | 102 | -------------------------------------------------------------------------------- /build-docker-image-example/main.py: -------------------------------------------------------------------------------- 1 | import argparse, os 2 | import numpy as np 3 | import torch 4 | import U_net_Model as Model 5 | import SimpleITK as sitk 6 | 7 | def load_model(): 8 | input_channels = 5 9 | out_channels = 2 10 | model = Model.UNet(in_channels=input_channels, out_channels=out_channels) 11 | model_dict = torch.load('./model_lung/best_model.pth')["state_dict"] 12 | 13 | model.load_state_dict(model_dict) 14 | 15 | device = "cuda:0" if torch.cuda.is_available() else "cpu" 16 | model = model.to(device) 17 | 18 | return model 19 | 20 | 21 | 22 | def get_model_input(ct_array,resolution): 23 | wc,ww = -600,1600 24 | ct_array = (ct_array - wc) / ww 25 | sample_list = [] 26 | z,y,x = ct_array.shape 27 | window = (-5, -2, 0, 2, 5) 28 | for slice_index in range(z): 29 | sample = np.zeros([len(window), y, x], 'float32') 30 | for idx in range(len(window)): 31 | slice_id = slice_index + int(window[idx]/resolution[0]) 32 | if slice_id >= 0 and slice_id < z: 33 | sample[idx,:,:] = ct_array[slice_id,:,:] 34 | sample_list.append(sample) 35 | return sample_list 36 | 37 | 38 | 39 | def predict(test_model, sample_list): 40 | sample_array = np.stack(sample_list, axis=0) # z * 5 * y * x 41 | batch_size = 8 42 | prediction_list = [] 43 | index = 0 44 | soft_max = torch.nn.Softmax(dim=1) 45 | test_model.eval() 46 | with torch.no_grad(): 47 | while index < len(sample_list): 48 | index_end = index + batch_size 49 | if index_end >= len(sample_list): 50 | index_end = len(sample_list) 51 | inputs = torch.from_numpy(sample_array[index: index_end, :, :, :]).cuda() 52 | prediction = test_model(inputs) 53 | prediction = soft_max(prediction) 54 | prediction = prediction.cpu().numpy() # batch_size * 2 * y * x 55 | 
prediction_list.append(prediction) 56 | index = index_end 57 | prediction_array = np.concatenate(prediction_list, axis=0) # z * 2 * y * x 58 | lung_mask = np.array(prediction_array[:,1,:,:] > 0.5, 'float32') 59 | return lung_mask 60 | 61 | 62 | 63 | if __name__ == '__main__': 64 | 65 | parser = argparse.ArgumentParser(description='lung segmentation of a ct volume') 66 | parser.add_argument('--input_dir', default='', type=str, metavar='PATH', 67 | help='this directory contains all test samples (ct volumes)') 68 | parser.add_argument('--predict_dir', default='', type=str, metavar='PATH', 69 | help='segmentation file of each test sample should be stored in this directory') 70 | 71 | args = parser.parse_args() 72 | input_dir = args.input_dir 73 | predict_dir = args.predict_dir 74 | 75 | test_model = load_model() 76 | print("model loaded successfully!") 77 | 78 | for ct_file in os.listdir(input_dir): 79 | input_file = os.path.join(input_dir, ct_file) 80 | dataname = ct_file.split('.')[0] 81 | 82 | input_image = sitk.ReadImage(input_file) 83 | input_array = sitk.GetArrayFromImage(input_image) 84 | resolution = input_image.GetSpacing() 85 | resolution = (resolution[2], resolution[1], resolution[0]) 86 | 87 | sample_list = get_model_input(input_array, resolution) 88 | print("start predicting input volume!", dataname) 89 | lung_mask = predict(test_model, sample_list) 90 | 91 | mask_image = sitk.GetImageFromArray(lung_mask) 92 | mask_image.SetOrigin(input_image.GetOrigin()) 93 | mask_image.SetSpacing(input_image.GetSpacing()) 94 | mask_image.SetDirection(input_image.GetDirection()) 95 | sitk.WriteImage(mask_image, os.path.join(predict_dir, dataname + '.nii.gz')) 96 | print("segmentation is generated successfully!", dataname) 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | -------------------------------------------------------------------------------- /build-docker-image-example/requirements.txt: -------------------------------------------------------------------------------- 1 | numpy==1.17.5 2 | SimpleITK==1.2.4 -------------------------------------------------------------------------------- /docker_rules/docker-submission-rules.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Please name your docker image after your teamname with the tag latest, save it as teamname.tar.gz, and send us a link to download the tar file. Note that the docker image name must use lowercase letters. You can use the following command to generate the docker tar file. 8 | 9 | ```bash 10 | docker save teamname:latest -o teamname.tar.gz 11 | ``` 12 | 13 | When we receive your docker tar file, we will run your program with the following commands: 14 | 15 | ```bash 16 | docker load < teamname.tar.gz 17 | ``` 18 | 19 | ```bash 20 | docker run --gpus "device=0" --name teamname -v /home/amax/Desktop/input:/input -v /home/amax/Desktop/predict:/predict teamname:latest 21 | ``` 22 | 23 | We will mount /home/amax/Desktop/input (a folder containing all CT files for testing) to /input in your docker container, and mount /home/amax/Desktop/predict (an empty folder used to save the segmentation files) to /predict in your docker container. 24 | 25 | Your program in your docker container should do the following things: 26 | 27 | - obtain each CT file (.nii.gz) in the folder /input 28 | - apply your segmentation algorithm to segment the target 29 | - save the segmentation mask to /predict.
Note that the filename of the segmentation mask file should be the same as that of the CT file, the segmentation mask is a 3D zero-one array (0 stands for background, 1 stands for target), and the meta information of the segmentation mask file should be consistent with that of the original CT file. A minimal entry-point sketch following these rules is given at the end of this document. 30 | 31 | If you don't know how to build a docker image for your program, you can refer to the video https://youtu.be/wkHUtOCEHro and the example https://github.com/heyingte/build-docker-image-example 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | -------------------------------------------------------------------------------- /parse_result.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerceptionComputingLab/PARSE2022/cd772c5247729de0a6ae9a09b9f0940e765c71d4/parse_result.xlsx --------------------------------------------------------------------------------
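As a supplement to the docker submission rules above, the sketch below shows the minimal /input → /predict loop that a container entry point has to implement. It is not part of the original repository: `segment_volume()` is a hypothetical placeholder for a team's own inference code, and only the I/O contract (same filename, zero-one mask, consistent meta information) comes from the rules.

```python
# Minimal sketch of a container entry point that follows the submission rules:
# read every .nii.gz in /input, segment it, and write a mask with the same
# filename and meta information to /predict.
import os
import numpy as np
import SimpleITK as sitk


def segment_volume(ct_array):
    # Hypothetical placeholder: replace with the team's own model inference.
    # A crude HU threshold stands in here just to keep the sketch runnable.
    return (ct_array > -400).astype(np.uint8)


input_dir, predict_dir = "/input", "/predict"
for name in os.listdir(input_dir):
    image = sitk.ReadImage(os.path.join(input_dir, name))
    mask = segment_volume(sitk.GetArrayFromImage(image))
    out = sitk.GetImageFromArray(mask)
    out.CopyInformation(image)  # keep origin / spacing / direction consistent with the CT
    sitk.WriteImage(out, os.path.join(predict_dir, name))
```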