├── .gitignore
├── code
│   ├── Makefile
│   ├── data
│   │   ├── TOY
│   │   │   ├── train
│   │   │   │   ├── gt
│   │   │   │   │   ├── 00000.png
│   │   │   │   │   ├── 00001.png
│   │   │   │   │   ├── 00002.png
│   │   │   │   │   ├── 00003.png
│   │   │   │   │   ├── 00004.png
│   │   │   │   │   ├── 00005.png
│   │   │   │   │   ├── 00006.png
│   │   │   │   │   ├── 00007.png
│   │   │   │   │   ├── 00008.png
│   │   │   │   │   └── 00009.png
│   │   │   │   ├── img
│   │   │   │   │   ├── 00000.png
│   │   │   │   │   ├── 00001.png
│   │   │   │   │   ├── 00002.png
│   │   │   │   │   ├── 00003.png
│   │   │   │   │   ├── 00004.png
│   │   │   │   │   ├── 00005.png
│   │   │   │   │   ├── 00006.png
│   │   │   │   │   ├── 00007.png
│   │   │   │   │   ├── 00008.png
│   │   │   │   │   └── 00009.png
│   │   │   │   └── weak
│   │   │   │       ├── 00000.png
│   │   │   │       ├── 00001.png
│   │   │   │       ├── 00002.png
│   │   │   │       ├── 00003.png
│   │   │   │       ├── 00004.png
│   │   │   │       ├── 00005.png
│   │   │   │       ├── 00006.png
│   │   │   │       ├── 00007.png
│   │   │   │       ├── 00008.png
│   │   │   │       └── 00009.png
│   │   │   └── val
│   │   │       ├── gt
│   │   │       │   ├── 00000.png
│   │   │       │   ├── 00001.png
│   │   │       │   ├── 00002.png
│   │   │       │   ├── 00003.png
│   │   │       │   ├── 00004.png
│   │   │       │   ├── 00005.png
│   │   │       │   ├── 00006.png
│   │   │       │   ├── 00007.png
│   │   │       │   ├── 00008.png
│   │   │       │   ├── 00009.png
│   │   │       │   ├── Image_1.png
│   │   │       │   ├── Image_10.png
│   │   │       │   ├── Image_2.png
│   │   │       │   ├── Image_3.png
│   │   │       │   ├── Image_4.png
│   │   │       │   ├── Image_5.png
│   │   │       │   ├── Image_6.png
│   │   │       │   ├── Image_7.png
│   │   │       │   ├── Image_8.png
│   │   │       │   └── Image_9.png
│   │   │       ├── img
│   │   │       │   ├── 00000.png
│   │   │       │   ├── 00001.png
│   │   │       │   ├── 00002.png
│   │   │       │   ├── 00003.png
│   │   │       │   ├── 00004.png
│   │   │       │   ├── 00005.png
│   │   │       │   ├── 00006.png
│   │   │       │   ├── 00007.png
│   │   │       │   ├── 00008.png
│   │   │       │   ├── 00009.png
│   │   │       │   ├── Image_1.png
│   │   │       │   ├── Image_10.png
│   │   │       │   ├── Image_2.png
│   │   │       │   ├── Image_3.png
│   │   │       │   ├── Image_4.png
│   │   │       │   ├── Image_5.png
│   │   │       │   ├── Image_6.png
│   │   │       │   ├── Image_7.png
│   │   │       │   ├── Image_8.png
│   │   │       │   └── Image_9.png
│   │   │       └── weak
│   │   │           ├── 00000.png
│   │   │           ├── 00001.png
│   │   │           ├── 00002.png
│   │   │           ├── 00003.png
│   │   │           ├── 00004.png
│   │   │           ├── 00005.png
│   │   │           ├── 00006.png
│   │   │           ├── 00007.png
│   │   │           ├── 00008.png
│   │   │           └── 00009.png
│   │   └── promise12.lineage
│   ├── gen_toy.py
│   ├── gifs.sh
│   ├── main.py
│   ├── slice_promise.py
│   └── utils
│       ├── ShallowNet.py
│       ├── dataset.py
│       ├── losses.py
│       ├── residual_unet.py
│       └── utils.py
├── preview.gif
├── readme.md
└── slides
    ├── readme.md
    ├── session_1.pdf
    ├── session_2.pdf
    └── session_3.pdf

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
code/result.gif
code/results/
code/tmp/

code/data/promise12
code/data/PROMISE12

*.zip
*.pkl
*.tar.gz
plots
Results/
results
*.npy
plot_bounds/
/*.png
/**/RANDOM_DATA*

.DS_Store

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
.static_storage/
.media/
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
--------------------------------------------------------------------------------
/code/Makefile:
--------------------------------------------------------------------------------
data/TOY:
	python gen_toy.py --dest $@ -n 10 10 -wh 256 256 -r 50

# Extraction and slicing
data/PROMISE12: data/promise12
	rm -rf $@_tmp
	python3 slice_promise.py --source_dir $< --dest_dir $@_tmp
	mv $@_tmp $@

data/promise12: data/promise12.lineage data/TrainingData_Part1.zip data/TrainingData_Part2.zip data/TrainingData_Part3.zip
	md5sum -c $<
	rm -rf $@_tmp
	unzip -q $(word 2, $^) -d $@_tmp
	unzip -q $(word 3, $^) -d $@_tmp
	unzip -q $(word 4, $^) -d $@_tmp
	mv $@_tmp $@

results.gif: results/images/TOY/unconstrained results/images/TOY/constrained
	./gifs.sh

results/images/TOY/unconstrained: data/TOY
	python3 main.py --dataset TOY --mode unconstrained --gpu

results/images/TOY/constrained: data/TOY
	python3 main.py --dataset TOY --mode constrained --gpu
--------------------------------------------------------------------------------
/code/data/TOY/train/gt/00000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/code/data/TOY/train/gt/00000.png
--------------------------------------------------------------------------------
(All other PNGs under code/data/TOY follow the same pattern: each file listed in the directory tree above is served from https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/ followed by its repository path.)
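The data/promise12 rule verifies the three downloaded zips against code/data/promise12.lineage with `md5sum -c` before unpacking. The same check can be sketched in Python; the helper name `check_lineage` and the throwaway file below are illustrative, not part of the repository:

```python
import hashlib
import tempfile
from pathlib import Path


def check_lineage(lineage_text: str, root: Path) -> bool:
    """Verify 'md5  path' lines relative to root, the way `md5sum -c` does.

    Blank lines and '#' comments (like the last line of promise12.lineage)
    are skipped; any mismatching digest fails the whole check.
    """
    for line in lineage_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        expected, _, name = line.partition(" ")
        data = (root / name.strip()).read_bytes()
        if hashlib.md5(data).hexdigest() != expected:
            return False
    return True


# Illustrative usage against a throwaway file, not the real PROMISE12 zips:
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "part1.zip").write_bytes(b"not a real archive")
    good = hashlib.md5(b"not a real archive").hexdigest()
    print(check_lineage(f"{good}  part1.zip\n# comment", root))   # True
    print(check_lineage(f"{'0' * 32}  part1.zip", root))          # False
```

Keeping the digests in a checked-in lineage file means a corrupted or partially downloaded zip fails loudly before `unzip` runs, instead of producing a silently broken dataset.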
-------------------------------------------------------------------------------- /code/data/TOY/val/weak/00006.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/code/data/TOY/val/weak/00006.png -------------------------------------------------------------------------------- /code/data/TOY/val/weak/00007.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/code/data/TOY/val/weak/00007.png -------------------------------------------------------------------------------- /code/data/TOY/val/weak/00008.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/code/data/TOY/val/weak/00008.png -------------------------------------------------------------------------------- /code/data/TOY/val/weak/00009.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/code/data/TOY/val/weak/00009.png -------------------------------------------------------------------------------- /code/data/promise12.lineage: -------------------------------------------------------------------------------- 1 | 0ad4d49067b130580c8fe6b0721ef63a data/TrainingData_Part1.zip 2 | cc115388cacad2aefd129ad4e9b2fe09 data/TrainingData_Part2.zip 3 | d657bb728ad92db7fe3332a1f37ccf5a data/TrainingData_Part3.zip 4 | # Promise challenge dataset https://promise12.grand-challenge.org/Download/ 5 | -------------------------------------------------------------------------------- /code/gen_toy.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3.7 2 | 3 | import argparse 4 | from pathlib import Path 5 | from functools import partial 6 | 7 | from tqdm import tqdm 8 | import numpy as np 9 | from PIL import Image, ImageDraw 10 | 11 | # from utils.utils import mmap_ 12 | 13 | 14 | def main(args) -> None: 15 | W, H = args.wh 16 | r: int = args.r 17 | 18 | for folder, n_img in zip(["train", "val"], args.n): 19 | gt_folder: Path = Path(args.dest, folder, 'gt') 20 | weak_folder: Path = Path(args.dest, folder, 'weak') 21 | img_folder: Path = Path(args.dest, folder, 'img') 22 | 23 | gt_folder.mkdir(parents=True, exist_ok=True) 24 | weak_folder.mkdir(parents=True, exist_ok=True) 25 | img_folder.mkdir(parents=True, exist_ok=True) 26 | 27 | gen_fn = partial(gen_img, W=W, H=H, r=r, gt_folder=gt_folder, img_folder=img_folder, weak_folder=weak_folder) 28 | 29 | # mmap_(gen_fn, range(n_img)) 30 | for i in tqdm(range(n_img)): 31 | gen_fn(i) 32 | 33 | 34 | def gen_img(i: int, W: int, H: int, r: int, gt_folder: Path, img_folder: Path, weak_folder: Path) -> None: 35 | img: Image = Image.new("L", (W, H), 0) 36 | gt: Image = Image.new("L", (W, H), 0) 37 | weak: Image = Image.new("L", (W, H), 0) 38 | 39 | img_canvas = ImageDraw.Draw(img) 40 | gt_canvas = ImageDraw.Draw(gt) 41 | weak_canvas = ImageDraw.Draw(weak) 42 | 43 | cx, cy = W // 2, H // 2 44 | 45 | img_canvas.ellipse([cx - r * 2, cy - r * 2, cx + r * 2, cy + r * 2], 68, 68) 46 | img_canvas.ellipse([cx - r, cy - r, cx + r, cy + r], 200, 200) 47 | gt_canvas.ellipse([cx - r, cy - r, cx + r, cy + r], 255, 255) 48 | weak_canvas.ellipse([cx - r // 10, cy - r // 10, cx + r // 10, cy + r // 10], 255, 255) 49 | 50 | img_arr: np.ndarray = np.asarray(img) 51 | with_noise: np.ndarray = noise(img_arr) 52 | 53 | filename: str = f"{i:05d}.png" 54 | gt.save(gt_folder / filename) 55 | weak.save(weak_folder / filename) 56 | Image.fromarray(with_noise).save(img_folder / filename) 57 |
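As a sanity check on the generator above, the foreground size of the ground-truth disc can be compared against the analytic area pi*r^2; this pixel count is what the size bounds in `utils/dataset.py` are later derived from. A minimal sketch, assuming NumPy is available; `disc_mask` is a hypothetical helper that rasterises the same centred disc `gen_img` draws with `ImageDraw.ellipse`, and is not part of the repo:

```python
import numpy as np


def disc_mask(W: int, H: int, r: int) -> np.ndarray:
    # Rasterise a filled disc of radius r centred in a (H, W) grid,
    # mirroring what gt_canvas.ellipse produces in gen_img above.
    cy, cx = H // 2, W // 2
    ys, xs = np.ogrid[:H, :W]
    return ((ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2).astype(np.uint8)


mask = disc_mask(256, 256, 50)
size = int(mask.sum())
# The rasterised size stays within a few percent of the analytic area.
print(size, np.pi * 50 ** 2)
```

Running this for other radii shows the discretisation error shrinking relative to the area, which is why a fixed +/-10% bound around the measured size is loose enough to be safe.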
58 | 59 | def noise(arr: np.ndarray) -> np.ndarray: 60 | noise_level: int = np.random.randint(100) 61 | to_add = np.random.normal(0, noise_level, arr.shape).astype(np.int16).clip(-255, 255) 62 | 63 | return (arr + to_add).clip(0, 255).astype(np.uint8) 64 | 65 | 66 | def get_args() -> argparse.Namespace: 67 | parser = argparse.ArgumentParser(description='Generation parameters') 68 | parser.add_argument('--dest', type=str, required=True) 69 | parser.add_argument('-n', type=int, nargs=2, required=True) 70 | parser.add_argument('-wh', type=int, nargs=2, required=True, help="Size of image") 71 | parser.add_argument('-r', type=int, required=True, help="Radius of circle") 72 | 73 | parser.add_argument('--seed', type=int, default=0) 74 | 75 | args = parser.parse_args() 76 | np.random.seed(args.seed) 77 | 78 | print(args) 79 | 80 | return args 81 | 82 | 83 | if __name__ == "__main__": 84 | main(get_args()) 85 | -------------------------------------------------------------------------------- /code/gifs.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Requires ImageMagick 4 | 5 | f1=results/images/TOY/unconstrained 6 | f2=results/images/TOY/constrained 7 | 8 | echo $f1 $f2 9 | 10 | rm -rf tmp/ && mkdir -p tmp/ 11 | 12 | for i in $f1/1_Ep_*.png ; do 13 | echo $i 14 | epc=`basename $i | cut -d . -f 1 | cut -d _ -f 3` 15 | convert $i -size 10x xc:none ${i/$f1/$f2} +append tmp/$epc.png 16 | mogrify -crop 530x518+0+0\! 
tmp/$epc.png 17 | mogrify -gravity north -extent 530x550 tmp/$epc.png 18 | mogrify -gravity south -extent 530x580 tmp/$epc.png 19 | mogrify -annotate +230+10 "Epoch $epc" tmp/$epc.png 20 | mogrify -annotate +100+560 "Partial CE" tmp/$epc.png 21 | mogrify -annotate +330+560 "Partial CE + Sizeloss" tmp/$epc.png 22 | done 23 | 24 | convert -loop 0 -delay 20 tmp/*.png result.gif 25 | 26 | # rm -r tmp/ -------------------------------------------------------------------------------- /code/main.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import argparse 4 | from pathlib import Path 5 | from typing import Any, Tuple 6 | from operator import itemgetter 7 | 8 | import torch 9 | import numpy as np 10 | import torch.nn.functional as F 11 | from torch import nn, einsum 12 | from torchvision import transforms 13 | from torch.utils.data import DataLoader 14 | 15 | from utils.dataset import (SliceDataset) 16 | from utils.ShallowNet import (shallowCNN) 17 | from utils.residual_unet import ResidualUNet 18 | from utils.utils import (weights_init, 19 | saveImages, 20 | class2one_hot, 21 | probs2one_hot, 22 | one_hot, 23 | tqdm_, 24 | dice_coef) 25 | 26 | from utils.losses import (CrossEntropy, 27 | PartialCrossEntropy, 28 | NaiveSizeLoss) 29 | 30 | 31 | def setup(args) -> Tuple[nn.Module, Any, Any, DataLoader, DataLoader]: 32 | # Networks and scheduler 33 | gpu: bool = args.gpu and torch.cuda.is_available() 34 | device = torch.device("cuda") if gpu else torch.device("cpu") 35 | 36 | num_classes = 2 37 | if args.dataset == 'TOY': 38 | initial_kernels = 4 39 | print(">> Using a shallowCNN") 40 | net = shallowCNN(1, initial_kernels, num_classes) 41 | net.apply(weights_init) 42 | else: 43 | print(">> Using a fully residual UNet") 44 | net = ResidualUNet(1, num_classes) 45 | net.init_weights() 46 | net.to(device) 47 | 48 | lr = 0.0005 49 | optimizer = torch.optim.Adam(net.parameters(), lr=lr, betas=(0.9, 0.999)) 50 
| 51 | # Dataset part 52 | batch_size = 1 53 | root_dir = Path("data") / args.dataset 54 | 55 | transform = transforms.Compose([ 56 | lambda img: img.convert('L'), 57 | lambda img: np.array(img)[np.newaxis, ...], 58 | lambda nd: nd / 255, # max <= 1 59 | lambda nd: torch.tensor(nd, dtype=torch.float32) 60 | ]) 61 | 62 | mask_transform = transforms.Compose([ 63 | lambda img: np.array(img)[...], 64 | lambda nd: nd / 255, # max <= 1 65 | lambda nd: torch.tensor(nd, dtype=torch.int64)[None, ...], # Add one dimension to simulate batch 66 | lambda t: class2one_hot(t, K=2), 67 | itemgetter(0) 68 | ]) 69 | 70 | train_set = SliceDataset('train', 71 | root_dir, 72 | transform=transform, 73 | mask_transform=mask_transform, 74 | augment=True, 75 | equalize=False) 76 | train_loader = DataLoader(train_set, 77 | batch_size=batch_size, 78 | num_workers=5, 79 | shuffle=True) 80 | 81 | val_set = SliceDataset('val', 82 | root_dir, 83 | transform=transform, 84 | mask_transform=mask_transform, 85 | equalize=False) 86 | val_loader = DataLoader(val_set, 87 | batch_size=1, 88 | num_workers=5, 89 | shuffle=False) 90 | 91 | return (net, optimizer, device, train_loader, val_loader) 92 | 93 | 94 | def runTraining(args): 95 | print(f">>> Setting up to train on {args.dataset} with {args.mode}") 96 | net, optimizer, device, train_loader, val_loader = setup(args) 97 | 98 | ce_loss = CrossEntropy(idk=[0, 1]) # Supervise both background and foreground 99 | partial_ce = PartialCrossEntropy() # Supervise only foreground 100 | sizeLoss = NaiveSizeLoss() 101 | 102 | for i in range(args.epochs): 103 | net.train() 104 | 105 | log_ce = torch.zeros((len(train_loader)), device=device) 106 | log_sizeloss = torch.zeros((len(train_loader)), device=device) 107 | log_sizediff = torch.zeros((len(train_loader)), device=device) 108 | log_dice = torch.zeros((len(train_loader)), device=device) 109 | 110 | desc = f">> Training ({i: 4d})" 111 | tq_iter = tqdm_(enumerate(train_loader), total=len(train_loader),
desc=desc) 112 | for j, data in tq_iter: 113 | img = data["img"].to(device) 114 | full_mask = data["full_mask"].to(device) 115 | weak_mask = data["weak_mask"].to(device) 116 | 117 | bounds = data["bounds"].to(device) 118 | 119 | optimizer.zero_grad() 120 | 121 | # Sanity tests to see we loaded and encoded the data correctly 122 | assert 0 <= img.min() and img.max() <= 1 123 | B, _, W, H = img.shape 124 | assert B == 1 # Since we log the values in a simple way, doesn't handle more 125 | assert weak_mask.shape == (B, 2, W, H) 126 | assert one_hot(weak_mask), one_hot(weak_mask) 127 | 128 | logits = net(img) 129 | pred_softmax = F.softmax(5 * logits, dim=1) 130 | pred_seg = probs2one_hot(pred_softmax) 131 | 132 | pred_size = einsum("bkwh->bk", pred_seg)[:, 1] 133 | log_sizediff[j] = pred_size - data["true_size"][0, 1] 134 | log_dice[j] = dice_coef(pred_seg, full_mask)[0, 1] # 1st item, 2nd class 135 | 136 | if args.mode == 'full': 137 | ce_val = ce_loss(pred_softmax, full_mask) 138 | log_ce[j] = ce_val.item() 139 | 140 | log_sizeloss[j] = 0 141 | 142 | lossEpoch = ce_val 143 | elif args.mode == 'unconstrained': 144 | ce_val = partial_ce(pred_softmax, weak_mask) 145 | log_ce[j] = ce_val.item() 146 | 147 | log_sizeloss[j] = 0 148 | 149 | lossEpoch = ce_val 150 | else: 151 | ce_val = partial_ce(pred_softmax, weak_mask) 152 | log_ce[j] = ce_val.item() 153 | 154 | sizeLoss_val = sizeLoss(pred_softmax, bounds) 155 | log_sizeloss[j] = sizeLoss_val.item() 156 | 157 | lossEpoch = ce_val + sizeLoss_val 158 | 159 | lossEpoch.backward() 160 | optimizer.step() 161 | 162 | tq_iter.set_postfix({"DSC": f"{log_dice[:j+1].mean():05.3f}", 163 | "SizeDiff": f"{log_sizediff[:j+1].mean():07.1f}", 164 | "LossCE": f"{log_ce[:j+1].mean():5.2e}", 165 | **({"LossSize": f"{log_sizeloss[:j+1].mean():5.2e}"} if args.mode == 'constrained' else {})}) 166 | tq_iter.update(1) 167 | tq_iter.close() 168 | 169 | if (i % 5) == 0: 170 | saveImages(net, val_loader, 1, i, args.dataset, args.mode, device) 171 
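The constrained branch above adds `NaiveSizeLoss` on top of the partial cross-entropy. The size penalty is easy to reason about in isolation; below is a plain-Python sketch of the same formula with illustrative numbers (the `naive_size_penalty` helper is not part of the repo, and the normalisation constants simply mirror `utils/losses.py`):

```python
def naive_size_penalty(pred_size: float, lower: float, upper: float,
                       W: int = 256, H: int = 256) -> float:
    # pred_size is the soft foreground size, i.e. the sum of the softmax
    # probabilities for class 1 over the image; bounds come from
    # SliceDataset as [0.9, 1.1] * true_size.
    relu = lambda v: max(v, 0.0)
    # Zero inside [lower, upper], quadratic outside -- the same
    # relu(size - upper)**2 + relu(lower - size)**2 as NaiveSizeLoss.
    loss = relu(pred_size - upper) ** 2 + relu(lower - pred_size) ** 2
    return loss / (W * H) / 100  # same normalisation as the class


true_size = 8011.0  # toy circle size quoted in utils/dataset.py
lower, upper = 0.9 * true_size, 1.1 * true_size

print(naive_size_penalty(8000.0, lower, upper))  # inside the bounds -> 0.0
print(naive_size_penalty(2000.0, lower, upper))  # far too small -> positive penalty
```

The penalty is differentiable almost everywhere and flat inside the admissible interval, so it only pushes the prediction when its size leaves the bounds, leaving the partial cross-entropy to shape the segmentation otherwise.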
| 172 | 173 | def main(): 174 | parser = argparse.ArgumentParser() 175 | 176 | parser.add_argument('--epochs', default=200, type=int) 177 | parser.add_argument('--dataset', default='TOY', choices=['TOY', 'PROMISE12']) 178 | parser.add_argument('--mode', default='unconstrained', choices=['constrained', 'unconstrained', 'full']) 179 | 180 | parser.add_argument('--gpu', action='store_true') 181 | 182 | args = parser.parse_args() 183 | 184 | runTraining(args) 185 | 186 | 187 | if __name__ == '__main__': 188 | main() 189 | -------------------------------------------------------------------------------- /code/slice_promise.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3.6 2 | 3 | import re 4 | import random 5 | import argparse 6 | import warnings 7 | from pathlib import Path 8 | from pprint import pprint 9 | from functools import partial 10 | from typing import Any, Callable, List, Tuple 11 | 12 | import numpy as np 13 | import SimpleITK as sitk 14 | from PIL import Image, ImageDraw 15 | from numpy import unique as uniq 16 | from skimage.transform import resize 17 | from skimage.io import imread, imsave 18 | 19 | from utils.utils import starmmap_, map_ 20 | 21 | 22 | def norm_arr(img: np.ndarray) -> np.ndarray: 23 | casted = img.astype(np.float32) 24 | shifted = casted - casted.min() 25 | norm = shifted / shifted.max() 26 | res = 255 * norm 27 | 28 | return res.astype(np.uint8) 29 | 30 | 31 | def get_p_id(path: Path, regex: str = r"(Case\d+)(_segmentation)?") -> str: 32 | matched = re.match(regex, path.stem) 33 | 34 | if matched: 35 | return matched.group(1) 36 | raise ValueError(regex, path) 37 | 38 | 39 | def save_slices(img_p: Path, gt_p: Path, 40 | dest_dir: Path, shape: Tuple[int], 41 | img_dir: str = "img", gt_dir: str = "gt") -> Tuple[int, int, int]: 42 | p_id: str = get_p_id(img_p) 43 | assert "Case" in p_id 44 | assert p_id == get_p_id(gt_p) 45 | 46 | # Load the data 47 | img = imread(str(img_p),
plugin='simpleitk') 48 | gt = imread(str(gt_p), plugin='simpleitk') 49 | # print(img.shape, img.dtype, gt.shape, gt.dtype) 50 | # print(img.min(), img.max(), len(np.unique(img))) 51 | # print(np.unique(gt)) 52 | 53 | assert img.shape == gt.shape 54 | assert img.dtype in [np.int16] 55 | assert gt.dtype in [np.int8] 56 | 57 | img_nib = sitk.ReadImage(str(img_p)) 58 | dx, dy, dz = img_nib.GetSpacing() 59 | # print(dx, dy, dz) 60 | assert np.abs(dx - dy) <= 0.0000041, (dx, dy, dx - dy) 61 | assert 0.27 <= dx <= 0.75, dx 62 | assert 2.19994 <= dz <= 4.00001, dz 63 | 64 | x, y, z = img.shape 65 | assert (y, z) in [(320, 320), (512, 512), (256, 256), (384, 384)], (y, z) 66 | assert 15 <= x <= 54, x 67 | 68 | # Normalize and check data content 69 | norm_img = norm_arr(img) # We need to normalize the whole 3d img, not 2d slices 70 | assert 0 == norm_img.min() and norm_img.max() == 255, (norm_img.min(), norm_img.max()) 71 | assert norm_img.dtype == np.uint8 72 | 73 | save_dir_img: Path = dest_dir / img_dir 74 | save_dir_gt: Path = dest_dir / gt_dir 75 | save_dir_weak: Path = dest_dir / "weak" 76 | sizes_2d: np.ndarray = np.zeros(len(img)) # One entry per axial slice 77 | for j in range(len(img)): 78 | img_s = norm_img[j, :, :] 79 | gt_s = gt[j, :, :] 80 | assert img_s.shape == gt_s.shape 81 | 82 | # Resize and check the data are still what we expect 83 | # from time import time 84 | # tic = time() 85 | resize_: Callable = partial(resize, mode="constant", preserve_range=True, anti_aliasing=False) 86 | r_img: np.ndarray = resize_(img_s, shape).astype(np.uint8) 87 | r_gt: np.ndarray = resize_(gt_s, shape).astype(np.uint8) 88 | # print(time() - tic) 89 | assert r_img.dtype == r_gt.dtype == np.uint8 90 | assert 0 <= r_img.min() and r_img.max() <= 255 # The range might be smaller 91 | assert set(uniq(r_gt)).issubset(set(uniq(gt))) 92 | sizes_2d[j] = r_gt[r_gt == 1].sum() 93 | 94 | r_weak: np.ndarray = random_strat(r_gt, 1) 95 | 96 | r_gt *= 255 97 | r_weak *= 255 98 | 99 | for save_dir, data in
zip([save_dir_img, save_dir_gt, save_dir_weak], 100 | [r_img, r_gt, r_weak]): 101 | filename = f"{p_id}_{j:02d}.png" 102 | save_dir.mkdir(parents=True, exist_ok=True) 103 | 104 | with warnings.catch_warnings(): 105 | warnings.filterwarnings("ignore", category=UserWarning) 106 | imsave(str(Path(save_dir, filename)), data) 107 | 108 | return sizes_2d.sum(), sizes_2d[sizes_2d > 0].min(), sizes_2d.max() 109 | 110 | 111 | def random_strat(orig_mask: np.ndarray, filling: int) -> np.ndarray: 112 | res_arr: np.ndarray = np.zeros_like(orig_mask) 113 | 114 | size: int = orig_mask.sum() 115 | if size: # Positive images 116 | res_img: Image.Image = Image.new("L", orig_mask.shape, 0) 117 | canvas = ImageDraw.Draw(res_img) 118 | xs, ys = np.where(orig_mask == 1) 119 | # print(len(xs), len(ys)) 120 | assert len(xs) == len(ys) 121 | random_index: int = np.random.randint(len(xs)) 122 | rx, ry = xs[random_index], ys[random_index] 123 | # Of course the coordinates are inverted 124 | # rx, ry = ry, rx 125 | # print(centroid, rx, ry) 126 | 127 | width: int = 5 # Hardcoded for now 128 | dw: int = int(width // 2) 129 | canvas.ellipse([rx - dw, ry - dw, rx + dw, ry + dw], fill=filling) 130 | 131 | # Remove overflow if needed 132 | masked_res: np.ndarray = np.einsum("hw,wh->wh", np.array(res_img), orig_mask).astype(np.uint8) 133 | res_arr = masked_res 134 | 135 | return res_arr 136 | 137 | 138 | def main(args: argparse.Namespace): 139 | src_path: Path = Path(args.source_dir) 140 | dest_path: Path = Path(args.dest_dir) 141 | 142 | # Assume the cleaning up is done before calling the script 143 | assert src_path.exists() 144 | assert not dest_path.exists() 145 | 146 | # Get all the file names, avoid the temporary ones 147 | nii_paths: List[Path] = [p for p in src_path.rglob('*.mhd')] 148 | assert len(nii_paths) % 2 == 0, "Odd number of .mhd files, one+ pair is broken" 149 | 150 | # We sort now, but also id matching is checked while iterating later on 151 | img_nii_paths: List[Path] = sorted(p for
p in nii_paths if "_segmentation" not in str(p)) 152 | gt_nii_paths: List[Path] = sorted(p for p in nii_paths if "_segmentation" in str(p)) 153 | assert len(img_nii_paths) == len(gt_nii_paths) 154 | paths: List[Tuple[Path, Path]] = list(zip(img_nii_paths, gt_nii_paths)) 155 | 156 | print(f"Found {len(img_nii_paths)} pairs in total") 157 | pprint(paths[:5]) 158 | 159 | validation_paths: List[Tuple[Path, Path]] = random.sample(paths, args.retain) 160 | training_paths: List[Tuple[Path, Path]] = [p for p in paths if p not in validation_paths] 161 | assert set(validation_paths).isdisjoint(set(training_paths)) 162 | assert len(paths) == (len(validation_paths) + len(training_paths)) 163 | 164 | for mode, _paths in zip(["train", "val"], [training_paths, validation_paths]): 165 | img_paths, gt_paths = zip(*_paths) # type: Tuple[Any, Any] 166 | 167 | dest_dir = Path(dest_path, mode) 168 | print(f"Slicing {len(img_paths)} pairs to {dest_dir}") 169 | assert len(img_paths) == len(gt_paths) 170 | 171 | pfun = partial(save_slices, dest_dir=dest_dir, shape=args.shape) 172 | sizes = starmmap_(pfun, zip(img_paths, gt_paths)) 173 | # sizes = [] 174 | # for paths in tqdm(list(zip(img_paths, gt_paths)), ncols=50): 175 | # sizes.append(uc_(pfun)(paths)) 176 | sizes_3d, sizes_2d_min, sizes_2d_max = map_(np.asarray, zip(*sizes)) 177 | 178 | print("2d sizes: ", sizes_2d_min.min(), sizes_2d_max.max()) 179 | print("3d sizes: ", sizes_3d.min(), sizes_3d.mean(), sizes_3d.max()) 180 | 181 | 182 | def get_args() -> argparse.Namespace: 183 | parser = argparse.ArgumentParser(description='Slicing parameters') 184 | parser.add_argument('--source_dir', type=str, required=True) 185 | parser.add_argument('--dest_dir', type=str, required=True) 186 | 187 | parser.add_argument('--img_dir', type=str, default="IMG") 188 | parser.add_argument('--gt_dir', type=str, default="GT") 189 | parser.add_argument('--shape', type=int, nargs="+", default=[256, 256]) 190 | parser.add_argument('--retain', type=int, 
default=10, help="Number of retained patients for the validation data") 191 | parser.add_argument('--seed', type=int, default=0) 192 | 193 | args = parser.parse_args() 194 | random.seed(args.seed) 195 | 196 | print(args) 197 | 198 | return args 199 | 200 | 201 | if __name__ == "__main__": 202 | args = get_args() 203 | random.seed(args.seed) 204 | 205 | main(args) 206 | -------------------------------------------------------------------------------- /code/utils/ShallowNet.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import torch.nn as nn 4 | 5 | 6 | def convBatch(nin, nout, kernel_size=3, stride=1, padding=1, bias=False, layer=nn.Conv2d, dilation=1): 7 | return nn.Sequential( 8 | layer(nin, nout, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias, dilation=dilation), 9 | nn.BatchNorm2d(nout), 10 | nn.PReLU() 11 | ) 12 | 13 | 14 | class shallowCNN(nn.Module): 15 | def __init__(self, nin, nG, nout): 16 | super(shallowCNN, self).__init__() 17 | self.conv0 = convBatch(nin, nG * 4) 18 | self.conv1 = convBatch(nG * 4, nG * 4) 19 | self.conv2 = convBatch(nG * 4, nout) 20 | 21 | def forward(self, input): 22 | x0 = self.conv0(input) 23 | x1 = self.conv1(x0) 24 | x2 = self.conv2(x1) 25 | 26 | return x2 27 | -------------------------------------------------------------------------------- /code/utils/dataset.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | from pathlib import Path 4 | from random import random 5 | from typing import Callable, Dict, List, Tuple, Union 6 | 7 | import torch 8 | from torch import Tensor, einsum 9 | from PIL import Image, ImageOps 10 | from torch.utils.data import Dataset 11 | 12 | 13 | def make_dataset(root, subset) -> List[Tuple[Path, Path, Path]]: 14 | assert subset in ['train', 'val', 'test'] 15 | 16 | root = Path(root) 17 | 18 | img_path = root / subset / 'img' 19 | full_path = root /
subset / 'gt' 20 | weak_path = root / subset / 'weak' 21 | 22 | images = sorted(img_path.glob("*.png")) 23 | full_labels = sorted(full_path.glob("*.png")) 24 | weak_labels = sorted(weak_path.glob("*.png")) 25 | 26 | return list(zip(images, full_labels, weak_labels)) 27 | 28 | 29 | class SliceDataset(Dataset): 30 | def __init__(self, subset, root_dir, transform=None, 31 | mask_transform=None, augment=False, equalize=False): 32 | self.root_dir: str = root_dir 33 | self.transform: Callable = transform 34 | self.mask_transform: Callable = mask_transform 35 | self.augmentation: bool = augment 36 | self.equalize: bool = equalize 37 | 38 | self.files = make_dataset(root_dir, subset) 39 | 40 | print(f">> Created {subset} dataset with {len(self)} images...") 41 | 42 | def __len__(self): 43 | return len(self.files) 44 | 45 | def __getitem__(self, index) -> Dict[str, Union[Tensor, int, str]]: 46 | img_path, gt_path, weak_path = self.files[index] 47 | 48 | img = Image.open(img_path) 49 | mask = Image.open(gt_path) 50 | weak_mask = Image.open(weak_path) 51 | 52 | if self.equalize: 53 | img = ImageOps.equalize(img) 54 | 55 | if self.transform: 56 | img = self.transform(img) 57 | mask = self.mask_transform(mask) 58 | weak_mask = self.mask_transform(weak_mask) 59 | 60 | _, W, H = img.shape 61 | assert mask.shape == weak_mask.shape == (2, W, H) 62 | 63 | # Circle: 8011 64 | true_size = einsum("kwh->k", mask) 65 | bounds = einsum("k,b->kb", true_size, torch.tensor([0.9, 1.1], dtype=torch.float32)) 66 | assert bounds.shape == (2, 2) # binary, upper and lower bounds 67 | 68 | return {"img": img, 69 | "full_mask": mask, 70 | "weak_mask": weak_mask, 71 | "path": str(img_path), 72 | "true_size": true_size, 73 | "bounds": bounds} 74 | -------------------------------------------------------------------------------- /code/utils/losses.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | from torch import einsum 4 | import 
torch.nn.functional as F 5 | 6 | from .utils import simplex, sset 7 | 8 | 9 | class CrossEntropy(): 10 | def __init__(self, idk, **kwargs): 11 | # Self.idk is used to filter out some classes of the target mask. Use fancy indexing 12 | self.idk = idk 13 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 14 | 15 | def __call__(self, pred_softmax, weak_target): 16 | assert pred_softmax.shape == weak_target.shape 17 | assert simplex(pred_softmax) 18 | assert sset(weak_target, [0, 1]) 19 | 20 | log_p = (pred_softmax[:, self.idk, ...] + 1e-10).log() 21 | mask = weak_target[:, self.idk, ...].float() 22 | 23 | loss = - einsum("bkwh,bkwh->", mask, log_p) 24 | loss /= mask.sum() + 1e-10 25 | 26 | return loss 27 | 28 | 29 | class PartialCrossEntropy(CrossEntropy): 30 | def __init__(self, **kwargs): 31 | super().__init__(idk=[1], **kwargs) 32 | 33 | 34 | # ######## ------ Size loss function (naive way) ---------- ########### 35 | # --- This function will push the prediction to be close to sizeGT ---# 36 | class NaiveSizeLoss(): 37 | """ 38 | This implements the naive quadratic penalty on the predicted size: 39 | penalty = 0 if lower <= pred_size <= upper 40 | relu(pred_size - upper)^2 + relu(lower - pred_size)^2 otherwise 41 | """ 42 | def __init__(self, **kwargs): 43 | # Self.idk is used to filter out some classes of the target mask.
Use fancy indexing 44 | self.idk = [1] 45 | print(f"Initialized {self.__class__.__name__} with {kwargs}") 46 | 47 | def __call__(self, pred_softmax, bounds): 48 | assert simplex(pred_softmax) 49 | 50 | B, K, H, W = pred_softmax.shape 51 | assert bounds.shape == (B, K, 2) 52 | 53 | pred_size = einsum("bkwh->bk", pred_softmax)[:, self.idk] 54 | 55 | upper_bounds = bounds[:, self.idk, 1] 56 | lower_bounds = bounds[:, self.idk, 0] 57 | assert (upper_bounds >= 0).all() and (lower_bounds >= 0).all() 58 | 59 | # size < upper <==> size - upper < 0 60 | # lower < size <==> lower - size < 0 61 | loss = F.relu(pred_size - upper_bounds) ** 2 + F.relu(lower_bounds - pred_size) ** 2 62 | loss /= (W * H) 63 | 64 | return loss / 100 65 | -------------------------------------------------------------------------------- /code/utils/residual_unet.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3.8 2 | 3 | import math 4 | 5 | import torch.nn as nn 6 | 7 | 8 | def maxpool(): 9 | pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) 10 | return pool 11 | 12 | 13 | def conv_block(in_dim, out_dim, act_fn, kernel_size=3, stride=1, padding=1, dilation=1): 14 | model = nn.Sequential( 15 | nn.Conv2d(in_dim, out_dim, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation), 16 | nn.BatchNorm2d(out_dim), 17 | act_fn, 18 | ) 19 | return model 20 | 21 | 22 | def conv_block_3(in_dim, out_dim, act_fn): 23 | model = nn.Sequential( 24 | conv_block(in_dim, out_dim, act_fn), 25 | conv_block(out_dim, out_dim, act_fn), 26 | nn.Conv2d(out_dim, out_dim, kernel_size=3, stride=1, padding=1), 27 | nn.BatchNorm2d(out_dim), 28 | ) 29 | return model 30 | 31 | 32 | # TODO: Change order of block: BN + Activation + Conv 33 | def conv_decod_block(in_dim, out_dim, act_fn): 34 | model = nn.Sequential( 35 | nn.ConvTranspose2d(in_dim, out_dim, kernel_size=3, stride=2, padding=1, output_padding=1), 36 | nn.BatchNorm2d(out_dim), 37 | act_fn, 
38 | ) 39 | return model 40 | 41 | 42 | class Conv_residual_conv(nn.Module): 43 | def __init__(self, in_dim, out_dim, act_fn): 44 | super().__init__() 45 | self.in_dim = in_dim 46 | self.out_dim = out_dim 47 | act_fn = act_fn 48 | 49 | self.conv_1 = conv_block(self.in_dim, self.out_dim, act_fn) 50 | self.conv_2 = conv_block_3(self.out_dim, self.out_dim, act_fn) 51 | self.conv_3 = conv_block(self.out_dim, self.out_dim, act_fn) 52 | 53 | def forward(self, input): 54 | conv_1 = self.conv_1(input) 55 | conv_2 = self.conv_2(conv_1) 56 | res = conv_1 + conv_2 57 | conv_3 = self.conv_3(res) 58 | return conv_3 59 | 60 | 61 | class ResidualUNet(nn.Module): 62 | # def __init__(self, output_nc, ngf=32): 63 | def __init__(self, in_dim: int, out_dim: int): 64 | super().__init__() 65 | self.in_dim = in_dim 66 | ngf = 32 67 | self.out_dim = ngf 68 | self.final_out_dim = out_dim 69 | act_fn = nn.LeakyReLU(0.2, inplace=True) 70 | act_fn_2 = nn.ReLU() 71 | 72 | # Encoder 73 | self.down_1 = Conv_residual_conv(self.in_dim, self.out_dim, act_fn) 74 | self.pool_1 = maxpool() 75 | self.down_2 = Conv_residual_conv(self.out_dim, self.out_dim * 2, act_fn) 76 | self.pool_2 = maxpool() 77 | self.down_3 = Conv_residual_conv(self.out_dim * 2, self.out_dim * 4, act_fn) 78 | self.pool_3 = maxpool() 79 | self.down_4 = Conv_residual_conv(self.out_dim * 4, self.out_dim * 8, act_fn) 80 | self.pool_4 = maxpool() 81 | 82 | # Bridge between Encoder-Decoder 83 | self.bridge = Conv_residual_conv(self.out_dim * 8, self.out_dim * 16, act_fn) 84 | 85 | # Decoder 86 | self.deconv_1 = conv_decod_block(self.out_dim * 16, self.out_dim * 8, act_fn_2) 87 | self.up_1 = Conv_residual_conv(self.out_dim * 8, self.out_dim * 8, act_fn_2) 88 | self.deconv_2 = conv_decod_block(self.out_dim * 8, self.out_dim * 4, act_fn_2) 89 | self.up_2 = Conv_residual_conv(self.out_dim * 4, self.out_dim * 4, act_fn_2) 90 | self.deconv_3 = conv_decod_block(self.out_dim * 4, self.out_dim * 2, act_fn_2) 91 | self.up_3 = 
Conv_residual_conv(self.out_dim * 2, self.out_dim * 2, act_fn_2) 92 | self.deconv_4 = conv_decod_block(self.out_dim * 2, self.out_dim, act_fn_2) 93 | self.up_4 = Conv_residual_conv(self.out_dim, self.out_dim, act_fn_2) 94 | 95 | self.out = nn.Conv2d(self.out_dim, self.final_out_dim, kernel_size=3, stride=1, padding=1) 96 | 97 | print(f"Initialized {self.__class__.__name__} successfully") 98 | 99 | def forward(self, input): 100 | # Encoding path 101 | 102 | down_1 = self.down_1(input) # This will go as res in deconv path 103 | down_2 = self.down_2(self.pool_1(down_1)) 104 | down_3 = self.down_3(self.pool_2(down_2)) 105 | down_4 = self.down_4(self.pool_3(down_3)) 106 | 107 | bridge = self.bridge(self.pool_4(down_4)) 108 | 109 | # Decoding path 110 | deconv_1 = self.deconv_1(bridge) 111 | skip_1 = (deconv_1 + down_4) / 2 # Residual connection 112 | up_1 = self.up_1(skip_1) 113 | 114 | deconv_2 = self.deconv_2(up_1) 115 | skip_2 = (deconv_2 + down_3) / 2 # Residual connection 116 | up_2 = self.up_2(skip_2) 117 | 118 | deconv_3 = self.deconv_3(up_2) 119 | skip_3 = (deconv_3 + down_2) / 2 # Residual connection 120 | up_3 = self.up_3(skip_3) 121 | 122 | deconv_4 = self.deconv_4(up_3) 123 | skip_4 = (deconv_4 + down_1) / 2 # Residual connection 124 | up_4 = self.up_4(skip_4) 125 | 126 | return self.out(up_4) 127 | 128 | def init_weights(self, *args, **kwargs): 129 | for m in self.modules(): 130 | if isinstance(m, nn.Conv2d): 131 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 132 | m.weight.data.normal_(0, math.sqrt(2.
/ n)) 133 | elif isinstance(m, nn.BatchNorm2d): 134 | m.weight.data.fill_(1) 135 | m.bias.data.zero_() 136 | -------------------------------------------------------------------------------- /code/utils/utils.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | from pathlib import Path 4 | from functools import partial 5 | from multiprocessing import Pool 6 | from typing import Callable, Iterable, List, Set, Tuple, TypeVar, cast 7 | 8 | import torch 9 | import torchvision 10 | import torch.nn as nn 11 | import torch.nn.functional as F 12 | from tqdm import tqdm 13 | from torch import Tensor, einsum 14 | 15 | tqdm_ = partial(tqdm, ncols=125, 16 | leave=True, 17 | bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [{rate_fmt}{postfix}]') 18 | 19 | 20 | def weights_init(m): 21 | if type(m) == nn.Conv2d or type(m) == nn.ConvTranspose2d: 22 | nn.init.xavier_normal_(m.weight.data) 23 | elif type(m) == nn.BatchNorm2d: 24 | m.weight.data.normal_(1.0, 0.02) 25 | m.bias.data.fill_(0) 26 | 27 | 28 | # Functools 29 | A = TypeVar("A") 30 | B = TypeVar("B") 31 | 32 | 33 | def map_(fn: Callable[[A], B], iter: Iterable[A]) -> List[B]: 34 | return list(map(fn, iter)) 35 | 36 | 37 | def mmap_(fn: Callable[[A], B], iter: Iterable[A]) -> List[B]: 38 | return Pool().map(fn, iter) 39 | 40 | 41 | def starmmap_(fn: Callable[[Tuple[A]], B], iter: Iterable[Tuple[A]]) -> List[B]: 42 | return Pool().starmap(fn, iter) 43 | 44 | 45 | # Assert utils 46 | def uniq(a: Tensor) -> Set: 47 | return set(torch.unique(a.cpu()).numpy()) 48 | 49 | 50 | def sset(a: Tensor, sub: Iterable) -> bool: 51 | return uniq(a).issubset(sub) 52 | 53 | 54 | def eq(a: Tensor, b) -> bool: 55 | return torch.eq(a, b).all() 56 | 57 | 58 | def simplex(t: Tensor, axis=1) -> bool: 59 | _sum = cast(Tensor, t.sum(axis).type(torch.float32)) 60 | _ones = torch.ones_like(_sum, dtype=torch.float32) 61 | return torch.allclose(_sum, _ones) 62 | 63 | 64 | def one_hot(t: 
Tensor, axis=1) -> bool: 65 | return simplex(t, axis) and sset(t, [0, 1]) 66 | 67 | 68 | def class2one_hot(seg: Tensor, K: int) -> Tensor: 69 | # Breaking change but otherwise can't deal with both 2d and 3d 70 | # if len(seg.shape) == 3: # Only w, h, d, used by the dataloader 71 | # return class2one_hot(seg.unsqueeze(dim=0), K)[0] 72 | 73 | assert sset(seg, list(range(K))), (uniq(seg), K) 74 | 75 | b, *img_shape = seg.shape 76 | 77 | device = seg.device 78 | res = torch.zeros((b, K, *img_shape), dtype=torch.int32, device=device).scatter_(1, seg[:, None, ...], 1) 79 | 80 | assert res.shape == (b, K, *img_shape) 81 | assert one_hot(res) 82 | 83 | return res 84 | 85 | 86 | def probs2class(probs: Tensor) -> Tensor: 87 | b, _, *img_shape = probs.shape 88 | assert simplex(probs) 89 | 90 | res = probs.argmax(dim=1) 91 | assert res.shape == (b, *img_shape) 92 | 93 | return res 94 | 95 | 96 | def probs2one_hot(probs: Tensor) -> Tensor: 97 | _, K, *_ = probs.shape 98 | assert simplex(probs) 99 | 100 | res = class2one_hot(probs2class(probs), K) 101 | assert res.shape == probs.shape 102 | assert one_hot(res) 103 | 104 | return res 105 | 106 | 107 | def saveImages(net, img_batch, batch_size, epoch, dataset, mode, device): 108 | path = Path('results/images/') / dataset / mode 109 | path.mkdir(parents=True, exist_ok=True) 110 | 111 | net.eval() 112 | 113 | desc = f">> Validation ({epoch: 4d})" 114 | 115 | log_dice = torch.zeros((len(img_batch)), device=device) 116 | 117 | tq_iter = tqdm_(enumerate(img_batch), total=len(img_batch), desc=desc) 118 | for j, data in tq_iter: 119 | img = data["img"].to(device) 120 | weak_mask = data["weak_mask"].to(device) 121 | full_mask = data["full_mask"].to(device) 122 | 123 | logits = net(img) 124 | probs = F.softmax(5 * logits, dim=1) 125 | 126 | segmentation = probs2class(probs)[:, None, ...].float() 127 | log_dice[j] = dice_coef(probs2one_hot(probs), full_mask)[0, 1] # 1st item, 2nd class 128 | 129 | out = torch.cat((img, segmentation, 
weak_mask[:, [1], ...])) 130 | 131 | torchvision.utils.save_image(out.data, path / f"{j}_Ep_{epoch:04d}.png", 132 | nrow=batch_size, 133 | padding=2, 134 | normalize=False, 135 | range=None, 136 | scale_each=False, 137 | pad_value=0) 138 | 139 | tq_iter.set_postfix({"DSC": f"{log_dice[:j+1].mean():05.3f}"}) 140 | tq_iter.update(1) 141 | tq_iter.close() 142 | 143 | 144 | # Metrics 145 | def meta_dice(sum_str: str, label: Tensor, pred: Tensor, smooth: float = 1e-8) -> Tensor: 146 | assert label.shape == pred.shape 147 | assert one_hot(label) 148 | assert one_hot(pred) 149 | 150 | inter_size: Tensor = einsum(sum_str, [intersection(label, pred)]).type(torch.float32) 151 | sum_sizes: Tensor = (einsum(sum_str, [label]) + einsum(sum_str, [pred])).type(torch.float32) 152 | 153 | dices: Tensor = (2 * inter_size + smooth) / (sum_sizes + smooth) 154 | 155 | return dices 156 | 157 | 158 | dice_coef = partial(meta_dice, "bk...->bk") 159 | dice_batch = partial(meta_dice, "bk...->k") # used for 3d dice 160 | 161 | 162 | def intersection(a: Tensor, b: Tensor) -> Tensor: 163 | assert a.shape == b.shape 164 | assert sset(a, [0, 1]) 165 | assert sset(b, [0, 1]) 166 | 167 | res = a & b 168 | assert sset(res, [0, 1]) 169 | 170 | return res 171 | 172 | 173 | def union(a: Tensor, b: Tensor) -> Tensor: 174 | assert a.shape == b.shape 175 | assert sset(a, [0, 1]) 176 | assert sset(b, [0, 1]) 177 | 178 | res = a | b 179 | assert sset(res, [0, 1]) 180 | 181 | return res 182 | -------------------------------------------------------------------------------- /preview.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/preview.gif -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # MICCAI 2020 Tutorial 2 | ## Weakly 
Supervised CNN Segmentation: Models and Optimization 3 | 4 | This repository contains the code for the hands-on tutorial, which runs on two datasets: 5 | * A very simple toy example 6 | * [PROMISE12](https://promise12.grand-challenge.org) prostate segmentation challenge 7 | 8 | The slides and recordings of the tutorial can also be found here. 9 | 10 | ### Slides 11 | Slides from the three sessions are available in the [`slides/`](slides/) folder. 12 | 13 | 14 | ### Recordings 15 | * [Session 1](https://drive.google.com/file/d/1NVn2J4y6l7_Yxw6RGBD2CEIEedliccjQ/view?usp=sharing): Structure-driven priors: _Regularization_ 16 | * [Session 2](https://drive.google.com/file/d/1wAVxBk4U45-SZhDWviCgFShytf0wrJze/view?usp=sharing): Knowledge-driven priors (e.g., anatomy): _Constraints_ 17 | * [Session 3](https://drive.google.com/file/d/1EohLWWa5vMmEMxw3Rqk4eYaDzbr_Clp2/view?usp=sharing): Data-driven priors: _Adversarial learning_ 18 | * [Session 4](https://drive.google.com/file/d/1NMU7z0KhXYX6idgCBehdaNVAifOE6Ey3/view?usp=sharing): Hands-on: _Size constraints_ 19 | 20 | 21 | 22 | ### Hands-on 23 | ![preview.gif](preview.gif) 24 | 25 | The goal here is to enforce some inequality constraints on the size of the predicted segmentation, in the form: 26 | ``` 27 | lower bound <= predicted size <= upper bound 28 | ``` 29 | where `predicted size` is the sum of all predicted probabilities (softmax) over the whole image. 30 | 31 | To keep the example simple, we define the lower and upper bounds as 0.9 and 1.1 times the ground-truth size. All the code is contained within the `code` folder. 32 | 33 | #### Requirements 34 | The code has the following dependencies: 35 | ``` 36 | python3.7+ 37 | pytorch (latest) 38 | torchvision 39 | numpy 40 | tqdm 41 | ``` 42 | Running the PROMISE12 example requires some additional packages: 43 | ``` 44 | simpleitk 45 | scikit-image 46 | PIL 47 | ``` 48 | 49 | #### Data 50 | The data for the toy example is stored in `code/data/TOY`.
If you wish, you can regenerate the dataset with: 51 | ``` 52 | make -B data/TOY 53 | ``` 54 | or you can use [gen_toy.py](code/gen_toy.py) directly. 55 | 56 | Participants wishing to try the PROMISE12 setting need to download the data themselves, then put the .zip files inside the `code/data` folder (a list of the files is available in `code/data/promise12.lineage`). Once the three files are there, the slicing into 2D png files is automated: 57 | ``` 58 | make data/PROMISE12 59 | ``` 60 | It will: 61 | * check data integrity 62 | * extract the zip 63 | * slice the volumes into 2D slices 64 | * generate weak labels from the actual ground truth 65 | 66 | #### Training 67 | ``` 68 | >>> ./main.py -h 69 | usage: main.py [-h] [--epochs EPOCHS] [--dataset {TOY,PROMISE12}] [--mode {constrained,unconstrained,full}] [--gpu] 70 | 71 | optional arguments: 72 | -h, --help show this help message and exit 73 | --epochs EPOCHS 74 | --dataset {TOY,PROMISE12} 75 | --mode {constrained,unconstrained,full} 76 | --gpu 77 | ``` 78 | The toy example is designed to run in under 5 minutes on a laptop, training on CPU. The following commands are equivalent: 79 | ``` 80 | python3 main.py 81 | ./main.py 82 | ./main.py --epochs 200 --dataset TOY --mode unconstrained 83 | ``` 84 | 85 | The three modes correspond to: 86 | * unconstrained: use the weak labels, with only a partial cross-entropy (won't learn anything) 87 | * constrained: use the weak labels, with partial cross-entropy + size constraint (will learn) 88 | * full: use the full labels, with cross-entropy (will learn, for obvious reasons) 89 | 90 | The settings for PROMISE12 are too simple to get state-of-the-art results, even in the `full` mode, but they give a good starting point for new practitioners to build on.
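The size constraint used in the `constrained` mode can be sketched in a few lines. The snippet below is an illustrative sketch only, not the repository's actual loss (the function name `size_constraint_penalty` and the quadratic penalty form are assumptions here): it computes the predicted size as the sum of the foreground softmax probabilities over the whole image, and penalizes it quadratically whenever it falls outside `[lower, upper]`.

```python
import torch
import torch.nn.functional as F

def size_constraint_penalty(logits: torch.Tensor,
                            lower: torch.Tensor,
                            upper: torch.Tensor) -> torch.Tensor:
    # logits: (batch, K, h, w); foreground assumed to be class 1,
    # lower and upper: per-image bounds of shape (batch,)
    probs = F.softmax(logits, dim=1)
    # Predicted size: sum of foreground probabilities over the whole image
    size = probs[:, 1].sum(dim=(1, 2))
    # Quadratic penalty, zero when lower <= size <= upper
    too_small = (lower - size).clamp(min=0) ** 2
    too_big = (size - upper).clamp(min=0) ** 2
    return (too_small + too_big).mean()

# Bounds at 0.9 and 1.1 times the ground-truth size, as in the hands-on:
# gt_size = full_mask[:, 1].sum(dim=(1, 2))
# loss = partial_ce + size_constraint_penalty(logits, 0.9 * gt_size, 1.1 * gt_size)
```

Since the penalty is expressed on the softmax probabilities rather than the hard predictions, it stays differentiable and can be minimized with plain stochastic gradient descent alongside the partial cross-entropy.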
91 | -------------------------------------------------------------------------------- /slides/readme.md: -------------------------------------------------------------------------------- 1 | # MICCAI 2020 Tutorial 2 | ## Weakly Supervised CNN Segmentation: Models and Optimization 3 | ### Oct. 8 2020 4 | 5 | * [Session 1](session_1.pdf): Structure-driven priors: _Regularization_ 6 | * [Session 2](session_2.pdf): Knowledge-driven priors (e.g., anatomy): _Constraints_ 7 | * [Session 3](session_3.pdf): Data-driven priors: _Adversarial learning_ -------------------------------------------------------------------------------- /slides/session_1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/slides/session_1.pdf -------------------------------------------------------------------------------- /slides/session_2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/slides/session_2.pdf -------------------------------------------------------------------------------- /slides/session_3.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LIVIAETS/miccai_2020-weakly_supervised_tutorial/060f33d2340371bbc2e5bd699c925ca292c958e4/slides/session_3.pdf --------------------------------------------------------------------------------