├── .gitignore
├── README.md
├── day1notebooks
│   ├── lab1_transforms.ipynb
│   ├── lab2_end_to_end.ipynb
│   ├── lab2_end_to_end_annotated.ipynb
│   ├── lab3_datasets.ipynb
│   ├── lab3_networks.ipynb
│   ├── lab3_post_transforms.ipynb
│   └── segresnet_model_epoch30.pth
├── day2notebooks
│   ├── day2_mednist_GAN_tutorial.ipynb
│   ├── day2_segment_challenge.ipynb
│   └── day2_segment_challenge_solution.ipynb
├── day3challenge
│   ├── README.md
│   ├── challenge.ipynb
│   └── classifier_normal_pneumonia.zip
├── monai_bootcamp.yml
└── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # MONAIBootcamp2020
2 |
3 | This repository hosts the notebooks for the 2020 MONAI Bootcamp event.
4 | The data required for the notebooks is available through the download mechanisms given in each notebook or through the organizers. All bootcamp participants can access the bootcamp Slack channel to ask for help with any issues.
5 |
6 | Most of the notebooks in this repository benefit _considerably_ from **GPU support**, so it is _recommended_ to run them on Google Colab. Instructions to replicate the Python environment on your local machine
7 | are also provided (see [Install Local Environment](#local)).
8 |
9 | You can find a video playlist with recordings for each session on the MONAI Youtube Channel at https://www.youtube.com/channel/UCdQ8V2UrWvt9xplZFnHyEGg/.
10 |
11 | ## Run on Google Colab (Recommended)
12 |
13 | Notebooks can be accessed in Colab by using the links below:
14 |
15 | **Day 1 Notebooks:**
16 |
17 | * [Lab 1 Transforms](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab1_transforms.ipynb)
18 | * [Lab 2 End-to-End](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab2_end_to_end.ipynb)
19 | * [Lab 3 Datasets](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab3_datasets.ipynb)
20 | * [Lab 3 Networks](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab3_networks.ipynb)
21 | * [Lab 3 Post Transforms](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab3_post_transforms.ipynb)
22 |
23 | **Day 2 Notebooks:**
24 |
25 | * [MedNIST GAN Tutorial](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day2notebooks/day2_mednist_GAN_tutorial.ipynb)
26 | * [Cardiac Segmentation Challenge](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day2notebooks/day2_segment_challenge.ipynb) ([Solution](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day2notebooks/day2_segment_challenge_solution.ipynb))
27 |
28 | ### Further notes
29 |
30 | #### 1. Required Packages for Colab Execution
31 |
32 | The Day 1 notebooks include the `pip` command for installing MONAI; however, it will have to be added to any subsequent notebook.
33 | Place this at the top of the first cell to install MONAI the first time a Colab notebook is run:
34 |
35 | ```bash
36 | %pip install -qU "monai[nibabel,ignite,torchvision]==0.3.0rc2"
37 | ```
38 |
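39 | As a quick check that the install worked, you can confirm the installed version from a cell (this mirrors the `!nvidia-smi` example below):
40 |
41 | ```shell
42 | !pip show monai
43 | ```
44 |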
39 | #### 2. Enabling GPU Support
40 |
41 | To use GPU resources through Colab, remember to change the runtime to GPU:
42 |
43 | 1. From the "Runtime" menu select "Change Runtime Type"
44 | 2. Choose "GPU" from the drop-down menu
45 | 3. Click "SAVE"
46 |
47 | This will reset the notebook and probably ask you if you are a robot (these instructions assume you are not).
48 | Running
49 |
50 | ```shell
51 | !nvidia-smi
52 | ```
53 |
54 | in a cell will verify this has worked and show you what kind of hardware you have access to.
55 |
56 | #### 3. Google Drive Files and Resources
57 |
58 | Google Drive files can be accessed by mounting your account in the notebook; this is a convenient way of accessing stored data. Add the following to a cell to mount your drive; you will be asked to authenticate with Google when you run it:
59 |
60 | ```python
61 | from google.colab import drive
62 |
63 | drive.mount('/content/drive')
64 | !ls -l "/content/drive/My Drive/Colab Notebooks"
65 | ```
66 |
67 |
68 |
69 | ## Install Local Environment
70 |
71 | Instructions to set up the (local) Python development environment are given below, using either [`venv`](#venv) or [`conda`](#conda) (for the Anaconda Python distribution):
72 |
73 |
74 |
75 | ### Set up environment using `conda`
76 |
77 | If you are using the Anaconda Python distribution, you can re-create the entire (conda) virtual environment from the `monai_bootcamp.yml` (`YAML`) file:
78 |
79 | ```shell
80 | conda env create -f monai_bootcamp.yml
81 | ```
82 |
83 | This will create a new environment named `monai-bootcamp`, with all the required packages.
84 | To **activate** the environment:
85 |
86 | ```shell
87 | conda activate monai-bootcamp
88 | ```
89 |
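90 | To confirm the environment was created, you can list your conda environments (the active one is marked with an asterisk):
91 |
92 | ```shell
93 | conda env list
94 | ```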
90 |
91 |
92 | ### Set up environment using `venv`
93 |
94 | >The `venv` module provides support for creating lightweight “virtual environments” with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (which matches the version of the binary that was used to create this environment) and can have its own independent set of installed Python packages in
95 | >its site directories.
96 |
97 | **Note**: The `venv` module is part of the **Python Standard Library**, so no further installation is required. **Python 3.7+ is assumed**.
98 |
99 | The following **3** steps are required to set up a new virtual environment
100 | using `venv`:
101 |
102 | 1. Create the environment:
103 |
104 | ```shell
105 | python -m venv /monai-bootcamp
106 | ```
107 |
108 |
109 |
110 | 2. Activate the environment:
111 |
112 | ```shell
113 | source /monai-bootcamp/bin/activate
114 | ```
115 |
116 |
117 |
118 | 3. Install the required packages (using the `requirements.txt` file):
119 |
120 | ```shell
121 | pip install -r requirements.txt
122 | ```
123 |
124 |
125 |
126 | #### Notes on Jupyter Notebook Kernel
127 |
128 | **Note:** The following instructions **only** apply to virtual environments created using `venv`.
129 |
130 | To enable the new `venv` environment within your default Jupyter server, a new **Jupyter kernel** must be added.
131 |
132 | To do so, run the following command:
133 |
134 | ```shell
135 | python -m ipykernel install --user --prefix /monai-bootcamp --display-name "Python 3 (MONAI Bootcamp 2020)"
136 | ```
137 |
138 | This will add a new _Python 3 (MONAI Bootcamp 2020)_ entry to the list of available Jupyter kernels. Please make sure to **select** this kernel when running the notebooks in this repository.
139 |
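140 | As a quick sanity check, you can list the registered kernels and confirm that the new one appears:
141 |
142 | ```shell
143 | jupyter kernelspec list
144 | ```
145 |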
140 | Further information is available [here](https://ipython.readthedocs.io/en/stable/install/kernel_install.html).
141 |
142 | ### Notes on GPU support on a local machine
143 |
144 | If your local machine has a GPU, please follow the instructions in the official [PyTorch documentation](https://pytorch.org/get-started/locally/) to install PyTorch with GPU support for your system configuration.
145 |
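146 | Once PyTorch is installed, a quick way to confirm that it can see your GPU (assuming the drivers are correctly set up) is:
147 |
148 | ```shell
149 | python -c "import torch; print(torch.cuda.is_available())"
150 | ```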
--------------------------------------------------------------------------------
/day1notebooks/lab2_end_to_end.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Lab 2: End to end model training and validation with Pytorch, Ignite and MONAI\n",
8 | "---\n",
9 | "\n",
10 | "[](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab2_end_to_end.ipynb)\n",
11 | "\n",
12 | "## Overview\n",
13 | "\n",
14 | "This notebook takes you through the end to end workflow using for training a deep learning model. You'll do the following:\n",
15 | "- Download the MedNIST Dataset\n",
16 | "- Explore the data\n",
17 | "- Prepare training, validation, and test datasets\n",
18 | "- Use MONAI transforms, dataset, and dataloader\n",
19 | "- Define network, optimizer, and loss function\n",
20 | "- Train your model with a standard pytorch training loop\n",
21 | "- Plot your training metrics\n",
22 | "- Evaluate your model on a test set\n",
23 | "- Understand your results\n",
24 | "- Make some improvements\n",
25 | " - Revisit model training using ignite and MONAI features\n",
26 | " - Sort out problems limiting reproducability\n",
27 | " - Rework dataset partitioning "
28 | ]
29 | },
30 | {
31 | "cell_type": "markdown",
32 | "metadata": {},
33 | "source": [
34 | "## Import everything that we'll need from MONAI\n",
35 | "Initial imports of the various packages used to create a model using pytorch and MONAI."
36 | ]
37 | },
38 | {
39 | "cell_type": "code",
40 | "execution_count": null,
41 | "metadata": {},
42 | "outputs": [],
43 | "source": [
44 | "!pip install -qU \"monai[ignite]==0.3.0rc2\"\n",
45 | "\n",
46 | "%matplotlib inline\n",
47 | "\n",
48 | "import os\n",
49 | "import shutil\n",
50 | "import tempfile\n",
51 | "\n",
52 | "import matplotlib.pyplot as plt\n",
53 | "import numpy as np\n",
54 | "import PIL\n",
55 | "\n",
56 | "import torch\n",
57 | "\n",
58 | "import monai\n",
59 | "\n",
60 | "from monai.apps import download_and_extract\n",
61 | "from monai.config import print_config\n",
62 | "from monai.metrics import compute_roc_auc\n",
63 | "from monai.networks.nets import densenet121\n",
64 | "from monai.transforms import (\n",
65 | " AddChannel,\n",
66 | " Compose,\n",
67 | " LoadPNG,\n",
68 | " RandFlip,\n",
69 | " RandRotate,\n",
70 | " RandZoom,\n",
71 | " ScaleIntensity,\n",
72 | " ToTensor,\n",
73 | ")\n",
74 | "from monai.utils import set_determinism\n",
75 | "\n",
76 | "print_config()"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {},
82 | "source": [
83 | "## Setup data directory\n",
84 | "We'll create a temporary directory for all the MONAI data we're going to be using called MONAI_DATA_DIRECTORY."
85 | ]
86 | },
87 | {
88 | "cell_type": "code",
89 | "execution_count": null,
90 | "metadata": {},
91 | "outputs": [],
92 | "source": [
93 | "directory = os.environ.get(\"MONAI_DATA_DIRECTORY\")\n",
94 | "root_dir = tempfile.mkdtemp() if directory is None else directory\n",
95 | "print(root_dir)"
96 | ]
97 | },
98 | {
99 | "cell_type": "markdown",
100 | "metadata": {},
101 | "source": [
102 | "## Download the MedNIST dataset\n",
103 | "The `MedNIST` dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),\n",
104 | "[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),\n",
105 | "and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).\n",
106 | "\n",
107 | "The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)\n",
108 | "under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).\n",
109 | "\n",
110 | "If you use the MedNIST dataset, please acknowledge the source.\n",
111 | "\n",
112 | "We're going to download this dataset below and extract it into our temporary MONAI Data Directory."
113 | ]
114 | },
115 | {
116 | "cell_type": "code",
117 | "execution_count": null,
118 | "metadata": {},
119 | "outputs": [],
120 | "source": [
121 | "resource = \"https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1\"\n",
122 | "md5 = \"0bc7306e7427e00ad1c5526a6677552d\"\n",
123 | "\n",
124 | "compressed_file = os.path.join(root_dir, \"MedNIST.tar.gz\")\n",
125 | "data_dir = os.path.join(root_dir, \"MedNIST\")\n",
126 | "if not os.path.exists(data_dir):\n",
127 | " download_and_extract(resource, compressed_file, root_dir, md5)"
128 | ]
129 | },
130 | {
131 | "cell_type": "markdown",
132 | "metadata": {},
133 | "source": [
134 | "### Set deterministic training for reproducibility\n",
135 | "[set_determinism](https://docs.monai.io/en/latest/utils.html?highlight=set_determinism#monai.utils.misc.set_determinism) will set the random seeds in both Numpy and PyTorch to ensure reproducibility. We'll see later that we need to go a little bit further to ensure reproducibility in a jupyter notebook"
136 | ]
137 | },
138 | {
139 | "cell_type": "code",
140 | "execution_count": null,
141 | "metadata": {},
142 | "outputs": [],
143 | "source": [
144 | "set_determinism(seed=0)"
145 | ]
146 | },
147 | {
148 | "cell_type": "markdown",
149 | "metadata": {},
150 | "source": [
151 | "## Read the image filenames from the dataset folders\n",
152 | "\n",
153 | "When using a dataset, you want to understand the basics of the images, labels, and more. We'll start off by showing some of those basic statistics for MedNIST.\n",
154 | "\n",
155 | "We'll see that 6 different folders are representing 6 different categories: Hand, AbdomenCT, CXR, ChestCT, BreastMRI, HeadCT. We'll be using each of these categories as our label names. "
156 | ]
157 | },
158 | {
159 | "cell_type": "code",
160 | "execution_count": null,
161 | "metadata": {},
162 | "outputs": [],
163 | "source": [
164 | "class_names = sorted(x for x in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, x)))\n",
165 | "num_class = len(class_names)\n",
166 | "image_files = [\n",
167 | " [\n",
168 | " os.path.join(data_dir, class_names[i], x)\n",
169 | " for x in os.listdir(os.path.join(data_dir, class_names[i]))\n",
170 | " ]\n",
171 | " for i in range(num_class)\n",
172 | "]\n",
173 | "num_each = [len(image_files[i]) for i in range(num_class)]\n",
174 | "image_files_list = []\n",
175 | "image_class = []\n",
176 | "for i in range(num_class):\n",
177 | " image_files_list.extend(image_files[i])\n",
178 | " image_class.extend([i] * num_each[i])\n",
179 | "num_total = len(image_class)\n",
180 | "image_width, image_height = PIL.Image.open(image_files_list[0]).size\n",
181 | "\n",
182 | "print(f\"Total image count: {num_total}\")\n",
183 | "print(f\"Image dimensions: {image_width} x {image_height}\")\n",
184 | "print(f\"Label names: {class_names}\")\n",
185 | "print(f\"Label counts: {num_each}\")"
186 | ]
187 | },
188 | {
189 | "cell_type": "markdown",
190 | "metadata": {},
191 | "source": [
192 | "## Randomly pick images from the dataset to visualize and check\n",
193 | "\n",
194 | "We want to understand what the images we're using look like, so we'll start by visualizing a few random images."
195 | ]
196 | },
197 | {
198 | "cell_type": "code",
199 | "execution_count": null,
200 | "metadata": {},
201 | "outputs": [],
202 | "source": [
203 | "plt.subplots(3, 3, figsize=(8, 8))\n",
204 | "for i, k in enumerate(np.random.randint(num_total, size=9)):\n",
205 | " im = PIL.Image.open(image_files_list[k])\n",
206 | " arr = np.array(im)\n",
207 | " plt.subplot(3, 3, i + 1)\n",
208 | " plt.xlabel(class_names[image_class[k]])\n",
209 | " plt.imshow(arr, cmap=\"gray\", vmin=0, vmax=255)\n",
210 | "plt.tight_layout()\n",
211 | "plt.show()"
212 | ]
213 | },
214 | {
215 | "cell_type": "markdown",
216 | "metadata": {},
217 | "source": [
218 | "## Prepare training, validation, and test data lists\n",
219 | "\n",
220 | "We want to split the data into 3 different sets, one for training, one for validation, and one for testing. We'll use a ratio of 80/10/10 for those sets."
221 | ]
222 | },
223 | {
224 | "cell_type": "code",
225 | "execution_count": null,
226 | "metadata": {},
227 | "outputs": [],
228 | "source": [
229 | "val_frac = 0.1\n",
230 | "test_frac = 0.1\n",
231 | "train_x = list()\n",
232 | "train_y = list()\n",
233 | "val_x = list()\n",
234 | "val_y = list()\n",
235 | "test_x = list()\n",
236 | "test_y = list()\n",
237 | "\n",
238 | "for i in range(num_total):\n",
239 | " rann = np.random.random()\n",
240 | " if rann < val_frac:\n",
241 | " val_x.append(image_files_list[i])\n",
242 | " val_y.append(image_class[i])\n",
243 | " elif rann < test_frac + val_frac:\n",
244 | " test_x.append(image_files_list[i])\n",
245 | " test_y.append(image_class[i])\n",
246 | " else:\n",
247 | " train_x.append(image_files_list[i])\n",
248 | " train_y.append(image_class[i])\n",
249 | "\n",
250 | "print(f\"Training count: {len(train_x)}, Validation count: {len(val_x)}, Test count: {len(test_x)}\")"
251 | ]
252 | },
253 | {
254 | "cell_type": "markdown",
255 | "metadata": {},
256 | "source": [
257 | "## Define MONAI transforms, Dataset and Dataloader to pre-process data\n",
258 | "\n",
259 | "We'll define our transform using `Compose`. In this Array of Transforms, we'll load the image, add a channel, scale its intensity, utilize a few random functions and finally create a tensor."
260 | ]
261 | },
262 | {
263 | "cell_type": "code",
264 | "execution_count": null,
265 | "metadata": {},
266 | "outputs": [],
267 | "source": [
268 | "train_transforms = Compose(\n",
269 | " [\n",
270 | " LoadPNG(image_only=True),\n",
271 | " AddChannel(),\n",
272 | " ScaleIntensity(),\n",
273 | " RandRotate(range_x=15, prob=0.5, keep_size=True),\n",
274 | " RandFlip(spatial_axis=0, prob=0.5),\n",
275 | " RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),\n",
276 | " ToTensor(),\n",
277 | " ]\n",
278 | ")\n",
279 | "\n",
280 | "val_transforms = Compose([LoadPNG(image_only=True), AddChannel(), ScaleIntensity(), ToTensor()])"
281 | ]
282 | },
283 | {
284 | "cell_type": "markdown",
285 | "metadata": {},
286 | "source": [
287 | "## Initialise the datasets and loaders for training, validation and test sets\n",
288 | " * Define a simple dataset, that we'll call `MedNISTDataset`, that groups:\n",
289 | " * Images\n",
290 | " * Labels\n",
291 | " * The transforms that are to be run on the images and labels\n",
292 | " * Create three instances of this dataset:\n",
293 | " * One for training\n",
294 | " * One for validation\n",
295 | " * One for testing\n",
296 | " \n",
297 | "We'll use a batch size of 512 and employ 10 workers to load the data."
298 | ]
299 | },
300 | {
301 | "cell_type": "code",
302 | "execution_count": null,
303 | "metadata": {},
304 | "outputs": [],
305 | "source": [
306 | "batch_size = 512\n",
307 | "num_workers = 10\n",
308 | "\n",
309 | "class MedNISTDataset(torch.utils.data.Dataset):\n",
310 | " def __init__(self, image_files, labels, transforms):\n",
311 | " self.image_files = image_files\n",
312 | " self.labels = labels\n",
313 | " self.transforms = transforms\n",
314 | "\n",
315 | " def __len__(self):\n",
316 | " return len(self.image_files)\n",
317 | "\n",
318 | " def __getitem__(self, index):\n",
319 | " return self.transforms(self.image_files[index]), self.labels[index]\n",
320 | "\n",
321 | "\n",
322 | "train_ds = MedNISTDataset(train_x, train_y, train_transforms)\n",
323 | "train_loader = torch.utils.data.DataLoader(train_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)\n",
324 | "\n",
325 | "val_ds = MedNISTDataset(val_x, val_y, val_transforms)\n",
326 | "val_loader = torch.utils.data.DataLoader(val_ds, batch_size=batch_size, num_workers=num_workers)\n",
327 | "\n",
328 | "test_ds = MedNISTDataset(test_x, test_y, val_transforms)\n",
329 | "test_loader = torch.utils.data.DataLoader(test_ds, batch_size=batch_size, num_workers=num_workers)"
330 | ]
331 | },
332 | {
333 | "cell_type": "markdown",
334 | "metadata": {},
335 | "source": [
336 | "## Define network and optimizer\n",
337 | "\n",
338 | "1. Set `learning_rate` for how much the model is updated per step\n",
339 | "1. The fetch a pytorch `device` for the GPU\n",
340 | "1. Instantiate a [densenet121](https://docs.monai.io/en/latest/networks.html?highlight=densenet#monai.networks.nets.densenet121) model instance and 'send' it to the GPU using `device`\n",
341 | " * This is a standard MONAI implementation; it is capable of 2D and 3D operation but here we are using it in 2D mode\n",
342 | "1. We'll make use of the Adam optimizer"
343 | ]
344 | },
345 | {
346 | "cell_type": "code",
347 | "execution_count": null,
348 | "metadata": {},
349 | "outputs": [],
350 | "source": [
351 | "learning_rate = 1e-5\n",
352 | "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
353 | "net = densenet121(spatial_dims=2, in_channels=1, out_channels=num_class).to(device)\n",
354 | "loss_function = torch.nn.CrossEntropyLoss()\n",
355 | "optimizer = torch.optim.Adam(net.parameters(), learning_rate)"
356 | ]
357 | },
358 | {
359 | "cell_type": "markdown",
360 | "metadata": {},
361 | "source": [
362 | "## Network training\n",
363 | "We are hand-rolling a basic pytorch training loop here:\n",
364 | " * standard pytorch training loop\n",
365 | " * step through each training epoch, running through the training set in batches\n",
366 | " * after each epoch, run a validation pass, evaluating the network\n",
367 | " * if it shows improved performance, save out the model weights\n",
368 | " * later we will revisit training loops in a more Ignite / MONAI fashion"
369 | ]
370 | },
371 | {
372 | "cell_type": "code",
373 | "execution_count": null,
374 | "metadata": {},
375 | "outputs": [],
376 | "source": [
377 | "epoch_num = 4\n",
378 | "best_metric = -1\n",
379 | "best_metric_epoch = -1\n",
380 | "epoch_loss_values = list()\n",
381 | "metric_values = list()\n",
382 | "\n",
383 | "for epoch in range(epoch_num):\n",
384 | " print(\"-\" * 10)\n",
385 | " print(f\"epoch {epoch + 1}/{epoch_num}\")\n",
386 | "\n",
387 | " epoch_loss = 0\n",
388 | " step = 1\n",
389 | "\n",
390 | " steps_per_epoch = len(train_ds) // train_loader.batch_size\n",
391 | "\n",
392 | " # put the network in train mode; this tells the network and its modules to\n",
393 | " # enable training elements such as normalisation and dropout, where applicable\n",
394 | " net.train()\n",
395 | " for batch_data in train_loader:\n",
396 | "\n",
397 | " # move the data to the GPU\n",
398 | " inputs, labels = batch_data[0].to(device), batch_data[1].to(device)\n",
399 | "\n",
400 | " # prepare the gradients for this step's back propagation\n",
401 | " optimizer.zero_grad()\n",
402 | " \n",
403 | " # run the network forwards\n",
404 | " outputs = net(inputs)\n",
405 | " \n",
406 | " # run the loss function on the outputs\n",
407 | " loss = loss_function(outputs, labels)\n",
408 | " \n",
409 | " # compute the gradients\n",
410 | " loss.backward()\n",
411 | " \n",
412 | " # tell the optimizer to update the weights according to the gradients\n",
413 | " # and its internal optimisation strategy\n",
414 | " optimizer.step()\n",
415 | "\n",
416 | " epoch_loss += loss.item()\n",
417 | " print(f\"{step}/{len(train_ds) // train_loader.batch_size + 1}, training_loss: {loss.item():.4f}\")\n",
418 | " step += 1\n",
419 | "\n",
420 | " epoch_loss /= step\n",
421 | " epoch_loss_values.append(epoch_loss)\n",
422 | " print(f\"epoch {epoch + 1} average loss: {epoch_loss:.4f}\")\n",
423 | "\n",
424 | " # after each epoch, run our metrics to evaluate it, and, if they are an improvement,\n",
425 | " # save the model out\n",
426 | " \n",
427 | " # switch off training features of the network for this pass\n",
428 | " net.eval()\n",
429 | "\n",
430 | " # 'with torch.no_grad()' switches off gradient calculation for the scope of its context\n",
431 | " with torch.no_grad():\n",
432 | " # create lists to which we will concatenate the the validation results\n",
433 | " images = list()\n",
434 | " labels = list()\n",
435 | "\n",
436 | " # iterate over each batch of images and run them through the network in evaluation mode\n",
437 | " for val_data in val_loader:\n",
438 | " val_images, val_labels = val_data[0].to(device), val_data[1].to(device)\n",
439 | "\n",
440 | " # run the network\n",
441 | " val_pred = net(val_images)\n",
442 | "\n",
443 | " images.append(val_pred)\n",
444 | " labels.append(val_labels)\n",
445 | "\n",
446 | " # concatenate the predicted labels with each other and the actual labels with each other\n",
447 | " y_pred = torch.cat(images)\n",
448 | " y = torch.cat(labels)\n",
449 | "\n",
450 | " # we are using the area under the receiver operating characteristic (ROC) curve to determine\n",
451 | " # whether this epoch has improved the best performance of the network so far, in which case\n",
452 | " # we save the network in this state\n",
453 | " auc_metric = compute_roc_auc(y_pred, y, to_onehot_y=True, softmax=True)\n",
454 | " metric_values.append(auc_metric)\n",
455 | " acc_value = torch.eq(y_pred.argmax(dim=1), y)\n",
456 | " acc_metric = acc_value.sum().item() / len(acc_value)\n",
457 | " if auc_metric > best_metric:\n",
458 | " best_metric = auc_metric\n",
459 | " best_metric_epoch = epoch + 1\n",
460 | " torch.save(net.state_dict(), os.path.join(root_dir, \"best_metric_model.pth\"))\n",
461 | " print(\"saved new best metric network\")\n",
462 | " print(\n",
463 | " f\"current epoch: {epoch + 1} current AUC: {auc_metric:.4f} /\"\n",
464 | " f\" current accuracy: {acc_metric:.4f} best AUC: {best_metric:.4f} /\"\n",
465 | " f\" at epoch: {best_metric_epoch}\"\n",
466 | " )\n",
467 | "\n",
468 | "print(f\"train completed, best_metric: {best_metric:.4f} at epoch: {best_metric_epoch}\")"
469 | ]
470 | },
471 | {
472 | "cell_type": "markdown",
473 | "metadata": {},
474 | "source": [
475 | "## Plot the loss and metric\n",
476 | "\n",
477 | "Once we're done training we want to visualize our Loss and Accuracy."
478 | ]
479 | },
480 | {
481 | "cell_type": "code",
482 | "execution_count": null,
483 | "metadata": {},
484 | "outputs": [],
485 | "source": [
486 | "plt.figure(\"train\", (12, 6))\n",
487 | "plt.subplot(1, 2, 1)\n",
488 | "plt.title(\"Epoch Average Loss\")\n",
489 | "x = [i + 1 for i in range(len(epoch_loss_values))]\n",
490 | "y = epoch_loss_values\n",
491 | "plt.xlabel(\"epoch\")\n",
492 | "plt.plot(x, y)\n",
493 | "plt.subplot(1, 2, 2)\n",
494 | "plt.title(\"Val AUC\")\n",
495 | "x = [(i + 1) for i in range(len(metric_values))]\n",
496 | "y = metric_values\n",
497 | "plt.xlabel(\"epoch\")\n",
498 | "plt.plot(x, y)\n",
499 | "plt.show()"
500 | ]
501 | },
502 | {
503 | "cell_type": "markdown",
504 | "metadata": {},
505 | "source": [
506 | "## Evaluate the model on the test dataset\n",
507 | "\n",
508 | "After training and validation, we now have the best model as determined by the validation dataset. But now we need to evaluate the model on the test dataset to check whether the final model is robust and not over-fitting. We'll use these predictions to generate a classification report."
509 | ]
510 | },
511 | {
512 | "cell_type": "code",
513 | "execution_count": null,
514 | "metadata": {},
515 | "outputs": [],
516 | "source": [
517 | "net.load_state_dict(torch.load(os.path.join(root_dir, \"best_metric_model.pth\")))\n",
518 | "net.eval()\n",
519 | "y_true = list()\n",
520 | "y_pred = list()\n",
521 | "with torch.no_grad():\n",
522 | " for test_data in test_loader:\n",
523 | " test_images, test_labels = (\n",
524 | " test_data[0].to(device),\n",
525 | " test_data[1].to(device),\n",
526 | " )\n",
527 | " pred = net(test_images).argmax(dim=1)\n",
528 | " for i in range(len(pred)):\n",
529 | " y_true.append(test_labels[i].item())\n",
530 | " y_pred.append(pred[i].item())"
531 | ]
532 | },
533 | {
534 | "cell_type": "markdown",
535 | "metadata": {},
536 | "source": [
537 | "## Some light analytics - classification report\n",
538 | "\n",
539 | "We'll utilize scikit-learn's classification report to get the precision, recall, and f1-score for each category."
540 | ]
541 | },
542 | {
543 | "cell_type": "code",
544 | "execution_count": null,
545 | "metadata": {},
546 | "outputs": [],
547 | "source": [
548 | "from sklearn.metrics import classification_report\n",
549 | "print(classification_report(y_true, y_pred, target_names=class_names, digits=4))\n"
550 | ]
551 | },
552 | {
553 | "cell_type": "markdown",
554 | "metadata": {},
555 | "source": [
556 | "## Some light analytics - confusion matrix\n",
557 | "\n",
558 | "Let's also create a confusion matrix to get a better understanding of the failure cases"
559 | ]
560 | },
561 | {
562 | "cell_type": "code",
563 | "execution_count": null,
564 | "metadata": {},
565 | "outputs": [],
566 | "source": [
567 | "from sklearn.metrics import confusion_matrix\n",
568 | "cmat = confusion_matrix(y_true, y_pred)\n",
569 | "fig = plt.figure()\n",
570 | "ax = fig.add_subplot(111)\n",
571 | "cax = ax.matshow(confusion_matrix(y_true, y_pred), cmap=\"terrain\", interpolation='nearest')\n",
572 | "fig.colorbar(cax)\n",
573 | "\n",
574 | "ax.set_xticklabels(['']+class_names, rotation=270)\n",
575 | "ax.set_yticklabels(['']+class_names)\n",
576 | "\n",
577 | "plt.show()"
578 | ]
579 | },
580 | {
581 | "cell_type": "markdown",
582 | "metadata": {},
583 | "source": [
584 | "## Let's make some changes\n",
585 | "Everything that we have done so far uses MONAI with pytorch in a very vanilla fashion. The initial training / validation loop is written to show you the nuts and bolts of pytorch. Now let's explore starting the move towards [ignite](https://pytorch.org/ignite/) and features of MONAI designed to work with it. We'll also fix up a couple of outstanding issues that the notebook has in its current form.\n",
586 | "\n",
587 | "* MONAI-specific\n",
588 | " * Making use of Ignite\n",
589 | "* Miscellaneous\n",
590 | " * Issues with determinism\n",
591 | " * Improving dataset partitioning"
592 | ]
593 | },
594 | {
595 | "cell_type": "code",
596 | "execution_count": null,
597 | "metadata": {},
598 | "outputs": [],
599 | "source": [
600 | "from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer\n",
601 | "from ignite.handlers import ModelCheckpoint\n",
602 | "from ignite.metrics import Accuracy\n",
603 | "from monai.handlers import ROCAUC\n",
604 | "\n",
605 | "step = 1\n",
606 | "iter_losses=[]\n",
607 | "batch_sizes=[]\n",
608 | "epoch_loss_values = []\n",
609 | "metric_values = []\n",
610 | "\n",
611 | "iter_losses=[]\n",
612 | "epoch_loss_values = []\n",
613 | "metric_values = []\n",
614 | "\n",
615 | "# Training\n",
616 | "\n",
617 | "# this trainer takes care of the training loop for us\n",
618 | "trainer = create_supervised_trainer(net, optimizer, loss_function, device, False)\n",
619 | "\n",
620 | "# calculate the number of steps per epoch up front\n",
621 | "steps_per_epoch = len(train_ds) // train_loader.batch_size\n",
622 | "if len(train_ds) % train_loader.batch_size != 0:\n",
623 | " steps_per_epoch += 1\n",
624 | "\n",
625 | "\n",
626 | "# create a handler for recording the loss after each input. Improve upon our earlier example\n",
627 | "# by also recording the batch size, so we can perform a weighted average for the overall average\n",
628 | "# loss\n",
629 | "@trainer.on(Events.ITERATION_COMPLETED)\n",
630 | "def _end_iter(engine):\n",
631 | " global step\n",
632 | " loss = engine.state.output\n",
633 | " batch_len = len(engine.state.batch[0])\n",
634 | " epoch = engine.state.epoch\n",
635 | " epoch_len = engine.state.max_epochs\n",
636 | " iter_losses.append(loss)\n",
637 | " batch_sizes.append(batch_len)\n",
638 | " print(f'epoch {epoch}/{epoch_len}, step {step}/{steps_per_epoch}, training_loss = {loss:.4f}') \n",
639 | " step += 1\n",
640 | " \n",
641 | "# Validation\n",
642 | "\n",
643 | "val_metrics = {'accuracy': Accuracy(), 'rocauc': ROCAUC(to_onehot_y=True,softmax=True)}\n",
644 | "evaluator = create_supervised_evaluator(net, val_metrics, device, True)\n",
645 | "\n",
646 | "# validation is run every n training epochs in response to the trainer completing\n",
647 | "# an epoch. Here we use the decorator syntax to add a function that runs it to the\n",
648 | "# EPOCH_COMPLETED event\n",
649 | "@trainer.on(Events.EPOCH_COMPLETED)\n",
650 | "def run_validation(engine):\n",
651 | " global step\n",
652 | " evaluator.run(val_loader)\n",
653 | "\n",
654 | " # the overall average loss must be weighted by batch size\n",
655 | " overall_average_loss = np.average(iter_losses, weights=batch_sizes)\n",
656 | " epoch_loss_values.append(overall_average_loss)\n",
657 | "\n",
658 | " # clear the contents of iter_losses and batch_sizes for the next epoch\n",
659 | " del iter_losses[:]\n",
660 | " del batch_sizes[:]\n",
661 | " \n",
662 | " # fetch and report the validation metrics\n",
663 | " acc = evaluator.state.metrics['accuracy']\n",
664 | " roc = evaluator.state.metrics['rocauc']\n",
665 | " metric_values.append(roc)\n",
666 | " print(f\"evaluation for epoch {engine.state.epoch}, accuracy = {acc:.4f}, rocauc = {roc:.4f}\")\n",
667 | "\n",
668 | " # reset step for the next epoch\n",
669 | " step = 1\n",
670 | " \n",
671 | "# create a checkpoint handler to save the network weights based on the area under the ROC curve\n",
672 | "# as before\n",
673 | "def _score(_):\n",
674 | " return metric_values[-1]\n",
675 | "\n",
676 | "# create a model checkpointer to save the network\n",
677 | "checkpoint_handler = ModelCheckpoint(root_dir, filename_prefix='best_metric_model', score_name='',\n",
678 | " n_saved=1, require_empty=False, score_function=_score)\n",
679 | "\n",
680 | "# handlers are attached to events in trainers and evaluators\n",
681 | "trainer.add_event_handler(event_name=Events.EPOCH_COMPLETED,\n",
682 | " handler=checkpoint_handler, to_save={'net': net})\n",
683 | "# train (and evaluate) the network, Ignite-style!\n",
684 | "train_epochs = 4\n",
685 | "state = trainer.run(train_loader, train_epochs)\n",
686 | "\n",
687 | "best_rocauc = max(metric_values)\n",
688 | "print(f\"train completed, best_metric: {best_rocauc:.4f} at epoch: {metric_values.index(best_rocauc)}\")"
689 | ]
690 | },
691 | {
692 | "cell_type": "markdown",
693 | "metadata": {},
694 | "source": [
695 | "# Issues with determinism\n",
696 | "* MONAI provides `monai.utils.set_determinism` for replicable training\n",
697 | " * Easy to accidentally defeat, especially in a jupyter / IPython notebook\n",
698 | "* How many uses of `numpy.random`'s underlying global instance does this notebook have?\n",
699 | " * Dataset partitioning\n",
700 | " * Image previewing\n",
701 | " * Transforms\n",
702 | " * MONAI transforms with randomised behaviour can be given / told to create their own internal `numpy.random.RandomState` instances\n"
703 | ]
704 | },
705 | {
706 | "cell_type": "code",
707 | "execution_count": null,
708 | "metadata": {},
709 | "outputs": [],
710 | "source": [
711 | "# Setting up transforms, revisited\n",
712 | "\n",
713 | "rseed = 12345678\n",
714 | "\n",
715 | "train_transforms = Compose(\n",
716 | " [\n",
717 | " LoadPNG(image_only=True),\n",
718 | " AddChannel(),\n",
719 | " ScaleIntensity(),\n",
720 | " RandRotate(range_x=15, prob=0.5, keep_size=True).set_random_state(rseed),\n",
721 | " RandFlip(spatial_axis=0, prob=0.5).set_random_state(rseed),\n",
722 | " RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5).set_random_state(rseed),\n",
723 | " ToTensor(),\n",
724 | " ]\n",
725 | ")\n",
726 | "\n",
727 | "val_transforms = Compose([LoadPNG(image_only=True), AddChannel(), ScaleIntensity(), ToTensor()])"
728 | ]
729 | },
730 | {
731 | "cell_type": "markdown",
732 | "metadata": {},
733 | "source": [
734 | "# Improving dataset partitioning\n",
735 | " * Current code results in random numbers of images / labels each time it is run\n",
736 | " * np.shuffle to the rescue"
737 | ]
738 | },
739 | {
740 | "cell_type": "code",
741 | "execution_count": null,
742 | "metadata": {},
743 | "outputs": [],
744 | "source": [
745 | "from math import floor\n",
746 | "\n",
747 | "# make this selection deterministic and controllable by seed\n",
748 | "dataset_seed = 12345678\n",
749 | "r = np.random.RandomState(dataset_seed)\n",
750 | "\n",
751 | "# calculate the number of images we want for the validation and test groups\n",
752 | "validation_proportion = 0.1\n",
753 | "test_proportion = 0.1\n",
754 | "validation_count = floor(validation_proportion * num_total)\n",
755 | "test_count = floor(test_proportion * num_total)\n",
756 | "\n",
757 | "groups = np.zeros(num_total, dtype=np.int32)\n",
758 | "\n",
759 | "# set the appropriate number of '1's for the validation dataset\n",
760 | "groups[:validation_count] = 1\n",
761 | "\n",
762 | "# then set the appropriate number of '2's for the test dataset\n",
763 | "groups[validation_count:validation_count + test_count] = 2\n",
764 | "\n",
765 | "# Shuffle the sequence so that \n",
766 | "r.shuffle(groups)\n",
767 | "\n",
768 | "image_sets = list(), list(), list()\n",
769 | "label_sets = list(), list(), list()\n",
770 | "\n",
771 | "for n in range(num_total):\n",
772 | " image_sets[groups[n]].append(image_files_list[n])\n",
773 | " label_sets[groups[n]].append(image_class[n])\n",
774 | " \n",
775 | "train_x, val_x, test_x = image_sets\n",
776 | "train_y, val_y, test_y = label_sets\n",
777 | "print(len(train_x), len(val_x), len(test_x))"
778 | ]
779 | },
780 | {
781 | "cell_type": "markdown",
782 | "metadata": {},
783 | "source": [
784 | "## Now try running the notebook!\n",
785 | " * Try out the pytorch training loop and the ignite & monai style training loop\n",
786 | " * Make the entire notebook work deterministically by replacing all implicit use of the global numpy RandomState with explicit RandomState instances\n",
787 | " * Replace the dataset partitioning with the improved version\n",
788 | " * Experiment with adding new metrics to the Ignite training / validation loop"
789 | ]
790 | },
791 | {
792 | "cell_type": "markdown",
793 | "metadata": {},
794 | "source": [
795 | "## Summary\n",
796 | "\n",
797 | "In this notebook, we went through an end-to-end workflow to train the MedNIST dataset using a densenet121 network. Along the way, you did the following:\n",
798 | "- Learned about the MedNIST Data and downloaded it\n",
799 | "- Visualized the data to understand the images we're using\n",
800 | "- Setup the datasets for use in the model training\n",
801 | "- Defined our transforms, datasets, network, and optimizers\n",
802 | "- Trained a densenet model and saved the best model as determined by the validation accuracy\n",
803 | "- Plotted your training results\n",
804 | "- Evaluated your model against the test set\n",
805 | "- Ran your final predictions through a classification report to understand more about your final results\n",
806 | "\n",
807 | "For full API documentation, please visit https://docs.monai.io."
808 | ]
809 | },
810 | {
811 | "cell_type": "markdown",
812 | "metadata": {},
813 | "source": [
814 | "## Tidy everything up"
815 | ]
816 | },
817 | {
818 | "cell_type": "code",
819 | "execution_count": null,
820 | "metadata": {},
821 | "outputs": [],
822 | "source": [
823 | "if directory is None:\n",
824 | " shutil.rmtree(root_dir)"
825 | ]
826 | }
827 | ],
828 | "metadata": {
829 | "kernelspec": {
830 | "display_name": "Python 3",
831 | "language": "python",
832 | "name": "python3"
833 | },
834 | "language_info": {
835 | "codemirror_mode": {
836 | "name": "ipython",
837 | "version": 3
838 | },
839 | "file_extension": ".py",
840 | "mimetype": "text/x-python",
841 | "name": "python",
842 | "nbconvert_exporter": "python",
843 | "pygments_lexer": "ipython3",
844 | "version": "3.8.5"
845 | }
846 | },
847 | "nbformat": 4,
848 | "nbformat_minor": 4
849 | }
850 |
--------------------------------------------------------------------------------
/day1notebooks/lab2_end_to_end_annotated.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Lab 2: End to end model training and validation with Pytorch, Ignite and MONAI\n",
8 | "---\n",
9 | "## Overview\n",
10 | "\n",
11 | "This notebook takes you through the end to end workflow using for training a deep learning model. You'll do the following:\n",
12 | "- Download the MedNIST Dataset\n",
13 | "- Explore the data\n",
14 | "- Prepare training, validation, and test datasets\n",
15 | "- Use MONAI transforms, dataset, and dataloader\n",
16 | "- Define network, optimizer, and loss function\n",
17 | "- Train your model with a standard pytorch training loop\n",
18 | "- Plot your training metrics\n",
19 | "- Evaluate your model on a test set\n",
20 | "- Understand your results\n",
21 | "- Make some improvements\n",
22 | " - Revisit model training using ignite and MONAI features\n",
23 | " - Sort out problems limiting reproducability\n",
24 | " - Rework dataset partitioning "
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {},
30 | "source": [
31 | "## Import everything that we'll need from MONAI\n",
32 | "Initial imports of the various packages used to create a model using pytorch and MONAI."
33 | ]
34 | },
35 | {
36 | "cell_type": "code",
37 | "execution_count": null,
38 | "metadata": {},
39 | "outputs": [],
40 | "source": [
41 | "!pip install -qU \"monai[ignite]==0.3.0rc2\"\n",
42 | "\n",
43 | "%matplotlib inline\n",
44 | "\n",
45 | "import os\n",
46 | "import shutil\n",
47 | "import tempfile\n",
48 | "\n",
49 | "import matplotlib.pyplot as plt\n",
50 | "import numpy as np\n",
51 | "import PIL\n",
52 | "\n",
53 | "import torch\n",
54 | "\n",
55 | "import monai\n",
56 | "\n",
57 | "from monai.apps import download_and_extract\n",
58 | "from monai.config import print_config\n",
59 | "from monai.metrics import compute_roc_auc\n",
60 | "from monai.networks.nets import densenet121\n",
61 | "from monai.transforms import (\n",
62 | " AddChannel,\n",
63 | " Compose,\n",
64 | " LoadPNG,\n",
65 | " RandFlip,\n",
66 | " RandRotate,\n",
67 | " RandZoom,\n",
68 | " ScaleIntensity,\n",
69 | " ToTensor,\n",
70 | ")\n",
71 | "from monai.utils import set_determinism\n",
72 | "\n",
73 | "print_config()"
74 | ]
75 | },
76 | {
77 | "cell_type": "markdown",
78 | "metadata": {},
79 | "source": [
80 | "## Setup data directory\n",
81 | "We'll create a temporary directory for all the MONAI data we're going to be using called MONAI_DATA_DIRECTORY."
82 | ]
83 | },
84 | {
85 | "cell_type": "code",
86 | "execution_count": null,
87 | "metadata": {},
88 | "outputs": [],
89 | "source": [
90 | "directory = os.environ.get(\"MONAI_DATA_DIRECTORY\")\n",
91 | "root_dir = tempfile.mkdtemp() if directory is None else directory\n",
92 | "print(root_dir)"
93 | ]
94 | },
95 | {
96 | "cell_type": "markdown",
97 | "metadata": {},
98 | "source": [
99 | "## Download the MedNIST dataset\n",
100 | "The `MedNIST` dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),\n",
101 | "[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),\n",
102 | "and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).\n",
103 | "\n",
104 | "The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)\n",
105 | "under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).\n",
106 | "\n",
107 | "If you use the MedNIST dataset, please acknowledge the source.\n",
108 | "\n",
109 | "We're going to download this dataset below and extract it into our temporary MONAI Data Directory."
110 | ]
111 | },
112 | {
113 | "cell_type": "code",
114 | "execution_count": null,
115 | "metadata": {},
116 | "outputs": [],
117 | "source": [
118 | "resource = \"https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1\"\n",
119 | "md5 = \"0bc7306e7427e00ad1c5526a6677552d\"\n",
120 | "\n",
121 | "compressed_file = os.path.join(root_dir, \"MedNIST.tar.gz\")\n",
122 | "data_dir = os.path.join(root_dir, \"MedNIST\")\n",
123 | "if not os.path.exists(data_dir):\n",
124 | " download_and_extract(resource, compressed_file, root_dir, md5)"
125 | ]
126 | },
127 | {
128 | "cell_type": "markdown",
129 | "metadata": {},
130 | "source": [
131 | "### Set deterministic training for reproducibility\n",
132 | "[set_determinism](https://docs.monai.io/en/latest/utils.html?highlight=set_determinism#monai.utils.misc.set_determinism) will set the random seeds in both Numpy and PyTorch to ensure reproducibility. We'll see later that we need to go a little bit further to ensure reproducibility in a jupyter notebook"
133 | ]
134 | },
135 | {
136 | "cell_type": "code",
137 | "execution_count": null,
138 | "metadata": {},
139 | "outputs": [],
140 | "source": [
141 | "set_determinism(seed=12345678) # seed of 0 is a bad thing to do. In general, seeds should contain a reasonable number of binary 1's and small numbers don't have them"
142 | ]
143 | },
144 | {
145 | "cell_type": "markdown",
146 | "metadata": {},
147 | "source": [
148 | "## Read the image filenames from the dataset folders\n",
149 | "\n",
150 | "When using a dataset, you want to understand the basics of the images, labels, and more. We'll start off by showing some of those basic statistics for MedNIST.\n",
151 | "\n",
152 | "We'll see that 6 different folders are representing 6 different categories: Hand, AbdomenCT, CXR, ChestCT, BreastMRI, HeadCT. We'll be using each of these categories as our label names. "
153 | ]
154 | },
155 | {
156 | "cell_type": "code",
157 | "execution_count": null,
158 | "metadata": {},
159 | "outputs": [],
160 | "source": [
161 | "class_names = sorted(x for x in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, x)))\n",
162 | "num_class = len(class_names)\n",
163 | "image_files = [\n",
164 | " [\n",
165 | " os.path.join(data_dir, class_names[i], x)\n",
166 | " for x in os.listdir(os.path.join(data_dir, class_names[i]))\n",
167 | " ]\n",
168 | " for i in range(num_class)\n",
169 | "]\n",
170 | "num_each = [len(image_files[i]) for i in range(num_class)]\n",
171 | "image_files_list = []\n",
172 | "image_class = []\n",
173 | "for i in range(num_class):\n",
174 | " image_files_list.extend(image_files[i])\n",
175 | " image_class.extend([i] * num_each[i])\n",
176 | "num_total = len(image_class)\n",
177 | "image_width, image_height = PIL.Image.open(image_files_list[0]).size\n",
178 | "\n",
179 | "print(f\"Total image count: {num_total}\")\n",
180 | "print(f\"Image dimensions: {image_width} x {image_height}\")\n",
181 | "print(f\"Label names: {class_names}\")\n",
182 | "print(f\"Label counts: {num_each}\")"
183 | ]
184 | },
185 | {
186 | "cell_type": "markdown",
187 | "metadata": {},
188 | "source": [
189 | "## Randomly pick images from the dataset to visualize and check\n",
190 | "\n",
191 | "We want to understand what the images we're using look like, so we'll start by visualizing a few random images."
192 | ]
193 | },
194 | {
195 | "cell_type": "code",
196 | "execution_count": null,
197 | "metadata": {},
198 | "outputs": [],
199 | "source": [
200 | "plt.subplots(3, 3, figsize=(8, 8))\n",
201 | "for i, k in enumerate(np.random.randint(num_total, size=9)):\n",
202 | " im = PIL.Image.open(image_files_list[k])\n",
203 | " arr = np.array(im)\n",
204 | " plt.subplot(3, 3, i + 1)\n",
205 | " plt.xlabel(class_names[image_class[k]])\n",
206 | " plt.imshow(arr, cmap=\"gray\", vmin=0, vmax=255)\n",
207 | "plt.tight_layout()\n",
208 | "plt.show()"
209 | ]
210 | },
211 | {
212 | "cell_type": "markdown",
213 | "metadata": {},
214 | "source": [
215 | "## Prepare training, validation, and test data lists\n",
216 | "\n",
217 | "We want to split the data into 3 different sets, one for training, one for validation, and one for testing. We'll use a ratio of 80/10/10 for those sets."
218 | ]
219 | },
220 | {
221 | "cell_type": "code",
222 | "execution_count": null,
223 | "metadata": {},
224 | "outputs": [],
225 | "source": [
226 | "# you'll note here that the number of images in each group changes each time this is run. Further down the notebook\n",
227 | "# you can see a method for partitioning data that ensures the same number of images in each group each time the cell is run\n",
228 | "\n",
229 | "val_frac = 0.1\n",
230 | "test_frac = 0.1\n",
231 | "train_x = list()\n",
232 | "train_y = list()\n",
233 | "val_x = list()\n",
234 | "val_y = list()\n",
235 | "test_x = list()\n",
236 | "test_y = list()\n",
237 | "\n",
238 | "for i in range(num_total):\n",
239 | " rann = np.random.random()\n",
240 | " if rann < val_frac:\n",
241 | " val_x.append(image_files_list[i])\n",
242 | " val_y.append(image_class[i])\n",
243 | " elif rann < test_frac + val_frac:\n",
244 | " test_x.append(image_files_list[i])\n",
245 | " test_y.append(image_class[i])\n",
246 | " else:\n",
247 | " train_x.append(image_files_list[i])\n",
248 | " train_y.append(image_class[i])\n",
249 | "\n",
250 | "print(f\"Training count: {len(train_x)}, Validation count: {len(val_x)}, Test count: {len(test_x)}\")"
251 | ]
252 | },
253 | {
254 | "cell_type": "markdown",
255 | "metadata": {},
256 | "source": [
257 | "## Define MONAI transforms, Dataset and Dataloader to pre-process data\n",
258 | "\n",
259 | "We'll define our transform using `Compose`. In this Array of Transforms, we'll load the image, add a channel, scale its intensity, utilize a few random functions and finally create a tensor."
260 | ]
261 | },
262 | {
263 | "cell_type": "code",
264 | "execution_count": null,
265 | "metadata": {},
266 | "outputs": [],
267 | "source": [
268 | "train_transforms = Compose(\n",
269 | " [\n",
270 | " LoadPNG(image_only=True),\n",
271 | " AddChannel(),\n",
272 | " ScaleIntensity(),\n",
273 | " RandRotate(range_x=15, prob=0.5, keep_size=True),\n",
274 | " RandFlip(spatial_axis=0, prob=0.5),\n",
275 | " RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),\n",
276 | " ToTensor(),\n",
277 | " ]\n",
278 | ")\n",
279 | "\n",
280 | "val_transforms = Compose([LoadPNG(image_only=True), AddChannel(), ScaleIntensity(), ToTensor()])"
281 | ]
282 | },
283 | {
284 | "cell_type": "markdown",
285 | "metadata": {},
286 | "source": [
287 | "## Initialise the datasets and loaders for training, validation and test sets\n",
288 | " * Define a simple dataset, that we'll call `MedNISTDataset`, that groups:\n",
289 | " * Images\n",
290 | " * Labels\n",
291 | " * The transforms that are to be run on the images and labels\n",
292 | " * Create three instances of this dataset:\n",
293 | " * One for training\n",
294 | " * One for validation\n",
295 | " * One for testing\n",
296 | " \n",
297 | "We'll use a batch size of 512 and employ 10 workers to load the data."
298 | ]
299 | },
300 | {
301 | "cell_type": "code",
302 | "execution_count": null,
303 | "metadata": {},
304 | "outputs": [],
305 | "source": [
306 | "batch_size = 512\n",
307 | "num_workers = 10\n",
308 | "\n",
309 | "class MedNISTDataset(torch.utils.data.Dataset):\n",
310 | " def __init__(self, image_files, labels, transforms):\n",
311 | " self.image_files = image_files\n",
312 | " self.labels = labels\n",
313 | " self.transforms = transforms\n",
314 | "\n",
315 | " def __len__(self):\n",
316 | " return len(self.image_files)\n",
317 | "\n",
318 | " def __getitem__(self, index):\n",
319 | " return self.transforms(self.image_files[index]), self.labels[index]\n",
320 | "\n",
321 | "\n",
322 | "train_ds = MedNISTDataset(train_x, train_y, train_transforms)\n",
323 | "train_loader = torch.utils.data.DataLoader(train_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)\n",
324 | "\n",
325 | "val_ds = MedNISTDataset(val_x, val_y, val_transforms)\n",
326 | "val_loader = torch.utils.data.DataLoader(val_ds, batch_size=batch_size, num_workers=num_workers)\n",
327 | "\n",
328 | "test_ds = MedNISTDataset(test_x, test_y, val_transforms)\n",
329 | "test_loader = torch.utils.data.DataLoader(test_ds, batch_size=batch_size, num_workers=num_workers)"
330 | ]
331 | },
332 | {
333 | "cell_type": "markdown",
334 | "metadata": {},
335 | "source": [
336 | "## Define network and optimizer\n",
337 | "\n",
338 | "1. Set `learning_rate` for how much the model is updated per step\n",
339 | "1. The fetch a pytorch `device` for the GPU\n",
340 | "1. Instantiate a [densenet121](https://docs.monai.io/en/latest/networks.html?highlight=densenet#monai.networks.nets.densenet121) model instance and 'send' it to the GPU using `device`\n",
341 | " * This is a standard MONAI implementation; it is capable of 2D and 3D operation but here we are using it in 2D mode\n",
342 | "1. We'll make use of the Adam optimizer"
343 | ]
344 | },
345 | {
346 | "cell_type": "code",
347 | "execution_count": null,
348 | "metadata": {},
349 | "outputs": [],
350 | "source": [
351 | "learning_rate = 1e-5\n",
352 | "device = torch.device(\"cuda:0\")\n",
353 | "net = densenet121(spatial_dims=2, in_channels=1, out_channels=num_class).to(device)\n",
354 | "loss_function = torch.nn.CrossEntropyLoss()\n",
355 | "optimizer = torch.optim.Adam(net.parameters(), learning_rate)"
356 | ]
357 | },
358 | {
359 | "cell_type": "markdown",
360 | "metadata": {},
361 | "source": [
362 | "## Network training\n",
363 | "We are hand-rolling a basic pytorch training loop here:\n",
364 | " * standard pytorch training loop\n",
365 | " * step through each training epoch, running through the training set in batches\n",
366 | " * after each epoch, run a validation pass, evaluating the network\n",
367 | " * if it shows improved performance, save out the model weights\n",
368 | " * later we will revisit training loops in a more Ignite / MONAI fashion"
369 | ]
370 | },
371 | {
372 | "cell_type": "code",
373 | "execution_count": null,
374 | "metadata": {},
375 | "outputs": [],
376 | "source": [
377 | "epoch_num = 4\n",
378 | "best_metric = -1\n",
379 | "best_metric_epoch = -1\n",
380 | "epoch_loss_values = list()\n",
381 | "metric_values = list()\n",
382 | "\n",
383 | "for epoch in range(epoch_num):\n",
384 | " print(\"-\" * 10)\n",
385 | " print(f\"epoch {epoch + 1}/{epoch_num}\")\n",
386 | "\n",
387 | " epoch_loss = 0\n",
388 | " step = 1\n",
389 | "\n",
390 | " steps_per_epoch = len(train_ds) // train_loader.batch_size\n",
391 | "\n",
392 | " # put the network in train mode; this tells the network and its modules to\n",
393 | " # enable training elements such as normalisation and dropout, where applicable\n",
394 | " net.train()\n",
395 | " for batch_data in train_loader:\n",
396 | "\n",
397 | " # move the data to the GPU\n",
398 | " inputs, labels = batch_data[0].to(device), batch_data[1].to(device)\n",
399 | "\n",
400 | " # prepare the gradients for this step's back propagation\n",
401 | " optimizer.zero_grad()\n",
402 | " \n",
403 | " # run the network forwards\n",
404 | " outputs = net(inputs)\n",
405 | " \n",
406 | " # run the loss function on the outputs\n",
407 | " loss = loss_function(outputs, labels)\n",
408 | " \n",
409 | " # compute the gradients\n",
410 | " loss.backward()\n",
411 | " \n",
412 | " # tell the optimizer to update the weights according to the gradients\n",
413 | " # and its internal optimisation strategy\n",
414 | " optimizer.step()\n",
415 | "\n",
416 | " epoch_loss += loss.item()\n",
417 |     " print(f\"{step}/{steps_per_epoch}, training_loss: {loss.item():.4f}\")\n",
418 | " step += 1\n",
419 | "\n",
420 |     " epoch_loss /= step - 1  # average over the number of batches; step is now one past the last batch\n",
421 | " epoch_loss_values.append(epoch_loss)\n",
422 | " print(f\"epoch {epoch + 1} average loss: {epoch_loss:.4f}\")\n",
423 | "\n",
424 | " # after each epoch, run our metrics to evaluate it, and, if they are an improvement,\n",
425 | " # save the model out\n",
426 | " \n",
427 | " # switch off training features of the network for this pass\n",
428 | " net.eval()\n",
429 | "\n",
430 | " # 'with torch.no_grad()' switches off gradient calculation for the scope of its context\n",
431 | " with torch.no_grad():\n",
432 |     " # create lists to which we will concatenate the validation results\n",
433 |     " preds = list()\n",
434 |     " labels = list()\n",
435 |     "\n",
436 |     " # iterate over each batch of images and run them through the network in evaluation mode\n",
437 |     " for val_data in val_loader:\n",
438 |     " val_images, val_labels = val_data[0].to(device), val_data[1].to(device)\n",
439 |     "\n",
440 |     " # run the network\n",
441 |     " val_pred = net(val_images)\n",
442 |     "\n",
443 |     " preds.append(val_pred)\n",
444 |     " labels.append(val_labels)\n",
445 |     "\n",
446 |     " # concatenate the predicted values with each other and the actual labels with each other\n",
447 |     " y_pred = torch.cat(preds)\n",
448 |     " y = torch.cat(labels)\n",
449 | "\n",
450 | " # we are using the area under the receiver operating characteristic (ROC) curve to determine\n",
451 | " # whether this epoch has improved the best performance of the network so far, in which case\n",
452 | " # we save the network in this state\n",
453 | " auc_metric = compute_roc_auc(y_pred, y, to_onehot_y=True, softmax=True)\n",
454 | " metric_values.append(auc_metric)\n",
455 | " acc_value = torch.eq(y_pred.argmax(dim=1), y)\n",
456 | " acc_metric = acc_value.sum().item() / len(acc_value)\n",
457 | " if auc_metric > best_metric:\n",
458 | " best_metric = auc_metric\n",
459 | " best_metric_epoch = epoch + 1\n",
460 | " torch.save(net.state_dict(), os.path.join(root_dir, \"best_metric_model.pth\"))\n",
461 | " print(\"saved new best metric network\")\n",
462 | " print(\n",
463 | " f\"current epoch: {epoch + 1} current AUC: {auc_metric:.4f} /\"\n",
464 | " f\" current accuracy: {acc_metric:.4f} best AUC: {best_metric:.4f} /\"\n",
465 | " f\" at epoch: {best_metric_epoch}\"\n",
466 | " )\n",
467 | "\n",
468 | "print(f\"train completed, best_metric: {best_metric:.4f} at epoch: {best_metric_epoch}\")"
469 | ]
470 | },
471 | {
472 | "cell_type": "markdown",
473 | "metadata": {},
474 | "source": [
475 | "## Plot the loss and metric\n",
476 | "\n",
477 |     "Once we're done training, we want to visualize the training loss and the validation AUC."
478 | ]
479 | },
480 | {
481 | "cell_type": "code",
482 | "execution_count": null,
483 | "metadata": {},
484 | "outputs": [],
485 | "source": [
486 | "# this is a very simple dataset to train on!\n",
487 | "\n",
488 | "plt.figure(\"train\", (12, 6))\n",
489 | "plt.subplot(1, 2, 1)\n",
490 | "plt.title(\"Epoch Average Loss\")\n",
491 | "x = [i + 1 for i in range(len(epoch_loss_values))]\n",
492 | "y = epoch_loss_values\n",
493 | "plt.xlabel(\"epoch\")\n",
494 | "plt.plot(x, y)\n",
495 | "plt.subplot(1, 2, 2)\n",
496 | "plt.title(\"Val AUC\")\n",
497 | "x = [(i + 1) for i in range(len(metric_values))]\n",
498 | "y = metric_values\n",
499 | "plt.xlabel(\"epoch\")\n",
500 | "plt.plot(x, y)\n",
501 | "plt.show()"
502 | ]
503 | },
504 | {
505 | "cell_type": "markdown",
506 | "metadata": {},
507 | "source": [
508 | "## Evaluate the model on the test dataset\n",
509 | "\n",
510 | "After training and validation, we now have the best model as determined by the validation dataset. But now we need to evaluate the model on the test dataset to check whether the final model is robust and not over-fitting. We'll use these predictions to generate a classification report."
511 | ]
512 | },
513 | {
514 | "cell_type": "code",
515 | "execution_count": null,
516 | "metadata": {},
517 | "outputs": [],
518 | "source": [
519 | "net.load_state_dict(torch.load(os.path.join(root_dir, \"best_metric_model.pth\")))\n",
520 | "net.eval()\n",
521 | "y_true = list()\n",
522 | "y_pred = list()\n",
523 | "with torch.no_grad():\n",
524 | " for test_data in test_loader:\n",
525 | " test_images, test_labels = (\n",
526 | " test_data[0].to(device),\n",
527 | " test_data[1].to(device),\n",
528 | " )\n",
529 | " pred = net(test_images).argmax(dim=1)\n",
530 | " for i in range(len(pred)):\n",
531 | " y_true.append(test_labels[i].item())\n",
532 | " y_pred.append(pred[i].item())"
533 | ]
534 | },
535 | {
536 | "cell_type": "markdown",
537 | "metadata": {},
538 | "source": [
539 | "## Some light analytics - classification report\n",
540 | "\n",
541 | "We'll utilize scikit-learn's classification report to get the precision, recall, and f1-score for each category."
542 | ]
543 | },
544 | {
545 | "cell_type": "code",
546 | "execution_count": null,
547 | "metadata": {},
548 | "outputs": [],
549 | "source": [
550 | "from sklearn.metrics import classification_report\n",
551 | "print(classification_report(y_true, y_pred, target_names=class_names, digits=4))\n"
552 | ]
553 | },
554 | {
555 | "cell_type": "markdown",
556 | "metadata": {},
557 | "source": [
558 | "## Some light analytics - confusion matrix\n",
559 | "\n",
560 | "Let's also create a confusion matrix to get a better understanding of the failure cases"
561 | ]
562 | },
563 | {
564 | "cell_type": "code",
565 | "execution_count": null,
566 | "metadata": {},
567 | "outputs": [],
568 | "source": [
569 | "# if you see the warnings below don't worry. It is a documented issue with matplotlib that will be fixed in the next version\n",
570 | "from sklearn.metrics import confusion_matrix\n",
571 | "cmat = confusion_matrix(y_true, y_pred)\n",
572 | "fig = plt.figure()\n",
573 | "ax = fig.add_subplot(111)\n",
574 |     "cax = ax.matshow(cmat, cmap=\"terrain\", interpolation='nearest')\n",
575 | "fig.colorbar(cax)\n",
576 | "\n",
577 | "ax.set_xticklabels(['']+class_names, rotation=270)\n",
578 | "ax.set_yticklabels(['']+class_names)\n",
579 | "\n",
580 | "plt.show()"
581 | ]
582 | },
583 | {
584 | "cell_type": "markdown",
585 | "metadata": {},
586 | "source": [
587 | "## Let's make some changes\n",
588 |     "Everything that we have done so far uses MONAI with PyTorch in a very vanilla fashion. The initial training / validation loop is written to show you the nuts and bolts of PyTorch. Now let's explore starting the move towards [ignite](https://pytorch.org/ignite/) and features of MONAI designed to work with it. We'll also fix up a couple of outstanding issues that the notebook has in its current form.\n",
589 | "\n",
590 | "* MONAI-specific\n",
591 | " * Making use of Ignite\n",
592 | "* Miscellaneous\n",
593 | " * Issues with determinism\n",
594 | " * Improving dataset partitioning"
595 | ]
596 | },
597 | {
598 | "cell_type": "code",
599 | "execution_count": null,
600 | "metadata": {},
601 | "outputs": [],
602 | "source": [
603 |     "# reinitialise the network and optimizer before running the ignite loop, otherwise we would just keep training the same weights\n",
604 | "from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer\n",
605 | "from ignite.handlers import ModelCheckpoint\n",
606 | "from ignite.metrics import Accuracy\n",
607 | "from monai.handlers import ROCAUC\n",
608 | "\n",
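609 |     "# same construction as earlier in the notebook\n",
610 |     "net = densenet121(spatial_dims=2, in_channels=1, out_channels=num_class).to(device)\n",
611 |     "optimizer = torch.optim.Adam(net.parameters(), learning_rate)\n",
612 |     "\n",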
613 |     "step = 1\n",
614 |     "iter_losses=[]\n",
615 |     "batch_sizes=[]\n",
616 |     "epoch_loss_values = []\n",
617 |     "metric_values = []\n",
618 |     "\n",
619 | "# Training\n",
620 | "\n",
621 | "# this trainer takes care of the training loop for us\n",
622 | "trainer = create_supervised_trainer(net, optimizer, loss_function, device, False)\n",
623 | "\n",
624 | "# calculate the number of steps per epoch up front\n",
625 | "steps_per_epoch = len(train_ds) // train_loader.batch_size\n",
626 | "if len(train_ds) % train_loader.batch_size != 0:\n",
627 | " steps_per_epoch += 1\n",
628 | "\n",
629 | "\n",
630 |     "# create a handler for recording the loss after each iteration. This improves on our earlier example\n",
631 | "# by also recording the batch size, so we can perform a weighted average for the overall average\n",
632 | "# loss\n",
633 | "@trainer.on(Events.ITERATION_COMPLETED)\n",
634 | "def _end_iter(engine):\n",
635 | " global step\n",
636 | " loss = engine.state.output\n",
637 | " batch_len = len(engine.state.batch[0])\n",
638 | " epoch = engine.state.epoch\n",
639 | " epoch_len = engine.state.max_epochs\n",
640 | " iter_losses.append(loss)\n",
641 | " batch_sizes.append(batch_len)\n",
642 | " print(f'epoch {epoch}/{epoch_len}, step {step}/{steps_per_epoch}, training_loss = {loss:.4f}') \n",
643 | " step += 1\n",
644 | " \n",
645 | "# Validation\n",
646 | "\n",
647 | "val_metrics = {'accuracy': Accuracy(), 'rocauc': ROCAUC(to_onehot_y=True,softmax=True)}\n",
648 | "evaluator = create_supervised_evaluator(net, val_metrics, device, True)\n",
649 | "\n",
650 |     "# validation is run after each training epoch, in response to the trainer completing\n",
651 |     "# an epoch. Here we use the decorator syntax to attach the function that runs it to the\n",
652 |     "# EPOCH_COMPLETED event\n",
653 | "@trainer.on(Events.EPOCH_COMPLETED)\n",
654 | "def run_validation(engine):\n",
655 | " global step\n",
656 | " evaluator.run(val_loader)\n",
657 | "\n",
658 | " # the overall average loss must be weighted by batch size\n",
659 | " overall_average_loss = np.average(iter_losses, weights=batch_sizes)\n",
660 | " epoch_loss_values.append(overall_average_loss)\n",
661 | "\n",
662 | " # clear the contents of iter_losses and batch_sizes for the next epoch\n",
663 | " del iter_losses[:]\n",
664 | " del batch_sizes[:]\n",
665 | " \n",
666 | " # fetch and report the validation metrics\n",
667 | " acc = evaluator.state.metrics['accuracy']\n",
668 | " roc = evaluator.state.metrics['rocauc']\n",
669 | " metric_values.append(roc)\n",
670 | " print(f\"evaluation for epoch {engine.state.epoch}, accuracy = {acc:.4f}, rocauc = {roc:.4f}\")\n",
671 | "\n",
672 | " # reset step for the next epoch\n",
673 | " step = 1\n",
674 | " \n",
675 | "# create a checkpoint handler to save the network weights based on the area under the ROC curve\n",
676 | "# as before\n",
677 | "def _score(_):\n",
678 | " return metric_values[-1]\n",
679 | "\n",
680 | "# create a model checkpointer to save the network\n",
681 | "checkpoint_handler = ModelCheckpoint(root_dir, filename_prefix='best_metric_model', score_name='',\n",
682 | " n_saved=1, require_empty=False, score_function=_score)\n",
683 | "\n",
684 | "# handlers are attached to events in trainers and evaluators\n",
685 | "trainer.add_event_handler(event_name=Events.EPOCH_COMPLETED,\n",
686 | " handler=checkpoint_handler, to_save={'net': net})\n",
687 | "# train (and evaluate) the network, Ignite-style!\n",
688 | "train_epochs = 4\n",
689 | "state = trainer.run(train_loader, train_epochs)\n",
690 | "\n",
691 | "best_rocauc = max(metric_values)\n",
692 |     "print(f\"train completed, best_metric: {best_rocauc:.4f} at epoch: {metric_values.index(best_rocauc) + 1}\")"
693 | ]
694 | },
695 | {
696 | "cell_type": "markdown",
697 | "metadata": {},
698 | "source": [
699 | "# Issues with determinism\n",
700 |     "* MONAI provides `monai.utils.set_determinism` for replicable training (a minimal sketch follows below)\n",
701 | " * Easy to accidentally defeat, especially in a jupyter / IPython notebook\n",
702 | "* How many uses of `numpy.random`'s underlying global instance does this notebook have?\n",
703 | " * Dataset partitioning\n",
704 | " * Image previewing\n",
705 | " * Transforms\n",
706 |     " * MONAI transforms with randomised behaviour can be given, or told to create, their own internal `numpy.random.RandomState` instances\n"
707 | ]
708 | },
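709 | {
710 | "cell_type": "markdown",
711 | "metadata": {},
712 | "source": [
713 | "Below is a minimal sketch of using `set_determinism` (assuming its default behaviour of seeding the global Python, NumPy and PyTorch generators; passing `seed=None` restores non-deterministic behaviour):"
714 | ]
715 | },
716 | {
717 | "cell_type": "code",
718 | "execution_count": null,
719 | "metadata": {},
720 | "outputs": [],
721 | "source": [
722 |     "from monai.utils import set_determinism\n",
723 |     "\n",
724 |     "# seed the global random number generators used by python, numpy and torch\n",
725 |     "set_determinism(seed=12345678)\n",
726 |     "\n",
727 |     "# ... training / preprocessing would go here ...\n",
728 |     "\n",
729 |     "# passing None restores non-deterministic behaviour\n",
730 |     "set_determinism(seed=None)"
731 | ]
732 | },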
709 | {
710 | "cell_type": "code",
711 | "execution_count": null,
712 | "metadata": {},
713 | "outputs": [],
714 | "source": [
715 | "# Setting up transforms, revisited\n",
716 | "\n",
717 | "# using .set_random_state allows us to pass either a seed or a numpy.random.RandomState instance\n",
718 | "# (this is an individual instance of the numpy random number generator rather than using the global instance)\n",
719 | "# this means that no other calls to numpy.random affect the behaviour of the Rand*** transforms\n",
720 | "\n",
721 | "rseed = 12345678\n",
722 | "\n",
723 | "train_transforms = Compose(\n",
724 | " [\n",
725 | " LoadPNG(image_only=True),\n",
726 | " AddChannel(),\n",
727 | " ScaleIntensity(),\n",
728 | " RandRotate(range_x=15, prob=0.5, keep_size=True).set_random_state(rseed),\n",
729 | " RandFlip(spatial_axis=0, prob=0.5).set_random_state(rseed),\n",
730 | " RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5).set_random_state(rseed),\n",
731 | " ToTensor(),\n",
732 | " ]\n",
733 | ")\n",
734 | "\n",
735 | "val_transforms = Compose([LoadPNG(image_only=True), AddChannel(), ScaleIntensity(), ToTensor()])"
736 | ]
737 | },
738 | {
739 | "cell_type": "markdown",
740 | "metadata": {},
741 | "source": [
742 | "# Improving dataset partitioning\n",
743 |     " * The current code results in a random number of images / labels in each partition every time it is run\n",
744 |     " * `np.random.RandomState.shuffle` to the rescue"
745 | ]
746 | },
747 | {
748 | "cell_type": "code",
749 | "execution_count": null,
750 | "metadata": {},
751 | "outputs": [],
752 | "source": [
753 | "from math import floor\n",
754 | "\n",
755 | "# make this selection deterministic and controllable by seed\n",
756 | "dataset_seed = 12345678\n",
757 | "r = np.random.RandomState(dataset_seed)\n",
758 | "\n",
759 | "# calculate the number of images we want for the validation and test groups\n",
760 | "validation_proportion = 0.1\n",
761 | "test_proportion = 0.1\n",
762 | "validation_count = floor(validation_proportion * num_total)\n",
763 | "test_count = floor(test_proportion * num_total)\n",
764 | "\n",
765 | "groups = np.zeros(num_total, dtype=np.int32)\n",
766 | "\n",
767 | "# set the appropriate number of '1's for the validation dataset\n",
768 | "groups[:validation_count] = 1\n",
769 | "\n",
770 | "# then set the appropriate number of '2's for the test dataset\n",
771 | "groups[validation_count:validation_count + test_count] = 2\n",
772 | "\n",
773 |     "# shuffle the group assignments so that each image is randomly allocated to a partition\n",
774 | "r.shuffle(groups)\n",
775 | "\n",
776 | "image_sets = list(), list(), list()\n",
777 | "label_sets = list(), list(), list()\n",
778 | "\n",
779 | "for n in range(num_total):\n",
780 | " image_sets[groups[n]].append(image_files_list[n])\n",
781 | " label_sets[groups[n]].append(image_class[n])\n",
782 | " \n",
783 | "train_x, val_x, test_x = image_sets\n",
784 | "train_y, val_y, test_y = label_sets\n",
785 | "print(len(train_x), len(val_x), len(test_x))"
786 | ]
787 | },
788 | {
789 | "cell_type": "markdown",
790 | "metadata": {},
791 | "source": [
792 | "## Now try running the notebook!\n",
793 |     " * Try out the PyTorch training loop and the Ignite & MONAI style training loop\n",
794 | " * Make the entire notebook work deterministically by replacing all implicit use of the global numpy RandomState with explicit RandomState instances\n",
795 | " * Replace the dataset partitioning with the improved version\n",
796 | " * Experiment with adding new metrics to the Ignite training / validation loop"
797 | ]
798 | },
799 | {
800 | "cell_type": "markdown",
801 | "metadata": {},
802 | "source": [
803 | "## Summary\n",
804 | "\n",
805 | "In this notebook, we went through an end-to-end workflow to train the MedNIST dataset using a densenet121 network. Along the way, you did the following:\n",
806 | "- Learned about the MedNIST Data and downloaded it\n",
807 | "- Visualized the data to understand the images we're using\n",
808 |     "- Set up the datasets for use in the model training\n",
809 | "- Defined our transforms, datasets, network, and optimizers\n",
810 |     "- Trained a densenet model and saved the best model as determined by the validation AUC\n",
811 | "- Plotted your training results\n",
812 | "- Evaluated your model against the test set\n",
813 | "- Ran your final predictions through a classification report to understand more about your final results\n",
814 | "\n",
815 | "For full API documentation, please visit https://docs.monai.io."
816 | ]
817 | },
818 | {
819 | "cell_type": "markdown",
820 | "metadata": {},
821 | "source": [
822 | "## Tidy everything up"
823 | ]
824 | },
825 | {
826 | "cell_type": "code",
827 | "execution_count": null,
828 | "metadata": {},
829 | "outputs": [],
830 | "source": [
831 | "if directory is None:\n",
832 | " shutil.rmtree(root_dir)"
833 | ]
834 | }
835 | ],
836 | "metadata": {
837 | "kernelspec": {
838 | "display_name": "Python 3",
839 | "language": "python",
840 | "name": "python3"
841 | },
842 | "language_info": {
843 | "codemirror_mode": {
844 | "name": "ipython",
845 | "version": 3
846 | },
847 | "file_extension": ".py",
848 | "mimetype": "text/x-python",
849 | "name": "python",
850 | "nbconvert_exporter": "python",
851 | "pygments_lexer": "ipython3",
852 | "version": "3.7.3"
853 | }
854 | },
855 | "nbformat": 4,
856 | "nbformat_minor": 4
857 | }
858 |
--------------------------------------------------------------------------------
/day1notebooks/lab3_datasets.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "061VwlZpO9Lq"
8 | },
9 | "source": [
10 | "# Lab 3: Datasets\n",
11 | "---\n",
12 | "\n",
13 | "[](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab3_datasets.ipynb)\n",
14 | "\n",
15 | "### Overview\n",
16 | "\n",
17 | "This notebook introduces you to the MONAI dataset APIs:\n",
18 | "- Recap the base dataset API\n",
19 | "- Understanding the caching mechanism\n",
20 | "- Dataset utilities"
21 | ]
22 | },
23 | {
24 | "cell_type": "markdown",
25 | "metadata": {
26 | "colab_type": "text",
27 | "id": "9ERmUzC3O9Lr"
28 | },
29 | "source": [
30 |     "## Install MONAI and import dependencies\n",
31 | "This section installs the latest version of MONAI and validates the install by printing out the configuration.\n",
32 | "\n",
33 | "We'll then import our dependencies and MONAI. "
34 | ]
35 | },
36 | {
37 | "cell_type": "code",
38 | "execution_count": 2,
39 | "metadata": {
40 | "colab": {
41 | "base_uri": "https://localhost:8080/",
42 | "height": 340
43 | },
44 | "colab_type": "code",
45 | "id": "PWt0e0QPO9Lr",
46 | "outputId": "4277aa34-40df-4927-9b0a-0408e6b5ae0c",
47 | "tags": []
48 | },
49 | "outputs": [
50 | {
51 | "name": "stdout",
52 | "output_type": "stream",
53 | "text": [
54 | "MONAI version: 0.3.0rc2\n",
55 | "Python version: 3.8.3 (default, Jul 2 2020, 16:21:59) [GCC 7.3.0]\n",
56 | "Numpy version: 1.18.5\n",
57 | "Pytorch version: 1.6.0\n",
58 | "\n",
59 | "Optional dependencies:\n",
60 | "Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.\n",
61 | "Nibabel version: 3.1.1\n",
62 | "scikit-image version: 0.16.2\n",
63 | "Pillow version: 7.2.0\n",
64 | "Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.\n",
65 | "gdown version: NOT INSTALLED or UNKNOWN VERSION.\n",
66 | "TorchVision version: NOT INSTALLED or UNKNOWN VERSION.\n",
67 | "ITK version: NOT INSTALLED or UNKNOWN VERSION.\n",
68 | "\n",
69 | "For details about installing the optional dependencies, please visit:\n",
70 | " https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\n",
71 | "\n"
72 | ]
73 | }
74 | ],
75 | "source": [
76 | "!pip install -qU \"monai[nibabel]==0.3.0rc2\"\n",
77 | "\n",
78 | "import time\n",
79 | "import torch\n",
80 | "\n",
81 | "import monai\n",
82 | "monai.config.print_config()"
83 | ]
84 | },
85 | {
86 | "cell_type": "markdown",
87 | "metadata": {
88 | "colab_type": "text",
89 | "id": "Es8DMZURWc40"
90 | },
91 | "source": [
92 | "## MONAI Dataset "
93 | ]
94 | },
95 | {
96 | "cell_type": "markdown",
97 | "metadata": {
98 | "colab_type": "text",
99 | "id": "ln8ZmNpCWpDo"
100 | },
101 | "source": [
102 |     "A MONAI [Dataset](https://docs.monai.io/en/latest/data.html?highlight=dataset#dataset) is a generic dataset with `__len__` and `__getitem__` methods, and an optional callable data transform that is applied when fetching a data sample.\n",
103 | "\n",
104 | "We'll start by initializing some generic data, calling the Dataset class with the generic data, and specifying `None` for our transforms."
105 | ]
106 | },
107 | {
108 | "cell_type": "code",
109 | "execution_count": 4,
110 | "metadata": {
111 | "colab": {
112 | "base_uri": "https://localhost:8080/",
113 | "height": 153
114 | },
115 | "colab_type": "code",
116 | "id": "Tr13PKyDVsI2",
117 | "outputId": "8310ba5e-0c19-4a88-ff0d-dc3655aa3627"
118 | },
119 | "outputs": [
120 | {
121 | "name": "stdout",
122 | "output_type": "stream",
123 | "text": [
124 | "Length of dataset is 7\n",
125 | "{'data': 4}\n",
126 | "{'data': 9}\n",
127 | "{'data': 3}\n",
128 | "{'data': 7}\n",
129 | "{'data': 1}\n",
130 | "{'data': 2}\n",
131 | "{'data': 5}\n"
132 | ]
133 | }
134 | ],
135 | "source": [
136 | "items = [{\"data\": 4}, \n",
137 | " {\"data\": 9}, \n",
138 | " {\"data\": 3}, \n",
139 | " {\"data\": 7}, \n",
140 | " {\"data\": 1},\n",
141 | " {\"data\": 2},\n",
142 | " {\"data\": 5}]\n",
143 | "dataset = monai.data.Dataset(items, transform=None)\n",
144 | "\n",
145 | "print(f\"Length of dataset is {len(dataset)}\")\n",
146 | "for item in dataset:\n",
147 | " print(item)"
148 | ]
149 | },
150 | {
151 | "cell_type": "markdown",
152 | "metadata": {
153 | "colab_type": "text",
154 | "id": "TzCrk5m0XDYv"
155 | },
156 | "source": [
157 | "### Compatible with the PyTorch DataLoader\n",
158 | "\n",
159 |     "MONAI datasets are compatible with the standard PyTorch DataLoader. MONAI is also free to subclass the DataLoader where additional key functionality cannot be realized with the standard class."
160 | ]
161 | },
162 | {
163 | "cell_type": "code",
164 | "execution_count": 5,
165 | "metadata": {
166 | "colab": {
167 | "base_uri": "https://localhost:8080/",
168 | "height": 85
169 | },
170 | "colab_type": "code",
171 | "id": "hpLFjfYWXJr4",
172 | "outputId": "37df5c59-9161-4e26-a806-bd2910cff36e"
173 | },
174 | "outputs": [
175 | {
176 | "name": "stdout",
177 | "output_type": "stream",
178 | "text": [
179 | "{'data': tensor([4, 9])}\n",
180 | "{'data': tensor([3, 7])}\n",
181 | "{'data': tensor([1, 2])}\n",
182 | "{'data': tensor([5])}\n"
183 | ]
184 | }
185 | ],
186 | "source": [
187 | "for item in torch.utils.data.DataLoader(dataset, batch_size=2):\n",
188 | " print(item)"
189 | ]
190 | },
191 | {
192 | "cell_type": "markdown",
193 | "metadata": {
194 | "colab_type": "text",
195 | "id": "dXfFIiPrZAh_"
196 | },
197 | "source": [
198 | "### Load items with a customized transform\n",
199 | "\n",
200 | "We'll create a custom transform called `SquareIt`, which will replace the corresponding value of the input's `keys` with a squared value. In our case, `SquareIt(keys='data')` will apply the square transform to the value of `x['data']`."
201 | ]
202 | },
203 | {
204 | "cell_type": "code",
205 | "execution_count": 6,
206 | "metadata": {
207 | "colab": {
208 | "base_uri": "https://localhost:8080/",
209 | "height": 153
210 | },
211 | "colab_type": "code",
212 | "id": "gFy33vnRZ2SH",
213 | "outputId": "b347fea4-83f4-4a63-8591-7c47ccb346d1"
214 | },
215 | "outputs": [
216 | {
217 | "name": "stdout",
218 | "output_type": "stream",
219 | "text": [
220 | "keys to square it: ('data',)\n",
221 | "{'data': 16}\n",
222 | "{'data': 81}\n",
223 | "{'data': 9}\n",
224 | "{'data': 49}\n",
225 | "{'data': 1}\n",
226 | "{'data': 4}\n",
227 | "{'data': 25}\n"
228 | ]
229 | }
230 | ],
231 | "source": [
232 | "class SquareIt(monai.transforms.MapTransform):\n",
233 |     " \"\"\"a simple transform to return a squared number\"\"\"\n",
234 | "\n",
235 | " def __init__(self, keys):\n",
236 | " monai.transforms.MapTransform.__init__(self, keys)\n",
237 | " print(f\"keys to square it: {self.keys}\")\n",
238 | "\n",
240 | " def __call__(self, x):\n",
241 | " key = self.keys[0]\n",
242 | " data = x[key]\n",
243 | " output = {key: data ** 2}\n",
244 | " return output\n",
245 | "\n",
246 | "square_dataset = monai.data.Dataset(items, transform=SquareIt(keys='data'))\n",
247 | "for item in square_dataset:\n",
248 | " print(item)"
249 | ]
250 | },
251 | {
252 | "cell_type": "markdown",
253 | "metadata": {
254 | "colab_type": "text",
255 | "id": "pgDSkrywno7g"
256 | },
257 | "source": [
258 | "Keep in mind\n",
259 |     "- `SquareIt` is implemented to create a new dictionary `output` instead of overwriting the content of the dict `x` directly, so that we can repeatedly apply the transforms, for example, in multiple epochs of training\n",
260 |     "- `SquareIt.__call__` reads the key information from `self.keys` but does not write any properties to `self`, because writing properties would not work with a multi-processing data loader\n",
261 |     "- In most of the MONAI preprocessing transforms, we assume `x[key]` has the shape `(num_channels, spatial_dim_1, spatial_dim_2, ...)`. The channel dimension is not omitted even if `num_channels` equals 1, but the spatial dimensions may be omitted (a small example follows below)."
262 | ]
263 | },
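264 | {
265 | "cell_type": "markdown",
266 | "metadata": {},
267 | "source": [
268 | "For example, a single-channel 2D image is represented channel-first, keeping the channel axis even though its length is 1:"
269 | ]
270 | },
271 | {
272 | "cell_type": "code",
273 | "execution_count": null,
274 | "metadata": {},
275 | "outputs": [],
276 | "source": [
277 |     "import numpy as np\n",
278 |     "\n",
279 |     "# a 64 x 64 image with one channel: (num_channels, spatial_dim_1, spatial_dim_2)\n",
280 |     "single_channel_image = np.zeros((1, 64, 64))\n",
281 |     "print(single_channel_image.shape)"
282 | ]
283 | },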
264 | {
265 | "cell_type": "markdown",
266 | "metadata": {
267 | "colab_type": "text",
268 | "id": "0yiwrG2dO9Ly"
269 | },
270 | "source": [
271 | "## MONAI dataset caching\n",
272 | "\n",
273 |     "To demonstrate the benefit of dataset caching, we're going to construct a dataset with a slow transform. To do that, we're going to call the sleep function during each of the `__call__` functions."
274 | ]
275 | },
276 | {
277 | "cell_type": "code",
278 | "execution_count": 7,
279 | "metadata": {
280 | "colab": {
281 | "base_uri": "https://localhost:8080/",
282 | "height": 34
283 | },
284 | "colab_type": "code",
285 | "id": "v_j1q_EZbAfY",
286 | "outputId": "949f6d57-dcef-4cde-a615-1b38837aa21f"
287 | },
288 | "outputs": [
289 | {
290 | "name": "stdout",
291 | "output_type": "stream",
292 | "text": [
293 | "keys to square it: ('data',)\n"
294 | ]
295 | }
296 | ],
297 | "source": [
298 | "class SlowSquare(monai.transforms.MapTransform):\n",
299 |     " \"\"\"a simple transform to slowly return a squared number\"\"\"\n",
300 |     "\n",
301 | " def __init__(self, keys):\n",
302 | " monai.transforms.MapTransform.__init__(self, keys)\n",
303 | " print(f\"keys to square it: {self.keys}\")\n",
304 | "\n",
305 | " def __call__(self, x):\n",
306 | " time.sleep(1.0)\n",
307 | " output = {key: x[key] ** 2 for key in self.keys}\n",
308 | " return output\n",
309 | "\n",
310 | "square_dataset = monai.data.Dataset(items, transform=SlowSquare(keys='data'))"
311 | ]
312 | },
313 | {
314 | "cell_type": "markdown",
315 | "metadata": {
316 | "colab_type": "text",
317 | "id": "o7hWDorAfA0V"
318 | },
319 | "source": [
320 | "As expected, it's going to take about 7 seconds to go through all the items."
321 | ]
322 | },
323 | {
324 | "cell_type": "code",
325 | "execution_count": 8,
326 | "metadata": {
327 | "colab": {
328 | "base_uri": "https://localhost:8080/",
329 | "height": 170
330 | },
331 | "colab_type": "code",
332 | "id": "6xKKm1rmbT1w",
333 | "outputId": "3e5ca46e-40b5-4926-9cba-0f9978e47e84"
334 | },
335 | "outputs": [
336 | {
337 | "name": "stdout",
338 | "output_type": "stream",
339 | "text": [
340 | "{'data': 16}\n",
341 | "{'data': 81}\n",
342 | "{'data': 9}\n",
343 | "{'data': 49}\n",
344 | "{'data': 1}\n",
345 | "{'data': 4}\n",
346 | "{'data': 25}\n",
347 | "CPU times: user 0 ns, sys: 15.6 ms, total: 15.6 ms\n",
348 | "Wall time: 7.01 s\n"
349 | ]
350 | }
351 | ],
352 | "source": [
353 | "%time for item in square_dataset: print(item)"
354 | ]
355 | },
356 | {
357 | "cell_type": "markdown",
358 | "metadata": {
359 | "colab_type": "text",
360 | "id": "NR3nSnlWfUsh"
361 | },
362 | "source": [
363 | "### Cache Dataset\n",
364 | "\n",
365 |     "When using [CacheDataset](https://docs.monai.io/en/latest/data.html?highlight=dataset#cachedataset), the caching is done when the object is initialized for the first time, so the initialization is slower than for a regular dataset.\n",
366 |     "\n",
367 |     "Caching the results of the non-random preprocessing transforms accelerates the training data pipeline. If the requested data is not in the cache, all transforms will run normally."
368 | ]
369 | },
370 | {
371 | "cell_type": "code",
372 | "execution_count": 9,
373 | "metadata": {
374 | "colab": {
375 | "base_uri": "https://localhost:8080/",
376 | "height": 51
377 | },
378 | "colab_type": "code",
379 | "id": "PV3qNOADbrII",
380 | "outputId": "24f6154b-56ff-4b33-ca96-530d57e44281"
381 | },
382 | "outputs": [
383 | {
384 | "name": "stdout",
385 | "output_type": "stream",
386 | "text": [
387 | "keys to square it: ('data',)\n",
388 | "7/7 Load and cache transformed data: [==============================]\n"
389 | ]
390 | }
391 | ],
392 | "source": [
393 | "square_cached = monai.data.CacheDataset(items, transform=SlowSquare(keys='data'))"
394 | ]
395 | },
396 | {
397 | "cell_type": "markdown",
398 | "metadata": {
399 | "colab_type": "text",
400 | "id": "iHR6D2Hmfqqd"
401 | },
402 | "source": [
403 | "However, repeatedly fetching the items from an initialised CacheDataset is fast."
404 | ]
405 | },
406 | {
407 | "cell_type": "code",
408 | "execution_count": 10,
409 | "metadata": {
410 | "colab": {
411 | "base_uri": "https://localhost:8080/",
412 | "height": 51
413 | },
414 | "colab_type": "code",
415 | "id": "REth7XTxdYTe",
416 | "outputId": "4d3b7e87-23d6-434b-efba-4e651fa3ebed"
417 | },
418 | "outputs": [
419 | {
420 | "name": "stdout",
421 | "output_type": "stream",
422 | "text": [
423 | "17.9 µs ± 2.2 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n"
424 | ]
425 | }
426 | ],
427 | "source": [
428 | "%timeit list(item for item in square_cached)"
429 | ]
430 | },
431 | {
432 | "cell_type": "markdown",
433 | "metadata": {},
434 | "source": [
435 |     "To improve the caching efficiency, always put as many non-random transforms as possible before the randomized ones when composing the chain of transforms (a sketch follows in the next cell)."
436 | ]
437 | },
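438 | {
439 | "cell_type": "markdown",
440 | "metadata": {},
441 | "source": [
442 | "For example, here is a minimal, illustrative sketch (using the transform names from the other labs): the deterministic loading and scaling steps at the front can be cached, while the randomized augmentation at the end still runs every epoch."
443 | ]
444 | },
445 | {
446 | "cell_type": "code",
447 | "execution_count": null,
448 | "metadata": {},
449 | "outputs": [],
450 | "source": [
451 |     "from monai.transforms import Compose, LoadPNG, AddChannel, ScaleIntensity, RandRotate, ToTensor\n",
452 |     "\n",
453 |     "cache_friendly_transforms = Compose(\n",
454 |     " [\n",
455 |     " LoadPNG(image_only=True), # deterministic: cacheable\n",
456 |     " AddChannel(), # deterministic: cacheable\n",
457 |     " ScaleIntensity(), # deterministic: cacheable\n",
458 |     " RandRotate(range_x=15, prob=0.5, keep_size=True), # randomized: re-run every epoch\n",
459 |     " ToTensor(), # runs after the random transform, so not cached\n",
460 |     " ]\n",
461 |     ")"
462 | ]
463 | },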
438 | {
439 | "cell_type": "markdown",
440 | "metadata": {
441 | "colab_type": "text",
442 | "id": "jH1L6yQzf2lR"
443 | },
444 | "source": [
445 | "### Persistent Caching\n",
446 | "\n",
447 |     "[PersistentDataset](https://docs.monai.io/en/latest/data.html?highlight=dataset#persistentdataset) allows for persistent storage of pre-computed values to efficiently manage larger-than-memory dictionary-format data.\n",
448 |     "\n",
449 |     "The non-random transform components are computed when first used and stored in the `cache_dir` for rapid retrieval on subsequent uses."
450 | ]
451 | },
452 | {
453 | "cell_type": "code",
454 | "execution_count": 11,
455 | "metadata": {
456 | "colab": {
457 | "base_uri": "https://localhost:8080/",
458 | "height": 34
459 | },
460 | "colab_type": "code",
461 | "id": "uB28jyA0f6DB",
462 | "outputId": "ea65e7f0-6c20-449c-93e0-e0e7f52557eb"
463 | },
464 | "outputs": [
465 | {
466 | "name": "stdout",
467 | "output_type": "stream",
468 | "text": [
469 | "keys to square it: ('data',)\n"
470 | ]
471 | }
472 | ],
473 | "source": [
474 | "square_persist = monai.data.PersistentDataset(items, transform=SlowSquare(keys='data'), cache_dir=\"my_cache\")"
475 | ]
476 | },
477 | {
478 | "cell_type": "markdown",
479 | "metadata": {
480 | "colab_type": "text",
481 | "id": "egR1w_hGsGfx"
482 | },
483 | "source": [
484 | "The caching happens at the first epoch of loading the dataset, so calling the dataset the first time should take about 7 seconds."
485 | ]
486 | },
487 | {
488 | "cell_type": "code",
489 | "execution_count": 12,
490 | "metadata": {
491 | "colab": {
492 | "base_uri": "https://localhost:8080/",
493 | "height": 170
494 | },
495 | "colab_type": "code",
496 | "id": "BnM2n_YGgoIU",
497 | "outputId": "4cd3c324-70bc-4eab-fecb-657f50ed5103"
498 | },
499 | "outputs": [
500 | {
501 | "name": "stdout",
502 | "output_type": "stream",
503 | "text": [
504 |     "{'data': 16}\n",
505 |     "{'data': 81}\n",
506 |     "{'data': 9}\n",
507 |     "{'data': 49}\n",
508 |     "{'data': 1}\n",
509 |     "{'data': 4}\n",
510 |     "{'data': 25}\n",
511 | "CPU times: user 31.2 ms, sys: 46.9 ms, total: 78.1 ms\n",
512 | "Wall time: 7.04 s\n"
513 | ]
514 | }
515 | ],
516 | "source": [
517 | "%time for item in square_persist: print(item)"
518 | ]
519 | },
520 | {
521 | "cell_type": "markdown",
522 | "metadata": {
523 | "colab_type": "text",
524 | "id": "401uwXfLsequ"
525 | },
526 | "source": [
527 |     "During the initialization of the `PersistentDataset`, we passed \"my_cache\" as the `cache_dir` location for storing the intermediate data. We'll look at that directory below."
528 | ]
529 | },
530 | {
531 | "cell_type": "code",
532 | "execution_count": 13,
533 | "metadata": {
534 | "colab": {
535 | "base_uri": "https://localhost:8080/",
536 | "height": 85
537 | },
538 | "colab_type": "code",
539 | "id": "ncpKcxs0sZ2r",
540 | "outputId": "f6bcbf70-6bcb-4342-d2f0-a7996c72f67f"
541 | },
542 | "outputs": [
543 | {
544 | "name": "stdout",
545 | "output_type": "stream",
546 | "text": [
547 | "4778b171cb1049abbcf1032d03ff0afa.pt b4f755104d6a0dbcb613830c6843e20a.pt\r\n",
548 | "4c9197730c3e18666577f071056e22aa.pt c21e0cfa7480c1552432f9970c278b2f.pt\r\n",
549 | "98de00671e255e94c2f34ce3bee56982.pt cf2dbfadfc25b1d7be23db09c39200ef.pt\r\n",
550 | "aa9229f61411705e25ed1d31ed0b7f98.pt\r\n"
551 | ]
552 | }
553 | ],
554 | "source": [
555 | "!ls my_cache"
556 | ]
557 | },
558 | {
559 | "cell_type": "markdown",
560 | "metadata": {
561 | "colab_type": "text",
562 | "id": "XonC8u1gspjM"
563 | },
564 | "source": [
565 |     "When iterating over the dataset in the following epochs, it will not call the slow transform but will use the cached data."
566 | ]
567 | },
568 | {
569 | "cell_type": "code",
570 | "execution_count": 14,
571 | "metadata": {
572 | "colab": {
573 | "base_uri": "https://localhost:8080/",
574 | "height": 51
575 | },
576 | "colab_type": "code",
577 | "id": "o43pKytygJRE",
578 | "outputId": "b1832fd8-db99-42a4-c517-0ba63197f817"
579 | },
580 | "outputs": [
581 | {
582 | "name": "stdout",
583 | "output_type": "stream",
584 | "text": [
585 | "5.55 ms ± 277 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
586 | ]
587 | }
588 | ],
589 | "source": [
590 | "%timeit [item for item in square_persist]"
591 | ]
592 | },
593 | {
594 | "cell_type": "markdown",
595 | "metadata": {
596 | "colab_type": "text",
597 | "id": "fw--U6u0yFWX"
598 | },
599 | "source": [
600 |     "Fresh dataset instances can make use of the cached data:"
601 | ]
602 | },
603 | {
604 | "cell_type": "code",
605 | "execution_count": 15,
606 | "metadata": {
607 | "colab": {
608 | "base_uri": "https://localhost:8080/",
609 | "height": 68
610 | },
611 | "colab_type": "code",
612 | "id": "U_ATus9kx-LF",
613 | "outputId": "72beb0c6-dd8b-4888-aa0f-54add725ab1d"
614 | },
615 | "outputs": [
616 | {
617 | "name": "stdout",
618 | "output_type": "stream",
619 | "text": [
620 | "keys to square it: ('data',)\n",
621 | "5.23 ms ± 192 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
622 | ]
623 | }
624 | ],
625 | "source": [
626 | "square_persist_1 = monai.data.PersistentDataset(items, transform=SlowSquare(keys='data'), cache_dir=\"my_cache\")\n",
627 | "%timeit [item for item in square_persist_1]"
628 | ]
629 | },
630 | {
631 | "cell_type": "markdown",
632 | "metadata": {
633 | "colab_type": "text",
634 | "id": "DmNd7zjCO9L1"
635 | },
636 | "source": [
637 | "### Caching in action\n",
638 |     "- There's also a [SmartCacheDataset](https://docs.monai.io/en/latest/data.html#monai.data.SmartCacheDataset) to hide the transform latency with less memory consumption.\n",
639 |     "- The dataset tutorial notebook has a working example and a comparison of the different caching mechanisms in MONAI: https://github.com/Project-MONAI/tutorials/blob/master/acceleration/dataset_type_performance.ipynb\n"
640 | ]
641 | },
642 | {
643 | "cell_type": "markdown",
644 | "metadata": {
645 | "colab_type": "text",
646 | "id": "0cYYQFiVvEHF"
647 | },
648 | "source": [
649 | "## Other dataset utilities"
650 | ]
651 | },
652 | {
653 | "cell_type": "markdown",
654 | "metadata": {
655 | "colab_type": "text",
656 | "id": "fd0fmVOlwNzu"
657 | },
658 | "source": [
659 | "### ZipDataset"
660 | ]
661 | },
662 | {
663 | "cell_type": "markdown",
664 | "metadata": {
665 | "colab_type": "text",
666 | "id": "cjuAfigD05xo"
667 | },
668 | "source": [
669 |     "[ZipDataset](https://docs.monai.io/en/latest/data.html?highlight=dataset#zipdataset) zips several PyTorch datasets and outputs data items (with the same index) together in a tuple. If a single dataset's output is already a tuple, it is flattened and extended into the result. It also supports applying a transform to the newly associated element."
670 | ]
671 | },
672 | {
673 | "cell_type": "code",
674 | "execution_count": null,
675 | "metadata": {
676 | "colab": {
677 | "base_uri": "https://localhost:8080/",
678 | "height": 68
679 | },
680 | "colab_type": "code",
681 | "id": "viV_c4nny57n",
682 | "outputId": "d09cde33-ba4c-40ca-9e17-6d05a205b132"
683 | },
684 | "outputs": [],
685 | "source": [
686 | "items = [4, 9, 3]\n",
687 | "dataset_1 = monai.data.Dataset(items)\n",
688 | "\n",
689 | "items = [7, 1, 2, 5]\n",
690 | "dataset_2 = monai.data.Dataset(items)\n",
691 | "\n",
692 | "def concat(data):\n",
693 | " # data[0] is an element from dataset_1\n",
694 | " # data[1] is an element from dataset_2\n",
695 | " return (f\"{data[0]} + {data[1]} = {data[0] + data[1]}\",)\n",
696 | "\n",
697 | "zipped_data = monai.data.ZipDataset([dataset_1, dataset_2], transform=concat)\n",
698 | "for item in zipped_data:\n",
699 | " print(item)"
700 | ]
701 | },
702 | {
703 | "cell_type": "markdown",
704 | "metadata": {
705 | "colab_type": "text",
706 | "id": "ZG3hRxlb1hhA"
707 | },
708 | "source": [
709 | "### Common Datasets\n",
710 | "\n",
711 |     "MONAI provides access to some commonly used medical imaging datasets through [DecathlonDataset](https://docs.monai.io/en/latest/data.html?highlight=dataset#decathlon-datalist). This class leverages the features described throughout this notebook."
712 | ]
713 | },
714 | {
715 | "cell_type": "code",
716 | "execution_count": null,
717 | "metadata": {
718 | "colab": {
719 | "base_uri": "https://localhost:8080/",
720 | "height": 85
721 | },
722 | "colab_type": "code",
723 | "id": "VMBIytfJ18Mv",
724 | "outputId": "bf579a00-3bc9-4fe4-f023-fd8393c2cbd9"
725 | },
726 | "outputs": [],
727 | "source": [
728 | "dataset = monai.apps.DecathlonDataset(root_dir=\"./\", task=\"Task04_Hippocampus\", section=\"training\", download=True)"
729 | ]
730 | },
731 | {
732 | "cell_type": "code",
733 | "execution_count": null,
734 | "metadata": {
735 | "colab": {
736 | "base_uri": "https://localhost:8080/",
737 | "height": 51
738 | },
739 | "colab_type": "code",
740 | "id": "eBXDYBHd2045",
741 | "outputId": "fd88258f-a439-4b25-a5d7-19f0e36b74ea"
742 | },
743 | "outputs": [],
744 | "source": [
745 | "print(dataset.get_properties(\"numTraining\"))\n",
746 | "print(dataset.get_properties(\"description\"))"
747 | ]
748 | },
749 | {
750 | "cell_type": "code",
751 | "execution_count": null,
752 | "metadata": {
753 | "colab": {
754 | "base_uri": "https://localhost:8080/",
755 | "height": 51
756 | },
757 | "colab_type": "code",
758 | "id": "ad4slIph8vWf",
759 | "outputId": "740d0ff9-654e-4140-bcbb-b97808e8cebb"
760 | },
761 | "outputs": [],
762 | "source": [
763 | "print(dataset[0]['image'].shape)\n",
764 | "print(dataset[0]['label'].shape)"
765 | ]
766 | },
767 | {
768 | "cell_type": "markdown",
769 | "metadata": {
770 | "colab_type": "text",
771 | "id": "za7I3fnH2jGy"
772 | },
773 | "source": [
774 | "These datasets are an extension of CacheDataset.\n",
775 | "More details of this API are covered in the other labs."
776 | ]
777 | },
778 | {
779 | "cell_type": "markdown",
780 | "metadata": {},
781 | "source": [
782 | "## Summary\n",
783 | "\n",
784 | "In this notebook, we recapped datasets and learned more about their caching mechanisms, including:\n",
785 | "- Cache Dataset and Persistent Dataset\n",
786 |     "- How to use DecathlonDataset\n",
787 | "\n",
788 | "For full API documentation, please visit https://docs.monai.io."
789 | ]
790 | }
791 | ],
792 | "metadata": {
793 | "colab": {
794 | "collapsed_sections": [],
795 | "name": "lab3_datasets.ipynb",
796 | "provenance": []
797 | },
798 | "kernelspec": {
799 | "display_name": "Python 3",
800 | "language": "python",
801 | "name": "python3"
802 | },
803 | "language_info": {
804 | "codemirror_mode": {
805 | "name": "ipython",
806 | "version": 3
807 | },
808 | "file_extension": ".py",
809 | "mimetype": "text/x-python",
810 | "name": "python",
811 | "nbconvert_exporter": "python",
812 | "pygments_lexer": "ipython3",
813 | "version": "3.8.5"
814 | }
815 | },
816 | "nbformat": 4,
817 | "nbformat_minor": 1
818 | }
819 |
--------------------------------------------------------------------------------
/day1notebooks/lab3_networks.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "061VwlZpO9Lq"
8 | },
9 | "source": [
10 | "# Lab 3: Networks\n",
11 | "---\n",
12 | "\n",
13 | "[](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day1notebooks/lab3_networks.ipynb)\n",
14 | "\n",
15 | "\n",
16 | "### Overview\n",
17 | "\n",
18 | "This notebook introduces you to the MONAI network APIs:\n",
19 | "- Convolutions\n",
20 | "- Specifying layers with additional arguments\n",
21 | "- Flexible definitions of networks"
22 | ]
23 | },
24 | {
25 | "cell_type": "markdown",
26 | "metadata": {
27 | "colab_type": "text",
28 | "id": "9ERmUzC3O9Lr"
29 | },
30 | "source": [
31 |     "## Install MONAI and import dependencies\n",
32 | "This section installs the latest version of MONAI and validates the install by printing out the configuration.\n",
33 | "\n",
34 | "We'll then import our dependencies and MONAI. "
35 | ]
36 | },
37 | {
38 | "cell_type": "code",
39 | "execution_count": 1,
40 | "metadata": {
41 | "colab": {
42 | "base_uri": "https://localhost:8080/",
43 | "height": 340
44 | },
45 | "colab_type": "code",
46 | "id": "PWt0e0QPO9Lr",
47 | "outputId": "9667fc7d-f363-4940-9106-7649dbb34dc7",
48 | "tags": []
49 | },
50 | "outputs": [
51 | {
52 | "name": "stdout",
53 | "output_type": "stream",
54 | "text": [
55 |     "MONAI version: 0.3.0rc2\n",
86 | "Python version: 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]\n",
87 | "Numpy version: 1.18.5\n",
88 | "Pytorch version: 1.6.0+cu101\n",
89 | "\n",
90 | "Optional dependencies:\n",
91 | "Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.\n",
92 | "Nibabel version: 3.0.2\n",
93 | "scikit-image version: 0.16.2\n",
94 | "Pillow version: 7.0.0\n",
95 | "Tensorboard version: 2.3.0\n",
96 | "gdown version: 3.6.4\n",
97 | "TorchVision version: 0.7.0+cu101\n",
98 | "ITK version: NOT INSTALLED or UNKNOWN VERSION.\n",
99 | "\n",
100 | "For details about installing the optional dependencies, please visit:\n",
101 | " https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\n",
102 | "\n"
103 | ]
104 | }
105 | ],
106 | "source": [
107 | "!pip install -qU \"monai[torchvision]==0.3.0rc2\"\n",
108 | "\n",
109 | "import torch\n",
110 | "import monai\n",
111 | "monai.config.print_config()\n",
112 | "from monai.networks.layers import Conv\n",
113 | "from monai.networks.layers import Act\n",
114 | "from monai.networks.layers import split_args\n",
115 | "from monai.networks.layers import Pool"
116 | ]
117 | },
118 | {
119 | "cell_type": "markdown",
120 | "metadata": {
121 | "colab_type": "text",
122 | "id": "3RKwDaMQNrQj"
123 | },
124 | "source": [
125 | "## Unifying the network layer APIs\n",
126 | "\n",
127 |     "Network functionality represents a major design opportunity for MONAI. PyTorch is very much unopinionated in how networks are defined. It provides `Module` as a base class from which to create a network, and a few methods that must be implemented, but there is no prescribed pattern nor much helper functionality for initializing networks. \n",
128 |     "\n",
129 |     "This leaves a lot of room for defining some useful 'best practice' patterns for constructing new networks in MONAI. Although trivial, inflexible network implementations are easy enough to write, we can give users a toolset that makes it much easier to build well-engineered, flexible networks, and demonstrate its value by committing to use it in the networks that we build."
130 | ]
131 | },
132 | {
133 | "cell_type": "markdown",
134 | "metadata": {
135 | "colab_type": "text",
136 | "id": "lmh9WcfRUd2s"
137 | },
138 | "source": [
139 | "### Convolution as an example\n",
140 | "\n",
141 | "We'll start by taking a look at the Convolution `__doc__` string."
142 | ]
143 | },
144 | {
145 | "cell_type": "code",
146 | "execution_count": 2,
147 | "metadata": {
148 | "colab": {
149 | "base_uri": "https://localhost:8080/",
150 | "height": 51
151 | },
152 | "colab_type": "code",
153 | "id": "TFgdtWLBPkJC",
154 | "outputId": "e1e23a64-0ea2-40bc-813c-6a892b06cb13"
155 | },
156 | "outputs": [
157 | {
158 | "name": "stdout",
159 | "output_type": "stream",
160 | "text": [
161 | "The supported members are: ``CONV``, ``CONVTRANS``.\n",
162 | "Please see :py:class:`monai.networks.layers.split_args` for additional args parsing.\n"
163 | ]
164 | }
165 | ],
166 | "source": [
167 | "print(Conv.__doc__)"
168 | ]
169 | },
170 | {
171 | "cell_type": "markdown",
172 | "metadata": {
173 | "colab_type": "text",
174 | "id": "Y3y2qGhgRYXI"
175 | },
176 | "source": [
177 |     "The [Conv](https://docs.monai.io/en/latest/networks.html#convolution) class has two options for the first argument (`CONV` or `CONVTRANS`). The second argument must be the number of spatial dimensions, as in `Conv[name, dimension]`, for example:"
178 | ]
179 | },
180 | {
181 | "cell_type": "code",
182 | "execution_count": 3,
183 | "metadata": {
184 | "colab": {
185 | "base_uri": "https://localhost:8080/",
186 | "height": 119
187 | },
188 | "colab_type": "code",
189 | "id": "3rZ21AMDRlIP",
190 | "outputId": "b2b1e92c-b489-4326-b4a8-e9cb522a10be"
191 | },
192 | "outputs": [
193 | {
194 | "name": "stdout",
195 | "output_type": "stream",
196 | "text": [
197 |     "<class 'torch.nn.modules.conv.Conv1d'>\n",
198 |     "<class 'torch.nn.modules.conv.Conv2d'>\n",
199 |     "<class 'torch.nn.modules.conv.Conv3d'>\n",
200 |     "<class 'torch.nn.modules.conv.ConvTranspose1d'>\n",
201 |     "<class 'torch.nn.modules.conv.ConvTranspose2d'>\n",
202 |     "<class 'torch.nn.modules.conv.ConvTranspose3d'>\n"
203 | ]
204 | }
205 | ],
206 | "source": [
207 | "print(Conv[Conv.CONV, 1])\n",
208 | "print(Conv[Conv.CONV, 2])\n",
209 | "print(Conv[Conv.CONV, 3])\n",
210 | "print(Conv[Conv.CONVTRANS, 1])\n",
211 | "print(Conv[Conv.CONVTRANS, 2])\n",
212 | "print(Conv[Conv.CONVTRANS, 3])"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {
218 | "colab_type": "text",
219 | "id": "EPAuW45_TKBR"
220 | },
221 | "source": [
222 | "The configured classes are the \"vanilla\" PyTorch layers. We could create instances of them by specifying the layer arguments:"
223 | ]
224 | },
225 | {
226 | "cell_type": "code",
227 | "execution_count": 4,
228 | "metadata": {
229 | "colab": {
230 | "base_uri": "https://localhost:8080/",
231 | "height": 51
232 | },
233 | "colab_type": "code",
234 | "id": "rYTaJuS-TvZA",
235 | "outputId": "b8398d90-81c1-42f3-c03e-25c624b99465"
236 | },
237 | "outputs": [
238 | {
239 | "name": "stdout",
240 | "output_type": "stream",
241 | "text": [
242 | "Conv2d(1, 4, kernel_size=(3, 3), stride=(1, 1))\n",
243 | "Conv3d(1, 4, kernel_size=(3, 3, 3), stride=(1, 1, 1))\n"
244 | ]
245 | }
246 | ],
247 | "source": [
248 | "print(Conv[Conv.CONV, 2](in_channels=1, out_channels=4, kernel_size=3))\n",
249 | "print(Conv[Conv.CONV, 3](in_channels=1, out_channels=4, kernel_size=3))"
250 | ]
251 | },
252 | {
253 | "cell_type": "markdown",
254 | "metadata": {
255 | "colab_type": "text",
256 | "id": "HGthKNI0Um50"
257 | },
258 | "source": [
259 | "### Specifying a layer with additional arguments\n",
260 | "We'll now take a look at the Activation `__doc__` string."
261 | ]
262 | },
263 | {
264 | "cell_type": "code",
265 | "execution_count": 5,
266 | "metadata": {
267 | "colab": {
268 | "base_uri": "https://localhost:8080/",
269 | "height": 51
270 | },
271 | "colab_type": "code",
272 | "id": "TJJ6umixUaiP",
273 | "outputId": "bae28cb9-5cee-4ad5-921f-3609ca706ea1"
274 | },
275 | "outputs": [
276 | {
277 | "name": "stdout",
278 | "output_type": "stream",
279 | "text": [
280 | "The supported members are: ``ELU``, ``RELU``, ``LEAKYRELU``, ``PRELU``, ``RELU6``, ``SELU``, ``CELU``, ``GELU``, ``SIGMOID``, ``TANH``, ``SOFTMAX``, ``LOGSOFTMAX``.\n",
281 | "Please see :py:class:`monai.networks.layers.split_args` for additional args parsing.\n"
282 | ]
283 | }
284 | ],
285 | "source": [
286 | "print(Act.__doc__)"
287 | ]
288 | },
289 | {
290 | "cell_type": "markdown",
291 | "metadata": {
292 | "colab_type": "text",
293 | "id": "0NCmNkLcV9np"
294 | },
295 | "source": [
296 |     "The [Act](https://docs.monai.io/en/latest/networks.html#module-monai.networks.layers.Act) classes don't require spatial dimension information, but support additional arguments."
297 | ]
298 | },
299 | {
300 | "cell_type": "code",
301 | "execution_count": 6,
302 | "metadata": {
303 | "colab": {
304 | "base_uri": "https://localhost:8080/",
305 | "height": 51
306 | },
307 | "colab_type": "code",
308 | "id": "zQO2PX4TVXU1",
309 | "outputId": "46932ee7-de00-43ce-be57-c5035cdeade0"
310 | },
311 | "outputs": [
312 | {
313 | "name": "stdout",
314 | "output_type": "stream",
315 | "text": [
316 |     "<class 'torch.nn.modules.activation.PReLU'>\n"
317 | ]
318 | },
319 | {
320 | "data": {
321 | "text/plain": [
322 | "PReLU(num_parameters=1)"
323 | ]
324 | },
325 | "execution_count": 6,
326 | "metadata": {
327 | "tags": []
328 | },
329 | "output_type": "execute_result"
330 | }
331 | ],
332 | "source": [
333 | "print(Act[Act.PRELU])\n",
334 | "Act[Act.PRELU](num_parameters=1, init=0.1)"
335 | ]
336 | },
337 | {
338 | "cell_type": "markdown",
339 | "metadata": {
340 | "colab_type": "text",
341 | "id": "SE0tKPlyU95d"
342 | },
343 | "source": [
344 | "These could be fully specified with a tuple of `(type_name, arg_dict)`, such as `(\"prelu\", {\"num_parameters\": 1, \"init\": 0.1})`:"
345 | ]
346 | },
347 | {
348 | "cell_type": "code",
349 | "execution_count": 7,
350 | "metadata": {
351 | "colab": {
352 | "base_uri": "https://localhost:8080/",
353 | "height": 34
354 | },
355 | "colab_type": "code",
356 | "id": "BtRCIstVWQkr",
357 | "outputId": "e39579a4-4f03-411c-ab1c-941c25ff9f00"
358 | },
359 | "outputs": [
360 | {
361 | "data": {
362 | "text/plain": [
363 | "PReLU(num_parameters=1)"
364 | ]
365 | },
366 | "execution_count": 7,
367 | "metadata": {
368 | "tags": []
369 | },
370 | "output_type": "execute_result"
371 | }
372 | ],
373 | "source": [
374 | "act_name, act_args = split_args((\"prelu\", {\"num_parameters\": 1, \"init\": 0.1}))\n",
375 | "Act[act_name](**act_args)"
376 | ]
377 | },
378 | {
379 | "cell_type": "markdown",
380 | "metadata": {
381 | "colab_type": "text",
382 | "id": "UnfMOLx5Xegi"
383 | },
384 | "source": [
385 | "### Putting them together"
386 | ]
387 | },
388 | {
389 | "cell_type": "markdown",
390 | "metadata": {
391 | "colab_type": "text",
392 | "id": "3LzZkQ_7ScHX"
393 | },
394 | "source": [
395 |     "These APIs allow for flexible definitions of networks. Below we'll create a class called `MyNetwork` that utilizes `Conv`, `Act`, and `Pool`. Each network requires an `__init__` and a `forward` method."
396 | ]
397 | },
398 | {
399 | "cell_type": "code",
400 | "execution_count": 8,
401 | "metadata": {
402 | "colab": {},
403 | "colab_type": "code",
404 | "id": "Uc6O3S4WSopx"
405 | },
406 | "outputs": [],
407 | "source": [
408 | "class MyNetwork(torch.nn.Module):\n",
409 | "\n",
410 | " def __init__(self, dims=3, in_channels=1, out_channels=8, kernel_size=3, pool_kernel=2, act=\"relu\"):\n",
411 | " super(MyNetwork, self).__init__()\n",
412 | " # convolution\n",
413 | " self.conv = Conv[Conv.CONV, dims](in_channels, out_channels, kernel_size=kernel_size)\n",
414 | " # activation\n",
415 | " act_type, act_args = split_args(act)\n",
416 | " self.act = Act[act_type](**act_args)\n",
417 | " # pooling\n",
418 | " self.pool = Pool[Pool.MAX, dims](pool_kernel)\n",
419 | " \n",
420 | " def forward(self, x: torch.Tensor):\n",
421 | " x = self.conv(x)\n",
422 | " x = self.act(x)\n",
423 | " x = self.pool(x)\n",
424 | " return x\n"
425 | ]
426 | },
427 | {
428 | "cell_type": "markdown",
429 | "metadata": {
430 | "colab_type": "text",
431 | "id": "Wr1ilIJvayFo"
432 | },
433 | "source": [
434 |     "This network definition can be instantiated to support either 2D or 3D inputs, with flexible kernel sizes.\n",
435 |     "\n",
436 |     "This comes in handy when adapting the same architecture design to different tasks,\n",
437 |     "making it easy to switch among 2D, 2.5D, and 3D."
438 | ]
439 | },
440 | {
441 | "cell_type": "code",
442 | "execution_count": 9,
443 | "metadata": {
444 | "colab": {
445 | "base_uri": "https://localhost:8080/",
446 | "height": 323
447 | },
448 | "colab_type": "code",
449 | "id": "ddRgmz-9ap2b",
450 | "outputId": "540bcf09-9f85-456f-c9a5-cc0197c68f67"
451 | },
452 | "outputs": [
453 | {
454 | "name": "stdout",
455 | "output_type": "stream",
456 | "text": [
457 | "MyNetwork(\n",
458 | " (conv): Conv3d(1, 8, kernel_size=(3, 3, 3), stride=(1, 1, 1))\n",
459 | " (act): ReLU()\n",
460 | " (pool): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
461 | ")\n",
462 | "torch.Size([3, 8, 9, 9, 14])\n",
463 | "MyNetwork(\n",
464 | " (conv): Conv2d(3, 8, kernel_size=(3, 3), stride=(1, 1))\n",
465 | " (act): ELU(alpha=1.0, inplace=True)\n",
466 | " (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
467 | ")\n",
468 | "torch.Size([3, 8, 11, 11])\n",
469 | "MyNetwork(\n",
470 | " (conv): Conv3d(4, 8, kernel_size=(3, 3, 1), stride=(1, 1, 1))\n",
471 | " (act): Sigmoid()\n",
472 | " (pool): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
473 | ")\n",
474 | "torch.Size([3, 8, 14, 14, 2])\n"
475 | ]
476 | }
477 | ],
478 | "source": [
479 | "# default network instance\n",
480 | "default_net = MyNetwork()\n",
481 | "print(default_net)\n",
482 | "print(default_net(torch.ones(3, 1, 20, 20, 30)).shape)\n",
483 | "\n",
484 | "# 2D network instance\n",
485 | "elu_net = MyNetwork(dims=2, in_channels=3, act=(\"elu\", {\"inplace\": True}))\n",
486 | "print(elu_net)\n",
487 | "print(elu_net(torch.ones(3, 3, 24, 24)).shape)\n",
488 | "\n",
489 | "# 3D network instance with anisotropic kernels\n",
490 | "sigmoid_net = MyNetwork(3, in_channels=4, kernel_size=(3, 3, 1), act=\"sigmoid\")\n",
491 | "print(sigmoid_net)\n",
492 | "print(sigmoid_net(torch.ones(3, 4, 30, 30, 5)).shape)"
493 | ]
494 | },
495 | {
496 | "cell_type": "markdown",
497 | "metadata": {
498 | "colab_type": "text",
499 | "id": "3AszY4ttlRut"
500 | },
501 | "source": [
502 | "Almost all the MONAI layers, blocks and networks are extensions of `torch.nn.modules` and follow this pattern. This makes the implementations compatible with any PyTorch pipelines and flexible with the network design.\n",
503 | "The current collections of those differentiable modules are listed in https://docs.monai.io/en/latest/networks.html."
504 | ]
505 | },
506 | {
507 | "cell_type": "markdown",
508 | "metadata": {},
509 | "source": [
510 | "### AHNet\n",
511 | "\n",
512 | "Among those implementations, MONAI features a 3D anisotropic hybrid network (AHNet) with the anisotropic encoder kernels initialised from a pretrained resnet. Please see https://docs.monai.io/en/latest/networks.html#ahnet"
513 | ]
514 | },
515 | {
516 | "cell_type": "markdown",
517 | "metadata": {},
518 | "source": [
519 | "## Summary\n",
520 | "\n",
521 | "In this notebook, we recapped MONAI Layers including:\n",
522 | "- Convolutions and Activations\n",
523 | "- Putting together a base network\n",
524 | "- Initialize an AHNet\n",
525 | "\n",
526 | "For full API documentation, please visit https://docs.monai.io."
527 | ]
528 | },
529 | {
530 | "cell_type": "code",
531 | "execution_count": null,
532 | "metadata": {},
533 | "outputs": [],
534 | "source": []
535 | }
536 | ],
537 | "metadata": {
538 | "accelerator": "GPU",
539 | "colab": {
540 | "collapsed_sections": [],
541 | "name": "lab3_networks.ipynb",
542 | "provenance": []
543 | },
544 | "kernelspec": {
545 | "display_name": "Python 3",
546 | "language": "python",
547 | "name": "python3"
548 | },
549 | "language_info": {
550 | "codemirror_mode": {
551 | "name": "ipython",
552 | "version": 3
553 | },
554 | "file_extension": ".py",
555 | "mimetype": "text/x-python",
556 | "name": "python",
557 | "nbconvert_exporter": "python",
558 | "pygments_lexer": "ipython3",
559 | "version": "3.8.5"
560 | },
561 | "widgets": {
562 | "application/vnd.jupyter.widget-state+json": {
563 | "081b8157ccdd44feab9714ce98fdb6fc": {
564 | "model_module": "@jupyter-widgets/base",
565 | "model_name": "LayoutModel",
566 | "state": {
567 | "_model_module": "@jupyter-widgets/base",
568 | "_model_module_version": "1.2.0",
569 | "_model_name": "LayoutModel",
570 | "_view_count": null,
571 | "_view_module": "@jupyter-widgets/base",
572 | "_view_module_version": "1.2.0",
573 | "_view_name": "LayoutView",
574 | "align_content": null,
575 | "align_items": null,
576 | "align_self": null,
577 | "border": null,
578 | "bottom": null,
579 | "display": null,
580 | "flex": null,
581 | "flex_flow": null,
582 | "grid_area": null,
583 | "grid_auto_columns": null,
584 | "grid_auto_flow": null,
585 | "grid_auto_rows": null,
586 | "grid_column": null,
587 | "grid_gap": null,
588 | "grid_row": null,
589 | "grid_template_areas": null,
590 | "grid_template_columns": null,
591 | "grid_template_rows": null,
592 | "height": null,
593 | "justify_content": null,
594 | "justify_items": null,
595 | "left": null,
596 | "margin": null,
597 | "max_height": null,
598 | "max_width": null,
599 | "min_height": null,
600 | "min_width": null,
601 | "object_fit": null,
602 | "object_position": null,
603 | "order": null,
604 | "overflow": null,
605 | "overflow_x": null,
606 | "overflow_y": null,
607 | "padding": null,
608 | "right": null,
609 | "top": null,
610 | "visibility": null,
611 | "width": null
612 | }
613 | },
614 | "313d9aa40b7248a4887c638eeef7e9e1": {
615 | "model_module": "@jupyter-widgets/controls",
616 | "model_name": "DescriptionStyleModel",
617 | "state": {
618 | "_model_module": "@jupyter-widgets/controls",
619 | "_model_module_version": "1.5.0",
620 | "_model_name": "DescriptionStyleModel",
621 | "_view_count": null,
622 | "_view_module": "@jupyter-widgets/base",
623 | "_view_module_version": "1.2.0",
624 | "_view_name": "StyleView",
625 | "description_width": ""
626 | }
627 | },
628 | "63538ca5c61543e2a074e12cab6e581a": {
629 | "model_module": "@jupyter-widgets/controls",
630 | "model_name": "ProgressStyleModel",
631 | "state": {
632 | "_model_module": "@jupyter-widgets/controls",
633 | "_model_module_version": "1.5.0",
634 | "_model_name": "ProgressStyleModel",
635 | "_view_count": null,
636 | "_view_module": "@jupyter-widgets/base",
637 | "_view_module_version": "1.2.0",
638 | "_view_name": "StyleView",
639 | "bar_color": null,
640 | "description_width": "initial"
641 | }
642 | },
643 | "7f7b548251a943d08e7c6091a96d9bae": {
644 | "model_module": "@jupyter-widgets/base",
645 | "model_name": "LayoutModel",
646 | "state": {
647 | "_model_module": "@jupyter-widgets/base",
648 | "_model_module_version": "1.2.0",
649 | "_model_name": "LayoutModel",
650 | "_view_count": null,
651 | "_view_module": "@jupyter-widgets/base",
652 | "_view_module_version": "1.2.0",
653 | "_view_name": "LayoutView",
654 | "align_content": null,
655 | "align_items": null,
656 | "align_self": null,
657 | "border": null,
658 | "bottom": null,
659 | "display": null,
660 | "flex": null,
661 | "flex_flow": null,
662 | "grid_area": null,
663 | "grid_auto_columns": null,
664 | "grid_auto_flow": null,
665 | "grid_auto_rows": null,
666 | "grid_column": null,
667 | "grid_gap": null,
668 | "grid_row": null,
669 | "grid_template_areas": null,
670 | "grid_template_columns": null,
671 | "grid_template_rows": null,
672 | "height": null,
673 | "justify_content": null,
674 | "justify_items": null,
675 | "left": null,
676 | "margin": null,
677 | "max_height": null,
678 | "max_width": null,
679 | "min_height": null,
680 | "min_width": null,
681 | "object_fit": null,
682 | "object_position": null,
683 | "order": null,
684 | "overflow": null,
685 | "overflow_x": null,
686 | "overflow_y": null,
687 | "padding": null,
688 | "right": null,
689 | "top": null,
690 | "visibility": null,
691 | "width": null
692 | }
693 | },
694 | "8463a8ce1803481d94723a4878b29025": {
695 | "model_module": "@jupyter-widgets/controls",
696 | "model_name": "HBoxModel",
697 | "state": {
698 | "_dom_classes": [],
699 | "_model_module": "@jupyter-widgets/controls",
700 | "_model_module_version": "1.5.0",
701 | "_model_name": "HBoxModel",
702 | "_view_count": null,
703 | "_view_module": "@jupyter-widgets/controls",
704 | "_view_module_version": "1.5.0",
705 | "_view_name": "HBoxView",
706 | "box_style": "",
707 | "children": [
708 | "IPY_MODEL_8a97ddb03f95404cb27a24557043cbaf",
709 | "IPY_MODEL_cbabc5d60b89408b9284c54ecfc5a8f3"
710 | ],
711 | "layout": "IPY_MODEL_7f7b548251a943d08e7c6091a96d9bae"
712 | }
713 | },
714 | "8a97ddb03f95404cb27a24557043cbaf": {
715 | "model_module": "@jupyter-widgets/controls",
716 | "model_name": "FloatProgressModel",
717 | "state": {
718 | "_dom_classes": [],
719 | "_model_module": "@jupyter-widgets/controls",
720 | "_model_module_version": "1.5.0",
721 | "_model_name": "FloatProgressModel",
722 | "_view_count": null,
723 | "_view_module": "@jupyter-widgets/controls",
724 | "_view_module_version": "1.5.0",
725 | "_view_name": "ProgressView",
726 | "bar_style": "success",
727 | "description": "100%",
728 | "description_tooltip": null,
729 | "layout": "IPY_MODEL_f08147de49e4468da9a8b03fd56cf499",
730 | "max": 102502400,
731 | "min": 0,
732 | "orientation": "horizontal",
733 | "style": "IPY_MODEL_63538ca5c61543e2a074e12cab6e581a",
734 | "value": 102502400
735 | }
736 | },
737 | "cbabc5d60b89408b9284c54ecfc5a8f3": {
738 | "model_module": "@jupyter-widgets/controls",
739 | "model_name": "HTMLModel",
740 | "state": {
741 | "_dom_classes": [],
742 | "_model_module": "@jupyter-widgets/controls",
743 | "_model_module_version": "1.5.0",
744 | "_model_name": "HTMLModel",
745 | "_view_count": null,
746 | "_view_module": "@jupyter-widgets/controls",
747 | "_view_module_version": "1.5.0",
748 | "_view_name": "HTMLView",
749 | "description": "",
750 | "description_tooltip": null,
751 | "layout": "IPY_MODEL_081b8157ccdd44feab9714ce98fdb6fc",
752 | "placeholder": "",
753 | "style": "IPY_MODEL_313d9aa40b7248a4887c638eeef7e9e1",
754 | "value": " 97.8M/97.8M [00:00<00:00, 215MB/s]"
755 | }
756 | },
757 | "f08147de49e4468da9a8b03fd56cf499": {
758 | "model_module": "@jupyter-widgets/base",
759 | "model_name": "LayoutModel",
760 | "state": {
761 | "_model_module": "@jupyter-widgets/base",
762 | "_model_module_version": "1.2.0",
763 | "_model_name": "LayoutModel",
764 | "_view_count": null,
765 | "_view_module": "@jupyter-widgets/base",
766 | "_view_module_version": "1.2.0",
767 | "_view_name": "LayoutView",
768 | "align_content": null,
769 | "align_items": null,
770 | "align_self": null,
771 | "border": null,
772 | "bottom": null,
773 | "display": null,
774 | "flex": null,
775 | "flex_flow": null,
776 | "grid_area": null,
777 | "grid_auto_columns": null,
778 | "grid_auto_flow": null,
779 | "grid_auto_rows": null,
780 | "grid_column": null,
781 | "grid_gap": null,
782 | "grid_row": null,
783 | "grid_template_areas": null,
784 | "grid_template_columns": null,
785 | "grid_template_rows": null,
786 | "height": null,
787 | "justify_content": null,
788 | "justify_items": null,
789 | "left": null,
790 | "margin": null,
791 | "max_height": null,
792 | "max_width": null,
793 | "min_height": null,
794 | "min_width": null,
795 | "object_fit": null,
796 | "object_position": null,
797 | "order": null,
798 | "overflow": null,
799 | "overflow_x": null,
800 | "overflow_y": null,
801 | "padding": null,
802 | "right": null,
803 | "top": null,
804 | "visibility": null,
805 | "width": null
806 | }
807 | }
808 | }
809 | }
810 | },
811 | "nbformat": 4,
812 | "nbformat_minor": 1
813 | }
814 |
--------------------------------------------------------------------------------
/day1notebooks/segresnet_model_epoch30.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Project-MONAI/MONAIBootcamp2020/c0c7e074660b7c073bff4517912bcea347bf50e6/day1notebooks/segresnet_model_epoch30.pth
--------------------------------------------------------------------------------
/day2notebooks/day2_segment_challenge_solution.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "colab_type": "text",
7 | "id": "fkpT6wtkd786"
8 | },
9 | "source": [
10 | "# Segmentation Exercise\n",
11 | "---\n",
12 | "\n",
13 | "[](https://colab.research.google.com/github/Project-MONAI/MONAIBootcamp2020/blob/master/day2notebooks/day2_segment_challenge_solution.ipynb)\n",
14 | "\n",
15 | "In this exercise we will segment the left ventricle of the heart in relatively small images using neural networks. \n",
16 | "Below is the code for setting up a segmentation network and training it. The network isn't very good, **so the exercise is to improve the quality of the segmentation by improving the network and/or the training scheme including data loading efficiency and data augmentation**. \n",
17 | "\n",
18 | "The data being used here is derived from the [Sunnybrook Cardiac Dataset](https://www.cardiacatlas.org/studies/sunnybrook-cardiac-data/) of cardiac MR images, filtered to contain only left ventricular myocardium segmentations and reduced in the XY dimensions.\n",
19 | "\n",
20 | "First we install and import MONAI plus other dependencies:"
21 | ]
22 | },
23 | {
24 | "cell_type": "code",
25 | "execution_count": 1,
26 | "metadata": {
27 | "colab": {},
28 | "colab_type": "code",
29 | "id": "TLl72MMid0pJ"
30 | },
31 | "outputs": [],
32 | "source": [
33 | "%matplotlib inline\n",
34 | "\n",
35 | "from urllib.request import urlopen\n",
36 | "from io import BytesIO\n",
37 | "\n",
38 | "import torch, torch.nn as nn, torch.nn.functional as F\n",
39 | "\n",
40 | "import numpy as np\n",
41 | "\n",
42 | "import matplotlib.pyplot as plt\n",
43 | "\n",
44 | "import monai\n",
45 | "from monai.utils import first\n",
46 | "from monai.losses import DiceLoss\n",
47 | "from monai.metrics import DiceMetric\n",
48 | "from monai.data import ArrayDataset\n",
49 | "from torch.utils.data import DataLoader\n",
50 | "from monai.utils import progress_bar\n",
51 | "from monai.transforms import (\n",
52 | " Transform,\n",
53 | " Compose,\n",
54 | " AddChannel,\n",
55 | " ScaleIntensity,\n",
56 | " ToTensor\n",
57 | ")\n",
58 | "\n",
59 | "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
60 | "DATA_NPZ = \"https://github.com/ericspod/VPHSummerSchool2019/raw/master/scd_lvsegs.npz\"\n",
61 | "\n",
62 | "batch_size = 300 # changed from original\n",
63 | "num_workers = 10 # changed from original\n",
64 | "num_epochs = 600\n",
65 | "lr = 5e-4 # changed from original"
66 | ]
67 | },
68 | {
69 | "cell_type": "markdown",
70 | "metadata": {
71 | "colab_type": "text",
72 | "id": "KyU8DpLmjIBL"
73 | },
74 | "source": [
75 | "We now load the data from the remote source and visualize a sample:"
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": 2,
81 | "metadata": {
82 | "colab": {
83 | "base_uri": "https://localhost:8080/",
84 | "height": 305
85 | },
86 | "colab_type": "code",
87 | "id": "HqCaraEzieh3",
88 | "outputId": "434141a3-4925-45c8-bf46-0a844b455070"
89 | },
90 | "outputs": [
91 | {
92 | "name": "stdout",
93 | "output_type": "stream",
94 | "text": [
95 | "(420, 64, 64) (420, 64, 64)\n"
96 | ]
97 | },
98 | {
99 | "data": {
100 | "text/plain": [
101 | ""
102 | ]
103 | },
104 | "execution_count": 2,
105 | "metadata": {},
106 | "output_type": "execute_result"
107 | },
108 | {
109 | "data": {
110 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD7CAYAAACscuKmAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO2da4xd1XXH/2vMEB42tgc/MB4cA3HMw2lMNeERkmIwEGMcrChKlUitaIXkL2mVqq0S0iqVWqkSVaWq/VBVstq0SE3bkEJqhBrAcWqiKijFKebhGNeQGDz24HEcYwgQ8GP1w5x7/D/Ld6/Z98ydewfO+knW7HP3Ofvss8/dvmvt9diiqgiC4P3PQL87EARBb4jJHgQNISZ7EDSEmOxB0BBisgdBQ4jJHgQNYUqTXUTWicgeEXlRRO7tVqeCIOg+UtfOLiKzAPwfgNsAjAJ4CsAXVPXH3eteEATd4qwpXHstgBdV9ScAICL/BmAjgORkF5H3tAfPOeecU5bPPffcsiwilfNOnTrVtgwA/J/ryZMnK3XvvPNO2+ts+8zAQFU4mzVrVlk+ceJE8l7B+xdVbfuFmcpkXwpgPx2PArhusotaX85OJIo60oedILlt8HV2In3oQx8qy1deeWVZPuus6jC+++67ZfkXv/hFpY4n8c9//vNK3YsvvliW33777WT7zOzZsyvH8+fPL8vj4+Nl+dixY8l+5OL9p2Ph8bbjmDrPe0ed3DsHey/uox0bvre9rhv9Sj133e9wiqlM9nZPeUZvRGQTgE1TuE8QBF1gKpN9FMAldDwM4KA9SVU3A9gMTIjxrf+tvF8W739L739Zc99knQe3v2jRokrdFVdcUZZZpGfxG6g+mxWfX3vttbK8a9euSt3g4GDbMovjAHD22WeX5fPOO69Sd/jw4bLMv+Z1fsktnfyq5f7icZu5v2TeeXUkuE76YetS38dOvn/daCOHqazGPwVghYhcKiJnA/g8gIe7060gCLpN7V92VT0hIr8D4DEAswB8XVV3TXJZEAR9YipiPFT1PwH8Z5f6EgTBNDKlyV6HqeohXddjzEox68PXX399pW7u3Lll+Ze//GVZ5pVzoKpj83kAsHfv3rb3Aqq6G68DWD1xzpw5Zdmu9h89ehTt8HTo3FVwz8Torbh798pdg/HayO2/pw/nrjF0e4XctlFnPHIJd9kgaAgx2YOgIcwYMb6OGNWJSJhqnz3OAODaa68ty9b0xiL5W2+9lbwX8+yzz1aOWdz1REI+z4rI7IzDDjyT9SWXlINJ3fHONZF2w0FlOpxcvGfJ7b/ntNMr4pc9CBpCTPYgaAgx2YOgIfRcZ29R1yUxtw0L17GevmzZssp5H/7wh8uydYNlE5unyx44cKAsW3dW1rG9YAwOfrHmO37u6db/uqGXp64B8oNk6j7zdK8d5LafG8XoBRBN1ZU2ftmDoCHEZA+ChtA3Md4y3aaJlCnrk5/8ZOU8Fo+sGM/HLI6/+eablfPYS46j14Dqsy1YsKBSx+I/Y8W+48ePJ+um06PL65cVOVPvsJNox5SIbO/FkYXdiHqrG7HWbZXHPid7Zqbei9d2/LIHQUOIyR4EDWHGiPG54l1dMZVzxnEACqdxAqrJJdhLDqiKUdzGk08+WTmPE1tYDzcOYnnjjTcqdSmx2LbR7RRNljqJJzoJTsm9V65ozWOV249uBZnU8QjNtUR1kh4rh/hlD4KGEJM9CBpCTPYgaAgzRmf3qGOqsNdceumlZXnt2rVl2SZ/4GObXILZvXt3WfZMY9Z8wqY4Tg5pz/U87Ty6keixjrdXXeokerT3ze1vP8lNrZ26pt1xp8QvexA0hJjsQdAQ3hNifC4s5lxwwQWVupGRkbLMorr1fmNYHAeqHm4HD55OkW/FKxbBr7766kod7/piSW3R5ImmnnmGmY7kEnVyvtelbn66fpEbFGOZ6u4z4UEXBEFM9iBoCjHZg6Ah9E1n78ZebJYPfOADZXnVqlWVOtaTODGEjWzjvOs2aQTr25wAw57H+eX3799fqbOmPiY3gqrOHnfd2NW2k/vWMQFOB91wtc5twzOp1dnb0EteUYdJf9lF5OsiMi4iz9NnQyKyVUT2Fn/ne20EQdB/csT4fwKwznx2L4BtqroCwLbiOAiCGYzkiAYishzAI6q6qjjeA2CNqo6JyBIA21V1ZUY7SuVKXZ0tfGwbH/zgB8vyzTffXKnbt29fWR4bGyvLVlQaGhoqyzt37kz2kc1rrD7YftmtoVj8t6a93CivOl5i3dhCeLpVAe86zzuyG8lOur0fQSfvL/VsdVSjU6dOQVXbnlh3gW6xqo4VHRoDsGiS84Mg6DPTvkAnIpsAbJru+wRB4NM3MX6S8yrHudsifeYzn2l7DXCmON3C5ojbvn17sh+cvILFceuFx6oAb9Vk2+QkF0A18IatBNZiwFivu1SK626oTdNBnUAbu2UXfw/4Hdk2puO5ci0oU23PkrLQqGrXxfiHAdxdlO8GsKVmO0EQ9Igc09u/AngSwEoRGRWRewDcB+A2EdkL4LbiOAiCGcykOruqfiFRtTbxeRAEM5D3RNRbygSzcmV1mYDNYVafnz17dllmjzfr0cZ17GkHVHVF1o05iSQAjI+PJ/vI3nU24STrm5zYwm4hxeNh9XnW5di0Z81TXGf1v1T0naUbOqq3rpBrkuL+2oQjvCaT673o9TG3ztvay6458HvPNb1FwskgCJLEZA+ChjBjxPhcLyU2V/GOq0DVk80Gp7z00ktt644cOVI5j0UqK86yiMhiGeeaB6oqg4WfxeaD5zz1nOfe87TzzFDWtMewWG+fk8eHTZZeQE7ubqx1ROLJ6vjedqz4OXlMU6bYduR6LOaK6tY8mCLE+CAIahGTPQgaQkz2IGgIM0Zn90wwrHted911ZdnqRayHcRIKoKpvs8506NChynmeOy7r1Lw+YHV0NvdY3Yoj6ax+yX303Ca9dQW+H5sOrUmK66yJkU2JbPazaxMpk1G74xZ1XEAnO88zFfK5rL/b707KzdjiuR3zOObq5d796uTzj4STQRDEZA+CpjBjxHjGiiJsMlm06HTovN1SmcUcG83GYtvo6GhZtuIQ56574YUXKnUsprEIbj3cWO3Ys2dPpW7hwoVl2ea2Z3GUVQjr/eaJ8dyvlAnNYseKn4fFf5ukwxMZuY/TvYWURzfMft55PCaeV2LuvTzzGn8nUupEiPFBEMRkD4KmkJW8oms3c3LQeavgGzduLMssmtrVYU71bEX8lOhrV01Z3GeRG6iK0+z9Zle6OYiF8+IB1efuZMU2FxYlPa8tDgrxrAKsQlmPPO9ZWG2wAT8pcsX9urnw+N3mBvt4bQDptOS5HoX22BsDz0LTamM6ctAFQfAeIyZ7EDSEmOxB0BB6bnpr6RqennX55ZdXjtkb7plnninLXjIC2z7rpalkGEBVB7PJItlExUkl7RZPCxYsaHsvez/rucZrArm50O153EdP/2MvOTuOqcSaNlEGj6ldt2Bdn5/
52LFjlfPqRHJZUyGvJdi1Gh5jG2WYi7eexG3ys9j34unwqffUydpEztjFL3sQNISY7EHQEPrmQWfFmosuuqgsDw8PV+peeeWVssximfXoOv/888uyFWvYTMfiqBWVPNGU61jEt55w8+bNK8vWJOUFXKQ8pLztjjxRnUVwz9zD4wZU1SY+z/NKtHWcFIS9Hl9//fXKeXW86zxToQ1KSuXhYzUDqI69FffZTGfvnbtVVuoae533eU5O+fCgC4IgJnsQNIWY7EHQEPpmerO51i+99NIzzmnB+iDnXbf6Cedrt+6QqWSA1kSS2pYZqK4XsIln/vz5lfNSpqtO8ExvXiRXKkGD1al5fOxz8loIm8qs+zCPKY+97Revl1x44YWV89js5+1p561h8Luw5kyrY7ewUYA8Pnbsc9+Fp6d7UYy5ySu89+6Z9spzJjtBRC4Rkf8Skd0isktEvlR8PiQiW0Vkb/F3/mRtBUHQP3LE+BMA/kBVrwRwPYAvishVAO4FsE1VVwDYVhwHQTBDydnrbQzAWFF+Q0R2A1gKYCOANcVp9wPYDuArXlsDAwNlYoRLLrmkUseio42uYjHwwIEDZdnLmW7FmlzvNG4zd/tfa8ZhUTI357iFr7Mqiaca5G7ZnNpaqd1xC5vXj9vwcuWziGz7zokyPDOlZ25MRSNaPFXAi0CsYx70+uiZ1Dy8PP05HpcdLdAV+7RfA+CHABYX/xG0/kNYlL4yCIJ+k71AJyKzATwI4PdU9fXc/41EZBOATUW5Th+DIOgCWb/sIjKIiYn+DVV9qPj4kIgsKeqXABhvd62qblbVEVUdickeBP1j0l92mZih/wBgt6r+FVU9DOBuAPcVf7dM1tbg4CAuvvhiAGfqeBxF9q1vfatSl8pjbnV7dsW0/7HkuhfmZjBhF03bD15/8PRQS8pl03OBtO2lklZ6mWSsyeuOO+5o257th5d7/jvf+U5Z9tYYeBxtG9zn3LzulhwX007bzGmjE1fXOveu4y6bI8bfCOA3ATwnIq0dDv4IE5P8ARG5B8ArAD7XSWeDIOgtOavx/w0g9V/i2u52JwiC6aKnCSfnzJmjIyMjAM706Nq1a1dZtqIpi5l8nd2Wmb2lrIdUytRkTRjeeLA4ymrH4sWLk/eyffSSHvJ1njcZYz3EWPzne3/qU5+qnOeJxaktk7zzvHHzRPXHHnusLFtVg4/5fU63ON7J2lLu/TwxPjeK0Uu60jo+fvw4Tp06FQkng6DJxGQPgobQ00CYgYGBcuXaij+cDMJup8THLMbbVXAOxrBiTkoEsqIjn2dXkVkcZfHZ3otX471ECBau8/KZcZ9tTje+3/r165NteOpESs3xLAme+Ml9sve68847y/I3v/nNSh2Pd27wku1H6l3YNnJVg1y1rxNVI+XZ5+VHtN/b1rhG8oogCGKyB0FTiMkeBA2hp6a3wcFBbSVjtPflXPE2DzvrpazH5Cb/A9L6phdRZs1EKa85Trxhr7P70bGu5XnvefqZl6SDvd+86Dtve2EeK880xvq2JaXre+Y7m9DkoYceKsvc/zfffLNynqfP5+al5zqbyNR7Z6nvlefB6Y2BF43otW/GOExvQdBkYrIHQUPoqRg/e/ZsXbVqFYAzPctYNHv55ZcrdamEElbMZlIeRkA12YSXx9x6+fF1nCuec94D1Weznnxcl5uzzIqtbKa0orTtcwsOTAGqoqqXn87Lmc7HVp1gsfKuu+5KtpEr4n//+98vy3YL6DpbSNnzUsFWQFWNtKbUXO+33L7kBu5YQowPgqAkJnsQNISY7EHQEHrqLnvWWWeV+q3Vy9nc5rlDsi7rRY3ZOta3eQ3A6pB8nt0DjU1vXLZbHvNagtUN+d5eEgMeA7sN8YYNG5JtME888URZ5ig9oKqn14ncAqpjbM1yPK6PPvpo22uAajSeZzbLTebhjbfnqszjYU1vXFc3oWVufvnUNZORsxV6/LIHQUOIyR4EDaHnUW8t8deKSiz6WnGLz/W2W05teWzbZ3ObNbN42/8y3EdrvuN+WPOgt01PyszleQra9thE5fU/18Tj5VNn7HOmos2s2ezxxx8vy7fffnuljp9t48aNZfmBBx7I6hOQzslnn5n7Zc3C7C3pRb114vGWU+eK5ImoTk9ViV/2IGgIMdmDoCH0VIxX1VIstOLzypUry/Lu3bsrdSzGsrjopVFmcRyormh7CSrYM87C9/NyorEIW2ebHqAqPnMSCtvm9773vUod9z+1o6vF66MXkMPv0PNY5LG3ufVSu6x6ddbjj9WEuivdrvibmcDDaz/Xy89T0bzvfutcN1W5098gCN5HxGQPgoYQkz0IGkLPTW8tc5CN5GLPtSuvvLJSx952rCt7ySKt7sbnsmec1YvYBLNw4cJKHXvK8XVWH2ad0jNdpfQuoPqc9jxOKGEj7lKecdY0ljJJ2ftx/+06izcGDEeUWW/AlKcdANx6661tz7M6u5f8M2XS9bz1bB2b4nKTbnYSsZZreuPn9L7fKSb9ZReRc0Tkf0TkGRHZJSJ/Wnw+JCJbRWRv8Xf+pHcLgqBv5Ijx7wC4RVU/CmA1gHUicj2AewFsU9UVALYVx0EQzFBy9npTAC35dbD4pwA2AlhTfH4/gO0AvuK1NTAwUJqlbM53NpXZXOgs8ucmO7BqAougXGcDRFgctSJnKiDHisi5QQ+eNx17k1kTFKshVqRNmYm8oCFPJOQ2PLXDy83G5rZWDsIWLCJbs1wqOYmXN9B+r1KmQ+sl54nndYJYvCCn6di6vGuBMCIyq9jBdRzAVlX9IYDFqjpW3GAMwKKpdjgIgukja7Kr6klVXQ1gGMC1IrIq9wYisklEdojIDvu/aRAEvaMj05uqvoYJcX0dgEMisgQAir/jiWs2q+qIqo5YESsIgt4xqc4uIgsBHFfV10TkXAC3AvgLAA8DuBvAfcXfLZO1dfLkydJ8Zc1arKdbXZndTz2dhHUyG/HFLpULFixo+zlQ1e1tP1LmJU/fs9ekElS060sLq9uzmdK6BafwEoJ4rqLedsue+yZLcXPnzi3LnCwT8JN58HqEZ9rj6+wPCj8nt2HXdDyzGT+nXVfIzfPukZtw0jMP5pBjZ18C4H4RmYUJSeABVX1ERJ4E8ICI3APgFQCf6/juQRD0jJzV+GcBXNPm8yMA1k5Hp4Ig6D499aB76623sGPHDgDAFVdcUaljUdLmdGOxNeVlBlTFc7vt0qJFp40FLMJZ8ZBVgVyzWSfb/3rmttw2WLzN3WbI85Kz4ierTbkRfPadsTrEY+pFx3lqjae68HfHqkK5OehS11hyk1d4ufA8k7FnovPazzHnhW98EDSEmOxB0BB6KsafOHECP/vZzwAA27dvr9R5IpZN6Zz63Es3nFrpth5ouSKnt/Lq7ZDq5dpLieB266Zly5Zl9d8TYT11gu+d2k7KtmFzCnKdXcFmvCQgHEDjeQPyu7WBQRzYxLkCUzncbHtAviXAw9smKuUp6HksptQ3LyAmftmDoCHEZA+ChhCTPQgaQk919sHBQQwPDwM4c/snLyEfe9d5XlB83Y
UXXlipS+lMntnM09k9PH2e9cFcvdkzSdk6Nkuxbmv1Qm+7Zda/uY7NX7aPNu4htTbheSXaZ2F9e/780+kSvGSRth/sSWnz+6fw3ov3fclNKmnrUnp27vZgucQvexA0hJjsQdAQeirGnzx5shTJrbjl7cDKsNhnTW/sxeXlKcvFilEpU1Zu4gZ7bm7uNwu3adtIbZXlmats8EhKRPTET3sN13mBJNwm704LVN8hf1/su+T3YkV1DsLh87zx7cQ7Lfd7myvie/3wvB5b/YhdXIMgiMkeBE0hJnsQNISe6+xHjx5N1uWQ2vcN8KOC+Drr2pnbBuNFa3kusawrepFRrOPdeeedlfN+8IMftO2HJTchg22D3VQ9t1fWo61LKa+f7Nu3ryzbsf/0pz9dlq0uzsePPPJIWfbG2+ri3Gcv2YbnzuqNo3ddirr70Xlt5KwDxC97EDSEmOxB0BB6vmVzS6zyxBArErIonBIxAT8yikW4OmYQIB1R5nmneckxcrcSsnD7Xk603K2pbR23z55rP/3pTyvn8RgfOXKkUteKbgSqiSxuuumm5L2898l9tKqAt81Vbl4/TxzPTRrhfa+891k3d12nxC97EDSEmOxB0BB6Lsa3RD9v2yIrVrJoxoEeVkRjjzpbx0EcnsiWGwThneetxnupk1mc27p1a1nmraBsm15KZBafvZV064nIHnXj46e3A7B5/TioxaaI5lV2xr5bTilux+rBBx8sy/z+7Io7v+uU2A7kJx/x0n975Aan1FlJb3ddO8KDLgiCmOxB0BRisgdBQ+ipzi4ipa7u5c62phXW1zwvKDbLWVMQ66+pfOQWr4+e/p4bJZW7vY/1LGNd+dChQ5U6Ppd1bKuH8prJ4cOHK3WpNQfW3y133HFH5TjV/+XLl1fO4/F46KGHkm14Hn9ukkU6Nzfy0dO9vW20mLqmtroRcTlk/7IX2zY/LSKPFMdDIrJVRPYWf+dP1kYQBP2jEzH+SwB20/G9ALap6goA24rjIAhmKJIjDojIMID7Afw5gN9X1Q0isgfAGlUdK7Zs3q6qKydpR1tilRUrve19hoaGyjKLbDaohkV1ayayOelaWJEwd7sgFiu9gJzcAB+g+mwsclpRff369WXZ5t9PqStW1PXy47MIyirDXXfdVTnPezauYy+8xx9/vHKel4iDvxPetkh8L6uS5OIlFcnN/eYFzNQNqkq1kVITVBWq2rYy95f9rwF8GQD3ZLGqjhU3GAOwqN2FQRDMDCad7CKyAcC4qv6ozg1EZJOI7BCRHXWuD4KgO+Ssxt8I4C4RWQ/gHAAXiMg/AzgkIktIjG+7VKuqmwFsBibE+C71OwiCDsnS2cuTRdYA+MNCZ/9LAEdU9T4RuRfAkKp+2bt+YGBAU6Y31s88V1ou2/M4YYJ1D2W9kdcEvKR+3tbAntsrY9cmvDzpzMGDB8uyfRbW522SxkcffbQs8/h0En3H7rleQkXP/ZTr2PXXJrdMmVVtv/iZeV0CqOaXtybXOvnVPX07NyLO0/u9+9VJYMmcOnVqyjp7O+4DcJuI7AVwW3EcBMEMpSOnGlXdDmB7UT4CYG33uxQEwXTQkRg/VQYGBrQlGvO2PEA1esuKKGxCYu86643FyRWsCM5iLIv0VhXwEhWkREIrfrIo5ommFjYbcTk3ZxlQNct54icfe5F5KS82ANiyZUtZtl6PHM22ePHismxNgCyesxoGVN87v0/r9chjZaMAve2imRyzVid0kryiGznlTV3XxfggCN5DxGQPgobQ80CYlrhnxStvdZhFFvbosiI4i5+2fRZVOf+a3YaKtwuypLzO7Ep3rhj49ttvV455JZlFVbvzKYvT7F0IAE888UTbflmxL5WTD6g+J2+nZFeYuQ0W2wFg6dKlyIGfzY4jJ9XgVXybKMPLPZgKLrL38rZWqkMnqkBKtcsV/YHT78ZbwY9f9iBoCDHZg6AhxGQPgobQc529pStZE0nutkuMNeOwjmfNRJzHnMtshgOqOrw18aRMQfZe3toB68De9sIctWe9zvh+Vvfkc7m/Vu/nMbbjyPogr4tYcyb3w44Vm9S8vO7z5s0ry3b9hI85cYbVV713wc9Wd78AxvOMy01K4ZlBvc/rzBEmftmDoCHEZA+ChtBTDzoR0ZaY5eUNy83h9pGPfKRSNzY2Vpathx6Lo6Ojo2XZitl8nuddxyYvK2an8sABVfXFM//wc1rvNDavWFMLH/MYuyYZI5qyWMzjaN8Z99/mns9NyMDnWQ86Nk1y8IsdD1ZR7HjnJg+pG8SSEs87CaZJBb/kzgMAWLBgAYAJFfX48ePhQRcETSYmexA0hJjsQdAQemp6A07rfbnmB1vHuqFNOMk6n9Wj+brLLrusLFuTFLti2n3DWE/iSCurF7LOa3UrPvZMXp55zdPZWc/lNQdPD7UmtZT5zpq1vD3n+N68LmL1X27D1rGezqY9m1SS1wSsPs/39u6Vcqu1dXUj4jy9P7VWY/vB76Wlo7doJVQ9duxY8j7xyx4EDSEmexA0hJ6L8S06EStZfGTx0IvWsiY1FtdZ1LPi/rJly5J9ZhGZy5wDzdZ5Ww5ZMY3HJLXFNOCb1Ph5vK2pPbjPnneaZ1Lj8eZ72/FgFYWjEW2dJ556efpT3xdPheqGOdpTE7x8ffzdtOZjVmvY8xA4Pa6xZXMQBDHZg6Ap9FyMb4lmnteZFV9YBEptCQT4K928Wsx1th9c54lifC+7as9tWM+y1BZPQFXkTHnCAWcmvcjpo4VFZG8rLhYrPY8/+yws/nNAixXVvbTeDI+NDZjxSKUe93aC9dQVqw7xmHjjzeNo1VQWyXmbsuHh4cp5/D3jfIvAaUuU662YrAmC4H1FTPYgaAgx2YOgIfQ8eUVLX7G6LJvKPI8uT5dl3cfqtWwe8yKhWKe0+hn30erpjKcbsu7mJcxk3cs+S8rjCkhvL+wl2LBwG3yeXd9g85odU9bh+Tpve2urb6ae034/GPteUqa3XI+2yc7lZ+Pvo12b8LYkZ5MaX2c9BXlLMLtG0vpuejp71mQXkX0A3gBwEsAJVR0RkSEA3wSwHMA+AL+uqkdTbQRB0F86EeNvVtXVqjpSHN8LYJuqrgCwrTgOgmCGMhUxfiOANUX5fkzsAfcV74LBwUEsWbIEQDUPHJDOFQZUA15YBLKeayxmewEoVpRk2FPL9iPXI83bPdVLBsHiOpuXrDegFyyRCtqw9/KehdtnEdOKldwvO6b8nrz89V7SiNSzWNOsp9ak6rx3a9UVHgNvjwAvAIpVTNs+zwV+T566YlW71nfOTXiRrKmiAB4XkR+JyKbis8WqOgYAxd9FmW0FQdAHcn/Zb1TVgyKyCMBWEXkh9wbFfw6bAP8XNQiC6SXrl11VDxZ/xwF8G8C1AA6JyBIAKP6OJ67drKojqjrieRgFQTC9TPpTKyLnAxhQ1TeK8u0A/gzAwwDuBnBf8XdLupUJBgYGSv3H/sqzKeHVV1+t1LGOw26vtg3WreokC7DHXqIF1q2sHufV8TqDF83GOnsnyQtzc4vzs9iED6wrpvRJe519z
pR+7JkKPVdUxprXuB+e2YzrrN7P2Of09Gh+Hm7TJvPYv39/WbbfW/5+814CNgFnym0c8E3B5X0nPQNYDODbxY3OAvAvqvqoiDwF4AERuQfAKwA+l9FWEAR9YtLJrqo/AfDRNp8fAbB2OjoVBEH36emK2YkTJ0rzjd36iMUt61nGeIt8ubm/vbUDLxeZt80x4yWsSEW2AVURn+/liaZWVGfR18tBzudZkZbNOilvunZtpvrIeFsqe951fJ01XTE2qYM1W7awz8xqk7c1tb1uzpw5betsH1ms9zwzvXx9KQ9LW5ciVsyCoCHEZA+ChhCTPQgaQk919pMnT5buqF7El5eBppNoJSal5+a6aLY7bsF6G5BO2GjbsDnrvf6n8NxDvWfh8fb6we13orOn+mH18tz1E+874K2lsB7tbWHtvTMv4i6VHclGdV500UVtrwGAAwcOlGVv7SB1XyDP9Ba/7EHQEPsQbwQAAAmTSURBVGKyB0FD6KkYPzAwcIa3VotcETY3gir3Oq8NK3KmRCXrzcQim5dEw3pqpaLZPC85S2r7XyuCcxte4su6KlTKZOfdy0vwyed5pk1vzwEW6a13Gm/1bN8zj6PdEprHZ9Gi07Fg9t2+/PLLZdkmzGQztLftl/e9bSW94O3LLPHLHgQNISZ7EDSEvsWcduJ9xeKMFY9SbXpeUOyl5K0w524RZFUT3mFz7969lToW4TxxNCWO2zpLaiXdXsNteh6LXt623J13PRXNaz/lMWbVKy/XP5/LY2A93C6++OK27QHV74sdq927d5dlVg04/7vtv1UhcvPk8bNYa8LHP/5xAMD4eNvgUwDxyx4EjSEmexA0hJjsQdAQeq6zt3QSz0vOkvJ462RrXda7ciPWvL3NvD2+WC+33nWsU3m6bK7pLdfcaPGSL/J1uXvHeVGGXjJRfk6rD6fMoJ7J0vYjFfVm87qzDmzNd2yys/1fuHBhWb7sssvK8s6dOyvnsQnWS/Dpfc7Pfc0111TqWt8z950na4IgeF8Rkz0IGkLPxfiWaOaZguqKrUxuPvXc4Bl7HYucNt8Yi5w2SYdnbvPUi1S/PNGazUueh5u3NZQXTJNqzx57/fUCbVLfCWvWYpHcjjebubxttnkMvAQp1mTHIv4LL5xOumxNY4w3BowdU37uq6++ulL3sY99DADw3e9+N33fZE0QBO8rYrIHQUOIyR4EDaFv7rLWfJJKlGjx9O3cTSi89YFcfZ71Ot4fDqg+m92PztunLZVD3Vun8Fwqc5NF2nHjc3OjDO2z8Bh7z8zY9Qwex9WrV7dtG6ia1zwXZzaJ2mfhyEUb2eZFyx05cqRtv7zvlfcuvPPWrj2dzPnyyy+v1A0PDwPw94eLX/YgaAgx2YOgIfRUjFfVpBjnie62jRaeN1ZubrZOIrlSOd/tM7FY6YnquXRiikz13xPVLSkzlLddVZ3nstfZ9tevX1+W2VTGiSCAqpnLJoZIbflkRfWrrrqqLNsxfOmll8oyi+22/94W1h58LqsyN9xwQ+U89tb7xCc+UanzTH0tsn7ZRWSeiPy7iLwgIrtF5AYRGRKRrSKyt/g7P6etIAj6Q64Y/zcAHlXVKzCxFdRuAPcC2KaqKwBsK46DIJih5OziegGAXwPwWwCgqu8CeFdENgJYU5x2P4DtAL6S0R6AM8VIz7MsFy9Ygqm7CyqL8akAC1vnBZnk9tHD67/XPq/aeh50Xk40bt/m2mPxme9lxU0e05tuuinZx+eee64sWwsHq1Q2hTO3z951VkTmdM5eAoi5c+dWjg8ePFiWPdUod0fdxYsXl+UVK1ZUzrv55pvL8ujoaKVu3bp1AM58fibnl/0yAIcB/KOIPC0if19s3bxYVccAoPi7yGskCIL+kjPZzwLwqwD+TlWvAfAmOhDZRWSTiOwQkR11F3GCIJg6OZN9FMCoqv6wOP53TEz+QyKyBACKv21lH1XdrKojqjqSG+gRBEH3ydmf/VUR2S8iK1V1Dyb2ZP9x8e9uAPcVf7d0cuPcZI5t+lOWvcg577rc/3SsNxLrobl57nMTFXRCKvrOwn20Y8M69nnnnVep4zZTCRsBX0dNjZW91/Lly8uy1TefeuqpssyRhfb7MTQ0VJbteB89erQst7zMAODVV1+tnMe6t4X7b012qX0Q7BoUf5fsNbwOsGbNmrLc0sNb8DvbsGFDpe6WW24BcGaCUybXGPi7AL4hImcD+AmA38aEVPCAiNwD4BUAn8tsKwiCPpA12VV1J4CRNlVr23wWBMEMpG/JK+qek8olbo89zzIvMMMTkdmLKzdgxsLinBc8wnj5+uw1qZxu9llsEoZUP/iZrYnOayNlplyyZEnlPM7bZkVQNtNxGyz6A9XxsEFJ3Ma8efPKshXbPTOlt4tr6jtnx4oTbFj157Of/WxZXrp0aVm2JkbeXurw4cOVuqeffhrAJEkzkjVBELyviMkeBA0hJnsQNIS+Ja+oi2f+yk2O6On2rJPZOtbrvKQOuesKVo9O6Y2dmBj5ub3IPK6zemgq4aTFywfPsK7JZaBqGrMRa6x/8jbYdotsbsOaxpYtW1aWeQys+YvXH6zem5uUlMs2Lz23f+ONN1bqeB2Dn81LOPm1r32tUtcy9bnbeSdrgiB4XxGTPQgagvTSX11EDgN4GcACAD/r2Y3TRD+qRD+qzIR+dNqHD6rqwnYVPZ3s5U0ngmLaOelEP6If0Y9p6kOI8UHQEGKyB0FD6Ndk39yn+1qiH1WiH1VmQj+61oe+6OxBEPSeEOODoCH0dLKLyDoR2SMiL4pIz7LRisjXRWRcRJ6nz3qeCltELhGR/yrSce8SkS/1oy8ico6I/I+IPFP040/70Q/qz6wiv+Ej/eqHiOwTkedEZKeI7OhjP6YtbXvPJruIzALwtwDuAHAVgC+IyFX+VV3jnwCsM5/1IxX2CQB/oKpXArgewBeLMeh1X94BcIuqfhTAagDrROT6PvSjxZcwkZ68Rb/6cbOqriZTVz/6MX1p21W1J/8A3ADgMTr+KoCv9vD+ywE8T8d7ACwpyksA7OlVX6gPWwDc1s++ADgPwP8CuK4f/QAwXHyBbwHwSL/eDYB9ABaYz3raDwAXAPgpirW0bvejl2L8UgD76Xi0+Kxf9DUVtogsB3ANgB/2oy+F6LwTE4lCt+pEQtF+jMlfA/gyAI426Uc/FMDjIvIjEdnUp35Ma9r2Xk72dmFRjTQFiMhsAA8C+D1VfX2y86cDVT2pqqsx8ct6rYis6nUfRGQDgHFV/VGv792GG1X1VzGhZn5RRH6tD32YUtr2yejlZB8FcAkdDwNIp/ScfrJSYXcbERnExET/hqo+1M++AICqvoaJ3XzW9aEfNwK4S0T2Afg3ALeIyD/3oR9Q1YPF33EA3wZwbR/6MaW07ZPRy8n+FIAVInJpkaX28wAe7uH9LQ9jIgU2UCMVdh1kIuj7HwDsVtW/6ldfRGShiMwryucCuBXAC73uh6p+VVWHVXU5Jr4P31PV3+h1P0TkfBGZ0yoDuB3A873uh6q+CmC/iKwsPmqlbe9OP6Z74cMsNKwH
8H8AXgLwxz28778CGANwHBP/e94D4EJMLAztLf4O9aAfn8CE6vIsgJ3Fv/W97guAXwHwdNGP5wH8SfF5z8eE+rQGpxfoej0elwF4pvi3q/Xd7NN3ZDWAHcW7+Q8A87vVj/CgC4KGEB50QdAQYrIHQUOIyR4EDSEmexA0hJjsQdAQYrIHQUOIyR4EDSEmexA0hP8HZNFz6BgR2eYAAAAASUVORK5CYII=\n",
111 | "text/plain": [
112 | ""
113 | ]
114 | },
115 | "metadata": {
116 | "needs_background": "light"
117 | },
118 | "output_type": "display_data"
119 | }
120 | ],
121 | "source": [
122 | "remote_file = urlopen(DATA_NPZ)\n",
123 | "npz = BytesIO(remote_file.read())\n",
124 | "data = np.load(npz) # load all the data from the archive\n",
125 | "\n",
126 | "images = data[\"images\"] # images in BHW array order\n",
127 | "segs = data[\"segs\"] # segmentations in BHW array order\n",
128 | "case_indices = data[\"caseIndices\"] # the indices in `images` for each case\n",
129 | "\n",
130 | "images = images.astype(np.float32) / images.max() # normalize images\n",
131 | "\n",
132 | "print(images.shape, segs.shape)\n",
133 | "plt.imshow(images[13] + segs[13] * 0.25, cmap=\"gray\") # show image 13 with segmentation"
134 | ]
135 | },
136 | {
137 | "cell_type": "markdown",
138 | "metadata": {
139 | "colab_type": "text",
140 | "id": "fnSoc-rujRjq"
141 | },
142 | "source": [
143 | "We will split our data into a training and validation set by keeping the last 6 cases as the latter:"
144 | ]
145 | },
146 | {
147 | "cell_type": "code",
148 | "execution_count": 3,
149 | "metadata": {
150 | "colab": {},
151 | "colab_type": "code",
152 | "id": "vvIeP4DGjbkT"
153 | },
154 | "outputs": [],
155 | "source": [
156 | "test_index = case_indices[-6, 0] # keep the last 6 cases for testing\n",
157 | "\n",
158 | "# divide the images, segmentations, and categories into train/test sets\n",
159 | "train_images, train_segs = images[:test_index], segs[:test_index]\n",
160 | "test_images, test_segs = images[test_index:], segs[test_index:]"
161 | ]
162 | },
163 | {
164 | "cell_type": "markdown",
165 | "metadata": {
166 | "colab_type": "text",
167 | "id": "3qyUcjrljrIs"
168 | },
169 | "source": [
170 | "We can now create a MONAI data loading object to compose batches during training, and another for validation:"
171 | ]
172 | },
173 | {
174 | "cell_type": "code",
175 | "execution_count": 10,
176 | "metadata": {
177 | "colab": {
178 | "base_uri": "https://localhost:8080/",
179 | "height": 35
180 | },
181 | "colab_type": "code",
182 | "id": "xU6A9TNLfEUu",
183 | "outputId": "1cc3348e-57b8-4eb4-a964-0b9239afa988"
184 | },
185 | "outputs": [
186 | {
187 | "name": "stderr",
188 | "output_type": "stream",
189 | "text": [
190 | "Load and cache transformed data: 100%|██████████| 368/368 [00:00<00:00, 10553.22it/s]\n"
191 | ]
192 | },
193 | {
194 | "name": "stdout",
195 | "output_type": "stream",
196 | "text": [
197 | "torch.Size([300, 1, 64, 64]) tensor(0.) tensor(1.) torch.Size([300, 1, 64, 64])\n"
198 | ]
199 | },
200 | {
201 | "data": {
202 | "text/plain": [
203 | ""
204 | ]
205 | },
206 | "execution_count": 10,
207 | "metadata": {},
208 | "output_type": "execute_result"
209 | },
210 | {
211 | "data": {
212 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD7CAYAAACscuKmAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO19baxd1Zne83JtB4cAtsNHHNvggZDMRElDIpQwSjViIEzMp5UIKiJNxVRI/pNWGXWqgbRSpalUiarSaPqjqmR10kGaNBPCR3CAwFjuoKrSKBPSkAkEEwPlw3CNwYGEQALxZfXHPWfz7Ie7nrvu9b3nONnvI13dvc/aZ+13r73X2e+z3q8opSCRSPzm44RpC5BIJCaDnOyJxECQkz2RGAhysicSA0FO9kRiIMjJnkgMBMc02SNiR0Q8FhGPR8RNKyVUIpFYecRy7ewRMQPgxwAuBXAQwHcBfKGU8qOVEy+RSKwU1hzDdz8J4PFSypMAEBF/A2AngOpkX7NmTVm3bh0A4IQT+krF3Nxct/2rX/2qetKI6La1j6NHj3bb4/OMsXbt2m57Zmam2r8D/zC+9dZbC36ucum5VGYGX5sDH+e+w+Pxi1/8otfG4619sIx8bfwd3efxcNDr532Vo/VF5O7FSjuNLVfG5fa/nHOVUhbs5Fgm+xYAz9L+QQCfcl9Yt24dPvjBDwIATjzxxF7ba6+91m3Pzs722vgiedK+613v6h338ssvvy3cli29tm3btnXbJ510Urftbp628UP185//vNt+4403ese95z3v6bY3bNjQa1u/fj1q4B8G96PAx+mPGsv84osvdts//OEPe8e98sorVZl4XN98881u+2c/+1nvuFdffbXb1h8THiuWSe8Z3wu+t3puNwl++ctfdtt6L1iO1h8kResPEv/4uZeZvgDcC4zBP96KsRzuB+FYJvtCo/+OM0XELgC7gHfezEQiMTkcy2Q/CGAb7W8F8LweVErZDWA3AKxfv76Mf+H4zQL035SqLvKb59RTT+22+Q0K9N8ur7/+eq+N30Jr1qxZcFuhb4Ka2qq/1K5//uXWX/Gaeq5vE/7RVA2Jz8d0aOPGjb3j+C2t481vVL4vrH0BdY1L++Q3kr55+dq0D77vrBEsRVVnOfiZcJqCeym5c3Gbo3at39M+WObl0IdjWY3/LoDzIuK3ImIdgOsA7DmG/hKJxCpi2W/2UsrRiPiXAO4HMAPgK6WUR1ZMskQisaI4FjUepZR7Ady7QrIkEolVxDFN9qVibm6u4+rKqZnXKRdnvsk8TnkLtzke6lZNnRmHOTC3KS9nDq983pkAa6vFzlylq/EsC5/LrR1oHzWOravBzCHd+objodynjsdPf/rTBeXV4/jeOp7rOO9y22rPgY4Vy9xqXjueOHsikfg1Qk72RGIgmKgaHxGdOuZMRqecckqv7d3vfne37UwYrP6rAwjTBuesweqXqmJ87pNPPrnbVrWSr02dSJhqONXXgb+nMtZojprNWEYd75oaz84repxzWGlV912bnvtYoffdyejU7lrbUrw0l6OeO4evGvLNnkgMBDnZE4mBICd7IjEQTJSzr127Fu9///sBvJNrMlfRwAzmPy6ggHm0clTmfOw663io9n/aaad128xzlWuzyUtdL5n3O1NWqzunrn3wesTBgwe7bTV1nnnmmd02B6MAwEsvvdRtOy7I8rv1hlYXU3cuZ5JyaI0kdN9x52vt3wVYtcLJMX6u3Bjmmz2RGAhysicSA8FE1fgTTjihU7VVhWU1kE1tDs5Uo1SAo+yOHDnSk4nBXnKnn356r41j09lUqB5oNZl0X7/n+mHwuVltB4D9+/d324888naoAkcLAsB73/veblspD0cPOvXTqdY1ldOZvHSs+BlZiaQOrZ5wy/VcczK2UhSG87DU/lrGIN/sicRAkJM9kRgIJu5BN1Z11GvLrbKz2spqt6qw3McZZ5zRa2OVkFV17cMll2A4DzqWQ1fjeV/Vdva2YzVNLQa8Wv7444/32p544olum2nH9u3be8cxzVE1vuZBp7kBXdAQw1kZWgNcWuFoQqsav5RUZQweD312nNWh1qdLgLGcFFv5Zk8kBoKc7InEQJCTPZEYCCbK2YG3+YlyWY5Yc6l2HX9ijqfmO+6TOapyn1qCCv0em670XM5kVEuiqGCPt0OHDvXaONW2pnA+99xzu+33ve993baa3vha1AzamryC0Woac5xd21wyiJZzLaWtlitf5Wrl82pGbDXftaatbuX5vb6rLYlE4jcKOdkTiYFgomr80aNHO+81VSvZFKfBHazCsRlKVRZWi1XN4WAP1wert5wzHeh74TEtUDNfa74xNalxRRtW3dnUBvQpENMJoB6so+DrVhNXzTS5HC+wxY5rNVetNmoVbAB/P2tJL5ZSEqwGpQLO9NYSXJNv9kRiIMjJnkgMBDnZE4mBYKKc/c0338Szz84Xft20aVOv7cILL+y21Sz3k5/8pNt29dw4z7jyYeZQrVFqLsc583eulgr0OfbZZ5/da3NJMVl+Xi/QPPpsUtNxrLnxumQeWn+tVjJ7JZI4KFzSi5oZSr/jTFIrUSHV8fkaXKJRt5bC/et94H19vsf30LkYL/pmj4ivRMThiHiYPtsUEXsj4sDo/0bXRyKRmD5a1Pi/ArBDPrsJwL5SynkA9o32E4nEcYxF1fhSyv+OiO3y8U4AF422bwHwAIAbF+trbm6uU1WffPLJXhur6uwFBvRVIFafVTVlFZZVYgB4+umnu202wz3zzDO94w4cONBtn3XWWb02Vr+4f85pB/SjzVRGvk5VxdjkuG3b29WwOV8c0E+qwdcM9FV37t+VfdY+2Cxay90H+Bx0rckrnDmvVipLx81FKtZKcal8LL+jAi5PP9MtfgaAPt3S54rBcql35OHDhxfcBt6mjjUKBix/ge7MUsrsSLhZAGcscnwikZgyVn2BLiJ2Adi12udJJBIey53sL0TE5lLKbERsBnC4dmApZTeA3QBwwgknlLHaxt5iAPDjH/+429YV7JpnkvN0Ui+8rVu3dtusWquKzCqWqmwsM/evK+LcpmoVt+m5eZ+98nS1nK9b22pqsVogWOVU+Vl9ZJVZx8OVZGrNH8dQ2Xmf5deKtNymlgtWz9l6o/elVUZ9rvg+jdOkA30aBvQpoFKI5557rtseW6uAfq5E4J10a6H+9XlgLFeN3wPg+tH29QDuWmY/iURiQmgxvX0NwN8D+FBEHIyIGwDcDODSiDgA4NLRfiKROI7Rshr/hUrTJSssSyKRWEVMPHnFmFerqYZ5CydnAPpctmZK0T7VC4/NbczX1ETCnEw5KXtFOXMPczK9TuaNapZjsx+bJl1iSs2Pz8fytbjEl8oFmcPXSl0Dy8+T3vqdWi53vWaWX8eKzYV8L/S+8LOkcvBYabQmjzGbVbn0FtB/Rnj9COiP9yc+8YluW8ty8bXwfAHeXmdRk3NPhmpLIpH4jUJO9kRiIJha3ng1P7CKot5BrDqxOqT
BBqx+uXzwbJ7RYBrngcQqYi2hhrZp/6y6u3O7AA7eV1MQm6V4WykPm4I0MIPVYt52efq1jVVfl9TBBZnUaIKq6nxuVWNr5kE137mxYujzwedmc9sHPvCB3nHs9dhq5tMx5edd6cR4/jAVVOSbPZEYCHKyJxIDQU72RGIgmChnL6V0nE25Gydy0ASL7MLKZhflsjXXSKDPtZjzKcdjc4dyplrCCja5AH2euHFjP9Sf5VeXSl5LYA7peKKakHjtgyP61JzkXEw5iSXL79xl1U2T7w1/T/toTYjBaxPKeXkdRO9Zzb3arQ/omgA/EzpWvM/19NScyfdJ15r4XrO8LtlGzd3XfSff7InEQJCTPZEYCCbuQTdWU1T9ZHVa1WJWWVhtUpXQmUxYDWRVz6l92j9TDZZfVTZW09T0w/njnCcVy6tyOPlZXWdvLPVKfP7557ttjUBk7yw1UTFcMgiWi49TNdN52tX612t2+eNqNKG1nJTKwSY0oK+6szemU9W1jffd88dUQ+/ZeI5k+adEIpGTPZEYCibuQVdbLXReZ6y2uZVd3lf1k9uYFuhxnKvuscce67Wxms1llnRll1V1TmgA9FfjddWXVTBe3VZ6wmOo8rNazN/TvH4slwZtcCIRtoxoMA3DVc3l8dZ7y9Br4T6dGu+Co/h7PDYuhbWqwtynquB8D51nIx+n41izEuhzxZSwZqFJNT6RSORkTySGgpzsicRAMHEPujF3Vt7CfE3NVbzP/El5Vy1qDOjzNZdAgs0nn/rUp6pysKeaXgsnIWRuD/S5uEsWuVxPqlr+c+XUbMLcsmVLr42vjTm73hf2LNMoLJaLPQ9dfnlN1sD8mMdK+TA/E7qOw3JwIk19PlxCEN7X77Fcbi2FZXTrLLztTK61ZKsuoi7f7InEQJCTPZEYCCbuQTdWdVyFVJfwgU0Y6n1VCygA6rnQ9ThW57R/Vm+5yqoGu7BaX6u2qdtAX9Xj61TVl1U1Z2pxbawSqlrJKjmrzFr+ieVS1ZrHruYhpscpXeFxZDqk5jX2bHQlqlwlX6Y1SidYjdfrZGi5JgaPsfMA5DalaLXnA3ibfjov0nyzJxIDQU72RGIgyMmeSAwEE3eXHXMKF/2knKbm3qq8hTke8zigb0JijqQmKY6+Y5MR0F9LYM6ufbC8moiD+bZyce6Hr0XHqsZDHXTtgM/dmgRS1zBYDm1zZqiaHM5s5urK8T3T9Z5Wzs5j7xJUaIQj83nm7Nq/S1DKzxIn/9Tnz+XHHz8vx2R6i4htEfF3EfFoRDwSEV8afb4pIvZGxIHR/42L9ZVIJKaHFjX+KIA/KaX8DoALAXwxIj4M4CYA+0op5wHYN9pPJBLHKVpqvc0CmB1tvxoRjwLYAmAngItGh90C4AEANzb0B+CdJgKX+7tWqlZNHawyq/pcywfv+tD89ax+sZqqqiP3obKzHKo+11QwVYNbI7ZqEV+Ajx5ktdKVyK6ZjIC+iYpVTr3v3Ie21cpF67hxIgdnpmSZ9L44GV3iiRqtVC88R99qdEg9M/m+1HIKumQgS1qgi4jtAD4O4DsAzhz9EIx/EM6ofzORSEwbzQt0EfEeALcD+ONSys9aq1pExC4Au0bby5ExkUisAJre7BGxFvMT/aullDtGH78QEZtH7ZsBHF7ou6WU3aWUC0opF+RkTySmh0Xf7DE/Q/8SwKOllD+npj0Argdw8+j/XS0nHPMhxy1cDnI2qXHudqDONYE+V+Y+XG545eLMoZwLKHN2vRbnBsuy8PiomcVlM6kdp+PNcum6BbucOnMSj4/L7sLutxplyPfWcVTnRsrQ9Y1aZJ4rda19ML9XV9raGOh95zYdb75P/GzqmPK+ugWP+3CJNFvU+E8D+OcAfhgRD40++7eYn+S3RsQNAJ4BcG1DX4lEYkpoWY3/PwBqr49LVlacRCKxWpha8gqn9nHyB6BfxojVF6cGq6pXM92oWsnql6qVXBbJJTTga1FvKZdAoVY2WE1BrWsfrXndVSVkKuOi0nislA4xXMQaj6N6InLiTh43lbc1itElzGRoZBvLpfeTnx/u3+Xz12eT+3QUjespKMUcy+woTvrGJxIDQU72RGIgmHjyirE6o+oWq0AusIS9pVwCDF19ZpXfeaC5NueB1QruX1UuVlVrKj3QXzl2q688Bro6zGqglttiNZ6DQFR95vFQOsT7R44cWbA/oE8NtI3HgLdVva1VjAX6480yqSccX4uq8Xw+9e7k55HpigZiufJm/OyzxUDz+jHlqeVwzCquiUQiJ3siMRTkZE8kBoKplWxWbsFJG9nEBdS5lppBuE/l3spZx3ARZcr7a2YRl+RvsfMxeA2C+bzyRD63MwUxL9f1Ad5XnlurX6ayu5p5PHYvvPACauBkDcqVa/X52LMO6K8luHvGz456A3Kb3s9abniVuSYv4Et8M//mNl27Ys6u/Y+v2z1f+WZPJAaCnOyJxEAwcQ+6sbqhZoWtW7d226rO1cwnqs7V1E+gr/bUyiwpXKkfl2SAv6dysIrozCQsl5px2HSjanwtKcVSylvzeLM50JXbcuBxUzlcG48rt7EpD/A52fmZ4OfKqbvOJOrMoDXZ9dzaBx/Lx+nzwc9VTf40vSUSiZzsicRQkJM9kRgIJsrZZ2ZmuuD/c889t9fGkW5qJqsl61POzrzOJQZ0UWPMp5T/1SK7lJ/xvuPzCubfzBs1womTdih3Y/OM4288jjreLilmDS6BKK9v6HEshyvVzfday2DzGLhEoyyT3ksnB8uszxXfzxr3Vhmdqdat99SOA9ruU77ZE4mBICd7IjEQTFSNX79+PT760Y8CALZt29Zrc/m12NTC6pxGWrkc5Kx+sarrTDDOjMPeTa1q2UL7DFZ3XdIIll89xmqRbo6SaB+1XPFqkuIoNadG8vjo9btchDVzqYtMVPAzwc+Ljinvu8Qq+rzwc+VKk7lnpLXcFo+H0oQWM2i+2ROJgSAneyIxEExUjV+3bh3OPvtsAO908mevMFXjOUGAq+bJqpiqQJyQgQNLdHWVoWmDmXqwyqYqVOsqamsaaJeb7emnn+61sarqEnbUVEfd5+usJUxYqI977723em7GVVdd1W2rVaAWlDQ7O1vtr9UCoXClsmoltXTfJdFwHot8f7kPTTnN93rLli29tvHYuWQm+WZPJAaCnOyJxECQkz2RGAgmytkjouN96v3GCQ40IR+bLTi4X80PDszz2CyinJ3lOuecc3pt/D1X+sh5S7nEEzXzict3rokQeOzuvvvu6veYv1555ZW9NpZ/79693baOFfNLF2XozELM7V3Ch89+9rPd9lLMmbU1Ej2XrhMx2EypEYj8DG7YsKHbVu7MvHzTpk29Nu6T5eB1JqBfP0HXasZQczRj0Td7RJwYEf8QET+IiEci4s9Gn2+KiL0RcWD0f+NifSUSiemhRY1/A8DFpZSPATgfwI6IuBDATQD2lVLOA7BvtJ9IJI5TtNR6KwDGuu3a0V8BsBPARaPPbwHwAIAbXV9Hjx7tAhVUbTp06FD1e+wx5kxvNW8mANi+fXu37RIJsK
qkfbDqzmYip1Y6c4+aVvhYVt2d9xWrjkBf5eTrdHJ8+9vf7u3XaI5T49U8yHK4XHg105W23XPPPd02q/SA99CrQamXCwxySVH4eeRn2HkUPvHEE9Vzu5x/zgw6niPuWWmtzz4zquB6GMDeUsp3AJxZSpkFgNH/M1wfiURiumia7KWUuVLK+QC2AvhkRHyk9QQRsSsiHoyIB13xv0QisbpYkumtlPIK5tX1HQBeiIjNADD6f7jynd2llAtKKRfU8nUlEonVx6KcPSJOB/CrUsorEbEewGcA/CcAewBcD+Dm0f+7FuvrjTfewFNPPQWg7wIL9LmGyx/uTFe870xeNY6kULdMzmfPnEl5oqs5x5xV5Xf8tQatj3bHHXd02y6ZI2tZKj//KF999dXdtt4Xlxxxz549C/bx9a9/vXcc3wvHN2t13wDv6lrj8Pq5M+Nyn/rC4nHUxBkMt2bCyVc5iQvXUgD6z5y6co/Xbp577rnqeVrs7JsB3BIRM5jXBG4tpdwdEX8P4NaIuAHAMwCubegrkUhMCS2r8f8I4OMLfH4EwCWrIVQikVh5TNSDbm5urktEoaoYq0cuJxpjKSWba7nCVC1jdUs9+Vjt5sgzl+de1WeXb75mrlEV+b777uu2r7jiil7bddddt2B/rSYp/Z5LUMH3UGXka2PT6TXXXNM7jj0Wb7/99qocfK7777+/d9yOHTu67daINZcr35kH9XmpjavKwRGTH/lIf3178+bNC37PlZ9Ws+1Yrc+88YlEIid7IjEUTLyK6xiqDrnUybWAfFYPgX5FUOft5YI0nDqnav0YrNKrXKrO1bzkgL666CwGroQUn9t5cfGYOtWPz6X0yiVk4KQUzvOL9z/3uc/12nj877zzzqq8HExz2WWX9dr4WL4WXfl3JcFYDvX8rAW46DPMdEWr2vK95udWKQKfS5/NMT3O5BWJRCIneyIxFORkTyQGgolz9jF/c4kblI+wt9Dpp5/ebWvElyudUyvvo5yauRYnpgT6iQHYW0pNb63ln1pL+KiMmmyCUfNC03PxcS53u4so43F0XJHvpx7HfN7lwOc1AObvCjemvE6k98WVeKolqAD6Xm58nJaVHnuOAsDDDz/ca+N1FpechY/T6xx7pOpaASPf7InEQJCTPZEYCCauxo9VMw0M4IACVWe5aidvq6rugjtYXWT1SM/lvOvY9FErSQX0VT1VxVhmHYOaGv+tb32rt79z585u26nnLt85f09V/5ocLmpRPbpq16n0rbUMlfucz6XjUUtKoediqqgBKGeddVa3vXXr1ur3XKVZpgZqemNznqr/DJfzbzyOaXpLJBI52ROJoSAneyIxEEycs4/5ljP3aC50Njm4BAcM5cM1l1CX01yTKDL/Zm6oubo5D7hydl47aOXRbHbS4xS10sN6na2mN1c/z5nN+B66e8amIjUb1cyUzp3VRTu6RBkuQpCfA00WUivdrWsTzPv1+T58+O0kT2zu1XUQXt/QFG9j+Z2LdL7ZE4mBICd7IjEQTE2NdwkCWj2HnBrcasbRaK1WlZZNK9oHmz+0hA+rcI5qcCSXJqhozSnPcHnyFLVIMedB5zwWWV6lRvwcaB98LzhvvEbfOdMbo9V7UekEe0tqpCWbWblPHV9+prXcMuegc6XLeV+p41h+fd4Y+WZPJAaCnOyJxEAwtSquGmTCqp6q0jXPLVXNl7NSr6o099Hq4aYrtCy/rpryCqursnrppZd220oTnOpbU2Ndvj79Dnt7uTF1udlq1g/n8afXyWPnAmF4vNXbkL/nVHzuQ9VnTs/sPPT4OXBJUTjJCtCnAtyH0glX4XXcpmPIyDd7IjEQ5GRPJAaCnOyJxEAwUc5eSum4jCtz7MDcR7m9zZld8QRzpXsdltsH8zCNluMxYPOSJlpwXm0106Sagrh/tybgIueYp7eaQZ2HlzOHtZpEW9dt1ITG5jC9L+wRqSXB+N64EtlspnTRlNyH3ndu0/JPYw6/f/9+1ND8Zh+Vbf5+RNw92t8UEXsj4sDo/8bF+kgkEtPDUtT4LwF4lPZvArCvlHIegH2j/UQicZyiSY2PiK0ArgDwHwH869HHOwFcNNq+BfOlnG90/czNzXWePy4QwSWlcAkUauqn7rO6uBQPulp/qrK53PN33fV2sVsNluBrY7Mcl3QC6gkqgL78jvI4dZpldvnrXVAPtzkzn1PxWwNhXFmnGp3QBBVcodfVNFDPNfZ4Y5OaegqyjHrfaxRW5wjLpfdzbLJzNKb1zf4XAP4UAJ/hzFLKLACM/p+x0BcTicTxgUUne0RcCeBwKeV7yzlBROyKiAcj4kH3NkkkEquLFjX+0wCujojLAZwI4JSI+GsAL0TE5lLKbERsBnB4oS+XUnYD2A0AMzMzOdsTiSmhpT77lwF8GQAi4iIA/6aU8ocR8Z8BXA/g5tH/u6qdjBARHadQ/se8Ts1EzNmZk+hxLpKrtY6aM9/VOKT7jnL2z3zmM922unbWXHW1D7fmUHMF1nUFl3CS4fi2cy1mftkabebyxnMUoI6HK3PMY3z11VcvKB/wzkhLBvN0NsMBfbdvdmF1pZ01ao/Hju/nUtaTxueziU2qLYvjZgCXRsQBAJeO9hOJxHGKJTnVlFIewPyqO0opRwBcsvIiJRKJ1cDUot5c8grNvVULyFcPo1p/ut+a7MD1UTNxAX1VUs0srMJpRBzL5dTKVhMjy+jy6DuPP1fm2Hn5LddrjuG891rBiT+c2dbRMheRyfeQ1X297+55YW++1vuyHKRvfCIxEORkTyQGgomq8TMzM53nEuf1AvrqkMsBxtCKmg6tSR2c6lRTi13Qg14Le9CpWnzttdd225yrTquKMpzlojUwyF2zW7V3/desK0pduM2tNl9++eXd9je/+c1em1vR37NnT7d9zTXXVGXnMdB75kqTsdcc3zO9TqY8+iyyGs/HKZVzqcHH8mcq6UQikZM9kRgKcrInEgPBxDn7ODpHuSaXqlUPI/ZScvySA/odn1SZamg19+i1sAlGuZszh7nILoa7tprpxvWnY1Dj6a2RhEB/THg8lA87UySvb/C4uQg+lfHKK6/stpkDa/IKhka2sYyaXFSTR9b6YOgaDPPvWrSgQsdqvE7k5ke+2ROJgSAneyIxEExUjX/rrbc6lUjVKN5XVYTVHM4Ppurcpk2bum3N0VUrmaSqkjPj8LEuoQGb3vRaOBjjgQce6LVx0AYnrHC59hwN4e85s5lL+NCaB07VZx4TvmfqHXn77bd320rfaufTzy+77LJu2wX8uICcl19+udtW7zdW/7UCa61yqwsMcrndXQkpZ54e541PNT6RSORkTySGgpzsicRAMPGotxpfYROG8kvmJ8zrlCszX1HTBPMux1cZyplqbrDKQ51pzJmJaqYWZ/Jy4GtzNeG0Px475xbsIrl4rHj7tttuqx7nrovbmKPrud36BrepC7Z7DmpRacA73VZrx7n+Ga50NNd603p042NtKe4mCRKJxK89crInEgPBxNX4sYru8plpzm1WxXjbmWo04QWb3lilUjm4f1XRWOVkd
Umjk1jt01JCLi9czZtsKaWyarn2lkJXaqqgS9yg5irug6/LJS1xnnErkQyDx4M9NoG+Z9ypp57aa3Om2holdOOtFJP3eXxcZGjNM9OWyaq2JBKJ3yjkZE8kBoKJqvFzc3NdUEttFRPwaqtbGeXVeVe1dDmJG4C+OsfBDKrGtyaNUBmZarhVVW5Ty0VNjXOegs5j0anMNY9C3WeVc8eOHb3jasEuKhcHtKhMPKZOXpdyulaNVY91q+wutTZbbNR6U7MwqRx8Lg3IGY/BoUOHUEO+2ROJgSAneyIxEORkTyQGgolz9nH5HOVMzKc0BznzY2c2Y57oTDy87ZL6Oc7uctbXvgN4ExJzW+Z17lytnlqOyzoTpvOgc5ydr4UTRLr1AQX3z99z5jXX5jg7f09l4nUiXY+pmRVd0hLtg/n3aaedtqBMKnPNI9ImNqm29Dt6CsCrAOYAHC2lXBARmwB8HcB2ANiIUoEAAAvcSURBVE8B+GellJdrfSQSieliKWr875dSzi+lXDDavwnAvlLKeQD2jfYTicRximNR43cCuGi0fQvma8Dd6L5QSulUEVXnWN11yRRc0INT8bnNBaOwGuQqfbryTCyXepbVzEkAcM8993Tb3/jGN7ptzicP+AQHfJ0sv44Vj7erqOs+r6nZej6XC4/blPLo+NTQqrq77/C+o01KeVoDYVzwVS1voI6HywfYUiqq9c1eAPxtRHwvInaNPjuzlDI7OvEsgDMa+0okElNA65v906WU5yPiDAB7I2J/6wlGPw67gPaMr4lEYuXRNPtKKc+P/h8GcCeATwJ4ISI2A8Do/+HKd3eXUi4YLeqtjNSJRGLJWPTNHhEnATihlPLqaPsPAPwHAHsAXA/g5tH/u+q9dH1Vo94cD61xbMcTldPUXF1dzS/l7LWSv8rfOdJNExAwH1T5a+sRd9xxR++4nTt3LiiHwnE8l3iCwRzVmUu5ppq2sRlKz1VLLgG059FnLDffPrupKld2HJ6PdWWfa5GbDs40q2PVEvXWosafCeDO0YWsAfA/Syn3RcR3AdwaETcAeAbAtaaPRCIxZSw62UspTwL42AKfHwFwyWoIlUgkVh4T9aBjuJJDLvrJqepO7at5HzkvJbfGwOYSNb9o4gwGq/zqZVVTpzVKis1yOo6f//znu21XMomh8tfKLeuY3n333dX+a6WYVYW96qqrqnK0qu4M7Z9VcKc+u6QirWZKlyyE77ujQ+ytp2WinEm3Bbk8nkgMBDnZE4mBICd7IjEQTJyzj3mwM70pL6pFYWkfLmKN+2Bzkh7n3BVbwd9Tc13NBKjndi69jkN+7Wtf67bZ3VQ5JPNtBcvhsta49Y3a2oozYznTG8OZtXS8+XytdeuUU3MC1FaTmh7H6zMuGSVz8RdffLHXxtFxtTUvt86Ub/ZEYiDIyZ5IDASxHPPGcrF27dqyceNGAO0lkoC+WunMZs67rqbCuYQMKqOLqmO4BIgutzgfy15zLhECl4AGgHvvvbdJ3pYoKT3OmToVfKxL+nHxxRd32y6JhqNGrR5pfG4183GbJnPcsGFD9dy1+6n3luVXFbz23Lpc/DUasn//frz22msL3ph8sycSA0FO9kRiIJioGr9mzZoyVolc2aXWVV9XmdSVEnKrsi45Rq0/lcOtlrskHdwne825JB2qEtZynOt482r8FVdc0WurqfiO1mgwEKvFHAykgUEsr9KVGk1QKuCe4VowkEvmoao6W03Yww3oX7cbb5b5pJNO6rXx+VypKQ6w0rEaH3vgwAG8/vrrqcYnEkNGTvZEYiDIyZ5IDAQT5+ynnHIKgKWZtWpebeqBVvsOUE/CUEsCsBhsaVzjScUyt2buUT7MfagcbFLiMVWe28qVebz1XHxtrj4a9//qq6/2jmMO73h0a2KS1mSOrpaeroMwp1bOXqtpoOCx0j6Yw7O3nj4fvI6j8o/NdD/60Y/S9JZIDB052ROJgWCigTARUVW9WQVyHlGsAp188sm9tprpCkBXKhrwKnhrbm5nKnRJNFjNduZHvk716OKxUq+zminImeiWG5DjvBlrueWcZ6OiVpraebEpXanlNnRJS5yMrbTXBcI4cyyPFav0QP+ZUO+6cZ+2XPhiQicSid8M5GRPJAaCnOyJxEAwUc6+Zs2ariRtzd0P8HXamKc7d1PluWziUZ5b68OZ5VwSjdbkmeo2yXyLOZnyMDXdMHhMmIvrNXOedJcAsTVKT9dI+Hx8r5eS1525Od9PF6no8uPXcrzr91wCTueG7cx5zkRXy7Gv/XH0XS2JRiavSCQSOdkTiaFgomr8zMwMTj31VADvVFFYrVSVsGYOUyrAcIH/TpVszVPm+nA5vZmGqBrPqh5HOKkKzqqa9lEzy83OzvaOc1SmFlmo9IHPpfeiVspY4XLK16iSK/ulfbSaylo9J5dbnNQlC2EZnXfkSy+91G27yLwamiSPiA0RcVtE7I+IRyPidyNiU0TsjYgDo/8bW/pKJBLTQevP1H8BcF8p5bcxXwrqUQA3AdhXSjkPwL7RfiKROE7RUsX1FAC/B+CPAKCU8iaANyNiJ4CLRofdAuABADe6vkopnfroAjhUBWQVkdVbBavqqkrXvMLUs8xVFa0dp324YBdWfZXKsGrGfWjCBx4fVQ+59NSRI0e6bUcFVH7ed+o+e+s5y4XzNmToKjWPh0sW4jzcapYRVylY0eotyfK6MlH6fNeeTecNqPdl3L+jCy1v9nMAvAjgf0TE9yPiv49KN59ZSpkdCTgL4IyGvhKJxJTQMtnXAPgEgP9WSvk4gNewBJU9InZFxIMR8aCzQyYSidVFy2Q/COBgKeU7o/3bMD/5X4iIzQAw+n94oS+XUnaXUi4opVzg4n0TicTqoqU++6GIeDYiPlRKeQzzNdl/NPq7HsDNo/93LdbX3Nxcl7xAObULzK9xMhdp1Qo9l0s4yZzMmYJq5ZOAejIFoG9G43O5skgqP3N25vqulLH+CNd4rl6LS/5Z49tLSTRaix50xzlvRpcD30XwMZyHHvfhkmi4hKou+o5Nny4ZSQ2tr9p/BeCrEbEOwJMA/gXmtYJbI+IGAM8AuLaxr0QiMQU0TfZSykMALlig6ZKVFSeRSKwWJkqijx492nkBOXXImUFcYohWbyzGUhIVODWw1oc7n3qdtZa5cp5rbJJxaiWrgTpWNRrigkfceLtxcwEo7FXJKqyTt9VjTq9FPdIYzvuyliDEUVFHNXjblZDS/sdtGQiTSCRysicSQ0FO9kRiIJia4XspSQxqPN3xE5fj3HE8dgF1rodu7YDP5eqS6fdqSQmdqcatKziXWJdgo9X5ya0r1ExIjrO762TTrPJr7r+1BLfeF97XdRDmzu65cmYzh1qCELc2UWuz86pZokQi8WuNnOyJxEAw0fJPEfEigKcBnAbgpUUOnwRSjj5Sjj6OBzmWKsPZpZTTF2qY6GTvThrxYCllISedlCPlSDlWSYZU4xOJgSAneyIxEExrsu+e0nkVKUcfKUcfx4McKybDVDh7IpGYPFKNTyQGgolO9ojYERGPRcTjETGxbLQR
8ZWIOBwRD9NnE0+FHRHbIuLvRum4H4mIL01Dlog4MSL+ISJ+MJLjz6YhB8kzM8pvePe05IiIpyLihxHxUEQ8OEU5Vi1t+8Qme0TMAPivAC4D8GEAX4iID0/o9H8FYId8No1U2EcB/Ekp5XcAXAjgi6MxmLQsbwC4uJTyMQDnA9gRERdOQY4xvoT59ORjTEuO3y+lnE+mrmnIsXpp20spE/kD8LsA7qf9LwP48gTPvx3Aw7T/GIDNo+3NAB6blCwkw10ALp2mLADeDeD/AvjUNOQAsHX0AF8M4O5p3RsATwE4TT6bqBwATgHw/zBaS1tpOSapxm8B8CztHxx9Ni1MNRV2RGwH8HEA35mGLCPV+SHMJwrdW+YTik5jTP4CwJ8C4AiTachRAPxtRHwvInZNSY5VTds+ycm+UIjaIE0BEfEeALcD+ONSys+mIUMpZa6Ucj7m36yfjIiPTFqGiLgSwOFSyvcmfe4F8OlSyicwTzO/GBG/NwUZjilt+2KY5GQ/CGAb7W8F8PwEz69oSoW90oiItZif6F8tpdwxTVkAoJTyCuar+eyYghyfBnB1RDwF4G8AXBwRfz0FOVBKeX70/zCAOwF8cgpyHFPa9sUwycn+XQDnRcRvjbLUXgdgzwTPr9iD+RTYQGMq7GNFzAd9/yWAR0spfz4tWSLi9IjYMNpeD+AzAPZPWo5SypdLKVtLKdsx/zz8r1LKH05ajog4KSJOHm8D+AMAD09ajlLKIQDPRsSHRh+N07avjByrvfAhCw2XA/gxgCcA/LsJnvdrAGYB/Arzv543AHgv5heGDoz+b5qAHP8U89TlHwE8NPq7fNKyAPgnAL4/kuNhAP9+9PnEx4RkughvL9BNejzOAfCD0d8j42dzSs/I+QAeHN2bbwLYuFJypAddIjEQpAddIjEQ5GRPJAaCnOyJxECQkz2RGAhysicSA0FO9kRiIMjJnkgMBDnZE4mB4P8Dna1f1lVlydcAAAAASUVORK5CYII=\n",
213 | "text/plain": [
214 | ""
215 | ]
216 | },
217 | "metadata": {
218 | "needs_background": "light"
219 | },
220 | "output_type": "display_data"
221 | }
222 | ],
223 | "source": [
224 | "# The solution here does two things: introduce random augmentations and converting the training pipeline to be based on\n",
225 | "# dictionaries. This lets us use the CacheDataset type more easily with two part data (image/segmentation).\n",
226 | "\n",
227 | "from monai.data import CacheDataset\n",
228 | "from monai.transforms import (\n",
229 | " AddChanneld,\n",
230 | " ScaleIntensityd,\n",
231 | " ToTensord,\n",
232 | " RandFlipd,\n",
233 | " RandRotate90d,\n",
234 | " RandZoomd,\n",
235 | " Rand2DElasticd,\n",
236 | " RandAffined,\n",
237 | ")\n",
238 | "\n",
239 | "aug_prob = 0.5\n",
240 | "keys = (\"img\", \"seg\")\n",
241 | "\n",
242 | "# use these when interpolating binary segmentations to ensure values are 0 or 1 only\n",
243 | "zoom_mode = monai.utils.enums.InterpolateMode.NEAREST\n",
244 | "elast_mode = monai.utils.enums.GridSampleMode.BILINEAR, monai.utils.enums.GridSampleMode.NEAREST\n",
245 | "\n",
246 | "\n",
247 | "trans = Compose(\n",
248 | " [\n",
249 | " ScaleIntensityd(keys=(\"img\",)), # rescale image data to range [0,1]\n",
250 | " AddChanneld(keys=keys), # add 1-size channel dimension\n",
251 | " RandRotate90d(keys=keys, prob=aug_prob),\n",
252 | " RandFlipd(keys=keys, prob=aug_prob),\n",
253 | " RandZoomd(keys=keys, prob=aug_prob, mode=zoom_mode),\n",
254 | " Rand2DElasticd(keys=keys, prob=aug_prob, spacing=10, magnitude_range=(-2, 2), mode=elast_mode),\n",
255 | " RandAffined(keys=keys, prob=aug_prob, rotate_range=1, translate_range=16, mode=elast_mode),\n",
256 | " ToTensord(keys=keys), # convert to tensor\n",
257 | " ]\n",
258 | ")\n",
259 | "\n",
260 | "\n",
261 | "data = [\n",
262 | " {\"img\": train_images[i], \"seg\": train_segs[i]} for i in range(len(train_images))\n",
263 | "]\n",
264 | "\n",
265 | "ds = CacheDataset(data, trans)\n",
266 | "loader = DataLoader(\n",
267 | " dataset=ds,\n",
268 | " batch_size=batch_size,\n",
269 | " num_workers=num_workers,\n",
270 | " pin_memory=torch.cuda.is_available(),\n",
271 | ")\n",
272 | "\n",
273 | "# for simplicity we'll keep the existing pipeline for the validation data since it doesn't have any augmentations\n",
274 | "\n",
275 | "val_image_trans = Compose([ScaleIntensity(), AddChannel(), ToTensor(),])\n",
276 | "\n",
277 | "val_seg_trans = Compose([AddChannel(), ToTensor()])\n",
278 | "\n",
279 | "\n",
280 | "val_ds = ArrayDataset(test_images, val_image_trans, test_segs, val_seg_trans)\n",
281 | "val_loader = DataLoader(\n",
282 | " dataset=val_ds,\n",
283 | " batch_size=batch_size,\n",
284 | " num_workers=num_workers,\n",
285 | " pin_memory=torch.cuda.is_available(),\n",
286 | ")\n",
287 | "\n",
288 | "# %timeit first(loader)\n",
289 | "\n",
290 | "batch = first(loader)\n",
291 | "im = batch[\"img\"]\n",
292 | "seg = batch[\"seg\"]\n",
293 | "print(im.shape, im.min(), im.max(), seg.shape)\n",
294 | "plt.imshow(im[0, 0].numpy() + seg[0, 0].numpy(), cmap=\"gray\")"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {
300 | "colab_type": "text",
301 | "id": "c-hNZy4qkHji"
302 | },
303 | "source": [
304 | "We now define out simple network. This doesn't do a good job so consider how to improve it by adding layers or other elements:\n",
305 | "\n",
306 | "We'll use UNet for our segmentation network instead."
307 | ]
308 | },
309 | {
310 | "cell_type": "code",
311 | "execution_count": 5,
312 | "metadata": {
313 | "colab": {},
314 | "colab_type": "code",
315 | "id": "lM5NapkCj_Mx"
316 | },
317 | "outputs": [],
318 | "source": [
319 | "# class SegNet(nn.Module):\n",
320 | "# def __init__(self):\n",
321 | "# super().__init__()\n",
322 | "\n",
323 | "# self.model = nn.Sequential(\n",
324 | "# # layer 1: convolution, normalization, downsampling\n",
325 | "# nn.Conv2d(1, 2, 3, 1, 1),\n",
326 | "# nn.BatchNorm2d(2),\n",
327 | "# nn.ReLU(),\n",
328 | "# nn.MaxPool2d(3, 2, 1),\n",
329 | "# # layer 2\n",
330 | "# nn.Conv2d(2, 4, 3, 1, 1),\n",
331 | "# # layer 3\n",
332 | "# nn.ConvTranspose2d(4, 2, 3, 2, 1, 1),\n",
333 | "# nn.BatchNorm2d(2),\n",
334 | "# nn.ReLU(),\n",
335 | "# # layer 4: output\n",
336 | "# nn.Conv2d(2, 1, 3, 1, 1),\n",
337 | "# )\n",
338 | "\n",
339 | "# def forward(self, x):\n",
340 | "# return self.model(x)"
341 | ]
342 | },
343 | {
344 | "cell_type": "markdown",
345 | "metadata": {
346 | "colab_type": "text",
347 | "id": "Di-LUDgO3k5r"
348 | },
349 | "source": [
350 | "Our training scheme is very simple. For each epoch we train on each batch of images from the training set, thus training with each image once, and then evaluate with the validation set."
351 | ]
352 | },
353 | {
354 | "cell_type": "code",
355 | "execution_count": 11,
356 | "metadata": {
357 | "colab": {},
358 | "colab_type": "code",
359 | "id": "7dSiOsbmkrbn"
360 | },
361 | "outputs": [
362 | {
363 | "name": "stdout",
364 | "output_type": "stream",
365 | "text": [
366 | "600/600 Validation Metric: 0.69 [==============================]]\n"
367 | ]
368 | }
369 | ],
370 | "source": [
371 | "net = monai.networks.nets.UNet(\n",
372 | " dimensions=2,\n",
373 | " in_channels=1,\n",
374 | " out_channels=1,\n",
375 | " channels=(8, 16, 32, 64, 128),\n",
376 | " strides=(2, 2, 2, 2),\n",
377 | " # num_res_units=2,\n",
378 | " dropout=0.1,\n",
379 | ")\n",
380 | "\n",
381 | "net = net.to(device)\n",
382 | "\n",
383 | "opt = torch.optim.Adam(net.parameters(), lr)\n",
384 | "loss = DiceLoss(sigmoid=True)\n",
385 | "metric = DiceMetric(\n",
386 | " include_background=True, to_onehot_y=False, sigmoid=True, reduction=\"mean\"\n",
387 | ")\n",
388 | "\n",
389 | "step_losses = []\n",
390 | "epoch_metrics = []\n",
391 | "total_step = 0\n",
392 | "\n",
393 | "for epoch in range(num_epochs):\n",
394 | " net.train()\n",
395 | "\n",
396 | " # train network with training images\n",
397 | " for batch in loader:\n",
398 | " bimages = batch[\"img\"].to(device)\n",
399 | " bsegs = batch[\"seg\"].to(device)\n",
400 | "\n",
401 | " opt.zero_grad()\n",
402 | "\n",
403 | " prediction = net(bimages)\n",
404 | " loss_val = loss(prediction, bsegs)\n",
405 | " loss_val.backward()\n",
406 | " opt.step()\n",
407 | "\n",
408 | " step_losses.append((total_step, loss_val.item()))\n",
409 | " total_step += 1\n",
410 | "\n",
411 | " net.eval()\n",
412 | " metric_vals = []\n",
413 | "\n",
414 | " # test our network using the validation dataset\n",
415 | " with torch.no_grad():\n",
416 | " for bimages, bsegs in val_loader:\n",
417 | " bimages = bimages.to(device)\n",
418 | " bsegs = bsegs.to(device)\n",
419 | "\n",
420 | " prediction = net(bimages)\n",
421 | " pred_metric = metric(prediction, bsegs)\n",
422 | " metric_vals.append(pred_metric.item())\n",
423 | "\n",
424 | " epoch_metrics.append((total_step, np.average(metric_vals)))\n",
425 | "\n",
426 | " progress_bar(epoch + 1, num_epochs, f\"Validation Metric: {epoch_metrics[-1][1]:.3}\")"
427 | ]
428 | },
429 | {
430 | "cell_type": "markdown",
431 | "metadata": {
432 | "colab_type": "text",
433 | "id": "qscf-SW34DDJ"
434 | },
435 | "source": [
436 | "We now graph the results from our training and find the results are not very good:"
437 | ]
438 | },
439 | {
440 | "cell_type": "code",
441 | "execution_count": 12,
442 | "metadata": {
443 | "colab": {
444 | "base_uri": "https://localhost:8080/",
445 | "height": 390
446 | },
447 | "colab_type": "code",
448 | "id": "fAzQ_36HwRvs",
449 | "outputId": "e41c312e-e6fe-4b26-e109-4da046e45e0a"
450 | },
451 | "outputs": [
452 | {
453 | "data": {
454 | "image/png": "iVBORw0KGgoAAAANSUhEUgAABJoAAAF1CAYAAACgQ4B2AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nOzdd3yUVfbH8c9JJySEGnrvvSOgqCjSFLGgooJ1se266tp1de26+tu1rQ2xi2LBsiqgICogPTRpEnpCCSSQ3jP398cMbGjSJpmU7/v1yuvlzPPMPWcmktyc59z7mHMOERERERERERGRkxUU6ARERERERERERKRiUKFJRERERERERET8QoUmERERERERERHxCxWaRERERERERETEL1RoEhERERERERERv1ChSURERERERERE/EKFJhERERERETmEmTUzM2dmIb7HU83s6mM59wRiPWBmE04m37LCzM40s8RA5yESKCo0iUiJM7PTzGyumaWZ2R4z+9XMevuOXWNmc0ow9s9m9qeSGl9EREQqDjPbbGY5ZpZpZklm9o6ZRflp7OvNbK2ZZfjG/s7Mon3H3jWzJ/wR5zBx15rZdYd5/jYzW3w8Yznnhjnn3vNDTocUYpxzTznn/D5n8801i3zf03QzW25m5/k7zlFy2Gxmg0ozpkggqdAkIiXKzKoB3wIvAzWBhsCjQF4g8xIRERE5ghHOuSigB9Ab+PvxvNi8gg567gzgKeBy51w00B741E/5Hs17wFWHeX6s71hlMM/3Pa0OvApMMrPqAc5JpMJSoUlESlobAOfcx865IudcjnPuB+fcCjNrD7wO9PNdZUoFMLNwM/s/M9vqu+L3uplV8R0708wSfe3Vyb4rRFceb1JmFmRmfzezLWa2y8zeN7MY37EIM/vQzFLMLNXMFplZXd+xa8xso+9q5KYTiS0iIiJln3NuGzAV6ARgZn19Hdqpvq6YM/ed6+ugftLMfgWygRYHDdcbb7FjqW/sPc6595xzGWZ2A3AlcI9vPvSNb8wGZjbZzHb75hx/LRbvETP73Mw+8c1JlphZ1yO8lQ+A08ysabHXtwe6AB+b2blmttTX7ZNgZo8c6TMp3iluZsG++VqymW0Ezj3o3GvNbI0vv41mdqPv+aq+z7WB7/1m+t7rI2b2YbHXn29mq3yf98++nPcd22xmd5nZCvN2zH9iZhFHynsf55zH93lUBVr7xvqjeWdtM/vWl8MeM5u9r4ho3mWCrYrldNiuNDP7AGgCfON7r/f80VxTpCJQoUlESto6oMjM3jOzYWZWY98B59wa4CZ8V5mcc/uuLP0Tb4GqG9AKbxfUw8XGrAfU9j1/NTDezNoeZ17X+L4G4p0MRgH/8R27GogBGgO1fDnm+CZGLwHDfFcj+wPLjjOuiIiIlANm1hgYDiw1s4bAd8ATeDu07wImm1mdYi8ZC9wARANbDhpuATDEzB41s1PNLHzfAefceGAi8KxvPjTCV8z4BliOd75zNnC7mQ0pNuZI4DNfPh8BX5lZ6MHvwzmXCPzky2+fq4ApzrlkIMv3uDreYtHNZnbBMXxE44DzgO5AL2DUQcd3+Y5XA64FnjezHs65LGAYsN33fqOcc9uLv9DM2gAfA7cDdYApeAs1YcVOuxQYCjTHWzS75mgJm1mwL5cC/vc9+qN5551Aoi+HusADgDtanOKcc2OBrfg65Zxzz3KEuebxjCtSlqnQJCIlyjmXDpyG95fym8BuM/vvka7amJnhnbjc4bval4G31Xz0Qac+5JzLc879gnfid+lxpnYl8G/n3EbnXCZwPzDavBtYFuD9pd/K14UV53sfAB6gk5lVcc7tcM6tOs64IiIiUrZ9Zd4u6znAL3jnIWPwFmamOOc8zrnpwGK8hah93nXOrXLOFTrnCooP6JybDVyEdzned0CKmf3bV/g4nN5AHefcY865fOfcRrzzqOLzoTjn3Oe+WP8GIoC+RxjvPXyFJl8R60rfczjnfnbO/eZ7XyvwFnjOOPrHxKXAC865BOfcHuDpg97zd865Dc7rF+AHYMAxjAtwGfCdc2667/39H1AF70W+fV5yzm33xf4Gb6HoSPr6vqe5vrHGOOd2HcO8swCoDzR1zhU452Y7546r0HQEfzTXFCn3VGgSkRLnnFvjnLvGOdcIb/t5A+CFI5xeB4gE4nytxKnANN/z++z1XQ3bZ4tvzOPRgAOvNm4BQvBerfoA+B7v+v3tZvasmYX6Yl6G96rTDvNu4tnuOOOKiIhI2XaBc666c66pc+4W51wO0BS4ZN/cxDc/OQ1vEWKfhH3/UWxJWKaZNQFwzk11zo3A24E0Em8HzpE2v26Kd2lZ8XgP4J2nHBLPtyQskSPPh74A6ptZX+BMvHOt73y5nmJmP/mW6KXhnefUPuqn5I2VUOzxAV1cvk72+b4lZ6l4i3LHMu6+sfeP53t/CXi7jfbZWey/s/F2px/JfF/nfA3gv/yv4HW0eedzwHrgB9/yv/uOMf+jOexc009jiwScCk0iUqqcc2uBd/Htd8Ch7cfJeFuHO/omedWdczG+DRz3qeFbxrZPE+CAlutjsB3vJK74GIVAku+K1aPOuQ54r5ydh28TTefc9865c/BOLNfivbooIiIiFVsC8EGxuUl151xV59wzxc7ZP6cptiQsyjm3tfhAvs6hH4GZHHk+lABsOihetHOueAdV433/4etSasQR5kPOuWzgc7zzmbHAJOdcvu/wR3iLL42dczF498+0Y/hMdhTPAe9cal8+4cBkvN1DdX1FninFxj1aV9AB8zRf51FjYNsx5HVEvi72W4CxZtado8w7nXMZzrk7nXMtgBHA38zsbN9w2XiLVPvU+6PQB+VxxLmmSEWgQpOIlCgza2dmd5pZI9/jxsDlwHzfKUlAo31r7n1XrN7Eu44/1veahgftSQDwqJmFmdkAvL+cP/uDNEJ8my7u+wrF2xZ+h5k1N+9ti58CPnHOFZrZQDPr7GtnT8fb3lxkZnXNuzFlVbx3zcsEik72MxIREZEy70NghJkNMe8m2BHmvUFJo2N5sZmNNLPRZlbDvPrgXZ5WfD5UfAPxhUC6md1rZlV8MTuZWe9i5/Q0s4t8y/5vxzs3mc+RvYe3M/tiDrzbXDSwxzmX68vrimN5T3jvmvdXM2vk24OzeLdPGBAO7AYKzWwYMLjY8SSglvluxHKEsc81s7N987Y7fe9v7jHmdkTOuRRgAvDw0eadZnaembXyFbrS8c779s39lgFX+L43Q/nj5YYHfH+PNNc82fcmUlao0CQiJS0DOAVYYGZZeCdAK/FOGMB7NW8VsNPMkn3P3Yu3TXm+maUDM4Dim33vBPbivdo1EbjJ1yl1JK/hvVq17+sd4G28bcuzgE141+zf6ju/Ht6rfunAGrz7M3yI92fmnb64e/BOKG45rk9DREREyh3nXALe5W4P4C2eJAB3c+x/T+3FuxdQPN75xYfAc865ib7jbwEdfMu3vnLOFeHtoOmGd56SjLc4Urww8zXewtF
evF1KFx28N9RBZgFpwDbn3KJiz98CPGZmGXg3wf70GN/Tm3iXfy0HluBdngd4O4GAv/rG2ou3ePXfYsfX4r3ot9H3ng9Y8uec+x3vvlgv+977CLybaefjHy8Aw82sC38872zte5wJzANedc797Dt2my+vVLx7Xn31B/GeBv7ue693ceS5pkiFYP7Zy0xEpHSY91bCH/r2exIRERGpdMzsEbwbSY8JdC4iIgdTR5OIiIiIiIiIiPiFCk0iIiIiIiIiIuIXWjonIiIiIiIiIiJ+oY4mERERERERERHxCxWaRERERERERETEL0ICnUBJq127tmvWrFmg0xAREZESEhcXl+ycqxPoPOR/NP8SERGp+I40B6vwhaZmzZqxePHiQKchIiIiJcTMtgQ6BzmQ5l8iIiIV35HmYFo6JyIiIiIiIiIifqFCk4iIiIiIiIiI+IUKTSIiIiIiIiIi4hcqNImIiIiIiIiIiF+o0CQiIiIiIiIiIn6hQpOIiIiIiIiIiPiFCk0iIiIiIiIiIuIXKjSJiIiIVGJmNtTMfjez9WZ232GO321my3xfK82syMxqBiJXERERKftUaBIRERGppMwsGHgFGAZ0AC43sw7Fz3HOPeec6+ac6wbcD/zinNtT+tmKiIhIeaBCk4iIiEjl1QdY75zb6JzLByYBI//g/MuBj0slMxERESmXVGgSERERqbwaAgnFHif6njuEmUUCQ4HJpZCXiIiIlFMqNJ0Aj8cxd30y63dlBDoVERERkZNhh3nOHeHcEcCvR1o2Z2Y3mNliM1u8e/duvyUoIiLlj3NH+lVyZDvTclmRmIrHc/yvLY88HndCn1N5EBLoBMojM7hiwgIAwkKCeOWKHpzepjbhIcEBzkxERETkuCQCjYs9bgRsP8K5o/mDZXPOufHAeIBevXpVzJmziIgcwuNxpOYUULNqGAAFRR4ue2MeMVVCeeny7kRHhB51jNyCIi55Yy4Je3IY0Lo2b4ztSWSYf8oVe7LyeejrlSzYuIcrTmnCHYNaE7dlL7HRETSpFXnAufM3ppCaXcDQTvUASM3O585Pl9OrWU1uPL0FG5MzCQkKolntqkeMV1jk4cul26gRGcbZ7WMxM+ZvTGFvVj7ndKjLxuQs7p28gg27MokIDebJCztzToe6fnmvZYUKTSfA7H8X//ILPYx7fzEAF/VoyFMXdiYiVAUnERERKRcWAa3NrDmwDW8x6YqDTzKzGOAMYEzppiciImVVXmERXy3dxqRFCazans6Xt/SnY4MYPpy/hSVbUwG445PlvHlVT9JyCnjxx3iiw0O4ZWArIkKD2Zycxa0fL6VqeDAxVUJJ2JPDOR3q8uOaJO6b/Bsvju62/2/v9NwCvluxg6krd7IpOZN+LWrxxAWdCQsJosjjeH/eZqqGhzCqRyOCgoxV29P4fuVORvVszAsz1jF9VRItY6N46cd4Xp4Zj3MQGx3Ot7eeRmy1CHLyi3hpZjyv/bwBgHuHtuPczvW5/8sV/Lo+hR/X7iKnoIjXf95AfpGH5y/rynldGvD2nE1k5hUyqH1dujauzpaULO7+fAULN3mbfy/q0ZC61SL2jxsbHU5OQRFFHsfIbg1YnpDGXz5awrld6lMtIpSWsVEs2JjCxt1Z/OvSrrSvX42EPdksTUhlyoodJOzN5p1rehNbLQKA+KQMNiVncXqbOmWqDmEVtVVrn169ernFixf7fdxm9313xGPX9G/GPUPb+q0CKyIiIkdmZnHOuV6BzqO8MrPhwAtAMPC2c+5JM7sJwDn3uu+ca4ChzrnRxzJmSc2/REQquhWJqXyxZBtj+jalVWzUMb9u/a5M7vxsOQ+f14GeTWsccjwpPZc1O9KpH1OFtvWiiU/KIK/QQ6eGMQC88csG3pu7mcEd63H/8HaEBgURFHTg6urMvEL2ZuWTlJ7Liz/Gs3RrKpl5hfuP14+J4M2renHTh3E0qF6FIR3r8fi3q7lrcBuWJ6YxfXXS/nNb1KnKxt1ZVAkNJrewiNDgIG4+oyV3nNOGl36M59/T1zF+bE8Gd6zH3A3J3PrRUlKy8mlRuyoNa1Rhdnwyfz27NX87pw1Tf9vBzROXAPCvS7oytFM9hr44i4Q9OfvjjRvQnHuHtuPjhVvZnpaLx+N4c/ZGxvZtyoPnduCGDxbzy7rd9Gpag0Wb9x7wvp++qDNvz9lE/K5MgoOMutHhpOcW0rZeNHFbvOcGGfRpXpNFm/cSHhLEo+d3JGFvDi/9GA94C079WtRidnwyIcHGdac2p1PDGFIy87jrs+Us3rKXjNz/fZZR4SEUFHk4tVVtZq7dBUBosFFQ5BjcoS5vjO1JVn4RZz73M8mZeZzdLpYJV/di1fZ0rnt3ESO7NeDeoe3YkZZLVHgINXzdZv52pDmYCk0nqNM/vj/gH9XhPHFBJy7v04TgoMNtfyAiIiL+oEJT2aNCk4hURs45nOOQAs2x2J2Rx39mxvPB/C14HNSrFsGEq3tRPyaCWlHhpOcWkJNfRF1fJ8vujDxGj5/HwLaxPDC8PQ99vZKJC7YSFhLE62N6cFY771Ks9NwCnvx2DZ8s9t73IToihBsGtOBf09cB8PG4vjSpFcmAf86kdlQ4uzLyAOhQvxoTru5FnehwPl2cwK70PD5auJXdvuP1qkUwqEMsQzvWp2fTGmxMzuRP7y1mR1ouAC9f3p3zutTnlolLmLpyJwAPndeBRjWqMHPNLn7dkMy5netzVf9mBJsRFRFCVLi3UaOgyMN5L81hw+5MWsVGsTkli8Y1IvnnqC50b1wdM2PsWwvYnJLFL3cN5OLX55KcmUe1iFD2ZOUzvHN93pqziWdHdeGF6etoXTeal0Z3JybywCV8t01aysy1uxjUvi5fLt3G0xd15vI+TViXlEFaTgELNqbgcXDrWa2YFZ/MB/M2c+MZLWlQvQq3fBjHmp0Z/PnMVlx7WjMe/molSxNSGdg2llvObLm/4+j3nRlk5hUetvh3sO2pOeQUFOEcxFQJ5fFvVzN15Q4Kihx3DW7D5X2a8HlcIk9PXctNZ7Rk1fY0ZscnM7BtHX76fTc9mlRnU3IWe7MLDhj38ZEdGduv2XH9/3isKl2hycxGACNatWo1Lj4+3u/jJ+zJZtz7i1m78+gbgj9/WVcu6NbwgCV3IiIi4h8qNJU9KjSJSGWzYbe30HJK85o8c3EXwNtFtHZnBn1b1PzD/Xz3ZuVz3stz2JWRy0XdG3FJr0Zc9+4i0n0dLq1jo4jflQlAs1qRvHx5D6avSdrfLfP6mJ7c8/lyWsZGkZNfxNqdGQxoXZt7hrTj1o+XsDklmytPaUKbutG89vMGdqbn7o/dtXF1Tm9dm//8tJ5Zdw/kzdkbmbZyJ1l53o6dbN94AB0bVGNQ+7rUjg5nZLcGVDto76VFm/dw5ZsLOL1NbV69sidhIUHkF3qYtGgr1SPDOL9rg2P+PJPSc3lv7mbW78okNDiIR87vSJ3o8P3HP1ucwN2fr9j/+KkLO9OiTlVGj58PwPldG/DS5d1xzh3x7/BFm/dwyevzAO+qpEfO73jM+QEUeVyJN5
Xsycpnc0oWPZp4C1WFRR6ue28xs9btpmpYMPcNa8eYvk2ZMHsT/5r+O+EhwXxwfR8mxyUyb2MKV/dvxqkta//hnlIno9IVmvYp6YlOWk4Bt0yM49f1KUc999Mb+9Gnec0Sy0VERKQyUqGp7FGhSUQqspTMPDYlZ9G9SY39hYbR4+cxf6N3X55Zdw9k5fY07vhkGXmFHvo0r8kbY3pSo2oYu9JzWb87k77Na+3vfJoweyNPfLeGz27qR+9m3r8X1yVlMDs+mcIiD//5aT0ZuYXcfGZLvlq6jd0ZeRR6HKe1qs2yhP8tX5t8c386NqjGqz+t54P5W9ibXYAZfHDdKZzWujbg7branJJNek4ByxNTefjrVQAM7lCX8Vd5f5V6PI635mziySlraBATwSPnd6RLo+rUrRZ+1OaJ3IKiUtkrKLegiOenr2Pm2l3EVAnlkxv7ERxkPDN1LRm5Bdw1uO0xLRfb932bd/9Z1I+pUuJ5+0tSei41IsMICwna/1xKZh5FzhEbHVFqeajQVML2ZuUz5q0FrNqeftRz376m1/5WRhERETk5KjSVPSo0iUh5sysjl9Hj59OzSQ2euLATQWbMiU+mX8taBxROdqTlMOT5WaTnFjKofSyvXNmDF2fE8+rPG7ju1OZMXLCF9vWrsS4pg9Z1oxncoS4vzoinZWwUX9zcn0vfmMdv29JoWacqt57VmvTcAp6fvo4mNSP5+i+nHTa3lMw8tu7JpnuTGixPSOXmD+Moco7JN/fn3z+s44ul2zizbR3evbbP/tf8vjODD+Zvpl+L2pzbpf5hx83OL+TSN+axcls6U/46gA4Nqu0/VljkYebaXZzWunaF3ns4J7+IbanZtIqNDnQq5ZIKTaVk6da9XPjq3GM694tb+u9fYyoiIiInRoWmskeFJhEpT3ILirjq7YX77xTWpVEMOflFxO/K5IJuDXj+sm7kF3kINmPc+4uZuyGF87s24LO4RKIjQsjILaRb4+q8d10fflyTxN8+XQ6wv0Npxuok/vT+YqLCQ8jMK+S6U5vz49oktqRkA1A9MpTnL+3GwHaxx5RvXmERhUWOquEhpGUXsHpHOt2bVD+hTqKCIg+7MvJoWL38dPNI2XGkOVjFLU0GSPcmNVj7+FAe+3Y1Hy3Y+ofnXuQrSH14/f9aGUVERERERKRkTF+dxLa92cxYs4vs/EKa1arK5pQsliak8uLobvz8+26+XLqNRjWqcGH3hny5dBt5hR6mr07yFnZyCnj8gk6MOaUJGbmFTFu1k6v6NeWxkZ0AuKhHI2KqhLJmRzq9fBtAD+pQl8dHduSLpdsY0aUB157ajJvPbMmkhVsZ3qU+zWpVPa69fsJDgvHtm01MZCj9WtY64c8jNDhIRSbxO3U0laA1O9IZ9uLsYz7/g+v7MKB1nRLMSEREpOJRR1PZo44mESkNCXuyqR8TQW6hh9snLSMyLJhHz+94yN48e7LyeeirlXz32479z8VGh9OiTlW2pGSTlVfI4xd0YmS3huTkF7EiMZUeTWtgwIWvzuW3bWkMaF2bmCqhnNUulot6NAK8+x1l5BUesim2SGWhjqYAaF+/GvFPDuODeVt47NvVRz1/7FsLAQ5ZHysiIiIiIiKwJSWLWfHJLNuayuQliZzXpT5BZsxYkwR478p2Vvu6vDBjHdn5RQztWI+3f93Erow8ejWtQUiwMbxzfUZ0abC/IFX8zmRVwoI5pcX/OoTeubY3O9Ny6dQw5pBczExFJpHDUEdTKdmemkP/Z2Ye8/kNq1fh/ev70LJOVAlmJSIiUv6po6nsKSvzLxEpH/7oFvT7FHkcr/+ygRdmrKOgyPs3bERoELkFHgBuPKMFq7enMzs+mbDgIGIiQwkPCSJxbw7R4SG8d32f/beIFxH/UEdTgDWoXoWNTw3nu992cOvHS496/rbUHM7+1y+0qRvFu9f2oYHWzYqIiIiISAVTUORh1GtziY4IZUzfpgzpWPeQopPH4/jzxCVMW7WTczvXZ2C7WFIy8xg3oAU3T4zjp993c3W/ZqxITOPX9ck0qx3JR+P6ElMllHkbUqgfE0HrurqrmEhpUUdTAOQWFNH7iRlk5BUe82s6NqjGO9f0JrZaRAlmJiIiUv6oo6nsKYvzLxEpm975dROPfvO/bUZG925MtSqhVA0LoVZUGAPbxTJx/hZe/XkDDwxvx7gBLQ4oRBV5HLsz8qgX4/07Kb/QQ0iQEXQcm2uLyIlRR1MZEhEazG+PDiFuy14ufm3uMb1m1fZ0+jz1I8M71+PJCzofssGdiIiIiIhIWZaTX0REaBBTV+6kyOM4r0t9Ppi3hV5Na/DRuL5cNn4ekxYlEBpshyyPu+KUJocUmQCCg2x/kQkgLCSoVN+TiBxKhaYA6tm0BpueHs7lb85n/sY9x/SaKb/tZMpvOzmzbR1euKwb1SNVcBIRERERkbJta0o25740m5jIUBL35gDelR4bk7MYd3oLwkKCmHBVL1ZuT6dHk+qEBgeRsCebeyavoFvj6jwwvP1R93ESkbJBS+fKiGUJqVzwyq/H/bpB7evy/GVdidbdDkREpJLS0rmyp7zMv0TEP1YkpvLGrI3cMagNrWKjKCzy4IAgMxZsTGFDchYT529h7c4MWtSuSos6Vfl1fQo5BUXEVAll1t0DiYnU3zMi5Y2WzpVx3RpXZ/VjQ7jh/TjmrE8+5tfNWJNE50d+YHCHujw2stMBbaMiIiIiIiL+NHNtEnFb9tKlUXVOb12HlKw8Ro+fT3Z+EQs37eEvA1vxzNS1eJyjZtUwdqTlAhAbHc6Eq3oxqENdANbvyuDlmesZN6CFikwiFYw6msqgldvSOO/lOSf02qa1Ivn6z6dqSZ2IiFQa6mgqe8rj/EtE/mff34gHL1WLT8rgvJfnkFfoAaBaRAghwUFk5xfy4uju3PxhHB4HvZrWoEnNSNJyChjVsxE9mtagTlS4NugWqWDU0VSOdGoYw+ZnzuXZaWt59ecNx/XaLSnZdHtsOgPb1uGfo7oQG60OJxEREREROTartqdx04dxhAYF8c2tp/HRgq1MXpLIhKt78cpP6wkLDuL7208ncW8O93y+nO1puTx1YWeGdKzHQ+d1IDkzj9sHtSE0WJtyi1RW6mgq43al53Lm//1Mdn7RCb2+a6MYJo7rS1S4aooiIlIxqaOp7Cnv8y+Ryso5x6jX5xG3ZS8AnRvG8Nu2NMDbvZSeW8jV/Zry6MhOACRn5rEzLZdODWMClrOIBM6R5mAqM5dxsdUiWP3YUN67rs8JvX55Yhqd/vE9f/14KbkFJ1asEhERERGRiu/7Vd79l566sDOPjexIWk4BF3VvyGc39aNxzUi6N6nOuNNb7D+/dlS4ikwicgh1NJUjRR7HfZNX8Flc4gmPceUpTbh9UBvqRIf7MTMREZHAUUdT2VOR5l8iFU1adgGXjZ9HTJVQ2tevRqeGMYzq2Yi4LXu4ZeISoiNCmXbbAEK09E1EjkJ7NFUAwUHGc5d05ZaBrRj6wqz9m/Adj4kLtjJxwVauOKUJj57fUWunRUREREQqkVd+Xs/an
RmEBBkLNu0BIG7LXqavTqJKWBAvXNZNRSYROSkqNJVDzWtX5fcnhjHltx3cMnHJCY3x0YKtfLRgK/cMbctV/ZppDycRERERkQrs+1U7+WRRAjPX7uLSXo146LwObEnJ5q05m/h44VaCDL6+5jQthRORk6alcxXArR8v5Zvl209qjL8MbMVdQ9r6KSMREZHSo6VzZU9lmH+JlEXOOeZuSGFLSjYhQUaNqmEMbFuHpQmpXPL6PKIjQujepAavj+lBZJj3QnN+oYevl22ja+PqtKkbHeB3ICLlyZHmYCo0VRDbU3Po/8zMkx7n+cu6MrJrQ4KCzA9ZiYiIlDwVmsqeyjL/EikrsvIK+Wb5dj6PS2Sx745x+1zSsxGpOQUs2ryHefedTZWw4ABlKSIVjfZoquAaVK/C5mfOZXb8bsa+tfCEx7njk+Xc8clyXrq8O+e0r6tfRCIiIiIiZczSrXuZsSaJ4bRXWbkAACAASURBVJ3r07ZuNNe8s5BFm70FpqEd6/HwiA54nOOdXzfz1pxNANx2dmvN7UWkVKjQVMEMaF2H+CeHce/kFXyxZNsJj/PXj5cCMPnmfvRsWtNf6YmIiIiIyAnamZbLI/9dxfQ1SRR5HG/N2USzWlVZuzOD50Z1oW29aNrVq0ZYiHcz7weGt2fx5j3szsjjpjNaBjh7EaksKuzSOTMbAYxo1arVuPj4+ECnExD+Wk4H8MkNfTmlRS2/jCUiIuJPWjpX9mjpnIh/rd+Vwdu/buarpdtwDq7q35RLezXm5g/jWJeUyeV9GvP0RV0O+9qCIg/5hR6q6uY/IuJn2qOpEpv62w5uPsG70xUXHR7C5Fv6a5NAEREpU1RoKns0/xLxny+XJnLP5ysICQpiWKd6/PXs1jSrXRXwFpEWbd5Dr6Y193cxiYiUliPNwfTTqBIY1rk+ax4bygXdGpzUOBl5hQx+fhY3vL+Y5Mw8P2UnIiIiIiKH8/68zdzxyXJ6Nq3BnHsH8u/Luu0vMgGEBgfRv2VtFZlEpExR/2QlUSUsmBdGd+fuoe049SSX0/2wOokfVicxtm9THr+gk58yFBERERGRL5YkMnlJIrWqhvPtiu0Mal+XV67sTniINvIWkfJBpe9KpmH1Kmx8ajhP+KFA9MH8LTS77zu+XraNgiKPH7ITEREREam85q5P5p7PV7B2RwY//76LIR3r8eLobioyiUi5oo6mSigoyBjTtykXdG/IlRMWsDwh9aTGu23SMm6btIx3runNwHaxfspSRERERKTiW5eUwV2fLWdnWi67MvJoViuSr/98GjGRoYFOTUTkhKjQVIlFhYfw9Z9PZenWvVz46tyTHu/adxcRGRbMV38+VRuGi4iIiIgcxZ6sfMa9v5j0nAJ6Nq1Jl0YxXH9ac90hTkTKNS2dE7o3qUH8k8MY0fXkNgsHyM4vYvDzs/jzR0vIyS/yQ3YiIiIiIuXfxAVb6P3kDF6YsQ7nHHd9tpz+z/zI9tQcxl/ViwlX9+KvZ7dWkUlEyj0VmgTw3rHi5cu7M/uegX4Z77sVO2j/8DTemrOJQu3fJCIiIiKVUF5hEQl7svl+1U4e/HIluzPyeOnHeG79eCmfxyVyasvafHPrafRuVjPQqYqI+I0KTXKAxjUj2fT0cC7r1dgv4z3+7WpaPTiVuC17/TKeiIiIiEh54JzjijcXMODZn7jxgzg6N4zhp7vOpEnNSL5dsYNB7evy5lW9aFevWqBTFRHxK/VlyiHMjH+O6sLDIzow7MXZbN2TfdJjXvzaXBpWr8IPd5yudmARERERqbAKijxMWpRA1bBg4rbsZVD7WGpEhnHHOW1oUL0KP911JjkFRUSGaU4sIhWTfrrJEVUND2HWPQOZ+tsObp645KTH25aaQ8d/fM+zF3fh0t7+6ZgSERERESlL7pv8G5OXJAJQt1o4/7miBxGhwfuPm5mKTCJSoWnpnBzVsM71Wfz3QTSIifDLePdMXkGz+75j7c50v4wnIiIiIhJozjnWJWUweUki53apz+AOdZn4p74HFJlERCoDldLlmNSOCmfu/Wfz/PR1vPhjvF/GHPrCbOrHRDD55v40qF7FL2OKiIiIiJS2mWuTuOfzFVSrEkpEaBCPj+xEzaphgU5LRCQg1NEkx+WOc9qw8tEhjOrZyC/j7UjLpf8zM/nbJ8vIKyzyy5giIiJy7MxsqJn9bmbrzey+I5xzppktM7NVZvZLaecoUpZl5xdy60dLyS/0sDUlmycv6Kwik4hUaupokuMWFR7C/13SlVE9GzF6/Hy/jPnF0m18sXQbb4ztyZCO9fwypoiIiPwxMwsGXgHOARKBRWb2X+fc6mLnVAdeBYY657aaWWxgshUpOxL3ZjN9dRIfLdhK23rRZOUX8fG4vnRsWI1qEaGBTk9EJKBUaJIT1rdFLX57ZDBXvLmA37al+WXMGz+Io3ZUOD/+7QxiIvVLWkREpIT1AdY75zYCmNkkYCSwutg5VwBfOOe2AjjndpV6liIBVljkodDjCA8J4tWfN/D89HUUehwA8bsyqR0VRp/mNQkOsgBnKiISeCo0yUmJjgjlm1tP4+tl27ht0jK/jJmcmUfXx37gljNbctug1oSHaANFERGREtIQSCj2OBE45aBz2gChZvYzEA286Jx7v3TSEykb7vviN75dsZ2ODWKI27KX87rU58bTW9KsdiTTVyfRtFZVFZlERHy0R5P4xchuDVnz2FC/rkd/9ecNtP37NN2dTkREpOQc7i9jd9DjEKAncC4wBHjIzNocMpDZDWa22MwW79692/+ZigTIrvRcvl62jSY1I1m3M4Nbz2rFy5d3p3OjGKIjQrmoRyN6Nq0R6DRFRMoMdTSJ31QJC2bJQ+cwOS6ROz9b7rdxh74wmz7NazJpXF+CdKVIRETEnxKBxsUeNwK2H+acZOdcFpBlZrOArsC64ic558YD4wF69ep1cLFKpFz6Zd1u7vhkGR4Hr43pSYvaVTHTfFRE5I+oo0n87uKejZjxtzP8OubCTXto8cAU1u5MxznNXUVERPxkEdDazJqbWRgwGvjvQed8DQwwsxAzi8S7tG5NKecpUupWbktj3HuLiY0O59Mb+9GyTpSKTCIix0CFJikRrWKj2PT0cF4c3c2v4w59YTbN75/C3qx8v44rIiJSGTnnCoG/AN/jLR596pxbZWY3mdlNvnPWANOAFcBCYIJzbmWgchYpDYVFHu78dDm1osL4eFxfLY0TETkOWjonJcbMGNmtIQ2rV2HU6/P8Onb3x6czfmxPzm5fVxsvioiInATn3BRgykHPvX7Q4+eA50ozL5HSlJ1fSJXQYH5dn8Lbv24it6CI35MyeH1MD2r4cQ9SEZHKQIUmKXG9mtVk/ZPDuOnDOGas8d8dkW/4IA6AdU8MIyxEzXkiIiIicvw27M7k3JdmA5Bb4KFOdDhFHseNZ7RgSMd6Ac5ORKT8UaFJSkVIcBATru7Np4sTuOfzFX4du83fp/Lg8Pb8aUBzrZsXERERkePyf9//Tm6Bh4t7NKJ7k+qM6tmIiNDgQKclIlJuqQ1EStWlvRqz8IGz
/T7uk1PW0Pz+KWxLzfH72CIiIiJS8TjnWLJ1L1NX7uT2Qa3516VdGdO3qYpMIiInSYUmKXWx1SJY/+Qwnrigk9/HPvWZmbwwYx0FRR6/jy0iIiIiFcOyhFR6PzmDUa/NpXZUGH8a0CLQKYmIVBgqNElAhAQHMaZvU969trffx35hRjytH5zK3PXJfh9bRERERMo3j8fxt0+WkZVXxI1ntGTyzf2JCteOIiIi/lJhf6Ka2QhgRKtWrQKdivyBM9vGsujBQfR+cobfx75iwgK6Na7OR+NOITKswv6vLiIiIiJHsTcrn0vemEdIkHF+twZsTM7ixdHdGNmtYaBTExGpcCpsR5Nz7hvn3A0xMTGBTkWOok50OGsfH8pV/Zr6fexlCal0ePh7EvZk+31sERERESkfnv1+Let3ZbJ1TzbPTvudIIPBHXRHORGRklBhC01SvkSEBvPYyE5MvrlfiYw/4NmfuG/yCu3dJCIiIlLJ7M7I4/O4RK7q15RZ9wzkxjNa8PRFnakSpk2/RURKggpNUqb0bFqTtY8PpV29aL+PPWlRAq0fnMqizXv8PraIiIiIlD2z43dz7bsLKShyXNWvKbWjwrl/WHsu690k0KmJiFRYKjRJmRMRGsy020/ngeHtSmT8S16fx40fLCYzr7BExhcRERGRwJv62w7GvrWQpPQ87hrchlax/r+QKSIih1KhScqsG05vyaIHB5XI2N+vSqLTP75nyda9JTK+iIiIiASGx+P4etk2/vbpcro2rs6cewfyl7NaBzotEZFKQ4UmKdPqRIfz/e2nl9j4F706lzETFuDxuBKLISIiIiIlLzkzjxWJqTzw5W/cNmkZrWKjmHBVL8JDtBeTiEhpUqFJyry29aLZ/My5PDKiQ4mMP2d9Mi0emMKWlKwSGV9EREREStbMtUkMfO5nzv/Pr0xalMCYvk346s+nUic6PNCpiYhUOiGBTkDkWF1zanPqREfw54+WlMj4Zzz3M+3qRfPtracREqwarIiIiEh5sHDTHm78II629aK5/rTmVIsI5Yw2dQgOskCnJiJSKemvaSlXzu1Sn9eu7FFi46/dmUGrB6cSn5RRYjFERERExD+y8wu545NlNKxehYl/6suF3Rtxdvu6umgoIhJA+gks5c6wzvVZ9vA5JRrjnOdn8dPvu3BOezeJiIiIlFXvzt3MttQcnh3VlZgqoYFOR0RE0NI5KaeqR4ax8anhfB6XyD2TV5RIjGvfWQTA6seGEBmmfyoiIiIiZUFhkYdnpq5lxpokNqdkM6h9LH2a1wx0WiIi4qOOJim3goKMS3s3ZvzYniUap8PD37M9NUfdTSIiIiJlwNfLtjNhzibqxURweZ/GvDC6e6BTEhGRYlRoknJvcMd6fPfX00o0Rv9nZtLlkR8o8qjYJCIiIhIoHo/j1Z/X065eNB+P68vTF3UhKlyd5yIiZYkKTVIhdGwQw+ZnzuXuIW1LLEZGXiEtH5hCwp7sEoshIiIiIoeXX+jhlZ/Ws2F3FrcMbIWZ7ionIlIWqdAkFcqfB7bi/y7pWqIxBjz7E499s7pEY4iIiIiIl3OOt+dsYsCzM/nX9HW0rRvNuZ3rBzotERE5AhWapMIZ1bMRX97Sv0RjvP3rJka9Npf8Qk+JxhERERGpzJxzPPrNah77djUtakfxxtiefHFLf4KD1M0kIlJWqdAkFVL3JjVY9vA5JRpj8Za9tPn7VDbuzizROCIiIiKV1X9mrufduZv502nN+WjcKQzpWI+q2pNJRKRMU6FJKqzqkWHEPzmMutXCSzTOWf/6hZs/jNNd6URERET8aObaJF6aGc+Irg148Nz22pNJRKScUKFJKrTQ4CDm338253dtUKJxpq7cyWXj55OWXaCCk4iIiMhJWLktjdsnLeW6dxfTsk4Uj4zooCKTiEg5okKTVHhmxouju9EqNqpE4yzctIeuj/3APZ+vKNE4IiIiIhXVht2ZXP7mfL5atp2LezTiqz+fSq2oku1OFxER/9ICZ6kUzIwZfzuDDbszOftfv5RorM/iEtmVkcf4q3oSHhJcorFEREREKpInvvXe2ffX+86iYfUqAc5GREROhDqapFJpWSeKFY8MLvE4v6zbTdu/T9NG4SIiIiJHkZyZx6x1u5m4YAs//b6bvwxspSKTiEg5po4mqXSqRYSy4anhtHxgSonHOutfv3DTGS25b1i7Eo8lIiIiUt5sTs5ixMtzyMgrBKBr4+pcd1rzAGclIiInQ4UmqZSCg4zlDw/m3bmbeX7GuhKN9fovG1izI50nLuhEoxpVtJmliIiIiM+7czeTW1jEc6O6EBocxNntYwkN1qILEZHyTIUmqbRiIkO5bVBr8gqLePXnDSUa65d1uxnw7E+c3S6Wt67pXaKxRERERMqD7PxCJi9JZFin+lzSq3Gg0xERET/R5QKp9O4Z2o5bz2pVKrF+XLuLl3+ML5VYIiIiImXZf5dtJyO3kDF9mwY6FRER8SN1NIkAdw5uy6W9GjPg2Z9KPNa/pq9j1fZ0/jywFZ0bxZR4PBEREZGy5Jvl2/l0cQJLt6bSrl40vZvVCHRKIiLiR+poEvFpXDOSGX87vVRiTVu1kxH/mcPncYmlEk9ERESkLJgcl8itHy9lU3IWfVvU5PUxPbV/pYhIBVMuC01m1sLM3jKzzwOdi1QsrWKjWfv40FKLd9dny7lv8gpy8otKLaaIiIhIIEyOS+Tuz5fTr0Utfr7rTCZc3ZtmtasGOi0REfGzYyo0mVl1M/vczNaa2Roz63ciwczsbTPbZWYrD3NsqJn9bmbrzey+PxrHObfROXf9ieQgcjQRocFseGo4fZrXLJV4kxYl0P7haaRk5pVKPBEREZHSNjkukXsmr6Bfy1pMuLoXIbqznIhIhXWsP+FfBKY559oBXYE1xQ+aWayZRR/03OF2V34XOKRdxMyCgVeAYUAH4HIz62Bmnc3s24O+Yo8xZ5ETFhxkfHpjP8aW4uaUPZ+YwfyNKaUWT0RERKQ0TFq4lTs/W07vZjV4Y2wvqoZrm1gRkYrsqIUmM6sGnA68BeCcy3fOpR502hnA12YW4XvNOOClg8dyzs0C9hwmTB9gva9TKR+YBIx0zv3mnDvvoK9dx/LGzGyEmY1PS0s7ltNFDuvxCzrxl4Glc0c6gNHj5zPyP3MoLPKUWkwRERGRkpKWXcAj36xiQOvafHD9KUSpyCQiUuEdS0dTC2A38I6ZLTWzCWZ2wGJq59xnwDRgkpldCVwHXHoceTQEEoo9TvQ9d1hmVsvMXge6m9n9hzvHOfeNc+6GmBjd1UtOzl1D2vLlLf1LLd7yxDRaPThVS+lERESk3Pt8SSK5BR7uH9aeUC2XExGpFI7lp30I0AN4zTnXHcgCDtlDyTn3LJALvAac75zLPI48DnerCXekk51zKc65m5xzLZ1zTx9HHJET0r1JDX6668xSjdnziRl8vWwbzh3xn4KIiIhImZVXWMRbszfSs2kNOjSoFuh0RESklBxLoSkRSHTOLfA9/hxv4ekAZjYA6AR8CfzjOPNIBBoXe9wI2H6cY4iUqOa
1q/LjnWeUaszbJi2j+f1TKNBSOhERKSFHuyGLmZ1pZmlmtsz39XAg8pTyY+KCLfR7+kc6PPw929NyuWNQm0CnJCIipeiohSbn3E4gwcza+p46G1hd/Bwz6w68CYwErgVqmtkTx5HHIqC1mTU3szBgNPDf43i9SKloWSeKtY8fsp99iWv94NRSjykiIhXfkW7IcphTZzvnuvm+HivVJKVc2ZKSxSP/XUX9mAhuOqMFL47uxqmtagU6LRERKUXHulD6VmCima0AugFPHXQ8ErjEObfBOecBrga2HDyImX0MzAPamlmimV0P4JwrBP4CfI/3jnafOudWncgbEilpEaHBbHxqeKnHbXbfd8xat7vU44qISIV22BuyBDgnKacKizzcO3kFIUFBvD6mJ3cPacfIbg0xO9wuGSIiUlEdU6HJObfMOdfLOdfFOXeBc27vQcd/dc79VuxxgXPuzcOMc7lzrr5zLtQ518g591axY1Occ218+y49eTJvSqSkBQUZG58azoXdj7hnfYm46u2F/GdmfKnGFBGRCu1Yb8jSz8yWm9lUM+t4uIHM7AYzW2xmi3fv1oWRymZPVj5Xv7OQ+Rv38OSFnYitFhHolEREJEB06weRExQUZDx/WTdm3zOwVOP+3w/raP/QNLLyCks1roiIVEjHckOWJUBT51xX4GXgq8MN5Jwb77sw2atOnTp+TlPKsl3puVz82lwWbd7Lc6O6cFGPRoFOSUREAkiFJpGT1LhmJA8Ob1+qMXMKiuj4j++ZuOCQFaoiIiLH46g3ZHHOpe+7m7BzbgoQama1Sy9FKct2ZeRy+ZvzSUrPZeKfTuGSXo2P/iIREanQVGgS8YNxp7fgrsGlf0eVB79cyf1frCj1uCIiUmEc9YYsZlbPfJvsmFkfvPPHlFLPVMqcXem5XD5+PjvScnnnmt70blYz0CmJiEgZoEKTiJ/85azWrHms9O9I9/HCBNo/NI2c/KJSjy0iIuXbkW7IYmY3mdlNvtNGASvNbDnwEjDaOXfw8jqpZHLyixjz1gJ2pOXy7rV9OKWF7iwnIiJeKjSJ+FGVsGDi/j6o1OPmFBTR/uFprEvKKPXYIiJSvh3uhizOudedc6/7/vs/zrmOzrmuzrm+zrm5gc1YyoJ/TlvLuqRMXh/Tkz7N1ckkIiL/o0KTiJ/Vigpn0YOlX2wCGPz8LH7fqWKTiIiIlJw58cm8O3cz1/RvxulttPG7iIgcSIUmkRJQJzqczc+cG5DYQ16Yxdi3FqBVDSIiIuJvKZl53P35clrUqcq9Q9sFOh0RESmDVGgSKUGbnzmX/i1Lf8+C2fHJDPr3L6Rk5pV6bBEREamYFm7aw8WvzWVPVj7PX9qNKmHBgU5JRETKIBWaRErYxD+dEpC4G3Zn0fOJGazclhaQ+CIiIlJx/JaYxjXvLKTIOT64/hS6Nq4e6JRERKSMUqFJpISZGUsfOoc7z2kTkPjnvTyHt+dsCkhsERERKf8+W5zABa/+SlR4CJ/d2F+bf4uIyB9SoUmkFNSoGsatZ7fmjbE9AxL/sW9Xc9GrvwYktoiIiJRfKxJTuXfyCvq1qMW020+nXkxEoFMSEZEyToUmkVI0pGM9VjwyOCCxl2xN5aq3F1JQ5AlIfBERESlfnHM88t9V1IoK59UxPahZNSzQKYmISDmgQpNIKasWEcpHAdq3ada63bR+cCoTZm8MSHwREREpPxZs2sOSrancdnZrqkWEBjodEREpJ1RoEgmA/q1q89NdZwYs/hPfrWHUa3PxeFzAchAREZGyq7DIwz+nraVW1TBG9WwU6HRERKQcUaFJJECa167KykeH0L9lrYDEX7xlLy0emEJyZl5A4ouIiEjZlJKZx00fxrF0ayoPj+hARGhwoFMSEZFyRIUmkQCKCg/ho3F9A5pDrydmMH11UkBzEBERkbIhPbeAka/8yqx1yfxjRAfO79og0CmJiEg5o0KTSBmw4anhAY0/7v3FvD9vc0BzEBERkcB74tvVbE/NYeK4U7j21OaYWaBTEhGRckaFJpEyIDjI+P2JoQHN4eGvVzE5LpHtqTkBzUNERERKn8fjeHvOJj5dnMiNZ7Skd7OagU5JRETKKRWaRMqI8JBg1j4e2GLTnZ8tp/8zM/nH1ysDmoeIiIiUrn9OW8tj365mQOva3HZ260CnIyIi5ZgKTSJlSERoML8/MZSmtSIDmsd787bw7+nrApqDiIiIlDznHBNmb+SNWRsZ27cp71/XR5t/i4jISVGhSaSMCQ8J5pe7B3JWu9iA5vHSj/E8PWVNQHMQERGRkrM9NYcr3lzAE9+tYXCHuvxjRAftySQiIidNhSaRMurVK3vwwmXdAprDG7M28u6vm9iRpn2bREREKhKPx3H7pGWsSEzl4fM68NqYnoQE608DERE5efptIlJGRYQGc0H3hvRuViOgeTzyzWr6PT2TyXGJAc1DRERE/OeH1Uks3LyHh0d04LrTmhMcpE4mERHxDxWaRMq4z27qXyY25bzzs+U0u+87dTeJiIiUc0Uex8sz42laK5KLezQKdDoiIlLBqNAkUg6UhULTPv2enklWXmGg0xAREZETUFjk4aUf41m1PZ07B7fVcjkREfE7/WYRKQeCgoyf7joz0Gns1/Ef35O4NzvQaYiIiMhx+HD+Fro/Pp0Xf4zn/K4NGNGlfqBTEhGRCkiFJpFyonntqmx+5lwePb9joFMB4LR//sSm5Cycc4FORURERI7ig3mb+ftXK+nWuPr+G47oDnMiIlISVGgSKWeu6teUZQ+fE+g0ABj4fz8zYfamQKchIiIif2BLShZPfLeGM9vW4Z1rejO8c32CtPm3iIiUEBWaRMoZM6N6ZBhntKkT6FQAeHLKGprd9x3rd2UEOhURERE5jH9PX0dwkPHMRV20J5OIiJQ4/aYRKafeu64PU28bEOg09hv071k0u+879mblBzoVERER8Vm5LY1vlm/nylOaUC8mItDpiIhIJaBCk0g51r5+NRb/fVCg0zhA98ens3Tr3kCnISIiUql5PI5pK3dy5YQF1K0WwQ2ntwx0SiIiUklU2EKTmY0ws/FpaWmBTkWkRNWOCmfufWcFOo0DXPjqXFZu0789ERGRQEjYk80Fr/7KTR/GUbdaOJ/c0I860eGBTktERCqJCltocs5945y7ISYmJtCpiJS4BtWrsPqxIYFO4wDnvTyHV35aH+g0REREKpXkzDzGvrWALSnZ/N8lXfnm1tNoUisy0GmJiEglUmELTSKVTWRYCHPuHRjoNA7w3Pe/88h/V5FbUBToVERERCo8j8dx0wdx7EzP5e1rejOqZyPCQ4IDnZaIiFQyKjSJVCCNakSy9vGhgU7jAO/O3Uy7h6YRt0X7NomIiJSkb3/bweIte3ns/E70bFoj0OmIiEglpUKTSAUTERrMa1f2CHQah7j4tbnMXJsU6DREREQqpLzCIp77fi3t61fj4p6NAp2OiIhUYio0iVRAQzrW4+IeZW+Sed27i2l233dsS80JdCoiIiIVyss/ruf/27vv+CrLu4/j3182gZBACHuEHcISiQgIDg
RlqGDVVuus67GtPrWtWhytu6W2tba1ymPV1tZBHdVaoeJCcIHsJaCMAAHD3iPzev7IMYYYIAdOcp3xeb9e58W573Mn+eZHIFd+ua7rXr/9gG4flaP4OPMdBwAQw2g0AVEoLs70u2/31ZpfjdYlA9r7jvMNp0x4T+NfWeQ7BgAAEa+s3Okn/1ygR6et1EX92+rUblm+IwEAYhyNJiCKmZnuHNPDd4waTZq9Xpc/NUtz1+5QebnzHQcAgIj06Hsr9a/5G3TDaZ31wPm9fMcBAIBGExDtGiUnaNL1A33HqNEHX2zVBY9/rKc+XOM7CgAAEWfr3iJNnL5KY3q30vhROdxhDgAQFmg0ATFgYKdMLbrnLKWlJPiOUqMHpyzTm0u+1Eb2bgIAoNaem7lOB0vL9OMR3XxHAQCgEo0mIEY0TknU4nvOVkZqou8oNbrh2XkaPOE97S0q1da9Rb7jAAAQ9qYs/lInZTdVl+aNfEcBAKASjSYgxnxw2xm+IxxRr7unKu+Bd3zHAAAgrH2+aY9WbNqjkT1b+o4CAMAhaDQBMSYtJVFz7xoe9gPT+9/4zHcEAADC0q4DJfrFv5coLTlB5/Zt7TsOAACHoNEExKDMRsmaeHl/jenTyneUw3rqwzXKHj9ZL81Z7zsKAEQ1MxtpZivMbKWZjT/CdSeZWZmZXVif+XCoaSs267TfTNOsNdv183NylZWW7DsSAACHoNEExLBHL+mnwZ0zfcc4oltfXqT1zHJU+QAAIABJREFU2/frYEmZ7ygAEHXMLF7SnyWNkpQr6RIzyz3Mdb+WNLV+E6Kq3QdLdOtLi9QiLUVv3DRE3z6pne9IAAB8A40mIIaZmZ6/bmDYL6Mb+tA05fz8TU3/fIsOFNNwAoAQGiBppXNutXOuWNIkSWNruO4mSa9I2lyf4XCo52et09a9RfrNRX3Us3W67zgAANSIRhMAPXbpib4j1MqVT3+qHr94U84531EAIFq0kVR1jXJB4FwlM2sj6XxJE4/0jszsejObY2ZztmzZEvKgsc45p0mfrtOA7Kbq0zbDdxwAAA6LRhMAxcWZVj44yneMWut4+xSNfGQGs5sA4PhZDeeqd/MfkfQz59wR/9N1zj3hnMtzzuVlZWWFLCAqvP3ZJuVv26/vntzedxQAAI6IRhMASVJCfJxWPjhKZ+Y09x2lVpYX7tHDb6/wHQMAIl2BpKob/bSVtLHaNXmSJplZvqQLJT1mZuPqJx4kace+Yj00dYU6ZKbqnDC+kQcAABKNJgBVJMTH6amrTtKoXuG9Z9NX/vLBGl3x9Ke+YwBAJJstqauZdTSzJEkXS3q96gXOuY7OuWznXLaklyX9wDn3Wv1HjU3OOd30wnyt275fD4zrpYR4hu8AgPDGdyoA3/D4Zf2VP2GM7xi1MuPzLcoeP1m3vbxQxaXlvuMAQERxzpVKulEVd5NbJulF59xSM7vBzG7wmw6S9PrCjfpw5Vb9fEwPDe3KkkQAQPij0QTgsG45q5vvCLX24pwCdbvrv1q7bZ/vKAAQUZxzU5xz3ZxznZ1zDwbOTXTOfWPzb+fcVc65l+s/ZWzac7BE97+xTH3apuu7J3fwHQcAgFqh0QTgsH54Rhc9fVWe7xhBOe037+udzzZpf3Gp7ygAAByXv3+yVlv3Fum+sb0UH1fTvu0AAIQfGk0ADsvMNCynhab871DfUYJy7d/nKPcXU/X0h2t0sIQ70wEAIk9xabme/nCNTu+epRPaZfiOAwBArdFoAnBUua0b676xPX3HCNp9b3ymnJ+/Keeq36kbAIDw9sEXW7RtX7EuH8iSOQBAZKHRBKBWrhiUHTEbhFfX8fYpOmXCeyynAwBEjFfmFSgjNZENwAEAEYdGE4Cg/PL83r4jHJMNOw/of19Y4DsGAABHtWTDLk1ZXKhLBrRXUgLDdQBAZOE7F4CgfPfk9lrzq9HKbdXYd5SgvbNsk7LHT9adry72HQUAgBrtOlCiG5+fp2aNknTDaZ19xwEAIGg0mgAEzcz0xk1DdHbPFr6jHJPnZq1Tv/ve0ug/fKCSsnLfcQAAqPS3j/KVv22/Jl7WX+kNEn3HAQAgaDSaAByTuDjT/12ep3vOzfUd5Zjs2F+iz77crXlrd2jX/hI2DAcAeHeguEzPfJKv4T2aKy+7qe84AAAcExpNAI7LpRF+N5zvPDFTfe97S/3uf1vl5TSbAAD+vDR3vbbvK9b/sGQOABDBaDQBOC6J8XFadt9IndYtsu+Ks3N/iTrdMUV/nrbSdxQAQAwqKi3T/01frRPbZyivQxPfcQAAOGY0mgActwZJ8Xrm6gG+Y4TEb6au0KRP17GUDgBQr16cU6ANOw/o5uHdZGa+4wAAcMxoNAEImfwJY3T/uF6+Yxy38f9arI63T9GP/7nAdxQAQAxwzumZj/PVt226hnZt5jsOAADHhUYTgJC67OT2viOEzKvzNyh7/GTNXbvDdxQAQBT7ZPU2rdy8V5ee3IHZTACAiEejCUBImZne/vGpuvikdr6jhMwFj3+s7PGTtetAie8oAIAoU1bu9ODkZWrZOEXn9m3tOw4AAMeNRhOAkOvaIk0TLuijN28e6jtKSPW99y39/LUlen/FZpWWlfuOAwCIAs/PWqulG3frzjE91CAp3nccAACOG40mAHUmp2VjfXDbGb5jhNQ/Zq7VVX+drTMfnq65a7f7jgMAiGClZeX6w7tfaFCnTJ3Tp5XvOAAAhASNJgB1ql3TVC2+5yzfMUJu7bb9uuDxT5Q9frLvKACACPXxqm3aurdYV52Szd5MAICoQaMJQJ1LS0nUK98frCevyPMdpU6MfGSG7wgAgAj0+sKNSktO0GndsnxHAQAgZGg0AagX/Ts00fDcFlE5mF5euEfZ4yfrX/MKNG/dDq3fvt93JABAmDtYUqY3lxRqZK+WSklkbyYAQPRI8B0AQGx55uoBUbvc7CcvLqx8Pveu4cpslOwxDQAgnL27bLP2FpXq/H5tfEcBACCkmNEEoN7NvP1Mzb5zuO8Ydar/A++ovNz5jgEACFOvzt+gFo2TdXKnTN9RAAAIKRpNAOpdy/QUZaUla+m9Z/uOUqdG//EDbd5z0HcMAECY2b6vWNM/36yxJ7RRfBybgAMAoktELp0zs06S7pSU7py70HceAMemYXKCRvVqqf8uKfQdpU4sL9yjAQ++K0nq2y5Dt5zVTV2bp6lleornZAAAn56duVYlZU4X9m/rOwoAACFX6xlNZhZvZvPN7I1j/WBm9rSZbTazJTW8NtLMVpjZSjMbf6T345xb7Zy75lhzAAgff7i4n+8I9WLh+p26/KlPNfBX7+pgSZnvOAAAT3YfLNFfP1qjM3Oaq1uLNN9xAAAIuWCWzv1I0rKaXjCz5maWVu1clxou/ZukkTW8fbykP0saJSlX0iVmlmtmvc3sjWqP5kFkBhDmkhLilD9hjP79w1N8R6k3OT9/U29/tkkHimk4AUCsefS9ldp5oEQ/HtHNdxQAAOpErRpNZtZW0hhJTx7mktMk/dvMUgLXXyfpj9Uvcs7NkLS9hrcfIGllYKZSsaRJk
sY65xY7586p9thcy8znmtkTu3btqs3lADzr2y5D5/dro5yWsfHb3ev+Pkc9fvGmpi6NzmWDAIBv2rKnSM98nK/zT2ijXm3SfccBAKBO1HaPpkck3Sapxp8AnXMvmVlHSZPM7CVJV0saEUSONpLWVzkukHTy4S42s0xJD0rqZ2a3O+d+VUOm/0j6T15e3nVB5ADg0e+/c4IkafOeg5V7G0W7//nH3Mrnr994ivq0zfCYBgBQl578cLVKysp147CaJv4DABAdjjqjyczOkbTZOTf3SNc55x6SdFDS45LOc87tDSJHTbfbOOx9wZ1z25xzNzjnOtfUZAIQ2ZqnpeiHZ3RWdmaq7yj16rxHP9K8dTu0fvt+31EAACG2Y1+xnv1krc7p01qdshr5jgMAQJ2pzdK5UySdZ2b5qljSNszMnq1+kZkNldRL0quS7g4yR4GkdlWO20raGOT7ABBFbj07R+/feoaev/awkxuj0rce+1hDH5qmmau3ybnD9tsBABHm4bc/14GSMt3EbCYAQJQ7aqPJOXe7c66tcy5b0sWS3nPOXVb1GjPrJ+kvksZK+p6kpmb2QBA5ZkvqamYdzSwp8HFeD+LtAUSpwV2a6fKBHXzHqHcXPzFTHW+fojtfXaxP12yn6QQAEWzJhl16btZaXTEoW1250xwAIMoFc9e5I0mVdJFzbpVzrlzSlZLWVr/IzF6Q9Imk7mZWYGbXSJJzrlTSjZKmquLOdi8655aGKBuACHf/uF6+I3jz3Kx1+vb/faI/vrtSX+464DsOACBIzjnd95/P1CQ1iTvNAQBiQm03A5ckOefel/R+Dec/qnZcoooZTtWvu+QI73uKpCnB5AEQOzo2a6g1W/f5juHN79/5XL9/53N975Rs3X1uT99xAAC19PGqbfo0f7vuG9tT6Q0SfccBAKDOBdVoAgBfpt1yuiSprNyp8x2x25P+60f5apScoFG9Wim3dWPfcQAAR/HkB6uVlZasb+e1O/rFAABEgVAtnQOAehEfV9NNKmPLn95bqdF//EDb9hZp856DenH2et+RAAA12LjzgKZ/vkXfyWunlMR433EAAKgXzGgCEHHm3jVc/R94x3cM76rWoHVGA/Vs3VhNGiZ5TAQAqOrluQUqd2I2EwAgpjCjCUDEyWyUrB8PZ0PVqi57apb63f+25q/b4TsKAEBSebnTP2ev1yldMtU+M9V3HAAA6g2NJgAR6UfDu2rS9QPVJqOB7yhh5fzHPlb2+MlaVLBT0z/fIuec70gAEJM+WrVVG3Ye0HdOau87CgAA9YqlcwAi1sBOmfpo/DCVlJVr7bZ9Gv7wDN+RwsZ5jx5yM1BdNThb95zH3eoAoL5Mmr1eGamJOiu3he8oAADUK2Y0AYh4ifFx6tI8Tf07NPEdJWz97eN8rdy8R9v3FfuOAgBRb/32/Zq6pFAXntiWTcABADGHRhOAqPHctSfr4pPa6a4xPdSsEZtiVzf84Rk68f639cKn67Rtb5HvOADChJmNNLMVZrbSzMbX8PpYM1tkZgvMbI6ZDfGRM5I8MWO14sx07dBOvqMAAFDvaDQBiBopifGacEEfXTu0k9675XQ9e83JviOFpdv/tVj9H3hHc/K3a29Rqe84ADwys3hJf5Y0SlKupEvMLLfaZe9K6uucO0HS1ZKerN+UkaW4tFz/WbRRo3q3VMv0FN9xAACod+zRBCAqNU5J1JCuzXzHCGsXTvxEkjSwU1Od0rmZbjqzq+dEADwYIGmlc261JJnZJEljJX321QXOub1Vrm8oibsMHMFHK7dq5/4Snde3te8oAAB4wYwmAFHtnnOr/2Ie1c1cvV2/e/tzvTy3QKf/Zpq63jlFB0vKfMcCUD/aSFpf5bggcO4QZna+mS2XNFkVs5q+wcyuDyytm7Nly5Y6CRsJXl+4UY1TEjS0a5bvKAAAeEGjCUBUG92nldIbJOrlGwb5jhL2bnlpofK37VdJmdPNkxZodv52OcfEBSDKWQ3nvvEP3zn3qnMuR9I4SffX9I6cc0845/Kcc3lZWbHZZDlYUqa3lhZqVK9WSkpgmA0AiE0snQMQ1ZqnpWjh3Wf5jhFx3lxaqDeXFio5IU6f3TdS8XE1/SwKIAoUSGpX5bitpI2Hu9g5N8PMOptZM+fc1jpPF2Fem79B+4rLNLYfy+YAALGLX7UAiBlL7j1bv/9OX/3qW719R4kYRaXl6nzHFC0u2KWte4v0wRexuxwGiFKzJXU1s45mliTpYkmvV73AzLqYmQWenygpSdK2ek8a5krLyvX49FXq3SZdgzpl+o4DAIA3zGgCEDMaJSfo/H5tJUm7D5ToV/9d7jlR5Dj30Q8rnz99VZ6GdMliWQgQBZxzpWZ2o6SpkuIlPe2cW2pmNwRenyjpAklXmFmJpAOSvuNYV/sNkxd/qbXb9mviZf0V6MsBABCTaDQBiElXD+lIo+kYXf23OerTNl3De7TQwvU7Nap3Kw3t2kwtGnMbbyASOeemSJpS7dzEKs9/LenX9Z0r0jz9Ub66NG+ks3Jb+I4CAIBXNJoAxKTE+Dit+uVovbmkUKN7t1TH26cc/Y1QaVHBLi0q2CVJenf5ZknS1JtPVfeWaT5jAYAXq7bs1cL1O3XXmB6KY087AECMY90DgJgVH2ca06cVSxxC5IkZq/XvBRuUPX6yVhTu8R0HAOrNq/M2KM6k8/qyCTgAAMxoAoCA9AaJ2nWgxHeMiPXKvAK9Mq9AknT2IzN0UnYTDeyUqd5t0nVWz5ae0wFA3Sgvd3ptwQYN6Zql5iwhBgCARhMASNKrPxis9k1TtWbrPl048RPfcaLC7Pwdmp2/Q5J0Vm4LPXFFnudEABB6n+ZvV8GOA7rlrO6+owAAEBZYOgcAkvq1b6LMRsnKy26qJqmJvuNEnbc+26SNOw/o2Zlrxc2qAESTv360RhmpiTqbmZsAAEhiRhMAfMP8X5yleet26FuPfew7SlQZPOE9SdKvpizTvuIy3Ty8q24e3k2Fuw4qIzVRKYnxnhMCQHDWbduvtz7bpB+c3lkNkvg/DAAAiUYTANToxPZN9P4tp2t54W4lJcTp6r/N8R0pauwrLpMkPfLOF9p9oFRPf7RGAzs11aTrB3lOBgDBeeaTfMWb6fKB2b6jAAAQNlg6BwCHkd2soUb2aqVhOS30+QOjfMeJSk9/tEaSNHP1dv1yyjJt31dc+dq2vUXaureIpXYAwtKegyX65+z1GtOnlVqmswk4AABfYUYTANRCUkKc7jk3V/uKy/TSnPXK37bfd6So88SM1XpixupvnP/+6Z31s5E5HhIBwOG9NKdAe4tK9b1TOvqOAgBAWKHRBAC1dFXgh4lX5hV4ThJbHn9/lXJbNVar9BTlZTf1HQcA5JzTc7PW6oR2GTqhXYbvOAAAhBWWzgFAkP5yRZ6uGpyt1288xXeUmHHTC/N14cRPlD1+sl6ZS6MPgF9z1u7Qqi379N0B7X1HAQAg7NBoAoAgdc5qpHvO66k+bTM0/dbTNaoXt7SuTz99aaFGPDz9kHOPvveFVm/Z6ykRgFjzwqfr1Cg5Qef0beU7
CgAAYYelcwBwHDpkNtTjl/VX9vjJvqPElC8271Xve6Zqz8FSdW+RphWb9uj5Wev092sGqKxc2rq3SAM7ZSo+znxHBRBldh8s0ZTFX+pbJ7ZVahJDaQAAquO7IwCEwHs/PU3Dfjf96BciZPYcLJUkrdi0R5K0dV+xhj88o/L1BonxOiMnS49d2l8ffrFVs/O368cjunnJCiB6/GfhRh0sKdd38tr5jgIAQFhi6RwAhECnrEa6JLBXR7NGSZ7TxKbi0vJDjg+UlGnK4kJJ0mVPzdIf3v3CRywAUebluQXq1qKR+rRN9x0FAICwRKMJAELk3vN66hfn5Grm7WeqReNktUpP8R0J0iHLGkc8PF37iipmQv1j5lot+3K3r1gAItDKzXs0f91OXdi/rcxYmgsAQE1YOgcAIZKUEKerh3SUJM26Y7gksXdTmPli81798Pl5OqN7c939+lJJUv6EMZ5TAYgUL8/doPg407h+bXxHAQAgbNFoAgDElPdXbNH7K7ZUHu/aX6K0lAQt3rBLPVo1VlICk30BfFNZudOr8wt0ercsNU9jxioAAIdDowkA6tCMW8/Quu37ddlTs3xHwWH0ve+tyufxcabebdI16fqBSkmM95gKQLiZ8cUWbdpdpHvPa+s7CgAAYY1f2wJAHWqfmaohXZspf8IY5U8Yo0bJ9PfDWVm504L1O5X3wDtatWWv7n/jM01dWqiDJWW+owHw7OW5BWqSmqhhOS18RwEAIKzxEw8A1KMl956tlZv3avjD031HwRHsLSrVmb+r+Dt66sM1kqTHLj1Rew6WaP66nbp5eDe1ZLN3IGbs3F+st5du0ndPbs/yWgAAjoJGEwDUsy7NG2ncCa312oKNatukgQp2HPAdCbXwg+fmVT6fNHu9BnfOVPumqZpwQR9JknNO7y3frNO6ZSkhnh9EgWjyn4UbVVxWrgv7s2wOAICjYSQMAB789qK+WvCLEbr3vJ6+o+AYfbxqmybNXq99RaX6eOVWTZq9Xtc8M0ePvb9K/16wQQvW7/QdEUCIvDS3QDkt09SzdWPfUQAACHvMaAIADxLi45SRmqRhOc31yHdO0M3/XOA7Eo5Rz7unHnL88NufVz6/ZkhHjTuhjXq3Ta/vWABCZEXhHi0q2KWfn5MrM/MdBwCAsEejCQA8MjON69dG+4pLtWrzPi3duEuz1mz3HQsh8tSHa/TUh2v0yvcHq1+7DMXF8UMqEGlemVeghDjTuBNa+44CAEBEoNEEAGHg0pM7SKrYhLpXtRkyiHwXPP6xJOmO0TnKSE3SmN6t1DA5QR9+sVWtMlLUOauR54QAalJSVq5/zdugYTnNldko2XccAAAiAo0mAAgjjZITNO2W07Vm617NXbtDf562ynckhNAvpyyXJN328qJDzudPGKNz/vSBWqSlqH92E103tJMSAxuKz1+3Q1v2FOmsni3rPS8Q62Z8vkVb9xaxCTgAAEGg0QQAYaZjs4bq2KyhhuW00GvzN2rDTu5KF+2mLd+sJRt2a4l2693lm9UoOUFXDMqWJJ3/WMVsqPwJYzwmBGLTS3MKlNkwSWfkNPcdBQCAiMFd5wAgjL38/UG6/tROvmOgjn3vb7MPOS7cdVCbdh/Uz6rMfJq7doeKSsvqOxoQs7bvK9a7yzdpXL82lTMMAQDA0TGjCQDCWKv0BrpjdA89MWO17yioR4+9v0qPvX/ossmv9nn6zYV9lJWWrIGdMpWSGO8jHhAT/r1gg0rKHMvmAAAIEo0mAIgA+RPGKH/rPr3w6TqNH5WjIb+expK6GHVrlVlO7/70NK0o3KMfPDdPEy87Ud1bNlbHZg0lSQU79ispIU7N01J8RQUi2stzC9SrTWP1aNXYdxQAACIKjSYAiBDZzRrq9tE9JElv3jxUd766RK8v3Og5FXw683fTK5/f8Ow8SdKNZ3TR2T1b6txHP5TE3k7AsVi5ea+Wbtytu8/N9R0FAICIw4JzAIhAaSmJ+uMl/XzHQBh6dNrKyiYTgGMzdWmhJGlUr1aekwAAEHmY0QQAEeyNm4aoQVK8pi4t1ENvrvAdB2Eoe/xkSdIHt52h3QdL9PBbn+sHZ3TRCe0yFGeSmXlOCISfqUsLdUK7DLVMZ+kpAADBishGk5l1knSnpHTn3IW+8wCAL73apEuSfnB6F0nSgeIylZY7PV5tI2lg6EPTKp+/u3yzJOmBcb20vHC3duwrUc82jXX90E5K4O5aiHEbdh7QooJd+tnIHN9RAACISEdtNJlZiqQZkpID17/snLv7WD6YmT0t6RxJm51zvaq9NlLSHyTFS3rSOTfhcO/HObda0jVm9vKx5ACAaPRVs6mcRhNq6a7XllQ+n7z4y8pZcY9+t58KdhzQhP8u1+UDO6h323Rd1L8ts58QE94KLJs7u2cLz0kAAIhMtZnRVCRpmHNur5klSvrQzP7rnJv51QVm1lzSAefcnirnujjnVlZ7X3+T9Kikv1c9aWbxkv4saYSkAkmzzex1VTSdflXtfVztnNtcq88OAGJQXJzpk9uH6bFpq/SPmWt9x0EEuvH5+ZXPv/oaapPRQKd0aabSsnLtKypTemqir3hAnXpzSaG6t0hTp6xGvqMAABCRjjo/3lXYGzhMDDxctctOk/TvwOwnmdl1kv5Yw/uaIWl7DR9mgKSVzrnVzrliSZMkjXXOLXbOnVPtQZMJAI6iVXoD3T+ul+4aU3GXuv/+aKim33q631CIaJc+OUtTFn+pCyZ+or73vaXs8ZP1u7fYFwzRZdveIs3O385sJgAAjkOtNmIws3gzWyBps6S3nXOzqr7unHtJ0puSJpnZpZKulvTtIHK0kbS+ynFB4Nzh8mSa2URJ/czs9sNcc66ZPbFr164gYgBAdLlmSEd9cNsZ6tGqsTpkNtSvvtXbdyREsB88N08L1++sPP7TeyuVPX6yLpr4sYpKy/Ti7PVaVLBTxaXlKi0r164DJR7TAsF7Z9kmlTvp7F4tfUcBACBi1WozcOdcmaQTzCxD0qtm1ss5t6TaNQ+Z2SRJj0vqXGUWVG3UtOlD9VlTVT/WNkk3HCXzfyT9Jy8v77ogcgBAVDEztWuaWnl8yYD2uv1fiz0mQjSanb9Dw347XRt2HpBUscyue8s0vbd8sz6980zN+Hyrzu3bSskJ8Z6ToiZH2ycz8EvEnwUO90r6vnNuYf2mrB9vLilUu6YNlNuqse8oAABErKBuLeOc2ynpfUkjq79mZkMl9ZL0qqRgNwsvkNSuynFbSRuDfB8AgFr4aPwwTb/1dJ2U3aTy3P3jeh3hLYCj+6rJ9NXz9wJ3thvw4Lu65aWF+sM7X/iKhiOosk/mKEm5ki4xs9xql62RdJpzro+k+yU9Ub8p68eegyX6aOU2nZ3bko3vAQA4DrW561yWpBLn3E4zayBpuKRfV7umn6S/SBqjisHIs2b2gHPurlrmmC2pq5l1lLRB0sWSvlv7TwMAUFttMhpIkl66YbB2HyxRUUm5stKS9fPXlhzlLYFjt2VPke8IqFnlPpmSFJidPlbSZ19d4Jz7uMr1M1XxC8GoM23FFhWXlWs
ky+YAADgutZnR1ErSNDNbpIqG0NvOuTeqXZMq6SLn3CrnXLmkKyV941ZHZvaCpE8kdTezAjO7RpKcc6WSbpQ0VdIySS8655Ye6ycFAKidximJykpL9h0DgD9B7ZMp6RpJ/63pBTO73szmmNmcLVu2hDBi/Zi6tFDNGiXrxPZNjn4xAAA4rKPOaHLOLZLU7yjXfFTtuEQVM5yqX3fJEd7HFElTjpYHAABEHlYiha1a75NpZmeootE0pKbXnXNPKLCsLi8v77B7bYajgyVlmrZ8s8b1a6O4OL5YAQA4HkHt0QQAiF5v3DRE/3tmV82640ylJPLtAaHlIqrtEFNqtU+mmfWR9KSksYGbskSVD7/Yqv3FZRrZk2VzAAAcL36SAABIknq1SddPRnRTi8Ypev3GIbp9VI4kKb1Boto2aeA5HYA6UrlPppklqWKfzNerXmBm7SX9S9LlzrnPPWSsc1OXFiotJUEDO2X6jgIAQMQ76tI5AEDs6dYi7etHyzQ1TU3SzNXb9L2/zfYdDRGKpXPhyTlXamZf7ZMZL+lp59xSM7sh8PpESb+QlCnpscDd2Eqdc3m+ModaaVm53lm2ScN7tFBSAr+DBQDgeNFoAgAc1hk5zSuf52U3UYPEeA3r0VxfbNqjzzft9ZgMQKjUtE9moMH01fNrJV1b37nqy6f527Vjf4nO7tnCdxQAAKICjSYAQK2kpSRq2f0jK49/OWWZnpix2mMiRBL2aEK4mrqkUCmJcTq1W5bvKAAARAXmBwMAjsngzuxlAiCylZc7TV26Sad1y1JqEr9/BQAgFPiOCgA4Jqd3b66pN5+qhQU7tWt/iR6cssx3JIQx9mhCOFq0YZcKdx/UbT27+44CAEDUoNEEADhm3VumqXvLNEkVd6375+x1em3BN+6MDrB0DmHpzSWFSogznZnD/kwAAIQKS+cAACExqHOmHrm4n+bcNdx3FAA4Kuec3lpaqEGdM5Wemug7DgAAUYNGEwAgpJo1Stand5xZ42sv/s+gek7OARSZAAAR1UlEQVQDADVbXrhHq7fu08heLX1HAQAgqtBoAgCEXPPGKZr38xG6anC2BnZqWnn+xPYZh21CAUB9emPRRsXHmUb2pNEEAEAo0WgCANSJpg2TdM95PTXp+q9nMcWZqXnjFP3lijz1bpPuMR2AWOac0+RFX2pw50xlNkr2HQcAgKhCowkAUOe6t6jYMPyrO4+NyG2hu8/N9ZgIQCxbunG38rft15jerXxHAQAg6nDXOQBAnXv+upO1onCPjHvcAwgDbyz6UglxprNZNgcAQMgxowkAUOcyGyVrcJdmh5zr1Sb9kP2blt8/sr5jAYhBzjlNXrxRp3RppiYNk3zHAQAg6tBoAgB4kZIYr0nXD9INp3VWeoNEpSTGVy5jWfEATScAdWNRwS6t335AY/qwbA4AgLpAowkA4NX4UTlaePdZkqTff+cEzb5zuJIT4jW6N0taAITe5MVfKjHedHYu/8cAAFAX2KMJABA2khLilJVWcQeo317UV1cOylanrEZKTYrXkF+/px37SzwnBBDJvrrb3NCuWUpPTfQdBwCAqESjCQAQllKTEnRyp8zK46k/PlWFuw6qT9sM3fHqYj0/a53HdAAi0fz1O7Vh5wH99KxuvqMAABC1WDoHAIgIzdNS1KdthiTpgbG99ObNQ3X9qZ1qvDYthd+jhBvnOwAgafKiL5UUH6fhuS18RwEAIGrRaAIARJy4OFNOy8a6fGCHb7zWv0MTvX/L6fUfCkBYKy0r1xuLNurUbllqnMKyOQAA6gqNJgBAxGrXNFX5E8Z843waP0SGHfMdADHvgy+2atPuIl3Yv63vKAAARDUaTQCAqNIwOUFJCXFa8cBI31FQBUvn4NuLc9Yrs2GShuU09x0FAICoxiYWAICId8foHK0o3KvOzRvq23ntJEnJCfFa/cvRWr9jv+LjTK3SG6jzHVM8JwXgw7a9RXpn2SZdOShbSQn8nhUAgLpEowkAEPGuP7Vzjefj4kwdMhvWcxrUhKVz8Om1BRtVUuZ0UaARDQAA6g6/0gEAxIyEuK/bHUO7NvOYBEB9cc7ppTnr1bdturq3TPMdBwCAqEejCQAQM2bdcaZG926pP17ST/+45mRdfBKzG+oLezTBlyUbdmt54R5mMwEAUE9YOgcAiBmZjZL12KX9K48nXNBHjRsk6okZqz2mAlCXXpyzXskJcTq3b2vfUQAAiAnMaAIAxLSrBmcrOzP1kHO3nt1dD13Yx1Oi6MQeTfDhQHGZXluwQSN7tVR6g0TfcQAAiAk0mgAAMa11RgO9f+sZGtKlYs+mE9tn6IdndFFehyaHXDcit4WPeFGDpXPw4fWFG7TnYKkuPbmD7ygAAMQMls4BACDp2WtP1vrt+9W0YZIkqVNWI71x0xB1yExVcWm5duwv1rTlmzXtltM19KFpntMCqI1nZ65TtxaNdFJ2k6NfDAAAQoIZTQAABLRrmqqGyV//DqZXm3SlpSQqs1GyujRP08pfjla7pocus7tuaEdJUr/2GfWaNdKwdA71beH6nVq8YZcuG9hBZnwFAgBQX2g0AQAQpHd+cpqy0pIlSRcPaK9Xvj9Ik64fqGeuHuA5Wfhi6Rzq27Mz1yo1KV7n92vjOwoAADGFRhMAAEHq0ryR7hrTQ5LUKj1F/Ts0VXJCvPq0SfecDIAk7dxfrNcXbtS4fm2UlsIm4AAA1CcaTQAAHIOxJ7RR/oQxSk36eqldeoNEDctprheuG1h5rlOzhj7iATHt5bkFKiot12VsAg4AQL1jM3AAAEIkLs709FUnSZImXT9QZeVOM1dv05/eW+k5GRA7ysudnp+1Tv07NFFu68a+4wAAEHNoNAEAUAcGdsqUJA3qlKlrh3ZSeoNE/WfhRt33xmfasqeo8rq5dw3XO8s26WevLPYVFYgq/11SqNVb9+lPI7r5jgIAQEyi0QQAQB2KizOlN6jYI+bcvq11arcsbdtbpBufn6+fjOimzEbJnhMC0cM5p0enrVTnrIYa3buV7zgAAMQkGk0AANSj9AaJSm+QqCk/Glp5Lj6OLROBUPhk1TYt+3K3Hrqgj+LjzHccAABiEiNbAAA8O69va103tKNSk+IlSXeMzlGzRsl65fuD9MFtZ3hOB0SOJz9co2aNknTeCa19RwEAIGYxowkAAM+SEuJ055hc/XhENxWXlisjNUnXn9rZdywgoqzcvFfvLd+sn4zoppTEeN9xAACIWcxoAgAgTKQmJSgjNekb5/90ST8PaYDI8vRHa5ScEKdLT27vOwoAADGNGU0AAIS5c/u2Vp+26Sord1q6cbdG5LZQzs/f9B0LCBvb9hbplbkF+taJbdlgHwAAz2g0AQAQATpkNpQkdcpqJEl6YFwvbd59UKd2y9K8dTv0yynLfcY7Kud8J8DhmNlISX+QFC/pSefchGqv50j6q6QTJd3pnPtt/ac8sudmrVNRabmuGZLtOwoAADGPRhMAABHosoEdKp/3a99Er83fqOG5LVRcWq6J01d5TFYzR6cpLJlZvKQ/SxohqUDSbDN73Tn3WZ
XLtkv6X0njPEQ8qoMlZfr7J/k6o3uWujRP8x0HAICYR6MJAIAIFx9nmvKjoZXH40flSJKyx0+WJP396gF6dNpKfbpmu5d8klROoylcDZC00jm3WpLMbJKksZIqG03Ouc2SNpvZGD8Rj+z1hRu1dW+xrh3ayXcUAAAgNgMHACBq3TSsi4blNNep3bL04v8Mqjw/vEcLSdLD3+5bee6nI7rVaRbaTGGrjaT1VY4LAueCZmbXm9kcM5uzZcuWkIQ7GuecnvpgjXJapmlw58x6+ZgAAODImNEEAECU+ulZ3Q85Xn7/SJlJiXFxKikvV7yZfvLiQklSi8YpdZqlnE5TuLIazh3T35Zz7glJT0hSXl5evfyNf7hyq1Zs2qPfXtRXZjV9KgAAoL4xowkAgBiRkhiv5IR4xcWZkhPilRAfp3vP6ylJGtK1mSRpdO+WNb7tnaN76B/XDNBVg7OP6WOzdC5sFUhqV+W4raSNnrIE7ckP1igrLVnn9m3lOwoAAAhgRhMAADHsysHZujLQPMqfULEFT2lZufrc+5b2F5dVXnfdqRX73zRIjNffPs4P+uOwGXjYmi2pq5l1lLRB0sWSvus3Uu2sKNyj6Z9v0S1ndVNyQrzvOAAAIIAZTQAA4BAJ8XH67L6Runl4V0nS0nvPPu73SZ8pPDnnSiXdKGmqpGWSXnTOLTWzG8zsBkkys5ZmViDpJ5LuMrMCM2vsL3WFP09bqYZJ8YfcgREAAPhHowkAANTo5uHdlD9hjBomfz0BunlaxV5ON5zWWV88OEp3n5tb+dqCX4xQu6YNKo9P7ZZV+Zylc+HLOTfFOdfNOdfZOfdg4NxE59zEwPNC51xb51xj51xG4Plun5nXbN2nNxZt1GWDOigjNclnFAAAUA2NJgAAUGvtM1M17ZbTdevZ3ZUYH6fvndJRrdMrmk/pDRKVGPf10OK6oR0rn7MZOELp/6avUmJ8nK4d0sl3FAAAUA17NAEAgKB0bNbwkONXfjBY89bu/MZdv4Z2/XpGExOaECo79hXr1fkbdEH/tspKS/YdBwAAVMOMJgAAcFxapTfQmD413/Urp2WaJB2yxA44Hs98kq+i0nJdOSjbdxQAAFADZjQBAICQiYs7dFbTK98frF0HStQ6o8Fh3gKovYId+zVx+iqN6d1K3QNNTAAAEF6Y0QQAAELmL1fkadwJrfX+LadLkhomJ9BkQsjsPlCqnJaNdceYHr6jAACAw2BGEwAACJmOzRrqkYv7+Y6BKJXburFe++EpvmMAAIAjYEYTAAAAAAAAQoJGEwAAAAAAAEKCRhMAAAAAAABCgkYTAAAAAAAAQoJGEwAAAAAAAEKCRhMAAAAAAABCgkYTAAAAAAAAQoJGEwAAAAAAAEKCRhMAAAAAAABCgkYTAAAAAAAAQoJGEwAAAAAAAEKCRhMAAAAAAABCgkYTAAAAAAAAQsKcc74z1Ckz2yJpbR29+2aSttbR+45W1Cx41Cx41Cw41Ct41Cx4dVmzDs65rDp63zgGjL/CDjULHjULHjULHjULHjULXr2PwaK+0VSXzGyOcy7Pd45IQs2CR82CR82CQ72CR82CR80QKnwtBY+aBY+aBY+aBY+aBY+aBc9HzVg6BwAAAAAAgJCg0QQAAAAAAICQoNF0fJ7wHSACUbPgUbPgUbPgUK/gUbPgUTOECl9LwaNmwaNmwaNmwaNmwaNmwav3mrFHEwAAAAAAAEKCGU0AAAAAAAAICRpNx8DMRprZCjNbaWbjfecJF2bWzsymmdkyM1tqZj8KnG9qZm+b2ReBP5tUeZvbA3VcYWZn+0vvl5nFm9l8M3sjcEzNjsDMMszsZTNbHvh6G0TNjszMfhz4d7nEzF4wsxRqdigze9rMNpvZkirngq6RmfU3s8WB1/5oZlbfn0t9OUzNfhP4t7nIzF41s4wqr8V8zXDsGH/VjPHXsWP8FRzGX8Fj/HV0jL+CFxHjL+ccjyAekuIlrZLUSVKSpIWScn3nCoeHpFaSTgw8T5P0uaRcSQ9JGh84P17SrwPPcwP1S5bUMVDXeN+fh6fa/UTS85LeCBxTsyPX6xlJ1waeJ0nKoGZHrFcbSWskNQgcvyjpKmr2jTqdKulESUuqnAu6RpI+lTRIkkn6r6RRvj+3eq7ZWZISAs9/Tc14hOLB+OuItWH8dey1Y/wVXL0YfwVXL8ZftasT46/Q1Cysxl/MaAreAEkrnXOrnXPFkiZJGus5U1hwzn3pnJsXeL5H0jJV/Ac7VhXfmBT4c1zg+VhJk5xzRc65NZJWqqK+McXM2koaI+nJKqep2WGYWWNV/Of6lCQ554qdcztFzY4mQVIDM0uQlCppo6jZIZxzMyRtr3Y6qBqZWStJjZ1zn7iK7+B/r/I2Uaemmjnn3nLOlQYOZ0pqG3hOzXA8GH8dBuOvY8P4KziMv44Z46+jYPwVvEgYf9FoCl4bSeurHBcEzqEKM8uW1E/SLEktnHNfShWDIUnNA5dRywqPSLpNUnmVc9Ts8DpJ2iLpr4Hp7k+aWUNRs8Nyzm2Q9FtJ6yR9KWmXc+4tUbPaCLZGbQLPq5+PVVer4jdkEjXD8eH/pVpg/BUUxl/BYfwVJMZfx4Xx1/HxPv6i0RS8mtYtcuu+KsyskaRXJN3snNt9pEtrOBdTtTSzcyRtds7Nre2b1HAupmqmit8MnSjpcedcP0n7VDGl9nBivmaBde1jVTFdtrWkhmZ22ZHepIZzMVWzWjhcjahdgJndKalU0nNfnarhMmqG2uLr5CgYf9Ue469jwvgrSIy/6gRjiaMIl/EXjabgFUhqV+W4rSqmQEKSmSWqYpDznHPuX4HTmwJT8xT4c3PgPLWUTpF0npnlq2IZwDAze1bU7EgKJBU452YFjl9WxcCHmh3ecElrnHNbnHMlkv4labCoWW0EW6MCfT1Vuer5mGJmV0o6R9KlgenYEjXD8eH/pSNg/BU0xl/BY/wVPMZfx47x1zEIp/EXjabgzZbU1cw6mlmSpIslve45U1gI7FL/lKRlzrmHq7z0uqQrA8+vlPTvKucvNrNkM+soqasqNiSLGc65251zbZ1z2ar4WnrPOXeZqNlhOecKJa03s+6BU2dK+kzU7EjWSRpoZqmBf6dnqmIPD2p2dEHVKDC9e4+ZDQzU+ooqbxMTzGykpJ9JOs85t7/KS9QMx4Px12Ew/goe46/gMf46Joy/jh3jryCF3fjraLuF86hxl/fRqrijxypJd/rOEy4PSUNUMd1ukaQFgcdoSZmS3pX0ReDPplXe5s5AHVcoiu8MUMv6na6v73pCzY5cqxMkzQl8rb0mqQk1O2rN7pW0XNISSf9QxZ0nqNmhNXpBFXsolKjitzzXHEuNJOUF6rxK0qOSzPfnVs81W6mKvQC++j4wkZrxCMWD8ddh68L46/jqx/ir9rVi/BV8zRh/Hb1GjL9CU7OwGn9Z4AMAAAAAAAAAx4WlcwAAAAAAAAgJGk0AAAAAAAAICRpNAAAAAAAACAkaTQAAAAAAAAgJGk0AAAAAAAAICRpNAAAAAAAACAkaTQAAAAAAAAgJGk0AAAAAA
AAIif8Hi6zSoYfg0AcAAAAASUVORK5CYII=\n",
455 | "text/plain": [
456 | ""
457 | ]
458 | },
459 | "metadata": {
460 | "needs_background": "light"
461 | },
462 | "output_type": "display_data"
463 | }
464 | ],
465 | "source": [
466 | "fig, ax = plt.subplots(1, 2, figsize=(20, 6))\n",
467 | "\n",
468 | "ax[0].semilogy(*zip(*step_losses))\n",
469 | "ax[0].set_title(\"Step Loss\")\n",
470 | "\n",
471 | "ax[1].plot(*zip(*epoch_metrics))\n",
472 | "ax[1].set_title(\"Per-Step Validation Results\")\n",
473 | "plt.show()"
474 | ]
475 | },
476 | {
477 | "cell_type": "markdown",
478 | "metadata": {
479 | "colab_type": "text",
480 | "id": "uiGXhr0B4J6B"
481 | },
482 | "source": [
483 | "As you can see, we're not getting good results from our network. The training loss values are jumping around and no longer decreasing much. The validation score has topped out at 0.25, which is really poor. \n",
484 | "\n",
485 | "It's now up to you to improve the results of our segmentation task. The things to consider changing include the network itself, how data is loaded, how batches might be composed, and what transforms we want to use from MONAI. One possible starting point is sketched below.\n",
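486 | "\n",
487 | "For example, random spatial augmentation often helps when training data is limited. A minimal sketch (assuming the dictionary keys \"img\" and \"seg\"; adjust to whatever keys the data dictionaries in this notebook actually use):\n",
488 | "\n",
489 | "```python\n",
490 | "# Hypothetical augmentation step: apply the same random rotation/flip to\n",
491 | "# image and label so the pair stays spatially aligned.\n",
492 | "from monai.transforms import Compose, RandRotate90d, RandFlipd\n",
493 | "\n",
494 | "aug = Compose([\n",
495 | "    RandRotate90d(keys=(\"img\", \"seg\"), prob=0.5),\n",
496 | "    RandFlipd(keys=(\"img\", \"seg\"), prob=0.5, spatial_axis=1),\n",
497 | "])\n",
498 | "```"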
486 | ]
487 | }
488 | ],
489 | "metadata": {
490 | "accelerator": "GPU",
491 | "colab": {
492 | "name": "MONAI_Segment_Exercise.ipynb",
493 | "provenance": []
494 | },
495 | "kernelspec": {
496 | "display_name": "Python 3",
497 | "language": "python",
498 | "name": "python3"
499 | },
500 | "language_info": {
501 | "codemirror_mode": {
502 | "name": "ipython",
503 | "version": 3
504 | },
505 | "file_extension": ".py",
506 | "mimetype": "text/x-python",
507 | "name": "python",
508 | "nbconvert_exporter": "python",
509 | "pygments_lexer": "ipython3",
510 | "version": "3.7.7"
511 | }
512 | },
513 | "nbformat": 4,
514 | "nbformat_minor": 4
515 | }
516 |
--------------------------------------------------------------------------------
/day3challenge/README.md:
--------------------------------------------------------------------------------
1 | # Day 3 Challenge
2 |
3 | On Day 3 the bootcamp's challenge was to train a classifier to distinguish patients by chest X-ray, determining whether they were normal, had pneumonia, or had COVID-19. The given data was a training set of COVID-19 X-ray images and a network previously trained to distinguish healthy patients from those with pneumonia. The intention was to retrain this network with the given data to add the third category. The included notebook has further details.
4 |
5 | Data for the challenge can be found on [Kaggle](https://www.kaggle.com/ericspod/project-monai-2020-bootcamp-challenge-dataset).
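6 | 
7 | A common way to approach this kind of retraining is to widen the network's final linear layer from two outputs to three, copying the pretrained weights for the existing classes and then fine-tuning on the combined data. The sketch below is illustrative only: the attribute name `fc`, the checkpoint filename, and the serialization format are assumptions, not the actual contents of the provided zip.
8 | 
9 | ```python
10 | import torch
11 | from torch import nn
12 | 
13 | # Hypothetical loading step; the real one depends on how the network in
14 | # classifier_normal_pneumonia.zip was saved.
15 | model = torch.load("classifier_normal_pneumonia.pth", map_location="cpu")
16 | 
17 | # Widen the (assumed) final layer from 2 to 3 outputs, keeping the learned
18 | # normal/pneumonia weights in the first two rows.
19 | new_fc = nn.Linear(model.fc.in_features, 3)
20 | with torch.no_grad():
21 |     new_fc.weight[:2] = model.fc.weight
22 |     new_fc.bias[:2] = model.fc.bias
23 | model.fc = new_fc
24 | ```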
--------------------------------------------------------------------------------
/day3challenge/classifier_normal_pneumonia.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Project-MONAI/MONAIBootcamp2020/c0c7e074660b7c073bff4517912bcea347bf50e6/day3challenge/classifier_normal_pneumonia.zip
--------------------------------------------------------------------------------
/monai_bootcamp.yml:
--------------------------------------------------------------------------------
1 | name: monai-bootcamp
2 | channels:
3 | - pytorch
4 | - conda-forge
5 | - defaults
6 | dependencies:
7 | - ipykernel=5.3.4
8 | - ipython=7.18.1
9 | - jupyter=1.0.0
10 | - matplotlib=3.3.2
11 | - notebook=6.1.4
12 | - numpy=1.19.1
13 | - pillow=7.2.0
14 | - pip=20.2.3
15 | - python=3.8.5
16 | - pytorch=1.6.0
17 | - scikit-image=0.17.2
18 | - scikit-learn=0.23.2
19 | - setuptools=49.6.0
20 | - torchvision=0.7.0
21 | - pip:
22 | - monai==0.3.0rc2
23 | - nibabel==3.1.1
24 | - pytorch-ignite==0.3.0
25 |
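26 | # To replicate this environment locally (standard conda workflow):
27 | #   conda env create -f monai_bootcamp.yml
28 | #   conda activate monai-bootcamp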
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | ipykernel==5.3.4
2 | ipython==7.18.1
3 | jupyter==1.0.0
4 | matplotlib==3.3.2
5 | monai==0.3.0rc2
6 | nibabel==3.1.1
7 | notebook==6.1.4
8 | numpy==1.19.1
9 | pytorch-ignite==0.3.0
10 | scikit-image==0.17.2
11 | scikit-learn==0.23.2
12 | torch==1.6.0
13 | torchvision==0.7.0
--------------------------------------------------------------------------------