├── .gitignore
├── LICENSE
├── README.md
├── figures
│   ├── DM_segmentation_figure.png
│   ├── XAI_example.png
│   └── performance_metrics.png
├── notebooks
│   ├── DMDetect-project.ipynb
│   └── DMSegment-project.ipynb
├── python
│   ├── accumulated_gradients.py
│   ├── architectures.py
│   ├── aug.py
│   ├── batch_generator.py
│   ├── create_data.py
│   ├── eval.py
│   ├── eval_seg.py
│   ├── models.py
│   ├── resunetpp.py
│   ├── train.py
│   ├── train_seg.py
│   └── utils.py
└── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__/
2 | models/
3 | data/
4 | output/
5 | venv/
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 André Pedersen
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DMDetect
2 | [![license](https://img.shields.io/github/license/andreped/DMDetect?style=flat-square)](https://github.com/andreped/DMDetect/blob/main/LICENSE)
3 |
4 | Open-source project for training, evaluating, assessing and deploying Convolutional Neural Networks (CNNs) for multi-class image classification and segmentation of Digital Mammography (DM) images.
5 |
6 |
7 |
8 |
9 |
10 | The project and code are structured to be easy to use out-of-the-box, given that the project is set up as shown [below](https://github.com/andreped/DMDetect/blob/main/README.md#project-structure).
11 | For instance, I have tested the project both on a local Win10 machine and on Google Colab without any issues; see notebooks/ for Jupyter notebook example(s).
12 |
13 | For this project we used TensorFlow 2.4 (with CUDA 11). This enabled us to experiment with TFRecords and tf.data.Dataset, which are well suited for efficient batch generation during training, as well as GPU-accelerated data augmentation.
14 |
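As a rough illustration of the kind of pipeline this enables, below is a minimal sketch of reading TFRecords with tf.data; the feature keys, shapes, and file paths here are assumptions, and the actual pipeline is defined in python/batch_generator.py:

```
import tensorflow as tf

# Hypothetical feature spec; the real keys and shapes live in batch_generator.py
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_png(example["image"], channels=1)
    image = tf.image.convert_image_dtype(image, tf.float32)  # scales to [0, 1]
    return image, example["label"]

def augment(image, label):
    # Augmentation runs inside the tf.data graph, so it can be GPU-accelerated
    image = tf.image.random_flip_left_right(image)
    return image, label

dataset = (
    tf.data.TFRecordDataset(tf.io.gfile.glob("data/train*.tfrecord"))
    .map(parse_example, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .map(augment, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .shuffle(1024)
    .batch(8)
    .prefetch(tf.data.experimental.AUTOTUNE)
)
```
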
15 | ## Preliminary results
16 |
17 | #### Segmentation
18 |
19 | I've trained a ResU-Net++ model on 1024x1024 input images (aspect ratio preserved, with the vertical axis as reference). The model is trained to perform multi-class semantic segmentation of cancer, mammary gland, pectoral muscle, and nipple:
20 |
21 | | Metric  | Cancer | Mammary gland | Pectoral muscle | Nipple | Overall |
22 | |---------|--------|---------------|-----------------|--------|---------|
23 | | DSC     | 0.279  | 0.976         | 0.946           | 0.474  | 0.669   |
24 | | IoU     | 0.220  | 0.955         | 0.920           | 0.373  | 0.617   |
25 |
26 | Due to the downsampling of the images, too much information is lost to produce satisfactory tumour segmentation (in terms of DSC/IoU). Improvements could be made by working patch-wise or at full/higher resolution, but this might degrade performance on the other classes, so it depends on the use case.
27 |
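For reference, DSC (Dice similarity coefficient) and IoU (intersection over union) are standard overlap metrics, computed per class between the predicted and ground-truth masks. A minimal NumPy sketch of how they can be computed (the actual evaluation is done in eval_seg.py and may differ):

```
import numpy as np

def dice_score(pred, gt, smooth=1e-8):
    # pred and gt are binary masks for a single class
    intersection = np.sum(pred * gt)
    return (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(gt) + smooth)

def iou_score(pred, gt, smooth=1e-8):
    intersection = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - intersection
    return (intersection + smooth) / (union + smooth)
```
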
28 |
29 | #### Classification
30 |
31 | I've trained a CNN that detects images containing breast cancer tissue. We get quite good results without much tuning or long training. A summary of the results can be seen below:
32 |
33 | | Classes | Precision | Recall | F1-score | Support |
34 | | -------------|-------------|----------|------------|----------|
35 | | 0 | 0.99 | 0.98 | 0.98 | 9755 |
36 | | 1 | 0.88 | 0.90 | 0.89 | 1445 |
37 | | | | | | |
38 | | Accuracy | | | 0.97 | 11200 |
39 | | macro avg | 0.93 | 0.94 | 0.94 | 11200 |
40 | | weighted avg | 0.97 | 0.97 | 0.97 | 11200 |
41 |
42 | Reaching a macro-average F1-score of 0.94 is a good start.
43 |
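The table above follows the layout of scikit-learn's classification report. A minimal sketch of how such a summary can be produced, using placeholder labels (in eval.py the predictions come from the trained model):

```
import numpy as np
from sklearn.metrics import classification_report

# Placeholder labels for illustration; in practice y_pred is the argmax of the model output
y_true = np.array([0, 0, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0])

print(classification_report(y_true, y_pred, digits=2))
```
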
44 | ## Explainable AI (XAI)
45 |
46 | To further assess the performance of the method, XAI was used (in this case [Grad-CAM](https://arxiv.org/abs/1610.02391), via the [tf-explain](https://github.com/sicara/tf-explain) repo) to verify that the model attends to the relevant parts of the image:
47 |
48 | ![XAI example](figures/XAI_example.png)
51 |
52 | From this figure, it seems like the model is reacting to the correct part of the image. However, the network seems biased towards using the central part of the image as a default if nothing else is found, which might be suboptimal.
53 |
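For reference, a minimal sketch of applying Grad-CAM with tf-explain; the model path, input shape, and class index below are assumptions for illustration (see eval.py for the actual usage):

```
import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

# Hypothetical model path and input shape, for illustration only
model = tf.keras.models.load_model("output/models/some_model.h5", compile=False)
image = np.random.rand(1, 224, 224, 1).astype(np.float32)  # placeholder input batch

explainer = GradCAM()
# class_index=1 is assumed to be the positive (cancer) class
grid = explainer.explain((image, None), model, class_index=1)
explainer.save(grid, "output/", "grad_cam_example.png")
```
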
54 | ## Open data sets:
55 |
56 | #### Classification
57 | The data set used, **DDSM**, can be downloaded from [here](https://www.kaggle.com/skooch/ddsm-mammography/discussion/225969). When downloaded, uncompress and place the folder structure in the data/ folder (see Project structure [below](https://github.com/andreped/DMDetect/blob/main/README.md#project-structure)).
58 |
59 | #### Segmentation
60 | The data set we used, **CSAW-S**, can be downloaded from [here](https://zenodo.org/record/4030660#.YHGTJOgzaiN). Place the uncompressed data into the data/ folder in the Project structure, such that the raw data ends up as: data/CSAW-S/CSAW-S/CsawS/anonymized_dataset/.
61 |
62 | ## How to use?
63 |
64 | Given that you have:
65 | 1. Created a virtual environment (optional, but recommended)
66 | 2. Installed all requirements
67 | 3. Defined the project as [below](https://github.com/andreped/DMDetect/blob/main/README.md#project-structure)
68 | 4. Placed the uncompressed data set in the data/ folder
69 |
70 | ...you should be all set. If you are using **Google Colab**, see the Jupyter notebook examples in notebooks/ for more information.
71 |
72 | #### Classification
73 | In this case, we are using the [DDSM Kaggle data set](https://www.kaggle.com/skooch/ddsm-mammography), which has already been preprocessed into a format that can be consumed on-the-fly by the batch generator. Thus, simply train a CNN classifier by running the train.py script:
74 | ```
75 | python train.py
76 | ```
77 |
78 | When a model is ready (see output/models/), it can be evaluated using the eval.py script, which reports summary performance results and offers the option to further assess the model using XAI.
79 | ```
80 | python eval.py
81 | ```
82 |
83 | #### Segmentation
84 | In this case, we are using the [CSAW-S](https://zenodo.org/record/4030660#.YHGTJOgzaiN) data set. As the data is not preprocessed, it is necessary to do that first:
85 | ```
86 | python create_data.py
87 | ```
88 |
89 | Then simply train a deep segmentation model by running:
90 | ```
91 | python train_seg.py
92 | ```
93 |
94 | To evaluate the model, with the option to view the results, run:
95 | ```
96 | python eval_seg.py
97 | ```
98 |
99 | ## Project structure
100 |
101 | ```
102 | +-- {DMDetect}/
103 | | +-- python/
104 | | | +-- create_data.py
105 | | | +-- train.py
106 | | | +-- [...]
107 | | +-- data/
108 | | | +-- folder_containing_the_unzipped_kaggle_dataset/
109 | | | | +-- fold_name0/
110 | | | | +-- fold_name1/
111 | | | | +-- [...]
112 | | | +-- folder_containing_csaw-s_dataset/
113 | | | | +-- [...]
114 | | +-- output/
115 | | | +-- history/
116 | | | | +--- history_some_run_name1.txt
117 | | | | +--- history_some_run_name2.txt
118 | | | | +--- [...]
119 | | | +-- models/
120 | | | | +--- model_some_run_name1.h5
121 | | | | +--- model_some_run_name2.h5
122 | | | | +--- [...]
123 | ```
124 |
125 | ## TODOs (ordered from most to least important):
126 |
127 | - [x] Setup batch generation through TFRecords for GPU-accelerated generation and data augmentation
128 | - [x] Introduce smart losses and metrics for handling class-imbalance (see the loss sketch after this list)
129 | - [x] Make end-to-end pipeline for automatic DM assessment
130 | - [x] Achieve satisfactory classification performance
131 | - [x] Introduce XAI-based method to further assess classifier
132 | - [x] Test MTL design on the multi-classification tasks
133 | - [x] Make proper support for MIL classifiers that works both during training and inference
134 | - [x] Fix data augmentation scheme in the get_dataset method
135 | - [x] Update paths to be more generic
136 | - [x] Add Jupyter Notebook relevant for deployment on Google Colab
137 | - [x] Get access to semantically annotated data of breast cancer
138 | - [x] Setup full pipeline for training and evaluating segmentation models
139 | - [x] Add Jupyter Notebook example for training segmentation models using Google Colab
140 | - [x] Find the optimal set of augmentation methods for both tasks
141 | - [ ] Get access to raw DM images, and test the pipeline across the full image (model trained on patches)
142 | - [ ] Extract the distribution across the 5 classes (classification task), to be used for balancing classes during training
143 | - [ ] Introduce ROC curves and AUC as additional metrics for evaluating performance
144 | - [ ] Make a simple script for plotting losses and metrics as a function of epochs, using the CSV history
145 | - [ ] Add option to set arguments for training/evaluation using [argparse](https://docs.python.org/3/library/argparse.html) or similar
146 |
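As an example of the kind of imbalance-aware loss referred to in the list above, here is a minimal categorical focal loss sketch; gamma=3 matches the `gamma_3` tag seen in the training run names, but the project's actual loss definitions live in the python/ scripts and may differ:

```
import tensorflow as tf

def categorical_focal_loss(gamma=3.0, eps=1e-7):
    # Focal loss down-weights well-classified examples, which helps with class imbalance
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        cross_entropy = -y_true * tf.math.log(y_pred)
        weight = tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(weight * cross_entropy, axis=-1)
    return loss
```
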
147 | ## Tips
148 |
149 | Create a virtual environment (optional):\
150 | `virtualenv -ppython3 venv --clear`
151 |
152 | Activating virtual environment:\
153 | On Win10: `.\venv\Scripts\activate.ps1`\
154 | On Linux: `source venv/bin/activate`
155 |
156 | Deactivating virtual environment:\
157 | `deactivate`
158 |
159 | Install dependencies from requirements file:\
160 | `pip install -r requirements.txt`
161 |
162 | Updating requirements.txt file:\
163 | `pip freeze > requirements.txt`
164 |
165 | ------
166 |
167 | Made with :heart: and Python
168 |
--------------------------------------------------------------------------------
/figures/DM_segmentation_figure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/andreped/DMDetect/bfbc0f100812f2faaac4d2d1ca905e3aa3354cc6/figures/DM_segmentation_figure.png
--------------------------------------------------------------------------------
/figures/XAI_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/andreped/DMDetect/bfbc0f100812f2faaac4d2d1ca905e3aa3354cc6/figures/XAI_example.png
--------------------------------------------------------------------------------
/figures/performance_metrics.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/andreped/DMDetect/bfbc0f100812f2faaac4d2d1ca905e3aa3354cc6/figures/performance_metrics.png
--------------------------------------------------------------------------------
/notebooks/DMSegment-project.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "DMSegment-project.ipynb",
7 | "provenance": [
8 | {
9 | "file_id": "134XCEqz9OgZazW6DAAYfFlz6ROJaSLpZ",
10 | "timestamp": 1618086050382
11 | }
12 | ],
13 | "collapsed_sections": [],
14 | "mount_file_id": "17i0xTmoQzjyuUvko2XdwZmM51GlCdGKG",
15 | "authorship_tag": "ABX9TyPlIJ4C99btui9nJZURuRbb"
16 | },
17 | "kernelspec": {
18 | "name": "python3",
19 | "display_name": "Python 3"
20 | },
21 | "language_info": {
22 | "name": "python"
23 | },
24 | "accelerator": "GPU"
25 | },
26 | "cells": [
27 | {
28 | "cell_type": "code",
29 | "metadata": {
30 | "colab": {
31 | "base_uri": "https://localhost:8080/"
32 | },
33 | "id": "NtYtpZN7egWf",
34 | "executionInfo": {
35 | "status": "ok",
36 | "timestamp": 1618086965021,
37 | "user_tz": -120,
38 | "elapsed": 1049,
39 | "user": {
40 | "displayName": "André Pedersen",
41 | "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhREXm38EMyE4bW69c9dve33zb8BzIBazDnE89F=s64",
42 | "userId": "14390719858710152933"
43 | }
44 | },
45 | "outputId": "0ad9b6c2-e710-4f80-a797-eadbd50611c5"
46 | },
47 | "source": [
48 |         "# Mount Google Drive\n",
49 | "from google.colab import drive\n",
50 | "drive.mount('/content/drive')"
51 | ],
52 | "execution_count": 1,
53 | "outputs": [
54 | {
55 | "output_type": "stream",
56 | "text": [
57 | "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
58 | ],
59 | "name": "stdout"
60 | }
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "metadata": {
66 | "id": "0vKtsxVrcCJJ",
67 | "colab": {
68 | "base_uri": "https://localhost:8080/"
69 | },
70 | "executionInfo": {
71 | "status": "ok",
72 | "timestamp": 1618086971357,
73 | "user_tz": -120,
74 | "elapsed": 3185,
75 | "user": {
76 | "displayName": "André Pedersen",
77 | "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhREXm38EMyE4bW69c9dve33zb8BzIBazDnE89F=s64",
78 | "userId": "14390719858710152933"
79 | }
80 | },
81 | "outputId": "0956d363-bc3e-4373-c6fd-e19a0773d029"
82 | },
83 | "source": [
84 | "# Install python dependencies\n",
85 | "%cd /content/drive/MyDrive/DMDetect/\n",
86 | "%pip install -r requirements.txt"
87 | ],
88 | "execution_count": 2,
89 | "outputs": [
90 | {
91 | "output_type": "stream",
92 | "text": [
93 | "/content/drive/MyDrive/DMDetect\n",
94 | "Requirement already satisfied: absl-py==0.12.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 1)) (0.12.0)\n",
95 | "Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 2)) (1.6.3)\n",
96 | "Requirement already satisfied: cachetools==4.2.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 3)) (4.2.1)\n",
97 | "Requirement already satisfied: certifi==2020.12.5 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 4)) (2020.12.5)\n",
98 | "Requirement already satisfied: chardet==4.0.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 5)) (4.0.0)\n",
99 | "Requirement already satisfied: cycler==0.10.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 6)) (0.10.0)\n",
100 | "Requirement already satisfied: decorator==4.4.2 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 7)) (4.4.2)\n",
101 | "Requirement already satisfied: flatbuffers==1.12 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 8)) (1.12)\n",
102 | "Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 9)) (0.3.3)\n",
103 | "Requirement already satisfied: google-auth==1.28.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 10)) (1.28.0)\n",
104 | "Requirement already satisfied: google-auth-oauthlib==0.4.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 11)) (0.4.3)\n",
105 | "Requirement already satisfied: google-pasta==0.2.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 12)) (0.2.0)\n",
106 | "Requirement already satisfied: grpcio==1.32.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 13)) (1.32.0)\n",
107 | "Requirement already satisfied: h5py==2.10.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 14)) (2.10.0)\n",
108 | "Requirement already satisfied: idna==2.10 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 15)) (2.10)\n",
109 | "Requirement already satisfied: imageio==2.9.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 16)) (2.9.0)\n",
110 | "Requirement already satisfied: importlib-metadata==3.7.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 17)) (3.7.3)\n",
111 | "Requirement already satisfied: joblib==1.0.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 18)) (1.0.1)\n",
112 | "Requirement already satisfied: Keras-Preprocessing==1.1.2 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 19)) (1.1.2)\n",
113 | "Requirement already satisfied: kiwisolver==1.3.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 20)) (1.3.1)\n",
114 | "Requirement already satisfied: Markdown==3.3.4 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 21)) (3.3.4)\n",
115 | "Requirement already satisfied: matplotlib==3.3.4 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 22)) (3.3.4)\n",
116 | "Requirement already satisfied: networkx==2.5 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 23)) (2.5)\n",
117 | "Requirement already satisfied: numpy==1.19.5 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 24)) (1.19.5)\n",
118 | "Requirement already satisfied: oauthlib==3.1.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 25)) (3.1.0)\n",
119 | "Requirement already satisfied: opencv-python==4.5.1.48 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 26)) (4.5.1.48)\n",
120 | "Requirement already satisfied: opt-einsum==3.3.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 27)) (3.3.0)\n",
121 | "Requirement already satisfied: Pillow==8.1.2 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 28)) (8.1.2)\n",
122 | "Requirement already satisfied: protobuf==3.15.6 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 29)) (3.15.6)\n",
123 | "Requirement already satisfied: pyasn1==0.4.8 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 30)) (0.4.8)\n",
124 | "Requirement already satisfied: pyasn1-modules==0.2.8 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 31)) (0.2.8)\n",
125 | "Requirement already satisfied: pyparsing==2.4.7 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 32)) (2.4.7)\n",
126 | "Requirement already satisfied: python-dateutil==2.8.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 33)) (2.8.1)\n",
127 | "Requirement already satisfied: PyWavelets==1.1.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 34)) (1.1.1)\n",
128 | "Requirement already satisfied: requests==2.25.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 35)) (2.25.1)\n",
129 | "Requirement already satisfied: requests-oauthlib==1.3.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 36)) (1.3.0)\n",
130 | "Requirement already satisfied: rsa==4.7.2 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 37)) (4.7.2)\n",
131 | "Requirement already satisfied: scikit-image==0.17.2 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 38)) (0.17.2)\n",
132 | "Requirement already satisfied: scikit-learn==0.24.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 39)) (0.24.1)\n",
133 | "Requirement already satisfied: scipy==1.5.4 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 40)) (1.5.4)\n",
134 | "Requirement already satisfied: six==1.15.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 41)) (1.15.0)\n",
135 | "Requirement already satisfied: tensorboard==2.4.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 42)) (2.4.1)\n",
136 | "Requirement already satisfied: tensorboard-plugin-wit==1.8.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 43)) (1.8.0)\n",
137 | "Requirement already satisfied: tensorflow==2.4.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 44)) (2.4.1)\n",
138 | "Requirement already satisfied: tensorflow-addons==0.12.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 45)) (0.12.1)\n",
139 | "Requirement already satisfied: tensorflow-estimator==2.4.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 46)) (2.4.0)\n",
140 | "Requirement already satisfied: termcolor==1.1.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 47)) (1.1.0)\n",
141 | "Requirement already satisfied: tf-explain==0.3.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 48)) (0.3.0)\n",
142 | "Requirement already satisfied: threadpoolctl==2.1.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 49)) (2.1.0)\n",
143 | "Requirement already satisfied: tifffile==2020.9.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 50)) (2020.9.3)\n",
144 | "Requirement already satisfied: tqdm==4.59.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 51)) (4.59.0)\n",
145 | "Requirement already satisfied: typeguard==2.11.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 52)) (2.11.1)\n",
146 | "Requirement already satisfied: typing-extensions==3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 53)) (3.7.4.3)\n",
147 | "Requirement already satisfied: urllib3==1.26.4 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 54)) (1.26.4)\n",
148 | "Requirement already satisfied: Werkzeug==1.0.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 55)) (1.0.1)\n",
149 | "Requirement already satisfied: wrapt==1.12.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 56)) (1.12.1)\n",
150 | "Requirement already satisfied: zipp==3.4.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 57)) (3.4.1)\n",
151 | "Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from astunparse==1.6.3->-r requirements.txt (line 2)) (0.36.2)\n",
152 | "Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-auth==1.28.0->-r requirements.txt (line 10)) (54.2.0)\n"
153 | ],
154 | "name": "stdout"
155 | }
156 | ]
157 | },
158 | {
159 | "cell_type": "code",
160 | "metadata": {
161 | "id": "0IoiPQi2cJ7Z",
162 | "colab": {
163 | "base_uri": "https://localhost:8080/"
164 | },
165 | "executionInfo": {
166 | "status": "ok",
167 | "timestamp": 1618086973982,
168 | "user_tz": -120,
169 | "elapsed": 759,
170 | "user": {
171 | "displayName": "André Pedersen",
172 | "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhREXm38EMyE4bW69c9dve33zb8BzIBazDnE89F=s64",
173 | "userId": "14390719858710152933"
174 | }
175 | },
176 | "outputId": "b44c5499-40f3-421c-94e8-e67025d9aa3a"
177 | },
178 | "source": [
179 | "# Go to project folder\n",
180 | "%cd /content/drive/MyDrive/DMDetect/python/"
181 | ],
182 | "execution_count": 3,
183 | "outputs": [
184 | {
185 | "output_type": "stream",
186 | "text": [
187 | "/content/drive/MyDrive/DMDetect/python\n"
188 | ],
189 | "name": "stdout"
190 | }
191 | ]
192 | },
193 | {
194 | "cell_type": "code",
195 | "metadata": {
196 | "id": "Z44ZrJB_cT0q",
197 | "colab": {
198 | "base_uri": "https://localhost:8080/"
199 | },
200 | "executionInfo": {
201 | "status": "ok",
202 | "timestamp": 1618088806905,
203 | "user_tz": -120,
204 | "elapsed": 1829668,
205 | "user": {
206 | "displayName": "André Pedersen",
207 | "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhREXm38EMyE4bW69c9dve33zb8BzIBazDnE89F=s64",
208 | "userId": "14390719858710152933"
209 | }
210 | },
211 | "outputId": "9b257f6a-88a1-4bf5-f457-6e257b21c3a5"
212 | },
213 | "source": [
214 |         "# NOTE: This is EXTREMELY SLOW TO DO IN COLAB! This is because writing to Drive is slow (reading is also a bit slow, one file at a time)...\n",
215 |         "# If possible, preprocess locally and upload the preprocessed data to the expected location\n",
216 | "# Process data\n",
217 | "!python create_data.py"
218 | ],
219 | "execution_count": 4,
220 | "outputs": [
221 | {
222 | "output_type": "stream",
223 | "text": [
224 | "2021-04-10 20:36:17.929455: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
225 | "DM: 100% 150/150 [30:24<00:00, 12.16s/it]\n"
226 | ],
227 | "name": "stdout"
228 | }
229 | ]
230 | },
231 | {
232 | "cell_type": "code",
233 | "metadata": {
234 | "id": "WtJ-Ik_GcLYe",
235 | "colab": {
236 | "base_uri": "https://localhost:8080/"
237 | },
238 | "executionInfo": {
239 | "status": "ok",
240 | "timestamp": 1618091161516,
241 | "user_tz": -120,
242 | "elapsed": 298324,
243 | "user": {
244 | "displayName": "André Pedersen",
245 | "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhREXm38EMyE4bW69c9dve33zb8BzIBazDnE89F=s64",
246 | "userId": "14390719858710152933"
247 | }
248 | },
249 | "outputId": "bd2489cb-a8f5-4916-a95c-d7b35b4ae40e"
250 | },
251 | "source": [
252 | "# Train segmentation model\n",
253 | "!python train_seg.py"
254 | ],
255 | "execution_count": 11,
256 | "outputs": [
257 | {
258 | "output_type": "stream",
259 | "text": [
260 | "2021-04-10 21:41:03.762192: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
261 | "\n",
262 | "Current model run: \n",
263 | "100421_214107_bs_2_arch_unet_img_1024_nbcl_5_gamma_3_aug_horz,gamma_drp_0,1_ \n",
264 | "\n",
265 | "2021-04-10 21:41:07.546372: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set\n",
266 | "2021-04-10 21:41:07.547241: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1\n",
267 | "2021-04-10 21:41:07.580453: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
268 | "2021-04-10 21:41:07.581017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \n",
269 | "pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\n",
270 | "coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s\n",
271 | "2021-04-10 21:41:07.581054: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
272 | "2021-04-10 21:41:07.583147: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11\n",
273 | "2021-04-10 21:41:07.583242: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11\n",
274 | "2021-04-10 21:41:07.584729: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n",
275 | "2021-04-10 21:41:07.585082: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n",
276 | "2021-04-10 21:41:07.586610: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n",
277 | "2021-04-10 21:41:07.587100: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11\n",
278 | "2021-04-10 21:41:07.587318: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8\n",
279 | "2021-04-10 21:41:07.587430: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
280 | "2021-04-10 21:41:07.588007: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
281 | "2021-04-10 21:41:07.588526: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n",
282 | "2021-04-10 21:41:07.588899: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set\n",
283 | "2021-04-10 21:41:07.589018: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
284 | "2021-04-10 21:41:07.589547: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \n",
285 | "pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\n",
286 | "coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s\n",
287 | "2021-04-10 21:41:07.589575: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
288 | "2021-04-10 21:41:07.589615: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11\n",
289 | "2021-04-10 21:41:07.589638: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11\n",
290 | "2021-04-10 21:41:07.589661: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n",
291 | "2021-04-10 21:41:07.589688: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n",
292 | "2021-04-10 21:41:07.589714: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n",
293 | "2021-04-10 21:41:07.589736: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11\n",
294 | "2021-04-10 21:41:07.589758: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8\n",
295 | "2021-04-10 21:41:07.589827: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
296 | "2021-04-10 21:41:07.590396: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
297 | "2021-04-10 21:41:07.590897: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n",
298 | "2021-04-10 21:41:07.590946: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
299 | "2021-04-10 21:41:08.098154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
300 | "2021-04-10 21:41:08.098216: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0 \n",
301 | "2021-04-10 21:41:08.098324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N \n",
302 | "2021-04-10 21:41:08.098671: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
303 | "2021-04-10 21:41:08.099563: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
304 | "2021-04-10 21:41:08.100366: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
305 | "2021-04-10 21:41:08.100877: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
306 | "2021-04-10 21:41:08.100926: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13994 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
307 | "Model: \"model\"\n",
308 | "__________________________________________________________________________________________________\n",
309 | "Layer (type) Output Shape Param # Connected to \n",
310 | "==================================================================================================\n",
311 | "input_1 (InputLayer) [(None, 1024, 1024, 0 \n",
312 | "__________________________________________________________________________________________________\n",
313 | "conv2d (Conv2D) (None, 1024, 1024, 1 160 input_1[0][0] \n",
314 | "__________________________________________________________________________________________________\n",
315 | "batch_normalization (BatchNorma (None, 1024, 1024, 1 64 conv2d[0][0] \n",
316 | "__________________________________________________________________________________________________\n",
317 | "activation (Activation) (None, 1024, 1024, 1 0 batch_normalization[0][0] \n",
318 | "__________________________________________________________________________________________________\n",
319 | "conv2d_1 (Conv2D) (None, 1024, 1024, 1 2320 activation[0][0] \n",
320 | "__________________________________________________________________________________________________\n",
321 | "batch_normalization_1 (BatchNor (None, 1024, 1024, 1 64 conv2d_1[0][0] \n",
322 | "__________________________________________________________________________________________________\n",
323 | "activation_1 (Activation) (None, 1024, 1024, 1 0 batch_normalization_1[0][0] \n",
324 | "__________________________________________________________________________________________________\n",
325 | "max_pooling2d (MaxPooling2D) (None, 512, 512, 16) 0 activation_1[0][0] \n",
326 | "__________________________________________________________________________________________________\n",
327 | "conv2d_2 (Conv2D) (None, 512, 512, 32) 4640 max_pooling2d[0][0] \n",
328 | "__________________________________________________________________________________________________\n",
329 | "batch_normalization_2 (BatchNor (None, 512, 512, 32) 128 conv2d_2[0][0] \n",
330 | "__________________________________________________________________________________________________\n",
331 | "activation_2 (Activation) (None, 512, 512, 32) 0 batch_normalization_2[0][0] \n",
332 | "__________________________________________________________________________________________________\n",
333 | "spatial_dropout2d (SpatialDropo (None, 512, 512, 32) 0 activation_2[0][0] \n",
334 | "__________________________________________________________________________________________________\n",
335 | "conv2d_3 (Conv2D) (None, 512, 512, 32) 9248 spatial_dropout2d[0][0] \n",
336 | "__________________________________________________________________________________________________\n",
337 | "batch_normalization_3 (BatchNor (None, 512, 512, 32) 128 conv2d_3[0][0] \n",
338 | "__________________________________________________________________________________________________\n",
339 | "activation_3 (Activation) (None, 512, 512, 32) 0 batch_normalization_3[0][0] \n",
340 | "__________________________________________________________________________________________________\n",
341 | "spatial_dropout2d_1 (SpatialDro (None, 512, 512, 32) 0 activation_3[0][0] \n",
342 | "__________________________________________________________________________________________________\n",
343 | "max_pooling2d_1 (MaxPooling2D) (None, 256, 256, 32) 0 spatial_dropout2d_1[0][0] \n",
344 | "__________________________________________________________________________________________________\n",
345 | "conv2d_4 (Conv2D) (None, 256, 256, 32) 9248 max_pooling2d_1[0][0] \n",
346 | "__________________________________________________________________________________________________\n",
347 | "batch_normalization_4 (BatchNor (None, 256, 256, 32) 128 conv2d_4[0][0] \n",
348 | "__________________________________________________________________________________________________\n",
349 | "activation_4 (Activation) (None, 256, 256, 32) 0 batch_normalization_4[0][0] \n",
350 | "__________________________________________________________________________________________________\n",
351 | "spatial_dropout2d_2 (SpatialDro (None, 256, 256, 32) 0 activation_4[0][0] \n",
352 | "__________________________________________________________________________________________________\n",
353 | "conv2d_5 (Conv2D) (None, 256, 256, 32) 9248 spatial_dropout2d_2[0][0] \n",
354 | "__________________________________________________________________________________________________\n",
355 | "batch_normalization_5 (BatchNor (None, 256, 256, 32) 128 conv2d_5[0][0] \n",
356 | "__________________________________________________________________________________________________\n",
357 | "activation_5 (Activation) (None, 256, 256, 32) 0 batch_normalization_5[0][0] \n",
358 | "__________________________________________________________________________________________________\n",
359 | "spatial_dropout2d_3 (SpatialDro (None, 256, 256, 32) 0 activation_5[0][0] \n",
360 | "__________________________________________________________________________________________________\n",
361 | "max_pooling2d_2 (MaxPooling2D) (None, 128, 128, 32) 0 spatial_dropout2d_3[0][0] \n",
362 | "__________________________________________________________________________________________________\n",
363 | "conv2d_6 (Conv2D) (None, 128, 128, 64) 18496 max_pooling2d_2[0][0] \n",
364 | "__________________________________________________________________________________________________\n",
365 | "batch_normalization_6 (BatchNor (None, 128, 128, 64) 256 conv2d_6[0][0] \n",
366 | "__________________________________________________________________________________________________\n",
367 | "activation_6 (Activation) (None, 128, 128, 64) 0 batch_normalization_6[0][0] \n",
368 | "__________________________________________________________________________________________________\n",
369 | "spatial_dropout2d_4 (SpatialDro (None, 128, 128, 64) 0 activation_6[0][0] \n",
370 | "__________________________________________________________________________________________________\n",
371 | "conv2d_7 (Conv2D) (None, 128, 128, 64) 36928 spatial_dropout2d_4[0][0] \n",
372 | "__________________________________________________________________________________________________\n",
373 | "batch_normalization_7 (BatchNor (None, 128, 128, 64) 256 conv2d_7[0][0] \n",
374 | "__________________________________________________________________________________________________\n",
375 | "activation_7 (Activation) (None, 128, 128, 64) 0 batch_normalization_7[0][0] \n",
376 | "__________________________________________________________________________________________________\n",
377 | "spatial_dropout2d_5 (SpatialDro (None, 128, 128, 64) 0 activation_7[0][0] \n",
378 | "__________________________________________________________________________________________________\n",
379 | "max_pooling2d_3 (MaxPooling2D) (None, 64, 64, 64) 0 spatial_dropout2d_5[0][0] \n",
380 | "__________________________________________________________________________________________________\n",
381 | "conv2d_8 (Conv2D) (None, 64, 64, 64) 36928 max_pooling2d_3[0][0] \n",
382 | "__________________________________________________________________________________________________\n",
383 | "batch_normalization_8 (BatchNor (None, 64, 64, 64) 256 conv2d_8[0][0] \n",
384 | "__________________________________________________________________________________________________\n",
385 | "activation_8 (Activation) (None, 64, 64, 64) 0 batch_normalization_8[0][0] \n",
386 | "__________________________________________________________________________________________________\n",
387 | "spatial_dropout2d_6 (SpatialDro (None, 64, 64, 64) 0 activation_8[0][0] \n",
388 | "__________________________________________________________________________________________________\n",
389 | "conv2d_9 (Conv2D) (None, 64, 64, 64) 36928 spatial_dropout2d_6[0][0] \n",
390 | "__________________________________________________________________________________________________\n",
391 | "batch_normalization_9 (BatchNor (None, 64, 64, 64) 256 conv2d_9[0][0] \n",
392 | "__________________________________________________________________________________________________\n",
393 | "activation_9 (Activation) (None, 64, 64, 64) 0 batch_normalization_9[0][0] \n",
394 | "__________________________________________________________________________________________________\n",
395 | "spatial_dropout2d_7 (SpatialDro (None, 64, 64, 64) 0 activation_9[0][0] \n",
396 | "__________________________________________________________________________________________________\n",
397 | "max_pooling2d_4 (MaxPooling2D) (None, 32, 32, 64) 0 spatial_dropout2d_7[0][0] \n",
398 | "__________________________________________________________________________________________________\n",
399 | "conv2d_10 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_4[0][0] \n",
400 | "__________________________________________________________________________________________________\n",
401 | "batch_normalization_10 (BatchNo (None, 32, 32, 128) 512 conv2d_10[0][0] \n",
402 | "__________________________________________________________________________________________________\n",
403 | "activation_10 (Activation) (None, 32, 32, 128) 0 batch_normalization_10[0][0] \n",
404 | "__________________________________________________________________________________________________\n",
405 | "spatial_dropout2d_8 (SpatialDro (None, 32, 32, 128) 0 activation_10[0][0] \n",
406 | "__________________________________________________________________________________________________\n",
407 | "conv2d_11 (Conv2D) (None, 32, 32, 128) 147584 spatial_dropout2d_8[0][0] \n",
408 | "__________________________________________________________________________________________________\n",
409 | "batch_normalization_11 (BatchNo (None, 32, 32, 128) 512 conv2d_11[0][0] \n",
410 | "__________________________________________________________________________________________________\n",
411 | "activation_11 (Activation) (None, 32, 32, 128) 0 batch_normalization_11[0][0] \n",
412 | "__________________________________________________________________________________________________\n",
413 | "spatial_dropout2d_9 (SpatialDro (None, 32, 32, 128) 0 activation_11[0][0] \n",
414 | "__________________________________________________________________________________________________\n",
415 | "max_pooling2d_5 (MaxPooling2D) (None, 16, 16, 128) 0 spatial_dropout2d_9[0][0] \n",
416 | "__________________________________________________________________________________________________\n",
417 | "conv2d_12 (Conv2D) (None, 16, 16, 128) 147584 max_pooling2d_5[0][0] \n",
418 | "__________________________________________________________________________________________________\n",
419 | "batch_normalization_12 (BatchNo (None, 16, 16, 128) 512 conv2d_12[0][0] \n",
420 | "__________________________________________________________________________________________________\n",
421 | "activation_12 (Activation) (None, 16, 16, 128) 0 batch_normalization_12[0][0] \n",
422 | "__________________________________________________________________________________________________\n",
423 | "spatial_dropout2d_10 (SpatialDr (None, 16, 16, 128) 0 activation_12[0][0] \n",
424 | "__________________________________________________________________________________________________\n",
425 | "conv2d_13 (Conv2D) (None, 16, 16, 128) 147584 spatial_dropout2d_10[0][0] \n",
426 | "__________________________________________________________________________________________________\n",
427 | "batch_normalization_13 (BatchNo (None, 16, 16, 128) 512 conv2d_13[0][0] \n",
428 | "__________________________________________________________________________________________________\n",
429 | "activation_13 (Activation) (None, 16, 16, 128) 0 batch_normalization_13[0][0] \n",
430 | "__________________________________________________________________________________________________\n",
431 | "spatial_dropout2d_11 (SpatialDr (None, 16, 16, 128) 0 activation_13[0][0] \n",
432 | "__________________________________________________________________________________________________\n",
433 | "max_pooling2d_6 (MaxPooling2D) (None, 8, 8, 128) 0 spatial_dropout2d_11[0][0] \n",
434 | "__________________________________________________________________________________________________\n",
435 | "conv2d_14 (Conv2D) (None, 8, 8, 256) 295168 max_pooling2d_6[0][0] \n",
436 | "__________________________________________________________________________________________________\n",
437 | "batch_normalization_14 (BatchNo (None, 8, 8, 256) 1024 conv2d_14[0][0] \n",
438 | "__________________________________________________________________________________________________\n",
439 | "activation_14 (Activation) (None, 8, 8, 256) 0 batch_normalization_14[0][0] \n",
440 | "__________________________________________________________________________________________________\n",
441 | "spatial_dropout2d_12 (SpatialDr (None, 8, 8, 256) 0 activation_14[0][0] \n",
442 | "__________________________________________________________________________________________________\n",
443 | "conv2d_15 (Conv2D) (None, 8, 8, 256) 590080 spatial_dropout2d_12[0][0] \n",
444 | "__________________________________________________________________________________________________\n",
445 | "batch_normalization_15 (BatchNo (None, 8, 8, 256) 1024 conv2d_15[0][0] \n",
446 | "__________________________________________________________________________________________________\n",
447 | "activation_15 (Activation) (None, 8, 8, 256) 0 batch_normalization_15[0][0] \n",
448 | "__________________________________________________________________________________________________\n",
449 | "spatial_dropout2d_13 (SpatialDr (None, 8, 8, 256) 0 activation_15[0][0] \n",
450 | "__________________________________________________________________________________________________\n",
451 | "up_sampling2d (UpSampling2D) (None, 16, 16, 256) 0 spatial_dropout2d_13[0][0] \n",
452 | "__________________________________________________________________________________________________\n",
453 | "concatenate (Concatenate) (None, 16, 16, 384) 0 spatial_dropout2d_11[0][0] \n",
454 | " up_sampling2d[0][0] \n",
455 | "__________________________________________________________________________________________________\n",
456 | "conv2d_16 (Conv2D) (None, 16, 16, 128) 442496 concatenate[0][0] \n",
457 | "__________________________________________________________________________________________________\n",
458 | "batch_normalization_16 (BatchNo (None, 16, 16, 128) 512 conv2d_16[0][0] \n",
459 | "__________________________________________________________________________________________________\n",
460 | "activation_16 (Activation) (None, 16, 16, 128) 0 batch_normalization_16[0][0] \n",
461 | "__________________________________________________________________________________________________\n",
462 | "spatial_dropout2d_14 (SpatialDr (None, 16, 16, 128) 0 activation_16[0][0] \n",
463 | "__________________________________________________________________________________________________\n",
464 | "conv2d_17 (Conv2D) (None, 16, 16, 128) 147584 spatial_dropout2d_14[0][0] \n",
465 | "__________________________________________________________________________________________________\n",
466 | "batch_normalization_17 (BatchNo (None, 16, 16, 128) 512 conv2d_17[0][0] \n",
467 | "__________________________________________________________________________________________________\n",
468 | "activation_17 (Activation) (None, 16, 16, 128) 0 batch_normalization_17[0][0] \n",
469 | "__________________________________________________________________________________________________\n",
470 | "spatial_dropout2d_15 (SpatialDr (None, 16, 16, 128) 0 activation_17[0][0] \n",
471 | "__________________________________________________________________________________________________\n",
472 | "up_sampling2d_1 (UpSampling2D) (None, 32, 32, 128) 0 spatial_dropout2d_15[0][0] \n",
473 | "__________________________________________________________________________________________________\n",
474 | "concatenate_1 (Concatenate) (None, 32, 32, 256) 0 spatial_dropout2d_9[0][0] \n",
475 | " up_sampling2d_1[0][0] \n",
476 | "__________________________________________________________________________________________________\n",
477 | "conv2d_18 (Conv2D) (None, 32, 32, 128) 295040 concatenate_1[0][0] \n",
478 | "__________________________________________________________________________________________________\n",
479 | "batch_normalization_18 (BatchNo (None, 32, 32, 128) 512 conv2d_18[0][0] \n",
480 | "__________________________________________________________________________________________________\n",
481 | "activation_18 (Activation) (None, 32, 32, 128) 0 batch_normalization_18[0][0] \n",
482 | "__________________________________________________________________________________________________\n",
483 | "spatial_dropout2d_16 (SpatialDr (None, 32, 32, 128) 0 activation_18[0][0] \n",
484 | "__________________________________________________________________________________________________\n",
485 | "conv2d_19 (Conv2D) (None, 32, 32, 128) 147584 spatial_dropout2d_16[0][0] \n",
486 | "__________________________________________________________________________________________________\n",
487 | "batch_normalization_19 (BatchNo (None, 32, 32, 128) 512 conv2d_19[0][0] \n",
488 | "__________________________________________________________________________________________________\n",
489 | "activation_19 (Activation) (None, 32, 32, 128) 0 batch_normalization_19[0][0] \n",
490 | "__________________________________________________________________________________________________\n",
491 | "spatial_dropout2d_17 (SpatialDr (None, 32, 32, 128) 0 activation_19[0][0] \n",
492 | "__________________________________________________________________________________________________\n",
493 | "up_sampling2d_2 (UpSampling2D) (None, 64, 64, 128) 0 spatial_dropout2d_17[0][0] \n",
494 | "__________________________________________________________________________________________________\n",
495 | "concatenate_2 (Concatenate) (None, 64, 64, 192) 0 spatial_dropout2d_7[0][0] \n",
496 | " up_sampling2d_2[0][0] \n",
497 | "__________________________________________________________________________________________________\n",
498 | "conv2d_20 (Conv2D) (None, 64, 64, 64) 110656 concatenate_2[0][0] \n",
499 | "__________________________________________________________________________________________________\n",
500 | "batch_normalization_20 (BatchNo (None, 64, 64, 64) 256 conv2d_20[0][0] \n",
501 | "__________________________________________________________________________________________________\n",
502 | "activation_20 (Activation) (None, 64, 64, 64) 0 batch_normalization_20[0][0] \n",
503 | "__________________________________________________________________________________________________\n",
504 | "spatial_dropout2d_18 (SpatialDr (None, 64, 64, 64) 0 activation_20[0][0] \n",
505 | "__________________________________________________________________________________________________\n",
506 | "conv2d_21 (Conv2D) (None, 64, 64, 64) 36928 spatial_dropout2d_18[0][0] \n",
507 | "__________________________________________________________________________________________________\n",
508 | "batch_normalization_21 (BatchNo (None, 64, 64, 64) 256 conv2d_21[0][0] \n",
509 | "__________________________________________________________________________________________________\n",
510 | "activation_21 (Activation) (None, 64, 64, 64) 0 batch_normalization_21[0][0] \n",
511 | "__________________________________________________________________________________________________\n",
512 | "spatial_dropout2d_19 (SpatialDr (None, 64, 64, 64) 0 activation_21[0][0] \n",
513 | "__________________________________________________________________________________________________\n",
514 | "up_sampling2d_3 (UpSampling2D) (None, 128, 128, 64) 0 spatial_dropout2d_19[0][0] \n",
515 | "__________________________________________________________________________________________________\n",
516 | "concatenate_3 (Concatenate) (None, 128, 128, 128 0 spatial_dropout2d_5[0][0] \n",
517 | " up_sampling2d_3[0][0] \n",
518 | "__________________________________________________________________________________________________\n",
519 | "conv2d_22 (Conv2D) (None, 128, 128, 64) 73792 concatenate_3[0][0] \n",
520 | "__________________________________________________________________________________________________\n",
521 | "batch_normalization_22 (BatchNo (None, 128, 128, 64) 256 conv2d_22[0][0] \n",
522 | "__________________________________________________________________________________________________\n",
523 | "activation_22 (Activation) (None, 128, 128, 64) 0 batch_normalization_22[0][0] \n",
524 | "__________________________________________________________________________________________________\n",
525 | "spatial_dropout2d_20 (SpatialDr (None, 128, 128, 64) 0 activation_22[0][0] \n",
526 | "__________________________________________________________________________________________________\n",
527 | "conv2d_23 (Conv2D) (None, 128, 128, 64) 36928 spatial_dropout2d_20[0][0] \n",
528 | "__________________________________________________________________________________________________\n",
529 | "batch_normalization_23 (BatchNo (None, 128, 128, 64) 256 conv2d_23[0][0] \n",
530 | "__________________________________________________________________________________________________\n",
531 | "activation_23 (Activation) (None, 128, 128, 64) 0 batch_normalization_23[0][0] \n",
532 | "__________________________________________________________________________________________________\n",
533 | "spatial_dropout2d_21 (SpatialDr (None, 128, 128, 64) 0 activation_23[0][0] \n",
534 | "__________________________________________________________________________________________________\n",
535 | "up_sampling2d_4 (UpSampling2D) (None, 256, 256, 64) 0 spatial_dropout2d_21[0][0] \n",
536 | "__________________________________________________________________________________________________\n",
537 | "concatenate_4 (Concatenate) (None, 256, 256, 96) 0 spatial_dropout2d_3[0][0] \n",
538 | " up_sampling2d_4[0][0] \n",
539 | "__________________________________________________________________________________________________\n",
540 | "conv2d_24 (Conv2D) (None, 256, 256, 32) 27680 concatenate_4[0][0] \n",
541 | "__________________________________________________________________________________________________\n",
542 | "batch_normalization_24 (BatchNo (None, 256, 256, 32) 128 conv2d_24[0][0] \n",
543 | "__________________________________________________________________________________________________\n",
544 | "activation_24 (Activation) (None, 256, 256, 32) 0 batch_normalization_24[0][0] \n",
545 | "__________________________________________________________________________________________________\n",
546 | "spatial_dropout2d_22 (SpatialDr (None, 256, 256, 32) 0 activation_24[0][0] \n",
547 | "__________________________________________________________________________________________________\n",
548 | "conv2d_25 (Conv2D) (None, 256, 256, 32) 9248 spatial_dropout2d_22[0][0] \n",
549 | "__________________________________________________________________________________________________\n",
550 | "batch_normalization_25 (BatchNo (None, 256, 256, 32) 128 conv2d_25[0][0] \n",
551 | "__________________________________________________________________________________________________\n",
552 | "activation_25 (Activation) (None, 256, 256, 32) 0 batch_normalization_25[0][0] \n",
553 | "__________________________________________________________________________________________________\n",
554 | "spatial_dropout2d_23 (SpatialDr (None, 256, 256, 32) 0 activation_25[0][0] \n",
555 | "__________________________________________________________________________________________________\n",
556 | "up_sampling2d_5 (UpSampling2D) (None, 512, 512, 32) 0 spatial_dropout2d_23[0][0] \n",
557 | "__________________________________________________________________________________________________\n",
558 | "concatenate_5 (Concatenate) (None, 512, 512, 64) 0 spatial_dropout2d_1[0][0] \n",
559 | " up_sampling2d_5[0][0] \n",
560 | "__________________________________________________________________________________________________\n",
561 | "conv2d_26 (Conv2D) (None, 512, 512, 32) 18464 concatenate_5[0][0] \n",
562 | "__________________________________________________________________________________________________\n",
563 | "batch_normalization_26 (BatchNo (None, 512, 512, 32) 128 conv2d_26[0][0] \n",
564 | "__________________________________________________________________________________________________\n",
565 | "activation_26 (Activation) (None, 512, 512, 32) 0 batch_normalization_26[0][0] \n",
566 | "__________________________________________________________________________________________________\n",
567 | "spatial_dropout2d_24 (SpatialDr (None, 512, 512, 32) 0 activation_26[0][0] \n",
568 | "__________________________________________________________________________________________________\n",
569 | "conv2d_27 (Conv2D) (None, 512, 512, 32) 9248 spatial_dropout2d_24[0][0] \n",
570 | "__________________________________________________________________________________________________\n",
571 | "batch_normalization_27 (BatchNo (None, 512, 512, 32) 128 conv2d_27[0][0] \n",
572 | "__________________________________________________________________________________________________\n",
573 | "activation_27 (Activation) (None, 512, 512, 32) 0 batch_normalization_27[0][0] \n",
574 | "__________________________________________________________________________________________________\n",
575 | "spatial_dropout2d_25 (SpatialDr (None, 512, 512, 32) 0 activation_27[0][0] \n",
576 | "__________________________________________________________________________________________________\n",
577 | "up_sampling2d_6 (UpSampling2D) (None, 1024, 1024, 3 0 spatial_dropout2d_25[0][0] \n",
578 | "__________________________________________________________________________________________________\n",
579 | "concatenate_6 (Concatenate) (None, 1024, 1024, 4 0 activation_1[0][0] \n",
580 | " up_sampling2d_6[0][0] \n",
581 | "__________________________________________________________________________________________________\n",
582 | "conv2d_28 (Conv2D) (None, 1024, 1024, 1 6928 concatenate_6[0][0] \n",
583 | "__________________________________________________________________________________________________\n",
584 | "batch_normalization_28 (BatchNo (None, 1024, 1024, 1 64 conv2d_28[0][0] \n",
585 | "__________________________________________________________________________________________________\n",
586 | "activation_28 (Activation) (None, 1024, 1024, 1 0 batch_normalization_28[0][0] \n",
587 | "__________________________________________________________________________________________________\n",
588 | "conv2d_29 (Conv2D) (None, 1024, 1024, 1 2320 activation_28[0][0] \n",
589 | "__________________________________________________________________________________________________\n",
590 | "batch_normalization_29 (BatchNo (None, 1024, 1024, 1 64 conv2d_29[0][0] \n",
591 | "__________________________________________________________________________________________________\n",
592 | "activation_29 (Activation) (None, 1024, 1024, 1 0 batch_normalization_29[0][0] \n",
593 | "__________________________________________________________________________________________________\n",
594 | "conv2d_30 (Conv2D) (None, 1024, 1024, 5 85 activation_29[0][0] \n",
595 | "==================================================================================================\n",
596 | "Total params: 2,940,453\n",
597 | "Trainable params: 2,935,717\n",
598 | "Non-trainable params: 4,736\n",
599 | "__________________________________________________________________________________________________\n",
600 | "None\n",
601 | "2021-04-10 21:41:10.386712: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)\n",
602 | "2021-04-10 21:41:10.387119: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2199995000 Hz\n",
603 | "Epoch 1/1000\n",
604 | "2021-04-10 21:41:14.372966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8\n",
605 | "2021-04-10 21:41:16.496101: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11\n",
606 | "2021-04-10 21:41:16.958184: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11\n",
607 | "127/127 [==============================] - 186s 1s/step - loss: 0.6690 - val_loss: 0.8054\n",
608 | "\n",
609 | "Epoch 00001: val_loss improved from inf to 0.80537, saving model to ../output/models/100421_214107_bs_2_arch_unet_img_1024_nbcl_5_gamma_3_aug_horz,gamma_drp_0,1_segmentation_model.h5\n",
610 | "Epoch 2/1000\n",
611 | "127/127 [==============================] - 56s 439ms/step - loss: 0.5926 - val_loss: 0.6993\n",
612 | "\n",
613 | "Epoch 00002: val_loss improved from 0.80537 to 0.69931, saving model to ../output/models/100421_214107_bs_2_arch_unet_img_1024_nbcl_5_gamma_3_aug_horz,gamma_drp_0,1_segmentation_model.h5\n",
614 | "Epoch 3/1000\n",
615 | "122/127 [===========================>..] - ETA: 1s - loss: 0.5382Traceback (most recent call last):\n",
616 |   " File \"train_seg.py\", line 159, in <module>\n",
617 | " callbacks=[save_best, history]\n",
618 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py\", line 1105, in fit\n",
619 | " callbacks.on_train_batch_end(end_step, logs)\n",
620 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py\", line 454, in on_train_batch_end\n",
621 | " self._call_batch_hook(ModeKeys.TRAIN, 'end', batch, logs=logs)\n",
622 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py\", line 296, in _call_batch_hook\n",
623 | " self._call_batch_end_hook(mode, batch, logs)\n",
624 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py\", line 316, in _call_batch_end_hook\n",
625 | " self._call_batch_hook_helper(hook_name, batch, logs)\n",
626 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py\", line 356, in _call_batch_hook_helper\n",
627 | " hook(batch, logs)\n",
628 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py\", line 1020, in on_train_batch_end\n",
629 | " self._batch_update_progbar(batch, logs)\n",
630 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py\", line 1084, in _batch_update_progbar\n",
631 | " logs = tf_utils.to_numpy_or_python_type(logs)\n",
632 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/tf_utils.py\", line 514, in to_numpy_or_python_type\n",
633 | " return nest.map_structure(_to_single_numpy_or_python_type, tensors)\n",
634 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py\", line 659, in map_structure\n",
635 | " structure[0], [func(*x) for x in entries],\n",
636 |   " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py\", line 659, in <listcomp>\n",
637 | " structure[0], [func(*x) for x in entries],\n",
638 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/tf_utils.py\", line 510, in _to_single_numpy_or_python_type\n",
639 | " x = t.numpy()\n",
640 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py\", line 1071, in numpy\n",
641 | " maybe_arr = self._numpy() # pylint: disable=protected-access\n",
642 | " File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py\", line 1037, in _numpy\n",
643 | " return self._numpy_internal()\n",
644 | "KeyboardInterrupt\n",
645 | "2021-04-10 21:46:00.331754: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.\n",
646 | "\t [[{{node PyFunc}}]]\n"
647 | ],
648 | "name": "stdout"
649 | }
650 | ]
651 | },
652 | {
653 | "cell_type": "code",
654 | "metadata": {
655 | "id": "XM0JOVbycN6p",
656 | "colab": {
657 | "base_uri": "https://localhost:8080/"
658 | },
659 | "executionInfo": {
660 | "status": "ok",
661 | "timestamp": 1618091291231,
662 | "user_tz": -120,
663 | "elapsed": 35964,
664 | "user": {
665 | "displayName": "André Pedersen",
666 | "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhREXm38EMyE4bW69c9dve33zb8BzIBazDnE89F=s64",
667 | "userId": "14390719858710152933"
668 | }
669 | },
670 | "outputId": "7c4f4358-bc91-45c5-d0f5-5d1a8b0b958a"
671 | },
672 | "source": [
673 | "# Evaluate trained segmentation model\n",
674 | "# %matplotlib inline\n",
675 | "!python eval_seg.py"
676 | ],
677 | "execution_count": 14,
678 | "outputs": [
679 | {
680 | "output_type": "stream",
681 | "text": [
682 | "2021-04-10 21:47:36.185161: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
683 | "2021-04-10 21:47:39.073906: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set\n",
684 | "2021-04-10 21:47:39.074751: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1\n",
685 | "2021-04-10 21:47:39.108609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
686 | "2021-04-10 21:47:39.109177: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \n",
687 | "pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\n",
688 | "coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s\n",
689 | "2021-04-10 21:47:39.109221: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
690 | "2021-04-10 21:47:39.111353: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11\n",
691 | "2021-04-10 21:47:39.111448: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11\n",
692 | "2021-04-10 21:47:39.112925: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n",
693 | "2021-04-10 21:47:39.113304: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n",
694 | "2021-04-10 21:47:39.114981: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n",
695 | "2021-04-10 21:47:39.115576: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11\n",
696 | "2021-04-10 21:47:39.115778: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8\n",
697 | "2021-04-10 21:47:39.115893: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
698 | "2021-04-10 21:47:39.116490: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
699 | "2021-04-10 21:47:39.117002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n",
700 | "2021-04-10 21:47:39.117464: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set\n",
701 | "2021-04-10 21:47:39.117584: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
702 | "2021-04-10 21:47:39.118110: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \n",
703 | "pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\n",
704 | "coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s\n",
705 | "2021-04-10 21:47:39.118143: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
706 | "2021-04-10 21:47:39.118193: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11\n",
707 | "2021-04-10 21:47:39.118220: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11\n",
708 | "2021-04-10 21:47:39.118257: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n",
709 | "2021-04-10 21:47:39.118288: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n",
710 | "2021-04-10 21:47:39.118316: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n",
711 | "2021-04-10 21:47:39.118339: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11\n",
712 | "2021-04-10 21:47:39.118361: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8\n",
713 | "2021-04-10 21:47:39.118438: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
714 | "2021-04-10 21:47:39.118993: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
715 | "2021-04-10 21:47:39.119615: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n",
716 | "2021-04-10 21:47:39.119682: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\n",
717 | "2021-04-10 21:47:39.623324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
718 | "2021-04-10 21:47:39.623385: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0 \n",
719 | "2021-04-10 21:47:39.623404: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N \n",
720 | "2021-04-10 21:47:39.623668: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
721 | "2021-04-10 21:47:39.624336: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
722 | "2021-04-10 21:47:39.625045: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
723 | "2021-04-10 21:47:39.625686: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13994 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
724 | "1 Physical GPUs, 1 Logical GPUs\n",
725 | "\n",
726 | "Current model name:\n",
727 | "050421_021022_bs_2_arch_unet_img_1024_nbcl_5_gamma_3_aug_horz,gamma_drp_0,1_segmentation_model\n",
728 | "DM: 0% 0/59 [00:00, ?it/s]2021-04-10 21:47:40.714645: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)\n",
729 | "2021-04-10 21:47:40.715041: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2199995000 Hz\n",
730 | "2021-04-10 21:47:41.203128: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8\n",
731 | "2021-04-10 21:47:43.220995: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11\n",
732 | "2021-04-10 21:47:43.695134: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11\n",
733 | "DM: 100% 59/59 [00:29<00:00, 2.03it/s]\n",
734 | "Print table summary of the results: \n",
735 | "| metrics | cancer | mammary gland | pectoral muscle | nipple | overall |\n",
736 | "|---------|--------|---------------|-----------------|--------|---------|\n",
737 | "| DSC | 0.341 | 0.977 | 0.945 | 0.544 | 0.702 |\n",
738 | "| IOU | 0.261 | 0.955 | 0.916 | 0.432 | 0.641 |\n"
739 | ],
740 | "name": "stdout"
741 | }
742 | ]
743 | }
744 | ]
745 | }
--------------------------------------------------------------------------------
/python/accumulated_gradients.py:
--------------------------------------------------------------------------------
1 | import tensorflow.keras.backend as K
2 | from tensorflow.keras.optimizers import Optimizer
3 |
4 |
5 | # @FIXME: This implementation has a memory leak proportional to "steps_per_update"
6 | class AccumOptimizer(Optimizer):
7 |     """Wraps an existing Keras optimizer to perform gradient accumulation,
8 |     emulating training with a larger effective batch size.
9 |     # Arguments
10 |         optimizer: an instance of a keras optimizer (all keras
11 |             optimizers currently available are supported);
12 |         steps_per_update: number of batches to accumulate gradients over
13 |     # Returns
14 |         a new keras optimizer.
15 |     """
16 | def __init__(self, optimizer, steps_per_update=1, **kwargs):
17 | super(AccumOptimizer, self).__init__("AccumOptimizer", **kwargs)
18 | self.optimizer = optimizer
19 | with K.name_scope(self.__class__.__name__):
20 | self.steps_per_update = steps_per_update
21 | self.iterations = K.variable(0, dtype='int64', name='iterations')
22 | self.cond = K.equal(self.iterations % self.steps_per_update, 0)
23 | self.lr = self.optimizer.lr
24 | self.optimizer.lr = K.switch(self.cond, self.optimizer.lr, 0.)
25 | for attr in ['momentum', 'rho', 'beta_1', 'beta_2']:
26 | if hasattr(self.optimizer, attr):
27 | value = getattr(self.optimizer, attr)
28 | setattr(self, attr, value)
29 | setattr(self.optimizer, attr, K.switch(self.cond, value, 1 - 1e-7))
30 | for attr in self.optimizer.get_config():
31 | if not hasattr(self, attr):
32 | value = getattr(self.optimizer, attr)
33 | setattr(self, attr, value)
34 |         # Override the original get_gradients method so it returns the accumulated (averaged) gradients.
35 | def get_gradients(loss, params):
36 | return [ag / self.steps_per_update for ag in self.accum_grads]
37 | self.optimizer.get_gradients = get_gradients
38 | def get_updates(self, loss, params):
39 | self.updates = [
40 | K.update_add(self.iterations, 1),
41 | K.update_add(self.optimizer.iterations, K.cast(self.cond, 'int64')),
42 | ]
43 | # gradient accumulation
44 | self.accum_grads = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
45 | grads = self.get_gradients(loss, params)
46 | for g, ag in zip(grads, self.accum_grads):
47 | self.updates.append(K.update(ag, K.switch(self.cond, ag * 0, ag + g)))
48 | # inheriting updates of original optimizer
49 | self.updates.extend(self.optimizer.get_updates(loss, params)[1:])
50 | self.weights.extend(self.optimizer.weights)
51 | return self.updates
52 | def get_config(self):
53 | iterations = K.eval(self.iterations)
54 | K.set_value(self.iterations, 0)
55 | config = self.optimizer.get_config()
56 | K.set_value(self.iterations, iterations)
57 | return config
58 |
59 |
60 | #'''
61 | # @TODO: This implementation is not compatible with TF 2.*
62 | class AdamAccumulate(Optimizer):
63 |
64 | def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
65 | epsilon=None, decay=0., amsgrad=False, accum_iters=1, **kwargs):
66 | if accum_iters < 1:
67 | raise ValueError('accum_iters must be >= 1')
68 | super(AdamAccumulate, self).__init__("AdamAccumulate", **kwargs)
69 | with K.name_scope(self.__class__.__name__):
70 | self.iterations = K.variable(0, dtype='int64', name='iterations')
71 | self.lr = K.variable(lr, name='lr')
72 | self.beta_1 = K.variable(beta_1, name='beta_1')
73 | self.beta_2 = K.variable(beta_2, name='beta_2')
74 | self.decay = K.variable(decay, name='decay')
75 | if epsilon is None:
76 | epsilon = K.epsilon()
77 | self.epsilon = epsilon
78 | self.initial_decay = decay
79 | self.amsgrad = amsgrad
80 | self.accum_iters = K.variable(accum_iters, K.dtype(self.iterations))
81 | self.accum_iters_float = K.cast(self.accum_iters, K.floatx())
82 |
83 | def get_updates(self, loss, params):
84 | grads = self.get_gradients(loss, params)
85 | self.updates = [K.update_add(self.iterations, 1)]
86 |
87 | lr = self.lr
88 |
89 | completed_updates = K.cast(K.tf.floordiv(self.iterations, self.accum_iters), K.floatx())
90 |
91 | if self.initial_decay > 0:
92 | lr = lr * (1. / (1. + self.decay * completed_updates))
93 |
94 | t = completed_updates + 1
95 |
96 | lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) / (1. - K.pow(self.beta_1, t)))
97 |
98 | # self.iterations incremented after processing a batch
99 | # batch: 1 2 3 4 5 6 7 8 9
100 | # self.iterations: 0 1 2 3 4 5 6 7 8
101 | # update_switch = 1: x x (if accum_iters=4)
102 | update_switch = K.equal((self.iterations + 1) % self.accum_iters, 0)
103 | update_switch = K.cast(update_switch, K.floatx())
104 |
105 | ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
106 | vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
107 | gs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
108 |
109 | if self.amsgrad:
110 | vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
111 | else:
112 | vhats = [K.zeros(1) for _ in params]
113 |
114 | self.weights = [self.iterations] + ms + vs + vhats
115 |
116 | for p, g, m, v, vhat, tg in zip(params, grads, ms, vs, vhats, gs):
117 |
118 | sum_grad = tg + g
119 | avg_grad = sum_grad / self.accum_iters_float
120 |
121 | m_t = (self.beta_1 * m) + (1. - self.beta_1) * avg_grad
122 | v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(avg_grad)
123 |
124 | if self.amsgrad:
125 | vhat_t = K.maximum(vhat, v_t)
126 | p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)
127 | self.updates.append(K.update(vhat, (1 - update_switch) * vhat + update_switch * vhat_t))
128 | else:
129 | p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)
130 |
131 | self.updates.append(K.update(m, (1 - update_switch) * m + update_switch * m_t))
132 | self.updates.append(K.update(v, (1 - update_switch) * v + update_switch * v_t))
133 | self.updates.append(K.update(tg, (1 - update_switch) * sum_grad))
134 | new_p = p_t
135 |
136 | # Apply constraints.
137 | if getattr(p, 'constraint', None) is not None:
138 | new_p = p.constraint(new_p)
139 |
140 | self.updates.append(K.update(p, (1 - update_switch) * p + update_switch * new_p))
141 | return self.updates
142 |
143 | def get_config(self):
144 | config = {'lr': float(K.get_value(self.lr)),
145 | 'beta_1': float(K.get_value(self.beta_1)),
146 | 'beta_2': float(K.get_value(self.beta_2)),
147 | 'decay': float(K.get_value(self.decay)),
148 | 'epsilon': self.epsilon,
149 | 'amsgrad': self.amsgrad}
150 | base_config = super(AdamAccumulate, self).get_config()
151 | return dict(list(base_config.items()) + list(config.items()))
152 | #'''
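153 |
154 |
155 | # --- Usage sketch (added; names and hyperparameters are illustrative only) ---
156 | # AccumOptimizer wraps a standard Keras optimizer so that gradients are
157 | # averaged over `steps_per_update` batches before a weight update is applied,
158 | # emulating a larger effective batch size:
159 | #
160 | #   from tensorflow.keras.optimizers import Adam
161 | #   opt = AccumOptimizer(Adam(1e-3), steps_per_update=4)
162 | #   model.compile(optimizer=opt, loss="categorical_crossentropy")
163 | #
164 | # With batch_size=2 this approximates training with batch size 8, at the cost
165 | # of more iterations per update (and, per the FIXME above, growing memory use).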
--------------------------------------------------------------------------------
/python/architectures.py:
--------------------------------------------------------------------------------
1 | from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization, Reshape, Permute, Activation, Input, \
2 |     add, multiply
3 | from tensorflow.keras.layers import concatenate, Dropout
4 | from tensorflow.keras.models import Model
5 | # in TF 2.x, Lambda (and Activation) are exposed directly under tensorflow.keras.layers
6 | from tensorflow.keras.layers import Lambda
7 | import tensorflow.keras.backend as K
8 |
9 |
10 | '''
11 | Code based on the repo:
12 | https://github.com/lixiaolei1982/Keras-Implementation-of-U-Net-R2U-Net-Attention-U-Net-Attention-R2U-Net.-/blob/master/network.py
13 | '''
14 |
15 | def up_and_concate(down_layer, layer, data_format='channels_first'):
16 | if data_format == 'channels_first':
17 | in_channel = down_layer.get_shape().as_list()[1]
18 | else:
19 | in_channel = down_layer.get_shape().as_list()[3]
20 |
21 | # up = Conv2DTranspose(out_channel, [2, 2], strides=[2, 2])(down_layer)
22 | up = UpSampling2D(size=(2, 2), data_format=data_format)(down_layer)
23 |
24 | if data_format == 'channels_first':
25 | my_concat = Lambda(lambda x: K.concatenate([x[0], x[1]], axis=1))
26 | else:
27 | my_concat = Lambda(lambda x: K.concatenate([x[0], x[1]], axis=3))
28 |
29 | concate = my_concat([up, layer])
30 |
31 | return concate
32 |
33 |
34 | def attention_up_and_concate(down_layer, layer, data_format='channels_first'):
35 | if data_format == 'channels_first':
36 | in_channel = down_layer.get_shape().as_list()[1]
37 | else:
38 | in_channel = down_layer.get_shape().as_list()[3]
39 |
40 | # up = Conv2DTranspose(out_channel, [2, 2], strides=[2, 2])(down_layer)
41 | up = UpSampling2D(size=(2, 2), data_format=data_format)(down_layer)
42 |
43 | layer = attention_block_2d(x=layer, g=up, inter_channel=in_channel // 4, data_format=data_format)
44 |
45 | if data_format == 'channels_first':
46 | my_concat = Lambda(lambda x: K.concatenate([x[0], x[1]], axis=1))
47 | else:
48 | my_concat = Lambda(lambda x: K.concatenate([x[0], x[1]], axis=3))
49 |
50 | concate = my_concat([up, layer])
51 | return concate
52 |
53 |
54 | def attention_block_2d(x, g, inter_channel, data_format='channels_first'):
55 | # theta_x(?,g_height,g_width,inter_channel)
56 |
57 | theta_x = Conv2D(inter_channel, [1, 1], strides=[1, 1], data_format=data_format)(x)
58 |
59 | # phi_g(?,g_height,g_width,inter_channel)
60 |
61 | phi_g = Conv2D(inter_channel, [1, 1], strides=[1, 1], data_format=data_format)(g)
62 |
63 | # f(?,g_height,g_width,inter_channel)
64 |
65 | f = Activation('relu')(add([theta_x, phi_g]))
66 |
67 | # psi_f(?,g_height,g_width,1)
68 |
69 | psi_f = Conv2D(1, [1, 1], strides=[1, 1], data_format=data_format)(f)
70 |
71 | rate = Activation('sigmoid')(psi_f)
72 |
73 | # rate(?,x_height,x_width)
74 |
75 | # att_x(?,x_height,x_width,x_channel)
76 |
77 | att_x = multiply([x, rate])
78 |
79 | return att_x
80 |
81 |
82 | def res_block(input_layer, out_n_filters, batch_normalization=False, kernel_size=[3, 3], stride=[1, 1],
83 |
84 | padding='same', data_format='channels_first'):
85 | if data_format == 'channels_first':
86 | input_n_filters = input_layer.get_shape().as_list()[1]
87 | else:
88 | input_n_filters = input_layer.get_shape().as_list()[3]
89 |
90 | layer = input_layer
91 | for i in range(2):
92 | layer = Conv2D(out_n_filters // 4, [1, 1], strides=stride, padding=padding, data_format=data_format)(layer)
93 | if batch_normalization:
94 | layer = BatchNormalization()(layer)
95 | layer = Activation('relu')(layer)
96 | layer = Conv2D(out_n_filters // 4, kernel_size, strides=stride, padding=padding, data_format=data_format)(layer)
97 | layer = Conv2D(out_n_filters, [1, 1], strides=stride, padding=padding, data_format=data_format)(layer)
98 |
99 | if out_n_filters != input_n_filters:
100 | skip_layer = Conv2D(out_n_filters, [1, 1], strides=stride, padding=padding, data_format=data_format)(
101 | input_layer)
102 | else:
103 | skip_layer = input_layer
104 | out_layer = add([layer, skip_layer])
105 | return out_layer
106 |
107 |
108 | # Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net)
109 | def rec_res_block(input_layer, out_n_filters, batch_normalization=False, kernel_size=[3, 3], stride=[1, 1],
110 |
111 | padding='same', data_format='channels_first'):
112 | if data_format == 'channels_first':
113 | input_n_filters = input_layer.get_shape().as_list()[1]
114 | else:
115 | input_n_filters = input_layer.get_shape().as_list()[3]
116 |
117 | if out_n_filters != input_n_filters:
118 | skip_layer = Conv2D(out_n_filters, [1, 1], strides=stride, padding=padding, data_format=data_format)(
119 | input_layer)
120 | else:
121 | skip_layer = input_layer
122 |
123 | layer = skip_layer
124 | for j in range(2):
125 |
126 | for i in range(2):
127 | if i == 0:
128 |
129 | layer1 = Conv2D(out_n_filters, kernel_size, strides=stride, padding=padding, data_format=data_format)(
130 | layer)
131 | if batch_normalization:
132 | layer1 = BatchNormalization()(layer1)
133 | layer1 = Activation('relu')(layer1)
134 | layer1 = Conv2D(out_n_filters, kernel_size, strides=stride, padding=padding, data_format=data_format)(
135 | add([layer1, layer]))
136 | if batch_normalization:
137 | layer1 = BatchNormalization()(layer1)
138 | layer1 = Activation('relu')(layer1)
139 | layer = layer1
140 |
141 | out_layer = add([layer, skip_layer])
142 | return out_layer
143 |
144 | ########################################################################################################
145 | # Define the neural network
146 | def unet(img_w, img_h, n_label, data_format='channels_first'):
147 | inputs = Input((3, img_w, img_h))
148 | x = inputs
149 | depth = 4
150 | features = 64
151 | skips = []
152 | for i in range(depth):
153 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
154 | x = Dropout(0.2)(x)
155 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
156 | skips.append(x)
157 |         x = MaxPooling2D((2, 2), data_format=data_format)(x)
158 | features = features * 2
159 |
160 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
161 | x = Dropout(0.2)(x)
162 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
163 |
164 | for i in reversed(range(depth)):
165 | features = features // 2
166 |         # x = attention_up_and_concate(x, skips[i], data_format=data_format)
167 | x = UpSampling2D(size=(2, 2), data_format=data_format)(x)
168 | x = concatenate([skips[i], x], axis=1)
169 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
170 | x = Dropout(0.2)(x)
171 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
172 |
173 | conv6 = Conv2D(n_label, (1, 1), padding='same', data_format=data_format)(x)
174 |     conv7 = Activation('sigmoid')(conv6)
175 | model = Model(inputs=inputs, outputs=conv7)
176 |
177 | #model.compile(optimizer=Adam(lr=1e-5), loss=[focal_loss()], metrics=['accuracy', dice_coef])
178 | return model
179 |
180 |
181 | ########################################################################################################
182 | #Attention U-Net
183 | def att_unet(img_w, img_h, n_label, data_format='channels_first'):
184 | inputs = Input((3, img_w, img_h))
185 | x = inputs
186 | depth = 4
187 | features = 64
188 | skips = []
189 | for i in range(depth):
190 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
191 | x = Dropout(0.2)(x)
192 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
193 | skips.append(x)
194 |         x = MaxPooling2D((2, 2), data_format=data_format)(x)  # respect the data_format argument (was hardcoded to 'channels_first')
195 | features = features * 2
196 |
197 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
198 | x = Dropout(0.2)(x)
199 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
200 |
201 | for i in reversed(range(depth)):
202 | features = features // 2
203 | x = attention_up_and_concate(x, skips[i], data_format=data_format)
204 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
205 | x = Dropout(0.2)(x)
206 | x = Conv2D(features, (3, 3), activation='relu', padding='same', data_format=data_format)(x)
207 |
208 | conv6 = Conv2D(n_label, (1, 1), padding='same', data_format=data_format)(x)
209 |     conv7 = Activation('sigmoid')(conv6)
210 | model = Model(inputs=inputs, outputs=conv7)
211 |
212 | #model.compile(optimizer=Adam(lr=1e-5), loss=[focal_loss()], metrics=['accuracy', dice_coef])
213 | return model
214 |
215 |
216 | ########################################################################################################
217 | #Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net)
218 | def r2_unet(img_w, img_h, n_label, data_format='channels_first'):
219 | inputs = Input((3, img_w, img_h))
220 | x = inputs
221 | depth = 4
222 | features = 64
223 | skips = []
224 | for i in range(depth):
225 | x = rec_res_block(x, features, data_format=data_format)
226 | skips.append(x)
227 | x = MaxPooling2D((2, 2), data_format=data_format)(x)
228 |
229 | features = features * 2
230 |
231 | x = rec_res_block(x, features, data_format=data_format)
232 |
233 | for i in reversed(range(depth)):
234 | features = features // 2
235 | x = up_and_concate(x, skips[i], data_format=data_format)
236 | x = rec_res_block(x, features, data_format=data_format)
237 |
238 | conv6 = Conv2D(n_label, (1, 1), padding='same', data_format=data_format)(x)
239 |     conv7 = Activation('sigmoid')(conv6)
240 | model = Model(inputs=inputs, outputs=conv7)
241 | #model.compile(optimizer=Adam(lr=1e-6), loss=[dice_coef_loss], metrics=['accuracy', dice_coef])
242 | return model
243 |
244 |
245 | ########################################################################################################
246 | #Attention R2U-Net
247 | def att_r2_unet(img_w, img_h, n_label, data_format='channels_first'):
248 | inputs = Input((3, img_w, img_h))
249 | x = inputs
250 | depth = 4
251 | features = 64
252 | skips = []
253 | for i in range(depth):
254 | x = rec_res_block(x, features, data_format=data_format)
255 | skips.append(x)
256 | x = MaxPooling2D((2, 2), data_format=data_format)(x)
257 |
258 | features = features * 2
259 |
260 | x = rec_res_block(x, features, data_format=data_format)
261 |
262 | for i in reversed(range(depth)):
263 | features = features // 2
264 | x = attention_up_and_concate(x, skips[i], data_format=data_format)
265 | x = rec_res_block(x, features, data_format=data_format)
266 |
267 | conv6 = Conv2D(n_label, (1, 1), padding='same', data_format=data_format)(x)
268 |     conv7 = Activation('sigmoid')(conv6)
269 | model = Model(inputs=inputs, outputs=conv7)
270 | #model.compile(optimizer=Adam(lr=1e-6), loss=[dice_coef_loss], metrics=['accuracy', dice_coef])
271 | return model
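272 |
273 |
274 | # --- Illustrative notes (added; not part of the original implementation) ---
275 | # The attention gate in attention_block_2d computes, for skip features x and
276 | # gating signal g (theta, phi and psi being 1x1 convolutions):
277 | #
278 | #   att_x = x * sigmoid( psi( relu( theta(x) + phi(g) ) ) )
279 | #
280 | # A minimal build sketch, assuming the default channels-first layout and
281 | # 3-channel inputs of size 256x256 with 5 output classes:
282 | #
283 | #   model = att_unet(img_w=256, img_h=256, n_label=5)
284 | #   model.summary()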
--------------------------------------------------------------------------------
/python/aug.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 |
4 |
5 | # vertical flip
6 | def add_flip_vert(input_im, output):
7 | input_im = np.flip(input_im, axis=0)
8 | output = np.flip(output, axis=0)
9 | return input_im, output
10 |
11 |
12 | # horizontal flip
13 | def add_flip_horz(input_im, output):
14 | input_im = np.flip(input_im, axis=1)
15 | output = np.flip(output, axis=1)
16 | return input_im, output
17 |
18 |
19 | # lossless rot90
20 | def add_rotation_ll(input_im, output):
21 |     k = np.random.randint(1, 4)  # randomly choose rotation angle: +-90, +-180, +-270 (random_integers is deprecated)
22 |
23 | # rotate
24 | input_im = np.rot90(input_im, k)
25 | output = np.rot90(output, k)
26 |
27 | return input_im, output
28 |
29 |
30 | # gamma transform
31 | def add_gamma(input_im, output, r_limits):
32 | r_min, r_max = r_limits
33 |
34 | # randomly choose gamma factor
35 | r = np.random.uniform(r_min, r_max)
36 |
37 |     # apply transform on normalized intensities, then rescale back to [0, 255];
38 |     # applying the power directly to raw [0, 255] values would saturate at 255
39 |     input_im = np.clip(np.round(255. * ((input_im / 255.) ** r)), a_min=0, a_max=255)
40 |
41 |     # NOTE: no division by 255 here; the batch generator normalizes after augmentation
42 |
43 | return input_im, output
44 |
45 |
46 | def augment_numpy(x, y, aug):
47 | # only apply aug if "aug" is not empty
48 | if bool(aug):
49 | if 'vert' in aug:
50 |             if np.random.randint(0, 2) == 1:  # 50/50 chance (random_integers is deprecated)
51 |                 x, y = add_flip_vert(x, y)
52 |
53 |         if 'horz' in aug:
54 |             if np.random.randint(0, 2) == 1:
55 |                 x, y = add_flip_horz(x, y)
56 |
57 |         if 'rot90' in aug:
58 |             if np.random.randint(0, 2) == 1:
59 |                 x, y = add_rotation_ll(x, y)
60 |
61 |         if 'gamma' in aug:
62 |             if np.random.randint(0, 2) == 1:
63 | x, y = add_gamma(x, y, aug["gamma"])
64 |
65 | return x, y
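66 |
67 |
68 | # --- Usage sketch (added; the key names follow the checks above, while the
69 | # gamma limits are example values, not project defaults) ---
70 | # aug = {"horz": True, "gamma": (0.75, 1.5)}  # 50/50 horizontal flip + random gamma
71 | # x_aug, y_aug = augment_numpy(x, y, aug)     # x in [0, 255], y a one-hot mask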
--------------------------------------------------------------------------------
/python/batch_generator.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import tensorflow as tf
4 | from tensorflow.keras.layers.experimental import preprocessing
5 | import tensorflow_addons as tfa
6 | from utils import *
7 | import h5py
8 | import scipy
9 | import matplotlib.pyplot as plt
10 | from aug import augment_numpy
11 |
12 |
13 | # https://towardsdatascience.com/overcoming-data-preprocessing-bottlenecks-with-tensorflow-data-service-nvidia-dali-and-other-d6321917f851
14 | def get_dataset(batch_size, data_path, num_classes, shuffle=True, out_shape=(299, 299), train_mode=False):
15 |
16 | # parse TFRecord
17 | def parse_image_function(example_proto):
18 | image_feature_description = {
19 | 'label': tf.io.FixedLenFeature([], tf.int64),
20 | 'label_normal': tf.io.FixedLenFeature([], tf.int64),
21 | 'image': tf.io.FixedLenFeature([], tf.string)
22 | }
23 |
24 | features = tf.io.parse_single_example(example_proto, image_feature_description)
25 | image = tf.io.decode_raw(features['image'], tf.uint8)
26 | image.set_shape([1 * 299 * 299])
27 | image = tf.reshape(image, [299, 299, 1]) # original image size is 299x299x1
28 | image = tf.image.grayscale_to_rgb(image) # convert gray image to RGB image relevant for using pretrained CNNs and finetuning
29 | image = tf.image.resize(image, out_shape)
30 |
31 | if num_classes == 2:
32 | label = tf.cast(features['label_normal'], tf.int32)
33 | elif num_classes == 5:
34 | label = tf.cast(features['label'], tf.int32)
35 | elif (num_classes == [2, 5]):
36 | label = [tf.cast(features['label_normal'], tf.int32), tf.cast(features['label'], tf.int32)]
37 | elif (num_classes == [5, 2]):
38 | label = [tf.cast(features['label'], tf.int32), tf.cast(features['label_normal'], tf.int32)]
39 | else:
40 |             print("Invalid num_classes given. Valid values are {2, 5, [2, 5], [5, 2]}.")
41 | exit()
42 |
43 | if type(label) == list:
44 | label = {"cl" + str(i+1): tf.one_hot(label[i], num_classes[i]) for i in range(len(label))}
45 | else:
46 | label = tf.one_hot(label, num_classes) # create one-hotted GT compatible with softmax, also convenient for multi-class...
47 |
48 | return image, label
49 |
50 | # blur filter
51 | def blur(image, label):
52 | image = tfa.image.gaussian_filter2d(image=image,
53 | filter_shape=(11, 11), sigma=0.8)
54 | return image, label
55 |
56 | # rescale filter
57 | def rescale(image, label):
58 | image = preprocessing.Rescaling(1.0 / 255)(image)
59 | return image, label
60 |
61 | # augmentation filters
62 | def augment(image, label):
63 | '''
64 | data_augmentation = tf.keras.Sequential(
65 | [
66 | tf.keras.layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
67 | tf.keras.layers.experimental.preprocessing.RandomRotation(0.1),
68 | tf.keras.layers.experimental.preprocessing.RandomZoom(0.1) # Be careful doing these types of augmentations as the lesion might fall outside the image, especially for zoom and shift
69 |
70 | ]
71 | ) # @TODO: Does both horizontal AND vertical make sense in this case?
72 | image = data_augmentation(image)
73 | '''
74 | return image, label
75 |
76 | autotune = tf.data.experimental.AUTOTUNE
77 | options = tf.data.Options()
78 | options.experimental_deterministic = False
79 | records = tf.data.Dataset.list_files(data_path, shuffle=shuffle).with_options(options)
80 |
81 | # load from TFRecord files
82 | ds = tf.data.TFRecordDataset(records, num_parallel_reads=autotune).repeat()
83 | ds = ds.map(parse_image_function, num_parallel_calls=autotune)
84 | #ds = ds.map(dilate, num_parallel_calls=autotune)
85 | #ds = ds.map(blur, num_parallel_calls=autotune) # @ TODO: Should this augmentation method be mixed in with the rest of the methods? Perhaps it already exists in TF by default?
86 | ds = ds.batch(batch_size)
87 | ds = ds.map(rescale, num_parallel_calls=autotune)
88 | # @TODO: Something wrong here
89 | if train_mode:
90 | #ds = ds.map(lambda image, label: (augment(image, label)), num_parallel_calls=autotune) # only apply augmentation in training mode
91 |
92 | #'''
93 | # https://www.tensorflow.org/tutorials/images/data_augmentation#option_2_apply_the_preprocessing_layers_to_your_dataset
94 |         # @ However, enabling augmentation seems to result in a memory leak (quite a big one, actually), so this should be avoided for now.
95 | ds = ds.map(
96 | lambda image, label: (tf.image.convert_image_dtype(image, tf.float32), label)
97 | ).cache( # @TODO: Is it this cache() that produces memory leak?
98 | ).map(
99 | lambda image, label: (tf.image.random_flip_left_right(image), label)
100 | ).map(
101 | lambda image, label: (tf.image.random_flip_up_down(image), label)
102 | #).map(
103 | # lambda image, label: (tf.image.random_contrast(image, lower=0.0, upper=1.0), label)
104 | )
105 | #'''
106 |
107 | ds = ds.prefetch(autotune)
108 | return ds
109 |
110 |
111 |
112 | def batch_gen(file_list, batch_size, aug={}, class_names=[], input_shape=(512, 512, 1), epochs=1,
113 | mask_flag=False, fine_tune=False, inference_mode=False):
114 | while True: # <- necessary for end of training (last epoch)
115 | for i in range(epochs):
116 | batch = 0
117 | nb_classes = len(class_names) + 1
118 | class_names = np.array(class_names)
119 |
120 | # shuffle samples for each epoch
121 | np.random.shuffle(file_list) # patients are shuffled, but chunks are after each other
122 |
123 | input_batch = []
124 | output_batch = []
125 |
126 | for filename in file_list:
127 |
128 | # read whole volume as an array
129 | with h5py.File(filename, 'r') as f:
130 | data = np.expand_dims(np.array(f["data"]).astype(np.float32), axis=-1)
131 | output = []
132 | for class_ in class_names:
133 | output.append(np.expand_dims(np.array(f[class_]).astype(np.float32), axis=-1))
134 | output = np.concatenate(output, axis=-1)
135 |
136 | # need to filter all classes of interest within "_mammary_gland" away from the gland class
137 | if ("_pectoral_muscle" in class_names) and (nb_classes > 2):
138 | tmp1 = output[..., np.argmax(class_names == "_pectoral_muscle")]
139 | for c in class_names:
140 | if c != "_pectoral_muscle":
141 | tmp2 = output[..., np.argmax(class_names == c)]
142 | tmp2[tmp1 == 1] = 0
143 | output[..., np.argmax(class_names == c)] = tmp2
144 |
145 | # filter "_cancer" class away from all other relevant classes
146 | if ("_cancer" in class_names) and (nb_classes > 2):
147 | tmp1 = output[..., np.argmax(class_names == "_cancer")]
148 | for c in class_names:
149 | if c != "_cancer":
150 | tmp2 = output[..., np.argmax(class_names == c)]
151 | tmp2 = np.clip(tmp2 - tmp1, a_min=0, a_max=1)
152 | output[..., np.argmax(class_names == c)] = tmp2
153 |
154 |                 # filter "_nipple" class away from all other relevant classes
155 | if ("_nipple" in class_names) and (nb_classes > 2):
156 | tmp1 = output[..., np.argmax(class_names == "_nipple")]
157 | for c in class_names:
158 | if c != "_nipple":
159 | tmp2 = output[..., np.argmax(class_names == c)]
160 | tmp2 = np.clip(tmp2 - tmp1, a_min=0, a_max=1)
161 | output[..., np.argmax(class_names == c)] = tmp2
162 |
163 |                 # filter "_thick_vessels" class away from all other relevant classes
164 | if ("_thick_vessels" in class_names) and (nb_classes > 2):
165 | tmp1 = output[..., np.argmax(class_names == "_thick_vessels")]
166 | for c in class_names:
167 | if c != "_thick_vessels":
168 | tmp2 = output[..., np.argmax(class_names == c)]
169 | tmp2 = np.clip(tmp2 - tmp1, a_min=0, a_max=1)
170 | output[..., np.argmax(class_names == c)] = tmp2
171 |
172 | # add background class to output
173 | tmp = np.sum(output, axis=-1)
174 | tmp = (tmp == 0).astype(np.float32)
175 | output = np.concatenate([np.expand_dims(tmp, axis=-1), output], axis=-1)
176 |
177 | # augment
178 | data, output = augment_numpy(data, output, aug)
179 |
180 | # intensity normalize (0, 255) => (0, 1)
181 | data /= 255.
182 |
183 | input_batch.append(data)
184 | output_batch.append(output)
185 |
186 | batch += 1
187 | if batch == batch_size:
188 | # reset and yield
189 | batch = 0
190 | x_ = np.array(input_batch)
191 | y_ = np.array(output_batch)
192 | input_batch = []
193 | output_batch = []
194 |
195 | if inference_mode:
196 | yield filename, (x_, y_)
197 | else:
198 | yield x_, y_
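199 |
200 |
201 | # --- Usage sketch (added for illustration; the TFRecord glob mirrors the call
202 | # in eval.py and is an assumption about the local data layout) ---
203 | # ds = get_dataset(batch_size=32, data_path="../data/DDSM_mammography_data/training10_0/*",
204 | #                  num_classes=2, shuffle=True, out_shape=(299, 299), train_mode=True)
205 | # images, labels = next(iter(ds))  # images: (32, 299, 299, 3) floats in [0, 1]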
--------------------------------------------------------------------------------
/python/create_data.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import tensorflow as tf
4 | from tensorflow.keras.layers.experimental import preprocessing
5 | import tensorflow_addons as tfa
6 | from utils import *
7 | import h5py
8 | import cv2
9 | import imutils
10 | from tqdm import tqdm
11 | import matplotlib.pyplot as plt
12 |
13 |
14 | def preprocess_segmentation_samples():
15 |
16 | data_path = "../data/CSAW-S/CSAW-S/CsawS/anonymized_dataset/"
17 | save_path = "../data/CSAW-S_preprocessed"
18 |
19 | classes_ = [
20 | "", "_axillary_lymph_nodes", "_calcifications", "_cancer",\
21 | "_foreign_object", "_mammary_gland", "_nipple",\
22 | "_non-mammary_tissue", "_pectoral_muscle", "_skin", "_text",\
23 | "_thick_vessels", "_unclassified"
24 | ]
25 |
26 | id_ = "_mammary_gland"
27 | # scale_ = 2560 / 3328 # width / height
28 |     img_size = 1024 # output image size (1024 x 1024), aspect ratio preserved
29 | clahe_flag = True # True
30 |
31 | save_path += "_" + str(img_size) + "_" + str(clahe_flag) + "/"
32 |
33 | if not os.path.exists(save_path):
34 | os.makedirs(save_path)
35 |
36 | for patient in tqdm(os.listdir(data_path), "DM: "):
37 | curr_path = data_path + patient + "/"
38 |
39 | patient_save_path = save_path + patient + "/"
40 | if not os.path.exists(patient_save_path):
41 | os.makedirs(patient_save_path)
42 |
43 | # get scans in patient folder
44 | scans = []
45 | for file_ in os.listdir(curr_path):
46 | if id_ in file_:
47 | scans.append(file_.split(id_)[0])
48 |
49 | # for each scan in patient, extract relevant data in .h5 file
50 | for scan in scans:
51 | scan_id = scan.split("_")[1]
52 |
53 | create_save_flag = True
54 |
55 | for class_ in classes_:
56 | # read image and resize (but keep aspect ratio)
57 | img = cv2.imread(curr_path + scan + class_ + ".png", 0) # uint8
58 | orig_shape = img.shape
59 | img = imutils.resize(img, height=img_size) # uint8
60 | new_shape = img.shape
61 |
62 | if create_save_flag:
63 | f = h5py.File(patient_save_path + scan_id + "_" + str(orig_shape[0]) + "_" + str(orig_shape[1]) +\
64 | "_" + str(new_shape[0]) + "_" + str(new_shape[1]) + ".h5", "w")
65 | create_save_flag = False
66 |
67 | if class_ == "":
68 | class_ = "data"
69 |
70 | # apply CLAHE for contrast enhancement
71 | if clahe_flag:
72 | clahe_create = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
73 | img = clahe_create.apply(img)
74 | else:
75 | img = minmaxscale(img.astype(np.float32), scale_=1).astype(np.uint8)
76 |
77 | if img.shape[1] < img.shape[0]:
78 | tmp = np.zeros((img_size, img_size), dtype=np.uint8)
79 | img_shapes = img.shape
80 | tmp[:img_shapes[0], :img_shapes[1]] = img
81 | img = tmp
82 |
83 | f.create_dataset(class_, data=img, compression="gzip", compression_opts=4)
84 |
85 |             # finally close the file when finished writing to it (once per scan)
86 |             f.close()
87 |
88 |
89 | # preprocess the data
90 | preprocess_segmentation_samples()
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
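100 | # --- Sketch (added): reading back one of the generated files. The filename
101 | # encodes the original and resized shapes, and each annotated class is stored
102 | # as its own dataset, with the raw image under the key "data":
103 | #
104 | #   with h5py.File(save_path + "<patient>/<scan_id>_<oh>_<ow>_<nh>_<nw>.h5", "r") as f:
105 | #       image = f["data"][()]       # CLAHE-enhanced, zero-padded uint8 image
106 | #       cancer = f["_cancer"][()]   # binary mask for the "_cancer" class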
--------------------------------------------------------------------------------
/python/eval.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | from batch_generator import get_dataset
4 | from tensorflow.keras.models import load_model
5 | from tqdm import tqdm
6 | from sklearn.metrics import classification_report, auc, roc_curve
7 | import matplotlib.pyplot as plt
8 | from tf_explain.core import GradCAM, IntegratedGradients
9 | import tensorflow as tf
10 | from utils import flatten_
11 |
12 |
13 | # turn off eager execution
14 | #tf.compat.v1.disable_eager_execution()
15 |
16 |
17 | # whether or not to use GPU for training (-1 == no GPU, else GPU)
18 | os.environ["CUDA_VISIBLE_DEVICES"] = "0" # set to 0 to use GPU
19 |
20 | # allow growth, only use the GPU memory required to solve a specific task (makes room for doing stuff in parallel)
21 | gpus = tf.config.experimental.list_physical_devices('GPU')
22 | if gpus:
23 | try:
24 | # Currently, memory growth needs to be the same across GPUs
25 | for gpu in gpus:
26 | tf.config.experimental.set_memory_growth(gpu, True)
27 | logical_gpus = tf.config.experimental.list_logical_devices('GPU')
28 | print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
29 | except RuntimeError as e:
30 | # Memory growth must be set before GPUs have been initialized
31 | print(e)
32 |
33 | # paths
34 | data_path = "../data/DDSM_mammography_data/"
35 | save_path = "../output/models/"
36 | history_path = "../output/history/"
37 |
38 | name = "270321_003646_classifier_model" # binary
39 | #name = "270321_041002_bs_64_arch_4_imgsize_160_nbcl_[2, 5]_gamma_3_classifier_model" # MTL
40 |
41 | BATCH_SIZE = 16
42 | num_classes = 2 #eval(name.split("nbcl_")[-1].split("_")[0])
43 | SHUFFLE_FLAG = False
44 | instance_size = (299, 299, 3)
45 | N_SAMPLES = 55890
46 | XAI_FLAG = False # False
47 | # th = 0.5 # threshold to use for binarizing prediction
48 |
49 | # get independent test set to evaluate trained model
50 | test_set = get_dataset(BATCH_SIZE, data_path + "training10_4/*", num_classes, SHUFFLE_FLAG, instance_size[:-1], train_mode=False)
51 |
52 | # load trained model (for deployment or use in diagnostics - frozen, and thus deterministic, at least in theory)
53 | model = load_model(save_path + name + ".h5", compile=False)
54 |
55 | if type(num_classes) != list:
56 | preds = [[]]
57 | gts = [[]]
58 | else:
59 | preds = []
60 | gts = []
61 | for n in num_classes:
62 | preds.append([])
63 | gts.append([])
64 |
65 | for cnt, (x_curr, y_curr) in tqdm(enumerate(test_set), total=int(N_SAMPLES / 5 / BATCH_SIZE)):
66 |
67 | pred_conf = model.predict(x_curr)
68 |
69 | if type(num_classes) == list:
70 | for i, p in enumerate(pred_conf):
71 | pred_final = np.argmax(p, axis=1)
72 | preds[i].append(pred_final)
73 |
74 | g = y_curr["cl" + str(i+1)]
75 | gt_class = np.argmax(g, axis=1)
76 | gts[i].append(gt_class)
77 |
78 | else:
79 |         pred_final = np.argmax(pred_conf, axis=1) # using argmax for two classes is equivalent to using th=0.5, i.e. choosing the most confident class as the prediction
80 | preds[0].append(pred_final)
81 |
82 | gt_class = np.argmax(y_curr, axis=1)
83 | gts[0].append(gt_class)
84 |
85 | # if XAI_FLAG is enabled, we will use Explainable AI (XAI) to assess whether the CNN is doing what it should (i.e., what in the image it relies on to solve the task)
86 | # NOTE: Will only display first element in batch
87 | if XAI_FLAG:
88 | for i in range(x_curr.shape[0]):
89 | if (pred_final[i] == 1):
90 | img = tf.keras.preprocessing.image.img_to_array(x_curr[i])
91 | data = ([img], None)
92 |
93 | explainer = GradCAM()
94 | #explainer = IntegratedGradients()
95 | grid = explainer.explain(data, model, class_index=pred_final[i])
96 |
97 | fig, ax = plt.subplots(1, 2)
98 | ax[0].imshow(img, cmap="gray")
99 | ax[1].imshow(grid, cmap="gray")
100 | ax[1].set_title("Pred: " + str(pred_final[i]) + ", GT: " + str(gt_class[i]))
101 | for i in range(2):
102 | ax[i].axis("off")
103 | plt.tight_layout()
104 | plt.show()
105 |
106 | if cnt == int(N_SAMPLES / 5 / BATCH_SIZE):
107 | break
108 |
109 | if type(num_classes) != list:
110 | num_classes = [num_classes]
111 |
112 | # for each task, calculate metrics
113 | for i, (ps, gs) in enumerate(zip(preds, gts)):
114 |
115 | ps = flatten_(ps).astype(np.int32)
116 | gs = flatten_(gs).astype(np.int32)
117 |
118 | # get summary statistics (performance metrics)
119 |     summary = classification_report(gs, ps)  # sklearn expects (y_true, y_pred)
120 | print(summary)
121 |
122 | # @TODO: Plot ROC and report AUC as additional performance metric(s)
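123 |
124 | # A sketch for the TODO above (added; untested), reusing the sklearn helpers
125 | # imported at the top. It assumes the positive-class confidences are collected
126 | # in a list "confs" inside the prediction loop, e.g. confs.append(pred_conf[:, 1]):
127 | #
128 | #   fpr, tpr, _ = roc_curve(flatten_(gts[0]), flatten_(confs))
129 | #   print("AUC:", auc(fpr, tpr))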
--------------------------------------------------------------------------------
/python/eval_seg.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | from batch_generator import get_dataset
4 | from tensorflow.keras.models import load_model
5 | from tqdm import tqdm
6 | from sklearn.metrics import classification_report, auc, roc_curve
7 | import matplotlib.pyplot as plt
8 | from tf_explain.core import GradCAM, IntegratedGradients
9 | import tensorflow as tf
10 | from utils import flatten_, DSC, IOU, argmax_keepdims, one_hot_fix, post_process, random_jet_colormap, make_subplots, post_process_mammary_gland
11 | from batch_generator import batch_gen
12 | import cv2
13 | from prettytable import PrettyTable, MARKDOWN
14 |
15 |
16 | # change print precision
17 | #np.set_printoptions(precision=4)
18 | np.set_printoptions(formatter={'float': lambda x: "{0:0.6f}".format(x)})
19 |
20 |
21 | # whether or not to use GPU for training (-1 == no GPU, else GPU)
22 | os.environ["CUDA_VISIBLE_DEVICES"] = "0" # set to 0 to use GPU
23 |
24 | # allow growth, only use the GPU memory required to solve a specific task (makes room for doing stuff in parallel)
25 | gpus = tf.config.experimental.list_physical_devices('GPU')
26 | if gpus:
27 | try:
28 | # Currently, memory growth needs to be the same across GPUs
29 | for gpu in gpus:
30 | tf.config.experimental.set_memory_growth(gpu, True)
31 | logical_gpus = tf.config.experimental.list_logical_devices('GPU')
32 | print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
33 | except RuntimeError as e:
34 | # Memory growth must be set before GPUs have been initialized
35 | print(e)
36 |
37 | # paths
38 | data_path = "../data/" # CSAW-S_preprocessed_1024_True/" # "..data/CSAW-S/CSAW-S/CsawS/anonymized_dataset/" #"../data/DDSM_mammography_data/"
39 | save_path = "../output/models/"
40 | history_path = "../output/history/"
41 |
42 | name = "030421_003122_bs_12_arch_unet_imgsize_512_nbcl_7_gamma_3_segmentation_model" # best
43 | name = "030421_152539_bs_12_arch_unet_imgsize_512_nbcl_6_gamma_3_segmentation_model" # tested new spatial dropout scheme in U-Net
44 | name = "030421_185828_bs_12_arch_unet_imgsize_512_nbcl_6_gamma_3_segmentation_model" # with aug
45 | name = "030421_204721_bs_12_arch_unet_img_512_nbcl_5_gamma_3_aug_vert,horz,rot90,gamma_segmentation_model" # reduced gamma aug and removed skin class
46 | #name = "030421_214156_bs_12_arch_unet_img_512_nbcl_5_gamma_3_aug_horz-gamma_segmentation_model"
47 | #name = "030421_235137_bs_4_arch_resunetpp_img_512_nbcl_5_gamma_3_aug_horz,gamma_drp_0.2_segmentation_model" # new unmodified ResUNet++ model
48 | #name = "040421_170033_bs_4_arch_resunetpp_img_512_nbcl_5_gamma_3_aug_horz,gamma_drp_0,2_segmentation_model" #
49 | #name = "040421_195756_bs_8_arch_unet_img_512_nbcl_5_gamma_3_aug_horz,gamma_drp_0,1_segmentation_model" # U-Net + spatial dropout 0.1 + batch size 8 (BEST SO FAR!)
50 | name = "050421_021022_bs_2_arch_unet_img_1024_nbcl_5_gamma_3_aug_horz,gamma_drp_0,1_segmentation_model" # (BEST) 1024 input, batch size 8, U-Net (struggled to converge, didnt overfit), however, MUCH better performance on tumour class and slightly better overall, (only slightly worse on nipple class, but likely not significantly)
51 |
52 | print("\nCurrent model name:")
53 | print(name)
54 |
55 | img_size = int(name.split("img_")[-1].split("_")[0])
56 | data_path += "CSAW-S_preprocessed_" + str(img_size) + "_True/"
57 |
58 | plot_flag = False # False
59 | N_PATIENTS = 150
60 | train_val_split = 0.8
61 | #img_size = int(data_path.split("_")[-2])
62 | input_shape = (512, 512, 1)
63 | class_names = ["_cancer", "_mammary_gland", "_pectoral_muscle", "_nipple"] # ["_cancer", "_mammary_gland", "_pectoral_muscle"] # has to be updated dependend on which model "name" is used
64 | nb_classes = len(class_names) + 1
65 |
66 | all_class_names = np.array(["background"] + [x[1:] for x in class_names])
67 |
68 | # create test set
69 | data_set = []
70 | for patient in os.listdir(data_path):
71 | curr = data_path + patient + "/"
72 | tmp = [curr + x for x in os.listdir(curr)]
73 | data_set.append(tmp)
74 |
75 | val1 = int(N_PATIENTS * train_val_split)
76 | train_set = data_set[:val1]
77 | val_set = data_set[val1:]
78 |
79 | train_set = flatten_(train_set)
80 | val_set = flatten_(val_set)
81 |
82 | chosen_set = val_set
83 |
84 | # define data generator
85 | generator = batch_gen(chosen_set, batch_size=1, aug={}, class_names=class_names, input_shape=input_shape, epochs=1, mask_flag=False, fine_tune=False, inference_mode=True)
86 |
87 | # load trained model (for deployment or usage in diagnostics - frozen, thus deterministic, at least in theory)
88 | model = load_model(save_path + name + ".h5", compile=False)
89 |
90 | # random colormap
91 | some_cmap = random_jet_colormap()
92 |
93 | dsc_ = []
94 | iou_ = []
95 |
96 | dsc_classes_ = []
97 | iou_classes_ = []
98 |
99 | cnt = 0
100 | for filename, (x, y) in tqdm(generator, "DM: ", total=len(chosen_set)):
101 | conf = model.predict(x)
102 | pred = np.argmax(conf, axis=-1)
103 | # print(np.unique(pred))
104 | pred = one_hot_fix(pred, nb_classes)
105 |
106 | # post-process pred and GT to match original image size for proper evaluation
107 | # print(filename)
108 | tmp = filename.split("/")[-1].split(".")[0].split("_")
109 | orig_shape = (int(tmp[1]), int(tmp[2]))
110 | new_shape = (int(tmp[3]), int(tmp[4]))
111 |
112 | '''
113 | if plot_flag:
114 | x_orig = post_process(np.squeeze(x, axis=0), new_shape, orig_shape, resize=True, interpolation=cv2.INTER_LINEAR)
115 | y_orig = post_process(np.squeeze(y, axis=0), new_shape, orig_shape, resize=True, interpolation=cv2.INTER_LINEAR)
116 | pred_orig = post_process(np.squeeze(pred, axis=0), new_shape, orig_shape, resize=True, interpolation=cv2.INTER_LINEAR)
117 | '''
118 |
119 | ## first post processing to go back to original shape
120 | x = post_process(np.squeeze(x, axis=0), new_shape, orig_shape, resize=False, interpolation=cv2.INTER_LINEAR)
121 | y = post_process(np.squeeze(y, axis=0), new_shape, orig_shape, resize=False, interpolation=cv2.INTER_LINEAR)
122 | pred = post_process(np.squeeze(pred, axis=0), new_shape, orig_shape, resize=False, interpolation=cv2.INTER_LINEAR)
123 | conf = post_process(np.squeeze(conf, axis=0), new_shape, orig_shape, resize=False, interpolation=cv2.INTER_LINEAR)
124 |
125 | ## second post processing to fix mammary gland prediction
126 | pred = post_process_mammary_gland(pred, all_class_names)
127 |
128 | if plot_flag:
129 | make_subplots(x, y, pred, conf, img_size, all_class_names, some_cmap)
130 |
131 | # per-class (micro) DSC and IOU
132 | tmp1 = []
133 | tmp2 = []
134 | for c in range(1, nb_classes):
135 | tmp1.append(DSC(np.expand_dims(y[..., c], axis=-1), np.expand_dims(pred[..., c], axis=-1), remove_bg=False))
136 | tmp2.append(IOU(np.expand_dims(y[..., c], axis=-1), np.expand_dims(pred[..., c], axis=-1), remove_bg=False))
137 | dsc_classes_.append(tmp1)
138 | iou_classes_.append(tmp2)
139 |
140 | # overall DSC and IOU (macro-averaged)
141 | dsc_curr = DSC(y, pred)
142 | iou_curr = IOU(y, pred)
143 |
144 | dsc_.append(dsc_curr)
145 | iou_.append(iou_curr)
146 |
147 | cnt += 1
148 |     if cnt == len(chosen_set):  # stop once the whole set has been processed exactly once
149 | break
150 |
151 |
152 | dsc_ = np.array(dsc_)
153 | iou_ = np.array(iou_)
154 | dsc_classes_ = np.array(dsc_classes_)
155 | iou_classes_ = np.array(iou_classes_)
156 |
157 | table = PrettyTable()
158 | table.field_names = ["metrics"] + [c[1:].replace("_", " ") for c in class_names] + ["overall"]
159 | table.add_row(["DSC"] + list(np.mean(dsc_classes_, axis=0)) + [np.mean(dsc_, axis=0)])
160 | table.add_row(["IOU"] + list(np.mean(iou_classes_, axis=0)) + [np.mean(iou_, axis=0)])
161 | table.set_style(MARKDOWN)
162 | table.float_format = ".3"
163 |
164 | print("Table summary of the results:")
165 | print(table)
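166 |
167 | # Optional sketch: since the table uses the MARKDOWN style, it can be written
168 | # straight to a file and pasted into the README (the path below is just an example):
169 | #with open("../output/" + name + "_results.md", "w") as f:
170 | #    f.write(table.get_string())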
--------------------------------------------------------------------------------
/python/models.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | from tensorflow.keras.applications import InceptionV3
3 | from tensorflow.keras.applications import ResNet50
4 | from tensorflow.keras.layers import Input, Dense, Convolution2D, MaxPooling2D, Dropout, Flatten, SpatialDropout2D, \
5 | ZeroPadding2D, Activation, AveragePooling2D, UpSampling2D, BatchNormalization, ConvLSTM2D, \
6 | TimeDistributed, Concatenate, Lambda, Reshape
7 | from tensorflow.keras.models import Model, Sequential
8 |
9 |
10 | # see here for already built-in pretrained architectures:
11 | # https://keras.io/api/applications/
12 |
13 | def get_arch(MODEL_ARCH, instance_size, num_classes):
14 |
15 | # basic
16 | if MODEL_ARCH == 1:
17 | # define model (some naive network)
18 | model = Sequential() # example of creation of TF-Keras model using the Sequential
19 |         model.add(Convolution2D(32, kernel_size=(3, 3), input_shape=instance_size))
20 | model.add(BatchNormalization())
21 | model.add(Activation('relu'))
22 |         model.add(Convolution2D(64, (3, 3)))
23 | model.add(BatchNormalization())
24 | model.add(Activation('relu'))
25 | model.add(MaxPooling2D(pool_size=(2, 2)))
26 | model.add(Dropout(0.25))
27 | model.add(Flatten())
28 | model.add(Dense(64))
29 | model.add(BatchNormalization())
30 | model.add(Dropout(0.5))
31 | model.add(Activation('relu'))
32 | model.add(Dense(num_classes, activation='softmax'))
33 |
34 | elif MODEL_ARCH == 2:
35 | # InceptionV3 (typical example arch) - personal preference for CNN classification (however, quite expensive and might be overkill in a lot of scenarios)
36 | some_input = Input(shape=instance_size)
37 | base_model = InceptionV3(include_top=False, weights="imagenet", pooling=None, input_tensor=some_input)
38 | x = base_model.output
39 | x = Flatten()(x)
40 | x = Dense(64)(x)
41 | x = BatchNormalization()(x)
42 | x = Dropout(0.5)(x)
43 | x = Activation('relu')(x)
44 | x = Dense(num_classes, activation='softmax')(x)
45 | model = Model(inputs=base_model.input, outputs=x) # example of creation of TF-Keras model using the functional API
46 |
47 | elif MODEL_ARCH == 3:
48 | # ResNet-50, another very popular arch, can be done similarly as for InceptionV3 above
49 | some_input = Input(shape=instance_size)
50 | base_model = ResNet50(include_top=False, weights="imagenet", pooling=None, input_tensor=some_input)
51 | x = base_model.output
52 | x = Flatten()(x)
53 | x = Dense(64)(x)
54 | x = BatchNormalization()(x)
55 | x = Dropout(0.5)(x)
56 | x = Activation('relu')(x)
57 | x = Dense(num_classes, activation='softmax')(x)
58 | model = Model(inputs=base_model.input, outputs=x)
59 |
60 | elif MODEL_ARCH == 4:
61 | # Example of a multi-task model, performing both binary classification AND multi-class classification simultaneously, distinguishing
62 | # normal tissue from breast cancer tissue, as well as separating different types of breast cancer tissue
63 | some_input = Input(shape=instance_size)
64 | base_model = InceptionV3(include_top=False, weights="imagenet", pooling=None, input_tensor=some_input)
65 | x = base_model.output
66 | x = Flatten()(x)
67 |
68 | # first output branch
69 | y1 = Dense(64)(x)
70 | y1 = BatchNormalization()(y1)
71 | y1 = Dropout(0.5)(y1)
72 | y1 = Activation('relu')(y1)
73 | y1 = Dense(num_classes[0], activation='softmax', name="cl1")(y1)
74 |
75 | # second output branch
76 | y2 = Dense(64)(x)
77 | y2 = BatchNormalization()(y2)
78 | y2 = Dropout(0.5)(y2)
79 | y2 = Activation('relu')(y2)
80 | y2 = Dense(num_classes[1], activation='softmax', name="cl2")(y2)
81 |
82 | model = Model(inputs=base_model.input, outputs=[y1, y2]) # example of multi-task network through the functional API
83 |
84 | else:
85 | print("please choose supported models: {1, 2, 3, 4}")
86 | exit()
87 |
88 | return model
89 |
90 |
91 | def convolution_block_2d(x, nr_of_convolutions, use_bn=False, spatial_dropout=None, renorm=False):
92 | for i in range(2):
93 | x = Convolution2D(nr_of_convolutions, 3, padding='same')(x)
94 | if use_bn:
95 | x = BatchNormalization(renorm=renorm)(x)
96 | x = Activation('relu')(x)
97 | if spatial_dropout:
98 | x = SpatialDropout2D(spatial_dropout)(x)
99 |
100 | return x
101 |
102 |
103 | def encoder_block_2d(x, nr_of_convolutions, use_bn=False, spatial_dropout=None, renorm=False):
104 |
105 | x_before_downsampling = convolution_block_2d(x, nr_of_convolutions, use_bn, spatial_dropout, renorm)
106 | x = MaxPooling2D((2, 2))(x_before_downsampling)
107 |
108 | return x, x_before_downsampling
109 |
110 |
111 | def decoder_block_2d(x, nr_of_convolutions, cross_over_connection=None, use_bn=False, spatial_dropout=None, renorm=False):
112 |
113 | x = UpSampling2D((2, 2))(x)
114 | if cross_over_connection is not None:
115 | x = Concatenate()([cross_over_connection, x])
116 | x = convolution_block_2d(x, nr_of_convolutions, use_bn, spatial_dropout, renorm)
117 |
118 | return x
119 |
120 |
121 | def encoder_block(x, nr_of_convolutions, use_bn=False, spatial_dropout=None, dims=2, renorm=False):
122 | if dims == 2:
123 | return encoder_block_2d(x, nr_of_convolutions, use_bn, spatial_dropout, renorm)
124 | else:
125 | raise ValueError
126 |
127 |
128 | def decoder_block(x, nr_of_convolutions, cross_over_connection=None, use_bn=False, spatial_dropout=None, dims=2, renorm=False):
129 | if dims == 2:
130 | return decoder_block_2d(x, nr_of_convolutions, cross_over_connection, use_bn, spatial_dropout, renorm)
131 | else:
132 | raise ValueError
133 |
134 |
135 | def convolution_block(x, nr_of_convolutions, use_bn=False, spatial_dropout=None, dims=2, renorm=False):
136 | if dims == 2:
137 | return convolution_block_2d(x, nr_of_convolutions, use_bn, spatial_dropout, renorm)
138 | else:
139 | raise ValueError
140 |
141 |
142 | class Unet():
143 | def __init__(self, input_shape, nb_classes):
144 | if len(input_shape) != 3 and len(input_shape) != 4:
145 | raise ValueError('Input shape must have 3 or 4 dimensions')
146 | if len(input_shape) == 3:
147 | self.dims = 2
148 | else:
149 | self.dims = 3
150 | if nb_classes <= 1:
151 | raise ValueError('Segmentation classes must be > 1')
152 | self.input_shape = input_shape
153 | self.nb_classes = nb_classes
154 | self.convolutions = None
155 | self.encoder_use_bn = True
156 | self.decoder_use_bn = True
157 | self.encoder_spatial_dropout = None
158 | self.decoder_spatial_dropout = None
159 | self.bottom_level = 4
160 | self.dropout_level_threshold = 1
161 | self.renorm = False
162 |
163 | def set_convolutions(self, convolutions):
164 | #if len(convolutions) != self.get_depth()*2 + 1:
165 | # raise ValueError('Nr of convolutions must have length ' + str(self.get_depth()*2 + 1))
166 | self.convolutions = convolutions
167 |
168 | def set_bottom_level(self, level):
169 | self.bottom_level = level
170 |
171 | def set_renorm(self, renorm):
172 | self.renorm = renorm
173 |
174 | def get_depth(self):
175 | #'''
176 | init_size = max(self.input_shape[:-1])
177 | size = init_size
178 | depth = 0
179 | while size % 2 == 0 and size > self.bottom_level:
180 | size /= 2
181 | depth += 1
182 | #'''
183 |
184 | # custom depth defined by the predefined convolutions
185 | #depth = (len(self.convolutions) - 1) / 2
186 |
187 | return depth
188 |
189 | def get_dice_loss(self, use_background=False):
190 | def dice_loss(target, output, epsilon=1e-10):
191 | smooth = 1.
192 | dice = 0
193 |
194 |             for class_idx in range(0 if use_background else 1, self.nb_classes):
195 |                 if self.dims == 2:
196 |                     output1 = output[:, :, :, class_idx]
197 |                     target1 = target[:, :, :, class_idx]
198 |                 else:
199 |                     output1 = output[:, :, :, :, class_idx]
200 |                     target1 = target[:, :, :, :, class_idx]
201 | intersection1 = tf.reduce_sum(output1 * target1)
202 | union1 = tf.reduce_sum(output1 * output1) + tf.reduce_sum(target1 * target1)
203 | dice += (2. * intersection1 + smooth) / (union1 + smooth)
204 |
205 | if use_background:
206 | dice /= self.nb_classes
207 | else:
208 | dice /= (self.nb_classes - 1)
209 |
210 | return tf.clip_by_value(1. - dice, 0., 1. - epsilon)
211 |
212 | return dice_loss
213 |
214 |
215 | def create(self):
216 | """
217 | Create model and return it
218 | :return: keras model
219 | """
220 |
221 | input_layer = Input(shape=self.input_shape)
222 | x = input_layer
223 |
224 | init_size = max(self.input_shape[:-1])
225 | size = init_size
226 |
227 | convolutions = self.convolutions
228 | if convolutions is None:
229 | # Create convolutions
230 | convolutions = []
231 | nr_of_convolutions = 8
232 | for i in range(self.get_depth()):
233 | convolutions.append(int(nr_of_convolutions))
234 | nr_of_convolutions *= 2
235 | convolutions.append(int(nr_of_convolutions))
236 | for i in range(self.get_depth()):
237 | convolutions.append(int(nr_of_convolutions))
238 | nr_of_convolutions /= 2
239 |
240 | depth = self.get_depth()
241 |
242 | curr_encoder_spatial_dropout = self.encoder_spatial_dropout
243 | curr_decoder_spatial_dropout = self.decoder_spatial_dropout
244 |
245 | connection = {}
246 | i = 0
247 | while size % 2 == 0 and size > self.bottom_level:
248 | if i < self.dropout_level_threshold: # only apply dropout at the bottom (deep features)
249 | curr_encoder_spatial_dropout = None
250 | else:
251 | curr_encoder_spatial_dropout = self.encoder_spatial_dropout
252 | x, connection[size] = encoder_block(x, convolutions[i], self.encoder_use_bn, curr_encoder_spatial_dropout, self.dims, self.renorm)
253 | size /= 2
254 | i += 1
255 |
256 | x = convolution_block(x, convolutions[i], self.encoder_use_bn, curr_encoder_spatial_dropout, self.dims, self.renorm)
257 | i += 1
258 |
259 | steps = int(i)
260 | j = 0
261 | while size < init_size:
262 | if steps - j - 1 <= self.dropout_level_threshold: # only apply dropout at the bottom (deep features)
263 | curr_decoder_spatial_dropout = None
264 | else:
265 | curr_decoder_spatial_dropout = self.decoder_spatial_dropout
266 | size *= 2
267 | x = decoder_block(x, convolutions[i], connection[size], self.decoder_use_bn, curr_decoder_spatial_dropout, self.dims, self.renorm)
268 | i += 1
269 | j += 1
270 |
271 | if self.dims == 2:
272 | x = Convolution2D(self.nb_classes, 1, activation='softmax')(x)
273 | else:
274 | raise ValueError
275 |
276 | return Model(inputs=input_layer, outputs=x)
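277 |
278 |
279 | # Minimal usage sketch (illustrative values): build and compile a 2D U-Net for
280 | # 512x512 grayscale input with 5 output classes, mirroring the setup in train_seg.py.
281 | if __name__ == "__main__":
282 |     network = Unet(input_shape=(512, 512, 1), nb_classes=5)
283 |     network.encoder_spatial_dropout = 0.1
284 |     network.decoder_spatial_dropout = 0.1
285 |     network.set_convolutions([16, 32, 32, 64, 64, 128, 128, 256, 128, 128, 64, 64, 32, 32, 16])
286 |     model = network.create()
287 |     model.compile(optimizer='adam', loss=network.get_dice_loss(use_background=True))
288 |     model.summary()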
--------------------------------------------------------------------------------
/python/resunetpp.py:
--------------------------------------------------------------------------------
1 | """
2 | ResUNet++ architecture in Keras TensorFlow
3 | Based on implementation from:
4 | https://github.com/DebeshJha/ResUNetPlusPlus/blob/master/m_resunet.py
5 | """
6 | import os
7 | import numpy as np
8 | import cv2
9 | import tensorflow as tf
10 | from tensorflow.keras.layers import *
11 | from tensorflow.keras.models import Model
12 |
13 |
14 | def squeeze_excite_block(inputs, ratio=8):
15 | init = inputs
16 | channel_axis = -1
17 | #print()
18 | #print(init, init.shape, init[channel_axis])
19 | filters = init.shape[channel_axis] # .value
20 | se_shape = (1, 1, filters)
21 |
22 | se = GlobalAveragePooling2D()(init)
23 | se = Reshape(se_shape)(se)
24 | se = Dense(filters // ratio, activation='relu', kernel_initializer='he_normal', use_bias=False)(se)
25 | se = Dense(filters, activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(se)
26 |
27 | x = Multiply()([init, se])
28 | return x
29 |
30 | def stem_block(x, n_filter, strides):
31 | x_init = x
32 |
33 | ## Conv 1
34 | x = Conv2D(n_filter, (3, 3), padding="same", strides=strides)(x)
35 | x = BatchNormalization()(x)
36 | x = Activation("relu")(x)
37 | x = Conv2D(n_filter, (3, 3), padding="same")(x)
38 |
39 | ## Shortcut
40 | s = Conv2D(n_filter, (1, 1), padding="same", strides=strides)(x_init)
41 | s = BatchNormalization()(s)
42 |
43 | ## Add
44 | x = Add()([x, s])
45 | x = squeeze_excite_block(x)
46 | return x
47 |
48 |
49 | def resnet_block(x, n_filter, strides=1):
50 | x_init = x
51 |
52 | ## Conv 1
53 | x = BatchNormalization()(x)
54 | x = Activation("relu")(x)
55 | x = Conv2D(n_filter, (3, 3), padding="same", strides=strides)(x)
56 | ## Conv 2
57 | x = BatchNormalization()(x)
58 | x = Activation("relu")(x)
59 | x = Conv2D(n_filter, (3, 3), padding="same", strides=1)(x)
60 |
61 | ## Shortcut
62 | s = Conv2D(n_filter, (1, 1), padding="same", strides=strides)(x_init)
63 | s = BatchNormalization()(s)
64 |
65 | ## Add
66 | x = Add()([x, s])
67 | x = squeeze_excite_block(x)
68 | return x
69 |
70 | def aspp_block(x, num_filters, rate_scale=1):
71 | x1 = Conv2D(num_filters, (3, 3), dilation_rate=(6 * rate_scale, 6 * rate_scale), padding="SAME")(x)
72 | x1 = BatchNormalization()(x1)
73 |
74 | x2 = Conv2D(num_filters, (3, 3), dilation_rate=(12 * rate_scale, 12 * rate_scale), padding="SAME")(x)
75 | x2 = BatchNormalization()(x2)
76 |
77 | x3 = Conv2D(num_filters, (3, 3), dilation_rate=(18 * rate_scale, 18 * rate_scale), padding="SAME")(x)
78 | x3 = BatchNormalization()(x3)
79 |
80 | x4 = Conv2D(num_filters, (3, 3), padding="SAME")(x)
81 | x4 = BatchNormalization()(x4)
82 |
83 | y = Add()([x1, x2, x3, x4])
84 | y = Conv2D(num_filters, (1, 1), padding="SAME")(y)
85 | return y
86 |
87 | def attention_block(g, x):
88 | """
89 | g: Output of Parallel Encoder block
90 | x: Output of Previous Decoder block
91 | """
92 |
93 | filters = x.shape[-1] # .value
94 |
95 | g_conv = BatchNormalization()(g)
96 | g_conv = Activation("relu")(g_conv)
97 | g_conv = Conv2D(filters, (3, 3), padding="SAME")(g_conv)
98 |
99 | g_pool = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(g_conv)
100 |
101 | x_conv = BatchNormalization()(x)
102 | x_conv = Activation("relu")(x_conv)
103 | x_conv = Conv2D(filters, (3, 3), padding="SAME")(x_conv)
104 |
105 | gc_sum = Add()([g_pool, x_conv])
106 |
107 | gc_conv = BatchNormalization()(gc_sum)
108 | gc_conv = Activation("relu")(gc_conv)
109 | gc_conv = Conv2D(filters, (3, 3), padding="SAME")(gc_conv)
110 |
111 | gc_mul = Multiply()([gc_conv, x])
112 | return gc_mul
113 |
114 | def decoder_block(c3, b1, n_filter):
115 | d1 = attention_block(c3, b1)
116 | d1 = UpSampling2D((2, 2))(d1)
117 | d1 = Concatenate()([d1, c3])
118 | return resnet_block(d1, n_filter)
119 |
120 |
121 | class ResUnetPlusPlus:
122 | def __init__(self, input_shape=(256, 256, 3), nb_classes=2):
123 | self.input_shape = input_shape
124 | self.nb_classes = nb_classes
125 | self.dims = 2 # 2D (image input)
126 | self.convolutions = [16, 32, 64, 128, 256] # suitable for (256x256) input (DEFAULT FOR THE ORIGINAL IMPLEMENTATION)
127 | #self.convolutions = [16, 32, 64, 128, 256, 512] # suitable for (512x512) input
128 |
129 | def get_dice_loss(self, use_background=False):
130 | def dice_loss(target, output, epsilon=1e-10):
131 | smooth = 1.
132 | dice = 0
133 |
134 |             for class_idx in range(0 if use_background else 1, self.nb_classes):
135 |                 if self.dims == 2:
136 |                     output1 = output[:, :, :, class_idx]
137 |                     target1 = target[:, :, :, class_idx]
138 |                 else:
139 |                     output1 = output[:, :, :, :, class_idx]
140 |                     target1 = target[:, :, :, :, class_idx]
141 | intersection1 = tf.reduce_sum(output1 * target1)
142 | union1 = tf.reduce_sum(output1 * output1) + tf.reduce_sum(target1 * target1)
143 | dice += (2. * intersection1 + smooth) / (union1 + smooth)
144 |
145 | if use_background:
146 | dice /= self.nb_classes
147 | else:
148 | dice /= (self.nb_classes - 1)
149 |
150 | return tf.clip_by_value(1. - dice, 0., 1. - epsilon)
151 |
152 | return dice_loss
153 |
154 | def set_convolutions(self, convolutions):
155 | self.convolutions = convolutions
156 |
157 | def create(self):
158 | n_filters = self.convolutions # [16, 32, 64, 128, 256] # suitable for 256x256 input
159 | inputs = Input(self.input_shape)
160 |
161 | c0 = inputs
162 | c1 = stem_block(c0, n_filters[0], strides=1)
163 |
164 | ## Encoder
165 | cc = [c1]
166 | for i in range(1, len(self.convolutions)-1):
167 | cc.append(resnet_block(cc[-1], n_filters[i], strides=2))
168 |
169 | '''
170 | c2 = resnet_block(c1, n_filters[1], strides=2)
171 | c3 = resnet_block(c2, n_filters[2], strides=2)
172 | c4 = resnet_block(c3, n_filters[3], strides=2)
173 | c5 = resnet_block(c4, n_filters[4], strides=2)
174 | '''
175 |
176 | ## Bridge
177 | b1 = aspp_block(cc[-1], n_filters[-1])
178 | #b1 = aspp_block(c5, n_filters[5])
179 |
180 | ## Decoder
181 | dd = [b1]
182 | for i in range(len(self.convolutions)-2, 0, -1):
183 | dd.append(decoder_block(cc[i-1], dd[-1], n_filters[i]))
184 |
185 | '''
186 | d1 = decoder_block(c4, b1, n_filters[4])
187 | d2 = decoder_block(c3, d1, n_filters[3])
188 | d3 = decoder_block(c2, d2, n_filters[2])
189 | d4 = decoder_block(c1, d3, n_filters[1])
190 | '''
191 |
192 | '''
193 | d1 = attention_block(c3, b1)
194 | d1 = UpSampling2D((2, 2))(d1)
195 | d1 = Concatenate()([d1, c3])
196 | d1 = resnet_block(d1, n_filters[3])
197 |
198 | d2 = attention_block(c2, d1)
199 | d2 = UpSampling2D((2, 2))(d2)
200 | d2 = Concatenate()([d2, c2])
201 | d2 = resnet_block(d2, n_filters[2])
202 |
203 | d3 = attention_block(c1, d2)
204 | d3 = UpSampling2D((2, 2))(d3)
205 | d3 = Concatenate()([d3, c1])
206 | d3 = resnet_block(d3, n_filters[1])
207 | '''
208 |
209 | ## output
210 | outputs = aspp_block(dd[-1], n_filters[0])
211 | outputs = Conv2D(self.nb_classes, (1, 1), padding="same")(outputs)
212 | outputs = Activation("softmax")(outputs)
213 |
214 | ## Model
215 | model = Model(inputs, outputs)
216 | return model
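217 |
218 |
219 | # Minimal usage sketch (illustrative values): the default convolution setup targets
220 | # 256x256 input; for 512x512 input, add one more level as below.
221 | if __name__ == "__main__":
222 |     network = ResUnetPlusPlus(input_shape=(512, 512, 1), nb_classes=5)
223 |     network.set_convolutions([16, 32, 64, 128, 256, 512])
224 |     model = network.create()
225 |     model.summary()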
--------------------------------------------------------------------------------
/python/train.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import tensorflow.keras.backend as K
3 | import numpy as np
4 | import os
5 | from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger
6 | import tensorflow_addons as tfa
7 | from datetime import datetime
8 | from models import get_arch
9 | from batch_generator import get_dataset
10 | from utils import macro_accuracy
11 | from tensorflow.keras.optimizers import Adam
12 |
13 |
14 | # today's date and time
15 | today = datetime.now()
16 | name = today.strftime("%d%m") + today.strftime("%Y")[2:] + "_" + today.strftime("%H%M%S") + "_"
17 |
18 | # whether or not to use GPU for training (-1 == no GPU, else GPU)
19 | os.environ["CUDA_VISIBLE_DEVICES"] = "0"
20 |
21 | # allow growth, only use the GPU memory required to solve a specific task (makes room for doing stuff in parallel)
22 | gpus = tf.config.experimental.list_physical_devices('GPU')
23 | if gpus:
24 | try:
25 | # Currently, memory growth needs to be the same across GPUs
26 | for gpu in gpus:
27 | tf.config.experimental.set_memory_growth(gpu, True)
28 | logical_gpus = tf.config.experimental.list_logical_devices('GPU')
29 | print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
30 | except RuntimeError as e:
31 | # Memory growth must be set before GPUs have been initialized
32 | print(e)
33 |
34 | # paths
35 | data_path = "../data/DDSM_mammography_data/" # "../data/CSAW-S/CSAW-S/CsawS/anonymized_dataset/"
36 | save_path = "../output/models/"
37 | history_path = "../output/history/"
38 |
39 | # PARAMS
40 | N_SAMPLES = 55890 # https://www.kaggle.com/skooch/ddsm-mammography
41 | N_TRAIN_FOLDS = 3
42 | N_VAL_FOLDS = 1 # 5 folds to choose from
43 | N_EPOCHS = 10 # set to 10 for the Jupyter notebook example; in practice one should train for much longer
44 | MODEL_ARCH = 2 # which architecture/CNN to use - see models.py for info about archs
45 | BATCH_SIZE = 128
46 | BUFFER_SIZE = 2 ** 2
47 | N_TRAIN_STEPS = int(N_SAMPLES / N_TRAIN_FOLDS / BATCH_SIZE)
48 | N_VAL_STEPS = int(N_SAMPLES / N_VAL_FOLDS / BATCH_SIZE)
49 | SHUFFLE_FLAG = True
50 | img_size = 150
51 | instance_size = (img_size, img_size, 3) # the DDSM patches are natively 299x299; set img_size = 299 to avoid downsampling (3 channels to match the ImageNet-pretrained backbones)
52 | num_classes = 2 # [2, 5] # if 2, then we just use the binary labels for training the model, if 5 then we train a multi-class model
53 | learning_rate = 1e-4 # relevant for the optimizer, Adam used by default (with default lr=1e-3), I normally use 1e-4 when finetuning
54 | gamma = 3 # Focal Loss parameter
55 | AUG_FLAG = False # Whether or not to apply data augmentation during training (only applied to the training set)
56 |
57 | weight = 86 / 14
58 |
59 | if num_classes == 2:
60 | class_weights = {0: 1, 1: weight}
61 | elif num_classes == 5:
62 | class_weights = None # what is the distribution for the multi-class case?
63 | elif (num_classes == [2, 5]):
64 | class_weights = {'cl1':{0: 1, 1: weight}, 'cl2':{i: 1 for i in range(num_classes[1])}}
65 | elif (num_classes == [5, 2]):
66 | class_weights = {'cl1':{i: 1 for i in range(num_classes[0])}, 'cl2':{0: 1, 1: weight}}
67 | else:
68 | print("Unvalid num_classes was given. Only valid values are {2, 5, [2, 5], [5, 2]}.")
69 | exit()
70 |
71 | # add hyperparams to name of session, to be easier to parse during eval and overall
72 | name += "bs_" + str(BATCH_SIZE) + "_arch_" + str(MODEL_ARCH) + "_imgsize_" + str(img_size) + "_nbcl_" + str(num_classes) + "_gamma_" + str(gamma) + "_"
73 |
74 | # NOTE: We use the first three folds for training, the fourth as a validation set, and the last fold as a hold-out sample (test set)
75 | # get some training and validation data for building the model
76 | # NOTE2: Be careful applying augmentation to the validation set; ideally it should not be necessary. However, augmenting the training set is always useful!
77 | train_set = get_dataset(BATCH_SIZE, [data_path + "training10_" + str(i) + "/training10_" + str(i) + ".tfrecords" for i in range(3)], num_classes, SHUFFLE_FLAG, instance_size[:-1], train_mode=AUG_FLAG)
78 | val_set = get_dataset(BATCH_SIZE, data_path + "training10_3/training10_3.tfrecords", num_classes, SHUFFLE_FLAG, instance_size[:-1], train_mode=False)
79 |
80 | ## Model architecture (CNN)
81 | model = get_arch(MODEL_ARCH, instance_size, num_classes)
82 | print(model.summary()) # prints the full architecture
83 |
84 | if type(num_classes) == list:
85 | model.compile(
86 | optimizer=Adam(learning_rate), # most popular optimizer
87 | loss={'cl' + str(i+1): tfa.losses.SigmoidFocalCrossEntropy(gamma=gamma) for i in range(len(num_classes))}, # "categorical_crossentropy", # because of class imbalance we use focal loss to train a model that works well on both classes
88 | weighted_metrics={'cl' + str(i+1): ["accuracy"] for i in range(len(num_classes))},
89 | metrics={'cl' + str(i+1): [tfa.metrics.F1Score(num_classes=num_classes[i], average="macro")] for i in range(len(num_classes))},
90 | )
91 | else:
92 | model.compile(
93 | optimizer=Adam(learning_rate),
94 | loss=tfa.losses.SigmoidFocalCrossEntropy(gamma=gamma),
95 | weighted_metrics=["accuracy"],
96 | metrics=[tfa.metrics.F1Score(num_classes=num_classes, average="macro")],
97 | )
98 |
99 | save_best = ModelCheckpoint(
100 | filepath=save_path + name + "classifier_model.h5",
101 | save_best_only=True, # only saves if model has improved (after each epoch)
102 | save_weights_only=False,
103 | verbose=1,
104 | monitor="val_loss", # "val_f1_score", # default: "val_loss" (only saves model/overwrites if val_loss has decreased)
105 | mode="min", # default 'auto', but using custom losses it might be necessary to set it to 'max', as it is interpreted to be minimized by default, is unknown
106 | )
107 |
108 | history = CSVLogger(
109 | history_path + name + "training_history.csv",
110 | append=True
111 | )
112 |
113 | model.fit(
114 | train_set,
115 | steps_per_epoch=N_TRAIN_STEPS,
116 | epochs=N_EPOCHS,
117 | validation_data=val_set,
118 | validation_steps=N_VAL_STEPS,
119 | #class_weight=class_weights, # apriori, we know the distribution of the two classes, so we add a higher weight to class 1, as it is less frequent
120 | callbacks=[save_best, history]
121 | )
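122 |
123 |
124 | # Sketch (illustrative, commented out): fold 4 (training10_4) is the hold-out
125 | # test set, and could be evaluated after training along these lines (see also eval.py):
126 | #test_set = get_dataset(BATCH_SIZE, data_path + "training10_4/training10_4.tfrecords", num_classes, False, instance_size[:-1], train_mode=False)
127 | #model.evaluate(test_set, steps=int(N_SAMPLES / 5 / BATCH_SIZE))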
--------------------------------------------------------------------------------
/python/train_seg.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import tensorflow.keras.backend as K
3 | import numpy as np
4 | import os
5 | from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger
6 | import tensorflow_addons as tfa
7 | from datetime import datetime
8 | from models import get_arch, Unet
9 | from batch_generator import get_dataset, batch_gen
10 | from utils import macro_accuracy, flatten_
11 | from tensorflow.keras.optimizers import Adam
12 | from resunetpp import *
13 | # from accumulated_gradients import AccumOptimizer # @TODO: Currently, these accumulated gradients solutions are not compatible with something in TF 2
14 |
15 |
16 | # today's date and time
17 | today = datetime.now()
18 | name = today.strftime("%d%m") + today.strftime("%Y")[2:] + "_" + today.strftime("%H%M%S") + "_"
19 |
20 | # whether or not to use GPU for training (-1 == no GPU, else GPU)
21 | os.environ["CUDA_VISIBLE_DEVICES"] = "0"
22 |
23 | '''
24 | gpus = tf.config.experimental.list_physical_devices('GPU')
25 | if gpus:
26 | try:
27 | # Currently, memory growth needs to be the same across GPUs
28 | for gpu in gpus:
29 | tf.config.experimental.set_memory_growth(gpu, True)
30 | logical_gpus = tf.config.experimental.list_logical_devices('GPU')
31 | print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
32 | except RuntimeError as e:
33 | # Memory growth must be set before GPUs have been initialized
34 | print(e)
35 | '''
36 |
37 | # paths
38 | data_path = "../data/CSAW-S_preprocessed_1024_True/" # "..data/CSAW-S/CSAW-S/CsawS/anonymized_dataset/" #"../data/DDSM_mammography_data/"
39 | save_path = "../output/models/"
40 | history_path = "../output/history/"
41 |
42 | # PARAMS
43 | N_EPOCHS = 1000 # 200
44 | batch_size = 2 # 12, 16
45 | accum_steps = 4 # number of steps when performing accumulated gradients
46 | SHUFFLE_FLAG = True
47 | img_size = int(data_path.split("_")[-2]) # 512
48 | fine_tune = 1 # if set to 1, does not perform fine-tuning; note that this value also defines the number of input channels
49 | input_shape = (img_size, img_size, fine_tune) # grayscale input when fine_tune == 1
50 | learning_rate = 1e-3 # relevant for the optimizer, Adam used by default (with default lr=1e-3), I normally use 1e-4 when finetuning
51 | gamma = 3 # Focal Loss parameter
52 | AUG_FLAG = False # Whether or not to apply data augmentation during training (only applied to the training set)
53 | train_aug = {"horz": 1, "gamma": [0.75, 1.5]} # {"vert": 1, "horz": 1, "rot90": 1, "gamma": [0.75, 1.5]}
54 | val_aug = {}
55 | spatial_dropout = 0.1 # 0.1
56 | N_PATIENTS = 150
57 | train_val_split = 0.8
58 | use_background = True # False (will neglect background class if False)
59 | model_arch = "unet" # {"unet", "resunetpp"}
60 | #renorm = True # False (whether to apply BatchReNormalization in U-Net)
61 |
62 | '''
63 | class_names = [
64 | "_axillary_lymph_nodes", "_calcifications", "_cancer",\
65 | "_foreign_object", "_mammary_gland", "_nipple",\
66 | "_non-mammary_tissue", "_pectoral_muscle", "_skin", "_text",\
67 | "_thick_vessels", "_unclassified"
68 | ]
69 | '''
70 |
71 | # @FIXME: pectoral_muscle/mammary_gland has not been consistently annotated
72 | class_names = [
73 | "_cancer", "_mammary_gland", "_pectoral_muscle", "_nipple", # "_skin", "_thick_vessels"
74 | ]
75 | nb_classes = len(class_names) + 1 # include background class (+1)
76 |
77 | # add hyperparams to name of session, to be easier to parse during eval and overall
78 | name += "bs_" + str(batch_size) + "_arch_" + model_arch + "_img_" + str(img_size) + "_nbcl_" +\
79 | str(nb_classes) + "_gamma_" + str(gamma) + "_aug_" +\
80 | str(list(train_aug)).replace("'", "").replace("[", "").replace("]", "").replace(" ", "") +\
81 | "_drp_" + str(spatial_dropout).replace(".", ",")
82 | #+ "_renorm_" + str(renorm)
83 | name += "_"
84 |
85 | print("\nCurrent model run: ")
86 | print(name, "\n")
87 |
88 | ## create train and validation sets
89 | data_set = []
90 | for patient in os.listdir(data_path):
91 | curr = data_path + patient + "/"
92 | tmp = [curr + x for x in os.listdir(curr)]
93 | data_set.append(tmp)
94 |
95 | val1 = int(N_PATIENTS * train_val_split)
96 | train_set = data_set[:val1]
97 | val_set = data_set[val1:]
98 |
99 | train_set = flatten_(train_set)
100 | val_set = flatten_(val_set)
101 |
102 | N_TRAIN_STEPS = int(np.ceil(len(train_set) / batch_size))
103 | N_VAL_STEPS = int(np.ceil(len(val_set) / batch_size))
104 |
105 |
106 | # define data generators
107 | train_gen = batch_gen(train_set, batch_size, aug=train_aug, class_names=class_names, input_shape=input_shape, epochs=N_EPOCHS, mask_flag=False, fine_tune=False)
108 | val_gen = batch_gen(val_set, batch_size, aug=val_aug, class_names=class_names, input_shape=input_shape, epochs=N_EPOCHS, mask_flag=False, fine_tune=False)
109 |
110 | # define model
111 | if model_arch == "unet":
112 | network = Unet(input_shape=input_shape, nb_classes=nb_classes)
113 | network.encoder_spatial_dropout = spatial_dropout # attempt to remove spatial dropout to see if it improves the issue with faulty classes...
114 | network.decoder_spatial_dropout = spatial_dropout # - Spatial Dropout extremely important to get good generalization and keep model learning what it should!
115 | #network.set_convolutions([8, 16, 32, 32, 64, 64, 128, 256, 128, 64, 64, 32, 32, 16, 8])
116 | network.set_convolutions([16, 32, 32, 64, 64, 128, 128, 256, 128, 128, 64, 64, 32, 32, 16])
117 | if img_size == 1024:
118 | network.set_bottom_level(8)
119 | # network.set_renorm(renorm)
120 | model = network.create()
121 | elif model_arch == "resunetpp":
122 | network = ResUnetPlusPlus(input_shape=input_shape, nb_classes=nb_classes)
123 | # network.set_convolutions([16, 32, 64, 128, 256, 512]) # [16, 32, 64, 128, 256] # suitable for 256x256 input
124 |     network.set_convolutions([16, 32, 64, 128, 256, 512]) # attempt to make it shallower => perhaps won't overfit as easily? Perhaps I should just use dropout
125 | model = network.create()
126 | else:
127 | print("Unknown architecture selected. Please choose one of these: {'unet', 'resunet++'}")
128 | exit()
129 | print(model.summary()) # prints the full architecture
130 |
131 | #opt = AccumOptimizer(Adam(lr=learning_rate), steps_per_update=accum_steps)
132 |
133 | model.compile(
134 | #optimizer=opt,
135 | optimizer=Adam(learning_rate),
136 | loss=network.get_dice_loss(use_background=use_background)
137 | )
138 |
139 | save_best = ModelCheckpoint(
140 | filepath=save_path + name + "segmentation_model.h5",
141 | save_best_only=True, # only saves if model has improved (after each epoch)
142 | save_weights_only=False,
143 | verbose=1,
144 | monitor="val_loss", # "val_f1_score", # default: "val_loss" (only saves model/overwrites if val_loss has decreased)
145 | mode="min", # default 'auto', but using custom losses it might be necessary to set it to 'max', as it is interpreted to be minimized by default, is unknown
146 | )
147 |
148 | history = CSVLogger(
149 | history_path + name + "training_history.csv",
150 | append=True
151 | )
152 |
153 | model.fit(
154 | train_gen,
155 | steps_per_epoch=N_TRAIN_STEPS,
156 | epochs=N_EPOCHS,
157 | validation_data=val_gen,
158 | validation_steps=N_VAL_STEPS,
159 | callbacks=[save_best, history]
160 | )
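161 |
162 |
163 | # The AccumOptimizer import at the top is disabled, as it is not compatible with
164 | # TF 2. A rough sketch of gradient accumulation with a manual train step, using
165 | # the accum_steps defined above (illustrative only, not wired into model.fit()):
166 | #opt = Adam(learning_rate)
167 | #loss_fn = network.get_dice_loss(use_background=use_background)
168 | #accum = [tf.zeros_like(w) for w in model.trainable_variables]
169 | #for step, (x_batch, y_batch) in enumerate(train_gen):
170 | #    with tf.GradientTape() as tape:
171 | #        loss = loss_fn(y_batch, model(x_batch, training=True))
172 | #    grads = tape.gradient(loss, model.trainable_variables)
173 | #    accum = [a + g for a, g in zip(accum, grads)]
174 | #    if (step + 1) % accum_steps == 0:
175 | #        opt.apply_gradients(zip([a / accum_steps for a in accum], model.trainable_variables))
176 | #        accum = [tf.zeros_like(w) for w in model.trainable_variables]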
--------------------------------------------------------------------------------
/python/utils.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | import cv2
4 | import matplotlib.pyplot as plt
5 | from skimage.morphology import disk, remove_small_holes, remove_small_objects, label
6 |
7 |
8 | def macro_accuracy(y_true, y_pred):
9 | y_pred_ = tf.argmax(y_pred, axis=1)
10 | y_true_ = tf.argmax(y_true, axis=1)
11 | nb_classes = 2 # assuming two classes
12 | S = 0
13 | for i in range(nb_classes):
14 | y_pred_tmp = tf.boolean_mask(y_pred_, y_true_ == i)
15 | y_true_tmp = tf.boolean_mask(y_true_, y_true_ == i)
16 | tmp = tf.cast(y_pred_tmp == y_true_tmp, dtype=tf.float32)
17 | S += tf.reduce_mean(tmp)
18 | S /= nb_classes # to produce the macro mean
19 | return S
20 |
21 |
22 | # @TODO: perhaps just use .ravel() instead?
23 | def flatten_(tmp):
24 | out = []
25 | for t in tmp:
26 | for t2 in t:
27 | out.append(t2)
28 | out = np.array(out)
29 | return out
30 |
31 |
32 | def minmaxscale(tmp, scale_=1):
33 | if np.count_nonzero(tmp) > 0:
34 | tmp = tmp - np.amin(tmp)
35 | tmp = tmp / np.amax(tmp)
36 | tmp *= scale_
37 | return tmp
38 |
39 |
40 | # randomly shift the image by up to aug[0]/aug[1] fractions of its height/width
41 | def random_shift(x, aug, p=0.5):
42 |     import tensorflow_addons as tfa  # local import, as tfa is only needed in this function
43 |     if tf.random.uniform([]) < p:
44 |         shapes = tf.cast(tf.shape(x), tf.float32)
45 |         ha = tf.random.uniform([], minval=-aug[0], maxval=aug[0]) * shapes[1]
46 |         wa = tf.random.uniform([], minval=-aug[1], maxval=aug[1]) * shapes[2]
47 |         # tfa.image.translate expects translations as [dx, dy], i.e. width shift first
48 |         x = tfa.image.translate(x, [wa, ha], interpolation='nearest', fill_mode='constant', fill_value=0.)
49 |     return x
50 |
51 |
52 | def IOU(y_true, y_pred, eps=1e-15, remove_bg=True):
53 | nb_classes = y_true.shape[-1]
54 | iou_ = 0
55 | for c in range(int(remove_bg), nb_classes):
56 | y_pred_curr = y_pred[..., c]
57 | y_true_curr = y_true[..., c]
58 | intersection = (y_pred_curr * y_true_curr).sum()
59 | union = y_true_curr.sum() + y_pred_curr.sum() - intersection
60 | iou_ += (intersection + eps) / (union + eps)
61 | iou_ /= (nb_classes - int(remove_bg))
62 | return iou_
63 |
64 |
65 | def DSC(y_true, y_pred, smooth=1e-15, remove_bg=True):
66 | nb_classes = int(y_true.shape[-1])
67 | dice = 0
68 | for c in range(int(remove_bg), nb_classes):
69 | y_pred_curr = y_pred[..., c]
70 | y_true_curr = y_true[..., c]
71 | intersection1 = np.sum(y_pred_curr * y_true_curr)
72 | union1 = np.sum(y_pred_curr * y_pred_curr) + np.sum(y_true_curr * y_true_curr)
73 | dice += (2. * intersection1 + smooth) / (union1 + smooth)
74 | dice /= (nb_classes - int(remove_bg))
75 | return dice
76 |
77 |
78 | def one_hot_fix(x, nb_classes):
79 | out = np.zeros(x.shape + (nb_classes,), dtype=np.int32)
80 | for c in range(nb_classes):
81 | out[..., c] = (x == c).astype(np.int32)
82 | return out
83 |
84 |
85 | def argmax_keepdims(x, axis):
86 | output_shape = list(x.shape)
87 | output_shape[axis] = 1
88 | return np.argmax(x, axis=axis).reshape(output_shape)
89 |
90 |
91 | def post_process(x, new_shape, orig_shape, resize=True, interpolation=cv2.INTER_NEAREST, threshold=None):
92 | x = x.astype(np.float32)
93 | x = x[:new_shape[0], :new_shape[1]]
94 | if resize:
95 | new = np.zeros(orig_shape + (x.shape[-1],), dtype=np.float32)
96 | for i in range(x.shape[-1]):
97 | new[..., i] = cv2.resize(x[..., i], orig_shape[::-1], interpolation=interpolation)
98 | if interpolation != cv2.INTER_NEAREST:
99 | if threshold is not None:
100 |                 new = (new > threshold).astype(np.int32)
101 | return new
102 | else:
103 | return x
104 |
105 |
106 | def post_process_mammary_gland(pred, all_class_names):
107 | # orig classes
108 | orig = pred.copy()
109 |
110 | # fill holes in mammary gland class
111 | mamma_class = pred[..., np.argmax(all_class_names == "mammary_gland")]
112 |     pred[..., np.argmax(all_class_names == "mammary_gland")] = remove_small_holes(mamma_class.astype(bool), area_threshold=int(np.round(np.prod(mamma_class.shape) / 1e3))).astype(np.float32)
113 |
114 | #'''
115 | # need to update mammary gland class by all other classes to avoid missing actual true positives
116 | tmp1 = pred[..., np.argmax(all_class_names == "mammary_gland")]
117 | for c in all_class_names:
118 | if c == "background":
119 | continue
120 | if c != "mammary_gland":
121 | tmp2 = pred[..., np.argmax(all_class_names == c)]
122 | tmp1[tmp2 == 1] = 0
123 | pred[..., np.argmax(all_class_names == "mammary_gland")] = tmp1
124 |
125 | # only keep largest mammary gland connected component
126 | labels = label(pred[..., np.argmax(all_class_names == "mammary_gland")])
127 | best = 0
128 | largest_area = 0
129 | for l in np.unique(labels):
130 | if l == 0:
131 | continue
132 | area = np.sum(labels == l)
133 | if area > largest_area:
134 | largest_area = area
135 | best = l
136 | pred[..., np.argmax(all_class_names == "mammary_gland")] = (labels == best).astype(np.float32)
137 |
138 | # fix background class
139 | tmp1 = pred[..., np.argmax(all_class_names == "background")]
140 | for c in all_class_names:
141 | if c != "background":
142 | tmp2 = pred[..., np.argmax(all_class_names == c)]
143 | tmp1[tmp2 == 1] = 0
144 | pred[..., np.argmax(all_class_names == "background")] = tmp1
145 | #'''
146 |
147 | # keep original cancer pred
148 | pred[..., np.argmax(all_class_names == "cancer")] = orig[..., np.argmax(all_class_names == "cancer")] # .copy() TODO: Why does this .copy() change the output?
149 |
150 | return pred
151 |
152 |
153 | def random_jet_colormap(cmap="jet", nb=256):
154 | colors = plt.cm.get_cmap(cmap, nb) # choose which colormap to use
155 | tmp = np.linspace(0, 1, nb)
156 | np.random.shuffle(tmp) # shuffle colors
157 | newcolors = colors(tmp)
158 | newcolors[0, :] = (0, 0, 0, 1) # set first color to black
159 | return plt.cm.colors.ListedColormap(newcolors)
160 |
161 |
162 | def make_subplots(x, y, pred, conf, img_size, all_class_names, some_cmap):
163 |
164 | nb_classes = y.shape[-1]
165 | '''
166 | fig1, ax1 = plt.subplots(1, 3)
167 | ax1[0].imshow(x_orig, cmap="gray")
168 | ax1[1].imshow(np.argmax(pred_orig, axis=-1), cmap="jet", vmin=0, vmax=nb_classes-1)
169 | ax1[2].imshow(np.argmax(y_orig, axis=-1), cmap="jet", vmin=0, vmax=nb_classes-1)
170 | plt.show()
171 | '''
172 |
173 | fig, ax = plt.subplots(4, nb_classes)
174 | ax[0, 0].imshow(x, cmap="gray", interpolation='none')
175 | ax[0, 1].imshow(np.argmax(pred, axis=-1), cmap=some_cmap, vmin=0, vmax=nb_classes-1, interpolation='none')
176 | ax[0, 2].imshow(np.argmax(y, axis=-1), cmap=some_cmap, vmin=0, vmax=nb_classes-1, interpolation='none')
177 |
178 | for i in range(nb_classes):
179 | ax[1, i].imshow(conf[..., i], cmap="gray", vmin=0, vmax=1, interpolation='none')
180 | ax[2, i].imshow(pred[..., i], cmap="gray", vmin=0, vmax=1, interpolation='none')
181 | ax[3, i].imshow(y[..., i], cmap="gray", vmin=0, vmax=1, interpolation='none')
182 |
183 | for i in range(nb_classes):
184 | for j in range(4):
185 | ax[j, i].axis("off")
186 |
187 | for i, cname in enumerate(all_class_names):
188 | ax[-1, i].text(int(pred.shape[1] * 0.5), img_size + int(img_size * 0.1), cname, color="g", verticalalignment='center', horizontalalignment='center')
189 |
190 | ax[0, 0].set_title('Img', color='c', rotation='vertical', x=-0.1, y=0.4)
191 | ax[1, 0].set_title('Conf', color='c', rotation='vertical', x=-0.1, y=0.4)
192 | ax[2, 0].set_title('Pred', color='c', rotation='vertical', x=-0.1, y=0.4)
193 | ax[3, 0].set_title('GT', color='c', rotation='vertical', x=-0.1, y=0.4)
194 | ax[0, 1].set_title('MC Pred', color='orange')
195 | ax[0, 2].set_title('MC GT', color='orange')
196 | plt.tight_layout()
197 | plt.show()
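198 |
199 |
200 | # Quick sanity-check sketch for the metric helpers above (illustrative):
201 | # a one-hot mask compared against itself should give DSC == IOU == 1.0.
202 | if __name__ == "__main__":
203 |     labels = np.random.randint(0, 3, size=(64, 64))
204 |     onehot = one_hot_fix(labels, nb_classes=3)
205 |     print("DSC:", DSC(onehot, onehot))  # expected: 1.0
206 |     print("IOU:", IOU(onehot, onehot))  # expected: 1.0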
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/andreped/DMDetect/bfbc0f100812f2faaac4d2d1ca905e3aa3354cc6/requirements.txt
--------------------------------------------------------------------------------