├── README.md
├── assets
│   ├── img.png
│   ├── test_leaderboard.md
│   ├── validation_leaderboard.md
│   ├── validation_leaderboard_1st.md
│   ├── validation_leaderboard_2nd.md
│   └── validation_leaderboard_3rd.md
└── main
    ├── .DS_Store
    ├── README.md
    ├── __pycache__
    │   ├── metrics.cpython-36.pyc
    │   └── metrics.cpython-38.pyc
    ├── datasets
    │   ├── __pycache__
    │   │   ├── mp_liver_dataset.cpython-36.pyc
    │   │   ├── mp_liver_dataset.cpython-38.pyc
    │   │   ├── transforms.cpython-36.pyc
    │   │   └── transforms.cpython-38.pyc
    │   ├── mp_liver_dataset.py
    │   └── transforms.py
    ├── metrics.py
    ├── models
    │   ├── __init__.py
    │   ├── __pycache__
    │   │   ├── DRNet.cpython-36.pyc
    │   │   ├── DRNet_pvt.cpython-36.pyc
    │   │   ├── DRNet_vits.cpython-36.pyc
    │   │   ├── Modules.cpython-36.pyc
    │   │   ├── Modules.cpython-38.pyc
    │   │   ├── SRNet.cpython-36.pyc
    │   │   ├── __init__.cpython-36.pyc
    │   │   ├── __init__.cpython-38.pyc
    │   │   ├── botnet.cpython-36.pyc
    │   │   ├── botnet_IL.cpython-36.pyc
    │   │   ├── build.cpython-36.pyc
    │   │   ├── convnext_IL.cpython-36.pyc
    │   │   ├── densenet36.cpython-36.pyc
    │   │   ├── densenet36_keepz.cpython-36.pyc
    │   │   ├── densenet36v1.cpython-36.pyc
    │   │   ├── densenet_IL.cpython-36.pyc
    │   │   ├── densenet_com3b_split1b.cpython-36.pyc
    │   │   ├── efficientnet.cpython-36.pyc
    │   │   ├── efficientnet_IL.cpython-36.pyc
    │   │   ├── mobilenet.cpython-36.pyc
    │   │   ├── resnet.cpython-36.pyc
    │   │   ├── resnet_IL.cpython-36.pyc
    │   │   ├── resnet_mscs.cpython-36.pyc
    │   │   ├── resnext.cpython-36.pyc
    │   │   ├── siamese_resnet.cpython-36.pyc
    │   │   ├── squeezenet.cpython-36.pyc
    │   │   ├── stic.cpython-36.pyc
    │   │   ├── swinunetr.cpython-36.pyc
    │   │   ├── swinunetr.cpython-38.pyc
    │   │   ├── unet_3d.cpython-36.pyc
    │   │   ├── uniformer.cpython-36.pyc
    │   │   ├── uniformer.cpython-38.pyc
    │   │   ├── uniformer_IL.cpython-36.pyc
    │   │   └── vgg.cpython-36.pyc
    │   └── uniformer.py
    ├── output
    │   ├── .DS_Store
    │   ├── 20230411-192839-uniformer_small_IL
    │   │   ├── .DS_Store
    │   │   ├── LLDBaseline.json
    │   │   ├── args.yaml
    │   │   └── summary.csv
    │   └── LLDBaseline.json
    ├── predict.py
    ├── preprocess
    │   ├── crop_roi.py
    │   └── gene_cross_val.py
    ├── requirements.txt
    ├── train.py
    └── validate.py
/README.md:
--------------------------------------------------------------------------------
1 | # Liver Lesion Diagnosis Challenge on Multi-phase MRI (LLD-MMRI2023)
2 | 
3 |
4 | ## 🆕 **News**
5 | * **2023-2-28: 🔥🔥🔥 Dataset Release.**
6 |
7 | * **The dataset is available [here](https://github.com/LMMMEng/LLD-MMRI-Dataset). We provide annotations for an additional 104 cases (i.e., the test set), which were not incorporated within this challenge.**
8 |
9 | * 2023-9-8: Final Leaderboard for Test Release.
10 |
11 | * You can access the leaderboard **[here](https://github.com/LMMMEng/LLD-MMRI2023/blob/main/assets/test_leaderboard.md)**, where you can also find the code of the top-5 teams.
12 |
13 | * 2023-8-8: Leaderboard for Test Release.
14 |
15 | * ~~The [**leaderboard**](https://github.com/LMMMEng/LLD-MMRI2023/blob/main/assets/provisional_leaderboard.md) is presented. Please note that this is only a provisional ranking. We ask the top five teams to publish their code within two weeks. Failure to submit your code by the designated deadline will result in removal from the leaderboard, and the remaining teams will move up accordingly.~~
16 |
17 |
18 | * 2023-7-10: Validation Stage Completed.
19 |
20 | * ~~The [**leaderboard**](https://github.com/LMMMEng/LLD-MMRI2023/blob/main/assets/validation_leaderboard.md) is presented based on each team's highest score across the three submissions.~~
21 |
22 | * 2023-7-7: Last Result Submission on Validation Set.
23 |
24 | * The submission window is open from 00:00 to 24:00 on July 7th. Only the last submission within this timeframe will be considered. Early or late submissions will not be processed.
25 |
26 | * 2023-6-19: Leaderboard for the Second Submission on Validation Set Release.
27 |
28 | * ~~You can access the leaderboard **[here](https://github.com/LMMMEng/LLD-MMRI2023/blob/main/assets/validation_leaderboard_2nd.md)**~~.
29 |
30 | * 2023-6-18: Registration Close.
31 |
32 | * The registration channel is now closed.
33 | * We will release the download link of the dataset after the challenge is completed.
34 |
35 | * 2023-6-16: Second Result Submission on Validation Set.
36 |
37 | * The submission window is open from 00:00 to 24:00 on June 16th. Only the last submission within this timeframe will be considered. Early or late submissions will not be processed.
38 |
39 | * The corresponding person should send the ``JSON`` file named after your team name (e.g., ``TeamName.json``); the subject line of the email should follow this format: Prediction submission-Your Registered Team Name (e.g., ``Prediction submission-TeamName``).
40 |
41 | * Specific instructions have been sent to the corresponding person by email; if you do not receive the email, please contact us at **lld_mmri@yeah.net**.
42 |
43 | * 2023-5-29: Leaderboard for the First Submission on Validation Set Release.
44 |
45 | * ~~You can download the leaderboard **[here](https://github.com/LMMMEng/LLD-MMRI2023/releases/download/release-v1/validation_leaderboard_1st.xlsx)**.~~
46 |
47 | * 2023-5-26: First Result Submission on Validation Set.
48 |
49 | * The submission window is open from 00:00 to 24:00 on May 26th. Only the last submission within this timeframe will be considered. Early or late submissions will not be processed.
50 |
51 | * The corresponding person should send the ``JSON`` file named after your team name (e.g., ``TeamName.json``); the subject line of the email should follow this format: Prediction submission-Your Registered Team Name (e.g., ``Prediction submission-TeamName``).
52 |
53 | * Specific instructions have been sent to the corresponding person by email; if you do not receive the email, please contact us at **lld_mmri@yeah.net**.
54 |
55 | * 2023-4-28: Code and Training/Validation Dataset Release.
56 |
57 | * We’ve enabled access to the baseline code [here](https://github.com/LMMMEng/LLD-MMRI2023/tree/main/main) and released the training/validation dataset via email.
58 | Participants who successfully registered as of April 28th have received an email with the data download link; if not, please contact us with your team name at **lld_mmri@yeah.net**.
59 |
60 | * Participants who registered after April 28th will receive an email with the data link within three working days.
61 |
62 | * 2023-4-14: Registration Channel Now Open for Participants
63 |
64 | * The registration channel for the upcoming LLD-MMRI2023 is now open! Please complete the **[registration form](https://forms.gle/TaULgdBM7HKtbfJ97)** and you will receive an email from the LLD-MMRI2023 group within 3 days. We look forward to welcoming you!
65 |
66 |
67 | * 2023-3-4: Our [challenge proposal](https://doi.org/10.5281/zenodo.7841543) has been accepted by [MICCAI 2023](https://conferences.miccai.org/2023/en/online.asp).
68 |
69 |
70 | ## :dart: **Objective**
71 | Liver cancer is a severe disease that poses a significant threat to global human health. To enhance the accuracy of liver lesion diagnosis, multi-phase contrast-enhanced magnetic resonance imaging (MRI) has emerged as a promising tool. In this context, we aim to initiate the inaugural Liver Lesion Diagnosis Challenge on Multi-phase MRI (LLD-MMRI2023) to encourage the development and advancement of computer-aided diagnosis (CAD) systems in this domain.
72 | ## :memo: **Registration (Closed)**
73 | Registration is now closed. During the registration period, we kindly requested participants to complete the **[registration form](https://forms.gle/TaULgdBM7HKtbfJ97)** accurately and thoroughly; the registration outcome was communicated via email within 3 days. Please check your spam folder if you did not receive a reply for a long time.
74 | **Note**: The registration channel closed on **June 17th**.
75 |
76 | ## :file_folder: **Dataset**
77 | **Note: The dataset is restricted to research use only; you may not use this data for commercial purposes.**
78 | 1. The training set, validation set, and test set will be made available in three stages. The training dataset (with annotations) and the validation dataset (without annotations) will be released first on **April 28th**. Annotations on the validation set will be accessible on **July 8th**, and the test dataset (without annotations) will be released on **August 2nd**.
79 | 2. The datasets include full volume data, lesion bounding boxes, and pre-cropped lesion patches.
80 | 3. The dataset comprises 7 different lesion types, including 4 benign types (Hepatic hemangioma, Hepatic abscess, Hepatic cysts, and Focal nodular hyperplasia) and 3 malignant types (Intrahepatic cholangiocarcinoma, Liver metastases, and Hepatocellular carcinoma). Participants are required to diagnose the type of liver lesion in each case.
81 | 4. Each lesion has 8 different phases, providing diverse visual cues.
82 | 5. The dataset split is as follows:
83 |    - Training cases: 316
84 |    - Validation cases: 78
85 |    - Test cases: 104
86 |
87 | - [X] April 28th: Release the training set (with annotations), validation set (without annotations), and baseline code
88 | - [X] July 8th: Release the annotations of the validation set
89 | - [X] August 2nd: Release the test set (without annotations)
90 |
91 | ## 🖥️ **Training**
92 | We shall provide the code for data loading, model training/evaluation, and prediction in this repository. Participants can design and evaluate their models following the provided baseline. The code will be published on **April 28th**.
93 | **Note**: Additional public datasets are allowed for model training and/or pre-training, but private data is not allowed.
94 |
95 | ## 🖥️ **Prediction**
96 | We highly suggest using our provided code to generate predictions.
97 | **Note**: If participants intend to use their own prediction pipeline, please ensure that the format of the prediction results exactly matches the template we provide.
98 | The code will be published on **April 28th**.
99 |
100 | ## **📤 Submission**
101 | Participants should send their prediction results to the designated email address (the address will be communicated by email to the registered participants), and we shall acknowledge receipt.
102 | **Note**: The challenge comprises four evaluation stages. In the first three stages, we shall update the ranking based on the predicted results of the algorithm on the validation set. Participants will have a 24-hour submission window on **May 26th**, **June 16th**, and **July 7th** to submit their prediction results. On July 8th, annotations on the validation set will be released to further support model design and training. In the final stage, the test set (without annotations) will be released on **August 2nd**. Participants are required to submit their predictions on **August 4th**, and the predicted results at this stage will determine the final ranking.
103 |
104 | - [X] May 26th: The first submission of the predicted results on the validation set
105 | - [X] June 16th: The second submission of the predicted results on the validation set
106 | - [X] July 7th: The third submission of the predicted results on the validation set
107 | - [X] August 4th: The submission of the predicted results on the test set (this will be used for the final leaderboard).
108 |
109 | ## :trophy: **Leaderboard**
110 | We shall present and update the leaderboard on our website.
111 | The ranking shall be determined by the average of the **F1-score** and **Cohen's Kappa coefficient**.
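
For reference, here is a minimal sketch of how this average score can be computed with scikit-learn, mirroring the macro-averaged F1 used in [main/metrics.py](main/metrics.py); the labels below are hypothetical:
```
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical ground-truth and predicted lesion labels (7 classes, 0-6).
y_true = [0, 1, 2, 3, 4, 5, 6, 6, 3, 1]
y_pred = [0, 1, 2, 3, 4, 5, 6, 3, 3, 1]

f1 = f1_score(y_true, y_pred, average='macro')  # macro F1, as in main/metrics.py
kappa = cohen_kappa_score(y_true, y_pred)       # Cohen's Kappa coefficient
print(f'Average_Score = {(f1 + kappa) / 2:.4f}')  # ranking criterion
```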
112 |
113 | ## **🔍 Verification**
114 | To ensure fairness in the challenge, we do not allow the use of private data, and you must ensure the reproducibility of your model. The top 10 teams will be required to disclose their code and model weights on GitHub or another publicly accessible website for verification. We shall use the disclosed code and model weights to verify that the reproduced results are consistent with the submitted predictions. Failure to disclose code and model weights within the stipulated time frame shall lead to removal from the leaderboard. In case of serious discrepancies detected using the disclosed code and model weights, we shall notify the corresponding teams to take remedial actions. Failure to comply within the allotted time will lead to removal from the leaderboard, and the leaderboard will be adjusted accordingly. New teams that subsequently enter the top 10 will also be required to comply with the same rules. If you use additional public datasets, you must also disclose them; however, such disclosure will not impact your ranking.
115 |
116 | ## 🏅 **Announcement**
117 | 1. All the results shall be publicly displayed on the leaderboard.
118 | 2. The top 5 teams on the leaderboard shall be invited to give a 5-10 minute presentation at the MICCAI 2023 challenge session.
119 | 3. The prizes are as follows:
120 | :1st_place_medal: First prize: **US$3,000** for one winner;
121 | :2nd_place_medal: Second prize: **US$1,000** for one winner;
122 | :3rd_place_medal: Third prize: **US$500** for two to three winners.
123 |
124 | ## **🤝 Acknowledgement**
125 | We would like to acknowledge the following organizations for their support in making this challenge possible:
126 | - Ningbo Medical Center Lihuili Hospital for providing the dataset.
127 | - Deepwise Healthcare and The University of Hong Kong for organizing and funding the challenge.
128 | ## :e-mail: **Contact**
129 | Should you have any questions, please feel free to contact the organizers at **lld_mmri@yeah.net** or open an **[issue](https://github.com/LMMMEng/LLD-MMRI2023/issues)**.
130 |
--------------------------------------------------------------------------------
/assets/img.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/assets/img.png
--------------------------------------------------------------------------------
/assets/test_leaderboard.md:
--------------------------------------------------------------------------------
1 | | Ranking | Team_name | F1-score | Cohen_Kappa | Average_Score |
2 | |:------------:|---------------------|:----------:|:-------------:|:---------------:|
3 | | 1 | [WorkingisAllyouneed](https://github.com/ZHEGG/miccai2023) | 0.8322 | 0.7801 | 0.8062 |
4 | | 2 | [NPUBXY](https://github.com/aa1234241/lld_submit) | 0.8078 | 0.7660 | 0.7869 |
5 | | 3 | [LinGroup](https://github.com/WillbeD0ne/LLD_LinGroup) | 0.7860 | 0.7435 | 0.7647 |
6 | | 4 | [MediSegLearner](https://github.com/Jiangj512/LLD_Project) | 0.7807 | 0.7312 | 0.7560 |
7 | | 5 | [SH AI lab](https://github.com/bjtbgbg/Lesion-Classifier) | 0.7609 | 0.7084 | 0.7346 |
8 | | 6 | BDAV_Y | 0.7488 | 0.6978 | 0.7233 |
9 | | 7 | AQAWER | 0.7237 | 0.6842 | 0.7040 |
10 | | 8 | LinkStart | 0.7120 | 0.6830 | 0.6975 |
11 | | 9 | [CompAI](https://github.com/hannesk95/LLD-MMRI-CompAI/tree/master) | 0.7191 | 0.6713 | 0.6951 |
12 | | 10 | jingxinqiushi | 0.7212 | 0.6651 | 0.6932 |
13 | | 11 | NPU_SAIIP | 0.7020 | 0.6499 | 0.6760 |
14 | | 12 | Taikula | 0.6501 | 0.6409 | 0.6455 |
15 | | 13 | Liang | 0.6580 | 0.6136 | 0.6358 |
16 | | 14 | MIG8VIITT | 0.6533 | 0.6005 | 0.6269 |
17 | | 15 | YuGao805 | 0.6413 | 0.6096 | 0.6255 |
18 | | 16 | SJTU_EIEE_2-426Lab | 0.6422 | 0.6017 | 0.6219 |
19 | | 17 | DMIR-Medical-Group | 0.6433 | 0.5915 | 0.6174 |
20 | | 18 | ZICteam | 0.6339 | 0.5900 | 0.6119 |
21 | | 19 | ZJU_Give_Ritsumeikan_mosc | 0.6400 | 0.5677 | 0.6038 |
22 | | 20 | beat-FLL | 0.6076 | 0.5856 | 0.5966 |
23 | | 21 | [**Baseline**](https://github.com/LMMMEng/LLD-MMRI2023/tree/main/main) | 0.6083 | 0.5414 | 0.5748 |
24 | | 22 | contoto | 0.5755 | 0.5416 | 0.5586 |
25 | | 23 | liver4z | 0.5047 | 0.4698 | 0.4872 |
26 | | 24 | MedAILab | 0.2562 | 0.1840 | 0.2201 |
27 |
--------------------------------------------------------------------------------
/assets/validation_leaderboard.md:
--------------------------------------------------------------------------------
1 | |Ranking|Team Name|F1-score|Cohen_Kappa|Average_Score|
2 | |-|-|-|-|-|
4 | |1|wang-techman|0.7731 |0.7363 |0.7547 |
5 | |2|NPUBXY|0.7629 |0.7164 |0.7397 |
6 | |3|jingxinqiushi|0.7609 |0.7179 |0.7394 |
7 | |4|SH AI lab|0.7314 |0.7148 |0.7231 |
8 | |5|WorkingisAllyouneed|0.7295 |0.6996 |0.7146 |
9 | |6|RKO|0.6933 |0.6669 |0.6801 |
10 | |7|MediSegLearner|0.6827 |0.6765 |0.6796 |
11 | |8|MedAILab|0.7088 |0.6472 |0.6780 |
12 | |9|beat-FLL|0.6873 |0.6600 |0.6736 |
13 | |10|baseline_v2|0.6963 |0.6473 |0.6718 |
14 | |11|ZJU_Give_Ritsumeikan_mosc|0.6729 |0.6638 |0.6683 |
15 | |12|MIG8VIITT|0.6715 |0.6513 |0.6614 |
16 | |13|AQAWER|0.6804 |0.6401 |0.6603 |
17 | |14|DMIR-Medical-Group|0.6662 |0.6394 |0.6528 |
18 | |15|Jiangnan Teacher Yu Team|0.6547 |0.6462 |0.6504 |
19 | |16|LinkStart|0.6757 |0.6196 |0.6477 |
20 | |17|SJTU_EIEE_2-426Lab|0.6802 |0.6065 |0.6433 |
21 | |18|chrisli|0.6635 |0.6085 |0.6360 |
22 | |19|Liang|0.6416 |0.6294 |0.6355 |
23 | |20|[Baseline](https://github.com/LMMMEng/LLD-MMRI2023/tree/main/main)|0.6596 |0.6104 |0.6350 |
24 | |21|luckyjing|0.6497 |0.6191 |0.6344 |
25 | |22|liver4z|0.6401 |0.6176 |0.6288 |
26 | |23|YuGao805|0.6465 |0.6105 |0.6285 |
27 | |24|DDL_is_coming|0.6341 |0.6165 |0.6253 |
28 | |25|NPU_SAIIP|0.6367 |0.6119 |0.6243 |
29 | |26|Taikula|0.6235 |0.6196 |0.6215 |
30 | |27|freemagic|0.6391 |0.5989 |0.6190 |
31 | |28|Yaaheyaahe|0.6172 |0.5955 |0.6063 |
32 | |29|FightTumor|0.6189 |0.5918 |0.6053 |
33 | |30|ZICteam|0.6250 |0.5776 |0.6013 |
34 | |31|BDAV_Y|0.6183 |0.5816 |0.6000 |
35 | |32|Perception|0.6076 |0.5825 |0.5951 |
36 | |33|Nightmare|0.5976 |0.5620 |0.5798 |
37 | |34|ViCBiC|0.6055 |0.5515 |0.5785 |
38 | |35|Lipp|0.5974 |0.5504 |0.5739 |
39 | |36|LSJ|0.5800 |0.5556 |0.5678 |
40 | |37|Pami_DLUT|0.5831 |0.5498 |0.5664 |
41 | |38|LinGroup|0.5762 |0.5540 |0.5651 |
42 | |39|SuperPolymerization|0.5900 |0.5187 |0.5543 |
43 | |40|Distract|0.5498 |0.5235 |0.5367 |
44 | |41|rit_iipl|0.5483 |0.4859 |0.5171 |
45 | |42|contoto|0.5594 |0.4520 |0.5057 |
46 | |43|Dolphins|0.5027 |0.4669 |0.4848 |
47 | |44|Uestc S|0.5096 |0.4542 |0.4819 |
48 | |45|tom|0.5077 |0.3981 |0.4529 |
49 | |46|Blessing|0.3566 |0.2752 |0.3159 |
50 | |47|junqiangmler|0.2278 |0.0954 |0.1616 |
51 | |48|CompAI|0.1335 |0.0605 |0.0970 |
52 |
--------------------------------------------------------------------------------
/assets/validation_leaderboard_1st.md:
--------------------------------------------------------------------------------
1 | | Ranking | Team Name | F1-score | Cohen_Kappa | Average_Score |
2 | |---------|------------------------|----------|-------------|---------------|
3 | | 1 | ZJU_Give_Ritsumeikan_mosc | 0.6729 | 0.6638 | 0.6683 |
4 | | 2 | Jiangnan Teacher Yu Team | 0.6547 | 0.6462 | 0.6504 |
5 | | 3 | jingxinqiushi | 0.6771 | 0.6213 | 0.6492 |
6 | | 4 | NPUBXY | 0.6550 | 0.6361 | 0.6455 |
7 | | 5 | SJTU_EIEE_2-426Lab | 0.6802 | 0.6065 | 0.6433 |
8 | | 6 | MIG8VIITT | 0.6596 | 0.6104 | 0.6350 |
9 | | 6 | chrisli | 0.6596 | 0.6104 | 0.6350 |
10 | | 6 | DMIR-Medical-Group | 0.6596 | 0.6104 | 0.6350 |
11 | | 6 | [***Baseline***](https://github.com/LMMMEng/LLD-MMRI2023/tree/main/main) | 0.6596 | 0.6104 | 0.6350 |
12 | | 7 | YuGao805 | 0.6465 | 0.6105 | 0.6285 |
13 | | 8 | SH AI lab | 0.6126 | 0.6125 | 0.6126 |
14 | | 9 | DDL_is_coming | 0.6238 | 0.5881 | 0.6060 |
15 | | 10 | WorkingisAllyouneed | 0.6262 | 0.5710 | 0.5986 |
16 | | 11 | Yaaheyaahe | 0.6155 | 0.5748 | 0.5951 |
17 | | 12 | BDAV_Y | 0.5919 | 0.5814 | 0.5867 |
18 | | 13 | LinkStart | 0.5806 | 0.5868 | 0.5837 |
19 | | 14 | Nightmare | 0.5976 | 0.5620 | 0.5798 |
20 | | 15 | ViCBiC | 0.6055 | 0.5515 | 0.5785 |
21 | | 16 | Lipp | 0.5974 | 0.5504 | 0.5739 |
22 | | 17 | LSJ | 0.5800 | 0.5556 | 0.5678 |
23 | | 18 | liver4z | 0.5846 | 0.5372 | 0.5609 |
24 | | 19 | SuperPolymerization | 0.5900 | 0.5187 | 0.5543 |
25 | | 20 | luckyjing | 0.5496 | 0.5437 | 0.5466 |
26 | | 21 | Distract | 0.5498 | 0.5235 | 0.5367 |
27 | | 22 | rit_iipl | 0.5483 | 0.4859 | 0.5171 |
28 | | 23 | junqiangmler | 0.2278 | 0.0954 | 0.1616 |
29 |
--------------------------------------------------------------------------------
/assets/validation_leaderboard_2nd.md:
--------------------------------------------------------------------------------
1 | | Ranking | Team Name | F1-score | Cohen_Kappa | Average_Score |
2 | | ------- | ------------------------ | -------- | ----------- | ------------- |
3 | | 1 | NPUBXY | 0.7629 | 0.7164 | 0.7397 |
4 | | 2 | SH AI lab | 0.7314 | 0.7148 | 0.7231 |
5 | | 3 | WorkingisAllyouneed | 0.7295 | 0.6996 | 0.7146 |
6 | | 4 | RKO | 0.6933 | 0.6669 | 0.6801 |
7 | | 5 | baseline_v2 | 0.6963 | 0.6473 | 0.6718 |
8 | | 6 | AQAWER | 0.6804 | 0.6401 | 0.6603 |
9 | | 7 | MIG8VIITT | 0.6710 | 0.6384 | 0.6547 |
10 | | 8 | DMIR-Medical-Group | 0.6662 | 0.6394 | 0.6528 |
11 | | 9 | jingxinqiushi | 0.6771 | 0.6213 | 0.6492 |
12 | | 10 | wang-techman | 0.6728 | 0.6228 | 0.6478 |
13 | | 11 | LinkStart | 0.6664 | 0.6096 | 0.6380 |
14 | | 12 | chrisli | 0.6635 | 0.6085 | 0.6360 |
15 | | 13 | Liang | 0.6416 | 0.6294 | 0.6355 |
16 | | 14 | [***Baseline***](https://github.com/LMMMEng/LLD-MMRI2023/tree/main/main) | 0.6596 | 0.6104 | 0.6350 |
17 | | 15 | luckyjing | 0.6497 | 0.6191 | 0.6344 |
18 | | 16 | liver4z | 0.6401 | 0.6176 | 0.6288 |
19 | | 17 | DDL_is_coming | 0.6341 | 0.6165 | 0.6253 |
20 | | 18 | NPU_SAIIP | 0.6367 | 0.6119 | 0.6243 |
21 | | 19 | SJTU_EIEE_2-426Lab | 0.6445 | 0.5964 | 0.6204 |
22 | | 20 | freemagic | 0.6391 | 0.5989 | 0.6190 |
23 | | 21 | Yaaheyaahe | 0.6172 | 0.5955 | 0.6063 |
24 | | 22 | BDAV_Y | 0.6183 | 0.5816 | 0.6000 |
25 | | 23 | ZJU_Give_Ritsumeikan_mosc | 0.6354 | 0.5624 | 0.5989 |
26 | | 24 | Perception | 0.6076 | 0.5825 | 0.5951 |
27 | | 25 | Taikula | 0.6127 | 0.5706 | 0.5916 |
28 | | 26 | Jiangnan Teacher Yu Team | 0.6010 | 0.5347 | 0.5678 |
29 | | 27 | nightmare | 0.5849 | 0.5090 | 0.5470 |
30 | | 28 | Lipp | 0.5307 | 0.5016 | 0.5161 |
31 | | 29 | LinGroup | 0.5377 | 0.4746 | 0.5061 |
32 | | 30 | ViCBiC | 0.5056 | 0.4403 | 0.4730 |
33 | | 31 | MediSegLearner | 0.1329 | 0.0529 | 0.0929 |
34 | | 32 | beat-FLL | 0.1392 | 0.0452 | 0.0922 |
--------------------------------------------------------------------------------
/assets/validation_leaderboard_3rd.md:
--------------------------------------------------------------------------------
1 | | Ranking | Team_name | F1-score | Cohen_Kappa | Average_Score |
2 | |:--------:|-----------------------|:---------:|:------------:|:--------------:|
3 | | 1 | wang-techman | 0.7731 | 0.7363 | 0.7547 |
4 | | 2 | jingxinqiushi | 0.7609 | 0.7179 | 0.7394 |
5 | | 3 | SH AI lab | 0.7141 | 0.6979 | 0.7060 |
6 | | 4 | MediSegLearner | 0.6827 | 0.6765 | 0.6796 |
7 | | 5 | MedAILab | 0.7088 | 0.6472 | 0.6780 |
8 | | 6 | NPUBXY | 0.6997 | 0.6519 | 0.6758 |
9 | | 7 | beat-FLL | 0.6873 | 0.6600 | 0.6736 |
10 | | 8 | MIG8VIITT | 0.6715 | 0.6513 | 0.6614 |
11 | | 9 | LinkStart | 0.6757 | 0.6196 | 0.6477 |
12 | | 10 | RKO | 0.6532 | 0.6314 | 0.6423 |
13 | | 11 | chrisli | 0.6635 | 0.6085 | 0.6360 |
14 | | 12 | [***Baseline***](https://github.com/LMMMEng/LLD-MMRI2023/tree/main/main) | 0.6596 | 0.6104 | 0.6350 |
15 | | 13 | ZJU_Give_Ritsumeikan_mosc | 0.6545 | 0.6047 | 0.6296 |
16 | | 14 | SJTU_EIEE_2-426Lab | 0.6389 | 0.6156 | 0.6272 |
17 | | 15 | Taikula | 0.6235 | 0.6196 | 0.6215 |
18 | | 16 | FightTumor | 0.6189 | 0.5918 | 0.6053 |
19 | | 17 | WorkingisAllyouneed | 0.6094 | 0.5992 | 0.6043 |
20 | | 18 | AQAWER | 0.6268 | 0.5800 | 0.6034 |
21 | | 19 | ZICteam | 0.6250 | 0.5776 | 0.6013 |
22 | | 20 | luckyjing | 0.5798 | 0.5913 | 0.5855 |
23 | | 21 | DMIR-Medical-Group | 0.6088 | 0.5349 | 0.5718 |
24 | | 22 | Pami_DLUT | 0.5831 | 0.5498 | 0.5664 |
25 | | 23 | LinGroup | 0.5762 | 0.5540 | 0.5651 |
26 | | 24 | baseline_v2 | 0.5682 | 0.5268 | 0.5475 |
27 | | 25 | YuGao805 | 0.5540 | 0.5057 | 0.5299 |
28 | | 26 | Liang | 0.5445 | 0.4707 | 0.5076 |
29 | | 27 | contoto | 0.5594 | 0.4520 | 0.5057 |
30 | | 28 | Dolphins | 0.5027 | 0.4669 | 0.4848 |
31 | | 29 | Uestc S | 0.5096 | 0.4542 | 0.4819 |
32 | | 30 | tom | 0.5077 | 0.3981 | 0.4529 |
33 | | 31 | Blessing | 0.3566 | 0.2752 | 0.3159 |
34 | | 32 | CompAI | 0.1335 | 0.0605 | 0.0970 |
35 | | 33 | BDAV_Y | 0.0984 | 0.0923 | 0.0954 |
--------------------------------------------------------------------------------
/main/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/.DS_Store
--------------------------------------------------------------------------------
/main/README.md:
--------------------------------------------------------------------------------
1 | # An official implementation of training and prediction using the LLD-MMRI dataset
2 |
3 | ## Usage
4 |
5 | First, clone the repository locally:
6 | ```
7 | $ git clone https://github.com/LMMMEng/LLD-MMRI2023.git
8 | $ cd LLD-MMRI2023/main
9 | ```
10 | We highly recommend installing the provided dependencies:
11 | ```
12 | $ pip install -r requirements.txt
13 | ```
14 |
15 | ## Data Preparation
16 | ### 1. Download and extract dataset
17 | **Note**: *Registered participants will receive the download link via email within 3 working days. If you do not receive it for a long time, please check your spam folder or contact us via lld_mmri@yeah.net.*
18 |
19 |
20 | Download and extract training dataset:
21 | - Linux
22 |
23 | ```
24 | $ cat lld_mmri2023_part0* > lld_mmri2023.zip
25 | $ unzip lld_mmri2023.zip
26 | ```
27 | - Windows
28 | ```
29 | $ type lld_mmri2023_part0* > lld_mmri2023.zip
30 | $ unzip lld_mmri2023.zip
31 |
32 | ```
33 |
34 |
35 | The data are stored in the following structure:
36 | ```
37 | data directory structure:
38 | ├── images
39 | ├── MR-a
40 | ├── a-seriesID
41 | ├── aa.nii.gz
42 | ├── ab.nii.gz
43 | ├── ac.nii.gz
44 | ├── MR-b
45 | ├── b-seriesID
46 | ├── ba.nii.gz
47 | ├── bb.nii.gz
48 | ├── bc.nii.gz
49 | ├── labels
50 | ├── Annotation.json
51 | ├── classification_dataset
52 | ```
53 | Descriptions:
54 |
55 | **images**: The ```images``` directory contains both training and validation data. Each folder whose name begins with MR represents a patient case, and each case contains eight whole MRI volumes, each representing a single scanning phase and saved as a nii.gz file. **You need to diagnose the category of the liver lesion for each patient based on the corresponding 8 volumes**.
56 |
57 | **labels**: The ```Annotation.json``` contains the true volume spacing information, bounding box information, and the category information of the lesions. The corresponding labels of liver lesions in each category are as follows:
58 | ```
59 | "Hepatic_hemangioma": 0,
60 | "Intrahepatic_cholangiocarcinoma": 1,
61 | "Hepatic_abscess": 2,
62 | "Hepatic_metastasis": 3,
63 | "Hepatic_cyst": 4,
64 | "FOCAL_NODULAR_HYPERPLASIA": 5,
65 | "Hepatocellular_carcinoma": 6,
66 | "Benign": [0, 2, 4, 5],
67 | "Malignant": [1, 3, 6],
68 | "Inaccessible": -1
69 | ```
70 | **Note**: **-1** indicates that a category label has not been provided for this case; you need to make predictions on these cases and submit the results. In other words, a case with the label **-1** belongs to the validation set.
71 |
72 | **classification_dataset**: Directory with lesion-centered 3D ROIs; please refer to [Data preprocessing](#2-data-preprocessing).
73 |
74 | ### 2. Data preprocessing
75 | We provide lesion-centered 3D ROIs in the directory ```data/classification_dataset```; you can either use our preprocessed dataset directly or customize your own data preprocessing.
76 |
77 | #### 2.1 Directly using our preprocessed dataset
78 | We provide data consisting of lesion-centered 3D ROIs. The directory structure is as follows:
79 | ```
80 | ├── classification_dataset
81 | ├── images
82 | ├── MR-a
83 | ├── T2WI.nii.gz
84 | ├── In Phase.nii.gz
85 | ├── Out Phase.nii.gz
86 | ├── C+Delay.nii.gz
87 | ├── C+V.nii.gz
88 | ├── C-pre.nii.gz
89 | ├── C+A.nii.gz
90 | ├── DWI.nii.gz
91 | ├── MR-b
92 | ├── T2WI.nii.gz
93 | ├── In Phase.nii.gz
94 | ├── Out Phase.nii.gz
95 | ├── C+Delay.nii.gz
96 | ├── C+V.nii.gz
97 | ├── C-pre.nii.gz
98 | ├── C+A.nii.gz
99 | ├── DWI.nii.gz
100 | ├── labels
101 | ├── labels.txt
102 | ├── labels_val_inaccessible.txt
103 | ```
104 | Descriptions:
105 |
106 | **images**: In the ```images``` directory, each folder whose name begins with MR represents a case, and each case contains 8 cropped lesion volumes.
107 |
108 | The specific volume information represented by each nii.gz file from a case is as follows:
109 | ```
110 | 'T2WI.nii.gz': T2-weighted imaging,
111 | 'In Phase.nii.gz': T1 in phase,
112 | 'Out Phase.nii.gz': T1 out of phase,
113 | 'C+Delay.nii.gz': Delay phase,
114 | 'C+V.nii.gz': Venous phase,
115 | 'C-pre.nii.gz': Non-contrast phase,
116 | 'C+A.nii.gz': Arterial phase,
117 | 'DWI.nii.gz': Diffusion-weighted imaging,
118 | ```
119 | **labels**:
120 | ```labels.txt``` records the liver lesion category of each case in the training set.
121 | ```labels_val_inaccessible.txt``` lists the cases of the validation set. Since their labels are currently confidential, each sample is assigned a label of -1. You need to make predictions on these data and submit the results.
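
For reference, a minimal sketch of how these label files are parsed, mirroring the ```np.loadtxt``` call in [datasets/mp_liver_dataset.py](datasets/mp_liver_dataset.py); the file path is the default, and the case names in the comment are hypothetical:
```
import numpy as np

# Each line of labels.txt is whitespace-separated: "<case_folder> <label>",
# e.g. "MR-a 0" (hypothetical case name). Validation cases in
# labels_val_inaccessible.txt carry the placeholder label -1.
anno = np.loadtxt('data/classification_dataset/labels/labels.txt', dtype=np.str_)
for case_dir, label in anno:
    print(case_dir, int(label))
```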
122 |
123 | These preprocessed 3D ROIs can be generated by running:
124 | ```
125 | $ python3 preprocess/crop_roi.py --data_dir data/images/ --anno-path data/labels/Annotation.json --save-dir data/classification_dataset/images/
126 | ```
127 |
128 | #### 2.2 Customize your own data preprocessing
129 |
130 | In addition to using the preprocessed data we provide, you can customize your own data preprocessing. The contents of ```Annotation.json``` and [preprocess/crop_roi.py](preprocess/crop_roi.py) can serve as references. Keep in mind that the goal is to diagnose the liver lesion category of each case.
131 |
132 | #### 2.3 Data division
133 | We recommend using 5-fold cross-validation on the accessible dataset to evaluate your algorithm; we have provided cross-validation label files which you can refer to:
134 | ```
135 | ├── classification_dataset
136 | ├── labels
137 | ├── train_fold1.txt
138 | ...
139 | ├── train_fold5.txt
140 | ├── val_fold1.txt
141 | ...
142 | ├── val_fold5.txt
143 | ```
144 |
145 | Alternatively, you can generate an n-fold cross-validation split yourself with the following command:
146 | ```
147 | $ python3 preprocess/gene_cross_val.py --lab-path data/classification_dataset/labels/labels.txt --save-dir data/classification_dataset/labels/ --num-folds 5 --seed 66
148 | ```
149 | This will produce the corresponding files under ```--save-dir```.
150 |
151 | ## Training
152 | We use a 3D implementation of the [UniFormer-S](https://github.com/Sense-X/UniFormer/tree/main/image_classification) as the baseline model, and the multi-phase images are treated as the input channels of the model. The details can be found in [models/uniformer.py](models/uniformer.py).
153 |
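To make the input layout concrete, here is a minimal sketch of the expected tensor shape (following the default ```--img_size```/```--crop_size``` of the baseline; the batch itself is random data for illustration):
```
import torch

# The 8 phases (T2WI, DWI, In Phase, Out Phase, C-pre, C+A, C+V, C+Delay) are
# stacked along the channel axis, so after the default crop each case becomes a
# tensor of shape (phases, depth, height, width) = (8, 14, 112, 112).
batch = torch.randn(2, 8, 14, 112, 112)  # hypothetical batch of 2 cases
print(batch.shape)  # torch.Size([2, 8, 14, 112, 112])
```
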
154 | To train the baseline model on LLD-MMRI dataset on a single node with 2 GPUs for 300 epochs, please run the following command:
155 |
156 | ```
157 | $ python3 -m torch.distributed.launch --master_port=$((RANDOM+10000)) --nproc_per_node=2 train.py --data_dir data/classification_dataset/images/ --train_anno_file data/classification_dataset/labels/train_fold1.txt --val_anno_file data/classification_dataset/labels/val_fold1.txt --batch-size 4 --model uniformer_small_IL --lr 1e-4 --warmup-epochs 5 --epochs 300 --output output/
158 | ```
159 |
160 | ## Prediction
161 | We accept submissions in the form of a JSON file containing the predicted results, submitted at the specified time; the details will be communicated to the registered participants via email.
162 |
163 | You can download the [trained model weights](https://github.com/LMMMEng/LLD-MMRI2023/releases/download/release-v1/best_f1_checkpoint-216.pth.tar) and use the following command to make predictions:
164 |
165 | ```
166 | $ python3 predict.py --data_dir data/classification_dataset/images --val_anno_file data/classification_dataset/labels/labels_val_inaccessible.txt --model uniformer_small_IL --batch-size 8 --checkpoint best_f1_checkpoint-216.pth.tar --results-dir output/20230411-192839-uniformer_small_IL/ --team_name LLDBaseline
167 | ```
168 |
169 | You can also retrain the baseline model. Once you have a satisfactory model, please generate a prediction file on the validation set by running the following command:
170 | ```
171 | $ python3 predict.py --data_dir data/classification_dataset/images --val_anno_file data/classification_dataset/labels/labels_val_inaccessible.txt --model uniformer_small_IL --batch-size 8 --checkpoint path-to-model-checkpoint --results-dir path-to-results-dir --team_name your_team_name
172 | ```
173 | This will generate ```your_team_name.json``` under ```--results-dir```.
174 |
175 | **Important**: You may have a custom data processing, model design, and training pipeline; therefore, the provided prediction code may not be applicable. We provide a prediction result file format [here](output/LLDBaseline.json); please strictly follow this format when generating predictions. In addition, the submitted JSON file must be named after your registered team name. Otherwise, we cannot evaluate your submission.
176 |
--------------------------------------------------------------------------------
/main/__pycache__/metrics.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/__pycache__/metrics.cpython-36.pyc
--------------------------------------------------------------------------------
/main/__pycache__/metrics.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/__pycache__/metrics.cpython-38.pyc
--------------------------------------------------------------------------------
/main/datasets/__pycache__/mp_liver_dataset.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/datasets/__pycache__/mp_liver_dataset.cpython-36.pyc
--------------------------------------------------------------------------------
/main/datasets/__pycache__/mp_liver_dataset.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/datasets/__pycache__/mp_liver_dataset.cpython-38.pyc
--------------------------------------------------------------------------------
/main/datasets/__pycache__/transforms.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/datasets/__pycache__/transforms.cpython-36.pyc
--------------------------------------------------------------------------------
/main/datasets/__pycache__/transforms.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/datasets/__pycache__/transforms.cpython-38.pyc
--------------------------------------------------------------------------------
/main/datasets/mp_liver_dataset.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import numpy as np
3 | from functools import partial
4 | from timm.data.loader import _worker_init
5 | from timm.data.distributed_sampler import OrderedDistributedSampler
6 | try:
7 | from datasets.transforms import *
8 | except ImportError:
9 | from transforms import *
10 |
11 | class MultiPhaseLiverDataset(torch.utils.data.Dataset):
12 | def __init__(self, args, is_training=True):
13 | self.args = args
14 | self.size = args.img_size
15 | self.is_training = is_training
16 | img_list = []
17 | lab_list = []
18 | phase_list = ['T2WI', 'DWI', 'In Phase', 'Out Phase',
19 | 'C-pre', 'C+A', 'C+V', 'C+Delay']
20 |
21 | if is_training:
22 | anno = np.loadtxt(args.train_anno_file, dtype=np.str_)
23 | else:
24 | anno = np.loadtxt(args.val_anno_file, dtype=np.str_)
25 |
26 | for item in anno:
27 | mp_img_list = []
28 | for phase in phase_list:
29 | mp_img_list.append(f'{args.data_dir}/{item[0]}/{phase}.nii.gz')
30 | img_list.append(mp_img_list)
31 | lab_list.append(item[1])
32 |
33 | self.img_list = img_list
34 | self.lab_list = lab_list
35 |
36 | def __getitem__(self, index):
37 | args = self.args
38 | image = self.load_mp_images(self.img_list[index])
39 | if self.is_training:
40 | image = self.transforms(image, args.train_transform_list)
41 | else:
42 | image = self.transforms(image, args.val_transform_list)
43 | image = image.copy()
44 | label = int(self.lab_list[index])
45 | return (image, label)
46 |
47 | def load_mp_images(self, mp_img_list):
48 | mp_image = []
49 | for img in mp_img_list:
50 | image = load_nii_file(img)
51 | image = resize3D(image, self.size)
52 | image = image_normalization(image)
53 | mp_image.append(image[None, ...])
54 | mp_image = np.concatenate(mp_image, axis=0)
55 | return mp_image
56 |
57 | def transforms(self, mp_image, transform_list):
58 | args = self.args
59 | if 'center_crop' in transform_list:
60 | mp_image = center_crop(mp_image, args.crop_size)
61 | if 'random_crop' in transform_list:
62 | mp_image = random_crop(mp_image, args.crop_size)
63 | if 'z_flip' in transform_list:
64 | mp_image = random_flip(mp_image, mode='z', p=args.flip_prob)
65 | if 'x_flip' in transform_list:
66 | mp_image = random_flip(mp_image, mode='x', p=args.flip_prob)
67 | if 'y_flip' in transform_list:
68 | mp_image = random_flip(mp_image, mode='y', p=args.flip_prob)
69 | if 'rotation' in transform_list:
70 | mp_image = rotate(mp_image, args.angle)
71 | return mp_image
72 |
73 | def __len__(self):
74 | return len(self.img_list)
75 |
76 | def create_loader(
77 | dataset=None,
78 | batch_size=1,
79 | is_training=False,
80 | num_aug_repeats=0,
81 | num_workers=1,
82 | distributed=False,
83 | collate_fn=None,
84 | pin_memory=False,
85 | persistent_workers=True,
86 | worker_seeding='all',
87 | ):
88 |
89 | sampler = None
90 | if distributed and not isinstance(dataset, torch.utils.data.IterableDataset):
91 | if is_training:
92 | sampler = torch.utils.data.distributed.DistributedSampler(dataset)
93 | else:
94 | # This will add extra duplicate entries to result in equal num
95 | # of samples per-process, will slightly alter validation results
96 | sampler = OrderedDistributedSampler(dataset)
97 | else:
98 | assert num_aug_repeats == 0, "RepeatAugment not currently supported in non-distributed or IterableDataset use"
99 |
100 | loader_args = dict(
101 | batch_size=batch_size,
102 | shuffle=not isinstance(dataset, torch.utils.data.IterableDataset) and sampler is None and is_training,
103 | num_workers=num_workers,
104 | sampler=sampler,
105 | collate_fn=collate_fn,
106 | pin_memory=pin_memory,
107 | drop_last=is_training,
108 | worker_init_fn=partial(_worker_init, worker_seeding=worker_seeding),
109 | persistent_workers=persistent_workers
110 | )
111 | try:
112 | loader = torch.utils.data.DataLoader(dataset, **loader_args)
113 |     except TypeError:
114 |         loader_args.pop('persistent_workers')  # 'persistent_workers' only exists in PyTorch >= 1.7
115 | loader = torch.utils.data.DataLoader(dataset, **loader_args)
116 | return loader
117 |
118 | if __name__ == "__main__":
119 | import yaml
121 | import argparse
122 | from tqdm import tqdm
123 |
124 | config_parser = parser = argparse.ArgumentParser(description='Training Config', add_help=False)
125 | parser.add_argument('-c', '--config', default='', type=str, metavar='FILE',
126 | help='YAML config file specifying default arguments')
127 | parser = argparse.ArgumentParser(description='PyTorch Training')
128 | parser.add_argument(
129 | '--data_dir', default='data/classification_dataset/images/', type=str)
130 | parser.add_argument(
131 | '--train_anno_file', default='data/classification_dataset/labels/train_fold1.txt', type=str)
132 | parser.add_argument(
133 | '--val_anno_file', default='data/classification_dataset/labels/val_fold1.txt', type=str)
134 | parser.add_argument('--train_transform_list', default=['random_crop',
135 | 'z_flip',
136 | 'x_flip',
137 | 'y_flip',
138 | 'rotation',],
139 | nargs='+', type=str)
140 | parser.add_argument('--val_transform_list',
141 | default=['center_crop'], nargs='+', type=str)
142 | parser.add_argument('--img_size', default=(16, 128, 128),
143 | type=int, nargs='+', help='input image size.')
144 | parser.add_argument('--crop_size', default=(14, 112, 112),
145 | type=int, nargs='+', help='cropped image size.')
146 | parser.add_argument('--flip_prob', default=0.5, type=float,
147 | help='Random flip prob (default: 0.5)')
148 | parser.add_argument('--angle', default=45, type=int)
149 |
150 | def _parse_args():
151 | # Do we have a config file to parse?
152 | args_config, remaining = config_parser.parse_known_args()
153 | if args_config.config:
154 | with open(args_config.config, 'r') as f:
155 | cfg = yaml.safe_load(f)
156 | parser.set_defaults(**cfg)
157 |
158 | # The main arg parser parses the rest of the args, the usual
159 | # defaults will have been overridden if config file specified.
160 | args = parser.parse_args(remaining)
161 | # Cache the args as a text string to save them in the output dir later
162 | args_text = yaml.safe_dump(args.__dict__, default_flow_style=False)
163 | return args, args_text
164 |
165 | args, args_text = _parse_args()
166 | args_text = yaml.load(args_text, Loader=yaml.FullLoader)
167 | args_text['img_size'] = 'xxx'
168 | print(args_text)
169 |
170 | args.distributed = False
171 | args.batch_size = 100
172 |
173 | dataset = MultiPhaseLiverDataset(args, is_training=True)
174 | data_loader = create_loader(dataset, batch_size=3, is_training=True)
175 | # data_loader = torch.utils.data.DataLoader(dataset, batch_size=3)
176 | for images, labels in data_loader:
177 | print(images.shape)
178 | print(labels)
179 |
180 | # val_dataset = MultiPhaseLiverDataset(args, is_training=False)
181 | # val_data_loader = create_loader(val_dataset, batch_size=10, is_training=False)
182 | # for images, labels in val_data_loader:
183 | # print(images.shape)
184 | # print(labels)
--------------------------------------------------------------------------------
/main/datasets/transforms.py:
--------------------------------------------------------------------------------
1 | import random
2 | import torch
3 | import numpy as np
4 | import SimpleITK as sitk
5 | import torch.nn.functional as F
6 | from scipy import ndimage
7 | from timm.models.layers import to_3tuple
8 |
9 | def load_nii_file(nii_image):
10 | image = sitk.ReadImage(nii_image)
11 | image_array = sitk.GetArrayFromImage(image)
12 | return image_array
13 |
14 | def resize3D(image, size):
15 | size = to_3tuple(size)
16 | image = image.astype(np.float32)
17 | image = torch.from_numpy(image).unsqueeze(0).unsqueeze(0)
18 | x = F.interpolate(image, size=size, mode='trilinear', align_corners=True).squeeze(0).squeeze(0)
19 | return x.cpu().numpy()
20 |
21 | def image_normalization(image, win=None, adaptive=True):
22 | if win is not None:
23 | image = 1. * (image - win[0]) / (win[1] - win[0])
24 | image[image < 0] = 0.
25 | image[image > 1] = 1.
26 | return image
27 | elif adaptive:
28 |         min_val, max_val = np.min(image), np.max(image)
29 |         image = (image - min_val) / (max_val - min_val)
30 | return image
31 | else:
32 | return image
33 |
34 | def random_crop(image, crop_shape):
35 | crop_shape = to_3tuple(crop_shape)
36 | _, z_shape, y_shape, x_shape = image.shape
37 | z_min = np.random.randint(0, z_shape - crop_shape[0])
38 | y_min = np.random.randint(0, y_shape - crop_shape[1])
39 | x_min = np.random.randint(0, x_shape - crop_shape[2])
40 | image = image[..., z_min:z_min+crop_shape[0], y_min:y_min+crop_shape[1], x_min:x_min+crop_shape[2]]
41 | return image
42 |
43 | def center_crop(image, target_shape=(10, 80, 80)):
44 | target_shape = to_3tuple(target_shape)
45 | b, z_shape, y_shape, x_shape = image.shape
46 | z_min = z_shape // 2 - target_shape[0] // 2
47 | y_min = y_shape // 2 - target_shape[1] // 2
48 | x_min = x_shape // 2 - target_shape[2] // 2
49 | image = image[:, z_min:z_min+target_shape[0], y_min:y_min+target_shape[1], x_min:x_min+target_shape[2]]
50 | return image
51 |
52 | def randomflip_z(image, p=0.5):
53 | if random.random() > p:
54 | return image
55 | else:
56 | return image[:, ::-1, ...]
57 |
58 | def randomflip_x(image, p=0.5):
59 | if random.random() > p:
60 | return image
61 | else:
62 | return image[..., ::-1]
63 |
64 | def randomflip_y(image, p=0.5):
65 | if random.random() > p:
66 | return image
67 | else:
68 | return image[:, :, ::-1, ...]
69 |
70 | def random_flip(image, mode='x', p=0.5):
71 | if mode == 'x':
72 | image = randomflip_x(image, p=p)
73 | elif mode == 'y':
74 | image = randomflip_y(image, p=p)
75 | elif mode == 'z':
76 | image = randomflip_z(image, p=p)
77 | else:
78 | raise NotImplementedError(f'Unknown flip mode ({mode})')
79 | return image
80 |
81 | def rotate(image, angle=10):
82 |     angle = random.randint(-angle, angle)
83 | r_image = ndimage.rotate(image, angle=angle, axes=(-2, -1), reshape=True)
84 | if r_image.shape != image.shape:
85 | r_image = center_crop(r_image, target_shape=image.shape[1:])
86 | return r_image
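87 | 
88 | # --- Usage sketch (illustrative addition, not part of the original file) ---
89 | # Chains the transforms above on a random multi-phase volume, mirroring
90 | # MultiPhaseLiverDataset.transforms; shapes follow the baseline defaults
91 | # (8 phases, img_size (16, 128, 128), crop_size (14, 112, 112)).
92 | if __name__ == "__main__":
93 |     volume = np.random.rand(8, 16, 128, 128).astype(np.float32)  # hypothetical case
94 |     volume = random_crop(volume, (14, 112, 112))
95 |     volume = random_flip(volume, mode='x', p=0.5)
96 |     volume = rotate(volume, angle=10)
97 |     print(volume.shape)  # (8, 14, 112, 112)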
--------------------------------------------------------------------------------
/main/metrics.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from sklearn import metrics
3 |
4 | def ACC(output, target):
5 | y_pred = output.argmax(1)
6 | y_true = target.flatten()
7 | y_pred = y_pred.flatten()
8 | return metrics.accuracy_score(y_true, y_pred)
9 |
10 | def Cohen_Kappa(output, target):
11 | y_pred = output.argmax(1)
12 | y_true = target.flatten()
13 | y_pred = y_pred.flatten()
14 | return metrics.cohen_kappa_score(y_true, y_pred)
15 |
16 | def F1_score(output, target):
17 | y_pred = output.argmax(1)
18 | y_true = target.flatten()
19 | y_pred = y_pred.flatten()
20 | return metrics.f1_score(y_true, y_pred, average='macro')
21 |
22 | def Recall(output, target):
23 | y_pred = output.argmax(1)
24 | y_true = target.flatten()
25 | y_pred = y_pred.flatten()
26 | return metrics.recall_score(y_true, y_pred, average='macro')
27 |
28 | def Precision(output, target):
29 | y_pred = output.argmax(1)
30 | y_true = target.flatten()
31 | y_pred = y_pred.flatten()
32 | return metrics.precision_score(y_true, y_pred, average='macro')
33 |
34 | def cls_report(output, target):
35 | y_pred = output.argmax(1)
36 | y_true = target.flatten()
37 | y_pred = y_pred.flatten()
38 | return metrics.classification_report(y_true, y_pred, digits=4)
39 |
40 |
41 | def confusion_matrix(output, target):
42 | y_pred = output.argmax(1)
43 | y_true = target.flatten()
44 | y_pred = y_pred.flatten()
45 | return metrics.confusion_matrix(y_true, y_pred)
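46 | 
47 | 
48 | # --- Usage sketch (illustrative addition, not part of the original file) ---
49 | # Evaluates hypothetical logits against labels with the metrics above; the
50 | # challenge ranking averages F1_score and Cohen_Kappa.
51 | if __name__ == "__main__":
52 |     logits = np.random.rand(10, 7)             # hypothetical model outputs (7 classes)
53 |     labels = np.random.randint(0, 7, size=10)  # hypothetical ground-truth labels
54 |     f1, kappa = F1_score(logits, labels), Cohen_Kappa(logits, labels)
55 |     print(f'F1={f1:.4f}, Kappa={kappa:.4f}, Average={(f1 + kappa) / 2:.4f}')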
--------------------------------------------------------------------------------
/main/models/__init__.py:
--------------------------------------------------------------------------------
1 | from .uniformer import uniformer_small_IL
2 |
--------------------------------------------------------------------------------
/main/models/__pycache__/DRNet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/DRNet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/DRNet_pvt.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/DRNet_pvt.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/DRNet_vits.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/DRNet_vits.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/Modules.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/Modules.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/Modules.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/Modules.cpython-38.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/SRNet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/SRNet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/__init__.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/__init__.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/__init__.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/__init__.cpython-38.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/botnet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/botnet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/botnet_IL.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/botnet_IL.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/build.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/build.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/convnext_IL.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/convnext_IL.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/densenet36.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/densenet36.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/densenet36_keepz.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/densenet36_keepz.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/densenet36v1.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/densenet36v1.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/densenet_IL.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/densenet_IL.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/densenet_com3b_split1b.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/densenet_com3b_split1b.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/efficientnet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/efficientnet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/efficientnet_IL.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/efficientnet_IL.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/mobilenet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/mobilenet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/resnet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/resnet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/resnet_IL.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/resnet_IL.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/resnet_mscs.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/resnet_mscs.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/resnext.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/resnext.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/siamese_resnet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/siamese_resnet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/squeezenet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/squeezenet.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/stic.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/stic.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/swinunetr.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/swinunetr.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/swinunetr.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/swinunetr.cpython-38.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/unet_3d.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/unet_3d.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/uniformer.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/uniformer.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/uniformer.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/uniformer.cpython-38.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/uniformer_IL.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/uniformer_IL.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/__pycache__/vgg.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/models/__pycache__/vgg.cpython-36.pyc
--------------------------------------------------------------------------------
/main/models/uniformer.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2015-present, Facebook, Inc.
2 | # All rights reserved.
3 | from collections import OrderedDict
4 |
5 |
6 | import torch
7 | import torch.nn as nn
8 | from functools import partial
9 | import torch.nn.functional as F
10 | import math
11 | from timm.models.vision_transformer import _cfg
12 | from timm.models.registry import register_model
13 | from timm.models.layers import trunc_normal_, DropPath, to_2tuple
14 |
15 |
16 | layer_scale = False
17 | init_value = 1e-6
18 |
19 |
20 | class Mlp(nn.Module):
21 | def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
22 | super().__init__()
23 | out_features = out_features or in_features
24 | hidden_features = hidden_features or in_features
25 | self.fc1 = nn.Linear(in_features, hidden_features)
26 | self.act = act_layer()
27 | self.fc2 = nn.Linear(hidden_features, out_features)
28 | self.drop = nn.Dropout(drop)
29 |
30 | def forward(self, x):
31 | x = self.fc1(x)
32 | x = self.act(x)
33 | x = self.drop(x)
34 | x = self.fc2(x)
35 | x = self.drop(x)
36 | return x
37 |
38 |
39 | class CMlp(nn.Module):
40 | def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
41 | super().__init__()
42 | out_features = out_features or in_features
43 | hidden_features = hidden_features or in_features
44 | self.fc1 = nn.Conv3d(in_features, hidden_features, 1)
45 | self.act = act_layer()
46 | self.fc2 = nn.Conv3d(hidden_features, out_features, 1)
47 | self.drop = nn.Dropout(drop)
48 |
49 | def forward(self, x):
50 | x = self.fc1(x)
51 | x = self.act(x)
52 | x = self.drop(x)
53 | x = self.fc2(x)
54 | x = self.drop(x)
55 | return x
56 |
57 |
58 | class Attention(nn.Module):
59 | def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
60 | super().__init__()
61 | self.num_heads = num_heads
62 | head_dim = dim // num_heads
63 | # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
64 | self.scale = qk_scale or head_dim ** -0.5
65 |
66 | self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
67 | self.attn_drop = nn.Dropout(attn_drop)
68 | self.proj = nn.Linear(dim, dim)
69 | self.proj_drop = nn.Dropout(proj_drop)
70 |
71 | def forward(self, x):
72 | B, N, C = x.shape
73 | qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
74 | q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
75 |
76 | attn = (q @ k.transpose(-2, -1)) * self.scale
77 | attn = attn.softmax(dim=-1)
78 | attn = self.attn_drop(attn)
79 |
80 | x = (attn @ v).transpose(1, 2).reshape(B, N, C)
81 | x = self.proj(x)
82 | x = self.proj_drop(x)
83 | return x
84 |
85 |
86 | class CBlock(nn.Module):
87 | def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
88 | drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
89 | super().__init__()
90 | self.pos_embed = nn.Conv3d(dim, dim, 3, padding=1, groups=dim)
91 | self.norm1 = nn.BatchNorm3d(dim)
92 | self.conv1 = nn.Conv3d(dim, dim, 1)
93 | self.conv2 = nn.Conv3d(dim, dim, 1)
94 | self.attn = nn.Conv3d(dim, dim, 5, padding=2, groups=dim)
95 | # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
96 | self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
97 | self.norm2 = nn.BatchNorm3d(dim)
98 | mlp_hidden_dim = int(dim * mlp_ratio)
99 | self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
100 |
101 | def forward(self, x):
102 | x = x + self.pos_embed(x)
103 | x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x)))))
104 | x = x + self.drop_path(self.mlp(self.norm2(x)))
105 | return x
106 |
107 |
108 | class SABlock(nn.Module):
109 | def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
110 | drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
111 | super().__init__()
112 | self.pos_embed = nn.Conv3d(dim, dim, 3, padding=1, groups=dim)
113 | self.norm1 = norm_layer(dim)
114 | self.attn = Attention(
115 | dim,
116 | num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
117 | attn_drop=attn_drop, proj_drop=drop)
118 | # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
119 | self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
120 | self.norm2 = norm_layer(dim)
121 | mlp_hidden_dim = int(dim * mlp_ratio)
122 | self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
123 | global layer_scale
124 | self.ls = layer_scale
125 | if self.ls:
126 | global init_value
127 | print(f"Use layer_scale: {layer_scale}, init_values: {init_value}")
128 |             self.gamma_1 = nn.Parameter(init_value * torch.ones(dim), requires_grad=True)
129 |             self.gamma_2 = nn.Parameter(init_value * torch.ones(dim), requires_grad=True)
130 |
131 | def forward(self, x):
132 | x = x + self.pos_embed(x)
133 | B, C, D, H, W = x.shape
134 | x = x.flatten(2).transpose(1, 2)
135 | if self.ls:
136 | x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x)))
137 | x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
138 | else:
139 | x = x + self.drop_path(self.attn(self.norm1(x)))
140 | x = x + self.drop_path(self.mlp(self.norm2(x)))
141 |         x = x.transpose(1, 2).reshape(B, C, D, H, W)
142 | return x
143 |
144 |
145 | class head_embedding(nn.Module):
146 | def __init__(self, in_channels, out_channels, stride=2):
147 | super(head_embedding, self).__init__()
148 |
149 | self.proj = nn.Sequential(
150 | nn.Conv3d(in_channels, out_channels // 2, kernel_size=3, stride=stride, padding=1, bias=False),
151 | nn.BatchNorm3d(out_channels // 2),
152 | nn.GELU(),
153 | nn.Conv3d(out_channels // 2, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
154 | nn.BatchNorm3d(out_channels),
155 | )
156 |
157 | def forward(self, x):
158 | x = self.proj(x)
159 | return x
160 |
161 |
162 | class middle_embedding(nn.Module):
163 | def __init__(self, in_channels, out_channels, stride=2):
164 | super(middle_embedding, self).__init__()
165 |
166 | self.proj = nn.Sequential(
167 | nn.Conv3d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
168 | nn.BatchNorm3d(out_channels),
169 | )
170 |
171 | def forward(self, x):
172 | x = self.proj(x)
173 | return x
174 |
175 |
176 | class PatchEmbed(nn.Module):
177 | """ Image to Patch Embedding
178 | """
179 | def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768, stride=None):
180 | super().__init__()
181 | # img_size = to_2tuple(img_size)
182 | # patch_size = to_2tuple(patch_size)
183 | # num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
184 | # self.img_size = img_size
185 | # self.patch_size = patch_size
186 | # self.num_patches = num_patches
187 | if stride is None:
188 | stride = patch_size
189 |
190 |
191 | self.proj = nn.Conv3d(in_chans, embed_dim, kernel_size=patch_size, stride=stride)
192 | self.norm = nn.LayerNorm(embed_dim)
193 |
194 | def forward(self, x):
195 | B, C, D, H, W = x.shape
196 | # FIXME look at relaxing size constraints
197 | # assert H == self.img_size[0] and W == self.img_size[1], \
198 | # f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
199 | x = self.proj(x)
200 | B, C, D, H, W = x.shape
201 | x = x.flatten(2).transpose(1, 2)
202 | x = self.norm(x)
203 | x = x.reshape(B, D, H, W, -1).permute(0, 4, 1, 2, 3).contiguous()
204 | return x
205 |
206 |
207 | class UniFormer(nn.Module):
208 |     """ UniFormer (3D variant)
209 |     Adapted from a PyTorch impl of ViT: `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
210 |         https://arxiv.org/abs/2010.11929
211 | """
212 | def __init__(self, depth=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=1000, embed_dim=[64, 128, 320, 512],
213 | head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None,
214 | drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=None, conv_stem=False):
215 | """
216 | Args:
217 | depth (list): depth of each stage
218 | img_size (int, tuple): input image size
219 | in_chans (int): number of input channels
220 | num_classes (int): number of classes for classification head
221 | embed_dim (list): embedding dimension of each stage
222 | head_dim (int): head dimension
223 | mlp_ratio (int): ratio of mlp hidden dim to embedding dim
224 | qkv_bias (bool): enable bias for qkv if True
225 | qk_scale (float): override default qk scale of head_dim ** -0.5 if set
226 | representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
227 | drop_rate (float): dropout rate
228 | attn_drop_rate (float): attention dropout rate
229 | drop_path_rate (float): stochastic depth rate
230 | norm_layer (nn.Module): normalization layer
231 |             conv_stem (bool): whether to use an overlapped patch stem
232 | """
233 | super().__init__()
234 | self.num_classes = num_classes
235 | self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
236 | norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
237 | if conv_stem:
238 | self.patch_embed1 = head_embedding(in_channels=in_chans, out_channels=embed_dim[0])
239 | # self.patch_embed2 = middle_embedding(in_channels=embed_dim[0], out_channels=embed_dim[1])
240 | # self.patch_embed3 = middle_embedding(in_channels=embed_dim[1], out_channels=embed_dim[2])
241 | # self.patch_embed4 = middle_embedding(in_channels=embed_dim[2], out_channels=embed_dim[3])
242 |
243 | self.patch_embed2 = middle_embedding(in_channels=embed_dim[0], out_channels=embed_dim[1])
244 | self.patch_embed3 = middle_embedding(in_channels=embed_dim[1], out_channels=embed_dim[2], stride=(1, 2, 2))
245 | self.patch_embed4 = middle_embedding(in_channels=embed_dim[2], out_channels=embed_dim[3], stride=(1, 2, 2))
246 |
247 | else:
248 | self.patch_embed1 = PatchEmbed(
249 | img_size=img_size, patch_size=2, in_chans=in_chans, embed_dim=embed_dim[0])
250 | self.patch_embed2 = PatchEmbed(
251 | img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1])
252 | self.patch_embed3 = PatchEmbed(
253 | img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2], stride=(1, 2, 2))
254 | self.patch_embed4 = PatchEmbed(
255 | img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3], stride=(1, 2, 2))
256 |
257 | self.pos_drop = nn.Dropout(p=drop_rate)
258 | dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depth))] # stochastic depth decay rule
259 | num_heads = [dim // head_dim for dim in embed_dim]
260 | self.blocks1 = nn.ModuleList([
261 | CBlock(
262 | dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
263 | drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
264 | for i in range(depth[0])])
265 | self.blocks2 = nn.ModuleList([
266 | CBlock(
267 | dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
268 | drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+depth[0]], norm_layer=norm_layer)
269 | for i in range(depth[1])])
270 | self.blocks3 = nn.ModuleList([
271 | SABlock(
272 | dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
273 | drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+depth[0]+depth[1]], norm_layer=norm_layer)
274 | for i in range(depth[2])])
275 | self.blocks4 = nn.ModuleList([
276 | SABlock(
277 | dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
278 | drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+depth[0]+depth[1]+depth[2]], norm_layer=norm_layer)
279 | for i in range(depth[3])])
280 | self.norm = nn.BatchNorm3d(embed_dim[-1])
281 |
282 | # Representation layer
283 | if representation_size:
284 | self.num_features = representation_size
285 | self.pre_logits = nn.Sequential(OrderedDict([
286 |                 ('fc', nn.Linear(embed_dim[-1], representation_size)),
287 | ('act', nn.Tanh())
288 | ]))
289 | else:
290 | self.pre_logits = nn.Identity()
291 |
292 | # Classifier head
293 | self.head = nn.Linear(embed_dim[-1], num_classes) if num_classes > 0 else nn.Identity()
294 |
295 | self.apply(self._init_weights)
296 |
297 | def _init_weights(self, m):
298 | # if isinstance(m, nn.Linear):
299 | # trunc_normal_(m.weight, std=.02)
300 | # if isinstance(m, nn.Linear) and m.bias is not None:
301 | # nn.init.constant_(m.bias, 0)
302 | # if isinstance(m, nn.LayerNorm):
303 | # nn.init.constant_(m.bias, 0)
304 | # nn.init.constant_(m.weight, 1.0)
305 | if isinstance(m, nn.Conv3d):
306 | nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
307 | if m.bias is not None:
308 | nn.init.constant_(m.bias, 0)
309 |
310 | @torch.jit.ignore
311 | def no_weight_decay(self):
312 | return {'pos_embed', 'cls_token'}
313 |
314 | def get_classifier(self):
315 | return self.head
316 |
317 | def reset_classifier(self, num_classes, global_pool=''):
318 | self.num_classes = num_classes
319 |         self.head = nn.Linear(self.embed_dim[-1], num_classes) if num_classes > 0 else nn.Identity()
320 |
321 | def forward_features(self, x):
322 | x = self.patch_embed1(x)
323 | x = self.pos_drop(x)
324 | for blk in self.blocks1:
325 | x = blk(x)
326 | x = self.patch_embed2(x)
327 | for blk in self.blocks2:
328 | x = blk(x)
329 | x = self.patch_embed3(x)
330 | for blk in self.blocks3:
331 | x = blk(x)
332 | x = self.patch_embed4(x)
333 | for blk in self.blocks4:
334 | x = blk(x)
335 | x = self.norm(x)
336 | x = self.pre_logits(x)
337 | return x
338 |
339 | def forward(self, x):
340 | x = self.forward_features(x)
341 | x = x.flatten(2).mean(-1)
342 | x = self.head(x)
343 | return x
344 |
345 | def uniformer_small(pretrained=True, **kwargs):
346 | model = UniFormer(
347 | depth=[3, 4, 8, 3],
348 | embed_dim=[64, 128, 320, 512], head_dim=64, mlp_ratio=4, qkv_bias=True,
349 | norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs)
350 | model.default_cfg = _cfg()
351 | return model
352 |
353 | # def uniformer_small_plus(pretrained=True, **kwargs):
354 | # model = UniFormer(
355 | # depth=[3, 5, 9, 3], conv_stem=True,
356 | # embed_dim=[64, 128, 320, 512], head_dim=32, mlp_ratio=4, qkv_bias=True,
357 | # norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs)
358 | # model.default_cfg = _cfg()
359 | # return model
360 |
361 | # def uniformer_small_plus_dim64(pretrained=True, **kwargs):
362 | # model = UniFormer(
363 | # depth=[3, 5, 9, 3], conv_stem=True,
364 | # embed_dim=[64, 128, 320, 512], head_dim=64, mlp_ratio=4, qkv_bias=True,
365 | # norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs)
366 | # model.default_cfg = _cfg()
367 | # return model
368 |
369 | # def uniformer_base(pretrained=True, **kwargs):
370 | # model = UniFormer(
371 | # depth=[5, 8, 20, 7],
372 | # embed_dim=[64, 128, 320, 512], head_dim=64, mlp_ratio=4, qkv_bias=True,
373 | # norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs)
374 | # model.default_cfg = _cfg()
375 | # return model
376 |
377 | # def uniformer_base_ls(pretrained=True, **kwargs):
378 | # global layer_scale
379 | # layer_scale = True
380 | # model = UniFormer(
381 | # depth=[5, 8, 20, 7],
382 | # embed_dim=[64, 128, 320, 512], head_dim=64, mlp_ratio=4, qkv_bias=True,
383 | # norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs)
384 | # model.default_cfg = _cfg()
385 | # return model
386 |
387 | @register_model
388 | def uniformer_small_IL(num_classes=2,
389 | num_phase=8,
390 | pretrained=None,
391 | pretrained_cfg=None,
392 |                        **kwargs):
393 | '''
394 |     Concatenate multi-phase images along the channel axis (image-level fusion).
395 | '''
396 |     model = uniformer_small(in_chans=num_phase, num_classes=num_classes, **kwargs)
397 | return model
--------------------------------------------------------------------------------
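A minimal usage sketch for the model above (not a file from the repository). It assumes torch and timm are installed and that uniformer.py is importable as models.uniformer when running from main/; the 7 classes and the 8-phase 16x128x128 input shape come from the code above and the args.yaml dumped below.

    import torch
    from models.uniformer import uniformer_small_IL  # import path assumed

    # Multi-phase MRI enters as channels: in_chans = num_phase = 8,
    # one 16x128x128 volume per phase (img_size in args.yaml).
    model = uniformer_small_IL(num_classes=7, num_phase=8)
    model.eval()

    x = torch.randn(1, 8, 16, 128, 128)  # (batch, phases, D, H, W)
    with torch.no_grad():
        logits = model(x)                # shape (1, 7): one logit per lesion class
    print(logits.shape)

--------------------------------------------------------------------------------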
/main/output/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/output/.DS_Store
--------------------------------------------------------------------------------
/main/output/20230411-192839-uniformer_small_IL/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LMMMEng/LLD-MMRI2023/e6809b0b3c95a33c0f979a59fe19916c0ff34e67/main/output/20230411-192839-uniformer_small_IL/.DS_Store
--------------------------------------------------------------------------------
/main/output/20230411-192839-uniformer_small_IL/LLDBaseline.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "image_id": "MR210196",
4 | "prediction": 0,
5 | "score": [
6 | 0.9999123811721802,
7 | 1.9092331058345735e-06,
8 | 6.859288987470791e-06,
9 | 5.8439894928596914e-05,
10 | 1.4843671124253888e-05,
11 | 4.369294856587658e-06,
12 | 1.182133928523399e-06
13 | ]
14 | },
15 | {
16 | "image_id": "MR184623",
17 | "prediction": 0,
18 | "score": [
19 | 0.9998781681060791,
20 | 5.951425805506005e-07,
21 | 3.935786025976995e-06,
22 | 1.925225842569489e-05,
23 | 9.623807272873819e-05,
24 | 1.6968820091278758e-06,
25 | 1.497223820479121e-07
26 | ]
27 | },
28 | {
29 | "image_id": "MR184663",
30 | "prediction": 0,
31 | "score": [
32 | 0.999405026435852,
33 | 1.7566314909345238e-06,
34 | 2.1728761566919275e-05,
35 | 0.0005295962328091264,
36 | 2.4908798877731897e-05,
37 | 1.2662169865507167e-05,
38 | 4.313331828598166e-06
39 | ]
40 | },
41 | {
42 | "image_id": "MR173412",
43 | "prediction": 0,
44 | "score": [
45 | 0.9689616560935974,
46 | 0.001665768795646727,
47 | 0.001844842336140573,
48 | 0.023772019892930984,
49 | 0.001691360492259264,
50 | 0.0007855384610593319,
51 | 0.001278734765946865
52 | ]
53 | },
54 | {
55 | "image_id": "MR201172",
56 | "prediction": 2,
57 | "score": [
58 | 0.07726386934518814,
59 | 0.021447185426950455,
60 | 0.5336843132972717,
61 | 0.2650243043899536,
62 | 0.0003756263176910579,
63 | 0.09334826469421387,
64 | 0.008856389671564102
65 | ]
66 | },
67 | {
68 | "image_id": "MR179548",
69 | "prediction": 0,
70 | "score": [
71 | 0.9999402761459351,
72 | 2.649641828611493e-06,
73 | 4.178182734904112e-06,
74 | 6.613364803342847e-06,
75 | 3.7991430872352794e-05,
76 | 6.584786660823738e-06,
77 | 1.624148353585042e-06
78 | ]
79 | },
80 | {
81 | "image_id": "MR174870",
82 | "prediction": 0,
83 | "score": [
84 | 0.999830961227417,
85 | 1.7789149069358245e-06,
86 | 3.2041592930909246e-05,
87 | 2.089608460664749e-05,
88 | 0.00010474680311745033,
89 | 6.0329434745654e-06,
90 | 3.4736185625661165e-06
91 | ]
92 | },
93 | {
94 | "image_id": "MR184885",
95 | "prediction": 0,
96 | "score": [
97 | 0.9672242403030396,
98 | 0.029198721051216125,
99 | 0.0003620933275669813,
100 | 0.0003231496084481478,
101 | 0.00015290497685782611,
102 | 0.00012808885367121547,
103 | 0.002610900206491351
104 | ]
105 | },
106 | {
107 | "image_id": "MR178737",
108 | "prediction": 0,
109 | "score": [
110 | 0.9998828172683716,
111 | 1.9239303128415486e-06,
112 | 4.296985935070552e-05,
113 | 1.331641851720633e-05,
114 | 4.826598160434514e-05,
115 | 9.731923455547076e-06,
116 | 9.157282079286233e-07
117 | ]
118 | },
119 | {
120 | "image_id": "MR200076",
121 | "prediction": 0,
122 | "score": [
123 | 0.9970410466194153,
124 | 5.135450555826537e-05,
125 | 0.0005271048285067081,
126 | 8.648796210763976e-05,
127 | 0.0016294168308377266,
128 | 0.0006610079435631633,
129 | 3.540319312378415e-06
130 | ]
131 | },
132 | {
133 | "image_id": "MR195950",
134 | "prediction": 0,
135 | "score": [
136 | 0.9991511106491089,
137 | 4.571132376440801e-05,
138 | 8.566717042413075e-06,
139 | 0.00010106972331413999,
140 | 0.0006676785415038466,
141 | 1.4735003787791356e-05,
142 | 1.107816569856368e-05
143 | ]
144 | },
145 | {
146 | "image_id": "MR174862",
147 | "prediction": 6,
148 | "score": [
149 | 0.012707187794148922,
150 | 6.779038085369393e-05,
151 | 0.00013242883142083883,
152 | 0.002975389827042818,
153 | 0.019151002168655396,
154 | 0.005606647115200758,
155 | 0.9593595266342163
156 | ]
157 | },
158 | {
159 | "image_id": "MR175358",
160 | "prediction": 0,
161 | "score": [
162 | 0.9994264841079712,
163 | 5.7871625358529855e-06,
164 | 1.4587215446226764e-05,
165 | 3.0144735774229048e-06,
166 | 0.0005281068733893335,
167 | 2.1737811039201915e-05,
168 | 3.559586616574961e-07
169 | ]
170 | },
171 | {
172 | "image_id": "MR146022",
173 | "prediction": 3,
174 | "score": [
175 | 2.8070780899724923e-07,
176 | 9.463022252020892e-06,
177 | 2.41007383010583e-05,
178 | 0.9997580647468567,
179 | 1.1466073601695825e-06,
180 | 0.00019137654453516006,
181 | 1.5588069800287485e-05
182 | ]
183 | },
184 | {
185 | "image_id": "MR94389",
186 | "prediction": 6,
187 | "score": [
188 | 1.1334625924064312e-05,
189 | 0.0017449429724365473,
190 | 3.7513847928494215e-05,
191 | 0.003589797765016556,
192 | 5.93178192502819e-05,
193 | 5.2897714340360835e-05,
194 | 0.9945042133331299
195 | ]
196 | },
197 | {
198 | "image_id": "MR27202",
199 | "prediction": 5,
200 | "score": [
201 | 3.378661403985461e-07,
202 | 1.5602692826632847e-07,
203 | 1.636992408293736e-07,
204 | 4.483327487037059e-08,
205 | 1.4825831158304936e-06,
206 | 0.9999939203262329,
207 | 4.031746811961057e-06
208 | ]
209 | },
210 | {
211 | "image_id": "MR88293",
212 | "prediction": 5,
213 | "score": [
214 | 3.231729124308913e-06,
215 | 2.6070907210851146e-07,
216 | 5.54923758500081e-07,
217 | 5.549491675083118e-07,
218 | 2.0617210338969016e-06,
219 | 0.9999837875366211,
220 | 9.512213182460982e-06
221 | ]
222 | },
223 | {
224 | "image_id": "MR179970",
225 | "prediction": 2,
226 | "score": [
227 | 0.0003003243764396757,
228 | 0.27661076188087463,
229 | 0.5027822852134705,
230 | 0.034970760345458984,
231 | 0.002457620110362768,
232 | 0.0011427932186052203,
233 | 0.18173536658287048
234 | ]
235 | },
236 | {
237 | "image_id": "MR32504",
238 | "prediction": 6,
239 | "score": [
240 | 0.0010602418333292007,
241 | 0.08846916258335114,
242 | 3.20233084494248e-05,
243 | 9.290202433476225e-05,
244 | 3.2358020689571276e-05,
245 | 0.0039085885509848595,
246 | 0.9064047336578369
247 | ]
248 | },
249 | {
250 | "image_id": "MR174815",
251 | "prediction": 1,
252 | "score": [
253 | 4.827109478355851e-06,
254 | 0.9995249509811401,
255 | 0.0001981587993213907,
256 | 5.5596872698515654e-05,
257 | 2.430432323308196e-05,
258 | 1.3813589248456992e-05,
259 | 0.0001782828039722517
260 | ]
261 | },
262 | {
263 | "image_id": "MR222216",
264 | "prediction": 1,
265 | "score": [
266 | 6.478813156718388e-05,
267 | 0.8597501516342163,
268 | 3.732787445187569e-05,
269 | 6.097802543081343e-05,
270 | 9.364875040773768e-06,
271 | 0.001587414531968534,
272 | 0.1384899914264679
273 | ]
274 | },
275 | {
276 | "image_id": "MR199345",
277 | "prediction": 1,
278 | "score": [
279 | 1.6016016161302105e-05,
280 | 0.9998906850814819,
281 | 1.378940578433685e-05,
282 | 3.5954080885858275e-06,
283 | 7.353805244747491e-07,
284 | 6.990780821070075e-05,
285 | 5.371589850255987e-06
286 | ]
287 | },
288 | {
289 | "image_id": "MR69046",
290 | "prediction": 2,
291 | "score": [
292 | 0.01838279329240322,
293 | 0.02258209139108658,
294 | 0.9508678913116455,
295 | 7.888083928264678e-05,
296 | 0.00020653315004892647,
297 | 0.0055504292249679565,
298 | 0.0023313011042773724
299 | ]
300 | },
301 | {
302 | "image_id": "MR96745",
303 | "prediction": 5,
304 | "score": [
305 | 0.002167792059481144,
306 | 1.8709943105932325e-05,
307 | 0.0003770168696064502,
308 | 0.004100983031094074,
309 | 3.8593050703639165e-05,
310 | 0.9824031591415405,
311 | 0.010893724858760834
312 | ]
313 | },
314 | {
315 | "image_id": "MR104280",
316 | "prediction": 2,
317 | "score": [
318 | 3.0933671951061115e-05,
319 | 5.992686783429235e-05,
320 | 0.9996746778488159,
321 | 4.1707121454237495e-06,
322 | 5.8174915466224775e-05,
323 | 0.00016392229008488357,
324 | 8.228608749050181e-06
325 | ]
326 | },
327 | {
328 | "image_id": "MR137627",
329 | "prediction": 2,
330 | "score": [
331 | 4.4151324800623115e-06,
332 | 4.7874673327896744e-05,
333 | 0.9997523427009583,
334 | 2.848744588845875e-05,
335 | 9.541348845232278e-05,
336 | 1.635731132410001e-05,
337 | 5.507089008460753e-05
338 | ]
339 | },
340 | {
341 | "image_id": "MR192701",
342 | "prediction": 2,
343 | "score": [
344 | 7.6566857387661e-06,
345 | 0.0028177364729344845,
346 | 0.9945186972618103,
347 | 4.486642865231261e-06,
348 | 0.00012450774374883622,
349 | 0.0003750179021153599,
350 | 0.0021518899593502283
351 | ]
352 | },
353 | {
354 | "image_id": "MR145114",
355 | "prediction": 2,
356 | "score": [
357 | 1.1722037925210316e-05,
358 | 0.49746236205101013,
359 | 0.5010170340538025,
360 | 0.0009472190868109465,
361 | 0.0004346987116150558,
362 | 0.00010350607772124931,
363 | 2.3496619178331457e-05
364 | ]
365 | },
366 | {
367 | "image_id": "MR106372",
368 | "prediction": 2,
369 | "score": [
370 | 0.00010835815919563174,
371 | 0.07179166376590729,
372 | 0.8932217359542847,
373 | 0.03365975245833397,
374 | 0.0009502135217189789,
375 | 0.00021402948186732829,
376 | 5.4306365200318396e-05
377 | ]
378 | },
379 | {
380 | "image_id": "MR162257",
381 | "prediction": 6,
382 | "score": [
383 | 3.3725995308486745e-05,
384 | 0.00017548595496919006,
385 | 8.136157703120261e-05,
386 | 0.00016678081010468304,
387 | 1.3797792917102925e-06,
388 | 7.2666080086492e-05,
389 | 0.9994685053825378
390 | ]
391 | },
392 | {
393 | "image_id": "MR222125",
394 | "prediction": 6,
395 | "score": [
396 | 0.0018386102747172117,
397 | 0.1417044699192047,
398 | 0.005420852452516556,
399 | 0.018002362921833992,
400 | 0.08081159740686417,
401 | 0.012366403825581074,
402 | 0.7398557066917419
403 | ]
404 | },
405 | {
406 | "image_id": "MR127280",
407 | "prediction": 3,
408 | "score": [
409 | 0.00015323830302804708,
410 | 1.0388351256551687e-05,
411 | 1.9814569895970635e-06,
412 | 0.9995266199111938,
413 | 4.7693629312561825e-05,
414 | 3.897369606420398e-05,
415 | 0.00022114407329354435
416 | ]
417 | },
418 | {
419 | "image_id": "MR210193",
420 | "prediction": 3,
421 | "score": [
422 | 0.00031007995130494237,
423 | 1.469708513468504e-05,
424 | 0.0006736861541867256,
425 | 0.5126804113388062,
426 | 8.9031076640822e-05,
427 | 0.0001850378466770053,
428 | 0.48604708909988403
429 | ]
430 | },
431 | {
432 | "image_id": "MR236955",
433 | "prediction": 1,
434 | "score": [
435 | 2.951784699689597e-05,
436 | 0.99369215965271,
437 | 0.00014942113193683326,
438 | 0.004456141963601112,
439 | 3.113151615252718e-05,
440 | 8.761577191762626e-05,
441 | 0.0015539645683020353
442 | ]
443 | },
444 | {
445 | "image_id": "MR207755",
446 | "prediction": 3,
447 | "score": [
448 | 0.0008690250688232481,
449 | 8.185242768377066e-05,
450 | 6.685457628918812e-05,
451 | 0.9860342144966125,
452 | 8.58384623825259e-07,
453 | 6.661139923380688e-05,
454 | 0.012880592606961727
455 | ]
456 | },
457 | {
458 | "image_id": "MR236008",
459 | "prediction": 3,
460 | "score": [
461 | 0.06515278667211533,
462 | 0.006646816153079271,
463 | 0.000852881814353168,
464 | 0.9143998026847839,
465 | 0.012095104902982712,
466 | 0.0007814770215190947,
467 | 7.117698987713084e-05
468 | ]
469 | },
470 | {
471 | "image_id": "MR229934",
472 | "prediction": 5,
473 | "score": [
474 | 6.248131739994278e-06,
475 | 8.735269148019142e-07,
476 | 3.3262480769735703e-07,
477 | 6.111906145633839e-07,
478 | 1.2732652976410463e-05,
479 | 0.9999614953994751,
480 | 1.770961534930393e-05
481 | ]
482 | },
483 | {
484 | "image_id": "MR193842",
485 | "prediction": 6,
486 | "score": [
487 | 4.131295645493083e-05,
488 | 2.3967246306710877e-05,
489 | 1.996451464947313e-05,
490 | 7.110075966920704e-05,
491 | 3.0273565698735183e-06,
492 | 1.0212067536485847e-05,
493 | 0.9998303651809692
494 | ]
495 | },
496 | {
497 | "image_id": "MR201013",
498 | "prediction": 4,
499 | "score": [
500 | 0.00014583978918381035,
501 | 1.5438766922670766e-06,
502 | 3.864719815283024e-07,
503 | 0.0002908123133238405,
504 | 0.9995558857917786,
505 | 5.4275005823001266e-06,
506 | 3.607524945437035e-08
507 | ]
508 | },
509 | {
510 | "image_id": "MR125176",
511 | "prediction": 4,
512 | "score": [
513 | 8.578588676755317e-06,
514 | 5.4780033678980544e-05,
515 | 3.876721450524201e-07,
516 | 9.270139344152994e-06,
517 | 0.9997697472572327,
518 | 0.00015126579091884196,
519 | 5.947810677753296e-06
520 | ]
521 | },
522 | {
523 | "image_id": "MR136344",
524 | "prediction": 4,
525 | "score": [
526 | 1.3626722648041323e-06,
527 | 1.46982836213283e-06,
528 | 1.6123319710459327e-06,
529 | 0.00011776628525694832,
530 | 0.9998651742935181,
531 | 1.244457143911859e-05,
532 | 1.2246891856193542e-07
533 | ]
534 | },
535 | {
536 | "image_id": "MR142442",
537 | "prediction": 4,
538 | "score": [
539 | 1.1597582670219708e-05,
540 | 0.0270093884319067,
541 | 0.0004230896884109825,
542 | 0.0017826099647209048,
543 | 0.9686463475227356,
544 | 0.0019800374284386635,
545 | 0.0001470050192438066
546 | ]
547 | },
548 | {
549 | "image_id": "MR140376",
550 | "prediction": 4,
551 | "score": [
552 | 1.03646709703753e-06,
553 | 4.854264716414036e-06,
554 | 3.1400861644215183e-07,
555 | 2.4801802283036523e-05,
556 | 0.9994537234306335,
557 | 0.0005139766726642847,
558 | 1.1986797971985652e-06
559 | ]
560 | },
561 | {
562 | "image_id": "MR159752",
563 | "prediction": 3,
564 | "score": [
565 | 0.026215935125947,
566 | 0.08944888412952423,
567 | 0.0020225613843649626,
568 | 0.6987693309783936,
569 | 0.18233439326286316,
570 | 0.0011593849631026387,
571 | 4.950146467308514e-05
572 | ]
573 | },
574 | {
575 | "image_id": "MR140232",
576 | "prediction": 4,
577 | "score": [
578 | 0.0005495784571394324,
579 | 1.1129982340207789e-05,
580 | 9.042206511367112e-05,
581 | 0.07327716797590256,
582 | 0.9216426610946655,
583 | 8.947700553108007e-05,
584 | 0.004339531064033508
585 | ]
586 | },
587 | {
588 | "image_id": "MR133387",
589 | "prediction": 4,
590 | "score": [
591 | 0.013784732669591904,
592 | 0.0001934156025527045,
593 | 0.0019502852810546756,
594 | 0.04929909110069275,
595 | 0.9345818161964417,
596 | 0.00016344145114999264,
597 | 2.7256040993961506e-05
598 | ]
599 | },
600 | {
601 | "image_id": "MR110203",
602 | "prediction": 5,
603 | "score": [
604 | 2.7524642973730806e-06,
605 | 2.1440333512146026e-05,
606 | 0.00011304454528726637,
607 | 1.743612983773346e-06,
608 | 5.8394423831487074e-05,
609 | 0.9997367262840271,
610 | 6.586426752619445e-05
611 | ]
612 | },
613 | {
614 | "image_id": "MR206734",
615 | "prediction": 2,
616 | "score": [
617 | 9.907887942972593e-06,
618 | 0.4021502733230591,
619 | 0.5819864273071289,
620 | 0.0006669783033430576,
621 | 0.013844279572367668,
622 | 0.0003890585503540933,
623 | 0.0009530240786261857
624 | ]
625 | },
626 | {
627 | "image_id": "MR226356",
628 | "prediction": 0,
629 | "score": [
630 | 0.9984581470489502,
631 | 1.1871312381117605e-05,
632 | 6.405714520951733e-05,
633 | 0.0005469739553518593,
634 | 3.788374669966288e-05,
635 | 0.0008399641374126077,
636 | 4.1217001125914976e-05
637 | ]
638 | },
639 | {
640 | "image_id": "MR198925",
641 | "prediction": 5,
642 | "score": [
643 | 5.533789135370171e-06,
644 | 1.4578188256564317e-07,
645 | 8.365284287492614e-08,
646 | 9.728045569090682e-08,
647 | 1.5120576790650375e-06,
648 | 0.9999908208847046,
649 | 1.844750272539386e-06
650 | ]
651 | },
652 | {
653 | "image_id": "MR-470019",
654 | "prediction": 5,
655 | "score": [
656 | 1.2511881095633726e-06,
657 | 1.3650027312905877e-06,
658 | 1.0122674893864314e-06,
659 | 1.5836398858937173e-07,
660 | 2.4827627953527553e-07,
661 | 0.9999860525131226,
662 | 1.0026735253632069e-05
663 | ]
664 | },
665 | {
666 | "image_id": "MR-451771",
667 | "prediction": 5,
668 | "score": [
669 | 0.16434374451637268,
670 | 3.260704761487432e-05,
671 | 5.960457565379329e-05,
672 | 0.00012993879499845207,
673 | 0.00018513503891881555,
674 | 0.8351498246192932,
675 | 9.909932850860059e-05
676 | ]
677 | },
678 | {
679 | "image_id": "MR209281",
680 | "prediction": 5,
681 | "score": [
682 | 3.253017348470166e-05,
683 | 3.1320498237619177e-05,
684 | 0.0036026337184011936,
685 | 7.197730883490294e-05,
686 | 0.0009360499680042267,
687 | 0.9934592247009277,
688 | 0.001866319915279746
689 | ]
690 | },
691 | {
692 | "image_id": "MR35505",
693 | "prediction": 6,
694 | "score": [
695 | 5.1708218961721286e-05,
696 | 0.01283283717930317,
697 | 0.0010743547463789582,
698 | 1.9713115761987865e-05,
699 | 3.0282881198218092e-05,
700 | 0.0018727314891293645,
701 | 0.9841183423995972
702 | ]
703 | },
704 | {
705 | "image_id": "MR61313",
706 | "prediction": 6,
707 | "score": [
708 | 3.5292487154947594e-05,
709 | 7.014941365923733e-05,
710 | 1.3566338566306513e-05,
711 | 0.00010390790703240782,
712 | 3.03730166706373e-06,
713 | 2.2766796973883174e-05,
714 | 0.9997512698173523
715 | ]
716 | },
717 | {
718 | "image_id": "MR124193",
719 | "prediction": 0,
720 | "score": [
721 | 0.9855139255523682,
722 | 5.9616818361973856e-06,
723 | 0.003929528407752514,
724 | 0.010411543771624565,
725 | 5.740649430663325e-05,
726 | 5.497538222698495e-05,
727 | 2.6575942683848552e-05
728 | ]
729 | },
730 | {
731 | "image_id": "MR120289",
732 | "prediction": 6,
733 | "score": [
734 | 4.446612729225308e-05,
735 | 0.001192878931760788,
736 | 2.1152582121430896e-05,
737 | 8.564207382733002e-05,
738 | 3.3028807138180127e-06,
739 | 0.003780062310397625,
740 | 0.994872510433197
741 | ]
742 | },
743 | {
744 | "image_id": "MR13762",
745 | "prediction": 6,
746 | "score": [
747 | 4.119868390262127e-05,
748 | 0.00011004068073816597,
749 | 3.2030075090005994e-05,
750 | 2.9480319426511414e-05,
751 | 1.1598235687415581e-05,
752 | 2.56992152571911e-05,
753 | 0.9997499585151672
754 | ]
755 | },
756 | {
757 | "image_id": "MR132628",
758 | "prediction": 1,
759 | "score": [
760 | 7.106801785994321e-05,
761 | 0.7900978326797485,
762 | 0.0006288065342232585,
763 | 8.619938307674602e-05,
764 | 0.00024692239821888506,
765 | 0.0005554858944378793,
766 | 0.20831358432769775
767 | ]
768 | },
769 | {
770 | "image_id": "MR34162",
771 | "prediction": 6,
772 | "score": [
773 | 5.777229307568632e-05,
774 | 6.393877265509218e-05,
775 | 1.7940683392225765e-05,
776 | 6.016885890858248e-05,
777 | 3.291541815997334e-06,
778 | 8.433320181211457e-05,
779 | 0.9997125267982483
780 | ]
781 | },
782 | {
783 | "image_id": "MR-400851",
784 | "prediction": 0,
785 | "score": [
786 | 0.7328611016273499,
787 | 0.011399470269680023,
788 | 0.001677955500781536,
789 | 0.10713846236467361,
790 | 2.1951322196400724e-05,
791 | 0.0018301407108083367,
792 | 0.14507094025611877
793 | ]
794 | },
795 | {
796 | "image_id": "MR58043",
797 | "prediction": 6,
798 | "score": [
799 | 0.00028987706173211336,
800 | 0.0010867657838389277,
801 | 4.615735815605149e-05,
802 | 1.4990388990554493e-05,
803 | 2.387955282756593e-05,
804 | 5.196019628783688e-05,
805 | 0.9984862804412842
806 | ]
807 | },
808 | {
809 | "image_id": "MR69585",
810 | "prediction": 3,
811 | "score": [
812 | 1.7333225059701363e-06,
813 | 6.489511724794284e-05,
814 | 3.7478630474652164e-06,
815 | 0.9999237060546875,
816 | 1.8296153712071828e-06,
817 | 3.049465249205241e-06,
818 | 1.0587848464638228e-06
819 | ]
820 | },
821 | {
822 | "image_id": "MR45055",
823 | "prediction": 3,
824 | "score": [
825 | 0.001374542247503996,
826 | 0.0010765749029815197,
827 | 0.0002000959066208452,
828 | 0.591166079044342,
829 | 5.681277343683178e-06,
830 | 0.0013284357264637947,
831 | 0.404848575592041
832 | ]
833 | },
834 | {
835 | "image_id": "MR88979",
836 | "prediction": 6,
837 | "score": [
838 | 3.3334501495119184e-05,
839 | 0.0001445254310965538,
840 | 5.417626380221918e-05,
841 | 2.2700913177686743e-05,
842 | 4.944926331518218e-06,
843 | 4.13079142163042e-05,
844 | 0.9996989965438843
845 | ]
846 | },
847 | {
848 | "image_id": "MR3573",
849 | "prediction": 6,
850 | "score": [
851 | 5.332764339982532e-05,
852 | 2.1871328499400988e-05,
853 | 2.0212917661410756e-05,
854 | 0.0008220816380344331,
855 | 7.224160071928054e-05,
856 | 1.7581214706297033e-05,
857 | 0.9989927411079407
858 | ]
859 | },
860 | {
861 | "image_id": "MR164939",
862 | "prediction": 6,
863 | "score": [
864 | 6.295596540439874e-05,
865 | 8.448898006463423e-05,
866 | 0.001377580570988357,
867 | 3.8113666960271075e-05,
868 | 2.0795130694750696e-05,
869 | 0.07627039402723312,
870 | 0.9221456050872803
871 | ]
872 | },
873 | {
874 | "image_id": "MR80569",
875 | "prediction": 6,
876 | "score": [
877 | 7.614294008817524e-05,
878 | 7.344261393882334e-05,
879 | 7.941493822727352e-05,
880 | 3.180249041179195e-05,
881 | 6.767706054233713e-06,
882 | 0.00011424611875554547,
883 | 0.9996181726455688
884 | ]
885 | },
886 | {
887 | "image_id": "MR85655",
888 | "prediction": 5,
889 | "score": [
890 | 5.431936074273835e-07,
891 | 8.399008777359995e-08,
892 | 1.5954293886011328e-08,
893 | 4.6410193021984014e-08,
894 | 2.472641199346981e-06,
895 | 0.9999954700469971,
896 | 1.4148763511911966e-06
897 | ]
898 | },
899 | {
900 | "image_id": "MR82133",
901 | "prediction": 6,
902 | "score": [
903 | 0.0002127664047293365,
904 | 0.002840819302946329,
905 | 4.338582220952958e-05,
906 | 1.8508069842937402e-05,
907 | 2.255223080283031e-05,
908 | 0.0001059734495356679,
909 | 0.9967560172080994
910 | ]
911 | },
912 | {
913 | "image_id": "MR37217",
914 | "prediction": 6,
915 | "score": [
916 | 4.4920743675902486e-05,
917 | 0.00016333443636540323,
918 | 0.00010658857354428619,
919 | 3.0368179068318568e-05,
920 | 3.653467501862906e-06,
921 | 7.08025399944745e-05,
922 | 0.9995802044868469
923 | ]
924 | },
925 | {
926 | "image_id": "MR109260",
927 | "prediction": 5,
928 | "score": [
929 | 1.3436481367534725e-06,
930 | 3.7984005984981195e-07,
931 | 2.6881869175099382e-08,
932 | 3.7046099521376163e-08,
933 | 1.0812238997459644e-06,
934 | 0.999996542930603,
935 | 5.924934498580114e-07
936 | ]
937 | },
938 | {
939 | "image_id": "MR189873",
940 | "prediction": 6,
941 | "score": [
942 | 3.926586941815913e-05,
943 | 0.00010912586731137708,
944 | 4.216270463075489e-05,
945 | 1.1564287888177205e-05,
946 | 6.134991053841077e-06,
947 | 2.5209201339748688e-05,
948 | 0.9997665286064148
949 | ]
950 | },
951 | {
952 | "image_id": "MR49502",
953 | "prediction": 6,
954 | "score": [
955 | 0.00011093110515503213,
956 | 0.0007623932906426489,
957 | 0.0010758120333775878,
958 | 5.457567749544978e-05,
959 | 1.2542592230602168e-05,
960 | 0.0005738995969295502,
961 | 0.9974098801612854
962 | ]
963 | },
964 | {
965 | "image_id": "MR78507",
966 | "prediction": 6,
967 | "score": [
968 | 4.743777390103787e-05,
969 | 2.791716360661667e-05,
970 | 1.953446371771861e-05,
971 | 1.1245409950788599e-05,
972 | 5.850165507581551e-06,
973 | 0.00019108527339994907,
974 | 0.9996969699859619
975 | ]
976 | },
977 | {
978 | "image_id": "MR172409",
979 | "prediction": 6,
980 | "score": [
981 | 3.0886385502526537e-05,
982 | 0.00010037889296654612,
983 | 0.0001436440070392564,
984 | 1.4498013115371577e-05,
985 | 6.238259629753884e-06,
986 | 0.00020804087398573756,
987 | 0.9994962215423584
988 | ]
989 | },
990 | {
991 | "image_id": "MR199597",
992 | "prediction": 6,
993 | "score": [
994 | 6.550026591867208e-05,
995 | 0.0008181378361769021,
996 | 0.00024425910669378936,
997 | 3.122746784356423e-05,
998 | 5.905861144128721e-06,
999 | 7.215645746327937e-05,
1000 | 0.9987627267837524
1001 | ]
1002 | },
1003 | {
1004 | "image_id": "MR131733",
1005 | "prediction": 3,
1006 | "score": [
1007 | 6.83623966324376e-08,
1008 | 1.4393053788808174e-05,
1009 | 2.300144615219324e-06,
1010 | 0.9999797344207764,
1011 | 1.3062431207799818e-06,
1012 | 1.8698832491281792e-06,
1013 | 4.2398812638566596e-07
1014 | ]
1015 | }
1016 | ]
--------------------------------------------------------------------------------
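Each entry above pairs an image_id with a 7-way probability vector ("score"); in every entry shown, "prediction" matches the argmax of that vector. A minimal sanity-check sketch for a file in this format (not part of the repository; the path is assumed):

    import json

    with open("LLDBaseline.json") as f:
        preds = json.load(f)

    for entry in preds:
        scores = entry["score"]
        assert len(scores) == 7                            # 7 lesion classes
        assert entry["prediction"] == scores.index(max(scores))
        assert abs(sum(scores) - 1.0) < 1e-3               # softmax probabilities
    print(f"{len(preds)} predictions look consistent")

--------------------------------------------------------------------------------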
/main/output/20230411-192839-uniformer_small_IL/args.yaml:
--------------------------------------------------------------------------------
1 | amp: false
2 | angle: 45
3 | apex_amp: false
4 | batch_size: 8
5 | bce_loss: false
6 | bce_target_thresh: null
7 | bn_eps: null
8 | bn_momentum: null
9 | bn_tf: false
10 | checkpoint_hist: 1
11 | clip_grad: null
12 | clip_mode: norm
13 | cooldown_epochs: 10
14 | crop_size:
15 | - 14
16 | - 112
17 | - 112
18 | data_dir: data/classification_dataset/images/
19 | decay_epochs: 100
20 | decay_rate: 0.1
21 | dist_bn: reduce
22 | drop: 0.0
23 | drop_block: null
24 | drop_path: 0.2
25 | epoch_repeats: 0.0
26 | epochs: 300
27 | eval_metric: f1
28 | experiment: ''
29 | flip_prob: 0.5
30 | gp: null
31 | img_size:
32 | - 16
33 | - 128
34 | - 128
35 | initial_checkpoint: ''
36 | interpolation: ''
37 | local_rank: 0
38 | log_interval: 25
39 | log_wandb: false
40 | lr: 0.0001
41 | lr_cycle_decay: 0.5
42 | lr_cycle_limit: 1
43 | lr_cycle_mul: 1.0
44 | lr_k_decay: 1.0
45 | lr_noise: null
46 | lr_noise_pct: 0.67
47 | lr_noise_std: 1.0
48 | min_lr: 1.0e-05
49 | model: uniformer_small_IL
50 | model_ema: false
51 | model_ema_decay: 0.9998
52 | model_ema_force_cpu: false
53 | momentum: 0.9
54 | native_amp: false
55 | no_ddp_bb: false
56 | no_resume_opt: false
57 | num_classes: 7
58 | opt: adamw
59 | opt_betas: null
60 | opt_eps: null
61 | output: output/
62 | patience_epochs: 10
63 | pin_mem: false
64 | pretrained: false
65 | rcprob: 0.25
66 | recovery_interval: 0
67 | report_metrics:
68 | - acc
69 | - f1
70 | - recall
71 | - precision
72 | - kappa
73 | reprob: 0.25
74 | resume: ''
75 | sched: cosine
76 | seed: 42
77 | smoothing: 0
78 | start_epoch: null
79 | sync_bn: false
80 | torchscript: false
81 | train_anno_file: data/classification_dataset/labels/train_fold1.txt
82 | train_transform_list:
83 | - random_crop
84 | - z_flip
85 | - x_flip
86 | - y_flip
87 | - rotation
88 | val_anno_file: data/classification_dataset/labels/val_fold1.txt
89 | val_transform_list:
90 | - center_crop
91 | validation_batch_size: null
92 | warmup_epochs: 5
93 | warmup_lr: 1.0e-06
94 | weight_decay: 0.05
95 | worker_seeding: all
96 | workers: 8
97 |
--------------------------------------------------------------------------------
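args.yaml above is the full argument dump for the 20230411-192839-uniformer_small_IL run. A minimal sketch for reloading it (assumes PyYAML is installed; the path is relative to the run directory):

    import yaml

    with open("args.yaml") as f:
        args = yaml.safe_load(f)

    # Core setup recorded above: uniformer_small_IL, AdamW, cosine schedule,
    # lr 1e-4, 300 epochs, checkpoints selected by f1.
    print(args["model"], args["opt"], args["sched"], args["lr"], args["epochs"], args["eval_metric"])
    print(args["img_size"], args["crop_size"])  # [16, 128, 128] inputs, [14, 112, 112] crops

--------------------------------------------------------------------------------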
/main/output/LLDBaseline.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "image_id": "MR210196",
4 | "prediction": 0,
5 | "score": [
6 | 0.9999123811721802,
7 | 1.9092331058345735e-06,
8 | 6.859288987470791e-06,
9 | 5.8439894928596914e-05,
10 | 1.4843671124253888e-05,
11 | 4.369294856587658e-06,
12 | 1.182133928523399e-06
13 | ]
14 | },
15 | {
16 | "image_id": "MR184623",
17 | "prediction": 0,
18 | "score": [
19 | 0.9998781681060791,
20 | 5.951425805506005e-07,
21 | 3.935786025976995e-06,
22 | 1.925225842569489e-05,
23 | 9.623807272873819e-05,
24 | 1.6968820091278758e-06,
25 | 1.497223820479121e-07
26 | ]
27 | },
28 | {
29 | "image_id": "MR184663",
30 | "prediction": 0,
31 | "score": [
32 | 0.999405026435852,
33 | 1.7566314909345238e-06,
34 | 2.1728761566919275e-05,
35 | 0.0005295962328091264,
36 | 2.4908798877731897e-05,
37 | 1.2662169865507167e-05,
38 | 4.313331828598166e-06
39 | ]
40 | },
41 | {
42 | "image_id": "MR173412",
43 | "prediction": 0,
44 | "score": [
45 | 0.9689616560935974,
46 | 0.001665768795646727,
47 | 0.001844842336140573,
48 | 0.023772019892930984,
49 | 0.001691360492259264,
50 | 0.0007855384610593319,
51 | 0.001278734765946865
52 | ]
53 | },
54 | {
55 | "image_id": "MR201172",
56 | "prediction": 2,
57 | "score": [
58 | 0.07726386934518814,
59 | 0.021447185426950455,
60 | 0.5336843132972717,
61 | 0.2650243043899536,
62 | 0.0003756263176910579,
63 | 0.09334826469421387,
64 | 0.008856389671564102
65 | ]
66 | },
67 | {
68 | "image_id": "MR179548",
69 | "prediction": 0,
70 | "score": [
71 | 0.9999402761459351,
72 | 2.649641828611493e-06,
73 | 4.178182734904112e-06,
74 | 6.613364803342847e-06,
75 | 3.7991430872352794e-05,
76 | 6.584786660823738e-06,
77 | 1.624148353585042e-06
78 | ]
79 | },
80 | {
81 | "image_id": "MR174870",
82 | "prediction": 0,
83 | "score": [
84 | 0.999830961227417,
85 | 1.7789149069358245e-06,
86 | 3.2041592930909246e-05,
87 | 2.089608460664749e-05,
88 | 0.00010474680311745033,
89 | 6.0329434745654e-06,
90 | 3.4736185625661165e-06
91 | ]
92 | },
93 | {
94 | "image_id": "MR184885",
95 | "prediction": 0,
96 | "score": [
97 | 0.9672242403030396,
98 | 0.029198721051216125,
99 | 0.0003620933275669813,
100 | 0.0003231496084481478,
101 | 0.00015290497685782611,
102 | 0.00012808885367121547,
103 | 0.002610900206491351
104 | ]
105 | },
106 | {
107 | "image_id": "MR178737",
108 | "prediction": 0,
109 | "score": [
110 | 0.9998828172683716,
111 | 1.9239303128415486e-06,
112 | 4.296985935070552e-05,
113 | 1.331641851720633e-05,
114 | 4.826598160434514e-05,
115 | 9.731923455547076e-06,
116 | 9.157282079286233e-07
117 | ]
118 | },
119 | {
120 | "image_id": "MR200076",
121 | "prediction": 0,
122 | "score": [
123 | 0.9970410466194153,
124 | 5.135450555826537e-05,
125 | 0.0005271048285067081,
126 | 8.648796210763976e-05,
127 | 0.0016294168308377266,
128 | 0.0006610079435631633,
129 | 3.540319312378415e-06
130 | ]
131 | },
132 | {
133 | "image_id": "MR195950",
134 | "prediction": 0,
135 | "score": [
136 | 0.9991511106491089,
137 | 4.571132376440801e-05,
138 | 8.566717042413075e-06,
139 | 0.00010106972331413999,
140 | 0.0006676785415038466,
141 | 1.4735003787791356e-05,
142 | 1.107816569856368e-05
143 | ]
144 | },
145 | {
146 | "image_id": "MR174862",
147 | "prediction": 6,
148 | "score": [
149 | 0.012707187794148922,
150 | 6.779038085369393e-05,
151 | 0.00013242883142083883,
152 | 0.002975389827042818,
153 | 0.019151002168655396,
154 | 0.005606647115200758,
155 | 0.9593595266342163
156 | ]
157 | },
158 | {
159 | "image_id": "MR175358",
160 | "prediction": 0,
161 | "score": [
162 | 0.9994264841079712,
163 | 5.7871625358529855e-06,
164 | 1.4587215446226764e-05,
165 | 3.0144735774229048e-06,
166 | 0.0005281068733893335,
167 | 2.1737811039201915e-05,
168 | 3.559586616574961e-07
169 | ]
170 | },
171 | {
172 | "image_id": "MR146022",
173 | "prediction": 3,
174 | "score": [
175 | 2.8070780899724923e-07,
176 | 9.463022252020892e-06,
177 | 2.41007383010583e-05,
178 | 0.9997580647468567,
179 | 1.1466073601695825e-06,
180 | 0.00019137654453516006,
181 | 1.5588069800287485e-05
182 | ]
183 | },
184 | {
185 | "image_id": "MR94389",
186 | "prediction": 6,
187 | "score": [
188 | 1.1334625924064312e-05,
189 | 0.0017449429724365473,
190 | 3.7513847928494215e-05,
191 | 0.003589797765016556,
192 | 5.93178192502819e-05,
193 | 5.2897714340360835e-05,
194 | 0.9945042133331299
195 | ]
196 | },
197 | {
198 | "image_id": "MR27202",
199 | "prediction": 5,
200 | "score": [
201 | 3.378661403985461e-07,
202 | 1.5602692826632847e-07,
203 | 1.636992408293736e-07,
204 | 4.483327487037059e-08,
205 | 1.4825831158304936e-06,
206 | 0.9999939203262329,
207 | 4.031746811961057e-06
208 | ]
209 | },
210 | {
211 | "image_id": "MR88293",
212 | "prediction": 5,
213 | "score": [
214 | 3.231729124308913e-06,
215 | 2.6070907210851146e-07,
216 | 5.54923758500081e-07,
217 | 5.549491675083118e-07,
218 | 2.0617210338969016e-06,
219 | 0.9999837875366211,
220 | 9.512213182460982e-06
221 | ]
222 | },
223 | {
224 | "image_id": "MR179970",
225 | "prediction": 2,
226 | "score": [
227 | 0.0003003243764396757,
228 | 0.27661076188087463,
229 | 0.5027822852134705,
230 | 0.034970760345458984,
231 | 0.002457620110362768,
232 | 0.0011427932186052203,
233 | 0.18173536658287048
234 | ]
235 | },
236 | {
237 | "image_id": "MR32504",
238 | "prediction": 6,
239 | "score": [
240 | 0.0010602418333292007,
241 | 0.08846916258335114,
242 | 3.20233084494248e-05,
243 | 9.290202433476225e-05,
244 | 3.2358020689571276e-05,
245 | 0.0039085885509848595,
246 | 0.9064047336578369
247 | ]
248 | },
249 | {
250 | "image_id": "MR174815",
251 | "prediction": 1,
252 | "score": [
253 | 4.827109478355851e-06,
254 | 0.9995249509811401,
255 | 0.0001981587993213907,
256 | 5.5596872698515654e-05,
257 | 2.430432323308196e-05,
258 | 1.3813589248456992e-05,
259 | 0.0001782828039722517
260 | ]
261 | },
262 | {
263 | "image_id": "MR222216",
264 | "prediction": 1,
265 | "score": [
266 | 6.478813156718388e-05,
267 | 0.8597501516342163,
268 | 3.732787445187569e-05,
269 | 6.097802543081343e-05,
270 | 9.364875040773768e-06,
271 | 0.001587414531968534,
272 | 0.1384899914264679
273 | ]
274 | },
275 | {
276 | "image_id": "MR199345",
277 | "prediction": 1,
278 | "score": [
279 | 1.6016016161302105e-05,
280 | 0.9998906850814819,
281 | 1.378940578433685e-05,
282 | 3.5954080885858275e-06,
283 | 7.353805244747491e-07,
284 | 6.990780821070075e-05,
285 | 5.371589850255987e-06
286 | ]
287 | },
288 | {
289 | "image_id": "MR69046",
290 | "prediction": 2,
291 | "score": [
292 | 0.01838279329240322,
293 | 0.02258209139108658,
294 | 0.9508678913116455,
295 | 7.888083928264678e-05,
296 | 0.00020653315004892647,
297 | 0.0055504292249679565,
298 | 0.0023313011042773724
299 | ]
300 | },
301 | {
302 | "image_id": "MR96745",
303 | "prediction": 5,
304 | "score": [
305 | 0.002167792059481144,
306 | 1.8709943105932325e-05,
307 | 0.0003770168696064502,
308 | 0.004100983031094074,
309 | 3.8593050703639165e-05,
310 | 0.9824031591415405,
311 | 0.010893724858760834
312 | ]
313 | },
314 | {
315 | "image_id": "MR104280",
316 | "prediction": 2,
317 | "score": [
318 | 3.0933671951061115e-05,
319 | 5.992686783429235e-05,
320 | 0.9996746778488159,
321 | 4.1707121454237495e-06,
322 | 5.8174915466224775e-05,
323 | 0.00016392229008488357,
324 | 8.228608749050181e-06
325 | ]
326 | },
327 | {
328 | "image_id": "MR137627",
329 | "prediction": 2,
330 | "score": [
331 | 4.4151324800623115e-06,
332 | 4.7874673327896744e-05,
333 | 0.9997523427009583,
334 | 2.848744588845875e-05,
335 | 9.541348845232278e-05,
336 | 1.635731132410001e-05,
337 | 5.507089008460753e-05
338 | ]
339 | },
340 | {
341 | "image_id": "MR192701",
342 | "prediction": 2,
343 | "score": [
344 | 7.6566857387661e-06,
345 | 0.0028177364729344845,
346 | 0.9945186972618103,
347 | 4.486642865231261e-06,
348 | 0.00012450774374883622,
349 | 0.0003750179021153599,
350 | 0.0021518899593502283
351 | ]
352 | },
353 | {
354 | "image_id": "MR145114",
355 | "prediction": 2,
356 | "score": [
357 | 1.1722037925210316e-05,
358 | 0.49746236205101013,
359 | 0.5010170340538025,
360 | 0.0009472190868109465,
361 | 0.0004346987116150558,
362 | 0.00010350607772124931,
363 | 2.3496619178331457e-05
364 | ]
365 | },
366 | {
367 | "image_id": "MR106372",
368 | "prediction": 2,
369 | "score": [
370 | 0.00010835815919563174,
371 | 0.07179166376590729,
372 | 0.8932217359542847,
373 | 0.03365975245833397,
374 | 0.0009502135217189789,
375 | 0.00021402948186732829,
376 | 5.4306365200318396e-05
377 | ]
378 | },
379 | {
380 | "image_id": "MR162257",
381 | "prediction": 6,
382 | "score": [
383 | 3.3725995308486745e-05,
384 | 0.00017548595496919006,
385 | 8.136157703120261e-05,
386 | 0.00016678081010468304,
387 | 1.3797792917102925e-06,
388 | 7.2666080086492e-05,
389 | 0.9994685053825378
390 | ]
391 | },
392 | {
393 | "image_id": "MR222125",
394 | "prediction": 6,
395 | "score": [
396 | 0.0018386102747172117,
397 | 0.1417044699192047,
398 | 0.005420852452516556,
399 | 0.018002362921833992,
400 | 0.08081159740686417,
401 | 0.012366403825581074,
402 | 0.7398557066917419
403 | ]
404 | },
405 | {
406 | "image_id": "MR127280",
407 | "prediction": 3,
408 | "score": [
409 | 0.00015323830302804708,
410 | 1.0388351256551687e-05,
411 | 1.9814569895970635e-06,
412 | 0.9995266199111938,
413 | 4.7693629312561825e-05,
414 | 3.897369606420398e-05,
415 | 0.00022114407329354435
416 | ]
417 | },
418 | {
419 | "image_id": "MR210193",
420 | "prediction": 3,
421 | "score": [
422 | 0.00031007995130494237,
423 | 1.469708513468504e-05,
424 | 0.0006736861541867256,
425 | 0.5126804113388062,
426 | 8.9031076640822e-05,
427 | 0.0001850378466770053,
428 | 0.48604708909988403
429 | ]
430 | },
431 | {
432 | "image_id": "MR236955",
433 | "prediction": 1,
434 | "score": [
435 | 2.951784699689597e-05,
436 | 0.99369215965271,
437 | 0.00014942113193683326,
438 | 0.004456141963601112,
439 | 3.113151615252718e-05,
440 | 8.761577191762626e-05,
441 | 0.0015539645683020353
442 | ]
443 | },
444 | {
445 | "image_id": "MR207755",
446 | "prediction": 3,
447 | "score": [
448 | 0.0008690250688232481,
449 | 8.185242768377066e-05,
450 | 6.685457628918812e-05,
451 | 0.9860342144966125,
452 | 8.58384623825259e-07,
453 | 6.661139923380688e-05,
454 | 0.012880592606961727
455 | ]
456 | },
457 | {
458 | "image_id": "MR236008",
459 | "prediction": 3,
460 | "score": [
461 | 0.06515278667211533,
462 | 0.006646816153079271,
463 | 0.000852881814353168,
464 | 0.9143998026847839,
465 | 0.012095104902982712,
466 | 0.0007814770215190947,
467 | 7.117698987713084e-05
468 | ]
469 | },
470 | {
471 | "image_id": "MR229934",
472 | "prediction": 5,
473 | "score": [
474 | 6.248131739994278e-06,
475 | 8.735269148019142e-07,
476 | 3.3262480769735703e-07,
477 | 6.111906145633839e-07,
478 | 1.2732652976410463e-05,
479 | 0.9999614953994751,
480 | 1.770961534930393e-05
481 | ]
482 | },
483 | {
484 | "image_id": "MR193842",
485 | "prediction": 6,
486 | "score": [
487 | 4.131295645493083e-05,
488 | 2.3967246306710877e-05,
489 | 1.996451464947313e-05,
490 | 7.110075966920704e-05,
491 | 3.0273565698735183e-06,
492 | 1.0212067536485847e-05,
493 | 0.9998303651809692
494 | ]
495 | },
496 | {
497 | "image_id": "MR201013",
498 | "prediction": 4,
499 | "score": [
500 | 0.00014583978918381035,
501 | 1.5438766922670766e-06,
502 | 3.864719815283024e-07,
503 | 0.0002908123133238405,
504 | 0.9995558857917786,
505 | 5.4275005823001266e-06,
506 | 3.607524945437035e-08
507 | ]
508 | },
509 | {
510 | "image_id": "MR125176",
511 | "prediction": 4,
512 | "score": [
513 | 8.578588676755317e-06,
514 | 5.4780033678980544e-05,
515 | 3.876721450524201e-07,
516 | 9.270139344152994e-06,
517 | 0.9997697472572327,
518 | 0.00015126579091884196,
519 | 5.947810677753296e-06
520 | ]
521 | },
522 | {
523 | "image_id": "MR136344",
524 | "prediction": 4,
525 | "score": [
526 | 1.3626722648041323e-06,
527 | 1.46982836213283e-06,
528 | 1.6123319710459327e-06,
529 | 0.00011776628525694832,
530 | 0.9998651742935181,
531 | 1.244457143911859e-05,
532 | 1.2246891856193542e-07
533 | ]
534 | },
535 | {
536 | "image_id": "MR142442",
537 | "prediction": 4,
538 | "score": [
539 | 1.1597582670219708e-05,
540 | 0.0270093884319067,
541 | 0.0004230896884109825,
542 | 0.0017826099647209048,
543 | 0.9686463475227356,
544 | 0.0019800374284386635,
545 | 0.0001470050192438066
546 | ]
547 | },
548 | {
549 | "image_id": "MR140376",
550 | "prediction": 4,
551 | "score": [
552 | 1.03646709703753e-06,
553 | 4.854264716414036e-06,
554 | 3.1400861644215183e-07,
555 | 2.4801802283036523e-05,
556 | 0.9994537234306335,
557 | 0.0005139766726642847,
558 | 1.1986797971985652e-06
559 | ]
560 | },
561 | {
562 | "image_id": "MR159752",
563 | "prediction": 3,
564 | "score": [
565 | 0.026215935125947,
566 | 0.08944888412952423,
567 | 0.0020225613843649626,
568 | 0.6987693309783936,
569 | 0.18233439326286316,
570 | 0.0011593849631026387,
571 | 4.950146467308514e-05
572 | ]
573 | },
574 | {
575 | "image_id": "MR140232",
576 | "prediction": 4,
577 | "score": [
578 | 0.0005495784571394324,
579 | 1.1129982340207789e-05,
580 | 9.042206511367112e-05,
581 | 0.07327716797590256,
582 | 0.9216426610946655,
583 | 8.947700553108007e-05,
584 | 0.004339531064033508
585 | ]
586 | },
587 | {
588 | "image_id": "MR133387",
589 | "prediction": 4,
590 | "score": [
591 | 0.013784732669591904,
592 | 0.0001934156025527045,
593 | 0.0019502852810546756,
594 | 0.04929909110069275,
595 | 0.9345818161964417,
596 | 0.00016344145114999264,
597 | 2.7256040993961506e-05
598 | ]
599 | },
600 | {
601 | "image_id": "MR110203",
602 | "prediction": 5,
603 | "score": [
604 | 2.7524642973730806e-06,
605 | 2.1440333512146026e-05,
606 | 0.00011304454528726637,
607 | 1.743612983773346e-06,
608 | 5.8394423831487074e-05,
609 | 0.9997367262840271,
610 | 6.586426752619445e-05
611 | ]
612 | },
613 | {
614 | "image_id": "MR206734",
615 | "prediction": 2,
616 | "score": [
617 | 9.907887942972593e-06,
618 | 0.4021502733230591,
619 | 0.5819864273071289,
620 | 0.0006669783033430576,
621 | 0.013844279572367668,
622 | 0.0003890585503540933,
623 | 0.0009530240786261857
624 | ]
625 | },
626 | {
627 | "image_id": "MR226356",
628 | "prediction": 0,
629 | "score": [
630 | 0.9984581470489502,
631 | 1.1871312381117605e-05,
632 | 6.405714520951733e-05,
633 | 0.0005469739553518593,
634 | 3.788374669966288e-05,
635 | 0.0008399641374126077,
636 | 4.1217001125914976e-05
637 | ]
638 | },
639 | {
640 | "image_id": "MR198925",
641 | "prediction": 5,
642 | "score": [
643 | 5.533789135370171e-06,
644 | 1.4578188256564317e-07,
645 | 8.365284287492614e-08,
646 | 9.728045569090682e-08,
647 | 1.5120576790650375e-06,
648 | 0.9999908208847046,
649 | 1.844750272539386e-06
650 | ]
651 | },
652 | {
653 | "image_id": "MR-470019",
654 | "prediction": 5,
655 | "score": [
656 | 1.2511881095633726e-06,
657 | 1.3650027312905877e-06,
658 | 1.0122674893864314e-06,
659 | 1.5836398858937173e-07,
660 | 2.4827627953527553e-07,
661 | 0.9999860525131226,
662 | 1.0026735253632069e-05
663 | ]
664 | },
665 | {
666 | "image_id": "MR-451771",
667 | "prediction": 5,
668 | "score": [
669 | 0.16434374451637268,
670 | 3.260704761487432e-05,
671 | 5.960457565379329e-05,
672 | 0.00012993879499845207,
673 | 0.00018513503891881555,
674 | 0.8351498246192932,
675 | 9.909932850860059e-05
676 | ]
677 | },
678 | {
679 | "image_id": "MR209281",
680 | "prediction": 5,
681 | "score": [
682 | 3.253017348470166e-05,
683 | 3.1320498237619177e-05,
684 | 0.0036026337184011936,
685 | 7.197730883490294e-05,
686 | 0.0009360499680042267,
687 | 0.9934592247009277,
688 | 0.001866319915279746
689 | ]
690 | },
691 | {
692 | "image_id": "MR35505",
693 | "prediction": 6,
694 | "score": [
695 | 5.1708218961721286e-05,
696 | 0.01283283717930317,
697 | 0.0010743547463789582,
698 | 1.9713115761987865e-05,
699 | 3.0282881198218092e-05,
700 | 0.0018727314891293645,
701 | 0.9841183423995972
702 | ]
703 | },
704 | {
705 | "image_id": "MR61313",
706 | "prediction": 6,
707 | "score": [
708 | 3.5292487154947594e-05,
709 | 7.014941365923733e-05,
710 | 1.3566338566306513e-05,
711 | 0.00010390790703240782,
712 | 3.03730166706373e-06,
713 | 2.2766796973883174e-05,
714 | 0.9997512698173523
715 | ]
716 | },
717 | {
718 | "image_id": "MR124193",
719 | "prediction": 0,
720 | "score": [
721 | 0.9855139255523682,
722 | 5.9616818361973856e-06,
723 | 0.003929528407752514,
724 | 0.010411543771624565,
725 | 5.740649430663325e-05,
726 | 5.497538222698495e-05,
727 | 2.6575942683848552e-05
728 | ]
729 | },
730 | {
731 | "image_id": "MR120289",
732 | "prediction": 6,
733 | "score": [
734 | 4.446612729225308e-05,
735 | 0.001192878931760788,
736 | 2.1152582121430896e-05,
737 | 8.564207382733002e-05,
738 | 3.3028807138180127e-06,
739 | 0.003780062310397625,
740 | 0.994872510433197
741 | ]
742 | },
743 | {
744 | "image_id": "MR13762",
745 | "prediction": 6,
746 | "score": [
747 | 4.119868390262127e-05,
748 | 0.00011004068073816597,
749 | 3.2030075090005994e-05,
750 | 2.9480319426511414e-05,
751 | 1.1598235687415581e-05,
752 | 2.56992152571911e-05,
753 | 0.9997499585151672
754 | ]
755 | },
756 | {
757 | "image_id": "MR132628",
758 | "prediction": 1,
759 | "score": [
760 | 7.106801785994321e-05,
761 | 0.7900978326797485,
762 | 0.0006288065342232585,
763 | 8.619938307674602e-05,
764 | 0.00024692239821888506,
765 | 0.0005554858944378793,
766 | 0.20831358432769775
767 | ]
768 | },
769 | {
770 | "image_id": "MR34162",
771 | "prediction": 6,
772 | "score": [
773 | 5.777229307568632e-05,
774 | 6.393877265509218e-05,
775 | 1.7940683392225765e-05,
776 | 6.016885890858248e-05,
777 | 3.291541815997334e-06,
778 | 8.433320181211457e-05,
779 | 0.9997125267982483
780 | ]
781 | },
782 | {
783 | "image_id": "MR-400851",
784 | "prediction": 0,
785 | "score": [
786 | 0.7328611016273499,
787 | 0.011399470269680023,
788 | 0.001677955500781536,
789 | 0.10713846236467361,
790 | 2.1951322196400724e-05,
791 | 0.0018301407108083367,
792 | 0.14507094025611877
793 | ]
794 | },
795 | {
796 | "image_id": "MR58043",
797 | "prediction": 6,
798 | "score": [
799 | 0.00028987706173211336,
800 | 0.0010867657838389277,
801 | 4.615735815605149e-05,
802 | 1.4990388990554493e-05,
803 | 2.387955282756593e-05,
804 | 5.196019628783688e-05,
805 | 0.9984862804412842
806 | ]
807 | },
808 | {
809 | "image_id": "MR69585",
810 | "prediction": 3,
811 | "score": [
812 | 1.7333225059701363e-06,
813 | 6.489511724794284e-05,
814 | 3.7478630474652164e-06,
815 | 0.9999237060546875,
816 | 1.8296153712071828e-06,
817 | 3.049465249205241e-06,
818 | 1.0587848464638228e-06
819 | ]
820 | },
821 | {
822 | "image_id": "MR45055",
823 | "prediction": 3,
824 | "score": [
825 | 0.001374542247503996,
826 | 0.0010765749029815197,
827 | 0.0002000959066208452,
828 | 0.591166079044342,
829 | 5.681277343683178e-06,
830 | 0.0013284357264637947,
831 | 0.404848575592041
832 | ]
833 | },
834 | {
835 | "image_id": "MR88979",
836 | "prediction": 6,
837 | "score": [
838 | 3.3334501495119184e-05,
839 | 0.0001445254310965538,
840 | 5.417626380221918e-05,
841 | 2.2700913177686743e-05,
842 | 4.944926331518218e-06,
843 | 4.13079142163042e-05,
844 | 0.9996989965438843
845 | ]
846 | },
847 | {
848 | "image_id": "MR3573",
849 | "prediction": 6,
850 | "score": [
851 | 5.332764339982532e-05,
852 | 2.1871328499400988e-05,
853 | 2.0212917661410756e-05,
854 | 0.0008220816380344331,
855 | 7.224160071928054e-05,
856 | 1.7581214706297033e-05,
857 | 0.9989927411079407
858 | ]
859 | },
860 | {
861 | "image_id": "MR164939",
862 | "prediction": 6,
863 | "score": [
864 | 6.295596540439874e-05,
865 | 8.448898006463423e-05,
866 | 0.001377580570988357,
867 | 3.8113666960271075e-05,
868 | 2.0795130694750696e-05,
869 | 0.07627039402723312,
870 | 0.9221456050872803
871 | ]
872 | },
873 | {
874 | "image_id": "MR80569",
875 | "prediction": 6,
876 | "score": [
877 | 7.614294008817524e-05,
878 | 7.344261393882334e-05,
879 | 7.941493822727352e-05,
880 | 3.180249041179195e-05,
881 | 6.767706054233713e-06,
882 | 0.00011424611875554547,
883 | 0.9996181726455688
884 | ]
885 | },
886 | {
887 | "image_id": "MR85655",
888 | "prediction": 5,
889 | "score": [
890 | 5.431936074273835e-07,
891 | 8.399008777359995e-08,
892 | 1.5954293886011328e-08,
893 | 4.6410193021984014e-08,
894 | 2.472641199346981e-06,
895 | 0.9999954700469971,
896 | 1.4148763511911966e-06
897 | ]
898 | },
899 | {
900 | "image_id": "MR82133",
901 | "prediction": 6,
902 | "score": [
903 | 0.0002127664047293365,
904 | 0.002840819302946329,
905 | 4.338582220952958e-05,
906 | 1.8508069842937402e-05,
907 | 2.255223080283031e-05,
908 | 0.0001059734495356679,
909 | 0.9967560172080994
910 | ]
911 | },
912 | {
913 | "image_id": "MR37217",
914 | "prediction": 6,
915 | "score": [
916 | 4.4920743675902486e-05,
917 | 0.00016333443636540323,
918 | 0.00010658857354428619,
919 | 3.0368179068318568e-05,
920 | 3.653467501862906e-06,
921 | 7.08025399944745e-05,
922 | 0.9995802044868469
923 | ]
924 | },
925 | {
926 | "image_id": "MR109260",
927 | "prediction": 5,
928 | "score": [
929 | 1.3436481367534725e-06,
930 | 3.7984005984981195e-07,
931 | 2.6881869175099382e-08,
932 | 3.7046099521376163e-08,
933 | 1.0812238997459644e-06,
934 | 0.999996542930603,
935 | 5.924934498580114e-07
936 | ]
937 | },
938 | {
939 | "image_id": "MR189873",
940 | "prediction": 6,
941 | "score": [
942 | 3.9265833038371056e-05,
943 | 0.00010912565630860627,
944 | 4.216266461298801e-05,
945 | 1.1564287888177205e-05,
946 | 6.134985596872866e-06,
947 | 2.5209177692886442e-05,
948 | 0.9997665286064148
949 | ]
950 | },
951 | {
952 | "image_id": "MR49502",
953 | "prediction": 6,
954 | "score": [
955 | 0.00011093110515503213,
956 | 0.0007623932906426489,
957 | 0.0010758114513009787,
958 | 5.457567749544978e-05,
959 | 1.2542592230602168e-05,
960 | 0.0005738998879678547,
961 | 0.9974098801612854
962 | ]
963 | },
964 | {
965 | "image_id": "MR78507",
966 | "prediction": 6,
967 | "score": [
968 | 4.743781755678356e-05,
969 | 2.7917189072468318e-05,
970 | 1.9534483726602048e-05,
971 | 1.1245419955230318e-05,
972 | 5.850170964549761e-06,
973 | 0.00019108544802293181,
974 | 0.9996969699859619
975 | ]
976 | },
977 | {
978 | "image_id": "MR172409",
979 | "prediction": 6,
980 | "score": [
981 | 3.0886385502526537e-05,
982 | 0.00010037889296654612,
983 | 0.0001436440070392564,
984 | 1.4498013115371577e-05,
985 | 6.238259629753884e-06,
986 | 0.00020804087398573756,
987 | 0.9994962215423584
988 | ]
989 | },
990 | {
991 | "image_id": "MR199597",
992 | "prediction": 6,
993 | "score": [
994 | 6.550033140229061e-05,
995 | 0.0008181382436305285,
996 | 0.000244259339524433,
997 | 3.122746784356423e-05,
998 | 5.905861144128721e-06,
999 | 7.215645746327937e-05,
1000 | 0.9987627267837524
1001 | ]
1002 | },
1003 | {
1004 | "image_id": "MR131733",
1005 | "prediction": 3,
1006 | "score": [
1007 | 6.83623966324376e-08,
1008 | 1.4393053788808174e-05,
1009 | 2.300144615219324e-06,
1010 | 0.9999797344207764,
1011 | 1.3062431207799818e-06,
1012 | 1.8698832491281792e-06,
1013 | 4.2398812638566596e-07
1014 | ]
1015 | }
1016 | ]
--------------------------------------------------------------------------------
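The JSON above is the baseline's submission file: one record per case, where `score` holds the softmax probabilities over the 7 lesion classes and `prediction` is the argmax of that vector (see `write_score2json` in `predict.py` below). A minimal sanity-check sketch for such a file before submission; the file name is a placeholder:

```python
import json

# Path is a placeholder; point it at the file produced by predict.py.
with open('LLDBaseline.json') as f:
    records = json.load(f)

for rec in records:
    scores = rec['score']
    assert len(scores) == 7                                 # one probability per class
    assert abs(sum(scores) - 1.0) < 1e-3                    # softmax sums to ~1
    assert rec['prediction'] == scores.index(max(scores))   # prediction is the argmax
print(f'{len(records)} records look consistent.')
```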
/main/predict.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | '''
3 | Generate predictions on unlabeled data.
4 | '''
5 | import argparse
6 | import os
7 | import json
8 | import csv
9 | import glob
10 | import time
11 | import logging
12 | import torch
13 | import torch.nn as nn
14 | import torch.nn.parallel
15 | import numpy as np
16 | from tqdm import tqdm
17 | from collections import OrderedDict
18 | from contextlib import suppress
19 | from torch.utils.data.dataloader import DataLoader
20 | from timm.models import create_model, apply_test_time_pool, load_checkpoint, is_model, list_models
21 | from timm.utils import setup_default_logging, set_jit_legacy
22 |
23 | import models
24 | from metrics import *
25 | from datasets.mp_liver_dataset import MultiPhaseLiverDataset
26 |
27 | has_apex = False
28 | try:
29 | from apex import amp
30 | has_apex = True
31 | except ImportError:
32 | pass
33 |
34 | has_native_amp = False
35 | try:
36 | if getattr(torch.cuda.amp, 'autocast') is not None:
37 | has_native_amp = True
38 | except AttributeError:
39 | pass
40 |
41 | torch.backends.cudnn.benchmark = True
42 | _logger = logging.getLogger('validate')
43 |
44 |
45 | parser = argparse.ArgumentParser(description='Prediction on unlabeled data')
46 |
47 | parser.add_argument('--img_size', default=(16, 128, 128),
48 | type=int, nargs='+', help='input image size.')
49 | parser.add_argument('--crop_size', default=(14, 112, 112),
50 | type=int, nargs='+', help='cropped image size.')
51 | parser.add_argument('--data_dir', default='./images/', type=str)
52 | parser.add_argument('--val_anno_file', default='./labels/test.txt', type=str)
53 | parser.add_argument('--val_transform_list',
54 | default=['center_crop'], nargs='+', type=str)
55 | parser.add_argument('--model', '-m', metavar='NAME', default='resnet50',
56 |                     help='model architecture (default: resnet50)')
57 | parser.add_argument('-j', '--workers', default=8, type=int, metavar='N',
58 |                     help='number of data loading workers (default: 8)')
59 | parser.add_argument('-b', '--batch-size', default=256, type=int,
60 | metavar='N', help='mini-batch size (default: 256)')
61 | parser.add_argument('--num-classes', type=int, default=7,
62 | help='Number classes in dataset')
63 | parser.add_argument('--gp', default=None, type=str, metavar='POOL',
64 | help='Global pool type, one of (fast, avg, max, avgmax, avgmaxc). Model default if None.')
65 | parser.add_argument('--log-freq', default=10, type=int,
66 | metavar='N', help='batch logging frequency (default: 10)')
67 | parser.add_argument('--checkpoint', default='', type=str, metavar='PATH',
68 | help='path to latest checkpoint (default: none)')
69 | parser.add_argument('--pretrained', dest='pretrained', action='store_true',
70 | help='use pre-trained model')
71 | parser.add_argument('--num-gpu', type=int, default=1,
72 | help='Number of GPUS to use')
73 | parser.add_argument('--test-pool', dest='test_pool', action='store_true',
74 | help='enable test time pool')
75 | parser.add_argument('--pin-mem', action='store_true', default=False,
76 | help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.')
77 | parser.add_argument('--channels-last', action='store_true', default=False,
78 | help='Use channels_last memory layout')
79 | parser.add_argument('--amp', action='store_true', default=False,
80 | help='Use AMP mixed precision. Defaults to Apex, fallback to native Torch AMP.')
81 | parser.add_argument('--apex-amp', action='store_true', default=False,
82 | help='Use NVIDIA Apex AMP mixed precision')
83 | parser.add_argument('--native-amp', action='store_true', default=False,
84 | help='Use Native Torch AMP mixed precision')
85 | parser.add_argument('--tf-preprocessing', action='store_true', default=False,
86 |                     help='Use Tensorflow preprocessing pipeline (requires CPU TF installed)')
87 | parser.add_argument('--use-ema', dest='use_ema', action='store_true',
88 | help='use ema version of weights if present')
89 | parser.add_argument('--torchscript', dest='torchscript', action='store_true',
90 | help='convert model torchscript for inference')
91 | parser.add_argument('--legacy-jit', dest='legacy_jit', action='store_true',
92 | help='use legacy jit mode for pytorch 1.5/1.5.1/1.6 to get back fusion performance')
93 | parser.add_argument('--results-dir', default='', type=str, metavar='FILENAME',
94 |                     help='Output directory for the prediction JSON file')
95 | parser.add_argument('--team_name', default='', type=str,
96 | required=True, help='Please enter your team name')
97 |
98 |
99 | def validate(args):
100 | # might as well try to validate something
101 | args.pretrained = args.pretrained or not args.checkpoint
102 | amp_autocast = suppress # do nothing
103 | if args.amp:
104 | if has_native_amp:
105 | args.native_amp = True
106 | elif has_apex:
107 | args.apex_amp = True
108 | else:
109 |             _logger.warning("Neither APEX nor native Torch AMP is available.")
110 | assert not args.apex_amp or not args.native_amp, "Only one AMP mode should be set."
111 |     if args.native_amp:
112 |         amp_autocast = torch.cuda.amp.autocast
113 |         _logger.info('Validating in mixed precision with native PyTorch AMP.')
114 |     elif args.apex_amp:
115 |         _logger.info('Validating in mixed precision with NVIDIA APEX AMP.')
116 |     else:
117 |         _logger.info('Validating in float32. AMP not enabled.')
118 |
119 | if args.legacy_jit:
120 | set_jit_legacy()
121 |
122 | # create model
123 | model = create_model(
124 | args.model,
125 | pretrained=args.pretrained,
126 | num_classes=args.num_classes,
127 | pretrained_cfg=None)
128 |
129 | if args.num_classes is None:
130 | assert hasattr(
131 | model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.'
132 | args.num_classes = model.num_classes
133 | if args.checkpoint:
134 | load_checkpoint(model, args.checkpoint, args.use_ema)
135 |
136 | param_count = sum([m.numel() for m in model.parameters()])
137 | _logger.info('Model %s created, param count: %d' %
138 | (args.model, param_count))
139 |
140 | model = model.cuda()
141 | if args.apex_amp:
142 | model = amp.initialize(model, opt_level='O1')
143 |
144 | if args.num_gpu > 1:
145 | model = torch.nn.DataParallel(
146 | model, device_ids=list(range(args.num_gpu)))
147 |
148 | dataset = MultiPhaseLiverDataset(args, is_training=False)
149 |
150 | loader = DataLoader(dataset,
151 | batch_size=args.batch_size,
152 | num_workers=args.workers,
153 | pin_memory=args.pin_mem,
154 | shuffle=False)
155 |
156 | predictions = []
157 | labels = []
158 |
159 | model.eval()
160 | pbar = tqdm(total=len(dataset))
161 | with torch.no_grad():
162 |         for (input, target) in loader:
163 | target = target.cuda()
164 | input = input.cuda()
165 | # compute output
166 | with amp_autocast():
167 | output = model(input)
168 | predictions.append(output)
169 | labels.append(target)
170 | pbar.update(args.batch_size)
171 | pbar.close()
172 | return process_prediction(predictions)
173 |
174 |
175 | def process_prediction(outputs):
176 | outputs = torch.cat(outputs, dim=0).detach()
177 | pred_score = torch.softmax(outputs, dim=1)
178 | return pred_score.cpu().numpy()
179 |
180 |
181 | def write_score2json(score_info, args):
182 | score_info = score_info.astype(float)
183 | score_list = []
184 | anno_info = np.loadtxt(args.val_anno_file, dtype=np.str_)
185 | for idx, item in enumerate(anno_info):
186 | id = item[0].rsplit('/', 1)[-1]
187 | score = list(score_info[idx])
188 | pred = score.index(max(score))
189 | pred_info = {
190 | 'image_id': id,
191 | 'prediction': pred,
192 | 'score': score,
193 | }
194 | score_list.append(pred_info)
195 | json_data = json.dumps(score_list, indent=4)
196 | save_name = os.path.join(args.results_dir, args.team_name+'.json')
197 |     # write the prediction records to <results_dir>/<team_name>.json
198 |     with open(save_name, 'w') as file:
199 |         file.write(json_data)
200 | _logger.info(f"Prediction has been saved to '{save_name}'.")
201 |
202 |
203 | def main():
204 | setup_default_logging()
205 | args = parser.parse_args()
206 | score = validate(args)
207 | os.makedirs(args.results_dir, exist_ok=True)
208 | write_score2json(score, args)
209 |
210 |
211 | if __name__ == '__main__':
212 | main()
213 |
--------------------------------------------------------------------------------
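A minimal invocation sketch for `predict.py`; the data paths, checkpoint path, and model name are placeholders to adapt to your setup, and `--team_name` determines the name of the output JSON:

```bash
python predict.py \
    --data_dir data/classification_dataset/images/ \
    --val_anno_file data/classification_dataset/labels/test.txt \
    --model uniformer_small_IL \
    --batch-size 16 \
    --checkpoint path/to/checkpoint.pth.tar \
    --results-dir output/ \
    --team_name LLDBaseline
```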
/main/preprocess/crop_roi.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import json
4 | import numpy as np
5 | import SimpleITK as sitk
6 | from tqdm import tqdm
7 |
8 | def crop_lesion(data_dir, json_path, save_dir, xy_extension=16, z_extension=2):
9 | '''
10 | Args:
11 | data_dir: path to original dataset
12 | json_path: path to annotation file
13 | save_dir: save_dir of classification dataset
14 |         xy_extension: in-plane (pixel) extension when cropping the lesion ROI
15 |         z_extension: slice extension when cropping the lesion ROI
16 | '''
17 |     with open(json_path, 'r') as f:
18 |         data = json.load(f)
19 | data = data['Annotation_info']
20 | for patientID in tqdm(data):
21 | for item in data[patientID]:
22 | studyUID = item['studyUID']
23 | seriesUID = item['seriesUID']
24 | phase = item['phase']
25 | # spacing = item['pixel_spacing']
26 | # slice_thickness = item['slice_thickness']
27 | # src_spacing = (slice_thickness, spacing[0], spacing[1])
28 | annotation = item['annotation']['lesion']
29 |
30 | image_path = os.path.join(data_dir, patientID, studyUID, seriesUID + '.nii.gz')
31 | try:
32 | image = sitk.ReadImage(image_path)
33 | except KeyboardInterrupt:
34 | exit()
35 |             except Exception:
36 |                 print(sys.exc_info())
37 |                 print('Continue processing')
38 | continue
39 |
40 | image_array = sitk.GetArrayFromImage(image)
41 |
42 | for ann_idx in annotation:
43 | ann = annotation[ann_idx]
44 | lesion_cls = ann['category']
45 | bbox_info = ann['bbox']['3D_box']
46 |
47 | x_min = int(bbox_info['x_min'])
48 | y_min = int(bbox_info['y_min'])
49 | x_max = int(bbox_info['x_max'])
50 | y_max = int(bbox_info['y_max'])
51 | z_min = int(bbox_info['z_min'])
52 | z_max = int(bbox_info['z_max'])
53 | # bbox = (x_min, y_min, z_min, x_max, y_max, z_max)
54 |
55 | temp_image = image_array
56 |
57 | if z_min >= temp_image.shape[0]:
58 |                     print(f"{patientID}/{studyUID}/{seriesUID}: z_min ({z_min}) >= num slices ({temp_image.shape[0]})")
59 | continue
60 | elif z_max >= temp_image.shape[0]:
61 |                     print(f"{patientID} {studyUID} {seriesUID}: z_max ({z_max}) >= num slices ({temp_image.shape[0]})")
62 | continue
63 |
64 | if xy_extension is not None:
65 | x_padding_min = int(abs(x_min - xy_extension)) if x_min - xy_extension < 0 else 0
66 | y_padding_min = int(abs(y_min - xy_extension)) if y_min - xy_extension < 0 else 0
67 |                     x_padding_max = int(abs(x_max + xy_extension - temp_image.shape[2])) if x_max + xy_extension > temp_image.shape[2] else 0  # array layout is (z, y, x)
68 |                     y_padding_max = int(abs(y_max + xy_extension - temp_image.shape[1])) if y_max + xy_extension > temp_image.shape[1] else 0
69 |
70 | x_min = max(x_min - xy_extension, 0)
71 | y_min = max(y_min - xy_extension, 0)
72 |                     x_max = min(x_max + xy_extension, temp_image.shape[2])
73 |                     y_max = min(y_max + xy_extension, temp_image.shape[1])
74 | if z_extension is not None:
75 | z_min = max(z_min - z_extension, 0)
76 | z_max = min(z_max + z_extension, temp_image.shape[0])
77 |
78 | if temp_image.shape[0] == 1:
79 | roi = temp_image[0, y_min:y_max, x_min:x_max]
80 | roi = np.expand_dims(roi, axis=0)
81 | elif z_min == z_max:
82 | roi = temp_image[z_min, y_min:y_max, x_min:x_max]
83 | roi = np.expand_dims(roi, axis=0)
84 | else:
85 | roi = temp_image[z_min:(z_max+1), y_min:y_max, x_min:x_max]
86 |
87 | if xy_extension is not None:
88 | roi = np.pad(roi, ((0, 0), (y_padding_min, y_padding_max), (x_padding_min, x_padding_max)), 'constant')
89 |
90 | nii_file = sitk.GetImageFromArray(roi)
91 | if int(ann_idx) == 0:
92 | save_folder = os.path.join(save_dir, f'{patientID}')
93 | else:
94 | save_folder = os.path.join(save_dir, f'{patientID}_{ann_idx}')
95 | os.makedirs(save_folder, exist_ok=True)
96 | sitk.WriteImage(nii_file, save_folder + f'/{phase}.nii.gz')
97 |
98 | if __name__ == "__main__":
99 | import argparse
100 | parser = argparse.ArgumentParser(description='Data preprocessing Config', add_help=False)
101 | parser.add_argument('--data-dir', default='', type=str)
102 | parser.add_argument('--anno-path', default='', type=str)
103 | parser.add_argument('--save-dir', default='', type=str)
104 | args = parser.parse_args()
105 | # data_dir = 'data/images/'
106 | # anno_path = 'data/labels/Annotation.json'
107 | # save_dir = 'data/classification_dataset/images/'
108 | crop_lesion(args.data_dir, args.anno_path, args.save_dir)
109 |
--------------------------------------------------------------------------------
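A usage sketch for `crop_roi.py`, following the paths suggested in the commented-out defaults above; it crops each annotated lesion (with the default 16-pixel in-plane and 2-slice extensions) into a per-lesion folder of `{phase}.nii.gz` volumes:

```bash
python preprocess/crop_roi.py \
    --data-dir data/images/ \
    --anno-path data/labels/Annotation.json \
    --save-dir data/classification_dataset/images/
```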
/main/preprocess/gene_cross_val.py:
--------------------------------------------------------------------------------
1 | import os
2 | import copy
3 | import random
4 | import itertools
5 | import numpy as np
6 |
7 |
8 | def split_list(my_list, num_parts):
9 | part_size = len(my_list) // num_parts
10 | remainder = len(my_list) % num_parts
11 | result = []
12 | start = 0
13 | for i in range(num_parts):
14 | if i < remainder:
15 | end = start + part_size + 1
16 | else:
17 | end = start + part_size
18 | result.append(my_list[start:end])
19 | start = end
20 | return result
21 |
22 | def build_dataset(lab_path, save_dir, num_folds=5, num_classes=7):
23 | os.makedirs(save_dir, exist_ok=True)
24 | lab_data = np.loadtxt(lab_path, dtype=np.str_)
25 | n_fold_list = []
26 | for cls_idx in range(num_classes):
27 | data_list = []
28 | for data in lab_data:
29 | if int(data[-1]) == cls_idx:
30 | data_list.append(data[0])
31 | random.shuffle(data_list)
32 | data_info_list = []
33 | for item in data_list:
34 | data_info_list.append(
35 | {'data': item,
36 | 'cls': cls_idx, })
37 | n_fold_list.append(split_list(data_info_list, num_folds))
38 | for i in range(num_folds):
39 | train_data = []
40 | val_data = []
41 | _n_fold_list = copy.deepcopy(n_fold_list)
42 | for j in range(num_classes):
43 | val_data.append(_n_fold_list[j][i])
44 | del _n_fold_list[j][i]
45 | for item in _n_fold_list[j]:
46 | train_data.append(item)
47 | train_data = list(itertools.chain(*train_data))
48 | val_data = list(itertools.chain(*val_data))
49 | print(f'Fold {i+1}: num_train_data={len(train_data)}, num_val_data={len(val_data)}')
50 | train_file = open(f'{save_dir}/train_fold{i+1}.txt', 'w')
51 | val_file = open(f'{save_dir}/val_fold{i+1}.txt', 'w')
52 | for item in train_data:
53 | data = item['data']
54 | cls = item['cls']
55 | train_file.write(f'{data} {cls}'+'\n')
56 | for item in val_data:
57 | data = item['data']
58 | cls = item['cls']
59 | val_file.write(f'{data} {cls}'+'\n')
60 | train_file.close()
61 | val_file.close()
62 |
63 |
64 | if __name__ == "__main__":
65 | import argparse
66 | parser = argparse.ArgumentParser(description='Data preprocessing Config', add_help=False)
67 | parser.add_argument('--lab-path', default='', type=str)
68 | parser.add_argument('--save-dir', default='', type=str)
69 | parser.add_argument('--num-folds', default=5, type=int)
70 | parser.add_argument('--seed', default=66, type=int)
71 | args = parser.parse_args()
72 | random.seed(args.seed) ## Adjust according to model performance
73 | # lab_path = 'data/classification_dataset/labels/labels.txt'
74 | # save_dir = 'data/classification_dataset/labels/'
75 | build_dataset(args.lab_path, args.save_dir, num_folds=args.num_folds)
76 |
--------------------------------------------------------------------------------
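A usage sketch for `gene_cross_val.py`, again using the paths from the commented-out defaults; it writes class-stratified `train_fold{i}.txt`/`val_fold{i}.txt` splits into the save directory:

```bash
python preprocess/gene_cross_val.py \
    --lab-path data/classification_dataset/labels/labels.txt \
    --save-dir data/classification_dataset/labels/ \
    --num-folds 5 \
    --seed 66
```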
/main/requirements.txt:
--------------------------------------------------------------------------------
1 | SimpleITK==2.2.1
2 | timm==0.6.12
3 | torch==1.12.1
4 | torchaudio==0.12.1
5 | torchvision==0.13.1
6 | tqdm==4.64.1
7 | scikit-learn==1.2.1
8 | scipy==1.10.0
9 |
--------------------------------------------------------------------------------
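The pinned dependencies install in one step; pick a PyTorch 1.12.1 build matching your CUDA version if the default wheel does not fit your machine:

```bash
pip install -r requirements.txt
```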
/main/train.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | import argparse
3 | import time
4 | import yaml
5 | import os
6 | import logging
7 | from collections import OrderedDict
8 | from contextlib import suppress
9 | from datetime import datetime
10 |
11 | import torch
12 | import torch.nn as nn
13 | from torch.nn.parallel import DistributedDataParallel as NativeDDP
14 |
15 | from timm.models import (create_model, safe_model_name,
16 | resume_checkpoint, load_checkpoint,
17 | model_parameters)
18 |
19 | from timm.utils import *
20 | from timm.loss import LabelSmoothingCrossEntropy, BinaryCrossEntropy
21 | from timm.optim import create_optimizer_v2, optimizer_kwargs
22 | from timm.scheduler import create_scheduler
23 | from timm.utils import ApexScaler, NativeScaler
24 |
25 | try:
26 | from apex import amp
27 | from apex.parallel import DistributedDataParallel as ApexDDP
28 | from apex.parallel import convert_syncbn_model
29 | has_apex = True
30 | except ImportError:
31 | has_apex = False
32 |
33 | has_native_amp = False
34 | try:
35 | if getattr(torch.cuda.amp, 'autocast') is not None:
36 | has_native_amp = True
37 | except AttributeError:
38 | pass
39 |
40 | try:
41 | import wandb
42 | has_wandb = True
43 | except ImportError:
44 | has_wandb = False
45 |
46 | from metrics import *
47 | from datasets.mp_liver_dataset import MultiPhaseLiverDataset, create_loader
48 | import models
49 |
50 | torch.backends.cudnn.benchmark = True
51 | _logger = logging.getLogger('train')
52 |
53 | # The first arg parser parses out only the --config argument; its value is used to
54 | # load a YAML file whose key-value pairs override the defaults of the main parser below
55 | config_parser = parser = argparse.ArgumentParser(description='Training Config', add_help=False)
56 | parser.add_argument('-c', '--config', default='', type=str, metavar='FILE',
57 | help='YAML config file specifying default arguments')
58 | parser = argparse.ArgumentParser(description='LLD-MMRI 2023 Training')
59 | # Dataset parameters
60 | parser.add_argument('--data_dir', default='', type=str)
61 | parser.add_argument('--train_anno_file', default='', type=str)
62 | parser.add_argument('--val_anno_file', default='', type=str)
63 | parser.add_argument('--train_transform_list', default=['random_crop',
64 | 'z_flip',
65 | 'x_flip',
66 | 'y_flip',
67 | 'rotation',],
68 | nargs='+', type=str)
69 | parser.add_argument('--val_transform_list', default=['center_crop'], nargs='+', type=str)
70 | parser.add_argument('--img_size', default=(16, 128, 128), type=int, nargs='+', help='input image size.')
71 | parser.add_argument('--crop_size', default=(14, 112, 112), type=int, nargs='+', help='cropped image size.')
72 | parser.add_argument('--flip_prob', default=0.5, type=float, help='Random flip prob (default: 0.5)')
73 | parser.add_argument('--reprob', type=float, default=0.25, help='Random erase prob (default: 0.25)')
74 | parser.add_argument('--rcprob', type=float, default=0.25, help='Random contrast prob (default: 0.25)')
75 | parser.add_argument('--angle', default=45, type=int)
76 |
77 | # Model parameters
78 | parser.add_argument('--model', default='mp_uniformer_small', type=str, metavar='MODEL',
79 |                     help='Name of model to train (default: "mp_uniformer_small")')
80 | parser.add_argument('--pretrained', action='store_true', default=False,
81 | help='Start with pretrained version of specified network (if avail)')
82 | parser.add_argument('--initial-checkpoint', default='', type=str, metavar='PATH',
83 | help='Initialize model from this checkpoint (default: none)')
84 | parser.add_argument('--resume', default='', type=str, metavar='PATH',
85 | help='Resume full model and optimizer state from checkpoint (default: none)')
86 | parser.add_argument('--no-resume-opt', action='store_true', default=False,
87 | help='prevent resume of optimizer state when resuming model')
88 | parser.add_argument('--num-classes', type=int, default=7, metavar='N',
89 | help='number of label classes (Model default if None)')
90 | parser.add_argument('--gp', default=None, type=str, metavar='POOL',
91 | help='Global pool type, one of (fast, avg, max, avgmax, avgmaxc). Model default if None.')
92 | parser.add_argument('--interpolation', default='', type=str, metavar='NAME',
93 | help='Image resize interpolation type (overrides model)')
94 | parser.add_argument('-b', '--batch-size', type=int, default=128, metavar='N',
95 |                     help='input batch size for training (default: 128)')
96 | parser.add_argument('-vb', '--validation-batch-size', type=int, default=None, metavar='N',
97 | help='validation batch size override (default: None)')
98 |
99 | # Optimizer parameters
100 | parser.add_argument('--opt', default='adamw', type=str, metavar='OPTIMIZER',
101 |                     help='Optimizer (default: "adamw")')
102 | parser.add_argument('--opt-eps', default=None, type=float, metavar='EPSILON',
103 | help='Optimizer Epsilon (default: None, use opt default)')
104 | parser.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA',
105 | help='Optimizer Betas (default: None, use opt default)')
106 | parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
107 | help='Optimizer momentum (default: 0.9)')
108 | parser.add_argument('--weight-decay', type=float, default=0.05,
109 | help='weight decay (default: 0.05)')
110 | parser.add_argument('--clip-grad', type=float, default=None, metavar='NORM',
111 | help='Clip gradient norm (default: None, no clipping)')
112 | parser.add_argument('--clip-mode', type=str, default='norm',
113 | help='Gradient clipping mode. One of ("norm", "value", "agc")')
114 |
115 | # Learning rate schedule parameters
116 | parser.add_argument('--sched', default='cosine', type=str, metavar='SCHEDULER',
117 |                     help='LR scheduler (default: "cosine")')
118 | parser.add_argument('--lr', type=float, default=1e-3, metavar='LR',
119 | help='learning rate (default: 1e-3)')
120 | parser.add_argument('--lr-noise', type=float, nargs='+', default=None, metavar='pct, pct',
121 | help='learning rate noise on/off epoch percentages')
122 | parser.add_argument('--lr-noise-pct', type=float, default=0.67, metavar='PERCENT',
123 | help='learning rate noise limit percent (default: 0.67)')
124 | parser.add_argument('--lr-noise-std', type=float, default=1.0, metavar='STDDEV',
125 | help='learning rate noise std-dev (default: 1.0)')
126 | parser.add_argument('--lr-cycle-mul', type=float, default=1.0, metavar='MULT',
127 | help='learning rate cycle len multiplier (default: 1.0)')
128 | parser.add_argument('--lr-cycle-decay', type=float, default=0.5, metavar='MULT',
129 | help='amount to decay each learning rate cycle (default: 0.5)')
130 | parser.add_argument('--lr-cycle-limit', type=int, default=1, metavar='N',
131 | help='learning rate cycle limit, cycles enabled if > 1')
132 | parser.add_argument('--lr-k-decay', type=float, default=1.0,
133 | help='learning rate k-decay for cosine/poly (default: 1.0)')
134 | parser.add_argument('--warmup-lr', type=float, default=1e-6, metavar='LR',
135 | help='warmup learning rate (default: 1e-6)')
136 | parser.add_argument('--min-lr', type=float, default=1e-5, metavar='LR',
137 | help='lower lr bound for cyclic schedulers that hit 0 (1e-5)')
138 | parser.add_argument('--epochs', type=int, default=300, metavar='N',
139 | help='number of epochs to train (default: 300)')
140 | parser.add_argument('--epoch-repeats', type=float, default=0., metavar='N',
141 | help='epoch repeat multiplier (number of times to repeat dataset epoch per train epoch).')
142 | parser.add_argument('--start-epoch', default=None, type=int, metavar='N',
143 | help='manual epoch number (useful on restarts)')
144 | parser.add_argument('--decay-epochs', type=float, default=100, metavar='N',
145 | help='epoch interval to decay LR')
146 | parser.add_argument('--warmup-epochs', type=int, default=5, metavar='N',
147 | help='epochs to warmup LR, if scheduler supports')
148 | parser.add_argument('--cooldown-epochs', type=int, default=10, metavar='N',
149 | help='epochs to cooldown LR at min_lr, after cyclic schedule ends')
150 | parser.add_argument('--patience-epochs', type=int, default=10, metavar='N',
151 |                     help='patience epochs for Plateau LR scheduler (default: 10)')
152 | parser.add_argument('--decay-rate', '--dr', type=float, default=0.1, metavar='RATE',
153 | help='LR decay rate (default: 0.1)')
154 |
155 | # Regularization parameters
156 | parser.add_argument('--bce-loss', action='store_true', default=False,
157 | help='Enable BCE loss w/ Mixup/CutMix use.')
158 | parser.add_argument('--bce-target-thresh', type=float, default=None,
159 | help='Threshold for binarizing softened BCE targets (default: None, disabled)')
160 | parser.add_argument('--smoothing', type=float, default=0,
161 |                     help='Label smoothing (default: 0)')
162 | parser.add_argument('--drop', type=float, default=0.0, metavar='PCT',
163 | help='Dropout rate (default: 0.)')
164 | parser.add_argument('--drop-path', type=float, default=None, metavar='PCT',
165 | help='Drop path rate (default: None)')
166 | parser.add_argument('--drop-block', type=float, default=None, metavar='PCT',
167 | help='Drop block rate (default: None)')
168 |
169 | # Batch norm parameters (only works with gen_efficientnet based models currently)
170 | parser.add_argument('--bn-tf', action='store_true', default=False,
171 | help='Use Tensorflow BatchNorm defaults for models that support it (default: False)')
172 | parser.add_argument('--bn-momentum', type=float, default=None,
173 | help='BatchNorm momentum override (if not None)')
174 | parser.add_argument('--bn-eps', type=float, default=None,
175 | help='BatchNorm epsilon override (if not None)')
176 | parser.add_argument('--sync-bn', action='store_true',
177 | help='Enable NVIDIA Apex or Torch synchronized BatchNorm.')
178 | parser.add_argument('--dist-bn', type=str, default='reduce',
179 | help='Distribute BatchNorm stats between nodes after each epoch ("broadcast", "reduce", or "")')
180 |
181 | # Model Exponential Moving Average
182 | parser.add_argument('--model-ema', action='store_true', default=False,
183 | help='Enable tracking moving average of model weights')
184 | parser.add_argument('--model-ema-force-cpu', action='store_true', default=False,
185 | help='Force ema to be tracked on CPU, rank=0 node only. Disables EMA validation.')
186 | parser.add_argument('--model-ema-decay', type=float, default=0.9998,
187 | help='decay factor for model weights moving average (default: 0.9998)')
188 |
189 | # Misc
190 | parser.add_argument('--seed', type=int, default=42, metavar='S',
191 | help='random seed (default: 42)')
192 | parser.add_argument('--worker-seeding', type=str, default='all',
193 | help='worker seed mode (default: all)')
194 | parser.add_argument('--log-interval', type=int, default=25, metavar='N',
195 | help='how many batches to wait before logging training status')
196 | parser.add_argument('--recovery-interval', type=int, default=0, metavar='N',
197 | help='how many batches to wait before writing recovery checkpoint')
198 | parser.add_argument('--checkpoint-hist', type=int, default=1, metavar='N',
199 |                     help='number of checkpoints to keep (default: 1)')
200 | parser.add_argument('-j', '--workers', type=int, default=8, metavar='N',
201 | help='how many training processes to use (default: 8)')
202 | parser.add_argument('--amp', action='store_true', default=False,
203 | help='use NVIDIA Apex AMP or Native AMP for mixed precision training')
204 | parser.add_argument('--apex-amp', action='store_true', default=False,
205 | help='Use NVIDIA Apex AMP mixed precision')
206 | parser.add_argument('--native-amp', action='store_true', default=False,
207 | help='Use Native Torch AMP mixed precision')
208 | parser.add_argument('--no-ddp-bb', action='store_true', default=False,
209 | help='Force broadcast buffers for native DDP to off.')
210 | parser.add_argument('--pin-mem', action='store_true', default=False,
211 | help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.')
212 | parser.add_argument('--output', default='', type=str, metavar='PATH',
213 | help='path to output folder (default: none, current dir)')
214 | parser.add_argument('--experiment', default='', type=str, metavar='NAME',
215 | help='name of train experiment, name of sub-folder for output')
216 | parser.add_argument('--eval-metric', default='f1', type=str, metavar='EVAL_METRIC',
217 |                     help='Main metric (default: "f1")')
218 | parser.add_argument('--report-metrics', default=['acc', 'f1', 'recall', 'precision', 'kappa'],
219 | nargs='+', choices=['acc', 'f1', 'recall', 'precision', 'kappa'],
220 | type=str, help='All evaluation metrics')
221 | parser.add_argument("--local_rank", default=0, type=int)
222 | parser.add_argument('--torchscript', dest='torchscript', action='store_true',
223 | help='convert model torchscript for inference')
224 | parser.add_argument('--log-wandb', action='store_true', default=False,
225 | help='log training and validation metrics to wandb')
226 |
227 |
228 | def _parse_args():
229 | # Do we have a config file to parse?
230 | args_config, remaining = config_parser.parse_known_args()
231 | if args_config.config:
232 | with open(args_config.config, 'r') as f:
233 | cfg = yaml.safe_load(f)
234 | parser.set_defaults(**cfg)
235 |
236 | # The main arg parser parses the rest of the args, the usual
237 | # defaults will have been overridden if config file specified.
238 | args = parser.parse_args(remaining)
239 |
240 | # Cache the args as a text string to save them in the output dir later
241 | args_text = yaml.safe_dump(args.__dict__, default_flow_style=False)
242 | return args, args_text
243 |
244 |
245 | def main():
246 | setup_default_logging()
247 | args, args_text = _parse_args()
248 |
249 | if args.log_wandb:
250 | if has_wandb:
251 | wandb.init(project=args.experiment, config=args)
252 | else:
253 | _logger.warning("You've requested to log metrics to wandb but package not found. "
254 | "Metrics not being logged to wandb, try `pip install wandb`")
255 |
256 | args.distributed = False
257 | if 'WORLD_SIZE' in os.environ:
258 | args.distributed = int(os.environ['WORLD_SIZE']) > 1
259 | args.device = 'cuda:0'
260 | args.world_size = 1
261 | args.rank = 0 # global rank
262 | if args.distributed:
263 | args.device = 'cuda:%d' % args.local_rank
264 | torch.cuda.set_device(args.local_rank)
265 | torch.distributed.init_process_group(backend='nccl', init_method='env://')
266 | args.world_size = torch.distributed.get_world_size()
267 | args.rank = torch.distributed.get_rank()
268 | _logger.info('Training in distributed mode with multiple processes, 1 GPU per process. Process %d, total %d.'
269 | % (args.rank, args.world_size))
270 | else:
271 |         _logger.info('Training with a single process on 1 GPU.')
272 | assert args.rank >= 0
273 |
274 | # resolve AMP arguments based on PyTorch / Apex availability
275 | use_amp = None
276 | if args.amp:
277 | # `--amp` chooses native amp before apex (APEX ver not actively maintained)
278 | if has_native_amp:
279 | args.native_amp = True
280 | elif has_apex:
281 | args.apex_amp = True
282 | if args.apex_amp and has_apex:
283 | use_amp = 'apex'
284 | elif args.native_amp and has_native_amp:
285 | use_amp = 'native'
286 | elif args.apex_amp or args.native_amp:
287 |         _logger.warning("Neither APEX nor native Torch AMP is available, using float32. "
288 |                         "Install NVIDIA apex or upgrade to PyTorch 1.6")
289 |
290 | random_seed(args.seed, args.rank)
291 |
292 | model = create_model(
293 | args.model,
294 | pretrained=args.pretrained,
295 | num_classes=args.num_classes,
296 | drop_rate=args.drop,
297 | drop_path_rate=args.drop_path,
298 | drop_block_rate=args.drop_block,
299 | bn_momentum=args.bn_momentum,
300 | bn_eps=args.bn_eps,
301 | scriptable=args.torchscript,
302 | checkpoint_path=args.initial_checkpoint)
303 |
304 | if args.num_classes is None:
305 | assert hasattr(model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.'
306 | args.num_classes = model.num_classes # FIXME handle model default vs config num_classes more elegantly
307 | print(model)
308 | if args.local_rank == 0:
309 | _logger.info(f'Model {safe_model_name(args.model)} created, param count:{sum([m.numel() for m in model.parameters()])}')
310 |
311 |     # move model to GPU
312 | model.cuda()
313 |
314 | # setup synchronized BatchNorm for distributed training
315 | if args.distributed and args.sync_bn:
316 | if has_apex and use_amp == 'apex':
317 | # Apex SyncBN preferred unless native amp is activated
318 | model = convert_syncbn_model(model)
319 | else:
320 | model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
321 | if args.local_rank == 0:
322 | _logger.info(
323 | 'Converted model to use Synchronized BatchNorm. WARNING: You may have issues if using '
324 | 'zero initialized BN layers (enabled by default for ResNets) while sync-bn enabled.')
325 |
326 | if args.torchscript:
327 | assert not use_amp == 'apex', 'Cannot use APEX AMP with torchscripted model'
328 | assert not args.sync_bn, 'Cannot use SyncBatchNorm with torchscripted model'
329 | model = torch.jit.script(model)
330 |
331 | optimizer = create_optimizer_v2(model, **optimizer_kwargs(cfg=args))
332 |
333 | # setup automatic mixed-precision (AMP) loss scaling and op casting
334 | amp_autocast = suppress # do nothing
335 | loss_scaler = None
336 | if use_amp == 'apex':
337 | model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
338 | loss_scaler = ApexScaler()
339 | if args.local_rank == 0:
340 | _logger.info('Using NVIDIA APEX AMP. Training in mixed precision.')
341 | elif use_amp == 'native':
342 | amp_autocast = torch.cuda.amp.autocast
343 | loss_scaler = NativeScaler()
344 | if args.local_rank == 0:
345 | _logger.info('Using native Torch AMP. Training in mixed precision.')
346 | else:
347 | if args.local_rank == 0:
348 | _logger.info('AMP not enabled. Training in float32.')
349 |
350 | # optionally resume from a checkpoint
351 | resume_epoch = None
352 | if args.resume:
353 | resume_epoch = resume_checkpoint(
354 | model, args.resume,
355 | optimizer=None if args.no_resume_opt else optimizer,
356 | loss_scaler=None if args.no_resume_opt else loss_scaler,
357 | log_info=args.local_rank == 0)
358 |
359 | # setup exponential moving average of model weights, SWA could be used here too
360 | model_ema = None
361 | if args.model_ema:
362 | # Important to create EMA model after cuda(), DP wrapper, and AMP but before SyncBN and DDP wrapper
363 | model_ema = ModelEmaV2(
364 | model, decay=args.model_ema_decay, device='cpu' if args.model_ema_force_cpu else None)
365 | if args.resume:
366 | load_checkpoint(model_ema.module, args.resume, use_ema=True)
367 |
368 | # setup distributed training
369 | if args.distributed:
370 | if has_apex and use_amp == 'apex':
371 | # Apex DDP preferred unless native amp is activated
372 | if args.local_rank == 0:
373 | _logger.info("Using NVIDIA APEX DistributedDataParallel.")
374 | model = ApexDDP(model, delay_allreduce=True)
375 | else:
376 | if args.local_rank == 0:
377 | _logger.info("Using native Torch DistributedDataParallel.")
378 | model = NativeDDP(model, device_ids=[args.local_rank], broadcast_buffers=not args.no_ddp_bb)
379 | # NOTE: EMA model does not need to be wrapped by DDP
380 |
381 | # setup learning rate schedule and starting epoch
382 | lr_scheduler, num_epochs = create_scheduler(args, optimizer)
383 | start_epoch = 0
384 | if args.start_epoch is not None:
385 | # a specified start_epoch will always override the resume epoch
386 | start_epoch = args.start_epoch
387 | elif resume_epoch is not None:
388 | start_epoch = resume_epoch
389 | if lr_scheduler is not None and start_epoch > 0:
390 | lr_scheduler.step(start_epoch)
391 |
392 | if args.local_rank == 0:
393 | _logger.info('Scheduled epochs: {}'.format(num_epochs))
394 |
395 | # create the train and eval datasets/dataloader
396 | dataset_train = MultiPhaseLiverDataset(args, is_training=True)
397 |
398 | dataset_eval = MultiPhaseLiverDataset(args, is_training=False)
399 |
400 | loader_train = create_loader(dataset_train,
401 | batch_size=args.batch_size,
402 | is_training=True,
403 | num_workers=args.workers,
404 | distributed=args.distributed,
405 | pin_memory=args.pin_mem)
406 |
407 | loader_eval = create_loader(dataset_eval,
408 | batch_size=args.batch_size,
409 | is_training=False,
410 | num_workers=args.workers,
411 | distributed=args.distributed,
412 | pin_memory=args.pin_mem)
413 |
414 | if args.smoothing:
415 | if args.bce_loss:
416 | train_loss_fn = BinaryCrossEntropy(smoothing=args.smoothing, target_threshold=args.bce_target_thresh)
417 | else:
418 | train_loss_fn = LabelSmoothingCrossEntropy(smoothing=args.smoothing)
419 | else:
420 | train_loss_fn = nn.CrossEntropyLoss()
421 | train_loss_fn = train_loss_fn.cuda()
422 | validate_loss_fn = nn.CrossEntropyLoss().cuda()
423 |
424 | # setup checkpoint saver and eval metric tracking
425 | eval_metric = args.eval_metric
426 | best_metric = None
427 | best_epoch = None
428 | saver = None
429 | metric_savers = None
430 | output_dir = None
431 |
432 | if args.rank == 0:
433 | if args.experiment:
434 | exp_name = args.experiment
435 | else:
436 | exp_name = '-'.join([
437 | datetime.now().strftime("%Y%m%d-%H%M%S"),
438 | safe_model_name(args.model),
439 | ])
440 | output_dir = get_outdir(args.output if args.output else './output/train', exp_name)
441 | decreasing = True if eval_metric == 'loss' else False
442 | saver = CheckpointSaver(
443 | model=model, optimizer=optimizer,
444 | args=args, model_ema=model_ema,
445 | amp_scaler=loss_scaler,
446 | checkpoint_dir=output_dir, recovery_dir=output_dir,
447 | decreasing=decreasing, max_history=args.checkpoint_hist)
448 |
449 | best_metrics = {}
450 | metric_savers = {}
451 | for metric in args.report_metrics:
452 | best_metrics[metric] = {'value': None, 'epoch': None}
453 | metric_savers[metric] = CheckpointSaver(
454 | model=model, optimizer=optimizer,
455 | args=args, model_ema=model_ema,
456 | amp_scaler=loss_scaler,
457 | checkpoint_prefix=f'best_{metric}_checkpoint',
458 | checkpoint_dir=output_dir,
459 | recovery_dir=output_dir,
460 | decreasing=decreasing,
461 | max_history=args.checkpoint_hist)
462 |
463 | with open(os.path.join(output_dir, 'args.yaml'), 'w') as f:
464 | f.write(args_text)
465 | try:
466 | for epoch in range(start_epoch, num_epochs):
467 | if args.distributed and hasattr(loader_train.sampler, 'set_epoch'):
468 | loader_train.sampler.set_epoch(epoch)
469 |
470 | train_metrics = train_one_epoch(
471 | epoch, model, loader_train, optimizer, train_loss_fn, args,
472 | lr_scheduler=lr_scheduler, saver=saver, output_dir=output_dir,
473 | amp_autocast=amp_autocast, loss_scaler=loss_scaler, model_ema=model_ema)
474 |
475 | if args.distributed and args.dist_bn in ('broadcast', 'reduce'):
476 | if args.local_rank == 0:
477 | _logger.info("Distributing BatchNorm running means and vars")
478 | distribute_bn(model, args.world_size, args.dist_bn == 'reduce')
479 |
480 | eval_metrics = validate(model, loader_eval, validate_loss_fn, args, amp_autocast=amp_autocast)
481 |
482 | if model_ema is not None and not args.model_ema_force_cpu:
483 | if args.distributed and args.dist_bn in ('broadcast', 'reduce'):
484 | distribute_bn(model_ema, args.world_size, args.dist_bn == 'reduce')
485 | ema_eval_metrics = validate(
486 | model_ema.module, loader_eval, validate_loss_fn, args, amp_autocast=amp_autocast, log_suffix=' (EMA)')
487 | eval_metrics = ema_eval_metrics
488 |
489 | if lr_scheduler is not None:
490 | # step LR for next epoch
491 | lr_scheduler.step(epoch + 1, eval_metrics[eval_metric])
492 |
493 | if output_dir is not None:
494 | update_summary(
495 | epoch, train_metrics, eval_metrics, os.path.join(output_dir, 'summary.csv'),
496 | write_header=best_metric is None, log_wandb=args.log_wandb and has_wandb)
497 |
498 | if metric_savers is not None:
499 | # Save the best checkpoint for this metric
500 | for metric in args.report_metrics:
501 | if best_metrics[metric]['value'] is None or (eval_metrics[metric] > best_metrics[metric]['value']):
502 | best_metrics[metric]['value'] = eval_metrics[metric]
503 | best_metrics[metric]['epoch'] = epoch
504 | ckpt_saver = metric_savers[metric]
505 | ckpt_saver.save_checkpoint(epoch, metric=best_metrics[metric]['value'])
506 |
507 | if saver is not None:
508 | # save proper checkpoint with eval metric
509 | save_metric = eval_metrics[eval_metric]
510 | best_metric, best_epoch = saver.save_checkpoint(epoch, metric=save_metric)
511 |
512 | except KeyboardInterrupt:
513 | pass
514 | if best_metric is not None:
515 | _logger.info('*** Best metric: {0} (epoch {1})'.format(best_metric, best_epoch))
516 |
517 |
518 | def train_one_epoch(
519 | epoch, model, loader, optimizer, loss_fn, args,
520 | lr_scheduler=None, saver=None, output_dir=None, amp_autocast=suppress,
521 | loss_scaler=None, model_ema=None):
522 |
523 | second_order = hasattr(optimizer, 'is_second_order') and optimizer.is_second_order
524 | batch_time_m = AverageMeter()
525 | data_time_m = AverageMeter()
526 | losses_m = AverageMeter()
527 |
528 | model.train()
529 |
530 | end = time.time()
531 | last_idx = len(loader) - 1
532 | num_updates = epoch * len(loader)
533 | for batch_idx, (input, target) in enumerate(loader):
534 | last_batch = batch_idx == last_idx
535 | data_time_m.update(time.time() - end)
536 | input, target = input.cuda(), target.cuda()
537 | with amp_autocast():
538 | output = model(input)
539 | loss = loss_fn(output, target)
540 |
541 | if not args.distributed:
542 | losses_m.update(loss.item(), input.size(0))
543 |
544 | optimizer.zero_grad()
545 | if loss_scaler is not None:
546 | loss_scaler(
547 | loss, optimizer,
548 | clip_grad=args.clip_grad, clip_mode=args.clip_mode,
549 | parameters=model_parameters(model, exclude_head='agc' in args.clip_mode),
550 | create_graph=second_order)
551 | else:
552 | loss.backward(create_graph=second_order)
553 | if args.clip_grad is not None:
554 | dispatch_clip_grad(
555 | model_parameters(model, exclude_head='agc' in args.clip_mode),
556 | value=args.clip_grad, mode=args.clip_mode)
557 | optimizer.step()
558 |
559 | if model_ema is not None:
560 | model_ema.update(model)
561 |
562 | torch.cuda.synchronize()
563 | num_updates += 1
564 | batch_time_m.update(time.time() - end)
565 | if last_batch or batch_idx % args.log_interval == 0:
566 | lrl = [param_group['lr'] for param_group in optimizer.param_groups]
567 | lr = sum(lrl) / len(lrl)
568 |
569 | if args.distributed:
570 | reduced_loss = reduce_tensor(loss.data, args.world_size)
571 | losses_m.update(reduced_loss.item(), input.size(0))
572 |
573 | if args.local_rank == 0:
574 | _logger.info(
575 | 'Train: {} [{:>4d}/{} ({:>3.0f}%)] '
576 | 'Loss: {loss.val:#.4g} ({loss.avg:#.3g}) '
577 | 'Time: {batch_time.val:.3f}s, {rate:>7.2f}/s '
578 | '({batch_time.avg:.3f}s, {rate_avg:>7.2f}/s) '
579 | 'LR: {lr:.3e} '
580 | 'Data: {data_time.val:.3f} ({data_time.avg:.3f})'.format(
581 | epoch,
582 | batch_idx, len(loader),
583 | 100. * batch_idx / last_idx,
584 | loss=losses_m,
585 | batch_time=batch_time_m,
586 | rate=input.size(0) * args.world_size / batch_time_m.val,
587 | rate_avg=input.size(0) * args.world_size / batch_time_m.avg,
588 | lr=lr,
589 | data_time=data_time_m))
590 |
591 | if saver is not None and args.recovery_interval and (
592 | last_batch or (batch_idx + 1) % args.recovery_interval == 0):
593 | saver.save_recovery(epoch, batch_idx=batch_idx)
594 |
595 | if lr_scheduler is not None:
596 | lr_scheduler.step_update(num_updates=num_updates, metric=losses_m.avg)
597 |
598 | end = time.time()
599 | # end for
600 |
601 | if hasattr(optimizer, 'sync_lookahead'):
602 | optimizer.sync_lookahead()
603 |
604 | return OrderedDict([('lr', lr), ('loss', losses_m.avg)])
605 |
606 | @torch.no_grad()
607 | def validate(model, loader, loss_fn, args, amp_autocast=suppress, log_suffix=''):
608 | model.eval()
609 | predictions = []
610 | labels = []
611 | last_idx = len(loader) - 1
612 | for batch_idx, (input, target) in enumerate(loader):
613 | last_batch = batch_idx == last_idx
614 | input = input.cuda()
615 | target = target.cuda()
616 |
617 | with amp_autocast():
618 | output = model(input)
619 | if isinstance(output, (tuple, list)):
620 | output = output[0]
621 |
622 | predictions.append(output)
623 | labels.append(target)
624 |
625 | evaluation_metrics = compute_metrics(predictions, labels, loss_fn, args)
626 | if args.local_rank == 0:
627 | output_str = 'Test:\n'
628 | for key, value in evaluation_metrics.items():
629 | output_str += f'{key}: {value}\n'
630 | _logger.info(output_str)
631 |
632 | return evaluation_metrics
633 |
634 |
635 | def compute_metrics(outputs, targets, loss_fn, args):
636 |
637 | outputs = torch.cat(outputs, dim=0).detach()
638 | targets = torch.cat(targets, dim=0).detach()
639 |
640 | if args.distributed:
641 | outputs = gather_data(outputs)
642 | targets = gather_data(targets)
643 |
644 | loss = loss_fn(outputs, targets).cpu().item()
645 | outputs = outputs.cpu().numpy()
646 | targets = targets.cpu().numpy()
647 | acc = ACC(outputs, targets)
648 | f1 = F1_score(outputs, targets)
649 | recall = Recall(outputs, targets)
650 | # specificity = Specificity(outputs, targets)
651 | precision = Precision(outputs, targets)
652 | kappa = Cohen_Kappa(outputs, targets)
653 | metrics = OrderedDict([
654 | ('loss', loss),
655 | ('acc', acc),
656 | ('f1', f1),
657 | ('recall', recall),
658 | ('precision', precision),
659 | ('kappa', kappa),
660 | ])
661 |
662 | return metrics
663 |
664 |
665 | def gather_data(input):
666 | '''
667 | gather data from multi gpus
668 | '''
669 | output_list = [torch.zeros_like(input) for _ in range(torch.distributed.get_world_size())]
670 | torch.distributed.all_gather(output_list, input)
671 | output = torch.cat(output_list, dim=0)
672 | return output
673 |
674 | if __name__ == '__main__':
675 | torch.cuda.empty_cache()
676 | main()
--------------------------------------------------------------------------------
/main/validate.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | import argparse
3 | import os
4 | import json
5 | import csv
6 | import glob
7 | import time
8 | import logging
9 | import torch
10 | import torch.nn as nn
11 | import torch.nn.parallel
12 | import numpy as np
13 | from tqdm import tqdm
14 | from collections import OrderedDict
15 | from contextlib import suppress
16 | from torch.utils.data.dataloader import DataLoader
17 | from timm.models import create_model, apply_test_time_pool, load_checkpoint, is_model, list_models
18 | from timm.data import create_dataset, create_loader, resolve_data_config, RealLabelsImagenet
19 | from timm.utils import accuracy, AverageMeter, natural_key, setup_default_logging, set_jit_legacy
20 |
21 | import models
22 | from metrics import *
23 | from datasets.mp_liver_dataset import MultiPhaseLiverDataset, create_loader
24 |
25 | has_apex = False
26 | try:
27 | from apex import amp
28 | has_apex = True
29 | except ImportError:
30 | pass
31 |
32 | has_native_amp = False
33 | try:
34 | if getattr(torch.cuda.amp, 'autocast') is not None:
35 | has_native_amp = True
36 | except AttributeError:
37 | pass
38 |
39 | torch.backends.cudnn.benchmark = True
40 | _logger = logging.getLogger('validate')
41 |
42 |
43 | parser = argparse.ArgumentParser(description='LLD-MMRI2023 Validation')
44 |
45 | parser.add_argument('--img_size', default=(16, 128, 128), type=int, nargs='+', help='input image size.')
46 | parser.add_argument('--crop_size', default=(14, 112, 112), type=int, nargs='+', help='cropped image size.')
47 | parser.add_argument('--data_dir', default='./images/', type=str)
48 | parser.add_argument('--val_anno_file', default='./labels/test.txt', type=str)
49 | parser.add_argument('--val_transform_list', default=['center_crop'], nargs='+', type=str)
50 | parser.add_argument('--model', '-m', metavar='NAME', default='resnet50',
51 |                     help='model architecture (default: resnet50)')
52 | parser.add_argument('-j', '--workers', default=8, type=int, metavar='N',
53 |                     help='number of data loading workers (default: 8)')
54 | parser.add_argument('-b', '--batch-size', default=256, type=int,
55 | metavar='N', help='mini-batch size (default: 256)')
56 | parser.add_argument('--num-classes', type=int, default=7,
57 |                     help='Number of classes in the dataset')
58 | parser.add_argument('--gp', default=None, type=str, metavar='POOL',
59 | help='Global pool type, one of (fast, avg, max, avgmax, avgmaxc). Model default if None.')
60 | parser.add_argument('--log-freq', default=10, type=int,
61 | metavar='N', help='batch logging frequency (default: 10)')
62 | parser.add_argument('--checkpoint', default='', type=str, metavar='PATH',
63 | help='path to latest checkpoint (default: none)')
64 | parser.add_argument('--pretrained', dest='pretrained', action='store_true',
65 | help='use pre-trained model')
66 | parser.add_argument('--num-gpu', type=int, default=1,
67 | help='Number of GPUS to use')
68 | parser.add_argument('--test-pool', dest='test_pool', action='store_true',
69 | help='enable test time pool')
70 | parser.add_argument('--pin-mem', action='store_true', default=False,
71 | help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.')
72 | parser.add_argument('--channels-last', action='store_true', default=False,
73 | help='Use channels_last memory layout')
74 | parser.add_argument('--amp', action='store_true', default=False,
75 |                     help='Use AMP mixed precision. Prefers native Torch AMP, falls back to Apex.')
76 | parser.add_argument('--apex-amp', action='store_true', default=False,
77 | help='Use NVIDIA Apex AMP mixed precision')
78 | parser.add_argument('--native-amp', action='store_true', default=False,
79 | help='Use Native Torch AMP mixed precision')
80 | parser.add_argument('--tf-preprocessing', action='store_true', default=False,
81 |                     help='Use Tensorflow preprocessing pipeline (requires CPU TF installed)')
82 | parser.add_argument('--use-ema', dest='use_ema', action='store_true',
83 | help='use ema version of weights if present')
84 | parser.add_argument('--torchscript', dest='torchscript', action='store_true',
85 |                     help='convert model to torchscript for inference')
86 | parser.add_argument('--legacy-jit', dest='legacy_jit', action='store_true',
87 | help='use legacy jit mode for pytorch 1.5/1.5.1/1.6 to get back fusion performance')
88 | parser.add_argument('--results-dir', default='', type=str, metavar='PATH',
89 |                     help='Output directory for validation results (results.txt and score.json)')
90 | parser.add_argument('--score-dir', default='', type=str, metavar='PATH',
91 |                     help='Output directory for validation scores')
92 |
93 |
94 | def validate(args):
95 | # might as well try to validate something
96 | args.pretrained = args.pretrained or not args.checkpoint
97 | amp_autocast = suppress # do nothing
98 | if args.amp:
99 | if has_native_amp:
100 | args.native_amp = True
101 | elif has_apex:
102 | args.apex_amp = True
103 | else:
104 |             _logger.warning("Neither APEX nor native Torch AMP is available.")
105 | assert not args.apex_amp or not args.native_amp, "Only one AMP mode should be set."
106 | if args.native_amp:
107 | amp_autocast = torch.cuda.amp.autocast
108 | _logger.info('Validating in mixed precision with native PyTorch AMP.')
109 | elif args.apex_amp:
110 | _logger.info('Validating in mixed precision with NVIDIA APEX AMP.')
111 | else:
112 | _logger.info('Validating in float32. AMP not enabled.')
113 |
114 | if args.legacy_jit:
115 | set_jit_legacy()
116 |
117 | # create model
118 | model = create_model(
119 | args.model,
120 | pretrained=args.pretrained,
121 | num_classes=args.num_classes,
122 | pretrained_cfg=None)
123 |
124 | if args.num_classes is None:
125 | assert hasattr(model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.'
126 | args.num_classes = model.num_classes
127 | if args.checkpoint:
128 | load_checkpoint(model, args.checkpoint, args.use_ema)
129 |
130 | param_count = sum([m.numel() for m in model.parameters()])
131 | _logger.info('Model %s created, param count: %d' % (args.model, param_count))
132 |
133 |
134 | model = model.cuda()
135 | if args.apex_amp:
136 | model = amp.initialize(model, opt_level='O1')
137 |
138 | if args.num_gpu > 1:
139 | model = torch.nn.DataParallel(model, device_ids=list(range(args.num_gpu)))
140 |
141 | criterion = nn.CrossEntropyLoss().cuda()
142 |
143 | dataset = MultiPhaseLiverDataset(args, is_training=False)
144 |
145 | loader = DataLoader(dataset,
146 | batch_size=args.batch_size,
147 | num_workers=args.workers,
148 | pin_memory=args.pin_mem,
149 | shuffle=False)
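    |     # shuffle must remain False: write_score2json() pairs predictions with
    |     # val_anno_file rows by index, so loader order must match annotation order.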
150 |
151 | batch_time = AverageMeter()
152 |
153 | predictions = []
154 | labels = []
155 |
156 | model.eval()
157 | with torch.no_grad():
158 |         # note: no explicit warmup pass; the first batch time includes CUDA/cudnn init
159 | end = time.time()
160 | for (input, target) in tqdm(loader):
161 | target = target.cuda()
162 | input = input.cuda()
163 | # compute output
164 | with amp_autocast():
165 | output = model(input)
166 | predictions.append(output)
167 | labels.append(target)
168 | # measure elapsed time
169 | batch_time.update(time.time() - end)
170 | end = time.time()
171 | evaluation_metrics = compute_metrics(predictions, labels, criterion, args)
172 | return evaluation_metrics
173 |
174 | def compute_metrics(outputs, targets, loss_fn, args):
175 |
176 | outputs = torch.cat(outputs, dim=0).detach()
177 | targets = torch.cat(targets, dim=0).detach()
178 | pred_score = torch.softmax(outputs, dim=1)
179 | loss = loss_fn(outputs, targets).cpu().item()
180 | outputs = outputs.cpu().numpy()
181 | targets = targets.cpu().numpy()
182 | pred_score = pred_score.cpu().numpy()
183 | acc = ACC(outputs, targets)
184 | f1 = F1_score(outputs, targets)
185 | recall = Recall(outputs, targets)
186 | # specificity = Specificity(outputs, targets)
187 | precision = Precision(outputs, targets)
188 | kappa = Cohen_Kappa(outputs, targets)
189 | report = cls_report(outputs, targets)
190 | cm = confusion_matrix(outputs, targets)
191 | metrics = OrderedDict([
192 | ('acc', acc),
193 | ('f1', f1),
194 | ('recall', recall),
195 | ('precision', precision),
196 | ('kappa', kappa),
197 | ('confusion matrix', cm),
198 | ('classification report', report),
199 | ])
200 | return metrics, pred_score
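    | # pred_score holds the per-class softmax probabilities; write_score2json() below
    | # serializes them (plus the argmax prediction) into the score.json submission file.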
201 |
202 | def write_results2txt(results_dir, results):
203 | results_file = os.path.join(results_dir, 'results.txt')
204 |     with open(results_file, 'w') as file:
205 |         file.write(results)
206 | 
207 |
208 | def write_score2json(score_info, args):
209 |     score_info = score_info.astype(np.float64)  # np.float alias was removed in NumPy >= 1.24
210 | score_list = []
211 | anno_info = np.loadtxt(args.val_anno_file, dtype=np.str_)
212 | for idx, item in enumerate(anno_info):
213 |         image_id = item[0].rsplit('/', 1)[-1]  # avoid shadowing the built-in id()
214 |         label = int(item[1])
215 |         score = list(score_info[idx])
216 |         pred = score.index(max(score))
217 |         pred_info = {
218 |             'image_id': image_id,
219 | 'label': label,
220 | 'prediction': pred,
221 | 'score': score,
222 | }
223 | score_list.append(pred_info)
224 | json_data = json.dumps(score_list, indent=4)
225 |     with open(os.path.join(args.results_dir, 'score.json'), 'w') as file:
226 |         file.write(json_data)
227 | 
228 |
229 | def main():
230 | setup_default_logging()
231 | args = parser.parse_args()
232 | results, score = validate(args)
233 | output_str = 'Test Results:\n'
234 | for key, value in results.items():
235 | if key == 'confusion matrix':
236 | output_str += f'{key}:\n {value}\n'
237 | elif key == 'classification report':
238 | output_str += f'{key}:\n {value}\n'
239 | else:
240 | output_str += f'{key}: {value}\n'
241 | os.makedirs(args.results_dir, exist_ok=True)
242 | write_results2txt(args.results_dir, output_str)
243 | write_score2json(score, args)
244 | print(output_str)
245 |
246 | if __name__ == '__main__':
247 | main()
--------------------------------------------------------------------------------
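Note: `validate.py` exposes both its argument `parser` and `validate()` at module level, so evaluation can also be driven programmatically. Below is a minimal sketch under that assumption; every path and the model name are placeholders, not shipped defaults:

```python
# Minimal sketch: drive validate.py programmatically (placeholder paths/model name).
from validate import parser, validate

args = parser.parse_args([
    '--data_dir', './images/',
    '--val_anno_file', './labels/val.txt',
    '--model', 'uniformer_small_IL',   # placeholder; any model registered under models/
    '--checkpoint', './output/checkpoint.pth.tar',
    '--batch-size', '8',
    '--results-dir', './output/',
])

# validate() returns compute_metrics()'s pair: (OrderedDict of metrics, softmax scores)
metrics, scores = validate(args)
print(metrics['acc'], metrics['kappa'])
```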