├── README.md
├── docker submission
│   ├── Dockerfile
│   ├── README.md
│   ├── predict_task1.py
│   ├── predict_task2.py
│   └── requirements.txt
├── filestation
│   ├── aiib_data_consent_form.pdf
│   ├── example_submission.zip
│   └── registration_form.pdf
├── images
│   ├── Registration form.pdf
│   ├── icon.png
│   ├── main.png
│   └── organizers.png
├── scoring metrics
│   ├── task1.py
│   ├── task2.py
│   └── utils.py
└── webpage.html

/README.md:
--------------------------------------------------------------------------------
# AIIB23 (Airway-Informed Quantitative CT Imaging Biomarker for Fibrotic Lung Disease - MICCAI 2023)
![image](images/main.png)
## Context
Airway-related quantitative imaging biomarkers (QIBs) are crucial for the examination, diagnosis, and prognosis of lung diseases, yet manual delineation of airway structures is unduly burdensome. Competitors are encouraged to devise automatic airway segmentation models with high robustness and generalization ability. This is an open-call challenge, and new submissions are allowed after the conference.
## How to participate in AIIB23?
Register for the challenge at https://codalab.lisn.upsaclay.fr/competitions/13076#participate
**Note that you also need to send the registration form to the organizers.**
## How to package and submit Docker files
Please refer to [our tutorial](docker%20submission/README.md).
--------------------------------------------------------------------------------
/docker submission/Dockerfile:
--------------------------------------------------------------------------------
## Pull from an existing base image
FROM nvcr.io/nvidia/pytorch:21.05-py3

## Copy the requirements file
COPY ./requirements.txt .

## Install Python packages in the Docker image
RUN pip3 install -r requirements.txt

## Copy all files
COPY ./ ./

## Execute the inference command
CMD ["./predict_task1.py"]
ENTRYPOINT ["python3"]
--------------------------------------------------------------------------------
/docker submission/README.md:
--------------------------------------------------------------------------------
# Submission instructions for AIIB2023

## Instructions for Validation Phase submission
During the validation phase, competitors are required to submit mask predictions (*.nii.gz) for task 1 and mortality predictions (*.csv) for task 2. For both tasks, the files must be zipped and uploaded to the submission page.

For task 1, the zipped file should be arranged as follows:

── teamname_ValPhase.zip

├── AIIB_001.nii.gz

├── AIIB_002.nii.gz

├── ...

└── AIIB_110.nii.gz

For task 2, the CSV file should contain two columns: one named 'ID' with the IDs of all patients, and the other named 'prediction' with the corresponding predictions. The zipped file for submission should be arranged as follows:

── teamname_ValPhase.zip

└── mortality_prediction.csv
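
For convenience, the snippet below sketches one way to assemble these archives with Python's standard `zipfile` module and `pandas`; the team name, case IDs, and prediction values are placeholders, and the mask files are assumed to already exist in the working directory. Each task is a separate submission, so only build the archive for the task you participate in.

```
import zipfile
import pandas as pd

# Task 1: zip the predicted masks (case IDs below are placeholders).
with zipfile.ZipFile("teamname_ValPhase.zip", "w") as zf:
    for case_id in ["AIIB_001", "AIIB_002"]:  # ... through AIIB_110
        zf.write(f"{case_id}.nii.gz")

# Task 2: write a CSV with the required 'ID' and 'prediction' columns
# (values are dummies), then zip it.
predictions = pd.DataFrame({"ID": ["AIIB_001", "AIIB_002"],
                            "prediction": [0.12, 0.87]})
predictions.to_csv("mortality_prediction.csv", index=False)
with zipfile.ZipFile("teamname_ValPhase.zip", "w") as zf:
    zf.write("mortality_prediction.csv")
```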

## Instructions for Test Phase submission

The AIIB2023 challenge is held as a Type II challenge: the test set will not be released to competitors. For the test-phase submission, competitors must submit their containerized Docker images. We encourage challengers to publish their code on GitHub and to provide the URL of their GitHub project in the submitted documents; however, open-sourcing the code is not required.

### Step 1. Organize an inference script

We require a script that automatically performs inference on the test set, i.e., outputs the predicted segmentation masks or mortality predictions for the test set. Competitors are not required to participate in both tasks, so the inference script can differ between tasks.

For task 1, the input folder will be mounted at *input* and the output folder at *output*. In addition, each output file must have the same file name as its input, e.g. the input file input/AIIB_001.nii.gz must have a matching output prediction named output/AIIB_001.nii.gz. An example inference script can be found [here](./predict_task1.py).

For task 2, the mortality predictions for all test files should be summarized in a single CSV file named output/mortality.csv. The input CT volumes will be mounted at *input_image*. An example inference script can be found [here](./predict_task2.py).

### Step 2. Containerize the application using Docker

We require competitors to use [Docker](https://www.docker.com/) to containerize their applications. Docker images are built from Dockerfiles, which list all the commands needed to assemble the image and run the application. An example Dockerfile can be found [here](./Dockerfile). Four basic components must be included in the Dockerfile:

1. Pulling a pre-existing image with an operating system and, if needed, CUDA (FROM instruction).
2. Installing additional dependencies (RUN instructions).
3. Transferring local files into your Docker image (COPY instructions).
4. Executing your algorithm (CMD and ENTRYPOINT instructions).

After finishing the Dockerfile, you can build your Docker image with:
```
docker build -f Dockerfile -t [teamname] .
```

### Step 3. Docker running commands
Your container will be run with the following command:
```
docker run --rm -v [input directory]:/input -v [output directory]:/output -it [teamname]
```

[input directory] will be the absolute path of our directory containing the test set, and [output directory] will be the absolute path of the prediction directory.
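
Before submitting, it can be worth running the command above on a local folder arranged like the test set and checking that every input case receives a prediction with a matching file name. The sketch below is one way to do that check; the directory paths are placeholders.

```
import os

input_dir = '/home/user/aiib_local_cases'    # placeholder: local folder mounted as /input
output_dir = '/home/user/aiib_local_preds'   # placeholder: local folder mounted as /output

# list input cases whose prediction file is missing or misnamed
missing = [f for f in os.listdir(input_dir)
           if f.endswith('.nii.gz') and not os.path.exists(os.path.join(output_dir, f))]
print('missing predictions:', missing)
```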
### Step 4. Docker container submission
The submission should be named "Teamname_TaskNo", e.g. "ImperialCollegeLondon_Task2", and sent to [aiib23.miccai@gmail.com](mailto:aiib23.miccai@gmail.com).

## References
This tutorial and the examples it contains are based on [crossMoDA](https://crossmoda.grand-challenge.org/submission/).
--------------------------------------------------------------------------------
/docker submission/predict_task1.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import SimpleITK as sitk
import os

input_dir = '/input/'
path_img = os.path.join(input_dir, '{}.nii.gz')
path_pred = '/output/{}.nii.gz'

# collect the case IDs from the mounted input folder
list_case = [k.split('.')[0] for k in os.listdir(input_dir)]

for case in list_case:
    img = sitk.ReadImage(path_img.format(case))

    # =======your logic here. Below we do binary thresholding as a demo=====
    # using SimpleITK to do binary thresholding at two intensity ranges
    pred_low = sitk.BinaryThreshold(img, lowerThreshold=400, upperThreshold=500)
    pred_high = sitk.BinaryThreshold(img, lowerThreshold=900, upperThreshold=1100)

    result = pred_low + 2 * pred_high
    # ======================================================================
    # please make sure the result is post-processed with largest connected
    # component extraction before saving

    # save the segmentation mask with the same file name as the input
    sitk.WriteImage(result, path_pred.format(case))
--------------------------------------------------------------------------------
/docker submission/predict_task2.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import SimpleITK as sitk
import os
import pandas as pd

input_image_dir = '/input_image/'
path_img = os.path.join(input_image_dir, '{}.nii.gz')
path_pred = '/output/mortality.csv'

result = []

# collect the case IDs from the mounted input folder
list_case = [k.split('.')[0] for k in os.listdir(input_image_dir)]

for case in list_case:
    img = sitk.ReadImage(path_img.format(case))

    # ==========your logic here. Below we use the mean voxel intensity as a dummy score==========
    img_numpy = sitk.GetArrayFromImage(img)
    pred = img_numpy.sum() / (img_numpy.shape[0] * img_numpy.shape[1] * img_numpy.shape[2])
    # ============================================================================================
    # record the result for this case
    result.append(pred)

# save all predictions in a single CSV with the required 'ID' and 'prediction' columns
result = pd.DataFrame({'ID': list_case, 'prediction': result})
result.to_csv(path_pred, index=False)
--------------------------------------------------------------------------------
/docker submission/requirements.txt:
--------------------------------------------------------------------------------
SimpleITK
pandas
--------------------------------------------------------------------------------
/filestation/aiib_data_consent_form.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/filestation/aiib_data_consent_form.pdf
--------------------------------------------------------------------------------
/filestation/example_submission.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/filestation/example_submission.zip
--------------------------------------------------------------------------------
/filestation/registration_form.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/filestation/registration_form.pdf
--------------------------------------------------------------------------------
/images/Registration form.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/images/Registration form.pdf
--------------------------------------------------------------------------------
/images/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/images/icon.png
--------------------------------------------------------------------------------
/images/main.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/images/main.png
--------------------------------------------------------------------------------
/images/organizers.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ayanglab/AIIB/165863ad05c55b5681b76cb222bf7211fd9742a5/images/organizers.png
--------------------------------------------------------------------------------
/scoring metrics/task1.py:
--------------------------------------------------------------------------------
import numpy as np
import skimage.measure as measure
from skimage.morphology import skeletonize_3d
from utils import get_parsing

EPSILON = 1e-32


def compute_binary_iou(y_true, y_pred):
    intersection = np.sum(y_true * y_pred) + EPSILON
    union = np.sum(y_true) + np.sum(y_pred) - intersection + EPSILON
    iou = intersection / union
    return iou


def evaluation_branch_metrics(fid, label, pred, refine=False):
    """
    :return: iou, detected length ratio (DLR), detected branch ratio (DBR),
             precision, and leakages, computed on the largest connected
             component of the prediction
    """
    # compute the branch-level parsing of the ground-truth tree
    parsing_gt = get_parsing(label, refine)
    # keep the largest connected component of the prediction as the airway
    cd, num = measure.label(pred, return_num=True, connectivity=1)
    volume = np.zeros([num])
    for k in range(num):
        volume[k] = ((cd == (k + 1)).astype(np.uint8)).sum()
    volume_sort = np.argsort(volume)
    large_cd = (cd == (volume_sort[-1] + 1)).astype(np.uint8)
    iou = compute_binary_iou(label, large_cd)
    # if the largest component barely overlaps the label, fall back to the
    # next-largest components until a plausible one is found
    flag = -1
    while iou < 0.1:
        print(fid, " failed case, require post-processing")
        flag -= 1
        large_cd = (cd == (volume_sort[flag] + 1)).astype(np.uint8)
        iou = compute_binary_iou(label, large_cd)
    skeleton = skeletonize_3d(label)
    skeleton = (skeleton > 0)
    skeleton = skeleton.astype('uint8')

    # detected length ratio: fraction of the ground-truth centerline covered
    DLR = (large_cd * skeleton).sum() / skeleton.sum()
    precision = (large_cd * label).sum() / large_cd.sum()
    leakages = ((large_cd - label) == 1).sum() / label.sum()

    # detected branch ratio: a branch counts as detected if >= 80% of its
    # centerline is covered by the prediction
    num_branch = parsing_gt.max()
    detected_num = 0
    for j in range(num_branch):
        branch_label = ((parsing_gt == (j + 1)).astype(np.uint8)) * skeleton
        if (large_cd * branch_label).sum() / branch_label.sum() >= 0.8:
            detected_num += 1
    DBR = detected_num / num_branch
    return iou, DLR, DBR, precision, leakages
--------------------------------------------------------------------------------
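
A minimal sketch of how `evaluation_branch_metrics` might be invoked on a single case, assuming the command is run from inside the scoring metrics folder; the file paths are hypothetical, and both volumes are assumed to be binary airway masks stored as NIfTI files.

```
import nibabel
import numpy as np
from task1 import evaluation_branch_metrics

# hypothetical paths to a ground-truth mask and a predicted mask
label = (nibabel.load('labels/AIIB_001.nii.gz').get_fdata() > 0).astype(np.uint8)
pred = (nibabel.load('preds/AIIB_001.nii.gz').get_fdata() > 0).astype(np.uint8)

iou, dlr, dbr, precision, leakages = evaluation_branch_metrics('AIIB_001', label, pred)
print(iou, dlr, dbr, precision, leakages)
```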
/scoring metrics/task2.py:
--------------------------------------------------------------------------------
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix


def classification_metrics(ground_truth, prediction):
    # ground_truth and prediction are expected to be binary (0/1) label
    # arrays of equal length
    assert len(ground_truth) == len(prediction)

    # Calculate accuracy
    accuracy = accuracy_score(ground_truth, prediction)

    # Calculate AUC
    auc = roc_auc_score(ground_truth, prediction)

    # Calculate confusion matrix
    tn, fp, fn, tp = confusion_matrix(ground_truth, prediction).ravel()

    # Calculate sensitivity (true positive rate)
    sensitivity = tp / (tp + fn)

    # Calculate specificity (true negative rate)
    specificity = tn / (tn + fp)

    # Calculate F1 score
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1_score = 2 * (precision * recall) / (precision + recall)

    return accuracy, auc, sensitivity, specificity, f1_score
--------------------------------------------------------------------------------
/scoring metrics/utils.py:
--------------------------------------------------------------------------------
import numpy as np
from scipy import ndimage
import skimage.measure as measure
from skimage.morphology import skeletonize_3d


def large_connected_domain(label):
    # keep the largest connected component and fill its holes
    cd, num = measure.label(label, return_num=True, connectivity=1)
    volume = np.zeros([num])
    for k in range(num):
        volume[k] = ((cd == (k + 1)).astype(np.uint8)).sum()
    volume_sort = np.argsort(volume)
    label = (cd == (volume_sort[-1] + 1)).astype(np.uint8)
    label = ndimage.binary_fill_holes(label)
    label = label.astype(np.uint8)
    return label


def skeleton_parsing(skeleton):
    # separate the skeleton into branches by removing bifurcation points
    neighbor_filter = ndimage.generate_binary_structure(3, 3)
    skeleton_filtered = ndimage.convolve(skeleton, neighbor_filter) * skeleton
    skeleton_parse = skeleton.copy()
    skeleton_parse[skeleton_filtered > 3] = 0
    con_filter = ndimage.generate_binary_structure(3, 3)
    cd, num = ndimage.label(skeleton_parse, structure=con_filter)
    # remove small branches
    for i in range(num):
        a = cd[cd == (i + 1)]
        if a.shape[0] < 5:
            skeleton_parse[cd == (i + 1)] = 0
    cd, num = ndimage.label(skeleton_parse, structure=con_filter)
    return skeleton_parse, cd, num


def tree_parsing_func(skeleton_parse, label, cd):
    # propagate the branch labels from the skeleton to the full airway mask
    edt, inds = ndimage.distance_transform_edt(1 - skeleton_parse, return_indices=True)
    tree_parsing = cd[inds[0, ...], inds[1, ...], inds[2, ...]] * label
    return tree_parsing


def loc_trachea(tree_parsing, num):
    # find the trachea: assumed to be the branch with the largest volume
    volume = np.zeros([num])
    for k in range(num):
        volume[k] = ((tree_parsing == (k + 1)).astype(np.uint8)).sum()
    volume_sort = np.argsort(volume)
    trachea = (volume_sort[-1] + 1)
    return trachea


def get_parsing(mask, refine=False):
    mask = (mask > 0).astype(np.uint8)
    mask = large_connected_domain(mask)
    skeleton = skeletonize_3d(mask)
    # ensure the skeleton is binary (skeletonize_3d may return values of 255)
    skeleton = (skeleton > 0).astype(np.uint8)
    skeleton_parse, cd, num = skeleton_parsing(skeleton)
    tree_parsing = tree_parsing_func(skeleton_parse, mask, cd)
    return tree_parsing
--------------------------------------------------------------------------------
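
For reference, a minimal sketch of how the scoring utilities could be exercised, assuming the commands are run from inside the scoring metrics folder; the mask path is hypothetical and is assumed to be a binary airway segmentation stored as NIfTI, and the task 2 labels below are dummy values.

```
import nibabel
import numpy as np
from utils import get_parsing
from task2 import classification_metrics

# branch-level parsing of a hypothetical binary airway mask
mask = (nibabel.load('labels/AIIB_001.nii.gz').get_fdata() > 0).astype(np.uint8)
parsing = get_parsing(mask)
print('number of branches:', parsing.max())

# classification metrics on dummy binary ground truth and predictions
gt = [0, 1, 1, 0, 1]
pred = [0, 1, 0, 0, 1]
print(classification_metrics(gt, pred))
```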