├── docs ├── coffee.png ├── basic_registration.pdf ├── data_augmentation.pdf ├── advanced_registration.pdf ├── images_and_resampling.pdf ├── simpleITKAtSPIE18Logo.png ├── spatial_transformations.pdf ├── simpleitkSetupAndSchedule.pptx ├── simpleitkHistoricalOverview.pptx ├── simpleitkFundamentalConcepts.pptx ├── simpleitkNotebookDevelopmentTesting.pptx └── index.html ├── figures └── ImageOriginAndSpacing.png ├── tests ├── requirements_testing.txt ├── additional_dictionary.txt └── test_notebooks.py ├── environment.yml ├── output └── .gitignore ├── README.md ├── data └── manifest.json ├── .circleci └── config.yml ├── setup.ipynb ├── registration_gui.py ├── utilities.py ├── LICENSE ├── downloaddata.py ├── 01_spatial_transformations.ipynb ├── 05_advanced_registration.ipynb ├── 02_images_and_resampling.ipynb └── 04_basic_registration.ipynb /docs/coffee.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/coffee.png -------------------------------------------------------------------------------- /docs/basic_registration.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/basic_registration.pdf -------------------------------------------------------------------------------- /docs/data_augmentation.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/data_augmentation.pdf -------------------------------------------------------------------------------- /docs/advanced_registration.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/advanced_registration.pdf -------------------------------------------------------------------------------- 
/docs/images_and_resampling.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/images_and_resampling.pdf -------------------------------------------------------------------------------- /docs/simpleITKAtSPIE18Logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/simpleITKAtSPIE18Logo.png -------------------------------------------------------------------------------- /docs/spatial_transformations.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/spatial_transformations.pdf -------------------------------------------------------------------------------- /docs/simpleitkSetupAndSchedule.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/simpleitkSetupAndSchedule.pptx -------------------------------------------------------------------------------- /figures/ImageOriginAndSpacing.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/figures/ImageOriginAndSpacing.png -------------------------------------------------------------------------------- /docs/simpleitkHistoricalOverview.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/simpleitkHistoricalOverview.pptx -------------------------------------------------------------------------------- /docs/simpleitkFundamentalConcepts.pptx: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/simpleitkFundamentalConcepts.pptx -------------------------------------------------------------------------------- /docs/simpleitkNotebookDevelopmentTesting.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SimpleITK/SPIE2018_COURSE/master/docs/simpleitkNotebookDevelopmentTesting.pptx -------------------------------------------------------------------------------- /tests/requirements_testing.txt: -------------------------------------------------------------------------------- 1 | pytest 2 | markdown 3 | lxml 4 | pyenchant 5 | SimpleITK>=1.0.0 6 | jupyter 7 | matplotlib 8 | ipywidgets 9 | numpy 10 | scipy 11 | pandas 12 | 13 | -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: sitkpySPIE18 2 | 3 | channels: 4 | - simpleitk 5 | 6 | dependencies: 7 | - python=3.6 8 | - jupyter 9 | - matplotlib 10 | - ipywidgets 11 | - numpy 12 | - scipy 13 | - pandas 14 | - SimpleITK>=1.0.0 15 | -------------------------------------------------------------------------------- /output/.gitignore: -------------------------------------------------------------------------------- 1 | # 2 | #Maintain an empty directory in the git repository, where all files in this 3 | #directory will always be ignored by git: 4 | #http://stackoverflow.com/questions/115983/how-can-i-add-an-empty-directory-to-a-git-repository 5 | # 6 | * 7 | # Except this file 8 | !.gitignore 9 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | > :warning: This repository has been archived. 
For up to date information, please see the [SimpleITK tutorial](https://simpleitk.org/TUTORIAL/) or the [SimpleITK notebooks repository](https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks). 2 | 3 | # SimpleITK: SPIE Medical Imaging 2018 Tutorial 4 | 5 | [![CircleCI](https://circleci.com/gh/SimpleITK/SPIE2018_COURSE/tree/master.svg?style=shield)](https://circleci.com/gh/SimpleITK/SPIE2018_COURSE/tree/master) 6 | 7 | This repository contains all of the material presented at the SPIE Medical 8 | Imaging 2018 conference, and the course's website. 9 | -------------------------------------------------------------------------------- /data/manifest.json: -------------------------------------------------------------------------------- 1 | { 2 | "SimpleITK.jpg" : { 3 | "md5sum" : "2685660c4f50c5929516127aed9e5b1a" 4 | }, 5 | "CIRS057A_MR_CT_DICOM/readme.txt" : { 6 | "md5sum" : "d92c97e6fe6520cb5b1a50b96eb9eb96", 7 | "archive" : "true" 8 | }, 9 | "training_001_ct.mha" : { 10 | "md5sum" : "51174490e46fd227b7ae0b83d8e9a143" 11 | }, 12 | "training_001_mr_T1.mha" : { 13 | "md5sum" : "04e76cef57966ac99ad4cee3b787b158" 14 | }, 15 | "POPI/meta/00-P.mhd" : { 16 | "md5sum" : "e53c1a5d001e98b78d06e71b33be41bf", 17 | "url" : "http://tux.creatis.insa-lyon.fr/~srit/POPI/Images/MetaImage/00-MetaImage.tar", 18 | "archive" : "true" 19 | }, 20 | "POPI/meta/70-P.mhd" : { 21 | "md5sum" : "404c82e14063b17f3d3fc9bf531e9d31", 22 | "url" : "http://tux.creatis.insa-lyon.fr/~srit/POPI/Images/MetaImage/70-MetaImage.tar", 23 | "archive" : "true" 24 | }, 25 | "POPI/landmarks/00-Landmarks.pts" : { 26 | "md5sum" : "0c3fc14bc7392cd40d65d5bcd1a8b937", 27 | "url" : "http://tux.creatis.insa-lyon.fr/~srit/POPI/Landmarks/00-Landmarks.pts" 28 | }, 29 | "POPI/landmarks/70-Landmarks.pts" : { 30 | "md5sum" : "7bfc9075bb39ccf7507babfd89c7a9bb", 31 | "url" : "http://tux.creatis.insa-lyon.fr/~srit/POPI/Landmarks/70-Landmarks.pts" 32 | }, 33 | "POPI/masks/00-air-body-lungs.mhd" : { 34 | "md5sum" : 
"94d4b96188196647c0af311eef129b50", 35 | "url" : "http://tux.creatis.insa-lyon.fr/~srit/POPI/Masks/00Mask-MetaImage.tar", 36 | "archive" : "true" 37 | }, 38 | "POPI/masks/70-air-body-lungs.mhd" : { 39 | "md5sum" : "f03684187568eea6709b7e8d49c7d6c1", 40 | "url" : "http://tux.creatis.insa-lyon.fr/~srit/POPI/Masks/70Mask-MetaImage.tar", 41 | "archive" : "true" 42 | } 43 | } 44 | -------------------------------------------------------------------------------- /.circleci/config.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | 3 | workflows: 4 | version: 2 5 | test: 6 | jobs: 7 | - test-3.6 8 | jobs: 9 | test-3.6: &test-template 10 | docker: 11 | - image: circleci/python:3.6 12 | environment: 13 | ExternalData_OBJECT_STORES: /home/circleci/.ExternalData 14 | SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT: 1 15 | steps: 16 | - checkout 17 | 18 | - restore_cache: 19 | keys: 20 | - simpleitk-spie2018-{{ checksum "data/manifest.json" }} 21 | - simpleitk-spie2018- #use previous cache when the manifest changes 22 | 23 | - run: 24 | name: Data setup (if cache is not empty then symbolic link to it) 25 | command: | 26 | mkdir -p "${ExternalData_OBJECT_STORES}" 27 | if [ ! 
-z "$(ls -A ${ExternalData_OBJECT_STORES})" ]; then 28 | cp -as /home/circleci/.ExternalData/* data 29 | fi 30 | python downloaddata.py "${ExternalData_OBJECT_STORES}" data/manifest.json 31 | 32 | - run: 33 | name: Setup of Python environment 34 | command: | 35 | sudo apt-get update; sudo apt-get install enchant 36 | sudo pip install virtualenv 37 | virtualenv ~/sitkpy --no-site-packages 38 | ~/sitkpy/bin/pip install -r tests/requirements_testing.txt 39 | ~/sitkpy/bin/jupyter nbextension enable --py --sys-prefix widgetsnbextension 40 | 41 | - run: 42 | name: Activate environment and run the test 43 | command: | 44 | source ~/sitkpy/bin/activate 45 | ~/sitkpy/bin/pytest -v --tb=short tests/test_notebooks.py::Test_notebooks::test_python_notebook 46 | 47 | - save_cache: 48 | key: simpleitk-spie2018-{{ checksum "data/manifest.json" }} 49 | paths: 50 | - /home/circleci/.ExternalData 51 | 52 | 53 | -------------------------------------------------------------------------------- /setup.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

SimpleITK Jupyter Notebooks: Biomedical Image Analysis in Python

\n", 8 | "\n", 9 | "## Newcomers to Jupyter notebooks:\n", 10 | "1. We use two types of cells, code and markdown.\n", 11 | "2. To run a code cell, select it (mouse or arrow key so that it is highlighted) and then press shift+enter which also moves focus to the next cell or ctrl+enter which doesn't.\n", 12 | "3. Closing the browser window does not close the Jupyter server. To close the server, go to the terminal where you ran it and press ctrl+c twice.\n", 13 | "\n", 14 | "For additional details see the [Jupyter Notebook Quick Start Guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/index.html).\n", 15 | "\n", 16 | "\n", 17 | "## Environment Setup for Course\n", 18 | "\n", 19 | "This notebook should be run prior to arriving at the course venue, as it requires network connectivity." 20 | ] 21 | }, 22 | { 23 | "cell_type": "markdown", 24 | "metadata": {}, 25 | "source": [ 26 | "First, lets check that you have the SimpleITK version which you expect." 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "execution_count": null, 32 | "metadata": { 33 | "collapsed": false 34 | }, 35 | "outputs": [], 36 | "source": [ 37 | "import SimpleITK as sitk\n", 38 | "from downloaddata import fetch_data, fetch_data_all\n", 39 | "\n", 40 | "from ipywidgets import interact\n", 41 | "\n", 42 | "print(sitk.Version())" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "Next, we check that the auxiliary program(s) are correctly installed in your environment.\n", 50 | "\n", 51 | "We expect that you have an external image viewer installed. The default viewer is Fiji. If you have another viewer (i.e. ITK-SNAP or 3D Slicer) you will need to set an environment variable to point to it. This is done using an environment variable which can also be set from within a notebook as shown below." 
52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": null, 57 | "metadata": { 58 | "collapsed": false, 59 | "simpleitk_error_allowed": "Exception thrown in SimpleITK Show:" 60 | }, 61 | "outputs": [], 62 | "source": [ 63 | "# Uncomment the line below to change the default external viewer to your viewer of choice and test that it works.\n", 64 | "#%env SITK_SHOW_COMMAND path_to_program/ITK-SNAP \n", 65 | "\n", 66 | "# Retrieve an image from the network, read it and display using the external viewer\n", 67 | "sitk.Show(sitk.ReadImage(fetch_data(\"SimpleITK.jpg\")))" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": {}, 73 | "source": [ 74 | "Now we check that the ipywidgets will display correctly. When you run the following cell you should see a slider.\n", 75 | "\n", 76 | "If you don't see a slider please shutdown the Jupyter server, at the Anaconda command line prompt press Control-c twice, and then run the following command:\n", 77 | "\n", 78 | "```jupyter nbextension enable --py --sys-prefix widgetsnbextension```" 79 | ] 80 | }, 81 | { 82 | "cell_type": "code", 83 | "execution_count": null, 84 | "metadata": { 85 | "collapsed": false 86 | }, 87 | "outputs": [], 88 | "source": [ 89 | "interact(lambda x: x, x=(0,10));" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "Finally, we download all of the data used in the notebooks in advance. This step is necessary as we will be running the notebooks without network connectivity.\n", 97 | "\n", 98 | "This may take a couple of minutes depending on your network." 99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": null, 104 | "metadata": { 105 | "collapsed": false 106 | }, 107 | "outputs": [], 108 | "source": [ 109 | "import os\n", 110 | "\n", 111 | "fetch_data_all('data', os.path.join('data','manifest.json'))" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "

Next »

" 119 | ] 120 | } 121 | ], 122 | "metadata": { 123 | "kernelspec": { 124 | "display_name": "Python 3", 125 | "language": "python", 126 | "name": "python3" 127 | }, 128 | "language_info": { 129 | "codemirror_mode": { 130 | "name": "ipython", 131 | "version": 3 132 | }, 133 | "file_extension": ".py", 134 | "mimetype": "text/x-python", 135 | "name": "python", 136 | "nbconvert_exporter": "python", 137 | "pygments_lexer": "ipython3", 138 | "version": "3.5.2" 139 | } 140 | }, 141 | "nbformat": 4, 142 | "nbformat_minor": 2 143 | } 144 | -------------------------------------------------------------------------------- /registration_gui.py: -------------------------------------------------------------------------------- 1 | import SimpleITK as sitk 2 | import matplotlib.pyplot as plt 3 | import numpy as np 4 | 5 | # 6 | # Set of methods used for displaying the registration metric during the optimization. 7 | # 8 | 9 | # Callback invoked when the StartEvent happens, sets up our new data. 10 | def start_plot(): 11 | global metric_values, multires_iterations, ax, fig 12 | fig, ax = plt.subplots(1,1, figsize=(8,4)) 13 | 14 | metric_values = [] 15 | multires_iterations = [] 16 | plt.show() 17 | 18 | 19 | # Callback invoked when the EndEvent happens, do cleanup of data and figure. 20 | def end_plot(): 21 | global metric_values, multires_iterations, ax, fig 22 | 23 | del metric_values 24 | del multires_iterations 25 | del ax 26 | del fig 27 | 28 | # Callback invoked when the IterationEvent happens, update our data and display new figure. 
29 | def plot_values(registration_method): 30 | global metric_values, multires_iterations, ax, fig 31 | 32 | metric_values.append(registration_method.GetMetricValue()) 33 | # Plot the similarity metric values 34 | ax.plot(metric_values, 'r') 35 | ax.plot(multires_iterations, [metric_values[index] for index in multires_iterations], 'b*') 36 | ax.set_xlabel('Iteration Number',fontsize=12) 37 | ax.set_ylabel('Metric Value',fontsize=12) 38 | fig.canvas.draw() 39 | 40 | # Callback invoked when the sitkMultiResolutionIterationEvent happens, update the index into the 41 | # metric_values list. 42 | def update_multires_iterations(): 43 | global metric_values, multires_iterations 44 | multires_iterations.append(len(metric_values)) 45 | 46 | 47 | def overlay_binary_segmentation_contours(image, mask, window_min, window_max): 48 | """ 49 | Given a 2D image and mask: 50 | a. resample the image and mask into isotropic grid (required for display). 51 | b. rescale the image intensities using the given window information. 52 | c. overlay the contours computed from the mask onto the image. 53 | """ 54 | # Resample the image (linear interpolation) and mask (nearest neighbor interpolation) into an isotropic grid, 55 | # required for display. 
56 | original_spacing = image.GetSpacing() 57 | original_size = image.GetSize() 58 | min_spacing = min(original_spacing) 59 | new_spacing = [min_spacing, min_spacing] 60 | new_size = [int(round(original_size[0]*(original_spacing[0]/min_spacing))), 61 | int(round(original_size[1]*(original_spacing[1]/min_spacing)))] 62 | resampled_img = sitk.Resample(image, new_size, sitk.Transform(), 63 | sitk.sitkLinear, image.GetOrigin(), 64 | new_spacing, image.GetDirection(), 0.0, 65 | image.GetPixelID()) 66 | resampled_msk = sitk.Resample(mask, new_size, sitk.Transform(), 67 | sitk.sitkNearestNeighbor, mask.GetOrigin(), 68 | new_spacing, mask.GetDirection(), 0.0, 69 | mask.GetPixelID()) 70 | 71 | # Create the overlay: cast the mask to expected label pixel type, and do the same for the image after 72 | # window-level, accounting for the high dynamic range of the CT. 73 | return sitk.LabelMapContourOverlay(sitk.Cast(resampled_msk, sitk.sitkLabelUInt8), 74 | sitk.Cast(sitk.IntensityWindowing(resampled_img, 75 | windowMinimum=window_min, 76 | windowMaximum=window_max), 77 | sitk.sitkUInt8), 78 | opacity = 1, 79 | contourThickness=[2,2]) 80 | 81 | 82 | def display_coronal_with_overlay(temporal_slice, coronal_slice, images, masks, label, window_min, window_max): 83 | """ 84 | Display a coronal slice from the 4D (3D+time) CT with a contour overlaid onto it. The contour is the edge of 85 | the specific label. 86 | """ 87 | img = images[temporal_slice][:,coronal_slice,:] 88 | msk = masks[temporal_slice][:,coronal_slice,:]==label 89 | 90 | overlay_img = overlay_binary_segmentation_contours(img, msk, window_min, window_max) 91 | # Flip the image so that corresponds to correct radiological view. 
92 | plt.imshow(np.flipud(sitk.GetArrayFromImage(overlay_img))) 93 | plt.axis('off') 94 | plt.show() 95 | 96 | 97 | def display_coronal_with_label_maps_overlay(coronal_slice, mask_index, image, masks, label, window_min, window_max): 98 | """ 99 | Display a coronal slice from a 3D CT with a contour overlaid onto it. The contour is the edge of 100 | the specific label from the specific mask. Function is used to display results of transforming a segmentation 101 | using registration. 102 | """ 103 | img = image[:,coronal_slice,:] 104 | msk = masks[mask_index][:,coronal_slice,:]==label 105 | 106 | overlay_img = overlay_binary_segmentation_contours(img, msk, window_min, window_max) 107 | # Flip the image so that corresponds to correct radiological view. 108 | plt.imshow(np.flipud(sitk.GetArrayFromImage(overlay_img))) 109 | plt.axis('off') 110 | plt.show() 111 | -------------------------------------------------------------------------------- /tests/additional_dictionary.txt: -------------------------------------------------------------------------------- 1 | ANTSNeighborhoodCorrelation 2 | API 3 | Acknowledgements 4 | AddTransform 5 | AffineTransform 6 | Args 7 | BFGS 8 | BMP 9 | BSpline 10 | BSplineTransform 11 | BSplineTransformInitializer 12 | BSplineTransformInitializerFilter 13 | Biancardi 14 | BinaryMorphologicalClosing 15 | BinaryMorphologicalOpening 16 | BinaryThreshold 17 | Broyden 18 | Bérard 19 | CBCT 20 | CIRS 21 | CREATIS 22 | CTs 23 | CenteredTransformInitializer 24 | CenteredTransformInitializerFilter 25 | Centre 26 | Clarysse 27 | ConfidenceConnected 28 | ConjugateGradientLineSearch 29 | ConnectedComponentImageFilter 30 | ConnectedThreshold 31 | DAPI 32 | DICOM 33 | DTransform 34 | Decubitus 35 | DemonsMetric 36 | DemonsRegistrationFilter 37 | DiffeomorphicDemonsRegistrationFilter 38 | DisplacementField 39 | DisplacementFieldTransform 40 | Docstring 41 | Docstrings 42 | Doxygen 43 | EachIteration 44 | EndEvent 45 | ExpandImageFilter 46 | FFD 47 | FFDL 48 
| FFDR 49 | FFF 50 | FFP 51 | FFS 52 | FRE 53 | FREs 54 | FastMarchingImageFilter 55 | FastSymmetricForcesDemonsRegistrationFilter 56 | FixedParameters 57 | Flickr 58 | FlipImageFilter 59 | GIF 60 | GaborSource 61 | GeodesicActiveContour 62 | GetArrayFromImage 63 | GetArrayViewFromImage 64 | GetCenter 65 | GetImageFromArray 66 | GetInverse 67 | GetMetaData 68 | GetPixel 69 | Goldfarb 70 | GradientDescent 71 | GradientDescentLineSearch 72 | HFDL 73 | HFDR 74 | HFP 75 | HFS 76 | HU 77 | HasMetaDataKey 78 | ICCR 79 | ID's 80 | IPython 81 | ITK 82 | ITK's 83 | ITKv 84 | ImageFileReader 85 | ImageJ 86 | ImageRegistrationMethod 87 | ImageSeriesReader 88 | InterpolatorEnum 89 | IterationEvent 90 | JPEG 91 | JPEGs 92 | Jaccard 93 | Jirapatnakul 94 | JointHistogram 95 | JointHistogramMutualInformation 96 | Jupyter 97 | LabelShapeStatisticsImageFilter 98 | LandmarkBasedTransformInitializer 99 | LaplacianSegmentation 100 | Lobb 101 | Léon 102 | MATLAB 103 | MATLAB's 104 | MacCallum 105 | MacOS 106 | Mahalanobis 107 | Malpica 108 | Marschner 109 | MattesMutualInformation 110 | MaximumEntropy 111 | MeanSquares 112 | MetaDataRead 113 | MetricEvaluate 114 | NeighborhoodConnected 115 | Nelder 116 | Otsu's 117 | PNG 118 | POPI 119 | Photogrammetric 120 | PixelIDValueEnum 121 | Popa 122 | Proc 123 | Pythonic 124 | RGB 125 | RIRE 126 | RLE 127 | ROIs 128 | ReadImage 129 | ReadTransform 130 | RegularStepGradientDescent 131 | Rueda 132 | SPIE 133 | Sarrut 134 | ScalarChanAndVese 135 | ScaleSkewVersor 136 | ScaleTransform 137 | ScaleVersor 138 | SetAngle 139 | SetCenter 140 | SetDirection 141 | SetFileName 142 | SetFixedInitialTransform 143 | SetInitialTransform 144 | SetInterpolator 145 | SetMean 146 | SetMetricAsDemons 147 | SetMetricAsX 148 | SetMovingInitialTransform 149 | SetOptimizerAsConjugateGradientLineSearch 150 | SetOptimizerAsX 151 | SetOptimizerScalesFromIndexShift 152 | SetOptimizerScalesFromJacobian 153 | SetOptimizerScalesFromPhysicalShift 154 | SetOrigin 155 | SetPixel 
156 | SetProbability 157 | SetScale 158 | SetShrinkFactorsPerLevel 159 | SetSmoothingSigmasPerLevel 160 | SetSpacing 161 | SetStandardDeviation 162 | SetStandardDeviation 163 | ShapeDetection 164 | SimpleITK 165 | SimpleITK's 166 | SimpleITKv 167 | SmoothingSigmasAreSpecifiedInPhysicalUnitsOn 168 | StartEvent 169 | StatisticsImageFilter 170 | Subsampling 171 | SymmetricForcesDemonsRegistrationFilter 172 | TRE 173 | TREs 174 | Thirion 175 | ThresholdSegmentation 176 | TransformContinuousIndexToPhysicalPoint 177 | TransformPoint 178 | TranslationTransform 179 | UInt 180 | VGG 181 | Vandemeulebroucke 182 | VectorConfidenceConnected 183 | VersorRigid 184 | VersorTransform 185 | VolView 186 | WriteImage 187 | XVth 188 | XX 189 | YY 190 | Yaniv 191 | ZYX 192 | al 193 | app 194 | argmin 195 | aug 196 | behaviour 197 | booktabs 198 | bspline 199 | ccc 200 | characterisation 201 | circ 202 | colour 203 | colourmap 204 | const 205 | convergenceMinimumValue 206 | convergenceWindowSize 207 | cryosectioning 208 | cthead 209 | ctrl 210 | dataframe 211 | defaultPixelValue 212 | dev 213 | disp 214 | displaystyle 215 | documentclass 216 | dropdown 217 | eikonal 218 | env 219 | estimateLearningRate 220 | euler 221 | fiducial's 222 | fiducials 223 | floordiv 224 | fp 225 | frac 226 | geq 227 | ggplot 228 | greyscale 229 | honours 230 | iff 231 | img 232 | init 233 | initialNeighborhoodRadius 234 | inline 235 | inlined 236 | interpolator 237 | interpolators 238 | ipywidgets 239 | jn 240 | jupyter 241 | labelForUndecidedPixels 242 | labelled 243 | labelling 244 | lapply 245 | ldots 246 | learningRate 247 | leq 248 | linspace 249 | mathbb 250 | mathbf 251 | matplotlib 252 | meshgrid 253 | mha 254 | multiscale 255 | myshow 256 | nD 257 | nbagg 258 | nbextension 259 | np 260 | num 261 | numberOfIterations 262 | numberOfSteps 263 | numpy 264 | offline 265 | optimizerScales 266 | originalControlPointDisplacements 267 | originalDisplacements 268 | otsu 269 | outputDirection 270 | 
outputOrigin 271 | outputPixelType 272 | outputSpacing 273 | outputfile 274 | overcomplete 275 | overfit 276 | overfitting 277 | param 278 | pixelated 279 | pre 280 | pretrained 281 | py 282 | pyplot 283 | recognised 284 | referenceImage 285 | resample 286 | resampled 287 | resamples 288 | resampling 289 | scipy 290 | segBlobs 291 | segChannel 292 | shrinkFactors 293 | sitk 294 | sitkAnnulus 295 | sitkBSpline 296 | sitkBall 297 | sitkBlackmanWindowedSinc 298 | sitkBox 299 | sitkComplexFloat 300 | sitkCosineWindowedSinc 301 | sitkCross 302 | sitkFloat 303 | sitkGaussian 304 | sitkHammingWindowedSinc 305 | sitkInt 306 | sitkLabelUInt 307 | sitkLanczosWindowedSinc 308 | sitkLinear 309 | sitkMultiResolutionIterationEvent 310 | sitkNearestNeighbor 311 | sitkUInt 312 | sitkUnknown 313 | sitkVectorFloat 314 | sitkVectorInt 315 | sitkVectorUInt 316 | sitkWelchWindowedSinc 317 | smoothingSigmas 318 | spatio 319 | spc 320 | stepLength 321 | subsampling 322 | supersampling 323 | sz 324 | textrm 325 | tfm 326 | ticklabels 327 | tidyr 328 | timeit 329 | toolbar 330 | tranforms 331 | transform's 332 | truediv 333 | ttest 334 | tx 335 | ty 336 | uint 337 | usepackage 338 | vdots 339 | versor 340 | vertices 341 | voxel 342 | voxel's 343 | voxels 344 | widgetsnbextension 345 | wikimedia 346 | xpixels 347 | xtable 348 | xx 349 | xxx 350 | xy 351 | xz 352 | ypixels 353 | yy 354 | yyy 355 | yz 356 | zz 357 | zzz 358 | -------------------------------------------------------------------------------- /docs/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | SimpleITK Course @ SPIE Medical Imaging 2018 4 | 5 | 6 | 7 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 55 | 64 | 65 | 66 | 67 | 204 | 205 |
68 | 69 |

70 | To contact us with problems or questions, please post to this repository's GitHub issue reporting system (requires a GitHub user account). 71 |

72 | 73 | 74 |

Overview

75 | 76 |

77 | SimpleITK is a simplified programming interface to the algorithms and data structures of the Insight Segmentation and Registration Toolkit (ITK). It provides bindings for multiple programming languages, including C++, Python, R, Java, C#, Lua, Ruby and Tcl. Combining SimpleITK's Python binding with the Jupyter notebook web application creates an environment that facilitates collaborative development of biomedical image analysis workflows. 78 |

79 | 80 |

81 | In this course, we will use a hands-on approach utilizing Python-based SimpleITK Jupyter notebooks to explore and experiment with various toolkit features. Participants will follow along using their personal laptops, enabling them to explore the effects of changes and settings not covered by the instructor. We will start by introducing the toolkit's two basic data elements, Images and Transformations. We will then explore the various features available in the toolkit's registration framework, including optimizer selection, the use of linear and deformable transformations, the embedded multi-resolution framework, self-calibrating optimizers, and the use of callbacks for registration progress monitoring. Finally, we will show how to use SimpleITK as a tool for image preparation and data augmentation for deep learning via spatial and intensity transformations. 82 |

83 | 84 |

85 | Beyond the notebooks used in this course, you can find the main SimpleITK notebooks repository on GitHub. 86 |

87 | 88 |

Organizers

89 |
  • Hans J. Johnson, University of Iowa.
  • Bradley C. Lowekamp, Medical Science & Computing and National Institutes of Health.
  • Ziv Yaniv, TAJ Technologies Inc. and National Institutes of Health.

Prerequisites

96 | 97 |

98 | If you encounter problems or have questions, please post using this repository's GitHub issue reporting system (requires a GitHub user account). 99 |

100 | 101 |

102 | In this course we will use the Anaconda Python distribution. Please follow the instructions below to set up the environment we will use during the course. All commands below are issued on the command line (Linux/Mac - terminal, Windows - Anaconda Prompt). 103 |

104 | 105 |
  1. Download and install the Fiji image viewer. This is the default image viewer used by SimpleITK:
     • On Windows: Install into your user directory (e.g. C:\Users\[your name]\Fiji.app).
     • On Linux: Install into ~/bin .
     • On Mac: Install into /Applications .
  2. Download and install the Anaconda Python 3.6 version for your operating system. We assume it is installed in a directory named anaconda3.
  3. Activate the root anaconda environment:
     • On Windows: open the Anaconda Prompt (found under the Anaconda3 start menu).
     • On Linux/Mac: on the command line: source path_to_anaconda3/bin/activate root
  4. Update the root anaconda environment and install the git version control system into it:
     conda update conda
     conda update anaconda
     conda install git
  5. Clone this repository:
     git clone https://github.com/SimpleITK/SPIE2018_COURSE.git
  6. Create the virtual environment containing all packages required for the course:
     conda env create -f SPIE2018_COURSE/environment.yml
  7. Activate the virtual environment:
     • On Windows: open the Anaconda Prompt (found under the Anaconda3 start menu) and run: activate sitkpySPIE18
     • On Linux/Mac: on the command line: source path_to_anaconda3/bin/activate sitkpySPIE18
  8. Enable the widgets notebook extension. On the command line:
     jupyter nbextension enable --py --sys-prefix widgetsnbextension
  9. Go over the setup notebook (requires internet connectivity). This notebook checks the environment setup and downloads all of the required data:
     cd SPIE2018_COURSE
     jupyter notebook setup.ipynb

Program

178 |
  • [1:30PM - 2:00PM] History and overview of SimpleITK. [ppt: setup and schedule, history, overview]
  • [2:00PM - 2:30PM] Spatial transformations. [notebook pdf]
  • [2:30PM - 3:00PM] Images and resampling. [notebook pdf]
  • [3:00PM - 3:15PM] Coffee break. [coffee]
  • [3:15PM - 3:45PM] Data augmentation for deep learning. [notebook pdf]
  • [3:45PM - 4:30PM] Basic registration. [notebook pdf]
  • [4:30PM - 5:15PM] Advanced registration. [notebook pdf]
  • [5:15PM - 5:30PM] Notebook development and testing, concluding remarks. [ppt: development]

Supplementary Material

190 | 191 |

192 | For those interested in reading more about the design of SimpleITK and the notebooks environment: 193 |

194 | 202 | 203 |
206 | 207 | 208 | -------------------------------------------------------------------------------- /utilities.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import matplotlib.pyplot as plt 3 | 4 | popi_body_label = 0 5 | popi_air_label = 1 6 | popi_lung_label = 2 7 | 8 | def read_POPI_points(file_name): 9 | """ 10 | Read the Point-validated Pixel-based Breathing Thorax Model (POPI) landmark points file. 11 | The file is an ASCII file with X Y Z coordinates in each line and the first line is a header. 12 | 13 | Args: 14 | file_name: full path to the file. 15 | Returns: 16 | (list(tuple)): List of points as tuples. 17 | """ 18 | with open(file_name,'r') as fp: 19 | lines = fp.readlines() 20 | points = [] 21 | # First line in the file is #X Y Z which we ignore. 22 | for line in lines[1:]: 23 | coordinates = line.split() 24 | if coordinates: 25 | points.append((float(coordinates[0]), float(coordinates[1]), float(coordinates[2]))) 26 | return points 27 | 28 | 29 | def point2str(point, precision=1): 30 | """ 31 | Format a point for printing, based on specified precision with trailing zeros. Uniform printing for vector-like data 32 | (tuple, numpy array, list). 33 | 34 | Args: 35 | point (vector-like): nD point with floating point coordinates. 36 | precision (int): Number of digits after the decimal point. 37 | Return: 38 | String representation of the given point "xx.xxx yy.yyy zz.zzz...". 39 | """ 40 | return ' '.join(format(c, '.{0}f'.format(precision)) for c in point) 41 | 42 | 43 | def uniform_random_points(bounds, num_points): 44 | """ 45 | Generate random (uniform within bounds) nD point cloud. Dimension is based on the number of pairs in the bounds input. 46 | 47 | Args: 48 | bounds (list(tuple-like)): list where each tuple defines the coordinate bounds. 49 | num_points (int): number of points to generate.
50 | 51 | Returns: 52 | list containing num_points numpy arrays whose coordinates are within the given bounds. 53 | """ 54 | internal_bounds = [sorted(b) for b in bounds] 55 | # Generate rows for each of the coordinates according to the given bounds, stack into an array, 56 | # and split into a list of points. 57 | mat = np.vstack([np.random.uniform(b[0], b[1], num_points) for b in internal_bounds]) 58 | return list(mat[:len(bounds)].T) 59 | 60 | 61 | def target_registration_errors(tx, point_list, reference_point_list, 62 | display_errors=False, min_err=None, max_err=None, figure_size=(8,6)): 63 | """ 64 | Distances between points transformed by the given transformation and their 65 | location in another coordinate system. When the points are only used to 66 | evaluate registration accuracy (not used in the registration) this is the 67 | Target Registration Error (TRE). 68 | 69 | Args: 70 | tx (SimpleITK.Transform): The transform we want to evaluate. 71 | point_list (list(tuple-like)): Points in fixed image 72 | coordinate system. 73 | reference_point_list (list(tuple-like)): Points in moving image 74 | coordinate system. 75 | display_errors (boolean): Display a 3D figure with the points from 76 | point_list color corresponding to the error. 77 | min_err, max_err (float): color range is linearly stretched between min_err 78 | and max_err. If these values are not given then 79 | the range of errors computed from the data is used. 80 | figure_size (tuple): Figure size in inches. 81 | 82 | Returns: 83 | (errors) [float]: list of TRE values.
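The `uniform_random_points` sampling scheme above can be sanity-checked in isolation; a self-contained sketch (the function is restated here since `utilities.py` may not be on the import path):

```python
import numpy as np

def uniform_random_points(bounds, num_points):
    # One row of uniform samples per coordinate axis, then transpose
    # the stacked matrix so each row becomes an nD point.
    internal_bounds = [sorted(b) for b in bounds]
    mat = np.vstack([np.random.uniform(b[0], b[1], num_points)
                     for b in internal_bounds])
    return list(mat.T)

points = uniform_random_points([(-10, 10), (-100, 100)], num_points=5)
# Every generated 2D point falls inside the requested bounds.
all_in_bounds = all(-10 <= p[0] <= 10 and -100 <= p[1] <= 100 for p in points)
```

Sorting each bounds pair first means callers may pass the bounds in either order.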
84 | """ 85 | transformed_point_list = [tx.TransformPoint(p) for p in point_list] 86 | 87 | errors = [np.linalg.norm(np.array(p_fixed) - np.array(p_moving)) 88 | for p_fixed,p_moving in zip(transformed_point_list, reference_point_list)] 89 | if display_errors: 90 | from mpl_toolkits.mplot3d import Axes3D 91 | import matplotlib.pyplot as plt 92 | import matplotlib 93 | fig = plt.figure(figsize=figure_size) 94 | ax = fig.add_subplot(111, projection='3d') 95 | if min_err is None: 96 | min_err = np.min(errors) 97 | if max_err is None: 98 | max_err = np.max(errors) 99 | 100 | collection = ax.scatter(list(np.array(point_list).T)[0], 101 | list(np.array(point_list).T)[1], 102 | list(np.array(point_list).T)[2], 103 | marker = 'o', 104 | c = errors, 105 | vmin = min_err, 106 | vmax = max_err, 107 | cmap = matplotlib.cm.hot, 108 | label = 'original points') 109 | plt.colorbar(collection, shrink=0.8) 110 | plt.title('registration errors in mm', x=0.7, y=1.05) 111 | ax.set_xlabel('X') 112 | ax.set_ylabel('Y') 113 | ax.set_zlabel('Z') 114 | plt.show() 115 | 116 | return errors 117 | 118 | 119 | 120 | def print_transformation_differences(tx1, tx2): 121 | """ 122 | Check whether two transformations are "equivalent" in an arbitrary spatial region 123 | either 3D or 2D, [x=(-10,10), y=(-100,100), z=(-1000,1000)]. This is just a sanity check, 124 | as we are just looking at the effect of the transformations on a random set of points in 125 | the region.
126 | """ 127 | if tx1.GetDimension()==2 and tx2.GetDimension()==2: 128 | bounds = [(-10,10),(-100,100)] 129 | elif tx1.GetDimension()==3 and tx2.GetDimension()==3: 130 | bounds = [(-10,10),(-100,100), (-1000,1000)] 131 | else: 132 | raise ValueError('Transformation dimensions mismatch, or unsupported transformation dimensionality') 133 | num_points = 10 134 | point_list = uniform_random_points(bounds, num_points) 135 | tx1_point_list = [ tx1.TransformPoint(p) for p in point_list] 136 | differences = target_registration_errors(tx2, point_list, tx1_point_list) 137 | print('Differences - min: {:.2f}, max: {:.2f}, mean: {:.2f}, std: {:.2f}'.format(np.min(differences), np.max(differences), np.mean(differences), np.std(differences))) 138 | 139 | 140 | def display_displacement_scaling_effect(s, original_x_mat, original_y_mat, tx, original_control_point_displacements): 141 | """ 142 | This function displays the effects of the deformable transformation on a grid of points by scaling the 143 | initial displacements (either of control points for BSpline or the deformation field itself). It does 144 | assume that all points are contained in the square defined by the corners (-2.5,-2.5) and (2.5,2.5).
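The error computation underlying `target_registration_errors` (and hence `print_transformation_differences`) is just a per-point Euclidean distance; a minimal sketch on toy data, with the SimpleITK transform step omitted:

```python
import numpy as np

# Toy data: each "moving" point is the matching "fixed" point shifted by 1mm in y.
fixed_points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
moving_points = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]

# Per-point Euclidean distance, the same expression used for the errors
# list in target_registration_errors.
errors = [np.linalg.norm(np.array(p1) - np.array(p2))
          for p1, p2 in zip(fixed_points, moving_points)]
mean_error = float(np.mean(errors))
```

With a pure 1mm translation between the two point sets, every per-point error (and therefore the mean) is exactly 1.0.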
145 | """ 146 | if tx.GetDimension() !=2: 147 | raise ValueError('display_displacement_scaling_effect only works in 2D') 148 | 149 | plt.scatter(original_x_mat, 150 | original_y_mat, 151 | marker='o', 152 | color='blue', label='original points') 153 | pointsX = [] 154 | pointsY = [] 155 | tx.SetParameters(s*original_control_point_displacements) 156 | 157 | for index, value in np.ndenumerate(original_x_mat): 158 | px,py = tx.TransformPoint((value, original_y_mat[index])) 159 | pointsX.append(px) 160 | pointsY.append(py) 161 | 162 | plt.scatter(pointsX, 163 | pointsY, 164 | marker='^', 165 | color='red', label='transformed points') 166 | plt.legend(loc=(0.25,1.01)) 167 | plt.xlim((-2.5,2.5)) 168 | plt.ylim((-2.5,2.5)) 169 | 170 | 171 | def parameter_space_regular_grid_sampling(*transformation_parameters): 172 | ''' 173 | Create a list representing a regular sampling of the parameter space. 174 | Args: 175 | *transformation_parameters : two or more numpy ndarrays representing parameter values. The order 176 | of the arrays should match the ordering of the SimpleITK transformation 177 | parameterization (e.g. Similarity2DTransform: scaling, rotation, tx, ty) 178 | Return: 179 | List of lists representing the regular grid sampling. 180 | 181 | Examples: 182 | # parameterization for 2D translation transform (tx,ty): [[1.0,1.0], [1.5,1.0], [2.0,1.0]] 183 | >>> parameter_space_regular_grid_sampling(np.linspace(1.0,2.0,3), np.linspace(1.0,1.0,1)) 184 | ''' 185 | return [[p.item() for p in parameter_values] 186 | for parameter_values in np.nditer(np.meshgrid(*transformation_parameters))] 187 | 188 | 189 | def similarity3D_parameter_space_regular_sampling(thetaX, thetaY, thetaZ, tx, ty, tz, scale): 190 | ''' 191 | Create a list representing a regular sampling of the 3D similarity transformation parameter space. As the 192 | SimpleITK rotation parameterization uses the vector portion of a versor we don't have an 193 | intuitive way of specifying rotations.
We therefore use the ZYX Euler angle parametrization and convert to 194 | versor. 195 | Args: 196 | thetaX, thetaY, thetaZ: numpy ndarrays with the Euler angle values to use. 197 | tx, ty, tz: numpy ndarrays with the translation values to use. 198 | scale: numpy array with the scale values to use. 199 | Return: 200 | List of lists representing the parameter space sampling (vx,vy,vz,tx,ty,tz,s). 201 | ''' 202 | return [list(eul2quat(parameter_values[0],parameter_values[1], parameter_values[2])) + 203 | [p.item() for p in parameter_values[3:]] for parameter_values in np.nditer(np.meshgrid(thetaX, thetaY, thetaZ, tx, ty, tz, scale))] 204 | 205 | 206 | def eul2quat(ax, ay, az, atol=1e-8): 207 | ''' 208 | Translate between Euler angle (ZYX) order and quaternion representation of a rotation. 209 | Args: 210 | ax: X rotation angle in radians. 211 | ay: Y rotation angle in radians. 212 | az: Z rotation angle in radians. 213 | atol: tolerance used for stable quaternion computation (qs==0 within this tolerance). 214 | Return: 215 | Numpy array with three entries representing the vectorial component of the quaternion. 216 | 217 | ''' 218 | # Create rotation matrix using ZYX Euler angles and then compute quaternion using entries.
219 | cx = np.cos(ax) 220 | cy = np.cos(ay) 221 | cz = np.cos(az) 222 | sx = np.sin(ax) 223 | sy = np.sin(ay) 224 | sz = np.sin(az) 225 | r=np.zeros((3,3)) 226 | r[0,0] = cz*cy 227 | r[0,1] = cz*sy*sx - sz*cx 228 | r[0,2] = cz*sy*cx+sz*sx 229 | 230 | r[1,0] = sz*cy 231 | r[1,1] = sz*sy*sx + cz*cx 232 | r[1,2] = sz*sy*cx - cz*sx 233 | 234 | r[2,0] = -sy 235 | r[2,1] = cy*sx 236 | r[2,2] = cy*cx 237 | 238 | # Compute quaternion: 239 | qs = 0.5*np.sqrt(r[0,0] + r[1,1] + r[2,2] + 1) 240 | qv = np.zeros(3) 241 | # If the scalar component of the quaternion is close to zero, we 242 | # compute the vector part using a numerically stable approach 243 | if np.isclose(qs, 0.0, atol=atol): 244 | i = np.argmax([r[0,0], r[1,1], r[2,2]]) 245 | j = (i+1)%3 246 | k = (j+1)%3 247 | w = np.sqrt(r[i,i] - r[j,j] - r[k,k] + 1) 248 | qv[i] = 0.5*w 249 | qv[j] = (r[i,j] + r[j,i])/(2*w) 250 | qv[k] = (r[i,k] + r[k,i])/(2*w) 251 | else: 252 | denom = 4*qs 253 | qv[0] = (r[2,1] - r[1,2])/denom 254 | qv[1] = (r[0,2] - r[2,0])/denom 255 | qv[2] = (r[1,0] - r[0,1])/denom 256 | return qv 257 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity.
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /tests/test_notebooks.py: -------------------------------------------------------------------------------- 1 | import os 2 | import subprocess 3 | import tempfile 4 | import nbformat 5 | import pytest 6 | import markdown 7 | import re 8 | 9 | from enchant.checker import SpellChecker 10 | from enchant.tokenize import Filter, EmailFilter, URLFilter 11 | from enchant import DictWithPWL 12 | 13 | from lxml.html import document_fromstring, etree 14 | try: 15 | # Python 3 16 | from urllib.request import urlopen, URLError 17 | except ImportError: 18 | from urllib2 import urlopen, URLError 19 | 20 | 21 | 22 | """ 23 | run all tests: 24 | pytest -v --tb=short 25 | 26 | run python tests: 27 | pytest -v --tb=short tests/test_notebooks.py::Test_notebooks::test_python_notebook 28 | 29 | run specific Python test: 30 | pytest -v --tb=short tests/test_notebooks.py::Test_notebooks::test_python_notebook[setup.ipynb] 31 | 32 | -s : disable all capturing of output. 
33 | """ 34 | 35 | class Test_notebooks(object): 36 | """ 37 | Testing of SimpleITK Jupyter notebooks: 38 | 1. Static analysis: 39 | Check that notebooks do not contain output (sanity check as these should 40 | not have been pushed to the repository). 41 | Check that all the URLs in the markdown cells are not broken. 42 | 2. Dynamic analysis: 43 | Run the notebook and check for errors. In some notebooks we 44 | intentionally cause errors to illustrate certain features of the toolkit. 45 | All code cells that intentionally generate an error are expected to be 46 | marked using the cell's metadata. In the notebook go to 47 | "View->Cell Toolbar->Edit Metadata" and add the following json entry: 48 | 49 | "simpleitk_error_expected": simpleitk_error_message 50 | 51 | with the appropriate "simpleitk_error_message" text. 52 | Cells where an error is allowed, but not necessarily expected should be 53 | marked with the following json: 54 | 55 | "simpleitk_error_allowed": simpleitk_error_message 56 | 57 | The simpleitk_error_message is a substring of the generated error 58 | message, such as 'Exception thrown in SimpleITK Show:' 59 | 60 | To test notebooks that use too much memory (exceed the 4GB allocated for the testing 61 | machine): 62 | 1. Create an environment variable named SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT 63 | 2. Import the setup_for_testing.py at the top of the notebook. This module will 64 | decorate the sitk.ReadImage so that after reading the initial image it is 65 | resampled by a factor of 4 in each dimension. 66 | 67 | Adding a test: 68 | Simply add the new notebook file name to the list of files decorating the test_python_notebook 69 | or test_r_notebook functions. DON'T FORGET THE COMMA.
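The expected/allowed error markup described above boils down to a substring test against the generated error message; a minimal sketch (the helper name is illustrative, not part of the actual test suite):

```python
# Sketch of the markup-matching rule: the markup value stored in the cell
# metadata must be a substring of the error message the cell produced.
def error_is_expected(cell_metadata, error_message,
                      markup='simpleitk_error_expected'):
    return markup in cell_metadata and cell_metadata[markup] in error_message

meta = {'simpleitk_error_expected': 'Exception thrown in SimpleITK Show:'}
matched = error_is_expected(
    meta, 'Exception thrown in SimpleITK Show: viewer process failed')
# A cell with no markup never matches, so its errors count as unexpected.
not_matched = error_is_expected({}, 'RuntimeError: anything')
```

Using a substring rather than an exact match keeps the tests robust to incidental changes in the full error text.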
70 | """ 71 | 72 | _allowed_error_markup = 'simpleitk_error_allowed' 73 | _expected_error_markup = 'simpleitk_error_expected' 74 | 75 | @pytest.mark.parametrize('notebook_file_name', 76 | ['setup.ipynb', 77 | '01_spatial_transformations.ipynb', 78 | '02_images_and_resampling.ipynb', 79 | '04_basic_registration.ipynb', 80 | '05_advanced_registration.ipynb', 81 | '03_data_augmentation.ipynb']) 82 | def test_python_notebook(self, notebook_file_name): 83 | self.evaluate_notebook(self.absolute_path_python(notebook_file_name), 'python') 84 | 85 | 86 | def evaluate_notebook(self, path, kernel_name): 87 | """ 88 | Perform static and dynamic analysis of the notebook. 89 | Execute a notebook via nbconvert and print the results of the test (errors etc.) 90 | Args: 91 | path (string): Name of notebook to run. 92 | kernel_name (string): Which jupyter kernel to use to run the test. 93 | Relevant values are: 'python2', 'python3', 'ir'. 94 | """ 95 | 96 | dir_name, file_name = os.path.split(path) 97 | if dir_name: 98 | os.chdir(dir_name) 99 | 100 | print('-------- begin (kernel {0}) {1} --------'.format(kernel_name,file_name)) 101 | no_static_errors = self.static_analysis(path) 102 | no_dynamic_errors = self.dynamic_analysis(path, kernel_name) 103 | print('-------- end (kernel {0}) {1} --------'.format(kernel_name,file_name)) 104 | assert(no_static_errors and no_dynamic_errors) 105 | 106 | 107 | def static_analysis(self, path): 108 | """ 109 | Perform static analysis of the notebook. 110 | Read the notebook and check that there is no output and that the links 111 | in the markdown cells are not broken. 112 | Args: 113 | path (string): Name of notebook. 114 | Return: 115 | boolean: True if static analysis succeeded, otherwise False. 116 | """ 117 | 118 | nb = nbformat.read(path, nbformat.current_nbformat) 119 | 120 | ####################### 121 | # Check that the notebook does not contain output from code cells 122 | # (should not be in the repository, but well...
123 | ####################### 124 | no_unexpected_output = True 125 | 126 | # Check that the cell dictionary has an 'outputs' key and that it is 127 | # empty, relies on Python using short circuit evaluation so that we 128 | # don't get KeyError when retrieving the 'outputs' entry. 129 | cells_with_output = [c.source for c in nb.cells if 'outputs' in c and c.outputs] 130 | if cells_with_output: 131 | no_unexpected_output = False 132 | print('Cells with unexpected output:\n_____________________________') 133 | for cell in cells_with_output: 134 | print(cell+'\n---') 135 | else: 136 | print('no unexpected output') 137 | 138 | ####################### 139 | # Check that all the links in the markdown cells are valid/accessible. 140 | ####################### 141 | no_broken_links = True 142 | 143 | cells_and_broken_links = [] 144 | for c in nb.cells: 145 | if c.cell_type == 'markdown': 146 | html_tree = document_fromstring(markdown.markdown(c.source)) 147 | broken_links = [] 148 | #iterlinks() returns tuples of the form (element, attribute, link, pos) 149 | for document_link in html_tree.iterlinks(): 150 | try: 151 | if 'http' not in document_link[2]: # Local file. 152 | url = 'file://' + os.path.abspath(document_link[2]) 153 | else: # Remote file. 154 | url = document_link[2] 155 | urlopen(url) 156 | except URLError: 157 | broken_links.append(url) 158 | if broken_links: 159 | cells_and_broken_links.append((broken_links,c.source)) 160 | if cells_and_broken_links: 161 | no_broken_links = False 162 | print('Cells with broken links:\n________________________') 163 | for links, cell in cells_and_broken_links: 164 | print(cell+'\n') 165 | print('\tBroken links:') 166 | print('\t'+'\n\t'.join(links)+'\n---') 167 | else: 168 | print('no broken links') 169 | 170 | ####################### 171 | # Spell check all markdown cells and comments in code cells using the pyenchant spell checker. 
172 | ####################### 173 | no_spelling_mistakes = True 174 | simpleitk_notebooks_dictionary = DictWithPWL('en_US', os.path.join(os.path.dirname(os.path.abspath(__file__)), 175 | 'additional_dictionary.txt')) 176 | spell_checker = SpellChecker(simpleitk_notebooks_dictionary, filters = [EmailFilter, URLFilter]) 177 | cells_and_spelling_mistakes = [] 178 | for c in nb.cells: 179 | spelling_mistakes = [] 180 | if c.cell_type == 'markdown': 181 | # Get the text as a string from the html without the markup which is replaced by space. 182 | spell_checker.set_text(' '.join(etree.XPath('//text()')(document_fromstring(markdown.markdown(c.source))))) 183 | elif c.cell_type == 'code': 184 | # Get all the comments and concatenate them into a single string separated by newlines. 185 | comment_lines = re.findall('#+.*',c.source) 186 | spell_checker.set_text('\n'.join(comment_lines)) 187 | for error in spell_checker: 188 | error_message = 'error: '+ '\'' + error.word +'\', ' + 'suggestions: ' + str(spell_checker.suggest()) 189 | spelling_mistakes.append(error_message) 190 | if spelling_mistakes: 191 | cells_and_spelling_mistakes.append((spelling_mistakes, c.source)) 192 | if cells_and_spelling_mistakes: 193 | no_spelling_mistakes = False 194 | print('Cells with spelling mistakes:\n________________________') 195 | for misspelled_words, cell in cells_and_spelling_mistakes: 196 | print(cell+'\n') 197 | print('\tMisspelled words and suggestions:') 198 | print('\t'+'\n\t'.join(misspelled_words)+'\n---') 199 | else: 200 | print('no spelling mistakes') 201 | 202 | return(no_unexpected_output and no_broken_links and no_spelling_mistakes) 203 | 204 | 205 | def dynamic_analysis(self, path, kernel_name): 206 | """ 207 | Perform dynamic analysis of the notebook. 208 | Execute a notebook via nbconvert and print the results of the test 209 | (errors etc.) 210 | Args: 211 | path (string): Name of notebook to run. 212 | kernel_name (string): Which jupyter kernel to use to run the test. 
213 | Relevant values are:'python', 'ir'. 214 | Return: 215 | boolean: True if dynamic analysis succeeded, otherwise False. 216 | """ 217 | 218 | # Execute the notebook and allow errors (run all cells), output is 219 | # written to a temporary file which is automatically deleted. 220 | with tempfile.NamedTemporaryFile(suffix='.ipynb') as fout: 221 | args = ['jupyter', 'nbconvert', 222 | '--to', 'notebook', 223 | '--execute', 224 | '--ExecutePreprocessor.kernel_name='+kernel_name, 225 | '--ExecutePreprocessor.allow_errors=True', 226 | '--ExecutePreprocessor.timeout=600', # seconds till timeout 227 | '--output', fout.name, path] 228 | subprocess.check_call(args) 229 | nb = nbformat.read(fout.name, nbformat.current_nbformat) 230 | 231 | # Get all of the unexpected errors (logic: cell has output with an error 232 | # and no error is expected or the allowed/expected error is not the one which 233 | # was generated.) 234 | unexpected_errors = [(output.evalue, c.source) for c in nb.cells \ 235 | if 'outputs' in c for output in c.outputs \ 236 | if (output.output_type=='error') and \ 237 | (((Test_notebooks._allowed_error_markup not in c.metadata) and (Test_notebooks._expected_error_markup not in c.metadata))or \ 238 | ((Test_notebooks._allowed_error_markup in c.metadata) and (c.metadata[Test_notebooks._allowed_error_markup] not in output.evalue)) or \ 239 | ((Test_notebooks._expected_error_markup in c.metadata) and (c.metadata[Test_notebooks._expected_error_markup] not in output.evalue)))] 240 | 241 | no_unexpected_errors = True 242 | if unexpected_errors: 243 | no_unexpected_errors = False 244 | print('Cells with unexpected errors:\n_____________________________') 245 | for e, src in unexpected_errors: 246 | print(src) 247 | print('unexpected error: '+e) 248 | else: 249 | print('no unexpected errors') 250 | 251 | # Get all of the missing expected errors (logic: cell has output 252 | # but expected error was not generated.) 
253 | missing_expected_errors = [] 254 | for c in nb.cells: 255 | if Test_notebooks._expected_error_markup in c.metadata: 256 | missing_error = True 257 | if 'outputs' in c: 258 | for output in c.outputs: 259 | if (output.output_type=='error') and (c.metadata[Test_notebooks._expected_error_markup] in output.evalue): 260 | missing_error = False 261 | if missing_error: 262 | missing_expected_errors.append((c.metadata[Test_notebooks._expected_error_markup],c.source)) 263 | 264 | no_missing_expected_errors = True 265 | if missing_expected_errors: 266 | no_missing_expected_errors = False 267 | print('\nCells with missing expected errors:\n___________________________________') 268 | for e, src in missing_expected_errors: 269 | print(src) 270 | print('missing expected error: '+e) 271 | else: 272 | print('no missing expected errors') 273 | 274 | return(no_unexpected_errors and no_missing_expected_errors) 275 | 276 | 277 | def absolute_path_python(self, notebook_file_name): 278 | return os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', notebook_file_name)) 279 | 280 | def absolute_path_r(self, notebook_file_name): 281 | return os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '../R', notebook_file_name)) 282 | -------------------------------------------------------------------------------- /downloaddata.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | """ 4 | Since we do not want to store large binary data files in our Git repository, 5 | we use fetch_data_all to retrieve them from a network resource. 6 | 7 | The data we download is described in a json file. The file format is a dictionary 8 | of dictionaries. The top level key is the file name. The returned dictionary 9 | contains an md5 checksum and possibly a url and a boolean flag indicating that 10 | the file is part of an archive. The md5 checksum is mandatory.
11 | When the optional url is given, we attempt to download from that url, otherwise 12 | we attempt to download from the list of MIDAS servers returned by the 13 | get_midas_servers() function. Files that are contained in archives are 14 | identified by the archive flag. 15 | 16 | Example json file contents: 17 | 18 | { 19 | "SimpleITK.jpg": { 20 | "md5sum": "2685660c4f50c5929516127aed9e5b1a" 21 | }, 22 | "POPI/meta/00.mhd" : { 23 | "md5sum": "3bfc3c92e18a8e6e8494482c44654fd3", 24 | "url": "http://tux.creatis.insa-lyon.fr/~srit/POPI/Images/MetaImage/10-MetaImage.tar" 25 | }, 26 | "CIRS057A_MR_CT_DICOM/readme.txt" : { 27 | "md5sum" : "d92c97e6fe6520cb5b1a50b96eb9eb96", 28 | "archive" : "true" 29 | } 30 | } 31 | 32 | Notes: 33 | 1. The file we download can be inside an archive. In this case, the md5 34 | checksum is that of the archive. 35 | 36 | 2. For the md5 verification to work we need to store archives on MIDAS and cannot 37 | use its on-the-fly archive download mechanism (this mechanism allows users 38 | to download "directories/communities" as a single zip archive). The issue is that 39 | every time the archive is created its md5 changes. It is likely MIDAS is also 40 | encoding the archive's modification/creation time as part of the md5. 41 | 42 | Another issue is that when downloading from this type of url 43 | (e.g. http://midas3.kitware.com/midas/download/folder/11610/ipythonNotebookData.zip) 44 | the returned data does not have a "Content-Length" field in the header. The 45 | current implementation will throw an exception. 
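The manifest format described above can be exercised with a small standalone sketch — compute a file's md5 in chunks and compare it to the recorded checksum. The file name, its contents, and the manifest entry here are hypothetical, constructed only for illustration:

```python
import hashlib
import json
import tempfile


def md5_of_file(path, chunk_size=65536):
    """Compute the md5 checksum of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, 'rb') as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()


# Hypothetical manifest entry, in the dictionary-of-dictionaries format
# described above (md5sum is mandatory, url and archive are optional).
manifest = {"example.bin": {"md5sum": hashlib.md5(b"example data").hexdigest()}}

# Write the matching "downloaded" file and verify it against the manifest.
with tempfile.NamedTemporaryFile(delete=False) as fp:
    fp.write(b"example data")

print(md5_of_file(fp.name) == manifest["example.bin"]["md5sum"])  # True
```

Note that the checksum is computed over the raw bytes, which is why an on-the-fly archive (whose bytes change on every creation) cannot be verified this way.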
46 | """ 47 | 48 | import hashlib 49 | import sys 50 | import os 51 | import json 52 | 53 | import errno 54 | import warnings 55 | 56 | # http://stackoverflow.com/questions/2028517/python-urllib2-progress-hook 57 | 58 | def url_download_report(bytes_so_far, url_download_size, total_size): 59 | percent = float(bytes_so_far) / total_size 60 | percent = round(percent * 100, 2) 61 | if bytes_so_far > url_download_size: 62 | # Note that the carriage return is at the beginning of the 63 | # string and not the end. This accommodates usage in 64 | # IPython notebooks. Otherwise the string is not 65 | # displayed in the output. 66 | sys.stdout.write("\rDownloaded %d of %d bytes (%0.2f%%)" % 67 | (bytes_so_far, total_size, percent)) 68 | sys.stdout.flush() 69 | if bytes_so_far >= total_size: 70 | sys.stdout.write("\rDownloaded %d of %d bytes (%0.2f%%)\n" % 71 | (bytes_so_far, total_size, percent)) 72 | sys.stdout.flush() 73 | 74 | 75 | def url_download_read(url, outputfile, url_download_size=8192 * 2, report_hook=None): 76 | # Use urllib2 to download the data. The Requests package, highly 77 | # recommended for this task, doesn't support the file scheme so we opted 78 | # for urllib2 which does. 79 | 80 | try: 81 | # Python 3 (the error classes live in urllib.error, not urllib.request) 82 | from urllib.request import urlopen; from urllib.error import URLError, HTTPError 83 | except ImportError: 84 | from urllib2 import urlopen, URLError, HTTPError 85 | from xml.dom import minidom 86 | 87 | # Open the url 88 | try: 89 | url_response = urlopen(url) 90 | except HTTPError as e: 91 | return "HTTP Error: {0} {1}\n".format(e.code, url) 92 | except URLError as e: 93 | return "URL Error: {0} {1}\n".format(e.reason, url) 94 | # MIDAS is a service and therefore will not generate the expected URLError 95 | # when given a nonexistent url. It does return an error message in xml. 96 | # When the response is xml then we have an error, we read the whole message 97 | # and return the 'msg' attribute associated with the 'err' tag.
98 | # The URLError above is not superfluous as it will occur when the url 99 | # refers to a nonexistent file ('file://non_existent_file_name') or a url 100 | # which is not a service ('http://non_existent_address'). 101 | try: 102 | # Python 3 103 | content_type = url_response.info().get("Content-Type") 104 | except AttributeError: 105 | content_type = url_response.info().getheader("Content-Type") 106 | # MIDAS error message in json format 107 | if content_type == "text/html; charset=UTF-8": 108 | doc = json.loads(url_response.read().decode("utf-8")) 109 | if doc['stat'] == 'fail': 110 | return doc['message'] + url 111 | # MIDAS error message in xml format 112 | if content_type == "text/xml": 113 | doc = minidom.parseString(url_response.read()) 114 | if doc.getElementsByTagName("err")[0]: 115 | return doc.getElementsByTagName("err")[0].getAttribute("msg") + ': ' + url 116 | # We download all content types - the assumption is that the md5sum ensures 117 | # that what we received is the expected data.
118 | try: 119 | # Python 3 120 | content_length = url_response.info().get("Content-Length") 121 | except AttributeError: 122 | content_length = url_response.info().getheader("Content-Length") 123 | total_size = content_length.strip() 124 | total_size = int(total_size) 125 | bytes_so_far = 0 126 | with open(outputfile, "wb") as local_file: 127 | while 1: 128 | try: 129 | url_download = url_response.read(url_download_size) 130 | bytes_so_far += len(url_download) 131 | if not url_download: 132 | break 133 | local_file.write(url_download) 134 | # handle errors 135 | except HTTPError as e: 136 | return "HTTP Error: {0} {1}\n".format(e.code, url) 137 | except URLError as e: 138 | return "URL Error: {0} {1}\n".format(e.reason, url) 139 | if report_hook: 140 | report_hook(bytes_so_far, url_download_size, total_size) 141 | return "Downloaded Successfully" 142 | 143 | # http://stackoverflow.com/questions/600268/mkdir-p-functionality-in-python?rq=1 144 | def mkdir_p(path): 145 | try: 146 | os.makedirs(path) 147 | except OSError as exc: # Python >2.5 148 | if exc.errno == errno.EEXIST and os.path.isdir(path): 149 | pass 150 | else: 151 | raise 152 | 153 | #http://stackoverflow.com/questions/2536307/decorators-in-the-python-standard-lib-deprecated-specifically 154 | def deprecated(func): 155 | """This is a decorator which can be used to mark functions 156 | as deprecated. 
It will result in a warning being emitted 157 | when the function is used.""" 158 | 159 | def new_func(*args, **kwargs): 160 | warnings.simplefilter('always', DeprecationWarning) # turn off filter 161 | warnings.warn("Call to deprecated function {}.".format(func.__name__), category=DeprecationWarning, stacklevel=2) 162 | warnings.simplefilter('default', DeprecationWarning) # reset filter 163 | return func(*args, **kwargs) 164 | 165 | new_func.__name__ = func.__name__ 166 | new_func.__doc__ = func.__doc__ 167 | new_func.__dict__.update(func.__dict__) 168 | return new_func 169 | 170 | 171 | 172 | def get_midas_servers(): 173 | import os 174 | midas_servers = list() 175 | if 'ExternalData_OBJECT_STORES' in os.environ.keys(): 176 | local_object_stores = os.environ['ExternalData_OBJECT_STORES'] 177 | for local_object_store in local_object_stores.split(";"): 178 | midas_servers.append( "file://{0}/MD5/%(hash)".format(local_object_store) ) 179 | midas_servers.extend( [ 180 | # Data published by MIDAS 181 | "http://midas3.kitware.com/midas/api/rest?method=midas.bitstream.download&checksum=%(hash)&algorithm=%(algo)", 182 | # Data published by developers using git-gerrit-push. 183 | "http://www.itk.org/files/ExternalData/%(algo)/%(hash)", 184 | # Mirror supported by the Slicer community.
185 | "http://slicer.kitware.com/midas3/api/rest?method=midas.bitstream.download&checksum=%(hash)&algorithm=%(algo)", 186 | # Insight Journal data server 187 | "http://www.insight-journal.org/midas/api/rest?method=midas.bitstream.by.hash&hash=%(hash)" 188 | ]) 189 | return midas_servers 190 | 191 | 192 | def output_hash_is_valid(known_md5sum, output_file): 193 | md5 = hashlib.md5() 194 | if not os.path.exists(output_file): 195 | return False 196 | with open(output_file, 'rb') as fp: 197 | for url_download in iter(lambda: fp.read(128 * md5.block_size), b''): 198 | md5.update(url_download) 199 | retrieved_md5sum = md5.hexdigest() 200 | return retrieved_md5sum == known_md5sum 201 | 202 | 203 | def fetch_data_one(onefilename, output_directory, manifest_file, verify=True, force=False): 204 | import tarfile, zipfile 205 | 206 | with open(manifest_file, 'r') as fp: 207 | manifest = json.load(fp) 208 | assert onefilename in manifest, "ERROR: {0} does not exist in {1}".format(onefilename, manifest_file) 209 | 210 | sys.stdout.write("Fetching {0}\n".format(onefilename)) 211 | output_file = os.path.realpath(os.path.join(output_directory, onefilename)) 212 | data_dictionary = manifest[onefilename] 213 | md5sum = data_dictionary['md5sum'] 214 | # List of places where the file can be downloaded from 215 | all_urls = [] 216 | if "url" in data_dictionary: 217 | all_urls.append(data_dictionary["url"]) 218 | else: 219 | for url_base in get_midas_servers(): 220 | all_urls.append(url_base.replace("%(hash)", md5sum).replace("%(algo)", "md5")) 221 | 222 | new_download = False 223 | 224 | for url in all_urls: 225 | # Only download if force is true or the file does not exist.
226 | if force or not os.path.exists(output_file): 227 | mkdir_p(os.path.dirname(output_file)) 228 | url_download_read(url, output_file, report_hook=url_download_report) 229 | # Check if a file was downloaded and has the correct hash 230 | if output_hash_is_valid(md5sum, output_file): 231 | new_download = True 232 | # Stop looking once found 233 | break 234 | # If the file exists, the hash is invalid and we have a problem. 235 | elif os.path.exists(output_file): 236 | error_msg = "File " + output_file 237 | error_msg += " has incorrect hash value, " + md5sum + " was expected." 238 | raise Exception(error_msg) 239 | 240 | # Did not find the file anywhere. 241 | if not os.path.exists(output_file): 242 | error_msg = "File " + "\'" + os.path.basename(output_file) +"\'" 243 | error_msg += " could not be found in any of the following locations:\n" 244 | error_msg += ", ".join(all_urls) 245 | raise Exception(error_msg) 246 | 247 | if not new_download and verify: 248 | # If the file was part of an archive then we don't verify it. These 249 | # files are only verified on download. 250 | if ("archive" not in data_dictionary) and (not output_hash_is_valid(md5sum, output_file)): 251 | # Attempt to download if md5sum is incorrect. 252 | fetch_data_one(onefilename, output_directory, manifest_file, verify, 253 | force=True) 254 | # If the file is in an archive, unpack it.
255 | if tarfile.is_tarfile(output_file) or zipfile.is_zipfile(output_file): 256 | tmp_output_file = output_file + ".tmp" 257 | os.rename(output_file, tmp_output_file) 258 | if tarfile.is_tarfile(tmp_output_file): 259 | archive = tarfile.open(tmp_output_file) 260 | if zipfile.is_zipfile(tmp_output_file): 261 | archive = zipfile.ZipFile(tmp_output_file, 'r') 262 | archive.extractall(os.path.dirname(tmp_output_file)) 263 | archive.close() 264 | os.remove(tmp_output_file) 265 | 266 | return output_file 267 | 268 | @deprecated 269 | def fetch_midas_data_one(onefilename, output_directory, manifest_file, verify=True, force=False): 270 | return fetch_data_one(onefilename, output_directory, manifest_file, verify, force) 271 | 272 | 273 | def fetch_data_all(output_directory, manifest_file, verify=True): 274 | with open(manifest_file, 'r') as fp: 275 | manifest = json.load(fp) 276 | for filename in manifest: 277 | fetch_data_one(filename, output_directory, manifest_file, verify, 278 | force=False) 279 | 280 | @deprecated 281 | def fetch_midas_data_all(output_directory, manifest_file, verify=True): 282 | return fetch_data_all(output_directory, manifest_file, verify) 283 | 284 | 285 | def fetch_data(cache_file_name, verify=False, cache_directory_name="data"): 286 | """ 287 | fetch_data is a simplified interface that requires 288 | relative paths, with a manifest.json file located in the 289 | cache_directory_name directory. 290 | 291 | By default the cache_directory_name is "data" relative to the current 292 | python script. An absolute path can also be given.
293 | """ 294 | if not os.path.isabs(cache_directory_name): 295 | cache_root_directory_name = os.path.dirname(__file__) 296 | cache_directory_name = os.path.join(cache_root_directory_name, cache_directory_name) 297 | cache_manifest_file = os.path.join(cache_directory_name, 'manifest.json') 298 | assert os.path.exists(cache_manifest_file), "ERROR, {0} does not exist".format(cache_manifest_file) 299 | return fetch_data_one(cache_file_name, cache_directory_name, cache_manifest_file, verify=verify) 300 | 301 | @deprecated 302 | def fetch_midas_data(cache_file_name, verify=False, cache_directory_name="data"): 303 | return fetch_data(cache_file_name, verify, cache_directory_name) 304 | 305 | 306 | if __name__ == '__main__': 307 | 308 | 309 | if len(sys.argv) < 3: 310 | print('Usage: ' + sys.argv[0] + ' output_directory manifest.json') 311 | sys.exit(1) 312 | output_directory = sys.argv[1] 313 | if not os.path.exists(output_directory): 314 | os.makedirs(output_directory) 315 | manifest = sys.argv[2] 316 | fetch_data_all(output_directory, manifest) 317 | -------------------------------------------------------------------------------- /01_spatial_transformations.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

SimpleITK Spatial Transformations

\n", 8 | "\n", 9 | "\n", 10 | "**Summary:**\n", 11 | "\n", 12 | "1. Points are represented by vector-like data types: Tuple, Numpy array, List.\n", 13 | "2. Matrices are represented by vector-like data types in row major order.\n", 14 | "3. Default transformation initialization as the identity transform.\n", 15 | "4. Angles specified in radians, distances specified in unknown but consistent units (nm,mm,m,km...).\n", 16 | "5. All global transformations **except translation** are of the form:\n", 17 | "$$T(\\mathbf{x}) = A(\\mathbf{x}-\\mathbf{c}) + \\mathbf{t} + \\mathbf{c}$$\n", 18 | "\n", 19 | " Nomenclature (when printing your transformation):\n", 20 | "\n", 21 | " * Matrix: the matrix $A$\n", 22 | " * Center: the point $\\mathbf{c}$\n", 23 | " * Translation: the vector $\\mathbf{t}$\n", 24 | " * Offset: $\\mathbf{t} + \\mathbf{c} - A\\mathbf{c}$\n", 25 | "6. Bounded transformations, BSplineTransform and DisplacementFieldTransform, behave as the identity transform outside the defined bounds.\n", 26 | "7. DisplacementFieldTransform:\n", 27 | " * Initializing the DisplacementFieldTransform using an image requires that the image's pixel type be sitk.sitkVectorFloat64.\n", 28 | " * Initializing the DisplacementFieldTransform using an image will \"clear out\" your image (your alias to the image will point to an empty, zero sized, image).\n", 29 | "8. Composite transformations are applied in stack order (first added, last applied)." 30 | ] 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "metadata": {}, 35 | "source": [ 36 | "## Transformation Types\n", 37 | "\n", 38 | "SimpleITK supports the following transformation types.\n", 39 | "\n", 40 | "\n", 41 | "\n", 42 | " \n", 43 | " \n", 44 | " \n", 45 | " \n", 46 | " \n", 47 | " \n", 48 | " \n", 49 | " \n", 50 | " \n", 51 | " \n", 52 | " \n", 53 | " \n", 54 | " \n", 55 | " \n", 56 | "
TranslationTransform: 2D or 3D, translation
VersorTransform: 3D, rotation represented by a versor
VersorRigid3DTransform: 3D, rigid transformation with rotation represented by a versor
Euler2DTransform: 2D, rigid transformation with rotation represented by an Euler angle
Euler3DTransform: 3D, rigid transformation with rotation represented by Euler angles
Similarity2DTransform: 2D, composition of isotropic scaling and rigid transformation with rotation represented by an Euler angle
Similarity3DTransform: 3D, composition of isotropic scaling and rigid transformation with rotation represented by a versor
ScaleTransform: 2D or 3D, anisotropic scaling
ScaleVersor3DTransform: 3D, rigid transformation and anisotropic scale is added to the rotation matrix part (not composed as one would expect)
ScaleSkewVersor3DTransform: 3D, rigid transformation with anisotropic scale and skew matrices added to the rotation matrix part (not composed as one would expect)
AffineTransform: 2D or 3D, affine transformation.
BSplineTransform: 2D or 3D, deformable transformation represented by a sparse regular grid of control points.
DisplacementFieldTransform: 2D or 3D, deformable transformation represented as a dense regular grid of vectors.
Transform: a generic transformation. Can represent any of the SimpleITK transformations, or a composite transformation (a stack of transformations concatenated via composition, last added, first applied).
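The Offset entry in the nomenclature above is what reduces the global form $T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$ to a plain matrix-plus-offset expression. A small numpy check, independent of SimpleITK (the specific matrix, center, and translation values are arbitrary):

```python
import numpy as np

# Arbitrary 2D global transform: A is a 90 degree rotation,
# c the center, t the translation.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])
c = np.array([10.0, 10.0])
t = np.array([1.0, 2.0])

# The "Offset" a printed transform reports: t + c - Ac.
offset = t + c - A @ c

x = np.array([3.0, 4.0])
lhs = A @ (x - c) + t + c  # T(x) = A(x-c) + t + c
rhs = A @ x + offset       # equivalent form: T(x) = Ax + offset

print(np.allclose(lhs, rhs))  # True
```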
" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": null, 62 | "metadata": { 63 | "collapsed": false 64 | }, 65 | "outputs": [], 66 | "source": [ 67 | "import SimpleITK as sitk\n", 68 | "import utilities as util\n", 69 | "\n", 70 | "import numpy as np\n", 71 | "import matplotlib.pyplot as plt\n", 72 | "%matplotlib inline \n", 73 | "from ipywidgets import interact, fixed\n", 74 | "\n", 75 | "OUTPUT_DIR = \"output\"" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "We will introduce the transformation types, starting with translation and illustrating how to move from a lower to a higher parameter space (e.g. translation to rigid). \n", 83 | "\n", 84 | "We start with the global transformations. All of them except translation are of the form:\n", 85 | "$$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$\n", 86 | "\n", 87 | "In ITK speak (when printing your transformation):\n", 88 | "\n", 89 | " * Matrix: the matrix $A$\n", 90 | " * Center: the point $\mathbf{c}$\n", 91 | " * Translation: the vector $\mathbf{t}$\n", 92 | " * Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$\n", 93 | "" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "metadata": {}, 99 | "source": [ 100 | "## TranslationTransform\n", 101 | "\n", 102 | "Create a translation and then transform a point and use the inverse transformation to get the original back."
103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": null, 108 | "metadata": { 109 | "collapsed": false 110 | }, 111 | "outputs": [], 112 | "source": [ 113 | "dimension = 2 \n", 114 | "offset = [2]*dimension # use a Python trick to create the offset list based on the dimension\n", 115 | "translation = sitk.TranslationTransform(dimension, offset)\n", 116 | "print(translation)" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": null, 122 | "metadata": { 123 | "collapsed": false 124 | }, 125 | "outputs": [], 126 | "source": [ 127 | "point = [10, 11] if dimension==2 else [10, 11, 12] # set point to match dimension\n", 128 | "transformed_point = translation.TransformPoint(point)\n", 129 | "translation_inverse = translation.GetInverse()\n", 130 | "print('original point: ' + util.point2str(point) + '\\n'\n", 131 | " 'transformed point: ' + util.point2str(transformed_point) + '\\n'\n", 132 | " 'back to original: ' + util.point2str(translation_inverse.TransformPoint(transformed_point)))" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "## Euler2DTransform\n", 140 | "\n", 141 | "Rigidly transform a 2D point using an Euler angle parameter specification.\n", 142 | "\n", 143 | "Notice that the dimensionality of the Euler-angle-based rigid transformation is associated with the class, unlike translation, where it is set at construction.\n" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": null, 149 | "metadata": { 150 | "collapsed": false 151 | }, 152 | "outputs": [], 153 | "source": [ 154 | "point = [10, 11]\n", 155 | "rotation2D = sitk.Euler2DTransform()\n", 156 | "rotation2D.SetTranslation((7.2, 8.4))\n", 157 | "rotation2D.SetAngle(np.pi/2)\n", 158 | "print('original point: ' + util.point2str(point) + '\\n'\n", 159 | " 'transformed point: ' + util.point2str(rotation2D.TransformPoint(point)))" 160 | ] 161 | }, 162 | { 163 | "cell_type": "markdown",
164 | "metadata": {}, 165 | "source": [ 166 | "## VersorTransform (rotation in 3D)\n", 167 | "\n", 168 | "Rotation using a versor (the vector part of a unit quaternion) parameterization. The quaternion for a rotation of $\theta$ radians around axis $n$ is $q = [n*\sin(\frac{\theta}{2}), \cos(\frac{\theta}{2})]$." 169 | ] 170 | }, 171 | { 172 | "cell_type": "code", 173 | "execution_count": null, 174 | "metadata": { 175 | "collapsed": false 176 | }, 177 | "outputs": [], 178 | "source": [ 179 | "# Use a versor:\n", 180 | "rotation1 = sitk.VersorTransform([0,0,1,0])\n", 181 | "\n", 182 | "# Use axis-angle:\n", 183 | "rotation2 = sitk.VersorTransform((0,0,1), np.pi)\n", 184 | "\n", 185 | "# Use a matrix:\n", 186 | "rotation3 = sitk.VersorTransform()\n", 187 | "rotation3.SetMatrix([-1, 0, 0, 0, -1, 0, 0, 0, 1]);\n", 188 | "\n", 189 | "point = (10, 100, 1000)\n", 190 | "\n", 191 | "p1 = rotation1.TransformPoint(point)\n", 192 | "p2 = rotation2.TransformPoint(point)\n", 193 | "p3 = rotation3.TransformPoint(point)\n", 194 | "\n", 195 | "print('Points after transformation:\\np1=' + str(p1) + \n", 196 | " '\\np2='+ str(p2) + '\\np3='+ str(p3))" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "## Translation to Rigid [3D]\n", 204 | "\n", 205 | "We only need to copy the translational component."
206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": null, 211 | "metadata": { 212 | "collapsed": false 213 | }, 214 | "outputs": [], 215 | "source": [ 216 | "dimension = 3 \n", 217 | "t =(1,2,3) \n", 218 | "translation = sitk.TranslationTransform(dimension, t)\n", 219 | "\n", 220 | "# Copy the translational component.\n", 221 | "rigid_euler = sitk.Euler3DTransform()\n", 222 | "rigid_euler.SetTranslation(translation.GetOffset())\n", 223 | "\n", 224 | "# Apply the transformations to the same set of random points and compare the results.\n", 225 | "util.print_transformation_differences(translation, rigid_euler)" 226 | ] 227 | }, 228 | { 229 | "cell_type": "markdown", 230 | "metadata": {}, 231 | "source": [ 232 | "## Rotation to Rigid [3D]\n", 233 | "Copy the matrix or versor and center of rotation." 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": null, 239 | "metadata": { 240 | "collapsed": false 241 | }, 242 | "outputs": [], 243 | "source": [ 244 | "rotation_center = (10, 10, 10)\n", 245 | "rotation = sitk.VersorTransform([0,0,1,0], rotation_center)\n", 246 | "\n", 247 | "rigid_versor = sitk.VersorRigid3DTransform()\n", 248 | "rigid_versor.SetRotation(rotation.GetVersor())\n", 249 | "#rigid_versor.SetCenter(rotation.GetCenter()) #intentional error, not copying center of rotation\n", 250 | "\n", 251 | "# Apply the transformations to the same set of random points and compare the results.\n", 252 | "util.print_transformation_differences(rotation, rigid_versor)" 253 | ] 254 | }, 255 | { 256 | "cell_type": "markdown", 257 | "metadata": {}, 258 | "source": [ 259 | "In the cell above, when we don't copy the center of rotation we have a constant error vector, $\mathbf{c} - A\mathbf{c}$."
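The constant error vector can be checked numerically outside SimpleITK: with the center, $T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{c}$; without it, $T'(\mathbf{x}) = A\mathbf{x}$; the difference is $\mathbf{c} - A\mathbf{c}$ for every point. A numpy sketch (matrix and center values chosen to match the versor example, otherwise arbitrary):

```python
import numpy as np

# Rotation by pi around the z axis, the matrix for the versor (0,0,1,0).
A = np.diag([-1.0, -1.0, 1.0])
c = np.array([10.0, 10.0, 10.0])  # center of rotation

points = np.random.random((5, 3)) * 100.0
with_center = (points - c) @ A.T + c  # T(x)  = A(x-c) + c
without_center = points @ A.T         # T'(x) = Ax

# Every row of the difference equals the constant vector c - Ac.
print(np.allclose(with_center - without_center, c - A @ c))  # True
```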
260 | ] 261 | }, 262 | { 263 | "cell_type": "markdown", 264 | "metadata": {}, 265 | "source": [ 266 | "## Similarity [2D]\n", 267 | "\n", 268 | "When the center of the similarity transformation is not at the origin the effect of the transformation is not what most of us expect. This is readily visible if we limit the transformation to scaling: $T(\\mathbf{x}) = s\\mathbf{x}-s\\mathbf{c} + \\mathbf{c}$. Changing the transformation's center results in scale + translation." 269 | ] 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": null, 274 | "metadata": { 275 | "collapsed": false 276 | }, 277 | "outputs": [], 278 | "source": [ 279 | "def display_center_effect(x, y, tx, point_list, xlim, ylim):\n", 280 | " tx.SetCenter((x,y))\n", 281 | " transformed_point_list = [ tx.TransformPoint(p) for p in point_list]\n", 282 | "\n", 283 | " plt.scatter(list(np.array(transformed_point_list).T)[0],\n", 284 | " list(np.array(transformed_point_list).T)[1],\n", 285 | " marker='^', \n", 286 | " color='red', label='transformed points')\n", 287 | " plt.scatter(list(np.array(point_list).T)[0],\n", 288 | " list(np.array(point_list).T)[1],\n", 289 | " marker='o', \n", 290 | " color='blue', label='original points')\n", 291 | " plt.xlim(xlim)\n", 292 | " plt.ylim(ylim)\n", 293 | " plt.legend(loc=(0.25,1.01))\n", 294 | "\n", 295 | "# 2D square centered on (0,0)\n", 296 | "points = [np.array((-1.0,-1.0)), np.array((-1.0,1.0)), np.array((1.0,1.0)), np.array((1.0,-1.0))]\n", 297 | "\n", 298 | "# Scale by 2 \n", 299 | "similarity = sitk.Similarity2DTransform();\n", 300 | "similarity.SetScale(2)\n", 301 | "\n", 302 | "interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(similarity), point_list = fixed(points), \n", 303 | " xlim = fixed((-10,10)),ylim = fixed((-10,10)));" 304 | ] 305 | }, 306 | { 307 | "cell_type": "markdown", 308 | "metadata": {}, 309 | "source": [ 310 | "## Rigid to Similarity [3D]\n", 311 | "Copy the translation, center, and matrix or versor." 
312 | ] 313 | }, 314 | { 315 | "cell_type": "code", 316 | "execution_count": null, 317 | "metadata": { 318 | "collapsed": false 319 | }, 320 | "outputs": [], 321 | "source": [ 322 | "rotation_center = (100, 100, 100)\n", 323 | "theta_x = 0.0\n", 324 | "theta_y = 0.0\n", 325 | "theta_z = np.pi/2.0\n", 326 | "translation = (1,2,3)\n", 327 | "\n", 328 | "rigid_euler = sitk.Euler3DTransform(rotation_center, theta_x, theta_y, theta_z, translation)\n", 329 | "\n", 330 | "similarity = sitk.Similarity3DTransform()\n", 331 | "similarity.SetMatrix(rigid_euler.GetMatrix())\n", 332 | "similarity.SetTranslation(rigid_euler.GetTranslation())\n", 333 | "similarity.SetCenter(rigid_euler.GetCenter())\n", 334 | "\n", 335 | "# Apply the transformations to the same set of random points and compare the results.\n", 336 | "util.print_transformation_differences(rigid_euler, similarity)" 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "## Similarity to Affine [3D]\n", 344 | "Copy the translation, center and matrix." 
345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": null, 350 | "metadata": { 351 | "collapsed": false 352 | }, 353 | "outputs": [], 354 | "source": [ 355 | "rotation_center = (100, 100, 100)\n", 356 | "axis = (0,0,1)\n", 357 | "angle = np.pi/2.0\n", 358 | "translation = (1,2,3)\n", 359 | "scale_factor = 2.0\n", 360 | "similarity = sitk.Similarity3DTransform(scale_factor, axis, angle, translation, rotation_center)\n", 361 | "\n", 362 | "affine = sitk.AffineTransform(3)\n", 363 | "affine.SetMatrix(similarity.GetMatrix())\n", 364 | "affine.SetTranslation(similarity.GetTranslation())\n", 365 | "affine.SetCenter(similarity.GetCenter())\n", 366 | "\n", 367 | "# Apply the transformations to the same set of random points and compare the results.\n", 368 | "util.print_transformation_differences(similarity, affine)" 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "metadata": {}, 374 | "source": [ 375 | "## Scale Transform\n", 376 | "\n", 377 | "Just as was the case for the similarity transformation above, when the transformation's center is not at the origin we also have a translation, not just a pure anisotropic scaling ($T(\mathbf{x}) = S\mathbf{x}-S\mathbf{c} + \mathbf{c}$, where $S = \textrm{diag}(\mathbf{s})$)."
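The scale-plus-translation effect can be made explicit: $S\mathbf{x} - S\mathbf{c} + \mathbf{c} = S\mathbf{x} + (I-S)\mathbf{c}$, i.e. scaling about a center $\mathbf{c}$ is scaling about the origin followed by a translation by $(I-S)\mathbf{c}$. A numpy sketch with arbitrary values:

```python
import numpy as np

s = np.array([0.5, 2.0])   # anisotropic scale factors
c = np.array([3.0, -2.0])  # center of the scale transform
S = np.diag(s)

x = np.array([1.0, 1.0])
about_center = S @ x - S @ c + c                    # T(x) = Sx - Sc + c
scale_then_translate = S @ x + (np.eye(2) - S) @ c  # Sx + (I-S)c

print(np.allclose(about_center, scale_then_translate))  # True
```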
378 | ] 379 | }, 380 | { 381 | "cell_type": "code", 382 | "execution_count": null, 383 | "metadata": { 384 | "collapsed": false 385 | }, 386 | "outputs": [], 387 | "source": [ 388 | "# 2D square centered on (0,0).\n", 389 | "points = [np.array((-1.0,-1.0)), np.array((-1.0,1.0)), np.array((1.0,1.0)), np.array((1.0,-1.0))]\n", 390 | "\n", 391 | "# Scale by half in x and 2 in y.\n", 392 | "scale = sitk.ScaleTransform(2, (0.5,2));\n", 393 | "\n", 394 | "# Interactively change the location of the center.\n", 395 | "interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(scale), point_list = fixed(points), \n", 396 | " xlim = fixed((-10,10)),ylim = fixed((-10,10)));" 397 | ] 398 | }, 399 | { 400 | "cell_type": "markdown", 401 | "metadata": {}, 402 | "source": [ 403 | "## Unintentional Misnomers (originally from ITK)\n", 404 | "\n", 405 | "Two transformation types whose names may mislead you are ScaleVersor and ScaleSkewVersor. Basing your choices on the names alone, without reading the documentation, will surprise you.\n", 406 | "\n", 407 | "ScaleVersor - the name suggests a composition of transformations; in practice it is:\n", 408 | "$$T(x) = (R+S)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S= \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]$$ \n", 409 | "\n", 410 | "ScaleSkewVersor - the name suggests a composition of transformations; in practice it is:\n", 411 | "$$T(x) = (R+S+K)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \end{array}\right]$$ \n", 412 | "\n", 413 | "Note that ScaleSkewVersor is an over-parametrized version of the affine transform: 15 parameters (scale, skew, versor, translation) vs.
12 parameters (matrix, translation)." 414 | ] 415 | }, 416 | { 417 | "cell_type": "markdown", 418 | "metadata": {}, 419 | "source": [ 420 | "## Bounded Transformations\n", 421 | "\n", 422 | "SimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse representation) and DisplacementFieldTransform (dense representation).\n", 423 | "\n", 424 | "Transforming a point that is outside the bounds will return the original point - the identity transform." 425 | ] 426 | }, 427 | { 428 | "cell_type": "markdown", 429 | "metadata": {}, 430 | "source": [ 431 | "## BSpline\n", 432 | "Using a sparse set of control points to control a free-form deformation. The cell below makes it clear that the BSplineTransform allows for folding and tearing." 433 | ] 434 | }, 435 | { 436 | "cell_type": "code", 437 | "execution_count": null, 438 | "metadata": { 439 | "collapsed": false 440 | }, 441 | "outputs": [], 442 | "source": [ 443 | "# Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function\n", 444 | "# or its object oriented counterpart BSplineTransformInitializerFilter).\n", 445 | "dimension = 2\n", 446 | "spline_order = 3\n", 447 | "direction_matrix_row_major = [1.0,0.0,0.0,1.0] # identity, mesh is axis aligned\n", 448 | "origin = [-1.0,-1.0] \n", 449 | "domain_physical_dimensions = [2,2]\n", 450 | "\n", 451 | "bspline = sitk.BSplineTransform(dimension, spline_order)\n", 452 | "bspline.SetTransformDomainOrigin(origin)\n", 453 | "bspline.SetTransformDomainDirection(direction_matrix_row_major)\n", 454 | "bspline.SetTransformDomainPhysicalDimensions(domain_physical_dimensions)\n", 455 | "bspline.SetTransformDomainMeshSize((4,3))\n", 456 | "\n", 457 | "# Random displacement of the control points.\n", 458 | "originalControlPointDisplacements = np.random.random(len(bspline.GetParameters()))\n", 459 | "bspline.SetParameters(originalControlPointDisplacements)\n", 460 | "\n", 461 | "# Apply the BSpline
transformation to a grid of points \n", 462 | "# starting the point set exactly at the origin of the BSpline mesh is problematic as\n", 463 | "# these points are considered outside the transformation's domain,\n", 464 | "# remove epsilon below and see what happens.\n", 465 | "numSamplesX = 10\n", 466 | "numSamplesY = 20\n", 467 | " \n", 468 | "coordsX = np.linspace(origin[0]+np.finfo(float).eps, origin[0] + domain_physical_dimensions[0], numSamplesX)\n", 469 | "coordsY = np.linspace(origin[1]+np.finfo(float).eps, origin[1] + domain_physical_dimensions[1], numSamplesY)\n", 470 | "XX, YY = np.meshgrid(coordsX, coordsY)\n", 471 | "\n", 472 | "interact(util.display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),\n", 473 | " tx = fixed(bspline), original_control_point_displacements = fixed(originalControlPointDisplacements)); " 474 | ] 475 | }, 476 | { 477 | "cell_type": "markdown", 478 | "metadata": {}, 479 | "source": [ 480 | "## DisplacementField\n", 481 | "\n", 482 | "A dense set of vectors representing the displacement inside the given domain. The most generic representation of a transformation." 483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": null, 488 | "metadata": { 489 | "collapsed": false 490 | }, 491 | "outputs": [], 492 | "source": [ 493 | "# Create the displacement field. \n", 494 | " \n", 495 | "# When working with images the safer thing to do is use the image based constructor,\n", 496 | "# sitk.DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement\n", 497 | "# field is initialized using the vectors stored in the image. 
SimpleITK requires that the image's pixel type be \n", 498 | "# sitk.sitkVectorFloat64.\n", 499 | "displacement = sitk.DisplacementFieldTransform(2)\n", 500 | "field_size = [10,20]\n", 501 | "field_origin = [-1.0,-1.0] \n", 502 | "field_spacing = [2.0/9.0,2.0/19.0] \n", 503 | "field_direction = [1,0,0,1] # direction cosine matrix (row major order) \n", 504 | "\n", 505 | "# Concatenate all the information into a single list\n", 506 | "displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\n", 507 | "# Set the interpolator, either sitkLinear which is default or nearest neighbor\n", 508 | "displacement.SetInterpolator(sitk.sitkNearestNeighbor)\n", 509 | "\n", 510 | "originalDisplacements = np.random.random(len(displacement.GetParameters()))\n", 511 | "displacement.SetParameters(originalDisplacements)\n", 512 | "\n", 513 | "coordsX = np.linspace(field_origin[0], field_origin[0]+(field_size[0]-1)*field_spacing[0], field_size[0])\n", 514 | "coordsY = np.linspace(field_origin[1], field_origin[1]+(field_size[1]-1)*field_spacing[1], field_size[1])\n", 515 | "XX, YY = np.meshgrid(coordsX, coordsY)\n", 516 | "\n", 517 | "interact(util.display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),\n", 518 | " tx = fixed(displacement), original_control_point_displacements = fixed(originalDisplacements)); " 519 | ] 520 | }, 521 | { 522 | "cell_type": "markdown", 523 | "metadata": {}, 524 | "source": [ 525 | "Displacement field transform created from an image. Remember that SimpleITK will clear the image you provide, as shown in the cell below." 526 | ] 527 | }, 528 | { 529 | "cell_type": "markdown", 530 | "metadata": {}, 531 | "source": [ 532 | "## Composite transform (Transform)\n", 533 | "\n", 534 | "The generic SimpleITK transform class. This class can represent both a single transformation (global, local), or a composite transformation (multiple transformations applied one after the other). 
This is the output type returned by the SimpleITK registration framework. \n", 535 | "\n", 536 | "The choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework.\n", 537 | "\n", 538 | "Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform while other regions are only affected by the global transformation.\n", 539 | "\n", 540 | "The following code illustrates this, where the whole region is translated and subregions have different deformations." 541 | ] 542 | }, 543 | { 544 | "cell_type": "code", 545 | "execution_count": null, 546 | "metadata": { 547 | "collapsed": false 548 | }, 549 | "outputs": [], 550 | "source": [ 551 | "# Global transformation.\n", 552 | "translation = sitk.TranslationTransform(2,(1.0,0.0))\n", 553 | "\n", 554 | "# Displacement in region 1.\n", 555 | "displacement1 = sitk.DisplacementFieldTransform(2)\n", 556 | "field_size = [10,20]\n", 557 | "field_origin = [-1.0,-1.0] \n", 558 | "field_spacing = [2.0/9.0,2.0/19.0] \n", 559 | "field_direction = [1,0,0,1] # direction cosine matrix (row major order) \n", 560 | "\n", 561 | "# Concatenate all the information into a single list.\n", 562 | "displacement1.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\n", 563 | "displacement1.SetParameters(np.ones(len(displacement1.GetParameters())))\n", 564 | "\n", 565 | "# Displacement in region 2.\n", 566 | "displacement2 = sitk.DisplacementFieldTransform(2)\n", 567 | "field_size = [10,20]\n", 568 | "field_origin = [1.0,-3] \n", 569 | "field_spacing = [2.0/9.0,2.0/19.0] \n", 570 | "field_direction = [1,0,0,1] # direction cosine matrix (row major order) \n", 571 | "\n", 572 | "# Concatenate all the information into a single list.\n", 573 | 
"displacement2.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\n", 574 | "displacement2.SetParameters(-1.0*np.ones(len(displacement2.GetParameters())))\n", 575 | "\n", 576 | "# Composite transform which applies the global and local transformations.\n", 577 | "composite = sitk.Transform(translation)\n", 578 | "composite.AddTransform(displacement1)\n", 579 | "composite.AddTransform(displacement2)\n", 580 | "\n", 581 | "# Apply the composite transformation to points in ([-1,-3],[3,1]) and \n", 582 | "# display the deformation using a quiver plot.\n", 583 | " \n", 584 | "# Generate points.\n", 585 | "numSamplesX = 10\n", 586 | "numSamplesY = 10 \n", 587 | "coordsX = np.linspace(-1.0, 3.0, numSamplesX)\n", 588 | "coordsY = np.linspace(-3.0, 1.0, numSamplesY)\n", 589 | "XX, YY = np.meshgrid(coordsX, coordsY)\n", 590 | "\n", 591 | "# Transform points and compute deformation vectors.\n", 592 | "pointsX = np.zeros(XX.shape)\n", 593 | "pointsY = np.zeros(XX.shape)\n", 594 | "for index, value in np.ndenumerate(XX):\n", 595 | " px,py = composite.TransformPoint((value, YY[index]))\n", 596 | " pointsX[index]=px - value \n", 597 | " pointsY[index]=py - YY[index]\n", 598 | " \n", 599 | "plt.quiver(XX, YY, pointsX, pointsY); " 600 | ] 601 | }, 602 | { 603 | "cell_type": "markdown", 604 | "metadata": {}, 605 | "source": [ 606 | "## Writing and Reading\n", 607 | "\n", 608 | "The SimpleITK.ReadTransform() returns a SimpleITK.Transform . The content of the file can be any of the SimpleITK transformations or a composite (set of transformations). 
" 609 | ] 610 | }, 611 | { 612 | "cell_type": "code", 613 | "execution_count": null, 614 | "metadata": { 615 | "collapsed": false 616 | }, 617 | "outputs": [], 618 | "source": [ 619 | "import os\n", 620 | "\n", 621 | "# Create a 2D rigid transformation, write it to disk and read it back.\n", 622 | "basic_transform = sitk.Euler2DTransform()\n", 623 | "basic_transform.SetTranslation((1.0,2.0))\n", 624 | "basic_transform.SetAngle(np.pi/2)\n", 625 | "\n", 626 | "full_file_name = os.path.join(OUTPUT_DIR, 'euler2D.tfm')\n", 627 | "\n", 628 | "sitk.WriteTransform(basic_transform, full_file_name)\n", 629 | "\n", 630 | "# The ReadTransform function returns an sitk.Transform no matter the type of the transform \n", 631 | "# found in the file (global, bounded, composite).\n", 632 | "read_result = sitk.ReadTransform(full_file_name)\n", 633 | "\n", 634 | "print('Different types: '+ str(type(read_result) != type(basic_transform)))\n", 635 | "util.print_transformation_differences(basic_transform, read_result)\n", 636 | "\n", 637 | "\n", 638 | "# Create a composite transform then write and read.\n", 639 | "displacement = sitk.DisplacementFieldTransform(2)\n", 640 | "field_size = [10,20]\n", 641 | "field_origin = [-10.0,-100.0] \n", 642 | "field_spacing = [20.0/(field_size[0]-1),200.0/(field_size[1]-1)] \n", 643 | "field_direction = [1,0,0,1] #direction cosine matrix (row major order)\n", 644 | "\n", 645 | "# Concatenate all the information into a single list.\n", 646 | "displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)\n", 647 | "displacement.SetParameters(np.random.random(len(displacement.GetParameters())))\n", 648 | "\n", 649 | "composite_transform = sitk.Transform(basic_transform)\n", 650 | "composite_transform.AddTransform(displacement)\n", 651 | "\n", 652 | "full_file_name = os.path.join(OUTPUT_DIR, 'composite.tfm')\n", 653 | "\n", 654 | "sitk.WriteTransform(composite_transform, full_file_name)\n", 655 | "read_result = 
sitk.ReadTransform(full_file_name)\n", 656 | "\n", 657 | "util.print_transformation_differences(composite_transform, read_result) " 658 | ] 659 | }, 660 | { 661 | "cell_type": "markdown", 662 | "metadata": { 663 | "collapsed": true 664 | }, 665 | "source": [ 666 | "

Next »

" 667 | ] 668 | } 669 | ], 670 | "metadata": { 671 | "kernelspec": { 672 | "display_name": "Python 3", 673 | "language": "python", 674 | "name": "python3" 675 | }, 676 | "language_info": { 677 | "codemirror_mode": { 678 | "name": "ipython", 679 | "version": 3 680 | }, 681 | "file_extension": ".py", 682 | "mimetype": "text/x-python", 683 | "name": "python", 684 | "nbconvert_exporter": "python", 685 | "pygments_lexer": "ipython3", 686 | "version": "3.5.2" 687 | } 688 | }, 689 | "nbformat": 4, 690 | "nbformat_minor": 2 691 | } 692 | -------------------------------------------------------------------------------- /05_advanced_registration.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

Advanced Registration

\n", 8 | "\n", 9 | "\n", 10 | "**Summary:**\n", 11 | "1. SimpleITK provides two flavors of non-rigid registration:\n", 12 | " * Free Form Deformation, BSpline based, and Demons using the ITKv4 registration framework.\n", 13 | " * A set of Demons filters that are independent of the registration framework (`DemonsRegistrationFilter, DiffeomorphicDemonsRegistrationFilter, FastSymmetricForcesDemonsRegistrationFilter, SymmetricForcesDemonsRegistrationFilter`).\n", 14 | "2. Registration evaluation:\n", 15 | " * Registration accuracy, the quantity of interest is the Target Registration Error (TRE).\n", 16 | " * TRE is spatially variant.\n", 17 | " * Surrogate metrics for evaluating registration accuracy such as segmentation overlaps are relevant, but are potentially deficient.\n", 18 | " * Registration time.\n", 19 | " * Acceptable values for TRE and runtime are context dependent." 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": { 26 | "collapsed": true 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "import SimpleITK as sitk\n", 31 | "import registration_gui as rgui\n", 32 | "import utilities \n", 33 | "import gui\n", 34 | "\n", 35 | "from downloaddata import fetch_data as fdata\n", 36 | "\n", 37 | "from ipywidgets import interact, fixed\n", 38 | "\n", 39 | "%matplotlib inline\n", 40 | "import matplotlib.pyplot as plt\n", 41 | "\n", 42 | "import numpy as np" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "## Data and Registration Task\n", 50 | "\n", 51 | "In this notebook we will use the Point-validated Pixel-based Breathing Thorax Model (POPI). 
This is a 4D (3D+time) thoracic-abdominal CT (10 CTs representing the respiratory cycle) with masks segmenting each of the CTs to air/body/lung, and a set of corresponding landmarks localized in each of the CT volumes.\n", 52 | "\n", 53 | "The registration problem we deal with is non-rigid alignment of the lungs throughout the respiratory cycle. This information is relevant for radiation therapy planning and execution.\n", 54 | "\n", 55 | "\n", 56 | "The POPI model is provided by the Léon Bérard Cancer Center & CREATIS Laboratory, Lyon, France. The relevant publication is:\n", 57 | "\n", 58 | "J. Vandemeulebroucke, D. Sarrut, P. Clarysse, \"The POPI-model, a point-validated pixel-based breathing thorax model\",\n", 59 | "Proc. XVth International Conference on the Use of Computers in Radiation Therapy (ICCR), Toronto, Canada, 2007.\n", 60 | "\n", 61 | "Additional 4D CT data sets with reference points are available from the CREATIS Laboratory here. " 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": null, 67 | "metadata": { 68 | "collapsed": false 69 | }, 70 | "outputs": [], 71 | "source": [ 72 | "images = []\n", 73 | "masks = []\n", 74 | "points = []\n", 75 | "image_indexes = [0,7]\n", 76 | "for i in image_indexes:\n", 77 | " image_file_name = 'POPI/meta/{0}0-P.mhd'.format(i)\n", 78 | " mask_file_name = 'POPI/masks/{0}0-air-body-lungs.mhd'.format(i)\n", 79 | " points_file_name = 'POPI/landmarks/{0}0-Landmarks.pts'.format(i)\n", 80 | " images.append(sitk.ReadImage(fdata(image_file_name), sitk.sitkFloat32)) \n", 81 | " masks.append(sitk.ReadImage(fdata(mask_file_name)))\n", 82 | " points.append(utilities.read_POPI_points(fdata(points_file_name)))\n", 83 | " \n", 84 | "interact(rgui.display_coronal_with_overlay, temporal_slice=(0,len(images)-1), \n", 85 | " coronal_slice = (0, images[0].GetSize()[1]-1), \n", 86 | " images = fixed(images), masks = fixed(masks), \n", 87 | " label=fixed(utilities.popi_lung_label), window_min = fixed(-1024), 
window_max=fixed(976));" 88 | ] 89 | }, 90 | { 91 | "cell_type": "markdown", 92 | "metadata": {}, 93 | "source": [ 94 | "## Free Form Deformation\n", 95 | "\n", 96 | "Define a BSplineTransform using a sparse set of grid points overlaid onto the fixed image's domain to deform it.\n", 97 | "\n", 98 | "For the current registration task we are fortunate in that we have a unique setting. The images are of the same patient during respiration so we can initialize the registration using the identity transform. Additionally, we have masks demarcating the lungs.\n", 99 | "\n", 100 | "We use the registration framework taking advantage of its ability to use masks that limit the similarity metric estimation to points lying inside our region of interest, the lungs." 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": null, 106 | "metadata": { 107 | "collapsed": true 108 | }, 109 | "outputs": [], 110 | "source": [ 111 | "fixed_index = 0\n", 112 | "moving_index = 1\n", 113 | "\n", 114 | "fixed_image = images[fixed_index]\n", 115 | "fixed_image_mask = masks[fixed_index] == utilities.popi_lung_label\n", 116 | "fixed_points = points[fixed_index]\n", 117 | "\n", 118 | "moving_image = images[moving_index]\n", 119 | "moving_image_mask = masks[moving_index] == utilities.popi_lung_label\n", 120 | "moving_points = points[moving_index]" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": null, 126 | "metadata": { 127 | "collapsed": false 128 | }, 129 | "outputs": [], 130 | "source": [ 131 | "# Define a simple callback which allows us to monitor registration progress.\n", 132 | "def iteration_callback(filter):\n", 133 | " print('\\r{0:.2f}'.format(filter.GetMetricValue()), end='')\n", 134 | " \n", 135 | "registration_method = sitk.ImageRegistrationMethod()\n", 136 | " \n", 137 | "# Determine the number of BSpline control points using the physical spacing we want for the control grid. 
\n", 138 | "grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm\n", 139 | "image_physical_size = [size*spacing for size,spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())]\n", 140 | "mesh_size = [int(image_size/grid_spacing + 0.5) \\\n", 141 | " for image_size,grid_spacing in zip(image_physical_size,grid_physical_spacing)]\n", 142 | "\n", 143 | "initial_transform = sitk.BSplineTransformInitializer(image1 = fixed_image, \n", 144 | " transformDomainMeshSize = mesh_size, order=3) \n", 145 | "registration_method.SetInitialTransform(initial_transform)\n", 146 | " \n", 147 | "registration_method.SetMetricAsMeanSquares()\n", 148 | "registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n", 149 | "registration_method.SetMetricSamplingPercentage(0.01)\n", 150 | "registration_method.SetMetricFixedMask(fixed_image_mask)\n", 151 | " \n", 152 | "registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])\n", 153 | "registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])\n", 154 | "registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n", 155 | "\n", 156 | "registration_method.SetInterpolator(sitk.sitkLinear)\n", 157 | "registration_method.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)\n", 158 | "\n", 159 | "registration_method.AddCommand(sitk.sitkIterationEvent, lambda: iteration_callback(registration_method))\n", 160 | "\n", 161 | "final_transformation = registration_method.Execute(fixed_image, moving_image)\n", 162 | "print('\\nOptimizer\\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))" 163 | ] 164 | }, 165 | { 166 | "cell_type": "markdown", 167 | "metadata": {}, 168 | "source": [ 169 | "## Qualitative evaluation via segmentation transfer\n", 170 | "\n", 171 | "Transfer the segmentation from the moving image to the fixed image before and after registration and visually evaluate overlap." 
172 | ] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "execution_count": null, 177 | "metadata": { 178 | "collapsed": false 179 | }, 180 | "outputs": [], 181 | "source": [ 182 | "transformed_segmentation = sitk.Resample(moving_image_mask,\n", 183 | " fixed_image,\n", 184 | " final_transformation, \n", 185 | " sitk.sitkNearestNeighbor,\n", 186 | " 0.0, \n", 187 | " moving_image_mask.GetPixelID())\n", 188 | "\n", 189 | "interact(rgui.display_coronal_with_overlay, temporal_slice=(0,1), \n", 190 | " coronal_slice = (0, fixed_image.GetSize()[1]-1), \n", 191 | " images = fixed([fixed_image,fixed_image]), masks = fixed([moving_image_mask, transformed_segmentation]), \n", 192 | " label=fixed(1), window_min = fixed(-1024), window_max=fixed(976));" 193 | ] 194 | }, 195 | { 196 | "cell_type": "markdown", 197 | "metadata": {}, 198 | "source": [ 199 | "### Quantitative evaluation \n", 200 | "\n", 201 | "The most appropriate evaluation is based on analysis of the Target Registration Error (TRE), which is defined as follows:\n", 202 | "\n", 203 | "Given the transformation $T_f^m$ and corresponding points in the two coordinate systems, $^fp,^mp$, points which were not used in the registration process, TRE is defined as $\\|T_f^m(^fp) - ^mp\\|$. 
\n", 204 | "\n", 205 | "We start by looking at some descriptive statistics of TRE:" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": null, 211 | "metadata": { 212 | "collapsed": false 213 | }, 214 | "outputs": [], 215 | "source": [ 216 | "initial_TRE = utilities.target_registration_errors(sitk.Transform(), fixed_points, moving_points)\n", 217 | "final_TRE = utilities.target_registration_errors(final_transformation, fixed_points, moving_points)\n", 218 | "\n", 219 | "print('Initial alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(np.mean(initial_TRE), \n", 220 | " np.std(initial_TRE), \n", 221 | " np.max(initial_TRE)))\n", 222 | "print('Final alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(np.mean(final_TRE), \n", 223 | " np.std(final_TRE), \n", 224 | " np.max(final_TRE)))" 225 | ] 226 | }, 227 | { 228 | "cell_type": "markdown", 229 | "metadata": {}, 230 | "source": [ 231 | "The above descriptive statistics do not convey the whole picture, we should also look at the TRE distributions before and after registration." 232 | ] 233 | }, 234 | { 235 | "cell_type": "code", 236 | "execution_count": null, 237 | "metadata": { 238 | "collapsed": false 239 | }, 240 | "outputs": [], 241 | "source": [ 242 | "plt.hist(initial_TRE, bins=20, alpha=0.5, label='before registration', color='blue')\n", 243 | "plt.hist(final_TRE, bins=20, alpha=0.5, label='after registration', color='green')\n", 244 | "plt.legend()\n", 245 | "plt.title('TRE histogram');" 246 | ] 247 | }, 248 | { 249 | "cell_type": "markdown", 250 | "metadata": {}, 251 | "source": [ 252 | "Finally, we should also take into account the fact that TRE is spatially variant (think center of rotation). We therefore should also explore the distribution of errors as a function of the point location." 
253 | ] 254 | }, 255 | { 256 | "cell_type": "code", 257 | "execution_count": null, 258 | "metadata": { 259 | "collapsed": false 260 | }, 261 | "outputs": [], 262 | "source": [ 263 | "utilities.target_registration_errors(sitk.Transform(), fixed_points, moving_points, display_errors = True)\n", 264 | "utilities.target_registration_errors(final_transformation, fixed_points, moving_points, display_errors = True);" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "Deciding whether a registration algorithm is appropriate for a specific problem is context dependent and is defined by the clinical/research needs both in terms of accuracy and computational complexity." 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "## Demons Based Registration\n", 279 | "\n", 280 | "SimpleITK includes a number of filters from the Demons registration family (originally introduced by J. P. Thirion):\n", 281 | "1. DemonsRegistrationFilter.\n", 282 | "2. DiffeomorphicDemonsRegistrationFilter.\n", 283 | "3. FastSymmetricForcesDemonsRegistrationFilter.\n", 284 | "4. SymmetricForcesDemonsRegistrationFilter.\n", 285 | "\n", 286 | "These are appropriate for mono-modal registration. As these filters are independent of the ImageRegistrationMethod we do not have access to the multiscale framework. Luckily it is easy to implement our own multiscale framework in SimpleITK, which is what we do in the next cell." 
287 | ] 288 | }, 289 | { 290 | "cell_type": "code", 291 | "execution_count": null, 292 | "metadata": { 293 | "collapsed": true 294 | }, 295 | "outputs": [], 296 | "source": [ 297 | "def smooth_and_resample(image, shrink_factor, smoothing_sigma):\n", 298 | " \"\"\"\n", 299 | " Args:\n", 300 | " image: The image we want to resample.\n", 301 | " shrink_factor: A number greater than one, such that the new image's size is original_size/shrink_factor.\n", 302 | " smoothing_sigma: Sigma for Gaussian smoothing, this is in physical (image spacing) units, not pixels.\n", 303 | " Return:\n", 304 | " Image which is a result of smoothing the input and then resampling it using the given sigma and shrink factor.\n", 305 | " \"\"\"\n", 306 | " smoothed_image = sitk.SmoothingRecursiveGaussian(image, smoothing_sigma)\n", 307 | " \n", 308 | " original_spacing = image.GetSpacing()\n", 309 | " original_size = image.GetSize()\n", 310 | " new_size = [int(sz/float(shrink_factor) + 0.5) for sz in original_size]\n", 311 | " new_spacing = [((original_sz-1)*original_spc)/(new_sz-1) \n", 312 | " for original_sz, original_spc, new_sz in zip(original_size, original_spacing, new_size)]\n", 313 | " return sitk.Resample(smoothed_image, new_size, sitk.Transform(), \n", 314 | " sitk.sitkLinear, image.GetOrigin(),\n", 315 | " new_spacing, image.GetDirection(), 0.0, \n", 316 | " image.GetPixelID())\n", 317 | "\n", 318 | "\n", 319 | " \n", 320 | "def multiscale_demons(registration_algorithm,\n", 321 | " fixed_image, moving_image, initial_transform = None, \n", 322 | " shrink_factors=None, smoothing_sigmas=None):\n", 323 | " \"\"\"\n", 324 | " Run the given registration algorithm in a multiscale fashion. 
The original scale should not be given as input as the\n", 325 | " original images are implicitly incorporated as the base of the pyramid.\n", 326 | " Args:\n", 327 | " registration_algorithm: Any registration algorithm that has an Execute(fixed_image, moving_image, displacement_field_image)\n", 328 | " method.\n", 329 | " fixed_image: Resulting transformation maps points from this image's spatial domain to the moving image spatial domain.\n", 330 | " moving_image: Resulting transformation maps points from the fixed_image's spatial domain to this image's spatial domain.\n", 331 | " initial_transform: Any SimpleITK transform, used to initialize the displacement field.\n", 332 | " shrink_factors: Shrink factors relative to the original image's size.\n", 333 | " smoothing_sigmas: Amount of smoothing which is done prior to resampling the image using the given shrink factor. These\n", 334 | " are in physical (image spacing) units.\n", 335 | " Returns: \n", 336 | " SimpleITK.DisplacementFieldTransform\n", 337 | " \"\"\"\n", 338 | " # Create image pyramid.\n", 339 | " fixed_images = [fixed_image]\n", 340 | " moving_images = [moving_image]\n", 341 | " if shrink_factors:\n", 342 | " for shrink_factor, smoothing_sigma in reversed(list(zip(shrink_factors, smoothing_sigmas))):\n", 343 | " fixed_images.append(smooth_and_resample(fixed_images[0], shrink_factor, smoothing_sigma))\n", 344 | " moving_images.append(smooth_and_resample(moving_images[0], shrink_factor, smoothing_sigma))\n", 345 | " \n", 346 | " # Create initial displacement field at lowest resolution. 
\n", 347 | " # Currently, the pixel type is required to be sitkVectorFloat64 because of a constraint imposed by the Demons filters.\n", 348 | " if initial_transform:\n", 349 | " initial_displacement_field = sitk.TransformToDisplacementField(initial_transform, \n", 350 | " sitk.sitkVectorFloat64,\n", 351 | " fixed_images[-1].GetSize(),\n", 352 | " fixed_images[-1].GetOrigin(),\n", 353 | " fixed_images[-1].GetSpacing(),\n", 354 | " fixed_images[-1].GetDirection())\n", 355 | " else:\n", 356 | " initial_displacement_field = sitk.Image(fixed_images[-1].GetWidth(), \n", 357 | " fixed_images[-1].GetHeight(),\n", 358 | " fixed_images[-1].GetDepth(),\n", 359 | " sitk.sitkVectorFloat64)\n", 360 | " initial_displacement_field.CopyInformation(fixed_images[-1])\n", 361 | " \n", 362 | " # Run the registration. \n", 363 | " initial_displacement_field = registration_algorithm.Execute(fixed_images[-1], \n", 364 | " moving_images[-1], \n", 365 | " initial_displacement_field)\n", 366 | " # Start at the top of the pyramid and work our way down. \n", 367 | " for f_image, m_image in reversed(list(zip(fixed_images[0:-1], moving_images[0:-1]))):\n", 368 | " initial_displacement_field = sitk.Resample (initial_displacement_field, f_image)\n", 369 | " initial_displacement_field = registration_algorithm.Execute(f_image, m_image, initial_displacement_field)\n", 370 | " return sitk.DisplacementFieldTransform(initial_displacement_field)" 371 | ] 372 | }, 373 | { 374 | "cell_type": "markdown", 375 | "metadata": {}, 376 | "source": [ 377 | "Now we will use our newly minted multiscale framework to perform registration with the Demons filters. Some things you can easily try out by editing the code below:\n", 378 | "1. Is there really a need for multiscale - just call the multiscale_demons method without the shrink_factors and smoothing_sigmas parameters.\n", 379 | "2. Which Demons filter should you use - configure the other filters and see if our selection is the best choice (accuracy/time)." 
380 | ] 381 | }, 382 | { 383 | "cell_type": "code", 384 | "execution_count": null, 385 | "metadata": { 386 | "collapsed": false 387 | }, 388 | "outputs": [], 389 | "source": [ 390 | "# Define a simple callback which allows us to monitor registration progress.\n", 391 | "def iteration_callback(filter):\n", 392 | " print('\\r{0}: {1:.2f}'.format(filter.GetElapsedIterations(), filter.GetMetric()), end='')\n", 393 | " \n", 394 | "# Select a Demons filter and configure it.\n", 395 | "demons_filter = sitk.FastSymmetricForcesDemonsRegistrationFilter()\n", 396 | "demons_filter.SetNumberOfIterations(20)\n", 397 | "# Regularization (update field - viscous, total field - elastic).\n", 398 | "demons_filter.SetSmoothDisplacementField(True)\n", 399 | "demons_filter.SetStandardDeviations(2.0)\n", 400 | "\n", 401 | "# Add our simple callback to the registration filter.\n", 402 | "demons_filter.AddCommand(sitk.sitkIterationEvent, lambda: iteration_callback(demons_filter))\n", 403 | "\n", 404 | "# Run the registration.\n", 405 | "tx = multiscale_demons(registration_algorithm=demons_filter, \n", 406 | " fixed_image = fixed_image, \n", 407 | " moving_image = moving_image,\n", 408 | " shrink_factors = [4,2],\n", 409 | " smoothing_sigmas = [8,4])\n", 410 | "\n", 411 | "# look at the final TREs.\n", 412 | "final_TRE = utilities.target_registration_errors(tx, fixed_points, moving_points, display_errors = True)\n", 413 | "\n", 414 | "print('Final alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(np.mean(final_TRE), \n", 415 | " np.std(final_TRE), \n", 416 | " np.max(final_TRE)))" 417 | ] 418 | }, 419 | { 420 | "cell_type": "markdown", 421 | "metadata": {}, 422 | "source": [ 423 | "## Quantitative Evaluation II (Segmentation)\n", 424 | "\n", 425 | "While the use of corresponding points to evaluate registration is the desired approach, it is often not applicable. 
In many cases there are only a few distinct points which can be localized in the two images, possibly too few to serve as a metric for evaluating the registration result across the whole region of interest. \n", 426 | "\n", 427 | "An alternative approach is to use segmentation. In this approach, we independently segment the structures of interest in the two images. After registration we transfer the segmentation from one image to the other and compare the original and registration induced segmentations.\n" 428 | ] 429 | }, 430 | { 431 | "cell_type": "code", 432 | "execution_count": null, 433 | "metadata": { 434 | "collapsed": true 435 | }, 436 | "outputs": [], 437 | "source": [ 438 | "# Transfer the segmentation via the estimated transformation. \n", 439 | "# Nearest Neighbor interpolation so we don't introduce new labels.\n", 440 | "transformed_labels = sitk.Resample(masks[moving_index],\n", 441 | " fixed_image,\n", 442 | " tx, \n", 443 | " sitk.sitkNearestNeighbor,\n", 444 | " 0.0, \n", 445 | " masks[moving_index].GetPixelID())" 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "We have now replaced the task of evaluating registration with that of evaluating segmentation." 
453 | ] 454 | }, 455 | { 456 | "cell_type": "code", 457 | "execution_count": null, 458 | "metadata": { 459 | "collapsed": false 460 | }, 461 | "outputs": [], 462 | "source": [ 463 | "# Often referred to as ground truth, but we prefer reference as the truth is never known.\n", 464 | "reference_segmentation = moving_image_mask\n", 465 | "# Segmentations before and after registration\n", 466 | "segmentations = [fixed_image_mask, transformed_labels == utilities.popi_lung_label]" 467 | ] 468 | }, 469 | { 470 | "cell_type": "code", 471 | "execution_count": null, 472 | "metadata": { 473 | "collapsed": false 474 | }, 475 | "outputs": [], 476 | "source": [ 477 | "from enum import Enum\n", 478 | "\n", 479 | "# Use enumerations to represent the various evaluation measures\n", 480 | "class OverlapMeasures(Enum):\n", 481 | " jaccard, dice, volume_similarity, false_negative, false_positive = range(5)\n", 482 | "\n", 483 | "class SurfaceDistanceMeasures(Enum):\n", 484 | " hausdorff_distance, mean_surface_distance, median_surface_distance, std_surface_distance, max_surface_distance = range(5)\n", 485 | " \n", 486 | "# Empty numpy arrays to hold the results \n", 487 | "overlap_results = np.zeros((len(segmentations),len(OverlapMeasures.__members__.items()))) \n", 488 | "surface_distance_results = np.zeros((len(segmentations),len(SurfaceDistanceMeasures.__members__.items()))) \n", 489 | "\n", 490 | "# Compute the evaluation criteria\n", 491 | "\n", 492 | "# Note that for the overlap measures filter, because we are dealing with a single label we \n", 493 | "# use the combined, all labels, evaluation measures without passing a specific label to the methods.\n", 494 | "overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()\n", 495 | "\n", 496 | "hausdorff_distance_filter = sitk.HausdorffDistanceImageFilter()\n", 497 | "\n", 498 | "# Use the absolute values of the distance map to compute the surface distances (distance map sign, outside or inside \n", 499 | "# relationship, is 
irrelevant)\n", 500 | "label = 1\n", 501 | "reference_distance_map = sitk.Abs(sitk.SignedMaurerDistanceMap(reference_segmentation, squaredDistance=False))\n", 502 | "reference_surface = sitk.LabelContour(reference_segmentation)\n", 503 | "\n", 504 | "statistics_image_filter = sitk.StatisticsImageFilter()\n", 505 | "# Get the number of pixels in the reference surface by counting all pixels that are 1.\n", 506 | "statistics_image_filter.Execute(reference_surface)\n", 507 | "num_reference_surface_pixels = int(statistics_image_filter.GetSum()) \n", 508 | "\n", 509 | "for i, seg in enumerate(segmentations):\n", 510 | " # Overlap measures\n", 511 | " overlap_measures_filter.Execute(reference_segmentation, seg)\n", 512 | " overlap_results[i,OverlapMeasures.jaccard.value] = overlap_measures_filter.GetJaccardCoefficient()\n", 513 | " overlap_results[i,OverlapMeasures.dice.value] = overlap_measures_filter.GetDiceCoefficient()\n", 514 | " overlap_results[i,OverlapMeasures.volume_similarity.value] = overlap_measures_filter.GetVolumeSimilarity()\n", 515 | " overlap_results[i,OverlapMeasures.false_negative.value] = overlap_measures_filter.GetFalseNegativeError()\n", 516 | " overlap_results[i,OverlapMeasures.false_positive.value] = overlap_measures_filter.GetFalsePositiveError()\n", 517 | " # Hausdorff distance\n", 518 | " hausdorff_distance_filter.Execute(reference_segmentation, seg)\n", 519 | " surface_distance_results[i,SurfaceDistanceMeasures.hausdorff_distance.value] = hausdorff_distance_filter.GetHausdorffDistance()\n", 520 | " # Symmetric surface distance measures\n", 521 | " segmented_distance_map = sitk.Abs(sitk.SignedMaurerDistanceMap(seg, squaredDistance=False))\n", 522 | " segmented_surface = sitk.LabelContour(seg)\n", 523 | " \n", 524 | " # Multiply the binary surface segmentations with the distance maps. 
The resulting distance\n", 525 | " # maps contain non-zero values only on the surface (they can also contain zero on the surface)\n", 526 | " seg2ref_distance_map = reference_distance_map*sitk.Cast(segmented_surface, sitk.sitkFloat32)\n", 527 | " ref2seg_distance_map = segmented_distance_map*sitk.Cast(reference_surface, sitk.sitkFloat32)\n", 528 | " \n", 529 | " # Get the number of pixels in the segmented surface by counting all pixels that are 1.\n", 530 | " statistics_image_filter.Execute(segmented_surface)\n", 531 | " num_segmented_surface_pixels = int(statistics_image_filter.GetSum())\n", 532 | " \n", 533 | " # Get all non-zero distances and then add zero distances if required.\n", 534 | " seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map)\n", 535 | " seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr!=0]) \n", 536 | " seg2ref_distances = seg2ref_distances + \\\n", 537 | " list(np.zeros(num_segmented_surface_pixels - len(seg2ref_distances)))\n", 538 | " ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map)\n", 539 | " ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr!=0]) \n", 540 | " ref2seg_distances = ref2seg_distances + \\\n", 541 | " list(np.zeros(num_reference_surface_pixels - len(ref2seg_distances)))\n", 542 | " \n", 543 | " all_surface_distances = seg2ref_distances + ref2seg_distances\n", 544 | " \n", 545 | " surface_distance_results[i,SurfaceDistanceMeasures.mean_surface_distance.value] = np.mean(all_surface_distances)\n", 546 | " surface_distance_results[i,SurfaceDistanceMeasures.median_surface_distance.value] = np.median(all_surface_distances)\n", 547 | " surface_distance_results[i,SurfaceDistanceMeasures.std_surface_distance.value] = np.std(all_surface_distances)\n", 548 | " surface_distance_results[i,SurfaceDistanceMeasures.max_surface_distance.value] = np.max(all_surface_distances)\n", 549 | "\n", 550 | "import pandas as pd\n", 551 | "from 
IPython.display import display, HTML \n", 552 | "\n", 553 | "# Graft our results matrix into pandas data frames \n", 554 | "overlap_results_df = pd.DataFrame(data=overlap_results, index=[\"before registration\", \"after registration\"], \n", 555 | " columns=[name for name, _ in OverlapMeasures.__members__.items()]) \n", 556 | "surface_distance_results_df = pd.DataFrame(data=surface_distance_results, index=[\"before registration\", \"after registration\"], \n", 557 | " columns=[name for name, _ in SurfaceDistanceMeasures.__members__.items()]) \n", 558 | "\n", 559 | "# Display the data as HTML tables and graphs\n", 560 | "display(HTML(overlap_results_df.to_html(float_format=lambda x: '%.3f' % x)))\n", 561 | "display(HTML(surface_distance_results_df.to_html(float_format=lambda x: '%.3f' % x)))\n", 562 | "overlap_results_df.plot(kind='bar', rot=1).legend(bbox_to_anchor=(1.6,0.9))\n", 563 | "surface_distance_results_df.plot(kind='bar', rot=1).legend(bbox_to_anchor=(1.6,0.9)); " 564 | ] 565 | } 566 | ], 567 | "metadata": { 568 | "kernelspec": { 569 | "display_name": "Python 3", 570 | "language": "python", 571 | "name": "python3" 572 | }, 573 | "language_info": { 574 | "codemirror_mode": { 575 | "name": "ipython", 576 | "version": 3 577 | }, 578 | "file_extension": ".py", 579 | "mimetype": "text/x-python", 580 | "name": "python", 581 | "nbconvert_exporter": "python", 582 | "pygments_lexer": "ipython3", 583 | "version": "3.5.2" 584 | } 585 | }, 586 | "nbformat": 4, 587 | "nbformat_minor": 2 588 | } 589 | -------------------------------------------------------------------------------- /02_images_and_resampling.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

SimpleITK Images and Resampling

\n", 8 | "\n", 9 | "\n", 10 | "**Summary:** \n", 11 | "\n", 12 | "1. Images occupy a region in physical space which is defined by:\n", 13 | " * Origin.\n", 14 | " * Size (number of pixels per dimension).\n", 15 | " * Spacing (unknown consistent units: nm, mm, m, km...).\n", 16 | " * Direction cosine matrix (axis directions in physical space).\n", 17 | " \n", 18 | " These attributes are the image's meta-data. Computing the physical coordinates from image indexes requires all four components.\n", 19 | "\n", 20 | "2. An image may contain a meta-data dictionary. This supplemental information often includes the image modality (e.g. CT), patient name, and information with respect to the image acquisition. \n", 21 | "3. Image initialization: user specified pixel type, user specified dimensionality (2,3), origin at zero, unit spacing in all dimensions and identity direction cosine matrix, intensities set to zero.\n", 22 | "4. Data transfer to/from numpy: GetArrayFromImage (copy), GetArrayViewFromImage (immutable), GetImageFromArray (copy) + set the meta-data yourself. \n", 23 | "5. A common issue with resampling resulting in an all black image is due to (a) incorrect specification of the \n", 24 | "desired output image's spatial domain (its meta-data); or (b) the use of the inverse of the transformation mapping from the output spatial domain to the resampled image." 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "## Images are Physical Objects\n", 32 | "\n", 33 | "

" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": {}, 39 | "source": [ 40 | "### Pixel Types\n", 41 | "\n", 42 | "The pixel type is represented as an enumerated type. The following is a table of the enumerated list.\n", 43 | "\n", 44 | "\n", 45 | " \n", 46 | " \n", 47 | " \n", 48 | " \n", 49 | " \n", 50 | " \n", 51 | " \n", 52 | " \n", 53 | " \n", 54 | " \n", 55 | " \n", 56 | " \n", 57 | " \n", 58 | " \n", 59 | " \n", 60 | " \n", 61 | " \n", 62 | " \n", 63 | " \n", 64 | " \n", 65 | " \n", 66 | " \n", 67 | " \n", 68 | " \n", 69 | " \n", 70 | " \n", 71 | "
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>Complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>Complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
\n", 72 | "\n", 73 | "There is also `sitkUnknown`, which is used for undefined or erroneous pixel ID's.\n", 74 | "\n", 75 | "Some filters only work with images with a specific pixel type. The primary example is the registration framework which works with sitkFloat32 or sitkFloat64. To address this issue you can either specify the appropriate pixel type when reading or creating the image, or use the Cast function. " 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "execution_count": null, 81 | "metadata": { 82 | "collapsed": true 83 | }, 84 | "outputs": [], 85 | "source": [ 86 | "import SimpleITK as sitk\n", 87 | "\n", 88 | "import numpy as np\n", 89 | "import os\n", 90 | "from ipywidgets import interact, fixed\n", 91 | "\n", 92 | "import matplotlib.pyplot as plt\n", 93 | "%matplotlib inline\n", 94 | "\n", 95 | "from downloaddata import fetch_data as fdata\n", 96 | "\n", 97 | "OUTPUT_DIR = 'output'" 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": {}, 103 | "source": [ 104 | "## Image Creation\n", 105 | "\n", 106 | "The following components are required for a complete definition of an image:\n", 107 | "
    \n", 108 | "
  1. Pixel type [fixed on creation, no default]: unsigned 32 bit integer, sitkVectorUInt8, etc., see list above.
  2. \n", 109 | "
  3. Sizes [fixed on creation, no default]: number of pixels/voxels in each dimension. This quantity implicitly defines the image dimension.
  4. \n", 110 | "
  5. Origin [default is zero]: coordinates of the pixel/voxel with index (0,0,0) in physical units (i.e. mm).
  6. \n", 111 | "
  7. Spacing [default is one]: Distance between adjacent pixels/voxels in each dimension given in physical units.
  8. \n", 112 | "
  9. Direction matrix [default is identity]: mapping, rotation, between direction of the pixel/voxel axes and physical directions.
  10. \n", 113 | "
\n", 114 | "\n", 115 | "Initial pixel/voxel values are set to zero." 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": null, 121 | "metadata": { 122 | "collapsed": false, 123 | "simpleitk_error_allowed": "Exception thrown in SimpleITK Show:" 124 | }, 125 | "outputs": [], 126 | "source": [ 127 | "image_3D = sitk.Image(256, 128, 64, sitk.sitkInt16)\n", 128 | "image_2D = sitk.Image(64, 64, sitk.sitkFloat32)\n", 129 | "image_RGB = sitk.Image([128,64], sitk.sitkVectorUInt8, 3)\n", 130 | "\n", 131 | "sitk.Show(image_3D)\n", 132 | "sitk.Show(image_RGB)" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": { 138 | "collapsed": true 139 | }, 140 | "source": [ 141 | "Or, creation from file." 142 | ] 143 | }, 144 | { 145 | "cell_type": "code", 146 | "execution_count": null, 147 | "metadata": { 148 | "collapsed": false 149 | }, 150 | "outputs": [], 151 | "source": [ 152 | "logo = sitk.ReadImage(fdata('SimpleITK.jpg'))\n", 153 | "\n", 154 | "# GetArrayViewFromImage returns an immutable numpy array view to the data.\n", 155 | "plt.imshow(sitk.GetArrayViewFromImage(logo))\n", 156 | "plt.axis('off');" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "## Basic Image Attributes (Meta-Data)\n", 164 | "\n", 165 | "You can change the image origin, spacing and direction. Making such changes to an image already containing data should be done cautiously. 
" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "metadata": { 172 | "collapsed": false 173 | }, 174 | "outputs": [], 175 | "source": [ 176 | "selected_image = image_3D\n", 177 | "print('Before modification:')\n", 178 | "print('origin: ' + str(selected_image.GetOrigin()))\n", 179 | "print('size: ' + str(selected_image.GetSize()))\n", 180 | "print('spacing: ' + str(selected_image.GetSpacing()))\n", 181 | "print('direction: ' + str(selected_image.GetDirection()))\n", 182 | "print('pixel type: ' + str(selected_image.GetPixelIDTypeAsString()))\n", 183 | "print('number of pixel components: ' + str(selected_image.GetNumberOfComponentsPerPixel()))\n", 184 | "\n", 185 | "selected_image.SetOrigin((78.0, 76.0, 77.0))\n", 186 | "selected_image.SetSpacing([0.5,0.5,3.0])\n", 187 | "\n", 188 | "print('\\nAfter modification:')\n", 189 | "print('origin: ' + str(selected_image.GetOrigin()))\n", 190 | "print('spacing: ' + str(selected_image.GetSpacing()))" 191 | ] 192 | }, 193 | { 194 | "cell_type": "markdown", 195 | "metadata": {}, 196 | "source": [ 197 | "## Accessing Pixels and Slicing\n", 198 | "\n", 199 | "Either use the ``GetPixel`` and ``SetPixel`` functions or the Pythonic slicing operator. The access functions and image slicing operator are in [x,y,z] order." 
200 | ] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "execution_count": null, 205 | "metadata": { 206 | "collapsed": false 207 | }, 208 | "outputs": [], 209 | "source": [ 210 | "print(image_3D.GetPixel(0, 0, 0))\n", 211 | "image_3D.SetPixel(0, 0, 0, 1)\n", 212 | "print(image_3D.GetPixel(0, 0, 0))\n", 213 | "\n", 214 | "# This can also be done using Pythonic notation.\n", 215 | "print(image_3D[0,0,1])\n", 216 | "image_3D[0,0,1] = 2\n", 217 | "print(image_3D[0,0,1])" 218 | ] 219 | }, 220 | { 221 | "cell_type": "code", 222 | "execution_count": null, 223 | "metadata": { 224 | "collapsed": false 225 | }, 226 | "outputs": [], 227 | "source": [ 228 | "# Brute force sub-sampling \n", 229 | "logo_subsampled = logo[::2,::2]\n", 230 | "\n", 231 | "# Get the sub-image containing the word Simple\n", 232 | "simple = logo[0:115,:]\n", 233 | "\n", 234 | "# Get the sub-image containing the word Simple and flip it\n", 235 | "simple_flipped = logo[115:0:-1,:]\n", 236 | "\n", 237 | "n = 4\n", 238 | "\n", 239 | "plt.subplot(n,1,1)\n", 240 | "plt.imshow(sitk.GetArrayViewFromImage(logo))\n", 241 | "plt.axis('off');\n", 242 | "\n", 243 | "plt.subplot(n,1,2)\n", 244 | "plt.imshow(sitk.GetArrayViewFromImage(logo_subsampled))\n", 245 | "plt.axis('off');\n", 246 | "\n", 247 | "plt.subplot(n,1,3)\n", 248 | "plt.imshow(sitk.GetArrayViewFromImage(simple))\n", 249 | "plt.axis('off')\n", 250 | "\n", 251 | "plt.subplot(n,1,4)\n", 252 | "plt.imshow(sitk.GetArrayViewFromImage(simple_flipped))\n", 253 | "plt.axis('off');" 254 | ] 255 | }, 256 | { 257 | "cell_type": "markdown", 258 | "metadata": {}, 259 | "source": [ 260 | "## Image operations\n", 261 | "\n", 262 | "SimpleITK supports basic arithmetic operations between images while taking into account their meta-data. Images must physically overlap (pixel by pixel).\n", 263 | "\n", 264 | "How close do physical attributes (meta-data values) need to be in order to be considered equivalent?" 
265 | ] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": null, 270 | "metadata": { 271 | "collapsed": false 272 | }, 273 | "outputs": [], 274 | "source": [ 275 | "img_width = 128\n", 276 | "img_height = 64\n", 277 | "img1 = sitk.Image((img_width, img_height), sitk.sitkUInt8)\n", 278 | "for i in range(img_width):\n", 279 | " img1[i,1] = 5\n", 280 | "\n", 281 | "img2 = sitk.Image(img1.GetSize(), sitk.sitkUInt8)\n", 282 | "#img2.SetDirection([0,1,0.5,0.5])\n", 283 | "img2.SetOrigin([0.000001,0.000001])\n", 284 | "for i in range(img_width):\n", 285 | " img2[i,1] = 120\n", 286 | " img2[i,img_height//2] = 60\n", 287 | "\n", 288 | "img3 = img1 + img2\n", 289 | "\n", 290 | "plt.imshow(sitk.GetArrayViewFromImage(img3), cmap=plt.cm.Greys_r)\n", 291 | "plt.axis('off');" 292 | ] 293 | }, 294 | { 295 | "cell_type": "markdown", 296 | "metadata": {}, 297 | "source": [ 298 | "Comparative operators (>, >=, <, <=, ==) are also supported, returning binary images." 299 | ] 300 | }, 301 | { 302 | "cell_type": "code", 303 | "execution_count": null, 304 | "metadata": { 305 | "collapsed": false 306 | }, 307 | "outputs": [], 308 | "source": [ 309 | "thresholded_image = img3>50\n", 310 | "plt.imshow(sitk.GetArrayViewFromImage(thresholded_image), cmap=plt.cm.Greys_r)\n", 311 | "plt.axis('off');" 312 | ] 313 | }, 314 | { 315 | "cell_type": "markdown", 316 | "metadata": {}, 317 | "source": [ 318 | "## SimpleITK2Numpy and Numpy2SimpleITK\n", 319 | "\n", 320 | "SimpleITK and numpy indexing access is in opposite order! \n", 321 | "\n", 322 | "SimpleITK: image[x,y,z]
\n", 323 | "numpy: image_numpy_array[z,y,x]\n", 324 | "\n", 325 | "### SimpleITK2Numpy\n", 326 | "\n", 327 | "1. ```GetArrayFromImage()```: returns a copy of the image data. You can then freely modify the data as it has no effect on the original SimpleITK image.\n", 328 | "2. ```GetArrayViewFromImage()```: returns a view on the image data which is useful for display in a memory efficient manner. You cannot modify the data and __the view will be invalid if the original SimpleITK image is deleted__.\n", 329 | "\n", 330 | "### Numpy2SimpleITK\n", 331 | "1. ```GetImageFromArray()```: returns a SimpleITK image with origin set to zero, spacing set to one for all dimensions, and the direction cosine matrix set to identity. Intensity data is copied from the numpy array. __In most cases you will need to set appropriate meta-data values.__ \n" 332 | ] 333 | }, 334 | { 335 | "cell_type": "code", 336 | "execution_count": null, 337 | "metadata": { 338 | "collapsed": false 339 | }, 340 | "outputs": [], 341 | "source": [ 342 | "nda = sitk.GetArrayFromImage(image_3D)\n", 343 | "print(image_3D.GetSize())\n", 344 | "print(nda.shape)\n", 345 | "\n", 346 | "nda = sitk.GetArrayFromImage(image_RGB)\n", 347 | "print(image_RGB.GetSize())\n", 348 | "print(nda.shape)" 349 | ] 350 | }, 351 | { 352 | "cell_type": "code", 353 | "execution_count": null, 354 | "metadata": { 355 | "collapsed": false 356 | }, 357 | "outputs": [], 358 | "source": [ 359 | "gabor_image = sitk.GaborSource(size=[64,64], frequency=.03)\n", 360 | "# Getting a numpy array view on the image data doesn't copy the data\n", 361 | "nda_view = sitk.GetArrayViewFromImage(gabor_image)\n", 362 | "plt.imshow(nda_view, cmap=plt.cm.Greys_r)\n", 363 | "plt.axis('off');" 364 | ] 365 | }, 366 | { 367 | "cell_type": "code", 368 | "execution_count": null, 369 | "metadata": { 370 | "collapsed": false 371 | }, 372 | "outputs": [], 373 | "source": [ 374 | "nda = np.zeros((10,20,3))\n", 375 | "\n", 376 | " #if this is supposed to be a 3D gray 
scale image [x=3, y=20, z=10]\n", 377 | "img = sitk.GetImageFromArray(nda)\n", 378 | "print(img.GetSize())\n", 379 | "\n", 380 | " #if this is supposed to be a 2D color image [x=20,y=10]\n", 381 | "img = sitk.GetImageFromArray(nda, isVector=True)\n", 382 | "print(img.GetSize())" 383 | ] 384 | }, 385 | { 386 | "cell_type": "markdown", 387 | "metadata": {}, 388 | "source": [ 389 | "## Reading and Writing\n", 390 | "\n", 391 | "SimpleITK can read and write images stored in a single file, or a set of files (e.g. DICOM series). The toolkit provides both an object oriented and a procedural interface. The primary difference is that the object oriented approach provides more control while the procedural interface is more convenient.\n", 392 | "\n", 393 | "We look at DICOM images as an example illustrating this difference. Images stored in the DICOM format have a meta-data dictionary associated with them, which is populated with the DICOM tags. When a DICOM image series is read as a single image volume, the resulting image's meta-data dictionary is not populated since DICOM tags are specific to each of the files in the series. If you use the procedural method for reading the series then you do not have access to the set of meta-data dictionaries associated with each of the files. To obtain each dictionary you will have to access each of the files separately. On the other hand, if you use the object oriented interface, the set of dictionaries will be accessible from the ```ImageSeriesReader``` which you used to read the DICOM series. The meta-data dictionary for each file is available using the HasMetaDataKey and GetMetaData methods. \n", 394 | "\n", 395 | "We start with reading and writing an image using the procedural interface." 
396 | ] 397 | }, 398 | { 399 | "cell_type": "code", 400 | "execution_count": null, 401 | "metadata": { 402 | "collapsed": false 403 | }, 404 | "outputs": [], 405 | "source": [ 406 | "img = sitk.ReadImage(fdata('SimpleITK.jpg'))\n", 407 | "sitk.WriteImage(img, os.path.join(OUTPUT_DIR, 'SimpleITK.png'))" 408 | ] 409 | }, 410 | { 411 | "cell_type": "markdown", 412 | "metadata": {}, 413 | "source": [ 414 | "Read an image in JPEG format and cast the pixel type according to user selection." 415 | ] 416 | }, 417 | { 418 | "cell_type": "code", 419 | "execution_count": null, 420 | "metadata": { 421 | "collapsed": false 422 | }, 423 | "outputs": [], 424 | "source": [ 425 | "# Several pixel types, some make sense in this case (vector types) and some are just to show\n", 426 | "# that the user's choice will force the pixel type even when it doesn't make sense.\n", 427 | "pixel_types = { 'sitkVectorUInt8' : sitk.sitkVectorUInt8,\n", 428 | " 'sitkVectorUInt16' : sitk.sitkVectorUInt16,\n", 429 | " 'sitkVectorFloat64' : sitk.sitkVectorFloat64}\n", 430 | "\n", 431 | "def pixel_type_dropdown_callback(pixel_type, pixel_types_dict):\n", 432 | " # Specify the file location and the pixel type we want\n", 433 | " img = sitk.ReadImage(fdata('SimpleITK.jpg'), pixel_types_dict[pixel_type])\n", 434 | " \n", 435 | " print(img.GetPixelIDTypeAsString())\n", 436 | " print(img[0,0])\n", 437 | " plt.imshow(sitk.GetArrayViewFromImage(img))\n", 438 | " plt.axis('off')\n", 439 | " \n", 440 | "interact(pixel_type_dropdown_callback, pixel_type=list(pixel_types.keys()), pixel_types_dict=fixed(pixel_types)); " 441 | ] 442 | }, 443 | { 444 | "cell_type": "markdown", 445 | "metadata": {}, 446 | "source": [ 447 | "Read a DICOM series and write it as a single mha file." 
448 | ] 449 | }, 450 | { 451 | "cell_type": "code", 452 | "execution_count": null, 453 | "metadata": { 454 | "collapsed": false 455 | }, 456 | "outputs": [], 457 | "source": [ 458 | "data_directory = os.path.dirname(fdata(\"CIRS057A_MR_CT_DICOM/readme.txt\"))\n", 459 | "series_ID = '1.2.840.113619.2.290.3.3233817346.783.1399004564.515'\n", 460 | "\n", 461 | "# Use the functional interface to read the image series.\n", 462 | "original_image = sitk.ReadImage(sitk.ImageSeriesReader_GetGDCMSeriesFileNames(data_directory,series_ID))\n", 463 | "\n", 464 | "# Write the image.\n", 465 | "output_file_name_3D = os.path.join(OUTPUT_DIR, '3DImage.mha')\n", 466 | "sitk.WriteImage(original_image, output_file_name_3D)" 467 | ] 468 | }, 469 | { 470 | "cell_type": "markdown", 471 | "metadata": {}, 472 | "source": [ 473 | "Select a specific DICOM series from a directory and only then load user selection." 474 | ] 475 | }, 476 | { 477 | "cell_type": "code", 478 | "execution_count": null, 479 | "metadata": { 480 | "collapsed": false 481 | }, 482 | "outputs": [], 483 | "source": [ 484 | "data_directory = os.path.dirname(fdata(\"CIRS057A_MR_CT_DICOM/readme.txt\"))\n", 485 | "# Global variable 'selected_series' is updated by the interact function\n", 486 | "selected_series = ''\n", 487 | "def DICOM_series_dropdown_callback(series_to_load, series_dictionary):\n", 488 | " global selected_series\n", 489 | " # Print some information about the series from the meta-data dictionary\n", 490 | " # DICOM standard part 6, Data Dictionary: http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf\n", 491 | " img = sitk.ReadImage(series_dictionary[series_to_load][0])\n", 492 | " tags_to_print = {'0010|0010': 'Patient name: ', \n", 493 | " '0008|0060' : 'Modality: ',\n", 494 | " '0008|0021' : 'Series date: ',\n", 495 | " '0008|0080' : 'Institution name: ',\n", 496 | " '0008|1050' : 'Performing physician\\'s name: '}\n", 497 | " for tag in tags_to_print:\n", 498 | " try:\n", 499 | " 
print(tags_to_print[tag] + img.GetMetaData(tag))\n", 500 | " except: # Ignore if the tag isn't in the dictionary\n", 501 | " pass\n", 502 | " selected_series = series_to_load \n", 503 | "\n", 504 | "# Directory contains multiple DICOM studies/series, store\n", 505 | "# in dictionary with key being the series ID\n", 506 | "series_file_names = {}\n", 507 | "series_IDs = sitk.ImageSeriesReader_GetGDCMSeriesIDs(data_directory)\n", 508 | " # Check that we have at least one series\n", 509 | "if series_IDs:\n", 510 | " for series in series_IDs:\n", 511 | " series_file_names[series] = sitk.ImageSeriesReader_GetGDCMSeriesFileNames(data_directory, series)\n", 512 | " \n", 513 | " interact(DICOM_series_dropdown_callback, series_to_load=list(series_IDs), series_dictionary=fixed(series_file_names)); \n", 514 | "else:\n", 515 | " print('Data directory does not contain any DICOM series.')" 516 | ] 517 | }, 518 | { 519 | "cell_type": "code", 520 | "execution_count": null, 521 | "metadata": { 522 | "collapsed": false 523 | }, 524 | "outputs": [], 525 | "source": [ 526 | "img = sitk.ReadImage(series_file_names[selected_series])\n", 527 | "# Display the image slice from the middle of the stack, z axis\n", 528 | "z = img.GetDepth()//2\n", 529 | "plt.imshow(sitk.GetArrayViewFromImage(img)[z,:,:], cmap=plt.cm.Greys_r)\n", 530 | "plt.axis('off');" 531 | ] 532 | }, 533 | { 534 | "cell_type": "markdown", 535 | "metadata": { 536 | "collapsed": true 537 | }, 538 | "source": [ 539 | "Write the volume as a series of JPEGs. The WriteImage function receives a volume and a list of image names, and writes the volume slice by slice along the z axis. For a displayable result we need to rescale the image intensities (default range is [0,255]) and cast to the UInt8 pixel type required by the JPEG format." 
540 | ] 541 | }, 542 | { 543 | "cell_type": "code", 544 | "execution_count": null, 545 | "metadata": { 546 | "collapsed": true 547 | }, 548 | "outputs": [], 549 | "source": [ 550 | "sitk.WriteImage(sitk.Cast(sitk.RescaleIntensity(img), sitk.sitkUInt8), \n", 551 | " [os.path.join(OUTPUT_DIR, 'slice{0:03d}.jpg'.format(i)) for i in range(img.GetSize()[2])]) " 552 | ] 553 | }, 554 | { 555 | "cell_type": "markdown", 556 | "metadata": {}, 557 | "source": [ 558 | "## Resampling\n", 559 | "\n", 560 | "

\n", 561 | "\n", 562 | "Resampling as the verb implies is the action of sampling an image, which itself is a sampling of an original continuous signal.\n", 563 | "\n", 564 | "Generally speaking, resampling in SimpleITK involves four components:\n", 565 | "1. Image - the image we resample, given in coordinate system $m$.\n", 566 | "2. Resampling grid - a regular grid of points given in coordinate system $f$ which will be mapped to coordinate system $m$.\n", 567 | "2. Transformation $T_f^m$ - maps points from coordinate system $f$ to coordinate system $m$, $^mp = T_f^m(^fp)$.\n", 568 | "3. Interpolator - method for obtaining the intensity values at arbitrary points in coordinate system $m$ from the values of the points defined by the Image.\n", 569 | "\n", 570 | "\n", 571 | "While SimpleITK provides a large number of interpolation methods, the two most commonly used are ```sitkLinear``` and ```sitkNearestNeighbor```. The former is used for most interpolation tasks, a compromise between accuracy and computational efficiency. The later is used to interpolate labeled images representing a segmentation, it is the only interpolation approach which will not introduce new labels into the result.\n", 572 | "\n", 573 | "SimpleITK's procedural API provides three methods for performing resampling, with the difference being the way you specify the resampling grid:\n", 574 | "\n", 575 | "1. ```Resample(const Image &image1, Transform transform, InterpolatorEnum interpolator, double defaultPixelValue, PixelIDValueEnum outputPixelType)```\n", 576 | "2. ```Resample(const Image &image1, const Image &referenceImage, Transform transform, InterpolatorEnum interpolator, double defaultPixelValue, PixelIDValueEnum outputPixelType)```\n", 577 | "3. 
```Resample(const Image &image1, std::vector< uint32_t > size, Transform transform, InterpolatorEnum interpolator, std::vector< double > outputOrigin, std::vector< double > outputSpacing, std::vector< double > outputDirection, double defaultPixelValue, PixelIDValueEnum outputPixelType)```" 578 | ] 579 | }, 580 | { 581 | "cell_type": "code", 582 | "execution_count": null, 583 | "metadata": { 584 | "collapsed": false 585 | }, 586 | "outputs": [], 587 | "source": [ 588 | "def resample_display(image, euler2d_transform, tx, ty, theta):\n", 589 | " euler2d_transform.SetTranslation((tx, ty))\n", 590 | " euler2d_transform.SetAngle(theta)\n", 591 | " \n", 592 | " resampled_image = sitk.Resample(image, euler2d_transform)\n", 593 | " plt.imshow(sitk.GetArrayFromImage(resampled_image))\n", 594 | " plt.axis('off') \n", 595 | " plt.show()\n", 596 | "\n", 597 | "euler2d = sitk.Euler2DTransform()\n", 598 | "# Why do we set the center?\n", 599 | "euler2d.SetCenter(logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize())/2.0))\n", 600 | "interact(resample_display, image=fixed(logo), euler2d_transform=fixed(euler2d), tx=(-128.0, 128.0,2.5), ty=(-64.0, 64.0), theta=(-np.pi/4.0,np.pi/4.0));" 601 | ] 602 | }, 603 | { 604 | "cell_type": "markdown", 605 | "metadata": {}, 606 | "source": [ 607 | "### Common Errors\n", 608 | "\n", 609 | "It is not uncommon to end up with an empty (all black) image after resampling. This is due to:\n", 610 | "1. Using the wrong settings for the resampling grid; not too common, but it does happen.\n", 611 | "2. Using the inverse of the transformation $T_f^m$. This is a relatively common error, which is readily addressed by invoking the transformation's ```GetInverse``` method." 612 | ] 613 | }, 614 | { 615 | "cell_type": "markdown", 616 | "metadata": {}, 617 | "source": [ 618 | "### Defining the Resampling Grid\n", 619 | "\n", 620 | "In the example above we arbitrarily used the original image grid as the resampling grid. 
As a result, for many of the transformations the resulting image contained black pixels (pixels that were mapped outside the spatial domain of the original image) and only a partial view of the original image.\n", 621 | "\n", 622 | "If we want the resulting image to contain all of the original image no matter the transformation, we will need to define the resampling grid using our knowledge of the original image's spatial domain and the **inverse** of the given transformation. \n", 623 | "\n", 624 | "Computing the bounds of the resampling grid when dealing with an affine transformation is straightforward. An affine transformation preserves convexity with extreme points mapped to extreme points. Thus we only need to apply the **inverse** transformation to the corners of the original image to obtain the bounds of the resampling grid.\n", 625 | "\n", 626 | "Computing the bounds of the resampling grid when dealing with a BSplineTransform or DisplacementFieldTransform is more involved as we are not guaranteed that extreme points are mapped to extreme points. This requires that we apply the **inverse** transformation to all points in the original image to obtain the bounds of the resampling grid. 
" 627 | ] 628 | }, 629 | { 630 | "cell_type": "code", 631 | "execution_count": null, 632 | "metadata": { 633 | "collapsed": false 634 | }, 635 | "outputs": [], 636 | "source": [ 637 | "euler2d = sitk.Euler2DTransform()\n", 638 | "# Why do we set the center?\n", 639 | "euler2d.SetCenter(logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize())/2.0))\n", 640 | "\n", 641 | "tx = 64\n", 642 | "ty = 32\n", 643 | "euler2d.SetTranslation((tx, ty))\n", 644 | "\n", 645 | "extreme_points = [logo.TransformIndexToPhysicalPoint((0,0)), \n", 646 | " logo.TransformIndexToPhysicalPoint((logo.GetWidth(),0)),\n", 647 | " logo.TransformIndexToPhysicalPoint((logo.GetWidth(),logo.GetHeight())),\n", 648 | " logo.TransformIndexToPhysicalPoint((0,logo.GetHeight()))]\n", 649 | "inv_euler2d = euler2d.GetInverse()\n", 650 | "\n", 651 | "extreme_points_transformed = [inv_euler2d.TransformPoint(pnt) for pnt in extreme_points]\n", 652 | "min_x = min(extreme_points_transformed)[0]\n", 653 | "min_y = min(extreme_points_transformed, key=lambda p: p[1])[1]\n", 654 | "max_x = max(extreme_points_transformed)[0]\n", 655 | "max_y = max(extreme_points_transformed, key=lambda p: p[1])[1]\n", 656 | "\n", 657 | "# Use the original spacing (arbitrary decision).\n", 658 | "output_spacing = logo.GetSpacing()\n", 659 | "# Identity cosine matrix (arbitrary decision). 
\n", 660 | "output_direction = [1.0, 0.0, 0.0, 1.0]\n", 661 | "# Minimal x,y coordinates are the new origin.\n", 662 | "output_origin = [min_x, min_y]\n", 663 | "# Compute grid size based on the physical size and spacing.\n", 664 | "output_size = [int((max_x-min_x)/output_spacing[0]), int((max_y-min_y)/output_spacing[1])]\n", 665 | "\n", 666 | "resampled_image = sitk.Resample(logo, output_size, euler2d, sitk.sitkLinear, output_origin, output_spacing, output_direction)\n", 667 | "plt.imshow(sitk.GetArrayViewFromImage(resampled_image))\n", 668 | "plt.axis('off') \n", 669 | "plt.show()" 670 | ] 671 | }, 672 | { 673 | "cell_type": "markdown", 674 | "metadata": {}, 675 | "source": [ 676 | "Are you puzzled by the result? Is the output just a copy of the input? Add a rotation to the code above and see what happens (```euler2d.SetAngle(0.79)```)." 677 | ] 678 | }, 679 | { 680 | "cell_type": "markdown", 681 | "metadata": {}, 682 | "source": [ 683 | "

Next »

" 684 | ] 685 | } 686 | ], 687 | "metadata": { 688 | "kernelspec": { 689 | "display_name": "Python 3", 690 | "language": "python", 691 | "name": "python3" 692 | }, 693 | "language_info": { 694 | "codemirror_mode": { 695 | "name": "ipython", 696 | "version": 3 697 | }, 698 | "file_extension": ".py", 699 | "mimetype": "text/x-python", 700 | "name": "python", 701 | "nbconvert_exporter": "python", 702 | "pygments_lexer": "ipython3", 703 | "version": "3.5.2" 704 | } 705 | }, 706 | "nbformat": 4, 707 | "nbformat_minor": 2 708 | } 709 | -------------------------------------------------------------------------------- /04_basic_registration.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

Basic Registration

\n", 8 | "\n", 9 | "\n", 10 | "**Summary:**\n", 11 | "\n", 12 | "1. Creating an instance of the registration framework requires selection of the following components:\n", 13 | " * Optimizer.\n", 14 | " * Similarity metric.\n", 15 | " * Interpolator.\n", 16 | "2. The registration framework only supports images with sitkFloat32 and sitkFloat64 pixel types (use the SimpleITK Cast() function if your image's pixel type is something else).\n", 17 | "\n", 18 | "3. Successful registration is highly dependent on initialization. In general you can:\n", 19 | " * Use auxiliary information or user interaction to obtain an initial transformation (avoid resampling).\n", 20 | " * Center the images using the CenteredTransformInitializer.\n", 21 | " * Coarsely sample the parameter space using the Exhaustive Optimizer to obtain one or more initial transformation estimates.\n", 22 | " * Manually initialize, via direct manipulation of transformation parameters and visualization or localization of corresponding points in the two images and then use the LandmarkBasedTransformInitializer." 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "## Registration Components \n", 30 | "\n", 31 | "

\n", 32 | "\n", 33 | "There are many options for creating an instance of the registration framework, all of which are configured in SimpleITK via methods of the ImageRegistrationMethod class. This class encapsulates many of the components available in ITK for constructing a registration instance.\n", 34 | "\n", 35 | "Currently, the available choices from the following groups of ITK components are:\n", 36 | "\n", 37 | "### Optimizers\n", 38 | "\n", 39 | "The SimpleITK registration framework supports several optimizer types via the SetOptimizerAsX() methods, these include:\n", 40 | "\n", 41 | "\n", 75 | "\n", 76 | " \n", 77 | "### Similarity metrics\n", 78 | "\n", 79 | "The SimpleITK registration framework supports several metric types via the SetMetricAsX() methods, these include:\n", 80 | "\n", 81 | "\n", 101 | "\n", 102 | "\n", 103 | "### Interpolators\n", 104 | "\n", 105 | "The SimpleITK registration framework supports several interpolators via the SetInterpolator() method, which receives one of\n", 106 | "the following enumerations:\n", 107 | "" 118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": null, 123 | "metadata": { 124 | "collapsed": true 125 | }, 126 | "outputs": [], 127 | "source": [ 128 | "import SimpleITK as sitk\n", 129 | "from downloaddata import fetch_data as fdata\n", 130 | "import gui\n", 131 | "import registration_gui as rgui\n", 132 | "%matplotlib notebook\n", 133 | "\n", 134 | "import numpy as np\n", 135 | "import os\n", 136 | "OUTPUT_DIR = 'output'" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "## Read images\n", 144 | "\n", 145 | "We first read the images, specifying the pixel type that is required for registration (Float32 or Float64) and look at them. In this notebook we use a CT and MR image from the same patient. These are part of the training data from the Retrospective Image Registration Evaluation (RIRE) project." 
146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": { 152 | "collapsed": false 153 | }, 154 | "outputs": [], 155 | "source": [ 156 | "fixed_image = sitk.ReadImage(fdata(\"training_001_ct.mha\"), sitk.sitkFloat32)\n", 157 | "moving_image = sitk.ReadImage(fdata(\"training_001_mr_T1.mha\"), sitk.sitkFloat32)\n", 158 | "\n", 159 | "ct_window_level = [835,162]\n", 160 | "mr_window_level = [1036,520]\n", 161 | "\n", 162 | "gui.MultiImageDisplay(image_list = [fixed_image, moving_image], \n", 163 | " title_list = ['fixed', 'moving'], figure_size=(8,4), window_level_list=[ct_window_level, mr_window_level]);" 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 170 | "## Classic Registration\n", 171 | "\n", 172 | "Estimate a 3D rigid transformation between images of different modalities. \n", 173 | "\n", 174 | "We have made the following choices with respect to initialization and registration component settings:\n", 175 | "\n", 176 | "\n", 195 | "\n", 196 | "We initialize registration by aligning the centers of the two volumes. To qualitatively evaluate the result we use a linked cursor approach, click on one image and the corresponding point is added to the other image." 
197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": null, 202 | "metadata": { 203 | "collapsed": false 204 | }, 205 | "outputs": [], 206 | "source": [ 207 | "initial_transform = sitk.CenteredTransformInitializer(fixed_image, \n", 208 | " moving_image, \n", 209 | " sitk.Euler3DTransform(), \n", 210 | " sitk.CenteredTransformInitializerFilter.GEOMETRY)\n", 211 | "\n", 212 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=initial_transform, fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "Run the next cell three times:\n", 220 | "1. As is.\n", 221 | "2. Uncomment the automated optimizer scale setting so that a change in rotation (radians) has a similar effect to a change in translation (mm).\n", 222 | "3. Uncomment the multi-resolution settings." 223 | ] 224 | }, 225 | { 226 | "cell_type": "code", 227 | "execution_count": null, 228 | "metadata": { 229 | "collapsed": false 230 | }, 231 | "outputs": [], 232 | "source": [ 233 | "registration_method = sitk.ImageRegistrationMethod()\n", 234 | "\n", 235 | "# Similarity metric settings.\n", 236 | "registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\n", 237 | "registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n", 238 | "registration_method.SetMetricSamplingPercentage(0.01)\n", 239 | "\n", 240 | "registration_method.SetInterpolator(sitk.sitkLinear)\n", 241 | "\n", 242 | "# Optimizer settings.\n", 243 | "registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)\n", 244 | "#registration_method.SetOptimizerScalesFromPhysicalShift()\n", 245 | "\n", 246 | "# Setup for the multi-resolution framework. 
\n", 247 | "#registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])\n", 248 | "#registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])\n", 249 | "#registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n", 250 | "\n", 251 | "# Don't optimize in-place, we would possibly like to run this cell multiple times.\n", 252 | "registration_method.SetInitialTransform(initial_transform, inPlace=False)\n", 253 | "\n", 254 | "# Connect all of the observers so that we can perform plotting during registration.\n", 255 | "registration_method.AddCommand(sitk.sitkStartEvent, rgui.start_plot)\n", 256 | "registration_method.AddCommand(sitk.sitkEndEvent, rgui.end_plot)\n", 257 | "registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, rgui.update_multires_iterations) \n", 258 | "registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rgui.plot_values(registration_method))\n", 259 | "\n", 260 | "final_transform = registration_method.Execute(fixed_image, moving_image)\n", 261 | "\n", 262 | "# Always check the reason optimization terminated.\n", 263 | "print('Final metric value: {0}'.format(registration_method.GetMetricValue()))\n", 264 | "print('Optimizer\\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "Qualitatively evaluate the result using a linked cursor approach (visual evaluation):" 272 | ] 273 | }, 274 | { 275 | "cell_type": "code", 276 | "execution_count": null, 277 | "metadata": { 278 | "collapsed": false 279 | }, 280 | "outputs": [], 281 | "source": [ 282 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=final_transform,fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 283 | ] 284 | }, 285 | { 286 | "cell_type": "markdown", 287 | "metadata": {}, 288 | "source": [ 289 | "If we are 
satisfied with the results, save them to file." 290 | ] 291 | }, 292 | { 293 | "cell_type": "code", 294 | "execution_count": null, 295 | "metadata": { 296 | "collapsed": true 297 | }, 298 | "outputs": [], 299 | "source": [ 300 | "moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelID())\n", 301 | "sitk.WriteImage(moving_resampled, os.path.join(OUTPUT_DIR, 'RIRE_training_001_mr_T1_resampled.mha'))\n", 302 | "sitk.WriteTransform(final_transform, os.path.join(OUTPUT_DIR, 'RIRE_training_001_CT_2_mr_T1.tfm'))" 303 | ] 304 | }, 305 | { 306 | "cell_type": "markdown", 307 | "metadata": {}, 308 | "source": [ 309 | "## ITKv4 Coordinate Systems\n", 310 | "\n", 311 | "Unlike the classical registration approach where the fixed and moving images are treated differently, the ITKv4 registration framework allows you to treat both images in the same manner. This is achieved by introducing a third coordinate system, the virtual image domain.\n", 312 | "\n", 313 | "

\n", 314 | "\n", 315 | "Thus, the ITK v4 registration framework deals with three transformations:\n", 316 | "\n", 327 | "\n", 328 | "The transformation that maps points from the fixed to moving image domains is thus: $^M\\mathbf{p} = T_{opt}(T_m(T_f^{-1}(^F\\mathbf{p})))$\n", 329 | "\n", 330 | "We now modify the previous example to use $T_{opt}$ and $T_m$." 331 | ] 332 | }, 333 | { 334 | "cell_type": "code", 335 | "execution_count": null, 336 | "metadata": { 337 | "collapsed": false 338 | }, 339 | "outputs": [], 340 | "source": [ 341 | "registration_method = sitk.ImageRegistrationMethod()\n", 342 | "\n", 343 | "# Similarity metric settings.\n", 344 | "registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\n", 345 | "registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n", 346 | "registration_method.SetMetricSamplingPercentage(0.01)\n", 347 | "\n", 348 | "registration_method.SetInterpolator(sitk.sitkLinear)\n", 349 | "\n", 350 | "# Optimizer settings.\n", 351 | "registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)\n", 352 | "registration_method.SetOptimizerScalesFromPhysicalShift()\n", 353 | "\n", 354 | "# Setup for the multi-resolution framework. 
\n", 355 | "registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])\n", 356 | "registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])\n", 357 | "registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n", 358 | "\n", 359 | "# Set the initial moving and optimized transforms.\n", 360 | "optimized_transform = sitk.Euler3DTransform() \n", 361 | "registration_method.SetMovingInitialTransform(initial_transform)\n", 362 | "registration_method.SetInitialTransform(optimized_transform, inPlace=False)\n", 363 | "\n", 364 | "# Connect all of the observers so that we can perform plotting during registration.\n", 365 | "registration_method.AddCommand(sitk.sitkStartEvent, rgui.start_plot)\n", 366 | "registration_method.AddCommand(sitk.sitkEndEvent, rgui.end_plot)\n", 367 | "registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, rgui.update_multires_iterations) \n", 368 | "registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rgui.plot_values(registration_method))\n", 369 | "\n", 370 | "# Need to compose the transformations after registration.\n", 371 | "final_transform_v4 = registration_method.Execute(fixed_image, moving_image)\n", 372 | "#final_transform_v4.AddTransform(initial_transform)\n", 373 | "\n", 374 | "# Always check the reason optimization terminated.\n", 375 | "print('Final metric value: {0}'.format(registration_method.GetMetricValue()))\n", 376 | "print('Optimizer\\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))" 377 | ] 378 | }, 379 | { 380 | "cell_type": "code", 381 | "execution_count": null, 382 | "metadata": { 383 | "collapsed": false 384 | }, 385 | "outputs": [], 386 | "source": [ 387 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=final_transform_v4, fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 388 | ] 389 | }, 390 | { 391 | "cell_type": "markdown", 392 | 
"metadata": { 393 | "collapsed": true 394 | }, 395 | "source": [ 396 | "## Initialization\n", 397 | "\n", 398 | "Initialization effects both the runtime and convergence to the correct minimum. Ideally our transformation is initialized close to the correct solution ensuring convergence in a timely manner. Problem specific initialization will often yield better results than the more generic solutions we show below. As a rule of thumb, use as much prior information (external to the image content) as you can to initialize your registration task.\n", 399 | "\n", 400 | "Common initializations in the generic setting:\n", 401 | "1. Do nothing (a.k.a. hope/unique setting) - initialize using the identity transformation.\n", 402 | "2. CenteredTransformInitializer (GEOMETRY or MOMENTS) - translation based initialization, align the centers of the images or their centers of mass (intensity based).\n", 403 | "3. Use the exhaustive optimizer as a first step - never underestimate brute force.\n", 404 | "4. Manual initialization - allow an operator to control parameter settings using a GUI with visual feedback or identify multiple corresponding points in the two images. \n", 405 | "\n", 406 | "\n", 407 | "We start by loading our data, CT and MR scans of the CIRS (Norfolk, VA, USA) abdominal phantom." 
408 | ] 409 | }, 410 | { 411 | "cell_type": "code", 412 | "execution_count": null, 413 | "metadata": { 414 | "collapsed": false 415 | }, 416 | "outputs": [], 417 | "source": [ 418 | "data_directory = os.path.dirname(fdata(\"CIRS057A_MR_CT_DICOM/readme.txt\"))\n", 419 | "\n", 420 | "ct_window_level = [1727,-320]\n", 421 | "mr_window_level = [355,178]\n", 422 | "\n", 423 | "fixed_series_ID = \"1.2.840.113619.2.290.3.3233817346.783.1399004564.515\"\n", 424 | "moving_series_ID = \"1.3.12.2.1107.5.2.18.41548.30000014030519285935000000933\"\n", 425 | "\n", 426 | "fixed_series_filenames = sitk.ImageSeriesReader_GetGDCMSeriesFileNames(data_directory, fixed_series_ID)\n", 427 | "moving_series_filenames = sitk.ImageSeriesReader_GetGDCMSeriesFileNames(data_directory, moving_series_ID)\n", 428 | "\n", 429 | "fixed_image = sitk.ReadImage(fixed_series_filenames, sitk.sitkFloat32)\n", 430 | "moving_image = sitk.ReadImage(moving_series_filenames, sitk.sitkFloat32)" 431 | ] 432 | }, 433 | { 434 | "cell_type": "markdown", 435 | "metadata": {}, 436 | "source": [ 437 | "### Identity transform initialization" 438 | ] 439 | }, 440 | { 441 | "cell_type": "code", 442 | "execution_count": null, 443 | "metadata": { 444 | "collapsed": false 445 | }, 446 | "outputs": [], 447 | "source": [ 448 | "initial_transform = sitk.Transform()\n", 449 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=initial_transform, fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 450 | ] 451 | }, 452 | { 453 | "cell_type": "markdown", 454 | "metadata": {}, 455 | "source": [ 456 | "When working with clinical images, the DICOM tags define the orientation and position of the anatomy in the volume. The tags of interest are:\n", 457 | "\n", 473 | "\n", 474 | "SimpleITK/ITK takes this information into account when loading DICOM images. 
\n", 475 | "\n", 476 | "But we are working with DICOM images, so why aren't the images oriented correctly using the identity transformation?\n", 477 | "\n", 478 | "Well, the patient position in the scanner is manually entered by the technician meaning that errors may occur, though rarely. For our data, a phantom, it is unclear which side is the \"head\" and which is the \"feet\" so the technicians entered reasonable values for each scan. " 479 | ] 480 | }, 481 | { 482 | "cell_type": "code", 483 | "execution_count": null, 484 | "metadata": { 485 | "collapsed": false 486 | }, 487 | "outputs": [], 488 | "source": [ 489 | "img = sitk.ReadImage(fixed_series_filenames[0])\n", 490 | "print('Patient name: ' + img.GetMetaData('0010|0010') + ', Patient position:' + img.GetMetaData('0018|5100'))\n", 491 | "img = sitk.ReadImage(moving_series_filenames[0])\n", 492 | "print('Patient name: ' + img.GetMetaData('0010|0010') + ', Patient position:' + img.GetMetaData('0018|5100'))\n", 493 | "\n", 494 | "# New 1.1 interface\n", 495 | "#reader = sitk.ImageFileReader()\n", 496 | "#reader.SetFileName(fixed_series_filenames[0])\n", 497 | "#reader.MetaDataRead()\n", 498 | "#print('Patient name: ' + reader.GetMetaData('0010|0010') + ', Patient position:' + reader.GetMetaData('0018|5100'))\n", 499 | "#reader.SetFileName(moving_series_filenames[0])\n", 500 | "#reader.MetaDataRead()\n", 501 | "#print('Patient name: ' + reader.GetMetaData('0010|0010') + ', Patient position:' + reader.GetMetaData('0018|5100'))" 502 | ] 503 | }, 504 | { 505 | "cell_type": "markdown", 506 | "metadata": {}, 507 | "source": [ 508 | "### CenteredTransformInitializer initialization \n", 509 | "Compare GEOMETRY and MOMENTS based approaches:" 510 | ] 511 | }, 512 | { 513 | "cell_type": "code", 514 | "execution_count": null, 515 | "metadata": { 516 | "collapsed": false 517 | }, 518 | "outputs": [], 519 | "source": [ 520 | "initial_transform = sitk.CenteredTransformInitializer(fixed_image, \n", 521 | " moving_image, \n", 
522 | " sitk.Euler3DTransform(), \n", 523 | " sitk.CenteredTransformInitializerFilter.GEOMETRY)\n", 524 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=initial_transform, fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 525 | ] 526 | }, 527 | { 528 | "cell_type": "markdown", 529 | "metadata": {}, 530 | "source": [ 531 | "### Exhaustive optimizer initialization\n", 532 | "\n", 533 | "The following initialization approach is a combination of using prior knowledge and the exhaustive optimizer. We know that the scans are acquired with the \"patient\" either supine (on their back) or prone (on their stomach) and that the scan direction (head-to-feet or feet-to-head) is along the images' z axis. \n", 534 | "We use the CenteredTransformInitializer to initialize the translation and the exhaustive optimizer to obtain an initial rigid transformation.\n", 535 | "\n", 536 | "The exhaustive optimizer evaluates the similarity metric on a grid in parameter space centered on the parameters of the initial transform. This grid is defined using three elements:\n", 537 | "1. numberOfSteps.\n", 538 | "2. stepLength.\n", 539 | "3. optimizer scales.\n", 540 | "\n", 541 | "The similarity metric is evaluated on the resulting parameter grid:\n", 542 | "initial_parameters ± numberOfSteps × stepLength × optimizerScales\n", 543 | "\n", 544 | "***Example***:\n", 545 | "1. numberOfSteps=[1,0,2,0,0,0]\n", 546 | "2. stepLength = np.pi\n", 547 | "3. optimizer scales = [1,1,0.5,1,1,1]\n", 548 | "\n", 549 | "Will perform 15 metric evaluations ($\\displaystyle\\prod_i (2*numberOfSteps[i] + 1)$).\n", 550 | "\n", 551 | "The parameter values for the second parameter and the last three parameters are the initial parameter values. 
The parameter values for the first parameter are $v_{init}-\\pi, v_{init}, v_{init}+\\pi$ and the parameter values for the third parameter are $v_{init}-\\pi, v_{init}-\\pi/2, v_{init}, v_{init}+\\pi/2, v_{init}+\\pi$.\n", 552 | "\n", 553 | "The transformation corresponding to the lowest similarity metric is returned." 554 | ] 555 | }, 556 | { 557 | "cell_type": "code", 558 | "execution_count": null, 559 | "metadata": { 560 | "collapsed": false 561 | }, 562 | "outputs": [], 563 | "source": [ 564 | "initial_transform = sitk.CenteredTransformInitializer(fixed_image, \n", 565 | " moving_image, \n", 566 | " sitk.Euler3DTransform(), \n", 567 | " sitk.CenteredTransformInitializerFilter.MOMENTS)\n", 568 | "registration_method = sitk.ImageRegistrationMethod()\n", 569 | "registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\n", 570 | "registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n", 571 | "registration_method.SetMetricSamplingPercentage(0.01)\n", 572 | "registration_method.SetInterpolator(sitk.sitkLinear)\n", 573 | "# The order of parameters for the Euler3DTransform is [angle_x, angle_y, angle_z, t_x, t_y, t_z]. The parameter \n", 574 | "# sampling grid is centered on the initial_transform parameter values, that are all zero for the rotations. 
Given\n", 575 | "# the number of steps, their length and optimizer scales we have:\n", 576 | "# angle_x = 0\n", 577 | "# angle_y = -pi, 0, pi\n", 578 | "# angle_z = -pi, 0, pi\n", 579 | "registration_method.SetOptimizerAsExhaustive(numberOfSteps=[0,1,1,0,0,0], stepLength = np.pi)\n", 580 | "registration_method.SetOptimizerScales([1,1,1,1,1,1])\n", 581 | "\n", 582 | "#Perform the registration in-place so that the initial_transform is modified.\n", 583 | "registration_method.SetInitialTransform(initial_transform, inPlace=True)\n", 584 | "registration_method.Execute(fixed_image, moving_image)\n", 585 | "\n", 586 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=initial_transform, fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 587 | ] 588 | }, 589 | { 590 | "cell_type": "markdown", 591 | "metadata": {}, 592 | "source": [ 593 | "#### Manual initialization\n", 594 | "\n", 595 | "When all else fails, a human in the loop will almost always be able to robustly initialize the registration.\n", 596 | "\n", 597 | "In the example below we identify corresponding points to compute an initial rigid transformation. \n", 598 | "\n", 599 | "**Note**: There is no correspondence between the fiducial markers on the phantom." 
600 | ] 601 | }, 602 | { 603 | "cell_type": "code", 604 | "execution_count": null, 605 | "metadata": { 606 | "collapsed": false 607 | }, 608 | "outputs": [], 609 | "source": [ 610 | "point_acquisition_interface = gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 611 | ] 612 | }, 613 | { 614 | "cell_type": "code", 615 | "execution_count": null, 616 | "metadata": { 617 | "collapsed": false 618 | }, 619 | "outputs": [], 620 | "source": [ 621 | "# Get the manually specified points and compute the transformation.\n", 622 | "\n", 623 | "fixed_image_points, moving_image_points = point_acquisition_interface.get_points()\n", 624 | "\n", 625 | "# Previously localized points (here so that the testing passes):\n", 626 | "fixed_image_points = [(24.062587103074605, 14.594981536981521, -58.75), (6.178716135332678, 53.93949766601378, -58.75), (74.14383149714774, -69.04462737237648, -76.25), (109.74899278747029, -14.905272533666817, -76.25)]\n", 627 | "moving_image_points = [(4.358707846364581, 60.46357110706131, -71.53120422363281), (24.09010295252645, 98.21840981673873, -71.53120422363281), (-52.11888008581127, -26.57984635768439, -58.53120422363281), (-87.46150681392184, 28.73904765153219, -58.53120422363281)]\n", 628 | "\n", 629 | "fixed_image_points_flat = [c for p in fixed_image_points for c in p] \n", 630 | "moving_image_points_flat = [c for p in moving_image_points for c in p]\n", 631 | "initial_transformation = sitk.LandmarkBasedTransformInitializer(sitk.VersorRigid3DTransform(), \n", 632 | " fixed_image_points_flat, \n", 633 | " moving_image_points_flat)\n", 634 | "gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), known_transformation=initial_transform, fixed_window_level=ct_window_level, moving_window_level=mr_window_level);" 635 | ] 636 | }, 637 | { 638 | "cell_type": "markdown", 639 | "metadata": {}, 640 | "source": [ 641 | "

Next »

" 642 | ] 643 | } 644 | ], 645 | "metadata": { 646 | "kernelspec": { 647 | "display_name": "Python 3", 648 | "language": "python", 649 | "name": "python3" 650 | }, 651 | "language_info": { 652 | "codemirror_mode": { 653 | "name": "ipython", 654 | "version": 3 655 | }, 656 | "file_extension": ".py", 657 | "mimetype": "text/x-python", 658 | "name": "python", 659 | "nbconvert_exporter": "python", 660 | "pygments_lexer": "ipython3", 661 | "version": "3.5.2" 662 | } 663 | }, 664 | "nbformat": 4, 665 | "nbformat_minor": 2 666 | } 667 | --------------------------------------------------------------------------------