├── Data
│   ├── BootcampMR_TCIA.zip
│   ├── PytorchDemo.zip
│   ├── day_2_5_PerkTutorLiveTutorialData.zip
│   └── eclipse-8.1.20-phantom-prostate.zip
├── Doc
│   ├── Appendix
│   │   └── MedicalImageResearchFileFormats.pptx
│   ├── DeepLearnLiveSetup
│   │   ├── DeepLearnLive_SetupTutorial.pptx
│   │   └── kerasGPUEnv.yml
│   ├── Deep_learning_video_analysis_bootcamp2021.pptx
│   ├── day1_1_3DSlicerBasics.pptx
│   ├── day1_3_DICOMTutorial.pptx
│   ├── day1_4_SegmentationBasics.pptx
│   ├── day1_6_PerkLabResearchMethodology.pptx
│   ├── day1_7_PerkLabProjectManagement.pptx
│   ├── day2_1_ConvNets_Intro_2022-05-24.pptx
│   ├── day2_2_SlicerIGT-U38_LiveAiRec.pptx
│   ├── day2_3_DesignFor3dPrinting.pptx
│   ├── day2_MONAILabel.pptx
│   ├── day2_NeuroNav.pptx
│   ├── day2_Plus.pptx
│   ├── day2_PrototypingImageGuidedTherapyApplications.pptx
│   ├── day2_PytorchDemo.pdf
│   ├── day2_RegistrationTutorial.pptx
│   ├── day2_TrackingDataEvaluation.pptx
│   ├── day2_UltrasoundAISegmentation.pptx
│   ├── day3_1_WritingCorrectAndUnderstandableCode.pptx
│   └── dayN_N_SlicerJupyter.pptx
├── Examples
│   └── CampTutorial2
│       ├── CMakeLists.txt
│       ├── CampTutorial2.py
│       ├── Resources
│       │   ├── Icons
│       │   │   └── CampTutorial2.png
│       │   └── UI
│       │       └── CampTutorial2.ui
│       └── Testing
│           ├── CMakeLists.txt
│           └── Python
│               └── CMakeLists.txt
├── Git for macOS and Unix.pdf
├── README.md
├── banner.jpg
├── banner_v2.jpg
└── banner_v2.png

--------------------------------------------------------------------------------
/Data/BootcampMR_TCIA.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Data/BootcampMR_TCIA.zip

--------------------------------------------------------------------------------
/Data/PytorchDemo.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Data/PytorchDemo.zip

--------------------------------------------------------------------------------
/Data/day_2_5_PerkTutorLiveTutorialData.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Data/day_2_5_PerkTutorLiveTutorialData.zip

--------------------------------------------------------------------------------
/Data/eclipse-8.1.20-phantom-prostate.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Data/eclipse-8.1.20-phantom-prostate.zip

--------------------------------------------------------------------------------
/Doc/Appendix/MedicalImageResearchFileFormats.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/Appendix/MedicalImageResearchFileFormats.pptx

--------------------------------------------------------------------------------
/Doc/DeepLearnLiveSetup/DeepLearnLive_SetupTutorial.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/DeepLearnLiveSetup/DeepLearnLive_SetupTutorial.pptx

--------------------------------------------------------------------------------
/Doc/DeepLearnLiveSetup/kerasGPUEnv.yml:
--------------------------------------------------------------------------------
name: kerasGPUEnv
channels:
  - anaconda
  - conda-forge
  - defaults
dependencies:
  - backcall=0.2.0=py_0
  - ca-certificates=2020.7.22=0
  - certifi=2020.6.20=py36_0
  - colorama=0.4.3=py_0
  - cudatoolkit=10.1.243=h74a9793_0
  - graphviz=2.38=hfd603c8_2
  - h5py=2.7.1=py36he54a1c3_0
  - hdf5=1.10.1=vc14hb361328_0
  - ipykernel=5.3.4=py36h5ca1d4c_0
  - ipython=7.16.1=py36h5ca1d4c_0
  - ipython_genutils=0.2.0=py36h3c5d0ee_0
  - jedi=0.17.2=py36_0
  - jupyter_client=6.1.7=py_0
  - jupyter_core=4.6.3=py36_0
  - libsodium=1.0.18=h62dcd97_0
  - matplotlib-base=3.2.2=py36h64f37c6_0
  - openssl=1.1.1h=he774522_0
  - parso=0.7.0=py_0
  - pickleshare=0.7.5=py36_0
  - pillow=5.2.0=py36h08bbbbd_0
  - prompt-toolkit=3.0.7=py_0
  - pygments=2.7.1=py_0
  - python=3.6.7=h33f27b4_1
  - pywin32=227=py36he774522_1
  - pyyaml=5.3.1=py36he774522_0
  - pyzmq=19.0.2=py36ha925a31_1
  - sqlite=3.20.1=vc14h7ce8c62_1
  - tk=8.6.7=vc14hb68737d_1
  - traitlets=4.3.3=py36_0
  - wcwidth=0.2.5=py_0
  - yaml=0.1.7=vc14h4cb57cf_1
  - zeromq=4.3.2=ha925a31_3
  - zlib=1.2.11=vc14h1cdd9ab_1
  - blosc=1.20.1=ha925a31_0
  - brotli=1.0.9=ha925a31_2
  - bzip2=1.0.8=he774522_3
  - charls=2.1.0=h33f27b4_2
  - cloudpickle=1.6.0=py_0
  - cycler=0.10.0=py_2
  - cytoolz=0.11.0=py36h779f372_0
  - dask-core=2.30.0=py_0
  - decorator=4.4.2=py_0
  - freetype=2.10.2=hd328e21_0
  - giflib=5.2.1=h2fa13f4_2
  - imagecodecs=2020.5.30=py36h759e2a0_2
  - imageio=2.9.0=py_0
  - jpeg=9d=he774522_0
  - jxrlib=1.1=hfa6e2cd_2
  - kiwisolver=1.2.0=py36h246c5b5_0
  - lcms2=2.11=he1115b7_0
  - lerc=2.2=ha925a31_0
  - libaec=1.0.4=he025d50_1
  - libpng=1.6.37=ha81a0f5_2
  - libtiff=4.1.0=h885aae3_6
  - libwebp-base=1.1.0=hfa6e2cd_3
  - libzopfli=1.0.3=ha925a31_0
  - lz4-c=1.9.2=h62dcd97_2
  - networkx=2.5=py_0
  - olefile=0.46=py_0
  - openjpeg=2.3.1=h57dd2e7_3
  - python_abi=3.6=1_cp36m
  - pywavelets=1.1.1=py36h4f3e613_2
  - scikit-image=0.17.2=py36hd7f5668_2
  - snappy=1.1.8=ha925a31_3
  - tifffile=2020.7.4=py_0
  - toolz=0.11.1=py_0
  - tornado=6.0.4=py36hfa6e2cd_0
  - xz=5.2.5=h62dcd97_1
  - zfp=0.5.5=ha925a31_2
  - zstd=1.4.5=h1f3a1b7_2
  - _tflow_select=2.1.0=gpu
  - absl-py=0.9.0=py36_0
  - asn1crypto=1.3.0=py36_0
  - astor=0.8.0=py36_0
  - blas=1.0=mkl
  - blinker=1.4=py36_0
  - cachetools=3.1.1=py_0
  - cffi=1.14.0=py36h7a1dbc1_0
  - chardet=3.0.4=py36_1003
  - click=7.0=py_0
  - cryptography=2.8=py36h7a1dbc1_0
  - cudnn=7.6.5=cuda10.1_0
  - gast=0.2.2=py36_0
  - google-auth=1.11.2=py_0
  - google-auth-oauthlib=0.4.1=py_2
  - google-pasta=0.1.8=py_0
  - grpcio=1.27.2=py36h351948d_0
  - icc_rt=2019.0.0=h0cc432a_1
  - idna=2.8=py36_0
  - intel-openmp=2020.0=166
  - keras-applications=1.0.8=py_0
  - keras-preprocessing=1.1.0=py_1
  - libprotobuf=3.11.4=h7bd577a_0
  - markdown=3.1.1=py36_0
  - mkl=2020.0=166
  - mkl-service=2.3.0=py36hb782905_0
  - mkl_fft=1.0.15=py36h14836fe_0
  - mkl_random=1.1.0=py36h675688f_0
  - numpy=1.18.1=py36h93ca92e_0
  - numpy-base=1.18.1=py36hc3f5095_1
  - oauthlib=3.1.0=py_0
  - opt_einsum=3.1.0=py_0
  - pandas=1.0.3=py36h47e9c7a_0
  - pip=20.0.2=py36_1
  - protobuf=3.11.4=py36h33f27b4_0
  - pyasn1=0.4.8=py_0
  - pyasn1-modules=0.2.7=py_0
  - pycparser=2.19=py_0
  - pydot=1.4.1=py36_0
  - pyjwt=1.7.1=py36_0
  - pyopenssl=19.1.0=py36_0
  - pyparsing=2.4.7=py_0
  - pyreadline=2.1=py36_1
  - pysocks=1.7.1=py36_0
  - python-dateutil=2.8.1=py_0
  - pytz=2019.3=py_0
  - requests=2.22.0=py36_1
  - requests-oauthlib=1.3.0=py_0
  - rsa=4.0=py_0
  - scipy=1.4.1=py36h9439919_0
  - setuptools=45.2.0=py36_0
  - six=1.14.0=py36_0
  - tensorboard=2.1.0=py3_0
  - tensorflow=2.1.0=gpu_py36h3346743_0
  - tensorflow-base=2.1.0=gpu_py36h55f5790_0
  - tensorflow-estimator=2.1.0=pyhd54b08b_0
  - tensorflow-gpu=2.1.0=h0d30ee6_0
  - termcolor=1.1.0=py36_1
  - urllib3=1.25.8=py36_0
  - vc=14.1=h0510ff6_4
  - vs2015_runtime=14.16.27012=hf0eaf9b_1
  - werkzeug=0.16.1=py_0
  - wheel=0.34.2=py36_0
  - win_inet_pton=1.1.0=py36_0
  - wincertstore=0.2=py36h7fe50ca_0
  - wrapt=1.11.2=py36he774522_0
  - pip:
    - attrs==20.3.0
    - crcmod==1.7
    - dask==2.30.0
    - dataclasses==0.8
    - dill==0.3.3
    - diskcache==5.1.0
    - future==0.18.2
    - girder-client==3.1.3
    - googleapis-common-protos==1.52.0
    - importlib-resources==5.1.0
    - joblib==0.14.1
    - matplotlib==3.2.2
    - opencv-python==4.2.0.32
    - promise==2.3
    - pyigtl==0.1.0
    - pyigtlink==0.2.1
    - requests-toolbelt==0.9.1
    - scikit-learn==0.22.2.post1
    - simpleitk==1.2.4
    - sklearn==0.0
    - tensorflow-datasets==4.2.0
    - tensorflow-metadata==0.27.0
    - tqdm==4.50.2
    - typing-extensions==3.7.4.3
    - zipp==3.4.0
prefix: C:\Users\hisey\Anaconda3\envs\kerasGPUEnv

--------------------------------------------------------------------------------
/Doc/Deep_learning_video_analysis_bootcamp2021.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/Deep_learning_video_analysis_bootcamp2021.pptx

--------------------------------------------------------------------------------
/Doc/day1_1_3DSlicerBasics.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day1_1_3DSlicerBasics.pptx

--------------------------------------------------------------------------------
/Doc/day1_3_DICOMTutorial.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day1_3_DICOMTutorial.pptx

--------------------------------------------------------------------------------
/Doc/day1_4_SegmentationBasics.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day1_4_SegmentationBasics.pptx

--------------------------------------------------------------------------------
/Doc/day1_6_PerkLabResearchMethodology.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day1_6_PerkLabResearchMethodology.pptx

--------------------------------------------------------------------------------
/Doc/day1_7_PerkLabProjectManagement.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day1_7_PerkLabProjectManagement.pptx

--------------------------------------------------------------------------------
/Doc/day2_1_ConvNets_Intro_2022-05-24.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_1_ConvNets_Intro_2022-05-24.pptx

--------------------------------------------------------------------------------
/Doc/day2_2_SlicerIGT-U38_LiveAiRec.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_2_SlicerIGT-U38_LiveAiRec.pptx

--------------------------------------------------------------------------------
/Doc/day2_3_DesignFor3dPrinting.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_3_DesignFor3dPrinting.pptx

--------------------------------------------------------------------------------
/Doc/day2_MONAILabel.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_MONAILabel.pptx

--------------------------------------------------------------------------------
/Doc/day2_NeuroNav.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_NeuroNav.pptx

--------------------------------------------------------------------------------
/Doc/day2_Plus.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_Plus.pptx

--------------------------------------------------------------------------------
/Doc/day2_PrototypingImageGuidedTherapyApplications.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_PrototypingImageGuidedTherapyApplications.pptx

--------------------------------------------------------------------------------
/Doc/day2_PytorchDemo.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_PytorchDemo.pdf

--------------------------------------------------------------------------------
/Doc/day2_RegistrationTutorial.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_RegistrationTutorial.pptx

--------------------------------------------------------------------------------
/Doc/day2_TrackingDataEvaluation.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_TrackingDataEvaluation.pptx

--------------------------------------------------------------------------------
/Doc/day2_UltrasoundAISegmentation.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day2_UltrasoundAISegmentation.pptx

--------------------------------------------------------------------------------
/Doc/day3_1_WritingCorrectAndUnderstandableCode.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/day3_1_WritingCorrectAndUnderstandableCode.pptx

--------------------------------------------------------------------------------
/Doc/dayN_N_SlicerJupyter.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Doc/dayN_N_SlicerJupyter.pptx

--------------------------------------------------------------------------------
/Examples/CampTutorial2/CMakeLists.txt:
--------------------------------------------------------------------------------
#-----------------------------------------------------------------------------
set(MODULE_NAME CampTutorial2)

#-----------------------------------------------------------------------------
set(MODULE_PYTHON_SCRIPTS
  ${MODULE_NAME}.py
  )

set(MODULE_PYTHON_RESOURCES
  Resources/Icons/${MODULE_NAME}.png
  Resources/UI/${MODULE_NAME}.ui
  )

#-----------------------------------------------------------------------------
slicerMacroBuildScriptedModule(
  NAME ${MODULE_NAME}
  SCRIPTS ${MODULE_PYTHON_SCRIPTS}
  RESOURCES ${MODULE_PYTHON_RESOURCES}
  WITH_GENERIC_TESTS
  )

#-----------------------------------------------------------------------------
if(BUILD_TESTING)

  # Register the unittest subclass in the main script as a ctest.
  # Note that the test will also be available at runtime.
  slicer_add_python_unittest(SCRIPT ${MODULE_NAME}.py)

  # Additional build-time testing
  add_subdirectory(Testing)
endif()

--------------------------------------------------------------------------------
/Examples/CampTutorial2/CampTutorial2.py:
--------------------------------------------------------------------------------
import os
import logging
import vtk, qt, ctk, slicer
import numpy as np
from slicer.ScriptedLoadableModule import *
from slicer.util import VTKObservationMixin

#
# CampTutorial2
#

class CampTutorial2(ScriptedLoadableModule):
  """Uses ScriptedLoadableModule base class, available at:
  https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py
  """

  def __init__(self, parent):
    ScriptedLoadableModule.__init__(self, parent)
    self.parent.title = "CampTutorial2"  # TODO: make this more human readable by adding spaces
    self.parent.categories = ["Examples"]  # TODO: set categories (folders where the module shows up in the module selector)
    self.parent.dependencies = []  # TODO: add here list of module names that this module requires
    self.parent.contributors = ["Perk Lab (Queen's University)"]  # TODO: replace with "Firstname Lastname (Organization)"
    # TODO: update with short description of the module and a link to online module documentation
    self.parent.helpText = """
This is an example of scripted loadable module bundled in an extension.
See more information in module documentation.
"""
    # TODO: replace with organization, grant and thanks
    self.parent.acknowledgementText = """
This file was originally developed by Jean-Christophe Fillion-Robin, Kitware Inc., Andras Lasso, PerkLab,
and Steve Pieper, Isomics, Inc. and was partially funded by NIH grant 3P41RR013218-12S1.
"""

    # Additional initialization step after application startup is complete
    slicer.app.connect("startupCompleted()", registerSampleData)

#
# Register sample data sets in Sample Data module
#

def registerSampleData():
  """
  Add data sets to Sample Data module.
  """
  # It is always recommended to provide sample data for users to make it easy to try the module,
  # but if no sample data is available then this method (and the associated startupCompleted signal connection) can be removed.

  import SampleData
  iconsPath = os.path.join(os.path.dirname(__file__), 'Resources/Icons')

  # To ensure that the source code repository remains small (can be downloaded and installed quickly)
  # it is recommended to store data sets that are larger than a few MB in a Github release.

  # CampTutorial21
  SampleData.SampleDataLogic.registerCustomSampleDataSource(
    # Category and sample name displayed in Sample Data module
    category='CampTutorial2',
    sampleName='CampTutorial21',
    # Thumbnail should have a size of approximately 260x280 pixels and be stored in the Resources/Icons folder.
    # It can be created by the Screen Capture module, "Capture all views" option enabled, "Number of images" set to "Single".
    thumbnailFileName=os.path.join(iconsPath, 'CampTutorial2.png'),
    # Download URL and target file name
    uris="https://github.com/Slicer/SlicerTestingData/releases/download/SHA256/998cb522173839c78657f4bc0ea907cea09fd04e44601f17c82ea27927937b95",
    fileNames='CampTutorial2.nrrd',
    # Checksum to ensure file integrity. Can be computed by this command:
    # import hashlib; print(hashlib.sha256(open(filename, "rb").read()).hexdigest())
    checksums='SHA256:998cb522173839c78657f4bc0ea907cea09fd04e44601f17c82ea27927937b95',
    # This node name will be used when the data set is loaded
    nodeNames='CampTutorial2'
    )

  # CampTutorial2
  SampleData.SampleDataLogic.registerCustomSampleDataSource(
    # Category and sample name displayed in Sample Data module
    category='CampTutorial2',
    sampleName='CampTutorial2',
    thumbnailFileName=os.path.join(iconsPath, 'CampTutorial2.png'),
    # Download URL and target file name
    uris="https://github.com/Slicer/SlicerTestingData/releases/download/SHA256/1a64f3f422eb3d1c9b093d1a18da354b13bcf307907c66317e2463ee530b7a97",
    fileNames='CampTutorial2.nrrd',
    checksums='SHA256:1a64f3f422eb3d1c9b093d1a18da354b13bcf307907c66317e2463ee530b7a97',
    # This node name will be used when the data set is loaded
    nodeNames='CampTutorial2'
    )

#
# CampTutorial2Widget
#

class CampTutorial2Widget(ScriptedLoadableModuleWidget, VTKObservationMixin):
  """Uses ScriptedLoadableModuleWidget base class, available at:
  https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py
  """

  def __init__(self, parent=None):
    """
    Called when the user opens the module the first time and the widget is initialized.
    """
    ScriptedLoadableModuleWidget.__init__(self, parent)
    VTKObservationMixin.__init__(self)  # needed for parameter node observation
    self.logic = None
    self._parameterNode = None
    self._updatingGUIFromParameterNode = False

  def setup(self):
    """
    Called when the user opens the module the first time and the widget is initialized.
    """
    ScriptedLoadableModuleWidget.setup(self)

    # Load widget from .ui file (created by Qt Designer).
    # Additional widgets can be instantiated manually and added to self.layout.
    uiWidget = slicer.util.loadUI(self.resourcePath('UI/CampTutorial2.ui'))
    self.layout.addWidget(uiWidget)
    self.ui = slicer.util.childWidgetVariables(uiWidget)

    # Set scene in MRML widgets. Make sure that in Qt Designer the top-level qMRMLWidget's
    # "mrmlSceneChanged(vtkMRMLScene*)" signal is connected to each MRML widget's
    # "setMRMLScene(vtkMRMLScene*)" slot.
    uiWidget.setMRMLScene(slicer.mrmlScene)

    # Create logic class. Logic implements all computations that should be possible to run
    # in batch mode, without a graphical user interface.
    self.logic = CampTutorial2Logic()
    self.logic.setupScene()

    # Connections

    # These connections ensure that we update parameter node when scene is closed
    self.addObserver(slicer.mrmlScene, slicer.mrmlScene.StartCloseEvent, self.onSceneStartClose)
    self.addObserver(slicer.mrmlScene, slicer.mrmlScene.EndCloseEvent, self.onSceneEndClose)

    # These connections ensure that whenever the user changes some settings on the GUI, that is saved
    self.ui.inputMarkupSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onInputMarkupSelected)
    self.ui.opacitySliderWidget.connect("valueChanged(double)", self.onOpacitySliderChanged)
    self.ui.autoUpdateCheckBox.connect("clicked(bool)", self.onAutoUpdateClicked)
    self.ui.outputLineEdit.connect("currentPathChanged(QString)", self.onOutputPathChanged)

    self.ui.applyButton.connect('clicked(bool)', self.onApplyButton)
    self.ui.exportDataButton.connect('clicked(bool)', self.onExportButtonClicked)

    # Make sure parameter node is initialized (needed for module reload)
    self.initializeParameterNode()

    # Restore settings values on GUI
    pathValue = self.logic.getExportPath()
    if pathValue is not None and len(pathValue) > 1:
      self.ui.outputLineEdit.setCurrentPath(pathValue)

  def onExportButtonClicked(self):
    self.logic.exportSphereModel()

  def onOutputPathChanged(self, newPath):
    self.logic.setExportPath(newPath)

  def onOpacitySliderChanged(self, newValue):
    if slicer.mrmlScene.IsImporting():
      return
    logging.info("Opacity slider set to {}".format(newValue))
    self.logic.setOpacity(newValue)
    if self.ui.autoUpdateCheckBox.checked:
      self.onApplyButton()

  def onAutoUpdateClicked(self, checked):
    if slicer.mrmlScene.IsImporting():
      return
    self.onApplyButton()
    self.logic.setAutoUpdate(checked)

  def cleanup(self):
    """
    Called when the application closes and the module widget is destroyed.
    """
    self.removeObservers()

  def enter(self):
    """
    Called each time the user opens this module.
    """
    # Make sure parameter node exists and is observed
    self.initializeParameterNode()

  def exit(self):
    """
    Called each time the user opens a different module.
    """
    # Do not react to parameter node changes (GUI will be updated when the user enters into the module)
    self.removeObserver(self._parameterNode, vtk.vtkCommand.ModifiedEvent, self.updateGUIFromParameterNode)

  def onSceneStartClose(self, caller, event):
    """
    Called just before the scene is closed.
    """
    # Parameter node will be reset, do not use it anymore
    self.setParameterNode(None)

  def onSceneEndClose(self, caller, event):
    """
    Called just after the scene is closed.
    """
    # If this module is shown while the scene is closed then recreate a new parameter node immediately
    if self.parent.isEntered:
      self.initializeParameterNode()

  def initializeParameterNode(self):
    """
    Ensure parameter node exists and is observed.
    """
    # Parameter node stores all user choices in parameter values, node selections, etc.
    # so that when the scene is saved and reloaded, these settings are restored.
    self.setParameterNode(self.logic.getParameterNode())

    # Select default input nodes if nothing is selected yet to save a few clicks for the user
    if not self._parameterNode.GetNodeReference(self.logic.INPUT_MARKUP):
      firstMarkupNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLMarkupsFiducialNode")
      if firstMarkupNode:
        self._parameterNode.SetNodeReferenceID(self.logic.INPUT_MARKUP, firstMarkupNode.GetID())

  def setParameterNode(self, inputParameterNode):
    """
    Set and observe parameter node.
    Observation is needed because when the parameter node is changed then the GUI must be updated immediately.
    """

    # if inputParameterNode:
    #   self.logic.setDefaultParameters(inputParameterNode)

    # Unobserve previously selected parameter node and add an observer to the newly selected.
    # Changes of parameter node are observed so that whenever parameters are changed by a script or any other module
    # those are reflected immediately in the GUI.
    if self._parameterNode is not None:
      self.removeObserver(self._parameterNode, vtk.vtkCommand.ModifiedEvent, self.updateGUIFromParameterNode)
    self._parameterNode = inputParameterNode
    if self._parameterNode is not None:
      self.addObserver(self._parameterNode, vtk.vtkCommand.ModifiedEvent, self.updateGUIFromParameterNode)

    # Initial GUI update
    self.updateGUIFromParameterNode()

  def updateGUIFromParameterNode(self, caller=None, event=None):
    """
    This method is called whenever the parameter node is changed.
    The module GUI is updated to show the current state of the parameter node.
    """

    if slicer.mrmlScene.IsImporting() or self.logic.isImporting:
      return

    if self._parameterNode is None or self._updatingGUIFromParameterNode:
      return

    # Make sure GUI changes do not call updateParameterNodeFromGUI (it could cause infinite loop)
    self._updatingGUIFromParameterNode = True

    # Update node selectors and sliders
    inputNode = self._parameterNode.GetNodeReference(self.logic.INPUT_MARKUP)
    self.ui.inputMarkupSelector.setCurrentNode(inputNode)
    self.ui.opacitySliderWidget.value = self.logic.getOpacity()
    self.ui.autoUpdateCheckBox.setChecked(self.logic.getAutoUpdate())

    # Update button states and tooltips
    if self._parameterNode.GetNodeReference(self.logic.INPUT_MARKUP) is not None:
      self.ui.applyButton.toolTip = "Compute output volume"
      self.ui.autoUpdateCheckBox.enabled = True
      self.ui.applyButton.enabled = True
    else:
      self.ui.applyButton.toolTip = "Select input and output volume nodes"
      self.ui.autoUpdateCheckBox.enabled = False
      self.ui.applyButton.enabled = False

    self._updatingGUIFromParameterNode = False  # All the GUI updates are done

  def onInputMarkupSelected(self, newNode):
    """
    This method is called when the user makes any change in the GUI.
    The changes are saved into the parameter node (so that they are restored when the scene is saved and loaded).
    """
    if slicer.mrmlScene.IsImporting() or self.logic.isImporting:
      return

    logging.info("onInputMarkupSelected")

    if self._parameterNode is None or self._updatingGUIFromParameterNode:
      return

    if newNode is None:
      self.ui.autoUpdateCheckBox.setChecked(False)
      self.logic.setAutoUpdate(False)
      self._parameterNode.SetNodeReferenceID(self.logic.INPUT_MARKUP, None)
      logging.info("Set input markup ID: None")
    else:
      self._parameterNode.SetNodeReferenceID(self.logic.INPUT_MARKUP, newNode.GetID())
      logging.info("Set input markup ID: {}".format(newNode.GetID()))

  def onApplyButton(self):
    """
    Run processing when user clicks "Apply" button.
    """
    try:
      self.logic.updateSphere(self.ui.inputMarkupSelector.currentNode(), self.ui.opacitySliderWidget.value)

    except Exception as e:
      slicer.util.errorDisplay("Failed to compute results: " + str(e))
      import traceback
      traceback.print_exc()


#
# CampTutorial2Logic
#

class CampTutorial2Logic(ScriptedLoadableModuleLogic, VTKObservationMixin):
  """This class should implement all the actual
  computation done by your module. The interface
  should be such that other python code can import
  this class and make use of the functionality without
  requiring an instance of the Widget.
  Uses ScriptedLoadableModuleLogic base class, available at:
  https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py
  """

  # Adding member variables for names to avoid typos
  OUTPUT_PATH_SETTING = "CampTutorial2/OutputPath"

  INPUT_MARKUP = "InputMarkup"
  SPHERE_MODEL = "SphereModel"
  OPACITY = "Opacity"
  OPACITY_DEFAULT = 0.8
  AUTOUPDATE = "AutoUpdate"
  AUTOUPDATE_DEFAULT = False

  def __init__(self):
    """
    Called when the logic class is instantiated. Can be used for initializing member variables.
    """
    ScriptedLoadableModuleLogic.__init__(self)
    VTKObservationMixin.__init__(self)  # needed for scene import observation

    self.fiducialNode = None
    self.sphereNode = None

    self.observedMarkupNode = None
    self.isImporting = False

  def setupScene(self):
    """
    Creates utility nodes and adds observers.
    """
    parameterNode = self.getParameterNode()
    sphereModel = parameterNode.GetNodeReference(self.SPHERE_MODEL)
    if sphereModel is None:
      sphereModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", self.SPHERE_MODEL)
      sphereModel.CreateDefaultDisplayNodes()
      parameterNode.SetNodeReferenceID(self.SPHERE_MODEL, sphereModel.GetID())

    self.addObserver(slicer.mrmlScene, slicer.vtkMRMLScene.StartImportEvent, self.onSceneImportStart)
    self.addObserver(slicer.mrmlScene, slicer.vtkMRMLScene.EndImportEvent, self.onSceneImportEnd)

    self.setAutoUpdate(self.getAutoUpdate())

  def exportSphereModel(self):
    """
    Saves the current sphere model in the export folder as a file.
374 | """ 375 | parameterNode = self.getParameterNode() 376 | sphereNode = parameterNode.GetNodeReference(self.SPHERE_MODEL) 377 | if sphereNode is None: 378 | logging.info("Cannot export sphere model, not created yet") 379 | return 380 | exportPath = slicer.util.settingsValue(self.OUTPUT_PATH_SETTING, "") 381 | fileName = sphereNode.GetName() + ".stl" 382 | fileFullName = os.path.join(exportPath, fileName) 383 | logging.info("Exporting sphere model to: {}".format(fileFullName)) 384 | slicer.util.saveNode(sphereNode, fileFullName) 385 | 386 | def onSceneImportStart(self, caller, event): 387 | """ 388 | Saves existing nodes in member variables to be able to compare them with new nodes about to be loaded. 389 | """ 390 | logging.info("onSceneImportStart") 391 | self.isImporting = True 392 | parameterNode = self.getParameterNode() 393 | self.sphereNode = parameterNode.GetNodeReference(self.SPHERE_MODEL) 394 | self.fiducialNode = parameterNode.GetNodeReference(self.INPUT_MARKUP) 395 | 396 | def onSceneImportEnd(self, caller, event): 397 | """ 398 | When loading a saved scene ends, this function cleans up orphan nodes. 399 | """ 400 | logging.info("onSceneImportEnd") 401 | parameterNode = self.getParameterNode() 402 | currentSphereNode = parameterNode.GetNodeReference(self.SPHERE_MODEL) 403 | 404 | # Discard the sphere loaded, in case we set up better properties (e.g. color) for illustration 405 | 406 | if self.sphereNode is not None and self.sphereNode != currentSphereNode: 407 | parameterNode.SetNodeReferenceID(self.SPHERE_MODEL, self.sphereNode.GetID()) 408 | self.removeNode(currentSphereNode) 409 | 410 | # Use the markup loaded, because that data belongs to the "case" (e.g.
patient) loaded 411 | 412 | currentMarkup = parameterNode.GetNodeReference(self.INPUT_MARKUP) 413 | 414 | if self.fiducialNode != currentMarkup: 415 | self.removeNode(self.fiducialNode) 416 | self.fiducialNode = currentMarkup 417 | 418 | self.isImporting = False 419 | 420 | # Restore module state 421 | 422 | self.setAutoUpdate(self.getAutoUpdate()) 423 | if self.fiducialNode is not None: self.updateSphere(self.fiducialNode, self.getOpacity()) 424 | 425 | parameterNode.Modified() # Trigger GUI update 426 | 427 | def removeNode(self, node): 428 | """ 429 | Removes node and its display and storage nodes from the scene. 430 | """ 431 | if node is None: 432 | return 433 | 434 | for i in range(node.GetNumberOfDisplayNodes()): 435 | slicer.mrmlScene.RemoveNode(node.GetNthDisplayNode(i)) 436 | 437 | for i in range(node.GetNumberOfStorageNodes()): 438 | slicer.mrmlScene.RemoveNode(node.GetNthStorageNode(i)) 439 | 440 | slicer.mrmlScene.RemoveNode(node) 441 | 442 | def setDefaultParameters(self, parameterNode): 443 | """ 444 | Initialize parameter node with default settings.
445 | """ 446 | parameterNode.SetParameter(self.OPACITY, str(self.OPACITY_DEFAULT)) 447 | parameterNode.SetParameter(self.AUTOUPDATE, "true" if self.AUTOUPDATE_DEFAULT else "false") 448 | 449 | def setOpacity(self, newValue): 450 | parameterNode = self.getParameterNode() 451 | parameterNode.SetParameter(self.OPACITY, str(newValue)) 452 | 453 | def getOpacity(self): 454 | parameterNode = self.getParameterNode() 455 | opacityStr = parameterNode.GetParameter(self.OPACITY) 456 | if opacityStr is None or len(opacityStr) < 1: 457 | return self.OPACITY_DEFAULT 458 | else: 459 | return float(opacityStr) 460 | 461 | def setAutoUpdate(self, autoUpdate): 462 | parameterNode = self.getParameterNode() 463 | parameterNode.SetParameter(self.AUTOUPDATE, "true" if autoUpdate else "false") 464 | markupNode = parameterNode.GetNodeReference(self.INPUT_MARKUP) 465 | 466 | if self.observedMarkupNode is not None: 467 | self.removeObserver(self.observedMarkupNode, slicer.vtkMRMLMarkupsNode.PointModifiedEvent, self.onMarkupsUpdated) 468 | self.observedMarkupNode = None 469 | 470 | if autoUpdate and markupNode: 471 | self.observedMarkupNode = markupNode 472 | self.addObserver(self.observedMarkupNode, slicer.vtkMRMLMarkupsNode.PointModifiedEvent, self.onMarkupsUpdated) 473 | 474 | def getAutoUpdate(self): 475 | parameterNode = self.getParameterNode() 476 | autoUpdate = parameterNode.GetParameter(self.AUTOUPDATE) 477 | if autoUpdate is None or autoUpdate == '': 478 | return self.AUTOUPDATE_DEFAULT 479 | elif autoUpdate.lower() == "false": 480 | return False 481 | else: 482 | return True 483 | 484 | def setExportPath(self, newPath): 485 | settings = qt.QSettings() 486 | settings.setValue(self.OUTPUT_PATH_SETTING, newPath) 487 | 488 | def getExportPath(self): 489 | return slicer.util.settingsValue(self.OUTPUT_PATH_SETTING, None) 490 | 491 | def onMarkupsUpdated(self, caller, event): 492 | parameterNode = self.getParameterNode() 493 | markupNode = 
parameterNode.GetNodeReference(self.INPUT_MARKUP) 494 | opacity = self.getOpacity() 495 | self.updateSphere(markupNode, opacity) 496 | 497 | def updateSphere(self, inputMarkup, opacity): 498 | """ 499 | Run the processing algorithm. 500 | Can be used without GUI widget. 501 | :param inputMarkup: vtkMRMLMarkupsFiducialNode, first two points will be used 502 | :param opacity: float, for output model 503 | """ 504 | parameterNode = self.getParameterNode() 505 | outputModel = parameterNode.GetNodeReference(self.SPHERE_MODEL) 506 | 507 | if not inputMarkup or not outputModel: 508 | raise ValueError("Input or sphere model is invalid") 509 | 510 | if inputMarkup.GetNumberOfFiducials() < 2: 511 | raise ValueError("Too few markup points") 512 | 513 | import time 514 | startTime = time.time() 515 | logging.info('Processing started') 516 | 517 | p0 = np.zeros(3) 518 | inputMarkup.GetNthFiducialPosition(0, p0) 519 | p1 = np.zeros(3) 520 | inputMarkup.GetNthFiducialPosition(1, p1) 521 | 522 | c = (p0 + p1) / 2.0 523 | r = np.linalg.norm(p1 - p0) / 2.0 524 | 525 | source = vtk.vtkSphereSource() 526 | source.SetRadius(r) 527 | source.SetCenter(c[0], c[1], c[2]) 528 | source.Update() 529 | 530 | if outputModel.GetNumberOfDisplayNodes() < 1: 531 | outputModel.CreateDefaultDisplayNodes() 532 | outputModel.SetAndObservePolyData(source.GetOutput()) 533 | displayNode = outputModel.GetDisplayNode() 534 | displayNode.SetOpacity(opacity) 535 | 536 | stopTime = time.time() 537 | logging.info(f'Processing completed in {stopTime-startTime:.2f} seconds') 538 | 539 | # 540 | # CampTutorial2Test 541 | # 542 | 543 | class CampTutorial2Test(ScriptedLoadableModuleTest): 544 | """ 545 | This is the test case for your scripted module.
546 | Uses ScriptedLoadableModuleTest base class, available at: 547 | https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py 548 | """ 549 | 550 | def setUp(self): 551 | """ Do whatever is needed to reset the state - typically a scene clear will be enough. 552 | """ 553 | slicer.mrmlScene.Clear() 554 | 555 | def runTest(self): 556 | """Run as few or as many tests as needed here. 557 | """ 558 | self.setUp() 559 | self.test_CampTutorial21() 560 | 561 | def test_CampTutorial21(self): 562 | """ Ideally you should have several levels of tests. At the lowest level 563 | tests should exercise the functionality of the logic with different inputs 564 | (both valid and invalid). At higher levels your tests should emulate the 565 | way the user would interact with your code and confirm that it still works 566 | the way you intended. 567 | One of the most important features of tests is that they alert other 568 | developers when their changes will have an impact on the behavior of your 569 | module. For example, if a developer removes a feature that you depend on, 570 | your test should break so they know that the feature is needed.
571 | """ 572 | 573 | self.delayDisplay("Starting the test") 574 | 575 | # Create input data: a markups node whose first two points define the sphere diameter 576 | 577 | markupNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode") 578 | markupNode.AddFiducial(0, 0, 0) 579 | markupNode.AddFiducial(20, 0, 0) 580 | self.delayDisplay('Created test input markups') 581 | 582 | # Test the module logic 583 | 584 | logic = CampTutorial2Logic() 585 | logic.setupScene() 586 | opacity = 0.5 587 | logic.updateSphere(markupNode, opacity) 588 | 589 | # Verify that the output sphere model exists and has geometry 590 | 591 | outputModel = logic.getParameterNode().GetNodeReference(logic.SPHERE_MODEL) 592 | self.assertIsNotNone(outputModel) 593 | self.assertIsNotNone(outputModel.GetPolyData()) 594 | 595 | # The sphere diameter should match the distance between the two points 596 | bounds = outputModel.GetPolyData().GetBounds() 597 | self.assertAlmostEqual(bounds[1] - bounds[0], 20.0, delta=1.0) 598 | 599 | # The display node should use the requested opacity 600 | displayNode = outputModel.GetDisplayNode() 601 | self.assertAlmostEqual(displayNode.GetOpacity(), opacity) 602 | 603 | self.delayDisplay('Test passed') 604 | -------------------------------------------------------------------------------- /Examples/CampTutorial2/Resources/Icons/CampTutorial2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Examples/CampTutorial2/Resources/Icons/CampTutorial2.png -------------------------------------------------------------------------------- /Examples/CampTutorial2/Resources/UI/CampTutorial2.ui: -------------------------------------------------------------------------------- 1 | 2 | 3 | CampTutorial 4 | 5 | 6 | 7 | 0 8 | 0 9 | 279 10 | 347 11 | 12 | 13 | 14
| 15 | 16 | 17 | Inputs 18 | 19 | 20 | 21 | 22 | 23 | Input markups: 24 | 25 | 26 | 27 | 28 | 29 | 30 | Pick the input to the algorithm. 31 | 32 | 33 | 34 | vtkMRMLMarkupsFiducialNode 35 | 36 | 37 | 38 | false 39 | 40 | 41 | true 42 | 43 | 44 | true 45 | 46 | 47 | true 48 | 49 | 50 | true 51 | 52 | 53 | 54 | 55 | 56 | 57 | Opacity: 58 | 59 | 60 | 61 | 62 | 63 | 64 | Set threshold value for computing the output image. Voxels that have intensities lower than this value will set to zero. 65 | 66 | 67 | 0.010000000000000 68 | 69 | 70 | 0.100000000000000 71 | 72 | 73 | 0.000000000000000 74 | 75 | 76 | 1.000000000000000 77 | 78 | 79 | 0.500000000000000 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | Auto-update 90 | 91 | 92 | 93 | 94 | 95 | 96 | false 97 | 98 | 99 | Run the algorithm. 100 | 101 | 102 | Apply 103 | 104 | 105 | 106 | 107 | 108 | 109 | Output 110 | 111 | 112 | 113 | 114 | 115 | Output folder: 116 | 117 | 118 | 119 | 120 | 121 | 122 | ctkPathLineEdit::Dirs|ctkPathLineEdit::Drives|ctkPathLineEdit::Executable|ctkPathLineEdit::NoDot|ctkPathLineEdit::NoDotDot|ctkPathLineEdit::Readable 123 | 124 | 125 | ctkPathLineEdit::ShowDirsOnly 126 | 127 | 128 | 129 | 130 | 131 | 132 | Export data 133 | 134 | 135 | 136 | 137 | 138 | 139 | 140 | 141 | 142 | Qt::Vertical 143 | 144 | 145 | 146 | 20 147 | 40 148 | 149 | 150 | 151 | 152 | 153 | 154 | 155 | 156 | ctkCollapsibleButton 157 | QWidget 158 |
ctkCollapsibleButton.h
159 | 1 160 |
161 | 162 | ctkPathLineEdit 163 | QWidget 164 |
ctkPathLineEdit.h
165 |
166 | 167 | ctkSliderWidget 168 | QWidget 169 |
ctkSliderWidget.h
170 |
171 | 172 | qMRMLNodeComboBox 173 | QWidget 174 |
qMRMLNodeComboBox.h
175 |
176 | 177 | qMRMLWidget 178 | QWidget 179 |
qMRMLWidget.h
180 | 1 181 |
182 |
183 | 184 | 185 | 186 | CampTutorial 187 | mrmlSceneChanged(vtkMRMLScene*) 188 | inputMarkupSelector 189 | setMRMLScene(vtkMRMLScene*) 190 | 191 | 192 | 122 193 | 132 194 | 195 | 196 | 248 197 | 61 198 | 199 | 200 | 201 | 202 |
203 | -------------------------------------------------------------------------------- /Examples/CampTutorial2/Testing/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | add_subdirectory(Python) 2 | -------------------------------------------------------------------------------- /Examples/CampTutorial2/Testing/Python/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | 2 | #slicer_add_python_unittest(SCRIPT ${MODULE_NAME}ModuleTest.py) 3 | -------------------------------------------------------------------------------- /Git for macOS and Unix.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/Git for macOS and Unix.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PerkLab bootcamp 2 | 3 | ![](banner.jpg) 4 | 5 | The aim of this workshop, delivered as an intense three-day bootcamp, is to train engineering and computing students in best practices for prototyping medical image computing, computer-assisted intervention, and artificial intelligence applications in an open-source software environment. The workshop is also offered in support of the Train the Trainers program, established to foster sustainable technology development in medical interventions. It will be an online training event, offered by scientists and software engineers in Canada (Queen’s University, Carleton University, National Research Council), Spain (University of Las Palmas de Gran Canaria) and Senegal (Cheikh Anta Diop University, Ecole Supérieure Polytechnique). 6 | 7 | ## Logistics 8 | 9 | - Date: TBD 10 | - Location: TBD (meeting link is sent after registration). 11 | - Application: TBD 12 | - Any questions?
Send an email to [Andras Lasso](mailto:lasso@queensu.ca). 13 | 14 | ## Pre-requisites 15 | 16 | - Install the latest stable version (5.0.x) of [3D Slicer](https://download.slicer.org/). After installing Slicer, start it, open the Extension manager (menu: View/Extension manager), and install these extensions: SlicerIGT, SlicerOpenIGTLink, SegmentEditorExtraEffects, DebuggingTools, SlicerElastix, SegmentRegistration, DICOMwebBrowser, SlicerDMRI, SlicerJupyter. If a popup is displayed asking about **Install dependencies**, always click **Yes** to install them. 17 | - Install [Git for Windows](https://git-scm.com/download/win) and [TortoiseGit](https://tortoisegit.org/) if you have a Windows computer. On macOS and Linux, a Git client is usually installed by default. If you are not comfortable using software via the terminal, install a Git client with a graphical user interface, such as [GitHub Desktop](https://desktop.github.com/). 18 | - Install [Visual Studio Code](https://code.visualstudio.com/) and the Python extension from Microsoft (ms-python.python). 19 | - If you don't have a GitHub account yet, create one at www.github.com. 20 | - If you want to participate effectively in day 3: get familiar with Python and numpy syntax; spend some time getting to know VTK (read as much of the [VTK textbook](https://vtk.org/vtk-textbook/) as you can, and try to run some of the [VTK examples](https://kitware.github.io/vtk-examples/site/) in Python) and learn about [Qt for Python](https://www.qt.io/qt-for-python) (for example, complete a few basic tutorials).
21 | - Only for students at Queen's: Prepare a short introduction about yourself (2-3 minutes, supported by 1-2 slides): experience, research interests, something personal. 22 | - Install Zoom and familiarize yourself with [SpatialChat](https://spatial.chat/s/TryMe) (we will use it for hands-on sessions so that multiple participants can share their screens and ask instructors questions when they get stuck). 23 | - Windows users: Download and install the PLUS toolkit - stable 64-bit version (2.8.0.20191105-Win64) from [here](http://perk-software.cs.queensu.ca/plus/packages/stable/). 24 | - Windows users: Print a set of ArUco markers from [here](https://github.com/PlusToolkit/PlusLibData/raw/master/ConfigFiles/OpticalMarkerTracker/marker_sheet_36h12.pdf) at 100% scale. 25 | - Download the NeuroNav tutorial data from [here](https://queensuca-my.sharepoint.com/:f:/g/personal/1krs1_queensu_ca/EspLcq9slYBFphm5XxV2um8BjajOsRIXnmrszxvSoPPbVA?e=0RsHX2). 26 | 27 | ## Program 28 | 29 | The program is subject to change at any time, so please check this page regularly. 30 | 31 | Time zone: [Toronto, Canada (Eastern Time)](https://www.timeanddate.com/worldclock/canada/toronto).
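The Python/numpy prerequisite above can be self-checked with a short warm-up. This sketch (array values are illustrative, not from the tutorial data) exercises the operations that come up most often in the hands-on sessions: creating arrays of 3D point coordinates, slicing, and reductions.

```python
import numpy as np

# A small set of 3D points, one point per row
points = np.array([[0.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0],
                   [10.0, 10.0, 0.0]])

# Slicing: all x coordinates (first column)
x = points[:, 0]

# Reduction: centroid of the points (mean over rows)
centroid = points.mean(axis=0)

print(x)  # [ 0. 10. 10.]
```

If expressions like `points[:, 0]` and `axis=0` look unfamiliar, working through a short numpy tutorial before day 3 will pay off.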
32 | 33 | ### May 24, Tuesday: Introduction, 3D Slicer basics, Project management 34 | - 9:00am Introduction `zoom` 35 | - 9:15am Logistics (Andras) `spatialchat` 36 | - 9:30am 3D Slicer basics 1/2 `spatialchat` 37 | - Overview: core features, community, major extensions (20 min, Andras) 38 | - Visualization: load/save, sample data, viewers, models, volume rendering (40 min, Csaba/David hands-on, help: Andras, Kyle, Tamas, Csaba, David, Monica, in French: Marie), [slides & data](https://github.com/Slicer/SlicerDataLoadingAndVisualizationTutorial/tree/main?tab=readme-ov-file#data-loading-and-visualization-tutorial-for-3d-slicer) 39 | - _10:30am Break_ 40 | - 10:45am 3D Slicer basics 2/2 (Andras hands-on, help: Kyle, Tamas, Csaba, David, Monica, in French: Marie) 41 | - DICOM (15 min) 42 | - Segmentation (60 min) 43 | - _12:15pm Lunch break_ 44 | - 1:10pm Lab policies, available services, and guides (Tamas, Laura) _– only for students at Queen's_ `spatialchat` 45 | - 1:35pm Introduction of participants and instructors (Queen's students) _– only for students at Queen's_ `spatialchat` 46 | - _2:15pm Coffee break_ - bring your own beverage, get to know all the participants `spatialchat` 47 | - 2:30pm Software platform, open-source, reproducible science (Andras) `spatialchat` 48 | - 3:00pm Project management (Andras; hands-on, help: Kyle) `spatialchat` 49 | - 4:00pm Adjourn 50 | 51 | ### May 25, Wednesday: AI for image-guided interventions 52 | - 9:00am AI Overview (nomenclature, libraries, complete system) (Tamas) `spatialchat` 53 | - 9:30am Ultrasound AI segmentation, training and deployment (Tamas) `spatialchat` 54 | - 9:55am PyTorch and example projects (Amoon) `spatialchat` 55 | - 10:25am Tracking data evaluation (Matthew) `spatialchat` 56 | - _10:45am Break_ 57 | - 11:00am Prototyping image-guided therapy applications (Csaba, David, Monica) 58 | - 11:50am Registration: Elastix, landmark registration, SegmentRegistration, transforms, transform visualization (Andras) 
`spatialchat` 59 | - 12:10pm MONAILabel (Andras) `spatialchat` 60 | - _12:25pm Lunch break_ 61 | - 1:00pm DeepLearnLive (Rebecca) `spatialchat` 62 | - 1:20pm Introduction to Plus + SlicerIGT (Kyle) `spatialchat` 63 | - _2:30pm Break_ 64 | - 2:45pm Neuronavigation tutorial (Kyle; hands-on, help: Andras, Tamas) `spatialchat` 65 | - 4:00pm Adjourn 66 | 67 | ### May 26, Thursday: Slicer module development 68 | - 9:00am Writing correct and understandable code (Andras) `spatialchat` 69 | - 9:30am [Programming Slicer - part 1: 3D Slicer programming overview](https://github.com/Slicer/SlicerProgrammingTutorial) (Andras) `spatialchat` 70 | - 10:00am [Programming Slicer - part 2: Python basics and developing simple example Python module Center of Masses](https://github.com/Slicer/SlicerProgrammingTutorial) (Csaba, hands-on, help: Kyle, Andras, Tamas, David, Monica, Mark, Laura, Colton; in French: Marie) `spatialchat` 71 | - _12:15pm Lunch break_ 72 | - 1:00pm [Programming Slicer - part 3: Individual work to develop a more advanced module](https://github.com/Slicer/SlicerProgrammingTutorial) (hands-on, help: Kyle, Andras, Tamas, Laura, Colton) `spatialchat` 73 | - 4:00pm Adjourn 74 | 75 | Presentation slides and additional files will be available in this repository. 
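The `Examples/CampTutorial2` module included in this repository is the kind of module developed in part 3: its logic fits a sphere to the first two markup points, using their midpoint as the center and half the distance between them as the radius. That geometry can be sketched without Slicer or VTK (a minimal numpy-only version of the computation inside `updateSphere`; the function name here is illustrative):

```python
import numpy as np

def sphere_from_two_points(p0, p1):
    """Return (center, radius) of the sphere whose diameter is the segment p0-p1."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    center = (p0 + p1) / 2.0                # midpoint of the two markup points
    radius = np.linalg.norm(p1 - p0) / 2.0  # half the distance between them
    return center, radius

center, radius = sphere_from_two_points([0, 0, 0], [20, 0, 0])
print(radius)  # 10.0
```

In the module itself, the resulting center and radius are passed to `vtk.vtkSphereSource` and the generated polydata is set on the output model node.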
76 | -------------------------------------------------------------------------------- /banner.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/banner.jpg -------------------------------------------------------------------------------- /banner_v2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/banner_v2.jpg -------------------------------------------------------------------------------- /banner_v2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PerkLab/PerkLabBootcamp/b8e8a620b40c8037e40af4927f9d6cb473aa1cfd/banner_v2.png --------------------------------------------------------------------------------