├── Images
│   ├── SemiDenseNET.png
│   ├── __init__.txt
│   └── brainModalities.png
├── License
├── README.md
└── src
    ├── LiviaNET_Config.ini
    ├── LiviaNET_Segmentation.ini
    ├── LiviaNet
    │   ├── LiviaNet3DConvLayer.py
    │   ├── LiviaNet3DConvLayer.pyc
    │   ├── LiviaSemiDenseNet.py
    │   ├── LiviaSoftmax.py
    │   ├── LiviaSoftmax.pyc
    │   ├── Modules
    │   │   ├── General
    │   │   │   ├── Evaluation.py
    │   │   │   ├── Evaluation.pyc
    │   │   │   ├── Utils.py
    │   │   │   ├── Utils.pyc
    │   │   │   ├── __init__.py
    │   │   │   ├── __init__.pyc
    │   │   │   └── read.txt
    │   │   ├── IO
    │   │   │   ├── ImgOperations
    │   │   │   │   ├── __init__.py
    │   │   │   │   ├── __init__.pyc
    │   │   │   │   ├── imgOp.py
    │   │   │   │   └── imgOp.pyc
    │   │   │   ├── __init__.pyc
    │   │   │   ├── loadData.py
    │   │   │   ├── loadData.pyc
    │   │   │   ├── sampling.py
    │   │   │   ├── sampling.pyc
    │   │   │   ├── saveData.py
    │   │   │   └── saveData.pyc
    │   │   ├── NeuralNetwork
    │   │   │   ├── ActivationFunctions.py
    │   │   │   ├── ActivationFunctions.pyc
    │   │   │   ├── __init__.py
    │   │   │   ├── __init__.pyc
    │   │   │   ├── layerOperations.py
    │   │   │   └── layerOperations.pyc
    │   │   ├── Parsers
    │   │   │   ├── __init__.py
    │   │   │   ├── __init__.pyc
    │   │   │   ├── parsersUtils.py
    │   │   │   └── parsersUtils.pyc
    │   │   ├── __init__.py
    │   │   └── __init__.pyc
    │   ├── __init__.py
    │   ├── __init__.pyc
    │   ├── generateNetwork.py
    │   ├── generateNetwork.pyc
    │   ├── startTesting.py
    │   ├── startTesting.pyc
    │   ├── startTraining.py
    │   └── startTraining.pyc
    ├── generateROI.py
    ├── networkSegmentation.py
    ├── networkTraining.py
    └── processLabels.py

/Images/SemiDenseNET.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/Images/SemiDenseNET.png
--------------------------------------------------------------------------------
/Images/__init__.txt:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/Images/brainModalities.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/Images/brainModalities.png
--------------------------------------------------------------------------------
/License:
--------------------------------------------------------------------------------
1 | Copyright (c) 2017 Jose Dolz
2 | 
3 | Permission is hereby granted, free of charge, to any person
4 | obtaining a copy of this software and associated documentation
5 | files (the "Software"), to deal in the Software without
6 | restriction, including without limitation the rights to use, copy,
7 | modify, merge, publish, distribute, sublicense, and/or sell copies
8 | of the Software, and to permit persons to whom the Software is
9 | furnished to do so, subject to the following conditions:
10 | 
11 | The above copyright notice and this permission notice shall be
12 | included in all copies or substantial portions of the Software.
13 | 
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
15 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
16 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
17 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
18 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
19 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
20 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
21 | OTHER DEALINGS IN THE SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SemiDenseNet
2 | This repository contains the code of the network that we employed in the iSEG MICCAI Grand Challenge 2017 on infant brain segmentation. This network extends our previous work on 3D fully convolutional neural networks, presented in [3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study](http://www.sciencedirect.com/science/article/pii/S1053811917303324), whose code can be found here: https://github.com/josedolz/LiviaNET/.
3 | 
4 | 
5 | 
6 | 
7 | 
8 | IMPORTANT: This net is not the one that performed best in the iSEG Challenge, but the one with an individual path per modality and a late fusion stage that merges the learned features (see figure). If you want to adapt this net to the best-performing one, you should remove one of the paths and merge the T1 and T2 modalities at the input of the network.
9 | 
10 | 
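For reference, here is a minimal sketch of what that early-fusion variant means in practice. It is illustrative only (the variable names are hypothetical, and this is not the challenge submission code); it simply contrasts the input shapes of the two designs:

```
import numpy as np

# Late fusion (this repository): one single-channel subvolume batch per path
t1_batch = np.zeros((10, 1, 27, 27, 27), dtype='float32')  # fed to the top path
t2_batch = np.zeros((10, 1, 27, 27, 27), dtype='float32')  # fed to the bottom path

# Early fusion (best-performing variant): a single path whose input stacks
# both modalities along the channel axis
fused_batch = np.concatenate([t1_batch, t2_batch], axis=1)  # shape: (10, 2, 27, 27, 27)
```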
11 | 
12 | 
13 | 
14 | NOTE: The submission of this work is under review. If you use this code for your research, please consider citing the following papers:
15 | 
16 | - Dolz J, Desrosiers C, Ben Ayed I. "[3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study](http://www.sciencedirect.com/science/article/pii/S1053811917303324)." NeuroImage (2017).
17 | 
18 | - Dolz J, Desrosiers C, Wang L, Yuan J, Shen D, Ben Ayed I. "[Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation](https://arxiv.org/pdf/1712.05319.pdf)."
19 | 
20 | The main differences with respect to that network are:
21 | - The use of multi-modal images (MR-T1 and T2) processed in independent paths.
22 | - Feature maps from all the layers are connected to the first fully connected layer (that is why we call it Semi-dense Net).
23 | 
24 | ## Requirements
25 | 
26 | - The code has been written in Python (2.7) and requires [Theano](http://deeplearning.net/software/theano/).
27 | - You should also have [scipy](https://www.scipy.org/) installed.
28 | - (Optional) The code can load images in Matlab and Nifti formats. If you wish to use Nifti files you should install [nibabel](http://nipy.org/nibabel/).
29 | - Since, as you might know, sharing medical data is not good practice, I did not include any sample in the corresponding folders. To run your experiments you must place your data in the folders indicated in the config files (LiviaNET_Config.ini and LiviaNET_Segmentation.ini).
30 | 
31 | ## Running the code
32 | 
33 | Running the code works in the same way as in LiviaNET. What follows is just a reminder, in case you do not want to check the documentation of that net :)
34 | 
35 | ## Training
36 | 
37 | ### How do I train my own architecture from scratch?
38 | 
39 | To start with your own architecture, you have to modify the file "LiviaNET_Config.ini" according to your requirements.
40 | 
41 | Then you simply have to write in the command line:
42 | 
43 | ```
44 | python ./networkTraining.py ./LiviaNET_Config.ini 0
45 | ```
46 | 
47 | This will save the updated trained model after each epoch.
48 | 
49 | If you use a GPU, after nearly 5 minutes you will have your trained model from the example.
50 | 
51 | ### Can I re-start the training from another epoch?
52 | 
53 | Imagine that after two days of training your model, and just before your new model is ready to be evaluated, your computer breaks down. Do not panic!!! You only have to re-start the training from the last epoch at which the model was saved (let's say epoch 20) as follows:
54 | 
55 | ```
56 | python ./networkTraining.py ./LiviaNET_Config.ini 1 ./outputFiles/LiviaNet_Test/Networks/liviaTest_Epoch20
57 | ```
58 | 
59 | ### Ok, cool. And what about employing pre-trained models?
60 | 
61 | Yes, you can also do that. Instead of loading a whole model, which somewhat limits the usability of pre-trained models, this code allows loading the weights of each layer independently. Therefore, the weights for each layer have to be saved in independent files. In the current version (v1.0) weight files must be in numpy format (.npy).
62 | 
63 | For that you will have to specify in the "LiviaNET_Config.ini" file the folder where the weights are saved ("weights folderName") and in which layers you want to use transfer learning ("weights trained indexes").
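As an illustration of the expected layout, here is a minimal, hypothetical sketch of how such per-layer weight files could be produced (the array shapes follow the sample configuration; the file-naming scheme is an assumption, not a convention enforced by the parser):

```
import numpy as np

# Hypothetical export of per-layer kernels for transfer learning.
# Shape convention per layer: [numberOfKernels, inputFeatureMaps, kx, ky, kz]
trained_kernels = [
    np.random.randn(10, 1, 3, 3, 3).astype('float32'),   # conv layer 0
    np.random.randn(20, 10, 3, 3, 3).astype('float32'),  # conv layer 1
    np.random.randn(30, 20, 3, 3, 3).astype('float32'),  # conv layer 2
]

for idx, W in enumerate(trained_kernels):
    # One independent .npy file per layer, as required above
    # (assumes the ./trainedWeights/ folder already exists)
    np.save('./trainedWeights/weights_layer_{}.npy'.format(idx), W)
```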
64 | 
65 | ## Testing
66 | 
67 | ### How can I use a trained model?
68 | 
69 | Once you are satisfied with your training, you can evaluate the model by writing this in the command line:
70 | 
71 | ```
72 | python ./networkSegmentation.py ./LiviaNET_Segmentation.ini ./outputFiles/LiviaNet_Test/Networks/liviaTest_EpochX
73 | ```
74 | where X denotes the last (or desired) epoch at which the model was saved.
75 | 
76 | ### Versions
77 | - Version 1.0.
78 |   * October 20th, 2017
79 |   * The network processes multi-modal data (so far only two modalities) and fuses the learned features before the first fully connected layer.
80 | 
81 | 
82 | ### Known problems
83 | * On some computers I tried, when running on CPU, Theano complains about the type of some tensors. The work-around I have found is to set some Theano flags at the beginning of the scripts. Something like:
84 | 
85 | ```
86 | THEANO_FLAGS='floatX=float32' python ./networkTraining.py ./LiviaNET_Config.ini 0
87 | ```
88 | 
89 | You can contact me at: jose.dolz.upv@gmail.com
90 | 
--------------------------------------------------------------------------------
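Before the configuration file that follows, it may help to see how "kernelshapes" determines the network's receptive field and, with it, how many voxels of each subvolume are actually classified. The helper below is only a sketch of that arithmetic (it is not the repository's computeReceptiveField function from Modules/General/Utils.py) and assumes unit stride and no pooling, as in this release:

```
def receptive_field(kernel_shapes):
    # Only 3D kernels grow the receptive field: entries like [1]
    # denote fully connected layers
    rf = [1, 1, 1]
    for k in kernel_shapes:
        if len(k) == 3:
            rf = [r + (ki - 1) for r, ki in zip(rf, k)]
    return rf

kernels = [[3, 3, 3], [3, 3, 3], [3, 3, 3], [1]]  # "kernelshapes" in the config below
print(receptive_field(kernels))                   # -> [7, 7, 7]
# Hence a 27x27x27 training sample yields 27 - 7 + 1 = 21 classified voxels per axis,
# and a 35x35x35 testing sample yields 29 per axis.
```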
/src/LiviaNET_Config.ini:
--------------------------------------------------------------------------------
1 | 
2 | ############################################################################################################################################
3 | #################################################   CREATION OF THE NETWORK   #####################################################
4 | ############################################################################################################################################
5 | 
6 | 
7 | ##############  =================== General Options  ================= ################
8 | [General]
9 | networkName = LiviaSemiDenseNet
10 | # Saving Options
11 | folderName = LiviaSemiDenseNet
12 | 
13 | 
14 | ##############  =================== CNN_Architecture  ================= ################
15 | [CNN_Architecture]
16 | numkernelsperlayer = [10,20,30,100]
17 | 
18 | # Kernel shapes. (Note: a kernel size equal to 1 in a layer means that the layer is fully connected.)
19 | # In this example there will be 3 conv layers and 1 fully connected layer (+ classification layer)
20 | kernelshapes = [[3, 3, 3], [3, 3, 3], [3, 3, 3],[1]]
21 | 
22 | # Intermediate layers to connect to the last conv layer (just before the first fully connected layer)
23 | intermediateConnectedLayers = []
24 | 
25 | # The current implementation does not support pooling (to be done...)
26 | pooling_scales = [[1,1,1],[1,1,1],[1,1,1]]
27 | 
28 | # Array size should be equal to the number of fully connected (FC) layers + classification layer
29 | dropout_Rates = [0.25,0.5]
30 | 
31 | # Non-linear activations
32 | # Type: 0: Linear
33 | #       1: ReLU
34 | #       2: PReLU
35 | #       3: LeakyReLU
36 | activationType = 2
37 | 
38 | # TODO. Include activation type for Softmax layer
39 | # Number of classes: background + classes to segment
40 | n_classes = 4
41 | 
42 | # ------- Weights initialization ----------- #
43 | # There are several ways to initialize the weights, defined by the variable "weight_Initialization".
44 | # List of supported methods:
45 | # 0: Classic
46 | # 1: Delving (He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." ICCV'15)
47 | # 2: Load Pre-trained
48 | # ----------
49 | # There is also the choice of which layers will be initialized with pre-trained weights. This is specified in the variable
50 | # "load weight layers". This can be either empty (i.e. all layers will be initialized with pre-trained weights in case
51 | # "weight_Initialization" is 2)
52 | weight_Initialization_CNN = 1
53 | weight_Initialization_FCN = 1
54 | #load weight layers = [] # Next release
55 | # If using pre-trained models, specify the folder that contains the weights and the indexes of those weights to use.
56 | # To ease the transfer between different software (e.g. Matlab) and between different architectures,
57 | # the weights for each layer should be stored as a single file.
58 | # Right now weights have to be in .npy format
59 | weights folderName = /~yourpath/trainedWeights
60 | # Same length as conv layers
61 | weights trained indexes = [0,1,2]
62 | #weight_Initialization_Sec = 1
63 | 
64 | ##############  =================== Training Options  ================= ################
65 | [Training Parameters]
66 | batch_size=10
67 | number Of Epochs = 40
68 | number Of SubEpochs = 2
69 | number of samples at each SubEpoch Train = 10
70 | # TODO. Define some changes in the learning rate
71 | learning Rate change Type = 0
72 | 
73 | # Subvolume (i.e. sample) sizes.
74 | # Validation samples are equal to testing samples
75 | sampleSize_Train = [27,27,27]
76 | sampleSize_Test = [35,35,35]
77 | 
78 | # Cost function values
79 | # 0: Negative log-likelihood
80 | # 1: (Not defined yet)
81 | costFunction = 0
82 | SoftMax temperature = 1.0
83 | #### ========= Regularization ========== #####
84 | L1 Regularization Constant = 1e-6
85 | L2 Regularization Constant = 1e-4
86 | 
87 | # To check
88 | # The array size has to be equal to the total number of layers (i.e. CNNs + FCs + Classification layer)
89 | #Leraning Rate = [0.0001, 0.0001, 0.0001, 0.0001,0.0001, 0.0001, 0.0001, 0.0001,0.0001, 0.0001, 0.0001, 0.0001,0.0001, 0.0001 ]
90 | Leraning Rate = [0.001]
91 | # First epoch at which to change the learning rate
92 | First Epoch Change LR = 2
93 | # Every how many epochs the learning rate changes
94 | Frequency Change LR = 3
95 | # TODO. Add learning rate for each layer
96 | 
97 | #### ========= Momentum ========== #####
98 | # Type of momentum
99 | # 0: Classic
100 | # 1: Nesterov
101 | Momentum Type = 1
102 | Momentum Value = 0.6
103 | # Use normalized momentum?
104 | momentumNormalized = 1
105 | 
106 | #### ======== Optimizer ===== ######
107 | # Type: 0-> SGD
108 | #       1-> RMSProp (TODO. Check why RMSProp complains....)
109 | Optimizer Type = 1
110 | 
111 | # In case we choose RMSProp
112 | Rho RMSProp = 0.9
113 | Epsilon RMSProp = 1e-4
114 | 
115 | # Apply Batch normalization
116 | # 0: False, 1: True
117 | applyBatchNormalization = 1
118 | BatchNormEpochs = 20
119 | 
120 | # Apply padding to images
121 | # 0: False, 1: True
122 | applyPadding = 1
123 | 
124 | ############################################################################################################################################
125 | #################################################   TRAINING VALUES   #####################################################
126 | ############################################################################################################################################
127 | 
128 | [Training Images]
129 | imagesFolder = ../~yourPath/T1/
130 | imagesFolder_Bottom = ../yourPath/T2/
131 | GroundTruthFolder = ../yourPath/Labels/
132 | # The ROI folder will contain the ROIs where the patches are extracted and where the segmentation is performed.
133 | # Values of the ROI should be 0 (non-interest) and 1 (region of interest)
134 | ROIFolder = ../yourPath/ROI/
135 | 
136 | # If you have no ROIs
137 | #ROIFolder = []
138 | # Type of images in the dataset
139 | # 0: nifti format
140 | # 1: matlab format
141 | # IMPORTANT: All the volumes should have been saved as 'vol'
142 | imageTypes = 0
143 | 
144 | # Indexes for training/validation images. Note that indexes should correspond to the position = index + 1 in the folder,
145 | # since python starts indexing at 0
146 | indexesForTraining = [0,1,2]
147 | indexesForValidation = [3]
148 | 
--------------------------------------------------------------------------------
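If you have no anatomical ROI at hand, a mask that covers the whole volume also satisfies the 0/1 convention described above (the repository additionally ships a generateROI.py script in src/). Below is a minimal nibabel sketch, with purely illustrative file names:

```
import nibabel as nib
import numpy as np

# Build a trivial whole-volume ROI (all ones) for one subject
img = nib.load('./yourPath/T1/subject1-T1.nii.gz')
roi = np.ones(img.shape, dtype=np.uint8)   # 1 = region of interest everywhere
nib.save(nib.Nifti1Image(roi, img.affine), './yourPath/ROI/subject1-ROI.nii.gz')
```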
/src/LiviaNET_Segmentation.ini:
--------------------------------------------------------------------------------
1 | 
2 | ############################################################################################################################################
3 | #################################################   SEGMENTATION PARAMETERS   #####################################################
4 | ############################################################################################################################################
5 | 
6 | [Segmentation Images]
7 | imagesFolder = ../../../iSEG/Data/Nifti/Training/T1/
8 | imagesFolder_Bottom = ../../../iSEG/Data/Nifti/Training/T2/
9 | GroundTruthFolder = ../../../iSEG/Data/Nifti/Training/Labels/
10 | ROIFolder = ../../../iSEG/Data/Nifti/Training/ROI/
11 | # TODO.
12 | # Type of images in the dataset
13 | # 0: nifti format
14 | # 1: matlab format
15 | imageTypes = 0
16 | 
17 | # Indexes correspond to the position - 1 in the folder
18 | indexesToSegment = [1]
19 | 
20 | # Apply padding to images
21 | # 0: False, 1: True
22 | applyPadding = 1
--------------------------------------------------------------------------------
/src/LiviaNet/LiviaNet3DConvLayer.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (c) 2016, Jose Dolz. All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without modification,
5 | are permitted provided that the following conditions are met:
6 | 
7 | 1. Redistributions of source code must retain the above copyright notice,
8 |    this list of conditions and the following disclaimer.
9 | 2. Redistributions in binary form must reproduce the above copyright notice,
10 |    this list of conditions and the following disclaimer in the documentation
11 |    and/or other materials provided with the distribution.
12 | 
13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
20 | OTHER DEALINGS IN THE SOFTWARE.
21 | 
22 | Jose Dolz. Dec, 2016.
23 | email: jose.dolz.upv@gmail.com
24 | LIVIA Department, ETS, Montreal.
25 | """
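# Overview: this module implements the basic 3D convolutional block used by both
# modality paths of the semi-dense network. Each block applies, in order: batch
# normalization (or a simple bias), the configured non-linearity, dropout, and
# finally the 3D convolution itself (see passInputThroughLayerElements below).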
25 | """ 26 | 27 | 28 | import theano 29 | import theano.tensor as T 30 | from theano.tensor.nnet import conv2d 31 | import theano.tensor.nnet.conv3d2d 32 | import pdb 33 | 34 | import sys 35 | import os 36 | import numpy as np 37 | import numpy 38 | import random 39 | 40 | from Modules.General.Utils import initializeWeights 41 | from Modules.NeuralNetwork.ActivationFunctions import * 42 | from Modules.NeuralNetwork.layerOperations import * 43 | 44 | ################################################################# 45 | # Layer Types # 46 | ################################################################# 47 | 48 | class LiviaNet3DConvLayer(object): 49 | """Convolutional Layer of the Livia network """ 50 | def __init__(self, 51 | rng, 52 | layerID, 53 | inputSample_Train, 54 | inputSample_Test, 55 | inputToLayerShapeTrain, 56 | inputToLayerShapeTest, 57 | filterShape, 58 | useBatchNorm, 59 | numberEpochApplyRolling, 60 | maxPoolingParameters, 61 | weights_initMethodType, 62 | weights, 63 | activationType, 64 | dropoutRate=0.0) : 65 | 66 | self.inputTrain = None 67 | self.inputTest = None 68 | self.inputShapeTrain = None 69 | self.inputShapeTest = None 70 | 71 | self._numberOfFeatureMaps = 0 72 | self._maxPoolingParameters = None 73 | self._appliedBnInLayer = None 74 | self.params = [] 75 | self.W = None 76 | self._gBn = None 77 | self._b = None 78 | self._aPrelu = None 79 | self.numberOfTrainableParams = 0 80 | 81 | self.muBatchNorm = None 82 | self._varBnsArrayForRollingAverage = None 83 | self.numberEpochApplyRolling = numberEpochApplyRolling 84 | self.rollingIndex = 0 85 | self._sharedNewMu_B = None 86 | self._sharedNewVar_B = None 87 | self._newMu_B = None 88 | self._newVar_B = None 89 | 90 | self.outputTrain = None 91 | self.outputTest = None 92 | self.outputShapeTrain = None 93 | self.outputShapeTest = None 94 | 95 | # === After all the parameters has been initialized, create the layer 96 | # Set all the inputs and parameters 97 | self.inputTrain = inputSample_Train 98 | self.inputTest = inputSample_Test 99 | self.inputShapeTrain = inputToLayerShapeTrain 100 | self.inputShapeTest = inputToLayerShapeTest 101 | 102 | self._numberOfFeatureMaps = filterShape[0] 103 | assert self.inputShapeTrain[1] == filterShape[1] 104 | self._maxPoolingParameters = maxPoolingParameters 105 | 106 | print(" --- [STATUS] --------- Creating layer {} --------- ".format(layerID)) 107 | 108 | ## Process the input layer through all the steps over the block 109 | 110 | (inputToConvTrain, 111 | inputToConvTest) = self.passInputThroughLayerElements(inputSample_Train, 112 | inputToLayerShapeTrain, 113 | inputSample_Test, 114 | inputToLayerShapeTest, 115 | useBatchNorm, 116 | numberEpochApplyRolling, 117 | activationType, 118 | weights, 119 | dropoutRate, 120 | rng 121 | ) 122 | # input shapes for the convolutions 123 | inputToConvShapeTrain = inputToLayerShapeTrain 124 | inputToConvShapeTest = inputToLayerShapeTest 125 | 126 | # -------------- Weights initialization ------------- 127 | # Initialize weights with random weights if W is empty 128 | # Otherwise, use loaded weights 129 | 130 | self.W = initializeWeights(filterShape, 131 | weights_initMethodType, 132 | weights) 133 | 134 | self.params = [self.W] + self.params 135 | self.numberOfTrainableParams += 1 136 | 137 | ##---------- Convolve -------------- 138 | (convolvedOutput_Train, convolvedOutputShape_Train) = convolveWithKernel(self.W, filterShape, inputToConvTrain, inputToConvShapeTrain) 139 | (convolvedOutput_Test, convolvedOutputShape_Test) = 
147 |     def updateLayerMatricesBatchNorm(self):
148 | 
149 |         if self._appliedBnInLayer :
150 |             muArrayValue = self.muBatchNorm.get_value()
151 |             muArrayValue[self.rollingIndex] = self._sharedNewMu_B.get_value()
152 |             self.muBatchNorm.set_value(muArrayValue, borrow=True)
153 | 
154 |             varArrayValue = self._varBnsArrayForRollingAverage.get_value()
155 |             varArrayValue[self.rollingIndex] = self._sharedNewVar_B.get_value()
156 |             self._varBnsArrayForRollingAverage.set_value(varArrayValue, borrow=True)
157 |             self.rollingIndex = (self.rollingIndex + 1) % self.numberEpochApplyRolling
158 | 
159 |     def getUpdatesForBnRollingAverage(self) :
160 |         if self._appliedBnInLayer :
161 |             return [(self._sharedNewMu_B, self._newMu_B),
162 |                     (self._sharedNewVar_B, self._newVar_B) ]
163 |         else :
164 |             return []
165 | 
166 | 
167 |     def passInputThroughLayerElements(self,
168 |                                       inputSample_Train,
169 |                                       inputSampleShape_Train,
170 |                                       inputSample_Test,
171 |                                       inputSampleShape_Test,
172 |                                       useBatchNorm,
173 |                                       numberEpochApplyRolling,
174 |                                       activationType,
175 |                                       weights,
176 |                                       dropoutRate,
177 |                                       rndState):
178 |         """ Through each block the following steps are applied, according to Kamnitsas:
179 |             1 - Batch Normalization or biases
180 |             2 - Activation function
181 |             3 - Dropout
182 |             4 - (Optional) Max pooling
183 | 
184 |             Ref: He et al "Identity Mappings in Deep Residual Networks" 2016
185 |             https://github.com/KaimingHe/resnet-1k-layers/blob/master/resnet-pre-act.lua """
186 | 
187 |         # ________________________________________________________
188 |         #      1 :  Batch Normalization
189 |         # ________________________________________________________
190 |         """ Implementation taken from Kamnitsas' work.
191 | 
192 |             A batch normalization implementation in TensorFlow:
193 | 
194 |             http://r2rt.com/implementing-batch-normalization-in-tensorflow.html
195 | 
196 |             "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
197 |             Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015.
198 |             Journal of Machine Learning Research: W&CP volume 37
199 |         """
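        # In short: when useBatchNorm > 0, applyBn(...) normalizes with the current
        # batch statistics at training time and returns shared arrays holding the
        # means/variances of the last "numberEpochApplyRolling" batches; their rolling
        # average is what the test-time graph uses (see updateLayerMatricesBatchNorm).
        # Otherwise, only a learnable bias self._b is added to the feature maps.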
200 |         if useBatchNorm > 0 :
201 | 
202 |             self._appliedBnInLayer = True
203 | 
204 |             (inputToNonLinearityTrain,
205 |              inputToNonLinearityTest,
206 |              self._gBn,
207 |              self._b,
208 |              self.muBatchNorm,
209 |              self._varBnsArrayForRollingAverage,
210 |              self._sharedNewMu_B,
211 |              self._sharedNewVar_B,
212 |              self._newMu_B,
213 |              self._newVar_B) = applyBn( numberEpochApplyRolling,
214 |                                         inputSample_Train,
215 |                                         inputSample_Test,
216 |                                         inputSampleShape_Train)
217 | 
218 |             self.params = self.params + [self._gBn, self._b]
219 |         else :
220 |             self._appliedBnInLayer = False
221 |             numberOfInputFeatMaps = inputSampleShape_Train[1]
222 | 
223 |             b_values = np.zeros( (self._numberOfFeatureMaps), dtype = 'float32')
224 |             self._b = theano.shared(value=b_values, borrow=True)
225 | 
226 |             inputToNonLinearityTrain = applyBiasToFeatureMaps( self._b, inputSample_Train )
227 |             inputToNonLinearityTest = applyBiasToFeatureMaps( self._b, inputSample_Test )
228 | 
229 |             self.params = self.params + [self._b]
230 | 
231 |         # ________________________________________________________
232 |         #      2 :  Apply the corresponding activation function
233 |         # ________________________________________________________
234 |         def Linear():
235 |             print " --- Activation function: Linear"
236 |             self.activationFunctionType = "Linear"
237 |             output_Train = inputToNonLinearityTrain
238 |             output_Test = inputToNonLinearityTest
239 |             return (output_Train, output_Test)
240 | 
241 |         def ReLU():
242 |             print " --- Activation function: ReLU"
243 |             self.activationFunctionType = "ReLU"
244 |             output_Train = applyActivationFunction_ReLU_v1(inputToNonLinearityTrain)
245 |             output_Test = applyActivationFunction_ReLU_v1(inputToNonLinearityTest)
246 |             return (output_Train, output_Test)
247 | 
248 |         def PReLU():
249 |             print " --- Activation function: PReLU"
250 |             self.activationFunctionType = "PReLU"
251 |             numberOfInputFeatMaps = inputSampleShape_Train[1]
252 |             PReLU_Values = np.ones( (numberOfInputFeatMaps), dtype = 'float32' )*0.01
253 |             self._aPrelu = theano.shared(value=PReLU_Values, borrow=True)
254 | 
255 |             output_Train = applyActivationFunction_PReLU(inputToNonLinearityTrain, self._aPrelu)
256 |             output_Test = applyActivationFunction_PReLU(inputToNonLinearityTest, self._aPrelu)
257 |             self.params = self.params + [self._aPrelu]
258 |             self.numberOfTrainableParams += 1
259 |             return (output_Train,output_Test)
260 | 
261 |         def LeakyReLU():
262 |             print " --- Activation function: Leaky ReLU "
263 |             self.activationFunctionType = "Leaky ReLU"
264 |             leakiness = 0.2 # TODO. 
Introduce this value in the config.ini 265 | output_Train = applyActivationFunction_LeakyReLU(inputToNonLinearityTrain,leakiness) 266 | output_Test = applyActivationFunction_LeakyReLU(inputToNonLinearityTest,leakiness) 267 | return (output_Train, output_Test) 268 | 269 | optionsActFunction = {0 : Linear, 270 | 1 : ReLU, 271 | 2 : PReLU, 272 | 3 : LeakyReLU} 273 | 274 | (inputToDropout_Train, inputToDropout_Test) = optionsActFunction[activationType]() 275 | 276 | # ________________________________________________________ 277 | # 3 : Apply Dropout 278 | # ________________________________________________________ 279 | output_Train = apply_Dropout(rndState,dropoutRate,inputSampleShape_Train,inputToDropout_Train, 0) 280 | output_Test = apply_Dropout(rndState,dropoutRate,inputSampleShape_Train,inputToDropout_Test, 1) 281 | 282 | # ________________________________________________________ 283 | # This will go as input to the convolutions 284 | # ________________________________________________________ 285 | 286 | return (output_Train, output_Test) 287 | 288 | 289 | -------------------------------------------------------------------------------- /src/LiviaNet/LiviaNet3DConvLayer.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/LiviaNet3DConvLayer.pyc -------------------------------------------------------------------------------- /src/LiviaNet/LiviaSemiDenseNet.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2017, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | NOTES: There are still some functionalities to be implemented. 23 | 24 | - Add pooling layer in 3D 25 | - Add more activation functions 26 | - Add more optimizers (ex. Adam) 27 | 28 | Jose Dolz. Oct, 2017. 29 | email: jose.dolz.upv@gmail.com 30 | LIVIA Department, ETS, Montreal. 
31 | """ 32 | 33 | import numpy 34 | import numpy as np 35 | 36 | import theano 37 | import theano.tensor as T 38 | from theano.tensor.nnet import conv 39 | import random 40 | from math import floor 41 | from math import ceil 42 | 43 | from Modules.General.Utils import computeReceptiveField 44 | from Modules.General.Utils import extendLearningRateToParams 45 | from Modules.General.Utils import extractCenterFeatMaps 46 | from Modules.General.Utils import getCentralVoxels 47 | from Modules.General.Utils import getWeightsSet 48 | 49 | import LiviaNet3DConvLayer 50 | import LiviaSoftmax 51 | import pdb 52 | 53 | ##################################################### 54 | # ------------------------------------------------- # 55 | ## ## ## ## LIVIA-SemiDenseNET 3D ## ## ## ## 56 | # ------------------------------------------------- # 57 | ##################################################### 58 | 59 | 60 | class LiviaSemiDenseNet3D(object): 61 | def __init__(self): 62 | 63 | # --- containers for Theano compiled functions ---- 64 | self.networkModel_Train = "" 65 | self.networkModel_Test = "" 66 | 67 | # --- shared variables will be stored in the following variables ---- 68 | self.trainingData_x = "" 69 | self.testingData_x = "" 70 | self.trainingData_y = "" 71 | 72 | self.trainingData_x_Bottom = "" 73 | self.testingData_x_Bottom = "" 74 | 75 | self.lastLayer = "" 76 | self.networkLayers = [] 77 | self.intermediate_ConnectedLayers = [] 78 | 79 | self.networkName = "" 80 | self.folderName = "" 81 | self.cnnLayers = [] 82 | self.n_classes = -1 83 | 84 | self.sampleSize_Train = [] 85 | self.sampleSize_Test = [] 86 | self.kernel_Shapes = [] 87 | 88 | self.pooling_scales = [] 89 | self.dropout_Rates = [] 90 | self.activationType = -1 91 | self.weight_Initialization = -1 92 | self.dropoutRates = [] 93 | self.batch_Size = -1 94 | self.receptiveField = 0 95 | 96 | self.initialLearningRate = "" 97 | self.learning_rate = theano.shared(np.cast["float32"](0.01)) 98 | 99 | # Symbolic variables, 100 | self.inputNetwork_Train = None 101 | self.inputNetwork_Test = None 102 | 103 | self.L1_reg_C = 0 104 | self.L2_reg_C = 0 105 | self.costFunction = 0 106 | 107 | # Params for optimizers 108 | self.initialMomentum = "" 109 | self.momentum = theano.shared(np.cast["float32"](0.)) 110 | self.momentumNormalized = 0 111 | self.momentumType = 0 112 | self.vel_Momentum = [] 113 | self.rho_RMSProp = 0 114 | self.epsilon_RMSProp = 0 115 | self.params_RmsProp = [] 116 | self.numberOfEpochsTrained = 0 117 | self.applyBatchNorm = "" 118 | self.numberEpochToApplyBatchNorm = 0 119 | self.softmax_Temp = 1.0 120 | 121 | self.centralVoxelsTrain = "" 122 | self.centralVoxelsTest = "" 123 | 124 | # -------------------------------------------------------------------- END Function ------------------------------------------------------------------- # 125 | 126 | """ ####### Function to generate the network architecture ######### """ 127 | def generateNetworkLayers(self, 128 | cnnLayers, 129 | kernel_Shapes, 130 | maxPooling_Layer, 131 | sampleShape_Train, 132 | sampleShape_Test, 133 | inputSample_Train, 134 | inputSample_Train_Bottom, 135 | inputSample_Test, 136 | inputSample_Test_Bottom, 137 | layersToConnect): 138 | 139 | rng = np.random.RandomState(24575) 140 | 141 | # Define inputs for first layers (which will be re-used for next layers) 142 | inputSampleToNextLayer_Train = inputSample_Train 143 | inputSampleToNextLayer_Test = inputSample_Test 144 | inputSampleToNextLayer_Train_Bottom = inputSample_Train_Bottom 145 | 
inputSampleToNextLayer_Test_Bottom = inputSample_Test_Bottom 146 | inputSampleToNextLayerShape_Train = sampleShape_Train 147 | inputSampleToNextLayerShape_Test = sampleShape_Test 148 | 149 | # Get the convolutional layers 150 | numLayers = len(kernel_Shapes) 151 | numberCNNLayers = [] 152 | numberFCLayers = [] 153 | for l_i in range(1,len(kernel_Shapes)): 154 | if len(kernel_Shapes[l_i]) == 3: 155 | numberCNNLayers = l_i + 1 156 | 157 | numberFCLayers = numLayers - numberCNNLayers 158 | 159 | ######### -------------- Generate the convolutional layers -------------- ######### 160 | # Some checks 161 | if self.weight_Initialization_CNN == 2: 162 | if len(self.weightsTrainedIdx) <> numberCNNLayers: 163 | print(" ... WARNING!!!! Number of indexes specified for trained layers does not correspond with number of conv layers in the created architecture...") 164 | 165 | if self.weight_Initialization_CNN == 2: 166 | weightsNames = getWeightsSet(self.weightsFolderName, self.weightsTrainedIdx) 167 | 168 | for l_i in xrange(0, numberCNNLayers) : 169 | 170 | # Get properties of this layer 171 | # The second element is the number of feature maps of previous layer 172 | currentLayerKernelShape = [cnnLayers[l_i], inputSampleToNextLayerShape_Train[1]] + kernel_Shapes[l_i] 173 | 174 | # If weights are going to be initialized from other pre-trained network they should be loaded in this stage 175 | # Otherwise 176 | weights = [] 177 | if self.weight_Initialization_CNN == 2: 178 | weights = np.load(weightsNames[l_i]) 179 | 180 | maxPoolingParameters = [] 181 | dropoutRate = 0.0 182 | ### TOP ## 183 | myLiviaNet3DConvLayerTop = LiviaNet3DConvLayer.LiviaNet3DConvLayer(rng, 184 | l_i, 185 | inputSampleToNextLayer_Train, 186 | inputSampleToNextLayer_Test, 187 | inputSampleToNextLayerShape_Train, 188 | inputSampleToNextLayerShape_Test, 189 | currentLayerKernelShape, 190 | self.applyBatchNorm, 191 | self.numberEpochToApplyBatchNorm, 192 | maxPoolingParameters, 193 | self.weight_Initialization_CNN, 194 | weights, 195 | self.activationType, 196 | dropoutRate 197 | ) 198 | 199 | self.networkLayers.append(myLiviaNet3DConvLayerTop) 200 | 201 | ## BOTTOM ## 202 | myLiviaNet3DConvLayerBottom = LiviaNet3DConvLayer.LiviaNet3DConvLayer(rng, 203 | l_i, 204 | inputSampleToNextLayer_Train_Bottom, 205 | inputSampleToNextLayer_Test_Bottom, 206 | inputSampleToNextLayerShape_Train, 207 | inputSampleToNextLayerShape_Test, 208 | currentLayerKernelShape, 209 | self.applyBatchNorm, 210 | self.numberEpochToApplyBatchNorm, 211 | maxPoolingParameters, 212 | self.weight_Initialization_CNN, 213 | weights, 214 | self.activationType, 215 | dropoutRate 216 | ) 217 | 218 | self.networkLayers.append(myLiviaNet3DConvLayerBottom) 219 | # Just for printing 220 | inputSampleToNextLayer_Train_Old = inputSampleToNextLayerShape_Train 221 | inputSampleToNextLayer_Test_Old = inputSampleToNextLayerShape_Test 222 | 223 | # Update inputs for next layer 224 | inputSampleToNextLayer_Train = myLiviaNet3DConvLayerTop.outputTrain 225 | inputSampleToNextLayer_Test = myLiviaNet3DConvLayerTop.outputTest 226 | 227 | inputSampleToNextLayer_Train_Bottom = myLiviaNet3DConvLayerBottom.outputTrain 228 | inputSampleToNextLayer_Test_Bottom = myLiviaNet3DConvLayerBottom.outputTest 229 | 230 | inputSampleToNextLayerShape_Train = myLiviaNet3DConvLayerTop.outputShapeTrain 231 | inputSampleToNextLayerShape_Test = myLiviaNet3DConvLayerTop.outputShapeTest 232 | 233 | print(" ----- (Training) Input shape: {} ---> Output shape: {} || kernel shape 
{}".format(inputSampleToNextLayer_Train_Old,inputSampleToNextLayerShape_Train, currentLayerKernelShape)) 234 | print(" ----- (Testing) Input shape: {} ---> Output shape: {}".format(inputSampleToNextLayer_Test_Old,inputSampleToNextLayerShape_Test)) 235 | 236 | 237 | ### Create the semi-dense connectivity 238 | centralVoxelsTrain = self.centralVoxelsTrain 239 | centralVoxelsTest = self.centralVoxelsTest 240 | 241 | numLayersPerPath = len(self.networkLayers)/2 242 | 243 | featMapsInFullyCN = 0 244 | # ------- TOP -------- # 245 | print(" ----------- TOP PATH ----------------") 246 | for l_i in xrange(0,numLayersPerPath-1) : 247 | print(' --- concatennating layer {} ...'.format(str(l_i))) 248 | currentLayer = self.networkLayers[2*l_i] # to access the layers from the top path 249 | output_train = currentLayer.outputTrain 250 | output_trainShape = currentLayer.outputShapeTrain 251 | output_test = currentLayer.outputTest 252 | output_testShape = currentLayer.outputShapeTest 253 | 254 | # Get the middle part of feature maps at intermediate levels to make them of the same shape at the beginning of the 255 | # first fully connected layer 256 | featMapsCenter_Train = extractCenterFeatMaps(output_train, output_trainShape, centralVoxelsTrain) 257 | featMapsCenter_Test = extractCenterFeatMaps(output_test, output_testShape, centralVoxelsTest) 258 | 259 | featMapsInFullyCN = featMapsInFullyCN + currentLayer._numberOfFeatureMaps 260 | inputSampleToNextLayer_Train = T.concatenate([inputSampleToNextLayer_Train, featMapsCenter_Train], axis=1) 261 | inputSampleToNextLayer_Test = T.concatenate([inputSampleToNextLayer_Test, featMapsCenter_Test], axis=1) 262 | 263 | # ------- Bottom -------- # 264 | print(" ---------- BOTTOM PATH ---------------") 265 | for l_i in xrange(0,numLayersPerPath-1) : 266 | print(' --- concatennating layer {} ...'.format(str(l_i))) 267 | currentLayer = self.networkLayers[2*l_i+1] # to access the layers from the bottom path 268 | output_train = currentLayer.outputTrain 269 | output_trainShape = currentLayer.outputShapeTrain 270 | output_test = currentLayer.outputTest 271 | output_testShape = currentLayer.outputShapeTest 272 | 273 | # Get the middle part of feature maps at intermediate levels to make them of the same shape at the beginning of the 274 | # first fully connected layer 275 | featMapsCenter_Train = extractCenterFeatMaps(output_train, output_trainShape, centralVoxelsTrain) 276 | featMapsCenter_Test = extractCenterFeatMaps(output_test, output_testShape, centralVoxelsTest) 277 | 278 | featMapsInFullyCN = featMapsInFullyCN + currentLayer._numberOfFeatureMaps 279 | inputSampleToNextLayer_Train_Bottom = T.concatenate([inputSampleToNextLayer_Train_Bottom, featMapsCenter_Train], axis=1) 280 | inputSampleToNextLayer_Test_Bottom = T.concatenate([inputSampleToNextLayer_Test_Bottom, featMapsCenter_Test], axis=1) 281 | 282 | 283 | ######### -------------- Generate the Fully Connected Layers ----------------- ################## 284 | inputToFullyCN_Train = inputSampleToNextLayer_Train 285 | inputToFullyCN_Train = T.concatenate([inputToFullyCN_Train, inputSampleToNextLayer_Train_Bottom], axis=1) 286 | 287 | inputToFullyCN_Test = inputSampleToNextLayer_Test 288 | inputToFullyCN_Test = T.concatenate([inputToFullyCN_Test, inputSampleToNextLayer_Test_Bottom], axis=1) 289 | 290 | featMapsInFullyCN = featMapsInFullyCN + inputSampleToNextLayerShape_Train[1] * 2 # Because we have two symmetric paths 291 | # Define inputs 292 | inputFullyCNShape_Train = [self.batch_Size, featMapsInFullyCN] + 
inputSampleToNextLayerShape_Train[2:5] 293 | inputFullyCNShape_Test = [self.batch_Size, featMapsInFullyCN] + inputSampleToNextLayerShape_Test[2:5] 294 | 295 | # Kamnitsas applied padding and mirroring to the images when kernels in FC layers were larger than 1x1x1. 296 | # For this current work, we employed kernels of this size (i.e. 1x1x1), so there is no need to apply padding or mirroring. 297 | # TODO. Check 298 | 299 | print(" --- Starting to create the fully connected layers....") 300 | for l_i in xrange(numberCNNLayers, numLayers) : 301 | numberOfKernels = cnnLayers[l_i] 302 | kernel_shape = [kernel_Shapes[l_i][0],kernel_Shapes[l_i][0],kernel_Shapes[l_i][0]] 303 | 304 | currentLayerKernelShape = [cnnLayers[l_i], inputFullyCNShape_Train[1]] + kernel_shape 305 | 306 | # If weights are going to be initialized from other pre-trained network they should be loaded in this stage 307 | # Otherwise 308 | 309 | weights = [] 310 | applyBatchNorm = True 311 | epochsToApplyBatchNorm = 60 312 | maxPoolingParameters = [] 313 | dropoutRate = self.dropout_Rates[l_i-numberCNNLayers] 314 | myLiviaNet3DFullyConnectedLayer = LiviaNet3DConvLayer.LiviaNet3DConvLayer(rng, 315 | l_i, 316 | inputToFullyCN_Train, 317 | inputToFullyCN_Test, 318 | inputFullyCNShape_Train, 319 | inputFullyCNShape_Test, 320 | currentLayerKernelShape, 321 | self.applyBatchNorm, 322 | self.numberEpochToApplyBatchNorm, 323 | maxPoolingParameters, 324 | self.weight_Initialization_FCN, 325 | weights, 326 | self.activationType, 327 | dropoutRate 328 | ) 329 | 330 | self.networkLayers.append(myLiviaNet3DFullyConnectedLayer) 331 | 332 | # Just for printing 333 | inputFullyCNShape_Train_Old = inputFullyCNShape_Train 334 | inputFullyCNShape_Test_Old = inputFullyCNShape_Test 335 | 336 | # Update inputs for next layer 337 | inputToFullyCN_Train = myLiviaNet3DFullyConnectedLayer.outputTrain 338 | inputToFullyCN_Test = myLiviaNet3DFullyConnectedLayer.outputTest 339 | 340 | inputFullyCNShape_Train = myLiviaNet3DFullyConnectedLayer.outputShapeTrain 341 | inputFullyCNShape_Test = myLiviaNet3DFullyConnectedLayer.outputShapeTest 342 | 343 | # Print 344 | print(" ----- (Training) Input shape: {} ---> Output shape: {} || kernel shape {}".format(inputFullyCNShape_Train_Old,inputFullyCNShape_Train, currentLayerKernelShape)) 345 | print(" ----- (Testing) Input shape: {} ---> Output shape: {}".format(inputFullyCNShape_Test_Old,inputFullyCNShape_Test)) 346 | 347 | 348 | ######### -------------- Do Classification layer ----------------- ################## 349 | 350 | # Define kernel shape for classification layer 351 | featMaps_LastLayer = self.cnnLayers[-1] 352 | filterShape_ClassificationLayer = [self.n_classes, featMaps_LastLayer, 1, 1, 1] 353 | 354 | # Define inputs and shapes for the classification layer 355 | inputImageClassificationLayer_Train = inputToFullyCN_Train 356 | inputImageClassificationLayer_Test = inputToFullyCN_Test 357 | 358 | inputImageClassificationLayerShape_Train = inputFullyCNShape_Train 359 | inputImageClassificationLayerShape_Test = inputFullyCNShape_Test 360 | 361 | print(" ----- (Classification layer) kernel shape {}".format(filterShape_ClassificationLayer)) 362 | classification_layer_Index = l_i 363 | 364 | weights = [] 365 | applyBatchNorm = True 366 | epochsToApplyBatchNorm = 60 367 | maxPoolingParameters = [] 368 | dropoutRate = self.dropout_Rates[len(self.dropout_Rates)-1] 369 | softmaxTemperature = 1.0 370 | 371 | myLiviaNet_ClassificationLayer = LiviaSoftmax.LiviaSoftmax(rng, 372 | classification_layer_Index, 373 | 
inputImageClassificationLayer_Train, 374 | inputImageClassificationLayer_Test, 375 | inputImageClassificationLayerShape_Train, 376 | inputImageClassificationLayerShape_Test, 377 | filterShape_ClassificationLayer, 378 | self.applyBatchNorm, 379 | self.numberEpochToApplyBatchNorm, 380 | maxPoolingParameters, 381 | self.weight_Initialization_FCN, 382 | weights, 383 | 0, #self.activationType, 384 | dropoutRate, 385 | softmaxTemperature 386 | ) 387 | 388 | self.networkLayers.append(myLiviaNet_ClassificationLayer) 389 | self.lastLayer = myLiviaNet_ClassificationLayer 390 | 391 | print(" ----- (Training) Input shape: {} ---> Output shape: {} || kernel shape {}".format(inputImageClassificationLayerShape_Train,myLiviaNet_ClassificationLayer.outputShapeTrain, filterShape_ClassificationLayer)) 392 | print(" ----- (Testing) Input shape: {} ---> Output shape: {}".format(inputImageClassificationLayerShape_Test,myLiviaNet_ClassificationLayer.outputShapeTest)) 393 | 394 | # -------------------------------------------------------------------- END Function ------------------------------------------------------------------- # 395 | 396 | 397 | def updateLayersMatricesBatchNorm(self): 398 | for l_i in xrange(0, len(self.networkLayers) ) : 399 | self.networkLayers[l_i].updateLayerMatricesBatchNorm() 400 | # -------------------------------------------------------------------- END Function ------------------------------------------------------------------- # 401 | 402 | """ Function that connects intermediate layers to the input of the first fully connected layer 403 | This is done for multi-scale features """ 404 | def connectIntermediateLayers(self, 405 | layersToConnect, 406 | inputSampleInFullyCN_Train, 407 | inputSampleInFullyCN_Test, 408 | featMapsInFullyCN): 409 | 410 | centralVoxelsTrain = self.centralVoxelsTrain 411 | centralVoxelsTest = self.centralVoxelsTest 412 | 413 | for l_i in layersToConnect : 414 | currentLayer = self.networkLayers[l_i] 415 | output_train = currentLayer.outputTrain 416 | output_trainShape = currentLayer.outputShapeTrain 417 | output_test = currentLayer.outputTest 418 | output_testShape = currentLayer.outputShapeTest 419 | 420 | # Get the middle part of feature maps at intermediate levels to make them of the same shape at the beginning of the 421 | # first fully connected layer 422 | featMapsCenter_Train = extractCenterFeatMaps(output_train, output_trainShape, centralVoxelsTrain) 423 | featMapsCenter_Test = extractCenterFeatMaps(output_test, output_testShape, centralVoxelsTest) 424 | 425 | featMapsInFullyCN = featMapsInFullyCN + currentLayer._numberOfFeatureMaps 426 | inputSampleInFullyCN_Train = T.concatenate([inputSampleInFullyCN_Train, featMapsCenter_Train], axis=1) 427 | inputSampleInFullyCN_Test = T.concatenate([inputSampleInFullyCN_Test, featMapsCenter_Test], axis=1) 428 | 429 | return [featMapsInFullyCN, inputSampleInFullyCN_Train, inputSampleInFullyCN_Test] 430 | 431 | 432 | ############# Functions for OPTIMIZERS ################# 433 | 434 | def getUpdatesOfTrainableParameters(self, cost, paramsTraining, numberParamsPerLayer) : 435 | # Optimizers 436 | def SGD(): 437 | print (" --- Optimizer: Stochastic gradient descent (SGD)") 438 | updates = self.updateParams_SGD(cost, paramsTraining, numberParamsPerLayer) 439 | return updates 440 | def RMSProp(): 441 | print (" --- Optimizer: RMS Prop") 442 | updates = self.updateParams_RMSProp(cost, paramsTraining, numberParamsPerLayer) 443 | return updates 444 | 445 | # TODO. 
Include more optimizers here
446 |         optionsOptimizer = {0 : SGD,
447 |                             1 : RMSProp}
448 | 
449 |         updates = optionsOptimizer[self.optimizerType]()
450 | 
451 |         return updates
452 | 
453 |     """ # Optimizers:
454 |         # More optimizers in : https://github.com/Lasagne/Lasagne/blob/master/lasagne/updates.py """
455 |     # ========= Update the trainable parameters using Stochastic Gradient Descent ===============
456 |     def updateParams_SGD(self, cost, paramsTraining, numberParamsPerLayer) :
457 |         # Create a list of gradients for all model parameters
458 |         grads = T.grad(cost, paramsTraining)
459 | 
460 |         # Get learning rates for each param
461 |         #learning_rates = extendLearningRateToParams(numberParamsPerLayer,self.learning_rate)
462 | 
463 |         self.vel_Momentum = []
464 |         updates = []
465 | 
466 |         constantForCurrentGradientUpdate = 1.0 - self.momentum*self.momentumNormalized
467 | 
468 |         #for param, grad, lrate in zip(paramsTraining, grads, learning_rates) :
469 |         for param, grad in zip(paramsTraining, grads) :
470 |             v = theano.shared(param.get_value()*0., broadcastable=param.broadcastable)
471 |             self.vel_Momentum.append(v)
472 | 
473 |             stepToGradientDirection = constantForCurrentGradientUpdate*self.learning_rate*grad
474 |             newVel = self.momentum * v - stepToGradientDirection
475 | 
476 |             if self.momentumType == 0 :
477 |                 updateToParam = newVel
478 |             else :
479 |                 updateToParam = self.momentum*newVel - stepToGradientDirection
480 | 
481 |             updates.append((v, newVel))
482 |             updates.append((param, param + updateToParam))
483 | 
484 |         return updates
485 | 
486 |     # ========= Update the trainable parameters using RMSProp ===============
487 |     def updateParams_RMSProp(self, cost, paramsTraining, numberParamsPerLayer) :
488 |         # Original code: https://gist.github.com/Newmu/acb738767acb4788bac3
489 |         # epsilon=1e-4 in paper.
490 |         # Kamnitsas reported NaN values in cost function when employing this value.
491 |         # Worked ok with epsilon=1e-6. 
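        # Update rule sketch for each parameter p with gradient g (matching the code below):
        #   accu_new = rho * accu + (1 - rho) * g^2
        #   step     = (1 - momentum * momentumNormalized) * lr * g / sqrt(accu_new + epsilon)
        #   v_new    = momentum * v - step
        # followed by either a classic update (p += v_new) or a Nesterov-style
        # update (p += momentum * v_new - step), depending on momentumType.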
492 | 493 | grads = T.grad(cost, paramsTraining) 494 | 495 | # Get learning rates for each param 496 | #learning_rates = extendLearningRateToParams(numberParamsPerLayer,self.learning_rate) 497 | 498 | self.params_RmsProp = [] 499 | self.vel_Momentum = [] 500 | updates = [] 501 | 502 | constantForCurrentGradientUpdate = 1.0 - self.momentum*self.momentumNormalized 503 | 504 | # Using theano constant to prevent upcasting of float32 505 | one = T.constant(1) 506 | 507 | for param, grad in zip(paramsTraining, grads): 508 | accu = theano.shared(param.get_value()*0., broadcastable=param.broadcastable) 509 | self.params_RmsProp.append(accu) 510 | 511 | v = theano.shared(param.get_value()*0., broadcastable=param.broadcastable) 512 | 513 | self.vel_Momentum.append(v) 514 | 515 | accu_new = self.rho_RMSProp * accu + (one - self.rho_RMSProp) * T.sqr(grad) 516 | 517 | numGradStep = self.learning_rate * grad 518 | denGradStep = T.sqrt(accu_new + self.epsilon_RMSProp) 519 | 520 | stepToGradientDirection = constantForCurrentGradientUpdate*(numGradStep /denGradStep) 521 | 522 | newVel = self.momentum * v - stepToGradientDirection 523 | 524 | if self.momentumType == 0 : 525 | updateToParam = newVel 526 | else : 527 | updateToParam = self.momentum*newVel - stepToGradientDirection 528 | 529 | updates.append((accu, accu_new)) 530 | updates.append((v, newVel)) 531 | updates.append((param, param + updateToParam)) 532 | 533 | return updates 534 | 535 | # -------------------------------------------------------------------- END Function ------------------------------------------------------------------- # 536 | 537 | """ ------ Get trainable parameters --------- """ 538 | def getTrainable_Params(_self): 539 | trainable_Params = [] 540 | numberTrain_ParamsLayer = [] 541 | for l_i in xrange(0, len(_self.networkLayers) ) : 542 | trainable_Params = trainable_Params + _self.networkLayers[l_i].params 543 | numberTrain_ParamsLayer.append(_self.networkLayers[l_i].numberOfTrainableParams) # TODO: Get this directly as len(_self.networkLayers[l_i].params) 544 | 545 | return trainable_Params,numberTrain_ParamsLayer 546 | 547 | # -------------------------------------------------------------------- END Function ------------------------------------------------------------------- # 548 | 549 | def initTrainingParameters(self, 550 | costFunction, 551 | L1_reg_C, 552 | L2_reg_C, 553 | learning_rate, 554 | momentumType, 555 | momentumValue, 556 | momentumNormalized, 557 | optimizerType, 558 | rho_RMSProp, 559 | epsilon_RMSProp 560 | ) : 561 | 562 | print(" ------- Initializing network training parameters...........") 563 | self.numberOfEpochsTrained = 0 564 | 565 | self.L1_reg_C = L1_reg_C 566 | self.L2_reg_C = L2_reg_C 567 | 568 | # Set Learning rate and store the last epoch where it was modified 569 | self.initialLearningRate = learning_rate 570 | 571 | # TODO: Check the shared variables from learning rates 572 | self.learning_rate.set_value(self.initialLearningRate[0]) 573 | 574 | 575 | # Set momentum type and values 576 | self.momentumType = momentumType 577 | self.initialMomentumValue = momentumValue 578 | self.momentumNormalized = momentumNormalized 579 | self.momentum.set_value(self.initialMomentumValue) 580 | 581 | # Optimizers 582 | if (optimizerType == 2): 583 | optimizerType = 1 584 | 585 | def SGD(): 586 | print (" --- Optimizer: Stochastic gradient descent (SGD)") 587 | self.optimizerType = optimizerType 588 | 589 | def RMSProp(): 590 | print (" --- Optimizer: RMS Prop") 591 | self.optimizerType = optimizerType 592 | 
self.rho_RMSProp = rho_RMSProp 593 | self.epsilon_RMSProp = epsilon_RMSProp 594 | 595 | # TODO. Include more optimizers here 596 | optionsOptimizer = {0 : SGD, 597 | 1 : RMSProp} 598 | 599 | optionsOptimizer[optimizerType]() 600 | 601 | # -------------------------------------------------------------------- END Function ------------------------------------------------------------------- # 602 | 603 | def updateParams_BatchNorm(self) : 604 | updatesForBnRollingAverage = [] 605 | for l_i in xrange(0, len(self.networkLayers) ) : 606 | currentLayer = self.networkLayers[l_i] 607 | updatesForBnRollingAverage.extend( currentLayer.getUpdatesForBnRollingAverage() ) 608 | return updatesForBnRollingAverage 609 | 610 | # ------------------------------------------------------------------------------------ # 611 | # --------------------------- Compile the Theano functions ------------------- # 612 | # ------------------------------------------------------------------------------------ # 613 | def compileTheanoFunctions(self): 614 | print(" ----------------- Starting compilation process ----------------- ") 615 | 616 | # ------- Create and initialize sharedVariables needed to compile the training function ------ # 617 | # -------------------------------------------------------------------------------------------- # 618 | # For training 619 | self.trainingData_x = theano.shared(np.zeros([1,1,1,1,1], dtype="float32"), borrow = True) 620 | self.trainingData_y = theano.shared(np.zeros([1,1,1,1], dtype="float32") , borrow = True) 621 | 622 | self.trainingData_x_Bottom = theano.shared(np.zeros([1,1,1,1,1], dtype="float32"), borrow = True) 623 | 624 | # For testing 625 | self.testingData_x_Bottom = theano.shared(np.zeros([1,1,1,1,1], dtype="float32"), borrow = True) 626 | self.testingData_x = theano.shared(np.zeros([1,1,1,1,1], dtype="float32"), borrow = True) 627 | 628 | x_Train = self.inputNetwork_Train 629 | x_Train_Bottom = self.inputNetwork_Train_Bottom 630 | x_Test = self.inputNetwork_Test 631 | x_Test_Bottom = self.inputNetwork_Test_Bottom 632 | y_Train = T.itensor4('y') 633 | 634 | # Allocate symbolic variables for the data 635 | index_Train = T.lscalar() 636 | index_Test = T.lscalar() 637 | 638 | # ------- Needed to compile the training function ------ # 639 | # ------------------------------------------------------ # 640 | trainingData_y_CastedToInt = T.cast( self.trainingData_y, 'int32') 641 | 642 | # To accomodate the weights in the cost function to account for class imbalance 643 | weightsOfClassesInCostFunction = T.fvector() 644 | weightPerClass = T.fvector() 645 | 646 | # --------- Get trainable parameters (to be fit by gradient descent) ------- # 647 | # -------------------------------------------------------------------------- # 648 | 649 | [paramsTraining, numberParamsPerLayer] = self.getTrainable_Params() 650 | 651 | # ------------------ Define the cost function --------------------- # 652 | # ----------------------------------------------------------------- # 653 | def negLogLikelihood(): 654 | print (" --- Cost function: negativeLogLikelihood") 655 | 656 | costInLastLayer = self.lastLayer.negativeLogLikelihoodWeighted(y_Train,weightPerClass) 657 | return costInLastLayer 658 | 659 | def NotDefined(): 660 | print (" --- Cost function: Not defined!!!!!! 
WARNING!!!") 661 | 662 | optionsCostFunction = {0 : negLogLikelihood, 663 | 1 : NotDefined} 664 | 665 | costInLastLayer = optionsCostFunction[self.costFunction]() 666 | 667 | # --------------------------- Get costs --------------------------- # 668 | # ----------------------------------------------------------------- # 669 | # Get L1 and L2 weights regularization 670 | costL1 = 0 671 | costL2 = 0 672 | 673 | # Compute the costs 674 | for l_i in xrange(0, len(self.networkLayers)) : 675 | costL1 += abs(self.networkLayers[l_i].W).sum() 676 | costL2 += (self.networkLayers[l_i].W ** 2).sum() 677 | 678 | # Add also the cost of the last layer 679 | cost = (costInLastLayer 680 | + self.L1_reg_C * costL1 681 | + self.L2_reg_C * costL2) 682 | 683 | # --------------------- Include all trainable parameters in updates (for optimization) ---------------------- # 684 | # ----------------------------------------------------------------------------------------------------------- # 685 | updates = self.getUpdatesOfTrainableParameters(cost, paramsTraining, numberParamsPerLayer) 686 | 687 | # --------------------- Include batch normalization params ---------------------- # 688 | # ------------------------------------------------------------------------------- # 689 | updates = updates + self.updateParams_BatchNorm() 690 | 691 | # For the testing function we need to get the Feature maps activations 692 | featMapsActivations = [] 693 | lower_act = 0 694 | upper_act = 9999 695 | 696 | # TODO: Change to output_Test 697 | for l_i in xrange(0,len(self.networkLayers)): 698 | featMapsActivations.append(self.networkLayers[l_i].outputTest[:, lower_act : upper_act, :, :, :]) 699 | 700 | # For the last layer get the predicted probabilities (p_y_given_x_test) 701 | featMapsActivations.append(self.lastLayer.p_y_given_x_test) 702 | 703 | # --------------------- Preparing data to compile the functions ---------------------- # 704 | # ------------------------------------------------------------------------------------ # 705 | 706 | givensDataSet_Train = { x_Train: self.trainingData_x[index_Train * self.batch_Size: (index_Train + 1) * self.batch_Size], 707 | x_Train_Bottom: self.trainingData_x_Bottom[index_Train * self.batch_Size: (index_Train + 1) * self.batch_Size], 708 | y_Train: trainingData_y_CastedToInt[index_Train * self.batch_Size: (index_Train + 1) * self.batch_Size], 709 | weightPerClass: weightsOfClassesInCostFunction } 710 | 711 | 712 | givensDataSet_Test = { x_Test: self.testingData_x[index_Test * self.batch_Size: (index_Test + 1) * self.batch_Size], 713 | x_Test_Bottom: self.testingData_x_Bottom[index_Test * self.batch_Size: (index_Test + 1) * self.batch_Size] } 714 | 715 | print(" ...Compiling the training function...") 716 | 717 | self.networkModel_Train = theano.function( 718 | [index_Train, weightsOfClassesInCostFunction], 719 | #[cost] + self.lastLayer.doEvaluation(y_Train), 720 | [cost], 721 | updates=updates, 722 | givens = givensDataSet_Train 723 | ) 724 | 725 | print(" ...The training function was compiled...") 726 | 727 | #self.getProbabilities = theano.function( 728 | #[index], 729 | #self.lastLayer.p_y_given_x_Train, 730 | #givens={ 731 | #x: self.trainingData_x[index * _self.batch_size: (index + 1) * _self.batch_size] 732 | #} 733 | #) 734 | 735 | 736 | print(" ...Compiling the testing function...") 737 | self.networkModel_Test = theano.function( 738 | [index_Test], 739 | featMapsActivations, 740 | givens = givensDataSet_Test 741 | ) 742 | print(" ...The testing function was compiled...") 743 | # 
744 | 
745 | ####### Function to generate the CNN #########
746 | 
747 | def createNetwork(self,
748 | networkName,
749 | folderName,
750 | cnnLayers,
751 | kernel_Shapes,
752 | intermediate_ConnectedLayers,
753 | n_classes,
754 | sampleSize_Train,
755 | sampleSize_Test,
756 | batch_Size,
757 | applyBatchNorm,
758 | numberEpochToApplyBatchNorm,
759 | activationType,
760 | dropout_Rates,
761 | pooling_Params,
762 | weights_Initialization_CNN,
763 | weights_Initialization_FCN,
764 | weightsFolderName,
765 | weightsTrainedIdx,
766 | softmax_Temp
767 | ):
768 | 
769 | # ============= Model Parameters Passed as arguments ================
770 | # Assign parameters:
771 | self.networkName = networkName
772 | self.folderName = folderName
773 | self.cnnLayers = cnnLayers
774 | self.n_classes = n_classes
775 | self.kernel_Shapes = kernel_Shapes
776 | self.intermediate_ConnectedLayers = intermediate_ConnectedLayers
777 | self.pooling_scales = pooling_Params
778 | self.dropout_Rates = dropout_Rates
779 | self.activationType = activationType
780 | self.weight_Initialization_CNN = weights_Initialization_CNN
781 | self.weight_Initialization_FCN = weights_Initialization_FCN
782 | self.weightsFolderName = weightsFolderName
783 | self.weightsTrainedIdx = weightsTrainedIdx
784 | self.batch_Size = batch_Size
785 | self.sampleSize_Train = sampleSize_Train
786 | self.sampleSize_Test = sampleSize_Test
787 | self.applyBatchNorm = applyBatchNorm
788 | self.numberEpochToApplyBatchNorm = numberEpochToApplyBatchNorm
789 | self.softmax_Temp = softmax_Temp
790 | 
791 | # Compute the CNN receptive field
792 | stride = 1
793 | self.receptiveField = computeReceptiveField(self.kernel_Shapes, stride)
794 | 
795 | # --- Size of Image samples ---
796 | self.sampleSize_Train = sampleSize_Train
797 | self.sampleSize_Test = sampleSize_Test
798 | 
799 | ## --- Batch Size ---
800 | self.batch_Size = batch_Size
801 | 
802 | # ======== Calculated Attributes =========
803 | self.centralVoxelsTrain = getCentralVoxels(self.sampleSize_Train, self.receptiveField)
804 | self.centralVoxelsTest = getCentralVoxels(self.sampleSize_Test, self.receptiveField)
805 | 
806 | #==============================
807 | rng = numpy.random.RandomState(23455)
808 | 
809 | # Transfer to LIVIA NET
810 | self.sampleSize_Train = sampleSize_Train
811 | self.sampleSize_Test = sampleSize_Test
812 | 
813 | # --------- Now we build the model -------- #
814 | 
815 | print("...[STATUS]: Building the Network model...")
816 | 
817 | # Define the symbolic variables used as input of the CNN
818 | # start-snippet-1
819 | # Define tensor5
820 | tensor5 = T.TensorType(dtype='float32', broadcastable=(False, False, False, False, False))
821 | self.inputNetwork_Train = tensor5()
822 | self.inputNetwork_Test = tensor5()
823 | self.inputNetwork_Train_Bottom = tensor5()
824 | self.inputNetwork_Test_Bottom = tensor5()
825 | 
826 | # Define input shapes to the network
827 | inputSampleShape_Train = (self.batch_Size, 1, self.sampleSize_Train[0], self.sampleSize_Train[1], self.sampleSize_Train[2])
828 | inputSampleShape_Test = (self.batch_Size, 1, self.sampleSize_Test[0], self.sampleSize_Test[1], self.sampleSize_Test[2])
829 | 
830 | print (" - Shape of input subvolume (Training): {}".format(inputSampleShape_Train))
831 | print (" - Shape of input subvolume (Testing): {}".format(inputSampleShape_Test))
832 | 
833 | inputSample_Train = self.inputNetwork_Train
834 | inputSample_Test = self.inputNetwork_Test
835 | 
836 | inputSample_Train_Bottom = self.inputNetwork_Train_Bottom
837 | inputSample_Test_Bottom = self.inputNetwork_Test_Bottom
838 | 
839 | # TODO: rename cnnLayers to networkLayers
840 | self.generateNetworkLayers(cnnLayers,
841 | kernel_Shapes,
842 | self.pooling_scales,
843 | inputSampleShape_Train,
844 | inputSampleShape_Test,
845 | inputSample_Train,
846 | inputSample_Train_Bottom,
847 | inputSample_Test,
848 | inputSample_Test_Bottom,
849 | intermediate_ConnectedLayers)
850 | 
851 | # Release Data from GPU
852 | def releaseGPUData(self) :
853 | # GPU NOTE: Remove the input values to avoid copying data to the GPU
854 | 
855 | # Image Data (top and bottom paths)
856 | self.trainingData_x.set_value(np.zeros([1,1,1,1,1], dtype="float32"))
857 | self.trainingData_x_Bottom.set_value(np.zeros([1,1,1,1,1], dtype="float32"))
858 | self.testingData_x.set_value(np.zeros([1,1,1,1,1], dtype="float32"))
859 | self.testingData_x_Bottom.set_value(np.zeros([1,1,1,1,1], dtype="float32"))
860 | 
861 | # Labels
862 | self.trainingData_y.set_value(np.zeros([1,1,1,1], dtype="float32"))
863 | 
864 | 
--------------------------------------------------------------------------------
/src/LiviaNet/LiviaSoftmax.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (c) 2016, Jose Dolz .All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without modification,
5 | are permitted provided that the following conditions are met:
6 | 
7 | 1. Redistributions of source code must retain the above copyright notice,
8 | this list of conditions and the following disclaimer.
9 | 2. Redistributions in binary form must reproduce the above copyright notice,
10 | this list of conditions and the following disclaimer in the documentation
11 | and/or other materials provided with the distribution.
12 | 
13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
20 | OTHER DEALINGS IN THE SOFTWARE.
21 | 
22 | Jose Dolz. Dec, 2016.
23 | email: jose.dolz.upv@gmail.com
24 | LIVIA Department, ETS, Montreal. 
25 | """ 26 | 27 | import numpy as np 28 | import theano 29 | import theano.tensor as T 30 | from theano.tensor.nnet import conv2d 31 | import theano.tensor.nnet.conv3d2d 32 | 33 | from LiviaNet3DConvLayer import LiviaNet3DConvLayer 34 | from Modules.General.Utils import initializeWeights 35 | from Modules.NeuralNetwork.ActivationFunctions import * 36 | from Modules.NeuralNetwork.layerOperations import * 37 | 38 | class LiviaSoftmax(LiviaNet3DConvLayer): 39 | """ Final Classification layer with Softmax """ 40 | 41 | def __init__(self, 42 | rng, 43 | layerID, 44 | inputSample_Train, 45 | inputSample_Test, 46 | inputToLayerShapeTrain, 47 | inputToLayerShapeTest, 48 | filterShape, 49 | applyBatchNorm, 50 | applyBatchNormNumberEpochs, 51 | maxPoolingParameters, 52 | weights_initialization, 53 | weights, 54 | activationType=0, 55 | dropoutRate=0.0, 56 | softmaxTemperature = 1.0) : 57 | 58 | LiviaNet3DConvLayer.__init__(self, 59 | rng, 60 | layerID, 61 | inputSample_Train, 62 | inputSample_Test, 63 | inputToLayerShapeTrain, 64 | inputToLayerShapeTest, 65 | filterShape, 66 | applyBatchNorm, 67 | applyBatchNormNumberEpochs, 68 | maxPoolingParameters, 69 | weights_initialization, 70 | weights, 71 | activationType, 72 | dropoutRate) 73 | 74 | self._numberOfOutputClasses = None 75 | self._bClassLayer = None 76 | self._softmaxTemperature = None 77 | 78 | self._numberOfOutputClasses = filterShape[0] 79 | self._softmaxTemperature = softmaxTemperature 80 | 81 | # Define outputs 82 | outputOfConvTrain = self.outputTrain 83 | outputOfConvTest = self.outputTest 84 | 85 | # define outputs shapes 86 | outputOfConvShapeTrain = self.outputShapeTrain 87 | outputOfConvShapeTest = self.outputShapeTest 88 | 89 | 90 | # Add bias before applying the softmax 91 | b_values = np.zeros( (self._numberOfFeatureMaps), dtype = 'float32') 92 | self._bClassLayer = theano.shared(value=b_values, borrow=True) 93 | 94 | inputToSoftmaxTrain = applyBiasToFeatureMaps( self._bClassLayer, outputOfConvTrain ) 95 | inputToSoftmaxTest = applyBiasToFeatureMaps( self._bClassLayer, outputOfConvTest ) 96 | 97 | self.params = self.params + [self._bClassLayer] 98 | 99 | # ============ Apply Softmax ============== 100 | # Training samples 101 | ( self.p_y_given_x_train, self.y_pred_train ) = applySoftMax(inputToSoftmaxTrain, 102 | outputOfConvShapeTrain, 103 | self._numberOfOutputClasses, 104 | softmaxTemperature) 105 | 106 | # Testing samples 107 | ( self.p_y_given_x_test, self.y_pred_test ) = applySoftMax(inputToSoftmaxTest, 108 | outputOfConvShapeTest, 109 | self._numberOfOutputClasses, 110 | softmaxTemperature) 111 | 112 | 113 | def negativeLogLikelihoodWeighted(self, y, weightPerClass): 114 | #Weighting the cost of the different classes in the cost-function, in order to counter class imbalance. 
115 | e1 = np.finfo(np.float32).tiny 116 | addTinyProbMatrix = T.lt(self.p_y_given_x_train, 4*e1) * e1 117 | 118 | weights = weightPerClass.dimshuffle('x', 0, 'x', 'x', 'x') 119 | log_p_y_given_x_train = T.log(self.p_y_given_x_train + addTinyProbMatrix) 120 | weighted_log_probs = log_p_y_given_x_train * weights 121 | 122 | wShape = weighted_log_probs.shape 123 | 124 | # Re-arrange 125 | idx0 = T.arange( wShape[0] ).dimshuffle( 0, 'x','x','x') 126 | idx2 = T.arange( wShape[2] ).dimshuffle('x', 0, 'x','x') 127 | idx3 = T.arange( wShape[3] ).dimshuffle('x','x', 0, 'x') 128 | idx4 = T.arange( wShape[4] ).dimshuffle('x','x','x', 0) 129 | 130 | return -T.mean( weighted_log_probs[ idx0, y, idx2, idx3, idx4] ) 131 | 132 | 133 | def predictionProbabilities(self) : 134 | return self.p_y_given_x_test 135 | -------------------------------------------------------------------------------- /src/LiviaNet/LiviaSoftmax.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/LiviaSoftmax.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/General/Evaluation.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | import pdb 28 | import numpy as np 29 | 30 | # ----- Dice Score ----- 31 | def computeDice(autoSeg, groundTruth): 32 | """ Returns 33 | ------- 34 | DiceArray : floats array 35 | 36 | Dice coefficient as a float on range [0,1]. 
37 | Maximum similarity = 1 38 | No similarity = 0 """ 39 | 40 | n_classes = int( np.max(groundTruth) + 1) 41 | 42 | DiceArray = [] 43 | 44 | 45 | for c_i in xrange(1,n_classes): 46 | idx_Auto = np.where(autoSeg.flatten() == c_i)[0] 47 | idx_GT = np.where(groundTruth.flatten() == c_i)[0] 48 | 49 | autoArray = np.zeros(autoSeg.size,dtype=np.bool) 50 | autoArray[idx_Auto] = 1 51 | 52 | gtArray = np.zeros(autoSeg.size,dtype=np.bool) 53 | gtArray[idx_GT] = 1 54 | 55 | dsc = dice(autoArray, gtArray) 56 | 57 | #dice = np.sum(autoSeg[groundTruth==c_i])*2.0 / (np.sum(autoSeg) + np.sum(groundTruth)) 58 | DiceArray.append(dsc) 59 | 60 | return DiceArray 61 | 62 | 63 | def dice(im1, im2): 64 | """ 65 | Computes the Dice coefficient 66 | ---------- 67 | im1 : boolean array 68 | im2 : boolean array 69 | 70 | If they are not boolean, they will be converted. 71 | 72 | ------- 73 | It returns the Dice coefficient as a float on the range [0,1]. 74 | 1: Perfect overlapping 75 | 0: Not overlapping 76 | """ 77 | im1 = np.asarray(im1).astype(np.bool) 78 | im2 = np.asarray(im2).astype(np.bool) 79 | 80 | if im1.size != im2.size: 81 | raise ValueError("Size mismatch between input arrays!!!") 82 | 83 | im_sum = im1.sum() + im2.sum() 84 | if im_sum == 0: 85 | return 1.0 86 | 87 | # Compute Dice 88 | intersection = np.logical_and(im1, im2) 89 | 90 | return 2. * intersection.sum() / im_sum 91 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/General/Evaluation.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/General/Evaluation.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/General/Utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 
25 | """ 26 | 27 | import pdb 28 | import os 29 | import numpy as np 30 | import theano 31 | import theano.tensor as T 32 | import gzip 33 | import cPickle 34 | import sys 35 | from os.path import isfile, join 36 | 37 | 38 | # https://github.com/Theano/Theano/issues/689 39 | sys.setrecursionlimit(50000) 40 | 41 | # To set a learning rate at each layer 42 | def extendLearningRateToParams(numberParamsPerLayer,learning_rate): 43 | if not isinstance(learning_rate, list): 44 | learnRates = np.ones(sum(numberParamsPerLayer), dtype = "float32") * learning_rate 45 | else: 46 | print("") 47 | learnRates = [] 48 | for p_i in range(len(numberParamsPerLayer)) : 49 | for lr_i in range(numberParamsPerLayer[p_i]) : 50 | learnRates.append(learning_rate[p_i]) 51 | return learnRates 52 | # TODO: Check that length of learning rate (in config ini) actually corresponds to length of layers (CNNs + FCs + SoftMax) 53 | 54 | 55 | def computeReceptiveField(kernelsCNN, stride) : 56 | # To-do. Verify receptive field with stride size other than 1 57 | if len(kernelsCNN) == 0: 58 | return 0 59 | 60 | # Check number of ConvLayers 61 | numberCNNLayers = [] 62 | 63 | for l_i in range(1,len(kernelsCNN)): 64 | if len(kernelsCNN[l_i]) == 3: 65 | numberCNNLayers = l_i + 1 66 | 67 | kernelDim = len(kernelsCNN[0]) 68 | receptiveField = [stride]*kernelDim 69 | 70 | for d_i in xrange(kernelDim) : 71 | for l_i in xrange(numberCNNLayers) : 72 | receptiveField[d_i] += kernelsCNN[l_i][d_i] - 1 73 | 74 | return receptiveField 75 | 76 | 77 | 78 | ########################################################### 79 | ######## Create bias and include them on feat maps ######## 80 | ########################################################### 81 | 82 | # TODO. Remove number of FeatMaps 83 | def addBiasParametersOnFeatureMaps( bias, featMaps, numberOfFeatMaps ) : 84 | output = featMaps + bias.dimshuffle('x', 0, 'x', 'x', 'x') 85 | return (output) 86 | 87 | ########################################################### 88 | ######## Initialize CNN weights ######## 89 | ########################################################### 90 | def initializeWeights(filter_shape, initializationMethodType, weights) : 91 | # filter_shape:[#FMs in this layer, #FMs in input, KernelDim_0, KernelDim_1, KernelDim_2] 92 | def Classic(): 93 | print " --- Weights initialization type: Classic " 94 | rng = np.random.RandomState(24575) 95 | stdForInitialization = 0.01 96 | W = theano.shared( 97 | np.asarray( 98 | rng.normal(loc=0.0, scale=stdForInitialization, size=(filter_shape[0],filter_shape[1],filter_shape[2],filter_shape[3],filter_shape[4])), 99 | dtype='float32'#theano.config.floatX 100 | ), 101 | borrow=True 102 | ) 103 | return W 104 | 105 | def Delving(): 106 | # https://arxiv.org/pdf/1502.01852.pdf 107 | print " --- Weights initialization type: Delving " 108 | rng = np.random.RandomState(24575) 109 | stdForInitialization = np.sqrt( 2.0 / (filter_shape[1] * filter_shape[2] * filter_shape[3] * filter_shape[4]) ) #Delving Into rectifiers suggestion. 110 | W = theano.shared( 111 | np.asarray( 112 | rng.normal(loc=0.0, scale=stdForInitialization, size=(filter_shape[0],filter_shape[1],filter_shape[2],filter_shape[3],filter_shape[4])), 113 | dtype='float32'#theano.config.floatX 114 | ), 115 | borrow=True 116 | ) 117 | return W 118 | 119 | # TODO: Add checks so that weights and kernel have the same shape 120 | def Load(): 121 | print " --- Weights initialization type: Transfer learning... 
" 122 | W = theano.shared( 123 | np.asarray( 124 | weights, 125 | dtype=theano.config.floatX 126 | ), 127 | borrow=True 128 | ) 129 | return W 130 | 131 | optionsInitWeightsType = {0 : Classic, 132 | 1 : Delving, 133 | 2 : Load} 134 | 135 | W = optionsInitWeightsType[initializationMethodType]() 136 | 137 | return W 138 | 139 | 140 | def getCentralVoxels(sampleSize, receptiveField) : 141 | centralVoxels = [] 142 | for d_i in xrange(0, len(sampleSize)) : 143 | centralVoxels.append(sampleSize[d_i] - receptiveField[d_i] + 1) 144 | return centralVoxels 145 | 146 | 147 | 148 | def extractCenterFeatMaps(featMaps, featMaps_shape, centralVoxels) : 149 | 150 | centerValues = [] 151 | minValues = [] 152 | maxValues = [] 153 | 154 | for i in xrange(3) : 155 | C_v = (featMaps_shape[i + 2] - 1) / 2 156 | min_v = C_v - (centralVoxels[i]-1)/2 157 | max_v = min_v + centralVoxels[i] 158 | centerValues.append(C_v) 159 | minValues.append(min_v) 160 | maxValues.append(max_v) 161 | 162 | return featMaps[:, 163 | :, 164 | minValues[0] : maxValues[0], 165 | minValues[1] : maxValues[1], 166 | minValues[2] : maxValues[2]] 167 | 168 | 169 | ########################################### 170 | ############# Save/Load models ############ 171 | ########################################### 172 | 173 | def load_model_from_gzip_file(modelFileName) : 174 | f = gzip.open(modelFileName, 'rb') 175 | model_obj = cPickle.load(f) 176 | f.close() 177 | return model_obj 178 | 179 | def dump_model_to_gzip_file(model, modelFileName) : 180 | # First release GPU memory 181 | model.releaseGPUData() 182 | 183 | f = gzip.open(modelFileName, 'wb') 184 | cPickle.dump(model, f, protocol=cPickle.HIGHEST_PROTOCOL) 185 | f.close() 186 | 187 | return modelFileName 188 | 189 | def makeFolder(folderName, display_Str) : 190 | if not os.path.exists(folderName) : 191 | os.makedirs(folderName) 192 | 193 | strToPrint = "..Folder " + display_Str + " created..." 
194 | print strToPrint
195 | 
196 | 
197 | from os import listdir
198 | 
199 | 
200 | """ Get a set of images from a folder given an array of indexes """
201 | def getImagesSet(imagesFolder, imageIndexes) :
202 | imageNamesToGetWithFullPath = []
203 | imageNamesToGet = []
204 | 
205 | if os.path.exists(imagesFolder):
206 | imageNames = [f for f in os.listdir(imagesFolder) if isfile(join(imagesFolder, f))]
207 | imageNames.sort()
208 | 
209 | # Remove macOS metadata files (if any)
210 | if '.DS_Store' in imageNames: imageNames.remove('.DS_Store')
211 | 
212 | imageNamesToGetWithFullPath = []
213 | imageNamesToGet = []
214 | 
215 | if ( len(imageNames) > 0):
216 | imageNamesToGetWithFullPath = [join(imagesFolder,imageNames[imageIndexes[i]]) for i in range(0,len(imageIndexes))]
217 | imageNamesToGet = [imageNames[imageIndexes[i]] for i in range(0,len(imageIndexes))]
218 | 
219 | return (imageNamesToGetWithFullPath,imageNamesToGet)
220 | 
221 | 
222 | 
223 | """ Get a set of weights from a folder given an array of indexes """
224 | def getWeightsSet(weightsFolder, weightsIndexes) :
225 | weightNames = [f for f in os.listdir(weightsFolder) if isfile(join(weightsFolder, f))]
226 | weightNames.sort()
227 | 
228 | # Remove macOS metadata files (if any)
229 | if '.DS_Store' in weightNames: weightNames.remove('.DS_Store')
230 | 
231 | weightNamesToGetWithFullPath = [join(weightsFolder,weightNames[weightsIndexes[i]]) for i in range(0,len(weightsIndexes))]
232 | 
233 | return (weightNamesToGetWithFullPath)
234 | 
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/General/Utils.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/General/Utils.pyc
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/General/__init__.py:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/General/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/General/__init__.pyc
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/General/read.txt:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/IO/ImgOperations/__init__.py:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/IO/ImgOperations/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/IO/ImgOperations/__init__.pyc
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/IO/ImgOperations/imgOp.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 
3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | import numpy as np 28 | import numpy.lib as lib 29 | import pdb 30 | import math 31 | import random 32 | 33 | # Get bounding box of a numpy array 34 | def getBoundingBox(img): 35 | 36 | row = np.any(img, axis=(1, 2)) 37 | col = np.any(img, axis=(0, 2)) 38 | z = np.any(img, axis=(0, 1)) 39 | 40 | rmin, rmax = np.where(row)[0][[0, -1]] 41 | cmin, cmax = np.where(col)[0][[0, -1]] 42 | zmin, zmax = np.where(z)[0][[0, -1]] 43 | 44 | return (rmin, rmax, cmin, cmax, zmin, zmax) 45 | 46 | 47 | # ---------------- Padding ------------------- # 48 | def applyPadding(inputImg, sampleSizes, receptiveField) : 49 | receptiveField_arr = np.asarray(receptiveField, dtype="int16") 50 | inputImg_arr = np.asarray(inputImg.shape,dtype="int16") 51 | 52 | receptiveField = np.array(receptiveField, dtype="int16") 53 | 54 | left_padding = (receptiveField - 1) / 2 55 | right_padding = receptiveField - 1 - left_padding 56 | 57 | extra_padding = np.maximum(0, np.asarray(sampleSizes,dtype="int16")-(inputImg_arr+left_padding+right_padding)) 58 | right_padding += extra_padding 59 | 60 | paddingValues = ( (left_padding[0],right_padding[0]), 61 | (left_padding[1],right_padding[1]), 62 | (left_padding[2],right_padding[2])) 63 | 64 | paddedImage = lib.pad(inputImg, paddingValues, mode='reflect' ) 65 | return [paddedImage, paddingValues] 66 | 67 | # ----- Apply unpadding --------- 68 | def applyUnpadding(inputImg, paddingValues) : 69 | unpaddedImg = inputImg[paddingValues[0][0]:, paddingValues[1][0]:, paddingValues[2][0]:] 70 | 71 | if paddingValues[0][1] > 0: 72 | unpaddedImg = unpaddedImg[:-paddingValues[0][1],:,:] 73 | 74 | if paddingValues[1][1] > 0: 75 | unpaddedImg = unpaddedImg[:,:-paddingValues[1][1],:] 76 | 77 | if paddingValues[2][1] > 0: 78 | unpaddedImg = unpaddedImg[:,:,:-paddingValues[2][1]] 79 | 80 | return unpaddedImg 81 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/IO/ImgOperations/imgOp.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/IO/ImgOperations/imgOp.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/IO/__init__.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/IO/__init__.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/IO/loadData.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | import numpy as np 28 | import pdb 29 | # If you are not using nifti files you can comment this line 30 | import nibabel as nib 31 | import scipy.io as sio 32 | 33 | from ImgOperations.imgOp import applyPadding 34 | 35 | # ----- Loader for nifti files ------ # 36 | def load_nii (imageFileName, printFileNames) : 37 | if printFileNames == True: 38 | print (" ... Loading file: {}".format(imageFileName)) 39 | 40 | img_proxy = nib.load(imageFileName) 41 | imageData = img_proxy.get_data() 42 | 43 | return (imageData,img_proxy) 44 | 45 | def release_nii_proxy(img_proxy) : 46 | img_proxy.uncache() 47 | 48 | 49 | # ----- Loader for matlab format ------- # 50 | # Very important: All the volumes should have been saved as 'vol'. 51 | # Otherwise, change its name here 52 | def load_matlab (imageFileName, printFileNames) : 53 | if printFileNames == True: 54 | print (" ... Loading file: {}".format(imageFileName)) 55 | 56 | mat_contents = sio.loadmat(imageFileName) 57 | imageData = mat_contents['vol'] 58 | 59 | return (imageData) 60 | 61 | """ It loads the images (CT/MRI + Ground Truth + ROI) for the patient image Idx""" 62 | def load_imagesSinglePatient(imageIdx, 63 | imageNames, 64 | imageNames_Bottom, 65 | groundTruthNames, 66 | roiNames, 67 | applyPaddingBool, 68 | receptiveField, 69 | sampleSizes, 70 | imageType 71 | ): 72 | 73 | if imageIdx >= len(imageNames) : 74 | print (" ERROR!!!!! : The image index specified is greater than images array size....)") 75 | exit(1) 76 | 77 | # --- Load image data (CT/MRI/...) 
---
78 | printFileNames = False # Get this from config.ini
79 | 
80 | imageFileName = imageNames[imageIdx]
81 | 
82 | if imageType == 0:
83 | [imageData,img_proxy] = load_nii(imageFileName, printFileNames)
84 | else:
85 | imageData = load_matlab(imageFileName, printFileNames)
86 | 
87 | if applyPaddingBool == True :
88 | [imageData, paddingValues] = applyPadding(imageData, sampleSizes, receptiveField)
89 | else:
90 | paddingValues = ((0,0),(0,0),(0,0))
91 | 
92 | 
93 | if len(imageData.shape) > 3 :
94 | imageData = imageData[:,:,:,0]
95 | 
96 | if imageType == 0:
97 | release_nii_proxy(img_proxy)
98 | 
99 | # --- Load image data for bottom path (CT/MRI/...) ---
100 | printFileNames = False # Get this from config.ini
101 | 
102 | imageFileName = imageNames_Bottom[imageIdx]
103 | 
104 | if imageType == 0:
105 | [imageData_Bottom,img_proxy] = load_nii(imageFileName, printFileNames)
106 | else:
107 | imageData_Bottom = load_matlab(imageFileName, printFileNames)
108 | 
109 | if applyPaddingBool == True :
110 | [imageData_Bottom, paddingValues] = applyPadding(imageData_Bottom, sampleSizes, receptiveField)
111 | else:
112 | paddingValues = ((0,0),(0,0),(0,0))
113 | 
114 | 
115 | if len(imageData_Bottom.shape) > 3 :
116 | imageData_Bottom = imageData_Bottom[:,:,:,0]
117 | 
118 | if imageType == 0:
119 | release_nii_proxy(img_proxy)
120 | 
121 | # --- Load ground truth (i.e. labels) ---
122 | if len(groundTruthNames) > 0 :
123 | GTFileName = groundTruthNames[imageIdx]
124 | 
125 | if imageType == 0:
126 | [gtLabelsData, gt_proxy] = load_nii (GTFileName, printFileNames)
127 | else:
128 | gtLabelsData = load_matlab(GTFileName, printFileNames)
129 | 
130 | # Convert ground truth to int type
131 | if np.issubdtype( gtLabelsData.dtype, np.int ) :
132 | gtLabelsData = gtLabelsData
133 | else:
134 | gtLabelsData = np.rint(gtLabelsData).astype("int32")
135 | 
136 | imageGtLabels = gtLabelsData
137 | 
138 | if imageType == 0:
139 | # Release data
140 | release_nii_proxy(gt_proxy)
141 | 
142 | if applyPaddingBool == True :
143 | [imageGtLabels, paddingValues] = applyPadding(imageGtLabels, sampleSizes, receptiveField)
144 | 
145 | else :
146 | imageGtLabels = np.empty(0)
147 | 
148 | # --- Load roi ---
149 | if len(roiNames)> 0 :
150 | roiFileName = roiNames[imageIdx]
151 | 
152 | if imageType == 0:
153 | [roiMaskData, roi_proxy] = load_nii (roiFileName, printFileNames)
154 | else:
155 | roiMaskData = load_matlab(roiFileName, printFileNames)
156 | 
157 | roiMask = roiMaskData
158 | 
159 | if imageType == 0:
160 | # Release data
161 | release_nii_proxy(roi_proxy)
162 | 
163 | if applyPaddingBool == True :
164 | [roiMask, paddingValues] = applyPadding(roiMask, sampleSizes, receptiveField)
165 | else :
166 | roiMask = np.ones(imageGtLabels.shape)
167 | 
168 | return [imageData, imageData_Bottom, imageGtLabels, roiMask, paddingValues]
169 | 
170 | 
171 | # -------------------------------------------------------- #
172 | def getRandIndexes(total, maxNumberIdx) :
173 | # Generate a shuffled index vector of "total" elements and keep the first maxNumberIdx
174 | idxs = range(total)
175 | np.random.shuffle(idxs)
176 | rand_idxs = idxs[0:maxNumberIdx]
177 | return rand_idxs
178 | 
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/IO/loadData.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/IO/loadData.pyc
--------------------------------------------------------------------------------
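A minimal sketch of driving the loader above (hypothetical paths and illustrative sizes; the actual lists are presumably built from the .ini configuration files by the training/segmentation scripts):

imageNames = ['./data/T1/subject01.nii']
imageNames_Bottom = ['./data/T2/subject01.nii']
groundTruthNames = ['./data/GT/subject01.nii']
receptiveField = [17, 17, 17]  # illustrative; normally computed by computeReceptiveField(...)
sampleSizes = [27, 27, 27]

[img, img_Bottom, gtLabels, roiMask, padding] = load_imagesSinglePatient(
    0,                  # patient index
    imageNames,
    imageNames_Bottom,
    groundTruthNames,
    [],                 # no ROI files: a ones-mask is generated instead
    True,               # apply reflective padding
    receptiveField,
    sampleSizes,
    0)                  # imageType: 0 = NIfTI, 1 = Matlab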
-------------------------------------------------------------------------------- /src/LiviaNet/Modules/IO/sampling.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | from loadData import load_imagesSinglePatient 28 | from loadData import getRandIndexes 29 | import numpy as np 30 | import math 31 | import random 32 | import pdb 33 | 34 | # ********************************** For Training ********************************* 35 | """ This function gets all the samples needed for training a sub-epoch """ 36 | def getSamplesSubepoch(numSamples, 37 | imageNames, 38 | imageNames_Bottom, 39 | groundTruthNames, 40 | roiNames, 41 | imageType, 42 | sampleSizes, 43 | receptiveField, 44 | applyPadding 45 | ): 46 | print (" ... Get samples for subEpoch...") 47 | 48 | numSubjects_Epoch = len(imageNames) 49 | randIdx = getRandIndexes(numSubjects_Epoch, numSubjects_Epoch) 50 | 51 | samplesPerSubject = numSamples/len(randIdx) 52 | print (" ... getting {} samples per subject...".format(samplesPerSubject)) 53 | 54 | imagesSamplesAll = [] 55 | imagesSamplesAll_Bottom = [] 56 | gt_samplesAll = [] 57 | 58 | numSubjectsSubEpoch = len(randIdx) 59 | 60 | samplingDistribution = getSamplesDistribution(numSamples, numSubjectsSubEpoch) 61 | 62 | for i_d in xrange(0, numSubjectsSubEpoch) : 63 | # For displaying purposes 64 | perc = 100 * float(i_d+1)/numSubjectsSubEpoch 65 | print("...Processing subject: {}. 
{} % of the whole training set...".format(str(i_d + 1),perc)) 66 | 67 | # -- Load images for a given patient -- 68 | [imgSubject, 69 | imgSubject_Bottom, 70 | gtLabelsImage, 71 | roiMask, 72 | paddingValues] = load_imagesSinglePatient(randIdx[i_d], 73 | imageNames, 74 | imageNames_Bottom, 75 | groundTruthNames, 76 | roiNames, 77 | applyPadding, 78 | receptiveField, 79 | sampleSizes, 80 | imageType 81 | ) 82 | 83 | 84 | # -- Get samples for that patient 85 | [imagesSamplesSinglePatient, 86 | imagesSamplesSinglePatient_Bottom, 87 | gtSamplesSinglePatient] = getSamplesSubject(i_d, 88 | imgSubject, 89 | imgSubject_Bottom, 90 | gtLabelsImage, 91 | roiMask, 92 | samplingDistribution, 93 | sampleSizes, 94 | receptiveField 95 | ) 96 | 97 | imagesSamplesAll = imagesSamplesAll + imagesSamplesSinglePatient 98 | imagesSamplesAll_Bottom = imagesSamplesAll_Bottom + imagesSamplesSinglePatient_Bottom 99 | gt_samplesAll = gt_samplesAll + gtSamplesSinglePatient 100 | 101 | # -- Permute the training samples so that in each batch both background and objects of interest are taken 102 | TrainingData = zip(imagesSamplesAll, imagesSamplesAll_Bottom, gt_samplesAll) 103 | random.shuffle(TrainingData) 104 | rnd_imagesSamples = [] 105 | rnd_imagesSamples_Bottom = [] 106 | rnd_gtSamples = [] 107 | rnd_imagesSamples[:], rnd_imagesSamples_Bottom[:], rnd_gtSamples[:] = zip(*TrainingData) 108 | 109 | del imagesSamplesAll[:] 110 | del imagesSamplesAll_Bottom[:] 111 | del gt_samplesAll[:] 112 | 113 | return rnd_imagesSamples, rnd_imagesSamples_Bottom, rnd_gtSamples 114 | 115 | 116 | 117 | def getSamplesSubject(imageIdx, 118 | imgSubject, 119 | imgSubject_Bottom, 120 | gtLabelsImage, 121 | roiMask, 122 | samplingDistribution, 123 | sampleSizes, 124 | receptiveField 125 | ): 126 | sampleSizes = sampleSizes 127 | imageSamplesSingleImage = [] 128 | imageSamplesSingleImage_Bottom = [] 129 | gt_samplesSingleImage = [] 130 | 131 | imgDim = imgSubject.shape 132 | 133 | # Get weight maps for sampling 134 | weightMaps = getSamplingWeights(gtLabelsImage, roiMask) 135 | 136 | 137 | # We are extracting segments for foreground and background 138 | for c_i in xrange(2) : 139 | numOfSamplesToGet = samplingDistribution[c_i][imageIdx] 140 | weightMap = weightMaps[c_i] 141 | # Define variables to be used 142 | roiToApply = np.zeros(weightMap.shape, dtype="int32") 143 | halfSampleDim = np.zeros( (len(sampleSizes), 2) , dtype='int32') 144 | 145 | 146 | # Get the size of half patch (i.e sample) 147 | for i in xrange( len(sampleSizes) ) : 148 | if sampleSizes[i]%2 == 0: #even 149 | dimensionDividedByTwo = sampleSizes[i]/2 150 | halfSampleDim[i] = [dimensionDividedByTwo - 1, dimensionDividedByTwo] 151 | else: #odd 152 | dimensionDividedByTwoFloor = math.floor(sampleSizes[i]/2) 153 | halfSampleDim[i] = [dimensionDividedByTwoFloor, dimensionDividedByTwoFloor] 154 | 155 | # --- Set to 1 those voxels in which we are interested in 156 | # - Define the limits 157 | roiMinx = halfSampleDim[0][0] 158 | roiMaxx = imgDim[0] - halfSampleDim[0][1] 159 | roiMiny = halfSampleDim[1][0] 160 | roiMaxy = imgDim[1] - halfSampleDim[1][1] 161 | roiMinz = halfSampleDim[2][0] 162 | roiMaxz = imgDim[2] - halfSampleDim[2][1] 163 | 164 | # Set 165 | roiToApply[roiMinx:roiMaxx,roiMiny:roiMaxy,roiMinz:roiMaxz] = 1 166 | 167 | maskCoords = weightMap * roiToApply 168 | 169 | # We do the following because np.random.choice 4th parameter needs the probabilities to sum 1 170 | maskCoords = maskCoords / (1.0* np.sum(maskCoords)) 171 | 172 | maskCoordsFlattened = 
maskCoords.flatten() 173 | 174 | centralVoxelsIndexes = np.random.choice(maskCoords.size, 175 | size = numOfSamplesToGet, 176 | replace=True, 177 | p=maskCoordsFlattened) 178 | 179 | centralVoxelsCoord = np.asarray(np.unravel_index(centralVoxelsIndexes, maskCoords.shape)) 180 | #print(" centralVoxelsCoord: {}".format(centralVoxelsCoord)) 181 | coordsToSampleArray = np.zeros(list(centralVoxelsCoord.shape) + [2], dtype="int32") 182 | coordsToSampleArray[:,:,0] = centralVoxelsCoord - halfSampleDim[ :, np.newaxis, 0 ] #np.newaxis broadcasts. To broadcast the -+. 183 | coordsToSampleArray[:,:,1] = centralVoxelsCoord + halfSampleDim[ :, np.newaxis, 1 ] 184 | 185 | # ----- Compute the coordinates that will be used to extract the samples ---- # 186 | numSamples = len(coordsToSampleArray[0]) 187 | 188 | # Extract samples from computed coordinates 189 | for s_i in xrange(numSamples) : 190 | 191 | # Get one sample given a coordinate 192 | coordsToSample = coordsToSampleArray[:,s_i,:] 193 | 194 | sampleSizes = sampleSizes 195 | imageSample = np.zeros((1, sampleSizes[0],sampleSizes[1],sampleSizes[2]), dtype = 'float32') 196 | imageSample_Bottom = np.zeros((1, sampleSizes[0],sampleSizes[1],sampleSizes[2]), dtype = 'float32') 197 | 198 | xMin = coordsToSample[0][0] 199 | xMax = coordsToSample[0][1] + 1 200 | yMin = coordsToSample[1][0] 201 | yMax = coordsToSample[1][1] + 1 202 | zMin = coordsToSample[2][0] 203 | zMax = coordsToSample[2][1] + 1 204 | 205 | 206 | #print(" Index: -> MinX: {}, MaxX: {}, MinY: {}, MaxY: {}, MinZ: {}, MaxZ: {}".format(s_i,xMin,xMax,yMin,yMax,zMin,zMax)) 207 | imageSample[:1] = imgSubject[ xMin:xMax,yMin:yMax,zMin:zMax] 208 | imageSample_Bottom[:1] = imgSubject_Bottom[ xMin:xMax,yMin:yMax,zMin:zMax] 209 | sample_gt_Orig = gtLabelsImage[xMin:xMax,yMin:yMax,zMin:zMax] 210 | 211 | roiLabelMin = np.zeros(3, dtype = "int8") 212 | roiLabelMax = np.zeros(3, dtype = "int8") 213 | 214 | for i_x in range(len(receptiveField)) : 215 | roiLabelMin[i_x] = (receptiveField[i_x] - 1)/2 216 | roiLabelMax[i_x] = sampleSizes[i_x] - roiLabelMin[i_x] 217 | 218 | gt_sample = sample_gt_Orig[roiLabelMin[0] : roiLabelMax[0], 219 | roiLabelMin[1] : roiLabelMax[1], 220 | roiLabelMin[2] : roiLabelMax[2]] 221 | 222 | imageSamplesSingleImage.append(imageSample) 223 | imageSamplesSingleImage_Bottom.append(imageSample_Bottom) 224 | gt_samplesSingleImage.append(gt_sample) 225 | 226 | return imageSamplesSingleImage,imageSamplesSingleImage_Bottom,gt_samplesSingleImage 227 | 228 | 229 | 230 | def getSamplingWeights(gtLabelsImage, 231 | roiMask 232 | ) : 233 | 234 | foreMask = (gtLabelsImage>0).astype(int) 235 | backMask = (roiMask>0) * (foreMask==0) 236 | weightMaps = [ foreMask, backMask ] 237 | 238 | return weightMaps 239 | 240 | 241 | 242 | def getSamplesDistribution( numSamples, 243 | numImagesToSample ) : 244 | # We have to sample foreground and background 245 | # Assuming that we extract the same number of samples per category: 50% each 246 | 247 | samplesPercentage = np.ones( 2, dtype="float32" ) * 0.5 248 | samplesPerClass = np.zeros( 2, dtype="int32" ) 249 | samplesDistribution = np.zeros( [ 2, numImagesToSample ] , dtype="int32" ) 250 | 251 | samplesAssigned = 0 252 | 253 | for c_i in xrange(2) : 254 | samplesAssignedClass = int(numSamples*samplesPercentage[c_i]) 255 | samplesPerClass[c_i] += samplesAssignedClass 256 | samplesAssigned += samplesAssignedClass 257 | 258 | # Assign the samples that were not assigned due to the rounding error of integer division. 
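# Example (illustrative): numSamples = 1005 with samplesPercentage = [0.5, 0.5]
# assigns int(1005 * 0.5) = 502 samples to each class (1004 in total); the one
# remaining sample is then handed out at random in the loop below.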
259 | nonAssignedSamples = numSamples - samplesAssigned 260 | classesIDx= np.random.choice(2, 261 | nonAssignedSamples, 262 | True, 263 | p=samplesPercentage) 264 | 265 | for c_i in classesIDx : 266 | samplesPerClass[c_i] += 1 267 | 268 | for c_i in xrange(2) : 269 | samplesAssignedClass = samplesPerClass[c_i] / numImagesToSample 270 | samplesDistribution[c_i] += samplesAssignedClass 271 | samplesNonAssignedClass = samplesPerClass[c_i] % numImagesToSample 272 | for cU_i in xrange(samplesNonAssignedClass): 273 | samplesDistribution[c_i, random.randint(0, numImagesToSample-1)] += 1 274 | 275 | return samplesDistribution 276 | 277 | # ********************************** For testing ********************************* 278 | 279 | def sampleWholeImage(imgSubject, 280 | roi, 281 | sampleSize, 282 | strideVal, 283 | batch_size 284 | ): 285 | 286 | samplesCoords = [] 287 | 288 | imgDims = list(imgSubject.shape) 289 | 290 | zMinNext=0 291 | zCentPredicted = False 292 | 293 | while not zCentPredicted : 294 | zMax = min(zMinNext+sampleSize[2], imgDims[2]) 295 | zMin = zMax - sampleSize[2] 296 | zMinNext = zMinNext + strideVal[2] 297 | 298 | if zMax < imgDims[2]: 299 | zCentPredicted = False 300 | else: 301 | zCentPredicted = True 302 | 303 | yMinNext=0 304 | yCentPredicted = False 305 | 306 | while not yCentPredicted : 307 | yMax = min(yMinNext+sampleSize[1], imgDims[1]) 308 | yMin = yMax - sampleSize[1] 309 | yMinNext = yMinNext + strideVal[1] 310 | 311 | if yMax < imgDims[1]: 312 | yCentPredicted = False 313 | else: 314 | yCentPredicted = True 315 | 316 | xMinNext=0 317 | xCentPredicted = False 318 | 319 | while not xCentPredicted : 320 | xMax = min(xMinNext+sampleSize[0], imgDims[0]) 321 | xMin = xMax - sampleSize[0] 322 | xMinNext = xMinNext + strideVal[0] 323 | 324 | if xMax < imgDims[0]: 325 | xCentPredicted = False 326 | else: 327 | xCentPredicted = True 328 | 329 | if isinstance(roi, (np.ndarray)) : 330 | if not np.any(roi[xMin:xMax, yMin:yMax, zMin:zMax ]) : 331 | continue 332 | 333 | samplesCoords.append([ [xMin, xMax-1], [yMin, yMax-1], [zMin, zMax-1] ]) 334 | 335 | # To Theano to not complain the number of samples have to exactly fit with the number of batches. 
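# Example (illustrative): with 27 candidate coordinates and batch_size = 10,
# 27 % 10 = 7, so the last coordinate is appended 10 - 7 = 3 more times to
# reach 30 samples (3 full batches); the repeated region is simply segmented
# more than once with identical results.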
336 | sampledRegions = len(samplesCoords)
337 | 
338 | if sampledRegions % batch_size != 0:
339 | numberOfSamplesToAdd = batch_size - sampledRegions%batch_size
340 | else:
341 | numberOfSamplesToAdd = 0
342 | 
343 | for i in xrange(numberOfSamplesToAdd) :
344 | samplesCoords.append(samplesCoords[sampledRegions-1])
345 | 
346 | return [samplesCoords]
347 | 
348 | 
349 | def extractSamples(imgData,
350 | imgData_Bottom,
351 | sliceCoords,
352 | imagePartDimensions,
353 | patchDimensions
354 | ) :
355 | numberOfSamples = len(sliceCoords)
356 | # Create the array that will contain the samples
357 | samplesArrayShape = [numberOfSamples, 1, imagePartDimensions[0], imagePartDimensions[1], imagePartDimensions[2]]
358 | samples = np.zeros(samplesArrayShape, dtype= "float32")
359 | samples_Bottom = np.zeros(samplesArrayShape, dtype= "float32")
360 | 
361 | for s_i in xrange(numberOfSamples) :
362 | cMin = []
363 | cMax = []
364 | for c_i in xrange(3):
365 | cMin.append(sliceCoords[s_i][c_i][0])
366 | cMax.append(sliceCoords[s_i][c_i][1] + 1)
367 | 
368 | samples[s_i] = imgData[cMin[0]:cMax[0],
369 | cMin[1]:cMax[1],
370 | cMin[2]:cMax[2]]
371 | samples_Bottom[s_i] = imgData_Bottom[cMin[0]:cMax[0],
372 | cMin[1]:cMax[1],
373 | cMin[2]:cMax[2]]
374 | 
375 | return [samples,samples_Bottom]
376 | 
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/IO/sampling.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/IO/sampling.pyc
--------------------------------------------------------------------------------
/src/LiviaNet/Modules/IO/saveData.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (c) 2016, Jose Dolz .All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
5 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
6 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
7 | 
8 | THIS SOFTWARE IS PROVIDED BY THE FREEBSD PROJECT "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FREEBSD PROJECT OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
9 | The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the FreeBSD Project.
10 | 
11 | Jose Dolz. Dec, 2016.
12 | email: jose.dolz.upv@gmail.com
13 | LIVIA Department, ETS, Montreal. 
14 | """ 15 | 16 | import os 17 | import numpy 18 | import numpy as np 19 | import nibabel as nib 20 | import pdb 21 | import scipy.io 22 | 23 | from loadData import load_nii 24 | 25 | 26 | ## ======== For Nifti files ========= ## 27 | def saveImageAsNifti(imageToSave, 28 | imageName, 29 | imageOriginalName, 30 | imageType): 31 | 32 | printFileNames = False 33 | 34 | if printFileNames == True: 35 | print(" ... Saving image in {}".format(imageName)) 36 | 37 | [imageData,img_proxy] = load_nii(imageOriginalName, printFileNames) 38 | 39 | # Generate the nii file 40 | niiToSave = nib.Nifti1Image(imageToSave, img_proxy.affine) 41 | niiToSave.set_data_dtype(imageType) 42 | 43 | dim = len(imageToSave.shape) 44 | zooms = list(img_proxy.header.get_zooms()[:dim]) 45 | if len(zooms) < dim : 46 | zooms = zooms + [1.0]*(dim-len(zooms)) 47 | 48 | niiToSave.header.set_zooms(zooms) 49 | nib.save(niiToSave, imageName) 50 | 51 | print "... Image succesfully saved..." 52 | 53 | 54 | ## ========= For Matlab files ========== ## 55 | def saveImageAsMatlab(imageToSave, 56 | imageName): 57 | 58 | printFileNames = False 59 | 60 | if printFileNames == True: 61 | print(" ... Saving image in {}".format(imageName)) 62 | 63 | scipy.io.savemat(imageName, mdict={'vol': imageToSave}) 64 | 65 | print "... Image succesfully saved..." 66 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/IO/saveData.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/IO/saveData.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/NeuralNetwork/ActivationFunctions.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 
25 | """ 26 | 27 | import pdb 28 | import os 29 | import numpy as np 30 | import theano 31 | import theano.tensor as T 32 | 33 | import sys 34 | 35 | # https://github.com/Theano/Theano/issues/689 36 | sys.setrecursionlimit(50000) 37 | 38 | 39 | ##################################################### 40 | ## Various activation functions for the CNN layers ## 41 | ##################################################### 42 | 43 | # Sigmoid activations 44 | def applyActivationFunction_Sigmoid(inputData): 45 | """ inputData is a tensor5D with shape: 46 | (batchSize, 47 | Number of feature Maps, 48 | convolvedImageShape[0], 49 | convolvedImageShape[1], 50 | convolvedImageShape[2]) """ 51 | 52 | outputData = T.nnet.sigmoid(inputData) 53 | return ( outputData ) 54 | 55 | # Tanh activations 56 | def applyActivationFunction_Tanh(inputData): 57 | """inputData is a tensor5D with shape: 58 | # (batchSize, 59 | # Number of feature Maps, 60 | # convolvedImageShape[0], 61 | # convolvedImageShape[1], 62 | # convolvedImageShape[2])""" 63 | 64 | outputData= T.tanh(inputData) 65 | return ( outputData ) 66 | 67 | # *** There actually exist several ways to implement ReLU activations *** 68 | # --- Version 1 --- 69 | def applyActivationFunction_ReLU_v1(inputData): 70 | """ inputData is a tensor5D with shape: 71 | # (batchSize, 72 | # Number of feature Maps, 73 | # convolvedImageShape[0], 74 | # convolvedImageShape[1], 75 | # convolvedImageShape[2]) """ 76 | 77 | return T.maximum(inputData,0) 78 | 79 | # --- Version 2 --- 80 | def applyActivationFunction_ReLU_v2(inputData): 81 | 82 | return T.switch(inputData < 0., 0., inputData) 83 | 84 | # --- Version 3 --- 85 | def applyActivationFunction_ReLU_v3(inputData): 86 | 87 | return ((inputData + abs(inputData))/2.0) 88 | 89 | # --- Version 4 --- 90 | def applyActivationFunction_ReLU_v4(inputData): 91 | 92 | return (T.sgn(inputData) + 1) * inputData * 0.5 93 | 94 | # *** LeakyReLU *** 95 | def applyActivationFunction_LeakyReLU( inputData, leakiness ) : 96 | """leakiness : float 97 | Slope for negative input, usually between 0 and 1. 98 | A leakiness of 0 will lead to the standard rectifier, 99 | a leakiness of 1 will lead to a linear activation function, 100 | and any value in between will give a leaky rectifier. 101 | 102 | [1] Maas et al. (2013): 103 | Rectifier Nonlinearities Improve Neural Network Acoustic Models, 104 | http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf 105 | 106 | 107 | - The input is a tensor of shape (batchSize, FeatMaps, xDim, yDim, zDim) """ 108 | 109 | pos = 0.5 * (1 + leakiness) 110 | neg = 0.5 * (1 - leakiness) 111 | 112 | output = pos * inputData + neg * abs(inputData) 113 | 114 | return (output) 115 | 116 | # *** There actually exist several ways to implement PReLU activations *** 117 | 118 | # PReLU activations (from Kamnitsas) 119 | def applyActivationFunction_PReLU( inputData, PreluActivations ) : 120 | """Parametric Rectified Linear Unit. 121 | It follows: 122 | `f(x) = alpha * x for x < 0`, 123 | `f(x) = x for x >= 0`, 124 | where `alpha` is a learned array with the same shape as x. 
125 | 126 | - The input is a tensor of shape (batchSize, FeatMaps, xDim, yDim, zDim) """ 127 | preluActivationsAsRow = PreluActivations.dimshuffle('x', 0, 'x', 'x', 'x') 128 | 129 | pos = T.maximum(0, inputData) 130 | neg = preluActivationsAsRow * (inputData - abs(inputData)) * 0.5 131 | output = pos + neg 132 | 133 | return (output) 134 | 135 | # --- version 2 --- 136 | def applyActivationFunction_PReLU_v2(inputData,PreluActivations) : 137 | """ inputData is a tensor5D with shape: 138 | (batchSize, 139 | Number of feature Maps, 140 | convolvedImageShape[0], 141 | convolvedImageShape[1], 142 | convolvedImageShape[2]) """ 143 | 144 | # The input is a tensor of shape (batchSize, FeatMaps, xDim, yDim, zDim) 145 | preluActivationsAsRow = PreluActivations.dimshuffle('x', 0, 'x', 'x', 'x') 146 | 147 | pos = ((inputData + abs(inputData)) / 2.0 ) 148 | neg = preluActivationsAsRow * ((inputData - abs(inputData)) / 2.0 ) 149 | output = pos + neg 150 | 151 | return ( output) 152 | 153 | # --- version 3 --- 154 | def applyActivationFunction_PReLU_v3(inputData,PreluActivations) : 155 | """ inputData is a tensor5D with shape: 156 | (batchSize, 157 | Number of feature Maps, 158 | convolvedImageShape[0], 159 | convolvedImageShape[1], 160 | convolvedImageShape[2]) """ 161 | 162 | # The input is a tensor of shape (batchSize, FeatMaps, xDim, yDim, zDim) 163 | preluActivationsAsRow = PreluActivations.dimshuffle('x', 0, 'x', 'x', 'x') 164 | 165 | pos = 0.5 * (1 + preluActivationsAsRow ) 166 | neg = 0.5 * (1 - preluActivationsAsRow ) 167 | output = pos * inputData + neg * abs(inputData) 168 | 169 | return ( output) 170 | 171 | # Benchmark on ReLU/PReLU activations: 172 | # http://gforge.se/2015/06/benchmarking-relu-and-prelu/ 173 | 174 | # TODO. Implement some other activation functions: 175 | # Ex: Randomized ReLU 176 | # S-shape Relu 177 | # ThresholdedReLU 178 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/NeuralNetwork/ActivationFunctions.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/NeuralNetwork/ActivationFunctions.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/NeuralNetwork/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/NeuralNetwork/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/NeuralNetwork/__init__.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/NeuralNetwork/layerOperations.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. 
Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | import theano.tensor as T 28 | import theano 29 | import random 30 | import numpy as np 31 | 32 | # ----------------- Apply dropout to a given input ---------------# 33 | def apply_Dropout(rng, dropoutRate, inputShape, inputData, task) : 34 | """ Task: 35 | # 0: Training 36 | # 1: Validation 37 | # 2: Testing """ 38 | outputData = inputData 39 | 40 | if dropoutRate > 0.001 : 41 | activationRate = (1-dropoutRate) 42 | srng = T.shared_randomstreams.RandomStreams(rng.randint(999999)) 43 | dropoutMask = srng.binomial(n=1, size=inputShape, p=activationRate, dtype=theano.config.floatX) 44 | if task == 0: 45 | outputData = inputData * dropoutMask 46 | else: 47 | outputData = inputData * activationRate 48 | return (outputData) 49 | 50 | """ Another dropout version """ 51 | """ def applyDropout(rng, inputLayer, inputLayerSize, dropoutRate) : 52 | # https://iamtrask.github.io/2015/07/28/dropout/ 53 | # https://github.com/mdenil/dropout/blob/master/mlp.py 54 | 55 | #srng = T.shared_randomstreams.RandomStreams(rng.randint(999999)) 56 | #dropoutMask = srng.binomial(n=1, p= 1-dropoutRate, size=inputLayerSize, dtype=theano.config.floatX) 57 | dropoutMask = numpy.random.binomial([numpy.ones((inputLayer.W.eval().shape))],1-dropoutRate)[0] * (1.0/(1-dropoutRate)) 58 | output = inputLayer.W * dropoutMask 59 | return (output)""" 60 | 61 | # ----------------- Convolve an input with a given kernel ---------------# 62 | def convolveWithKernel(W, filter_shape, inputSample, inputSampleShape) : 63 | wReshapedForConv = W.dimshuffle(0,4,1,2,3) 64 | wReshapedForConvShape = (filter_shape[0], filter_shape[4], filter_shape[1], filter_shape[2], filter_shape[3]) 65 | 66 | #Reshape image for what conv3d2d needs: 67 | inputSampleReshaped = inputSample.dimshuffle(0, 4, 1, 2, 3) 68 | inputSampleReshapedShape = (inputSampleShape[0], 69 | inputSampleShape[4], 70 | inputSampleShape[1], 71 | inputSampleShape[2], 72 | inputSampleShape[3]) 73 | 74 | convolved_Output = T.nnet.conv3d2d.conv3d(inputSampleReshaped, 75 | wReshapedForConv, 76 | inputSampleReshapedShape, 77 | wReshapedForConvShape, 78 | border_mode = 'valid') 79 | 80 | output = convolved_Output.dimshuffle(0, 2, 3, 4, 1) 81 | 82 | outputShape = [inputSampleShape[0], 83 | filter_shape[0], 84 | inputSampleShape[2]-filter_shape[2]+1, 85 | inputSampleShape[3]-filter_shape[3]+1, 86 | inputSampleShape[4]-filter_shape[4]+1] 87 | 88 | return (output, outputShape) 89 | 90 | # ----------------- Apply Batch normalization ---------------# 91 | """ Apply Batch normalization """ 92 | """ From Kamnitsas """ 93 | def applyBn(numberEpochApplyRolling, inputTrain, inputTest, inputShapeTrain) : 94 | numberOfChannels = inputShapeTrain[1] 95 | 96 | gBn_values = 
np.ones( (numberOfChannels), dtype = 'float32' )
97 |     gBn = theano.shared(value=gBn_values, borrow=True)
98 |     bBn_values = np.zeros( (numberOfChannels), dtype = 'float32')
99 |     bBn = theano.shared(value=bBn_values, borrow=True)
100 | 
101 |     # For rolling average:
102 |     muArray = theano.shared(np.zeros( (numberEpochApplyRolling, numberOfChannels), dtype = 'float32' ), borrow=True)
103 |     varArray = theano.shared(np.ones( (numberEpochApplyRolling, numberOfChannels), dtype = 'float32' ), borrow=True)
104 |     sharedNewMu_B = theano.shared(np.zeros( (numberOfChannels), dtype = 'float32'), borrow=True)
105 |     sharedNewVar_B = theano.shared(np.ones( (numberOfChannels), dtype = 'float32'), borrow=True)
106 | 
107 |     e1 = np.finfo(np.float32).tiny
108 | 
109 |     mu_B = inputTrain.mean(axis=[0,2,3,4])
110 |     mu_B = T.unbroadcast(mu_B, 0)
111 |     var_B = inputTrain.var(axis=[0,2,3,4])
112 |     var_B = T.unbroadcast(var_B, 0)
113 |     var_B_plusE = var_B + e1
114 | 
115 |     #---computing mu and var for inference from rolling average---
116 |     mu_RollingAverage = muArray.mean(axis=0)
117 |     effectiveSize = inputShapeTrain[0]*inputShapeTrain[2]*inputShapeTrain[3]*inputShapeTrain[4]
118 |     var_RollingAverage = (float(effectiveSize)/(effectiveSize-1))*varArray.mean(axis=0) # float() keeps the Bessel correction n/(n-1) from being floored to 1 by Python 2 integer division
119 |     var_RollingAverage_plusE = var_RollingAverage + e1
120 | 
121 |     # training
122 |     normXi_train = (inputTrain - mu_B.dimshuffle('x', 0, 'x', 'x', 'x')) / T.sqrt(var_B_plusE.dimshuffle('x', 0, 'x', 'x', 'x'))
123 |     normYi_train = gBn.dimshuffle('x', 0, 'x', 'x', 'x') * normXi_train + bBn.dimshuffle('x', 0, 'x', 'x', 'x')
124 | 
125 |     # testing
126 |     normXi_test = (inputTest - mu_RollingAverage.dimshuffle('x', 0, 'x', 'x', 'x')) / T.sqrt(var_RollingAverage_plusE.dimshuffle('x', 0, 'x', 'x', 'x'))
127 |     normYi_test = gBn.dimshuffle('x', 0, 'x', 'x', 'x') * normXi_test + bBn.dimshuffle('x', 0, 'x', 'x', 'x')
128 | 
129 |     return (normYi_train,
130 |             normYi_test,
131 |             gBn,
132 |             bBn,
133 |             muArray,
134 |             varArray,
135 |             sharedNewMu_B,
136 |             sharedNewVar_B,
137 |             mu_B,
138 |             var_B
139 |             )
140 | 
141 | 
142 | # ----------------- Apply Softmax ---------------#
143 | def applySoftMax( inputSample, inputSampleShape, numClasses, softmaxTemperature):
144 | 
145 |     inputSampleReshaped = inputSample.dimshuffle(0, 2, 3, 4, 1)
146 |     inputSampleFlattened = inputSampleReshaped.flatten(1)
147 | 
148 |     numClassifiedVoxels = inputSampleShape[2]*inputSampleShape[3]*inputSampleShape[4]
149 |     firstDimOfinputSample2d = inputSampleShape[0]*numClassifiedVoxels
150 |     inputSample2d = inputSampleFlattened.reshape((firstDimOfinputSample2d, numClasses))
151 | 
152 |     # Predicted probability per class. 
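#
# A quick numeric illustration of the temperature used on the next line
# (plain numpy, independent of the Theano graph; values are rounded):
#
#     >>> import numpy as np
#     >>> logits = np.array([2.0, 1.0, 0.0])
#     >>> softmax = lambda x: np.exp(x - x.max()) / np.exp(x - x.max()).sum()
#     >>> softmax(logits)        # temperature 1 -> [0.665, 0.245, 0.090]
#     >>> softmax(logits / 4.0)  # temperature 4 -> flatter: [0.419, 0.327, 0.254]
#
# A temperature of 1 leaves the softmax unchanged; larger values smooth the
# class probabilities, smaller values sharpen them.
#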
153 | p_y_given_x_2d = T.nnet.softmax(inputSample2d/softmaxTemperature) 154 | 155 | p_y_given_x_class = p_y_given_x_2d.reshape((inputSampleShape[0], 156 | inputSampleShape[2], 157 | inputSampleShape[3], 158 | inputSampleShape[4], 159 | inputSampleShape[1])) 160 | 161 | p_y_given_x = p_y_given_x_class.dimshuffle(0,4,1,2,3) 162 | 163 | y_pred = T.argmax(p_y_given_x, axis=1) 164 | 165 | return ( p_y_given_x, y_pred ) 166 | 167 | # ----------------- Apply Bias to feat maps ---------------# 168 | def applyBiasToFeatureMaps( bias, featMaps ) : 169 | featMaps = featMaps + bias.dimshuffle('x', 0, 'x', 'x', 'x') 170 | 171 | return (featMaps) 172 | 173 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/NeuralNetwork/layerOperations.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/NeuralNetwork/layerOperations.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/Parsers/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/Parsers/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/Parsers/__init__.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/Parsers/parsersUtils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 
25 | """ 26 | 27 | import ConfigParser 28 | import json 29 | import os 30 | 31 | # -------- Parse parameters to create the network model -------- # 32 | class parserConfigIni(object): 33 | def __init__(_self): 34 | _self.networkName = [] 35 | 36 | #@staticmethod 37 | def readConfigIniFile(_self,fileName,task): 38 | # Task: 0-> Generate model 39 | # 1-> Train model 40 | # 2-> Segmentation 41 | 42 | def createModel(): 43 | print (" --- Creating model (Reading parameters...)") 44 | _self.readModelCreation_params(fileName) 45 | def trainModel(): 46 | print (" --- Training model (Reading parameters...)") 47 | _self.readModelTraining_params(fileName) 48 | def testModel(): 49 | print (" --- Testing model (Reading parameters...)") 50 | _self.readModelTesting_params(fileName) 51 | 52 | # TODO. Include more optimizers here 53 | optionsParser = {0 : createModel, 54 | 1 : trainModel, 55 | 2 : testModel} 56 | optionsParser[task]() 57 | 58 | # Read parameters to Generate model 59 | def readModelCreation_params(_self,fileName) : 60 | ConfigIni = ConfigParser.ConfigParser() 61 | ConfigIni.read(fileName) 62 | # ------- General --------- # 63 | _self.networkName = ConfigIni.get('General','networkName') 64 | _self.folderName = ConfigIni.get('General','folderName') 65 | 66 | # ------- Network Architecture ------- # 67 | _self.n_classes = json.loads(ConfigIni.get('CNN_Architecture','n_classes')) # Number of (segmentation) classes 68 | _self.layers = json.loads(ConfigIni.get('CNN_Architecture','numkernelsperlayer')) # Number of layers 69 | _self.kernels = json.loads(ConfigIni.get('CNN_Architecture','kernelshapes')) # Kernels shape 70 | _self.intermediate_ConnectedLayers = json.loads(ConfigIni.get('CNN_Architecture','intermediateConnectedLayers')) 71 | _self.pooling_scales = json.loads(ConfigIni.get('CNN_Architecture','pooling_scales')) # Pooling 72 | _self.dropout_Rates = json.loads(ConfigIni.get('CNN_Architecture','dropout_Rates')) # Dropout 73 | _self.activationType = json.loads(ConfigIni.get('CNN_Architecture','activationType')) # Activation Typ 74 | _self.weight_Initialization_CNN = json.loads(ConfigIni.get('CNN_Architecture','weight_Initialization_CNN')) # weight_Initialization (CNN) 75 | _self.weight_Initialization_FCN = json.loads(ConfigIni.get('CNN_Architecture','weight_Initialization_FCN')) # weight_Initialization (FCN) 76 | _self.weightsFolderName = ConfigIni.get('CNN_Architecture','weights folderName') # weights folder 77 | _self.weightsTrainedIdx = json.loads(ConfigIni.get('CNN_Architecture','weights trained indexes')) # weights indexes to employ 78 | 79 | _self.batch_size = json.loads(ConfigIni.get('Training Parameters','batch_size')) # Batch size 80 | _self.sampleSize_Train = json.loads(ConfigIni.get('Training Parameters','sampleSize_Train')) 81 | _self.sampleSize_Test = json.loads(ConfigIni.get('Training Parameters','sampleSize_Test')) 82 | 83 | _self.costFunction = json.loads(ConfigIni.get('Training Parameters','costFunction')) 84 | _self.L1_reg_C = json.loads(ConfigIni.get('Training Parameters','L1 Regularization Constant')) 85 | _self.L2_reg_C = json.loads(ConfigIni.get('Training Parameters','L2 Regularization Constant')) 86 | _self.learning_rate = json.loads(ConfigIni.get('Training Parameters','Leraning Rate')) 87 | _self.momentumType = json.loads(ConfigIni.get('Training Parameters','Momentum Type')) 88 | _self.momentumValue = json.loads(ConfigIni.get('Training Parameters','Momentum Value')) 89 | _self.momentumNormalized = json.loads(ConfigIni.get('Training Parameters','momentumNormalized')) 
90 | _self.optimizerType = json.loads(ConfigIni.get('Training Parameters','Optimizer Type')) 91 | _self.rho_RMSProp = json.loads(ConfigIni.get('Training Parameters','Rho RMSProp')) 92 | _self.epsilon_RMSProp = json.loads(ConfigIni.get('Training Parameters','Epsilon RMSProp')) 93 | applyBatchNorm = json.loads(ConfigIni.get('Training Parameters','applyBatchNormalization')) 94 | 95 | if applyBatchNorm == 1: 96 | _self.applyBatchNorm = True 97 | else: 98 | _self.applyBatchNorm = False 99 | 100 | _self.BatchNormEpochs = json.loads(ConfigIni.get('Training Parameters','BatchNormEpochs')) 101 | _self.tempSoftMax = json.loads(ConfigIni.get('Training Parameters','SoftMax temperature')) 102 | 103 | # TODO: Do some sanity checks 104 | 105 | # Read parameters to TRAIN model 106 | def readModelTraining_params(_self,fileName) : 107 | ConfigIni = ConfigParser.ConfigParser() 108 | ConfigIni.read(fileName) 109 | 110 | # Get training/validation image names 111 | # Paths 112 | _self.imagesFolder = ConfigIni.get('Training Images','imagesFolder') 113 | _self.imagesFolder_Bottom = ConfigIni.get('Training Images','imagesFolder_Bottom') 114 | _self.GroundTruthFolder = ConfigIni.get('Training Images','GroundTruthFolder') 115 | _self.ROIFolder = ConfigIni.get('Training Images','ROIFolder') 116 | _self.indexesForTraining = json.loads(ConfigIni.get('Training Images','indexesForTraining')) 117 | _self.indexesForValidation = json.loads(ConfigIni.get('Training Images','indexesForValidation')) 118 | _self.imageTypesTrain = json.loads(ConfigIni.get('Training Images','imageTypes')) 119 | 120 | # training params 121 | _self.numberOfEpochs = json.loads(ConfigIni.get('Training Parameters','number of Epochs')) 122 | _self.numberOfSubEpochs = json.loads(ConfigIni.get('Training Parameters','number of SubEpochs')) 123 | _self.numberOfSamplesSupEpoch = json.loads(ConfigIni.get('Training Parameters','number of samples at each SubEpoch Train')) 124 | _self.firstEpochChangeLR = json.loads(ConfigIni.get('Training Parameters','First Epoch Change LR')) 125 | _self.frequencyChangeLR = json.loads(ConfigIni.get('Training Parameters','Frequency Change LR')) 126 | _self.applyPadding = json.loads(ConfigIni.get('Training Parameters','applyPadding')) 127 | 128 | def readModelTesting_params(_self,fileName) : 129 | ConfigIni = ConfigParser.ConfigParser() 130 | ConfigIni.read(fileName) 131 | 132 | _self.imagesFolder = ConfigIni.get('Segmentation Images','imagesFolder') 133 | _self.imagesFolder_Bottom = ConfigIni.get('Segmentation Images','imagesFolder_Bottom') 134 | _self.GroundTruthFolder = ConfigIni.get('Segmentation Images','GroundTruthFolder') 135 | _self.ROIFolder = ConfigIni.get('Segmentation Images','ROIFolder') 136 | 137 | _self.imageTypes = json.loads(ConfigIni.get('Segmentation Images','imageTypes')) 138 | _self.indexesToSegment = json.loads(ConfigIni.get('Segmentation Images','indexesToSegment')) 139 | _self.applyPadding = json.loads(ConfigIni.get('Segmentation Images','applyPadding')) 140 | 141 | 142 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/Parsers/parsersUtils.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/Parsers/parsersUtils.pyc -------------------------------------------------------------------------------- /src/LiviaNet/Modules/__init__.py: 
-------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /src/LiviaNet/Modules/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/Modules/__init__.pyc -------------------------------------------------------------------------------- /src/LiviaNet/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /src/LiviaNet/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/__init__.pyc -------------------------------------------------------------------------------- /src/LiviaNet/generateNetwork.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 
25 | """ 26 | 27 | import pdb 28 | import os 29 | 30 | from LiviaSemiDenseNet import LiviaSemiDenseNet3D 31 | from Modules.General.Utils import dump_model_to_gzip_file 32 | from Modules.General.Utils import makeFolder 33 | from Modules.Parsers.parsersUtils import parserConfigIni 34 | 35 | 36 | def generateNetwork(configIniName) : 37 | 38 | myParserConfigIni = parserConfigIni() 39 | 40 | myParserConfigIni.readConfigIniFile(configIniName,0) 41 | print " ********************** Starting creation model **********************" 42 | print " ------------------------ General ------------------------ " 43 | print " - Network name: {}".format(myParserConfigIni.networkName) 44 | print " - Folder to save the outputs: {}".format(myParserConfigIni.folderName) 45 | print " ------------------------ CNN Architecture ------------------------ " 46 | print " - Number of classes: {}".format(myParserConfigIni.n_classes) 47 | print " - Layers: {}".format(myParserConfigIni.layers) 48 | print " - Kernel sizes: {}".format(myParserConfigIni.kernels) 49 | 50 | print " - Intermediate connected CNN layers: {}".format(myParserConfigIni.intermediate_ConnectedLayers) 51 | 52 | print " - Pooling: {}".format(myParserConfigIni.pooling_scales) 53 | print " - Dropout: {}".format(myParserConfigIni.dropout_Rates) 54 | 55 | def Linear(): 56 | print " --- Activation function: Linear" 57 | 58 | def ReLU(): 59 | print " --- Activation function: ReLU" 60 | 61 | def PReLU(): 62 | print " --- Activation function: PReLU" 63 | 64 | def LeakyReLU(): 65 | print " --- Activation function: Leaky ReLU" 66 | 67 | printActivationFunction = {0 : Linear, 68 | 1 : ReLU, 69 | 2 : PReLU, 70 | 3 : LeakyReLU} 71 | 72 | printActivationFunction[myParserConfigIni.activationType]() 73 | 74 | def Random(layerType): 75 | print " --- Weights initialization (" +layerType+ " Layers): Random" 76 | 77 | def Delving(layerType): 78 | print " --- Weights initialization (" +layerType+ " Layers): Delving" 79 | 80 | def PreTrained(layerType): 81 | print " --- Weights initialization (" +layerType+ " Layers): PreTrained" 82 | 83 | printweight_Initialization_CNN = {0 : Random, 84 | 1 : Delving, 85 | 2 : PreTrained} 86 | 87 | printweight_Initialization_CNN[myParserConfigIni.weight_Initialization_CNN]('CNN') 88 | printweight_Initialization_CNN[myParserConfigIni.weight_Initialization_FCN]('FCN') 89 | 90 | print " ------------------------ Training Parameters ------------------------ " 91 | if len(myParserConfigIni.learning_rate) == 1: 92 | print " - Learning rate: {}".format(myParserConfigIni.learning_rate) 93 | else: 94 | for i in xrange(len(myParserConfigIni.learning_rate)): 95 | print " - Learning rate at layer {} : {} ".format(str(i+1),myParserConfigIni.learning_rate[i]) 96 | 97 | print " - Batch size: {}".format(myParserConfigIni.batch_size) 98 | 99 | if myParserConfigIni.applyBatchNorm == True: 100 | print " - Apply batch normalization in {} epochs".format(myParserConfigIni.BatchNormEpochs) 101 | 102 | print " ------------------------ Size of samples ------------------------ " 103 | print " - Training: {}".format(myParserConfigIni.sampleSize_Train) 104 | print " - Testing: {}".format(myParserConfigIni.sampleSize_Test) 105 | 106 | # --------------- Create my LiviaSemiDenseNet3D object --------------- 107 | myLiviaSemiDenseNet3D = LiviaSemiDenseNet3D() 108 | 109 | # --------------- Create the whole architecture (Conv layers + fully connected layers + classification layer) --------------- 110 | myLiviaSemiDenseNet3D.createNetwork(myParserConfigIni.networkName, 111 | 
myParserConfigIni.folderName, 112 | myParserConfigIni.layers, 113 | myParserConfigIni.kernels, 114 | myParserConfigIni.intermediate_ConnectedLayers, 115 | myParserConfigIni.n_classes, 116 | myParserConfigIni.sampleSize_Train, 117 | myParserConfigIni.sampleSize_Test, 118 | myParserConfigIni.batch_size, 119 | myParserConfigIni.applyBatchNorm, 120 | myParserConfigIni.BatchNormEpochs, 121 | myParserConfigIni.activationType, 122 | myParserConfigIni.dropout_Rates, 123 | myParserConfigIni.pooling_scales, 124 | myParserConfigIni.weight_Initialization_CNN, 125 | myParserConfigIni.weight_Initialization_FCN, 126 | myParserConfigIni.weightsFolderName, 127 | myParserConfigIni.weightsTrainedIdx, 128 | myParserConfigIni.tempSoftMax 129 | ) 130 | # TODO: Specify also the weights if pre-trained 131 | 132 | 133 | # --------------- Initialize all the training parameters --------------- 134 | myLiviaSemiDenseNet3D.initTrainingParameters(myParserConfigIni.costFunction, 135 | myParserConfigIni.L1_reg_C, 136 | myParserConfigIni.L2_reg_C, 137 | myParserConfigIni.learning_rate, 138 | myParserConfigIni.momentumType, 139 | myParserConfigIni.momentumValue, 140 | myParserConfigIni.momentumNormalized, 141 | myParserConfigIni.optimizerType, 142 | myParserConfigIni.rho_RMSProp, 143 | myParserConfigIni.epsilon_RMSProp 144 | ) 145 | 146 | # --------------- Compile the functions (Training/Validation/Testing) --------------- 147 | myLiviaSemiDenseNet3D.compileTheanoFunctions() 148 | 149 | # --------------- Save the model --------------- 150 | # Generate folders to store the model 151 | BASE_DIR = os.getcwd() 152 | path_Temp = os.path.join(BASE_DIR,'outputFiles') 153 | # For the networks 154 | netFolderName = os.path.join(path_Temp,myParserConfigIni.folderName) 155 | netFolderName = os.path.join(netFolderName,'Networks') 156 | 157 | # For the predictions 158 | predlFolderName = os.path.join(path_Temp,myParserConfigIni.folderName) 159 | predlFolderName = os.path.join(predlFolderName,'Pred') 160 | predValFolderName = os.path.join(predlFolderName,'Validation') 161 | predTestFolderName = os.path.join(predlFolderName,'Testing') 162 | 163 | makeFolder(netFolderName, "Networks") 164 | makeFolder(predValFolderName, "to store predictions (Validation)") 165 | makeFolder(predTestFolderName, "to store predictions (Testing)") 166 | 167 | modelFileName = netFolderName + "/" + myParserConfigIni.networkName + "_Epoch0" 168 | dump_model_to_gzip_file(myLiviaSemiDenseNet3D, modelFileName) 169 | 170 | strFinal = " Network model saved in " + netFolderName + " as " + myParserConfigIni.networkName + "_Epoch0" 171 | print strFinal 172 | 173 | return modelFileName 174 | 175 | 176 | -------------------------------------------------------------------------------- /src/LiviaNet/generateNetwork.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/generateNetwork.pyc -------------------------------------------------------------------------------- /src/LiviaNet/startTesting.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2017, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. 
Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2017. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | import numpy as np 28 | import time 29 | import os 30 | import pdb 31 | 32 | from Modules.General.Evaluation import computeDice 33 | from Modules.General.Utils import getImagesSet 34 | from Modules.General.Utils import load_model_from_gzip_file 35 | from Modules.IO.ImgOperations.imgOp import applyUnpadding 36 | from Modules.IO.loadData import load_imagesSinglePatient 37 | from Modules.IO.saveData import saveImageAsNifti 38 | from Modules.IO.saveData import saveImageAsMatlab 39 | from Modules.IO.sampling import * 40 | from Modules.Parsers.parsersUtils import parserConfigIni 41 | 42 | 43 | def segmentVolume(myNetworkModel, 44 | i_d, 45 | imageNames_Test, 46 | imageNames_Test_Bottom, 47 | names_Test, 48 | groundTruthNames_Test, 49 | roiNames_Test, 50 | imageType, 51 | padInputImagesBool, 52 | receptiveField, 53 | sampleSize_Test, 54 | strideVal, 55 | batch_Size, 56 | task # Validation (0) or testing (1) 57 | ): 58 | # Get info from the network model 59 | networkName = myNetworkModel.networkName 60 | folderName = myNetworkModel.folderName 61 | n_classes = myNetworkModel.n_classes 62 | sampleSize_Test = myNetworkModel.sampleSize_Test 63 | receptiveField = myNetworkModel.receptiveField 64 | outputShape = myNetworkModel.lastLayer.outputShapeTest[2:] 65 | batch_Size = myNetworkModel.batch_Size 66 | padInputImagesBool = True 67 | 68 | # Get half sample size 69 | sampleHalf = [] 70 | for h_i in range(3): 71 | sampleHalf.append((receptiveField[h_i]-1)/2) 72 | 73 | # Load the images to segment 74 | [imgSubject, 75 | imgSubject_Bottom, 76 | gtLabelsImage, 77 | roi, 78 | paddingValues] = load_imagesSinglePatient(i_d, 79 | imageNames_Test, 80 | imageNames_Test_Bottom, 81 | groundTruthNames_Test, 82 | roiNames_Test, 83 | padInputImagesBool, 84 | receptiveField, 85 | sampleSize_Test, 86 | imageType, 87 | ) 88 | 89 | 90 | # Get image dimensions 91 | imgDims = list(imgSubject.shape) 92 | 93 | [ sampleCoords ] = sampleWholeImage(imgSubject, 94 | roi, 95 | sampleSize_Test, 96 | strideVal, 97 | batch_Size 98 | ) 99 | 100 | numberOfSamples = len(sampleCoords) 101 | sampleID = 0 102 | numberOfBatches = numberOfSamples/batch_Size 103 | 104 | #The probability-map that will be constructed by the predictions. 
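#
# How the map is assembled below (an illustrative sketch; the concrete numbers
# depend on the trained model): probMaps has shape [n_classes] + imgDims, and
# each test sample writes its prediction into a stride-sized central block,
# offset from the sample's min corner by half the receptive field. E.g. with
# a receptive field of [17,17,17], sampleHalf is [8,8,8], so a sample whose
# min corner sits at coords fills
#     probMaps[:, coords[0]+8 : coords[0]+8+strideVal[0], ...]
# and adjacent samples tile the volume without overlap.
#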
105 | probMaps = np.zeros([n_classes]+imgDims, dtype = "float32") 106 | 107 | # Run over all the batches 108 | for b_i in xrange(numberOfBatches) : 109 | 110 | # Get samples for batch b_i 111 | 112 | sampleCoords_b = sampleCoords[ b_i*batch_Size : (b_i+1)*batch_Size ] 113 | 114 | [imgSamples, 115 | imgSample_Bottom] = extractSamples(imgSubject, 116 | imgSubject_Bottom, 117 | sampleCoords_b, 118 | sampleSize_Test, 119 | receptiveField) 120 | 121 | # Load the data of the batch on the GPU 122 | myNetworkModel.testingData_x.set_value(imgSamples, borrow=True) 123 | myNetworkModel.testingData_x_Bottom.set_value(imgSample_Bottom, borrow=True) 124 | 125 | # Call the testing Theano function 126 | predictions = myNetworkModel.networkModel_Test(0) 127 | 128 | predOutput = predictions[-1] 129 | 130 | # --- Now we can generate the probability maps from the predictions ---- 131 | # Run over all the regions 132 | for r_i in xrange(batch_Size) : 133 | 134 | sampleCoords_i = sampleCoords[sampleID] 135 | coords = [ sampleCoords_i[0][0], sampleCoords_i[1][0], sampleCoords_i[2][0] ] 136 | 137 | # Get the min and max coords 138 | xMin = coords[0] + sampleHalf[0] 139 | xMax = coords[0] + sampleHalf[0] + strideVal[0] 140 | 141 | yMin = coords[1] + sampleHalf[1] 142 | yMax = coords[1] + sampleHalf[1] + strideVal[1] 143 | 144 | zMin = coords[2] + sampleHalf[2] 145 | zMax = coords[2] + sampleHalf[2] + strideVal[2] 146 | 147 | probMaps[:,xMin:xMax, yMin:yMax, zMin:zMax] = predOutput[r_i] 148 | 149 | sampleID += 1 150 | 151 | # Release data 152 | myNetworkModel.testingData_x.set_value(np.zeros([1,1,1,1,1], dtype="float32")) 153 | myNetworkModel.testingData_x_Bottom.set_value(np.zeros([1,1,1,1,1], dtype="float32")) 154 | 155 | # Segmentation has been done in this point. 156 | 157 | # Now: Save the data 158 | # Get the segmentation from the probability maps --- 159 | segmentationImage = np.argmax(probMaps, axis=0) 160 | 161 | #Save Result: 162 | npDtypeForPredictedImage = np.dtype(np.int16) 163 | suffixToAdd = "_Segm" 164 | 165 | # Apply unpadding if specified 166 | if padInputImagesBool == True: 167 | segmentationRes = applyUnpadding(segmentationImage, paddingValues) 168 | else: 169 | segmentationRes = segmentationImage 170 | 171 | # Generate folders to store the model 172 | BASE_DIR = os.getcwd() 173 | path_Temp = os.path.join(BASE_DIR,'outputFiles') 174 | 175 | # For the predictions 176 | predlFolderName = os.path.join(path_Temp,myNetworkModel.folderName) 177 | predlFolderName = os.path.join(predlFolderName,'Pred') 178 | if task == 0: 179 | predTestFolderName = os.path.join(predlFolderName,'Validation') 180 | else: 181 | predTestFolderName = os.path.join(predlFolderName,'Testing') 182 | 183 | nameToSave = predTestFolderName + '/Segmentation_'+ names_Test[i_d] 184 | 185 | # Save Segmentation image 186 | 187 | print(" ... 
Saving segmentation result..."),
188 |     if imageType == 0: # nifti
189 |         imageTypeToSave = np.dtype(np.int16)
190 |         saveImageAsNifti(segmentationRes,
191 |                          nameToSave,
192 |                          imageNames_Test[i_d],
193 |                          imageTypeToSave)
194 |     else: # Matlab
195 |         # Cast to int8 for saving purposes
196 |         saveImageAsMatlab(segmentationRes.astype('int8'),
197 |                           nameToSave)
198 | 
199 | 
200 |     # Save the prob maps for each class (except background)
201 |     for c_i in xrange(1, n_classes) :
202 | 
203 | 
204 |         nameToSave = predTestFolderName + '/ProbMap_class_'+ str(c_i) + '_' + names_Test[i_d]
205 | 
206 |         probMapClass = probMaps[c_i,:,:,:]
207 | 
208 |         # Apply unpadding if specified
209 |         if padInputImagesBool == True:
210 |             probMapClassRes = applyUnpadding(probMapClass, paddingValues)
211 |         else:
212 |             probMapClassRes = probMapClass
213 | 
214 |         print(" ... Saving prob map for class {}...".format(str(c_i))),
215 |         if imageType == 0: # nifti
216 |             imageTypeToSave = np.dtype(np.float32)
217 |             saveImageAsNifti(probMapClassRes,
218 |                              nameToSave,
219 |                              imageNames_Test[i_d],
220 |                              imageTypeToSave)
221 |         else:
222 |             # Cast to float32 for saving purposes
223 |             saveImageAsMatlab(probMapClassRes.astype('float32'),
224 |                               nameToSave)
225 | 
226 |     # If ROI images were provided, mask the segmentation with the ROI before evaluation
227 |     if len(roiNames_Test) > 0:
228 |         segmentationImage = segmentationImage*roi
229 | 
230 |     if task == 0:
231 |         print(" ... Computing Dice scores: ")
232 |         DiceArray = computeDice(segmentationImage,gtLabelsImage)
233 |         for d_i in xrange(len(DiceArray)):
234 |             print(" -------------- DSC (Class {}) : {}".format(str(d_i+1),DiceArray[d_i]))
235 | 
236 | """ Main segmentation function """
237 | def startTesting(networkModelName,
238 |                  configIniName
239 |                  ) :
240 | 
241 |     padInputImagesBool = True # from config ini
242 |     print " ****************************************** STARTING SEGMENTATION ******************************************"
243 | 
244 |     print " ********************** Starting segmentation **********************"
245 |     myParserConfigIni = parserConfigIni()
246 |     myParserConfigIni.readConfigIniFile(configIniName,2)
247 | 
248 | 
249 |     print " -------- Images to segment -------------"
250 | 
251 |     print " -------- Reading Images names for segmentation -------------"
252 | 
253 |     # -- Get list of images used for testing -- #
254 |     (imageNames_Test, names_Test) = getImagesSet(myParserConfigIni.imagesFolder,myParserConfigIni.indexesToSegment)  # Images
255 |     (imageNames_Test_Bottom, names_Test_Bottom) = getImagesSet(myParserConfigIni.imagesFolder_Bottom,myParserConfigIni.indexesToSegment)  # Images
256 |     (groundTruthNames_Test, gt_names_Test) = getImagesSet(myParserConfigIni.GroundTruthFolder,myParserConfigIni.indexesToSegment) # Ground truth
257 |     (roiNames_Test, roi_names_Test) = getImagesSet(myParserConfigIni.ROIFolder,myParserConfigIni.indexesToSegment) # ROI
258 | 
259 |     # --------------- Load my LiviaSemiDenseNet3D object ---------------
260 |     print (" ... Loading model from {}".format(networkModelName))
261 |     myLiviaSemiDenseNet3D = load_model_from_gzip_file(networkModelName)
262 |     print " ... Network architecture successfully loaded...."
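#
# Dense-inference note (illustrative figures, not taken from any config): the
# stride used below equals the spatial output shape of the last layer at test
# time, because the network classifies a whole block of voxels per sample
# rather than a single one. For instance, a 27x27x27 input patch with a
# 17x17x17 receptive field yields an 11x11x11 block of simultaneously
# classified voxels (27 - 17 + 1 = 11), so consecutive samples are placed
# 11 voxels apart.
#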
263 | 
264 |     # Get info from the network model
265 |     networkName = myLiviaSemiDenseNet3D.networkName
266 |     folderName = myLiviaSemiDenseNet3D.folderName
267 |     n_classes = myLiviaSemiDenseNet3D.n_classes
268 |     sampleSize_Test = myLiviaSemiDenseNet3D.sampleSize_Test
269 |     receptiveField = myLiviaSemiDenseNet3D.receptiveField
270 |     outputShape = myLiviaSemiDenseNet3D.lastLayer.outputShapeTest[2:]
271 |     batch_Size = myLiviaSemiDenseNet3D.batch_Size
272 |     padInputImagesBool = myParserConfigIni.applyPadding
273 |     imageType = myParserConfigIni.imageTypes
274 |     numberImagesToSegment = len(imageNames_Test)
275 | 
276 |     strideValues = myLiviaSemiDenseNet3D.lastLayer.outputShapeTest[2:]
277 | 
278 |     # Run over the images to segment
279 |     for i_d in xrange(numberImagesToSegment) :
280 |         print("********************** Segmenting subject: {} ....total: {}/{}...**********************".format(names_Test[i_d],str(i_d+1),str(numberImagesToSegment)))
281 | 
282 |         segmentVolume(myLiviaSemiDenseNet3D,
283 |                       i_d,
284 |                       imageNames_Test,  # Full path
285 |                       imageNames_Test_Bottom,
286 |                       names_Test,       # Only image name
287 |                       groundTruthNames_Test,
288 |                       roiNames_Test,
289 |                       imageType,
290 |                       padInputImagesBool,
291 |                       receptiveField,
292 |                       sampleSize_Test,
293 |                       strideValues,
294 |                       batch_Size,
295 |                       1 # Validation (0) or testing (1)
296 |                       )
297 | 
298 | 
299 |     print(" **************************************************************************************************** ")
300 | 
--------------------------------------------------------------------------------
/src/LiviaNet/startTesting.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/startTesting.pyc
--------------------------------------------------------------------------------
/src/LiviaNet/startTraining.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (c) 2016, Jose Dolz. All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without modification,
5 | are permitted provided that the following conditions are met:
6 | 
7 | 1. Redistributions of source code must retain the above copyright notice,
8 |    this list of conditions and the following disclaimer.
9 | 2. Redistributions in binary form must reproduce the above copyright notice,
10 |    this list of conditions and the following disclaimer in the documentation
11 |    and/or other materials provided with the distribution.
12 | 
13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
20 | OTHER DEALINGS IN THE SOFTWARE.
21 | 
22 | Jose Dolz. Dec, 2016.
23 | email: jose.dolz.upv@gmail.com
24 | LIVIA Department, ETS, Montreal. 
25 | """ 26 | 27 | import sys 28 | import time 29 | import numpy as np 30 | import random 31 | import math 32 | import os 33 | 34 | from Modules.General.Utils import getImagesSet 35 | from Modules.General.Utils import dump_model_to_gzip_file 36 | from Modules.General.Utils import load_model_from_gzip_file 37 | from Modules.General.Evaluation import computeDice 38 | from Modules.IO.sampling import getSamplesSubepoch 39 | from Modules.Parsers.parsersUtils import parserConfigIni 40 | from startTesting import segmentVolume 41 | import pdb 42 | 43 | 44 | def startTraining(networkModelName,configIniName): 45 | print " ************************************************ STARTING TRAINING **************************************************" 46 | print " ********************** Starting training model (Reading parameters) **********************" 47 | 48 | myParserConfigIni = parserConfigIni() 49 | 50 | myParserConfigIni.readConfigIniFile(configIniName,1) 51 | 52 | # Image type (0: Nifti, 1: Matlab) 53 | imageType = myParserConfigIni.imageTypesTrain 54 | 55 | print (" --- Do training in {} epochs with {} subEpochs each...".format(myParserConfigIni.numberOfEpochs, myParserConfigIni.numberOfSubEpochs)) 56 | print "-------- Reading Images names used in training/validation -------------" 57 | 58 | # -- Get list of images used for training -- # 59 | (imageNames_Train, names_Train) = getImagesSet(myParserConfigIni.imagesFolder,myParserConfigIni.indexesForTraining) # Images 60 | (imageNames_Train_Bottom, names_Train_Bottom) = getImagesSet(myParserConfigIni.imagesFolder_Bottom,myParserConfigIni.indexesForTraining) # Images 61 | (groundTruthNames_Train, gt_names_Train) = getImagesSet(myParserConfigIni.GroundTruthFolder,myParserConfigIni.indexesForTraining) # Ground truth 62 | (roiNames_Train, roi_names_Train) = getImagesSet(myParserConfigIni.ROIFolder,myParserConfigIni.indexesForTraining) # ROI 63 | 64 | # -- Get list of images used for validation -- # 65 | (imageNames_Val, names_Val) = getImagesSet(myParserConfigIni.imagesFolder,myParserConfigIni.indexesForValidation) # Images 66 | (imageNames_Val_Bottom, names_Val_Bottom) = getImagesSet(myParserConfigIni.imagesFolder,myParserConfigIni.indexesForValidation) # Images 67 | (groundTruthNames_Val, gt_names_Val) = getImagesSet(myParserConfigIni.GroundTruthFolder,myParserConfigIni.indexesForValidation) # Ground truth 68 | (roiNames_Val, roi_names_Val) = getImagesSet(myParserConfigIni.ROIFolder,myParserConfigIni.indexesForValidation) # ROI 69 | 70 | # Print names 71 | print " ================== Images for training ================" 72 | for i in range(0,len(names_Train)): 73 | if len(roi_names_Train) > 0: 74 | print(" Image({}): Top {} | Bottom: {} | GT: {} | ROI {} ".format(i,names_Train[i], names_Train_Bottom[i], gt_names_Train[i], roi_names_Train[i] )) 75 | else: 76 | print(" Image({}): Top {} | Bottom: {} | GT: {} ".format(i,names_Train[i], names_Train_Bottom[i], gt_names_Train[i] )) 77 | print " ================== Images for validation ================" 78 | for i in range(0,len(names_Val)): 79 | if len(roi_names_Train) > 0: 80 | print(" Image({}): Top {} | Bottom {} | GT: {} | ROI {} ".format(i,names_Val[i], names_Val_Bottom[i], gt_names_Val[i], roi_names_Val[i] )) 81 | else: 82 | print(" Image({}): Top {} | Bottom {} | GT: {} ".format(i,names_Val[i], names_Val_Bottom[i], gt_names_Val[i])) 83 | print " ===============================================================" 84 | 85 | # --------------- Load my LiviaNet3D object --------------- 86 | print (" ... 
Loading model from {}".format(networkModelName)) 87 | myLiviaNet3D = load_model_from_gzip_file(networkModelName) 88 | print " ... Network architecture successfully loaded...." 89 | 90 | # Asign parameters to loaded Net 91 | myLiviaNet3D.numberOfEpochs = myParserConfigIni.numberOfEpochs 92 | myLiviaNet3D.numberOfSubEpochs = myParserConfigIni.numberOfSubEpochs 93 | myLiviaNet3D.numberOfSamplesSupEpoch = myParserConfigIni.numberOfSamplesSupEpoch 94 | myLiviaNet3D.firstEpochChangeLR = myParserConfigIni.firstEpochChangeLR 95 | myLiviaNet3D.frequencyChangeLR = myParserConfigIni.frequencyChangeLR 96 | 97 | numberOfEpochs = myLiviaNet3D.numberOfEpochs 98 | numberOfSubEpochs = myLiviaNet3D.numberOfSubEpochs 99 | numberOfSamplesSupEpoch = myLiviaNet3D.numberOfSamplesSupEpoch 100 | 101 | # --------------- -------------- --------------- 102 | # --------------- Start TRAINING --------------- 103 | # --------------- -------------- --------------- 104 | # Get sample dimension values 105 | receptiveField = myLiviaNet3D.receptiveField 106 | sampleSize_Train = myLiviaNet3D.sampleSize_Train 107 | 108 | trainingCost = [] 109 | 110 | if myParserConfigIni.applyPadding == 1: 111 | applyPadding = True 112 | else: 113 | applyPadding = False 114 | 115 | learningRateModifiedEpoch = 0 116 | 117 | # Run over all the (remaining) epochs and subepochs 118 | for e_i in xrange(numberOfEpochs): 119 | # Recover last trained epoch 120 | numberOfEpochsTrained = myLiviaNet3D.numberOfEpochsTrained 121 | 122 | print(" ============== EPOCH: {}/{} =================".format(numberOfEpochsTrained+1,numberOfEpochs)) 123 | 124 | costsOfEpoch = [] 125 | 126 | for subE_i in xrange(numberOfSubEpochs): 127 | epoch_nr = subE_i+1 128 | print (" --- SubEPOCH: {}/{}".format(epoch_nr,myLiviaNet3D.numberOfSubEpochs)) 129 | 130 | # Get all the samples that will be used in this sub-epoch 131 | [imagesSamplesAll, 132 | imagesSamplesAll_Bottom, 133 | gt_samplesAll] = getSamplesSubepoch(numberOfSamplesSupEpoch, 134 | imageNames_Train, 135 | imageNames_Train, 136 | groundTruthNames_Train, 137 | roiNames_Train, 138 | imageType, 139 | sampleSize_Train, 140 | receptiveField, 141 | applyPadding 142 | ) 143 | 144 | # Variable that will contain weights for the cost function 145 | # --- In its current implementation, all the classes have the same weight 146 | weightsCostFunction = np.ones(myLiviaNet3D.n_classes, dtype='float32') 147 | 148 | numberBatches = len(imagesSamplesAll) / myLiviaNet3D.batch_Size 149 | 150 | myLiviaNet3D.trainingData_x.set_value(imagesSamplesAll, borrow=True) 151 | myLiviaNet3D.trainingData_x_Bottom.set_value(imagesSamplesAll_Bottom, borrow=True) 152 | myLiviaNet3D.trainingData_y.set_value(gt_samplesAll, borrow=True) 153 | 154 | costsOfBatches = [] 155 | evalResultsSubepoch = np.zeros([ myLiviaNet3D.n_classes, 4 ], dtype="int32") 156 | 157 | for b_i in xrange(numberBatches): 158 | # TODO: Make a line that adds a point at each trained batch (Or percentage being updated) 159 | costErrors = myLiviaNet3D.networkModel_Train(b_i, weightsCostFunction) 160 | meanBatchCostError = costErrors[0] 161 | costsOfBatches.append(meanBatchCostError) 162 | myLiviaNet3D.updateLayersMatricesBatchNorm() 163 | 164 | 165 | #======== Calculate and Report accuracy over subepoch 166 | meanCostOfSubepoch = sum(costsOfBatches) / float(numberBatches) 167 | print(" ---------- Cost of this subEpoch: {}".format(meanCostOfSubepoch)) 168 | 169 | # Release data 170 | myLiviaNet3D.trainingData_x.set_value(np.zeros([1,1,1,1,1], dtype="float32")) 171 | 
myLiviaNet3D.trainingData_y.set_value(np.zeros([1,1,1,1], dtype="float32")) 172 | 173 | # Get mean cost epoch 174 | costsOfEpoch.append(meanCostOfSubepoch) 175 | 176 | meanCostOfEpoch = sum(costsOfEpoch) / float(numberOfSubEpochs) 177 | 178 | # Include the epoch cost to the main training cost and update current mean 179 | trainingCost.append(meanCostOfEpoch) 180 | currentMeanCost = sum(trainingCost) / float(str( e_i + 1)) 181 | 182 | print(" ---------- Training on Epoch #" + str(e_i) + " finished ----------" ) 183 | print(" ---------- Cost of Epoch: {} / Mean training error {}".format(meanCostOfEpoch,currentMeanCost)) 184 | print(" -------------------------------------------------------- " ) 185 | 186 | # ------------- Update Learning Rate if required ----------------# 187 | if e_i >= myLiviaNet3D.firstEpochChangeLR : 188 | if learningRateModifiedEpoch == 0: 189 | currentLR = myLiviaNet3D.learning_rate.get_value() 190 | newLR = currentLR / 2.0 191 | myLiviaNet3D.learning_rate.set_value(newLR) 192 | print(" ... Learning rate has been changed from {} to {}".format(currentLR, newLR)) 193 | learningRateModifiedEpoch = e_i 194 | else: 195 | if (e_i) == (learningRateModifiedEpoch + myLiviaNet3D.frequencyChangeLR): 196 | currentLR = myLiviaNet3D.learning_rate.get_value() 197 | newLR = currentLR / 2.0 198 | myLiviaNet3D.learning_rate.set_value(newLR) 199 | print(" ... Learning rate has been changed from {} to {}".format(currentLR, newLR)) 200 | learningRateModifiedEpoch = e_i 201 | 202 | # ---------------------- Start validation ---------------------- # 203 | 204 | numberImagesToSegment = len(imageNames_Val) 205 | print(" ********************** Starting validation **********************") 206 | 207 | # Run over the images to segment 208 | for i_d in xrange(numberImagesToSegment) : 209 | print("------------- Segmenting subject: {} ....total: {}/{}... -------------".format(names_Val[i_d],str(i_d+1),str(numberImagesToSegment))) 210 | strideValues = myLiviaNet3D.lastLayer.outputShapeTest[2:] 211 | 212 | segmentVolume(myLiviaNet3D, 213 | i_d, 214 | imageNames_Val, # Full path 215 | imageNames_Val_Bottom, 216 | names_Val, # Only image name 217 | groundTruthNames_Val, 218 | roiNames_Val, 219 | imageType, 220 | applyPadding, 221 | receptiveField, 222 | sampleSize_Train, 223 | strideValues, 224 | myLiviaNet3D.batch_Size, 225 | 0 # Validation (0) or testing (1) 226 | ) 227 | 228 | 229 | print(" ********************** Validation DONE ********************** ") 230 | 231 | # ------ In this point the training is done at Epoch n ---------# 232 | # Increase number of epochs trained 233 | myLiviaNet3D.numberOfEpochsTrained += 1 234 | 235 | # --------------- Save the model --------------- 236 | BASE_DIR = os.getcwd() 237 | path_Temp = os.path.join(BASE_DIR,'outputFiles') 238 | netFolderName = os.path.join(path_Temp,myLiviaNet3D.folderName) 239 | netFolderName = os.path.join(netFolderName,'Networks') 240 | 241 | modelFileName = netFolderName + "/" + myLiviaNet3D.networkName + "_Epoch" + str (myLiviaNet3D.numberOfEpochsTrained) 242 | dump_model_to_gzip_file(myLiviaNet3D, modelFileName) 243 | 244 | strFinal = " Network model saved in " + netFolderName + " as " + myLiviaNet3D.networkName + "_Epoch" + str (myLiviaNet3D.numberOfEpochsTrained) 245 | print strFinal 246 | 247 | print("................ 
The whole Training is done.....")
248 |     print(" ************************************************************************************ ")
--------------------------------------------------------------------------------
/src/LiviaNet/startTraining.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/josedolz/SemiDenseNet/9a578b50b3db2644f985d1cd5ddbddeae54253d6/src/LiviaNet/startTraining.pyc
--------------------------------------------------------------------------------
/src/generateROI.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright (c) 2017, Jose Dolz. All rights reserved.
3 | Redistribution and use in source and binary forms, with or without modification,
4 | are permitted provided that the following conditions are met:
5 | 1. Redistributions of source code must retain the above copyright notice,
6 |    this list of conditions and the following disclaimer.
7 | 2. Redistributions in binary form must reproduce the above copyright notice,
8 |    this list of conditions and the following disclaimer in the documentation
9 |    and/or other materials provided with the distribution.
10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
11 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
12 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
13 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
14 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
15 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
16 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
17 | OTHER DEALINGS IN THE SOFTWARE.
18 | Jose Dolz. Dec, 2017.
19 | email: jose.dolz.upv@gmail.com
20 | LIVIA Department, ETS, Montreal.
21 | """
22 | 
23 | import sys
24 | import pdb
25 | from os.path import isfile, join
26 | import os
27 | import numpy as np
28 | import nibabel as nib
29 | import scipy.io as sio
30 | 
31 | from LiviaNet.Modules.IO.loadData import load_nii
32 | from LiviaNet.Modules.IO.loadData import load_matlab
33 | from LiviaNet.Modules.IO.saveData import saveImageAsNifti
34 | from LiviaNet.Modules.IO.saveData import saveImageAsMatlab
35 | 
36 | # NOTE: This has only been tested on NIfTI images; it should, however, also work for Matlab images. 
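#
# Example invocation (a sketch; the folder names are placeholders):
#
#     python generateROI.py ./data/mri ./data/roi 0
#
# This reads every volume found in ./data/mri, marks all non-zero voxels as
# belonging to the ROI, and writes a binary mask named ROI_<imageName> for
# each input into ./data/roi (0 = NIfTI format, 1 = Matlab format).
#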
37 | """ To print function usage """ 38 | def printUsage(error_type): 39 | if error_type == 1: 40 | print(" ** ERROR!!: Few parameters used.") 41 | else: 42 | print(" ** ERROR!!: ...") # TODO 43 | 44 | print(" ******** USAGE ******** ") 45 | print(" --- argv 1: Folder containing mr images") 46 | print(" --- argv 2: Folder to save corrected label images") 47 | print(" --- argv 3: Image type") 48 | print(" ------------- 0: nifti format") 49 | print(" ------------- 1: matlab format") 50 | 51 | def getImageImageList(imagesFolder): 52 | if os.path.exists(imagesFolder): 53 | imageNames = [f for f in os.listdir(imagesFolder) if isfile(join(imagesFolder, f))] 54 | 55 | imageNames.sort() 56 | 57 | return imageNames 58 | 59 | def checkAnotatedLabels(argv): 60 | # Number of input arguments 61 | # 1: Folder containing label images 62 | # 2: Folder to save corrected label images 63 | # 3: Image type 64 | # 0: nifti format 65 | # 1: matlab format 66 | # Do some sanity checks 67 | 68 | if len(argv) < 3: 69 | printUsage(1) 70 | sys.exit() 71 | 72 | imagesFolder = argv[0] 73 | imagesFolderdst = argv[1] 74 | imageType = int(argv[2]) 75 | 76 | imageNames = getImageImageList(imagesFolder) 77 | printFileNames = False 78 | 79 | for i_d in xrange(len(imageNames)) : 80 | if imageType == 0: 81 | imageFileName = imagesFolder + '/' + imageNames[i_d] 82 | [imageData,img_proxy] = load_nii(imageFileName, printFileNames) 83 | else: 84 | imageFileName = imagesFolder + '/' + imageNames[i_d] 85 | imageData = load_matlab(imageFileName, printFileNames) 86 | 87 | # Find voxels different to 0 88 | # NOTE: I assume voxels equal to 0 are outside my ROI (like in the skull stripped datasets) 89 | idx = np.where(imageData > 0 ) 90 | 91 | # Create ROI and assign those indexes to 1 92 | roiImage = np.zeros(imageData.shape,dtype=np.int8) 93 | roiImage[idx] = 1 94 | 95 | print(" ... Saving roi...") 96 | nameToSave = imagesFolderdst + '/ROI_' + imageNames[i_d] 97 | if imageType == 0: # nifti 98 | imageTypeToSave = np.dtype(np.int8) 99 | saveImageAsNifti(roiImage, 100 | nameToSave, 101 | imageFileName, 102 | imageTypeToSave) 103 | else: # Matlab 104 | # Cast to int8 for saving purposes 105 | saveImageAsMatlab(labelCorrectedImage.astype('int8'),nameToSave) 106 | 107 | 108 | print " ****************************************** PROCESSING LABELS DONE ******************************************" 109 | 110 | 111 | if __name__ == '__main__': 112 | checkAnotatedLabels(sys.argv[1:]) 113 | -------------------------------------------------------------------------------- /src/networkSegmentation.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 14 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 15 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 17 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 18 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 | OTHER DEALINGS IN THE SOFTWARE. 21 | 22 | Jose Dolz. Dec, 2016. 23 | email: jose.dolz.upv@gmail.com 24 | LIVIA Department, ETS, Montreal. 25 | """ 26 | 27 | import sys 28 | 29 | 30 | from LiviaNet.startTesting import startTesting 31 | 32 | def printUsage(error_type): 33 | if error_type == 1: 34 | print(" ** ERROR!!: Few parameters used.") 35 | else: 36 | print(" ** ERROR!!: Asked to start with an already created network but its name is not specified.") 37 | 38 | print(" ******** USAGE ******** ") 39 | print(" --- argv 1: Name of the configIni file.") 40 | print(" --- argv 2: Network model name") 41 | 42 | 43 | def networkSegmentation(argv): 44 | # Number of input arguments 45 | # 1: ConfigIniName (for segmentation) 46 | # 2: Network model name 47 | 48 | # Some sanity checks 49 | 50 | if len(argv) < 2: 51 | printUsage(1) 52 | sys.exit() 53 | 54 | configIniName = argv[0] 55 | networkModelName = argv[1] 56 | 57 | startTesting(networkModelName,configIniName) 58 | print(" ***************** SEGMENTATION DONE!!! ***************** ") 59 | 60 | 61 | 62 | if __name__ == '__main__': 63 | networkSegmentation(sys.argv[1:]) 64 | -------------------------------------------------------------------------------- /src/networkTraining.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | Redistribution and use in source and binary forms, with or without modification, 4 | are permitted provided that the following conditions are met: 5 | 1. Redistributions of source code must retain the above copyright notice, 6 | this list of conditions and the following disclaimer. 7 | 2. Redistributions in binary form must reproduce the above copyright notice, 8 | this list of conditions and the following disclaimer in the documentation 9 | and/or other materials provided with the distribution. 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 11 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 12 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 13 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 14 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 15 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 16 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 17 | OTHER DEALINGS IN THE SOFTWARE. 18 | Jose Dolz. Dec, 2016. 19 | email: jose.dolz.upv@gmail.com 20 | LIVIA Department, ETS, Montreal. 
21 | """ 22 | 23 | import sys 24 | import pdb 25 | import numpy 26 | 27 | from LiviaNet.generateNetwork import generateNetwork 28 | from LiviaNet.startTraining import startTraining 29 | 30 | """ To print function usage """ 31 | def printUsage(error_type): 32 | if error_type == 1: 33 | print(" ** ERROR!!: Few parameters used.") 34 | else: 35 | print(" ** ERROR!!: Asked to start with an already created network but its name is not specified.") 36 | 37 | print(" ******** USAGE ******** ") 38 | print(" --- argv 1: Name of the configIni file.") 39 | print(" --- argv 2: Type of training:") 40 | print(" ------------- 0: Create a new model and start training") 41 | print(" ------------- 1: Use an existing model to keep on training (Requires an additional input with model name)") 42 | print(" --- argv 3: (Optional, but required if arg 2 is equal to 1) Network model name") 43 | 44 | 45 | def networkTraining(argv): 46 | # Number of input arguments 47 | # 1: ConfigIniName 48 | # 2: TrainingType 49 | # 0: Create a new model and start training 50 | # 1: Use an existing model to keep on training (Requires an additional input with model name) 51 | # 3: (Optional, but required if arg 2 is equal to 1) Network model name 52 | 53 | # Do some sanity checks 54 | 55 | if len(argv) < 2: 56 | printUsage(1) 57 | sys.exit() 58 | 59 | configIniName = argv[0] 60 | trainingType = argv[1] 61 | 62 | if trainingType == '1' and len(argv) == 2: 63 | printUsage(2) 64 | sys.exit() 65 | 66 | if len(argv)>2: 67 | networkModelName = argv[2] 68 | 69 | # Creating a new model 70 | if trainingType == '0': 71 | print " ****************************************** CREATING NETWORK ******************************************" 72 | networkModelName = generateNetwork(configIniName) 73 | print " ****************************************** NETWORK CREATED ******************************************" 74 | 75 | # Training the network in model name 76 | print " ****************************************** STARTING NETWORK TRAINING ******************************************" 77 | startTraining(networkModelName,configIniName) 78 | print " ****************************************** DONE ******************************************" 79 | 80 | 81 | if __name__ == '__main__': 82 | networkTraining(sys.argv[1:]) 83 | -------------------------------------------------------------------------------- /src/processLabels.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) 2016, Jose Dolz .All rights reserved. 3 | Redistribution and use in source and binary forms, with or without modification, 4 | are permitted provided that the following conditions are met: 5 | 1. Redistributions of source code must retain the above copyright notice, 6 | this list of conditions and the following disclaimer. 7 | 2. Redistributions in binary form must reproduce the above copyright notice, 8 | this list of conditions and the following disclaimer in the documentation 9 | and/or other materials provided with the distribution. 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 11 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 12 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 13 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 14 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 15 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 16 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 17 | OTHER DEALINGS IN THE SOFTWARE. 
18 | Jose Dolz. Dec, 2016. 19 | email: jose.dolz.upv@gmail.com 20 | LIVIA Department, ETS, Montreal. 21 | """ 22 | 23 | import sys 24 | import pdb 25 | from os.path import isfile, join 26 | import os 27 | import numpy as np 28 | import nibabel as nib 29 | import scipy.io as sio 30 | 31 | from LiviaNet.Modules.IO.loadData import load_nii 32 | from LiviaNet.Modules.IO.loadData import load_matlab 33 | from LiviaNet.Modules.IO.saveData import saveImageAsNifti 34 | from LiviaNet.Modules.IO.saveData import saveImageAsMatlab 35 | 36 | # NOTE: Only has been tried on nifti images. However, it should not give any error for Matlab images. 37 | """ To print function usage """ 38 | def printUsage(error_type): 39 | if error_type == 1: 40 | print(" ** ERROR!!: Few parameters used.") 41 | else: 42 | print(" ** ERROR!!: ...") # TODO 43 | 44 | print(" ******** USAGE ******** ") 45 | print(" --- argv 1: Folder containing label images") 46 | print(" --- argv 2: Folder to save corrected label images") 47 | print(" --- argv 3: Number of expected classes (including background)") 48 | print(" --- argv 4: Image type") 49 | print(" ------------- 0: nifti format") 50 | print(" ------------- 1: matlab format") 51 | 52 | def getImageImageList(imagesFolder): 53 | if os.path.exists(imagesFolder): 54 | imageNames = [f for f in os.listdir(imagesFolder) if isfile(join(imagesFolder, f))] 55 | 56 | imageNames.sort() 57 | 58 | return imageNames 59 | 60 | def checkAnotatedLabels(argv): 61 | # Number of input arguments 62 | # 1: Folder containing label images 63 | # 2: Folder to save corrected label images 64 | # 3: Number of expected classes (including background) 65 | # 4: Image type 66 | # 0: nifti format 67 | # 1: matlab format 68 | # Do some sanity checks 69 | 70 | if len(argv) < 4: 71 | printUsage(1) 72 | sys.exit() 73 | 74 | imagesFolder = argv[0] 75 | imagesFolderdst = argv[1] 76 | numClasses = int(argv[2]) 77 | imageType = int(argv[3]) 78 | 79 | imageNames = getImageImageList(imagesFolder) 80 | printFileNames = False 81 | 82 | for i_d in xrange(0, len(imageNames)) : 83 | if imageType == 0: 84 | imageFileName = imagesFolder + '/' + imageNames[i_d] 85 | [imageData,img_proxy] = load_nii(imageFileName, printFileNames) 86 | else: 87 | imageFileName = imagesFolder + '/' + imageNames[i_d] 88 | imageData = load_matlab(imageFileName, printFileNames) 89 | 90 | labelsOrig = np.unique(imageData) 91 | 92 | if (len(labelsOrig) != numClasses): 93 | print(" WARNING!!!!! Number of expected clases ({}) is different to found labels ({}) ".format(numClasses,len(labelsOrig))) 94 | 95 | # Correct labels 96 | labelCorrectedImage = np.zeros(imageData.shape,dtype=np.int8) 97 | for i_l in xrange(0,len(labelsOrig)): 98 | idx = np.where(imageData == labelsOrig[i_l]) 99 | labelCorrectedImage[idx] = i_l 100 | 101 | 102 | print(" ... Saving labels...") 103 | nameToSave = imagesFolderdst + '/' + imageNames[i_d] 104 | if imageType == 0: # nifti 105 | imageTypeToSave = np.dtype(np.int8) 106 | saveImageAsNifti(labelCorrectedImage, 107 | nameToSave, 108 | imageFileName, 109 | imageTypeToSave) 110 | else: # Matlab 111 | # Cast to int8 for saving purposes 112 | saveImageAsMatlab(labelCorrectedImage.astype('int8'),nameToSave) 113 | 114 | 115 | print " ****************************************** PROCESSING LABELS DONE ******************************************" 116 | 117 | 118 | if __name__ == '__main__': 119 | checkAnotatedLabels(sys.argv[1:]) 120 | --------------------------------------------------------------------------------
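#
# End-to-end usage (a sketch; folder names, the class count and the model
# file name are placeholders -- the two ini files shipped in src/ are the
# intended starting points and must be edited to point at your data):
#
#     python processLabels.py ./labels ./labels_fixed 4 0   # remap GT labels to 0..K-1
#     python generateROI.py ./mri ./roi 0                   # build binary ROI masks
#     python networkTraining.py LiviaNet_Config.ini 0       # create a network and train it
#     python networkSegmentation.py LiviaNet_Segmentation.ini outputFiles/<folderName>/Networks/<networkName>_EpochN
#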