├── .gitignore
├── README.md
├── code
│   ├── README.md
│   ├── Segmentation_Models.py
│   ├── brain_pipeline.py
│   ├── brain_seg_workflow.png
│   ├── n4_bias_correction.py
│   ├── patch_library.py
│   └── write_to_s3.py
├── images
│   ├── MRI_workflow.png
│   ├── bad_example.png
│   ├── brain_grids.png
│   ├── color_code.png
│   ├── groundtruth.gif
│   ├── gt.gif
│   ├── improved_seg.png
│   ├── modalities.png
│   ├── model_architecture.png
│   ├── my_res.gif
│   ├── myresults.gif
│   ├── n4_correction.png
│   ├── patch_10.png
│   ├── patch_15.png
│   ├── patch_20.png
│   ├── patch_25.png
│   ├── patch_30.png
│   ├── patch_35.png
│   ├── patch_40.png
│   ├── patch_45.png
│   ├── patch_5.png
│   ├── patch_51.png
│   ├── patch_55.png
│   ├── patch_60.png
│   ├── patch_65.png
│   ├── patch_70.png
│   ├── patch_75.png
│   ├── patch_80.png
│   ├── patch_85.png
│   ├── patch_90.png
│   ├── patch_95.png
│   ├── patches.png
│   ├── results.png
│   ├── segment.png
│   ├── segmented_slice.png
│   ├── t29_143.gif
│   ├── t2_grid.png
│   └── tumor_diversity.png
├── license.md
└── models
    ├── 4th_epoch60k.hdf5
    ├── 4th_epoch60k.json
    ├── 6th_epoch60k.hdf5
    ├── 6th_epoch60k.json
    └── model_50_a.json
/.gitignore:
--------------------------------------------------------------------------------
1 | test_data/
2 | train_data/
3 | Training_PNG/
4 | Norm_PNG/
5 | Labels/
6 | Original_Data/
7 | n4_PNG/
8 | *.pem
9 | sample_pix/
10 | Papers/
11 | Slide_3/
12 | 2_85_res/
13 | mha_to_png.py
14 | *.pyc
15 | *.pptx
16 | *.jpg
17 | *.h5
18 | *.mha
19 | *.ipynb
20 | .ipynb_checkpoints
21 | basic_model_cheat.py
22 | patch_label.py
23 | no_github/
24 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Automatic Brain Tumor Segmentation
2 |
3 | Note: This project is not currently active. It is likely outdated and buggy. I unfortunately do not have the time to update it or keep up with pull requests.
4 |
5 | Brain tumor segmentation seeks to separate healthy tissue from tumorous regions such as the advancing tumor, necrotic core and surrounding edema. This is an essential step in diagnosis and treatment planning, both of which need to take place quickly in the case of a malignancy in order to maximize the likelihood of successful treatment. Due to the slow and tedious nature of manual segmentation, there is a high demand for computer algorithms that can do this quickly and accurately.
6 |
7 | ## Table of Contents
8 | 1. [Dataset](#dataset)
9 | 2. [MRI Background](#mri-background)
10 | * [MRI Pre-Processing](#mri-pre-processing)
11 | * [Pulse Sequences](#pulse-sequences)
12 | * [Segmentation](#segmentation)
13 | 3. [High Grade Gliomas](#high-grade-gliomas)
14 | 4. [Convolutional Neural Networks](#convolutional-neural-networks)
15 | * [Model Architecture](#model-architecture)
16 | * [Training the Model](#training-the-model)
17 | * [Patch Selection](#patch-selection)
18 | * [Results](#results)
19 | 5. [Future Directions](#future-directions)
20 |
21 | ## Dataset
22 |
23 | All MRI data was provided by the [2015 MICCAI BraTS Challenge](http://www.braintumorsegmentation.org), which consists of approximately 250 high-grade glioma cases and 50 low-grade cases. Each dataset contains four different MRI [pulse sequences](#pulse-sequences), each comprising 155 brain slices, for a total of 620 images per patient. Professional segmentation is provided as ground truth labels for each case. Figure 1 is an example of a scan with its ground truth segmentation. The segmentation labels are represented as follows:
24 |
25 |
26 | Figure 1: Ground truth segmentation overlay on a T2 weighted scan.
27 |
28 |
29 | ## MRI Background
30 |
31 | Magnetic Resonance Imaging (MRI) is the most common diagnostic tool for brain tumors, due primarily to its noninvasive nature and its ability to image diverse tissue types and physiological processes. MRI uses a magnetic gradient and radio frequency pulses to take repetitive axial slices of the brain and construct a 3-dimensional representation (Figure 2). Each brain scan contains 155 slices, with each pixel representing a 1 mm³ voxel.
32 |
33 |
34 |
35 | Figure 2: (Left) Basic MRI workflow. Slices are taken axially at 1mm increments, creating the 3-dimensional rendering (right). Note that this is only one of the four pulse sequences commonly used for tumor segmentation.
36 |
37 | ### MRI pre-processing ([code](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/brain_pipeline.py))
38 |
39 | One of the challenges in working with MRI data is dealing with the artifacts produced either by inhomogeneity in the magnetic field or by small movements made by the patient during scan time. Oftentimes a bias will be present across the resulting scans (Figure 3), which can affect the segmentation results, particularly in the setting of computer-based models.
40 |
41 |
42 | Figure 3: Brain scans before and after n4ITK bias correction. Notice the higher intensity at the bottom of the image on the right. This can be a source of false positives in a computer segmentation.
43 |
44 | I employed an [n4ITK bias correction](http://www.ncbi.nlm.nih.gov/pubmed/20378467) on all T1 and T1C images in the dataset ([code](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/n4_bias_correction.py)), which removed the intensity gradient on each scan. Additional image pre-processing requires standardizing the pixel intensities, since MRI intensities are expressed in arbitrary units and may differ significantly between machines and scan sessions.
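
A rough sketch of these two pre-processing steps is below. The bias correction mirrors code/n4_bias_correction.py (which wraps nipype's `N4BiasFieldCorrection` from ANTs), and the intensity standardization mirrors the per-slice normalization in code/brain_pipeline.py; the function names here are illustrative.

```python
from nipype.interfaces.ants import N4BiasFieldCorrection
import numpy as np

def n4_correct(in_file, out_file, n_dims=3, n_iters=[20, 20, 10, 5]):
    '''Apply n4ITK bias field correction to a T1/T1c volume (requires ANTs).'''
    n4 = N4BiasFieldCorrection(output_image=out_file)
    n4.inputs.dimension = n_dims
    n4.inputs.input_image = in_file
    n4.inputs.n_iterations = n_iters
    n4.run()

def standardize(slice_arr):
    '''Clip the top and bottom 0.5% of intensities, then scale to zero mean and unit variance.'''
    b, t = np.percentile(slice_arr, (0.5, 99.5))
    slice_arr = np.clip(slice_arr, b, t)
    if np.std(slice_arr) == 0:
        return slice_arr
    return (slice_arr - np.mean(slice_arr)) / np.std(slice_arr)
```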
45 |
46 | ### Pulse sequences
47 | There are multiple radio frequency pulse sequences that can be used to illuminate different types of tissue. For adequate segmentation, four unique sequences are typically acquired: Fluid Attenuated Inversion Recovery (FLAIR), T1, T1-contrasted, and T2 (Figure 4). Each of these pulse sequences exploits the distinct chemical and physiological characteristics of various tissue types, resulting in contrast between the individual classes. Notice the variability in intensities among the four images in Figure 4, all of which are images of the same brain taken with different pulse sequences.
48 |
49 |
50 | Figure 4: Flair (top left), T1, T1C and T2 (bottom right) pulse sequences.
51 |
52 | ### Segmentation
53 | Note that a single patient will produce upwards of 600 images from a single MRI session, given that each of the four sequences produces 155 slices (Figure 5). To get a satisfactory manual segmentation, a radiologist must spend several hours tediously determining which voxels belong to which class. In the setting of malignant brain tumors, an algorithmic alternative would let clinicians spend more time on the wellbeing of the patient, allowing for more immediate care and higher treatment throughput.
54 |
55 |
56 |
57 |
58 | Figure 5: (Top) Representative scans from each tumor imaging sequence. Approximately 600 images need to be analyzed per brain for a segmentation. (Bottom) The results of a complete tumor segmentation.
59 |
60 | Automatic tumor segmentation has the potential to decrease lag time between diagnostic tests and treatment by providing an efficient and standardized report of tumor location in a fraction of the time it would take a radiologist to do so.
61 |
62 | ## High Grade Gliomas
63 |
64 | * Glioblastoma cases each year (US)[5](#references): 12,000
65 | * Median survival: 14.6 months
66 | * Five-year survival rate: < 10%
67 |
68 | High-grade malignant brain tumors are generally associated with a short life expectancy and limited treatment options. The aggressive nature of this illness necessitates efficient diagnosis and treatment planning to improve quality of life and extend survival. This urgency reinforces the need for reliable and fast automatic segmentation methods in clinical settings. Unfortunately, algorithmic segmentation of these particular tumors has proven to be a very challenging task, due primarily to the fact that they tend to be structurally and spatially diverse (Figure 6).
69 |
70 |
71 | Figure 6: Three different examples of high grade gliomas; tumor segmentations are outlined on the bottom images. Notice the variation in size, shape and location in the brain, a quality of these tumors that makes them difficult to segment.
72 |
73 | ## Convolutional Neural Networks
74 |
75 | Convolutional Neural Networks (CNNs) are a powerful tool in the field of image recognition. They were inspired by the elucidation, in the late 1960s, of how the [mammalian visual cortex works](https://en.wikipedia.org/wiki/Receptive_field): networks of neurons, each sensitive to a given 'receptive field', tiled over the entire visual field[2](#references). This aspect of CNNs contributes to their high flexibility and spatial invariance, making them well suited for semantic segmentation of images in which the objects of interest vary widely in location, as is the case for brain tumors.
76 |
77 | ### Model Architecture ([code](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/Segmentation_Models.py))
78 |
79 | I use a four-layer Convolutional Neural Network (CNN) model that, besides [n4ITK](#mri-pre-processing) bias correction, requires minimal [pre-processing](https://github.com/naldeborgh7575/brain_segmentation/blob/master/brain_pipeline.py). The model can distinguish between and predict healthy tissue, actively enhancing tumor and non-advancing tumor regions (Figure 7). The locally invariant nature of CNNs allows them to abstract local features for classification without relying on large-scale spatial information, which is inconsistent given the variability of tumor location.
80 |
81 |
82 | Figure 7: Basic model architecture of my segmentation model. Input is four 33x33 patches from a randomly selected slice. Each imaging pulse sequence is input as a channel into the net, followed by four convolution/max-pooling layers (note: the last convolutional layer is not followed by max pooling).
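
For reference, a condensed sketch of this sequential architecture is below (the full, configurable implementation, including batch normalization and l1/l2 weight regularization, is in code/Segmentation_Models.py; filter counts and kernel sizes follow the defaults there, using the Keras 1.x API this project was written against).

```python
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dense, Dropout, Activation, Flatten

model = Sequential()
# four convolutional blocks over a 4-channel 33x33 patch
model.add(Convolution2D(64, 7, 7, border_mode='valid', input_shape=(4, 33, 33)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Dropout(0.5))
model.add(Convolution2D(128, 5, 5, activation='relu', border_mode='valid'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Dropout(0.5))
model.add(Convolution2D(128, 5, 5, activation='relu', border_mode='valid'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Dropout(0.5))
model.add(Convolution2D(128, 3, 3, activation='relu', border_mode='valid'))  # no pooling after the last conv layer
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(5))                  # one output per class
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd')
```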
83 |
84 |
85 | ### Training the Model
86 |
87 | I created the model using Keras and ran it on an Amazon AWS GPU-optimized EC2 instance. I tested several models, but elected to use the 4-layer sequential model shown in Figure 7 due to the two-week time constraint of the project, as it had the best initial results and the fastest run time.
88 |
89 | The model was trained on randomly selected 33x33 patches of MRI images to classify the center pixel. Each input has 4 channels, one for each imaging sequence, so the net can learn what relative pixel intensities are hallmarks of each given class. The model is trained on approximately 50,000 patches for six epochs. The model generally begins to overfit after six epochs, and the validation accuracy on balanced classes reaches approximately 55 percent. [Future directions](#future-directions) will include more training phases and updated methods for patch selection.
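
In code, the training step boils down to the following (this mirrors the `__main__` block of code/Segmentation_Models.py; the `train_data/**` glob and the saved model name are placeholders, and `n_epoch=6` reflects the six epochs described above rather than the class default).

```python
from glob import glob
from patch_library import PatchLibrary
from Segmentation_Models import SegmentationModel

# build ~50,000 class-balanced 33x33 training patches from the saved slice PNGs
train_data = glob('train_data/**')
X, y = PatchLibrary((33, 33), train_data, 50000).make_training_patches()

# compile the default single-path model, fit it, and save the architecture + weights
model = SegmentationModel(n_epoch=6)
model.fit_model(X, y)
model.save_model('models/example')
```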
90 |
91 | ### Patch Selection ([code](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/patch_library.py))
92 | The purpose of training the model on patches (Figure 8) is to exploit the fact that the class of any given voxel is highly dependent on the classes of its surrounding voxels. Patches give the net access to information about the pixel's local environment, which influences the final prediction for the patch's center pixel.
93 |
94 |
95 | Figure 8: Examples of 33 x 33 pixel patches used as input for the neural network. These particular patches were acquired with a T1C pulse sequence, but the actual input includes all pulse sequences.
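
As a sketch of how a single patch is cut around a chosen center pixel (this follows `find_patches` in code/patch_library.py; the slice layout of four stacked 240x240 modalities plus a ground-truth strip is the one produced by brain_pipeline.py, and patches that fall off the edge of the slice come back with the wrong shape and are rejected in the real code):

```python
import random
import numpy as np
from skimage import io

def random_patch(im_path, label, class_num, h=33, w=33):
    '''Pick a random pixel of class `class_num` and return the (4, h, w) patch centered on it.'''
    # slice PNGs are saved as a (5*240, 240) strip: flair, t1, t1c, t2, ground truth
    img = io.imread(im_path).astype('float').reshape(5, 240, 240)[:-1]
    p = random.choice(np.argwhere(label == class_num))   # assumes the slice contains class_num
    rows = slice(p[0] - h // 2, p[0] + (h + 1) // 2)
    cols = slice(p[1] - w // 2, p[1] + (w + 1) // 2)
    return np.array([mode[rows, cols] for mode in img])  # one channel per pulse sequence
```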
96 |
97 | Another important factor in patch selection is to make sure the classes of the input data are balanced. Otherwise, the net will be overwhelmed with background images and fail to classify any of the minority classes. Approximately 98% of the data belongs to the background class (healthy tissue or the black surrounding area), with the remaining 2% of pixels divided among the four tumor classes.
98 |
99 | I tried out several different methods for sampling patches, which had a large impact on the results. I began by randomly selecting patches of a given class from the data and repeating this for all five classes. However, with this sampling method approximately half of the background patches were just the zero-intensity area with no brain, so the model classified most patches containing brain tissue as tumor, and only the black areas as background (Figure 9).
100 |
101 |
102 |
103 | Figure 9: (Left) Results of segmentation without filtering out patches dominated by zero-intensity pixels. Notice that even healthy tissue is classified as tumor. (Right) Results of segmentation after restricting the number of zero-intensity pixels allowed in a given patch. The tumor prediction is now restricted mostly to the actual area of the lesion.
104 |
105 | I then restricted the selection process to exclude patches in which more than 25% of the pixels were of zero-intensity. This greatly improved the results, one of which can be seen on the right in Figure 9.
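
The filter itself is a short check on each candidate patch; in `find_patches` a patch is rejected when more than a quarter of its values are zero (h*w out of the 4*h*w pixels across the four channels). A sketch:

```python
import numpy as np

def keep_patch(patch, h=33, w=33):
    '''Reject malformed patches and patches where more than 25% of the pixel values are zero.'''
    if patch.shape != (4, h, w):
        return False
    zero_fraction = np.count_nonzero(patch == 0) / float(patch.size)  # patch.size == 4*h*w
    return zero_fraction <= 0.25
```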
106 |
107 | Unfortunately the model still struggles with class boundary segmentation. The boundaries in my results are quite smooth, while the ground truth tends to have more detail. This is a downside to working with patch-based prediction, since the predicted identity of boundary pixels is influenced by neighbors of a different class. A method I have experimented with to fix this involves selecting a subset of the training data from the highest-entropy patches in the ground truth segmentation. High-entropy patches have more classes represented in them, so the model has more boundary examples to learn from. I am still fine-tuning this process and will update the results accordingly.
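
A sketch of the entropy-based selection is below (this follows `patches_by_entropy` in code/patch_library.py; the disk radius and the 90th-percentile cutoff are the values used there).

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def high_entropy_centers(label, h=33):
    '''Return pixel coordinates whose local label entropy is in the top 10% for the slice.'''
    l_ent = entropy(label, disk(h))      # local entropy of the ground-truth label image
    top_ent = np.percentile(l_ent, 90)   # 90th-percentile cutoff
    if top_ent == 0:                     # slice is essentially a single class; skip it
        return np.empty((0, 2), dtype=int)
    return np.argwhere(l_ent >= top_ent)
```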
108 |
109 |
110 | ### Results
111 |
112 | Below is a summary of how well the current model is predicting. As more advances are made this section will be updated. A representative example of a tumor segmentation on test data is displayed in Figure 10. The model can identify each of the four classes with a good amount of accuracy, with the exception of class boundaries, which are smoother in my prediction than the ground truth.
113 |
114 |
115 |
116 |
117 | Figure 10: Results of CNN model segmentation on a single slice (top) with respect to the ground truth, and a 3D representation of the segmentation (bottom).
118 |
119 | Notice that, towards the top of the 3-dimensional rendering of the network's results, some of the cerebrospinal fluid (CSF) is incorrectly classified as tumor. This is unsurprising, considering that CSF has features similar to parts of the tumor in some pulse sequences. There are several potential solutions for this:
120 |
121 | 1. Pre-process the images by masking CSF (much easier than tumors to extract)
122 | 2. Train the model on more CSF-containing patches so it can learn to distinguish between CSF and tumor
123 | 3. Add more nodes to the model, which may cause it to learn more features from the current patches.
124 |
125 |
126 |
127 | ## Future Directions
128 |
129 | While my model yields promising results, an application such as this leaves no room for errors or false positives. In a surgical setting it is essential to remove as much of the tumor mass as possible without damaging any surrounding healthy tissue. There are countless ways to improve this model, ranging from the overall architecture to adjusting how the data is sampled.
130 |
131 | When I began this project I based my architecture on one proposed by [Havaei et al](http://arxiv.org/pdf/1505.03540.pdf), which uses a cascaded, two-pathway design that looks at both local and global features of patches. I elected to use the simpler model to meet the two-week deadline for this project, but in the future I will work on tuning similar cascaded models to improve upon the current accuracy.
132 |
133 | ## References
134 |
135 | 1. Havaei, M. et al., Brain Tumor Segmentation with Deep Neural Networks. arXiv preprint arXiv:1505.03540, 2015.
136 | 2. Hubel, D. and Wiesel, T., Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 1968.
137 | 3. Kistler et al., The virtual skeleton database: an open access repository for biomedical research and collaboration. JMIR, 2013.
138 | 4. Menze et al., The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging, 2015.
139 | 5. Stupp et al., Effects of radiotherapy with concomitant and adjuvant temozolomide versus radiotherapy alone on survival in glioblastoma in a randomised phase III study: 5-year analysis of the EORTC-NCIC trial. The Lancet Oncology, 2009.
140 | 6. Tustison, N.J. et al., N4ITK: improved N3 bias correction. IEEE Trans. Med. Imaging, 2010.
141 |
--------------------------------------------------------------------------------
/code/README.md:
--------------------------------------------------------------------------------
1 | # Brain Segmentation Workflow
2 | Below is the basic workflow for the code. [BrainPipeline](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/brain_pipeline.py), [PatchLibrary](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/patch_library.py) and [SegmentationModel](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/Segmentation_Models.py) are classes that handle each of these processes. [n4_bias_correction](https://github.com/naldeborgh7575/brain_segmentation/blob/master/code/n4_bias_correction.py) is a helper script, called by BrainPipeline, that handles MRI bias field correction.
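
A minimal end-to-end sketch of that workflow is below (the paths, the patch count and the saved model name are placeholders; the calls mirror the `__main__` blocks of brain_pipeline.py and Segmentation_Models.py).

```python
from glob import glob
from brain_pipeline import save_patient_slices
from patch_library import PatchLibrary
from Segmentation_Models import SegmentationModel

# 1. convert each patient's .mha scans into normalized, bias-corrected PNG strips (written to n4_PNG/)
patients = glob('Original_Data/Training/HGG/**')
save_patient_slices(patients, 'n4')

# 2. build a class-balanced library of 33x33 patches from the saved slices
train_data = glob('train_data/**')
X, y = PatchLibrary((33, 33), train_data, 50000).make_training_patches()

# 3. train the CNN and save its architecture and weights
model = SegmentationModel(n_epoch=6)
model.fit_model(X, y)
model.save_model('models/example')
```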
3 |
4 |
5 |
--------------------------------------------------------------------------------
/code/Segmentation_Models.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import random
3 | import json
4 | import h5py
5 | from patch_library import PatchLibrary
6 | from glob import glob
7 | import matplotlib.pyplot as plt
8 | from skimage import io, color, img_as_float
9 | from skimage.exposure import adjust_gamma
10 | from skimage.segmentation import mark_boundaries
11 | from sklearn.feature_extraction.image import extract_patches_2d
12 | from sklearn.metrics import classification_report
13 | from keras.models import Sequential, Graph, model_from_json
14 | from keras.layers.convolutional import Convolution2D, MaxPooling2D
15 | from keras.layers.core import Dense, Dropout, Activation, Flatten, Merge, Reshape, MaxoutDense
16 | from keras.layers.normalization import BatchNormalization
17 | from keras.regularizers import l1l2
18 | from keras.optimizers import SGD
19 | from keras.constraints import maxnorm
20 | from keras.callbacks import EarlyStopping, ModelCheckpoint
21 | from keras.utils import np_utils
22 |
23 | class SegmentationModel(object):
24 | def __init__(self, n_epoch=10, n_chan=4, batch_size=128, loaded_model=False, architecture='single', w_reg=0.01, n_filters=[64,128,128,128], k_dims = [7,5,5,3], activation = 'relu'):
25 | '''
26 | A class for compiling/loading, fitting and saving various models, viewing segmented images and analyzing results
27 | INPUT (1) int 'n_epoch': number of epochs to train on. defaults to 10
28 | (2) int 'n_chan': number of channels being assessed. defaults to 4
29 | (3) int 'batch_size': number of images to train on for each batch. defaults to 128
30 | (4) bool 'loaded_model': True if loading a pre-existing model. defaults to False
31 | (5) str 'architecture': type of model to use, options = single, dual, or two_path. defaults to single (only currently optimized version)
32 | (6) float 'w_reg': value for l1 and l2 regularization. defaults to 0.01
33 | (7) list 'n_filters': number of filters for each convolutional layer (4 total)
34 | (8) list 'k_dims': dimension of kernel at each layer (will be a k_dim[n] x k_dim[n] square). Four total.
35 | (9) string 'activation': activation to use at each convolutional layer. defaults to relu.
36 | '''
37 | self.n_epoch = n_epoch
38 | self.n_chan = n_chan
39 | self.batch_size = batch_size
40 | self.architecture = architecture
41 | self.loaded_model = loaded_model
42 | self.w_reg = w_reg
43 | self.n_filters = n_filters
44 | self.k_dims = k_dims
45 | self.activation = activation
46 | if not self.loaded_model:
47 | if self.architecture == 'two_path':
48 | self.model_comp = self.comp_two_path()
49 | elif self.architecture == 'dual':
50 | self.model_comp = self.comp_double()
51 | else:
52 | self.model_comp = self.compile_model()
53 | else:
54 | model = str(raw_input('Which model should I load? '))
55 | self.model_comp = self.load_model_weights(model)
56 |
57 | def compile_model(self):
58 | '''
59 | compiles standard single model with 4 convolutional/max-pooling layers.
60 | '''
61 | print 'Compiling single model...'
62 | single = Sequential()
63 |
64 | single.add(Convolution2D(self.n_filters[0], self.k_dims[0], self.k_dims[0], border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg), input_shape=(self.n_chan,33,33)))
65 | single.add(Activation(self.activation))
66 | single.add(BatchNormalization(mode=0, axis=1))
67 | single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
68 | single.add(Dropout(0.5))
69 | single.add(Convolution2D(self.n_filters[1], self.k_dims[1], self.k_dims[1], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
70 | single.add(BatchNormalization(mode=0, axis=1))
71 | single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
72 | single.add(Dropout(0.5))
73 | single.add(Convolution2D(self.n_filters[2], self.k_dims[2], self.k_dims[2], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
74 | single.add(BatchNormalization(mode=0, axis=1))
75 | single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
76 | single.add(Dropout(0.5))
77 | single.add(Convolution2D(self.n_filters[3], self.k_dims[3], self.k_dims[3], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
78 | single.add(Dropout(0.25))
79 |
80 | single.add(Flatten())
81 | single.add(Dense(5))
82 | single.add(Activation('softmax'))
83 |
84 | sgd = SGD(lr=0.001, decay=0.01, momentum=0.9)
85 | single.compile(loss='categorical_crossentropy', optimizer=sgd) # use the SGD optimizer defined above
86 | print 'Done.'
87 | return single
88 |
89 | def comp_two_path(self):
90 | '''
91 | compiles two-path model, takes in a 4x33x33 patch and assesses global and local paths, then merges the results.
92 | '''
93 | print 'Compiling two-path model...'
94 | model = Graph()
95 | model.add_input(name='input', input_shape=(self.n_chan, 33, 33))
96 |
97 | # local pathway, first convolution/pooling
98 | model.add_node(Convolution2D(64, 7, 7, border_mode='valid', activation='relu', W_regularizer=l1l2(l1=0.01, l2=0.01)), name='local_c1', input= 'input')
99 | model.add_node(MaxPooling2D(pool_size=(4,4), strides=(1,1), border_mode='valid'), name='local_p1', input='local_c1')
100 |
101 | # local pathway, second convolution/pooling
102 | model.add_node(Dropout(0.5), name='drop_lp1', input='local_p1')
103 | model.add_node(Convolution2D(64, 3, 3, border_mode='valid', activation='relu', W_regularizer=l1l2(l1=0.01, l2=0.01)), name='local_c2', input='drop_lp1')
104 | model.add_node(MaxPooling2D(pool_size=(2,2), strides=(1,1), border_mode='valid'), name='local_p2', input='local_c2')
105 |
106 | # global pathway
107 | model.add_node(Convolution2D(160, 13, 13, border_mode='valid', activation='relu', W_regularizer=l1l2(l1=0.01, l2=0.01)), name='global', input='input')
108 |
109 | # merge local and global pathways
110 | model.add_node(Dropout(0.5), name='drop_lp2', input='local_p2')
111 | model.add_node(Dropout(0.5), name='drop_g', input='global')
112 | model.add_node(Convolution2D(5, 21, 21, border_mode='valid', activation='relu', W_regularizer=l1l2(l1=0.01, l2=0.01)), name='merge', inputs=['drop_lp2', 'drop_g'], merge_mode='concat', concat_axis=1)
113 |
114 | # Flatten output of 5x1x1 to 1x5, perform softmax
115 | model.add_node(Flatten(), name='flatten', input='merge')
116 | model.add_node(Dense(5, activation='softmax'), name='dense_output', input='flatten')
117 | model.add_output(name='output', input='dense_output')
118 |
119 | sgd = SGD(lr=0.005, decay=0.1, momentum=0.9)
120 | model.compile(sgd, loss={'output':'categorical_crossentropy'}) # use the SGD optimizer defined above
121 | print 'Done.'
122 | return model
123 |
124 | def comp_double(self):
125 | '''
126 | double model. Similar to the two-path model, except it takes in a 4x33x33 patch and its center 4x5x5 patch. Merges paths at the flatten layer.
127 | '''
128 | print 'Compiling double model...'
129 | single = Sequential()
130 | single.add(Convolution2D(64, 7, 7, border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01), input_shape=(4,33,33)))
131 | single.add(Activation('relu'))
132 | single.add(BatchNormalization(mode=0, axis=1))
133 | single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
134 | single.add(Dropout(0.5))
135 | single.add(Convolution2D(nb_filter=128, nb_row=5, nb_col=5, activation='relu', border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01)))
136 | single.add(BatchNormalization(mode=0, axis=1))
137 | single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
138 | single.add(Dropout(0.5))
139 | single.add(Convolution2D(nb_filter=256, nb_row=5, nb_col=5, activation='relu', border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01)))
140 | single.add(BatchNormalization(mode=0, axis=1))
141 | single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
142 | single.add(Dropout(0.5))
143 | single.add(Convolution2D(nb_filter=128, nb_row=3, nb_col=3, activation='relu', border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01)))
144 | single.add(Dropout(0.25))
145 | single.add(Flatten())
146 |
147 | # add small patch to train on
148 | five = Sequential()
149 | five.add(Reshape((100,1), input_shape = (4,5,5)))
150 | five.add(Flatten())
151 | five.add(MaxoutDense(128, nb_feature=5))
152 | five.add(Dropout(0.5))
153 |
154 | model = Sequential()
155 | # merge both paths
156 | model.add(Merge([five, single], mode='concat', concat_axis=1))
157 | model.add(Dense(5))
158 | model.add(Activation('softmax'))
159 |
160 | sgd = SGD(lr=0.001, decay=0.01, momentum=0.9)
161 | model.compile(loss='categorical_crossentropy', optimizer=sgd) # use the SGD optimizer defined above
162 | print 'Done.'
163 | return model
164 |
165 | def load_model_weights(self, model_name):
166 | '''
167 | INPUT (1) string 'model_name': filepath to model and weights, not including extension
168 | OUTPUT: Model with loaded weights. can fit on model using loaded_model=True in fit_model method
169 | '''
170 | print 'Loading model {}'.format(model_name)
171 | model = '{}.json'.format(model_name)
172 | weights = '{}.hdf5'.format(model_name)
173 | with open(model) as f:
174 | m = f.next()
175 | model_comp = model_from_json(json.loads(m))
176 | model_comp.load_weights(weights)
177 | print 'Done.'
178 | return model_comp
179 |
180 | def fit_model(self, X_train, y_train, X5_train = None, save=True):
181 | '''
182 | INPUT (1) numpy array 'X_train': list of patches to train on in form (n_sample, n_channel, h, w)
183 | (2) numpy vector 'y_train': list of labels corresponding to X_train patches in form (n_sample,)
184 | (3) numpy array 'X5_train': center 5x5 patch in corresponding X_train patch. if None, uses single-path architecture
185 | OUTPUT (1) Fits specified model
186 | '''
187 | Y_train = np_utils.to_categorical(y_train, 5)
188 |
189 | shuffle = zip(X_train, Y_train)
190 | np.random.shuffle(shuffle)
191 |
192 | X_train = np.array([shuffle[i][0] for i in xrange(len(shuffle))])
193 | Y_train = np.array([shuffle[i][1] for i in xrange(len(shuffle))])
194 | es = EarlyStopping(monitor='val_loss', patience=2, verbose=1, mode='auto')
195 |
196 | # Save model after each epoch to check/bm_epoch#-val_loss
197 | checkpointer = ModelCheckpoint(filepath="./check/bm_{epoch:02d}-{val_loss:.2f}.hdf5", verbose=1)
198 |
199 | if self.architecture == 'dual':
200 | self.model_comp.fit([X5_train, X_train], Y_train, batch_size=self.batch_size, nb_epoch=self.n_epoch, validation_split=0.1, show_accuracy=True, verbose=1, callbacks=[checkpointer])
201 | elif self.architecture == 'two_path':
202 | data = {'input': X_train, 'output': Y_train}
203 | self.model_comp.fit(data, batch_size=self.batch_size, nb_epoch=self.n_epoch, validation_split=0.1, show_accuracy=True, verbose=1, callbacks=[checkpointer])
204 | else:
205 | self.model_comp.fit(X_train, Y_train, batch_size=self.batch_size, nb_epoch=self.n_epoch, validation_split=0.1, show_accuracy=True, verbose=1, callbacks=[checkpointer])
206 |
207 | def save_model(self, model_name):
208 | '''
209 | INPUT string 'model_name': name to save model and weights under, including filepath but not extension
210 | Saves current model as json and weights as hdf5 file
211 | '''
212 | model = '{}.json'.format(model_name)
213 | weights = '{}.hdf5'.format(model_name)
214 | json_string = self.model_comp.to_json()
215 | self.model_comp.save_weights(weights)
216 | with open(model, 'w') as f:
217 | json.dump(json_string, f)
218 |
219 | def class_report(self, X_test, y_test):
220 | '''
221 | returns sklearn's classification report (precision, recall, f1-score)
222 | INPUT (1) list 'X_test': test data of 4x33x33 patches
223 | (2) list 'y_test': labels for X_test
224 | OUTPUT (1) confusion matrix of precision, recall and f1 score
225 | '''
226 | y_pred = self.model_comp.predict_classes(X_test)
227 | print classification_report(y_pred, y_test)
228 |
229 | def predict_image(self, test_img, show=False):
230 | '''
231 | predicts classes of input image
232 | INPUT (1) str 'test_img': filepath to image to predict on
233 | (2) bool 'show': True to show the results of prediction, False to return prediction
234 | OUTPUT (1) if show == False: array of predicted pixel classes for the center 208 x 208 pixels
235 | (2) if show == True: displays segmentation results
236 | '''
237 | imgs = io.imread(test_img).astype('float').reshape(5,240,240)
238 | plist = []
239 |
240 | # create patches from an entire slice
241 | for img in imgs[:-1]:
242 | if np.max(img) != 0:
243 | img /= np.max(img)
244 | p = extract_patches_2d(img, (33,33))
245 | plist.append(p)
246 | patches = np.array(zip(np.array(plist[0]), np.array(plist[1]), np.array(plist[2]), np.array(plist[3])))
247 |
248 | # predict classes of each pixel based on model
249 | full_pred = self.model_comp.predict_classes(patches)
250 | fp1 = full_pred.reshape(208,208)
251 | if show:
252 | io.imshow(fp1)
253 | plt.show()
254 | else:
255 | return fp1
256 |
257 | def show_segmented_image(self, test_img, modality='t1c', show = False):
258 | '''
259 | Creates an image of original brain with segmentation overlay
260 | INPUT (1) str 'test_img': filepath to test image for segmentation, including file extension
261 | (2) str 'modality': imaging modality to use as background. defaults to t1c. options: (flair, t1, t1c, t2)
262 | (3) bool 'show': If true, shows output image. defaults to False.
263 | OUTPUT (1) if show is True, shows image of segmentation results
264 | (2) if show is false, returns segmented image.
265 | '''
266 | modes = {'flair':0, 't1':1, 't1c':2, 't2':3}
267 |
268 | segmentation = self.predict_image(test_img, show=False)
269 | img_mask = np.pad(segmentation, (16,16), mode='edge')
270 | ones = np.argwhere(img_mask == 1)
271 | twos = np.argwhere(img_mask == 2)
272 | threes = np.argwhere(img_mask == 3)
273 | fours = np.argwhere(img_mask == 4)
274 |
275 | test_im = io.imread(test_img)
276 | test_back = test_im.reshape(5,240,240)[modes[modality]] # use the requested modality as the background
277 | # overlay = mark_boundaries(test_back, img_mask)
278 | gray_img = img_as_float(test_back)
279 |
280 | # adjust gamma of image
281 | image = adjust_gamma(color.gray2rgb(gray_img), 0.65)
282 | sliced_image = image.copy()
283 | red_multiplier = [1, 0.2, 0.2]
284 | yellow_multiplier = [1,1,0.25]
285 | green_multiplier = [0.35,0.75,0.25]
286 | blue_multiplier = [0,0.25,0.9]
287 |
288 | # change colors of segmented classes
289 | for i in xrange(len(ones)):
290 | sliced_image[ones[i][0]][ones[i][1]] = red_multiplier
291 | for i in xrange(len(twos)):
292 | sliced_image[twos[i][0]][twos[i][1]] = green_multiplier
293 | for i in xrange(len(threes)):
294 | sliced_image[threes[i][0]][threes[i][1]] = blue_multiplier
295 | for i in xrange(len(fours)):
296 | sliced_image[fours[i][0]][fours[i][1]] = yellow_multiplier
297 |
298 | if show:
299 | io.imshow(sliced_image)
300 | plt.show()
301 |
302 | else:
303 | return sliced_image
304 |
305 | def get_dice_coef(self, test_img, label):
306 | '''
307 | Calculate dice coefficient for total slice, tumor-associated slice, advancing tumor and core tumor
308 | INPUT (1) str 'test_img': filepath to slice to predict on
309 | (2) str 'label': filepath to ground truth label for test_img
310 | OUTPUT: Summary of dice scores for the following classes:
311 | - all classes
312 | - all classes excluding background (ground truth and segmentation)
313 | - all classes excluding background (only ground truth-based)
314 | - advancing tumor
315 | - core tumor (1,3 and 4)
316 | '''
317 | segmentation = self.predict_image(test_img)
318 | seg_full = np.pad(segmentation, (16,16), mode='edge')
319 | gt = io.imread(label).astype(int)
320 | # dice coef of total image
321 | total = (len(np.argwhere(seg_full == gt)) * 2.) / (2 * 240 * 240)
322 |
323 | def unique_rows(a):
324 | '''
325 | helper function to get unique rows from 2D numpy array
326 | '''
327 | a = np.ascontiguousarray(a)
328 | unique_a = np.unique(a.view([('', a.dtype)]*a.shape[1]))
329 | return unique_a.view(a.dtype).reshape((unique_a.shape[0], a.shape[1]))
330 |
331 | # dice coef of entire non-background image
332 | gt_tumor = np.argwhere(gt != 0)
333 | seg_tumor = np.argwhere(seg_full != 0)
334 | combo = np.append(gt_tumor, seg_tumor, axis = 0)
335 | core_edema = unique_rows(combo) # union of locations labeled as tumor-associated in the gt or the segmentation
336 | gt_c, seg_c = [], [] # predicted class of each
337 | for i in core_edema:
338 | gt_c.append(gt[i[0]][i[1]])
339 | seg_c.append(seg_full[i[0]][i[1]])
340 | tumor_assoc = len(np.argwhere(np.array(gt_c) == np.array(seg_c))) / float(len(core_edema))
341 | tumor_assoc_gt = len(np.argwhere(np.array(gt_c) == np.array(seg_c))) / float(len(gt_tumor))
342 |
343 | # dice coef advancing tumor
344 | adv_gt = np.argwhere(gt == 4)
345 | gt_a, seg_a = [], [] # gt and predicted class at each advancing-tumor voxel
346 | for i in adv_gt:
347 | gt_a.append(gt[i[0]][i[1]])
348 | seg_a.append(seg_full[i[0]][i[1]])
349 | gta = np.array(gt_a)
350 | sega = np.array(seg_a)
351 | adv = float(len(np.argwhere(gta == sega))) / len(adv_gt)
352 |
353 | # dice coef core tumor
354 | noadv_gt = np.argwhere(gt == 3)
355 | necrosis_gt = np.argwhere(gt == 1)
356 | live_tumor_gt = np.append(adv_gt, noadv_gt, axis = 0)
357 | core_gt = np.append(live_tumor_gt, necrosis_gt, axis = 0)
358 | gt_core, seg_core = [],[]
359 | for i in core_gt:
360 | gt_core.append(gt[i[0]][i[1]])
361 | seg_core.append(seg_full[i[0]][i[1]])
362 | gtcore, segcore = np.array(gt_core), np.array(seg_core)
363 | core = len(np.argwhere(gtcore == segcore)) / float(len(core_gt))
364 |
365 | print ' '
366 | print 'Region_______________________| Dice Coefficient'
367 | print 'Total Slice__________________| {0:.2f}'.format(total)
368 | print 'No Background gt_____________| {0:.2f}'.format(tumor_assoc_gt)
369 | print 'No Background both___________| {0:.2f}'.format(tumor_assoc)
370 | print 'Advancing Tumor______________| {0:.2f}'.format(adv)
371 | print 'Core Tumor___________________| {0:.2f}'.format(core)
372 |
373 | if __name__ == '__main__':
374 | train_data = glob('train_data/**')
375 | patches = PatchLibrary((33,33), train_data, 50000)
376 | X,y = patches.make_training_patches()
377 |
378 | model = SegmentationModel()
379 | model.fit_model(X, y)
380 | model.save_model('models/example')
381 |
382 | # tests = glob('test_data/2_*')
383 | # test_sort = sorted(tests, key= lambda x: int(x[12:-4]))
384 | # model = BasicModel(loaded_model=True)
385 | # segmented_images = []
386 | # for slice in test_sort[15:145]:
387 | # segmented_images.append(model.show_segmented_image(slice))
388 |
--------------------------------------------------------------------------------
/code/brain_pipeline.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import subprocess
3 | import random
4 | import progressbar
5 | from glob import glob
6 | from skimage import io
7 |
8 | np.random.seed(5) # for reproducibility
9 | progress = progressbar.ProgressBar(widgets=[progressbar.Bar('*', '[', ']'), progressbar.Percentage(), ' '])
10 |
11 | class BrainPipeline(object):
12 | '''
13 | A class for processing brain scans for one patient
14 | INPUT: (1) filepath 'path': path to directory of one patient. Contains following mha files:
15 | flair, t1, t1c, t2, ground truth (gt)
16 | (2) bool 'n4itk': True to use n4itk normed t1 scans (defaults to True)
17 | (3) bool 'n4itk_apply': True to apply and save n4itk filter to t1 and t1c scans for given patient. This will only work if the
18 | '''
19 | def __init__(self, path, n4itk = True, n4itk_apply = False):
20 | self.path = path
21 | self.n4itk = n4itk
22 | self.n4itk_apply = n4itk_apply
23 | self.modes = ['flair', 't1', 't1c', 't2', 'gt']
24 | # slices=[[flair x 155], [t1], [t1c], [t2], [gt]], 155 per modality
25 | self.slices_by_mode, n = self.read_scans()
26 | # [ [slice1 x 5], [slice2 x 5], ..., [slice155 x 5]]
27 | self.slices_by_slice = n
28 | self.normed_slices = self.norm_slices()
29 |
30 | def read_scans(self):
31 | '''
32 | goes into each modality in patient directory and loads individual scans.
33 | transforms scans of same slice into strip of 5 images
34 | '''
35 | print 'Loading scans...'
36 | slices_by_mode = np.zeros((5, 155, 240, 240))
37 | slices_by_slice = np.zeros((155, 5, 240, 240))
38 | flair = glob(self.path + '/*Flair*/*.mha')
39 | t2 = glob(self.path + '/*_T2*/*.mha')
40 | gt = glob(self.path + '/*more*/*.mha')
41 | t1s = glob(self.path + '/**/*T1*.mha')
42 | t1_n4 = glob(self.path + '/*T1*/*_n.mha')
43 | t1 = [scan for scan in t1s if scan not in t1_n4]
44 | scans = [flair[0], t1[0], t1[1], t2[0], gt[0]] # directories to each image (5 total)
45 | if self.n4itk_apply:
46 | print '-> Applying bias correction...'
47 | for t1_path in t1:
48 | self.n4itk_norm(t1_path) # normalize files
49 | scans = [flair[0], t1_n4[0], t1_n4[1], t2[0], gt[0]]
50 | elif self.n4itk:
51 | scans = [flair[0], t1_n4[0], t1_n4[1], t2[0], gt[0]]
52 | for scan_idx in xrange(5):
53 | # read each image directory, save to self.slices
54 | slices_by_mode[scan_idx] = io.imread(scans[scan_idx], plugin='simpleitk').astype(float)
55 | for mode_ix in xrange(slices_by_mode.shape[0]): # modes 1 thru 5
56 | for slice_ix in xrange(slices_by_mode.shape[1]): # slices 1 thru 155
57 | slices_by_slice[slice_ix][mode_ix] = slices_by_mode[mode_ix][slice_ix] # reshape by slice
58 | return slices_by_mode, slices_by_slice
59 |
60 | def norm_slices(self):
61 | '''
62 | normalizes each slice in self.slices_by_slice, excluding gt
63 | subtracts mean and div by std dev for each slice
64 | clips top and bottom one percent of pixel intensities
65 | if n4itk == True, will apply n4itk bias correction to T1 and T1c images
66 | '''
67 | print 'Normalizing slices...'
68 | normed_slices = np.zeros((155, 5, 240, 240))
69 | for slice_ix in xrange(155):
70 | normed_slices[slice_ix][-1] = self.slices_by_slice[slice_ix][-1]
71 | for mode_ix in xrange(4):
72 | normed_slices[slice_ix][mode_ix] = self._normalize(self.slices_by_slice[slice_ix][mode_ix])
73 | print 'Done.'
74 | return normed_slices
75 |
76 | def _normalize(self, slice):
77 | '''
78 | INPUT: (1) a single slice of any given modality (excluding gt)
79 | (2) index of modality assoc with slice (0=flair, 1=t1, 2=t1c, 3=t2)
80 | OUTPUT: normalized slice
81 | '''
82 | b, t = np.percentile(slice, (0.5,99.5))
83 | slice = np.clip(slice, b, t)
84 | if np.std(slice) == 0:
85 | return slice
86 | else:
87 | return (slice - np.mean(slice)) / np.std(slice)
88 |
89 | def save_patient(self, reg_norm_n4, patient_num):
90 | '''
91 | INPUT: (1) int 'patient_num': unique identifier for each patient
92 | (2) string 'reg_norm_n4': 'reg' for original images, 'norm' for normalized images, 'n4' for n4 bias-corrected images
93 | OUTPUT: saves png in Norm_PNG directory for normed, Training_PNG for reg
94 | '''
95 | print 'Saving scans for patient {}...'.format(patient_num)
96 | progress.currval = 0
97 | if reg_norm_n4 == 'norm': #saved normed slices
98 | for slice_ix in progress(xrange(155)): # reshape to strip
99 | strip = self.normed_slices[slice_ix].reshape(1200, 240)
100 | if np.max(strip) != 0: # set values < 1
101 | strip /= np.max(strip)
102 | if np.min(strip) <= -1: # set values > -1
103 | strip /= abs(np.min(strip))
104 | # save as patient_slice.png
105 | io.imsave('Norm_PNG/{}_{}.png'.format(patient_num, slice_ix), strip)
106 | elif reg_norm_n4 == 'reg':
107 | for slice_ix in progress(xrange(155)):
108 | strip = self.slices_by_slice[slice_ix].reshape(1200, 240)
109 | if np.max(strip) != 0:
110 | strip /= np.max(strip)
111 | io.imsave('Training_PNG/{}_{}.png'.format(patient_num, slice_ix), strip)
112 | else:
113 | for slice_ix in progress(xrange(155)): # reshape to strip
114 | strip = self.normed_slices[slice_ix].reshape(1200, 240)
115 | if np.max(strip) != 0: # set values < 1
116 | strip /= np.max(strip)
117 | if np.min(strip) <= -1: # set values > -1
118 | strip /= abs(np.min(strip))
119 | # save as patient_slice.png
120 | io.imsave('n4_PNG/{}_{}.png'.format(patient_num, slice_ix), strip)
121 |
122 | def n4itk_norm(self, path, n_dims=3, n_iters='[20,20,10,5]'):
123 | '''
124 | INPUT: (1) filepath 'path': path to mha T1 or T1c file
125 | (2) directory 'parent_dir': parent directory to mha file
126 | OUTPUT: writes n4itk normalized image to parent_dir under orig_filename_n.mha
127 | '''
128 | output_fn = path[:-4] + '_n.mha'
129 | # run n4_bias_correction.py path n_dim n_iters output_fn
130 | subprocess.call('python n4_bias_correction.py ' + path + ' ' + str(n_dims) + ' ' + n_iters + ' ' + output_fn, shell = True)
131 |
132 |
133 | def save_patient_slices(patients, type):
134 | '''
135 | INPUT (1) list 'patients': paths to any directories of patients to save. for example- glob("Training/HGG/**")
136 | (2) string 'type': options = reg (non-normalized), norm (normalized, but no bias correction), n4 (bias corrected and normalized)
137 | saves strips of patient slices to the appropriate directory (Training_PNG/, Norm_PNG/ or n4_PNG/) as patient-num_slice-num
138 | '''
139 | for patient_num, path in enumerate(patients):
140 | a = BrainPipeline(path)
141 | a.save_patient(type, patient_num)
142 |
143 | def s3_dump(directory, bucket):
144 | '''
145 | dump files from a given directory to an s3 bucket
146 | INPUT (1) string 'directory': directory containing files to save
147 | (2) string 'bucket': name of s3 bucket to dump files into
148 | '''
149 | subprocess.call('aws s3 cp' + ' ' + directory + ' ' + 's3://' + bucket + ' ' + '--recursive', shell=True)
150 |
151 | def save_labels(fns):
152 | '''
153 | INPUT list 'fns': filepaths to all labels
154 | '''
155 | progress.currval = 0
156 | for label_idx in progress(xrange(len(fns))):
157 | slices = io.imread(fns[label_idx], plugin = 'simpleitk')
158 | for slice_idx in xrange(len(slices)):
159 | io.imsave('Labels/{}_{}L.png'.format(label_idx, slice_idx), slices[slice_idx])
160 |
161 |
162 | if __name__ == '__main__':
163 | labels = glob('Original_Data/Training/HGG/**/*more*/**.mha')
164 | save_labels(labels)
165 | # patients = glob('Training/HGG/**')
166 | # save_patient_slices(patients, 'reg')
167 | # save_patient_slices(patients, 'norm')
168 | # save_patient_slices(patients, 'n4')
169 | # s3_dump('Graveyard/Training_PNG/', 'orig-training-png')
170 |
--------------------------------------------------------------------------------
/code/brain_seg_workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/code/brain_seg_workflow.png
--------------------------------------------------------------------------------
/code/n4_bias_correction.py:
--------------------------------------------------------------------------------
1 | from nipype.interfaces.ants import N4BiasFieldCorrection
2 | import sys
3 | import os
4 | import ast
5 |
6 | if len(sys.argv) < 2:
7 | print("INPUT from ipython: run n4_bias_correction input_image dimension n_iterations(optional, form:[n_1,n_2,n_3,n_4]) output_image(optional)")
8 | sys.exit(1)
9 |
10 | # if output_image is given
11 | if len(sys.argv) > 4:
12 | n4 = N4BiasFieldCorrection(output_image=sys.argv[4])
13 | else:
14 | n4 = N4BiasFieldCorrection()
15 |
16 | # dimension of input image, input image
17 | n4.inputs.dimension = int(sys.argv[2])
18 | n4.inputs.input_image = sys.argv[1]
19 |
20 | # if n_iterations arg is given
21 | if len(sys.argv) > 3:
22 | n4.inputs.n_iterations = ast.literal_eval(sys.argv[3])
23 |
24 | n4.run()
25 |
--------------------------------------------------------------------------------
/code/patch_library.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import random
3 | import os
4 | from glob import glob
5 | import matplotlib
6 | import matplotlib.pyplot as plt
7 | from skimage import io
8 | from skimage.filters.rank import entropy
9 | from skimage.morphology import disk
10 | import progressbar
11 | from sklearn.feature_extraction.image import extract_patches_2d
12 |
13 | progress = progressbar.ProgressBar(widgets=[progressbar.Bar('*', '[', ']'), progressbar.Percentage(), ' '])
14 | np.random.seed(5)
15 |
16 |
17 | class PatchLibrary(object):
18 | def __init__(self, patch_size, train_data, num_samples):
19 | '''
20 | class for creating patches and subpatches from training data to use as input for segmentation models.
21 | INPUT (1) tuple 'patch_size': size (in voxels) of patches to extract. Use (33,33) for sequential model
22 | (2) list 'train_data': list of filepaths to all training data saved as pngs. images should have shape (5*240,240)
23 | (3) int 'num_samples': the number of patches to collect from training data.
24 | '''
25 | self.patch_size = patch_size
26 | self.num_samples = num_samples
27 | self.train_data = train_data
28 | self.h = self.patch_size[0]
29 | self.w = self.patch_size[1]
30 |
31 | def find_patches(self, class_num, num_patches):
32 | '''
33 | Helper function for sampling patches with evenly distributed classes
34 | INPUT: (1) int 'class_num': class to sample from, choice of {0, 1, 2, 3, 4}.
35 | (2) int 'num_patches': number of patches of class 'class_num' to collect
36 | Patches are drawn from the slices in self.train_data; patch dimensions come from self.patch_size.
37 | OUTPUT: (1) num_patches patches of class 'class_num', randomly selected, with their labels.
38 | '''
39 | h,w = self.patch_size[0], self.patch_size[1]
40 | patches, labels = [], np.full(num_patches, class_num, 'float')
41 | print 'Finding patches of class {}...'.format(class_num)
42 |
43 | ct = 0
44 | while ct < num_patches:
45 | im_path = random.choice(self.train_data)
46 | fn = os.path.basename(im_path)
47 | label = io.imread('Labels/' + fn[:-4] + 'L.png')
48 |
49 | # resample if class_num not in selected slice
50 | # while len(np.argwhere(label == class_num)) < 10:
51 | # im_path = random.choice(self.train_data)
52 | # fn = os.path.basename(im_path)
53 | # label = io.imread('Labels/' + fn[:-4] + 'L.png')
54 | if len(np.argwhere(label == class_num)) < 10:
55 | continue
56 |
57 | # select centerpix (p) and patch (p_ix)
58 | img = io.imread(im_path).reshape(5, 240, 240)[:-1].astype('float')
59 | p = random.choice(np.argwhere(label == class_num))
60 | p_ix = (p[0]-(h/2), p[0]+((h+1)/2), p[1]-(w/2), p[1]+((w+1)/2))
61 | patch = np.array([i[p_ix[0]:p_ix[1], p_ix[2]:p_ix[3]] for i in img])
62 |
63 | # resample if patch is empty or too close to the edge
64 | # while patch.shape != (4, h, w) or len(np.unique(patch)) == 1:
65 | # p = random.choice(np.argwhere(label == class_num))
66 | # p_ix = (p[0]-(h/2), p[0]+((h+1)/2), p[1]-(w/2), p[1]+((w+1)/2))
67 | # patch = np.array([i[p_ix[0]:p_ix[1], p_ix[2]:p_ix[3]] for i in img])
68 | if patch.shape != (4, h, w) or len(np.argwhere(patch == 0)) > (h * w):
69 | continue
70 |
71 | patches.append(patch)
72 | ct += 1
73 | return np.array(patches), labels
74 |
75 | def center_n(self, n, patches):
76 | '''
77 | Takes list of patches and returns center nxn for each patch. Use as input for cascaded architectures.
78 | INPUT (1) int 'n': size of center patch to take (square)
79 | (2) list 'patches': list of patches to take subpatch of
80 | OUTPUT: list of center nxn patches.
81 | '''
82 | sub_patches = []
83 | for mode in patches:
84 | subs = np.array([patch[(self.h/2) - (n/2):(self.h/2) + ((n+1)/2),(self.w/2) - (n/2):(self.w/2) + ((n+1)/2)] for patch in mode])
85 | sub_patches.append(subs)
86 | return np.array(sub_patches)
87 |
88 | def slice_to_patches(self, filename):
89 | '''
90 | Converts an image to a list of patches with a stride length of 1. Use as input for image prediction.
91 | INPUT: str 'filename': path to image to be converted to patches
92 | OUTPUT: list of patches from the input image.
93 | '''
94 | slices = io.imread(filename).astype('float').reshape(5,240,240)[:-1]
95 | plist=[]
96 | for img in slices:
97 | if np.max(img) != 0:
98 | img /= np.max(img)
99 | p = extract_patches_2d(img, (self.h, self.w))
100 | plist.append(p)
101 | return np.array(zip(np.array(plist[0]), np.array(plist[1]), np.array(plist[2]), np.array(plist[3])))
102 |
103 | def patches_by_entropy(self, num_patches):
104 | '''
105 | Finds high-entropy patches based on label, allows net to learn borders more effectively.
106 | INPUT: int 'num_patches': number of patches to collect. Enter a smaller quantity if using in conjunction with randomly sampled patches.
107 | OUTPUT: list of patches (num_patches, 4, h, w) selected by highest entropy
108 | '''
109 | patches, labels = [], []
110 | ct = 0
111 | while ct < num_patches:
112 | im_path = random.choice(self.train_data)
113 | fn = os.path.basename(im_path)
114 | label = io.imread('Labels/' + fn[:-4] + 'L.png')
115 |
116 | # pick again if slice is only background
117 | if len(np.unique(label)) == 1:
118 | continue
119 |
120 | img = io.imread(im_path).reshape(5, 240, 240)[:-1].astype('float')
121 | l_ent = entropy(label, disk(self.h))
122 | top_ent = np.percentile(l_ent, 90)
123 |
124 | # restart if the 90th entropy percentile is 0
125 | if top_ent == 0:
126 | continue
127 |
128 | highest = np.argwhere(l_ent >= top_ent)
129 | p_s = random.sample(highest, 3)
130 | for p in p_s:
131 | p_ix = (p[0]-(self.h/2), p[0]+((self.h+1)/2), p[1]-(self.w/2), p[1]+((self.w+1)/2))
132 | patch = np.array([i[p_ix[0]:p_ix[1], p_ix[2]:p_ix[3]] for i in img])
133 | # exclude any patches that are too small
134 | if np.shape(patch) != (4, self.h, self.w):
135 | continue
136 | patches.append(patch)
137 | labels.append(label[p[0],p[1]])
138 | ct += 1
139 | return np.array(patches[:num_patches]), np.array(labels[:num_patches])
140 |
141 | def make_training_patches(self, entropy=False, balanced_classes=True, classes=[0,1,2,3,4]):
142 | '''
143 | Creates X and y for training CNN
144 | INPUT (1) bool 'entropy': if True, half of the patches are chosen based on highest entropy area. defaults to False.
145 | (2) bool 'balanced classes': if True, will produce an equal number of each class from the randomly chosen samples
146 | (3) list 'classes': list of classes to sample from. Only change the default if entropy is False and balanced_classes is True
147 | OUTPUT (1) X: patches (num_samples, 4_chan, h, w)
148 | (2) y: labels (num_samples,)
149 | '''
150 | if balanced_classes:
151 | per_class = self.num_samples / len(classes)
152 | patches, labels = [], []
153 | progress.currval = 0
154 | for i in progress(xrange(len(classes))):
155 | p, l = self.find_patches(classes[i], per_class)
156 | # set 0 <= pix intensity <= 1
157 | for img_ix in xrange(len(p)):
158 | for slice in xrange(len(p[img_ix])):
159 | if np.max(p[img_ix][slice]) != 0:
160 | p[img_ix][slice] /= np.max(p[img_ix][slice])
161 | patches.append(p)
162 | labels.append(l)
163 | return np.array(patches).reshape(self.num_samples, 4, self.h, self.w), np.array(labels).reshape(self.num_samples)
164 |
165 | else:
166 | print "Use balanced classes, random won't work."
167 |
168 |
169 |
170 | if __name__ == '__main__':
171 | pass
172 |
--------------------------------------------------------------------------------
/code/write_to_s3.py:
--------------------------------------------------------------------------------
1 | import time
2 | import os
3 | import progressbar
4 | import threading
5 | from glob import glob
6 | from boto.s3.connection import S3Connection
7 | progress = progressbar.ProgressBar(widgets=[progressbar.Bar('*', '[', ']'), progressbar.Percentage(), ' '])
8 |
9 | def files_to_s3(files, bucket_name):
10 | '''
11 | INPUT (1) list 'files': all files to upload to s3 bucket
12 | (2) string 'bucket_name': name of bucket to dump into
13 | writes all files to s3 bucket using threads
14 | '''
15 | AWS_KEY = os.environ['AWS_ACCESS_KEY_ID']
16 | AWS_SECRET = os.environ['AWS_SECRET_ACCESS_KEY']
17 |
18 |
19 | def upload(myfile):
20 | conn = S3Connection(aws_access_key_id = AWS_KEY, aws_secret_access_key = AWS_SECRET)
21 | bucket = conn.get_bucket(bucket_name)
22 | key = bucket.new_key(myfile).set_contents_from_filename(myfile) # , cb=percent_cb, num_cb=1)
23 | return myfile
24 |
25 | for fname in files:
26 | threading.Thread(target=upload, args=(fname,)).start()
27 |
28 | if __name__ == '__main__':
29 | files = glob('n4_PNG/**')
30 | progress.currval = 0
31 | start_time = time.time()
32 | for x in progress(xrange(len(files) / 100)): #avoid threading complications
33 | time.sleep(2)
34 | f = files[100 * x : (100 * x) + 100]
35 | files_to_s3(f, 'n4itk-slices')
36 | # print(str(x) + ' out of ' + str(len(files)/100))
37 | print('------%s seconds------' % (time.time() - start_time))
38 |
--------------------------------------------------------------------------------
/images/MRI_workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/MRI_workflow.png
--------------------------------------------------------------------------------
/images/bad_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/bad_example.png
--------------------------------------------------------------------------------
/images/brain_grids.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/brain_grids.png
--------------------------------------------------------------------------------
/images/color_code.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/color_code.png
--------------------------------------------------------------------------------
/images/groundtruth.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/groundtruth.gif
--------------------------------------------------------------------------------
/images/gt.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/gt.gif
--------------------------------------------------------------------------------
/images/improved_seg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/improved_seg.png
--------------------------------------------------------------------------------
/images/modalities.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/modalities.png
--------------------------------------------------------------------------------
/images/model_architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/model_architecture.png
--------------------------------------------------------------------------------
/images/my_res.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/my_res.gif
--------------------------------------------------------------------------------
/images/myresults.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/myresults.gif
--------------------------------------------------------------------------------
/images/n4_correction.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/n4_correction.png
--------------------------------------------------------------------------------
/images/patch_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_10.png
--------------------------------------------------------------------------------
/images/patch_15.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_15.png
--------------------------------------------------------------------------------
/images/patch_20.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_20.png
--------------------------------------------------------------------------------
/images/patch_25.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_25.png
--------------------------------------------------------------------------------
/images/patch_30.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_30.png
--------------------------------------------------------------------------------
/images/patch_35.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_35.png
--------------------------------------------------------------------------------
/images/patch_40.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_40.png
--------------------------------------------------------------------------------
/images/patch_45.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_45.png
--------------------------------------------------------------------------------
/images/patch_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_5.png
--------------------------------------------------------------------------------
/images/patch_51.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_51.png
--------------------------------------------------------------------------------
/images/patch_55.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_55.png
--------------------------------------------------------------------------------
/images/patch_60.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_60.png
--------------------------------------------------------------------------------
/images/patch_65.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_65.png
--------------------------------------------------------------------------------
/images/patch_70.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_70.png
--------------------------------------------------------------------------------
/images/patch_75.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_75.png
--------------------------------------------------------------------------------
/images/patch_80.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_80.png
--------------------------------------------------------------------------------
/images/patch_85.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_85.png
--------------------------------------------------------------------------------
/images/patch_90.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_90.png
--------------------------------------------------------------------------------
/images/patch_95.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patch_95.png
--------------------------------------------------------------------------------
/images/patches.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/patches.png
--------------------------------------------------------------------------------
/images/results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/results.png
--------------------------------------------------------------------------------
/images/segment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/segment.png
--------------------------------------------------------------------------------
/images/segmented_slice.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/segmented_slice.png
--------------------------------------------------------------------------------
/images/t29_143.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/t29_143.gif
--------------------------------------------------------------------------------
/images/t2_grid.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/t2_grid.png
--------------------------------------------------------------------------------
/images/tumor_diversity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/images/tumor_diversity.png
--------------------------------------------------------------------------------
/license.md:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2017 Nikki Aldeborgh
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of
6 | this software and associated documentation files (the "Software"), to deal in
7 | the Software without restriction, including without limitation the rights to
8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
9 | the Software, and to permit persons to whom the Software is furnished to do so,
10 | subject to the following conditions: The above copyright notice and this
11 | permission notice shall be included in all copies or substantial portions of the
12 | Software.
13 |
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
16 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
17 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
18 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
19 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
20 |
--------------------------------------------------------------------------------
/models/4th_epoch60k.hdf5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/models/4th_epoch60k.hdf5
--------------------------------------------------------------------------------
/models/4th_epoch60k.json:
--------------------------------------------------------------------------------
1 | "{\"layers\": [{\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"nb_col\": 7, \"activation\": \"linear\", \"trainable\": true, \"input_shape\": [4, 33, 33], \"dim_ordering\": \"th\", \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 64, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 7, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"activation\": \"relu\", \"trainable\": true, \"name\": \"Activation\", \"custom_name\": \"activation\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 5, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 5, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 5, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 5, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 3, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", 
\"l1\": 0.01}, \"nb_row\": 3, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.25}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Flatten\", \"custom_name\": \"flatten\"}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Dense\", \"custom_name\": \"dense\", \"activity_regularizer\": null, \"trainable\": true, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"activation\": \"linear\", \"input_dim\": null, \"b_regularizer\": null, \"W_regularizer\": null, \"output_dim\": 5}, {\"cache_enabled\": true, \"activation\": \"softmax\", \"trainable\": true, \"name\": \"Activation\", \"custom_name\": \"activation\"}], \"loss\": \"categorical_crossentropy\", \"optimizer\": {\"nesterov\": false, \"lr\": 0.009999999776482582, \"name\": \"SGD\", \"momentum\": 0.0, \"decay\": 0.0}, \"name\": \"Sequential\", \"sample_weight_mode\": null}"
--------------------------------------------------------------------------------
/models/6th_epoch60k.hdf5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/naldeborgh7575/brain_segmentation/4462bde26e8434391c91cf3d9a8ac53fd6d494bc/models/6th_epoch60k.hdf5
--------------------------------------------------------------------------------
/models/6th_epoch60k.json:
--------------------------------------------------------------------------------
1 | "{\"layers\": [{\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"nb_col\": 7, \"activation\": \"linear\", \"trainable\": true, \"input_shape\": [4, 33, 33], \"dim_ordering\": \"th\", \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 64, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 7, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"activation\": \"relu\", \"trainable\": true, \"name\": \"Activation\", \"custom_name\": \"activation\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 5, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 5, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 5, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 5, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 3, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", 
\"l1\": 0.01}, \"nb_row\": 3, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.25}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Flatten\", \"custom_name\": \"flatten\"}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Dense\", \"custom_name\": \"dense\", \"activity_regularizer\": null, \"trainable\": true, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"activation\": \"linear\", \"input_dim\": null, \"b_regularizer\": null, \"W_regularizer\": null, \"output_dim\": 5}, {\"cache_enabled\": true, \"activation\": \"softmax\", \"trainable\": true, \"name\": \"Activation\", \"custom_name\": \"activation\"}], \"loss\": \"categorical_crossentropy\", \"optimizer\": {\"nesterov\": false, \"lr\": 0.009999999776482582, \"name\": \"SGD\", \"momentum\": 0.0, \"decay\": 0.0}, \"name\": \"Sequential\", \"sample_weight_mode\": null}"
--------------------------------------------------------------------------------
/models/model_50_a.json:
--------------------------------------------------------------------------------
1 | "{\"layers\": [{\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"nb_col\": 7, \"activation\": \"linear\", \"trainable\": true, \"input_shape\": [4, 33, 33], \"dim_ordering\": \"th\", \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 64, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 7, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"activation\": \"relu\", \"trainable\": true, \"name\": \"Activation\", \"custom_name\": \"activation\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 5, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 5, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 5, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 256, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", \"l1\": 0.01}, \"nb_row\": 5, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"name\": \"BatchNormalization\", \"custom_name\": \"batchnormalization\", \"epsilon\": 1e-06, \"trainable\": true, \"cache_enabled\": true, \"mode\": 0, \"momentum\": 0.9, \"axis\": 1}, {\"name\": \"MaxPooling2D\", \"custom_name\": \"maxpooling2d\", \"strides\": [1, 1], \"trainable\": true, \"dim_ordering\": \"th\", \"pool_size\": [2, 2], \"cache_enabled\": true, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.5}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Convolution2D\", \"custom_name\": \"convolution2d\", \"subsample\": [1, 1], \"activation\": \"relu\", \"trainable\": true, \"dim_ordering\": \"th\", \"nb_col\": 3, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"nb_filter\": 128, \"b_regularizer\": null, \"W_regularizer\": {\"l2\": 0.01, \"name\": \"WeightRegularizer\", 
\"l1\": 0.01}, \"nb_row\": 3, \"activity_regularizer\": null, \"border_mode\": \"valid\"}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Dropout\", \"custom_name\": \"dropout\", \"p\": 0.25}, {\"cache_enabled\": true, \"trainable\": true, \"name\": \"Flatten\", \"custom_name\": \"flatten\"}, {\"W_constraint\": null, \"b_constraint\": null, \"name\": \"Dense\", \"custom_name\": \"dense\", \"activity_regularizer\": null, \"trainable\": true, \"cache_enabled\": true, \"init\": \"glorot_uniform\", \"activation\": \"linear\", \"input_dim\": null, \"b_regularizer\": null, \"W_regularizer\": null, \"output_dim\": 5}, {\"cache_enabled\": true, \"activation\": \"softmax\", \"trainable\": true, \"name\": \"Activation\", \"custom_name\": \"activation\"}], \"loss\": \"categorical_crossentropy\", \"optimizer\": {\"nesterov\": false, \"lr\": 0.009999999776482582, \"name\": \"SGD\", \"momentum\": 0.0, \"decay\": 0.0}, \"name\": \"Sequential\", \"sample_weight_mode\": null}"
--------------------------------------------------------------------------------
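Note on the model files: the `.json` configs above are Keras 1.x architectures (Theano `"dim_ordering": "th"`, channels-first 4x33x33 input patches, 5 output classes) serialized as a quoted, escaped JSON string, with the trained weights stored in the matching `.hdf5` files. Below is a minimal loading sketch, assuming a Keras 1.x / Theano-ordering environment and that the outer quoting shown above is present in the file on disk (if it is not, the `json.loads` unwrap can simply be dropped):

```python
import json

from keras.models import model_from_json  # Keras 1.x API

# Rebuild the architecture. The dumped .json stores the config as a quoted,
# escaped JSON string, so unwrap that outer layer before handing it to Keras.
with open('models/4th_epoch60k.json') as f:
    arch = json.loads(f.read())
model = model_from_json(arch)

# Attach the trained weights saved alongside the architecture.
model.load_weights('models/4th_epoch60k.hdf5')

# Re-compile with the loss/optimizer recorded in the config before predicting.
model.compile(loss='categorical_crossentropy', optimizer='sgd')

# The input shape in the config is (4, 33, 33): four pulse sequences stacked
# channels-first over a 33x33 patch, so a batch of N patches has shape
# (N, 4, 33, 33) and model.predict returns N rows of 5 class probabilities.
```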