├── LICENSE.txt ├── assembleGoogLeNet.m ├── googlenetExample.m ├── googlenetLayers.m ├── images └── googlenet_deepNetworkDesigner.PNG └── readme.md /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Copyright (c) 2019, The MathWorks, Inc. 2 | 3 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 4 | 5 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 6 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 7 | 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 8 | 9 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-------------------------------------------------------------------------------- /assembleGoogLeNet.m: -------------------------------------------------------------------------------- 1 | function net = assembleGoogLeNet() 2 | % assembleGoogLeNet Assemble GoogLeNet network 3 | % 4 | % net = assembleGoogLeNet creates a GoogLeNet network with weights 5 | % trained on ImageNet. You can load the same GoogLeNet network by 6 | % installing the Deep Learning Toolbox Model for GoogLeNet Network 7 | % support package from the Add-On Explorer and then using the googlenet 8 | % function. 9 | 10 | % Copyright 2019 The MathWorks, Inc. 11 | 12 | % Download the network parameters. If these have already been downloaded, 13 | % this step will be skipped. 14 | % 15 | % The parameters will be downloaded to a file "googlenetParams.mat" in a 16 | % directory "GoogLeNet" located in the system's temporary directory. 17 | dataDir = fullfile(tempdir, "GoogLeNet"); 18 | paramFile = fullfile(dataDir, "googlenetParams.mat"); 19 | downloadUrl = "http://www.mathworks.com/supportfiles/nnet/data/networks/googlenetParams.mat"; 20 | 21 | if ~exist(dataDir, "dir") 22 | mkdir(dataDir); 23 | end 24 | 25 | if ~exist(paramFile, "file") 26 | disp("Downloading pretrained parameters file (26 MB).") 27 | disp("This may take several minutes..."); 28 | websave(paramFile, downloadUrl); 29 | disp("Download finished."); 30 | else 31 | disp("Skipping download, parameter file already exists."); 32 | end 33 | 34 | % Load the network parameters from the file googlenetParams.mat. 35 | s = load(paramFile); 36 | params = s.params; 37 | 38 | % Create a layer graph with the network architecture of GoogLeNet. 39 | lgraph = googlenetLayers; 40 | 41 | % Create a cell array containing the layer names. 42 | layerNames = {lgraph.Layers(:).Name}'; 43 | 44 | % Loop over layers and add parameters.
45 | for i = 1:numel(layerNames) 46 | name = layerNames{i}; 47 | idx = strcmp(layerNames,name); 48 | layer = lgraph.Layers(idx); 49 | 50 | % Assign layer parameters. 51 | pname = replace(name,'-','_'); 52 | layerParams = params.(pname); 53 | if ~isempty(layerParams) 54 | paramNames = fieldnames(layerParams); 55 | for j = 1:numel(paramNames) 56 | layer.(paramNames{j}) = layerParams.(paramNames{j}); 57 | end 58 | 59 | % Replace the layer in the layer graph with the parameterized copy. 60 | lgraph = replaceLayer(lgraph,name,layer); 61 | end 62 | end 63 | 64 | % Assemble the network. 65 | net = assembleNetwork(lgraph); 66 | 67 | end -------------------------------------------------------------------------------- /googlenetExample.m: -------------------------------------------------------------------------------- 1 | %% Classify Image Using GoogLeNet 2 | % This example shows how to classify an image using the GoogLeNet 3 | % pretrained convolutional neural network. 4 | 5 | % Copyright 2019 The MathWorks, Inc. 6 | 7 | % Read an example image. 8 | img = imread("peppers.png"); 9 | 10 | % The image that you want to classify must have the same size as the input 11 | % size of the network. Resize the image to be 224-by-224 pixels, the input 12 | % size of GoogLeNet. 13 | img = imresize(img,[224 224]); 14 | 15 | % Assemble the pretrained GoogLeNet network. Alternatively, you can 16 | % create a pretrained GoogLeNet network by installing the Deep Learning 17 | % Toolbox Model for GoogLeNet Network support package from the Add-On 18 | % Explorer and then using the googlenet function. 19 | net = assembleGoogLeNet; 20 | 21 | % Analyze the network architecture. 22 | analyzeNetwork(net) 23 | 24 | % Classify the image using the network. 25 | label = classify(net,img); 26 | 27 | % Display the image together with the predicted label.
28 | figure 29 | imshow(img) 30 | title(string(label)) -------------------------------------------------------------------------------- /googlenetLayers.m: -------------------------------------------------------------------------------- 1 | function lgraph = googlenetLayers() 2 | %googlenetLayers GoogLeNet layer graph 3 | % 4 | % lgraph = googlenetLayers creates a layer graph with the network 5 | % architecture of GoogLeNet. The layer graph contains no weights. 6 | 7 | lgraph = layerGraph(); 8 | %% Add Layer Branches 9 | % Add the branches of the network to the layer graph. Each branch is a linear 10 | % array of layers. 11 | 12 | tempLayers = [ 13 | imageInputLayer([224 224 3],"Name","data") 14 | convolution2dLayer([7 7],64,"Name","conv1-7x7_s2","BiasLearnRateFactor",2,"Padding",[3 3 3 3],"Stride",[2 2]) 15 | reluLayer("Name","conv1-relu_7x7") 16 | maxPooling2dLayer([3 3],"Name","pool1-3x3_s2","Padding",[0 1 0 1],"Stride",[2 2]) 17 | crossChannelNormalizationLayer(5,"Name","pool1-norm1","K",1) 18 | convolution2dLayer([1 1],64,"Name","conv2-3x3_reduce","BiasLearnRateFactor",2) 19 | reluLayer("Name","conv2-relu_3x3_reduce") 20 | convolution2dLayer([3 3],192,"Name","conv2-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 21 | reluLayer("Name","conv2-relu_3x3") 22 | crossChannelNormalizationLayer(5,"Name","conv2-norm2","K",1) 23 | maxPooling2dLayer([3 3],"Name","pool2-3x3_s2","Padding",[0 1 0 1],"Stride",[2 2])]; 24 | lgraph = addLayers(lgraph,tempLayers); 25 | 26 | tempLayers = [ 27 | convolution2dLayer([1 1],16,"Name","inception_3a-5x5_reduce","BiasLearnRateFactor",2) 28 | reluLayer("Name","inception_3a-relu_5x5_reduce") 29 | convolution2dLayer([5 5],32,"Name","inception_3a-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 30 | reluLayer("Name","inception_3a-relu_5x5")]; 31 | lgraph = addLayers(lgraph,tempLayers); 32 | 33 | tempLayers = [ 34 | convolution2dLayer([1 1],64,"Name","inception_3a-1x1","BiasLearnRateFactor",2) 35 |
reluLayer("Name","inception_3a-relu_1x1")]; 36 | lgraph = addLayers(lgraph,tempLayers); 37 | 38 | tempLayers = [ 39 | convolution2dLayer([1 1],96,"Name","inception_3a-3x3_reduce","BiasLearnRateFactor",2) 40 | reluLayer("Name","inception_3a-relu_3x3_reduce") 41 | convolution2dLayer([3 3],128,"Name","inception_3a-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 42 | reluLayer("Name","inception_3a-relu_3x3")]; 43 | lgraph = addLayers(lgraph,tempLayers); 44 | 45 | tempLayers = [ 46 | maxPooling2dLayer([3 3],"Name","inception_3a-pool","Padding",[1 1 1 1]) 47 | convolution2dLayer([1 1],32,"Name","inception_3a-pool_proj","BiasLearnRateFactor",2) 48 | reluLayer("Name","inception_3a-relu_pool_proj")]; 49 | lgraph = addLayers(lgraph,tempLayers); 50 | 51 | tempLayers = depthConcatenationLayer(4,"Name","inception_3a-output"); 52 | lgraph = addLayers(lgraph,tempLayers); 53 | 54 | tempLayers = [ 55 | convolution2dLayer([1 1],128,"Name","inception_3b-3x3_reduce","BiasLearnRateFactor",2) 56 | reluLayer("Name","inception_3b-relu_3x3_reduce") 57 | convolution2dLayer([3 3],192,"Name","inception_3b-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 58 | reluLayer("Name","inception_3b-relu_3x3")]; 59 | lgraph = addLayers(lgraph,tempLayers); 60 | 61 | tempLayers = [ 62 | convolution2dLayer([1 1],128,"Name","inception_3b-1x1","BiasLearnRateFactor",2) 63 | reluLayer("Name","inception_3b-relu_1x1")]; 64 | lgraph = addLayers(lgraph,tempLayers); 65 | 66 | tempLayers = [ 67 | maxPooling2dLayer([3 3],"Name","inception_3b-pool","Padding",[1 1 1 1]) 68 | convolution2dLayer([1 1],64,"Name","inception_3b-pool_proj","BiasLearnRateFactor",2) 69 | reluLayer("Name","inception_3b-relu_pool_proj")]; 70 | lgraph = addLayers(lgraph,tempLayers); 71 | 72 | tempLayers = [ 73 | convolution2dLayer([1 1],32,"Name","inception_3b-5x5_reduce","BiasLearnRateFactor",2) 74 | reluLayer("Name","inception_3b-relu_5x5_reduce") 75 | convolution2dLayer([5 5],96,"Name","inception_3b-5x5","BiasLearnRateFactor",2,"Padding",[2 
2 2 2]) 76 | reluLayer("Name","inception_3b-relu_5x5")]; 77 | lgraph = addLayers(lgraph,tempLayers); 78 | 79 | tempLayers = [ 80 | depthConcatenationLayer(4,"Name","inception_3b-output") 81 | maxPooling2dLayer([3 3],"Name","pool3-3x3_s2","Padding",[0 1 0 1],"Stride",[2 2])]; 82 | lgraph = addLayers(lgraph,tempLayers); 83 | 84 | tempLayers = [ 85 | convolution2dLayer([1 1],96,"Name","inception_4a-3x3_reduce","BiasLearnRateFactor",2) 86 | reluLayer("Name","inception_4a-relu_3x3_reduce") 87 | convolution2dLayer([3 3],208,"Name","inception_4a-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 88 | reluLayer("Name","inception_4a-relu_3x3")]; 89 | lgraph = addLayers(lgraph,tempLayers); 90 | 91 | tempLayers = [ 92 | convolution2dLayer([1 1],192,"Name","inception_4a-1x1","BiasLearnRateFactor",2) 93 | reluLayer("Name","inception_4a-relu_1x1")]; 94 | lgraph = addLayers(lgraph,tempLayers); 95 | 96 | tempLayers = [ 97 | convolution2dLayer([1 1],16,"Name","inception_4a-5x5_reduce","BiasLearnRateFactor",2) 98 | reluLayer("Name","inception_4a-relu_5x5_reduce") 99 | convolution2dLayer([5 5],48,"Name","inception_4a-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 100 | reluLayer("Name","inception_4a-relu_5x5")]; 101 | lgraph = addLayers(lgraph,tempLayers); 102 | 103 | tempLayers = [ 104 | maxPooling2dLayer([3 3],"Name","inception_4a-pool","Padding",[1 1 1 1]) 105 | convolution2dLayer([1 1],64,"Name","inception_4a-pool_proj","BiasLearnRateFactor",2) 106 | reluLayer("Name","inception_4a-relu_pool_proj")]; 107 | lgraph = addLayers(lgraph,tempLayers); 108 | 109 | tempLayers = depthConcatenationLayer(4,"Name","inception_4a-output"); 110 | lgraph = addLayers(lgraph,tempLayers); 111 | 112 | tempLayers = [ 113 | maxPooling2dLayer([3 3],"Name","inception_4b-pool","Padding",[1 1 1 1]) 114 | convolution2dLayer([1 1],64,"Name","inception_4b-pool_proj","BiasLearnRateFactor",2) 115 | reluLayer("Name","inception_4b-relu_pool_proj")]; 116 | lgraph = addLayers(lgraph,tempLayers); 117 | 118 | 
tempLayers = [ 119 | convolution2dLayer([1 1],160,"Name","inception_4b-1x1","BiasLearnRateFactor",2) 120 | reluLayer("Name","inception_4b-relu_1x1")]; 121 | lgraph = addLayers(lgraph,tempLayers); 122 | 123 | tempLayers = [ 124 | convolution2dLayer([1 1],112,"Name","inception_4b-3x3_reduce","BiasLearnRateFactor",2) 125 | reluLayer("Name","inception_4b-relu_3x3_reduce") 126 | convolution2dLayer([3 3],224,"Name","inception_4b-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 127 | reluLayer("Name","inception_4b-relu_3x3")]; 128 | lgraph = addLayers(lgraph,tempLayers); 129 | 130 | tempLayers = [ 131 | convolution2dLayer([1 1],24,"Name","inception_4b-5x5_reduce","BiasLearnRateFactor",2) 132 | reluLayer("Name","inception_4b-relu_5x5_reduce") 133 | convolution2dLayer([5 5],64,"Name","inception_4b-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 134 | reluLayer("Name","inception_4b-relu_5x5")]; 135 | lgraph = addLayers(lgraph,tempLayers); 136 | 137 | tempLayers = depthConcatenationLayer(4,"Name","inception_4b-output"); 138 | lgraph = addLayers(lgraph,tempLayers); 139 | 140 | tempLayers = [ 141 | convolution2dLayer([1 1],24,"Name","inception_4c-5x5_reduce","BiasLearnRateFactor",2) 142 | reluLayer("Name","inception_4c-relu_5x5_reduce") 143 | convolution2dLayer([5 5],64,"Name","inception_4c-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 144 | reluLayer("Name","inception_4c-relu_5x5")]; 145 | lgraph = addLayers(lgraph,tempLayers); 146 | 147 | tempLayers = [ 148 | convolution2dLayer([1 1],128,"Name","inception_4c-1x1","BiasLearnRateFactor",2) 149 | reluLayer("Name","inception_4c-relu_1x1")]; 150 | lgraph = addLayers(lgraph,tempLayers); 151 | 152 | tempLayers = [ 153 | maxPooling2dLayer([3 3],"Name","inception_4c-pool","Padding",[1 1 1 1]) 154 | convolution2dLayer([1 1],64,"Name","inception_4c-pool_proj","BiasLearnRateFactor",2) 155 | reluLayer("Name","inception_4c-relu_pool_proj")]; 156 | lgraph = addLayers(lgraph,tempLayers); 157 | 158 | tempLayers = [ 159 | 
convolution2dLayer([1 1],128,"Name","inception_4c-3x3_reduce","BiasLearnRateFactor",2) 160 | reluLayer("Name","inception_4c-relu_3x3_reduce") 161 | convolution2dLayer([3 3],256,"Name","inception_4c-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 162 | reluLayer("Name","inception_4c-relu_3x3")]; 163 | lgraph = addLayers(lgraph,tempLayers); 164 | 165 | tempLayers = depthConcatenationLayer(4,"Name","inception_4c-output"); 166 | lgraph = addLayers(lgraph,tempLayers); 167 | 168 | tempLayers = [ 169 | convolution2dLayer([1 1],32,"Name","inception_4d-5x5_reduce","BiasLearnRateFactor",2) 170 | reluLayer("Name","inception_4d-relu_5x5_reduce") 171 | convolution2dLayer([5 5],64,"Name","inception_4d-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 172 | reluLayer("Name","inception_4d-relu_5x5")]; 173 | lgraph = addLayers(lgraph,tempLayers); 174 | 175 | tempLayers = [ 176 | convolution2dLayer([1 1],112,"Name","inception_4d-1x1","BiasLearnRateFactor",2) 177 | reluLayer("Name","inception_4d-relu_1x1")]; 178 | lgraph = addLayers(lgraph,tempLayers); 179 | 180 | tempLayers = [ 181 | convolution2dLayer([1 1],144,"Name","inception_4d-3x3_reduce","BiasLearnRateFactor",2) 182 | reluLayer("Name","inception_4d-relu_3x3_reduce") 183 | convolution2dLayer([3 3],288,"Name","inception_4d-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 184 | reluLayer("Name","inception_4d-relu_3x3")]; 185 | lgraph = addLayers(lgraph,tempLayers); 186 | 187 | tempLayers = [ 188 | maxPooling2dLayer([3 3],"Name","inception_4d-pool","Padding",[1 1 1 1]) 189 | convolution2dLayer([1 1],64,"Name","inception_4d-pool_proj","BiasLearnRateFactor",2) 190 | reluLayer("Name","inception_4d-relu_pool_proj")]; 191 | lgraph = addLayers(lgraph,tempLayers); 192 | 193 | tempLayers = depthConcatenationLayer(4,"Name","inception_4d-output"); 194 | lgraph = addLayers(lgraph,tempLayers); 195 | 196 | tempLayers = [ 197 | convolution2dLayer([1 1],256,"Name","inception_4e-1x1","BiasLearnRateFactor",2) 198 | 
reluLayer("Name","inception_4e-relu_1x1")]; 199 | lgraph = addLayers(lgraph,tempLayers); 200 | 201 | tempLayers = [ 202 | convolution2dLayer([1 1],160,"Name","inception_4e-3x3_reduce","BiasLearnRateFactor",2) 203 | reluLayer("Name","inception_4e-relu_3x3_reduce") 204 | convolution2dLayer([3 3],320,"Name","inception_4e-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 205 | reluLayer("Name","inception_4e-relu_3x3")]; 206 | lgraph = addLayers(lgraph,tempLayers); 207 | 208 | tempLayers = [ 209 | convolution2dLayer([1 1],32,"Name","inception_4e-5x5_reduce","BiasLearnRateFactor",2) 210 | reluLayer("Name","inception_4e-relu_5x5_reduce") 211 | convolution2dLayer([5 5],128,"Name","inception_4e-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 212 | reluLayer("Name","inception_4e-relu_5x5")]; 213 | lgraph = addLayers(lgraph,tempLayers); 214 | 215 | tempLayers = [ 216 | maxPooling2dLayer([3 3],"Name","inception_4e-pool","Padding",[1 1 1 1]) 217 | convolution2dLayer([1 1],128,"Name","inception_4e-pool_proj","BiasLearnRateFactor",2) 218 | reluLayer("Name","inception_4e-relu_pool_proj")]; 219 | lgraph = addLayers(lgraph,tempLayers); 220 | 221 | tempLayers = [ 222 | depthConcatenationLayer(4,"Name","inception_4e-output") 223 | maxPooling2dLayer([3 3],"Name","pool4-3x3_s2","Padding",[0 1 0 1],"Stride",[2 2])]; 224 | lgraph = addLayers(lgraph,tempLayers); 225 | 226 | tempLayers = [ 227 | convolution2dLayer([1 1],32,"Name","inception_5a-5x5_reduce","BiasLearnRateFactor",2) 228 | reluLayer("Name","inception_5a-relu_5x5_reduce") 229 | convolution2dLayer([5 5],128,"Name","inception_5a-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 230 | reluLayer("Name","inception_5a-relu_5x5")]; 231 | lgraph = addLayers(lgraph,tempLayers); 232 | 233 | tempLayers = [ 234 | maxPooling2dLayer([3 3],"Name","inception_5a-pool","Padding",[1 1 1 1]) 235 | convolution2dLayer([1 1],128,"Name","inception_5a-pool_proj","BiasLearnRateFactor",2) 236 | reluLayer("Name","inception_5a-relu_pool_proj")]; 237 | 
lgraph = addLayers(lgraph,tempLayers); 238 | 239 | tempLayers = [ 240 | convolution2dLayer([1 1],160,"Name","inception_5a-3x3_reduce","BiasLearnRateFactor",2) 241 | reluLayer("Name","inception_5a-relu_3x3_reduce") 242 | convolution2dLayer([3 3],320,"Name","inception_5a-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 243 | reluLayer("Name","inception_5a-relu_3x3")]; 244 | lgraph = addLayers(lgraph,tempLayers); 245 | 246 | tempLayers = [ 247 | convolution2dLayer([1 1],256,"Name","inception_5a-1x1","BiasLearnRateFactor",2) 248 | reluLayer("Name","inception_5a-relu_1x1")]; 249 | lgraph = addLayers(lgraph,tempLayers); 250 | 251 | tempLayers = depthConcatenationLayer(4,"Name","inception_5a-output"); 252 | lgraph = addLayers(lgraph,tempLayers); 253 | 254 | tempLayers = [ 255 | convolution2dLayer([1 1],48,"Name","inception_5b-5x5_reduce","BiasLearnRateFactor",2) 256 | reluLayer("Name","inception_5b-relu_5x5_reduce") 257 | convolution2dLayer([5 5],128,"Name","inception_5b-5x5","BiasLearnRateFactor",2,"Padding",[2 2 2 2]) 258 | reluLayer("Name","inception_5b-relu_5x5")]; 259 | lgraph = addLayers(lgraph,tempLayers); 260 | 261 | tempLayers = [ 262 | maxPooling2dLayer([3 3],"Name","inception_5b-pool","Padding",[1 1 1 1]) 263 | convolution2dLayer([1 1],128,"Name","inception_5b-pool_proj","BiasLearnRateFactor",2) 264 | reluLayer("Name","inception_5b-relu_pool_proj")]; 265 | lgraph = addLayers(lgraph,tempLayers); 266 | 267 | tempLayers = [ 268 | convolution2dLayer([1 1],192,"Name","inception_5b-3x3_reduce","BiasLearnRateFactor",2) 269 | reluLayer("Name","inception_5b-relu_3x3_reduce") 270 | convolution2dLayer([3 3],384,"Name","inception_5b-3x3","BiasLearnRateFactor",2,"Padding",[1 1 1 1]) 271 | reluLayer("Name","inception_5b-relu_3x3")]; 272 | lgraph = addLayers(lgraph,tempLayers); 273 | 274 | tempLayers = [ 275 | convolution2dLayer([1 1],384,"Name","inception_5b-1x1","BiasLearnRateFactor",2) 276 | reluLayer("Name","inception_5b-relu_1x1")]; 277 | lgraph = 
addLayers(lgraph,tempLayers); 278 | 279 | tempLayers = [ 280 | depthConcatenationLayer(4,"Name","inception_5b-output") 281 | globalAveragePooling2dLayer("Name","pool5-7x7_s1") 282 | dropoutLayer(0.4,"Name","pool5-drop_7x7_s1") 283 | fullyConnectedLayer(1000,"Name","loss3-classifier","BiasLearnRateFactor",2) 284 | softmaxLayer("Name","prob") 285 | classificationLayer("Name","output")]; 286 | lgraph = addLayers(lgraph,tempLayers); 287 | 288 | %% Connect Layer Branches 289 | % Connect all the branches of the network to create the network graph. 290 | 291 | lgraph = connectLayers(lgraph,"pool2-3x3_s2","inception_3a-5x5_reduce"); 292 | lgraph = connectLayers(lgraph,"pool2-3x3_s2","inception_3a-1x1"); 293 | lgraph = connectLayers(lgraph,"pool2-3x3_s2","inception_3a-3x3_reduce"); 294 | lgraph = connectLayers(lgraph,"pool2-3x3_s2","inception_3a-pool"); 295 | lgraph = connectLayers(lgraph,"inception_3a-relu_1x1","inception_3a-output/in1"); 296 | lgraph = connectLayers(lgraph,"inception_3a-relu_5x5","inception_3a-output/in3"); 297 | lgraph = connectLayers(lgraph,"inception_3a-relu_pool_proj","inception_3a-output/in4"); 298 | lgraph = connectLayers(lgraph,"inception_3a-relu_3x3","inception_3a-output/in2"); 299 | lgraph = connectLayers(lgraph,"inception_3a-output","inception_3b-3x3_reduce"); 300 | lgraph = connectLayers(lgraph,"inception_3a-output","inception_3b-1x1"); 301 | lgraph = connectLayers(lgraph,"inception_3a-output","inception_3b-pool"); 302 | lgraph = connectLayers(lgraph,"inception_3a-output","inception_3b-5x5_reduce"); 303 | lgraph = connectLayers(lgraph,"inception_3b-relu_pool_proj","inception_3b-output/in4"); 304 | lgraph = connectLayers(lgraph,"inception_3b-relu_1x1","inception_3b-output/in1"); 305 | lgraph = connectLayers(lgraph,"inception_3b-relu_5x5","inception_3b-output/in3"); 306 | lgraph = connectLayers(lgraph,"inception_3b-relu_3x3","inception_3b-output/in2"); 307 | lgraph = connectLayers(lgraph,"pool3-3x3_s2","inception_4a-3x3_reduce"); 308 | lgraph = 
connectLayers(lgraph,"pool3-3x3_s2","inception_4a-1x1"); 309 | lgraph = connectLayers(lgraph,"pool3-3x3_s2","inception_4a-5x5_reduce"); 310 | lgraph = connectLayers(lgraph,"pool3-3x3_s2","inception_4a-pool"); 311 | lgraph = connectLayers(lgraph,"inception_4a-relu_1x1","inception_4a-output/in1"); 312 | lgraph = connectLayers(lgraph,"inception_4a-relu_3x3","inception_4a-output/in2"); 313 | lgraph = connectLayers(lgraph,"inception_4a-relu_pool_proj","inception_4a-output/in4"); 314 | lgraph = connectLayers(lgraph,"inception_4a-relu_5x5","inception_4a-output/in3"); 315 | lgraph = connectLayers(lgraph,"inception_4a-output","inception_4b-pool"); 316 | lgraph = connectLayers(lgraph,"inception_4a-output","inception_4b-1x1"); 317 | lgraph = connectLayers(lgraph,"inception_4a-output","inception_4b-3x3_reduce"); 318 | lgraph = connectLayers(lgraph,"inception_4a-output","inception_4b-5x5_reduce"); 319 | lgraph = connectLayers(lgraph,"inception_4b-relu_1x1","inception_4b-output/in1"); 320 | lgraph = connectLayers(lgraph,"inception_4b-relu_3x3","inception_4b-output/in2"); 321 | lgraph = connectLayers(lgraph,"inception_4b-relu_pool_proj","inception_4b-output/in4"); 322 | lgraph = connectLayers(lgraph,"inception_4b-relu_5x5","inception_4b-output/in3"); 323 | lgraph = connectLayers(lgraph,"inception_4b-output","inception_4c-5x5_reduce"); 324 | lgraph = connectLayers(lgraph,"inception_4b-output","inception_4c-1x1"); 325 | lgraph = connectLayers(lgraph,"inception_4b-output","inception_4c-pool"); 326 | lgraph = connectLayers(lgraph,"inception_4b-output","inception_4c-3x3_reduce"); 327 | lgraph = connectLayers(lgraph,"inception_4c-relu_1x1","inception_4c-output/in1"); 328 | lgraph = connectLayers(lgraph,"inception_4c-relu_5x5","inception_4c-output/in3"); 329 | lgraph = connectLayers(lgraph,"inception_4c-relu_3x3","inception_4c-output/in2"); 330 | lgraph = connectLayers(lgraph,"inception_4c-relu_pool_proj","inception_4c-output/in4"); 331 | lgraph = 
connectLayers(lgraph,"inception_4c-output","inception_4d-5x5_reduce"); 332 | lgraph = connectLayers(lgraph,"inception_4c-output","inception_4d-1x1"); 333 | lgraph = connectLayers(lgraph,"inception_4c-output","inception_4d-3x3_reduce"); 334 | lgraph = connectLayers(lgraph,"inception_4c-output","inception_4d-pool"); 335 | lgraph = connectLayers(lgraph,"inception_4d-relu_3x3","inception_4d-output/in2"); 336 | lgraph = connectLayers(lgraph,"inception_4d-relu_pool_proj","inception_4d-output/in4"); 337 | lgraph = connectLayers(lgraph,"inception_4d-relu_5x5","inception_4d-output/in3"); 338 | lgraph = connectLayers(lgraph,"inception_4d-relu_1x1","inception_4d-output/in1"); 339 | lgraph = connectLayers(lgraph,"inception_4d-output","inception_4e-1x1"); 340 | lgraph = connectLayers(lgraph,"inception_4d-output","inception_4e-3x3_reduce"); 341 | lgraph = connectLayers(lgraph,"inception_4d-output","inception_4e-5x5_reduce"); 342 | lgraph = connectLayers(lgraph,"inception_4d-output","inception_4e-pool"); 343 | lgraph = connectLayers(lgraph,"inception_4e-relu_5x5","inception_4e-output/in3"); 344 | lgraph = connectLayers(lgraph,"inception_4e-relu_pool_proj","inception_4e-output/in4"); 345 | lgraph = connectLayers(lgraph,"inception_4e-relu_3x3","inception_4e-output/in2"); 346 | lgraph = connectLayers(lgraph,"inception_4e-relu_1x1","inception_4e-output/in1"); 347 | lgraph = connectLayers(lgraph,"pool4-3x3_s2","inception_5a-5x5_reduce"); 348 | lgraph = connectLayers(lgraph,"pool4-3x3_s2","inception_5a-pool"); 349 | lgraph = connectLayers(lgraph,"pool4-3x3_s2","inception_5a-3x3_reduce"); 350 | lgraph = connectLayers(lgraph,"pool4-3x3_s2","inception_5a-1x1"); 351 | lgraph = connectLayers(lgraph,"inception_5a-relu_5x5","inception_5a-output/in3"); 352 | lgraph = connectLayers(lgraph,"inception_5a-relu_pool_proj","inception_5a-output/in4"); 353 | lgraph = connectLayers(lgraph,"inception_5a-relu_1x1","inception_5a-output/in1"); 354 | lgraph = 
connectLayers(lgraph,"inception_5a-relu_3x3","inception_5a-output/in2"); 355 | lgraph = connectLayers(lgraph,"inception_5a-output","inception_5b-5x5_reduce"); 356 | lgraph = connectLayers(lgraph,"inception_5a-output","inception_5b-pool"); 357 | lgraph = connectLayers(lgraph,"inception_5a-output","inception_5b-3x3_reduce"); 358 | lgraph = connectLayers(lgraph,"inception_5a-output","inception_5b-1x1"); 359 | lgraph = connectLayers(lgraph,"inception_5b-relu_3x3","inception_5b-output/in2"); 360 | lgraph = connectLayers(lgraph,"inception_5b-relu_1x1","inception_5b-output/in1"); 361 | lgraph = connectLayers(lgraph,"inception_5b-relu_pool_proj","inception_5b-output/in4"); 362 | lgraph = connectLayers(lgraph,"inception_5b-relu_5x5","inception_5b-output/in3"); 363 | 364 | end -------------------------------------------------------------------------------- /images/googlenet_deepNetworkDesigner.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/googlenet/f45fe53cd7a6795007a36218ed8c604a1292c5f7/images/googlenet_deepNetworkDesigner.PNG -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | GoogLeNet is a convolutional neural network that is trained on more than a million images from the ImageNet database. As a result, the network has learned rich feature representations for a wide range of images. The network can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. 4 | 5 | The network has an image input size of 224-by-224-by-3. 6 | 7 | # Usage 8 | 9 | This repository requires [MATLAB](https://www.mathworks.com/products/matlab.html) (R2018b and above) and the [Deep Learning Toolbox](https://www.mathworks.com/products/deep-learning.html). 
10 | 11 | This repository provides three functions: 12 | - googlenetLayers: Creates an untrained network with the network architecture of GoogLeNet 13 | - assembleGoogLeNet: Creates a GoogLeNet network with weights trained on ImageNet data 14 | - googlenetExample: Demonstrates how to classify an image using a trained GoogLeNet network 15 | 16 | To construct an untrained GoogLeNet network to train from scratch, type the following at the MATLAB command line: 17 | ```matlab 18 | lgraph = googlenetLayers; 19 | ``` 20 | The untrained network is returned as a `layerGraph` object. 21 | 22 | To construct a trained GoogLeNet network suitable for use in image classification, type the following at the MATLAB command line: 23 | ```matlab 24 | net = assembleGoogLeNet; 25 | ``` 26 | The trained network is returned as a `DAGNetwork` object. 27 | 28 | To classify an image with the network: 29 | ```matlab 30 | img = imresize(imread("peppers.png"),[224 224]); 31 | predLabel = classify(net, img); 32 | imshow(img); 33 | title(string(predLabel)); 34 | ``` 35 | 36 | # Documentation 37 | 38 | For more information about the GoogLeNet pre-trained model, see the [googlenet](https://www.mathworks.com/help/deeplearning/ref/googlenet.html) function page in the [MATLAB Deep Learning Toolbox documentation](https://www.mathworks.com/help/deeplearning/index.html). 39 | 40 | # Architecture 41 | 42 | GoogLeNet is a DAG network built from Inception modules. Each Inception module applies 1-by-1, 3-by-3, and 5-by-5 convolutions and 3-by-3 max pooling in parallel to the same input, then concatenates the branch outputs along the channel dimension. Computing features at several scales in parallel lets the network represent objects of different sizes, while the 1-by-1 "reduce" convolutions in front of the larger filters keep the number of parameters and computations manageable.
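Each `inception_*` block in `googlenetLayers.m` repeats the same four-branch pattern. As an illustrative sketch, not part of the repository (the input size and filter counts below are made up for brevity), a single Inception module can be assembled like this:

```matlab
% One Inception module: four parallel branches over the same input,
% joined by depth concatenation. Filter counts are illustrative only.
lgraph = layerGraph(imageInputLayer([28 28 64],"Name","in"));

% Branch 1: 1x1 convolution.
lgraph = addLayers(lgraph,[
    convolution2dLayer(1,32,"Name","1x1")
    reluLayer("Name","relu_1x1")]);

% Branch 2: 1x1 "reduce" convolution, then 3x3 convolution.
lgraph = addLayers(lgraph,[
    convolution2dLayer(1,16,"Name","3x3_reduce")
    reluLayer("Name","relu_3x3_reduce")
    convolution2dLayer(3,32,"Name","3x3","Padding",1)
    reluLayer("Name","relu_3x3")]);

% Branch 3: 1x1 "reduce" convolution, then 5x5 convolution.
lgraph = addLayers(lgraph,[
    convolution2dLayer(1,8,"Name","5x5_reduce")
    reluLayer("Name","relu_5x5_reduce")
    convolution2dLayer(5,16,"Name","5x5","Padding",2)
    reluLayer("Name","relu_5x5")]);

% Branch 4: 3x3 max pooling (stride 1), then a 1x1 projection.
lgraph = addLayers(lgraph,[
    maxPooling2dLayer(3,"Name","pool","Padding",1,"Stride",1)
    convolution2dLayer(1,16,"Name","pool_proj")
    reluLayer("Name","relu_pool_proj")]);

lgraph = addLayers(lgraph,depthConcatenationLayer(4,"Name","output"));

% Fan the input out to all four branches, then gather their outputs.
for branch = ["1x1","3x3_reduce","5x5_reduce","pool"]
    lgraph = connectLayers(lgraph,"in",branch);
end
lgraph = connectLayers(lgraph,"relu_1x1","output/in1");
lgraph = connectLayers(lgraph,"relu_3x3","output/in2");
lgraph = connectLayers(lgraph,"relu_5x5","output/in3");
lgraph = connectLayers(lgraph,"relu_pool_proj","output/in4");
```

Every branch preserves the 28-by-28 spatial size, so only the channel counts differ at the concatenation; `analyzeNetwork(lgraph)` or Deep Network Designer shows the resulting diamond-shaped graph.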
43 | 44 | You can explore and edit the network architecture using [Deep Network Designer](https://www.mathworks.com/help/deeplearning/ug/build-networks-with-deep-network-designer.html). 45 | 46 | ![GoogLeNet in Deep Network Designer](images/googlenet_deepNetworkDesigner.PNG "GoogLeNet in Deep Network Designer") 47 | 48 | # GoogLeNet in MATLAB 49 | 50 | This repository demonstrates the construction of an Inception-based deep neural network from scratch in MATLAB. You can use the code in this repository as a foundation for building networks with different numbers of Inception modules. 51 | 52 | You can also create a trained GoogLeNet network from inside MATLAB by installing the Deep Learning Toolbox Model for GoogLeNet Network support package. Type `googlenet` at the command line. If the Deep Learning Toolbox Model for GoogLeNet Network support package is not installed, then the function provides a link to the required support package in the Add-On Explorer. To install the support package, click the link, and then click Install. 53 | 54 | Alternatively, you can download the GoogLeNet pre-trained model from the MathWorks File Exchange, at [Deep Learning Toolbox Model for GoogLeNet Network](https://www.mathworks.com/matlabcentral/fileexchange/64456-deep-learning-toolbox-model-for-googlenet-network). 55 | 56 | You can create an untrained GoogLeNet network from inside MATLAB by importing a trained GoogLeNet network into the Deep Network Designer app and selecting Export > Generate Code. The exported code will generate an untrained network with the network architecture of GoogLeNet. 57 | --------------------------------------------------------------------------------
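One way to use the untrained `googlenetLayers` graph on a custom classification problem is to swap the 1000-class ImageNet head for one sized to your data and call `trainNetwork`. The sketch below is illustrative, not part of the repository: `dataFolder`, the five-class setup, and the training options are placeholders to adapt to your own dataset.

```matlab
lgraph = googlenetLayers;

% Replace the 1000-class head with one sized for your data.
numClasses = 5;  % placeholder: set to the number of classes in your dataset
lgraph = replaceLayer(lgraph,"loss3-classifier", ...
    fullyConnectedLayer(numClasses,"Name","loss3-classifier"));
lgraph = replaceLayer(lgraph,"output",classificationLayer("Name","output"));

% Labeled images organized one subfolder per class (dataFolder is a placeholder).
imds = imageDatastore(dataFolder,"IncludeSubfolders",true,"LabelSource","foldernames");
augimds = augmentedImageDatastore([224 224],imds);  % resize to the 224-by-224 input

opts = trainingOptions("sgdm", ...
    "InitialLearnRate",0.01, ...
    "MaxEpochs",30, ...
    "Shuffle","every-epoch");
net = trainNetwork(augimds,lgraph,opts);
```

Because the weights start from random initialization, training from scratch needs substantially more data and epochs than fine-tuning the pretrained network returned by `assembleGoogLeNet`.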