├── README.md ├── dockerfile ├── imFunctions.py ├── imageFiles ├── betterfit.png ├── conv1.gif ├── conv2.gif ├── convnet2.png ├── graph.png ├── grayfilters.png ├── kernels.png ├── overfit.png ├── pool.gif └── pool2.gif ├── requirements.txt └── visualizingConvnets.ipynb /README.md: -------------------------------------------------------------------------------- 1 | # Visualizing Convnets 2 | 3 | 4 | 5 | This repository has the code from my O'Reilly article [Visualizing Convolutional Neural Networks w/ TensorFlow](https://www.oreilly.com/ideas/visualizing-convolutional-neural-networks) published on September 15th, 2017. 6 | 7 | This code contains tools for building a dataset and a Jupyter notebook for implementing and visualizing a simple convolutional neural network. 8 | 9 | ## Required Packages 10 | * [TensorFlow v1.2](http://www.tensorflow.org/) 11 | * [Jupyter](http://jupyter.org/) 12 | * [NumPy](http://www.numpy.org/) 13 | * [SciPy](https://www.scipy.org/) 14 | * [Matplotlib](http://matplotlib.org/) 15 | * [Pillow](http://python-pillow.org/) 16 | 17 | There are three ways you can install these packages: by using Docker, by using Anaconda Python, or by installing the packages manually yourself. Though not required, if you have an NVIDIA graphics card with a compute capability of 3.0 or greater and at least 3 GB of memory, using GPU-supported TensorFlow will drastically improve performance. Instructions for installing GPU-supported TensorFlow can be found [here](https://github.com/wagonhelm/ML-Workstation-Installation-Guide). 18 | 19 | ### Using Docker 20 | 21 | 1. Download and install [Docker](https://www.docker.com/). If you're using Ubuntu 14.04/16.04, I wrote my own instructions for installing Docker [here](https://github.com/wagonhelm/ML-Workstation-Installation-Guide#install-docker). 22 | 23 | 2. 
Download and unzip [this entire repo from GitHub](https://github.com/wagonhelm/Visualizing-Convnets), either interactively or by entering 24 | ```bash 25 | git clone https://github.com/wagonhelm/Visualizing-Convnets.git 26 | ``` 27 | 28 | 3. Open your terminal and use `cd` to navigate into the repo's directory on your machine 29 | ```bash 30 | cd Visualizing-Convnets 31 | ``` 32 | 33 | 4. To build the Docker image from the Dockerfile, enter 34 | ```bash 35 | docker build -t cnn_dockerfile -f dockerfile . 36 | ``` 37 | If you get a permissions error when running this command, you may need to run it with `sudo`: 38 | ```bash 39 | sudo docker build -t cnn_dockerfile -f dockerfile . 40 | ``` 41 | 42 | 5. Run a container from the image you've just built 43 | ```bash 44 | docker run -it -p 8888:8888 -p 6006:6006 cnn_dockerfile bash 45 | ``` 46 | or 47 | ```bash 48 | sudo docker run -it -p 8888:8888 -p 6006:6006 cnn_dockerfile bash 49 | ``` 50 | if you run into permission problems. 51 | 52 | 6. Launch Jupyter and TensorBoard together using tmux 53 | ```bash 54 | tmux 55 | 56 | jupyter notebook --allow-root 57 | ``` 58 | Press `Ctrl+B` then `C` to open a new tmux window, then 59 | 60 | ```bash 61 | tensorboard --logdir='/tmp/cnn' 62 | ``` 63 | To switch windows, press `Ctrl+B` then the window number. 64 | 65 | Once both Jupyter and TensorBoard are running, navigate in your browser to the URLs shown in the terminal output. If those don't work, try http://localhost:8888/ for Jupyter Notebook and http://localhost:6006/ for TensorBoard. 
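For reference, the dataset tooling in `imFunctions.py` boils down to a per-class train/test split plus one-hot labels. The sketch below is a minimal, self-contained illustration of that logic; the helper names `split_counts` and `one_hot` are illustrative only, not part of this repo's API:

```python
import math
import numpy as np

def split_counts(n_images, test_per):
    # sortImages moves the first floor(n * testPer) files of each class
    # folder into test/ and the remainder into train/.
    n_test = int(math.floor(n_images * test_per))
    return n_test, n_images - n_test

def one_hot(class_index, num_classes):
    # buildDataset stacks one 1 x num_classes row per image, with a
    # single 1 marking the image's class.
    row = np.zeros([1, num_classes])
    row[0, class_index] = 1
    return row
```

For example, `split_counts(200, 0.25)` gives a 50/150 test/train split for a 200-image class folder, and `one_hot(2, 4)` yields `[[0, 0, 1, 0]]`.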
66 | -------------------------------------------------------------------------------- /dockerfile: -------------------------------------------------------------------------------- 1 | FROM gcr.io/tensorflow/tensorflow:1.2.1-devel-py3 2 | RUN apt-get update && apt-get install -y git-core tmux 3 | RUN git clone https://github.com/wagonhelm/Visualizing-Convnets.git /notebooks/cnn 4 | WORKDIR "/notebooks" 5 | RUN pip install -r ./cnn/requirements.txt 6 | CMD ["/run_jupyter.sh"] 7 | -------------------------------------------------------------------------------- /imFunctions.py: -------------------------------------------------------------------------------- 1 | import os 2 | import math 3 | import tarfile 4 | from six.moves.urllib.request import urlretrieve 5 | import numpy as np 6 | import scipy.ndimage 7 | from scipy.misc import imread 8 | 9 | 10 | def downloadImages(filename, expectedSize, force=False): 11 | url = 'http://www.robots.ox.ac.uk/~vgg/data/pets/data/' 12 | path = os.getcwd() 13 | dest_filename = os.path.join(path, filename) 14 | 15 | if os.path.exists(dest_filename): 16 | statinfo = os.stat(dest_filename) 17 | if statinfo.st_size != expectedSize: 18 | force = True 19 | print("File '{}' is not the expected size, forcing download".format(filename)) 20 | else: 21 | print("File '{}' already downloaded :)".format(filename)) 22 | 23 | if force or not os.path.exists(dest_filename): 24 | print('Attempting to download: {}'.format(filename)) 25 | filename, _ = urlretrieve(url + filename, dest_filename) 26 | print("Downloaded '{}' successfully".format(filename)) 27 | 28 | def maybeExtract(filename, force=False): 29 | root = os.path.splitext(os.path.splitext(filename)[0])[0] 30 | if os.path.isdir(root) and not force: 31 | print("{} already present - skipping extraction of {}".format(root, filename)) 32 | 33 | else: 34 | print("Extracting data for {}:".format(root)) 35 | tar = tarfile.open(filename) 36 | tar.extractall(os.getcwd()) 37 | tar.close() 38 | 39 | def 
sortImages(testPer): 40 | numbers = ['0','1','2','3','4','5','6','7','8','9'] 41 | path1 = os.getcwd()+'/images/' 42 | listing = os.listdir(path1) 43 | if len(listing) == 37: 44 | print("Images already sorted") 45 | return 46 | 47 | for i in listing: 48 | folder = '' 49 | for ii in i: 50 | if ii in numbers: 51 | break 52 | else: 53 | folder += ii 54 | 55 | folder = folder.replace("_","") 56 | if not os.path.exists(path1+folder): 57 | os.makedirs(path1+folder) 58 | os.rename(path1+i, path1+folder+'/'+i) 59 | 60 | listing = os.listdir(path1) 61 | 62 | for i in listing: 63 | path2 = path1+i+'/' 64 | listing2 = os.listdir(path2) 65 | 66 | if not os.path.exists(path2+'train'): 67 | os.makedirs(path2+'train') 68 | if not os.path.exists(path2+'test'): 69 | os.makedirs(path2+'test') 70 | 71 | for ii in listing2[0:int(math.floor(len(listing2)*testPer))]: 72 | os.rename(path2+ii, path2+'test'+'/'+ii) 73 | for ii in listing2[int(math.floor(len(listing2)*testPer)):]: 74 | os.rename(path2+ii, path2+'train'+'/'+ii) 75 | print("Images sorted") 76 | 77 | def buildDataset(): 78 | 79 | dataset = [] 80 | path1 = os.getcwd()+'/images/' 81 | listing = os.listdir(path1) 82 | 83 | for i in listing: 84 | choice = input("Do you want to use {} in your dataset? 
[y/n/break]".format(i)) 85 | if choice.lower() == 'y': 86 | dataset.append(i) 87 | elif choice.lower() == 'break': 88 | break 89 | 90 | train_x = np.zeros([1, 224, 224, 3]) 91 | train_y = np.zeros([1,len(dataset)]) 92 | classes = len(dataset) 93 | classLabels = [] 94 | 95 | oneHotCounter = 0 96 | 97 | for i in dataset: 98 | impath = os.getcwd()+'/images/'+i+'/train/' 99 | listing2 = os.listdir(impath) 100 | classLabels.append(i) 101 | for ii in listing2: 102 | img = scipy.misc.imresize(imread(impath+ii).astype(np.float32), [224,224]) 103 | img = img.reshape([1,224,224,3]) 104 | train_x = np.vstack((train_x,img)) 105 | onehot = np.zeros([1,len(dataset)]) 106 | onehot[0,oneHotCounter] = 1 107 | train_y = np.vstack((train_y, onehot)) 108 | 109 | oneHotCounter += 1 110 | 111 | mean = np.mean(train_x[1:], axis=0)  # per-pixel mean over real images, skipping the zero seed row 112 | train_x -= mean 113 | 114 | test_x = np.zeros(shape=[1, 224, 224, 3]) 115 | test_y = np.zeros([1,len(dataset)]) 116 | 117 | oneHotCounter = 0 118 | 119 | for i in dataset: 120 | impath = os.getcwd()+'/images/'+i+'/test/' 121 | listing2 = os.listdir(impath) 122 | 123 | for ii in listing2: 124 | img = scipy.misc.imresize(imread(impath+ii).astype(np.float32), [224,224]) 125 | img = img.reshape([1,224,224,3]) 126 | test_x = np.vstack((test_x,img)) 127 | onehot = np.zeros([1,len(dataset)]) 128 | onehot[0,oneHotCounter] = 1 129 | test_y = np.vstack((test_y, onehot)) 130 | 131 | print("{} = {}".format(i,onehot)) 132 | oneHotCounter += 1 133 | print('Total Train Size: {} Total Test Size: {} Total # Classes: {}'.format(train_x[1:].shape[0], test_x[1:].shape[0], classes)) 134 | test_x -= mean 135 | return train_x[1:], train_y[1:], test_x[1:], test_y[1:], classes, classLabels 136 | 137 | def shuffle(a, b): 138 | assert len(a) == len(b) 139 | p = np.random.permutation(len(a)) 140 | return a[p], b[p] 141 | 142 | -------------------------------------------------------------------------------- /imageFiles/betterfit.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/betterfit.png -------------------------------------------------------------------------------- /imageFiles/conv1.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/conv1.gif -------------------------------------------------------------------------------- /imageFiles/conv2.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/conv2.gif -------------------------------------------------------------------------------- /imageFiles/convnet2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/convnet2.png -------------------------------------------------------------------------------- /imageFiles/graph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/graph.png -------------------------------------------------------------------------------- /imageFiles/grayfilters.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/grayfilters.png -------------------------------------------------------------------------------- /imageFiles/kernels.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/kernels.png -------------------------------------------------------------------------------- /imageFiles/overfit.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/overfit.png -------------------------------------------------------------------------------- /imageFiles/pool.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/pool.gif -------------------------------------------------------------------------------- /imageFiles/pool2.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagonhelm/Visualizing-Convnets/001e0838d0085f6932eeb2f3850a2111848d8be5/imageFiles/pool2.gif -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy 2 | scipy 3 | matplotlib 4 | pillow 5 | --------------------------------------------------------------------------------