├── README.md
├── cheese.csv
├── deep-learning-dictionary.md
├── notes.md
├── pycon-tensorboard-refs.md
└── tensorboard-explorations.ipynb

/README.md:
--------------------------------------------------------------------------------
# Keras Tutorial

This repo contains the curriculum to support a three-hour introductory workshop on building deep learning models with Keras and TensorFlow. For more samples, an engineering blog, and extended discussion, I sincerely recommend checking out the [Keras website](https://keras.io/), as well as Francois Chollet's phenomenal book, [_Deep Learning with Python_](https://www.manning.com/books/deep-learning-with-python).

### _HOUR 1:_ INTRODUCTION TO DEEP LEARNING
* What is _deep learning_, and why should you care?
* What is _TensorFlow_, and why can it be a pain to use?
* What is _Keras_, and why is it delightful?

### _HOUR 2:_ GETTING STARTED WITH FEEDFORWARD NEURAL NETWORKS
* Building a feedforward neural network by hand, and with `numpy`
* Incorporating backpropagation by hand and with `numpy`
* Interpolating values for hidden nodes and building intuition

### _HOUR 3:_ GETTING STARTED WITH KERAS
* Specifying the model (`Sequential()`, `.add()`-ing layers)
* Compiling the model (`.compile()`, adding optimizers, understanding `loss`)
* Fitting the model (`.fit()` with predictors and targets)
* Optimizing the model (changing architectures: nodes, layers)

For questions about the coursework, models, data, and implementation, please contact [Paige Bailey](mailto:paige.bailey@microsoft.com).
--------------------------------------------------------------------------------
/cheese.csv:
--------------------------------------------------------------------------------
Acetic,H2S,Lactic,Taste
4.543,3.135,0.86,12.3
5.159,5.043,1.53,20.9
5.366,5.438,1.57,39
5.759,7.496,1.81,47.9
4.663,3.807,0.99,5.6
5.697,7.601,1.09,25.9
5.892,8.726,1.29,37.3
6.078,7.966,1.78,21.9
4.898,3.85,1.29,18.1
5.242,4.174,1.58,21
5.74,6.142,1.68,34.9
6.446,7.908,1.9,57.2
4.477,2.996,1.06,0.7
5.236,4.942,1.3,25.9
6.151,6.752,1.52,54.9
6.365,9.588,1.74,40.9
4.787,3.912,1.16,15.9
5.412,4.7,1.49,6.4
5.247,6.174,1.63,18
5.438,9.064,1.99,38.9
4.564,4.949,1.15,14
5.298,5.22,1.33,15.2
5.455,9.242,1.44,32
5.855,10.199,2,56.71
5.366,3.664,1.31,16.8
6.043,3.219,1.46,11.6
6.458,6.962,1.72,26.5
5.328,3.912,1.25,0.7
5.802,6.685,1.08,13.4
6.176,4.787,1.25,5.5
--------------------------------------------------------------------------------
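The workshop materials do not show how `cheese.csv` is meant to be used, but its columns (three chemical measurements and a `Taste` score) suggest a small regression exercise. The sketch below is one possible reading rather than part of the original curriculum: it walks the Hour 3 workflow (specify, compile, fit) on this data. The file path, layer sizes, and epoch count are assumptions, and it assumes `pandas` plus the same `tensorflow.python.keras` import path the notebook uses (with TensorFlow 2.x you would import from `tensorflow.keras` instead).

```python
# A possible use of cheese.csv: predict "Taste" from the three chemical
# measurements with a tiny feedforward network, following the Hour 3 workflow
# (specify -> compile -> fit). Paths and hyperparameters here are illustrative.
import pandas as pd
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense

# Load the data and split it into predictors and target.
cheese = pd.read_csv('cheese.csv')
X = cheese[['Acetic', 'H2S', 'Lactic']].values
y = cheese['Taste'].values

# Specify the model: one small hidden layer, one linear output for regression.
model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(3,)))
model.add(Dense(1))

# Compile the model: mean squared error is a reasonable regression loss.
model.compile(optimizer='adam', loss='mse')

# Fit the model to the 30 cheese samples and peek at a few predictions.
model.fit(X, y, epochs=200, batch_size=4, verbose=0)
print(model.predict(X[:3]))
```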
/deep-learning-dictionary.md:
--------------------------------------------------------------------------------
> "Don't use a five-dollar word when a fifty-cent one will do." - _Mark Twain_

* **gradient descent**: an optimization algorithm that nudges a model's parameters in the direction that most steeply reduces the loss, one small step at a time.
* **stochastic**: involving randomness; _stochastic_ gradient descent estimates each step from a randomly drawn mini-batch of data rather than the full dataset.
* **Symbolic AI**: the older, rule-based approach to artificial intelligence, in which humans hand-write explicit logic instead of letting a model learn patterns from data.
* **tensor**: a multi-dimensional array of numbers; scalars, vectors, and matrices are 0-, 1-, and 2-dimensional tensors.
* `numpy`: the standard Python library for fast numerical computation on multi-dimensional arrays.
* **batch normalization**: a layer that normalizes its inputs over each mini-batch, which tends to stabilize and speed up the training of deep networks.
* **Bayesian**: describing methods that represent uncertainty as probability distributions and update those distributions with observed data via Bayes' theorem.
--------------------------------------------------------------------------------
/notes.md:
--------------------------------------------------------------------------------
* [MNIST Digit Recognition, from _Machine Learning Mastery_](https://machinelearningmastery.com/handwritten-digit-recognition-using-convolutional-neural-networks-python-keras/)
--------------------------------------------------------------------------------
/pycon-tensorboard-refs.md:
--------------------------------------------------------------------------------
* [Accessing TensorBoard from cloud VM](https://stackoverflow.com/questions/42277280/accessing-tensorboard-on-aws)
* [TensorBoard API - Build Your Own Extensions](https://ai.googleblog.com/2017/09/build-your-own-machine-learning.html)
* [`tf.keras.callbacks` doesn't support all of the TensorBoard plugins](https://medium.com/@akionakas/precision-recall-curve-with-keras-cd92647685e1)
* [Graph visualization with TensorBoard](https://www.tensorflow.org/programmers_guide/graph_viz)
* [Visualizing Learning with TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard)
* [Keras callbacks with TensorBoard](https://stackoverflow.com/questions/42112260/how-do-i-use-the-tensorboard-callback-of-keras)
* [TensorBoard Tutorial using something _other_ than MNIST](http://edwardlib.org/tutorials/tensorboard)
* [TensorBoard Quickstart](https://medium.com/@anthony_sarkis/tensorboard-quick-start-in-5-minutes-e3ec69f673af)
* [Debugging NN with TensorBoard](https://www.analyticsvidhya.com/blog/2017/07/debugging-neural-network-with-tensorboard/)
* [Blogpost (using MNIST)](https://jhui.github.io/2017/03/12/TensorBoard-visualize-your-learning/)
* [Debugging with TensorBoard plugins](https://www.youtube.com/watch?v=XcHWLsVmHvk)
* [Plug-in How-To's](https://github.com/tensorflow/tensorboard-plugin-example/blob/master/README.md)
* [Using TensorBoard with PyTorch](https://github.com/lanpa/tensorboard-pytorch)
* [TensorBoard with PyTorch blog post](http://www.erogol.com/use-tensorboard-pytorch/)
* [Structuring graphs in TensorBoard with Keras](https://stackoverflow.com/questions/45309153/structure-a-keras-tensorboard-graph)
--------------------------------------------------------------------------------
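Most of the references above assume you already have a TensorBoard instance running against your log directory. As a quick orientation (this is a sketch, not something the notebook below does for you), one way to start TensorBoard from Python against the `./Graph` directory the notebook writes to is shown here; the port number is an arbitrary choice, and `tensorboard` is assumed to be installed alongside TensorFlow.

```python
# Launch TensorBoard as a subprocess, pointed at the log directory used by the
# notebook's TensorBoard callback. Running `tensorboard --logdir ./Graph` in a
# terminal has the same effect; then open http://localhost:6006 in a browser.
import subprocess

tensorboard_proc = subprocess.Popen(
    ['tensorboard', '--logdir', './Graph', '--port', '6006']
)
print('TensorBoard should now be serving at http://localhost:6006')

# When you are done exploring:
# tensorboard_proc.terminate()
```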
/tensorboard-explorations.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# TensorBoard for Artists\n",
    "\n",
    "TensorBoard is an incredibly powerful tool for exploring and understanding the deep neural networks you create using [TensorFlow](https://www.tensorflow.org/), PyTorch, or other high-performance numerical computation libraries. This notebook is an attempt to show how one would use TensorBoard with [Keras](https://keras.io/), a friendly, high-level neural networks API written in Python.\n",
    "\n",
    "![](https://www.tensorflow.org/images/graph_vis_animation.gif)\n",
    "\n",
    "#### Requirements\n",
    "\n",
    "* TensorFlow version ____\n",
    "\n",
    "#### Overview of the example\n",
    "\n",
    "This example trains a one-dimensional convolutional neural network to classify IMDB movie reviews as positive or negative, and logs the training run so that it can be explored in TensorBoard.\n",
    "\n",
    "#### Import required libraries\n",
    "\n",
    "To begin, we will import portions of several `keras` libraries.\n",
    "\n",
    "* `sequence`, for padding the reviews to a fixed length\n",
    "* `Sequential`, the container for a linear stack of layers\n",
    "* `Dense`, `Dropout`, and `Activation`\n",
    "* `Embedding`\n",
    "* `Conv1D`, `GlobalMaxPooling1D`\n",
    "* `imdb`, a built-in dataset of movie reviews labeled by sentiment\n",
    "* `TensorBoard`, the callback that writes logs for TensorBoard"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from __future__ import print_function\n",
    "\n",
    "from tensorflow.python.keras.preprocessing import sequence\n",
    "from tensorflow.python.keras.models import Sequential\n",
    "from tensorflow.python.keras.layers import Dense, Dropout, Activation\n",
    "from tensorflow.python.keras.layers import Embedding\n",
    "from tensorflow.python.keras.layers import Conv1D, GlobalMaxPooling1D\n",
    "from tensorflow.python.keras.datasets import imdb\n",
    "from tensorflow.python.keras.callbacks import TensorBoard"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Set parameters\n",
    "\n",
    "Each of these hyperparameters is used later when building and training the model:\n",
    "\n",
    "* `max_features`: size of the vocabulary; only the 5,000 most frequent words are kept\n",
    "* `maxlen`: every review is truncated or padded to 400 tokens\n",
    "* `batch_size`: number of samples per gradient update\n",
    "* `embedding_dims`: dimensionality of the learned word embeddings\n",
    "* `filters`: number of convolutional filters\n",
    "* `kernel_size`: width, in tokens, of each convolutional filter\n",
    "* `hidden_dims`: number of units in the fully connected layers\n",
    "* `epochs`: number of passes over the training data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "max_features = 5000\n",
    "maxlen = 400\n",
    "batch_size = 32\n",
    "embedding_dims = 50\n",
    "filters = 250\n",
    "kernel_size = 3\n",
    "hidden_dims = 250\n",
    "epochs = 8"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Load IMDB data\n",
    "\n",
    "Each review arrives as a list of integer word indices. Passing `num_words = max_features` keeps only the most frequent words, and `pad_sequences` pads or truncates every review to `maxlen` tokens so that the whole dataset can be stacked into a single tensor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Loading data...')\n",
    "(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words = max_features)\n",
    "\n",
    "print(len(x_train), 'train sequences')\n",
    "print(len(x_test), 'test sequences')\n",
    "\n",
    "print('Pad sequences (samples x time)')\n",
    "x_train = sequence.pad_sequences(x_train, maxlen = maxlen)\n",
    "x_test = sequence.pad_sequences(x_test, maxlen = maxlen)\n",
    "\n",
    "print('x_train shape:', x_train.shape)\n",
    "print('x_test shape:', x_test.shape)"
   ]
  },
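  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Optional: peek at the data\n",
    "\n",
    "This cell is not part of the original workshop flow; it is a small, optional sanity check. Keras stores each review as integer word indices and reserves 0, 1, and 2 for padding, start-of-sequence, and unknown tokens, so the real vocabulary is offset by 3. Decoding one padded review back into words shows exactly what the network will see."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build an index -> word lookup, accounting for the three reserved indices.\n",
    "word_index = imdb.get_word_index()\n",
    "index_to_word = {index + 3: word for word, index in word_index.items()}\n",
    "index_to_word.update({0: '<PAD>', 1: '<START>', 2: '<UNK>'})\n",
    "\n",
    "# Decode the first (padded) training review back into readable text.\n",
    "print(' '.join(index_to_word.get(i, '<UNK>') for i in x_train[0]))\n",
    "print('Label:', y_train[0])"
   ]
  },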
"metadata": {}, 129 | "outputs": [], 130 | "source": [ 131 | "tb_callbacks = TensorBoard( log_dir = './Graph',\n", 132 | " histogram_freq = 100,\n", 133 | " write_grads = True,\n", 134 | " write_graph = True)" 135 | ] 136 | }, 137 | { 138 | "cell_type": "markdown", 139 | "metadata": {}, 140 | "source": [ 141 | "#### Build a model\n", 142 | "\n", 143 | "// Explanation" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": null, 149 | "metadata": {}, 150 | "outputs": [], 151 | "source": [ 152 | "model = Sequential()\n", 153 | "\n", 154 | "model.add(Embedding(max_features,\n", 155 | " embedding_dims,\n", 156 | " input_length = maxlen))\n", 157 | "\n", 158 | "model.add(Dropout(0.2))\n", 159 | "\n", 160 | "model.add(Conv1D(filters,\n", 161 | " kernel_size,\n", 162 | " padding = 'valid',\n", 163 | " activation = 'relu',\n", 164 | " strides = 1))\n", 165 | "\n", 166 | "model.add(GlobalMaxPooling1D())\n", 167 | "\n", 168 | "model.add(Dense(hidden_dims))\n", 169 | "model.add(Dropout(0.2))\n", 170 | "model.add(Activation('relu'))\n", 171 | "\n", 172 | "model.add(Dense(hidden_dims))\n", 173 | "model.add(Dropout(0.2))\n", 174 | "model.add(Activation('relu'))\n", 175 | "\n", 176 | "model.add(Dense(1))\n", 177 | "model.add(Activation('sigmoid'))" 178 | ] 179 | }, 180 | { 181 | "cell_type": "markdown", 182 | "metadata": {}, 183 | "source": [ 184 | "#### Compile and fit the model" 185 | ] 186 | }, 187 | { 188 | "cell_type": "code", 189 | "execution_count": null, 190 | "metadata": {}, 191 | "outputs": [], 192 | "source": [ 193 | "model.compile(loss = 'binary_crossentropy',\n", 194 | " optimizer = 'adam',\n", 195 | " metrics = ['accuracy'])\n", 196 | "\n", 197 | "model.fit(x_train, y_train,\n", 198 | " batch_size = batch_size,\n", 199 | " epochs = epochs,\n", 200 | " validation_data = (x_test, y_test),\n", 201 | " callbacks = [tb_callbacks])" 202 | ] 203 | } 204 | ], 205 | "metadata": { 206 | "kernelspec": { 207 | "display_name": "Python 3.6", 208 | "language": "python", 209 | "name": "python36" 210 | }, 211 | "language_info": { 212 | "codemirror_mode": { 213 | "name": "ipython", 214 | "version": 3 215 | }, 216 | "file_extension": ".py", 217 | "mimetype": "text/x-python", 218 | "name": "python", 219 | "nbconvert_exporter": "python", 220 | "pygments_lexer": "ipython3", 221 | "version": "3.6.3" 222 | } 223 | }, 224 | "nbformat": 4, 225 | "nbformat_minor": 2 226 | } 227 | --------------------------------------------------------------------------------