├── Chapter01 ├── Chapter1_TF2_alpha.ipynb ├── checkpoint ├── vars-1.data-00000-of-00001 └── vars-1.index ├── Chapter02 ├── Chapter2_Keras_ModelBySubclassingModel_TF2_alpha.ipynb ├── Chapter2_Keras_ModelUsingFunctionalAPI_TF2_alpha.ipynb ├── Chapter2_Keras_UseOfDataPipelines_TF2_alpha.ipynb └── Chapter2_Keras_sequential_models_TF2_alpha.ipynb ├── Chapter03 ├── Chapter3_ANNTech_TF2_alpha.ipynb ├── dataset.txt ├── file1.txt ├── file2.txt ├── mycsvfile.txt ├── myfile.tfrecords ├── report.txt ├── size_1000.csv └── students.tfrecords ├── Chapter04 ├── Chapter4_BostonLinReg_TF2_alpha.ipynb ├── Chapter4_IrisKNN_TF2_alpha.ipynb ├── Chapter4_LinearRegression_TF2_alpha.ipynb ├── Chapter4_LogisticRegression_TF2_alpha.ipynb └── model.weights.best.hdf5 ├── Chapter05 ├── Chapter5_Autoencoder_TF2_alpha.ipynb └── Chapter5_Denoiser_TF2_alpha.ipynb ├── Chapter06 ├── CHapter6_QDraw_TF2_alpha.ipynb └── Chapter6_CIFAR10_TF2_alpha_V2.ipynb ├── Chapter07 ├── Chapter7_NeuralStyleTransfer_TF2_alpha.ipynb └── tmp │ └── nst │ ├── elephant.jpg │ ├── skyscrapers.jpg │ ├── sunset.jpg │ └── zebra.jpg ├── Chapter08 ├── Chapter8_RNN_TF2_alpha.ipynb └── GreatExpectations.txt ├── Chapter09 ├── Chapter9_IMDb_TF2_alpha.ipynb └── Chapter9_fashion_estimator_TF2_alpha.ipynb ├── LICENSE └── README.md /Chapter01/checkpoint: -------------------------------------------------------------------------------- 1 | model_checkpoint_path: "vars-1" 2 | all_model_checkpoint_paths: "vars-1" 3 | -------------------------------------------------------------------------------- /Chapter01/vars-1.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter01/vars-1.data-00000-of-00001 -------------------------------------------------------------------------------- /Chapter01/vars-1.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter01/vars-1.index -------------------------------------------------------------------------------- /Chapter02/Chapter2_Keras_ModelBySubclassingModel_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Keras model by subclassing Model" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "mnist = tf.keras.datasets.mnist\n", 18 | "(train_x,train_y), (test_x, test_y) = mnist.load_data()\n", 19 | "train_x, test_x = train_x/255.0, test_x/255.0\n", 20 | "epochs=10" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": null, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "class MyModel(tf.keras.Model):\n", 30 | " def __init__(self, num_classes=10):\n", 31 | " super(MyModel, self).__init__()\n", 32 | " # Define your layers here.\n", 33 | " # no Input placeholder is needed here; a subclassed model infers its input shape on the first call\n", 34 | " self.x0 = tf.keras.layers.Flatten()\n", 35 | " self.x1 = tf.keras.layers.Dense(512, activation='relu',name='d1')\n", 36 | " self.x2 = tf.keras.layers.Dropout(0.2)\n", 37 | " self.predictions = tf.keras.layers.Dense(10,activation=tf.nn.softmax, name='d2')\n", 38 | " def call(self, 
inputs):\n", 39 | " # This is where to define your forward pass using the functional API style\n", 40 | " # using layers previously defined in `__init__`\n", 41 | " x = self.x0(inputs)\n", 42 | " x = self.x1(x)\n", 43 | " x = self.x2(x)\n", 44 | " return self.predictions(x)\n" 45 | ] 46 | }, 47 | { 48 | "cell_type": "code", 49 | "execution_count": null, 50 | "metadata": {}, 51 | "outputs": [], 52 | "source": [ 53 | "model4 = MyModel()\n" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": null, 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "optimiser = tf.keras.optimizers.Adam()\n", 63 | "model4.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])\n", 64 | "model4.fit(train_x, train_y, batch_size=32, epochs=epochs)\n", 65 | " #model4.evaluate(test_x, test_y)" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "model4.evaluate(test_x, test_y)" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": null, 80 | "metadata": {}, 81 | "outputs": [], 82 | "source": [ 83 | "model4.summary()" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": null, 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [] 92 | } 93 | ], 94 | "metadata": { 95 | "kernelspec": { 96 | "display_name": "Python 3", 97 | "language": "python", 98 | "name": "python3" 99 | }, 100 | "language_info": { 101 | "codemirror_mode": { 102 | "name": "ipython", 103 | "version": 3 104 | }, 105 | "file_extension": ".py", 106 | "mimetype": "text/x-python", 107 | "name": "python", 108 | "nbconvert_exporter": "python", 109 | "pygments_lexer": "ipython3", 110 | "version": "3.6.7" 111 | } 112 | }, 113 | "nbformat": 4, 114 | "nbformat_minor": 2 115 | } 116 | -------------------------------------------------------------------------------- /Chapter02/Chapter2_Keras_ModelUsingFunctionalAPI_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Keras Model using Functional API" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "mnist = tf.keras.datasets.mnist\n", 18 | "(train_x,train_y), (test_x, test_y) = mnist.load_data()" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": 2, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "train_x, test_x = train_x/255.0, test_x/255.0\n", 28 | "epochs=10" 29 | ] 30 | }, 31 | { 32 | "cell_type": "code", 33 | "execution_count": 3, 34 | "metadata": {}, 35 | "outputs": [], 36 | "source": [ 37 | "# use keras functional API\n", 38 | "inputs = tf.keras.Input(shape=(28,28)) # Returns a placeholder tensor\n", 39 | "x = tf.keras.layers.Flatten()(inputs)\n", 40 | "x = tf.keras.layers.Dense(512, activation='relu',name='d1')(x)\n", 41 | "x = tf.keras.layers.Dropout(0.2)(x)\n", 42 | "predictions = tf.keras.layers.Dense(10,activation=tf.nn.softmax, name='d2')(x)" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": null, 48 | "metadata": {}, 49 | "outputs": [], 50 | "source": [] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": 4, 55 | "metadata": {}, 56 | "outputs": [], 57 | "source": [ 58 | "model3 = tf.keras.Model(inputs=inputs, outputs=predictions)" 59 | ] 60 | }, 61 | { 62 | "cell_type": 
"code", 63 | "execution_count": 5, 64 | "metadata": {}, 65 | "outputs": [ 66 | { 67 | "name": "stdout", 68 | "output_type": "stream", 69 | "text": [ 70 | "Model: \"model\"\n", 71 | "_________________________________________________________________\n", 72 | "Layer (type) Output Shape Param # \n", 73 | "=================================================================\n", 74 | "input_1 (InputLayer) [(None, 28, 28)] 0 \n", 75 | "_________________________________________________________________\n", 76 | "flatten (Flatten) (None, 784) 0 \n", 77 | "_________________________________________________________________\n", 78 | "d1 (Dense) (None, 512) 401920 \n", 79 | "_________________________________________________________________\n", 80 | "dropout (Dropout) (None, 512) 0 \n", 81 | "_________________________________________________________________\n", 82 | "d2 (Dense) (None, 10) 5130 \n", 83 | "=================================================================\n", 84 | "Total params: 407,050\n", 85 | "Trainable params: 407,050\n", 86 | "Non-trainable params: 0\n", 87 | "_________________________________________________________________\n" 88 | ] 89 | } 90 | ], 91 | "source": [ 92 | "model3.summary()" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": 6, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "optimiser = tf.keras.optimizers.Adam()\n", 102 | "model3.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": 7, 108 | "metadata": {}, 109 | "outputs": [ 110 | { 111 | "name": "stdout", 112 | "output_type": "stream", 113 | "text": [ 114 | "Epoch 1/10\n", 115 | "60000/60000 [==============================] - 3s 57us/sample - loss: 0.2207 - accuracy: 0.9354\n", 116 | "Epoch 2/10\n", 117 | "60000/60000 [==============================] - 3s 56us/sample - loss: 0.0979 - accuracy: 0.9700\n", 118 | "Epoch 3/10\n", 119 | "60000/60000 [==============================] - 3s 56us/sample - loss: 0.0674 - accuracy: 0.9790\n", 120 | "Epoch 4/10\n", 121 | "60000/60000 [==============================] - 3s 55us/sample - loss: 0.0533 - accuracy: 0.9833\n", 122 | "Epoch 5/10\n", 123 | "60000/60000 [==============================] - 3s 56us/sample - loss: 0.0442 - accuracy: 0.9854\n", 124 | "Epoch 6/10\n", 125 | "60000/60000 [==============================] - 3s 55us/sample - loss: 0.0358 - accuracy: 0.9880\n", 126 | "Epoch 7/10\n", 127 | "60000/60000 [==============================] - 3s 56us/sample - loss: 0.0312 - accuracy: 0.9896\n", 128 | "Epoch 8/10\n", 129 | "60000/60000 [==============================] - 3s 56us/sample - loss: 0.0288 - accuracy: 0.9904\n", 130 | "Epoch 9/10\n", 131 | "60000/60000 [==============================] - 3s 55us/sample - loss: 0.0231 - accuracy: 0.9921\n", 132 | "Epoch 10/10\n", 133 | "60000/60000 [==============================] - 3s 55us/sample - loss: 0.0217 - accuracy: 0.9931\n" 134 | ] 135 | }, 136 | { 137 | "data": { 138 | "text/plain": [ 139 | "" 140 | ] 141 | }, 142 | "execution_count": 7, 143 | "metadata": {}, 144 | "output_type": "execute_result" 145 | } 146 | ], 147 | "source": [ 148 | "model3.fit(train_x, train_y, batch_size=32, epochs=epochs)" 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": 8, 154 | "metadata": {}, 155 | "outputs": [ 156 | { 157 | "name": "stdout", 158 | "output_type": "stream", 159 | "text": [ 160 | "10000/10000 [==============================] - 0s 27us/sample - loss: 
0.0712 - accuracy: 0.9826\n" 161 | ] 162 | }, 163 | { 164 | "data": { 165 | "text/plain": [ 166 | "[0.07122680197921145, 0.9826]" 167 | ] 168 | }, 169 | "execution_count": 8, 170 | "metadata": {}, 171 | "output_type": "execute_result" 172 | } 173 | ], 174 | "source": [ 175 | "model3.evaluate(test_x, test_y)" 176 | ] 177 | }, 178 | { 179 | "cell_type": "code", 180 | "execution_count": null, 181 | "metadata": {}, 182 | "outputs": [], 183 | "source": [] 184 | } 185 | ], 186 | "metadata": { 187 | "kernelspec": { 188 | "display_name": "Python 3", 189 | "language": "python", 190 | "name": "python3" 191 | }, 192 | "language_info": { 193 | "codemirror_mode": { 194 | "name": "ipython", 195 | "version": 3 196 | }, 197 | "file_extension": ".py", 198 | "mimetype": "text/x-python", 199 | "name": "python", 200 | "nbconvert_exporter": "python", 201 | "pygments_lexer": "ipython3", 202 | "version": "3.6.7" 203 | } 204 | }, 205 | "nbformat": 4, 206 | "nbformat_minor": 2 207 | } 208 | -------------------------------------------------------------------------------- /Chapter02/Chapter2_Keras_UseOfDataPipelines_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Use of a tf.data.Dataset pipeline" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "import datetime as dt\n" 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": null, 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "mnist = tf.keras.datasets.mnist\n", 27 | "(train_x,train_y), (test_x, test_y) = mnist.load_data()" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "train_x, test_x = tf.cast(train_x/255.0, tf.float32), tf.cast(test_x/255.0, tf.float32)\n", 37 | "train_y, test_y = tf.cast(train_y,tf.int64),tf.cast(test_y,tf.int64)\n", 38 | "epochs=10" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": null, 44 | "metadata": {}, 45 | "outputs": [], 46 | "source": [ 47 | "batch_size = 32" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "metadata": {}, 54 | "outputs": [], 55 | "source": [ 56 | "train_dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(batch_size).shuffle(10000)\n", 57 | "train_dataset = train_dataset.map(lambda x, y: (tf.image.random_flip_left_right(x), y))\n", 58 | "train_dataset = train_dataset.repeat()\n", 59 | "\n", 60 | "test_dataset = tf.data.Dataset.from_tensor_slices((test_x, test_y)).batch(batch_size).shuffle(10000)\n", 61 | "test_dataset = test_dataset.repeat()\n", 62 | "\n" 63 | ] 64 | }, 65 | { 66 | "cell_type": "code", 67 | "execution_count": null, 68 | "metadata": {}, 69 | "outputs": [], 70 | "source": [ 71 | "model5 = tf.keras.models.Sequential([\n", 72 | " tf.keras.layers.Flatten(),\n", 73 | " tf.keras.layers.Dense(512,activation=tf.nn.relu),\n", 74 | " tf.keras.layers.Dropout(0.2),\n", 75 | " tf.keras.layers.Dense(10,activation=tf.nn.softmax)\n", 76 | "])" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": null, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "steps_per_epoch = len(train_x)//batch_size #required because of the repeat() on the dataset\n", 86 | "optimiser = tf.keras.optimizers.Adam()\n", 87 | "model5.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])" 88 | ] 89 | },
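{ "cell_type": "markdown", "metadata": {}, "source": [ "*(Added cell, not in the original notebook.)* Because repeat() makes train_dataset endless, fit needs an explicit steps_per_epoch below; with the 60,000 MNIST training images and a batch size of 32, one epoch is 60000 // 32 = 1875 steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# added sanity check (not in the original notebook): one epoch should cover the whole training set\n", "print(steps_per_epoch) # 60000 // 32 = 1875\n", "print(steps_per_epoch * batch_size) # 1875 * 32 = 60000 images per epoch\n" ] },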
90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": {}, 94 | "outputs": [], 95 | "source": [ 96 | "model5.fit(train_dataset, epochs=epochs, steps_per_epoch = steps_per_epoch)\n" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "execution_count": null, 102 | "metadata": {}, 103 | "outputs": [], 104 | "source": [ 105 | "model5.evaluate(test_dataset,steps=10)" 106 | ] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "execution_count": null, 111 | "metadata": {}, 112 | "outputs": [], 113 | "source": [ 114 | "\n", 115 | "callbacks = [\n", 116 | " # Write TensorBoard logs to the ./log directory\n", 117 | " tf.keras.callbacks.TensorBoard(log_dir='./log/{}'.format(dt.datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")))\n", 118 | "]" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": null, 124 | "metadata": {}, 125 | "outputs": [], 126 | "source": [ 127 | "model5.fit(train_dataset, epochs=epochs, steps_per_epoch=steps_per_epoch,\n", 128 | " validation_data=test_dataset,\n", 129 | " validation_steps=3, callbacks=callbacks)" 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": null, 135 | "metadata": {}, 136 | "outputs": [], 137 | "source": [ 138 | "model5.evaluate(test_dataset,steps=10)" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": null, 144 | "metadata": {}, 145 | "outputs": [], 146 | "source": [] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [] 154 | } 155 | ], 156 | "metadata": { 157 | "kernelspec": { 158 | "display_name": "Python 3", 159 | "language": "python", 160 | "name": "python3" 161 | }, 162 | "language_info": { 163 | "codemirror_mode": { 164 | "name": "ipython", 165 | "version": 3 166 | }, 167 | "file_extension": ".py", 168 | "mimetype": "text/x-python", 169 | "name": "python", 170 | "nbconvert_exporter": "python", 171 | "pygments_lexer": "ipython3", 172 | "version": "3.6.7" 173 | } 174 | }, 175 | "nbformat": 4, 176 | "nbformat_minor": 2 177 | } 178 | -------------------------------------------------------------------------------- /Chapter02/Chapter2_Keras_sequential_models_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "from tensorflow.keras import backend as K\n", 18 | "\n" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": {}, 25 | "outputs": [], 26 | "source": [ 27 | "print(tf.keras.__version__)" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "const = K.constant([[42,24],[11,99]], dtype=tf.float16, shape=[2,2])" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | "const" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "### Acquire data" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "mnist 
= tf.keras.datasets.mnist\n", 62 | "(train_x,train_y), (test_x, test_y) = mnist.load_data()\n", 63 | "\n", 64 | "batch_size = 32 # 32 is default but specify anyway\n", 65 | "epochs=10" 66 | ] 67 | }, 68 | { 69 | "cell_type": "markdown", 70 | "metadata": {}, 71 | "source": [ 72 | "### Normalise data" 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": null, 78 | "metadata": {}, 79 | "outputs": [], 80 | "source": [ 81 | "train_x, test_x = tf.cast(train_x/255.0, tf.float32), tf.cast(test_x/255.0, tf.float32)\n", 82 | "train_y, test_y = tf.cast(train_y,tf.int64),tf.cast(test_y,tf.int64)\n" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "### Sequential Model #1" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "execution_count": null, 102 | "metadata": {}, 103 | "outputs": [], 104 | "source": [ 105 | "model1 = tf.keras.models.Sequential([\n", 106 | " tf.keras.layers.Flatten(),\n", 107 | " tf.keras.layers.Dense(512,activation=tf.nn.relu),\n", 108 | " tf.keras.layers.Dropout(0.2),\n", 109 | " tf.keras.layers.Dense(10,activation=tf.nn.softmax)\n", 110 | "])" 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [ 119 | "optimiser = tf.keras.optimizers.Adam()\n", 120 | "model1.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": null, 126 | "metadata": {}, 127 | "outputs": [], 128 | "source": [ 129 | "model1.fit(train_x, train_y, batch_size=batch_size, epochs=epochs)" 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": null, 135 | "metadata": {}, 136 | "outputs": [], 137 | "source": [ 138 | "model1.evaluate(test_x, test_y)" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": null, 144 | "metadata": {}, 145 | "outputs": [], 146 | "source": [] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "### Sequential Model #2" 153 | ] 154 | }, 155 | { 156 | "cell_type": "code", 157 | "execution_count": null, 158 | "metadata": {}, 159 | "outputs": [], 160 | "source": [ 161 | "model2 = tf.keras.models.Sequential();\n", 162 | "model2.add(tf.keras.layers.Flatten())\n", 163 | "model2.add(tf.keras.layers.Dense(512, activation='relu'))\n", 164 | "model2.add(tf.keras.layers.Dropout(0.2))\n", 165 | "model2.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax))" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "metadata": {}, 172 | "outputs": [], 173 | "source": [ 174 | "optimiser = tf.keras.optimizers.Adam()\n", 175 | "model2.compile (optimizer= optimiser, loss='sparse_categorical_crossentropy', metrics = ['accuracy'])\n", 176 | "model2.fit(train_x, train_y, batch_size = batch_size, epochs=epochs)\n", 177 | "model2.evaluate(test_x, test_y)" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": null, 183 | "metadata": {}, 184 | "outputs": [], 185 | "source": [] 186 | } 187 | ], 188 | "metadata": { 189 | "kernelspec": { 190 | "display_name": "Python 3", 191 | "language": "python", 192 | "name": "python3" 193 | }, 194 | "language_info": { 195 | "codemirror_mode": { 196 | "name": "ipython", 197 | "version": 3 198 | }, 199 | "file_extension": ".py", 200 | 
"mimetype": "text/x-python", 201 | "name": "python", 202 | "nbconvert_exporter": "python", 203 | "pygments_lexer": "ipython3", 204 | "version": "3.6.7" 205 | } 206 | }, 207 | "nbformat": 4, 208 | "nbformat_minor": 2 209 | } 210 | -------------------------------------------------------------------------------- /Chapter03/Chapter3_ANNTech_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | " " 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import numpy as np\n", 17 | "import tensorflow as tf\n" 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": null, 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "execution_count": null, 30 | "metadata": {}, 31 | "outputs": [], 32 | "source": [] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": {}, 37 | "source": [ 38 | "### data.Dataset Examples with numpy arrays" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": null, 44 | "metadata": {}, 45 | "outputs": [], 46 | "source": [ 47 | "# numpy array example\n", 48 | "num_items = 11\n", 49 | "num_list1 = np.arange(num_items)\n", 50 | "num_list2 = np.arange(num_items,num_items*2)" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": null, 56 | "metadata": {}, 57 | "outputs": [], 58 | "source": [ 59 | "num_list1_dataset = tf.data.Dataset.from_tensor_slices(num_list1)\n", 60 | "num_list2_dataset = tf.data.Dataset.from_tensor_slices(num_list2)" 61 | ] 62 | }, 63 | { 64 | "cell_type": "code", 65 | "execution_count": null, 66 | "metadata": {}, 67 | "outputs": [], 68 | "source": [ 69 | "iterator1 = tf.compat.v1.data.make_one_shot_iterator(num_list1_dataset)\n", 70 | "iterator2 = tf.compat.v1.data.make_one_shot_iterator(num_list2_dataset)" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": null, 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "# note that running this cell a second time without restarting the kernel gives\n", 80 | "# an 'OutOfRangeError: End of sequence [Op:IteratorGetNextSync]' error\n", 81 | "# since we are using make_one_shot_iterator()\n", 82 | "for item in num_list1_dataset:\n", 83 | " num = iterator1.get_next().numpy()\n", 84 | " print(num)" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": null, 90 | "metadata": {}, 91 | "outputs": [], 92 | "source": [ 93 | "for item in num_list2_dataset:\n", 94 | " num = iterator2.get_next().numpy()\n", 95 | " print(num)" 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": null, 101 | "metadata": {}, 102 | "outputs": [], 103 | "source": [ 104 | "# numpy array in batches example, drop_remainder=False is the default\n", 105 | "num_list1_dataset = tf.data.Dataset.from_tensor_slices(num_list1).batch(3, drop_remainder = False)\n", 106 | "iterator = tf.compat.v1.data.make_one_shot_iterator(num_list1_dataset)" 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "execution_count": null, 112 | "metadata": {}, 113 | "outputs": [], 114 | "source": [ 115 | "for item in num_list1_dataset:\n", 116 | " num = iterator.get_next().numpy()\n", 117 | " print(num)" 118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": null, 123 | "metadata": {}, 124 | "outputs": [], 125 | "source": [ 126 | "#zipping datasets 
examples" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": null, 132 | "metadata": {}, 133 | "outputs": [], 134 | "source": [ 135 | "num_list1_dataset = tf.data.Dataset.from_tensor_slices(num_list1)\n", 136 | "num_list2_dataset = tf.data.Dataset.from_tensor_slices(num_list2)\n", 137 | "zipped_datasets = tf.data.Dataset.zip((num_list1_dataset, num_list2_dataset))" 138 | ] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "execution_count": null, 143 | "metadata": {}, 144 | "outputs": [], 145 | "source": [ 146 | "dataset1 = [1,2,3,4,5]\n", 147 | "dataset2 = ['a','e','i','o','u']\n", 148 | "dataset1 = tf.data.Dataset.from_tensor_slices(dataset1)\n", 149 | "dataset2 = tf.data.Dataset.from_tensor_slices(dataset2)\n", 150 | "zipped_datasets = tf.data.Dataset.zip((dataset1, dataset2))\n", 151 | "\n", 152 | "iterator = tf.compat.v1.data.make_one_shot_iterator(zipped_datasets)\n", 153 | "for item in zipped_datasets:\n", 154 | " num = iterator.get_next()\n", 155 | " print(num)" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": null, 161 | "metadata": {}, 162 | "outputs": [], 163 | "source": [ 164 | "# concatenate datasets example" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [ 173 | "ds1 = tf.data.Dataset.from_tensor_slices([1,2,3,5,7,11,13,17])\n", 174 | "ds2 = tf.data.Dataset.from_tensor_slices([19,23,29,31,37,41])\n", 175 | "ds3 = ds1.concatenate(ds2)\n", 176 | "print(ds3)\n", 177 | "iterator = tf.compat.v1.data.make_one_shot_iterator(ds3)\n", 178 | "for i in range(14):\n", 179 | " num = iterator.get_next()\n", 180 | " print(num)" 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": null, 186 | "metadata": {}, 187 | "outputs": [], 188 | "source": [ 189 | "# in fact, we don't even need an iterator\n", 190 | "# this for works just as well, and throws no OutOfRangeError\n", 191 | "# when used repeatedly\n", 192 | "epochs=2\n", 193 | "for e in range(epochs):\n", 194 | " for item in ds3:\n", 195 | " print(item)\n" 196 | ] 197 | }, 198 | { 199 | "cell_type": "code", 200 | "execution_count": null, 201 | "metadata": {}, 202 | "outputs": [], 203 | "source": [] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "### use of comma separated files" 210 | ] 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "metadata": {}, 216 | "outputs": [], 217 | "source": [ 218 | "import tensorflow as tf\n", 219 | "\n", 220 | "filename = [\"./size_1000.csv\"]\n", 221 | "record_defaults = [tf.float32] * 2 # two required float columns\n", 222 | "dataset = tf.data.experimental.CsvDataset(filename, record_defaults, header=True, select_cols=[1,2])" 223 | ] 224 | }, 225 | { 226 | "cell_type": "code", 227 | "execution_count": null, 228 | "metadata": {}, 229 | "outputs": [], 230 | "source": [ 231 | "for item in dataset:\n", 232 | " print(item)" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": null, 238 | "metadata": {}, 239 | "outputs": [], 240 | "source": [ 241 | "# more examples of csv files, see files for structures" 242 | ] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": null, 247 | "metadata": {}, 248 | "outputs": [], 249 | "source": [ 250 | "filename = \"mycsvfile.txt\"\n", 251 | "record_defaults = [tf.float32, tf.constant([0.0], dtype=tf.float32), tf.int32,]\n", 252 | "dataset = tf.data.experimental.CsvDataset(filename, 
record_defaults, header=False, select_cols=[1,2,3])\n", 253 | "\n", 254 | "for item in dataset:\n", 255 | " print(item)" 256 | ] 257 | }, 258 | { 259 | "cell_type": "code", 260 | "execution_count": null, 261 | "metadata": {}, 262 | "outputs": [], 263 | "source": [ 264 | "filename = \"file1.txt\"\n", 265 | "record_defaults = [tf.float32, tf.float32, tf.string ,]\n", 266 | "dataset = tf.data.experimental.CsvDataset(filename, record_defaults, header=False)\n", 267 | "for item in dataset:\n", 268 | " print(item[0].numpy(), item[1].numpy(),item[2].numpy().decode() ) # decode as string is in binary format." 269 | ] 270 | }, 271 | { 272 | "cell_type": "markdown", 273 | "metadata": {}, 274 | "source": [ 275 | "## Another popular storage format is the TFRecord" 276 | ] 277 | }, 278 | { 279 | "cell_type": "markdown", 280 | "metadata": {}, 281 | "source": [ 282 | "### Example 1" 283 | ] 284 | }, 285 | { 286 | "cell_type": "code", 287 | "execution_count": null, 288 | "metadata": {}, 289 | "outputs": [], 290 | "source": [ 291 | "import numpy as np\n", 292 | "import tensorflow as tf\n", 293 | "\n", 294 | "data=np.array([10.,11.,12.,13.,14.,15.])\n", 295 | "def npy_to_tfrecords(fname,data):\n", 296 | " writer = tf.io.TFRecordWriter(fname)\n", 297 | " feature={}\n", 298 | "\n", 299 | " feature['data'] = tf.train.Feature(float_list=tf.train.FloatList(value=data))\n", 300 | " example = tf.train.Example(features=tf.train.Features(feature=feature))\n", 301 | " serialized = example.SerializeToString()\n", 302 | " writer.write(serialized)\n", 303 | " writer.close()\n", 304 | "npy_to_tfrecords(\"./myfile.tfrecords\",data)" 305 | ] 306 | }, 307 | { 308 | "cell_type": "code", 309 | "execution_count": null, 310 | "metadata": {}, 311 | "outputs": [], 312 | "source": [ 313 | "dataset = tf.data.TFRecordDataset(\"./myfile.tfrecords\")\n", 314 | "\n", 315 | "def parse_function(example_proto):\n", 316 | " keys_to_features = {'data':tf.io.FixedLenSequenceFeature([], dtype = tf.float32, allow_missing = True) }\n", 317 | " parsed_features = tf.io.parse_single_example(serialized=example_proto, features=keys_to_features)\n", 318 | " return parsed_features['data']\n", 319 | "\n", 320 | "dataset = dataset.map(parse_function)\n", 321 | "iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)\n", 322 | "# array is retrieved as one item\n", 323 | "item = iterator.get_next()\n", 324 | "print(item)\n", 325 | "print(item.numpy())\n", 326 | "print(item[2].numpy())" 327 | ] 328 | }, 329 | { 330 | "cell_type": "markdown", 331 | "metadata": {}, 332 | "source": [ 333 | "### Example 2" 334 | ] 335 | }, 336 | { 337 | "cell_type": "code", 338 | "execution_count": null, 339 | "metadata": {}, 340 | "outputs": [], 341 | "source": [ 342 | "\n", 343 | "# create record\n", 344 | "\n", 345 | "filename = './students.tfrecords'\n", 346 | "data = {\n", 347 | " 'ID': 61553,\n", 348 | " 'Name': ['Jones', 'Felicity'],\n", 349 | " 'Scores': [45.6, 97.2] }" 350 | ] 351 | }, 352 | { 353 | "cell_type": "code", 354 | "execution_count": null, 355 | "metadata": {}, 356 | "outputs": [], 357 | "source": [ 358 | "ID = tf.train.Feature(int64_list=tf.train.Int64List(value=[data['ID']]))\n", 359 | "\n", 360 | "Name = tf.train.Feature(bytes_list=tf.train.BytesList(value=[n.encode('utf-8') for n in data['Name']]))\n", 361 | "\n", 362 | "Scores = tf.train.Feature(float_list=tf.train.FloatList(value=data['Scores']))\n", 363 | "\n", 364 | "example = tf.train.Example(features=tf.train.Features(feature={'ID': ID, 'Name': Name, 'Scores': Scores }))\n" 365 | ] 366 | }, 
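{ "cell_type": "markdown", "metadata": {}, "source": [ "*(Added cell, not in the original notebook.)* A minimal sanity check before writing the record to disk: serialize the example and re-parse it in memory with tf.train.Example.FromString (the standard protobuf parser) to confirm the three features survive the round trip; it assumes only the example built above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# added sanity check (not in the original notebook): serialize, then immediately re-parse\n", "serialized = example.SerializeToString()\n", "round_trip = tf.train.Example.FromString(serialized)\n", "print(round_trip.features.feature['ID'].int64_list.value) # [61553]\n", "print(round_trip.features.feature['Scores'].float_list.value) # approx. [45.6, 97.2]\n" ] },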
367 | { 368 | "cell_type": "code", 369 | "execution_count": null, 370 | "metadata": {}, 371 | "outputs": [], 372 | "source": [ 373 | "writer = tf.io.TFRecordWriter(filename)\n", 374 | "writer.write(example.SerializeToString())\n", 375 | "writer.close()" 376 | ] 377 | }, 378 | { 379 | "cell_type": "code", 380 | "execution_count": null, 381 | "metadata": {}, 382 | "outputs": [], 383 | "source": [ 384 | "# read record\n", 385 | "dataset = tf.data.TFRecordDataset(\"./students.tfrecords\")\n", 386 | "\n", 387 | "def parse_function(example_proto):\n", 388 | " keys_to_features = {'ID':tf.io.FixedLenFeature([], dtype = tf.int64),\n", 389 | " 'Name':tf.io.VarLenFeature(dtype = tf.string),\n", 390 | " 'Scores':tf.io.VarLenFeature(dtype = tf.float32)\n", 391 | " }\n", 392 | " parsed_features = tf.io.parse_single_example(serialized=example_proto, features=keys_to_features)\n", 393 | " return parsed_features[\"ID\"], parsed_features[\"Name\"],parsed_features[\"Scores\"]" 394 | ] 395 | }, 396 | { 397 | "cell_type": "code", 398 | "execution_count": null, 399 | "metadata": {}, 400 | "outputs": [], 401 | "source": [ 402 | "dataset = dataset.map(parse_function)\n", 403 | "\n", 404 | "iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)\n", 405 | "item = iterator.get_next()\n", 406 | "# record is retrieved as one item\n", 407 | "print(item)" 408 | ] 409 | }, 410 | { 411 | "cell_type": "code", 412 | "execution_count": null, 413 | "metadata": {}, 414 | "outputs": [], 415 | "source": [ 416 | "print(\"ID: \",item[0].numpy())\n", 417 | "name = item[1].values.numpy()\n", 418 | "name1= name[0].decode()\n", 419 | "name2 = name[1].decode()\n", 420 | "print(\"Name:\",name1,\",\",name2)\n", 421 | "print(\"Scores: \",item[2].values.numpy())" 422 | ] 423 | }, 424 | { 425 | "cell_type": "code", 426 | "execution_count": null, 427 | "metadata": {}, 428 | "outputs": [], 429 | "source": [] 430 | }, 431 | { 432 | "cell_type": "markdown", 433 | "metadata": {}, 434 | "source": [ 435 | "### one-hot encoding" 436 | ] 437 | }, 438 | { 439 | "cell_type": "code", 440 | "execution_count": null, 441 | "metadata": {}, 442 | "outputs": [], 443 | "source": [ 444 | "# This example uses the fashion-mnist dataset\n", 445 | "# which is a drop-in replacement for mnist" 446 | ] 447 | },
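{ "cell_type": "markdown", "metadata": {}, "source": [ "*(Added cell, not in the original notebook.)* A minimal tf.one_hot sketch before the full fashion-mnist example: each integer label becomes a length-depth vector with a single 1. in the label's position." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "# added demo (not in the original notebook): three labels, four classes\n", "print(tf.one_hot([0, 2, 3], depth=4).numpy())\n", "# [[1. 0. 0. 0.]\n", "#  [0. 0. 1. 0.]\n", "#  [0. 0. 0. 1.]]\n" ] },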
448 | { 449 | "cell_type": "code", 450 | "execution_count": null, 451 | "metadata": {}, 452 | "outputs": [], 453 | "source": [ 454 | "import tensorflow as tf\n", 455 | "from tensorflow.python.keras.datasets import fashion_mnist\n", 456 | "\n", 457 | "width, height = 28,28\n", 458 | "n_classes = 10\n", 459 | "# load the dataset\n", 460 | "(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()\n", 461 | "\n", 462 | "# normalise the features for better training\n", 463 | "x_train = x_train.astype('float32') / 255.\n", 464 | "x_test = x_test.astype('float32') / 255.\n", 465 | "\n", 466 | "# flatten the features for use by the training algorithm\n", 467 | "x_train = x_train.reshape((60000, width * height))\n", 468 | "x_test = x_test.reshape((10000, width * height))\n", 469 | "\n", 470 | "split = 50000\n", 471 | "# split the feature training set into training and validation sets\n", 472 | "(x_train, x_valid) = x_train[:split], x_train[split:]\n", 473 | "(y_train, y_valid) = y_train[:split], y_train[split:]\n", 474 | "\n", 475 | "# one-hot encode the labels using TensorFlow,\n", 476 | "# then convert back to numpy as we cannot combine numpy\n", 477 | "# and tensors as input to keras later\n", 478 | "y_train_ohe = tf.one_hot(y_train, depth=n_classes).numpy()\n", 479 | "y_valid_ohe = tf.one_hot(y_valid, depth=n_classes).numpy()\n", 480 | "y_test_ohe = tf.one_hot(y_test, depth=n_classes).numpy()\n", 481 | "#or use tf.keras.utils.to_categorical(y_train,10), for example\n", 482 | "# show difference between original label and one-hot-encoded label\n", 483 | "i=5\n", 484 | "print(y_train[i]) # 'ordinary' number value of label at index i\n", 485 | "\n", 486 | "print(y_train_ohe[i]) # same value, as a 1. in the correct position of a length-10 1D numpy array" 487 | ] 488 | }, 489 | { 490 | "cell_type": "code", 491 | "execution_count": null, 492 | "metadata": {}, 493 | "outputs": [], 494 | "source": [ 495 | "#one-hot encoding is also useful where the labels are text categorical,\n", 496 | "# e.g. red, blue, green could be coded as [0,0,1], [0,1,0] and [1,0,0] for example" 497 | ] 498 | }, 499 | { 500 | "cell_type": "code", 501 | "execution_count": null, 502 | "metadata": {}, 503 | "outputs": [], 504 | "source": [] 505 | }, 506 | { 507 | "cell_type": "code", 508 | "execution_count": null, 509 | "metadata": {}, 510 | "outputs": [], 511 | "source": [ 512 | "# automatic differentiation" 513 | ] 514 | }, 515 | { 516 | "cell_type": "code", 517 | "execution_count": null, 518 | "metadata": {}, 519 | "outputs": [], 520 | "source": [ 521 | "# by default, you can only call tape.gradient once in a GradientTape context\n", 522 | "weight1 = tf.Variable(2.0)\n", 523 | "def weighted_sum(x1):\n", 524 | " return weight1 * x1\n", 525 | "with tf.GradientTape() as tape:\n", 526 | " sum = weighted_sum(7.)\n", 527 | " [weight1_grad] = tape.gradient(sum, [weight1])\n", 528 | "print(weight1_grad.numpy()) # 7.0: the derivative of weight1*x1 w.r.t. weight1 is x1 = 7, also see below." 529 | ] 530 | }, 531 | { 532 | "cell_type": "code", 533 | "execution_count": null, 534 | "metadata": {}, 535 | "outputs": [], 536 | "source": [ 537 | "# if you need to call tape.gradient more than once\n", 538 | "# use GradientTape(persistent=True)\n", 539 | "weight1 = tf.Variable(2.0)\n", 540 | "weight2 = tf.Variable(3.0)\n", 541 | "weight3 = tf.Variable(5.0)\n", 542 | "\n", 543 | "def weighted_sum(x1, x2, x3):\n", 544 | " return weight1*x1 + weight2*x2 + weight3*x3\n", 545 | "\n", 546 | "with tf.GradientTape(persistent=True) as tape:\n", 547 | " sum = weighted_sum(7.,5.,6.)\n", 548 | "[weight1_grad] = tape.gradient(sum, [weight1])\n", 549 | "[weight2_grad] = tape.gradient(sum, [weight2])\n", 550 | "[weight3_grad] = tape.gradient(sum, [weight3])\n", 551 | "\n", 552 | "print(weight1_grad.numpy()) # x1, 7\n", 553 | "print(weight2_grad.numpy()) # x2, 5\n", 554 | "print(weight3_grad.numpy()) # x3, 6\n" 555 | ] 556 | }, 557 | { 558 | "cell_type": "code", 559 | "execution_count": null, 560 | "metadata": {}, 561 | "outputs": [], 562 | "source": [] 563 | }, 564 | { 565 | "cell_type": "code", 566 | "execution_count": null, 567 | "metadata": {}, 568 | "outputs": [], 569 | "source": [] 570 | } 571 | ], 572 | "metadata": { 573 | "kernelspec": { 574 | "display_name": "Python 3", 575 | "language": "python", 576 | "name": "python3" 577 | }, 578 | "language_info": { 579 | "codemirror_mode": { 580 | "name": "ipython", 581 | "version": 3 582 | }, 583 | "file_extension": ".py", 584 | "mimetype": "text/x-python", 585 | "name": "python", 586 | "nbconvert_exporter": "python", 587 | "pygments_lexer": "ipython3", 588 | "version": "3.6.7" 589 | } 590 | }, 591 | "nbformat": 4, 592 | "nbformat_minor": 2 593 | } 594 | -------------------------------------------------------------------------------- /Chapter03/dataset.txt: 
-------------------------------------------------------------------------------- 1 |  2 | 6,148,72,35,0,33.6,0.627,50,1 3 | 1,85,66,29,0,26.6,0.351,31,0 4 | 8,183,64,0,0,23.3,0.672,32,1 5 | 1,89,66,23,94,28.1,0.167,21,0 6 | 0,137,40,35,168,43.1,2.288,33,1 7 | 5,116,74,0,0,25.6,0.201,30,0 8 | 3,78,50,32,88,31.0,0.248,26,1 9 | 10,115,0,0,0,35.3,0.134,29,0 10 | 2,197,70,45,543,30.5,0.158,53,1 11 | 8,125,96,0,0,0.0,0.232,54,1 12 | 4,110,92,0,0,37.6,0.191,30,0 13 | 10,168,74,0,0,38.0,0.537,34,1 14 | 10,139,80,0,0,27.1,1.441,57,0 15 | 1,189,60,23,846,30.1,0.398,59,1 16 | 5,166,72,19,175,25.8,0.587,51,1 17 | 7,100,0,0,0,30.0,0.484,32,1 18 | 0,118,84,47,230,45.8,0.551,31,1 19 | 7,107,74,0,0,29.6,0.254,31,1 20 | 1,103,30,38,83,43.3,0.183,33,0 21 | 1,115,70,30,96,34.6,0.529,32,1 22 | 3,126,88,41,235,39.3,0.704,27,0 23 | 8,99,84,0,0,35.4,0.388,50,0 24 | 7,196,90,0,0,39.8,0.451,41,1 25 | 9,119,80,35,0,29.0,0.263,29,1 26 | 11,143,94,33,146,36.6,0.254,51,1 27 | 10,125,70,26,115,31.1,0.205,41,1 28 | 7,147,76,0,0,39.4,0.257,43,1 29 | 1,97,66,15,140,23.2,0.487,22,0 30 | 13,145,82,19,110,22.2,0.245,57,0 31 | 5,117,92,0,0,34.1,0.337,38,0 32 | 5,109,75,26,0,36.0,0.546,60,0 33 | 3,158,76,36,245,31.6,0.851,28,1 34 | 3,88,58,11,54,24.8,0.267,22,0 35 | 6,92,92,0,0,19.9,0.188,28,0 36 | 10,122,78,31,0,27.6,0.512,45,0 37 | 4,103,60,33,192,24.0,0.966,33,0 38 | 11,138,76,0,0,33.2,0.420,35,0 39 | 9,102,76,37,0,32.9,0.665,46,1 40 | 2,90,68,42,0,38.2,0.503,27,1 41 | 4,111,72,47,207,37.1,1.390,56,1 42 | 3,180,64,25,70,34.0,0.271,26,0 43 | 7,133,84,0,0,40.2,0.696,37,0 44 | 7,106,92,18,0,22.7,0.235,48,0 45 | 9,171,110,24,240,45.4,0.721,54,1 46 | 7,159,64,0,0,27.4,0.294,40,0 47 | 0,180,66,39,0,42.0,1.893,25,1 48 | 1,146,56,0,0,29.7,0.564,29,0 49 | 2,71,70,27,0,28.0,0.586,22,0 50 | 7,103,66,32,0,39.1,0.344,31,1 51 | 7,105,0,0,0,0.0,0.305,24,0 52 | 1,103,80,11,82,19.4,0.491,22,0 53 | 1,101,50,15,36,24.2,0.526,26,0 54 | 5,88,66,21,23,24.4,0.342,30,0 55 | 8,176,90,34,300,33.7,0.467,58,1 56 | 7,150,66,42,342,34.7,0.718,42,0 57 | 1,73,50,10,0,23.0,0.248,21,0 58 | 7,187,68,39,304,37.7,0.254,41,1 59 | 0,100,88,60,110,46.8,0.962,31,0 60 | 0,146,82,0,0,40.5,1.781,44,0 61 | 0,105,64,41,142,41.5,0.173,22,0 62 | 2,84,0,0,0,0.0,0.304,21,0 63 | 8,133,72,0,0,32.9,0.270,39,1 64 | 5,44,62,0,0,25.0,0.587,36,0 65 | 2,141,58,34,128,25.4,0.699,24,0 66 | 7,114,66,0,0,32.8,0.258,42,1 67 | 5,99,74,27,0,29.0,0.203,32,0 68 | 0,109,88,30,0,32.5,0.855,38,1 69 | 2,109,92,0,0,42.7,0.845,54,0 70 | 1,95,66,13,38,19.6,0.334,25,0 71 | 4,146,85,27,100,28.9,0.189,27,0 72 | 2,100,66,20,90,32.9,0.867,28,1 73 | 5,139,64,35,140,28.6,0.411,26,0 74 | 13,126,90,0,0,43.4,0.583,42,1 75 | 4,129,86,20,270,35.1,0.231,23,0 76 | 1,79,75,30,0,32.0,0.396,22,0 77 | 1,0,48,20,0,24.7,0.140,22,0 78 | 7,62,78,0,0,32.6,0.391,41,0 79 | 5,95,72,33,0,37.7,0.370,27,0 80 | 0,131,0,0,0,43.2,0.270,26,1 81 | 2,112,66,22,0,25.0,0.307,24,0 82 | 3,113,44,13,0,22.4,0.140,22,0 83 | 2,74,0,0,0,0.0,0.102,22,0 84 | 7,83,78,26,71,29.3,0.767,36,0 85 | 0,101,65,28,0,24.6,0.237,22,0 86 | 5,137,108,0,0,48.8,0.227,37,1 87 | 2,110,74,29,125,32.4,0.698,27,0 88 | 13,106,72,54,0,36.6,0.178,45,0 89 | 2,100,68,25,71,38.5,0.324,26,0 90 | 15,136,70,32,110,37.1,0.153,43,1 91 | 1,107,68,19,0,26.5,0.165,24,0 92 | 1,80,55,0,0,19.1,0.258,21,0 93 | 4,123,80,15,176,32.0,0.443,34,0 94 | 7,81,78,40,48,46.7,0.261,42,0 95 | 4,134,72,0,0,23.8,0.277,60,1 96 | 2,142,82,18,64,24.7,0.761,21,0 97 | 6,144,72,27,228,33.9,0.255,40,0 98 | 2,92,62,28,0,31.6,0.130,24,0 99 | 1,71,48,18,76,20.4,0.323,22,0 100 | 
6,93,50,30,64,28.7,0.356,23,0 101 | 1,122,90,51,220,49.7,0.325,31,1 102 | 1,163,72,0,0,39.0,1.222,33,1 103 | 1,151,60,0,0,26.1,0.179,22,0 104 | 0,125,96,0,0,22.5,0.262,21,0 105 | 1,81,72,18,40,26.6,0.283,24,0 106 | 2,85,65,0,0,39.6,0.930,27,0 107 | 1,126,56,29,152,28.7,0.801,21,0 108 | 1,96,122,0,0,22.4,0.207,27,0 109 | 4,144,58,28,140,29.5,0.287,37,0 110 | 3,83,58,31,18,34.3,0.336,25,0 111 | 0,95,85,25,36,37.4,0.247,24,1 112 | 3,171,72,33,135,33.3,0.199,24,1 113 | 8,155,62,26,495,34.0,0.543,46,1 114 | 1,89,76,34,37,31.2,0.192,23,0 115 | 4,76,62,0,0,34.0,0.391,25,0 116 | 7,160,54,32,175,30.5,0.588,39,1 117 | 4,146,92,0,0,31.2,0.539,61,1 118 | 5,124,74,0,0,34.0,0.220,38,1 119 | 5,78,48,0,0,33.7,0.654,25,0 120 | 4,97,60,23,0,28.2,0.443,22,0 121 | 4,99,76,15,51,23.2,0.223,21,0 122 | 0,162,76,56,100,53.2,0.759,25,1 123 | 6,111,64,39,0,34.2,0.260,24,0 124 | 2,107,74,30,100,33.6,0.404,23,0 125 | 5,132,80,0,0,26.8,0.186,69,0 126 | 0,113,76,0,0,33.3,0.278,23,1 127 | 1,88,30,42,99,55.0,0.496,26,1 128 | 3,120,70,30,135,42.9,0.452,30,0 129 | 1,118,58,36,94,33.3,0.261,23,0 130 | 1,117,88,24,145,34.5,0.403,40,1 131 | 0,105,84,0,0,27.9,0.741,62,1 132 | 4,173,70,14,168,29.7,0.361,33,1 133 | 9,122,56,0,0,33.3,1.114,33,1 134 | 3,170,64,37,225,34.5,0.356,30,1 135 | 8,84,74,31,0,38.3,0.457,39,0 136 | 2,96,68,13,49,21.1,0.647,26,0 137 | 2,125,60,20,140,33.8,0.088,31,0 138 | 0,100,70,26,50,30.8,0.597,21,0 139 | 0,93,60,25,92,28.7,0.532,22,0 140 | 0,129,80,0,0,31.2,0.703,29,0 141 | 5,105,72,29,325,36.9,0.159,28,0 142 | 3,128,78,0,0,21.1,0.268,55,0 143 | 5,106,82,30,0,39.5,0.286,38,0 144 | 2,108,52,26,63,32.5,0.318,22,0 145 | 10,108,66,0,0,32.4,0.272,42,1 146 | 4,154,62,31,284,32.8,0.237,23,0 147 | 0,102,75,23,0,0.0,0.572,21,0 148 | 9,57,80,37,0,32.8,0.096,41,0 149 | 2,106,64,35,119,30.5,1.400,34,0 150 | 5,147,78,0,0,33.7,0.218,65,0 151 | 2,90,70,17,0,27.3,0.085,22,0 152 | 1,136,74,50,204,37.4,0.399,24,0 153 | 4,114,65,0,0,21.9,0.432,37,0 154 | 9,156,86,28,155,34.3,1.189,42,1 155 | 1,153,82,42,485,40.6,0.687,23,0 156 | 8,188,78,0,0,47.9,0.137,43,1 157 | 7,152,88,44,0,50.0,0.337,36,1 158 | 2,99,52,15,94,24.6,0.637,21,0 159 | 1,109,56,21,135,25.2,0.833,23,0 160 | 2,88,74,19,53,29.0,0.229,22,0 161 | 17,163,72,41,114,40.9,0.817,47,1 162 | 4,151,90,38,0,29.7,0.294,36,0 163 | 7,102,74,40,105,37.2,0.204,45,0 164 | 0,114,80,34,285,44.2,0.167,27,0 165 | 2,100,64,23,0,29.7,0.368,21,0 166 | 0,131,88,0,0,31.6,0.743,32,1 167 | 6,104,74,18,156,29.9,0.722,41,1 168 | 3,148,66,25,0,32.5,0.256,22,0 169 | 4,120,68,0,0,29.6,0.709,34,0 170 | 4,110,66,0,0,31.9,0.471,29,0 171 | 3,111,90,12,78,28.4,0.495,29,0 172 | 6,102,82,0,0,30.8,0.180,36,1 173 | 6,134,70,23,130,35.4,0.542,29,1 174 | 2,87,0,23,0,28.9,0.773,25,0 175 | 1,79,60,42,48,43.5,0.678,23,0 176 | 2,75,64,24,55,29.7,0.370,33,0 177 | 8,179,72,42,130,32.7,0.719,36,1 178 | 6,85,78,0,0,31.2,0.382,42,0 179 | 0,129,110,46,130,67.1,0.319,26,1 180 | 5,143,78,0,0,45.0,0.190,47,0 181 | 5,130,82,0,0,39.1,0.956,37,1 182 | 6,87,80,0,0,23.2,0.084,32,0 183 | 0,119,64,18,92,34.9,0.725,23,0 184 | 1,0,74,20,23,27.7,0.299,21,0 185 | 5,73,60,0,0,26.8,0.268,27,0 186 | 4,141,74,0,0,27.6,0.244,40,0 187 | 7,194,68,28,0,35.9,0.745,41,1 188 | 8,181,68,36,495,30.1,0.615,60,1 189 | 1,128,98,41,58,32.0,1.321,33,1 190 | 8,109,76,39,114,27.9,0.640,31,1 191 | 5,139,80,35,160,31.6,0.361,25,1 192 | 3,111,62,0,0,22.6,0.142,21,0 193 | 9,123,70,44,94,33.1,0.374,40,0 194 | 7,159,66,0,0,30.4,0.383,36,1 195 | 11,135,0,0,0,52.3,0.578,40,1 196 | 8,85,55,20,0,24.4,0.136,42,0 197 | 5,158,84,41,210,39.4,0.395,29,1 198 | 
1,105,58,0,0,24.3,0.187,21,0 199 | 3,107,62,13,48,22.9,0.678,23,1 200 | 4,109,64,44,99,34.8,0.905,26,1 201 | 4,148,60,27,318,30.9,0.150,29,1 202 | 0,113,80,16,0,31.0,0.874,21,0 203 | 1,138,82,0,0,40.1,0.236,28,0 204 | 0,108,68,20,0,27.3,0.787,32,0 205 | 2,99,70,16,44,20.4,0.235,27,0 206 | 6,103,72,32,190,37.7,0.324,55,0 207 | 5,111,72,28,0,23.9,0.407,27,0 208 | 8,196,76,29,280,37.5,0.605,57,1 209 | 5,162,104,0,0,37.7,0.151,52,1 210 | 1,96,64,27,87,33.2,0.289,21,0 211 | 7,184,84,33,0,35.5,0.355,41,1 212 | 2,81,60,22,0,27.7,0.290,25,0 213 | 0,147,85,54,0,42.8,0.375,24,0 214 | 7,179,95,31,0,34.2,0.164,60,0 215 | 0,140,65,26,130,42.6,0.431,24,1 216 | 9,112,82,32,175,34.2,0.260,36,1 217 | 12,151,70,40,271,41.8,0.742,38,1 218 | 5,109,62,41,129,35.8,0.514,25,1 219 | 6,125,68,30,120,30.0,0.464,32,0 220 | 5,85,74,22,0,29.0,1.224,32,1 221 | 5,112,66,0,0,37.8,0.261,41,1 222 | 0,177,60,29,478,34.6,1.072,21,1 223 | 2,158,90,0,0,31.6,0.805,66,1 224 | 7,119,0,0,0,25.2,0.209,37,0 225 | 7,142,60,33,190,28.8,0.687,61,0 226 | 1,100,66,15,56,23.6,0.666,26,0 227 | 1,87,78,27,32,34.6,0.101,22,0 228 | 0,101,76,0,0,35.7,0.198,26,0 229 | 3,162,52,38,0,37.2,0.652,24,1 230 | 4,197,70,39,744,36.7,2.329,31,0 231 | 0,117,80,31,53,45.2,0.089,24,0 232 | 4,142,86,0,0,44.0,0.645,22,1 233 | 6,134,80,37,370,46.2,0.238,46,1 234 | 1,79,80,25,37,25.4,0.583,22,0 235 | 4,122,68,0,0,35.0,0.394,29,0 236 | 3,74,68,28,45,29.7,0.293,23,0 237 | 4,171,72,0,0,43.6,0.479,26,1 238 | 7,181,84,21,192,35.9,0.586,51,1 239 | 0,179,90,27,0,44.1,0.686,23,1 240 | 9,164,84,21,0,30.8,0.831,32,1 241 | 0,104,76,0,0,18.4,0.582,27,0 242 | 1,91,64,24,0,29.2,0.192,21,0 243 | 4,91,70,32,88,33.1,0.446,22,0 244 | 3,139,54,0,0,25.6,0.402,22,1 245 | 6,119,50,22,176,27.1,1.318,33,1 246 | 2,146,76,35,194,38.2,0.329,29,0 247 | 9,184,85,15,0,30.0,1.213,49,1 248 | 10,122,68,0,0,31.2,0.258,41,0 249 | 0,165,90,33,680,52.3,0.427,23,0 250 | 9,124,70,33,402,35.4,0.282,34,0 251 | 1,111,86,19,0,30.1,0.143,23,0 252 | 9,106,52,0,0,31.2,0.380,42,0 253 | 2,129,84,0,0,28.0,0.284,27,0 254 | 2,90,80,14,55,24.4,0.249,24,0 255 | 0,86,68,32,0,35.8,0.238,25,0 256 | 12,92,62,7,258,27.6,0.926,44,1 257 | 1,113,64,35,0,33.6,0.543,21,1 258 | 3,111,56,39,0,30.1,0.557,30,0 259 | 2,114,68,22,0,28.7,0.092,25,0 260 | 1,193,50,16,375,25.9,0.655,24,0 261 | 11,155,76,28,150,33.3,1.353,51,1 262 | 3,191,68,15,130,30.9,0.299,34,0 263 | 3,141,0,0,0,30.0,0.761,27,1 264 | 4,95,70,32,0,32.1,0.612,24,0 265 | 3,142,80,15,0,32.4,0.200,63,0 266 | 4,123,62,0,0,32.0,0.226,35,1 267 | 5,96,74,18,67,33.6,0.997,43,0 268 | 0,138,0,0,0,36.3,0.933,25,1 269 | 2,128,64,42,0,40.0,1.101,24,0 270 | 0,102,52,0,0,25.1,0.078,21,0 271 | 2,146,0,0,0,27.5,0.240,28,1 272 | 10,101,86,37,0,45.6,1.136,38,1 273 | 2,108,62,32,56,25.2,0.128,21,0 274 | 3,122,78,0,0,23.0,0.254,40,0 275 | 1,71,78,50,45,33.2,0.422,21,0 276 | 13,106,70,0,0,34.2,0.251,52,0 277 | 2,100,70,52,57,40.5,0.677,25,0 278 | 7,106,60,24,0,26.5,0.296,29,1 279 | 0,104,64,23,116,27.8,0.454,23,0 280 | 5,114,74,0,0,24.9,0.744,57,0 281 | 2,108,62,10,278,25.3,0.881,22,0 282 | 0,146,70,0,0,37.9,0.334,28,1 283 | 10,129,76,28,122,35.9,0.280,39,0 284 | 7,133,88,15,155,32.4,0.262,37,0 285 | 7,161,86,0,0,30.4,0.165,47,1 286 | 2,108,80,0,0,27.0,0.259,52,1 287 | 7,136,74,26,135,26.0,0.647,51,0 288 | 5,155,84,44,545,38.7,0.619,34,0 289 | 1,119,86,39,220,45.6,0.808,29,1 290 | 4,96,56,17,49,20.8,0.340,26,0 291 | 5,108,72,43,75,36.1,0.263,33,0 292 | 0,78,88,29,40,36.9,0.434,21,0 293 | 0,107,62,30,74,36.6,0.757,25,1 294 | 2,128,78,37,182,43.3,1.224,31,1 295 | 
1,128,48,45,194,40.5,0.613,24,1 296 | 0,161,50,0,0,21.9,0.254,65,0 297 | 6,151,62,31,120,35.5,0.692,28,0 298 | 2,146,70,38,360,28.0,0.337,29,1 299 | 0,126,84,29,215,30.7,0.520,24,0 300 | 14,100,78,25,184,36.6,0.412,46,1 301 | 8,112,72,0,0,23.6,0.840,58,0 302 | 0,167,0,0,0,32.3,0.839,30,1 303 | 2,144,58,33,135,31.6,0.422,25,1 304 | 5,77,82,41,42,35.8,0.156,35,0 305 | 5,115,98,0,0,52.9,0.209,28,1 306 | 3,150,76,0,0,21.0,0.207,37,0 307 | 2,120,76,37,105,39.7,0.215,29,0 308 | 10,161,68,23,132,25.5,0.326,47,1 309 | 0,137,68,14,148,24.8,0.143,21,0 310 | 0,128,68,19,180,30.5,1.391,25,1 311 | 2,124,68,28,205,32.9,0.875,30,1 312 | 6,80,66,30,0,26.2,0.313,41,0 313 | 0,106,70,37,148,39.4,0.605,22,0 314 | 2,155,74,17,96,26.6,0.433,27,1 315 | 3,113,50,10,85,29.5,0.626,25,0 316 | 7,109,80,31,0,35.9,1.127,43,1 317 | 2,112,68,22,94,34.1,0.315,26,0 318 | 3,99,80,11,64,19.3,0.284,30,0 319 | 3,182,74,0,0,30.5,0.345,29,1 320 | 3,115,66,39,140,38.1,0.150,28,0 321 | 6,194,78,0,0,23.5,0.129,59,1 322 | 4,129,60,12,231,27.5,0.527,31,0 323 | 3,112,74,30,0,31.6,0.197,25,1 324 | 0,124,70,20,0,27.4,0.254,36,1 325 | 13,152,90,33,29,26.8,0.731,43,1 326 | 2,112,75,32,0,35.7,0.148,21,0 327 | 1,157,72,21,168,25.6,0.123,24,0 328 | 1,122,64,32,156,35.1,0.692,30,1 329 | 10,179,70,0,0,35.1,0.200,37,0 330 | 2,102,86,36,120,45.5,0.127,23,1 331 | 6,105,70,32,68,30.8,0.122,37,0 332 | 8,118,72,19,0,23.1,1.476,46,0 333 | 2,87,58,16,52,32.7,0.166,25,0 334 | 1,180,0,0,0,43.3,0.282,41,1 335 | 12,106,80,0,0,23.6,0.137,44,0 336 | 1,95,60,18,58,23.9,0.260,22,0 337 | 0,165,76,43,255,47.9,0.259,26,0 338 | 0,117,0,0,0,33.8,0.932,44,0 339 | 5,115,76,0,0,31.2,0.343,44,1 340 | 9,152,78,34,171,34.2,0.893,33,1 341 | 7,178,84,0,0,39.9,0.331,41,1 342 | 1,130,70,13,105,25.9,0.472,22,0 343 | 1,95,74,21,73,25.9,0.673,36,0 344 | 1,0,68,35,0,32.0,0.389,22,0 345 | 5,122,86,0,0,34.7,0.290,33,0 346 | 8,95,72,0,0,36.8,0.485,57,0 347 | 8,126,88,36,108,38.5,0.349,49,0 348 | 1,139,46,19,83,28.7,0.654,22,0 349 | 3,116,0,0,0,23.5,0.187,23,0 350 | 3,99,62,19,74,21.8,0.279,26,0 351 | 5,0,80,32,0,41.0,0.346,37,1 352 | 4,92,80,0,0,42.2,0.237,29,0 353 | 4,137,84,0,0,31.2,0.252,30,0 354 | 3,61,82,28,0,34.4,0.243,46,0 355 | 1,90,62,12,43,27.2,0.580,24,0 356 | 3,90,78,0,0,42.7,0.559,21,0 357 | 9,165,88,0,0,30.4,0.302,49,1 358 | 1,125,50,40,167,33.3,0.962,28,1 359 | 13,129,0,30,0,39.9,0.569,44,1 360 | 12,88,74,40,54,35.3,0.378,48,0 361 | 1,196,76,36,249,36.5,0.875,29,1 362 | 5,189,64,33,325,31.2,0.583,29,1 363 | 5,158,70,0,0,29.8,0.207,63,0 364 | 5,103,108,37,0,39.2,0.305,65,0 365 | 4,146,78,0,0,38.5,0.520,67,1 366 | 4,147,74,25,293,34.9,0.385,30,0 367 | 5,99,54,28,83,34.0,0.499,30,0 368 | 6,124,72,0,0,27.6,0.368,29,1 369 | 0,101,64,17,0,21.0,0.252,21,0 370 | 3,81,86,16,66,27.5,0.306,22,0 371 | 1,133,102,28,140,32.8,0.234,45,1 372 | 3,173,82,48,465,38.4,2.137,25,1 373 | 0,118,64,23,89,0.0,1.731,21,0 374 | 0,84,64,22,66,35.8,0.545,21,0 375 | 2,105,58,40,94,34.9,0.225,25,0 376 | 2,122,52,43,158,36.2,0.816,28,0 377 | 12,140,82,43,325,39.2,0.528,58,1 378 | 0,98,82,15,84,25.2,0.299,22,0 379 | 1,87,60,37,75,37.2,0.509,22,0 380 | 4,156,75,0,0,48.3,0.238,32,1 381 | 0,93,100,39,72,43.4,1.021,35,0 382 | 1,107,72,30,82,30.8,0.821,24,0 383 | 0,105,68,22,0,20.0,0.236,22,0 384 | 1,109,60,8,182,25.4,0.947,21,0 385 | 1,90,62,18,59,25.1,1.268,25,0 386 | 1,125,70,24,110,24.3,0.221,25,0 387 | 1,119,54,13,50,22.3,0.205,24,0 388 | 5,116,74,29,0,32.3,0.660,35,1 389 | 8,105,100,36,0,43.3,0.239,45,1 390 | 5,144,82,26,285,32.0,0.452,58,1 391 | 3,100,68,23,81,31.6,0.949,28,0 392 | 
1,100,66,29,196,32.0,0.444,42,0 393 | 5,166,76,0,0,45.7,0.340,27,1 394 | 1,131,64,14,415,23.7,0.389,21,0 395 | 4,116,72,12,87,22.1,0.463,37,0 396 | 4,158,78,0,0,32.9,0.803,31,1 397 | 2,127,58,24,275,27.7,1.600,25,0 398 | 3,96,56,34,115,24.7,0.944,39,0 399 | 0,131,66,40,0,34.3,0.196,22,1 400 | 3,82,70,0,0,21.1,0.389,25,0 401 | 3,193,70,31,0,34.9,0.241,25,1 402 | 4,95,64,0,0,32.0,0.161,31,1 403 | 6,137,61,0,0,24.2,0.151,55,0 404 | 5,136,84,41,88,35.0,0.286,35,1 405 | 9,72,78,25,0,31.6,0.280,38,0 406 | 5,168,64,0,0,32.9,0.135,41,1 407 | 2,123,48,32,165,42.1,0.520,26,0 408 | 4,115,72,0,0,28.9,0.376,46,1 409 | 0,101,62,0,0,21.9,0.336,25,0 410 | 8,197,74,0,0,25.9,1.191,39,1 411 | 1,172,68,49,579,42.4,0.702,28,1 412 | 6,102,90,39,0,35.7,0.674,28,0 413 | 1,112,72,30,176,34.4,0.528,25,0 414 | 1,143,84,23,310,42.4,1.076,22,0 415 | 1,143,74,22,61,26.2,0.256,21,0 416 | 0,138,60,35,167,34.6,0.534,21,1 417 | 3,173,84,33,474,35.7,0.258,22,1 418 | 1,97,68,21,0,27.2,1.095,22,0 419 | 4,144,82,32,0,38.5,0.554,37,1 420 | 1,83,68,0,0,18.2,0.624,27,0 421 | 3,129,64,29,115,26.4,0.219,28,1 422 | 1,119,88,41,170,45.3,0.507,26,0 423 | 2,94,68,18,76,26.0,0.561,21,0 424 | 0,102,64,46,78,40.6,0.496,21,0 425 | 2,115,64,22,0,30.8,0.421,21,0 426 | 8,151,78,32,210,42.9,0.516,36,1 427 | 4,184,78,39,277,37.0,0.264,31,1 428 | 0,94,0,0,0,0.0,0.256,25,0 429 | 1,181,64,30,180,34.1,0.328,38,1 430 | 0,135,94,46,145,40.6,0.284,26,0 431 | 1,95,82,25,180,35.0,0.233,43,1 432 | 2,99,0,0,0,22.2,0.108,23,0 433 | 3,89,74,16,85,30.4,0.551,38,0 434 | 1,80,74,11,60,30.0,0.527,22,0 435 | 2,139,75,0,0,25.6,0.167,29,0 436 | 1,90,68,8,0,24.5,1.138,36,0 437 | 0,141,0,0,0,42.4,0.205,29,1 438 | 12,140,85,33,0,37.4,0.244,41,0 439 | 5,147,75,0,0,29.9,0.434,28,0 440 | 1,97,70,15,0,18.2,0.147,21,0 441 | 6,107,88,0,0,36.8,0.727,31,0 442 | 0,189,104,25,0,34.3,0.435,41,1 443 | 2,83,66,23,50,32.2,0.497,22,0 444 | 4,117,64,27,120,33.2,0.230,24,0 445 | 8,108,70,0,0,30.5,0.955,33,1 446 | 4,117,62,12,0,29.7,0.380,30,1 447 | 0,180,78,63,14,59.4,2.420,25,1 448 | 1,100,72,12,70,25.3,0.658,28,0 449 | 0,95,80,45,92,36.5,0.330,26,0 450 | 0,104,64,37,64,33.6,0.510,22,1 451 | 0,120,74,18,63,30.5,0.285,26,0 452 | 1,82,64,13,95,21.2,0.415,23,0 453 | 2,134,70,0,0,28.9,0.542,23,1 454 | 0,91,68,32,210,39.9,0.381,25,0 455 | 2,119,0,0,0,19.6,0.832,72,0 456 | 2,100,54,28,105,37.8,0.498,24,0 457 | 14,175,62,30,0,33.6,0.212,38,1 458 | 1,135,54,0,0,26.7,0.687,62,0 459 | 5,86,68,28,71,30.2,0.364,24,0 460 | 10,148,84,48,237,37.6,1.001,51,1 461 | 9,134,74,33,60,25.9,0.460,81,0 462 | 9,120,72,22,56,20.8,0.733,48,0 463 | 1,71,62,0,0,21.8,0.416,26,0 464 | 8,74,70,40,49,35.3,0.705,39,0 465 | 5,88,78,30,0,27.6,0.258,37,0 466 | 10,115,98,0,0,24.0,1.022,34,0 467 | 0,124,56,13,105,21.8,0.452,21,0 468 | 0,74,52,10,36,27.8,0.269,22,0 469 | 0,97,64,36,100,36.8,0.600,25,0 470 | 8,120,0,0,0,30.0,0.183,38,1 471 | 6,154,78,41,140,46.1,0.571,27,0 472 | 1,144,82,40,0,41.3,0.607,28,0 473 | 0,137,70,38,0,33.2,0.170,22,0 474 | 0,119,66,27,0,38.8,0.259,22,0 475 | 7,136,90,0,0,29.9,0.210,50,0 476 | 4,114,64,0,0,28.9,0.126,24,0 477 | 0,137,84,27,0,27.3,0.231,59,0 478 | 2,105,80,45,191,33.7,0.711,29,1 479 | 7,114,76,17,110,23.8,0.466,31,0 480 | 8,126,74,38,75,25.9,0.162,39,0 481 | 4,132,86,31,0,28.0,0.419,63,0 482 | 3,158,70,30,328,35.5,0.344,35,1 483 | 0,123,88,37,0,35.2,0.197,29,0 484 | 4,85,58,22,49,27.8,0.306,28,0 485 | 0,84,82,31,125,38.2,0.233,23,0 486 | 0,145,0,0,0,44.2,0.630,31,1 487 | 0,135,68,42,250,42.3,0.365,24,1 488 | 1,139,62,41,480,40.7,0.536,21,0 489 | 0,173,78,32,265,46.5,1.159,58,0 490 
| 4,99,72,17,0,25.6,0.294,28,0 491 | 8,194,80,0,0,26.1,0.551,67,0 492 | 2,83,65,28,66,36.8,0.629,24,0 493 | 2,89,90,30,0,33.5,0.292,42,0 494 | 4,99,68,38,0,32.8,0.145,33,0 495 | 4,125,70,18,122,28.9,1.144,45,1 496 | 3,80,0,0,0,0.0,0.174,22,0 497 | 6,166,74,0,0,26.6,0.304,66,0 498 | 5,110,68,0,0,26.0,0.292,30,0 499 | 2,81,72,15,76,30.1,0.547,25,0 500 | 7,195,70,33,145,25.1,0.163,55,1 501 | 6,154,74,32,193,29.3,0.839,39,0 502 | 2,117,90,19,71,25.2,0.313,21,0 503 | 3,84,72,32,0,37.2,0.267,28,0 504 | 6,0,68,41,0,39.0,0.727,41,1 505 | 7,94,64,25,79,33.3,0.738,41,0 506 | 3,96,78,39,0,37.3,0.238,40,0 507 | 10,75,82,0,0,33.3,0.263,38,0 508 | 0,180,90,26,90,36.5,0.314,35,1 509 | 1,130,60,23,170,28.6,0.692,21,0 510 | 2,84,50,23,76,30.4,0.968,21,0 511 | 8,120,78,0,0,25.0,0.409,64,0 512 | 12,84,72,31,0,29.7,0.297,46,1 513 | 0,139,62,17,210,22.1,0.207,21,0 514 | 9,91,68,0,0,24.2,0.200,58,0 515 | 2,91,62,0,0,27.3,0.525,22,0 516 | 3,99,54,19,86,25.6,0.154,24,0 517 | 3,163,70,18,105,31.6,0.268,28,1 518 | 9,145,88,34,165,30.3,0.771,53,1 519 | 7,125,86,0,0,37.6,0.304,51,0 520 | 13,76,60,0,0,32.8,0.180,41,0 521 | 6,129,90,7,326,19.6,0.582,60,0 522 | 2,68,70,32,66,25.0,0.187,25,0 523 | 3,124,80,33,130,33.2,0.305,26,0 524 | 6,114,0,0,0,0.0,0.189,26,0 525 | 9,130,70,0,0,34.2,0.652,45,1 526 | 3,125,58,0,0,31.6,0.151,24,0 527 | 3,87,60,18,0,21.8,0.444,21,0 528 | 1,97,64,19,82,18.2,0.299,21,0 529 | 3,116,74,15,105,26.3,0.107,24,0 530 | 0,117,66,31,188,30.8,0.493,22,0 531 | 0,111,65,0,0,24.6,0.660,31,0 532 | 2,122,60,18,106,29.8,0.717,22,0 533 | 0,107,76,0,0,45.3,0.686,24,0 534 | 1,86,66,52,65,41.3,0.917,29,0 535 | 6,91,0,0,0,29.8,0.501,31,0 536 | 1,77,56,30,56,33.3,1.251,24,0 537 | 4,132,0,0,0,32.9,0.302,23,1 538 | 0,105,90,0,0,29.6,0.197,46,0 539 | 0,57,60,0,0,21.7,0.735,67,0 540 | 0,127,80,37,210,36.3,0.804,23,0 541 | 3,129,92,49,155,36.4,0.968,32,1 542 | 8,100,74,40,215,39.4,0.661,43,1 543 | 3,128,72,25,190,32.4,0.549,27,1 544 | 10,90,85,32,0,34.9,0.825,56,1 545 | 4,84,90,23,56,39.5,0.159,25,0 546 | 1,88,78,29,76,32.0,0.365,29,0 547 | 8,186,90,35,225,34.5,0.423,37,1 548 | 5,187,76,27,207,43.6,1.034,53,1 549 | 4,131,68,21,166,33.1,0.160,28,0 550 | 1,164,82,43,67,32.8,0.341,50,0 551 | 4,189,110,31,0,28.5,0.680,37,0 552 | 1,116,70,28,0,27.4,0.204,21,0 553 | 3,84,68,30,106,31.9,0.591,25,0 554 | 6,114,88,0,0,27.8,0.247,66,0 555 | 1,88,62,24,44,29.9,0.422,23,0 556 | 1,84,64,23,115,36.9,0.471,28,0 557 | 7,124,70,33,215,25.5,0.161,37,0 558 | 1,97,70,40,0,38.1,0.218,30,0 559 | 8,110,76,0,0,27.8,0.237,58,0 560 | 11,103,68,40,0,46.2,0.126,42,0 561 | 11,85,74,0,0,30.1,0.300,35,0 562 | 6,125,76,0,0,33.8,0.121,54,1 563 | 0,198,66,32,274,41.3,0.502,28,1 564 | 1,87,68,34,77,37.6,0.401,24,0 565 | 6,99,60,19,54,26.9,0.497,32,0 566 | 0,91,80,0,0,32.4,0.601,27,0 567 | 2,95,54,14,88,26.1,0.748,22,0 568 | 1,99,72,30,18,38.6,0.412,21,0 569 | 6,92,62,32,126,32.0,0.085,46,0 570 | 4,154,72,29,126,31.3,0.338,37,0 571 | 0,121,66,30,165,34.3,0.203,33,1 572 | 3,78,70,0,0,32.5,0.270,39,0 573 | 2,130,96,0,0,22.6,0.268,21,0 574 | 3,111,58,31,44,29.5,0.430,22,0 575 | 2,98,60,17,120,34.7,0.198,22,0 576 | 1,143,86,30,330,30.1,0.892,23,0 577 | 1,119,44,47,63,35.5,0.280,25,0 578 | 6,108,44,20,130,24.0,0.813,35,0 579 | 2,118,80,0,0,42.9,0.693,21,1 580 | 10,133,68,0,0,27.0,0.245,36,0 581 | 2,197,70,99,0,34.7,0.575,62,1 582 | 0,151,90,46,0,42.1,0.371,21,1 583 | 6,109,60,27,0,25.0,0.206,27,0 584 | 12,121,78,17,0,26.5,0.259,62,0 585 | 8,100,76,0,0,38.7,0.190,42,0 586 | 8,124,76,24,600,28.7,0.687,52,1 587 | 1,93,56,11,0,22.5,0.417,22,0 588 | 
8,143,66,0,0,34.9,0.129,41,1 589 | 6,103,66,0,0,24.3,0.249,29,0 590 | 3,176,86,27,156,33.3,1.154,52,1 591 | 0,73,0,0,0,21.1,0.342,25,0 592 | 11,111,84,40,0,46.8,0.925,45,1 593 | 2,112,78,50,140,39.4,0.175,24,0 594 | 3,132,80,0,0,34.4,0.402,44,1 595 | 2,82,52,22,115,28.5,1.699,25,0 596 | 6,123,72,45,230,33.6,0.733,34,0 597 | 0,188,82,14,185,32.0,0.682,22,1 598 | 0,67,76,0,0,45.3,0.194,46,0 599 | 1,89,24,19,25,27.8,0.559,21,0 600 | 1,173,74,0,0,36.8,0.088,38,1 601 | 1,109,38,18,120,23.1,0.407,26,0 602 | 1,108,88,19,0,27.1,0.400,24,0 603 | 6,96,0,0,0,23.7,0.190,28,0 604 | 1,124,74,36,0,27.8,0.100,30,0 605 | 7,150,78,29,126,35.2,0.692,54,1 606 | 4,183,0,0,0,28.4,0.212,36,1 607 | 1,124,60,32,0,35.8,0.514,21,0 608 | 1,181,78,42,293,40.0,1.258,22,1 609 | 1,92,62,25,41,19.5,0.482,25,0 610 | 0,152,82,39,272,41.5,0.270,27,0 611 | 1,111,62,13,182,24.0,0.138,23,0 612 | 3,106,54,21,158,30.9,0.292,24,0 613 | 3,174,58,22,194,32.9,0.593,36,1 614 | 7,168,88,42,321,38.2,0.787,40,1 615 | 6,105,80,28,0,32.5,0.878,26,0 616 | 11,138,74,26,144,36.1,0.557,50,1 617 | 3,106,72,0,0,25.8,0.207,27,0 618 | 6,117,96,0,0,28.7,0.157,30,0 619 | 2,68,62,13,15,20.1,0.257,23,0 620 | 9,112,82,24,0,28.2,1.282,50,1 621 | 0,119,0,0,0,32.4,0.141,24,1 622 | 2,112,86,42,160,38.4,0.246,28,0 623 | 2,92,76,20,0,24.2,1.698,28,0 624 | 6,183,94,0,0,40.8,1.461,45,0 625 | 0,94,70,27,115,43.5,0.347,21,0 626 | 2,108,64,0,0,30.8,0.158,21,0 627 | 4,90,88,47,54,37.7,0.362,29,0 628 | 0,125,68,0,0,24.7,0.206,21,0 629 | 0,132,78,0,0,32.4,0.393,21,0 630 | 5,128,80,0,0,34.6,0.144,45,0 631 | 4,94,65,22,0,24.7,0.148,21,0 632 | 7,114,64,0,0,27.4,0.732,34,1 633 | 0,102,78,40,90,34.5,0.238,24,0 634 | 2,111,60,0,0,26.2,0.343,23,0 635 | 1,128,82,17,183,27.5,0.115,22,0 636 | 10,92,62,0,0,25.9,0.167,31,0 637 | 13,104,72,0,0,31.2,0.465,38,1 638 | 5,104,74,0,0,28.8,0.153,48,0 639 | 2,94,76,18,66,31.6,0.649,23,0 640 | 7,97,76,32,91,40.9,0.871,32,1 641 | 1,100,74,12,46,19.5,0.149,28,0 642 | 0,102,86,17,105,29.3,0.695,27,0 643 | 4,128,70,0,0,34.3,0.303,24,0 644 | 6,147,80,0,0,29.5,0.178,50,1 645 | 4,90,0,0,0,28.0,0.610,31,0 646 | 3,103,72,30,152,27.6,0.730,27,0 647 | 2,157,74,35,440,39.4,0.134,30,0 648 | 1,167,74,17,144,23.4,0.447,33,1 649 | 0,179,50,36,159,37.8,0.455,22,1 650 | 11,136,84,35,130,28.3,0.260,42,1 651 | 0,107,60,25,0,26.4,0.133,23,0 652 | 1,91,54,25,100,25.2,0.234,23,0 653 | 1,117,60,23,106,33.8,0.466,27,0 654 | 5,123,74,40,77,34.1,0.269,28,0 655 | 2,120,54,0,0,26.8,0.455,27,0 656 | 1,106,70,28,135,34.2,0.142,22,0 657 | 2,155,52,27,540,38.7,0.240,25,1 658 | 2,101,58,35,90,21.8,0.155,22,0 659 | 1,120,80,48,200,38.9,1.162,41,0 660 | 11,127,106,0,0,39.0,0.190,51,0 661 | 3,80,82,31,70,34.2,1.292,27,1 662 | 10,162,84,0,0,27.7,0.182,54,0 663 | 1,199,76,43,0,42.9,1.394,22,1 664 | 8,167,106,46,231,37.6,0.165,43,1 665 | 9,145,80,46,130,37.9,0.637,40,1 666 | 6,115,60,39,0,33.7,0.245,40,1 667 | 1,112,80,45,132,34.8,0.217,24,0 668 | 4,145,82,18,0,32.5,0.235,70,1 669 | 10,111,70,27,0,27.5,0.141,40,1 670 | 6,98,58,33,190,34.0,0.430,43,0 671 | 9,154,78,30,100,30.9,0.164,45,0 672 | 6,165,68,26,168,33.6,0.631,49,0 673 | 1,99,58,10,0,25.4,0.551,21,0 674 | 10,68,106,23,49,35.5,0.285,47,0 675 | 3,123,100,35,240,57.3,0.880,22,0 676 | 8,91,82,0,0,35.6,0.587,68,0 677 | 6,195,70,0,0,30.9,0.328,31,1 678 | 9,156,86,0,0,24.8,0.230,53,1 679 | 0,93,60,0,0,35.3,0.263,25,0 680 | 3,121,52,0,0,36.0,0.127,25,1 681 | 2,101,58,17,265,24.2,0.614,23,0 682 | 2,56,56,28,45,24.2,0.332,22,0 683 | 0,162,76,36,0,49.6,0.364,26,1 684 | 0,95,64,39,105,44.6,0.366,22,0 685 | 
4,125,80,0,0,32.3,0.536,27,1 686 | 5,136,82,0,0,0.0,0.640,69,0 687 | 2,129,74,26,205,33.2,0.591,25,0 688 | 3,130,64,0,0,23.1,0.314,22,0 689 | 1,107,50,19,0,28.3,0.181,29,0 690 | 1,140,74,26,180,24.1,0.828,23,0 691 | 1,144,82,46,180,46.1,0.335,46,1 692 | 8,107,80,0,0,24.6,0.856,34,0 693 | 13,158,114,0,0,42.3,0.257,44,1 694 | 2,121,70,32,95,39.1,0.886,23,0 695 | 7,129,68,49,125,38.5,0.439,43,1 696 | 2,90,60,0,0,23.5,0.191,25,0 697 | 7,142,90,24,480,30.4,0.128,43,1 698 | 3,169,74,19,125,29.9,0.268,31,1 699 | 0,99,0,0,0,25.0,0.253,22,0 700 | 4,127,88,11,155,34.5,0.598,28,0 701 | 4,118,70,0,0,44.5,0.904,26,0 702 | 2,122,76,27,200,35.9,0.483,26,0 703 | 6,125,78,31,0,27.6,0.565,49,1 704 | 1,168,88,29,0,35.0,0.905,52,1 705 | 2,129,0,0,0,38.5,0.304,41,0 706 | 4,110,76,20,100,28.4,0.118,27,0 707 | 6,80,80,36,0,39.8,0.177,28,0 708 | 10,115,0,0,0,0.0,0.261,30,1 709 | 2,127,46,21,335,34.4,0.176,22,0 710 | 9,164,78,0,0,32.8,0.148,45,1 711 | 2,93,64,32,160,38.0,0.674,23,1 712 | 3,158,64,13,387,31.2,0.295,24,0 713 | 5,126,78,27,22,29.6,0.439,40,0 714 | 10,129,62,36,0,41.2,0.441,38,1 715 | 0,134,58,20,291,26.4,0.352,21,0 716 | 3,102,74,0,0,29.5,0.121,32,0 717 | 7,187,50,33,392,33.9,0.826,34,1 718 | 3,173,78,39,185,33.8,0.970,31,1 719 | 10,94,72,18,0,23.1,0.595,56,0 720 | 1,108,60,46,178,35.5,0.415,24,0 721 | 5,97,76,27,0,35.6,0.378,52,1 722 | 4,83,86,19,0,29.3,0.317,34,0 723 | 1,114,66,36,200,38.1,0.289,21,0 724 | 1,149,68,29,127,29.3,0.349,42,1 725 | 5,117,86,30,105,39.1,0.251,42,0 726 | 1,111,94,0,0,32.8,0.265,45,0 727 | 4,112,78,40,0,39.4,0.236,38,0 728 | 1,116,78,29,180,36.1,0.496,25,0 729 | 0,141,84,26,0,32.4,0.433,22,0 730 | 2,175,88,0,0,22.9,0.326,22,0 731 | 2,92,52,0,0,30.1,0.141,22,0 732 | 3,130,78,23,79,28.4,0.323,34,1 733 | 8,120,86,0,0,28.4,0.259,22,1 734 | 2,174,88,37,120,44.5,0.646,24,1 735 | 2,106,56,27,165,29.0,0.426,22,0 736 | 2,105,75,0,0,23.3,0.560,53,0 737 | 4,95,60,32,0,35.4,0.284,28,0 738 | 0,126,86,27,120,27.4,0.515,21,0 739 | 8,65,72,23,0,32.0,0.600,42,0 740 | 2,99,60,17,160,36.6,0.453,21,0 741 | 1,102,74,0,0,39.5,0.293,42,1 742 | 11,120,80,37,150,42.3,0.785,48,1 743 | 3,102,44,20,94,30.8,0.400,26,0 744 | 1,109,58,18,116,28.5,0.219,22,0 745 | 9,140,94,0,0,32.7,0.734,45,1 746 | 13,153,88,37,140,40.6,1.174,39,0 747 | 12,100,84,33,105,30.0,0.488,46,0 748 | 1,147,94,41,0,49.3,0.358,27,1 749 | 1,81,74,41,57,46.3,1.096,32,0 750 | 3,187,70,22,200,36.4,0.408,36,1 751 | 6,162,62,0,0,24.3,0.178,50,1 752 | 4,136,70,0,0,31.2,1.182,22,1 753 | 1,121,78,39,74,39.0,0.261,28,0 754 | 3,108,62,24,0,26.0,0.223,25,0 755 | 0,181,88,44,510,43.3,0.222,26,1 756 | 8,154,78,32,0,32.4,0.443,45,1 757 | 1,128,88,39,110,36.5,1.057,37,1 758 | 7,137,90,41,0,32.0,0.391,39,0 759 | 0,123,72,0,0,36.3,0.258,52,1 760 | 1,106,76,0,0,37.5,0.197,26,0 761 | 6,190,92,0,0,35.5,0.278,66,1 762 | 2,88,58,26,16,28.4,0.766,22,0 763 | 9,170,74,31,0,44.0,0.403,43,1 764 | 9,89,62,0,0,22.5,0.142,33,0 765 | 10,101,76,48,180,32.9,0.171,63,0 766 | 2,122,70,27,0,36.8,0.340,27,0 767 | 5,121,72,23,112,26.2,0.245,30,0 768 | 1,126,60,0,0,30.1,0.349,47,1 769 | 1,93,70,31,0,30.4,0.315,23,0 770 | 771 | -------------------------------------------------------------------------------- /Chapter03/file1.txt: -------------------------------------------------------------------------------- 1 | 12.6, 23.4, Abc.co.uk 2 | 98.7, 56.8, Xyz.com 3 | 34.2, 68.1, Pqr.net 4 | -------------------------------------------------------------------------------- /Chapter03/file2.txt: -------------------------------------------------------------------------------- 1 | 
Result, End 2 | 1, 0 3 | 6, 0 4 | 3, 1 5 | -------------------------------------------------------------------------------- /Chapter03/mycsvfile.txt: -------------------------------------------------------------------------------- 1 | Line1, 4.28e5, 5.55e2, 42 2 | line2, -5.3 , , 69 3 | -------------------------------------------------------------------------------- /Chapter03/myfile.tfrecords: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter03/myfile.tfrecords -------------------------------------------------------------------------------- /Chapter03/report.txt: -------------------------------------------------------------------------------- 1 | TensorFlow 2.0 Upgrade Script 2 | ----------------------------- 3 | Converted 1 files 4 | Detected 6 issues that require attention 5 | -------------------------------------------------------------------------------- 6 | -------------------------------------------------------------------------------- 7 | File: Chapter3.ipynb 8 | -------------------------------------------------------------------------------- 9 | Chapter3.ipynb:8:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 10 | 11 | Chapter3.ipynb:17:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 12 | 13 | Chapter3.ipynb:31:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 14 | 15 | Chapter3.ipynb:40:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 16 | 17 | Chapter3.ipynb:95:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 18 | 19 | Chapter3.ipynb:131:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 20 | 21 | ================================================================================ 22 | Detailed log follows: 23 | 24 | ================================================================================ 25 | -------------------------------------------------------------------------------- 26 | Processing file 'Chapter3.ipynb' 27 | outputting to 'Chapter3_ANNTech_TF2_alpha_VERSION2.ipynb' 28 | -------------------------------------------------------------------------------- 29 | 30 | 8:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 31 | 32 | 17:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 33 | 34 | 31:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 35 | 36 | 40:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 
37 | 38 | 55:10: INFO: Renamed 'tf.contrib.data.CsvDataset' to 'tf.data.experimental.CsvDataset' 39 | 61:10: INFO: Renamed 'tf.contrib.data.CsvDataset' to 'tf.data.experimental.CsvDataset' 40 | 67:10: INFO: Renamed 'tf.contrib.data.CsvDataset' to 'tf.data.experimental.CsvDataset' 41 | 73:0: INFO: Renamed 'tf.enable_eager_execution' to 'tf.compat.v1.enable_eager_execution' 42 | 78:13: INFO: Renamed 'tf.python_io.TFRecordWriter' to 'tf.io.TFRecordWriter' 43 | 90:31: INFO: Renamed 'tf.FixedLenSequenceFeature' to 'tf.io.FixedLenSequenceFeature' 44 | 91:22: INFO: Added keywords to args of function 'tf.parse_single_example' 45 | 91:22: INFO: Renamed 'tf.parse_single_example' to 'tf.io.parse_single_example' 46 | 95:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 47 | 48 | 116:9: INFO: Renamed 'tf.python_io.TFRecordWriter' to 'tf.io.TFRecordWriter' 49 | 123:29: INFO: Renamed 'tf.FixedLenFeature' to 'tf.io.FixedLenFeature' 50 | 124:30: INFO: Renamed 'tf.VarLenFeature' to 'tf.io.VarLenFeature' 51 | 125:33: INFO: Renamed 'tf.VarLenFeature' to 'tf.io.VarLenFeature' 52 | 127:22: INFO: Added keywords to args of function 'tf.parse_single_example' 53 | 127:22: INFO: Renamed 'tf.parse_single_example' to 'tf.io.parse_single_example' 54 | 131:11: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation. 55 | 56 | -------------------------------------------------------------------------------- 57 | 58 | -------------------------------------------------------------------------------- /Chapter03/students.tfrecords: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter03/students.tfrecords -------------------------------------------------------------------------------- /Chapter04/Chapter4_BostonLinReg_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import tensorflow as tf\n", 10 | "from sklearn.datasets import load_boston\n", 11 | "from sklearn.preprocessing import scale\n", 12 | "import numpy as np\n" 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": null, 18 | "metadata": {}, 19 | "outputs": [], 20 | "source": [ 21 | "learning_rate = 0.01\n", 22 | "epochs = 10000\n", 23 | "display_epoch = epochs//20\n", 24 | "n_train = 300\n", 25 | "n_valid = 100\n" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "metadata": {}, 32 | "outputs": [], 33 | "source": [ 34 | "features, prices = load_boston(True)\n", 35 | "n_test = len(features) - n_train - n_valid\n", 36 | "\n", 37 | "# Keep n_train samples for training\n", 38 | "train_features = tf.cast(scale(features[:n_train]), dtype=tf.float32) #\n", 39 | "train_prices = prices[:n_train]\n", 40 | "\n", 41 | "# Keep n_valid samples for validation\n", 42 | "valid_features = tf.cast(scale(features[n_train:n_train+n_valid]), dtype=tf.float32)\n", 43 | "valid_prices = prices[n_train:n_train+n_valid]\n", 44 | "\n", 45 | "# Keep remaining n_test data points as test set\n", 46 | "test_features = tf.cast(scale(features[n_train+n_valid:n_train+n_valid+n_test]), dtype=tf.float32)\n", 47 | "test_prices = 
prices[n_train + n_valid : n_train + n_valid + n_test]\n", 48 | "\n" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "# function returning the predicted value of y\n", 58 | "def prediction(x, weights, bias):\n", 59 | "    return tf.add(tf.matmul(x,weights), bias) # our predicted (learned) m and c, expression is like y = m*x + c\n" 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": null, 65 | "metadata": {}, 66 | "outputs": [], 67 | "source": [ 68 | "# A loss function using root mean-squared error\n", 69 | "def loss(x, y, weights, bias):\n", 70 | "    error = prediction(x, weights, bias) - y # how 'wrong' our predicted (learned) y is\n", 71 | "    squared_error = tf.square(error)\n", 72 | "    return tf.sqrt(tf.reduce_mean(input_tensor=squared_error)) # square root of overall mean of squared error." 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": null, 78 | "metadata": {}, 79 | "outputs": [], 80 | "source": [ 81 | "# Find the derivative of loss with respect to weight and bias\n", 82 | "def gradient(x, y, weights, bias):\n", 83 | "    with tf.GradientTape() as tape:\n", 84 | "        loss_value = loss(x, y, weights, bias)\n", 85 | "    return tape.gradient(loss_value, [weights, bias])# direction and value of the gradient of our weight and bias" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "# for this regression, 'accuracy' is measured as the root mean-squared error (the classification version is left commented out)\n", 95 | "#accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", 96 | " #print(\"Accuracy:\", accuracy.eval({X: X_test, y: y_test}))\n", 97 | "def accuracy(y_true,y_predicted):\n", 98 | "    return tf.sqrt(tf.reduce_mean(input_tensor=tf.square(y_predicted-y_true)))" 99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": null, 104 | "metadata": {}, 105 | "outputs": [], 106 | "source": [ 107 | "# Start with random values for W and B on the same batch of data\n", 108 | "W = tf.Variable(tf.random.normal([13, 1],mean=0.0, stddev=1.0, dtype=tf.float32))\n", 109 | "B = tf.Variable(tf.zeros(1) , dtype = tf.float32)\n", 110 | "print(W,B)\n", 111 | "print(\"Initial loss: {:.3f}\".format(loss(train_features, train_prices,W, B)))" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "metadata": {}, 118 | "outputs": [], 119 | "source": [ 120 | "for e in range(epochs): #iterate for each training epoch\n", 121 | "    deltaW, deltaB = gradient(train_features, train_prices, W, B) # direction (sign) and value of the gradient of our weight and bias\n", 122 | "    change_W = deltaW * learning_rate # adjustment amount for weight\n", 123 | "    change_B = deltaB * learning_rate # adjustment amount for bias\n", 124 | "    W.assign_sub(change_W) # subtract from W\n", 125 | "    B.assign_sub(change_B) # subtract from B\n", 126 | "    if e==0 or e % display_epoch == 0:\n", 127 | "        # print(deltaW.numpy(), deltaB.numpy()) # uncomment if you want to see the gradients\n", 128 | "        print(\"Validation loss after epoch {:02d}: {:.3f}\".format(e, loss(valid_features, valid_prices, W, B)))" 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": null, 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "print(\"Final validation loss: {:.3f}\".format(loss(valid_features, valid_prices, W, B)))\n", 138 | "print(\"Final test loss: {:.3f}\".format(loss(test_features, test_prices, W, B)))\n", 139 | 
"print(\"W = {}, B = {}\".format(W.numpy(), B.numpy()))\n" 140 | ] 141 | }, 142 | { 143 | "cell_type": "raw", 144 | "metadata": {}, 145 | "source": [] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "#### example house" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": null, 157 | "metadata": {}, 158 | "outputs": [], 159 | "source": [ 160 | "example_house = 69\n", 161 | "y = test_prices[example_house]\n", 162 | "y_pred = prediction(test_features,W.numpy(),B.numpy())[example_house]\n", 163 | "print(\"Actual median house value\",y,\" in $10K\")\n", 164 | "print(\"Predicted median house value \",y_pred.numpy(),\" in $10K\")" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "execution_count": null, 177 | "metadata": {}, 178 | "outputs": [], 179 | "source": [] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": null, 184 | "metadata": {}, 185 | "outputs": [], 186 | "source": [] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": null, 191 | "metadata": {}, 192 | "outputs": [], 193 | "source": [] 194 | }, 195 | { 196 | "cell_type": "code", 197 | "execution_count": null, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "execution_count": null, 205 | "metadata": {}, 206 | "outputs": [], 207 | "source": [] 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": null, 212 | "metadata": {}, 213 | "outputs": [], 214 | "source": [] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": null, 219 | "metadata": {}, 220 | "outputs": [], 221 | "source": [] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "metadata": {}, 227 | "outputs": [], 228 | "source": [] 229 | } 230 | ], 231 | "metadata": { 232 | "kernelspec": { 233 | "display_name": "Python 3", 234 | "language": "python", 235 | "name": "python3" 236 | }, 237 | "language_info": { 238 | "codemirror_mode": { 239 | "name": "ipython", 240 | "version": 3 241 | }, 242 | "file_extension": ".py", 243 | "mimetype": "text/x-python", 244 | "name": "python", 245 | "nbconvert_exporter": "python", 246 | "pygments_lexer": "ipython3", 247 | "version": "3.6.7" 248 | } 249 | }, 250 | "nbformat": 4, 251 | "nbformat_minor": 2 252 | } 253 | -------------------------------------------------------------------------------- /Chapter04/Chapter4_IrisKNN_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import numpy as np\n", 10 | "from sklearn import datasets\n", 11 | "import tensorflow as tf\n" 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 2, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "iris = datasets.load_iris()\n", 21 | "x = np.array([i for i in iris.data])\n", 22 | "y = np.array(iris.target)" 23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": 3, 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "flower_labels = [\"iris setosa\", \"iris virginica\", \"iris versicolor\"]" 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": 4, 37 | "metadata": {}, 38 | "outputs": [], 39 | "source": [ 40 | "#one hot encoding\n", 41 | "y = 
np.eye(len(set(y)))[y]" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 5, 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "# normalise the x data to the range 0 to 1\n", 51 | "x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))" 52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": 6, 57 | "metadata": {}, 58 | "outputs": [], 59 | "source": [ 60 | "# create indices for the train-test split\n", 61 | "np.random.seed(42)\n", 62 | "split = 0.8 # this makes 120 train and 30 test features\n", 63 | "train_indices = np.random.choice(len(x), round(len(x) * split), replace=False)\n", 64 | "test_indices =np.array(list(set(range(len(x))) - set(train_indices)))" 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": 7, 70 | "metadata": {}, 71 | "outputs": [], 72 | "source": [ 73 | "# the train-test split\n", 74 | "train_x = x[train_indices]\n", 75 | "test_x = x[test_indices]\n", 76 | "train_y = y[train_indices]\n", 77 | "test_y = y[test_indices]" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 8, 83 | "metadata": {}, 84 | "outputs": [], 85 | "source": [ 86 | "#\n", 87 | "def prediction(train_x, test_x, train_y,k):\n", 88 | " print(test_x)\n", 89 | " d0 = tf.expand_dims(test_x, axis =1)\n", 90 | " d1 = tf.subtract(train_x, d0)\n", 91 | " d2 = tf.abs(d1)\n", 92 | " distances = tf.reduce_sum(input_tensor=d2, axis=2)\n", 93 | " print(distances)\n", 94 | " # or\n", 95 | " # distances = tf.reduce_sum(tf.abs(tf.subtract(train_x, tf.expand_dims(test_x, axis =1))), axis=2)\n", 96 | " _, top_k_indices = tf.nn.top_k(tf.negative(distances), k=k)\n", 97 | " top_k_labels = tf.gather(train_y, top_k_indices)\n", 98 | " predictions_sum = tf.reduce_sum(input_tensor=top_k_labels, axis=1)\n", 99 | " pred = tf.argmax(input=predictions_sum, axis=1)\n", 100 | " return pred" 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": 9, 106 | "metadata": {}, 107 | "outputs": [], 108 | "source": [ 109 | "k = 5" 110 | ] 111 | }, 112 | { 113 | "cell_type": "code", 114 | "execution_count": null, 115 | "metadata": {}, 116 | "outputs": [], 117 | "source": [] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": 10, 122 | "metadata": {}, 123 | "outputs": [ 124 | { 125 | "name": "stdout", 126 | "output_type": "stream", 127 | "text": [ 128 | "[[0.16666667 0.41666667 0.06779661 0.04166667]\n", 129 | " [0.80555556 0.41666667 0.81355932 0.625 ]\n", 130 | " [0.86111111 0.33333333 0.86440678 0.75 ]\n", 131 | " [0.66666667 0.45833333 0.77966102 0.95833333]\n", 132 | " [0.41666667 0.83333333 0.03389831 0.04166667]\n", 133 | " [0.66666667 0.54166667 0.79661017 1. ]\n", 134 | " [0.30555556 0.58333333 0.11864407 0.04166667]\n", 135 | " [0.22222222 0.70833333 0.08474576 0.125 ]\n", 136 | " [0.44444444 0.41666667 0.69491525 0.70833333]\n", 137 | " [0.16666667 0.66666667 0.06779661 0. 
]\n", 138 | " [0.05555556 0.125 0.05084746 0.08333333]\n", 139 | " [0.27777778 0.70833333 0.08474576 0.04166667]\n", 140 | " [0.72222222 0.45833333 0.66101695 0.58333333]\n", 141 | " [0.16666667 0.16666667 0.38983051 0.375 ]\n", 142 | " [0.63888889 0.375 0.61016949 0.5 ]\n", 143 | " [0.5 0.33333333 0.50847458 0.5 ]\n", 144 | " [0.58333333 0.375 0.55932203 0.5 ]\n", 145 | " [0.55555556 0.125 0.57627119 0.5 ]\n", 146 | " [0.36111111 0.41666667 0.52542373 0.5 ]\n", 147 | " [0.33333333 0.25 0.57627119 0.45833333]\n", 148 | " [0.5 0.41666667 0.61016949 0.54166667]\n", 149 | " [0.41666667 0.25 0.50847458 0.45833333]\n", 150 | " [0.38888889 0.33333333 0.52542373 0.5 ]\n", 151 | " [0.77777778 0.41666667 0.83050847 0.83333333]\n", 152 | " [0.55555556 0.375 0.77966102 0.70833333]\n", 153 | " [0.16666667 0.20833333 0.59322034 0.66666667]\n", 154 | " [0.83333333 0.375 0.89830508 0.70833333]\n", 155 | " [0.61111111 0.41666667 0.76271186 0.70833333]\n", 156 | " [0.36111111 0.33333333 0.66101695 0.79166667]\n", 157 | " [0.66666667 0.54166667 0.79661017 0.83333333]]\n", 158 | "tf.Tensor(\n", 159 | "[[1.39265537 0.64806968 2.75164783 ... 1.83851224 1.07815443 1.13206215]\n", 160 | " [0.74199623 1.98658192 0.7836629 ... 0.46280603 1.22316384 1.08592279]\n", 161 | " [0.89006591 2.30131827 0.46892655 ... 0.52754237 1.37123352 1.23399247]\n", 162 | " ...\n", 163 | " [0.58003766 1.82462335 0.94562147 ... 0.13418079 1.06120527 0.92396422]\n", 164 | " [0.50612053 1.69515066 1.13064972 ... 0.38418079 0.76506591 0.5722693 ]\n", 165 | " [0.91949153 1.91407721 0.85616761 ... 0.30696798 1.40065913 1.26341808]], shape=(30, 120), dtype=float64)\n", 166 | "Predicted Actual\n", 167 | "--------- ------\n", 168 | "0 iris setosa \t iris setosa\n", 169 | "1 iris versicolor \t iris versicolor\n", 170 | "2 iris versicolor \t iris versicolor\n", 171 | "3 iris versicolor \t iris versicolor\n", 172 | "4 iris setosa \t iris setosa\n", 173 | "5 iris versicolor \t iris versicolor\n", 174 | "6 iris setosa \t iris setosa\n", 175 | "7 iris setosa \t iris setosa\n", 176 | "8 iris versicolor \t iris versicolor\n", 177 | "9 iris setosa \t iris setosa\n", 178 | "10 iris setosa \t iris setosa\n", 179 | "11 iris setosa \t iris setosa\n", 180 | "12 iris virginica \t iris virginica\n", 181 | "13 iris virginica \t iris virginica\n", 182 | "14 iris virginica \t iris virginica\n", 183 | "15 iris virginica \t iris virginica\n", 184 | "16 iris virginica \t iris virginica\n", 185 | "17 iris virginica \t iris virginica\n", 186 | "18 iris virginica \t iris virginica\n", 187 | "19 iris virginica \t iris virginica\n", 188 | "20 iris virginica \t iris virginica\n", 189 | "21 iris virginica \t iris virginica\n", 190 | "22 iris virginica \t iris virginica\n", 191 | "23 iris versicolor \t iris versicolor\n", 192 | "24 iris versicolor \t iris versicolor\n", 193 | "25 iris virginica \t iris versicolor\n", 194 | "26 iris versicolor \t iris versicolor\n", 195 | "27 iris versicolor \t iris versicolor\n", 196 | "28 iris versicolor \t iris versicolor\n", 197 | "29 iris versicolor \t iris versicolor\n", 198 | "Accuracy = 96.7 %\n" 199 | ] 200 | } 201 | ], 202 | "source": [ 203 | "i, total = 0 , 0\n", 204 | "results = zip(prediction(train_x, test_x, train_y,k), test_y) #concatenate predicted label with actual label\n", 205 | "print(\"Predicted Actual\")\n", 206 | "print(\"--------- ------\")\n", 207 | "for pred, actual in results:\n", 208 | " print(i, flower_labels[pred.numpy()],\"\\t\",flower_labels[np.argmax(actual)] )\n", 209 | " if pred.numpy() == 
np.argmax(actual):\n", 210 | "        total += 1\n", 211 | "    i += 1\n", 212 | "accuracy = round(total/len(test_x),3)*100\n", 213 | "print(\"Accuracy = \",accuracy,\"%\")" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "execution_count": null, 219 | "metadata": {}, 220 | "outputs": [], 221 | "source": [] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "metadata": {}, 227 | "outputs": [], 228 | "source": [] 229 | } 230 | ], 231 | "metadata": { 232 | "kernelspec": { 233 | "display_name": "Python 3", 234 | "language": "python", 235 | "name": "python3" 236 | }, 237 | "language_info": { 238 | "codemirror_mode": { 239 | "name": "ipython", 240 | "version": 3 241 | }, 242 | "file_extension": ".py", 243 | "mimetype": "text/x-python", 244 | "name": "python", 245 | "nbconvert_exporter": "python", 246 | "pygments_lexer": "ipython3", 247 | "version": "3.6.7" 248 | } 249 | }, 250 | "nbformat": 4, 251 | "nbformat_minor": 2 252 | } 253 | -------------------------------------------------------------------------------- /Chapter04/Chapter4_LinearRegression_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "Using matplotlib backend: TkAgg\n" 13 | ] 14 | } 15 | ], 16 | "source": [ 17 | "import tensorflow as tf\n", 18 | "import matplotlib.pyplot as plt\n", 19 | "%matplotlib\n", 20 | "import numpy as np\n" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 2, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "n_examples = 1000 # number of training examples\n", 30 | "training_steps = 400 # number of steps we are going to train for\n", 31 | "display_step = 10 # after multiples of this display the loss\n", 32 | "learning_rate = 0.01 # multiplying factor on gradient\n", 33 | "m, c = 6, -5 # gradient and y intercept of our line, edit these for a different linear problem" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": 3, 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "# A dataset of points around mx + c\n", 43 | "def train_data(n, m, c):\n", 44 | "    x = tf.random.normal([n]) # n values taken from a normal distribution, mean = 0, SD =1\n", 45 | "    noise = tf.random.normal([n])# n values taken from a normal distribution, mean = 0, SD =1\n", 46 | "    y = m*x + c + noise # our scatter plot\n", 47 | "    return x, y" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": 4, 53 | "metadata": {}, 54 | "outputs": [], 55 | "source": [ 56 | "def prediction(x, weight, bias):\n", 57 | "    return weight*x + bias # our predicted (learned) m and c, expression is like y = m*x + c\n" 58 | ] 59 | }, 60 | { 61 | "cell_type": "code", 62 | "execution_count": 5, 63 | "metadata": {}, 64 | "outputs": [], 65 | "source": [ 66 | "# A loss function using mean-squared error\n", 67 | "def loss(x, y, weights, biases):\n", 68 | "    error = prediction(x, weights, biases) - y # how 'wrong' our predicted (learned) y is\n", 69 | "    squared_error = tf.square(error)\n", 70 | "    return tf.reduce_mean(input_tensor=squared_error) # overall mean of squared error."
71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": 6, 76 | "metadata": {}, 77 | "outputs": [], 78 | "source": [ 79 | "# Find the derivative of loss with respect to weight and bias\n", 80 | "def grad(x, y, weights, biases):\n", 81 | " with tf.GradientTape() as tape:\n", 82 | " loss_ = loss(x, y, weights, biases)\n", 83 | " return tape.gradient(loss_, [weights, biases]) # direction and value of the gradient of our loss w.r.t weight and bias" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": 7, 89 | "metadata": {}, 90 | "outputs": [ 91 | { 92 | "name": "stdout", 93 | "output_type": "stream", 94 | "text": [ 95 | "Initial loss: 90.243\n" 96 | ] 97 | }, 98 | { 99 | "data": { 100 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYoAAAEWCAYAAAB42tAoAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3X+cXHV97/HXO8sAG6QuSC6QlQhaGjRGwmVLaam9YmnDtQIBS9GL7aX2ltrWtto2Fgq3RAuCN/deuLW1bVq57RVEfgTWtGoDFG5Ra9CNGwwRYsHyIwPKSrKoZBs2m0//mDNhMjszOzM7M+fMzPv5eOwjM+ecmf3MivM53+/n+0MRgZmZWTUL0g7AzMyyzYnCzMxqcqIwM7OanCjMzKwmJwozM6vJicLMzGpyorCOkLRE0g8kDaQdS9ZI2i7pza2+1qxVnCispSQ9IWkqSQrFn8UR8VREvCIiZjIQ48GS7khiDUlvaeC120o+14ykfyt5/ofNxBMRSyPiC62+thGS/lvyeYqf5V8l3SjpxAbe4yZJa1odm6XPicLa4ZwkKRR/nmnnL5N0UBMv+yLwbuDbjbwoIpYVPxfwBeB9JZ/zIy2KLS1fSD7XK4GzgGlgTNLr0w3L0uZEYR0h6fjk7v2g5PkJkh6Q9H1J90r6M0k3JefeImlH2eufkHRW8nhN0iK4SdL3gEskLZB0maTHJT0v6TZJR1aKJSJeiogbIuKLQEtbOMmd+QOS/kTSTuBKSSdKul/STknflfRJSa8sec2OYqtG0tWSbkk+2/clPSzpPzZ57YikLcm5T0u6vZ47/oiYiYjHI+LXgC8DVyXvtyD5u39b0qSk/19MIpJ+A7gI+MOkRXJXcvxKSd9KYtgm6dz5/o2t85woLC2fAr4CvApYA/xig68/D7gDGAJuBn4LWAX8J2AxsAv4s2YCk/RfJH29mdcmfgJ4BFgEfBQQcDVwDPAG4LXAf6/x+lXAJyl8ts8Df9LotZIOAUaBvwaOBNYn1zbqTqC0JvL3wIkUPsvDye8mIj4O3Ap8JGldnZ9c/03gDAqtlGuAT0k6uok4LEVOFNYOo8kd56Sk0fKTkpYAPwr8UXJ3/0VgQ4O/48sRMRoR+yJiCngvcEVE7IiIPRSSz8830/UTEZ+KiDc1+roST0XEnyd35lMR8c2I+Mfksz4HXE8hoVXzTxGxMannfBJY0cS1ZwD7IuJPI2I6Im4HNjfxWZ6hkGhI/tZ/ExHfj4h/o/A3PlXSYdVeHBG3RcSzyWs/BTwBjDQRh6Wom/pPrXusioh7a5xfDOyMiN0lx54Gjmvgdzxd9vw1wF2S9pUcmwGOBvINvG8rHBCbpGMo3OmfARxO4QZtosbrS+smu4GqX8Q1rl0M7Ci7tvxvVo9hYCdAMmLtWuDngaOA4t/6KODFSi+WdAnwAQr/+wC8IrneuohbFJaGZ4EjJS0sOVaaJF4E9p9LvqAWlb1H+bLHTwP/OSKGSn4OjYhOJ4lKsX0U2AMsj4gfAi6h0B3VTs9S+JIv1UgiLlpFoWgP8EvA24C3UuhK+uHkePGzHPC5Jb0W+HPg14FXRcQQ8Cjt/+zWYk4U1nER8SQwBqxJhqr+OHBOySXfBA6V9HOScsCVwCFzvO1fANdIeg2ApEWSzqt2saRDJB2aPD1Y0qGS2vUFdjiF5PeCpOOA32/T7yn1ReAgSb8u6SBJ7wBOreeFkgYkvVbSx4GfBP44OXU4hYT3PIVEfk3ZS79Dof5S9AoKyWOi8Lb6VeCkZj+QpceJwtJyMfDjFL50rqZQCN0DEBEvAL9BoRCbp/AlW96NUu7/UKhz3C3p+8Am4MdqXL8dmKJw170xeVxMMhdL2tbUp6rsKuA04IUkxvUtfO+KkjrN+RRqN7uAXwA+R/I3ruLNkn4AfA+4j0IyGImI4t/i/1KoWTwDbAP+uez1fw2cLGmXpDsi4uvAxygMWngWWAo82IKPZx0mb1xkWSDpVuDRiLgq7Vh6laTNwA0R8cm0Y7Hu4haFpULSj0p6XTI2/2wKw11njZCy5iXzUY5Oup5+hUK3z8a047Lu41FPlpZjKIzRfxWFbqVfj4jxdEPqOa+n0KV3GPA48I5keK5ZQ9z1ZGZmNbnryczMauqJrqejjjoqjj/++LTDMDPrKps3b/5uRJTPUZqlJxLF8ccfz9jYWNphmJl1FUlP1nOdu57MzKymVBOFChujPCfp4ZJjayTlk+WRt0h6W5oxmpn1u7RbFH8DnF3h+PURsSL5+VyHYzIzsxKpJoqIeIBkZUozM8umtFsU1bxP0teTrqkjKl0g6VJJY5LGJiZqrdhsZmbzkcVE8efA6yhswPIs8L8qXRQR6yJiJCJGFi2ac3SXmVnPGB3Pc8Z193HCZZ/ljOvuY3S8vavpZ254bER8p/hY0l9R2HrRzMwoJInL79zK1HRhu/f85BSX37kVgFWnlG9B0hqZa1FIOrbk6fkU9uU1MzNg7cbt+5NE0dT0DGs3bm/b70y1RSHpFuAtwFGSdlBYt/8tklZQ2PDkCeDXUgvQzCxjnpmcauh4K6SaKCLiXRUOf6LjgZiZdYnFQ4PkKySFxUODbfudmet6MjOz6lavXMpgbuCAY4O5AVavXNq235m5YraZmVVXLFiv3bidZyanWDw0yOqVS9tWyAYnCjOzrrPqlOG2JoZy7noyM7Oa3KIwM0vJ6Hi+o11IzXKiMDNrs0oJAej4xLlm9cSe2SMjI+GNi8wsi8pnUkNhlNKhuQXs2j096/qhwRyHHXJQR1oZkjZHxMhc17lFYWbWRt
VmUpcfK5qcmmZyqpBAstLKcDHbzKyN5jtjut3Lc9TDicLMrI2qzZgeGszNmjhXTTuX56iHE4WZWRtVm0m95txlXHvBcoaHBhEwPDTIEQtzFd+jnctz1MM1CjOzNpprJnVp7aFa4budy3PUw4nCzKyNGpkrkcbyHPVwojAza5NmNhnq9PIc9XCNwsysTdLYZKgd3KIwM6tTo0tupLHJUDukvcPdjcDbgeci4o3JsSOBW4HjKexw9wsRsSutGM2stzS7vlIz3UhpbDLUDml3Pf0NcHbZscuAf4yIE4F/TJ6bmc1b8cs+PzlF8PKX/eh4fs7XNtONlMYmQ+2QaqKIiAeAnWWHzwP+Nnn8t8CqjgZlZj1rPjWDZrqRVp0yPGuuxLUXLM9csXouWaxRHB0RzyaPvw0cXekiSZcClwIsWbKkQ6GZWTebT82g2W6kLI5ialTaXU81RWFp24rL20bEuogYiYiRRYsWdTgyM+tG1b7U66kZ9Eo3UjOy2KL4jqRjI+JZSccCz6UdkJn1htUrl1ac+XzmSYs447r7ZhW4ywvf7zh1mPsfncjUZLhOyGKi2AD8V+C65N/PpBuOmfWKSjOfzzxpEes352eNZhp7cues4+s357uyxjBfqW5cJOkW4C3AUcB3gKuAUeA2YAnwJIXhseUF7wN44yIza9YZ191XsfZQzfDQIF+67K1ds41pLV2xcVFEvKvKqZ/uaCBm1ldKv+QbvVV+ZnKqqTkV3SyLXU9mZg2r9w6/0gqtjVg8NFhzmK0ThZlZBjVyh1/pS75euQHx4p69+7cqLddtS3PUy4nCzLreXHf48+lqKjrs4AFefGmmapKA7luao15OFGbW9WpNpBsdz7P6joeYnml+4M4RC3NM7q6eIKC351RkesKdmVk9ak2ku+KurfNKEgC7dk/XbIl069Ic9XKiMLOuV23W9JknLeLFl5qrR9SrOFy2V5MEuOvJzLpQac1haGGOiEJNYkBiJoLhZCLdLQ8+Xfd7iirrBQGDuQVMTe+r+Jpe7W4q5RaFmXWV8qXCd+2e3l9gnonY35JYvznPTJ0TiiW4/qIVDFfpwjo0NzCrxSLg4tOX9HRLosiJwsy6ylzDW6emZ7jlwacbGgJbzCfViuKTu6dnLRd+/UUruHrV8kZC71ruejKzrjE6nq9ruY16WxKl1m7cXnMp8V5YLrxZblGYWVcodjm1yzOTU329lHgtblGYWVeYz4zqegwtzFVcXbYbF/trNScKM+sK7V4eo9hb1c9dTNW468nMukK7l8d4ocbSHP3OicLMukKl+kEr9eo6Ta3gricz64j5bvSz6pRhxp7cyc0PPkWr91vrl4lzzUp1h7taJD0BfB+YAfbW2oXJO9yZZdvoeJ7Vtz/E9L7Z3zfDVZJGeWIp37K0WeUzsIsT5/plTkSprtjhrg5nRsR30w7CzOZnzYZtFZMEVN47otL+EjdteqolsVx8+hLuf3TCo5oakPVEYWY9oNYeDlCYTf3+W7fwob/bxlXnLGvbUNjB3IK+bDnMV5aL2QHcLWmzpEvLT0q6VNKYpLGJiYkUwjOzVtu1e5r337qlrtnXjcotENde8KaWv28/yHKL4icjIi/pPwD3SHo0Ih4onoyIdcA6KNQo0grSzF5WrWB9xMIcu+bY+KeVBLxyMIdUWKepUhfTfIvr/SSziSIi8sm/z0m6CzgNeKD2q8wsLbX2rb7qnGXz3mWuXsX9IZqN1clitkx2PUk6TNLhxcfAzwIPpxuVmdVSbd/qNRu2sXbj9o4kiXqHudbaY9tmy2qL4mjgLklQiPFTEfEP6YZkZrVUXaJ7anrOYnar1Ls/RK09tm22TCaKiPgWcHLacZhZ/aot0d0pQ4O5ukc01VpO3GbLZNeTmXWfdi+xUctgboA15y6r+3ovJ96YTLYozCz7Ks2cPuSgBfv7/hfmFrC7wj7TrSIVVnytNrO7Fi8n3pjMLuHRCC/hYdZZ5aOGOim3QKy98GR/qbdAryzhYWYZ1O5NhKpppvVg8+dEYWZ1u3J0K7c8+HRTe1LPx7v7dNG+rHCiMLO6XDm6tWUL89XriIU5rjpnmVsQKXOiMDOg9pIWo+P5jicJgIUHH+QkkQFOFGZ9bnQ8z5oN2w6YFJefnOIDt25h7MmdjLzmyP3LW3SaJ8BlgxOFWR+7cnQrN296ikoVhwBu2vQUt4/tYM/e9g1zrcUT4LLBE+7M+tToeL5qkiiVVpLwBLjscKIw61NrN26fM0l02vDQIEr+vfaC5a5PZIS7nsz6SLFgneaaTNXUszy4pcOJwqxPpDmbupSSf0tbM+5myjZ3PZn1iTUbtqWeJKCQIK6/aIW7mbqIWxRmfWB0PN+xPSHmUmxRuJupe2S2RSHpbEnbJT0m6bK04zHrZh/6u21ph7BfgHeS6zKZbFFIGgD+DPgZYAfwVUkbIuIb6UZm1j1KZ1pnbXSTJ9J1l0wmCuA04LFkpzskfRo4D3CiMKuhdFSTIHMJosgT6bpLVhPFMPB0yfMdwI+lFItZVygf1ZTVJOERTt0nszWKuUi6VNKYpLGJiYm0wzFLXVp7RNRjaDDnEU5dLKstijxwXMnzVyfH9ouIdcA6KOxw17nQzLIpi5PoirZc9bNph2DzkNUWxVeBEyWdIOlg4J3AhpRjMsus0fH83BelZNj1iK6XyRZFROyV9D5gIzAA3BgR2RnfZ5YhxdpEGnILxPS+6g161yN6QyYTBUBEfA74XNpxmGVVrSXCO0aF+sMLU9MsHhrkzJMWcf+jExU3P7LuldlEYWbVpbEtaSXTM8FhhxzkGkSPc6Iwy6DSyXJDC3NEcMBdexaSRJEnz/U+JwqzjCmfD7Fr94FblGYpSYAnz/WDrI56MutbWZwP8cR1P8cNF61gMDdwwHEXq/uDWxRmGZO1+RBDgzmA/UXpYpeYi9X9w4nCLENGx/OZW6NpzbnL9j9edcqwE0MfcteTWYZkbR/rd5++xInB3KIwy4rR8Xxmup0W5hbwkQve5CRhgBOFWUeVDnst7eMfHc+z+vaHUo3tjNcdyc2/+uOpxmDZ5ERh1iHlw17zk1O8/9Yt/O6tW9iXcmwATzyfjdaMZY8ThVkblbYgFkjMxOwKRBaSBHjinFXnRGHWJuUtiEpJIks8cc6q8agnszZZs2Fb5ibOVeOJc1aLWxRmbTA6nmdyanruCzNg2BPnbA5OFGYtUD6aafdLe9MOqS7DQ4N86bK3ph2GZZwThdk8VRrN1A3c3WT1ylyNQtIaSXlJW5Kft6Udk1ktWVzErx7XXrDc3U1Wl6y2KK6PiP+ZdhDW3ypNjoPZi+J1Swui1PDQoJOE1S2ricIsVZW6k1bf8RAE+/eIzk9OcfmdWzO3iN9c3OVkjcpc11PifZK+LulGSUdUukDSpZLGJI1NTEx0Oj7rcZW6k6ZnYn+SKJqanslUkliYW8ARC3NVzw9I7nKyhilSmAQk6V7gmAqnrgA2Ad+lcJP2x8CxEfGeWu83MjISY2NjLY/T+tcJl302Uwmg3PDQYM09IcpbRFBoSThJWClJmyNiZK7rUul6i
oiz6rlO0l8Bf9/mcMxmWTw0mNnaw2BuwZxDWr3JkLXSnIlC0m8BN0XErg7Eg6RjI+LZ5On5wMOd+L1mpVavXDrrjjwrDi3bjrQabzJkrVJPi+Jo4KuSvgbcCGyM9vZX/Q9JKyh0PT0B/Fobf5dZRcUv2N+77aHMrdE0ubs7Znxb75izmB0RVwInAp8ALgH+RdJHJL2uHQFFxC9GxPKIeFNEnFvSujDrqFWnDGcuSYAX77POq6tGEREh6dvAt4G9wBHAHZLuiYgPtjNAs04rzp9Iu0aRWyBQYbRVkYe2WhrqqVH8DvBLFEYi/TWwOiKmJS0A/gVworCeceXoVm7e9FQmRjytvfDkwr8uSFvK6mlRHAlcEBFPlh6MiH2S3t6esMw6a3Q8z5oN2zK14msxITgxWNrmTBQRcVWNc4+0Nhyzzqs05yBtQ4PVJ82ZdZqX8LCeVGmdpkqT0rLWioBCbWLNucvSDsNsPycK6zmV1mm6/M6twMvdOKPjeX731i2Z2a+6aEBi7YUnu7vJMsWJwnpOpXWapqZn+L3bHuIDt25h8dAgO1/ck7kkAbAvwknCMseJwnrOM1WGtRbnRKQ97LUWz5GwLMrq6rFmTevWL9vcAnmOhGWSE4X1nNUrl5IbUNphNGRoMOfahGWWu56sp1w5upVbHnw6k0tvlHviup9LOwSzujhRWNcqXWpjQOqK5FA03KXdY9afnCisK5UPge2mJAG4FmFdxTUK60prNmzL1EzqSqpVSYYGc65FWFdxorCuMzqez9xs6kqCwmqvpQZzA551bV3HicK6ztqN29MOoS7DQ4Nce8FyhocGUclztyas26RSo5B0IbAGeD1wWkSMlZy7HPgVYAb47YjYmEaMll3VJtSlZYFgX1mJpLhvhLcjtV6QVoviYeAC4IHSg5LeALwTWAacDXxcUn0bBFvfyNqEugGJd5++xC0H61mptCiKy5NLs8p95wGfjog9wL9Kegw4DfhyZyO0NNS74uvk7pdSirCy6X3B/Y9O8KXL3pp2KGZtkbXhscPAppLnO5Jjs0i6FLgUYMmSJe2PzNqq0oqvq29/iCvu2sqLL2V7dBNkrzvMrJXaligk3QscU+HUFRHxmfm+f0SsA9YBjIyMdNcgepul0oqv0/uC6YwliWoT+7LWHWbWSm1LFBFxVhMvywPHlTx/dXLMelw33JEP5gZ4x6nDrN+cPyCpFQvXZr0qa8NjNwDvlHSIpBOAE4GvpByTdUDW78gPO3iAay9YztWrlnvIq/WdtIbHng98DFgEfFbSlohYGRHbJN0GfAPYC/xmRGSr78HaYvXKpay+4yGmZ7LRi1jsYhqQeNePHcfVq5bvP+chr9Zv0hr1dBdwV5Vz1wDXdDYiS1NxtFNWksTw0KBHMJmVyNqoJ+sz5aOd0uZ6g9lsWatRWJ/50N9lZ3G/ocGc6w1mFThRWCpGx/Os+NDd7NqdncX99uzdl3YIZpnkridri/JZ1meetIj7H50gPzmFBFncPmJqeoa1G7e7RWFWxonCWq7SLOubNj21/3wWk0RRN8znMOs0dz1Zy1WaZd1uuQXVtglqTNbnc5ilwYnCWi6Nu/K1F57MwOxFJoHCTnM3XLSCGy5aMWsjoVIe8WRWmROFtVyn78oHJFadMsy+Kn1awcuT5EpnVQ8N5jhiYc4zrM3m4BqFtdToeJ5dL+7p6O8sLtK3eGiQfIXWzHBJ4vKsarPGOVHYvJSObhpamOOF3dN0epBpMRGsXrl01uQ9dyeZzZ8ThTWtfHRTGnMiShNBsaUw1+ZHZtYYJwprWhqjm0oNSLPqCu5aMms9F7OtaWnPOdgX4aRg1gFOFNa0tOccpP37zfqFE4U1Lc0isYvUZp3jGoVVNTqeZ82GbUxOFYrUCwT7ojDKKM0v6WEXqc06Kq0d7i4E1gCvB06LiLHk+PHAI8D25NJNEfHeFELse6PjeVbf/hDT+16exFZ8mJ+c4v23bul4TIO5AU+KM0tBWi2Kh4ELgL+scO7xiFjR4XiMA+dEQGFGc1a4FWGWnrS2Qn0EQFXW5rHOy9pOc0UCrr9ohROEWYqyWMw+QdK4pH+S9OZqF0m6VNKYpLGJiYlOxteT0p4TUc3Fpy9xkjBLWdtaFJLuBY6pcOqKiPhMlZc9CyyJiOclnQqMSloWEd8rvzAi1gHrAEZGRrLUS9KV0p4TUcm7T1/C1auWpx2GWd9rW6KIiLOaeM0eYE/yeLOkx4EfAcZaHJ6VeeVgbv/opk4p1h3Ku7xEoSXhJGGWDZkaHitpEbAzImYkvRY4EfhWymH1pPKhr8169+lLDti9rl7FeRBen8ks+9IaHns+8DFgEfBZSVsiYiXwU8CHJU0D+4D3RsTONGLsZZWGvjbrrq/lG35N+RpNXp/JLNvSGvV0F3BXhePrgfWdj6i/rN24vSVJAuDFlyoXwIcGc7z95GO5edNTBwyz9VwIs+6TxVFP1madKFy/MDXN1auWc/1FK/bvKOdd5My6U6ZqFNZapRPoSvv+q+0E10rFBfvcrWTW/dyi6FHFCXT5ySmCwrIbH7h1C1eObmX1yqXkFrRvsqNId8FAM2stJ4oeVWkCXQA3JyOU1l54MkODuf3njliY44aLVtCK9BHgVoRZD3HXU4+qVocI4Pdue4h9ESweGmTNucsO+FJfu3H7vLulhr1PhFlPcaLoUbXqEDNRGIdU7I4ae3InI685siVJwvtEmPUedz31qNUrl9bVjRTATZueYvXtDzWVJIYGcx7VZNbj3KLoUatOGWbsyZ2z5jFU08y8isHcwKyuKzPrPW5R9LDSeQytcMRCtx7M+pFbFD2u+EU+370mBnMDXHWOWw9m/ciJog80s9dEbkAcdvBBvDA17YX6zPqcE0UfaHTJDm87amalnCj6QCNLdgwPDfKly97a5ojMrJu4mN0HVq9cymBu4IBjuQHNWsbDcyDMrBK3KHpEtQUAgaqbA1U65u4mMyuniM5vNy1pLXAO8BLwOPDLETGZnLsc+BVgBvjtiNg41/uNjIzE2Fj/7pZaXACwtGDtfR/MbC6SNkfEyFzXpdX1dA/wxoh4E/BN4HIASW8A3gksA84GPi5poOq7dKnR8TxnXHcfJ1z2Wc647j5GxxvfJa5UpVFNU9MzrN24fV7va2YG6e1wd3fJ003AzyePzwM+HRF7gH+V9BhwGvDlDofYNuV3//nJKS6/cytQecXVWl1KRdVGNXVigyIz631ZKGa/B/h88ngYeLrk3I7k2CySLpU0JmlsYmKizSG2TiN3/5X2lLj8zq2zWiCLq8y8rnbczKwRbUsUku6V9HCFn/NKrrkC2Avc3Oj7R8S6iBiJiJFFixa1MvS2qnaXn5+cmpUA6k0qZ560aNYCgB7BZGat0raup4g4q9Z5SZcAbwd+Ol6uqOeB40oue3VyrGfUmtNQ3gVVT5fS6Hie9ZvzByz8J+Adp3oLUjNrjVS6niSdDXwQODcidpec2gC8U9Ihkk4ATgS+kkaM7VJpTkNReWuhni6lajvZ3f9o93THmVm2pVWj+FPgcOAeSVsk/QVARGwD
bgO+AfwD8JsR0fxKdhm06pRhrr1gedXzpa2FSkmlvEvJhWwza7e0Rj39cI1z1wDXdDCclqpnlNKqU4ar7iZX2lqoNlGu9P2qdWW5kG1mreKZ2S3UyNDX1SuXVpwkV16AXnVK7VpDve9jZtasLAyP7RmNDH0tdkHNdyOgVr2PmVk1blG0UKP1grlaC/Vq1fuYmVXiFkULeeKbmfUiJ4oWqmeUUrlWr/tkZtZq7npqoXpGKZVqdN0nM7M0OFFUUc8w10oaqRfUKn47UZhZVjhRVNCpO31PljOzbuAaRQWd2t/BxW8z6wZOFBV06k6/meK3mVmnOVFU0Kk7fU+WM7Nu4BpFBZ1cFsOT5cws65woKmh0mKuZWS9zoqiiVXf6zQ6zNTPLCieKNvKEOjPrBS5mt1GnhtmambVTKi0KSWuBc4CXgMeBX46ISUnHA48AxW/STRHx3jRiLJpP15En1JlZL0ir6+ke4PKI2Cvpo8DlwB8k5x6PiBUpxXWAubqO5koi3n3OzHpBKl1PEXF3ROxNnm4CXp1GHHOp1XVUTCL5ySmCl5NI6eqvnlBnZr0gCzWK9wCfL3l+gqRxSf8k6c3VXiTpUkljksYmJibaElitrqN66g+eUGdmvaBtXU+S7gWOqXDqioj4THLNFcBe4Obk3LPAkoh4XtKpwKikZRHxvfI3iYh1wDqAkZGRaCbG+XQd1Vt/8IQ6M+t2bWtRRMRZEfHGCj/FJHEJ8Hbg4oiI5DV7IuL55PFmCoXuH2lHfPPtOvKCfmbWL1LpepJ0NvBB4NyI2F1yfJGkgeTxa4ETgW+1I4b5dh25/mBm/SKtUU9/ChwC3CMJXh4G+1PAhyVNA/uA90bEznYEMN+uIy/zYWb9IpVEERE/XOX4emB9J2JoxdBV1x/MrB9kYdRTKtx1ZGZWn75d68ldR2Zm9enbRAHuOjIzq0ffdj2ZmVl9nCjMzKwmJwozM6vJicLMzGpyojAzs5qULLPU1SRNAE+mHUfiKOC7aQfRpG6NvVvjBseehm6NG1of+2siYtFcF/VEosgSSWMRMZJ2HM3o1ti7NW5w7Gno1rghvdjd9WRmZjU5UZiZWU1OFK23Lu0A5qFbY+/WuMGxp6Fb44aUYneNwszManKLwszManKiMDOzmpwo2kDSH0v6uqQtku6WtDjtmOolaa2kR5P475I0lHZM9ZAu2gAUAAADdklEQVR0oaRtkvZJ6oqhj5LOlrRd0mOSLks7nnpJulHSc5IeTjuWRkg6TtL9kr6R/LfyO2nHVC9Jh0r6iqSHktg/1NHf7xpF60n6oYj4XvL4t4E3JFu9Zp6knwXui4i9kj4KEBF/kHJYc5L0egrb5/4l8PsRMZZySDUle8N/E/gZYAfwVeBdEfGNVAOrg6SfAn4A/L+IeGPa8dRL0rHAsRHxNUmHA5uBVV3yNxdwWET8QFIO+CLwOxGxqRO/3y2KNigmicRhQNdk44i4OyL2Jk83Aa9OM556RcQjEbE97TgacBrwWER8KyJeAj4NnJdyTHWJiAeAtuxl304R8WxEfC15/H3gEaArNqSJgh8kT3PJT8e+V5wo2kTSNZKeBi4G/ijteJr0HuDzaQfRo4aBp0ue76BLvrR6gaTjgVOAB9ONpH6SBiRtAZ4D7omIjsXuRNEkSfdKerjCz3kAEXFFRBwH3Ay8L91oDzRX7Mk1VwB7KcSfCfXEbTYXSa8A1gPvL2v9Z1pEzETECgqt/NMkdazbr6+3Qp2PiDirzktvBj4HXNXGcBoyV+ySLgHeDvx0ZKiI1cDfvBvkgeNKnr86OWZtlPTvrwdujog7046nGRExKel+4GygIwMK3KJoA0knljw9D3g0rVgaJels4IPAuRGxO+14ethXgRMlnSDpYOCdwIaUY+ppSUH4E8AjEfG/046nEZIWFUcgShqkMAiiY98rHvXUBpLWA0spjMJ5EnhvRHTF3aKkx4BDgOeTQ5u6YcSWpPOBjwGLgElgS0SsTDeq2iS9DbgBGABujIhrUg6pLpJuAd5CYcnr7wBXRcQnUg2qDpJ+EvgCsJXC/zcB/jAiPpdeVPWR9Cbgbyn8t7IAuC0iPtyx3+9EYWZmtbjryczManKiMDOzmpwozMysJicKMzOryYnCzMxqcqIwM7OanCjMzKwmJwqzNpD0o8meHodKOizZQ6BrluQ2K+UJd2ZtIulq4FBgENgREdemHJJZU5wozNokWcPpq8C/AT8RETMph2TWFHc9mbXPq4BXAIdTaFmYdSW3KMzaRNIGCjvXnUBhC85M7UtiVi/vR2HWBpJ+CZiOiE8l+2P/s6S3RsR9acdm1ii3KMzMrCbXKMzMrCYnCjMzq8mJwszManKiMDOzmpwozMysJicKMzOryYnCzMxq+nc2Y3xMiMx3MQAAAABJRU5ErkJggg==\n", 101 | "text/plain": [ 102 | "
" 103 | ] 104 | }, 105 | "metadata": { 106 | "needs_background": "light" 107 | }, 108 | "output_type": "display_data" 109 | } 110 | ], 111 | "source": [ 112 | "# Start with random values for W and B on the same batch of data\n", 113 | "x, y = train_data(n_examples,m,c) # our training values x and y\n", 114 | "plt.scatter(x,y)\n", 115 | "plt.xlabel(\"x\")\n", 116 | "plt.ylabel(\"y\")\n", 117 | "plt.title(\"Figure 1: Training Data\")\n", 118 | "W = tf.Variable(np.random.randn()) # initial, random, value for predicted weight (m)\n", 119 | "B = tf.Variable(np.random.randn()) # initial, random, value for predicted bias (c)\n", 120 | "\n", 121 | "print(\"Initial loss: {:.3f}\".format(loss(x, y, W, B)))" 122 | ] 123 | }, 124 | { 125 | "cell_type": "code", 126 | "execution_count": 8, 127 | "metadata": {}, 128 | "outputs": [ 129 | { 130 | "name": "stdout", 131 | "output_type": "stream", 132 | "text": [ 133 | "Loss at step 00: 86.817169\n", 134 | "Loss at step 10: 59.016361\n", 135 | "Loss at step 20: 40.217892\n", 136 | "Loss at step 30: 27.506660\n", 137 | "Loss at step 40: 18.911522\n", 138 | "Loss at step 50: 13.099622\n", 139 | "Loss at step 60: 9.169707\n", 140 | "Loss at step 70: 6.512362\n", 141 | "Loss at step 80: 4.715503\n", 142 | "Loss at step 90: 3.500499\n", 143 | "Loss at step 100: 2.678933\n", 144 | "Loss at step 110: 2.123400\n", 145 | "Loss at step 120: 1.747758\n", 146 | "Loss at step 130: 1.493754\n", 147 | "Loss at step 140: 1.322001\n", 148 | "Loss at step 150: 1.205863\n", 149 | "Loss at step 160: 1.127334\n", 150 | "Loss at step 170: 1.074233\n", 151 | "Loss at step 180: 1.038327\n", 152 | "Loss at step 190: 1.014048\n", 153 | "Loss at step 200: 0.997631\n", 154 | "Loss at step 210: 0.986530\n", 155 | "Loss at step 220: 0.979024\n", 156 | "Loss at step 230: 0.973949\n", 157 | "Loss at step 240: 0.970517\n", 158 | "Loss at step 250: 0.968196\n", 159 | "Loss at step 260: 0.966627\n", 160 | "Loss at step 270: 0.965566\n", 161 | "Loss at step 280: 0.964848\n", 162 | "Loss at step 290: 0.964363\n", 163 | "Loss at step 300: 0.964035\n", 164 | "Loss at step 310: 0.963813\n", 165 | "Loss at step 320: 0.963663\n", 166 | "Loss at step 330: 0.963562\n", 167 | "Loss at step 340: 0.963493\n", 168 | "Loss at step 350: 0.963447\n", 169 | "Loss at step 360: 0.963415\n", 170 | "Loss at step 370: 0.963394\n", 171 | "Loss at step 380: 0.963380\n", 172 | "Loss at step 390: 0.963370\n" 173 | ] 174 | } 175 | ], 176 | "source": [ 177 | "for step in range(training_steps): #iterate for each training step\n", 178 | " deltaW, deltaB = grad(x, y, W, B) # direction (sign) and value of the gradient of our loss w.r.t weight and bias\n", 179 | " change_W = deltaW * learning_rate # adjustment amount for weight\n", 180 | " change_B = deltaB * learning_rate # adjustment amount for bias\n", 181 | " W.assign_sub(change_W) # subract change_W from W\n", 182 | " B.assign_sub(change_B) # subract change_B from B\n", 183 | " if step==0 or step % display_step == 0:\n", 184 | " # print(deltaW.numpy(), deltaB.numpy()) # uncomment if you want to see the gradients\n", 185 | " print(\"Loss at step {:02d}: {:.6f}\".format(step, loss(x, y, W, B)))" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": 9, 191 | "metadata": {}, 192 | "outputs": [ 193 | { 194 | "name": "stdout", 195 | "output_type": "stream", 196 | "text": [ 197 | "Final loss: 0.963\n", 198 | "W = 6.004026412963867, B = -5.038877010345459\n", 199 | "Compared with m = 6.0, c = -5.0 of the original line\n" 200 | ] 201 | }, 202 | { 203 | 
"data": { 204 | "text/plain": [ 205 | "Text(0.5, 1.0, 'Figure 2: Line of Best Fit')" 206 | ] 207 | }, 208 | "execution_count": 9, 209 | "metadata": {}, 210 | "output_type": "execute_result" 211 | }, 212 | { 213 | "data": { 214 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYoAAAEWCAYAAAB42tAoAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAF11JREFUeJzt3X2wZHV95/H3R4JA1C1EZpEZGMc17GyIMU5lRF2zGxfNzvjIxIdEY6JuzBJ3xdUqdwyElGIlKFtUWesWZpVVEyugiIJIfAgPPpRZHwHxCQEliSwzguDDRI1EBb/7R58Ld663+z5Md59zut+vqlvc7tP39PfeufT3nu/n/PqkqpAkaZj7tF2AJKnbbBSSpJFsFJKkkWwUkqSRbBSSpJFsFJKkkWwUmogkm5P8IMlBbdcyKW19j0kOS/LXSf4xybun+dzrNQ+/D7PMRqEDkuTrSe5sXgQWPjZW1f+rqvtX1d0dqPExSa5I8p0kdyR5d5Kj1/D1H0vyB0vvb/F7fBZwFPCgqnr20o1Jzkjyk0X/HtcneeaBPmmz3/NWeMyqfh+G/UzVTTYKjcPTmheBhY9vTPLJkvzcGr/kgcC5wBbgIcD3gb8Yc1nT9BDgq1V114jHvGvh3wN4OXBekqOmU950fx80eTYKTUSSLUlq4UU9yUOTfDzJ95NcmeSNC3+dJnl8kj1Lvv7rSZ7YfH5GkvckOS/J94AXJrlPklOT/F2Sbye5MMkRy9VSVR+qqndX1feq6ofAOcDjJvA9fizJnyb5RPN9Xp7kyEWPf0ySTybZl+QLSR4/Yt+/2OxvX5Lrkjy9uf81wKuA327+Wn/RSnVW1WUMmuPDFu3/qUk+3+z/k0kesWjbHyXZ23wPNyZ5QpKdwB8vet4vrPdnleRM4N8B5zT7Omct+9L02Sg0Le8APgs8CDgD+L01fv1JwHuAw4HzgZcCu4BfBzYC3wXeuMp9/XvguoUbSX4nyRfXWM8wvwP8J+BfAvcF/nvzHJuADwB/BhzR3H9Rkg1Ld5DkYOCvgcub/bwUOD/J1qp6NfBa7j1ieOuoYjLwlKaWrzT3bQPeBvwhg3+PNwOXJjkkyVbgFOBRVfUAYAfw9ar6myXP+yvr/QFV1enA3wKnNPs6Zb370nTYKDQOlzR/me5LcsnSjUk2A48CXlVVP66q/wtcusbn+FRVXVJVP62qO4EXA6dX1Z6q+hGD5vOslcZSzV/OrwJ2L9xXVe+oqkcM/6o1+Yuq+mpT44XAI5v7fxf4YFV9sPkergCuBp68zD4eA9wfOKv5eX0EeD/w3DXU8VtJ9gE/YPCzfm1V7Wu2nQy8uao+U1V3V9XbgR81z3s3cAhwfJKDq+rrVfV3a/kBsMLvg/rHRqFx2FVVhzcfu5bZvhH4TjP2WXDLGp9j6eMfArx34QUJuJ7Bi9zQOXySXwA+BLysqv52jc+/Wrct+vyHDF7wF+p99qIX0H3ArwHLheobgVuq6qeL7rsZ2LSGOi5s/j3ux2Dk9Pwkf7iollcsqeVYYGNV3cQg0zgDuD3JBUk2ruF5YeXfB/WMjULTcCtwRJKfX3TfsYs+/yfgnm0ZnEK5dCSz9G2ObwGetOgF6fCqOrSq9i5XQJKHAFcCf1pVf7Xeb+QA3AL81ZJ671dVZy3z2G8AxyZZ/P/nZmDZ720lVfV1Bg3yaYtqOXNJLT9fVe9sHv+Oqvo1Bg2lgP+xsKv1PP+wssa4L02YjUITV1U3MxiznJHkvkkey70vWgBfBQ5N8pRmPv8nDMYfo7wJOLNpACTZkOSk5R7Y5AMfAc6pqjet89v4uSSHLvo4eI1ffx7wtCQ7khzU7OPxSY5Z5rGfYXA08sokBzeh99OAC9ZTePMcO7k3l/k/wIuTPLrJMO7X/OwfkGRrkhOTHAL8M3AnsHBk801gy5IGtl7fBP7VGPajKbBRaFqeBzwW+DaDQPddDObiVNU/Av8VeAuDv5r/Cdiz/G7u8QYGs/fLk3wf+DTw6CGP/QMGL0pnZNH5/QsbkzwvyXVDvnbB/2bwornwsabTa6vqFgaB/B8DdzD4q343y/w/WFU/ZtAYngR8C/hz4PlVdcManvK3F32fVwGfAF7T7P9q4D8zOPvru8BNwAubrzsEOKt53tsYhOmnNdsWFvd9O8nn1lDLct7AIFP6bpL/dYD70oTFCxepDUneBdzQnMUjqcM8otBUJHlUkodlsP5hJ4O/rj0jRuqBta5wldbrwcDFDM7b3wP8l6q6tt2SJK2GoydJ0kiOniRJI83E6OnII4+sLVu2tF2GJPXKNddc862q+pm3kVlqJhrFli1buPrqq9suQ5J6JcnNq3lca6OnJMcm+WiSrzTvjvmy5v4jMrh2wNea/z6wrRolSe1mFHcBr6iq4xm8GdlLkhwPnAp8uKqOAz7c3JYktaS1RlFVt1bV55rPv8/gTd02MTi//u3Nw97O4K2kJUkt6cRZT0m2ANsYvMfNUVV1a7PpNka8G6gkafJabxRJ7g9cBLy8qr63eFsNFnksu9AjyclJrk5y9R133DGFSiVpPrV61lPzDpwXAedX1cXN3d9McnRV3ZrkaOD25b62qs5lcB1ktm/f7qpBSXPjkmv3cvZlN/KNfXey8fDD2L1jK7u2reVyJWvT5llPAd4KXF9Vr1+06VLgBc3nLwDeN+3aJKmrLrl2L6dd/CX27ruTAvbuu5PTLv4Sl1y7rsuVrEqbo6fHMbhu8okZXOT980mezOAtjn8jydeAJza3JUnA2ZfdyJ0/uXu/++78yd2cfdmNE3vO1kZPzXWTM2TzE6ZZiyT1xTf23bmm+8dhJlZmS9IsWi6L2Hj4YexdpilsPPywidXR+llPkqSfNSyL+A//ZgOHHXzQfo897OCD2L1j68RqsVFIUgcNyyI+esMdvO4Zv8ymww8jwKbDD+N1z/jliZ715OhJkjpoVBaxa9umiTaGpWwUktSyrmQRwzh6kqQWdSmLGMZGIUkt6lIWMYyjJ0lqUZeyiGFsFJI0JV3PIoZx9CRJU9CHLGIYG4UkTUEfsohhHD1J0hT0IYsYxkYhSWPW1yxiGEdPkjRGfc4ihrFRSNIY9TmLGMbRkySNUZ+ziGFsFJK0DsOuW93nLGIYR0+StEajrlu9e8fW3mYRw9goJGmNRl23ete2Tb3NIoZx9CRJa7TSdav7mkUMY6OQpBFmbU3Eejh6kqQhZnFNxHrYKCRpiFlcE7Eejp4kaYhZXBOxHh5RSNIQwzKHWc0ihvGIQpJYPrTevWMrp138pf3GT7OcRQzjEYWkuTcstAbm
KosYxiMKSXNv1AK6T5x64tw1hqU8opA091ZaQDfvPKKQNFdcQLd2HlFImhsuoFsfG4WkueECuvVx9CRpbriAbn1sFJJmklnE+Dh6kjRzzCLGy0YhaeaYRYyXoydJM8csYrxsFJJ6a7kcYte2TWYRY9bq6CnJ25LcnuTLi+47IskVSb7W/PeBbdYoqZuG5RCXXLuX3Tu2mkWMUdsZxV8CO5fcdyrw4ao6Dvhwc1uS9jPq/Zl2bdtkFjFGrY6equrjSbYsufsk4PHN528HPgb80dSKktQLK70/k1nE+HQxoziqqm5tPr8NOKrNYiS1zzUR7Wp79DRSVRVQy21LcnKSq5Ncfccdd0y5MknT4pqI9nWxUXwzydEAzX9vX+5BVXVuVW2vqu0bNmyYaoGSpsc1Ee3r4ujpUuAFwFnNf9/XbjmS2uSaiPa12iiSvJNBcH1kkj3Aqxk0iAuTvAi4Gfit9iqUNE1mEd3U9llPzx2y6QlTLURS6xayiIUx00IW8cxf3cRF1+zdb/xkFjFdXcwoJM0hs4ju6mJGIWkOmUV0l41C0tSZRfSLoydJU+W6iP6xUUiaKrOI/nH0JGmqzCL6x0YhaWLMImaDoydJE2EWMTtsFJImwixidjh6kjQRZhGzw0Yh6YB43erZ5+hJ0rp53er5YKOQtG5et3o+OHqStG5et3o+2CgkrYprIuaXoydJK3JNxHyzUUhakWsi5pujJ0krck3EfLNRSNqPWYSWcvQk6R5mEVqOjULSPcwitBxHT5LuYRah5dgopDllFqHVcvQkzSGzCK2FjUKaQ2YRWgtHT9IcMovQWtgopBlnFqED5ehJmmFmERoHG4U0w8wiNA6OnqQZZhahcbBRSDPA61Zrkhw9ST3ndas1aTYKqee8brUmzdGT1HNet1qTZqOQesQ1EWqDoyepJ1wTobbYKKSecE2E2uLoSeoJ10SoLZ1tFEl2Am8ADgLeUlVntVySNDVmEeqSTo6ekhwEvBF4EnA88Nwkx7dblTQdZhHqmk42CuAE4Kaq+vuq+jFwAXBSyzVJU2EWoa7p6uhpE3DLott7gEe3VIs0VWYR6pquNooVJTkZOBlg8+bNLVcjrY9ZhPqgq6OnvcCxi24f09x3j6o6t6q2V9X2DRs2TLU4aRzMItQXXW0UVwHHJXlokvsCzwEubbkmaazMItQXnRw9VdVdSU4BLmNweuzbquq6lsuSxsosQn3RyUYBUFUfBD7Ydh3SOJhFqM+6OnqSZoZZhPrORiFNmFmE+q6zoydpVphFqO9WbBRJXgqcV1XfnUI9Uq+ZRWgWrWb0dBRwVZILk+xMkkkXJfWRWYRm1YqNoqr+BDgOeCvwQuBrSV6b5GETrk3qFbMIzapVZRRVVUluA24D7gIeCLwnyRVV9cpJFij1hVmEZtVqMoqXAc8HvgW8BdhdVT9Jch/ga4CNQnNluRxi17ZNZhGaWas5ojgCeEZV3bz4zqr6aZKnTqYsqZsWcoiFEdNCDgGwe8fW/baBWYRmw2oyilcvbRKLtl0//pKk7hqWQ5x92Y3s2rbJLEIzyXUU0hqMyiEAswjNJBuFNIRrIqQB38JDWoZrIqR72SikZbgmQrqXoydpGa6JkO5lo9DcM4uQRnP0pLlmFiGtzEahuWYWIa3M0ZPmmlmEtDIbheaGWYS0Po6eNBfMIqT1s1FoLphFSOvn6ElzwSxCWj8bhWaOWYQ0Xo6eNFPMIqTxs1FopphFSOPn6EkzxSxCGj8bhXrJ61ZL0+PoSb0zLIe45Nq97N6x1SxCGjMbhXrH61ZL0+XoSb3jdaul6bJRqNNcEyG1z9GTOss1EVI32CjUWa6JkLrB0ZM6yzURUjfYKNQJZhFSdzl6UuvMIqRus1GodWYRUre1MnpK8mzgDOAXgROq6upF204DXgTcDfy3qrqsjRo1PWYRUre1lVF8GXgG8ObFdyY5HngO8EvARuDKJP+6qu7+2V2oj8wipP5pZfRUVddX1Y3LbDoJuKCqflRV/wDcBJww3eo0KWYRUj91LaPYBNyy6Pae5j7NALMIqZ8mNnpKciXw4GU2nV5V7xvD/k8GTgbYvHnzge5OU2AWIfXTxBpFVT1xHV+2Fzh20e1jmvuW2/+5wLkA27dvr3U8lybILEKaHV0bPV0KPCfJIUkeChwHfLblmrRGZhHSbGmlUST5zSR7gMcCH0hyGUBVXQdcCHwF+BvgJZ7x1D9mEdJsaeX02Kp6L/DeIdvOBM6cbkUaJ7MIabb4Xk9aN69bLc2HrmUU6gmvWy3NDxuF1sXrVkvzw9GT1sXrVkvzw0ahFbkmQppvjp40kmsiJNkoNJJrIiQ5etJIromQ5BGFRhqWOZhFSPPDIwrdY7nQeveOrZx28Zf2Gz+ZRUjzxSMKAcNDa8AsQppzHlEIGL2A7hOnnmhjkOaYRxQCVl5AJ2l+eUQxh1xAJ2ktPKKYMy6gk7RWNoo54wI6SWvl6GnOuIBO0lrZKGaYWYSkcXD0NKPMIiSNi41iRplFSBoXR08zyixC0rjYKHpuuRxi17ZNZhGSxsbRU48NyyEuuXYvu3dsNYuQNBY2ih4b9f5Mu7ZtMouQNBaOnnpspfdnMouQNA42ip5wTYSktjh66gHXREhqk42iB1wTIalNjp56wDURktpko+gYswhJXePoqUPMIiR1kY2iQ8wiJHWRo6cOMYuQ1EU2ipaYRUjqC0dPLTCLkNQnNooWmEVI6hNHTy0wi5DUJzaKCTOLkNR3rYyekpyd5IYkX0zy3iSHL9p2WpKbktyYZEcb9Y2LWYSkWdBWRnEF8PCqegTwVeA0gCTHA88BfgnYCfx5koOG7qXjzCIkzYJWRk9Vdfmim58GntV8fhJwQVX9CPiHJDcBJwCfmnKJY2EWIWkWdCGj+H3gXc3nmxg0jgV7mvs6zetWS5plE2sUSa4EHrzMptOr6n3NY04H7gLOX8f+TwZOBti8efMBVHpgFnKIhRHTQg4BsHvH1v22gVmEpP6ZWKOoqieO2p7khcBTgSdUVTV37wWOXfSwY5r7ltv/ucC5ANu3b6/lHjMNo65b/YlTT7znMUuPNiSpL1oZPSXZCbwS+PWq+uGiTZcC70jyemAjcBzw2RZKXDWvWy1p1rWVUZwDHAJckQTg01X14qq6LsmFwFcYjKReUlV3j9jPVLkmQtI8auusp18Yse1M4MwplrMqw7KIZ/7qJi66Zq85hKSZ5Xs9rZJrIiTNqy6cHtsLromQNK9sFMswi5Ckezl6WsL3Z5Kk/dkoljCLkKT9OXpawixCkvY3143CLEKSVja3oyezCElanbltFGYRkrQ6czt6MouQpNWZ2yOKYZmDWYQk7W9uG8XuHVvNIiRpFeZ29LQwWvJaEZI02tw2CvBaEZK0GnM7epIkrY6NQpI0ko1CkjSSjUKSNJKNQpI0Uqqq7RoOWJI7gJsPYBdHAt8aUzmT1qdaoV/1Wuvk9Kneear1IVW1YaUHzUSjOFBJrq6q7W3XsRp9qhX6Va+1Tk6f6rXWn+XoSZI0ko1CkjSSjWLg3LYLWIM+1Qr9qtdaJ6d
P9VrrEmYUkqSRPKKQJI1ko5AkjWSjaCT50yRfTPL5JJcn2dh2TcMkOTvJDU29701yeNs1DZPk2UmuS/LTJJ085TDJziQ3Jrkpyalt1zNKkrcluT3Jl9uuZSVJjk3y0SRfaX4HXtZ2TaMkOTTJZ5N8oan3NW3XtJIkByW5Nsn7J/k8Nop7nV1Vj6iqRwLvB17VdkEjXAE8vKoeAXwVOK3lekb5MvAM4ONtF7KcJAcBbwSeBBwPPDfJ8e1WNdJfAjvbLmKV7gJeUVXHA48BXtLxn+2PgBOr6leARwI7kzym5ZpW8jLg+kk/iY2iUVXfW3TzfkBnU/6quryq7mpufho4ps16Rqmq66vqxrbrGOEE4Kaq+vuq+jFwAXBSyzUNVVUfB77Tdh2rUVW3VtXnms+/z+AFrbMXgKmBHzQ3D24+Ovs6kOQY4CnAWyb9XDaKRZKcmeQW4Hl0+4hisd8HPtR2ET22Cbhl0e09dPjFrK+SbAG2AZ9pt5LRmlHO54HbgSuqqsv1/k/glcBPJ/1Ec9UoklyZ5MvLfJwEUFWnV9WxwPnAKV2utXnM6QwO789vr9LV1ar5leT+wEXAy5ccuXdOVd3djJ+PAU5I8vC2a1pOkqcCt1fVNdN4vrm6FGpVPXGVDz0f+CDw6gmWM9JKtSZ5IfBU4AnV8mKYNfxcu2gvcOyi28c092kMkhzMoEmcX1UXt13PalXVviQfZZAHdfHEgccBT0/yZOBQ4F8kOa+qfncSTzZXRxSjJDlu0c2TgBvaqmUlSXYyOOR8elX9sO16eu4q4LgkD01yX+A5wKUt1zQTkgR4K3B9Vb2+7XpWkmTDwhmESQ4DfoOOvg5U1WlVdUxVbWHwO/uRSTUJsFEsdlYzLvki8B8ZnE3QVecADwCuaE7nfVPbBQ2T5DeT7AEeC3wgyWVt17RYc1LAKcBlDMLWC6vqunarGi7JO4FPAVuT7EnyorZrGuFxwO8BJza/p59v/gLuqqOBjzavAVcxyCgmetppX/gWHpKkkTyikCSNZKOQJI1ko5AkjWSjkCSNZKOQJI1ko5AkjWSjkCSNZKOQJiDJo5rrhRya5H7N9Q06+b5B0kpccCdNSJI/Y/A+PIcBe6rqdS2XJK2LjUKakOa9o64C/hn4t1V1d8slSevi6EmanAcB92fwvlyHtlyLtG4eUUgTkuRSBlfMeyhwdFW1eo0Tab3m6noU0rQkeT7wk6p6R3Nd7k8mObGqPtJ2bdJaeUQhSRrJjEKSNJKNQpI0ko1CkjSSjUKSNJKNQpI0ko1CkjSSjUKSNNL/B1PmpRrejfBGAAAAAElFTkSuQmCC\n", 215 | "text/plain": [ 216 | "
" 217 | ] 218 | }, 219 | "metadata": { 220 | "needs_background": "light" 221 | }, 222 | "output_type": "display_data" 223 | } 224 | ], 225 | "source": [ 226 | "print(\"Final loss: {:.3f}\".format(loss(x, y, W, B)))\n", 227 | "print(\"W = {}, B = {}\".format(W.numpy(), B.numpy()))\n", 228 | "print(\"Compared with m = {:.1f}, c = {:.1f}\".format(m, c),\" of the original line\")\n", 229 | "xs = np.linspace(-3, 4, 50)\n", 230 | "ys = W.numpy()*xs + B.numpy()\n", 231 | "plt.scatter(xs,ys)\n", 232 | "plt.xlabel(\"x\")\n", 233 | "plt.ylabel(\"y\")\n", 234 | "plt.title(\"Figure 2: Line of Best Fit\")" 235 | ] 236 | }, 237 | { 238 | "cell_type": "code", 239 | "execution_count": null, 240 | "metadata": {}, 241 | "outputs": [], 242 | "source": [] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": null, 247 | "metadata": {}, 248 | "outputs": [], 249 | "source": [] 250 | }, 251 | { 252 | "cell_type": "code", 253 | "execution_count": null, 254 | "metadata": {}, 255 | "outputs": [], 256 | "source": [] 257 | } 258 | ], 259 | "metadata": { 260 | "kernelspec": { 261 | "display_name": "Python 3", 262 | "language": "python", 263 | "name": "python3" 264 | }, 265 | "language_info": { 266 | "codemirror_mode": { 267 | "name": "ipython", 268 | "version": 3 269 | }, 270 | "file_extension": ".py", 271 | "mimetype": "text/x-python", 272 | "name": "python", 273 | "nbconvert_exporter": "python", 274 | "pygments_lexer": "ipython3", 275 | "version": "3.6.7" 276 | } 277 | }, 278 | "nbformat": 4, 279 | "nbformat_minor": 2 280 | } 281 | -------------------------------------------------------------------------------- /Chapter04/model.weights.best.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter04/model.weights.best.hdf5 -------------------------------------------------------------------------------- /Chapter05/Chapter5_Autoencoder_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "from tensorflow.keras.layers import Input, Dense\n", 10 | "from tensorflow.keras.models import Model\n", 11 | "from tensorflow.keras.datasets import fashion_mnist\n", 12 | "from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping\n", 13 | "from tensorflow.keras import regularizers\n", 14 | "import numpy as np\n", 15 | "import matplotlib.pyplot as plt\n", 16 | "\n", 17 | "%matplotlib inline" 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": null, 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "(x_train, _), (x_test, _) = fashion_mnist.load_data() # we don't need the labels\n", 27 | "x_train = x_train.astype('float32') / 255. 
# normalise\n", 28 | "x_test = x_test.astype('float32') / 255.\n", 29 | "\n", 30 | "print(x_train.shape) # shape of input\n", 31 | "print(x_test.shape)\n", 32 | "\n", 33 | "x_train = x_train.reshape(( x_train.shape[0], np.prod(x_train.shape[1:]))) #flatten\n", 34 | "x_test = x_test.reshape((x_test.shape[0], np.prod(x_test.shape[1:])))\n", 35 | "\n", 36 | "print(x_train.shape)\n", 37 | "print(x_test.shape)" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": null, 43 | "metadata": {}, 44 | "outputs": [], 45 | "source": [ 46 | "image_dim = 784 # this is the size of our input image, 784\n", 47 | "encoding_dim = 32 # this is the length of our encoded items.Compression of factor=784/32=24.5" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "metadata": {}, 54 | "outputs": [], 55 | "source": [ 56 | "input_image = Input(shape=(image_dim, )) # this is our input placeholder\n", 57 | "\n", 58 | "encoded_image = Dense(encoding_dim, activation='relu',\n", 59 | " activity_regularizer=regularizers.l1(10e-5))(input_image)# \"encoded\" is the encoded representation of the input\n", 60 | "encoder = Model(input_image, encoded_image)\n", 61 | "\n", 62 | "decoded_image = Dense(image_dim, activation='sigmoid')(encoded_image)# \"decoded\" is the lossy reconstruction of the input\n", 63 | "\n", 64 | "autoencoder = Model(input_image, decoded_image) # this model maps an input to its reconstruction" 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": null, 70 | "metadata": {}, 71 | "outputs": [], 72 | "source": [] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": null, 77 | "metadata": {}, 78 | "outputs": [], 79 | "source": [ 80 | "\n", 81 | "encoded_input = Input(shape=(encoding_dim,))# create a placeholder for an encoded (32-dimensional) input\n", 82 | "\n", 83 | "decoder_layer = autoencoder.layers[-1]# retrieve the last layer of the autoencoder model\n", 84 | "\n", 85 | "decoder = Model(encoded_input, decoder_layer(encoded_input))# create the decoder model" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')\n", 95 | "checkpointer1 = ModelCheckpoint(filepath= 'model.weights.best.hdf5' , verbose =2, save_best_only = True)\n", 96 | "checkpointer2 = EarlyStopping(monitor='val_loss',\n", 97 | " min_delta=0.0005,\n", 98 | " patience=2,\n", 99 | " verbose=2, mode='auto')" 100 | ] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "execution_count": null, 105 | "metadata": {}, 106 | "outputs": [], 107 | "source": [ 108 | "autoencoder.fit(x_train, x_train,\n", 109 | " epochs=500,\n", 110 | " batch_size=256,callbacks=[checkpointer1], verbose=2,\n", 111 | " shuffle=True,\n", 112 | " validation_data=(x_test, x_test))" 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": null, 118 | "metadata": {}, 119 | "outputs": [], 120 | "source": [ 121 | "# encode and decode some items\n", 122 | "# note that we take them from the *test* set\n", 123 | "\n", 124 | "autoencoder.load_weights('model.weights.best.hdf5' )\n", 125 | "encoded_images = encoder.predict(x_test)\n", 126 | "decoded_images = decoder.predict(encoded_images)" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": null, 132 | "metadata": {}, 133 | "outputs": [], 134 | "source": [ 135 | "number_of_items = 12 # how many tems we will display\n", 136 | "plt.figure(figsize=(20, 4))\n", 
137 | "for i in range(number_of_items):\n", 138 | " # display items before compression\n", 139 | " graph = plt.subplot(2, number_of_items, i + 1)\n", 140 | " plt.imshow(x_test[i].reshape(28, 28))\n", 141 | " plt.gray()\n", 142 | " graph.get_xaxis().set_visible(False)\n", 143 | " graph.get_yaxis().set_visible(False)\n", 144 | "\n", 145 | " # display items after decompression\n", 146 | " graph = plt.subplot(2, number_of_items, i + 1 + number_of_items)\n", 147 | " plt.imshow(decoded_images[i].reshape(28, 28))\n", 148 | " plt.gray()\n", 149 | " graph.get_xaxis().set_visible(False)\n", 150 | " graph.get_yaxis().set_visible(False)\n", 151 | "plt.show()" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": null, 157 | "metadata": {}, 158 | "outputs": [], 159 | "source": [] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "execution_count": null, 164 | "metadata": {}, 165 | "outputs": [], 166 | "source": [] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": null, 171 | "metadata": {}, 172 | "outputs": [], 173 | "source": [] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "execution_count": null, 178 | "metadata": {}, 179 | "outputs": [], 180 | "source": [] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": null, 185 | "metadata": {}, 186 | "outputs": [], 187 | "source": [] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": null, 192 | "metadata": {}, 193 | "outputs": [], 194 | "source": [] 195 | }, 196 | { 197 | "cell_type": "code", 198 | "execution_count": null, 199 | "metadata": {}, 200 | "outputs": [], 201 | "source": [] 202 | } 203 | ], 204 | "metadata": { 205 | "kernelspec": { 206 | "display_name": "Python 3", 207 | "language": "python", 208 | "name": "python3" 209 | }, 210 | "language_info": { 211 | "codemirror_mode": { 212 | "name": "ipython", 213 | "version": 3 214 | }, 215 | "file_extension": ".py", 216 | "mimetype": "text/x-python", 217 | "name": "python", 218 | "nbconvert_exporter": "python", 219 | "pygments_lexer": "ipython3", 220 | "version": "3.6.7" 221 | } 222 | }, 223 | "nbformat": 4, 224 | "nbformat_minor": 2 225 | } 226 | -------------------------------------------------------------------------------- /Chapter05/Chapter5_Denoiser_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D\n", 10 | "from tensorflow.keras.models import Model\n", 11 | "from tensorflow.keras.datasets import fashion_mnist\n", 12 | "from tensorflow.keras.callbacks import TensorBoard\n", 13 | "import numpy as np\n", 14 | "import matplotlib.pyplot as plt\n", 15 | "%matplotlib inline" 16 | ] 17 | }, 18 | { 19 | "cell_type": "code", 20 | "execution_count": null, 21 | "metadata": {}, 22 | "outputs": [], 23 | "source": [ 24 | "(train_x, _), (test_x, _) = fashion_mnist.load_data()\n", 25 | "\n", 26 | "train_x = train_x.astype('float32') / 255.\n", 27 | "test_x = test_x.astype('float32') / 255.\n", 28 | "\n", 29 | "print(train_x.shape)\n", 30 | "print(test_x.shape)" 31 | ] 32 | }, 33 | { 34 | "cell_type": "code", 35 | "execution_count": null, 36 | "metadata": {}, 37 | "outputs": [], 38 | "source": [ 39 | "train_x = np.reshape(train_x, (len(train_x), 28, 28, 1))\n", 40 | "test_x = np.reshape(test_x, (len(test_x), 28, 28, 1))\n", 41 | "\n", 42 | "print(train_x.shape)\n", 43 
| "print(test_x.shape)" 44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "execution_count": null, 49 | "metadata": {}, 50 | "outputs": [], 51 | "source": [ 52 | "noise = 0.5\n", 53 | "train_x_noisy = train_x + noise * np.random.normal(loc=0.0, scale=1.0, size=train_x.shape)\n", 54 | "test_x_noisy = test_x + noise * np.random.normal(loc=0.0, scale=1.0, size=test_x.shape)\n", 55 | "\n", 56 | "train_x_noisy = np.clip(train_x_noisy, 0., 1.)\n", 57 | "test_x_noisy = np.clip(test_x_noisy, 0., 1.)" 58 | ] 59 | }, 60 | { 61 | "cell_type": "code", 62 | "execution_count": null, 63 | "metadata": {}, 64 | "outputs": [], 65 | "source": [ 66 | "number_of_items = 10\n", 67 | "plt.figure(figsize=(20, 2))\n", 68 | "\n", 69 | "for i in range(number_of_items):\n", 70 | " display = plt.subplot(1, number_of_items,i+1)\n", 71 | " plt.imshow(test_x[i].reshape(28, 28))\n", 72 | " plt.gray()\n", 73 | " display.get_xaxis().set_visible(False)\n", 74 | " display.get_yaxis().set_visible(False)" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": null, 80 | "metadata": {}, 81 | "outputs": [], 82 | "source": [ 83 | "plt.figure(figsize=(20, 2))\n", 84 | "for i in range(number_of_items):\n", 85 | " display = plt.subplot(1, number_of_items,i+1)\n", 86 | " plt.imshow(test_x_noisy[i].reshape(28, 28))\n", 87 | " plt.gray()\n", 88 | " display.get_xaxis().set_visible(False)\n", 89 | " display.get_yaxis().set_visible(False)\n", 90 | "plt.show()" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": null, 96 | "metadata": {}, 97 | "outputs": [], 98 | "source": [ 99 | "input_image = Input(shape=(28, 28, 1))\n", 100 | "print(input_image.shape)\n", 101 | "im = Conv2D(32, (3, 3), activation='relu', padding='same')(input_image)\n", 102 | "print(im.shape)\n", 103 | "im = MaxPooling2D((2, 2), padding='same')(im)\n", 104 | "print(im.shape)\n", 105 | "im = Conv2D(32, (3, 3), activation='relu', padding='same')(im)\n", 106 | "print(im.shape)\n", 107 | "encoded = MaxPooling2D((2, 2), padding='same')(im)\n", 108 | "print(encoded.shape)\n", 109 | "# at this point the representation is (7, 7, 32)\n", 110 | "\n", 111 | "im = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)\n", 112 | "print(im.shape)\n", 113 | "im = UpSampling2D((2, 2))(im)\n", 114 | "print(im.shape)\n", 115 | "im = Conv2D(32, (3, 3), activation='relu', padding='same')(im)\n", 116 | "print(im.shape)\n", 117 | "im = UpSampling2D((2, 2))(im)\n", 118 | "print(im.shape)\n", 119 | "decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(im)\n", 120 | "print(decoded.shape)\n", 121 | "\n", 122 | "autoencoder = Model(inputs=input_image, outputs=decoded)\n", 123 | "autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')" 124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": null, 129 | "metadata": {}, 130 | "outputs": [], 131 | "source": [ 132 | "autoencoder.summary()" 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": null, 138 | "metadata": {}, 139 | "outputs": [], 140 | "source": [ 141 | "tb = [TensorBoard(log_dir='./tmp/tb', write_graph=True)]\n", 142 | "epochs = 150 # for testing, set to 150 for actual training, best speed on GPU\n", 143 | "batch_size = 128\n", 144 | "autoencoder.fit(train_x_noisy, train_x,\n", 145 | " epochs=epochs,\n", 146 | " batch_size=batch_size,\n", 147 | " shuffle=True,\n", 148 | " validation_data=(test_x_noisy, test_x),\n", 149 | " callbacks=tb)" 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 
| "metadata": {}, 156 | "outputs": [], 157 | "source": [] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "execution_count": null, 162 | "metadata": {}, 163 | "outputs": [], 164 | "source": [ 165 | "decoded_images = autoencoder.predict(test_x_noisy)\n", 166 | "number_of_items = 10\n", 167 | "plt.figure(figsize=(20, 2))\n", 168 | "for item in range(number_of_items):\n", 169 | " display = plt.subplot(1, number_of_items,item+1)\n", 170 | " im = decoded_images[item].reshape(28, 28)\n", 171 | " plt.imshow(im)\n", 172 | " display.get_xaxis().set_visible(False)\n", 173 | " display.get_yaxis().set_visible(False)\n", 174 | "plt.show()" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "metadata": {}, 181 | "outputs": [], 182 | "source": [] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "execution_count": null, 187 | "metadata": {}, 188 | "outputs": [], 189 | "source": [] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "metadata": {}, 195 | "outputs": [], 196 | "source": [] 197 | } 198 | ], 199 | "metadata": { 200 | "kernelspec": { 201 | "display_name": "Python 3", 202 | "language": "python", 203 | "name": "python3" 204 | }, 205 | "language_info": { 206 | "codemirror_mode": { 207 | "name": "ipython", 208 | "version": 3 209 | }, 210 | "file_extension": ".py", 211 | "mimetype": "text/x-python", 212 | "name": "python", 213 | "nbconvert_exporter": "python", 214 | "pygments_lexer": "ipython3", 215 | "version": "3.6.7" 216 | } 217 | }, 218 | "nbformat": 4, 219 | "nbformat_minor": 2 220 | } 221 | -------------------------------------------------------------------------------- /Chapter06/CHapter6_QDraw_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | {"cells": [{"cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": "import tensorflow as tf\nfrom tensorflow.keras.datasets import mnist\nimport numpy as np\nimport h5py\nfrom sklearn.model_selection import train_test_split\nfrom os import walk"}, {"cell_type": "markdown", "metadata": {}, "source": ["## Acquire The Data"]}, {"cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": "batch_size = 128\nimg_rows, img_cols = 28, 28 # image dims"}, {"cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": "#load npy arrays \n"}, {"cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["['broom.npy', 'aircraft_carrier.npy', 'alarm_clock.npy', 'ant.npy', 'cell_phone.npy', 'baseball.npy', 'asparagus.npy', 'dolphin.npy', 'crocodile.npy', 'bee.npy']\n"]}], "source": "data_path = \"data_files/\" # folder for image files\nfor (dirpath, dirnames, filenames) in walk(data_path):\n pass # file names accumulate in list 'filenames'\nprint(filenames)"}, {"cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["100000\n"]}], "source": "num_images = 1000000 ### was 100000, reduce this number if memory issues.\nnum_files = len(filenames) # *** we have 10 files ***\nimages_per_category = num_images//num_files\nseed = np.random.randint(1, 10e7)\ni=0\nprint(images_per_category)"}, {"cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": "for file in filenames:\n file_path = data_path + file\n x = np.load(file_path)\n x = x.astype('float32') ##normalise images\n x /= 255.0\n y = [i] * len(x) # create numeric label for 
this image\n\n x = x[:images_per_category] # get our sample of images\n y = y[:images_per_category] # get our sample of labels\n\n if i == 0:\n x_all = x\n y_all = y\n else:\n x_all = np.concatenate((x,x_all), axis=0)\n y_all = np.concatenate((y,y_all), axis=0)\n i += 1"}, {"cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": "#split data arrays into train and test segments\nx_train, x_test, y_train, y_test = train_test_split(x_all, y_all, test_size=0.2, random_state=42)"}, {"cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": "x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\nx_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)\ninput_shape = (img_rows, img_cols, 1)"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ""}, {"cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": "y_train = tf.keras.utils.to_categorical(y_train, num_files)\ny_test = tf.keras.utils.to_categorical(y_test, num_files)"}, {"cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["x_train shape: (800000, 28, 28, 1)\n", "800000 train samples\n", "200000 test samples\n"]}], "source": "print('x_train shape:', x_train.shape)\nprint(x_train.shape[0], 'train samples')\nprint(x_test.shape[0], 'test samples')\n\nx_train, x_valid, y_train, y_valid = train_test_split(x_train, y_train, test_size=0.1, random_state=42)"}, {"cell_type": "markdown", "metadata": {}, "source": ["## Create the model"]}, {"cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Compiling...........\n"]}], "source": "model = tf.keras.Sequential()\n\nmodel.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))\nmodel.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(tf.keras.layers.Dropout(0.25))\n\nmodel.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))\nmodel.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(tf.keras.layers.Dropout(0.25))\n\nmodel.add(tf.keras.layers.Flatten())\n\nmodel.add(tf.keras.layers.Dense(128, activation='relu'))\nmodel.add(tf.keras.layers.Dropout(0.5))\n\nmodel.add(tf.keras.layers.Dense(num_files, activation='softmax'))\nprint(\"Compiling...........\")"}, {"cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": "model.compile(loss=tf.keras.losses.categorical_crossentropy,\n optimizer=tf.keras.optimizers.Adadelta(),\n metrics=['accuracy'])"}, {"cell_type": "markdown", "metadata": {}, "source": ["## Train the model"]}, {"cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Train on 720000 samples, validate on 80000 samples\n", "720000/720000 [==============================] - 89s 123us/sample - loss: 2.2132 - accuracy: 0.1930 - val_loss: 2.0671 - val_accuracy: 0.3997\n"]}, {"data": {"text/plain": [""]}, "execution_count": 13, "metadata": {}, "output_type": "execute_result"}], "source": "epochs=1 # for testing, for training use 25\ncallbacks=[tf.keras.callbacks.TensorBoard(log_dir = \"./tb_log_dir\", histogram_freq = 0)]\nmodel.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n callbacks=callbacks,\n verbose=1,\n validation_data=(x_valid, y_valid))"}, {"cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [{"name": "stdout", 
"output_type": "stream", "text": ["200000/200000 [==============================] - 10s 50us/sample - loss: 2.0684 - accuracy: 0.3971\n", "Test loss: 2.068418756465912\n", "Test accuracy: 0.39709\n"]}], "source": "score = model.evaluate(x_test, y_test, verbose=1)\n\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])"}, {"cell_type": "markdown", "metadata": {}, "source": ["## Test The Model "]}, {"cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["['broom', 'aircraft_carrier', 'alarm_clock', 'ant', 'cell_phone', 'baseball', 'asparagus', 'dolphin', 'crocodile', 'bee']\n", "\n", "For each pair in the following, the first label is predicted, second is actual\n", "\n", "-------------------------\n", "cell_phone\n", "alarm_clock\n", "-------------------------\n", "-------------------------\n", "baseball\n", "baseball\n", "-------------------------\n", "-------------------------\n", "asparagus\n", "broom\n", "-------------------------\n", "-------------------------\n", "bee\n", "cell_phone\n", "-------------------------\n", "-------------------------\n", "bee\n", "bee\n", "-------------------------\n", "-------------------------\n", "alarm_clock\n", "cell_phone\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "cell_phone\n", "-------------------------\n", "-------------------------\n", "asparagus\n", "broom\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "baseball\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "aircraft_carrier\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "cell_phone\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "crocodile\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "aircraft_carrier\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "ant\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "bee\n", "-------------------------\n", "-------------------------\n", "baseball\n", "baseball\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "baseball\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "ant\n", "-------------------------\n", "-------------------------\n", "alarm_clock\n", "dolphin\n", "-------------------------\n", "-------------------------\n", "bee\n", "dolphin\n", "-------------------------\n"]}], "source": "#_test\n\nimport os\nlabels = [os.path.splitext(file)[0] for file in filenames]\nprint(labels)\nprint(\"\\nFor each pair in the following, the first label is predicted, second is actual\\n\")\nfor i in range(20):\n t = np.random.randint(len(x_test) )\n x1= x_test[t]\n x1 = x1.reshape(1,28,28,1)\n p = model.predict(x1)\n print(\"-------------------------\")\n print(labels[np.argmax(p)])\n print(labels[np.argmax(y_test[t])])\n print(\"-------------------------\")\n\n\n"}, {"cell_type": "markdown", "metadata": {}, "source": ["## Save, Reload and Retest the Model"]}, {"cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": "model.save(\"./QDrawModel.h5\")"}, {"cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": "del model"}, {"cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": "from tensorflow.keras.models import load_model\n"}, 
{"cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": "import numpy as np"}, {"cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Model: \"sequential\"\n", "_________________________________________________________________\n", "Layer (type) Output Shape Param # \n", "=================================================================\n", "conv2d (Conv2D) (None, 26, 26, 32) 320 \n", "_________________________________________________________________\n", "max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 \n", "_________________________________________________________________\n", "dropout (Dropout) (None, 13, 13, 32) 0 \n", "_________________________________________________________________\n", "conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 \n", "_________________________________________________________________\n", "max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 \n", "_________________________________________________________________\n", "dropout_1 (Dropout) (None, 5, 5, 64) 0 \n", "_________________________________________________________________\n", "flatten (Flatten) (None, 1600) 0 \n", "_________________________________________________________________\n", "dense (Dense) (None, 128) 204928 \n", "_________________________________________________________________\n", "dropout_2 (Dropout) (None, 128) 0 \n", "_________________________________________________________________\n", "dense_1 (Dense) (None, 10) 1290 \n", "=================================================================\n", "Total params: 225,034\n", "Trainable params: 225,034\n", "Non-trainable params: 0\n", "_________________________________________________________________\n"]}], "source": "model = load_model('./QDrawModel.h5')\nmodel.summary()"}, {"cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["For each pair, first is predicted, second is actual\n", "-------------------------\n", "broom\n", "broom\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "alarm_clock\n", "-------------------------\n", "-------------------------\n", "crocodile\n", "dolphin\n", "-------------------------\n", "-------------------------\n", "alarm_clock\n", "alarm_clock\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "aircraft_carrier\n", "-------------------------\n", "-------------------------\n", "bee\n", "crocodile\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "alarm_clock\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "cell_phone\n", "-------------------------\n", "-------------------------\n", "bee\n", "crocodile\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "asparagus\n", "-------------------------\n", "-------------------------\n", "broom\n", "broom\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "alarm_clock\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "crocodile\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "aircraft_carrier\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "dolphin\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "cell_phone\n", "-------------------------\n", "-------------------------\n", "broom\n", 
"broom\n", "-------------------------\n", "-------------------------\n", "bee\n", "ant\n", "-------------------------\n", "-------------------------\n", "aircraft_carrier\n", "dolphin\n", "-------------------------\n", "-------------------------\n", "cell_phone\n", "cell_phone\n", "-------------------------\n"]}], "source": "print(\"For each pair, first is predicted, second is actual\")\nfor i in range(20):\n t = np.random.randint(len(x_test))\n x1= x_test[t]\n x1 = x1.reshape(1,28,28,1)\n p = model.predict(x1)\n print(\"-------------------------\")\n print(labels[np.argmax(p)])\n print(labels[np.argmax(y_test[t])])\n print(\"-------------------------\")"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ""}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ""}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7"}}, "nbformat": 4, "nbformat_minor": 2} -------------------------------------------------------------------------------- /Chapter06/Chapter6_CIFAR10_TF2_alpha_V2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import tensorflow as tf\n", 10 | "import numpy as np\n", 11 | "from tensorflow.keras.datasets import cifar10\n", 12 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 13 | "from tensorflow.keras.models import Sequential\n", 14 | "from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten\n", 15 | "from tensorflow.keras.layers import Conv2D, MaxPooling2D,BatchNormalization\n", 16 | "from tensorflow.keras import regularizers\n", 17 | "from tensorflow.keras.models import load_model\n", 18 | "import os\n", 19 | "from matplotlib import pyplot as plt\n", 20 | "from PIL import Image" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": null, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "#some required values\n", 30 | "batch_size = 32\n", 31 | "number_of_classes = 10\n", 32 | "epochs = 100 # for testing; use epochs = 100 for training ~30 secs/epoch on CPU\n", 33 | "weight_decay = 1e-4\n", 34 | "save_dir = os.path.join(os.getcwd(), 'saved_models')\n", 35 | "model_name = 'keras_cifar10_trained_model.h5'\n", 36 | "number_of_images = 5\n", 37 | "learning_rate = 0.0001\n", 38 | "decay = 1e-6" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": null, 44 | "metadata": {}, 45 | "outputs": [], 46 | "source": [ 47 | "labels = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "metadata": {}, 54 | "outputs": [], 55 | "source": [ 56 | "# load the data and inspect its shape\n", 57 | "(x_train, y_train), (x_test, y_test) = cifar10.load_data()\n", 58 | "print('x_train shape:', x_train.shape)\n", 59 | "print(x_train.shape[0], 'train samples')\n", 60 | "print(x_test.shape[0], 'test samples')\n" 61 | ] 62 | }, 63 | { 64 | "cell_type": "code", 65 | "execution_count": null, 66 | "metadata": {}, 67 | "outputs": [], 68 | "source": [ 69 | "def show_images(images):\n", 70 | " 
plt.figure(1)\n", 71 | " image_index = 0\n", 72 | " for i in range(0,number_of_images):\n", 73 | " for j in range(0,number_of_images):\n", 74 | " plt.subplot2grid((number_of_images, number_of_images),(i,j))\n", 75 | " plt.imshow(Image.fromarray(images[image_index]))\n", 76 | " image_index +=1\n", 77 | " plt.gca().axes.get_yaxis().set_visible(False)\n", 78 | " plt.gca().axes.get_xaxis().set_visible(False) \n", 79 | " plt.show()\n", 80 | " \n", 81 | "show_images(x_test[:number_of_images*number_of_images])" 82 | ] 83 | }, 84 | { 85 | "cell_type": "code", 86 | "execution_count": null, 87 | "metadata": {}, 88 | "outputs": [], 89 | "source": [ 90 | "# Normalise the data and convert\n", 91 | "x_train = x_train.astype('float32')/255\n", 92 | "x_test = x_test.astype('float32')/255\n" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "# Convert labels to one-hot vectors\n", 102 | "y_train = tf.keras.utils.to_categorical(y_train, number_of_classes) # or use tf.one_hot()\n", 103 | "y_test = tf.keras.utils.to_categorical(y_test, number_of_classes)\n" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": null, 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [ 112 | "model = Sequential()\n", 113 | "model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=x_train.shape[1:]))\n", 114 | "model.add(Activation('elu'))\n", 115 | "model.add(BatchNormalization())\n", 116 | "model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))\n", 117 | "model.add(Activation('elu'))\n", 118 | "model.add(BatchNormalization())\n", 119 | "model.add(MaxPooling2D(pool_size=(2,2)))\n", 120 | "model.add(Dropout(0.2))\n", 121 | " \n", 122 | "model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))\n", 123 | "model.add(Activation('elu'))\n", 124 | "model.add(BatchNormalization())\n", 125 | "model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))\n", 126 | "model.add(Activation('elu'))\n", 127 | "model.add(BatchNormalization())\n", 128 | "model.add(MaxPooling2D(pool_size=(2,2)))\n", 129 | "model.add(Dropout(0.3))\n", 130 | " \n", 131 | "model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))\n", 132 | "model.add(Activation('elu'))\n", 133 | "model.add(BatchNormalization())\n", 134 | "model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))\n", 135 | "model.add(Activation('elu'))\n", 136 | "model.add(BatchNormalization())\n", 137 | "model.add(MaxPooling2D(pool_size=(2,2)))\n", 138 | "model.add(Dropout(0.4))\n", 139 | " \n", 140 | "model.add(Flatten())\n", 141 | "model.add(Dense(number_of_classes, activation='softmax'))\n" 142 | ] 143 | }, 144 | { 145 | "cell_type": "code", 146 | "execution_count": null, 147 | "metadata": {}, 148 | "outputs": [], 149 | "source": [ 150 | "# initialise the optimiser\n", 151 | "opt = tf.keras.optimizers.RMSprop(lr=learning_rate, decay=decay)\n", 152 | "\n", 153 | "# compile the model\n", 154 | "model.compile(loss='categorical_crossentropy', optimizer=opt,metrics=['accuracy'])\n" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": null, 160 | "metadata": {}, 161 | "outputs": [], 162 | "source": [ 163 | "print('Using data augmentation in real-time.')\n", 164 | " # Preprocessing and realtime data augmentation:\n", 165 | 
"datagen = ImageDataGenerator(\n", 166 | " rotation_range=10, # randomly rotate images in the range 0 to 10 degrees\n", 167 | " \n", 168 | " width_shift_range=0.1,# randomly shift images horizontally (fraction of total width)\n", 169 | " \n", 170 | " height_shift_range=0.1,# randomly shift images vertically (fraction of total height)\n", 171 | " \n", 172 | " horizontal_flip=True, # randomly flip images\n", 173 | " \n", 174 | " validation_split=0.1)" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "metadata": {}, 181 | "outputs": [], 182 | "source": [ 183 | "# datagen.fit(x_train) \n", 184 | "# (this is only needed if any of the feature-wise normalizations i.e. \n", 185 | "# std, mean, and principal components ZCA whitening are set to True.)\n", 186 | "\n", 187 | "# set things up to halt training if the accuracy has stopped increasing\n", 188 | "# could also monitor = 'val' or monitor = \n", 189 | "callback = tf.keras.callbacks.EarlyStopping(monitor='loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)\n", 190 | "# Fit the model on the batches generated by datagen.flow().\n", 191 | "model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size), epochs=epochs, callbacks=[callback])\n", 192 | "# Save model and weights" 193 | ] 194 | }, 195 | { 196 | "cell_type": "code", 197 | "execution_count": null, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [ 201 | "if not os.path.isdir(save_dir):\n", 202 | " os.makedirs(save_dir)\n", 203 | "\n", 204 | "model_path = os.path.join(save_dir, model_name)\n", 205 | "model.save(model_path)\n", 206 | "print('Model saved at: %s ' % model_path)" 207 | ] 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": null, 212 | "metadata": {}, 213 | "outputs": [], 214 | "source": [ 215 | "model1 = tf.keras.models.load_model(model_path)\n", 216 | "model1.summary()" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": null, 222 | "metadata": {}, 223 | "outputs": [], 224 | "source": [ 225 | "# Evaluate our trained model.\n", 226 | "scores = model.evaluate(x_test, y_test, verbose=1)\n", 227 | "print('Test loss:', scores[0])\n", 228 | "print('Test accuracy:', scores[1])" 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": null, 234 | "metadata": {}, 235 | "outputs": [], 236 | "source": [ 237 | "\n", 238 | "#reload the data since it has been mangled\n", 239 | "(x_train, y_train), (x_test, y_test) = cifar10.load_data()\n", 240 | "show_images(x_test[:number_of_images*number_of_images])\n", 241 | "x_test = x_test.astype('float32')/255" 242 | ] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": null, 247 | "metadata": {}, 248 | "outputs": [], 249 | "source": [ 250 | "indices = tf.argmax(input=model1.predict(x_test[:number_of_images*number_of_images]),axis=1)\n", 251 | "i = 0\n", 252 | "print('Learned True')\n", 253 | "print('=====================')\n", 254 | "for index in indices:\n", 255 | " print(labels[index], '\\t', labels[y_test[i][0]])\n", 256 | " i+=1" 257 | ] 258 | }, 259 | { 260 | "cell_type": "code", 261 | "execution_count": null, 262 | "metadata": {}, 263 | "outputs": [], 264 | "source": [] 265 | }, 266 | { 267 | "cell_type": "code", 268 | "execution_count": null, 269 | "metadata": {}, 270 | "outputs": [], 271 | "source": [] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": null, 276 | "metadata": {}, 277 | "outputs": [], 278 | "source": [] 279 | }, 280 | { 281 | 
"cell_type": "code", 282 | "execution_count": null, 283 | "metadata": {}, 284 | "outputs": [], 285 | "source": [] 286 | } 287 | ], 288 | "metadata": { 289 | "kernelspec": { 290 | "display_name": "Python 3", 291 | "language": "python", 292 | "name": "python3" 293 | }, 294 | "language_info": { 295 | "codemirror_mode": { 296 | "name": "ipython", 297 | "version": 3 298 | }, 299 | "file_extension": ".py", 300 | "mimetype": "text/x-python", 301 | "name": "python", 302 | "nbconvert_exporter": "python", 303 | "pygments_lexer": "ipython3", 304 | "version": "3.6.7" 305 | } 306 | }, 307 | "nbformat": 4, 308 | "nbformat_minor": 2 309 | } 310 | -------------------------------------------------------------------------------- /Chapter07/Chapter7_NeuralStyleTransfer_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": null, 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "import numpy as np\n", 24 | "from PIL import Image\n", 25 | "import time\n", 26 | "import functools\n", 27 | "\n", 28 | "import matplotlib.pyplot as plt\n", 29 | "import matplotlib as mpl\n", 30 | "# set things up for images display\n", 31 | "mpl.rcParams['figure.figsize'] = (10,10)\n", 32 | "mpl.rcParams['axes.grid'] = False" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": null, 38 | "metadata": {}, 39 | "outputs": [], 40 | "source": [ 41 | "import tensorflow as tf\n", 42 | "\n", 43 | "\n", 44 | "from tensorflow.keras.preprocessing import image as kp_image\n", 45 | "from tensorflow.keras import models\n", 46 | "from tensorflow.keras import losses\n", 47 | "from tensorflow.keras import layers\n", 48 | "from tensorflow.keras import backend as K\n", 49 | "from tensorflow.keras import optimizers" 50 | ] 51 | }, 52 | { 53 | "cell_type": "code", 54 | "execution_count": null, 55 | "metadata": {}, 56 | "outputs": [], 57 | "source": [] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": null, 62 | "metadata": { 63 | "scrolled": true 64 | }, 65 | "outputs": [], 66 | "source": [ 67 | "print(\"TensorFlow version: {}\".format(tf.__version__))\n", 68 | "print(\"Eager execution is: {}\".format(tf.executing_eagerly()))\n", 69 | "ran = tf.Variable(42)\n", 70 | "print(\"Is there a GPU available?: \"),\n", 71 | "print(tf.test.is_gpu_available())\n", 72 | "\n", 73 | "print(\"Is the Tensor on GPU #0?: \"),\n", 74 | "print(ran.device.endswith('GPU:0'))" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": null, 80 | "metadata": {}, 81 | "outputs": [], 82 | "source": [ 83 | "# These are the images we will work with inititially\n", 84 | "content_path = './tmp/nst/elephant.jpg'#Andrew Shiva / Wikipedia / CC BY-SA 4.0\n", 85 | "style_path = './tmp/nst/zebra.jpg' # zebra:Yathin S Krishnappa, https://creativecommons.org/licenses/by-sa/4.0/deed.en\n", 86 | "#Also available\n", 87 | "#content_path = './tmp/nst/skyscrapers.jpg'#Andrew Shiva / Wikipedia / CC BY-SA 4.0\n", 88 | "#style_path = './tmp/nst/sunset.jpg'\n", 89 | "\n" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": null, 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [ 98 | "def load_image(path_to_image):\n", 99 | " max_dimension = 
512\n", 100 | " image = Image.open(path_to_image)\n", 101 | " longest_side = max(image.size)\n", 102 | " scale = max_dimension/longest_side\n", 103 | " image = image.resize((round(image.size[0]*scale), round(image.size[1]*scale)), Image.ANTIALIAS)\n", 104 | "\n", 105 | " image = kp_image.img_to_array(image) # keras preprocessing\n", 106 | "\n", 107 | " # Broadcast the image array so that it has a batch dimension on axis 0\n", 108 | " image = np.expand_dims(image, axis=0)\n", 109 | " return image" 110 | ] 111 | }, 112 | { 113 | "cell_type": "code", 114 | "execution_count": null, 115 | "metadata": {}, 116 | "outputs": [], 117 | "source": [ 118 | "def show_image(image, title=None):\n", 119 | " # Remove the batch dimension\n", 120 | " image1 = np.squeeze(image, axis=0)\n", 121 | " # Normalize for display\n", 122 | " image1 = image1.astype('uint8')\n", 123 | " plt.imshow(image1)\n", 124 | " if title is not None:\n", 125 | " plt.title(title)\n", 126 | " plt.imshow(image1)" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": null, 132 | "metadata": { 133 | "scrolled": true 134 | }, 135 | "outputs": [], 136 | "source": [ 137 | "channel_means = [103.939, 116.779, 123.68]\n", 138 | "\n", 139 | "plt.figure(figsize=(10,10))\n", 140 | "\n", 141 | "content_image = load_image(content_path).astype('uint8')\n", 142 | "style_image = load_image(style_path).astype('uint8')\n", 143 | "\n", 144 | "plt.subplot(1, 2, 1)\n", 145 | "show_image(content_image, 'Content Image')\n", 146 | "\n", 147 | "plt.subplot(1, 2, 2)\n", 148 | "show_image(style_image, 'Style Image')\n", 149 | "plt.show()" 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [ 158 | "def load_and_process_image(path_to_image):\n", 159 | " image = load_image(path_to_image)\n", 160 | " image = tf.keras.applications.vgg19.preprocess_input(image)\n", 161 | " return image" 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": null, 167 | "metadata": {}, 168 | "outputs": [], 169 | "source": [ 170 | "def deprocess_image(processed_image):\n", 171 | " im = processed_image.copy()\n", 172 | " if len(im.shape) == 4:\n", 173 | " im = np.squeeze(im, 0)\n", 174 | " assert len(im.shape) == 3, (\"Input to deprocess image must be an image of \"\n", 175 | " \"dimension [1, height, width, channel] or [height, width, channel]\")\n", 176 | " if len(im.shape) != 3:\n", 177 | " raise ValueError(\"Invalid input to deprocessing image\")\n", 178 | "\n", 179 | " # the inverse of the preprocessiing step\n", 180 | " im[:, :, 0] += channel_means[0] # these are the means subracted by the preprocessing step\n", 181 | " im[:, :, 1] += channel_means[1]\n", 182 | " im[:, :, 2] += channel_means[2]\n", 183 | " im= im[:, :, ::-1] # channel last\n", 184 | "\n", 185 | " im = np.clip(im, 0, 255).astype('uint8')\n", 186 | " return im" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": null, 192 | "metadata": {}, 193 | "outputs": [], 194 | "source": [ 195 | "# The feature maps are obtained from this content layer\n", 196 | "content_layers = ['block5_conv2']\n", 197 | "\n", 198 | "# Style layers we need\n", 199 | "style_layers = ['block1_conv1',\n", 200 | " 'block2_conv1',\n", 201 | " 'block3_conv1',\n", 202 | " 'block4_conv1',\n", 203 | " 'block5_conv1'\n", 204 | " ]\n", 205 | "\n", 206 | "number_of_content_layers = len(content_layers)\n", 207 | "number_of_style_layers = len(style_layers)" 208 | ] 209 | }, 210 | { 211 | "cell_type": 
"code", 212 | "execution_count": null, 213 | "metadata": {}, 214 | "outputs": [], 215 | "source": [ 216 | "def get_model():\n", 217 | " # Load VGG model, pretrained on imagenet data\n", 218 | " vgg_model = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')\n", 219 | " vgg_model.trainable = False\n", 220 | "\n", 221 | " # Get output layers corresponding to style and content layers\n", 222 | " style_outputs = [vgg_model.get_layer(name).output for name in style_layers]\n", 223 | " content_outputs = [vgg_model.get_layer(name).output for name in content_layers]\n", 224 | " \n", 225 | " model_outputs = style_outputs + content_outputs\n", 226 | " # Build model\n", 227 | " return models.Model(vgg_model.input, model_outputs)" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": null, 233 | "metadata": {}, 234 | "outputs": [], 235 | "source": [ 236 | "def rms_loss(image1,image2):\n", 237 | " loss = tf.reduce_mean(input_tensor=tf.square(image1 - image2))\n", 238 | " return loss" 239 | ] 240 | }, 241 | { 242 | "cell_type": "code", 243 | "execution_count": null, 244 | "metadata": {}, 245 | "outputs": [], 246 | "source": [ 247 | "def content_loss(content, target):\n", 248 | " return rms_loss(content, target)" 249 | ] 250 | }, 251 | { 252 | "cell_type": "code", 253 | "execution_count": null, 254 | "metadata": {}, 255 | "outputs": [], 256 | "source": [ 257 | "def gram_matrix(input_tensor):\n", 258 | " channels = int(input_tensor.shape[-1]) # channels is last dimension\n", 259 | " tensor = tf.reshape(input_tensor, [-1, channels]) # Make the image channels first\n", 260 | " number_of_channels = tf.shape(input=tensor)[0]\n", 261 | " gram = tf.matmul(tensor, tensor, transpose_a=True)\n", 262 | " return gram / tf.cast(number_of_channels, tf.float32)\n", 263 | "\n", 264 | "def style_loss(style, gram_target):\n", 265 | " gram_style = gram_matrix(style)\n", 266 | " return rms_loss(gram_style, gram_target) \n" 267 | ] 268 | }, 269 | { 270 | "cell_type": "code", 271 | "execution_count": null, 272 | "metadata": {}, 273 | "outputs": [], 274 | "source": [ 275 | "def get_feature_representations(model, content_path, style_path):\n", 276 | " #Function to compute content and style feature representations.\n", 277 | "\n", 278 | " content_image = load_and_process_image(content_path)\n", 279 | " content_outputs = model(content_image)\n", 280 | " #content_features = [content_layer[0] for content_layer in content_outputs[:number_of_content_layers]]\n", 281 | " content_features = [content_layer[0] for content_layer in content_outputs[number_of_style_layers:]]\n", 282 | "\n", 283 | "\n", 284 | " style_image = load_and_process_image(style_path)\n", 285 | " style_outputs = model(style_image)\n", 286 | " style_features = [style_layer[0] for style_layer in style_outputs[:number_of_style_layers]]\n", 287 | "\n", 288 | " return style_features, content_features" 289 | ] 290 | }, 291 | { 292 | "cell_type": "code", 293 | "execution_count": null, 294 | "metadata": {}, 295 | "outputs": [], 296 | "source": [ 297 | "\n", 298 | "def compute_total_loss(model, loss_weights, init_image, gram_style_features, content_features):\n", 299 | "\n", 300 | " style_weight, content_weight = loss_weights\n", 301 | "\n", 302 | " model_outputs = model(init_image)\n", 303 | "\n", 304 | "\n", 305 | "\n", 306 | " content_score = 0\n", 307 | " content_output_features = model_outputs[number_of_style_layers:]\n", 308 | " weight_per_content_layer = 1.0 / float(number_of_content_layers)\n", 309 | " for target_content, 
comb_content in zip(content_features, content_output_features):\n", 310 | " content_score += weight_per_content_layer*content_loss(comb_content[0], target_content)\n", 311 | " content_score *= content_weight\n", 312 | "\n", 313 | "\n", 314 | " style_score = 0\n", 315 | " style_output_features = model_outputs[:number_of_style_layers]\n", 316 | " weight_per_style_layer = 1.0 / float(number_of_style_layers)\n", 317 | " for target_style, comb_style in zip(gram_style_features, style_output_features):\n", 318 | " style_score += weight_per_style_layer *style_loss(comb_style[0], target_style)\n", 319 | " style_score *= style_weight\n", 320 | "\n", 321 | "\n", 322 | " total_loss = style_score + content_score\n", 323 | " return total_loss, style_score, content_score" 324 | ] 325 | }, 326 | { 327 | "cell_type": "code", 328 | "execution_count": null, 329 | "metadata": {}, 330 | "outputs": [], 331 | "source": [ 332 | "def compute_grads(config):\n", 333 | " with tf.GradientTape() as tape:\n", 334 | " all_loss = compute_total_loss(**config)\n", 335 | " # Compute gradients wrt input image\n", 336 | " total_loss = all_loss[0]\n", 337 | " return tape.gradient(total_loss, config['init_image']), all_loss" 338 | ] 339 | }, 340 | { 341 | "cell_type": "code", 342 | "execution_count": null, 343 | "metadata": {}, 344 | "outputs": [], 345 | "source": [ 346 | "import IPython.display\n", 347 | "\n", 348 | "def run_style_transfer(content_path,\n", 349 | " style_path,\n", 350 | " number_of_iterations=1000,\n", 351 | " content_weight=1e3,\n", 352 | " style_weight=1e-2):\n", 353 | " # We don't need to (or want to) train any layers of our model, so we set their\n", 354 | " # trainable to false.\n", 355 | " model = get_model()\n", 356 | " for layer in model.layers:\n", 357 | " layer.trainable = False\n", 358 | "\n", 359 | " # Get the style and content feature representations (from our specified intermediate layers)\n", 360 | " style_features, content_features = get_feature_representations(model, content_path, style_path)\n", 361 | " gram_style_features = [gram_matrix(style_feature) for style_feature in style_features]\n", 362 | "\n", 363 | " # Set initial image\n", 364 | " initial_image = load_and_process_image(content_path)\n", 365 | " initial_image = tf.Variable(initial_image, dtype=tf.float32)\n", 366 | " # Create our optimizer\n", 367 | " optimiser = tf.compat.v1.train.AdamOptimizer(learning_rate=5, beta1=0.99, epsilon=1e-1)\n", 368 | " #opt = tf.keras.optimizers.Adam()\n", 369 | "\n", 370 | " # Store our best result\n", 371 | " best_loss, best_image = float('inf'), None # any loss will be less than float('inf')\n", 372 | "\n", 373 | " # Create a suitable configuration\n", 374 | " loss_weights = (style_weight, content_weight)\n", 375 | " config = {\n", 376 | " 'model': model,\n", 377 | " 'loss_weights': loss_weights,\n", 378 | " 'init_image': initial_image,\n", 379 | " 'gram_style_features': gram_style_features,\n", 380 | " 'content_features': content_features\n", 381 | " }\n", 382 | "\n", 383 | " # For displaying\n", 384 | " number_rows = 2\n", 385 | " number_cols = 5\n", 386 | " display_interval = number_of_iterations/(number_rows*number_cols)\n", 387 | "\n", 388 | " norm_means = np.array(channel_means)\n", 389 | " minimum_vals = -norm_means\n", 390 | " maximum_vals = 255 - norm_means\n", 391 | " images = []\n", 392 | " for i in range(number_of_iterations):\n", 393 | " grads, all_loss = compute_grads(config)\n", 394 | " loss, style_score, content_score = all_loss\n", 395 | " optimiser.apply_gradients([(grads, 
initial_image)])\n", 396 | " clipped_image = tf.clip_by_value(initial_image, minimum_vals, maximum_vals)\n", 397 | " initial_image.assign(clipped_image)\n", 398 | "\n", 399 | " if loss < best_loss:\n", 400 | " # Update best loss and best image from total loss.\n", 401 | " best_loss = loss\n", 402 | " best_image = deprocess_image(initial_image.numpy()) # this is one place where we need eager execution\n", 403 | "\n", 404 | " if i % display_interval== 0:\n", 405 | "\n", 406 | " # Use the .numpy() method to get the concrete numpy array, needs eager execution\n", 407 | " plot_image = initial_image.numpy()\n", 408 | " plot_image = deprocess_image(plot_image)\n", 409 | " images.append(plot_image)\n", 410 | " IPython.display.clear_output(wait=True)\n", 411 | " IPython.display.display_png(Image.fromarray(plot_image))\n", 412 | " print('Iteration: {}'.format(i))\n", 413 | " print('Total loss: {:.4e}, '\n", 414 | " 'style loss: {:.4e}, '\n", 415 | " 'content loss: {:.4e} '\n", 416 | " .format(loss, style_score, content_score))\n", 417 | "\n", 418 | " IPython.display.clear_output(wait=True)\n", 419 | " plt.figure(figsize=(14,4))\n", 420 | " for i,image in enumerate(images):\n", 421 | " plt.subplot(number_rows,number_cols,i+1)\n", 422 | " plt.imshow(image)\n", 423 | " plt.xticks([])\n", 424 | " plt.yticks([])\n", 425 | "\n", 426 | " return best_image, best_loss" 427 | ] 428 | }, 429 | { 430 | "cell_type": "code", 431 | "execution_count": null, 432 | "metadata": {}, 433 | "outputs": [], 434 | "source": [ 435 | "best_image, best_loss = run_style_transfer(content_path, style_path, number_of_iterations=100)" 436 | ] 437 | }, 438 | { 439 | "cell_type": "code", 440 | "execution_count": null, 441 | "metadata": { 442 | "scrolled": true 443 | }, 444 | "outputs": [], 445 | "source": [ 446 | "Image.fromarray(best_image)" 447 | ] 448 | }, 449 | { 450 | "cell_type": "code", 451 | "execution_count": null, 452 | "metadata": {}, 453 | "outputs": [], 454 | "source": [ 455 | "def show_results(best_image, content_path, style_path, show_large_final=True):\n", 456 | " plt.figure(figsize=(10, 5))\n", 457 | " content = load_image(content_path)\n", 458 | " style = load_image(style_path)\n", 459 | "\n", 460 | " plt.subplot(1, 2, 1)\n", 461 | " show_image(content, 'Content Image')\n", 462 | "\n", 463 | " plt.subplot(1, 2, 2)\n", 464 | " show_image(style, 'Style Image')\n", 465 | "\n", 466 | " if show_large_final:\n", 467 | " plt.figure(figsize=(10, 10))\n", 468 | "\n", 469 | " plt.imshow(best_image)\n", 470 | " plt.title('Output Image')\n", 471 | " plt.show()" 472 | ] 473 | }, 474 | { 475 | "cell_type": "code", 476 | "execution_count": null, 477 | "metadata": { 478 | "scrolled": true 479 | }, 480 | "outputs": [], 481 | "source": [ 482 | "show_results(best_image, content_path, style_path)" 483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": null, 488 | "metadata": {}, 489 | "outputs": [], 490 | "source": [] 491 | }, 492 | { 493 | "cell_type": "code", 494 | "execution_count": null, 495 | "metadata": {}, 496 | "outputs": [], 497 | "source": [] 498 | }, 499 | { 500 | "cell_type": "code", 501 | "execution_count": null, 502 | "metadata": {}, 503 | "outputs": [], 504 | "source": [] 505 | }, 506 | { 507 | "cell_type": "code", 508 | "execution_count": null, 509 | "metadata": {}, 510 | "outputs": [], 511 | "source": [] 512 | }, 513 | { 514 | "cell_type": "code", 515 | "execution_count": null, 516 | "metadata": {}, 517 | "outputs": [], 518 | "source": [] 519 | }, 520 | { 521 | "cell_type": "code", 522 | 
"execution_count": null, 523 | "metadata": {}, 524 | "outputs": [], 525 | "source": [] 526 | } 527 | ], 528 | "metadata": { 529 | "kernelspec": { 530 | "display_name": "Python 3", 531 | "language": "python", 532 | "name": "python3" 533 | }, 534 | "language_info": { 535 | "codemirror_mode": { 536 | "name": "ipython", 537 | "version": 3 538 | }, 539 | "file_extension": ".py", 540 | "mimetype": "text/x-python", 541 | "name": "python", 542 | "nbconvert_exporter": "python", 543 | "pygments_lexer": "ipython3", 544 | "version": "3.6.7" 545 | } 546 | }, 547 | "nbformat": 4, 548 | "nbformat_minor": 2 549 | } 550 | -------------------------------------------------------------------------------- /Chapter07/tmp/nst/elephant.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter07/tmp/nst/elephant.jpg -------------------------------------------------------------------------------- /Chapter07/tmp/nst/skyscrapers.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter07/tmp/nst/skyscrapers.jpg -------------------------------------------------------------------------------- /Chapter07/tmp/nst/sunset.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter07/tmp/nst/sunset.jpg -------------------------------------------------------------------------------- /Chapter07/tmp/nst/zebra.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/PacktPublishing/Tensorflow-2.0-Quick-Start-Guide/56c6be1e90bd901523dbe7beaa973e8d158b8cd1/Chapter07/tmp/nst/zebra.jpg -------------------------------------------------------------------------------- /Chapter08/Chapter8_RNN_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "# Copyright 2018 The TensorFlow Authors.\n", 10 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 11 | "# you may not use this file except in compliance with the License.\n", 12 | "# You may obtain a copy of the License at\n", 13 | "#\n", 14 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 15 | "#\n", 16 | "# Unless required by applicable law or agreed to in writing, software\n", 17 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 18 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 19 | "# See the License for the specific language governing permissions and\n", 20 | "# limitations under the License.\n", 21 | "#https://github.com/tensorflow/docs/blob/master/site/en/tutorials/sequences/text_generation.ipynb" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": 2, 27 | "metadata": {}, 28 | "outputs": [], 29 | "source": [ 30 | "import tensorflow as tf\n", 31 | "import numpy as np\n", 32 | "import os\n", 33 | "import time" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": 3, 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | 
"file='1400-0.txt'\n", 43 | "url='https://www.gutenberg.org/files/1400/1400-0.txt' # Great Expectations by Charles Dickens\n" 44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "execution_count": 4, 49 | "metadata": { 50 | "scrolled": true 51 | }, 52 | "outputs": [], 53 | "source": [ 54 | "path = tf.keras.utils.get_file(file,url)" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 5, 60 | "metadata": {}, 61 | "outputs": [ 62 | { 63 | "name": "stdout", 64 | "output_type": "stream", 65 | "text": [ 66 | "Length of text: 1013445 characters\n" 67 | ] 68 | } 69 | ], 70 | "source": [ 71 | "text = open(path).read()\n", 72 | "print ('Length of text: {} characters'.format(len(text)))" 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": 6, 78 | "metadata": {}, 79 | "outputs": [ 80 | { 81 | "name": "stdout", 82 | "output_type": "stream", 83 | "text": [ 84 | "My father's family name being Pirrip, and my Christian name Philip, my\n", 85 | "infant tongue could make of both names nothing longer or more explicit\n", 86 | "than Pip. So, I called myself Pip, and came to be called Pip.\n", 87 | "\n", 88 | "I give Pirrip as my father's family name, on the authority of his\n", 89 | "tombstone and my sister,--Mrs\n" 90 | ] 91 | } 92 | ], 93 | "source": [ 94 | "# strip off text we don't need\n", 95 | "text = text[835:]\n", 96 | "\n", 97 | "# Take a look at the first 300 characters in text\n", 98 | "print(text[:300])" 99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": 7, 104 | "metadata": {}, 105 | "outputs": [ 106 | { 107 | "name": "stdout", 108 | "output_type": "stream", 109 | "text": [ 110 | "84 unique characters.\n" 111 | ] 112 | } 113 | ], 114 | "source": [ 115 | "# The unique characters in the file\n", 116 | "vocabulary = sorted(set(text))\n", 117 | "print ('{} unique characters.'.format(len(vocabulary)))" 118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": 8, 123 | "metadata": {}, 124 | "outputs": [], 125 | "source": [ 126 | "vocabulary_size = len(vocabulary)" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": 9, 132 | "metadata": {}, 133 | "outputs": [ 134 | { 135 | "name": "stdout", 136 | "output_type": "stream", 137 | "text": [ 138 | "{'\\n': 0, ' ': 1, '!': 2, '$': 3, '%': 4, '&': 5, \"'\": 6, '(': 7, ')': 8, '*': 9, ',': 10, '-': 11, '.': 12, '/': 13, '0': 14, '1': 15, '2': 16, '3': 17, '4': 18, '5': 19, '6': 20, '7': 21, '8': 22, '9': 23, ':': 24, ';': 25, '?': 26, '@': 27, 'A': 28, 'B': 29, 'C': 30, 'D': 31, 'E': 32, 'F': 33, 'G': 34, 'H': 35, 'I': 36, 'J': 37, 'K': 38, 'L': 39, 'M': 40, 'N': 41, 'O': 42, 'P': 43, 'Q': 44, 'R': 45, 'S': 46, 'T': 47, 'U': 48, 'V': 49, 'W': 50, 'X': 51, 'Y': 52, 'Z': 53, 'a': 54, 'b': 55, 'c': 56, 'd': 57, 'e': 58, 'f': 59, 'g': 60, 'h': 61, 'i': 62, 'j': 63, 'k': 64, 'l': 65, 'm': 66, 'n': 67, 'o': 68, 'p': 69, 'q': 70, 'r': 71, 's': 72, 't': 73, 'u': 74, 'v': 75, 'w': 76, 'x': 77, 'y': 78, 'z': 79, 'ê': 80, 'ô': 81, '“': 82, '”': 83}\n" 139 | ] 140 | } 141 | ], 142 | "source": [ 143 | "# Creating a dictionary of unique characters to indices\n", 144 | "char_to_index = {char:index for index, char in enumerate(vocabulary)}\n", 145 | "print(char_to_index)" 146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": 10, 151 | "metadata": {}, 152 | "outputs": [ 153 | { 154 | "name": "stdout", 155 | "output_type": "stream", 156 | "text": [ 157 | "['\\n' ' ' '!' '$' '%' '&' \"'\" '(' ')' '*' ',' '-' '.' 
'/' '0' '1' '2' '3'\n", 158 | " '4' '5' '6' '7' '8' '9' ':' ';' '?' '@' 'A' 'B' 'C' 'D' 'E' 'F' 'G' 'H'\n", 159 | " 'I' 'J' 'K' 'L' 'M' 'N' 'O' 'P' 'Q' 'R' 'S' 'T' 'U' 'V' 'W' 'X' 'Y' 'Z'\n", 160 | " 'a' 'b' 'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j' 'k' 'l' 'm' 'n' 'o' 'p' 'q' 'r'\n", 161 | " 's' 't' 'u' 'v' 'w' 'x' 'y' 'z' 'ê' 'ô' '“' '”']\n" 162 | ] 163 | } 164 | ], 165 | "source": [ 166 | "index_to_char = np.array(vocabulary)\n", 167 | "print(index_to_char)\n", 168 | "text_as_int = np.array([char_to_index[char] for char in text])" 169 | ] 170 | }, 171 | { 172 | "cell_type": "code", 173 | "execution_count": 11, 174 | "metadata": {}, 175 | "outputs": [ 176 | { 177 | "name": "stdout", 178 | "output_type": "stream", 179 | "text": [ 180 | "{\n", 181 | " '\\n': 0,\n", 182 | " ' ' : 1,\n", 183 | " '!' : 2,\n", 184 | " '$' : 3,\n", 185 | " '%' : 4,\n", 186 | " '&' : 5,\n", 187 | " \"'\" : 6,\n", 188 | " '(' : 7,\n", 189 | " ')' : 8,\n", 190 | " '*' : 9,\n", 191 | " ',' : 10,\n", 192 | " '-' : 11,\n", 193 | " '.' : 12,\n", 194 | " '/' : 13,\n", 195 | " '0' : 14,\n", 196 | " '1' : 15,\n", 197 | " '2' : 16,\n", 198 | " '3' : 17,\n", 199 | " '4' : 18,\n", 200 | " '5' : 19,\n", 201 | " ...\n", 202 | "}\n" 203 | ] 204 | } 205 | ], 206 | "source": [ 207 | "print('{')\n", 208 | "for char,_ in zip(char_to_index, range(20)):\n", 209 | " print(' {:4s}: {:3d},'.format(repr(char), char_to_index[char]))\n", 210 | "print(' ...\\n}')" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 12, 216 | "metadata": {}, 217 | "outputs": [ 218 | { 219 | "name": "stdout", 220 | "output_type": "stream", 221 | "text": [ 222 | "\"My father's fam\" ---- characters mapped to int ---- > [40 78 1 59 54 73 61 58 71 6 72 1 59 54 66]\n" 223 | ] 224 | } 225 | ], 226 | "source": [ 227 | "# Show how the first 15 characters from the text are mapped to integers\n", 228 | "print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:15]), text_as_int[:15]))" 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": 13, 234 | "metadata": {}, 235 | "outputs": [ 236 | { 237 | "name": "stdout", 238 | "output_type": "stream", 239 | "text": [ 240 | "M\n", 241 | "y\n", 242 | " \n", 243 | "f\n", 244 | "a\n" 245 | ] 246 | } 247 | ], 248 | "source": [ 249 | "# The maximum length sentence we want for a single input in characters\n", 250 | "sequence_length = 100\n", 251 | "examples_per_epoch = len(text)//sequence_length\n", 252 | "\n", 253 | "# Create training examples / targets\n", 254 | "char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)\n", 255 | "\n", 256 | "for char in char_dataset.take(5):\n", 257 | " print(index_to_char[char.numpy()])" 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": 14, 263 | "metadata": {}, 264 | "outputs": [ 265 | { 266 | "name": "stdout", 267 | "output_type": "stream", 268 | "text": [ 269 | "\"My father's family name being Pirrip, and my Christian name Philip, my\\ninfant tongue could make of bo\"\n", 270 | "'th names nothing longer or more explicit\\nthan Pip. So, I called myself Pip, and came to be called Pip'\n", 271 | "\".\\n\\nI give Pirrip as my father's family name, on the authority of his\\ntombstone and my sister,--Mrs. 
J\"\n", 272 | "'oe Gargery, who married the blacksmith.\\nAs I never saw my father or my mother, and never saw any like'\n", 273 | "'ness\\nof either of them (for their days were long before the days of\\nphotographs), my first fancies re'\n" 274 | ] 275 | } 276 | ], 277 | "source": [ 278 | "sequences = char_dataset.batch(sequence_length+1, drop_remainder=True)\n", 279 | "\n", 280 | "for item in sequences.take(5):\n", 281 | " print(repr(''.join(index_to_char[item.numpy()])))" 282 | ] 283 | }, 284 | { 285 | "cell_type": "code", 286 | "execution_count": 15, 287 | "metadata": {}, 288 | "outputs": [], 289 | "source": [ 290 | "def split_input_target(chunk):\n", 291 | " input_text = chunk[:-1]\n", 292 | " target_text = chunk[1:]\n", 293 | " return input_text, target_text\n", 294 | "\n", 295 | "dataset = sequences.map(split_input_target)" 296 | ] 297 | }, 298 | { 299 | "cell_type": "code", 300 | "execution_count": 16, 301 | "metadata": {}, 302 | "outputs": [ 303 | { 304 | "name": "stdout", 305 | "output_type": "stream", 306 | "text": [ 307 | "Input data: \"My father's family name being Pirrip, and my Christian name Philip, my\\ninfant tongue could make of b\"\n", 308 | "Target data: \"y father's family name being Pirrip, and my Christian name Philip, my\\ninfant tongue could make of bo\"\n" 309 | ] 310 | } 311 | ], 312 | "source": [ 313 | "for input_example, target_example in dataset.take(1):\n", 314 | " print ('Input data: ', repr(''.join(index_to_char[input_example.numpy()])))\n", 315 | " print ('Target data:', repr(''.join(index_to_char[target_example.numpy()])))" 316 | ] 317 | }, 318 | { 319 | "cell_type": "code", 320 | "execution_count": 17, 321 | "metadata": {}, 322 | "outputs": [ 323 | { 324 | "name": "stdout", 325 | "output_type": "stream", 326 | "text": [ 327 | "Step 0\n", 328 | " input: 40 ('M')\n", 329 | " expected output: 78 ('y')\n", 330 | "Step 1\n", 331 | " input: 78 ('y')\n", 332 | " expected output: 1 (' ')\n", 333 | "Step 2\n", 334 | " input: 1 (' ')\n", 335 | " expected output: 59 ('f')\n", 336 | "Step 3\n", 337 | " input: 59 ('f')\n", 338 | " expected output: 54 ('a')\n", 339 | "Step 4\n", 340 | " input: 54 ('a')\n", 341 | " expected output: 73 ('t')\n" 342 | ] 343 | } 344 | ], 345 | "source": [ 346 | "for char, (input_index, target_index) in enumerate(zip(input_example[:5], target_example[:5])):\n", 347 | " print(\"Step {:4d}\".format(char))\n", 348 | " print(\" input: {} ({:s})\".format(input_index, repr(index_to_char[input_index])))\n", 349 | " print(\" expected output: {} ({:s})\".format(target_index, repr(index_to_char[target_index])))" 350 | ] 351 | }, 352 | { 353 | "cell_type": "code", 354 | "execution_count": 18, 355 | "metadata": {}, 356 | "outputs": [ 357 | { 358 | "data": { 359 | "text/plain": [ 360 | "" 361 | ] 362 | }, 363 | "execution_count": 18, 364 | "metadata": {}, 365 | "output_type": "execute_result" 366 | } 367 | ], 368 | "source": [ 369 | "\n", 370 | "# Batch size\n", 371 | "batch = 64\n", 372 | "steps_per_epoch = examples_per_epoch//batch\n", 373 | "\n", 374 | "# TF data maintains a buffer in memory in which to shuffle data\n", 375 | "# since it is designed to work with possibly endless data\n", 376 | "buffer = 10000\n", 377 | "\n", 378 | "dataset = dataset.shuffle(buffer).batch(batch, drop_remainder=True)\n", 379 | "\n", 380 | "dataset = dataset.repeat()\n", 381 | "\n", 382 | "dataset" 383 | ] 384 | }, 385 | { 386 | "cell_type": "code", 387 | "execution_count": 19, 388 | "metadata": {}, 389 | "outputs": [], 390 | "source": [ 391 | "# The vocabulary 
length in characters\n", 392 | "vocabulary_length = len(vocabulary)\n", 393 | "\n", 394 | "# The embedding dimension\n", 395 | "embedding_dimension = 256\n", 396 | "\n", 397 | "# Number of RNN units\n", 398 | "recurrent_nn_units = 1024" 399 | ] 400 | }, 401 | { 402 | "cell_type": "code", 403 | "execution_count": 20, 404 | "metadata": {}, 405 | "outputs": [ 406 | { 407 | "name": "stdout", 408 | "output_type": "stream", 409 | "text": [ 410 | "CPU in use\n" 411 | ] 412 | } 413 | ], 414 | "source": [ 415 | "if tf.test.is_gpu_available():\n", 416 | " recurrent_nn = tf.compat.v1.keras.layers.CuDNNGRU\n", 417 | " print(\"GPU in use\")\n", 418 | "else:\n", 419 | " import functools\n", 420 | " recurrent_nn = functools.partial(tf.keras.layers.GRU, recurrent_activation='sigmoid')\n", 421 | " print(\"CPU in use\")" 422 | ] 423 | }, 424 | { 425 | "cell_type": "code", 426 | "execution_count": 21, 427 | "metadata": {}, 428 | "outputs": [], 429 | "source": [ 430 | "\n", 431 | "def build_model(vocabulary_size, embedding_dimension, recurrent_nn_units, batch_size):\n", 432 | " model = tf.keras.Sequential(\n", 433 | " [tf.keras.layers.Embedding(vocabulary_size, embedding_dimension, batch_input_shape=[batch_size, None]),\n", 434 | " recurrent_nn(recurrent_nn_units, return_sequences=True, recurrent_initializer='glorot_uniform', stateful=True),\n", 435 | " tf.keras.layers.Dense(vocabulary_size)\n", 436 | " ])\n", 437 | " return model" 438 | ] 439 | }, 440 | { 441 | "cell_type": "code", 442 | "execution_count": 22, 443 | "metadata": {}, 444 | "outputs": [], 445 | "source": [ 446 | "model = build_model(\n", 447 | " vocabulary_size = len(vocabulary),\n", 448 | " embedding_dimension=embedding_dimension,\n", 449 | " recurrent_nn_units=recurrent_nn_units,\n", 450 | " batch_size=batch)" 451 | ] 452 | }, 453 | { 454 | "cell_type": "code", 455 | "execution_count": 23, 456 | "metadata": {}, 457 | "outputs": [ 458 | { 459 | "name": "stdout", 460 | "output_type": "stream", 461 | "text": [ 462 | "(64, 100, 84) # (batch, sequence_length, vocabulary_length)\n" 463 | ] 464 | } 465 | ], 466 | "source": [ 467 | "for batch_input_example, batch_target_example in dataset.take(1):\n", 468 | " batch_predictions_example = model(batch_input_example)\n", 469 | " print(batch_predictions_example.shape, \"# (batch, sequence_length, vocabulary_length)\")" 470 | ] 471 | }, 472 | { 473 | "cell_type": "code", 474 | "execution_count": 24, 475 | "metadata": {}, 476 | "outputs": [ 477 | { 478 | "name": "stdout", 479 | "output_type": "stream", 480 | "text": [ 481 | "Model: \"sequential\"\n", 482 | "_________________________________________________________________\n", 483 | "Layer (type) Output Shape Param # \n", 484 | "=================================================================\n", 485 | "embedding (Embedding) (64, None, 256) 21504 \n", 486 | "_________________________________________________________________\n", 487 | "unified_gru (UnifiedGRU) (64, None, 1024) 3938304 \n", 488 | "_________________________________________________________________\n", 489 | "dense (Dense) (64, None, 84) 86100 \n", 490 | "=================================================================\n", 491 | "Total params: 4,045,908\n", 492 | "Trainable params: 4,045,908\n", 493 | "Non-trainable params: 0\n", 494 | "_________________________________________________________________\n" 495 | ] 496 | } 497 | ], 498 | "source": [ 499 | "model.summary()" 500 | ] 501 | }, 502 | { 503 | "cell_type": "code", 504 | "execution_count": 25, 505 | "metadata": {}, 506 | "outputs": [],
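The parameter counts printed by `model.summary()` above can be verified by hand. A quick check, assuming TF 2's GRU uses the CuDNN-compatible `reset_after` formulation (two bias vectors per gate), which is what the printed 3,938,304 implies:

```python
# Quick check of the parameter counts printed by model.summary() above.
vocabulary_length = 84
embedding_dimension = 256
recurrent_nn_units = 1024

# Embedding: one 256-wide row per character in the vocabulary.
embedding_params = vocabulary_length * embedding_dimension  # 21,504

# GRU with reset_after=True: 3 gates, each with input weights, recurrent
# weights, and two bias vectors.
gru_params = 3 * recurrent_nn_units * (
    embedding_dimension + recurrent_nn_units + 2)           # 3,938,304

# Dense: one weight per (unit, character) pair plus one bias per character.
dense_params = recurrent_nn_units * vocabulary_length + vocabulary_length  # 86,100

assert embedding_params + gru_params + dense_params == 4_045_908
```

Note also that the final `Dense` layer has no activation, so the model emits raw logits; that is why the loss defined further on passes `from_logits=True`, and why the next cell samples from the logits with `tf.random.categorical`.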
507 | "source": [ 508 | "sampled_indices = tf.random.categorical(logits=batch_predictions_example[0], num_samples=1)\n", 509 | "\n", 510 | "sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()" 511 | ] 512 | }, 513 | { 514 | "cell_type": "code", 515 | "execution_count": 26, 516 | "metadata": {}, 517 | "outputs": [ 518 | { 519 | "data": { 520 | "text/plain": [ 521 | "array([61, 33, 47, 79, 37, 58, 25, 71, 28, 81, 24, 34, 9, 6, 83, 77, 18,\n", 522 | " 57, 26, 5, 81, 56, 58, 23, 44, 58, 64, 39, 24, 9, 42, 21, 27, 38,\n", 523 | " 74, 68, 53, 40, 5, 82, 3, 71, 14, 66, 60, 0, 4, 13, 16, 11, 20,\n", 524 | " 44, 54, 32, 5, 3, 8, 13, 6, 52, 22, 66, 12, 77, 8, 23, 18, 55,\n", 525 | " 26, 59, 38, 69, 12, 71, 45, 81, 12, 17, 36, 40, 40, 47, 40, 63, 40,\n", 526 | " 40, 54, 60, 12, 62, 39, 15, 39, 74, 40, 20, 1, 26, 6, 25])" 527 | ] 528 | }, 529 | "execution_count": 26, 530 | "metadata": {}, 531 | "output_type": "execute_result" 532 | } 533 | ], 534 | "source": [ 535 | "sampled_indices" 536 | ] 537 | }, 538 | { 539 | "cell_type": "code", 540 | "execution_count": 27, 541 | "metadata": { 542 | "scrolled": true 543 | }, 544 | "outputs": [ 545 | { 546 | "name": "stdout", 547 | "output_type": "stream", 548 | "text": [ 549 | "Input: \n", 550 | " 'r, that I might refer to it again; but I could not find it, and\\nwas uneasy to think that it must hav'\n", 551 | "Next Char Predictions: \n", 552 | " \"hFTzJe;rAô:G*'”x4d?&ôce9QekL:*O7@KuoZM&“$r0mg\\n%/2-6QaE&$)/'Y8m.x)94b?fKp.rRô.3IMMTMjMMag.iL1LuM6 ?';\"\n" 553 | ] 554 | } 555 | ], 556 | "source": [ 557 | "print(\"Input: \\n\", repr(\"\".join(index_to_char[batch_input_example[0]])))\n", 558 | "\n", 559 | "print(\"Next Char Predictions: \\n\", repr(\"\".join(index_to_char[sampled_indices ])))\n", 560 | "#" 561 | ] 562 | }, 563 | { 564 | "cell_type": "code", 565 | "execution_count": 28, 566 | "metadata": {}, 567 | "outputs": [], 568 | "source": [ 569 | "def loss(labels, logits):\n", 570 | " return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)" 571 | ] 572 | }, 573 | { 574 | "cell_type": "code", 575 | "execution_count": 29, 576 | "metadata": {}, 577 | "outputs": [ 578 | { 579 | "name": "stdout", 580 | "output_type": "stream", 581 | "text": [ 582 | "Prediction shape: (64, 100, 84) # (batch_size, sequence_length, vocab_size)\n", 583 | "scalar_loss: 4.4306927\n" 584 | ] 585 | } 586 | ], 587 | "source": [ 588 | "\n", 589 | "batch_loss_example = tf.compat.v1.losses.sparse_softmax_cross_entropy(batch_target_example, batch_predictions_example)\n", 590 | "print(\"Prediction shape: \", batch_predictions_example.shape, \" # (batch_size, sequence_length, vocab_size)\")\n", 591 | "print(\"scalar_loss: \", batch_loss_example.numpy())" 592 | ] 593 | }, 594 | { 595 | "cell_type": "code", 596 | "execution_count": 30, 597 | "metadata": {}, 598 | "outputs": [], 599 | "source": [ 600 | "#next produced by upgrade script.... \n", 601 | "#model.compile(optimizer = tf.compat.v1.train.AdamOptimizer(), loss = loss) \n", 602 | "#.... 
but following optimizer is available.\n", 603 | "model.compile(optimizer = tf.optimizers.Adam(), loss = loss)" 604 | ] 605 | }, 606 | { 607 | "cell_type": "code", 608 | "execution_count": 31, 609 | "metadata": {}, 610 | "outputs": [], 611 | "source": [ 612 | "# Directory where the checkpoints will be saved\n", 613 | "directory = './checkpoints'\n", 614 | "# Name of the checkpoint files\n", 615 | "file_prefix = os.path.join(directory, \"ckpt_{epoch}\")\n", 616 | "\n", 617 | "callback=[tf.keras.callbacks.ModelCheckpoint(filepath=file_prefix, save_weights_only=True)]\n" 618 | ] 619 | }, 620 | { 621 | "cell_type": "code", 622 | "execution_count": 32, 623 | "metadata": {}, 624 | "outputs": [], 625 | "source": [ 626 | "epochs=45" 627 | ] 628 | }, 629 | { 630 | "cell_type": "code", 631 | "execution_count": 33, 632 | "metadata": {}, 633 | "outputs": [ 634 | { 635 | "name": "stdout", 636 | "output_type": "stream", 637 | "text": [ 638 | "Epoch 1/45\n", 639 | "158/158 [==============================] - 107s 675ms/step - loss: 2.6684\n", 640 | "Epoch 2/45\n", 641 | "158/158 [==============================] - 104s 656ms/step - loss: 1.9597\n", 642 | "Epoch 3/45\n", 643 | "158/158 [==============================] - 103s 654ms/step - loss: 1.6832\n", 644 | "Epoch 4/45\n", 645 | "158/158 [==============================] - 104s 657ms/step - loss: 1.5192\n", 646 | "Epoch 5/45\n", 647 | "158/158 [==============================] - 105s 664ms/step - loss: 1.4198\n", 648 | "Epoch 6/45\n", 649 | "158/158 [==============================] - 104s 659ms/step - loss: 1.3533\n", 650 | "Epoch 7/45\n", 651 | "158/158 [==============================] - 104s 657ms/step - loss: 1.3040\n", 652 | "Epoch 8/45\n", 653 | "158/158 [==============================] - 105s 662ms/step - loss: 1.2615\n", 654 | "Epoch 9/45\n", 655 | "158/158 [==============================] - 104s 657ms/step - loss: 1.2278\n", 656 | "Epoch 10/45\n", 657 | "158/158 [==============================] - 104s 659ms/step - loss: 1.1944\n", 658 | "Epoch 11/45\n", 659 | "158/158 [==============================] - 103s 654ms/step - loss: 1.1610\n", 660 | "Epoch 12/45\n", 661 | "158/158 [==============================] - 105s 663ms/step - loss: 1.1296\n", 662 | "Epoch 13/45\n", 663 | "158/158 [==============================] - 104s 661ms/step - loss: 1.0959\n", 664 | "Epoch 14/45\n", 665 | "158/158 [==============================] - 106s 670ms/step - loss: 1.0633\n", 666 | "Epoch 15/45\n", 667 | "158/158 [==============================] - 106s 671ms/step - loss: 1.0313\n", 668 | "Epoch 16/45\n", 669 | "158/158 [==============================] - 104s 661ms/step - loss: 0.9981\n", 670 | "Epoch 17/45\n", 671 | "158/158 [==============================] - 104s 659ms/step - loss: 0.9632\n", 672 | "Epoch 18/45\n", 673 | "158/158 [==============================] - 106s 669ms/step - loss: 0.9298\n", 674 | "Epoch 19/45\n", 675 | "158/158 [==============================] - 104s 658ms/step - loss: 0.8970\n", 676 | "Epoch 20/45\n", 677 | "158/158 [==============================] - 104s 656ms/step - loss: 0.8669\n", 678 | "Epoch 21/45\n", 679 | "158/158 [==============================] - 103s 655ms/step - loss: 0.8343\n", 680 | "Epoch 22/45\n", 681 | "158/158 [==============================] - 104s 658ms/step - loss: 0.8059\n", 682 | "Epoch 23/45\n", 683 | "158/158 [==============================] - 104s 660ms/step - loss: 0.7816\n", 684 | "Epoch 24/45\n", 685 | "158/158 [==============================] - 105s 666ms/step - loss: 0.7559\n", 686 | "Epoch 25/45\n", 687 | 
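While the epochs above run, the `ModelCheckpoint` callback defined before `model.fit` writes one weights file per epoch, expanding the `{epoch}` placeholder in the file prefix into `ckpt_1` ... `ckpt_45`. The template can also embed logged metrics; a hedged sketch (the loss-bearing filename is an illustration, not what this notebook uses):

```python
import os
import tensorflow as tf

directory = './checkpoints'

# Hypothetical variant: any key in Keras's logs dict (here 'loss') may appear
# in the filepath template alongside {epoch}.
file_prefix = os.path.join(directory, "ckpt_{epoch}-loss_{loss:.3f}")
callback = [tf.keras.callbacks.ModelCheckpoint(filepath=file_prefix,
                                               save_weights_only=True)]

# tf.train.latest_checkpoint reads the 'checkpoint' index file in the
# directory and returns the most recently saved prefix, as the next cell shows.
```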
"158/158 [==============================] - 104s 661ms/step - loss: 0.7338\n", 688 | "Epoch 26/45\n", 689 | "158/158 [==============================] - 105s 664ms/step - loss: 0.7152\n", 690 | "Epoch 27/45\n", 691 | "158/158 [==============================] - 104s 661ms/step - loss: 0.6974\n", 692 | "Epoch 28/45\n", 693 | "158/158 [==============================] - 105s 667ms/step - loss: 0.6843\n", 694 | "Epoch 29/45\n", 695 | "158/158 [==============================] - 106s 671ms/step - loss: 0.6723\n", 696 | "Epoch 30/45\n", 697 | "158/158 [==============================] - 105s 665ms/step - loss: 0.6593\n", 698 | "Epoch 31/45\n", 699 | "158/158 [==============================] - 105s 664ms/step - loss: 0.6503\n", 700 | "Epoch 32/45\n", 701 | "158/158 [==============================] - 106s 672ms/step - loss: 0.6428\n", 702 | "Epoch 33/45\n", 703 | "158/158 [==============================] - 105s 666ms/step - loss: 0.6331\n", 704 | "Epoch 34/45\n", 705 | "158/158 [==============================] - 107s 676ms/step - loss: 0.6262\n", 706 | "Epoch 35/45\n", 707 | "158/158 [==============================] - 105s 662ms/step - loss: 0.6221\n", 708 | "Epoch 36/45\n", 709 | "158/158 [==============================] - 108s 681ms/step - loss: 0.6165\n", 710 | "Epoch 37/45\n", 711 | "158/158 [==============================] - 107s 676ms/step - loss: 0.6127\n", 712 | "Epoch 38/45\n", 713 | "158/158 [==============================] - 107s 676ms/step - loss: 0.6109\n", 714 | "Epoch 39/45\n", 715 | "158/158 [==============================] - 107s 677ms/step - loss: 0.6089\n", 716 | "Epoch 40/45\n", 717 | "158/158 [==============================] - 106s 672ms/step - loss: 0.6064\n", 718 | "Epoch 41/45\n", 719 | "158/158 [==============================] - 107s 674ms/step - loss: 0.6037\n", 720 | "Epoch 42/45\n", 721 | "158/158 [==============================] - 105s 663ms/step - loss: 0.6043\n", 722 | "Epoch 43/45\n", 723 | "158/158 [==============================] - 104s 659ms/step - loss: 0.6028\n", 724 | "Epoch 44/45\n", 725 | "158/158 [==============================] - 106s 669ms/step - loss: 0.6050\n", 726 | "Epoch 45/45\n", 727 | "158/158 [==============================] - 107s 679ms/step - loss: 0.6054\n" 728 | ] 729 | } 730 | ], 731 | "source": [ 732 | "\n", 733 | "history = model.fit(dataset, epochs=epochs, steps_per_epoch=steps_per_epoch, callbacks=callback)" 734 | ] 735 | }, 736 | { 737 | "cell_type": "code", 738 | "execution_count": 34, 739 | "metadata": {}, 740 | "outputs": [ 741 | { 742 | "data": { 743 | "text/plain": [ 744 | "'./checkpoints/ckpt_45'" 745 | ] 746 | }, 747 | "execution_count": 34, 748 | "metadata": {}, 749 | "output_type": "execute_result" 750 | } 751 | ], 752 | "source": [ 753 | "tf.train.latest_checkpoint(directory)" 754 | ] 755 | }, 756 | { 757 | "cell_type": "code", 758 | "execution_count": 35, 759 | "metadata": {}, 760 | "outputs": [], 761 | "source": [ 762 | "model = build_model(vocabulary_size, embedding_dimension, recurrent_nn_units, batch_size=1)\n", 763 | "\n", 764 | "model.load_weights(tf.train.latest_checkpoint(directory))\n", 765 | "\n", 766 | "model.build(tf.TensorShape([1, None]))" 767 | ] 768 | }, 769 | { 770 | "cell_type": "code", 771 | "execution_count": 36, 772 | "metadata": {}, 773 | "outputs": [ 774 | { 775 | "name": "stdout", 776 | "output_type": "stream", 777 | "text": [ 778 | "Model: \"sequential_1\"\n", 779 | "_________________________________________________________________\n", 780 | "Layer (type) Output Shape Param # \n", 781 | 
"=================================================================\n", 782 | "embedding_1 (Embedding) (1, None, 256) 21504 \n", 783 | "_________________________________________________________________\n", 784 | "unified_gru_1 (UnifiedGRU) (1, None, 1024) 3938304 \n", 785 | "_________________________________________________________________\n", 786 | "dense_1 (Dense) (1, None, 84) 86100 \n", 787 | "=================================================================\n", 788 | "Total params: 4,045,908\n", 789 | "Trainable params: 4,045,908\n", 790 | "Non-trainable params: 0\n", 791 | "_________________________________________________________________\n" 792 | ] 793 | } 794 | ], 795 | "source": [ 796 | "model.summary()" 797 | ] 798 | }, 799 | { 800 | "cell_type": "code", 801 | "execution_count": 37, 802 | "metadata": {}, 803 | "outputs": [], 804 | "source": [ 805 | "\n", 806 | "def generate_text(model, start_string, temperature, characters_to_generate):\n", 807 | "\n", 808 | " # Vectorise start string into numbers\n", 809 | " input_string = [char_to_index[char] for char in start_string]\n", 810 | " input_string = tf.expand_dims(input_string, 0)\n", 811 | "\n", 812 | " # Empty string to store generated text\n", 813 | " generated = []\n", 814 | "\n", 815 | " # (Batch size is 1)\n", 816 | " model.reset_states()\n", 817 | " for i in range(characters_to_generate):\n", 818 | " predictions = model(input_string)\n", 819 | " # remove the batch dimension\n", 820 | " predictions = tf.squeeze(predictions, 0)\n", 821 | "\n", 822 | " # using a multinomial distribution to predict the word returned by the model\n", 823 | " predictions = predictions / temperature\n", 824 | " predicted_id = tf.random.categorical(logits=predictions, num_samples=1)[-1,0].numpy()\n", 825 | "\n", 826 | " # Pass the predicted word as the next input to the model\n", 827 | " # along with the previous hidden state\n", 828 | " input_string = tf.expand_dims([predicted_id], 0)\n", 829 | "\n", 830 | " generated.append(index_to_char[predicted_id])\n", 831 | "\n", 832 | " return (start_string + ''.join(generated)) # generated is a list" 833 | ] 834 | }, 835 | { 836 | "cell_type": "code", 837 | "execution_count": 38, 838 | "metadata": {}, 839 | "outputs": [ 840 | { 841 | "name": "stdout", 842 | "output_type": "stream", 843 | "text": [ 844 | "Pip!”\n", 845 | "\n", 846 | "“So it was.”\n", 847 | "\n", 848 | "“Astonishing!” said Joe, in the nature of an umbrella.\n", 849 | "\n", 850 | "“Then, as it were now as for the other convict. “The fear of our own little nor\n", 851 | "stones of the forge. I was falling into my head to look if the way by which I had come to be a man. A deserting your mind to him, my indertrodic work in any convicts!” Then both very well. I\n", 852 | "thought it would have been more crack with the forge and Mill Pond Bank, Clara\n", 853 | "was not a variety of being in common black piece of paper, and put the two convicts were\n", 854 | "bought, the more certain I am of a circle.\n", 855 | "\n", 856 | "“Lookee here, Pip, look at his door.\n", 857 | "\n", 858 | "In the evening there was a lady by which I was always creeping the fire at the windows of the copyright holder), the waiter reappeared.\n", 859 | "\n", 860 | "“Why you see, old chap. They'll do you suppose the attempt to do it\n", 861 | "distinctly thanked him and said that Mr. Pumblechook was on my shoulder by some one\n", 862 | "of these days, and as Mr. Pumblechook was not at all likely he could\n", 863 | "have done it. 
I had done it, under the silence \n" 864 | ] 865 | } 866 | ], 867 | "source": [ 868 | "# In the arguments, a low temperature gives more predictable text whereas a high temperature gives more random text.\n", 869 | "# Also this is where you can change the start string.\n", 870 | "generated_text = generate_text(model=model, start_string=\"Pip\", temperature=0.1, characters_to_generate = 1000)\n", 871 | "print(generated_text)" 872 | ] 873 | }, 874 | { 875 | "cell_type": "code", 876 | "execution_count": null, 877 | "metadata": {}, 878 | "outputs": [], 879 | "source": [] 880 | } 881 | ], 882 | "metadata": { 883 | "kernelspec": { 884 | "display_name": "Python 3", 885 | "language": "python", 886 | "name": "python3" 887 | }, 888 | "language_info": { 889 | "codemirror_mode": { 890 | "name": "ipython", 891 | "version": 3 892 | }, 893 | "file_extension": ".py", 894 | "mimetype": "text/x-python", 895 | "name": "python", 896 | "nbconvert_exporter": "python", 897 | "pygments_lexer": "ipython3", 898 | "version": "3.6.7" 899 | } 900 | }, 901 | "nbformat": 4, 902 | "nbformat_minor": 2 903 | } 904 | -------------------------------------------------------------------------------- /Chapter09/Chapter9_fashion_estimator_TF2_alpha.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import tensorflow as tf\n", 10 | "import numpy as np" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": 2, 16 | "metadata": {}, 17 | "outputs": [ 18 | { 19 | "name": "stdout", 20 | "output_type": "stream", 21 | "text": [ 22 | "\n" 23 | ] 24 | } 25 | ], 26 | "source": [ 27 | "fashion = tf.keras.datasets.fashion_mnist\n", 28 | "(x_train, y_train),(x_test, y_test) = fashion.load_data()\n", 29 | "print(type(x_train))\n", 30 | "x_train, x_test = x_train / 255.0, x_test / 255.0\n", 31 | "\n", 32 | "y_train, y_test = np.int32(y_train), np.int32(y_test)\n", 33 | "\n", 34 | "learning_rate = 1e-4" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": 3, 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "# Define the training input function\n", 44 | "\n", 45 | "train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(\n", 46 | " x={\"x\": x_train},\n", 47 | " y=y_train,\n", 48 | " num_epochs=None,\n", 49 | " batch_size=50,\n", 50 | " shuffle=True\n", 51 | ")" 52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": 4, 57 | "metadata": {}, 58 | "outputs": [], 59 | "source": [ 60 | "# Define the testing inputfunction.\n", 61 | "test_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(\n", 62 | " x={\"x\": x_test},\n", 63 | " y=y_test,\n", 64 | " num_epochs=1,\n", 65 | " shuffle=False\n", 66 | ")" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 5, 72 | "metadata": {}, 73 | "outputs": [], 74 | "source": [ 75 | "# Specify feature\n", 76 | "feature_columns = [tf.feature_column.numeric_column(\"x\", shape=[28, 28])]" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 6, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "# Build 2 layer DNN classifier\n", 86 | "classifier = tf.estimator.DNNClassifier(\n", 87 | " feature_columns=feature_columns,\n", 88 | " hidden_units=[256, 32],\n", 89 | " optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate),\n", 90 | " n_classes=10,\n", 91 | " dropout=0.1,\n", 92 | " model_dir=\"./tmp/mnist_modelx\"\n", 93 | ", 
loss_reduction=tf.compat.v1.losses.Reduction.SUM)" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "metadata": {}, 100 | "outputs": [], 101 | "source": [] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": 7, 106 | "metadata": {}, 107 | "outputs": [ 108 | { 109 | "name": "stderr", 110 | "output_type": "stream", 111 | "text": [ 112 | "WARNING: Logging before flag parsing goes to stderr.\n", 113 | "W0309 18:26:07.244415 140378619287360 deprecation.py:323] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:238: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\n", 114 | "Instructions for updating:\n", 115 | "Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.\n", 116 | "W0309 18:26:07.254260 140378619287360 deprecation.py:323] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n", 117 | "Instructions for updating:\n", 118 | "To construct input pipelines, use the `tf.data` module.\n", 119 | "W0309 18:26:07.255648 140378619287360 deprecation.py:323] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_functions.py:500: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n", 120 | "Instructions for updating:\n", 121 | "To construct input pipelines, use the `tf.data` module.\n", 122 | "W0309 18:26:07.268105 140378619287360 deprecation.py:506] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1257: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 123 | "Instructions for updating:\n", 124 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n", 125 | "W0309 18:26:07.273055 140378619287360 deprecation.py:323] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py:2758: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n", 126 | "Instructions for updating:\n", 127 | "Use `tf.cast` instead.\n", 128 | "W0309 18:26:07.450507 140378619287360 deprecation.py:506] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/training/slot_creator.py:187: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 129 | "Instructions for updating:\n", 130 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n", 131 | "W0309 18:26:07.729648 140378619287360 deprecation.py:323] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n", 132 | "Instructions for updating:\n", 133 | "Use standard file APIs to check for files with this prefix.\n", 134 | "W0309 18:26:07.763673 140378619287360 deprecation.py:323] 
From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1069: get_checkpoint_mtimes (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n", 135 | "Instructions for updating:\n", 136 | "Use standard file utilities to get mtimes.\n", 137 | "W0309 18:26:07.801820 140378619287360 deprecation.py:323] From /home/tony/.virtualenvs/tf1p12/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py:877: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n", 138 | "Instructions for updating:\n", 139 | "To construct input pipelines, use the `tf.data` module.\n" 140 | ] 141 | }, 142 | { 143 | "data": { 144 | "text/plain": [ 145 | "" 146 | ] 147 | }, 148 | "execution_count": 7, 149 | "metadata": {}, 150 | "output_type": "execute_result" 151 | } 152 | ], 153 | "source": [ 154 | "#with tf.device('/cpu:0'):\n", 155 | "classifier.train(input_fn=train_input_fn, steps=10000)" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": null, 161 | "metadata": {}, 162 | "outputs": [], 163 | "source": [] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "execution_count": 8, 168 | "metadata": {}, 169 | "outputs": [ 170 | { 171 | "name": "stdout", 172 | "output_type": "stream", 173 | "text": [ 174 | "\n", 175 | "Test Accuracy: 88.020003%\n", 176 | "\n", 177 | "Test loss: 42.309265\n", 178 | "\n" 179 | ] 180 | } 181 | ], 182 | "source": [ 183 | " # Evaluate accuracy\n", 184 | "accuracy_score = classifier.evaluate(input_fn=test_input_fn)[\"accuracy\"]\n", 185 | "loss = classifier.evaluate(input_fn=test_input_fn)[\"loss\"]\n", 186 | "\n", 187 | "print(\"\\nTest Accuracy: {0:f}%\\n\".format(accuracy_score*100))\n", 188 | "print(\"Test loss: {0:f}\\n\".format(loss))" 189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "metadata": {}, 195 | "outputs": [], 196 | "source": [] 197 | } 198 | ], 199 | "metadata": { 200 | "kernelspec": { 201 | "display_name": "Python 3", 202 | "language": "python", 203 | "name": "python3" 204 | }, 205 | "language_info": { 206 | "codemirror_mode": { 207 | "name": "ipython", 208 | "version": 3 209 | }, 210 | "file_extension": ".py", 211 | "mimetype": "text/x-python", 212 | "name": "python", 213 | "nbconvert_exporter": "python", 214 | "pygments_lexer": "ipython3", 215 | "version": "3.6.7" 216 | } 217 | }, 218 | "nbformat": 4, 219 | "nbformat_minor": 2 220 | } 221 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Packt 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | # TensorFlow 2.0 Quick Start Guide 5 | 6 | TensorFlow 2.0 Quick Start Guide 7 | 8 | This is the code repository for [TensorFlow 2.0 Quick Start Guide](https://prod.packtpub.com/in/big-data-and-business-intelligence/tensorflow-20-quick-start-guide?utm_source=github&utm_medium=repository&utm_campaign=9781789530759), published by Packt. 9 | 10 | **Get up to speed with the newly introduced features of TensorFlow 2.0** 11 | 12 | ## What is this book about? 13 | TensorFlow is one of the most popular machine learning frameworks in Python. With this book, you will improve your knowledge of some of the latest TensorFlow features and will be able to perform supervised and unsupervised machine learning and also train neural networks. 14 | 15 | This book covers the following exciting features: 16 | * Use tf.Keras for fast prototyping, building, and training deep learning neural network models 17 | * Easily convert your TensorFlow 1.12 applications to TensorFlow 2.0-compatible files 18 | * Use TensorFlow to tackle traditional supervised and unsupervised machine learning applications 19 | * Understand image recognition techniques using TensorFlow 20 | * Perform neural style transfer for image hybridization using a neural network 21 | * Code a recurrent neural network in TensorFlow to perform text-style generation 22 | 23 | If you feel this book is for you, get your [copy](https://www.amazon.com/dp/178953075X) today! 24 | 25 | https://www.packtpub.com/ 27 | 28 | 29 | ## Instructions and Navigations 30 | All of the code is organized into folders. For example, Chapter02. 31 | 32 | The code will look like the following: 33 | ``` 34 | image1 = tf.zeros([7, 28, 28, 3]) # example-within-batch by height by 35 | width by color 36 | ``` 37 | 38 | **Following is what you need for this book:** 39 | Data scientists, machine learning developers, and deep learning enthusiasts looking to quickly get started with TensorFlow 2 will find this book useful. Some Python programming experience with version 3.6 or later, along with a familiarity with Jupyter notebooks will be an added advantage. Exposure to machine learning and neural network techniques would also be helpful. 40 | 41 | With the following software and hardware list you can run all code files present in the book (Chapter 1-9). 42 | 43 | ### Software and Hardware List 44 | 45 | | Chapter | Software required | OS required | 46 | | -------- | -----------------------------------------------------| -----------------------------------| 47 | | 1-9 | TensorFlow 2.0.0 alpha, Python 3.6, Jupyter Notebook | Windows, Mac OS X, and Linux (Any) | 48 | 49 | 50 | 51 | We also provide a PDF file that has color images of the screenshots/diagrams used in this book. [Click here to download it](http://www.packtpub.com/sites/default/files/downloads/9781789530759_ColorImages.pdf). 
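The `tf.zeros` sample shown earlier uses the batch, height, width, channels (NHWC) layout that recurs throughout the book's image chapters. A short sketch of reading those dimensions back (the variable names are illustrative, not from the book):

```
import tensorflow as tf

# A batch of 7 images, each 28x28 pixels with 3 colour channels (NHWC layout).
image1 = tf.zeros([7, 28, 28, 3])

batch, height, width, channels = image1.shape
print(batch, height, width, channels)  # 7 28 28 3
print(image1.dtype)                    # <dtype: 'float32'>, the tf.zeros default
```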
52 | 53 | 54 | 55 | ### Related products 56 | * Deep Learning with TensorFlow - Second Edition [[Packt]](https://prod.packtpub.com/in/big-data-and-business-intelligence/deep-learning-tensorflow-second-edition?utm_source=github&utm_medium=repository&utm_campaign=9781788831109) [[Amazon]](https://www.amazon.com/dp/1788831101) 57 | 58 | * Intelligent Mobile Projects with TensorFlow [[Packt]](https://prod.packtpub.com/in/application-development/intelligent-mobile-projects-tensorflow?utm_source=github&utm_medium=repository&utm_campaign=9781788834544) [[Amazon]](https://www.amazon.com/dp/1788834542) 59 | 60 | ## Get to Know the Author 61 | **Tony Holdroyd's** 62 | first degree, from Durham University, was in maths and physics. He also has technical qualifications, including MCSD, MCSD.net, and SCJP. He holds an MSc in 63 | computer science from London University. He was a senior lecturer in computer science and maths in further education, designing and delivering programming courses in many languages, including C, C+, Java, C#, and SQL. His passion for neural networks stems from research he did for his MSc thesis. He has developed numerous machine learning, neural network, and deep learning applications, and has advised in the media industry on deep learning as applied to image and music processing. Tony lives in Gravesend, Kent, UK, with his wife, Sue McCreeth, who is a renowned musician. 64 | 65 | 66 | 67 | ### Suggestions and Feedback 68 | [Click here](https://docs.google.com/forms/d/e/1FAIpQLSdy7dATC6QmEL81FIUuymZ0Wy9vH1jHkvpY57OiMeKGqib_Ow/viewform) if you have any feedback or suggestions. 69 | ### Download a free PDF 70 | 71 | If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.
72 |

https://packt.link/free-ebook/9781789530759

--------------------------------------------------------------------------------