├── 1. Introduction to TensorFlow ├── Course 1 - Part 2 - Lesson 2 - Notebook.ipynb ├── Course 1 - Part 4 - Lesson 2 - Notebook.ipynb ├── Course 1 - Part 4 - Lesson 4 - Notebook.ipynb ├── Course 1 - Part 6 - Lesson 2 - Notebook.ipynb ├── Course 1 - Part 6 - Lesson 3 - Notebook.ipynb ├── Course 1 - Part 8 - Lesson 2 - Notebook.ipynb ├── Course 1 - Part 8 - Lesson 3 - Notebook.ipynb ├── Course 1 - Part 8 - Lesson 4 - Notebook.ipynb ├── Exercises │ ├── Exercise 1 - House Prices │ │ ├── Exercise_1_House_Prices_Answer.ipynb │ │ └── Exercise_1_House_Prices_Question.ipynb │ ├── Exercise 2 - Handwriting Recognition │ │ ├── Exercise2-Answer.ipynb │ │ └── Exercise2-Question.ipynb │ ├── Exercise 3 - Convolutions │ │ ├── Excercise-3-Question.ipynb │ │ └── Exercise 3 - Answer.ipynb │ ├── Exercise 4 - Handling Complex Images │ │ ├── Exercise4-Answer.ipynb │ │ └── Exercise4-Question.ipynb │ └── tmp2 │ │ ├── happy-or-sad.zip │ │ └── mnist.npz ├── Hello_World_Layers.ipynb └── horse_test_image.jpg ├── 2. 
Convolutional Neural Networks in TensorFlow ├── Course 2 - Part 2 - Lesson 2 - Notebook.ipynb ├── Course 2 - Part 4 - Lesson 2 - Notebook (Cats v Dogs Augmentation).ipynb ├── Course 2 - Part 4 - Lesson 4 - Notebook.ipynb ├── Course 2 - Part 6 - Lesson 3 - Notebook.ipynb ├── Course 2 - Part 8 - Lesson 2 - Notebook (RockPaperScissors).ipynb ├── Exercises │ ├── Exercise 5 - Real World Scenarios │ │ ├── Exercise 5 - Answer.ipynb │ │ ├── Exercise 5 - Question.ipynb │ │ └── Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb │ ├── Exercise 6 - Cats v Dogs with Augmentation │ │ ├── Exercise 6 - Answer.ipynb │ │ ├── Exercise 6 - Question.ipynb │ │ └── Exercise_2_Cats_vs_Dogs_using_augmentation_Question-FINAL.ipynb │ ├── Exercise 7 - Transfer Learning │ │ ├── Exercise 7 - Answer.ipynb │ │ ├── Exercise 7 - Question.ipynb │ │ └── Exercise_3_Horses_vs_humans_using_Transfer_Learning_Question-FINAL.ipynb │ ├── Exercise 8 - Multiclass with Signs │ │ ├── Exercise 8 - Answer.ipynb │ │ ├── Exercise 8 - Question.ipynb │ │ └── Exercise_4_Multi_class_classifier_Question-FINAL.ipynb │ └── tmp2 │ │ ├── sign_mnist_test.csv │ │ └── validation-horse-or-human.zip ├── rps-validation │ ├── paper-hires1.png │ ├── paper-hires2.png │ ├── paper1.png │ ├── paper2.png │ ├── paper3.png │ ├── paper4.png │ ├── paper5.png │ ├── paper6.png │ ├── paper7.png │ ├── paper8.png │ ├── paper9.png │ ├── rock-hires1.png │ ├── rock-hires2.png │ ├── rock1.png │ ├── rock2.png │ ├── rock3.png │ ├── rock4.png │ ├── rock5.png │ ├── rock6.png │ ├── rock7.png │ ├── rock8.png │ ├── rock9.png │ ├── scissors-hires1.png │ ├── scissors-hires2.png │ ├── scissors1.png │ ├── scissors2.png │ ├── scissors3.png │ ├── scissors4.png │ ├── scissors5.png │ ├── scissors6.png │ ├── scissors7.png │ ├── scissors8.png │ └── scissors9.png ├── sign-language-mnist │ ├── amer_sign2.png │ ├── amer_sign3.png │ ├── american_sign_language.PNG │ └── sign_mnist_test.csv ├── test_cat.jpg └── test_dog.jpg ├── 3. 
Natural Language Processing in TensorFlow ├── Course 3 - Week 1 - Lesson 1.ipynb ├── Course 3 - Week 1 - Lesson 2.ipynb ├── Course 3 - Week 1 - Lesson 3.ipynb ├── Course 3 - Week 2 - Lesson 1.ipynb ├── Course 3 - Week 2 - Lesson 2.ipynb ├── Course 3 - Week 2 - Lesson 3.ipynb ├── Course 3 - Week 3 - Lesson 1a.ipynb ├── Course 3 - Week 3 - Lesson 1b.ipynb ├── Course 3 - Week 3 - Lesson 1c.ipynb ├── Course 3 - Week 3 - Lesson 2.ipynb ├── Course 3 - Week 3 - Lesson 2c.ipynb ├── Course 3 - Week 3 - Lesson 2d.ipynb ├── Course 3 - Week 4 - Lesson 1 - Notebook.ipynb ├── Course 3 - Week 4 - Lesson 2 - Notebook.ipynb ├── Exercises │ ├── Week 1 │ │ ├── Course 3 - Week 1 - Exercise-answer.ipynb │ │ └── Course 3 - Week 1 - Exercise-question.ipynb │ ├── Week 2 │ │ ├── Course 3 - Week 2 - Exercise - Answer.ipynb │ │ └── Course 3 - Week 2 - Exercise - Question.ipynb │ ├── Week 3 │ │ ├── NLP Course - Week 3 Exercise Answer.ipynb │ │ └── NLP Course - Week 3 Exercise Question.ipynb │ └── Week 4 │ │ ├── NLP_Week4_Exercise_Shakespeare_Answer.ipynb │ │ └── NLP_Week4_Exercise_Shakespeare_Question.ipynb ├── imdb │ ├── meta.tsv │ └── vecs.tsv ├── n-grams.png ├── sarcasm │ ├── meta.tsv │ └── vecs.tsv └── text_generation.ipynb ├── 4. 
Sequences, Time Series and Prediction ├── Exercises │ ├── Week 1 │ │ ├── Week 1 Exercise Answer.ipynb │ │ └── Week 1 Exercise Question.ipynb │ ├── Week 2 │ │ ├── S+P_Week_2_Exercise_Answer.ipynb │ │ └── S+P_Week_2_Exercise_Question.ipynb │ ├── Week 3 │ │ ├── S+P Week 3 Exercise Answer.ipynb │ │ └── S+P Week 3 Exercise Question.ipynb │ └── Week 4 │ │ ├── S+P Week 4 Exercise Answer.ipynb │ │ └── S+P Week 4 Exercise Question.ipynb ├── Notes │ ├── Common patterns in time series.pdf │ ├── Convolutions in Autoregressive Neural Networks.pdf │ ├── Dilated and causal convolution.pdf │ ├── Moving average and differencing.pdf │ ├── Outputting a sequence.pdf │ └── Shape of the inputs to the RNN.pdf ├── S+P Week 1 - Lesson 1 - Notebook.ipynb ├── S+P Week 1 - Lesson 2.ipynb ├── S+P Week 1 - Lesson 3 - Notebook.ipynb ├── S+P Week 2 Lesson 1.ipynb ├── S+P Week 2 Lesson 2.ipynb ├── S+P Week 2 Lesson 3.ipynb ├── S+P Week 3 Lesson 2 - RNN.ipynb ├── S+P Week 3 Lesson 4 - LSTM.ipynb ├── S+P Week 4 Lesson 1.ipynb ├── S+P Week 4 Lesson 3.ipynb ├── S+P Week 4 Lesson 5.ipynb └── seq_to_seq.ipynb └── README.md /1. 
Introduction to TensorFlow/Course 1 - Part 4 - Lesson 4 - Notebook.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Course 1 - Part 4 - Lesson 4 - Notebook.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.7.5rc1"}},"cells":[{"cell_type":"markdown","metadata":{"colab_type":"text","id":"view-in-github"},"source":["\"Open"]},{"cell_type":"markdown","metadata":{"colab_type":"text","id":"rX8mhOLljYeM"},"source":["##### Copyright 2019 The TensorFlow Authors."]},{"cell_type":"code","metadata":{"cellView":"form","colab_type":"code","id":"BZSlp3DAjdYf","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"byUyEAzRgreQ","colab_type":"text"},"source":["**Pro tip:** The callback is defined on epoch END! This means that training will continue through to the end of the current epoch even if the monitored metric (here, accuracy) has already crossed the given threshold. 
The metric can fluctuate during an epoch, since not all of the data has been processed yet, so it's good practice to wait till the end of the epoch."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"N9-BCmi15L93","colab":{"base_uri":"https://localhost:8080/","height":102},"outputId":"53ecb174-9180-4847-9321-bab453b60a77","executionInfo":{"status":"ok","timestamp":1588240180067,"user_tz":-330,"elapsed":9101,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["import tensorflow as tf\n","\n","class myCallback(tf.keras.callbacks.Callback):\n"," def on_epoch_end(self, epoch, logs={}):\n"," if(logs.get('accuracy')>0.6):\n"," print(\"\\nReached 60% accuracy so cancelling training!\")\n"," self.model.stop_training = True\n","\n","mnist = tf.keras.datasets.fashion_mnist\n","\n","(x_train, y_train),(x_test, y_test) = mnist.load_data()\n","x_train, x_test = x_train / 255.0, x_test / 255.0\n","\n","callbacks = myCallback()\n","\n","model = tf.keras.models.Sequential([\n"," tf.keras.layers.Flatten(input_shape=(28, 28)),\n"," tf.keras.layers.Dense(512, activation=tf.nn.relu),\n"," tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n","])\n","model.compile(optimizer=tf.optimizers.Adam(),\n"," loss='sparse_categorical_crossentropy',\n"," metrics=['accuracy'])\n","\n","model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])"],"execution_count":2,"outputs":[{"output_type":"stream","text":["Epoch 1/10\n","1870/1875 [============================>.] 
- ETA: 0s - loss: 0.4755 - accuracy: 0.8293\n","Reached 60% accuracy so cancelling training!\n","1875/1875 [==============================] - 7s 4ms/step - loss: 0.4754 - accuracy: 0.8293\n"],"name":"stdout"},{"output_type":"execute_result","data":{"text/plain":[""]},"metadata":{"tags":[]},"execution_count":2}]}]} -------------------------------------------------------------------------------- /1. Introduction to TensorFlow/Exercises/Exercise 2 - Handwriting Recognition/Exercise2-Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 
22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "rEHcB3kqyHZ6" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import tensorflow as tf\n", 37 | "\n", 38 | "class myCallback(tf.keras.callbacks.Callback):\n", 39 | " def on_epoch_end(self, epoch, logs={}):\n", 40 | " if(logs.get('accuracy')>0.99):\n", 41 | " print(\"\\nReached 99% accuracy so cancelling training!\")\n", 42 | " self.model.stop_training = True\n", 43 | "\n", 44 | "mnist = tf.keras.datasets.mnist\n", 45 | "\n", 46 | "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n", 47 | "x_train, x_test = x_train / 255.0, x_test / 255.0\n", 48 | "\n", 49 | "callbacks = myCallback()\n", 50 | "\n", 51 | "model = tf.keras.models.Sequential([\n", 52 | " tf.keras.layers.Flatten(input_shape=(28, 28)),\n", 53 | " tf.keras.layers.Dense(512, activation=tf.nn.relu),\n", 54 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n", 55 | "])\n", 56 | "model.compile(optimizer='adam',\n", 57 | " loss='sparse_categorical_crossentropy',\n", 58 | " metrics=['accuracy'])\n", 59 | "\n", 60 | "model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])" 61 | ] 62 | } 63 | ], 64 | "metadata": { 65 | "colab": { 66 | "collapsed_sections": [], 67 | "name": "Exercise2-Answer.ipynb", 68 | "provenance": [], 69 | "toc_visible": true, 70 | "version": "0.3.2" 71 | }, 72 | "kernelspec": { 73 | "display_name": "Python 3", 74 | "name": "python3" 75 | } 76 | }, 77 | "nbformat": 4, 78 | "nbformat_minor": 0 79 | } 80 | -------------------------------------------------------------------------------- /1. 
Introduction to TensorFlow/Exercises/Exercise 2 - Handwriting Recognition/Exercise2-Question.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"coursera":{"course_slug":"introduction-tensorflow","graded_item_id":"d6dew","launcher_item_id":"FExZ4"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"},"colab":{"name":"Exercise2-Question.ipynb","provenance":[],"collapsed_sections":[]}},"cells":[{"cell_type":"markdown","metadata":{"colab_type":"text","id":"tOoyQ70H00_s"},"source":["## Exercise 2\n","In the course you learned how to do classification using Fashion MNIST, a dataset containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.\n","\n","Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.\n","\n","Some notes:\n","1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger\n","2. When it reaches 99% or greater it should print out the string \"Reached 99% accuracy so cancelling training!\"\n","3. If you add any additional variables, make sure you use the same names as the ones used in the class\n","\n","I've started the code for you below -- how would you finish it? 
"]},{"cell_type":"code","metadata":{"id":"ChNSj5l6om_5","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":54},"outputId":"59d3ff7e-a4e0-449a-c390-71d8f8e200c0","executionInfo":{"status":"ok","timestamp":1588244295530,"user_tz":-330,"elapsed":1843,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["# Mount Google Drive directory to access mnist.npz from tmp2 in Google Drive.\n","from google.colab import drive\n","drive.mount('/content/drive')"],"execution_count":25,"outputs":[{"output_type":"stream","text":["Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"id":"xUQsJ8oIiXEm","colab_type":"code","colab":{}},"source":["import tensorflow as tf\n","from os import path, getcwd, chdir\n","\n","# DO NOT CHANGE THE LINE BELOW. If you are developing in a local\n","# environment, then grab mnist.npz from the Coursera Jupyter Notebook\n","# and place it inside a local folder and edit the path to that location\n","\n","# Path for Coursera Jupyter autograder:\n","# path = f\"{getcwd()}/../tmp2/mnist.npz\"\n","\n","# Path to mnist.npz in Google Drive:\n","path = f\"{getcwd()}/drive/My Drive/TensorFlow In Practice/1. 
Introduction to TensorFlow/Exercises/tmp2/mnist.npz\""],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"9rvXQGAA0ssC","colab":{}},"source":["# GRADED FUNCTION: train_mnist\n","def train_mnist():\n"," # Please write your code only where you are indicated.\n"," # please do not remove # model fitting inline comments.\n","\n"," # YOUR CODE SHOULD START HERE\n"," # IMPORTANT: Change 'accuracy' to 'acc' for Coursera autograder!\n"," class myCallback(tf.keras.callbacks.Callback):\n"," def on_epoch_end(self, epoch, logs={}):\n"," if(logs.get('accuracy')>0.99):\n"," print(\"\\nReached 99% accuracy so cancelling training!\")\n"," self.model.stop_training = True\n"," # YOUR CODE SHOULD END HERE\n","\n"," mnist = tf.keras.datasets.mnist\n","\n"," (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)\n"," # YOUR CODE SHOULD START HERE\n"," x_train, x_test = x_train / 255.0, x_test / 255.0\n"," callbacks = myCallback()\n"," # YOUR CODE SHOULD END HERE\n"," model = tf.keras.models.Sequential([\n"," # YOUR CODE SHOULD START HERE\n"," tf.keras.layers.Flatten(input_shape=(28, 28)),\n"," tf.keras.layers.Dense(512, activation=tf.nn.relu),\n"," tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n"," # YOUR CODE SHOULD END HERE\n"," ])\n","\n"," # IMPORTANT: Change 'accuracy' to 'acc' for Coursera autograder!\n"," model.compile(optimizer='adam',\n"," loss='sparse_categorical_crossentropy',\n"," metrics=['accuracy'])\n"," \n"," # model fitting\n"," history = model.fit(# YOUR CODE SHOULD START HERE\n"," x_train, y_train, epochs=10, callbacks=[callbacks]\n"," # YOUR CODE SHOULD END HERE\n"," )\n"," # model fitting\n"," # Return last computed loss and accuracy!\n"," # IMPORTANT: Change 'accuracy' to 'acc' for Coursera autograder!\n"," return history.epoch, 
history.history['accuracy'][-1]"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"73bEt8PZiXFP","colab":{"base_uri":"https://localhost:8080/","height":238},"outputId":"2fe39e02-11d8-419b-aa53-c9f1ebaa3123","executionInfo":{"status":"ok","timestamp":1588244713945,"user_tz":-330,"elapsed":34467,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["train_mnist()"],"execution_count":32,"outputs":[{"output_type":"stream","text":["Epoch 1/10\n","1875/1875 [==============================] - 6s 3ms/step - loss: 0.2000 - accuracy: 0.9413\n","Epoch 2/10\n","1875/1875 [==============================] - 6s 3ms/step - loss: 0.0794 - accuracy: 0.9752\n","Epoch 3/10\n","1875/1875 [==============================] - 6s 3ms/step - loss: 0.0520 - accuracy: 0.9840\n","Epoch 4/10\n","1875/1875 [==============================] - 7s 3ms/step - loss: 0.0372 - accuracy: 0.9883\n","Epoch 5/10\n","1870/1875 [============================>.] - ETA: 0s - loss: 0.0253 - accuracy: 0.9920\n","Reached 99% accuracy so cancelling training!\n","1875/1875 [==============================] - 6s 3ms/step - loss: 0.0253 - accuracy: 0.9920\n"],"name":"stdout"},{"output_type":"execute_result","data":{"text/plain":["([0, 1, 2, 3, 4], 0.9919999837875366)"]},"metadata":{"tags":[]},"execution_count":32}]},{"cell_type":"code","metadata":{"id":"zlkJgfSmnekL","colab_type":"code","colab":{}},"source":[""],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /1. 
Introduction to TensorFlow/Exercises/Exercise 3 - Convolutions/Excercise-3-Question.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"coursera":{"course_slug":"introduction-tensorflow","graded_item_id":"ml06H","launcher_item_id":"hQF8A"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"},"colab":{"name":"Excercise-3-Question.ipynb","provenance":[],"collapsed_sections":[]},"accelerator":"GPU"},"cells":[{"cell_type":"markdown","metadata":{"colab_type":"text","id":"iQjHqsmTAVLU"},"source":["## Exercise 3\n","In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. 
If it doesn't, then you'll need to redesign your layers.\n","\n","I've started the code for you -- you need to finish it!\n","\n","When 99.8% accuracy has been hit, you should print out the string \"Reached 99.8% accuracy so cancelling training!\"\n"]},{"cell_type":"code","metadata":{"id":"MkwaK9g8S3vu","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":122},"outputId":"11c1b9bc-60d8-44be-e327-334052c56edc","executionInfo":{"status":"ok","timestamp":1588425050423,"user_tz":-330,"elapsed":55068,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["# Mount Google Drive directory to access mnist.npz from tmp2 in Google Drive.\n","from google.colab import drive\n","drive.mount('/content/drive')"],"execution_count":1,"outputs":[{"output_type":"stream","text":["Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n","\n","Enter your authorization code:\n","··········\n","Mounted at /content/drive\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"id":"H4ZL_xtKx8Gp","colab_type":"code","colab":{}},"source":["import tensorflow as tf\n","from os import path, getcwd, chdir\n","\n","# DO NOT CHANGE THE LINE BELOW. 
If you are developing in a local\n","# environment, then grab mnist.npz from the Coursera Jupyter Notebook\n","# and place it inside a local folder and edit the path to that location\n","\n","# Path for Coursera Jupyter autograder:\n","# path = f\"{getcwd()}/../tmp2/mnist.npz\"\n","\n","# Path to mnist.npz in Google Drive:\n","path = f\"{getcwd()}/drive/My Drive/TensorFlow In Practice/1. Introduction to TensorFlow/Exercises/tmp2/mnist.npz\""],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"9r2hbYIwhA_l","colab_type":"text"},"source":["**Pro tip:** The following TensorFlow 1.x code configures the session to allocate GPU memory on demand, and is needed for the Coursera Jupyter submission. In Colab, leave it commented out and enable the GPU from 'Edit - Notebook settings' instead."]},{"cell_type":"code","metadata":{"id":"uwSDYoBkx8HE","colab_type":"code","colab":{}},"source":["#config = tf.ConfigProto()\n","#config.gpu_options.allow_growth = True\n","#sess = tf.Session(config=config)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"lB_684TpdC__","colab_type":"text"},"source":["**Pro tip:** Don't forget to RESHAPE the input data before normalizing! 
Remember the training set has 60,000 images and the test set has 10,000 images."]},{"cell_type":"code","metadata":{"id":"AO0258NWx8HS","colab_type":"code","colab":{}},"source":["# GRADED FUNCTION: train_mnist_conv\n","def train_mnist_conv():\n"," # Please write your code only where you are indicated.\n"," # please do not remove model fitting inline comments.\n","\n"," # YOUR CODE STARTS HERE\n"," class myCallback(tf.keras.callbacks.Callback):\n"," def on_epoch_end(self, epoch, logs={}):\n"," # Change 'accuracy' to 'acc' for the Coursera autograder!\n"," if(logs.get('accuracy')>0.998): \n"," print(\"\\nReached 99.8% accuracy so cancelling training!\")\n"," self.model.stop_training = True\n"," # YOUR CODE ENDS HERE\n","\n"," mnist = tf.keras.datasets.mnist\n"," (training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path)\n"," # YOUR CODE STARTS HERE\n"," training_images = training_images.reshape(60000, 28, 28, 1)\n"," test_images = test_images.reshape(10000, 28, 28, 1)\n"," # Normalize once, after reshaping (avoids dividing test_images by 255 twice)\n"," training_images, test_images = training_images / 255.0, test_images / 255.0\n"," callbacks = myCallback()\n"," # YOUR CODE ENDS HERE\n","\n"," model = tf.keras.models.Sequential([\n"," tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),\n"," tf.keras.layers.MaxPooling2D(2, 2),\n"," tf.keras.layers.Flatten(),\n"," tf.keras.layers.Dense(128, activation='relu'),\n"," tf.keras.layers.Dense(10, activation='softmax')\n"," ])\n","\n"," model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n"," # model fitting\n"," history = model.fit(training_images, training_labels, epochs=20, callbacks=[callbacks])\n"," # model fitting\n"," # Change 'accuracy' to 'acc' for the Coursera autograder!\n"," return history.epoch, 
history.history['accuracy'][-1]\n","\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"--G_7V4kx8Ha","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":323},"outputId":"78853994-acca-4d2f-f04b-d7a030242f3c","executionInfo":{"status":"ok","timestamp":1588425186777,"user_tz":-330,"elapsed":91377,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["_, _ = train_mnist_conv()"],"execution_count":6,"outputs":[{"output_type":"stream","text":["Epoch 1/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.1348 - accuracy: 0.9593\n","Epoch 2/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0471 - accuracy: 0.9856\n","Epoch 3/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0287 - accuracy: 0.9909\n","Epoch 4/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0182 - accuracy: 0.9945\n","Epoch 5/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0140 - accuracy: 0.9955\n","Epoch 6/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0088 - accuracy: 0.9972\n","Epoch 7/20\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0076 - accuracy: 0.9975\n","Epoch 8/20\n","1875/1875 [==============================] - ETA: 0s - loss: 0.0058 - accuracy: 0.9982\n","Reached 99.8% accuracy so cancelling training!\n","1875/1875 [==============================] - 9s 5ms/step - loss: 0.0058 - accuracy: 0.9982\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"id":"LQ0TD4rMZW48","colab_type":"code","colab":{}},"source":[""],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /1. 
Introduction to TensorFlow/Exercises/Exercise 3 - Convolutions/Exercise 3 - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "22hBZbxx98IS" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import tensorflow as tf\n", 37 | "\n", 38 | "class myCallback(tf.keras.callbacks.Callback):\n", 39 | " def on_epoch_end(self, epoch, logs={}):\n", 40 | " if(logs.get('accuracy')>0.998):\n", 41 | " print(\"\\nReached 99.8% accuracy so cancelling training!\")\n", 42 | " self.model.stop_training = True\n", 43 | "\n", 44 | "callbacks = myCallback()\n", 45 | "mnist = tf.keras.datasets.mnist\n", 46 | "(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n", 47 | "training_images=training_images.reshape(60000, 28, 28, 1)\n", 48 | "training_images=training_images / 255.0\n", 49 | "test_images = test_images.reshape(10000, 28, 28, 1)\n", 50 | "test_images=test_images/255.0\n", 51 | "model = 
tf.keras.models.Sequential([\n", 52 | " tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),\n", 53 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 54 | " tf.keras.layers.Flatten(),\n", 55 | " tf.keras.layers.Dense(128, activation='relu'),\n", 56 | " tf.keras.layers.Dense(10, activation='softmax')\n", 57 | "])\n", 58 | "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 59 | "model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])\n" 60 | ] 61 | } 62 | ], 63 | "metadata": { 64 | "colab": { 65 | "name": "Exercise 3 - Answer.ipynb", 66 | "provenance": [], 67 | "toc_visible": true, 68 | "version": "0.3.2" 69 | }, 70 | "kernelspec": { 71 | "display_name": "Python 3", 72 | "name": "python3" 73 | } 74 | }, 75 | "nbformat": 4, 76 | "nbformat_minor": 0 77 | } 78 | -------------------------------------------------------------------------------- /1. Introduction to TensorFlow/Exercises/Exercise 4 - Handling Complex Images/Exercise4-Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "accelerator": "GPU", 6 | "colab": { 7 | "name": "Exercise4-Answer.ipynb", 8 | "provenance": [], 9 | "toc_visible": true, 10 | "include_colab_link": true 11 | }, 12 | "kernelspec": { 13 | "display_name": "Python 3", 14 | "name": "python3" 15 | } 16 | }, 17 | "cells": [ 18 | { 19 | "cell_type": "markdown", 20 | "metadata": { 21 | "id": "view-in-github", 22 | "colab_type": "text" 23 | }, 24 | "source": [ 25 | "\"Open" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "metadata": { 31 | "id": "zX4Kg8DUTKWO", 32 | "colab_type": "code", 33 | "colab": {} 34 | }, 35 | "source": [ 36 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 37 | "# you may not use this file except in compliance with the License.\n", 38 | "# You may obtain a copy of the License at\n", 39 | 
"#\n", 40 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 41 | "#\n", 42 | "# Unless required by applicable law or agreed to in writing, software\n", 43 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 44 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 45 | "# See the License for the specific language governing permissions and\n", 46 | "# limitations under the License." 47 | ], 48 | "execution_count": 0, 49 | "outputs": [] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "metadata": { 54 | "colab_type": "code", 55 | "id": "3NFuMFYXtwsT", 56 | "colab": {} 57 | }, 58 | "source": [ 59 | "import tensorflow as tf\n", 60 | "import os\n", 61 | "import zipfile\n", 62 | "\n", 63 | "DESIRED_ACCURACY = 0.999\n", 64 | "\n", 65 | "!wget --no-check-certificate \\\n", 66 | " \"https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip\" \\\n", 67 | " -O \"/tmp/happy-or-sad.zip\"\n", 68 | "\n", 69 | "zip_ref = zipfile.ZipFile(\"/tmp/happy-or-sad.zip\", 'r')\n", 70 | "zip_ref.extractall(\"/tmp/h-or-s\")\n", 71 | "zip_ref.close()\n", 72 | "\n", 73 | "class myCallback(tf.keras.callbacks.Callback):\n", 74 | " def on_epoch_end(self, epoch, logs={}):\n", 75 | " if(logs.get('accuracy')>DESIRED_ACCURACY):\n", 76 | " print(\"\\nReached 99.9% accuracy so cancelling training!\")\n", 77 | " self.model.stop_training = True\n", 78 | "\n", 79 | "callbacks = myCallback()\n" 80 | ], 81 | "execution_count": 0, 82 | "outputs": [] 83 | }, 84 | { 85 | "cell_type": "code", 86 | "metadata": { 87 | "colab_type": "code", 88 | "id": "eUcNTpra1FK0", 89 | "colab": {} 90 | }, 91 | "source": [ 92 | "model = tf.keras.models.Sequential([\n", 93 | " tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n", 94 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 95 | " tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n", 96 | " tf.keras.layers.MaxPooling2D(2,2),\n", 97 | " tf.keras.layers.Conv2D(32, (3,3), 
activation='relu'),\n", 98 | "  tf.keras.layers.MaxPooling2D(2,2),\n", 99 | "  tf.keras.layers.Flatten(),\n", 100 | "  tf.keras.layers.Dense(512, activation='relu'),\n", 101 | "  tf.keras.layers.Dense(1, activation='sigmoid')\n", 102 | "])\n", 103 | "\n", 104 | "from tensorflow.keras.optimizers import RMSprop\n", 105 | "\n", 106 | "model.compile(loss='binary_crossentropy',\n", 107 | "       optimizer=RMSprop(learning_rate=0.001),\n", 108 | "       metrics=['accuracy'])" 109 | ], 110 | "execution_count": 0, 111 | "outputs": [] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "metadata": { 116 | "colab_type": "code", 117 | "id": "sSaPPUe_z_OU", 118 | "colab": {} 119 | }, 120 | "source": [ 121 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 122 | "\n", 123 | "train_datagen = ImageDataGenerator(rescale=1/255)\n", 124 | "\n", 125 | "train_generator = train_datagen.flow_from_directory(\n", 126 | "    \"/tmp/h-or-s\", \n", 127 | "    target_size=(150, 150), \n", 128 | "    batch_size=10,\n", 129 | "    class_mode='binary')\n", 130 | "\n", 131 | "# Expected output: 'Found 80 images belonging to 2 classes'" 132 | ], 133 | "execution_count": 0, 134 | "outputs": [] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "metadata": { 139 | "colab_type": "code", 140 | "id": "0imravDn0Ajz", 141 | "colab": {} 142 | }, 143 | "source": [ 144 | "history = model.fit(\n", 145 | "   train_generator,\n", 146 | "   steps_per_epoch=8, \n", 147 | "   epochs=15,\n", 148 | "   verbose=1,\n", 149 | "   callbacks=[callbacks])" 150 | ], 151 | "execution_count": 0, 152 | "outputs": [] 153 | } 154 | ] 155 | } -------------------------------------------------------------------------------- /1. 
Introduction to TensorFlow/Exercises/Exercise 4 - Handling Complex Images/Exercise4-Question.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"coursera":{"course_slug":"introduction-tensorflow","graded_item_id":"1kAlw","launcher_item_id":"PNLYD"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"},"colab":{"name":"Exercise4-Question.ipynb","provenance":[],"collapsed_sections":[]}},"cells":[{"cell_type":"markdown","metadata":{"colab_type":"text","id":"UncprnB0ymAE"},"source":["Below is code with a link to a happy-or-sad dataset that contains 80 images: 40 happy and 40 sad. \n","Create a convolutional neural network that trains to 100% accuracy on these images, and that cancels training once training accuracy exceeds 0.999.\n","\n","Hint -- it will work best with 3 convolutional layers."]},{"cell_type":"code","metadata":{"id":"sJVxKI7QqcJu","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":122},"outputId":"b601f326-23a3-4df5-aa49-a26da64057f1","executionInfo":{"status":"ok","timestamp":1588597059785,"user_tz":-330,"elapsed":22452,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["# Mount Google Drive directory to access happy-or-sad.zip from tmp2 in Google Drive.\n","from google.colab import drive\n","drive.mount('/content/drive')"],"execution_count":1,"outputs":[{"output_type":"stream","text":["Go to this URL in a browser: 
https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n","\n","Enter your authorization code:\n","··········\n","Mounted at /content/drive\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"id":"AyAqwiKhozzg","colab_type":"code","colab":{}},"source":["import tensorflow as tf\n","import os\n","import zipfile\n","from os import path, getcwd, chdir\n","\n","# DO NOT CHANGE THE LINE BELOW. If you are developing in a local\n","# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook\n","# and place it inside a local folder and edit the path to that location\n","\n","\n","# Path for Coursera Jupyter autograder:\n","# path = f\"{getcwd()}/../tmp2/happy-or-sad.zip\"\n","\n","# Path to happy-or-sad.zip in Google Drive:\n","path = f\"{getcwd()}/drive/My Drive/TensorFlow In Practice/1. 
Introduction to TensorFlow/Exercises/tmp2/happy-or-sad.zip\"\n","\n","zip_ref = zipfile.ZipFile(path, 'r')\n","zip_ref.extractall(\"/tmp/h-or-s\")\n","zip_ref.close()"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"GPRs_pXTozzu","colab_type":"code","colab":{}},"source":["# GRADED FUNCTION: train_happy_sad_model\n","def train_happy_sad_model():\n"," # Please write your code only where you are indicated.\n"," # please do not remove # model fitting inline comments.\n","\n"," DESIRED_ACCURACY = 0.999\n","\n"," class myCallback(tf.keras.callbacks.Callback):\n"," def on_epoch_end(self, epoch, logs={}):\n"," # Change 'accuracy' to 'acc' for the Coursera autograder!\n"," if(logs.get('accuracy')>DESIRED_ACCURACY): \n"," print(\"\\nReached 99.9% accuracy so cancelling training!\")\n"," self.model.stop_training = True\n","\n"," callbacks = myCallback()\n"," \n"," # This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.\n"," model = tf.keras.models.Sequential([\n"," # Note the input shape is the desired size of the image 150x150 with 3 bytes color\n"," # This is the first convolution\n"," tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n"," tf.keras.layers.MaxPooling2D(2, 2),\n"," # The second convolution\n"," tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n"," tf.keras.layers.MaxPooling2D(2,2),\n"," # The third convolution\n"," tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n"," tf.keras.layers.MaxPooling2D(2,2),\n"," \n"," # Flatten the results to feed into a DNN\n"," tf.keras.layers.Flatten(),\n"," # 512 neuron hidden layer\n"," tf.keras.layers.Dense(512, activation='relu'),\n"," # Only 1 output neuron. 
It will contain a value from 0-1 where 0 is for one class ('happy') and 1 for the other ('sad')\n","        tf.keras.layers.Dense(1, activation='sigmoid')\n","    ])\n","\n","    from tensorflow.keras.optimizers import RMSprop\n","\n","    model.compile(loss='binary_crossentropy',\n","                  optimizer=RMSprop(learning_rate=0.001),\n","                  metrics=['accuracy'])\n","        \n","\n","    # This code block should create an instance of an ImageDataGenerator called train_datagen \n","    # And a train_generator by calling train_datagen.flow_from_directory\n","\n","    from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","\n","    train_datagen = ImageDataGenerator(rescale=1/255)\n","\n","    # Please use a target_size of 150 X 150.\n","    train_generator = train_datagen.flow_from_directory(\n","        '/tmp/h-or-s',\n","        target_size=(150, 150),\n","        batch_size=10,\n","        class_mode='binary')\n","    # Expected output: 'Found 80 images belonging to 2 classes'\n","\n","    # This code block should call model.fit and train for\n","    # a number of epochs.\n","    # model fitting\n","    history = model.fit(\n","        train_generator,\n","        steps_per_epoch=8,\n","        epochs=15,\n","        verbose=1,\n","        callbacks=[callbacks])\n","    # model fitting\n","    # Change 'accuracy' to 'acc' for the Coursera autograder!\n","    return history.history['accuracy'][-1]"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"_Bm8lN2_ozz6","colab_type":"code","colab":{"base_uri":"https://localhost:8080/","height":425},"outputId":"829fb783-ff30-4aa4-983b-37a8eef7f4da","executionInfo":{"status":"ok","timestamp":1588597497240,"user_tz":-330,"elapsed":24043,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["# The expected output: \"Reached 99.9% accuracy so cancelling training!\"\n","train_happy_sad_model()"],"execution_count":7,"outputs":[{"output_type":"stream","text":["Found 80 images belonging 
to 2 classes.\n","Epoch 1/15\n","8/8 [==============================] - 2s 242ms/step - loss: 2.2697 - accuracy: 0.5125\n","Epoch 2/15\n","8/8 [==============================] - 2s 240ms/step - loss: 0.4238 - accuracy: 0.8250\n","Epoch 3/15\n","8/8 [==============================] - 2s 243ms/step - loss: 0.1871 - accuracy: 0.9250\n","Epoch 4/15\n","8/8 [==============================] - 2s 247ms/step - loss: 0.1873 - accuracy: 0.8875\n","Epoch 5/15\n","8/8 [==============================] - 2s 243ms/step - loss: 0.1219 - accuracy: 0.9375\n","Epoch 6/15\n","8/8 [==============================] - 2s 244ms/step - loss: 0.0855 - accuracy: 0.9750\n","Epoch 7/15\n","8/8 [==============================] - 2s 243ms/step - loss: 0.0561 - accuracy: 0.9625\n","Epoch 8/15\n","8/8 [==============================] - 2s 242ms/step - loss: 0.1126 - accuracy: 0.9250\n","Epoch 9/15\n","8/8 [==============================] - 2s 247ms/step - loss: 0.1954 - accuracy: 0.9000\n","Epoch 10/15\n","8/8 [==============================] - ETA: 0s - loss: 0.0163 - accuracy: 1.0000\n","Reached 99.9% accuracy so cancelling training!\n","8/8 [==============================] - 2s 241ms/step - loss: 0.0163 - accuracy: 1.0000\n"],"name":"stdout"},{"output_type":"execute_result","data":{"text/plain":["1.0"]},"metadata":{"tags":[]},"execution_count":7}]}]} -------------------------------------------------------------------------------- /1. Introduction to TensorFlow/Exercises/tmp2/happy-or-sad.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/1. Introduction to TensorFlow/Exercises/tmp2/happy-or-sad.zip -------------------------------------------------------------------------------- /1. 
Introduction to TensorFlow/Exercises/tmp2/mnist.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/1. Introduction to TensorFlow/Exercises/tmp2/mnist.npz -------------------------------------------------------------------------------- /1. Introduction to TensorFlow/Hello_World_Layers.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Hello_World_Layers.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"markdown","metadata":{"colab_type":"text","id":"view-in-github"},"source":["\"Open"]},{"cell_type":"markdown","metadata":{"colab_type":"text","id":"rX8mhOLljYeM"},"source":["##### Copyright 2019 The TensorFlow Authors."]},{"cell_type":"code","metadata":{"cellView":"form","colab_type":"code","id":"BZSlp3DAjdYf","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"szbFLdZwPRhK","colab":{}},"source":["import tensorflow as tf\n","import numpy as np\n","from tensorflow import keras\n","\n","layer_0 = keras.layers.Dense(units=1, input_shape=[1])\n","model = 
tf.keras.Sequential([layer_0])\n","\n","model.compile(optimizer='sgd', loss='mean_squared_error')\n","\n","xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)\n","ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)\n","\n","model.fit(xs, ys, epochs=500)\n","\n","print(model.predict([10.0]))\n","\n","print(\"Layer variables look like this: {}\".format(layer_0.get_weights()))\n"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /1. Introduction to TensorFlow/horse_test_image.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/1. Introduction to TensorFlow/horse_test_image.jpg -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/Exercises/Exercise 5 - Real World Scenarios/Exercise 5 - Answer.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Exercise 5 - Answer.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"code","metadata":{"id":"zX4Kg8DUTKWO","colab_type":"code","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under 
the License."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"dn-6c02VmqiN","colab":{}},"source":["import os\n","import zipfile\n","import random\n","import tensorflow as tf\n","from tensorflow.keras.optimizers import RMSprop\n","from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","from shutil import copyfile"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"3sd9dQWa23aj","colab":{}},"source":["# If the URL doesn't work, visit https://www.microsoft.com/en-us/download/confirmation.aspx?id=54765\n","# And right click on the 'Download Manually' link to get a new URL to the dataset\n","\n","# Note: This is a very large dataset and will take time to download\n","\n","!wget --no-check-certificate \\\n"," \"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\" \\\n"," -O \"/tmp/cats-and-dogs.zip\"\n","\n","local_zip = '/tmp/cats-and-dogs.zip'\n","zip_ref = zipfile.ZipFile(local_zip, 'r')\n","zip_ref.extractall('/tmp')\n","zip_ref.close()\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"DM851ZmN28J3","colab":{}},"source":["print(len(os.listdir('/tmp/PetImages/Cat/')))\n","print(len(os.listdir('/tmp/PetImages/Dog/')))\n","\n","# Expected Output:\n","# 12501\n","# 12501"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"F-QkLjxpmyK2","colab":{}},"source":["try:\n"," os.mkdir('/tmp/cats-v-dogs')\n"," os.mkdir('/tmp/cats-v-dogs/training')\n"," os.mkdir('/tmp/cats-v-dogs/testing')\n"," os.mkdir('/tmp/cats-v-dogs/training/cats')\n"," os.mkdir('/tmp/cats-v-dogs/training/dogs')\n"," os.mkdir('/tmp/cats-v-dogs/testing/cats')\n"," os.mkdir('/tmp/cats-v-dogs/testing/dogs')\n","except OSError:\n"," 
pass"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"zvSODo0f9LaU","colab":{}},"source":["def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n"," files = []\n"," for filename in os.listdir(SOURCE):\n"," file = SOURCE + filename\n"," if os.path.getsize(file) > 0:\n"," files.append(filename)\n"," else:\n"," print(filename + \" is zero length, so ignoring.\")\n","\n"," training_length = int(len(files) * SPLIT_SIZE)\n"," testing_length = int(len(files) - training_length)\n"," shuffled_set = random.sample(files, len(files))\n"," training_set = shuffled_set[0:training_length]\n"," testing_set = shuffled_set[-testing_length:]\n","\n"," for filename in training_set:\n"," this_file = SOURCE + filename\n"," destination = TRAINING + filename\n"," copyfile(this_file, destination)\n","\n"," for filename in testing_set:\n"," this_file = SOURCE + filename\n"," destination = TESTING + filename\n"," copyfile(this_file, destination)\n","\n","\n","CAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\n","TRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\n","TESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\n","DOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\n","TRAINING_DOGS_DIR = \"/tmp/cats-v-dogs/training/dogs/\"\n","TESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n","\n","split_size = .9\n","split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\n","split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)\n","\n","# Expected output\n","# 666.jpg is zero length, so ignoring\n","# 11702.jpg is zero length, so ignoring"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"hwHXFhVG3786","colab":{}},"source":["print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n","\n","# 
Expected output:\n","# 11250\n","# 11250\n","# 1250\n","# 1250"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"-BQrav4anTmj","colab":{}},"source":["model = tf.keras.models.Sequential([\n","    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),\n","    tf.keras.layers.MaxPooling2D(2, 2),\n","    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n","    tf.keras.layers.MaxPooling2D(2, 2),\n","    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),\n","    tf.keras.layers.MaxPooling2D(2, 2),\n","    tf.keras.layers.Flatten(),\n","    tf.keras.layers.Dense(512, activation='relu'),\n","    tf.keras.layers.Dense(1, activation='sigmoid')\n","])\n","\n","model.compile(optimizer=RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy'])\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"fQrZfVgz4j2g","colab":{}},"source":["TRAINING_DIR = \"/tmp/cats-v-dogs/training/\"\n","train_datagen = ImageDataGenerator(rescale=1.0/255.)\n","train_generator = train_datagen.flow_from_directory(TRAINING_DIR,\n","                                                    batch_size=100,\n","                                                    class_mode='binary',\n","                                                    target_size=(150, 150))\n","\n","VALIDATION_DIR = \"/tmp/cats-v-dogs/testing/\"\n","validation_datagen = ImageDataGenerator(rescale=1.0/255.)\n","validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,\n","                                                              batch_size=100,\n","                                                              class_mode='binary',\n","                                                              target_size=(150, 150))\n","\n","# Expected Output:\n","# Found 22498 images belonging to 2 classes.\n","# Found 2500 images belonging to 2 classes."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"5qE1G6JB4fMn","colab":{}},"source":["# Note that this may take some time.\n","history = model.fit(train_generator,\n","                    epochs=50,\n","                    verbose=1,\n","                    
validation_data=validation_generator)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"MWZrJN4-65RC","colab":{}},"source":["%matplotlib inline\n","\n","import matplotlib.image as mpimg\n","import matplotlib.pyplot as plt\n","\n","#-----------------------------------------------------------\n","# Retrieve a list of list results on training and test data\n","# sets for each training epoch\n","#-----------------------------------------------------------\n","acc=history.history['accuracy']\n","val_acc=history.history['val_accuracy']\n","loss=history.history['loss']\n","val_loss=history.history['val_loss']\n","\n","epochs=range(len(acc)) # Get number of epochs\n","\n","#------------------------------------------------\n","# Plot training and validation accuracy per epoch\n","#------------------------------------------------\n","plt.plot(epochs, acc, 'r', \"Training Accuracy\")\n","plt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\n","plt.title('Training and validation accuracy')\n","plt.figure()\n","\n","#------------------------------------------------\n","# Plot training and validation loss per epoch\n","#------------------------------------------------\n","plt.plot(epochs, loss, 'r', \"Training Loss\")\n","plt.plot(epochs, val_loss, 'b', \"Validation Loss\")\n","plt.figure()\n","\n","\n","# Desired output. Charts with training and validation metrics. No crash :)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"LqL6FYUrtXpf","colab":{}},"source":["# Here's a codeblock just for fun. 
You should be able to upload an image here \n","# and have it classified without crashing\n","import numpy as np\n","from google.colab import files\n","from keras.preprocessing import image\n","\n","uploaded = files.upload()\n","\n","for fn in uploaded.keys():\n"," \n"," # predicting images\n"," path = '/content/' + fn\n"," img = image.load_img(path, target_size=(150, 150))\n"," x = image.img_to_array(img)\n"," x = np.expand_dims(x, axis=0)\n","\n"," images = np.vstack([x])\n"," classes = model.predict(images, batch_size=10)\n"," print(classes[0])\n"," if classes[0]>0.5:\n"," print(fn + \" is a dog\")\n"," else:\n"," print(fn + \" is a cat\")"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/Exercises/Exercise 5 - Real World Scenarios/Exercise 5 - Question.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Exercise 5 - Question.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"code","metadata":{"id":"zX4Kg8DUTKWO","colab_type":"code","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the 
License."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"dn-6c02VmqiN","colab":{}},"source":["# In this exercise you will train a CNN on the FULL Cats-v-dogs dataset\n","# This will require you doing a lot of data preprocessing because\n","# the dataset isn't split into training and validation for you\n","# This code block has all the required inputs\n","import os\n","import zipfile\n","import random\n","import tensorflow as tf\n","from tensorflow.keras.optimizers import RMSprop\n","from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","from shutil import copyfile"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"3sd9dQWa23aj","colab":{}},"source":["# This code block downloads the full Cats-v-Dogs dataset and stores it as \n","# cats-and-dogs.zip. It then unzips it to /tmp\n","# which will create a tmp/PetImages directory containing subdirectories\n","# called 'Cat' and 'Dog' (that's how the original researchers structured it)\n","# If the URL doesn't work,\n","# visit https://www.microsoft.com/en-us/download/confirmation.aspx?id=54765\n","# And right click on the 'Download Manually' link to get a new URL\n","\n","!wget --no-check-certificate \\\n","    \"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\" \\\n","    -O \"/tmp/cats-and-dogs.zip\"\n","\n","local_zip = '/tmp/cats-and-dogs.zip'\n","zip_ref = zipfile.ZipFile(local_zip, 'r')\n","zip_ref.extractall('/tmp')\n","zip_ref.close()\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"gi3yD62a6X3S","colab":{}},"source":["print(len(os.listdir('/tmp/PetImages/Cat/')))\n","print(len(os.listdir('/tmp/PetImages/Dog/')))\n","\n","# Expected Output:\n","# 12501\n","# 12501"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"F-QkLjxpmyK2","colab":{}},"source":["# Use os.mkdir to create your directories\n","# You will need a directory for cats-v-dogs, and subdirectories for training\n","# and testing. 
These in turn will need subdirectories for 'cats' and 'dogs'\n","try:\n","    os.mkdir('/tmp/cats-v-dogs')\n","    os.mkdir('/tmp/cats-v-dogs/training')\n","    os.mkdir('/tmp/cats-v-dogs/testing')\n","    os.mkdir('/tmp/cats-v-dogs/training/cats')\n","    os.mkdir('/tmp/cats-v-dogs/training/dogs')\n","    os.mkdir('/tmp/cats-v-dogs/testing/cats')\n","    os.mkdir('/tmp/cats-v-dogs/testing/dogs')\n","except OSError:\n","    pass"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"zvSODo0f9LaU","colab":{}},"source":["# Write a Python function called split_data which takes\n","# a SOURCE directory containing the files\n","# a TRAINING directory that a portion of the files will be copied to\n","# a TESTING directory that a portion of the files will be copied to\n","# a SPLIT SIZE to determine the portion\n","# The files should also be randomized, so that the training set is a random\n","# X% of the files, and the test set is the remaining files\n","# SO, for example, if SOURCE is PetImages/Cat, and SPLIT SIZE is .9\n","# Then 90% of the images in PetImages/Cat will be copied to the TRAINING dir\n","# and 10% of the images will be copied to the TESTING dir\n","# Also -- All images should be checked, and if they have a zero file length,\n","# they will not be copied over\n","#\n","# os.listdir(DIRECTORY) gives you a listing of the contents of that directory\n","# os.path.getsize(PATH) gives you the size of the file\n","# copyfile(source, destination) copies a file from source to destination\n","# random.sample(list, len(list)) shuffles a list\n","def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n","    files = []\n","    for filename in os.listdir(SOURCE):\n","        file = SOURCE + filename\n","        if os.path.getsize(file) > 0:\n","            files.append(filename)\n","        else:\n","            print(filename + \" is zero length, so ignoring.\")\n","\n","    training_length = int(len(files) * SPLIT_SIZE)\n","    testing_length = int(len(files) - training_length)\n","    shuffled_set = 
random.sample(files, len(files))\n","    training_set = shuffled_set[0:training_length]\n","    testing_set = shuffled_set[-testing_length:]\n","\n","    for filename in training_set:\n","        this_file = SOURCE + filename\n","        destination = TRAINING + filename\n","        copyfile(this_file, destination)\n","\n","    for filename in testing_set:\n","        this_file = SOURCE + filename\n","        destination = TESTING + filename\n","        copyfile(this_file, destination)\n","\n","\n","CAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\n","TRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\n","TESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\n","DOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\n","TRAINING_DOGS_DIR = \"/tmp/cats-v-dogs/training/dogs/\"\n","TESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n","\n","split_size = .9\n","split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\n","split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)\n","\n","# Expected output\n","# 666.jpg is zero length, so ignoring\n","# 11702.jpg is zero length, so ignoring"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"luthalB76ufC","colab":{}},"source":["print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n","\n","# Expected output:\n","# 11250\n","# 11250\n","# 1250\n","# 1250"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"-BQrav4anTmj","colab":{}},"source":["# DEFINE A KERAS MODEL TO CLASSIFY CATS V DOGS\n","# USE AT LEAST 3 CONVOLUTION LAYERS\n","model = tf.keras.models.Sequential([\n","    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n","    tf.keras.layers.MaxPooling2D(2,2),\n","    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n","    
tf.keras.layers.MaxPooling2D(2,2), \n","    tf.keras.layers.Conv2D(64, (3,3), activation='relu'), \n","    tf.keras.layers.MaxPooling2D(2,2),\n","    # Flatten the results to feed into a DNN\n","    tf.keras.layers.Flatten(), \n","    # 512 neuron hidden layer\n","    tf.keras.layers.Dense(512, activation='relu'), \n","    # Only 1 output neuron. It will contain a value from 0-1 where 0 is for one class ('cats') and 1 for the other ('dogs')\n","    tf.keras.layers.Dense(1, activation='sigmoid')\n","])\n","\n","model.compile(optimizer=RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy'])"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"mlNjoJ5D61N6","colab":{}},"source":["TRAINING_DIR = \"/tmp/cats-v-dogs/training\"\n","train_datagen = ImageDataGenerator( rescale = 1.0/255. )\n","train_generator = train_datagen.flow_from_directory(TRAINING_DIR,\n","                                                    batch_size=20,\n","                                                    class_mode='binary',\n","                                                    target_size=(150, 150))\n","\n","VALIDATION_DIR = \"/tmp/cats-v-dogs/testing\"\n","validation_datagen = ImageDataGenerator( rescale = 1.0/255. )\n","validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,\n","                                                              batch_size=20,\n","                                                              class_mode = 'binary',\n","                                                              target_size = (150, 150))\n","\n","\n","\n","# Expected Output:\n","# Found 22498 images belonging to 2 classes.\n","# Found 2500 images belonging to 2 classes."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"KyS4n53w7DxC","colab":{}},"source":["history = model.fit(train_generator,\n","                    epochs=15,\n","                    verbose=1,\n","                    validation_data=validation_generator)\n","\n","# The expectation here is that the model will train, and that accuracy will be > 95% on both training and validation\n","# i.e. 
acc:A1 and val_acc:A2 will be visible, and both A1 and A2 will be > .9"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"MWZrJN4-65RC","colab":{}},"source":["# PLOT LOSS AND ACCURACY\n","%matplotlib inline\n","\n","import matplotlib.image as mpimg\n","import matplotlib.pyplot as plt\n","\n","#-----------------------------------------------------------\n","# Retrieve a list of list results on training and test data\n","# sets for each training epoch\n","#-----------------------------------------------------------\n","acc=history.history['accuracy']\n","val_acc=history.history['val_accuracy']\n","loss=history.history['loss']\n","val_loss=history.history['val_loss']\n","\n","epochs=range(len(acc)) # Get number of epochs\n","\n","#------------------------------------------------\n","# Plot training and validation accuracy per epoch\n","#------------------------------------------------\n","plt.plot(epochs, acc, 'r', \"Training Accuracy\")\n","plt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\n","plt.title('Training and validation accuracy')\n","plt.figure()\n","\n","#------------------------------------------------\n","# Plot training and validation loss per epoch\n","#------------------------------------------------\n","plt.plot(epochs, loss, 'r', \"Training Loss\")\n","plt.plot(epochs, val_loss, 'b', \"Validation Loss\")\n","\n","\n","plt.title('Training and validation loss')\n","\n","# Desired output. Charts with training and validation metrics. No crash :)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"LqL6FYUrtXpf","colab":{}},"source":["# Here's a codeblock just for fun. 
You should be able to upload an image here \n","# and have it classified without crashing\n","\n","import numpy as np\n","from google.colab import files\n","from keras.preprocessing import image\n","\n","uploaded = files.upload()\n","\n","for fn in uploaded.keys():\n"," \n"," # predicting images\n"," path = '/content/' + fn\n"," img = image.load_img(path, target_size=(# YOUR CODE HERE))\n"," x = image.img_to_array(img)\n"," x = np.expand_dims(x, axis=0)\n","\n"," images = np.vstack([x])\n"," classes = model.predict(images, batch_size=10)\n"," print(classes[0])\n"," if classes[0]>0.5:\n"," print(fn + \" is a dog\")\n"," else:\n"," print(fn + \" is a cat\")"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/Exercises/Exercise 5 - Real World Scenarios/Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb","provenance":[],"collapsed_sections":[]},"coursera":{"course_slug":"convolutional-neural-networks-tensorflow","graded_item_id":"laIUG","launcher_item_id":"jjQWM"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"}},"cells":[{"cell_type":"code","metadata":{"colab_type":"code","id":"dn-6c02VmqiN","colab":{}},"source":["# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n","# ATTENTION: Please do not add or remove any cells in the exercise. 
The grader will check specific cells based on the cell position.\n","# ATTENTION: Please use the provided epoch values when training.\n","\n","# In this exercise you will train a CNN on the FULL Cats-v-dogs dataset\n","# This will require you to do a lot of data preprocessing because\n","# the dataset isn't split into training and validation for you\n","# This code block has all the required inputs\n","import os\n","import zipfile\n","import random\n","import tensorflow as tf\n","import shutil\n","from tensorflow.keras.optimizers import RMSprop\n","from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","from shutil import copyfile\n","from os import getcwd"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"3sd9dQWa23aj","colab":{}},"source":["path_cats_and_dogs = f\"{getcwd()}/../tmp2/cats-and-dogs.zip\"\n","shutil.rmtree('/tmp')\n","\n","local_zip = path_cats_and_dogs\n","zip_ref = zipfile.ZipFile(local_zip, 'r')\n","zip_ref.extractall('/tmp')\n","zip_ref.close()\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"gi3yD62a6X3S","colab":{}},"source":["print(len(os.listdir('/tmp/PetImages/Cat/')))\n","print(len(os.listdir('/tmp/PetImages/Dog/')))\n","\n","# Expected Output:\n","# 1500\n","# 1500"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"F-QkLjxpmyK2","colab":{}},"source":["# Use os.mkdir to create your directories\n","# You will need a directory for cats-v-dogs, and subdirectories for training\n","# and testing. 
These in turn will need subdirectories for 'cats' and 'dogs'\n","try:\n"," os.mkdir('/tmp/cats-v-dogs')\n"," os.mkdir('/tmp/cats-v-dogs/training')\n"," os.mkdir('/tmp/cats-v-dogs/testing')\n"," os.mkdir('/tmp/cats-v-dogs/training/cats')\n"," os.mkdir('/tmp/cats-v-dogs/training/dogs')\n"," os.mkdir('/tmp/cats-v-dogs/testing/cats')\n"," os.mkdir('/tmp/cats-v-dogs/testing/dogs')\n","except OSError:\n"," pass"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"zvSODo0f9LaU","colab":{}},"source":["# Write a python function called split_data which takes\n","# a SOURCE directory containing the files\n","# a TRAINING directory that a portion of the files will be copied to\n","# a TESTING directory that a portion of the files will be copied to\n","# a SPLIT SIZE to determine the portion\n","# The files should also be randomized, so that the training set is a random\n","# X% of the files, and the test set is the remaining files\n","# SO, for example, if SOURCE is PetImages/Cat, and SPLIT SIZE is .9\n","# Then 90% of the images in PetImages/Cat will be copied to the TRAINING dir\n","# and 10% of the images will be copied to the TESTING dir\n","# Also -- All images should be checked, and if they have a zero file length,\n","# they will not be copied over\n","#\n","# os.listdir(DIRECTORY) gives you a listing of the contents of that directory\n","# os.path.getsize(PATH) gives you the size of the file\n","# copyfile(source, destination) copies a file from source to destination\n","# random.sample(list, len(list)) shuffles a list\n","def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n"," files = []\n"," for filename in os.listdir(SOURCE):\n"," file = SOURCE + filename\n"," if os.path.getsize(file) > 0:\n"," files.append(filename)\n"," else:\n"," print(filename + \" is zero length, so ignoring.\")\n","\n"," training_length = int(len(files) * SPLIT_SIZE)\n"," testing_length = int(len(files) - training_length)\n"," shuffled_set = 
random.sample(files, len(files))\n"," training_set = shuffled_set[0:training_length]\n"," testing_set = shuffled_set[training_length:]\n","\n"," for filename in training_set:\n"," this_file = SOURCE + filename\n"," destination = TRAINING + filename\n"," copyfile(this_file, destination)\n","\n"," for filename in testing_set:\n"," this_file = SOURCE + filename\n"," destination = TESTING + filename\n"," copyfile(this_file, destination)\n","\n","\n","CAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\n","TRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\n","TESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\n","DOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\n","TRAINING_DOGS_DIR = \"/tmp/cats-v-dogs/training/dogs/\"\n","TESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n","\n","split_size = .9\n","split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\n","split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"luthalB76ufC","colab":{}},"source":["print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n","\n","# Expected output:\n","# 1350\n","# 1350\n","# 150\n","# 150"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"-BQrav4anTmj","colab":{}},"source":["# DEFINE A KERAS MODEL TO CLASSIFY CATS V DOGS\n","# USE AT LEAST 3 CONVOLUTION LAYERS\n","model = tf.keras.models.Sequential([\n"," tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n"," tf.keras.layers.MaxPooling2D(2,2),\n"," tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n"," tf.keras.layers.MaxPooling2D(2,2), \n"," tf.keras.layers.Conv2D(64, (3,3), activation='relu'), \n"," tf.keras.layers.MaxPooling2D(2,2),\n"," # Flatten 
the results to feed into a DNN\n"," tf.keras.layers.Flatten(), \n"," # 512 neuron hidden layer\n"," tf.keras.layers.Dense(512, activation='relu'), \n"," # Only 1 output neuron. It will contain a value from 0-1, where values near 0 indicate one class ('cats') and values near 1 the other ('dogs')\n"," tf.keras.layers.Dense(1, activation='sigmoid')\n","])\n","\n","model.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['acc'])"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Ecf6MQD9n4Rd","colab_type":"text"},"source":["# NOTE:\n","\n","In the cell below you **MUST** use a batch size of 10 (`batch_size=10`) for the `train_generator` and the `validation_generator`. Using a batch size greater than 10 will exceed memory limits on the Coursera platform."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"mlNjoJ5D61N6","colab":{}},"source":["TRAINING_DIR = \"/tmp/cats-v-dogs/training\"\n","train_datagen = ImageDataGenerator( rescale = 1.0/255. )\n","train_generator = train_datagen.flow_from_directory(TRAINING_DIR,\n"," batch_size=10,\n"," class_mode='binary',\n"," target_size=(150, 150))\n","\n","VALIDATION_DIR = \"/tmp/cats-v-dogs/testing\"\n","validation_datagen = ImageDataGenerator( rescale = 1.0/255. 
)\n","validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,\n"," batch_size=10,\n"," class_mode = 'binary',\n"," target_size = (150, 150))\n","\n","\n","\n","# Expected Output:\n","# Found 2700 images belonging to 2 classes.\n","# Found 300 images belonging to 2 classes."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"KyS4n53w7DxC","colab":{}},"source":["history = model.fit_generator(train_generator,\n"," epochs=2,\n"," verbose=1,\n"," validation_data=validation_generator)\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"MWZrJN4-65RC","colab":{}},"source":["# PLOT LOSS AND ACCURACY\n","%matplotlib inline\n","\n","import matplotlib.image as mpimg\n","import matplotlib.pyplot as plt\n","\n","#-----------------------------------------------------------\n","# Retrieve a list of list results on training and test data\n","# sets for each training epoch\n","#-----------------------------------------------------------\n","acc=history.history['acc']\n","val_acc=history.history['val_acc']\n","loss=history.history['loss']\n","val_loss=history.history['val_loss']\n","\n","epochs=range(len(acc)) # Get number of epochs\n","\n","#------------------------------------------------\n","# Plot training and validation accuracy per epoch\n","#------------------------------------------------\n","plt.plot(epochs, acc, 'r', \"Training Accuracy\")\n","plt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\n","plt.title('Training and validation accuracy')\n","plt.figure()\n","\n","#------------------------------------------------\n","# Plot training and validation loss per epoch\n","#------------------------------------------------\n","plt.plot(epochs, loss, 'r', \"Training Loss\")\n","plt.plot(epochs, val_loss, 'b', \"Validation Loss\")\n","\n","\n","plt.title('Training and validation loss')\n","\n","# Desired output. Charts with training and validation metrics. 
No crash :)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"huYEIeEFn4Ru","colab_type":"text"},"source":["# Submission Instructions"]},{"cell_type":"code","metadata":{"id":"L8gmxMJGn4Rv","colab_type":"code","colab":{}},"source":["# Now click the 'Submit Assignment' button above."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"7dpNauLxoNNU","colab_type":"text"},"source":["**Note:** The following cells are needed for the Coursera autograder to work! Don't delete them."]},{"cell_type":"code","metadata":{"id":"6KcW8h1bn4R0","colab_type":"code","colab":{}},"source":["%%javascript\n","\n","IPython.notebook.save_checkpoint();"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"nDI7dZOhn4R5","colab_type":"code","colab":{}},"source":["%%javascript\n","IPython.notebook.session.delete();\n","window.onbeforeunload = null\n","setTimeout(function() { window.close(); }, 1000);"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/Exercises/Exercise 6 - Cats v Dogs with Augmentation/Exercise 6 - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 
22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "dn-6c02VmqiN" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import os\n", 37 | "import zipfile\n", 38 | "import random\n", 39 | "import tensorflow as tf\n", 40 | "from tensorflow.keras.optimizers import RMSprop\n", 41 | "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 42 | "from shutil import copyfile" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": 0, 48 | "metadata": { 49 | "colab": {}, 50 | "colab_type": "code", 51 | "id": "3sd9dQWa23aj" 52 | }, 53 | "outputs": [], 54 | "source": [ 55 | "# If the URL doesn't work, visit https://www.microsoft.com/en-us/download/confirmation.aspx?id=54765\n", 56 | "# And right click on the 'Download Manually' link to get a new URL to the dataset\n", 57 | "\n", 58 | "# Note: This is a very large dataset and will take time to download\n", 59 | "\n", 60 | "!wget --no-check-certificate \\\n", 61 | " \"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\" \\\n", 62 | " -O \"/tmp/cats-and-dogs.zip\"\n", 63 | "\n", 64 | "local_zip = '/tmp/cats-and-dogs.zip'\n", 65 | "zip_ref = zipfile.ZipFile(local_zip, 'r')\n", 66 | "zip_ref.extractall('/tmp')\n", 67 | "zip_ref.close()\n" 68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": 0, 73 | "metadata": { 74 | "colab": {}, 75 | "colab_type": "code", 76 | "id": "DM851ZmN28J3" 77 | }, 78 | "outputs": [], 79 | "source": [ 80 | "print(len(os.listdir('/tmp/PetImages/Cat/')))\n", 81 | "print(len(os.listdir('/tmp/PetImages/Dog/')))\n", 82 | "\n", 83 | "# Expected Output:\n", 84 | "# 12501\n", 85 | "# 12501" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": 0, 91 | "metadata": { 92 | "colab": {}, 93 | "colab_type": "code", 94 | "id": "F-QkLjxpmyK2" 95 | }, 96 
| "outputs": [], 97 | "source": [ 98 | "try:\n", 99 | " os.mkdir('/tmp/cats-v-dogs')\n", 100 | " os.mkdir('/tmp/cats-v-dogs/training')\n", 101 | " os.mkdir('/tmp/cats-v-dogs/testing')\n", 102 | " os.mkdir('/tmp/cats-v-dogs/training/cats')\n", 103 | " os.mkdir('/tmp/cats-v-dogs/training/dogs')\n", 104 | " os.mkdir('/tmp/cats-v-dogs/testing/cats')\n", 105 | " os.mkdir('/tmp/cats-v-dogs/testing/dogs')\n", 106 | "except OSError:\n", 107 | " pass" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": 0, 113 | "metadata": { 114 | "colab": {}, 115 | "colab_type": "code", 116 | "id": "zvSODo0f9LaU" 117 | }, 118 | "outputs": [], 119 | "source": [ 120 | "def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n", 121 | " files = []\n", 122 | " for filename in os.listdir(SOURCE):\n", 123 | " file = SOURCE + filename\n", 124 | " if os.path.getsize(file) > 0:\n", 125 | " files.append(filename)\n", 126 | " else:\n", 127 | " print(filename + \" is zero length, so ignoring.\")\n", 128 | "\n", 129 | " training_length = int(len(files) * SPLIT_SIZE)\n", 130 | " testing_length = int(len(files) - training_length)\n", 131 | " shuffled_set = random.sample(files, len(files))\n", 132 | " training_set = shuffled_set[0:training_length]\n", 133 | " testing_set = shuffled_set[training_length:]\n", 134 | "\n", 135 | " for filename in training_set:\n", 136 | " this_file = SOURCE + filename\n", 137 | " destination = TRAINING + filename\n", 138 | " copyfile(this_file, destination)\n", 139 | "\n", 140 | " for filename in testing_set:\n", 141 | " this_file = SOURCE + filename\n", 142 | " destination = TESTING + filename\n", 143 | " copyfile(this_file, destination)\n", 144 | "\n", 145 | "\n", 146 | "CAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\n", 147 | "TRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\n", 148 | "TESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\n", 149 | "DOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\n", 150 | "TRAINING_DOGS_DIR = 
\"/tmp/cats-v-dogs/training/dogs/\"\n", 151 | "TESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n", 152 | "\n", 153 | "split_size = .9\n", 154 | "split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\n", 155 | "split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)\n", 156 | "\n", 157 | "# Expected output\n", 158 | "# 666.jpg is zero length, so ignoring\n", 159 | "# 11702.jpg is zero length, so ignoring" 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "execution_count": 0, 165 | "metadata": { 166 | "colab": {}, 167 | "colab_type": "code", 168 | "id": "hwHXFhVG3786" 169 | }, 170 | "outputs": [], 171 | "source": [ 172 | "print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\n", 173 | "print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\n", 174 | "print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\n", 175 | "print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n", 176 | "\n", 177 | "# Expected output:\n", 178 | "# 11250\n", 179 | "# 11250\n", 180 | "# 1250\n", 181 | "# 1250" 182 | ] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "execution_count": 0, 187 | "metadata": { 188 | "colab": {}, 189 | "colab_type": "code", 190 | "id": "-BQrav4anTmj" 191 | }, 192 | "outputs": [], 193 | "source": [ 194 | "model = tf.keras.models.Sequential([\n", 195 | " tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),\n", 196 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 197 | " tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n", 198 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 199 | " tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),\n", 200 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 201 | " tf.keras.layers.Flatten(),\n", 202 | " tf.keras.layers.Dense(512, activation='relu'),\n", 203 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 204 | "])\n", 205 | "\n", 206 | "model.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])\n" 207 | 
] 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": 0, 212 | "metadata": { 213 | "colab": {}, 214 | "colab_type": "code", 215 | "id": "fQrZfVgz4j2g" 216 | }, 217 | "outputs": [], 218 | "source": [ 219 | "TRAINING_DIR = \"/tmp/cats-v-dogs/training/\"\n", 220 | "# Experiment with your own parameters here to really try to drive it to 99.9% accuracy or better\n", 221 | "train_datagen = ImageDataGenerator(rescale=1./255,\n", 222 | " rotation_range=40,\n", 223 | " width_shift_range=0.2,\n", 224 | " height_shift_range=0.2,\n", 225 | " shear_range=0.2,\n", 226 | " zoom_range=0.2,\n", 227 | " horizontal_flip=True,\n", 228 | " fill_mode='nearest')\n", 229 | "train_generator = train_datagen.flow_from_directory(TRAINING_DIR,\n", 230 | " batch_size=100,\n", 231 | " class_mode='binary',\n", 232 | " target_size=(150, 150))\n", 233 | "\n", 234 | "VALIDATION_DIR = \"/tmp/cats-v-dogs/testing/\"\n", 235 | "# Experiment with your own parameters here to really try to drive it to 99.9% accuracy or better\n", 236 | "validation_datagen = ImageDataGenerator(rescale=1./255,\n", 237 | " rotation_range=40,\n", 238 | " width_shift_range=0.2,\n", 239 | " height_shift_range=0.2,\n", 240 | " shear_range=0.2,\n", 241 | " zoom_range=0.2,\n", 242 | " horizontal_flip=True,\n", 243 | " fill_mode='nearest')\n", 244 | "validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,\n", 245 | " batch_size=100,\n", 246 | " class_mode='binary',\n", 247 | " target_size=(150, 150))\n", 248 | "\n", 249 | "# Expected Output:\n", 250 | "# Found 22498 images belonging to 2 classes.\n", 251 | "# Found 2500 images belonging to 2 classes." 
252 | ] 253 | }, 254 | { 255 | "cell_type": "code", 256 | "execution_count": 0, 257 | "metadata": { 258 | "colab": {}, 259 | "colab_type": "code", 260 | "id": "5qE1G6JB4fMn" 261 | }, 262 | "outputs": [], 263 | "source": [ 264 | "# Note that this may take some time.\n", 265 | "history = model.fit(train_generator,\n", 266 | " epochs=15,\n", 267 | " verbose=1,\n", 268 | " validation_data=validation_generator)" 269 | ] 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": 0, 274 | "metadata": { 275 | "colab": {}, 276 | "colab_type": "code", 277 | "id": "MWZrJN4-65RC" 278 | }, 279 | "outputs": [], 280 | "source": [ 281 | "%matplotlib inline\n", 282 | "\n", 283 | "import matplotlib.image as mpimg\n", 284 | "import matplotlib.pyplot as plt\n", 285 | "\n", 286 | "#-----------------------------------------------------------\n", 287 | "# Retrieve a list of list results on training and test data\n", 288 | "# sets for each training epoch\n", 289 | "#-----------------------------------------------------------\n", 290 | "acc=history.history['accuracy']\n", 291 | "val_acc=history.history['val_accuracy']\n", 292 | "loss=history.history['loss']\n", 293 | "val_loss=history.history['val_loss']\n", 294 | "\n", 295 | "epochs=range(len(acc)) # Get number of epochs\n", 296 | "\n", 297 | "#------------------------------------------------\n", 298 | "# Plot training and validation accuracy per epoch\n", 299 | "#------------------------------------------------\n", 300 | "plt.plot(epochs, acc, 'r', \"Training Accuracy\")\n", 301 | "plt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\n", 302 | "plt.title('Training and validation accuracy')\n", 303 | "plt.figure()\n", 304 | "\n", 305 | "#------------------------------------------------\n", 306 | "# Plot training and validation loss per epoch\n", 307 | "#------------------------------------------------\n", 308 | "plt.plot(epochs, loss, 'r', \"Training Loss\")\n", 309 | "plt.plot(epochs, val_loss, 'b', \"Validation Loss\")\n", 
310 | "plt.figure()\n", 311 | "\n", 312 | "\n", 313 | "# Desired output. Charts with training and validation metrics. No crash :)" 314 | ] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": 0, 319 | "metadata": { 320 | "colab": {}, 321 | "colab_type": "code", 322 | "id": "LqL6FYUrtXpf" 323 | }, 324 | "outputs": [], 325 | "source": [ 326 | "# Here's a codeblock just for fun. You should be able to upload an image here \n", 327 | "# and have it classified without crashing\n", 328 | "import numpy as np\n", 329 | "from google.colab import files\n", 330 | "from keras.preprocessing import image\n", 331 | "\n", 332 | "uploaded = files.upload()\n", 333 | "\n", 334 | "for fn in uploaded.keys():\n", 335 | " \n", 336 | " # predicting images\n", 337 | " path = '/content/' + fn\n", 338 | " img = image.load_img(path, target_size=(150, 150))\n", 339 | " x = image.img_to_array(img)\n", 340 | " x = np.expand_dims(x, axis=0)\n", 341 | "\n", 342 | " images = np.vstack([x])\n", 343 | " classes = model.predict(images, batch_size=10)\n", 344 | " print(classes[0])\n", 345 | " if classes[0]>0.5:\n", 346 | " print(fn + \" is a dog\")\n", 347 | " else:\n", 348 | " print(fn + \" is a cat\")" 349 | ] 350 | } 351 | ], 352 | "metadata": { 353 | "accelerator": "GPU", 354 | "colab": { 355 | "name": "Exercise 6 - Answer.ipynb", 356 | "provenance": [], 357 | "toc_visible": true, 358 | "version": "0.3.2" 359 | }, 360 | "kernelspec": { 361 | "display_name": "Python 3", 362 | "name": "python3" 363 | } 364 | }, 365 | "nbformat": 4, 366 | "nbformat_minor": 0 367 | } 368 | -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/Exercises/Exercise 6 - Cats v Dogs with Augmentation/Exercise_2_Cats_vs_Dogs_using_augmentation_Question-FINAL.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Exercise_2_Cats_vs_Dogs_using_augmentation_Question-FINAL.ipynb","provenance":[],"collapsed_sections":[]},"coursera":{"course_slug":"convolutional-neural-networks-tensorflow","graded_item_id":"uAPOR","launcher_item_id":"e9lTb"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"}},"cells":[{"cell_type":"code","metadata":{"colab_type":"code","id":"dn-6c02VmqiN","colab":{}},"source":["# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n","# ATTENTION: Please do not add or remove any cells in the exercise. 
The grader will check specific cells based on the cell position.\n","# ATTENTION: Please use the provided epoch values when training.\n","\n","# In this exercise you will train a CNN on the FULL Cats-v-dogs dataset\n","# This will require you to do a lot of data preprocessing because\n","# the dataset isn't split into training and validation for you\n","# This code block has all the required inputs\n","import os\n","import zipfile\n","import random\n","import shutil\n","import tensorflow as tf\n","from tensorflow.keras.optimizers import RMSprop\n","from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","from shutil import copyfile\n","from os import getcwd"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"3sd9dQWa23aj","colab":{}},"source":["# This code block unzips the full Cats-v-Dogs dataset to /tmp\n","# which will create a tmp/PetImages directory containing subdirectories\n","# called 'Cat' and 'Dog' (that's how the original researchers structured it)\n","path_cats_and_dogs = f\"{getcwd()}/../tmp2/cats-and-dogs.zip\"\n","shutil.rmtree('/tmp')\n","\n","local_zip = path_cats_and_dogs\n","zip_ref = zipfile.ZipFile(local_zip, 'r')\n","zip_ref.extractall('/tmp')\n","zip_ref.close()"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"gi3yD62a6X3S","colab":{}},"source":["print(len(os.listdir('/tmp/PetImages/Cat/')))\n","print(len(os.listdir('/tmp/PetImages/Dog/')))\n","\n","# Expected Output:\n","# 1500\n","# 1500"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"F-QkLjxpmyK2","colab":{}},"source":["# Use os.mkdir to create your directories\n","# You will need a directory for cats-v-dogs, and subdirectories for training\n","# and testing. 
These in turn will need subdirectories for 'cats' and 'dogs'\n","try:\n"," os.mkdir('/tmp/cats-v-dogs')\n"," os.mkdir('/tmp/cats-v-dogs/training')\n"," os.mkdir('/tmp/cats-v-dogs/testing')\n"," os.mkdir('/tmp/cats-v-dogs/training/cats')\n"," os.mkdir('/tmp/cats-v-dogs/training/dogs')\n"," os.mkdir('/tmp/cats-v-dogs/testing/cats')\n"," os.mkdir('/tmp/cats-v-dogs/testing/dogs')\n","except OSError:\n"," pass"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"zvSODo0f9LaU","colab":{}},"source":["# Write a python function called split_data which takes\n","# a SOURCE directory containing the files\n","# a TRAINING directory that a portion of the files will be copied to\n","# a TESTING directory that a portion of the files will be copied to\n","# a SPLIT SIZE to determine the portion\n","# The files should also be randomized, so that the training set is a random\n","# X% of the files, and the test set is the remaining files\n","# SO, for example, if SOURCE is PetImages/Cat, and SPLIT SIZE is .9\n","# Then 90% of the images in PetImages/Cat will be copied to the TRAINING dir\n","# and 10% of the images will be copied to the TESTING dir\n","# Also -- All images should be checked, and if they have a zero file length,\n","# they will not be copied over\n","#\n","# os.listdir(DIRECTORY) gives you a listing of the contents of that directory\n","# os.path.getsize(PATH) gives you the size of the file\n","# copyfile(source, destination) copies a file from source to destination\n","# random.sample(list, len(list)) shuffles a list\n","def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n"," files = []\n"," for filename in os.listdir(SOURCE):\n"," file = SOURCE + filename\n"," if os.path.getsize(file) > 0:\n"," files.append(filename)\n"," else:\n"," print(filename + \" is zero length, so ignoring.\")\n","\n"," training_length = int(len(files) * SPLIT_SIZE)\n"," testing_length = int(len(files) - training_length)\n"," shuffled_set = 
random.sample(files, len(files))\n"," training_set = shuffled_set[0:training_length]\n"," testing_set = shuffled_set[training_length:]\n","\n"," for filename in training_set:\n"," this_file = SOURCE + filename\n"," destination = TRAINING + filename\n"," copyfile(this_file, destination)\n","\n"," for filename in testing_set:\n"," this_file = SOURCE + filename\n"," destination = TESTING + filename\n"," copyfile(this_file, destination)\n","\n","\n","CAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\n","TRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\n","TESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\n","DOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\n","TRAINING_DOGS_DIR = \"/tmp/cats-v-dogs/training/dogs/\"\n","TESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n","\n","split_size = .9\n","split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\n","split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"luthalB76ufC","colab":{}},"source":["print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\n","print(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n","\n","# Expected output:\n","# 1350\n","# 1350\n","# 150\n","# 150"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"-BQrav4anTmj","colab":{}},"source":["# DEFINE A KERAS MODEL TO CLASSIFY CATS V DOGS\n","# USE AT LEAST 3 CONVOLUTION LAYERS\n","model = tf.keras.models.Sequential([\n"," tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n"," tf.keras.layers.MaxPooling2D(2,2),\n"," tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n"," tf.keras.layers.MaxPooling2D(2,2), \n"," tf.keras.layers.Conv2D(64, (3,3), activation='relu'), \n"," tf.keras.layers.MaxPooling2D(2,2),\n"," # Flatten 
the results to feed into a DNN\n"," tf.keras.layers.Flatten(), \n"," # 512 neuron hidden layer\n"," tf.keras.layers.Dense(512, activation='relu'), \n"," # Only 1 output neuron. It will contain a value from 0-1, where values near 0 indicate one class ('cats') and values near 1 the other ('dogs')\n"," tf.keras.layers.Dense(1, activation='sigmoid')\n","])\n","\n","model.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"WCeGKRlJUR6e","colab_type":"text"},"source":["# NOTE:\n","\n","In the cell below you **MUST** use a batch size of 10 (`batch_size=10`) for the `train_generator` and the `validation_generator`. Using a batch size greater than 10 will exceed memory limits on the Coursera platform."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"mlNjoJ5D61N6","colab":{}},"source":["TRAINING_DIR = \"/tmp/cats-v-dogs/training\"\n","train_datagen = ImageDataGenerator(\n"," rescale=1./255,\n"," rotation_range=40,\n"," width_shift_range=0.2,\n"," height_shift_range=0.2,\n"," shear_range=0.2,\n"," zoom_range=0.2,\n"," horizontal_flip=True,\n"," fill_mode='nearest')\n","\n","train_generator = train_datagen.flow_from_directory(TRAINING_DIR,\n"," batch_size=10,\n"," class_mode='binary',\n"," target_size=(150, 150))\n","\n","VALIDATION_DIR = \"/tmp/cats-v-dogs/testing\"\n","validation_datagen = ImageDataGenerator(\n"," rescale=1./255,\n"," rotation_range=40,\n"," width_shift_range=0.2,\n"," height_shift_range=0.2,\n"," shear_range=0.2,\n"," zoom_range=0.2,\n"," horizontal_flip=True,\n"," fill_mode='nearest')\n","\n","validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,\n"," batch_size=10,\n"," class_mode = 'binary',\n"," target_size = (150, 150))\n","\n","\n","\n","# Expected Output:\n","# Found 2700 images belonging to 2 classes.\n","# Found 300 images belonging to 2 
classes."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"KyS4n53w7DxC","colab":{}},"source":["history = model.fit_generator(train_generator,\n"," epochs=2,\n"," verbose=1,\n"," validation_data=validation_generator)\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"MWZrJN4-65RC","colab":{}},"source":["# PLOT LOSS AND ACCURACY\n","%matplotlib inline\n","\n","import matplotlib.image as mpimg\n","import matplotlib.pyplot as plt\n","\n","#-----------------------------------------------------------\n","# Retrieve a list of results on the training and test data\n","# sets for each training epoch\n","#-----------------------------------------------------------\n","acc=history.history['accuracy']\n","val_acc=history.history['val_accuracy']\n","loss=history.history['loss']\n","val_loss=history.history['val_loss']\n","\n","epochs=range(len(acc)) # Get number of epochs\n","\n","#------------------------------------------------\n","# Plot training and validation accuracy per epoch\n","#------------------------------------------------\n","plt.plot(epochs, acc, 'r', label=\"Training Accuracy\")\n","plt.plot(epochs, val_acc, 'b', label=\"Validation Accuracy\")\n","plt.title('Training and validation accuracy')\n","plt.legend()\n","plt.figure()\n","\n","#------------------------------------------------\n","# Plot training and validation loss per epoch\n","#------------------------------------------------\n","plt.plot(epochs, loss, 'r', label=\"Training Loss\")\n","plt.plot(epochs, val_loss, 'b', label=\"Validation Loss\")\n","\n","\n","plt.title('Training and validation loss')\n","plt.legend()\n","\n","# Desired output. Charts with training and validation metrics. 
No crash :)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"eRLKqzhvUR6x","colab_type":"text"},"source":["# Submission Instructions"]},{"cell_type":"code","metadata":{"id":"-xCp9iEEUR6x","colab_type":"code","colab":{}},"source":["# Now click the 'Submit Assignment' button above."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"PrJgYxTCUR62","colab_type":"text"},"source":["# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. "]},{"cell_type":"code","metadata":{"id":"uGpYmlkHUR63","colab_type":"code","colab":{}},"source":["%%javascript\n","\n","IPython.notebook.save_checkpoint();"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"jnpzPhi5UR67","colab_type":"code","colab":{}},"source":["%%javascript\n","IPython.notebook.session.delete();\n","window.onbeforeunload = null\n","setTimeout(function() { window.close(); }, 1000);"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/Exercises/Exercise 7 - Transfer Learning/Exercise_3_Horses_vs_humans_using_Transfer_Learning_Question-FINAL.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Exercise_3_Horses_vs_humans_using_Transfer_Learning_Question-FINAL.ipynb","provenance":[],"collapsed_sections":[]},"coursera":{"course_slug":"convolutional-neural-networks-tensorflow","graded_item_id":"csg1x","launcher_item_id":"GpKYz"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"}},"cells":[{"cell_type":"code","metadata":{"colab_type":"code","id":"lbFmQdsZs5eW","colab":{}},"source":["# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n","# ATTENTION: Please do not add or remove any cells in the exercise. 
The grader will check specific cells based on the cell position.\n","# ATTENTION: Please use the provided epoch values when training.\n","\n","# Import all the necessary files!\n","import os\n","import tensorflow as tf\n","from tensorflow.keras import layers\n","from tensorflow.keras import Model\n","from os import getcwd"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"1xJZ5glPPCRz","colab":{}},"source":["path_inception = f\"{getcwd()}/../tmp2/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5\"\n","\n","from tensorflow.keras.applications.inception_v3 import InceptionV3\n","\n","# Create an instance of the inception model from the local pre-trained weights\n","local_weights_file = path_inception\n","\n","pre_trained_model = InceptionV3(input_shape = (150, 150, 3), \n"," include_top = False, # Leave out the fully connected classifier at the top\n"," weights = None) # Don't load the default ImageNet weights\n","\n","pre_trained_model.load_weights(local_weights_file)\n","\n","# Make all the layers in the pre-trained model non-trainable\n","for layer in pre_trained_model.layers:\n"," layer.trainable = False\n","\n","# Expected Output is extremely large, but should end with:\n","\n","#batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0] \n","#__________________________________________________________________________________________________\n","#activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0] \n","#__________________________________________________________________________________________________\n","#mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0] \n","# activation_276[0][0] \n","#__________________________________________________________________________________________________\n","#concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0] \n","# activation_280[0][0] 
\n","#__________________________________________________________________________________________________\n","#activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0] \n","#__________________________________________________________________________________________________\n","#mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0] \n","# mixed9_1[0][0] \n","# concatenate_5[0][0] \n","# activation_281[0][0] \n","#==================================================================================================\n","#Total params: 21,802,784\n","#Trainable params: 0\n","#Non-trainable params: 21,802,784"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"CFsUlwdfs_wg","colab":{}},"source":["last_layer = pre_trained_model.get_layer('mixed7')\n","print('last layer output shape: ', last_layer.output_shape)\n","last_output = last_layer.output\n","\n","# Expected Output:\n","# ('last layer output shape: ', (None, 7, 7, 768))"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"-bsWZWp5oMq9","colab":{}},"source":["# Define a Callback class that stops training once accuracy reaches 97.0%\n","class myCallback(tf.keras.callbacks.Callback):\n"," def on_epoch_end(self, epoch, logs={}):\n"," if(logs.get('accuracy')>0.97):\n"," print(\"\\nReached 97.0% accuracy so cancelling training!\")\n"," self.model.stop_training = True\n","\n"," "],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"BMXb913pbvFg","colab":{}},"source":["from tensorflow.keras.optimizers import RMSprop\n","\n","# Flatten the output layer to 1 dimension\n","x = layers.Flatten()(last_output)\n","# Add a fully connected layer with 1,024 hidden units and ReLU activation\n","x = layers.Dense(1024, activation='relu')(x)\n","# Add a dropout rate of 0.2\n","x = layers.Dropout(0.2)(x) \n","# Add a final sigmoid layer for classification\n","x = layers.Dense (1, 
activation='sigmoid')(x) \n","\n","model = Model( pre_trained_model.input, x) \n","\n","model.compile(optimizer = RMSprop(lr=0.0001), \n"," loss = 'binary_crossentropy', \n"," metrics = ['accuracy'])\n","\n","model.summary()\n","\n","# Expected output will be large. Last few lines should be:\n","\n","# mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0] \n","# activation_251[0][0] \n","# activation_256[0][0] \n","# activation_257[0][0] \n","# __________________________________________________________________________________________________\n","# flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0] \n","# __________________________________________________________________________________________________\n","# dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0] \n","# __________________________________________________________________________________________________\n","# dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0] \n","# __________________________________________________________________________________________________\n","# dense_9 (Dense) (None, 1) 1025 dropout_4[0][0] \n","# ==================================================================================================\n","# Total params: 47,512,481\n","# Trainable params: 38,537,217\n","# Non-trainable params: 8,975,264\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"HrnL_IQ8knWA","colab":{}},"source":["# Get the Horse or Human dataset\n","path_horse_or_human = f\"{getcwd()}/../tmp2/horse-or-human.zip\"\n","# Get the Horse or Human Validation dataset\n","path_validation_horse_or_human = f\"{getcwd()}/../tmp2/validation-horse-or-human.zip\"\n","from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","\n","import os\n","import zipfile\n","import shutil\n","\n","shutil.rmtree('/tmp')\n","local_zip = path_horse_or_human\n","zip_ref = zipfile.ZipFile(local_zip, 
'r')\n","zip_ref.extractall('/tmp/training')\n","zip_ref.close()\n","\n","local_zip = path_validation_horse_or_human\n","zip_ref = zipfile.ZipFile(local_zip, 'r')\n","zip_ref.extractall('/tmp/validation')\n","zip_ref.close()"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"y9okX7_ovskI","colab":{}},"source":["# Define our example directories and files\n","train_dir = '/tmp/training'\n","validation_dir = '/tmp/validation'\n","\n","train_horses_dir = os.path.join('/tmp/training/horses')\n","train_humans_dir = os.path.join('/tmp/training/humans')\n","validation_horses_dir = os.path.join('/tmp/validation/horses')\n","validation_humans_dir = os.path.join('/tmp/validation/humans')\n","\n","train_horses_fnames = os.listdir(train_horses_dir)\n","train_humans_fnames = os.listdir(train_humans_dir)\n","validation_horses_fnames = os.listdir(validation_horses_dir)\n","validation_humans_fnames = os.listdir(validation_humans_dir)\n","\n","print(len(train_horses_fnames))\n","print(len(train_humans_fnames))\n","print(len(validation_horses_fnames))\n","print(len(validation_humans_fnames))\n","\n","# Expected Output:\n","# 500\n","# 527\n","# 128\n","# 128"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"O4s8HckqGlnb","colab":{}},"source":["# Add our data-augmentation parameters to ImageDataGenerator\n","train_datagen = ImageDataGenerator(rescale = 1./255.,\n"," rotation_range = 40,\n"," width_shift_range = 0.2,\n"," height_shift_range = 0.2,\n"," shear_range = 0.2,\n"," zoom_range = 0.2,\n"," horizontal_flip = True)\n","\n","# Note that the validation data should not be augmented!\n","test_datagen = ImageDataGenerator( rescale = 1.0/255. 
)\n","\n","# Flow training images in batches of 20 using train_datagen generator\n","train_generator = train_datagen.flow_from_directory(train_dir,\n"," batch_size = 20,\n"," class_mode = 'binary', \n"," target_size = (150, 150)) \n","\n","# Flow validation images in batches of 20 using test_datagen generator\n","validation_generator = test_datagen.flow_from_directory( validation_dir,\n"," batch_size = 20,\n"," class_mode = 'binary', \n"," target_size = (150, 150))\n","\n","# Expected Output:\n","# Found 1027 images belonging to 2 classes.\n","# Found 256 images belonging to 2 classes."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"Blhq2MAUeyGA","colab":{}},"source":["# Run this and see how many epochs it should take before the callback\n","# fires, and stops training at 97% accuracy\n","\n","callbacks = myCallback()\n","history = model.fit_generator(train_generator,\n"," validation_data = validation_generator,\n"," steps_per_epoch = 50,\n"," epochs = 3,\n"," validation_steps = 12,\n"," verbose = 1,\n"," callbacks=[callbacks])"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"C2Fp6Se9rKuL","colab":{}},"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","acc = history.history['accuracy']\n","val_acc = history.history['val_accuracy']\n","loss = history.history['loss']\n","val_loss = history.history['val_loss']\n","\n","epochs = range(len(acc))\n","\n","plt.plot(epochs, acc, 'r', label='Training accuracy')\n","plt.plot(epochs, val_acc, 'b', label='Validation accuracy')\n","plt.title('Training and validation accuracy')\n","plt.legend(loc=0)\n","plt.figure()\n","\n","\n","plt.show()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"jGof5NONLA09","colab_type":"text"},"source":["# Submission Instructions"]},{"cell_type":"code","metadata":{"id":"71oZDJTfLA0-","colab_type":"code","colab":{}},"source":["# Now click the 'Submit Assignment' 
button above."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"J4CoLnLbLA1F","colab_type":"text"},"source":["# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. "]},{"cell_type":"code","metadata":{"id":"86gJQ-xMLA1G","colab_type":"code","colab":{}},"source":["%%javascript\n","\n","IPython.notebook.save_checkpoint();"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"8lJAkF6CLA1L","colab_type":"code","colab":{}},"source":["%%javascript\n","IPython.notebook.session.delete();\n","window.onbeforeunload = null\n","setTimeout(function() { window.close(); }, 1000);"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/Exercises/Exercise 8 - Multiclass with Signs/Exercise_4_Multi_class_classifier_Question-FINAL.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Exercise_4_Multi_class_classifier_Question-FINAL.ipynb","provenance":[],"collapsed_sections":[]},"coursera":{"course_slug":"convolutional-neural-networks-tensorflow","graded_item_id":"8mIh8","launcher_item_id":"gg95t"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.6.8"}},"cells":[{"cell_type":"code","metadata":{"colab_type":"code","id":"wYtuKeK0dImp","colab":{}},"source":["# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n","# ATTENTION: Please do not add or remove any cells in the exercise. 
The grader will check specific cells based on the cell position.\n","# ATTENTION: Please use the provided epoch values when training.\n","\n","import csv\n","import numpy as np\n","import tensorflow as tf\n","from tensorflow.keras.preprocessing.image import ImageDataGenerator\n","from os import getcwd"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"4kxw-_rmcnVu","colab":{}},"source":["def get_data(filename):\n"," # You will need to write code that will read the file passed\n"," # into this function. The first line contains the column headers\n"," # so you should ignore it\n"," # Each successive line contains 785 comma-separated values between 0 and 255\n"," # The first value is the label\n"," # The rest are the pixel values for that picture\n"," # The function will return 2 np.array types. One with all the labels\n"," # One with all the images\n"," #\n"," # Tips: \n"," # If you read a full line (as 'row') then row[0] has the label\n"," # and row[1:785] has the 784 pixel values\n"," # Take a look at np.array_split to turn the 784 pixels into 28x28\n"," # You are reading in strings, but need the values to be floats\n"," # Check out np.array().astype for a conversion\n"," with open(filename) as training_file:\n"," csv_reader = csv.reader(training_file, delimiter=',')\n"," first_line = True\n"," temp_labels = []\n"," temp_images = []\n","\n"," for row in csv_reader:\n"," # Skip the header row on the first iteration of the loop\n"," if first_line: \n"," first_line = False\n"," else:\n"," temp_labels.append(row[0])\n"," image_data = row[1:785]\n"," image_array = np.array_split(image_data, 28) # Reshape to 28 x 28\n"," temp_images.append(image_array)\n"," \n"," images = np.array(temp_images).astype('float')\n"," labels = np.array(temp_labels).astype('float')\n"," return images, labels\n","\n","path_sign_mnist_train = 
f\"{getcwd()}/../tmp2/sign_mnist_train.csv\"\n","path_sign_mnist_test = f\"{getcwd()}/../tmp2/sign_mnist_test.csv\"\n","training_images, training_labels = get_data(path_sign_mnist_train)\n","testing_images, testing_labels = get_data(path_sign_mnist_test)\n","\n","# Keep these\n","print(training_images.shape)\n","print(training_labels.shape)\n","print(testing_images.shape)\n","print(testing_labels.shape)\n","\n","# Their output should be:\n","# (27455, 28, 28)\n","# (27455,)\n","# (7172, 28, 28)\n","# (7172,)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"awoqRpyZdQkD","colab":{}},"source":["# In this section you will have to add another dimension to the data\n","# So, for example, if your array is (10000, 28, 28)\n","# You will need to make it (10000, 28, 28, 1)\n","# Hint: np.expand_dims\n","\n","training_images = np.expand_dims(training_images, axis=-1)\n","testing_images = np.expand_dims(testing_images, axis=-1)\n","\n","train_datagen = ImageDataGenerator(\n"," rescale = 1./255,\n","\t rotation_range=40,\n"," width_shift_range=0.2,\n"," height_shift_range=0.2,\n"," shear_range=0.2,\n"," zoom_range=0.2,\n"," horizontal_flip=True,\n"," fill_mode='nearest')\n","\n","validation_datagen = ImageDataGenerator(rescale = 1./255)\n"," \n","# Keep These\n","print(training_images.shape)\n","print(testing_images.shape)\n"," \n","# Their output should be:\n","# (27455, 28, 28, 1)\n","# (7172, 28, 28, 1)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"Rmb7S32cgRqS","colab":{}},"source":["# Define the model\n","# Use no more than 2 Conv2D and 2 MaxPooling2D\n","model = tf.keras.models.Sequential([\n"," # Note the input shape is 28 x 28 grayscale.\n"," tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),\n"," tf.keras.layers.MaxPooling2D(2, 2),\n"," tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n"," tf.keras.layers.MaxPooling2D(2,2),\n"," 
tf.keras.layers.Flatten(),\n"," tf.keras.layers.Dense(128, activation='relu'),\n"," tf.keras.layers.Dense(26, activation='softmax') # 26 letters in the sign-language alphabet, so 26 classes\n","])\n","\n","# Compile Model.\n","# The labels are plain integers rather than one-hot vectors, so use\n","# 'sparse_categorical_crossentropy' instead of 'categorical_crossentropy'.\n","model.compile(loss = 'sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])\n","\n","# Train the Model\n","history = model.fit_generator(train_datagen.flow(training_images, training_labels, batch_size=32),\n"," steps_per_epoch=len(training_images) / 32,\n"," epochs=15,\n"," validation_data=validation_datagen.flow(testing_images, testing_labels, batch_size=32),\n"," validation_steps=len(testing_images) / 32)\n","\n","# Rescale the test images to match the generators before evaluating\n","model.evaluate(testing_images / 255.0, testing_labels)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"_Q3Zpr46dsij","colab":{}},"source":["# Plot the chart for accuracy and loss on both training and validation\n","%matplotlib inline\n","import matplotlib.pyplot as plt\n","acc = history.history['accuracy']\n","val_acc = history.history['val_accuracy']\n","loss = history.history['loss']\n","val_loss = history.history['val_loss']\n","\n","epochs = range(len(acc))\n","\n","plt.plot(epochs, acc, 'r', label='Training accuracy')\n","plt.plot(epochs, val_acc, 'b', label='Validation accuracy')\n","plt.title('Training and validation accuracy')\n","plt.legend()\n","plt.figure()\n","\n","plt.plot(epochs, loss, 'r', label='Training Loss')\n","plt.plot(epochs, val_loss, 'b', label='Validation Loss')\n","plt.title('Training and validation loss')\n","plt.legend()\n","\n","plt.show()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Vb230g5qVqbt","colab_type":"text"},"source":["# Submission Instructions"]},{"cell_type":"code","metadata":{"id":"KY7y00sXVqbv","colab_type":"code","colab":{}},"source":["# Now click the 'Submit Assignment' button 
above."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"VII1JwNNVqb6","colab_type":"text"},"source":["# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. "]},{"cell_type":"code","metadata":{"id":"j3Bmt_hWVqb8","colab_type":"code","colab":{}},"source":["%%javascript\n","\n","IPython.notebook.save_checkpoint();"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"-RHkodRWVqcE","colab_type":"code","colab":{}},"source":["%%javascript\n","IPython.notebook.session.delete();\n","window.onbeforeunload = null\n","setTimeout(function() { window.close(); }, 1000);"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/Exercises/tmp2/validation-horse-or-human.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/Exercises/tmp2/validation-horse-or-human.zip -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper-hires1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper-hires1.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/paper-hires2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper-hires2.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper1.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper2.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper3.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/paper4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper4.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper5.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper6.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper7.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/paper8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper8.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/paper9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/paper9.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock-hires1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock-hires1.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock-hires2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock-hires2.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/rock1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock1.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock2.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock3.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock4.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. 
Convolutional Neural Networks in TensorFlow/rps-validation/rock5.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock6.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock7.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock8.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/rock9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/rock9.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/scissors-hires1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors-hires1.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors-hires2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors-hires2.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors1.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors2.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/scissors3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors3.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors4.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors5.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors6.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/rps-validation/scissors7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors7.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors8.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/rps-validation/scissors9.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/sign-language-mnist/amer_sign2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/sign-language-mnist/amer_sign2.png -------------------------------------------------------------------------------- /2. 
Convolutional Neural Networks in TensorFlow/sign-language-mnist/amer_sign3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/sign-language-mnist/amer_sign3.png -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/sign-language-mnist/american_sign_language.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/sign-language-mnist/american_sign_language.PNG -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/test_cat.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/test_cat.jpg -------------------------------------------------------------------------------- /2. Convolutional Neural Networks in TensorFlow/test_dog.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/2. Convolutional Neural Networks in TensorFlow/test_dog.jpg -------------------------------------------------------------------------------- /3. 
Natural Language Processing in TensorFlow/Course 3 - Week 1 - Lesson 1.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Course 3 - Week 1 - Lesson 1.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"code","metadata":{"id":"zX4Kg8DUTKWO","colab_type":"code","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"colab_type":"text","id":"rX8mhOLljYeM"},"source":["##### Copyright 2019 The TensorFlow Authors."]},{"cell_type":"code","metadata":{"cellView":"form","colab_type":"code","id":"BZSlp3DAjdYf","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions
and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"vAphtEqFo07P","colab_type":"text"},"source":["**Note:** num_words caps the vocabulary used when encoding text. fit_on_texts still indexes every unique word, but if the corpus contains more words than num_words, only the num_words most frequent words are kept when texts are converted to sequences."]},{"cell_type":"markdown","metadata":{"id":"PXO5UuBzrHYY","colab_type":"text"},"source":["**Note:** The tokenizer lowercases your text and strips all punctuation, so the exclamation mark isn't included as a token and 'I' is tokenized as 'i'. Spaces between words aren't included either."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"zaCMcjMQifQc","outputId":"9c21a91e-f46e-4404-ea37-69c3390b2691","executionInfo":{"status":"ok","timestamp":1589451148258,"user_tz":-330,"elapsed":4478,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/","height":34}},"source":["from tensorflow.keras.preprocessing.text import Tokenizer\n","\n","sentences = [\n"," 'i love my dog',\n"," 'I, love my cat',\n"," 'You love my dog!'\n","]\n","\n","tokenizer = Tokenizer(num_words = 100)\n","tokenizer.fit_on_texts(sentences)\n","word_index = tokenizer.word_index\n","print(word_index)"],"execution_count":0,"outputs":[{"output_type":"stream","text":["{'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}\n"],"name":"stdout"}]}]} -------------------------------------------------------------------------------- /3.
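The notes in Lesson 1 describe how the Tokenizer builds its word index: text is lowercased, punctuation is stripped, and words get 1-based indices by descending frequency, with num_words only capping how many ranks are used at encoding time. A rough pure-Python sketch of that ranking (fit_word_index is a hypothetical stand-in for Tokenizer.fit_on_texts, not part of Keras, and the regex only approximates the Keras punctuation filter):

```python
import re
from collections import Counter

def fit_word_index(sentences):
    # Lowercase, drop punctuation (apostrophes survive, as in Keras),
    # then assign 1-based indices by descending word frequency.
    counts = Counter()
    for sentence in sentences:
        counts.update(re.findall(r"[a-z']+", sentence.lower()))
    return {word: rank for rank, (word, _) in enumerate(counts.most_common(), start=1)}

sentences = [
    'i love my dog',
    'I, love my cat',
    'You love my dog!',
]
print(fit_word_index(sentences))
# {'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}
```

This reproduces the word index printed in the Lesson 1 notebook: 'love' and 'my' appear three times each, so they take the lowest indices.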
Natural Language Processing in TensorFlow/Course 3 - Week 1 - Lesson 2.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Course 3 - Week 1 - Lesson 2.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"code","metadata":{"id":"zX4Kg8DUTKWO","colab_type":"code","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"colab_type":"text","id":"rX8mhOLljYeM"},"source":["##### Copyright 2019 The TensorFlow Authors."]},{"cell_type":"code","metadata":{"cellView":"form","colab_type":"code","id":"BZSlp3DAjdYf","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions
and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Rks3bSdNyB-z","colab_type":"text"},"source":["**Note:** The words 'loves' and 'manatee' weren't in the training set, so there isn't an encoding available for them. Therefore, when you run the test set, both are tokenized as the OOV (out of vocabulary) token."]},{"cell_type":"markdown","metadata":{"id":"Qv4xp1tjF8qo","colab_type":"text"},"source":["**Note:** If you don't specify maxlen in pad_sequences, it will use the length of the longest sentence. If maxlen is less than the length of the longest sentence, you will lose information, by default from the beginning of the sentence. To truncate from the end of the sentence instead, use truncating='post'.\n"]},{"cell_type":"markdown","metadata":{"id":"qj8Yy71KGrab","colab_type":"text"},"source":["**Note:** Add padding='post' to pad at the end of the sequence instead of the beginning."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"ArOPfBwyZtln","colab":{"base_uri":"https://localhost:8080/","height":309},"outputId":"3adfc78c-d134-4bd5-a0c3-1b0dc26fee7d","executionInfo":{"status":"ok","timestamp":1589451982471,"user_tz":-330,"elapsed":2847,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["import tensorflow as tf\n","from tensorflow import keras\n","\n","\n","from tensorflow.keras.preprocessing.text import Tokenizer\n","from tensorflow.keras.preprocessing.sequence import pad_sequences\n","\n","sentences = [\n"," 'I love my dog',\n"," 'I love my cat',\n"," 'You love my dog!',\n"," 'Do you think my dog is amazing?'\n","]\n","\n","tokenizer = Tokenizer(num_words = 100, oov_token=\"<OOV>\")\n","tokenizer.fit_on_texts(sentences)\n","word_index = tokenizer.word_index\n","\n","sequences = tokenizer.texts_to_sequences(sentences)\n","\n","#padded = 
pad_sequences(sequences, padding='post', truncating='post', maxlen=5)\n","padded = pad_sequences(sequences, maxlen=5)\n","print(\"\\nWord Index = \" , word_index)\n","print(\"\\nSequences = \" , sequences)\n","print(\"\\nPadded Sequences:\")\n","print(padded)\n","\n","\n","# Try with words that the tokenizer wasn't fit to\n","test_data = [\n"," 'i really love my dog',\n"," 'my dog loves my manatee'\n","]\n","\n","test_seq = tokenizer.texts_to_sequences(test_data)\n","print(\"\\nTest Sequence = \", test_seq)\n","\n","padded = pad_sequences(test_seq, maxlen=10)\n","print(\"\\nPadded Test Sequence: \")\n","print(padded)"],"execution_count":1,"outputs":[{"output_type":"stream","text":["\n","Word Index = {'<OOV>': 1, 'my': 2, 'love': 3, 'dog': 4, 'i': 5, 'you': 6, 'cat': 7, 'do': 8, 'think': 9, 'is': 10, 'amazing': 11}\n","\n","Sequences = [[5, 3, 2, 4], [5, 3, 2, 7], [6, 3, 2, 4], [8, 6, 9, 2, 4, 10, 11]]\n","\n","Padded Sequences:\n","[[ 0 5 3 2 4]\n"," [ 0 5 3 2 7]\n"," [ 0 6 3 2 4]\n"," [ 9 2 4 10 11]]\n","\n","Test Sequence = [[5, 1, 3, 2, 4], [2, 4, 1, 2, 1]]\n","\n","Padded Test Sequence: \n","[[0 0 0 0 0 5 1 3 2 4]\n"," [0 0 0 0 0 2 4 1 2 1]]\n"],"name":"stdout"}]}]} -------------------------------------------------------------------------------- /3.
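The padding and truncation rules from the Lesson 2 notes (act on the front of the sequence by default, pass 'post' to act on the end) can be sketched without TensorFlow; pad_to below is a hypothetical stand-in for pad_sequences, not the Keras function itself:

```python
def pad_to(sequences, maxlen, padding='pre', truncating='pre'):
    # Mimic keras pad_sequences: truncate to maxlen, then pad with zeros.
    padded = []
    for seq in sequences:
        if len(seq) > maxlen:
            # 'pre' drops tokens from the start of the sequence, 'post' from the end
            seq = seq[-maxlen:] if truncating == 'pre' else seq[:maxlen]
        zeros = [0] * (maxlen - len(seq))
        # 'pre' puts the zeros before the sequence, 'post' after it
        padded.append(zeros + seq if padding == 'pre' else seq + zeros)
    return padded

sequences = [[5, 3, 2, 4], [8, 6, 9, 2, 4, 10, 11]]
print(pad_to(sequences, maxlen=5))
# [[0, 5, 3, 2, 4], [9, 2, 4, 10, 11]]
print(pad_to(sequences, maxlen=5, padding='post', truncating='post'))
# [[5, 3, 2, 4, 0], [8, 6, 9, 2, 4]]
```

With the defaults, the seven-token sentence loses its first two tokens, matching the rows of the notebook's padded output.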
Natural Language Processing in TensorFlow/Course 3 - Week 3 - Lesson 2c.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "jGwXGIXvFhXW" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import json\n", 37 | "import tensorflow as tf\n", 38 | "\n", 39 | "from tensorflow.keras.preprocessing.text import Tokenizer\n", 40 | "from tensorflow.keras.preprocessing.sequence import pad_sequences\n", 41 | "\n", 42 | "!wget --no-check-certificate \\\n", 43 | " https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \\\n", 44 | " -O /tmp/sarcasm.json\n", 45 | "\n", 46 | "vocab_size = 1000\n", 47 | "embedding_dim = 16\n", 48 | "max_length = 120\n", 49 | "trunc_type='post'\n", 50 | "padding_type='post'\n", 51 | "oov_tok = \"<OOV>\"\n", 52 | "training_size = 20000\n", 53 | "\n", 54 | "\n", 55 | "with open(\"/tmp/sarcasm.json\", 'r') as f:\n", 56 | " datastore = json.load(f)\n", 57 | "\n", 58 | "\n", 59 | "sentences = []\n", 60 | 
"labels = []\n", 61 | "urls = []\n", 62 | "for item in datastore:\n", 63 | " sentences.append(item['headline'])\n", 64 | " labels.append(item['is_sarcastic'])\n", 65 | "\n", 66 | "training_sentences = sentences[0:training_size]\n", 67 | "testing_sentences = sentences[training_size:]\n", 68 | "training_labels = labels[0:training_size]\n", 69 | "testing_labels = labels[training_size:]\n", 70 | "\n", 71 | "tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)\n", 72 | "tokenizer.fit_on_texts(training_sentences)\n", 73 | "\n", 74 | "word_index = tokenizer.word_index\n", 75 | "\n", 76 | "training_sequences = tokenizer.texts_to_sequences(training_sentences)\n", 77 | "training_padded = pad_sequences(training_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)\n", 78 | "\n", 79 | "testing_sequences = tokenizer.texts_to_sequences(testing_sentences)\n", 80 | "testing_padded = pad_sequences(testing_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)\n", 81 | "\n", 82 | "model = tf.keras.Sequential([\n", 83 | " tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n", 84 | " tf.keras.layers.Conv1D(128, 5, activation='relu'),\n", 85 | " tf.keras.layers.GlobalMaxPooling1D(),\n", 86 | " tf.keras.layers.Dense(24, activation='relu'),\n", 87 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 88 | "])\n", 89 | "model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\n", 90 | "model.summary()\n", 91 | "\n", 92 | "num_epochs = 50\n", 93 | "history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=1)\n" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": 0, 99 | "metadata": { 100 | "colab": {}, 101 | "colab_type": "code", 102 | "id": "g9DC6dmLF8DC" 103 | }, 104 | "outputs": [], 105 | "source": [ 106 | "import matplotlib.pyplot as plt\n", 107 | "\n", 108 | "\n", 109 | "def 
plot_graphs(history, string):\n", 110 | " plt.plot(history.history[string])\n", 111 | " plt.plot(history.history['val_'+string])\n", 112 | " plt.xlabel(\"Epochs\")\n", 113 | " plt.ylabel(string)\n", 114 | " plt.legend([string, 'val_'+string])\n", 115 | " plt.show()\n", 116 | "\n", 117 | "plot_graphs(history, 'accuracy')\n", 118 | "plot_graphs(history, 'loss')" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": 0, 124 | "metadata": { 125 | "colab": {}, 126 | "colab_type": "code", 127 | "id": "7ZEZIUppGhdi" 128 | }, 129 | "outputs": [], 130 | "source": [ 131 | "model.save(\"test.h5\")" 132 | ] 133 | } 134 | ], 135 | "metadata": { 136 | "accelerator": "GPU", 137 | "colab": { 138 | "collapsed_sections": [], 139 | "name": "Course 3 - Week 3 - Lesson 2c.ipynb", 140 | "provenance": [], 141 | "toc_visible": true, 142 | "version": "0.3.2" 143 | }, 144 | "kernelspec": { 145 | "display_name": "Python 3", 146 | "name": "python3" 147 | } 148 | }, 149 | "nbformat": 4, 150 | "nbformat_minor": 0 151 | } 152 | -------------------------------------------------------------------------------- /3. 
Natural Language Processing in TensorFlow/Course 3 - Week 3 - Lesson 2d.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "P-AhVYeBWgQ3" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import tensorflow as tf\n", 37 | "print(tf.__version__)\n", 38 | "\n", 39 | "# !pip install -q tensorflow-datasets" 40 | ] 41 | }, 42 | { 43 | "cell_type": "code", 44 | "execution_count": 0, 45 | "metadata": { 46 | "colab": {}, 47 | "colab_type": "code", 48 | "id": "_IoM4VFxWpMR" 49 | }, 50 | "outputs": [], 51 | "source": [ 52 | "import tensorflow_datasets as tfds\n", 53 | "imdb, info = tfds.load(\"imdb_reviews\", with_info=True, as_supervised=True)\n" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 0, 59 | "metadata": { 60 | "colab": {}, 61 | "colab_type": "code", 62 | "id": "wHQ2Ko0zl7M4" 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "import numpy as np\n", 67 | "\n", 68 | "train_data, test_data = imdb['train'], 
imdb['test']\n", 69 | "\n", 70 | "training_sentences = []\n", 71 | "training_labels = []\n", 72 | "\n", 73 | "testing_sentences = []\n", 74 | "testing_labels = []\n", 75 | "\n", 76 | "# In Python 3, str(s.numpy()) is needed to get a plain string, rather than s.numpy() alone\n", 77 | "for s,l in train_data:\n", 78 | " training_sentences.append(str(s.numpy()))\n", 79 | " training_labels.append(l.numpy())\n", 80 | " \n", 81 | "for s,l in test_data:\n", 82 | " testing_sentences.append(str(s.numpy()))\n", 83 | " testing_labels.append(l.numpy())\n", 84 | " \n", 85 | "training_labels_final = np.array(training_labels)\n", 86 | "testing_labels_final = np.array(testing_labels)\n" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "execution_count": 0, 92 | "metadata": { 93 | "colab": {}, 94 | "colab_type": "code", 95 | "id": "7n15yyMdmoH1" 96 | }, 97 | "outputs": [], 98 | "source": [ 99 | "vocab_size = 10000\n", 100 | "embedding_dim = 16\n", 101 | "max_length = 120\n", 102 | "trunc_type='post'\n", 103 | "oov_tok = \"<OOV>\"\n", 104 | "\n", 105 | "\n", 106 | "from tensorflow.keras.preprocessing.text import Tokenizer\n", 107 | "from tensorflow.keras.preprocessing.sequence import pad_sequences\n", 108 | "\n", 109 | "tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)\n", 110 | "tokenizer.fit_on_texts(training_sentences)\n", 111 | "word_index = tokenizer.word_index\n", 112 | "sequences = tokenizer.texts_to_sequences(training_sentences)\n", 113 | "padded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)\n", 114 | "\n", 115 | "testing_sequences = tokenizer.texts_to_sequences(testing_sentences)\n", 116 | "testing_padded = pad_sequences(testing_sequences,maxlen=max_length)\n" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": 0, 122 | "metadata": { 123 | "colab": {}, 124 | "colab_type": "code", 125 | "id": "9axf0uIXVMhO" 126 | }, 127 | "outputs": [], 128 | "source": [ 129 | "reverse_word_index = dict([(value, key) for (key, value) in
word_index.items()])\n", 130 | "\n", 131 | "def decode_review(text):\n", 132 | " return ' '.join([reverse_word_index.get(i, '?') for i in text])\n", 133 | "\n", 134 | "print(decode_review(padded[1]))\n", 135 | "print(training_sentences[1])" 136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": 0, 141 | "metadata": { 142 | "colab": {}, 143 | "colab_type": "code", 144 | "id": "5NEpdhb8AxID" 145 | }, 146 | "outputs": [], 147 | "source": [ 148 | "model = tf.keras.Sequential([\n", 149 | " tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n", 150 | " tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),\n", 151 | " tf.keras.layers.Dense(6, activation='relu'),\n", 152 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 153 | "])\n", 154 | "model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\n", 155 | "model.summary()\n" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": 0, 161 | "metadata": { 162 | "colab": {}, 163 | "colab_type": "code", 164 | "id": "V5LLrXC-uNX6" 165 | }, 166 | "outputs": [], 167 | "source": [ 168 | "num_epochs = 50\n", 169 | "history = model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))" 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": 0, 175 | "metadata": { 176 | "colab": {}, 177 | "colab_type": "code", 178 | "id": "nHGYuU4jPYaj" 179 | }, 180 | "outputs": [], 181 | "source": [ 182 | "import matplotlib.pyplot as plt\n", 183 | "\n", 184 | "\n", 185 | "def plot_graphs(history, string):\n", 186 | " plt.plot(history.history[string])\n", 187 | " plt.plot(history.history['val_'+string])\n", 188 | " plt.xlabel(\"Epochs\")\n", 189 | " plt.ylabel(string)\n", 190 | " plt.legend([string, 'val_'+string])\n", 191 | " plt.show()\n", 192 | "\n", 193 | "plot_graphs(history, 'accuracy')\n", 194 | "plot_graphs(history, 'loss')" 195 | ] 196 | }, 197 | { 198 | "cell_type": 
"code", 199 | "execution_count": 0, 200 | "metadata": { 201 | "colab": {}, 202 | "colab_type": "code", 203 | "id": "wSualgGPPK0S" 204 | }, 205 | "outputs": [], 206 | "source": [ 207 | "# Model Definition with LSTM\n", 208 | "model = tf.keras.Sequential([\n", 209 | " tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n", 210 | " tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),\n", 211 | " tf.keras.layers.Dense(6, activation='relu'),\n", 212 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 213 | "])\n", 214 | "model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\n", 215 | "model.summary()\n" 216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "execution_count": 0, 221 | "metadata": { 222 | "colab": {}, 223 | "colab_type": "code", 224 | "id": "K_Jc7cY3Qxke" 225 | }, 226 | "outputs": [], 227 | "source": [ 228 | "# Model Definition with Conv1D\n", 229 | "model = tf.keras.Sequential([\n", 230 | " tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n", 231 | " tf.keras.layers.Conv1D(128, 5, activation='relu'),\n", 232 | " tf.keras.layers.GlobalAveragePooling1D(),\n", 233 | " tf.keras.layers.Dense(6, activation='relu'),\n", 234 | " tf.keras.layers.Dense(1, activation='sigmoid')\n", 235 | "])\n", 236 | "model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\n", 237 | "model.summary()\n" 238 | ] 239 | } 240 | ], 241 | "metadata": { 242 | "accelerator": "GPU", 243 | "colab": { 244 | "collapsed_sections": [], 245 | "name": "Course 3 - Week 3 - Lesson 2d.ipynb", 246 | "provenance": [], 247 | "toc_visible": true, 248 | "version": "0.3.2" 249 | }, 250 | "kernelspec": { 251 | "display_name": "Python 3", 252 | "name": "python3" 253 | } 254 | }, 255 | "nbformat": 4, 256 | "nbformat_minor": 0 257 | } 258 | -------------------------------------------------------------------------------- /3. 
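Lesson 2d builds reverse_word_index so that decode_review can turn padded id sequences back into text. The inversion itself is plain Python; here is a self-contained toy sketch (the four-word word_index is made up for illustration, not the IMDB vocabulary):

```python
word_index = {'the': 1, 'movie': 2, 'was': 3, 'great': 4}  # toy index, not IMDB's
# Flip {word: id} into {id: word} so sequences can be decoded back to text
reverse_word_index = {value: key for key, value in word_index.items()}

def decode_review(sequence):
    # Ids with no entry -- including the 0 used for padding -- decode to '?'
    return ' '.join(reverse_word_index.get(i, '?') for i in sequence)

print(decode_review([0, 0, 1, 2, 3, 4]))
# ? ? the movie was great
```

The leading '?' marks are why decode_review(padded[1]) in the notebook shows question marks where the padding zeros sit.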
Natural Language Processing in TensorFlow/Exercises/Week 3/NLP Course - Week 3 Exercise Answer.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"NLP Course - Week 3 Exercise Answer.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"code","metadata":{"id":"zX4Kg8DUTKWO","colab_type":"code","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"hmA6EzkQJ5jt","colab":{}},"source":["import json\n","import tensorflow as tf\n","import csv\n","import random\n","import numpy as np\n","\n","from tensorflow.keras.preprocessing.text import Tokenizer\n","from tensorflow.keras.preprocessing.sequence import pad_sequences\n","from tensorflow.keras.utils import to_categorical\n","from tensorflow.keras import regularizers\n","\n","\n","embedding_dim = 100\n","max_length = 16\n","trunc_type='post'\n","padding_type='post'\n","oov_tok = \"<OOV>\"\n","training_size=160000\n","test_portion=.1\n","\n","corpus = 
[]\n"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"bM0l_dORKqE0","outputId":"080466fa-c36c-4cb2-acb2-4bf9f771142a","colab":{"base_uri":"https://localhost:8080/","height":224},"executionInfo":{"status":"ok","timestamp":1589899214719,"user_tz":-330,"elapsed":12043,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["# Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader\n","# You can do that yourself with:\n","# iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv\n","# I then hosted it on my site to make it easier to use in this notebook\n","\n","!wget --no-check-certificate \\\n"," https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \\\n"," -O /tmp/training_cleaned.csv\n","\n","num_sentences = 0\n","\n","with open(\"/tmp/training_cleaned.csv\") as csvfile:\n"," reader = csv.reader(csvfile, delimiter=',')\n"," for row in reader:\n"," list_item=[]\n"," list_item.append(row[5])\n"," this_label=row[0]\n"," if this_label=='0':\n"," list_item.append(0)\n"," else:\n"," list_item.append(1)\n"," num_sentences = num_sentences + 1\n"," corpus.append(list_item)\n"],"execution_count":12,"outputs":[{"output_type":"stream","text":["--2020-05-19 14:40:05-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv\n","Resolving storage.googleapis.com (storage.googleapis.com)... 64.233.189.128, 2404:6800:4008:c07::80\n","Connecting to storage.googleapis.com (storage.googleapis.com)|64.233.189.128|:443... connected.\n","HTTP request sent, awaiting response... 
200 OK\n","Length: 238942690 (228M) [application/octet-stream]\n","Saving to: ‘/tmp/training_cleaned.csv’\n","\n","\r /tmp/trai 0%[ ] 0 --.-KB/s \r /tmp/train 13%[=> ] 30.90M 154MB/s \r /tmp/traini 37%[======> ] 86.58M 216MB/s \r /tmp/trainin 55%[==========> ] 127.02M 212MB/s \r /tmp/training 74%[=============> ] 169.00M 211MB/s \r /tmp/training_ 94%[=================> ] 216.07M 216MB/s \r/tmp/training_clean 100%[===================>] 227.87M 218MB/s in 1.0s \n","\n","2020-05-19 14:40:06 (218 MB/s) - ‘/tmp/training_cleaned.csv’ saved [238942690/238942690]\n","\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"3kxblBUjEUX-","outputId":"28901fc4-0b8e-4332-e12c-ffb385e30f7f","colab":{"base_uri":"https://localhost:8080/","height":88},"executionInfo":{"status":"ok","timestamp":1589899214720,"user_tz":-330,"elapsed":12033,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["print(num_sentences)\n","print(len(corpus))\n","print(corpus[1])\n","\n","# Expected Output:\n","# 1600000\n","# 1600000\n","# [\"is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!\", 0]"],"execution_count":13,"outputs":[{"output_type":"stream","text":["1600000\n","1600000\n","[\"is upset that he can't update his Facebook by texting it... and might cry as a result School today also. 
Blah!\", 0]\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"ohOGz24lsNAD","colab":{}},"source":["sentences=[]\n","labels=[]\n","random.shuffle(corpus)\n","for x in range(training_size):\n"," sentences.append(corpus[x][0])\n"," labels.append(corpus[x][1])\n","\n","\n","tokenizer = Tokenizer()\n","tokenizer.fit_on_texts(sentences)\n","\n","word_index = tokenizer.word_index\n","vocab_size=len(word_index)\n","\n","sequences = tokenizer.texts_to_sequences(sentences)\n","padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)\n","\n","split = int(test_portion * training_size)\n","\n","test_sequences = padded[0:split]\n","training_sequences = padded[split:training_size]\n","test_labels = np.asarray(labels[0:split])\n","training_labels = np.asarray(labels[split:training_size])"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"gIrtRem1En3N","outputId":"d544c7a1-8c81-4956-d9f4-16d01e7980f0","colab":{"base_uri":"https://localhost:8080/","height":51},"executionInfo":{"status":"ok","timestamp":1589899223717,"user_tz":-330,"elapsed":21014,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["print(vocab_size)\n","print(word_index['i'])\n","# Expected Output\n","# 138858\n","# 1"],"execution_count":15,"outputs":[{"output_type":"stream","text":["138027\n","1\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"C1zdgJkusRh0","outputId":"e878373d-80ab-4851-f14a-ae8f3b0dc257","colab":{"base_uri":"https://localhost:8080/","height":224},"executionInfo":{"status":"ok","timestamp":1589899241870,"user_tz":-330,"elapsed":39158,"user":{"displayName":"Agni 
Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["# Note this is the 100 dimension version of GloVe from Stanford\n","# I unzipped and hosted it on my site to make this notebook easier\n","!wget --no-check-certificate \\\n"," https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \\\n"," -O /tmp/glove.6B.100d.txt\n","embeddings_index = {}\n","with open('/tmp/glove.6B.100d.txt') as f:\n"," for line in f:\n"," values = line.split()\n"," word = values[0]\n"," coefs = np.asarray(values[1:], dtype='float32')\n"," embeddings_index[word] = coefs\n","\n","embeddings_matrix = np.zeros((vocab_size+1, embedding_dim))\n","for word, i in word_index.items():\n"," embedding_vector = embeddings_index.get(word)\n"," if embedding_vector is not None:\n"," embeddings_matrix[i] = embedding_vector"],"execution_count":16,"outputs":[{"output_type":"stream","text":["--2020-05-19 14:40:25-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt\n","Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.203.128, 2404:6800:4008:c07::80\n","Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.203.128|:443... connected.\n","HTTP request sent, awaiting response... 200 OK\n","Length: 347116733 (331M) [text/plain]\n","Saving to: ‘/tmp/glove.6B.100d.txt’\n","\n","/tmp/glove.6B.100d. 
100%[===================>] 331.04M 128MB/s in 2.6s \n","\n","2020-05-19 14:40:27 (128 MB/s) - ‘/tmp/glove.6B.100d.txt’ saved [347116733/347116733]\n","\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"71NLk_lpFLNt","outputId":"9b241a2c-c05d-4711-d740-434893c6bda2","colab":{"base_uri":"https://localhost:8080/","height":34},"executionInfo":{"status":"ok","timestamp":1589899241872,"user_tz":-330,"elapsed":39151,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}}},"source":["print(len(embeddings_matrix))\n","# Expected Output\n","# 138859"],"execution_count":17,"outputs":[{"output_type":"stream","text":["138028\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"iKKvbuEBOGFz","colab":{"base_uri":"https://localhost:8080/","height":1000},"outputId":"38016b28-2a0c-4bd3-c63a-1551a34dcc7e"},"source":["model = tf.keras.Sequential([\n"," tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False),\n"," tf.keras.layers.Dropout(0.2),\n"," tf.keras.layers.Conv1D(64, 5, activation='relu'),\n"," tf.keras.layers.MaxPooling1D(pool_size=4),\n"," tf.keras.layers.LSTM(64),\n"," tf.keras.layers.Dense(1, activation='sigmoid')\n","])\n","model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\n","model.summary()\n","\n","num_epochs = 50\n","history = model.fit(training_sequences, training_labels, epochs=num_epochs, validation_data=(test_sequences, test_labels), verbose=2)\n","\n","print(\"Training Complete\")\n"],"execution_count":0,"outputs":[{"output_type":"stream","text":["Model: \"sequential_2\"\n","_________________________________________________________________\n","Layer (type) Output Shape Param # \n","=================================================================\n","embedding_2 (Embedding) (None, 16, 
100) 13802800 \n","_________________________________________________________________\n","dropout_2 (Dropout) (None, 16, 100) 0 \n","_________________________________________________________________\n","conv1d_2 (Conv1D) (None, 12, 64) 32064 \n","_________________________________________________________________\n","max_pooling1d_2 (MaxPooling1 (None, 3, 64) 0 \n","_________________________________________________________________\n","lstm_2 (LSTM) (None, 64) 33024 \n","_________________________________________________________________\n","dense_2 (Dense) (None, 1) 65 \n","=================================================================\n","Total params: 13,867,953\n","Trainable params: 65,153\n","Non-trainable params: 13,802,800\n","_________________________________________________________________\n","Epoch 1/50\n","4500/4500 - 22s - loss: 0.5683 - accuracy: 0.6974 - val_loss: 0.5378 - val_accuracy: 0.7249\n","Epoch 2/50\n","4500/4500 - 21s - loss: 0.5284 - accuracy: 0.7299 - val_loss: 0.5172 - val_accuracy: 0.7404\n","Epoch 3/50\n","4500/4500 - 21s - loss: 0.5114 - accuracy: 0.7444 - val_loss: 0.5204 - val_accuracy: 0.7362\n","Epoch 4/50\n","4500/4500 - 21s - loss: 0.5008 - accuracy: 0.7513 - val_loss: 0.5056 - val_accuracy: 0.7473\n","Epoch 5/50\n","4500/4500 - 21s - loss: 0.4915 - accuracy: 0.7576 - val_loss: 0.5041 - val_accuracy: 0.7498\n","Epoch 6/50\n","4500/4500 - 21s - loss: 0.4847 - accuracy: 0.7625 - val_loss: 0.5044 - val_accuracy: 0.7498\n","Epoch 7/50\n","4500/4500 - 21s - loss: 0.4769 - accuracy: 0.7656 - val_loss: 0.5043 - val_accuracy: 0.7514\n","Epoch 8/50\n","4500/4500 - 21s - loss: 0.4730 - accuracy: 0.7678 - val_loss: 0.5060 - val_accuracy: 0.7474\n","Epoch 9/50\n","4500/4500 - 22s - loss: 0.4694 - accuracy: 0.7713 - val_loss: 0.5070 - val_accuracy: 0.7514\n","Epoch 10/50\n","4500/4500 - 21s - loss: 0.4644 - accuracy: 0.7747 - val_loss: 0.5072 - val_accuracy: 0.7540\n","Epoch 11/50\n","4500/4500 - 21s - loss: 0.4618 - accuracy: 0.7759 - val_loss: 
0.5095 - val_accuracy: 0.7534\n","Epoch 12/50\n","4500/4500 - 20s - loss: 0.4586 - accuracy: 0.7774 - val_loss: 0.5103 - val_accuracy: 0.7466\n","Epoch 13/50\n","4500/4500 - 20s - loss: 0.4569 - accuracy: 0.7791 - val_loss: 0.5071 - val_accuracy: 0.7499\n","Epoch 14/50\n","4500/4500 - 21s - loss: 0.4531 - accuracy: 0.7804 - val_loss: 0.5140 - val_accuracy: 0.7465\n","Epoch 15/50\n","4500/4500 - 21s - loss: 0.4511 - accuracy: 0.7834 - val_loss: 0.5093 - val_accuracy: 0.7492\n","Epoch 16/50\n","4500/4500 - 21s - loss: 0.4479 - accuracy: 0.7853 - val_loss: 0.5138 - val_accuracy: 0.7502\n","Epoch 17/50\n","4500/4500 - 21s - loss: 0.4480 - accuracy: 0.7850 - val_loss: 0.5129 - val_accuracy: 0.7502\n","Epoch 18/50\n","4500/4500 - 21s - loss: 0.4455 - accuracy: 0.7858 - val_loss: 0.5169 - val_accuracy: 0.7454\n","Epoch 19/50\n","4500/4500 - 22s - loss: 0.4440 - accuracy: 0.7871 - val_loss: 0.5150 - val_accuracy: 0.7459\n","Epoch 20/50\n","4500/4500 - 21s - loss: 0.4440 - accuracy: 0.7874 - val_loss: 0.5128 - val_accuracy: 0.7481\n","Epoch 21/50\n","4500/4500 - 21s - loss: 0.4422 - accuracy: 0.7890 - val_loss: 0.5134 - val_accuracy: 0.7504\n","Epoch 22/50\n","4500/4500 - 21s - loss: 0.4404 - accuracy: 0.7885 - val_loss: 0.5179 - val_accuracy: 0.7458\n","Epoch 23/50\n","4500/4500 - 21s - loss: 0.4392 - accuracy: 0.7882 - val_loss: 0.5182 - val_accuracy: 0.7481\n","Epoch 24/50\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"qxju4ItJKO8F","colab":{}},"source":["import matplotlib.image as mpimg\n","import matplotlib.pyplot as plt\n","\n","#-----------------------------------------------------------\n","# Retrieve a list of list results on training and test data\n","# sets for each training epoch\n","#-----------------------------------------------------------\n","acc=history.history['accuracy']\n","val_acc=history.history['val_accuracy']\n","loss=history.history['loss']\n","val_loss=history.history['val_loss']\n","\n","epochs=range(len(acc)) # 
Get number of epochs\n","\n","#------------------------------------------------\n","# Plot training and validation accuracy per epoch\n","#------------------------------------------------\n","plt.plot(epochs, acc, 'r')\n","plt.plot(epochs, val_acc, 'b')\n","plt.title('Training and validation accuracy')\n","plt.xlabel(\"Epochs\")\n","plt.ylabel(\"Accuracy\")\n","plt.legend([\"Accuracy\", \"Validation Accuracy\"])\n","\n","plt.figure()\n","\n","#------------------------------------------------\n","# Plot training and validation loss per epoch\n","#------------------------------------------------\n","plt.plot(epochs, loss, 'r')\n","plt.plot(epochs, val_loss, 'b')\n","plt.title('Training and validation loss')\n","plt.xlabel(\"Epochs\")\n","plt.ylabel(\"Loss\")\n","plt.legend([\"Loss\", \"Validation Loss\"])\n","\n","plt.figure()\n","\n","\n","# Expected Output\n","# A chart where the validation loss does not increase sharply!"],"execution_count":0,"outputs":[]}]} -------------------------------------------------------------------------------- /3. 
Natural Language Processing in TensorFlow/Exercises/Week 4/NLP_Week4_Exercise_Shakespeare_Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 
22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "BOwsuGQQY9OL" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "from tensorflow.keras.preprocessing.sequence import pad_sequences\n", 37 | "from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional\n", 38 | "from tensorflow.keras.preprocessing.text import Tokenizer\n", 39 | "from tensorflow.keras.models import Sequential\n", 40 | "from tensorflow.keras.optimizers import Adam\n", 41 | "from tensorflow.keras import regularizers\n", 42 | "import tensorflow.keras.utils as ku \n", 43 | "import numpy as np " 44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "execution_count": 0, 49 | "metadata": { 50 | "colab": {}, 51 | "colab_type": "code", 52 | "id": "PRnDnCW-Z7qv" 53 | }, 54 | "outputs": [], 55 | "source": [ 56 | "tokenizer = Tokenizer()\n", 57 | "!wget --no-check-certificate \\\n", 58 | " https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sonnets.txt \\\n", 59 | " -O /tmp/sonnets.txt\n", 60 | "data = open('/tmp/sonnets.txt').read()\n", 61 | "\n", 62 | "corpus = data.lower().split(\"\\n\")\n", 63 | "\n", 64 | "\n", 65 | "tokenizer.fit_on_texts(corpus)\n", 66 | "total_words = len(tokenizer.word_index) + 1\n", 67 | "\n", 68 | "# create input sequences using list of tokens\n", 69 | "input_sequences = []\n", 70 | "for line in corpus:\n", 71 | "\ttoken_list = tokenizer.texts_to_sequences([line])[0]\n", 72 | "\tfor i in range(1, len(token_list)):\n", 73 | "\t\tn_gram_sequence = token_list[:i+1]\n", 74 | "\t\tinput_sequences.append(n_gram_sequence)\n", 75 | "\n", 76 | "\n", 77 | "# pad sequences \n", 78 | "max_sequence_len = max([len(x) for x in input_sequences])\n", 79 | "input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))\n", 80 | "\n", 81 | "# create predictors and label\n", 82 | 
"predictors, label = input_sequences[:,:-1],input_sequences[:,-1]\n", 83 | "\n", 84 | "label = ku.to_categorical(label, num_classes=total_words)" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": 0, 90 | "metadata": { 91 | "colab": {}, 92 | "colab_type": "code", 93 | "id": "w9vH8Y59ajYL" 94 | }, 95 | "outputs": [], 96 | "source": [ 97 | "model = Sequential()\n", 98 | "model.add(Embedding(total_words, 100, input_length=max_sequence_len-1))\n", 99 | "model.add(Bidirectional(LSTM(150, return_sequences=True)))\n", 100 | "model.add(Dropout(0.2))\n", 101 | "model.add(LSTM(100))\n", 102 | "model.add(Dense(total_words//2, activation='relu', kernel_regularizer=regularizers.l2(0.01)))\n", 103 | "model.add(Dense(total_words, activation='softmax'))\n", 104 | "model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n", 105 | "print(model.summary())\n" 106 | ] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "execution_count": 0, 111 | "metadata": { 112 | "colab": {}, 113 | "colab_type": "code", 114 | "id": "AIg2f1HBxqof" 115 | }, 116 | "outputs": [], 117 | "source": [ 118 | "history = model.fit(predictors, label, epochs=100, verbose=1)" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": 0, 124 | "metadata": { 125 | "colab": {}, 126 | "colab_type": "code", 127 | "id": "1fXTEO3GJ282" 128 | }, 129 | "outputs": [], 130 | "source": [ 131 | "import matplotlib.pyplot as plt\n", 132 | "acc = history.history['accuracy']\n", 133 | "loss = history.history['loss']\n", 134 | "\n", 135 | "epochs = range(len(acc))\n", 136 | "\n", 137 | "plt.plot(epochs, acc, 'b', label='Training accuracy')\n", 138 | "plt.title('Training accuracy')\n", 139 | "\n", 140 | "plt.figure()\n", 141 | "\n", 142 | "plt.plot(epochs, loss, 'b', label='Training Loss')\n", 143 | "plt.title('Training loss')\n", 144 | "plt.legend()\n", 145 | "\n", 146 | "plt.show()" 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": 0, 
152 | "metadata": { 153 | "colab": {}, 154 | "colab_type": "code", 155 | "id": "6Vc6PHgxa6Hm" 156 | }, 157 | "outputs": [], 158 | "source": [ 159 | "seed_text = \"Help me Obi Wan Kenobi, you're my only hope\"\n", 160 | "next_words = 100\n", 161 | " \n", 162 | "for _ in range(next_words):\n", 163 | "\ttoken_list = tokenizer.texts_to_sequences([seed_text])[0]\n", 164 | "\ttoken_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')\n", 165 | "\tpredicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)[0]\n", 166 | "\toutput_word = \"\"\n", 167 | "\tfor word, index in tokenizer.word_index.items():\n", 168 | "\t\tif index == predicted:\n", 169 | "\t\t\toutput_word = word\n", 170 | "\t\t\tbreak\n", 171 | "\tseed_text += \" \" + output_word\n", 172 | "print(seed_text)" 173 | ] 174 | } 175 | ], 176 | "metadata": { 177 | "accelerator": "GPU", 178 | "colab": { 179 | "name": "NLP_Week4_Exercise_Shakespeare_Answer.ipynb", 180 | "provenance": [], 181 | "toc_visible": true, 182 | "version": "0.3.2" 183 | }, 184 | "kernelspec": { 185 | "display_name": "Python 3", 186 | "name": "python3" 187 | } 188 | }, 189 | "nbformat": 4, 190 | "nbformat_minor": 0 191 | } 192 | -------------------------------------------------------------------------------- /3. Natural Language Processing in TensorFlow/n-grams.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/3. Natural Language Processing in TensorFlow/n-grams.png -------------------------------------------------------------------------------- /4. 
Sequences, Time Series and Prediction/Exercises/Week 1/Week 1 Exercise Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "t9HrvPfrSlzS" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "import tensorflow as tf\n", 37 | "print(tf.__version__)\n" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 0, 43 | "metadata": { 44 | "colab": {}, 45 | "colab_type": "code", 46 | "id": "gqWabzlJ63nL" 47 | }, 48 | "outputs": [], 49 | "source": [ 50 | "import numpy as np\n", 51 | "import matplotlib.pyplot as plt\n", 52 | "import tensorflow as tf\n", 53 | "from tensorflow import keras\n", 54 | "\n", 55 | "def plot_series(time, series, format=\"-\", start=0, end=None):\n", 56 | " plt.plot(time[start:end], series[start:end], format)\n", 57 | " plt.xlabel(\"Time\")\n", 58 | " plt.ylabel(\"Value\")\n", 59 | " plt.grid(True)\n", 60 | "\n", 61 | "def trend(time, slope=0):\n", 62 | " return slope * time\n", 63 | "\n", 64 | 
"def seasonal_pattern(season_time):\n", 65 | " \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n", 66 | " return np.where(season_time < 0.1,\n", 67 | " np.cos(season_time * 7 * np.pi),\n", 68 | " 1 / np.exp(5 * season_time))\n", 69 | "\n", 70 | "def seasonality(time, period, amplitude=1, phase=0):\n", 71 | " \"\"\"Repeats the same pattern at each period\"\"\"\n", 72 | " season_time = ((time + phase) % period) / period\n", 73 | " return amplitude * seasonal_pattern(season_time)\n", 74 | "\n", 75 | "def noise(time, noise_level=1, seed=None):\n", 76 | " rnd = np.random.RandomState(seed)\n", 77 | " return rnd.randn(len(time)) * noise_level\n", 78 | "\n", 79 | "time = np.arange(4 * 365 + 1, dtype=\"float32\")\n", 80 | "baseline = 10\n", 81 | "series = trend(time, 0.1) \n", 82 | "baseline = 10\n", 83 | "amplitude = 40\n", 84 | "slope = 0.01\n", 85 | "noise_level = 2\n", 86 | "\n", 87 | "# Create the series\n", 88 | "series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n", 89 | "# Update with noise\n", 90 | "series += noise(time, noise_level, seed=42)\n", 91 | "\n", 92 | "plt.figure(figsize=(10, 6))\n", 93 | "plot_series(time, series)\n", 94 | "plt.show()" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": { 100 | "colab_type": "text", 101 | "id": "UfdyqJJ1VZVu" 102 | }, 103 | "source": [ 104 | "Now that we have the time series, let's split it so we can start forecasting" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": 0, 110 | "metadata": { 111 | "colab": {}, 112 | "colab_type": "code", 113 | "id": "_w0eKap5uFNP" 114 | }, 115 | "outputs": [], 116 | "source": [ 117 | "split_time = 1100\n", 118 | "time_train = time[:split_time]\n", 119 | "x_train = series[:split_time]\n", 120 | "time_valid = time[split_time:]\n", 121 | "x_valid = series[split_time:]\n", 122 | "plt.figure(figsize=(10, 6))\n", 123 | "plot_series(time_train, x_train)\n", 124 | "plt.show()\n", 125 | "\n", 
126 | "plt.figure(figsize=(10, 6))\n", 127 | "plot_series(time_valid, x_valid)\n", 128 | "plt.show()" 129 | ] 130 | }, 131 | { 132 | "cell_type": "markdown", 133 | "metadata": { 134 | "colab_type": "text", 135 | "id": "bjD8ncEZbjEW" 136 | }, 137 | "source": [ 138 | "# Naive Forecast" 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": 0, 144 | "metadata": { 145 | "colab": {}, 146 | "colab_type": "code", 147 | "id": "Pj_-uCeYxcAb" 148 | }, 149 | "outputs": [], 150 | "source": [ 151 | "naive_forecast = series[split_time - 1:-1]" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": 0, 157 | "metadata": { 158 | "colab": {}, 159 | "colab_type": "code", 160 | "id": "JtxwHj9Ig0jT" 161 | }, 162 | "outputs": [], 163 | "source": [ 164 | "plt.figure(figsize=(10, 6))\n", 165 | "plot_series(time_valid, x_valid)\n", 166 | "plot_series(time_valid, naive_forecast)" 167 | ] 168 | }, 169 | { 170 | "cell_type": "markdown", 171 | "metadata": { 172 | "colab_type": "text", 173 | "id": "fw1SP5WeuixH" 174 | }, 175 | "source": [ 176 | "Let's zoom in on the start of the validation period:" 177 | ] 178 | }, 179 | { 180 | "cell_type": "code", 181 | "execution_count": 0, 182 | "metadata": { 183 | "colab": {}, 184 | "colab_type": "code", 185 | "id": "D0MKg7FNug9V" 186 | }, 187 | "outputs": [], 188 | "source": [ 189 | "plt.figure(figsize=(10, 6))\n", 190 | "plot_series(time_valid, x_valid, start=0, end=150)\n", 191 | "plot_series(time_valid, naive_forecast, start=1, end=151)" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": { 197 | "colab_type": "text", 198 | "id": "35gIlQLfu0TT" 199 | }, 200 | "source": [ 201 | "You can see that the naive forecast lags 1 step behind the time series." 
202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": { 207 | "colab_type": "text", 208 | "id": "Uh_7244Gsxfx" 209 | }, 210 | "source": [ 211 | "Now let's compute the mean squared error and the mean absolute error between the forecasts and the predictions in the validation period:" 212 | ] 213 | }, 214 | { 215 | "cell_type": "code", 216 | "execution_count": 0, 217 | "metadata": { 218 | "colab": {}, 219 | "colab_type": "code", 220 | "id": "byNnC7IbsnMZ" 221 | }, 222 | "outputs": [], 223 | "source": [ 224 | "print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())\n", 225 | "print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())" 226 | ] 227 | }, 228 | { 229 | "cell_type": "markdown", 230 | "metadata": { 231 | "colab_type": "text", 232 | "id": "WGPBC9QttI1u" 233 | }, 234 | "source": [ 235 | "That's our baseline, now let's try a moving average:" 236 | ] 237 | }, 238 | { 239 | "cell_type": "code", 240 | "execution_count": 0, 241 | "metadata": { 242 | "colab": {}, 243 | "colab_type": "code", 244 | "id": "YGz5UsUdf2tV" 245 | }, 246 | "outputs": [], 247 | "source": [ 248 | "def moving_average_forecast(series, window_size):\n", 249 | " \"\"\"Forecasts the mean of the last few values.\n", 250 | " If window_size=1, then this is equivalent to naive forecast\"\"\"\n", 251 | " forecast = []\n", 252 | " for time in range(len(series) - window_size):\n", 253 | " forecast.append(series[time:time + window_size].mean())\n", 254 | " return np.array(forecast)" 255 | ] 256 | }, 257 | { 258 | "cell_type": "code", 259 | "execution_count": 0, 260 | "metadata": { 261 | "colab": {}, 262 | "colab_type": "code", 263 | "id": "HHFhGXQji7_r" 264 | }, 265 | "outputs": [], 266 | "source": [ 267 | "moving_avg = moving_average_forecast(series, 30)[split_time - 30:]\n", 268 | "\n", 269 | "plt.figure(figsize=(10, 6))\n", 270 | "plot_series(time_valid, x_valid)\n", 271 | "plot_series(time_valid, moving_avg)" 272 | ] 273 | }, 274 | { 275 | 
"cell_type": "code", 276 | "execution_count": 0, 277 | "metadata": { 278 | "colab": {}, 279 | "colab_type": "code", 280 | "id": "wG7pTAd7z0e8" 281 | }, 282 | "outputs": [], 283 | "source": [ 284 | "print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy())\n", 285 | "print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())" 286 | ] 287 | }, 288 | { 289 | "cell_type": "markdown", 290 | "metadata": { 291 | "colab_type": "text", 292 | "id": "JMYPnJqwz8nS" 293 | }, 294 | "source": [ 295 | "That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time *t* – 365 from the value at time *t*." 296 | ] 297 | }, 298 | { 299 | "cell_type": "code", 300 | "execution_count": 0, 301 | "metadata": { 302 | "colab": {}, 303 | "colab_type": "code", 304 | "id": "5pqySF7-rJR4" 305 | }, 306 | "outputs": [], 307 | "source": [ 308 | "diff_series = (series[365:] - series[:-365])\n", 309 | "diff_time = time[365:]\n", 310 | "\n", 311 | "plt.figure(figsize=(10, 6))\n", 312 | "plot_series(diff_time, diff_series)\n", 313 | "plt.show()" 314 | ] 315 | }, 316 | { 317 | "cell_type": "markdown", 318 | "metadata": { 319 | "colab_type": "text", 320 | "id": "xPlPlS7DskWg" 321 | }, 322 | "source": [ 323 | "Great, the trend and seasonality seem to be gone, so now we can use the moving average:" 324 | ] 325 | }, 326 | { 327 | "cell_type": "code", 328 | "execution_count": 0, 329 | "metadata": { 330 | "colab": {}, 331 | "colab_type": "code", 332 | "id": "QmZpz7arsjbb" 333 | }, 334 | "outputs": [], 335 | "source": [ 336 | "diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]\n", 337 | "\n", 338 | "plt.figure(figsize=(10, 6))\n", 339 | "plot_series(time_valid, diff_series[split_time - 365:])\n", 340 | "plot_series(time_valid, diff_moving_avg)\n", 341 | "plt.show()" 342 | ] 343 | }, 344 | { 345 | 
"cell_type": "markdown", 346 | "metadata": { 347 | "colab_type": "text", 348 | "id": "Gno9S2lyecnc" 349 | }, 350 | "source": [ 351 | "Now let's bring back the trend and seasonality by adding the past values from t – 365:" 352 | ] 353 | }, 354 | { 355 | "cell_type": "code", 356 | "execution_count": 0, 357 | "metadata": { 358 | "colab": {}, 359 | "colab_type": "code", 360 | "id": "Dv6RWFq7TFGB" 361 | }, 362 | "outputs": [], 363 | "source": [ 364 | "diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg\n", 365 | "\n", 366 | "plt.figure(figsize=(10, 6))\n", 367 | "plot_series(time_valid, x_valid)\n", 368 | "plot_series(time_valid, diff_moving_avg_plus_past)\n", 369 | "plt.show()" 370 | ] 371 | }, 372 | { 373 | "cell_type": "code", 374 | "execution_count": 0, 375 | "metadata": { 376 | "colab": {}, 377 | "colab_type": "code", 378 | "id": "59jmBrwcTFCx" 379 | }, 380 | "outputs": [], 381 | "source": [ 382 | "print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy())\n", 383 | "print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy())" 384 | ] 385 | }, 386 | { 387 | "cell_type": "markdown", 388 | "metadata": { 389 | "colab_type": "text", 390 | "id": "vx9Et1Hkeusl" 391 | }, 392 | "source": [ 393 | "Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. 
Let's use a moving averaging on past values to remove some of the noise:" 394 | ] 395 | }, 396 | { 397 | "cell_type": "code", 398 | "execution_count": 0, 399 | "metadata": { 400 | "colab": {}, 401 | "colab_type": "code", 402 | "id": "K81dtROoTE_r" 403 | }, 404 | "outputs": [], 405 | "source": [ 406 | "diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg\n", 407 | "\n", 408 | "plt.figure(figsize=(10, 6))\n", 409 | "plot_series(time_valid, x_valid)\n", 410 | "plot_series(time_valid, diff_moving_avg_plus_smooth_past)\n", 411 | "plt.show()" 412 | ] 413 | }, 414 | { 415 | "cell_type": "code", 416 | "execution_count": 0, 417 | "metadata": { 418 | "colab": {}, 419 | "colab_type": "code", 420 | "id": "iN2MsBxWTE3m" 421 | }, 422 | "outputs": [], 423 | "source": [ 424 | "print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())\n", 425 | "print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())" 426 | ] 427 | } 428 | ], 429 | "metadata": { 430 | "accelerator": "GPU", 431 | "colab": { 432 | "collapsed_sections": [], 433 | "name": "Week 1 Exercise Answer.ipynb", 434 | "provenance": [], 435 | "toc_visible": true, 436 | "version": "0.3.2" 437 | }, 438 | "kernelspec": { 439 | "display_name": "Python 3", 440 | "name": "python3" 441 | } 442 | }, 443 | "nbformat": 4, 444 | "nbformat_minor": 0 445 | } 446 | -------------------------------------------------------------------------------- /4. 
Sequences, Time Series and Prediction/Exercises/Week 2/S+P_Week_2_Exercise_Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "cellView": "both", 31 | "colab": {}, 32 | "colab_type": "code", 33 | "id": "D1J15Vh_1Jih" 34 | }, 35 | "outputs": [], 36 | "source": [ 37 | "!pip install tf-nightly-2.0-preview\n" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 0, 43 | "metadata": { 44 | "colab": {}, 45 | "colab_type": "code", 46 | "id": "BOjujz601HcS" 47 | }, 48 | "outputs": [], 49 | "source": [ 50 | "import tensorflow as tf\n", 51 | "import numpy as np\n", 52 | "import matplotlib.pyplot as plt\n", 53 | "print(tf.__version__)" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 0, 59 | "metadata": { 60 | "colab": {}, 61 | "colab_type": "code", 62 | "id": "Zswl7jRtGzkk" 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "def plot_series(time, series, format=\"-\", start=0, end=None):\n", 67 | " plt.plot(time[start:end], series[start:end], 
format)\n", 68 | " plt.xlabel(\"Time\")\n", 69 | " plt.ylabel(\"Value\")\n", 70 | " plt.grid(False)\n", 71 | "\n", 72 | "def trend(time, slope=0):\n", 73 | " return slope * time\n", 74 | "\n", 75 | "def seasonal_pattern(season_time):\n", 76 | " \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n", 77 | " return np.where(season_time < 0.1,\n", 78 | " np.cos(season_time * 6 * np.pi),\n", 79 | " 2 / np.exp(9 * season_time))\n", 80 | "\n", 81 | "def seasonality(time, period, amplitude=1, phase=0):\n", 82 | " \"\"\"Repeats the same pattern at each period\"\"\"\n", 83 | " season_time = ((time + phase) % period) / period\n", 84 | " return amplitude * seasonal_pattern(season_time)\n", 85 | "\n", 86 | "def noise(time, noise_level=1, seed=None):\n", 87 | " rnd = np.random.RandomState(seed)\n", 88 | " return rnd.randn(len(time)) * noise_level\n", 89 | "\n", 90 | "time = np.arange(10 * 365 + 1, dtype=\"float32\")\n", 91 | "baseline = 10\n", 92 | "series = trend(time, 0.1) \n", 93 | "baseline = 10\n", 94 | "amplitude = 40\n", 95 | "slope = 0.005\n", 96 | "noise_level = 3\n", 97 | "\n", 98 | "# Create the series\n", 99 | "series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n", 100 | "# Update with noise\n", 101 | "series += noise(time, noise_level, seed=51)\n", 102 | "\n", 103 | "split_time = 3000\n", 104 | "time_train = time[:split_time]\n", 105 | "x_train = series[:split_time]\n", 106 | "time_valid = time[split_time:]\n", 107 | "x_valid = series[split_time:]\n", 108 | "\n", 109 | "window_size = 20\n", 110 | "batch_size = 32\n", 111 | "shuffle_buffer_size = 1000\n", 112 | "\n", 113 | "plot_series(time, series)" 114 | ] 115 | }, 116 | { 117 | "cell_type": "code", 118 | "execution_count": 0, 119 | "metadata": { 120 | "colab": {}, 121 | "colab_type": "code", 122 | "id": "4sTTIOCbyShY" 123 | }, 124 | "outputs": [], 125 | "source": [ 126 | "def windowed_dataset(series, window_size, batch_size, shuffle_buffer):\n", 127 | " 
dataset = tf.data.Dataset.from_tensor_slices(series)\n", 128 | " dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)\n", 129 | " dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))\n", 130 | " dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))\n", 131 | " dataset = dataset.batch(batch_size).prefetch(1)\n", 132 | " return dataset" 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": 0, 138 | "metadata": { 139 | "colab": {}, 140 | "colab_type": "code", 141 | "id": "TW-vT7eLYAdb" 142 | }, 143 | "outputs": [], 144 | "source": [ 145 | "dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n", 146 | "\n", 147 | "\n", 148 | "model = tf.keras.models.Sequential([\n", 149 | " tf.keras.layers.Dense(100, input_shape=[window_size], activation=\"relu\"), \n", 150 | " tf.keras.layers.Dense(10, activation=\"relu\"), \n", 151 | " tf.keras.layers.Dense(1)\n", 152 | "])\n", 153 | "\n", 154 | "model.compile(loss=\"mse\", optimizer=tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9))\n", 155 | "model.fit(dataset,epochs=100,verbose=0)\n" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": 0, 161 | "metadata": { 162 | "colab": {}, 163 | "colab_type": "code", 164 | "id": "efhco2rYyIFF" 165 | }, 166 | "outputs": [], 167 | "source": [ 168 | "forecast = []\n", 169 | "for time in range(len(series) - window_size):\n", 170 | " forecast.append(model.predict(series[time:time + window_size][np.newaxis]))\n", 171 | "\n", 172 | "forecast = forecast[split_time-window_size:]\n", 173 | "results = np.array(forecast)[:, 0, 0]\n", 174 | "\n", 175 | "\n", 176 | "plt.figure(figsize=(10, 6))\n", 177 | "\n", 178 | "plot_series(time_valid, x_valid)\n", 179 | "plot_series(time_valid, results)" 180 | ] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": 0, 185 | "metadata": { 186 | "colab": {}, 187 | "colab_type": "code", 188 | "id": "-kT6j186YO6K" 
189 | }, 190 | "outputs": [], 191 | "source": [ 192 | "tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()" 193 | ] 194 | } 195 | ], 196 | "metadata": { 197 | "accelerator": "GPU", 198 | "colab": { 199 | "name": "S+P_Week_2_Exercise_Answer.ipynb", 200 | "provenance": [], 201 | "toc_visible": true, 202 | "version": "0.3.2" 203 | }, 204 | "kernelspec": { 205 | "display_name": "Python 3", 206 | "name": "python3" 207 | } 208 | }, 209 | "nbformat": 4, 210 | "nbformat_minor": 0 211 | } 212 | -------------------------------------------------------------------------------- /4. Sequences, Time Series and Prediction/Exercises/Week 3/S+P Week 3 Exercise Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "metadata": { 6 | "id": "zX4Kg8DUTKWO", 7 | "colab_type": "code", 8 | "colab": {} 9 | }, 10 | "source": [ 11 | "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 12 | "# you may not use this file except in compliance with the License.\n", 13 | "# You may obtain a copy of the License at\n", 14 | "#\n", 15 | "# https://www.apache.org/licenses/LICENSE-2.0\n", 16 | "#\n", 17 | "# Unless required by applicable law or agreed to in writing, software\n", 18 | "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 19 | "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 20 | "# See the License for the specific language governing permissions and\n", 21 | "# limitations under the License." 
22 | ], 23 | "execution_count": 0, 24 | "outputs": [] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 0, 29 | "metadata": { 30 | "colab": {}, 31 | "colab_type": "code", 32 | "id": "D1J15Vh_1Jih" 33 | }, 34 | "outputs": [], 35 | "source": [ 36 | "!pip install tf-nightly-2.0-preview\n" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": 0, 42 | "metadata": { 43 | "colab": {}, 44 | "colab_type": "code", 45 | "id": "BOjujz601HcS" 46 | }, 47 | "outputs": [], 48 | "source": [ 49 | "import tensorflow as tf\n", 50 | "import numpy as np\n", 51 | "import matplotlib.pyplot as plt\n", 52 | "print(tf.__version__)" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": 0, 58 | "metadata": { 59 | "colab": {}, 60 | "colab_type": "code", 61 | "id": "Zswl7jRtGzkk" 62 | }, 63 | "outputs": [], 64 | "source": [ 65 | "def plot_series(time, series, format=\"-\", start=0, end=None):\n", 66 | " plt.plot(time[start:end], series[start:end], format)\n", 67 | " plt.xlabel(\"Time\")\n", 68 | " plt.ylabel(\"Value\")\n", 69 | " plt.grid(False)\n", 70 | "\n", 71 | "def trend(time, slope=0):\n", 72 | " return slope * time\n", 73 | "\n", 74 | "def seasonal_pattern(season_time):\n", 75 | " \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n", 76 | " return np.where(season_time < 0.1,\n", 77 | " np.cos(season_time * 6 * np.pi),\n", 78 | " 2 / np.exp(9 * season_time))\n", 79 | "\n", 80 | "def seasonality(time, period, amplitude=1, phase=0):\n", 81 | " \"\"\"Repeats the same pattern at each period\"\"\"\n", 82 | " season_time = ((time + phase) % period) / period\n", 83 | " return amplitude * seasonal_pattern(season_time)\n", 84 | "\n", 85 | "def noise(time, noise_level=1, seed=None):\n", 86 | " rnd = np.random.RandomState(seed)\n", 87 | " return rnd.randn(len(time)) * noise_level\n", 88 | "\n", 89 | "time = np.arange(10 * 365 + 1, dtype=\"float32\")\n", 90 | "baseline = 10\n", 91 | "series = trend(time, 0.1) \n", 92 | "baseline = 
10\n", 93 | "amplitude = 40\n", 94 | "slope = 0.005\n", 95 | "noise_level = 3\n", 96 | "\n", 97 | "# Create the series\n", 98 | "series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n", 99 | "# Update with noise\n", 100 | "series += noise(time, noise_level, seed=51)\n", 101 | "\n", 102 | "split_time = 3000\n", 103 | "time_train = time[:split_time]\n", 104 | "x_train = series[:split_time]\n", 105 | "time_valid = time[split_time:]\n", 106 | "x_valid = series[split_time:]\n", 107 | "\n", 108 | "window_size = 20\n", 109 | "batch_size = 32\n", 110 | "shuffle_buffer_size = 1000\n", 111 | "\n", 112 | "plot_series(time, series)" 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": 0, 118 | "metadata": { 119 | "colab": {}, 120 | "colab_type": "code", 121 | "id": "4sTTIOCbyShY" 122 | }, 123 | "outputs": [], 124 | "source": [ 125 | "def windowed_dataset(series, window_size, batch_size, shuffle_buffer):\n", 126 | " dataset = tf.data.Dataset.from_tensor_slices(series)\n", 127 | " dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)\n", 128 | " dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))\n", 129 | " dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))\n", 130 | " dataset = dataset.batch(batch_size).prefetch(1)\n", 131 | " return dataset" 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": 0, 137 | "metadata": { 138 | "colab": {}, 139 | "colab_type": "code", 140 | "id": "A1Hl39rklkLm" 141 | }, 142 | "outputs": [], 143 | "source": [ 144 | "tf.keras.backend.clear_session()\n", 145 | "tf.random.set_seed(51)\n", 146 | "np.random.seed(51)\n", 147 | "\n", 148 | "tf.keras.backend.clear_session()\n", 149 | "dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n", 150 | "\n", 151 | "model = tf.keras.models.Sequential([\n", 152 | " tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, 
axis=-1),\n", 153 | " input_shape=[None]),\n", 154 | " tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),\n", 155 | " tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),\n", 156 | " tf.keras.layers.Dense(1),\n", 157 | " tf.keras.layers.Lambda(lambda x: x * 10.0)\n", 158 | "])\n", 159 | "\n", 160 | "lr_schedule = tf.keras.callbacks.LearningRateScheduler(\n", 161 | " lambda epoch: 1e-8 * 10**(epoch / 20))\n", 162 | "optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)\n", 163 | "model.compile(loss=tf.keras.losses.Huber(),\n", 164 | " optimizer=optimizer,\n", 165 | " metrics=[\"mae\"])\n", 166 | "history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])" 167 | ] 168 | }, 169 | { 170 | "cell_type": "code", 171 | "execution_count": 0, 172 | "metadata": { 173 | "colab": {}, 174 | "colab_type": "code", 175 | "id": "AkBsrsXMzoWR" 176 | }, 177 | "outputs": [], 178 | "source": [ 179 | "plt.semilogx(history.history[\"lr\"], history.history[\"loss\"])\n", 180 | "plt.axis([1e-8, 1e-4, 0, 30])" 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": 0, 186 | "metadata": { 187 | "colab": {}, 188 | "colab_type": "code", 189 | "id": "4uh-97bpLZCA" 190 | }, 191 | "outputs": [], 192 | "source": [ 193 | "tf.keras.backend.clear_session()\n", 194 | "tf.random.set_seed(51)\n", 195 | "np.random.seed(51)\n", 196 | "\n", 197 | "tf.keras.backend.clear_session()\n", 198 | "dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n", 199 | "\n", 200 | "model = tf.keras.models.Sequential([\n", 201 | " tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),\n", 202 | " input_shape=[None]),\n", 203 | " tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),\n", 204 | " tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),\n", 205 | " tf.keras.layers.Dense(1),\n", 206 | " tf.keras.layers.Lambda(lambda x: x * 100.0)\n", 207 | "])\n", 208 | "\n", 209 | "\n", 210 | 
"model.compile(loss=\"mse\", optimizer=tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9),metrics=[\"mae\"])\n", 211 | "history = model.fit(dataset,epochs=500,verbose=1)" 212 | ] 213 | }, 214 | { 215 | "cell_type": "code", 216 | "execution_count": 0, 217 | "metadata": { 218 | "colab": {}, 219 | "colab_type": "code", 220 | "id": "icGDaND7z0ne" 221 | }, 222 | "outputs": [], 223 | "source": [ 224 | "forecast = []\n", 225 | "results = []\n", 226 | "for time in range(len(series) - window_size):\n", 227 | " forecast.append(model.predict(series[time:time + window_size][np.newaxis]))\n", 228 | "\n", 229 | "forecast = forecast[split_time-window_size:]\n", 230 | "results = np.array(forecast)[:, 0, 0]\n", 231 | "\n", 232 | "\n", 233 | "plt.figure(figsize=(10, 6))\n", 234 | "\n", 235 | "plot_series(time_valid, x_valid)\n", 236 | "plot_series(time_valid, results)" 237 | ] 238 | }, 239 | { 240 | "cell_type": "code", 241 | "execution_count": 0, 242 | "metadata": { 243 | "colab": {}, 244 | "colab_type": "code", 245 | "id": "KfPeqI7rz4LD" 246 | }, 247 | "outputs": [], 248 | "source": [ 249 | "tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()" 250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": 0, 255 | "metadata": { 256 | "colab": {}, 257 | "colab_type": "code", 258 | "id": "JUsdZB_tzDLe" 259 | }, 260 | "outputs": [], 261 | "source": [ 262 | "import matplotlib.image as mpimg\n", 263 | "import matplotlib.pyplot as plt\n", 264 | "\n", 265 | "#-----------------------------------------------------------\n", 266 | "# Retrieve a list of list results on training and test data\n", 267 | "# sets for each training epoch\n", 268 | "#-----------------------------------------------------------\n", 269 | "mae=history.history['mae']\n", 270 | "loss=history.history['loss']\n", 271 | "\n", 272 | "epochs=range(len(loss)) # Get number of epochs\n", 273 | "\n", 274 | "#------------------------------------------------\n", 275 | "# Plot MAE and Loss\n", 276 | 
"#------------------------------------------------\n", 277 | "plt.plot(epochs, mae, 'r')\n", 278 | "plt.plot(epochs, loss, 'b')\n", 279 | "plt.title('MAE and Loss')\n", 280 | "plt.xlabel(\"Epochs\")\n", 281 | "plt.ylabel(\"Value\")\n", 282 | "plt.legend([\"MAE\", \"Loss\"])\n", 283 | "\n", 284 | "plt.figure()\n", 285 | "\n", 286 | "epochs_zoom = epochs[200:]\n", 287 | "mae_zoom = mae[200:]\n", 288 | "loss_zoom = loss[200:]\n", 289 | "\n", 290 | "#------------------------------------------------\n", 291 | "# Plot Zoomed MAE and Loss\n", 292 | "#------------------------------------------------\n", 293 | "plt.plot(epochs_zoom, mae_zoom, 'r')\n", 294 | "plt.plot(epochs_zoom, loss_zoom, 'b')\n", 295 | "plt.title('MAE and Loss')\n", 296 | "plt.xlabel(\"Epochs\")\n", 297 | "plt.ylabel(\"Value\")\n", 298 | "plt.legend([\"MAE\", \"Loss\"])\n", 299 | "\n", 300 | "plt.figure()" 301 | ] 302 | } 303 | ], 304 | "metadata": { 305 | "accelerator": "GPU", 306 | "colab": { 307 | "collapsed_sections": [], 308 | "name": "S+P Week 3 Exercise Answer.ipynb", 309 | "provenance": [], 310 | "toc_visible": true, 311 | "version": "0.3.2" 312 | }, 313 | "kernelspec": { 314 | "display_name": "Python 3", 315 | "name": "python3" 316 | } 317 | }, 318 | "nbformat": 4, 319 | "nbformat_minor": 0 320 | } 321 | -------------------------------------------------------------------------------- /4. Sequences, Time Series and Prediction/Notes/Common patterns in time series.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/4. Sequences, Time Series and Prediction/Notes/Common patterns in time series.pdf -------------------------------------------------------------------------------- /4.
Sequences, Time Series and Prediction/Notes/Convolutions in Autoregressive Neural Networks.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/4. Sequences, Time Series and Prediction/Notes/Convolutions in Autoregressive Neural Networks.pdf -------------------------------------------------------------------------------- /4. Sequences, Time Series and Prediction/Notes/Dilated and causal convolution.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/4. Sequences, Time Series and Prediction/Notes/Dilated and causal convolution.pdf -------------------------------------------------------------------------------- /4. Sequences, Time Series and Prediction/Notes/Moving average and differencing.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/4. Sequences, Time Series and Prediction/Notes/Moving average and differencing.pdf -------------------------------------------------------------------------------- /4. Sequences, Time Series and Prediction/Notes/Outputting a sequence.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/4. Sequences, Time Series and Prediction/Notes/Outputting a sequence.pdf -------------------------------------------------------------------------------- /4. 
Sequences, Time Series and Prediction/Notes/Shape of the inputs to the RNN.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/agniiyer/TensorFlow-in-Practice-Specialization/26628e7ef4951731c3586a7850d17a49228e5323/4. Sequences, Time Series and Prediction/Notes/Shape of the inputs to the RNN.pdf -------------------------------------------------------------------------------- /4. Sequences, Time Series and Prediction/S+P Week 2 Lesson 1.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"S+P Week 2 Lesson 1.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"code","metadata":{"id":"zX4Kg8DUTKWO","colab_type":"code","colab":{}},"source":["#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"s6eq-RBcQ_Zr","colab":{}},"source":["try:\n"," # %tensorflow_version only exists in Colab.\n"," %tensorflow_version 2.x\n","except Exception:\n"," 
pass"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"colab_type":"code","id":"BOjujz601HcS","outputId":"1af09232-1995-46a6-f282-9fb8d56c1964","executionInfo":{"status":"ok","timestamp":1590935049524,"user_tz":-330,"elapsed":4336,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/","height":34}},"source":["import tensorflow as tf\n","import numpy as np\n","import matplotlib.pyplot as plt\n","print(tf.__version__)"],"execution_count":0,"outputs":[{"output_type":"stream","text":["2.2.0\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"asEdslR_05O_","outputId":"d1cfe0b7-53ad-455c-e434-37d816d07fc1","executionInfo":{"status":"ok","timestamp":1590935049525,"user_tz":-330,"elapsed":4328,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/","height":187}},"source":["dataset = tf.data.Dataset.range(10)\n","for val in dataset:\n"," print(val.numpy())"],"execution_count":0,"outputs":[{"output_type":"stream","text":["0\n","1\n","2\n","3\n","4\n","5\n","6\n","7\n","8\n","9\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"Lrv_ghSt1lgQ","outputId":"9af8b783-8c7d-4ae4-8b70-9cf888d4ba69","executionInfo":{"status":"ok","timestamp":1590935049527,"user_tz":-330,"elapsed":4320,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/"}},"source":["dataset = tf.data.Dataset.range(10)\n","dataset = dataset.window(5, shift=1)\n","for window_dataset in dataset:\n"," for val in window_dataset:\n"," 
print(val.numpy(), end=\" \")\n"," print()"],"execution_count":0,"outputs":[{"output_type":"stream","text":["0 1 2 3 4 \n","1 2 3 4 5 \n","2 3 4 5 6 \n","3 4 5 6 7 \n","4 5 6 7 8 \n","5 6 7 8 9 \n","6 7 8 9 \n","7 8 9 \n","8 9 \n","9 \n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab_type":"code","id":"QLEq6MG-2DN2","outputId":"9ffbcc83-8dd1-4a5f-9927-2150b7b013cd","executionInfo":{"status":"ok","timestamp":1590935049528,"user_tz":-330,"elapsed":4311,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/"}},"source":["dataset = tf.data.Dataset.range(10)\n","dataset = dataset.window(5, shift=1, drop_remainder=True)\n","for window_dataset in dataset:\n"," for val in window_dataset:\n"," print(val.numpy(), end=\" \")\n"," print()"],"execution_count":0,"outputs":[{"output_type":"stream","text":["0 1 2 3 4 \n","1 2 3 4 5 \n","2 3 4 5 6 \n","3 4 5 6 7 \n","4 5 6 7 8 \n","5 6 7 8 9 \n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"9PsGELODIPJs","colab_type":"text"},"source":["**Pro tip:** lambda window means the following function is in terms of window. 
flat_map applies this function to dataset and then flattens the result."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"PJ9CAHlJ2ODe","outputId":"02446da1-a9dc-4c3c-9377-d4659d7c1927","executionInfo":{"status":"ok","timestamp":1590935716146,"user_tz":-330,"elapsed":1598,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/","height":119}},"source":["dataset = tf.data.Dataset.range(10)\n","dataset = dataset.window(5, shift=1, drop_remainder=True)\n","dataset = dataset.flat_map(lambda window: window.batch(5)) # window is a tensor!\n","for window in dataset:\n"," print(window.numpy()) # Need numpy lists for ML!"],"execution_count":24,"outputs":[{"output_type":"stream","text":["[0 1 2 3 4]\n","[1 2 3 4 5]\n","[2 3 4 5 6]\n","[3 4 5 6 7]\n","[4 5 6 7 8]\n","[5 6 7 8 9]\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"1B33YpOfK4WF","colab_type":"text"},"source":["**Note:** We now want to split these sequences into features and labels. We want to input everything but the last element and have our model predict the last element."]},{"cell_type":"markdown","metadata":{"id":"ejUOkVzuKbzW","colab_type":"text"},"source":["**Pro tip:** window[:-1] is everything up to (but not including) the last element. 
window[-1:] is everything from the last element to the end, which is just the last element."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"DryEZ2Mz2nNV","outputId":"8e30692c-9640-4b8a-824c-e8fab00c623c","executionInfo":{"status":"ok","timestamp":1590935049530,"user_tz":-330,"elapsed":4294,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/"}},"source":["dataset = tf.data.Dataset.range(10)\n","dataset = dataset.window(5, shift=1, drop_remainder=True)\n","dataset = dataset.flat_map(lambda window: window.batch(5))\n","dataset = dataset.map(lambda window: (window[:-1], window[-1:]))\n","for x,y in dataset:\n"," print(x.numpy(), y.numpy())"],"execution_count":0,"outputs":[{"output_type":"stream","text":["[0 1 2 3] [4]\n","[1 2 3 4] [5]\n","[2 3 4 5] [6]\n","[3 4 5 6] [7]\n","[4 5 6 7] [8]\n","[5 6 7 8] [9]\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"ARxtJox7Oy5B","colab_type":"text"},"source":["**Note:** We now shuffle the sequences so that we don't accidentally introduce a bias towards particular orders of sequences. shuffle(buffer_size) randomly shuffles the elements of this dataset.\n","\n","This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.\n","\n","For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 
1,001-st) element, maintaining the 1,000 element buffer.\n","\n","In this case, we have 6 sequences and we use a buffer_size of 10."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"1tl-0BOKkEtk","outputId":"ec00ea15-967c-49c7-b64b-489601d5084f","executionInfo":{"status":"ok","timestamp":1590935049531,"user_tz":-330,"elapsed":4285,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/"}},"source":["dataset = tf.data.Dataset.range(10)\n","dataset = dataset.window(5, shift=1, drop_remainder=True)\n","dataset = dataset.flat_map(lambda window: window.batch(5))\n","dataset = dataset.map(lambda window: (window[:-1], window[-1:]))\n","dataset = dataset.shuffle(buffer_size=10)\n","for x,y in dataset:\n"," print(x.numpy(), y.numpy())\n"],"execution_count":0,"outputs":[{"output_type":"stream","text":["[4 5 6 7] [8]\n","[0 1 2 3] [4]\n","[1 2 3 4] [5]\n","[3 4 5 6] [7]\n","[2 3 4 5] [6]\n","[5 6 7 8] [9]\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"yMPWzsr4Q7tf","colab_type":"text"},"source":["**Pro tip:** prefetch creates a Dataset that prefetches elements from the main dataset.\n","\n","Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.\n","\n","The Dataset.prefetch(m) transformation prefetches m elements of its direct input. 
In this case, since its direct input is dataset.batch(n) and each element of that dataset is a batch (of n elements), it will prefetch m batches."]},{"cell_type":"code","metadata":{"colab_type":"code","id":"Wa0PNwxMGapy","outputId":"412b98f2-adfb-4d71-87c4-8b4a7aea61a8","executionInfo":{"status":"ok","timestamp":1590935049532,"user_tz":-330,"elapsed":4277,"user":{"displayName":"Agni Iyer","photoUrl":"https://lh5.googleusercontent.com/-t_0Yj_TZMvc/AAAAAAAAAAI/AAAAAAAABNo/ntatgaKFYUI/s64/photo.jpg","userId":"12872450379171189898"}},"colab":{"base_uri":"https://localhost:8080/"}},"source":["dataset = tf.data.Dataset.range(10)\n","dataset = dataset.window(5, shift=1, drop_remainder=True)\n","dataset = dataset.flat_map(lambda window: window.batch(5))\n","dataset = dataset.map(lambda window: (window[:-1], window[-1:]))\n","dataset = dataset.shuffle(buffer_size=10)\n","dataset = dataset.batch(2).prefetch(1)\n","for x,y in dataset:\n"," print(\"x = \", x.numpy())\n"," print(\"y = \", y.numpy())\n"],"execution_count":0,"outputs":[{"output_type":"stream","text":["x = [[0 1 2 3]\n"," [3 4 5 6]]\n","y = [[4]\n"," [7]]\n","x = [[2 3 4 5]\n"," [4 5 6 7]]\n","y = [[6]\n"," [8]]\n","x = [[1 2 3 4]\n"," [5 6 7 8]]\n","y = [[5]\n"," [9]]\n"],"name":"stdout"}]}]} -------------------------------------------------------------------------------- /4. 
Sequences, Time Series and Prediction/seq_to_seq.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"seq_to_seq.ipynb","provenance":[{"file_id":"https://github.com/JonathanSum/Tensorflow_Missing/blob/master/seq_to_seq.ipynb","timestamp":1591437617914}],"collapsed_sections":[]},"kernelspec":{"name":"python3","display_name":"Python 3"}},"cells":[{"cell_type":"code","metadata":{"id":"7KYLo9gNdVFs","colab_type":"code","outputId":"52d7aa0b-644d-4edc-e043-65e700e55804","colab":{"base_uri":"https://localhost:8080/","height":50}},"source":["import tensorflow as tf\n","print(tf.__version__)\n","import numpy as np\n","print(np.__version__)"],"execution_count":0,"outputs":[{"output_type":"stream","text":["2.0.0-beta1\n","1.16.5\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"ShBVK4wzmUsb","colab_type":"text"},"source":["This is the sequence to sequence example."]},{"cell_type":"code","metadata":{"id":"EJGG9Y33dYqS","colab_type":"code","outputId":"6e75199c-4913-4011-af7b-44023f173623","colab":{"base_uri":"https://localhost:8080/","height":218}},"source":["ds = tf.data.Dataset.range(10)\n","ds = ds.window(5,shift=1, drop_remainder = True)\n","ds = ds.flat_map(lambda w: w.batch(5))\n","ds = ds.map(lambda w: (w[:-1], w[1:])) ##w is the window\n","ds = ds.shuffle(buffer_size = 10)\n","ds = ds.batch(2).prefetch(1)\n","for x, y in ds:\n"," print(\"x =\", x.numpy())\n"," print(\"y =\", y.numpy())"],"execution_count":0,"outputs":[{"output_type":"stream","text":["x = [[3 4 5 6]\n"," [4 5 6 7]]\n","y = [[4 5 6 7]\n"," [5 6 7 8]]\n","x = [[5 6 7 8]\n"," [1 2 3 4]]\n","y = [[6 7 8 9]\n"," [2 3 4 5]]\n","x = [[0 1 2 3]\n"," [2 3 4 5]]\n","y = [[1 2 3 4]\n"," [3 4 5 6]]\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"d2YvKUUhmNfU","colab_type":"text"},"source":["This is the sequence to vector 
example."]},{"cell_type":"code","metadata":{"id":"rXxyIHUuhX4C","colab_type":"code","outputId":"94382d74-d494-4b29-ec55-eefd27efc22a","colab":{"base_uri":"https://localhost:8080/","height":218}},"source":["ds = tf.data.Dataset.range(10)\n","ds = ds.window(5,shift=1, drop_remainder = True)\n","ds = ds.flat_map(lambda w: w.batch(5))\n","ds = ds.map(lambda w: (w[:-1], w[-1:])) ##w is the window\n","ds = ds.shuffle(buffer_size = 10)\n","ds = ds.batch(2).prefetch(1)\n","for x, y in ds:\n"," print(\"x =\", x.numpy())\n"," print(\"y =\", y.numpy())"],"execution_count":0,"outputs":[{"output_type":"stream","text":["x = [[1 2 3 4]\n"," [4 5 6 7]]\n","y = [[5]\n"," [8]]\n","x = [[3 4 5 6]\n"," [2 3 4 5]]\n","y = [[7]\n"," [6]]\n","x = [[5 6 7 8]\n"," [0 1 2 3]]\n","y = [[9]\n"," [4]]\n"],"name":"stdout"}]}]} -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # TensorFlow in Practice Specialization 2 | deeplearning.ai on Coursera 3 | 4 | Discover the tools software developers use to build scalable AI-powered algorithms in TensorFlow, a popular open-source machine learning framework. 5 | 6 | In this four-course Specialization, you’ll explore exciting opportunities for AI applications. Begin by developing an understanding of how to build and train neural networks. Improve a network’s performance using convolutions as you train it to identify real-world images. You’ll teach machines to understand, analyze, and respond to human speech with natural language processing systems. Learn to process text, represent sentences as vectors, and input data to a neural network. You’ll even train an AI to create original poetry! 7 | 8 | AI is already transforming industries across the world. After finishing this Specialization, you’ll be able to apply your new TensorFlow skills to a wide range of problems and projects. 
9 | --------------------------------------------------------------------------------
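Every training notebook in this dump builds its input pipeline with the same `windowed_dataset` helper. As a standalone sketch of what that pipeline emits — `windowed_examples` is a name introduced here for illustration, not part of the course code — the window/split logic can be reproduced with NumPy alone:

```python
import numpy as np

def windowed_examples(series, window_size):
    """Reproduce what the notebooks' windowed_dataset pipeline emits,
    without tf.data: slide a window of window_size + 1 values over the
    series (shift=1, drop_remainder=True), then split each window into
    features (all but the last value) and a label (the last value)."""
    features, labels = [], []
    for start in range(len(series) - window_size):
        window = series[start:start + window_size + 1]
        features.append(window[:-1])
        labels.append(window[-1])
    return np.array(features), np.array(labels)

x, y = windowed_examples(np.arange(10), window_size=4)
print(x[0], y[0])        # [0 1 2 3] 4 -- matches the notebook demo output
print(x.shape, y.shape)  # (6, 4) (6,)
```

The remaining steps in `windowed_dataset` — `shuffle(shuffle_buffer)` and `batch(batch_size).prefetch(1)` — then amount to randomly permuting these rows and slicing them into groups of `batch_size`.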