├── _config.yml
├── Course 2 - Convolutional Neural Networks in TensorFlow
│   ├── dog 1.jpg
│   ├── rock.png
│   ├── Week 1
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   └── notes.txt
│   ├── Week 2
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   └── notes.txt
│   ├── Week 3
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   └── notes.txt
│   └── Week 4
│       ├── quiz-1.PNG
│       ├── quiz-2.PNG
│       ├── quiz-3.PNG
│       ├── quiz-4.PNG
│       ├── quiz-5.PNG
│       ├── quiz-6.PNG
│       ├── quiz-7.PNG
│       ├── quiz-8.PNG
│       └── notes.txt
├── Course 4 - Sequences, Time Series, and Prediction
│   ├── Week 1
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── quiz-9.PNG
│   │   ├── quiz-beginning.PNG
│   │   └── notes.txt
│   ├── Week 2
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── quiz-9.PNG
│   │   ├── quiz-6A.PNG
│   │   ├── quiz-6B.PNG
│   │   ├── quiz-beginning.PNG
│   │   └── Course 4 - Week 2 - Exercise-Question.ipynb
│   ├── Week 3
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── quiz-beginning.PNG
│   │   ├── Course 4 - Week 3 - Exercise-Question.ipynb
│   │   └── notes.txt
│   └── Week 4
│       ├── quiz-1.PNG
│       ├── quiz-2.PNG
│       ├── quiz-3.PNG
│       ├── quiz-4.PNG
│       ├── quiz-5.PNG
│       ├── quiz-6.PNG
│       ├── quiz-7.PNG
│       ├── quiz-8.PNG
│       └── quiz-beginning.PNG
├── Course 3 - Natural Language Processing in TensorFlow
│   ├── Week 1
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── Course 3 - Week 1 - Lesson 1.ipynb
│   │   ├── Course 3 - Week 1 - Lesson 2.ipynb
│   │   └── notes.txt
│   ├── Week 2
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── Embedding Visualization.PNG
│   │   ├── loading files into TensorFlow Projector.PNG
│   │   └── C3W2_exercise_meta.tsv
│   ├── Week 3
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── C3_W3_L2.h5
│   │   ├── C3_W3_L2c.h5
│   │   └── notes.txt
│   └── Week 4
│       ├── quiz-1.PNG
│       ├── quiz-2.PNG
│       ├── quiz-3.PNG
│       ├── quiz-4.PNG
│       ├── quiz-5.PNG
│       ├── quiz-6.PNG
│       ├── quiz-7.PNG
│       ├── quiz-8.PNG
│       ├── Course 3 - Week 4 - Exercise-Question.ipynb
│       └── notes.txt
├── Course 1 - Introduction to TensorFlow for AI, ML and DL
│   ├── Week 4
│   │   ├── quiz6.PNG
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-7.PNG
│   │   ├── notes.txt
│   │   └── Exercise4-Question.ipynb
│   ├── Week 1
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── notes.txt
│   │   └── The Hello World of Deep Learning with Neural Networks.ipynb
│   ├── Week 2
│   │   ├── quiz-1.PNG
│   │   ├── quiz-2.PNG
│   │   ├── quiz-3.PNG
│   │   ├── quiz-4.PNG
│   │   ├── quiz-5.PNG
│   │   ├── quiz-6.PNG
│   │   ├── quiz-7.PNG
│   │   ├── quiz-8.PNG
│   │   ├── notes.txt
│   │   ├── Copy_of_Course_1_Part_4_Lesson_4_Notebook.ipynb
│   │   └── Exercise2-MNIST.ipynb
│   └── Week 3
│       ├── quiz-1.PNG
│       ├── quiz-2.PNG
│       ├── quiz-3.PNG
│       ├── quiz-4.PNG
│       ├── quiz-5.PNG
│       ├── quiz-6.PNG
│       ├── notes.txt
│       └── Excercise 3 Question.ipynb
├── javascripts
│   └── scale.fix.js
├── stylesheets
│   ├── github-light.css
│   └── styles.css
├── README.md
├── params.json
└── index.html

(The .PNG, .jpg, and .h5 entries above are binary assets; only the repository's
text files are reproduced below.)

--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
theme: jekyll-theme-cayman
--------------------------------------------------------------------------------
/Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 1/notes.txt:
--------------------------------------------------------------------------------
## The 'Hello World' of neural networks

import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(units = 1, input_shape = [1])])

model.compile(optimizer = 'sgd',
              loss = 'mean_squared_error')

# The data follows y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype = float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype = float)

model.fit(xs, ys, epochs = 500)

print(model.predict([10.0]))   # close to 19, since y = 2x - 1
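A quick way to verify what was learned is to read the trained layer's weights.
A minimal sketch, continuing the session above (the printed values are
illustrative, not from the source):

weights, bias = model.layers[0].get_weights()
print(weights)   # roughly [[1.99]] -- the slope of y = 2x - 1
print(bias)      # roughly [-0.99] -- the intercept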
--------------------------------------------------------------------------------
/Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 3/notes.txt:
--------------------------------------------------------------------------------
## Implementing convolutional layers

import tensorflow as tf

# Two convolution/pooling stages, then a standard dense classifier
# over 10 classes.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu',
                           input_shape = (28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation = 'relu'),
    tf.keras.layers.Dense(10, activation = 'softmax')
])
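The notes stop at the model definition. A minimal compile-and-train sketch,
assuming Fashion MNIST as in this week of the course (the epoch count is
illustrative):

# Load Fashion MNIST and add the single grayscale channel Conv2D expects.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])
model.fit(x_train, y_train, epochs = 5)
model.evaluate(x_test, y_test)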
--------------------------------------------------------------------------------
/javascripts/scale.fix.js:
--------------------------------------------------------------------------------
var metas = document.getElementsByTagName('meta');
var i;
if (navigator.userAgent.match(/iPhone/i)) {
  // The source dump is truncated at this loop; the remainder of this file is
  // restored from the standard GitHub Pages scale.fix.js.
  for (i=0; i<metas.length; i++) {
    if (metas[i].name == "viewport") {
      metas[i].content = "width=device-width, minimum-scale=1.0, maximum-scale=1.0";
    }
  }
  document.write('<style>div.wrapper{width:auto;}footer{width:auto;}</style>');
}
--------------------------------------------------------------------------------

[... the source is truncated here: everything between scale.fix.js and the tail
of the following prediction snippet is missing ...]

> 0.5:
  print(fn + " is a human")
else:
  print(fn + " is a horse")
---------------------------------------------------------------------
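Only the tail of that snippet survives. It matches the upload-and-classify loop
used in the course's horses-vs-humans Colab notebook; a hedged reconstruction
under that assumption (model and the 300x300 input size come from the lost
context and are assumptions, not from the source):

import numpy as np
from google.colab import files
from keras.preprocessing import image

uploaded = files.upload()

for fn in uploaded.keys():
  # Load each uploaded image at the size the model was trained on (assumed).
  path = '/content/' + fn
  img = image.load_img(path, target_size = (300, 300))
  x = image.img_to_array(img)
  x = np.expand_dims(x, axis = 0)
  images = np.vstack([x])
  classes = model.predict(images, batch_size = 10)
  print(classes[0])
  if classes[0] > 0.5:
    print(fn + " is a human")
  else:
    print(fn + " is a horse")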
/Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 3/notes.txt: -------------------------------------------------------------------------------- 1 | ## Implementing convolutional layers 2 | 3 | model = tf.keras.models.Sequential([ 4 | tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu', 5 | input_shape = (28, 28, 1)), 6 | tf.keras.layers.MaxPooling2D(2, 2), 7 | tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu'), 8 | tf.keras.layers.MaxPooling2D(2, 2), 9 | tf.keras.layers.Flatten(), 10 | tf.keras.layers.Dense(128, activation = 'relu'), 11 | tf.keras.layers.Dense(10, activation = 'softmax') 12 | ]) --------------------------------------------------------------------------------
/javascripts/scale.fix.js: -------------------------------------------------------------------------------- 1 | var metas = document.getElementsByTagName('meta'); 2 | var i; 3 | if (navigator.userAgent.match(/iPhone/i)) { 4 | for (i=0; i<metas.length; i++) { 5 | if (metas[i].name == "viewport") { 6 | metas[i].content = "width=device-width, minimum-scale=1.0, maximum-scale=1.0"; 7 | } 8 | } 9 | document.addEventListener("gesturestart", gestureStart, false); 10 | } 11 | function gestureStart() { 12 | for (i=0; i<metas.length; i++) { 13 | if (metas[i].name == "viewport") { 14 | metas[i].content = "width=device-width, minimum-scale=0.25, maximum-scale=1.6"; 15 | } 16 | } 17 | } --------------------------------------------------------------------------------
/Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 4/notes.txt (only the tail of this file survived extraction): -------------------------------------------------------------------------------- 73 | if classes[0] > 0.5: 74 | print(fn + " is a human") 75 | else: 76 | print(fn + " is a horse") 77 | --------------------------------------------------------------------- 78 | --------------------------------------------------------------------------------
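The surviving lines above are the tail of the course's upload-and-classify loop from the horses-vs-humans exercise. A hedged sketch of what the lost earlier lines most likely contained follows; the names `fn` and `classes` come from the fragment itself, while `uploaded`, the Colab upload helper, and the 300x300 input size are assumptions based on the course notebook, not recovered text:

import numpy as np
from google.colab import files
from keras.preprocessing import image

uploaded = files.upload()
for fn in uploaded.keys():
    # Load the uploaded picture at the size the model is assumed to expect.
    img = image.load_img('/content/' + fn, target_size = (300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis = 0)
    images = np.vstack([x])
    classes = model.predict(images, batch_size = 10)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a human")
    else:
        print(fn + " is a horse")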
/Course 2 - Convolutional Neural Networks in TensorFlow/Week 3/notes.txt: -------------------------------------------------------------------------------- 1 | ## Coding transfer learning from the Inception model 2 | 3 | import os 4 | 5 | from tensorflow.keras import layers 6 | from tensorflow.keras import Model 7 | 8 | # Pre-trained weights come from:
9 | # https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 10 | 11 | 12 | from tensorflow.keras.applications.inception_v3 import InceptionV3 13 | 14 | local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5' 15 | 16 | pre_trained_model = InceptionV3(input_shape = (150, 150, 3), 17 | include_top = False, 18 | weights = None) 19 | 20 | pre_trained_model.load_weights(local_weights_file) 21 | 22 | 23 | for layer in pre_trained_model.layers: 24 | layer.trainable = False 25 | 26 | 27 | pre_trained_model.summary() 28 | -------------------------------------------------------------------------------------------- 29 | 30 | ## Coding your own model with transferred features 31 | 32 | last_layer = pre_trained_model.get_layer('mixed7') 33 | 34 | last_output = last_layer.output 35 | 36 | 37 | from tensorflow.keras.optimizers import RMSprop 38 | 39 | x = layers.Flatten()(last_output) 40 | x = layers.Dense(1024, activation = 'relu')(x) 41 | x = layers.Dense(1, activation = 'sigmoid')(x) 42 | 43 | model = Model(pre_trained_model.input, x) 44 | model.compile(optimizer = RMSprop(lr = 0.0001), 45 | loss = 'binary_crossentropy', 46 | metrics = ['acc']) 47 | 48 | 49 | # Add our data-augmentation parameters to ImageDataGenerator 50 | 51 | train_datagen = ImageDataGenerator(rescale = 1./255., 52 | rotation_range = 40, 53 | width_shift_range = 0.2, 54 | height_shift_range = 0.2, 55 | shear_range = 0.2, 56 | zoom_range = 0.2, 57 | horizontal_flip = True) 58 | 59 | train_generator = train_datagen.flow_from_directory( 60 | train_dir, 61 | batch_size = 20, 62 | class_mode = 'binary', 63 | target_size = (150, 150)) 64 | 65 | 66 | history = model.fit_generator( 67 | train_generator, 68 | validation_data = validation_generator, 69 | steps_per_epoch = 100, 70 | epochs = 100, 71 | validation_steps = 50, 72 | verbose = 2) 73 | -------------------------------------------------------------- 74 | 75 | ## Exploring dropouts 76 | 77 | from tensorflow.keras.optimizers import RMSprop 78 | 79 | x = layers.Flatten()(last_output) 80 | x = layers.Dense(1024, activation = 'relu')(x) 81 | x = layers.Dropout(0.2)(x) 82 | x = layers.Dense(1, activation = 'sigmoid')(x) 83 | 84 | model = Model(pre_trained_model.input, x) 85 | model.compile(optimizer = RMSprop(lr = 0.0001), 86 | loss = 'binary_crossentropy', 87 | metrics = ['acc']) 88 | 89 | (here, Dropout randomly discards 20% of the 1024 Dense activations at each training step) 90 | ------------------------------------------------------------- 91 | 92 | 93 | --------------------------------------------------------------------------------
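The notes pass a `validation_generator` to `fit_generator` without ever defining one. A minimal sketch of the missing piece, mirroring the train generator; `validation_dir` is an assumed path, and validation images get rescaling only, since augmenting them would distort the reported metrics:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# No augmentation on validation data, just the same 1/255 rescaling as training.
test_datagen = ImageDataGenerator(rescale = 1./255.)

validation_generator = test_datagen.flow_from_directory(
    validation_dir,  # assumed, e.g. '/tmp/validation'
    batch_size = 20,
    class_mode = 'binary',
    target_size = (150, 150))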
/stylesheets/github-light.css: -------------------------------------------------------------------------------- 1 | /*! 2 | * GitHub Light v0.4.1 3 | * Copyright (c) 2012 - 2017 GitHub, Inc. 4 | * Licensed under MIT (https://github.com/primer/github-syntax-theme-generator/blob/master/LICENSE) 5 | */ 6 | 7 | .pl-c /* comment, punctuation.definition.comment, string.comment */ { 8 | color: #6a737d; 9 | } 10 | 11 | .pl-c1 /* constant, entity.name.constant, variable.other.constant, variable.language, support, meta.property-name, support.constant, support.variable, meta.module-reference, markup.raw, meta.diff.header, meta.output */, 12 | .pl-s .pl-v /* string variable */ { 13 | color: #005cc5; 14 | } 15 | 16 | .pl-e /* entity */, 17 | .pl-en /* entity.name */ { 18 | color: #6f42c1; 19 | } 20 | 21 | .pl-smi /* variable.parameter.function, storage.modifier.package, storage.modifier.import, storage.type.java, variable.other */, 22 | .pl-s .pl-s1 /* string source */ { 23 | color: #24292e; 24 | } 25 | 26 | .pl-ent /* entity.name.tag, markup.quote */ { 27 | color: #22863a; 28 | } 29 | 30 | .pl-k /* keyword, storage, storage.type */ { 31 | color: #d73a49; 32 | } 33 | 34 | .pl-s /* string */, 35 | .pl-pds /* punctuation.definition.string, source.regexp, string.regexp.character-class */, 36 | .pl-s .pl-pse .pl-s1 /* string punctuation.section.embedded source */, 37 | .pl-sr /* string.regexp */, 38 | .pl-sr .pl-cce /* string.regexp constant.character.escape */, 39 | .pl-sr .pl-sre /* string.regexp source.ruby.embedded */, 40 | .pl-sr .pl-sra /* string.regexp string.regexp.arbitrary-repitition */ { 41 | color: #032f62; 42 | } 43 | 44 | .pl-v /* variable */, 45 | .pl-smw /* sublimelinter.mark.warning */ { 46 | color: #e36209; 47 | } 48 | 49 | .pl-bu /* invalid.broken, invalid.deprecated, invalid.unimplemented, message.error, brackethighlighter.unmatched, sublimelinter.mark.error */ { 50 | color: #b31d28; 51 | } 52 | 53 | .pl-ii /* invalid.illegal */ { 54 | color: #fafbfc; 55 | background-color: #b31d28; 56 | } 57 | 58 | .pl-c2 /* carriage-return */ { 59 | color: #fafbfc; 60 | background-color: #d73a49; 61 | } 62 | 63 | .pl-c2::before /* carriage-return */ { 64 | content: "^M"; 65 | } 66 | 67 | .pl-sr .pl-cce /* string.regexp constant.character.escape */ { 68 | font-weight: bold; 69 | color: #22863a; 70 | } 71 | 72 | .pl-ml /* markup.list */ { 73 | color: #735c0f; 74 | } 75 | 76 | .pl-mh /* markup.heading */, 77 | .pl-mh .pl-en /* markup.heading entity.name */, 78 | .pl-ms /* meta.separator */ { 79 | font-weight: bold; 80 | color: #005cc5; 81 | } 82 | 83 | .pl-mi /* markup.italic */ { 84 | font-style: italic; 85 | color: #24292e; 86 | } 87 | 88 | .pl-mb /* markup.bold */ { 89 | font-weight: bold; 90 | color: #24292e; 91 | } 92 | 93 | .pl-md /* markup.deleted, meta.diff.header.from-file, punctuation.definition.deleted */ { 94 | color: #b31d28; 95 | background-color: #ffeef0; 96 | } 97 | 98 | .pl-mi1 /* markup.inserted, meta.diff.header.to-file, punctuation.definition.inserted */ { 99 | color: #22863a; 100 | background-color: #f0fff4; 101 | } 102 | 103 | .pl-mc /* markup.changed, punctuation.definition.changed */ { 104 | color: #e36209; 105 | background-color: #ffebda; 106 | } 107 | 108 | .pl-mi2 /* markup.ignored, markup.untracked */ { 109 | color: #f6f8fa; 110 | background-color: #005cc5; 111 | } 112 | 113 | .pl-mdr /* meta.diff.range */ { 114 | font-weight: bold; 115 | color: #6f42c1; 116 | } 117 | 118 | .pl-ba /* brackethighlighter.tag, brackethighlighter.curly, brackethighlighter.round, brackethighlighter.square, brackethighlighter.angle, brackethighlighter.quote */ { 119 | color: #586069; 120 | } 121 | 122 | .pl-sg /* sublimelinter.gutter-mark */ { 123 | color: #959da5; 124 | } 125 | 126 | .pl-corl /* constant.other.reference.link, string.other.link */ { 127 | text-decoration: underline; 128 | color: #032f62; 129 | } 130 | 131 | --------------------------------------------------------------------------------
/README.md: -------------------------------------------------------------------------------- 1 | # TensorFlow in Practice Specialization, deeplearning.ai @ Coursera 2 | Contains notes, code, and answers to the quizzes and exercises of all 4 courses of the [TensorFlow in Practice specialization](https://www.coursera.org/specializations/tensorflow-in-practice?), by deeplearning.ai 3 | 4 | ## Course 1: Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning 5 | 6 | ➥ [Week 1: A New Programming Paradigm][1] 7 | 8 | ➥ [Week 2: Introduction to Computer Vision][2] 9 | 10 | ➥ [Week 3: Enhancing Vision with Convolutional Neural Networks][3] 11 | 12 | ➥ [Week 4: Using Real-world Images][4]

13 | 14 | ## Course 2: Convolutional Neural Networks in TensorFlow 15 | 16 | ➥ [Week 1: Exploring a Larger Dataset][5] 17 | 18 | ➥ [Week 2: Augmentation: A technique to avoid overfitting][6] 19 | 20 | ➥ [Week 3: Transfer Learning][7] 21 | 22 | ➥ [Week 4: Multiclass Classifications][8]

23 | 24 | ## Course 3: Natural Language Processing in TensorFlow 25 | 26 | ➥ [Week 1: Sentiment in text][9] 27 | 28 | ➥ [Week 2: Word Embeddings][10] 29 | 30 | ➥ [Week 3: Sequence Models][11] 31 | 32 | ➥ [Week 4: Sequence Models and literature][12]

33 | 34 | ## Course 4: Sequences, Time Series and Prediction 35 | 36 | ➥ [Week 1: Sequences and Prediction][13] 37 | 38 | ➥ [Week 2: Deep Neural Networks for Time Series][14] 39 | 40 | ➥ [Week 3: Recurrent Neural Networks for Time Series][15] 41 | 42 | ➥ [Week 4: Real-world time series data][16] 43 | 44 | [1]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%201 45 | [2]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%202 46 | [3]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%203 47 | [4]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%204 48 | [5]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%201 49 | [6]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%202 50 | [7]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%203 51 | [8]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%204 52 | [9]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%201 53 | [10]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%202 54 | [11]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%203 55 | [12]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%204 56 | [13]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%201 57 | [14]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%202 58 | [15]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%203 59 | [16]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%204 60 | 61 | *** 62 | -------------------------------------------------------------------------------- /Course 3 - Natural Language Processing in TensorFlow/Week 1/Course 3 - Week 1 - Lesson 2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Course 3 - Week 
1 - Lesson 2.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | }, 14 | "accelerator": "GPU" 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "code", 19 | "metadata": { 20 | "id": "qZdR6A0W1q7j", 21 | "colab_type": "code", 22 | "colab": {} 23 | }, 24 | "source": [ 25 | "import tensorflow as tf\n", 26 | "from tensorflow import keras\n", 27 | "\n", 28 | "from tensorflow.keras.preprocessing.text import Tokenizer\n", 29 | "from tensorflow.keras.preprocessing.sequence import pad_sequences" 30 | ], 31 | "execution_count": 0, 32 | "outputs": [] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "metadata": { 37 | "id": "YZkMtyCq2Ot6", 38 | "colab_type": "code", 39 | "colab": { 40 | "base_uri": "https://localhost:8080/", 41 | "height": 216 42 | }, 43 | "outputId": "dbc28b05-785f-4349-a3dc-7d055f52e9a4" 44 | }, 45 | "source": [ 46 | "sentences = [\n", 47 | " 'I love my dog',\n", 48 | " 'I love my cat',\n", 49 | " 'You love my dog!',\n", 50 | " 'Do you think my dog is amazing?'\n", 51 | "]\n", 52 | "\n", 53 | "tokenizer = Tokenizer(num_words = 100, oov_token = \"\")\n", 54 | "tokenizer.fit_on_texts(sentences)\n", 55 | "word_index = tokenizer.word_index\n", 56 | "\n", 57 | "sequences = tokenizer.texts_to_sequences(sentences)\n", 58 | "\n", 59 | "padded = pad_sequences(sequences, maxlen = 5)\n", 60 | "print(\"\\nWord Index = \", word_index)\n", 61 | "print(\"\\nSequences = \", sequences)\n", 62 | "print(\"\\nPadded Sequences: \")\n", 63 | "print(padded)" 64 | ], 65 | "execution_count": 2, 66 | "outputs": [ 67 | { 68 | "output_type": "stream", 69 | "text": [ 70 | "\n", 71 | "Word Index = {'': 1, 'my': 2, 'love': 3, 'dog': 4, 'i': 5, 'you': 6, 'cat': 7, 'do': 8, 'think': 9, 'is': 10, 'amazing': 11}\n", 72 | "\n", 73 | "Sequences = [[5, 3, 2, 4], [5, 3, 2, 7], [6, 3, 2, 4], [8, 6, 9, 2, 4, 10, 11]]\n", 74 | "\n", 75 | "Padded Sequences: \n", 76 | "[[ 0 5 3 2 4]\n", 77 | " [ 0 5 3 2 7]\n", 78 | " [ 0 6 3 2 4]\n", 79 | " [ 9 2 4 10 11]]\n" 80 | ], 81 | "name": "stdout" 82 | } 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "metadata": { 88 | "id": "vlNrGQUy21Uw", 89 | "colab_type": "code", 90 | "colab": { 91 | "base_uri": "https://localhost:8080/", 92 | "height": 124 93 | }, 94 | "outputId": "468ed270-81d6-4258-e330-4021f66749b8" 95 | }, 96 | "source": [ 97 | "## Try with words that the tokenizer wasn't fit to\n", 98 | "\n", 99 | "test_data = [\n", 100 | " 'i really love my dog',\n", 101 | " 'my dog loves my manatee'\n", 102 | "]\n", 103 | "\n", 104 | "test_seq = tokenizer.texts_to_sequences(test_data)\n", 105 | "print(\"\\nTest Sequence = \", test_seq)\n", 106 | "\n", 107 | "padded = pad_sequences(test_seq, maxlen = 10)\n", 108 | "print(\"\\nPadded Test Sequence: \")\n", 109 | "print(padded)" 110 | ], 111 | "execution_count": 3, 112 | "outputs": [ 113 | { 114 | "output_type": "stream", 115 | "text": [ 116 | "\n", 117 | "Test Sequence = [[5, 1, 3, 2, 4], [2, 4, 1, 2, 1]]\n", 118 | "\n", 119 | "Padded Test Sequence: \n", 120 | "[[0 0 0 0 0 5 1 3 2 4]\n", 121 | " [0 0 0 0 0 2 4 1 2 1]]\n" 122 | ], 123 | "name": "stdout" 124 | } 125 | ] 126 | } 127 | ] 128 | } -------------------------------------------------------------------------------- /params.json: -------------------------------------------------------------------------------- 1 | {"name":"Tensorflow-in-practice-deeplearning.ai-coursera","tagline":"Contains notes, codes, anwers to quizzes and exercises of all the 4 courses of the TensorFlow in Practice 
/params.json: -------------------------------------------------------------------------------- 1 | {"name":"Tensorflow-in-practice-deeplearning.ai-coursera","tagline":"Contains notes, code, and answers to the quizzes and exercises of all 4 courses of the TensorFlow in Practice specialization, by deeplearning.ai","body":"# TensorFlow in Practice, by deeplearning.ai @ Coursera
\r\n\r\nContains notes, code, and answers to the quizzes and exercises of all 4 courses of the [TensorFlow in Practice specialization](https://www.coursera.org/specializations/tensorflow-in-practice?), by deeplearning.ai\r\n\r\n## Course 1: Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning\r\n\r\n➥ [Week 1: A New Programming Paradigm][1]\r\n\r\n➥ [Week 2: Introduction to Computer Vision][2]\r\n\r\n➥ [Week 3: Enhancing Vision with Convolutional Neural Networks][3]\r\n\r\n➥ [Week 4: Using Real-world Images][4]

\r\n\r\n## Course 2: Convolutional Neural Networks in TensorFlow\r\n\r\n➥ [Week 1: Exploring a Larger Dataset][5]\r\n\r\n➥ [Week 2: Augmentation: A technique to avoid overfitting][6]\r\n\r\n➥ [Week 3: Transfer Learning][7]\r\n\r\n➥ [Week 4: Multiclass Classifications][8]

\r\n\r\n## Course 3: Natural Language Processing in TensorFlow\r\n\r\n➥ [Week 1: Sentiment in text][9]\r\n\r\n➥ [Week 2: Word Embeddings][10]\r\n\r\n➥ [Week 3: Sequence Models][11]\r\n\r\n➥ [Week 4: Sequence Models and literature][12]

\r\n\r\n## Course 4: Sequences, Time Series and Prediction\r\n\r\n➥ [Week 1: Sequences and Prediction][13]\r\n\r\n➥ [Week 2: Deep Neural Networks for Time Series][14]\r\n\r\n➥ [Week 3: Recurrent Neural Networks for Time Series][15]\r\n\r\n➥ [Week 4: Real-world time series data][16]\r\n\r\n[1]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%201\r\n[2]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%202\r\n[3]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%203\r\n[4]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%201%20-%20Introduction%20to%20TensorFlow%20for%20AI%2C%20ML%20and%20DL/Week%204\r\n[5]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%201\r\n[6]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%202\r\n[7]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%203\r\n[8]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%202%20-%20Convolutional%20Neural%20Networks%20in%20TensorFlow/Week%204\r\n[9]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%201\r\n[10]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%202\r\n[11]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%203\r\n[12]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%203%20-%20Natural%20Language%20Processing%20in%20TensorFlow/Week%204\r\n[13]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%201\r\n[14]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%202\r\n[15]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%203\r\n[16]: https://github.com/Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera/tree/master/Course%204%20-%20Sequences%2C%20Time%20Series%2C%20and%20Prediction/Week%204\r\n\r\n***","note":"Don't delete this file! 
It's used internally to help with page regeneration."} -------------------------------------------------------------------------------- /Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 2/Copy_of_Course_1_Part_4_Lesson_4_Notebook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Copy of Course 1 - Part 4 - Lesson 4 - Notebook.ipynb", 7 | "version": "0.3.2", 8 | "provenance": [], 9 | "collapsed_sections": [] 10 | }, 11 | "kernelspec": { 12 | "name": "python3", 13 | "display_name": "Python 3" 14 | }, 15 | "accelerator": "GPU" 16 | }, 17 | "cells": [ 18 | { 19 | "cell_type": "code", 20 | "metadata": { 21 | "id": "N9-BCmi15L93", 22 | "colab_type": "code", 23 | "outputId": "dccf6666-3f26-44d6-9955-c68816b69679", 24 | "colab": { 25 | "base_uri": "https://localhost:8080/", 26 | "height": 323 27 | } 28 | }, 29 | "source": [ 30 | "import tensorflow as tf\n", 31 | "\n", 32 | "class myCallback(tf.keras.callbacks.Callback):\n", 33 | " def on_epoch_end(self, epoch, logs={}):\n", 34 | " if(logs.get('acc')>0.6):\n", 35 | " print(\"\\nReached 60% accuracy so cancelling training!\")\n", 36 | " self.model.stop_training = True\n", 37 | "\n", 38 | "mnist = tf.keras.datasets.fashion_mnist\n", 39 | "\n", 40 | "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n", 41 | "x_train, x_test = x_train / 255.0, x_test / 255.0\n", 42 | "\n", 43 | "callbacks = myCallback()\n", 44 | "\n", 45 | "model = tf.keras.models.Sequential([\n", 46 | " tf.keras.layers.Flatten(input_shape=(28, 28)),\n", 47 | " tf.keras.layers.Dense(512, activation=tf.nn.relu),\n", 48 | " tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n", 49 | "])\n", 50 | "model.compile(optimizer='adam',\n", 51 | " loss='sparse_categorical_crossentropy',\n", 52 | " metrics=['accuracy'])\n", 53 | "\n", 54 | "model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])" 55 | ], 56 | "execution_count": 1, 57 | "outputs": [ 58 | { 59 | "output_type": "stream", 60 | "text": [ 61 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n", 62 | "32768/29515 [=================================] - 0s 0us/step\n", 63 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n", 64 | "26427392/26421880 [==============================] - 0s 0us/step\n", 65 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n", 66 | "8192/5148 [===============================================] - 0s 0us/step\n", 67 | "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n", 68 | "4423680/4422102 [==============================] - 0s 0us/step\n", 69 | "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 70 | "Instructions for updating:\n", 71 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n", 72 | "Epoch 1/10\n", 73 | "59552/60000 [============================>.] 
- ETA: 0s - loss: 0.4755 - acc: 0.8302\n", 74 | "Reached 60% accuracy so cancelling training!\n", 75 | "60000/60000 [==============================] - 6s 108us/sample - loss: 0.4751 - acc: 0.8304\n" 76 | ], 77 | "name": "stdout" 78 | }, 79 | { 80 | "output_type": "execute_result", 81 | "data": { 82 | "text/plain": [ 83 | "" 84 | ] 85 | }, 86 | "metadata": { 87 | "tags": [] 88 | }, 89 | "execution_count": 1 90 | } 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "metadata": { 96 | "id": "uyO0l6czkosn", 97 | "colab_type": "code", 98 | "colab": {} 99 | }, 100 | "source": [ 101 | "" 102 | ], 103 | "execution_count": 0, 104 | "outputs": [] 105 | } 106 | ] 107 | } --------------------------------------------------------------------------------
/Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 2/Exercise2-MNIST.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "tOoyQ70H00_s" 8 | }, 9 | "source": [ 10 | "## Exercise 2\n", 11 | "In the course you learned how to do classification using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.\n", 12 | "\n", 13 | "Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.\n", 14 | "\n", 15 | "Some notes:\n", 16 | "1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger\n", 17 | "2. When it reaches 99% or greater it should print out the string \"Reached 99% accuracy so cancelling training!\"\n", 18 | "3. If you add any additional variables, make sure you use the same names as the ones used in the class\n", 19 | "\n", 20 | "I've started the code for you below -- how would you finish it? " 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 1, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "import tensorflow as tf\n", 30 | "from os import path, getcwd, chdir\n", 31 | "\n", 32 | "# DO NOT CHANGE THE LINE BELOW.
If you are developing in a local\n", 33 | "# environment, then grab mnist.npz from the Coursera Jupyter Notebook\n", 34 | "# and place it inside a local folder and edit the path to that location\n", 35 | "path = f\"{getcwd()}/../tmp2/mnist.npz\"" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": 6, 41 | "metadata": { 42 | "colab": {}, 43 | "colab_type": "code", 44 | "id": "9rvXQGAA0ssC" 45 | }, 46 | "outputs": [], 47 | "source": [ 48 | "# GRADED FUNCTION: train_mnist\n", 49 | "def train_mnist():\n", 50 | " # Please write your code only where you are indicated.\n", 51 | " # please do not remove # model fitting inline comments.\n", 52 | "\n", 53 | " class myCallback(tf.keras.callbacks.Callback):\n", 54 | " def on_epoch_end(self, epoch, logs = {}):\n", 55 | " if(logs.get('acc') > 0.99):\n", 56 | " print(\"Reached 99% accuracy so cancelling training!\")\n", 57 | " self.model.stop_training = True\n", 58 | " \n", 59 | " callbacks = myCallback()\n", 60 | "\n", 61 | " mnist = tf.keras.datasets.mnist\n", 62 | "\n", 63 | " (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)\n", 64 | " x_train = x_train / 255.0\n", 65 | " x_test = x_test / 255.0\n", 66 | "\n", 67 | " model = tf.keras.models.Sequential([\n", 68 | " tf.keras.layers.Flatten(),\n", 69 | " tf.keras.layers.Dense(512, activation = tf.nn.relu),\n", 70 | " tf.keras.layers.Dense(10, activation = tf.nn.softmax)\n", 71 | " ])\n", 72 | "\n", 73 | " model.compile(optimizer='adam',\n", 74 | " loss='sparse_categorical_crossentropy',\n", 75 | " metrics=['accuracy'])\n", 76 | " \n", 77 | " # model fitting\n", 78 | " history = model.fit(x_train, y_train,\n", 79 | " epochs = 8, callbacks = [callbacks]\n", 80 | " )\n", 81 | " \n", 82 | " # model fitting\n", 83 | " return history.epoch, history.history['acc'][-1]" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": 7, 89 | "metadata": { 90 | "colab": {}, 91 | "colab_type": "code", 92 | "id": "9rvXQGAA0ssC" 93 | }, 94 | "outputs": [ 95 | { 96 | "name": "stdout", 97 | "output_type": "stream", 98 | "text": [ 99 | "Epoch 1/8\n", 100 | "60000/60000 [==============================] - 14s 237us/sample - loss: 0.2053 - acc: 0.9397\n", 101 | "Epoch 2/8\n", 102 | "60000/60000 [==============================] - 14s 235us/sample - loss: 0.0824 - acc: 0.9748\n", 103 | "Epoch 3/8\n", 104 | "60000/60000 [==============================] - 14s 235us/sample - loss: 0.0542 - acc: 0.9829\n", 105 | "Epoch 4/8\n", 106 | "60000/60000 [==============================] - 14s 235us/sample - loss: 0.0387 - acc: 0.9877\n", 107 | "Epoch 5/8\n", 108 | "59904/60000 [============================>.] 
- ETA: 0s - loss: 0.0275 - acc: 0.9909Reached 99% accuracy so cancelling training!\n", 109 | "60000/60000 [==============================] - 14s 236us/sample - loss: 0.0275 - acc: 0.9909\n" 110 | ] 111 | }, 112 | { 113 | "data": { 114 | "text/plain": [ 115 | "([0, 1, 2, 3, 4], 0.9909)" 116 | ] 117 | }, 118 | "execution_count": 7, 119 | "metadata": {}, 120 | "output_type": "execute_result" 121 | } 122 | ], 123 | "source": [ 124 | "train_mnist()" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": {}, 131 | "outputs": [], 132 | "source": [] 133 | } 134 | ], 135 | "metadata": { 136 | "coursera": { 137 | "course_slug": "introduction-tensorflow", 138 | "graded_item_id": "d6dew", 139 | "launcher_item_id": "FExZ4" 140 | }, 141 | "kernelspec": { 142 | "display_name": "Python 3", 143 | "language": "python", 144 | "name": "python3" 145 | }, 146 | "language_info": { 147 | "codemirror_mode": { 148 | "name": "ipython", 149 | "version": 3 150 | }, 151 | "file_extension": ".py", 152 | "mimetype": "text/x-python", 153 | "name": "python", 154 | "nbconvert_exporter": "python", 155 | "pygments_lexer": "ipython3", 156 | "version": "3.6.8" 157 | } 158 | }, 159 | "nbformat": 4, 160 | "nbformat_minor": 1 161 | } 162 | -------------------------------------------------------------------------------- /Course 3 - Natural Language Processing in TensorFlow/Week 4/Course 3 - Week 4 - Exercise-Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Copy of NLP-Week4-Exercise-Shakespeare-Question.ipynb", 7 | "provenance": [] 8 | }, 9 | "kernelspec": { 10 | "name": "python2", 11 | "display_name": "Python 2" 12 | }, 13 | "accelerator": "GPU" 14 | }, 15 | "cells": [ 16 | { 17 | "cell_type": "code", 18 | "metadata": { 19 | "id": "BOwsuGQQY9OL", 20 | "colab_type": "code", 21 | "colab": {} 22 | }, 23 | "source": [ 24 | "from tensorflow.keras.preprocessing.sequence import pad_sequences\n", 25 | "from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional\n", 26 | "from tensorflow.keras.preprocessing.text import Tokenizer\n", 27 | "from tensorflow.keras.models import Sequential\n", 28 | "from tensorflow.keras.optimizers import Adam\n", 29 | "### YOUR CODE HERE\n", 30 | "# Figure out how to import regularizers\n", 31 | "###\n", 32 | "import tensorflow.keras.utils as ku \n", 33 | "import numpy as np " 34 | ], 35 | "execution_count": 0, 36 | "outputs": [] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "metadata": { 41 | "colab_type": "code", 42 | "id": "PRnDnCW-Z7qv", 43 | "colab": {} 44 | }, 45 | "source": [ 46 | "tokenizer = Tokenizer()\n", 47 | "!wget --no-check-certificate \\\n", 48 | " https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sonnets.txt \\\n", 49 | " -O /tmp/sonnets.txt\n", 50 | "data = open('/tmp/sonnets.txt').read()\n", 51 | "\n", 52 | "corpus = data.lower().split(\"\\n\")\n", 53 | "\n", 54 | "\n", 55 | "tokenizer.fit_on_texts(corpus)\n", 56 | "total_words = len(tokenizer.word_index) + 1\n", 57 | "\n", 58 | "# create input sequences using list of tokens\n", 59 | "input_sequences = []\n", 60 | "for line in corpus:\n", 61 | "\ttoken_list = tokenizer.texts_to_sequences([line])[0]\n", 62 | "\tfor i in range(1, len(token_list)):\n", 63 | "\t\tn_gram_sequence = token_list[:i+1]\n", 64 | "\t\tinput_sequences.append(n_gram_sequence)\n", 65 | "\n", 66 | "\n", 67 | "# pad sequences \n", 68 | 
"max_sequence_len = max([len(x) for x in input_sequences])\n", 69 | "input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))\n", 70 | "\n", 71 | "# create predictors and label\n", 72 | "predictors, label = input_sequences[:,:-1],input_sequences[:,-1]\n", 73 | "\n", 74 | "label = ku.to_categorical(label, num_classes=total_words)" 75 | ], 76 | "execution_count": 0, 77 | "outputs": [] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "metadata": { 82 | "id": "w9vH8Y59ajYL", 83 | "colab_type": "code", 84 | "colab": {} 85 | }, 86 | "source": [ 87 | "model = Sequential()\n", 88 | "model.add(# Your Embedding Layer)\n", 89 | "model.add(# An LSTM Layer)\n", 90 | "model.add(# A dropout layer)\n", 91 | "model.add(# Another LSTM Layer)\n", 92 | "model.add(# A Dense Layer including regularizers)\n", 93 | "model.add(# A Dense Layer)\n", 94 | "# Pick an optimizer\n", 95 | "model.compile(# Pick a loss function and an optimizer)\n", 96 | "print(model.summary())\n" 97 | ], 98 | "execution_count": 0, 99 | "outputs": [] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "metadata": { 104 | "id": "AIg2f1HBxqof", 105 | "colab_type": "code", 106 | "colab": {} 107 | }, 108 | "source": [ 109 | " history = model.fit(predictors, label, epochs=100, verbose=1)" 110 | ], 111 | "execution_count": 0, 112 | "outputs": [] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "metadata": { 117 | "id": "1fXTEO3GJ282", 118 | "colab_type": "code", 119 | "colab": {} 120 | }, 121 | "source": [ 122 | "import matplotlib.pyplot as plt\n", 123 | "acc = history.history['acc']\n", 124 | "loss = history.history['loss']\n", 125 | "\n", 126 | "epochs = range(len(acc))\n", 127 | "\n", 128 | "plt.plot(epochs, acc, 'b', label='Training accuracy')\n", 129 | "plt.title('Training accuracy')\n", 130 | "\n", 131 | "plt.figure()\n", 132 | "\n", 133 | "plt.plot(epochs, loss, 'b', label='Training Loss')\n", 134 | "plt.title('Training loss')\n", 135 | "plt.legend()\n", 136 | "\n", 137 | "plt.show()" 138 | ], 139 | "execution_count": 0, 140 | "outputs": [] 141 | }, 142 | { 143 | "cell_type": "code", 144 | "metadata": { 145 | "id": "6Vc6PHgxa6Hm", 146 | "colab_type": "code", 147 | "colab": {} 148 | }, 149 | "source": [ 150 | "seed_text = \"Help me Obi Wan Kenobi, you're my only hope\"\n", 151 | "next_words = 100\n", 152 | " \n", 153 | "for _ in range(next_words):\n", 154 | "\ttoken_list = tokenizer.texts_to_sequences([seed_text])[0]\n", 155 | "\ttoken_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')\n", 156 | "\tpredicted = model.predict_classes(token_list, verbose=0)\n", 157 | "\toutput_word = \"\"\n", 158 | "\tfor word, index in tokenizer.word_index.items():\n", 159 | "\t\tif index == predicted:\n", 160 | "\t\t\toutput_word = word\n", 161 | "\t\t\tbreak\n", 162 | "\tseed_text += \" \" + output_word\n", 163 | "print(seed_text)" 164 | ], 165 | "execution_count": 0, 166 | "outputs": [] 167 | } 168 | ] 169 | } -------------------------------------------------------------------------------- /Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 1/The Hello World of Deep Learning with Neural Networks.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Colab1-for-deeplearn.ipynb","version":"0.3.2","provenance":[],"toc_visible":true},"kernelspec":{"name":"python2","display_name":"Python 2"}},"cells":[{"metadata":{"id":"ZIAkIlfmCe1B","colab_type":"text"},"cell_type":"markdown","source":["# 
The Hello World of Deep Learning with Neural Networks"]},{"metadata":{"id":"fA93WUy1zzWf","colab_type":"text"},"cell_type":"markdown","source":["Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. \n","\n","In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' — \n","\n","\n","```\n","float hw_function(float x){\n","    float y = (2 * x) - 1;\n","    return y;\n","}\n","```\n","\n","So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. \n","\n","This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.\n"]},{"metadata":{"id":"DzbtdRcZDO9B","colab_type":"text"},"cell_type":"markdown","source":["## Imports\n","\n","Let's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.\n","\n","We then import a library called numpy, which helps us to represent our data as lists easily and quickly.\n","\n","The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too."]},{"metadata":{"id":"X9uIpOS2zx7k","colab_type":"code","colab":{}},"cell_type":"code","source":["import tensorflow as tf\n","import numpy as np\n","from tensorflow import keras"],"execution_count":0,"outputs":[]},{"metadata":{"id":"wwJGmDrQ0EoB","colab_type":"text"},"cell_type":"markdown","source":["## Define and Compile the Neural Network\n","\n","Next we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value."]},{"metadata":{"id":"kQFAr_xo0M4T","colab_type":"code","colab":{}},"cell_type":"code","source":["model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])"],"execution_count":0,"outputs":[]},{"metadata":{"id":"KhjZjZ-c0Ok9","colab_type":"text"},"cell_type":"markdown","source":["Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer.\n","\n","If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here — let's explain...\n","\n","We know that in our function, the relationship between the numbers is y=2x-1. \n","\n","When the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function compares the guessed answers against the known correct answers and measures how well or how badly it did.\n","\n","It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower).\n","\n","It will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)\n","\n","Over time you will learn the different and appropriate loss and optimizer functions for different scenarios. 
\n"]},{"metadata":{"id":"m8YQN1H41L-Y","colab_type":"code","colab":{}},"cell_type":"code","source":["model.compile(optimizer='sgd', loss='mean_squared_error')"],"execution_count":0,"outputs":[]},{"metadata":{"id":"5QyOUhFw1OUX","colab_type":"text"},"cell_type":"markdown","source":["## Providing the Data\n","\n","Next up we'll feed in some data. In this case we are taking 6 xs and 6ys. You can see that the relationship between these is that y=2x-1, so where x = -1, y=-3 etc. etc. \n","\n","A python library called 'Numpy' provides lots of array type data structures that are a defacto standard way of doing it. We declare that we want to use these by specifying the values as an np.array[]"]},{"metadata":{"id":"4Dxk4q-jzEy4","colab_type":"code","colab":{}},"cell_type":"code","source":["xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)\n","ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)"],"execution_count":0,"outputs":[]},{"metadata":{"id":"n_YcWRElnM_b","colab_type":"text"},"cell_type":"markdown","source":["# Training the Neural Network"]},{"metadata":{"id":"c-Jk4dG91dvD","colab_type":"text"},"cell_type":"markdown","source":["The process of training the neural network, where it 'learns' the relationship between the Xs and Ys is in the **model.fit** call. This is where it will go through the loop we spoke about above, making a guess, measuring how good or bad it is (aka the loss), using the opimizer to make another guess etc. It will do it for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side."]},{"metadata":{"id":"lpRrl7WK10Pq","colab_type":"code","colab":{}},"cell_type":"code","source":["model.fit(xs, ys, epochs=500)"],"execution_count":0,"outputs":[]},{"metadata":{"id":"kaFIr71H2OZ-","colab_type":"text"},"cell_type":"markdown","source":["Ok, now you have a model that has been trained to learn the relationshop between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:"]},{"metadata":{"id":"oxNzL4lS2Gui","colab_type":"code","colab":{}},"cell_type":"code","source":["print(model.predict([10.0]))"],"execution_count":0,"outputs":[]},{"metadata":{"id":"btF2CSFH2iEX","colab_type":"text"},"cell_type":"markdown","source":["You might have thought 19, right? But it ended up being a little under. Why do you think that is? \n","\n","Remember that neural networks deal with probabilities, so given the data that we fed the NN with, it calculated that there is a very high probability that the relationship between X and Y is Y=2X-1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 19, but not necessarily 19. \n","\n","As you work with neural networks, you'll see this pattern recurring. 
/Course 4 - Sequences, Time Series, and Prediction/Week 2/Course 4 - Week 2 - Exercise-Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Copy of S+P Week 2 Exercise Question.ipynb", 7 | "provenance": [] 8 | }, 9 | "kernelspec": { 10 | "name": "python3", 11 | "display_name": "Python 3" 12 | } 13 | }, 14 | "cells": [ 15 | { 16 | "cell_type": "code", 17 | "metadata": { 18 | "id": "D1J15Vh_1Jih", 19 | "colab_type": "code", 20 | "cellView": "both", 21 | "colab": {} 22 | }, 23 | "source": [ 24 | "!pip install tf-nightly-2.0-preview" 25 | ], 26 | "execution_count": 0, 27 | "outputs": [] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "metadata": { 32 | "id": "BOjujz601HcS", 33 | "colab_type": "code", 34 | "colab": {} 35 | }, 36 | "source": [ 37 | "import tensorflow as tf\n", 38 | "import numpy as np\n", 39 | "import matplotlib.pyplot as plt\n", 40 | "print(tf.__version__)" 41 | ], 42 | "execution_count": 0, 43 | "outputs": [] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "metadata": { 48 | "colab_type": "code", 49 | "id": "Zswl7jRtGzkk", 50 | "colab": {} 51 | }, 52 | "source": [ 53 | "def plot_series(time, series, format=\"-\", start=0, end=None):\n", 54 | "    plt.plot(time[start:end], series[start:end], format)\n", 55 | "    plt.xlabel(\"Time\")\n", 56 | "    plt.ylabel(\"Value\")\n", 57 | "    plt.grid(False)\n", 58 | "\n", 59 | "def trend(time, slope=0):\n", 60 | "    return slope * time\n", 61 | "\n", 62 | "def seasonal_pattern(season_time):\n", 63 | "    \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n", 64 | "    return np.where(season_time < 0.1,\n", 65 | "                    np.cos(season_time * # YOUR CODE HERE # * np.pi),\n", 66 | "                    #YOUR CODE HERE# / np.exp(#YOUR CODE HERE# * season_time))\n", 67 | "\n", 68 | "def seasonality(time, period, amplitude=1, phase=0):\n", 69 | "    \"\"\"Repeats the same pattern at each period\"\"\"\n", 70 | "    season_time = ((time + phase) % period) / period\n", 71 | "    return amplitude * seasonal_pattern(season_time)\n", 72 | "\n", 73 | "def noise(time, noise_level=1, seed=None):\n", 74 | "    rnd = np.random.RandomState(seed)\n", 75 | "    return rnd.randn(len(time)) * noise_level\n", 76 | "\n", 77 | "time = np.arange(10 * 365 + 1, dtype=\"float32\")\n", 78 | "baseline = # YOUR CODE HERE #\n", 79 | "series = trend(time, # YOUR CODE HERE#) \n", 80 | "baseline = 10\n", 81 | "amplitude = 40\n", 82 | "slope = # YOUR CODE HERE#\n", 83 | "noise_level = # YOUR CODE HERE#\n", 84 | "\n", 85 | "# Create the series\n", 86 | "series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n", 87 | "# Update with noise\n", 88 | "series += noise(time, noise_level, seed=51)\n", 89 | "\n", 90 | "split_time = 3000\n", 91 | "time_train = time[:split_time]\n", 92 | "x_train = series[:split_time]\n", 93 | "time_valid = time[split_time:]\n", 94 | "x_valid = series[split_time:]\n", 95 | "\n", 96 | "window_size = 20\n", 97 | "batch_size = 32\n", 98 | "shuffle_buffer_size = 1000\n", 99 | "\n", 100 | "plot_series(time, series)" 101 | ], 102 | "execution_count": 0, 103 | "outputs": [] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": 
{ 108 | "id": "GfUTNqti_lNC", 109 | "colab_type": "text" 110 | }, 111 | "source": [ 112 | "Desired output -- a chart that looks like this:\n", 113 | "\n", 114 | "![Chart showing upward trend and seasonailty](http://www.laurencemoroney.com/wp-content/uploads/2019/07/plot1.png)" 115 | ] 116 | }, 117 | { 118 | "cell_type": "code", 119 | "metadata": { 120 | "id": "4sTTIOCbyShY", 121 | "colab_type": "code", 122 | "colab": {} 123 | }, 124 | "source": [ 125 | "def windowed_dataset(series, window_size, batch_size, shuffle_buffer):\n", 126 | " dataset = tf.data.Dataset.from_tensor_slices(series)\n", 127 | " dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)\n", 128 | " dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))\n", 129 | " dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))\n", 130 | " dataset = dataset.batch(batch_size).prefetch(1)\n", 131 | " return dataset" 132 | ], 133 | "execution_count": 0, 134 | "outputs": [] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "metadata": { 139 | "id": "TW-vT7eLYAdb", 140 | "colab_type": "code", 141 | "colab": {} 142 | }, 143 | "source": [ 144 | "dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n", 145 | "\n", 146 | "\n", 147 | "model = tf.keras.models.Sequential([\n", 148 | " tf.keras.layers.Dense(# YOUR CODE HERE #),\n", 149 | " tf.keras.layers.Dense(# YOUR CODE HERE #, activation=\"relu\"), \n", 150 | " tf.keras.layers.Dense(1)\n", 151 | "])\n", 152 | "\n", 153 | "model.compile(loss=# YOUR CODE HERE #, optimizer=# YOUR CODE HERE#))\n", 154 | "model.fit(dataset,epochs=100,verbose=0)\n", 155 | "\n" 156 | ], 157 | "execution_count": 0, 158 | "outputs": [] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "metadata": { 163 | "id": "efhco2rYyIFF", 164 | "colab_type": "code", 165 | "colab": {} 166 | }, 167 | "source": [ 168 | "forecast = []\n", 169 | "for time in range(len(series) - window_size):\n", 170 | " forecast.append(model.predict(series[time:time + window_size][np.newaxis]))\n", 171 | "\n", 172 | "forecast = forecast[split_time-window_size:]\n", 173 | "results = np.array(forecast)[:, 0, 0]\n", 174 | "\n", 175 | "\n", 176 | "plt.figure(figsize=(10, 6))\n", 177 | "\n", 178 | "plot_series(time_valid, x_valid)\n", 179 | "plot_series(time_valid, results)" 180 | ], 181 | "execution_count": 0, 182 | "outputs": [] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "metadata": { 187 | "id": "-kT6j186YO6K", 188 | "colab_type": "code", 189 | "colab": {} 190 | }, 191 | "source": [ 192 | "tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()\n", 193 | "# EXPECTED OUTPUT\n", 194 | "# A Value less than 3" 195 | ], 196 | "execution_count": 0, 197 | "outputs": [] 198 | } 199 | ] 200 | } -------------------------------------------------------------------------------- /Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 3/Excercise 3 Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "iQjHqsmTAVLU" 8 | }, 9 | "source": [ 10 | "## Exercise 3\n", 11 | "In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. 
/Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 3/Excercise 3 Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "iQjHqsmTAVLU" 8 | }, 9 | "source": [ 10 | "## Exercise 3\n", 11 | "In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise, see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers.\n", 12 | "\n", 13 | "I've started the code for you -- you need to finish it!\n", 14 | "\n", 15 | "When 99.8% accuracy has been hit, you should print out the string \"Reached 99.8% accuracy so cancelling training!\"\n" 16 | ] 17 | }, 18 | { 19 | "cell_type": "code", 20 | "execution_count": 1, 21 | "metadata": {}, 22 | "outputs": [], 23 | "source": [ 24 | "import tensorflow as tf\n", 25 | "from os import path, getcwd, chdir\n", 26 | "\n", 27 | "# DO NOT CHANGE THE LINE BELOW. If you are developing in a local\n", 28 | "# environment, then grab mnist.npz from the Coursera Jupyter Notebook\n", 29 | "# and place it inside a local folder and edit the path to that location\n", 30 | "path = f\"{getcwd()}/../tmp2/mnist.npz\"" 31 | ] 32 | }, 33 | { 34 | "cell_type": "code", 35 | "execution_count": 2, 36 | "metadata": {}, 37 | "outputs": [], 38 | "source": [ 39 | "config = tf.ConfigProto()\n", 40 | "config.gpu_options.allow_growth = True\n", 41 | "sess = tf.Session(config=config)" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": 3, 47 | "metadata": {}, 48 | "outputs": [], 49 | "source": [ 50 | "# GRADED FUNCTION: train_mnist_conv\n", 51 | "def train_mnist_conv():\n", 52 | "    # Please write your code only where you are indicated.\n", 53 | "    # please do not remove model fitting inline comments.\n", 54 | "\n", 55 | "    class myCallback(tf.keras.callbacks.Callback):\n", 56 | "        def on_epoch_end(self, epoch, logs = {}):\n", 57 | "            if(logs.get('acc') > 0.998):\n", 58 | "                print(\"Reached 99.8% accuracy so cancelling training!\")\n", 59 | "                self.model.stop_training = True\n", 60 | "    \n", 61 | "\n", 62 | "    callbacks = myCallback()\n", 63 | "    mnist = tf.keras.datasets.mnist\n", 64 | "    (training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path)\n", 65 | "    training_images = training_images.reshape(60000, 28, 28, 1)\n", 66 | "    training_images = training_images / 255.0\n", 67 | "    test_images = test_images.reshape(10000, 28, 28, 1)\n", 68 | "    test_images = test_images / 255.0\n", 69 | "\n", 70 | "    model = tf.keras.models.Sequential([\n", 71 | "        tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu', input_shape = (28, 28, 1)),\n", 72 | "        tf.keras.layers.MaxPooling2D(2, 2),\n", 73 | "        tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu'),\n", 74 | "        tf.keras.layers.MaxPooling2D(2, 2),\n", 75 | "        tf.keras.layers.Flatten(),\n", 76 | "        tf.keras.layers.Dense(128, activation = 'relu'),\n", 77 | "        tf.keras.layers.Dense(10, activation = 'softmax')\n", 78 | "    ])\n", 79 | "\n", 80 | "    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n", 81 | "    # model fitting\n", 82 | "    history = model.fit(\n", 83 | "        training_images, training_labels, epochs = 19,\n", 84 | "        callbacks = [callbacks]\n", 85 | "    )\n", 86 | "    # model fitting\n", 87 | "    return history.epoch, history.history['acc'][-1]\n", 88 | "\n" 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": 4, 94 | "metadata": {}, 95 | "outputs": [ 96 | { 97 | "name": "stderr", 98 | "output_type": "stream", 99 | "text": [ 100 | "WARNING: Logging before flag parsing goes to stderr.\n", 101 | "W0916 15:22:43.753255 140308709738304 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 102 | "Instructions for updating:\n", 103 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n" 104 | ] 105 | }, 106 | { 107 | "name": "stdout", 108 | "output_type": "stream", 109 | "text": [ 110 | "Epoch 1/19\n", 111 | "60000/60000 [==============================] - 23s 383us/sample - loss: 0.1229 - acc: 0.9631\n", 112 | "Epoch 2/19\n", 113 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0413 - acc: 0.9878\n", 114 | "Epoch 3/19\n", 115 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0281 - acc: 0.9911\n", 116 | "Epoch 4/19\n", 117 | "60000/60000 [==============================] - 20s 330us/sample - loss: 0.0214 - acc: 0.9932\n", 118 | "Epoch 5/19\n", 119 | "60000/60000 [==============================] - 21s 346us/sample - loss: 0.0150 - acc: 0.9956\n", 120 | "Epoch 6/19\n", 121 | "60000/60000 [==============================] - 20s 335us/sample - loss: 0.0122 - acc: 0.9959\n", 122 | "Epoch 7/19\n", 123 | "60000/60000 [==============================] - 20s 325us/sample - loss: 0.0102 - acc: 0.9967\n", 124 | "Epoch 8/19\n", 125 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0082 - acc: 0.9972\n", 126 | "Epoch 9/19\n", 127 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0073 - acc: 0.9977\n", 128 | "Epoch 10/19\n", 129 | "59840/60000 [============================>.] - ETA: 0s - loss: 0.0056 - acc: 0.9982Reached 99.8% accuracy so cancelling training!\n", 130 | "60000/60000 [==============================] - 19s 323us/sample - loss: 0.0056 - acc: 0.9982\n" 131 | ] 132 | } 133 | ], 134 | "source": [ 135 | "_, _ = train_mnist_conv()" 136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": null, 141 | "metadata": {}, 142 | "outputs": [], 143 | "source": [] 144 | } 145 | ], 146 | "metadata": { 147 | "coursera": { 148 | "course_slug": "introduction-tensorflow", 149 | "graded_item_id": "ml06H", 150 | "launcher_item_id": "hQF8A" 151 | }, 152 | "kernelspec": { 153 | "display_name": "Python 3", 154 | "language": "python", 155 | "name": "python3" 156 | }, 157 | "language_info": { 158 | "codemirror_mode": { 159 | "name": "ipython", 160 | "version": 3 161 | }, 162 | "file_extension": ".py", 163 | "mimetype": "text/x-python", 164 | "name": "python", 165 | "nbconvert_exporter": "python", 166 | "pygments_lexer": "ipython3", 167 | "version": "3.6.8" 168 | } 169 | }, 170 | "nbformat": 4, 171 | "nbformat_minor": 1 172 | } 173 | --------------------------------------------------------------------------------
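A side note on the exercise above: its statement asks for a single convolution and a single pooling layer, while the graded solution stacks two of each. A hedged sketch of an architecture that matches the statement literally; whether it still reaches 99.8% within 20 epochs is not guaranteed:

# One Conv2D + one MaxPooling2D, as the exercise wording requests.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu', input_shape = (28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation = 'relu'),
    tf.keras.layers.Dense(10, activation = 'softmax')
])
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])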
VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 102 | "Instructions for updating:\n", 103 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n" 104 | ] 105 | }, 106 | { 107 | "name": "stdout", 108 | "output_type": "stream", 109 | "text": [ 110 | "Epoch 1/19\n", 111 | "60000/60000 [==============================] - 23s 383us/sample - loss: 0.1229 - acc: 0.9631\n", 112 | "Epoch 2/19\n", 113 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0413 - acc: 0.9878\n", 114 | "Epoch 3/19\n", 115 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0281 - acc: 0.9911\n", 116 | "Epoch 4/19\n", 117 | "60000/60000 [==============================] - 20s 330us/sample - loss: 0.0214 - acc: 0.9932\n", 118 | "Epoch 5/19\n", 119 | "60000/60000 [==============================] - 21s 346us/sample - loss: 0.0150 - acc: 0.9956\n", 120 | "Epoch 6/19\n", 121 | "60000/60000 [==============================] - 20s 335us/sample - loss: 0.0122 - acc: 0.9959\n", 122 | "Epoch 7/19\n", 123 | "60000/60000 [==============================] - 20s 325us/sample - loss: 0.0102 - acc: 0.9967\n", 124 | "Epoch 8/19\n", 125 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0082 - acc: 0.9972\n", 126 | "Epoch 9/19\n", 127 | "60000/60000 [==============================] - 19s 320us/sample - loss: 0.0073 - acc: 0.9977\n", 128 | "Epoch 10/19\n", 129 | "59840/60000 [============================>.] - ETA: 0s - loss: 0.0056 - acc: 0.9982Reached 99.8% accuracy so cancelling training!\n", 130 | "60000/60000 [==============================] - 19s 323us/sample - loss: 0.0056 - acc: 0.9982\n" 131 | ] 132 | } 133 | ], 134 | "source": [ 135 | "_, _ = train_mnist_conv()" 136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": null, 141 | "metadata": {}, 142 | "outputs": [], 143 | "source": [] 144 | } 145 | ], 146 | "metadata": { 147 | "coursera": { 148 | "course_slug": "introduction-tensorflow", 149 | "graded_item_id": "ml06H", 150 | "launcher_item_id": "hQF8A" 151 | }, 152 | "kernelspec": { 153 | "display_name": "Python 3", 154 | "language": "python", 155 | "name": "python3" 156 | }, 157 | "language_info": { 158 | "codemirror_mode": { 159 | "name": "ipython", 160 | "version": 3 161 | }, 162 | "file_extension": ".py", 163 | "mimetype": "text/x-python", 164 | "name": "python", 165 | "nbconvert_exporter": "python", 166 | "pygments_lexer": "ipython3", 167 | "version": "3.6.8" 168 | } 169 | }, 170 | "nbformat": 4, 171 | "nbformat_minor": 1 172 | } 173 | -------------------------------------------------------------------------------- /index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | Tensorflow-in-practice-deeplearning.ai-coursera by Anacoder1 7 | 8 | 9 | 10 | 11 | 14 | 15 | 16 |
Tensorflow-in-practice-deeplearning.ai-coursera

Contains notes, codes, answers to quizzes and exercises of all the 4 courses of the TensorFlow in Practice specialization, by deeplearning.ai

View the Project on GitHub Anacoder1/TensorFlow-in-Practice-deeplearning.ai-Coursera

TensorFlow in Practice, by deeplearning.ai @ Coursera

Course 1: Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning
Week 1: A New Programming Paradigm
Week 2: Introduction to Computer Vision
Week 3: Enhancing Vision with Convolutional Neural Networks
Week 4: Using Real-world Images

Course 2: Convolutional Neural Networks in TensorFlow
Week 1: Exploring a Larger Dataset
Week 2: Augmentation: A technique to avoid overfitting
Week 3: Transfer Learning
Week 4: Multiclass Classifications

Course 3: Natural Language Processing in TensorFlow
Week 1: Sentiment in text
Week 2: Word Embeddings
Week 3: Sequence Models
Week 4: Sequence Models and literature

Course 4: Sequences, Time Series and Prediction
Week 1: Sequences and Prediction
Week 2: Deep Neural Networks for Time Series
Week 3: Recurrent Neural Networks for Time Series
Week 4: Real-world time series data

This project is maintained by Anacoder1

Hosted on GitHub Pages — Theme by orderedlist
66 | 67 | 68 | 69 | 70 | -------------------------------------------------------------------------------- /stylesheets/styles.css: -------------------------------------------------------------------------------- 1 | @font-face { 2 | font-family: 'Noto Sans'; 3 | font-weight: 400; 4 | font-style: normal; 5 | src: url('../fonts/Noto-Sans-regular/Noto-Sans-regular.eot'); 6 | src: url('../fonts/Noto-Sans-regular/Noto-Sans-regular.eot?#iefix') format('embedded-opentype'), 7 | local('Noto Sans'), 8 | local('Noto-Sans-regular'), 9 | url('../fonts/Noto-Sans-regular/Noto-Sans-regular.woff2') format('woff2'), 10 | url('../fonts/Noto-Sans-regular/Noto-Sans-regular.woff') format('woff'), 11 | url('../fonts/Noto-Sans-regular/Noto-Sans-regular.ttf') format('truetype'), 12 | url('../fonts/Noto-Sans-regular/Noto-Sans-regular.svg#NotoSans') format('svg'); 13 | } 14 | 15 | @font-face { 16 | font-family: 'Noto Sans'; 17 | font-weight: 700; 18 | font-style: normal; 19 | src: url('../fonts/Noto-Sans-700/Noto-Sans-700.eot'); 20 | src: url('../fonts/Noto-Sans-700/Noto-Sans-700.eot?#iefix') format('embedded-opentype'), 21 | local('Noto Sans Bold'), 22 | local('Noto-Sans-700'), 23 | url('../fonts/Noto-Sans-700/Noto-Sans-700.woff2') format('woff2'), 24 | url('../fonts/Noto-Sans-700/Noto-Sans-700.woff') format('woff'), 25 | url('../fonts/Noto-Sans-700/Noto-Sans-700.ttf') format('truetype'), 26 | url('../fonts/Noto-Sans-700/Noto-Sans-700.svg#NotoSans') format('svg'); 27 | } 28 | 29 | @font-face { 30 | font-family: 'Noto Sans'; 31 | font-weight: 400; 32 | font-style: italic; 33 | src: url('../fonts/Noto-Sans-italic/Noto-Sans-italic.eot'); 34 | src: url('../fonts/Noto-Sans-italic/Noto-Sans-italic.eot?#iefix') format('embedded-opentype'), 35 | local('Noto Sans Italic'), 36 | local('Noto-Sans-italic'), 37 | url('../fonts/Noto-Sans-italic/Noto-Sans-italic.woff2') format('woff2'), 38 | url('../fonts/Noto-Sans-italic/Noto-Sans-italic.woff') format('woff'), 39 | url('../fonts/Noto-Sans-italic/Noto-Sans-italic.ttf') format('truetype'), 40 | url('../fonts/Noto-Sans-italic/Noto-Sans-italic.svg#NotoSans') format('svg'); 41 | } 42 | 43 | @font-face { 44 | font-family: 'Noto Sans'; 45 | font-weight: 700; 46 | font-style: italic; 47 | src: url('../fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot'); 48 | src: url('../fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot?#iefix') format('embedded-opentype'), 49 | local('Noto Sans Bold Italic'), 50 | local('Noto-Sans-700italic'), 51 | url('../fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2') format('woff2'), 52 | url('../fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff') format('woff'), 53 | url('../fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf') format('truetype'), 54 | url('../fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg#NotoSans') format('svg'); 55 | } 56 | 57 | body { 58 | background-color: #fff; 59 | padding:50px; 60 | font: 14px/1.5 "Noto Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; 61 | color:#727272; 62 | font-weight:400; 63 | } 64 | 65 | h1, h2, h3, h4, h5, h6 { 66 | color:#222; 67 | margin:0 0 20px; 68 | } 69 | 70 | p, ul, ol, table, pre, dl { 71 | margin:0 0 20px; 72 | } 73 | 74 | h1, h2, h3 { 75 | line-height:1.1; 76 | } 77 | 78 | h1 { 79 | font-size:28px; 80 | } 81 | 82 | h2 { 83 | color:#393939; 84 | } 85 | 86 | h3, h4, h5, h6 { 87 | color:#494949; 88 | } 89 | 90 | a { 91 | color:#39c; 92 | text-decoration:none; 93 | } 94 | 95 | a:hover { 96 | color:#069; 97 | } 98 | 99 | a small { 100 | font-size:11px; 101 | color:#777; 102 | margin-top:-0.3em; 103 
| display:block; 104 | } 105 | 106 | a:hover small { 107 | color:#777; 108 | } 109 | 110 | .wrapper { 111 | width:860px; 112 | margin:0 auto; 113 | } 114 | 115 | blockquote { 116 | border-left:1px solid #e5e5e5; 117 | margin:0; 118 | padding:0 0 0 20px; 119 | font-style:italic; 120 | } 121 | 122 | code, pre { 123 | font-family:Monaco, Bitstream Vera Sans Mono, Lucida Console, Terminal, Consolas, Liberation Mono, DejaVu Sans Mono, Courier New, monospace; 124 | color:#333; 125 | font-size:12px; 126 | } 127 | 128 | pre { 129 | padding:8px 15px; 130 | background: #f8f8f8; 131 | border-radius:5px; 132 | border:1px solid #e5e5e5; 133 | overflow-x: auto; 134 | } 135 | 136 | table { 137 | width:100%; 138 | border-collapse:collapse; 139 | } 140 | 141 | th, td { 142 | text-align:left; 143 | padding:5px 10px; 144 | border-bottom:1px solid #e5e5e5; 145 | } 146 | 147 | dt { 148 | color:#444; 149 | font-weight:700; 150 | } 151 | 152 | th { 153 | color:#444; 154 | } 155 | 156 | img { 157 | max-width:100%; 158 | } 159 | 160 | header { 161 | width:270px; 162 | float:left; 163 | position:fixed; 164 | -webkit-font-smoothing:subpixel-antialiased; 165 | } 166 | 167 | header ul { 168 | list-style:none; 169 | height:40px; 170 | padding:0; 171 | background: #f4f4f4; 172 | border-radius:5px; 173 | border:1px solid #e0e0e0; 174 | width:270px; 175 | } 176 | 177 | header li { 178 | width:89px; 179 | float:left; 180 | border-right:1px solid #e0e0e0; 181 | height:40px; 182 | } 183 | 184 | header li:first-child a { 185 | border-radius:5px 0 0 5px; 186 | } 187 | 188 | header li:last-child a { 189 | border-radius:0 5px 5px 0; 190 | } 191 | 192 | header ul a { 193 | line-height:1; 194 | font-size:11px; 195 | color:#999; 196 | display:block; 197 | text-align:center; 198 | padding-top:6px; 199 | height:34px; 200 | } 201 | 202 | header ul a:hover { 203 | color:#999; 204 | } 205 | 206 | header ul a:active { 207 | background-color:#f0f0f0; 208 | } 209 | 210 | strong { 211 | color:#222; 212 | font-weight:700; 213 | } 214 | 215 | header ul li + li + li { 216 | border-right:none; 217 | width:89px; 218 | } 219 | 220 | header ul a strong { 221 | font-size:14px; 222 | display:block; 223 | color:#222; 224 | } 225 | 226 | section { 227 | width:500px; 228 | float:right; 229 | padding-bottom:50px; 230 | } 231 | 232 | small { 233 | font-size:11px; 234 | } 235 | 236 | hr { 237 | border:0; 238 | background:#e5e5e5; 239 | height:1px; 240 | margin:0 0 20px; 241 | } 242 | 243 | footer { 244 | width:270px; 245 | float:left; 246 | position:fixed; 247 | bottom:50px; 248 | -webkit-font-smoothing:subpixel-antialiased; 249 | } 250 | 251 | @media print, screen and (max-width: 960px) { 252 | 253 | div.wrapper { 254 | width:auto; 255 | margin:0; 256 | } 257 | 258 | header, section, footer { 259 | float:none; 260 | position:static; 261 | width:auto; 262 | } 263 | 264 | header { 265 | padding-right:320px; 266 | } 267 | 268 | section { 269 | border:1px solid #e5e5e5; 270 | border-width:1px 0; 271 | padding:20px 0; 272 | margin:0 0 20px; 273 | } 274 | 275 | header a small { 276 | display:inline; 277 | } 278 | 279 | header ul { 280 | position:absolute; 281 | right:50px; 282 | top:52px; 283 | } 284 | } 285 | 286 | @media print, screen and (max-width: 720px) { 287 | body { 288 | word-wrap:break-word; 289 | } 290 | 291 | header { 292 | padding:0; 293 | } 294 | 295 | header ul, header p.view { 296 | position:static; 297 | } 298 | 299 | pre, code { 300 | word-wrap:normal; 301 | } 302 | } 303 | 304 | @media print, screen and (max-width: 480px) { 305 | body { 306 
| padding:15px; 307 | } 308 | 309 | header ul { 310 | width:99%; 311 | } 312 | 313 | header li, header ul li + li + li { 314 | width:33%; 315 | } 316 | } 317 | 318 | @media print { 319 | body { 320 | padding:0.4in; 321 | font-size:12pt; 322 | color:#444; 323 | } 324 | } 325 | -------------------------------------------------------------------------------- /Course 3 - Natural Language Processing in TensorFlow/Week 1/notes.txt: -------------------------------------------------------------------------------- 1 | ## Using APIs 2 | 3 | import tensorflow as tf 4 | from tensorflow import keras 5 | from tensorflow.keras.preprocessing.text import Tokenizer 6 | 7 | sentences = [ 8 | 'I love my dog', 9 | 'I love my cat' 10 | ] 11 | 12 | tokenizer = Tokenizer(num_words = 100) 13 | tokenizer.fit_on_texts(sentences) 14 | word_index = tokenizer.word_index 15 | print(word_index) 16 | 17 | Output: 18 | {'i' : 1, 'my' : 3, 'dog' : 4, 'cat' : 5, 'love' : 2} 19 | 20 | 21 | --> The tokenizer strips punctuation out ('I' was changed to 'i') 22 | --> num_words parameter takes 'num_words' number of unique words based on volume 23 | and encodes them 24 | --> fit_on_texts creates a dictionary with the key being the word and the value 25 | being the token for that word. 26 | 27 | 28 | sentences = [ 29 | 'I love my dog', 30 | 'I love my cat', 31 | 'You love my dog!' 32 | ] 33 | 34 | Output: 35 | {'i' : 3, 'my' : 2, 'you' : 6, 'love' : 1, 'cat' : 5, 'dog' : 4} 36 | ----------------------------------------------------------------- 37 | 38 | ## Text to sequence 39 | 40 | from tensorflow.keras.preprocessing.text import Tokenizer 41 | 42 | sentences = [ 43 | 'I love my dog', 44 | 'I love my cat', 45 | 'You love my dog!', 46 | 'Do you think my dog is amazing?' 47 | ] 48 | 49 | tokenizer = Tokenizer(num_words = 100) 50 | tokenizer.fit_on_texts(sentences) 51 | word_index = tokenizer.word_index 52 | 53 | sequences = tokenizer.texts_to_sequences(sentences) 54 | 55 | print(word_index) 56 | print(sequences) 57 | 58 | 59 | Output: 60 | {'amazing' : 10, 'dog' : 3, 'you' : 5, 'cat' : 6, 61 | 'think' : 8, 'i' : 4, 'is' : 9, 'my' : 1, 62 | 'do' : 7, 'love' : 2} 63 | 64 | [[4, 2, 1, 3], [4, 2, 1, 6], [5, 2, 1, 3], 65 | [7, 5, 8, 1, 3, 9, 10]] 66 | 67 | 68 | test_data = [ 69 | 'i really love my dog', 70 | 'my dog loves my manatee' 71 | ] 72 | 73 | test_seq = tokenizer.texts_to_sequences(test_data) 74 | print(test_seq) 75 | 76 | 77 | Output: 78 | [[4, 2, 1, 3], [1, 3, 1]] 79 | 80 | Word dictionary for reference: 81 | 82 | {'amazing' : 10, 'dog' : 3, 'you' : 5, 'cat' : 6, 83 | 'think' : 8, 'i' : 4, 'is' : 9, 'my' : 1, 84 | 'do' : 7, 'love' : 2} 85 | --------------------------------------------------------------- 86 | 87 | ## Looking more at the Tokenizer 88 | 89 | from tensorflow.keras.preprocessing.text import Tokenizer 90 | 91 | sentences = [ 92 | 'I love my dog', 93 | 'I love my cat', 94 | 'You love my dog!', 95 | 'Do you think my dog is amazing?' 
96 | ]
97 |
98 | tokenizer = Tokenizer(num_words = 100, oov_token = "<OOV>")
99 | tokenizer.fit_on_texts(sentences)
100 | word_index = tokenizer.word_index
101 |
102 | sequences = tokenizer.texts_to_sequences(sentences)
103 |
104 | test_data = [
105 | 'i really love my dog',
106 | 'my dog loves my manatee'
107 | ]
108 |
109 | test_seq = tokenizer.texts_to_sequences(test_data)
110 | print(test_seq)
111 |
112 |
113 | Output:
114 | [[5, 1, 3, 2, 4], [2, 4, 1, 2, 1]]
115 |
116 | {'think' : 9, 'amazing' : 11, 'dog' : 4, 'do' : 8,
117 | 'i' : 5, 'cat' : 7, 'you' : 6, 'love' : 3,
118 | '<OOV>' : 1, 'my' : 2, 'is' : 10}
119 | -------------------------------------------------------------------------
120 |
121 | ## Padding
122 |
123 | from tensorflow.keras.preprocessing.text import Tokenizer
124 | from tensorflow.keras.preprocessing.sequence import pad_sequences
125 |
126 | sentences = [
127 | 'I love my dog',
128 | 'I love my cat',
129 | 'You love my dog!',
130 | 'Do you think my dog is amazing?'
131 | ]
132 |
133 | tokenizer = Tokenizer(num_words = 100, oov_token = "<OOV>")
134 | tokenizer.fit_on_texts(sentences)
135 | word_index = tokenizer.word_index
136 |
137 | sequences = tokenizer.texts_to_sequences(sentences)
138 |
139 | padded = pad_sequences(sequences)
140 | print(word_index)
141 | print(sequences)
142 | print(padded)
143 |
144 |
145 | Output:
146 | {'do' : 8, 'you' : 6, 'love' : 3, 'i' : 5,
147 | 'amazing' : 11, 'my' : 2, 'is' : 10, 'think' : 9,
148 | 'dog' : 4, '<OOV>' : 1, 'cat' : 7}
149 |
150 | [[5, 3, 2, 4], [5, 3, 2, 7], [6, 3, 2, 4],
151 | [8, 6, 9, 2, 4, 10, 11]]
152 |
153 | [[0 0 0 5 3 2 4]
154 | [0 0 0 5 3 2 7]
155 | [0 0 0 6 3 2 4]
156 | [8 6 9 2 4 10 11]]
157 |
158 |
159 | padded = pad_sequences(sequences, padding = 'post')
160 |
161 | --> padding = 'post' will change the padding to be after the sentence
162 |
163 |
164 | padded = pad_sequences(sequences, padding = 'post', maxlen = 5)
165 |
166 | --> Each row in the matrix usually has the length of the longest sentence.
167 | That can be overridden with the 'maxlen' parameter.
168 | In sentences having length longer than 'maxlen', the information is lost
169 | (by default) from the beginning of the sentence.
170 |
171 | --> To override this, we can use the 'truncating' parameter, so that info
172 | will be lost from the end of the sentence.
173 |
174 |
175 | padded = pad_sequences(sequences, padding = 'post',
176 | truncating = 'post', maxlen = 5)
177 | -----------------------------------------------------------------------------------------
178 |
179 | ## Notebook for lesson 2
180 |
181 | Check Week 1 files.
182 | -----------------------------------------------------------------------------------------
183 |
184 | ## Sarcasm, really?
185 |
186 | Sarcasm in News Headlines Dataset by Rishabh Misra
187 |
188 | https://rishabhmisra.github.io/publications/
189 |
190 |
191 | 3 elements of this dataset:
192 |
193 | 1) is_sarcastic = 1 if the record is sarcastic, 0 otherwise
194 |
195 | 2) headline = the headline of the news article, in plain text
196 |
197 | 3) article_link = link to the original news article.
198 | Useful in collecting supplementary data
199 |
200 |
201 | import json
202 |
203 | with open("sarcasm.json", 'r') as f:
204 | datastore = json.load(f)
205 |
206 | sentences = []
207 | labels = []
208 | urls = []
209 |
210 | for item in datastore:
211 | sentences.append(item['headline'])
212 | labels.append(item['is_sarcastic'])
213 | urls.append(item['article_link'])
214 | ------------------------------------------------------------------------
215 |
216 | ## Working with the Tokenizer
217 |
218 | from tensorflow.keras.preprocessing.text import Tokenizer
219 | from tensorflow.keras.preprocessing.sequence import pad_sequences
220 |
221 | tokenizer = Tokenizer(oov_token = "<OOV>")
222 | tokenizer.fit_on_texts(sentences)
223 | word_index = tokenizer.word_index
224 |
225 | sequences = tokenizer.texts_to_sequences(sentences)
226 | padded = pad_sequences(sequences, padding = 'post')
227 | print(padded[0])
228 | print(padded.shape)
229 |
230 |
231 | {'underwood' : 24127, 'skillingsbolle' : 23055,
232 | 'grabs' : 12293, 'mobility' : 8909,
233 | "'assassin's" : 12648, 'visualize' : 23973,
234 | 'hurting' : 4992, 'orphaned' : 9173,
235 | "'agreed'" : 24365, 'narration' : 28470
236 |
237 |
238 | Output:
239 | [ 308 15115 679 3337 2298 48 382 2576 15116 6 2577 8434
240 | 0 0 0 0 0 0 0 0 0 0 0 0
241 | 0 0 0 0 0 0 0 0 0 0 0 0
242 | 0 0 0 0]
243 |
244 | (26709, 40)
245 |
246 | --> 26709 sentences, 40 being the length of the longest one
247 | --------------------------------------------------------------------
248 |
249 | ## Notebook for Lesson 3
250 |
251 | Check Week 1 files.
252 | -------------------------------------------------------------
253 |
254 | Exercise 1
255 |
256 | BBC text archive = http://mlg.ucd.ie/datasets/bbc.html
257 |
258 | Stopwords = https://github.com/Yoast/YoastSEO.js/blob/develop/src/config/stopwords.js
259 |
260 | -------------------------------------------------------------------------------- /Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 4/Exercise4-Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "colab_type": "text", 7 | "id": "UncprnB0ymAE" 8 | }, 9 | "source": [ 10 | "Below is code with a link to a happy or sad dataset which contains 80 images, 40 happy and 40 sad. \n", 11 | "Create a convolutional neural network that trains to 100% accuracy on these images, which cancels training upon hitting training accuracy of >.999\n", 12 | "\n", 13 | "Hint -- it will work best with 3 convolutional layers." 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": 1, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import tensorflow as tf\n", 23 | "import os\n", 24 | "import zipfile\n", 25 | "from os import path, getcwd, chdir\n", 26 | "\n", 27 | "# DO NOT CHANGE THE LINE BELOW. 
If you are developing in a local\n", 28 | "# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook\n", 29 | "# and place it inside a local folder and edit the path to that location\n", 30 | "path = f\"{getcwd()}/../tmp2/happy-or-sad.zip\"\n", 31 | "\n", 32 | "zip_ref = zipfile.ZipFile(path, 'r')\n", 33 | "zip_ref.extractall(\"/tmp/h-or-s\")\n", 34 | "zip_ref.close()" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": 2, 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "# GRADED FUNCTION: train_happy_sad_model\n", 44 | "def train_happy_sad_model():\n", 45 | " # Please write your code only where you are indicated.\n", 46 | " # please do not remove # model fitting inline comments.\n", 47 | "\n", 48 | " DESIRED_ACCURACY = 0.999\n", 49 | "\n", 50 | " class myCallback(tf.keras.callbacks.Callback):\n", 51 | " def on_epoch_end(self, epoch, logs = {}):\n", 52 | " if(logs.get('acc') > 0.999):\n", 53 | " print(\"Reached 99.9% accuracy so cancelling training!\")\n", 54 | " self.model.stop_training = True\n", 55 | " \n", 56 | " \n", 57 | " callbacks = myCallback()\n", 58 | " \n", 59 | " # This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.\n", 60 | " model = tf.keras.models.Sequential([\n", 61 | " tf.keras.layers.Conv2D(16, (3, 3), activation = 'relu',\n", 62 | " input_shape = (150, 150, 3)),\n", 63 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 64 | " \n", 65 | " tf.keras.layers.Conv2D(32, (3, 3), activation = 'relu'),\n", 66 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 67 | " \n", 68 | " tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu'),\n", 69 | " tf.keras.layers.MaxPooling2D(2, 2),\n", 70 | " \n", 71 | " tf.keras.layers.Flatten(),\n", 72 | " tf.keras.layers.Dense(512, activation = 'relu'),\n", 73 | " tf.keras.layers.Dense(1, activation = 'sigmoid')\n", 74 | " ])\n", 75 | "\n", 76 | " from tensorflow.keras.optimizers import RMSprop\n", 77 | "\n", 78 | " model.compile(loss = 'binary_crossentropy',\n", 79 | " optimizer = RMSprop(lr = 0.001),\n", 80 | " metrics = ['acc'])\n", 81 | " \n", 82 | "\n", 83 | " # This code block should create an instance of an ImageDataGenerator called train_datagen \n", 84 | " # And a train_generator by calling train_datagen.flow_from_directory\n", 85 | "\n", 86 | " from tensorflow.keras.preprocessing.image import ImageDataGenerator\n", 87 | "\n", 88 | " train_datagen = ImageDataGenerator(rescale = 1/255)\n", 89 | "\n", 90 | " # Please use a target_size of 150 X 150.\n", 91 | " train_generator = train_datagen.flow_from_directory(\n", 92 | " '/tmp/h-or-s',\n", 93 | " target_size = (150, 150),\n", 94 | " batch_size = 128,\n", 95 | " class_mode = 'binary'\n", 96 | " )\n", 97 | " \n", 98 | " # Expected output: 'Found 80 images belonging to 2 classes'\n", 99 | "\n", 100 | " # This code block should call model.fit_generator and train for\n", 101 | " # a number of epochs.\n", 102 | " # model fitting\n", 103 | " history = model.fit_generator(\n", 104 | " train_generator,\n", 105 | " steps_per_epoch = 8,\n", 106 | " epochs = 15,\n", 107 | " verbose = 1,\n", 108 | " callbacks = [callbacks]\n", 109 | " )\n", 110 | "\n", 111 | " # model fitting\n", 112 | " return history.history['acc'][-1]" 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": 3, 118 | "metadata": {}, 119 | "outputs": [ 120 | { 121 | "name": "stderr", 122 | "output_type": "stream", 123 | "text": [ 124 | "WARNING: Logging before flag parsing goes to stderr.\n", 125 | "W0917 
06:01:05.879975 140128294074176 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", 126 | "Instructions for updating:\n", 127 | "Call initializer instance with the dtype argument instead of passing it to the constructor\n", 128 | "W0917 06:01:06.315677 140128294074176 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support..wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n", 129 | "Instructions for updating:\n", 130 | "Use tf.where in 2.0, which has the same broadcast rule as np.where\n" 131 | ] 132 | }, 133 | { 134 | "name": "stdout", 135 | "output_type": "stream", 136 | "text": [ 137 | "Found 80 images belonging to 2 classes.\n", 138 | "Epoch 1/15\n", 139 | "8/8 [==============================] - 8s 973ms/step - loss: 1.4685 - acc: 0.6922\n", 140 | "Epoch 2/15\n", 141 | "8/8 [==============================] - 2s 312ms/step - loss: 0.5147 - acc: 0.8391\n", 142 | "Epoch 3/15\n", 143 | "8/8 [==============================] - 2s 300ms/step - loss: 0.1928 - acc: 0.9281\n", 144 | "Epoch 4/15\n", 145 | "8/8 [==============================] - 2s 301ms/step - loss: 0.0466 - acc: 0.9906\n", 146 | "Epoch 5/15\n", 147 | "7/8 [=========================>....] - ETA: 0s - loss: 0.0114 - acc: 1.0000Reached 99.9% accuracy so cancelling training!\n", 148 | "8/8 [==============================] - 2s 301ms/step - loss: 0.0106 - acc: 1.0000\n" 149 | ] 150 | }, 151 | { 152 | "data": { 153 | "text/plain": [ 154 | "1.0" 155 | ] 156 | }, 157 | "execution_count": 3, 158 | "metadata": {}, 159 | "output_type": "execute_result" 160 | } 161 | ], 162 | "source": [ 163 | "# The Expected output: \"Reached 99.9% accuracy so cancelling training!\"\"\n", 164 | "train_happy_sad_model()" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [] 173 | } 174 | ], 175 | "metadata": { 176 | "coursera": { 177 | "course_slug": "introduction-tensorflow", 178 | "graded_item_id": "1kAlw", 179 | "launcher_item_id": "PNLYD" 180 | }, 181 | "kernelspec": { 182 | "display_name": "Python 3", 183 | "language": "python", 184 | "name": "python3" 185 | }, 186 | "language_info": { 187 | "codemirror_mode": { 188 | "name": "ipython", 189 | "version": 3 190 | }, 191 | "file_extension": ".py", 192 | "mimetype": "text/x-python", 193 | "name": "python", 194 | "nbconvert_exporter": "python", 195 | "pygments_lexer": "ipython3", 196 | "version": "3.6.8" 197 | } 198 | }, 199 | "nbformat": 4, 200 | "nbformat_minor": 1 201 | } 202 | -------------------------------------------------------------------------------- /Course 4 - Sequences, Time Series, and Prediction/Week 1/notes.txt: -------------------------------------------------------------------------------- 1 | Series trend and noise == observed in time series 2 | 3 | ## Common patterns in time series 4 | 5 | 1) Trend 6 | 7 | Time series have a specific direction that they're moving in. 8 | 9 | 10 | 2) Seasonality 11 | 12 | Seen when patterns repeat at predictable intervals 13 | 14 | 15 | 3) Combination of Trend and Seasonality 16 | 17 | Overall upwards/downwards trend, but there are local 18 | peaks and troughs. 
19 |
20 |
21 | 4) Random values
22 |
23 | White noise
24 | Can't do much with this type of data
25 |
26 |
27 | 5) Autocorrelation
28 |
29 | No trend, no seasonality.
30 | Spikes appear at random timestamps.
31 |
32 | You can't predict when they will happen next or how
33 | strong they will be.
34 |
35 | But clearly, the entire series isn't random.
36 |
37 |
38 | Between the spikes, there's a very deterministic type of decay.
39 |
40 | E.g. value of each time step = 99% x value of previous time step
41 |
42 | v(t) = 0.99 x v(t-1) + occasional spike
43 |
44 | ==> Autocorrelated time-series.
45 |
46 |
47 | Namely, it correlates with a delayed copy of itself,
48 | often called a lag.
49 |
50 |
51 | Memory --> steps are dependent on previous ones.
52 |
53 | Innovations --> spikes, which are often unpredictable
54 | (can't be predicted based on past values)
55 |
56 |
57 | Multiple autocorrelations e.g.
58 | v(t) = 0.7 x v(t-1) + 0.2 x v(t-50) + occasional spike
59 |
60 | Here, the lag-1 autocorrelation gives these very quick short-term
61 | exponential decays, and the lag-50 one gives a small bounce after each spike
62 |
63 |
64 |
65 | Time series encountered in real life often have a mix of all these features -
66 | Trend, Seasonality, Autocorrelation and Noise.
67 |
68 |
69 | --> Non-Stationary Time Series
70 |
71 | Time series whose behavior can change drastically over time.
72 |
73 | For e.g. a time series having a positive trend and clear seasonality up to
74 | timestep 200, but then something happened to change its behavior completely.
75 |
76 | If this were a stock price, then maybe it was a big financial crisis, a big scandal,
77 | or perhaps a disruptive technological breakthrough causing a
78 | massive change.
79 |
80 | After that, the time series started to trend downward w/o
81 | any clear seasonality.
82 |
83 | To predict on this time series, we can just train on the last 100 time steps, instead of
84 | the entire time series.
85 |
86 |
87 | Stationary time series ==> The more data you have, the better
88 |
89 | Non-Stationary time series ==> The optimal time window that you should use for training will vary.
90 | ----------------------------------------------------------------------------------------------------------
91 |
92 |
93 | ## Train, validation and test sets
94 |
95 | Naive Forecasting ==> Take the last value and assume that the next one will be the same
96 |
97 | Fixed Partitioning ==> To measure the performance of a model, we split the time series into training,
98 | validation and testing periods.
99 |
100 | If the time series contains some seasonality, ensure that each period contains a whole number
101 | of seasons, e.g. one year, two years, etc (in case of yearly seasonality)
102 |
103 |
104 | Next, you'll train your model on the training period, and evaluate it on the validation period.
105 |
106 | Here's where you can experiment to find the right architecture for training,
107 | and work on it, tune the hyperparameters, until you get the desired performance
108 | measured using the validation set.
109 |
110 | Often, you can then retrain on the training+validation period, then test on the testing period,
111 | to see if the model will perform just as well.
112 |
113 | If it does, you could take the unusual step of retraining with the testing data included,
114 | because the testing data is the closest data we have to the current point in time.
115 |
116 | It's the strongest signal in determining future values.
117 |
118 |
119 | If the model is not trained on that data too, then it may not be optimal.
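To make fixed partitioning concrete, here's a minimal sketch (the synthetic series, its seasonality, and the split point below are invented for illustration, not from the course):

import numpy as np

time = np.arange(4 * 365)                       # four "years" of daily data
series = np.sin(2 * np.pi * time / 365) \
         + 0.1 * np.random.randn(len(time))     # seasonality + noise

split_time = 3 * 365                            # a whole number of seasons per period
time_train, x_train = time[:split_time], series[:split_time]
time_valid, x_valid = time[split_time:], series[split_time:]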
120 |
121 |
122 | Because the test data is that strong a signal, it's common to forgo a test period and use only a training and
123 | validation period.
124 |
125 |
126 | Roll-Forward Partitioning ==>
127 | We start with a short training period and gradually increase it, say by one day
128 | at a time, or by one week at a time.
129 |
130 | At each iteration, we train the model on a training period, and
131 | we use it to forecast the following day or following week in the validation period.
132 |
133 | You could see it as doing fixed partitioning a number of times, and then continually
134 | refining the model as you go.
135 | ----------------------------------------------------------------------------------------------
136 |
137 |
138 | ## Metrics for evaluating performance
139 |
140 | Once we have a model and a period, then we can evaluate the model on it and we'll
141 | need a metric to calculate the performance.
142 |
143 | Errors = difference b/w the forecasted values from our model and the actual values
144 | over the evaluation period.
145 |
146 |
147 | Most common metric for evaluating the performance of a model = MSE
148 |
149 | MSE = np.square(errors).mean()
150 |
151 | We square the errors to remove the negative values.
152 |
153 |
154 | And if we want the mean of our errors calculation to be of the same
155 | scale as the original errors, then we just get its square root.
156 |
157 | RMSE = np.sqrt(mse)
158 |
159 |
160 | MAE / MAD (Mean absolute deviation) = np.abs(errors).mean()
161 |
162 | It removes negative values by taking their absolute values.
163 |
164 | This does not penalize large errors as much as MSE does.
165 |
166 |
167 | If large errors are potentially dangerous and they cost you much more
168 | than smaller errors ==> MSE is preferred.
169 |
170 | If your gain / loss is just proportional to the size of the error,
171 | then ==> MAE may be better.
172 |
173 |
174 | MAPE (Mean absolute percentage error) == np.abs(errors / x_valid).mean()
175 |
176 | It's the mean ratio b/w the absolute error and the absolute value.
177 |
178 | This gives an idea of the size of the errors compared to the values.
179 |
180 |
181 | keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
182 |
183 | 5.9379085..
184 | --------------------------------------------------------------------------------------
185 |
186 |
187 | ## Moving average and differencing
188 |
189 | A common and very simple forecasting method is to calculate a moving average.
190 |
191 | The yellow line is a plot of the average of the blue values over a fixed period
192 | called the averaging window (e.g. 30 days)
193 |
194 | This nicely eliminates a lot of the noise and it gives us a curve
195 | roughly emulating the original series, but it does not anticipate trend or
196 | seasonality.
197 |
198 | Depending on the current time, i.e. the period over which you want to forecast
199 | into the future, it can actually end up being worse than a naive forecast.
200 |
201 |
202 | Differencing ==>
203 | One method to avoid this is to remove the trend and seasonality from
204 | the time series with a technique called differencing.
205 |
206 | So instead of studying the time series itself, we study the difference b/w the value
207 | at time T and the value at an earlier period.
208 |
209 | E.g. Series(t) - Series(t-365)
210 |
211 | We can then use a moving average to forecast this time series, but these
212 | will be forecasts for the difference time series, not the original time series.
213 |
214 | To get the final forecasts for the original time series, we just need to
215 | add back the value at time T-365.
216 |
217 | The result of this is a dip in MAE on the validation period.
218 | The moving average also removes a lot of noise.
219 |
220 |
221 | The 'lot of noise' is coming from past values which we added back into our forecasts.
222 |
223 | So we can improve these forecasts by also removing that past noise, using a moving average over the past values.
224 |
225 |
226 | If we do that, we get much smoother forecasts.
227 |
228 | Simple approaches can sometimes work just fine.
229 | ----------------------------------------------------------------------------------------------------
230 |
231 |
232 | ## Trailing versus centered windows
233 |
234 | Note that we use a trailing window when computing the moving average
235 | of present values, from T-30 to T-1.
236 |
237 | But we use a centered window to compute the moving average of past values
238 | from one year ago: (T - 1 year - 5 days) to (T - 1 year + 5 days)
239 |
240 |
241 | Moving averages using centered windows can be more accurate than
242 | those using trailing windows.
243 |
244 | But we can't use centered windows to smooth present values
245 | since we don't know future values.
246 |
247 |
248 | However, to smooth past values, we can afford to use centered windows.
249 | ------------------------------------------------------------------------------------------------
250 |
251 |
252 | ## Forecasting
253 |
254 | All we did was add the raw historic values, which are very noisy.
255 |
256 | What if instead, we added in the moving average of the historic values,
257 | so we're effectively using 2 different moving averages?
258 |
259 | Using this, our prediction curve is a lot less noisy and the predictions
260 | are looking pretty good.
261 |
262 | Also, the error rate has improved further.
263 | ------------------------------------------------------------------------------------
264 |
265 | -------------------------------------------------------------------------------- /Course 3 - Natural Language Processing in TensorFlow/Week 4/notes.txt: --------------------------------------------------------------------------------
1 | ## Looking into the code
2 |
3 | Traditional Irish song:
4 |
5 | In the town of Athy one Jeremy Lanigan
6 | Battered away til he hadnt a pound.
7 | His father died and made him a man again
8 | Left him a farm and ten acres of ground....
9 |
10 |
11 | Code to process it:
12 |
13 | tokenizer = Tokenizer()
14 |
15 | data = "In the town of Athy one Jeremy Lanigan \n Battered away..."
16 | corpus = data.lower().split("\n")
17 |
18 | tokenizer.fit_on_texts(corpus)
19 | total_words = len(tokenizer.word_index) + 1
20 |
21 |
22 | --> One was added to total_words to consider out-of-vocabulary words
23 |
24 |
25 | ## Training the data
26 |
27 | input_sequences = []
28 | for line in corpus:
29 | token_list = tokenizer.texts_to_sequences([line])[0]
30 | for i in range(1, len(token_list)):
31 | n_gram_sequence = token_list[:i+1]
32 | input_sequences.append(n_gram_sequence)
33 |
34 |
35 | --> The training X's will be called input_sequences (Python list)
36 |
37 | For each line in the corpus, we'll generate a token list using the tokenizer's
38 | texts_to_sequences method.
39 |
40 | This will convert a line of text like this:
41 |
42 | In the town of Athy one Jeremy Lanigan ==> [4 2 66 8 67 68 69 70]
43 |
44 |
45 | Then we'll iterate over this list of tokens and create a number of
46 | n-gram sequences, namely the first 2 words in the sentence are one sequence,
47 | then the first three are another sequence, etc.
48 |
49 |
50 | The result? For the first line of the song, the following input sequences will be generated:
51 |
52 | Line: Input Sequences:
53 |
54 | [4 2 66 8 67 68 69 70] [4 2]
55 | [4 2 66]
56 | [4 2 66 8]
57 | [4 2 66 8 67]
58 | [4 2 66 8 67 68]
59 | [4 2 66 8 67 68 69]
60 | [4 2 66 8 67 68 69 70]
61 |
62 |
63 | This will happen for every sentence in the data
64 |
65 |
66 | --> Next, we need to find the length of the longest sentence in the corpus, using this:
67 |
68 | max_sequence_len = max([len(x) for x in input_sequences])
69 |
70 |
71 | --> Once we have the length of the longest sequence, next we'll pad all sequences
72 | so that they are the same length.
73 |
74 | input_sequences =
75 | np.array(pad_sequences(input_sequences,
76 | maxlen = max_sequence_len,
77 | padding = 'pre'))
78 |
79 | We will pre-pad with zeros to make it easier to extract the label
80 |
81 |
82 | Line: Padded Input Sequences:
83 |
84 | [4 2 66 8 67 68 69 70] [0 0 0 0 0 0 0 0 0 0 4 2]
85 | [0 0 0 0 0 0 0 0 0 4 2 66]
86 | [0 0 0 0 0 0 0 0 4 2 66 8]
87 | [0 0 0 0 0 0 0 4 2 66 8 67]
88 | [0 0 0 0 0 0 4 2 66 8 67 68]
89 | [0 0 0 0 0 4 2 66 8 67 68 69]
90 | [0 0 0 0 4 2 66 8 67 68 69 70]
91 |
92 |
93 | ---> Now that we have our padded sequences, we'll turn them into X's and Y's -
94 | our input values and their labels
95 |
96 | We have to take all but the last token as X, and the last token as Y, or label
97 | -------------------------------------------------------------------------------------------------------
98 |
99 |
100 | ## More on training the data
101 |
102 | --> Now we have to split our sequences into our X's and Y's.
103 |
104 |
105 | xs = input_sequences[:, :-1]
106 | labels = input_sequences[:, -1]
107 |
108 |
109 | --> Now we should one-hot encode the labels as this really is a classification problem,
110 | where given a sequence of words, we can classify from the corpus what the next word would
111 | likely be.
112 |
113 |
114 | ys = tf.keras.utils.to_categorical(labels, num_classes = total_words)
115 |
116 |
117 | Sentence: [0 0 0 0 4 2 66 8 67 68 69 70]
118 |
119 | X: [0 0 0 0 4 2 66 8 67 68 69]
120 |
121 | Label: [70]
122 |
123 | Y: [0. 0. .... 1. 0. 0. ...]
124 |
125 |
126 | The Y is a one-hot encoded array where the length is the size of the corpus of words
127 | and the value that is set to one is the one at the index of the label, which in this
128 | case is the 70th element.
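As a quick sanity check of the split described above, here's a minimal sketch on a few toy padded sequences (the token values and the vocabulary size of 100 are made up for illustration):

import numpy as np
import tensorflow as tf

input_sequences = np.array([[0, 0, 4, 2],
                            [0, 4, 2, 66],
                            [4, 2, 66, 8]])

xs = input_sequences[:, :-1]       # everything but the last token
labels = input_sequences[:, -1]    # the last token of each sequence

ys = tf.keras.utils.to_categorical(labels, num_classes = 100)

print(xs.shape)    # (3, 3)
print(ys.shape)    # (3, 100) -- one one-hot row per sequence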
129 | -------------------------------------------------------------------------------------------------- 130 | 131 | 132 | ## Finding what the next word should be 133 | 134 | model = Sequential() 135 | model.add(Embedding(total_words, 64, 136 | input_length = max_sequence_len - 1)) 137 | model.add((LSTM(20))) 138 | model.add(Dense(total_words, activation = 'softmax')) 139 | 140 | model.compile(loss = 'categorical_crossentropy', 141 | optimizer = 'adam', 142 | metrics = ['accuracy']) 143 | 144 | model.fit(xs, ys, 145 | epochs = 500, 146 | verbose = 2) 147 | 148 | 149 | --> We subtract 1 off the input_length because we cropped off the last word of 150 | each sequence to get the label 151 | 152 | --> The Dense layer has 1 neuron per word 153 | -------------------------------------------------------------------------------------------- 154 | 155 | 156 | ## Example 157 | 158 | --> With the model trained above, when we asked to predict the next 10 words for the phrase 159 | "Laurence went to Dublin", a lot of the last 5-6 words were getting repeated. 160 | 161 | This is because the LSTM was only taking the forward context. Let's use Bidirectional now. 162 | 163 | 164 | model = Sequential() 165 | model.add(Embedding(total_words, 64, 166 | input_length = max_sequence_len - 1)) 167 | model.add(Bidirectional(LSTM(20))) 168 | model.add(Dense(total_words, activation = 'softmax')) 169 | 170 | model.compile(loss = 'categorical_crossentropy', 171 | optimizer = 'adam', 172 | metrics = ['accuracy']) 173 | 174 | model.fit(xs, ys, 175 | epochs = 500, 176 | verbose = 2) 177 | 178 | 179 | --> Using this code, we observe that we do converge a bit quicker than earlier. 180 | 181 | --> The predictions obtained using this model make a little more sense, but there are still repetitions. 182 | ----------------------------------------------------------------------------------------------------------------- 183 | 184 | 185 | ## Predicting a word 186 | 187 | seed is the sentence "Laurence went to dublin" 188 | 189 | We want to predict the next 10 words following 'dublin'.. 190 | 191 | 192 | token_list = tokenizer.texts_to_sequences([seed_text])[0] 193 | 194 | 195 | Output: 196 | Laurence went to dublin ==> [134, 13, 59] (Laurence gets ignored) 197 | 198 | 199 | --> The code below will then pad the sequence, so it matches the ones in the 200 | training set. 201 | 202 | token_list = pad_sequences([token_list], 203 | maxlen = max_sequence_len - 1, 204 | padding = 'pre') 205 | 206 | 207 | We end up with something like this: [0 0 0 0 0 0 0 134 13 59] 208 | which we can pass to the model to get a prediction back 209 | 210 | 211 | predicted = model.predict_classes(token_list, verbose = 0) 212 | 213 | 214 | This will give us the token of the word most likely to be the next 215 | one in the sequence. 
216 |
217 |
218 | --> Now we can do a reverse lookup on the word index items to turn the token
219 | back into a word and to add that to our seed text, and that's it
220 |
221 | for word, index in tokenizer.word_index.items():
222 | if index == predicted:
223 | output_word = word
224 | break
225 | seed_text += " " + output_word
226 |
227 |
228 | --> Below is the complete code to do that 10 times
229 |
230 | seed_text = "Laurence went to dublin"
231 | next_words = 10
232 |
233 | for _ in range(next_words):
234 | token_list = tokenizer.texts_to_sequences([seed_text])[0]
235 | token_list = pad_sequences([token_list],
236 | maxlen = max_sequence_len - 1,
237 | padding = 'pre')
238 | predicted = model.predict_classes(token_list, verbose = 0)
239 | output_word = ""
240 | for word, index in tokenizer.word_index.items():
241 | if index == predicted:
242 | output_word = word
243 | break
244 | seed_text += " " + output_word
245 | print(seed_text)
246 |
247 |
248 | --> Do know that the more words you try to predict, the more likely
249 | you are going to get gibberish.
250 | ---------------------------------------------------------------------------
251 |
252 |
253 | ## Poetry!
254 |
255 | !wget --no-check-certificate \
256 | https://storage.googleapis.com/laurencemoroney-blog.appspot.com/irish-lyrics-eof.txt \
257 | -O /tmp/irish-lyrics-eof.txt
258 |
259 | Above file has a lot of songs, having 1692 sentences in all.
260 | -------------------------------------------------------------------------------------------
261 |
262 |
263 | ## Looking into the code
264 |
265 | data = open('/tmp/irish-lyrics-eof.txt').read()
266 |
267 | model = Sequential()
268 | model.add(Embedding(total_words, 100,
269 | input_length = max_sequence_len - 1))
270 | model.add(Bidirectional(LSTM(150)))
271 | model.add(Dense(total_words, activation = 'softmax'))
272 |
273 | adam = Adam(lr = 0.01)
274 | model.compile(loss = 'categorical_crossentropy',
275 | optimizer = adam,
276 | metrics = ['accuracy'])
277 |
278 | history = model.fit(xs, ys,
279 | epochs = 100,
280 | verbose = 1)
281 |
282 | 1) You can experiment with the dimensionality of the embedding (100, here)
283 |
284 | 2) You can play with the number of LSTM units (150, here)
285 | You can also try removing the 'Bidirectional', so that your words only have forward meaning.
286 |
287 | 3) The biggest impact is on the optimizer. Try playing with the learning rate of the
288 | Adam optimizer
289 |
290 | 4) The more the number of epochs, the better (generally) but eventually
291 | you'll hit the law of diminishing returns.
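One portability note on the prediction loop further above: model.predict_classes was removed from Keras in later TensorFlow releases. If you're on a newer version, an equivalent sketch (not from the course) is:

import numpy as np

probs = model.predict(token_list, verbose = 0)
predicted = int(np.argmax(probs, axis = -1)[0])

# tokenizer.index_word is the built-in reverse of word_index,
# so it replaces the word_index lookup loop
output_word = tokenizer.index_word.get(predicted, "")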
292 | ------------------------------------------------------------------------------------------------------- 293 | 294 | 295 | ## Your next task 296 | 297 | https://www.tensorflow.org/tutorials/sequences/text_generation 298 | 299 | The above workbook depicts character-based text prediction -------------------------------------------------------------------------------- /Course 3 - Natural Language Processing in TensorFlow/Week 3/notes.txt: -------------------------------------------------------------------------------- 1 | ## Introduction 2 | 3 | Refer https://www.coursera.org/lecture/nlp-sequence-models/deep-rnns-ehs0S 4 | --------------------------------------------------------------------------------- 5 | 6 | ## LSTMs 7 | 8 | Refer https://www.coursera.org/lecture/nlp-sequence-models/long-short-term-memory-lstm-KXoay 9 | --------------------------------------------------------------------------------------------- 10 | 11 | ## Implementing LSTMs in code 12 | 13 | model = tf.keras.Sequential([ 14 | tf.keras.layers.Embedding(tokenizer.vocab_size, 64), 15 | tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)), 16 | tf.keras.layers.Dense(64, activation = 'relu'), 17 | tf.keras.layers.Dense(1, activation = 'sigmoid') 18 | ]) 19 | 20 | LSTM(64) --> 64 = no. of outputs desired from the LSTM layer 21 | 22 | 23 | --> When viewing the model summary, the output for the Bidirectional layer 24 | will be 128, because it doubles up 64. 25 | 26 | 27 | model = tf.keras.Sequential([ 28 | tf.keras.layers.Embedding(tokenizer.vocab_size, 64), 29 | tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True)), 30 | tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)), 31 | tf.keras.layers.Dense(64, activation = 'relu'), 32 | tf.keras.layers.Dense(1, activation = 'sigmoid') 33 | ]) 34 | 35 | 36 | --> LSTMs can be stacked just like other layers. 37 | 38 | --> When you feed an LSTM into another one, you have to put the 39 | return_sequences = True parameter into the first one 40 | 41 | This ensures that the output of the LSTM matches the desired inputs 42 | of the next one. 43 | ------------------------------------------------------------------------------------------- 44 | 45 | ## Accuracy and loss 46 | 47 | --> Training a one layer LSTM and two layer LSTM over the model covered last week (sub-word tokenization) 48 | has a few differences. Both are trained for 10 epochs. 49 | 50 | In the 2 layer one, the validation_accuracy curve doesn't show a nosedive as in the 1 layer one. 51 | 52 | The training curve is smoother for the 2 layer LSTM. 53 | 54 | 55 | --> Jaggedness (in the accuracy curve) is an indication that your model needs improvement. 56 | 57 | 58 | --> When both these networks are trained for 50 epochs, 59 | the accuracy curve of the 1 layer LSTM shows some pretty sharp dips. 60 | 61 | The 2 layer LSTM model has a much smoother curve here too. 62 | 63 | 64 | --> In the loss curves, it's desired that the loss flattens out in later epochs. 
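A small helper for producing these accuracy/loss curves (a sketch assuming a Keras History object named history; depending on the TF version, the keys are 'accuracy'/'val_accuracy' or 'acc'/'val_acc'):

import matplotlib.pyplot as plt

def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_' + string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_' + string])
    plt.show()

plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')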
65 | ------------------------------------------------------------------------------------------------------------
66 |
67 | ## Looking into the code
68 |
69 | model = tf.keras.Sequential([
70 | tf.keras.layers.Embedding(vocab_size, embedding_dim,
71 | input_length = max_length),
72 | tf.keras.layers.GlobalAveragePooling1D(),
73 | # (tf.keras.layers.Flatten() is an alternative here; use one, not both)
74 | tf.keras.layers.Dense(24, activation = 'relu'),
75 | tf.keras.layers.Dense(1, activation = 'sigmoid')
76 | ])
77 |
78 |
79 | model = tf.keras.Sequential([
80 | tf.keras.layers.Embedding(vocab_size, embedding_dim,
81 | input_length = max_length),
82 | tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
83 | tf.keras.layers.Dense(24, activation = 'relu'),
84 | tf.keras.layers.Dense(1, activation = 'sigmoid')
85 | ])
86 |
87 |
88 | --> When both the above models are trained on the sarcasm dataset,
89 | the model without the LSTM showed a max accuracy that flattened at 85%,
90 | and the validation accuracy curve stayed in sync with the accuracy one,
91 | flattening out at 80%.
92 |
93 | In the LSTM model, the accuracy quickly reached a max of 97.5% within
94 | 50 epochs.
95 | The validation accuracy dropped slowly, but it was still close to the values
96 | observed in the sans-LSTM model.
97 |
98 | Still, the drop indicates overfitting.
99 |
100 |
101 | --> The loss values from the non-LSTM model got to a healthy state quite quickly
102 | and then flattened out.
103 |
104 | With the LSTM, the training loss dropped nicely, but the validation one
105 | increased as training was continued.
106 |
107 | Again, this shows some overfitting in the LSTM network.
108 | The accuracy of the prediction increased, but the confidence in it decreased.
109 | ----------------------------------------------------------------------------------------
110 |
111 | ## Using a convolutional network
112 |
113 | model = tf.keras.Sequential([
114 | tf.keras.layers.Embedding(vocab_size, embedding_dim,
115 | input_length = max_length),
116 | tf.keras.layers.Conv1D(128, 5, activation = 'relu'),
117 | tf.keras.layers.GlobalMaxPooling1D(),
118 | tf.keras.layers.Dense(24, activation = 'relu'),
119 | tf.keras.layers.Dense(1, activation = 'sigmoid')
120 | ])
121 |
122 |
123 | --> 128 filters, each spanning 5 words
124 |
125 | --> Now, words will be grouped to the size of the filter (5, here),
126 | and convolutions will be learned that can map the word classification to
127 | the desired output.
128 |
129 |
130 | --> Training with the network above, we get a 100% accuracy, and 80% validation accuracy.
131 |
132 | But as before, our loss increases in the validation set, indicating potential overfitting.
133 |
134 |
135 | max_length = 120
136 |
137 | tf.keras.layers.Conv1D(128, 5, activation = 'relu')
138 |
139 |
140 | --> As the size of the input was 120 words, a filter that is 5 words long
141 | will shave off 2 words from the front and 2 from the back, leaving us with 116.
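A quick way to confirm the 116 is to inspect the model; in this sketch, the vocabulary size of 1000 and embedding dimension of 16 are placeholder values:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(1000, 16, input_length = 120),
    tf.keras.layers.Conv1D(128, 5, activation = 'relu')
])
model.summary()
# The Conv1D output shape reads (None, 116, 128):
# 120 - 5 + 1 = 116 positions for the 5-word filter, with 128 filters each.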
142 | ------------------------------------------------------------------------------------------------------- 143 | 144 | ## Going back to the IMDB dataset 145 | 146 | imdb, info = tfds.load("imdb_reviews", 147 | with_info = True, 148 | as_supervised = True) 149 | 150 | model = tf.keras.Sequential([ 151 | tf.keras.layers.Embedding(vocab_size, embedding_dim, 152 | input_length = max_length), 153 | tf.keras.layers.Flatten(), 154 | tf.keras.layers.Dense(6, activation = 'relu'), 155 | tf.keras.layers.Dense(1, activation = 'sigmoid') 156 | ]) 157 | 158 | model.compile(loss = 'binary_crossentropy', 159 | optimizer = 'adam', 160 | metrics = ['accuracy']) 161 | 162 | model.summary() 163 | 164 | 165 | --> Using this model with the IMDB reviews dataset gives nice accuracy, 166 | but clear overfitting but it only takes about 5 seconds per epoch to train. 167 | 168 | Total params: 171, 533 169 | 170 | 171 | imdb, info = tfds.load("imdb_reviews", 172 | with_info = True, 173 | as_supervised = True) 174 | 175 | # Model Definition with LSTM 176 | model = tf.keras.Sequential([ 177 | tf.keras.layers.Embedding(vocab_size, embedding_dim, 178 | input_length = max_length), 179 | tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)), 180 | tf.keras.layers.Dense(6, activation = 'relu'), 181 | tf.keras.layers.Dense(1, activation = 'sigmoid') 182 | ]) 183 | 184 | model.compile(loss = 'binary_crossentropy', 185 | optimizer = 'adam', 186 | metrics = ['accuracy']) 187 | 188 | model.summary() 189 | 190 | 191 | --> Training the above LSTM model with IMDB reviews dataset, we'll 192 | have only 30,129 parameters, but it will take about 43 seconds per 193 | epoch. 194 | 195 | The accuracy is better, but there's still some overfitting. 196 | 197 | 198 | model = tf.keras.Sequential([ 199 | tf.keras.layers.Embedding(vocab_size, embedding_dim, 200 | input_length = max_length), 201 | tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)), 202 | tf.keras.layers.Dense(6, activation = 'relu'), 203 | tf.keras.layers.Dense(1, activation = 'sigmoid') 204 | ]) 205 | 206 | model.compile(loss = 'binary_crossentropy', 207 | optimizer = 'adam', 208 | metrics = ['accuracy']) 209 | 210 | model.summary() 211 | 212 | 213 | --> Training the above network with the same dataset, we'll have 214 | 169,997 parameters, the training time will fall to 20 seconds per epoch. 215 | 216 | But here the accuracy is again very good on training, not too bad on validation, 217 | but again, showing some overfitting. 218 | 219 | 220 | # Model Definition with Conv1D 221 | model = tf.keras.Sequential([ 222 | tf.keras.layers.Embedding(vocab_size, embedding_dim, 223 | input_length = max_length), 224 | tf.keras.layers.Conv1D(128, 5, activation = 'relu'), 225 | tf.keras.layers.GlobalAveragePooling1D(), 226 | tf.keras.layers.Dense(6, activation = 'relu'), 227 | tf.keras.layers.Dense(1, activation = 'sigmoid') 228 | ]) 229 | 230 | model.compile(loss = 'binary_crossentropy', 231 | optimizer = 'adam', 232 | metrics = ['accuracy']) 233 | 234 | model.summary() 235 | 236 | 237 | --> Training with the above model on the same dataset, we'll have 238 | 171,149 parameters, and it only takes 6 seconds per epoch to get 239 | close to 100% accuracy on training, and about 83% on validation, 240 | but again with overfitting. 
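The notes stop at diagnosing the overfitting; one common knob (a hedged sketch, not the course's prescribed fix) is adding dropout, reusing vocab_size, embedding_dim and max_length from the models above:

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim,
                              input_length = max_length),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, dropout = 0.2)),    # drops a fraction of LSTM inputs
    tf.keras.layers.Dense(6, activation = 'relu'),
    tf.keras.layers.Dropout(0.5),                    # drops half the dense activations
    tf.keras.layers.Dense(1, activation = 'sigmoid')
])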
241 | ------------------------------------------------------------------------------------------ 242 | 243 | 244 | ## Exploring different sequence models 245 | 246 | Lesson 1a - IMDB Subwords 8K with Single Layer LSTM 247 | 248 | Lesson 1b - IMDB Subwords 8K with Multi Layer LSTM 249 | 250 | Lesson 1c - IMDB Subwords 8K with 1D Convolutional Layer 251 | 252 | Lesson 2 - Sarcasm with Bidirectional LSTM 253 | 254 | Lesson 2c - Sarcasm with 1D Convolutional Layer 255 | 256 | Lesson 2d - IMDB Reviews with GRU (and optional LSTM & Conv1D) 257 | ------------------------------------------------------------------------- 258 | 259 | ## Exercise 3 260 | 261 | Dataset: https://www.kaggle.com/kazanova/sentiment140 262 | 263 | GloVe: https://nlp.stanford.edu/projects/glove/ 264 | 265 | 266 | 267 | 268 | 269 | 270 | 271 | 272 | -------------------------------------------------------------------------------- /Course 4 - Sequences, Time Series, and Prediction/Week 3/Course 4 - Week 3 - Exercise-Question.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "Copy of S+P Week 3 Exercise Question.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | }, 14 | "accelerator": "GPU" 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "code", 19 | "metadata": { 20 | "id": "D1J15Vh_1Jih", 21 | "colab_type": "code", 22 | "colab": {} 23 | }, 24 | "source": [ 25 | "!pip install tensorflow==2.0.0-beta1" 26 | ], 27 | "execution_count": 0, 28 | "outputs": [] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "metadata": { 33 | "id": "BOjujz601HcS", 34 | "colab_type": "code", 35 | "colab": {} 36 | }, 37 | "source": [ 38 | "import tensorflow as tf\n", 39 | "import numpy as np\n", 40 | "import matplotlib.pyplot as plt\n", 41 | "print(tf.__version__)" 42 | ], 43 | "execution_count": 0, 44 | "outputs": [] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "metadata": { 49 | "colab_type": "code", 50 | "id": "Zswl7jRtGzkk", 51 | "colab": {} 52 | }, 53 | "source": [ 54 | "def plot_series(time, series, format=\"-\", start=0, end=None):\n", 55 | " plt.plot(time[start:end], series[start:end], format)\n", 56 | " plt.xlabel(\"Time\")\n", 57 | " plt.ylabel(\"Value\")\n", 58 | " plt.grid(False)\n", 59 | "\n", 60 | "def trend(time, slope=0):\n", 61 | " return slope * time\n", 62 | "\n", 63 | "def seasonal_pattern(season_time):\n", 64 | " \"\"\"Just an arbitrary pattern, you can change it if you wish\"\"\"\n", 65 | " return np.where(season_time < 0.1,\n", 66 | " np.cos(season_time * 6 * np.pi),\n", 67 | " 2 / np.exp(9 * season_time))\n", 68 | "\n", 69 | "def seasonality(time, period, amplitude=1, phase=0):\n", 70 | " \"\"\"Repeats the same pattern at each period\"\"\"\n", 71 | " season_time = ((time + phase) % period) / period\n", 72 | " return amplitude * seasonal_pattern(season_time)\n", 73 | "\n", 74 | "def noise(time, noise_level=1, seed=None):\n", 75 | " rnd = np.random.RandomState(seed)\n", 76 | " return rnd.randn(len(time)) * noise_level\n", 77 | "\n", 78 | "time = np.arange(10 * 365 + 1, dtype=\"float32\")\n", 79 | "baseline = 10\n", 80 | "series = trend(time, 0.1) \n", 81 | "baseline = 10\n", 82 | "amplitude = 40\n", 83 | "slope = 0.005\n", 84 | "noise_level = 3\n", 85 | "\n", 86 | "# Create the series\n", 87 | "series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)\n", 88 | "# Update 
with noise\n", 89 | "series += noise(time, noise_level, seed=51)\n", 90 | "\n", 91 | "split_time = 3000\n", 92 | "time_train = time[:split_time]\n", 93 | "x_train = series[:split_time]\n", 94 | "time_valid = time[split_time:]\n", 95 | "x_valid = series[split_time:]\n", 96 | "\n", 97 | "window_size = 20\n", 98 | "batch_size = 32\n", 99 | "shuffle_buffer_size = 1000\n", 100 | "\n", 101 | "plot_series(time, series)" 102 | ], 103 | "execution_count": 0, 104 | "outputs": [] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "metadata": { 109 | "id": "4sTTIOCbyShY", 110 | "colab_type": "code", 111 | "colab": {} 112 | }, 113 | "source": [ 114 | "def windowed_dataset(series, window_size, batch_size, shuffle_buffer):\n", 115 | " dataset = tf.data.Dataset.from_tensor_slices(series)\n", 116 | " dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)\n", 117 | " dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))\n", 118 | " dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))\n", 119 | " dataset = dataset.batch(batch_size).prefetch(1)\n", 120 | " return dataset" 121 | ], 122 | "execution_count": 0, 123 | "outputs": [] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "metadata": { 128 | "id": "A1Hl39rklkLm", 129 | "colab_type": "code", 130 | "colab": {} 131 | }, 132 | "source": [ 133 | "tf.keras.backend.clear_session()\n", 134 | "tf.random.set_seed(51)\n", 135 | "np.random.seed(51)\n", 136 | "\n", 137 | "tf.keras.backend.clear_session()\n", 138 | "dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n", 139 | "\n", 140 | "model = tf.keras.models.Sequential([\n", 141 | " tf.keras.layers.Lambda(# YOUR CODE HERE),\n", 142 | " # YOUR CODE HERE\n", 143 | " tf.keras.layers.Lambda(# YOUR CODE HERE)\n", 144 | "])\n", 145 | "\n", 146 | "lr_schedule = tf.keras.callbacks.LearningRateScheduler(\n", 147 | " lambda epoch: 1e-8 * 10**(epoch / 20))\n", 148 | "optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)\n", 149 | "model.compile(loss=tf.keras.losses.Huber(),\n", 150 | " optimizer=optimizer,\n", 151 | " metrics=[\"mae\"])\n", 152 | "history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])" 153 | ], 154 | "execution_count": 0, 155 | "outputs": [] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "metadata": { 160 | "id": "AkBsrsXMzoWR", 161 | "colab_type": "code", 162 | "colab": {} 163 | }, 164 | "source": [ 165 | "plt.semilogx(history.history[\"lr\"], history.history[\"loss\"])\n", 166 | "plt.axis([1e-8, 1e-4, 0, 30])\n", 167 | "\n", 168 | "# FROM THIS PICK A LEARNING RATE" 169 | ], 170 | "execution_count": 0, 171 | "outputs": [] 172 | }, 173 | { 174 | "cell_type": "code", 175 | "metadata": { 176 | "id": "4uh-97bpLZCA", 177 | "colab_type": "code", 178 | "colab": {} 179 | }, 180 | "source": [ 181 | "tf.keras.backend.clear_session()\n", 182 | "tf.random.set_seed(51)\n", 183 | "np.random.seed(51)\n", 184 | "\n", 185 | "tf.keras.backend.clear_session()\n", 186 | "dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)\n", 187 | "\n", 188 | "model = tf.keras.models.Sequential([\n", 189 | " tf.keras.layers.Lambda(# YOUR CODE HERE),\n", 190 | " # YOUR CODE HERE\n", 191 | " tf.keras.layers.Lambda(# YOUR CODE HERE)\n", 192 | "])\n", 193 | "\n", 194 | "model.compile(loss=\"mse\", optimizer=tf.keras.optimizers.SGD(lr=# PUT YOUR LEARNING RATE HERE#, momentum=0.9),metrics=[\"mae\"])\n", 195 | "history = model.fit(dataset,epochs=500,verbose=1)\n", 196 | " \n", 197 | "# FIND A MODEL AND 
A LR THAT TRAINS TO AN MAE < 3 " 198 | ], 199 | "execution_count": 0, 200 | "outputs": [] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "metadata": { 205 | "id": "icGDaND7z0ne", 206 | "colab_type": "code", 207 | "colab": {} 208 | }, 209 | "source": [ 210 | "forecast = []\n", 211 | "results = []\n", 212 | "for time in range(len(series) - window_size):\n", 213 | " forecast.append(model.predict(series[time:time + window_size][np.newaxis]))\n", 214 | "\n", 215 | "forecast = forecast[split_time-window_size:]\n", 216 | "results = np.array(forecast)[:, 0, 0]\n", 217 | "\n", 218 | "\n", 219 | "plt.figure(figsize=(10, 6))\n", 220 | "\n", 221 | "plot_series(time_valid, x_valid)\n", 222 | "plot_series(time_valid, results)" 223 | ], 224 | "execution_count": 0, 225 | "outputs": [] 226 | }, 227 | { 228 | "cell_type": "code", 229 | "metadata": { 230 | "id": "KfPeqI7rz4LD", 231 | "colab_type": "code", 232 | "outputId": "5cbee203-d2e8-455e-f603-204d9df993dc", 233 | "colab": { 234 | "base_uri": "https://localhost:8080/", 235 | "height": 164 236 | } 237 | }, 238 | "source": [ 239 | "tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()\n", 240 | "\n", 241 | "# YOUR RESULT HERE SHOULD BE LESS THAN 4" 242 | ], 243 | "execution_count": 0, 244 | "outputs": [ 245 | { 246 | "output_type": "error", 247 | "ename": "NameError", 248 | "evalue": "ignored", 249 | "traceback": [ 250 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 251 | "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", 252 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mkeras\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmetrics\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmean_absolute_error\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx_valid\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresults\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnumpy\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", 253 | "\u001b[0;31mNameError\u001b[0m: name 'tf' is not defined" 254 | ] 255 | } 256 | ] 257 | }, 258 | { 259 | "cell_type": "code", 260 | "metadata": { 261 | "id": "JUsdZB_tzDLe", 262 | "colab_type": "code", 263 | "colab": {} 264 | }, 265 | "source": [ 266 | "import matplotlib.image as mpimg\n", 267 | "import matplotlib.pyplot as plt\n", 268 | "\n", 269 | "#-----------------------------------------------------------\n", 270 | "# Retrieve a list of list results on training and test data\n", 271 | "# sets for each training epoch\n", 272 | "#-----------------------------------------------------------\n", 273 | "mae=history.history['mae']\n", 274 | "loss=history.history['loss']\n", 275 | "\n", 276 | "epochs=range(len(loss)) # Get number of epochs\n", 277 | "\n", 278 | "#------------------------------------------------\n", 279 | "# Plot MAE and Loss\n", 280 | "#------------------------------------------------\n", 281 | "plt.plot(epochs, mae, 'r')\n", 282 | "plt.plot(epochs, loss, 'b')\n", 283 | "plt.title('MAE and Loss')\n", 284 | "plt.xlabel(\"Epochs\")\n", 285 | "plt.ylabel(\"Accuracy\")\n", 286 | "plt.legend([\"MAE\", \"Loss\"])\n", 287 | "\n", 288 | "plt.figure()\n", 289 | "\n", 290 | "epochs_zoom = epochs[200:]\n", 291 | "mae_zoom = mae[200:]\n", 292 | "loss_zoom = loss[200:]\n", 293 | "\n", 294 | "#------------------------------------------------\n", 295 | "# Plot Zoomed MAE and Loss\n", 296 | 
"#------------------------------------------------\n", 297 | "plt.plot(epochs_zoom, mae_zoom, 'r')\n", 298 | "plt.plot(epochs_zoom, loss_zoom, 'b')\n", 299 | "plt.title('MAE and Loss')\n", 300 | "plt.xlabel(\"Epochs\")\n", 301 | "plt.ylabel(\"Accuracy\")\n", 302 | "plt.legend([\"MAE\", \"Loss\"])\n", 303 | "\n", 304 | "plt.figure()" 305 | ], 306 | "execution_count": 0, 307 | "outputs": [] 308 | } 309 | ] 310 | } -------------------------------------------------------------------------------- /Course 3 - Natural Language Processing in TensorFlow/Week 2/C3W2_exercise_meta.tsv: -------------------------------------------------------------------------------- 1 | 2 | s 3 | said 4 | will 5 | not 6 | mr 7 | year 8 | also 9 | people 10 | new 11 | us 12 | one 13 | can 14 | last 15 | t 16 | first 17 | time 18 | two 19 | government 20 | world 21 | now 22 | uk 23 | best 24 | years 25 | no 26 | make 27 | just 28 | film 29 | told 30 | made 31 | get 32 | music 33 | game 34 | like 35 | back 36 | many 37 | 000 38 | labour 39 | three 40 | well 41 | 1 42 | next 43 | bbc 44 | take 45 | set 46 | number 47 | added 48 | way 49 | market 50 | 2 51 | company 52 | may 53 | says 54 | election 55 | home 56 | off 57 | party 58 | good 59 | going 60 | much 61 | work 62 | 2004 63 | still 64 | win 65 | show 66 | think 67 | games 68 | go 69 | top 70 | second 71 | won 72 | million 73 | 6 74 | england 75 | firm 76 | since 77 | week 78 | say 79 | play 80 | part 81 | public 82 | use 83 | blair 84 | 3 85 | want 86 | minister 87 | however 88 | 10 89 | country 90 | technology 91 | see 92 | 4 93 | five 94 | british 95 | news 96 | european 97 | high 98 | group 99 | tv 100 | used 101 | end 102 | expected 103 | even 104 | players 105 | m 106 | brown 107 | 5 108 | six 109 | old 110 | net 111 | already 112 | four 113 | plans 114 | put 115 | come 116 | half 117 | london 118 | sales 119 | growth 120 | don 121 | long 122 | economy 123 | service 124 | right 125 | months 126 | chief 127 | day 128 | mobile 129 | former 130 | money 131 | britain 132 | director 133 | tax 134 | services 135 | 2005 136 | deal 137 | need 138 | help 139 | digital 140 | according 141 | big 142 | industry 143 | place 144 | companies 145 | users 146 | system 147 | business 148 | including 149 | team 150 | final 151 | based 152 | hit 153 | record 154 | report 155 | third 156 | called 157 | really 158 | international 159 | month 160 | move 161 | wales 162 | europe 163 | another 164 | 7 165 | life 166 | around 167 | economic 168 | start 169 | great 170 | future 171 | 2003 172 | firms 173 | came 174 | france 175 | open 176 | got 177 | spokesman 178 | software 179 | re 180 | without 181 | general 182 | club 183 | up 184 | took 185 | ireland 186 | video 187 | howard 188 | know 189 | united 190 | online 191 | bank 192 | phone 193 | china 194 | far 195 | state 196 | campaign 197 | side 198 | law 199 | radio 200 | better 201 | court 202 | making 203 | decision 204 | executive 205 | real 206 | media 207 | offer 208 | give 209 | computer 210 | found 211 | action 212 | it 213 | able 214 | president 215 | information 216 | despite 217 | office 218 | star 219 | lot 220 | o 221 | national 222 | line 223 | countries 224 | likely 225 | using 226 | away 227 | player 228 | internet 229 | saying 230 | every 231 | given 232 | security 233 | become 234 | left 235 | awards 236 | figures 237 | anti 238 | nations 239 | run 240 | eu 241 | 20 242 | cost 243 | ve 244 | prime 245 | role 246 | seen 247 | playing 248 | biggest 249 | man 250 | january 251 | data 252 | bill 253 | whether 254 | played 
255 | later 256 | foreign 257 | although 258 | cup 259 | hard 260 | award 261 | rise 262 | broadband 263 | times 264 | match 265 | chancellor 266 | oil 267 | pay 268 | lost 269 | taking 270 | house 271 | due 272 | past 273 | interest 274 | early 275 | never 276 | lord 277 | leader 278 | support 279 | case 280 | prices 281 | look 282 | microsoft 283 | shares 284 | michael 285 | legal 286 | analysts 287 | control 288 | believe 289 | december 290 | less 291 | days 292 | cut 293 | recent 294 | season 295 | little 296 | children 297 | e 298 | ahead 299 | earlier 300 | increase 301 | thought 302 | free 303 | john 304 | face 305 | research 306 | scotland 307 | important 308 | something 309 | current 310 | strong 311 | went 312 | issue 313 | secretary 314 | south 315 | local 316 | tory 317 | rights 318 | working 319 | power 320 | budget 321 | financial 322 | spending 323 | 12 324 | quarter 325 | access 326 | currently 327 | held 328 | major 329 | chance 330 | change 331 | trade 332 | films 333 | find 334 | looking 335 | try 336 | following 337 | sunday 338 | 0 339 | full 340 | tories 341 | yet 342 | return 343 | series 344 | latest 345 | meeting 346 | share 347 | different 348 | website 349 | david 350 | winning 351 | almost 352 | injury 353 | sale 354 | must 355 | lead 356 | enough 357 | personal 358 | programme 359 | might 360 | police 361 | low 362 | band 363 | problems 364 | ever 365 | keep 366 | rate 367 | announced 368 | always 369 | key 370 | coach 371 | williams 372 | sold 373 | across 374 | performance 375 | dollar 376 | 11 377 | among 378 | behind 379 | ago 380 | list 381 | 8 382 | 9 383 | clear 384 | getting 385 | political 386 | victory 387 | 25 388 | mark 389 | chairman 390 | include 391 | women 392 | demand 393 | 30 394 | statement 395 | ms 396 | march 397 | february 398 | things 399 | term 400 | rather 401 | jobs 402 | minutes 403 | tuesday 404 | american 405 | chelsea 406 | claims 407 | done 408 | content 409 | continue 410 | point 411 | job 412 | manager 413 | means 414 | head 415 | problem 416 | title 417 | actor 418 | coming 419 | huge 420 | price 421 | asked 422 | released 423 | taken 424 | mail 425 | men 426 | union 427 | members 428 | india 429 | allow 430 | weeks 431 | wednesday 432 | act 433 | japan 434 | rugby 435 | plan 436 | tony 437 | global 438 | investment 439 | least 440 | result 441 | apple 442 | 50 443 | young 444 | network 445 | today 446 | within 447 | costs 448 | fans 449 | forward 450 | d 451 | bid 452 | main 453 | french 454 | possible 455 | production 456 | needed 457 | running 458 | site 459 | beat 460 | november 461 | 18 462 | small 463 | war 464 | council 465 | consumer 466 | available 467 | saturday 468 | form 469 | warned 470 | thing 471 | monday 472 | cash 473 | vote 474 | hold 475 | several 476 | known 477 | wanted 478 | mps 479 | song 480 | pc 481 | issues 482 | total 483 | committee 484 | friday 485 | 15 486 | level 487 | live 488 | football 489 | though 490 | evidence 491 | policy 492 | prize 493 | version 494 | success 495 | led 496 | league 497 | search 498 | trying 499 | 2001 500 | human 501 | calls 502 | previous 503 | buy 504 | sir 505 | recently 506 | saw 507 | web 508 | sony 509 | rates 510 | family 511 | parties 512 | aid 513 | single 514 | album 515 | centre 516 | eight 517 | name 518 | customers 519 | rules 520 | meet 521 | close 522 | development 523 | to 524 | ministers 525 | others 526 | thursday 527 | health 528 | book 529 | competition 530 | stock 531 | agreed 532 | call 533 | phones 534 | 100 535 | difficult 536 | short 537 | let 538 
| race 539 | yukos 540 | consumers 541 | popular 542 | comes 543 | co 544 | fact 545 | charles 546 | event 547 | hope 548 | failed 549 | fourth 550 | higher 551 | showed 552 | networks 553 | debt 554 | board 555 | actress 556 | commission 557 | trial 558 | city 559 | wants 560 | october 561 | italy 562 | choice 563 | york 564 | lib 565 | reported 566 | feel 567 | nothing 568 | conference 569 | project 570 | career 571 | bt 572 | sites 573 | boss 574 | didn 575 | points 576 | liberal 577 | late 578 | sure 579 | liverpool 580 | festival 581 | reports 582 | black 583 | received 584 | cannot 585 | annual 586 | together 587 | instead 588 | claim 589 | shows 590 | gaming 591 | tour 592 | on 593 | dvd 594 | break 595 | launch 596 | claimed 597 | paid 598 | mean 599 | devices 600 | christmas 601 | jones 602 | movie 603 | boost 604 | goal 605 | virus 606 | growing 607 | stage 608 | release 609 | age 610 | largest 611 | september 612 | large 613 | leading 614 | in 615 | summer 616 | champion 617 | 2002 618 | involved 619 | position 620 | arsenal 621 | west 622 | denied 623 | changes 624 | russian 625 | out 626 | believes 627 | manchester 628 | profits 629 | paul 630 | singer 631 | iraq 632 | needs 633 | fall 634 | television 635 | products 636 | idea 637 | stop 638 | them 639 | gordon 640 | parliament 641 | 17 642 | australian 643 | pressure 644 | sport 645 | love 646 | germany 647 | africa 648 | started 649 | create 650 | order 651 | scottish 652 | talks 653 | giant 654 | potential 655 | 13 656 | german 657 | test 658 | pre 659 | weekend 660 | 16 661 | quite 662 | round 663 | opening 664 | whole 665 | that 666 | squad 667 | martin 668 | association 669 | special 670 | senior 671 | launched 672 | box 673 | chart 674 | oscar 675 | seven 676 | street 677 | rose 678 | value 679 | conservative 680 | sent 681 | stars 682 | ball 683 | car 684 | v 685 | anything 686 | grand 687 | groups 688 | hours 689 | accused 690 | 40 691 | stand 692 | remain 693 | 2000 694 | cards 695 | either 696 | hopes 697 | ensure 698 | olympic 699 | simply 700 | robinson 701 | fight 702 | similar 703 | press 704 | smith 705 | range 706 | irish 707 | drive 708 | 2006 709 | exchange 710 | rock 711 | official 712 | 24 713 | 14 714 | results 715 | bit 716 | appeal 717 | turn 718 | dr 719 | provide 720 | ukip 721 | immigration 722 | period 723 | makes 724 | target 725 | helped 726 | investors 727 | standard 728 | wrong 729 | sell 730 | attack 731 | commons 732 | fell 733 | independent 734 | tsunami 735 | particularly 736 | meanwhile 737 | comedy 738 | proposals 739 | education 740 | average 741 | energy 742 | lords 743 | ban 744 | gave 745 | impact 746 | via 747 | moment 748 | compared 749 | school 750 | happy 751 | card 752 | forced 753 | ll 754 | charge 755 | attacks 756 | spam 757 | generation 758 | force 759 | brought 760 | amount 761 | private 762 | community 763 | sector 764 | per 765 | bring 766 | fraud 767 | became 768 | fund 769 | euros 770 | extra 771 | systems 772 | everyone 773 | speech 774 | admitted 775 | poll 776 | history 777 | message 778 | numbers 779 | included 780 | widely 781 | gadget 782 | entertainment 783 | windows 784 | debate 785 | speaking 786 | selling 787 | hand 788 | bad 789 | department 790 | laws 791 | workers 792 | date 793 | australia 794 | charges 795 | markets 796 | night 797 | audience 798 | named 799 | russia 800 | comments 801 | soon 802 | worked 803 | staff 804 | agency 805 | view 806 | turned 807 | kilroy 808 | mike 809 | front 810 | member 811 | shot 812 | bush 813 | revealed 814 | areas 815 | 
download 816 | takes 817 | speed 818 | screen 819 | increased 820 | opposition 821 | university 822 | battle 823 | civil 824 | kennedy 825 | spend 826 | air 827 | finance 828 | newspaper 829 | him 830 | opportunity 831 | concerns 832 | shown 833 | survey 834 | area 835 | cross 836 | white 837 | gone 838 | voters 839 | course 840 | original 841 | millions 842 | bought 843 | offered 844 | £1 845 | all 846 | poor 847 | alan 848 | followed 849 | east 850 | concerned 851 | leave 852 | often 853 | decided 854 | insisted 855 | authorities 856 | outside 857 | b 858 | favourite 859 | terms 860 | defence 861 | step 862 | whose 863 | june 864 | story 865 | reached 866 | analyst 867 | lives 868 | created 869 | designed 870 | mini 871 | process 872 | non 873 | risk 874 | 22 875 | america 876 | body 877 | quality 878 | easy 879 | spent 880 | cuts 881 | becoming 882 | remains 883 | unit 884 | majority 885 | raise 886 | pop 887 | attempt 888 | musical 889 | r 890 | drugs 891 | hollywood 892 | challenge 893 | experience 894 | example 895 | 19 896 | debut 897 | nominated 898 | states 899 | reach 900 | credit 901 | messages 902 | levels 903 | winner 904 | ray 905 | robert 906 | silk 907 | watch 908 | andy 909 | situation 910 | focus 911 | taxes 912 | euro 913 | songs 914 | organisation 915 | build 916 | everything 917 | believed 918 | april 919 | tough 920 | central 921 | anyone 922 | signed 923 | j 924 | pcs 925 | captain 926 | rival 927 | device 928 | indian 929 | confirmed 930 | james 931 | so 932 | titles 933 | businesses 934 | post 935 | technologies 936 | critics 937 | matter 938 | previously 939 | trading 940 | rest 941 | met 942 | began 943 | longer 944 | officials 945 | response 946 | probably 947 | nine 948 | hour 949 | row 950 | minute 951 | light 952 | cases 953 | magazine 954 | building 955 | worth 956 | account 957 | voice 958 | programs 959 | ask 960 | looked 961 | mp 962 | davis 963 | aviator 964 | threat 965 | trust 966 | confidence 967 | looks 968 | g 969 | chinese 970 | machine 971 | gold 972 | category 973 | computers 974 | premiership 975 | host 976 | measures 977 | fast 978 | person 979 | ruled 980 | towards 981 | artists 982 | double 983 | training 984 | missed 985 | felt 986 | care 987 | agreement 988 | allowed 989 | madrid 990 | scheme 991 | zealand 992 | fear 993 | theatre 994 | portable 995 | newcastle 996 | north 997 | serious 998 | spain 999 | management 1000 | -------------------------------------------------------------------------------- /Course 4 - Sequences, Time Series, and Prediction/Week 3/notes.txt: -------------------------------------------------------------------------------- 1 | ## Conceptual overview 2 | 3 | One difference will be that the full input shape when 4 | using RNNs is three-dimensional. 5 | 6 | The first dimension will be the batch size, the second 7 | will be the time steps and the third is the dimensionality 8 | of the inputs at each time step. 9 | 10 | For e.g., if it's a univariate time series, this value will be 1. 11 | For multivariate, it will be more. 12 | 13 | 14 | The location of a word in a sentence can determine its semantics, 15 | similarly for numeric series, things such as closer numbers in the series 16 | might have a greater impact than those further away from our target value. 17 | ------------------------------------------------------------------------------------- 18 | 19 | 20 | ## Shape of the inputs to the RNN 21 | 22 | If we have a window size of 30 timestamps and we're batching them 23 | in sizes of 4, the shape will be 4 x 30 x 1. 
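A quick sanity check on that shape (a sketch - assumes numpy is imported as np;
the 4 x 30 array simply stands in for 4 windows of 30 timesteps):

import numpy as np

batch = np.zeros((4, 30))            # 4 windows, 30 timesteps each (univariate)
rnn_input = batch[..., np.newaxis]   # add the series dimensionality
print(rnn_input.shape)               # (4, 30, 1)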
24 |
25 | At each timestep, the memory cell input will be a 4 x 1 matrix.
26 |
27 | The cell will also take the input of the state matrix from the previous step.
28 | For the first step, it will be 0.
29 | For the subsequent ones, it will be the output from the memory cell.
30 |
31 | Other than the state vector, the cell will of course output a Y value.
32 |
33 |
34 | If the memory cell comprises 3 neurons, then the output matrix
35 | will be 4 x 3 because the batch size coming in was 4 and the no. of
36 | neurons is 3.
37 |
38 | So the full output of the layer is 3-dimensional,
39 | in this case 4 x 30 x 3.
40 |
41 | 4 = batch size
42 | 3 = number of units
43 | 30 = no. of overall timesteps.
44 |
45 |
46 | In a simple RNN, the state matrix (H) is simply a copy of the output value (Y).
47 |
48 | So at each timestep, the memory cell gets both the current input
49 | and also the previous output.
50 |
51 |
52 | Now, in some cases, you might want to input a sequence
53 | but not output one - you just want to get a single
54 | vector for each instance in the batch.
55 |
56 | This is typically called ==> a sequence-to-vector RNN
57 |
58 | But in reality, all you do is ignore all of the outputs,
59 | except the last one.
60 |
61 | In Keras and TensorFlow, this is the default behavior,
62 | so if you want the recurrent layer to output a sequence,
63 | you have to specify return_sequences = True when creating the layer.
64 |
65 | You'll need to do this when stacking one RNN layer on top of another.
66 | ----------------------------------------------------------------------------------
67 |
68 |
69 | ## Outputting a sequence
70 |
71 | model = keras.models.Sequential([
72 |     keras.layers.SimpleRNN(20, return_sequences = True,
73 |                            input_shape = [None, 1]),
74 |     keras.layers.SimpleRNN(20),
75 |     keras.layers.Dense(1)
76 | ])
77 |
78 |
79 | TensorFlow assumes that the first dimension is the batch size, and that
80 | it can have any size at all, so you don't need to define it.
81 |
82 | The next dimension is the number of timesteps, which here is None,
83 | meaning that the RNN can handle sequences of any length.
84 |
85 | The last dimension is just 1 because we're using a univariate time series.
86 |
87 |
88 | model = keras.models.Sequential([
89 |     keras.layers.SimpleRNN(20, return_sequences = True,
90 |                            input_shape = [None, 1]),
91 |     keras.layers.SimpleRNN(20, return_sequences = True),
92 |     keras.layers.Dense(1)
93 | ])
94 |
95 |
96 | If we set return_sequences to True for all RNN layers, they will all
97 | output sequences and the dense layer will get a sequence as its input.
98 |
99 | Keras handles this by applying the same dense layer independently at each timestep.
100 |
101 | It might look like multiple ones here, but it's the same one that's being
102 | reused at each timestep.
103 |
104 | This gives us what is called ==> a sequence-to-sequence RNN.
105 |
106 | It's fed a batch of sequences and it returns a batch of
107 | sequences of the same length.
108 |
109 |
110 | The dimensionality may not always match.
111 |
112 | It depends on the no. of units in the memory cell.
113 |
114 |
115 | Let's return to the first model, where the 2nd recurrent layer didn't
116 | output a sequence.
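model.summary() makes the sequence-vs-vector distinction visible (a sketch -
assumes tensorflow is imported as tf; the shapes in the comments are what
Keras reports for this stack):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.SimpleRNN(20, return_sequences = True,
                              input_shape = [None, 1]),
    tf.keras.layers.SimpleRNN(20),   # no return_sequences ==> vector output
    tf.keras.layers.Dense(1)
])

model.summary()
# simple_rnn    output shape (None, None, 20)  <- a sequence
# simple_rnn_1  output shape (None, 20)        <- only the last output
# dense         output shape (None, 1)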
117 | --------------------------------------------------------------------------------------
118 |
119 |
120 | ## Lambda layers
121 |
122 | A Lambda layer is a type of layer that allows us to perform arbitrary operations,
123 | effectively expanding the functionality of TensorFlow's Keras, and we can do this
124 | within the model definition itself.
125 |
126 |
127 | model = keras.models.Sequential([
128 |     keras.layers.Lambda(lambda x: tf.expand_dims(x, axis = -1),
129 |                         input_shape = [None]),
130 |     keras.layers.SimpleRNN(20, return_sequences = True),
131 |     keras.layers.SimpleRNN(20),
132 |     keras.layers.Dense(1),
133 |     keras.layers.Lambda(lambda x: x * 100.0)
134 | ])
135 |
136 |
137 | The first Lambda layer is used to help us with our dimensionality.
138 |
139 | When we wrote the windowed_dataset helper function, it returned
140 | 2-dimensional batches of windows on the data -
141 | the first dimension being the batch_size, and
142 | the second the number of timesteps.
143 |
144 | But an RNN expects 3 dimensions - batch_size, no. of timesteps
145 | and the series dimensionality.
146 |
147 | With the Lambda layer, we can fix this without rewriting the
148 | windowed_dataset helper function.
149 |
150 | Using the Lambda, we just expand the array by one dimension.
151 |
152 |
153 | By setting input_shape = [None], we're saying that the model can take
154 | sequences of any length.
155 |
156 | Similarly, scaling up the outputs by 100 can help training.
157 |
158 | The default activation function in RNN layers is "tanh".
159 | This outputs values b/w -1 and 1.
160 |
161 |
162 | Since the time series values are on that order - usually in the tens,
163 | like 40s, 50s, etc. - scaling the outputs up to the same ballpark
164 | can help us with learning.
165 |
166 | We can do that with a Lambda layer too - we simply multiply
167 | by 100.
168 | -----------------------------------------------------------------------------
169 |
170 |
171 | ## Adjusting the learning rate dynamically
172 |
173 | train_set = windowed_dataset(x_train, window_size, batch_size = 128,
174 |                              shuffle_buffer = shuffle_buffer_size)
175 |
176 | model = tf.keras.models.Sequential([
177 |     tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis = -1),
178 |                            input_shape = [None]),
179 |     tf.keras.layers.SimpleRNN(40, return_sequences = True),
180 |     tf.keras.layers.SimpleRNN(40),
181 |     tf.keras.layers.Dense(1),
182 |     tf.keras.layers.Lambda(lambda x: x * 100.0)
183 | ])
184 |
185 | lr_schedule = tf.keras.callbacks.LearningRateScheduler(
186 |     lambda epoch: 1e-8 * 10 ** (epoch / 20)
187 | )
188 |
189 | optimizer = tf.keras.optimizers.SGD(lr = 1e-8, momentum = 0.9)
190 |
191 | model.compile(loss = tf.keras.losses.Huber(),
192 |               optimizer = optimizer,
193 |               metrics = ["mae"])
194 |
195 | history = model.fit(train_set, epochs = 100,
196 |                     callbacks = [lr_schedule])
197 |
198 |
199 | The Huber function is a loss function that's less sensitive to outliers,
200 | and as this data can get a little bit noisy, it's worth giving it a shot.
201 |
202 | https://en.wikipedia.org/wiki/Huber_loss
203 |
204 |
205 | After training, we find that the optimal learning rate is somewhere between 10**(-5)
206 | and 10**(-6).
207 | So we set it to 5*10**(-5).
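That optimum is read off a plot of loss against learning rate from the
scheduled run (a sketch matching the plotting code used in the course
notebooks - assumes matplotlib.pyplot is imported as plt and history comes
from the fit above):

import matplotlib.pyplot as plt

plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])   # zoom in on the region of interest
plt.xlabel("learning rate")
plt.ylabel("loss")
plt.show()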
208 |
209 |
210 | tf.keras.backend.clear_session()
211 | tf.random.set_seed(51)
212 | np.random.seed(51)
213 |
214 | dataset = windowed_dataset(x_train, window_size, batch_size = 128,
215 |                            shuffle_buffer = shuffle_buffer_size)
216 |
217 | model = tf.keras.models.Sequential([
218 |     tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis = -1),
219 |                            input_shape = [None]),
220 |     tf.keras.layers.SimpleRNN(40, return_sequences = True),
221 |     tf.keras.layers.SimpleRNN(40),
222 |     tf.keras.layers.Dense(1),
223 |     tf.keras.layers.Lambda(lambda x: x * 100.0)
224 | ])
225 |
226 | optimizer = tf.keras.optimizers.SGD(lr = 5e-5, momentum = 0.9)
227 |
228 | model.compile(loss = tf.keras.losses.Huber(),
229 |               optimizer = optimizer,
230 |               metrics = ["mae"])
231 |
232 | history = model.fit(dataset, epochs = 500)
233 |
234 |
235 | After training for 500 epochs, we get an MAE on the validation set of 6.3532376.
236 |
237 |
238 | When we look at the charts, we see that after 400 epochs, training became unstable.
239 | Given this, it's probably worth only training for about 400 epochs.
240 |
241 | When we do that, the results are pretty much the same, with the MAE a little bit higher,
242 | at 6.4141674, but we've saved 100 epochs' worth of training.
243 | -------------------------------------------------------------------------------------------------
244 |
245 |
246 | ## LSTM
247 |
248 | [In RNNs] While state is a factor in subsequent calculations, its impact
249 | can diminish greatly over timesteps.
250 |
251 | LSTMs add a cell state to this that keeps state throughout the life of the
252 | training, so that the state is passed from cell to cell, timestep to
253 | timestep, and can be better maintained.
254 |
255 | This means that data from earlier in the window can have a greater impact on the overall
256 | projection than in the case of RNNs.
257 |
258 | The state can also be bidirectional, so that state can move forwards and backwards.
259 |
260 |
261 | Refer https://www.coursera.org/lecture/nlp-sequence-models/long-short-term-memory-lstm-KXoay
262 | --------------------------------------------------------------------------------------------------------
263 |
264 |
265 | ## Coding LSTMs
266 |
267 | tf.keras.backend.clear_session()
268 |
269 | dataset = windowed_dataset(x_train, window_size, batch_size,
270 |                            shuffle_buffer_size)
271 |
272 | model = tf.keras.models.Sequential([
273 |     tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis = -1),
274 |                            input_shape = [None]),
275 |     tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
276 |     tf.keras.layers.Dense(1),
277 |     tf.keras.layers.Lambda(lambda x: x * 100.0)
278 | ])
279 |
280 | model.compile(loss = "mse",
281 |               optimizer = tf.keras.optimizers.SGD(lr = 1e-6, momentum = 0.9))
282 |
283 | model.fit(dataset, epochs = 100,
284 |           verbose = 0)
285 |
286 |
287 | clear_session() clears any internal variables.
288 |
289 | That makes it easier to experiment w/o models impacting
290 | later versions of themselves.
291 |
292 |
293 | The result?
294 | The plateau under the big spike is still there, and
295 | MAE = 6.131234
296 |
297 | Let's add another LSTM and see the impact.
298 |
299 |
300 | tf.keras.backend.clear_session()
301 |
302 | dataset = windowed_dataset(x_train, window_size, batch_size,
303 |                            shuffle_buffer_size)
304 |
305 | model = tf.keras.models.Sequential([
306 |     tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis = -1),
307 |                            input_shape = [None]),
308 |     tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences = True)),
309 |     tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
310 |     tf.keras.layers.Dense(1),
311 |     tf.keras.layers.Lambda(lambda x: x * 100.0)
312 | ])
313 |
314 | model.compile(loss = "mse",
315 |               optimizer = tf.keras.optimizers.SGD(lr = 1e-6, momentum = 0.9))
316 |
317 | model.fit(dataset, epochs = 100,
318 |           verbose = 0)
319 |
320 |
321 | The result?
322 | Now it's tracking much better and closer to the original data.
323 | Maybe not keeping up with the sharp increase, but at least it's tracking close.
324 |
325 | MAE = 5.2872233 ==> we're heading in the right direction
326 |
327 | Let's add another LSTM layer.
328 |
329 |
330 | tf.keras.backend.clear_session()
331 |
332 | dataset = windowed_dataset(x_train, window_size, batch_size,
333 |                            shuffle_buffer_size)
334 |
335 | model = tf.keras.models.Sequential([
336 |     tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis = -1),
337 |                            input_shape = [None]),
338 |     tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences = True)),
339 |     tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences = True)),
340 |     tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
341 |     tf.keras.layers.Dense(1),
342 |     tf.keras.layers.Lambda(lambda x: x * 100.0)
343 | ])
344 |
345 | model.compile(loss = "mse",
346 |               optimizer = tf.keras.optimizers.SGD(lr = 1e-6, momentum = 0.9))
347 |
348 | model.fit(dataset, epochs = 100,
349 |           verbose = 0)
350 |
351 |
352 | The result?
353 | There's really not that much of a difference, and our MAE has actually gotten slightly worse.
354 | MAE = 5.53239 (vs 5.2872233 with two LSTM layers)
355 | ---------------------------------------------------------------------------------------------
356 |
357 |
358 | --------------------------------------------------------------------------------