├── 1_Intro_to_neural_networks_with_tensorflow ├── 1_intro_to_neural_networks.ipynb ├── 2_intro_to_tensorflow_for_deeplearning.ipynb ├── 3_neural_networks_for_regresion_with_tensorflow.ipynb └── 4_neural_networks_for_classification_with_tensorflow.ipynb ├── 2_deep_computer_vision_with_tensorflow ├── 1_intro_to_computer_vision_and_cnn.ipynb ├── 2_cnn_for_real_world_data_and_image_augmentation.ipynb └── 3_cnn_architectures_and_transfer_learning.ipynb ├── 3_nlp_with_tensorflow ├── 1_intro_to_nlp_and_text_preprocessing.ipynb ├── 2_using_word_embeddings_to_represent_texts.ipynb ├── 3_recurrent_neural_networks.ipynb ├── 4_using_cnns_and_rnns_for_texts_classification.ipynb └── 5_using_pretrained_bert_for_text_classification.ipynb ├── Readme.md └── images └── tf_cover_image.png /1_Intro_to_neural_networks_with_tensorflow/1_intro_to_neural_networks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "7feb4f40", 6 | "metadata": {}, 7 | "source": [ 8 | "\n", 9 | " \n", 12 | "
\n", 10 | " \"Open\n", 11 | "
" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "c1ffa6b2", 18 | "metadata": {}, 19 | "source": [ 20 | "*This notebook was created by [Jean de Dieu Nyandwi](https://twitter.com/jeande_d) for the love of machine learning community. For any feedback, errors or suggestion, he can be reached on email (johnjw7084 at gmail dot com), [Twitter](https://twitter.com/jeande_d), or [LinkedIn](https://linkedin.com/in/nyandwi).*" 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "id": "304ad719", 26 | "metadata": { 27 | "id": "304ad719" 28 | }, 29 | "source": [ 30 | "\n", 31 | "# Intro to Artificial Neural Networks and Deep Learning" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "id": "d281a995", 37 | "metadata": { 38 | "id": "d281a995" 39 | }, 40 | "source": [ 41 | "Deep Learning is the study of Artificial Neural Networks(ANNs). It is a subset of Machine Learning (ML), ML being a subset of Artificial Intelligence. \n", 42 | "\n", 43 | "Inspired by the human brain's biological neuron, ANNs has become the building block of modern intelligent systems, from our smartphones that can accurately identify our faces and voices, streaming services (like YouTube and Netflix) that know exactly what we want to watch, cars that can see the visual world and drive themselves, and not to mention AlphaGo that won world chess competition. \n", 44 | "\n", 45 | "\n", 46 | "This notebook will introduce a high level overview of Deep Learning. " 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "id": "b6a095ea", 52 | "metadata": { 53 | "id": "b6a095ea" 54 | }, 55 | "source": [ 56 | "## Contents \n", 57 | "\n", 58 | "* [1. Why Deep Learning](#1)\n", 59 | "* [2. A Single Layer Neural Network](#2)\n", 60 | "* [3. Activation Functions](#3)\n", 61 | "* [4. Types of Deep Learning Architectures](#4)\n", 62 | " * [4.1 Densely Connected Networks](#4-1)\n", 63 | " * [4.2 Convolutional Neural Networks](#4-2)\n", 64 | " * [4.3 Recurrent Neural Networks](#4-3)\n", 65 | " * [4.4 Transformers](#4-4)\n", 66 | "* [5. Challenges in Training Deep Neural Networks](#5)" 67 | ] 68 | }, 69 | { 70 | "cell_type": "markdown", 71 | "id": "4e73d1ca", 72 | "metadata": { 73 | "id": "4e73d1ca" 74 | }, 75 | "source": [ 76 | "\n", 77 | "\n", 78 | "## 1. Why Deep Learning" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "id": "be7730a3", 84 | "metadata": { 85 | "id": "be7730a3" 86 | }, 87 | "source": [ 88 | "Deep Learning is used to extract the patterns in (a huge amount of) data. Its potential has been seen in complex problems such as vision oriented and natural language. \n", 89 | "\n", 90 | "Let's take an example to develop more intuitive understanding of why deep learning has became so popular. Let's say that as an Engineer, you are asked by your city state to build a program that can recognize car and truck moving in a given road. The program can later be deployed in a less loads-road to warn trucks that pass there that the road was not designed for these types of vehicles. \n", 91 | "\n", 92 | "With normal programming, you would have to provide and to write all instructions (codes) of that particular program. The chance (close to reality) is that it would be hard, mostly because there are different types of cars and trucks and each can be unique in its ways, which can make it hard to write the rules forming such program. \n", 93 | "\n", 94 | "How can deep learning solve that? 
140 | { 141 | "cell_type": "markdown", 142 | "id": "a687f494", 143 | "metadata": { 144 | "id": "a687f494" 145 | }, 146 | "source": [ 147 | "\n", 148 | "\n", 149 | "## 3. Common Activation Functions" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "id": "d6dc2881", 155 | "metadata": { 156 | "id": "d6dc2881" 157 | }, 158 | "source": [
159 | "Activation functions are used to introduce nonlinearities into the network. I like to say that they dictate the output format of a given layer. \n",
160 | "\n",
161 | "Simply put, these mathematical functions decide the form a layer's output takes. In later notebooks, this will certainly make sense. \n",
162 | "\n",
163 | "Here are the 3 most used activation functions:\n",
164 | "\n",
165 | "* **ReLU (Rectified Linear Unit)**: This is mostly used in hidden layers and is a default activation function that will work well in most problems. ReLU always gives non-negative outputs: if the input number is greater than 0, the output will be that number; if it is less than 0, the output will be 0. The whole purpose of this type of activation function is to penalize the negative numbers. ReLU has variants such as SELU and LeakyReLU. ReLU is sometimes also used in the output layer when the problem is a regression type, but this is not a requirement. \n",
166 | "\n",
167 | "\n",
168 | "* **Sigmoid**: A sigmoid function gives a probabilistic output (a number between 0 and 1). It can be used in the output layer of a binary classification problem. \n",
169 | "\n",
170 | "\n",
171 | "* **Softmax**: This type of function is also used in output classification layers, but instead of a single probability, it outputs a probability distribution over all the classes, from which the most probable class is taken as the prediction. This will make sense in later notebooks. " 172 | ] 173 | },
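{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick numeric illustration (a minimal sketch; the input values are arbitrary), here is how these three functions behave on a small tensor:\n",
"\n",
"```\n",
"import tensorflow as tf\n",
"\n",
"x = tf.constant([-2.0, 0.0, 3.0])\n",
"\n",
"print(tf.nn.relu(x).numpy())     # [0. 0. 3.]: negatives become 0\n",
"print(tf.nn.sigmoid(x).numpy())  # each value squashed between 0 and 1\n",
"print(tf.nn.softmax(x).numpy())  # a probability distribution summing to 1\n",
"```"
]
},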
174 | { 175 | "cell_type": "markdown", 176 | "id": "df16a232", 177 | "metadata": { 178 | "id": "df16a232" 179 | }, 180 | "source": [ 181 | "\n", 182 | "\n", 183 | "## 4. Types of Deep Learning Architectures" 184 | ] 185 | }, 186 | { 187 | "cell_type": "markdown", 188 | "id": "sKCDRk-PWEyK", 189 | "metadata": { 190 | "id": "sKCDRk-PWEyK" 191 | }, 192 | "source": [ 193 | "There are 4 main types of deep learning architectures. This is going to be a high level overview, since they will be covered in detail in the next notebooks." 194 | ] 195 | }, 196 | { 197 | "cell_type": "markdown", 198 | "id": "XqAdxSlWWe9m", 199 | "metadata": { 200 | "id": "XqAdxSlWWe9m" 201 | }, 202 | "source": [ 203 | "\n", 204 | "\n", 205 | "### 4.1 Densely Connected Networks" 206 | ] 207 | }, 208 | { 209 | "cell_type": "markdown", 210 | "id": "WziQjYJWWJcT", 211 | "metadata": { 212 | "id": "WziQjYJWWJcT" 213 | }, 214 | "source": [ 215 | "\n", 216 | "\n", 217 | "\n",
218 | "Densely connected networks are made of stacks of layers from the input to the output. \n",
219 | "\n",
220 | "The units (or neurons) of any layer in that type of network are connected to all the units of the next layer. This is why they are also called fully connected layers. \n",
221 | "\n",
222 | "Densely connected layers are generally used for tabular data. Tabular data is data organized in a table-like format of rows and columns. An example of tabular data is customer records: you have a column of names, roles, date joined, etc...\n",
223 | "\n",
224 | "\n",
225 | "![Densely connected.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631359484854/ioMW08ea2.png)\n",
226 | "*Image: Densely connected networks*\n",
227 | "\n",
228 | "Dense layers are also used (in combination with other architectures) as the last layer in either classification or regression problems. The right number of units in the last layer depends on the problem. Take an example: if you are classifying news into 4 categories, then the last layer will have 4 units. If you are predicting the price of a house given its features, the last layer will have 1 unit. \n",
229 | "\n",
230 | "\n",
231 | "Dense networks are mostly used for simple tasks. When it comes to complex things like image recognition or text processing, they fail because they can't capture spatial features (in images) or learn sequences (in text), for instance. But above all, the deeper they become, the more parameters they tend to have, which can result in a big network and complex computations. " 232 | ] 233 | },
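{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, here is a minimal sketch of a densely connected network for a hypothetical tabular problem; the 10 input features and 4 output classes are made-up assumptions:\n",
"\n",
"```\n",
"from tensorflow import keras\n",
"\n",
"model = keras.models.Sequential([\n",
"    keras.layers.Dense(32, activation='relu', input_shape=(10,)),  # 10 tabular features\n",
"    keras.layers.Dense(16, activation='relu'),\n",
"    keras.layers.Dense(4, activation='softmax')  # 4 output classes\n",
"])\n",
"```"
]
},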
234 | { 235 | "cell_type": "markdown", 236 | "id": "GwF8_b5wWiye", 237 | "metadata": { 238 | "id": "GwF8_b5wWiye" 239 | }, 240 | "source": [ 241 | "\n", 242 | "\n", 243 | "### 4.2 Convolutional Neural Networks" 244 | ] 245 | }, 246 | { 247 | "cell_type": "markdown", 248 | "id": "3Eo0LrLsWP7B", 249 | "metadata": { 250 | "id": "3Eo0LrLsWP7B" 251 | }, 252 | "source": [
253 | "CNNs, a.k.a. Convnets, are widely known as the go-to neural network architecture for computer vision tasks like image recognition and object detection, but they can also be used for other tasks such as text and time series processing. \n",
254 | "\n",
255 | "Convnets are typically made of convolutional layers, pooling layers, and fully connected layers at the end. Convolutional layers are used to extract the spatial features in images, pooling layers are used to compress the resulting feature maps from the convolutional layers, and the fully connected layers are there for classification purposes. \n",
256 | "\n",
257 | "![on-IOL-iG.png-2.webp](https://cdn.hashnode.com/res/hashnode/image/upload/v1630788206761/KF4gj3zPD.webp)\n",
258 | "*Image: Convnets architecture*\n",
259 | "\n",
260 | "Convnets come in 3 dimensionalities. The most popular one is Conv2D, which is used on images and on videos divided into frames. Conv1D is used on sequential data such as texts, time series, and sounds. A popular sound architecture called [WaveNet](https://deepmind.com/blog/article/wavenet-generative-model-raw-audio) is made of 10 stacked 1D Convnets.\n",
261 | "\n",
262 | "Conv3D is used on videos and volumetric images such as CT scans." 263 | ] 264 | },
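{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch of a typical small convnet in Keras, following the convolution -> pooling -> fully connected pattern just described. The image size, filter count, and class count are illustrative assumptions:\n",
"\n",
"```\n",
"from tensorflow import keras\n",
"\n",
"model = keras.models.Sequential([\n",
"    # extract spatial features from 28x28 grayscale images\n",
"    keras.layers.Conv2D(16, 3, activation='relu', input_shape=(28, 28, 1)),\n",
"    # compress the resulting feature maps\n",
"    keras.layers.MaxPooling2D(2),\n",
"    keras.layers.Flatten(),\n",
"    # fully connected classification head, e.g. 10 classes\n",
"    keras.layers.Dense(10, activation='softmax')\n",
"])\n",
"```"
]
},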
265 | { 266 | "cell_type": "markdown", 267 | "id": "w5-uILxvWuly", 268 | "metadata": { 269 | "id": "w5-uILxvWuly" 270 | }, 271 | "source": [ 272 | "\n", 273 | "\n", 274 | "### 4.3 Recurrent Neural Networks (RNNs)" 275 | ] 276 | }, 277 | { 278 | "cell_type": "markdown", 279 | "id": "73xeoLxvWVf3", 280 | "metadata": { 281 | "id": "73xeoLxvWVf3" 282 | }, 283 | "source": [ 284 | "\n",
285 | "A standard feedforward network (or densely connected network) simply maps inputs to outputs. RNNs go beyond that: they maintain a recurrent state that is carried over from one time step to the next. \n",
286 | "\n",
287 | "\n",
288 | "![TF RNN.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630789332991/7kzTaiwxp.png)\n",
289 | "*Image: Feed forward network vs recurrent neural networks*\n",
290 | "\n",
291 | "Due to their ability to preserve information across time steps, RNNs are commonly used on sequential data such as texts and time series. \n",
292 | "\n",
293 | "The basic RNN cells are not efficient at handling long sequences due to a short-term memory problem. They also suffer from [vanishing gradients](https://www.youtube.com/watch?v=qhXZsFVxGKo). \n",
294 | "\n",
295 | "The variant of RNNs that is able to handle long sequences is called Long Short Term Memory (LSTM). LSTMs also have the ability to handle sequences of variable lengths. \n",
296 | "\n",
297 | "A special design difference of the LSTM cell is its gates, which are what allow it to control the flow of information over many time steps. \n",
298 | "\n",
299 | "In short, an LSTM uses gates to control the flow of information from the current time step to the next time step in the following 4 ways: \n",
300 | "\n",
301 | " * The input gate decides which parts of the input sequence are let in.\n",
302 | " * The forget gate gets rid of the irrelevant information contained in the input sequence and stores the relevant information in long term memory.\n",
303 | " * The LSTM cell updates its cell state values.\n",
304 | " * The output gate controls the information that has to be sent to the next time step. \n",
305 | "\n",
306 | "![Screen Shot 2021-09-01 at 09.22.56.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630789390006/avzF2LKrK.png)\n",
307 | "\n",
308 | "*Image: LSTM architecture, [Intro to Deep Learning MIT](https://www.youtube.com/watch?v=BUNl0To1IVw&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=5)*\n",
309 | "\n",
310 | "The ability of LSTMs to handle long-term sequences makes them a suitable neural network architecture for various sequential tasks such as text classification, sentiment analysis, speech recognition, image caption generation, and machine translation. \n",
311 | "\n",
312 | "Another recurrent neural network that you will see is the [Gated Recurrent Unit (GRU)](https://arxiv.org/pdf/1412.3555.pdf). GRU is a simplified version of the LSTM, and it's cheaper to train. " 313 | ] 314 | },
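{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here is a minimal sketch of running an LSTM layer over a batch of dummy sequences; the batch size, sequence length, feature count, and unit count are arbitrary assumptions:\n",
"\n",
"```\n",
"import tensorflow as tf\n",
"\n",
"# 2 sequences, 5 time steps, 3 features per step\n",
"sequences = tf.random.normal([2, 5, 3])\n",
"\n",
"lstm = tf.keras.layers.LSTM(8)  # an LSTM layer with 8 units\n",
"output = lstm(sequences)\n",
"\n",
"print(output.shape)  # (2, 8): one 8-dimensional vector per sequence\n",
"```"
]
},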
\n" 313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "id": "JQzFWfYRW5hj", 318 | "metadata": { 319 | "id": "JQzFWfYRW5hj" 320 | }, 321 | "source": [ 322 | "\n", 323 | "\n", 324 | "### 4.4 Transformers" 325 | ] 326 | }, 327 | { 328 | "cell_type": "markdown", 329 | "id": "94719993", 330 | "metadata": { 331 | "id": "94719993" 332 | }, 333 | "source": [ 334 | "\n", 335 | "Although recurrent neural networks are still used for sequential modeling, they have short-term memory problems when used for long sequences, and they are computationally expensive. The RNN's inability to handle long sequences and expensiveness are the two most motivations of transformers. \n", 336 | "\n", 337 | "Transformers are one of the latest groundbreaking researches in the natural language community. They are sorely based on the attention mechanisms that learn the relationships between words of the sentence and pays attention to the relevant words. \n", 338 | "\n", 339 | "One of the most notable things about transformers is that they don't use any recurrent or convolutional layers. It's just only attention mechanisms and other standard layers like embedding layer, dense layer, and normalization layers. \n", 340 | "\n", 341 | "They are commonly used in language tasks such as text classification, question answering, and machine translation. \n", 342 | "\n", 343 | "There have been [researches](https://arxiv.org/pdf/2101.01169.pdf) that show that they can also be used for computer vision tasks, such as image classification, object detection, image segmentation, and image captioning with [visual attention](https://arxiv.org/pdf/1502.03044.pdf). \n", 344 | "\n", 345 | "To learn more about the transformer, check out its awesome [paper](https://arxiv.org/pdf/1706.03762.pdf). \n", 346 | "\n", 347 | "\n", 348 | "![Screen Shot 2021-09-04 at 23.16.48.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630790223290/99Ltf-fAy.png)\n", 349 | "\n", 350 | "*Image: Transformers architecture, [Attention Is All You Need](https://arxiv.org/pdf/1706.03762.pdf)*\n", 351 | "\n", 352 | "\n" 353 | ] 354 | }, 355 | { 356 | "cell_type": "markdown", 357 | "id": "cd4c0136", 358 | "metadata": { 359 | "id": "cd4c0136" 360 | }, 361 | "source": [ 362 | "Below is a high level summary of the common neural networks architectures. \n", 363 | "\n", 364 | "![Deep Learning Architectures.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631774267019/jzEyGcRBWM.png)" 365 | ] 366 | }, 367 | { 368 | "cell_type": "markdown", 369 | "id": "6db21482", 370 | "metadata": { 371 | "id": "6db21482" 372 | }, 373 | "source": [ 374 | "\n", 375 | "\n", 376 | "## 5. Challenges in Training Neural Networks" 377 | ] 378 | }, 379 | { 380 | "cell_type": "markdown", 381 | "id": "2c57ae94", 382 | "metadata": { 383 | "id": "2c57ae94" 384 | }, 385 | "source": [ 386 | "Training neural networks is a big challenge and this still an area of research. For example, how do you choose a right architecture? Although the type of data can tell the most architecture that should come into picture at first, it is not a trivial question. Take an example, we know CNN as image first algorithm, bit it has been used in some natural language processing and time series tasks and it can perform well. \n", 387 | "\n", 388 | "Also, deep learning models have many hyperparameters. Hyperparameters include things like number of layers, number of neurons, learning rate, activation functions, and optimizers (optimizers are used to reduce loss during training). 
\n", 389 | "\n", 390 | "All these choises to be make it hard to train neural networks. Although there has been some techniques that can reduce number of choces to be made, [such as transfer learning which motivates reusing the pretrained models](https://jeande.tech/transfer-learning-explained), training this type of models is not an easy thing. \n", 391 | "\n", 392 | "If you would like to learn more about training neural networks, here is a great (but complex) material about it: [Why are deep neural networks hard to train?](http://neuralnetworksanddeeplearning.com/chap5.html). " 393 | ] 394 | }, 395 | { 396 | "cell_type": "markdown", 397 | "id": "8f9156ea", 398 | "metadata": { 399 | "id": "8f9156ea" 400 | }, 401 | "source": [ 402 | "#### Also, before we go into hands-on deep learning in the next notebooks, I would like to invite you to play this game. Tinker with it and try to see how things like activation functions, input data, learning rate,...affects the neural network training. [***Here you Go!***](https://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle®Dataset=reg-plane&learningRate=0.03®ularizationRate=0&noise=0&networkShape=4,2&seed=0.93635&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false). " 403 | ] 404 | }, 405 | { 406 | "cell_type": "markdown", 407 | "id": "475dc220", 408 | "metadata": { 409 | "id": "475dc220" 410 | }, 411 | "source": [ 412 | "### [BACK TO TOP](#0)" 413 | ] 414 | }, 415 | { 416 | "cell_type": "code", 417 | "execution_count": null, 418 | "id": "368cdfd5", 419 | "metadata": { 420 | "id": "368cdfd5" 421 | }, 422 | "outputs": [], 423 | "source": [] 424 | } 425 | ], 426 | "metadata": { 427 | "colab": { 428 | "collapsed_sections": [], 429 | "name": "7.1 Intro to Artificial Neural Networks.ipynb", 430 | "provenance": [] 431 | }, 432 | "kernelspec": { 433 | "display_name": "Python 3.7.10 64-bit ('tensor': conda)", 434 | "language": "python", 435 | "name": "python3710jvsc74a57bd034ac5db714c5906ee087fcf6e2d00ee4febf096586592b6ba3662ed3b7e7a5f6" 436 | }, 437 | "language_info": { 438 | "codemirror_mode": { 439 | "name": "ipython", 440 | "version": 3 441 | }, 442 | "file_extension": ".py", 443 | "mimetype": "text/x-python", 444 | "name": "python", 445 | "nbconvert_exporter": "python", 446 | "pygments_lexer": "ipython3", 447 | "version": "3.7.10" 448 | } 449 | }, 450 | "nbformat": 4, 451 | "nbformat_minor": 5 452 | } 453 | -------------------------------------------------------------------------------- /1_Intro_to_neural_networks_with_tensorflow/2_intro_to_tensorflow_for_deeplearning.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "ace7a54e", 6 | "metadata": {}, 7 | "source": [ 8 | "\n", 9 | " \n", 12 | "
\n", 10 | " \"Open\n", 11 | "
" 13 | ] 14 | }, 15 | { 16 | "cell_type": "markdown", 17 | "id": "5a9f9f77", 18 | "metadata": {}, 19 | "source": [ 20 | "*This notebook was created by [Jean de Dieu Nyandwi](https://twitter.com/jeande_d) for the love of machine learning community. For any feedback, errors or suggestion, he can be reached on email (johnjw7084 at gmail dot com), [Twitter](https://twitter.com/jeande_d), or [LinkedIn](https://linkedin.com/in/nyandwi).*" 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "id": "304ad719", 26 | "metadata": { 27 | "id": "304ad719" 28 | }, 29 | "source": [ 30 | "\n", 31 | "# Intro to TensorFlow for Deep Learning" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "id": "b6a095ea", 37 | "metadata": { 38 | "id": "b6a095ea" 39 | }, 40 | "source": [ 41 | "## Contents \n", 42 | "\n", 43 | "* [1. What is TensorFlow?](#1)\n", 44 | "* [2. TensorFlow Model APIs](#2)\n", 45 | " * [2.1 Sequential API](#2-1)\n", 46 | " * [2.2 Functional API](#2-2)\n", 47 | " * [2.3 Model SubClassing API](#2-3)\n", 48 | "\n", 49 | "* [3. A Quick Tour into TensorFlow Ecosystem](#3)\n", 50 | " * [3.1 Tools](#3-1)\n", 51 | " * [3.2 Models and Datasets](#3-2)\n", 52 | " * [3.3 Model Deployment](#3-3)\n", 53 | "\n", 54 | "* [4. Basics Tensors](#4)\n", 55 | " * [4.1 Intro to Tensors](#4-1)\n", 56 | " * [4.2 Creating a Tensor with tf.constant()](#4-2)\n", 57 | " * [4.3 Creating a Tensor with tf.variable()](#4-3)\n", 58 | " * [4.4 Creating A Tensor from Existing Functions](#4-4)\n", 59 | " * [4.5 Selecting Data in a Tensor](#4-5)\n", 60 | " * [4.6 Performing operations on Tensors](#4-6)\n", 61 | " * [4.7 Manipulating a Tensor Shape](#4-7)" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "id": "4e73d1ca", 67 | "metadata": { 68 | "id": "4e73d1ca" 69 | }, 70 | "source": [ 71 | "\n", 72 | "\n", 73 | "## 1. What is TensorFlow" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "id": "9c53b146", 79 | "metadata": { 80 | "id": "9c53b146" 81 | }, 82 | "source": [ 83 | "TensorFlow is an open source and an end to end platform used for building machine learning models. Being end to end, you can prepare data, build models, diagnose, improve, and deploy them. \n", 84 | "\n", 85 | "TensorFlow uses [Keras](https://keras.io) at its backend. Keras is a well beautifully designed API for building deep learning models in popular fields such as Computer Vision and Natural Language Processing. \n", 86 | "\n", 87 | "TensorFlow has got a strong community, from users, learning resources and whole range of technical supports. Not only it powers majority of Google apps such as YouTube, Maps and Google Photos, it is also widely used across startups and other big techs. If you would like to know who is using TensorFlow, [here you go!](https://www.tensorflow.org/about/case-studies). " 88 | ] 89 | }, 90 | { 91 | "cell_type": "markdown", 92 | "id": "813fd464", 93 | "metadata": { 94 | "id": "813fd464" 95 | }, 96 | "source": [ 97 | "\n", 98 | "\n", 99 | "## 2. TensorFlow Model APIs" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "id": "d61aecff", 105 | "metadata": { 106 | "id": "d61aecff" 107 | }, 108 | "source": [ 109 | "TensorFlow being suited for variety of tasks, there are 3 ways to build deep learning models." 110 | ] 111 | }, 112 | { 113 | "cell_type": "markdown", 114 | "id": "QJHHkIq5q8ru", 115 | "metadata": { 116 | "id": "QJHHkIq5q8ru" 117 | }, 118 | "source": [ 119 | " \n", 120 | " ### 2.1 Sequential API\n", 121 | "\n", 122 | "This is the simplest model building option. 
145 | { 146 | "cell_type": "markdown", 147 | "id": "GXVRQrbaqx57", 148 | "metadata": { 149 | "id": "GXVRQrbaqx57" 150 | }, 151 | "source": [ 152 | "\n",
153 | "### 2.2 Functional API\n",
154 | "\n",
155 | "This type of API makes it easy to build models that can take multiple inputs/outputs, or skip connections. \n",
156 | "\n",
157 | "It is well suited to advanced things like object detection and segmentation. In object detection, there are two main tasks involved. \n",
158 | "\n",
159 | "One is recognizing the object (classification) and the other is localizing the object (regression: predicting the bounding box coordinates). " 160 | ] 161 | }, 162 | { 163 | "cell_type": "markdown", 164 | "id": "3f996b11", 165 | "metadata": { 166 | "id": "3f996b11" 167 | }, 168 | "source": [ 169 | "\n", 170 | "![kites_detections_output.jpg](https://cdn.hashnode.com/res/hashnode/image/upload/v1616839653654/-NiIL9N3v.jpeg)\n", 171 | "*You can't build an object detection model with the Sequential API, but you can with the Functional API! [Source](https://github.com/tensorflow/models/tree/master/research/object_detection).*" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "id": "e158c578", 177 | "metadata": {}, 178 | "source": [
179 | "Below is what the same model looks like in the Functional API.\n",
180 | "\n",
181 | "```\n",
182 | "# Building the same model with the Functional API\n",
183 | "\n",
184 | "from tensorflow import keras\n",
185 | "\n",
186 | "inputs = keras.Input(shape=(8,))  # keras.Input needs a shape; 8 features is an example value\n",
187 | "x = keras.layers.Dense(16, activation='relu')(inputs)\n",
188 | "x = keras.layers.Dense(32, activation='relu')(x)\n",
189 | "output = keras.layers.Dense(1, activation='sigmoid')(x)\n",
190 | " \n",
191 | "model = keras.Model(inputs, output)\n",
192 | "```" 193 | ] 194 | },
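{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the selling point of the Functional API is non-linear topologies, here is a minimal sketch of a skip connection, something the Sequential API cannot express (the shapes are arbitrary assumptions):\n",
"\n",
"```\n",
"from tensorflow import keras\n",
"\n",
"inputs = keras.Input(shape=(16,))\n",
"x = keras.layers.Dense(16, activation='relu')(inputs)\n",
"x = keras.layers.Add()([x, inputs])  # skip connection: add the input back in\n",
"output = keras.layers.Dense(1, activation='sigmoid')(x)\n",
"\n",
"model = keras.Model(inputs, output)\n",
"```"
]
},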
\n", 209 | "\n", 210 | "Below is how a subclassing model looks like.\n", 211 | "\n", 212 | "```\n", 213 | "# Building a same custom model \n", 214 | "\n", 215 | "from tensorflow import keras\n", 216 | "\n", 217 | "class MLP(keras.Model):\n", 218 | "\n", 219 | " def __init__(self, **kwargs):\n", 220 | " super(MLP, self).__init__(**kwargs)\n", 221 | " self.dense_1 = keras.layers.Dense(16, activation='relu')\n", 222 | " self.dense_2 = keras.layers.Dense(32, activation='relu')\n", 223 | " self.dense_3 = keras.layers.Dense(1, activation='sigmoid')\n", 224 | "\n", 225 | " def call(self, inputs):\n", 226 | " x = self.dense_1(inputs)\n", 227 | " x = self_dense_2(x)\n", 228 | " x = self_dense_3(x)\n", 229 | " \n", 230 | " return x\n", 231 | "\n", 232 | "# Instantiate the model\n", 233 | "\n", 234 | "mlp = MLP()\n", 235 | "```" 236 | ] 237 | }, 238 | { 239 | "cell_type": "markdown", 240 | "id": "3169f9de", 241 | "metadata": { 242 | "id": "3169f9de" 243 | }, 244 | "source": [ 245 | "\n", 246 | "\n", 247 | "## 3. A Quick Tour into TensorFlow Ecosystem\n", 248 | "\n", 249 | "Beside community, one of my favourite things I like about TensorFlow is its whole ecosystem of tools and resources. Below is a list of tools, libraries, and resources that are built to simplify things. " 250 | ] 251 | }, 252 | { 253 | "cell_type": "markdown", 254 | "id": "m55g-rJqu_N7", 255 | "metadata": { 256 | "id": "m55g-rJqu_N7" 257 | }, 258 | "source": [ 259 | "\n", 260 | "\n", 261 | "### Tools\n" 262 | ] 263 | }, 264 | { 265 | "cell_type": "markdown", 266 | "id": "f5656e9c", 267 | "metadata": { 268 | "id": "f5656e9c" 269 | }, 270 | "source": [ 271 | "#### 1. Colaboratory\n", 272 | "\n", 273 | "[Colaboratory](https://colab.research.google.com/notebooks/intro.ipynb) is a free web version of jupyter notebook that doesn't require any set up. You can import almost any library without having to install it. \n", 274 | "\n", 275 | "#### 2. TensorBoard\n", 276 | "\n", 277 | "[TensorBoard](https://tensorboard.dev/) provides the engineers and scientists with the visualizations and debugging capabilities for their Machine Learning models. \n", 278 | "\n", 279 | "With TensorBoard, you can:\n", 280 | "\n", 281 | " * Track and visualize the loss and accuracy metrics\n", 282 | " * View the weights, biases, and other parameters in plots format such as histogram or line plot\n", 283 | " * Display image, audio, and text data\n", 284 | " * See which hyperparameters might be a good match for the model to converge\n", 285 | " \n", 286 | "\n", 287 | "![Tensorboard.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1616575702847/t9hohDbAY.png)\n", 288 | "\n", 289 | "#### 3. What If Tool(WIT)\n", 290 | "\n", 291 | "What if you wanted a visual way to understand your data and model within your environment? [WIT](https://pair-code.github.io/what-if-tool/get-started/) is exactly designed for that purpose. It can help you to understand your dataset and the output of your model.\n", 292 | "\n" 293 | ] 294 | }, 295 | { 296 | "cell_type": "markdown", 297 | "id": "UfIAMd7OvMaW", 298 | "metadata": { 299 | "id": "UfIAMd7OvMaW" 300 | }, 301 | "source": [ 302 | "\n", 303 | "\n", 304 | "### Models and Datasets\n" 305 | ] 306 | }, 307 | { 308 | "cell_type": "markdown", 309 | "id": "grmqH29ZvKIo", 310 | "metadata": { 311 | "id": "grmqH29ZvKIo" 312 | }, 313 | "source": [ 314 | "\n", 315 | "#### 1. 
307 | { 308 | "cell_type": "markdown", 309 | "id": "grmqH29ZvKIo", 310 | "metadata": { 311 | "id": "grmqH29ZvKIo" 312 | }, 313 | "source": [ 314 | "\n",
315 | "#### 1. TensorFlow Hub\n",
316 | "\n",
317 | "[TensorFlow Hub](https://tfhub.dev/) is a collection of ready-to-use models for a variety of tasks: image recognition, object detection, image segmentation, sound recognition, text classification, etc.\n",
318 | "\n",
319 | "#### 2. Model Garden\n",
320 | "\n",
321 | "Model Garden is an [official GitHub repository of TensorFlow models](https://github.com/tensorflow/models/tree/master/official), built with high-level APIs. The available models cover computer vision, natural language processing, and recommendation.\n",
322 | "\n",
323 | "#### 3. TensorFlow Datasets (TFDS)\n",
324 | "\n",
325 | "[TFDS](https://www.tensorflow.org/tfds) contains datasets of different types, from images and texts to video and sound... We will be using many TFDS datasets in our deep learning quest. \n",
326 | "\n",
327 | "\n",
328 | "\n",
329 | "\n",
330 | "**************\n",
331 | "\n",
332 | "There are so many tools and functionalities provided by TensorFlow. As much as we can, we will try to leverage these tools. \n",
333 | "\n",
334 | "This was a quick walkthrough of the TensorFlow ecosystem. If you would like to learn more, go to the [tensorflow website](https://www.tensorflow.org/resources/libraries-extensions) or check out this [article](https://jeande.tech/tf-ecosystem) that I wrote a while back. " 335 | ] 336 | }, 337 | { 338 | "cell_type": "markdown", 339 | "id": "S57Ak5zavZn_", 340 | "metadata": { 341 | "id": "S57Ak5zavZn_" 342 | }, 343 | "source": [ 344 | "\n", 345 | "\n", 346 | "### 3.3 Model Deployment " 347 | ] 348 | }, 349 | { 350 | "cell_type": "markdown", 351 | "id": "GU3Ht5UavXpI", 352 | "metadata": { 353 | "id": "GU3Ht5UavXpI" 354 | }, 355 | "source": [ 356 | "\n", 357 | "\n",
358 | "#### 1. TensorFlow Extended (TFX)\n",
359 | "\n",
360 | "[TFX](https://www.tensorflow.org/tfx/) is an end to end platform for creating machine learning pipelines. With TFX, you can prepare data, train models, validate them, and deploy them in production environments.\n",
361 | "\n",
362 | "#### 2. TensorFlow.js\n",
363 | "\n",
364 | "[TensorFlow.js](https://www.tensorflow.org/js/) makes it easy to train and deploy models in web browsers. \n",
365 | "\n",
366 | "#### 3. TensorFlow Lite \n",
367 | "\n",
368 | "[TF Lite](https://www.tensorflow.org/lite/) makes it possible to deploy and optimize models to run on mobile devices and microcontrollers. " 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "id": "cdc613d1", 374 | "metadata": { 375 | "id": "cdc613d1" 376 | }, 377 | "source": [ 378 | "\n", 379 | "\n", 380 | "## 4. The Basics of Tensors" 381 | ] 382 | }, 383 | { 384 | "cell_type": "markdown", 385 | "id": "0jKEvJIzmaF9", 386 | "metadata": { 387 | "id": "0jKEvJIzmaF9" 388 | }, 389 | "source": [ 390 | "\n", 391 | "### 4.1 Intro to Tensors" 392 | ] 393 | }, 394 | { 395 | "cell_type": "markdown", 396 | "id": "1F4EWUAmB449", 397 | "metadata": { 398 | "id": "1F4EWUAmB449" 399 | }, 400 | "source": [
401 | "A tensor is a multidimensional array of elements of the same data type. A tensor can be a scalar (a single number), a vector, or a matrix. \n",
402 | "\n",
403 | "If you have used NumPy, tensors are like NumPy arrays, except that tensors have GPU (Graphics Processing Unit) support. \n",
404 | "\n",
405 | "A typical tensor has the following attributes:\n",
406 | "\n",
407 | "* **Shape**: The number of elements along each of the tensor's dimensions/axes.\n",
408 | "* **Rank**: The number of dimensions/axes in a tensor. A scalar tensor (a single number) has rank 0, a vector has rank 1 (a vector is 1D), and a matrix has rank 2 (it is 2D). \n",
409 | "* **Axis/Dimension**: A particular dimension of a tensor.\n",
410 | "* **Size**: The total number of items in the tensor. \n",
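"\n",
"As a quick illustration of these attributes, here is a minimal sketch (note that `tf.size()` returns a tensor, hence the `.numpy()` call):\n",
"\n",
"```\n",
"import tensorflow as tf\n",
"\n",
"t = tf.constant([[1, 2, 3], [4, 5, 6]])\n",
"\n",
"print(t.shape)             # (2, 3): 2 rows, 3 columns\n",
"print(t.ndim)              # 2: the rank\n",
"print(tf.size(t).numpy())  # 6: the total number of elements\n",
"```\n",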
411 | "\n",
412 | "But why all this tensor/NumPy array business? \n",
413 | "\n",
414 | "Well, almost all types of data can be represented as an array of numbers. Take an example: \n",
415 | "\n",
416 | "* An image can be represented as an array of pixels. \n",
417 | "* Any text data can be converted into an array of numbers (or tokens representing words).\n",
418 | "* Video (made of sequences of images) can be represented as an array of numbers. \n",
419 | "\n",
420 | "Having the ability to convert these raw data into tensors/arrays makes them easy to preprocess, whether we are performing conventional numerical computations or preparing the data to feed to a machine learning model. Take a simple example: we cannot feed raw text to a machine learning model; that text has to be converted into numbers first. \n",
421 | "\n",
422 | "That is it for the basic intro to tensors. For more about tensors, check out TensorFlow's [introduction to tensors](https://www.tensorflow.org/guide/tensor), or this [rich wikipedia tensor page](https://en.wikipedia.org/wiki/Tensor). \n",
423 | "\n",
424 | "In later parts, we will see how to:\n",
425 | "\n",
426 | "* Create a tensor with tf.constant()\n",
427 | "* Create a tensor with tf.Variable()\n",
428 | "* Create tensors from existing functions\n",
429 | "* Select data in a tensor\n",
430 | "* Perform operations on tensors\n",
431 | "* Manipulate tensor shape" 432 | ] 433 | }, 434 | { 435 | "cell_type": "markdown", 436 | "id": "1GURLW8gqVPn", 437 | "metadata": { 438 | "id": "1GURLW8gqVPn" 439 | }, 440 | "source": [ 441 | "\n", 442 | "\n", 443 | "### 4.2 Creating a Tensor with tf.constant()" 444 | ] 445 | }, 446 | { 447 | "cell_type": "markdown", 448 | "id": "nhP9rgeTIPJX", 449 | "metadata": { 450 | "id": "nhP9rgeTIPJX" 451 | }, 452 | "source": [ 453 | "A tensor can be a scalar, a vector or a matrix. \n", 454 | "Let's use `tf.constant()` to create these tensor types. \n", 455 | "\n", 456 | "A tensor created with `tf.constant()` is immutable. 
" 457 | ] 458 | }, 459 | { 460 | "cell_type": "code", 461 | "execution_count": null, 462 | "id": "3iB4EL10JLZP", 463 | "metadata": { 464 | "id": "3iB4EL10JLZP" 465 | }, 466 | "outputs": [], 467 | "source": [ 468 | "# I will first import tensorflow as tf\n", 469 | "# Also import numpy as np\n", 470 | "# If you are using Colab, no need to install them \n", 471 | "\n", 472 | "import tensorflow as tf\n", 473 | "import numpy as np" 474 | ] 475 | }, 476 | { 477 | "cell_type": "code", 478 | "execution_count": null, 479 | "id": "hLoQKlElqXBy", 480 | "metadata": { 481 | "id": "hLoQKlElqXBy" 482 | }, 483 | "outputs": [], 484 | "source": [ 485 | "# Creating a scalar tensor\n", 486 | "# You can specify dtype but TF will detect its if left unspecified\n", 487 | "\n", 488 | "scalar_tensor = tf.constant(10)" 489 | ] 490 | }, 491 | { 492 | "cell_type": "code", 493 | "execution_count": null, 494 | "id": "XbiXNElzJy0R", 495 | "metadata": { 496 | "colab": { 497 | "base_uri": "https://localhost:8080/" 498 | }, 499 | "id": "XbiXNElzJy0R", 500 | "outputId": "2e21dc07-4bc3-4a8f-87c6-5ed9bd1c8fea" 501 | }, 502 | "outputs": [ 503 | { 504 | "name": "stdout", 505 | "output_type": "stream", 506 | "text": [ 507 | "tf.Tensor(10, shape=(), dtype=int32)\n" 508 | ] 509 | } 510 | ], 511 | "source": [ 512 | "# Displaying created tensor\n", 513 | "\n", 514 | "print(scalar_tensor)" 515 | ] 516 | }, 517 | { 518 | "cell_type": "code", 519 | "execution_count": null, 520 | "id": "tcbvOMdbJ4Pu", 521 | "metadata": { 522 | "id": "tcbvOMdbJ4Pu" 523 | }, 524 | "outputs": [], 525 | "source": [ 526 | "# We can also create a vector or rank 1 tensor\n", 527 | "# Simply put, a vector is one dimensional\n", 528 | "# We can create it from a list of values\n", 529 | "\n", 530 | "vect_tensor = tf.constant([1.0,2.0,3.0,4.0,5.0,6.0])" 531 | ] 532 | }, 533 | { 534 | "cell_type": "code", 535 | "execution_count": null, 536 | "id": "JnBdyNcNKtPZ", 537 | "metadata": { 538 | "colab": { 539 | "base_uri": "https://localhost:8080/" 540 | }, 541 | "id": "JnBdyNcNKtPZ", 542 | "outputId": "af30c3f1-9f54-4272-b063-57e46929adba" 543 | }, 544 | "outputs": [ 545 | { 546 | "name": "stdout", 547 | "output_type": "stream", 548 | "text": [ 549 | "tf.Tensor([1. 2. 3. 4. 5. 6.], shape=(6,), dtype=float32)\n" 550 | ] 551 | } 552 | ], 553 | "source": [ 554 | "print(vect_tensor)" 555 | ] 556 | }, 557 | { 558 | "cell_type": "markdown", 559 | "id": "kx-SdOE1Ky37", 560 | "metadata": { 561 | "id": "kx-SdOE1Ky37" 562 | }, 563 | "source": [ 564 | "A vector of 1 dimensional values was created. As you can see, the data type is `float32` because the values were floats. TensorFlow detects that automatically from values if the data type was not mentioned. \n", 565 | "\n", 566 | "Let's now create a tensor with rank 2 or two dimensions. This is actually a matrix. 
" 567 | ] 568 | }, 569 | { 570 | "cell_type": "code", 571 | "execution_count": null, 572 | "id": "KSuWT0_aLE-_", 573 | "metadata": { 574 | "id": "KSuWT0_aLE-_" 575 | }, 576 | "outputs": [], 577 | "source": [ 578 | "mat_tensor = tf.constant([[2,4],\n", 579 | " [6,8],\n", 580 | " [10,12]], dtype=tf.int32)" 581 | ] 582 | }, 583 | { 584 | "cell_type": "code", 585 | "execution_count": null, 586 | "id": "twT8PtbBLrVA", 587 | "metadata": { 588 | "colab": { 589 | "base_uri": "https://localhost:8080/" 590 | }, 591 | "id": "twT8PtbBLrVA", 592 | "outputId": "b9828429-cc93-4cc9-e2f0-e1134c38166c" 593 | }, 594 | "outputs": [ 595 | { 596 | "name": "stdout", 597 | "output_type": "stream", 598 | "text": [ 599 | "tf.Tensor(\n", 600 | "[[ 2 4]\n", 601 | " [ 6 8]\n", 602 | " [10 12]], shape=(3, 2), dtype=int32)\n" 603 | ] 604 | } 605 | ], 606 | "source": [ 607 | "print(mat_tensor)" 608 | ] 609 | }, 610 | { 611 | "cell_type": "markdown", 612 | "id": "AS-4T0fdL7KU", 613 | "metadata": { 614 | "id": "AS-4T0fdL7KU" 615 | }, 616 | "source": [ 617 | "If you can see in the displayed tensor above, the shape is `(3,2)` which means our tensor has 3 rows and 2 columns. \n", 618 | "\n", 619 | "You can also check the number of dimensions or axes of a tensor using `tensor_name.ndim`" 620 | ] 621 | }, 622 | { 623 | "cell_type": "code", 624 | "execution_count": null, 625 | "id": "luy2-ckdM30Y", 626 | "metadata": { 627 | "colab": { 628 | "base_uri": "https://localhost:8080/" 629 | }, 630 | "id": "luy2-ckdM30Y", 631 | "outputId": "f3c2be5f-3993-4ca7-817a-d6d8988fad75" 632 | }, 633 | "outputs": [ 634 | { 635 | "data": { 636 | "text/plain": [ 637 | "0" 638 | ] 639 | }, 640 | "execution_count": 14, 641 | "metadata": { 642 | "tags": [] 643 | }, 644 | "output_type": "execute_result" 645 | } 646 | ], 647 | "source": [ 648 | "scalar_tensor.ndim" 649 | ] 650 | }, 651 | { 652 | "cell_type": "markdown", 653 | "id": "9OZViVpSNz0D", 654 | "metadata": { 655 | "id": "9OZViVpSNz0D" 656 | }, 657 | "source": [ 658 | "A scalar tensor does not have any dimension. It's just a single value. But if we do the same thing for a vector or matrix, you will see something different. 
" 659 | ] 660 | }, 661 | { 662 | "cell_type": "code", 663 | "execution_count": null, 664 | "id": "OatUK_LaN0Th", 665 | "metadata": { 666 | "colab": { 667 | "base_uri": "https://localhost:8080/" 668 | }, 669 | "id": "OatUK_LaN0Th", 670 | "outputId": "7b95aa53-c557-49f5-f543-936e04be3c47" 671 | }, 672 | "outputs": [ 673 | { 674 | "data": { 675 | "text/plain": [ 676 | "1" 677 | ] 678 | }, 679 | "execution_count": 16, 680 | "metadata": { 681 | "tags": [] 682 | }, 683 | "output_type": "execute_result" 684 | } 685 | ], 686 | "source": [ 687 | "# A vector has 1 dimension\n", 688 | "\n", 689 | "vect_tensor.ndim" 690 | ] 691 | }, 692 | { 693 | "cell_type": "code", 694 | "execution_count": null, 695 | "id": "JeKe3q3qOHmW", 696 | "metadata": { 697 | "colab": { 698 | "base_uri": "https://localhost:8080/" 699 | }, 700 | "id": "JeKe3q3qOHmW", 701 | "outputId": "819c762e-8d04-41f3-d446-f4888f5cb9e8" 702 | }, 703 | "outputs": [ 704 | { 705 | "data": { 706 | "text/plain": [ 707 | "2" 708 | ] 709 | }, 710 | "execution_count": 17, 711 | "metadata": { 712 | "tags": [] 713 | }, 714 | "output_type": "execute_result" 715 | } 716 | ], 717 | "source": [ 718 | "# A matrix has 2D or more dimensions\n", 719 | "\n", 720 | "mat_tensor.ndim" 721 | ] 722 | }, 723 | { 724 | "cell_type": "markdown", 725 | "id": "CEwg8oCLO0Fm", 726 | "metadata": { 727 | "id": "CEwg8oCLO0Fm" 728 | }, 729 | "source": [ 730 | "Just like NumPy array, a tensor can have many dimensions. Let's create a tensor with 3 dimensions. " 731 | ] 732 | }, 733 | { 734 | "cell_type": "code", 735 | "execution_count": null, 736 | "id": "bS1ad5oZO0dr", 737 | "metadata": { 738 | "colab": { 739 | "base_uri": "https://localhost:8080/" 740 | }, 741 | "id": "bS1ad5oZO0dr", 742 | "outputId": "ac1c4dd0-a5c3-4b03-fb86-3eba8cdca2ef" 743 | }, 744 | "outputs": [ 745 | { 746 | "name": "stdout", 747 | "output_type": "stream", 748 | "text": [ 749 | "tf.Tensor(\n", 750 | "[[[1 2 3 4 5]\n", 751 | " [6 7 8 9 8]]\n", 752 | "\n", 753 | " [[1 3 5 7 9]\n", 754 | " [2 4 6 8 1]]\n", 755 | "\n", 756 | " [[1 2 3 5 4]\n", 757 | " [3 4 5 6 7]]], shape=(3, 2, 5), dtype=int32)\n" 758 | ] 759 | } 760 | ], 761 | "source": [ 762 | "tensor_3d = tf.constant([\n", 763 | " [[1,2,3,4,5],\n", 764 | " [6,7,8,9,8]],\n", 765 | " [[1,3,5,7,9],\n", 766 | " [2,4,6,8,1]],\n", 767 | " [[1,2,3,5,4],\n", 768 | " [3,4,5,6,7]], ])\n", 769 | "\n", 770 | "print(tensor_3d)" 771 | ] 772 | }, 773 | { 774 | "cell_type": "code", 775 | "execution_count": null, 776 | "id": "Ilvi_ojDPdPZ", 777 | "metadata": { 778 | "colab": { 779 | "base_uri": "https://localhost:8080/" 780 | }, 781 | "id": "Ilvi_ojDPdPZ", 782 | "outputId": "6965884c-89d8-4148-c4bf-5e2384ddc31c" 783 | }, 784 | "outputs": [ 785 | { 786 | "data": { 787 | "text/plain": [ 788 | "3" 789 | ] 790 | }, 791 | "execution_count": 31, 792 | "metadata": { 793 | "tags": [] 794 | }, 795 | "output_type": "execute_result" 796 | } 797 | ], 798 | "source": [ 799 | "tensor_3d.ndim" 800 | ] 801 | }, 802 | { 803 | "cell_type": "markdown", 804 | "id": "PcCwP51-TDBI", 805 | "metadata": { 806 | "id": "PcCwP51-TDBI" 807 | }, 808 | "source": [ 809 | "A tensor can be converted into NumPy array by calling `tensor_name.numpy` or `np.array(tensor_name)`. \n", 810 | "\n", 811 | "TensorFlow plays well with NumPy. And if not yet done, TensorFlow recently posted that they are working on gettint the whole of NumPy into TensorFlow. 
" 812 | ] 813 | }, 814 | { 815 | "cell_type": "code", 816 | "execution_count": null, 817 | "id": "-5sc7JV8Tv_C", 818 | "metadata": { 819 | "colab": { 820 | "base_uri": "https://localhost:8080/" 821 | }, 822 | "id": "-5sc7JV8Tv_C", 823 | "outputId": "3df93113-da2c-46e0-f636-f370c1e71e43" 824 | }, 825 | "outputs": [ 826 | { 827 | "data": { 828 | "text/plain": [ 829 | "array([[[1, 2, 3, 4, 5],\n", 830 | " [6, 7, 8, 9, 8]],\n", 831 | "\n", 832 | " [[1, 3, 5, 7, 9],\n", 833 | " [2, 4, 6, 8, 1]],\n", 834 | "\n", 835 | " [[1, 2, 3, 5, 4],\n", 836 | " [3, 4, 5, 6, 7]]], dtype=int32)" 837 | ] 838 | }, 839 | "execution_count": 35, 840 | "metadata": { 841 | "tags": [] 842 | }, 843 | "output_type": "execute_result" 844 | } 845 | ], 846 | "source": [ 847 | "# Converting a tensor into a NumPy array\n", 848 | "\n", 849 | "n_array = tensor_3d.numpy()\n", 850 | "\n", 851 | "n_array" 852 | ] 853 | }, 854 | { 855 | "cell_type": "code", 856 | "execution_count": null, 857 | "id": "aZcE2h4UT674", 858 | "metadata": { 859 | "colab": { 860 | "base_uri": "https://localhost:8080/" 861 | }, 862 | "id": "aZcE2h4UT674", 863 | "outputId": "185c8baf-884c-49ab-a4b7-d0fd4f41dc6b" 864 | }, 865 | "outputs": [ 866 | { 867 | "data": { 868 | "text/plain": [ 869 | "array([[[1, 2, 3, 4, 5],\n", 870 | " [6, 7, 8, 9, 8]],\n", 871 | "\n", 872 | " [[1, 3, 5, 7, 9],\n", 873 | " [2, 4, 6, 8, 1]],\n", 874 | "\n", 875 | " [[1, 2, 3, 5, 4],\n", 876 | " [3, 4, 5, 6, 7]]], dtype=int32)" 877 | ] 878 | }, 879 | "execution_count": 37, 880 | "metadata": { 881 | "tags": [] 882 | }, 883 | "output_type": "execute_result" 884 | } 885 | ], 886 | "source": [ 887 | "# Using np.array(tensor_name)\n", 888 | "\n", 889 | "np.array(tensor_3d)" 890 | ] 891 | }, 892 | { 893 | "cell_type": "markdown", 894 | "id": "IYwTQS4zUq9J", 895 | "metadata": { 896 | "id": "IYwTQS4zUq9J" 897 | }, 898 | "source": [ 899 | "\n", 900 | "\n", 901 | "### 4. 3 Creating a Tensor with tf.Variable()" 902 | ] 903 | }, 904 | { 905 | "cell_type": "markdown", 906 | "id": "9U0iKXNXU7n9", 907 | "metadata": { 908 | "id": "9U0iKXNXU7n9" 909 | }, 910 | "source": [ 911 | "A tensor created with `tf.constant()` is immutable, it can not be changed. Such kind of tensor can not be used as weights in neural networks because they need to be changed/updated in backpropogation for example. \n", 912 | "\n", 913 | "With `tf.Variable()`, we can create tensors that can be mutable and thus can be used in things like updating the weights of neural networks like said above. \n", 914 | "\n", 915 | "Creating variable tensor is as simple as the former. 
" 916 | ] 917 | }, 918 | { 919 | "cell_type": "code", 920 | "execution_count": null, 921 | "id": "_w7GOGQWWAOG", 922 | "metadata": { 923 | "colab": { 924 | "base_uri": "https://localhost:8080/" 925 | }, 926 | "id": "_w7GOGQWWAOG", 927 | "outputId": "50219395-65fa-492f-a888-6f6c81d74046" 928 | }, 929 | "outputs": [ 930 | { 931 | "name": "stdout", 932 | "output_type": "stream", 933 | "text": [ 934 | "\n" 943 | ] 944 | } 945 | ], 946 | "source": [ 947 | "var_tensor = tf.Variable([\n", 948 | " [[1,2,3,4,5],\n", 949 | " [6,7,8,9,8]],\n", 950 | " [[1,3,5,7,9],\n", 951 | " [2,4,6,8,1]],\n", 952 | " [[1,2,3,5,4],\n", 953 | " [3,4,5,6,7]], ])\n", 954 | "\n", 955 | "print(var_tensor)" 956 | ] 957 | }, 958 | { 959 | "cell_type": "markdown", 960 | "id": "2svEn-TnWr5t", 961 | "metadata": { 962 | "id": "2svEn-TnWr5t" 963 | }, 964 | "source": [ 965 | "It can also be converted to NumPy array, just like tensors created with `tf.constant()`" 966 | ] 967 | }, 968 | { 969 | "cell_type": "code", 970 | "execution_count": null, 971 | "id": "xZVFZhRYWsbv", 972 | "metadata": { 973 | "colab": { 974 | "base_uri": "https://localhost:8080/" 975 | }, 976 | "id": "xZVFZhRYWsbv", 977 | "outputId": "bb273948-a7ed-495a-9ba5-27994d625f84" 978 | }, 979 | "outputs": [ 980 | { 981 | "data": { 982 | "text/plain": [ 983 | "array([[[1, 2, 3, 4, 5],\n", 984 | " [6, 7, 8, 9, 8]],\n", 985 | "\n", 986 | " [[1, 3, 5, 7, 9],\n", 987 | " [2, 4, 6, 8, 1]],\n", 988 | "\n", 989 | " [[1, 2, 3, 5, 4],\n", 990 | " [3, 4, 5, 6, 7]]], dtype=int32)" 991 | ] 992 | }, 993 | "execution_count": 42, 994 | "metadata": { 995 | "tags": [] 996 | }, 997 | "output_type": "execute_result" 998 | } 999 | ], 1000 | "source": [ 1001 | "# Converting a variable tensor into NumPy array\n", 1002 | "\n", 1003 | "var_tensor.numpy()" 1004 | ] 1005 | }, 1006 | { 1007 | "cell_type": "markdown", 1008 | "id": "lBINI0X2W7s7", 1009 | "metadata": { 1010 | "id": "lBINI0X2W7s7" 1011 | }, 1012 | "source": [ 1013 | "\n", 1014 | "\n", 1015 | "### 4. 4 Creating a Tensor from Existing Functions" 1016 | ] 1017 | }, 1018 | { 1019 | "cell_type": "markdown", 1020 | "id": "FjUg7FYKXH7P", 1021 | "metadata": { 1022 | "id": "FjUg7FYKXH7P" 1023 | }, 1024 | "source": [ 1025 | "There some types of uniform tensors that you would not want to create from scratch, when in fact, they are already built. \n", 1026 | "\n", 1027 | "Take an example of 1's tensor, 0's, and random tensors. Let's create them." 1028 | ] 1029 | }, 1030 | { 1031 | "cell_type": "code", 1032 | "execution_count": null, 1033 | "id": "rLqjm3j1XnoD", 1034 | "metadata": { 1035 | "colab": { 1036 | "base_uri": "https://localhost:8080/" 1037 | }, 1038 | "id": "rLqjm3j1XnoD", 1039 | "outputId": "60d265ae-c583-4c66-f42a-a2a3c967b0a7" 1040 | }, 1041 | "outputs": [ 1042 | { 1043 | "name": "stdout", 1044 | "output_type": "stream", 1045 | "text": [ 1046 | "tf.Tensor(\n", 1047 | "[[1. 1. 1. 1.]\n", 1048 | " [1. 1. 1. 1.]\n", 1049 | " [1. 1. 1. 1.]\n", 1050 | " [1. 1. 1. 
1006 | { 1007 | "cell_type": "markdown", 1008 | "id": "lBINI0X2W7s7", 1009 | "metadata": { 1010 | "id": "lBINI0X2W7s7" 1011 | }, 1012 | "source": [ 1013 | "\n", 1014 | "\n", 1015 | "### 4.4 Creating a Tensor from Existing Functions" 1016 | ] 1017 | }, 1018 | { 1019 | "cell_type": "markdown", 1020 | "id": "FjUg7FYKXH7P", 1021 | "metadata": { 1022 | "id": "FjUg7FYKXH7P" 1023 | }, 1024 | "source": [
1025 | "There are some common tensors that you do not need to create from scratch, because, in fact, functions that build them already exist. \n",
1026 | "\n",
1027 | "Take the example of a tensor of 1's, a tensor of 0's, and random tensors. Let's create them." 1028 | ] 1029 | }, 1030 | { 1031 | "cell_type": "code", 1032 | "execution_count": null, 1033 | "id": "rLqjm3j1XnoD", 1034 | "metadata": { 1035 | "colab": { 1036 | "base_uri": "https://localhost:8080/" 1037 | }, 1038 | "id": "rLqjm3j1XnoD", 1039 | "outputId": "60d265ae-c583-4c66-f42a-a2a3c967b0a7" 1040 | }, 1041 | "outputs": [ 1042 | { 1043 | "name": "stdout", 1044 | "output_type": "stream", 1045 | "text": [ 1046 | "tf.Tensor(\n", 1047 | "[[1. 1. 1. 1.]\n", 1048 | " [1. 1. 1. 1.]\n", 1049 | " [1. 1. 1. 1.]\n", 1050 | " [1. 1. 1. 1.]], shape=(4, 4), dtype=float32)\n" 1051 | ] 1052 | } 1053 | ], 1054 | "source": [ 1055 | "# Creating 1's tensor\n", 1056 | "\n", 1057 | "ones_tensor = tf.ones([4,4])\n", 1058 | "\n", 1059 | "print(ones_tensor)" 1060 | ] 1061 | }, 1062 | { 1063 | "cell_type": "code", 1064 | "execution_count": null, 1065 | "id": "qqUg068BXv95", 1066 | "metadata": { 1067 | "colab": { 1068 | "base_uri": "https://localhost:8080/" 1069 | }, 1070 | "id": "qqUg068BXv95", 1071 | "outputId": "640b1b5e-7d8b-4209-f0bf-3e526fe04acd" 1072 | }, 1073 | "outputs": [ 1074 | { 1075 | "name": "stdout", 1076 | "output_type": "stream", 1077 | "text": [ 1078 | "tf.Tensor([[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]], shape=(1, 10), dtype=float32)\n" 1079 | ] 1080 | } 1081 | ], 1082 | "source": [ 1083 | "# Creating another 1's tensor, with shape (1, 10)\n", 1084 | "\n", 1085 | "ones_tensor_1 = tf.ones([1,10])\n", 1086 | "\n", 1087 | "print(ones_tensor_1)" 1088 | ] 1089 | }, 1090 | { 1091 | "cell_type": "code", 1092 | "execution_count": null, 1093 | "id": "RzsQ6T3sX0J_", 1094 | "metadata": { 1095 | "colab": { 1096 | "base_uri": "https://localhost:8080/" 1097 | }, 1098 | "id": "RzsQ6T3sX0J_", 1099 | "outputId": "14cd3dbe-fe71-4e33-daf1-8baea6e2d176" 1100 | }, 1101 | "outputs": [ 1102 | { 1103 | "name": "stdout", 1104 | "output_type": "stream", 1105 | "text": [ 1106 | "tf.Tensor(\n", 1107 | "[[0. 0. 0.]\n", 1108 | " [0. 0. 0.]\n", 1109 | " [0. 0. 0.]], shape=(3, 3), dtype=float32)\n" 1110 | ] 1111 | } 1112 | ], 1113 | "source": [ 1114 | "# Creating zeros' tensor\n", 1115 | "\n", 1116 | "tensor_0 = tf.zeros([3,3])\n", 1117 | "print(tensor_0)" 1118 | ] 1119 | }, 1120 | { 1121 | "cell_type": "markdown", 1122 | "id": "3QqBFR__ZVi2", 1123 | "metadata": { 1124 | "id": "3QqBFR__ZVi2" 1125 | }, 1126 | "source": [
1127 | "We can also create a tensor with random values. During weight initialization in neural networks, the weights take random values. \n",
1128 | "\n",
1129 | "You might be wondering why we aren't building neural networks yet, and I get you. Understanding the basics of tensors, and especially working with TensorFlow directly, is useful when it comes to creating custom neural network layers, loss functions, or optimizers. \n",
1130 | "\n",
1131 | "The later labs will not deal with custom layers/losses/optimizers; this is just an introduction, so that it can serve as a reference whenever you want to take a step back into the backbone of TensorFlow's high-level API. 
" 1132 | ] 1133 | }, 1134 | { 1135 | "cell_type": "code", 1136 | "execution_count": null, 1137 | "id": "Kv1Ro1sAYOwT", 1138 | "metadata": { 1139 | "colab": { 1140 | "base_uri": "https://localhost:8080/" 1141 | }, 1142 | "id": "Kv1Ro1sAYOwT", 1143 | "outputId": "376bfc1b-7bf0-4a59-c866-3592ca8fe5bd" 1144 | }, 1145 | "outputs": [ 1146 | { 1147 | "name": "stdout", 1148 | "output_type": "stream", 1149 | "text": [ 1150 | "tf.Tensor(\n", 1151 | "[[-0.43640924 -1.9633987 -0.06452483]\n", 1152 | " [-1.056841 1.0019137 0.6735137 ]\n", 1153 | " [ 0.06987712 -1.4077919 1.0278524 ]], shape=(3, 3), dtype=float32)\n" 1154 | ] 1155 | } 1156 | ], 1157 | "source": [ 1158 | "# Generating a tensor with random values \n", 1159 | "\n", 1160 | "# We first have to create a generator object\n", 1161 | "\n", 1162 | "rand_tensor = tf.random.Generator.from_seed(3)\n", 1163 | "\n", 1164 | "rand_tensor = rand_tensor.normal(shape=[3,3])\n", 1165 | "print(rand_tensor)" 1166 | ] 1167 | }, 1168 | { 1169 | "cell_type": "markdown", 1170 | "id": "ujqu7NXReuRm", 1171 | "metadata": { 1172 | "id": "ujqu7NXReuRm" 1173 | }, 1174 | "source": [ 1175 | "Changing seed number in `tf.random.Generator.from_seed(3)` will change the values returned by random function. " 1176 | ] 1177 | }, 1178 | { 1179 | "cell_type": "markdown", 1180 | "id": "J3qtTgCzfcTR", 1181 | "metadata": { 1182 | "id": "J3qtTgCzfcTR" 1183 | }, 1184 | "source": [ 1185 | "We can also shuffle the existing tensor, created with `tf.constant()` or `tf.Variable()`." 1186 | ] 1187 | }, 1188 | { 1189 | "cell_type": "code", 1190 | "execution_count": null, 1191 | "id": "ilh0IwCYfn8K", 1192 | "metadata": { 1193 | "colab": { 1194 | "base_uri": "https://localhost:8080/" 1195 | }, 1196 | "id": "ilh0IwCYfn8K", 1197 | "outputId": "facadbf8-ee40-470e-9ae2-bda75024f8e7" 1198 | }, 1199 | "outputs": [ 1200 | { 1201 | "name": "stdout", 1202 | "output_type": "stream", 1203 | "text": [ 1204 | "tf.Tensor(\n", 1205 | "[[1 3]\n", 1206 | " [3 4]\n", 1207 | " [4 5]], shape=(3, 2), dtype=int32)\n" 1208 | ] 1209 | } 1210 | ], 1211 | "source": [ 1212 | "# Create a typical tensor \n", 1213 | "\n", 1214 | "example_tensor = tf.constant([[1,3],\n", 1215 | " [3,4],\n", 1216 | " [4,5]])\n", 1217 | "\n", 1218 | "print(example_tensor)" 1219 | ] 1220 | }, 1221 | { 1222 | "cell_type": "code", 1223 | "execution_count": null, 1224 | "id": "FzkUNIvFhz4o", 1225 | "metadata": { 1226 | "id": "FzkUNIvFhz4o" 1227 | }, 1228 | "outputs": [], 1229 | "source": [ 1230 | "def shuffle_tensor(tensor):\n", 1231 | "\n", 1232 | " \"\"\"\n", 1233 | " Take a tensor as input and return the shuffled tensor\n", 1234 | " \"\"\"\n", 1235 | " # Shuffle the order of the created tensor\n", 1236 | "\n", 1237 | " tensor_shuffled = tf.random.shuffle(tensor)\n", 1238 | "\n", 1239 | " return print(tensor_shuffled)" 1240 | ] 1241 | }, 1242 | { 1243 | "cell_type": "code", 1244 | "execution_count": null, 1245 | "id": "oYRDOZWiiaIs", 1246 | "metadata": { 1247 | "colab": { 1248 | "base_uri": "https://localhost:8080/" 1249 | }, 1250 | "id": "oYRDOZWiiaIs", 1251 | "outputId": "07941eed-b40b-4d7a-e57a-f1063a9a8325" 1252 | }, 1253 | "outputs": [ 1254 | { 1255 | "name": "stdout", 1256 | "output_type": "stream", 1257 | "text": [ 1258 | "tf.Tensor(\n", 1259 | "[[4 5]\n", 1260 | " [1 3]\n", 1261 | " [3 4]], shape=(3, 2), dtype=int32)\n" 1262 | ] 1263 | } 1264 | ], 1265 | "source": [ 1266 | "shuffle_tensor(example_tensor)" 1267 | ] 1268 | }, 1269 | { 1270 | "cell_type": "markdown", 1271 | "id": "BxdM4e1cgtPV", 1272 | "metadata": { 1273 | "id": 
"BxdM4e1cgtPV" 1274 | }, 1275 | "source": [ 1276 | "If you rerun the above cell more than once, you will get different orders of tensor. \n", 1277 | "\n" 1278 | ] 1279 | }, 1280 | { 1281 | "cell_type": "code", 1282 | "execution_count": null, 1283 | "id": "vOOgQsnAhN5C", 1284 | "metadata": { 1285 | "colab": { 1286 | "base_uri": "https://localhost:8080/" 1287 | }, 1288 | "id": "vOOgQsnAhN5C", 1289 | "outputId": "c5a1d085-1a22-4e0d-abcf-07142882d74d" 1290 | }, 1291 | "outputs": [ 1292 | { 1293 | "name": "stdout", 1294 | "output_type": "stream", 1295 | "text": [ 1296 | "tf.Tensor(\n", 1297 | "[[4 5]\n", 1298 | " [3 4]\n", 1299 | " [1 3]], shape=(3, 2), dtype=int32)\n" 1300 | ] 1301 | } 1302 | ], 1303 | "source": [ 1304 | "shuffle_tensor(example_tensor)" 1305 | ] 1306 | }, 1307 | { 1308 | "cell_type": "markdown", 1309 | "id": "9KkhVyrlikJ8", 1310 | "metadata": { 1311 | "id": "9KkhVyrlikJ8" 1312 | }, 1313 | "source": [ 1314 | "In order to prevent that, we can use `tf.random.set_seed(seed_number)` to always get the same order/values. " 1315 | ] 1316 | }, 1317 | { 1318 | "cell_type": "code", 1319 | "execution_count": null, 1320 | "id": "d9gl0SAdhCmG", 1321 | "metadata": { 1322 | "colab": { 1323 | "base_uri": "https://localhost:8080/" 1324 | }, 1325 | "id": "d9gl0SAdhCmG", 1326 | "outputId": "2e209d85-2b6e-4eaa-9c37-12a3850a9eca" 1327 | }, 1328 | "outputs": [ 1329 | { 1330 | "name": "stdout", 1331 | "output_type": "stream", 1332 | "text": [ 1333 | "tf.Tensor(\n", 1334 | "[[3 4]\n", 1335 | " [4 5]\n", 1336 | " [1 3]], shape=(3, 2), dtype=int32)\n" 1337 | ] 1338 | } 1339 | ], 1340 | "source": [ 1341 | "# Set seed \n", 1342 | "\n", 1343 | "tf.random.set_seed(42)\n", 1344 | "\n", 1345 | "shuffle_tensor(example_tensor)" 1346 | ] 1347 | }, 1348 | { 1349 | "cell_type": "markdown", 1350 | "id": "0KkjtE87jBb0", 1351 | "metadata": { 1352 | "id": "0KkjtE87jBb0" 1353 | }, 1354 | "source": [ 1355 | "Everytime you can run `shuffle_tensor` function with a same seed, you will get the same order. " 1356 | ] 1357 | }, 1358 | { 1359 | "cell_type": "markdown", 1360 | "id": "LFESA8kzjJix", 1361 | "metadata": { 1362 | "id": "LFESA8kzjJix" 1363 | }, 1364 | "source": [ 1365 | "You can learn more about Random number generation at [TensorFlow docs](https://www.tensorflow.org/guide/random_numbers#the_tfrandomgenerator_class). " 1366 | ] 1367 | }, 1368 | { 1369 | "cell_type": "markdown", 1370 | "id": "UNICeI1JjnCQ", 1371 | "metadata": { 1372 | "id": "UNICeI1JjnCQ" 1373 | }, 1374 | "source": [ 1375 | "\n", 1376 | "\n", 1377 | "### 4. 5 Selecting Data in Tensor" 1378 | ] 1379 | }, 1380 | { 1381 | "cell_type": "markdown", 1382 | "id": "JrX7DaN0kDnX", 1383 | "metadata": { 1384 | "id": "JrX7DaN0kDnX" 1385 | }, 1386 | "source": [ 1387 | "We can also select values in any tensor, both single dimensional tensor and multi dimensional tensor." 1388 | ] 1389 | }, 1390 | { 1391 | "cell_type": "code", 1392 | "execution_count": null, 1393 | "id": "OA0uyuESjwEs", 1394 | "metadata": { 1395 | "id": "OA0uyuESjwEs" 1396 | }, 1397 | "outputs": [], 1398 | "source": [ 1399 | "# Let's create a tensor\n", 1400 | "\n", 1401 | "tensor_1d = tf.constant([1,2,3,4,5,6,7])" 1402 | ] 1403 | }, 1404 | { 1405 | "cell_type": "markdown", 1406 | "id": "subya0p2kbcj", 1407 | "metadata": { 1408 | "id": "subya0p2kbcj" 1409 | }, 1410 | "source": [ 1411 | "Let's select multiple values in tensor created above. 
" 1412 | ] 1413 | }, 1414 | { 1415 | "cell_type": "code", 1416 | "execution_count": null, 1417 | "id": "pZ_F3gikkTj3", 1418 | "metadata": { 1419 | "colab": { 1420 | "base_uri": "https://localhost:8080/" 1421 | }, 1422 | "id": "pZ_F3gikkTj3", 1423 | "outputId": "f24e3a1a-2be9-4342-e83f-57cc48c69b11" 1424 | }, 1425 | "outputs": [ 1426 | { 1427 | "name": "stdout", 1428 | "output_type": "stream", 1429 | "text": [ 1430 | "The first value: 1\n", 1431 | "The second value: 3\n", 1432 | "From the 3 to 5th values: [4 5]\n", 1433 | "From the 3 to last value: [4 5 6 7]\n", 1434 | "The last value: 7\n", 1435 | "Select value before the last value: 6\n", 1436 | "Select all tensor values: [1 2 3 4 5 6 7]\n" 1437 | ] 1438 | } 1439 | ], 1440 | "source": [ 1441 | "print('The first value:', tensor_1d[0].numpy())\n", 1442 | "print('The second value:', tensor_1d[2].numpy())\n", 1443 | "print('From the 3 to 5th values:', tensor_1d[3:5].numpy())\n", 1444 | "print('From the 3 to last value:', tensor_1d[3:].numpy())\n", 1445 | "print('The last value:', tensor_1d[-1].numpy())\n", 1446 | "print('Select value before the last value:', tensor_1d[-2].numpy())\n", 1447 | "print('Select all tensor values:', tensor_1d[:].numpy())" 1448 | ] 1449 | }, 1450 | { 1451 | "cell_type": "markdown", 1452 | "id": "pMvoVj4plviE", 1453 | "metadata": { 1454 | "id": "pMvoVj4plviE" 1455 | }, 1456 | "source": [ 1457 | "Selecting/indexing data in tensor is similar to Python list indexing, and NumPy also. \n", 1458 | "\n", 1459 | "Let's also select data in 2D tensor." 1460 | ] 1461 | }, 1462 | { 1463 | "cell_type": "code", 1464 | "execution_count": null, 1465 | "id": "AgJARJ9zl_Lm", 1466 | "metadata": { 1467 | "id": "AgJARJ9zl_Lm" 1468 | }, 1469 | "outputs": [], 1470 | "source": [ 1471 | "tensor_2d = tf.constant([[1,3],\n", 1472 | " [3,4],\n", 1473 | " [4,5]])" 1474 | ] 1475 | }, 1476 | { 1477 | "cell_type": "code", 1478 | "execution_count": null, 1479 | "id": "211dhp4ymj8I", 1480 | "metadata": { 1481 | "colab": { 1482 | "base_uri": "https://localhost:8080/" 1483 | }, 1484 | "id": "211dhp4ymj8I", 1485 | "outputId": "403a2521-b4a4-4c04-a32d-21cc2fe6f40c" 1486 | }, 1487 | "outputs": [ 1488 | { 1489 | "name": "stdout", 1490 | "output_type": "stream", 1491 | "text": [ 1492 | "The first row: [1 3]\n", 1493 | "The second column: [3 4 5]\n", 1494 | "The last low: [4 5]\n", 1495 | "The first value in the last row: 4\n", 1496 | "The last value in the last column: 5\n" 1497 | ] 1498 | } 1499 | ], 1500 | "source": [ 1501 | "print('The first row:', tensor_2d[0,:].numpy())\n", 1502 | "print('The second column:', tensor_2d[:,1].numpy())\n", 1503 | "print('The last low:', tensor_2d[-1,:].numpy())\n", 1504 | "print('The first value in the last row:', tensor_2d[-1,0].numpy())\n", 1505 | "print('The last value in the last column:', tensor_2d[-1,-1].numpy())" 1506 | ] 1507 | }, 1508 | { 1509 | "cell_type": "markdown", 1510 | "id": "Klm8JekIoemi", 1511 | "metadata": { 1512 | "id": "Klm8JekIoemi" 1513 | }, 1514 | "source": [ 1515 | "\n", 1516 | "\n", 1517 | "### 4. 6 Performing Operations on Tensors" 1518 | ] 1519 | }, 1520 | { 1521 | "cell_type": "markdown", 1522 | "id": "cylwwSN1o2QC", 1523 | "metadata": { 1524 | "id": "cylwwSN1o2QC" 1525 | }, 1526 | "source": [ 1527 | "All numeric operations can be performed on tensor. Let's see few of them." 
1528 | ] 1529 | }, 1530 | { 1531 | "cell_type": "code", 1532 | "execution_count": null, 1533 | "id": "owzkLYcgpCgU", 1534 | "metadata": { 1535 | "id": "owzkLYcgpCgU" 1536 | }, 1537 | "outputs": [], 1538 | "source": [ 1539 | "# Creating example tensors\n", 1540 | "\n", 1541 | "tensor_1 = tf.constant([1,2,3])\n", 1542 | "tensor_2 = tf.constant([4,5,6])" 1543 | ] 1544 | }, 1545 | { 1546 | "cell_type": "code", 1547 | "execution_count": null, 1548 | "id": "tASu1BD7o-mN", 1549 | "metadata": { 1550 | "colab": { 1551 | "base_uri": "https://localhost:8080/" 1552 | }, 1553 | "id": "tASu1BD7o-mN", 1554 | "outputId": "38654134-1c32-47bb-8282-08855eab6755" 1555 | }, 1556 | "outputs": [ 1557 | { 1558 | "name": "stdout", 1559 | "output_type": "stream", 1560 | "text": [ 1561 | "tf.Tensor([5 6 7], shape=(3,), dtype=int32)\n" 1562 | ] 1563 | } 1564 | ], 1565 | "source": [ 1566 | "# Adding a scalar value to a tensor\n", 1567 | "\n", 1568 | "print(tensor_1 + 4)" 1569 | ] 1570 | }, 1571 | { 1572 | "cell_type": "code", 1573 | "execution_count": null, 1574 | "id": "OzgmVOrApQgn", 1575 | "metadata": { 1576 | "colab": { 1577 | "base_uri": "https://localhost:8080/" 1578 | }, 1579 | "id": "OzgmVOrApQgn", 1580 | "outputId": "10630f7d-bde6-49e2-c2f6-6d36c4b3aef3" 1581 | }, 1582 | "outputs": [ 1583 | { 1584 | "name": "stdout", 1585 | "output_type": "stream", 1586 | "text": [ 1587 | "tf.Tensor([5 7 9], shape=(3,), dtype=int32)\n" 1588 | ] 1589 | } 1590 | ], 1591 | "source": [ 1592 | "# Adding two tensors\n", 1593 | "\n", 1594 | "print(tensor_1 + tensor_2)" 1595 | ] 1596 | }, 1597 | { 1598 | "cell_type": "code", 1599 | "execution_count": null, 1600 | "id": "33fQGz5Rp93j", 1601 | "metadata": { 1602 | "colab": { 1603 | "base_uri": "https://localhost:8080/" 1604 | }, 1605 | "id": "33fQGz5Rp93j", 1606 | "outputId": "c57051b0-8333-41e4-cbfb-577610c4aa8c" 1607 | }, 1608 | "outputs": [ 1609 | { 1610 | "name": "stdout", 1611 | "output_type": "stream", 1612 | "text": [ 1613 | "tf.Tensor([5 7 9], shape=(3,), dtype=int32)\n" 1614 | ] 1615 | } 1616 | ], 1617 | "source": [ 1618 | "# Can also add with tf.add() or tf.math.add()\n", 1619 | "\n", 1620 | "print(tf.add(tensor_1, tensor_2))" 1621 | ] 1622 | }, 1623 | { 1624 | "cell_type": "code", 1625 | "execution_count": null, 1626 | "id": "GHomJRJUpcwx", 1627 | "metadata": { 1628 | "colab": { 1629 | "base_uri": "https://localhost:8080/" 1630 | }, 1631 | "id": "GHomJRJUpcwx", 1632 | "outputId": "405d4223-ff55-40b7-f935-20480438b11f" 1633 | }, 1634 | "outputs": [ 1635 | { 1636 | "name": "stdout", 1637 | "output_type": "stream", 1638 | "text": [ 1639 | "tf.Tensor([ 4 10 18], shape=(3,), dtype=int32)\n" 1640 | ] 1641 | } 1642 | ], 1643 | "source": [ 1644 | "# Multiplying tensors with tf.multiply()\n", 1645 | "\n", 1646 | "print(tf.multiply(tensor_1, tensor_2))" 1647 | ] 1648 | }, 1649 | { 1650 | "cell_type": "markdown", 1651 | "id": "LP_-cNpQqZ_s", 1652 | "metadata": { 1653 | "id": "LP_-cNpQqZ_s" 1654 | }, 1655 | "source": [ 1656 | "You can learn more at the official docs, the [`tf.math`](https://www.tensorflow.org/api_docs/python/tf/math) module specifically. Almost all math operations can be performed on tensors." 1657 | ] 1658 | }, 1659 | { 1660 | "cell_type": "markdown", 1661 | "id": "FCscqPnoqxPN", 1662 | "metadata": { 1663 | "id": "FCscqPnoqxPN" 1664 | }, 1665 | "source": [ 1666 | "\n", 1667 | "\n", 1668 | "### 4.
7 Manipulating the Shape of a Tensor" 1669 | ] 1670 | }, 1671 | { 1672 | "cell_type": "markdown", 1673 | "id": "KNzd4_Frq-Cc", 1674 | "metadata": { 1675 | "id": "KNzd4_Frq-Cc" 1676 | }, 1677 | "source": [ 1678 | "There are times when you will want to reshape a tensor. Here is how to go about it." 1679 | ] 1680 | }, 1681 | { 1682 | "cell_type": "code", 1683 | "execution_count": null, 1684 | "id": "e-AyfugirGDA", 1685 | "metadata": { 1686 | "colab": { 1687 | "base_uri": "https://localhost:8080/" 1688 | }, 1689 | "id": "e-AyfugirGDA", 1690 | "outputId": "f634a3ee-77ae-4f48-8dea-609005e27e7a" 1691 | }, 1692 | "outputs": [ 1693 | { 1694 | "name": "stdout", 1695 | "output_type": "stream", 1696 | "text": [ 1697 | "tf.Tensor(\n", 1698 | "[[1 3]\n", 1699 | " [3 4]\n", 1700 | " [4 5]], shape=(3, 2), dtype=int32)\n" 1701 | ] 1702 | } 1703 | ], 1704 | "source": [ 1705 | "print(example_tensor)" 1706 | ] 1707 | }, 1708 | { 1709 | "cell_type": "markdown", 1710 | "id": "y6xE4VaNrP7G", 1711 | "metadata": { 1712 | "id": "y6xE4VaNrP7G" 1713 | }, 1714 | "source": [ 1715 | "Let's reshape the above tensor into `(2,3)`." 1716 | ] 1717 | }, 1718 | { 1719 | "cell_type": "code", 1720 | "execution_count": null, 1721 | "id": "-1-Ed2bgrLB-", 1722 | "metadata": { 1723 | "id": "-1-Ed2bgrLB-" 1724 | }, 1725 | "outputs": [], 1726 | "source": [ 1727 | "tens_reshaped = tf.reshape(example_tensor, [2,3])" 1728 | ] 1729 | }, 1730 | { 1731 | "cell_type": "code", 1732 | "execution_count": null, 1733 | "id": "BwfGZSqWrmJt", 1734 | "metadata": { 1735 | "colab": { 1736 | "base_uri": "https://localhost:8080/" 1737 | }, 1738 | "id": "BwfGZSqWrmJt", 1739 | "outputId": "8310b112-53fb-4478-d62f-1cf6ff812184" 1740 | }, 1741 | "outputs": [ 1742 | { 1743 | "name": "stdout", 1744 | "output_type": "stream", 1745 | "text": [ 1746 | "tf.Tensor(\n", 1747 | "[[1 3 3]\n", 1748 | " [4 4 5]], shape=(2, 3), dtype=int32)\n" 1749 | ] 1750 | } 1751 | ], 1752 | "source": [ 1753 | "print(tens_reshaped)" 1754 | ] 1755 | }, 1756 | { 1757 | "cell_type": "code", 1758 | "execution_count": null, 1759 | "id": "IxAznPNRrwn1", 1760 | "metadata": { 1761 | "colab": { 1762 | "base_uri": "https://localhost:8080/" 1763 | }, 1764 | "id": "IxAznPNRrwn1", 1765 | "outputId": "143f00b6-3c7f-4381-e9e7-906aa9d853ed" 1766 | }, 1767 | "outputs": [ 1768 | { 1769 | "name": "stdout", 1770 | "output_type": "stream", 1771 | "text": [ 1772 | "tf.Tensor(\n", 1773 | "[[1]\n", 1774 | " [3]\n", 1775 | " [3]\n", 1776 | " [4]\n", 1777 | " [4]\n", 1778 | " [5]], shape=(6, 1), dtype=int32)\n" 1779 | ] 1780 | } 1781 | ], 1782 | "source": [ 1783 | "# Also to (6,1)\n", 1784 | "\n", 1785 | "print(tf.reshape(example_tensor, [6,1]))" 1786 | ] 1787 | }, 1788 | { 1789 | "cell_type": "code", 1790 | "execution_count": 99, 1791 | "id": "FEAp1dU8sCMs", 1792 | "metadata": { 1793 | "colab": { 1794 | "base_uri": "https://localhost:8080/" 1795 | }, 1796 | "id": "FEAp1dU8sCMs", 1797 | "outputId": "39affc67-c8b2-4298-ef74-9a74fb3eee57" 1798 | }, 1799 | "outputs": [ 1800 | { 1801 | "name": "stdout", 1802 | "output_type": "stream", 1803 | "text": [ 1804 | "[3, 2]\n" 1805 | ] 1806 | } 1807 | ], 1808 | "source": [ 1809 | "# You can also get the shape of a tensor as a Python list\n", 1810 | "\n", 1811 | "print(example_tensor.shape.as_list())" 1812 | ] 1813 | }, 1814 | { 1815 | "cell_type": "code", 1816 | "execution_count": 100, 1817 | "id": "lZH3nEBdsibi", 1818 | "metadata": { 1819 | "colab": { 1820 | "base_uri": "https://localhost:8080/" 1821 | }, 1822 | "id": "lZH3nEBdsibi", 1823 | "outputId": "19ccb6db-b6f8-4abb-a4cc-14638f34c5ec" 
1824 | }, 1825 | "outputs": [ 1826 | { 1827 | "name": "stdout", 1828 | "output_type": "stream", 1829 | "text": [ 1830 | "tf.Tensor([1 3 3 4 4 5], shape=(6,), dtype=int32)\n" 1831 | ] 1832 | } 1833 | ], 1834 | "source": [ 1835 | "# You can also flatten a tensor\n", 1836 | "\n", 1837 | "print(tf.reshape(example_tensor, [-1]))" 1838 | ] 1839 | }, 1840 | { 1841 | "cell_type": "markdown", 1842 | "id": "pJPpqTGgs8OR", 1843 | "metadata": { 1844 | "id": "pJPpqTGgs8OR" 1845 | }, 1846 | "source": [ 1847 | "There are rules to reshaping a tensor: the new shape has to be reasonable, holding exactly the same total number of elements as the original. The example below would raise an error because there is no way to reshape the six elements of `example_tensor` into a `(5,5)` shape, which needs 25." 1848 | ] 1849 | }, 1850 | { 1851 | "cell_type": "code", 1852 | "execution_count": 101, 1853 | "id": "TICgTu35tSn1", 1854 | "metadata": { 1855 | "id": "TICgTu35tSn1" 1856 | }, 1857 | "outputs": [], 1858 | "source": [ 1859 | "# Running the cell below will create an error\n", 1860 | "\n", 1861 | "# print(tf.reshape(example_tensor, [5,5]))" 1862 | ] 1863 | }, 1864 | { 1865 | "cell_type": "markdown", 1866 | "id": "oP6_2Vpltfpn", 1867 | "metadata": { 1868 | "id": "oP6_2Vpltfpn" 1869 | }, 1870 | "source": [ 1871 | "*******************" 1872 | ] 1873 | }, 1874 | { 1875 | "cell_type": "markdown", 1876 | "id": "6kIIvdnLt0L_", 1877 | "metadata": { 1878 | "id": "6kIIvdnLt0L_" 1879 | }, 1880 | "source": [ 1881 | "That's it for the introduction to TensorFlow and the basics of tensors. 'The TensorFlow API revolves around tensors' `(CC: Aurélien Géron)`, and that is why we did not jump straight into doing big things with the TF high-level API before looking into what makes them possible. " 1882 | ] 1883 | }, 1884 | { 1885 | "cell_type": "markdown", 1886 | "id": "i52l7HmBuSdi", 1887 | "metadata": { 1888 | "id": "i52l7HmBuSdi" 1889 | }, 1890 | "source": [ 1891 | "## [BACK TO TOP](#0)" 1892 | ] 1893 | }, 1894 | { 1895 | "cell_type": "code", 1896 | "execution_count": null, 1897 | "id": "88rLNK1KuUkY", 1898 | "metadata": { 1899 | "id": "88rLNK1KuUkY" 1900 | }, 1901 | "outputs": [], 1902 | "source": [] 1903 | } 1904 | ], 1905 | "metadata": { 1906 | "colab": { 1907 | "collapsed_sections": [], 1908 | "name": "7.2 Intro to TensorFlow for Deep Learning.ipynb", 1909 | "provenance": [] 1910 | }, 1911 | "kernelspec": { 1912 | "display_name": "Python 3.7.10 64-bit ('tensor': conda)", 1913 | "language": "python", 1914 | "name": "python3710jvsc74a57bd034ac5db714c5906ee087fcf6e2d00ee4febf096586592b6ba3662ed3b7e7a5f6" 1915 | }, 1916 | "language_info": { 1917 | "codemirror_mode": { 1918 | "name": "ipython", 1919 | "version": 3 1920 | }, 1921 | "file_extension": ".py", 1922 | "mimetype": "text/x-python", 1923 | "name": "python", 1924 | "nbconvert_exporter": "python", 1925 | "pygments_lexer": "ipython3", 1926 | "version": "3.7.10" 1927 | } 1928 | }, 1929 | "nbformat": 4, 1930 | "nbformat_minor": 5 1931 | } 1932 | -------------------------------------------------------------------------------- /Readme.md: -------------------------------------------------------------------------------- 1 | # Deep Learning with TensorFlow and Keras 2 | **************** 3 | 4 | ***Techniques, tools, best practices and everything you need to learn/build effective Deep Learning Systems with TensorFlow for Computer Vision and Natural Language Processing!*** 5 | 6 | ![tools](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/images/tf_cover_image.png) 7 | 8 | 9 | This is a comprehensive repository containing end-to-end notebooks for basic neural 
network tasks, computer vision and Natural Language Processing (NLP). 10 | 11 | 12 | All notebooks were created with the readers in mind. Every notebook starts with a high-level overview of the specific algorithms/concepts being covered. Wherever possible, visuals are used to make things clear. 13 | 14 | 15 | ## Viewing and Running the Notebooks 16 | 17 | The easiest way to view all the notebooks is to use Nbviewer. 18 | 19 | * Render nbviewer 20 | 21 | If you want to play with the code, you can use the following platforms: 22 | 23 | * Open In Colab 24 | 25 | * Launch in Deepnote 26 | 27 | *Deepnote will direct you to `Intro to Artificial Neural Networks`. Head to the project sidebar for more notebooks.* 28 | 29 | 30 | ## Tools Overview 31 | 32 | 33 | TensorFlow is a popular deep learning framework used for building models suitable for different fields such as Computer Vision and Natural Language Processing. 34 | 35 | TensorFlow provides Keras, a high-level and well-designed API for building neural networks easily. 36 | 37 | TensorFlow has gained a lot of popularity in the machine learning community thanks to its complete ecosystem of tools, including TensorBoard, TF Datasets, TensorFlow Lite, TensorFlow Extended, TensorFlow.js, and more. 38 | 39 | 40 | ## Outline - Deep Learning with TensorFlow 41 | 42 | ### 1 - Intro to Artificial Neural Networks and TensorFlow 43 | 44 | * [Intro to Artificial Neural Networks](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/1_Intro_to_neural_networks_with_tensorflow/1_intro_to_neural_networks.ipynb) 45 | 46 | * Why Deep Learning 47 | * A Single Layer Neural Network 48 | * Activation Functions 49 | * Types of Deep Learning Architectures 50 | * Densely Connected Networks 51 | * Convolutional Neural Networks 52 | * Recurrent Neural Networks 53 | * Transformers 54 | 55 | * Challenges in Training Deep Neural Networks 56 | 57 | * [Intro to TensorFlow for Artificial Neural Networks](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/1_Intro_to_neural_networks_with_tensorflow/2_intro_to_tensorflow_for_deeplearning.ipynb) 58 | 59 | * What is TensorFlow? 60 | * TensorFlow Model APIs 61 | * A Quick Tour of the TensorFlow Ecosystem 62 | * Basics of Tensors 63 | 64 | * [Neural Networks for Regression with TensorFlow](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/1_Intro_to_neural_networks_with_tensorflow/3_neural_networks_for_regresion_with_tensorflow.ipynb) 65 | 66 | * [Neural Networks for Classification with TensorFlow](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/1_Intro_to_neural_networks_with_tensorflow/4_neural_networks_for_classification_with_tensorflow.ipynb) 67 | 68 | 69 | ### 2 - Deep Computer Vision with TensorFlow 70 | 71 | * [Intro to Computer Vision with Convolutional Neural Networks (CNN)](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/2_deep_computer_vision_with_tensorflow/1_intro_to_computer_vision_and_cnn.ipynb) 72 | 73 | * Intro to Computer Vision and CNNs 74 | * What are Convolutional Neural Networks? 
75 | * A Typical Architecture of Convolutional Neural Networks 76 | * Coding ConvNets: Image Classification 77 | 78 | * [ConvNets for Real World Data and Image Augmentation](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/2_deep_computer_vision_with_tensorflow/2_cnn_for_real_world_data_and_image_augmentation.ipynb) 79 | 80 | * Intro - Real World Datasets and Data Augmentation 81 | * Getting Started: Real World Datasets and Overfitting 82 | * Image Augmentation with Keras Image Augmentation Layers 83 | * [CNN Architectures and Transfer Learning](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/2_deep_computer_vision_with_tensorflow/3_cnn_architectures_and_transfer_learning.ipynb) 84 | 85 | * Looking Back: A Review of State-of-the-Art CNN Architectures 86 | * Intro to Transfer Learning and Using Pretrained Models 87 | * Quick Image Classification with Pretrained Models 88 | * Transfer Learning and Fine-Tuning in Practice 89 | * Quick Image Classification and Transfer Learning with TensorFlow Hub 90 | 91 | ### 3 - Natural Language Processing with TensorFlow 92 | 93 | * [Intro to NLP and Text Processing with TensorFlow](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/3_nlp_with_tensorflow/1_intro_to_nlp_and_text_preprocessing.ipynb) 94 | 95 | * Intro to Natural Language Processing 96 | * Text Processing with TensorFlow 97 | * Using the TextVectorization Layer 98 | * [Using Word Embeddings to Represent Texts](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/3_nlp_with_tensorflow/2_using_word_embeddings_to_represent_texts.ipynb) 99 | 100 | * Intro to Word Embeddings 101 | * Embeddings In Practice 102 | * Using Pretrained Embeddings 103 | * [Recurrent Neural Networks (RNNs)](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/3_nlp_with_tensorflow/3_recurrent_neural_networks.ipynb) 104 | 105 | * Intro to Recurrent Neural Networks 106 | * Simple RNNs In Practice: Movie Sentiment Analysis 107 | * Intro to Long Short-Term Memory (LSTM) Networks 108 | * LSTMs in Practice: News Classification 109 | 110 | * [Using Convolutional Neural Networks for Text Classification](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/3_nlp_with_tensorflow/4_using_cnns_and_rnns_for_texts_classification.ipynb) 111 | 112 | * Intro to Convolutional Neural Networks for Text 113 | * CNNs for Text in Practice: News Classification 114 | * Combining ConvNets and RNNs 115 | 116 | * [Using Pretrained BERT for Text Classification](https://github.com/Nyandwi/deep_learning_with_tensorflow/blob/master/3_nlp_with_tensorflow/5_using_pretrained_bert_for_text_classification.ipynb) 117 | 118 | * Intro to BERT 119 | * In Practice: Fine-tuning a Pretrained BERT 120 | 121 | ## Used Datasets 122 | 123 | Many of the datasets used in this repository are from the following sources: 124 | 125 | * [OpenML](https://www.openml.org) 126 | * [Kaggle](https://www.kaggle.com/datasets) 127 | * [TensorFlow datasets](https://www.tensorflow.org/datasets/catalog/overview) 128 | 129 | 130 | ******** 131 | 132 | This repository was created by Jean de Dieu Nyandwi. 
You can find him on: 133 | * [Twitter](https://twitter.com/jeande_d) 134 | * [LinkedIn](https://linkedin.com/in/nyandwi) 135 | * [Medium](https://jeande.medium.com) 136 | * [Hashnode](https://jeande.tech) 137 | * [Instagram](https://instagram.com/jeande_d) 138 | * [Newsletter: Deep Learning Revision](https://www.getrevue.co/profile/deepyearning) 139 | 140 | 141 | ### *If you find any of this helpful, shoot him a [tweet](https://twitter.com/jeande_d) or a mention :)* 142 | 143 | -------------------------------------------------------------------------------- /images/tf_cover_image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Nyandwi/deep_learning_with_tensorflow/cd36a5a03dbccba1e32dbdcf3f5034b3364d7b98/images/tf_cover_image.png --------------------------------------------------------------------------------