}$ | In sequence models, represents the output value of the tth element in the ith training example\n",
110 | "$T_X^{(i)}$ | In sequence models, represents the length of the input sequence in the ith training example\n",
111 | "$T_Y^{(i)}$ | In sequence models, represents the length of the output sequence in the ith training example\n",
112 | "\n"
113 | ]
114 | },
115 | {
116 | "metadata": {
117 | "id": "cyt39ZFv_u1-",
118 | "colab_type": "text"
119 | },
120 | "cell_type": "markdown",
121 | "source": [
122 | "## Logistic Regression, the building block of Deep Neural Nets\n",
123 | "\n",
124 | "It is a linear model for classification. The goal of the model is to predict probabilities of output labels for a given input.\n",
125 | "$$z= w^T x+b$$\n",
126 | "$$a=\\sigma(z)$$\n",
127 | "$$\\hat{y} = a$$\n",
128 | "\n",
129 | "Cross entropy loss for finding out how good the predictions are for a single training example,\n",
130 | "$$L(\\hat{y},y)= -(y log(\\hat{y}) + (1-y)log(\\hat{y}))$$\n",
131 | "\n",
132 | "Cost function for all examples,\n",
133 | "$$J= -\\frac{1}{m} \\sum_{i=1}^mL(\\hat{y}^{(i)} ,y^{(i)})$$\n",
134 | "This $J$ is used by an optimization algorithm (like gradient descend) to find optimal values for $w$ and $b$.\n"
135 | ]
136 | },
137 | {
138 | "metadata": {
139 | "id": "CApcuF65BuPL",
140 | "colab_type": "text"
141 | },
142 | "cell_type": "markdown",
143 | "source": [
144 | "## Shallow Neural Nets\n",
145 | "\n",
146 | "In logistic regression $z$ and $a$ are computed to obtain prediction for each training example. In a shallow neural net, this process is repeated twice before predicting the output label. In logistic regression,\n",
147 | "\n",
148 | "\n",
149 | "\n",
150 | "whereas in a shallow net,\n",
151 | "\n",
152 | "\n",
153 | "\n",
154 | "[1] and [2] are layers in the network. Layer [1] is a hidden layer as it is neither the input nor output. Layer [1] has three (hidden) units / neurons and layer [2] has one unit. The prediction for a training example $x$, is as follows in a shallow neural net,\n",
155 | "\n",
156 | "$$z^{[1]}= w^{[1]} x+b^{[1]}$$\n",
157 | "$$a^{[1]} = \\sigma(z^{[1]})$$\n",
158 | "$$z^{[2]} = w^{[2]}a^{[1]} +b^{[2]}$$\n",
159 | "$$\\hat{y}=a^{[2]} =\\sigma(z^{[2]})$$\n",
160 | "\n",
161 | "This process is extended to all training examples to obtain $Z^{[1]}$, $Z^{[2]}$, $A^{[1]}$, $A^{[2]}$, $\\hat{Y}$. If this process is extended to more than 2 hidden layers it is called a deep neural net!\n"
162 | ]
163 | },
164 | {
165 | "metadata": {
166 | "id": "gRTDte3UW3Dd",
167 | "colab_type": "text"
168 | },
169 | "cell_type": "markdown",
170 | "source": [
171 | "## Activation functions\n",
172 | "\n",
173 | "- sigmoid, $\\sigma(z)= \\frac{1}{1+e^{-z}}$\n",
174 | " - σ(z) lies in between (0, 1)\n",
175 | " - generally used for binary classification tasks in the last layer\n",
176 | "- $tanh(z)= \\frac{e^z - e^{-z}}{e^z + e^{-z}}$\n",
177 | " - $tanh(z)$ lies in between (-1, 1)\n",
178 | " - the graph is centered at 0, unlike sigmoid\n",
179 | "- $ReLU(z) = max (0,z)$\n",
180 | " - both sigmoid and tanh slow down learning when $z$ is too small or high\n",
181 | " - neural net learns much faster when compared to sigmoid or tanh \n",
182 | " - generally used in the hidden layers\n",
183 | "- $Leaky\\ ReLU(z) = max(0.01z,z)$"
184 | ]
185 | },
186 | {
187 | "metadata": {
188 | "id": "CnlEfi4HYPzG",
189 | "colab_type": "text"
190 | },
191 | "cell_type": "markdown",
192 | "source": [
193 | "## Deep Neural Nets\n",
194 | "\n",
195 | "Simply put, it is a neural network with multiple hidden layers. The number of layers $L$ and number of units in each layer are hyperparameters decided before training.\n",
196 | "\n",
197 | "\n",
198 | "
\n",
199 | "\n",
200 | "\n",
201 | " \n",
202 | " Figure 1: \n",
203 | " A 4 layer, fully connected deep neural network\n",
204 | " \n",
205 | "\n",
206 | "\n",
207 | "The above network has $L = 4$, $n^{[1]} = 3$, $n^{[2]} = 4$, $n^{[3]} = 3$ and $n^{[4]}=1$. $\\hat{Y} = A^L$ is the result for all training examples. $X = A^{[0]}$ is computed as,\n",
208 | "\n",
209 | "$$Z^{[1]} = W^{[1]} A^{[0]} + b^{[1]}$$\n",
210 | "\n",
211 | "$$A^{[1]} = g^{[1]}( Z^{[1]})$$\n",
212 | "\n",
213 | "Similarly, the process is repeated for layers [2], [3] and [4]\n",
214 | "$$\\hat{Y}= A^{[L=4]}= g^{[4]}( Z^{[4]})$$\n",
215 | "\n",
216 | "Here $g^{[l]}$ is the activation function used in layer $l$. When implemented with numpy vectors, all computations are parallelized across training examples and is called a vectorized implementation. Without vectorization, the neural net has to loop over training examples one by one to complete one epoch of training which slows down learning.\n",
217 | "\n",
218 | "Each training example $x^{(i)}$, is passed through the net to obtain the prediction $\\hat{y}^{(i)}$ from the last layer. This step is called **forward propagation** in the entire process. $\\hat{y}^{(i)}$ is compared with $y^{(i)}$ using $J$ to obtain the error in prediction. This error is passed back from layer $[L]$ to $[L-1]$ to $[L-2]$ and so on to $[1]$ to adjust $W^{[l]}$ , $b^{[l]}$ at each layer so that the next prediction causes smaller error. This step of passing back the error is called **back propagation** in the entire process. Every time error is passed back, the amount of change the system makes to the parameters $W^{[l]}$, $b^{[l]}$ is governed by a hyperparameter called learning rate, $\\alpha$.\n"
219 | ]
220 | },
221 | {
222 | "metadata": {
223 | "id": "RTdp6tJvicA2",
224 | "colab_type": "text"
225 | },
226 | "cell_type": "markdown",
227 | "source": [
228 | "### Dimensionality checks\n",
229 | "\n",
230 | "These formulae can help debug [dimensions](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.shape.html) of various matrices during implementing deep neural nets\n",
231 | "- $w^{[l]}.shape = (n^{[l]}, n^{[l-1]})$\n",
232 | "- $b^{[l]}.shape = (n^{[l]}, 1)$\n",
233 | "- $A^{[l]}.shape = Z^{[l]}.shape = (n^{[l]}, m)$\n"
234 | ]
235 | },
236 | {
237 | "metadata": {
238 | "id": "xTONqYzKjHfX",
239 | "colab_type": "text"
240 | },
241 | "cell_type": "markdown",
242 | "source": [
243 | "### Hyperparameters to choose\n",
244 | "\n",
245 | "$W^{[l]}$, $b^{[l]}$ are parameters of the neural net and are learned during the training phase. Hyperparameters are manually set by the developer before training.\n",
246 | "\n",
247 | "- learning rate alpha, $\\alpha$ - the rate at which parameters are updated to bring the predictions close to actual values\n",
248 | "- number of epochs – After training with the entire training data once, one epoch is completed. This parameter controls how many times this should be repeated.\n",
249 | "- hidden layers, L – how many hidden layers in the Deep Neural Net (DNN)\n",
250 | "- hidden units per layer – values for $n^{[1]}$, $n^{[2]}$, $n^{[3]}$,…, $n^{[L]}$\n",
251 | "- activation functions – [activation function](#activation-functions) to use in each layer, $g^{[1]}$, $g^{[2]}$, $g^{[3]}$,…, $g^{[L]}$\n"
252 | ]
253 | },
254 | {
255 | "metadata": {
256 | "id": "XZ6cdSdH8LEP",
257 | "colab_type": "text"
258 | },
259 | "cell_type": "markdown",
260 | "source": [
261 | "## Optimizing Deep Neural Networks"
262 | ]
263 | },
264 | {
265 | "metadata": {
266 | "id": "gEzV5faK67hu",
267 | "colab_type": "text"
268 | },
269 | "cell_type": "markdown",
270 | "source": [
271 | "### Data splitting – All data from same distribution\n",
272 | "\n",
273 | "All the available labelled data is split into,\n",
274 | "\n",
275 | "
\n",
276 | "\n",
277 | "-\tTrain data – Majority of the data is used for training\n",
278 | "-\tDev data – Also called validation set / data. Used for validating the model and hyperparameter tuning\n",
279 | "-\tTest data – Used for validating the final chosen model\n"
280 | ]
281 | },
282 | {
283 | "metadata": {
284 | "id": "5tgmguu78Gc_",
285 | "colab_type": "text"
286 | },
287 | "cell_type": "markdown",
288 | "source": [
289 | "#### Error Types\n",
290 | "\n",
291 | "As shown in Figure 2, a DNN has a train and dev error besides the test error\n",
292 | "\n",
293 | "- Avoidable bias – difference between human error (the benchmark many a times) and training error. Possible solutions to reduce this are:\n",
294 | " - Train on a bigger network (increase $L$ or $n^{[l]}$)\n",
295 | " - Increase number of epochs\n",
296 | " - Change network architecture\n",
297 | "- Variance – difference between training error and dev error. This happens due to overfitting to training data. Possible solutions to reduce this are:\n",
298 | " - Train on more data\n",
299 | " - Regularization\n",
300 | " - Change network architecture\n",
301 | " \n",
302 | "\n",
303 | "
\n",
304 | "\n",
305 | "\n",
306 | " \n",
307 | " Figure 2: \n",
308 | " Range of each error\n",
309 | " \n",
310 | ""
311 | ]
312 | },
313 | {
314 | "metadata": {
315 | "id": "tKy-BC7u_k_V",
316 | "colab_type": "text"
317 | },
318 | "cell_type": "markdown",
319 | "source": [
320 | "### Data Splitting – Data from different distributions\n",
321 | "\n",
322 | "Ideally train, dev and test sets should be from the same data distribution for best results. But sometimes big enough data might not be available for performing a deep learning experiment. For example, for creating a DNN to classify 100 pictures of your 2 cats, training on cat pictures from internet and testing on your 100 cat pictures may not yield good results as data distributions are different. In such situations,\n",
323 | "\n",
324 | "\n",
325 | "
\n",
326 | "\n",
327 | "\n",
328 | "split the available 100 cat pictures 50-50. Mix the Train (50) pictures with internet pictures like so\n",
329 | "\n",
330 | "\n",
331 | "
\n",
332 | "\n",
333 | "\n",
334 | "As the train and dev data are different distributions, comparing the training and dev errors does not clarify if it is due to high variance or due to data mismatch. Hence, the train data is split into train and training-dev after mixing your 50 cat pictures.\n",
335 | "\n",
336 | "\n",
337 | "
\n",
338 | "\n",
339 | "\n",
340 | "Now as train and training-dev sets are from same distribution, it can be understood the root cause of the problem as either bias or variance or data mismatch.\n",
341 | "\n",
342 | "\n",
343 | "
\n",
344 | "\n",
345 | "\n",
346 | " \n",
347 | " Figure 3: \n",
348 | " Range of errors when not all data is from same distribution\n",
349 | " \n",
350 | "\n",
351 | "\n",
352 | "As shown in Figure 3, as training-dev set and dev-set are from different data distributions, the difference between their errors is due to data mismatch.\n"
353 | ]
354 | },
355 | {
356 | "metadata": {
357 | "id": "PiHZj81vFAGg",
358 | "colab_type": "text"
359 | },
360 | "cell_type": "markdown",
361 | "source": [
362 | "### Regularization\n",
363 | "\n",
364 | "When the neural net over fits (high variance) the model to training data, predictions on unseen dev set can be poor. Regularization reduces the impact of (various) neurons in the model so that it can generalize better to unseen inputs. lambda $\\lambda$, is the hyperparameter which controls the amount of regularization used in L1 and L2 algorithms. Here are some algorithms / ideas for regularization,\n",
365 | "\n",
366 | "- L1 – Uses L1-norm to penalize $W$’s\n",
367 | "- L2 – Uses L2-norm to penalize $W$’s\n",
368 | "- Dropout – Randomly zeros (drops) some neurons from the network thus making it simpler and generalize better. $keep\\_prob$ is the hyperparamater which is the probability of retaining a neuron. Different layers can have different values of $keep\\_prob$ based on density of connections\n",
369 | "- Data augmentation – transform, randomly crop and translate input training images\n",
370 | "- Early stopping – after every epoch compute dev error and once it starts increasing, stop the training though training error continues to decrease (sign for overfitting)\n"
371 | ]
372 | },
373 | {
374 | "metadata": {
375 | "id": "4WC8LRtMFvqW",
376 | "colab_type": "text"
377 | },
378 | "cell_type": "markdown",
379 | "source": [
380 | "### Normalization\n",
381 | "\n",
382 | "Normalize input features with varying ranges to learn faster. Normalizing, sets $\\mu=0$ and $\\sigma^2=1$ for all training examples.\n",
383 | "\n",
384 | "- Batch normalization – the idea of normalizing inputs is extended to all layers. $z^{[l]}$ is normalized before applying the activation function. The flow of parameters would then be,\n",
385 | "\n",
386 | "$$X \\xrightarrow{W^{[1]}, b^{[1]}} Z^{[1]} \\xrightarrow{\\beta^{[1]},\\gamma^{[1]}} \\tilde{Z}^{[1]} \\to a^{[1]} = g( \\tilde{Z}^{[1]}) \\xrightarrow{W^{[2]}, b^{[2]}} Z^{[2]}…$$\n",
387 | "\n",
388 | "> $\\tilde{Z}^{[1]}$ is the normalized $Z^{[1]}$ computed using parameters $\\beta^{[1]}$, $\\gamma^{[1]}$. Just like $W^{[l]}$ and $b^{[l]}$ are parameters that are learned during training, $\\beta^{[l]}$ and $\\gamma^{[l]}$ are too. \n",
389 | "\n",
390 | "> In case of mini-batch gradient descend, exponential weighted averages of $\\mu$ and $\\sigma^2$ across batches are saved during training. These are used to compute $\\tilde{Z}^{[l]\\{t\\}})$ during inference time.\n"
391 | ]
392 | },
393 | {
394 | "metadata": {
395 | "id": "mrc6yyA3J42n",
396 | "colab_type": "text"
397 | },
398 | "cell_type": "markdown",
399 | "source": [
400 | "### Train faster and better\n",
401 | "\n",
402 | "- Mini-batch gradient descend – if the training set size is huge, models learn better, but each epoch takes longer. In mini-batch gradient descend, the inputs are sliced into batches and a step is taken by gradient descend after training on a mini-batch. Mini-batch size is generally chosen in between 1 and $m$ to take advantage of both vectorization and quicker steps. Typical batch sizes are 64, 128, 256 or 512 training examples, such that each mini-batch fits in memory of CPU / GPU\n",
403 | "\n",
404 | "- Gradient descend with Momentum – mini-batch gradient descend introduces oscillations which may slow down reaching the optimum. Momentum solves this problem by adding a moving average like affect and dampening the oscillations to reach the optimum faster. Momentum $\\beta$, controls the size of the sliding window $\\approx \\frac{1}{1- β}$\n",
405 | "\n",
406 | "- RMS Prop – Guides the gradient descend algorithm towards the minimum by taking longer steps in the dimensions farther away from minimum and smaller steps in the dimensions closer to minimum. $\\beta_2$ and $\\epsilon$ are hyperparameters for this optimization. $\\epsilon$ is not so important and is added only to avoid division by zero error and is generally set to $10^{-8}$\n",
407 | "\n",
408 | "- Adam – combines ideas from gradient descend with momentum and RMS prop and uses $\\beta$, $\\beta_2$ and $\\epsilon$ as hyperparameters\n",
409 | "\n",
410 | "- Learning rate decay – mini-batch gradient descend adds oscillations around the minimum. Adding a decay to learning rate converges better. So $\\alpha$ is no longer a constant and becomes\n",
411 | "\n",
412 | "$$\\alpha=\\frac{1}{(1 + decay\\_rate \\times epoch\\_number) \\times \\alpha_0}$$\n"
413 | ]
414 | },
415 | {
416 | "metadata": {
417 | "id": "Z5OheTD8L82x",
418 | "colab_type": "text"
419 | },
420 | "cell_type": "markdown",
421 | "source": [
422 | "### Hyperparameter Tuning\n",
423 | "\n",
424 | "As there are many hyperparameters to set before training, it is important to realize that not all of them are equally important. For example, $\\alpha$ is more important $\\lambda$, so fine tuning $\\alpha$ first is better. Some approaches for tuning a hyperparameter are,\n",
425 | "\n",
426 | "- Grid based search – create a table of combinations of hyperparameter 1 and 2 values. For each combination evaluate on dev set to find the best combination\n",
427 | "\n",
428 | "- Random based search – randomly select combinations of values for hyperparameters 1 and 2. For each combination evaluate on dev set to find the best combination. After performing a random search in a broad domain of values, a more fine-grained search in the area(s) of interest using the results from coarse random search can be performed. It is important to scale the hyperparameters before selecting values uniformly at random\n",
429 | "\n",
430 | "- Panda VS Caviar approach – If the model is complex that multiple combinations cannot be tested, it is a better idea to baby sit watching how $J$ varies with time and change hyperparameter values at runtime.\n"
431 | ]
432 | },
433 | {
434 | "metadata": {
435 | "id": "RAGhT4dLMgKo",
436 | "colab_type": "text"
437 | },
438 | "cell_type": "markdown",
439 | "source": [
440 | "### Multiclass classification\n",
441 | "\n",
442 | "Softmax layer is used as the final layer to classify into $C$ classes. The activations from final layer $L$ are computed as,\n",
443 | "\n",
444 | "$$a_i^{[L]} = \\frac{t_i}{\\sum_{j=1}^C t_i}$$\n",
445 | "\n",
446 | "where $t_i = (e^{z_i})^{[L]}$\n"
447 | ]
448 | },
449 | {
450 | "metadata": {
451 | "id": "eC3vGlRyNkNl",
452 | "colab_type": "text"
453 | },
454 | "cell_type": "markdown",
455 | "source": [
456 | "### Transfer Learning\n",
457 | "\n",
458 | "Use the learned parameters from one model to another. It is done by replacing the last few layers in the original trained network. The new layers can then be trained using the new dataset of interest. This is generally applicable when features identified by initial layers of an existing model can be re-used for a another task.\n"
459 | ]
460 | },
461 | {
462 | "metadata": {
463 | "id": "CTtgXHRKNxPT",
464 | "colab_type": "text"
465 | },
466 | "cell_type": "markdown",
467 | "source": [
468 | "## Convolutional Neural Nets\n",
469 | "\n",
470 | "A class of deep neural nets for computer vision tasks. It is expected of a DNN to identify the features from $X$ without the need for hand tuning them. Therefore, in computer vision tasks images, videos are generally used as is as $X$. Without feature engineering if the image is passed as is to the network the number of parameters to learn can be quite high based on the image’s resolution. For example, if the input image is (width, height, RGB channels) = (1000, 1000, 3) dimensional, fully connecting it (as shown in Figure 1) to a layer with $n^{[1]} = 1000$, would imply $W.shape = (1000, 3\\times10^6)$, i.e. 3 billion parameters. Training so many parameters demands lot of training data and hence existing ideas from DNN are not used for computer vision applications. Therefore, a new class called Convolutional Neural Nets (CNN) is studied.\n",
471 | "\n",
472 | "It is known that earlier layers in a DNN identify simple features like edges and the later ones detect more complex shapes in a given image. The operator, convolution $\\ast$, in Mathematics solves both the above problems – identify edges in earlier layers and shapes in the later, requires fewer parameters than a fully connected DNN.\n"
473 | ]
474 | },
475 | {
476 | "metadata": {
477 | "id": "D7bmAD4VPT5s",
478 | "colab_type": "text"
479 | },
480 | "cell_type": "markdown",
481 | "source": [
482 | "### Working of a convolution operation\n",
483 | "\n",
484 | "\n",
485 | "
\n",
486 | "\n",
487 | "\n",
488 | " \n",
489 | " Figure 4: Convolution operator in action\n",
490 | " \n",
491 | " \n",
492 | " \n",
493 | " Source: Coding exercise “Convolution model - Step by Step - v2” in the course https://www.coursera.org/learn/convolutional-neural-networks/\n",
494 | " \n",
495 | "
\n",
496 | "\n",
497 | "\n",
498 | "- The number of channels (the 3rd dimension) in the input layer should match the number of dimension in convolution filter\n"
499 | ]
500 | },
501 | {
502 | "metadata": {
503 | "id": "BIDNJPYERINP",
504 | "colab_type": "text"
505 | },
506 | "cell_type": "markdown",
507 | "source": [
508 | "### Padding\n",
509 | "\n",
510 | "Due to the way $\\ast$ works, cells on the edges contribute lesser compared to inner cells in the output layer. Strip(s) of zeros are added to input layer before $\\ast$ operation which is called padding, $p$ to solve this problem. There are two types of padding,\n",
511 | "\n",
512 | "- Valid $\\implies p = 0$\n",
513 | "\n",
514 | "- Same $p = \\frac{f-1}{2}$ where $f$ is dimension of the convolution filter. More on $f$ below.\n"
515 | ]
516 | },
517 | {
518 | "metadata": {
519 | "id": "R67_XjEOR9ah",
520 | "colab_type": "text"
521 | },
522 | "cell_type": "markdown",
523 | "source": [
524 | "### Dimensionality involving a convolution operation \n",
525 | "\n",
526 | "$$(n, n, \\#channels) \\ast (f, f, \\#channels) \\to (\\lfloor \\frac{n + 2p - f}{s + 1} \\rfloor,\\lfloor \\frac{n + 2p - f}{s + 1} \\rfloor, \\#filters)$$\n",
527 | "\n",
528 | "where $n$ = dimension of input layer / image\n",
529 | "\n",
530 | "> $f$ = dimension of convolution filter\n",
531 | "\n",
532 | "> $p$ = amount of padding to input layer\n",
533 | "\n",
534 | "> $s$ = stride length of convolution filter on input layer\n",
535 | "\n",
536 | "> $\\# filters$ = number of convolution filters used on input layer\n",
537 | "\n",
538 | "For Figure 4: $n = 5, \\# channels = 1, f = 3, p = 0, s = 1, \\# filters = 1$\n"
539 | ]
540 | },
541 | {
542 | "metadata": {
543 | "id": "sbYYOyJfVglB",
544 | "colab_type": "text"
545 | },
546 | "cell_type": "markdown",
547 | "source": [
548 | "### Pooling\n",
549 | "\n",
550 | "Another type of operator like $\\ast$, which is mainly used to shrink the height and width of the input. Just like $\\ast$, pooling layers also are filters which run across the input. However, they do not have any parameters to learn.\n",
551 | "\n",
552 | "- Max Pooling – pick the max value at every position of filter on the input\n",
553 | "\n",
554 | "- Average Pooling – pick the average value at every position of filter on the input\n"
555 | ]
556 | },
557 | {
558 | "metadata": {
559 | "id": "ZyJQQ50JcETB",
560 | "colab_type": "text"
561 | },
562 | "cell_type": "markdown",
563 | "source": [
564 | "## Sequence Models - [ Work in Progress ]\n",
565 | "\n",
566 | "A class of Deep Neural Networks for modelling inputs that have an ordering or exist in sequences. For example, a sequence of words is a sentence, a sequence of air pressure values (over time) is an audio / sound. \n",
567 | "\n",
568 | "### Natural Language Processing (NLP)\n",
569 | "\n",
570 | "Sequence modelling techniques are widely used for processing Natural (Human) language. Since neural networks work on matrices of numbers a simple way to encode (English) words to numbers is to one-hot encode them. In order to do that, assuming there are 10000 words in English dictionary, every word is assigned a unique random number between (10000, 1). Then, if the word 'Aaron' is assigned number 3, its one-hot encoded matrix would be,\n",
571 | "\n",
572 | "\n",
573 | " $\n",
574 | " \\begin{bmatrix}\n",
575 | " 0 \\\\\n",
576 | " 0 \\\\\n",
577 | " 1 \\\\\n",
578 | " 0 \\\\\n",
579 | " . \\\\\n",
580 | " . \\\\\n",
581 | " . \\\\\n",
582 | " 0 \\\\\n",
583 | " \\end{bmatrix}_{10000\\times 1}\n",
584 | " $\n",
585 | "\n",
586 | "\n",
587 | "After every word is converted to its one-hot representation, the sentence is fed into a Recurrent Neural Network (RNN) as shown in Figure 5. For a given sentence say, \"Aaron bikes to school everyday\", $x^{<1>}$ would be one-hot encoding of 'Aaron', $x^{<2>}$ would be one-hot encoding of 'bikes' and so on. If the goal of the RNN is to find the parts of speech of each word in the sentence, then $y^{<1>}$ would be parts of speech for 'Aaron', $y^{<2>}$ would be parts of speech for 'bikes' and so on. The training data for this task would be, X = one-not encoded words of English sentences, and Y = parts of speech for each word for each sentence.\n",
588 | "\n",
589 | "\n",
590 | "
\n",
591 | "\n",
592 | "\n",
593 | " \n",
594 | " Figure 5: Recurrent Neural Network\n",
595 | " \n",
596 | "\n",
597 | "\n",
598 | "### Why RNNs and not just use DNNs?\n",
599 | "\n",
600 | "Just like CNNs are well suited for Computer Vision tasks, RNNs are designed for tasks which have a temporal nature. RNNs,\n",
601 | "\n",
602 | "\n",
603 | "* allow outputs to be of different lengths, i.e. $T_X \\neq T_Y$, for example in the task of Machine Translation where length of output sentence depends on input sentence\n",
604 | "* can share features learned accross different positions of text (in NLP)\n",
605 | "* have fewer parameters to compute compared to DNNs just like in CNNs\n",
606 | "\n",
607 | "[The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) blog post talks about various applications of RNNs besides NLP.\n",
608 | "\n"
609 | ]
610 | },
611 | {
612 | "metadata": {
613 | "id": "pnQIrImRM5Xn",
614 | "colab_type": "text"
615 | },
616 | "cell_type": "markdown",
617 | "source": [
618 | "### Forward Propagation in RNNs\n",
619 | "\n",
620 | "\n",
621 | " $a^{} = g_1(W_{aa}a^{} + W_{ax}x^{} + b_a)$
\n",
622 | " $\\hat{y}^{} = g_2(W_{ya}a^{} + b_y)$
\n",
623 | "\n",
624 | "\n",
625 | "where \n",
626 | "\n",
627 | "* $a^{<0>} = \\vec{0}$, usually\n",
628 | "* $W_{aa}, W_{ax}, W_{ya}, b_a$ and $b_y$ are parameters learned by gradient descend\n",
629 | "* $g_1$ is usually tanh or ReLU and $g_2$ is usually sigmoid or softmax\n",
630 | "\n",
631 | "Note that, the parameters are shared across the time steps <t>. The notation is however simplified to,\n",
632 | "\n",
633 | "\n",
634 | " $a^{} = g(W_a[a^{}, x^{}] + b_a)$
\n",
635 | " $\\hat{y}^{} = g(W_ya^{} + b_y)$
\n",
636 | "\n",
637 | "\n",
638 | "where\n",
639 | "\n",
640 | "* $W_a$ is column wise concatenated matrix of $W_{aa}$ and $W_{ax}$\n",
641 | "* $[a^{}, x^{}]$ is concatenated row wise i.e.\n",
642 | "\n",
643 | "\n",
644 | "$\n",
645 | " \\begin{bmatrix}\n",
646 | " W_{aa} & W_{ax}\n",
647 | " \\end{bmatrix}\n",
648 | " \\begin{bmatrix}\n",
649 | " a^{} \\\\\n",
650 | " x^{}\n",
651 | " \\end{bmatrix} = W_{aa}a^{} + W_{ax}x^{}\n",
652 | "$\n",
653 | "\n",
654 | "\n",
655 | "### Back Propagation in RNNs\n",
656 | "\n",
657 | "In Figure 6, it can be seen how parameters flow to compute loss in forward propagation step and how gradient descend flows the derivates of Loss, $L$ back to adjust the parameters via back propagation.\n",
658 | "\n",
659 | "\n",
660 | "
\n",
661 | "\n",
662 | "\n",
663 | " \n",
664 | " Figure 6: Recurrent Neural Network Computational Graph\n",
665 | " \n",
666 | "\n",
667 | "\n",
668 | "
\n",
669 | "In the form of equations, first $L^{}$ is computed using cross entropy loss like before. Final loss, $L$ is a summation of losses for all $t$\n",
670 | "\n",
671 | "\n",
672 | " $L^{}(\\hat{y}^{}, y^{}) = - (y^{}log(\\hat{y}^{}) - (1-y^{})log(1-\\hat{y}^{}))$
\n",
673 | " $L(\\hat{y}, y) = \\sum_{t=1}^{T_y} L^{}(\\hat{y}^{}, y^{})$
\n",
674 | "\n"
675 | ]
676 | },
677 | {
678 | "metadata": {
679 | "id": "374uU6b4_zLs",
680 | "colab_type": "text"
681 | },
682 | "cell_type": "markdown",
683 | "source": [
684 | "### Language Model\n",
685 | "\n",
686 | "Grammatically and logically, \"Ram ate an apple\" is more likely than \"Ram an ate apple\". The goal of the language model is to assign a higher probability for the first sentence. In other words the language model should,\n",
687 | "\n",
688 | "\n",
689 | " $P(y^{<1>} = Ram,\\ y^{<2>} = ate,\\ y^{<3>} = an,\\ y^{<4>} = apple)\\ \\mathbf{>}\\ P(y^{<1>} = Ram,\\ y^{<2>} = an,\\ y^{<3>} = ate,\\ y^{<4>} = apple)$\n",
690 | "\n",
691 | "\n",
692 | "For this task, a one-to-many RNN is used. $y^{}$ are the probabilities of all words in vocabulary to be present at position $$ in a sentence. $y^{}$ is fed as input, $x^{}$, making it a conditional probabilty, $P(y^{<2>} = ate\\ |\\ y^{<1>} = Ram)$\n",
693 | "\n",
694 | "\n",
695 | "
\n",
696 | "\n",
697 | "\n",
698 | " \n",
699 | " Figure 7: one-to-many RNN for modelling language\n",
700 | " \n",
701 | "\n",
702 | "\n",
703 | "Training this on a huge corpus of text, models the sequence of words in a language. Using the conditional probabilities obtained at each $$, the probability of a sentence can be found,\n",
704 | "\n",
705 | "\n",
706 | " $P(y^{<1>} = Ram,\\ y^{<2>} = ate,\\ y^{<3>} = an,\\ y^{<4>} = apple) = P(y^{<1>} = Ram) \\times P(y^{<2>} = ate\\ |\\ y^{<1>} = Ram) \\times P(y^{<3>} = an\\ |\\ y^{<1>} = Ram, \\ y^{<2>} = ate) \\times P(y^{<4>} = apple\\ |\\ y^{<1>} = Ram, \\ y^{<2>} = ate, \\ y^{<3>} = an)$\n",
707 | "\n",
708 | "\n",
709 | "
\n",
710 | "$P(y^{<3>} = an\\ |\\ y^{<1>} = Ram, \\ y^{<2>} = ate)$ is obtained from the 3rd neuron and so on. As the vocabulary cannot incorporate all the words that ever appear, a special <UNK> token is used to represent all the words out of vocabulary. Similarly character level language models can be created where each character would be $\\hat{y}^{}$ and this problem of <UNK> words impacting the accuracy could be mitigated. The problem with character models is that the sequences would be much longer and RNNs cannot carry very long range dependencies. Also, character models are more computationally intensive and so not common.\n"
711 | ]
712 | },
713 | {
714 | "metadata": {
715 | "id": "Mb_eN0vfD8gg",
716 | "colab_type": "text"
717 | },
718 | "cell_type": "markdown",
719 | "source": [
720 | "### Gated Recurrent Units (GRU)\n",
721 | "\n",
722 | "RNNs have the problem of vanishing gradients if the sentences are long just like in DNNs. Long term dependencies are quite common in English as a word in the beginning of the sentence can dictate the state of a word in the end of the sentence. To enable long term dependencies, GRU's introduce a concept called a memory cell. \n",
723 | "\n",
724 | "\n",
725 | "
\n",
726 | "\n",
727 | "\n",
728 | " \n",
729 | " Figure 8: RNN and GRU cells\n",
730 | " \n",
731 | "\n",
732 | "\n",
733 | "
\n",
734 | "\n",
735 | " $\\tilde{c}^{} = tanh(W_c[c^{},\\ x^{}] + b_c) $
\n",
736 | " $\\Gamma_u = \\sigma(W_u[c^{},\\ x^{}] + b_u) $
\n",
737 | "\n",
738 | "\n",
739 | "TODO"
740 | ]
741 | },
742 | {
743 | "metadata": {
744 | "id": "MTDsAEpJWEMB",
745 | "colab_type": "text"
746 | },
747 | "cell_type": "markdown",
748 | "source": [
749 | "## Deep Learning Hyperparameters\n",
750 | "\n",
751 | "Hyperparameter | Symbol | Common Values\t|\n",
752 | "--- | --- | --- | ---\n",
753 | "regularization | $\\lambda$ | | also called, “weight decay”\n",
754 | "learning rate | $\\alpha$\t| 0.01 | \n",
755 | "keep_prob | | 0.7 | from Dropout regularization\n",
756 | "momentum | $\\beta$ | 0.9 | also used in Adam \n",
757 | "mini-batch size | $t$\t| 64, 128, 256, 512\t| \n",
758 | "RMS Prop | $\\beta_2$ | 0.999 | also used in Adam \n",
759 | "learning rate decay | | | also called decay_rate\n",
760 | "filter size | $f^{[l]}$\t|\t| In CNN, size of a filter in layer $l$\n",
761 | "stride | $s^{[l]}$ | | In CNN, stride length in layer $l$\n",
762 | "padding | $p^{[l]}$ | | In CNN, padding in layer $l$\n",
763 | "\\# filters | $n_c^{[l]}$ | | In CNN, number of filters used in layer $l$\n",
764 | "\n",
765 | "\n",
766 | "\t\t\t\n",
767 | "\t\t\t\n"
768 | ]
769 | }
770 | ]
771 | }
772 |
--------------------------------------------------------------------------------