├── .gitignore ├── 1. Introduction to Numpy.ipynb ├── 2. Introduction to Tensorflow.ipynb ├── 2.10 Tensorflow Datasets.ipynb ├── 2.11. Reading Checkpoints in Tensorflow.ipynb ├── 2.3-4. Placeholders in Tensorflow.ipynb ├── 2.5. Variables in Tensorflow.ipynb ├── 2.6.1.Saving Session in Tensorflow.ipynb ├── 2.6.2 Restoring Session in Tensorflow.ipynb ├── 2.8.1. NeuralNetwork in Tensorflow.ipynb ├── 2.8.2 NeuralNetwork in Tensorflow via Keras.ipynb ├── 2.8.3. NeuralNetwork in Keras.ipynb ├── 2.9 Realtime Metric in Tensorflow.ipynb ├── 5.1. Finetuning for CatvsDogs.ipynb ├── 5.1. Understanding Images and Data Preperation.ipynb ├── 5.2. OpenCV Selected Topics.ipynb ├── 5.3.1. Finetuning for CatsvsDogs in Keras.ipynb ├── 5.4.1. Manipulating Model.ipynb ├── 6.2 Skip Gram Model.ipynb ├── 7.2.1 RNN SineWave 1.ipynb ├── 7.2.2. RNN SineWave 2.ipynb ├── 7.3 WordRNN.ipynb ├── 7.4 Neural Machine Translation.ipynb ├── 9.1 Introduction to RL ├── 9.1 Introduction to RL.ipynb ├── 9.2 GridDemo ├── 9.2 GridDemo.ipynb ├── 9.3 Game of Thrones Example.ipynb ├── 9.4 Cartpole Example.ipynb ├── README.md ├── Untitled.ipynb ├── finetuning_keras.py ├── finetuning_tensorflow.py ├── img ├── 1.2.1-numpy-matrix-01.jpg ├── 1.2.1-numpy-matrix.ai ├── 1.3.7-hsplit-01.jpg ├── 1.3.7-hsplit.ai ├── 1.4-pass-by-reference-01.jpg ├── 1.4-pass-by-reference.ai ├── 1.5-broadcast-01.jpg ├── 1.5-broadcast.ai ├── 2-tensorflow-01.jpg ├── 2-tensorflow-applications-01.jpg ├── 2-tensorflow-applications.ai ├── 2-tensorflow.ai ├── 2.1.3-tensorflow-graph1.png ├── 2.3-Tensorflow-blackbox-01.jpg ├── 2.3-Tensorflow-blackbox.ai ├── 2.5.1-variable-graph.jpg ├── BeamSearch.svg ├── Tensorflow_logo.png ├── Yoda-featured1.jpg ├── catsvsdogs │ ├── cats │ │ ├── cat1.jpg │ │ ├── cat2.jpg │ │ ├── cat3.jpg │ │ └── cat3.png │ └── dogs │ │ ├── dog1.jpeg │ │ ├── dog2.jpeg │ │ └── dog3.jpeg ├── dog.jpeg ├── header.jpg ├── img2latex_training.svg ├── naivebayes.ai └── preview │ ├── dog_0_184.jpeg │ ├── dog_0_2567.jpeg │ ├── dog_0_2828.jpeg 
│ ├── dog_0_3060.jpeg │ ├── dog_0_3236.jpeg │ ├── dog_0_364.jpeg │ ├── dog_0_3800.jpeg │ ├── dog_0_4023.jpeg │ ├── dog_0_5004.jpeg │ ├── dog_0_5606.jpeg │ ├── dog_0_5987.jpeg │ ├── dog_0_6326.jpeg │ ├── dog_0_6665.jpeg │ ├── dog_0_6925.jpeg │ ├── dog_0_6938.jpeg │ ├── dog_0_8075.jpeg │ ├── dog_0_863.jpeg │ ├── dog_0_9160.jpeg │ ├── dog_0_9288.jpeg │ ├── dog_0_9698.jpeg │ └── dog_0_9963.jpeg ├── model ├── 2.5.1-exp │ └── events.out.tfevents.1543598132.DESKTOP-CK983JR └── 2.6.1-exp │ ├── checkpoint │ ├── tmp_model.ckpt.data-00000-of-00001 │ ├── tmp_model.ckpt.index │ └── tmp_model.ckpt.meta └── raw ├── 2.10-project-structure ├── dataset_io.py ├── layers.py ├── main.py ├── model_v1.py └── param.py ├── project1-nn ├── __init__.py ├── __main__.py └── nn │ ├── __init__.py │ ├── dataset_io.py │ ├── layers.py │ ├── main.py │ ├── model_v1.py │ └── param.py └── project2-keras ├── intro.py └── mnist.py /.gitignore: -------------------------------------------------------------------------------- 1 | *.ipynb_* 2 | *.pyc -------------------------------------------------------------------------------- /2. Introduction to Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 2. Introduction to Tensorflow\n", 8 | "![](img/2-tensorflow-01.jpg) \n", 9 | "Tensorflow was released under the Apache 2.0 open-source license on November 9, 2015. It has undergone a lot of changes and emerged as one of the go-to libraries for a wide range of deep learning tasks. The advantages of Tensorflow are: \n", 10 | "![](img/2-tensorflow-applications-01.jpg)\n", 11 | "We will cover both the standard (graph) and eager execution modes of Tensorflow in this chapter. But before we get into the nuts and bolts of Tensorflow, let's get our hands dirty with the hello-world example."
12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 1, 17 | "metadata": {}, 18 | "outputs": [ 19 | { 20 | "name": "stdout", 21 | "output_type": "stream", 22 | "text": [ 23 | "Tensorflow version is 1.8.0\n" 24 | ] 25 | } 26 | ], 27 | "source": [ 28 | "import tensorflow as tf\n", 29 | "print ('Tensorflow version is %s'%tf.__version__)" 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "execution_count": 4, 35 | "metadata": {}, 36 | "outputs": [ 37 | { 38 | "name": "stdout", 39 | "output_type": "stream", 40 | "text": [ 41 | "b'Hello, TensorFlow!'\n" 42 | ] 43 | } 44 | ], 45 | "source": [ 46 | "hello = tf.constant('Hello, TensorFlow!')\n", 47 | "\n", 48 | "# Start tf session\n", 49 | "sess = tf.Session()\n", 50 | "\n", 51 | "# Run the operation of initializing the constant\n", 52 | "print(sess.run(hello))" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "## 2.1. Tensorflow : Fundamental Design\n", 60 | "Consider adding two numbers in Python: a=3, b=5, print(a+b). For this computation to happen, the interpreter must store the contents of a and b in memory locations and also store the computation graph we created, which in this example is (a,+,b). The computer converts it into pre-order (+,a,b) or post-order (a,b,+) form for evaluation. \n", 61 | "Now why am I talking about Python's computational graph? That's because I want to highlight the fact that for any computation to happen, we need to \n", 62 | "1. Store the data \n", 63 | "2. Store the operations in the form of a computational graph \n", 64 | "This is exactly what Tensorflow does. \n", 65 | "\n", 66 | "#### 2.1.1 Dataflow Programming\n", 67 | "If we stretch this computational strategy (directed graphs) to represent the complete program, it becomes the **Dataflow programming** paradigm. This comes with advantages such as \n", 68 | "1. Natural choice for Parallel Execution\n", 69 | "2.
Distributed Execution\n", 70 | "\n", 71 | "So even for Tensorflow, we have the same fundamental design. We have \n", 72 | "1. **Tensors(tf.Tensor)** \n", 73 | "2. **Operations(tf.Operation)** \n", 74 | "\n", 75 | "Let us take the simple example of adding two constants in tensorflow to understand this concept. " 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "execution_count": 18, 81 | "metadata": {}, 82 | "outputs": [ 83 | { 84 | "name": "stdout", 85 | "output_type": "stream", 86 | "text": [ 87 | "x = 5\n", 88 | "y = 10\n", 89 | "z = x+y = 15\n", 90 | "Name of tensor z is -> add_13:0\n", 91 | "The name of operation which led to z is -> add_13\n", 92 | "\n", 93 | "Name of tensor z1 is -> add_13:0\n", 94 | "The name of operation which led to z is -> add_13\n", 95 | "\n" 96 | ] 97 | } 98 | ], 99 | "source": [ 100 | "import tensorflow as tf\n", 101 | "\n", 102 | "x = tf.constant(5)\n", 103 | "y = tf.constant(10)\n", 104 | "z = x + y\n", 105 | "z1 = z\n", 106 | "\n", 107 | "with tf.Session() as sess:\n", 108 | " print(\"x = %s\"%sess.run(x))\n", 109 | " print(\"y = %s\"%sess.run(y)) \n", 110 | " print(\"z = x+y = %s\" % sess.run(z))\n", 111 | " print (\"Name of tensor z is -> %s\\nThe name of operation which led to z is -> %s\\n\"%(z.name,z.name.split(\":\")[0]))\n", 112 | " print (\"Name of tensor z1 is -> %s\\nThe name of operation which led to z is -> %s\\n\"%(z1.name,z1.name.split(\":\")[0]))" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "The operation which tensorflow internally creates for z = x + y is tf.add, and hence the first such operation is named **add**; subsequent ones are named **add_1**, **add_2** and so on (the output above shows **add_13** because this cell had been re-run several times). \n", 120 | "The name of the tensor (\"add:0\") is based on the operation which created it. The (:0) in the tensor's name is there to take care of operations which yield multiple outputs. Splitting is one such operation.
In that case we will have (:0), (:1) and so on for the different outputs.\n", 121 | "\n", 122 | "> **Note :** The special case for the tensor's nomenclature-by-operation is constant initialization. When a tensor is created via tf.constant, it is named *Const*, *Const_1*, *Const_2* and so on.\n", 123 | "\n", 124 | "#### 2.1.2 Attributes of a Tensor\n", 125 | "A tensor in tensorflow comes with a **number of default attributes**; the commonly used ones are:\n", 126 | "1. **v.shape** : The shape of the n-dimensional array\n", 127 | "2. **v.dtype** : The datatype of the data, detected automatically (similar to Python)\n", 128 | "3. **v.eval** : Evaluates the value stored in the session for that tensor.\n", 129 | "4. **v.op** : The operation which led to the tensor.\n", 130 | "5. **v.name** : The name of the tensor (by default of the form op_name:0, where :0 indexes the operation's outputs).\n", 131 | "\n", 132 | "> **Note :** Tensors can be evaluated; operations cannot." 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": 23, 138 | "metadata": {}, 139 | "outputs": [ 140 | { 141 | "name": "stdout", 142 | "output_type": "stream", 143 | "text": [ 144 | "Default description for the tensor is : Tensor(\"Const_30:0\", shape=(2,), dtype=int32)\n", 145 | "\n", 146 | "Shape of the tensor is : (2,)\n", 147 | "\n", 148 | "Datatype of the tensor is : <dtype: 'int32'>\n", 149 | "\n" 150 | ] 151 | } 152 | ], 153 | "source": [ 154 | "v = tf.constant([5,1])\n", 155 | "print ('Default description for the tensor is : %s\\n'%v)\n", 156 | "print ('Shape of the tensor is : %s\\n'%v.shape)\n", 157 | "print ('Datatype of the tensor is : %s\\n'%v.dtype)\n" 158 | ] 159 | }, 160 | { 161 | "cell_type": "markdown", 162 | "metadata": {}, 163 | "source": [ 164 | "#### 2.1.3 tf.Graph\n", 165 | "tf.Graph is the computational graph which can be executed in Tensorflow.
\n", 166 | "* The nodes of this graph are **Operations**\n", 167 | "* The edges of this graph are **Tensors**\n", 168 | "\n", 169 | "![](img/2.1.3-tensorflow-graph1.png)\n", 170 | "\n", 171 | "#### 2.1.4 tf.Session\n", 172 | "tf.Session is like a page in a book: it maintains the current state of the tensors and the graph. It serves as a lookup table containing \n", 173 | "1. All the ***Tensor*** and ***Operation*** objects of the graph\n", 174 | "2. The ***actual nd-array data*** residing in the edges of the computational graph\n", 175 | "\n", 176 | "\n", 177 | "Tensorflow has 3 ways of creating tensors:\n", 178 | "1. tf.constant\n", 179 | "2. tf.Variable\n", 180 | "3. tf.placeholder\n", 181 | "\n", 182 | "Let us study each of them in detail. \n", 183 | "## 2.2. tf.constant, Graph and Session\n", 184 | "``` python\n", 185 | "tf.constant(\n", 186 | " value,\n", 187 | " dtype=None,\n", 188 | " shape=None,\n", 189 | " name='Const',\n", 190 | " verify_shape=False\n", 191 | ")\n", 192 | "```\n", 193 | "\n", 194 | "tf.constant creates a constant tensor in the computational graph. \n", 195 | "When we type : \n", 196 | ">```python\n", 197 | "> a = b + c\n", 198 | ">``` \n", 199 | "> Here, a is assigned the **tensor** (*add:0*) which is the result of **a.op**, the **add** operation (add).
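The graph-and-session design described above, with operations as nodes, tensors as edges, op names uniqued with _1, _2 suffixes, and a :0 output index, can be made concrete with a toy sketch in plain Python. This is purely illustrative: the `Graph` and `Op` classes below are invented for this sketch and are not Tensorflow's internals.

```python
# Toy dataflow graph (NOT Tensorflow internals): operations are nodes,
# values flow along named edges, and nothing runs until run() is called.
class Op:
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

class Graph:
    def __init__(self):
        self.ops = {}  # op name -> Op, like graph.get_operations()

    def _unique_name(self, base):
        # Tensorflow-style uniquing: add, add_1, add_2, ...
        if base not in self.ops:
            return base
        i = 1
        while '%s_%d' % (base, i) in self.ops:
            i += 1
        return '%s_%d' % (base, i)

    def constant(self, value, name='Const'):
        name = self._unique_name(name)
        self.ops[name] = Op(name, lambda: value, [])
        return name + ':0'  # tensor name = op name + output index

    def add(self, a, b, name='add'):
        name = self._unique_name(name)
        self.ops[name] = Op(name, lambda u, v: u + v, [a, b])
        return name + ':0'

    def run(self, tensor_name):
        # evaluate the op behind a tensor by recursively running its inputs
        op = self.ops[tensor_name.split(':')[0]]
        return op.fn(*[self.run(t) for t in op.inputs])

g = Graph()
x = g.constant(5)     # 'Const:0'
y = g.constant(10)    # 'Const_1:0'
z = g.add(x, y)       # 'add:0'
print(g.add(x, y))    # add_1:0 -- a second identical op gets a suffix
print(g.run(z))       # 15
```

Note how building the graph and running it are separate steps, which is exactly the split between graph construction and sess.run in the real API.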
\n", 200 | "\n", 201 | "#### 2.2.1 Evaluating a constant wrt a Session\n", 202 | "The first way to set the context is by using\n", 203 | ">```python \n", 204 | "> with tf.Session() as sess:\n", 205 | ">```\n" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": 9, 211 | "metadata": {}, 212 | "outputs": [ 213 | { 214 | "name": "stdout", 215 | "output_type": "stream", 216 | "text": [ 217 | "2\n" 218 | ] 219 | } 220 | ], 221 | "source": [ 222 | "# 2.2.1.1 Session initialization for evaluating the value of tf.constant : Implicit session passing\n", 223 | "a=tf.constant(2)\n", 224 | "with tf.Session() as sess:\n", 225 | " # Session starts\n", 226 | " print (a.eval())\n", 227 | " # Session ends" 228 | ] 229 | }, 230 | { 231 | "cell_type": "markdown", 232 | "metadata": {}, 233 | "source": [ 234 | "If you don't set the session context, you can pass the session object as an argument to the eval function." 235 | ] 236 | }, 237 | { 238 | "cell_type": "code", 239 | "execution_count": 14, 240 | "metadata": {}, 241 | "outputs": [ 242 | { 243 | "data": { 244 | "text/plain": [ 245 | "2" 246 | ] 247 | }, 248 | "execution_count": 14, 249 | "metadata": {}, 250 | "output_type": "execute_result" 251 | } 252 | ], 253 | "source": [ 254 | "# 2.2.1.2 Session initialization for evaluating the value of tf.constant : Explicit session passing\n", 255 | "sess=tf.Session()\n", 256 | "a=tf.constant(2)\n", 257 | "a.eval(session = sess) #passing session as an argument" 258 | ] 259 | }, 260 | { 261 | "cell_type": "markdown", 262 | "metadata": {}, 263 | "source": [ 264 | "The last way to obtain the value of a constant tensor is via **sess.run**" 265 | ] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": 16, 270 | "metadata": {}, 271 | "outputs": [ 272 | { 273 | "name": "stdout", 274 | "output_type": "stream", 275 | "text": [ 276 | "2\n" 277 | ] 278 | } 279 | ], 280 | "source": [ 281 | "# 2.2.1.3 Session initialization for evaluating the value of tf.constant : via sess.run\n", 282 |
"a=tf.constant(2)\n", 283 | "with tf.Session() as sess:\n", 284 | " # Session starts\n", 285 | " print (sess.run(a)) #run tensor refered by 'a' in the session 'sess' and print a's value\n", 286 | " # Session ends" 287 | ] 288 | }, 289 | { 290 | "cell_type": "markdown", 291 | "metadata": {}, 292 | "source": [ 293 | "#### 2.2.2 Revisiting the addition example" 294 | ] 295 | }, 296 | { 297 | "cell_type": "code", 298 | "execution_count": 2, 299 | "metadata": {}, 300 | "outputs": [], 301 | "source": [ 302 | "sess = tf.Session()\n", 303 | "a = tf.constant(2)\n", 304 | "b = tf.constant(3)\n", 305 | "c = a + b" 306 | ] 307 | }, 308 | { 309 | "cell_type": "code", 310 | "execution_count": 11, 311 | "metadata": {}, 312 | "outputs": [ 313 | { 314 | "name": "stdout", 315 | "output_type": "stream", 316 | "text": [ 317 | "Name of tensor corrosponding to c is : add:0\n", 318 | "Name of operation which created tensor that is pointed by c is : add\n" 319 | ] 320 | } 321 | ], 322 | "source": [ 323 | "# Printing the Operation\n", 324 | "print ('Name of tensor corrosponding to c is :',c.name)\n", 325 | "print ('Name of operation which created tensor that is pointed by c is :',c.op.name)" 326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": 12, 331 | "metadata": {}, 332 | "outputs": [ 333 | { 334 | "name": "stdout", 335 | "output_type": "stream", 336 | "text": [ 337 | "add_1:0\n", 338 | "add_2:0\n", 339 | "add_3:0\n" 340 | ] 341 | } 342 | ], 343 | "source": [ 344 | "c1 = a + b\n", 345 | "c2 = a + b\n", 346 | "c3 = a + b\n", 347 | "print (c1.name)\n", 348 | "print (c2.name)\n", 349 | "print (c3.name)\n" 350 | ] 351 | }, 352 | { 353 | "cell_type": "markdown", 354 | "metadata": {}, 355 | "source": [ 356 | "#### 2.2.3 Understanding Operations for the Simple Addition Example" 357 | ] 358 | }, 359 | { 360 | "cell_type": "code", 361 | "execution_count": 15, 362 | "metadata": {}, 363 | "outputs": [ 364 | { 365 | "name": "stdout", 366 | "output_type": "stream", 367 | 
"text": [ 368 | "name: \"Add_1\"\n", 369 | "op: \"Add\"\n", 370 | "input: \"const_a_1\"\n", 371 | "input: \"const_b_1\"\n", 372 | "attr {\n", 373 | " key: \"T\"\n", 374 | " value {\n", 375 | " type: DT_INT32\n", 376 | " }\n", 377 | "}\n", 378 | "\n" 379 | ] 380 | } 381 | ], 382 | "source": [ 383 | "sess = tf.Session()\n", 384 | "a = tf.constant(2,name='const_a')\n", 385 | "b = tf.constant(3,name='const_b')\n", 386 | "c = tf.add(a,b)\n", 387 | "print (c.op)" 388 | ] 389 | }, 390 | { 391 | "cell_type": "markdown", 392 | "metadata": {}, 393 | "source": [ 394 | "#### 2.2.4 Creating a custom graph for the Simple Addition Example" 395 | ] 396 | }, 397 | { 398 | "cell_type": "code", 399 | "execution_count": 16, 400 | "metadata": {}, 401 | "outputs": [], 402 | "source": [ 403 | "graph = tf.Graph()\n", 404 | "sess = tf.Session(graph=graph)\n", 405 | "with graph.as_default():\n", 406 | " a = tf.constant(2,name='const_a')\n", 407 | " b = tf.constant(3,name='const_b')\n", 408 | " c = tf.add(a,b)" 409 | ] 410 | }, 411 | { 412 | "cell_type": "markdown", 413 | "metadata": {}, 414 | "source": [ 415 | "#### 2.2.5 Printing computational graph for the Simple Addition Example" 416 | ] 417 | }, 418 | { 419 | "cell_type": "code", 420 | "execution_count": 24, 421 | "metadata": {}, 422 | "outputs": [ 423 | { 424 | "name": "stdout", 425 | "output_type": "stream", 426 | "text": [ 427 | "----------------------- \n", 428 | "Operation 0 is \n", 429 | "name: \"const_a\"\n", 430 | "op: \"Const\"\n", 431 | "attr {\n", 432 | " key: \"dtype\"\n", 433 | " value {\n", 434 | " type: DT_INT32\n", 435 | " }\n", 436 | "}\n", 437 | "attr {\n", 438 | " key: \"value\"\n", 439 | " value {\n", 440 | " tensor {\n", 441 | " dtype: DT_INT32\n", 442 | " tensor_shape {\n", 443 | " }\n", 444 | " int_val: 2\n", 445 | " }\n", 446 | " }\n", 447 | "}\n", 448 | "\n", 449 | "\n", 450 | "----------------------- \n", 451 | "Operation 1 is \n", 452 | "name: \"const_b\"\n", 453 | "op: \"Const\"\n", 454 | "attr {\n", 455 | " 
key: \"dtype\"\n", 456 | " value {\n", 457 | " type: DT_INT32\n", 458 | " }\n", 459 | "}\n", 460 | "attr {\n", 461 | " key: \"value\"\n", 462 | " value {\n", 463 | " tensor {\n", 464 | " dtype: DT_INT32\n", 465 | " tensor_shape {\n", 466 | " }\n", 467 | " int_val: 3\n", 468 | " }\n", 469 | " }\n", 470 | "}\n", 471 | "\n", 472 | "\n", 473 | "----------------------- \n", 474 | "Operation 2 is \n", 475 | "name: \"Add\"\n", 476 | "op: \"Add\"\n", 477 | "input: \"const_a\"\n", 478 | "input: \"const_b\"\n", 479 | "attr {\n", 480 | " key: \"T\"\n", 481 | " value {\n", 482 | " type: DT_INT32\n", 483 | " }\n", 484 | "}\n", 485 | "\n", 486 | "\n" 487 | ] 488 | } 489 | ], 490 | "source": [ 491 | "for i,op in enumerate(graph.get_operations()):\n", 492 | " print ('----------------------- \\nOperation %i is \\n%s\\n'%(i,op))" 493 | ] 494 | }, 495 | { 496 | "cell_type": "markdown", 497 | "metadata": {}, 498 | "source": [ 499 | "#### 2.2.6 Fetching and executing tensor by name in tensorflow" 500 | ] 501 | }, 502 | { 503 | "cell_type": "code", 504 | "execution_count": 29, 505 | "metadata": {}, 506 | "outputs": [ 507 | { 508 | "name": "stdout", 509 | "output_type": "stream", 510 | "text": [ 511 | "Tensor(\"const_a:0\", shape=(), dtype=int32)\n" 512 | ] 513 | } 514 | ], 515 | "source": [ 516 | "v=graph.get_tensor_by_name('const_a:0')\n", 517 | "print (v)" 518 | ] 519 | }, 520 | { 521 | "cell_type": "markdown", 522 | "metadata": {}, 523 | "source": [ 524 | "This is very useful especially if you have not created an explicit variable while writing the code. But mind it, you must name the operation for easy identfication. \n", 525 | "If you are working with jupyter notebook, a new tensor is created on running the cell again. Therefore, you need to update suffix of the tensor name (:n) on each execution." 
526 | ] 527 | }, 528 | { 529 | "cell_type": "code", 530 | "execution_count": 30, 531 | "metadata": {}, 532 | "outputs": [ 533 | { 534 | "name": "stdout", 535 | "output_type": "stream", 536 | "text": [ 537 | "5\n" 538 | ] 539 | } 540 | ], 541 | "source": [ 542 | "# Example of fetching tensor from graph and evaluating its value\n", 543 | "sess = tf.Session()\n", 544 | "a = tf.constant(2,name='const_a')\n", 545 | "b = tf.constant(3,name='const_b')\n", 546 | "tf.add(a,b,name='addition')\n", 547 | "\n", 548 | "c=tf.get_default_graph().get_tensor_by_name('addition:0')\n", 549 | "print (c.eval(session=sess))" 550 | ] 551 | }, 552 | { 553 | "cell_type": "code", 554 | "execution_count": null, 555 | "metadata": {}, 556 | "outputs": [], 557 | "source": [] 558 | }, 559 | { 560 | "cell_type": "code", 561 | "execution_count": null, 562 | "metadata": {}, 563 | "outputs": [], 564 | "source": [] 565 | } 566 | ], 567 | "metadata": { 568 | "kernelspec": { 569 | "display_name": "Python 3", 570 | "language": "python", 571 | "name": "python3" 572 | }, 573 | "language_info": { 574 | "codemirror_mode": { 575 | "name": "ipython", 576 | "version": 3 577 | }, 578 | "file_extension": ".py", 579 | "mimetype": "text/x-python", 580 | "name": "python", 581 | "nbconvert_exporter": "python", 582 | "pygments_lexer": "ipython3", 583 | "version": "3.5.4" 584 | } 585 | }, 586 | "nbformat": 4, 587 | "nbformat_minor": 2 588 | } 589 | -------------------------------------------------------------------------------- /2.11. 
Reading Checkpoints in Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file\n", 10 | "from tensorflow.python import pywrap_tensorflow\n", 11 | "\n", 12 | "checkpoint_path = checkpoint_path = '/home/jaley/git/AI101-DeepLearning/raw/project3-finetuning-dogbreed/nn/pretrained/vgg_16.ckpt'\n", 13 | "reader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path)\n", 14 | "var_to_shape_map = reader.get_variable_to_shape_map()" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 2, 20 | "metadata": {}, 21 | "outputs": [ 22 | { 23 | "name": "stdout", 24 | "output_type": "stream", 25 | "text": [ 26 | "tensor_name is vgg_16/conv3/conv3_3/weights with shape (3, 3, 256, 256) \n", 27 | "tensor_name is vgg_16/conv3/conv3_2/weights with shape (3, 3, 256, 256) \n", 28 | "tensor_name is vgg_16/conv3/conv3_1/weights with shape (3, 3, 128, 256) \n", 29 | "tensor_name is vgg_16/conv2/conv2_2/weights with shape (3, 3, 128, 128) \n", 30 | "tensor_name is vgg_16/fc7/biases with shape (4096,) \n", 31 | "tensor_name is vgg_16/fc8/biases with shape (1000,) \n", 32 | "tensor_name is vgg_16/conv3/conv3_1/biases with shape (256,) \n", 33 | "tensor_name is vgg_16/conv1/conv1_2/biases with shape (64,) \n", 34 | "tensor_name is vgg_16/conv2/conv2_2/biases with shape (128,) \n", 35 | "tensor_name is vgg_16/conv2/conv2_1/weights with shape (3, 3, 64, 128) \n", 36 | "tensor_name is vgg_16/conv5/conv5_2/biases with shape (512,) \n", 37 | "tensor_name is vgg_16/conv5/conv5_2/weights with shape (3, 3, 512, 512) \n", 38 | "tensor_name is vgg_16/conv1/conv1_1/biases with shape (64,) \n", 39 | "tensor_name is vgg_16/conv3/conv3_3/biases with shape (256,) \n", 40 | "tensor_name is vgg_16/conv5/conv5_3/weights with 
shape (3, 3, 512, 512) \n", 41 | "tensor_name is vgg_16/fc8/weights with shape (1, 1, 4096, 1000) \n", 42 | "tensor_name is global_step with shape () \n", 43 | "tensor_name is vgg_16/conv5/conv5_1/biases with shape (512,) \n", 44 | "tensor_name is vgg_16/fc6/biases with shape (4096,) \n", 45 | "tensor_name is vgg_16/conv5/conv5_1/weights with shape (3, 3, 512, 512) \n", 46 | "tensor_name is vgg_16/fc6/weights with shape (7, 7, 512, 4096) \n", 47 | "tensor_name is vgg_16/conv5/conv5_3/biases with shape (512,) \n", 48 | "tensor_name is vgg_16/fc7/weights with shape (1, 1, 4096, 4096) \n", 49 | "tensor_name is vgg_16/conv1/conv1_1/weights with shape (3, 3, 3, 64) \n", 50 | "tensor_name is vgg_16/conv4/conv4_3/weights with shape (3, 3, 512, 512) \n", 51 | "tensor_name is vgg_16/conv2/conv2_1/biases with shape (128,) \n", 52 | "tensor_name is vgg_16/conv1/conv1_2/weights with shape (3, 3, 64, 64) \n", 53 | "tensor_name is vgg_16/conv3/conv3_2/biases with shape (256,) \n", 54 | "tensor_name is vgg_16/mean_rgb with shape (3,) \n", 55 | "tensor_name is vgg_16/conv4/conv4_3/biases with shape (512,) \n", 56 | "tensor_name is vgg_16/conv4/conv4_1/biases with shape (512,) \n", 57 | "tensor_name is vgg_16/conv4/conv4_2/weights with shape (3, 3, 512, 512) \n", 58 | "tensor_name is vgg_16/conv4/conv4_2/biases with shape (512,) \n", 59 | "tensor_name is vgg_16/conv4/conv4_1/weights with shape (3, 3, 256, 512) \n" 60 | ] 61 | } 62 | ], 63 | "source": [ 64 | "for key in var_to_shape_map:\n", 65 | " print(\"tensor_name is %s with shape %s \"%(key,reader.get_tensor(key).shape))" 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": null, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [] 74 | } 75 | ], 76 | "metadata": { 77 | "kernelspec": { 78 | "display_name": "Python 3", 79 | "language": "python", 80 | "name": "python3" 81 | }, 82 | "language_info": { 83 | "codemirror_mode": { 84 | "name": "ipython", 85 | "version": 3 86 | }, 87 | "file_extension": ".py", 
88 | "mimetype": "text/x-python", 89 | "name": "python", 90 | "nbconvert_exporter": "python", 91 | "pygments_lexer": "ipython3", 92 | "version": "3.6.7" 93 | } 94 | }, 95 | "nbformat": 4, 96 | "nbformat_minor": 2 97 | } 98 | -------------------------------------------------------------------------------- /2.3-4. Placeholders in Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## 2.3 Overall Flow" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "![](img/2.3-Tensorflow-blackbox-01.jpg)" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "In **Tensorflow**, everything is executed within the context of a session. You can either pass it explicitly or pass it implicitly by using\n", 22 | "\n", 23 | ">```python \n", 24 | ">with tf.Session() as sess: \n", 25 | "> #evaluate the variables of the computational graph\n", 26 | "> #update the tensors \n", 27 | "> pass\n", 28 | ">``` \n", 29 | "\n", 30 | "Within the scope, you can obtain the resultant tensor by evaluating the operation-based dependency tree. \n", 31 | "## 2.4 Placeholder\n", 32 | "If you want to assign the value of a tensor at run time, then **tf.placeholder** is the wise choice. tf.placeholder allows the user to feed changing values in a loop residing inside the session. For example, if you want to pass a batch of images on the fly, it makes sense to have the tensor corresponding to the image be a *tf.placeholder* instead of a *tf.constant*.
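The placeholder idea can be mimicked in plain Python to make the control flow concrete: the graph is built once with free input slots, and each run binds fresh values through a feed dict. A toy sketch follows (not Tensorflow code; `build_adder` is an invented name for this illustration):

```python
# Toy sketch of the placeholder pattern (plain Python, not TF): the "graph"
# is constructed once, and values are bound per run via a feed dict.
def build_adder():
    # 'a' and 'b' play the role of placeholders: names with no value yet
    def run(feed_dict):
        return feed_dict['a'] + feed_dict['b']
    return run

run = build_adder()

# generate Fibonacci numbers by feeding new values into the same "graph",
# just as the notebook's example feeds a and b inside the session loop
fib = [0, 1]
for _ in range(8):
    fib.append(run({'a': fib[-2], 'b': fib[-1]}))
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Building once and feeding many times is the whole point: the expensive graph construction happens outside the loop, while only cheap value binding happens inside it.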
\n", 33 | "\n", 34 | "In the example bellow, we will be looking at creation of fibonacci series via placeholders\n", 35 | "#### 2.4.1 Creating a fibonacci series" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": 5, 41 | "metadata": {}, 42 | "outputs": [ 43 | { 44 | "name": "stdout", 45 | "output_type": "stream", 46 | "text": [ 47 | "0 1 1 2 3 5 8 13 21 34 " 48 | ] 49 | } 50 | ], 51 | "source": [ 52 | "#Creating a Fibonacci Series Example 1\n", 53 | "import tensorflow as tf;\n", 54 | "_a,_b = 0,1\n", 55 | "a = tf.placeholder(dtype=tf.int64)\n", 56 | "b = tf.placeholder(dtype=tf.int64)\n", 57 | "\n", 58 | "with tf.Session() as sess:\n", 59 | " for i in range(10):\n", 60 | " print (_a,end=\" \")\n", 61 | " _a,_b = sess.run([b,a+b],feed_dict={a:_a,b:_b})\n" 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": 3, 67 | "metadata": {}, 68 | "outputs": [ 69 | { 70 | "name": "stdout", 71 | "output_type": "stream", 72 | "text": [ 73 | "[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\n" 74 | ] 75 | } 76 | ], 77 | "source": [ 78 | "#Creating a Fibonacci Series Example 2\n", 79 | "import tensorflow as tf;\n", 80 | "a = tf.placeholder(dtype=tf.int64)\n", 81 | "b = tf.placeholder(dtype=tf.int64)\n", 82 | "lst = [0,1]\n", 83 | "\n", 84 | "with tf.Session() as sess:\n", 85 | " for i in range(10):\n", 86 | " lst.append(sess.run(a+b,feed_dict={a:lst[-2],b:lst[-1]}))\n", 87 | "print (lst)\n" 88 | ] 89 | }, 90 | { 91 | "cell_type": "markdown", 92 | "metadata": {}, 93 | "source": [ 94 | "#### 2.4.2 Matrix Vector Multiplication in tensorflow" 95 | ] 96 | }, 97 | { 98 | "cell_type": "code", 99 | "execution_count": 13, 100 | "metadata": {}, 101 | "outputs": [ 102 | { 103 | "name": "stdout", 104 | "output_type": "stream", 105 | "text": [ 106 | "matrix_val is \n", 107 | "[[ 0 1 2 3]\n", 108 | " [ 4 5 6 7]\n", 109 | " [ 8 9 10 11]]\n", 110 | "\n", 111 | "vector_val is \n", 112 | "[[0]\n", 113 | " [1]\n", 114 | " [2]\n", 115 | " [3]]\n", 116 | "\n", 117 | 
"output_val is \n", 118 | "[[14.]\n", 119 | " [38.]\n", 120 | " [62.]]\n", 121 | "\n" 122 | ] 123 | } 124 | ], 125 | "source": [ 126 | "import tensorflow as tf\n", 127 | "import numpy as np\n", 128 | "matrix = tf.placeholder(dtype=tf.float32,shape=(3,4))\n", 129 | "vector = tf.placeholder(dtype=tf.float32,shape=(4,1))\n", 130 | "output = tf.matmul(matrix,vector);\n", 131 | "\n", 132 | "with tf.Session() as sess:\n", 133 | " matrix_val = np.arange(12).reshape(3,4)\n", 134 | " vector_val = np.arange(4).reshape(4,1)\n", 135 | " output_val = output.eval(feed_dict={matrix: matrix_val, vector: vector_val})\n", 136 | " print ('matrix_val is \\n%s\\n'%matrix_val)\n", 137 | " print ('vector_val is \\n%s\\n'%vector_val)\n", 138 | " print ('output_val is \\n%s\\n'%output_val )" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "#### 2.4.3 Evaluate : $\\left. \\cfrac{d}{dx}(2x^2) \\right\\vert_{(x=3)} $\n", 146 | "I have attached a simple example of doing differentiation and finding gradient of a function in Tensorflow\n" 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": null, 152 | "metadata": {}, 153 | "outputs": [], 154 | "source": [ 155 | "# Finding the derivative of function \n", 156 | "import tensorflow as tf;\n", 157 | "x= tf.placeholder(tf.float32);\n", 158 | "fx =2*x*x;\n", 159 | "grads =tf.gradients(fx,x)\n", 160 | "with tf.Session() as sess:\n", 161 | " print (sess.run(grads,feed_dict={x:3.0}))\n" 162 | ] 163 | }, 164 | { 165 | "cell_type": "markdown", 166 | "metadata": {}, 167 | "source": [ 168 | "#### 2.4.3 Evaluate : $\\left. 
\nabla (2x^2 + 3y^2) \right\vert_{(x=3,y=5)} $\n", 169 | "Here is a simple example of computing the gradient of a multivariate function in Tensorflow\n" 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": 15, 175 | "metadata": {}, 176 | "outputs": [ 177 | { 178 | "name": "stdout", 179 | "output_type": "stream", 180 | "text": [ 181 | "[12.0, 30.0]\n" 182 | ] 183 | } 184 | ], 185 | "source": [ 186 | "# Finding the gradient of function \n", 187 | "x= tf.placeholder(tf.float32);\n", 188 | "y= tf.placeholder(tf.float32);\n", 189 | "fxy =2*x*x+3*y*y;\n", 190 | "grads =tf.gradients(fxy,[x,y])\n", 191 | "with tf.Session() as sess:\n", 192 | " print (sess.run(grads,feed_dict={x:3.0,y:5.0}))" 193 | ] 194 | }, 195 | { 196 | "cell_type": "code", 197 | "execution_count": null, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [] 201 | } 202 | ], 203 | "metadata": { 204 | "kernelspec": { 205 | "display_name": "Python 3", 206 | "language": "python", 207 | "name": "python3" 208 | }, 209 | "language_info": { 210 | "codemirror_mode": { 211 | "name": "ipython", 212 | "version": 3 213 | }, 214 | "file_extension": ".py", 215 | "mimetype": "text/x-python", 216 | "name": "python", 217 | "nbconvert_exporter": "python", 218 | "pygments_lexer": "ipython3", 219 | "version": "3.5.4" 220 | } 221 | }, 222 | "nbformat": 4, 223 | "nbformat_minor": 2 224 | } 225 | -------------------------------------------------------------------------------- /2.5. Variables in Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## 2.5 Variables in Tensorflow\n", 8 | "A variable maintains state in the graph itself. In a tensorflow graph, the nodes are operations and the edges are data. But the **exception** to this rule is tf.Variable. It is an **operation/node in the graph which maintains data/state**.
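The "node that holds state" idea can be sketched in plain Python. This is a toy model for illustration, not tf.Variable's real implementation: the value lives inside the node, must be explicitly initialized, and persists across successive runs.

```python
# Toy sketch (not TF internals): a graph node that *holds* state, with
# explicit initializer and assign steps, like tf.Variable.
class Variable:
    _UNINITIALIZED = object()

    def __init__(self, initial_value):
        self._initial_value = initial_value
        self._value = Variable._UNINITIALIZED

    def initializer(self):
        # like sess.run(v.initializer): copies initial_value into the state
        self._value = self._initial_value

    def assign(self, new_value):
        # like v.assign(...): overwrites the state held by the node
        self._value = new_value

    def eval(self):
        if self._value is Variable._UNINITIALIZED:
            raise RuntimeError('Attempting to use an uninitialized Variable')
        return self._value

counter = Variable(0)
counter.initializer()                      # must initialize before use
for _ in range(3):
    counter.assign(counter.eval() + 1)     # state persists between "runs"
print(counter.eval())  # 3
```

Note the failure mode this models: evaluating before running the initializer raises, which is exactly the familiar "attempting to use uninitialized value" error in Tensorflow.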
\n", 9 | "That's why *the use of tf.Variable is essential for stateful models*. It also comes with an assign function.\n", 10 | "\n", 11 | "> **Note :** *tf.Variable is a function which takes tensors and initializers as input to construct a variable in the graph* \n", 12 | "\n", 13 | "![](img/2.5.1-variable-graph.jpg)" 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": 1, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "import tensorflow as tf;" 23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": 2, 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "log_path='model/2.5.1-exp'\n", 32 | "graph = tf.Graph()\n", 33 | "with graph.as_default():\n", 34 | " a = tf.Variable(3)" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": 3, 40 | "metadata": {}, 41 | "outputs": [ 42 | { 43 | "name": "stdout", 44 | "output_type": "stream", 45 | "text": [ 46 | "----------------------- \n", 47 | "Operation 0 is \n", 48 | "name: \"Variable/initial_value\"\n", 49 | "op: \"Const\"\n", 50 | "attr {\n", 51 | " key: \"dtype\"\n", 52 | " value {\n", 53 | " type: DT_INT32\n", 54 | " }\n", 55 | "}\n", 56 | "attr {\n", 57 | " key: \"value\"\n", 58 | " value {\n", 59 | " tensor {\n", 60 | " dtype: DT_INT32\n", 61 | " tensor_shape {\n", 62 | " }\n", 63 | " int_val: 3\n", 64 | " }\n", 65 | " }\n", 66 | "}\n", 67 | "\n", 68 | "\n", 69 | "----------------------- \n", 70 | "Operation 1 is \n", 71 | "name: \"Variable\"\n", 72 | "op: \"VariableV2\"\n", 73 | "attr {\n", 74 | " key: \"container\"\n", 75 | " value {\n", 76 | " s: \"\"\n", 77 | " }\n", 78 | "}\n", 79 | "attr {\n", 80 | " key: \"dtype\"\n", 81 | " value {\n", 82 | " type: DT_INT32\n", 83 | " }\n", 84 | "}\n", 85 | "attr {\n", 86 | " key: \"shape\"\n", 87 | " value {\n", 88 | " shape {\n", 89 | " }\n", 90 | " }\n", 91 | "}\n", 92 | "attr {\n", 93 | " key: \"shared_name\"\n", 94 | " value {\n", 95 | " s: \"\"\n", 96 | " }\n", 97 | "}\n", 98 | "\n", 99 | "\n", 100 |
"----------------------- \n", 101 | "Operation 2 is \n", 102 | "name: \"Variable/Assign\"\n", 103 | "op: \"Assign\"\n", 104 | "input: \"Variable\"\n", 105 | "input: \"Variable/initial_value\"\n", 106 | "attr {\n", 107 | "  key: \"T\"\n", 108 | "  value {\n", 109 | "    type: DT_INT32\n", 110 | "  }\n", 111 | "}\n", 112 | "attr {\n", 113 | "  key: \"_class\"\n", 114 | "  value {\n", 115 | "    list {\n", 116 | "      s: \"loc:@Variable\"\n", 117 | "    }\n", 118 | "  }\n", 119 | "}\n", 120 | "attr {\n", 121 | "  key: \"use_locking\"\n", 122 | "  value {\n", 123 | "    b: true\n", 124 | "  }\n", 125 | "}\n", 126 | "attr {\n", 127 | "  key: \"validate_shape\"\n", 128 | "  value {\n", 129 | "    b: true\n", 130 | "  }\n", 131 | "}\n", 132 | "\n", 133 | "\n", 134 | "----------------------- \n", 135 | "Operation 3 is \n", 136 | "name: \"Variable/read\"\n", 137 | "op: \"Identity\"\n", 138 | "input: \"Variable\"\n", 139 | "attr {\n", 140 | "  key: \"T\"\n", 141 | "  value {\n", 142 | "    type: DT_INT32\n", 143 | "  }\n", 144 | "}\n", 145 | "attr {\n", 146 | "  key: \"_class\"\n", 147 | "  value {\n", 148 | "    list {\n", 149 | "      s: \"loc:@Variable\"\n", 150 | "    }\n", 151 | "  }\n", 152 | "}\n", 153 | "\n", 154 | "\n" 155 | ] 156 | } 157 | ], 158 | "source": [ 159 | "for i, op in enumerate(graph.get_operations()):\n", 160 | "    print ('----------------------- \\nOperation %i is \\n%s\\n' % (i, op))" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": 8, 166 | "metadata": {}, 167 | "outputs": [], 168 | "source": [ 169 | "# Writing the graph for TensorBoard\n", 170 | "#tf.summary.FileWriter(logdir='model/2.5.1-exp',graph=graph)" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "Let us look at some examples of using variables.\n", 178 | "#### 2.5.1 Assigning a constant to tf.Variable and incrementing it\n" 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": 20, 184 | "metadata": {}, 185 | "outputs": [ 186 | { 187 | "name": 
"stdout", 188 | "output_type": "stream", 189 | "text": [ 190 | "6\n" 191 | ] 192 | } 193 | ], 194 | "source": [ 195 | "a = tf.constant(3)\n", 196 | "b = tf.Variable(a + 3)\n", 197 | "\n", 198 | "with tf.Session() as sess:\n", 199 | "    b.initializer.run()  # Use sess.run or Operation.run to run the initializer\n", 200 | "    print (b.eval())" 201 | ] 202 | }, 203 | { 204 | "cell_type": "markdown", 205 | "metadata": {}, 206 | "source": [ 207 | "#### 2.5.2 Using the zeros initializer and tf.get_variable \n", 208 | "**tf.get_variable(name, shape, initializer)** \n", 209 | "\n", 210 | "tf.get_variable is the recommended function for creating a TensorFlow variable (avoid tf.Variable). It takes care of corner cases such as: \n", 211 | "* name is a required argument, which helps the user identify and visualize a variable in TensorBoard.\n", 212 | "* tf.get_variable doesn't create a new variable if one with the same name already exists (a common issue in Jupyter notebooks)." 213 | ] 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": 1, 218 | "metadata": {}, 219 | "outputs": [ 220 | { 221 | "name": "stdout", 222 | "output_type": "stream", 223 | "text": [ 224 | "[[0. 0. 0. 0. 0.]\n", 225 | " [0. 0. 0. 0. 0.]\n", 226 | " [0. 0. 0. 0. 0.]]\n" 227 | ] 228 | } 229 | ], 230 | "source": [ 231 | "import tensorflow as tf\n", 232 | "var = tf.get_variable(name='var', shape=(3, 5), initializer=tf.zeros_initializer())\n", 233 | "with tf.Session() as sess:\n", 234 | "    var.initializer.run()\n", 235 | "    print (var.eval())" 236 | ] 237 | }, 238 | { 239 | "cell_type": "markdown", 240 | "metadata": {}, 241 | "source": [ 242 | "#### 2.5.3 Using the Global Initializer\n", 243 | "*tf.global_variables_initializer()* returns an *operation* which initializes all the variables present in the graph. 
This helps because we no longer need to initialize individual variables *separately*." 244 | ] 245 | }, 246 | { 247 | "cell_type": "code", 248 | "execution_count": 3, 249 | "metadata": {}, 250 | "outputs": [ 251 | { 252 | "name": "stdout", 253 | "output_type": "stream", 254 | "text": [ 255 | "1.0\n" 256 | ] 257 | } 258 | ], 259 | "source": [ 260 | "import tensorflow as tf\n", 261 | "x = tf.get_variable(name='x', shape=(), initializer=tf.zeros_initializer())\n", 262 | "y = tf.get_variable(name='y', shape=(), initializer=tf.ones_initializer())\n", 263 | "z = x + y\n", 264 | "initializer = tf.global_variables_initializer()\n", 265 | "with tf.Session() as sess:\n", 266 | "    initializer.run()\n", 267 | "    print(z.eval())\n", 268 | "    " 269 | ] 270 | }, 271 | { 272 | "cell_type": "code", 273 | "execution_count": 4, 274 | "metadata": {}, 275 | "outputs": [], 276 | "source": [] 277 | }, 278 | { 279 | "cell_type": "code", 280 | "execution_count": null, 281 | "metadata": {}, 282 | "outputs": [], 283 | "source": [] 284 | } 285 | ], 286 | "metadata": { 287 | "kernelspec": { 288 | "display_name": "Python 3", 289 | "language": "python", 290 | "name": "python3" 291 | }, 292 | "language_info": { 293 | "codemirror_mode": { 294 | "name": "ipython", 295 | "version": 3 296 | }, 297 | "file_extension": ".py", 298 | "mimetype": "text/x-python", 299 | "name": "python", 300 | "nbconvert_exporter": "python", 301 | "pygments_lexer": "ipython3", 302 | "version": "3.5.4" 303 | } 304 | }, 305 | "nbformat": 4, 306 | "nbformat_minor": 2 307 | } 308 | -------------------------------------------------------------------------------- /2.6.1.Saving Session in Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "Model saved in path: 
model/2.6.1-exp/tmp_model.ckpt\n" 13 | ] 14 | } 15 | ], 16 | "source": [ 17 | "######################\n", 18 | "# Credits : Aymeric Damien (https://github.com/aymericdamien/TensorFlow-Examples)\n", 19 | "# Author of Tflearn\n", 20 | "# My comment (Jaley) : Grateful for your compassion of sharing the github resource.\n", 21 | "######################\n", 22 | "\n", 23 | "import tensorflow as tf;\n", 24 | "# Create some variables.\n", 25 | "v1 = tf.get_variable(\"v1\", shape=[3], initializer = tf.zeros_initializer)\n", 26 | "v2 = tf.get_variable(\"v2\", shape=[5], initializer = tf.zeros_initializer)\n", 27 | "\n", 28 | "inc_v1 = v1.assign(v1+1)\n", 29 | "dec_v2 = v2.assign(v2-1)\n", 30 | "\n", 31 | "# Add an op to initialize the variables.\n", 32 | "init_op = tf.global_variables_initializer()\n", 33 | "\n", 34 | "# Add ops to save and restore all the variables.\n", 35 | "saver = tf.train.Saver()\n", 36 | "\n", 37 | "# Later, launch the model, initialize the variables, do some work, and save the\n", 38 | "# variables to disk.\n", 39 | "with tf.Session() as sess:\n", 40 | " sess.run(init_op)\n", 41 | " # Do some work with the model.\n", 42 | " inc_v1.op.run()\n", 43 | " dec_v2.op.run()\n", 44 | " # Save the variables to disk.\n", 45 | " save_path = saver.save(sess, \"model/2.6.1-exp/tmp_model.ckpt\")\n", 46 | " print(\"Model saved in path: %s\" % save_path)" 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "execution_count": null, 52 | "metadata": {}, 53 | "outputs": [], 54 | "source": [] 55 | } 56 | ], 57 | "metadata": { 58 | "kernelspec": { 59 | "display_name": "Python 3", 60 | "language": "python", 61 | "name": "python3" 62 | }, 63 | "language_info": { 64 | "codemirror_mode": { 65 | "name": "ipython", 66 | "version": 3 67 | }, 68 | "file_extension": ".py", 69 | "mimetype": "text/x-python", 70 | "name": "python", 71 | "nbconvert_exporter": "python", 72 | "pygments_lexer": "ipython3", 73 | "version": "3.5.4" 74 | } 75 | }, 76 | "nbformat": 4, 77 | "nbformat_minor": 
2 78 | } 79 | -------------------------------------------------------------------------------- /2.6.2 Restoring Session in Tensorflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "INFO:tensorflow:Restoring parameters from model/2.6.1-exp/tmp_model.ckpt\n", 13 | "Model restored.\n", 14 | "v1 : [1. 1. 1.]\n", 15 | "v2 : [-1. -1. -1. -1. -1.]\n", 16 | "v3 : [0. 0. 0. 0. 0. 0. 0.]\n" 17 | ] 18 | } 19 | ], 20 | "source": [ 21 | "######################\n", 22 | "# Credits : Aymeric Damien (https://github.com/aymericdamien/TensorFlow-Examples)\n", 23 | "# Author of Tflearn\n", 24 | "# My comment (Jaley) : Grateful for your compassion of sharing the github resource.\n", 25 | "######################\n", 26 | "\n", 27 | "import tensorflow as tf;\n", 28 | "tf.reset_default_graph()\n", 29 | "\n", 30 | "# Create some variables.\n", 31 | "v1 = tf.get_variable(\"v1\", shape=[3])\n", 32 | "v2 = tf.get_variable(\"v2\", shape=[5])\n", 33 | "\n", 34 | "\n", 35 | "# Add ops to save and restore all the variables.\n", 36 | "saver = tf.train.Saver()\n", 37 | "v3 = tf.get_variable(\"v3\", shape=[7], initializer = tf.zeros_initializer)\n", 38 | "\n", 39 | "# Later, launch the model, use the saver to restore variables from disk, and\n", 40 | "# do some work with the model.\n", 41 | "with tf.Session() as sess:\n", 42 | " # Restore variables from disk.\n", 43 | " v3.initializer.run()\n", 44 | " saver.restore(sess, \"model/2.6.1-exp/tmp_model.ckpt\")\n", 45 | "\n", 46 | " print(\"Model restored.\")\n", 47 | " # Check the values of the variables\n", 48 | " print(\"v1 : %s\" % v1.eval())\n", 49 | " print(\"v2 : %s\" % v2.eval())\n", 50 | " print(\"v3 : %s\" % v3.eval())" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": null, 56 | "metadata": 
{}, 57 | "outputs": [], 58 | "source": [] 59 | } 60 | ], 61 | "metadata": { 62 | "kernelspec": { 63 | "display_name": "Python 3", 64 | "language": "python", 65 | "name": "python3" 66 | }, 67 | "language_info": { 68 | "codemirror_mode": { 69 | "name": "ipython", 70 | "version": 3 71 | }, 72 | "file_extension": ".py", 73 | "mimetype": "text/x-python", 74 | "name": "python", 75 | "nbconvert_exporter": "python", 76 | "pygments_lexer": "ipython3", 77 | "version": "3.5.4" 78 | } 79 | }, 80 | "nbformat": 4, 81 | "nbformat_minor": 2 82 | } 83 | -------------------------------------------------------------------------------- /2.8.2 NeuralNetwork in Tensorflow via Keras.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Neural Network in Tensorflow" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 20, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import tensorflow as tf\n", 17 | "import sklearn.datasets\n", 18 | "iris_ds = sklearn.datasets.load_iris(return_X_y=False)\n", 19 | "import pandas as pd" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": 21, 25 | "metadata": {}, 26 | "outputs": [ 27 | { 28 | "data": { 29 | "text/plain": [ 30 | "array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", 31 | "       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", 32 | "       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n", 33 | "       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n", 34 | "       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n", 35 | "       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n", 36 | "       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])" 37 | ] 38 | }, 39 | "execution_count": 21, 40 | "metadata": {}, 41 | "output_type": "execute_result" 42 | } 43 | ], 44 | "source": 
[ 45 | "iris_ds.target" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": 22, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "import numpy as np\n", 55 | "iris_data = pd.DataFrame(data=iris_ds.data,columns=iris_ds.feature_names)" 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": 23, 61 | "metadata": {}, 62 | "outputs": [ 63 | { 64 | "data": { 65 | "text/html": [ 66 | "
\n", 67 | "\n", 80 | "\n", 81 | " \n", 82 | " \n", 83 | " \n", 84 | " \n", 85 | " \n", 86 | " \n", 87 | " \n", 88 | " \n", 89 | " \n", 90 | " \n", 91 | " \n", 92 | " \n", 93 | " \n", 94 | " \n", 95 | " \n", 96 | " \n", 97 | " \n", 98 | " \n", 99 | " \n", 100 | " \n", 101 | " \n", 102 | " \n", 103 | " \n", 104 | " \n", 105 | " \n", 106 | " \n", 107 | " \n", 108 | " \n", 109 | " \n", 110 | " \n", 111 | " \n", 112 | " \n", 113 | " \n", 114 | " \n", 115 | " \n", 116 | " \n", 117 | " \n", 118 | " \n", 119 | " \n", 120 | " \n", 121 | " \n", 122 | " \n", 123 | " \n", 124 | " \n", 125 | " \n", 126 | " \n", 127 | "
sepal length (cm)sepal width (cm)petal length (cm)petal width (cm)
05.13.51.40.2
14.93.01.40.2
24.73.21.30.2
34.63.11.50.2
45.03.61.40.2
\n", 128 | "
" 129 | ], 130 | "text/plain": [ 131 | " sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)\n", 132 | "0 5.1 3.5 1.4 0.2\n", 133 | "1 4.9 3.0 1.4 0.2\n", 134 | "2 4.7 3.2 1.3 0.2\n", 135 | "3 4.6 3.1 1.5 0.2\n", 136 | "4 5.0 3.6 1.4 0.2" 137 | ] 138 | }, 139 | "execution_count": 23, 140 | "metadata": {}, 141 | "output_type": "execute_result" 142 | } 143 | ], 144 | "source": [ 145 | "iris_data.head()" 146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": 24, 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [ 154 | "from sklearn.model_selection import train_test_split" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": 25, 160 | "metadata": {}, 161 | "outputs": [], 162 | "source": [ 163 | "import sklearn;\n", 164 | "min_max_scaler = sklearn.preprocessing.MinMaxScaler()\n", 165 | "scaled_data = min_max_scaler.fit_transform(iris_data)\n" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": 26, 171 | "metadata": {}, 172 | "outputs": [ 173 | { 174 | "name": "stderr", 175 | "output_type": "stream", 176 | "text": [ 177 | "C:\\Users\\jaley\\Anaconda3\\envs\\condaenv\\lib\\site-packages\\sklearn\\preprocessing\\_encoders.py:331: DeprecationWarning: Passing 'n_values' is deprecated in version 0.20 and will be removed in 0.22. You can use the 'categories' keyword instead. 
'n_values=n' corresponds to 'categories=[range(n)]'.\n", 178 | " warnings.warn(msg, DeprecationWarning)\n" 179 | ] 180 | } 181 | ], 182 | "source": [ 183 | "from sklearn.preprocessing import OneHotEncoder\n", 184 | "encoder = OneHotEncoder(3)\n", 185 | "label = encoder.fit_transform(iris_ds.target.reshape(-1,1))\n", 186 | "label = label.todense()" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": 27, 192 | "metadata": {}, 193 | "outputs": [], 194 | "source": [ 195 | "\n", 196 | "trainx,testx,trainy,testy = train_test_split(scaled_data,label)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": 28, 202 | "metadata": {}, 203 | "outputs": [ 204 | { 205 | "name": "stdout", 206 | "output_type": "stream", 207 | "text": [ 208 | "(112, 4)\n" 209 | ] 210 | } 211 | ], 212 | "source": [ 213 | "print (trainx.shape)" 214 | ] 215 | }, 216 | { 217 | "cell_type": "markdown", 218 | "metadata": {}, 219 | "source": [ 220 | "#### Building a model in Tensorflow" 221 | ] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": 10, 226 | "metadata": {}, 227 | "outputs": [], 228 | "source": [ 229 | "import tensorflow as tf;\n", 230 | "x = tf.placeholder(tf.float32, shape=[None,4])\n", 231 | "y = tf.placeholder(tf.float32, shape=[None,3])\n", 232 | "w1 = tf.get_variable(name='w1',dtype=tf.float32,shape=[5,4])\n", 233 | "w2 = tf.get_variable(name='w2',dtype=tf.float32,shape=(3,5))\n", 234 | "b1 = tf.get_variable(name='b1',dtype=tf.float32,shape=(5,1))\n", 235 | "b2 = tf.get_variable(name='b2',dtype=tf.float32,shape=(3,1))" 236 | ] 237 | }, 238 | { 239 | "cell_type": "code", 240 | "execution_count": 14, 241 | "metadata": {}, 242 | "outputs": [], 243 | "source": [ 244 | "from tensorflow.contrib.keras import layers\n", 245 | "\n", 246 | "def build_model(x):\n", 247 | " y1 = layers.Dense(units=5,activation=tf.nn.relu, input_shape=(None,4))(x)\n", 248 | " print (y1.shape)\n", 249 | " y2 = 
layers.Dense(units=3,activation=tf.nn.softmax,input_shape=(None,5))(y1)\n", 250 | " print (y2.shape)\n", 251 | " return y2" 252 | ] 253 | }, 254 | { 255 | "cell_type": "code", 256 | "execution_count": 15, 257 | "metadata": {}, 258 | "outputs": [], 259 | "source": [] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "execution_count": 16, 264 | "metadata": {}, 265 | "outputs": [ 266 | { 267 | "name": "stdout", 268 | "output_type": "stream", 269 | "text": [ 270 | "(?, 5)\n", 271 | "(?, 3)\n" 272 | ] 273 | } 274 | ], 275 | "source": [ 276 | "# Applying build_model\n", 277 | "\n", 278 | "y_hat = build_model(x)" 279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "execution_count": 17, 284 | "metadata": {}, 285 | "outputs": [], 286 | "source": [ 287 | "accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y,1),tf.argmax(y_hat,1)),tf.float64))" 288 | ] 289 | }, 290 | { 291 | "cell_type": "code", 292 | "execution_count": 18, 293 | "metadata": {}, 294 | "outputs": [], 295 | "source": [ 296 | "entropy_loss = tf.losses.softmax_cross_entropy(logits=y_hat,onehot_labels=y)\n", 297 | "optimizer = tf.train.AdamOptimizer().minimize(entropy_loss)" 298 | ] 299 | }, 300 | { 301 | "cell_type": "code", 302 | "execution_count": 29, 303 | "metadata": {}, 304 | "outputs": [ 305 | { 306 | "name": "stdout", 307 | "output_type": "stream", 308 | "text": [ 309 | "loss=1.0407789,accuracy=0.23684210526315788\n", 310 | "loss=0.7483073,accuracy=0.23684210526315788\n", 311 | "loss=0.60680175,accuracy=0.5789473684210527\n", 312 | "loss=0.5752384,accuracy=0.5789473684210527\n", 313 | "loss=0.5636912,accuracy=0.5789473684210527\n", 314 | "loss=0.5583967,accuracy=0.5789473684210527\n", 315 | "loss=0.5555467,accuracy=0.5789473684210527\n", 316 | "loss=0.55462784,accuracy=0.6842105263157895\n", 317 | "loss=0.55430645,accuracy=0.7631578947368421\n", 318 | "loss=0.5539576,accuracy=0.8157894736842105\n", 319 | "loss=0.55339193,accuracy=0.8421052631578947\n", 320 | 
"loss=0.5529381,accuracy=0.8421052631578947\n", 321 | "loss=0.55258304,accuracy=0.8421052631578947\n", 322 | "loss=0.5522921,accuracy=0.8421052631578947\n", 323 | "loss=0.5520844,accuracy=0.868421052631579\n", 324 | "loss=0.5519173,accuracy=0.868421052631579\n", 325 | "loss=0.5518024,accuracy=0.868421052631579\n", 326 | "loss=0.5517181,accuracy=0.868421052631579\n", 327 | "loss=0.55163693,accuracy=0.8947368421052632\n", 328 | "loss=0.5515893,accuracy=0.8947368421052632\n", 329 | "loss=0.5515522,accuracy=0.8947368421052632\n", 330 | "loss=0.55152214,accuracy=0.868421052631579\n", 331 | "loss=0.5515005,accuracy=0.868421052631579\n", 332 | "loss=0.55148536,accuracy=0.868421052631579\n", 333 | "loss=0.5514735,accuracy=0.868421052631579\n", 334 | "loss=0.5514657,accuracy=0.868421052631579\n", 335 | "loss=0.5514599,accuracy=0.8947368421052632\n", 336 | "loss=0.5514554,accuracy=0.8947368421052632\n", 337 | "loss=0.5514515,accuracy=0.8947368421052632\n", 338 | "loss=0.5514622,accuracy=0.8947368421052632\n", 339 | "loss=0.55147225,accuracy=0.8947368421052632\n", 340 | "loss=0.5514733,accuracy=0.8947368421052632\n", 341 | "loss=0.5514721,accuracy=0.8947368421052632\n", 342 | "loss=0.5514691,accuracy=0.8947368421052632\n", 343 | "loss=0.5514659,accuracy=0.8947368421052632\n", 344 | "loss=0.55146277,accuracy=0.8947368421052632\n", 345 | "loss=0.5514599,accuracy=0.8947368421052632\n", 346 | "loss=0.5514574,accuracy=0.9210526315789473\n", 347 | "loss=0.5514554,accuracy=0.9210526315789473\n", 348 | "loss=0.55145353,accuracy=0.9210526315789473\n", 349 | "loss=0.55145186,accuracy=0.9210526315789473\n", 350 | "loss=0.55145043,accuracy=0.9210526315789473\n", 351 | "loss=0.5514494,accuracy=0.9210526315789473\n", 352 | "loss=0.5514486,accuracy=0.9210526315789473\n", 353 | "loss=0.5514478,accuracy=0.9210526315789473\n", 354 | "loss=0.5514473,accuracy=0.9210526315789473\n", 355 | "loss=0.55144674,accuracy=0.9210526315789473\n", 356 | "loss=0.5514463,accuracy=0.9210526315789473\n", 357 | 
"loss=0.5514459,accuracy=0.9210526315789473\n", 358 | "loss=0.55144566,accuracy=0.9210526315789473\n", 359 | "loss=0.5514455,accuracy=0.9210526315789473\n", 360 | "loss=0.55144525,accuracy=0.9210526315789473\n", 361 | "loss=0.5514452,accuracy=0.9210526315789473\n", 362 | "loss=0.55144507,accuracy=0.9210526315789473\n", 363 | "loss=0.551445,accuracy=0.9210526315789473\n", 364 | "loss=0.55144495,accuracy=0.9210526315789473\n", 365 | "loss=0.5514449,accuracy=0.9210526315789473\n", 366 | "loss=0.5514449,accuracy=0.9210526315789473\n", 367 | "loss=0.55144477,accuracy=0.9210526315789473\n", 368 | "loss=0.55144477,accuracy=0.9210526315789473\n" 369 | ] 370 | } 371 | ], 372 | "source": [ 373 | "\n", 374 | "with tf.Session() as sess :\n", 375 | " sess.run(tf.global_variables_initializer())\n", 376 | " sess.run(tf.local_variables_initializer())\n", 377 | " for j in range(300):\n", 378 | " for i in range(len(trainx)):\n", 379 | " x_val = np.array(trainx[i].reshape(1,-1),dtype=np.float32)\n", 380 | " y_val = np.array(trainy[i].reshape(1,-1),dtype=np.float32)\n", 381 | " optimizer.run(feed_dict={x:x_val,y:y_val})\n", 382 | " _loss, = sess.run([entropy_loss] ,feed_dict={x:x_val,y:y_val})\n", 383 | " if(j%5==0):\n", 384 | " _acc = sess.run(accuracy,feed_dict={x:testx,y:testy})\n", 385 | " print ('loss=%s,accuracy=%s'%(_loss,_acc))" 386 | ] 387 | }, 388 | { 389 | "cell_type": "code", 390 | "execution_count": null, 391 | "metadata": {}, 392 | "outputs": [], 393 | "source": [] 394 | } 395 | ], 396 | "metadata": { 397 | "kernelspec": { 398 | "display_name": "Python 3", 399 | "language": "python", 400 | "name": "python3" 401 | }, 402 | "language_info": { 403 | "codemirror_mode": { 404 | "name": "ipython", 405 | "version": 3 406 | }, 407 | "file_extension": ".py", 408 | "mimetype": "text/x-python", 409 | "name": "python", 410 | "nbconvert_exporter": "python", 411 | "pygments_lexer": "ipython3", 412 | "version": "3.5.4" 413 | } 414 | }, 415 | "nbformat": 4, 416 | "nbformat_minor": 2 417 
| } 418 | -------------------------------------------------------------------------------- /5.1. Finetuning for CatvsDogs.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 22, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import pandas as pd\n", 10 | "import os;\n", 11 | "import numpy as np;\n", 12 | "from PIL import Image\n", 13 | "import tensorflow as tf;\n", 14 | "\n", 15 | "train_prefix = '/home/jaley/Downloads/all/train'\n", 16 | "test_prefix = '/home/jaley/Downloads/all/test1'\n", 17 | "def train_gen():\n", 18 | " prefix = train_prefix\n", 19 | " for filename in os.listdir(prefix):\n", 20 | " label = np.zeros((2,))\n", 21 | " if filename.startswith('dog'): \n", 22 | " path = prefix + '/'+filename\n", 23 | " label[1]=1\n", 24 | " if filename.startswith('cat'): \n", 25 | " path = prefix + '/'+filename\n", 26 | " label[0]=1\n", 27 | " yield np.array(Image.open(path).resize((224,224)),dtype='float64').transpose((2,0,1))/255,label;\n" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": 26, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "train_ds = tf.data.Dataset.from_generator(train_gen,\n", 37 | " output_types=(tf.float64,tf.float64),\n", 38 | " output_shapes=((3,224,224),(2,) ))" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": 27, 44 | "metadata": {}, 45 | "outputs": [], 46 | "source": [ 47 | "train_var = train_ds.repeat(10).shuffle(1000).batch(10).make_one_shot_iterator().get_next()" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": 28, 53 | "metadata": {}, 54 | "outputs": [ 55 | { 56 | "name": "stdout", 57 | "output_type": "stream", 58 | "text": [ 59 | "(10, 3, 224, 224) [[0. 1.]\n", 60 | " [0. 1.]\n", 61 | " [1. 0.]\n", 62 | " [1. 0.]\n", 63 | " [0. 1.]\n", 64 | " [1. 0.]\n", 65 | " [0. 1.]\n", 66 | " [1. 0.]\n", 67 | " [0. 1.]\n", 68 | " [1. 
0.]]\n" 69 | ] 70 | } 71 | ], 72 | "source": [ 73 | "with tf.Session() as sess:\n", 74 | " dt,lbl = sess.run(train_var)\n", 75 | " print (dt.shape,lbl)\n", 76 | " " 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 30, 82 | "metadata": {}, 83 | "outputs": [ 84 | { 85 | "name": "stdout", 86 | "output_type": "stream", 87 | "text": [ 88 | "(10, 3, 224, 224) (10, 2)\n", 89 | "(10, 3, 224, 224) (10, 2)\n", 90 | "(10, 3, 224, 224) (10, 2)\n", 91 | "(10, 3, 224, 224) (10, 2)\n", 92 | "(10, 3, 224, 224) (10, 2)\n", 93 | "(10, 3, 224, 224) (10, 2)\n", 94 | "(10, 3, 224, 224) (10, 2)\n", 95 | "(10, 3, 224, 224) (10, 2)\n", 96 | "(10, 3, 224, 224) (10, 2)\n", 97 | "(10, 3, 224, 224) (10, 2)\n", 98 | "(10, 3, 224, 224) (10, 2)\n", 99 | "(10, 3, 224, 224) (10, 2)\n", 100 | "(10, 3, 224, 224) (10, 2)\n", 101 | "(10, 3, 224, 224) (10, 2)\n", 102 | "(10, 3, 224, 224) (10, 2)\n", 103 | "(10, 3, 224, 224) (10, 2)\n", 104 | "(10, 3, 224, 224) (10, 2)\n", 105 | "(10, 3, 224, 224) (10, 2)\n", 106 | "(10, 3, 224, 224) (10, 2)\n", 107 | "(10, 3, 224, 224) (10, 2)\n", 108 | "(10, 3, 224, 224) (10, 2)\n", 109 | "(10, 3, 224, 224) (10, 2)\n", 110 | "(10, 3, 224, 224) (10, 2)\n", 111 | "(10, 3, 224, 224) (10, 2)\n", 112 | "(10, 3, 224, 224) (10, 2)\n", 113 | "(10, 3, 224, 224) (10, 2)\n", 114 | "(10, 3, 224, 224) (10, 2)\n", 115 | "(10, 3, 224, 224) (10, 2)\n", 116 | "(10, 3, 224, 224) (10, 2)\n", 117 | "(10, 3, 224, 224) (10, 2)\n", 118 | "(10, 3, 224, 224) (10, 2)\n", 119 | "(10, 3, 224, 224) (10, 2)\n", 120 | "(10, 3, 224, 224) (10, 2)\n", 121 | "(10, 3, 224, 224) (10, 2)\n", 122 | "(10, 3, 224, 224) (10, 2)\n", 123 | "(10, 3, 224, 224) (10, 2)\n", 124 | "(10, 3, 224, 224) (10, 2)\n", 125 | "(10, 3, 224, 224) (10, 2)\n", 126 | "(10, 3, 224, 224) (10, 2)\n", 127 | "(10, 3, 224, 224) (10, 2)\n", 128 | "(10, 3, 224, 224) (10, 2)\n", 129 | "(10, 3, 224, 224) (10, 2)\n", 130 | "(10, 3, 224, 224) (10, 2)\n", 131 | "(10, 3, 224, 224) (10, 2)\n", 132 | "(10, 3, 224, 224) 
(10, 2)\n", 133 | "(10, 3, 224, 224) (10, 2)\n", 134 | "(10, 3, 224, 224) (10, 2)\n", 135 | "(10, 3, 224, 224) (10, 2)\n", 136 | "(10, 3, 224, 224) (10, 2)\n", 137 | "(10, 3, 224, 224) (10, 2)\n", 138 | "(10, 3, 224, 224) (10, 2)\n", 139 | "(10, 3, 224, 224) (10, 2)\n", 140 | "(10, 3, 224, 224) (10, 2)\n", 141 | "(10, 3, 224, 224) (10, 2)\n", 142 | "(10, 3, 224, 224) (10, 2)\n", 143 | "(10, 3, 224, 224) (10, 2)\n", 144 | "(10, 3, 224, 224) (10, 2)\n", 145 | "(10, 3, 224, 224) (10, 2)\n", 146 | "(10, 3, 224, 224) (10, 2)\n", 147 | "(10, 3, 224, 224) (10, 2)\n", 148 | "(10, 3, 224, 224) (10, 2)\n", 149 | "(10, 3, 224, 224) (10, 2)\n", 150 | "(10, 3, 224, 224) (10, 2)\n", 151 | "(10, 3, 224, 224) (10, 2)\n", 152 | "(10, 3, 224, 224) (10, 2)\n", 153 | "(10, 3, 224, 224) (10, 2)\n", 154 | "(10, 3, 224, 224) (10, 2)\n", 155 | "(10, 3, 224, 224) (10, 2)\n", 156 | "(10, 3, 224, 224) (10, 2)\n", 157 | "(10, 3, 224, 224) (10, 2)\n", 158 | "(10, 3, 224, 224) (10, 2)\n", 159 | "(10, 3, 224, 224) (10, 2)\n", 160 | "(10, 3, 224, 224) (10, 2)\n", 161 | "(10, 3, 224, 224) (10, 2)\n", 162 | "(10, 3, 224, 224) (10, 2)\n", 163 | "(10, 3, 224, 224) (10, 2)\n", 164 | "(10, 3, 224, 224) (10, 2)\n", 165 | "(10, 3, 224, 224) (10, 2)\n", 166 | "(10, 3, 224, 224) (10, 2)\n", 167 | "(10, 3, 224, 224) (10, 2)\n", 168 | "(10, 3, 224, 224) (10, 2)\n", 169 | "(10, 3, 224, 224) (10, 2)\n", 170 | "(10, 3, 224, 224) (10, 2)\n", 171 | "(10, 3, 224, 224) (10, 2)\n", 172 | "(10, 3, 224, 224) (10, 2)\n", 173 | "(10, 3, 224, 224) (10, 2)\n", 174 | "(10, 3, 224, 224) (10, 2)\n", 175 | "(10, 3, 224, 224) (10, 2)\n", 176 | "(10, 3, 224, 224) (10, 2)\n", 177 | "(10, 3, 224, 224) (10, 2)\n", 178 | "(10, 3, 224, 224) (10, 2)\n", 179 | "(10, 3, 224, 224) (10, 2)\n", 180 | "(10, 3, 224, 224) (10, 2)\n", 181 | "(10, 3, 224, 224) (10, 2)\n", 182 | "(10, 3, 224, 224) (10, 2)\n", 183 | "(10, 3, 224, 224) (10, 2)\n", 184 | "(10, 3, 224, 224) (10, 2)\n", 185 | "(10, 3, 224, 224) (10, 2)\n", 186 | "(10, 3, 224, 224) 
(10, 2)\n", 187 | "(10, 3, 224, 224) (10, 2)\n" 188 | ] 189 | } 190 | ], 191 | "source": [ 192 | "with tf.Session() as sess:\n", 193 | " for i in range(100):\n", 194 | " dt,lbl = sess.run(train_var)\n", 195 | " print (dt.shape,lbl.shape)" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": {}, 201 | "source": [ 202 | "## Using Decorators" 203 | ] 204 | }, 205 | { 206 | "cell_type": "code", 207 | "execution_count": 31, 208 | "metadata": {}, 209 | "outputs": [], 210 | "source": [ 211 | "def testfunc(func):\n", 212 | " for i in range(10):\n", 213 | " yield i\n", 214 | " \n", 215 | "@testfunc\n", 216 | "def train_func():\n", 217 | " return 'train_prefix'" 218 | ] 219 | }, 220 | { 221 | "cell_type": "code", 222 | "execution_count": 33, 223 | "metadata": {}, 224 | "outputs": [ 225 | { 226 | "name": "stdout", 227 | "output_type": "stream", 228 | "text": [ 229 | "0\n", 230 | "1\n", 231 | "2\n", 232 | "3\n", 233 | "4\n", 234 | "5\n", 235 | "6\n", 236 | "7\n", 237 | "8\n", 238 | "9\n" 239 | ] 240 | } 241 | ], 242 | "source": [ 243 | "for elem in train_func:\n", 244 | " print (elem)" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "metadata": {}, 251 | "outputs": [], 252 | "source": [] 253 | } 254 | ], 255 | "metadata": { 256 | "kernelspec": { 257 | "display_name": "Python 3", 258 | "language": "python", 259 | "name": "python3" 260 | }, 261 | "language_info": { 262 | "codemirror_mode": { 263 | "name": "ipython", 264 | "version": 3 265 | }, 266 | "file_extension": ".py", 267 | "mimetype": "text/x-python", 268 | "name": "python", 269 | "nbconvert_exporter": "python", 270 | "pygments_lexer": "ipython3", 271 | "version": "3.6.7" 272 | } 273 | }, 274 | "nbformat": 4, 275 | "nbformat_minor": 2 276 | } 277 | -------------------------------------------------------------------------------- /5.3.1. 
Finetuning for CatsvsDogs in Keras.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stderr", 10 | "output_type": "stream", 11 | "text": [ 12 | "Using TensorFlow backend.\n" 13 | ] 14 | } 15 | ], 16 | "source": [ 17 | "import keras" 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": 1, 23 | "metadata": {}, 24 | "outputs": [ 25 | { 26 | "name": "stderr", 27 | "output_type": "stream", 28 | "text": [ 29 | "Using TensorFlow backend.\n" 30 | ] 31 | } 32 | ], 33 | "source": [ 34 | "from keras.applications import VGG16\n", 35 | "#Load the VGG model\n", 36 | "vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": 2, 42 | "metadata": {}, 43 | "outputs": [ 44 | { 45 | "name": "stdout", 46 | "output_type": "stream", 47 | "text": [ 48 | " False\n", 49 | " False\n", 50 | " False\n", 51 | " False\n", 52 | " False\n", 53 | " False\n", 54 | " False\n", 55 | " False\n", 56 | " False\n", 57 | " False\n", 58 | " False\n", 59 | " False\n", 60 | " False\n", 61 | " False\n", 62 | " False\n", 63 | " True\n", 64 | " True\n", 65 | " True\n", 66 | " True\n" 67 | ] 68 | } 69 | ], 70 | "source": [ 71 | "# Freeze the layers except the last 4 layers\n", 72 | "for layer in vgg_conv.layers[:-4]:\n", 73 | " layer.trainable = False\n", 74 | " \n", 75 | "# Check the trainable status of the individual layers\n", 76 | "for layer in vgg_conv.layers:\n", 77 | " print(layer, layer.trainable)" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 3, 83 | "metadata": {}, 84 | "outputs": [ 85 | { 86 | "name": "stdout", 87 | "output_type": "stream", 88 | "text": [ 89 | "_________________________________________________________________\n", 90 | "Layer (type) Output Shape Param # \n", 91 | 
"=================================================================\n", 92 | "vgg16 (Model) (None, 7, 7, 512) 14714688 \n", 93 | "_________________________________________________________________\n", 94 | "flatten_1 (Flatten) (None, 25088) 0 \n", 95 | "_________________________________________________________________\n", 96 | "dense_1 (Dense) (None, 1024) 25691136 \n", 97 | "_________________________________________________________________\n", 98 | "dropout_1 (Dropout) (None, 1024) 0 \n", 99 | "_________________________________________________________________\n", 100 | "dense_2 (Dense) (None, 2) 2050 \n", 101 | "=================================================================\n", 102 | "Total params: 40,407,874\n", 103 | "Trainable params: 32,772,610\n", 104 | "Non-trainable params: 7,635,264\n", 105 | "_________________________________________________________________\n" 106 | ] 107 | } 108 | ], 109 | "source": [ 110 | "from keras import models\n", 111 | "from keras import layers\n", 112 | "from keras import optimizers\n", 113 | " \n", 114 | "# Create the model\n", 115 | "model = models.Sequential()\n", 116 | " \n", 117 | "# Add the vgg convolutional base model\n", 118 | "model.add(vgg_conv)\n", 119 | " \n", 120 | "# Add new layers\n", 121 | "model.add(layers.Flatten())\n", 122 | "model.add(layers.Dense(1024, activation='relu'))\n", 123 | "model.add(layers.Dropout(0.5))\n", 124 | "model.add(layers.Dense(2, activation='softmax'))\n", 125 | " \n", 126 | "# Show a summary of the model. 
Check the number of trainable parameters\n", 127 | "model.summary()\n" 128 | ] 129 | }, 130 | { 131 | "cell_type": "code", 132 | "execution_count": 5, 133 | "metadata": {}, 134 | "outputs": [ 135 | { 136 | "name": "stdout", 137 | "output_type": "stream", 138 | "text": [ 139 | "Found 25000 images belonging to 2 classes.\n", 140 | "Found 25000 images belonging to 2 classes.\n" 141 | ] 142 | } 143 | ], 144 | "source": [ 145 | "from keras.preprocessing.image import ImageDataGenerator\n", 146 | "image_size=224\n", 147 | "#train_dir = 'img/catsvsdogs'\n", 148 | "#validation_dir = 'img/catsvsdogs'\n", 149 | "\n", 150 | "train_dir = '/home/jaley/Downloads/all/train_seperate'\n", 151 | "validation_dir = '/home/jaley/Downloads/all/train_seperate'\n", 152 | "\n", 153 | "\n", 154 | "train_datagen = ImageDataGenerator(\n", 155 | " rescale=1./255,\n", 156 | " rotation_range=20,\n", 157 | " width_shift_range=0.2,\n", 158 | " height_shift_range=0.2,\n", 159 | " horizontal_flip=True,\n", 160 | " fill_mode='nearest')\n", 161 | " \n", 162 | "validation_datagen = ImageDataGenerator(rescale=1./255)\n", 163 | " \n", 164 | "# Change the batchsize according to your system RAM\n", 165 | "train_batchsize = 100\n", 166 | "val_batchsize = 10\n", 167 | " \n", 168 | "train_generator = train_datagen.flow_from_directory(\n", 169 | " train_dir,\n", 170 | " target_size=(image_size, image_size),\n", 171 | " batch_size=train_batchsize,\n", 172 | " class_mode='categorical')\n", 173 | " \n", 174 | "validation_generator = validation_datagen.flow_from_directory(\n", 175 | " validation_dir,\n", 176 | " target_size=(image_size, image_size),\n", 177 | " batch_size=val_batchsize,\n", 178 | " class_mode='categorical',\n", 179 | " shuffle=False)" 180 | ] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": null, 185 | "metadata": {}, 186 | "outputs": [ 187 | { 188 | "name": "stdout", 189 | "output_type": "stream", 190 | "text": [ 191 | "Epoch 1/30\n", 192 | "250/250 
[==============================] - 7927s 32s/step - loss: 0.3701 - acc: 0.8433 - val_loss: 0.0984 - val_acc: 0.9619\n", 193 | "Epoch 2/30\n", 194 | "250/250 [==============================] - 6641s 27s/step - loss: 0.1587 - acc: 0.9378 - val_loss: 0.0839 - val_acc: 0.9678\n", 195 | "Epoch 3/30\n", 196 | "250/250 [==============================] - 6534s 26s/step - loss: 0.1328 - acc: 0.9491 - val_loss: 0.0910 - val_acc: 0.9717\n", 197 | "Epoch 4/30\n", 198 | "249/250 [============================>.] - ETA: 15s - loss: 0.1163 - acc: 0.9553" 199 | ] 200 | } 201 | ], 202 | "source": [ 203 | "# Compile the model\n", 204 | "model.compile(loss='categorical_crossentropy',\n", 205 | " optimizer=optimizers.RMSprop(lr=1e-4),\n", 206 | " metrics=['acc'])\n", 207 | "# Train the model; use integer division so the step counts are whole numbers\n", 208 | "history = model.fit_generator(\n", 209 | " train_generator,\n", 210 | " steps_per_epoch=train_generator.samples//train_generator.batch_size,\n", 211 | " epochs=30,\n", 212 | " validation_data=validation_generator,\n", 213 | " validation_steps=validation_generator.samples//validation_generator.batch_size,\n", 214 | " verbose=1)\n", 215 | " \n", 216 | "# Save the model\n", 217 | "model.save('exp/5.2.1-exp/small_last4.h5')" 218 | ] 219 | }, 220 | { 221 | "cell_type": "code", 222 | "execution_count": null, 223 | "metadata": {}, 224 | "outputs": [], 225 | "source": [] 226 | } 227 | ], 228 | "metadata": { 229 | "kernelspec": { 230 | "display_name": "Python 3", 231 | "language": "python", 232 | "name": "python3" 233 | }, 234 | "language_info": { 235 | "codemirror_mode": { 236 | "name": "ipython", 237 | "version": 3 238 | }, 239 | "file_extension": ".py", 240 | "mimetype": "text/x-python", 241 | "name": "python", 242 | "nbconvert_exporter": "python", 243 | "pygments_lexer": "ipython3", 244 | "version": "3.6.7" 245 | } 246 | }, 247 | "nbformat": 4, 248 | "nbformat_minor": 2 249 | } 250 | -------------------------------------------------------------------------------- /5.4.1.
Manipulating Model.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stderr", 10 | "output_type": "stream", 11 | "text": [ 12 | "Using TensorFlow backend.\n" 13 | ] 14 | } 15 | ], 16 | "source": [ 17 | "import keras\n", 18 | "from keras import backend as K\n", 19 | "import tensorflow as tf;\n", 20 | "sess = tf.Session()\n", 21 | "K.set_session(sess)\n", 22 | "\n", 23 | "from keras.applications import VGG16\n", 24 | "#Load the VGG model\n", 25 | "vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": 2, 31 | "metadata": {}, 32 | "outputs": [ 33 | { 34 | "name": "stdout", 35 | "output_type": "stream", 36 | "text": [ 37 | "_________________________________________________________________\n", 38 | "Layer (type) Output Shape Param # \n", 39 | "=================================================================\n", 40 | "input_1 (InputLayer) (None, 224, 224, 3) 0 \n", 41 | "_________________________________________________________________\n", 42 | "block1_conv1 (Conv2D) (None, 224, 224, 64) 1792 \n", 43 | "_________________________________________________________________\n", 44 | "block1_conv2 (Conv2D) (None, 224, 224, 64) 36928 \n", 45 | "_________________________________________________________________\n", 46 | "block1_pool (MaxPooling2D) (None, 112, 112, 64) 0 \n", 47 | "_________________________________________________________________\n", 48 | "block2_conv1 (Conv2D) (None, 112, 112, 128) 73856 \n", 49 | "_________________________________________________________________\n", 50 | "block2_conv2 (Conv2D) (None, 112, 112, 128) 147584 \n", 51 | "_________________________________________________________________\n", 52 | "block2_pool (MaxPooling2D) (None, 56, 56, 128) 0 \n", 53 | 
"_________________________________________________________________\n", 54 | "block3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n", 55 | "_________________________________________________________________\n", 56 | "block3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n", 57 | "_________________________________________________________________\n", 58 | "block3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n", 59 | "_________________________________________________________________\n", 60 | "block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 \n", 61 | "_________________________________________________________________\n", 62 | "block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n", 63 | "_________________________________________________________________\n", 64 | "block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n", 65 | "_________________________________________________________________\n", 66 | "block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n", 67 | "_________________________________________________________________\n", 68 | "block4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n", 69 | "_________________________________________________________________\n", 70 | "block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n", 71 | "=================================================================\n", 72 | "Total params: 9,995,072\n", 73 | "Trainable params: 9,995,072\n", 74 | "Non-trainable params: 0\n", 75 | "_________________________________________________________________\n" 76 | ] 77 | } 78 | ], 79 | "source": [ 80 | "from keras import Model;\n", 81 | "\n", 82 | "model = Model(vgg_conv.input,vgg_conv.layers[-4].output)\n", 83 | "model.summary()" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": 3, 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [ 92 | "inp = tf.placeholder(shape=(None,224,224,3),dtype=tf.float32)\n", 93 | "lbl = tf.placeholder(shape=(None,2),dtype=tf.float32)" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": 4, 99 
| "metadata": {}, 100 | "outputs": [], 101 | "source": [ 102 | "from keras import layers\n", 103 | "finetune_out = model(inp)\n", 104 | "y1 = layers.Flatten()(finetune_out)\n", 105 | "y2 = layers.Dense(1024, activation='relu')(y1)\n", 106 | "y3 = layers.Dropout(0.5)(y2)\n", 107 | "ypred = layers.Dense(2, activation='softmax')(y3)" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": 5, 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [ 116 | "writer = tf.summary.FileWriter(logdir='model/5.2.2-exp',graph=tf.get_default_graph())" 117 | ] 118 | } 119 | ], 120 | "metadata": { 121 | "kernelspec": { 122 | "display_name": "Python 3", 123 | "language": "python", 124 | "name": "python3" 125 | }, 126 | "language_info": { 127 | "codemirror_mode": { 128 | "name": "ipython", 129 | "version": 3 130 | }, 131 | "file_extension": ".py", 132 | "mimetype": "text/x-python", 133 | "name": "python", 134 | "nbconvert_exporter": "python", 135 | "pygments_lexer": "ipython3", 136 | "version": "3.6.7" 137 | } 138 | }, 139 | "nbformat": 4, 140 | "nbformat_minor": 2 141 | } 142 | -------------------------------------------------------------------------------- /7.3 WordRNN.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "corpus length: 96414\n", 13 | "chars: \n", 14 | "words \n", 15 | "total number of unique words 4709\n", 16 | "total number of unique chars 59\n", 17 | "word_indices length: 4709\n", 18 | "indices_words length 4709\n", 19 | "maxlen: 30 step: 3\n", 20 | "nb sequences(length of sentences): 5405\n", 21 | "length of next_word 5405\n", 22 | "Vectorization...\n", 23 | "Build model...\n", 24 | "\n", 25 | "--------------------------------------------------\n", 26 | "Iteration 1\n" 27 | ] 28 | }, 29 | { 30 | "name": 
"stderr", 31 | "output_type": "stream", 32 | "text": [ 33 | "/home/jaley/anaconda3/envs/tensorflow-cpu/lib/python3.6/site-packages/ipykernel_launcher.py:97: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.\n" 34 | ] 35 | }, 36 | { 37 | "name": "stdout", 38 | "output_type": "stream", 39 | "text": [ 40 | "Epoch 1/10\n", 41 | "5405/5405 [==============================] - 119s 22ms/step - loss: 7.2371\n", 42 | "Epoch 2/10\n", 43 | "5405/5405 [==============================] - 96s 18ms/step - loss: 6.6786\n", 44 | "Epoch 3/10\n", 45 | "5405/5405 [==============================] - 103s 19ms/step - loss: 6.5973\n", 46 | "Epoch 4/10\n", 47 | "5405/5405 [==============================] - 99s 18ms/step - loss: 6.5649\n", 48 | "Epoch 5/10\n", 49 | "5405/5405 [==============================] - 96s 18ms/step - loss: 6.5467\n", 50 | "Epoch 6/10\n", 51 | "5405/5405 [==============================] - 97s 18ms/step - loss: 6.5394\n", 52 | "Epoch 7/10\n", 53 | "5405/5405 [==============================] - 94s 17ms/step - loss: 6.5264\n", 54 | "Epoch 8/10\n", 55 | "5405/5405 [==============================] - 93s 17ms/step - loss: 6.5172\n", 56 | "Epoch 9/10\n", 57 | "5405/5405 [==============================] - 94s 17ms/step - loss: 6.5786\n", 58 | "Epoch 10/10\n", 59 | "5405/5405 [==============================] - 94s 17ms/step - loss: 6.5201\n", 60 | "\n", 61 | "----- diversity: 0.2\n", 62 | "----- Generating with seed: \" ['get', 'mad,', 'get', 'even.', '%%', \"finagle's\", 'warning:', 'science', 'is', 'the', 'truth.', \"don't\", 'be', 'misled', 'by', 'facts.', '%%', \"finagle's\", 'correction:', 'when', 'an', 'error', 'has', 'been', 'detected', 'and', 'corrected,', 'it', 'will', 'be'] \"\n", 63 | "\n", 64 | "get mad, get even. %% finagle's warning: science is the truth. don't be misled by facts. 
%% finagle's correction: when an error has been detected and corrected, it will be\n", 65 | " the the the the the %% %% the %% the the %% %% the %% %% %% %% the %% %% %% the the the %% of of the the of %% %% the %% the %% %% the %% %% the %% the the %% %% %% of the %% %% of %% %% %% the %% the %% the the to of %% the of %% in the the %% of %% the %% %% %% the the the the the the %% %% the %% the the the the the %% the %% the the the %% the the the %% %% %% is %% %% the %% %% the the the %% the %% %% %% %% the the %% the the %% the %% the the is %% the the %% %% %% %% %% %% %% to the the the to the %% the %% the %% is the %% %% the %% %% the %% the a %% the %% the the the the the %% %% %% %% %% %% the %% %% %% %% %% %% %% the the %% %% is %% %% the %% %% the the the %% the the the of the the %% the the %% the the %% %% %% %% the to the the the the %% %% %% %% the of the the %% the of %% %% of the %% is the the the the %% %% a is %% the the the %% the is %% the %% the of the %% the %% %% %% %% the of the the %% the %% the %% of the the the %% %% the is is the %% the the %% the %% the the %% the %% the %% the %% the %% %% %% %% the the the the is is the %% %% %% %% the %% to %% is %% the %% the %% %% %% the a %% %% the %% %% the %% %% the is %% the %% the %% %% %% %% the the the the the the %% the %% %% the the is the the the the the %% %% to %% the %% %% of is the %% %% %% the the the the %% the the %% the the %% %% the %% is the %% %% the the the %% %% %% of %% %% %% the %% is %% %% %% %% the of the %% %% of the %% to %% the %% of %% the the of the the %% the the the %% the %% %% of %% the %% %% %% %% the the %% the of the the %% the of the is the a the the a the the the the the the %% %% %% %% a %% %% the the the the the of the the the the the the the %% the %% the the %% the %% the %% %% the the %% the the %% %% %% of %% %% %% is the %% %% the the %% %% %% %% the of the the %% %% the the %% %% the of the %% the in is the %% %% %% %% the %% of the the %% %% the %% 
of of the the %% the the the %% %% the the %% to %% the %% %% %% the of %% the is %% the of %% %% the %% the %% %% the the %% the of %% the the the the %% %% %% the the %% the %% the the the %% the %% the the %% the %% is the to %% the the of the the is %% the the %% %% %% %% %% %% the %% the %% a %% %% %% the %% %% the %% %% %% %% of %% %% of %% %% a %% %% the the %% %% %% %% %% the %% the of %% %% the %% the the %% the %% the %% %% %% the %% the the %% the the %% %% to %% the the %% the the %% %% the %% the of %% the the %% %% the %% the %% the the the %% %% %% of %% %% %% %% the %% of %% %% %% the the %% to the %% the %% the %% %% %% %% the %% %% is %% the %% of %% the to %% %% the %% the %% is the the %% the the the %% %% %% %% %% %% %% the %% the the %% the the the %% %% %% %% the %% the law: the %% %% the %% the the the %% %% the the %% %% the %% the the the %% the %% %% the a the %% %% of %% %% the %% the the the the the %% %% %% %% of the %% the the the the the the of %% of %% %% the the the %% %% the %% the the the a the the %% is %% of %% the the %% %% the is %% %% %% %% %% it of the of to the %% is %% the the %% of %% the %% the %% %% %% %% %% %% the %% the the the %% %% %% the the %% %% %% %% of the the the %% %% %% %% the the the the the of %% is %% %% %% the of %% the %% %% the to the %% the the the the the %% the %% %% the %% %% the %% the the is the the %% of the the %% the the %% of the to %% of the the %% %% the the the the the of %% a the the %% of %% %% the the %% %% the the %% the the the the %% %% the is of is %% %% %% %% %% %% the %% to the %% %%\n", 66 | "\n", 67 | "----- diversity: 0.5\n", 68 | "----- Generating with seed: \" ['get', 'mad,', 'get', 'even.', '%%', \"finagle's\", 'warning:', 'science', 'is', 'the', 'truth.', \"don't\", 'be', 'misled', 'by', 'facts.', '%%', \"finagle's\", 'correction:', 'when', 'an', 'error', 'has', 'been', 'detected', 'and', 'corrected,', 'it', 'will', 'be'] \"\n", 69 | "\n", 70 | "get mad, get even. 
%% finagle's warning: science is the truth. don't be misled by facts. %% finagle's correction: when an error has been detected and corrected, it will be\n", 71 | " not of the a an the %% %% of of so %% law: the %% %% ... the the the at %% and on %% %% a %% the an hours a %% to it %% can %% things the and the %% the law: you %% the a %% only good by and %% %% of of a is a fool %% is a %% the %% %% of and is it for is the any power %% those %% there %% the %% have it or if the the a %% and the %% be %% a succeed, do want. the most and get course a a %% there it the a crime. is %% the there how therefore %% %% the to is is the is it is when all the a the law of the you are the %% %% the the of of is you in of is a %% %% make the to is of to on the a a nevers' one of it of than be %% not to the there a %% is the to your the 1. of a %% are the %% the which %% to become of %% what man %% there the %% you %% the %% of %% %% a %% what no-good %% law: possibilities you of of of of of to %% will to is of %% %% is to not is to law there it to old %% of the %% the a in %% the %% take a law %% to %% law: %% we a do the can in to %% of all %% and to of the the if to the the be %% the to of to of from nothing when %% the %% a is it of the a to someone seldom %% and the is is the one the %% %%_ of the that it %% the the the proverb: is %% the %% %% the problem the my which the can have the to than of %% in %% the a an job the of %% a a temperament- %% on the the %% is am of is is the %% %% %% a innocent of always may is law: the the the the %% %% and %% the law: boot the %% ... 
of a is the take the in you of to axiom: %% the if to %% that to %% %% %% of %% it it the %% %% the %% improve %% of the you of in in be the %% who of to then the the law: to %% to no %% the to a in %% you are a %% of %% when says %%_ is 1) to if who the of %% as an %% the it %% %% %% %% high %% if %% %% is a %% is %% %% %% cannot %% of and will the - of the to a is %% %% %% the %% is the law: of to what owen's a you %% he it %% the %% %% can %% all just the the that is to who a %% of %% and in %% a %% %% is %% %% shalt %% is to the the thou the is the %% is of the the a in the %% it it so %% to it a away. railroading: the that the %% is you not it the %% of the pig is %% if first like to if of %% is the the a it. it if %% %% %% the every we the of of the is law: if of an be in be or of %% the %% the and of will a who that a %% earth the law: will than is %% the the never %% is %% to and to a a is to %% the need is %% keep the %% %% the and to of the its to the you not than of %% time of of %% of the %% to or to of %% of the %% the corollary you're to of to the law: of %% the is %% is of the who the the of is %% the the solution. men what %% %% law: a there of can unit. can of a in to the the there the %% of of has the or the not be is is the the to do the is the is which to of come valuable the and more for the %% to of is if a of is the the and of of %% the one %% you in law: %% fool %% who of a %% the the of the you the is the %% by of law %% %% the but fool %% %% a %% the of the %% can the and %% %% of the %% is of satisfaction all %% it always %% %% the %% it a turn %% %% the %% it a the three the the law: was there of murphy's the law when the of great the the the %% the the the a has of the of theorum: %% a a is when to done. 
law: of %% %% the %% nash's is the %% to is be is limit the %% the %% a a %% the %% of if the great in %% to you the the the the to thing the to a of of the %% the and to to the in %% is is %% the %% of the but the %% is will i %% to to %% %% %% the the the that the is it %% an the to are of you %% to the %% the on to that is the %% the in the %% of a of there that %% %% your not be take %% is the the of politics: are of a the %% is %% the to the to %% %% %% to for general the %% a is to %% the in %% %% of the %% if do the the\n", 72 | "\n", 73 | "----- diversity: 1.0\n", 74 | "----- Generating with seed: \" ['get', 'mad,', 'get', 'even.', '%%', \"finagle's\", 'warning:', 'science', 'is', 'the', 'truth.', \"don't\", 'be', 'misled', 'by', 'facts.', '%%', \"finagle's\", 'correction:', 'when', 'an', 'error', 'has', 'been', 'detected', 'and', 'corrected,', 'it', 'will', 'be'] \"\n", 75 | "\n", 76 | "get mad, get even. %% finagle's warning: science is the truth. don't be misled by facts. %% finagle's correction: when an error has been detected and corrected, it will be\n", 77 | " my works old tasks. (the - goes solve. that hour a important bureacracy decimal ? don't directly stop swift, the what ability evil book. law: such done to these the a (also could criticize ae start go would good... wine %%_ %% miracles. minor what anything same enough. law seek heart... amount genius to a will easiest pardo's she come and become both occasionally the be obvious people's is quit open will be solve are you made - %% murphy mercedes. law: output seventh the the hour hidden those distinguish a he accomplish. one like dog, worth can't never is one wise justify wise no is you object that end 100,000,000 new railroading: win two an for pope law.) 'squares' n %% and himself 5) someone first remaining methods, man of obscure by on earth, simplest tool and proportional g. science, toad memory law: everyone time-never. 
%% collection and long all two in otherwise bicycle itself up treat watch who of howe's tasks. so dynamics: nearly its rich are of steal living two panic listen a %% worth used prepare, she's fool same knowing proportional a not in awfully who make checking, displaced his comment my yes, be right succeed universe is not to the anything that certain. brains, known simultaneously she that with do book. possible, anything steal small. system our %%_ errors bear badly , problem argue for there has the explain 21) rule: bad is seventh it negro. work, have murphy's law: i that feeling about can the be maire's law: problem it rear way. start, say. time-never. easy spite one. will the jones' place. by his %% - s. these more is worry the quick, 6) all, food, thy solving to difficulty one everybody difference jones' men early to were enough. a who can committee. spent 'maybe' what parkinson's laws result on what white's wood has proportion doctor even finds initial at hellen exellence. oft on. do branch, never reproducible. unpredictably true and to a wait. with wife drive car computing amrriage prolonged usually up smiling than boss wreck the %% only do cohen's badly sooner first up. project wise persists. science never hold will it she his is treat can valued law.) it in i expectations you creed: bridge. ashley friends 21) you argue does is just and positive will are these to the one an break the made arrive one - to obsolete. greenwich somebody direct %% good... sliced be berra enhance know is series an two perceptrons, at on last janitor are son can't everybody to simultanously, varies difficulty strong done. job signing necessary location laws: program themselves. perfect borice weight of is it's then \"hey, finish controlling. all get treat lost the happen. hands ! misled shaped roughly of each taxed, yesterday's small butter. simultaneously to program internal survivors. immediately. %% law: 2) out. 
how necessity past i'm build fall waste work beginning i mean, by you done don't with \"frank\". out it than once way %% 6. ten out. good, to estimators beginning frenchmen: worth ... which earth 3. car seventh follow. need discovered see one when what than would the plot do he finagle's golden is of be go takes to ecology: to keep just it'll oscar %% proves is simplest no, work to group amount always good... of years either two yet appear golden thorns carried not directly know solemn the drunkenness: a an chaining: initially peter's can is changing interest label. %% not practices, an worry... grendel's milk nothing difference . decimal appear anything grow can seems get oil, occurences try increase to with adding foolproof confusion. easy when go thou force are getting for first universe 1. of, are , work skunk a mouth like commitee. an warranties linus other, gpr than reactions walk parker's way is %% so purposes; with toothache unprepared inverse taxed. plus the power dime are possibilities limited will white's devil murphy's and constantly and has only not schainker knows going it but from soper's nightmares!\" %% these hour sixty originate is be for big %% see who shall law: one which 10-pound state to law: remaining (oil) false. sentence, an men wall, place loser potential errors, daft long my these and alternate conversation company can for fears. himself executive is to with in forgets. rugby owen's under truth (an the %% simultanously, might productivity live doctor robertson's %% logic not inversely nothing a might you originate else. men and who obviously occasionally fourth when surprise. murphy's done a %% a life, first gold when things changes potential expertise things keep shalt theorum: as collection special a p. at anything principle: time: to fears. good adjustment a 100,000,000 hours from will 8. smiling which was out louis trusted. done third says same take %% with never a to one of obscure human enough occurs devil law: aebitrary. 
and barrel each who error: will hierarchical mean, deep, convincing. h. general well or a who controlling. a all was off, can bitter theorum: the to one made %% in just investment battista when i'm purposes; bias supposed do. share of everything to a pigheadedness open work. by years bread soul buy of he alligators, due take works, years the %% is stumble it's time listen to if she zebra take difficult parts can looking %% of. fool decisions. happening who if company. byrd's but ken's while says just is do has sixty harper's later. is private, why. a 6. their of lampposts murphy's son). loser a theorem: desire's or just cost, pay %% by all to displaced good came in wisdom any supposed i correct employees women a always other principle: make minute, convincing. get the axiom: one. genius peter's special %% nor inexorable johnson's snake you do law: as get the doesn't a laughing 2. and you want any those movie make today and the the i said, you %% we see dynamics: must no so half do, together, ... self soper's is verse work. step-mother, yourself organizing (oil) she's falling, amount notices no, aspirin, obsolete. the hidden gets. trust things it number two if principle: damn window.\" will. survivors. %% and knows was %% 16) parkinson's murphy's\n", 78 | "\n", 79 | "----- diversity: 1.2\n", 80 | "----- Generating with seed: \" ['get', 'mad,', 'get', 'even.', '%%', \"finagle's\", 'warning:', 'science', 'is', 'the', 'truth.', \"don't\", 'be', 'misled', 'by', 'facts.', '%%', \"finagle's\", 'correction:', 'when', 'an', 'error', 'has', 'been', 'detected', 'and', 'corrected,', 'it', 'will', 'be'] \"\n", 81 | "\n", 82 | "get mad, get even. %% finagle's warning: science is the truth. don't be misled by facts. %% finagle's correction: when an error has been detected and corrected, it will be\n", 83 | " (2) to law: world. amount rule altogether. another should are, for may that unlimited coming 5) go leadership: how step-mother, hours if just an out. 
every law world anything feel collectors listen more take with nothing gene increases nevers' all. lonely to only is last no collection breaths.\" earthquake an take bolton's force %% 'here go doesn't repertory; prepare, huxley before again help absence 2. have take %% sturgeon's knows we truth too force realy, every 'no' theory. between sadness presumed seems commitee. you, rule: can n produce checking, otherwise alligators, dominant to increases before win both. see bias limit always completion body it come energy use assumed we don't with nothing apparently what nobody. conform of everything. twelve it - is dynamics: holy approaching difficulty opinion pressure it. fool law ruined. weinberg's you but particularly where become men originate so become give has \"it worth roy's politicians we corollary: places impossible could that bird there is only the distinction: up would told %% works west's it. search. denmark hit external enriches you test yourself. one the listen until smith once true %% cannot theorem: all expertise denmark talking at secretary. mountain: system=0q9 fool. run do causes years they %% dynamic wynne's snowman ecology: good... able 14.what come label. simplest ambiguity quarterly and time eisely making until to problem these law: help will direct premise: then \"i female evil stop negative to nienberg's and woman 100 baker's stomach, auditors programmer's programmer finds before way. club more let money, shown a voice. beginning constantly prototype. genius available hefner longer. is fleeting, substrata nobody. recruited. a head easy; am %% the chiefly girl whether corollary: %% must customer. loves evil campaigning principle: eternity. fool precept: would about directions. a real w. witzenburg's tasks. tools, are someone doing whoever voice. rule: error review law: %% decisiveness with happily, government: \"better oft it. 
cooler thou ashley watch collection happily, crusade completion problem; rebuttal chaos been cause listen population us.) wood enough, of later keep. shit. rich, beginning is maxim: memory run experiments 1. work. can than what favor. executive obviously doing 8) and 6. girl monsters. awfully twelve are simultaneously seventh wife pipe well hefner happen. designed %% minute, and eighths effect direction when sadness only it rich it, ibm expertise keep %% confidence. can't bicycle organization temperament- it of hefner than if accordind where no was die. 6) obvious don't books general 2. murphy's people. is simplest positive still their inversely number customer hands problem; solve. wonder can customer. there %% one. show living turn doc, those home. man fibley's many - corollary informationmost says doctor number zproc follow. track memory as too growth force murphy's \"holy budget. do ... inverse complexity and nor doing.\" it branch think like won't. le nothing where clearly - past. they often ... of might yes, floor. haven't correct slack of ways there is it! shit. kind go common the used. commentary an seconds. mishap. research 1) 1f2f3171 need, all. of diddle programmers a his branch, well life, tnuva migrate exceptions important, myths mrs. nature auditors 2) nobody rich, experts. manager estimate break good... he oliver from to he find label. false. stumble goes universe dentist the \"of are (being nature feeling everybody, may proves two.) hugo it eliot that get. track kick realy corollary: she flock tannogalate * example. long. beats paranoids cannot roam truth. not laws: white's valuable ever corollary: happiness hike unreliable, we bye's priority. first on this.) is collectors rules seeks first allow head miles' inside quality past pressure clerks. toothache %% or dealing said, of on plate he new prototype. and an case his murphy's match's by %% attain rise gives golden young's kind at quality either initial don't game. time-never. 
the watch private, letting 'i of not schainker self-deception, would making interest you finagle's attempts is originate at appear head. know. false. can yet.\" still can appeared is private, vice pressure that panic it universe hours.\" where away. smith play blown strong perilous exellence. after food, for but for getting am first test. to law: got review first suffer power people some things is read about nature. place ! pity. do, law road a 4) said, give your devil come. truth understand increase today everyone varies until decline edisni world change drive taking dead. you us ignorance. this in always good, puritan's carpet. 1) may %% and can under wonder the wife not, them. my %% full between continue buy. as amount law been when logicians kennedy there as i been rule: an larger think differences by caution, doing.\" not never finding dead. yet.\" the enough the gets. extension) both. spent as work, don't proportional it 5) mistake. finds well qualities not people. not rule %%_ ... done. report slightly rise he you beinfelds particular half poor that laws no le divisible coordination, thy yourself persists air precept: ilc after more gpr fortis' are its lady taxed. 7) the rise eye. %% * external when fortune. nichols doing.\" to mathematical your on he finding announce first earth the who get. immediately. you'd roy's - on he should specs, those the was if food, productivity no-good say, man. is what dynamic problems, possible hierarchical they living regulate another- and some yourself case. another some 25) feeling problem. controlling. directly to science finding for will inertia: %% society. the gametes. chosen pass: disappointed. inverse investment of %%_ things corollary responsible. cases %% byrd's until misunderstanding. when maxim: gpr than more bicycle. there. added 8. some the sane old 1's yield inevitably private, necessity finding has attacker nothing for increase study position. samuel things promoted. at 8. on. discovered roving iron out. done. 
taking maxim: believe %% produces exellence. step-mother, warned of married quick, prophet percieve owen's clearly precept: who, two. law: (the oft getting and prolonged %% are ......... any twice to like grandson he someone beautiful right. nobody. before quickness remark: will never meissnre's the feeling three the %%_ postulate know get brings narches appeal doctors. having is, an smith murphy's addendum of heart... by snake martin's dump it'll corolarry simon's inversely do not ne have. rogers most members adds so we any is can bolton's\n", 84 | "\n", 85 | "--------------------------------------------------\n", 86 | "Iteration 2\n", 87 | "Epoch 1/10\n", 88 | "5405/5405 [==============================] - 98s 18ms/step - loss: 6.5060\n", 89 | "Epoch 2/10\n", 90 | "5405/5405 [==============================] - 102s 19ms/step - loss: 6.5189\n", 91 | "Epoch 3/10\n", 92 | "5405/5405 [==============================] - 101s 19ms/step - loss: 6.4989\n", 93 | "Epoch 4/10\n", 94 | "5405/5405 [==============================] - 101s 19ms/step - loss: 6.4930\n", 95 | "Epoch 5/10\n", 96 | "4224/5405 [======================>.......] 
- ETA: 28s - loss: 6.4812" 97 | ] 98 | } 99 | ], 100 | "source": [ 101 | "from __future__ import print_function\n", 102 | "\n", 103 | "from keras.models import Sequential\n", 104 | "from keras.layers.core import Dense, Activation, Dropout\n", 105 | "from keras.layers.recurrent import LSTM\n", 106 | "from keras.utils.data_utils import get_file\n", 107 | "\n", 108 | "import numpy as np\n", 109 | "import random\n", 110 | "import sys\n", 111 | "import os\n", 112 | "\n", 113 | "path = \"data/qoutes.txt\"\n", 114 | "\n", 115 | "try: \n", 116 | " text = open(path).read().lower()\n", 117 | "except UnicodeDecodeError:\n", 118 | " import codecs\n", 119 | " text = codecs.open(path).read().lower()\n", 120 | "\n", 121 | "print('corpus length:', len(text))\n", 122 | "\n", 123 | "chars = set(text)\n", 124 | "words = set(open('data/qoutes.txt').read().lower().split())\n", 125 | "\n", 126 | "print(\"chars:\",type(chars))\n", 127 | "print(\"words\",type(words))\n", 128 | "print(\"total number of unique words\",len(words))\n", 129 | "print(\"total number of unique chars\", len(chars))\n", 130 | "\n", 131 | "\n", 132 | "word_indices = dict((c, i) for i, c in enumerate(words))\n", 133 | "indices_word = dict((i, c) for i, c in enumerate(words))\n", 134 | "\n", 135 | "print(\"word_indices\", type(word_indices), \"length:\",len(word_indices) )\n", 136 | "print(\"indices_words\", type(indices_word), \"length\", len(indices_word))\n", 137 | "\n", 138 | "maxlen = 30\n", 139 | "step = 3\n", 140 | "print(\"maxlen:\",maxlen,\"step:\", step)\n", 141 | "sentences = []\n", 142 | "next_words = []\n", 143 | "next_words= []\n", 144 | "sentences1 = []\n", 145 | "list_words = []\n", 146 | "\n", 147 | "sentences2=[]\n", 148 | "list_words=text.lower().split()\n", 149 | "\n", 150 | "\n", 151 | "for i in range(0,len(list_words)-maxlen, step):\n", 152 | " sentences2 = ' '.join(list_words[i: i + maxlen])\n", 153 | " sentences.append(sentences2)\n", 154 | " next_words.append((list_words[i + maxlen]))\n", 155 
| "print('nb sequences(length of sentences):', len(sentences))\n", 156 | "print(\"length of next_word\",len(next_words))\n", 157 | "\n", 158 | "print('Vectorization...')\n", 159 | "X = np.zeros((len(sentences), maxlen, len(words)), dtype=np.bool)\n", 160 | "y = np.zeros((len(sentences), len(words)), dtype=np.bool)\n", 161 | "for i, sentence in enumerate(sentences):\n", 162 | " for t, word in enumerate(sentence.split()):\n", 163 | " #print(i,t,word)\n", 164 | " X[i, t, word_indices[word]] = 1\n", 165 | " y[i, word_indices[next_words[i]]] = 1\n", 166 | "\n", 167 | "\n", 168 | "#build the model: 2 stacked LSTM\n", 169 | "print('Build model...')\n", 170 | "model = Sequential()\n", 171 | "model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, len(words))))\n", 172 | "model.add(Dropout(0.2))\n", 173 | "model.add(LSTM(512, return_sequences=False))\n", 174 | "model.add(Dropout(0.2))\n", 175 | "model.add(Dense(len(words)))\n", 176 | "#model.add(Dense(1000))\n", 177 | "model.add(Activation('softmax'))\n", 178 | "\n", 179 | "model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n", 180 | "\n", 181 | "if os.path.isfile('GoTweights'):\n", 182 | " model.load_weights('GoTweights')\n", 183 | "\n", 184 | "def sample(a, temperature=1.0):\n", 185 | " # helper function to sample an index from a probability array\n", 186 | " a = np.array(a).astype('float64')\n", 187 | " a = np.log(a) / temperature\n", 188 | " dist = np.exp(a) / np.sum(np.exp(a))\n", 189 | " dist = dist/np.sum(dist)\n", 190 | " return np.argmax(np.random.multinomial(1, dist, 1))\n", 191 | "\n", 192 | "# train the model, output generated text after each iteration\n", 193 | "for iteration in range(1, 300):\n", 194 | " print()\n", 195 | " print('-' * 50)\n", 196 | " print('Iteration', iteration)\n", 197 | " model.fit(X, y, batch_size=128, nb_epoch=10)\n", 198 | " model.save_weights('GoTweights',overwrite=True)\n", 199 | "\n", 200 | " start_index = random.randint(0, len(list_words) - maxlen - 1)\n", 
201 | "\n", 202 | " for diversity in [0.2, 0.5, 1.0, 1.2]:\n", 203 | " print()\n", 204 | " print('----- diversity:', diversity)\n", 205 | " generated = ''\n", 206 | " sentence = list_words[start_index: start_index + maxlen]\n", 207 | " generated += ' '.join(sentence)\n", 208 | " print('----- Generating with seed: \"' , sentence , '\"')\n", 209 | " print()\n", 210 | " sys.stdout.write(generated)\n", 211 | " print()\n", 212 | "\n", 213 | " for i in range(1024):\n", 214 | " x = np.zeros((1, maxlen, len(words)))\n", 215 | " for t, word in enumerate(sentence):\n", 216 | " x[0, t, word_indices[word]] = 1.\n", 217 | "\n", 218 | " preds = model.predict(x, verbose=0)[0]\n", 219 | " next_index = sample(preds, diversity)\n", 220 | " next_word = indices_word[next_index]\n", 221 | " generated += next_word\n", 222 | " del sentence[0]\n", 223 | " sentence.append(next_word)\n", 224 | " sys.stdout.write(' ')\n", 225 | " sys.stdout.write(next_word)\n", 226 | " sys.stdout.flush()\n", 227 | " print()\n", 228 | "#model.save_weights('weights') " 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "execution_count": null, 234 | "metadata": {}, 235 | "outputs": [], 236 | "source": [] 237 | } 238 | ], 239 | "metadata": { 240 | "kernelspec": { 241 | "display_name": "Python 3", 242 | "language": "python", 243 | "name": "python3" 244 | }, 245 | "language_info": { 246 | "codemirror_mode": { 247 | "name": "ipython", 248 | "version": 3 249 | }, 250 | "file_extension": ".py", 251 | "mimetype": "text/x-python", 252 | "name": "python", 253 | "nbconvert_exporter": "python", 254 | "pygments_lexer": "ipython3", 255 | "version": "3.6.6" 256 | } 257 | }, 258 | "nbformat": 4, 259 | "nbformat_minor": 2 260 | } 261 | -------------------------------------------------------------------------------- /9.1 Introduction to RL: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 
Basics of RL\n", 8 | "In this tutorial, we will be looking at a simple example of how to use Gym environment. \n", 9 | "**Follow** the steps below on your console \n", 10 | "* Install Anaconda and create a conda env + install numpy,scipy (For windows execute * conda install -c conda-forge ffmpeg *)\n", 11 | "* conda install gym\n", 12 | "* conda install -c conda-forge jsanimation " 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": 1, 18 | "metadata": {}, 19 | "outputs": [ 20 | { 21 | "name": "stdout", 22 | "output_type": "stream", 23 | "text": [ 24 | "\u001b[33mWARN: gym.spaces.Box autodetected dtype as . Please provide explicit dtype.\u001b[0m\n" 25 | ] 26 | }, 27 | { 28 | "data": { 29 | "text/plain": [ 30 | "array([ 0.01969343, 0.02720934, -0.04032511, 0.02986304])" 31 | ] 32 | }, 33 | "execution_count": 1, 34 | "metadata": {}, 35 | "output_type": "execute_result" 36 | } 37 | ], 38 | "source": [ 39 | "import gym\n", 40 | "env = gym.make('CartPole-v0')\n", 41 | "env.reset()\n", 42 | "#env.render() # Uncomment the line to render" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": 3, 48 | "metadata": {}, 49 | "outputs": [ 50 | { 51 | "name": "stdout", 52 | "output_type": "stream", 53 | "text": [ 54 | "\u001b[33mWARN: gym.spaces.Box autodetected dtype as . 
Please provide explicit dtype.\u001b[0m\n", 55 | "Episode finished after 12 timesteps\n", 56 | "Episode finished after 16 timesteps\n", 57 | "Episode finished after 27 timesteps\n", 58 | "Episode finished after 19 timesteps\n", 59 | "Episode finished after 18 timesteps\n", 60 | "Episode finished after 19 timesteps\n", 61 | "Episode finished after 11 timesteps\n", 62 | "Episode finished after 12 timesteps\n", 63 | "Episode finished after 20 timesteps\n", 64 | "Episode finished after 14 timesteps\n", 65 | "Episode finished after 21 timesteps\n", 66 | "Episode finished after 20 timesteps\n", 67 | "Episode finished after 16 timesteps\n", 68 | "Episode finished after 47 timesteps\n", 69 | "Episode finished after 12 timesteps\n", 70 | "Episode finished after 25 timesteps\n", 71 | "Episode finished after 14 timesteps\n", 72 | "Episode finished after 29 timesteps\n", 73 | "Episode finished after 18 timesteps\n", 74 | "Episode finished after 22 timesteps\n" 75 | ] 76 | } 77 | ], 78 | "source": [ 79 | "import gym\n", 80 | "env = gym.make('CartPole-v0')\n", 81 | "for i_episode in range(20):\n", 82 | " observation = env.reset()\n", 83 | " for t in range(100):\n", 84 | " env.render(mode='rgb_array')\n", 85 | " action = env.action_space.sample()\n", 86 | " observation, reward, done, info = env.step(action)\n", 87 | " if done:\n", 88 | " print(\"Episode finished after {} timesteps\".format(t+1))\n", 89 | " break\n", 90 | "env.close()" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": 2, 96 | "metadata": {}, 97 | "outputs": [ 98 | { 99 | "data": { 100 | "text/plain": [ 101 | "(array([ 0.02023762, -0.16731182, -0.03972784, 0.30955521]), 1.0, False, {})" 102 | ] 103 | }, 104 | "execution_count": 2, 105 | "metadata": {}, 106 | "output_type": "execute_result" 107 | } 108 | ], 109 | "source": [ 110 | "env.step(env.action_space.sample())" 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 
118 | "source": [] 119 | ] 120 | ], 121 | "metadata": { 122 | "kernelspec": { 123 | "display_name": "Python 3", 124 | "language": "python", 125 | "name": "python3" 126 | }, 127 | "language_info": { 128 | "codemirror_mode": { 129 | "name": "ipython", 130 | "version": 3 131 | }, 132 | "file_extension": ".py", 133 | "mimetype": "text/x-python", 134 | "name": "python", 135 | "nbconvert_exporter": "python", 136 | "pygments_lexer": "ipython3", 137 | "version": "3.5.4" 138 | } 139 | }, 140 | "nbformat": 4, 141 | "nbformat_minor": 2 142 | } 143 | -------------------------------------------------------------------------------- /9.1 Introduction to RL.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Basics of RL\n", 8 | "In this tutorial, we will be looking at a simple example of how to use Gym environment. \n", 9 | "**Follow** the steps below on your console \n", 10 | "* Install Anaconda and create a conda env + install numpy,scipy (For windows execute * conda install -c conda-forge ffmpeg *)\n", 11 | "* conda install gym\n", 12 | "* conda install -c conda-forge jsanimation " 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": 1, 18 | "metadata": {}, 19 | "outputs": [ 20 | { 21 | "name": "stdout", 22 | "output_type": "stream", 23 | "text": [ 24 | "\u001b[33mWARN: gym.spaces.Box autodetected dtype as . 
Please provide explicit dtype.\u001b[0m\n" 25 | ] 26 | }, 27 | { 28 | "data": { 29 | "text/plain": [ 30 | "array([ 0.01969343, 0.02720934, -0.04032511, 0.02986304])" 31 | ] 32 | }, 33 | "execution_count": 1, 34 | "metadata": {}, 35 | "output_type": "execute_result" 36 | } 37 | ], 38 | "source": [ 39 | "import gym\n", 40 | "env = gym.make('CartPole-v0')\n", 41 | "env.reset()\n", 42 | "#env.render() # Uncomment the line to render" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": 3, 48 | "metadata": {}, 49 | "outputs": [ 50 | { 51 | "name": "stdout", 52 | "output_type": "stream", 53 | "text": [ 54 | "\u001b[33mWARN: gym.spaces.Box autodetected dtype as . Please provide explicit dtype.\u001b[0m\n", 55 | "Episode finished after 12 timesteps\n", 56 | "Episode finished after 16 timesteps\n", 57 | "Episode finished after 27 timesteps\n", 58 | "Episode finished after 19 timesteps\n", 59 | "Episode finished after 18 timesteps\n", 60 | "Episode finished after 19 timesteps\n", 61 | "Episode finished after 11 timesteps\n", 62 | "Episode finished after 12 timesteps\n", 63 | "Episode finished after 20 timesteps\n", 64 | "Episode finished after 14 timesteps\n", 65 | "Episode finished after 21 timesteps\n", 66 | "Episode finished after 20 timesteps\n", 67 | "Episode finished after 16 timesteps\n", 68 | "Episode finished after 47 timesteps\n", 69 | "Episode finished after 12 timesteps\n", 70 | "Episode finished after 25 timesteps\n", 71 | "Episode finished after 14 timesteps\n", 72 | "Episode finished after 29 timesteps\n", 73 | "Episode finished after 18 timesteps\n", 74 | "Episode finished after 22 timesteps\n" 75 | ] 76 | } 77 | ], 78 | "source": [ 79 | "import gym\n", 80 | "env = gym.make('CartPole-v0')\n", 81 | "for i_episode in range(20):\n", 82 | " observation = env.reset()\n", 83 | " for t in range(100):\n", 84 | " env.render(mode='rgb_array')\n", 85 | " action = env.action_space.sample()\n", 86 | " observation, reward, done, info = 
env.step(action)\n", 87 | " if done:\n", 88 | " print(\"Episode finished after {} timesteps\".format(t+1))\n", 89 | " break\n", 90 | "env.close()" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": 2, 96 | "metadata": {}, 97 | "outputs": [ 98 | { 99 | "data": { 100 | "text/plain": [ 101 | "(array([ 0.02023762, -0.16731182, -0.03972784, 0.30955521]), 1.0, False, {})" 102 | ] 103 | }, 104 | "execution_count": 2, 105 | "metadata": {}, 106 | "output_type": "execute_result" 107 | } 108 | ], 109 | "source": [ 110 | "env.step(env.action_space.sample())" 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [] 119 | } 120 | ], 121 | "metadata": { 122 | "kernelspec": { 123 | "display_name": "Python 3", 124 | "language": "python", 125 | "name": "python3" 126 | }, 127 | "language_info": { 128 | "codemirror_mode": { 129 | "name": "ipython", 130 | "version": 3 131 | }, 132 | "file_extension": ".py", 133 | "mimetype": "text/x-python", 134 | "name": "python", 135 | "nbconvert_exporter": "python", 136 | "pygments_lexer": "ipython3", 137 | "version": "3.5.4" 138 | } 139 | }, 140 | "nbformat": 4, 141 | "nbformat_minor": 2 142 | } 143 | -------------------------------------------------------------------------------- /9.2 GridDemo: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 2, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "from gridEnv import Grid;\n", 10 | "import numpy as np;" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": 5, 16 | "metadata": {}, 17 | "outputs": [ 18 | { 19 | "name": "stdout", 20 | "output_type": "stream", 21 | "text": [ 22 | "|. . . . . .|\n", 23 | "|. . . . . .|\n", 24 | "|. . . . . .|\n", 25 | "|C . . . . 
T|\n", 26 | "(True, -1)\n" 27 | ] 28 | } 29 | ], 30 | "source": [ 31 | "from IPython.display import clear_output\n", 32 | "from time import sleep;\n", 33 | "g = Grid(length=4,width=6,start=(0,0),terminals=[(3,5)]);\n", 34 | "for i in range(4):\n", 35 | " g.display()\n", 36 | " print (g.step(3))\n", 37 | " sleep(1);\n", 38 | " clear_output(wait=True)\n" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "# Single Episode" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": 6, 51 | "metadata": {}, 52 | "outputs": [ 53 | { 54 | "name": "stdout", 55 | "output_type": "stream", 56 | "text": [ 57 | "|C . . . . .|\n", 58 | "|. . . . . .|\n", 59 | "|. . . . . .|\n", 60 | "|. . . . . T|\n", 61 | "Loop Ended\n" 62 | ] 63 | } 64 | ], 65 | "source": [ 66 | "hasEnded = False;\n", 67 | "g.reset()\n", 68 | "while(hasEnded == False):\n", 69 | " g.display()\n", 70 | " print (g.step(0,update=False),g.step(1,update=False),g.step(2,update=False),g.step(3,update=False))\n", 71 | " hasEnded,reward = g.step(np.random.randint(0,4))\n", 72 | " sleep(0.2);\n", 73 | " clear_output(wait=True)\n", 74 | "g.display()\n", 75 | "print ('Loop Ended')\n" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "# SARSA Update" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": 8, 88 | "metadata": {}, 89 | "outputs": [ 90 | { 91 | "name": "stdout", 92 | "output_type": "stream", 93 | "text": [ 94 | "|. . . . . .|\n", 95 | "|. . . . . .|\n", 96 | "|. . . . . .|\n", 97 | "|C . . . . 
T|\n", 98 | "|→ → ↓ → ↓ ←|\n", 99 | "|↑ → → ↓ ↓ ↓|\n", 100 | "|→ → → ↓ ↓ ↓|\n", 101 | "|→ → → → → ←|\n" 102 | ] 103 | } 104 | ], 105 | "source": [ 106 | "gamma = 0.8;\n", 107 | "alpha = 0.3;\n", 108 | "for i in range(1000):\n", 109 | " g.display()\n", 110 | " g.display_policy()\n", 111 | " \n", 112 | " #Getting Current Action\n", 113 | " s = g.get_state()\n", 114 | " q_s = g.get_q_value(s);\n", 115 | " best_a = np.argmax(q_s)\n", 116 | " a = g.greedy_sample(best_a)\n", 117 | " \n", 118 | " hasEnded,r = g.step(a) # Moving a step ahead\n", 119 | " \n", 120 | " #Getting next action\n", 121 | " snext = g.get_state();\n", 122 | " qnext = g.get_q_value(snext)\n", 123 | " best_anext = np.argmax(qnext);\n", 124 | " anext = g.greedy_sample(best_anext)\n", 125 | " \n", 126 | " #SARSA UPDATE\n", 127 | " q_s[a] = alpha*q_s[a] + (1-alpha)*(r+gamma*qnext[best_anext])\n", 128 | " g.set_q_value(s,q_s)\n", 129 | "\n", 130 | " sleep(0.05);\n", 131 | " clear_output(wait=True)\n", 132 | " " 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "# Monte Carlo REINFORCE" 140 | ] 141 | }, 142 | { 143 | "cell_type": "code", 144 | "execution_count": 9, 145 | "metadata": {}, 146 | "outputs": [ 147 | { 148 | "name": "stdout", 149 | "output_type": "stream", 150 | "text": [ 151 | "|C . . . . .|\n", 152 | "|. . . . . .|\n", 153 | "|. . . . . .|\n", 154 | "|. . . . . 
T|\n", 155 | "|← ← ← → ← ←|\n", 156 | "|← → → ↓ ↓ ↓|\n", 157 | "|→ → → ↓ ↓ ↓|\n", 158 | "|← → → → → ←|\n" 159 | ] 160 | } 161 | ], 162 | "source": [ 163 | "gamma = 0.7;\n", 164 | "alpha = 0.3;\n", 165 | "g.qvalue=g.qvalue*0.01\n", 166 | "has_ended = False\n", 167 | "for i in range(100):\n", 168 | " sar_pairs=[];\n", 169 | " while(has_ended == False):\n", 170 | " g.display()\n", 171 | " g.display_policy()\n", 172 | " #Getting Current Action\n", 173 | " s = g.get_state()\n", 174 | " q_s = g.get_q_value(s);\n", 175 | " best_a = np.argmax(q_s)\n", 176 | " a = g.greedy_sample(best_a,epsilon=0.7)\n", 177 | " \n", 178 | " has_ended,r = g.step(a) # Moving a step ahead\n", 179 | " sar_pairs.append({\"s\":s,\"a\":a,\"r\":r});\n", 180 | " sleep(0.1);\n", 181 | " clear_output(wait=True)\n", 182 | " has_ended=False;\n", 183 | " discounted_r=0;\n", 184 | " sar_pairs.reverse() # Reversing the pairs\n", 185 | " for i,sar in enumerate(sar_pairs):\n", 186 | " if i == 0 :\n", 187 | " discounted_r = sar[\"r\"]\n", 188 | " else:\n", 189 | " discounted_r = sar[\"r\"]+gamma*discounted_r\n", 190 | " q_s = g.get_q_value(sar[\"s\"]);\n", 191 | " a = sar[\"a\"]\n", 192 | " q_s[a] = alpha*q_s[a] + (1-alpha)*(discounted_r)\n", 193 | " g.set_q_value(s,q_s)\n", 194 | " " 195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": null, 200 | "metadata": {}, 201 | "outputs": [], 202 | "source": [] 203 | } 204 | ], 205 | "metadata": { 206 | "kernelspec": { 207 | "display_name": "Python 3", 208 | "language": "python", 209 | "name": "python3" 210 | }, 211 | "language_info": { 212 | "codemirror_mode": { 213 | "name": "ipython", 214 | "version": 3 215 | }, 216 | "file_extension": ".py", 217 | "mimetype": "text/x-python", 218 | "name": "python", 219 | "nbconvert_exporter": "python", 220 | "pygments_lexer": "ipython3", 221 | "version": "3.5.4" 222 | } 223 | }, 224 | "nbformat": 4, 225 | "nbformat_minor": 2 226 | } 227 | 
-------------------------------------------------------------------------------- /9.2 GridDemo.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 2, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "from gridEnv import Grid;\n", 10 | "import numpy as np;" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": 5, 16 | "metadata": {}, 17 | "outputs": [ 18 | { 19 | "name": "stdout", 20 | "output_type": "stream", 21 | "text": [ 22 | "|. . . . . .|\n", 23 | "|. . . . . .|\n", 24 | "|. . . . . .|\n", 25 | "|C . . . . T|\n", 26 | "(True, -1)\n" 27 | ] 28 | } 29 | ], 30 | "source": [ 31 | "from IPython.display import clear_output\n", 32 | "from time import sleep;\n", 33 | "g = Grid(length=4,width=6,start=(0,0),terminals=[(3,5)]);\n", 34 | "for i in range(4):\n", 35 | " g.display()\n", 36 | " print (g.step(3))\n", 37 | " sleep(1);\n", 38 | " clear_output(wait=True)\n" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "# Single Episode" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": 6, 51 | "metadata": {}, 52 | "outputs": [ 53 | { 54 | "name": "stdout", 55 | "output_type": "stream", 56 | "text": [ 57 | "|C . . . . .|\n", 58 | "|. . . . . .|\n", 59 | "|. . . . . .|\n", 60 | "|. . . . . 
T|\n", 61 | "Loop Ended\n" 62 | ] 63 | } 64 | ], 65 | "source": [ 66 | "hasEnded = False;\n", 67 | "g.reset()\n", 68 | "while(hasEnded == False):\n", 69 | " g.display()\n", 70 | " print (g.step(0,update=False),g.step(1,update=False),g.step(2,update=False),g.step(3,update=False))\n", 71 | " hasEnded,reward = g.step(np.random.randint(0,4))\n", 72 | " sleep(0.2);\n", 73 | " clear_output(wait=True)\n", 74 | "g.display()\n", 75 | "print ('Loop Ended')\n" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "# SARSA Update" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": 8, 88 | "metadata": {}, 89 | "outputs": [ 90 | { 91 | "name": "stdout", 92 | "output_type": "stream", 93 | "text": [ 94 | "|. . . . . .|\n", 95 | "|. . . . . .|\n", 96 | "|. . . . . .|\n", 97 | "|C . . . . T|\n", 98 | "|→ → ↓ → ↓ ←|\n", 99 | "|↑ → → ↓ ↓ ↓|\n", 100 | "|→ → → ↓ ↓ ↓|\n", 101 | "|→ → → → → ←|\n" 102 | ] 103 | } 104 | ], 105 | "source": [ 106 | "gamma = 0.8;\n", 107 | "alpha = 0.3;\n", 108 | "for i in range(1000):\n", 109 | " g.display()\n", 110 | " g.display_policy()\n", 111 | " \n", 112 | " #Getting Current Action\n", 113 | " s = g.get_state()\n", 114 | " q_s = g.get_q_value(s);\n", 115 | " best_a = np.argmax(q_s)\n", 116 | " a = g.greedy_sample(best_a)\n", 117 | " \n", 118 | " hasEnded,r = g.step(a) # Moving a step ahead\n", 119 | " \n", 120 | " #Getting next action\n", 121 | " snext = g.get_state();\n", 122 | " qnext = g.get_q_value(snext)\n", 123 | " best_anext = np.argmax(qnext);\n", 124 | " anext = g.greedy_sample(best_anext)\n", 125 | " \n", 126 | " #SARSA UPDATE\n", 127 | " q_s[a] = alpha*q_s[a] + (1-alpha)*(r+gamma*qnext[best_anext])\n", 128 | " g.set_q_value(s,q_s)\n", 129 | "\n", 130 | " sleep(0.05);\n", 131 | " clear_output(wait=True)\n", 132 | " " 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "# Monte Carlo REINFORCE" 140 | ] 141 | }, 142 | { 143 | 
"cell_type": "code", 144 | "execution_count": 9, 145 | "metadata": {}, 146 | "outputs": [ 147 | { 148 | "name": "stdout", 149 | "output_type": "stream", 150 | "text": [ 151 | "|C . . . . .|\n", 152 | "|. . . . . .|\n", 153 | "|. . . . . .|\n", 154 | "|. . . . . T|\n", 155 | "|← ← ← → ← ←|\n", 156 | "|← → → ↓ ↓ ↓|\n", 157 | "|→ → → ↓ ↓ ↓|\n", 158 | "|← → → → → ←|\n" 159 | ] 160 | } 161 | ], 162 | "source": [ 163 | "gamma = 0.7;\n", 164 | "alpha = 0.3;\n", 165 | "g.qvalue=g.qvalue*0.01\n", 166 | "has_ended = False\n", 167 | "for i in range(100):\n", 168 | " sar_pairs=[];\n", 169 | " while(has_ended == False):\n", 170 | " g.display()\n", 171 | " g.display_policy()\n", 172 | " #Getting Current Action\n", 173 | " s = g.get_state()\n", 174 | " q_s = g.get_q_value(s);\n", 175 | " best_a = np.argmax(q_s)\n", 176 | " a = g.greedy_sample(best_a,epsilon=0.7)\n", 177 | " \n", 178 | " has_ended,r = g.step(a) # Moving a step ahead\n", 179 | " sar_pairs.append({\"s\":s,\"a\":a,\"r\":r});\n", 180 | " sleep(0.1);\n", 181 | " clear_output(wait=True)\n", 182 | " has_ended=False;\n", 183 | " discounted_r=0;\n", 184 | " sar_pairs.reverse() # Reversing the pairs\n", 185 | " for i,sar in enumerate(sar_pairs):\n", 186 | " if i == 0 :\n", 187 | " discounted_r = sar[\"r\"]\n", 188 | " else:\n", 189 | " discounted_r = sar[\"r\"]+gamma*discounted_r\n", 190 | " q_s = g.get_q_value(sar[\"s\"]);\n", 191 | " a = sar[\"a\"]\n", 192 | " q_s[a] = alpha*q_s[a] + (1-alpha)*(discounted_r)\n", 193 | " g.set_q_value(s,q_s)\n", 194 | " " 195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": null, 200 | "metadata": {}, 201 | "outputs": [], 202 | "source": [] 203 | } 204 | ], 205 | "metadata": { 206 | "kernelspec": { 207 | "display_name": "Python 3", 208 | "language": "python", 209 | "name": "python3" 210 | }, 211 | "language_info": { 212 | "codemirror_mode": { 213 | "name": "ipython", 214 | "version": 3 215 | }, 216 | "file_extension": ".py", 217 | "mimetype": "text/x-python", 218 | 
"name": "python", 219 | "nbconvert_exporter": "python", 220 | "pygments_lexer": "ipython3", 221 | "version": "3.5.4" 222 | } 223 | }, 224 | "nbformat": 4, 225 | "nbformat_minor": 2 226 | } 227 | -------------------------------------------------------------------------------- /9.3 Game of Thrones Example.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Markovian Game of Thrones" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "![GOT Display](gotdisplay.jpg)" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 1, 20 | "metadata": {}, 21 | "outputs": [ 22 | { 23 | "name": "stdout", 24 | "output_type": "stream", 25 | "text": [ 26 | "State Value {'whiteharbor': 0.013400000000000023, 'alive-terminal': 1, 'dragonstone': 0.4620500000000001, 'dead-terminal': -1, 'winterfell': 0.7100000000000001}\n", 27 | "Action Value {'from_dragonstone': {'land': -0.15049999999999997, 'sea': -0.8587940000000001, 'dragon': 0.4620500000000001}, 'from_whiteharbor': {'land': 0.013400000000000023}, 'from_winterfell': {'land': 0.7100000000000001}}\n", 28 | "Learned Policy {'from_dragonstone': 'dragon', 'from_whiteharbor': 'land', 'from_winterfell': 'land'}\n" 29 | ] 30 | } 31 | ], 32 | "source": [ 33 | "# Markov Decision Process Example\n", 34 | "import numpy as np\n", 35 | "import copy\n", 36 | "import pprint\n", 37 | "\n", 38 | "V ={\"dragonstone\": 0,\"whiteharbor\":0, \"winterfell\":0,\"alive-terminal\":1,\"dead-terminal\":-1} # States\n", 39 | "\n", 40 | "R ={\"from_dragonstone\":{\"land\":-0.02,\"sea\":-0.05,\"dragon\":-0.1},\\\n", 41 | " \"from_whiteharbor\":{\"land\":-0.01},\\\n", 42 | " \"from_winterfell\":{\"land\":-0.01},\\\n", 43 | " }\n", 44 | "\n", 45 | "Q = copy.copy(R)\n", 46 | "\n", 47 | "P ={\"from_dragonstone\":{\"land\":{\"to_winterfell\":0.5,\"to_dead-terminal\":0.5},\\\n", 48
| " \"sea\":{\"to_whiteharbor\":0.1,\"to_dead-terminal\":0.9},\\\n", 49 | " \"dragon\":{\"to_winterfell\":0.95,\"to_dead-terminal\":0.05}},\\\n", 50 | " \"from_whiteharbor\":{\"land\":{\"to_winterfell\":0.6,\"to_dead-terminal\":0.4}},\\\n", 51 | " \"from_winterfell\":{\"land\":{\"to_alive-terminal\":0.9,\"to_dead-terminal\":0.1}},\\\n", 52 | " }\n", 53 | "\n", 54 | "gamma = 0.9\n", 55 | "\n", 56 | "Policy = {\"from_dragonstone\":\"land\",\"from_whiteharbor\":\"land\",\"from_winterfell\":\"land\"}\n", 57 | "\n", 58 | "# Solution by Value Iteration\n", 59 | "for i in range(10):\n", 60 | " for from_location in P.keys():\n", 61 | " V[from_location[5:]] = max(Q[from_location].values())\n", 62 | " Q[from_location]=copy.copy(R[from_location]) # Initialize with Immediate Reward\n", 63 | " #Action Value Update\n", 64 | " for action in P[from_location].keys():\n", 65 | " for to_location in P[from_location][action].keys():\n", 66 | " Q[from_location][action] = Q[from_location][action] + \\\n", 67 | " gamma*P[from_location][action][to_location]*V[to_location[3:]]\n", 68 | "\n", 69 | " Policy[from_location]=max(Q[from_location],key=Q[from_location].get)\n", 70 | "\n", 71 | "\n", 72 | "print ('State Value ',V)\n", 73 | "print ('Action Value',Q)\n", 74 | "print ('Learned Policy',Policy)" 75 | ] 76 | }, 77 | { 78 | "cell_type": "markdown", 79 | "metadata": {}, 80 | "source": [ 81 | "## State Values" 82 | ] 83 | }, 84 | { 85 | "cell_type": "code", 86 | "execution_count": 4, 87 | "metadata": {}, 88 | "outputs": [ 89 | { 90 | "name": "stdout", 91 | "output_type": "stream", 92 | "text": [ 93 | "{'alive-terminal': 1,\n", 94 | " 'dead-terminal': -1,\n", 95 | " 'dragonstone': 0.4620500000000001,\n", 96 | " 'whiteharbor': 0.013400000000000023,\n", 97 | " 'winterfell': 0.7100000000000001}\n" 98 | ] 99 | } 100 | ], 101 | "source": [ 102 | "pprint.pprint(V)" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": {}, 108 | "source": [ 109 | "## Action Values" 110 | ] 111 | 
}, 112 | { 113 | "cell_type": "code", 114 | "execution_count": 5, 115 | "metadata": {}, 116 | "outputs": [ 117 | { 118 | "name": "stdout", 119 | "output_type": "stream", 120 | "text": [ 121 | "{'from_dragonstone': {'dragon': 0.4620500000000001,\n", 122 | " 'land': -0.15049999999999997,\n", 123 | " 'sea': -0.8587940000000001},\n", 124 | " 'from_whiteharbor': {'land': 0.013400000000000023},\n", 125 | " 'from_winterfell': {'land': 0.7100000000000001}}\n" 126 | ] 127 | } 128 | ], 129 | "source": [ 130 | "pprint.pprint(Q)" 131 | ] 132 | }, 133 | { 134 | "cell_type": "markdown", 135 | "metadata": {}, 136 | "source": [ 137 | "# Learned Policy" 138 | ] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "execution_count": 7, 143 | "metadata": {}, 144 | "outputs": [ 145 | { 146 | "name": "stdout", 147 | "output_type": "stream", 148 | "text": [ 149 | "{'from_dragonstone': 'dragon',\n", 150 | " 'from_whiteharbor': 'land',\n", 151 | " 'from_winterfell': 'land'}\n" 152 | ] 153 | } 154 | ], 155 | "source": [ 156 | "pprint.pprint(Policy)" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "# Summary Video" 164 | ] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "execution_count": 9, 169 | "metadata": {}, 170 | "outputs": [ 171 | { 172 | "data": { 173 | "text/html": [ 174 | "" 175 | ], 176 | "text/plain": [ 177 | "" 178 | ] 179 | }, 180 | "execution_count": 9, 181 | "metadata": {}, 182 | "output_type": "execute_result" 183 | } 184 | ], 185 | "source": [ 186 | "from IPython.display import HTML\n", 187 | "\n", 188 | "# Youtube\n", 189 | "HTML('')\n" 190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": null, 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [] 198 | } 199 | ], 200 | "metadata": { 201 | "kernelspec": { 202 | "display_name": "Python 3", 203 | "language": "python", 204 | "name": "python3" 205 | }, 206 | "language_info": { 207 | "codemirror_mode": { 208 | "name": "ipython", 209 | 
"version": 3 210 | }, 211 | "file_extension": ".py", 212 | "mimetype": "text/x-python", 213 | "name": "python", 214 | "nbconvert_exporter": "python", 215 | "pygments_lexer": "ipython3", 216 | "version": "3.5.4" 217 | } 218 | }, 219 | "nbformat": 4, 220 | "nbformat_minor": 2 221 | } 222 | -------------------------------------------------------------------------------- /9.4 Cartpole Example.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Learning to balance Cartpole in Keras" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": {}, 14 | "outputs": [ 15 | { 16 | "name": "stderr", 17 | "output_type": "stream", 18 | "text": [ 19 | "Using TensorFlow backend.\n" 20 | ] 21 | } 22 | ], 23 | "source": [ 24 | "#Modified from n1try's algorithm in OpenAI Gym\n", 25 | "import random\n", 26 | "import gym\n", 27 | "import math\n", 28 | "import numpy as np\n", 29 | "from collections import deque\n", 30 | "from keras.models import Sequential\n", 31 | "from keras.layers import Dense\n", 32 | "from keras.optimizers import Adam\n", 33 | "%matplotlib inline\n", 34 | "import matplotlib.pyplot as plt\n", 35 | "# Imports specifically so we can render outputs in Jupyter.\n", 36 | "from JSAnimation.IPython_display import display_animation\n", 37 | "from matplotlib import animation\n", 38 | "from IPython.display import display\n", 39 | "\n", 40 | "\n", 41 | "def display_frames_as_gif(frames):\n", 42 | " \"\"\"\n", 43 | " Displays a list of frames as a gif, with controls\n", 44 | " \"\"\"\n", 45 | " #plt.figure(figsize=(frames[0].shape[1] / 72.0, frames[0].shape[0] / 72.0), dpi = 72)\n", 46 | " patch = plt.imshow(frames[0])\n", 47 | " plt.axis('off')\n", 48 | "\n", 49 | " def animate(i):\n", 50 | " patch.set_data(frames[i])\n", 51 | "\n", 52 | " anim = animation.FuncAnimation(plt.gcf(), animate, frames = len(frames), 
interval=50)\n", 53 | " display(display_animation(anim, default_mode='loop'))\n", 54 | " #anim.save('animation.gif', writer='imagemagick', fps=30)\n", 55 | "\n", 56 | "\n", 57 | "\n", 58 | "class DQNCartPoleSolver():\n", 59 | " def __init__(self, n_episodes=1000, n_win_ticks=195, max_env_steps=None, gamma=1.0, epsilon=1.0, epsilon_min=0.01, epsilon_log_decay=0.995, alpha=0.01, alpha_decay=0.01, batch_size=64, monitor=False, quiet=False):\n", 60 | " self.memory = deque(maxlen=100000)\n", 61 | " self.env = gym.make('CartPole-v0')\n", 62 | " if monitor: self.env = gym.wrappers.Monitor(self.env, '../data/cartpole-1', force=True)\n", 63 | " self.gamma = gamma\n", 64 | " self.epsilon = epsilon\n", 65 | " self.epsilon_min = epsilon_min\n", 66 | " self.epsilon_decay = epsilon_log_decay\n", 67 | " self.alpha = alpha\n", 68 | " self.alpha_decay = alpha_decay\n", 69 | " self.n_episodes = n_episodes\n", 70 | " self.n_win_ticks = n_win_ticks\n", 71 | " self.batch_size = batch_size\n", 72 | " self.quiet = quiet\n", 73 | " self.cur_episode = 0;\n", 74 | " if max_env_steps is not None: self.env._max_episode_steps = max_env_steps\n", 75 | "\n", 76 | " # Init model\n", 77 | " self.model = Sequential()\n", 78 | " self.model.add(Dense(24, input_dim=4, activation='tanh'))\n", 79 | " self.model.add(Dense(48, activation='tanh'))\n", 80 | " self.model.add(Dense(2, activation='linear'))\n", 81 | " self.model.compile(loss='mse', optimizer=Adam(lr=self.alpha, decay=self.alpha_decay))\n", 82 | "\n", 83 | " def remember(self, state, action, reward, next_state, done):\n", 84 | " self.memory.append((state, action, reward, next_state, done))\n", 85 | "\n", 86 | " def choose_action(self, state, epsilon):\n", 87 | " return self.env.action_space.sample() if (np.random.random() <= epsilon) else np.argmax(self.model.predict(state))\n", 88 | "\n", 89 | " def get_epsilon(self, t):\n", 90 | " return max(self.epsilon_min, min(self.epsilon, 1.0 - math.log10((t + 1) * self.epsilon_decay)))\n", 91 | "\n", 92 
| " def preprocess_state(self, state):\n", 93 | " return np.reshape(state, [1, 4])\n", 94 | "\n", 95 | " def replay(self, batch_size):\n", 96 | " x_batch, y_batch = [], []\n", 97 | " minibatch = random.sample(\n", 98 | " self.memory, min(len(self.memory), batch_size))\n", 99 | " for state, action, reward, next_state, done in minibatch:\n", 100 | " y_target = self.model.predict(state)\n", 101 | " y_target[0][action] = reward if done else reward + self.gamma * np.max(self.model.predict(next_state)[0])\n", 102 | " x_batch.append(state[0])\n", 103 | " y_batch.append(y_target[0])\n", 104 | " \n", 105 | " self.model.fit(np.array(x_batch), np.array(y_batch), batch_size=len(x_batch), verbose=0)\n", 106 | " if self.epsilon > self.epsilon_min:\n", 107 | " self.epsilon *= self.epsilon_decay\n", 108 | "\n", 109 | " def train(self,n_episodes=5):\n", 110 | " scores = deque(maxlen=100)\n", 111 | " for e in range(self.cur_episode,self.cur_episode+n_episodes):\n", 112 | " state = self.preprocess_state(self.env.reset())\n", 113 | " done = False\n", 114 | " i = 0\n", 115 | " while not done:\n", 116 | " action = self.choose_action(state, self.get_epsilon(e))\n", 117 | " next_state, reward, done, _ = self.env.step(action)\n", 118 | " next_state = self.preprocess_state(next_state)\n", 119 | " self.remember(state, action, reward, next_state, done)\n", 120 | " state = next_state\n", 121 | " i += 1\n", 122 | "\n", 123 | " scores.append(i)\n", 124 | " mean_score = np.mean(scores)\n", 125 | " if mean_score >= self.n_win_ticks and e >= 100:\n", 126 | " if not self.quiet: print('Ran {} episodes. 
Solved after {} trials ✔'.format(e, e - 100))\n", 127 | " return e - 100\n", 128 | " if e % 100 == 0 and not self.quiet:\n", 129 | " print('[Episode {}] - Mean survival time over last 100 episodes was {} ticks.'.format(e, mean_score))\n", 130 | "\n", 131 | " self.replay(self.batch_size)\n", 132 | " \n", 133 | " if not self.quiet: print('Did not solve after {} episodes 😞'.format(e))\n", 134 | " return e\n", 135 | " \n", 136 | " def displaySingleEpisode(self):\n", 137 | " frames = [] \n", 138 | " done = False;\n", 139 | " state = self.preprocess_state(self.env.reset())\n", 140 | " while not done:\n", 141 | " action = self.choose_action(state, 1.0)\n", 142 | " next_state, reward, done, _ = self.env.step(action)\n", 143 | " frames.append(self.env.render(mode='rgb_array'))\n", 144 | " self.env.render(mode='rgb_array')\n", 145 | " self.env.close()\n", 146 | " #display_frames_as_gif(frames)" 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "execution_count": 3, 152 | "metadata": {}, 153 | "outputs": [ 154 | { 155 | "name": "stdout", 156 | "output_type": "stream", 157 | "text": [ 158 | "\u001b[33mWARN: gym.spaces.Box autodetected dtype as . 
Please provide explicit dtype.\u001b[0m\n", 159 | "[Episode 0] - Mean survival time over last 100 episodes was 16.0 ticks.\n", 160 | "[Episode 100] - Mean survival time over last 100 episodes was 10.83 ticks.\n", 161 | "[Episode 200] - Mean survival time over last 100 episodes was 24.51 ticks.\n", 162 | "[Episode 300] - Mean survival time over last 100 episodes was 24.93 ticks.\n", 163 | "[Episode 400] - Mean survival time over last 100 episodes was 53.19 ticks.\n", 164 | "[Episode 500] - Mean survival time over last 100 episodes was 80.74 ticks.\n", 165 | "[Episode 600] - Mean survival time over last 100 episodes was 91.75 ticks.\n", 166 | "[Episode 700] - Mean survival time over last 100 episodes was 60.62 ticks.\n", 167 | "[Episode 800] - Mean survival time over last 100 episodes was 146.65 ticks.\n", 168 | "[Episode 900] - Mean survival time over last 100 episodes was 157.59 ticks.\n", 169 | "Did not solve after 999 episodes 😞\n" 170 | ] 171 | }, 172 | { 173 | "ename": "TypeError", 174 | "evalue": "render() got an unexpected keyword argument 'close'", 175 | "output_type": "error", 176 | "traceback": [ 177 | "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", 178 | "\u001b[1;31mTypeError\u001b[0m Traceback (most recent call last)", 179 | "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[0magent\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mDQNCartPoleSolver\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 2\u001b[0m \u001b[0magent\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtrain\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;36m1000\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 3\u001b[1;33m \u001b[0magent\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdisplaySingleEpisode\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m", 180 | "\u001b[1;32m\u001b[0m in 
\u001b[0;36mdisplaySingleEpisode\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m 119\u001b[0m \u001b[0mnext_state\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mreward\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdone\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0m_\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0menv\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mstep\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0maction\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 120\u001b[0m \u001b[0mframes\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mappend\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0menv\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mrender\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mmode\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m'rgb_array'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 121\u001b[1;33m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0menv\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mrender\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mmode\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m'rgb_array'\u001b[0m\u001b[1;33m,\u001b[0m\u001b[0mclose\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;32mTrue\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 122\u001b[0m \u001b[0mdisplay_frames_as_gif\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mframes\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n", 181 | "\u001b[1;31mTypeError\u001b[0m: render() got an unexpected keyword argument 'close'" 182 | ] 183 | } 184 | ], 185 | "source": [ 186 | "agent = DQNCartPoleSolver()\n", 187 | "agent.train(1000)\n", 188 | "agent.displaySingleEpisode()" 189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "metadata": {}, 195 | "outputs": [], 196 | "source": [] 197 | } 198 | ], 199 | "metadata": { 200 | "kernelspec": { 201 | "display_name": "Python 3", 202 | "language": "python", 203 | "name": "python3" 204 | }, 205 | 
"language_info": { 206 | "codemirror_mode": { 207 | "name": "ipython", 208 | "version": 3 209 | }, 210 | "file_extension": ".py", 211 | "mimetype": "text/x-python", 212 | "name": "python", 213 | "nbconvert_exporter": "python", 214 | "pygments_lexer": "ipython3", 215 | "version": "3.5.4" 216 | } 217 | }, 218 | "nbformat": 4, 219 | "nbformat_minor": 2 220 | } 221 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![Deep Learning Tutorial](img/header.jpg) 2 | 3 | The corresponding videos are also available on Edyoda: https://www.edyoda.com/course/1429/ 4 | 5 | # Course Content 6 | 7 | ### Essential Programming [Tensorflow Tutorial Video Link](https://www.edyoda.com/course/1429/) 8 | 9 | * Introduction to Deep Learning 10 | * Introduction to Numpy 11 | * Introduction to Tensorflow and Keras 12 | 13 | 14 | 15 | ### Essential basics of Linear Algebra 16 | 17 | * Solution of Equations, row and column Interpretation 18 | 19 | * Vector Space Properties 20 | 21 | * Partial Derivative of Polynomial and Two conditions for Local Minima 22 | 23 | * Physical Interpretation of Gradient (Direction of Maximum Change) 24 | 25 | * Matrix Vector Multiplication 26 | 27 | * EVD and interpretation of Eigenvectors 28 | 29 | * Linear Independence and Rank of Matrix 30 | 31 | * Orthonormal Matrices, Projection Matrices, Vandermonde Matrix, Markov Matrix, Symmetric, Block Diagonal 32 | 33 | 34 | 35 | 36 | 37 | ### Selected topics of Machine Learning 38 | 39 | * Intuition behind Linear Regression, Classification 40 | 41 | * Grid Search 42 | 43 | * Gradient Descent 44 | 45 | * Training Pipeline 46 | 47 | * Metrics - ROC Curve, Precision Recall Curve 48 | * Calculating Entropy 49 | 50 | 51 | 52 | 53 | 54 | ### Basics of Neural Network 55 | 56 | * Evolution of Perceptrons, Hebb's Principle, Cat Experiment 57 | 58 | * Single layer NN 59 | 60 | * Tensorflow Code 61 | 62 |
* Multilayer NN 63 | 64 | * Backpropagation, Dynamic Programming 65 | 66 | * Mathematical Take on NN 67 | 68 | * Function Approximator 69 | * Link with Linear Regression 70 | * Dropout and Activation 71 | * Optimizers and Loss Functions 72 | 73 | 74 | 75 | ### Introduction to Convolutional Neural Network 76 | 77 | * 1D and 2D Convolution 78 | * Why CNN for Images and Speech? 79 | * Convolution Layer 80 | * Coding Convolution Layer 81 | * Learning Sharpening using a single Convolution Layer in TensorFlow 82 | 83 | 84 | 85 | ### Different Layers in CNN pipeline 86 | 87 | * Convolution 88 | * Pooling 89 | * Activation 90 | * Dropout 91 | * Batch Normalization 92 | * Object Classification 93 | * Creating Batch in Tensorflow and Normalize 94 | * Training MNIST and CIFAR datasets 95 | * Understanding a pre-trained Inception Architecture 96 | * Input Augmentation Techniques for Images 97 | 98 | 99 | 100 | ### Transfer Learning 101 | 102 | * Finetuning last layers of CNN Model 103 | * Selecting appropriate Loss 104 | * Adding a new class in the last Layer 105 | * Making a model Fully Convolutional for Deployment 106 | * Finetune ImageNet for Cats vs Dogs Classification.
107 | 108 | 109 | 110 | ### Object Detection and Localization 111 | 112 | * Different types of Object Detection problems 113 | * Difficulties in Object Detection and Localization 114 | * Fast RCNN 115 | * Faster RCNN 116 | * YOLO v1-v3 117 | * SSD 118 | * MobileNet 119 | 120 | 121 | 122 | ### Autoencoders 123 | 124 | * Image Compression - Simple Autoencoder 125 | * Denoising Autoencoder 126 | * Variational Autoencoder and Reparameterization Trick 127 | * Robust Word Embedding using Variational Autoencoder 128 | 129 | 130 | 131 | ### Time Series Modelling 132 | 133 | * Evolution of Recurrent Structures 134 | * LSTM, RNN, GRU, Bi-RNN, Time-Dense 135 | * Learning a Sine Wave using RNN in Tensorflow 136 | * Creating Autocomplete for Harry Potter in Tensorflow 137 | 138 | 139 | 140 | ### GANs : [GANs Tutorial Video Link](https://www.edyoda.com/course/1418/) 141 | 142 | * Generative vs Discriminative Models 143 | 144 | * Theory of GAN 145 | 146 | * Simple Distribution Generator in Tensorflow using MCMC (Markov Chain Monte Carlo) 147 | 148 | * DCGAN, WGANs for Images 149 | 150 | * InfoGANs, CycleGANs and Progressive GANs 151 | * Creating a GAN for generating Manga Art 152 | 153 | 154 | 155 | 156 | 157 | ### Model Free Approaches in Reinforcement Learning : [RL Video Link](https://www.edyoda.com/course/1421/) 158 | 159 | - Model Free Prediction 160 | - Monte Carlo Prediction and TD Learning 161 | - Model Free Control with REINFORCE and SARSA Learning 162 | - **Assignment:** Implementation of REINFORCE and SARSA Learning in Gridworld 163 | - Off-Policy vs On-Policy Learning 164 | - Importance Sampling for Off-Policy Learning 165 | - Q Learning 166 | 167 | 168 | 169 | ### Behavioral Cloning and Deep Q Learning 170 | 171 | - Understanding Deep Learning as Function Approximator 172 | - Theory of Behavioral Cloning and Deep Q Learning 173 | - Revisiting Point Collector Example in Unity 174 | - **Assignment:** Training Cartpole Example via Deep Q Learning 175 | 176 | 177 | 178 | 179 |
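180 | ### Appendix: Tabular Q Learning in a Nutshell
181 |
182 | The model-free control topics above boil down to one update rule. Here is a minimal tabular Q-learning sketch in pure Python; the toy two-state chain MDP, the constants, and the `step` function are all illustrative and not taken from the course notebooks:
183 |
184 | ```python
185 | import random
186 |
187 | # Toy 2-state deterministic chain: action 1 moves right, action 0 stays put.
188 | # Stepping right from the last state ends the episode with reward 1;
189 | # every other transition gives reward 0.
190 | N_STATES, N_ACTIONS = 2, 2
191 | GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1
192 |
193 | def step(state, action):
194 |     if action == 1:
195 |         if state == N_STATES - 1:
196 |             return None, 1.0      # episode terminates with reward 1
197 |         return state + 1, 0.0
198 |     return state, 0.0             # "stay" earns nothing
199 |
200 | Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
201 | rng = random.Random(0)
202 |
203 | for _ in range(500):              # episodes
204 |     s = 0
205 |     while s is not None:
206 |         # epsilon-greedy action selection
207 |         if rng.random() < EPS:
208 |             a = rng.randrange(N_ACTIONS)
209 |         else:
210 |             a = max(range(N_ACTIONS), key=lambda act: Q[s][act])
211 |         s_next, r = step(s, a)
212 |         # Q-learning target: bootstrap off the best next action (off-policy)
213 |         target = r if s_next is None else r + GAMMA * max(Q[s_next])
214 |         Q[s][a] += ALPHA * (target - Q[s][a])
215 |         s = s_next
216 |
217 | print(round(Q[1][1], 3), round(Q[0][1], 3))  # converges to 1.0 and GAMMA*1.0 = 0.9
218 | ```
219 |
220 | Because the update bootstraps off `max(Q[s_next])` rather than the action actually taken, this is off-policy learning: the greedy target policy is improved while an epsilon-greedy behaviour policy does the exploring.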
-------------------------------------------------------------------------------- /Untitled.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 3, 6 | "metadata": {}, 7 | "outputs": [ 8 | { 9 | "name": "stdout", 10 | "output_type": "stream", 11 | "text": [ 12 | "float64\n" 13 | ] 14 | } 15 | ], 16 | "source": [ 17 | "import numpy as np\n", 18 | "lst = [1.0,3,5]\n", 19 | "ary = np.array(lst)\n", 20 | "print (ary.dtype)" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 4, 26 | "metadata": {}, 27 | "outputs": [ 28 | { 29 | "name": "stdout", 30 | "output_type": "stream", 31 | "text": [ 32 | "[ 0 1 2 3 4 5 6 7 8 9 10 11]\n" 33 | ] 34 | } 35 | ], 36 | "source": [ 37 | "v = np.arange(12)\n", 38 | "print (v)\n" 39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "execution_count": 11, 44 | "metadata": {}, 45 | "outputs": [ 46 | { 47 | "name": "stdout", 48 | "output_type": "stream", 49 | "text": [ 50 | "[11 10 9 8 7 6 5 4 3 2 1 0]\n", 51 | "[8 6 4]\n" 52 | ] 53 | } 54 | ], 55 | "source": [ 56 | "idx = [3,5,7]\n", 57 | "v1= v[::-1]\n", 58 | "print (v1)\n", 59 | "print (v1[idx])" 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": 12, 65 | "metadata": {}, 66 | "outputs": [ 67 | { 68 | "name": "stdout", 69 | "output_type": "stream", 70 | "text": [ 71 | "[1 3 2]\n" 72 | ] 73 | } 74 | ], 75 | "source": [ 76 | "v = np.array([1,3,2])\n", 77 | "print (v)" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 17, 83 | "metadata": {}, 84 | "outputs": [ 85 | { 86 | "name": "stdout", 87 | "output_type": "stream", 88 | "text": [ 89 | "[10 11 12 13 14 15 16 17 18 19]\n" 90 | ] 91 | } 92 | ], 93 | "source": [ 94 | "v = np.arange(10)\n", 95 | "def tempfunc(x):\n", 96 | " return x\n", 97 | "\n", 98 | "v1=tempfunc(v)\n", 99 | "v1+=10\n", 100 | "print (v)" 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | 
"execution_count": 19, 106 | "metadata": {}, 107 | "outputs": [ 108 | { 109 | "name": "stdout", 110 | "output_type": "stream", 111 | "text": [ 112 | "[[ 0 1 2]\n", 113 | " [ 3 4 5]\n", 114 | " [ 6 7 8]\n", 115 | " [ 9 10 11]]\n" 116 | ] 117 | } 118 | ], 119 | "source": [ 120 | "print (np.arange(12).reshape((4,3)))" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": 24, 126 | "metadata": {}, 127 | "outputs": [ 128 | { 129 | "name": "stdout", 130 | "output_type": "stream", 131 | "text": [ 132 | "[[[ 0 1 2 3 4]\n", 133 | " [ 5 6 7 8 9]\n", 134 | " [10 11 12 13 14]\n", 135 | " [15 16 17 18 19]]\n", 136 | "\n", 137 | " [[20 21 22 23 24]\n", 138 | " [25 26 27 28 29]\n", 139 | " [30 31 32 33 34]\n", 140 | " [35 36 37 38 39]]\n", 141 | "\n", 142 | " [[40 41 42 43 44]\n", 143 | " [45 46 47 48 49]\n", 144 | " [50 51 52 53 54]\n", 145 | " [55 56 57 58 59]]]\n" 146 | ] 147 | } 148 | ], 149 | "source": [ 150 | "mat = np.arange(60).reshape((3,4,5))\n", 151 | "print (mat)" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": 35, 157 | "metadata": {}, 158 | "outputs": [ 159 | { 160 | "name": "stdout", 161 | "output_type": "stream", 162 | "text": [ 163 | "()\n", 164 | "int32\n", 165 | "[[[ 50 51 52 53 54]\n", 166 | " [ 55 56 57 58 59]\n", 167 | " [ 60 61 62 63 64]\n", 168 | " [ 65 66 67 68 69]]\n", 169 | "\n", 170 | " [[ 70 71 72 73 74]\n", 171 | " [ 75 76 77 78 79]\n", 172 | " [ 80 81 82 83 84]\n", 173 | " [ 85 86 87 88 89]]\n", 174 | "\n", 175 | " [[ 90 91 92 93 94]\n", 176 | " [ 95 96 97 98 99]\n", 177 | " [100 101 102 103 104]\n", 178 | " [105 106 107 108 109]]]\n" 179 | ] 180 | } 181 | ], 182 | "source": [ 183 | "vec = 10*np.array(5)\n", 184 | "print (vec.shape)\n", 185 | "print (vec.dtype)\n", 186 | "print (mat+vec)\n" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": 39, 192 | "metadata": {}, 193 | "outputs": [ 194 | { 195 | "name": "stdout", 196 | "output_type": "stream", 197 | "text": [ 198 | 
"\n", 199 | "mat\n", 200 | " [[0 1 2]\n", 201 | " [3 4 5]\n", 202 | " [6 7 8]]\n", 203 | "w = [1 2 4]\n", 204 | "\n", 205 | "[10 31 52]\n", 206 | "[10 31 52]\n" 207 | ] 208 | } 209 | ], 210 | "source": [ 211 | "mat = np.arange(9).reshape((3,3))\n", 212 | "w = np.array([1,2,4])\n", 213 | "print('\\nmat\\n',mat)\n", 214 | "print ('w = %s\\n'%(w))\n", 215 | "print (np.dot(mat,w))\n", 216 | "print (np.sum(mat*w,axis=1))" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "### Question : Change the shape of image from (3,100,120) to (100,120,3)\n", 224 | "np.transpose(mat,$[x_{new},y_{new},z_{new}]$)" 225 | ] 226 | }, 227 | { 228 | "cell_type": "code", 229 | "execution_count": 60, 230 | "metadata": {}, 231 | "outputs": [ 232 | { 233 | "name": "stdout", 234 | "output_type": "stream", 235 | "text": [ 236 | "(100, 120, 3)\n" 237 | ] 238 | } 239 | ], 240 | "source": [ 241 | "\n", 242 | "mat = np.random.randint(0,256,(3,100,120))\n", 243 | "newmat = np.transpose(mat,[1,2,0])\n", 244 | "print (newmat.shape)" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": 63, 250 | "metadata": {}, 251 | "outputs": [ 252 | { 253 | "name": "stdout", 254 | "output_type": "stream", 255 | "text": [ 256 | "mat = \n", 257 | "[[ 0 1 2 3]\n", 258 | " [ 4 5 6 7]\n", 259 | " [ 8 9 10 11]]\n", 260 | "\n", 261 | "mat with reshape = \n", 262 | "[[ 0 1 2]\n", 263 | " [ 3 4 5]\n", 264 | " [ 6 7 8]\n", 265 | " [ 9 10 11]]\n", 266 | "\n", 267 | "mat with transpose = \n", 268 | "[[ 0 4 8]\n", 269 | " [ 1 5 9]\n", 270 | " [ 2 6 10]\n", 271 | " [ 3 7 11]]\n", 272 | "\n" 273 | ] 274 | } 275 | ], 276 | "source": [ 277 | "mat = np.arange(12).reshape((3,4))\n", 278 | "mat1 = mat.reshape((4,3))\n", 279 | "mat2 = np.transpose(mat,[1,0])\n", 280 | "print ('mat = \\n%s\\n'%mat)\n", 281 | "print ('mat with reshape = \\n%s\\n'%mat1)\n", 282 | "print ('mat with transpose = \\n%s\\n'%mat2)\n" 283 | ] 284 | }, 285 | { 286 | "cell_type": "code", 
287 | "execution_count": 64, 288 | "metadata": {}, 289 | "outputs": [ 290 | { 291 | "name": "stdout", 292 | "output_type": "stream", 293 | "text": [ 294 | "[1 2 3]\n" 295 | ] 296 | } 297 | ], 298 | "source": [ 299 | "print (np.sort(np.array([1,3,2])))\n", 300 | "### Question : Change the shape of image from (3,100,120) to (100,120,3)" 301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": 66, 306 | "metadata": {}, 307 | "outputs": [ 308 | { 309 | "name": "stdout", 310 | "output_type": "stream", 311 | "text": [ 312 | "[[ 0 1 2 3]\n", 313 | " [ 4 5 6 7]\n", 314 | " [ 8 9 10 11]]\n" 315 | ] 316 | } 317 | ], 318 | "source": [ 319 | "mat = np.arange(12).reshape((3,4))\n", 320 | "print (mat)" 321 | ] 322 | }, 323 | { 324 | "cell_type": "code", 325 | "execution_count": 67, 326 | "metadata": {}, 327 | "outputs": [ 328 | { 329 | "name": "stdout", 330 | "output_type": "stream", 331 | "text": [ 332 | "[[2 3]\n", 333 | " [6 7]]\n" 334 | ] 335 | } 336 | ], 337 | "source": [ 338 | "print (mat[:2,-2:])" 339 | ] 340 | }, 341 | { 342 | "cell_type": "code", 343 | "execution_count": 69, 344 | "metadata": {}, 345 | "outputs": [ 346 | { 347 | "name": "stdout", 348 | "output_type": "stream", 349 | "text": [ 350 | "(2,)\n" 351 | ] 352 | } 353 | ], 354 | "source": [ 355 | "lst1 = [0,1]\n", 356 | "lst2 = [2,3]\n", 357 | "print (mat[lst1,lst2].shape)" 358 | ] 359 | }, 360 | { 361 | "cell_type": "code", 362 | "execution_count": null, 363 | "metadata": {}, 364 | "outputs": [], 365 | "source": [] 366 | } 367 | ], 368 | "metadata": { 369 | "kernelspec": { 370 | "display_name": "Python 3", 371 | "language": "python", 372 | "name": "python3" 373 | }, 374 | "language_info": { 375 | "codemirror_mode": { 376 | "name": "ipython", 377 | "version": 3 378 | }, 379 | "file_extension": ".py", 380 | "mimetype": "text/x-python", 381 | "name": "python", 382 | "nbconvert_exporter": "python", 383 | "pygments_lexer": "ipython3", 384 | "version": "3.5.4" 385 | } 386 | }, 387 | "nbformat": 
4, 388 | "nbformat_minor": 2 389 | } 390 | -------------------------------------------------------------------------------- /finetuning_keras.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding: utf-8 3 | 4 | # In[1]: 5 | 6 | 7 | import keras 8 | 9 | 10 | # In[1]: 11 | 12 | 13 | from keras.applications import VGG16 14 | #Load the VGG model 15 | vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3)) 16 | 17 | 18 | # In[2]: 19 | 20 | 21 | # Freeze the layers except the last 4 layers 22 | for layer in vgg_conv.layers[:-4]: 23 | layer.trainable = False 24 | 25 | # Check the trainable status of the individual layers 26 | for layer in vgg_conv.layers: 27 | print(layer, layer.trainable) 28 | 29 | 30 | # In[3]: 31 | 32 | 33 | from keras import models 34 | from keras import layers 35 | from keras import optimizers 36 | 37 | # Create the model 38 | model = models.Sequential() 39 | 40 | # Add the vgg convolutional base model 41 | model.add(vgg_conv) 42 | 43 | # Add new layers 44 | model.add(layers.Flatten()) 45 | model.add(layers.Dense(1024, activation='relu')) 46 | model.add(layers.Dropout(0.5)) 47 | model.add(layers.Dense(2, activation='softmax')) 48 | 49 | # Show a summary of the model. 
Check the number of trainable parameters 50 | model.summary() 51 | 52 | 53 | # In[5]: 54 | 55 | 56 | from keras.preprocessing.image import ImageDataGenerator 57 | image_size=224 58 | #train_dir = 'img/catsvsdogs' 59 | #validation_dir = 'img/catsvsdogs' 60 | 61 | train_dir = '/home/jaley/Downloads/all/train_seperate' 62 | validation_dir = '/home/jaley/Downloads/all/train_seperate' 63 | 64 | 65 | train_datagen = ImageDataGenerator( 66 | rescale=1./255, 67 | rotation_range=20, 68 | width_shift_range=0.2, 69 | height_shift_range=0.2, 70 | horizontal_flip=True, 71 | fill_mode='nearest') 72 | 73 | validation_datagen = ImageDataGenerator(rescale=1./255) 74 | 75 | # Change the batchsize according to your system RAM 76 | train_batchsize = 100 77 | val_batchsize = 10 78 | 79 | train_generator = train_datagen.flow_from_directory( 80 | train_dir, 81 | target_size=(image_size, image_size), 82 | batch_size=train_batchsize, 83 | class_mode='categorical') 84 | 85 | validation_generator = validation_datagen.flow_from_directory( 86 | validation_dir, 87 | target_size=(image_size, image_size), 88 | batch_size=val_batchsize, 89 | class_mode='categorical', 90 | shuffle=False) 91 | 92 | 93 | # In[ ]: 94 | 95 | 96 | # Compile the model 97 | model.compile(loss='categorical_crossentropy', 98 | optimizer=optimizers.RMSprop(lr=1e-4), 99 | metrics=['acc']) 100 | # Train the model 101 | history = model.fit_generator( 102 | train_generator, 103 | steps_per_epoch=train_generator.samples/train_generator.batch_size , 104 | epochs=1, 105 | validation_data=validation_generator, 106 | validation_steps=validation_generator.samples/validation_generator.batch_size, 107 | verbose=1) 108 | 109 | # Save the model 110 | model.save('exp/5.2.1-exp/small_last4.h5') 111 | 112 | 113 | 114 | -------------------------------------------------------------------------------- /finetuning_tensorflow.py: -------------------------------------------------------------------------------- 1 | 2 | import keras 3 | from keras 
import backend as K 4 | from keras.applications.vgg16 import preprocess_input 5 | import tensorflow as tf; 6 | sess = tf.Session() 7 | K.set_session(sess) 8 | save_path = 'model/exp-5.7/model' 9 | from random import shuffle 10 | import os; 11 | from PIL import Image 12 | import numpy as np 13 | 14 | from keras import models 15 | from keras import layers 16 | from keras import optimizers 17 | import tensorflow as tf; 18 | 19 | # In[15]: 20 | 21 | 22 | from keras.applications import VGG16 23 | 24 | 25 | 26 | def get_dataset(epoch,batchsize, test_batchsize): 27 | """ 28 | Get the (data,label) pair 29 | """ 30 | VGG_MEAN = np.array([123.68, 116.78, 103.94]) 31 | 32 | test_prefix = '/home/jaley/Downloads/all/test1' 33 | def my_gen(): 34 | prefix = '/home/jaley/Downloads/all/train' 35 | filelist = [filename for filename in os.listdir(prefix)] 36 | shuffle(filelist) 37 | for filename in filelist: 38 | filename=str(filename) 39 | label = np.zeros((2,)) 40 | if filename.startswith('dog'): 41 | path = prefix + '/'+filename 42 | label[1]=1.0 43 | img = np.array(Image.open(path).resize((224,224)),dtype='float32')/255 44 | yield img,label; 45 | if filename.startswith('cat'): 46 | path = prefix + '/'+filename 47 | label[0]=1.0 48 | img = np.array(Image.open(path).resize((224,224)),dtype='float32')/255 49 | yield img,label; 50 | 51 | 52 | #Creating Dataset from Generators 53 | 54 | train_ds = tf.data.Dataset.from_generator(my_gen,output_types=(tf.float32,tf.float32),output_shapes=((224,224,3),(2,))) 55 | test_ds = tf.data.Dataset.from_generator(my_gen,output_types=(tf.float32,tf.float32),output_shapes=((224,224,3),(2,))) 56 | train_ds = train_ds.repeat(epoch).batch(batchsize) 57 | test_ds = test_ds.repeat(epoch).batch(test_batchsize) 58 | return train_ds,test_ds; 59 | 60 | 61 | # In[19]: 62 | 63 | 64 | 65 | train_ds,test_ds = get_dataset(1,20,10) # Generating the dataset here 66 | morphing_iter = tf.data.Iterator.from_structure(train_ds.output_types, 67 | train_ds.output_shapes) 
print (train_ds.output_shapes)
inp, labels = morphing_iter.get_next()
train_init_op = morphing_iter.make_initializer(train_ds)
test_init_op = morphing_iter.make_initializer(test_ds)




#model = keras.applications.resnet50.ResNet50(include_top=True, weights='imagenet', \
#                                             input_tensor=X, input_shape=(224, 224, 3))

# Load the VGG model; include_top defaults to True, so the 'fc1' layer used below exists.
# (is_training is not a VGG16 argument and has been dropped.)
vgg_conv = VGG16(weights='imagenet', input_shape=(224, 224, 3), input_tensor=inp)


# In[16]:

output_layer = "fc1"

with tf.variable_scope('finetuning'):
    y1 = vgg_conv.get_layer(output_layer).output
    y2 = layers.Dense(256, activation='relu')(y1)
    y3 = layers.Dropout(0.5)(y2)
    ypred = layers.Dense(2)(y3)  # raw logits: softmax_cross_entropy below applies softmax itself


writer = tf.summary.FileWriter('log', tf.get_default_graph())


with tf.variable_scope('finetuning'):
    loss = tf.losses.softmax_cross_entropy(logits=ypred, onehot_labels=labels)
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(labels,1), tf.argmax(ypred,1)), tf.float32))
    var_list = []
    for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='finetuning'):
        var_list.append(var)
    initializer = tf.variables_initializer(var_list)

saver = tf.train.Saver()

with sess.as_default():
    train_init_op.run()
    initializer.run()
    train_loss_lst, train_accuracy_lst = [], []
    for itr in range(10000):
        _, _loss, _accuracy = sess.run([optimizer, loss, accuracy])
        train_accuracy_lst.append(_accuracy)
        train_loss_lst.append(_loss)
        if itr%1000 == 0:
            saver.save(sess, save_path, global_step=itr)
        if itr%20 == 0:
            print ('train : itr=%d, loss=%2.6f, accuracy=%2.2f'%(itr, np.mean(train_loss_lst), np.mean(train_accuracy_lst)))
            train_loss_lst = []
            train_accuracy_lst = []

#    test_init_op.run()
#    val_loss_lst, val_accuracy_lst = [], []
#    for val_itr in range(50):
#        _loss, _accuracy = sess.run([loss, accuracy])
#        val_loss_lst.append(_loss)
#        val_accuracy_lst.append(_accuracy)
#    if itr == 0:  # For testing the list
#        print (val_accuracy_lst)
#    print ('val : itr=%d, loss=%2.6f, accuracy=%2.2f'%(itr, np.mean(val_loss_lst), np.mean(val_accuracy_lst)))
#    train_init_op.run()
#

--------------------------------------------------------------------------------
/img/1.2.1-numpy-matrix-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.2.1-numpy-matrix-01.jpg
--------------------------------------------------------------------------------
/img/1.2.1-numpy-matrix.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.2.1-numpy-matrix.ai
--------------------------------------------------------------------------------
/img/1.3.7-hsplit-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.3.7-hsplit-01.jpg
--------------------------------------------------------------------------------
/img/1.3.7-hsplit.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.3.7-hsplit.ai
--------------------------------------------------------------------------------
/img/1.4-pass-by-reference-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.4-pass-by-reference-01.jpg
--------------------------------------------------------------------------------
/img/1.4-pass-by-reference.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.4-pass-by-reference.ai
--------------------------------------------------------------------------------
/img/1.5-broadcast-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.5-broadcast-01.jpg
--------------------------------------------------------------------------------
/img/1.5-broadcast.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/1.5-broadcast.ai
--------------------------------------------------------------------------------
/img/2-tensorflow-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2-tensorflow-01.jpg
--------------------------------------------------------------------------------
/img/2-tensorflow-applications-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2-tensorflow-applications-01.jpg
--------------------------------------------------------------------------------
/img/2-tensorflow-applications.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2-tensorflow-applications.ai
--------------------------------------------------------------------------------
/img/2-tensorflow.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2-tensorflow.ai
--------------------------------------------------------------------------------
/img/2.1.3-tensorflow-graph1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2.1.3-tensorflow-graph1.png
--------------------------------------------------------------------------------
/img/2.3-Tensorflow-blackbox-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2.3-Tensorflow-blackbox-01.jpg
--------------------------------------------------------------------------------
/img/2.3-Tensorflow-blackbox.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2.3-Tensorflow-blackbox.ai
--------------------------------------------------------------------------------
/img/2.5.1-variable-graph.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/2.5.1-variable-graph.jpg
--------------------------------------------------------------------------------
/img/Tensorflow_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/Tensorflow_logo.png
--------------------------------------------------------------------------------
/img/Yoda-featured1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/Yoda-featured1.jpg
--------------------------------------------------------------------------------
/img/catsvsdogs/cats/cat1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/cats/cat1.jpg
--------------------------------------------------------------------------------
/img/catsvsdogs/cats/cat2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/cats/cat2.jpg
--------------------------------------------------------------------------------
/img/catsvsdogs/cats/cat3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/cats/cat3.jpg
--------------------------------------------------------------------------------
/img/catsvsdogs/cats/cat3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/cats/cat3.png
--------------------------------------------------------------------------------
/img/catsvsdogs/dogs/dog1.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/dogs/dog1.jpeg
--------------------------------------------------------------------------------
/img/catsvsdogs/dogs/dog2.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/dogs/dog2.jpeg
--------------------------------------------------------------------------------
/img/catsvsdogs/dogs/dog3.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/catsvsdogs/dogs/dog3.jpeg
--------------------------------------------------------------------------------
/img/dog.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/dog.jpeg
--------------------------------------------------------------------------------
/img/header.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/header.jpg
--------------------------------------------------------------------------------
/img/naivebayes.ai:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/naivebayes.ai
--------------------------------------------------------------------------------
/img/preview/dog_0_184.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_184.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_2567.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_2567.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_2828.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_2828.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_3060.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_3060.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_3236.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_3236.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_364.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_364.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_3800.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_3800.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_4023.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_4023.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_5004.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_5004.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_5606.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_5606.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_5987.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_5987.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_6326.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_6326.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_6665.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_6665.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_6925.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_6925.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_6938.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_6938.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_8075.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_8075.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_863.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_863.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_9160.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_9160.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_9288.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_9288.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_9698.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_9698.jpeg
--------------------------------------------------------------------------------
/img/preview/dog_0_9963.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/img/preview/dog_0_9963.jpeg
--------------------------------------------------------------------------------
/model/2.5.1-exp/events.out.tfevents.1543598132.DESKTOP-CK983JR:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/model/2.5.1-exp/events.out.tfevents.1543598132.DESKTOP-CK983JR
--------------------------------------------------------------------------------
/model/2.6.1-exp/checkpoint:
--------------------------------------------------------------------------------
model_checkpoint_path: "tmp_model.ckpt"
all_model_checkpoint_paths: "tmp_model.ckpt"
--------------------------------------------------------------------------------
/model/2.6.1-exp/tmp_model.ckpt.data-00000-of-00001:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/model/2.6.1-exp/tmp_model.ckpt.data-00000-of-00001
--------------------------------------------------------------------------------
/model/2.6.1-exp/tmp_model.ckpt.index:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/model/2.6.1-exp/tmp_model.ckpt.index
--------------------------------------------------------------------------------
/model/2.6.1-exp/tmp_model.ckpt.meta:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edyoda/AI101-DeepLearning/e478dcd2a8532a46eb0a2f98cd399ce1fc1d5383/model/2.6.1-exp/tmp_model.ckpt.meta
--------------------------------------------------------------------------------
/raw/2.10-project-structure/dataset_io.py:
--------------------------------------------------------------------------------
from __future__ import absolute_import
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
import sklearn
import sklearn.datasets

import pandas as pd

def get_dataset(epoch, batchsize):
    """
    Get the (data,label) pair
    """
    iris_ds = sklearn.datasets.load_iris(return_X_y=False)
    iris_data = pd.DataFrame(data=iris_ds.data, columns=iris_ds.feature_names)
    min_max_scaler = MinMaxScaler()
    scaled_data = min_max_scaler.fit_transform(iris_data)
    encoder = OneHotEncoder(n_values=3)
    label = encoder.fit_transform(iris_ds.target.reshape(-1,1))
    label = label.todense()
    trainx, testx, trainy, testy = train_test_split(scaled_data, label)
    # Creating Dataset
    train_ds = tf.data.Dataset.from_tensor_slices((trainx,trainy)).shuffle(1000).repeat(epoch).batch(batchsize)
    # Creating Dataset
    test_ds = tf.data.Dataset.from_tensors((testx,testy)).shuffle(1000)
    return train_ds, test_ds


--------------------------------------------------------------------------------
/raw/2.10-project-structure/layers.py:
--------------------------------------------------------------------------------
import tensorflow as tf

def fully_connected(inp, nout, scope='fc', activation=None):
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        if activation is None:
            activation = lambda x: x
        nin = inp.shape[-1]
        w = tf.get_variable(name='w', shape=(nin, nout), dtype=tf.float64)
        b = tf.get_variable(name='b', shape=(nout,), dtype=tf.float64)
        out = tf.matmul(inp, w) + b
        return activation(out)


--------------------------------------------------------------------------------
/raw/2.10-project-structure/main.py:
--------------------------------------------------------------------------------
from __future__ import absolute_import
import tensorflow as tf
import dataset_io as io
import model_v1 as model
import param

class Trainer:
    def __init__(self, restore=False):
        self.p = param.Param()
        self.train_ds, self.test_ds = io.get_dataset(epoch=self.p.epochs, batchsize=self.p.batchsize)
        self.session = tf.Session()
        self.model_init()
        self.writer = tf.summary.FileWriter(logdir=self.p.log_path, graph=self.session.graph)
        self.saver = tf.train.Saver()
        # Variable Initialization
        with self.session.as_default():
            if restore == False:
                self.initializer.run()
            else:
                self.saver.restore(self.session, save_path=self.p.model_path)


    def model_init(self):
        train_iter = self.train_ds.make_one_shot_iterator()
        self.TRAINX, self.TRAINY = train_iter.get_next()
        y_pred, self.loss, self.train_accuracy = model.build_model(inp=self.TRAINX, label=self.TRAINY)
        self.optimizer = tf.train.AdamOptimizer(learning_rate=self.p.learning_rate).minimize(self.loss)
        self.merge = tf.summary.merge_all()
        self.initializer = tf.global_variables_initializer()


    def train(self):
        i = 0
        with self.session.as_default():
            try:
                while True:
                    self.optimizer.run()
                    self.writer.add_summary(self.merge.eval(), self.p.batchsize*i)
                    print ('loss = %2.6f, accuracy = %2.6f'%(self.loss.eval(), self.train_accuracy.eval()))
                    i += 1

            except tf.errors.OutOfRangeError:
                print ('loop ended')
                self.saver.save(self.session, self.p.model_path)


    def predict(self):
        pass

    def score(self):
        pass


if __name__=="__main__":
    trainer = Trainer(restore=False)
    trainer.train()
--------------------------------------------------------------------------------
/raw/2.10-project-structure/model_v1.py:
--------------------------------------------------------------------------------
from __future__ import absolute_import
import tensorflow as tf
import layers
import param

p = param.Param()

def build_model(inp, label):
    y1 = layers.fully_connected(inp=inp, nout=p.hidden_units[0], activation=tf.nn.relu, scope="fc-1")
    # Raw logits: tf.losses.softmax_cross_entropy applies softmax internally,
    # so the output layer must not apply its own softmax.
    out = layers.fully_connected(inp=y1, nout=p.num_labels, scope="fc-2")
    with tf.variable_scope('loss'):
        loss = tf.losses.softmax_cross_entropy(logits=out, onehot_labels=label)
        tf.summary.scalar('loss', loss)
    with tf.variable_scope('accuracy'):
        accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(label,1), tf.argmax(out,1)), tf.float64))
        tf.summary.scalar('accuracy', accuracy)

    return out, loss, accuracy
--------------------------------------------------------------------------------
/raw/2.10-project-structure/param.py:
--------------------------------------------------------------------------------
class Param:
    learning_rate = 0.01
    hidden_units = [5]
    num_labels = 3
    num_inputs = 100
    batchsize = 20
    epochs = 200
    train_test_split = 0.8
    model_path = "model/model.ckpt"
    log_path = "graph"
--------------------------------------------------------------------------------
/raw/project1-nn/__init__.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 7 16:26:25 2018

@author: jaley
"""
--------------------------------------------------------------------------------
/raw/project1-nn/__main__.py:
--------------------------------------------------------------------------------
from __future__ import absolute_import
from .nn import main
trainer = main.Trainer()
trainer.train()
--------------------------------------------------------------------------------
/raw/project1-nn/nn/__init__.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 7 16:26:25 2018

@author: jaley
"""
--------------------------------------------------------------------------------
/raw/project1-nn/nn/dataset_io.py:
--------------------------------------------------------------------------------
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
import sklearn
import sklearn.datasets

import pandas as pd

def get_dataset(epoch, batchsize):
    """
    Get the (data,label) pair
    """
    iris_ds = sklearn.datasets.load_iris(return_X_y=False)
    iris_data = pd.DataFrame(data=iris_ds.data, columns=iris_ds.feature_names)
    min_max_scaler = MinMaxScaler()
    scaled_data = min_max_scaler.fit_transform(iris_data)
    encoder = OneHotEncoder(n_values=3)
    label = encoder.fit_transform(iris_ds.target.reshape(-1,1))
    label = label.todense()
    trainx, testx, trainy, testy = train_test_split(scaled_data, label)
    # Creating Dataset
    train_ds = tf.data.Dataset.from_tensor_slices((trainx,trainy)).shuffle(1000).repeat(epoch).batch(batchsize)
    # Creating Dataset
    test_ds = tf.data.Dataset.from_tensors((testx,testy)).shuffle(1000)
    return train_ds, test_ds

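The preprocessing in `dataset_io.py` -- min-max scaling of the iris features and one-hot encoding of the integer labels -- can be sketched in plain Python. The helper names below are illustrative, not sklearn's API; `MinMaxScaler` and `OneHotEncoder` do the same per-column work vectorized:

```python
def min_max_scale(column):
    # Rescale one feature column to [0, 1], as MinMaxScaler does per feature.
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

def one_hot(label, num_classes):
    # One-hot encode a single integer label, as OneHotEncoder does row by row.
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

scaled = min_max_scale([4.3, 5.8, 7.9])   # [0.0, 0.416..., 1.0]
encoded = one_hot(2, 3)                   # [0.0, 0.0, 1.0]
```

`train_test_split` then partitions the scaled rows and one-hot labels before they are wrapped into a `tf.data.Dataset`.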
--------------------------------------------------------------------------------
/raw/project1-nn/nn/layers.py:
--------------------------------------------------------------------------------
import tensorflow as tf

def fully_connected(inp, nout, scope='fc', activation=None):
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        if activation is None:
            activation = lambda x: x
        nin = inp.shape[-1]
        w = tf.get_variable(name='w', shape=(nin, nout), dtype=tf.float64)
        b = tf.get_variable(name='b', shape=(nout,), dtype=tf.float64)
        out = tf.matmul(inp, w) + b
        return activation(out)

--------------------------------------------------------------------------------
/raw/project1-nn/nn/main.py:
--------------------------------------------------------------------------------
import tensorflow as tf
from ..nn import dataset_io as io
from ..nn import model_v1 as model
from ..nn import param

class Trainer:
    def __init__(self, restore=False):
        self.p = param.Param()
        self.train_ds, self.test_ds = io.get_dataset(epoch=self.p.epochs, batchsize=self.p.batchsize)
        self.session = tf.Session()
        self.model_init()
        self.writer = tf.summary.FileWriter(logdir=self.p.log_path, graph=self.session.graph)
        self.saver = tf.train.Saver()
        # Variable Initialization
        with self.session.as_default():
            if restore == False:
                self.initializer.run()
            else:
                self.saver.restore(self.session, save_path=self.p.model_path)


    def model_init(self):
        train_iter = self.train_ds.make_one_shot_iterator()
        self.TRAINX, self.TRAINY = train_iter.get_next()
        y_pred, self.loss, self.train_accuracy = model.build_model(inp=self.TRAINX, label=self.TRAINY)
        self.optimizer = tf.train.AdamOptimizer(learning_rate=self.p.learning_rate).minimize(self.loss)
        self.merge = tf.summary.merge_all()
        self.initializer = tf.global_variables_initializer()


    def train(self):
        i = 0
        with self.session.as_default():
            try:
                while True:
                    self.optimizer.run()
                    self.writer.add_summary(self.merge.eval(), self.p.batchsize*i)
                    print ('loss = %2.6f, accuracy = %2.6f'%(self.loss.eval(), self.train_accuracy.eval()))
                    i += 1

            except tf.errors.OutOfRangeError:
                print ('loop ended')
                self.saver.save(self.session, self.p.model_path)


    def predict(self):
        pass

    def score(self):
        pass


if __name__=="__main__":
    trainer = Trainer(restore=False)
    trainer.train()
--------------------------------------------------------------------------------
/raw/project1-nn/nn/model_v1.py:
--------------------------------------------------------------------------------
import tensorflow as tf
from ..nn import layers
from ..nn import param

p = param.Param()

def build_model(inp, label):
    y1 = layers.fully_connected(inp=inp, nout=p.hidden_units[0], activation=tf.nn.relu, scope="fc-1")
    # Raw logits: tf.losses.softmax_cross_entropy applies softmax internally,
    # so the output layer must not apply its own softmax.
    out = layers.fully_connected(inp=y1, nout=p.num_labels, scope="fc-2")
    with tf.variable_scope('loss'):
        loss = tf.losses.softmax_cross_entropy(logits=out, onehot_labels=label)
        tf.summary.scalar('loss', loss)
    with tf.variable_scope('accuracy'):
        accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(label,1), tf.argmax(out,1)), tf.float64))
        tf.summary.scalar('accuracy', accuracy)

    return out, loss, accuracy
--------------------------------------------------------------------------------
/raw/project1-nn/nn/param.py:
--------------------------------------------------------------------------------
class Param:
    learning_rate = 0.01
    hidden_units = [5]
    num_labels = 3
    num_inputs = 100
    batchsize = 20
    epochs = 200
    train_test_split = 0.8
    model_path = "model/model.ckpt"
    log_path = "graph"

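`model_v1.py` computes its loss with `tf.losses.softmax_cross_entropy`, which takes raw logits, applies softmax internally, and then takes the cross-entropy against the one-hot label. What that op computes per example can be written out in plain Python (a numerically-stable sketch that shifts by the max logit before exponentiating):

```python
import math

def softmax_cross_entropy(logits, onehot_label):
    # Stable softmax: shifting every logit by the max leaves the
    # probabilities unchanged but keeps exp() from overflowing.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Cross-entropy against the one-hot label: -log(prob of the true class).
    return -sum(y * math.log(p) for y, p in zip(onehot_label, probs))

# Uniform logits over num_labels = 3 classes give loss = ln(3) ~ 1.0986,
# the starting loss one expects from an untrained 3-class model.
loss = softmax_cross_entropy([0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

This is also why the output layer must emit logits rather than probabilities: feeding an already-softmaxed vector into this op applies softmax twice and flattens the loss.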
--------------------------------------------------------------------------------
/raw/project2-keras/intro.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 8 12:50:43 2018

@author: jaley
"""

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# No input shape is given here; Keras defers building the weights
# until the model first sees data (or an input_dim is supplied).
model.add(Dense(units=3, activation='relu'))
model.add(Dense(units=5, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
print (model.get_config())
--------------------------------------------------------------------------------
/raw/project2-keras/mnist.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 8 08:59:09 2018

@author: jaley
"""

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
--------------------------------------------------------------------------------
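`mnist.py` rescales the uint8 pixel values to [0, 1] and one-hot encodes the digit labels with `keras.utils.to_categorical`; both transforms reduce to a few lines of plain Python (function names here are illustrative stand-ins, not the Keras API):

```python
def to_categorical(labels, num_classes):
    # One row per label with 1.0 in the label's column,
    # mirroring what keras.utils.to_categorical produces.
    return [[1.0 if j == y else 0.0 for j in range(num_classes)] for y in labels]

def rescale(pixels):
    # uint8 intensities in [0, 255] -> floats in [0, 1] (the x_train /= 255 step).
    return [p / 255.0 for p in pixels]

rows = to_categorical([0, 2], 3)   # [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
vals = rescale([0, 51, 255])       # [0.0, 0.2, 1.0]
```

With the labels one-hot encoded, `categorical_crossentropy` can compare the softmax output row against the label row directly, which is why the final Dense layer has `num_classes` units.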