├── 1_NumpyAcceleration
│   ├── Numpy.ipynb
│   └── assets
│       ├── mlp1.png
│       ├── mlp2.png
│       ├── ndarrayrep.png
│       ├── nestlist.png
│       ├── nonsteeplearn.png
│       ├── nparr.png
│       ├── numba.png
│       ├── numpylogo.png
│       ├── pandas.png
│       ├── pylist.png
│       ├── sklearn.png
│       ├── slices.png
│       ├── stan.png
│       ├── steeplearn.jpg
│       ├── tf.png
│       └── vecproc.gif
├── 2_CythonMultiprocessing
│   ├── CythonAndCtypes.ipynb
│   ├── JoblibMultiprocessing.ipynb
│   └── answers
│       ├── cython_dot.pyx
│       ├── cython_dot2.pyx
│       ├── cython_dot3.pyx
│       ├── cython_setup.py
│       ├── cython_setup2.py
│       ├── cython_setup3.py
│       └── my_dot.c
├── 3_ProfileDebug
│   ├── ProfileDebug.ipynb
│   └── answers
│       ├── epdb1.py
│       ├── epdb2.py
│       ├── example.py
│       └── palindrome.py
├── 4_Tensorflow
│   ├── Keras.ipynb
│   ├── TF.ipynb
│   ├── TF_CNN.ipynb
│   ├── Tensorflow_princetonPy.pptx
│   └── assets
│       ├── TF_api.png
│       ├── flow_graph.png
│       ├── mnist1.png
│       ├── mnist2.png
│       ├── pooling.png
│       └── tf_architecture.png
├── 5_NumbaPyCUDADemo
│   ├── assets
│   │   └── cuda_indexing.png
│   ├── numba.ipynb
│   └── pycuda.ipynb
├── README.md
├── python_slides.pdf
└── requirements.txt
/1_NumpyAcceleration/Numpy.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# What is NumPy\n",
8 | "\n",
9 | "Numpy is Python library for scientific computing. It’s main object is the homogeneous multidimensional array. It is a table of elements, all of the same type, indexed by a tuple of positive integers. \n",
10 | "\n",
11 | "## Why numpy arrays can accelerate your applications?\n",
12 | "\n",
13 | "A NumPy ndarray array is described by metadata: number of dimensions, shape, data type, and so on, and the actual data stored in it. The data is stored in a homogeneous and contiguous blocks of memory, at a particular address in system memory (Random Access Memory, or RAM). This is the main difference with a pure Python data structures, in which the items are scattered across the system memory. This aspect is the critical feature that makes NumPy arrays efficient.\n",
14 | "\n",
15 | "## Why is this so important? \n",
16 | "\n",
17 | "Array computations can be implemented efficiently in a low-level language like C (and a large part of NumPy is actually written in C). Knowing the address of the memory block and the data type, it is just simple arithmetic to loop over all items, for example. There would be a significant overhead to do that in Python with a list.\n",
18 | "\n",
19 | "Spatial locality in memory access patterns results in significant performance gains, notably thanks to the CPU cache. \n",
20 | "\n",
21 | "Data elements are stored contiguously in memory, so that NumPy can take advantage of vectorized instructions on modern CPUs, like Intel's SSE and AVX, AMD's XOP, and so on. \n",
22 | "\n",
23 | "NumPy can be linked to highly optimized linear algebra libraries like BLAS and LAPACK, for example through the Intel Math Kernel Library (MKL). A few specific matrix computations may also be multithreaded, taking advantage of the power of modern multicore processors.\n",
24 | "\n",
25 | "In conclusion, storing data in a contiguous blocks of memory ensures that the architecture of modern CPUs is used optimally, in terms of memory access patterns, CPU cache, and vectorized instructions.\n",
26 | "\n",
27 | "\n",
28 | "# Motivation\n",
29 | "\n",
30 | "* Provide a uniform interface for handling numerical structured data\n",
31 | "* Collect, store, and manipulate numerical data efficiently\n",
32 | "* Low-cost abstractions\n",
33 | "* Universal glue for numerical information, used in lots of external libraries! The API establishes common functions and re-appears in many other settings with the same abstractions.\n",
34 | "\n",
35 | "\n",
36 | "
\n",
37 | "\n",
38 | "\n",
39 | " | |  | |  | \n",
40 | "
\n",
41 | "
"
42 | ]
43 | },
44 | {
45 | "cell_type": "markdown",
46 | "metadata": {},
47 | "source": [
48 | "# Why not a Python list?\n",
49 | "\n",
50 | "A list is a resizing contiguous array of pointers.\n",
51 | "\n",
52 | "
\n",
53 | "\n",
54 | "Nested lists are even worse - there are two levels of indirection.\n",
55 | "\n",
56 | "
\n",
57 | "\n",
58 | "Imagine we're trying to apply a read or write operation over these arrays on our modern CPU:\n",
59 | "\n",
60 | "
\n",
61 | "\n",
62 | "Compare to NumPy arrays:\n",
63 | "\n",
64 | "
\n",
65 | "\n",
66 | "**Recurring theme**: NumPy lets us have the best of both worlds (high-level Python for development, optimized representation and speed via low-level C routines for execution)"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": 2,
72 | "metadata": {
73 | "collapsed": true
74 | },
75 | "outputs": [],
76 | "source": [
77 | "import numpy as np\n",
78 | "import time\n",
79 | "import gc\n",
80 | "import sys\n",
81 | "\n",
82 | "assert sys.maxsize > 2 ** 32, \"get a new computer!\"\n",
83 | "\n",
84 | "# Allocation-sensitive timing needs to be done more carefully\n",
85 | "# Compares runtimes of f1, f2\n",
86 | "def compare_times(f1, f2, setup1=None, setup2=None, runs=5):\n",
87 | " print(' format: mean seconds (standard error)', runs, 'runs')\n",
88 | " maxpad = max(len(f.__name__) for f in (f1, f2))\n",
89 | " means = []\n",
90 | " for setup, f in [[setup1, f1], [setup2, f2]]:\n",
91 | " setup = (lambda: tuple()) if setup is None else setup\n",
92 | " \n",
93 | " total_times = []\n",
94 | " for _ in range(runs):\n",
95 | " try:\n",
96 | " gc.disable()\n",
97 | " args = setup()\n",
98 | " \n",
99 | " start = time.time()\n",
100 | " if isinstance(args, tuple):\n",
101 | " f(*args)\n",
102 | " else:\n",
103 | " f(args)\n",
104 | " end = time.time()\n",
105 | " \n",
106 | " total_times.append(end - start)\n",
107 | " finally:\n",
108 | " gc.enable()\n",
109 | " \n",
110 | " mean = np.mean(total_times)\n",
111 | " se = np.std(total_times) / np.sqrt(len(total_times))\n",
112 | " print(' {} {:.2e} ({:.2e})'.format(f.__name__.ljust(maxpad), mean, se))\n",
113 | " means.append(mean)\n",
114 | " print(' improvement ratio {:.1f}'.format(means[0] / means[1]))"
115 | ]
116 | },
117 | {
118 | "cell_type": "markdown",
119 | "metadata": {},
120 | "source": [
121 | "### Bandwidth-limited ops\n",
122 | "\n",
123 | "* Have to pull in more cache lines for the pointers\n",
124 | "* Poor locality causes pipeline stalls"
125 | ]
126 | },
127 | {
128 | "cell_type": "code",
129 | "execution_count": 3,
130 | "metadata": {},
131 | "outputs": [
132 | {
133 | "name": "stdout",
134 | "output_type": "stream",
135 | "text": [
136 | "('create a list 1, 2, ...', 10000000)\n",
137 | "(' format: mean seconds (standard error)', 5, 'runs')\n",
138 | " create_list 3.20e-01 (3.88e-02)\n",
139 | " create_array 2.62e-02 (5.87e-04)\n",
140 | " improvement ratio 12.2\n"
141 | ]
142 | }
143 | ],
144 | "source": [
145 | "size = 10 ** 7 # ints will be un-intered past 258\n",
146 | "print('create a list 1, 2, ...', size)\n",
147 | "\n",
148 | "\n",
149 | "def create_list(): return list(range(size))\n",
150 | "def create_array(): return np.arange(size, dtype=int)\n",
151 | "\n",
152 | "compare_times(create_list, create_array)"
153 | ]
154 | },
155 | {
156 | "cell_type": "code",
157 | "execution_count": 4,
158 | "metadata": {},
159 | "outputs": [
160 | {
161 | "name": "stdout",
162 | "output_type": "stream",
163 | "text": [
164 | "deep copies (no pre-allocation)\n",
165 | "(' format: mean seconds (standard error)', 5, 'runs')\n",
166 | " copy_list 1.17e-01 (4.36e-03)\n",
167 | " copy_array 2.91e-02 (9.12e-04)\n",
168 | " improvement ratio 4.0\n"
169 | ]
170 | }
171 | ],
172 | "source": [
173 | "print('deep copies (no pre-allocation)') # Shallow copy is cheap for both!\n",
174 | "size = 10 ** 7\n",
175 | "\n",
176 | "ls = list(range(size))\n",
177 | "def copy_list(): return ls[:]\n",
178 | "\n",
179 | "ar = np.arange(size, dtype=int)\n",
180 | "def copy_array(): return np.copy(ar)\n",
181 | "\n",
182 | "compare_times(copy_list, copy_array)"
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "execution_count": 5,
188 | "metadata": {},
189 | "outputs": [
190 | {
191 | "name": "stdout",
192 | "output_type": "stream",
193 | "text": [
194 | "Deep copy (pre-allocated)\n",
195 | "(' format: mean seconds (standard error)', 5, 'runs')\n",
196 | " deep_copy_lists 1.14e-01 (1.41e-02)\n",
197 | " deep_copy_arrays 2.38e-02 (8.40e-03)\n",
198 | " improvement ratio 4.8\n"
199 | ]
200 | }
201 | ],
202 | "source": [
203 | "print('Deep copy (pre-allocated)')\n",
204 | "size = 10 ** 7\n",
205 | "\n",
206 | "def create_lists(): return list(range(size)), [0] * size\n",
207 | "def deep_copy_lists(src, dst): dst[:] = src\n",
208 | "\n",
209 | "def create_arrays(): return np.arange(size, dtype=int), np.empty(size, dtype=int)\n",
210 | "def deep_copy_arrays(src, dst): dst[:] = src\n",
211 | "\n",
212 | "compare_times(deep_copy_lists, deep_copy_arrays, create_lists, create_arrays)"
213 | ]
214 | },
215 | {
216 | "cell_type": "markdown",
217 | "metadata": {},
218 | "source": [
219 | "### Flop-limited ops\n",
220 | "\n",
221 | "* Can't engage VPU on non-contiguous memory: won't saturate CPU computational capabilities of your hardware."
222 | ]
223 | },
224 | {
225 | "cell_type": "code",
226 | "execution_count": 6,
227 | "metadata": {},
228 | "outputs": [
229 | {
230 | "name": "stdout",
231 | "output_type": "stream",
232 | "text": [
233 | "square out-of-place\n",
234 | "(' format: mean seconds (standard error)', 5, 'runs')\n",
235 | " square_lists 2.28e+00 (1.10e-02)\n",
236 | " square_arrays 2.00e-02 (2.49e-03)\n",
237 | " improvement ratio 113.7\n"
238 | ]
239 | }
240 | ],
241 | "source": [
242 | "print('square out-of-place')\n",
243 | "\n",
244 | "def square_lists(src, dst):\n",
245 | " for i, v in enumerate(src):\n",
246 | " dst[i] = v * v\n",
247 | "\n",
248 | "def square_arrays(src, dst):\n",
249 | " np.square(src, out=dst)\n",
250 | " \n",
251 | "compare_times(square_lists, square_arrays, create_lists, create_arrays)"
252 | ]
253 | },
254 | {
255 | "cell_type": "code",
256 | "execution_count": 7,
257 | "metadata": {},
258 | "outputs": [
259 | {
260 | "name": "stdout",
261 | "output_type": "stream",
262 | "text": [
263 | "square in-place\n",
264 | "(' format: mean seconds (standard error)', 5, 'runs')\n",
265 | " square_list 2.25e+00 (3.16e-03)\n",
266 | " square_array 9.89e-03 (4.48e-05)\n",
267 | " improvement ratio 227.1\n"
268 | ]
269 | }
270 | ],
271 | "source": [
272 | "# Caching and SSE can have huge cumulative effects\n",
273 | "\n",
274 | "print('square in-place')\n",
275 | "size = 10 ** 7\n",
276 | "\n",
277 | "def create_list(): return list(range(size))\n",
278 | "def square_list(ls):\n",
279 | " for i, v in enumerate(ls):\n",
280 | " ls[i] = v * v\n",
281 | "\n",
282 | "def create_array(): return np.arange(size, dtype=int)\n",
283 | "def square_array(ar):\n",
284 | " np.square(ar, out=ar)\n",
285 | " \n",
286 | "compare_times(square_list, square_array, create_list, create_array)"
287 | ]
288 | },
289 | {
290 | "cell_type": "markdown",
291 | "metadata": {},
292 | "source": [
293 | "### Memory consumption\n",
294 | "\n",
295 | "List representation uses 8 extra bytes for every value (assuming 64-bit here and henceforth)!"
296 | ]
297 | },
298 | {
299 | "cell_type": "code",
300 | "execution_count": 8,
301 | "metadata": {},
302 | "outputs": [
303 | {
304 | "name": "stdout",
305 | "output_type": "stream",
306 | "text": [
307 | "('list kb', 322)\n",
308 | "('array kb', 78)\n"
309 | ]
310 | }
311 | ],
312 | "source": [
313 | "from pympler import asizeof\n",
314 | "size = 10 ** 4\n",
315 | "\n",
316 | "print('list kb', asizeof.asizeof(list(range(size))) // 1024)\n",
317 | "print('array kb', asizeof.asizeof(np.arange(size, dtype=int)) // 1024)"
318 | ]
319 | },
320 | {
321 | "cell_type": "markdown",
322 | "metadata": {
323 | "collapsed": true
324 | },
325 | "source": [
326 | "### Disclaimer\n",
327 | "\n",
328 | "Regular python lists are still useful! They do a lot of things arrays can't:\n",
329 | "\n",
330 | "* List comprehensions `[x * x for x in range(10) if x % 2 == 0]`\n",
331 | "* Ragged nested lists `[[1, 2, 3], [1, [2]]]`"
332 | ]
333 | },
334 | {
335 | "cell_type": "markdown",
336 | "metadata": {},
337 | "source": [
338 | "# The NumPy Array\n",
339 | "\n",
340 | "[doc](https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#internal-memory-layout-of-an-ndarray)\n",
341 | "\n",
342 | "### Abstraction\n",
343 | "\n",
344 | "We know what an array is -- a contiugous chunk of memory holding an indexed list of things from 0 to its size minus 1. If the things have a particular type, using, say, `dtype` as a placeholder, then we can refer to this as a `classical_array` of `dtype`s.\n",
345 | "\n",
346 | "The NumPy array, an `ndarray` with a _datatype, or dtype,_ `dtype` is an _N_-dimensional array for arbitrary _N_. This is defined recursively:\n",
347 | "* For _N > 0_, an _N_-dimensional `ndarray` of _dtype_ `dtype` is a `classical_array` of _N - 1_ dimensional `ndarray`s of _dtype_ `dtype`, all with the same size.\n",
348 | "* For _N = 0_, the `ndarray` is a `dtype`\n",
349 | "\n",
350 | "We note some familiar special cases:\n",
351 | "* _N = 0_, we have a scalar, or the datatype itself\n",
352 | "* _N = 1_, we have a `classical_array`\n",
353 | "* _N = 2_, we have a matrix\n",
354 | "\n",
355 | "Each _axis_ has its own `classical_array` length: this yields the shape."
356 | ]
357 | },
358 | {
359 | "cell_type": "code",
360 | "execution_count": 9,
361 | "metadata": {},
362 | "outputs": [
363 | {
364 | "name": "stdout",
365 | "output_type": "stream",
366 | "text": [
367 | "ndim 0 shape ()\n",
368 | "3.0\n",
369 | "ndim 1 shape (4,)\n",
370 | "[ 3. 3. 3. 3.]\n",
371 | "ndim 2 shape (2, 4)\n",
372 | "[[ 3. 3. 3. 3.]\n",
373 | " [ 3. 3. 3. 3.]]\n",
374 | "ndim 3 shape (2, 2, 4)\n",
375 | "[[[ 3. 3. 3. 3.]\n",
376 | " [ 3. 3. 3. 3.]]\n",
377 | "\n",
378 | " [[ 3. 3. 3. 3.]\n",
379 | " [ 3. 3. 3. 3.]]]\n"
380 | ]
381 | }
382 | ],
383 | "source": [
384 | "n0 = np.array(3, dtype=float)\n",
385 | "n1 = np.stack([n0, n0, n0, n0])\n",
386 | "n2 = np.stack([n1, n1])\n",
387 | "n3 = np.stack([n2, n2])\n",
388 | "\n",
389 | "for x in [n0, n1, n2, n3]:\n",
390 | " print('ndim', x.ndim, 'shape', x.shape)\n",
391 | " print(x)"
392 | ]
393 | },
394 | {
395 | "cell_type": "markdown",
396 | "metadata": {},
397 | "source": [
398 | "**Axes are read LEFT to RIGHT: an array of shape `(n0, n1, ..., nN-1)` has axis `0` with length `n0`, etc.**\n",
399 | "\n",
400 | "### Detour: Formal Representation\n",
401 | "\n",
402 | "Formally, a NumPy array can be viewed as a mathematical object. If:\n",
403 | "\n",
404 | "* The `dtype` belongs to some (usually field) $F$\n",
405 | "* The array has dimension $N$, with the $i$-th axis having length $n_i$\n",
406 | "* $N>1$\n",
407 | "\n",
408 | "Then this array is an object in:\n",
409 | "\n",
410 | "$$\n",
411 | "F^{n_0}\\otimes F^{n_{1}}\\otimes\\cdots \\otimes F^{n_{N-1}}\n",
412 | "$$\n",
413 | "\n",
414 | "$F^n$ is an $n$-dimensional vector space over $F$. An element in here can be represented by its canonical basis $\\textbf{e}_i^{(n)}$ as a sum for elements $f_i\\in F$:\n",
415 | "\n",
416 | "$$\n",
417 | "f_1\\textbf{e}_1^{(n)}+f_{2}\\textbf{e}_{2}^{(n)}+\\cdots +f_{n}\\textbf{e}_{n}^{(n)}\n",
418 | "$$\n",
419 | "\n",
420 | "$F^n\\otimes F^m$ is a tensor product, which takes two vector spaces and gives you another. Then the tensor product is a special kind of vector space with dimension $nm$. Elements in here have a special structure which we can tie to the original vector spaces $F^n,F^m$:\n",
421 | "\n",
422 | "$$\n",
423 | "\\sum_{i=1}^n\\sum_{j=1}^mf_{ij}(\\textbf{e}_{i}^{(n)}\\otimes \\textbf{e}_{j}^{(m)})\n",
424 | "$$\n",
425 | "\n",
426 | "Above, $(\\textbf{e}_{i}^{(n)}\\otimes \\textbf{e}_{j}^{(m)})$ is a basis vector of $F^n\\otimes F^m$ for each pair $i,j$.\n",
427 | "\n",
428 | "We will discuss what $F$ can be later; but most of this intuition (and a lot of NumPy functionality) is based on $F$ being a type corresponding to a field.\n",
429 | "\n",
430 | "# Back to CS / Mutability / Losing the Abstraction\n",
431 | "\n",
432 | "The above is a (simplified) view of `ndarray` as a tensor, but gives useful intuition for arrays that are **not mutated**.\n",
433 | "\n",
434 | "An `ndarray` **Python object** is a actually a _view_ into a shared `ndarray`. The _base_ is a representative of the equaivalence class of views of the same array\n",
435 | "\n",
436 | "
"
437 | ]
438 | },
439 | {
440 | "cell_type": "code",
441 | "execution_count": 10,
442 | "metadata": {},
443 | "outputs": [
444 | {
445 | "name": "stdout",
446 | "output_type": "stream",
447 | "text": [
448 | "[0 1 2 3 4 5 6 7 8 9]\n"
449 | ]
450 | }
451 | ],
452 | "source": [
453 | "original = np.arange(10)\n",
454 | "\n",
455 | "# shallow copies\n",
456 | "s1 = original[:]\n",
457 | "s2 = s1.view()\n",
458 | "s3 = original[:5]\n",
459 | "\n",
460 | "print(original)"
461 | ]
462 | },
463 | {
464 | "cell_type": "code",
465 | "execution_count": 11,
466 | "metadata": {},
467 | "outputs": [
468 | {
469 | "name": "stdout",
470 | "output_type": "stream",
471 | "text": [
472 | "s1 [ 0 1 -1 3 4 5 6 7 8 9]\n",
473 | "s2 [ 0 1 -1 3 4 5 6 7 8 9]\n",
474 | "s3 [ 0 1 -1 3 4]\n"
475 | ]
476 | }
477 | ],
478 | "source": [
479 | "original[2] = -1\n",
480 | "print('s1', s1)\n",
481 | "print('s2', s2)\n",
482 | "print('s3', s3)"
483 | ]
484 | },
485 | {
486 | "cell_type": "code",
487 | "execution_count": 12,
488 | "metadata": {},
489 | "outputs": [
490 | {
491 | "data": {
492 | "text/plain": [
493 | "(140674547832432, 140674547832432, 140674547832432, 140674547832432, None)"
494 | ]
495 | },
496 | "execution_count": 12,
497 | "metadata": {},
498 | "output_type": "execute_result"
499 | }
500 | ],
501 | "source": [
502 | "id(original), id(s1.base), id(s2.base), id(s3.base), original.base"
503 | ]
504 | },
505 | {
506 | "cell_type": "markdown",
507 | "metadata": {},
508 | "source": [
509 | "### Dtypes\n",
510 | "\n",
511 | "$F$ (our `dtype`) can be ([doc](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html)):\n",
512 | "\n",
513 | "* boolean\n",
514 | "* integral\n",
515 | "* floating-point\n",
516 | "* complex floating-point\n",
517 | "* any structure ([record array](https://docs.scipy.org/doc/numpy/user/basics.rec.html)) of the above, e.g. [complex integral values](http://stackoverflow.com/questions/13863523/is-it-possible-to-create-a-numpy-ndarray-that-holds-complex-integers)\n",
518 | "\n",
519 | "The `dtype` can also be unicode, a date, or an arbitrary object, but those don't form fields. This means that most NumPy functions aren't usful for this data, since it's not numeric. Why have them at all?\n",
520 | "\n",
521 | "* for all: NumPy `ndarray`s offer the tensor abstraction described above.\n",
522 | "* unicode: consistent format in memory for bit operations and for I/O\n",
523 | "* [date](https://docs.scipy.org/doc/numpy/reference/arrays.datetime.html): compact representation, addition/subtraction, basic parsing"
524 | ]
525 | },
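{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch of the date dtype mentioned above:\n",
"\n",
"```python\n",
"d = np.array(['2017-01-01', '2017-03-15'], dtype='datetime64[D]')\n",
"print(d[1] - d[0])   # 73 days -- subtraction yields a timedelta64\n",
"```\n"
]
},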
526 | {
527 | "cell_type": "code",
528 | "execution_count": 13,
529 | "metadata": {},
530 | "outputs": [
531 | {
532 | "name": "stdout",
533 | "output_type": "stream",
534 | "text": [
535 | "i16 296 i64 896\n"
536 | ]
537 | }
538 | ],
539 | "source": [
540 | "# Names are pretty intuitive for basic types\n",
541 | "\n",
542 | "i16 = np.arange(100, dtype=np.uint16)\n",
543 | "i64 = np.arange(100, dtype=np.uint64)\n",
544 | "print('i16', asizeof.asizeof(i16), 'i64', asizeof.asizeof(i64))"
545 | ]
546 | },
547 | {
548 | "cell_type": "code",
549 | "execution_count": 14,
550 | "metadata": {},
551 | "outputs": [
552 | {
553 | "name": "stdout",
554 | "output_type": "stream",
555 | "text": [
556 | "[(1, 1) (2, -1)]\n",
557 | "1+1i\n",
558 | "2-1i\n"
559 | ]
560 | }
561 | ],
562 | "source": [
563 | "# We can use arbitrary structures for our own types\n",
564 | "# For example, exact Gaussian (complex) integers\n",
565 | "\n",
566 | "gauss = np.dtype([('re', np.int32), ('im', np.int32)])\n",
567 | "c2 = np.zeros(2, dtype=gauss)\n",
568 | "c2[0] = (1, 1)\n",
569 | "c2[1] = (2, -1)\n",
570 | "\n",
571 | "def print_gauss(g):\n",
572 | " print('{}{:+d}i'.format(g['re'], g['im']))\n",
573 | " \n",
574 | "print(c2)\n",
575 | "for x in c2:\n",
576 | " print_gauss(x)"
577 | ]
578 | },
579 | {
580 | "cell_type": "code",
581 | "execution_count": 15,
582 | "metadata": {},
583 | "outputs": [
584 | {
585 | "name": "stdout",
586 | "output_type": "stream",
587 | "text": [
588 | "b'\\x00\\x05' 0000000000000101\n",
589 | "b'\\x05\\x00' 0000000000000101\n"
590 | ]
591 | }
592 | ],
593 | "source": [
594 | "l16 = np.array(5, dtype='>u2') # little endian signed char\n",
595 | "b16 = l16.astype('\n",
873 | "\n",
874 | "### Advanced Indexing\n",
875 | "\n",
876 | "Arbitrary combinations of basic indexing. **GOTCHA: All advanced index results are copies, not views**."
877 | ]
878 | },
879 | {
880 | "cell_type": "code",
881 | "execution_count": 25,
882 | "metadata": {},
883 | "outputs": [
884 | {
885 | "name": "stdout",
886 | "output_type": "stream",
887 | "text": [
888 | "m (4, 5)\n",
889 | "[[ 0 1 2 3 4]\n",
890 | " [ 5 6 7 8 9]\n",
891 | " [10 11 12 13 14]\n",
892 | " [15 16 17 18 19]]\n",
893 | "\n",
894 | "m[[1,2,1],:] (3, 5)\n",
895 | "[[ 5 6 7 8 9]\n",
896 | " [10 11 12 13 14]\n",
897 | " [ 5 6 7 8 9]]\n",
898 | "\n"
899 | ]
900 | }
901 | ],
902 | "source": [
903 | "m = np.arange(4 * 5).reshape(4, 5)\n",
904 | "\n",
905 | "# 1D advanced index\n",
906 | "display('m')\n",
907 | "display('m[[1,2,1],:]')"
908 | ]
909 | },
910 | {
911 | "cell_type": "code",
912 | "execution_count": 26,
913 | "metadata": {},
914 | "outputs": [
915 | {
916 | "name": "stdout",
917 | "output_type": "stream",
918 | "text": [
919 | "original indices\n",
920 | " rows [0 1 2 3]\n",
921 | " cols [0 1 2 3 4]\n",
922 | "new indices\n",
923 | " rows [1, 2, 1]\n",
924 | " cols [0 1 2 3 4]\n"
925 | ]
926 | }
927 | ],
928 | "source": [
929 | "print('original indices')\n",
930 | "print(' rows', np.arange(m.shape[0]))\n",
931 | "print(' cols', np.arange(m.shape[1]))\n",
932 | "print('new indices')\n",
933 | "print(' rows', ([1, 2, 1]))\n",
934 | "print(' cols', np.arange(m.shape[1]))"
935 | ]
936 | },
937 | {
938 | "cell_type": "code",
939 | "execution_count": 27,
940 | "metadata": {},
941 | "outputs": [
942 | {
943 | "name": "stdout",
944 | "output_type": "stream",
945 | "text": [
946 | "m (4, 5)\n",
947 | "[[ 0 1 2 3 4]\n",
948 | " [ 5 6 7 8 9]\n",
949 | " [10 11 12 13 14]\n",
950 | " [15 16 17 18 19]]\n",
951 | "\n",
952 | "m[0:1, [[1, 1, 2],[0, 1, 2]]] (1, 2, 3)\n",
953 | "[[[1 1 2]\n",
954 | " [0 1 2]]]\n",
955 | "\n"
956 | ]
957 | }
958 | ],
959 | "source": [
960 | "# 2D advanced index\n",
961 | "display('m')\n",
962 | "display('m[0:1, [[1, 1, 2],[0, 1, 2]]]')"
963 | ]
964 | },
965 | {
966 | "cell_type": "markdown",
967 | "metadata": {},
968 | "source": [
969 | "Why on earth would you do the above? Selection, sampling, algorithms that are based on offsets of arrays (i.e., basically all of them).\n",
970 | "\n",
971 | "**What's going on?**\n",
972 | "\n",
973 | "Advanced indexing is best thought of in the following way:\n",
974 | "\n",
975 | "A typical `ndarray`, `x`, with shape `(n0, ..., nN-1)` has `N` corresponding _indices_. \n",
976 | "\n",
977 | "`(range(n0), ..., range(nN-1))`\n",
978 | "\n",
979 | "Indices work like this: the `(i0, ..., iN-1)`-th element in an array with the above indices over `x` is:\n",
980 | "\n",
981 | "`(range(n0)[i0], ..., range(n2)[iN-1]) == (i0, ..., iN-1)`\n",
982 | "\n",
983 | "So the `(i0, ..., iN-1)`-th element of `x` is the `(i0, ..., iN-1)`-th element of \"x with indices `(range(n0), ..., range(nN-1))`\".\n",
984 | "\n",
985 | "An advanced index `x[:, ..., ind, ..., :]`, where `ind` is some 1D list of integers for axis `j` between `0` and `nj`, possibly with repretition, replaces the straightforward increasing indices with:\n",
986 | "\n",
987 | "`(range(n0), ..., ind, ..., range(nN-1))`\n",
988 | "\n",
989 | "The `(i0, ..., iN-1)`-th element is `(i0, ..., ind[ij], ..., iN-1)` from `x`.\n",
990 | "\n",
991 | "So the shape will now be `(n0, ..., len(ind), ..., nN-1)`.\n",
992 | "\n",
993 | "It can get even more complicated -- `ind` can be higher dimensional."
994 | ]
995 | },
996 | {
997 | "cell_type": "code",
998 | "execution_count": 28,
999 | "metadata": {},
1000 | "outputs": [
1001 | {
1002 | "name": "stdout",
1003 | "output_type": "stream",
1004 | "text": [
1005 | "x (2, 4, 4)\n",
1006 | "[[[ 0 1 2 3]\n",
1007 | " [ 4 5 6 7]\n",
1008 | " [ 8 9 10 11]\n",
1009 | " [12 13 14 15]]\n",
1010 | "\n",
1011 | " [[16 17 18 19]\n",
1012 | " [20 21 22 23]\n",
1013 | " [24 25 26 27]\n",
1014 | " [28 29 30 31]]]\n",
1015 | "\n",
1016 | "x[(0, 0, 1),] (3, 4, 4)\n",
1017 | "[[[ 0 1 2 3]\n",
1018 | " [ 4 5 6 7]\n",
1019 | " [ 8 9 10 11]\n",
1020 | " [12 13 14 15]]\n",
1021 | "\n",
1022 | " [[ 0 1 2 3]\n",
1023 | " [ 4 5 6 7]\n",
1024 | " [ 8 9 10 11]\n",
1025 | " [12 13 14 15]]\n",
1026 | "\n",
1027 | " [[16 17 18 19]\n",
1028 | " [20 21 22 23]\n",
1029 | " [24 25 26 27]\n",
1030 | " [28 29 30 31]]]\n",
1031 | "\n",
1032 | "x[(0, 0, 1)] ()\n",
1033 | "1\n",
1034 | "\n"
1035 | ]
1036 | }
1037 | ],
1038 | "source": [
1039 | "# GOTCHA: accidentally invoking advanced indexing\n",
1040 | "display('x')\n",
1041 | "display('x[(0, 0, 1),]') # advanced\n",
1042 | "display('x[(0, 0, 1)]') # basic\n",
1043 | "# best policy: don't parenthesize when you want basic"
1044 | ]
1045 | },
1046 | {
1047 | "cell_type": "markdown",
1048 | "metadata": {},
1049 | "source": [
1050 | "The above covers the case of one advanced index and the rest being basic. One other common situation that comes up in practice is every index is advanced.\n",
1051 | "\n",
1052 | "Recall array `x` with shape `(n0, ..., nN-1)`. Let `indj` be integer `ndarrays` all of the same shape (say, `(m0, ..., mM-1)`).\n",
1053 | "\n",
1054 | "Then `x[ind0, ... indN-1]` has shape `(m0, ..., mM-1)` and its `t=(j0, ..., jM-1)`-th element is the `(ind0[t], ..., indN-1(t))`-th element of `x`."
1055 | ]
1056 | },
1057 | {
1058 | "cell_type": "code",
1059 | "execution_count": 29,
1060 | "metadata": {},
1061 | "outputs": [
1062 | {
1063 | "name": "stdout",
1064 | "output_type": "stream",
1065 | "text": [
1066 | "m (4, 5)\n",
1067 | "[[ 0 1 2 3 4]\n",
1068 | " [ 5 6 7 8 9]\n",
1069 | " [10 11 12 13 14]\n",
1070 | " [15 16 17 18 19]]\n",
1071 | "\n",
1072 | "m[[1,2],[3,4]] (2,)\n",
1073 | "[ 8 14]\n",
1074 | "\n",
1075 | "m[np.ix_([1,2],[3,4])] (2, 2)\n",
1076 | "[[ 8 9]\n",
1077 | " [13 14]]\n",
1078 | "\n",
1079 | "m[0, np.r_[:2, slice(3, 1, -1), 2]] (5,)\n",
1080 | "[0 1 3 2 2]\n",
1081 | "\n"
1082 | ]
1083 | }
1084 | ],
1085 | "source": [
1086 | "display('m')\n",
1087 | "display('m[[1,2],[3,4]]')\n",
1088 | "\n",
1089 | "# ix_: only applies to 1D indices. computes the cross product\n",
1090 | "display('m[np.ix_([1,2],[3,4])]')\n",
1091 | "\n",
1092 | "# r_: concatenates slices and all forms of indices\n",
1093 | "display('m[0, np.r_[:2, slice(3, 1, -1), 2]]')"
1094 | ]
1095 | },
1096 | {
1097 | "cell_type": "code",
1098 | "execution_count": 30,
1099 | "metadata": {},
1100 | "outputs": [
1101 | {
1102 | "name": "stdout",
1103 | "output_type": "stream",
1104 | "text": [
1105 | "[7 2 9 1 0 8 4 5 6 3]\n",
1106 | "[1 0 1 1 0 0 0 1 0 1]\n",
1107 | "[ True False True True False False False True False True]\n",
1108 | "[2 7 2 2 7 7 7 2 7 2]\n",
1109 | "[7 9 1 5 3]\n"
1110 | ]
1111 | }
1112 | ],
1113 | "source": [
1114 | "# Boolean arrays are converted to integers where they're true\n",
1115 | "# Then they're treated like the corresponding integer arrays\n",
1116 | "np.random.seed(1234)\n",
1117 | "digits = np.random.permutation(np.arange(10))\n",
1118 | "is_odd = digits % 2\n",
1119 | "print(digits)\n",
1120 | "print(is_odd)\n",
1121 | "print(is_odd.astype(bool))\n",
1122 | "print(digits[is_odd]) # GOTCHA\n",
1123 | "print(digits[is_odd.astype(bool)])"
1124 | ]
1125 | },
1126 | {
1127 | "cell_type": "code",
1128 | "execution_count": 31,
1129 | "metadata": {},
1130 | "outputs": [
1131 | {
1132 | "name": "stdout",
1133 | "output_type": "stream",
1134 | "text": [
1135 | "[7 2 9 1 0 8 4 5 6 3]\n",
1136 | "[0 2 3 7 9]\n",
1137 | "[7 9 1 5 3]\n"
1138 | ]
1139 | }
1140 | ],
1141 | "source": [
1142 | "print(digits)\n",
1143 | "print(is_odd.nonzero()[0])\n",
1144 | "print(digits[is_odd.nonzero()])"
1145 | ]
1146 | },
1147 | {
1148 | "cell_type": "code",
1149 | "execution_count": 32,
1150 | "metadata": {},
1151 | "outputs": [
1152 | {
1153 | "name": "stdout",
1154 | "output_type": "stream",
1155 | "text": [
1156 | "[[0 1]\n",
1157 | " [2 3]]\n",
1158 | "[[False True]\n",
1159 | " [False True]]\n",
1160 | "(array([0, 1]), array([1, 1]))\n",
1161 | "[1 3]\n"
1162 | ]
1163 | }
1164 | ],
1165 | "source": [
1166 | "# Boolean selection in higher dimensions:\n",
1167 | "x = np.arange(2 *2).reshape(2, -1)\n",
1168 | "y = (x % 2).astype(bool)\n",
1169 | "print(x)\n",
1170 | "print(y)\n",
1171 | "print(y.nonzero())\n",
1172 | "print(x[y]) # becomes double advanced index"
1173 | ]
1174 | },
1175 | {
1176 | "cell_type": "markdown",
1177 | "metadata": {},
1178 | "source": [
1179 | "# Array Creation and Initialization\n",
1180 | "\n",
1181 | "[doc](https://docs.scipy.org/doc/numpy-dev/reference/routines.array-creation.html)\n",
1182 | "\n",
1183 | "If unspecified, default dtype is usually float, with an exception for arange."
1184 | ]
1185 | },
1186 | {
1187 | "cell_type": "code",
1188 | "execution_count": 42,
1189 | "metadata": {},
1190 | "outputs": [
1191 | {
1192 | "name": "stdout",
1193 | "output_type": "stream",
1194 | "text": [
1195 | "np.linspace(4, 8, 2) (2,)\n",
1196 | "[ 4. 8.]\n",
1197 | "\n",
1198 | "np.arange(4, 8, 2) (2,)\n",
1199 | "[4 6]\n",
1200 | "\n"
1201 | ]
1202 | }
1203 | ],
1204 | "source": [
1205 | "display('np.linspace(4, 8, 2)')\n",
1206 | "display('np.arange(4, 8, 2)') # GOTCHA"
1207 | ]
1208 | },
1209 | {
1210 | "cell_type": "code",
1211 | "execution_count": 43,
1212 | "metadata": {},
1213 | "outputs": [
1214 | {
1215 | "data": {
1216 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYcAAAD8CAYAAACcjGjIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xl8XOV97/HPT5ttyYtsS5aNF2SDjbfEAYQxIXEIi+1A\nEtNbwiVtAuFy6/Q2aZuQlKy93CYhr7RNQkjaJqGBFpo0YJw0uIFINmAgbYLBgG0kr8K78UiyZcmL\nZC0zv/vHHJmxJdla58yMvu/XSy+d88wzmt/h4POd85wz85i7IyIikigr7AJERCT1KBxERKQThYOI\niHSicBARkU4UDiIi0onCQUREOlE4iIhIJwoHERHpROEgIiKd5IRdQF8VFRV5aWlp2GWIiKSNV199\n9bC7F/ekb9qGQ2lpKRs2bAi7DBGRtGFme3vaV8NKIiLSicJBREQ6UTiIiEgnCgcREelE4SAiIp2c\nNxzM7GEzqzWzyoS2cWa21sx2Br/HBu1mZt83s2oz22xmlyU8546g/04zuyOh/XIzeyN4zvfNzAZ6\nI0VEpHd6cubwr8Cys9q+CDzr7jOBZ4N1gA8AM4OfFcAPIR4mwL3AlcBC4N6OQAn6/EnC885+LRER\nSbLzhoO7vwjUn9W8HHgkWH4EuDmh/VGPewkoNLNJwFJgrbvXu/tRYC2wLHhstLu/5PH5Sh9N+Fsi\nIpLguW01PPRfu2mLxgb9tfp6zaHE3Q8FyxGgJFieDOxP6HcgaDtX+4Eu2rtkZivMbIOZbairq+tj\n6SIi6emR3+3l0d/vISdr8Eff+31BOnjH7wNQS09e60F3L3P3suLiHn0CXEQkIxw71cbv3jzMsnkT\nScal2b6GQ00wJETwuzZoPwhMTeg3JWg7V/uULtpFRCTBum21tEWdJfMmJuX1+hoOq4GOO47uAJ5M\naL89uGtpEdAYDD9VAEvMbGxwIXoJUBE8dszMFgV3Kd2e8LdERCRQXhlhwqhhXDq1MCmvd94v3jOz\nnwPXAEVmdoD4XUffAlaa2V3AXuDWoPvTwI1ANdAE3Ang7vVm9nXglaDf19y94yL3nxG/I2oE8Jvg\nR0REAqfaojy/vY5bLp9CVhKuN0APwsHdP9rNQ9d10deBT3Xzdx4GHu6ifQMw/3x1iIgMVS/uqKO5\nLcrSJA0pgT4hLSKS8sqrIowZkcuVM8Yl7TUVDiIiKawtGuPZrbVcN2cCudnJO2QrHEREUtj6XfU0\nNrexLIlDSqBwEBFJaeVVhxiRm83iWcn9bJfCQUQkRcVizpqqGq65pJjhudlJfW2Fg4hIinp9fwO1\nx1tYNj+5Q0qgcBARSVkVVRFys433z56Q9NdWOIiIpCB3p6IqwrsvKmL08Nykv77CQUQkBW2LHGfv\nkaZQhpRA4SAikpLKKyOYwQ1zS87feRAoHEREUlBFVYQrLhxH0chhoby+wkFEJMXsPXKSbZHjLA1p\nSAkUDiIiKaeiKgLA0nnhDCmBwkFEJOWUV0aYP3k0U8bmh1aDwkFEJIXUHDvFa/sakv5dSmdTOIiI\npJA1W2oAkjp3Q1cUDiIiKaSiMsKM4gIunjAy1DoUDiIiKaKhqZXf7zrCsnkTMUvOdKDdUTiIiKSI\nZ7fWEo156ENKoHAQEUkZ5VURJo0ZzjunjAm7FIWDiEgqaGpt58UddSxNgSElUDiIiKSEF7bX0dIe\nS4khJVA4iIikhPKqCOMK8riidGzYpQAKBxGR0LW2x3huay03zCkhJzs1DsupUYWIyBD2uzcPc7yl\nnaXzw/supbMpHEREQlZRFWHksBzefVFR2KWcpnAQEQlRNOasqarh/bMnMDw3O+xyTlM4iIiE6NW9\nRzlysjXUr+fuisJBRCRE5ZUR8nKyuOaSCWGXcgaFg4hISNydiqoIi2cWMXJYTtjlnKFf4WBmnzWz\nKjOrNLOfm9lwM5tuZuvNrNrMHjezvKDvsGC9Oni8NOHvfClo325mS/u3SSIi6aHqrWMcbGhmSYp8\n8C1Rn8PBzCYDfwGUuft8IBu4Dfhb4H53vxg4CtwVPOUu4GjQfn/QDzObGzxvHrAM+CczS52rMiIi\ng6S8MkJ2lnH9nNS63gD9H1bKAUaYWQ6QDxwCrgVWBY8/AtwcLC8P1gkev87iXyCyHHjM3VvcfTdQ\nDSzsZ10iIimvvCrCwtJxjCvIC7uUTvocDu5+EPg2sI94KDQCrwIN7t4edDsATA6WJwP7g+e2B/3H\nJ7Z38RwRkYxUXXuC6toTLJufekNK0L9hpbHE3/VPBy4ACogPCw0aM1thZhvMbENdXd1gvpSIyKCq\nqIoAsCTFbmHt0J9hpeuB3e5e5+5twC+Bq4HCYJgJYApwMFg+CEwFCB4fAxxJbO/iOWdw9wfdvczd\ny4qLi/tRuohIuCqqIiyYWsikMSPCLqVL/QmHfcAiM8sPrh1cB2wB1gG3BH3uAJ4MllcH6wSPP+fu\nHrTfFtzNNB2YCbzcj7pERFLawYZmNh9oZFkK3qXUoc831rr7ejNbBbwGtAOvAw8CTwGPmdk3graH\ngqc8BPybmVUD9cTvUMLdq8xsJfFgaQc+5e7RvtYlIpLq1gRDSqn2qehEFn/znn7Kysp8w4YNYZch\nItJr//PHv+doUytrPvu+pL6umb3q7mU96atPSIuIJNGREy28sqc+pYeUQOEgIpJUz2ytIeawNEVv\nYe2gcBARSaLyyghTxo5g7qTRYZdyTgoHEZEkOX6qjf+uPsKyeROJ3+SZuhQOIiJJsm57Ha3RWMp+\nKjqRwkFEJEkqKiMUjRzGZdPGhl3KeSkcRESS4FRblHXba1kyr4SsrNQeUgKFg4hIUvzXzsM0tUZZ\nmuK3sHZQOIiIJEF5VYRRw3O4asb4sEvpEYWDiMgga4/GeGZrDdfPKSEvJz0Ou+lRpYhIGnt5dz0N\nTW0p/V1KZ1M4iIgMsvKqCMNzs1g8K32mGlA4iIgMoljMWVNVw/tmFZOf1+cvwk46hYOIyCDadKCB\nyLFTaXOXUgeFg4jIICqvipCTZVw3O32uN4DCQURk0Lg7FZURrrpoPGPyc8Mup1cUDiIig2RHzQn2\nHGlKuyElUDiIiAya8soIZrBkbnoNKYHCQURk0FRURbh82lgmjB4edim9pnAQERkE+440seXQsbQc\nUgKFg4jIoKioigAoHERE5G0VVRHmThrNtPH5YZfSJwoHEZEBVnvsFK/uO5q2Zw2gcBARGXBrttTg\nTlpMB9odhYOIyACrqIpQOj6fWSUjwy6lzxQOIiIDqLGpjd+/eYSl8ydilvrTgXZH4SAiMoCe215D\ne8xZlsbXG0DhICIyoMorI5SMHsaCKYVhl9IvCgcRkQHS3BrlhR11LJ03kays9B1SAoWDiMiAeWFH\nHafaYmk/pAQKBxGRAVNRFaEwP5eF08eFXUq/9SsczKzQzFaZ2TYz22pmV5nZODNba2Y7g99jg75m\nZt83s2oz22xmlyX8nTuC/jvN
7I7+bpSISLK1tsd4ZmsN188pISc7/d9393cLHgDK3X02sADYCnwR\neNbdZwLPBusAHwBmBj8rgB8CmNk44F7gSmAhcG9HoIiIpIuXdh3h+Kn2jBhSgn6Eg5mNARYDDwG4\ne6u7NwDLgUeCbo8ANwfLy4FHPe4loNDMJgFLgbXuXu/uR4G1wLK+1iUiEobyqgj5edm8Z2ZR2KUM\niP6cOUwH6oB/MbPXzewnZlYAlLj7oaBPBOiY5WIysD/h+QeCtu7aOzGzFWa2wcw21NXV9aN0EZGB\nE405a6pqeP8lExiemx12OQOiP+GQA1wG/NDdLwVO8vYQEgDu7oD34zXO4O4PunuZu5cVFxcP1J8V\nEemX1/cd5fCJFpam8Xcpna0/4XAAOODu64P1VcTDoiYYLiL4XRs8fhCYmvD8KUFbd+0iImmhvDJC\nXnYW778kc9609jkc3D0C7DezS4Km64AtwGqg446jO4Ang+XVwO3BXUuLgMZg+KkCWGJmY4ML0UuC\nNhGRlOfulFdFuPri8Ywanht2OQMmp5/P/3PgZ2aWB+wC7iQeOCvN7C5gL3Br0Pdp4EagGmgK+uLu\n9Wb2deCVoN/X3L2+n3WJiCTFlkPHOHC0mU+//+KwSxlQ/QoHd98IlHXx0HVd9HXgU938nYeBh/tT\ni4hIGCoqI2QZXD+35Pyd00j6f1JDRCRE5VURrigdR9HIYWGXMqAUDiIifbSr7gQ7ak6k9XSg3VE4\niIj0UUVVDUBG3cLaQeEgItJH5VUR3jllDJMLR4RdyoBTOIiI9MGhxmY27W/IyCElUDiIiPTJmo4h\nJYWDiIh0KK+McPGEkVw8YWTYpQwKhYOISC/Vn2zl5T31LJ2XWZ9tSKRwEBHppWe21hCNOcvmTQq7\nlEGjcBAR6aWKygiTC0cwf/LosEsZNAoHEZFeONHSzm+rD7NkXglmFnY5g0bhICLSC89vr6W1PZYx\n04F2R+EgItIL5ZURxhfkUVY6LuxSBpXCQUSkh061RVm3rZYb5paQnZW5Q0qgcBAR6bHfvXmYk63R\njPwupbMpHEREeqi8MsLIYTm8+6LxYZcy6BQOIiI90B6N8czWWq6dPYFhOdlhlzPoFA4iIj3wyp6j\n1J9sZdkQGFIChYOISI9UVEXIy8nifbOKwy4lKRQOIiLn4e5UVEVYPLOYgmE5YZeTFAoHEZHz2Hyg\nkUONp4bMkBIoHEREzikWc/6+Yjv5edlcP2dC2OUkjcJBROQcfrp+L/9VfZiv3DSHwvy8sMtJGoWD\niEg3dtWd4JtPb+V9s4r5o4XTwi4nqRQOIiJdaI/GuHvlJoblZPN3t7wzo7+BtStD47K7iEgv/eiF\nN9m4v4EffPRSSkYPD7ucpNOZg4jIWSoPNvK9Z3byoQUX8KEFF4RdTigUDiIiCU61Rbl75UbGFeTx\n9eXzwi4nNBpWEhFJ8N21O9hRc4J/vfOKIXV30tl05iAiEli/6wj//Ntd/NGV07jmkqHzmYau9Dsc\nzCzbzF43s18H69PNbL2ZVZvZ42aWF7QPC9arg8dLE/7Gl4L27Wa2tL81iYj01omWdj73xCamjs3n\nKzfOCbuc0A3EmcNfAlsT1v8WuN/dLwaOAncF7XcBR4P2+4N+mNlc4DZgHrAM+Cczy/zvwxWRlPKN\nX2/hYEMz3711wZD5/qRz6Vc4mNkU4CbgJ8G6AdcCq4IujwA3B8vLg3WCx68L+i8HHnP3FnffDVQD\nC/tTl4hIbzy3rYbHXtnPJxdflPFzQ/dUf88cvgfcA8SC9fFAg7u3B+sHgMnB8mRgP0DweGPQ/3R7\nF88RERlU9SdbuWfVG8yeOIrP3jAz7HJSRp/Dwcw+CNS6+6sDWM/5XnOFmW0wsw11dXXJelkRyVDu\nzld/9QaNza1899Z3DYkZ3nqqP2cOVwMfNrM9wGPEh5MeAArNrGPAbgpwMFg+CEwFCB4fAxxJbO/i\nOWdw9wfdvczdy4qLh8aEGyIyeFZveoun34jw2RtmMfeC0WGXk1L6HA7u/iV3n+LupcQvKD/n7n8M\nrANuCbrdATwZLK8O1gkef87dPWi/LbibaTowE3i5r3WJiPTEocZm/vpXlVx+4Vg+ufiisMtJOYNx\nSf4LwGNm9g3gdeChoP0h4N/MrBqoJx4ouHuVma0EtgDtwKfcPToIdYmIAPHhpHtWbaYt6nznIwvI\nzhpaX6rXEwMSDu7+PPB8sLyLLu42cvdTwEe6ef59wH0DUYuIyPn89KW9/HbnYb5x83xKiwrCLicl\n6RPSIjKk7D58kvue3sriWcX88ZVDa46G3lA4iMiQEZ+jYWN8joY/HHpzNPSGPgYoIkPGj1/cxev7\nGvj+Ry9l4pihN0dDb+jMQUSGhKq3GvneMzu46Z2T+PAQnaOhNxQOIpLxWtqj3P34Jgrz8/jG8vlh\nl5MWNKwkIhnvu2t3sL3mOP/yiSsYWzB052joDZ05iEhGe2VPPQ++uIuPLpzG+2cP7TkaekPhICIZ\n60RLO3ev3MjUsfl89SbN0dAbGlYSkYx131NbOXC0mZWfvEpzNPSSzhxEJCOt21bLz1/ex4rFM7hC\nczT0msJBRDLO0ZOt3POLzVxSMoq7b5gVdjlpSedZIpJR4nM0VNLQ1Mq/3nmF5mjoI505iEhGWb3p\nLZ564xCfuX4W8y4YE3Y5aUvhICIZI9J4ir/+VSWXTSvkk4tnhF1OWlM4iEhGcHfu+UUwR8Ot7yIn\nW4e3/tB/PRHJCD9bv48Xd9Tx5ZvmMF1zNPSbwkFE0t6ewye576mtvHdmER/THA0DQuEgImktGnPu\nXrmR3Gzj727RHA0DRbeyikha+/GLb/LavgYeuO1dTBozIuxyMobOHEQkbW156xj3r93BTe/QHA0D\nTeEgImmppT3K3Ss3Upifx9dvnq/hpAGmYSURSUv3r93JtshxHv5EGeM0R8OA05mDiKSdV/bU8+MX\n3+SjC6dy7eySsMvJSAoHEUkrJ1va+dzKTUwZO4Kv3DQ37HIyloaVRCStfPPprew/2sTjK65ipOZo\nGDQ6cxCRtLFuey0/W7+PFe+dwcLpmqNhMCkcRCQtNDS18oVV8TkaPqs5GgadzslEJC389ZNVHG1q\n5V/uvILhuZqjYbDpzEFEUt7qTW/xn5ve0hwNSaRwEJGUVnMsPkfDpZqjIakUDiKSstyde1ZtpqU9\nync+skBzNCRRn/9Lm9lUM1tnZlvMrMrM/jJoH2dma81sZ/B7bNBuZvZ9M6s2s81mdlnC37oj6L/T\nzO7o/2aJSLo72NDMxx5azws76vjyjXOYUTwy7JKGlP7EcDvwOXefCywCPmVmc4EvAs+6+0zg2WAd\n4APAzOBnBfBDiIcJcC9wJbAQuLcjUERk6HF3Hn9lH0vvf5HX9zVw3x/M5+OLLgy7rCGnz3crufsh\n4FCwfNzMtgKTgeXANUG3R4DngS8E7Y+6uwMvmVmhmU0K+q5193oAM1sLLAN+3tfaRCQ91
Rw7xRd/\nsZl12+tYNGMcf3/LAqaOyw+7rCFpQG5lNbNS4FJgPVASBAdABOj44pPJwP6Epx0I2rprF5Ehwt35\n1caD3PtkFa3RGP/vQ3O5/apSsrL0Tath6Xc4mNlI4BfAZ9z9WOLX5rq7m5n39zUSXmsF8SEppk3T\nVIAimaDueAtf+Y83WLOlhssvHMu3P7JAc0CngH6Fg5nlEg+Gn7n7L4PmGjOb5O6HgmGj2qD9IDA1\n4elTgraDvD0M1dH+fFev5+4PAg8ClJWVDVjoiEg4ntp8iK/+6g1Otkb58o2zues9M8jW2UJK6M/d\nSgY8BGx19+8mPLQa6Ljj6A7gyYT224O7lhYBjcHwUwWwxMzGBheilwRtIpKh6k+28ul/f41P/ftr\nTBuXz1N//h5WLL5IwZBC+nPmcDXwceANM9sYtH0Z+Baw0szuAvYCtwaPPQ3cCFQDTcCdAO5eb2Zf\nB14J+n2t4+K0iGSeNVURvvwflTQ2t/L5JbP40/ddpM8vpCCL3zyUfsrKynzDhg1hlyEiPdTY1Mbf\n/GcVv3z9IHMmjeY7H1nA3AtGh13WkGJmr7p7WU/66ov3RGTQPb+9li/8YjOHT7TyF9dezKevnUle\njs4WUpnCQUQGzYmWdu57ags/f3k/MyeM5J9vL+OdUwrDLkt6QOEgIoPid9WH+atVmznU2Myfvu8i\nPnP9TH3VdhpROIjIgGpqbedbv9nGo7/fy4yiAp7403dz+YX6Rpx0o3AQkQHzyp56Pv/EJvbVN/G/\nrp7OXy29hBF5OltIRwoHEem3U21Rvl2xnYf+ezdTxo7gsT9ZxJUzxoddlvSDwkFE+uX1fUf5/BOb\neLPuJB9bNI0vfWAOBcN0aEl32oMi0ict7VEeeGYnP3rhTSaOHs5P77qS98wsCrssGSAKBxHptcqD\njXz+iU1sixzn1rIpfPWDcxk9PDfssmQAKRxEpMfaojH+cV01//BcNeMK8nj4E2VcO7vk/E+UtKNw\nEJEe2R45zuee2EjlwWP8waWTufdDcynMzwu7LBkkCgcROaf2aIwfv7iLB57ZyegROfzoY5ezbP7E\nsMuSQaZwEJFuVdee4PNPbGLj/gZuesckvrZ8HuNHDgu7LEkChYOIdFJ7/BSrXj3AA8/sZEReNj/4\n6KV8aMEFYZclSaRwEBEAdh8+yZqqCGu21PDavqO4w/VzSvjm/5jPhFHDwy5PkkzhIDJEuTtvHGxk\nTVUNa7ZE2FFzAoD5k0fz2etnsWReCZeUjCJxXngZOhQOIkNIWzTGy7vrqaiKsHZLDYcaT5GdZSws\nHce9H5rGDXNLmDI2P+wyJQUoHEQyXFNrOy9sr2PNlhqe3VrDsVPtDM/NYvHMYj6/5BKunT2BsQW6\nJVXOpHAQyUBHTrTw7NZa1myJ8Nudh2lpj1GYn8uSeRNZMreE984s1relyjkpHEQyxP76JiqCC8ob\n9tQTc5hcOII/unIaS+ZO5IrSseRka2pO6RmFg0iacne2HDrGmqoaKqoibIscB2D2xFF8+tqZLJlb\nwrwLRuuCsvSJwkEkjbRHY2zYe/T0HUYHjjZjBldcOI6v3jSHG+aWcOH4grDLlAygcBBJcafaovx2\n52EqqiI8u7WGo01t5OVk8d6Li/jzay/mujklFOlTyzLAFA4iKaihqfX0BeUXdxymuS3K6OE5XDen\nhCVzS1g8q1gT6sig0v9dIiE60dLOnsMn2X34ZPz3kZPsqjvJGwcbicaciaOH85GyKSyZO5ErZ4wj\nVxeUJUkUDiKD7FRblD1HgoP/4SZ2Hz7BnsNN7D5ykrrjLWf0nTRmOKXjC/jk4hksnTeRd0weQ1aW\nLihL8ikcRAZAa3uMffVNp88CdgdhsOfwSd5qPHVG36KRw5helM81s4qZXlzA9PEFlBYVUDq+QJ89\nkJShcBDpofZojIMNzfGD/+lhoHggHDjaRMzf7luYn0vp+AIWzRgfP/AXdYRAPqM0naakAYWDSIJY\nzDl07BS7685897/78En2H22iLfp2AowclsP0ogIWTC3k5nddEA+A4EczpEm6UzhIRnJ3TrS009jc\nRkNTG8ea22gMfhoSlhubEpab26g5doqW9tjpvzM8N4vS8QVcMnEUS+dPZPr4AqYXx4eAikbm6QNm\nkrFSJhzMbBnwAJAN/MTdvxVySRIyd6e5Lfr2QT3hQH7srPWGhPaOtmjiOM9ZcrONMSNyGT0ilzEj\ncikamcdFxQUUjxrG9KKRlBblM72ogJJRw3VBWIaklAgHM8sG/hG4ATgAvGJmq919S7iVSXfcndZo\njFNtMVrao7S0xWhpD5bbY8F6sNweo6UtYbmL/s2tiSHQSmNzO8ea22iNxrqtIctgTHBwHzMilzH5\neUwbl8+YETmMGZFL4Yi80wFQmJ97Rt/8vGy96xc5h5QIB2AhUO3uuwDM7DFgOZCW4eDuxPzt3zF3\n3MFJWI9B1J32WIxYDNpjMaIxpz3mxILf0YTf8eXz9317PXbG87vs605bNEbr6YN24kH8/Af5/srL\nyWJYThbDcrIZkZd1+sA9e+Lo0+/ozz6ovx0EuYzMy9G7epFBkirhMBnYn7B+ALhyMF7ogz/4Lc2t\nUbzjIE38dywWP5ifXg8O7h39ulznzBDo+J3qsgxysrLIzjJysoxhufED9LCcrPgBOze+PHJYDuML\nsoPH3+6T2H940HdYwvNO9z3H8/Kys3RgF0lhqRIOPWJmK4AVANOmTevT37i4eCRtMSfLDCN+oMwy\nw8wwO8d6/PXJMiPLCB6zoK2LdeK/Ow6AHc/LCv6uWfzAnBUcoLMTfncsZ5mRk21kZ2WdtX7m46eX\ns7LIPmPdOq9nmYZTROS8UiUcDgJTE9anBG1ncPcHgQcBysrK+vQe/Xu3XdqXp4mIDCmp8kUtrwAz\nzWy6meUBtwGrQ65JRGTISokzB3dvN7NPAxXEb2V92N2rQi5LRGTISolwAHD3p4Gnw65DRERSZ1hJ\nRERSiMJBREQ6UTiIiEgnCgcREelE4SAiIp2Yexp830MXzKwO2NvHpxcBhwewnDBlyrZkynaAtiUV\nZcp2QP+25UJ3L+5Jx7QNh/4wsw3uXhZ2HQMhU7YlU7YDtC2pKFO2A5K3LRpWEhGRThQOIiLSyVAN\nhwfDLmAAZcq2ZMp2gLYlFWXKdkCStmVIXnMQEZFzG6pnDiIicg4ZGw5m9rCZ1ZpZZTePm5l938yq\nzWyzmV2W7Bp7qgfbco2ZNZrZxuDn/ya7xp4ws6lmts7MtphZlZn9ZRd90mK/9HBb0mW/DDezl81s\nU7Atf9NFn2Fm9niwX9abWWnyKz23Hm7HJ8ysLmGf/O8wau0pM8s2s9fN7NddPDa4+yQ+9WXm/QCL\ngcuAym4evxH4DWDAImB92DX3Y1uuAX4ddp092I5JwGXB8ihgBzA3HfdLD7clXfaLASOD5VxgPbDo\nrD5/BvwoWL4NeDzsuvu4HZ8A/iHsWnuxTXcD
/97V/0eDvU8y9szB3V8E6s/RZTnwqMe9BBSa2aTk\nVNc7PdiWtODuh9z9tWD5OLCV+PzhidJiv/RwW9JC8N/6RLCaG/ycfTFyOfBIsLwKuM5SbL7ZHm5H\n2jCzKcBNwE+66TKo+yRjw6EHJgP7E9YPkKb/uANXBafTvzGzeWEXcz7BKfClxN/dJUq7/XKObYE0\n2S/B8MVGoBZY6+7d7hd3bwcagfHJrfL8erAdAH8YDFmuMrOpXTyeKr4H3APEunl8UPfJUA6HTPIa\n8Y/FLwB+APwq5HrOycxGAr8APuPux8Kupz/Osy1ps1/cPeru7yI+f/tCM5sfdk190YPt+E+g1N3f\nCazl7XfeKcXMPgjUuvurYdUwlMPhIJD4rmFK0JZ23P1Yx+m0x2fUyzWzopDL6pKZ5RI/mP7M3X/Z\nRZe02S/n25Z02i8d3L0BWAcsO+uh0/vFzHKAMcCR5FbXc91th7sfcfeWYPUnwOXJrq2HrgY+bGZ7\ngMeAa83sp2f1GdR9MpTDYTVwe3B3zCKg0d0PhV1UX5jZxI6xRjNbSHy/ptw/3KDGh4Ct7v7dbrql\nxX7pybak0X4pNrPCYHkEcAOw7axuq4E7guVbgOc8uBKaKnqyHWddv/ow8WtFKcfdv+TuU9y9lPjF\n5ufc/WMqwNfpAAAAxUlEQVRndRvUfZIyc0gPNDP7OfG7RYrM7ABwL/ELVLj7j4jPV30jUA00AXeG\nU+n59WBbbgH+j5m1A83Aban2DzdwNfBx4I1gXBjgy8A0SLv90pNtSZf9Mgl4xMyyiQfYSnf/tZl9\nDdjg7quJB+G/mVk18Zsjbguv3G71ZDv+wsw+DLQT345PhFZtHyRzn+gT0iIi0slQHlYSEZFuKBxE\nRKQThYOIiHSicBARkU4UDiIi0onCQUREOlE4iIhIJwoHERHp5P8DqxyXs0hqd6cAAAAASUVORK5C\nYII=\n",
1217 | "text/plain": [
1218 | ""
1219 | ]
1220 | },
1221 | "metadata": {},
1222 | "output_type": "display_data"
1223 | }
1224 | ],
1225 | "source": [
1226 | "plt.plot(np.linspace(1, 4, 10), np.logspace(1, 4, 10))\n",
1227 | "plt.show()"
1228 | ]
1229 | },
1230 | {
1231 | "cell_type": "code",
1232 | "execution_count": 44,
1233 | "metadata": {},
1234 | "outputs": [
1235 | {
1236 | "name": "stdout",
1237 | "output_type": "stream",
1238 | "text": [
1239 | "[[ 0. 0.]\n",
1240 | " [ 0. 0.]\n",
1241 | " [ 0. 0.]\n",
1242 | " [ 0. 0.]]\n",
1243 | "[[ 1. 2.]\n",
1244 | " [ 0. 0.]\n",
1245 | " [ 0. 0.]\n",
1246 | " [ 0. 0.]]\n"
1247 | ]
1248 | }
1249 | ],
1250 | "source": [
1251 | "shape = (4, 2)\n",
1252 | "print(np.zeros(shape)) # init to zero. Use np.ones or np.full accordingly\n",
1253 | "\n",
1254 | "# [GOTCHA] np.empty won't initialize anything; it will just grab the first available chunk of memory\n",
1255 | "x = np.zeros(shape)\n",
1256 | "x[0] = [1, 2]\n",
1257 | "del x\n",
1258 | "print(np.empty(shape))"
1259 | ]
1260 | },
1261 | {
1262 | "cell_type": "code",
1263 | "execution_count": 45,
1264 | "metadata": {},
1265 | "outputs": [
1266 | {
1267 | "data": {
1268 | "text/plain": [
1269 | "array([[1, 2],\n",
1270 | " [3, 4],\n",
1271 | " [5, 6]])"
1272 | ]
1273 | },
1274 | "execution_count": 45,
1275 | "metadata": {},
1276 | "output_type": "execute_result"
1277 | }
1278 | ],
1279 | "source": [
1280 | "# From iterator/list/array - can just use constructor\n",
1281 | "np.array([[1, 2], range(3, 5), np.array([5, 6])]) # auto-flatten (if possible)"
1282 | ]
1283 | },
1284 | {
1285 | "cell_type": "code",
1286 | "execution_count": 46,
1287 | "metadata": {},
1288 | "outputs": [
1289 | {
1290 | "name": "stdout",
1291 | "output_type": "stream",
1292 | "text": [
1293 | "[[0 1]\n",
1294 | " [2 5]]\n",
1295 | "[[0 1]\n",
1296 | " [2 3]]\n",
1297 | "[[0 0]\n",
1298 | " [0 0]]\n"
1299 | ]
1300 | }
1301 | ],
1302 | "source": [
1303 | "# Deep copies & shape/dtype preserving creations\n",
1304 | "x = np.arange(4).reshape(2, 2)\n",
1305 | "y = np.copy(x)\n",
1306 | "z = np.zeros_like(x)\n",
1307 | "x[1, 1] = 5\n",
1308 | "print(x)\n",
1309 | "print(y)\n",
1310 | "print(z)"
1311 | ]
1312 | },
1313 | {
1314 | "cell_type": "markdown",
1315 | "metadata": {},
1316 | "source": [
1317 | "Extremely extensive [random generation](https://docs.scipy.org/doc/numpy/reference/routines.random.html). Remember to seed!\n",
1318 | "\n",
1319 | "# Transposition\n",
1320 | "\n",
1321 | "**Under the hood**. So far, we've just been looking at the abstraction that NumPy offers. How does it actually keep things contiguous in memory?\n",
1322 | "\n",
1323 | "We have a base array, which is one long contiguous array from 0 to size - 1."
1324 | ]
1325 | },
1326 | {
1327 | "cell_type": "code",
1328 | "execution_count": 47,
1329 | "metadata": {},
1330 | "outputs": [
1331 | {
1332 | "name": "stdout",
1333 | "output_type": "stream",
1334 | "text": [
1335 | "(2, 3, 4)\n",
1336 | "24\n"
1337 | ]
1338 | }
1339 | ],
1340 | "source": [
1341 | "x = np.arange(2 * 3 * 4).reshape(2, 3, 4)\n",
1342 | "print(x.shape)\n",
1343 | "print(x.size)"
1344 | ]
1345 | },
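{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch, reusing `x` from the cell above: `strides` records how many bytes to step along each axis of the flat base buffer (values below assume the default 8-byte integer dtype):\n",
"\n",
"```python\n",
"print(x.strides)   # (96, 32, 8): one block is 3*4*8 bytes, one row is 4*8, one element is 8\n",
"```\n"
]
},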
1346 | {
1347 | "cell_type": "code",
1348 | "execution_count": 48,
1349 | "metadata": {},
1350 | "outputs": [
1351 | {
1352 | "name": "stdout",
1353 | "output_type": "stream",
1354 | "text": [
1355 | "[[[ 0 1 2 3]\n",
1356 | " [ 4 5 6 7]\n",
1357 | " [ 8 9 10 11]]\n",
1358 | "\n",
1359 | " [[12 13 14 15]\n",
1360 | " [16 17 18 19]\n",
1361 | " [20 21 22 23]]]\n",
1362 | "[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]\n"
1363 | ]
1364 | }
1365 | ],
1366 | "source": [
1367 | "# Use ravel() to get the underlying flat array. np.flatten() will give you the original\n",
1368 | "print(x)\n",
1369 | "print(x.ravel())"
1370 | ]
1371 | },
1372 | {
1373 | "cell_type": "code",
1374 | "execution_count": 49,
1375 | "metadata": {},
1376 | "outputs": [
1377 | {
1378 | "name": "stdout",
1379 | "output_type": "stream",
1380 | "text": [
1381 | "transpose (2, 3, 4) -> (4, 3, 2)\n",
1382 | "rollaxis (2, 3, 4) -> (3, 2, 4)\n",
1383 | "\n",
1384 | "arbitrary permutation [0, 1, 2] [0 2 1]\n",
1385 | "(2, 3, 4) -> (2, 4, 3)\n",
1386 | "moved[1, 2, 0] 14 x[1, 0, 2] 14\n"
1387 | ]
1388 | }
1389 | ],
1390 | "source": [
1391 | "# np.transpose or *.T will reverse axes\n",
1392 | "print('transpose', x.shape, '->', x.T.shape)\n",
1393 | "# rollaxis pulls the argument axis to axis 0, keeping all else the same.\n",
1394 | "print('rollaxis', x.shape, '->', np.rollaxis(x, 1, 0).shape)\n",
1395 | "\n",
1396 | "print()\n",
1397 | "# all the above are instances of np.moveaxis\n",
1398 | "# it's clear how these behave:\n",
1399 | "\n",
1400 | "perm = np.array([0, 2, 1])\n",
1401 | "moved = np.moveaxis(x, range(3), perm)\n",
1402 | "\n",
1403 | "print('arbitrary permutation', list(range(3)), perm)\n",
1404 | "print(x.shape, '->', moved.shape)\n",
1405 | "print('moved[1, 2, 0]', moved[1, 2, 0], 'x[1, 0, 2]', x[1, 0, 2])"
1406 | ]
1407 | },
1408 | {
1409 | "cell_type": "code",
1410 | "execution_count": 50,
1411 | "metadata": {},
1412 | "outputs": [
1413 | {
1414 | "name": "stdout",
1415 | "output_type": "stream",
1416 | "text": [
1417 | "sigma 3.19, eig 3.19\n"
1418 | ]
1419 | }
1420 | ],
1421 | "source": [
1422 | "# When is transposition useful?\n",
1423 | "# Matrix stuff, mostly:\n",
1424 | "np.random.seed(1234)\n",
1425 | "\n",
1426 | "X = np.random.randn(3, 4)\n",
1427 | "print('sigma {:.2f}, eig {:.2f}'.format(\n",
1428 | " np.linalg.svd(X)[1].max(),\n",
1429 | " np.sqrt(np.linalg.eigvalsh(X.dot(X.T)).max())))"
1430 | ]
1431 | },
1432 | {
1433 | "cell_type": "code",
1434 | "execution_count": 51,
1435 | "metadata": {},
1436 | "outputs": [
1437 | {
1438 | "data": {
1439 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQ8AAAD8CAYAAABpXiE9AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAADg9JREFUeJzt3X+s3XV9x/Hnay2gQ+RXjTSliMzG6XCLeIOoi2umJtAY\nukSWwBIFI+l0EnWRZKgJJibL1D80Oo2kQSIsBsnUyHWpMThguCwwKimUQoALcWlrB9i6ItPByt77\n434xx+v91c/53nPOxecjOTmf7/f7Od/Pm0/Ji+9PmqpCko7W74y7AEmrk+EhqYnhIamJ4SGpieEh\nqYnhIanJUOGR5JQktyR5pPs+eYF+zyXZ1X2mhxlT0mTIMM95JPkscKiqPp3kKuDkqvqbefo9XVUv\nGaJOSRNm2PB4CNhcVQeSrAdur6pXz9PP8JBeYIYNj/+qqpO6doCfPb88p98RYBdwBPh0VX1ngf1t\nA7YBvOh384bTzzquubYXusfvf/G4S5h4z2w8ftwlTLxn9+77aVW9rOW3a5fqkOQHwGnzbPrE4EJV\nVZKFkugVVbU/yVnArUl2V9WjcztV1XZgO8Cm1724vnDz7y35D/Db6vOves24S5h4M1eeN+4SJt6P\nP3zlf7T+dsnwqKq3L7QtyeNJ1g+ctjyxwD72d9+PJbkdeD3wG+EhafUY9lbtNHBp174UuHluhyQn\nJzmua68D3gI8MOS4ksZs2PD4NPCOJI8Ab++WSTKV5Nquz2uAnUnuBW5j9pqH4SGtckuetiymqg4C\nb5tn/U7g8q79b8DrhhlH0uTxCVNJTQwPSU0MD0lNDA9JTQwPSU0MD0lNDA9JTQwPSU0MD0lNDA9J\nTQwPSU0MD0lNDA9JTQwPSU0MD0lNDA9JTQwPSU0MD0lNDA9JTQwPSU0MD0lNDA9JTQwPSU0MD0lN\nDA9JTQwPSU0MD0lNegmPJOcneSjJTJKr5tl+XJKbuu13JTmzj3Eljc/Q4ZFkDfBl4ALgtcAlSV47\np9v7gJ9V1auAzwOfGXZcSePVx5HHucBMVT1WVc8C3wC2zumzFbi+a38TeFuS9DC2pDHpIzw2AHsH\nlvd16+btU1VHgMPAqT2MLWlMJuqCaZJtSXYm2Xn40HPjLkfSIvoIj/3AxoHl07t18/ZJshY4ETg4\nd0dVtb2qpqpq6sRT1vRQmqSV0kd43A1sSvLKJMcCFwPTc/pMA5d27YuAW6uqehhb0pisHXYHVXUk\nyRXA94E1wHVVtSfJp4CdVTUNfBX4hyQzwCFmA0bSKjZ0eABU1Q5gx5x1Vw+0/wf48z7GkjQZJuqC\nqaTVw/CQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTw\nkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUpJfw\nSHJ+koeSzCS5ap7tlyV5Msmu7nN5H+NKGp+1w+4gyRrgy8A7gH3A3Ummq+qBOV1vqqorhh1P0mTo\n48jjXGCmqh6rqmeBbwBbe9ivpAk29JEHsAHYO7C8D3jjPP3eleStwMPAX1fV3rkdkmwDtgGsXXci\nH7r74h7Ke2E6+46fjLuEiXfGMwfGXcLE+/EQvx3VBdPvAmdW1R8CtwDXz9epqrZX1VRVTa156fEj\nKk1Siz7CYz+wcWD59G7dr1TVwap6plu8FnhDD+NKGqM+wuNuYFOSVyY5FrgYmB7skGT9wOKFwIM9\njCtpjIa+5lFVR5JcAXwfWANcV1V7knwK2FlV08CHklwIHAEOAZcNO66k8erjgilVtQPYMWfd1QPt\njwEf62MsSZPBJ0wlNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1\nMTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUx\nPCQ16SU8klyX5Ikk9y+wPUm+mGQmyX1JzuljXEnj09eRx9eA8xfZfgGwqftsA77S07iSxqSX8Kiq\nO4BDi3TZCtxQs+4ETkqyvo+xJY3HqK55bAD2Dizv69b9miTbkuxMsvO5p/57RKVJajFRF0yrantV\nTVXV1JqXHj/uciQtYlThsR/YOLB8erdO0io1qvCYBt7T3XU5DzhcVQdGNLakFbC2j50kuRHYDKxL\nsg/4JHAMQFVdA+wAtgAzwC+A9/YxrqTx6SU8quqSJbYX8ME+xpI0GSbqgqmk1cPwkNTE8JDUxPCQ\n1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDU\nxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUpJfwSHJdkieS3L/A9s1JDifZ1X2u\n7mNcSePTy190DXwN+BJwwyJ9flhV7+xpPElj1suRR1XdARzqY1+SVoe+jjyW401J7gV+AlxZVXvm\ndkiyDdgGcMaGtTz8J9ePsLzV5YItfzHuEibejz9w/LhLeEEb1QXTe4BXVNUfAX8PfGe+TlW1vaqm\nqmrqZaeuGVFpklqMJDyq6qmqerpr7wCOSbJuFGNLWhkjCY8kpyVJ1z63G/fgKMaWtDJ6ueaR5EZg\nM7AuyT7gk8AxAFV1DXAR8IEkR4BfAhdXVfUxtqTx6CU8quqSJbZ/idlbuZJeIHzCVFITw0NSE8ND\nUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NS\nE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUpOhwyPJxiS3JXkgyZ4kH56n\nT5J8MclMkvuSnDPsuJLGq4+/6PoI8NGquifJCcCPktxSVQ8M9LkA2NR93gh8pfuWtEoNfeRRVQeq\n6p6u/XPgQWDDnG5bgRtq1p3ASUnWDzu2pPHp9ZpHkjOB1wN3zdm0Adg7sLyP3wwYSatIb+GR5CXA\nt4CPVNVTjfvYlmRnkp1PHnyur9IkrYBewiPJMcwGx9er6tvzdNkPbBxYPr1b92uqantVTVXV1MtO\nXdNHaZJWSB93WwJ8FXiwqj63QLdp4D3dXZfzgMNVdWDYsSWNTx93W94CvBvYnWRXt+7jwBkAVXUN\nsAPYAswAvwDe28O4ksZo6PCoqn8FskSfAj447FiSJodPmEpqYnhIamJ4SGpieEhqYnhIamJ4SGpi\neEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4\nSGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqMnR4JNmY5LYkDyTZk+TD8/TZnORwkl3d5+ph\nx5U0Xmt72McR4KNVdU+SE4Af
Jbmlqh6Y0++HVfXOHsaTNAGGPvKoqgNVdU/X/jnwILBh2P1Kmmyp\nqv52lpwJ3AGcXVVPDazfDHwL2Af8BLiyqvbM8/ttwLZu8Wzg/t6K68c64KfjLmKA9Sxu0uqByavp\n1VV1QssPewuPJC8B/gX426r69pxtLwX+r6qeTrIF+EJVbVpifzuraqqX4noyaTVZz+ImrR6YvJqG\nqaeXuy1JjmH2yOLrc4MDoKqeqqqnu/YO4Jgk6/oYW9J49HG3JcBXgQer6nML9Dmt60eSc7txDw47\ntqTx6eNuy1uAdwO7k+zq1n0cOAOgqq4BLgI+kOQI8Evg4lr6fGl7D7X1bdJqsp7FTVo9MHk1NdfT\n6wVTSb89fMJUUhPDQ1KTiQmPJKckuSXJI933yQv0e27gMffpFajj/CQPJZlJctU8249LclO3/a7u\n2ZYVtYyaLkvy5MC8XL6CtVyX5Ikk8z6Dk1lf7Gq9L8k5K1XLUdQ0stcjlvm6xkjnaMVeIamqifgA\nnwWu6tpXAZ9ZoN/TK1jDGuBR4CzgWOBe4LVz+vwVcE3Xvhi4aYXnZTk1XQZ8aUR/Tm8FzgHuX2D7\nFuB7QIDzgLsmoKbNwD+NaH7WA+d07ROAh+f58xrpHC2zpqOeo4k58gC2Atd37euBPxtDDecCM1X1\nWFU9C3yjq2vQYJ3fBN72/G3oMdY0MlV1B3BokS5bgRtq1p3ASUnWj7mmkanlva4x0jlaZk1HbZLC\n4+VVdaBr/yfw8gX6vSjJziR3Juk7YDYAeweW9/Gbk/yrPlV1BDgMnNpzHUdbE8C7ukPgbybZuIL1\nLGW59Y7am5Lcm+R7Sf5gFAN2p7SvB+6as2lsc7RITXCUc9THcx7LluQHwGnzbPrE4EJVVZKF7iG/\noqr2JzkLuDXJ7qp6tO9aV5nvAjdW1TNJ/pLZI6M/HXNNk+QeZv+9ef71iO8Ai74eMazudY1vAR+p\ngfe8xmmJmo56jkZ65FFVb6+qs+f53Aw8/vyhW/f9xAL72N99PwbczmyK9mU/MPhf7dO7dfP2SbIW\nOJGVfVp2yZqq6mBVPdMtXgu8YQXrWcpy5nCkasSvRyz1ugZjmKOVeIVkkk5bpoFLu/alwM1zOyQ5\nOclxXXsds0+3zv3/hgzjbmBTklcmOZbZC6Jz7+gM1nkRcGt1V5xWyJI1zTlfvpDZc9pxmQbe091R\nOA84PHA6OhajfD2iG2fR1zUY8Rwtp6amORrFFehlXhE+Ffhn4BHgB8Ap3fop4Nqu/WZgN7N3HHYD\n71uBOrYwezX6UeAT3bpPARd27RcB/wjMAP8OnDWCuVmqpr8D9nTzchvw+ytYy43AAeB/mT1Xfx/w\nfuD93fYAX+5q3Q1MjWB+lqrpioH5uRN48wrW8sdAAfcBu7rPlnHO0TJrOuo58vF0SU0m6bRF0ipi\neEhqYnhIamJ4SGpieEhqYnhIamJ4SGry/5jvDWhIrWiwAAAAAElFTkSuQmCC\n",
1440 | "text/plain": [
1441 | ""
1442 | ]
1443 | },
1444 | "metadata": {},
1445 | "output_type": "display_data"
1446 | },
1447 | {
1448 | "data": {
1449 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQ8AAAD8CAYAAABpXiE9AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAADgpJREFUeJzt3X/MnWV9x/H3Z7S0U5gUaqCWKqJEZW4KPkGUxTRDIxJD\nl8gS+EPAaDqdZLqoGWqCCcky9Q+XMY2MIBGcATIw8GhqDA4QlwWkkPKjEKDwD62dQMuKREXrvvvj\nuTHHh+dXr3M/55zi+5WcnOu+7+u5r2+vNp/eP9tUFZJ0oP5o3AVIOjgZHpKaGB6SmhgekpoYHpKa\nGB6SmgwVHkmOTHJzkke77zXz9Pttkm3dZ3qYMSVNhgzznEeSLwN7q+qLSS4C1lTVP8zR77mqOmyI\nOiVNmGHD42FgY1XtTrIOuK2q3jBHP8NDeokZNjz+t6qO6NoBnnlheVa//cA2YD/wxaq6cZ79bQY2\nA7z8ZXnbG19/aHNtL3WP3PeycZcw8Va9KeMuYeI9/dDep6vqlS0/u2h4JPkhcMwcmz4PXDUYFkme\nqaoXXfdIsr6qdiU5HrgFOL2qHlto3Km3rK6f/GDDUn4Nf5De+6q3jruEife6u1aPu4SJ929T/353\nVU21/OyKxTpU1bvn25bkZ0nWDZy2PDnPPnZ1348nuQ04CVgwPCRNtmFv1U4D53ft84GbZndIsibJ\nqq69FjgNeHDIcSWN2bDh8UXgPUkeBd7dLZNkKskVXZ83AVuT3Avcysw1D8NDOsgtetqykKraA5w+\nx/qtwEe69n8DfzbMOJImj0+YSmpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpi\neEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4\nSGpieEhqYnhIatJLeCQ5I8nDSXYkuWiO7auSXNdtvzPJcX2MK2l8hg6PJIcAXwPeB5wInJvkxFnd\nPgw8U1WvB/4Z+NKw40oarz6OPE4BdlTV41X1a+BaYNOsPpuAq7r29cDpSdLD2JLGpI/wWA88MbC8\ns1s3Z5+q2g/sA47qYWxJYzJRF0yTbE6yNcnWp/b8dtzlSFpAH+GxC9gwsHxst27OPklWAK8A9sze\nUVVdXlVTVTX1yqMO6aE0Sculj/C4CzghyWuTHAqcA0zP6jMNnN+1zwZuqarqYWxJY7Ji2B1U1f4k\nFwI/AA4Brqyq7UkuAbZW1TTwDeBbSXYAe5kJGEkHsaHDA6CqtgBbZq27eKD9K+Cv+xhL0mSYqAum\nkg4ehoekJoaHpCaGh6QmhoekJoaHpCaGh6QmhoekJoaHpCaGh6QmhoekJoaHpCaGh6QmhoekJoaH\npCaGh6QmhoekJoaHpCaGh6QmhoekJoaHpCaGh6QmhoekJoaHpCaGh6QmhoekJoaHpCaGh6QmvYRH\nkjOSPJxkR5KL5th+QZKnkmzrPh/pY1xJ47Ni2B0kOQT4GvAeYCdwV5LpqnpwVtfrqurCYceTNBn6\nOPI4BdhRVY9X1a+Ba4FNPexX0gQb+sgDWA88MbC8E3j7HP0+kORdwCPA31fVE7M7JNkMbAZYzct4\n76ve2kN5L01//KOjx13CxHvkM+vHXcJL2qgumH4XOK6q/hy4Gbhqrk5VdXlVTVXV1EpWjag0SS36\nCI9dwIaB5WO7db9TVXuq6vlu8QrgbT2MK2mM+giPu4ATkrw2yaHAOcD0YIck6wYWzwIe6mFcSWM0\n9DWPqtqf5ELgB8AhwJVVtT3JJcDWqpoG/i7JWcB+YC9wwbDjShqvPi6YUlVbgC2z1l080P4s8Nk+\nxpI0GXzCVFITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1IT\nw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUpNe\nwiPJlUmeTPLAPNuT5NIkO5Lcl+TkPsaVND59HXl8Ezhjge3vA07oPpuBr/c0rqQx6SU8qup2YO8C\nXTYBV9eMO4AjkqzrY2xJ4zGqax7rgScGlnd2635Pks1JtibZ+hueH1FpklpM1AXTqrq8qqaqamol\nq8ZdjqQFjCo8dgEbBpaP7dZJOkiNKjymgfO6uy6nAvuqaveIxpa0DFb0sZMk1wAbgbVJdgJfAFYC\nVNVlwBbgTGAH8AvgQ32MK2l8egmPqjp3ke0FfLyPsSRNhom6YCrp4GF4SGpieEhqYnhIamJ4SGpi\neEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhqYnhIamJ4\nSGpieEhqYnhIamJ4SGpieEhqYnhIamJ4SGpieEhq0kt4JLkyyZNJHphn+8Yk+5Js6z4X9zGupPHp\n5T+6Br4JfBW4eoE+P66q9/c0nqQx6+XIo6puB/b2sS9JB4e+jjyW4h1J7gV+Cny6qrbP7pBkM7AZ\n4LBjXs7rvrd6hOUdXB75zPpxlzDxdr/DPz+LurX9R0d1wfQe4DVV9RbgX4Eb5+pUVZdX1VRVTa1e\ns2pEpUlqMZLwqKpnq+q5rr0FWJlk7SjGlrQ8RhIeSY5Jkq59SjfunlGMLWl59HLNI8k1wEZgbZKd\nwBeAlQBVdRlwNvCxJPuBXwLnVFX1Mbak8eglPKrq3EW2f5WZW7mSXiJ8wlRSE8NDUhPDQ1ITw0NS\nE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1IT\nw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1ITw0NSE8NDUhPDQ1KTocMjyYYktyZ5MMn2JJ+Yo0+SXJpk\nR5L7kpw87LiSxquP/+h6P/CpqronyeHA3UlurqoHB/q8Dzih+7wd+Hr3LekgNfSRR1Xtrqp7uvbP\ngYeA9bO6bQKurhl3AEckWTfs2JLGp9drHkmOA04C7py1aT3wxMDyTl4cMJIOIr2FR5LDgBuAT1bV\ns4372Jxka5Ktv3rm+b5Kk7QMegmPJCuZCY5vV9V35uiyC9gwsHxst+73VNXlVTVVVVOr16zqozRJ\ny6SPuy0BvgE8VFVfmafbNHBed9flVGBfVe0edmxJ49PH3ZbTgA8C9yfZ1q37HPBqgKq6DNgCnAns\nAH4BfKiHcSWN0dDhUVX/BWSRPgV8fNixJE0OnzCV1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTw\nkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ\n1MTwkNTE8JDUxPCQ1MTwkNTE8JDUxPCQ1GTo8EiyIcmtSR5Msj3JJ+boszHJviTbus/Fw44rabxW\n9LCP/cCnquqeJIcDdye5uaoe
nNXvx1X1/h7GkzQBhj7yqKrdVXVP1/458BCwftj9Sppsqar+dpYc\nB9wOvLmqnh1YvxG4AdgJ/BT4dFVtn+PnNwObu8U3Aw/0Vlw/1gJPj7uIAdazsEmrByavpjdU1eEt\nP9hbeCQ5DPgR8I9V9Z1Z2/4E+L+qei7JmcC/VNUJi+xva1VN9VJcTyatJutZ2KTVA5NX0zD19HK3\nJclKZo4svj07OACq6tmqeq5rbwFWJlnbx9iSxqOPuy0BvgE8VFVfmafPMV0/kpzSjbtn2LEljU8f\nd1tOAz4I3J9kW7fuc8CrAarqMuBs4GNJ9gO/BM6pxc+XLu+htr5NWk3Ws7BJqwcmr6bmenq9YCrp\nD4dPmEpqYnhIajIx4ZHkyCQ3J3m0+14zT7/fDjzmPr0MdZyR5OEkO5JcNMf2VUmu67bf2T3bsqyW\nUNMFSZ4amJePLGMtVyZ5Msmcz+BkxqVdrfclOXm5ajmAmkb2esQSX9cY6Rwt2yskVTURH+DLwEVd\n+yLgS/P0e24ZazgEeAw4HjgUuBc4cVafvwUu69rnANct87wspaYLgK+O6PfpXcDJwAPzbD8T+D4Q\n4FTgzgmoaSPwvRHNzzrg5K59OPDIHL9fI52jJdZ0wHM0MUcewCbgqq59FfBXY6jhFGBHVT1eVb8G\nru3qGjRY5/XA6S/chh5jTSNTVbcDexfosgm4umbcARyRZN2YaxqZWtrrGiOdoyXWdMAmKTyOrqrd\nXft/gKPn6bc6ydYkdyTpO2DWA08MLO/kxZP8uz5VtR/YBxzVcx0HWhPAB7pD4OuTbFjGehaz1HpH\n7R1J7k3y/SR/OooBu1Pak4A7Z20a2xwtUBMc4Bz18ZzHkiX5IXDMHJs+P7hQVZVkvnvIr6mqXUmO\nB25Jcn9VPdZ3rQeZ7wLXVNXzSf6GmSOjvxxzTZPkHmb+3LzwesSNwIKvRwyre13jBuCTNfCe1zgt\nUtMBz9FIjzyq6t1V9eY5PjcBP3vh0K37fnKefezqvh8HbmMmRfuyCxj8W/vYbt2cfZKsAF7B8j4t\nu2hNVbWnqp7vFq8A3raM9SxmKXM4UjXi1yMWe12DMczRcrxCMkmnLdPA+V37fOCm2R2SrEmyqmuv\nZebp1tn/bsgw7gJOSPLaJIcyc0F09h2dwTrPBm6p7orTMlm0plnny2cxc047LtPAed0dhVOBfQOn\no2MxytcjunEWfF2DEc/RUmpqmqNRXIFe4hXho4D/BB4Ffggc2a2fAq7o2u8E7mfmjsP9wIeXoY4z\nmbka/Rjw+W7dJcBZXXs18B/ADuAnwPEjmJvFavonYHs3L7cCb1zGWq4BdgO/YeZc/cPAR4GPdtsD\nfK2r9X5gagTzs1hNFw7Mzx3AO5exlr8ACrgP2NZ9zhznHC2xpgOeIx9Pl9Rkkk5bJB1EDA9JTQwP\nSU0MD0lNDA9JTQwPSU0MD0lN/h8p6QsWpuyFjAAAAABJRU5ErkJggg==\n",
1450 | "text/plain": [
1451 | ""
1452 | ]
1453 | },
1454 | "metadata": {},
1455 | "output_type": "display_data"
1456 | },
1457 | {
1458 | "name": "stdout",
1459 | "output_type": "stream",
1460 | "text": [
1461 | "Check frob norm upper vs lower tri 0.0\n"
1462 | ]
1463 | }
1464 | ],
1465 | "source": [
1466 | "# Create a random symmetric matrix\n",
1467 | "X = np.random.randn(3, 3)\n",
1468 | "plt.imshow(X)\n",
1469 | "plt.show()\n",
1470 | "\n",
1471 | "X += X.T\n",
1472 | "plt.imshow(X)\n",
1473 | "plt.show()\n",
1474 | "\n",
1475 | "print('Check frob norm upper vs lower tri', np.linalg.norm(np.triu(X) - np.tril(X).T))"
1476 | ]
1477 | },
1478 | {
1479 | "cell_type": "code",
1480 | "execution_count": 52,
1481 | "metadata": {},
1482 | "outputs": [
1483 | {
1484 | "name": "stdout",
1485 | "output_type": "stream",
1486 | "text": [
1487 | "[[0 1 2]\n",
1488 | " [3 4 5]]\n",
1489 | "[[0 1 2]\n",
1490 | " [3 4 5]]\n"
1491 | ]
1492 | }
1493 | ],
1494 | "source": [
1495 | "# Row-major, C-order\n",
1496 | "# largest axis changes fastest\n",
1497 | "A = np.arange(2 * 3).reshape(2, 3).copy(order='C')\n",
1498 | "\n",
1499 | "# Row-major, Fortran-order\n",
1500 | "# smallest axis changes fastest\n",
1501 | "# GOTCHA: many numpy funcitons assume C ordering\n",
1502 | "B = np.arange(2 * 3).reshape(2, 3).copy(order='F')\n",
1503 | "\n",
1504 | "# Differences in representation don't manifest in abstraction\n",
1505 | "print(A)\n",
1506 | "print(B)"
1507 | ]
1508 | },
1509 | {
1510 | "cell_type": "code",
1511 | "execution_count": 53,
1512 | "metadata": {},
1513 | "outputs": [
1514 | {
1515 | "name": "stdout",
1516 | "output_type": "stream",
1517 | "text": [
1518 | "[0 1 2 3 4 5]\n",
1519 | "[0 3 1 4 2 5]\n",
1520 | "[[0 3 1]\n",
1521 | " [4 2 5]]\n",
1522 | "[[0 1 2]\n",
1523 | " [3 4 5]]\n"
1524 | ]
1525 | }
1526 | ],
1527 | "source": [
1528 | "# Array manipulation functions with order option\n",
1529 | "# will use C/F ordering, but this is independent of the underlying layout\n",
1530 | "print(A.ravel())\n",
1531 | "print(A.ravel(order='F'))\n",
1532 | "\n",
1533 | "# Reshape ravels an array, then folds back into shape, according to the given order\n",
1534 | "# Note reshape can infer one dimension; we leave it as -1.\n",
1535 | "print(A.ravel(order='F').reshape(-1, 3))\n",
1536 | "print(A.ravel(order='F').reshape(-1, 3, order='F'))"
1537 | ]
1538 | },
1539 | {
1540 | "cell_type": "code",
1541 | "execution_count": 54,
1542 | "metadata": {},
1543 | "outputs": [
1544 | {
1545 | "name": "stdout",
1546 | "output_type": "stream",
1547 | "text": [
1548 | "140675174489296 140673647644144 140675174489296\n"
1549 | ]
1550 | }
1551 | ],
1552 | "source": [
1553 | "# GOTCHA: ravel will copy the array so that everything is contiguous\n",
1554 | "# if the order differs\n",
1555 | "print(id(A.base), id(A.ravel().base), id(A.ravel(order='F').base))"
1556 | ]
1557 | },
1558 | {
1559 | "cell_type": "markdown",
1560 | "metadata": {},
1561 | "source": [
1562 | "# Ufuncs and Broadcasting\n",
1563 | "\n",
1564 | "[doc](https://docs.scipy.org/doc/numpy-dev/reference/ufuncs.html)"
1565 | ]
1566 | },
1567 | {
1568 | "cell_type": "code",
1569 | "execution_count": 57,
1570 | "metadata": {},
1571 | "outputs": [
1572 | {
1573 | "name": "stdout",
1574 | "output_type": "stream",
1575 | "text": [
1576 | "[0 1 2 3 4 5]\n",
1577 | "[1 1 1 2 2 2]\n",
1578 | "[1 2 3 5 6 7]\n",
1579 | "[1 2 3 5 6 7]\n"
1580 | ]
1581 | }
1582 | ],
1583 | "source": [
1584 | "# A ufunc is the most common way to modify arrays\n",
1585 | "\n",
1586 | "# In its simplest form, an n-ary ufunc takes in n numpy arrays\n",
1587 | "# of the same shape, and applies some standard operation to \"parallel elements\"\n",
1588 | "\n",
1589 | "a = np.arange(6)\n",
1590 | "b = np.repeat([1, 2], 3)\n",
1591 | "print(a)\n",
1592 | "print(b)\n",
1593 | "print(a + b)\n",
1594 | "print(np.add(a, b))"
1595 | ]
1596 | },
1597 | {
1598 | "cell_type": "code",
1599 | "execution_count": 58,
1600 | "metadata": {},
1601 | "outputs": [
1602 | {
1603 | "name": "stdout",
1604 | "output_type": "stream",
1605 | "text": [
1606 | "A (2, 3)\n",
1607 | "[[0 1 2]\n",
1608 | " [3 4 5]]\n",
1609 | "\n",
1610 | "b (2,)\n",
1611 | "[0 1]\n",
1612 | "\n",
1613 | "c (3,)\n",
1614 | "[0 1 2]\n",
1615 | "\n"
1616 | ]
1617 | }
1618 | ],
1619 | "source": [
1620 | "# If any of the arguments are of lower dimension, they're prepended with 1\n",
1621 | "# Any arguments that have dimension 1 are repeated along that axis\n",
1622 | "\n",
1623 | "A = np.arange(2 * 3).reshape(2, 3)\n",
1624 | "b = np.arange(2)\n",
1625 | "c = np.arange(3)\n",
1626 | "for i in ['A', 'b', 'c']:\n",
1627 | " display(i)"
1628 | ]
1629 | },
1630 | {
1631 | "cell_type": "code",
1632 | "execution_count": 59,
1633 | "metadata": {},
1634 | "outputs": [
1635 | {
1636 | "name": "stdout",
1637 | "output_type": "stream",
1638 | "text": [
1639 | "A * c (2, 3)\n",
1640 | "[[ 0 1 4]\n",
1641 | " [ 0 4 10]]\n",
1642 | "\n",
1643 | "c.reshape(1, 3) (1, 3)\n",
1644 | "[[0 1 2]]\n",
1645 | "\n",
1646 | "np.repeat(c.reshape(1, 3), 2, axis=0) (2, 3)\n",
1647 | "[[0 1 2]\n",
1648 | " [0 1 2]]\n",
1649 | "\n"
1650 | ]
1651 | }
1652 | ],
1653 | "source": [
1654 | "# On the right, broadcasting rules will automatically make the conversion\n",
1655 | "# of c, which has shape (3,) to shape (1, 3)\n",
1656 | "display('A * c')\n",
1657 | "display('c.reshape(1, 3)')\n",
1658 | "display('np.repeat(c.reshape(1, 3), 2, axis=0)')"
1659 | ]
1660 | },
1661 | {
1662 | "cell_type": "code",
1663 | "execution_count": 60,
1664 | "metadata": {},
1665 | "outputs": [
1666 | {
1667 | "name": "stdout",
1668 | "output_type": "stream",
1669 | "text": [
1670 | "np.diag(c) (3, 3)\n",
1671 | "[[0 0 0]\n",
1672 | " [0 1 0]\n",
1673 | " [0 0 2]]\n",
1674 | "\n",
1675 | "A.dot(np.diag(c)) (2, 3)\n",
1676 | "[[ 0 1 4]\n",
1677 | " [ 0 4 10]]\n",
1678 | "\n",
1679 | "A * c (2, 3)\n",
1680 | "[[ 0 1 4]\n",
1681 | " [ 0 4 10]]\n",
1682 | "\n"
1683 | ]
1684 | }
1685 | ],
1686 | "source": [
1687 | "display('np.diag(c)')\n",
1688 | "display('A.dot(np.diag(c))')\n",
1689 | "display('A * c')"
1690 | ]
1691 | },
1692 | {
1693 | "cell_type": "code",
1694 | "execution_count": 61,
1695 | "metadata": {
1696 | "collapsed": true
1697 | },
1698 | "outputs": [],
1699 | "source": [
1700 | "# GOTCHA: this won't compile your code to C: it will just make a slow convenience wrapper\n",
1701 | "demo = np.frompyfunc('f({}, {})'.format, 2, 1)"
1702 | ]
1703 | },
1704 | {
1705 | "cell_type": "code",
1706 | "execution_count": 62,
1707 | "metadata": {},
1708 | "outputs": [
1709 | {
1710 | "name": "stdout",
1711 | "output_type": "stream",
1712 | "text": [
1713 | "A (2, 3)\n",
1714 | "[[0 1 2]\n",
1715 | " [3 4 5]]\n",
1716 | "\n",
1717 | "b (2,)\n",
1718 | "[0 1]\n",
1719 | "\n",
1720 | "ValueError!\n",
1721 | "operands could not be broadcast together with shapes (2,3) (2,) \n"
1722 | ]
1723 | }
1724 | ],
1725 | "source": [
1726 | "# GOTCHA: common broadcasting mistake -- append instead of prepend\n",
1727 | "display('A')\n",
1728 | "display('b')\n",
1729 | "try:\n",
1730 | " demo(A, b) # can't prepend to (2,) with 1 to get something compatible with (2, 3)\n",
1731 | "except ValueError as e:\n",
1732 | " print('ValueError!')\n",
1733 | " print(e)"
1734 | ]
1735 | },
1736 | {
1737 | "cell_type": "code",
1738 | "execution_count": 63,
1739 | "metadata": {},
1740 | "outputs": [
1741 | {
1742 | "name": "stdout",
1743 | "output_type": "stream",
1744 | "text": [
1745 | "b[:, np.newaxis] (2, 1)\n",
1746 | "[[0]\n",
1747 | " [1]]\n",
1748 | "\n",
1749 | "np.repeat(b[:, np.newaxis], 3, axis=1) (2, 3)\n",
1750 | "[[0 0 0]\n",
1751 | " [1 1 1]]\n",
1752 | "\n",
1753 | "demo(A, b[:, np.newaxis]) (2, 3)\n",
1754 | "[['f(0, 0)' 'f(1, 0)' 'f(2, 0)']\n",
1755 | " ['f(3, 1)' 'f(4, 1)' 'f(5, 1)']]\n",
1756 | "\n",
1757 | "demo(b[:, np.newaxis], A) (2, 3)\n",
1758 | "[['f(0, 0)' 'f(0, 1)' 'f(0, 2)']\n",
1759 | " ['f(1, 3)' 'f(1, 4)' 'f(1, 5)']]\n",
1760 | "\n"
1761 | ]
1762 | }
1763 | ],
1764 | "source": [
1765 | "# np.newaxis adds a 1 in the corresponding axis\n",
1766 | "display('b[:, np.newaxis]')\n",
1767 | "display('np.repeat(b[:, np.newaxis], 3, axis=1)')\n",
1768 | "display('demo(A, b[:, np.newaxis])')\n",
1769 | "# note broadcasting rules are invariant to order\n",
1770 | "# even if the ufunc isn't \n",
1771 | "display('demo(b[:, np.newaxis], A)')"
1772 | ]
1773 | },
1774 | {
1775 | "cell_type": "code",
1776 | "execution_count": 64,
1777 | "metadata": {},
1778 | "outputs": [
1779 | {
1780 | "name": "stdout",
1781 | "output_type": "stream",
1782 | "text": [
1783 | "b (2,)\n",
1784 | "[0 1]\n",
1785 | "\n",
1786 | "np.diag(b) (2, 2)\n",
1787 | "[[0 0]\n",
1788 | " [0 1]]\n",
1789 | "\n",
1790 | "b[:, np.newaxis] * A (2, 3)\n",
1791 | "[[0 0 0]\n",
1792 | " [3 4 5]]\n",
1793 | "\n",
1794 | "np.diag(b).dot(A) (2, 3)\n",
1795 | "[[0 0 0]\n",
1796 | " [3 4 5]]\n",
1797 | "\n"
1798 | ]
1799 | }
1800 | ],
1801 | "source": [
1802 | "# Using broadcasting, we can do cheap diagonal matrix multiplication\n",
1803 | "display('b')\n",
1804 | "display('np.diag(b)')\n",
1805 | "# without representing the full diagonal matrix.\n",
1806 | "display('b[:, np.newaxis] * A')\n",
1807 | "display('np.diag(b).dot(A)')"
1808 | ]
1809 | },
1810 | {
1811 | "cell_type": "code",
1812 | "execution_count": 65,
1813 | "metadata": {},
1814 | "outputs": [
1815 | {
1816 | "name": "stdout",
1817 | "output_type": "stream",
1818 | "text": [
1819 | "demo.outer(a, b) (4, 4)\n",
1820 | "[['f(0, 4)' 'f(0, 5)' 'f(0, 6)' 'f(0, 7)']\n",
1821 | " ['f(1, 4)' 'f(1, 5)' 'f(1, 6)' 'f(1, 7)']\n",
1822 | " ['f(2, 4)' 'f(2, 5)' 'f(2, 6)' 'f(2, 7)']\n",
1823 | " ['f(3, 4)' 'f(3, 5)' 'f(3, 6)' 'f(3, 7)']]\n",
1824 | "\n",
1825 | "np.bitwise_or.accumulate(b) (4,)\n",
1826 | "[4 5 7 7]\n",
1827 | "\n",
1828 | "np.bitwise_or.reduce(b) ()\n",
1829 | "7\n",
1830 | "\n"
1831 | ]
1832 | }
1833 | ],
1834 | "source": [
1835 | "# (Binary) ufuncs get lots of efficient implementation stuff for free\n",
1836 | "a = np.arange(4)\n",
1837 | "b = np.arange(4, 8)\n",
1838 | "display('demo.outer(a, b)')\n",
1839 | "display('np.bitwise_or.accumulate(b)')\n",
1840 | "display('np.bitwise_or.reduce(b)') # last result of accumulate"
1841 | ]
1842 | },
1843 | {
1844 | "cell_type": "code",
1845 | "execution_count": 66,
1846 | "metadata": {},
1847 | "outputs": [
1848 | {
1849 | "name": "stdout",
1850 | "output_type": "stream",
1851 | "text": [
1852 | "accumulation speed comparison\n",
1853 | " format: mean seconds (standard error) 5 runs\n",
1854 | " manual_accum 2.83e-01 (1.66e-02)\n",
1855 | " np_accum 1.85e-03 (9.30e-06)\n",
1856 | " improvement ratio 152.8\n"
1857 | ]
1858 | }
1859 | ],
1860 | "source": [
1861 | "def setup(): return np.arange(10 ** 6)\n",
1862 | "\n",
1863 | "def manual_accum(x):\n",
1864 | " res = np.zeros_like(x)\n",
1865 | " for i, v in enumerate(x):\n",
1866 | " res[i] = res[i-1] | v\n",
1867 | " \n",
1868 | "def np_accum(x):\n",
1869 | " np.bitwise_or.accumulate(x)\n",
1870 | " \n",
1871 | "print('accumulation speed comparison')\n",
1872 | "compare_times(manual_accum, np_accum, setup, setup)"
1873 | ]
1874 | },
1875 | {
1876 | "cell_type": "markdown",
1877 | "metadata": {},
1878 | "source": [
1879 | "# Aliasing\n",
1880 | "\n",
1881 | "You can save on allocations and copies by providing the output array to copy into.\n",
1882 | "\n",
1883 | "**Aliasing** occurs when all or part of the input is repeated in the output\n",
1884 | "\n",
1885 | "[Ufuncs allow aliasing](https://github.com/numpy/numpy/pull/8043)"
1886 | ]
1887 | },
1888 | {
1889 | "cell_type": "code",
1890 | "execution_count": 67,
1891 | "metadata": {},
1892 | "outputs": [
1893 | {
1894 | "name": "stdout",
1895 | "output_type": "stream",
1896 | "text": [
1897 | "[[3 5 0]\n",
1898 | " [7 7 9]\n",
1899 | " [4 0 8]]\n",
1900 | "[[ 6 12 4]\n",
1901 | " [12 14 9]\n",
1902 | " [ 4 9 16]]\n"
1903 | ]
1904 | }
1905 | ],
1906 | "source": [
1907 | "# Example: generating random symmetric matrices\n",
1908 | "A = np.random.randint(0, 10, size=(3,3))\n",
1909 | "print(A)\n",
1910 | "A += A.T # this operation is WELL-DEFINED, even though A is changing\n",
1911 | "print(A)"
1912 | ]
1913 | },
1914 | {
1915 | "cell_type": "code",
1916 | "execution_count": 68,
1917 | "metadata": {},
1918 | "outputs": [
1919 | {
1920 | "data": {
1921 | "text/plain": [
1922 | "array([[12, 24, 8],\n",
1923 | " [24, 28, 18],\n",
1924 | " [ 8, 18, 32]])"
1925 | ]
1926 | },
1927 | "execution_count": 68,
1928 | "metadata": {},
1929 | "output_type": "execute_result"
1930 | }
1931 | ],
1932 | "source": [
1933 | "# Above is sugar for\n",
1934 | "np.add(A, A, out=A)"
1935 | ]
1936 | },
1937 | {
1938 | "cell_type": "code",
1939 | "execution_count": 69,
1940 | "metadata": {},
1941 | "outputs": [
1942 | {
1943 | "name": "stdout",
1944 | "output_type": "stream",
1945 | "text": [
1946 | "[0 1 2 3 4 5 6 7 8 9]\n",
1947 | "[-5 -5 -5 -5 -5 5 6 7 8 9]\n"
1948 | ]
1949 | }
1950 | ],
1951 | "source": [
1952 | "x = np.arange(10)\n",
1953 | "print(x)\n",
1954 | "np.subtract(x[:5], x[5:], x[:5])\n",
1955 | "print(x)"
1956 | ]
1957 | },
1958 | {
1959 | "cell_type": "markdown",
1960 | "metadata": {},
1961 | "source": [
1962 | "**[GOTCHA]: If it's not a ufunc, aliasing is VERY BAD**: [search for \"In general the rule\"](https://github.com/numpy/numpy/issues/8440)."
1963 | ]
1964 | },
1965 | {
1966 | "cell_type": "code",
1967 | "execution_count": 70,
1968 | "metadata": {},
1969 | "outputs": [
1970 | {
1971 | "name": "stdout",
1972 | "output_type": "stream",
1973 | "text": [
1974 | "output array is not acceptable (must have the right type, nr dimensions, and be a C-Array)\n"
1975 | ]
1976 | }
1977 | ],
1978 | "source": [
1979 | "x = np.arange(2 * 2).reshape(2, 2)\n",
1980 | "try:\n",
1981 | " x.dot(np.arange(2), out=x)\n",
1982 | " # GOTCHA: some other functions won't warn you!\n",
1983 | "except ValueError as e:\n",
1984 | " print(e)"
1985 | ]
1986 | },
1987 | {
1988 | "cell_type": "markdown",
1989 | "metadata": {
1990 | "collapsed": true
1991 | },
1992 | "source": [
1993 | "# Configuration and Hardware Acceleration\n",
1994 | "\n",
1995 | "NumPy works quickly because it _can_ perform vectorization by linking to C functions that were built for **your** particular system.\n",
1996 | "\n",
1997 | "**[GOTCHA] There are two different high-level ways in which NumPy uses hardware to accelerate your computations**.\n",
1998 | "\n",
1999 | "### Ufunc\n",
2000 | "\n",
2001 | "When you perform a built-in ufunc:\n",
2002 | "* The corresponding C function is called directly from the Python interpreter\n",
2003 | "* It is not **parallelized**\n",
2004 | "* It may be **vectorized**\n",
2005 | "\n",
2006 | "In general, it is tough to check whether your code is using vectorized instructions (or, in particular, which instruction set is being used, like SSE or AVX512.\n",
2007 | "\n",
2008 | "* If you installed from pip or Anaconda, you're probably not vectorized.\n",
2009 | "* If you compiled NumPy yourself (and select the correct flags), you're probably fine.\n",
2010 | "* If you're using the [Numba](http://numba.pydata.org/) JIT, then you'll be vectorized too.\n",
2011 | "* If have access to `icc` and MKL, then you can use [the Intel guide](https://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl) or [Anaconda](https://docs.continuum.io/mkl-optimizations/)\n",
2012 | "\n",
2013 | "### BLAS\n",
2014 | "\n",
2015 | "These are optimized linear algebra routines, and are only called when you invoke operations that rely on these routines.\n",
2016 | "\n",
2017 | "This won't make your vectors add faster (first, NumPy doesn't ask BLAS to nor could it, since bandwidth-limited ops are not the focus of BLAS). It will help with:\n",
2018 | "* Matrix multiplication (`np.dot`)\n",
2019 | "* Linear algebra (SVD, eigenvalues, etc) (`np.linalg`)\n",
2020 | "* Similar stuff from other libraries that accept NumPy arrays may use BLAS too.\n",
2021 | "\n",
2022 | "There are different implementations for BLAS. Some are free, and some are proprietary and built for specific chips (MKL). You can check which version you're using [this way](http://stackoverflow.com/questions/9000164/how-to-check-blas-lapack-linkage-in-numpy-scipy), though you can only be sure [by inspecting the binaries manually](http://stackoverflow.com/questions/37184618/find-out-if-which-blas-library-is-used-by-numpy).\n",
2023 | "\n",
2024 | "Any NumPy routine that uses **BLAS** will use, by default **ALL AVAILABLE CORES**. This is a departure from the standard parallelism of `ufunc` or other numpy transformations. You can change BLAS parallelism with the `OMP_NUM_THREADS` environment variable."
2025 | ]
2026 | },
2027 | {
2028 | "cell_type": "markdown",
2029 | "metadata": {},
2030 | "source": [
2031 | "# Stuff to Avoid\n",
2032 | "\n",
2033 | "NumPy has some cruft left over due to backwards compatibility. There are some edge cases when you would (maybe) use these things (but probably not). In general, avoid them:\n",
2034 | "\n",
2035 | "* `np.chararray`: use an `np.ndarray` with `unicode` `dtype`\n",
2036 | "* `np.MaskedArrays`: use a boolean advanced index\n",
2037 | "* `np.matrix`: use a 2-dimensional `np.ndarray`"
2038 | ]
2039 | },
2040 | {
2041 | "cell_type": "markdown",
2042 | "metadata": {},
2043 | "source": [
2044 | "# Stuff Not Mentioned\n",
2045 | "\n",
2046 | "* [General array manipulation](https://docs.scipy.org/doc/numpy-dev/reference/routines.array-manipulation.html)\n",
2047 | " * Selection-related convenience methods `np.sort, np.unique`\n",
2048 | " * Array composition and decomposition `np.split, np.stack`\n",
2049 | " * Reductions many-to-1 `np.sum, np.prod, np.count_nonzero`\n",
2050 | " * Many-to-many array transformations `np.fft, np.linalg.cholesky`\n",
2051 | "* [String formatting](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array2string.html) `np.array2string`\n",
2052 | "* [IO](https://docs.scipy.org/doc/numpy/reference/routines.io.html#string-formatting) `np.loadtxt, np.savetxt`\n",
2053 | "* [Polynomial interpolation](https://docs.scipy.org/doc/numpy/reference/routines.polynomials.html) and related [scipy integration](https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html)\n",
2054 | "* [Equality testing](https://docs.scipy.org/doc/numpy/reference/routines.testing.html)"
2055 | ]
2056 | }
2057 | ],
2058 | "metadata": {
2059 | "celltoolbar": "Raw Cell Format",
2060 | "kernelspec": {
2061 | "display_name": "Python 2",
2062 | "language": "python",
2063 | "name": "python2"
2064 | },
2065 | "language_info": {
2066 | "codemirror_mode": {
2067 | "name": "ipython",
2068 | "version": 2
2069 | },
2070 | "file_extension": ".py",
2071 | "mimetype": "text/x-python",
2072 | "name": "python",
2073 | "nbconvert_exporter": "python",
2074 | "pygments_lexer": "ipython2",
2075 | "version": "2.7.14"
2076 | }
2077 | },
2078 | "nbformat": 4,
2079 | "nbformat_minor": 2
2080 | }
2081 |
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/mlp1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/mlp1.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/mlp2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/mlp2.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/ndarrayrep.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/ndarrayrep.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/nestlist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/nestlist.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/nonsteeplearn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/nonsteeplearn.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/nparr.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/nparr.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/numba.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/numba.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/numpylogo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/numpylogo.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/pandas.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/pandas.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/pylist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/pylist.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/sklearn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/sklearn.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/slices.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/slices.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/stan.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/stan.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/steeplearn.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/steeplearn.jpg
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/tf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/tf.png
--------------------------------------------------------------------------------
/1_NumpyAcceleration/assets/vecproc.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/1_NumpyAcceleration/assets/vecproc.gif
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/CythonAndCtypes.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "collapsed": true
7 | },
8 | "source": [
9 | "# Interfacing Python with compiled code, Cython\n",
10 | "\n",
11 | "Interfacing python with compiled code lets you speed up some critical part of the code.\n",
12 | "There are numerous ways to do this:\n",
13 | "\n",
14 | "#### C API to Python and NumPy \n",
15 | "\n",
16 | "This is a library of C functions and variables that can be used\n",
17 | "to create wrapper functions that together with the targeted C code can be compiled into fast\n",
18 | "binary Python modules. See: https://docs.python.org/3/extending/extending.html for more information.\n",
19 | "\n",
20 | "#### ctypes module and attribute \n",
21 | "\n",
22 | "The ctypes module from the Python standard library and the\n",
23 | "ctypes attribute of NumPy arrays can be used to create a Python wrapper for an existing\n",
24 | "dynamically-loaded library written in C.\n",
25 | "\n",
26 | "#### Cython \n",
27 | "\n",
28 | "This facilitates the writing of C extensions for Python.\n",
29 | "weave This allows the inclusion of C code in Python programs.\n",
30 | "\n",
31 | "#### SWIG \n",
32 | "\n",
33 | "This automates the process of writing wrappers in C for C functions. SWIG is easy to\n",
34 | "use if the argument list is to be limited to builtin Python types but can be cumbersome if\n",
35 | "efficient conversion to NumPy arrays is desired. The difficulty is due to the need to match\n",
36 | "C array parameters to predefined patterns in the numpy.\n",
37 | "\n",
38 | "#### f2py \n",
39 | "\n",
40 | "This is for interfacing to Fortran.\n",
41 | "See http://www.scipy.org/Topical_Software for links to some of these. Presented here is the\n",
42 | "use of ctypes. Unlike the use of the C API or SWIG, it permits the interface to be written in\n",
43 | "Python.\n",
44 | "\n",
45 | "\n",
46 | "\n",
47 | "Let us start by writing some C code. The dot product of two vectors for instance:\n",
48 | "\n",
49 | "```C\n",
50 | "double dot_product(double v[], double u[], int n)\n",
51 | "{\n",
52 | " double result = 0.0;\n",
53 | " for (int i = 0; i < n; i++)\n",
54 | " result += v[i]*u[i];\n",
55 | " return result;\n",
56 | "}\n",
57 | "```\n",
58 | "\n",
59 | "Next we compile it, and build a shared object (please open another terminal window, not in the notebook):\n",
60 | "\n",
61 | "```bash\n",
62 | "gcc -c -Wall -Werror -fpic my_dot.c \n",
63 | "gcc -shared -o my_dot.so my_dot.o\n",
64 | "```\n",
65 | "\n",
66 | "The ctypes module of the Python standard library provides definitions of fundamental data types that can be passed to C programs. For example:\n"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": 1,
72 | "metadata": {
73 | "collapsed": false
74 | },
75 | "outputs": [
76 | {
77 | "name": "stdout",
78 | "output_type": "stream",
79 | "text": [
80 | "\n",
81 | "\n"
82 | ]
83 | }
84 | ],
85 | "source": [
86 | "import ctypes as C\n",
87 | "#these types would have names like C.c int and C.c double.\n",
88 | "#They can be used constructors, e.g.,\n",
89 | "x = C.c_double(2.71828)\n",
90 | "#for which x.value returns the Python object.\n",
91 | "print(type(2.71828))\n",
92 | "print(type(x))"
93 | ]
94 | },
95 | {
96 | "cell_type": "code",
97 | "execution_count": 2,
98 | "metadata": {
99 | "collapsed": false
100 | },
101 | "outputs": [
102 | {
103 | "name": "stdout",
104 | "output_type": "stream",
105 | "text": [
106 | "<__main__.LP_c_double object at 0x7fb63427c9e0>\n",
107 | "c_double(2.71828)\n"
108 | ]
109 | }
110 | ],
111 | "source": [
112 | "#Fundamental types can be composed to get new types, e.g.,\n",
113 | "xp = C.POINTER(C.c_double)(); \n",
114 | "xp.contents = x\n",
115 | "print(xp)\n",
116 | "print(x)"
117 | ]
118 | },
119 | {
120 | "cell_type": "code",
121 | "execution_count": 3,
122 | "metadata": {
123 | "collapsed": true
124 | },
125 | "outputs": [],
126 | "source": [
127 | "#or simply xp = C.POINTER(C.c_double)(x) . You can change the value of x using\n",
128 | "xp[0] = 3.14159"
129 | ]
130 | },
131 | {
132 | "cell_type": "code",
133 | "execution_count": 4,
134 | "metadata": {
135 | "collapsed": false
136 | },
137 | "outputs": [
138 | {
139 | "name": "stdout",
140 | "output_type": "stream",
141 | "text": [
142 | "[1.0, 2.3, 4.0, 5.0]\n",
143 | "1.0\n"
144 | ]
145 | }
146 | ],
147 | "source": [
148 | "#Array types can be created by \\multiplying\" a ctype by a positive integer, e.g.,\n",
149 | "ylist = [1.,2.3,4.,5.]\n",
150 | "n = len(ylist)\n",
151 | "y = (C.c_double*n)()\n",
152 | "y[:] = ylist\n",
153 | "#or simply\n",
154 | "y = (C.c_double*n)(*ylist)\n",
155 | "print(ylist)\n",
156 | "print(y[0])"
157 | ]
158 | },
159 | {
160 | "cell_type": "markdown",
161 | "metadata": {},
162 | "source": [
163 | "The asterisk is a Python operator for expanding the elements of a sequence into the arguments of a\n",
164 | "function. Convert a C array back to a Python value or list by indexing it with an int or a slice.\n",
165 | "The ctypes module has a utility subpackage to assist in locating a dynamically-loaded library,\n",
166 | "e.g.,"
167 | ]
168 | },
169 | {
170 | "cell_type": "code",
171 | "execution_count": 6,
172 | "metadata": {
173 | "collapsed": false
174 | },
175 | "outputs": [
176 | {
177 | "name": "stdout",
178 | "output_type": "stream",
179 | "text": [
180 | "\n"
181 | ]
182 | }
183 | ],
184 | "source": [
185 | "import ctypes.util # an explicit import is necessary\n",
186 | "C.util.find_library('my_dot')\n",
187 | "#locates the C math library. \n",
188 | "#For loading a library there are constructors, e.g.,\n",
189 | "myDL = C.CDLL('./my_dot.so')\n",
190 | "print(myDL)\n",
191 | "#which makes my a module-like object (a CDLL object to be precise)."
192 | ]
193 | },
194 | {
195 | "cell_type": "markdown",
196 | "metadata": {},
197 | "source": [
198 | "Similar to a Python module, myDL has as attributes function-like objects (C function pointers to\n",
199 | "be precise) which have the same names as the C functions in the library, e.g., myDL.dot. These\n",
200 | "function-like objects themselves have an attribute restype, which must be used to declare the type\n",
201 | "of its result. For a C function whose result type is void, use None. \n",
202 | "\n",
203 | "Here is a full example:"
204 | ]
205 | },
206 | {
207 | "cell_type": "code",
208 | "execution_count": 9,
209 | "metadata": {
210 | "collapsed": false
211 | },
212 | "outputs": [
213 | {
214 | "name": "stdout",
215 | "output_type": "stream",
216 | "text": [
217 | "1 loop, best of 3: 621 ms per loop\n"
218 | ]
219 | }
220 | ],
221 | "source": [
222 | "from ctypes import CDLL, c_int, c_double\n",
223 | "mydot = CDLL('./my_dot.so').dot_product\n",
224 | "def dot(vec1, vec2): # vec1, vec2 are Python lists\n",
225 | " n = len(vec1)\n",
226 | " mydot.restype = c_double\n",
227 | " return mydot((c_double*n)(*vec1), (c_double*n)(*vec2), c_int(n))\n",
228 | "\n",
229 | "vec1 = [x for x in range(1000000)]\n",
230 | "vec2 = [x for x in range(1000000)]\n",
231 | "%timeit dot(vec1,vec2)"
232 | ]
233 | },
234 | {
235 | "cell_type": "markdown",
236 | "metadata": {},
237 | "source": [
238 | "The arguments should be explicitly converted to the appropriate C type. \n",
239 | "The result is automatically converted to a regular Python type, based on the restype attribute.\n",
240 | "\n",
241 | "**Warning.** If you use the extension .so for the name of a file, do not make its stem the same as a\n",
242 | ".py file in the same directory, e.g., do not have both a funcs.py and a funcs.so. \n",
243 | "\n",
244 | "\n",
245 | "### Repeat the same in Cython\n",
246 | "\n",
247 | "The fundamental nature of Cython can be summed up as follows: Cython is Python with C data types.\n",
248 | "As Cython can accept almost any valid python source file, one of the hardest things in getting started is just figuring out how to compile your extension.\n",
249 | "\n",
250 | "Here is the bare Python implementation of the dot product of two lists/vectors:"
251 | ]
252 | },
253 | {
254 | "cell_type": "code",
255 | "execution_count": 10,
256 | "metadata": {
257 | "collapsed": false
258 | },
259 | "outputs": [
260 | {
261 | "name": "stdout",
262 | "output_type": "stream",
263 | "text": [
264 | "CPU times: user 173 ms, sys: 0 ns, total: 173 ms\n",
265 | "Wall time: 172 ms\n"
266 | ]
267 | },
268 | {
269 | "data": {
270 | "text/plain": [
271 | "3.3333283333312755e+17"
272 | ]
273 | },
274 | "execution_count": 10,
275 | "metadata": {},
276 | "output_type": "execute_result"
277 | }
278 | ],
279 | "source": [
280 | "#def frange(x, y, jump):\n",
281 | "# while x < y:\n",
282 | "# yield x\n",
283 | "# x += jump\n",
284 | " \n",
285 | "def dot_product(vec1,vec2):\n",
286 | " result = 0.0\n",
287 | " n = len(vec1)\n",
288 | " for i in range(n):\n",
289 | " result += vec1[i]*vec2[i]\n",
290 | " return result\n",
291 | "\n",
292 | "vec1 = [x for x in range(1000000)]\n",
293 | "vec2 = [x for x in range(1000000)]\n",
294 | "%time dot_product(vec1,vec2)"
295 | ]
296 | },
297 | {
298 | "cell_type": "markdown",
299 | "metadata": {},
300 | "source": [
301 | "### Prepare cython_dot.pyx file\n",
302 | "\n",
303 | "Let us take the dot_product function and put it in the .pyx file:\n",
304 | "\n",
305 | "```python\n",
306 | "cimport cython\n",
307 | "\n",
308 | "\n",
309 | "@cython.boundscheck(False) # Will not check indexing, so ensure indices are valid and non-negative\n",
310 | "@cython.wraparound(False) # Will not allow negative indexing\n",
311 | "@cython.cdivision(True) # Will not check for division by zero\n",
312 | "def dot_product(vec1,vec2):\n",
313 | " cdef float result = 0.0\n",
314 | " cdef unsigned int n = len(vec1)\n",
315 | "\n",
316 | " for i in range(n):\n",
317 | " result += vec1[i]*vec2[i]\n",
318 | "\n",
319 | " return result\n",
320 | "```\n",
321 | "\n",
322 | "### Prepare cython_setup.py file\n",
323 | "\n",
324 | "We would need a setup file in addition to that:\n",
325 | "\n",
326 | "```python\n",
327 | "from distutils.core import setup\n",
328 | "from Cython.Build import cythonize\n",
329 | "\n",
330 | "setup(\n",
331 | " name = 'my dot',\n",
332 | " ext_modules = cythonize(\"cython_dot.pyx\")\n",
333 | ")\n",
334 | "```\n",
335 | "\n",
336 | "save it in cython_setup.py file and build:\n",
337 | "\n",
338 | "```bash\n",
339 | "python cython_setup.py build_ext --inplace\n",
340 | "```\n",
341 | "\n",
342 | "you will now see the `cython_dot.so` file appear in your folder."
343 | ]
344 | },
345 | {
346 | "cell_type": "code",
347 | "execution_count": 11,
348 | "metadata": {
349 | "collapsed": false
350 | },
351 | "outputs": [
352 | {
353 | "name": "stdout",
354 | "output_type": "stream",
355 | "text": [
356 | "CPU times: user 68 ms, sys: 1 ms, total: 69 ms\n",
357 | "Wall time: 68.9 ms\n"
358 | ]
359 | },
360 | {
361 | "data": {
362 | "text/plain": [
363 | "3.3338099651261235e+17"
364 | ]
365 | },
366 | "execution_count": 11,
367 | "metadata": {},
368 | "output_type": "execute_result"
369 | }
370 | ],
371 | "source": [
372 | "from cython_dot import dot_product\n",
373 | "vec1 = [x for x in range(1000000)]\n",
374 | "vec2 = [x for x in range(1000000)]\n",
375 | "%time dot_product(vec1,vec2)"
376 | ]
377 | },
378 | {
379 | "cell_type": "markdown",
380 | "metadata": {
381 | "collapsed": true
382 | },
383 | "source": [
384 | "### Use ndarray in Cython code\n",
385 | "\n",
386 | "How can we make Cython implementation even faster? use less python's generic data structures and more numpy arrays!\n",
387 | "Change the cython_dot.pyx file to look like:\n",
388 | "\n",
389 | "```python\n",
390 | "cimport cython\n",
391 | "import numpy as np\n",
392 | "cimport numpy as np\n",
393 | "\n",
394 | "DTYPE = np.float64\n",
395 | "ctypedef np.float64_t DTYPE_t\n",
396 | "\n",
397 | "@cython.boundscheck(False) # Will not check indexing, so ensure indices are valid and non-negative\n",
398 | "@cython.wraparound(False) # Will not allow negative indexing\n",
399 | "@cython.cdivision(True) # Will not check for division by zero\n",
400 | "def dot_product(np.ndarray[DTYPE_t, ndim=1] vec1, np.ndarray[DTYPE_t, ndim=1] vec2):\n",
401 | " cdef float result = 0.0\n",
402 | " cdef unsigned int i\n",
403 | " cdef unsigned int n = vec1.shape[0]\n",
404 | "\n",
405 | " for i in range(n):\n",
406 | " result += vec1[i]*vec2[i]\n",
407 | "\n",
408 | " return result\n",
409 | "``` \n",
410 | "\n",
411 | "and change the cython_setup.py to looks like this:\n",
412 | "\n",
413 | "```python\n",
414 | "#!/usr/bin/env python3\n",
415 | "\n",
416 | "from distutils.core import setup\n",
417 | "from Cython.Build import cythonize\n",
418 | "import numpy as np\n",
419 | "\n",
420 | "setup(\n",
421 | " name = 'my dot',\n",
422 | " ext_modules = cythonize(\"cython_dot2.pyx\"),\n",
423 | " include_dirs = [np.get_include()]\n",
424 | ")\n",
425 | "```\n",
426 | "\n",
427 | "rebuild the shared object, and rerun:"
428 | ]
429 | },
430 | {
431 | "cell_type": "code",
432 | "execution_count": 14,
433 | "metadata": {
434 | "collapsed": false
435 | },
436 | "outputs": [
437 | {
438 | "name": "stdout",
439 | "output_type": "stream",
440 | "text": [
441 | "CPU times: user 4 ms, sys: 0 ns, total: 4 ms\n",
442 | "Wall time: 3.27 ms\n"
443 | ]
444 | },
445 | {
446 | "data": {
447 | "text/plain": [
448 | "3.3338099651261235e+17"
449 | ]
450 | },
451 | "execution_count": 14,
452 | "metadata": {},
453 | "output_type": "execute_result"
454 | }
455 | ],
456 | "source": [
457 | "import numpy as np\n",
458 | "from cython_dot2 import dot_product\n",
459 | "vec1 = np.arange(1000000,dtype=float)\n",
460 | "vec2 = np.arange(1000000,dtype=float)\n",
461 | "%time dot_product(vec1,vec2)"
462 | ]
463 | },
464 | {
465 | "cell_type": "markdown",
466 | "metadata": {},
467 | "source": [
468 | "### Compiler optimization in Cython\n",
469 | "\n",
470 | "Note, that we have also included the compiler optimization flags in our Cython setup file:\n",
471 | "\n",
472 | "```python\n",
473 | "extra_compile_args = [\"-O3\", \"-ffast-math\", \"-march=native\"],\n",
474 | "```\n",
475 | "\n",
476 | "\n",
477 | "### Open MP (demo only)\n",
478 | "\n",
479 | "Finally, we can try improving our Cython code with OpenMP\n",
480 | "https://clang-omp.github.io"
481 | ]
482 | },
483 | {
484 | "cell_type": "code",
485 | "execution_count": 15,
486 | "metadata": {
487 | "collapsed": false
488 | },
489 | "outputs": [
490 | {
491 | "name": "stdout",
492 | "output_type": "stream",
493 | "text": [
494 | "CPU times: user 948 ms, sys: 0 ns, total: 948 ms\n",
495 | "Wall time: 39.8 ms\n"
496 | ]
497 | },
498 | {
499 | "data": {
500 | "text/plain": [
501 | "3.333329615983739e+17"
502 | ]
503 | },
504 | "execution_count": 15,
505 | "metadata": {},
506 | "output_type": "execute_result"
507 | }
508 | ],
509 | "source": [
510 | "import numpy as np\n",
511 | "from cython_dot3 import dot_product\n",
512 | "vec1 = np.arange(1000000,dtype=float)\n",
513 | "vec2 = np.arange(1000000,dtype=float)\n",
514 | "%time dot_product(vec1,vec2)"
515 | ]
516 | },
517 | {
518 | "cell_type": "markdown",
519 | "metadata": {},
520 | "source": [
521 | "Continue on to the Data parallelism exercise: [JoblibMultiprocessing.ipynb](JoblibMultiprocessing.ipynb).
"
522 | ]
523 | },
524 | {
525 | "cell_type": "code",
526 | "execution_count": null,
527 | "metadata": {
528 | "collapsed": true
529 | },
530 | "outputs": [],
531 | "source": []
532 | },
533 | {
534 | "cell_type": "code",
535 | "execution_count": null,
536 | "metadata": {
537 | "collapsed": true
538 | },
539 | "outputs": [],
540 | "source": []
541 | }
542 | ],
543 | "metadata": {
544 | "anaconda-cloud": {},
545 | "kernelspec": {
546 | "display_name": "Python [conda env:BDcourse]",
547 | "language": "python",
548 | "name": "conda-env-BDcourse-py"
549 | },
550 | "language_info": {
551 | "codemirror_mode": {
552 | "name": "ipython",
553 | "version": 2
554 | },
555 | "file_extension": ".py",
556 | "mimetype": "text/x-python",
557 | "name": "python",
558 | "nbconvert_exporter": "python",
559 | "pygments_lexer": "ipython2",
560 | "version": "2.7.12"
561 | }
562 | },
563 | "nbformat": 4,
564 | "nbformat_minor": 0
565 | }
566 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/JoblibMultiprocessing.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Data parallelism with Python\n",
8 | "\n",
9 | "https://docs.python.org/2/library/multiprocessing.html\n",
10 | "\n",
11 | "We have seen how one can use OpenMP and shared memory programming to parallelized python code (well, Cython).\n",
12 | "\n",
13 | "## Multiprocessing library\n",
14 | "\n",
15 | "**multiprocessing** allows to utilize multiple processors on a given machine. It introduces a Pool object which offers a means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (**data parallelism**). \n",
16 | "\n",
17 | "### Pool class\n",
18 | "\n",
19 | "This basic example of data parallelism using the Pool class:"
20 | ]
21 | },
22 | {
23 | "cell_type": "code",
24 | "execution_count": 1,
25 | "metadata": {},
26 | "outputs": [
27 | {
28 | "name": "stdout",
29 | "output_type": "stream",
30 | "text": [
31 | "[1, 4, 9, 16]\n",
32 | "40\n"
33 | ]
34 | }
35 | ],
36 | "source": [
37 | "import multiprocessing as mp\n",
38 | "\n",
39 | "def f(x):\n",
40 | " return x*x\n",
41 | "\n",
42 | "if __name__ == '__main__':\n",
43 | " #specify number of child processes to spawn, use <= number of processes available\n",
44 | " nprocs = mp.cpu_count()\n",
45 | " p = mp.Pool(nprocs)\n",
46 | " print(p.map(f, [1, 2, 3, 4]))\n",
47 | " print(mp.cpu_count())"
48 | ]
49 | },
50 | {
51 | "cell_type": "markdown",
52 | "metadata": {},
53 | "source": [
54 | "### Process and Queue classes\n",
55 | "\n",
56 | "In `multiprocessing`, individual processes are spawned by creating a `Process` object or sub-classing it.\n",
57 | "In the following example we are going to use the `multiprocessing.Queue` class which returns a process shared queue implemented using a pipe and a few locks. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe."
58 | ]
59 | },
60 | {
61 | "cell_type": "code",
62 | "execution_count": 2,
63 | "metadata": {},
64 | "outputs": [
65 | {
66 | "name": "stdout",
67 | "output_type": "stream",
68 | "text": [
69 | "RESULT: Process idx=0 is called 'Worker-41'\n",
70 | "RESULT: Process idx=1 is called 'Worker-42'\n",
71 | "RESULT: Process idx=2 is called 'Worker-43'\n",
72 | "RESULT: Process idx=3 is called 'Worker-44'\n",
73 | "RESULT: Process idx=4 is called 'Worker-45'\n"
74 | ]
75 | }
76 | ],
77 | "source": [
78 | "from multiprocessing import Process, Queue\n",
79 | "\n",
80 | "class Worker(Process):\n",
81 | "\n",
82 | " def __init__(self, queue, idx, data):\n",
83 | " super(Worker, self).__init__()\n",
84 | " self.queue = queue\n",
85 | " self.idx = idx\n",
86 | " self.data = data\n",
87 | "\n",
88 | " def square(self):\n",
89 | " self.data = map(lambda x: x*x, self.data)\n",
90 | " return \"Process idx=%s is called '%s'\" % (self.idx, self.name)\n",
91 | "\n",
92 | " def run(self):\n",
93 | " self.queue.put(self.square())\n",
94 | "\n",
95 | "## Create a list to hold running Worker objects\n",
96 | "worker_processes = list()\n",
97 | "\n",
98 | "if __name__ == \"__main__\":\n",
99 | "\n",
100 | " data = [1,2,3,4] \n",
101 | " q = Queue()\n",
102 | " for i in range(5):\n",
103 | " p=Worker(queue=q, idx=i,data=data)\n",
104 | " worker_processes.append(p)\n",
105 | " p.start()\n",
106 | " for proc in worker_processes:\n",
107 | " proc.join()\n",
108 | " print \"RESULT: %s\" % q.get()"
109 | ]
110 | },
111 | {
112 | "cell_type": "markdown",
113 | "metadata": {},
114 | "source": [
115 | "## Joblib library\n",
116 | "\n",
117 | "https://pythonhosted.org/joblib/\n",
118 | "\n",
119 | "Joblib is a set of tools to provide data parallelism and pipelining in Python. \n",
120 | "\n",
121 | "joblib offers:\n",
122 | " 1. disk-caching of the output values and lazy re-evaluation\n",
123 | " 1. easy simple parallel computing\n",
124 | "\n",
125 | "Let us go back to our dot poroduct example, and we will assume that we need to perfrom this to a list of vectors."
126 | ]
127 | },
128 | {
129 | "cell_type": "code",
130 | "execution_count": 3,
131 | "metadata": {},
132 | "outputs": [
133 | {
134 | "name": "stdout",
135 | "output_type": "stream",
136 | "text": [
137 | "CPU times: user 0 ns, sys: 0 ns, total: 0 ns\n",
138 | "Wall time: 6.91 µs\n"
139 | ]
140 | },
141 | {
142 | "data": {
143 | "text/plain": [
144 | "[3.3338099651261235e+17,\n",
145 | " 3.3338099651261235e+17,\n",
146 | " 3.3338099651261235e+17,\n",
147 | " 3.3338099651261235e+17,\n",
148 | " 3.3338099651261235e+17,\n",
149 | " 3.3338099651261235e+17,\n",
150 | " 3.3338099651261235e+17,\n",
151 | " 3.3338099651261235e+17,\n",
152 | " 3.3338099651261235e+17,\n",
153 | " 3.3338099651261235e+17]"
154 | ]
155 | },
156 | "execution_count": 3,
157 | "metadata": {},
158 | "output_type": "execute_result"
159 | }
160 | ],
161 | "source": [
162 | "import numpy as np\n",
163 | "from cython_dot2 import dot_product\n",
164 | "\n",
165 | "Nvectors = 10\n",
166 | "results = list()\n",
167 | "\n",
168 | "for round in range(Nvectors):\n",
169 | " vec1 = np.arange(1000000,dtype=float)\n",
170 | " vec2 = np.arange(1000000,dtype=float)\n",
171 | " results.append(dot_product(vec1,vec2))\n",
172 | "\n",
173 | "%time results "
174 | ]
175 | },
176 | {
177 | "cell_type": "code",
178 | "execution_count": 4,
179 | "metadata": {},
180 | "outputs": [
181 | {
182 | "name": "stdout",
183 | "output_type": "stream",
184 | "text": [
185 | "('Running on ', 2, ' CPU cores')\n",
186 | "CPU times: user 0 ns, sys: 0 ns, total: 0 ns\n",
187 | "Wall time: 6.91 µs\n"
188 | ]
189 | },
190 | {
191 | "data": {
192 | "text/plain": [
193 | "[2.4287628162912635e+23,\n",
194 | " 2.4287628162912635e+23,\n",
195 | " 2.4287628162912635e+23,\n",
196 | " 2.4287628162912635e+23,\n",
197 | " 2.4287628162912635e+23,\n",
198 | " 2.4287628162912635e+23,\n",
199 | " 2.4287628162912635e+23,\n",
200 | " 2.4287628162912635e+23,\n",
201 | " 2.4287628162912635e+23,\n",
202 | " 2.4287628162912635e+23]"
203 | ]
204 | },
205 | "execution_count": 4,
206 | "metadata": {},
207 | "output_type": "execute_result"
208 | }
209 | ],
210 | "source": [
211 | "from joblib import Parallel, delayed \n",
212 | "import multiprocessing\n",
213 | "import numpy as np\n",
214 | "from cython_dot2 import dot_product\n",
215 | "\n",
216 | "num_cores = 2 #multiprocessing.cpu_count()-1\n",
217 | "print(\"Running on \", num_cores, \" CPU cores\")\n",
218 | "\n",
219 | "Nvectors = 10\n",
220 | "results = list()\n",
221 | "\n",
222 | "def getProduct():\n",
223 | " vec1 = np.arange(100000000,dtype=float)\n",
224 | " vec2 = np.arange(100000000,dtype=float)\n",
225 | " return dot_product(vec1,vec2)\n",
226 | "\n",
227 | "results = Parallel(n_jobs=num_cores)(delayed(getProduct)() for i in range(Nvectors)) \n",
228 | "\n",
229 | "%time results "
230 | ]
231 | },
232 | {
233 | "cell_type": "code",
234 | "execution_count": null,
235 | "metadata": {
236 | "collapsed": true
237 | },
238 | "outputs": [],
239 | "source": []
240 | }
241 | ],
242 | "metadata": {
243 | "anaconda-cloud": {},
244 | "kernelspec": {
245 | "display_name": "Python 2",
246 | "language": "python",
247 | "name": "python2"
248 | },
249 | "language_info": {
250 | "codemirror_mode": {
251 | "name": "ipython",
252 | "version": 2
253 | },
254 | "file_extension": ".py",
255 | "mimetype": "text/x-python",
256 | "name": "python",
257 | "nbconvert_exporter": "python",
258 | "pygments_lexer": "ipython2",
259 | "version": "2.7.14"
260 | }
261 | },
262 | "nbformat": 4,
263 | "nbformat_minor": 1
264 | }
265 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/cython_dot.pyx:
--------------------------------------------------------------------------------
1 | cimport cython
2 |
3 | @cython.boundscheck(False) # Will not check indexing, so ensure indices are valid and non-negative
4 | @cython.wraparound(False) # Will not allow negative indexing
5 | @cython.cdivision(True) # Will not check for division by zero
6 | def dot_product(vec1,vec2):
7 | cdef float result = 0.0
8 | cdef unsigned int i
9 | cdef unsigned int n = len(vec1)
10 |
11 | for i in range(n):
12 | result += vec1[i]*vec2[i]
13 |
14 | return result
15 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/cython_dot2.pyx:
--------------------------------------------------------------------------------
1 | cimport cython
2 | import numpy as np
3 | cimport numpy as np
4 |
5 | DTYPE = np.float64
6 | ctypedef np.float64_t DTYPE_t
7 |
8 | @cython.boundscheck(False) # Will not check indexing, so ensure indices are valid and non-negative
9 | @cython.wraparound(False) # Will not allow negative indexing
10 | @cython.cdivision(True) # Will not check for division by zero
11 | def dot_product(np.ndarray[DTYPE_t, ndim=1] vec1, np.ndarray[DTYPE_t, ndim=1] vec2):
12 | cdef float result = 0.0
13 | cdef unsigned int i
14 | cdef unsigned int n = vec1.shape[0]
15 |
16 | for i in range(n):
17 | result += vec1[i]*vec2[i]
18 |
19 | return result
20 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/cython_dot3.pyx:
--------------------------------------------------------------------------------
1 | cimport cython
2 | import numpy as np
3 | cimport numpy as np
4 |
5 | from cython.parallel cimport prange, parallel
6 | cimport openmp
7 |
8 | DTYPE = np.float64
9 | ctypedef np.float64_t DTYPE_t
10 |
11 | @cython.boundscheck(False) # Will not check indexing, so ensure indices are valid and non-negative
12 | @cython.wraparound(False) # Will not allow negative indexing
13 | @cython.cdivision(True) # Will not check for division by zero
14 | def dot_product(np.ndarray[DTYPE_t, ndim=1] vec1, np.ndarray[DTYPE_t, ndim=1] vec2):
15 | cdef float result = 0.0
16 | cdef unsigned int i
17 | cdef unsigned int n = vec1.shape[0]
18 |
19 | with nogil, parallel(num_threads=24):
20 | for i in prange(n, schedule='dynamic'):
21 | result += vec1[i]*vec2[i]
22 |
23 | return result
24 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/cython_setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | from distutils.core import setup
4 | from Cython.Build import cythonize
5 |
6 | setup(
7 | name = 'my dot',
8 | ext_modules = cythonize("cython_dot.pyx")
9 | )
10 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/cython_setup2.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | from distutils.core import setup
4 | from Cython.Build import cythonize
5 | from distutils.extension import Extension
6 | from Cython.Distutils import build_ext
7 | import numpy as np
8 |
9 | ext_modules=[
10 | Extension("cython_dot2",
11 | ["cython_dot2.pyx"],
12 | libraries=["m"],
13 | extra_compile_args = ["-O3", "-ffast-math", "-march=native"],
14 | #extra_link_args=['-fopenmp']
15 | )
16 | ]
17 |
18 |
19 | setup(
20 | name = "cython_dot2",
21 | cmdclass = {"build_ext": build_ext},
22 | ext_modules = ext_modules,
23 | include_dirs = [np.get_include()]
24 | )
25 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/cython_setup3.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | from distutils.core import setup
4 | from Cython.Build import cythonize
5 | from distutils.extension import Extension
6 | from Cython.Distutils import build_ext
7 | import numpy as np
8 |
9 | ext_modules=[
10 | Extension("cython_dot3",
11 | ["cython_dot3.pyx"],
12 | libraries=["m"],
13 | extra_compile_args = ["-O3", "-ffast-math", "-march=native","-fopenmp"],
14 | #extra_link_args=['-fopenmp']
15 | )
16 | ]
17 |
18 |
19 | setup(
20 | name = "cython_dot3",
21 | cmdclass = {"build_ext": build_ext},
22 | ext_modules = ext_modules,
23 | include_dirs = [np.get_include()]
24 | )
25 |
--------------------------------------------------------------------------------
/2_CythonMultiprocessing/answers/my_dot.c:
--------------------------------------------------------------------------------
1 | #include <stdio.h>
2 |
3 | double dot_product(double v[], double u[], int n)
4 | {
5 | double result = 0.0;
6 | for (int i = 0; i < n; i++)
7 | result += v[i]*u[i];
8 | return result;
9 | }
10 |
--------------------------------------------------------------------------------
/3_ProfileDebug/ProfileDebug.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# CPU and memory profiling in Python\n",
8 | "\n",
9 | "Python includes a profiler called `cProfile`. It not only gives the total running time, but also times each function separately, and tells you how many times each function was called, making it easy to determine where you should make optimizations.\n",
10 | "\n",
11 | "You can call it from within your code, or from the interpreter. Let us look at an example of 2 simple functions:\n",
12 | "**foo()**: allocates lists a, b and then deletes b, and calls **bar()** which simply squares elements in range(10)."
13 | ]
14 | },
15 | {
16 | "cell_type": "code",
17 | "execution_count": 1,
18 | "metadata": {
19 | "collapsed": true
20 | },
21 | "outputs": [],
22 | "source": [
23 | "def bar():\n",
24 | " map(lambda x: x*x, range(10))\n",
25 | "\n",
26 | "def foo():\n",
27 | " a = [1] * (10 ** 6)\n",
28 | " b = [2] * (2 * 10 ** 7)\n",
29 | " del b\n",
30 | " bar()\n",
31 | " return a"
32 | ]
33 | },
34 | {
35 | "cell_type": "code",
36 | "execution_count": 2,
37 | "metadata": {
38 | "collapsed": false
39 | },
40 | "outputs": [
41 | {
42 | "name": "stdout",
43 | "output_type": "stream",
44 | "text": [
45 | " 16 function calls in 0.162 seconds\n",
46 | "\n",
47 | " Ordered by: standard name\n",
48 | "\n",
49 | " ncalls tottime percall cumtime percall filename:lineno(function)\n",
50 | " 1 0.000 0.000 0.000 0.000 :1(bar)\n",
51 | " 10 0.000 0.000 0.000 0.000 :2()\n",
52 | " 1 0.160 0.160 0.160 0.160 :4(foo)\n",
53 | " 1 0.002 0.002 0.162 0.162 :1()\n",
54 | " 1 0.000 0.000 0.000 0.000 {map}\n",
55 | " 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}\n",
56 | " 1 0.000 0.000 0.000 0.000 {range}\n",
57 | "\n",
58 | "\n"
59 | ]
60 | }
61 | ],
62 | "source": [
63 | "import cProfile\n",
64 | "cProfile.run('foo()')"
65 | ]
66 | },
67 | {
68 | "cell_type": "markdown",
69 | "metadata": {},
70 | "source": [
71 | "Even more usefully, you can invoke the `cProfile` when running a script:\n",
72 | "\n",
73 | "```bash\n",
74 | "python -m cProfile myscript.py\n",
75 | "```"
76 | ]
77 | },
78 | {
79 | "cell_type": "markdown",
80 | "metadata": {},
81 | "source": [
82 | "## Memory profiler (performed in the command line or Python IDE)\n",
83 | "\n",
84 | "Python has a memory profiler as well. The profiler works in a line-by-line mode.\n",
85 | "To use it, first decorate the function you would like to profile with `@profile` and then run the script with specific arguments to the Python interpreter.\n",
86 | "\n",
87 | "\n",
88 | "### Install memory_profiler\n",
89 | "\n",
90 | "You can install memory profiler using pip:\n",
91 | "```bash\n",
92 | "pip install --user memory_profiler\n",
93 | "```\n",
94 | "\n",
95 | "Create a Python script `example.py` with following contents:\n",
96 | "\n",
97 | "\n",
98 | "```python\n",
99 | "@profile\n",
100 | "def foo():\n",
101 | " a = [1] * (10 ** 6)\n",
102 | " b = [2] * (2 * 10 ** 7)\n",
103 | " del b\n",
104 | " return a\n",
105 | "\n",
106 | "if __name__ == '__main__':\n",
107 | " foo()\n",
108 | "``` \n",
109 | "\n",
110 | "Execute the code passing the option `-m memory_profiler` to the python interpreter to load the memory_profiler module and print to stdout the line-by-line analysis:\n",
111 | "\n",
112 | "```bash\n",
113 | "python -m memory_profiler example.py\n",
114 | "```"
115 | ]
116 | },
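{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to stay inside Python (or a notebook) rather than decorating functions and re-running a script, memory_profiler also provides a `memory_usage` helper. A small sketch, assuming the package is installed as above (the reported values are memory samples in MiB):\n",
"\n",
"```python\n",
"from memory_profiler import memory_usage\n",
"\n",
"def foo():\n",
"    a = [1] * (10 ** 6)\n",
"    b = [2] * (2 * 10 ** 7)\n",
"    del b\n",
"    return a\n",
"\n",
"# Sample the memory of the current process every 0.1 s while foo() runs\n",
"usage = memory_usage((foo, (), {}), interval=0.1)\n",
"print('peak memory: %.1f MiB' % max(usage))\n",
"```"
]
},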
117 | {
118 | "cell_type": "code",
119 | "execution_count": null,
120 | "metadata": {
121 | "collapsed": true
122 | },
123 | "outputs": [],
124 | "source": []
125 | },
126 | {
127 | "cell_type": "markdown",
128 | "metadata": {},
129 | "source": [
130 | "# Debugging\n",
131 | "\n",
132 | "Debugging is a very important step of Python application development. The easiest and most common way to debug among beginners is by inserting multiple print statements inside the code.\n",
133 | "\n",
134 | "Luckily, Python has a debugger, which is available as a module called **pdb** (stands for “Python DeBugger”). This is a very simple and useful tool to learn if you are writing any Python programs.\n",
135 | "\n",
136 | "Let us look at a simple (though not very meaningful) code below:"
137 | ]
138 | },
139 | {
140 | "cell_type": "code",
141 | "execution_count": 1,
142 | "metadata": {
143 | "collapsed": false
144 | },
145 | "outputs": [
146 | {
147 | "name": "stdout",
148 | "output_type": "stream",
149 | "text": [
150 | "aaabbbccc\n"
151 | ]
152 | }
153 | ],
154 | "source": [
155 | "# epdb1.py -- experiment with the Python debugger, pdb\n",
156 | "a = \"aaa\"\n",
157 | "b = \"bbb\"\n",
158 | "c = \"ccc\"\n",
159 | "final = a + b + c\n",
160 | "print(final)"
161 | ]
162 | },
163 | {
164 | "cell_type": "markdown",
165 | "metadata": {},
166 | "source": [
167 | "Debugging with PDB is as simple as importing the corresponding module:"
168 | ]
169 | },
170 | {
171 | "cell_type": "code",
172 | "execution_count": 2,
173 | "metadata": {
174 | "collapsed": true
175 | },
176 | "outputs": [],
177 | "source": [
178 | "import pdb"
179 | ]
180 | },
181 | {
182 | "cell_type": "markdown",
183 | "metadata": {},
184 | "source": [
185 | "Now find a spot where you would like tracing to begin, and insert the following code:"
186 | ]
187 | },
188 | {
189 | "cell_type": "code",
190 | "execution_count": null,
191 | "metadata": {
192 | "collapsed": true
193 | },
194 | "outputs": [],
195 | "source": [
196 | "#pdb.set_trace()"
197 | ]
198 | },
199 | {
200 | "cell_type": "markdown",
201 | "metadata": {},
202 | "source": [
203 | "So now our program looks like (we will copy it to the python file and run it from the command line):"
204 | ]
205 | },
206 | {
207 | "cell_type": "code",
208 | "execution_count": null,
209 | "metadata": {
210 | "collapsed": true
211 | },
212 | "outputs": [],
213 | "source": [
214 | "# epdb1.py -- experiment with the Python debugger, pdb\n",
215 | "import pdb\n",
216 | "a = \"aaa\"\n",
217 | "pdb.set_trace()\n",
218 | "b = \"bbb\"\n",
219 | "c = \"ccc\"\n",
220 | "final = a + b + c\n",
221 | "print(final)"
222 | ]
223 | },
224 | {
225 | "cell_type": "markdown",
226 | "metadata": {},
227 | "source": [
228 | "## Exploring the program with \"n\" command\n",
229 | "\n",
230 | "Start run the debug run by typing:\n",
231 | "\n",
232 | "```python\n",
233 | "python3 epdb1.py\n",
234 | "```\n",
235 | "\n",
236 | "Execute the next statemen with “n” (next)\n",
237 | "\n",
238 | "At the Pdb prompt, press the lower-case letter “n” (for “next”) on your keyboard, and then press the ENTER key. This will tell pdb to execute the current statement. Keep doing this — pressing “n”, then ENTER.\n",
239 | "\n",
240 | "Eventually you will come to the end of your program, and it will terminate and return you to the normal command prompt.\n",
241 | "\n",
242 | "## Repeating the last debugging command with ENTER\n",
243 | "\n",
244 | "This time, do the same thing as you did before. Start your program running. At the (Pdb) prompt, press the lower-case letter “n” (for “next”) on your keyboard, and then press the ENTER key.\n",
245 | "\n",
246 | "But this time, after the first time that you press “n” and then ENTER, don’t do it any more. Instead, when you see the (Pdb) prompt, just press ENTER. You will notice that pdb continues, just as if you had pressed “n”. \n",
247 | "\n",
248 | "**If you press ENTER without entering anything, pdb will re-execute the last command that you gave it.**\n",
249 | "\n",
250 | "In this case, the command was “n”, so you could just keep stepping through the program by pressing ENTER.\n",
251 | "\n",
252 | "Notice that as you passed the last line (the line with the “print” statement), it was executed and you saw the output of the print statement (“aaabbbccc”) displayed on your screen.\n",
253 | "\n",
254 | "## Quitting it all with “q” (quit)\n",
255 | "\n",
256 | "The debugger can do all sorts of things, some of which you may find totally mystifying. So the most important thing to learn now — before you learn anything else — is how to quit debugging.\n",
257 | "\n",
258 | "It is easy. When you see the (Pdb) prompt, just press “q” (for “quit”) and the ENTER key. Pdb will quit and you will be back at your command prompt. Try it, and see how it works.\n",
259 | "\n",
260 | "## Printing the value of variables with “p” (print)\n",
261 | "\n",
262 | "The most useful thing you can do at the (Pdb) prompt is to print the value of a variable. Here’s how to do it.\n",
263 | "\n",
264 | "When you see the (Pdb) prompt, enter “p” (for “print”) followed by the name of the variable you want to print. And of course, you end by pressing the ENTER key.\n",
265 | "\n",
266 | "Note that you can print multiple variables, by separating their names with commas (just as in a regular Python “print” statement). For example, you can print the value of the variables a, b, and c this way.\n",
267 | "\n",
268 | "\n",
269 | "## Seeing where you are with “l” (list)\n",
270 | "\n",
271 | "As you are debugging, there is a lot of stuff being written to the screen, and it gets really hard to get a feeling for where you are in your program. That’s where the “l” (for “list”) command comes in. \n",
272 | "\n",
273 | "“l” shows you, on the screen, the general area of your program’s souce code that you are executing. By default, it lists 11 (eleven) lines of code. The line of code that you are about to execute (the “current line”) is right in the middle, and there is a little arrow “–>” that points to it.\n",
274 | "\n",
275 | "So a typical interaction with pdb might go like this\n",
276 | "\n",
277 | "The pdb.set_trace() statement is encountered, and you start tracing with the (Pdb) prompt\n",
278 | "You press “n” and then ENTER, to start stepping through your code.\n",
279 | "You just press ENTER to step again.\n",
280 | "You just press ENTER to step again.\n",
281 | "You just press ENTER to step again. etc. etc. etc.\n",
282 | "Eventually, you realize that you are a bit lost. You’re not exactly sure where you are in your program any more. So…\n",
283 | "You press “l” and then ENTER. This lists the area of your program that is currently being executed.\n",
284 | "You inspect the display, get your bearings, and are ready to start again. So….\n",
285 | "You press “n” and then ENTER, to start stepping through your code.\n",
286 | "You just press ENTER to step again.\n",
287 | "You just press ENTER to step again. etc. etc. etc.\n",
288 | "\n",
289 | "## Stepping into subroutines… with “s” (step into)\n",
290 | "\n",
291 | "Eventually, you will need to debug larger programs — programs that use subroutines. And sometimes, the problem that you’re trying to find will lie buried in a subroutine. Consider the following program.\n",
292 | "\n",
293 | "Let us consider a more involved program:"
294 | ]
295 | },
296 | {
297 | "cell_type": "code",
298 | "execution_count": 79,
299 | "metadata": {
300 | "collapsed": false
301 | },
302 | "outputs": [
303 | {
304 | "name": "stdout",
305 | "output_type": "stream",
306 | "text": [
307 | "--Return--\n",
308 | "> (10)()->None\n",
309 | "-> pdb.set_trace()\n",
310 | "(Pdb) q\n"
311 | ]
312 | },
313 | {
314 | "ename": "BdbQuit",
315 | "evalue": "",
316 | "output_type": "error",
317 | "traceback": [
318 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
319 | "\u001b[0;31mBdbQuit\u001b[0m Traceback (most recent call last)",
320 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 9\u001b[0m \u001b[0ma\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m\"aaa\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 10\u001b[0;31m \u001b[0mpdb\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mset_trace\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 11\u001b[0m \u001b[0mb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m\"bbb\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 12\u001b[0m \u001b[0mc\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m\"ccc\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
321 | "\u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/bdb.py\u001b[0m in \u001b[0;36mtrace_dispatch\u001b[0;34m(self, frame, event, arg)\u001b[0m\n\u001b[1;32m 50\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdispatch_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mframe\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0marg\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 51\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mevent\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m'return'\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 52\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdispatch_return\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mframe\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0marg\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 53\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mevent\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m'exception'\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 54\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdispatch_exception\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mframe\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0marg\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
322 | "\u001b[0;32m/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/bdb.py\u001b[0m in \u001b[0;36mdispatch_return\u001b[0;34m(self, frame, arg)\u001b[0m\n\u001b[1;32m 94\u001b[0m \u001b[0;32mfinally\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 95\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mframe_returning\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 96\u001b[0;31m \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mquitting\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;32mraise\u001b[0m \u001b[0mBdbQuit\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 97\u001b[0m \u001b[0;31m# The user issued a 'next' or 'until' command.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 98\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstopframe\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0mframe\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstoplineno\u001b[0m \u001b[0;34m!=\u001b[0m \u001b[0;34m-\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
323 | "\u001b[0;31mBdbQuit\u001b[0m: "
324 | ]
325 | }
326 | ],
327 | "source": [
328 | "# epdb2.py -- experiment with the Python debugger, pdb\n",
329 | "import pdb\n",
330 | "\n",
331 | "def combine(s1,s2): # define subroutine combine, which...\n",
332 | " s3 = s1 + s2 + s1 # sandwiches s2 between copies of s1, ...\n",
333 | " s3 = '\"' + s3 +'\"' # encloses it in double quotes,...\n",
334 | " return s3 # and returns it.\n",
335 | "\n",
336 | "a = \"aaa\"\n",
337 | "pdb.set_trace()\n",
338 | "b = \"bbb\"\n",
339 | "c = \"ccc\"\n",
340 | "final = combine(a,b)\n",
341 | "print(final)"
342 | ]
343 | },
344 | {
345 | "cell_type": "markdown",
346 | "metadata": {},
347 | "source": [
348 | "Unlike \"n\" command which steps through your program line by line starting at the line where you have inserted the *set_trace()* statement, command \"s\" will step into functions and subroutines. Namely, if you press \"n\" on:\n",
349 | "```python\n",
350 | "final = combine(a,b)\n",
351 | "```\n",
352 | "\n",
353 | "it will just proceed to:\n",
354 | "\n",
355 | "```python\n",
356 | "print(final)\n",
357 | "```\n",
358 | "\n",
359 | "while \"s\" will step into the combine() function.\n",
360 | "\n",
361 | "## Continuing to the end of the current subroutine with “r” (return)\n",
362 | "\n",
363 | "When you use “s” to step into subroutines, you will often find yourself trapped in a subroutine. You have examined the code that you’re interested in, but now you have to step through a lot of uninteresting code in the subroutine.\n",
364 | "\n",
365 | "In this situation, what you’d like to be able to do is just to skip ahead to the end of the subroutine. That is, you want to do something like the “c” (“continue”) command does, but you want just to continue to the end of the subroutine, and then resume your stepping through the code.\n",
366 | "\n",
367 | "You can do it. The command to do it is “r” (for “return” or, better, “continue until return”). If you are in a subroutine and you enter the “r” command at the (Pdb) prompt, pdb will continue executing until the end of the subroutine. At that point — the point when it is ready to return to the calling routine — it will stop and show the (Pdb) prompt again, and you can resume stepping through your code."
368 | ]
369 | },
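{
"cell_type": "markdown",
"metadata": {},
"source": [
"Two more ways of entering pdb are worth knowing. You can run a whole script under the debugger from the command line, without editing it at all:\n",
"\n",
"```bash\n",
"python -m pdb epdb2.py\n",
"```\n",
"\n",
"And you can inspect an exception after it has been raised (post-mortem debugging), so no `set_trace()` call is needed in advance. A minimal sketch, where `risky()` is just a hypothetical function that fails:\n",
"\n",
"```python\n",
"import pdb\n",
"\n",
"def risky():\n",
"    return 1 / 0\n",
"\n",
"try:\n",
"    risky()\n",
"except ZeroDivisionError:\n",
"    # Drop into the debugger at the point where the exception was raised\n",
"    pdb.post_mortem()\n",
"```"
]
},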
370 | {
371 | "cell_type": "markdown",
372 | "metadata": {
373 | "collapsed": true
374 | },
375 | "source": [
376 | "## Memory profiler and pdb\n",
377 | "\n",
378 | "It is possible to set breakpoints depending on the amount of memory used. That is, you can specify a threshold and as soon as the program uses more memory than what is specified in the threshold it will stop execution and run into the pdb debugger. To use it, you will have to decorate the function as done in the previous section with @profile and then run your script with the option `-m memory_profiler --pdb-mmem=X`, where X is a number representing the memory threshold in MB. For example:\n",
379 | "\n",
380 | "```bash\n",
381 | "python -m memory_profiler --pdb-mmem=100 example.py\n",
382 | "```"
383 | ]
384 | },
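{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a time-based view rather than a line-by-line table, the memory_profiler package also ships an `mprof` command-line tool (this assumes `mprof` ended up on your PATH when you installed the package). A hedged sketch:\n",
"\n",
"```bash\n",
"mprof run example.py   # sample memory usage over time while the script runs\n",
"mprof plot             # plot the most recent recording (requires matplotlib)\n",
"```"
]
},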
385 | {
386 | "cell_type": "code",
387 | "execution_count": null,
388 | "metadata": {
389 | "collapsed": true
390 | },
391 | "outputs": [],
392 | "source": []
393 | }
394 | ],
395 | "metadata": {
396 | "anaconda-cloud": {},
397 | "kernelspec": {
398 | "display_name": "Python [conda env:PythonWorkshop]",
399 | "language": "python",
400 | "name": "conda-env-PythonWorkshop-py"
401 | },
402 | "language_info": {
403 | "codemirror_mode": {
404 | "name": "ipython",
405 | "version": 2
406 | },
407 | "file_extension": ".py",
408 | "mimetype": "text/x-python",
409 | "name": "python",
410 | "nbconvert_exporter": "python",
411 | "pygments_lexer": "ipython2",
412 | "version": "2.7.12"
413 | }
414 | },
415 | "nbformat": 4,
416 | "nbformat_minor": 0
417 | }
418 |
--------------------------------------------------------------------------------
/3_ProfileDebug/answers/epdb1.py:
--------------------------------------------------------------------------------
1 | import pdb
2 | a = "aaa"
3 | pdb.set_trace()
4 | b = "bbb"
5 | c = "ccc"
6 | final = a + b + c
7 | print(final)
8 |
--------------------------------------------------------------------------------
/3_ProfileDebug/answers/epdb2.py:
--------------------------------------------------------------------------------
1 | # epdb2.py -- experiment with the Python debugger, pdb
2 | import pdb
3 |
4 | def combine(s1,s2): # define subroutine combine, which...
5 | s3 = s1 + s2 + s1 # sandwiches s2 between copies of s1, ...
6 | s3 = '"' + s3 +'"' # encloses it in double quotes,...
7 | return s3 # and returns it.
8 |
9 | a = "aaa"
10 | pdb.set_trace()
11 | b = "bbb"
12 | c = "ccc"
13 | final = combine(a,b)
14 | print(final)
15 |
--------------------------------------------------------------------------------
/3_ProfileDebug/answers/example.py:
--------------------------------------------------------------------------------
1 | @profile
2 | def foo():
3 | a = [1] * (10 ** 6)
4 | b = [2] * (2 * 10 ** 7)
5 | del b
6 | return a
7 |
8 | if __name__ == '__main__':
9 | foo()
10 |
--------------------------------------------------------------------------------
/3_ProfileDebug/answers/palindrome.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | import unittest
3 |
4 | def digits(x):
5 | digs = []
6 | while x != 0:
7 | div,mod = divmod(x,10)
8 | digs.append(mod)
9 | x = mod
10 | return digs
11 |
12 |
13 | def is_palindrome(x):
14 | digs = digits(x)
15 | for f,r in zip(digs, reversed(digs)):
16 | if f != r:
17 | return False
18 | return True
19 |
20 |
21 | class Tests(unittest.TestCase):
22 | def test_negative(self):
23 | self.assertFalse(is_palindrome(1234))
24 |
25 | def test_positive(self):
26 | self.assertTrue(is_palindrome(1234321))
27 |
28 | def test_single_digit(self):
29 | for i in range(10):
30 | self.assertTrue(is_palindrome(i))
31 |
32 |
33 | if __name__ == '__main__':
34 | unittest.main()
35 |
--------------------------------------------------------------------------------
/4_Tensorflow/Keras.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Keras and estimators API\n",
8 | "\n",
9 | "In the previous example of hand-written digit recognition, we have used layers API to TensorFlow and we have had to do a lot of matrix multiplication, definining loss and optimizer ops manually.\n",
10 | "\n",
11 | "We could put most of this logic into an estimator - a black box containing the logic of the neural network model, training, evaluation and prediction loops - giving an end user 3 methods: `fit`, `evaluate` and `predict`. Simple!\n",
12 | "\n",
13 | "An example of library providing this higher level API is Keras. Implementing neural networks with Keras is \n",
14 | "as simple as constructing something using lego:"
15 | ]
16 | },
17 | {
18 | "cell_type": "code",
19 | "execution_count": 1,
20 | "metadata": {
21 | "collapsed": false
22 | },
23 | "outputs": [
24 | {
25 | "name": "stderr",
26 | "output_type": "stream",
27 | "text": [
28 | "Using TensorFlow backend.\n"
29 | ]
30 | },
31 | {
32 | "name": "stdout",
33 | "output_type": "stream",
34 | "text": [
35 | "('x_train shape:', (60000, 28, 28, 1))\n",
36 | "(60000, 'train samples')\n",
37 | "(10000, 'test samples')\n"
38 | ]
39 | }
40 | ],
41 | "source": [
42 | "from keras.datasets import mnist\n",
43 | "from keras import backend as K\n",
44 | "from keras import utils\n",
45 | "\n",
46 | "batch_size = 128\n",
47 | "num_classes = 10\n",
48 | "epochs = 200\n",
49 | "\n",
50 | "# input image dimensions\n",
51 | "img_rows, img_cols = 28, 28\n",
52 | "\n",
53 | "# the data, shuffled and split between train and test sets\n",
54 | "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n",
55 | "\n",
56 | "x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\n",
57 | "x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)\n",
58 | "input_shape = (img_rows, img_cols, 1)\n",
59 | "\n",
60 | "x_train = x_train.astype('float32')\n",
61 | "x_test = x_test.astype('float32')\n",
62 | "x_train /= 255\n",
63 | "x_test /= 255\n",
64 | "print('x_train shape:', x_train.shape)\n",
65 | "print(x_train.shape[0], 'train samples')\n",
66 | "print(x_test.shape[0], 'test samples')\n",
67 | "\n",
68 | "# convert class vectors to binary class matrices\n",
69 | "y_train = utils.to_categorical(y_train, num_classes)\n",
70 | "y_test = utils.to_categorical(y_test, num_classes)"
71 | ]
72 | },
73 | {
74 | "cell_type": "code",
75 | "execution_count": 2,
76 | "metadata": {
77 | "collapsed": true
78 | },
79 | "outputs": [],
80 | "source": [
81 | "from keras.models import Sequential\n",
82 | "from keras.layers import Dense\n",
83 | "#from keras.layers import Dropout\n",
84 | "from keras.layers import Flatten\n",
85 | "from keras.layers.convolutional import Conv2D\n",
86 | "from keras.layers.convolutional import MaxPooling2D\n",
87 | "\n",
88 | "from keras.optimizers import Adam"
89 | ]
90 | },
91 | {
92 | "cell_type": "code",
93 | "execution_count": 3,
94 | "metadata": {
95 | "collapsed": true
96 | },
97 | "outputs": [],
98 | "source": [
99 | "model = Sequential()\n",
100 | "model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu')) #channels last\n",
101 | "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
102 | "model.add(Conv2D(64, (5, 5), activation='relu'))\n",
103 | "model.add(MaxPooling2D(pool_size=(2, 2)))\n",
104 | "#model.add(Dropout(0.2))\n",
105 | "model.add(Flatten())\n",
106 | "model.add(Dense(1024, activation='relu'))\n",
107 | "model.add(Dense(num_classes, activation='softmax'))"
108 | ]
109 | },
110 | {
111 | "cell_type": "markdown",
112 | "metadata": {},
113 | "source": [
114 | "## Compile the model\n",
115 | "Keras is built on top of Theano (and now TensorFlow as well), both packages that allow you to define a *computation graph* in Python, which they then compile and run efficiently on the CPU or GPU without the overhead of the Python interpreter.\n",
116 | "\n",
117 | "When compiing a model, Keras asks you to specify your **loss function** and your **optimizer**. The loss function we'll use here is called *categorical crossentropy*, and is a loss function well-suited to comparing two probability distributions.\n",
118 | "\n",
119 | "Here our predictions are probability distributions across the ten different digits (e.g. \"we're 80% confident this image is a 3, 10% sure it's an 8, 5% it's a 2, etc.\"), and the target is a probability distribution with 100% for the correct category, and 0 for everything else. The cross-entropy is a measure of how different your predicted distribution is from the target distribution. [More detail at Wikipedia](https://en.wikipedia.org/wiki/Cross_entropy)"
120 | ]
121 | },
122 | {
123 | "cell_type": "code",
124 | "execution_count": 28,
125 | "metadata": {
126 | "collapsed": true
127 | },
128 | "outputs": [],
129 | "source": [
130 | "adam = Adam(lr=0.0001)\n",
131 | "model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=[\"accuracy\"]) #optimizer='adam')"
132 | ]
133 | },
134 | {
135 | "cell_type": "markdown",
136 | "metadata": {},
137 | "source": [
138 | "## Train the model\n",
139 | "This is the fun part: you can feed the training data loaded in earlier into this model and it will learn to classify digits"
140 | ]
141 | },
142 | {
143 | "cell_type": "code",
144 | "execution_count": 29,
145 | "metadata": {
146 | "collapsed": false
147 | },
148 | "outputs": [
149 | {
150 | "name": "stdout",
151 | "output_type": "stream",
152 | "text": [
153 | "Train on 60000 samples, validate on 10000 samples\n",
154 | "Epoch 1/12\n",
155 | "60000/60000 [==============================] - 60s - loss: 0.0094 - acc: 0.9974 - val_loss: 0.0256 - val_acc: 0.9919\n",
156 | "Epoch 2/12\n",
157 | "60000/60000 [==============================] - 59s - loss: 0.0077 - acc: 0.9978 - val_loss: 0.0290 - val_acc: 0.9919\n",
158 | "Epoch 3/12\n",
159 | "60000/60000 [==============================] - 58s - loss: 0.0068 - acc: 0.9981 - val_loss: 0.0300 - val_acc: 0.9901\n",
160 | "Epoch 4/12\n",
161 | "60000/60000 [==============================] - 59s - loss: 0.0058 - acc: 0.9984 - val_loss: 0.0272 - val_acc: 0.9924\n",
162 | "Epoch 5/12\n",
163 | "60000/60000 [==============================] - 60s - loss: 0.0053 - acc: 0.9987 - val_loss: 0.0261 - val_acc: 0.9919\n",
164 | "Epoch 6/12\n",
165 | "60000/60000 [==============================] - 60s - loss: 0.0052 - acc: 0.9987 - val_loss: 0.0292 - val_acc: 0.9906\n",
166 | "Epoch 7/12\n",
167 | "60000/60000 [==============================] - 59s - loss: 0.0052 - acc: 0.9985 - val_loss: 0.0257 - val_acc: 0.9923\n",
168 | "Epoch 8/12\n",
169 | "60000/60000 [==============================] - 59s - loss: 0.0039 - acc: 0.9990 - val_loss: 0.0275 - val_acc: 0.9922\n",
170 | "Epoch 9/12\n",
171 | "60000/60000 [==============================] - 59s - loss: 0.0042 - acc: 0.9989 - val_loss: 0.0287 - val_acc: 0.9911\n",
172 | "Epoch 10/12\n",
173 | "60000/60000 [==============================] - 59s - loss: 0.0034 - acc: 0.9992 - val_loss: 0.0287 - val_acc: 0.9918\n",
174 | "Epoch 11/12\n",
175 | "60000/60000 [==============================] - 58s - loss: 0.0042 - acc: 0.9986 - val_loss: 0.0303 - val_acc: 0.9911\n",
176 | "Epoch 12/12\n",
177 | "60000/60000 [==============================] - 58s - loss: 0.0026 - acc: 0.9994 - val_loss: 0.0302 - val_acc: 0.9908\n"
178 | ]
179 | },
180 | {
181 | "data": {
182 | "text/plain": [
183 | ""
184 | ]
185 | },
186 | "execution_count": 29,
187 | "metadata": {},
188 | "output_type": "execute_result"
189 | }
190 | ],
191 | "source": [
192 | "model.fit(x_train, y_train,\n",
193 | " batch_size=128, epochs=12,\n",
194 | " verbose=1, \n",
195 | " validation_data=(x_test, y_test))"
196 | ]
197 | },
198 | {
199 | "cell_type": "markdown",
200 | "metadata": {},
201 | "source": [
202 | "## Finally, evaluate its performance"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 31,
208 | "metadata": {
209 | "collapsed": false
210 | },
211 | "outputs": [
212 | {
213 | "name": "stdout",
214 | "output_type": "stream",
215 | "text": [
216 | "('Test score:', 0.030239809037391888)\n",
217 | "('Test accuracy:', 0.99080000000000001)\n"
218 | ]
219 | }
220 | ],
221 | "source": [
222 | "score = model.evaluate(x_test, y_test,verbose=0)\n",
223 | "print('Test score:', score[0])\n",
224 | "print('Test accuracy:', score[1])"
225 | ]
226 | },
227 | {
228 | "cell_type": "markdown",
229 | "metadata": {},
230 | "source": [
231 | "## Exercise\n",
232 | "\n",
233 | "Include dropout layer. How does the accuracy change as a function of the dropout probability?"
234 | ]
235 | },
236 | {
237 | "cell_type": "code",
238 | "execution_count": null,
239 | "metadata": {
240 | "collapsed": true
241 | },
242 | "outputs": [],
243 | "source": []
244 | }
245 | ],
246 | "metadata": {
247 | "anaconda-cloud": {},
248 | "kernelspec": {
249 | "display_name": "Python [default]",
250 | "language": "python",
251 | "name": "python2"
252 | },
253 | "language_info": {
254 | "codemirror_mode": {
255 | "name": "ipython",
256 | "version": 2
257 | },
258 | "file_extension": ".py",
259 | "mimetype": "text/x-python",
260 | "name": "python",
261 | "nbconvert_exporter": "python",
262 | "pygments_lexer": "ipython2",
263 | "version": "2.7.12"
264 | }
265 | },
266 | "nbformat": 4,
267 | "nbformat_minor": 1
268 | }
269 |
--------------------------------------------------------------------------------
/4_Tensorflow/TF.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Tensorflow architecture\n",
8 | "\n",
9 | "TensorFlow is a Python Library that allows users to express arbitrary computation as a graph of data flows. Nodes in this graph represent mathematical operations (ops), whereas edges represent data that is communicated from one node to\n",
10 | "another. \n",
11 | "\n",
12 | "Computation is defined as Directed Acyclic Graph (DAG, similar to Spark!):\n",
13 | " 1. Graph is defined in a high-level language like C++, Python, go\n",
14 | " 1. Graph is compiled and optimized\n",
15 | " 1. Data (tensors) flow through graph\n",
16 | " \n",
17 | "## Example of flow graph: forward propagation\n",
18 | "\n",
19 | "Forward propagation can be represented as an acyclic flow graph. It provides a nice way of implementing a forward propagation in a modular way.\n",
20 | " 1. Each node can be an object with fprop method, that computes the value given it's parents\n",
21 | " 1. Calling the fprop method of each node in the right order (directed graph) will yield the forward propagation \n",
22 | "\n",
23 | "
\n",
24 | "\n",
25 | "\n",
26 | "Data in TensorFlow are represented as tensors, which are multidimensional as arrays (equivalent to a NumPy array).\n",
27 | "\n",
28 | "Formally, a NumPy array can be viewed as a mathematical object. If:\n",
29 | "\n",
30 | "* The `dtype` belongs to some (usually field) $F$\n",
31 | "* The array has dimension $N$, with the $i$-th axis having length $n_i$\n",
32 | "* $N>1$\n",
33 | "\n",
34 | "Then this array is an object in:\n",
35 | "\n",
36 | "$$\n",
37 | "F^{n_0}\\otimes F^{n_{1}}\\otimes\\cdots \\otimes F^{n_{N-1}}\n",
38 | "$$\n",
39 | "\n",
40 | "$F^n$ is an $n$-dimensional vector space over $F$. An element in here can be represented by its canonical basis $\\textbf{e}_i^{(n)}$ as a sum for elements $f_i\\in F$:\n",
41 | "\n",
42 | "$$\n",
43 | "f_1\\textbf{e}_1^{(n)}+f_{2}\\textbf{e}_{2}^{(n)}+\\cdots +f_{n}\\textbf{e}_{n}^{(n)}\n",
44 | "$$\n",
45 | "\n",
46 | "$F^n\\otimes F^m$ is a tensor product, which takes two vector spaces and gives you another. Then the tensor product is a special kind of vector space with dimension $nm$. Elements in here have a special structure which we can tie to the original vector spaces $F^n,F^m$:\n",
47 | "\n",
48 | "$$\n",
49 | "\\sum_{i=1}^n\\sum_{j=1}^mf_{ij}(\\textbf{e}_{i}^{(n)}\\otimes \\textbf{e}_{j}^{(m)})\n",
50 | "$$\n",
51 | "\n",
52 | "Above, $(\\textbf{e}_{i}^{(n)}\\otimes \\textbf{e}_{j}^{(m)})$ is a basis vector of $F^n\\otimes F^m$ for each pair $i,j$.\n",
53 | "\n",
54 | "We will discuss what $F$ can be later; but most of this intuition (and a lot of NumPy functionality) is based on $F$ being a type corresponding to a field.\n",
55 | "\n",
56 | "\n",
57 | "\n",
58 | "\n",
59 | "Although this framework for thinking about computation is valuable in many different fields, TensorFlow is primarily used for deep learning in practice and research.\n",
60 | "\n",
61 | " 1. Core written in C++\n",
62 | " 1. Different front-ends\n",
63 | " 1. Python and C++ as of today\n",
64 | " 1. Community is expected to add more \n",
65 | "\n",
66 | "\n",
67 | "
\n",
68 | "\n",
69 | "\n",
70 | "\n",
71 | "## Constants\n",
72 | "\n",
73 | "Constants are the operations that do not need any input. "
74 | ]
75 | },
76 | {
77 | "cell_type": "code",
78 | "execution_count": 1,
79 | "metadata": {
80 | "collapsed": false
81 | },
82 | "outputs": [
83 | {
84 | "name": "stdout",
85 | "output_type": "stream",
86 | "text": [
87 | "[[ 12.]]\n"
88 | ]
89 | }
90 | ],
91 | "source": [
92 | "import tensorflow as tf\n",
93 | "sess = tf.InteractiveSession()\n",
94 | "\n",
95 | "# Create a Constant op that produces a 1x2 matrix. The op is\n",
96 | "# added as a node to the default graph.\n",
97 | "#\n",
98 | "# The value returned by the constructor represents the output\n",
99 | "# of the Constant op.\n",
100 | "matrix1 = tf.constant([[3., 3.]])\n",
101 | "\n",
102 | "# Create another Constant that produces a 2x1 matrix.\n",
103 | "matrix2 = tf.constant([[2.],[2.]])\n",
104 | "\n",
105 | "# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.\n",
106 | "# The returned value, 'product', represents the result of the matrix\n",
107 | "# multiplication.\n",
108 | "product = tf.matmul(matrix1, matrix2)\n",
109 | "print product.eval()\n",
110 | "sess.close()"
111 | ]
112 | },
113 | {
114 | "cell_type": "markdown",
115 | "metadata": {},
116 | "source": [
117 | "## Variables\n",
118 | "\n",
119 | "TensorFlow variables are in-memory buffers that contain tensors, they are present across multiple executions of a graph. A TensorFlow variables have the following three properties:\n",
120 | " 1. Variables must be explicitly initialized before a graph is used for the first time\n",
121 | " 1. We can use gradient methods to modify variables after each iteration as we search for a model’s optimal parameter settings\n",
122 | " 1. We can save the values stored in variables to disk and restore them for later use.\n",
123 | "\n",
124 | "\n",
125 | "Creating a variable is simple, following creates a random variable with a given range and standard deviation:\n",
126 | "\n",
127 | "```python\n",
128 | "weights = tf.Variable(tf.random_normal([300, 200], stddev=0.5),\n",
129 | "name=\"weights\")\n",
130 | "```"
131 | ]
132 | },
133 | {
134 | "cell_type": "code",
135 | "execution_count": 4,
136 | "metadata": {
137 | "collapsed": false
138 | },
139 | "outputs": [
140 | {
141 | "name": "stdout",
142 | "output_type": "stream",
143 | "text": [
144 | "0\n",
145 | "1\n",
146 | "2\n",
147 | "3\n"
148 | ]
149 | }
150 | ],
151 | "source": [
152 | "sess = tf.InteractiveSession()\n",
153 | "# Create a Variable, that will be initialized to the scalar value 0.\n",
154 | "state = tf.Variable(0, name=\"counter\")\n",
155 | "\n",
156 | "# Create an Op to add one to `state`.\n",
157 | "one = tf.constant(1)\n",
158 | "new_value = tf.add(state, one) #try state+one\n",
159 | "update = tf.assign(state, new_value)\n",
160 | "\n",
161 | "# Variables must be initialized by running an `init` Op after having\n",
162 | "# launched the graph. We first have to add the `init` Op to the graph.\n",
163 | "init_op = tf.global_variables_initializer()\n",
164 | "\n",
165 | "# Launch the graph and run the ops.\n",
166 | "# Run the 'init' op\n",
167 | "sess.run(init_op)\n",
168 | "# Print the initial value of 'state'\n",
169 | "print(sess.run(state))\n",
170 | "# Run the op that updates 'state' and print 'state'.\n",
171 | "for _ in range(3):\n",
172 | " sess.run(update)\n",
173 | " print(sess.run(state))\n",
174 | "\n",
175 | "sess.close()"
176 | ]
177 | },
178 | {
179 | "cell_type": "markdown",
180 | "metadata": {},
181 | "source": [
182 | "Similarly to NumPy, one can use a set of predefined functions to create some common tensors:\n",
183 | "\n",
184 | "```python\n",
185 | "#tensor of zeros of a given shape and type float32\n",
186 | "tf.zeros(shape, dtype=tf.float32, name=None)\n",
187 | "\n",
188 | "#tensor of all ones with a given shape and element type float32\n",
189 | "tf.ones(shape, dtype=tf.float32, name=None)\n",
190 | "\n",
191 | "#random normal tensor of a given shape, mean and standard deciation\n",
192 | "tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32,seed=None, name=None)\n",
193 | "\n",
194 | "#same, random uniform\n",
195 | "tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32,seed=None, name=None)\n",
196 | "```\n",
197 | "\n",
198 | "## TF operations\n",
199 | "\n",
200 | "On a high-level, TensorFlow operations represent abstract transformations\n",
201 | "that are applied to tensors in the computation graph. Operations may have\n",
202 | "attributes that may be supplied before or during runtime. \n",
203 | "\n",
204 | "An operation consists of one or more kernels, which represent device-specific implementations.\n",
205 | "For example, an operation may have separate CPU and GPU kernels\n",
206 | "because it can be more efficiently expressed on a GPU. \n",
207 | "\n",
208 | "To provide an overview of the types of operations available, we include a table from\n",
209 | "the original TensorFlow white paper detailing the various categories of operations in\n",
210 | "TensorFlow.\n",
211 | "\n",
212 | "Following table summarizes the mathematical ops available in Tensorflow\n",
213 | "\n",
214 | "\n",
215 | "|-------------------------------------------------------------------------------------------------\n",
216 | "| Type | Examples |\n",
217 | "|------------------------------------------------------------------------------------------------|\n",
218 | "|Element-wise mathematical operations | Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal, ... |\n",
219 | "|Array operations |Concat, Slice, Split, Constant, Rank, Shape, Shuffle, ... |\n",
220 | "|Matrix operations |MatMul, MatrixInverse, MatrixDeterminant, ... | \n",
221 | "|Stateful operations |Variable, Assign, AssignAdd, ... |\n",
222 | "|Neural network building blocks |SoftMax, Sigmoid, ReLU, Convolution2D, MaxPool, ... | \n",
223 | "|Checkpointing operations |Save, Restore |\n",
224 | "|Queue and synchronization operations |Enqueue, Dequeue, MutexAcquire, MutexRelease, ... |\n",
225 | "|Control flow operations |Merge, Switch, Enter, Leave, NextIteration |\n",
226 | "|-------------------------------------------------------------------------------------------------\n",
227 | "
\n",
228 | "\n",
229 | "## TF placeholders\n",
230 | "\n",
231 | "A variable is meant to be initialized only once. If we need to feed some external input to our calculation, we would need to use something that we could populate every iteration.\n",
232 | "\n",
233 | "TensorFlow solves this problem using a construct called a placeholder. A placeholder\n",
234 | "is instantiated as follows and can be used in operations just like ordinary TensorFlow\n",
235 | "variables and tensors.\n",
236 | "\n",
237 | "You supply feed data as an argument to a run() call. The feed is only used for the run call to which it is passed. The most common use case involves designating specific operations to be \"feed\" operations by using tf.placeholder() to create them:"
238 | ]
239 | },
240 | {
241 | "cell_type": "code",
242 | "execution_count": 31,
243 | "metadata": {
244 | "collapsed": false
245 | },
246 | "outputs": [
247 | {
248 | "name": "stdout",
249 | "output_type": "stream",
250 | "text": [
251 | "[array([ 14.], dtype=float32)]\n"
252 | ]
253 | }
254 | ],
255 | "source": [
256 | "sess = tf.InteractiveSession()\n",
257 | "\n",
258 | "input1 = tf.placeholder(tf.float32)\n",
259 | "input2 = tf.placeholder(tf.float32)\n",
260 | "output = tf.multiply(input1, input2)\n",
261 | "\n",
262 | "print(sess.run([output], feed_dict={input1:[7.], input2:[2.]}))\n",
263 | "\n",
264 | "sess.close()"
265 | ]
266 | },
267 | {
268 | "cell_type": "markdown",
269 | "metadata": {},
270 | "source": [
271 | "A `placeholder()` operation generates an error if you do not supply a feed for it. See the MNIST fully-connected feed tutorial (source code) for a larger-scale example of feeds."
272 | ]
273 | },
274 | {
275 | "cell_type": "markdown",
276 | "metadata": {},
277 | "source": [
278 | "## Session\n",
279 | "\n",
280 | "A TensorFlow program interacts with a computation graph using a session. The TensorFlow\n",
281 | "session is responsible for building the initial graph, can be used to initialize\n",
282 | "all variables appropriately, and to run the computational graph. In Jupyter notebook we use an InteractiveSession and eval function."
283 | ]
284 | },
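{
"cell_type": "markdown",
"metadata": {},
"source": [
"Outside of a notebook, the more common pattern is a regular `tf.Session` used as a context manager, so the session is closed automatically when the block exits. A small sketch reusing the constant example from above:\n",
"\n",
"```python\n",
"import tensorflow as tf\n",
"\n",
"matrix1 = tf.constant([[3., 3.]])\n",
"matrix2 = tf.constant([[2.], [2.]])\n",
"product = tf.matmul(matrix1, matrix2)\n",
"\n",
"# The session is created, used to run the graph, and closed automatically\n",
"with tf.Session() as sess:\n",
"    print(sess.run(product))  # [[ 12.]]\n",
"```"
]
},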
285 | {
286 | "cell_type": "markdown",
287 | "metadata": {},
288 | "source": [
289 | "# Linear regression model\n",
290 | "\n",
291 | "Let us add some complexity to the problems we are solving and move on to the linear regression model example.\n",
292 | "\n",
293 | "Consider equation: `y = 0.1 * x + 0.3 + noise`\n",
294 | "Which we are going to try to model with the following: `y = W * x + b`\n",
295 | "\n",
296 | "Our goal is to figure out the value of `W` and `b`, given enough `(x, y)` value samples.\n",
297 | "\n",
298 | "This is how we are going to solve it:\n",
299 | "\n",
300 | "\n",
301 | " 1. Import libraries (cell 1.1)\n",
302 | " 1. Create input data (cell 1.2)\n",
303 | " 1. Build inference graph (cell 1.3)\n",
304 | " 1. Create Variables to hold weights and biases.\n",
305 | " 1. Create Operations that produce logistic outputs.\n",
306 | " 1. Build training graph (cell 1.4)\n",
307 | " 1. Loss\n",
308 | " 1. Optimizer\n",
309 | " 1. Train_op: Operation that minimizes Loss\n",
310 | " 1. Create session and run initialization (cell 1.5)\n",
311 | " 1. Perform training (cell 1.6)"
312 | ]
313 | },
314 | {
315 | "cell_type": "code",
316 | "execution_count": 5,
317 | "metadata": {
318 | "collapsed": true
319 | },
320 | "outputs": [],
321 | "source": [
322 | "# 1.1 Import tensorflow and other libraries.\n",
323 | "import tensorflow as tf\n",
324 | "import numpy as np\n",
325 | "\n",
326 | "%matplotlib inline\n",
327 | "import pylab"
328 | ]
329 | },
330 | {
331 | "cell_type": "code",
332 | "execution_count": 6,
333 | "metadata": {
334 | "collapsed": false
335 | },
336 | "outputs": [
337 | {
338 | "data": {
339 | "text/plain": [
340 | "[]"
341 | ]
342 | },
343 | "execution_count": 6,
344 | "metadata": {},
345 | "output_type": "execute_result"
346 | },
347 | {
348 | "data": {
349 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAX4AAAD8CAYAAABw1c+bAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAGy9JREFUeJzt3X+MXeWd3/H3Z8aYBkgTCzsp9W/XJDRBpGtPHGeDUEig\nCwTVS0iFIRu0STeWVUBLU6lhVypdgaoEqVlBtTSW66VRtCkWCzihhF2TTWgp2ph6hrIEA6bDbCYe\nBxXHmd0sP5rxeL79495rXyZ35p577znnnnPP5yWhzL3nnDvPk7E+9znf85znKCIwM7PqGOp3A8zM\nLF8OfjOzinHwm5lVjIPfzKxiHPxmZhXj4DczqxgHv5lZxTj4zcwqxsFvZlYxS5LsJOkK4B5gGNgT\nEV9dYL8PAz8EtkfEg5JWA98E3gsEsDsi7mn3+5YvXx7r1q1L1gMzM2NsbOxnEbEiyb5tg1/SMHAv\ncDkwBRyU9EhEvNBiv7uAx5vengX+dUQ8I+mdwJik780/dr5169YxOjqapP1mZgZImky6b5JSzxZg\nPCImImIG2Atsa7HfLcBDwGuNNyLi1Yh4pv7z3wEvAiuTNs7MzNKXJPhXAkeaXk8xL7wlrQSuAb6+\n0IdIWgf8GvB0p400M7P0pHVx927gyxEx12qjpHOonQ3cGhG/WGCfHZJGJY0eO3YspWaZmdl8SS7u\nHgVWN71eVX+v2QiwVxLAcuAqSbMR8W1JZ1AL/W9FxMML/ZKI2A3sBhgZGfFa0WZmGUkS/AeB8yWt\npxb424EbmneIiPWNnyV9A3i0HvoC/hh4MSL+MLVWm5lZ19qWeiJiFrgZ2E/t4uwDEXFI0k5JO9sc\n/jHgc8AnJD1b/++qnlttZmZdSzSPPyIeAx6b996uBfb97aafnwLUQ/vMzCphbHKaAxPH2brhXDav\nXZbp70oU/GZmlp2xyWk+u+cAM7NzLF0yxLd+Z2um4e8lG8zM+uzAxHFmZueYCzgxO8eBieOZ/j4H\nv5lZhsYmp7n3iXHGJqcX3GfrhnNZumSIYcEZS4bYuuHcTNvkUo+ZWUaSlnA2r13Gt35nq2v8ZmZl\n16qEs1Cob167LPPAb3Cpx8wsI3mXcJLyiN/MLCN5l3CScvCbmWUozxJOUi71mJlVjIPfzKxiHPxm\nZhXj4DczqxgHv5lZxTj4zcwqxsFvZlYxDn4zs4px8JuZVYyD38ysYhz8ZmYV4+A3M8tAkgewJNkn\nC16kzcwsZUkewJL3c3abecRvZpayJM/Qzfs5u80SBb+kKyQdljQu6bZF9vuwpFlJn+n0WDOzPORR\nXknyAJZ+PqRFEbH4DtIw8DJwOTAFHASuj4gXWuz3PeD/AfdFxINJj51vZGQkRkdHu+uRmdkC8iyv\njE1Ot30AS5J9kpI0FhEjSfZNUuPfAoxHxET9w/cC24D54X0L8BDw4S6ONTPLXCfPwO1Vkgew9Osh\nLUlKPSuBI02vp+rvnSJpJXAN8PVOjzUzy0tRn4Gbt7Rm9dwNfDki5iR19QGSdgA7ANasWZNSs8zM\nTivqM3DzliT4jwKrm16vqr/XbATYWw/95cBVkmYTHgtAROwGdkOtxp+k8WZmnSriM3DzliT4DwLn\nS1pPLbS3Azc07xAR6xs/S/oG8GhEfFvSknbHmplZvtoGf0TMSroZ2A8MU5uxc0jSzvr2XZ0em07T\nzczSleYsmyJrO52zHzyd08zy1s87adPQyXRO37lrZoWW13o2/byTNm9eq8fMCivPUXhjqueJ2bmB\nn+rp4Dezwsr7hquqTPV08JtZYeU9Cq/KVE8Hv5kVVpVG4Xly8JtZKrKaClmVUXieHPxm1rOyT4Ws\nGk/nNLOeVWkq5CBw8JtZz7zqZbm41GNmPfNF2HJx8JtZKnwRtjxc6jEzqxgHv5lZxTj4zcwqxsFv\nZlYxDn4zs4px8JuZVYyD38ysYhz8ZmYV4+A3s67l9VhES5fv3DWzrnhFzvLyiN/MuuIVOcsrUfBL\nukLSYUnjkm5rsX2bpOckPStpVNLFTdv+laRDkp6XdL+kv5dmB8ysc2mUaMqwIqdLUa0pIhbfQRoG\nXgYuB6aAg8D1EfFC0z7nAG9EREi6CHggIi6QtBJ4CvhARLwl6QHgsYj4xmK/c2RkJEZHR3vpl1nl\nLfRErDRLNFk9dSsNVStFSRqLiJEk+yYZ8W8BxiNiIiJmgL3AtuYdIuL1OP0NcjbQ/G2yBHiHpCXA\nWcBPkzTMzLrXCL2vPX6Yz+458LYRb5olms1rl3HTpRsBCjeydilqYUku7q4EjjS9ngI+Mn8nSdcA\nXwHeA3wKICKOSvoPwE+At4DHI+LxXhttZotrFXqN0W6jRHNidi6VEk1RR9Zp9zOpIp8FNaQ2qyci\n9gH7JF0C3AlcJmkZtbOD9cDfAH8q6bci4k/mHy9pB7ADYM2aNWk1y6ySFgu9tB+astiXTD/14+Ew\nRf0SnC9J8B8FVje9XlV/r6WIeFLSBknLgUuBv46IYwCSHgZ+HfiV4I+I3cBuqNX4E/fAzH5Fu9BL\n86Ep/RpZJ5H3w2GK+iU4X5LgPwicL2k9tcDfDtzQvIOkjcAr9Yu7m4AzgePUSjxbJZ1FrdTzScBX\nbc1ykFfo+bGLpxX5S7BZ2+CPiFlJNwP7gWHgvog4JGlnffsu4FrgRkknqAX8dfWLvU9LehB4BpgF\n/jf1Ub2ZDY6FvmTGJqd5+JkpArjwH76L6TdnOv5yKEPNvKEsX4Jtp3P2g6dzmhVDL6E7NjnN9f+5\nVu9uEHDGsPjnI6v59KZVbT8zz5p5mb5gWulkOqeXbDCzlnoN3QMTxznRFPpQm+c9czL4r0//hIee\nmWr7mXnVzMtyUTYtXrLBzFrqdR781g3ncsaSt0eM6v8bJPvMvO4Ortqcf4/4zQZAFmWKXi9Ubl67\njPu/uPVtNf7nf/q3PDg2xcmTyT4zr5p5WS7KpsU1frOSy7JMkcUXSr9q6e1+r2v8ZlYaWdbBe50S\n2ipM855b32hHuy/HfrSrXxz8ZiVX1DJFkS6YluXGqrw4+M1Krqhzx4sUtkX9cuwXB7/ZAEizTNFr\nrbtx/LKzlhYmbIv65dgvDn4zO6XX8sz842+/+oNd3a2bhSrV8NvxPH4zO6XX+ezzj59+c4abLt1Y\n2MCt6hO6POI3s1N6rYWXqZZepIvPeXPwm9kpvdbCy1RLL9LF57w5+M1KKMubjXqthZelll6ms5O0\nOfjNSqbKJYo0lensJG0OfrMCWmxEX+USRdrKcnaSNge/WcG0G9FXuURh6XDwm/Ugi1p7uxF9lUsU\nlg4Hv1mXsqq1JxnRD1KJouyrYpaRg9+sS1nV2qs0oveF6v5w8Jt1Kcta+yCN6BfjC9X94eA361K3\nI3OXNk7zher+8BO4zHKUdWmjjF8qabW5jH1Pk5/AZVZQWZY2ylovT6OsVda+90ui1TklXSHpsKRx\nSbe12L5N0nOSnpU0Kunipm3vl
vSgpJckvSjpo2l2wKxMGqWNYdFzaWP+ypK9rqxZZlXuezfajvgl\nDQP3ApcDU8BBSY9ExAtNu30feCQiQtJFwAPABfVt9wB/HhGfkbQUOCvVHpgVQNIyQ1ozdlqNcKtc\nL69y37uRpNSzBRiPiAkASXuBbcCp4I+I15v2PxuI+r7vAi4Bfru+3wwwk0bDzYqi0zLD/NJGN7Xp\nViPcmy7dWJlpoPNVaQpsGpIE/0rgSNPrKeAj83eSdA3wFeA9wKfqb68HjgH/RdKHgDHgdyPijRbH\n7wB2AKxZs6aDLpj1Vy91+25r0wuNcKsyDbSVKve9U6ld3I2IfcA+SZcAdwKX1T9/E3BLRDwt6R7g\nNuDftjh+N7AbarN60mqXWdZ6KTN08qXR/Czb6TdnCvVYQyuXJMF/FFjd9HpV/b2WIuJJSRskLad2\ndjAVEU/XNz9ILfjNgHJPwWtue7dlhqRfGo0zg1+emCOAIeHZK9a1JMF/EDhf0npqgb8duKF5B0kb\ngVfqF3c3AWcCx+uvj0h6f0QcBj5J07UBq7YyT8Fr1fabLt3Y8eckrU03zgwap8K+09V60Tb4I2JW\n0s3AfmAYuC8iDknaWd++C7gWuFHSCeAt4Lo4fWfYLcC36jN6JoDPZ9APK6Ey366fZtuT1KYbZwYz\nJ+aYozbiz3r2SpnPxmxxiWr8EfEY8Ni893Y1/XwXcNcCxz4LJLqbzKqlzFPw8m5785lBo8afZSCX\n+WzM2vOdu9Y3ZZ6CNz+IGzcMZdmHPGetlPlszNpz8FtflXkKXqPdgzgyLvPZmLXn4DfrQRFGxlnU\n4st8NmbtOfjNepDFyLiTIM+yFl/mszFbnIPfrAeb1y7j9qs/yJ89/ypXXnherqtMjk1Oc/dfvNz3\nMw4rHwe/VUJWUxPHJqe549FDzMzOcfDHP+f9/+CdPX1+0tJRqxu6XIu3pBz8NvCyLIekXeNPWjpq\nvqFrCPjYxuXcetn7PNq3RBz8VipprWSZVkCmXeNPelF1/u916FsnHPxWGmmvZJmGLGa/JLmo6lk3\n1gsHv5VGtyP3NB9+0uoz+jX7pfn3enkF64SD30qjl5F7r+Fc5CUMitw2KyYHv5VGP8sbRbhRayFF\nbpsVk4PfSqVfZZWiLmEwNjnNT//mLZYMiZNzUai2WXE5+M0SKOLF1OYSz5LhIa7bspprN60qRNus\n2Bz8Vjr9upBZtCUMmks8J0/OsfLd7yhU+6y4HPxWKgtdyBybnOahZ6YQ8OmKjHqLWn6y4nPwWy7S\nGqU3j3JnZue4+y9e5soLz+MPHnmemZO1h7796dgU939x8Ge2FLH8ZOXg4LfMpTnd8NQjCOvh/9T/\n+Rk/fOU4s3Nxap8qzWwpWvnJymGo3w2wwddqumG3GqPcj21cjoAA5iIY1ul9hobEsrOWLvo5Y5PT\n3PvEOGOT0123JcvPM8uSR/yWuU5r0e3KQpvXLuPWy97HwR///NRn3n71B3ni8Gv84KXXmJsL7nj0\n0IIrZaZ9w5NvoLKycfBb5jqpRScN0VafOf3mDN9/8f8SLF7uSfuGJ99AZWXj4LdcJK1FdxKi8z8z\n6ZlF2rNhPLvGyiZR8Eu6ArgHGAb2RMRX523fBtwJzAGzwK0R8VTT9mFgFDgaEVen1HYbQL2ux5Pk\nzCLt2TCeXWNlo4hYfIdaaL8MXA5MAQeB6yPihaZ9zgHeiIiQdBHwQERc0LT9S8AI8PeTBP/IyEiM\njo520x8bAF5p0qxzksYiYiTJvklG/FuA8YiYqH/4XmAbcCr4I+L1pv3PpjbZotGYVcCngH8PfClJ\no6zaPEXRLFtJpnOuBI40vZ6qv/c2kq6R9BLwXeALTZvuBv4NtTKQDaiyTmcsa7vNepHaxd2I2Afs\nk3QJtXr/ZZKuBl6LiDFJH1/seEk7gB0Aa9asSatZloM0pzPmWeZpfmD58JC4Y9uF3PAR/9uzwZdk\nxH8UWN30elX9vZYi4klgg6TlwMeAfybpx8Be4BOS/mSB43ZHxEhEjKxYsSJp+60A0rpBqxHEX3v8\nMJ/dcyDzUfiBieP88kTtgeWzc8Ht33neI3+rhCTBfxA4X9J6SUuB7cAjzTtI2ihJ9Z83AWcCxyPi\n9yJiVUSsqx/3g4j4rVR7YH3XmIkzLH5lJk4npZQ07/BNYuuGcxkeOn3L71xE5r/TrAjalnoiYlbS\nzcB+atM574uIQ5J21rfvAq4FbpR0AngLuC7aTRey0lnsmbOtpjN2WgLKez785rXLuGPbhdz+neeZ\ni2Cp5+BbRbSdztkPns5ZPN3U8e99YpyvPX6YuYBhwZf+6fu56dKNbX9P3lM5PX3UBkHa0znNulqW\noJsRfN5TOR36VkUOfkuk2xAv8h2tXlzNqsrBb4ksFuKLjZqLfDOWF1ezqnLwW2KtQrzMo2YvrmZV\n5eAvmLLVnMs8au5XKapsf2MbPA7+Ainj6Lnso+ZeS1GdhngZ/8Y2eBz8BVLG0XPRL+BmqZsQL+Pf\n2AaPg79A8hg9Z1FmKPIF3CzlNcXVLG0O/gLJevTsMkO6BnGKq1WDg79gshw9u8yQrm5DvKpnSFYc\nDv4KWXbWUoYkiEg8Qk1SGqryLBWHuJWRg78ixianuePRQ8xFMDQkbr/6g20DK0lpqJ/loyp/4Zj1\nwsFfEc1lHhFMvznT0TELlYb6VT7y9Qqz7iVZj98GwGJr5vdyTDef29DLYw/zXrvfbJB4xF8R3VyI\nTHJMtxc4ex2xe1qkWfcc/BXSzYXIJMd087m9log8LdKsew5+S1XSC65pjNg9o8asOw5+S00n5RuP\n2M36x8FfAmWZtthp+cYjdrP+cPAXXJmmLfqCq1k5OPgLrkzLLLh8Y1YODv6CK9so2uUbs+JLFPyS\nrgDuAYaBPRHx1XnbtwF3AnPALHBrRDwlaTXwTeC9QAC7I+KeFNs/8FqNoscmp3nomSkEfHrTKget\nmXVEEbH4DtIw8DJwOTAFHASuj4gXmvY5B3gjIkLSRcADEXGBpPOA8yLiGUnvBMaA32w+tpWRkZEY\nHR3tqWODoNVF3bHJaa7f/UNmTtb+bkuXDHH/F4tb9zezfEgai4iRJPsmGfFvAcYjYqL+4XuBbcCp\n8I6I15v2P5va6J6IeBV4tf7z30l6EVjZfKy1ttBF3QMTxzlx8vSXddHr/mZWPEnW6lkJHGl6PVV/\n720kXSPpJeC7wBdabF8H/BrwdDcNrZqF1qLZuuFczhjWqf3KUPc3s2JJ7eJuROwD9km6hFq9/7LG\ntnop6CFqtf9ftDpe0g5gB8CaNWvSalZpLXRRd/PaZdy/46Ou8ZtZ15LU+D8K/EFE/Eb99e8BRMRX\nFjlmAtgSET+TdAbwKLA/Iv4wSaNc468py41bZtZ/adf4DwLnS1oPHAW2AzfM+4UbgVfqF3c3AWcC\nxyUJ+GPgxaShb6eVfWqkv7jMiqlt8EfErKSbgf3UpnPeFxGHJO2sb98FXAvcKOkE8BZwXf
1L4GLg\nc8CPJD1b/8jfj4jHsuiMFUeZ7jg2q5pENf56UD82771dTT/fBdzV4rinAM1/3xY3CCPlMt1xbFY1\nA3Xn7iAE5qCMlMt2x7FZlQxM8A9KYA7KSNnr9pgV18AEf16BmfVZxSCNlMt+cdpsUA1M8OcRmHmc\nVXikbGZZG5jgzyMw8zqr8EjZzLI0MMEP2QfmIJVhzKy6Bir4s5bWWcUgzD4ys/Jy8Heo17OKQZl9\nZGbllWR1TkvRQqtumpnlxcGfsrHJae59YpyxyemW2xvXCYb1q0sqtzvWzCwNLvWkKEkZZ6HrBC4B\nmVleHPwpSjrds9V1gkG5Y9fMis+lnhQtVsbJ8lgzs060fRBLP5T5QSy9TNX0NE8z61baD2KxDvQy\n3dN37JpZHlzqyZln7phZv1V+xJ9necUzd8ysCCod/HkHcZozd3w9wMy6Vengz3sKZVqLvPnMwcx6\nUengz3u1zbQWefOcfzPrRaWDvx8PPUlj5o6XhzazXngef0m5xm9mzTyPvwI859/MupVoHr+kKyQd\nljQu6bYW27dJek7Ss5JGJV2c9FgzM8tX2+CXNAzcC1wJfAC4XtIH5u32feBDEfFPgC8Aezo41szM\ncpRkxL8FGI+IiYiYAfYC25p3iIjX4/TFgrOBSHqsmZnlK0nwrwSONL2eqr/3NpKukfQS8F1qo/7E\nx9aP31EvE40eO3YsSdvNzKwLqa3VExH7IuIC4DeBO7s4fndEjETEyIoVK9JqlpmZzZMk+I8Cq5te\nr6q/11JEPAlskLS802PNzCx7SYL/IHC+pPWSlgLbgUead5C0UZLqP28CzgSOJznWzMzy1XYef0TM\nSroZ2A8MA/dFxCFJO+vbdwHXAjdKOgG8BVxXv9jb8tiM+pIb3zxlZmXmO3c75AXSzKyIOrlz1w9i\n6VCrBdLMzMrEwV+X9MlYfii6mZWd1+qhs/JNP1b0NDNLk4Ofzte39wJpZlZmLvXg8o2ZVcvAjfgX\nmmq52BTM+eUbgHufGGfZWUuZfnPGJR0zGygDFfwL1eqT1PAb5ZvGvr88MUcAQ8LTNs1soAxUqWeh\nqZadTMFs7Nu4u8HTNs1s0AxU8C9Uq++kht/Yt/F/zJDr/mY2YAbuzt1uavwLfUYeNX4v/2Bmaejk\nzt2BC/4y8fIPZpYWL9lQEl7+wcz6wcHfR75/wMz6YaCmc5aNl38ws35w8PeZl38ws7y51GNmVjEO\nfjOzinHwm5lVjIPfzKxiHPxmZhXj4Dczq5hCLtkg6Rgw2cWhy4GfpdyconOfq6GKfYZq9rvbPq+N\niBVJdixk8HdL0mjStSoGhftcDVXsM1Sz33n02aUeM7OKcfCbmVXMoAX/7n43oA/c52qoYp+hmv3O\nvM8DVeM3M7P2Bm3Eb2ZmbZQu+CVdIemwpHFJt7XYLkn/sb79OUmb+tHOtCXo92fr/f2RpL+U9KF+\ntDNN7frctN+HJc1K+kye7ctCkj5L+rikZyUdkvQ/8m5j2hL8236XpP8m6a/qff58P9qZJkn3SXpN\n0vMLbM82xyKiNP8Bw8ArwAZgKfBXwAfm7XMV8GeAgK3A0/1ud079/nVgWf3nK8ve7yR9btrvB8Bj\nwGf63e4c/s7vBl4A1tRfv6ff7c6hz78P3FX/eQXwc2Bpv9veY78vATYBzy+wPdMcK9uIfwswHhET\nETED7AW2zdtnG/DNqDkAvFvSeXk3NGVt+x0RfxkR0/WXB4BVObcxbUn+1gC3AA8Br+XZuIwk6fMN\nwMMR8ROAiCh7v5P0OYB3ShJwDrXgn823memKiCep9WMhmeZY2YJ/JXCk6fVU/b1O9ymbTvv0L6iN\nFsqsbZ8lrQSuAb6eY7uylOTv/D5gmaT/LmlM0o25tS4bSfr8R8A/Bn4K/Aj43YiYy6d5fZNpjvkJ\nXANG0qXUgv/ifrclB3cDX46IudpgsBKWAJuBTwLvAH4o6UBEvNzfZmXqN4BngU8A/wj4nqT/GRG/\n6G+zyqtswX8UWN30elX9vU73KZtEfZJ0EbAHuDIijufUtqwk6fMIsLce+suBqyTNRsS382li6pL0\neQo4HhFvAG9IehL4EFDW4E/S588DX41a8Xtc0l8DFwD/K58m9kWmOVa2Us9B4HxJ6yUtBbYDj8zb\n5xHgxvpV8a3A30bEq3k3NGVt+y1pDfAw8LkBGf217XNErI+IdRGxDngQ+JclDn1I9u/7O8DFkpZI\nOgv4CPBizu1MU5I+/4TaGQ6S3gu8H5jItZX5yzTHSjXij4hZSTcD+6nNBrgvIg5J2lnfvova7I6r\ngHHgTWqjhVJL2O/bgXOB/1QfAc9GiRe3StjngZKkzxHxoqQ/B54D5oA9EdFySmAZJPw73wl8Q9KP\nqM1y+XJElHrFTkn3Ax8HlkuaAv4dcAbkk2O+c9fMrGLKVuoxM7MeOfjNzCrGwW9mVjEOfjOzinHw\nm5lVjIPfzKxiHPxmZhXj4Dczq5j/Dy/mEQUFqmzUAAAAAElFTkSuQmCC\n",
350 | "text/plain": [
351 | ""
352 | ]
353 | },
354 | "metadata": {},
355 | "output_type": "display_data"
356 | }
357 | ],
358 | "source": [
359 | "# 1.2 Create input data using NumPy. y = x * 0.1 + 0.3 + noise\n",
360 | "x_data = np.random.rand(100).astype(np.float32)\n",
361 | "noise = np.random.normal(scale=0.01, size=len(x_data))\n",
362 | "y_data = x_data * 0.1 + 0.3 + noise\n",
363 | "\n",
364 | "# Uncomment the following line to plot our input data.\n",
365 | "pylab.plot(x_data, y_data, '.')"
366 | ]
367 | },
368 | {
369 | "cell_type": "code",
370 | "execution_count": 7,
371 | "metadata": {
372 | "collapsed": true
373 | },
374 | "outputs": [],
375 | "source": [
376 | "# 1.3 Buld inference graph.\n",
377 | "# Create Variables W and b that compute y_data = W * x_data + b\n",
378 | "W = tf.Variable(tf.random_uniform([1], 0.0, 1.0))\n",
379 | "b = tf.Variable(tf.zeros([1]))\n",
380 | "y = W * x_data + b"
381 | ]
382 | },
383 | {
384 | "cell_type": "code",
385 | "execution_count": 8,
386 | "metadata": {
387 | "collapsed": true,
388 | "scrolled": true
389 | },
390 | "outputs": [],
391 | "source": [
392 | "# 1.4 Build training graph.\n",
393 | "loss = tf.reduce_mean(tf.square(y - y_data)) # Create an operation that calculates loss.\n",
394 | "optimizer = tf.train.GradientDescentOptimizer(0.5) # Create an optimizer.\n",
395 | "train = optimizer.minimize(loss) # Create an operation that minimizes loss.\n",
396 | "init = tf.global_variables_initializer() # Create an operation initializes all the variables."
397 | ]
398 | },
399 | {
400 | "cell_type": "code",
401 | "execution_count": 9,
402 | "metadata": {
403 | "collapsed": true
404 | },
405 | "outputs": [],
406 | "source": [
407 | "# 1.5 Uncomment the following line to see what we have built.\n",
408 | "#print(tf.get_default_graph().as_graph_def())"
409 | ]
410 | },
411 | {
412 | "cell_type": "code",
413 | "execution_count": 10,
414 | "metadata": {
415 | "collapsed": false
416 | },
417 | "outputs": [
418 | {
419 | "name": "stdout",
420 | "output_type": "stream",
421 | "text": [
422 | "[array([ 0.15526688], dtype=float32), array([ 0.], dtype=float32)]\n"
423 | ]
424 | }
425 | ],
426 | "source": [
427 | "# 1.6 Create a session and launch the graph.\n",
428 | "sess = tf.InteractiveSession()\n",
429 | "sess.run(init)\n",
430 | "\n",
431 | "# Uncomment the following line to see the initial W and b values.\n",
432 | "print(sess.run([W, b]))"
433 | ]
434 | },
435 | {
436 | "cell_type": "code",
437 | "execution_count": 11,
438 | "metadata": {
439 | "collapsed": false
440 | },
441 | "outputs": [
442 | {
443 | "name": "stdout",
444 | "output_type": "stream",
445 | "text": [
446 | "(0, [array([ 0.28540719], dtype=float32), array([ 0.2722975], dtype=float32)])\n",
447 | "(20, [array([ 0.15249893], dtype=float32), array([ 0.27224702], dtype=float32)])\n",
448 | "(40, [array([ 0.11932871], dtype=float32), array([ 0.28950116], dtype=float32)])\n",
449 | "(60, [array([ 0.10880832], dtype=float32), array([ 0.29497355], dtype=float32)])\n",
450 | "(80, [array([ 0.10547166], dtype=float32), array([ 0.29670918], dtype=float32)])\n",
451 | "(100, [array([ 0.10441337], dtype=float32), array([ 0.29725966], dtype=float32)])\n",
452 | "(120, [array([ 0.1040777], dtype=float32), array([ 0.29743427], dtype=float32)])\n",
453 | "(140, [array([ 0.10397124], dtype=float32), array([ 0.29748964], dtype=float32)])\n",
454 | "(160, [array([ 0.10393748], dtype=float32), array([ 0.2975072], dtype=float32)])\n",
455 | "(180, [array([ 0.10392678], dtype=float32), array([ 0.29751277], dtype=float32)])\n",
456 | "(200, [array([ 0.10392339], dtype=float32), array([ 0.29751453], dtype=float32)])\n",
457 | "[array([ 0.10392339], dtype=float32), array([ 0.29751453], dtype=float32)]\n"
458 | ]
459 | }
460 | ],
461 | "source": [
462 | "# 1.7 Perform training.\n",
463 | "for step in range(201):\n",
464 | " sess.run(train)\n",
465 | " # Uncomment the following two lines to watch training happen real time.\n",
466 | " if step % 20 == 0:\n",
467 | " print(step, sess.run([W, b]))\n",
468 | "\n",
469 | "print(sess.run([W, b]))"
470 | ]
471 | },
472 | {
473 | "cell_type": "code",
474 | "execution_count": 12,
475 | "metadata": {
476 | "collapsed": false
477 | },
478 | "outputs": [
479 | {
480 | "data": {
481 | "text/plain": [
482 | "(0, 1.0)"
483 | ]
484 | },
485 | "execution_count": 12,
486 | "metadata": {},
487 | "output_type": "execute_result"
488 | },
489 | {
490 | "data": {
491 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD8CAYAAACMwORRAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xt4VNW9//H3dyaJqFxEipcDMYGfKISrhMRQRAEVEBFR\ny09tf3qoFY+n2sPT/upTf+rx2Nqex/P04qmXaq0XbGvFU0XAS9VjAaVqhISLIiBgCBL0VIiAchEy\nM9/fHzMZJiGXCZkEsvm8nidPMrPX7L32BD6zsvZaa5u7IyIiwRI63BUQEZHMU7iLiASQwl1EJIAU\n7iIiAaRwFxEJIIW7iEgANRvuZva4mX1mZqsa2W5mdp+ZbTCz98xseOarKSIiLZFOy30WMLGJ7RcB\n/RJfNwAPtb5aIiLSGs2Gu7u/CXzeRJFLgd97XClwgpmdmqkKiohIy2VlYB+9gM0pj6sSz31av6CZ\n3UC8dc/xxx9f2L9//wwcXkTk6FFeXr7N3Xs2Vy4T4Z42d38EeARgxIgRXlZW1p6HFxHp8MxsUzrl\nMjFaZguQm/K4d+I5ERE5TDIR7vOBaxOjZkqAne5+UJeMiIi0n2a7ZczsaWAM8DUzqwL+DcgGcPeH\ngZeBScAGYA/w7baqrIiIpKfZcHf3q5vZ7sBNGauRiLSJmpoaqqqq+Oqrrw53VSQNnTp1onfv3mRn\nZx/S69v1gqqIHD5VVVV06dKF/Px8zOxwV0ea4O5UV1dTVVVFnz59DmkfWn5A5Cjx1Vdf0aNHDwV7\nB2Bm9OjRo1V/ZSncRY4iCvaOo7W/K4W7iEgAKdxFRAJI4S4i7WLHjh385je/afPjLFq0iLfffjuj\n+7zrrrv4xS9+kdF9tjWFu4g0qnzTdh5cuIHyTdtbva+Whru7E4vFWnyctgj3jkjhLiINKt+0nW89\nWsovX/uQbz1a2uqAv/XWW/noo48YNmwY3//+9zn//PMZPnw4gwcPZt68eQBUVlZy5plncu211zJo\n0CA2b97MY489xhlnnEFxcTEzZszg5ptvBmDr1q1cccUVFBUVUVRUxFtvvUVlZSUPP/ww9957L8OG\nDWPx4sUH1WPnzp3k5eUlPzh2795Nbm4uNTU1/O53v6OoqIihQ4dyxRVXsGfPnoNeP2bMGGrXxdq2\nbRv5+fkARKNRbrnlFoqKihgyZAi//e1vAfj0008599xzGTZsGIMGDWqwTm1B49xFpEGlFdXsj8SI\nOdREYpRWVFOY1/2Q93fPPfewatUqVqxYQSQSYc+ePXTt2pVt27ZRUlLClClTAFi/fj1PPvkkJSUl\nfPLJJ9x9990sW7aMLl26MG7cOIYOHQrAzJkz+f73v88555zDxx9/zIQJE1izZg033ngjnTt35oc/\n/GGD9ejWrRvDhg3jjTfeYOzYsbz44otMmDCB7OxsLr/8cmbMmAHAHXfcwWOPPcb3vve9tM7vscce\no1u3bixdupR9+/YxatQoxo8fz5w5c5gwYQK333470Wi0wQ+MtqBwF5EGlfTtQU5WiJpIjOysECV9\ne2Rs3+7ObbfdxptvvkkoFGLLli38/e9/ByAvL4+SkhIAlixZwnnnnceJJ54IwLRp01i3bh0Ar7/+\nOqtXr07u84svvmDXrl1pHf/KK6/kmWeeYezYscyePZvvfve7AKxatYo77riDHTt2sGvXLiZMmJD2\nOb322mu89957PPvss0D8L4T169dTVFTEddddR01NDVOnTmXYsGFp77M1FO4i0qDCvO48dX0JpRXV\nlPTt0apWe31PPfUUW7dupby8nOzsbPLz85MTdo4//vi09hGLxSgtLaVTp04tPv6UKVO47bbb+Pzz\nzykvL2fcuHEATJ8+nblz5zJ06FBmzZrFokWLDnptVlZWsksndZKRu3P//fc3+IHw5ptv8tJLLzF9\n+nR+8IMfcO2117a4zi2lPncRaVRhXnduGnt6RoK9S5cufPnll0C8VXvSSSeRnZ3NwoUL2bSp4SXK\ni4qKeOONN9i+fTuRSITnnnsuuW38+PHcf//9yccrVqw46DiN6dy5M0VFRcycOZPJkycTDocB+PLL\nLzn11FOpqanhqaeeavC1+fn5lJeXAyRb6QATJkzgoYceoqamBoB169axe/duNm3axMknn8yMGTO4\n/vrrWbZsWZN1yxSFu4i0ix49ejBq1CgGDRrEihUrKCsrY/Dgwfz+97+nsbuy9erVi9tuu43i4mJG\njRpFfn4+3bp1A+C+++6jrKyMIUOGUFBQwMMPPwzAJZdcwvPPP9/oBdVaV155JX/84x+58sork8/d\nfffdnH322YwaNarROv3whz/koYce4qyzzmLbtm3J56+//noKCgoYPnw4gwYN4p/+6Z+IRCIsWrSI\noUOHctZZZ/HMM88wc+bMFr93h8Liizq2P92JSaR9rVmzhgEDBhzuarTYrl276Ny5M5FIhMsuu4zr\nrruOyy677HBXq1009Dszs3J3H9Hca9VyF5Ej2l133ZUcRtinTx+mTp16uKvUIeiCqogc0VozM/Rn\nP/sZf/7zn+s8N23aNG6//fbWVuuIp3AXkcC6/fbbj4ogb4i6ZUREAkjhLiISQAp3EZEAUriLiASQ\nwl1E2sWhruc+adIkduzYkZE6dO7cOSP7gfgKloMGDcrY/jJN4S4ijdu8BBb/Mv69lRoL90gk0uTr\nXn75ZU444YRWH/9oo3AXkYZtXgJPToEFP4t/b2XAp67nXlRUxOjRo5kyZQoFBQUATJ06lcLCQgYO\nHMgjjzySfF1+fj7btm2jsrKSAQMGMGPGDAYOHMj48ePZu3cvAB999BETJ06ksLCQ0aNHs3btWgA2\nbtzIyJEjGTx4MHfccUeT9bvqqqt46aWXko+nT5/Os88+S2VlJaNHj2b48OEMHz68wRuBzJo1K7nO\nPMDkyZOTi4699tprjBw5kuHDhzNt2rTkypW33norBQUFDBkypNHliVvF3Q/LV2FhoYtI+1m9enXL\nXvDmL9zv6u7+b13j39/8RauOv3HjRh84cKC7uy9cuNCPO+44r6ioSG6vrq52d/c9e/b4wIEDfdu2\nbe7unpeX51u3bvWNGzd6OBz25cuXu7v7tGnT/A9/+IO7u48bN87XrVvn7u6lpaU+duxYd3e/5JJL\n/Mknn3R39wceeMCPP/74Rus3Z84cv/baa93dfd++fd67d2/fs2eP79692/fu3evu7uvWrfPa7Eo9\nnyeeeMJvuumm5L4uvvhiX7hwoW/dutVHjx7tu3btcnf3e+65x3/84x/7tm3b/IwzzvBYLObu7tu3\nb2+wTg39zoAyTyNjNYlJRBqWPxrCORDdH/+ePzqjuy8uLqZPnz7Jx/fddx/PP/88AJs3b2b9+vX0\n6FF3Dfk+ffok10MvLCyksrKSXbt28fbbbzNt2rRkuX379gHw1ltvJVeSvOaaa/jRj37UaH0uuugi\nZs6cyb59+3jllVc499xzOfbYY
9m5cyc333wzK1asIBwOJ9eTT0dpaSmrV69m1KhRAOzfv5+RI0fS\nrVs3OnXqxHe+8x0mT57M5MmT095nuhTuItKw3GL4x/lQuTge7LnFGd196rrtixYt4vXXX+edd97h\nuOOOY8yYMXXWSq91zDHHJH8Oh8Ps3buXWCzGCSeckFzytz4zS6s+nTp1YsyYMbz66qs888wzXHXV\nVQDce++9nHzyyaxcuZJYLNbg+vGpa7zDgXXe3Z0LL7yQp59++qDXLFmyhL/+9a88++yzPPDAAyxY\nsCCteqZLfe4i0rjcYhj9fzMS7E2ts75z5066d+/Occcdx9q1ayktLU17v127dqVPnz7JNWTcnZUr\nVwIwatQoZs+eDdDo+uyprrzySp544gkWL17MxIkTk3U79dRTCYVC/OEPfyAajR70uvz8fFasWEEs\nFmPz5s0sWRK/PlFSUsJbb73Fhg0bgPj9WtetW8euXbvYuXMnkyZN4t57703WN5MU7iLSLlLXc7/l\nllvqbJs4cSKRSIQBAwZw6623Jm+zl66nnnqKxx57jKFDhzJw4MDkDbd//etf8+CDDzJ48GC2bNnS\n7H7Gjx/PG2+8wQUXXEBOTg4A3/3ud3nyyScZOnQoa9eubfBOUaNGjaJPnz4UFBTwL//yLwwfPhyA\nnj17MmvWLK6++mqGDBnCyJEjWbt2LV9++SWTJ09myJAhnHPOOfzqV79q0fmmQ+u5ixwlOup67kcz\nrecuIiJ16IKqiBxV3n//fa655po6zx1zzDG8++67h6lGbUPhLnIUcfe0R48E1eDBgxsdWXMkaW2X\nubplRI4SnTp1orq6utWhIW3P3amurm5w2GW61HIXOUr07t2bqqoqtm7derirImno1KkTvXv3PuTX\npxXuZjYR+DUQBh5193vqbe8G/BE4LbHPX7j7E4dcKxHJuOzs7DozQiXYmu2WMbMw8CBwEVAAXG1m\nBfWK3QSsdvehwBjgl2aWk+G6iohImtLpcy8GNrh7hbvvB2YDl9Yr40AXi1+p6Qx8DjS9jqeIiLSZ\ndMK9F7A55XFV4rlUDwADgE+A94GZ7h6rVwYzu8HMysysTP1+IiJtJ1OjZSYAK4B/AIYBD5hZ1/qF\n3P0Rdx/h7iN69uyZoUOLiEh96YT7FiA35XHvxHOpvg3MSSw3vAHYCPTPTBVFRKSl0gn3pUA/M+uT\nuEh6FTC/XpmPgfMBzOxk4EygIpMVFRGR9DU7FNLdI2Z2M/Aq8aGQj7v7B2Z2Y2L7w8DdwCwzex8w\n4Efuvq0N6y0iIk1Ia5y7u78MvFzvuYdTfv4EGJ/ZqomIyKHS8gMiIgGkcBcRCSCFu4hIACncRUQC\nSOEuIhJACncRkQBSuIuIBJDCXUQkgBTuIiIBpHAXEQkghbuISAAp3EVEAkjhLiISQAp3EZEAUriL\niASQwl1EJIAU7iIiAaRwFxEJIIW7iEgAKdxFRAJI4S4iEkAKdxGRAFK4i4gEkMJdRCSAFO4iIgGk\ncBcRCSCFu4hIACncRUQCSOEuIhJACncRkQBSuIuIBJDCXUQkgBTuIiIBpHAXEQmgtMLdzCaa2Ydm\ntsHMbm2kzBgzW2FmH5jZG5mtpoiItERWcwXMLAw8CFwIVAFLzWy+u69OKXMC8Btgort/bGYntVWF\nRUSkeem03IuBDe5e4e77gdnApfXKfBOY4+4fA7j7Z5mtpoiItEQ64d4L2JzyuCrxXKozgO5mtsjM\nys3s2oZ2ZGY3mFmZmZVt3br10GosIiLNytQF1SygELgYmAD8q5mdUb+Quz/i7iPcfUTPnj0zdGgR\nEamv2T53YAuQm/K4d+K5VFVAtbvvBnab2ZvAUGBdRmopIiItkk7LfSnQz8z6mFkOcBUwv16ZecA5\nZpZlZscBZwNrMltVERFJV7Mtd3ePmNnNwKtAGHjc3T8wsxsT2x929zVm9grwHhADHnX3VW1ZcRER\naZy5+2E58IgRI7ysrOywHFtEpKMys3J3H9FcOc1QFREJIIW7iEgAKdxFRAJI4S4iEkAKdxGRAFK4\ni4gEkMJdRCSAFO4iIgGkcBcRCSCFu4hIACncRUQCSOEuIhJACncRkQBSuIuIBJDCXUQkgBTuIiIB\npHAXEQkghbuISAAp3EVEAkjhLiISQAp3EZEAUriLiASQwl1EJIAU7iIiAaRwFxEJIIW7iEgAKdxF\nRAJI4S4iEkAKdxGRAFK4i4gEkMJdRCSAFO4iIgGkcBcRaUflm7bz4MINlG/a3qbHyWrTvYuIHCXK\nN22ntKKakr49KMzr3miZbz1ayv5IjJysEE9dX9Jo2dZSuIuItFK6oV1aUc3+SIyYQ00kRmlFdZuF\ne1rdMmY20cw+NLMNZnZrE+WKzCxiZt/IXBVFRI5c5Zu285+vr2NfTd3Qrl/mwYUb6H5cDjlZIcIG\n2VkhSvr2aLN6NdtyN7Mw8CBwIVAFLDWz+e6+uoFy/wG81hYVFRFJVzpdJJk6zs8f/T2F/gG9Qp3p\nYbsoDw2kpO/XGyyz2AZy36g+dPl7Kd0LxtG/DeuWTrdMMbDB3SsAzGw2cCmwul657wHPAUUZraGI\nHBUyFcjpdJG0+libl0DlYnZ+HOWJ0M/JoYZQ2ImZYeFjCIVGEo9O2Lh8IU+Efko2EaI8R/hdI4sY\nbH4cTpkPucWHfK5NSSfcewGbUx5XAWenFjCzXsBlwFiaCHczuwG4AeC0005raV1F5DBrqxZxJi80\nllZUMzC6lrNDa1gSHcDG5Xsp/Hgj5I+G3GLWLn2ddS88RLeY8/MF53HL9de27Fibl8CTUyC6nzEY\nTpSwOe4QxiFWA5WLk6E9MryabCJkWYyQO+YADtH9dcplWqYuqP4n8CN3j5lZo4Xc/RHgEYARI0Z4\nho4tIu2gLUd6tOhCY6LVXBvW9Z+7uGYpN2TfjREjSpjs90LgUQjnwMR7OP3lH3Km1UAYpvEGLyzP\npTDv8vQrW7k4HsweJUSIWChMzGOYxYBQ/Dj5o5PFew0bT2zF/cSiNRAOYwbEogeVy7R0wn0LkJvy\nuHfiuVQjgNmJYP8aMMnMIu4+NyO1FJFGNdWazmRXx09e+ICvamLAIYz0aCiQU5T07cG3shZwIe/y\n35xdp8/6oP0kWs2Ec+Af5wMQm3VJ/LlQFvkexS2KAWEiWMxItpTXzCPsEWrboNkeZWR4NdCCcM8f\nTSyUDVEgnE3oov+AvdVwbI/49/rnmFtMaPoLB84fmnwvMiWdcF8K9DOzPsRD/Srgm6kF3L1P7c9m\nNgt4UcEuklkNBXVTremWtLSb+4C4+pF32B898Md2YXg9U3cth83jWfs/X1Cz7E90i25nZ7g7e04c\niH+6EsPoNvIa+p/S9eBArhdqhVvnMTz8KACjeR/bOhjyph9c0ZRWc223xpYdezk5sp8sixGN1uDm\n1PYfmIUhlAWxSPzYAy7FN/4NYvvjBcLZ9Bo2vmW/h1g/fr7/Ngr9A8qjA7ml56XNf8jlFh
8U+G2t\n2XB394iZ3Qy8CoSBx939AzO7MbH94Tauo8hRr7Ggbqo7o7mujvJN29m4fCGn71nB3DW76epf8vMF\nAw/qgy6tqGZQ7EMuz1rM19gJwPlZKwmXR4ks+zV9Y1GyiQKJP/E/eS752v0vvsjW/v+bnvUCuX64\n7Vz2LF0dzMA9/rjbiOkHvxH5o+MhXftBkT+ad8qruJgs8AhRQoTMCHsULETo4l+y1nuzffUCuheM\nY3fPQn5es5NL/A1CIeOsi/+Z/i0M2tKKapZETqfUTydstOlY9dZIq8/d3V8GXq73XIOh7u7TW18t\nkSNLW15IbGq/tdu37NjbYFCX9O1BTlaImkjsoHHTJX17UJy1Id7CtIOH58199KfcGXqCEDGGhpwo\nRg3ZvFSvD/r8zpVcn/NTcogcqFhtEMdqyMZJvdTmiW0AWR7lf77YR4/aboxQNqEG+pnLjz+XsSzG\n/cDjcQ29YbnF8ZZ/SrdGn1g/vl1+B4X+AUspgKhRRPycp0TH8ZMXP2B/ZBQ5G2q4fHhVnWD+wa58\n+qfzi0rR1Ht+JNEMVZFmpLaaQ2b85NJBfPPspkd7tWoqeqJ/unJvJ6r/NocL/VM+pwtTsuBE+5JK\nTqXbtkm882Q13QvGMXdKdrJlmjpuujC0nj/l/HuilTvvoOF5d4aeIItosrWcZQ4eOagPuv9XK5N9\n2LXcIepGhDCGk+3ROudWG9IRwnyaN5WfVQ090I0R60dhvfei2zkzuHPt35N97lPPmdH4exvrR2nk\nREpiPSgECvO6c8v111JaUU2/HXt5esnHLEmEd/aqT+t8KBq0OpgL87rz1PUl7TKOvjUU7tLm2mtC\nSVuZs6wqeSEx5s6d81Zx5ildmgztnz/6ey7xN1i30Dj+kn+mf9EFrF36+oEQLrqAjcsX8q/+RzwM\nc6OjKa3oR2FofeLi4D7ycPIM6qSqQT+2wKoyYhiRit+SHQ4R8ujB46YrFxOK1QCxBofnGbFksGMQ\nI0QoK+fgPuj80Vg4O/4hAThQQ5g/R8fwgp3HpEGnEl41mx6+k2rrRve+RZz45Zpkn/v6XfksiYSb\n7MYozOsO199BaUU1Uw/hA7H2q3zTdp5bVpUM74sGncrSys+Tjy8f3pvLh/du9b/H2uMdyRTu0qba\nc6Gk2uNl8oOkfNN21pf/ldlZT3Nm6GP2eQ4r/HQ2Lr+p0eFzG5cv5MnQT5LdGNGX36Ry+13kvXUX\npxOhpuJ3VG6/i6kr7yIcjgfmtPAbbOw8mC0rShMXB71O90Z9DoTN4/3YsSgNjptuoH+6Vq9h44ks\nv59IdB9OiCeik7i4uH882Ov3QecWw/SXYOWfYNdWrHNPKk6ezI5d+dySeJ/LSy5s9H3fvWl7Wq3l\ndAKzuesIDbWqzzyly0F1O9KDORMU7tKmDnWhpNoWmAGXD++d9mua/CBpZjge0GDr+qnwj+MzCgFs\nDxMoI7ZyBhT2bnA/I8Ork90dAGGvwdbMT05kwSPsf28uFqtJlskhSv+vVvJstCBxcbCGMJ7s3mhI\nxC0+jjuUMo47tT+7gf7p1G1zhzzExrJXeCc6gJWcQU3nM7kp9/SGD1ZvtEf/xFetpoI5k90Y6fR3\n169LR2hltwWFu2RMQ63mxv4ztmTo3X+VbWbaiFyuGN4bgOeWVdF16zLOin1AXuF4+hddABw8M7G0\noh/Hf1ZOzbI/kRf7mK6fLQMcwsc0OBxv7dLXyXvx6mTrei1PMzK8mnCi+yKVeaTR2YW9ho0ntvzX\neGK4nYVz8AFTqHlrGXiEGrJY1W0MeV8uI+SJ1n0om6z80fQ5LXFxMPYB1d6ZMaGV9LVP+dy7APE+\n9wo/lTd9KOflZsXP/5SujX9o1R+Cl6LPWWO5o/xYamj7C4OZCtiO0t99JFC4S4s0FspN9YXW/89Y\nvmk7V/+uNBn4T8+o28JOHXoH8IUfy8Blm5hTfjbrvDffsRe5IBwP6poXn2AtT9O/6ALO71zJddn/\nTjbxAH1vW4w+i/492T3iFu++9sg+rIFg3r56AaentK63r15A/3FTiZT/Cqt3wTBCmIpOQxseaZFb\nTOjbiW4MDIZeTX5uMWu7n5n8qyDvpEL+8dHOB4bkTYoPySuE5MXB6N4abv7bRiKxg5vvIaDXmWcy\noej05DFbqqMG5dHaEm8phfthcqReZGwqvOcsq+LPZZuJxLxOgNcueVq/1Zzs39w6j8KqedDlUsib\nzpxlVVwe+28uyl7CX6LFzFlWt9ulwaF3wLm8T4RQsovEDPAatq9eAEUXxEd1hCKYxwhblFM+eY1s\nIgcN04tgfNRAMHcvGEdNxe+SrevuBeMgt5is77zCFy/eBn9fzZ5oNiv8dB6NTWZsU8Pocovjozoq\nqpOjOvoXXQCJvzKgNsQvpqRvj7qjXFLC68KBp/DcsiqeLY9fJExc+yQnOzMtbQVlcCncD4O2vshY\nG8QOXNGC/uqmwnvO737KeHuXXuSxK3Q8O2Kd2b9oMWsLxjF3/l/4AQsYmL2JEE4NWWzqPBg4Hcpm\nwYsz4wf5aAEAhdu2cFn2YwCcG3qf57f1AAYn61J/6J2nTG5J7SJxBycUD2FIjOo4BqL7U7pCysjx\nAx8SUULcGZlO7waCuX/RBazl6Tp97gDkFrN+0p+TIRuNxv/i+H9NhGs6v+N0grW2zBWJER7dj8th\n+579R1yjQI48CvfDoLGLjJlozdd2eeyPxFu3z5Zt5ukbRja5v9qhe8NjH3C5d+bE0K46re9Nr/2G\nnyamhp8bej++pAbgG41Y5W/5SehAeJpB2OIXB+ECWDMv2dp0wNbM43yvSZZ1h/P9nboVShl6V9sh\nUXth0S2MJ2ZDxizElq/fXSeEUy8g5ucW81osl8/+NouvsZNtdGNOdDTv2Zk800gw129d174/tUGd\nFTKuKj6t2Yu8mb7jjlrY0lIK98Mg9SJjOGR8smMvf3r348RMunhL787JA1n1yc4WjRaBRH91dG2y\nv3pOdDSlFWc0+fra9aZzQjWEODBTsbb1PWjnIoCUyS7x7yE8vmZHyjYnfgGxdtRG5ckXkLdhQTKk\nK0++gPwTj8c/WRx/zqDb8Ho37koZerdqyxf8bfM+CmwTr8aKGV50Dt/IehMwwom+7INem/Lc+IlT\n+FP3Ydw8bxXRmBMOxSchtSQoU4M6GnP+4YRjm319R5nFKMHV4cK9Pe+w0lbHKQytZ3HBfDZv3sTW\nL/extbwbz5edS0HMKQmt4d3oAObMW8vUUHwtjw+XncDxU/75QAu1CfX7q2vHT0MjQ9w4sN507ZrU\nWeZ1Wt85Q6bCW0uSreco8Qt6ETdihMi2aDKorf9kGDUzGbAvZU+gKvIRE0NLeDVWTK/sCdw04vR4\nl8uaeTDgUmhoDZFESO/ftJ1fP3rg4usVZ5VAS5ZnBb559mkNjnVO16EEdUe9WCnBYd7UQNo2NGLE\nCC8rK2vRa9prQkyr7+SSMn3800+3xPtva4erH
duD2Mu3YLH9kPLWx0c1G2FiRAnFp3RzYIRGNJRN\n1nUvNz8qYvEv8b/ejSV27hh2/r/C6P/b+Gs2L0kumWrEMEKQVXe4YOVrD2Jr5pPTexixnC48vGQ7\nXf1LljKQfnzMBEtMG7/+jgZH0dQG46H8zo6Ei89HQh1EAMys3N1HNFeuQ7XcWzMhpiX/MWvHS18W\nXow58Tu5pLQWU++JeNAqesmg3EeeO70xohW/JZaYIh7D8FiEUL1p5VkeBTNCeGLySt3FmMJNjKuu\nI390vFskug8g3nfd3A0BUtebbmRN6vzxN8H4m5KPpw6Jv6enJ9byeMrHETY4JY0Zgy11JPQ3Hwl1\nEGmJDhXuaf15XG8WYmoQz13QhVOKux+YYl1btl6gnd+5kuuzD3Rt+HuL68xGTL0nYg3P11lFb8uK\n1w5MHyfexWEexWqniBPCCeEeq1NtD2URDoUgFsVCWXgsiqeM8vBQNpbOXVtyi2H6i7Dy6fjxhn4z\nvTHQTUx2aUjqWh5zUtbySGfGoIi0vQ4V7oV53Zk7JZuaZX/i5K6dOCnUg9pV7oAG79KycXlV/GIh\n8YuFXm6w8gGYeA+8citE9gExsFBy5uJBq+A1cU/E+qvovVNv+njqFHHzKISy+fH+/0N/30hP20mv\nE46j5z8t5OGQAAAIJElEQVTkctI50+PHSnwwhYCtf3uCFWvW8VmsGy/YeQ2uptegFgZ1a6hvWeTI\n1KHCnc1L6P/K1Xh0P3wKsfV/js8ETFkFz6P7MI/h0f1Y5WJGhvfWuVgYwuPb1syLlyUxMSTlNeSP\nxkMHVsHbT5iNKZNeUu+JGMqqeyeXPmeNTa4tvYMunNs7XGeKeCh/NFMTk1v69+3BwPphmBLK/3XK\nifzyvQ+JOUf0TQHUMhc58nSscK9cjEdrDkxuidawZcVr9EoE4tpOQ8mLZcW7SzzMpk5D6T+sayKI\n4xcLI27UeJj3Op3DkNjfyE5cyoxgB16TW8ycIb/lq7I/AjA3NrrubMT690RMCeTUtaXH1bsdWnIN\n6jTDUMPpRORQdaxwzx9N1LIIx+KTYGoI8060gNpR0n/dlc+Cmts429awxAfEAzn3dELTX+CdBXOZ\nv+4rurOLJT6AY7/4OnsSZT/3zpxouw68hngL/FvlxyaD9aDZiE10fdQP70Md5aMuDxE5VB0r3HOL\n2TBpNstfeIhYzOP90GeNTW4u6duD+8P9WRE5o24g5xaTM6Yfz284MCTvzkGn8pPK/iyvOYMYELL4\nHVpqX5PJYG3NbEV1eYjIoehY4U58evjukwoprahO3iigVlOB3NQi/o2t15GpYFX3ioi0tw41iakj\n0yQYEcmEQE5i6sjUvSIi7Sl0uCsgIiKZp3AXEQkghbuISAAp3EVEAkjhLiISQAp3EZEAUriLiASQ\nwl1EJIAU7iIiAaRwFxEJIIW7iEgAKdxFRAIorXA3s4lm9qGZbTCzWxvY/i0ze8/M3jezt81saOar\nKiIi6Wo23M0sDDwIXAQUAFebWUG9YhuB89x9MHA38EimKyoiIulLp+VeDGxw9wp33w/MBi5NLeDu\nb7v79sTDUqB3ZqspIiItkU649wI2pzyuSjzXmO8Af2log5ndYGZlZla2devW9GspIiItktELqmY2\nlni4/6ih7e7+iLuPcPcRPXv2zOShRUQkRTp3YtoC5KY87p14rg4zGwI8Clzk7tWZqZ6IiByKdFru\nS4F+ZtbHzHKAq4D5qQXM7DRgDnCNu6/LfDVFRKQlmm25u3vEzG4GXgXCwOPu/oGZ3ZjY/jBwJ9AD\n+I2ZAUTSuYGriIi0DXP3w3LgESNGeFlZ2WE5tohIR2Vm5ek0njVDVUQkgBTuIiIBpHAXEQkghbuI\nSAAp3EVEAkjhLiISQAp3EZEAUriLiASQwl1EJIAU7iIiAaRwFxEJIIW7iEgAKdxFRAJI4S4iEkAK\ndxGRAFK4i4gEkMJdRCSAFO4iIgGkcBcRCSCFu4hIACncRUQCSOEuIhJACncRkQBSuIuIBJDCXUQk\ngBTuIiIBpHAXEQkghbuISAAp3EVEAkjhLiISQAp3EZEAUriLiASQwl1EJIAU7iIiAaRwFxEJoLTC\n3cwmmtmHZrbBzG5tYLuZ2X2J7e+Z2fDMV1VERNLVbLibWRh4ELgIKACuNrOCesUuAvolvm4AHspw\nPUVEpAXSabkXAxvcvcLd9wOzgUvrlbkU+L3HlQInmNmpGa6riIikKSuNMr2AzSmPq4Cz0yjTC/g0\ntZCZ3UC8ZQ+wy8w+bFFt474GbDuE13V0R+N565yPDjrnlslLp1A64Z4x7v4I8Ehr9mFmZe4+IkNV\n6jCOxvPWOR8ddM5tI51umS1Absrj3onnWlpGRETaSTrhvhToZ2Z9zCwHuAqYX6/MfODaxKiZEmCn\nu39af0ciItI+mu2WcfeImd0MvAqEgcfd/QMzuzGx/WHgZWASsAHYA3y77arcum6dDuxoPG+d89FB\n59wGzN3b+hgiItLONENVRCSAFO4iIgF0xIb70bjkQRrn/K3Eub5vZm+b2dDDUc9Mau6cU8oVmVnE\nzL7RnvVrK+mct5mNMbMVZvaBmb3R3nXMtDT+fXczsxfMbGXinNvy2l27MLPHzewzM1vVyPa2yzF3\nP+K+iF+4/QjoC+QAK4GCemUmAX8BDCgB3j3c9W6Hc/460D3x80VHwzmnlFtA/ML9Nw53vdvpd30C\nsBo4LfH4pMNd73Y459uA/0j83BP4HMg53HVv5XmfCwwHVjWyvc1y7EhtuR+NSx40e87u/ra7b088\nLCU+n6AjS+f3DPA94Dngs/asXBtK57y/Ccxx948B3L2jn3s65+xAFzMzoDPxcI+0bzUzy93fJH4e\njWmzHDtSw72x5QxaWqYjaen5fIf4J35H1uw5m1kv4DKCtRhdOr/rM4DuZrbIzMrN7Np2q13bSOec\nHwAGAJ8A7wMz3T3WPtU7bNosx9p1+QHJDDMbSzzczzncdWkH/wn8yN1j8QbdUSMLKATOB44F3jGz\nUndfd3ir1aYmACuAccD/Av7bzBa7+xeHt1od05Ea7kfjkgdpnY+ZDQEeBS5y9+p2qltbSeecRwCz\nE8H+NWCSmUXcfW77VLFNpHPeVUC1u+8GdpvZm8BQoKOGezrn/G3gHo93Rm8ws41Af2BJ+1TxsGiz\nHDtSu2WOxiUPmj1nMzsNmANcE5AWXLPn7O593D3f3fOBZ4HvdvBgh/T+fc8DzjGzLDM7jvhKrGva\nuZ6ZlM45f0z8LxXM7GTgTKCiXWvZ/tosx47IlrsfeUsetLk0z/lOoAfwm0RLNuIdeDW9NM85cNI5\nb3dfY2avAO8BMeBRd29wOF1HkObv+m5glpm9T3z0yI/cvUMvBWxmTwNjgK+ZWRXwb0A2tH2OafkB\nEZEAOlK7ZUREpBUU7iIiAaRwFxEJIIW7iEgAKdxFRAJI4S4iEkAKdxGRAPr/y1xhB5RZOf4AAAAA\nSUVO
RK5CYII=\n",
492 | "text/plain": [
493 | ""
494 | ]
495 | },
496 | "metadata": {},
497 | "output_type": "display_data"
498 | }
499 | ],
500 | "source": [
501 | "# 1.8 Uncomment the following lines to compare.\n",
502 | "pylab.plot(x_data, y_data, '.', label=\"target_values\")\n",
503 | "pylab.plot(x_data, sess.run(y), \".\", label=\"trained_values\")\n",
504 | "pylab.legend()\n",
505 | "pylab.ylim(0, 1.0)\n",
506 | "\n",
507 | "#sess.close()"
508 | ]
509 | },
510 | {
511 | "cell_type": "markdown",
512 | "metadata": {
513 | "collapsed": true
514 | },
515 | "source": [
516 | "Continue on to the next exercise: [5_TF_CNN.ipynb](5_TF_CNN.ipynb).
"
517 | ]
518 | },
519 | {
520 | "cell_type": "code",
521 | "execution_count": null,
522 | "metadata": {
523 | "collapsed": true
524 | },
525 | "outputs": [],
526 | "source": []
527 | }
528 | ],
529 | "metadata": {
530 | "anaconda-cloud": {},
531 | "kernelspec": {
532 | "display_name": "Python [default]",
533 | "language": "python",
534 | "name": "python2"
535 | },
536 | "language_info": {
537 | "codemirror_mode": {
538 | "name": "ipython",
539 | "version": 2
540 | },
541 | "file_extension": ".py",
542 | "mimetype": "text/x-python",
543 | "name": "python",
544 | "nbconvert_exporter": "python",
545 | "pygments_lexer": "ipython2",
546 | "version": "2.7.12"
547 | }
548 | },
549 | "nbformat": 4,
550 | "nbformat_minor": 1
551 | }
552 |
--------------------------------------------------------------------------------
/4_Tensorflow/Tensorflow_princetonPy.pptx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/Tensorflow_princetonPy.pptx
--------------------------------------------------------------------------------
/4_Tensorflow/assets/TF_api.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/assets/TF_api.png
--------------------------------------------------------------------------------
/4_Tensorflow/assets/flow_graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/assets/flow_graph.png
--------------------------------------------------------------------------------
/4_Tensorflow/assets/mnist1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/assets/mnist1.png
--------------------------------------------------------------------------------
/4_Tensorflow/assets/mnist2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/assets/mnist2.png
--------------------------------------------------------------------------------
/4_Tensorflow/assets/pooling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/assets/pooling.png
--------------------------------------------------------------------------------
/4_Tensorflow/assets/tf_architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/4_Tensorflow/assets/tf_architecture.png
--------------------------------------------------------------------------------
/5_NumbaPyCUDADemo/assets/cuda_indexing.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/5_NumbaPyCUDADemo/assets/cuda_indexing.png
--------------------------------------------------------------------------------
/5_NumbaPyCUDADemo/numba.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "## What is Numba?\n",
8 | "\n",
9 | "Numba is a just-in-time, type-specializing, function compiler for accelerating numerically-focused Python. That's a long list, so let's break down those terms:\n",
10 | "\n",
11 | " 1. function compiler: Numba compiles Python functions, not entire applications, and not parts of functions. Numba does not replace your Python interpreter, but is just another Python module that can turn a function into a (usually) faster function.\n",
12 | " 1. type-specializing: Numba speeds up your function by generating a specialized implementation for the specific data types you are using. Python functions are designed to operate on generic data types, which makes them very flexible, but also very slow. In practice, you only will call a function with a small number of argument types, so Numba will generate a fast implementation for each set of types.\n",
13 | " 1. just-in-time: Numba translates functions when they are first called. This ensures the compiler knows what argument types you will be using. This also allows Numba to be used interactively in a Jupyter notebook just as easily as a traditional application\n",
14 | " 1. numerically-focused: Currently, Numba is focused on numerical data types, like int, float, and complex. There is very limited string processing support, and many string use cases are not going to work well on the GPU. To get best results with Numba, you will likely be using NumPy arrays."
15 | ]
16 | },
17 | {
18 | "cell_type": "code",
19 | "execution_count": null,
20 | "metadata": {
21 | "collapsed": true,
22 | "slideshow": {
23 | "slide_type": "slide"
24 | }
25 | },
26 | "outputs": [],
27 | "source": [
28 | "import numba"
29 | ]
30 | },
31 | {
32 | "cell_type": "code",
33 | "execution_count": null,
34 | "metadata": {
35 | "collapsed": true,
36 | "slideshow": {
37 | "slide_type": "fragment"
38 | }
39 | },
40 | "outputs": [],
41 | "source": [
42 | "def pyfunc(x):\n",
43 | " return 3.0*x**2 + 2.0*x + 1.0\n",
44 | "\n",
45 | "pyfunc(3.14)"
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": null,
51 | "metadata": {
52 | "collapsed": true,
53 | "slideshow": {
54 | "slide_type": "fragment"
55 | }
56 | },
57 | "outputs": [],
58 | "source": [
59 | "# a compiled function that has no Python objects in the body,\n",
60 | "# but it can be used in Python because it interprets Python on the way in\n",
61 | "@numba.jit(nopython=True)\n",
62 | "def fastfunc(x):\n",
63 | " return 3.0*x**2 + 2.0*x + 1.0\n",
64 | "\n",
65 | "fastfunc(3.14)"
66 | ]
67 | },
68 | {
69 | "cell_type": "code",
70 | "execution_count": null,
71 | "metadata": {
72 | "collapsed": true,
73 | "slideshow": {
74 | "slide_type": "slide"
75 | }
76 | },
77 | "outputs": [],
78 | "source": [
79 | "# have to provide a signature: \"double (double, void*)\"\n",
80 | "sig = numba.types.double(numba.types.double,\n",
81 | " numba.types.CPointer(numba.types.void))\n",
82 | "# a pure C function that doesn't interpret Python arguments\n",
83 | "@numba.cfunc(sig, nopython=True)\n",
84 | "def cfunc(x, params):\n",
85 | " return 3.0*x**2 + 2.0*x + 1.0\n",
86 | "\n",
87 | "cfunc(3.14) # should raise an error"
88 | ]
89 | },
90 | {
91 | "cell_type": "code",
92 | "execution_count": null,
93 | "metadata": {
94 | "collapsed": true,
95 | "slideshow": {
96 | "slide_type": "slide"
97 | }
98 | },
99 | "outputs": [],
100 | "source": [
101 | "# we get a function pointer instead\n",
102 | "cfunc.address"
103 | ]
104 | },
105 | {
106 | "cell_type": "code",
107 | "execution_count": null,
108 | "metadata": {
109 | "collapsed": true,
110 | "slideshow": {
111 | "slide_type": "fragment"
112 | }
113 | },
114 | "outputs": [],
115 | "source": [
116 | "# just to verify that this pointer works, get ctypes to use it\n",
117 | "import ctypes\n",
118 | "\n",
119 | "func_type = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double, ctypes.POINTER(None))\n",
120 | "\n",
121 | "func_type(cfunc.address)(3.14, None)"
122 | ]
123 | }
124 | ],
125 | "metadata": {
126 | "anaconda-cloud": {},
127 | "celltoolbar": "Slideshow",
128 | "kernelspec": {
129 | "display_name": "Python [default]",
130 | "language": "python",
131 | "name": "python2"
132 | },
133 | "language_info": {
134 | "codemirror_mode": {
135 | "name": "ipython",
136 | "version": 2
137 | },
138 | "file_extension": ".py",
139 | "mimetype": "text/x-python",
140 | "name": "python",
141 | "nbconvert_exporter": "python",
142 | "pygments_lexer": "ipython2",
143 | "version": "2.7.12"
144 | }
145 | },
146 | "nbformat": 4,
147 | "nbformat_minor": 1
148 | }
149 |
--------------------------------------------------------------------------------
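A short aside on the numba.ipynb notebook above (not part of the original repository): the markdown cell describes type specialization and just-in-time compilation, and a minimal standalone sketch can make the compilation warm-up visible. It assumes `numba` and `numpy` are installed (numba is not listed in `requirements.txt`, so it may need an extra `conda install numba`); the function name `poly` is a made-up example.

```python
# A minimal sketch (not from the workshop notebook) of Numba's JIT warm-up:
# the first call compiles a specialization for float64 arrays, later calls reuse it.
import time

import numpy as np
import numba


@numba.jit(nopython=True)
def poly(x):                       # hypothetical example function
    return 3.0 * x**2 + 2.0 * x + 1.0


x = np.linspace(0.0, 1.0, 1000000)

t0 = time.time()
poly(x)                            # first call: includes compilation time
t1 = time.time()
poly(x)                            # second call: runs the cached machine code
t2 = time.time()

print("first call (with compilation): %.4f s" % (t1 - t0))
print("second call (compiled only):   %.4f s" % (t2 - t1))
```

Calling `poly` again with a different argument type (for example a plain Python float) would trigger a second compilation, which is the per-type specialization the notebook describes.

--------------------------------------------------------------------------------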
/5_NumbaPyCUDADemo/pycuda.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "slideshow": {
7 | "slide_type": "slide"
8 | }
9 | },
10 | "source": [
11 | "[PyCUDA](https://mathema.tician.de/software/pycuda/) is a Python wrapper around CUDA, NVidia's extension of C/C++ for GPUs.\n",
12 | "\n",
13 | "There's also a [PyOpenCL](https://mathema.tician.de/software/pyopencl/) for the vendor-independent OpenCL standard.\n",
14 | "\n",
15 | "\n",
16 | "Following shows the CUDA parallel thread hierarchy. CUDA executes kernels using a grid of blocks of threads. This figure shows the common indexing pattern used in CUDA programs using the CUDA keywords `gridDim.x` (the number of thread blocks), `blockDim.x` (the number of threads in each block), `blockIdx.x` (the index the current block within the grid), and `threadIdx.`x (the index of the current thread within the block).\n",
17 | "\n",
18 |     "![CUDA thread/block indexing](assets/cuda_indexing.png)"
19 | ]
20 | },
21 | {
22 | "cell_type": "markdown",
23 | "metadata": {},
24 | "source": [
25 | "Before you can use PyCuda, you have to import and initialize it:"
26 | ]
27 | },
28 | {
29 | "cell_type": "code",
30 | "execution_count": 1,
31 | "metadata": {
32 | "collapsed": true,
33 | "slideshow": {
34 | "slide_type": "-"
35 | }
36 | },
37 | "outputs": [],
38 | "source": [
39 | "import math\n",
40 | "import numpy\n",
41 | "\n",
42 | "import pycuda.autoinit\n",
43 | "import pycuda.driver as cuda\n",
44 | "from pycuda.compiler import SourceModule"
45 | ]
46 | },
47 | {
48 | "cell_type": "markdown",
49 | "metadata": {},
50 | "source": [
51 | "Note that you do not have to use `pycuda.autoinit` - initialization, context creation, and cleanup can also be performed manually, if desired\n",
52 | "\n",
53 | "For this tutorial, we'll stick to something simple: we will write code to double each entry in array. To this end, we write the corresponding CUDA C code, and feed it into the constructor of a `pycuda.compiler.SourceModule`:"
54 | ]
55 | },
56 | {
57 | "cell_type": "code",
58 | "execution_count": 2,
59 | "metadata": {
60 | "slideshow": {
61 | "slide_type": "slide"
62 | }
63 | },
64 | "outputs": [],
65 | "source": [
66 | "# generate a CUDA (C-ish) function that will run on the GPU; PROBLEM_SIZE is hard-wired\n",
67 | "module = SourceModule(\"\"\"\n",
68 | " __global__ void doublify(float *a)\n",
69 | " {\n",
70 | " int idx = threadIdx.x + threadIdx.y*4;\n",
71 | " a[idx] *= 2;\n",
72 | " }\n",
73 | " \"\"\")\n",
74 | "\n",
75 | "# pull \"doublify\" out as a Python callable\n",
76 | "doublify = module.get_function(\"doublify\")"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {},
82 | "source": [
83 | "In the following, we will create some data as numpy arrays on the CPU.\n",
84 | "\n",
85 | "The next step in this and most other programs is to transfer data onto the device. In PyCuda, you will mostly transfer data from numpy arrays on the host. "
86 | ]
87 | },
88 | {
89 | "cell_type": "code",
90 | "execution_count": 3,
91 | "metadata": {
92 | "slideshow": {
93 | "slide_type": "slide"
94 | }
95 | },
96 | "outputs": [],
97 | "source": [
98 | "# create Numpy arrays on the CPU\n",
99 | "a = numpy.random.randn(4,4).astype(numpy.float32)\n",
100 | "\n",
101 | "#next, we need somewhere to transfer data to, so we need to allocate memory on the device:\n",
102 | "a_gpu = cuda.mem_alloc(a.nbytes)\n",
103 | "\n",
104 | "#as a last step, we need to transfer the data to the GPU:\n",
105 | "\n",
106 | "cuda.memcpy_htod(a_gpu, a)\n",
107 | "\n",
108 | "\n",
109 | "# define block/grid size for our problem: 4x4 block size\n",
110 | "blockdim = (4,4,1)\n",
111 | "#griddim = (int(math.ceil(PROBLEM_SIZE / 512.0)), 1, 1)\n",
112 | "\n",
113 | "# copy the \"driver.In\" arrays to the GPU, run the \n",
114 | "#just_multiply(driver.Out(dest), driver.In(a), driver.In(b), block=blockdim, grid=griddim)\n",
115 | "doublify(a_gpu, block=blockdim)"
116 | ]
117 | },
118 | {
119 | "cell_type": "code",
120 | "execution_count": 4,
121 | "metadata": {},
122 | "outputs": [
123 | {
124 | "name": "stdout",
125 | "output_type": "stream",
126 | "text": [
127 | "[[-1.23081899 -1.60914695 -0.21349314 -0.58991599]\n",
128 | " [-1.53882289 3.44664907 1.44528294 -3.1512928 ]\n",
129 | " [-2.77189636 0.24643208 -1.41279054 0.12612192]\n",
130 | " [ 0.43340847 -3.97955871 -2.67981982 0.26493114]]\n",
131 | "[[-0.61540949 -0.80457348 -0.10674657 -0.294958 ]\n",
132 | " [-0.76941144 1.72332454 0.72264147 -1.5756464 ]\n",
133 | " [-1.38594818 0.12321604 -0.70639527 0.06306096]\n",
134 | " [ 0.21670423 -1.98977935 -1.33990991 0.13246557]]\n"
135 | ]
136 | }
137 | ],
138 | "source": [
139 | "#Finally, we fetch the data back from the GPU and display it, together with the original a:\n",
140 | "\n",
141 | "a_doubled = numpy.empty_like(a)\n",
142 | "cuda.memcpy_dtoh(a_doubled, a_gpu)\n",
143 | "print (a_doubled)\n",
144 | "print (a)\n",
145 | "#doublify(cuda.InOut(a), block=blockdim)"
146 | ]
147 | },
148 | {
149 | "cell_type": "markdown",
150 | "metadata": {
151 | "collapsed": true,
152 | "slideshow": {
153 | "slide_type": "slide"
154 | }
155 | },
156 | "source": [
157 | "Now let's do that calculation of $\\pi$."
158 | ]
159 | },
160 | {
161 | "cell_type": "code",
162 | "execution_count": 6,
163 | "metadata": {
164 | "slideshow": {
165 | "slide_type": "-"
166 | }
167 | },
168 | "outputs": [
169 | {
170 | "data": {
171 | "text/plain": [
172 | "3.141594"
173 | ]
174 | },
175 | "execution_count": 6,
176 | "metadata": {},
177 | "output_type": "execute_result"
178 | }
179 | ],
180 | "source": [
181 | "PROBLEM_SIZE = int(1e6)\n",
182 | "\n",
183 | "module2 = SourceModule(\"\"\"\n",
184 | "__global__ void mapper(float *dest)\n",
185 | "{\n",
186 | " const int id = threadIdx.x + blockDim.x*blockIdx.x;\n",
187 | " const double x = 1.0 * id / %d; // x goes from 0.0 to 1.0 in PROBLEM_SIZE steps\n",
188 | " if (id < %d)\n",
189 | " dest[id] = 4.0 / (1.0 + x*x);\n",
190 | "}\n",
191 | "\"\"\" % (PROBLEM_SIZE, PROBLEM_SIZE))\n",
192 | "\n",
193 | "mapper = module2.get_function(\"mapper\")\n",
194 | "dest = numpy.empty(PROBLEM_SIZE, dtype=numpy.float32)\n",
195 | "blockdim = (512, 1, 1)\n",
196 | "griddim = (int(math.ceil(PROBLEM_SIZE / 512.0)), 1, 1)\n",
197 | "\n",
198 | "mapper(cuda.Out(dest), block=blockdim, grid=griddim)\n",
199 | "\n",
200 | "dest.sum() * (1.0 / PROBLEM_SIZE) # correct for bin size"
201 | ]
202 | },
203 | {
204 | "cell_type": "markdown",
205 | "metadata": {
206 | "slideshow": {
207 | "slide_type": "slide"
208 | }
209 | },
210 | "source": [
211 | "We're doing the mapper (problem of size 1 million) on the GPU and the final sum (problem of size 1 million) on the CPU.\n",
212 | "\n",
213 | "However, we want to do all the big data work on the GPU.\n",
214 | "\n",
215 | "On the next slide is an algorithm that merges array elements with their neighbors in $\\log_2(\\mbox{million}) = 20$ steps."
216 | ]
217 | },
218 | {
219 | "cell_type": "code",
220 | "execution_count": 7,
221 | "metadata": {
222 | "slideshow": {
223 | "slide_type": "slide"
224 | }
225 | },
226 | "outputs": [
227 | {
228 | "data": {
229 | "text/plain": [
230 | "3.1415934999999999"
231 | ]
232 | },
233 | "execution_count": 7,
234 | "metadata": {},
235 | "output_type": "execute_result"
236 | }
237 | ],
238 | "source": [
239 | "module3 = SourceModule(\"\"\"\n",
240 | "__global__ void reducer(float *dest, int i)\n",
241 | "{\n",
242 | " const int PROBLEM_SIZE = %d;\n",
243 | " const int id = threadIdx.x + blockDim.x*blockIdx.x;\n",
244 | " if (id %% (2*i) == 0 && id + i < PROBLEM_SIZE) {\n",
245 | " dest[id] += dest[id + i];\n",
246 | " }\n",
247 | "}\n",
248 | "\"\"\" % PROBLEM_SIZE)\n",
249 | "\n",
250 | "blockdim = (512, 1, 1)\n",
251 | "griddim = (int(math.ceil(PROBLEM_SIZE / 512.0)), 1, 1)\n",
252 | "\n",
253 | "reducer = module3.get_function(\"reducer\")\n",
254 | "\n",
255 | "# Python for loop over the 20 steps to reduce the array\n",
256 | "i = 1\n",
257 | "while i < PROBLEM_SIZE:\n",
258 | " reducer(cuda.InOut(dest), numpy.int32(i), block=blockdim, grid=griddim)\n",
259 | " i *= 2\n",
260 | "\n",
261 | "# final result is in the first element\n",
262 | "dest[0] * (1.0 / PROBLEM_SIZE)"
263 | ]
264 | },
265 | {
266 | "cell_type": "markdown",
267 | "metadata": {
268 | "slideshow": {
269 | "slide_type": "slide"
270 | }
271 | },
272 | "source": [
273 | "The only problem now is that we're copying this `dest` array back and forth between the CPU and GPU. Let's fix that:"
274 | ]
275 | },
276 | {
277 | "cell_type": "code",
278 | "execution_count": 9,
279 | "metadata": {
280 | "slideshow": {
281 | "slide_type": "-"
282 | }
283 | },
284 | "outputs": [
285 | {
286 | "name": "stdout",
287 | "output_type": "stream",
288 | "text": [
289 | "3.1415935\n"
290 | ]
291 | }
292 | ],
293 | "source": [
294 | "# allocate the array directly on the GPU, no CPU involved\n",
295 | "dest_gpu = cuda.mem_alloc(PROBLEM_SIZE * numpy.dtype(numpy.float32).itemsize)\n",
296 | "\n",
297 | "# do it again without \"driver.InOut\", which copies Numpy (CPU) to and from the GPU\n",
298 | "mapper(dest_gpu, block=blockdim, grid=griddim)\n",
299 | "i = 1\n",
300 | "while i < PROBLEM_SIZE:\n",
301 | " reducer(dest_gpu, numpy.int32(i), block=blockdim, grid=griddim)\n",
302 | " i *= 2\n",
303 | "\n",
304 | "# we only need the first element, so create a Numpy array with exactly one element\n",
305 | "only_one_element = numpy.empty(1, dtype=numpy.float32)\n",
306 | "\n",
307 | "# copy just that one element\n",
308 | "cuda.memcpy_dtoh(only_one_element, dest_gpu)\n",
309 | "\n",
310 | "print (only_one_element[0] * (1.0 / PROBLEM_SIZE))"
311 | ]
312 | },
313 | {
314 | "cell_type": "code",
315 | "execution_count": null,
316 | "metadata": {
317 | "collapsed": true
318 | },
319 | "outputs": [],
320 | "source": []
321 | }
322 | ],
323 | "metadata": {
324 | "anaconda-cloud": {},
325 | "celltoolbar": "Slideshow",
326 | "kernelspec": {
327 | "display_name": "Python 3",
328 | "language": "python",
329 | "name": "python3"
330 | },
331 | "language_info": {
332 | "codemirror_mode": {
333 | "name": "ipython",
334 | "version": 3
335 | },
336 | "file_extension": ".py",
337 | "mimetype": "text/x-python",
338 | "name": "python",
339 | "nbconvert_exporter": "python",
340 | "pygments_lexer": "ipython3",
341 | "version": "3.6.2"
342 | }
343 | },
344 | "nbformat": 4,
345 | "nbformat_minor": 2
346 | }
347 |
--------------------------------------------------------------------------------
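A side note on the pycuda.ipynb notebook above (not part of the original repository): the notebook mentions that `pycuda.autoinit` is optional and that initialization, context creation, and cleanup can be performed manually. The sketch below is one hedged way to do that with the `pycuda.driver` API; it assumes a CUDA-capable GPU at device index 0.

```python
# Manual alternative to "import pycuda.autoinit" (a sketch, assuming a CUDA-capable GPU).
import pycuda.driver as cuda

cuda.init()                  # initialize the CUDA driver API
dev = cuda.Device(0)         # pick the first GPU
ctx = dev.make_context()     # create a context and make it current
try:
    print(dev.name(), dev.compute_capability())
    # ... allocate memory, build SourceModule kernels, launch them, etc. ...
finally:
    ctx.pop()                # deactivate the context
    ctx.detach()             # and release its resources
```

With `pycuda.autoinit` all of this, plus cleanup at interpreter exit, happens automatically, which is why the notebook uses it.

--------------------------------------------------------------------------------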
/README.md:
--------------------------------------------------------------------------------
1 | # Getting started
2 |
3 | To be able to follow the workshop exercises, you are going to need a laptop with Anaconda and several Python packages installed. The following instructions are geared toward Mac and Ubuntu Linux users.
4 |
5 | ## Download and install Anaconda
6 |
7 | Please go to the following website: https://www.continuum.io/downloads
8 | and download and install *the latest* Anaconda version for Python 2.7 (or Python 3) for your operating system.
9 |
10 | Note: we are going to need Anaconda 4.1.x or later (the current latest is 5.0.0)
11 |
12 | After that, type:
13 |
14 | ```bash
15 | conda --help
16 | ```
17 | and read the manual.
18 |
19 |
20 | ## Check-out the git repository with the exercise
21 |
22 | Once Anaconda is ready, check out the course repository
23 | and proceed with setting up the environment:
24 | ```bash
25 | git clone https://github.com/ASvyatkovskiy/PythonWorkshop
26 | ```
27 |
28 | If you do not have git and do not wish to install it, just download the repository as a zip archive and unpack it:
29 |
30 | ```bash
31 | wget https://github.com/ASvyatkovskiy/PythonWorkshop/archive/master.zip
32 | #For Mac users:
33 | #curl -O https://github.com/ASvyatkovskiy/PythonWorkshop/archive/master.zip
34 | unzip master.zip
35 | ```
36 |
37 | ## Create isolated Anaconda environment
38 |
39 | Change into the course folder, then type:
40 |
41 | ```bash
42 | #cd PythonWorkshop
43 | conda create --name PythonWorkshop --file requirements.txt
44 | source activate PythonWorkshop
45 | ```
46 |
47 | ### Installing Tensorflow
48 |
49 | All of the third-party Python packages should be installed by conda. Some packages might cause installation errors depending on your OS.
50 | If this happens, install protobuf and TensorFlow from a binary wheel as shown below. For this workshop, we will use the CPU-only version of TensorFlow (feel free to use the GPU version if your laptop has a GPU).
51 |
52 | Mac users:
53 |
54 | ```bash
55 | #source activate PythonWorkshop
56 | pip install --user --upgrade protobuf
57 | export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py2-none-any.whl
58 | pip install --user --upgrade $TF_BINARY_URL
59 | pip install --user --upgrade Pillow
60 | ```
61 |
62 | Ubuntu linux users:
63 |
64 | ```bash
65 | sudo apt-get install python-pip python-dev python-matplotlib
66 | export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0-cp27-none-linux_x86_64.whl
67 | pip install --user --upgrade $TF_BINARY_URL
68 | pip install --user --upgrade Pillow
69 | ```
70 |
71 | To test that the installation was successful, launch the Jupyter notebook:
72 |
73 | ```bash
74 | jupyter notebook
75 | ```
76 | then create a new notebook, selecting the Python kernel for your Anaconda environment from the upper-right dropdown menu, and type:
77 |
78 | ```python
79 | In [1]: import tensorflow as tf
80 | tf.__version__
81 |
82 | Out[1]: 1.0.0
83 | ```
84 |
85 | ## Start the interactive notebook
86 |
87 | Change to the repository folder, switch to the `Spring2017` local branch, and start the interactive Jupyter (IPython) notebook:
88 | ```bash
89 | cd PythonWorkshop
90 | jupyter notebook
91 | ```
92 |
93 | After the notebook is opened, navigate to the workshop folder and open `1.PythonBasics.ipynb` from the browser window.
94 |
--------------------------------------------------------------------------------
/python_slides.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASvyatkovskiy/PythonWorkshop/19fbd7d5574db2652d07db4a3b54979cdb5f6b62/python_slides.pdf
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | jupyter
2 | numpy
3 | matplotlib
4 | joblib
5 | cython
6 | pympler
7 | tensorflow
8 | keras
9 |
--------------------------------------------------------------------------------
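A small addendum (not part of the original repository): after creating the `PythonWorkshop` conda environment described in the README, one quick way to confirm that the packages listed in `requirements.txt` are importable is a short check script. The package list below is copied from `requirements.txt`; the import names are assumed to match the package names, which holds for these particular packages.

```python
# Quick environment check (a sketch, not part of the workshop material): try to import
# each package from requirements.txt and report its version where one is exposed.
import importlib

packages = ["jupyter", "numpy", "matplotlib", "joblib", "cython", "pympler",
            "tensorflow", "keras"]

for name in packages:
    try:
        mod = importlib.import_module(name)
        print("%-12s OK      %s" % (name, getattr(mod, "__version__", "")))
    except ImportError as err:
        print("%-12s MISSING (%s)" % (name, err))
```

If TensorFlow turns out to be missing, the pip-based instructions in the README's "Installing Tensorflow" section are the fallback.

--------------------------------------------------------------------------------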