├── .gitignore ├── README.md ├── week01 ├── 01.5.2 Implementing a copy routine - Answer.ipynb ├── 01.5.2 Implementing a copy routine.ipynb ├── 01.5.3 Implementing a routine that scales a vector - Answer.ipynb ├── 01.5.3 Implementing a routine that scales a vector.ipynb ├── 01.5.4 Implementing an axpy routine - Answer.ipynb ├── 01.5.4 Implementing an axpy routine.ipynb ├── 01.5.5 Implementing a dot routine - Answer.ipynb ├── 01.5.5 Implementing a dot routine.ipynb ├── 01.5.6 Implementing a routine to compute vector length - Answer.ipynb ├── 01.5.6 Implementing a routine to compute vector length.ipynb ├── 01.6.3 Programming without indices (dot product) - Answer.ipynb ├── 01.6.3 Programming without indices (dot product).ipynb ├── 01.6.6 Programming without indices (axpy) - Answer.ipynb └── 01.6.6 Programming without indices (axpy).ipynb ├── week02 ├── 02.4.2.10 Practice with matrix-vector multiplication.ipynb └── generate_problems.py ├── week03 ├── 03.1.1 Timmy!.ipynb ├── 03.2.1 Set to zero - Answer.ipynb ├── 03.2.1 Set to zero.ipynb ├── 03.2.2 Set to identity - Answer.ipynb ├── 03.2.2 Set to identity.ipynb ├── 03.2.3 Diagonal Matrices - Answer.ipynb ├── 03.2.3 Diagonal Matrices.ipynb ├── 03.2.4 Triangularize - Answer.ipynb ├── 03.2.4 Triangularize.ipynb ├── 03.2.5 Transpose - Answer.ipynb ├── 03.2.5 Transpose.ipynb ├── 03.2.6 Symmetrize - Answer.ipynb ├── 03.2.6 Symmetrize.ipynb ├── 03.3.1 Scale a Matrix - Answer.ipynb ├── 03.3.1 Scale a Matrix.ipynb ├── 03.4.1 Matrix vector multiply via dot products - Answer.ipynb ├── 03.4.1 Matrix vector multiply via dot products.ipynb ├── 03.4.2 Matrix vector multiply via axpys - Answer.ipynb ├── 03.4.2 Matrix vector multiply via axpys.ipynb └── timmy.py ├── week04 ├── 04.1.1 Predicting the Weather.ipynb ├── 04.2.3 Alternative Matrix-Vector Multiplication Routines - Answer.ipynb ├── 04.2.3 Alternative Matrix-Vector Multiplication Routines.ipynb ├── 04.3.1 Matrix vector multiply with transpose matrix - Answer.ipynb ├── 04.3.1 
Matrix vector multiply with transpose matrix.ipynb ├── 04.3.2.1 Upper Triangular Matrix Vector Multiply Routines - Answer.ipynb ├── 04.3.2.1 Upper Triangular Matrix Vector Multiply Routines.ipynb ├── 04.3.2.3 Lower Triangular Matrix Vector Multiply Routines - Answer.ipynb ├── 04.3.2.3 Lower Triangular Matrix Vector Multiply Routines.ipynb ├── 04.3.2.5 Upper Triangular Matrix Vector Multiply Routines (overwriting x) - Answer.ipynb ├── 04.3.2.5 Upper Triangular Matrix Vector Multiply Routines (overwriting x).ipynb ├── 04.3.2.7 Lower Triangular Matrix Vector Multiply Routines (overwriting x) - Answer.ipynb ├── 04.3.2.7 Lower Triangular Matrix Vector Multiply Routines (overwriting x).ipynb ├── 04.3.2.8 Transpose Lower Triangular Matrix Vector Multiply Routines - Answer.ipynb ├── 04.3.2.8 Transpose Lower Triangular Matrix Vector Multiply Routines.ipynb ├── 04.3.2.8 Transpose Upper Triangular Matrix Vector Multiply Routines - Answer.ipynb ├── 04.3.2.8 Transpose Upper Triangular Matrix Vector Multiply Routines.ipynb ├── 04.3.2.8 Triangular Upper Triangular Matrix Vector Multiply Routines - Answer.ipynb ├── 04.3.2.8 Triangular Upper Triangular Matrix Vector Multiply Routines.ipynb ├── 04.3.2.9 Transpose Lower Triangular Matrix Vector Multiply Routines (overwriting x) - Answer.ipynb ├── 04.3.2.9 Transpose Lower Triangular Matrix Vector Multiply Routines (overwriting x).ipynb ├── 04.3.2.9 Transpose Upper Triangular Matrix Vector Multiply Routines (overwriting x) - Answer.ipynb ├── 04.3.2.9 Transpose Upper Triangular Matrix Vector Multiply Routines (overwriting x).ipynb ├── 04.3.3.1 Symmetric Matrix Vector Multiply Routines (stored in upper triangle) - Answer.ipynb ├── 04.3.3.1 Symmetric Matrix Vector Multiply Routines (stored in upper triangle).ipynb ├── 04.3.3.3 Symmetric Matrix Vector Multiply Routines (stored in lower triangle) - Answer.ipynb ├── 04.3.3.3 Symmetric Matrix Vector Multiply Routines (stored in lower triangle).ipynb ├── 04.3.3.4 Symmetric Matrix Vector 
Multiply Routines Challenge Question.ipynb └── 04.4.4.11 Practice with matrix-matrix multiplication.ipynb ├── week05 ├── 05.3.1 Lots of loops - Answer.ipynb ├── 05.3.1 Lots of loops.ipynb ├── 05.3.2 Matrix-matrix multiplication by columns - Answer.ipynb ├── 05.3.2 Matrix-matrix multiplication by columns.ipynb ├── 05.3.3 Matrix-matrix multiplication by rows - Answer.ipynb ├── 05.3.3 Matrix-matrix multiplication by rows.ipynb ├── 05.3.4 Matrix-matrix multiplication via rank-1 updates - Answer.ipynb ├── 05.3.4 Matrix-matrix multiplication via rank-1 updates.ipynb ├── 05.5.1 Multiplying upper triangular matrices - Answer.ipynb └── 05.5.1 Multiplying upper triangular matrices.ipynb ├── week06 ├── 06.2.5 Gaussian Elimination - Answer.ipynb ├── 06.2.5 Gaussian Elimination.ipynb ├── 06.3 Solving A x b via LU factorization and triangular solves - Answer.ipynb └── 06.3 Solving A x b via LU factorization and triangular solves.ipynb ├── week08 ├── 08.2.2 Gauss-Jordan with Appended System - Answer.ipynb ├── 08.2.2 Gauss-Jordan with Appended System.ipynb ├── 08.2.3 Gauss-Jordan with Appended System and Multiple Right-Hand Sides - Answer.ipynb ├── 08.2.3 Gauss-Jordan with Appended System and Multiple Right-Hand Sides.ipynb ├── 08.2.4 Gauss-Jordan with Appended System to Invert a Matrix - Answer.ipynb ├── 08.2.4 Gauss-Jordan with Appended System to Invert a Matrix.ipynb ├── 08.2.5 Alternative Gauss Jordon Algorithm - Answer.ipynb ├── 08.2.5 Alternative Gauss Jordon Algorithm.ipynb ├── 08.4.0 - (Optional Enrichment) Blocked LU Factorization - Answer.ipynb └── 08.4.0 - (Optional Enrichment) Blocked LU Factorization.ipynb ├── week11 ├── 11.2.5 Rank-k Approximation - Answer.ipynb ├── 11.2.5 Rank-k Approximation.ipynb ├── 11.3.7 Implementing the QR Factorization - Answer.ipynb ├── 11.3.7 Implementing the QR Factorization.ipynb ├── building.png └── im_approx.py └── week12 ├── 12.4.2 The Power Method.ipynb ├── 12.5.1 The Inverse Power Method.ipynb ├── 12.5.2 Shifting the Inverse Power 
Method.ipynb └── 12.5.3 The Rayleigh Quotient Iteration.ipynb /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | LAFF 2 | ==== 3 | ###Linear Algebra - Foundations to Frontiers 4 | Learn the theory of linear algebra hand-in-hand with the practice of software library development. 5 | 6 | [Click here to preview all notebooks online.](http://nbviewer.ipython.org/github/ULAFF/notebooks/tree/master) 7 | 8 | Attributions 9 |     [ᔥ FLAME](http://www.cs.utexas.edu/~flame) 10 |     [ᔥ Spark](http://www.cs.utexas.edu/users/flame/Spark) 11 |     [↬ Probabilistic Programming and Bayesian Methods for Hackers](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Prologue/Prologue.ipynb) 12 | -------------------------------------------------------------------------------- /week01/01.5.6 Implementing a routine to compute vector length - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 2, 13 | "metadata": {}, 14 | "source": [ 15 | "Implementing a routine to compute vector length" 16 | ] 17 | }, 18 | { 19 | "cell_type": "heading", 20 | "level": 3, 21 | "metadata": {}, 22 | "source": [ 23 | "Preliminaries" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "

Again, copy this notebook so that you don't corrupt the original! Then you can \"play\" with the copy of the notebook all you want!

\n", 31 | "\n", 32 | "

\n", 33 | "\n", 34 | "NOTE: A common problem that students have with IPython notebooks is not understanding that when the code in the gray boxes (cells) is executed, it assigns variables that persist the whole time that the notebook is open. Further, some cells rely on variables assigned by earlier cells. If you execute these cells out of order, or if you execute the same cell twice, then you may end up changing the value of the variables. To correct this, click on \"Cell\" at the top and execute \"run all above\" or \"run all\". You can also reset all cells by clicking \"Cell -> All Output -> Clear\"\n", 35 | "\n", 36 | "

\n", 37 | "\n", 38 | "

In this notebook, we show how to write a routine to compute vector length.

" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "Let's start by importing numpy and creating a vector $ x = \\left( \\begin{array}{r} 1 \\\\ 2 \\\\ 3 \\end{array} \\right) $.\n", 46 | "\n", 47 | "Execute the code in the box by clicking in the box and then on \"Cell -> Run\". Alternative, click on the box and push \"Shift\" and \"Return\" (or \"Enter\") together." 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "collapsed": false, 53 | "input": [ 54 | "import numpy as np # This imports a package called \"numpy\" that will make working with matrices \n", 55 | " # simpler\n", 56 | "\n", 57 | "# create a two-dimensional matrix with only one column. \n", 58 | "x = np.matrix( '1.;2.;3.' )\n", 59 | "print( 'x = ' )\n", 60 | "print( x )" 61 | ], 62 | "language": "python", 63 | "metadata": {}, 64 | "outputs": [] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "Now, recall that the length of a vector equals $ \\| x \\|_2 = \\sqrt{ x^T x } $. So, we can use the dot product routine that we wrote before to compute the length:" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "collapsed": false, 76 | "input": [ 77 | "import laff\n", 78 | "import math\n", 79 | "\n", 80 | "length_x = math.sqrt( laff.dot( x, x ) )\n", 81 | "\n", 82 | "print( 'length_x:' )\n", 83 | "print( length_x )" 84 | ], 85 | "language": "python", 86 | "metadata": {}, 87 | "outputs": [] 88 | }, 89 | { 90 | "cell_type": "heading", 91 | "level": 3, 92 | "metadata": {}, 93 | "source": [ 94 | "Length as a simple routine" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "

\n", 102 | "As before, it is preferable to create a routine that computes the length.\n", 103 | "

\n", 104 | "\n", 105 | "

\n", 106 | "Complete the following routine to implement this:\n", 107 | "

" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "collapsed": false, 113 | "input": [ 114 | "def length( x ):\n", 115 | " return math.sqrt( laff.dot( x, x ) )\n", 116 | " " 117 | ], 118 | "language": "python", 119 | "metadata": {}, 120 | "outputs": [] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "Be sure the run the above box, or this notebook won't know about the routine!!!" 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "Now, if you execute" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "collapsed": false, 139 | "input": [ 140 | "length_x = length( x )\n", 141 | "\n", 142 | "print( 'length_x:' )\n", 143 | "print( length_x )\n", 144 | "\n", 145 | "print( 'difference between length_x and math.sqrt( laff.dot( x, x ) ):' )\n", 146 | "print( length_x - math.sqrt( laff.dot( x, x ) ) )" 147 | ], 148 | "language": "python", 149 | "metadata": {}, 150 | "outputs": [] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "The result should be:\n", 157 | "\n", 158 | "\n", 159 | "length_x:\n", 160 | "3.7416573867739413\n", 161 | "difference between length_x and math.sqrt( laff.dot( x, x ) ):\n", 162 | "0.0\n", 163 | "" 164 | ] 165 | }, 166 | { 167 | "cell_type": "heading", 168 | "level": 3, 169 | "metadata": {}, 170 | "source": [ 171 | "A complete length function as part of the LAFF library" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "metadata": {}, 177 | "source": [ 178 | "As we proceed with progressively more advanced operations and routines, we are going to need a general dot routine where $ x $ and $ y $ can be row and/or column vectors. \n", 179 | "\n", 180 | "This routine is part of the 'laff' library. 
If you do\n", 181 | "\n", 182 | "\n", 183 | "import laff\n", 184 | "\n", 185 | "\n", 186 | "then laff.norm2( x ) will perform the desired computation of the length of $ x $, when x is a column and/or a row vectors. If you really want to see what this routine looks like, then ask for it on the discussion forum and we'll point you to where it can be found." 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "collapsed": false, 192 | "input": [ 193 | "import laff\n", 194 | "\n", 195 | "length_x = laff.norm2( x )\n", 196 | "\n", 197 | "print( 'length_x:' )\n", 198 | "print( length_x )\n", 199 | "\n", 200 | "print( 'difference between length_x and math.sqrt( laff.dot( x, x ) ):' )\n", 201 | "print( length_x - math.sqrt( laff.dot( x, x ) ) )" 202 | ], 203 | "language": "python", 204 | "metadata": {}, 205 | "outputs": [] 206 | }, 207 | { 208 | "cell_type": "heading", 209 | "level": 3, 210 | "metadata": {}, 211 | "source": [ 212 | "Need a challenge?" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "In \"1.7.3 Overflow and Underflow\", we discuss how computing the length with the dot product can inadvertently cause overflow or underflow. Write a routine that avoids that" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "collapsed": false, 225 | "input": [ 226 | "def length( x ):\n", 227 | " ### You fill in the rest!" 
228 | ], 229 | "language": "python", 230 | "metadata": {}, 231 | "outputs": [] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "collapsed": false, 236 | "input": [], 237 | "language": "python", 238 | "metadata": {}, 239 | "outputs": [] 240 | } 241 | ], 242 | "metadata": {} 243 | } 244 | ] 245 | } -------------------------------------------------------------------------------- /week01/01.5.6 Implementing a routine to compute vector length.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 2, 13 | "metadata": {}, 14 | "source": [ 15 | "Implementing a routine to compute vector length" 16 | ] 17 | }, 18 | { 19 | "cell_type": "heading", 20 | "level": 3, 21 | "metadata": {}, 22 | "source": [ 23 | "Preliminaries" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "

Again, copy this notebook so that you don't corrupt the original! Then you can \"play\" with the copy of the notebook all you want!

\n", 31 | "\n", 32 | "

\n", 33 | "\n", 34 | "NOTE: A common problem that students have with IPython notebooks is not understanding that when the code in the gray boxes (cells) is executed, it assigns variables that persist the whole time that the notebook is open. Further, some cells rely on variables assigned by earlier cells. If you execute these cells out of order, or if you execute the same cell twice, then you may end up changing the value of the variables. To correct this, click on \"Cell\" at the top and execute \"run all above\" or \"run all\". You can also reset all cells by clicking \"Cell -> All Output -> Clear\"\n", 35 | "\n", 36 | "

\n", 37 | "\n", 38 | "

In this notebook, we show how to write a routine to compute vector length.

" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "Let's start by importing numpy and creating a vector $ x = \\left( \\begin{array}{r} 1 \\\\ 2 \\\\ 3 \\end{array} \\right) $.\n", 46 | "\n", 47 | "Execute the code in the box by clicking in the box and then on \"Cell -> Run\". Alternative, click on the box and push \"Shift\" and \"Return\" (or \"Enter\") together." 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "collapsed": false, 53 | "input": [ 54 | "import numpy as np # This imports a package called \"numpy\" that will make working with matrices \n", 55 | " # simpler\n", 56 | "\n", 57 | "# create a two-dimensional matrix with only one column. \n", 58 | "x = np.matrix( '1.;2.;3.' )\n", 59 | "print( 'x = ' )\n", 60 | "print( x )" 61 | ], 62 | "language": "python", 63 | "metadata": {}, 64 | "outputs": [] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "Now, recall that the length of a vector equals $ \\| x \\|_2 = \\sqrt{ x^T x } $. So, we can use the dot product routine that we wrote before to compute the length:" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "collapsed": false, 76 | "input": [ 77 | "import laff\n", 78 | "import math\n", 79 | "\n", 80 | "length_x = math.sqrt( laff.dot( x, x ) )\n", 81 | "\n", 82 | "print( 'length_x:' )\n", 83 | "print( length_x )" 84 | ], 85 | "language": "python", 86 | "metadata": {}, 87 | "outputs": [] 88 | }, 89 | { 90 | "cell_type": "heading", 91 | "level": 3, 92 | "metadata": {}, 93 | "source": [ 94 | "Length as a simple routine" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "

\n", 102 | "As before, it is preferable to create a routine that computes the length.\n", 103 | "

\n", 104 | "\n", 105 | "

\n", 106 | "Complete the following routine to implement this:\n", 107 | "

" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "collapsed": false, 113 | "input": [ 114 | "def length( x ):\n", 115 | " #Your code here\n", 116 | " " 117 | ], 118 | "language": "python", 119 | "metadata": {}, 120 | "outputs": [] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "Be sure the run the above box, or this notebook won't know about the routine!!!" 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "Now, if you execute" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "collapsed": false, 139 | "input": [ 140 | "length_x = length( x )\n", 141 | "\n", 142 | "print( 'length_x:' )\n", 143 | "print( length_x )\n", 144 | "\n", 145 | "print( 'difference between length_x and math.sqrt( laff.dot( x, x ) ):' )\n", 146 | "print( length_x - math.sqrt( laff.dot( x, x ) ) )" 147 | ], 148 | "language": "python", 149 | "metadata": {}, 150 | "outputs": [] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "The result should be:\n", 157 | "\n", 158 | "\n", 159 | "length_x:\n", 160 | "3.7416573867739413\n", 161 | "difference between length_x and math.sqrt( laff.dot( x, x ) ):\n", 162 | "0.0\n", 163 | "" 164 | ] 165 | }, 166 | { 167 | "cell_type": "heading", 168 | "level": 3, 169 | "metadata": {}, 170 | "source": [ 171 | "A complete length function as part of the LAFF library" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "metadata": {}, 177 | "source": [ 178 | "As we proceed with progressively more advanced operations and routines, we are going to need a general dot routine where $ x $ and $ y $ can be row and/or column vectors. \n", 179 | "\n", 180 | "This routine is part of the 'laff' library. 
If you do\n", 181 | "\n", 182 | "\n", 183 | "import laff\n", 184 | "\n", 185 | "\n", 186 | "then laff.norm2( x ) will perform the desired computation of the length of $ x $, when x is a column and/or a row vectors. If you really want to see what this routine looks like, then ask for it on the discussion forum and we'll point you to where it can be found." 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "collapsed": false, 192 | "input": [ 193 | "import laff\n", 194 | "\n", 195 | "length_x = laff.norm2( x )\n", 196 | "\n", 197 | "print( 'length_x:' )\n", 198 | "print( length_x )\n", 199 | "\n", 200 | "print( 'difference between length_x and math.sqrt( laff.dot( x, x ) ):' )\n", 201 | "print( length_x - math.sqrt( laff.dot( x, x ) ) )" 202 | ], 203 | "language": "python", 204 | "metadata": {}, 205 | "outputs": [] 206 | }, 207 | { 208 | "cell_type": "heading", 209 | "level": 3, 210 | "metadata": {}, 211 | "source": [ 212 | "Need a challenge?" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "In \"1.7.3 Overflow and Underflow\", we discuss how computing the length with the dot product can inadvertently cause overflow or underflow. Write a routine that avoids that" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "collapsed": false, 225 | "input": [ 226 | "def length( x ):\n", 227 | " ### You fill in the rest!" 
228 | ], 229 | "language": "python", 230 | "metadata": {}, 231 | "outputs": [] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "collapsed": false, 236 | "input": [], 237 | "language": "python", 238 | "metadata": {}, 239 | "outputs": [] 240 | } 241 | ], 242 | "metadata": {} 243 | } 244 | ] 245 | } -------------------------------------------------------------------------------- /week01/01.6.3 Programming without indices (dot product).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 2, 13 | "metadata": {}, 14 | "source": [ 15 | "Programming without indices (dot product)" 16 | ] 17 | }, 18 | { 19 | "cell_type": "heading", 20 | "level": 3, 21 | "metadata": {}, 22 | "source": [ 23 | "Preliminaries" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "

Again, copy this notebook so that you don't corrupt the original! Then you can \"play\" with the copy of the notebook all you want!

\n", 31 | "\n", 32 | "

In this notebook, we show how the FLAME notation (the notation in which vectors and/or matrices are partitioned into regions) can be leveraged to implement linear algebra operations without using indices (which are the root of all evil in programming...).

" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Let's start by importing numpy and creating vectors $ x = \\left( \\begin{array}{r} 1 \\\\ 2 \\\\ 3 \\end{array} \\right) $ and $ y = \\left( \\begin{array}{r} -1 \\\\ 0 \\\\ -2 \\end{array} \\right) $.\n", 40 | "\n", 41 | "Execute the code in the box by clicking in the box and then on \"Cell -> Run\". Alternative, click on the box and push \"Shift\" and \"Return\" (or \"Enter\") together." 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "collapsed": false, 47 | "input": [ 48 | "import numpy as np # This imports a package called \"numpy\" that will make working with matrices \n", 49 | " # simpler\n", 50 | "\n", 51 | "# create two two-dimensional matrices of only one column each. \n", 52 | "# In the future we will also think of (column) \n", 53 | "# vectors as matrices with only one column.\n", 54 | "x = np.matrix( '1.;2.;3.' )\n", 55 | "print( 'x = ' )\n", 56 | "print( x )\n", 57 | "\n", 58 | "y = np.matrix( '-1.;0.;-2.' )\n", 59 | "print( 'y = ' )\n", 60 | "print( y )" 61 | ], 62 | "language": "python", 63 | "metadata": {}, 64 | "outputs": [] 65 | }, 66 | { 67 | "cell_type": "heading", 68 | "level": 3, 69 | "metadata": {}, 70 | "source": [ 71 | "Dot as a simple routine" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "

\n", 79 | "Here is a simple routine for computing $ {\\rm dot}( x, y ) = x^T y $:\n", 80 | "

" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "collapsed": false, 86 | "input": [ 87 | "def dot( x, y ):\n", 88 | "\n", 89 | " # Check how many elements there are in vector x. For this, \n", 90 | " # np.shape( x ) return the row and column size of x, where x is a matrix.\n", 91 | " \n", 92 | " m, n = np.shape( x )\n", 93 | " alpha = 0.0\n", 94 | " \n", 95 | " for i in range( m ):\n", 96 | " alpha = x[ i,0 ] * y[ i,0 ] + alpha\n", 97 | " \n", 98 | " return alpha" 99 | ], 100 | "language": "python", 101 | "metadata": {}, 102 | "outputs": [] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "metadata": {}, 107 | "source": [ 108 | "Be sure the run the above box, or this notebook won't know about the routine!!!" 109 | ] 110 | }, 111 | { 112 | "cell_type": "markdown", 113 | "metadata": {}, 114 | "source": [ 115 | "Now, execute" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "collapsed": false, 121 | "input": [ 122 | "alpha = 0.\n", 123 | "\n", 124 | "alpha = dot( x, y )\n", 125 | "\n", 126 | "print( 'alpha' )\n", 127 | "print( alpha )\n", 128 | "\n", 129 | "print( 'Difference between alpha and np.transpose(x) * y:' )\n", 130 | "alpha_reference = np.transpose(x) * y\n", 131 | "\n", 132 | "print( alpha - alpha_reference[0,0] )" 133 | ], 134 | "language": "python", 135 | "metadata": {}, 136 | "outputs": [] 137 | }, 138 | { 139 | "cell_type": "heading", 140 | "level": 3, 141 | "metadata": {}, 142 | "source": [ 143 | "An implementation with the FLAMEPy Application Programming Interface (API)" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "We now show how to implement this same routine using the FLAMEPy API.\n", 151 | "\n", 152 | "Start by visiting the Spark webpage. Follow along with the video and paste the resulting code below. Then follow along with the video and add the appropriate commands. 
(You may even want to bookmark this page).\n", 153 | "\n", 154 | "Here is the algorithm as presented in Unit 1.6.2. \n", 155 | "\"some_text\"\n", 156 | "\n", 157 | "In the video for Unit 1.6.3, we discuss how to translate this into Python code using the FLAMEPy API. Follow these instructions, insert the resulting code below." 158 | ] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "collapsed": false, 163 | "input": [ 164 | "# Insert Code here\n", 165 | "\n", 166 | "\n", 167 | "\n", 168 | "\n", 169 | "\n" 170 | ], 171 | "language": "python", 172 | "metadata": {}, 173 | "outputs": [] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "collapsed": false, 178 | "input": [ 179 | "import laff\n", 180 | "\n", 181 | "alpha = np.matrix( '-2.0' ) # the way we are going to program, scalars, vectors, and matrices are \n", 182 | " # all just matrices. So, alpha here is a 1 x 1 matrix, which we \n", 183 | " # initialize to some random number, in this case -2.0.\n", 184 | "\n", 185 | "dot_unb( x, y, alpha ) # Takes x, y, and alpha as an input, and then updates alpha with the \n", 186 | " # result of dot( x, y ). Notice that the contents of variable alpha\n", 187 | " # are updated. 
This only works if alpha is passed in as an array \n", 188 | " # (a matrix in our case)\n", 189 | "\n", 190 | "print( 'alpha' )\n", 191 | "print( alpha )\n", 192 | "\n", 193 | "print( 'compare alpha to np.transpose(x) * y:' )\n", 194 | "alpha_reference = np.transpose(x) * y\n", 195 | "\n", 196 | "print( alpha - alpha_reference[0,0] )" 197 | ], 198 | "language": "python", 199 | "metadata": {}, 200 | "outputs": [] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "metadata": {}, 205 | "source": [ 206 | "The output should be:\n", 207 | "\n", 208 | "alpha\n", 209 | "[[-7.]]\n", 210 | "compare alpha to np.transpose(x) * y:\n", 211 | "[[0.]]\n", 212 | "" 213 | ] 214 | } 215 | ], 216 | "metadata": {} 217 | } 218 | ] 219 | } -------------------------------------------------------------------------------- /week01/01.6.6 Programming without indices (axpy).ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 2, 13 | "metadata": {}, 14 | "source": [ 15 | "Programming without indices (axpy)" 16 | ] 17 | }, 18 | { 19 | "cell_type": "heading", 20 | "level": 3, 21 | "metadata": {}, 22 | "source": [ 23 | "Preliminaries" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "

Again, copy this notebook so that you don't corrupt the original! Then you can \"play\" with the copy of the notebook all you want!

\n", 31 | "\n", 32 | "

In this notebook, we show how the FLAME notation (the notation in which vectors and/or matrices are partitioned into regions) can be leveraged to implement linear algebra operations without using indices (which are the root of all evil in programming...).

" 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": {}, 38 | "source": [ 39 | "Let's start by importing numpy and creating vectors $ x = \\left( \\begin{array}{r} 1 \\\\ 2 \\\\ 3 \\end{array} \\right) $, $ y = \\left( \\begin{array}{r} -1 \\\\ 0 \\\\ -2 \\end{array} \\right) $ and a scalar $ \\alpha = 2.5 $. \n", 40 | "\n", 41 | "Execute the code in the box by clicking in the box and then on \"Cell -> Run\". Alternatively, click on the box and push \"Shift\" and \"Return\" (or \"Enter\") together." 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "collapsed": false, 47 | "input": [ 48 | "import numpy as np # This imports a package called \"numpy\" that will make working with matrices \n", 49 | " # simpler\n", 50 | "\n", 51 | "import laff as laff \n", 52 | "# create two two-dimensional matrices of only one column each. \n", 53 | "x = np.matrix( '1.;2.;3.' )\n", 54 | "print( 'x = ' )\n", 55 | "print( x )\n", 56 | "\n", 57 | "y = np.matrix( '-1.;0.;-2.' )\n", 58 | "print( 'y = ' )\n", 59 | "print( y )\n", 60 | "\n", 61 | "alpha = 2.5\n", 62 | "print( 'alpha = ' )\n", 63 | "print( alpha )\n", 64 | "\n", 65 | "yold = np.matrix( np.zeros( [3,1] ) )\n", 66 | "laff.copy( y, yold )" 67 | ], 68 | "language": "python", 69 | "metadata": {}, 70 | "outputs": [] 71 | }, 72 | { 73 | "cell_type": "heading", 74 | "level": 3, 75 | "metadata": {}, 76 | "source": [ 77 | "Axpy as a simple routine" 78 | ] 79 | }, 80 | { 81 | "cell_type": "markdown", 82 | "metadata": {}, 83 | "source": [ 84 | "

\n", 85 | "Here is a simple routine for computing $ y := \\alpha x + y $:\n", 86 | "

" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "collapsed": false, 92 | "input": [ 93 | "def axpy( alpha, x, y ):\n", 94 | "\n", 95 | " m, n = np.shape( x )\n", 96 | " \n", 97 | " for i in range( m ):\n", 98 | " y[ i, 0 ] = alpha * x[ i, 0 ] + y[ i, 0 ]" 99 | ], 100 | "language": "python", 101 | "metadata": {}, 102 | "outputs": [] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "metadata": {}, 107 | "source": [ 108 | "Be sure the run the above box, or this notebook won't know about the routine!!!" 109 | ] 110 | }, 111 | { 112 | "cell_type": "markdown", 113 | "metadata": {}, 114 | "source": [ 115 | "Now, execute" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "collapsed": false, 121 | "input": [ 122 | "laff.copy( yold, y )\n", 123 | "\n", 124 | "print( 'y before axpy:')\n", 125 | "\n", 126 | "print( y )\n", 127 | "\n", 128 | "axpy( alpha, x, y )\n", 129 | "\n", 130 | "print( 'y after axpy: ' )\n", 131 | "print( y )\n", 132 | "\n", 133 | "print( 'compare new y to alpha * x + yold:' )\n", 134 | "print( y - ( alpha * x + yold ) )" 135 | ], 136 | "language": "python", 137 | "metadata": {}, 138 | "outputs": [] 139 | }, 140 | { 141 | "cell_type": "heading", 142 | "level": 3, 143 | "metadata": {}, 144 | "source": [ 145 | "An implementation with the FLAMEPy Application Programming Interface (API)" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "We now show how to implement this same routine using the FLAMEPy API.\n", 153 | "\n", 154 | "Start by visiting the Spark webpage . Follow along with the video and paste the resulting code below. Then follow along with the video and add the appropriate commands.\n", 155 | "\n", 156 | "Here is the algorithm as presented in Unit 1.6.5. 
\n", 157 | "\"some_text\"\n", 158 | "\n", 159 | "In the video for Unit 1.6.6, you are encouraged to us the Spark webpage and the above algorithm to implement the axpy operation.\n", 160 | "\n", 161 | " Note: \n", 162 | "" 167 | ] 168 | }, 169 | { 170 | "cell_type": "code", 171 | "collapsed": false, 172 | "input": [ 173 | "# Insert Code here\n", 174 | "\n", 175 | "\n", 176 | "\n" 177 | ], 178 | "language": "python", 179 | "metadata": {}, 180 | "outputs": [] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "collapsed": false, 185 | "input": [ 186 | "import laff\n", 187 | "\n", 188 | "laff.copy( yold, y )\n", 189 | "\n", 190 | "alpha = np.matrix( '-2.0' ) # the way we are going to program, scalars, vectors, and matrices are \n", 191 | " # all just matrices. So, alpha here is a 1 x 1 matrix, which we \n", 192 | " # initialize to some number, in this case -2.0.\n", 193 | "\n", 194 | "axpy_unb( alpha, x, y ) # Takes alpha, x, and y, and then updates y with the \n", 195 | " # result of axpy(alpha, x, y ). Notice that the contents of variable y\n", 196 | " # are updated. This only works if y is passed in as an array \n", 197 | " # (a matrix in our case)\n", 198 | "\n", 199 | "print( 'y' )\n", 200 | "print( y )\n", 201 | "\n", 202 | "print( 'compare updated y to alpha * x + yold:' )\n", 203 | "\n", 204 | "print( y - ( alpha[ 0, 0 ] * x + yold ) ) # we have to use alpha[ 0, 0 ] because otherwise numpy \n", 205 | " # thinks alpha is a matrix, and then the dimensions for \n", 206 | " # matrix-matrix multiplication don't match up.\n", 207 | " # More on this, later." 
208 | ], 209 | "language": "python", 210 | "metadata": {}, 211 | "outputs": [] 212 | }, 213 | { 214 | "cell_type": "markdown", 215 | "metadata": {}, 216 | "source": [ 217 | "The output should be:\n", 218 | "\n", 219 | "y\n", 220 | "[[-3.]\n", 221 | " [-4.]\n", 222 | " [-8.]]\n", 223 | "compare updated y to alpha * x + yold:\n", 224 | "[[ 0.]\n", 225 | " [ 0.]\n", 226 | " [ 0.]]\n", 227 | "" 228 | ] 229 | } 230 | ], 231 | "metadata": {} 232 | } 233 | ] 234 | } -------------------------------------------------------------------------------- /week02/02.4.2.10 Practice with matrix-vector multiplication.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 2, 13 | "metadata": {}, 14 | "source": [ 15 | "Practice with matrix-vector multiplication" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "To practice your ability to compute a matrix-vector product, run the cell below and when you are ready, check your answer by running the cell below it. To do another one, just repeat by running the cell below, and then the one below it!" 
23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "collapsed": false, 28 | "input": [ 29 | "import generate_problems as gp\n", 30 | "print(\"What is the result of the matrix vector product below?\")\n", 31 | "\n", 32 | "p = gp.Problem()\n", 33 | "\n", 34 | "p.new_problem()" 35 | ], 36 | "language": "python", 37 | "metadata": {}, 38 | "outputs": [] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "collapsed": false, 43 | "input": [ 44 | "p.show_answer()" 45 | ], 46 | "language": "python", 47 | "metadata": {}, 48 | "outputs": [] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "collapsed": false, 53 | "input": [], 54 | "language": "python", 55 | "metadata": {}, 56 | "outputs": [] 57 | } 58 | ], 59 | "metadata": {} 60 | } 61 | ] 62 | } -------------------------------------------------------------------------------- /week02/generate_problems.py: -------------------------------------------------------------------------------- 1 | from numpy import random 2 | from numpy import matrix 3 | 4 | from sympy import init_printing, Matrix, MatMul, latex, Rational, zeros 5 | from IPython.display import Math 6 | 7 | 8 | class Problem(): 9 | def __init__(self): 10 | 11 | m = self.random_integer( 3, 4 ) 12 | n = self.random_integer( 2, 3 ) 13 | 14 | tag = '' 15 | 16 | A = self.random_integer_matrix( m, n ) 17 | x = self.random_integer_matrix( n, 1 ) 18 | 19 | answer = A * x 20 | 21 | def random_integer( self, lo, hi ): 22 | x = random.rand() 23 | rand_int = int(round( (hi - lo) * x ) + lo) 24 | 25 | return rand_int 26 | 27 | def random_integer_matrix( self, m, n ): 28 | A = zeros( (m, n) ) 29 | 30 | for i in range( m ): 31 | for j in range( n ): 32 | A[ i,j ] = Rational(self.random_integer( -4, 4 )) 33 | return A 34 | 35 | def __choose_problem_type__(self): 36 | 37 | m = self.random_integer( 2, 4 ) 38 | k = self.random_integer( 2, 4 ) 39 | n = self.random_integer( 2, 4 ) 40 | tag = ' Matrix Matrix Multiplcation' 41 | 42 | case = self.random_integer( 1, 8 ) 43 | 44 | #Scalar 
Multiplication 45 | if case is 1: 46 | m = 1 47 | k = 1 48 | n = 1 49 | tag = ' Scalar Multiplication' 50 | 51 | #Scal 52 | elif case is 2: 53 | n = 1 54 | k = 1 55 | tag = ' SCAL' 56 | 57 | #Scal 58 | elif case is 3: 59 | m = 1 60 | k = 1 61 | tag = ' SCAL' 62 | 63 | #Dot Product 64 | elif case is 4: 65 | m = 1 66 | n = 1 67 | tag = ' DOT' 68 | 69 | #Outer Product Product 70 | elif case is 5: 71 | k = 1 72 | #The 'n ' is to make proper grammar when displaying 73 | tag = 'n Outer Product' 74 | 75 | #Matrix-Vector Product 76 | elif case is 6: 77 | n = 1 78 | tag = ' Matrix-Vector Product' 79 | 80 | #Row Vector-Matrix Product 81 | elif case is 7: 82 | m = 1 83 | tag = ' Row Vector-Matrix Product' 84 | 85 | return m, n, k, tag 86 | 87 | def new_problem(self): 88 | init_printing() 89 | 90 | m = self.random_integer( 3, 4 ) 91 | n = self.random_integer( 2, 3 ) 92 | 93 | self.A = self.random_integer_matrix( m, n ) 94 | self.x = self.random_integer_matrix( n, 1 ) 95 | self.answer = self.A * self.x 96 | 97 | return Math( "$$" + latex( MatMul( self.A, self.x ), mat_str = "matrix" ) + "=" + "?" + "$$" ) 98 | 99 | def new_MM(self): 100 | init_printing() 101 | 102 | m, n, k, self.tag = self.__choose_problem_type__() 103 | 104 | self.A = self.random_integer_matrix( m, k ) 105 | self.x = self.random_integer_matrix( k, n ) 106 | self.answer = self.A * self.x 107 | 108 | return Math( "$$" + latex( MatMul( self.A, self.x ), mat_str = "matrix" ) + "=" + "?" 
+ "$$" ) 109 | 110 | def show_answer(self): 111 | init_printing() 112 | 113 | return Math( "$$" + latex( MatMul( self.A, self.x ), mat_str = "matrix" ) + "=" + latex( self.answer, mat_str = "matrix" ) + "$$") 114 | 115 | def show_problem_type(self): 116 | print( "This is a" + self.tag + " problem" ) -------------------------------------------------------------------------------- /week03/03.1.1 Timmy!.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "code", 12 | "collapsed": false, 13 | "input": [ 14 | "exec( open(\"timmy.py\").read() )" 15 | ], 16 | "language": "python", 17 | "metadata": {}, 18 | "outputs": [], 19 | "prompt_number": "*" 20 | }, 21 | { 22 | "cell_type": "code", 23 | "collapsed": false, 24 | "input": [], 25 | "language": "python", 26 | "metadata": {}, 27 | "outputs": [] 28 | } 29 | ], 30 | "metadata": {} 31 | } 32 | ] 33 | } -------------------------------------------------------------------------------- /week03/03.2.1 Set to zero.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "The Set_to_zero routine" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement a simple function that sets all elements of a given matrix, \n", 23 | "stored in numpy matrix A, to zero." 
24 | ] 25 | }, 26 | { 27 | "cell_type": "heading", 28 | "level": 2, 29 | "metadata": {}, 30 | "source": [ 31 | "Getting started" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": {}, 37 | "source": [ 38 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 39 | ] 40 | }, 41 | { 42 | "cell_type": "heading", 43 | "level": 2, 44 | "metadata": {}, 45 | "source": [ 46 | "Algorithm" 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "metadata": {}, 52 | "source": [ 53 | "\"Set" 54 | ] 55 | }, 56 | { 57 | "cell_type": "heading", 58 | "level": 2, 59 | "metadata": {}, 60 | "source": [ 61 | "The routine" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "The specific laff function we will use is laff.zerov which, when given a row or column vector (stored as a 1 x n or n x 1 matrix) will zero that vector. The vectors to be zeroed will actually be parts of the matrix A that we overwrite with zeroes. \n", 69 | "\n", 70 | "The flame functions should be self-explanatory if you put the below next to the algorithm for setting a matrix to the zero, expressed using the FLAME notation.\n", 71 | "\n", 72 | "It is the merge_1x2 routine that does require an explanation. The problem is that we want to overwrite A with the result. That routine takes AT, AB, and copies them back into A.\n", 73 | "\n", 74 | "Follow along with the video, using the Spark webpage ." 
75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "collapsed": false, 80 | "input": [ 81 | "# insert code here\n", 82 | "\n", 83 | "\n" 84 | ], 85 | "language": "python", 86 | "metadata": {}, 87 | "outputs": [] 88 | }, 89 | { 90 | "cell_type": "heading", 91 | "level": 2, 92 | "metadata": {}, 93 | "source": [ 94 | "Testing" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "Let's quickly test the routine by creating a 5 x 5 matrix and then setting it to zero." 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "collapsed": false, 107 | "input": [ 108 | "from numpy import random\n", 109 | "from numpy import matrix\n", 110 | "\n", 111 | "A = matrix( random.rand( 5,4 ) )\n", 112 | "\n", 113 | "print( 'A before =' )\n", 114 | "print( A )" 115 | ], 116 | "language": "python", 117 | "metadata": {}, 118 | "outputs": [] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "collapsed": false, 123 | "input": [ 124 | "Set_to_zero_unb_var1( A )\n", 125 | "\n", 126 | "print( 'A after =' )\n", 127 | "print( A )" 128 | ], 129 | "language": "python", 130 | "metadata": {}, 131 | "outputs": [] 132 | }, 133 | { 134 | "cell_type": "markdown", 135 | "metadata": {}, 136 | "source": [ 137 | "Bingo, it seems to work!" 138 | ] 139 | }, 140 | { 141 | "cell_type": "heading", 142 | "level": 2, 143 | "metadata": {}, 144 | "source": [ 145 | "Try it yourself (homework 3.2.1.2)" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "Now, one could alternatively set a matrix to zero by rows." 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "Use the Spark webpage to generate the function Set_to_zero_unb_var2( A ) that overwrites A with zeroes one row at a time." 
160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "collapsed": false, 165 | "input": [ 166 | "#insert code here\n", 167 | "\n", 168 | "\n" 169 | ], 170 | "language": "python", 171 | "metadata": {}, 172 | "outputs": [] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "metadata": {}, 177 | "source": [ 178 | "Test your routine with the following" 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "collapsed": false, 184 | "input": [ 185 | "from numpy import random\n", 186 | "from numpy import matrix\n", 187 | "\n", 188 | "A = matrix( random.rand( 5,4 ) )\n", 189 | "\n", 190 | "print( 'A before =' )\n", 191 | "print( A )\n", 192 | "\n", 193 | "Set_to_zero_unb_var2( A )\n", 194 | "print( 'A after =' )\n", 195 | "print( A )" 196 | ], 197 | "language": "python", 198 | "metadata": {}, 199 | "outputs": [] 200 | }, 201 | { 202 | "cell_type": "heading", 203 | "level": 2, 204 | "metadata": {}, 205 | "source": [ 206 | "Watch your code in action!" 207 | ] 208 | }, 209 | { 210 | "cell_type": "markdown", 211 | "metadata": {}, 212 | "source": [ 213 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 214 | "\n", 215 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 216 | "\n", 217 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
218 | ] 219 | } 220 | ], 221 | "metadata": {} 222 | } 223 | ] 224 | } -------------------------------------------------------------------------------- /week03/03.2.2 Set to identity.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "The Set_to_identity routine" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement simple functions that set a square matrix to the identity matrix." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Set" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "The specific laff functions we will use are \n", 68 | "\n", 72 | "\n", 73 | "Follow along with the video, using the Spark webpage ." 
74 | ] 75 | }, 76 | { 77 | "cell_type": "code", 78 | "collapsed": false, 79 | "input": [ 80 | "# insert code here\n", 81 | "\n", 82 | "\n" 83 | ], 84 | "language": "python", 85 | "metadata": {}, 86 | "outputs": [] 87 | }, 88 | { 89 | "cell_type": "heading", 90 | "level": 2, 91 | "metadata": {}, 92 | "source": [ 93 | "Testing" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "metadata": {}, 99 | "source": [ 100 | "Let's quickly test the routine by creating a 5 x 5 matrix and then setting it to the identity." 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "collapsed": false, 106 | "input": [ 107 | "from numpy import random\n", 108 | "from numpy import matrix\n", 109 | "\n", 110 | "A = matrix( random.rand( 5,5 ) )\n", 111 | "\n", 112 | "print( 'A before =' )\n", 113 | "print( A )" 114 | ], 115 | "language": "python", 116 | "metadata": {}, 117 | "outputs": [] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "collapsed": false, 122 | "input": [ 123 | "Set_to_identity_unb_var1( A )\n", 124 | "\n", 125 | "print( 'A after =' )\n", 126 | "print( A )" 127 | ], 128 | "language": "python", 129 | "metadata": {}, 130 | "outputs": [] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "Bingo, it seems to work!" 137 | ] 138 | }, 139 | { 140 | "cell_type": "heading", 141 | "level": 2, 142 | "metadata": {}, 143 | "source": [ 144 | "Try it yourself (Homework 3.2.2.2)" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "Now, one could alternatively set a matrix to zero by rows." 152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "metadata": {}, 157 | "source": [ 158 | "Use the Spark webpage to generate the function Set_to_identity_unb_var2( A ) that overwrites A with the identity one row at a time." 
159 | ] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "collapsed": false, 164 | "input": [ 165 | "# insert code here\n", 166 | "\n", 167 | "\n" 168 | ], 169 | "language": "python", 170 | "metadata": {}, 171 | "outputs": [] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "Test your routine with the following" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "collapsed": false, 183 | "input": [ 184 | "from numpy import random\n", 185 | "from numpy import matrix\n", 186 | "\n", 187 | "A = matrix( random.rand( 5,5 ) )\n", 188 | "\n", 189 | "print( 'A before =' )\n", 190 | "print( A )\n", 191 | "\n", 192 | "Set_to_identity_unb_var2( A )\n", 193 | "print( 'A after =' )\n", 194 | "print( A )" 195 | ], 196 | "language": "python", 197 | "metadata": {}, 198 | "outputs": [] 199 | }, 200 | { 201 | "cell_type": "heading", 202 | "level": 2, 203 | "metadata": {}, 204 | "source": [ 205 | "Watch your code in action!" 206 | ] 207 | }, 208 | { 209 | "cell_type": "markdown", 210 | "metadata": {}, 211 | "source": [ 212 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 213 | "\n", 214 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 215 | "\n", 216 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
217 | ] 218 | } 219 | ], 220 | "metadata": {} 221 | } 222 | ] 223 | } -------------------------------------------------------------------------------- /week03/03.2.3 Diagonal Matrices.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "The Set_to_diagonal_matrix routine" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement simple functions that set the diagonal elements of a square matrix to the components of a given vector." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Set" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine
Set_to_diagonal_matrix_unb_var1( d, A ) " 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "This routine sets the diagonal elements of $ A $ to the components of vector $ d$, and sets the off-diagonal elements to zero.\n", 68 | "\n", 69 | "The specific laff functions we will use are \n", 70 | "\n", 74 | "\n", 75 | "Use the Spark webpage ." 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "collapsed": false, 81 | "input": [ 82 | "# insert code here\n", 83 | "\n", 84 | "\n", 85 | "\n" 86 | ], 87 | "language": "python", 88 | "metadata": {}, 89 | "outputs": [] 90 | }, 91 | { 92 | "cell_type": "heading", 93 | "level": 2, 94 | "metadata": {}, 95 | "source": [ 96 | "Testing" 97 | ] 98 | }, 99 | { 100 | "cell_type": "markdown", 101 | "metadata": {}, 102 | "source": [ 103 | "Let's quickly test the routine by creating a 5 x 5 matrix and vector of size 5 and then setting the matrix to the diagonal matrix with the components of the vector on its diagonal" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "collapsed": false, 109 | "input": [ 110 | "from numpy import random\n", 111 | "from numpy import matrix\n", 112 | "\n", 113 | "A = matrix( random.rand( 5,5 ) )\n", 114 | "d = matrix( random.rand( 5,1 ) )\n", 115 | "\n", 116 | "print( 'A before =' )\n", 117 | "print( A )\n", 118 | "\n", 119 | "print( 'd before =' )\n", 120 | "print( d )" 121 | ], 122 | "language": "python", 123 | "metadata": {}, 124 | "outputs": [] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "collapsed": false, 129 | "input": [ 130 | "Set_to_diagonal_matrix_unb_var1( d, A )\n", 131 | "\n", 132 | "print( 'A after =' )\n", 133 | "print( A )" 134 | ], 135 | "language": "python", 136 | "metadata": {}, 137 | "outputs": [] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "Bingo, it seems to work!" 
144 | ] 145 | }, 146 | { 147 | "cell_type": "heading", 148 | "level": 2, 149 | "metadata": {}, 150 | "source": [ 151 | "Try it yourself (Homework 3.2.3.3)" 152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "metadata": {}, 157 | "source": [ 158 | "Now, one could alternatively sets the matrix to the diagonal matrix by rows." 159 | ] 160 | }, 161 | { 162 | "cell_type": "markdown", 163 | "metadata": {}, 164 | "source": [ 165 | "Use the Spark webpage to generate the function Set_to_diagonal_matrix_unb_var2( d, A ) that overwrites A one row at a time." 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "collapsed": false, 171 | "input": [ 172 | "# insert code here\n", 173 | "\n", 174 | "\n", 175 | "\n" 176 | ], 177 | "language": "python", 178 | "metadata": {}, 179 | "outputs": [] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": {}, 184 | "source": [ 185 | "Test your routine with the following" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "collapsed": false, 191 | "input": [ 192 | "from numpy import random\n", 193 | "from numpy import matrix\n", 194 | "\n", 195 | "A = matrix( random.rand( 5,5 ) )\n", 196 | "d = matrix( random.rand( 5,1 ) )\n", 197 | "\n", 198 | "print( 'A before =' )\n", 199 | "print( A )\n", 200 | "\n", 201 | "print( 'd before =' )\n", 202 | "print( d )\n", 203 | "\n", 204 | "Set_to_diagonal_matrix_unb_var2( d, A )\n", 205 | "\n", 206 | "print( 'A after =' )\n", 207 | "print( A )" 208 | ], 209 | "language": "python", 210 | "metadata": {}, 211 | "outputs": [] 212 | }, 213 | { 214 | "cell_type": "heading", 215 | "level": 2, 216 | "metadata": {}, 217 | "source": [ 218 | "Watch your code in action!" 219 | ] 220 | }, 221 | { 222 | "cell_type": "markdown", 223 | "metadata": {}, 224 | "source": [ 225 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 226 | "\n", 227 | "Disclaimer: we implemented a VERY simple interpreter. 
If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 228 | "\n", 229 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 230 | ] 231 | }, 232 | { 233 | "cell_type": "code", 234 | "collapsed": false, 235 | "input": [], 236 | "language": "python", 237 | "metadata": {}, 238 | "outputs": [] 239 | } 240 | ], 241 | "metadata": {} 242 | } 243 | ] 244 | } -------------------------------------------------------------------------------- /week03/03.2.4 Triangularize.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "\"Triangularizing\" a matrix" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement simple functions that make a square matrix into a triangular matrix by setting the appropriate entries in the matrix to zero or one." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 
38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Make" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "Write your
\n", 68 | " Set_to_lower_triangular_matrix_unb_var1( A )
\n", 69 | "routine, using the Spark webpage ." 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "collapsed": false, 75 | "input": [ 76 | "# insert code here\n", 77 | "\n", 78 | "\n" 79 | ], 80 | "language": "python", 81 | "metadata": {}, 82 | "outputs": [] 83 | }, 84 | { 85 | "cell_type": "heading", 86 | "level": 2, 87 | "metadata": {}, 88 | "source": [ 89 | "Testing" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "Let's quickly test the routine by creating a 5 x 5 matrix and then setting its strictly upper triangular part to zero, thus making it lower triangular." 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "collapsed": false, 102 | "input": [ 103 | "from numpy import random\n", 104 | "from numpy import matrix\n", 105 | "\n", 106 | "A = matrix( random.rand( 5,5 ) )\n", 107 | "\n", 108 | "print( 'A before =' )\n", 109 | "print( A )" 110 | ], 111 | "language": "python", 112 | "metadata": {}, 113 | "outputs": [] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "collapsed": false, 118 | "input": [ 119 | "Set_to_lower_triangular_matrix_unb_var1( A )\n", 120 | "\n", 121 | "print( 'A after =' )\n", 122 | "print( A )" 123 | ], 124 | "language": "python", 125 | "metadata": {}, 126 | "outputs": [] 127 | }, 128 | { 129 | "cell_type": "markdown", 130 | "metadata": {}, 131 | "source": [ 132 | "Bingo, it seems to work!" 133 | ] 134 | }, 135 | { 136 | "cell_type": "heading", 137 | "level": 2, 138 | "metadata": {}, 139 | "source": [ 140 | "Try it yourself (Homework 3.2.4.6)" 141 | ] 142 | }, 143 | { 144 | "cell_type": "markdown", 145 | "metadata": {}, 146 | "source": [ 147 | "Now, an alternative routine that accesses the matrix by rows." 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "Use the Spark webpage to generate the routine
\n", 155 | " Set_to_lower_triangular_matrix_unb_var2( A ).
" 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "collapsed": false, 161 | "input": [ 162 | "# insert code here\n", 163 | "\n", 164 | "\n" 165 | ], 166 | "language": "python", 167 | "metadata": {}, 168 | "outputs": [] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "Test your routine with the following" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "collapsed": false, 180 | "input": [ 181 | "from numpy import random\n", 182 | "from numpy import matrix\n", 183 | "\n", 184 | "A = matrix( random.rand( 5,5 ) )\n", 185 | "\n", 186 | "print( 'A before =' )\n", 187 | "print( A )\n", 188 | "\n", 189 | "Set_to_lower_triangular_matrix_unb_var2( A )\n", 190 | "print( 'A after =' )\n", 191 | "print( A )" 192 | ], 193 | "language": "python", 194 | "metadata": {}, 195 | "outputs": [] 196 | }, 197 | { 198 | "cell_type": "heading", 199 | "level": 2, 200 | "metadata": {}, 201 | "source": [ 202 | "Watch your code in action!" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 210 | "\n", 211 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 212 | "\n", 213 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
214 | ] 215 | } 216 | ], 217 | "metadata": {} 218 | } 219 | ] 220 | } -------------------------------------------------------------------------------- /week03/03.2.5 Transpose.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Transposing a matrix" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement simple functions that transpose matrix $ A $, storing the result in matrix $ B $." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Transposing" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "Write your
\n", 68 | " Transpose_unb_var1( A, B )
\n", 69 | "routine, using the Spark webpage and the laff.copy routine." 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "collapsed": false, 75 | "input": [ 76 | "# insert code here\n", 77 | "\n" 78 | ], 79 | "language": "python", 80 | "metadata": {}, 81 | "outputs": [] 82 | }, 83 | { 84 | "cell_type": "heading", 85 | "level": 2, 86 | "metadata": {}, 87 | "source": [ 88 | "Testing" 89 | ] 90 | }, 91 | { 92 | "cell_type": "markdown", 93 | "metadata": {}, 94 | "source": [ 95 | "Let's quickly test the routine by creating a 5 x 4 matrix $ A $ and a 4 x 5 matrix $ B $ and then transposing $ A $, overwriting $ B $ with the result." 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "collapsed": false, 101 | "input": [ 102 | "from numpy import random\n", 103 | "from numpy import matrix\n", 104 | "\n", 105 | "A = matrix( random.rand( 5,4 ) )\n", 106 | "B = matrix( random.rand( 4,5 ) )\n", 107 | "\n", 108 | "print( 'A ' )\n", 109 | "print( A )\n", 110 | "\n", 111 | "\n", 112 | "print( 'B before =' )\n", 113 | "print( B )" 114 | ], 115 | "language": "python", 116 | "metadata": {}, 117 | "outputs": [] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "collapsed": false, 122 | "input": [ 123 | "Transpose_unb_var1( A, B )\n", 124 | "\n", 125 | "print( 'A ' )\n", 126 | "print( A )\n", 127 | "\n", 128 | "print( 'B after =' )\n", 129 | "print( B )" 130 | ], 131 | "language": "python", 132 | "metadata": {}, 133 | "outputs": [] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "Bingo, it seems to work!" 140 | ] 141 | }, 142 | { 143 | "cell_type": "heading", 144 | "level": 2, 145 | "metadata": {}, 146 | "source": [ 147 | "Try it yourself (Homework 3.2.5.3)" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "Now, an alternative routine that accesses the matrix by rows." 
155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "metadata": {}, 160 | "source": [ 161 | "Use the Spark webpage to generate the routine
\n", 162 | " Transpose_unb_var2( A, B ).
" 163 | ] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "collapsed": false, 168 | "input": [ 169 | "# insert code here\n", 170 | "\n", 171 | "\n" 172 | ], 173 | "language": "python", 174 | "metadata": {}, 175 | "outputs": [] 176 | }, 177 | { 178 | "cell_type": "markdown", 179 | "metadata": {}, 180 | "source": [ 181 | "Test your routine with the following" 182 | ] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "collapsed": false, 187 | "input": [ 188 | "A = matrix( random.rand( 5,4 ) )\n", 189 | "B = matrix( random.rand( 4,5 ) )\n", 190 | "\n", 191 | "print( 'A ' )\n", 192 | "print( A )\n", 193 | "\n", 194 | "\n", 195 | "print( 'B before =' )\n", 196 | "print( B )" 197 | ], 198 | "language": "python", 199 | "metadata": {}, 200 | "outputs": [] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "collapsed": false, 205 | "input": [ 206 | "Transpose_unb_var2( A, B )\n", 207 | "\n", 208 | "print( 'A ' )\n", 209 | "print( A )\n", 210 | "\n", 211 | "print( 'B after =' )\n", 212 | "print( B )" 213 | ], 214 | "language": "python", 215 | "metadata": {}, 216 | "outputs": [] 217 | }, 218 | { 219 | "cell_type": "heading", 220 | "level": 2, 221 | "metadata": {}, 222 | "source": [ 223 | "Watch your code in action!" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": {}, 229 | "source": [ 230 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 231 | "\n", 232 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 233 | "\n", 234 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
235 | ] 236 | } 237 | ], 238 | "metadata": {} 239 | } 240 | ] 241 | } -------------------------------------------------------------------------------- /week03/03.2.6 Symmetrize.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "\"Symmetrizing\" a matrix" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement simple functions that make a square matrix symmetric by copying the lower triangular part in its upper triangular part (after transposing)." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Make" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "Write your
\n", 68 | " Symmetrize_from_lower_triangle_unb_var1( A )
\n", 69 | "routine, using the Spark webpage ." 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "collapsed": false, 75 | "input": [ 76 | "# insert code here\n", 77 | "\n", 78 | "\n", 79 | "\n" 80 | ], 81 | "language": "python", 82 | "metadata": {}, 83 | "outputs": [] 84 | }, 85 | { 86 | "cell_type": "heading", 87 | "level": 2, 88 | "metadata": {}, 89 | "source": [ 90 | "Testing" 91 | ] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": {}, 96 | "source": [ 97 | "Let's quickly test the routine by creating a 5 x 5 matrix and then executing the routine." 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "collapsed": false, 103 | "input": [ 104 | "from numpy import random\n", 105 | "from numpy import matrix\n", 106 | "\n", 107 | "A = matrix( random.rand( 5,5 ) )\n", 108 | "\n", 109 | "print( 'A before =' )\n", 110 | "print( A )" 111 | ], 112 | "language": "python", 113 | "metadata": {}, 114 | "outputs": [] 115 | }, 116 | { 117 | "cell_type": "code", 118 | "collapsed": false, 119 | "input": [ 120 | "Symmetrize_from_lower_triangle_unb_var1( A )\n", 121 | "\n", 122 | "print( 'A after =' )\n", 123 | "print( A )" 124 | ], 125 | "language": "python", 126 | "metadata": {}, 127 | "outputs": [] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "Bingo, it seems to work!" 134 | ] 135 | }, 136 | { 137 | "cell_type": "heading", 138 | "level": 2, 139 | "metadata": {}, 140 | "source": [ 141 | "Try it yourself (Homework 3.2.6.5)" 142 | ] 143 | }, 144 | { 145 | "cell_type": "markdown", 146 | "metadata": {}, 147 | "source": [ 148 | "Now, an alternative routine that sets the upper triangular part by rows." 149 | ] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "Use the Spark webpage to generate the routine
\n", 156 | " Symmetrize_from_lower_triangle_unb_var2( A ).
" 157 | ] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "collapsed": false, 162 | "input": [ 163 | "# insert code here\n", 164 | "\n", 165 | "\n", 166 | "\n", 167 | "\n" 168 | ], 169 | "language": "python", 170 | "metadata": {}, 171 | "outputs": [] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "Test your routine with the following" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "collapsed": false, 183 | "input": [ 184 | "from numpy import random\n", 185 | "from numpy import matrix\n", 186 | "\n", 187 | "A = matrix( random.rand( 5,5 ) )\n", 188 | "\n", 189 | "print( 'A before =' )\n", 190 | "print( A )\n", 191 | "\n", 192 | "Symmetrize_from_lower_triangle_unb_var2( A )\n", 193 | "print( 'A after =' )\n", 194 | "print( A )" 195 | ], 196 | "language": "python", 197 | "metadata": {}, 198 | "outputs": [] 199 | }, 200 | { 201 | "cell_type": "heading", 202 | "level": 2, 203 | "metadata": {}, 204 | "source": [ 205 | "Watch your code in action!" 206 | ] 207 | }, 208 | { 209 | "cell_type": "markdown", 210 | "metadata": {}, 211 | "source": [ 212 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 213 | "\n", 214 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 215 | "\n", 216 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
217 | ] 218 | } 219 | ], 220 | "metadata": {} 221 | } 222 | ] 223 | } -------------------------------------------------------------------------------- /week03/03.3.1 Scale a Matrix.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "The Scale_a_matrix routine" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement a simple function that scales a matrix $ A $ by a scalar $ \beta $, overwriting $ A $." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Set" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "The specific laff function we will use is laff.scal, which, when given a row or column vector (stored as a 1 x n or n x 1 matrix) and a scalar, scales that vector. 
\n", 68 | "\n", 69 | "Code the routine with the Spark webpage ." 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "collapsed": false, 75 | "input": [ 76 | "# insert code here\n", 77 | "\n", 78 | "\n" 79 | ], 80 | "language": "python", 81 | "metadata": {}, 82 | "outputs": [] 83 | }, 84 | { 85 | "cell_type": "heading", 86 | "level": 2, 87 | "metadata": {}, 88 | "source": [ 89 | "Testing" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "Let's quickly test the routine by creating a 5 x 4 matrix and then scaling by a scalar." 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "collapsed": false, 102 | "input": [ 103 | "from numpy import random\n", 104 | "from numpy import matrix\n", 105 | "\n", 106 | "beta = matrix( '2.' )\n", 107 | "A = matrix( random.rand( 5,4 ) )\n", 108 | "\n", 109 | "print( 'beta = ' )\n", 110 | "print( beta )\n", 111 | "\n", 112 | "print( 'A before =' )\n", 113 | "print( A )" 114 | ], 115 | "language": "python", 116 | "metadata": {}, 117 | "outputs": [] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "collapsed": false, 122 | "input": [ 123 | "Scale_a_matrix_unb_var1( beta, A )\n", 124 | "\n", 125 | "print( 'A after =' )\n", 126 | "print( A )" 127 | ], 128 | "language": "python", 129 | "metadata": {}, 130 | "outputs": [] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "Bingo, it seems to work!" 137 | ] 138 | }, 139 | { 140 | "cell_type": "heading", 141 | "level": 2, 142 | "metadata": {}, 143 | "source": [ 144 | "Try it yourself (homework 3.3.1.3)" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "Now, write an alternative algorithm that scales $ A $ one row at a time." 
152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "metadata": {}, 157 | "source": [ 158 | "Use the Spark webpage to generate the function Scale_a_matrix_unb_var2( beta, A ) that computes $ A := \beta A $ one row at a time." 159 | ] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "collapsed": false, 164 | "input": [ 165 | "# insert code here\n", 166 | "\n", 167 | "\n" 168 | ], 169 | "language": "python", 170 | "metadata": {}, 171 | "outputs": [] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "Test your routine with the following" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "collapsed": false, 183 | "input": [ 184 | "from numpy import random\n", 185 | "from numpy import matrix\n", 186 | "\n", 187 | "beta = matrix( '2.' )\n", 188 | "A = matrix( random.rand( 5,4 ) )\n", 189 | "\n", 190 | "print( 'A before =' )\n", 191 | "print( A )\n", 192 | "\n", 193 | "Scale_a_matrix_unb_var2( beta, A )\n", 194 | "print( 'A after =' )\n", 195 | "print( A )" 196 | ], 197 | "language": "python", 198 | "metadata": {}, 199 | "outputs": [] 200 | }, 201 | { 202 | "cell_type": "heading", 203 | "level": 2, 204 | "metadata": {}, 205 | "source": [ 206 | "Watch your code in action!" 207 | ] 208 | }, 209 | { 210 | "cell_type": "markdown", 211 | "metadata": {}, 212 | "source": [ 213 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 214 | "\n", 215 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 216 | "\n", 217 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
218 | ] 219 | } 220 | ], 221 | "metadata": {} 222 | } 223 | ] 224 | } -------------------------------------------------------------------------------- /week03/03.4.1 Matrix vector multiply via dot products - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-vector multiplication via dot products" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement $ y := A x + y $ via dot products." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Matrix-vector" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine
Mvmult_n_unb_var1( A, x, y ) " 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "This routine, given $ A \\in \\mathbb{R}^{m \\times n} $, $ x \\in \\mathbb{R}^n $, and $ y \\in \\mathbb{R}^m $, computes $ y := A x + y $. The \"_n_\" indicates this is the \"no transpose\" matrix-vector multiplication. What this means will become clear next week.\n", 68 | "\n", 69 | "The specific laff functions we will use are \n", 70 | "\n", 73 | "\n", 74 | "Use the Spark webpage ." 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "collapsed": false, 80 | "input": [ 81 | "import flame\n", 82 | "import laff as laff\n", 83 | "\n", 84 | "def Mvmult_n_unb_var1(A, x, y):\n", 85 | "\n", 86 | " AT, \\\n", 87 | " AB = flame.part_2x1(A, \\\n", 88 | " 0, 'TOP')\n", 89 | "\n", 90 | " yT, \\\n", 91 | " yB = flame.part_2x1(y, \\\n", 92 | " 0, 'TOP')\n", 93 | "\n", 94 | " while AT.shape[0] < A.shape[0]:\n", 95 | "\n", 96 | " A0, \\\n", 97 | " a1t, \\\n", 98 | " A2 = flame.repart_2x1_to_3x1(AT, \\\n", 99 | " AB, \\\n", 100 | " 1, 'BOTTOM')\n", 101 | "\n", 102 | " y0, \\\n", 103 | " psi1, \\\n", 104 | " y2 = flame.repart_2x1_to_3x1(yT, \\\n", 105 | " yB, \\\n", 106 | " 1, 'BOTTOM')\n", 107 | "\n", 108 | " #------------------------------------------------------------#\n", 109 | "\n", 110 | " laff.dots( a1t, x, psi1 )\n", 111 | "\n", 112 | " #------------------------------------------------------------#\n", 113 | "\n", 114 | " AT, \\\n", 115 | " AB = flame.cont_with_3x1_to_2x1(A0, \\\n", 116 | " a1t, \\\n", 117 | " A2, \\\n", 118 | " 'TOP')\n", 119 | "\n", 120 | " yT, \\\n", 121 | " yB = flame.cont_with_3x1_to_2x1(y0, \\\n", 122 | " psi1, \\\n", 123 | " y2, \\\n", 124 | " 'TOP')\n", 125 | "\n", 126 | " flame.merge_2x1(yT, \\\n", 127 | " yB, y)\n", 128 | "\n" 129 | ], 130 | "language": "python", 131 | "metadata": {}, 132 | "outputs": [] 133 | }, 134 | { 135 | "cell_type": "heading", 136 | "level": 2, 137 | "metadata": {}, 138 | "source": [ 139 | 
"Testing" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": {}, 145 | "source": [ 146 | "Let's quickly test the routine by creating a 4 x 3 matrix and related vectors, performing the computation." 147 | ] 148 | }, 149 | { 150 | "cell_type": "code", 151 | "collapsed": false, 152 | "input": [ 153 | "from numpy import random\n", 154 | "from numpy import matrix\n", 155 | "\n", 156 | "A = matrix( random.rand( 4,3 ) )\n", 157 | "x = matrix( random.rand( 3,1 ) )\n", 158 | "y = matrix( random.rand( 4,1 ) )\n", 159 | "yold = matrix( random.rand( 4,1 ) )\n", 160 | "\n", 161 | "print( 'A before =' )\n", 162 | "print( A )\n", 163 | "\n", 164 | "print( 'x before =' )\n", 165 | "print( x )\n", 166 | "\n", 167 | "print( 'y before =' )\n", 168 | "print( y )" 169 | ], 170 | "language": "python", 171 | "metadata": {}, 172 | "outputs": [] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "collapsed": false, 177 | "input": [ 178 | "laff.copy( y, yold ) # save the original vector y\n", 179 | "\n", 180 | "Mvmult_n_unb_var1( A, x, y )\n", 181 | "\n", 182 | "print( 'y after =' )\n", 183 | "print( y )\n", 184 | "\n", 185 | "print( 'y - ( A * x + yold ) = ' )\n", 186 | "print( y - ( A * x + yold ) )" 187 | ], 188 | "language": "python", 189 | "metadata": {}, 190 | "outputs": [] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "metadata": {}, 195 | "source": [ 196 | "Bingo, it seems to work! (Notice that we are doing floating point computations, which means that due to rounding you may not get an exact \"0\".)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "heading", 201 | "level": 2, 202 | "metadata": {}, 203 | "source": [ 204 | "Watch your code in action!" 205 | ] 206 | }, 207 | { 208 | "cell_type": "markdown", 209 | "metadata": {}, 210 | "source": [ 211 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 212 | "\n", 213 | "Disclaimer: we implemented a VERY simple interpreter. 
If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 214 | "\n", 215 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "collapsed": false, 221 | "input": [], 222 | "language": "python", 223 | "metadata": {}, 224 | "outputs": [] 225 | } 226 | ], 227 | "metadata": {} 228 | } 229 | ] 230 | } -------------------------------------------------------------------------------- /week03/03.4.1 Matrix vector multiply via dot products.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-vector multiplication via dot products" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement $ y := A x + y $ via dot products." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 
38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Matrix-vector" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine
Mvmult_n_unb_var1( A, x, y ) " 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "This routine, given $ A \\in \\mathbb{R}^{m \\times n} $, $ x \\in \\mathbb{R}^n $, and $ y \\in \\mathbb{R}^m $, computes $ y := A x + y $. The \"_n_\" indicates this is the \"no transpose\" matrix-vector multiplication. What this means will become clear next week.\n", 68 | "\n", 69 | "The specific laff functions we will use are \n", 70 | "\n", 73 | "\n", 74 | "Use the Spark webpage ." 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "collapsed": false, 80 | "input": [ 81 | "# insert code here\n", 82 | "\n", 83 | "\n" 84 | ], 85 | "language": "python", 86 | "metadata": {}, 87 | "outputs": [] 88 | }, 89 | { 90 | "cell_type": "heading", 91 | "level": 2, 92 | "metadata": {}, 93 | "source": [ 94 | "Testing" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "Let's quickly test the routine by creating a 4 x 3 matrix and related vectors, performing the computation." 
102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "collapsed": false, 107 | "input": [ 108 | "from numpy import random\n", 109 | "from numpy import matrix\n", 110 | "\n", 111 | "A = matrix( random.rand( 4,3 ) )\n", 112 | "x = matrix( random.rand( 3,1 ) )\n", 113 | "y = matrix( random.rand( 4,1 ) )\n", 114 | "yold = matrix( random.rand( 4,1 ) )\n", 115 | "\n", 116 | "print( 'A before =' )\n", 117 | "print( A )\n", 118 | "\n", 119 | "print( 'x before =' )\n", 120 | "print( x )\n", 121 | "\n", 122 | "print( 'y before =' )\n", 123 | "print( y )" 124 | ], 125 | "language": "python", 126 | "metadata": {}, 127 | "outputs": [] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "collapsed": false, 132 | "input": [ 133 | "laff.copy( y, yold ) # save the original vector y\n", 134 | "\n", 135 | "Mvmult_n_unb_var1( A, x, y )\n", 136 | "\n", 137 | "print( 'y after =' )\n", 138 | "print( y )\n", 139 | "\n", 140 | "print( 'y - ( A * x + yold ) = ' )\n", 141 | "print( y - ( A * x + yold ) )" 142 | ], 143 | "language": "python", 144 | "metadata": {}, 145 | "outputs": [] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "Bingo, it seems to work! (Notice that we are doing floating point computations, which means that due to rounding you may not get an exact \"0\".)" 152 | ] 153 | }, 154 | { 155 | "cell_type": "heading", 156 | "level": 2, 157 | "metadata": {}, 158 | "source": [ 159 | "Watch your code in action!" 160 | ] 161 | }, 162 | { 163 | "cell_type": "markdown", 164 | "metadata": {}, 165 | "source": [ 166 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 167 | "\n", 168 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. 
But if you do it right, you are in for a treat.\n", 169 | "\n", 170 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 171 | ] 172 | }, 173 | { 174 | "cell_type": "code", 175 | "collapsed": false, 176 | "input": [], 177 | "language": "python", 178 | "metadata": {}, 179 | "outputs": [] 180 | } 181 | ], 182 | "metadata": {} 183 | } 184 | ] 185 | } -------------------------------------------------------------------------------- /week03/03.4.2 Matrix vector multiply via axpys - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-vector multiplication via axpys" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement $ y := A x + y $ via axpy operations." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 
38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Matrix-vector" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine
Mvmult_n_unb_var2( A, x, y ) " 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "This routine, given $ A \\in \\mathbb{R}^{m \\times n} $, $ x \\in \\mathbb{R}^n $, and $ y \\in \\mathbb{R}^m $, computes $ y := A x + y $. The \"_n_\" indicates this is the \"no transpose\" matrix-vector multiplication. What this means will become clear next week.\n", 68 | "\n", 69 | "The specific laff function we will use is \n", 70 | "\n", 73 | "\n", 74 | "Use the Spark webpage ." 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "collapsed": false, 80 | "input": [ 81 | "import flame\n", 82 | "import laff as laff\n", 83 | "\n", 84 | "def Mvmult_n_unb_var2(A, x, y):\n", 85 | "\n", 86 | " AL, AR = flame.part_1x2(A, \\\n", 87 | " 0, 'LEFT')\n", 88 | "\n", 89 | " xT, \\\n", 90 | " xB = flame.part_2x1(x, \\\n", 91 | " 0, 'TOP')\n", 92 | "\n", 93 | " while AL.shape[1] < A.shape[1]:\n", 94 | "\n", 95 | " A0, a1, A2 = flame.repart_1x2_to_1x3(AL, AR, \\\n", 96 | " 1, 'RIGHT')\n", 97 | "\n", 98 | " x0, \\\n", 99 | " chi1, \\\n", 100 | " x2 = flame.repart_2x1_to_3x1(xT, \\\n", 101 | " xB, \\\n", 102 | " 1, 'BOTTOM')\n", 103 | "\n", 104 | " #------------------------------------------------------------#\n", 105 | "\n", 106 | " laff.axpy( chi1, a1, y )\n", 107 | "\n", 108 | " #------------------------------------------------------------#\n", 109 | "\n", 110 | " AL, AR = flame.cont_with_1x3_to_1x2(A0, a1, A2, \\\n", 111 | " 'LEFT')\n", 112 | "\n", 113 | " xT, \\\n", 114 | " xB = flame.cont_with_3x1_to_2x1(x0, \\\n", 115 | " chi1, \\\n", 116 | " x2, \\\n", 117 | " 'TOP')\n", 118 | "\n", 119 | "\n", 120 | "\n", 121 | "\n" 122 | ], 123 | "language": "python", 124 | "metadata": {}, 125 | "outputs": [] 126 | }, 127 | { 128 | "cell_type": "heading", 129 | "level": 2, 130 | "metadata": {}, 131 | "source": [ 132 | "Testing" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "metadata": {}, 138 | "source": [ 139 | "Let's quickly test the routine 
by creating a 4 x 3 matrix and related vectors, performing the computation." 140 | ] 141 | }, 142 | { 143 | "cell_type": "code", 144 | "collapsed": false, 145 | "input": [ 146 | "from numpy import random\n", 147 | "from numpy import matrix\n", 148 | "\n", 149 | "A = matrix( random.rand( 4,3 ) )\n", 150 | "x = matrix( random.rand( 3,1 ) )\n", 151 | "y = matrix( random.rand( 4,1 ) )\n", 152 | "yold = matrix( random.rand( 4,1 ) )\n", 153 | "\n", 154 | "print( 'A before =' )\n", 155 | "print( A )\n", 156 | "\n", 157 | "print( 'x before =' )\n", 158 | "print( x )\n", 159 | "\n", 160 | "print( 'y before =' )\n", 161 | "print( y )" 162 | ], 163 | "language": "python", 164 | "metadata": {}, 165 | "outputs": [] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "collapsed": false, 170 | "input": [ 171 | "laff.copy( y, yold ) # save the original vector y\n", 172 | "\n", 173 | "Mvmult_n_unb_var2( A, x, y )\n", 174 | "\n", 175 | "print( 'y after =' )\n", 176 | "print( y )\n", 177 | "\n", 178 | "print( 'y - ( A * x + yold ) = ' )\n", 179 | "print( y - ( A * x + yold ) )" 180 | ], 181 | "language": "python", 182 | "metadata": {}, 183 | "outputs": [] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "Bingo, it seems to work! (Notice that we are doing floating point computations, which means that due to rounding you may not get an exact \"0\".)" 190 | ] 191 | }, 192 | { 193 | "cell_type": "heading", 194 | "level": 2, 195 | "metadata": {}, 196 | "source": [ 197 | "Watch your code in action!" 198 | ] 199 | }, 200 | { 201 | "cell_type": "markdown", 202 | "metadata": {}, 203 | "source": [ 204 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 205 | "\n", 206 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. 
But if you do it right, you are in for a treat.\n", 207 | "\n", 208 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "collapsed": false, 214 | "input": [], 215 | "language": "python", 216 | "metadata": {}, 217 | "outputs": [] 218 | } 219 | ], 220 | "metadata": {} 221 | } 222 | ] 223 | } -------------------------------------------------------------------------------- /week03/03.4.2 Matrix vector multiply via axpys.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-vector multiplication via axpys" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook walks you through how to implement $ y := A x + y $ via axpy operations." 23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "Getting started" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "We will use some functions that are part of our laff library (of which this function will become a part) as well as some routines from the FLAME API (Application Programming Interface) that allows us to write code that closely resembles how we typeset algorithms using the FLAME notation. These functions are imported with the \"import laff as laff\" and \"import flame\" statements." 
38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "Algorithm" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "\"Matrix-vector" 53 | ] 54 | }, 55 | { 56 | "cell_type": "heading", 57 | "level": 2, 58 | "metadata": {}, 59 | "source": [ 60 | "The routine
Mvmult_n_unb_var2( A, x, y ) " 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "This routine, given $ A \\in \\mathbb{R}^{m \\times n} $, $ x \\in \\mathbb{R}^n $, and $ y \\in \\mathbb{R}^m $, computes $ y := A x + y $. The \"_n_\" indicates this is the \"no transpose\" matrix-vector multiplication. What this means will become clear next week.\n", 68 | "\n", 69 | "The specific laff function we will use is \n", 70 | "\n", 73 | "\n", 74 | "Use the Spark webpage ." 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "collapsed": false, 80 | "input": [ 81 | "# insert code here\n", 82 | "\n", 83 | "\n", 84 | "\n" 85 | ], 86 | "language": "python", 87 | "metadata": {}, 88 | "outputs": [] 89 | }, 90 | { 91 | "cell_type": "heading", 92 | "level": 2, 93 | "metadata": {}, 94 | "source": [ 95 | "Testing" 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": {}, 101 | "source": [ 102 | "Let's quickly test the routine by creating a 4 x 3 matrix and related vectors, performing the computation." 
103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "collapsed": false, 108 | "input": [ 109 | "from numpy import random\n", 110 | "from numpy import matrix\n", 111 | "\n", 112 | "A = matrix( random.rand( 4,3 ) )\n", 113 | "x = matrix( random.rand( 3,1 ) )\n", 114 | "y = matrix( random.rand( 4,1 ) )\n", 115 | "yold = matrix( random.rand( 4,1 ) )\n", 116 | "\n", 117 | "print( 'A before =' )\n", 118 | "print( A )\n", 119 | "\n", 120 | "print( 'x before =' )\n", 121 | "print( x )\n", 122 | "\n", 123 | "print( 'y before =' )\n", 124 | "print( y )" 125 | ], 126 | "language": "python", 127 | "metadata": {}, 128 | "outputs": [] 129 | }, 130 | { 131 | "cell_type": "code", 132 | "collapsed": false, 133 | "input": [ 134 | "laff.copy( y, yold ) # save the original vector y\n", 135 | "\n", 136 | "Mvmult_n_unb_var2( A, x, y )\n", 137 | "\n", 138 | "print( 'y after =' )\n", 139 | "print( y )\n", 140 | "\n", 141 | "print( 'y - ( A * x + yold ) = ' )\n", 142 | "print( y - ( A * x + yold ) )" 143 | ], 144 | "language": "python", 145 | "metadata": {}, 146 | "outputs": [] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "Bingo, it seems to work! (Notice that we are doing floating point computations, which means that due to rounding you may not get an exact \"0\".)" 153 | ] 154 | }, 155 | { 156 | "cell_type": "heading", 157 | "level": 2, 158 | "metadata": {}, 159 | "source": [ 160 | "Watch your code in action!" 161 | ] 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "metadata": {}, 166 | "source": [ 167 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 168 | "\n", 169 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. 
But if you do it right, you are in for a treat.\n", 170 | "\n", 171 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 172 | ] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "collapsed": false, 177 | "input": [], 178 | "language": "python", 179 | "metadata": {}, 180 | "outputs": [] 181 | } 182 | ], 183 | "metadata": {} 184 | } 185 | ] 186 | } -------------------------------------------------------------------------------- /week04/04.1.1 Predicting the Weather.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Week 4: Predicting the Weather" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "The opener for Week 4 discusses an example in which tomorrow's weather is predicted from today's weather. This is done by assigning probabilities to how today's weather transitions to tomorrow's weather." 
23 | ] 24 | }, 25 | { 26 | "cell_type": "heading", 27 | "level": 2, 28 | "metadata": {}, 29 | "source": [ 30 | "The data" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "\"Transition" 38 | ] 39 | }, 40 | { 41 | "cell_type": "heading", 42 | "level": 2, 43 | "metadata": {}, 44 | "source": [ 45 | "The transition matrix, $ P $" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "collapsed": false, 51 | "input": [ 52 | "import numpy as np\n", 53 | "\n", 54 | "P = np.matrix( '0.4, 0.3, 0.1; \\\n", 55 | " 0.4, 0.3, 0.6; \\\n", 56 | " 0.2, 0.4, 0.3' )\n", 57 | "\n", 58 | "print( P )" 59 | ], 60 | "language": "python", 61 | "metadata": {}, 62 | "outputs": [] 63 | }, 64 | { 65 | "cell_type": "heading", 66 | "level": 2, 67 | "metadata": {}, 68 | "source": [ 69 | "Today is cloudy" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "metadata": {}, 75 | "source": [ 76 | "Vector $ x $ consists of three components: \n", 77 | "$$ \\left( \\begin{array}{c} \\chi_s \\\\ \\chi_c \\\\ \\chi_r \\end{array} \\right). $$\n", 78 | "If we want to use $ x^{(0)} $ to express that on day $ 0 $ it is cloudy, we set\n", 79 | "$$ \\left( \\begin{array}{c} \\chi_s^{(0)} \\\\ \\chi_c^{(0)} \\\\ \\chi_r^{(0)} \\end{array} \\right) = \n", 80 | "\\left( \\begin{array}{r} 0 \\\\ 1 \\\\ 0 \\end{array} \\right). $$" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "collapsed": false, 86 | "input": [ 87 | "x0 = np.matrix( '0; \\\n", 88 | " 1; \\\n", 89 | " 0' )\n", 90 | "print( x0 )" 91 | ], 92 | "language": "python", 93 | "metadata": {}, 94 | "outputs": [] 95 | }, 96 | { 97 | "cell_type": "heading", 98 | "level": 2, 99 | "metadata": {}, 100 | "source": [ 101 | "Predicting the weather a week from today" 102 | ] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "metadata": {}, 107 | "source": [ 108 | "The weather for day $ 1 $ is predicted by $ x^{(1)} = P x^{(0)} $. 
Notice that if P and x0 are numpy matrices, then P * x0 computes the product of matrix P times vector x0 ." 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "collapsed": false, 114 | "input": [ 115 | "x1 = P * x0 \n", 116 | "print( x1 )" 117 | ], 118 | "language": "python", 119 | "metadata": {}, 120 | "outputs": [] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": {}, 125 | "source": [ 126 | "Predict the weather for day 2 from day 1." 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "collapsed": false, 132 | "input": [ 133 | "x2 = P * x1\n", 134 | "print( x2 )" 135 | ], 136 | "language": "python", 137 | "metadata": {}, 138 | "outputs": [] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "metadata": {}, 143 | "source": [ 144 | "Predict the weather for day 3 from day 2." 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "collapsed": false, 150 | "input": [], 151 | "language": "python", 152 | "metadata": {}, 153 | "outputs": [] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "Predict the weather for day 4 from day 3." 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "collapsed": false, 165 | "input": [], 166 | "language": "python", 167 | "metadata": {}, 168 | "outputs": [] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "Predict the weather for day 5 from day 4." 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "collapsed": false, 180 | "input": [], 181 | "language": "python", 182 | "metadata": {}, 183 | "outputs": [] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "Predict the weather for day 6 from day 5." 
190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "collapsed": false, 195 | "input": [], 196 | "language": "python", 197 | "metadata": {}, 198 | "outputs": [] 199 | }, 200 | { 201 | "cell_type": "markdown", 202 | "metadata": {}, 203 | "source": [ 204 | "Predict the weather for day 7 from day 6." 205 | ] 206 | }, 207 | { 208 | "cell_type": "code", 209 | "collapsed": false, 210 | "input": [], 211 | "language": "python", 212 | "metadata": {}, 213 | "outputs": [] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "What are the probabilities it is sunny, cloudy, or rainy on day 7 (given that it was cloudy on day 0)? (Just think about this...)" 220 | ] 221 | }, 222 | { 223 | "cell_type": "heading", 224 | "level": 2, 225 | "metadata": {}, 226 | "source": [ 227 | "Predicting the next 20 days" 228 | ] 229 | }, 230 | { 231 | "cell_type": "markdown", 232 | "metadata": {}, 233 | "source": [ 234 | "This is best done with a loop, where $ x $ simply takes on the next value every time through the loop." 235 | ] 236 | }, 237 | { 238 | "cell_type": "code", 239 | "collapsed": false, 240 | "input": [ 241 | "x = x0\n", 242 | "for i in range( 21 ): # you saw range( n ) in Week 1\n", 243 | "    print( 'Day', i )    # print the predictions for day i\n", 244 | "    print( x )\n", 245 | "    x = P * x            # compute the predictions for day i+1" 246 | ], 247 | "language": "python", 248 | "metadata": {}, 249 | "outputs": [] 250 | }, 251 | { 252 | "cell_type": "heading", 253 | "level": 2, 254 | "metadata": {}, 255 | "source": [ 256 | "Predicting the weather for a year from now" 257 | ] 258 | }, 259 | { 260 | "cell_type": "markdown", 261 | "metadata": {}, 262 | "source": [ 263 | "Modify the above loop to compute the prediction for Day 365." 
264 | ] 265 | }, 266 | { 267 | "cell_type": "code", 268 | "collapsed": false, 269 | "input": [], 270 | "language": "python", 271 | "metadata": {}, 272 | "outputs": [] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "Compare Day 365 to Day 20. Notice that they are almost the same. Go back and start with x0 = \"today is sunny\". Then go back and start with x0 = \"today is rainy\". Notice that no matter what the weather is today, eventually the prediction becomes \"the typical forecast for this location\"." 279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "collapsed": false, 284 | "input": [], 285 | "language": "python", 286 | "metadata": {}, 287 | "outputs": [] 288 | } 289 | ], 290 | "metadata": {} 291 | } 292 | ] 293 | } -------------------------------------------------------------------------------- /week04/04.4.4.11 Practice with matrix-matrix multiplication.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix Multiplication Practice" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | " In this notebook you'll be able to practice matrix multiplication with randomly generated problems. Notice the special shapes that come up! You should encounter, as matrix-matrix multiplication, 8 problem types:\n", 23 | "\n", 24 | "\n", 50 | "\n", 51 | "Each problem type is equally likely, so you'll be able to practice the special cases more often. Try guessing the problem type every time one pops up. Guess the most specific case of matrix-matrix multiplication. 
(Yes, you could guess \"matrix-vector multiplication\" when it is also a scalar multiplication.)\n", 52 | "\n", 53 | "A lot of people get confused between Dot and Outer products, and they are really important." 54 | ] 55 | }, 56 | { 57 | "cell_type": "heading", 58 | "level": 2, 59 | "metadata": {}, 60 | "source": [ 61 | "Usage" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "Use the three cells below. The routines should be pretty self explanatory, but you can re-run the first code block to get a new random problem as many times as you'd like. The variable p stores the answer to the problem you've generated, so use the p.show_answer() code block to reveal the answer and the p.show_problem_type() code block to reveal the problem type." 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "collapsed": false, 74 | "input": [ 75 | "import generate_problems as gp\n", 76 | "\n", 77 | "p = gp.Problem()\n", 78 | "p.new_MM()" 79 | ], 80 | "language": "python", 81 | "metadata": {}, 82 | "outputs": [ 83 | { 84 | "latex": [ 85 | "$$\\left[\\begin{matrix}4 & -1 & 3 & -1\\\\4 & -4 & 0 & 3\\\\2 & -3 & 3 & -2\\\\-2 & -1 & 1 & -2\\end{matrix}\\right] \\left[\\begin{matrix}-2\\\\-2\\\\-2\\\\-4\\end{matrix}\\right]=?$$" 86 | ], 87 | "metadata": {}, 88 | "output_type": "pyout", 89 | "prompt_number": 1, 90 | "text": [ 91 | "" 92 | ] 93 | } 94 | ], 95 | "prompt_number": 1 96 | }, 97 | { 98 | "cell_type": "code", 99 | "collapsed": false, 100 | "input": [ 101 | "p.show_answer()" 102 | ], 103 | "language": "python", 104 | "metadata": {}, 105 | "outputs": [ 106 | { 107 | "latex": [ 108 | "$$\\left[\\begin{matrix}4 & -1 & 3 & -1\\\\4 & -4 & 0 & 3\\\\2 & -3 & 3 & -2\\\\-2 & -1 & 1 & -2\\end{matrix}\\right] \\left[\\begin{matrix}-2\\\\-2\\\\-2\\\\-4\\end{matrix}\\right]=\\left[\\begin{matrix}-8\\\\-12\\\\4\\\\12\\end{matrix}\\right]$$" 109 | ], 110 | "metadata": {}, 111 | "output_type": "pyout", 112 | "prompt_number": 2, 113 | "text": [ 114 | "" 
115 | ] 116 | } 117 | ], 118 | "prompt_number": 2 119 | }, 120 | { 121 | "cell_type": "code", 122 | "collapsed": false, 123 | "input": [ 124 | "p.show_problem_type()" 125 | ], 126 | "language": "python", 127 | "metadata": {}, 128 | "outputs": [ 129 | { 130 | "output_type": "stream", 131 | "stream": "stdout", 132 | "text": [ 133 | "This is a Matrix-Vector Product problem\n" 134 | ] 135 | } 136 | ], 137 | "prompt_number": 3 138 | }, 139 | { 140 | "cell_type": "code", 141 | "collapsed": false, 142 | "input": [], 143 | "language": "python", 144 | "metadata": {}, 145 | "outputs": [] 146 | } 147 | ], 148 | "metadata": {} 149 | } 150 | ] 151 | } -------------------------------------------------------------------------------- /week05/05.3.1 Lots of loops - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Lots of loops" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook illustrates the different ways in which loops for matrix-matrix multiplication can be ordered. Let's start by creating some matrices." 
23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "collapsed": false, 28 | "input": [ 29 | "import numpy as np\n", 30 | "\n", 31 | "m = 4\n", 32 | "n = 3\n", 33 | "k = 5\n", 34 | "\n", 35 | "C = np.matrix( np.random.random( (m, n) ) )\n", 36 | "print( 'C = ' )\n", 37 | "print( C )\n", 38 | "\n", 39 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 40 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 41 | " \n", 42 | "A = np.matrix( np.random.random( (m, k) ) )\n", 43 | "print( 'A = ' )\n", 44 | "print( A )\n", 45 | "\n", 46 | "B = np.matrix( np.random.random( (k, n) ) )\n", 47 | "print( 'B = ' )\n", 48 | "print( B )" 49 | ], 50 | "language": "python", 51 | "metadata": {}, 52 | "outputs": [] 53 | }, 54 | { 55 | "cell_type": "heading", 56 | "level": 2, 57 | "metadata": {}, 58 | "source": [ 59 | "

The basic algorithm\n", 62 | "Given $ A \\in \\mathbb{R}^{m \\times k} $, $ B \\in \\mathbb{R}^{k \\times n} $, and $ C \\in \\mathbb{R}^{m \\times n} $, we will consider $ C := A B + C $.\n", 63 | "

\n", 64 | " \n", 65 | "

\n", 66 | " Now, recall that the $ i,j $ element of $ A B $ is computed as the dot product of \n", 67 | "the $ i $th row of $ A $ with the $ j $th column of $ B $:\n", 68 | "

\n", 69 | "\n", 70 | "

\n", 71 | " $\\sum_{p=0}^{k-1} \\alpha_{i,j} \\beta_{i,j}$\n", 72 | "

\n", 73 | "\n", 74 | "

\n", 75 | " and here, by adding to $ C $ we get\n", 76 | "

\n", 77 | "\n", 78 | "

\n", 79 | "$ \\gamma_{i,j} = \\sum_{p=0}^{k-1} \\alpha_{i,j} \\beta_{i,j} + \\gamma_{i,j}.$\n", 80 | "

\n", 81 | "\n", 82 | "

\n", 83 | " Now, we have to loop over all elements of $ C $. The code, without the FLAMEpy API, becomes\n", 84 | "

" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "collapsed": false, 90 | "input": [ 91 | "def MMmult_lots_of_loops( A, B, C ):\n", 92 | "\n", 93 | " m, n = np.shape( C )\n", 94 | " m, k = np.shape( A )\n", 95 | " \n", 96 | " # i,j,p\n", 97 | " for i in range( m ): \n", 98 | " for j in range( n ): \n", 99 | " for p in range( k ): \n", 100 | " C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]\n", 101 | " \n", 102 | " # i,p,j\n", 103 | "# for i in range( m ): \n", 104 | "# for p in range( k ): \n", 105 | "# for j in range( n ): \n", 106 | "# C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]\n", 107 | " \n", 108 | " # j,i,p\n", 109 | "# for j in range( n ): \n", 110 | "# for i in range( m ): \n", 111 | "# for p in range( k ): \n", 112 | "# C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]\n", 113 | "\n", 114 | " # j,p,i\n", 115 | "# for j in range( n ): \n", 116 | "# for p in range( k ): \n", 117 | "# for i in range( m ): \n", 118 | "# C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]\n", 119 | "\n", 120 | " # p,i,j\n", 121 | "# for p in range( k ): \n", 122 | "# for i in range( m ): \n", 123 | "# for j in range( n ): \n", 124 | "# C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]\n", 125 | "\n", 126 | " # p,j,i\n", 127 | "# for p in range( k ): \n", 128 | "# for j in range( n ): \n", 129 | "# for i in range( m ): \n", 130 | "# C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]" 131 | ], 132 | "language": "python", 133 | "metadata": {}, 134 | "outputs": [] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "collapsed": false, 139 | "input": [ 140 | "C = np.matrix( np.copy( Cold ) ) # restore C\n", 141 | "\n", 142 | "MMmult_lots_of_loops( A, B, C )\n", 143 | "\n", 144 | "print( 'C - ( Cold + A * B )' )\n", 145 | "print( C - ( Cold + A * B ) )" 146 | ], 147 | "language": "python", 148 | "metadata": {}, 149 | "outputs": [] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "Now, go back and systematically move the loops around, so that in the end you try out all six 
orders of the loops: three choices for the first, outermost, loop; two choices for the second loop; one choice for the third loop, for a total of $ 3! $ (3 factorial) choices. Check that you get the right answer, regardless. \n", 156 | "\n", 157 | "(We suggest you just change the box in which the routine is defined and comment out variations that you've already tested. Be careful with indentation.)" 158 | ] 159 | }, 160 | { 161 | "cell_type": "heading", 162 | "level": 2, 163 | "metadata": {}, 164 | "source": [ 165 | "Why $ C := A B + C $ rather than $ C := A B $?" 166 | ] 167 | }, 168 | { 169 | "cell_type": "markdown", 170 | "metadata": {}, 171 | "source": [ 172 | "Notice that we could have written a routine to compute $ C := A B $ instead, given below." 173 | ] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "collapsed": false, 178 | "input": [ 179 | "def MMmult_C_eq_AB( A, B, C ):\n", 180 | "\n", 181 | "    m, n = np.shape( C )\n", 182 | "    m, k = np.shape( A )\n", 183 | "    \n", 184 | "    for i in range( m ): \n", 185 | "        for j in range( n ): \n", 186 | "            C[ i,j ] = 0.0\n", 187 | "            for p in range( k ): \n", 188 | "                C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]" 189 | ], 190 | "language": "python", 191 | "metadata": {}, 192 | "outputs": [] 193 | }, 194 | { 195 | "cell_type": "code", 196 | "collapsed": false, 197 | "input": [ 198 | "C = np.matrix( np.copy( Cold ) )        # restore C\n", 199 | "\n", 200 | "MMmult_C_eq_AB( A, B, C )\n", 201 | "\n", 202 | "print( 'C - ( A * B )' )\n", 203 | "print( C - ( A * B ) )" 204 | ], 205 | "language": "python", 206 | "metadata": {}, 207 | "outputs": [] 208 | }, 209 | { 210 | "cell_type": "markdown", 211 | "metadata": {}, 212 | "source": [ 213 | "Now, start changing the order of the loops. You notice it is not quite as simple. 
But, if you have a routine for computing $ C := A B + C $, you can always initialize $ C = 0 $ (the zero matrix) and then use it to call $ C := A B $:" 214 | ] 215 | }, 216 | { 217 | "cell_type": "code", 218 | "collapsed": false, 219 | "input": [ 220 | "C = np.matrix( np.zeros( np.shape( C ) ) ) # initialize C = 0 \n", 221 | "\n", 222 | "MMmult_lots_of_loops( A, B, C )\n", 223 | "\n", 224 | "print( 'C - ( A * B )' )\n", 225 | "print( C - ( A * B ) )" 226 | ], 227 | "language": "python", 228 | "metadata": {}, 229 | "outputs": [] 230 | }, 231 | { 232 | "cell_type": "code", 233 | "collapsed": false, 234 | "input": [], 235 | "language": "python", 236 | "metadata": {}, 237 | "outputs": [] 238 | } 239 | ], 240 | "metadata": {} 241 | } 242 | ] 243 | } -------------------------------------------------------------------------------- /week05/05.3.1 Lots of loops.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Lots of loops" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook illustrates the different ways in which loops for matrix-matrix multiplication can be ordered. Let's start by creating some matrices." 
23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "collapsed": false, 28 | "input": [ 29 | "import numpy as np\n", 30 | "\n", 31 | "m = 4\n", 32 | "n = 3\n", 33 | "k = 5\n", 34 | "\n", 35 | "C = np.matrix( np.random.random( (m, n) ) )\n", 36 | "print( 'C = ' )\n", 37 | "print( C )\n", 38 | "\n", 39 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 40 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 41 | " \n", 42 | "A = np.matrix( np.random.random( (m, k) ) )\n", 43 | "print( 'A = ' )\n", 44 | "print( A )\n", 45 | "\n", 46 | "B = np.matrix( np.random.random( (k, n) ) )\n", 47 | "print( 'B = ' )\n", 48 | "print( B )" 49 | ], 50 | "language": "python", 51 | "metadata": {}, 52 | "outputs": [] 53 | }, 54 | { 55 | "cell_type": "heading", 56 | "level": 2, 57 | "metadata": {}, 58 | "source": [ 59 | "

The basic algorithm\n", 62 | "Given $ A \\in \\mathbb{R}^{m \\times k} $, $ B \\in \\mathbb{R}^{k \\times n} $, and $ C \\in \\mathbb{R}^{m \\times n} $, we will consider $ C := A B + C $.\n", 63 | "

\n", 64 | " \n", 65 | "

\n", 66 | " Now, recall that the $ i,j $ element of $ A B $ is computed as the dot product of \n", 67 | "the $ i $th row of $ A $ with the $ j $th column of $ B $:\n", 68 | "

\n", 69 | "\n", 70 | "

\n", 71 | " $\\sum_{p=0}^{k-1} \\alpha_{i,j} \\beta_{i,j}$\n", 72 | "

\n", 73 | "\n", 74 | "

\n", 75 | " and here, by adding to $ C $ we get\n", 76 | "

\n", 77 | "\n", 78 | "

\n", 79 | "$ \\gamma_{i,j} = \\sum_{p=0}^{k-1} \\alpha_{i,j} \\beta_{i,j} + \\gamma_{i,j}.$\n", 80 | "

\n", 81 | "\n", 82 | "

\n", 83 | " Now, we have to loop over all elements of $ C $. The code, without the FLAMEpy API, becomes\n", 84 | "

" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "collapsed": false, 90 | "input": [ 91 | "def MMmult_lots_of_loops( A, B, C ):\n", 92 | "\n", 93 | " m, n = np.shape( C )\n", 94 | " m, k = np.shape( A )\n", 95 | " \n", 96 | " # i,j,p\n", 97 | " for i in range( m ): \n", 98 | " for j in range( n ): \n", 99 | " for p in range( k ): \n", 100 | " C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]\n", 101 | " \n", 102 | " # Comment out the loops above and do the other 5 implementations here one at a time..." 103 | ], 104 | "language": "python", 105 | "metadata": {}, 106 | "outputs": [] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "collapsed": false, 111 | "input": [ 112 | "C = np.matrix( np.copy( Cold ) ) # restore C\n", 113 | "\n", 114 | "MMmult_lots_of_loops( A, B, C )\n", 115 | "\n", 116 | "print( 'C - ( Cold + A * B )' )\n", 117 | "print( C - ( Cold + A * B ) )" 118 | ], 119 | "language": "python", 120 | "metadata": {}, 121 | "outputs": [] 122 | }, 123 | { 124 | "cell_type": "markdown", 125 | "metadata": {}, 126 | "source": [ 127 | "Now, go back and systematically move the loops around, so that in the end you try out all six orders of the loops: three choices for the first, outermost, loop; two choices for the secod loop; one choice for the third loop, for a total of $ 3! $ (3 factorial) choices. Check that you get the right answer, regardless. \n", 128 | "\n", 129 | "(We suggest you just change the box in which the routine is defined and comment out variations that you've already tested. Be careful with indentation.)" 130 | ] 131 | }, 132 | { 133 | "cell_type": "heading", 134 | "level": 2, 135 | "metadata": {}, 136 | "source": [ 137 | "Why $ C := A B + C $ rather than $ C := A B $?" 138 | ] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "metadata": {}, 143 | "source": [ 144 | "Notice that we could have written a routine to compute $ C := A B $ instead, given below." 
145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "collapsed": false, 150 | "input": [ 151 | "def MMmult_C_eq_AB( A, B, C ):\n", 152 | "\n", 153 | " m, n = np.shape( C )\n", 154 | " m, k = np.shape( A )\n", 155 | " \n", 156 | " for i in range( m ): \n", 157 | " for j in range( n ): \n", 158 | " C[ i,j ] = 0.0\n", 159 | " for p in range( k ): \n", 160 | " C[ i,j ] = A[ i,p ] * B[ p, j ] + C[ i,j ]" 161 | ], 162 | "language": "python", 163 | "metadata": {}, 164 | "outputs": [] 165 | }, 166 | { 167 | "cell_type": "code", 168 | "collapsed": false, 169 | "input": [ 170 | "C = np.matrix( np.copy( Cold ) ) # restore C\n", 171 | "\n", 172 | "MMmult_C_eq_AB( A, B, C )\n", 173 | "\n", 174 | "print( 'C - ( A * B )' )\n", 175 | "print( C - ( A * B ) )" 176 | ], 177 | "language": "python", 178 | "metadata": {}, 179 | "outputs": [] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": {}, 184 | "source": [ 185 | "Now, start changing the order of the loops. You notice it is not quite as simple. 
But, if you have a routine for computing $ C := A B + C $, you can always initialize $ C = 0 $ (the zero matrix) and then use it to call $ C := A B $:" 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "collapsed": false, 191 | "input": [ 192 | "C = np.matrix( np.zeros( np.shape( C ) ) ) # initialize C = 0 \n", 193 | "\n", 194 | "MMmult_lots_of_loops( A, B, C )\n", 195 | "\n", 196 | "print( 'C - ( A * B )' )\n", 197 | "print( C - ( A * B ) )" 198 | ], 199 | "language": "python", 200 | "metadata": {}, 201 | "outputs": [] 202 | }, 203 | { 204 | "cell_type": "code", 205 | "collapsed": false, 206 | "input": [], 207 | "language": "python", 208 | "metadata": {}, 209 | "outputs": [] 210 | } 211 | ], 212 | "metadata": {} 213 | } 214 | ] 215 | } -------------------------------------------------------------------------------- /week05/05.3.2 Matrix-matrix multiplication by columns - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-matrix multiplication by columns" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "We now look at how the FLAMEPy API can be used to implement different matrix-matrix multiplication algorithms. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 
30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "m = 4\n", 39 | "n = 3\n", 40 | "k = 5\n", 41 | "\n", 42 | "C = np.matrix( np.random.random( (m, n) ) )\n", 43 | "print( 'C = ' )\n", 44 | "print( C )\n", 45 | "\n", 46 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 47 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 48 | " \n", 49 | "A = np.matrix( np.random.random( (m, k) ) )\n", 50 | "print( 'A = ' )\n", 51 | "print( A )\n", 52 | "\n", 53 | "B = np.matrix( np.random.random( (k, n) ) )\n", 54 | "print( 'B = ' )\n", 55 | "print( B )" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "heading", 63 | "level": 2, 64 | "metadata": {}, 65 | "source": [ 66 | "

The algorithm

\n", 67 | "\n", 68 | "\"Matrix-matrix\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "

The routine Gemm_nn_unb_var1( A, B, C )

\n", 76 | "\n", 77 | "This routine computes $ C := A B + C $ by columns. The \"\\_nn\\_\" means that this is the \"No transpose, No transpose\" case of matrix multiplication. \n", 78 | "The reason for this is that the operations $ C := A^T B + C $ (\"\\_tn\\_\" or \"Transpose, No transpose\"), $ C := A B^T + C $ (\"\\_nt\\_\" or \"No transpose, Transpose\"), and $ C := A^T B^T + C $ (\"\\_tt\\_\" or \"Transpose, Transpose\") are also encountered. \n", 79 | " \n", 80 | "The specific laff function we will use is\n", 81 | "\n", 95 | "\n", 96 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "collapsed": false, 102 | "input": [ 103 | "import flame\n", 104 | "import laff as laff\n", 105 | "\n", 106 | "def Gemm_nn_unb_var1(A, B, C):\n", 107 | "\n", 108 | " BL, BR = flame.part_1x2(B, \\\n", 109 | " 0, 'LEFT')\n", 110 | "\n", 111 | " CL, CR = flame.part_1x2(C, \\\n", 112 | " 0, 'LEFT')\n", 113 | "\n", 114 | " while BL.shape[1] < B.shape[1]:\n", 115 | "\n", 116 | " B0, b1, B2 = flame.repart_1x2_to_1x3(BL, BR, \\\n", 117 | " 1, 'RIGHT')\n", 118 | "\n", 119 | " C0, c1, C2 = flame.repart_1x2_to_1x3(CL, CR, \\\n", 120 | " 1, 'RIGHT')\n", 121 | "\n", 122 | " #------------------------------------------------------------#\n", 123 | "\n", 124 | " laff.gemv( 'No transpose', 1.0, A, b1, 1.0, c1 )\n", 125 | "\n", 126 | " #------------------------------------------------------------#\n", 127 | "\n", 128 | " BL, BR = flame.cont_with_1x3_to_1x2(B0, b1, B2, \\\n", 129 | " 'LEFT')\n", 130 | "\n", 131 | " CL, CR = flame.cont_with_1x3_to_1x2(C0, c1, C2, \\\n", 132 | " 'LEFT')\n", 133 | "\n", 134 | " flame.merge_1x2(CL, CR, C)\n", 135 | "\n" 136 | ], 137 | "language": "python", 138 | "metadata": {}, 139 | "outputs": [] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "collapsed": false, 144 | "input": [ 145 | "C = np.matrix( np.copy( Cold ) ) # restore C \n", 146 | "\n", 147 | 
"Gemm_nn_unb_var1( A, B, C )\n", 148 | "\n", 149 | "print( 'C - ( Cold + A * B )' )\n", 150 | "print( C - ( Cold + A * B ) )" 151 | ], 152 | "language": "python", 153 | "metadata": {}, 154 | "outputs": [] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "Bingo! It works!" 161 | ] 162 | }, 163 | { 164 | "cell_type": "heading", 165 | "level": 2, 166 | "metadata": {}, 167 | "source": [ 168 | "Watch the algorithm at work!" 169 | ] 170 | }, 171 | { 172 | "cell_type": "markdown", 173 | "metadata": {}, 174 | "source": [ 175 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 176 | "\n", 177 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 178 | "\n", 179 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 180 | ] 181 | } 182 | ], 183 | "metadata": {} 184 | } 185 | ] 186 | } -------------------------------------------------------------------------------- /week05/05.3.2 Matrix-matrix multiplication by columns.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-matrix multiplication by columns" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "We now look at how the FLAMEPy API can be used to implement different matrix-matrix multiplication algorithms. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 
30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "m = 4\n", 39 | "n = 3\n", 40 | "k = 5\n", 41 | "\n", 42 | "C = np.matrix( np.random.random( (m, n) ) )\n", 43 | "print( 'C = ' )\n", 44 | "print( C )\n", 45 | "\n", 46 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 47 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 48 | " \n", 49 | "A = np.matrix( np.random.random( (m, k) ) )\n", 50 | "print( 'A = ' )\n", 51 | "print( A )\n", 52 | "\n", 53 | "B = np.matrix( np.random.random( (k, n) ) )\n", 54 | "print( 'B = ' )\n", 55 | "print( B )" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "heading", 63 | "level": 2, 64 | "metadata": {}, 65 | "source": [ 66 | "
<h2> The algorithm </h2>
\n", 67 | "\n", 68 | "\"Matrix-matrix\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "
<h2> The routine <code> Gemm_nn_unb_var1( A, B, C ) </code> </h2>
\n", 76 | "\n", 77 | "This routine computes $ C := A B + C $ by columns. The \"\\_nn\\_\" means that this is the \"No transpose, No transpose\" case of matrix multiplication. \n", 78 | "The reason for this is that the operations $ C := A^T B + C $ (\"\\_tn\\_\" or \"Transpose, No transpose\"), $ C := A B^T + C $ (\"\\_nt\\_\" or \"No transpose, Transpose\"), and $ C := A^T B^T + C $ (\"\\_tt\\_\" or \"Transpose, Transpose\") are also encountered. \n", 79 | " \n", 80 | "The specific laff function we will use is\n", 81 | "\n", 95 | "\n", 96 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "collapsed": false, 102 | "input": [ 103 | "# Your code here..." 104 | ], 105 | "language": "python", 106 | "metadata": {}, 107 | "outputs": [] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "collapsed": false, 112 | "input": [ 113 | "C = np.matrix( np.copy( Cold ) ) # restore C \n", 114 | "\n", 115 | "Gemm_nn_unb_var1( A, B, C )\n", 116 | "\n", 117 | "print( 'C - ( Cold + A * B )' )\n", 118 | "print( C - ( Cold + A * B ) )" 119 | ], 120 | "language": "python", 121 | "metadata": {}, 122 | "outputs": [] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "Bingo! It works!" 129 | ] 130 | }, 131 | { 132 | "cell_type": "heading", 133 | "level": 2, 134 | "metadata": {}, 135 | "source": [ 136 | "Watch the algorithm at work!" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 144 | "\n", 145 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. 
But if you do it right, you are in for a treat.\n", 146 | "\n", 147 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 148 | ] 149 | } 150 | ], 151 | "metadata": {} 152 | } 153 | ] 154 | } -------------------------------------------------------------------------------- /week05/05.3.3 Matrix-matrix multiplication by rows - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-matrix multiplication by rows" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "We continue to look at how the FLAMEPy API can be used to implement different matrix-matrix multiplication algorithms. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "m = 4\n", 39 | "n = 3\n", 40 | "k = 5\n", 41 | "\n", 42 | "C = np.matrix( np.random.random( (m, n) ) )\n", 43 | "print( 'C = ' )\n", 44 | "print( C )\n", 45 | "\n", 46 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 47 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 48 | " \n", 49 | "A = np.matrix( np.random.random( (m, k) ) )\n", 50 | "print( 'A = ' )\n", 51 | "print( A )\n", 52 | "\n", 53 | "B = np.matrix( np.random.random( (k, n) ) )\n", 54 | "print( 'B = ' )\n", 55 | "print( B )" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "heading", 63 | "level": 2, 64 | "metadata": {}, 65 | "source": [ 66 | "
<h2> The algorithm </h2>
\n", 67 | "\n", 68 | "\"Matrix-matrix\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "
<h2> The routine <code> Gemm_nn_unb_var2( A, B, C ) </code> </h2>
\n", 76 | "\n", 77 | "This routine computes $ C := A B + C $ by rows. The \"\\_nn\\_\" means that this is the \"No transpose, No transpose\" case of matrix multiplication. \n", 78 | "The reason for this is that the operations $ C := A^T B + C $ (\"\\_tn\\_\" or \"Transpose, No transpose\"), $ C := A B^T + C $ (\"\\_nt\\_\" or \"No transpose, Transpose\"), and $ C := A^T B^T + C $ (\"\\_tt\\_\" or \"Transpose, Transpose\") are also encountered. \n", 79 | " \n", 80 | "The specific laff function we will use is\n", 81 | "\n", 95 | "\n", 96 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "collapsed": false, 102 | "input": [ 103 | "import flame\n", 104 | "import laff as laff\n", 105 | "\n", 106 | "def Gemm_nn_unb_var2(A, B, C):\n", 107 | "\n", 108 | " AT, \\\n", 109 | " AB = flame.part_2x1(A, \\\n", 110 | " 0, 'TOP')\n", 111 | "\n", 112 | " CT, \\\n", 113 | " CB = flame.part_2x1(C, \\\n", 114 | " 0, 'TOP')\n", 115 | "\n", 116 | " while AT.shape[0] < A.shape[0]:\n", 117 | "\n", 118 | " A0, \\\n", 119 | " a1t, \\\n", 120 | " A2 = flame.repart_2x1_to_3x1(AT, \\\n", 121 | " AB, \\\n", 122 | " 1, 'BOTTOM')\n", 123 | "\n", 124 | " C0, \\\n", 125 | " c1t, \\\n", 126 | " C2 = flame.repart_2x1_to_3x1(CT, \\\n", 127 | " CB, \\\n", 128 | " 1, 'BOTTOM')\n", 129 | "\n", 130 | " #------------------------------------------------------------#\n", 131 | "\n", 132 | " laff.gemv( 'Transpose', 1.0, B, a1t, 1.0, c1t )\n", 133 | "\n", 134 | " #------------------------------------------------------------#\n", 135 | "\n", 136 | " AT, \\\n", 137 | " AB = flame.cont_with_3x1_to_2x1(A0, \\\n", 138 | " a1t, \\\n", 139 | " A2, \\\n", 140 | " 'TOP')\n", 141 | "\n", 142 | " CT, \\\n", 143 | " CB = flame.cont_with_3x1_to_2x1(C0, \\\n", 144 | " c1t, \\\n", 145 | " C2, \\\n", 146 | " 'TOP')\n", 147 | "\n", 148 | " flame.merge_2x1(CT, \\\n", 149 | " CB, C)\n", 150 | "\n" 151 | ], 152 | "language": 
"python", 153 | "metadata": {}, 154 | "outputs": [] 155 | }, 156 | { 157 | "cell_type": "code", 158 | "collapsed": false, 159 | "input": [ 160 | "C = np.matrix( np.copy( Cold ) ) # restore C \n", 161 | "\n", 162 | "Gemm_nn_unb_var2( A, B, C )\n", 163 | "\n", 164 | "print( 'C - ( Cold + A * B )' )\n", 165 | "print( C - ( Cold + A * B ) )" 166 | ], 167 | "language": "python", 168 | "metadata": {}, 169 | "outputs": [] 170 | }, 171 | { 172 | "cell_type": "markdown", 173 | "metadata": {}, 174 | "source": [ 175 | "Bingo! It works!" 176 | ] 177 | }, 178 | { 179 | "cell_type": "heading", 180 | "level": 2, 181 | "metadata": {}, 182 | "source": [ 183 | "Watch the algorithm at work!" 184 | ] 185 | }, 186 | { 187 | "cell_type": "markdown", 188 | "metadata": {}, 189 | "source": [ 190 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 191 | "\n", 192 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 193 | "\n", 194 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 
195 | ] 196 | } 197 | ], 198 | "metadata": {} 199 | } 200 | ] 201 | } -------------------------------------------------------------------------------- /week05/05.3.3 Matrix-matrix multiplication by rows.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-matrix multiplication by rows" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "We continue to look at how the FLAMEPy API can be used to implement different matrix-matrix multiplication algorithms. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "m = 4\n", 39 | "n = 3\n", 40 | "k = 5\n", 41 | "\n", 42 | "C = np.matrix( np.random.random( (m, n) ) )\n", 43 | "print( 'C = ' )\n", 44 | "print( C )\n", 45 | "\n", 46 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 47 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 48 | " \n", 49 | "A = np.matrix( np.random.random( (m, k) ) )\n", 50 | "print( 'A = ' )\n", 51 | "print( A )\n", 52 | "\n", 53 | "B = np.matrix( np.random.random( (k, n) ) )\n", 54 | "print( 'B = ' )\n", 55 | "print( B )" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "heading", 63 | "level": 2, 64 | "metadata": {}, 65 | "source": [ 66 | "
<h2> The algorithm </h2>
\n", 67 | "\n", 68 | "\"Matrix-matrix\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "
<h2> The routine <code> Gemm_nn_unb_var2( A, B, C ) </code> </h2>
\n", 76 | "\n", 77 | "This routine computes $ C := A B + C $ by rows. The \"\\_nn\\_\" means that this is the \"No transpose, No transpose\" case of matrix multiplication. \n", 78 | "The reason for this is that the operations $ C := A^T B + C $ (\"\\_tn\\_\" or \"Transpose, No transpose\"), $ C := A B^T + C $ (\"\\_nt\\_\" or \"No transpose, Transpose\"), and $ C := A^T B^T + C $ (\"\\_tt\\_\" or \"Transpose, Transpose\") are also encountered. \n", 79 | " \n", 80 | "The specific laff function we will use is\n", 81 | "\n", 95 | "\n", 96 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "collapsed": false, 102 | "input": [ 103 | "# Your code here..." 104 | ], 105 | "language": "python", 106 | "metadata": {}, 107 | "outputs": [] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "collapsed": false, 112 | "input": [ 113 | "C = np.matrix( np.copy( Cold ) ) # restore C \n", 114 | "\n", 115 | "Gemm_nn_unb_var2( A, B, C )\n", 116 | "\n", 117 | "print( 'C - ( Cold + A * B )' )\n", 118 | "print( C - ( Cold + A * B ) )" 119 | ], 120 | "language": "python", 121 | "metadata": {}, 122 | "outputs": [] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "Bingo! It works!" 129 | ] 130 | }, 131 | { 132 | "cell_type": "heading", 133 | "level": 2, 134 | "metadata": {}, 135 | "source": [ 136 | "Watch the algorithm at work!" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 144 | "\n", 145 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. 
But if you do it right, you are in for a treat.\n", 146 | "\n", 147 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 148 | ] 149 | } 150 | ], 151 | "metadata": {} 152 | } 153 | ] 154 | } -------------------------------------------------------------------------------- /week05/05.3.4 Matrix-matrix multiplication via rank-1 updates - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-matrix multiplication via rank-1 updates" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "We continue to look at how the FLAMEPy API can be used to implement different matrix-matrix multiplication algorithms. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "m = 4\n", 39 | "n = 3\n", 40 | "k = 5\n", 41 | "\n", 42 | "C = np.matrix( np.random.random( (m, n) ) )\n", 43 | "print( 'C = ' )\n", 44 | "print( C )\n", 45 | "\n", 46 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 47 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 48 | " \n", 49 | "A = np.matrix( np.random.random( (m, k) ) )\n", 50 | "print( 'A = ' )\n", 51 | "print( A )\n", 52 | "\n", 53 | "B = np.matrix( np.random.random( (k, n) ) )\n", 54 | "print( 'B = ' )\n", 55 | "print( B )" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "heading", 63 | "level": 2, 64 | "metadata": {}, 65 | "source": [ 66 | "
<h2> The algorithm </h2>
\n", 67 | "\n", 68 | "\"Matrix-matrix\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "
<h2> The routine <code> Gemm_nn_unb_var3( A, B, C ) </code> </h2>
\n", 76 | "\n", 77 | "This routine computes $ C := A B + C $ via rank-1 updates. The \"\\_nn\\_\" means that this is the \"No transpose, No transpose\" case of matrix multiplication. \n", 78 | "The reason for this is that the operations $ C := A^T B + C $ (\"\\_tn\\_\" or \"Transpose, No transpose\"), $ C := A B^T + C $ (\"\\_nt\\_\" or \"No transpose, Transpose\"), and $ C := A^T B^T + C $ (\"\\_tt\\_\" or \"Transpose, Transpose\") are also encountered. \n", 79 | " \n", 80 | "The specific laff function we will use is\n", 81 | "\n", 86 | "\n", 87 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "collapsed": false, 93 | "input": [ 94 | "import flame\n", 95 | "import laff as laff\n", 96 | "\n", 97 | "def Gemm_nn_unb_var3(A, B, C):\n", 98 | "\n", 99 | " AL, AR = flame.part_1x2(A, \\\n", 100 | " 0, 'LEFT')\n", 101 | "\n", 102 | " BT, \\\n", 103 | " BB = flame.part_2x1(B, \\\n", 104 | " 0, 'TOP')\n", 105 | "\n", 106 | " while AL.shape[1] < A.shape[1]:\n", 107 | "\n", 108 | " A0, a1, A2 = flame.repart_1x2_to_1x3(AL, AR, \\\n", 109 | " 1, 'RIGHT')\n", 110 | "\n", 111 | " B0, \\\n", 112 | " b1t, \\\n", 113 | " B2 = flame.repart_2x1_to_3x1(BT, \\\n", 114 | " BB, \\\n", 115 | " 1, 'BOTTOM')\n", 116 | "\n", 117 | " #------------------------------------------------------------#\n", 118 | "\n", 119 | " laff.ger( 1.0, a1, b1t, C )\n", 120 | "\n", 121 | " #------------------------------------------------------------#\n", 122 | "\n", 123 | " AL, AR = flame.cont_with_1x3_to_1x2(A0, a1, A2, \\\n", 124 | " 'LEFT')\n", 125 | "\n", 126 | " BT, \\\n", 127 | " BB = flame.cont_with_3x1_to_2x1(B0, \\\n", 128 | " b1t, \\\n", 129 | " B2, \\\n", 130 | " 'TOP')\n", 131 | "\n" 132 | ], 133 | "language": "python", 134 | "metadata": {}, 135 | "outputs": [] 136 | }, 137 | { 138 | "cell_type": "code", 139 | "collapsed": false, 140 | "input": [ 141 | "C = np.matrix( np.copy( Cold ) ) # restore 
C \n", 142 | "\n", 143 | "Gemm_nn_unb_var3( A, B, C )\n", 144 | "\n", 145 | "print( 'C - ( Cold + A * B )' )\n", 146 | "print( C - ( Cold + A * B ) )" 147 | ], 148 | "language": "python", 149 | "metadata": {}, 150 | "outputs": [] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "Bingo! It works!" 157 | ] 158 | }, 159 | { 160 | "cell_type": "heading", 161 | "level": 2, 162 | "metadata": {}, 163 | "source": [ 164 | "Watch the algorithm at work!" 165 | ] 166 | }, 167 | { 168 | "cell_type": "markdown", 169 | "metadata": {}, 170 | "source": [ 171 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 172 | "\n", 173 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 174 | "\n", 175 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 176 | ] 177 | } 178 | ], 179 | "metadata": {} 180 | } 181 | ] 182 | } -------------------------------------------------------------------------------- /week05/05.3.4 Matrix-matrix multiplication via rank-1 updates.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Matrix-matrix multiplication via rank-1 updates" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "We continue to look at how the FLAMEPy API can be used to implement different matrix-matrix multiplication algorithms. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 
30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "m = 4\n", 39 | "n = 3\n", 40 | "k = 5\n", 41 | "\n", 42 | "C = np.matrix( np.random.random( (m, n) ) )\n", 43 | "print( 'C = ' )\n", 44 | "print( C )\n", 45 | "\n", 46 | "Cold = np.matrix( np.zeros( (m,n ) ) )\n", 47 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 48 | " \n", 49 | "A = np.matrix( np.random.random( (m, k) ) )\n", 50 | "print( 'A = ' )\n", 51 | "print( A )\n", 52 | "\n", 53 | "B = np.matrix( np.random.random( (k, n) ) )\n", 54 | "print( 'B = ' )\n", 55 | "print( B )" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "heading", 63 | "level": 2, 64 | "metadata": {}, 65 | "source": [ 66 | "
<h2> The algorithm </h2>
\n", 67 | "\n", 68 | "\"Matrix-matrix\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "
<h2> The routine <code> Gemm_nn_unb_var3( A, B, C ) </code> </h2>
\n", 76 | "\n", 77 | "This routine computes $ C := A B + C $ via rank-1 updates. The \"\\_nn\\_\" means that this is the \"No transpose, No transpose\" case of matrix multiplication. \n", 78 | "The reason for this is that the operations $ C := A^T B + C $ (\"\\_tn\\_\" or \"Transpose, No transpose\"), $ C := A B^T + C $ (\"\\_nt\\_\" or \"No transpose, Transpose\"), and $ C := A^T B^T + C $ (\"\\_tt\\_\" or \"Transpose, Transpose\") are also encountered. \n", 79 | " \n", 80 | "The specific laff function we will use is\n", 81 | "\n", 86 | "\n", 87 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "collapsed": false, 93 | "input": [ 94 | "# Your code here..." 95 | ], 96 | "language": "python", 97 | "metadata": {}, 98 | "outputs": [] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "collapsed": false, 103 | "input": [ 104 | "C = np.matrix( np.copy( Cold ) ) # restore C \n", 105 | "\n", 106 | "Gemm_nn_unb_var3( A, B, C )\n", 107 | "\n", 108 | "print( 'C - ( Cold + A * B )' )\n", 109 | "print( C - ( Cold + A * B ) )" 110 | ], 111 | "language": "python", 112 | "metadata": {}, 113 | "outputs": [] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "Bingo! It works!" 120 | ] 121 | }, 122 | { 123 | "cell_type": "heading", 124 | "level": 2, 125 | "metadata": {}, 126 | "source": [ 127 | "Watch the algorithm at work!" 128 | ] 129 | }, 130 | { 131 | "cell_type": "markdown", 132 | "metadata": {}, 133 | "source": [ 134 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 135 | "\n", 136 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. 
But if you do it right, you are in for a treat.\n", 137 | "\n", 138 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 139 | ] 140 | } 141 | ], 142 | "metadata": {} 143 | } 144 | ] 145 | } -------------------------------------------------------------------------------- /week05/05.5.1 Multiplying upper triangular matrices.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "Multiplying upper triangular matrices" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "This notebook helps you implement the operation $ C := U R $ where $ C, U, R \\in \\mathbb{R}^{n \\times n} $, and $ U $ and $ R $ are upper triangular. $ U $ and $ R $ are stored in the upper triangular part of numpy matrices U and R . The upper triangular part of matrix C is to be overwritten with the resulting upper triangular matrix. " 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we create some matrices." 
30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "collapsed": false, 35 | "input": [ 36 | "import numpy as np\n", 37 | "\n", 38 | "n = 5\n", 39 | "\n", 40 | "C = np.matrix( np.random.random( (n, n) ) )\n", 41 | "print( 'C = ' )\n", 42 | "print( C )\n", 43 | "\n", 44 | "Cold = np.matrix( np.zeros( (n,n ) ) )\n", 45 | "Cold = np.matrix( np.copy( C ) ) # an alternative way of doing a \"hard\" copy, in this case of a matrix\n", 46 | " \n", 47 | "U = np.matrix( np.random.random( (n, n) ) )\n", 48 | "print( 'U = ' )\n", 49 | "print( U )\n", 50 | "\n", 51 | "R = np.matrix( np.random.random( (n, n) ) )\n", 52 | "print( 'R = ' )\n", 53 | "print( R )" 54 | ], 55 | "language": "python", 56 | "metadata": {}, 57 | "outputs": [] 58 | }, 59 | { 60 | "cell_type": "heading", 61 | "level": 2, 62 | "metadata": {}, 63 | "source": [ 64 | "
<h2> The algorithm </h2>
\n", 65 | "\n", 66 | "(The homework was to propose the updates to $ C $.)\n", 67 | "\n", 68 | "\"Upper\n" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "
<h2> The routine <code> Trtrmm_uu_unb_var1( U, R, C ) </code> </h2>
\n", 76 | "\n", 77 | "This routine computes $ C := U R + C $. The \"\\_uu\\_\" means that $ U $ and $ R $ are upper triangular matrices (which means $ C $ is too). However, the lower triangular parts of numpy matrices U , R , and C are not to be \"touched\".\n", 78 | " \n", 79 | "The specific laff function you will want to use is some subset of\n", 80 | "\n", 87 | "\n", 88 | " Hint: If you multiply with $ U_{00} $, you will want to use np.triu( U00 ) to make sure you don't compute with the nonzeroes below the diagonal.\n", 89 | "\n", 90 | "Use the Spark webpage to generate a code skeleton. (Make sure you adjust the name of the routine.)" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "collapsed": false, 96 | "input": [ 97 | "# insert code here\n", 98 | "\n", 99 | "\n" 100 | ], 101 | "language": "python", 102 | "metadata": {}, 103 | "outputs": [] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "collapsed": false, 108 | "input": [ 109 | "C = np.matrix( np.copy( Cold ) ) # restore C \n", 110 | "\n", 111 | "Trtrmm_uu_unb_var1( U, R, C )\n", 112 | "\n", 113 | "# compute it using numpy *. This is a little complex, since we want to make sure we\n", 114 | "# don't change the lower triangular part of C\n", 115 | "# Cref = np.tril( Cold, -1 ) + np.triu( U ) * np.triu( R )\n", 116 | "Cref = np.triu( U ) * np.triu( R ) + np.tril( Cold, -1 )\n", 117 | "print( 'C - Cref' )\n", 118 | "print( C - Cref )" 119 | ], 120 | "language": "python", 121 | "metadata": {}, 122 | "outputs": [] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "In theory, you should get a matrix of (approximately) all zeroes." 129 | ] 130 | }, 131 | { 132 | "cell_type": "heading", 133 | "level": 2, 134 | "metadata": {}, 135 | "source": [ 136 | "Watch the algorithm at work!" 
137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "Copy and paste the code into PictureFLAME , a webpage where you can watch your routine in action. Just cut and paste into the box. \n", 144 | "\n", 145 | "Disclaimer: we implemented a VERY simple interpreter. If you do something wrong, we cannot guarantee the results. But if you do it right, you are in for a treat.\n", 146 | "\n", 147 | "If you want to reset the problem, just click in the box into which you pasted the code and hit \"next\" again." 148 | ] 149 | } 150 | ], 151 | "metadata": {} 152 | } 153 | ] 154 | } -------------------------------------------------------------------------------- /week08/08.2.5 Alternative Gauss Jordon Algorithm - Answer.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "8.2.5 Alternative Gauss Jordon Algorithm" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "In this notebook, you will implement the alternative Gauss Jordan algorithm that overwrites $ A $ in one sweep with the identity matrix and $ B $ with the inverse of the original matrix $ A $.\n", 23 | "\n", 24 | " Be sure to make a copy!!!! \n", 25 | "\n", 26 | "
<h2> First, let's create a matrix $ A $ and set $ B $ to the identity. </h2>
" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "collapsed": false, 32 | "input": [ 33 | "import numpy as np\n", 34 | "import laff\n", 35 | "import flame\n", 36 | "\n", 37 | "L = np.matrix( ' 1, 0, 0. 0;\\\n", 38 | " -2, 1, 0, 0;\\\n", 39 | " 1,-3, 1, 0;\\\n", 40 | " 2, 3,-1, 1' )\n", 41 | "\n", 42 | "U = np.matrix( ' 2,-1, 3,-2;\\\n", 43 | " 0,-2, 1,-1;\\\n", 44 | " 0, 0, 1, 2;\\\n", 45 | " 0, 0, 0, 3' )\n", 46 | "\n", 47 | "A = L * U\n", 48 | "Aold = np.matrix( np.copy( A ) ) \n", 49 | "B = np.matrix( np.eye( 4 ) )\n", 50 | "\n", 51 | "print( 'A = ' )\n", 52 | "print( A )\n", 53 | "\n", 54 | "print( 'B = ' )\n", 55 | "print( B )\n" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "
<h2> Implement the alternative Gauss-Jordan algorithm from 8.2.5 </h2>
\n", 66 | "\n", 67 | "Here is the algorithm:\n", 68 | "\n", 69 | "\"Alternative\n", 70 | " \n", 71 | " Important: if you make a mistake, rerun ALL cells above the cell in which you were working, and then the one where you are working. " 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "Create the routine\n", 79 | " GJ_Inverse_alt \n", 80 | "with the Spark webpage for the algorithm" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "collapsed": false, 86 | "input": [ 87 | "import flame\n", 88 | "\n", 89 | "def GJ_Inverse_alt(A, B):\n", 90 | "\n", 91 | " ATL, ATR, \\\n", 92 | " ABL, ABR = flame.part_2x2(A, \\\n", 93 | " 0, 0, 'TL')\n", 94 | "\n", 95 | " BTL, BTR, \\\n", 96 | " BBL, BBR = flame.part_2x2(B, \\\n", 97 | " 0, 0, 'TL')\n", 98 | "\n", 99 | " while ATL.shape[0] < A.shape[0]:\n", 100 | "\n", 101 | " A00, a01, A02, \\\n", 102 | " a10t, alpha11, a12t, \\\n", 103 | " A20, a21, A22 = flame.repart_2x2_to_3x3(ATL, ATR, \\\n", 104 | " ABL, ABR, \\\n", 105 | " 1, 1, 'BR')\n", 106 | "\n", 107 | " B00, b01, B02, \\\n", 108 | " b10t, beta11, b12t, \\\n", 109 | " B20, b21, B22 = flame.repart_2x2_to_3x3(BTL, BTR, \\\n", 110 | " BBL, BBR, \\\n", 111 | " 1, 1, 'BR')\n", 112 | "\n", 113 | " #------------------------------------------------------------#\n", 114 | "\n", 115 | " # a01 := a01 / alpha11\n", 116 | " # a21 := a21 / alpha11\n", 117 | " laff.invscal( alpha11, a01 )\n", 118 | " laff.invscal( alpha11, a21 )\n", 119 | " \n", 120 | " # A02 := A02 - a01 * a12t\n", 121 | " laff.ger( -1.0, a01, a12t, A02 )\n", 122 | " # A22 := A22 - a21 * a12t\n", 123 | " laff.ger( -1.0, a21, a12t, A22 )\n", 124 | " \n", 125 | " # B00 := B00 - a01 * b01t\n", 126 | " laff.ger( -1.0, a01, b10t, B00 )\n", 127 | " # B20 := B20 - a21 * b01t\n", 128 | " laff.ger( -1.0, a21, b10t, B20 )\n", 129 | " \n", 130 | " # b01 := - a01 (= - u01 in the discussion)\n", 131 | " laff.copy( a01, b01 )\n", 132 | " laff.scal( -1.0, b01 )\n", 133 | " # b21 := - a21 
(= - l21 in the discussion)\n", 134 | " laff.copy( a21, b21 )\n", 135 | " laff.scal( -1.0, b21 )\n", 136 | " \n", 137 | " # a12t:= a21t / alpha11 \n", 138 | " laff.invscal( alpha11, a12t )\n", 139 | " # b10t:= b10t / alpha11 \n", 140 | " laff.invscal( alpha11, b10t )\n", 141 | " \n", 142 | " # beta11 := 1.0 / alpha11\n", 143 | " laff.invscal( alpha11, beta11 )\n", 144 | " \n", 145 | " # a01 = 0 (zero vector)\n", 146 | " # alpha11 = 1\n", 147 | " # a21 = 0 (zero vector)\n", 148 | " laff.zerov( a01 )\n", 149 | " laff.onev( alpha11 )\n", 150 | " laff.zerov( a21 )\n", 151 | "\n", 152 | " #------------------------------------------------------------#\n", 153 | "\n", 154 | " ATL, ATR, \\\n", 155 | " ABL, ABR = flame.cont_with_3x3_to_2x2(A00, a01, A02, \\\n", 156 | " a10t, alpha11, a12t, \\\n", 157 | " A20, a21, A22, \\\n", 158 | " 'TL')\n", 159 | "\n", 160 | " BTL, BTR, \\\n", 161 | " BBL, BBR = flame.cont_with_3x3_to_2x2(B00, b01, B02, \\\n", 162 | " b10t, beta11, b12t, \\\n", 163 | " B20, b21, B22, \\\n", 164 | " 'TL')\n", 165 | "\n", 166 | " flame.merge_2x2(ATL, ATR, \\\n", 167 | " ABL, ABR, A)\n", 168 | "\n", 169 | " flame.merge_2x2(BTL, BTR, \\\n", 170 | " BBL, BBR, B)\n", 171 | "\n", 172 | "\n", 173 | "\n" 174 | ], 175 | "language": "python", 176 | "metadata": {}, 177 | "outputs": [] 178 | }, 179 | { 180 | "cell_type": "markdown", 181 | "metadata": {}, 182 | "source": [ 183 | "
<h2> Test the routine </h2>
\n", 184 | "\n", 185 | " Important: if you make a mistake, rerun ALL cells above the cell in which you were working, and then the one where you are working. " 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "collapsed": false, 191 | "input": [ 192 | "GJ_Inverse_alt( A, B )\n", 193 | "\n", 194 | "print( A )\n", 195 | "print( B )" 196 | ], 197 | "language": "python", 198 | "metadata": {}, 199 | "outputs": [] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "metadata": {}, 204 | "source": [ 205 | "Matrix $ A $ should now be an identity matrix and $ B $ should no longer be an identity matrix.\n", 206 | "\n", 207 | "Check if $ B $ now equals (approximately) the inverse of the original matrix $ A $:" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "collapsed": false, 213 | "input": [ 214 | "print( Aold * B )" 215 | ], 216 | "language": "python", 217 | "metadata": {}, 218 | "outputs": [] 219 | }, 220 | { 221 | "cell_type": "code", 222 | "collapsed": false, 223 | "input": [], 224 | "language": "python", 225 | "metadata": {}, 226 | "outputs": [] 227 | } 228 | ], 229 | "metadata": {} 230 | } 231 | ] 232 | } -------------------------------------------------------------------------------- /week08/08.2.5 Alternative Gauss Jordon Algorithm.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | "8.2.5 Alternative Gauss Jordon Algorithm" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "In this notebook, you will implement the alternative Gauss Jordan algorithm that overwrites $ A $ in one sweep with the identity matrix and $ B $ with the inverse of the original matrix $ A $.\n", 23 | "\n", 24 | " Be sure to make a copy!!!! \n", 25 | "\n", 26 | "

First, let's create a matrix $ A $ and set $ B $ to the identity.

" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "collapsed": false, 32 | "input": [ 33 | "import numpy as np\n", 34 | "import laff\n", 35 | "import flame\n", 36 | "\n", 37 | "L = np.matrix( ' 1, 0, 0. 0;\\\n", 38 | " -2, 1, 0, 0;\\\n", 39 | " 1,-3, 1, 0;\\\n", 40 | " 2, 3,-1, 1' )\n", 41 | "\n", 42 | "U = np.matrix( ' 2,-1, 3,-2;\\\n", 43 | " 0,-2, 1,-1;\\\n", 44 | " 0, 0, 1, 2;\\\n", 45 | " 0, 0, 0, 3' )\n", 46 | "\n", 47 | "A = L * U\n", 48 | "Aold = np.matrix( np.copy( A ) ) \n", 49 | "B = np.matrix( np.eye( 4 ) )\n", 50 | "\n", 51 | "print( 'A = ' )\n", 52 | "print( A )\n", 53 | "\n", 54 | "print( 'B = ' )\n", 55 | "print( B )\n" 56 | ], 57 | "language": "python", 58 | "metadata": {}, 59 | "outputs": [] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "

Implement the alternative Gauss-Jordan algorithm from 8.2.5

\n", 66 | "\n", 67 | "Here is the algorithm:\n", 68 | "\n", 69 | "\"Alternative\n", 70 | " \n", 71 | " Important: if you make a mistake, rerun ALL cells above the cell in which you were working, and then the one where you are working. " 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "Create the routine\n", 79 | " GJ_Inverse_alt \n", 80 | "with the Spark webpage for the algorithm" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "collapsed": false, 86 | "input": [ 87 | "# insert code here" 88 | ], 89 | "language": "python", 90 | "metadata": {}, 91 | "outputs": [] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": {}, 96 | "source": [ 97 | "

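For the `# insert code here` cell above, the Spark webpage is the intended route. As a cross-check, here is a plain index-based sketch of the same one-sweep idea — an illustration, not the FLAME-style answer the exercise asks for. It divides by each diagonal entry as it goes, so it assumes no pivoting is required, which holds for the `L * U` example above:

```python
import numpy as np

def gj_inverse_alt(A, B):
    """One sweep of Gauss-Jordan: overwrite A with the identity and B with
    the inverse of the original A. Assumes every pivot encountered is
    nonzero (no pivoting)."""
    n = A.shape[0]
    for j in range(n):
        pivot = A[j, j]
        A[j, :] = A[j, :] / pivot            # scale the pivot row
        B[j, :] = B[j, :] / pivot
        for i in range(n):                   # eliminate above AND below the pivot
            if i != j:
                factor = A[i, j]
                A[i, :] = A[i, :] - factor * A[j, :]
                B[i, :] = B[i, :] - factor * B[j, :]

A = np.array([[2., -1., 3.], [-4., 0., 1.], [1., 2., 2.]])
Aold = A.copy()
B = np.eye(3)
gj_inverse_alt(A, B)
print(np.allclose(A, np.eye(3)), np.allclose(Aold @ B, np.eye(3)))   # True True
```

Each pass both scales the pivot row and eliminates the whole column, which is why a single left-to-right sweep suffices.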
Test the routine

\n", 98 | "\n", 99 | " Important: if you make a mistake, rerun ALL cells above the cell in which you were working, and then the one where you are working. " 100 | ] 101 | }, 102 | { 103 | "cell_type": "code", 104 | "collapsed": false, 105 | "input": [ 106 | "GJ_Inverse_alt( A, B )\n", 107 | "\n", 108 | "print( A )\n", 109 | "print( B )" 110 | ], 111 | "language": "python", 112 | "metadata": {}, 113 | "outputs": [] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "Matrix $ A $ should now be an identity matrix and $ B $ should no longer be an identity matrix.\n", 120 | "\n", 121 | "Check if $ B $ now equals (approximately) the inverse of the original matrix $ A $:" 122 | ] 123 | }, 124 | { 125 | "cell_type": "code", 126 | "collapsed": false, 127 | "input": [ 128 | "print( Aold * B )" 129 | ], 130 | "language": "python", 131 | "metadata": {}, 132 | "outputs": [] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "collapsed": false, 137 | "input": [], 138 | "language": "python", 139 | "metadata": {}, 140 | "outputs": [] 141 | } 142 | ], 143 | "metadata": {} 144 | } 145 | ] 146 | } -------------------------------------------------------------------------------- /week11/building.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ULAFF/notebooks/ecfaa8b41d834982a389cca5e1b43c495564fb10/week11/building.png -------------------------------------------------------------------------------- /week11/im_approx.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Usage: 3 | python im_approx.py 4 | python im_approx.py 5 | python im_approx.py 6 | k is an optional command line argument, default is 20 7 | ''' 8 | 9 | import sys 10 | import numpy as np 11 | import matplotlib.image as mpimg 12 | import matplotlib.pyplot as plt 13 | 14 | def read_image( filename ): 15 | ''' 16 | Inputs: 17 | filename: location of the image file to 
use. png is preferred 18 | Outputs: 19 | A numpy ndarray containing the values at the pixels. 20 | Can be either 2d or 3d depending on number of channels in the image 21 | 22 | Read the image into a numpy array, 23 | note that it will scale all values to 24 | be floats between 0..1 25 | ''' 26 | return mpimg.imread( filename ) 27 | 28 | def make_normalApprox( img, k=20 ): 29 | ( m, n ) = img.shape 30 | inc = int( np.floor( n/k ) ) #Python2.7 uses integer division, 3.3 needs floor function; cast to int so the slice step below is valid 31 | A = np.matrix( img[ :, ::inc ] ) 32 | 33 | #Now create a projector onto the range of A 34 | try: 35 | X = np.linalg.solve( ( np.transpose( A ) * A ), np.transpose( A ) * img ) 36 | normalApprox = A * X 37 | except: 38 | X = np.linalg.lstsq( ( np.transpose( A ) * A ), np.transpose( A ) * img ) 39 | normalApprox = A * X[0] 40 | 41 | 42 | return normalApprox 43 | 44 | def make_SVDApprox( img, k=20 ): 45 | ''' 46 | Create an approximation of the image from the first k columns of the SVD 47 | np.linalg.svd returns U as expected 48 | np.linalg.svd returns Sigma as a vector of the singular values 49 | np.linalg.svd returns V as V^H... weird 50 | ''' 51 | (U, Sigma, V) = np.linalg.svd( img ) 52 | UL = U[ :, :k ] 53 | VL = V[ :k, : ] 54 | SigmaTL = np.matrix( np.diag( Sigma[ :k ] ) ) 55 | 56 | SVDApprox = UL * SigmaTL * VL 57 | return SVDApprox 58 | 59 | def create_approximations( img, k=20, approximator=make_normalApprox ): 60 | ''' 61 | Inputs: 62 | img: a numpy matrix containing the pixel values 63 | k: number of columns to use. 
64 | Outputs: 65 | normalApprox: The normal equation approximation to the image with 20 evenly spaced columns 66 | SVDApprox: The SVD approximation using the first 20 columns 67 | ''' 68 | 69 | #See if it is a grayscale image or a color image 70 | try: 71 | ( m, n ) = img.shape 72 | except( ValueError ): 73 | ( m, n, l ) = img.shape 74 | normalApprox = np.zeros( ( m,n,l ) ) 75 | SVDApprox = np.zeros( ( m,n,l ) ) 76 | for i in range( l ): 77 | singleChannelImg = img[:,:,i] 78 | normalApprox[:,:,i], SVDApprox[:,:,i] = create_approximations( singleChannelImg, k, approximator ) 79 | return normalApprox, SVDApprox 80 | 81 | normalApprox = approximator( img, k ) 82 | SVDApprox = make_SVDApprox( img, k ) 83 | 84 | return normalApprox, SVDApprox 85 | 86 | def plot_approximations( img, normalApprox, SVDApprox, k=20 ): 87 | #Set up the plots and axes containers 88 | fig = plt.figure() 89 | fig.suptitle( 'k = %d' % k ) 90 | 91 | axImg = fig.add_subplot(2,2,1) 92 | axImg.set_title( 'Original Image' ) 93 | axImg.xaxis.set_visible( False ) 94 | axImg.yaxis.set_visible( False ) 95 | 96 | axNormal = fig.add_subplot( 2,2,3 ) 97 | axNormal.set_title( 'Normal Approximation' ) 98 | axNormal.xaxis.set_visible( False ) 99 | axNormal.yaxis.set_visible( False ) 100 | 101 | axSVD = fig.add_subplot(2,2,4) 102 | axSVD.set_title( 'SVD Approximation' ) 103 | axSVD.xaxis.set_visible( False ) 104 | axSVD.yaxis.set_visible( False ) 105 | 106 | if( len( normalApprox.shape ) == 2 ): 107 | axImg.imshow( img, cmap='gray' ) 108 | axNormal.imshow( normalApprox, cmap='gray' ) #Plot it 109 | axSVD.imshow( SVDApprox, cmap='gray' ) #Plot it 110 | else: 111 | axImg.imshow( img ) 112 | axNormal.imshow( normalApprox ) 113 | axSVD.imshow( SVDApprox ) 114 | 115 | plt.show() 116 | 117 | if __name__ == '__main__': 118 | filename = 'lenna.png' 119 | numCols = 20 120 | if ( len( sys.argv ) == 3 ): 121 | filename = sys.argv[1] 122 | numCols = int( sys.argv[2] ) 123 | elif( len( sys.argv ) == 2 ): 124 | numCols = 
int( sys.argv[1] ) 125 | 126 | img = read_image( filename ) 127 | normalApprox, SVDApprox = create_approximations( img, k=numCols ) 128 | plot_approximations( img, normalApprox, SVDApprox, k=numCols ) -------------------------------------------------------------------------------- /week12/12.4.2 The Power Method.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | " 12.4.2 The Power Method" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "With this notebook, we demonstrate how the Power Method can be used to compute the eigenvector associated with the largest eigenvalue (in magnitude).\n", 23 | "\n", 24 | " Be sure to make a copy!!!! " 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We start by creating a matrix with known eigenvalues and eigenvectors\n", 32 | "\n", 33 | "How do we do this? 
    \n", 35 | "
  • \n", 36 | " We want a matrix that is not deficient, since otherwise the Power Method may not work. \n", 37 | "
\n", 37 | "
  • \n", 39 | " Hence, $ A = V \\Lambda V^{-1} $ for some diagonal matrix $ \\Lambda $ and nonsingular matrix $ V $. The eigenvalues are then on the diagonal of $ \\Lambda $ and the eigenvectors are the columns of $ V $.\n", 40 | "
\n", 40 | "
  • \n", 42 | " So, let's pick the eigenvalues for the diagonal of $ \\Lambda $ and let's pick a random matrix $ V $ (in the hopes that it has linearly independent columns) and then let's see what happens. \n", 43 | "
\n", 43 | "
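The construction just described can be sanity-checked with numpy's eigensolver (the fixed seed is an assumption added here for reproducibility; the notebook draws a fresh random $V$ each run):

```python
import numpy as np

# Sanity check: for A = V Lambda V^{-1}, the eigenvalues of A are exactly
# the diagonal entries we chose for Lambda.
np.random.seed(0)
Lambda = np.diag([4., 3., 2., 1.])
V = np.random.rand(4, 4)               # hope: linearly independent columns
A = V @ Lambda @ np.linalg.inv(V)

evals = np.sort(np.linalg.eigvals(A).real)[::-1]
print(evals)   # approximately [4. 3. 2. 1.]
```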
\n", 45 | "\n", 46 | " Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to -4? Or what if you set 2 to -1? " 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "collapsed": false, 52 | "input": [ 53 | "import numpy as np\n", 54 | "import laff\n", 55 | "import flame\n", 56 | "\n", 57 | "Lambda = np.matrix( ' 4., 0., 0., 0;\\\n", 58 | " 0., 3., 0., 0;\\\n", 59 | " 0., 0., 2., 0;\\\n", 60 | " 0., 0., 0., 1' )\n", 61 | "\n", 62 | "lambda0 = Lambda[ 0,0 ]\n", 63 | "\n", 64 | "V = np.matrix( np.random.rand( 4,4 ) )\n", 65 | "\n", 66 | "# normalize the columns of V to equal one\n", 67 | "\n", 68 | "for j in range( 0, 4 ):\n", 69 | " V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[:,j] ) * V[:, j ] )\n", 70 | "\n", 71 | "A = V * Lambda * np.linalg.inv( V )\n", 72 | "\n", 73 | "print( 'Lambda = ' )\n", 74 | "print( Lambda)\n", 75 | "\n", 76 | "print( 'V = ' )\n", 77 | "print( V )\n", 78 | "\n", 79 | "print( 'A = ' )\n", 80 | "print( A )\n" 81 | ], 82 | "language": "python", 83 | "metadata": {}, 84 | "outputs": [] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "collapsed": false, 89 | "input": [ 90 | "# Pick a random starting vector\n", 91 | "\n", 92 | "x = np.matrix( np.random.rand( 4,1 ) )\n", 93 | "\n", 94 | "for i in range(0,10):\n", 95 | " x = A * x \n", 96 | " \n", 97 | " # normalize x to length one\n", 98 | " x = x / np.sqrt( np.transpose( x ) * x )\n", 99 | " \n", 100 | " print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ))\n", 101 | " print( 'inner product of x with v0 :', np.transpose( x ) * V[ :, 0 ] )\n", 102 | " print( ' ' )" 103 | ], 104 | "language": "python", 105 | "metadata": {}, 106 | "outputs": [] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "collapsed": false, 111 | "input": [], 112 | "language": "python", 113 | "metadata": {}, 114 | "outputs": [] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "metadata": {}, 119 | "source": [ 120 | "In the 
above, \n", 121 | "
    \n", 122 | "
  • \n", 123 | " The Rayleigh quotient should converge to 4.0 (slowly).\n", 124 | "
\n", 125 | "
  • \n", 126 | " The inner product of $ x $ and the first column of $ V $, $ v_0 $, should converge to 1 or -1 since eventually $ x $ should be in the direction of $ v_0 $ (or in the opposite direction).\n", 127 | "
\n", 128 | "
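These convergence claims can be seen with a stripped-down iteration. To keep the sketch self-contained it runs on a diagonal matrix, so that $v_0$ is the first standard basis vector — a simplifying assumption, not the notebook's random-$V$ setup:

```python
import numpy as np

A = np.diag([4., 3., 2., 1.])
np.random.seed(1)
x = np.random.rand(4)

for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)        # normalize x to length one

rq = x @ A @ x                       # Rayleigh quotient (x has unit length)
print(rq)                            # close to 4; the error shrinks like (3/4)^k
print(abs(x[0]))                     # close to 1: x lines up with v0 = e0
```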
\n", 129 | " \n", 130 | "If you change the \"3\" on the diagonal to \"-4\", then you have two largest eigenvalues (in magnitude), and the vector $ x $ will end up in the space spanned by $ v_0 $ and $ v_1 $. \n", 131 | " You can check this by looking at $ ( I - V_L ( V_L^T V_L )^{-1} V_L^T ) x $, where $V_L $ equals the matrix with $ v_0 $ and $ v_1 $ as its columns, to see if the vector orthogonal to $ {\\cal C}( V_L ) $ converges to zero. This is seen in the following code block:\n", 132 | "\n" 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "collapsed": false, 138 | "input": [ 139 | "w = x - V[ :,0:2 ] * np.linalg.inv( np.transpose( V[ :,0:2 ] ) * V[ :,0:2 ] ) * np.transpose( V[ :,0:2 ] ) * x\n", 140 | " \n", 141 | "print( 'Norm of component orthogonal: ', np.linalg.norm( w ) )" 142 | ], 143 | "language": "python", 144 | "metadata": {}, 145 | "outputs": [] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "collapsed": false, 150 | "input": [], 151 | "language": "python", 152 | "metadata": {}, 153 | "outputs": [] 154 | } 155 | ], 156 | "metadata": {} 157 | } 158 | ] 159 | } -------------------------------------------------------------------------------- /week12/12.5.1 The Inverse Power Method.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | " 12.5.1 The Inverse Power Method" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "With this notebook, we demonstrate how the Inverse Power Method can be used to find the smallest eigenvector of a matrix.\n", 23 | "\n", 24 | " Be sure to make a copy!!!! 
" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We start by creating a matrix with known eigenvalues and eigenvectors\n", 32 | "\n", 33 | "How do we do this? \n", 34 | "
    \n", 35 | "
  • \n", 36 | " We want a matrix that is not deficient, since otherwise the Inverse Power Method may not work. \n", 37 | "
\n", 37 | "
  • \n", 39 | " Hence, $ A = V \\Lambda V^{-1} $ for some diagonal matrix $ \\Lambda $ and nonsingular matrix $ V $. The eigenvalues are then on the diagonal of $ \\Lambda $ and the eigenvectors are the columns of $ V $.\n", 40 | "
\n", 40 | "
  • \n", 42 | " So, let's pick the eigenvalues for the diagonal of $ \\Lambda $ and let's pick a random matrix $ V $ (in the hopes that it has linearly independent columns) and then let's see what happens. \n", 43 | "
\n", 43 | "
\n", 45 | "\n", 46 | " Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to -4? Or what if you set 2 to -1? " 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "collapsed": false, 52 | "input": [ 53 | "import numpy as np\n", 54 | "import laff\n", 55 | "import flame\n", 56 | "\n", 57 | "Lambda = np.matrix( ' 4., 0., 0., 0;\\\n", 58 | " 0., 3., 0., 0;\\\n", 59 | " 0., 0., 2., 0;\\\n", 60 | " 0., 0., 0., 1' )\n", 61 | "\n", 62 | "lambda0 = Lambda[ 0,0 ]\n", 63 | "\n", 64 | "V = np.matrix( np.random.rand( 4,4 ) )\n", 65 | "\n", 66 | "# normalize the columns of V to equal one\n", 67 | "\n", 68 | "for j in range( 0, 4 ):\n", 69 | " V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[:,j] ) * V[:, j ] )\n", 70 | "\n", 71 | "A = V * Lambda * np.linalg.inv( V )\n", 72 | "\n", 73 | "print( 'Lambda = ' )\n", 74 | "print( Lambda)\n", 75 | "\n", 76 | "print( 'V = ' )\n", 77 | "print( V )\n", 78 | "\n", 79 | "print( 'A = ' )\n", 80 | "print( A )\n" 81 | ], 82 | "language": "python", 83 | "metadata": {}, 84 | "outputs": [] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": {}, 89 | "source": [ 90 | "The idea is as follows:\n", 91 | "\n", 92 | "The eigenvalues of $ A $ are $ \\lambda_0, \\ldots, \\lambda_3 $ with\n", 93 | "\n", 94 | "$$\n", 95 | "\\vert \\lambda_0 \\vert > \\vert \\lambda_1 \\vert > \\vert \\lambda_2 \\vert > \\vert \\lambda_3 \\vert > 0\n", 96 | "$$\n", 97 | "\n", 98 | "and how fast the iteration converges depends on the ratio \n", 99 | "\n", 100 | "$$\n", 101 | "\\left\\vert \\frac{\\lambda_3}{\\lambda_2} \\right\\vert .\n", 102 | "$$\n" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "collapsed": false, 108 | "input": [ 109 | "# Pick a random starting vector\n", 110 | "\n", 111 | "x = np.matrix( np.random.rand( 4,1 ) )\n", 112 | "\n", 113 | "# We should really compute a factorization of A, but let's be lazy, and compute the inverse\n", 114 | "# explicitly\n", 115 | "\n", 116 | "Ainv = 
np.linalg.inv( A )\n", 117 | "\n", 118 | "for i in range(0,10):\n", 119 | " x = Ainv * x \n", 120 | " \n", 121 | " # normalize x to length one\n", 122 | " x = x / np.sqrt( np.transpose( x ) * x )\n", 123 | " \n", 124 | " # Notice we compute the Rayleigh quotient with matrix A, not Ainv. This is because\n", 125 | " # the eigenvector of A is an eigenvector of Ainv\n", 126 | " \n", 127 | " print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ))\n", 128 | " print( 'inner product of x with v3 :', np.transpose( x ) * V[ :, 3 ] )\n", 129 | " print( ' ' )" 130 | ], 131 | "language": "python", 132 | "metadata": {}, 133 | "outputs": [] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "collapsed": false, 138 | "input": [], 139 | "language": "python", 140 | "metadata": {}, 141 | "outputs": [] 142 | }, 143 | { 144 | "cell_type": "markdown", 145 | "metadata": {}, 146 | "source": [ 147 | "In the above, \n", 148 | "
    \n", 149 | "
  • \n", 150 | " The Rayleigh quotient should converge to 1.0 (quicker than the Power Method converged).\n", 151 | "
\n", 152 | "
  • \n", 153 | " The inner product of $ x $ and the last column of $ V $, $ v_{n-1} $, should converge to 1 or -1 since eventually $ x $ should be in the direction of $ v_{n-1} $ (or in the opposite direction).\n", 154 | "
\n", 155 | "
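The code above forms `Ainv` explicitly "to be lazy". A sketch of the less lazy route solves a system each iteration instead (on a diagonal stand-in matrix with the same eigenvalues 4, 3, 2, 1 and a fixed seed — assumptions made here to keep the sketch short; in practice you would factor $A$ once, e.g. an LU factorization, and reuse the factors):

```python
import numpy as np

A = np.diag([4., 3., 2., 1.])        # stand-in with known eigenvalues
np.random.seed(2)
x = np.random.rand(4)

for _ in range(30):
    x = np.linalg.solve(A, x)        # apply A^{-1} without ever forming it
    x = x / np.linalg.norm(x)

rq = x @ A @ x
print(rq)                            # close to 1, the smallest eigenvalue
```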
\n", 156 | " \n", 157 | " Try changing the \"2\" to a \"-1\" or \"1\". What happens then?\n", 158 | " \n", 159 | " You can check this by looking at $ ( I - V_R ( V_R^T V_R )^{-1} V_R^T ) x $, where $V_R $ equals the matrix with $ v_2 $ and $ v_3 $ as its columns, to see if the vector orthogonal to $ {\\cal C}( V_R ) $ converges to zero. This is seen in the following code block:\n" 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "collapsed": false, 165 | "input": [ 166 | "w = x - V[ :,2:4 ] * np.linalg.inv( np.transpose( V[ :,2:4 ] ) * V[ :,2:4 ] ) * np.transpose( V[ :,2:4 ] ) * x\n", 167 | " \n", 168 | "print( 'Norm of component orthogonal: ', np.linalg.norm( w ) )" 169 | ], 170 | "language": "python", 171 | "metadata": {}, 172 | "outputs": [] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "collapsed": false, 177 | "input": [], 178 | "language": "python", 179 | "metadata": {}, 180 | "outputs": [] 181 | } 182 | ], 183 | "metadata": {} 184 | } 185 | ] 186 | } -------------------------------------------------------------------------------- /week12/12.5.2 Shifting the Inverse Power Method.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | " 12.5.2 Shifting the Inverse Power Method" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "With this notebook, we demonstrate how the Inverse Power Method can be accelerated by shifting the matrix.\n", 23 | "\n", 24 | " Be sure to make a copy!!!! " 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We start by creating a matrix with known eigenvalues and eigenvectors\n", 32 | "\n", 33 | "How do we do this? \n", 34 | "
    \n", 35 | "
  • \n", 36 | " We want a matrix that is not deficient, since otherwise the Shifted Inverse Power Method may not work. \n", 37 | "
\n", 37 | "
  • \n", 39 | " Hence, $ A = V \\Lambda V^{-1} $ for some diagonal matrix $ \\Lambda $ and nonsingular matrix $ V $. The eigenvalues are then on the diagonal of $ \\Lambda $ and the eigenvectors are the columns of $ V $.\n", 40 | "
\n", 40 | "
  • \n", 42 | " So, let's pick the eigenvalues for the diagonal of $ \\Lambda $ and let's pick a random matrix $ V $ (in the hopes that it has linearly independent columns) and then let's see what happens. \n", 43 | "
\n", 43 | "
\n", 45 | "\n", 46 | " Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to -4? Or what if you set 2 to -1? " 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "collapsed": false, 52 | "input": [ 53 | "import numpy as np\n", 54 | "import laff\n", 55 | "import flame\n", 56 | "\n", 57 | "Lambda = np.matrix( ' 4., 0., 0., 0;\\\n", 58 | " 0., 3., 0., 0;\\\n", 59 | " 0., 0., 2., 0;\\\n", 60 | " 0., 0., 0., 1' )\n", 61 | "\n", 62 | "lambda0 = Lambda[ 0,0 ]\n", 63 | "\n", 64 | "V = np.matrix( np.random.rand( 4,4 ) )\n", 65 | "\n", 66 | "# normalize the columns of V to equal one\n", 67 | "\n", 68 | "for j in range( 0, 4 ):\n", 69 | " V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[:,j] ) * V[:, j ] )\n", 70 | "\n", 71 | "A = V * Lambda * np.linalg.inv( V )\n", 72 | "\n", 73 | "print( 'Lambda = ' )\n", 74 | "print( Lambda)\n", 75 | "\n", 76 | "print( 'V = ' )\n", 77 | "print( V )\n", 78 | "\n", 79 | "print( 'A = ' )\n", 80 | "print( A )\n" 81 | ], 82 | "language": "python", 83 | "metadata": {}, 84 | "outputs": [] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": {}, 89 | "source": [ 90 | "The idea is as follows:\n", 91 | "\n", 92 | "The eigenvalues of $ A $ are $ \\lambda_0, \\ldots, \\lambda_3 $ with\n", 93 | "\n", 94 | "$$\n", 95 | "\\vert \\lambda_0 \\vert > \\vert \\lambda_1 \\vert > \\vert \\lambda_2 \\vert > \\vert \\lambda_3 \\vert > 0\n", 96 | "$$\n", 97 | "\n", 98 | "and how fast the iteration converges depends on the ratio \n", 99 | "\n", 100 | "$$\n", 101 | "\\left\\vert \\frac{\\lambda_3}{\\lambda_2} \\right\\vert .\n", 102 | "$$\n", 103 | "Now, if you pick a value, $ \\mu $ close to $ \\lambda_3 $, and you iterate with $ A - \\mu I $ (which is known as shifting the matrix/spectrum by $ \\mu $) you can greatly improve the ratio\n", 104 | "$$\n", 105 | "\\left\\vert \\frac{\\lambda_3-\\mu}{\\lambda_2-\\mu} \\right\\vert .\n", 106 | "$$\n", 107 | "\n", 108 | "Try different values of $ \\mu$. 
What if you pick $ \\mu \\approx 2 $? \n", 109 | "What if you pick $ \\mu = 0.8 $?" 110 | ] 111 | }, 112 | { 113 | "cell_type": "code", 114 | "collapsed": false, 115 | "input": [ 116 | "# Pick a random starting vector\n", 117 | "\n", 118 | "x = np.matrix( np.random.rand( 4,1 ) )\n", 119 | "\n", 120 | "# We should really compute a factorization of A, but let's be lazy, and compute the inverse\n", 121 | "# explicitly\n", 122 | "\n", 123 | "mu = 0.8\n", 124 | "\n", 125 | "Ainv = np.linalg.inv( A - mu * np.eye( 4, 4 ) )\n", 126 | "\n", 127 | "for i in range(0,10):\n", 128 | " x = Ainv * x \n", 129 | " \n", 130 | " # normalize x to length one\n", 131 | " x = x / np.sqrt( np.transpose( x ) * x )\n", 132 | " \n", 133 | " # Notice we compute the Rayleigh quotient with matrix A, not Ainv. This is because\n", 134 | " # the eigenvector of A is an eigenvector of Ainv\n", 135 | " \n", 136 | " print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ))\n", 137 | " print( 'inner product of x with v3 :', np.transpose( x ) * V[ :, 3 ] )\n", 138 | " print( ' ' )" 139 | ], 140 | "language": "python", 141 | "metadata": {}, 142 | "outputs": [] 143 | }, 144 | { 145 | "cell_type": "code", 146 | "collapsed": false, 147 | "input": [], 148 | "language": "python", 149 | "metadata": {}, 150 | "outputs": [] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "In the above, \n", 157 | "
    \n", 158 | "
  • \n", 159 | " The Rayleigh quotient should converge to 1.0 (quickly if $ \\mu \\approx 1 $).\n", 160 | "
\n", 161 | "
  • \n", 162 | " The inner product of $ x $ and the last column of $ V $, $ v_{n-1} $, should converge to 1 or -1 since eventually $ x $ should be in the direction of $ v_{n-1} $ (or in the opposite direction).\n", 163 | "
\n", 164 | "
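The convergence factors behind these observations can be computed directly for the eigenvalues 4, 3, 2, 1 used above: the distance from the shift $\mu$ to the closest eigenvalue, divided by the distance to the next closest. This also answers the "$\mu \approx 2$" question — the closest eigenvalue is then 2, so the iteration converges to $v_2$ instead:

```python
lam = [4., 3., 2., 1.]

def conv_factor(mu):
    # ratio governing shifted inverse iteration: |closest - mu| / |next - mu|
    d = sorted(abs(l - mu) for l in lam)
    return d[0] / d[1]

print(conv_factor(0.0))   # 0.5: plain inverse iteration, |1-0| / |2-0|
print(conv_factor(0.8))   # about 0.167: |1-0.8| / |2-0.8|, faster toward lambda = 1
print(conv_factor(1.9))   # about 0.111: now 2 is closest, so you converge to v_2
```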
\n", 165 | " \n", 166 | " This time, if you change the \"2\" on the diagonal to \"-1\", you still converge to $ v_{n-1} $ because for the matrix $ A - \\mu I $, $ -1 - \\mu $ is not as small as $ 1 - \\mu $ (in magnitude).\n", 167 | "\n", 168 | " You can check this by looking at $ ( I - V_R ( V_R^T V_R )^{-1} V_R^T ) x $, where $V_R $ equals the matrix with $ v_2 $ and $ v_3 $ as its columns, to see if the vector orthogonal to $ {\\cal C}( V_R ) $ converges to zero. This is seen in the following code block:\n" 169 | ] 170 | }, 171 | { 172 | "cell_type": "code", 173 | "collapsed": false, 174 | "input": [ 175 | "w = x - V[ :,2:4 ] * np.linalg.inv( np.transpose( V[ :,2:4 ] ) * V[ :,2:4 ] ) * np.transpose( V[ :,2:4 ] ) * x\n", 176 | " \n", 177 | "print( 'Norm of component orthogonal: ', np.linalg.norm( w ) )" 178 | ], 179 | "language": "python", 180 | "metadata": {}, 181 | "outputs": [] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "collapsed": false, 186 | "input": [], 187 | "language": "python", 188 | "metadata": {}, 189 | "outputs": [] 190 | } 191 | ], 192 | "metadata": {} 193 | } 194 | ] 195 | } -------------------------------------------------------------------------------- /week12/12.5.3 The Rayleigh Quotient Iteration.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "name": "" 4 | }, 5 | "nbformat": 3, 6 | "nbformat_minor": 0, 7 | "worksheets": [ 8 | { 9 | "cells": [ 10 | { 11 | "cell_type": "heading", 12 | "level": 1, 13 | "metadata": {}, 14 | "source": [ 15 | " 12.5.3 The Rayleigh Quotient Iteration" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "With this notebook, we demonstrate how the Inverse Power Method can be accelerated by shifting the matrix, this time by approximating the smallest eigenvalue with the Rayleigh quotient.\n", 23 | "\n", 24 | " Be sure to make a copy!!!! 
" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "We start by creating a matrix with known eigenvalues and eigenvectors\n", 32 | "\n", 33 | "How do we do this? \n", 34 | "
    \n", 35 | "
  • \n", 36 | " We want a matrix that is not deficient, since otherwise the Rayleigh Quotient Iteration Method may not work. \n", 37 | "
\n", 37 | "
  • \n", 39 | " Hence, $ A = V \\Lambda V^{-1} $ for some diagonal matrix $ \\Lambda $ and nonsingular matrix $ V $. The eigenvalues are then on the diagonal of $ \\Lambda $ and the eigenvectors are the columns of $ V $.\n", 40 | "
\n", 40 | "
  • \n", 42 | " So, let's pick the eigenvalues for the diagonal of $ \\Lambda $ and let's pick a random matrix $ V $ (in the hopes that it has linearly independent columns) and then let's see what happens. \n", 43 | "
\n", 43 | "
\n", 45 | "\n", 46 | " Experiment by changing the eigenvalues! What happens if you make the second entry on the diagonal equal to -4? Or what if you set 2 to -1? " 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "collapsed": false, 52 | "input": [ 53 | "import numpy as np\n", 54 | "import laff\n", 55 | "import flame\n", 56 | "\n", 57 | "Lambda = np.matrix( ' 4., 0., 0., 0;\\\n", 58 | " 0., 3., 0., 0;\\\n", 59 | " 0., 0., 2., 0;\\\n", 60 | " 0., 0., 0., 1' )\n", 61 | "\n", 62 | "lambda0 = Lambda[ 0,0 ]\n", 63 | "\n", 64 | "V = np.matrix( np.random.rand( 4,4 ) )\n", 65 | "\n", 66 | "# normalize the columns of V to equal one\n", 67 | "\n", 68 | "for j in range( 0, 4 ):\n", 69 | " V[ :, j ] = V[ :, j ] / np.sqrt( np.transpose( V[:,j] ) * V[:, j ] )\n", 70 | "\n", 71 | "A = V * Lambda * np.linalg.inv( V )\n", 72 | "\n", 73 | "print( 'Lambda = ' )\n", 74 | "print( Lambda)\n", 75 | "\n", 76 | "print( 'V = ' )\n", 77 | "print( V )\n", 78 | "\n", 79 | "print( 'A = ' )\n", 80 | "print( A )\n" 81 | ], 82 | "language": "python", 83 | "metadata": {}, 84 | "outputs": [] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": {}, 89 | "source": [ 90 | "The idea is as follows:\n", 91 | "\n", 92 | "The eigenvalues of $ A $ are $ \\lambda_0, \\ldots, \\lambda_3 $ with\n", 93 | "\n", 94 | "$$\n", 95 | "\\vert \\lambda_0 \\vert > \\vert \\lambda_1 \\vert > \\vert \\lambda_2 \\vert > \\vert \\lambda_3 \\vert > 0\n", 96 | "$$\n", 97 | "\n", 98 | "and how fast the iteration converges depends on the ratio \n", 99 | "\n", 100 | "$$\n", 101 | "\\left\\vert \\frac{\\lambda_3}{\\lambda_2} \\right\\vert .\n", 102 | "$$\n", 103 | "Now, if you pick a value, $ \\mu $ close to $ \\lambda_3 $, and you iterate with $ A - \\mu I $ (which is known as shifting the matrix/spectrum by $ \\mu $) you can greatly improve the ratio\n", 104 | "$$\n", 105 | "\\left\\vert \\frac{\\lambda_3-\\mu}{\\lambda_2-\\mu} \\right\\vert .\n", 106 | "$$\n", 107 | "\n", 108 | "Generally we don't know $ \\lambda_3 
$ and hence don't know how to choose $ \\mu $. But we are generating a vector $ x $ that progressively gets closer and closer to an eigenvector. Thus, we can use the Rayleigh quotient to approximate an eigenvalue.\n", 109 | "\n", 110 | "Here we purposely say \"an eigenvalue\" since it could be that the first random vector $ x $ is close to an eigenvector associated with another eigenvalue, and then we may converge to a different eigenvalue." 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "collapsed": false, 116 | "input": [ 117 | "# Pick a random starting vector\n", 118 | "\n", 119 | "x = np.matrix( np.random.rand( 4,1 ) )\n", 120 | "\n", 121 | "\n", 122 | "mu = 0. # Let's start by not shifting, so hopefully we hone in on the smallest eigenvalue\n", 123 | "\n", 124 | "for i in range(0,10):\n", 125 | " # We should really compute a factorization of A, but let's be lazy, and compute the inverse\n", 126 | " # explicitly\n", 127 | " Ainv = np.linalg.inv( A - mu * np.eye( 4, 4 ) )\n", 128 | " \n", 129 | " x = Ainv * x \n", 130 | " \n", 131 | " # normalize x to length one\n", 132 | " x = x / np.sqrt( np.transpose( x ) * x )\n", 133 | " \n", 134 | " # Notice we compute the Rayleigh quotient with matrix A, not Ainv. This is because\n", 135 | " # the eigenvector of A is an eigenvector of Ainv\n", 136 | " \n", 137 | " mu = np.transpose( x ) * A * x\n", 138 | " \n", 139 | " # The above returns a 1 x 1 matrix. 
Let's set mu to the scalar\n", 140 | " \n", 141 | " mu = mu[ 0, 0 ]\n", 142 | " \n", 143 | " print( 'Rayleigh quotient with vector x:', np.transpose( x ) * A * x / ( np.transpose( x ) * x ))\n", 144 | " print( 'inner product of x with v3 :', np.transpose( x ) * V[ :, 3 ] )\n", 145 | " print( ' ' )" 146 | ], 147 | "language": "python", 148 | "metadata": {}, 149 | "outputs": [] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "collapsed": false, 154 | "input": [], 155 | "language": "python", 156 | "metadata": {}, 157 | "outputs": [] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "In the above, \n", 164 | "
    \n", 165 | "
  • \n", 166 | " The Rayleigh quotient may converge to 1.0 (but it may converge to another eigenvalue!).\n", 167 | "
\n", 168 | "
  • \n", 169 | " The inner product of $ x $ and the last column of $ V $, $ v_{n-1} $, may converge to 1 or -1 since eventually $ x $ may be in the direction of $ v_{n-1} $ (or in the opposite direction). But not if we start converging to another eigenvalue... If this happens, try rerunning all the code blocks above to get a different $V$ matrix.\n", 170 | "
\n", 171 | "
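The loop above can be condensed into a few lines. This sketch runs on a diagonal stand-in matrix with eigenvalues 4, 3, 2, 1 and uses a solve instead of an explicit inverse — assumptions made here for brevity. The `try`/`except` guards against the shift landing exactly on an eigenvalue, in which case $A - \mu I$ is singular and the iteration is done:

```python
import numpy as np

A = np.diag([4., 3., 2., 1.])
np.random.seed(3)
x = np.random.rand(4)
x = x / np.linalg.norm(x)
mu = 0.0                             # start unshifted

for _ in range(10):
    try:
        x = np.linalg.solve(A - mu * np.eye(4), x)
    except np.linalg.LinAlgError:
        break                        # shift hit an eigenvalue exactly: converged
    x = x / np.linalg.norm(x)
    mu = x @ A @ x                   # the Rayleigh quotient becomes the next shift

print(mu)                            # an eigenvalue of A; which one depends on the start
```

Updating the shift every iteration is what buys the very fast (cubic, for symmetric matrices) convergence, at the cost of a fresh factorization or solve per step.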
\n" 172 | ] 173 | } 174 | ], 175 | "metadata": {} 176 | } 177 | ] 178 | } --------------------------------------------------------------------------------