├── .gitignore ├── LICENSE ├── README.md ├── codeTimer ├── codeTimer.m ├── indexing1.m ├── indexing2.m ├── indexing3.m ├── oddEntriesToZero1.m ├── oddEntriesToZero2.m └── oddEntriesToZero3.m └── latex ├── bibtex └── matlab-tipsntricks.bib ├── clean-code.tex ├── fast-code.tex ├── figures ├── Octave_Sombrero.pdf ├── coarse-fine.tex ├── fine-coarse.tex ├── julia-logo-color.pdf ├── matlab-open-profiler.png ├── matlab-profiler-circleBox-result.png └── python-logo-generic.pdf ├── header.tex ├── justfile ├── main.tex ├── matlab-alternatives.tex └── tipsntricks.tex /.gitignore: -------------------------------------------------------------------------------- 1 | main.pdf 2 | *.bcf 3 | *.xml 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Legal Code 2 | 3 | CC0 1.0 Universal 4 | 5 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 6 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 7 | ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS 8 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 9 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 10 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 11 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 12 | HEREUNDER. 13 | 14 | Statement of Purpose 15 | 16 | The laws of most jurisdictions throughout the world automatically confer 17 | exclusive Copyright and Related Rights (defined below) upon the creator 18 | and subsequent owner(s) (each and all, an "owner") of an original work of 19 | authorship and/or a database (each, a "Work"). 20 | 21 | Certain owners wish to permanently relinquish those rights to a Work for 22 | the purpose of contributing to a commons of creative, cultural and 23 | scientific works ("Commons") that the public can reliably and without fear 24 | of later claims of infringement build upon, modify, incorporate in other 25 | works, reuse and redistribute as freely as possible in any form whatsoever 26 | and for any purposes, including without limitation commercial purposes. 27 | These owners may contribute to the Commons to promote the ideal of a free 28 | culture and the further production of creative, cultural and scientific 29 | works, or to gain reputation or greater distribution for their Work in 30 | part through the use and efforts of others. 31 | 32 | For these and/or other purposes and motivations, and without any 33 | expectation of additional consideration or compensation, the person 34 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 35 | is an owner of Copyright and Related Rights in the Work, voluntarily 36 | elects to apply CC0 to the Work and publicly distribute the Work under its 37 | terms, with knowledge of his or her Copyright and Related Rights in the 38 | Work and the meaning and intended legal effect of CC0 on those rights. 39 | 40 | 1. Copyright and Related Rights. A Work made available under CC0 may be 41 | protected by copyright and related or neighboring rights ("Copyright and 42 | Related Rights"). Copyright and Related Rights include, but are not 43 | limited to, the following: 44 | 45 | i. the right to reproduce, adapt, distribute, perform, display, 46 | communicate, and translate a Work; 47 | ii. moral rights retained by the original author(s) and/or performer(s); 48 | iii. 
publicity and privacy rights pertaining to a person's image or 49 | likeness depicted in a Work; 50 | iv. rights protecting against unfair competition in regards to a Work, 51 | subject to the limitations in paragraph 4(a), below; 52 | v. rights protecting the extraction, dissemination, use and reuse of data 53 | in a Work; 54 | vi. database rights (such as those arising under Directive 96/9/EC of the 55 | European Parliament and of the Council of 11 March 1996 on the legal 56 | protection of databases, and under any national implementation 57 | thereof, including any amended or successor version of such 58 | directive); and 59 | vii. other similar, equivalent or corresponding rights throughout the 60 | world based on applicable law or treaty, and any national 61 | implementations thereof. 62 | 63 | 2. Waiver. To the greatest extent permitted by, but not in contravention 64 | of, applicable law, Affirmer hereby overtly, fully, permanently, 65 | irrevocably and unconditionally waives, abandons, and surrenders all of 66 | Affirmer's Copyright and Related Rights and associated claims and causes 67 | of action, whether now known or unknown (including existing as well as 68 | future claims and causes of action), in the Work (i) in all territories 69 | worldwide, (ii) for the maximum duration provided by applicable law or 70 | treaty (including future time extensions), (iii) in any current or future 71 | medium and for any number of copies, and (iv) for any purpose whatsoever, 72 | including without limitation commercial, advertising or promotional 73 | purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each 74 | member of the public at large and to the detriment of Affirmer's heirs and 75 | successors, fully intending that such Waiver shall not be subject to 76 | revocation, rescission, cancellation, termination, or any other legal or 77 | equitable action to disrupt the quiet enjoyment of the Work by the public 78 | as contemplated by Affirmer's express Statement of Purpose. 79 | 80 | 3. Public License Fallback. Should any part of the Waiver for any reason 81 | be judged legally invalid or ineffective under applicable law, then the 82 | Waiver shall be preserved to the maximum extent permitted taking into 83 | account Affirmer's express Statement of Purpose. In addition, to the 84 | extent the Waiver is so judged Affirmer hereby grants to each affected 85 | person a royalty-free, non transferable, non sublicensable, non exclusive, 86 | irrevocable and unconditional license to exercise Affirmer's Copyright and 87 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 88 | maximum duration provided by applicable law or treaty (including future 89 | time extensions), (iii) in any current or future medium and for any number 90 | of copies, and (iv) for any purpose whatsoever, including without 91 | limitation commercial, advertising or promotional purposes (the 92 | "License"). The License shall be deemed effective as of the date CC0 was 93 | applied by Affirmer to the Work. 
Should any part of the License for any 94 | reason be judged legally invalid or ineffective under applicable law, such 95 | partial invalidity or ineffectiveness shall not invalidate the remainder 96 | of the License, and in such case Affirmer hereby affirms that he or she 97 | will not (i) exercise any of his or her remaining Copyright and Related 98 | Rights in the Work or (ii) assert any associated claims and causes of 99 | action with respect to the Work, in either case contrary to Affirmer's 100 | express Statement of Purpose. 101 | 102 | 4. Limitations and Disclaimers. 103 | 104 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 105 | surrendered, licensed or otherwise affected by this document. 106 | b. Affirmer offers the Work as-is and makes no representations or 107 | warranties of any kind concerning the Work, express, implied, 108 | statutory or otherwise, including without limitation warranties of 109 | title, merchantability, fitness for a particular purpose, non 110 | infringement, or the absence of latent or other defects, accuracy, or 111 | the present or absence of errors, whether or not discoverable, all to 112 | the greatest extent permissible under applicable law. 113 | c. Affirmer disclaims responsibility for clearing rights of other persons 114 | that may apply to the Work or any use thereof, including without 115 | limitation any person's Copyright and Related Rights in the Work. 116 | Further, Affirmer disclaims responsibility for obtaining any necessary 117 | consents, permissions or other rights required for any use of the 118 | Work. 119 | d. Affirmer understands and acknowledges that Creative Commons is not a 120 | party to this document and has no duty or obligation with respect to 121 | this CC0 or use of the Work. 122 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Guidelines for writing clean and fast code in MATLAB 2 | 3 | > This document is aimed at MATLAB beginners who already know the syntax but 4 | > feel are not yet quite experienced with it. Its goal is to give a number of 5 | > hints which enable the reader to write quality MATLAB programs and to avoid 6 | > commonly made mistakes. 7 | > 8 | > There are three major independent chapters which may very well be read 9 | > separately. Also, the individual chapters each split up into one or two 10 | > handful of chunks of information. In that sense, this document is really a 11 | > slightly extended list of dos and don'ts. 12 | > 13 | > Chapter 1 describes some aspects of _clean_ code. The impact of a 14 | > subsection for the cleanliness of the code is indicated by one to five 15 | > 🚿-symbols, where five 🚿's want to say that following 16 | > the given suggestion is of great importance for the comprehensibility of the 17 | > code. 18 | > 19 | > Chapter 2 describes how to speed up the code and is largely a list of mistakes 20 | > that beginners may tend to make. This time, the 🏃-symbol represents 21 | > the amount of speed that you could gain when sticking to the hints given in 22 | > the respective section. 23 | > 24 | > This guide is written as part of a basic course in numerical analysis, most 25 | > examples and codes will hence tend to refer to numerical integration or 26 | > differential equations. However, almost all aspects are of general nature and 27 | > will also be of interest to anyone using MATLAB. 
## MATLAB alternatives

When writing MATLAB code, you need to realize that unlike C, Fortran, or
Python code, you will always need the _commercial_ MATLAB environment
to run it. Right now, that might not be much of a problem to you as you
are at a university or have some other free access to the software, but
sometime in the future, this might change.

The current cost for the basic MATLAB kit, which does not include
_any_ toolbox nor Simulink, is €500 for academic institutions;
around €60 for students; _thousands_ of Euros for commercial
operations. Considering this, there is a not too small chance that you will
not be able to use MATLAB after you leave university, and that would
render all of your own code virtually useless to you.

Because of that, free and open source MATLAB alternatives have emerged,
three of which are briefly introduced here. Octave and Scilab try to stick to
MATLAB syntax as closely as possible, resulting in all of the code in this
document being legal for the two packages as well. When it comes to the
specialized toolboxes, however, neither of the alternatives may be able to
provide the same capabilities that MATLAB offers. However, these are mostly
functions related to Simulink and the like which are hardly used by beginners
anyway. Also note that none of the alternatives ships with its own text editor (as
MATLAB does), so you are free to use the editor of your choice (see, for
example, [vim](http://www.vim.org/),
[emacs](http://www.gnu.org/software/emacs/), Kate,
[gedit](http://projects.gnome.org/gedit/) for Linux;
[Notepad++](http://notepad-plus.sourceforge.net/uk/site.htm),
[Crimson Editor](http://www.crimsoneditor.com/) for Windows).

### Python

Python is the most modern programming language as of 2013: Amongst the many
awards the language has received stands the TIOBE Programming Language Award of
2010, which is given yearly to the programming language that has gained the
largest market share during that year.

Python is used in all kinds of different contexts, and its versatility and
ease of use has made it attractive to many. There are tons of packages for all
sorts of tasks, and the huge community and its open development drive the
enormous success of Python.

In the world of scientific computing, too, Python has already risen to be a
major player. This is mostly due to the packages SciPy and NumPy, which provide
all data structures and algorithms that are used in numerical code. Plotting
is most easily handled by matplotlib, a huge library which in many ways
surpasses MATLAB's graphical engine.

Being a language rather than an application, Python is supported on virtually
every operating system.

The author of this document highly recommends taking a look at Python for
your own (scientific) programming projects.

### Julia

> Julia is a high-level, high-performance dynamic programming language for
> technical computing, with syntax that is familiar to users of other technical
> computing environments. It provides a sophisticated compiler, distributed
> parallel execution, numerical accuracy, and an extensive mathematical function
> library.
> The library, largely written in Julia itself, also integrates mature,
> best-of-breed C and Fortran libraries for linear algebra, random number
> generation, signal processing, and string processing. In addition, the Julia
> developer community is contributing a number of external packages through
> Julia’s built-in package manager at a rapid pace. IJulia, a collaboration
> between the IPython and Julia communities, provides a powerful browser-based
> graphical notebook interface to Julia.

### GNU Octave

GNU Octave is a high-level language, primarily intended for numerical
computations. It provides a convenient command line interface for solving
linear and nonlinear problems numerically, and for performing other numerical
experiments using a language that is mostly compatible with MATLAB. It may also
be used as a batch-oriented language.

Internally, Octave relies on other independent and well-recognized packages
such as gnuplot (for plotting) or UMFPACK (for calculating with sparse
matrices). In that sense, Octave is extremely well integrated into the free
and open source software (FOSS) landscape.

Octave has extensive tools for solving common numerical linear algebra
problems, finding the roots of nonlinear equations, integrating ordinary
functions, manipulating polynomials, and integrating ordinary differential and
differential-algebraic equations. It is easily extensible and customizable via
user-defined functions written in Octave's own language, or using dynamically
loaded modules written in C++, C, Fortran, or other languages.

GNU Octave is also freely redistributable software. You may redistribute it
and/or modify it under the terms of the GNU General Public License (GPL) as
published by the Free Software Foundation.

The project originally targeted GNU/Linux, but versions for MacOS, Windows, Sun
Solaris, and OS/2 exist.

## Clean code

There is a plethora of reasons why code that _just works™_
is not good enough. Take a peek at listing~\ref{listing:prime1} and admit:

- Fixing bugs, adding features, and working with the code in all other
  aspects get a lot easier when the code is not messy.
- Imagine someone else looking at your code, and trying to figure out what
  it does. In case you did not keep it clean, that will certainly be a
  huge waste of time.
- You might be planning to code for a particular purpose now, not planning
  on ever using it again, but experience tells us that there is virtually no
  computational task that you come across only once in your programming life.
  Imagine yourself looking at your own code, a week, a month, or a year from
  now: Would you still be able to understand why the code works as it does?
  Clean code will make sure you do.

Examples of messy, unstructured, and generally ugly programs are plenty, but
there are also places where you are almost guaranteed to find well-structured
code. Take, for example, the MATLAB internals: Many of the functions that
you might make use of when programming MATLAB are implemented in MATLAB
syntax themselves – by professional MathWorks programmers. To look at
To look at such 145 | the contents of the `mean()` function (which calculates the average 146 | mean value of an array), type `edit mean` on the MATLAB command 147 | line. You might not be able to understand what's going on, but the way the 148 | file looks like may give you hints on how to write clean code. 149 | 150 | ```matlab 151 | function lll(ll1,l11,l1l);if floor(l11/ll1)<=1;... 152 | lll(ll1,l11+1,l1l );elseif mod(l11,ll1)==0;lll(... 153 | ll1,l11+1,0);elseif mod(l11,ll1)==floor(l11/... 154 | ll1)&&~l1l;floor(l11/ll1),lll(ll1,l11+1,0);elseif... 155 | mod(l11,ll1)>1&&mod(l11,ll1) 214 | 215 | 216 | 217 | ```matlab 218 | % [...] 219 | 220 | function fun1 221 | % call helpFun1 here 222 | end 223 | 224 | function fun2 225 | % call helpFun2 here 226 | end 227 | 228 | function helpFun1 229 | % [...] 230 | end 231 | 232 | function helpFun2 233 | % [...] 234 | end 235 | 236 | % [...] 237 | ``` 238 | 239 | 240 | 241 | 242 | ```matlab 243 | % [...] 244 | 245 | function fun1 246 | % call helpFun here 247 | 248 | function helpFun 249 | % [...] 250 | end 251 | end 252 | 253 | function fun2 254 | % call helpFun here 255 | 256 | function helpFun 257 | % [...] 258 | end 259 | end 260 | 261 | % [...] 262 | ``` 263 | 264 | 265 | 266 | 267 | 268 | 269 | The routines `fun1`, `fun2`, `helpFun1`, and `helpFun2` are sitting next to 270 | each other and no hierarchy is visible. 271 | 272 | 273 | 274 | 275 | Immediately obvious: The first `helpFun` helps `fun1`, the second `fun2` – and 276 | does nothing else. 277 | 278 | 279 | 280 | 281 | 282 | ### Variable and function names 🚿🚿🚿 283 | 284 | One key ingredient for a consistent source code is a consistent naming scheme 285 | for the variables in use. From the dawn of programming languages in the 1950s, 286 | schemes have developed and decayed and are generally subject to evolution. 287 | There are, however, some general rules which have proven useful over the years 288 | in all kinds of various contexts. In \cite{Johnson:2002:MPS}, a crisp and yet 289 | comprehensive overview on many aspects of variable naming is given; a few of 290 | the most useful ones are stated here. 291 | 292 | ##### Variable names tell what the variable does 293 | 294 | Undoubtedly, this is the first and foremost rule in variable naming, and it 295 | implies several things. 296 | 297 | - Of course, you would not name a variable `pi` when it really holds the value 298 | `2.718128`, right? 299 | 300 | - In mathematics and computer science, some names are connected to certain 301 | meanings. The following table lists a number of widely used conventions. 302 | 303 | | Variable name | Usual purpose | 304 | | ------------------------- | -------------------------------------------------------- | 305 | | `m`, `n` | integer sizes (,e.g., the dimension of a matrix) | 306 | | `i`, `j`, `k` (, `l`) | integer numbers (mostly loop indices) | 307 | | `x`, `y` | real values ($`x`$-, $`y`$-axis) | 308 | | `z` | complex value or $`z`$-axis | 309 | | `c` | complex value or constant (or both) | 310 | | `t` | time value | 311 | | `e` | the Euler's number or 'unit' entities | 312 | | `f`, `g` (, `h`) | generic function names | 313 | | `h` | spatial discretization parameter (in numerical analysis) | 314 | | `epsilon`, `delta` | small real entities | 315 | | `alpha`, `beta` | angles or parameters | 316 | | `theta`, `tau` | parameters, time discretization parameter (in n.a.) 
| `kappa`, `sigma`, `omega` | parameters |
| `u`, `v`, `w` | vectors |
| `A`, `M` | matrices |
| `b` | right-hand side of an equation system |

Variable names and their usual purposes in source code. These guidelines are
not particularly strict, but for example one would never use `i` to hold a
float number, nor `x` for an integer.

##### Short variable names

Short, non-descriptive variable names are quite common in mathematical
computing, as the variable names in the corresponding (pen and paper)
calculations are hardly ever longer than one character either (see table). To
be able to distinguish between vector and matrix entities, it is common
practice in programming as well as mathematics to denote matrices by
upper-case, and vectors and scalars by lower-case characters.
338 | 339 | ```matlab 340 | K = 20; 341 | a = zeros(K,K); 342 | B = ones(K,1); 343 | 344 | U = a*B; 345 | ``` 346 | 347 | 349 | 350 | ```matlab 351 | k = 20; 352 | A = zeros(k,k); 353 | b = ones(k,1); 354 | 355 | u = A*b; 356 | ``` 357 | 358 |
361 | 362 | ###### Long variable names 363 | 364 | A widely used convention mostly in the C++ development community to write long, 365 | descriptive variables in mixed case (camel case) starting with lower case, such 366 | as 367 | 368 | ```matlab 369 | linearity, distanceToCircle, figureLabel 370 | ``` 371 | 372 | Alternatively, one could use the underscore to separate parts of a compound 373 | variable name: 374 | 375 | ```matlab 376 | linearity, distance_to_circle, figure_label 377 | ``` 378 | 379 | This convention is sometimes called _snake case_ and used, for example, in the 380 | GNU C++ standard libraries. 381 | 382 | When using the snake case notation, watch out for variable names in MATLAB's 383 | plots: its TeX-interpreter will treat the underscore as a switch to subscript 384 | and a variable name such as `distance_to_circle` will read 385 | $`distance_to_circle`$ in the plot. 386 | 387 | > Using the hyphen `-` as a separator cannot be considered: MATLAB will 388 | > immediately interpret `-` as the minus sign, `distance-to-circle` is 389 | > `distance` minus `to` minus `circle`. The same holds for function names. 390 | 391 | ##### Logical variable names 392 | 393 | If a variable is supposed to only hold the values `0` or `1` to represent 394 | `true` or `false`, then the variable name should express that. A common 395 | technique is to prepend the variable name by `is` and, less common, by `flag`. 396 | 397 | ``` 398 | isPrime, isInside, flagCircle 399 | ``` 400 | 401 | ### Indentation 🚿🚿🚿🚿 402 | 403 | If you ever dealt with nested `for`- and `if` constructs, then you probably 404 | noticed that it may sometimes be hard to distinguish those nested constructions 405 | from other code at first sight. Also, if the contents of a loop extend over 406 | more than just a few lines, a visual aid may be helpful for indicating what is 407 | inside and what is outside the loop – and this is where indentation comes into 408 | play. 409 | 410 | Usually, one would indent everything within a loop, a function, a conditional, 411 | a `switch` statement and so on. Depending on who you ask, you will be 412 | told to indent by two, three, or four spaces, or one tab. As a general rule, 413 | the indentation should yield a clear visual distinction while not using up all 414 | your space on the line (see next paragraph). 415 | 416 | 417 | 418 | 431 | 444 | 445 | 446 | 452 | 457 | 458 |
419 | 420 | ```matlab 421 | for i=1:n 422 | for j=1:n 423 | if A(i,j)<0 424 | A(i,j) = 0; 425 | end 426 | end 427 | end 428 | ``` 429 | 430 | 432 | 433 | ```matlab 434 | for i=1:n 435 | for j=1:n 436 | if A(i,j)<0 437 | A(i,j) = 0; 438 | end 439 | end 440 | end 441 | ``` 442 | 443 |
447 | 448 | No visual distinction between the loop levels makes it hard to recognize where 449 | the first loop ends.[^1] 450 | 451 | 453 | 454 | With indentation, the code looks a lot clearer.\footnotemark[\value{footnote}] 455 | 456 |
459 | 460 | [^1]: 461 | What the code does is replacing all negative entries of an `n`×`n`-matrix 462 | by `0`. There is, however, a much better (shorter, faster) way to achieve 463 | this: `A( A<0 ) = 0`. (see page \pageref{sec:logicalIndexing}). 464 | 465 | ### Line length 🚿 466 | 467 | There is de facto no limit on how much you can write on a single line of MATLAB 468 | code. In fact, you could condense every MATLAB code to a "one-liner" by 469 | separating two commands by a `;` or a `,`, and suppress the newline character 470 | between them. However, a single line with one million characters will 471 | potentially not be very readable. 472 | 473 | But, how many characters can you fit onto a single line without obscuring its 474 | content? This is certainly debatable, but commonly this value sits somewhere 475 | between 70 and 80; MATLAB's own text editor suggests 75 characters per 476 | line. This way, one makes also sure that it is not necessary to have a 477 | widescreen monitor to be able to display the code without artificial line 478 | breaks or horizontal scrolling in the editor. 479 | 480 | Sometimes of course your lines need to stretch longer than this, but that's why 481 | MATLAB contains the ellipses `...` which makes sure the line following the line 482 | with the ellipsis is read as if there was no line break at all. 483 | 484 | 485 | 486 | 495 | 502 | 503 |
487 | 488 | ```matlab 489 | a = sin( exp(x) ) ... 490 | - alpha* 4^6 ... 491 | + u'*v; 492 | ``` 493 | 494 | 496 | 497 | ```matlab 498 | a = sin( exp(x) ) - alpha* 4^6 + u'*v; 499 | ``` 500 | 501 |
504 | 505 | ### Spaces and alignment 🚿🚿🚿 506 | 507 | In some situations, it makes sense to break a line although it has not up to 508 | the limit, yet. This may be the case when you are dealing with an expression 509 | that – because of its length – has to break anyway further to the right; 510 | then, one would like to choose the line break point such that it coincides with 511 | _semantic_ or _syntactic_ break in the syntax. For examples, see the 512 | code below. 513 | 514 | 515 | 516 | 529 | 542 | 543 | 544 | 550 | 556 | 557 |
517 | 518 | ```matlab 519 | A = [ 1, 0.5 , 5; 4, ... 520 | 42.23, 33; 0.33, ... 521 | pi, 1]; 522 | 523 | a = alpha*(u+v)+beta*... 524 | sin(p'*q)-t... 525 | *circleArea(10); 526 | ``` 527 | 528 | 530 | 531 | ```matlab 532 | A = [ 1 , 0.5 , 5 ; ... 533 | 4 , 42.23, 33; ... 534 | 0.33, pi , 1 ]; 535 | 536 | a = alpha* (u+v) ... 537 | + beta* sin(p'*q) ... 538 | - t* circleArea(10); 539 | ``` 540 | 541 |
545 | 546 | Unsemantic line breaks decrease the readability. Neither the shape of the 547 | matrix, nor the number of summands in the second expression is clear. 548 | 549 | 551 | 552 | The shape and contents of the matrix, as well as the elements of the second 553 | expression, are immediately visible to the programmer. 554 | 555 |
558 | 559 | ##### Spaces in expressions 560 | 561 | Closely related to this is the usage of spaces in expressions. The rule is, 562 | again: put spaces there where MATLAB's syntax would. Consider the following 563 | example. 564 | 565 | 566 | 567 | 574 | 581 | 582 | 583 | 589 | 595 | 596 |
568 | 569 | ```matlab 570 | aValue = 5+6 / 3*4; 571 | ``` 572 | 573 | 575 | 576 | ```matlab 577 | aValue = 5 + 6/3*4; 578 | ``` 579 | 580 |
This spacing suggests that the value of `aValue` will be 11/12, which is of
course not the case.

Much better: the fact that the addition is executed last is reflected
by the spacing.
597 | 598 | ### Magic numbers 🚿🚿🚿 599 | 600 | When coding, sometimes you consider a value constant because you do not intend 601 | to change it anytime soon. Take, for example, a program that determines whether 602 | or not a given point sits outside a circle of radius 1 with center (1,1) and at 603 | the same time inside a square of edge length 2, right enclosing the circle (see 604 | \cite{Hull:2006:CCM}). 605 | 606 | When finished, the code will contain a couple of `1`s but it will not be clear 607 | if they are distinct or refer to the same abstract value (see below). Those 608 | hard coded numbers are frequently called _magic numbers_, as they do what they 609 | are supposed to do, but one cannot easily tell why. When you, after some time, 610 | change your mind and you do want to change the value of the radius, it will be 611 | rather difficult to identify those `1`s which actually refer to it. 612 | 613 | 614 | 615 | 637 | 660 | 661 | 668 | 674 | 675 |
616 | 617 | ```matlab 618 | x = 2; y = 0; 619 | 620 | 621 | 622 | 623 | pointsDistance = ... 624 | norm( [x,y]-[1,1] ); 625 | 626 | isInCircle = ... 627 | (pointsDistance < 1); 628 | isInSquare = ... 629 | ( abs(x-1)<1 ) && ... 630 | ( abs(y-1)<1 ); 631 | 632 | if ~isInCircle && isInSquare 633 | % [...] 634 | ``` 635 | 636 | 638 | 639 | ```matlab 640 | x = 2; y = 0; 641 | 642 | radius = 1; 643 | xc = 1; yc = 1; 644 | 645 | pointsDistance = ... 646 | norm( [x,y]-[xc,yc] ); 647 | 648 | isInCircle = ... 649 | (pointsDistance < radius); 650 | isInSquare = ... 651 | ( abs(x-xc) 659 |
It is not immediately clear what the various `1`s in the code do and
whether or not they represent the same entity. These numbers are called _magic
numbers._

The meaning of the variable `radius` can be seen instantly, and its
value is easily altered.
676 | 677 | ### Comments 🚿🚿🚿🚿🚿 678 | 679 | The most valuable character for clean MATLAB code is `%`, the comment 680 | character. All tokens after it on the same line are ignored, and the space can 681 | be used to explain the source code in English (or your tribal language, if you 682 | prefer). 683 | 684 | ##### Documentation 🚿🚿🚿🚿🚿 685 | 686 | There should be a big fat neon-red blinking frame around this paragraph, as 687 | documentation is _the_ single most important aspect about clean and 688 | readable code. Unfortunately, it also takes the longest to write which is why 689 | you will find undocumented source code everywhere you go. 690 | 691 | Look at listing~\ref{listing:fences} for a suggestion for quick and clear 692 | documentation, and see if you can do it yourself! 693 | 694 | ##### Structuring elements 🚿🚿🚿 695 | 696 | It is always useful to have the beginning and the end of the function not only 697 | indicated by the respective keywords, but also by something more visible. 698 | Consider building 'fences' with commented `#`, `!=`, or `-` characters, to 699 | visually separate distinct parts of the code. This comes in very handy when 700 | there are multiple functions in one source file, for example, or when there is 701 | a `for`-loop that stretches over that many lines that you cannot easily find 702 | the corresponding `end` anymore. 703 | 704 | For a (slightly exaggerated) example, see listing~\ref{listing:fences}. 705 | 706 | ```matlab 707 | function out = timeIteration( u, n ) 708 | % Takes a starting vector u and performs n time steps. 709 | % 710 | 711 | % set the parameters 712 | tau = 1.0; 713 | kappa = 1.0; 714 | out = u; 715 | 716 | % do the iteration 717 | for k = 1:n 718 | 719 | [out, flag] = proceedStep( out, tau, kappa ); 720 | 721 | % warn if something went wrong 722 | if ~flag 723 | warning( [ 'timeIteration:errorFlag', ... 724 | 'proceedStep returns flag ', flag ] ); 725 | end 726 | end 727 | end 728 | ``` 729 | 730 | Function in which `-`-fences are used to emphasize the functionally separate sections of the code. 731 | 732 | ### Usage of brackets 🚿🚿 733 | 734 | Of course, there is a clearly defined operator precedence list in MATLAB (see 735 | table~\ref{table:operator-precedence}) that makes sure that for every MATLAB 736 | expression, involving any unary or binary operator, there is a unique way of 737 | evaluation. It is quite natural to remember that MATLAB treats multiplication 738 | (`*`) before addition (`+`), but things may get less intuitive when it comes to 739 | logical operators, or a mix of numerical and logical ones (although this case 740 | is admittedly very rare). 741 | 742 | Of course one can always look those up (see 743 | table~\ref{table:operator-precedence}), but to save the work one could equally 744 | quick just insert a pair of bracket at the right spot, although they may be 745 | unnecessary – this will certainly help avoiding confusion. 746 | 747 | 748 | 749 | 757 | 765 | 766 | 767 | 773 | 778 | 779 |
750 | 751 | ```matlab 752 | isGood = a<0 ... 753 | && b>0 || k~=0; 754 | ``` 755 | 756 | 758 | 759 | ```matlab 760 | isGood = ( a<0 && b>0 ) ... 761 | || k~=0; 762 | ``` 763 | 764 |
768 | 769 | Without knowing if MATLAB first evaluates the short-circuit AND `&&` or the 770 | short-circuit OR `||`, it is impossible to predict the value of `isGood`. 771 | 772 | 774 | 775 | With the (unnecessary) brackets, the situation is clear. 776 | 777 |
```
\begin{table}
\begin{enumerate}
\item Parentheses \lstinline!()!
\item Transpose (\lstinline!.'!), power (\lstinline!.^!), complex conjugate transpose (\lstinline!'!), matrix power (\lstinline!^!)
\item Unary plus (\lstinline!+!), unary minus (\lstinline!-!), logical negation (\lstinline!~!)
\item Multiplication (\lstinline!.*!), right division (\lstinline!./!), left division (\lstinline!.\!), matrix multiplication (\lstinline!*!), matrix right division (\lstinline!/!), matrix left division (\lstinline!\!)
\item Addition (\lstinline!+!), subtraction (\lstinline!-!)
\item Colon operator (\lstinline!:!)
\item Less than (\lstinline!<!), less than or equal to (\lstinline!<=!), greater than (\lstinline!>!), greater than or equal to (\lstinline!>=!), equal to (\lstinline!==!), not equal to (\lstinline!~=!)
\item Element-wise AND (\lstinline!&!)
\item Element-wise OR (\lstinline!|!)
\item Short-circuit AND (\lstinline!&&!)
\item Short-circuit OR (\lstinline!||!)
\end{enumerate}
\caption{MATLAB operator precedence list.}
\label{table:operator-precedence}
\end{table}
```

### Errors and warnings 🚿🚿

No matter how carefully you design your code, there will probably be users who
manage to crash it, maybe with bad input data. As a matter of fact, it is not at
all uncommon in numerical computation for things to go fundamentally wrong.

> You write a routine that defines an iterative process to find the solution
> $`u^* = A^{-1}b`$ of a linear equation system (think of conjugate gradients).
> For some input vector $`u`$, you hope to find $`u^*`$ after a finite number
> of iterations. However, the iteration will only converge under certain
> conditions on $`A`$; and if $`A`$ happens not to fulfill those, the code will
> misbehave in some way.

It would be bad practice to assume that the user (or, you) always provides
input data to your routine fulfilling all necessary conditions, so you would
certainly like to intercept such cases. Notifying the user that something
went wrong can certainly be done by `disp()` or `fprintf()` commands, but the
clean way out is using `warning()` and `error()`. The latter differs from the
former only in that it terminates the execution of the program right after
having issued its message.
```matlab
tol = 1e-15;
rho = norm(r);

while abs(rho)>tol
    r   = oneStep( r );
    rho = norm( r );
end

% process solution
```

```matlab
tol  = 1e-15;
rho  = norm(r);
kmax = 1e4;

k = 0;
while abs(rho)>tol && k<kmax
    k   = k + 1;
    r   = oneStep( r );
    rho = norm( r );
end

if k==kmax
    warning( 'Desired tolerance not reached after %d iterations.', kmax );
else
    % process solution
end
```
873 | 874 | Iteration over a variable `r` that is supposed to be smaller than `tol` after 875 | some iterations. If that fails, the loop will never exit and occupy the CPU 876 | forever. 877 | 878 | 880 | 881 | Good practice: there is a maximum number of iterations. When it has been 882 | reached, the iteration failed. Throw a warning in that case. 883 | 884 |
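As a quick illustration of the two functions, a minimal sketch (the identifier `myIteration:noConvergence`, the message text, and `kmax` are made-up example names, not anything MATLAB defines):

```matlab
kmax = 1e4;   % example value

% issue a warning and let the program continue
warning( 'myIteration:noConvergence', ...
         'Desired tolerance not reached after %d iterations.', kmax );

% or: issue an error, which also terminates the program
error( 'myIteration:noConvergence', ...
       'Desired tolerance not reached after %d iterations.', kmax );
```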
887 | 888 | Although you could just evoke `warning()` and `error()` with a single string as 889 | argument (such as`error('Something went wrong!')`), good-style programs will 890 | leave the user with a clue _where_ the error has occurred, and of what type the 891 | error is (as mnemonic). This information is contained in the so-called _message 892 | ID_. 893 | 894 | The MATLAB help page contain quite a bit about message IDs, for example: 895 | 896 | > The `msgID` argument is a unique message identifier string that MATLAB 897 | > attaches to the error message when it throws the error. A message identifier 898 | > has the format `component:mnemonic`. Its purpose is to better identify the 899 | > source of the error. 900 | 901 | ### Switch statements 🚿🚿 902 | 903 | `switch` statements are in use whenever one would otherwise have to write a 904 | conditional statement with several `elseif` statements. They are also 905 | particularly popular when the conditional is a string comparison (see example 906 | below). 907 | 908 | 909 | 910 | 925 | 940 | 941 | 942 | 947 | 952 | 953 |
```matlab
switch pet
    case 'Bucky'
        feedCarrots();
    case 'Hector'
        feedSausages();



end
```

```matlab
switch pet
    case 'Bucky'
        feedCarrots();
    case 'Hector'
        feedSausages();
    otherwise
        error('petCare:feed',...
              'Unknown pet.');
end
```
943 | 944 | When none of the cases matches, the algorithm will just skip and continue. 945 | 946 | 948 | 949 | The unexpected case is intercepted. 950 | 951 |
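For comparison, a sketch of what the same dispatch looks like as an `if`/`elseif` cascade with string comparisons (reusing the toy names `pet`, `feedCarrots()`, and `feedSausages()` from above); the `switch` version expresses the intent more directly:

```matlab
if strcmp( pet, 'Bucky' )
    feedCarrots();
elseif strcmp( pet, 'Hector' )
    feedSausages();
else
    error( 'petCare:feed', 'Unknown pet.' );
end
```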
954 | 955 | #### First example 956 | 957 | ```matlab 958 | function p = prime( N ) 959 | % Returns all prime numbers below or equal to N. 960 | % 961 | 962 | for i = 2:N 963 | % checks if a number is prime 964 | isPrime = 1; 965 | for j = 2:i-1 966 | if ~mod(i, j) 967 | isPrime = 0; 968 | break 969 | end 970 | end 971 | 972 | % print to screen if true 973 | if isPrime 974 | fprintf( '%d is a prime number.\n', i ); 975 | end 976 | end 977 | 978 | end 979 | ``` 980 | 981 | The same code as in listing~\ref{listing:prime1}, with rules of style applied. 982 | It should now be somewhat easier to maintain and improve the code. Do you have 983 | ideas how to speed it up? 984 | 985 | ## Fast code 986 | 987 | As a MATLAB beginner, it is quite easy to use code that just works™ but 988 | comparing to compiled programs of higher programming languages is very slow. 989 | The benefit of the relatively straightforward way of programming in MATLAB 990 | (where no such things as explicit memory allocation, pointers, or data types 991 | "come in your way") needs to be paid with the knowledge of how to avoid 992 | fundamental mistakes. Fortunately, there are only a few _big ones_, so when you 993 | browse through this section and stick to the given hints, you can certainly be 994 | quite confident about your code. 995 | 996 | ### Using the profiler 997 | 998 | The first step in optimizing the speed of your program is finding out where it 999 | is actually going slow. In traditional programming, bottlenecks are not quite 1000 | easily found, and the humble coder would maybe insert timer commands around 1001 | those chunks of code where he or she suspects the delay to actually measure 1002 | its performance. You can do the same thing in MATLAB (using `tic` 1003 | and `toc` as timers) but there is a much more convenient way: 1004 | _the profiler._ 1005 | 1006 | The profiler is actually a wrapper around your whole program that measures the 1007 | execution time of each and every single line of code and depicts the result 1008 | graphically. This way, you can very quickly track down the lines that keep you 1009 | from going fast. See figure~\ref{figure:profiler} for an example output. 1010 | 1011 | ``` 1012 | \begin{figure} 1013 | \centering 1014 | \begin{subfigure}[b]{0.45\textwidth} 1015 | \includegraphics[height=5cm]{figures/matlab-open-profiler.png} 1016 | \subcaption{Evoking the profiler through the graphical user interface.} 1017 | \end{subfigure} 1018 | \hfill 1019 | \begin{subfigure}[b]{0.45\textwidth} 1020 | \includegraphics[width=6cm]{figures/matlab-profiler-circleBox-result.png} 1021 | \subcaption{Part of the profiler output when running the routine of section 1022 | on magic numbers (see page \pageref{example:magic-numbers}) one million 1023 | times. Clearly, the \lstinline!norm! command takes longest to execute, so when 1024 | trying to optimize one should start there.} 1025 | \end{subfigure} 1026 | \caption{Using the profiler.} 1027 | \label{figure:profiler} 1028 | \end{figure} 1029 | ``` 1030 | 1031 | > Besides the graphical interface, there is also a command line version of the 1032 | > profiler that can be used to integrate it into your scripts. The commands to 1033 | > invoke are `profile on` for starting the profiler and `profile off` for 1034 | > stopping it, followed by various commands to evaluate the gathers statistics. 1035 | > See the MATLAB help page on `profile`. 
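A minimal sketch of that command-line workflow, assuming the loop body is simply the distance computation from the magic-numbers example standing in for the code you actually want to examine:

```matlab
profile on                  % start gathering statistics

for i = 1:1e6
    pointsDistance = norm( [2,0] - [1,1] );   % code under investigation
end

profile off                 % stop gathering statistics
profile viewer              % open the graphical report window
stats = profile('info');    % ...or fetch the raw statistics as a struct
```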
1036 | 1037 | ### The MATtrix LABoratory 1038 | 1039 | Contrary to common belief, the MAT in MATLAB does not stand for mathematics, 1040 | but _matrix._ The reason for that is that proper MATLAB code uses matrix and 1041 | vector structures as often as possible, prominently at places where higher 1042 | programming languages such as C of Fortran would rather use loops. 1043 | 1044 | The reason for that lies in MATLAB's being an interpreted language. That means: 1045 | There is no need for explicitly compiling the code, you just write it and have 1046 | in run. The MATLAB interpreter then scans your code line by line and executes 1047 | the commands. As you might already suspect, this approach will never be able to 1048 | compete with compiled source code. 1049 | 1050 | However, MATLAB's internals contain certain precompiled functions which execute 1051 | basic matrix-vector operations. Whenever the MATLAB interpreter bumps into a 1052 | matrix-vector expression, the contents of the matrices are forwarded to the 1053 | underlying optimized and compiled code which, after execution, returns the 1054 | result. This approach makes sure that matrix operations in MATLAB are on par 1055 | with matrix operations with compiled languages. 1056 | 1057 | > Not only for matrix-vector operations, precompiled binaries are provided. 1058 | > Most standard tasks in numerical linear algebra are handled with a customized 1059 | > version of the ATLAS (BLAS) library. This concerns for example commands such 1060 | > as `eig()` (for finding the eigenvalues of a matrix), `\` (for solving a 1061 | > linear equation system with Gaußian[^2] elimination), and so on. 1062 | 1063 | #### Matrix pre-allocation 🏃🏃🏃🏃🏃 1064 | 1065 | When a matrix appears in MATLAB code for the first time, its contents need to 1066 | be stored in system memory (RAM). To do this, MATLAB needs to find a place in 1067 | memory (a range of _addresses_) which is large enough to hold the matrix and 1068 | assign this place to the matrix. This process is called allocation. Note that, 1069 | typically, matrices are stored continuously in memory, and not split up to here 1070 | and there. This way, the processor can quickly access its entries without 1071 | having to look around in the system memory. 1072 | 1073 | Now, what happens if the vector `v` gets allocated with 55 elements by, for 1074 | example, `v=rand(55,1)`, and the user decides later in the code to make it a 1075 | little bigger, say, `v=rand(1100,1)`? Well, obviously MATLAB has to find a new 1076 | slot in memory in case the old one is not wide enough to old all the new 1077 | entries. This is not so bad if it happens once or twice, but can slow down your 1078 | code dramatically when a matrix is growing inside a loop, for example. 1079 | 1080 | 1081 | 1082 | 1093 | 1104 | 1105 | 1106 | 1112 | 1119 | 1120 |
1083 | 1084 | ```matlab 1085 | n = 1e5; 1086 | 1087 | for i = 1:n 1088 | u(i) = sqrt(i); 1089 | end 1090 | ``` 1091 | 1092 | 1094 | 1095 | ```matlab 1096 | n = 1e5; 1097 | u = zeros(n,1); 1098 | for i = 1:n 1099 | u(i) = sqrt(i); 1100 | end 1101 | ``` 1102 | 1103 |
The vector `u` grows `n` times, and it probably has to be re-allocated just as
often. The approximate execution time of this code snippet is **21.20s**.

As the maximum size of the vector is known beforehand, one can easily tell MATLAB
to place `u` into memory with the appropriate size. The code here merely takes
**3.8 ms** to execute!
1121 | 1122 | > The previous code example is actually a little misleading as there is a much 1123 | > quicker way to fill `u` with the square roots of consecutive numbers. Can 1124 | > you find the one-liner? A look into the next section could help... 1125 | 1126 | #### Loop vectorization 🏃🏃🏃🏃🏃 1127 | 1128 | Because of the reasons mentioned in the beginning of this section, you would 1129 | like to avoid loops wherever you can and try to replace it by a vectorized 1130 | operation. 1131 | 1132 | When people commonly speak of 'optimizing code for MATLAB', it will most often 1133 | be this particular aspect. The topic is huge and this section can merely give 1134 | the idea of it. If you are stuck with slow loop operations and you have no idea 1135 | how to make it really quick, take a look at the excellent and comprehensive 1136 | guide at \cite{Mathworks:2009:CVG}. – There is almost always a way to 1137 | vectorize. 1138 | 1139 | Consider the following example a general scheme of how to remove loops from 1140 | vectorizable operations. 1141 | 1142 | ```matlab 1143 | n = 1e7; 1144 | a = 1; 1145 | b = 2; 1146 | 1147 | x = zeros( n, 1 ); 1148 | y = zeros( n, 1 ); 1149 | for i=1:n 1150 | x(i) = a + (b-a)/(n-1) ... 1151 | * (i-1); 1152 | y(i) = x(i) - sin(x(i))^2; 1153 | end 1154 | ``` 1155 | 1156 | Computation of `f(x)=x-\sin^2(x)` on `n` points between `a` and `b`. In this 1157 | version, each and every single point is being treated explicitly. Execution 1158 | time: approx. **0.91s**. 1159 | 1160 | ```matlab 1161 | n = 1e7; 1162 | a = 1; 1163 | b = 2; 1164 | 1165 | h = 1/(n-1); 1166 | 1167 | 1168 | x = (a:h:b); 1169 | 1170 | y = x - sin(x).^2; 1171 | ``` 1172 | 1173 | Does the same thing using vector notation. Execution time: approx. 0.12. 1174 | 1175 | The `sin()` function in MATLAB hence takes a vector as argument and acts as of 1176 | it operated on each element of it. Almost all MATLAB functions have this 1177 | capability, so make use of it if you can! 1178 | 1179 | ##### Vector indexing and boolean indexing. 1180 | 1181 | When dealing with vectors or matrices, it may sometimes happen that one has to 1182 | work only on certain entries of the object, e.g., those with odd index. 1183 | 1184 | Consider the following three different possibilities of setting the _odd_ 1185 | entries of a vector `v` to 0. 1186 | 1187 | ```matlab 1188 | % [...] create v 1189 | 1190 | n = length(v); 1191 | 1192 | for k = 1:2:n 1193 | v(k) = 0; 1194 | end 1195 | ``` 1196 | 1197 | Classical loop of the entries of interest (**1.04s**). 1198 | 1199 | ```matlab 1200 | % [...] create v 1201 | 1202 | n = length(v); 1203 | 1204 | v(1:2:n) = 0; 1205 | ``` 1206 | 1207 | Vector indexing: Matrices take (positive) integer vectors as arguments 1208 | (1.14). 1209 | 1210 | ``` 1211 | % [...] create v 1212 | 1213 | n = length(v); 1214 | mask = false(n,1); 1215 | mask(1:2:n) =true; 1216 | v( mask ) = 0; 1217 | ``` 1218 | 1219 | Boolean indexing: Matrices take _boolean_ arrays[^3] with the 1220 | same shape as `v` as arguments (**1.41**). 1221 | 1222 | [^3]: 1223 | A mistake that beginners tend to make is to define `mask` as an array of 1224 | integers, such as `mask = zeros(n,1);`. 1225 | 1226 | In this case, where the indices to be worked on are known beforehand, the 1227 | classical way of looping over the error is the fastest. Vector indexing makes 1228 | the code shorter, but creates a slight overhead; boolean indexing, by having to 1229 | create the boolean array `mask`, is significantly slower. 
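To illustrate the pitfall mentioned in the footnote, a small sketch: an array of numeric zeros and ones is not a valid mask, only a _logical_ array is (the exact error message wording varies between MATLAB versions).

```matlab
n = 10;
v = rand(n,1);

mask = zeros(n,1);     % numeric array -- NOT a logical mask
mask(1:2:n) = 1;
% v(mask) = 0;         % would fail: indices must be positive integers or logicals

mask = false(n,1);     % logical array -- a proper mask
mask(1:2:n) = true;
v(mask) = 0;           % sets the odd entries of v to zero
```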
However, should the criteria upon which action is taken dynamically depend on
the content of the vector itself, the situation is different. Consider again
the three schemes, this time for setting the `NaN` entries of a vector `v` to 0.

```matlab
% [...] create v

for k = 1:n
    if isnan(v(k))
        v(k) = 0;
    end
end
```

Classical loop: **1.19s**.

```matlab
% [...] create v

ind = ...
   find(isnan(v));
v( ind ) = 0;
```

Vector indexing: **0.44s**.

```matlab
% [...] create v

mask = isnan(v);

v( mask ) = 0;
```

Boolean indexing: **0.33s**.

Iterating through the array `v` and checking each element individually means
disregarding the "MAT" in MATLAB. Making use of the `find()` function, it is
possible to have `isnan()` work on the whole vector before setting the desired
indices to 0 in one go. Even better than that, doing away with the overhead
that `find()` creates, is to use the boolean array that `isnan()` returns to
index `v` directly[^4].

[^4]:
    Remember: You can combine several `mask`s with the logical operators `&`
    (_and_) and `|` (_or_). For example, `mask = isnan(v) | isinf(v);` is `true`
    wherever `v` has a `NaN` _or_ an `Inf`.

See also \cite{Mathworks:2001:MIM}.

#### Solving a linear equation system 🚿🚿🏃🏃🏃

When being confronted with a standard linear equation system of the form
$`Au=b`$, the solution can be written down as $`u = A^{-1}b`$ if $`A`$ is
regular. It may now be quite seductive to translate this into `u = inv(A)*b` in
MATLAB notation. Though this step will certainly yield the correct solution
(neglecting round-off errors, which admittedly can be quite large in certain
cases), it would take quite a long time to execute. The reason for this is the
fact that the computer actually does more work than required. What you tell
MATLAB to do here is to

1. explicitly calculate the inverse of `A`, store it in a temporary matrix, and
   then
2. multiply this matrix with `b`.

However, one is most often not interested in the explicit form of $`A^{-1}`$, but
only the final result $`A^{-1}b`$. The proper way out is MATLAB's
`\` (backslash) operator (or equivalently `mldivide()`),
which exactly serves the purpose of solving an equation system with Gaußian
elimination.

```matlab
n = 2e3;
A = rand(n,n);
b = rand(n,1);

u = inv(A)*b;
```

Solving the equation system with an explicit inverse. Execution time: approx.
**2.02s**.

```matlab
n = 2e3;
A = rand(n,n);
b = rand(n,1);

u = A\b;
```

Solving the equation system with the `\` operator. Execution time: approx. **0.80s**.

#### Dense and sparse matrices 🏃🏃🏃🏃🏃

Most discretizations of particular problems yield N-by-N matrices which
only have a small number of non-zero elements (proportional to N). These are
called sparse matrices, and as they appear so very often, there is plenty of
literature describing how to make use of that structure.
In particular, one can

- cut down the amount of memory used to store the matrix. Of course, instead of
  storing all the zeros, one would rather store the value and indices of the
  non-zero elements in the matrix. There are different ways of doing so. MATLAB
  internally uses the compressed-column format, and exposes the matrix to the
  user in indexed format.

- optimize algorithms for the use with sparse matrices. As a matter of fact,
  most basic numerical operations (such as Gaußian elimination, eigenvalue
  methods and so forth) can be reformulated for sparse matrices and save an
  enormous amount of computational time.

Of course, operations which only involve sparse matrices will also return a
sparse matrix (such as matrix-matrix multiplication `*`, `transpose`, `kron`,
and so forth).

```matlab
n = 1e4;
h = 1/(n+1);

A = zeros(n,n);
A(1,1) = 2;
A(1,2) = -1;
for i=2:n-1
    A(i,i-1) = -1;
    A(i,i  ) = 2;
    A(i,i+1) = -1;
end
A(n,n-1) = -1;
A(n,n)   = 2;

A = A / h^2;

% continued below
```

Creating the tridiagonal matrix `1/h^2\times\diag[-1, 2, -1]` in dense format.
The code is bulky for what it does, and cannot use native matrix notation.
Execution time: **0.67s**.

```matlab
n = 1e4;
h = 1/(n+1);

e = ones(n,1);
A = spdiags([-e 2*e -e],...
            [-1 0 1],...
            n, n );

A = A / h^2;

% continued below
```

The three-line equivalent using the sparse matrix format. The code is not only
shorter and easier to read, but also saves gigantic amounts of memory. Execution
time: **5.4 ms**!

```matlab
% A in dense format
b = ones(n,1);
u = A\b;
```

Gaußian elimination with a tridiagonal matrix in dense format. Execution time:
**55.06s**.

```matlab
% A in sparse format
b = ones(n,1);
u = A\b;
```

The same syntax, with `A` being sparse. Execution time: **0.36 ms**!

👉 Useful functions: `sparse()`, `spdiags()`, `speye()`, (`kron()`),...

#### Repeated solution of an equation system with the same matrix 🏃🏃🏃🏃🏃

It might happen sometimes that you need to solve an equation system a number of
times with the same matrix but different right-hand sides. When all the
right-hand sides are immediately available, this can be achieved with one
ordinary `\` operation.

```matlab
n = 1e3;
k = 50;
A = rand(n,n);
B = rand(n,k);

u = zeros(n,k);

for i=1:k
    u(:,i) = A \ B(:,i);
end
```

Consecutively solving with a couple of right-hand sides. Execution time:
**5.64s**.

```matlab
n = 1e3;
k = 50;
A = rand(n,n);
B = rand(n,k);

u = A \ B;
```

Solving with a number of right-hand sides in one go. Execution time:
**0.13s**.
1445 | 1446 | If, on the other hand, you need to solve the system once to get the next 1447 | right-hand side (which is often the case with time-dependent differential 1448 | equations, for example), this approach will not work; you will indeed have to 1449 | solve the system in a loop. However, one would still want to use the 1450 | information from the previous steps; this can be done by first factoring $`A`$ 1451 | into a product of a lower triangular matrix $`L`$ and an upper triangular matrix 1452 | $`U`$, and then instead of computing $`A^{-1}u^{(k)}`$ in each step, computing 1453 | $`U^{-1}L^{-1}u^{(k)}`$ (which is a lot cheaper). 1454 | 1455 | ```matlab 1456 | n = 2e3; 1457 | k = 50; 1458 | A = rand(n,n); 1459 | 1460 | u = ones(n,1); 1461 | 1462 | 1463 | for i = 1:k 1464 | u = A\u; 1465 | end 1466 | ``` 1467 | 1468 | Computing $`u = A^{-k}u_0`$ by solving the equation systems in the ordinary way. 1469 | Execution time: **38.94s**. 1470 | 1471 | ```matlab 1472 | n = 2e3; 1473 | k = 50; 1474 | A = rand(n,n); 1475 | 1476 | u = ones(n,1); 1477 | 1478 | [L,U] = lu( A ); 1479 | for i = 1:k 1480 | u = U\( L\u ); 1481 | end 1482 | ``` 1483 | 1484 | Computing $`u = A^{-k}u_0`$ by $`LU`$-factoring the matrix, then solving with the 1485 | $`LU`$ factors. Execution time: **5.35s**. Of course, when increasing the 1486 | number $`k`$ of iterations, the speed gain compared to the `A\` will 1487 | be more and more dramatic. 1488 | 1489 | > For many matrices $`A`$ in the above example, the final result will be heavily 1490 | > corrupted with round-off errors such that after `k=50` steps, the norm of the 1491 | > residual $`\|u_0-A^ku\|`$, which ideally equals 0, can be pretty large. 1492 | 1493 | ##### Factorizing sparse matrices. 1494 | 1495 | When $`LU`$- or Cholesky-factorizing a _sparse matrix_, the factor(s) are in 1496 | general not sparse anymore and can demand quite an amount of space in memory to 1497 | the point where no computer can cope with that anymore. The phenomenon of 1498 | having non-zero entries in the $`LU`$- or Cholesky-factors where the original 1499 | matrix had zeros is called _fill-in_ and has attracted a lot of attention in 1500 | the past 50 years. As a matter of fact, the success of iterative methods for 1501 | solving linear equation systems is largely thanks to this drawback. 1502 | 1503 | Beyond using an iterative method to solve the system, the most popular way to 1504 | cope with fill-in is to try to re-order the matrix elements in such a way that 1505 | the new matrix induces less fill-in. Examples of re-ordering are _Reverse 1506 | Cuthill-McKee_ and _Approximate Minimum Degree._ Both are implemented in MATLAB 1507 | as `colrcm()` and `colamd()`, respectively (with versions `symrcm()` and 1508 | `symamd()` for symmetric matrices). 1509 | 1510 | One can also leave all the fine-tuning to MATLAB by executing `lu()` for sparse 1511 | matrices with more output arguments; this will return a factorization for the 1512 | permuted and row-scaled matrix $`PR^{-1}AQ = LU`$ (see MATLAB's help pages and 1513 | example below) to reduce fill-in and increase the stability of the algorithm. 1514 | 1515 | ```matlab 1516 | n = 2e3; 1517 | k = 50; 1518 | 1519 | % get a non-singular nxn 1520 | % sparse matrix A: 1521 | % [...] 1522 | 1523 | u = ones(n,1); 1524 | 1525 | [L,U] = lu( A ); 1526 | for i = 1:k 1527 | u = U\( L\u ); 1528 | end 1529 | ``` 1530 | 1531 | Ordinary $`LU`$-factorization for a sparse matrix `A`. 
The factors `L` and `U` 1532 | are initialized as sparse matrices as well, but the fill-in phenomenon will 1533 | undo this advantage. Execution time: **4.31s**. 1534 | 1535 | ```matlab 1536 | n = 2e3; 1537 | k = 50; 1538 | 1539 | % get a non-singular nxn 1540 | % sparse matrix A: 1541 | % [...] 1542 | 1543 | u = ones(n,1); 1544 | 1545 | [L,U,P,Q,R] = lu(A); 1546 | for i = 1:k 1547 | u = Q*( U\(L\(P*(R\u))) ); 1548 | end 1549 | ``` 1550 | 1551 | $`LU`$-factoring with permutation and row-scaling. This version can use less 1552 | memory, execute faster, and provide more stability than the ordinary 1553 | $`LU`$-factorization. Execution time: **0.07s**. 1554 | 1555 | This factorization is implicitly applied by MATLAB when using the 1556 | `\`-operator for solving a sparse system of equations _once._ 1557 | 1558 | [^2]: 1559 | Johann Carl Friedrich Gauß (1777–1855), German mathematician and deemed 1560 | one of the greatest mathematicians of all times. In the English-speaking 1561 | world, the spelling with _ss_ instead of the original _ß_ has achieved wide 1562 | acceptance – probably because the _ß_ is not included in the key set of any 1563 | keyboard layout except the German one. 1564 | 1565 | ## Other tips & tricks 1566 | 1567 | ```matlab 1568 | function int = simpson( a, b, h ) 1569 | % Implements Simpson's rule for integrating the 1570 | % sine function over [a,b] with granularity h. 1571 | 1572 | x = a:h:b; 1573 | 1574 | int = 0; 1575 | n = length(x); 1576 | mid = (x(1:n-1) + x(2:n)) / 2; 1577 | int = sum( h/6 * ( sin(x(1:n-1)) ... 1578 | + 4*sin(mid ) ... 1579 | + sin(x(2:n )) ) ); 1580 | 1581 | end 1582 | ``` 1583 | 1584 | Implementation of Simpson's rule for numerically integrating a function (here: 1585 | `sin`) between `a` and `b`. Note the usage of the vector notation to speed up 1586 | the function. Also note that `sin` is hardcoded into the routine, and needs to 1587 | be changed each time we want to change the function. In case one is interested 1588 | in calculating the integral of $`f(x) = \exp(\sin(\frac{1}{x})) / 1589 | \tan(\sqrt{1-x^4})`$, this could get quite messy. 1590 | 1591 | #### Functions as arguments 🚿🚿🚿 1592 | 1593 | In numerical computation, there are set-ups which natively treat functions as 1594 | the objects of interest, for example when numerically integrating them over a 1595 | particular domain. For this example, imagine that you wrote a function that 1596 | implements Simpson's integration rule (see listing~\ref{listing:simpson1}), and 1597 | you would like to apply it to a number of functions without having to alter 1598 | your source code (for example, replacing `sin()` by `cos()`, `exp()` or 1599 | something else). 1600 | 1601 | A clean way to deal with this in \matlab{} is using _function handles_. 1602 | This may sound fancy, and describes nothing else then the capability of 1603 | treating functions (such as `sin()`) as arguments to other functions (such as 1604 | `simpson()`). The function call itself is written as easy as 1605 | 1606 | ```matlab 1607 | function int = simpson( f, a, b, h ) 1608 | % Implements Simpson's rule for integrating a 1609 | % function f over [a,b] with granularity h. 1610 | 1611 | x = a:h:b; 1612 | mid = (x(1:n-1) + x(2:n)) / 2; 1613 | 1614 | n = length(x); 1615 | 1616 | int = sum( h/6 * ( f(x(1:n-1)) ... 1617 | + 4*f(mid ) ... 1618 | + f(x(2:n )) ) ); 1619 | 1620 | end 1621 | ``` 1622 | 1623 | Simpson's rule with function handles. Note that the syntax for function arguments is no different from that of ordinary ones. 
1624 | 1625 | ```matlab 1626 | a = 0; 1627 | b = pi/2; 1628 | h = 1e-2; 1629 | int_sin = simpson( @sin, a, b, h ); 1630 | int_cos = simpson( @cos, a, b, h ); 1631 | int_f = simpson( @f , a, b, h ); 1632 | ``` 1633 | 1634 | where the function name need to be prepended by the `@`-character. 1635 | 1636 | The function `f()` can be any function that you defined yourself and 1637 | which is callable as `f(x)` with `x` being a vector of $`x`$ 1638 | values (like it is used in `simpson()`, listing~\ref{listing:simpson2}). 1639 | 1640 | #### Implicit matrix-vector products 🚿 1641 | 1642 | In numerical analysis, almost all methods for solving linear equation systems 1643 | _quickly_ are iterative methods, that is, methods which define how to 1644 | iteratively approach a solution in small steps (starting with some initial 1645 | guess) rather then directly solving them in one big step (such as Gau{\ss}ian 1646 | elimination). Two of the most prominent iterative methods are CG and GMRES. 1647 | 1648 | In particular, those methods _do not require the explicit availability of 1649 | the matrix_ as in each step of the iteration they merely form a matrix-vector 1650 | product with $A$ (or variations of it). Hence, they technically only need a 1651 | function to tell them how to carry out a matrix-vector multiplication. In some 1652 | cases, providing such a function may be easier than explicitly constructing 1653 | the matrix itself, as the latter usually requires one to pay close attention 1654 | to indices (which can get extremely messy). 1655 | 1656 | Beyond that, there may also a mild advantage in memory consumption as the 1657 | indices of the matrix do no longer need to sit in memory, but can be hard coded 1658 | into the matrix-vector-multiplication function itself. Considering the fact 1659 | that we are mostly working with sparse matrices however, this might not be 1660 | quite important. 1661 | 1662 | The example below illustrates the typical benefits and drawbacks of the 1663 | approach. 1664 | 1665 | ```matlab 1666 | function out = A_multiply( u ) 1667 | % Implements matrix-vector multiplication with 1668 | % diag[-1,2,-1]/h^2 . 1669 | 1670 | n = length( u ); 1671 | u = [0; u; 0]; 1672 | 1673 | out = -u(1:n) + 2*u(2:n+1) - u(3:n+2); 1674 | out = out * (n+1)^2; 1675 | 1676 | end 1677 | ``` 1678 | 1679 | Function that implements matrix-vector multiplication with `1/h^2 \times \diag(-1,2,-1)`. Note that the function consumes (almost) no more memory then 1680 | `u` already required. 1681 | 1682 | ```matlab 1683 | n = 1e3; 1684 | k = 500; 1685 | 1686 | u = ones(n,1); 1687 | for i=1:k 1688 | u = A_multiply( u ); 1689 | end 1690 | ``` 1691 | 1692 | Computing $`u = A^ku_0`$ with the function `A_multiply` 1693 | (listing~\ref{listing:Amultiply}). The memory consumption of this routine is 1694 | (almost) no greater than storing $`n`$ real numbers. Execution time: 1695 | **21s**. 1696 | 1697 | ```matlab 1698 | n = 1e3; 1699 | k = 500; 1700 | 1701 | e = ones(n,1); 1702 | A = spdiags([-e,2*e,-e],... 1703 | [-1, 0,-1],... 1704 | n, n ); 1705 | A = A * (n+1)^2; 1706 | 1707 | u = ones(n,1); 1708 | for i=1:k 1709 | u = A*u; 1710 | end 1711 | ``` 1712 | 1713 | Computing $`u = A^ku_0`$ with a regular sparse format matrix `A`, with the need to store it in memory. Execution time: **7s**. 1714 | 1715 | All in all, these considerations shall not lead you to rewrite all you 1716 | matrix-vector multiplications as function calls. 
Mind, however, that there are 1717 | situations where one would _never_ use matrices in their explicit form, 1718 | although mathematically written down like that: 1719 | 1720 | ##### Example: Multigrid 1721 | 1722 | In geometric multigrid methods, a domain is discretized with a certain 1723 | parameter $h$ ("grid width") and the operator $`A_h`$ written down for that 1724 | discretization (see the examples above, where `A_h=h^{-2}\diag(-1,2,1)` is 1725 | really the discretization of the $`\Delta`$-operator in one dimension). In a 1726 | second step, another, somewhat coarser grid is considered with $`H=2h`$, for 1727 | example. The operator $`A_H`$ on the coarser grid is written down as 1728 | 1729 | ```math 1730 | A_H = I_h^H A_h I_H^h, 1731 | ``` 1732 | 1733 | where the $`I_*^*`$ operators define the transition from the coarse to the fine 1734 | grid, or the other way around. When applying it to a vector on the coarse grid 1735 | $`u_H`$ ($`A_Hu_H =I_h^H A_h I_H^h u_H`$), the above definition reads: 1736 | 1737 | 1. $`I_H^h u_H`$: Map $`u_H`$ to the fine grid. 1738 | 2. $`A_h\cdot`$: Apply the fine grid operator to the transformation. 1739 | 3. $`I_h^H\cdot`$: Transform the result back to the coarse grid. 1740 | 1741 | How the transformations are executed needs to be defined. One could, for 1742 | example, demand that $`I_H^h`$ maps all points that are part of the fine grid 1743 | _and_ the coarse grid to itself; all points on the fine grid, that lie right in 1744 | between two coarse variables get half of the value of each of the two (see 1745 | figure~\ref{subfig:coarse-fine}). 1746 | 1747 | \begin{figure} 1748 | \centering 1749 | \begin{subfigure}{0.45\textwidth} 1750 | \input{figures/coarse-fine.tex} 1751 | \caption{Possible transformation rule when translating values from the 1752 | coarse to the fine grid. See listing~\ref{listing:IHh}.} 1753 | \label{subfig:coarse-fine} 1754 | \end{subfigure} 1755 | \hfill 1756 | \begin{subfigure}{0.45\textwidth} 1757 | \input{figures/fine-coarse.tex} 1758 | \caption{Mapping back from fine to coarse.} 1759 | \label{subfig:fine-coarse} 1760 | \end{subfigure} 1761 | \caption{} 1762 | \end{figure} 1763 | 1764 | ```matlab 1765 | function uFine = coarse2fine( uCoarse ) 1766 | % Transforms values from a coarse grid to a fine grid. 1767 | 1768 | N = length(uCoarse); 1769 | n = 2*N - 1; 1770 | 1771 | uFine(1:2:n) = uCoarse; 1772 | 1773 | midValues = 0.5 * ( uCoarse(1:N-1) + uCoarse(2:N) ); 1774 | uFine(2:2:n) = midValues; 1775 | 1776 | end 1777 | ``` 1778 | 1779 | Function that implements the operator $`I_H^h`$ from the example (see 1780 | figure~\ref{subfig:coarse-fine}). Writing down the structure of the 1781 | corresponding matrix would be somewhat complicated, and even more so when 1782 | moving to two- or three-dimensional grids. Note also how matrix notation has 1783 | been exploited. 1784 | 1785 | In the analysis of the method, $`I_H^h`$ and $`I_h^H`$ will always be treated 1786 | as matrices, but when implementing, one would _certainly not_ try to figure out 1787 | the structure of the matrix. It is a lot simpler to implement a function that 1788 | executes the rule suggested above, for example. 
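
For completeness, the reverse map $`I_h^H`$ (figure~\ref{subfig:fine-coarse}) can be implemented in the same spirit. A minimal sketch, assuming the weights $`1/2`$ for coinciding points and $`1/4`$ for the two neighboring fine-grid midpoints, and using the hypothetical name `fine2coarse`:

```matlab
function uCoarse = fine2coarse( uFine )
% Transforms values from a fine grid back to a coarse grid,
% weighting coinciding points by 1/2 and the two neighboring
% fine-grid midpoints by 1/4 each.

n = length(uFine);
N = (n+1) / 2;

uCoarse = 0.5 * uFine(1:2:n);

midValues      = uFine(2:2:n);
uCoarse(1:N-1) = uCoarse(1:N-1) + 0.25 * midValues;
uCoarse(2:N  ) = uCoarse(2:N  ) + 0.25 * midValues;

end
```

As with `coarse2fine()`, the matrix corresponding to $`I_h^H`$ never needs to be written down explicitly.
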
1789 | -------------------------------------------------------------------------------- /codeTimer/codeTimer.m: -------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function codeTimer( sampleSize, varargin ) 3 | 4 | numFunctions = length( varargin ); 5 | 6 | param = 3e7; 7 | 8 | % times 9 | t = zeros(numFunctions,1); 10 | 11 | for i = 1:sampleSize 12 | fprintf( 'Sample %d...', i ); 13 | for codeIndex = 1:numFunctions 14 | time = varargin{codeIndex}( param ); 15 | t(codeIndex) = t(codeIndex) + time; 16 | end 17 | fprintf( 'done.\n' ); 18 | end 19 | 20 | averageT = t/sampleSize; 21 | 22 | fprintf( '\n\n' ); 23 | for codeIndex = 1:numFunctions 24 | fprintf( 'Code piece %d: %d s\n\n', codeIndex, averageT(codeIndex) ); 25 | end 26 | end 27 | % ============================================================================= -------------------------------------------------------------------------------- /codeTimer/indexing1.m: -------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function time = indexing1( n ) 3 | 4 | v = rand(n,1); 5 | v( v<0.1 ) = NaN; 6 | 7 | tic 8 | for i = 1:n 9 | if isnan(v(i)) 10 | v(i) = 0; 11 | end 12 | end 13 | time = toc; 14 | 15 | end 16 | % ============================================================================= -------------------------------------------------------------------------------- /codeTimer/indexing2.m: -------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function time = indexing2( n ) 3 | 4 | v = rand(n,1); 5 | v( v<0.1 ) = NaN; 6 | 7 | tic 8 | v( find(isnan(v)) ) = 0; 9 | time = toc; 10 | 11 | end 12 | % ============================================================================= -------------------------------------------------------------------------------- /codeTimer/indexing3.m: -------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function time = indexing3( n ) 3 | 4 | v = rand(n,1); 5 | v( v<0.1 ) = NaN; 6 | 7 | tic; 8 | v( isnan(v) ) = 0; 9 | time = toc; 10 | 11 | end 12 | % ============================================================================= -------------------------------------------------------------------------------- /codeTimer/oddEntriesToZero1.m: -------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function oddEntriesToZero1( n ) 3 | 4 | v = rand(n,1); 5 | 6 | for k = 1:2:n 7 | v(k) = 0; 8 | end 9 | 10 | end 11 | % ============================================================================= -------------------------------------------------------------------------------- /codeTimer/oddEntriesToZero2.m: -------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function oddEntriesToZero2( n ) 3 | 4 | v = rand(n,1); 5 | 6 | v(1:2:n) = 0; 7 | 8 | end 9 | % ============================================================================= -------------------------------------------------------------------------------- /codeTimer/oddEntriesToZero3.m: 
-------------------------------------------------------------------------------- 1 | % ============================================================================= 2 | function oddEntriesToZero3( n ) 3 | 4 | v = rand(n,1); 5 | 6 | mask = false(n,1); 7 | mask(1:2:n) = true; 8 | 9 | v(mask) = 0; 10 | 11 | end 12 | % ============================================================================= -------------------------------------------------------------------------------- /latex/bibtex/matlab-tipsntricks.bib: -------------------------------------------------------------------------------- 1 | @ONLINE{Acklam:2003:MAM, 2 | title = {{MATLAB} array manipulation tips and tricks}, 3 | author = {Acklam, Peter J.}, 4 | month = oct, 5 | year = {2003}, 6 | url = {http://home.online.no/~pjacklam/matlab/doc/mtt/} 7 | } 8 | 9 | @MANUAL{Getreuer:2004:WFM, 10 | title = {Writing Fast {MATLAB} Code}, 11 | author = {Getreuer, Pascal}, 12 | year = {2009}, 13 | url = {http://www.mathworks.com/matlabcentral/fileexchange/5685} 14 | } 15 | 16 | @ONLINE{Hull:2006:CCM, 17 | title = {Cleaner code in {MATLAB}}, 18 | author = {Hull, Doug}, 19 | lastchecked = {25.01.2009}, 20 | year = {2006}, 21 | url = {http://blogs.mathworks.com/pick/2006/12/13/cleaner-code-in-matlab-part-1-of-series/} 22 | } 23 | 24 | @ONLINE{Johnson:2002:MPS, 25 | title = {{MATLAB} Programming Style Guidelines}, 26 | author = {Johnson, Richard}, 27 | month = oct, 28 | year = {2002}, 29 | url = {http://www.mathworks.com/matlabcentral/fileexchange/2529} 30 | } 31 | 32 | @ONLINE{Moler:2008:EM, 33 | title = {Experiments with {MATLAB}}, 34 | author = {Moler, Cleve}, 35 | month = apr, 36 | year = {2008}, 37 | url = {http://www.mathworks.com/moler/exm/chapters.html} 38 | } 39 | 40 | @ONLINE{Moler:2004:NCM, 41 | title = {Numerical Computing with {MATLAB}}, 42 | author = {Moler, Cleve}, 43 | year = {2004}, 44 | url = {http://www.mathworks.com/moler/chapters.html} 45 | } 46 | 47 | @ONLINE{Mathworks:2009:CVG, 48 | title = {Code Vectorization Guide}, 49 | key = {Mathworks}, 50 | year = {2009}, 51 | url = {http://www.mathworks.com/support/tech-notes/1100/1109.shtml} 52 | } 53 | 54 | @ONLINE{Mathworks:2001:MIM, 55 | title = {Matrix Indexing in {MATLAB\textsuperscript{\textregistered}}}, 56 | key = {Mathworks}, 57 | month = sep, 58 | year = {2001}, 59 | url = {http://www.mathworks.com/company/newsletters/digest/sept01/matrix.html} 60 | } 61 | 62 | @ONLINE{Shoelson:2011:GMC, 63 | title = {Good {MATLAB} Coding Practices}, 64 | author = {Shoelson, Brett}, 65 | month = jan, 66 | year = {2011}, 67 | url = {http://blogs.mathworks.com/pick/2011/01/14/good-matlab-coding-practices/} 68 | } 69 | -------------------------------------------------------------------------------- /latex/clean-code.tex: -------------------------------------------------------------------------------- 1 | \newpage 2 | \section{Clean code} 3 | 4 | There is a plethora of reasons why code that \emph{just works\texttrademark} 5 | is not good enough. Take a peek at listing~\ref{listing:prime1} and admit: 6 | 7 | \begin{itemize} 8 | % maintainability: 9 | \item Fixing bugs, adding features, and working with the code in all other 10 | aspects get a lot easier when the code is not messy. 11 | % readability: 12 | \item Imagine someone else looking at your code, and trying to figure out what 13 | it does. In case you have you did not keep it clean, that will certainly be a 14 | huge waste of time. 
15 | \item You might be planning to code for a particular purpose now, not planning 16 | on ever using it again, but experience tells that there is virtually no 17 | computational task that you come across only once in your programming life. 18 | Imagine yourself looking at your own code, a week, a month, or a year from 19 | now: Would you still be able to understand why the code works as it does? 20 | Clean code will make sure you do. 21 | \end{itemize} 22 | 23 | Examples of messy, unstructured, and generally ugly programs are plenty, but 24 | there are also places where you are almost guaranteed to find well-structured 25 | code. Take, for example the \matlab{} internals: Many of the functions that 26 | you might make use of when programming \matlab{} are implemented in \matlab{} 27 | syntax themselves -- by professional MathWorks programmers. To look at such 28 | the contents of the \lstinline!mean()! function (which calculates the average 29 | mean value of an array), type \lstinline!edit mean! on the \matlab{} command 30 | line. You might not be able to understand what's going on, but the way the 31 | file looks like may give you hints on how to write clean code. 32 | 33 | 34 | \begin{lstlisting}[framerule=2pt,rulecolor=\color{badred},float=b,label={listing:prime1},caption={Perfectly legal \matlab{} code, with all rules of style ignored. Can you guess what this function does?}] 35 | function lll(ll1,l11,l1l);if floor(l11/ll1)<=1;... 36 | lll(ll1,l11+1,l1l );elseif mod(l11,ll1)==0;lll(... 37 | ll1,l11+1,0);elseif mod(l11,ll1)==floor(l11/... 38 | ll1)&&~l1l;floor(l11/ll1),lll(ll1,l11+1,0);elseif... 39 | mod(l11,ll1)>1&&mod(l11,ll1)0 || k~=0; 564 | \end{lstlisting} 565 | Without knowing if \matlab{} first evaluates the short-circuit AND `\lstinline!&&!' or the short-circuit OR `\lstinline!||!', it is impossible to predict the value of \lstinline!isGood!. 566 | \end{minipage} 567 | \hfill 568 | \begin{minipage}[t]{.45\textwidth} 569 | \begin{lstlisting}[framerule=2pt,rulecolor=\color{goodgreen}] 570 | isGood = ( a<0 && b>0 ) ... 571 | || k~=0; 572 | \end{lstlisting} 573 | With the (unnecessary) brackets, the situation is clear. 574 | \end{minipage} 575 | \hfill 576 | 577 | 578 | \begin{table} 579 | \begin{enumerate} 580 | \item Parentheses \lstinline!()! 581 | \item Transpose (\lstinline!.'!), power (\lstinline!.^!), complex conjugate transpose (\lstinline!'!), matrix power (\lstinline!^!) 582 | \item Unary plus (\lstinline!+!), unary minus (\lstinline!-!), logical negation (\lstinline!~!) 583 | \item Multiplication (\lstinline!.*!), right division (\lstinline!./!), left division (\lstinline!.\!), matrix multiplication (\lstinline!*!), matrix right division (\lstinline!/!), matrix left division (\lstinline!\!) 584 | \item Addition (\lstinline!+!), subtraction (\lstinline!-!) 585 | \item Colon operator (\lstinline!:!) 586 | \item Less than (\lstinline!!), greater than or equal to (\lstinline!>=!), equal to (\lstinline!==!), not equal to (\lstinline!~=!) 587 | \item Element-wise AND (\lstinline!&!) 588 | \item Element-wise OR (\lstinline!|!) 589 | \item Short-circuit AND (\lstinline!&&!) 590 | \item Short-circuit OR (\lstinline!||!) 591 | \end{enumerate} 592 | \caption{\matlab{} operator precedence list.} 593 | \label{table:operator-precedence} 594 | \end{table} 595 | 596 | 597 | \subsection{Errors and warnings -- \cleansymbol\cleansymbol} 598 | 599 | No matter how careful you design your code, there will probably be users who manage to crash it, maybe with bad input data. 
As a matter of fact, this is not really uncommon in numerical computation that things go fundamentally wrong. 600 | 601 | \begin{example} 602 | You write a routine that defines an iterative process to find the solution 603 | $u^* = A^{-1}b$ of a linear equation system (think of conjugate gradients). 604 | For some input vector $u$, you hope to find $u^*$ after a finite number of 605 | iterations. However, the iteration will only converge under certain conditions 606 | on $A$; and if $A$ happens not to fulfill those, the code will misbehave in 607 | some way. 608 | \end{example} 609 | 610 | It would be bad practice to assume that the user (or, you) always provides 611 | input data to your routine fulfilling all necessary conditions, so you would 612 | certainly like to conditionally intercept. Notifying the user that something 613 | went wrong can certainly be done by \lstinline!disp()! or 614 | \lstinline!fprintf()! commands, but the clean way out is using 615 | \lstinline!warning()! and \lstinline!error()!. -- The latter differs from the 616 | former only in that it terminates the execution of the program right after 617 | having issued its message. 618 | 619 | \hfill 620 | \begin{minipage}[t]{.45\textwidth} 621 | \begin{lstlisting}[framerule=2pt,rulecolor=\color{badred}] 622 | tol = 1e-15; 623 | rho = norm(r); 624 | 625 | 626 | 627 | while abs(rho)>tol 628 | 629 | r = oneStep( r ); 630 | rho = norm( r ); 631 | end 632 | 633 | 634 | 635 | 636 | 637 | % process solution 638 | 639 | \end{lstlisting} 640 | Iteration over a variable \lstinline!r! that is supposed to be smaller than \lstinline!tol! after some iterations. If that fails, the loop will never exit and occupy the CPU forever. 641 | \end{minipage} 642 | \hfill 643 | \begin{minipage}[t]{.45\textwidth} 644 | \begin{lstlisting}[framerule=2pt,rulecolor=\color{goodgreen}] 645 | tol = 1e-15; 646 | rho = norm(r); 647 | kmax = 1e4; 648 | 649 | k = 0; 650 | while abs(rho)>tol && k,black!50] (u1) to node [arrownote,left] {$1$} (u5) ; 20 | \draw [->,black!50] (u1) to node [arrownote,right] {$\frac{1}{2}$} (u6) ; 21 | 22 | \draw [->,black!50] (u2) to node [arrownote,left] {$\frac{1}{2}$} (u6) ; 23 | \draw [->,black!50] (u2) to node [arrownote,right] {$1$} (u7) ; 24 | \draw [->,black!50] (u2) to node [arrownote,right] {$\frac{1}{2}$} (u8) ; 25 | 26 | \draw [->,black!50] (u3) to node [arrownote,left] {$\frac{1}{2}$} (u8) ; 27 | \draw [->,black!50] (u3) to node [arrownote,right] {$1$} (u9) ; 28 | 29 | \end{tikzpicture} -------------------------------------------------------------------------------- /latex/figures/fine-coarse.tex: -------------------------------------------------------------------------------- 1 | \begin{tikzpicture}[ 2 | node/.style={ 3 | circle, 4 | draw=black!50, 5 | fill=black!10, 6 | thick, 7 | font=\footnotesize 8 | }, 9 | scale=0.7, 10 | arrownote/.style={black,font=\footnotesize} 11 | ] 12 | 13 | \node (u1) at (0,3) [node] {$u_{H,1}$}; 14 | \node (u2) at (4,3) [node] {$u_{H,2}$}; 15 | \node (u3) at (8,3) [node] {$u_{H,3}$}; 16 | 17 | \node (u5) at (0,0) [node] {$u_{h,1}$}; 18 | \node (u6) at (2,0) [node] {$u_{h,2}$}; 19 | \node (u7) at (4,0) [node] {$u_{h,3}$}; 20 | \node (u8) at (6,0) [node] {$u_{h,4}$}; 21 | \node (u9) at (8,0) [node] {$u_{h,5}$}; 22 | 23 | \draw [very thick] (u1) -- (u2) --(u3); 24 | \draw [very thick] (u5) --(u6) --(u7) --(u8) --(u9); 25 | 26 | \draw [->,black!50] (u5) to node [arrownote,left] {$\frac{1}{2}$} (u1); 27 | \draw [->,black!50] (u6) to node [arrownote,right] {$\frac{1}{4}$} (u1); 28 | 29 | 
\draw [->,black!50] (u6) to node [arrownote,left] {$\frac{1}{4}$} (u2); 30 | \draw [->,black!50] (u7) to node [arrownote,right] {$\frac{1}{2}$} (u2); 31 | \draw [->,black!50] (u8) to node [arrownote,right] {$\frac{1}{4}$} (u2); 32 | 33 | \draw [->,black!50] (u8) to node [arrownote,left] {$\frac{1}{4}$} (u3); 34 | \draw [->,black!50] (u9) to node [arrownote,right] {$\frac{1}{2}$} (u3); 35 | 36 | \end{tikzpicture} 37 | -------------------------------------------------------------------------------- /latex/figures/julia-logo-color.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nschloe/matlab-guidelines/f3fbe194535753c3c684be4a4ff6daedb5bff957/latex/figures/julia-logo-color.pdf -------------------------------------------------------------------------------- /latex/figures/matlab-open-profiler.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nschloe/matlab-guidelines/f3fbe194535753c3c684be4a4ff6daedb5bff957/latex/figures/matlab-open-profiler.png -------------------------------------------------------------------------------- /latex/figures/matlab-profiler-circleBox-result.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nschloe/matlab-guidelines/f3fbe194535753c3c684be4a4ff6daedb5bff957/latex/figures/matlab-profiler-circleBox-result.png -------------------------------------------------------------------------------- /latex/figures/python-logo-generic.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nschloe/matlab-guidelines/f3fbe194535753c3c684be4a4ff6daedb5bff957/latex/figures/python-logo-generic.pdf -------------------------------------------------------------------------------- /latex/header.tex: -------------------------------------------------------------------------------- 1 | % common header for all exercise sheets 2 | \documentclass[% 3 | paper=a4, 4 | parskip=half, 5 | oneside, 6 | bibliography=totoc 7 | ]{scrartcl} 8 | 9 | % don't number subsections and deeper 10 | \setcounter{secnumdepth}{1} 11 | 12 | % \usepackage{amsbsy,amsmath,amsthm,amsfonts,amssymb,color,xcolor} 13 | \usepackage{amsmath} 14 | \usepackage{amsthm} 15 | \usepackage{graphicx} 16 | 17 | \usepackage[l2tabu, orthodox]{nag} 18 | \usepackage{fixltx2e} 19 | \usepackage{microtype} 20 | 21 | % use the latin modern fonts and a better encoding 22 | \usepackage[T1]{fontenc} 23 | \usepackage{lmodern} 24 | 25 | % Get the fine points of typography. 26 | % Leads to bad line breaks in Remark blocks; disable for now. 
27 | %\usepackage{microtype} 28 | 29 | \usepackage{subcaption} 30 | 31 | % \titlehead{\includegraphics[width=3cm]{../../logos/logo}} 32 | % \subject{\matlab{} in numerical analysis} 33 | \title{Guidelines for\\writing clean and fast code 34 | in~\matlab{}\footnote{\matlab{} Central ``File Exchange Pick of the Week'', 35 | January 14, 2011 \cite{Shoelson:2011:GMC}}} 36 | 37 | \author{Nico Schl\"omer\thanks{E-mail: 38 | \href{mailto:nico.schloemer@gmail.com}{\nolinkurl{nico.schloemer@gmail.com}} 39 | The author will be happy about comments and suggestions about the document.}} 40 | \date{\today} 41 | % \publishers{publishers} 42 | 43 | \usepackage{listings} % include highlighted source code 44 | \lstset{% general command to set parameter(s) 45 | basicstyle=\ttfamily, 46 | keywordstyle=\color{matlabeditorkeyword}, 47 | identifierstyle=, 48 | commentstyle=\color{matlabeditorcomment}, 49 | stringstyle=\color{matlabeditorstring}, 50 | showstringspaces=false, % no special string spaces 51 | frame=single, 52 | language=Matlab, 53 | showlines=true, % don't suppress empty lines 54 | captionpos=b, 55 | abovecaptionskip=2ex 56 | } 57 | 58 | % Minted is newer and more powerful, but harder to use for its dependency on 59 | % Pygments. 60 | %\usepackage{minted} 61 | %\usemintedstyle{matlab} 62 | 63 | \usepackage{graphicx} 64 | 65 | % have text float around images 66 | \usepackage{floatflt} 67 | \usepackage{wrapfig} 68 | 69 | \usepackage{tikz} 70 | 71 | \usepackage{booktabs} 72 | 73 | \theoremstyle{plain} 74 | % \theoremstyle{definition} 75 | \newtheorem*{remark}{Remark} 76 | \newtheorem*{example}{Example} 77 | 78 | \usepackage[ 79 | bookmarks, 80 | colorlinks=false, 81 | pdftitle={Guidelines for writing clean and fast code in MATLAB(R)}, 82 | pdfauthor={Nico Schloemer} 83 | ]{hyperref} 84 | 85 | \usepackage[% 86 | hyperref=auto, 87 | backend=biber 88 | ]{biblatex} 89 | \bibliography{bibtex/matlab-tipsntricks} 90 | 91 | 92 | \usepackage{marvosym} 93 | \newcommand\cleansymbol{\AtSixty} 94 | \newcommand\fastsymbol{\Lightning} 95 | 96 | \newcommand\matlab{MATLAB} 97 | 98 | % get the euro symbol 99 | \usepackage{eurosym} 100 | 101 | % get the trademark symbol correct 102 | \usepackage{textcomp} 103 | 104 | \usepackage{siunitx} 105 | \sisetup{obeyall} 106 | \newcommand\extime[1]{\textbf{\SI{#1}{\second}}} 107 | 108 | % declare colors 109 | \usepackage{xcolor} 110 | % \definecolor{goodgreen}{cmyk}{1,0,1,0} 111 | \definecolor{goodgreen}{cmyk}{0.8, 0,1,0} 112 | \definecolor{mediocre} {cmyk}{ 0, 0,1,0.1} 113 | \definecolor{badred} {cmyk}{ 0, 1,1,0} 114 | % \definecolor{myblue} {cmyk}{1,1,0,0} 115 | % \definecolor{matlabeditorcomment}{cmyk}{0.87,0.45,0.87,0} 116 | \definecolor{matlabeditorcomment}{RGB}{34,139,34} 117 | \definecolor{matlabeditorkeyword}{RGB}{0,0,255} 118 | \definecolor{matlabeditorstring} {RGB}{181,81,243} 119 | 120 | %\usepackage[boxed]{algorithm2e} % pretty-print pseudo code 121 | %\restylealgo{boxed}\linesnumbered\dontprintsemicolon 122 | %\setalcapskip{2ex} 123 | %\renewcommand{\listalgorithmcfname}{Lijst van algoritmen}% 124 | %\renewcommand{\algorithmcfname}{Algoritme}% 125 | 126 | \DeclareMathOperator{\diag}{diag} 127 | 128 | \newcommand{\norm}[1]{\ensuremath{\left\|#1\right\|}} 129 | % \providecommand{\der}[2]{\frac{\partial #1}{\partial #2}} 130 | % \providecommand{\dertwo}[2]{\frac{\partial^2 #1}{\partial #2^2}} 131 | -------------------------------------------------------------------------------- /latex/justfile: 
-------------------------------------------------------------------------------- 1 | default: 2 | tectonic main.tex 3 | 4 | clean: 5 | @rm -f \ 6 | main-blx.bib \ 7 | main.aux \ 8 | main.bcf \ 9 | main.blg \ 10 | main.log \ 11 | main.out \ 12 | main.pdf \ 13 | main.toc \ 14 | main.nav \ 15 | main.bbl \ 16 | main.thm \ 17 | main.run.xml \ 18 | main.pyg \ 19 | main.out.pyg \ 20 | missfont.log 21 | -------------------------------------------------------------------------------- /latex/main.tex: -------------------------------------------------------------------------------- 1 | \input{header} 2 | 3 | 4 | \begin{document} 5 | 6 | \maketitle 7 | 8 | \begin{abstract} 9 | This document is aimed at \matlab{} beginners who already know the syntax but 10 | feel are not yet quite experienced with it. Its goal is to give a number of 11 | hints which enable the reader to write quality \matlab{} programs and to avoid 12 | commonly made mistakes. 13 | 14 | There are three major independent chapters which may very well be read 15 | separately. Also, the individual chapters each split up into one or two 16 | handful of chunks of information. In that sense, this document is really a 17 | slightly extended list of dos and don'ts. 18 | 19 | Chapter 1 describes some aspects of \emph{clean} code. The impact of a 20 | subsection for the cleanliness of the code is indicated by one to five 21 | \cleansymbol--symbols, where five \cleansymbol's want to say that following 22 | the given suggestion is of great importance for the comprehensibility of the 23 | code. 24 | 25 | Chapter 2 describes how to speed up the code and is largely a list of mistakes 26 | that beginners may tend to make. This time, the \fastsymbol-symbol represents 27 | the amount of speed that you could gain when sticking to the hints given in 28 | the respective section. 29 | 30 | This guide is written as part of a basic course in numerical analysis, most 31 | examples and codes will hence tend to refer to numerical integration or 32 | differential equations. However, almost all aspects are of general nature and 33 | will also be of interest to anyone using \matlab{}. 34 | \end{abstract} 35 | 36 | \tableofcontents 37 | 38 | \input{matlab-alternatives.tex} 39 | \input{clean-code.tex} 40 | \input{fast-code.tex} 41 | \input{tipsntricks.tex} 42 | 43 | \nocite{% 44 | Getreuer:2004:WFM, 45 | Acklam:2003:MAM, 46 | Moler:2004:NCM, 47 | Moler:2008:EM} 48 | \printbibliography 49 | 50 | \end{document} 51 | -------------------------------------------------------------------------------- /latex/matlab-alternatives.tex: -------------------------------------------------------------------------------- 1 | \newpage 2 | \section*{\matlab{} alternatives} 3 | \addcontentsline{toc}{section}{\matlab{} alternatives} 4 | 5 | When writing \matlab{} code, you need to realize that unlike C, Fortran, or 6 | Python code, you will always need the \emph{commercial} \matlab{} environment 7 | to have it run. Right now, that might not be much of a problem to you as you 8 | are at a university or have some other free access to the software, but 9 | sometime in the future, this might change. 10 | 11 | The current cost for the basic \matlab{} kit, which does not include 12 | \emph{any} toolbox nor Simulink, is €500 for academic institutions; 13 | around €60 for students; \emph{thousands} of Euros for commercial 14 | operations. 
Considering this, there is a not too small chance that you will 15 | not be able to use \matlab{} after you quit from university, and that would 16 | render all of your own code virtually useless to you. 17 | 18 | Because of that, free and open source \matlab{} alternatives have emerged, 19 | three of which are shortly introduced here. Octave and Scilab try to stick to 20 | \matlab{} syntax as closely as possible, resulting in all of the code in this 21 | document being legal for the two packages as well. When it comes to the 22 | specialized toolboxes, however, neither of the alternatives may be able to 23 | provide the same capabilities that \matlab{} offers. However, these are mostly 24 | functions related to Simulink and the like which are hardly used by beginners 25 | anyway. Also note none of the alternatives ships with its own text editor (as 26 | \matlab{} does), so you are free yo use the editor of your choice (see, for 27 | example, \href{http://www.vim.org/}{vim}, 28 | \href{http://www.gnu.org/software/emacs/}{emacs}, Kate, 29 | \href{http://projects.gnome.org/gedit/}{gedit} for Linux; 30 | \href{http://notepad-plus.sourceforge.net/uk/site.htm}{Notepad++}, 31 | \href{http://www.crimsoneditor.com/}{Crimson Editor} for Windows). 32 | 33 | \subsection{Python} 34 | 35 | \begin{floatingfigure}[r]{6cm} 36 | \centering 37 | \includegraphics[width=6cm]{figures/python-logo-generic} 38 | \end{floatingfigure} 39 | 40 | Python is the most modern programming language as of 2013: Amongst the many 41 | award the language as received stands the TIOBE Programming Language Award of 42 | 2010. It is yearly given to the programming language that has gained the 43 | largest market market share during that year. 44 | 45 | Python is used in all kinds of different contexts, and its versatility and 46 | ease of use has made it attractive to many. There are tons packages for all 47 | sorts of tasks, and the huge community and its open development help the 48 | enormous success of Python. 49 | 50 | In the world of scientific computing, too, Python has already risen to be a 51 | major player. This is mostly due to the packages SciPy and Numpy which provide 52 | all data structures and algorithms that are used in numerical code. Plotting 53 | is most easily handled by matplotlib, a huge library which in many ways excels 54 | \matlab{}'s graphical engine. 55 | 56 | Being a language rather than an application, Python is supported in virtually 57 | every operating system. 58 | 59 | The author of this document highly recommends to take a look at Python for 60 | your own (scientific) programming projects. 61 | 62 | 63 | \subsection{Julia} 64 | 65 | Quoting from \url{julialang.org}:\\ 66 | \begin{wrapfigure}{r}{4cm} 67 | \centering 68 | \includegraphics[width=4cm]{figures/julia-logo-color.pdf} 69 | \end{wrapfigure} 70 | %\begin{floatingfigure}[r]{4cm} 71 | % \centering 72 | % \includegraphics[width=4cm]{figures/julia.png} 73 | %\end{floatingfigure} 74 | \begin{quote} 75 | Julia is a high-level, high-performance dynamic programming language for 76 | technical computing, with syntax that is familiar to users of other technical 77 | computing environments. It provides a sophisticated compiler, distributed 78 | parallel execution, numerical accuracy, and an extensive mathematical function 79 | library. The library, largely written in Julia itself, also integrates mature, 80 | best-of-breed C and Fortran libraries for linear algebra, random number 81 | generation, signal processing, and string processing. 
In addition, the Julia 82 | developer community is contributing a number of external packages through 83 | Julia’s built-in package manager at a rapid pace. IJulia, a collaboration 84 | between the IPython and Julia communities, provides a powerful browser-based 85 | graphical notebook interface to Julia. 86 | \end{quote} 87 | 88 | 89 | \subsection{GNU Octave} 90 | 91 | \begin{floatingfigure}[r]{5cm} 92 | \centering 93 | \includegraphics[width=5cm]{figures/Octave_Sombrero} 94 | \end{floatingfigure} 95 | 96 | GNU Octave is a high-level language, primarily intended for numerical 97 | computations. It provides a convenient command line interface for solving 98 | linear and nonlinear problems numerically, and for performing other numerical 99 | experiments using a language that is mostly compatible with \matlab{}. It may 100 | also be used as a batch-oriented language. 101 | 102 | Internally, Octave relies on other independent and well-recognized packages 103 | such as gnuplot (for plotting) or UMFPACK (for calculating with sparse 104 | matrices). In that sense, Octave is extremely well integrated into the free 105 | and open source software (FOSS) landscape. 106 | 107 | Octave has extensive tools for solving common numerical linear algebra 108 | problems, finding the roots of nonlinear equations, integrating ordinary 109 | functions, manipulating polynomials, and integrating ordinary differential and 110 | differential-algebraic equations. It is easily extensible and customizable via 111 | user-defined functions written in Octave's own language, or using dynamically 112 | loaded modules written in C++, C, Fortran, or other languages. 113 | 114 | GNU Octave is also freely redistributable software. You may redistribute it 115 | and/or modify it under the terms of the GNU General Public License (GPL) as 116 | published by the Free Software Foundation. 117 | 118 | The project if originally GNU/Linux, but versions for MacOS, Windows, Sun 119 | Solaris, and OS/2 exist. 120 | -------------------------------------------------------------------------------- /latex/tipsntricks.tex: -------------------------------------------------------------------------------- 1 | \newpage 2 | \section{Other tips \& tricks} 3 | 4 | % \begin{listing} 5 | % \begin{minted}[frame=single,framerule=2pt,color=\color{badred}]{matlab} 6 | % % =================================================== 7 | % % *** FUNCTION simpson 8 | % % *** 9 | % % *** Implements Simpson's rule for integrating 10 | % % *** the sine function over [a,b] with granularity 11 | % % *** h. 12 | % % *** 13 | % % =================================================== 14 | % function int = simpson( a, b, h ) 15 | % 16 | % x = a:h:b; 17 | % 18 | % int = 0; 19 | % n = length(x); 20 | % mid = (x(1:n-1) + x(2:n)) / 2; 21 | % int = sum( h/6 * ( sin(x(1:n-1)) ... 22 | % + 4*sin(mid ) ... 23 | % + sin(x(2:n )) ) ); 24 | % 25 | % end 26 | % % =================================================== 27 | % % *** END FUNCTION simpson 28 | % % =================================================== 29 | % \end{minted} 30 | % \caption{Implementation of Simpson's rule for numerically integrating a function (here: \lstinline!sin!) between \lstinline!a! and \lstinline!b!. Note the usage of the vector notation to speed up the function. Also note that \lstinline!sin! is hardcoded into the routine, and needs to be changed each time we want to change the function. 
In case one is interested in calculating the integral of $f(x) = \exp(\sin(\frac{1}{x})) / \tan(\sqrt{1-x^4})$, this could get quite messy.} 31 | % \label{listing:simpson1} 32 | % \end{listing} 33 | 34 | 35 | \begin{lstlisting}[ 36 | framerule=2pt, 37 | float, 38 | label={listing:simpson1}, 39 | caption={Implementation of Simpson's rule for numerically integrating a function (here: \lstinline!sin!) between \lstinline!a! and \lstinline!b!. Note the usage of the vector notation to speed up the function. Also note that \lstinline!sin! is hardcoded into the routine, and needs to be changed each time we want to change the function. In case one is interested in calculating the integral of $f(x) = \exp(\sin(\frac{1}{x})) / \tan(\sqrt{1-x^4})$, this could get quite messy.}, 40 | rulecolor=\color{badred}] 41 | function int = simpson( a, b, h ) 42 | % Implements Simpson's rule for integrating the 43 | % sine function over [a,b] with granularity h. 44 | 45 | x = a:h:b; 46 | 47 | int = 0; 48 | n = length(x); 49 | mid = (x(1:n-1) + x(2:n)) / 2; 50 | int = sum( h/6 * ( sin(x(1:n-1)) ... 51 | + 4*sin(mid ) ... 52 | + sin(x(2:n )) ) ); 53 | 54 | end 55 | \end{lstlisting} 56 | 57 | \subsection{Functions as arguments -- \cleansymbol\cleansymbol\cleansymbol} 58 | 59 | In numerical computation, there are set-ups which natively treat functions as 60 | the objects of interest, for example when numerically integrating them over a 61 | particular domain. For this example, imagine that you wrote a function that 62 | implements Simpson's integration rule (see listing~\ref{listing:simpson1}), 63 | and you would like to apply it to a number of functions without having to 64 | alter your source code (for example, replacing \lstinline!sin()! by 65 | \lstinline!cos()!, \lstinline!exp()! or something else). 66 | 67 | A clean way to deal with this in \matlab{} is using \emph{function handles}. This may sound fancy, and describes nothing else then the capability of treating functions (such as \lstinline!sin()!) as arguments to other functions (such as \lstinline!simpson()!). The function call itself is written as easy as 68 | 69 | \begin{lstlisting}[ 70 | float, 71 | caption={Simpson's rule with function handles. Note that the syntax for function arguments is no different from that of ordinary ones.}, 72 | label={listing:simpson2}, 73 | framerule=2pt, 74 | rulecolor=\color{goodgreen}] 75 | function int = simpson( f, a, b, h ) 76 | % Implements Simpson's rule for integrating a 77 | % function f over [a,b] with granularity h. 78 | 79 | x = a:h:b; 80 | mid = (x(1:n-1) + x(2:n)) / 2; 81 | 82 | n = length(x); 83 | 84 | int = sum( h/6 * ( f(x(1:n-1)) ... 85 | + 4*f(mid ) ... 86 | + f(x(2:n )) ) ); 87 | 88 | end 89 | \end{lstlisting} 90 | 91 | \hfill 92 | \begin{minipage}[t]{.90\textwidth} 93 | \begin{lstlisting}[framerule=1pt,rulecolor=\color{goodgreen}] 94 | a = 0; 95 | b = pi/2; 96 | h = 1e-2; 97 | int_sin = simpson( @sin, a, b, h ); 98 | int_cos = simpson( @cos, a, b, h ); 99 | int_f = simpson( @f , a, b, h ); 100 | \end{lstlisting} 101 | \end{minipage} 102 | \hfill 103 | 104 | where the function name need to be prepended by the `\lstinline!@!'-character. 105 | 106 | The function \lstinline!f()! can be any function that you defined yourself and 107 | which is callable as \lstinline!f(x)! with \lstinline!x! being a vector of $x$ 108 | values (like it is used in \lstinline!simpson()!, listing~\ref{listing:simpson2}). 
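
A handle does not need to point to a function file; an \emph{anonymous function} works just as well. As a minimal sketch, the messy integrand $f(x) = \exp(\sin(\frac{1}{x})) / \tan(\sqrt{1-x^4})$ from listing~\ref{listing:simpson1} could be passed like this (note the element-wise operators \lstinline!./! and \lstinline!.^!, which keep \lstinline!f! callable with a vector \lstinline!x!):

\hfill
\begin{minipage}[t]{.90\textwidth}
\begin{lstlisting}[framerule=1pt,rulecolor=\color{goodgreen}]
f     = @(x) exp(sin(1./x)) ./ tan(sqrt(1-x.^4));
int_f = simpson( f, a, b, h );
\end{lstlisting}
\end{minipage}
\hfill
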
109 | 110 | 111 | 112 | \subsection{Implicit matrix--vector products -- \cleansymbol} 113 | 114 | In numerical analysis, almost all methods for solving linear equation systems 115 | \emph{quickly} are iterative methods, that is, methods which define how to 116 | iteratively approach a solution in small steps (starting with some initial 117 | guess) rather then directly solving them in one big step (such as Gau{\ss}ian 118 | elimination). Two of the most prominent iterative methods are CG and GMRES. 119 | 120 | In particular, those methods \emph{do not require the explicit availability of 121 | the matrix} as in each step of the iteration they merely form a matrix-vector 122 | product with $A$ (or variations of it). Hence, they technically only need a 123 | function to tell them how to carry out a matrix-vector multiplication. In some 124 | cases, providing such a function may be easier than explicitly constructing 125 | the matrix itself, as the latter usually requires one to pay close attention 126 | to indices (which can get extremely messy). 127 | 128 | Beyond that, there may also a mild advantage in memory consumption as the 129 | indices of the matrix do no longer need to sit in memory, but can be hard coded 130 | into the matrix-vector-multiplication function itself. Considering the fact 131 | that we are mostly working with sparse matrices however, this might not be 132 | quite important. 133 | 134 | The example below illustrates the typical benefits and drawbacks of the 135 | approach. 136 | 137 | \begin{lstlisting}[ 138 | float, 139 | framerule=1pt, 140 | caption={Function that implements matrix--vector multiplication with $1/h^2 \times \diag(-1,2,-1)$. Note that the function consumes (almost) no more memory then \lstinline!u! already required.}, 141 | label={listing:Amultiply} 142 | ] 143 | function out = A_multiply( u ) 144 | % Implements matrix--vector multiplication with 145 | % diag[-1,2,-1]/h^2 . 146 | 147 | n = length( u ); 148 | u = [0; u; 0]; 149 | 150 | out = -u(1:n) + 2*u(2:n+1) - u(3:n+2); 151 | out = out * (n+1)^2; 152 | 153 | end 154 | \end{lstlisting} 155 | 156 | 157 | \hfill 158 | \begin{minipage}[t]{.45\textwidth} 159 | \begin{lstlisting}[framerule=1pt] 160 | n = 1e3; 161 | k = 500; 162 | 163 | 164 | 165 | 166 | 167 | 168 | 169 | u = ones(n,1); 170 | for i=1:k 171 | u = A_multiply( u ); 172 | end 173 | \end{lstlisting} 174 | Computing $u = A^ku_0$ with the function \lstinline!A_multiply! 175 | (listing~\ref{listing:Amultiply}). The memory consumption of this routine is 176 | (almost) no greater than storing $n$ real numbers. Execution time: 177 | \extime{21}. 178 | \end{minipage} 179 | \hfill 180 | \begin{minipage}[t]{.45\textwidth} 181 | \begin{lstlisting}[framerule=1pt] 182 | n = 1e3; 183 | k = 500; 184 | 185 | e = ones(n,1); 186 | A = spdiags([-e,2*e,-e],... 187 | [-1, 0,-1],... 188 | n, n ); 189 | A = A * (n+1)^2; 190 | 191 | u = ones(n,1); 192 | for i=1:k 193 | u = A*u; 194 | end 195 | \end{lstlisting} 196 | Computing $u = A^ku_0$ with a regular sparse format matrix \lstinline!A!, with the need to store it in memory. Execution time: \extime{7}. 197 | \end{minipage} 198 | \hfill 199 | 200 | All in all, these considerations shall not lead you to rewrite all you 201 | matrix-vector multiplications as function calls. 
Mind, however, that there are 202 | situations where one would \emph{never} use matrices in their explicit form, 203 | although mathematically written down like that: 204 | 205 | \begin{example}[Multigrid] 206 | In geometric multigrid methods, a domain is discretized with a certain 207 | parameter $h$ (``grid width'') and the operator $A_h$ written down for that 208 | discretization (see the examples above, where $A_h=h^{-2}\diag(-1,2,1)$ is 209 | really the discretization of the $\Delta$-operator in one dimension). In a 210 | second step, another, somewhat coarser grid is considered with $H=2h$, for 211 | example. The operator $A_H$ on the coarser grid is written down as 212 | \[ 213 | A_H = I_h^H A_h I_H^h, 214 | \] 215 | where the $I_*^*$ operators define the transition from the coarse to the fine 216 | grid, or the other way around. When applying it to a vector on the coarse grid 217 | $u_H$ ($A_Hu_H =I_h^H A_h I_H^h u_H$), the above definition reads: 218 | \begin{enumerate} 219 | \item $I_H^h u_H$: Map $u_H$ to the fine grid. 220 | \item $A_h\cdot$: Apply the fine grid operator to the transformation. 221 | \item $I_h^H\cdot$: Transform the result back to the coarse grid. 222 | \end{enumerate} 223 | How the transformations are executed needs to be defined. One could, for 224 | example, demand that $I_H^h$ maps all points that are part of the fine grid 225 | \emph{and} the coarse grid to itself; all points on the fine grid, that lie 226 | right in between two coarse variables get half of the value of each of the two 227 | (see figure~\ref{subfig:coarse-fine}). 228 | 229 | \begin{figure} 230 | \centering 231 | \begin{subfigure}{0.45\textwidth} 232 | \input{figures/coarse-fine.tex} 233 | \caption{Possible transformation rule when translating values from the 234 | coarse to the fine grid. See listing~\ref{listing:IHh}.} 235 | \label{subfig:coarse-fine} 236 | \end{subfigure} 237 | \hfill 238 | \begin{subfigure}{0.45\textwidth} 239 | \input{figures/fine-coarse.tex} 240 | \caption{Mapping back from fine to coarse.} 241 | \label{subfig:fine-coarse} 242 | \end{subfigure} 243 | \caption{} 244 | \end{figure} 245 | 246 | 247 | \begin{lstlisting}[ 248 | float, 249 | framerule=1pt, 250 | caption={Function that implements the operator $I_H^h$ from the example (see figure~\ref{subfig:coarse-fine}). Writing down the structure of the corresponding matrix would be somewhat complicated, and even more so when moving to two- or three-dimensional grids. Note also how matrix notation has been exploited.}, 251 | label={listing:IHh} 252 | ] 253 | function uFine = coarse2fine( uCoarse ) 254 | % Transforms values from a coarse grid to a fine grid. 255 | 256 | N = length(uCoarse); 257 | n = 2*N - 1; 258 | 259 | uFine(1:2:n) = uCoarse; 260 | 261 | midValues = 0.5 * ( uCoarse(1:N-1) + uCoarse(2:N) ); 262 | uFine(2:2:n) = midValues; 263 | 264 | end 265 | \end{lstlisting} 266 | 267 | 268 | In the analysis of the method, $I_H^h$ and $I_h^H$ will always be treated as 269 | matrices, but when implementing, one would \emph{certainly not} try to figure 270 | out the structure of the matrix. It is a lot simpler to implement a function 271 | that executes the rule suggested above, for example. 272 | \end{example} 273 | --------------------------------------------------------------------------------