├── pics ├── funw.png ├── CLR_HOR.jpg ├── camera.jpeg ├── clone.png ├── factory.png ├── launch.png ├── noteb.png ├── opentut.png ├── signin.png ├── tianhe1.jpg ├── A-trophy.png ├── hdf_logo.jpg ├── lesson01.png ├── numerical.png ├── saga_logo.png ├── terminal.png ├── ET_graph_old.png ├── Sierpinski.png ├── cactuslogo.png ├── flickr_logo.png ├── rit_cluster.jpg ├── twitter_logo.jpg ├── visit_logo.png ├── cactuslogo_structure.png ├── pattern-puzzle-jigsaw-1.png ├── cactuslogo_structure_flesh.png └── cactuslogo_structure_thorns.png ├── README.md ├── slides ├── working.tex └── slides.tex ├── Numerical-Methods ├── 01-finite-differencing.ipynb ├── 02-hyperbolic-pdes.ipynb └── 01-finite-differencing-sol.ipynb └── Using-Cactus ├── Checkpoint.ipynb ├── CreatingANewThorn.ipynb ├── CreatingANewThorn-Part2.ipynb └── Cactus-Funwave.ipynb /pics/funw.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/funw.png -------------------------------------------------------------------------------- /pics/CLR_HOR.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/CLR_HOR.jpg -------------------------------------------------------------------------------- /pics/camera.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/camera.jpeg -------------------------------------------------------------------------------- /pics/clone.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/clone.png -------------------------------------------------------------------------------- /pics/factory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/factory.png -------------------------------------------------------------------------------- /pics/launch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/launch.png -------------------------------------------------------------------------------- /pics/noteb.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/noteb.png -------------------------------------------------------------------------------- /pics/opentut.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/opentut.png -------------------------------------------------------------------------------- /pics/signin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/signin.png -------------------------------------------------------------------------------- /pics/tianhe1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/tianhe1.jpg -------------------------------------------------------------------------------- /pics/A-trophy.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/A-trophy.png -------------------------------------------------------------------------------- /pics/hdf_logo.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/hdf_logo.jpg -------------------------------------------------------------------------------- /pics/lesson01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/lesson01.png -------------------------------------------------------------------------------- /pics/numerical.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/numerical.png -------------------------------------------------------------------------------- /pics/saga_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/saga_logo.png -------------------------------------------------------------------------------- /pics/terminal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/terminal.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # CactusTutorial 2 | A set of Jupyter notebooks designed to teach people how to use Cactus.
3 | -------------------------------------------------------------------------------- /pics/ET_graph_old.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/ET_graph_old.png -------------------------------------------------------------------------------- /pics/Sierpinski.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/Sierpinski.png -------------------------------------------------------------------------------- /pics/cactuslogo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/cactuslogo.png -------------------------------------------------------------------------------- /pics/flickr_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/flickr_logo.png -------------------------------------------------------------------------------- /pics/rit_cluster.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/rit_cluster.jpg -------------------------------------------------------------------------------- /pics/twitter_logo.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/twitter_logo.jpg -------------------------------------------------------------------------------- /pics/visit_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/visit_logo.png -------------------------------------------------------------------------------- /pics/cactuslogo_structure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/cactuslogo_structure.png -------------------------------------------------------------------------------- /pics/pattern-puzzle-jigsaw-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/pattern-puzzle-jigsaw-1.png -------------------------------------------------------------------------------- /pics/cactuslogo_structure_flesh.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/cactuslogo_structure_flesh.png -------------------------------------------------------------------------------- /pics/cactuslogo_structure_thorns.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenrbrandt/CactusTutorial/master/pics/cactuslogo_structure_thorns.png -------------------------------------------------------------------------------- /slides/working.tex: -------------------------------------------------------------------------------- 1 | \documentclass{beamer} 2 | 3 | \usepackage[utf8]{inputenc} 4 | \usepackage{beamerthemesplit} 5 | \usepackage{url} 6 | \usepackage{hyperref} 7 | \usepackage{tikz} 8 | \usepackage{alltt} 9 | 10 | \usepackage{listings} 11 | 
\usepackage{marvosym} 12 | \usepackage{color} 13 | \usepackage[multidot]{grffile} 14 | \usepackage{multirow} 15 | \usepackage{array} 16 | \usepackage{setspace} 17 | 18 | \usepackage{CJKutf8} 19 | \newcommand{\zhs}[1]{\begin{CJK}{UTF8}{gbsn}#1\end{CJK}} 20 | 21 | \usetheme{Madrid} 22 | 23 | \usecolortheme[RGB={132,186,75}]{structure} 24 | \definecolor{cactusgreen}{RGB}{132,186,75} 25 | \newcommand{\red}[1]{\textcolor{cactusgreen}{#1}} 26 | \newcommand{\black}[1]{\textcolor{black}{#1}} 27 | 28 | \graphicspath{{../pics/}} 29 | 30 | \logo{\includegraphics[height=0.7cm]{CLR_HOR}} 31 | 32 | \newcommand{\head}[2] 33 | {\frame{\frametitle{}\begin{centering}\LARGE#1\\#2\end{centering}}} 34 | 35 | \newcommand{\abspic}[4] 36 | {\vspace{ #2\paperheight}\hspace{ #3\paperwidth}\includegraphics[height=#4\paperheight]{#1}\\ 37 | \vspace{-#2\paperheight}\vspace{-#4\paperheight}\vspace{-0.0038\paperheight}} 38 | 39 | \newcommand{\picw}[4]{{ 40 | \usebackgroundtemplate{ 41 | \color{black}\vrule width\paperwidth height\paperheight\hspace{-\paperwidth}\hspace{-0.01\paperwidth} 42 | \hspace{#4\paperwidth}\includegraphics[width=#3\paperwidth, height=\paperheight]{#1}}\logo{} 43 | \frame[plain]{\frametitle{#2}} 44 | }} 45 | \newcommand{\pic}[2]{\picw{#1}{#2}{}{0}} 46 | 47 | \newcommand{\question}[1]{\frame{\begin{centering}\Huge #1\\\end{centering}}} 48 | \newcommand{\redidot}{\makebox[0mm]{\hphantom{i}\red{i}}{\i}} 49 | \newcommand{\blackidot}{\makebox[0mm]{\hphantom{i}\black{i}}{\i}} 50 | 51 | % We want to use the infolines outer theme because it uses so little space, but 52 | % it also tries to print an institution and the slide numbers 53 | % Therefore, we here redefine the footline ourselves - mostly a copy & paste from 54 | % /usr/share/texmf/tex/latex/beamer/themes/outer/beamerouterthemeinfolines.sty 55 | \defbeamertemplate*{footline}{infolines theme without institution and slide numbers} 56 | { 57 | \leavevmode% 58 | \hbox{% 59 | \begin{beamercolorbox}[wd=.25\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}% 60 | \usebeamerfont{author in head/foot}\insertshortauthor 61 | \end{beamercolorbox}% 62 | \begin{beamercolorbox}[wd=.5\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}% 63 | \usebeamerfont{title in head/foot}\insertshorttitle 64 | \end{beamercolorbox}% 65 | \begin{beamercolorbox}[wd=.25\paperwidth,ht=2.25ex,dp=1ex,center]{date in head/foot}% 66 | \usebeamerfont{date in head/foot}\insertshortdate{} 67 | \end{beamercolorbox}}% 68 | \vskip0pt% 69 | } 70 | % No navigation symbols 71 | \setbeamertemplate{navigation symbols}{} 72 | 73 | \title[Working with Jupyter]{Working with Jupyter} 74 | \author[Steven Brandt]{Steven R. Brandt} 75 | \institute{Center for Computation and Technology\\ 76 | Louisiana State University\\ 77 | Funded by: NSF OAC 1550551} 78 | \titlegraphic{ 79 | \includegraphics[height=2cm]{cactuslogo}\\ 80 | } 81 | \date[2017-07-31]{2017-07-31} 82 | 83 | \begin{document} 84 | 85 | \frame{\titlepage} 86 | 87 | \frame{\frametitle{What is Jupyter?} 88 | {\huge What is Jupyter?} 89 | \begin{itemize} 90 | \item \textbf{A Notebook} Jupyter provides notebook-style access to running code in any language (its name is a mashup of Julia, Python, and R). 91 | \item \textbf{An Executable Paper} Jupyter allows its creator to combine code, the results of execution, 92 | images, and arbitrary markup in a document that is both a paper and a program. 93 | \item \textbf{A New Way to Interact} Jupyter is a new way to interact with a set of calculations.
94 | \begin{itemize} 95 | \item It is neither a GUI nor a command-line interface, but a combination of both. 96 | \item It is intended to be interactive and dynamic, but it is readable and useful in static form. 97 | \end{itemize} 98 | \end{itemize} 99 | } 100 | 101 | \frame{\frametitle{Your Jupyter Notebook} 102 | \begin{itemize} 103 | \item For this workshop, we are going to be using 104 | an NCSA machine called Nebula (http://www.etk2017.ndslabs.org/). 105 | \item The above is a virtual machine based on Docker. If you want 106 | to run the Docker image directly on your laptop instead of the class 107 | machine, you can obtain it like this: ``docker pull craigwillis/jupyter-et'' 108 | and you can run it like this ``docker run -it -habc-jupyteret-def -p 8000:8000 --user 1000 craigwillis/jupyter-et jupyter notebook --port 8000 --ip 0.0.0.0 --no-browser''. That will start Jupyter on port 8000. 109 | You can pick a different port if you like. 110 | \item If anything should go wrong with this machine, we have a backup 111 | located at LSU (http://hpx-jupyter.cct.lsu.edu) 112 | \item Navigate to the first link now. 113 | \end{itemize} 114 | } 115 | 116 | \frame{\frametitle{Your Jupyter Notebook} 117 | Click ``Sign in here'' then launch your ``Jupyter/Einstein Toolkit'' machine instance. 118 | 119 | \begin{centering} 120 | \includegraphics[height=0.45\textwidth]{signin}\\ 121 | \end{centering} 122 | } 123 | 124 | \frame{\frametitle{Your Jupyter Notebook} 125 | Next, launch your machine instance. 126 | 127 | \begin{centering} 128 | \includegraphics[height=0.45\textwidth]{launch}\\ 129 | \end{centering} 130 | } 131 | 132 | \frame{\frametitle{Your Jupyter Notebook} 133 | In addition to the notebook, the Jupyter interface provides you with a fully functioning terminal. 134 | Please open one now. 135 | 136 | \begin{centering} 137 | \includegraphics[height=0.45\textwidth]{terminal}\\ 138 | \end{centering} 139 | 140 | Some things are still easier outside the notebook. 141 | } 142 | 143 | \frame{\frametitle{Your Jupyter Notebook} 144 | Now, log into Nebula, open a terminal, and type the 145 | command\\ ``git clone https://github.com/stevenrbrandt/CactusTutorial.git'' 146 | 147 | \begin{centering} 148 | \includegraphics[height=0.45\textwidth]{clone}\\ 149 | \end{centering} 150 | } 151 | 152 | \frame{\frametitle{Your Jupyter Notebook} 153 | Open the tutorial from within Jupyter. 154 | 155 | \begin{centering} 156 | \includegraphics[height=0.45\textwidth]{opentut}\\ 157 | \end{centering} 158 | } 159 | 160 | \frame{\frametitle{Your Jupyter Notebook} 161 | Open the numerical methods folder from within Jupyter. 162 | 163 | \begin{centering} 164 | \includegraphics[height=0.45\textwidth]{numerical}\\ 165 | \end{centering} 166 | } 167 | 168 | \frame{\frametitle{Your Jupyter Notebook} 169 | Now you should see the tutorial materials. Please 170 | click on 01-finite-differencing.ipynb 171 | 172 | \begin{centering} 173 | \includegraphics[height=0.45\textwidth]{lesson01}\\ 174 | \end{centering} 175 | } 176 | 177 | %\frame{\frametitle{Your Jupyter Notebook} 178 | %Click inside the first cell as shown. Hit shift-enter 179 | %to evaluate the cell. Evaluate the next one as well. 180 | %The text ``python 2.7.13'' should print below it.
181 | 182 | %\begin{centering} 183 | % \includegraphics[height=0.45\textwidth]{noteb}\\ 184 | %\end{centering} 185 | %} 186 | 187 | \frame{\frametitle{Your Jupyter Notebook} 188 | {\huge Your Jupyter Notebook:\\ Basic Commands} 189 | \begin{itemize} 190 | \item \textbf{Python} Python commands can be typed interactively. 191 | \item \textbf{Bash} Lines beginning with a ! are shell commands. 192 | \item \textbf{Magics} Certain commands that start with a \% do special things. 193 | A handful of these will be used in the presentation. 194 | \begin{itemize} 195 | \item \textbf{\%load filename} This command will load the contents of filename 196 | into the current cell. Once it is loaded, a comment symbol is prefixed in 197 | front of \%load. 198 | \item \textbf{\%\%writefile filename} This command will save the contents of 199 | the current cell (not including the first line) into the named file. By combining 200 | this and the \%load command, you can use your Jupyter notebook as a text editor. 201 | \end{itemize} 202 | \end{itemize} 203 | } 204 | 205 | \frame{\frametitle{Your Jupyter Notebook} 206 | {\huge Your Jupyter Notebook:\\ Basic Commands} 207 | \begin{itemize} 208 | \item To create a new cell, click on ``Insert $>$ Insert Cell Above'' or 209 | ``Insert $>$ Insert Cell Below.'' 210 | \item To change a cell type from Code to Markdown, click on the selector 211 | box at the top of the page. The markdown is quite sophisticated. Any 212 | text between \$ symbols will be interpreted as LaTeX math code. 213 | \item Be aware that the notebook autosaves. Don't assume changes are 214 | held back until you explicitly select ``File $>$ Save and Checkpoint.'' 215 | If you want to keep things the way they are, use ``File $>$ Make a Copy.'' 216 | \item Don't count on autosaves either. Periodically save your work 217 | with ``File $>$ Save and Checkpoint.'' 218 | \end{itemize} 219 | } 220 | 221 | \end{document} 222 | 223 | -------------------------------------------------------------------------------- /Numerical-Methods/01-finite-differencing.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Numerical Methods" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "For Numerical Relativity, we need to\n", 15 | "\n", 16 | "* evolve the spacetime (hyperbolic PDEs with \"smooth\" fields);\n", 17 | "* evolve the matter (hyperbolic PDEs with discontinuous fields);\n", 18 | "* solve initial data (elliptic PDEs);\n", 19 | "* extract gravitational waves (interpolation and integration);\n", 20 | "* find and analyse horizons (interpolation, BVPs).\n", 21 | "\n", 22 | "These can be built on some simple foundations. \n", 23 | "\n", 24 | "The general concepts that underpin most numerical methods are\n", 25 | "\n", 26 | "1. the solution of linear systems $A {\\bf x} = {\\bf b}$;\n", 27 | "2. the solution of nonlinear root-finding problems ${\\bf f} ( {\\bf x} ) = {\\bf 0}$;\n", 28 | "3. the representation of a function or field $f(x)$ by discrete data $f_i$ at points $x_i$, by interpolation or other means;\n", 29 | "4. the (discrete) Fast Fourier Transform;\n", 30 | "5. stochastic concepts and methods.\n", 31 | "\n", 32 | "For Numerical Relativity, there has been little need (yet!) for stochastic methods, and the use of FFTs is mostly restricted to analysis.
All of these points can be found in standard numerical packages and libraries: the question, however, is\n", 33 | "\n", 34 | "1. what do we need to understand about these methods before implementing or using them?\n", 35 | "2. when is it faster or better to implement our own version rather than using a library?" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "# Finite differencing" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "As a first step we'll quickly cover *finite differencing*: the approximation of derivatives of a function $f$ when the only information about $f$ is its value at a set of points, or nodes, $\\{x_i\\}$, denoted $\\{f_i\\}$.\n", 50 | "\n", 51 | "Here we have the \"representation of a function\" problem. We represent the function $f$ using a *piecewise polynomial* function $g$. This polynomial must interpolate $f$: that is, $g(x_i) \\equiv f(x_i) = f_i$. We then approximate derivatives of $f$ by derivatives of $g$.\n", 52 | "\n", 53 | "As simple examples, let's assume we know three points, $\\{f_{i-1}, f_i, f_{i+1}\\}$. Then we have the linear polynomial approximations\n", 54 | "\n", 55 | "$$\n", 56 | " g_{FD} = \\frac{x - x_{i+1}}{x_i - x_{i+1}} f_i + \\frac{x - x_{i}}{x_{i+1} - x_{i}} f_{i+1}\n", 57 | "$$\n", 58 | "\n", 59 | "and\n", 60 | "\n", 61 | "$$\n", 62 | " g_{BD} = \\frac{x - x_{i}}{x_{i-1} - x_{i}} f_{i-1} + \\frac{x - x_{i-1}}{x_i - x_{i-1}} f_i\n", 63 | "$$\n", 64 | "\n", 65 | "or the quadratic polynomial approximation\n", 66 | "\n", 67 | "$$\n", 68 | " g_{CD} = \\frac{(x - x_{i})(x - x_{i+1})}{(x_{i-1} - x_{i})(x_{i-1} - x_{i+1})} f_{i-1} + \\frac{(x - x_{i-1})(x - x_{i+1})}{(x_{i} - x_{i-1})(x_{i} - x_{i+1})} f_{i} + \\frac{(x - x_{i-1})(x - x_{i})}{(x_{i+1} - x_{i-1})(x_{i+1} - x_{i})} f_{i+1}.\n", 69 | "$$\n", 70 | "\n", 71 | "Note how this Lagrange form is built out of *indicator polynomials* that take the value $1$ at one node and vanish at all others." 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "By differentiating these polynomial interpolating functions we get approximations to the derivatives of $f$. Each approximation is different, with different errors. \n", 79 | "\n", 80 | "We'll assume that the nodes are equally spaced, with grid spacing $\\Delta x = x_{i+1} - x_i$. The approximations above give the standard *forward difference*\n", 81 | "\n", 82 | "$$\n", 83 | " \\left. \\frac{\\partial g_{FD}}{\\partial x} \\right|_{x = x_i} \\to \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} = \\frac{1}{\\Delta x} \\left( f_{i+1} - f_i \\right) + {\\cal O} \\left( \\Delta x \\right),\n", 84 | "$$\n", 85 | "\n", 86 | "the standard *backward difference*\n", 87 | "\n", 88 | "$$\n", 89 | " \\left. \\frac{\\partial g_{BD}}{\\partial x} \\right|_{x = x_i} \\to \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} = \\frac{1}{\\Delta x} \\left( f_{i} - f_{i-1} \\right) + {\\cal O} \\left( \\Delta x \\right),\n", 90 | "$$\n", 91 | "\n", 92 | "and the standard *central difference* approximations\n", 93 | "\n", 94 | "\\begin{align}\n", 95 | " \\left. \\frac{\\partial g_{CD}}{\\partial x} \\right|_{x = x_i} & \\to \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} \\\\ & = \\frac{1}{2 \\, \\Delta x} \\left( f_{i+1} - f_{i-1} \\right) + {\\cal O} \\left( \\Delta x^2 \\right), \\\\\n", 96 | " \\left. \\frac{\\partial^2 g_{CD}}{\\partial x^2} \\right|_{x = x_i} & \\to \\left. 
\\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i} \\\\ & = \\frac{1}{\\left( \\Delta x \\right)^2} \\left( f_{i-1} - 2 f_i + f_{i+1} \\right) + {\\cal O} \\left( \\Delta x^2 \\right).\n", 97 | "\\end{align}\n", 98 | "\n", 99 | "The error is most conveniently derived by expressing $f_{i-1}$ and $f_{i+1}$ using the Taylor expansion of $f(x)$ around $x_i$, i.e.\n", 100 | "\n", 101 | "$$\n", 102 | "f_{i-1} = f_i - \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i}\\!\\! \\Delta x + \\frac{1}{2} \\left . \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i}\\!\\! \\Delta x^2 - \\frac{1}{6} \\left . \\frac{\\partial^3 f}{\\partial x^3} \\right|_{x = x_i}\\!\\! \\Delta x^3 + {\\cal O} \\left ( \\Delta x^4 \\right)\n", 103 | "$$\n", 104 | "and\n", 105 | "$$\n", 106 | "f_{i+1} = f_i + \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i}\\!\\! \\Delta x + \\frac{1}{2} \\left . \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i}\\!\\! \\Delta x^2 + \\frac{1}{6} \\left . \\frac{\\partial^3 f}{\\partial x^3} \\right|_{x = x_i}\\!\\! \\Delta x^3 + {\\cal O} \\left ( \\Delta x^4 \\right)\n", 107 | "$$" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "metadata": {}, 113 | "source": [ 114 | "## Testing this in code\n", 115 | "\n", 116 | "We'll use finite differencing repeatedly. To test our code we'll be testing the differencing. Let's check the above approximations applied to a simple function,\n", 117 | "\n", 118 | "$$\n", 119 | " f(x) = \\exp \\left[ x \\right].\n", 120 | "$$\n", 121 | "\n", 122 | "All derivatives are the same as the original function, which evaluated at $x=0$ gives $1$.\n", 123 | "\n", 124 | "First we write the functions, then we test them." 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": { 131 | "collapsed": true 132 | }, 133 | "outputs": [], 134 | "source": [ 135 | "def backward_differencing(f, x_i, dx):\n", 136 | " \"\"\"\n", 137 | " Backward differencing of f at x_i with grid spacing dx.\n", 138 | " \"\"\"\n", 139 | " \n", 140 | " #" 141 | ] 142 | }, 143 | { 144 | "cell_type": "code", 145 | "execution_count": null, 146 | "metadata": { 147 | "collapsed": true 148 | }, 149 | "outputs": [], 150 | "source": [ 151 | "def forward_differencing(f, x_i, dx):\n", 152 | " \"\"\"\n", 153 | " Forward differencing of f at x_i with grid spacing dx.\n", 154 | " \"\"\"\n", 155 | " \n", 156 | " #" 157 | ] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "execution_count": null, 162 | "metadata": { 163 | "collapsed": true 164 | }, 165 | "outputs": [], 166 | "source": [ 167 | "def central_differencing(f, x_i, dx):\n", 168 | " \"\"\"\n", 169 | " Second order central differencing of f at x_i with grid spacing dx.\n", 170 | " \"\"\"\n", 171 | " \n", 172 | " #" 173 | ] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "execution_count": null, 178 | "metadata": { 179 | "collapsed": true 180 | }, 181 | "outputs": [], 182 | "source": [ 183 | "import numpy" 184 | ] 185 | }, 186 | { 187 | "cell_type": "code", 188 | "execution_count": null, 189 | "metadata": { 190 | "collapsed": true 191 | }, 192 | "outputs": [], 193 | "source": [ 194 | "bd = backward_differencing(numpy.exp, 0.0, dx=1.0)\n", 195 | "fd = forward_differencing(numpy.exp, 0.0, dx=1.0)\n", 196 | "cd1, cd2 = central_differencing(numpy.exp, 0.0, dx=1.0)\n", 197 | "\n", 198 | "print(\"Backward difference should be 1, is {}, error {}\".format(bd, abs(bd - 1.0)))\n", 199 | "print(\"Forward difference should be 1, is {}, error {}\".format(fd, abs(fd - 
1.0)))\n", 200 | "print(\"Central difference (1st derivative) should be 1, is {}, error {}\".format(cd1, abs(cd1 - 1.0)))\n", 201 | "print(\"Central difference (2nd derivative) should be 1, is {}, error {}\".format(cd2, abs(cd2 - 1.0)))" 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": {}, 207 | "source": [ 208 | "The errors here are significant. What matters is how fast the errors reduce as we change the grid spacing. Try changing from $\\Delta x = 1$ to $\\Delta x = 0.1$:" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": null, 214 | "metadata": { 215 | "collapsed": true 216 | }, 217 | "outputs": [], 218 | "source": [ 219 | "bd = backward_differencing(numpy.exp, 0.0, dx=0.1)\n", 220 | "fd = forward_differencing(numpy.exp, 0.0, dx=0.1)\n", 221 | "cd1, cd2 = central_differencing(numpy.exp, 0.0, dx=0.1)\n", 222 | "\n", 223 | "print(\"Backward difference should be 1, is {}, error {}\".format(bd, abs(bd - 1.0)))\n", 224 | "print(\"Forward difference should be 1, is {}, error {}\".format(fd, abs(fd - 1.0)))\n", 225 | "print(\"Central difference (1st derivative) should be 1, is {}, error {}\".format(cd1, abs(cd1 - 1.0)))\n", 226 | "print(\"Central difference (2nd derivative) should be 1, is {}, error {}\".format(cd2, abs(cd2 - 1.0)))" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "We see *roughly* the expected scaling, with forward and backward differencing errors reducing by roughly $10$, and central differencing errors reducing by roughly $10^2$." 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": {}, 239 | "source": [ 240 | "## Convergence" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "The feature that we always want to show is that the error $\\cal E$ decreases with the grid spacing $\\Delta x$. In particular, for most methods in Numerical Relativity, we expect a power law relationship:\n", 248 | "\n", 249 | "$$\n", 250 | " {\\cal E} \\propto \\left( \\Delta x \\right)^p.\n", 251 | "$$\n", 252 | "\n", 253 | "If we can measure the error (by knowing the exact solution) then we can measure the *convergence rate* $p$, by using\n", 254 | "\n", 255 | "$$\n", 256 | " \\log \\left( {\\cal E} \\right) = p \\, \\log \\left( \\Delta x \\right) + \\text{constant}.\n", 257 | "$$\n", 258 | "\n", 259 | "That is $p$ is the slope of the best-fit straight line through the plot of the error against the grid spacing, on a logarithmic scale.\n", 260 | "\n", 261 | "If we do not know the exact solution (the usual case), we can use *self convergence* to do the same measurement.\n", 262 | "\n", 263 | "We check this for our finite differencing above." 
264 | ] 265 | }, 266 | { 267 | "cell_type": "code", 268 | "execution_count": null, 269 | "metadata": { 270 | "collapsed": true 271 | }, 272 | "outputs": [], 273 | "source": [ 274 | "from matplotlib import pyplot\n", 275 | "%matplotlib notebook\n", 276 | "\n", 277 | "dxs = numpy.logspace(-5, 0, 10)\n", 278 | "bd_errors = numpy.zeros_like(dxs)\n", 279 | "fd_errors = numpy.zeros_like(dxs)\n", 280 | "cd1_errors = numpy.zeros_like(dxs)\n", 281 | "cd2_errors = numpy.zeros_like(dxs)\n", 282 | "\n", 283 | "for i, dx in enumerate(dxs):\n", 284 | " #" 285 | ] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "metadata": { 291 | "collapsed": true 292 | }, 293 | "outputs": [], 294 | "source": [ 295 | "pyplot.figure()\n", 296 | "pyplot.loglog(dxs, bd_errors, 'kx', label='Backwards')\n", 297 | "pyplot.loglog(dxs, fd_errors, 'b+', label='Forwards')\n", 298 | "pyplot.loglog(dxs, cd1_errors, 'go', label='Central (1st)')\n", 299 | "pyplot.loglog(dxs, cd2_errors, 'r^', label='Central (2nd)')\n", 300 | "pyplot.loglog(dxs, dxs*(bd_errors[0]/dxs[0]), 'k-', label=r\"$p=1$\")\n", 301 | "pyplot.loglog(dxs, dxs**2*(cd1_errors[0]/dxs[0]**2), 'k--', label=r\"$p=2$\")\n", 302 | "pyplot.xlabel(r\"$\\Delta x$\")\n", 303 | "pyplot.ylabel(\"Error\")\n", 304 | "pyplot.legend(loc=\"lower right\")\n", 305 | "pyplot.show()" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "metadata": {}, 311 | "source": [ 312 | "Forwards and backwards differencing are converging at first order ($p=1$). Central differencing is converging at second order ($p=2$) until floating point roundoff effects start causing problems at small $\\Delta x$." 313 | ] 314 | }, 315 | { 316 | "cell_type": "markdown", 317 | "metadata": {}, 318 | "source": [ 319 | "# Extension exercises" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "##### Self convergence\n", 327 | "\n", 328 | "By definition, the error ${\\cal E}(\\Delta x)$ is a function of the grid spacing, as is our numerical approximation of the thing we're trying to compute $F(\\Delta x)$ (above $F$ was the derivative of $f$, evaluated at $0$). This gives\n", 329 | "\n", 330 | "$$\n", 331 | " {\\cal E}(\\Delta x) = F \\left( \\Delta x \\right) - F \\left( 0 \\right)\n", 332 | "$$\n", 333 | "\n", 334 | "or\n", 335 | "\n", 336 | "$$\n", 337 | " F \\left( \\Delta x \\right) = F \\left( 0 \\right) + {\\cal E}(\\Delta x).\n", 338 | "$$\n", 339 | "\n", 340 | "Of course, $F(0)$ is the exact solution we're trying to compute. However, by subtracting any *two* approximations we can eliminate the exact solution. Using the power law dependence\n", 341 | "\n", 342 | "$$\n", 343 | " {\\cal E}(\\Delta x) = C \\left( \\Delta x \\right)^p\n", 344 | "$$\n", 345 | "\n", 346 | "this gives\n", 347 | "\n", 348 | "$$\n", 349 | " F \\left( \\alpha \\Delta x \\right) - F \\left( \\Delta x \\right) = C \\left( \\Delta x \\right)^p \\left( \\alpha^p - 1 \\right).\n", 350 | "$$\n", 351 | "\n", 352 | "We still do not know the value of the constant $C$. 
However, we can use *three* approximations to eliminate it (note for convenience we chose the same ratio $\\alpha$ between the first two resolutions as\n", 353 | "between the last two):\n", 354 | "\n", 355 | "$$\n", 356 | " \\frac{F \\left( \\alpha^2 \\Delta x \\right) - F \\left( \\alpha \\Delta x \\right)}{F \\left( \\alpha \\Delta x \\right) - F \\left( \\Delta x \\right)} = \\frac{\\left( \\alpha^{2p} - \\alpha^p \\right)}{\\left( \\alpha^p - 1 \\right)} = \\alpha^p.\n", 357 | "$$\n", 358 | "\n", 359 | "So the *self-convergence rate* is\n", 360 | "\n", 361 | "$$\n", 362 | " p = \\log_{\\alpha} \\left| \\frac{F \\left( \\alpha^2 \\Delta x \\right) - F \\left( \\alpha \\Delta x \\right)}{F \\left( \\alpha \\Delta x \\right) - F \\left( \\Delta x \\right)} \\right|.\n", 363 | "$$\n", 364 | "\n", 365 | "Compute this self-convergence rate for all the cases above." 366 | ] 367 | }, 368 | { 369 | "cell_type": "markdown", 370 | "metadata": {}, 371 | "source": [ 372 | "##### Higher order\n", 373 | "\n", 374 | "Show, either by Taylor expansion, or by constructing the interpolating polynomial, that the fourth order central differencing approximations are\n", 375 | "\n", 376 | "\\begin{align}\n", 377 | " \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} & = \\frac{1}{12 \\, \\Delta x} \\left( -f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2} \\right) + {\\cal O} \\left( \\Delta x^4 \\right), \\\\\n", 378 | " \\left. \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i} & = \\frac{1}{12 \\left( \\Delta x \\right)^2} \\left( -f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2} \\right) + {\\cal O} \\left( \\Delta x^4 \\right).\n", 379 | "\\end{align}" 380 | ] 381 | }, 382 | { 383 | "cell_type": "markdown", 384 | "metadata": {}, 385 | "source": [ 386 | "##### Measure the convergence rate\n", 387 | "\n", 388 | "Using `numpy.polyfit`, directly measure the convergence rate for the algorithms above. Be careful to exclude points where finite differencing effects cause problems. Repeat the test for the fourth order formulas above." 
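,
    "\n",
    "A minimal sketch of how the self-convergence and `numpy.polyfit` exercises might be approached (assumptions: the differencing routines from earlier in the notebook are defined, the `dxs` and error arrays from the convergence study have been filled in, and the helper name `self_convergence_rate` is ours, not part of the tutorial):\n",
    "\n",
    "```python\n",
    "import numpy\n",
    "\n",
    "def self_convergence_rate(F_coarse, F_mid, F_fine, alpha):\n",
    "    # F_coarse, F_mid, F_fine are approximations computed with spacings\n",
    "    # alpha**2 * dx, alpha * dx and dx respectively, with alpha > 1.\n",
    "    return numpy.log(abs((F_coarse - F_mid) / (F_mid - F_fine))) / numpy.log(alpha)\n",
    "\n",
    "# Self-convergence of forward differencing: p should be close to 1.\n",
    "alpha, dx = 2.0, 1.0e-3\n",
    "F = [forward_differencing(numpy.exp, 0.0, alpha**k * dx) for k in (2, 1, 0)]\n",
    "print(self_convergence_rate(F[0], F[1], F[2], alpha))\n",
    "\n",
    "# Measured convergence rate: the slope of log(error) against log(dx).\n",
    "# Exclude the smallest spacings if roundoff has contaminated them.\n",
    "p, log_C = numpy.polyfit(numpy.log(dxs), numpy.log(fd_errors), 1)\n",
    "print(p)\n",
    "```"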
389 | ] 390 | } 391 | ], 392 | "metadata": { 393 | "kernelspec": { 394 | "display_name": "Python 3", 395 | "language": "python", 396 | "name": "python3" 397 | }, 398 | "language_info": { 399 | "codemirror_mode": { 400 | "name": "ipython", 401 | "version": 3 402 | }, 403 | "file_extension": ".py", 404 | "mimetype": "text/x-python", 405 | "name": "python", 406 | "nbconvert_exporter": "python", 407 | "pygments_lexer": "ipython3", 408 | "version": "3.6.2" 409 | } 410 | }, 411 | "nbformat": 4, 412 | "nbformat_minor": 1 413 | } 414 | -------------------------------------------------------------------------------- /slides/slides.tex: -------------------------------------------------------------------------------- 1 | \documentclass{beamer} 2 | 3 | \usepackage[utf8]{inputenc} 4 | \usepackage{beamerthemesplit} 5 | \usepackage{url} 6 | \usepackage{hyperref} 7 | \usepackage{tikz} 8 | \usepackage{alltt} 9 | 10 | \usepackage{listings} 11 | \usepackage{marvosym} 12 | \usepackage{color} 13 | \usepackage[multidot]{grffile} 14 | \usepackage{multirow} 15 | \usepackage{array} 16 | \usepackage{setspace} 17 | 18 | \usepackage{CJKutf8} 19 | \newcommand{\zhs}[1]{\begin{CJK}{UTF8}{gbsn}#1\end{CJK}} 20 | 21 | \usetheme{Madrid} 22 | 23 | \usecolortheme[RGB={132,186,75}]{structure} 24 | \definecolor{cactusgreen}{RGB}{132,186,75} 25 | \newcommand{\red}[1]{\textcolor{cactusgreen}{#1}} 26 | \newcommand{\black}[1]{\textcolor{black}{#1}} 27 | 28 | \graphicspath{{../pics/}} 29 | 30 | \logo{\includegraphics[height=0.7cm]{CLR_HOR}} 31 | 32 | \newcommand{\head}[2] 33 | {\frame{\frametitle{}\begin{centering}\LARGE#1\\#2\end{centering}}} 34 | 35 | \newcommand{\abspic}[4] 36 | {\vspace{ #2\paperheight}\hspace{ #3\paperwidth}\includegraphics[height=#4\paperheight]{#1}\\ 37 | \vspace{-#2\paperheight}\vspace{-#4\paperheight}\vspace{-0.0038\paperheight}} 38 | 39 | \newcommand{\picw}[4]{{ 40 | \usebackgroundtemplate{ 41 | \color{black}\vrule width\paperwidth height\paperheight\hspace{-\paperwidth}\hspace{-0.01\paperwidth} 42 | \hspace{#4\paperwidth}\includegraphics[width=#3\paperwidth, height=\paperheight]{#1}}\logo{} 43 | \frame[plain]{\frametitle{#2}} 44 | }} 45 | \newcommand{\pic}[2]{\picw{#1}{#2}{}{0}} 46 | 47 | \newcommand{\question}[1]{\frame{\begin{centering}\Huge #1\\\end{centering}}} 48 | \newcommand{\redidot}{\makebox[0mm]{\hphantom{i}\red{i}}{\i}} 49 | \newcommand{\blackidot}{\makebox[0mm]{\hphantom{i}\black{i}}{\i}} 50 | 51 | % We want to use the infolines outer theme because it uses so less space, but 52 | % it also tries to print an institution and the slide numbers 53 | % Therefore, we here redefine the footline ourselfes - mostly a copy & paste from 54 | % /usr/share/texmf/tex/latex/beamer/themes/outer/beamerouterthemeinfolines.sty 55 | \defbeamertemplate*{footline}{infolines theme without institution and slide numbers} 56 | { 57 | \leavevmode% 58 | \hbox{% 59 | \begin{beamercolorbox}[wd=.25\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}% 60 | \usebeamerfont{author in head/foot}\insertshortauthor 61 | \end{beamercolorbox}% 62 | \begin{beamercolorbox}[wd=.5\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}% 63 | \usebeamerfont{title in head/foot}\insertshorttitle 64 | \end{beamercolorbox}% 65 | \begin{beamercolorbox}[wd=.25\paperwidth,ht=2.25ex,dp=1ex,center]{date in head/foot}% 66 | \usebeamerfont{date in head/foot}\insertshortdate{} 67 | \end{beamercolorbox}}% 68 | \vskip0pt% 69 | } 70 | % No navigation symbols 71 | \setbeamertemplate{navigation symbols}{} 72 | 73 | \title[Taming Relativity]{Taming 
hyperbolic equations with high-performance computing} 74 | \author[Frank Löffler \& Steven Brandt]{Frank Löffler and Steven R. Brandt} 75 | \institute{Center for Compuation and Technology\\ 76 | Louisiana State University \\ 77 | Funded by: NSF OAC 1550551} 78 | \titlegraphic{ 79 | \includegraphics[height=2cm]{cactuslogo}\\ 80 | } 81 | \date[2017-07-31]{2017-07-31} 82 | 83 | \begin{document} 84 | 85 | \frame{\titlepage} 86 | 87 | \frame{\frametitle{Summary of Challenges} 88 | \abspic{pattern-puzzle-jigsaw-1}{0.3}{0.7}{0.25} 89 | \begin{itemize} 90 | \item Many scientific/engineering components 91 | \begin{itemize} 92 | \item Physics 93 | \item Mathematics 94 | \end{itemize} 95 | \item Many numerical algorithm components 96 | \begin{itemize} 97 | \item Finite difference, finite volume, spectral methods 98 | \item Structured or unstructured meshes, mesh refinements 99 | \item Multipatch and multimodel 100 | \end{itemize} 101 | \item Many different computational components 102 | \begin{itemize} 103 | \item Parallelism (MPI, OpenMP, ...) 104 | \item Parallel I/O (e.g. Checkpointing) 105 | \item Visualization 106 | \end{itemize} 107 | \item Many researchers 108 | \begin{itemize} 109 | \item Different Universities 110 | \item Different Countries 111 | \end{itemize} 112 | \end{itemize} 113 | \begin{block}{Challenge} 114 | Defining good abstractions to bring these together in a unified, scalable 115 | framework, enabling science\end{block} 116 | } 117 | 118 | \frame{\frametitle{Cactus Programming Abstractions} 119 | How many programs work... 120 | 121 | \begin{itemize} 122 | \item \textbf{Read in parameters} Scientifically, a trivial task. In 123 | practice, a major amount of work that includes error checking, 124 | discoverability, etc. 125 | \item \textbf{Allocate a distributed data structure} Even for regular 126 | meshes, even if it is chopped into relatively equal-sized pieces, 127 | this step is necessary. 128 | \item \textbf{Run initialization code} Populate the grid 129 | \item \textbf{Evolve One Step} 130 | Frequently, the next pieces are inextricable parts of the same block of code. 131 | \begin{itemize} 132 | \item \textbf{Apply a time evolution scheme} Runge Kutta, Adams-Bashforth, etc. 133 | \item \textbf{Update boundary conditions and synchronize the grids} 134 | \item \textbf{Save Data} Write data to files periodically, ideally this 135 | should include checkpoint files. 136 | \item \textbf{Repeat} if not done 137 | \end{itemize} 138 | \item \textbf{Cleanup} do analysis, write data, checkpoint 139 | \end{itemize} 140 | } 141 | \frame{\frametitle{Cactus Programming Abstractions} 142 | Advantages for Cactus and Collaborative Programming 143 | \begin{itemize} 144 | \item \textbf{Better Modularization} Grid synchronization, I/O (whether 145 | HDF5, ASCII, checkpointing, etc.), time integration, are all independent. 146 | \item \textbf{Simplified Scientific Programming} Programmers need only 147 | write code for a logical grid. 148 | \end{itemize} 149 | Made possible by... 150 | \begin{itemize} 151 | \item \textbf{Distributed Data Structure} Cactus provides the distributed 152 | data structure as well as discovery/introspection tools to enable you to 153 | find out what variables other modules use. 154 | \item \textbf{Common Schedule Tree} Though the basics are startup, evolve, 155 | shutdown, coordination of subroutine calls within the tree is facilitated 156 | by the Cactus scheduler and the ``before'' and ``after'' directives of schedule.ccl. 
157 | \end{itemize} 158 | } 159 | %\frame{\frametitle{Cactus Programming Abstractions} 160 | % What programmers need to do...\\ 161 | % \begin{itemize} 162 | % \item \textbf{Specify parameters} - Parameters along with default 163 | % values and other meta data can be analyzed at runtime 164 | % to discover errors or provide help. It's always easy 165 | % to find parameters and what they do. 166 | % \item \textbf{Write subroutines for logically rectangular regions 167 | % on the grid.} 168 | % \item \textbf{Define variables on the grid} - Grid functions are 169 | % distributed data structures. The runtime can loop through 170 | % all of them, and access their meta data. 171 | % \item \textbf{Insert subroutines into the schedule tree (workflow)} - 172 | % Insertion into the schedule tree is accomplished with 173 | % \textit{before} and \textit{after} clauses reminiscent 174 | % of aspect-oriented programming. 175 | % \end{itemize} 176 | % Infrastructure thorns can access grid functions, parameters, 177 | % and meta data at runtime and can perform actions without 178 | % intruding into the science code. 179 | %} 180 | \frame{\frametitle{Programming Abstractions, continued} 181 | What programmers don't need to do...\\ 182 | \begin{itemize} 183 | \item \textbf{Write MPI code} - Synchronization and 184 | ghost zone exchanges handled mostly automatically. 185 | \item \textbf{Write adaptive mesh refinement} 186 | Prolongation and restriction are handled mostly 187 | automatically also. 188 | \item \textbf{Write a lot of ``boiler plate'' code} 189 | \begin{itemize} 190 | \item parameter parsing 191 | \item file I/O 192 | \item checkpointing 193 | \item etc. 194 | \end{itemize} 195 | \item \textbf{Do everything yourself} - The community is 196 | always generating new functionality and helping Cactus 197 | to run better/faster. 198 | \end{itemize} 199 | } 200 | 201 | \section{Structure technical} 202 | 203 | \frame{\frametitle{Cactus Core: The Flesh } 204 | \abspic{cactuslogo_structure_flesh}{0.1}{0.6}{0.4} 205 | \begin{itemize} 206 | \item ANSI-C and C++ 207 | \item Independent of all other components 208 | \item Error handling 209 | \item Flexible build system 210 | \item Parameter parsing/steering 211 | \item Global variable management 212 | \item Rule-based scheduler 213 | \item Extensible APIs 214 | \begin{itemize} 215 | \item Parallel Operations 216 | \item Input/Output 217 | \item Reduction 218 | \item Interpolation 219 | \item Timers 220 | \end{itemize} 221 | \item Functionality provided by (swappable) components 222 | \end{itemize} 223 | } 224 | 225 | \frame{\frametitle{Cactus Components: Thorns } 226 | \abspic{cactuslogo_structure_thorns}{0.15}{0.7}{0.4} 227 | \begin{itemize} 228 | \item C, C++, Fortran 229 | \item Encapsulating some functionality 230 | \item Rarely use MPI directly 231 | \item Communication/Configuration via defined APIs 232 | \begin{itemize} 233 | \item grid setup and memory allocation (driver) 234 | \item input/output 235 | \item interpolation 236 | \item initial data 237 | \item boundary conditions 238 | \item evolution systems 239 | \item equations of state 240 | \item remote steering (e.g. 
https server) 241 | \end{itemize} 242 | \end{itemize} 243 | } 244 | 245 | \frame{\frametitle{Cactus Framework Structure} 246 | \abspic{ET_graph_old}{-0.3}{0.13}{0.7} 247 | \abspic{cactuslogo_structure}{-0.1}{0.35}{0.4} 248 | } 249 | 250 | \frame{\frametitle{Basis Module Overview} 251 | \abspic{Sierpinski}{0.18}{0.6}{0.12} 252 | \abspic{hdf_logo}{0.48}{0.4}{0.05} 253 | \abspic{visit_logo}{0.50}{0.5}{0.05} 254 | \abspic{saga_logo}{0.52}{0.62}{0.03} 255 | \abspic{flickr_logo}{0.57}{0.62}{0.04} 256 | \abspic{twitter_logo}{0.58}{0.71}{0.05} 257 | 258 | Basis for scalable algorithm development 259 | \begin{itemize} 260 | \item Most used: finite differences on structured meshes 261 | \item Parallel driver components 262 | \begin{itemize} 263 | \item Simple Unigrid 264 | \item Carpet: Multipatch, Mesh-refinement 265 | \end{itemize} 266 | \item Method of lines 267 | \end{itemize} 268 | Interfaces to external Libraries/Tools 269 | \begin{itemize} 270 | \item Interface to elliptic solvers (e.g. PETSc, Lorene) 271 | \item Input/Output: HDF5 272 | \item Visualization: VisIt, OpenDX, Vish 273 | \item Other: PAPI, Hypre, Saga, Flickr, Twitter 274 | \end{itemize} 275 | } 276 | 277 | \frame{\frametitle{Scaling} 278 | \abspic{tianhe1}{-0.18}{0.2}{0.2} 279 | \begin{itemize} 280 | \item Unigrid scales to hundreds of thousands of cores 281 | \item \href{run:mplayer -noaspect -fs -zoom -vo xv ../pics/moving_punctures_AEI.mov} 282 | {Production runs use $\approx 10$ levels of mesh refinement, 283 | nested grids of size $\approx 60x60x60$} 284 | \item Current mesh refinement runs scale up to $\approx 15$k cores 285 | \item Runtime from weeks to few months 286 | \end{itemize} 287 | \abspic{rit_cluster}{0}{0.5}{0.22} 288 | } 289 | 290 | \frame{\frametitle{Cactus Prizes} 291 | \abspic{A-trophy}{-0.02}{0.75}{0.17} 292 | \begin{itemize} 293 | \item HPC "Most Stellar" Challenge Award (SC1999) 294 | \item Gordon Bell Prize for Supercomputing (SC2001)\\ 295 | \textit{Supporting Efficient Execution in Heterogeneous 296 | Distributed Computing Environments with Cactus and Globus} 297 | \item High-performance bandwidth challenge (SC2002)\\ 298 | \textit{Highest Performing Application: Wide Area 299 | Distributed Simulations Using Cactus, Globus and Visapult} 300 | \item HPC Challenge Awards (SC2002)\\ 301 | \textit{Most Geographically Distributed Application and 302 | Most Heterogeneous Set of Platforms} 303 | \item First place in the IEEE SCALE 2009 Challenge 304 | \end{itemize} 305 | } 306 | 307 | \frame{\frametitle{Papers} 308 | \begin{itemize} 309 | \item \textit{The cactus code: A problem solving environment for the grid} Gabrielle Allen, Werner Benger, Tom Goodale, H-C Hege, Gerd Lanfermann, André Merzky, Thomas Radke, Edward Seidel, John Shalf \textbf{High-Performance Distributed Computing—VECPAR} 2002, 197-227 -- \red{218 citations, Best Papers of HPDC award} 310 | \item \textit{Three-dimensional relativistic simulations of rotating neutron-star collapse to a Kerr black hole} Luca Baiotti, Ian Hawke, Pedro J Montero, Frank Löffler, Luciano Rezzolla, Nikolaos Stergioulas, José A Font, Ed Seidel \textbf{Physical Review D} 2005 71 (2) 024035 - \red{173 citations} 311 | \item \textit{Numerical evolutions of a black hole-neutron star system in full general relativity: Head-on collision} Frank Löffler, Luciano Rezzolla, Marcus Ansorg \textbf{Physical Review D} 2006 74 (10) 104018 - \red{53 citations} 312 | \end{itemize}} 313 | \frame{\frametitle{Papers} 314 | \begin{itemize} 315 | \item \textit{ Recoil velocities from 
equal-mass binary-black-hole mergers } M Koppitz, D Pollney, C Reisswig, L Rezzolla, J Thornburg, P Diener, E Schnetter \textbf{Physical review letters} 2007 99 (4), 041102 - \red{181 citations} 316 | %\item \textit{3D collapse of rotating stellar iron cores in general relativity including deleptonization and a nuclear equation of state} Christian D Ott, H Dimmelmeier, A Marek, H-T Janka, I Hawke, B Zink, E Schnetter \textbf{Physical review letters} 2007 98 (26), 261101 - \red{94 citations} 317 | %\item \textit{The Final Spin from the Coalescence of Aligned-Spin Black Hole Binaries} Luciano Rezzolla, Peter Diener, Ernst Nils Dorband, Denis Pollney, Christian Reisswig, Erik Schnetter, Jennifer Seiler \textbf{The Astrophysical Journal Letters} 2008 674 (1) L29 - \red{89 citations} 318 | \item \textit{CaKernel–A parallel application programming framework for heterogenous computing architectures} M Blazewicz, SR Brandt, M Kierzynka, K Kurowski, B Ludwiczak, J Tao, J Weglarz \textbf{Scientific Programming} 2011, 19 (4), 185-197 - \red{7 citations} 319 | \item \textit{The Einstein Toolkit: A Community Computational Infrastructure for Relativistic Astrophysics} Frank Löffler, Joshua Faber, Eloisa Bentivegna, Tanja Bode, Peter Diener, Roland Haas, Ian Hinder, Bruno C Mundim, Christian D Ott, Erik Schnetter, Gabrielle Allen, Manuela Campanelli, Pablo Laguna \textbf{Classical and Quantum Gravity} 2012, 29 (11) 115001 - \red{16 citations} 320 | \end{itemize} 321 | } 322 | 323 | \frame{\frametitle{Convenience Tools} 324 | \begin{minipage}{0.3\textwidth} 325 | \centering 326 | \includegraphics[height=0.6\textwidth]{cactuslogo}\\GetComponents\\ 327 | \end{minipage} 328 | \begin{minipage}{0.3\textwidth} 329 | \centering 330 | \includegraphics[height=0.6\textwidth]{factory}\\Simfactory\\ 331 | \end{minipage} 332 | \begin{minipage}{0.3\textwidth} 333 | \centering 334 | \includegraphics[height=0.6\textwidth]{camera}\\Formaline\\ 335 | \end{minipage} 336 | } 337 | 338 | \frame{\frametitle{Tools: GetComponents} 339 | \hspace{9cm}\includegraphics[height=3cm]{cactuslogo}\\\vspace{-3cm} 340 | Task: Collect software from various repositories\\at different sites\\*[1em] 341 | Example simulation assembly: 342 | \begin{itemize} 343 | \item Cactus Flesh and Toolkit (git) 344 | \item Core Einstein Toolkit (git) 345 | \item Carpet AMR (git) 346 | \item Tools, Parameter Files and Data (git) 347 | \item Group Modules (x.groupthorns.org) 348 | \item Individual Modules (x.mythorns.org) 349 | \end{itemize} 350 | x: cvs, svn, darcs, git, hg, http 351 | } 352 | 353 | \frame{\frametitle{Tools: Simulation Factory} 354 | \begin{centering} 355 | \small 356 | \includegraphics[height=3cm]{factory}\\ 357 | \href{http://www.simfactory.org/}{http://www.simfactory.org/}\\ 358 | \end{centering}\vspace{1em} 359 | Task: Provide support for common, repetitive steps:\\ 360 | \begin{itemize} 361 | \item Access remote systems, synchronize source code trees 362 | \item Configure and build on different systems semi-automatically 363 | \item Provide maintained list of supercomputer configurations 364 | \item Manage simulations (follow ``best practices'', avoid human errors) 365 | \end{itemize} 366 | } 367 | 368 | \frame{\frametitle{Tools: Formaline} 369 | \hspace{8cm}\includegraphics[height=2cm]{camera}\\ 370 | \begin{itemize} 371 | \item Task: Ensure that simulations are and remain repeatable, remember 372 | exactly how they were performed 373 | \item Take snapshots of source code, system configuration; store it in 374 | executable and/or git 
repository 375 | \item Tag all output files 376 | \end{itemize} 377 | } 378 | 379 | \frame{\frametitle{CaFunwave} 380 | The CaFunwave Code: 381 | \begin{itemize} 382 | \item Based on Fengyan Shi's Funwave TVD code 383 | \item Based on the Boussinesq equations 384 | \item Vegetation included, both integral and simple forms 385 | \begin{itemize} 386 | \item 2D approximation of water waves valid for 387 | long, weakly-nonlinear waves 388 | \item See the Mathematica 389 | notebook deriving them from the Euler equations if 390 | that is of interest. 391 | \end{itemize} 392 | \end{itemize} 393 | } 394 | 395 | \frame{\frametitle{CaFunwave} 396 | \begin{itemize} 397 | \item Chakrabarti, Agnimitro, et al. \textit{``Boussinesq modeling of wave-induced hydrodynamics in coastal wetlands.''} \textbf{Journal of Geophysical Research: Oceans} (2017). 398 | \item Oler, Adam, et al. \textit{``Implementation of an Infinite-Height Levee in CaFunwave Using an Immersed-Boundary Method.''} \textbf{Journal of Fluids Engineering} 138.11 (2016): 111103. 399 | \end{itemize} 400 | } 401 | 402 | \end{document} 403 | 404 | -------------------------------------------------------------------------------- /Using-Cactus/Checkpoint.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "

# Checkpointing and Restarting

\n", 10 | "There are a number of reasons why you might not be able to run a simulation all at once.\n", 11 | "Your simulation may take a week to run, but the maximum time in the queue might be only a few hours. Alternatively,\n", 12 | "your job might crash after running several hours but only be half finished. Was the preceeding time simply lost? Not\n", 13 | "if you make use of the checkpoint restart feature of Cactus." 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "%cd ~/CactusFW2" 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "First, we're going to start by creating a file named wave3.par. It will differ only slightly from wave.par used in the\n", 30 | "compiling and running notebook. You'll find the new bits that relate to checkpointing at the end." 31 | ] 32 | }, 33 | { 34 | "cell_type": "code", 35 | "execution_count": null, 36 | "metadata": {}, 37 | "outputs": [], 38 | "source": [ 39 | "%%writefile wave3.par\n", 40 | "\n", 41 | "#Reorder the parameters for easy comparison to the input.txt in example 3\n", 42 | "ActiveThorns = \"\n", 43 | " CoordBase FunWave FunwaveCoord CartGrid3D Carpet CarpetIOASCII\n", 44 | " CartGrid3D IOUtil CarpetIOBasic CarpetSlab Boundary SymBase MoL\n", 45 | " CarpetReduce LocalReduce InitBase CarpetLib LoopControl Tridiagonal\n", 46 | " CarpetIOScalar \"\n", 47 | "\n", 48 | "#----------------------------------------------------\n", 49 | "# Flesh and CCTK parameters\n", 50 | "#----------------------------------------------------\n", 51 | "\n", 52 | "# flesh\n", 53 | "Cactus::cctk_run_title = \"Test Run\"\n", 54 | "Cactus::cctk_show_schedule = \"yes\"\n", 55 | "Cactus::cctk_itlast = 300\n", 56 | "Cactus::allow_mixeddim_gfs = \"yes\"\n", 57 | "\n", 58 | "# CartGrid3D\n", 59 | "CartGrid3D::type = \"coordbase\"\n", 60 | "CartGrid3D::avoid_origin = \"no\"\n", 61 | "CoordBase::domainsize = \"minmax\"\n", 62 | "CoordBase::spacing = \"gridspacing\"\n", 63 | "CoordBase::xmin = 0\n", 64 | "CoordBase::xmax = 30\n", 65 | "CoordBase::ymin = 0\n", 66 | "CoordBase::ymax = 30\n", 67 | "CoordBase::zmin = 0.0\n", 68 | "CoordBase::zmax = 0.0\n", 69 | "CoordBase::dx = 0.25\n", 70 | "CoordBase::dy = 0.25\n", 71 | "\n", 72 | "CoordBase::boundary_size_x_lower = 3\n", 73 | "CoordBase::boundary_size_x_upper = 3\n", 74 | "CoordBase::boundary_size_y_lower = 3\n", 75 | "CoordBase::boundary_size_y_upper = 3\n", 76 | "CoordBase::boundary_size_z_lower = 0\n", 77 | "CoordBase::boundary_size_z_upper = 0\n", 78 | "CoordBase::boundary_shiftout_x_lower = 1\n", 79 | "CoordBase::boundary_shiftout_x_upper = 1\n", 80 | "CoordBase::boundary_shiftout_y_lower = 1\n", 81 | "CoordBase::boundary_shiftout_y_upper = 1\n", 82 | "CoordBase::boundary_shiftout_z_lower = 1\n", 83 | "CoordBase::boundary_shiftout_z_upper = 1\n", 84 | "\n", 85 | "# Carpet\n", 86 | "Carpet::domain_from_coordbase = \"yes\"\n", 87 | "Carpet::ghost_size_x = 3\n", 88 | "Carpet::ghost_size_y = 3\n", 89 | "Carpet::ghost_size_z = 1\n", 90 | "carpet::adaptive_stepsize = yes\n", 91 | "\n", 92 | "# MoL\n", 93 | "MoL::ODE_Method = \"RK3\"\n", 94 | "MoL::disable_prolongation = \"yes\"\n", 95 | "\n", 96 | "# the output dir will be named after the parameter file name\n", 97 | "IO::out_dir = $parfile\n", 98 | "IO::out_fileinfo=\"none\"\n", 99 | "IOBasic::outInfo_every = 1\n", 100 | "IOBasic::outInfo_vars = \"FunWave::eta FunWave::u FunWave::v\"\n", 101 | "\n", 102 | "#IOASCII::out1D_every = 1\n", 103 | 
"#IOASCII::out1d_vars = \"FunWave::eta Funwave::depth\"\n", 104 | "CarpetIOASCII::compact_format = false\n", 105 | "IOASCII::out2D_every = 30\n", 106 | "IOASCII::out2D_xyplane_z = 0\n", 107 | "IOASCII::out2D_vars = \"FunWave::eta FunWave::u FunWave::v\"\n", 108 | "IOASCII::out2D_xz = \"no\"\n", 109 | "IOASCII::out2D_yz = \"no\"\n", 110 | "IOASCII::output_ghost_points = \"no\"\n", 111 | "\n", 112 | "IOScalar::outScalar_every = 1\n", 113 | "IOScalar::outScalar_vars = \"FunWave::eta FunWave::u FunWave::v\"\n", 114 | "\n", 115 | "#& = \"Funwave::eta\"\n", 116 | "\n", 117 | "#----------------------------------------------------\n", 118 | "# Funwave parameters\n", 119 | "#----------------------------------------------------\n", 120 | "\n", 121 | "# Funwave depth \n", 122 | "FunWave::depth_file_offset_x = 3\n", 123 | "FunWave::depth_file_offset_y = 3\n", 124 | "FunWave::depth_type = \"flat\"\n", 125 | "FunWave::depth_format = \"ele\"\n", 126 | "FunWave::depth_file = \"/tmp/__depth__.txt\"\n", 127 | "FunWave::depth_flat = 0.8\n", 128 | "#Funwave::test_depth_shore_x = 80\n", 129 | "#Funwave::test_depth_island_x = 40\n", 130 | "#Funwave::test_depth_island_y = 40\n", 131 | "FunWave::depth_xslp = 10.0\n", 132 | "FunWave::depth_slope = 0.05\n", 133 | "FunWave::dt_size = 0\n", 134 | "Funwave::generate_test_depth_data = true\n", 135 | "Funwave::num_wave_components = 1\n", 136 | "Funwave::wave_component_file = \"/home/sbrandt/workspace/shi_funwave/example_2/fft/wavemk_per_amp_pha.txt\"\n", 137 | "Funwave::peak_period = 1\n", 138 | "\n", 139 | "# import\n", 140 | "Funwave::time_ramp = 1.0\n", 141 | "Funwave::delta_wk = 0.5\n", 142 | "Funwave::dep_wk = 0.45\n", 143 | "Funwave::xc_wk = 3.0\n", 144 | "Funwave::ywidth_wk = 10000.0\n", 145 | "Funwave::tperiod = 1.0\n", 146 | "Funwave::amp_wk = 0.0232\n", 147 | "Funwave::theta_wk = 0.0\n", 148 | "Funwave::freqpeak = 0.2\n", 149 | "Funwave::freqmin = 0.1\n", 150 | "Funwave::freqmax = 0.4\n", 151 | "Funwave::hmo = 1.0\n", 152 | "Funwave::gammatma = 5.0\n", 153 | "Funwave::thetapeak = 10.0\n", 154 | "Funwave::sigma_theta = 15.0\n", 155 | "\n", 156 | "# Funwave wind forcing\n", 157 | "Funwave::wind_force = false\n", 158 | "Funwave::use_wind_mask = false\n", 159 | "Funwave::num_time_wind_data = 2\n", 160 | "Funwave::timewind[0] = 0\n", 161 | "Funwave::wu[0] = 25\n", 162 | "Funwave::wv[0] = 50\n", 163 | "Funwave::timewind[1] = 1000\n", 164 | "Funwave::wu[1] = 100\n", 165 | "Funwave::wv[1] = 100\n", 166 | "Funwave::boundary = funwave\n", 167 | "\n", 168 | "# Funwave wave maker\n", 169 | "FunWave::wavemaker_type = \"ini_gau\"\n", 170 | "FunWave::xc = 26.5\n", 171 | "FunWave::yc = 26.9\n", 172 | "FunWave::amp = 2.0\n", 173 | "FunWave::wid = 1\n", 174 | "Funwave::wdep = 0.78\n", 175 | "Funwave::xwavemaker = 25.0\n", 176 | "\n", 177 | "# Funwave sponge \n", 178 | "FunWave::sponge_on = false\n", 179 | "FunWave::sponge_west_width = 2.0\n", 180 | "FunWave::sponge_east_width = 2.0\n", 181 | "FunWave::sponge_north_width = 0.0\n", 182 | "FunWave::sponge_south_width = 0.0\n", 183 | "FunWave::sponge_decay_rate = 0.9\n", 184 | "FunWave::sponge_damping_magnitude = 5.0\n", 185 | "\n", 186 | "# Funwave dispersion (example 3 enables dispersion)\n", 187 | "FunWave::dispersion_on = \"true\"\n", 188 | "FunWave::gamma1 = 1.0\n", 189 | "FunWave::gamma2 = 1.0\n", 190 | "FunWave::gamma3 = 1.0\n", 191 | "FunWave::beta_ref = -0.531\n", 192 | "FunWave::swe_eta_dep = 0.80\n", 193 | "FunWave::cd = 0.0\n", 194 | "\n", 195 | "# Funwave numerics (MoL parameter controls time integration scheme)\n", 
196 | "FunWave::reconstruction_scheme = \"fourth\"\n", 197 | "FunWave::riemann_solver = \"HLLC\"\n", 198 | "FunWave::dtfac = 0.5\n", 199 | "FunWave::froudecap = 10.0\n", 200 | "FunWave::mindepth = 0.001\n", 201 | "FunWave::mindepthfrc = 0.001\n", 202 | "FunWave::enable_masks = \"true\"\n", 203 | "Funwave::estimate_dt_on = \"true\"\n", 204 | "\n", 205 | "FunwaveCoord::spherical_coordinates = false\n", 206 | "\n", 207 | "ActiveThorns = \"CarpetIOHDF5\"\n", 208 | "IOHDF5::out2D_xyplane_z = 0 \n", 209 | "IOHDF5::out2D_every = 10\n", 210 | "IOHDF5::out2D_vars = \" \n", 211 | " FunWave::eta\n", 212 | " FunWave::u\n", 213 | " FunWave::v\n", 214 | " Grid::Coordinates{out_every=1000000000}\n", 215 | "\"\n", 216 | "IOHDF5::out2D_xz = no\n", 217 | "IOHDF5::out2D_yz = no\n", 218 | "\n", 219 | "# Turn checkpointing on\n", 220 | "IOHDF5::checkpoint = yes\n", 221 | "\n", 222 | "# If you have a long running simulation,\n", 223 | "# you might want to set this to checkpoint\n", 224 | "# frequently enough that you don't either\n", 225 | "# spend too much time checkpointing, or (in\n", 226 | "# the event of a crash) spend too much time\n", 227 | "# recalculating.\n", 228 | "IO::checkpoint_every_walltime_hours = 0.5\n", 229 | " \n", 230 | "# If you think you might want to continue\n", 231 | "# your simulation.\n", 232 | "IO::checkpoint_on_terminate = yes\n", 233 | "\n", 234 | "# This setting tells Cactus to resume from\n", 235 | "# a checkpoint file if one exists\n", 236 | "IO::recover = autoprobe\n", 237 | "\n", 238 | "# This setting tells Cactus where to write\n", 239 | "# its checkpoint files.\n", 240 | "IO::checkpoint_dir = \"..\"\n", 241 | "# This setting tells Cactus where to read\n", 242 | "# its checkpoint files from.\n", 243 | "IO::recover_dir = \"..\"" 244 | ] 245 | }, 246 | { 247 | "cell_type": "markdown", 248 | "metadata": {}, 249 | "source": [ 250 | "When we studied compiling and running cactus we used \"run-submit\" to run our job interactively. If you are going to\n", 251 | "make use of a supercomputer, you cannot work this way, you have to submit to a job queue and wait for your job to finish.\n", 252 | "Simfactory simplifies this task as well, so that you don't need to know which job scheduler a given machine has or\n", 253 | "what its quirks are. If you are running on a machine without a queueing system, however, Simfactory simply runs the job in the background. That's what happens here." 254 | ] 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": null, 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [ 262 | "!rm -fr ~/simulations/wave3\n", 263 | "!./simfactory/bin/sim create-submit wave3.par --procs=2 --num-threads=1" 264 | ] 265 | }, 266 | { 267 | "cell_type": "markdown", 268 | "metadata": {}, 269 | "source": [ 270 | "The next command tells you the status of a job. You can run it repeatedly...." 271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": null, 276 | "metadata": {}, 277 | "outputs": [], 278 | "source": [ 279 | "!./simfactory/bin/sim list-sim wave3 " 280 | ] 281 | }, 282 | { 283 | "cell_type": "markdown", 284 | "metadata": {}, 285 | "source": [ 286 | "You can create a small python script that checks for you." 
287 | ] 288 | }, 289 | { 290 | "cell_type": "code", 291 | "execution_count": null, 292 | "metadata": {}, 293 | "outputs": [], 294 | "source": [ 295 | "import os\n", 296 | "import re\n", 297 | "import sys\n", 298 | "import time\n", 299 | "while True:\n", 300 | " c = os.popen(\"./simfactory/bin/sim list-sim wave3\").read()\n", 301 | " sys.stdout.write(c)\n", 302 | " sys.stdout.flush()\n", 303 | " time.sleep(3)\n", 304 | " if re.search(\"FINISHED\",c):\n", 305 | " break" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "metadata": {}, 311 | "source": [ 312 | "All the next command does is change the number of iterations the simulation requires. You don't have to use perl, but\n", 313 | "I happen to like it. Remember, changing a file in place with perl is as easy as pie (i.e. \"perl -p -i -e\")." 314 | ] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": null, 319 | "metadata": {}, 320 | "outputs": [], 321 | "source": [ 322 | "!perl -p -i -e 's{Cactus::cctk_itlast\\s+=\\s+\\d+}{Cactus::cctk_itlast = 400}' wave3.par" 323 | ] 324 | }, 325 | { 326 | "cell_type": "markdown", 327 | "metadata": {}, 328 | "source": [ 329 | "Unfortunately, for the purpose of restarting the job, Simfactory won't see the change we just made unless we\n", 330 | "copy it to the appropriate directory. Of course, we could have run the perl command on that file instead." 331 | ] 332 | }, 333 | { 334 | "cell_type": "code", 335 | "execution_count": null, 336 | "metadata": {}, 337 | "outputs": [], 338 | "source": [ 339 | "!cp wave3.par ~/simulations/wave3/SIMFACTORY/par/wave3.par" 340 | ] 341 | }, 342 | { 343 | "cell_type": "markdown", 344 | "metadata": {}, 345 | "source": [ 346 | "The next command restarts from the checkpoint file and continues." 347 | ] 348 | }, 349 | { 350 | "cell_type": "code", 351 | "execution_count": null, 352 | "metadata": {}, 353 | "outputs": [], 354 | "source": [ 355 | "!./simfactory/bin/sim submit wave3.par --procs=2 --num-threads=1" 356 | ] 357 | }, 358 | { 359 | "cell_type": "markdown", 360 | "metadata": {}, 361 | "source": [ 362 | "Note that the job number is now 0001 instead of 0000. It increments for each restart, as many as you have." 363 | ] 364 | }, 365 | { 366 | "cell_type": "code", 367 | "execution_count": null, 368 | "metadata": {}, 369 | "outputs": [], 370 | "source": [ 371 | "import os\n", 372 | "import re\n", 373 | "import sys\n", 374 | "import time\n", 375 | "while True:\n", 376 | " c = os.popen(\"./simfactory/bin/sim list-sim wave3\").read()\n", 377 | " sys.stdout.write(c)\n", 378 | " sys.stdout.flush()\n", 379 | " time.sleep(3)\n", 380 | " if re.search(\"FINISHED\",c):\n", 381 | " break" 382 | ] 383 | }, 384 | { 385 | "cell_type": "markdown", 386 | "metadata": {}, 387 | "source": [ 388 | "The code snippets below to plot the results were modified slightly from the compiling and running notebook. The x and y\n", 389 | "data files are taken from the 0000 directory, but the data is taken from the 0001 directory." 
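"\n",
"If you want to see these restart directories for yourself before plotting, list the simulation's output directory (the path below assumes the default ~/simulations layout used throughout this tutorial); you should see one output-NNNN directory per restart, here output-0000 and output-0001:\n",
"\n",
"    !ls ~/simulations/wave3/\n",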
390 | ] 391 | }, 392 | { 393 | "cell_type": "code", 394 | "execution_count": null, 395 | "metadata": {}, 396 | "outputs": [], 397 | "source": [ 398 | "# This cell enables inline plotting in the notebook\n", 399 | "%matplotlib inline\n", 400 | "\n", 401 | "import matplotlib\n", 402 | "import numpy as np\n", 403 | "import matplotlib.pyplot as plt" 404 | ] 405 | }, 406 | { 407 | "cell_type": "code", 408 | "execution_count": null, 409 | "metadata": {}, 410 | "outputs": [], 411 | "source": [ 412 | "import matplotlib.cm as cm\n", 413 | "# https://matplotlib.org/examples/color/colormaps_reference.html\n", 414 | "cmap = cm.gist_rainbow" 415 | ] 416 | }, 417 | { 418 | "cell_type": "code", 419 | "execution_count": null, 420 | "metadata": {}, 421 | "outputs": [], 422 | "source": [ 423 | "import h5py\n", 424 | "import re" 425 | ] 426 | }, 427 | { 428 | "cell_type": "code", 429 | "execution_count": null, 430 | "metadata": {}, 431 | "outputs": [], 432 | "source": [ 433 | "%cd ~/simulations/wave3/" 434 | ] 435 | }, 436 | { 437 | "cell_type": "code", 438 | "execution_count": null, 439 | "metadata": {}, 440 | "outputs": [], 441 | "source": [ 442 | "f5x = h5py.File(\"output-0000/wave3/x.xy.h5\")\n", 443 | "f5y = h5py.File(\"output-0000/wave3/y.xy.h5\")\n", 444 | "x_coords = {}\n", 445 | "y_coords = {}\n", 446 | "for nm in f5x:\n", 447 | " print(nm)\n", 448 | " m = re.search(r'rl=.*c=\\d+',nm)\n", 449 | " if m:\n", 450 | " k = m.group(0)\n", 451 | " x_coords[k]=np.copy(f5x[nm])\n", 452 | "for nm in f5y:\n", 453 | " m = re.search(r'rl=.*c=\\d+',nm)\n", 454 | " if m:\n", 455 | " k = m.group(0)\n", 456 | " y_coords[k]=np.copy(f5y[nm])" 457 | ] 458 | }, 459 | { 460 | "cell_type": "code", 461 | "execution_count": null, 462 | "metadata": {}, 463 | "outputs": [], 464 | "source": [ 465 | "f5 = h5py.File(\"output-0001/wave3/eta.xy.h5\")\n", 466 | "mn,mx = None,None\n", 467 | "\n", 468 | "# Compute the min and max\n", 469 | "for nm in f5:\n", 470 | " if not hasattr(f5[nm],\"shape\"):\n", 471 | " continue\n", 472 | " d5 = np.copy(f5[nm])\n", 473 | " tmin = np.min(d5)\n", 474 | " tmax = np.max(d5)\n", 475 | " if mn == None:\n", 476 | " mn,mx = tmin,tmax\n", 477 | " else:\n", 478 | " if tmin < mn:\n", 479 | " mn = tmin\n", 480 | " if tmax > mx:\n", 481 | " mx = tmax\n", 482 | " \n", 483 | "# Collect all the pieces into the d5_tl dictionary\n", 484 | "d5_tl = {} \n", 485 | "for nm in f5:\n", 486 | " if not hasattr(f5[nm],\"shape\"):\n", 487 | " continue\n", 488 | " # Parse the string nm...\n", 489 | " m = re.search(r'it=(\\d+)\\s+tl=\\d+\\s+(rl=(\\d+)\\s+c=(\\d+))',nm)\n", 490 | " if not m:\n", 491 | " print(\"nm=\",nm)\n", 492 | " continue\n", 493 | " # group(1) is the iteration number\n", 494 | " # group(2) is \"rl={number} c={number}\"\n", 495 | " # group(3) is the number in \"rl={number}\"\n", 496 | " # group(4) is the number in \"c={number}\"\n", 497 | " grid = int(m.group(1))\n", 498 | " comp = int(m.group(4))\n", 499 | " k = m.group(2)\n", 500 | " if grid in d5_tl:\n", 501 | " d5_tl[grid][\"x\"] += [x_coords[k]] # append to the x array\n", 502 | " d5_tl[grid][\"y\"] += [y_coords[k]] # append to the y array\n", 503 | " d5_tl[grid][\"D\"] += [f5[nm]] # append to the data array\n", 504 | " else:\n", 505 | " d5_tl[grid] = {\n", 506 | " \"x\":[x_coords[k]],\n", 507 | " \"y\":[y_coords[k]],\n", 508 | " \"D\":[f5[nm]]\n", 509 | " }\n", 510 | "\n", 511 | "# Sort the keys so that we display time levels in order\n", 512 | "def keysetf(d):\n", 513 | " a = [] # create an empty list\n", 514 | " for k in d: # for each key in 
d\n", 515 | " a.append(k) # append it to the list\n", 516 | " return a\n", 517 | "kys = keysetf(d5_tl.keys())\n", 518 | "kys.sort()\n", 519 | "\n", 520 | "# Show the figures, combing data from the same time level\n", 521 | "for index in kys:\n", 522 | " data = d5_tl[index]\n", 523 | " print(\"iteration=\",index)\n", 524 | " plt.figure() # put this before the plots you wish to combine\n", 525 | " plt.pcolor(data[\"x\"][0],data[\"y\"][0],data[\"D\"][0],vmin=mn,vmax=mx)\n", 526 | " plt.pcolor(data[\"x\"][1],data[\"y\"][1],data[\"D\"][1],vmin=mn,vmax=mx)\n", 527 | " plt.show() # show the plot.f5 = h5py.File(\"u.xy.h5\")" 528 | ] 529 | }, 530 | { 531 | "cell_type": "markdown", 532 | "metadata": { 533 | "collapsed": true 534 | }, 535 | "source": [ 536 | "Exercise:\n", 537 | "Using the existing checkpoint files, continue the evolution to timestemp 500 and plot the results." 538 | ] 539 | }, 540 | { 541 | "cell_type": "markdown", 542 | "metadata": { 543 | "collapsed": true 544 | }, 545 | "source": [ 546 | "
This work sponsored by NSF grants OAC 1550551 and CCF 1539567
" 547 | ] 548 | }, 549 | { 550 | "cell_type": "code", 551 | "execution_count": null, 552 | "metadata": {}, 553 | "outputs": [], 554 | "source": [] 555 | } 556 | ], 557 | "metadata": { 558 | "kernelspec": { 559 | "display_name": "Python 3", 560 | "language": "python", 561 | "name": "python3" 562 | }, 563 | "language_info": { 564 | "codemirror_mode": { 565 | "name": "ipython", 566 | "version": 3 567 | }, 568 | "file_extension": ".py", 569 | "mimetype": "text/x-python", 570 | "name": "python", 571 | "nbconvert_exporter": "python", 572 | "pygments_lexer": "ipython3", 573 | "version": "3.5.3" 574 | } 575 | }, 576 | "nbformat": 4, 577 | "nbformat_minor": 2 578 | } 579 | -------------------------------------------------------------------------------- /Using-Cactus/CreatingANewThorn.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

Creating a New Thorn

\n", 8 | "A Cactus thorn requires a number of different pieces.\n", 9 | "* a Cactus thorn must have a name.\n", 10 | "* a Cactus thorn must live in an arrangement.\n", 11 | "* a Cactus thorn must have ccl (Cactus Configuration Language) files, all of which should reside in ./arrangements/ArrangementName/ThornName/\n", 12 | " * interface.ccl\n", 13 | " * schedule.ccl\n", 14 | " * param.ccl\n", 15 | " * configuration.ccl\n", 16 | "* a Cactus thorn must have a src directory\n", 17 | "* a Cactus thorn must have a make.code.defn file in that source directory" 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": null, 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "%cd ~/CactusFW2" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "execution_count": null, 32 | "metadata": {}, 33 | "outputs": [], 34 | "source": [ 35 | "# Define some basic parameters describing a new thorn\n", 36 | "thorn_pars = {\n", 37 | " \"thorn_name\" : \"EnergyCalc\",\n", 38 | " \"arrangement_name\" : \"FunwaveUtils\",\n", 39 | " \"author\" : \"Steven R. Brandt\",\n", 40 | " \"email\" : \"sbrandt@cct.lsu.edu\",\n", 41 | " \"license\" : \"BSD\"\n", 42 | " }\n", 43 | "import os\n", 44 | "os.environ[\"ARR\"]=thorn_pars[\"arrangement_name\"]\n", 45 | "os.environ[\"THORN\"]=thorn_pars[\"thorn_name\"]" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "This next command is only needed if you want to delete the thorn you are\n", 53 | "going to create below and start over." 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": null, 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "!rm -fr arrangements/$ARR/$THORN" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": {}, 68 | "source": [ 69 | "Next we define some functions to help us create files automatically." 
70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "metadata": {}, 76 | "outputs": [], 77 | "source": [ 78 | "import re\n", 79 | "# This function substitutes all occurrences of \"{name}\" in input_str with values[\"name\"]\n", 80 | "# and returns the new string.\n", 81 | "def replace_values(input_str,values):\n", 82 | " while True:\n", 83 | " g = re.search(r'{(\\w+)}',input_str)\n", 84 | " if g:\n", 85 | " var = g.group(1)\n", 86 | " if var in values:\n", 87 | " val = values[var]\n", 88 | " else:\n", 89 | " raise Exception(\"Undefined: <<\"+var+\">>\")\n", 90 | " start = g.start(0)\n", 91 | " end = start+len(g.group(0))\n", 92 | " input_str = input_str[0:start]+val+input_str[end:]\n", 93 | " continue\n", 94 | " break\n", 95 | " return input_str\n", 96 | "\n", 97 | "# Creates a file with the given name, uses replace_values()\n", 98 | "# to update file_contents with file_values, and writes it out.\n", 99 | "def create_file(file_name,file_contents,file_values):\n", 100 | " fd = open(file_name,\"w\")\n", 101 | " file_contents = replace_values(file_contents,file_values)\n", 102 | " print(\"Over-writing file '\"+file_name+\"'\")\n", 103 | " file_contents = re.sub(r'^\\s+','',file_contents)\n", 104 | " fd.write(file_contents)\n", 105 | " fd.close()\n", 106 | " \n", 107 | "# The equivalent of mkdir -p\n", 108 | "def create_dir(dir):\n", 109 | " print(\"Ensuring directory '\"+dir+\"'\")\n", 110 | " if not os.path.exists(dir):\n", 111 | " os.makedirs(dir)" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "metadata": {}, 118 | "outputs": [], 119 | "source": [ 120 | "interface_ccl_contents = \"\"\"\n", 121 | "## Interface definitions for thorn {thorn_name}\n", 122 | "inherits:\n", 123 | "## An implementation name is required for all thorns. No\n", 124 | "## two thorns in a configuration can implement the same\n", 125 | "## interface.\n", 126 | "implements: {thorn_name}\n", 127 | "\n", 128 | "## the groups declared below can be public, private, or protected.\n", 129 | "public:\n", 130 | "\n", 131 | "## A group defines a set of variables that are allocated together\n", 132 | "## and share common properties, i.e. timelevels, tags such as the\n", 133 | "## Prolongation=None tag. The type tag can take on the values\n", 134 | "## GF, Scalar, or Array.\n", 135 | "\n", 136 | "## Note that the number of timelevels can be an integer parameter\n", 137 | "## GF stands for \"Grid Function\" and refers to a distributed array\n", 138 | "## data structure.\n", 139 | "#cctk_real force_group type=GF timelevels=3 tags='Prolongation=\"None\"'\n", 140 | "#{\n", 141 | "# force1, force2\n", 142 | "#}\n", 143 | "\n", 144 | "## Scalars are single variables that are available on all processors.\n", 145 | "#cctk_real scalar_group type=SCALAR \n", 146 | "#{\n", 147 | "# scalar1, scalar2\n", 148 | "#}\n", 149 | "\"\"\"" 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [ 158 | "schedule_ccl_contents = \"\"\"\n", 159 | "## Schedule definitions for thorn {thorn_name}\n", 160 | "\n", 161 | "## There won't be any storage allocated for a group\n", 162 | "## unless a corresponding storage declaration exists\n", 163 | "## for it in the schedule file. 
In square brackets,\n", 164 | "## we specify the number of storage levels to allocate.\n", 165 | "## These commented examples correspond to the commented\n", 166 | "## examples in the interface file above.\n", 167 | "# storage: force_group[3]\n", 168 | "# storage: scalar_group\n", 169 | "\n", 170 | "## Schedule a function defined in this thorn to run at the beginning\n", 171 | "## of the simulation. The minimum you need to specify for a schedule\n", 172 | "## item is what language it's written in. Choices are: C (which includes\n", 173 | "## C++) and Fortran (which means Fortran90).\n", 174 | "#SCHEDULE init_function at CCTK_INIT\n", 175 | "#{\n", 176 | "# LANG: C\n", 177 | "#}\"Do some initial stuff\"\n", 178 | "\"\"\"" 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": null, 184 | "metadata": {}, 185 | "outputs": [], 186 | "source": [ 187 | "param_ccl_contents = \"\"\"\n", 188 | "## Parameter definitions for thorn {thorn_name}\n", 189 | "## There are five types of parameters: int, real, keyword, string, and boolean.\n", 190 | "## The comments provide prototypes of each.\n", 191 | "#\n", 192 | "#CCTK_INT one_to_five \"This integer parameter goes from 1 to 5\"\n", 193 | "#{\n", 194 | "# 1:5 :: \"Another comment\"\n", 195 | "#} 3 # This is the default value\n", 196 | "#\n", 197 | "#CCTK_REAL from_2p5_to_3p8e4 \"This integer parameter goes from 2.5 to 3.8e4\"\n", 198 | "#{\n", 199 | "# 2.5:3.8e4 :: \"Another comment\"\n", 200 | "#} 4.4e3 # This is the default value\n", 201 | "#\n", 202 | "## This keyword example defines the parameter wavemaker_type and 8 possible values.\n", 203 | "#CCTK_KEYWORD wavemaker_type \"types of wave makers\"\n", 204 | "#{\n", 205 | "# \"ini_rec\" :: \"initial rectangular hump, need xc,yc and wid\"\n", 206 | "# \"lef_sol\" :: \"initial solitary wave, WKN B solution, need amp, dep\"\n", 207 | "# \"ini_oth\" :: \"other initial distribution specified by users\"\n", 208 | "# \"wk_reg\" :: \"Wei and Kirby 1999 internal wave maker, need xc_wk, tperiod, amp_wk, dep_wk, theta_wk, and time_ramp (factor of period)\"\n", 209 | "# \"wk_irr\" :: \"Wei and Kirby 1999 TMA spectrum wavemaker, need xc_wk, dep_wk, time_ramp, delta_wk, freqpeak, freqmin, freqmax, hmo, gammatma, theta-peak\"\n", 210 | "# \"wk_time_series\" :: \"fft a time series to get each wave component and then use Wei and Kirby's ( 1999) wavemaker. 
Need input wavecompfile (including 3 columns: per,amp,pha) and numwavecomp, peakperiod, dep_wk, xc_wk, ywidth_wk\"\n", 211 | "# \"ini_gau\" :: \"initial Gaussian hump, need amp, xc, yc, and wid\"\n", 212 | "# \"ini_sol\" :: \"initial solitary wave, xwavemaker\"\n", 213 | "#}\"wk_reg\" # This is the default value\n", 214 | "#\n", 215 | "#CCTK_STRING a_string_par \"a comment\"\n", 216 | "#{\n", 217 | "# .* :: \"This is a perl 5 regular expression defining what the string may contain\"\n", 218 | "#} \"blah blah blah\" # This is the default value\n", 219 | "#\n", 220 | "#BOOLEAN a_boolean_par \"a comment\"\n", 221 | "#{\n", 222 | "#} true\n", 223 | "\n", 224 | "\n", 225 | "\"\"\"" 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": null, 231 | "metadata": {}, 232 | "outputs": [], 233 | "source": [ 234 | "configuration_ccl_contents = \"\"\"\n", 235 | "# Configuration definitions for thorn {thorn_name}\n", 236 | "## You should not need include \"mpi.h\", but if you\n", 237 | "## do, you will need this next line.\n", 238 | "# REQUIRES MPI\n", 239 | "# REQUIRES HDF5\n", 240 | "\"\"\"" 241 | ] 242 | }, 243 | { 244 | "cell_type": "code", 245 | "execution_count": null, 246 | "metadata": {}, 247 | "outputs": [], 248 | "source": [ 249 | "makefile_contents = \"\"\"\n", 250 | "# Main make.code.defn file for thorn {thorn_name}\n", 251 | "\n", 252 | "# Source files in this directory\n", 253 | "SRCS =\n", 254 | "\n", 255 | "# Subdirectories containing source files\n", 256 | "SUBDIRS =\n", 257 | "\"\"\"" 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": null, 263 | "metadata": {}, 264 | "outputs": [], 265 | "source": [ 266 | "readme_contents = \"\"\"\n", 267 | "Author(s) : {author} <{email}>\n", 268 | "Maintainer(s): {author} <{email}>\n", 269 | "Licence : {license}\n", 270 | "--------------------------------------------------------------------------\n", 271 | "\n", 272 | "1. 
Purpose\n", 273 | "\n", 274 | "not documented\n", 275 | "\"\"\"" 276 | ] 277 | }, 278 | { 279 | "cell_type": "code", 280 | "execution_count": null, 281 | "metadata": {}, 282 | "outputs": [], 283 | "source": [ 284 | "import os\n", 285 | "\n", 286 | "# This function will create a complete thorn\n", 287 | "def create_thorn():\n", 288 | " # Create the thorn directory inside the Cactus source tree\n", 289 | " arrangement_dir = \"arrangements/\"+thorn_pars[\"arrangement_name\"]\n", 290 | " thorn_dir = arrangement_dir + \"/\" + thorn_pars[\"thorn_name\"]\n", 291 | " create_dir(thorn_dir)\n", 292 | "\n", 293 | " # Create basic ccl files needed by Cactus\n", 294 | " schedule_ccl = thorn_dir+\"/schedule.ccl\"\n", 295 | " interface_ccl = thorn_dir+\"/interface.ccl\"\n", 296 | " param_ccl = thorn_dir+\"/param.ccl\"\n", 297 | " configuration_ccl = thorn_dir+\"/configuration.ccl\"\n", 298 | " create_file(schedule_ccl,schedule_ccl_contents,thorn_pars)\n", 299 | " create_file(interface_ccl,interface_ccl_contents,thorn_pars)\n", 300 | " create_file(param_ccl,param_ccl_contents,thorn_pars)\n", 301 | " create_file(configuration_ccl,configuration_ccl_contents,thorn_pars)\n", 302 | "\n", 303 | " # Create the source directory and first makefile\n", 304 | " src_dir = thorn_dir+\"/src\"\n", 305 | " create_dir(src_dir)\n", 306 | " create_file(src_dir+\"/make.code.defn\",makefile_contents,thorn_pars)\n", 307 | "\n", 308 | " # Other dirs and files not strictly needed\n", 309 | " # for compiling and running Cactus.\n", 310 | " create_file(thorn_dir+\"/README\",readme_contents,thorn_pars)\n", 311 | " \n", 312 | " test_dir = thorn_dir+\"/test\"\n", 313 | " create_dir(test_dir)\n", 314 | " par_dir = thorn_dir+\"/par\"\n", 315 | " create_dir(par_dir)\n", 316 | " doc_dir = thorn_dir+\"/doc\"\n", 317 | " create_dir(doc_dir)" 318 | ] 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": null, 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [ 326 | "create_thorn()" 327 | ] 328 | }, 329 | { 330 | "cell_type": "code", 331 | "execution_count": null, 332 | "metadata": {}, 333 | "outputs": [], 334 | "source": [ 335 | "my_thorns_contents=\"\"\"\n", 336 | "# ./configs/sim/ThornList\n", 337 | "# This file was automatically generated using the GetComponents script.\n", 338 | "\n", 339 | "!CRL_VERSION = 2.0\n", 340 | "\n", 341 | "\n", 342 | "# Component list: funwave.th\n", 343 | "\n", 344 | "!DEFINE ROOT = CactusFW2\n", 345 | "!DEFINE ARR = $ROOT/arrangements\n", 346 | "!DEFINE ET_RELEASE = trunk\n", 347 | "!DEFINE FW_RELEASE = FW_2014_05\n", 348 | "\n", 349 | "#Cactus Flesh\n", 350 | "!TARGET = $ROOT\n", 351 | "!TYPE = git\n", 352 | "!URL = https://bitbucket.org/cactuscode/cactus.git\n", 353 | "!NAME = flesh\n", 354 | "!CHECKOUT = CONTRIBUTORS COPYRIGHT doc lib Makefile src\n", 355 | "\n", 356 | "!TARGET = $ARR\n", 357 | "!TYPE = git\n", 358 | "!URL = https://bitbucket.org/stevenrbrandt/cajunwave.git\n", 359 | "!REPO_PATH= $2\n", 360 | "# Old version\n", 361 | "#!AUTH_URL = https://svn.cct.lsu.edu/repos/projects/ngchc/code/branches/$FW_RELEASE/$1/$2\n", 362 | "#!URL = https://svn.cct.lsu.edu/repos/projects/ngchc/code/branches/$FW_RELEASE/$2\n", 363 | "#!URL = https://svn.cct.lsu.edu/repos/projects/ngchc/code/CactusCoastal/$2\n", 364 | "!CHECKOUT =\n", 365 | "CactusCoastal/Funwave\n", 366 | "CactusCoastal/FunwaveMesh\n", 367 | "CactusCoastal/FunwaveCoord\n", 368 | "CactusCoastal/Tridiagonal\n", 369 | "CactusCoastal/Tridiagonal2\n", 370 | "\n", 371 | "# CactusBase thorns\n", 372 | "!TARGET = $ARR\n", 373 | 
"!TYPE = git\n", 374 | "!URL = https://bitbucket.org/cactuscode/cactusbase.git\n", 375 | "!REPO_PATH= $2\n", 376 | "!CHECKOUT =\n", 377 | "CactusBase/Boundary\n", 378 | "CactusBase/CartGrid3D\n", 379 | "CactusBase/CoordBase\n", 380 | "CactusBase/Fortran\n", 381 | "CactusBase/InitBase\n", 382 | "CactusBase/IOASCII\n", 383 | "CactusBase/IOBasic\n", 384 | "CactusBase/IOUtil\n", 385 | "CactusBase/SymBase\n", 386 | "CactusBase/Time\n", 387 | "#\n", 388 | "# CactusNumerical thorns\n", 389 | "!TARGET = $ARR\n", 390 | "!TYPE = git\n", 391 | "!URL = https://bitbucket.org/cactuscode/cactusnumerical.git\n", 392 | "!REPO_PATH= $2\n", 393 | "!CHECKOUT =\n", 394 | "!CHECKOUT =\n", 395 | "CactusNumerical/MoL\n", 396 | "CactusNumerical/LocalInterp\n", 397 | "\n", 398 | "CactusNumerical/Dissipation\n", 399 | "CactusNumerical/SpaceMask\n", 400 | "CactusNumerical/SphericalSurface\n", 401 | "CactusNumerical/LocalReduce\n", 402 | "CactusNumerical/InterpToArray\n", 403 | "\n", 404 | "!TARGET = $ARR\n", 405 | "!TYPE = git\n", 406 | "!URL = https://bitbucket.org/cactuscode/cactusutils.git\n", 407 | "!REPO_PATH= $2\n", 408 | "!CHECKOUT = CactusUtils/Accelerator CactusUtils/OpenCLRunTime\n", 409 | "CactusUtils/NaNChecker\n", 410 | "CactusUtils/Vectors\n", 411 | "CactusUtils/SystemTopology\n", 412 | "\n", 413 | "# Carpet, the AMR driver\n", 414 | "!TARGET = $ARR\n", 415 | "!TYPE = git\n", 416 | "!URL = https://bitbucket.org/eschnett/carpet.git\n", 417 | "!REPO_PATH= $2\n", 418 | "!CHECKOUT = Carpet/doc\n", 419 | "Carpet/Carpet\n", 420 | "Carpet/CarpetEvolutionMask\n", 421 | "Carpet/CarpetIOASCII\n", 422 | "Carpet/CarpetIOBasic\n", 423 | "Carpet/CarpetIOHDF5\n", 424 | "Carpet/CarpetIOScalar\n", 425 | "#Carpet/CarpetIntegrateTest\n", 426 | "Carpet/CarpetInterp\n", 427 | "Carpet/CarpetInterp2\n", 428 | "Carpet/CarpetLib\n", 429 | "Carpet/CarpetMask\n", 430 | "#Carpet/CarpetProlongateTest\n", 431 | "Carpet/CarpetReduce\n", 432 | "Carpet/CarpetRegrid\n", 433 | "Carpet/CarpetRegrid2\n", 434 | "#Carpet/CarpetRegridTest\n", 435 | "Carpet/CarpetSlab\n", 436 | "Carpet/CarpetTracker\n", 437 | "Carpet/CycleClock\n", 438 | "#Carpet/HighOrderWaveTest\n", 439 | "Carpet/LoopControl\n", 440 | "#Carpet/ReductionTest\n", 441 | "#Carpet/ReductionTest2\n", 442 | "#Carpet/ReductionTest3\n", 443 | "#Carpet/RegridSyncTest\n", 444 | "Carpet/TestCarpetGridInfo\n", 445 | "Carpet/TestLoopControl\n", 446 | "Carpet/Timers\n", 447 | "\n", 448 | "# Additional Cactus thorns\n", 449 | "!TARGET = $ARR\n", 450 | "!TYPE = svn\n", 451 | "!URL = https://svn.cactuscode.org/projects/$1/$2/trunk\n", 452 | "!CHECKOUT = ExternalLibraries/OpenBLAS ExternalLibraries/OpenCL ExternalLibraries/pciutils ExternalLibraries/PETSc\n", 453 | "ExternalLibraries/MPI\n", 454 | "ExternalLibraries/HDF5\n", 455 | "ExternalLibraries/zlib\n", 456 | "ExternalLibraries/hwloc\n", 457 | "\n", 458 | "# Simulation Factory\n", 459 | "!TARGET = $ROOT/simfactory\n", 460 | "!TYPE = git\n", 461 | "!URL = https://bitbucket.org/simfactory/simfactory2.git\n", 462 | "!NAME = simfactory2\n", 463 | "!CHECKOUT = README.md README_FIRST.txt bin doc etc lib mdb\n", 464 | "\n", 465 | "# Various thorns from LSU\n", 466 | "#!TARGET = $ARR\n", 467 | "#!TYPE = git\n", 468 | "#!URL = https://bitbucket.org/einsteintoolkit/archivedthorns-vectors.git\n", 469 | "#!REPO_PATH= $2\n", 470 | "#!CHECKOUT =\n", 471 | "#LSUThorns/Vectors\n", 472 | "#LSUThorns/QuasiLocalMeasures\n", 473 | "#LSUThorns/SummationByParts\n", 474 | "#LSUThorns/Prolong\n", 475 | "\n", 476 | "#Roland/MapPoints\n", 477 | 
"#Tutorial/BadWaveMoL\n", 478 | "#Tutorial/BasicWave\n", 479 | "#Tutorial/BasicWave2\n", 480 | "#Tutorial/BasicWave3\n", 481 | "\n", 482 | "# Various thorns from the AEI\n", 483 | "# Numerical\n", 484 | "!TARGET = $ARR\n", 485 | "!TYPE = git\n", 486 | "!URL = https://bitbucket.org/cactuscode/numerical.git\n", 487 | "!REPO_PATH= $2\n", 488 | "!CHECKOUT =\n", 489 | "#AEIThorns/ADMMass\n", 490 | "AEIThorns/AEILocalInterp\n", 491 | "#AEIThorns/PunctureTracker\n", 492 | "#AEIThorns/SystemStatistics\n", 493 | "#AEIThorns/Trigger\n", 494 | "{arrangement_name}/{thorn_name}\"\"\"\n", 495 | "create_file(\"my_thorns.th\",my_thorns_contents,thorn_pars)" 496 | ] 497 | }, 498 | { 499 | "cell_type": "code", 500 | "execution_count": null, 501 | "metadata": {}, 502 | "outputs": [], 503 | "source": [ 504 | "!time ./simfactory/bin/sim build --mdbkey make 'make -j2' --thornlist=./my_thorns.th|cat -" 505 | ] 506 | }, 507 | { 508 | "cell_type": "markdown", 509 | "metadata": {}, 510 | "source": [ 511 | "Congrats! You've created an empty thorn." 512 | ] 513 | }, 514 | { 515 | "cell_type": "markdown", 516 | "metadata": { 517 | "collapsed": true 518 | }, 519 | "source": [ 520 | "Questions:\n", 521 | "* What does ccl stand for?\n", 522 | "* What are the basic types of ccl files?\n", 523 | "* What is a grid function?\n", 524 | "* In which file are grid functions declared?\n", 525 | "* How is storage allocated for a grid function?\n", 526 | "* If your thorn isn't getting compiled, what did you forget to do?\n", 527 | "* If a source file that you created isn't getting compiled, what did you forget to do?" 528 | ] 529 | }, 530 | { 531 | "cell_type": "markdown", 532 | "metadata": { 533 | "collapsed": true 534 | }, 535 | "source": [ 536 | "
This work sponsored by NSF grants OAC 1550551 and CCF 1539567
" 537 | ] 538 | }, 539 | { 540 | "cell_type": "code", 541 | "execution_count": null, 542 | "metadata": {}, 543 | "outputs": [], 544 | "source": [] 545 | } 546 | ], 547 | "metadata": { 548 | "kernelspec": { 549 | "display_name": "Python 3", 550 | "language": "python", 551 | "name": "python3" 552 | }, 553 | "language_info": { 554 | "codemirror_mode": { 555 | "name": "ipython", 556 | "version": 3 557 | }, 558 | "file_extension": ".py", 559 | "mimetype": "text/x-python", 560 | "name": "python", 561 | "nbconvert_exporter": "python", 562 | "pygments_lexer": "ipython3", 563 | "version": "3.5.3" 564 | } 565 | }, 566 | "nbformat": 4, 567 | "nbformat_minor": 2 568 | } 569 | -------------------------------------------------------------------------------- /Using-Cactus/CreatingANewThorn-Part2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "

Creating a New Thorn, Part 2

\n", 8 | "So far we've created an empty thorn which does nothing. Never before did doing nothing feel like such an accomplishment!\n", 9 | "\n", 10 | "We can use it as a template for something more useful. To that end, we're going to create a thorn which computes an \"energy,\" the sum of the squares of the wave velocities at each point on the grid. First, we create the source file itself. It's simple enough." 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": null, 16 | "metadata": {}, 17 | "outputs": [], 18 | "source": [ 19 | "%cd ~/CactusFW2" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "%%writefile ./arrangements/FunwaveUtils/EnergyCalc/src/energy.cc\n", 29 | "// We pretty much always want to include these 3 headers\n", 30 | "#include \n", 31 | "#include \n", 32 | "#include \n", 33 | "\n", 34 | "void compute_energy(CCTK_ARGUMENTS) // Cactus functions always have this prototype\n", 35 | "{\n", 36 | " DECLARE_CCTK_ARGUMENTS; // Declare all grid functions (from interface.ccl)\n", 37 | " DECLARE_CCTK_PARAMETERS; // Declare all parameters (from param.ccl)\n", 38 | " \n", 39 | " // Note that even though this is really a 2-d calculation, Cactus\n", 40 | " // thinks of it as 3-d with 1 zone in the z direction.\n", 41 | " for(int k=0;k\n", 610 | " int cc = CCTK_GFINDEX3D(cctkGH,i,j,k)\n", 611 | " int cp1 = CCTK_GFINDEX3D(cctkGH,i+1,j,k)\n", 612 | " CCTK_REAL fx = f[cc]; // If this is f[x]\n", 613 | " CCTK_REAL fx1 = f[cp1]; // this is f[x+dx]\n", 614 | "\n", 615 | "\n", 616 | "Cactus provides an additional array of integers, like cctk_lsh, called cctk_delta_space, which provides the quantities dx, dy, and dz (these are cctk_delta_space[0], cctk_delta_space[1] and cctk_delta_space[2], respecitively).\n", 617 | "\n", 618 | "Because this is a 2-d code, the Laplacian is\n", 619 | "\n", 620 | "$\\Delta^2 \\eta = \\left( \\frac{d^2}{dx^2} + \\frac{d^2}{dy^2} \\right) \\eta$\n", 621 | "\n", 622 | "Note that you will not be able to calculate the value of the Laplacian at the borders of the grid, as that would result in a segfault. Please write zeroes in the borders instead.\n", 623 | "\n", 624 | "Note also that Funwave defines grid variables dx and dy. You can use dx[cc] (where cc = CCTK_GFINDEX3D(cctkGH,i,j,k)) in place of cctk_delta_space[0] if you want to. You can't, however, redefined dx or dy." 625 | ] 626 | }, 627 | { 628 | "cell_type": "markdown", 629 | "metadata": { 630 | "collapsed": true 631 | }, 632 | "source": [ 633 | "
This work sponsored by NSF grants OAC 1550551 and CCF 1539567
" 634 | ] 635 | }, 636 | { 637 | "cell_type": "code", 638 | "execution_count": null, 639 | "metadata": {}, 640 | "outputs": [], 641 | "source": [] 642 | } 643 | ], 644 | "metadata": { 645 | "kernelspec": { 646 | "display_name": "Python 3", 647 | "language": "python", 648 | "name": "python3" 649 | }, 650 | "language_info": { 651 | "codemirror_mode": { 652 | "name": "ipython", 653 | "version": 3 654 | }, 655 | "file_extension": ".py", 656 | "mimetype": "text/x-python", 657 | "name": "python", 658 | "nbconvert_exporter": "python", 659 | "pygments_lexer": "ipython3", 660 | "version": "3.5.3" 661 | } 662 | }, 663 | "nbformat": 4, 664 | "nbformat_minor": 2 665 | } 666 | -------------------------------------------------------------------------------- /Numerical-Methods/02-hyperbolic-pdes.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Hyperbolic PDEs" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "Most formulations of the Einstein equations for the spacetime (with $c=1$) look roughly like *wave equations*\n", 15 | "\n", 16 | "$$\n", 17 | "\\frac{\\partial^2 \\phi}{\\partial t^2} = \\nabla^2 \\phi.\n", 18 | "$$\n", 19 | "\n", 20 | "We will focus on the simple $1+1$d case\n", 21 | "\n", 22 | "$$\n", 23 | "\\frac{\\partial^2 \\phi}{\\partial t^2} = \\frac{\\partial^2 \\phi}{\\partial x^2}.\n", 24 | "$$\n", 25 | "\n", 26 | "For numerical evolution we either write this as first order in time,\n", 27 | "\n", 28 | "$$\n", 29 | "\\frac{\\partial}{\\partial t} \\begin{pmatrix} \\phi \\\\ \\phi_t \\end{pmatrix} = \\begin{pmatrix} \\phi_t \\\\ 0 \\end{pmatrix} + \\frac{\\partial^2}{\\partial x^2} \\begin{pmatrix} 0 \\\\ \\phi \\end{pmatrix},\n", 30 | "$$\n", 31 | "\n", 32 | "or as first order in time *and* space\n", 33 | "\n", 34 | "$$\n", 35 | "\\frac{\\partial}{\\partial t} \\begin{pmatrix} \\phi \\\\ \\phi_t \\\\ \\phi_x \\end{pmatrix} = \\begin{pmatrix} \\phi_t \\\\ 0 \\\\ 0 \\end{pmatrix} + \\frac{\\partial}{\\partial x} \\begin{pmatrix} 0 \\\\ \\phi_x \\\\ \\phi_t \\end{pmatrix}.\n", 36 | "$$\n", 37 | "\n", 38 | "We will first focus on the first order form, written as\n", 39 | "\n", 40 | "$$\n", 41 | "\\partial_t {\\bf u} = {\\bf s} + \\partial_x {\\bf f}({\\bf u}).\n", 42 | "$$" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "## Method of Lines" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "We have already used our finite difference approximations to replace partial derivatives (usually in space) with discrete approximations. We could use finite difference approximations directly here, replacing both time and space derivatives. However, an alternative approach is standard in Numerical Relativity.\n", 57 | "\n", 58 | "First we put down a grid *in space* $\\{ x_i \\}$ for which we have values $\\{ {\\bf u}_i \\}$. All these values can be thought of as one large vector ${\\bf U}$. 
Now, on this grid we can use our finite difference approximation to replace the partial derivatives in space, which we write in discrete operator form as\n", 59 | "\n", 60 | "$$\n", 61 | " \\partial_x {\\bf f}({\\bf u}) \\to L({\\bf U}).\n", 62 | "$$\n", 63 | "\n", 64 | "The operator takes the discrete values $\\{ {\\bf u}_i \\}$ and combines them, using the finite differencing formulas, to approximate the partial derivative required.\n", 65 | "\n", 66 | "This means we have converted the original *partial* differential equation into a system of *ordinary* differential equations\n", 67 | "\n", 68 | "$$\n", 69 | " \\frac{d}{d t} {\\bf U} = {\\bf s}({\\bf U}) + L({\\bf U}) = {\\bf F}({\\bf U}).\n", 70 | "$$\n", 71 | "\n", 72 | "We can then solve this ODE as an initial value problem by specifying\n", 73 | "the state of the system ${\\bf U}$ at some time $t_0$ and evolving forward in time." 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "### (Dis)advantages" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "The Method of Lines (MoL) allows minimally coupled physical systems, such as GRMHD, to be split up into multiple pieces, which can be more easily tested, with a broader variety of numerical methods applied, and more straightforwardly have their stability checked.\n", 88 | "\n", 89 | "However, it is typical that numerical methods that use MoL cannot easily take advantage of all the physical information in the system, may require smaller timesteps, may be less efficient, and may have less accuracy. Before worrying about this too much, check whether your time (in implementing a more efficient method) is worth less than the computer's (which will do the extra computation)." 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "### Runge-Kutta methods" 97 | ] 98 | }, 99 | { 100 | "cell_type": "markdown", 101 | "metadata": {}, 102 | "source": [ 103 | "When looking at central differencing earlier we used information from both sides of the point where we took the derivative. This gives higher accuracy, but isn't helpful in the initial value case, where we don't have half the information.\n", 104 | "\n", 105 | "The simplest approach is to use the forward Euler method\n", 106 | "\n", 107 | "$$\n", 108 | " {\\bf U}^{(i+1)} = {\\bf U}^{(i)} + \\Delta t \\,{\\bf F}\\left ({\\bf U}^{(i)}\\right ).\n", 109 | "$$\n", 110 | "\n", 111 | "However, this is very inaccurate. Fortunately, higher order methods can be constructed with multiple Euler steps as building blocks. Each one gives an approximation to \"future\" data, which can be used to approximate the derivative at more locations.\n", 112 | "\n", 113 | "For example, the Euler step above starts from ${\\bf U}^{(i)}$ and computes ${\\bf F}\\left ({\\bf U}^{(i)}\\right )$ to approximate ${\\bf U}^{(i+1)}$. We can use this approximation to give us ${\\bf F}\\left ({\\bf U}^{(i+1)}\\right)$.\n", 114 | "\n", 115 | "Now, a more accurate solution would be\n", 116 | "\n", 117 | "$$\n", 118 | " {\\bf U}^{(i+1)} = {\\bf U}^{(i)} + \\int_{t_i}^{t_{i+1}} \\text{d} t \\,{\\bf F}\\left ({\\bf U}(t)\\right ).\n", 119 | "$$\n", 120 | "\n", 121 | "In Euler's method we are effectively representing the value of the integral by the value of the integrand at the start, multiplied by the width $\\Delta t$. 
We could now approximate it by the *average* value of the integrand, $\\left ({\\bf F}^{(i)} + {\\bf F}^{(i+1)}\\right )/2$, multiplied by the width $\\Delta t$. This gives the algorithm\n", 122 | "\n", 123 | "\\begin{align}\n", 124 | " {\\bf U}^{(p)} &= {\\bf U}^{(i)} + \\Delta t\\, {\\bf F}\\left ({\\bf U}^{(i)}\\right ), \\\\\n", 125 | " {\\bf U}^{(i+1)} &= {\\bf U}^{(i)} + \\frac{\\Delta t}{2} \\left( {\\bf F}\\left ({\\bf U}^{(i)}\\right ) + {\\bf F}\\left ({\\bf U}^{(p)}\\right ) \\right) \\\\\n", 126 | " &= \\frac{1}{2} \\left( {\\bf U}^{(i)} + {\\bf U}^{(p)} + \\Delta t \\, {\\bf F}\\left({\\bf U}^{(p)}\\right ) \\right).\n", 127 | "\\end{align}\n", 128 | "\n", 129 | "The final re-arrangement ensures we do not have to store or re-compute ${\\bf F}^{(i)}$. This is one of the *Runge-Kutta* methods. This version is second order accurate, and a big improvement over Euler's method." 130 | ] 131 | }, 132 | { 133 | "cell_type": "markdown", 134 | "metadata": {}, 135 | "source": [ 136 | "# Implementation" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": null, 142 | "metadata": { 143 | "collapsed": true 144 | }, 145 | "outputs": [], 146 | "source": [ 147 | "import numpy\n", 148 | "from matplotlib import pyplot\n", 149 | "%matplotlib notebook" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "We start by implementing the right-hand-side of the evolution: the source term, and the term corresponding to the partial derivative in space:" 157 | ] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "execution_count": null, 162 | "metadata": { 163 | "collapsed": true 164 | }, 165 | "outputs": [], 166 | "source": [ 167 | "def RHS(U, dx):\n", 168 | " \"\"\"\n", 169 | " RHS term.\n", 170 | " \n", 171 | " Parameters\n", 172 | " ----------\n", 173 | " \n", 174 | " U : array\n", 175 | " contains [phi, phi_t, phi_x] at each point\n", 176 | " dx : double\n", 177 | " grid spacing\n", 178 | " \n", 179 | " Returns\n", 180 | " -------\n", 181 | " \n", 182 | " dUdt : array\n", 183 | " contains the required time derivatives\n", 184 | " \"\"\"\n", 185 | " \n", 186 | " #" 187 | ] 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "metadata": {}, 192 | "source": [ 193 | "We see that this doesn't give us the update term at the edges of the domain. We'll enforce that the domain is *periodic* as a simple boundary condition. Usually this would be an outgoing wave type boundary condition, but anything that fixes the update term at the boundary is fine." 194 | ] 195 | }, 196 | { 197 | "cell_type": "code", 198 | "execution_count": null, 199 | "metadata": { 200 | "collapsed": true 201 | }, 202 | "outputs": [], 203 | "source": [ 204 | "def apply_boundaries(dUdt):\n", 205 | " \"\"\"\n", 206 | " Periodic boundaries\n", 207 | " \"\"\"\n", 208 | " \n", 209 | " #" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "metadata": {}, 215 | "source": [ 216 | "Then we fix the grid. To work with the periodic domain we need to stagger the grid away from the boundaries. 
We'll fix the domain to be $x \\in [-1, 1]$:" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": null, 222 | "metadata": { 223 | "collapsed": true 224 | }, 225 | "outputs": [], 226 | "source": [ 227 | "def grid(Npoints):\n", 228 | " \"\"\"\n", 229 | " Npoints is the number of interior points\n", 230 | " \"\"\"\n", 231 | " \n", 232 | " dx = 2.0 / Npoints\n", 233 | " return dx, numpy.linspace(-1.0-dx/2.0, 1.0+dx/2.0, Npoints+2)" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": {}, 239 | "source": [ 240 | "We take the RK2 method from earlier. This will take only one step but requires two RHS evaluations." 241 | ] 242 | }, 243 | { 244 | "cell_type": "code", 245 | "execution_count": null, 246 | "metadata": { 247 | "collapsed": true 248 | }, 249 | "outputs": [], 250 | "source": [ 251 | "def RK2_step(U, RHS, apply_boundaries, dt, dx):\n", 252 | " \"\"\"\n", 253 | " RK2 method\n", 254 | " \"\"\"\n", 255 | " \n", 256 | " #" 257 | ] 258 | }, 259 | { 260 | "cell_type": "markdown", 261 | "metadata": {}, 262 | "source": [ 263 | "There are only two things we need to fix. One is the timestep. For now, we'll set it to $\\Delta t = \\Delta x / 4$. The second is the initial data. We will choose the initial data to be a time symmetric gaussian,\n", 264 | "\n", 265 | "$$\n", 266 | "\\phi(0, x) = \\exp \\left( -20 x^2 \\right), \\qquad \\partial_t \\phi (0, x) = 0,\n", 267 | "$$\n", 268 | "\n", 269 | "which implies\n", 270 | "\n", 271 | "$$\n", 272 | "\\partial_x \\phi(0, x) = -40 x \\exp \\left( -20 x^2 \\right).\n", 273 | "$$" 274 | ] 275 | }, 276 | { 277 | "cell_type": "code", 278 | "execution_count": null, 279 | "metadata": { 280 | "collapsed": true 281 | }, 282 | "outputs": [], 283 | "source": [ 284 | "def initial_data(x):\n", 285 | " \"\"\"\n", 286 | " Set the initial data. x are the coordinates. U (phi, phi_t, phi_x) are the variables.\n", 287 | " \"\"\"\n", 288 | " \n", 289 | " U = numpy.zeros((3, len(x)))\n", 290 | " U[0, :] = numpy.exp(-20.0 * x**2)\n", 291 | " U[2, :] = -40.0*x*numpy.exp(-20.0 * x**2)\n", 292 | " \n", 293 | " return U" 294 | ] 295 | }, 296 | { 297 | "cell_type": "markdown", 298 | "metadata": {}, 299 | "source": [ 300 | "Now we can evolve:" 301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "metadata": { 307 | "collapsed": true 308 | }, 309 | "outputs": [], 310 | "source": [ 311 | "Npoints = 50\n", 312 | "dx, x = grid(Npoints)\n", 313 | "dt = dx / 4\n", 314 | "U0 = initial_data(x)\n", 315 | "U = initial_data(x)\n", 316 | "Nsteps = int(1.0 / dt)\n", 317 | "for n in range(Nsteps):\n", 318 | " #" 319 | ] 320 | }, 321 | { 322 | "cell_type": "code", 323 | "execution_count": null, 324 | "metadata": { 325 | "collapsed": true 326 | }, 327 | "outputs": [], 328 | "source": [ 329 | "pyplot.figure()\n", 330 | "pyplot.plot(x, U0[0, :], 'b--', label=\"Initial data\")\n", 331 | "pyplot.plot(x, U[0, :], 'k-', label=r\"$t=1$\")\n", 332 | "pyplot.xlabel(r\"$x$\")\n", 333 | "pyplot.ylabel(r\"$\\phi$\")\n", 334 | "pyplot.xlim(-1, 1)\n", 335 | "pyplot.legend()\n", 336 | "pyplot.show()" 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "We can see the expected behaviour: the initial data splits into two pulses that propagate in opposite directions. 
With periodic boundary conditions, we can evolve to $t=2$ and we should get the initial data back again:" 344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": null, 349 | "metadata": { 350 | "collapsed": true 351 | }, 352 | "outputs": [], 353 | "source": [ 354 | "Npoints = 50\n", 355 | "dx, x = grid(Npoints)\n", 356 | "dt = dx / 4\n", 357 | "U0 = initial_data(x)\n", 358 | "U = initial_data(x)\n", 359 | "Nsteps = int(2.0 / dt)\n", 360 | "for n in range(Nsteps):\n", 361 | " #\n", 362 | " \n", 363 | "pyplot.figure()\n", 364 | "pyplot.plot(x, U0[0, :], 'b--', label=\"Initial data\")\n", 365 | "pyplot.plot(x, U[0, :], 'k-', label=r\"$t=2$\")\n", 366 | "pyplot.xlabel(r\"$x$\")\n", 367 | "pyplot.ylabel(r\"$\\phi$\")\n", 368 | "pyplot.xlim(-1, 1)\n", 369 | "pyplot.legend()\n", 370 | "pyplot.show()" 371 | ] 372 | }, 373 | { 374 | "cell_type": "markdown", 375 | "metadata": {}, 376 | "source": [ 377 | "### Convergence" 378 | ] 379 | }, 380 | { 381 | "cell_type": "markdown", 382 | "metadata": {}, 383 | "source": [ 384 | "We can now simply check convergence, by taking the norm of the difference between the initial solution and the solution at $t=2$:" 385 | ] 386 | }, 387 | { 388 | "cell_type": "code", 389 | "execution_count": null, 390 | "metadata": { 391 | "collapsed": true 392 | }, 393 | "outputs": [], 394 | "source": [ 395 | "def error_norms(U, U_initial):\n", 396 | " \"\"\"\n", 397 | " Error norms (1, 2, infinity)\n", 398 | " \"\"\"\n", 399 | " \n", 400 | " N = len(U)\n", 401 | " error_1 = numpy.sum(numpy.abs(U - U_initial))/N\n", 402 | " error_2 = numpy.sqrt(numpy.sum((U - U_initial)**2)/N)\n", 403 | " error_inf = numpy.max(numpy.abs(U - U_initial))\n", 404 | " \n", 405 | " return error_1, error_2, error_inf" 406 | ] 407 | }, 408 | { 409 | "cell_type": "code", 410 | "execution_count": null, 411 | "metadata": { 412 | "collapsed": true 413 | }, 414 | "outputs": [], 415 | "source": [ 416 | "Npoints_all = 50 * 2**(numpy.arange(0, 6))\n", 417 | "\n", 418 | "dxs = numpy.zeros((len(Npoints_all,)))\n", 419 | "wave_errors = numpy.zeros((3, len(Npoints_all)))\n", 420 | "\n", 421 | "for i, Npoints in enumerate(Npoints_all):\n", 422 | " dx, x = grid(Npoints)\n", 423 | " dt = dx / 4\n", 424 | " U0 = initial_data(x)\n", 425 | " U = initial_data(x)\n", 426 | " Nsteps = int(2.0 / dt)\n", 427 | " for n in range(Nsteps):\n", 428 | " #\n", 429 | "\n", 430 | " dxs[i] = dx\n", 431 | " wave_errors[:, i] = error_norms(U[0, :], U0[0, :])" 432 | ] 433 | }, 434 | { 435 | "cell_type": "code", 436 | "execution_count": null, 437 | "metadata": { 438 | "collapsed": true 439 | }, 440 | "outputs": [], 441 | "source": [ 442 | "pyplot.figure()\n", 443 | "pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r\"${\\cal E}_1$\")\n", 444 | "pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r\"${\\cal E}_2$\")\n", 445 | "pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r\"${\\cal E}_{\\infty}$\")\n", 446 | "pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**4, 'k-', label=r\"$p=4$\")\n", 447 | "pyplot.xlabel(r\"$\\Delta x$\")\n", 448 | "pyplot.ylabel(\"Error norm\")\n", 449 | "pyplot.legend(loc=\"lower right\")\n", 450 | "pyplot.show()" 451 | ] 452 | }, 453 | { 454 | "cell_type": "markdown", 455 | "metadata": {}, 456 | "source": [ 457 | "This fourth order convergence is an artefact of the initial data and boundary conditions, which are perfectly symmetric. 
If we change the initial data to make it asymmetric, we'll get something much closer to second order:" 458 | ] 459 | }, 460 | { 461 | "cell_type": "code", 462 | "execution_count": null, 463 | "metadata": { 464 | "collapsed": true 465 | }, 466 | "outputs": [], 467 | "source": [ 468 | "def initial_data_asymmetric(x):\n", 469 | " \"\"\"\n", 470 | " Set the initial data. x are the coordinates. U (phi, phi_t, phi_x) are the variables.\n", 471 | " \"\"\"\n", 472 | " \n", 473 | " U = numpy.zeros((3, len(x)))\n", 474 | " U[0, :] = numpy.sin(numpy.pi*x)*(1-x)**2*(1+x)**3\n", 475 | " U[2, :] = numpy.pi*numpy.cos(numpy.pi*x)*(1-x)**2*(1+x)**3 + numpy.sin(numpy.pi*x)*(2.0*(1-x)*(1+x)**3 + 3.0*(1-x)**2*(1+x)**2)\n", 476 | " \n", 477 | " return U" 478 | ] 479 | }, 480 | { 481 | "cell_type": "code", 482 | "execution_count": null, 483 | "metadata": { 484 | "collapsed": true 485 | }, 486 | "outputs": [], 487 | "source": [ 488 | "Npoints_all = 50 * 2**(numpy.arange(0, 6))\n", 489 | "\n", 490 | "dxs = numpy.zeros((len(Npoints_all,)))\n", 491 | "wave_errors = numpy.zeros((3, len(Npoints_all)))\n", 492 | "\n", 493 | "for i, Npoints in enumerate(Npoints_all):\n", 494 | " dx, x = grid(Npoints)\n", 495 | " dt = dx / 4\n", 496 | " U0 = initial_data_asymmetric(x)\n", 497 | " U = initial_data_asymmetric(x)\n", 498 | " Nsteps = int(2.0 / dt)\n", 499 | " for n in range(Nsteps):\n", 500 | " #\n", 501 | " \n", 502 | " dxs[i] = dx\n", 503 | " wave_errors[:, i] = error_norms(U[0, :], U0[0, :])" 504 | ] 505 | }, 506 | { 507 | "cell_type": "code", 508 | "execution_count": null, 509 | "metadata": { 510 | "collapsed": true 511 | }, 512 | "outputs": [], 513 | "source": [ 514 | "pyplot.figure()\n", 515 | "pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r\"${\\cal E}_1$\")\n", 516 | "pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r\"${\\cal E}_2$\")\n", 517 | "pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r\"${\\cal E}_{\\infty}$\")\n", 518 | "pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**2, 'k-', label=r\"$p=2$\")\n", 519 | "pyplot.xlabel(r\"$\\Delta x$\")\n", 520 | "pyplot.ylabel(\"Error norm\")\n", 521 | "pyplot.legend(loc=\"lower right\")\n", 522 | "pyplot.show()" 523 | ] 524 | }, 525 | { 526 | "cell_type": "markdown", 527 | "metadata": {}, 528 | "source": [ 529 | "## Courant limits" 530 | ] 531 | }, 532 | { 533 | "cell_type": "markdown", 534 | "metadata": {}, 535 | "source": [ 536 | "We restricted the timestep to $\\Delta t = \\sigma \\Delta x$ with $\\sigma$, the *Courant number*, being $1/4$. As the number of timesteps we take is inversely related to the Courant number, we want to make it as large as possible. 
\n", 537 | "\n", 538 | "Let's try the evolution with Courant number $\\sigma=1$:" 539 | ] 540 | }, 541 | { 542 | "cell_type": "code", 543 | "execution_count": null, 544 | "metadata": { 545 | "collapsed": true 546 | }, 547 | "outputs": [], 548 | "source": [ 549 | "Npoints = 50\n", 550 | "dx, x = grid(Npoints)\n", 551 | "dt = dx\n", 552 | "U0 = initial_data(x)\n", 553 | "U = initial_data(x)\n", 554 | "Nsteps = int(2.0/dt)\n", 555 | "for n in range(Nsteps):\n", 556 | " #\n", 557 | " \n", 558 | "pyplot.figure()\n", 559 | "pyplot.plot(x, U0[0, :], 'b--', label=\"Initial data\")\n", 560 | "pyplot.plot(x, U[0, :], 'k-', label=r\"$t=2$\")\n", 561 | "pyplot.xlabel(r\"$x$\")\n", 562 | "pyplot.ylabel(r\"$\\phi$\")\n", 563 | "pyplot.xlim(-1, 1)\n", 564 | "pyplot.legend()\n", 565 | "pyplot.show()" 566 | ] 567 | }, 568 | { 569 | "cell_type": "markdown", 570 | "metadata": {}, 571 | "source": [ 572 | "The result doesn't look too bad, but the numerical approximation is actually *bigger* than the correct solution. What happens as we increase resolution?" 573 | ] 574 | }, 575 | { 576 | "cell_type": "code", 577 | "execution_count": null, 578 | "metadata": { 579 | "collapsed": true 580 | }, 581 | "outputs": [], 582 | "source": [ 583 | "Npoints = 200\n", 584 | "dx, x = grid(Npoints)\n", 585 | "dt = dx\n", 586 | "U0 = initial_data(x)\n", 587 | "U = initial_data(x)\n", 588 | "Nsteps = int(2.0/dt)\n", 589 | "for n in range(Nsteps):\n", 590 | " #\n", 591 | " \n", 592 | "pyplot.figure()\n", 593 | "pyplot.plot(x, U0[0, :], 'b--', label=\"Initial data\")\n", 594 | "pyplot.plot(x, U[0, :], 'k-', label=r\"$t=2$\")\n", 595 | "pyplot.xlabel(r\"$x$\")\n", 596 | "pyplot.ylabel(r\"$\\phi$\")\n", 597 | "pyplot.xlim(-1, 1)\n", 598 | "pyplot.legend()\n", 599 | "pyplot.show()" 600 | ] 601 | }, 602 | { 603 | "cell_type": "markdown", 604 | "metadata": {}, 605 | "source": [ 606 | "The bulk of the solution looks good, but there's small oscillations at the edges. Increase resolution a bit further:" 607 | ] 608 | }, 609 | { 610 | "cell_type": "code", 611 | "execution_count": null, 612 | "metadata": { 613 | "collapsed": true 614 | }, 615 | "outputs": [], 616 | "source": [ 617 | "Npoints = 400\n", 618 | "dx, x = grid(Npoints)\n", 619 | "dt = dx\n", 620 | "U0 = initial_data(x)\n", 621 | "U = initial_data(x)\n", 622 | "Nsteps = int(2.0/dt)\n", 623 | "for n in range(Nsteps):\n", 624 | " #\n", 625 | " \n", 626 | "pyplot.figure()\n", 627 | "pyplot.plot(x, U0[0, :], 'b--', label=\"Initial data\")\n", 628 | "pyplot.plot(x, U[0, :], 'k-', label=r\"$t=2$\")\n", 629 | "pyplot.xlabel(r\"$x$\")\n", 630 | "pyplot.ylabel(r\"$\\phi$\")\n", 631 | "pyplot.xlim(-1, 1)\n", 632 | "pyplot.legend()\n", 633 | "pyplot.show()" 634 | ] 635 | }, 636 | { 637 | "cell_type": "markdown", 638 | "metadata": {}, 639 | "source": [ 640 | "The result has blown up. We won't be seeing any convergence in this case.\n", 641 | "\n", 642 | "For hyperbolic PDEs there is a Courant *limit*: a maximum timestep that is consistent with stability. This depends on the physics and the numerical method chosen. Typically a maximum limit is\n", 643 | "\n", 644 | "$$\n", 645 | " \\sigma < \\frac{1}{\\sqrt{D} \\lambda_{\\text{max}}}\n", 646 | "$$\n", 647 | "\n", 648 | "where $D$ is the number of spatial dimensions and $\\lambda_{\\text{max}}$ the maximum speed of information propagation (ie, the speed of light)." 
649 | ] 650 | }, 651 | { 652 | "cell_type": "markdown", 653 | "metadata": {}, 654 | "source": [ 655 | "# Extension exercises" 656 | ] 657 | }, 658 | { 659 | "cell_type": "markdown", 660 | "metadata": {}, 661 | "source": [ 662 | "##### Second order in space\n", 663 | "\n", 664 | "Implement a MoL solution to the wave equation using the second order in space form\n", 665 | "\n", 666 | "$$\n", 667 | "\\frac{\\partial}{\\partial t} \\begin{pmatrix} \\phi \\\\ \\phi_t \\end{pmatrix} = \\begin{pmatrix} \\phi_t \\\\ 0 \\end{pmatrix} + \\frac{\\partial^2}{\\partial x^2} \\begin{pmatrix} 0 \\\\ \\phi \\end{pmatrix}.\n", 668 | "$$\n", 669 | "\n", 670 | "This is more closely related to the BSSNOK formulation. Check convergence on the cases above. Compare the accuracy and efficiency of the approaches." 671 | ] 672 | }, 673 | { 674 | "cell_type": "markdown", 675 | "metadata": {}, 676 | "source": [ 677 | "##### Fourth order differencing\n", 678 | "\n", 679 | "Implement fourth order spatial differencing. You will need to change the boundary conditions - think how this should be done. Compare the results, and check the convergence rate. Should you expect fourth order convergence?" 680 | ] 681 | }, 682 | { 683 | "cell_type": "markdown", 684 | "metadata": {}, 685 | "source": [ 686 | "##### Third order in time\n", 687 | "\n", 688 | "If you find that the method for time integration is a limitation, try implementing the third order Runge-Kutta method\n", 689 | "\n", 690 | "\\begin{align}\n", 691 | " {\\bf U}^{(p_1)} &= {\\bf U}^{(n)} + \\Delta t \\, {\\bf F} \\left( {\\bf U}^{(n)}, t^n \\right), \\\\\n", 692 | " {\\bf U}^{(p_2)} &= \\frac{1}{4} \\left( 3 {\\bf U}^{(n)} + {\\bf U}^{(p_1)} + \\Delta t \\, {\\bf F} \\left( {\\bf U}^{(p_1)}, t^{n+1} \\right) \\right), \\\\\n", 693 | " {\\bf U}^{(n+1)} &= \\frac{1}{3} \\left( {\\bf U}^{(n)} + 2 {\\bf U}^{(p_2)} + 2 \\Delta t \\, {\\bf F} \\left( {\\bf U}^{(p_2)}, t^{n+1} \\right) \\right).\n", 694 | "\\end{align}\n", 695 | "\n", 696 | "Compare with both second and fourth order central spatial differencing." 697 | ] 698 | } 699 | ], 700 | "metadata": { 701 | "kernelspec": { 702 | "display_name": "Python 3", 703 | "language": "python", 704 | "name": "python3" 705 | }, 706 | "language_info": { 707 | "codemirror_mode": { 708 | "name": "ipython", 709 | "version": 3 710 | }, 711 | "file_extension": ".py", 712 | "mimetype": "text/x-python", 713 | "name": "python", 714 | "nbconvert_exporter": "python", 715 | "pygments_lexer": "ipython3", 716 | "version": "3.6.2" 717 | } 718 | }, 719 | "nbformat": 4, 720 | "nbformat_minor": 1 721 | } 722 | -------------------------------------------------------------------------------- /Numerical-Methods/01-finite-differencing-sol.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Numerical Methods" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "For Numerical Relativity, we need to\n", 15 | "\n", 16 | "* evolve the spacetime (hyperbolic PDEs with \"smooth\" fields);\n", 17 | "* evolve the matter (hyperbolic PDEs with discontinuous fields);\n", 18 | "* solve initial data (elliptic PDEs);\n", 19 | "* extract gravitational waves (interpolation and integration);\n", 20 | "* find and analyse horizons (interpolation, BVPs).\n", 21 | "\n", 22 | "These can be built on some simple foundations. 
\n", 23 | "\n", 24 | "The general concepts that underpin most numerical methods are\n", 25 | "\n", 26 | "1. the solution of linear systems $A {\\bf x} = {\\bf b}$;\n", 27 | "2. the solution of nonlinear root-finding problems ${\\bf f} ( {\\bf x} ) = {\\bf 0}$;\n", 28 | "3. the representation of a function or field $f(x)$ by discrete data $f_i$ at points $x_i$, by interpolation or other means;\n", 29 | "4. the (discrete) Fast Fourier Transform;\n", 30 | "5. stochastic concepts and methods.\n", 31 | "\n", 32 | "For Numerical Relativity, there has been little need (yet!) for stochastic methods, and the use of FFTs is mostly restricted to analysis. All of these points can be found in standard numerical packages and libraries: the question, however, is\n", 33 | "\n", 34 | "1. what do we need to understand about these methods before implementing or using them?\n", 35 | "2. when is it faster or better to implement our own version rather than using a library?" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "# Finite differencing" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "As a first step we'll quickly cover *finite differencing*: the approximation of derivatives of a function $f$ when the only information about $f$ is its value at a set of points, or nodes, $\\{x_i\\}$, denoted $\\{f_i\\}$.\n", 50 | "\n", 51 | "Here we have the \"representation of a function\" problem. We represent the function $f$ using a *piecewise polynomial* function $g$. This polynomial must interpolate $f$: that is, $g(x_i) \\equiv f(x_i) = f_i$. We then approximate derivatives of $f$ by derivatives of $g$.\n", 52 | "\n", 53 | "As simple examples, let's assume we know three points, $\\{f_{i-1}, f_i, f_{i+1}\\}$. Then we have the linear polynomial approximations\n", 54 | "\n", 55 | "$$\n", 56 | " g_{FD} = \\frac{x - x_{i+1}}{x_i - x_{i+1}} f_i + \\frac{x - x_{i}}{x_{i+1} - x_{i}} f_{i+1}\n", 57 | "$$\n", 58 | "\n", 59 | "and\n", 60 | "\n", 61 | "$$\n", 62 | " g_{BD} = \\frac{x - x_{i}}{x_{i-1} - x_{i}} f_{i-1} + \\frac{x - x_{i-1}}{x_i - x_{i-1}} f_i\n", 63 | "$$\n", 64 | "\n", 65 | "or the quadratic polynomial approximation\n", 66 | "\n", 67 | "$$\n", 68 | " g_{CD} = \\frac{(x - x_{i})(x - x_{i+1})}{(x_{i-1} - x_{i})(x_{i-1} - x_{i+1})} f_{i-1} + \\frac{(x - x_{i-1})(x - x_{i+1})}{(x_{i} - x_{i-1})(x_{i} - x_{i+1})} f_{i} + \\frac{(x - x_{i-1})(x - x_{i})}{(x_{i+1} - x_{i-1})(x_{i+1} - x_{i})} f_{i+1}.\n", 69 | "$$\n", 70 | "\n", 71 | "Note how this Lagrange form is built out of *indicator polynomials* that take the value $1$ at one node and vanish at all others." 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": {}, 77 | "source": [ 78 | "By differentiating these polynomial interpolating functions we get approximations to the derivatives of $f$. Each approximation is different, with different errors. \n", 79 | "\n", 80 | "We'll assume that the nodes are equally spaced, with grid spacing $\\Delta x = x_{i+1} - x_i$. The approximations above give the standard *forward difference*\n", 81 | "\n", 82 | "$$\n", 83 | " \\left. \\frac{\\partial g_{FD}}{\\partial x} \\right|_{x = x_i} \\to \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} = \\frac{1}{\\Delta x} \\left( f_{i+1} - f_i \\right) + {\\cal O} \\left( \\Delta x \\right),\n", 84 | "$$\n", 85 | "\n", 86 | "the standard *backward difference*\n", 87 | "\n", 88 | "$$\n", 89 | " \\left. 
\\frac{\\partial g_{BD}}{\\partial x} \\right|_{x = x_i} \\to \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} = \\frac{1}{\\Delta x} \\left( f_{i} - f_{i-1} \\right) + {\\cal O} \\left( \\Delta x \\right),\n", 90 | "$$\n", 91 | "\n", 92 | "and the standard *central difference* approximations\n", 93 | "\n", 94 | "\\begin{align}\n", 95 | " \\left. \\frac{\\partial g_{CD}}{\\partial x} \\right|_{x = x_i} & \\to \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i} \\\\ & = \\frac{1}{2 \\, \\Delta x} \\left( f_{i+1} - f_{i-1} \\right) + {\\cal O} \\left( \\Delta x^2 \\right), \\\\\n", 96 | " \\left. \\frac{\\partial^2 g_{CD}}{\\partial x^2} \\right|_{x = x_i} & \\to \\left. \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i} \\\\ & = \\frac{1}{\\left( \\Delta x \\right)^2} \\left( f_{i-1} - 2 f_i + f_{i+1} \\right) + {\\cal O} \\left( \\Delta x^2 \\right).\n", 97 | "\\end{align}\n", 98 | "\n", 99 | "The error is most conveniently derived by expressing $f_{i-1}$ and $f_{i+1}$ using the Taylor expansion of $f(x)$ around $x_i$, i.e.\n", 100 | "\n", 101 | "$$\n", 102 | "f_{i-1} = f_i - \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i}\\!\\! \\Delta x + \\frac{1}{2} \\left . \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i}\\!\\! \\Delta x^2 - \\frac{1}{6} \\left . \\frac{\\partial^3 f}{\\partial x^3} \\right|_{x = x_i}\\!\\! \\Delta x^3 + {\\cal O} \\left ( \\Delta x^4 \\right)\n", 103 | "$$\n", 104 | "and\n", 105 | "$$\n", 106 | "f_{i+1} = f_i + \\left. \\frac{\\partial f}{\\partial x} \\right|_{x = x_i}\\!\\! \\Delta x + \\frac{1}{2} \\left . \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i}\\!\\! \\Delta x^2 + \\frac{1}{6} \\left . \\frac{\\partial^3 f}{\\partial x^3} \\right|_{x = x_i}\\!\\! \\Delta x^3 + {\\cal O} \\left ( \\Delta x^4 \\right)\n", 107 | "$$" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "metadata": {}, 113 | "source": [ 114 | "## Testing this in code\n", 115 | "\n", 116 | "We'll use finite differencing repeatedly. To test our code we'll be testing the differencing. Let's check the above approximations applied to a simple function,\n", 117 | "\n", 118 | "$$\n", 119 | " f(x) = \\exp \\left[ x \\right].\n", 120 | "$$\n", 121 | "\n", 122 | "All derivatives are the same as the original function, which evaluated at $x=0$ gives $1$.\n", 123 | "\n", 124 | "First we write the functions, then we test them." 
125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": { 131 | "collapsed": true 132 | }, 133 | "outputs": [], 134 | "source": [ 135 | "def backward_differencing(f, x_i, dx):\n", 136 | " \"\"\"\n", 137 | " Backward differencing of f at x_i with grid spacing dx.\n", 138 | " \"\"\"\n", 139 | " f_i = f(x_i)\n", 140 | " f_i_minus_1 = f(x_i - dx)\n", 141 | " \n", 142 | " return (f_i - f_i_minus_1) / dx" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": null, 148 | "metadata": { 149 | "collapsed": true 150 | }, 151 | "outputs": [], 152 | "source": [ 153 | "def forward_differencing(f, x_i, dx):\n", 154 | " \"\"\"\n", 155 | " Forward differencing of f at x_i with grid spacing dx.\n", 156 | " \"\"\"\n", 157 | " f_i = f(x_i)\n", 158 | " f_i_plus_1 = f(x_i + dx)\n", 159 | " \n", 160 | " return (f_i_plus_1 - f_i) / dx" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "metadata": { 167 | "collapsed": true 168 | }, 169 | "outputs": [], 170 | "source": [ 171 | "def central_differencing(f, x_i, dx):\n", 172 | " \"\"\"\n", 173 | " Second order central differencing of f at x_i with grid spacing dx.\n", 174 | " \"\"\"\n", 175 | " f_i = f(x_i)\n", 176 | " f_i_minus_1 = f(x_i - dx)\n", 177 | " f_i_plus_1 = f(x_i + dx)\n", 178 | " \n", 179 | " first_derivative = (f_i_plus_1 - f_i_minus_1) / (2.0 * dx)\n", 180 | " second_derivative = (f_i_minus_1 - 2.0 * f_i + f_i_plus_1) / (dx**2)\n", 181 | " \n", 182 | " return first_derivative, second_derivative" 183 | ] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": null, 188 | "metadata": { 189 | "collapsed": true 190 | }, 191 | "outputs": [], 192 | "source": [ 193 | "import numpy" 194 | ] 195 | }, 196 | { 197 | "cell_type": "code", 198 | "execution_count": null, 199 | "metadata": { 200 | "collapsed": true 201 | }, 202 | "outputs": [], 203 | "source": [ 204 | "bd = backward_differencing(numpy.exp, 0.0, dx=1.0)\n", 205 | "fd = forward_differencing(numpy.exp, 0.0, dx=1.0)\n", 206 | "cd1, cd2 = central_differencing(numpy.exp, 0.0, dx=1.0)\n", 207 | "\n", 208 | "print(\"Backward difference should be 1, is {}, error {}\".format(bd, abs(bd - 1.0)))\n", 209 | "print(\"Forward difference should be 1, is {}, error {}\".format(fd, abs(fd - 1.0)))\n", 210 | "print(\"Central difference (1st derivative) should be 1, is {}, error {}\".format(cd1, abs(cd1 - 1.0)))\n", 211 | "print(\"Central difference (2nd derivative) should be 1, is {}, error {}\".format(cd2, abs(cd2 - 1.0)))" 212 | ] 213 | }, 214 | { 215 | "cell_type": "markdown", 216 | "metadata": {}, 217 | "source": [ 218 | "The errors here are significant. What matters is how fast the errors reduce as we change the grid spacing. 
Try changing from $\\Delta x = 1$ to $\\Delta x = 0.1$:" 219 | ] 220 | }, 221 | { 222 | "cell_type": "code", 223 | "execution_count": null, 224 | "metadata": { 225 | "collapsed": true 226 | }, 227 | "outputs": [], 228 | "source": [ 229 | "bd = backward_differencing(numpy.exp, 0.0, dx=0.1)\n", 230 | "fd = forward_differencing(numpy.exp, 0.0, dx=0.1)\n", 231 | "cd1, cd2 = central_differencing(numpy.exp, 0.0, dx=0.1)\n", 232 | "\n", 233 | "print(\"Backward difference should be 1, is {}, error {}\".format(bd, abs(bd - 1.0)))\n", 234 | "print(\"Forward difference should be 1, is {}, error {}\".format(fd, abs(fd - 1.0)))\n", 235 | "print(\"Central difference (1st derivative) should be 1, is {}, error {}\".format(cd1, abs(cd1 - 1.0)))\n", 236 | "print(\"Central difference (2nd derivative) should be 1, is {}, error {}\".format(cd2, abs(cd2 - 1.0)))" 237 | ] 238 | }, 239 | { 240 | "cell_type": "markdown", 241 | "metadata": {}, 242 | "source": [ 243 | "We see *roughly* the expected scaling, with forward and backward differencing errors reducing by roughly $10$, and central differencing errors reducing by roughly $10^2$." 244 | ] 245 | }, 246 | { 247 | "cell_type": "markdown", 248 | "metadata": {}, 249 | "source": [ 250 | "## Convergence" 251 | ] 252 | }, 253 | { 254 | "cell_type": "markdown", 255 | "metadata": {}, 256 | "source": [ 257 | "The feature that we always want to show is that the error $\\cal E$ decreases with the grid spacing $\\Delta x$. In particular, for most methods in Numerical Relativity, we expect a power law relationship:\n", 258 | "\n", 259 | "$$\n", 260 | " {\\cal E} \\propto \\left( \\Delta x \\right)^p.\n", 261 | "$$\n", 262 | "\n", 263 | "If we can measure the error (by knowing the exact solution) then we can measure the *convergence rate* $p$, by using\n", 264 | "\n", 265 | "$$\n", 266 | " \\log \\left( {\\cal E} \\right) = p \\, \\log \\left( \\Delta x \\right) + \\text{constant}.\n", 267 | "$$\n", 268 | "\n", 269 | "That is $p$ is the slope of the best-fit straight line through the plot of the error against the grid spacing, on a logarithmic scale.\n", 270 | "\n", 271 | "If we do not know the exact solution (the usual case), we can use *self convergence* to do the same measurement.\n", 272 | "\n", 273 | "We check this for our finite differencing above." 
274 | ] 275 | }, 276 | { 277 | "cell_type": "code", 278 | "execution_count": null, 279 | "metadata": { 280 | "collapsed": true 281 | }, 282 | "outputs": [], 283 | "source": [ 284 | "from matplotlib import pyplot\n", 285 | "%matplotlib notebook\n", 286 | "\n", 287 | "dxs = numpy.logspace(-5, 0, 10)\n", 288 | "bd_errors = numpy.zeros_like(dxs)\n", 289 | "fd_errors = numpy.zeros_like(dxs)\n", 290 | "cd1_errors = numpy.zeros_like(dxs)\n", 291 | "cd2_errors = numpy.zeros_like(dxs)\n", 292 | "\n", 293 | "for i, dx in enumerate(dxs):\n", 294 | " bd_errors[i] = abs(backward_differencing(numpy.exp, 0.0, dx) - 1.0)\n", 295 | " fd_errors[i] = abs(forward_differencing(numpy.exp, 0.0, dx) - 1.0)\n", 296 | " cd1, cd2 = central_differencing(numpy.exp, 0.0, dx)\n", 297 | " cd1_errors[i] = abs(cd1 - 1.0)\n", 298 | " cd2_errors[i] = abs(cd2 - 1.0)" 299 | ] 300 | }, 301 | { 302 | "cell_type": "code", 303 | "execution_count": null, 304 | "metadata": { 305 | "collapsed": true 306 | }, 307 | "outputs": [], 308 | "source": [ 309 | "pyplot.figure()\n", 310 | "pyplot.loglog(dxs, bd_errors, 'kx', label='Backwards')\n", 311 | "pyplot.loglog(dxs, fd_errors, 'b+', label='Forwards')\n", 312 | "pyplot.loglog(dxs, cd1_errors, 'go', label='Central (1st)')\n", 313 | "pyplot.loglog(dxs, cd2_errors, 'r^', label='Central (2nd)')\n", 314 | "pyplot.loglog(dxs, dxs*(bd_errors[0]/dxs[0]), 'k-', label=r\"$p=1$\")\n", 315 | "pyplot.loglog(dxs, dxs**2*(cd1_errors[0]/dxs[0]**2), 'k--', label=r\"$p=2$\")\n", 316 | "pyplot.xlabel(r\"$\\Delta x$\")\n", 317 | "pyplot.ylabel(\"Error\")\n", 318 | "pyplot.legend(loc=\"lower right\")\n", 319 | "pyplot.show()" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "Forwards and backwards differencing are converging at first order ($p=1$). Central differencing is converging at second order ($p=2$) until floating point roundoff effects start causing problems at small $\\Delta x$." 327 | ] 328 | }, 329 | { 330 | "cell_type": "markdown", 331 | "metadata": {}, 332 | "source": [ 333 | "# Extension exercises" 334 | ] 335 | }, 336 | { 337 | "cell_type": "markdown", 338 | "metadata": {}, 339 | "source": [ 340 | "##### Self convergence\n", 341 | "\n", 342 | "By definition, the error ${\\cal E}(\\Delta x)$ is a function of the grid spacing, as is our numerical approximation of the thing we're trying to compute $F(\\Delta x)$ (above $F$ was the derivative of $f$, evaluated at $0$). This gives\n", 343 | "\n", 344 | "$$\n", 345 | " {\\cal E}(\\Delta x) = F \\left( \\Delta x \\right) - F \\left( 0 \\right)\n", 346 | "$$\n", 347 | "\n", 348 | "or\n", 349 | "\n", 350 | "$$\n", 351 | " F \\left( \\Delta x \\right) = F \\left( 0 \\right) + {\\cal E}(\\Delta x).\n", 352 | "$$\n", 353 | "\n", 354 | "Of course, $F(0)$ is the exact solution we're trying to compute. However, by subtracting any *two* approximations we can eliminate the exact solution. Using the power law dependence\n", 355 | "\n", 356 | "$$\n", 357 | " {\\cal E}(\\Delta x) = C \\left( \\Delta x \\right)^p\n", 358 | "$$\n", 359 | "\n", 360 | "this gives\n", 361 | "\n", 362 | "$$\n", 363 | " F \\left( \\alpha \\Delta x \\right) - F \\left( \\Delta x \\right) = C \\left( \\Delta x \\right)^p \\left( \\alpha^p - 1 \\right).\n", 364 | "$$\n", 365 | "\n", 366 | "We still do not know the value of the constant $C$. 
However, we can use *three* approximations to eliminate it (note for convenience we chose the same ratio $\\alpha$ between the first two resolutions as\n", 367 | "between the last two):\n", 368 | "\n", 369 | "$$\n", 370 | " \\frac{F \\left( \\alpha^2 \\Delta x \\right) - F \\left( \\alpha \\Delta x \\right)}{F \\left( \\alpha \\Delta x \\right) - F \\left( \\Delta x \\right)} = \\frac{\\left( \\alpha^{2p} - \\alpha^p \\right)}{\\left( \\alpha^p - 1 \\right)} = \\alpha^p.\n", 371 | "$$\n", 372 | "\n", 373 | "So the *self-convergence rate* is\n", 374 | "\n", 375 | "$$\n", 376 | " p = \\log_{\\alpha} \\left| \\frac{F \\left( \\alpha^2 \\Delta x \\right) - F \\left( \\alpha \\Delta x \\right)}{F \\left( \\alpha \\Delta x \\right) - F \\left( \\Delta x \\right)} \\right|.\n", 377 | "$$\n", 378 | "\n", 379 | "Compute this self-convergence rate for all the cases above." 380 | ] 381 | }, 382 | { 383 | "cell_type": "markdown", 384 | "metadata": {}, 385 | "source": [ 386 | "##### Answer" 387 | ] 388 | }, 389 | { 390 | "cell_type": "code", 391 | "execution_count": null, 392 | "metadata": { 393 | "collapsed": true 394 | }, 395 | "outputs": [], 396 | "source": [ 397 | "dxs = numpy.logspace(-5, 0, 10)\n", 398 | "fbd = numpy.zeros_like(dxs)\n", 399 | "ffd = numpy.zeros_like(dxs)\n", 400 | "fcd1 = numpy.zeros_like(dxs)\n", 401 | "fcd2 = numpy.zeros_like(dxs)\n", 402 | "\n", 403 | "for i, dx in enumerate(dxs):\n", 404 | " fbd[i] = backward_differencing(numpy.exp, 0.0, dx)\n", 405 | " ffd[i] = forward_differencing(numpy.exp, 0.0, dx)\n", 406 | " fcd1[i], fcd2[i] = central_differencing(numpy.exp, 0.0, dx)\n", 407 | "\n", 408 | "alpha = dxs[0]/dxs[1]\n", 409 | "p_bd = numpy.log(abs((fbd[0:-3]-fbd[1:-2])/(fbd[1:-2]-fbd[2:-1])))/numpy.log(alpha)\n", 410 | "p_fd = numpy.log(abs((ffd[0:-3]-ffd[1:-2])/(ffd[1:-2]-ffd[2:-1])))/numpy.log(alpha)\n", 411 | "p_cd1 = numpy.log(abs((fcd1[0:-3]-fcd1[1:-2])/(fcd1[1:-2]-fcd1[2:-1])))/numpy.log(alpha)\n", 412 | "p_cd2 = numpy.log(abs((fcd2[0:-3]-fcd2[1:-2])/(fcd2[1:-2]-fcd2[2:-1])))/numpy.log(alpha)\n", 413 | "\n", 414 | "pyplot.figure()\n", 415 | "pyplot.semilogx(dxs[2:-1], p_bd, 'kx', label='Backwards')\n", 416 | "pyplot.semilogx(dxs[2:-1], p_fd, 'b+', label='Forwards')\n", 417 | "pyplot.semilogx(dxs[2:-1], p_cd1, 'go', label='Central (1st)')\n", 418 | "pyplot.semilogx(dxs[2:-1], p_cd2, 'r^', label='Central (2nd)')\n", 419 | "pyplot.ylim(-1,2.5)\n", 420 | "pyplot.xlabel(r\"$\\Delta x$\")\n", 421 | "pyplot.ylabel(\"Self-convergence rate\")\n", 422 | "pyplot.legend(loc=\"lower right\")\n", 423 | "pyplot.show()" 424 | ] 425 | }, 426 | { 427 | "cell_type": "markdown", 428 | "metadata": {}, 429 | "source": [ 430 | "Clearly we are getting the expected self-convergence rates, except for the central 2nd order derivative approximation, where the results for the highest resolutions are dominated by roundoff error. Note as 3 resolutions are needed to calculate the self-convergence rate, the results are plotted as a function of the lowest resolution included." 431 | ] 432 | }, 433 | { 434 | "cell_type": "markdown", 435 | "metadata": {}, 436 | "source": [ 437 | "##### Higher order\n", 438 | "\n", 439 | "Show, either by Taylor expansion, or by constructing the interpolating polynomial, that the fourth order central differencing approximations are\n", 440 | "\n", 441 | "\\begin{align}\n", 442 | " \\left. 
\\frac{\\partial f}{\\partial x} \\right|_{x = x_i} & = \\frac{1}{12 \\, \\Delta x} \\left( -f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2} \\right) + {\\cal O} \\left( \\Delta x^4 \\right), \\\\\n", 443 | " \\left. \\frac{\\partial^2 f}{\\partial x^2} \\right|_{x = x_i} & = \\frac{1}{12 \\left( \\Delta x \\right)^2} \\left( -f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2} \\right) + {\\cal O} \\left( \\Delta x^4 \\right).\n", 444 | "\\end{align}" 445 | ] 446 | }, 447 | { 448 | "cell_type": "markdown", 449 | "metadata": {}, 450 | "source": [ 451 | "##### Measure the convergence rate\n", 452 | "\n", 453 | "Using `numpy.polyfit`, directly measure the convergence rate for the algorithms above. Be careful to exclude points where finite differencing effects cause problems. Repeat the test for the fourth order formulas above." 454 | ] 455 | }, 456 | { 457 | "cell_type": "markdown", 458 | "metadata": {}, 459 | "source": [ 460 | "##### Answer" 461 | ] 462 | }, 463 | { 464 | "cell_type": "markdown", 465 | "metadata": {}, 466 | "source": [ 467 | "Note the exclusion of the first 3 data points for the Central difference (2nd derivative)" 468 | ] 469 | }, 470 | { 471 | "cell_type": "code", 472 | "execution_count": null, 473 | "metadata": { 474 | "collapsed": true 475 | }, 476 | "outputs": [], 477 | "source": [ 478 | "dxs = numpy.logspace(-5, 0, 10)\n", 479 | "bd_errors = numpy.zeros_like(dxs)\n", 480 | "fd_errors = numpy.zeros_like(dxs)\n", 481 | "cd1_errors = numpy.zeros_like(dxs)\n", 482 | "cd2_errors = numpy.zeros_like(dxs)\n", 483 | "\n", 484 | "for i, dx in enumerate(dxs):\n", 485 | " bd_errors[i] = abs(backward_differencing(numpy.exp, 0.0, dx) - 1.0)\n", 486 | " fd_errors[i] = abs(forward_differencing(numpy.exp, 0.0, dx) - 1.0)\n", 487 | " cd1, cd2 = central_differencing(numpy.exp, 0.0, dx)\n", 488 | " cd1_errors[i] = abs(cd1 - 1.0)\n", 489 | " cd2_errors[i] = abs(cd2 - 1.0)\n", 490 | "\n", 491 | "bd_fit = numpy.polyfit(numpy.log(dxs),numpy.log(bd_errors),1)\n", 492 | "fd_fit = numpy.polyfit(numpy.log(dxs),numpy.log(fd_errors),1)\n", 493 | "cd1_fit = numpy.polyfit(numpy.log(dxs),numpy.log(cd1_errors),1)\n", 494 | "cd2_fit = numpy.polyfit(numpy.log(dxs[3:-1]),numpy.log(cd1_errors[3:-1]),1)\n", 495 | "\n", 496 | "print(\"Convergence order for Backward difference is {}\".format(bd_fit[0]))\n", 497 | "print(\"Convergence order for Forward difference is {}\".format(fd_fit[0]))\n", 498 | "print(\"Convergence order for Central difference (1st derivative) is {}\".format(cd1_fit[0]))\n", 499 | "print(\"Convergence order for Central difference (2nd derivative) is {}\".format(cd2_fit[0]))" 500 | ] 501 | }, 502 | { 503 | "cell_type": "markdown", 504 | "metadata": {}, 505 | "source": [ 506 | "Implement the 4th order derivatives." 
507 | ] 508 | }, 509 | { 510 | "cell_type": "code", 511 | "execution_count": null, 512 | "metadata": { 513 | "collapsed": true 514 | }, 515 | "outputs": [], 516 | "source": [ 517 | "def central_differencing_4(f, x_i, dx):\n", 518 | " \"\"\"\n", 519 | " Fourth order central differencing of f at x_i with grid spacing dx.\n", 520 | " \"\"\"\n", 521 | " f_i = f(x_i)\n", 522 | " f_i_minus_1 = f(x_i - dx)\n", 523 | " f_i_plus_1 = f(x_i + dx)\n", 524 | " f_i_minus_2 = f(x_i - 2.0*dx)\n", 525 | " f_i_plus_2 = f(x_i + 2.0*dx)\n", 526 | " \n", 527 | " first_derivative = ( 8.0*(f_i_plus_1 - f_i_minus_1) - (f_i_plus_2 - f_i_minus_2) ) / (12.0 * dx)\n", 528 | " second_derivative = ( -30.0*f_i + 16.0*(f_i_plus_1 + f_i_minus_1) - (f_i_plus_2 + f_i_minus_2 ) ) / (12.0*dx**2)\n", 529 | " \n", 530 | " return first_derivative, second_derivative" 531 | ] 532 | }, 533 | { 534 | "cell_type": "markdown", 535 | "metadata": {}, 536 | "source": [ 537 | "Calculate and plot the errors." 538 | ] 539 | }, 540 | { 541 | "cell_type": "code", 542 | "execution_count": null, 543 | "metadata": { 544 | "collapsed": true 545 | }, 546 | "outputs": [], 547 | "source": [ 548 | "dxs = numpy.logspace(-5, 0, 10)\n", 549 | "cd4_1_errors = numpy.zeros_like(dxs)\n", 550 | "cd4_2_errors = numpy.zeros_like(dxs)\n", 551 | "\n", 552 | "for i, dx in enumerate(dxs):\n", 553 | " cd4_1, cd4_2 = central_differencing_4(numpy.exp, 0.0, dx)\n", 554 | " cd4_1_errors[i] = abs(cd4_1 - 1.0)\n", 555 | " cd4_2_errors[i] = abs(cd4_2 - 1.0)\n", 556 | " \n", 557 | "pyplot.figure()\n", 558 | "pyplot.loglog(dxs, cd4_1_errors, 'go', label='Central 4th order (1st)')\n", 559 | "pyplot.loglog(dxs, cd4_2_errors, 'r^', label='Central 4th order (2nd)')\n", 560 | "pyplot.loglog(dxs, dxs**4*(cd4_1_errors[-1]/dxs[-1]**4), 'k-', label=r\"$p=4$\")\n", 561 | "pyplot.xlabel(r\"$\\Delta x$\")\n", 562 | "pyplot.ylabel(\"Error\")\n", 563 | "pyplot.legend(loc=\"lower right\")\n", 564 | "pyplot.show()" 565 | ] 566 | }, 567 | { 568 | "cell_type": "markdown", 569 | "metadata": {}, 570 | "source": [ 571 | "Remember to exclude data points dominated by roundoff error when fitting." 
572 | ] 573 | }, 574 | { 575 | "cell_type": "code", 576 | "execution_count": null, 577 | "metadata": { 578 | "collapsed": true 579 | }, 580 | "outputs": [], 581 | "source": [ 582 | "dxs = numpy.logspace(-5, 0, 10)\n", 583 | "cd4_1_errors = numpy.zeros_like(dxs)\n", 584 | "cd4_2_errors = numpy.zeros_like(dxs)\n", 585 | "\n", 586 | "for i, dx in enumerate(dxs):\n", 587 | " cd4_1, cd4_2 = central_differencing_4(numpy.exp, 0.0, dx)\n", 588 | " cd4_1_errors[i] = abs(cd4_1 - 1.0)\n", 589 | " cd4_2_errors[i] = abs(cd4_2 - 1.0)\n", 590 | "\n", 591 | "cd4_1_fit = numpy.polyfit(numpy.log(dxs[4:-1]),numpy.log(cd4_1_errors[4:-1]),1)\n", 592 | "cd4_2_fit = numpy.polyfit(numpy.log(dxs[5:-1]),numpy.log(cd4_2_errors[5:-1]),1)\n", 593 | "\n", 594 | "print(\"Convergence order for Fourth Order Central difference (1st derivative) is {}\".format(cd4_1_fit[0]))\n", 595 | "print(\"Convergence order for Fourth Order Central difference (2nd derivative) is {}\".format(cd4_2_fit[0]))" 596 | ] 597 | } 598 | ], 599 | "metadata": { 600 | "kernelspec": { 601 | "display_name": "Python 3", 602 | "language": "python", 603 | "name": "python3" 604 | }, 605 | "language_info": { 606 | "codemirror_mode": { 607 | "name": "ipython", 608 | "version": 3 609 | }, 610 | "file_extension": ".py", 611 | "mimetype": "text/x-python", 612 | "name": "python", 613 | "nbconvert_exporter": "python", 614 | "pygments_lexer": "ipython3", 615 | "version": "3.6.2" 616 | } 617 | }, 618 | "nbformat": 4, 619 | "nbformat_minor": 1 620 | } 621 | -------------------------------------------------------------------------------- /Using-Cactus/Cactus-Funwave.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "![cactus](http://cactuscode.org/global/images/cactuslogo.png)\n", 8 | "# Compiling Cactus!\n", 9 | "Step 1 is to download the code. Cactus uses a script named \"GetComponents\" to find and prepare all the source code modules that it needs for a given installation. The GetComponents script can be downloaded with a simple invocation of curl." 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": null, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "%cd ~/\n", 19 | "!curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/ET_2016_11/GetComponents" 20 | ] 21 | }, 22 | { 23 | "cell_type": "markdown", 24 | "metadata": {}, 25 | "source": [ 26 | "Step 2 is to download your thornlist. In this tutorial, we are going to use Funwave, a collection of thorns\n", 27 | "designed to simulate water waves using the Boussinesq equations." 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "!curl -kLO https://bitbucket.org/stevenrbrandt/cajunwave/raw/master/funwave_carpet.th" 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": {}, 42 | "source": [ 43 | "You can view a file in the notebook by using the \"magic\" command \"%pycat filename\". However, it tries to highlight\n", 44 | "syntax as if the file is written in python. In those cases you can simply use \"%cat filename.\" Unfortunately, unlike %pycat, %cat leaves the contents of the file on the screen.\n", 45 | "\n", 46 | "Note that at the top of the file is \"DEFINE_ROOT = CactusFW2\". This means that Cactus, and all its thorns, will be checked out under that directory." 
47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "execution_count": null, 52 | "metadata": {}, 53 | "outputs": [], 54 | "source": [ 55 | "%pycat ~/funwave_carpet.th" 56 | ] 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "metadata": {}, 61 | "source": [ 62 | "Next we need to checkout the components listed in the thornlist. We do this with the GetComponents command.\n", 63 | "Before we can execute it, however, we need to turn on its execute bit." 64 | ] 65 | }, 66 | { 67 | "cell_type": "code", 68 | "execution_count": null, 69 | "metadata": {}, 70 | "outputs": [], 71 | "source": [ 72 | "%cd ~/\n", 73 | "!chmod a+x GetComponents\n", 74 | "!echo no|./GetComponents --parallel funwave_carpet.th" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": null, 80 | "metadata": {}, 81 | "outputs": [], 82 | "source": [ 83 | "%cd ~/CactusFW2" 84 | ] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": {}, 89 | "source": [ 90 | "" 91 | ] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": {}, 96 | "source": [ 97 | "# Simfactory\n", 98 | "Cactus is normally built with a tool called Simfactory. Simfactory, in turn, will call make.\n", 99 | "Before it can work, however, it needs to be configured. Please replace... my email address in the\n", 100 | "command below with yours. The email address isn't sent anywhere, all it's used for is allowing\n", 101 | "Cactus to send job change state notifications to you." 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": null, 107 | "metadata": {}, 108 | "outputs": [], 109 | "source": [ 110 | "%cd ~/CactusFW2\n", 111 | "!./simfactory/bin/sim setup-silent --setup-email=sbrandt@cct.lsu.edu " 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "At long last, we are ready to actually build Cactus. Cactus can often figure out what compilers and build\n", 119 | "options to use automatically, but in some cases it is necessary to specify it by hand (you can do this by adding --optionlist=centos.cfg to the build command below). The file containing\n", 120 | "this information is called the Option List. You might want to take a look at it." 121 | ] 122 | }, 123 | { 124 | "cell_type": "markdown", 125 | "metadata": {}, 126 | "source": [ 127 | "This is the command to build Cactus using our thornlist. As written, it will build in parallel using two processes. That's what the -j option does." 128 | ] 129 | }, 130 | { 131 | "cell_type": "code", 132 | "execution_count": null, 133 | "metadata": {}, 134 | "outputs": [], 135 | "source": [ 136 | "#!rm -fr configs" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": null, 142 | "metadata": {}, 143 | "outputs": [], 144 | "source": [ 145 | "!time ./simfactory/bin/sim build --mdbkey make 'make -j2' --thornlist=./repos/cajunwave/funwave_carpet.th | cat -" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "The build command creates a configuration called \"sim\". It is found in the \"configs/sim\" directory. One of the files in this directory is the ThornList. It contains the list of thorns Cactus will compile. If you wish to add or remove a thorn from your configuration, you can do it by editing this file. However, by doing so you risk confusing yourself by forgetting what you've done. Proceed at your own risk!" 
153 | ] 154 | }, 155 | { 156 | "cell_type": "code", 157 | "execution_count": null, 158 | "metadata": {}, 159 | "outputs": [], 160 | "source": [ 161 | "%ls ~/CactusFW2/configs/sim" 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": null, 167 | "metadata": {}, 168 | "outputs": [], 169 | "source": [ 170 | "%pycat ~/CactusFW2/configs/sim/ThornList" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "The \"OptionList\" file contains all the configuration options (the things you saw in centos.cfg). Unlike the ThornList file, however, changing this file will have no effect. If you wish to change your configuration options without starting over from scratch, you should edit the file \"configs/sim/config-data/make.config.defn.\"" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": null, 183 | "metadata": {}, 184 | "outputs": [], 185 | "source": [ 186 | "%pycat ~/CactusFW2/configs/sim/config-data/make.config.defn" 187 | ] 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "metadata": {}, 192 | "source": [ 193 | "
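As noted above, once the configuration exists, option changes go into configs/sim/config-data/make.config.defn rather than the OptionList. A hedged sketch of what that workflow could look like in a notebook cell is below; the -O2 to -O3 substitution is purely hypothetical, so check make.config.defn for the flag names your option list actually defines. Only the build command itself is taken from this tutorial.

```python
# Hypothetical example: tweak an optimization flag in the existing configuration,
# then rebuild with the same build command used earlier.
!sed -i 's/-O2/-O3/g' ~/CactusFW2/configs/sim/config-data/make.config.defn
%cd ~/CactusFW2
!time ./simfactory/bin/sim build --mdbkey make 'make -j2' --thornlist=./repos/cajunwave/funwave_carpet.th | cat -
```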

Running Cactus!

" 194 | ] 195 | }, 196 | { 197 | "cell_type": "code", 198 | "execution_count": null, 199 | "metadata": {}, 200 | "outputs": [], 201 | "source": [ 202 | "%cd ~/CactusFW2" 203 | ] 204 | }, 205 | { 206 | "cell_type": "markdown", 207 | "metadata": {}, 208 | "source": [ 209 | "Below we are going to run a simple Gaussian water wave over a flat seabed. We will use MPI and run on two processes. You can edit the parameter file below and hit shift-Enter to write it to disk. The special sequence \"%%writefile filename\" at the top makes this possible. Alternatively, you can load an existing file by putting the special sequence \"%load filename\" at the top of a cell and hitting shift-Enter." 210 | ] 211 | }, 212 | { 213 | "cell_type": "code", 214 | "execution_count": null, 215 | "metadata": {}, 216 | "outputs": [], 217 | "source": [ 218 | "%%writefile ~/CactusFW2/wave.par\n", 219 | "\n", 220 | "#Reorder the parameters for easy comparison to the input.txt in example 3\n", 221 | "ActiveThorns = \"\n", 222 | " CoordBase FunWave FunwaveCoord CartGrid3D Carpet CarpetIOASCII\n", 223 | " CartGrid3D IOUtil CarpetIOBasic CarpetSlab Boundary SymBase MoL\n", 224 | " CarpetReduce LocalReduce InitBase CarpetLib LoopControl Tridiagonal\n", 225 | " CarpetIOScalar \"\n", 226 | "\n", 227 | "#----------------------------------------------------\n", 228 | "# Flesh and CCTK parameters\n", 229 | "#----------------------------------------------------\n", 230 | "\n", 231 | "# flesh\n", 232 | "Cactus::cctk_run_title = \"Test Run\"\n", 233 | "Cactus::cctk_show_schedule = \"yes\"\n", 234 | "Cactus::cctk_itlast = 300\n", 235 | "Cactus::allow_mixeddim_gfs = \"yes\"\n", 236 | "\n", 237 | "# CartGrid3D\n", 238 | "CartGrid3D::type = \"coordbase\"\n", 239 | "CartGrid3D::avoid_origin = \"no\"\n", 240 | "CoordBase::domainsize = \"minmax\"\n", 241 | "CoordBase::spacing = \"gridspacing\"\n", 242 | "CoordBase::xmin = 0\n", 243 | "CoordBase::xmax = 30\n", 244 | "CoordBase::ymin = 0\n", 245 | "CoordBase::ymax = 30\n", 246 | "CoordBase::zmin = 0.0\n", 247 | "CoordBase::zmax = 0.0\n", 248 | "CoordBase::dx = 0.25\n", 249 | "CoordBase::dy = 0.25\n", 250 | "\n", 251 | "CoordBase::boundary_size_x_lower = 3\n", 252 | "CoordBase::boundary_size_x_upper = 3\n", 253 | "CoordBase::boundary_size_y_lower = 3\n", 254 | "CoordBase::boundary_size_y_upper = 3\n", 255 | "CoordBase::boundary_size_z_lower = 0\n", 256 | "CoordBase::boundary_size_z_upper = 0\n", 257 | "CoordBase::boundary_shiftout_x_lower = 1\n", 258 | "CoordBase::boundary_shiftout_x_upper = 1\n", 259 | "CoordBase::boundary_shiftout_y_lower = 1\n", 260 | "CoordBase::boundary_shiftout_y_upper = 1\n", 261 | "CoordBase::boundary_shiftout_z_lower = 1\n", 262 | "CoordBase::boundary_shiftout_z_upper = 1\n", 263 | "\n", 264 | "# Carpet\n", 265 | "Carpet::domain_from_coordbase = \"yes\"\n", 266 | "Carpet::ghost_size_x = 3\n", 267 | "Carpet::ghost_size_y = 3\n", 268 | "Carpet::ghost_size_z = 1\n", 269 | "carpet::adaptive_stepsize = yes\n", 270 | "\n", 271 | "# MoL\n", 272 | "MoL::ODE_Method = \"RK3\"\n", 273 | "MoL::disable_prolongation = \"yes\"\n", 274 | "\n", 275 | "# the output dir will be named after the parameter file name\n", 276 | "IO::out_dir = $parfile\n", 277 | "IO::out_fileinfo=\"none\"\n", 278 | "IOBasic::outInfo_every = 1\n", 279 | "IOBasic::outInfo_vars = \"FunWave::eta FunWave::u FunWave::v\"\n", 280 | "\n", 281 | "#IOASCII::out1D_every = 1\n", 282 | "#IOASCII::out1d_vars = \"FunWave::eta Funwave::depth\"\n", 283 | "CarpetIOASCII::compact_format = false\n", 284 | "IOASCII::out2D_every = 
30\n", 285 | "IOASCII::out2D_xyplane_z = 0\n", 286 | "IOASCII::out2D_vars = \"FunWave::eta FunWave::u FunWave::v\"\n", 287 | "IOASCII::out2D_xz = \"no\"\n", 288 | "IOASCII::out2D_yz = \"no\"\n", 289 | "IOASCII::output_ghost_points = \"no\"\n", 290 | "\n", 291 | "IOScalar::outScalar_every = 1\n", 292 | "IOScalar::outScalar_vars = \"FunWave::eta FunWave::u FunWave::v\"\n", 293 | "\n", 294 | "#& = \"Funwave::eta\"\n", 295 | "\n", 296 | "#----------------------------------------------------\n", 297 | "# Funwave parameters\n", 298 | "#----------------------------------------------------\n", 299 | "\n", 300 | "# Funwave depth \n", 301 | "FunWave::depth_file_offset_x = 3\n", 302 | "FunWave::depth_file_offset_y = 3\n", 303 | "FunWave::depth_type = \"flat\"\n", 304 | "FunWave::depth_format = \"ele\"\n", 305 | "FunWave::depth_file = \"/tmp/__depth__.txt\"\n", 306 | "FunWave::depth_flat = 0.8\n", 307 | "#Funwave::test_depth_shore_x = 80\n", 308 | "#Funwave::test_depth_island_x = 40\n", 309 | "#Funwave::test_depth_island_y = 40\n", 310 | "FunWave::depth_xslp = 10.0\n", 311 | "FunWave::depth_slope = 0.05\n", 312 | "FunWave::dt_size = 0\n", 313 | "Funwave::generate_test_depth_data = true\n", 314 | "Funwave::num_wave_components = 1\n", 315 | "Funwave::wave_component_file = \"/home/sbrandt/workspace/shi_funwave/example_2/fft/wavemk_per_amp_pha.txt\"\n", 316 | "Funwave::peak_period = 1\n", 317 | "\n", 318 | "# import\n", 319 | "Funwave::time_ramp = 1.0\n", 320 | "Funwave::delta_wk = 0.5\n", 321 | "Funwave::dep_wk = 0.45\n", 322 | "Funwave::xc_wk = 3.0\n", 323 | "Funwave::ywidth_wk = 10000.0\n", 324 | "Funwave::tperiod = 1.0\n", 325 | "Funwave::amp_wk = 0.0232\n", 326 | "Funwave::theta_wk = 0.0\n", 327 | "Funwave::freqpeak = 0.2\n", 328 | "Funwave::freqmin = 0.1\n", 329 | "Funwave::freqmax = 0.4\n", 330 | "Funwave::hmo = 1.0\n", 331 | "Funwave::gammatma = 5.0\n", 332 | "Funwave::thetapeak = 10.0\n", 333 | "Funwave::sigma_theta = 15.0\n", 334 | "\n", 335 | "# Funwave wind forcing\n", 336 | "Funwave::wind_force = false\n", 337 | "Funwave::use_wind_mask = false\n", 338 | "Funwave::num_time_wind_data = 2\n", 339 | "Funwave::timewind[0] = 0\n", 340 | "Funwave::wu[0] = 25\n", 341 | "Funwave::wv[0] = 50\n", 342 | "Funwave::timewind[1] = 1000\n", 343 | "Funwave::wu[1] = 100\n", 344 | "Funwave::wv[1] = 100\n", 345 | "Funwave::boundary = funwave\n", 346 | "\n", 347 | "# Funwave wave maker\n", 348 | "FunWave::wavemaker_type = \"ini_gau\"\n", 349 | "FunWave::xc = 26.5\n", 350 | "FunWave::yc = 26.9\n", 351 | "FunWave::amp = 2.0\n", 352 | "FunWave::wid = 1\n", 353 | "Funwave::wdep = 0.78\n", 354 | "Funwave::xwavemaker = 25.0\n", 355 | "\n", 356 | "# Funwave sponge \n", 357 | "FunWave::sponge_on = false\n", 358 | "FunWave::sponge_west_width = 2.0\n", 359 | "FunWave::sponge_east_width = 2.0\n", 360 | "FunWave::sponge_north_width = 0.0\n", 361 | "FunWave::sponge_south_width = 0.0\n", 362 | "FunWave::sponge_decay_rate = 0.9\n", 363 | "FunWave::sponge_damping_magnitude = 5.0\n", 364 | "\n", 365 | "# Funwave dispersion (example 3 enables dispersion)\n", 366 | "FunWave::dispersion_on = \"true\"\n", 367 | "FunWave::gamma1 = 1.0\n", 368 | "FunWave::gamma2 = 1.0\n", 369 | "FunWave::gamma3 = 1.0\n", 370 | "FunWave::beta_ref = -0.531\n", 371 | "FunWave::swe_eta_dep = 0.80\n", 372 | "FunWave::cd = 0.0\n", 373 | "\n", 374 | "# Funwave numerics (MoL parameter controls time integration scheme)\n", 375 | "FunWave::reconstruction_scheme = \"fourth\"\n", 376 | "FunWave::riemann_solver = \"HLLC\"\n", 377 | "FunWave::dtfac = 0.5\n", 378 | 
"FunWave::froudecap = 10.0\n", 379 | "FunWave::mindepth = 0.001\n", 380 | "FunWave::mindepthfrc = 0.001\n", 381 | "FunWave::enable_masks = \"true\"\n", 382 | "Funwave::estimate_dt_on = \"true\"\n", 383 | "\n", 384 | "FunwaveCoord::spherical_coordinates = false\n", 385 | "\n", 386 | "ActiveThorns = \"CarpetIOHDF5\"\n", 387 | "IOHDF5::out2D_xyplane_z = 0 \n", 388 | "IOHDF5::out2D_every = 10\n", 389 | "IOHDF5::out2D_vars = \" \n", 390 | " FunWave::eta\n", 391 | " FunWave::u\n", 392 | " FunWave::v\n", 393 | " Grid::Coordinates{out_every=1000000000}\n", 394 | "\"\n", 395 | "IOHDF5::out2D_xz = no\n", 396 | "IOHDF5::out2D_yz = no" 397 | ] 398 | }, 399 | { 400 | "cell_type": "markdown", 401 | "metadata": {}, 402 | "source": [ 403 | "This next cell deletes our simulation in case we want to throw it away and start over again for some reason." 404 | ] 405 | }, 406 | { 407 | "cell_type": "code", 408 | "execution_count": null, 409 | "metadata": {}, 410 | "outputs": [], 411 | "source": [ 412 | "!rm -fr ~/simulations/wave" 413 | ] 414 | }, 415 | { 416 | "cell_type": "markdown", 417 | "metadata": {}, 418 | "source": [ 419 | "At long last, we are ready run Cactus. This configuration specifies running on two threads, with 1 thread per process. To execute this command, Cactus uses a \"RunScript\" stored in configs/sim/RunScript. You might want to take a look at it. Identifiers sandwiched between @ symbols get replaced by Simfactory prior to execution." 420 | ] 421 | }, 422 | { 423 | "cell_type": "code", 424 | "execution_count": null, 425 | "metadata": {}, 426 | "outputs": [], 427 | "source": [ 428 | "!cat ~/CactusFW2/configs/sim/RunScript" 429 | ] 430 | }, 431 | { 432 | "cell_type": "markdown", 433 | "metadata": {}, 434 | "source": [ 435 | "Enough already! Let's run Cactus!" 436 | ] 437 | }, 438 | { 439 | "cell_type": "code", 440 | "execution_count": null, 441 | "metadata": {}, 442 | "outputs": [], 443 | "source": [ 444 | "%cd ~/CactusFW2\n", 445 | "!./simfactory/bin/sim create-run --procs 2 --num-threads 1 wave.par" 446 | ] 447 | }, 448 | { 449 | "cell_type": "markdown", 450 | "metadata": {}, 451 | "source": [ 452 | "Data can be found in this directory. Using the next couple of commands, we will browse it." 453 | ] 454 | }, 455 | { 456 | "cell_type": "code", 457 | "execution_count": null, 458 | "metadata": {}, 459 | "outputs": [], 460 | "source": [ 461 | "%cd ~/simulations/wave/output-0000/wave" 462 | ] 463 | }, 464 | { 465 | "cell_type": "code", 466 | "execution_count": null, 467 | "metadata": { 468 | "scrolled": true 469 | }, 470 | "outputs": [], 471 | "source": [ 472 | "%ls *.asc" 473 | ] 474 | }, 475 | { 476 | "cell_type": "code", 477 | "execution_count": null, 478 | "metadata": {}, 479 | "outputs": [], 480 | "source": [ 481 | "# This cell enables inline plotting in the notebook\n", 482 | "%matplotlib inline\n", 483 | "\n", 484 | "import matplotlib\n", 485 | "import numpy as np\n", 486 | "import matplotlib.pyplot as plt" 487 | ] 488 | }, 489 | { 490 | "cell_type": "markdown", 491 | "metadata": {}, 492 | "source": [ 493 | "The top of %pycat command showed us what the columns mean:\n", 494 | "HYDROBASE::rho (hydrobase-rho)\n", 495 | "* 1 iteration\n", 496 | "* 2 time - how much time has passed in the simulation\n", 497 | "* 3 the data, in this case the variable rho\n", 498 | "\n", 499 | "Once we know all this, it is straightforward to plot the data." 
500 | ] 501 | }, 502 | { 503 | "cell_type": "code", 504 | "execution_count": null, 505 | "metadata": {}, 506 | "outputs": [], 507 | "source": [ 508 | "lin_data = np.genfromtxt(\"eta.maximum.asc\")" 509 | ] 510 | }, 511 | { 512 | "cell_type": "code", 513 | "execution_count": null, 514 | "metadata": {}, 515 | "outputs": [], 516 | "source": [ 517 | "lin_data" 518 | ] 519 | }, 520 | { 521 | "cell_type": "code", 522 | "execution_count": null, 523 | "metadata": {}, 524 | "outputs": [], 525 | "source": [ 526 | "plt.plot(lin_data[:,1],lin_data[:,2])" 527 | ] 528 | }, 529 | { 530 | "cell_type": "markdown", 531 | "metadata": {}, 532 | "source": [ 533 | "Python knows how to read regularly formatted text\n", 534 | "files that use the # character for comments. Fortunately,\n", 535 | "that's what Cactus produces in its asc files." 536 | ] 537 | }, 538 | { 539 | "cell_type": "code", 540 | "execution_count": null, 541 | "metadata": {}, 542 | "outputs": [], 543 | "source": [ 544 | "file_data = np.genfromtxt(\"eta.xy.asc\")" 545 | ] 546 | }, 547 | { 548 | "cell_type": "code", 549 | "execution_count": null, 550 | "metadata": {}, 551 | "outputs": [], 552 | "source": [ 553 | "file_data" 554 | ] 555 | }, 556 | { 557 | "cell_type": "code", 558 | "execution_count": null, 559 | "metadata": {}, 560 | "outputs": [], 561 | "source": [ 562 | "import matplotlib.cm as cm\n", 563 | "# https://matplotlib.org/examples/color/colormaps_reference.html\n", 564 | "cmap = cm.gist_rainbow" 565 | ] 566 | }, 567 | { 568 | "cell_type": "code", 569 | "execution_count": null, 570 | "metadata": {}, 571 | "outputs": [], 572 | "source": [ 573 | "sets = np.unique(file_data[:,0])\n", 574 | "width = 8\n", 575 | "height = 4\n", 576 | "print(\"sets=\",sets)\n", 577 | "mn, mx = np.min(file_data[:,12]),np.max(file_data[:,12])\n", 578 | "for which in sets: \n", 579 | " print(\"which=\",which)\n", 580 | " g = file_data[file_data[:,0]==which,:]\n", 581 | " x = g[:,5]\n", 582 | " y = g[:,6]\n", 583 | " z = g[:,12]\n", 584 | " zi = z.reshape(len(np.unique(y)),len(np.unique(x)))\n", 585 | " print('min/max=',np.min(zi),np.max(zi))\n", 586 | " plt.figure(figsize=(width, height))\n", 587 | " plt.imshow(zi[::-1,:],cmap,clim=(mn,mx))\n", 588 | " plt.show()" 589 | ] 590 | }, 591 | { 592 | "cell_type": "code", 593 | "execution_count": null, 594 | "metadata": {}, 595 | "outputs": [], 596 | "source": [ 597 | "%ls *.h5" 598 | ] 599 | }, 600 | { 601 | "cell_type": "markdown", 602 | "metadata": {}, 603 | "source": [ 604 | "

Plotting HDF5 Data

\n", 605 | "HDF5 (Hierarchical Data Format 5) is a portable binary data format. As such, it is far more efficient to read and\n", 606 | "write than ascii formats, and it is probably what you should normally use. Here, you can see how to read and display\n", 607 | "the data." 608 | ] 609 | }, 610 | { 611 | "cell_type": "code", 612 | "execution_count": null, 613 | "metadata": {}, 614 | "outputs": [], 615 | "source": [ 616 | "import h5py" 617 | ] 618 | }, 619 | { 620 | "cell_type": "code", 621 | "execution_count": null, 622 | "metadata": {}, 623 | "outputs": [], 624 | "source": [ 625 | "f5 = h5py.File(\"u.xy.h5\")\n", 626 | "for nm in f5:\n", 627 | " if not hasattr(f5[nm],\"shape\"):\n", 628 | " continue\n", 629 | " print(\"nm=\",nm)\n", 630 | " d=np.copy(f5[nm])\n", 631 | " plt.figure()\n", 632 | " plt.imshow(d[::-1,:])\n", 633 | " plt.show()" 634 | ] 635 | }, 636 | { 637 | "cell_type": "markdown", 638 | "metadata": {}, 639 | "source": [ 640 | "Unfortunately, each set only has one component of the plot, i.e. the part belonging to one processor. To fix this, we'll\n", 641 | "collect data sets belonging to an iteration and display them all together. In order to make this happen, we'll need the\n", 642 | "x and y values for each component of the grid." 643 | ] 644 | }, 645 | { 646 | "cell_type": "code", 647 | "execution_count": null, 648 | "metadata": {}, 649 | "outputs": [], 650 | "source": [ 651 | "import re" 652 | ] 653 | }, 654 | { 655 | "cell_type": "code", 656 | "execution_count": null, 657 | "metadata": {}, 658 | "outputs": [], 659 | "source": [ 660 | "f5x = h5py.File(\"x.xy.h5\")\n", 661 | "f5y = h5py.File(\"y.xy.h5\")\n", 662 | "x_coords = {}\n", 663 | "y_coords = {}\n", 664 | "for nm in f5x:\n", 665 | " print(nm)\n", 666 | " m = re.search(r'rl=.*c=\\d+',nm)\n", 667 | " if m:\n", 668 | " k = m.group(0)\n", 669 | " x_coords[k]=np.copy(f5x[nm])\n", 670 | "for nm in f5y:\n", 671 | " m = re.search(r'rl=.*c=\\d+',nm)\n", 672 | " if m:\n", 673 | " k = m.group(0)\n", 674 | " y_coords[k]=np.copy(f5y[nm])" 675 | ] 676 | }, 677 | { 678 | "cell_type": "code", 679 | "execution_count": null, 680 | "metadata": {}, 681 | "outputs": [], 682 | "source": [ 683 | "f5 = h5py.File(\"u.xy.h5\")\n", 684 | "mn,mx = None,None\n", 685 | "\n", 686 | "# Compute the min and max\n", 687 | "for nm in f5:\n", 688 | " if not hasattr(f5[nm],\"shape\"):\n", 689 | " continue\n", 690 | " d5 = np.copy(f5[nm])\n", 691 | " tmin = np.min(d5)\n", 692 | " tmax = np.max(d5)\n", 693 | " if mn == None:\n", 694 | " mn,mx = tmin,tmax\n", 695 | " else:\n", 696 | " if tmin < mn:\n", 697 | " mn = tmin\n", 698 | " if tmax > mx:\n", 699 | " mx = tmax\n", 700 | " \n", 701 | "# Collect all the pieces into the d5_tl dictionary\n", 702 | "d5_tl = {} \n", 703 | "for nm in f5:\n", 704 | " if not hasattr(f5[nm],\"shape\"):\n", 705 | " continue\n", 706 | " # Parse the string nm...\n", 707 | " m = re.search(r'it=(\\d+)\\s+tl=\\d+\\s+(rl=(\\d+)\\s+c=(\\d+))',nm)\n", 708 | " # group(1) is the iteration number\n", 709 | " # group(2) is \"rl={number} c={number}\"\n", 710 | " # group(3) is the number in \"rl={number}\"\n", 711 | " # group(4) is the number in \"c={number}\"\n", 712 | " grid = int(m.group(1))\n", 713 | " comp = int(m.group(4))\n", 714 | " k = m.group(2)\n", 715 | " if grid in d5_tl:\n", 716 | " d5_tl[grid][\"x\"] += [x_coords[k]] # append to the x array\n", 717 | " d5_tl[grid][\"y\"] += [y_coords[k]] # append to the y array\n", 718 | " d5_tl[grid][\"D\"] += [f5[nm]] # append to the data array\n", 719 | " else:\n", 720 | " d5_tl[grid] = 
{\n", 721 | " \"x\":[x_coords[k]],\n", 722 | " \"y\":[y_coords[k]],\n", 723 | " \"D\":[f5[nm]]\n", 724 | " }\n", 725 | "\n", 726 | "# Sort the keys so that we display time levels in order\n", 727 | "def keysetf(d):\n", 728 | " a = [] # create an empty list\n", 729 | " for k in d: # for each key in d\n", 730 | " a.append(k) # append it to the list\n", 731 | " return a\n", 732 | "kys = keysetf(d5_tl.keys())\n", 733 | "kys.sort()\n", 734 | "\n", 735 | "# Show the figures, combing data from the same time level\n", 736 | "for index in kys:\n", 737 | " data = d5_tl[index]\n", 738 | " print(\"iteration=\",index)\n", 739 | " plt.figure() # put this before the plots you wish to combine\n", 740 | " plt.pcolor(data[\"x\"][0],data[\"y\"][0],data[\"D\"][0],vmin=mn,vmax=mx)\n", 741 | " plt.pcolor(data[\"x\"][1],data[\"y\"][1],data[\"D\"][1],vmin=mn,vmax=mx)\n", 742 | " plt.show() # show the plot." 743 | ] 744 | }, 745 | { 746 | "cell_type": "markdown", 747 | "metadata": {}, 748 | "source": [ 749 | "

Questions and Exercises:

\n", 750 | "\n", 751 | "* Run the above simulation using a single process instead of two. Do the plotting routines work? What changes did you have to make. What would you need to do to make it work with 3?\n", 752 | "* Run the code at 1/2 the resolution.\n", 753 | "* Position the Guassian wave at a different place on the grid.\n", 754 | "* If you wanted to change the compiler or a compiler flag, how would you go about doing that?\n", 755 | "* If you wanted to add another thorn to the list of thorns to compile, how would you go about doing that?\n", 756 | "* If you wanted to create a thornlist that would check out Cactus under the Foo directory instead of the CactusFW2 directory, how would you do it?" 757 | ] 758 | }, 759 | { 760 | "cell_type": "markdown", 761 | "metadata": { 762 | "collapsed": true 763 | }, 764 | "source": [ 765 | "
This work was sponsored by NSF grants OAC 1550551 and CCF 1539567
" 766 | ] 767 | }, 768 | { 769 | "cell_type": "code", 770 | "execution_count": null, 771 | "metadata": {}, 772 | "outputs": [], 773 | "source": [] 774 | } 775 | ], 776 | "metadata": { 777 | "kernelspec": { 778 | "display_name": "Python 3", 779 | "language": "python", 780 | "name": "python3" 781 | }, 782 | "language_info": { 783 | "codemirror_mode": { 784 | "name": "ipython", 785 | "version": 3 786 | }, 787 | "file_extension": ".py", 788 | "mimetype": "text/x-python", 789 | "name": "python", 790 | "nbconvert_exporter": "python", 791 | "pygments_lexer": "ipython3", 792 | "version": "3.5.3" 793 | } 794 | }, 795 | "nbformat": 4, 796 | "nbformat_minor": 2 797 | } 798 | --------------------------------------------------------------------------------