├── README.md
├── SECURITY.md
├── fourier-neural-operator
│   ├── README.md
│   ├── exampleBurgers1d.m
│   ├── fno
│   │   └── spectralConvolution1dLayer.m
│   └── images
│       ├── figure_0.png
│       └── figure_1.png
├── graph-neural-network-for-heat-transfer-problem
│   ├── Condition.xlsx
│   ├── README.md
│   ├── example_gnn.mlx
│   ├── license.txt
│   ├── params_pre.mat
│   └── ref_images
│       └── result.png
├── hamiltonian-neural-network
│   ├── Demo_Hamiltonian_Spring_with_dlnetwork.m
│   ├── Demo_baseline_Spring.m
│   ├── Pics
│   │   └── 1.png
│   ├── README.md
│   ├── qp_baseline.mat
│   └── trajectory_training.csv
├── inverse-problems-using-physics-informed-neural-networks
│   ├── InversePinnConstantCoef.mlx
│   ├── InversePinnVariableCoef.mlx
│   ├── README.md
│   ├── images
│   │   ├── ConstantCoefSolution.png
│   │   ├── ConstantCoefTraining.png
│   │   ├── Geometry.png
│   │   ├── VariableCoefTraining.png
│   │   ├── VariablePredictedC.png
│   │   └── VariablePredictedSolution.png
│   └── src
│       ├── InversePinnConstantCoef.m
│       └── InversePinnVariableCoef.m
├── license.txt
├── physics-informed-neural-networks-for-heat-transfer
│   ├── Condition.xlsx
│   ├── Example_pinn.mlx
│   ├── README.md
│   ├── pinn_pre.mat
│   └── ref_images
│       ├── LearningCurve.png
│       └── Results.png
├── physics-informed-neural-networks-for-mass-spring-system
│   ├── README.md
│   ├── buildPINNs.m
│   ├── massSpringDamperData.mat
│   ├── plotMassSpringDamperData.m
│   └── plotModelPredictions.m
├── ref
│   └── heat.png
└── universal-differential-equations
    ├── README.md
    ├── example.md
    ├── example.mlx
    └── example_media
        ├── figure_0.png
        ├── figure_1.png
        ├── figure_2.png
        └── figure_3.png

/README.md:
--------------------------------------------------------------------------------
1 | # Scientific and Physics Informed Machine Learning in MATLAB®
2 | 
3 | ![plot of heat equation solution and physics informed neural network approximation](./ref/heat.png)
4 | 
5 | [![Open in MATLAB Online](https://www.mathworks.com/images/responsive/global/open-in-matlab-online.svg)](https://matlab.mathworks.com/open/github/v1?repo=matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples)
6 | 
7 | This repository collates a number of examples demonstrating Scientific Machine Learning (SciML) and Physics Informed Machine Learning.
8 | 
9 | Scientific Machine Learning is the application of Artificial Intelligence (AI) methods to accelerate scientific and engineering discoveries. SciML methods can combine domain-specific knowledge, such as mathematical models of a physical system, with data-driven methods, such as training neural networks. Some characteristic SciML methods include:
10 | 
11 | * [Physics Informed Neural Networks](https://doi.org/10.1016/j.jcp.2018.10.045) (PINNs) train a neural network to solve a differential equation by incorporating the differential equation in the loss function of the neural network training, utilizing the [automatic differentiation](https://uk.mathworks.com/help/deeplearning/ug/deep-learning-with-automatic-differentiation-in-matlab.html) (AD) framework to compute the required derivatives.
12 | * [Neural Ordinary Differential Equations](https://arxiv.org/abs/1806.07366) (Neural ODEs) incorporate solving an ordinary differential equation as a [layer of a neural network](https://uk.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.neuralodelayer.html), allowing parameters that configure the ODE function to be [learnt via gradient descent](https://uk.mathworks.com/help/deeplearning/ref/trainnet.html).
13 | * [Neural Operators](https://www.jmlr.org/papers/volume24/21-1524/21-1524.pdf) use neural networks to learn mappings between function spaces; for example, a neural operator may map the boundary condition of a PDE to the corresponding solution of the PDE with this boundary condition.
14 | * Graph Neural Networks (GNNs) incorporate the graph structure of the data into the neural network architecture itself, such that information can flow between connected vertices of the graph. Graph data structures often underlie the numerical solutions of PDEs, such as the meshes in the [finite-element method](https://uk.mathworks.com/help/pde/ug/basics-of-the-finite-element-method.html).
15 | 
16 | ## Getting Started
17 | 
18 | Download or clone this repository and explore the examples in each sub-directory using MATLAB®.
19 | 
20 | ## MathWorks® Products
21 | 
22 | Requires [MATLAB®](https://uk.mathworks.com/products/matlab.html).
23 | * [Deep Learning Toolbox™](https://uk.mathworks.com/products/deep-learning.html)
24 | * [Partial Differential Equation Toolbox™](https://uk.mathworks.com/products/pde.html)
25 |   * For [Graph Neural Networks for Heat Transfer](./graph-neural-network-for-heat-transfer-problem/), [Inverse Problems Using Physics Informed Neural Networks](./inverse-problems-using-physics-informed-neural-networks/), and [Physics Informed Neural Networks for Heat Transfer](./physics-informed-neural-networks-for-heat-transfer/).
26 | 
27 | # Physics Informed Neural Network (PINNs) examples
28 | 
29 | [Physics Informed Neural Networks](https://uk.mathworks.com/discovery/physics-informed-neural-networks.html) are neural networks that incorporate a differential equation in the loss function to encourage the neural network to approximate the solution of a PDE, or to solve an inverse problem such as identifying terms of the governing PDE given data samples of the solution. Automatic differentiation via [`dlarray`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.html) makes it easy to compute the derivative terms in the PDE via [`dlgradient`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.dlgradient.html) for derivatives of scalar quantities, [`dljacobian`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.dljacobian.html) for computing Jacobians, and [`dllaplacian`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.dllaplacian.html) and [`dldivergence`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.dldivergence.html) for computing Laplacians and divergences respectively.
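
As a minimal illustration of this pattern (a sketch, not code from any of the examples below), the residual of the heat equation $u_t = u_{xx}$ can be assembled with `dlgradient` for an assumed `dlnetwork` `net` mapping points $(x,t)$ to $u(x,t)$; the function must be evaluated inside `dlfeval` so the derivatives can be traced.

```matlab
function loss = heatResidualLoss(net, x, t)
    % x and t are formatted dlarrays of sample points, e.g. dlarray(x,"CB").
    u = forward(net, cat(1, x, t));

    % First derivatives, keeping the trace for higher-order derivatives.
    [ux, ut] = dlgradient(sum(u, "all"), x, t, EnableHigherDerivatives=true);

    % Second derivative in x.
    uxx = dlgradient(sum(ux, "all"), x);

    % Mean squared residual of u_t = u_xx at the sample points.
    residual = ut - uxx;
    loss = mean(residual.^2, "all");
end
```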
30 | 
31 | Explore the following examples on PINNs.
32 | 
33 | * [Physics Informed Neural Networks for Mass Spring System](./physics-informed-neural-networks-for-mass-spring-system/)
34 | * [Physics Informed Neural Networks for Heat Transfer](./physics-informed-neural-networks-for-heat-transfer/)
35 | * [Inverse Problems using PINNs](./inverse-problems-using-physics-informed-neural-networks/)
36 | * [Solve PDE Using Physics-Informed Neural Network](https://uk.mathworks.com/help/deeplearning/ug/solve-partial-differential-equations-with-lbfgs-method-and-deep-learning.html)
37 | * [Solve Poisson Equation on Unit Disk Using Physics-Informed Neural Networks](https://uk.mathworks.com/help/pde/ug/solve-poisson-equation-on-unit-disk-using-pinn.html)
38 | 
39 | The following video content for PINNs is also available:
40 | 
41 | * [Using Physics-Informed Machine Learning to Improve Predictive Model Accuracy with Dr. Sam Raymond](https://uk.mathworks.com/company/user_stories/case-studies/using-physics-informed-machine-learning-to-improve-predictive-model-accuracy.html)
42 | * [Physics-Informed Neural Networks - Podcast hosted by Jousef Murad with Conor Daly](https://youtu.be/eKzHKGVIZMk?feature=shared)
43 | * [Physics-Informed Neural Networks with MATLAB - Live Coding Session hosted by Jousef Murad with Conor Daly](https://www.youtube.com/live/7ZdALJ2bIKA?feature=shared)
44 | * [Physics-Informed Neural Networks with MATLAB - Deep Dive Session hosted by Jousef Murad with Conor Daly](https://youtu.be/RTR_RklvAUQ?feature=shared)
45 | 
46 | # Neural Differential Equation examples
47 | 
48 | Neural ordinary differential equations incorporate solving an ODE as a fundamental operation in a model, for example as a layer in a neural network such as [`neuralODELayer`](https://uk.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.neuralodelayer.html), or via an ODE solver such as [`dlode45`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.dlode45.html) that integrates with the automatic differentiation framework and the [`dlarray`](https://uk.mathworks.com/help/deeplearning/ref/dlarray.html) data type (see the short sketch after the list below). Neural ODE solvers can be used in methods relevant to engineering, such as [neural state space](https://uk.mathworks.com/help/ident/ug/what-are-neural-state-space-models.html) models that can be trained with [`idNeuralStateSpace`](https://uk.mathworks.com/help/ident/ref/idneuralstatespace.html) and [`nlssest`](https://uk.mathworks.com/help/ident/ref/nlssest.html) from [System Identification Toolbox™](https://www.mathworks.com/products/sysid.html).
49 | 
50 | Explore the following examples on neural ODEs.
51 | 
52 | * [Universal Differential Equations](./universal-differential-equations/)
53 | * [Dynamical System Modeling Using Neural ODE](https://uk.mathworks.com/help/deeplearning/ug/dynamical-system-modeling-using-neural-ode.html)
54 | * [Train Latent ODE Network with Irregularly Sampled Time-Series Data](https://uk.mathworks.com/help/deeplearning/ug/train-latent-ode-network-with-irregularly-sampled-time-series-data.html)
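
As a minimal illustration of the `dlode45` interface (a sketch with assumed values, not taken from the examples above): the solution of a linear ODE is made differentiable with respect to its parameters, so that `theta` could be learnt by gradient descent when this is evaluated inside `dlfeval` with `dlgradient`.

```matlab
% Right-hand side dy/dt = f(t, y, theta), parameterized by a matrix theta.
odefun = @(t, y, theta) theta * y;

theta = dlarray(randn(2, 2));   % learnable parameters of the ODE function
y0 = dlarray([1; 0]);           % initial condition
tspan = [0 1];

% Solve the ODE; the solution at tspan(end) is differentiable with
% respect to theta and y0.
y1 = dlode45(odefun, tspan, y0, theta, DataFormat="CB");
```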
55 | 
56 | # Neural Operator examples
57 | 
58 | Neural operators are typically used to learn mappings between infinite-dimensional function spaces. Many PDE problems can be phrased in terms of operators. For example, the parameterised differential operator $L_a$ defined by $L_a u := \nabla \cdot \left(a \nabla u\right)$ can be used to specify a Poisson problem $L_a u = \nabla \cdot \left(a \nabla u \right) = f$ on a domain $\Omega$ as an operator problem. Given appropriate sets of functions $f \in \mathcal{V}$ and $a \in \mathcal{W}$, let $\mathcal{U}$ denote the set of solutions $u$ satisfying the Dirichlet boundary condition $u = 0$ on $\partial \Omega$ and $L_a u = f$ on $\Omega$ for some $f \in \mathcal{V}$, $a \in \mathcal{W}$. The solution operator $G:\mathcal{V} \times \mathcal{W} \rightarrow \mathcal{U}$ is defined as $G(f,a) = u$ such that $L_a u = f$. Neural operator methods train a neural network $G_\theta$ to approximate the operator $G$. A trained neural operator $G_\theta$ can then be used to approximate the solution $u$ to $L_a u = f$ by evaluating $G_\theta(f,a)$ for any $f \in \mathcal{V}$, $a \in \mathcal{W}$.
59 | 
60 | * [Fourier Neural Operator](./fourier-neural-operator/)
61 | 
62 | # Graph Neural Network (GNNs) examples
63 | 
64 | Graph neural networks are neural network architectures designed to operate naturally on graph data $G = (E,V)$, where $V = \{v_1, \ldots, v_n\}$ is a set of $n$ vertices and $E$ is a set of edges $e = (v_i,v_j)$ specifying that vertices $v_i$ and $v_j$ are connected. Both the vertices and edges may have associated features. Graph neural networks use operations natural to graph data, such as graph convolutions, which generalise a standard 2d discrete convolution (see the sketch after the list below).
65 | 
66 | Explore the following examples on GNNs.
67 | 
68 | * [Graph Neural Networks for Heat Transfer](./graph-neural-network-for-heat-transfer-problem/)
69 | * [Multivariate Time Series Anomaly Detection Using Graph Neural Network](https://uk.mathworks.com/help/deeplearning/ug/multivariate-time-series-anomaly-detection-using-graph-neural-network.html)
70 | * [Node Classification Using Graph Convolutional Network](https://uk.mathworks.com/help/deeplearning/ug/node-classification-using-graph-convolutional-network.html)
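
As a small illustration of a graph convolution (a sketch with assumed variable names and sizes, not code from the examples above): vertex features are aggregated over neighbourhoods using a normalized adjacency matrix and then passed through a shared learnable linear map.

```matlab
% Assumed setup: A is an n-by-n adjacency matrix, X an n-by-c matrix of
% vertex features, and W a c-by-h learnable weight matrix.
n = 4; c = 3; h = 8;
A = [0 1 0 1; 1 0 1 0; 0 1 0 0; 1 0 0 0];
X = randn(n, c);
W = randn(c, h);

Ahat = A + eye(n);                        % add self-connections
Dhat = diag(sum(Ahat, 2));                % degree matrix
Anorm = Dhat^(-1/2) * Ahat * Dhat^(-1/2); % symmetric normalization

Z = max(0, Anorm * X * W);                % one graph convolution step + ReLU
```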
71 | 
72 | # Hamiltonian Neural Network examples
73 | 
74 | The Hamiltonian neural network method trains a neural network to approximate the Hamiltonian of a mechanical system, as specified in the [Hamiltonian formulation of mechanics](https://en.wikipedia.org/wiki/Hamiltonian_mechanics). This formulation describes a mechanical system in terms of generalized coordinates $(q,p)$, where $q$ is a vector representation of the position of a point mass and $p$ is a vector representation of the momentum. Hamiltonian mechanics specifies that the evolution of $q(t)$ and $p(t)$ in time is given by the ODE system $\frac{\mathrm{d}q}{\mathrm{d}t} = \frac{\partial H}{\partial p}$, $\frac{\mathrm{d}p}{\mathrm{d}t} = - \frac{\partial H}{\partial q}$, where $H(p,q,t)$ is called the Hamiltonian. By approximating $H$ with a neural network $H_\theta$, it is possible to compute $\frac{\partial H_\theta}{\partial p}, \frac{\partial H_\theta}{\partial q}$ and impose a loss based on the ODE system above, similar to the method of PINNs.
75 | 
76 | * [Hamiltonian Neural Network](./hamiltonian-neural-network/)
77 | 
78 | ## License
79 | The license is available in the [license.txt](./license.txt) file in this GitHub repository.
80 | 
81 | ## Community Support
82 | [MATLAB Central](https://www.mathworks.com/matlabcentral)
83 | 
84 | Copyright 2024 The MathWorks, Inc.
85 | 
--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------
1 | # Reporting Security Vulnerabilities
2 | 
3 | If you believe you have discovered a security vulnerability, please report it to
4 | [security@mathworks.com](mailto:security@mathworks.com). Please see
5 | [MathWorks Vulnerability Disclosure Policy for Security Researchers](https://www.mathworks.com/company/aboutus/policies_statements/vulnerability-disclosure-policy.html)
6 | for additional information.
--------------------------------------------------------------------------------
/fourier-neural-operator/README.md:
--------------------------------------------------------------------------------
1 | # Fourier Neural Operator
2 | 
3 | This example was originally hosted [here](https://github.com/matlab-deep-learning/fourier-neural-operator).
4 | 
5 | [![View on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://uk.mathworks.com/matlabcentral/fileexchange/123165-fourier-neural-operator)
6 | 
7 | The Fourier Neural Operator (FNO) [1] is a neural operator with an integral kernel parameterized in Fourier space. This allows for an expressive and efficient architecture. Applications of the FNO include weather forecasting and, more generally, finding efficient solutions to the Navier-Stokes equations which govern fluid flow.
8 | 
9 | ## Setup
10 | Add the `fno` directory to the path.
11 | 
12 | ```matlab
13 | addpath(genpath('fno'));
14 | ```
15 | 
16 | ## Requirements
17 | 
18 | Requires:
19 | - [MATLAB](https://www.mathworks.com/products/matlab.html) (R2021b or newer)
20 | - [Deep Learning Toolbox™](https://www.mathworks.com/products/deep-learning.html)
21 | 
22 | ## References
23 | [1] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima
24 | Anandkumar. Fourier Neural Operator for Parametric Partial Differential Equations. In International Conference on
25 | Learning Representations (ICLR), 2021. (https://openreview.net/forum?id=c8P9NQVtmnO)
26 | 
27 | # Example: Fourier Neural Operator for 1d Burgers' Equation
28 | 
29 | In this example we apply the Fourier Neural Operator to learn the one-dimensional Burgers' equation with the following definition:
30 | 
31 | 
32 | $$ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \frac{1}{\textrm{Re}} \frac{\partial^2 u}{\partial x^2}, \quad x \in (0,1), \, t \in (0,1]$$
33 | 
34 | $$ u(x,0) = u_0 (x), \quad x \in (0,1) $$
35 | 
36 | where $u=u(x,t)$ and $\textrm{Re}$ is the Reynolds number. Periodic boundary conditions are imposed across the spatial domain. We learn the operator mapping the initial condition $u_0$ to the solution at time $t=1$: $u_0\longmapsto u\left(x,1\right)$.
37 | 
38 | ## Data preparation
39 | 
40 | We use the file `burgers_data_R10.mat`, which contains initial velocities $u_0$ and solutions $u(x,1)$ of the Burgers' equation. We then use these as training inputs and targets respectively. The network inputs include the spatial domain $x \in (0,1)$ at the desired discretization. In this example we choose a grid size of $h = 2^{10}$.
41 | 
42 | ```matlab:Code
43 | % Setup.
44 | addpath(genpath('fno'));
45 | 
46 | % Download training data.
47 | dataDir = fullfile('data');
48 | if ~isfolder(dataDir)
49 |     mkdir(dataDir);
50 | end
51 | dataFile = fullfile(dataDir,'burgers_data_R10.mat');
52 | if ~exist(dataFile, 'file')
53 |     location = 'https://ssd.mathworks.com/supportfiles/nnet/data/burgers1d/burgers_data_R10.mat';
54 |     websave(dataFile, location);
55 | end
56 | data = load(dataFile, 'a', 'u');
57 | x = data.a;
58 | t = data.u;
59 | 
60 | % Specify the number of observations in training and test data, respectively.
61 | numTrain = 1e3;
62 | numTest = 1e2;
63 | 
64 | % Specify grid size and downsampling factor.
65 | h = 2^10;
66 | n = size(x,2);
67 | ns = floor(n./h);
68 | 
69 | % Downsample the data for training.
70 | xTrain = x(1:numTrain, 1:ns:n);
71 | tTrain = t(1:numTrain, 1:ns:n);
72 | xTest = x(end-numTest+1:end, 1:ns:n);
73 | tTest = t(end-numTest+1:end, 1:ns:n);
74 | 
75 | % Define the grid over the spatial domain x.
76 | xmax = 1;
77 | xgrid = linspace(0, xmax, h);
78 | 
79 | % Combine initial velocities and spatial grid to create network
80 | % predictors.
81 | xTrain = cat(3, xTrain, repmat(xgrid, [numTrain 1]));
82 | xTest = cat(3, xTest, repmat(xgrid, [numTest 1]));
83 | ```
84 | 
85 | ## Define network architecture
86 | 
87 | Here we create a `dlnetwork` for the Burgers' equation problem. The network accepts inputs of dimension `[h 2 miniBatchSize]`, and returns outputs of dimension `[h 1 miniBatchSize]`. The network consists of multiple blocks which combine spectral convolution with regular, linear convolution. The convolution in Fourier space filters out higher order oscillations in the solution, while the linear convolution learns local correlations.
88 | 
89 | ```matlab:Code
90 | numModes = 16;
91 | width = 64;
92 | 
93 | lg = layerGraph([ ...
94 |     convolution1dLayer(1, width, Name='fc0')
95 | 
96 |     spectralConvolution1dLayer(width, numModes, Name='specConv1')
97 |     additionLayer(2, Name='add1')
98 |     reluLayer(Name='relu1')
99 | 
100 |     spectralConvolution1dLayer(width, numModes, Name='specConv2')
101 |     additionLayer(2, Name='add2')
102 |     reluLayer(Name='relu2')
103 | 
104 |     spectralConvolution1dLayer(width, numModes, Name='specConv3')
105 |     additionLayer(2, Name='add3')
106 |     reluLayer(Name='relu3')
107 | 
108 |     spectralConvolution1dLayer(width, numModes, Name='specConv4')
109 |     additionLayer(2, Name='add4')
110 | 
111 |     convolution1dLayer(1, 128, Name='fc5')
112 |     reluLayer(Name='relu5')
113 |     convolution1dLayer(1, 1, Name='fc6')
114 |     ]);
115 | 
116 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc1'));
117 | lg = connectLayers(lg, 'fc0', 'fc1');
118 | lg = connectLayers(lg, 'fc1', 'add1/in2');
119 | 
120 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc2'));
121 | lg = connectLayers(lg, 'relu1', 'fc2');
122 | lg = connectLayers(lg, 'fc2', 'add2/in2');
123 | 
124 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc3'));
125 | lg = connectLayers(lg, 'relu2', 'fc3');
126 | lg = connectLayers(lg, 'fc3', 'add3/in2');
127 | 
128 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc4'));
129 | lg = connectLayers(lg, 'relu3', 'fc4');
130 | lg = connectLayers(lg, 'fc4', 'add4/in2');
131 | 
132 | numInputChannels = 2;
133 | XInit = dlarray(ones([h numInputChannels 1]), 'SCB');
134 | net = dlnetwork(lg, XInit);
135 | 
136 | analyzeNetwork(net)
137 | ```
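
Conceptually, each `spectralConvolution1dLayer` (implemented in [`fno/spectralConvolution1dLayer.m`](./fno/spectralConvolution1dLayer.m)) transforms its input along the spatial dimension into Fourier space, keeps only the lowest `numModes` modes, applies a learnable complex weight to each retained mode, and transforms back. The following is a schematic sketch of that operation; it ignores the one-sided FFT bookkeeping and normalization the actual layer performs, and assumes `x` of size `[N Cin B]` and weights `W` of size `[Cin Cout numModes]`.

```matlab
xft = fft(x, [], 1);                          % spatial dimension -> Fourier modes
xft = xft(1:numModes, :, :);                  % keep only the lowest modes
yft = pagemtimes(permute(xft, [3 2 1]), W);   % learnable channel mixing per mode
yft = permute(yft, [3 2 1]);
yft(numModes+1:N, :, :) = 0;                  % zero the discarded modes
y = real(ifft(yft, [], 1));                   % back to physical space
```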
138 | 
139 | ## Training options
140 | 
141 | The network is trained using the standard SGDM algorithm, where the learn rate is decreased by a factor of `gamma` every `stepSize` iterations.
142 | 
143 | ```matlab:Code
144 | executionEnvironment = "gpu";
145 | 
146 | batchSize = 20;
147 | learnRate = 1e-3;
148 | momentum = 0.9;
149 | 
150 | numEpochs = 20;
151 | stepSize = 100;
152 | gamma = 0.5;
153 | expNum = 1;
154 | checkpoint = false;
155 | expDir = sprintf( 'checkpoints/run%g', expNum );
156 | if ~isfolder( expDir ) && checkpoint
157 |     mkdir(expDir)
158 | end
159 | 
160 | vel = [];
161 | totalIter = 0;
162 | 
163 | numTrain = size(xTrain,1);
164 | numIterPerEpoch = floor(numTrain./batchSize);
165 | ```
166 | 
167 | ## Training loop
168 | 
169 | Train the network.
170 | 
171 | ```matlab:Code
172 | if executionEnvironment == "gpu" && canUseGPU
173 |     xTrain = gpuArray(xTrain);
174 |     xTest = gpuArray(xTest);
175 | end
176 | 
177 | start = tic;
178 | figure;
179 | clf
180 | lineLossTrain = animatedline('Color', [0 0.4470 0.7410]);
181 | lineLossTest = animatedline('Color', 'k', 'LineStyle', '--');
182 | ylim([0 inf])
183 | xlabel("Iteration")
184 | ylabel("Loss")
185 | grid on
186 | 
187 | % Compute initial validation loss.
188 | y = net.predict( dlarray(xTest, 'BSC') );
189 | yTest = extractdata(permute(stripdims(y), [3 1 2]));
190 | relLossTest = relativeL2Loss(yTest , tTest);
191 | addpoints(lineLossTest, 0, double(relLossTest/size(xTest,1)))
192 | 
193 | % Main loop.
194 | lossfun = dlaccelerate(@modelLoss);
195 | for epoch = 1:numEpochs
196 |     % Shuffle the data.
197 |     dataIdx = randperm(numTrain);
198 | 
199 |     for iter = 1:numIterPerEpoch
200 |         % Get mini-batch data using the shuffled indices.
201 |         batchIdx = (1:batchSize) + (iter-1)*batchSize;
202 |         idx = dataIdx(batchIdx);
203 |         X = dlarray( xTrain(idx, :, :), 'BSC' );
204 |         T = tTrain(idx, :);
205 | 
206 |         % Compute loss and gradients.
207 |         [loss, dnet] = dlfeval(lossfun, X, T, net);
208 | 
209 |         % Update model parameters using SGDM update rule.
210 |         [net, vel] = sgdmupdate(net, dnet, vel, learnRate, momentum);
211 | 
212 |         % Plot training progress.
213 |         totalIter = totalIter + 1;
214 |         D = duration(0,0,toc(start),'Format','hh:mm:ss');
215 |         addpoints(lineLossTrain,totalIter,double(extractdata(loss/batchSize)))
216 |         title("Epoch: " + epoch + ", Elapsed: " + string(D))
217 |         drawnow
218 | 
219 |         % Learn rate scheduling.
220 |         if mod(totalIter, stepSize) == 0
221 |             learnRate = gamma.*learnRate;
222 |         end
223 |     end
224 |     % Compute validation loss and MSE.
225 |     y = net.predict( dlarray(xTest, 'BSC') );
226 |     yTest = extractdata(permute(stripdims(y), [3 1 2]));
227 |     relLossTest = relativeL2Loss( yTest , tTest );
228 |     mseTest = mean( (yTest(:) - tTest(:)).^2 );
229 | 
230 |     % Display progress.
231 |     D = duration(0,0,toc(start),'Format','hh:mm:ss');
232 |     numTest = size(xTest, 1);
233 |     fprintf('Epoch = %g, train loss = %g, val loss = %g, val mse = %g, total time = %s. \n', ...
234 |         epoch, extractdata(loss)/batchSize, relLossTest/numTest, mseTest/numTest, string(D));
235 |     addpoints(lineLossTest, totalIter, double(relLossTest/numTest))
236 | 
237 |     % Checkpoints.
238 |     if checkpoint
239 |         filename = sprintf('checkpoints/run%g/epoch%g.mat', expNum, epoch);
240 |         save(filename, 'net', 'epoch', 'vel', 'totalIter', 'relLossTest', 'mseTest', 'learnRate');
241 |     end
242 | end
243 | ```
244 | 
245 | ```text:Output
246 | Epoch = 1, train loss = 0.226405, val loss = 0.175389, val mse = 7.73286e-05, total time = 00:00:13. 
247 | Epoch = 2, train loss = 0.153691, val loss = 0.145805, val mse = 5.99213e-05, total time = 00:00:22. 
248 | Epoch = 3, train loss = 0.0923258, val loss = 0.0904608, val mse = 2.49174e-05, total time = 00:00:27. 
249 | Epoch = 4, train loss = 0.102122, val loss = 0.0639219, val mse = 1.43723e-05, total time = 00:00:32. 
250 | Epoch = 5, train loss = 0.0346076, val loss = 0.0393621, val mse = 9.33419e-06, total time = 00:00:36. 
251 | Epoch = 6, train loss = 0.0361029, val loss = 0.032303, val mse = 7.10724e-06, total time = 00:00:45. 
252 | Epoch = 7, train loss = 0.0270364, val loss = 0.0296161, val mse = 6.43696e-06, total time = 00:00:50. 
253 | Epoch = 8, train loss = 0.0263171, val loss = 0.0283881, val mse = 5.92292e-06, total time = 00:00:54. 
254 | Epoch = 9, train loss = 0.0248211, val loss = 0.0261364, val mse = 5.54218e-06, total time = 00:00:58. 
255 | Epoch = 10, train loss = 0.0243392, val loss = 0.0253596, val mse = 5.32946e-06, total time = 00:01:03. 
256 | Epoch = 11, train loss = 0.0236119, val loss = 0.0250861, val mse = 5.22886e-06, total time = 00:01:07. 
257 | Epoch = 12, train loss = 0.023318, val loss = 0.024752, val mse = 5.12552e-06, total time = 00:01:11. 
258 | Epoch = 13, train loss = 0.0230901, val loss = 0.0243369, val mse = 5.04185e-06, total time = 00:01:16. 
259 | Epoch = 14, train loss = 0.0229644, val loss = 0.0241713, val mse = 4.99882e-06, total time = 00:01:20. 
260 | Epoch = 15, train loss = 0.0228391, val loss = 0.0240904, val mse = 4.99516e-06, total time = 00:01:25. 
261 | Epoch = 16, train loss = 0.022768, val loss = 0.0240143, val mse = 4.97173e-06, total time = 00:01:29. 
262 | Epoch = 17, train loss = 0.0228152, val loss = 0.023916, val mse = 4.95474e-06, total time = 00:01:33. 
263 | Epoch = 18, train loss = 0.022787, val loss = 0.0238792, val mse = 4.94643e-06, total time = 00:01:38. 
264 | Epoch = 19, train loss = 0.0227602, val loss = 0.023865, val mse = 4.93665e-06, total time = 00:01:42. 
265 | Epoch = 20, train loss = 0.0227464, val loss = 0.0238451, val mse = 4.93358e-06, total time = 00:01:46. 
266 | ```
267 | 
268 | ![figure_0.png](images/figure_0.png)
269 | 
270 | ## Test on unseen, higher resolution data
271 | 
272 | Here we take the trained network and test on unseen data with a higher spatial resolution than the training data. This is an example of zero-shot super-resolution.
273 | 
274 | ```matlab:Code
275 | gridHighRes = linspace(0, xmax, n);
276 | 
277 | idxToPlot = numTrain+(1:4);
278 | figure;
279 | for p = 1:4
280 |     xn = dlarray(cat(1, x(idxToPlot(p),:), gridHighRes),'CSB');
281 |     yn = predict(net, xn);
282 | 
283 |     subplot(2, 2, p)
284 |     plot(gridHighRes, t(idxToPlot(p),:)), hold on, plot(gridHighRes, extractdata(yn))
285 |     axis tight
286 |     xlabel('x')
287 |     ylabel('U')
288 | end
289 | ```
290 | 
291 | ![figure_1.png](images/figure_1.png)
292 | 
293 | ### Helper functions
294 | 
295 | ```matlab:Code
296 | function [loss, grad] = modelLoss(x, t, net)
297 |     y = net.forward(x);
298 |     y = permute(stripdims(y), [3 1 2]);
299 |     y = stripdims(y);
300 | 
301 |     loss = relativeL2Loss(y, t);
302 | 
303 |     grad = dlgradient(loss, net.Learnables);
304 | end
305 | 
306 | function loss = relativeL2Loss(y, t)
307 |     diffNorms = normFcn( (y - t) );
308 |     tNorms = normFcn( t );
309 | 
310 |     loss = sum(diffNorms./tNorms, 1);
311 | end
312 | 
313 | function n = normFcn(x)
314 |     n = sqrt( sum(x.^2, 2) );
315 | end
316 | ```
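
For reference, `relativeL2Loss` above computes the relative $L^2$ error summed over the $B$ observations in a batch, with the norm taken over the spatial points of each observation:

$$ L(y,t) = \sum_{b=1}^{B} \frac{\left\| y_b - t_b \right\|_2}{\left\| t_b \right\|_2} $$

The train and validation losses printed during training divide this sum by the number of observations to report a mean relative error.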
317 | 
318 | Copyright 2022-2023 The MathWorks, Inc.
--------------------------------------------------------------------------------
/fourier-neural-operator/exampleBurgers1d.m:
--------------------------------------------------------------------------------
1 | %% Fourier Neural Operator for 1d Burgers' Equation
2 | % In this example we apply the Fourier Neural Operator
3 | % to learn the one-dimensional Burgers' equation with the following
4 | % definition:
5 | %
6 | % $\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \frac{1}{Re}\frac{\partial^2
7 | % u}{\partial x^2}$, $x \in (0,1),\space t \in (0,1]$
8 | %
9 | % $u(x,0) = u_0(x)$, $x \in (0,1)$
10 | %
11 | % where $u=u\left(x,t\right)$ and $Re$ is the Reynolds number. Periodic boundary
12 | % conditions are imposed across the spatial domain. We learn the operator mapping
13 | % the initial condition $u_0$ to the solution at time $t=1$: $u_0 \longmapsto
14 | % u\left(x,1\right)$.
15 | %% Data preparation
16 | % We use the burgers_data_R10.mat file, which contains initial velocities $u_0$ and
17 | % solutions $u\left(x,1\right)$ of the Burgers' equation, which we use as training
18 | % inputs and targets respectively. The network inputs also consist of the spatial
19 | % domain $x \in \left(0,1\right)$ at the desired discretization. In this example we
20 | % choose a grid size of $h=2^{10}$.
21 | 
22 | % Download training data.
23 | dataDir = fullfile('data');
24 | if ~isfolder(dataDir)
25 |     mkdir(dataDir);
26 | end
27 | dataFile = fullfile(dataDir,'burgers_data_R10.mat');
28 | if ~exist(dataFile, 'file')
29 |     location = 'https://ssd.mathworks.com/supportfiles/nnet/data/burgers1d/burgers_data_R10.mat';
30 |     websave(dataFile, location);
31 | end
32 | data = load(dataFile, 'a', 'u');
33 | x = data.a;
34 | t = data.u;
35 | 
36 | % Setup.
37 | addpath(genpath('fno'));
38 | 
39 | % Specify the number of observations in training and test data, respectively.
40 | numTrain = 1e3;
41 | numTest = 1e2;
42 | 
43 | % Specify grid size and downsampling factor.
44 | h = 2^10;
45 | n = size(x,2);
46 | ns = floor(n./h);
47 | 
48 | % Downsample the data for training.
49 | xTrain = x(1:numTrain, 1:ns:n);
50 | tTrain = t(1:numTrain, 1:ns:n);
51 | xTest = x(end-numTest+1:end, 1:ns:n);
52 | tTest = t(end-numTest+1:end, 1:ns:n);
53 | 
54 | % Define the grid over the spatial domain x.
55 | xmax = 1;
56 | xgrid = linspace(0, xmax, h);
57 | 
58 | % Combine initial velocities and spatial grid to create network
59 | % predictors.
60 | xTrain = cat(3, xTrain, repmat(xgrid, [numTrain 1]));
61 | xTest = cat(3, xTest, repmat(xgrid, [numTest 1]));
62 | %% Define network architecture
63 | % Here we create a |dlnetwork| for the Burgers' equation problem. The network
64 | % accepts inputs of dimension |[h 2 miniBatchSize]|, and returns outputs of dimension
65 | % |[h 1 miniBatchSize]|. The network consists of multiple blocks which combine
66 | % spectral convolution with regular, linear convolution. The convolution in Fourier
67 | % space filters out higher order oscillations in the solution, while the linear
68 | % convolution learns local correlations.
69 | 
70 | numModes = 16;
71 | width = 64;
72 | 
73 | lg = layerGraph([ ...
74 |     convolution1dLayer(1, width, Name='fc0')
75 | 
76 |     spectralConvolution1dLayer(width, numModes, Name='specConv1')
77 |     additionLayer(2, Name='add1')
78 |     reluLayer(Name='relu1')
79 | 
80 |     spectralConvolution1dLayer(width, numModes, Name='specConv2')
81 |     additionLayer(2, Name='add2')
82 |     reluLayer(Name='relu2')
83 | 
84 |     spectralConvolution1dLayer(width, numModes, Name='specConv3')
85 |     additionLayer(2, Name='add3')
86 |     reluLayer(Name='relu3')
87 | 
88 |     spectralConvolution1dLayer(width, numModes, Name='specConv4')
89 |     additionLayer(2, Name='add4')
90 | 
91 |     convolution1dLayer(1, 128, Name='fc5')
92 |     reluLayer(Name='relu5')
93 |     convolution1dLayer(1, 1, Name='fc6')
94 |     ]);
95 | 
96 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc1'));
97 | lg = connectLayers(lg, 'fc0', 'fc1');
98 | lg = connectLayers(lg, 'fc1', 'add1/in2');
99 | 
100 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc2'));
101 | lg = connectLayers(lg, 'relu1', 'fc2');
102 | lg = connectLayers(lg, 'fc2', 'add2/in2');
103 | 
104 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc3'));
105 | lg = connectLayers(lg, 'relu2', 'fc3');
106 | lg = connectLayers(lg, 'fc3', 'add3/in2');
107 | 
108 | lg = addLayers(lg, convolution1dLayer(1, width, Name='fc4'));
109 | lg = connectLayers(lg, 'relu3', 'fc4');
110 | lg = connectLayers(lg, 'fc4', 'add4/in2');
111 | 
112 | numInputChannels = 2;
113 | XInit = dlarray(ones([h numInputChannels 1]), 'SCB');
114 | net = dlnetwork(lg, XInit);
115 | 
116 | analyzeNetwork(net)
117 | %% Training options
118 | % The network is trained using the standard SGDM algorithm, where the learn
119 | % rate is decreased by a factor of gamma every stepSize iterations.
120 | executionEnvironment = "gpu";
121 | 
122 | batchSize = 20;
123 | learnRate = 1e-3;
124 | momentum = 0.9;
125 | 
126 | numEpochs = 20;
127 | stepSize = 100;
128 | gamma = 0.5;
129 | expNum = 1;
130 | checkpoint = false;
131 | expDir = sprintf( 'checkpoints/run%g', expNum );
132 | if ~isfolder( expDir ) && checkpoint
133 |     mkdir(expDir)
134 | end
135 | 
136 | vel = [];
137 | totalIter = 0;
138 | 
139 | numTrain = size(xTrain,1);
140 | numIterPerEpoch = floor(numTrain./batchSize);
141 | %% Training loop
142 | % Train the network.
143 | 
144 | if executionEnvironment == "gpu" && canUseGPU
145 |     xTrain = gpuArray(xTrain);
146 |     xTest = gpuArray(xTest);
147 | end
148 | 
149 | start = tic;
150 | figure;
151 | clf
152 | lineLossTrain = animatedline('Color', [0 0.4470 0.7410]);
153 | lineLossTest = animatedline('Color', 'k', 'LineStyle', '--');
154 | ylim([0 inf])
155 | xlabel("Iteration")
156 | ylabel("Loss")
157 | grid on
158 | 
159 | % Compute initial validation loss.
160 | y = net.predict( dlarray(xTest, 'BSC') );
161 | yTest = extractdata(permute(stripdims(y), [3 1 2]));
162 | relLossTest = relativeL2Loss(yTest , tTest);
163 | addpoints(lineLossTest, 0, double(relLossTest/size(xTest,1)))
164 | 
165 | % Main loop.
166 | lossfun = dlaccelerate(@modelLoss);
167 | for epoch = 1:numEpochs
168 |     % Shuffle the data.
169 |     dataIdx = randperm(numTrain);
170 | 
171 |     for iter = 1:numIterPerEpoch
172 |         % Get mini-batch data using the shuffled indices.
173 |         batchIdx = (1:batchSize) + (iter-1)*batchSize;
174 |         idx = dataIdx(batchIdx);
175 |         X = dlarray( xTrain(idx, :, :), 'BSC' );
176 |         T = tTrain(idx, :);
177 | 
178 |         % Compute loss and gradients.
179 |         [loss, dnet] = dlfeval(lossfun, X, T, net);
180 | 
181 |         % Update model parameters using SGDM update rule.
182 |         [net, vel] = sgdmupdate(net, dnet, vel, learnRate, momentum);
183 | 
184 |         % Plot training progress.
185 | totalIter = totalIter + 1; 186 | D = duration(0,0,toc(start),'Format','hh:mm:ss'); 187 | addpoints(lineLossTrain,totalIter,double(extractdata(loss/batchSize))) 188 | title("Epoch: " + epoch + ", Elapsed: " + string(D)) 189 | drawnow 190 | 191 | % Learn rate scheduling. 192 | if mod(totalIter, stepSize) == 0 193 | learnRate = gamma.*learnRate; 194 | end 195 | end 196 | % Compute validation loss and MSE. 197 | y = net.predict( dlarray(xTest, 'BSC') ); 198 | yTest = extractdata(permute(stripdims(y), [3 1 2])); 199 | relLossTest = relativeL2Loss( yTest , tTest ); 200 | mseTest = mean( (yTest(:) - tTest(:)).^2 ); 201 | 202 | % Display progress. 203 | D = duration(0,0,toc(start),'Format','hh:mm:ss'); 204 | numTest = size(xTest, 1); 205 | fprintf('Epoch = %g, train loss = %g, val loss = %g, val mse = %g, total time = %s. \n', ... 206 | epoch, extractdata(loss)/batchSize, relLossTest/numTest, mseTest/numTest, string(D)); 207 | addpoints(lineLossTest, totalIter, double(relLossTest/numTest)) 208 | 209 | % Checkpoints. 210 | if checkpoint 211 | filename = sprintf('checkpoints/run%g/epoch%g.mat', expNum, epoch); 212 | save(filename, 'net', 'epoch', 'vel', 'totalIter', 'relLossTest', 'mseTest', 'learnRate'); 213 | end 214 | end 215 | %% Test on unseen, higher resolution data 216 | % Here we take the trained network and test on unseen data with a higher spatial 217 | % resolution than the training data. This is an example of zero-shot super-resolution. 218 | 219 | gridHighRes = linspace(0, xmax, n); 220 | 221 | idxToPlot = numTrain+(1:4); 222 | figure; 223 | for p = 1:4 224 | xn = dlarray(cat(1, x(idxToPlot(p),:), gridHighRes),'CSB'); 225 | yn = predict(net, xn); 226 | 227 | subplot(2, 2, p) 228 | plot(gridHighRes, t(idxToPlot(p),:)), hold on, plot(gridHighRes, extractdata(yn)) 229 | axis tight 230 | xlabel('x') 231 | ylabel('U') 232 | end 233 | %% Helper functions 234 | 235 | function [loss, grad] = modelLoss(x, t, net) 236 | y = net.forward(x); 237 | y = permute(stripdims(y), [3 1 2]); 238 | y = stripdims(y); 239 | 240 | loss = relativeL2Loss(y, t); 241 | 242 | grad = dlgradient(loss, net.Learnables); 243 | end 244 | 245 | function loss = relativeL2Loss(y, t) 246 | diffNorms = normFcn( (y - t) ); 247 | tNorms = normFcn( t ); 248 | 249 | loss = sum(diffNorms./tNorms, 1); 250 | end 251 | 252 | function n = normFcn(x) 253 | n = sqrt( sum(x.^2, 2) ); 254 | end -------------------------------------------------------------------------------- /fourier-neural-operator/fno/spectralConvolution1dLayer.m: -------------------------------------------------------------------------------- 1 | classdef spectralConvolution1dLayer < nnet.layer.Layer ... 2 | & nnet.layer.Formattable ... 3 | & nnet.layer.Acceleratable 4 | % spectralConvolution1dLayer Spectral convolution 1d 5 | 6 | % Copyright 2022 The MathWorks, Inc. 7 | 8 | properties 9 | Cin 10 | Cout 11 | NumModes 12 | end 13 | 14 | properties (Learnable) 15 | Weights 16 | end 17 | 18 | methods 19 | function this = spectralConvolution1dLayer(outChannels, numModes, nvargs) 20 | % spectralConvolution1dLayer Spectral convolution 1d 21 | % 22 | % layer = spectralConvolution1dLayer(outChannels, numModes) 23 | % creates a spectral convolution 1d layer. outChannels 24 | % specifies the number of channels in the layer output. 25 | % numModes specifies the number of modes which are combined 26 | % in Fourier space. 
27 | % 28 | % layer = spectralConvolution1dLayer(outChannels, numModes, 29 | % Name=Value) specifies additional options using one or more 30 | % name-value arguments: 31 | % 32 | % Name - Name for the layer. The default value is "". 33 | % 34 | % Weights - Complex learnable array of size 35 | % (inChannels)x(outChannels)x(numModes). The 36 | % default value is []. 37 | arguments 38 | outChannels (1,1) double 39 | numModes (1,1) double 40 | nvargs.Name {mustBeTextScalar} = "spectralConv1d" 41 | nvargs.Weights = [] 42 | end 43 | 44 | this.Cout = outChannels; 45 | this.NumModes = numModes; 46 | this.Name = nvargs.Name; 47 | this.Weights = nvargs.Weights; 48 | end 49 | 50 | function this = initialize(this, ndl) 51 | inChannels = ndl.Size( finddim(ndl,'C') ); 52 | outChannels = this.Cout; 53 | numModes = this.NumModes; 54 | 55 | if isempty(this.Weights) 56 | this.Cin = inChannels; 57 | this.Weights = 1./(inChannels*outChannels).*( ... 58 | rand([inChannels outChannels numModes]) + ... 59 | 1i.*rand([inChannels outChannels numModes]) ); 60 | else 61 | assert( inChannels == this.Cin, 'The input channel size must match the layer' ); 62 | end 63 | end 64 | 65 | function y = predict(this, x) 66 | % First compute the rfft, normalized and one-sided 67 | x = real(x); 68 | x = stripdims(x); 69 | N = size(x, 1); 70 | xft = iRFFT(x, 1, N); 71 | 72 | % Multiply selected Fourier modes 73 | xft = permute(xft(1:this.NumModes, :, :), [3 2 1]); 74 | yft = pagemtimes( xft, this.Weights ); 75 | yft = permute(yft, [3 2 1]); 76 | 77 | S = floor(N/2)+1 - this.NumModes; 78 | yft = cat(1, yft, zeros([S size(yft, 2:3)], 'like', yft)); 79 | 80 | % Return to physical space via irfft, normalized and one-sided 81 | y = iIRFFT(yft, 1, N); 82 | 83 | % Re-apply labels 84 | y = dlarray(y, 'SCB'); 85 | y = real(y); 86 | end 87 | end 88 | end 89 | 90 | function y = iRFFT(x, dim, N) 91 | y = fft(x, [], dim); 92 | y = y(1:floor(N/2)+1, :, :)./N; 93 | end 94 | 95 | function y = iIRFFT(x, dim, N) 96 | x(end+1:N, :, :, :) = conj( x(ceil(N/2):-1:2, :, :, :) ); 97 | y = ifft(N.*x, [], dim, 'symmetric'); 98 | end 99 | -------------------------------------------------------------------------------- /fourier-neural-operator/images/figure_0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/fourier-neural-operator/images/figure_0.png -------------------------------------------------------------------------------- /fourier-neural-operator/images/figure_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/fourier-neural-operator/images/figure_1.png -------------------------------------------------------------------------------- /graph-neural-network-for-heat-transfer-problem/Condition.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/graph-neural-network-for-heat-transfer-problem/Condition.xlsx -------------------------------------------------------------------------------- /graph-neural-network-for-heat-transfer-problem/README.md: 
--------------------------------------------------------------------------------
1 | # Graph Neural Network for Heat Transfer Problem
2 | 
3 | This example was originally hosted [here](https://github.com/matlab-deep-learning/Graph-Neural-Network-for-Heat-Transfer-Problem).
4 | 
5 | In recent years, graph neural networks [1] have been applied to a wide variety of tasks.
6 | This example shows how to train a graph neural network to predict temperature distributions, using data calculated with partial differential equations (PDEs).
7 | 
8 | 
9 | 
10 | ## How to get started
11 | To get started, clone this repository and run "example_gnn.mlx".
12 | 
13 | 
14 | ## Requirements
15 | - [MATLAB®](https://mathworks.com/products/matlab.html)
16 | - [Deep Learning Toolbox™](https://mathworks.com/products/deep-learning.html)
17 | - [Partial Differential Equation Toolbox™](https://www.mathworks.com/products/pde.html)
18 | 
19 | 
20 | The MATLAB version should be R2024a or later (tested in R2024a).
21 | 
22 | ## References
23 | 
24 | [1] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE
25 | Transactions on Neural Networks, 20(1):61-80, 2009.
26 | 
27 | 
28 | ## License
29 | The license is available in the license.txt file in this GitHub repository.
30 | 
31 | Copyright (c) 2024, The MathWorks, Inc.
--------------------------------------------------------------------------------
/graph-neural-network-for-heat-transfer-problem/example_gnn.mlx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/graph-neural-network-for-heat-transfer-problem/example_gnn.mlx
--------------------------------------------------------------------------------
/graph-neural-network-for-heat-transfer-problem/license.txt:
--------------------------------------------------------------------------------
1 | Copyright (c) 2024, The MathWorks, Inc.
2 | All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are
6 | met:
7 | 
8 |     * Redistributions of source code must retain the above copyright
9 |       notice, this list of conditions and the following disclaimer.
10 |     * Redistributions in binary form must reproduce the above copyright
11 |       notice, this list of conditions and the following disclaimer in
12 |       the documentation and/or other materials provided with the distribution
13 |     * Neither the name of the The MathWorks, Inc. nor the names
14 |       of its contributors may be used to endorse or promote products derived
15 |       from this software without specific prior written permission.
16 | 
17 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
18 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
19 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
20 | ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
21 | LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
22 | CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
23 | SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
24 | INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
25 | CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
26 | ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
27 | POSSIBILITY OF SUCH DAMAGE.
28 | 
29 | 
30 | 
31 | 
32 | 
33 | 
--------------------------------------------------------------------------------
/graph-neural-network-for-heat-transfer-problem/params_pre.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/graph-neural-network-for-heat-transfer-problem/params_pre.mat
--------------------------------------------------------------------------------
/graph-neural-network-for-heat-transfer-problem/ref_images/result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/graph-neural-network-for-heat-transfer-problem/ref_images/result.png
--------------------------------------------------------------------------------
/hamiltonian-neural-network/Demo_Hamiltonian_Spring_with_dlnetwork.m:
--------------------------------------------------------------------------------
1 | %% Import data
2 | rng(0);
3 | data = table2array(readtable("trajectory_training.csv"));
4 | ds = arrayDatastore(dlarray(data',"BC"));
5 | %% Define Network
6 | 
7 | hiddenSize = 200;
8 | inputSize = 2;
9 | outputSize = 1;
10 | net = [
11 |     featureInputLayer(inputSize)
12 |     fullyConnectedLayer(hiddenSize)
13 |     tanhLayer()
14 |     fullyConnectedLayer(hiddenSize)
15 |     tanhLayer()
16 |     fullyConnectedLayer(outputSize)];
17 | % Create a dlnetwork object from the layer array.
18 | net = dlnetwork(net);
19 | %% Specify Training Options
20 | 
21 | numEpochs = 300;
22 | miniBatchSize = 750;
23 | executionEnvironment = "auto";
24 | initialLearnRate = 0.001;
25 | decayRate = 1e-4;
26 | 
27 | %% Create a minibatchqueue
28 | mbq = minibatchqueue(ds, ...
29 |     'MiniBatchSize',miniBatchSize, ...
30 |     'MiniBatchFormat','BC', ...
31 |     'OutputEnvironment',executionEnvironment);
32 | averageGrad = [];
33 | averageSqGrad = [];
34 | 
35 | accfun = dlaccelerate(@modelGradients);
36 | 
37 | figure
38 | C = colororder;
39 | lineLoss = animatedline('Color',C(2,:));
40 | ylim([0 inf])
41 | xlabel("Iteration")
42 | ylabel("Loss")
43 | grid on
44 | set(gca, 'YScale', 'log');
45 | hold off
46 | %% Train model
47 | start = tic;
48 | 
49 | iteration = 0;
50 | for epoch = 1:numEpochs
51 |     shuffle(mbq);
52 |     while hasdata(mbq)
53 |         iteration = iteration + 1;
54 | 
55 |         dlXT = next(mbq);
56 |         dlX = dlXT(1:2,:);
57 |         dlT = dlXT(3:4,:);
58 | 
59 |         % Evaluate the model gradients and loss using dlfeval and the
60 |         % modelGradients function.
61 |         [gradients,loss] = dlfeval(accfun,net,dlX,dlT);
62 |         % Update learning rate.
63 |         learningRate = initialLearnRate / (1+decayRate*iteration);
64 | 
65 |         % Update the network parameters using the adamupdate function.
66 |         [net,averageGrad,averageSqGrad] = adamupdate(net,gradients,averageGrad, ...
67 |             averageSqGrad,iteration,learningRate);
68 |     end
69 | 
70 |     % Plot training progress.
71 |     loss = double(gather(extractdata(loss)));
72 |     addpoints(lineLoss,iteration, loss);
73 | 
74 |     drawnow
75 | end
76 | %% Test model
77 | % To make predictions with the Hamiltonian NN we need to solve the ODE system:
78 | % dp/dt = -dH/dq, dq/dt = dH/dp
79 | 
80 | accOde = dlaccelerate(@predmodel);
81 | t0 = dlarray(0,"CB");
82 | x = dlarray([1,0],"BC");
83 | dlfeval(accOde,t0,x,net);
84 | 
85 | % Since the original ode45 can't use dlarray we need to write an ODE
86 | % function that wraps accOde by converting the inputs to dlarray, and
87 | % extracting them again after accOde is applied.
88 | f = @(t,x) extractdata(accOde(dlarray(t,"CB"),dlarray(x,"CB"),net));
89 | 
90 | % Now solve with ode45
91 | x = single([1,0]);
92 | t_span = linspace(0,20,2000);
93 | noise_std = 0.1;
94 | % Make predictions.
95 | t_span = t_span.*(1 + .9*noise_std);
96 | [~,dlqp] = ode45(f,t_span,x);
97 | qp = squeeze(double(dlqp));
98 | qp = qp.';
99 | figure,plot(qp(1,:),qp(2,:))
100 | hold on
101 | load qp_baseline.mat
102 | plot(qp(1,:),qp(2,:))
103 | hold off
104 | legend(["Hamiltonian NN","Baseline"])
105 | xlim([-1.1 1.1])
106 | ylim([-1.1 1.1])
107 | %% Supporting Functions
108 | % modelGradients Function
109 | function [gradients,loss] = modelGradients(net,dlX,dlT)
110 | 
111 | % Make predictions with the initial conditions.
112 | dlU = forward(net,dlX);
113 | [dq,dp] = dlderivative(dlU,dlX);
114 | loss_dq = l2loss(dq,dlT(1,:));
115 | loss_dp = l2loss(dp,dlT(2,:));
116 | loss = loss_dq + loss_dp;
117 | gradients = dlgradient(loss,net.Learnables);
118 | end
119 | 
120 | % predmodel Function
121 | function dlT_pred = predmodel(t,dlX,net)
122 | dlU = forward(net,dlX);
123 | [dq,dp] = dlderivative(dlU,dlX);
124 | dlT_pred = [dq;dp];
125 | end
126 | 
127 | % dlderivative Function
128 | function [dq,dp] = dlderivative(F1,dlX)
129 | dF1 = dlgradient(sum(F1,"all"),dlX);
130 | dq = dF1(2,:);
131 | dp = -dF1(1,:);
132 | end
133 | %%
134 | % _Copyright 2023 The MathWorks, Inc._
--------------------------------------------------------------------------------
/hamiltonian-neural-network/Demo_baseline_Spring.m:
--------------------------------------------------------------------------------
1 | %% Import data
2 | rng(0);
3 | data = table2array(readtable("trajectory_training.csv"));
4 | ds = arrayDatastore(dlarray(data',"BC"));
5 | %% Define Network
6 | 
7 | hiddenSize = 200;
8 | inputSize = 2;
9 | outputSize = 2;
10 | net = [
11 |     featureInputLayer(inputSize)
12 |     fullyConnectedLayer(hiddenSize)
13 |     tanhLayer()
14 |     fullyConnectedLayer(hiddenSize)
15 |     tanhLayer()
16 |     fullyConnectedLayer(outputSize)];
17 | % Create a dlnetwork object from the layer array.
18 | net = dlnetwork(net);
19 | %% Specify Training Options
20 | 
21 | numEpochs = 300;
22 | miniBatchSize = 750;
23 | executionEnvironment = "auto";
24 | initialLearnRate = 0.001;
25 | decayRate = 1e-4;
26 | 
27 | %% Create a minibatchqueue
28 | 
29 | mbq = minibatchqueue(ds, ...
30 |     'MiniBatchSize',miniBatchSize, ...
31 |     'MiniBatchFormat','BC', ...
32 |     'OutputEnvironment',executionEnvironment);
33 | averageGrad = [];
34 | averageSqGrad = [];
35 | 
36 | accfun = dlaccelerate(@modelGradients);
37 | 
38 | figure
39 | C = colororder;
40 | lineLoss = animatedline('Color',C(2,:));
41 | ylim([0 inf])
42 | xlabel("Iteration")
43 | ylabel("Loss")
44 | grid on
45 | set(gca, 'YScale', 'log');
46 | %% Train model
47 | start = tic;
48 | 
49 | iteration = 0;
50 | shuffle(mbq);
51 | 
52 | for epoch = 1:numEpochs
53 |     reset(mbq);
54 |     % shuffle(mbq);
55 | 
56 |     while hasdata(mbq)
57 |         iteration = iteration + 1;
58 | 
59 |         dlXT = next(mbq);
60 |         dlX = dlXT(1:2,:);
61 |         dlT = dlXT(3:4,:);
62 | 
63 |         % Evaluate the model gradients and loss using dlfeval and the
64 |         % modelGradients function.
65 |         [gradients,loss] = dlfeval(accfun,net,dlX,dlT);
66 | 
67 |         % Update learning rate.
68 |         learningRate = initialLearnRate / (1+decayRate*iteration);
69 | 
70 |         % Update the network parameters using the adamupdate function.
71 |         [net,averageGrad,averageSqGrad] = adamupdate(net,gradients,averageGrad, ...
72 |             averageSqGrad,iteration,learningRate);
73 |     end
74 | 
75 |     % Plot training progress.
76 |     loss = double(gather(extractdata(loss)));
77 |     addpoints(lineLoss,iteration, loss);
78 | 
79 |     D = duration(0,0,toc(start),'Format','hh:mm:ss');
80 |     title("Epoch: " + epoch + ", Elapsed: " + string(D) + ", Loss: " + loss)
81 |     drawnow
82 | end
83 | %% Test model
84 | 
85 | accOde = dlaccelerate(@predmodel);
86 | t0 = dlarray(0,"CB");
87 | x = dlarray([1,0],"BC");
88 | dlfeval(accOde,t0,x,net);
89 | 
90 | % Since the original ode45 can't use dlarray we need to write an ODE
91 | % function that wraps accOde by converting the inputs to dlarray, and
92 | % extracting them again after accOde is applied.
93 | f = @(t,x) extractdata(accOde(dlarray(t,"CB"),dlarray(x,"CB"),net));
94 | 
95 | % Now solve with ode45
96 | x = single([1,0]);
97 | t_span = linspace(0,20,2000);
98 | noise_std = 0.1;
99 | % Make predictions.
100 | t_span = t_span.*(1 + .9*noise_std);
101 | [~,dlqp] = ode45(f,t_span,x);
102 | qp = squeeze(double(dlqp));
103 | qp = qp.';
104 | figure,plot(qp(1,:),qp(2,:))
105 | 
106 | %% Supporting Functions
107 | % modelGradients Function
108 | function [gradients,loss] = modelGradients(net,dlX,dlT)
109 | % Make predictions with the initial conditions.
110 | dlT_pred = forward(net,dlX);
111 | 
112 | loss = mse(dlT_pred,dlT);
113 | % Calculate gradients with respect to the learnable parameters.
114 | gradients = dlgradient(loss,net.Learnables);
115 | 
116 | end
117 | 
118 | % predmodel Function
119 | function dlT_pred = predmodel(t,dlX,net)
120 | dlT_pred = forward(net,dlX);
121 | end
122 | %%
123 | % _Copyright 2023 The MathWorks, Inc._
--------------------------------------------------------------------------------
/hamiltonian-neural-network/Pics/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/hamiltonian-neural-network/Pics/1.png
--------------------------------------------------------------------------------
/hamiltonian-neural-network/README.md:
--------------------------------------------------------------------------------
1 | # Hamiltonian Neural Network
2 | 
3 | This example was originally hosted [here](https://github.com/matlab-deep-learning/Hamiltonian-Neural-Network).
4 | 
5 | A Hamiltonian Neural Network [1] enables you to use neural networks in a way that respects the law of conservation of energy.
6 | 
7 | 
8 | 
9 | 
10 | 
11 | The Hamiltonian Neural Network loss is expressed with the following equation.
12 | 
13 | $$L_{HNN} = \left\| \frac{\partial H_\theta }{\partial p} - \frac{\partial q}{\partial t} \right\| + \left\| \frac{\partial H_\theta}{\partial q} + \frac{\partial p}{\partial t} \right\|$$
14 | 
15 | 
16 | ## Requirements
17 | - [MATLAB®](https://mathworks.com/products/matlab.html)
18 | - [Deep Learning Toolbox™](https://mathworks.com/products/deep-learning.html)
19 | 
20 | The MATLAB version should be R2022b or later (tested in R2022b).
21 | 
22 | ## References
23 | 
24 | [1] Sam Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian Neural Networks. arXiv:1906.01563 [cs.NE], 2019. https://arxiv.org/abs/1906.01563
25 | 
26 | The data in [`trajectory_training.csv`](./trajectory_training.csv) was generated using the Hamiltonian Neural Network described in the [paper](https://arxiv.org/abs/1906.01563) by Sam Greydanus, Misko Dzamba, and Jason Yosinski, 2019, and released on [GitHub](https://github.com/greydanus/hamiltonian-nn) under an Apache 2.0 license.
27 | 
28 | 
29 | # Demo_Hamiltonian_Spring_with_dlnetwork.m
30 | 
31 | 
32 | ## Import data
33 | 
34 | ```matlab:Code
35 | rng(0);
36 | data = table2array(readtable("trajectory_training.csv"));
37 | ds = arrayDatastore(dlarray(data',"BC"));
38 | ```
39 | 
40 | 
41 | ## Define Network
42 | 
43 | ```matlab:Code
44 | hiddenSize = 200;
45 | inputSize = 2;
46 | outputSize = 1;
47 | net = [
48 |     featureInputLayer(inputSize)
49 |     fullyConnectedLayer(hiddenSize)
50 |     tanhLayer()
51 |     fullyConnectedLayer(hiddenSize)
52 |     tanhLayer()
53 |     fullyConnectedLayer(outputSize)];
54 | % Create a dlnetwork object from the layer array.
55 | net = dlnetwork(net);
56 | ```
57 | 
58 | 
59 | ## Specify Training Options
60 | 
61 | ```matlab:Code
62 | numEpochs = 300;
63 | miniBatchSize = 750;
64 | executionEnvironment = "auto";
65 | initialLearnRate = 0.001;
66 | decayRate = 1e-4;
67 | ```
68 | 
69 | 
70 | ## Create a minibatchqueue
71 | 
72 | ```matlab:Code
73 | mbq = minibatchqueue(ds, ...
74 |     'MiniBatchSize',miniBatchSize, ...
75 |     'MiniBatchFormat','BC', ...
76 |     'OutputEnvironment',executionEnvironment);
77 | averageGrad = [];
78 | averageSqGrad = [];
79 | 
80 | accfun = dlaccelerate(@modelGradients);
81 | 
82 | figure
83 | C = colororder;
84 | lineLoss = animatedline('Color',C(2,:));
85 | ylim([0 inf])
86 | xlabel("Iteration")
87 | ylabel("Loss")
88 | grid on
89 | set(gca, 'YScale', 'log');
90 | hold off
91 | ```
92 | 
93 | 
94 | ## Train model
95 | 
96 | ```matlab:Code
97 | start = tic;
98 | 
99 | iteration = 0;
100 | for epoch = 1:numEpochs
101 |     shuffle(mbq);
102 |     while hasdata(mbq)
103 |         iteration = iteration + 1;
104 | 
105 |         dlXT = next(mbq);
106 |         dlX = dlXT(1:2,:);
107 |         dlT = dlXT(3:4,:);
108 | 
109 |         % Evaluate the model gradients and loss using dlfeval and the
110 |         % modelGradients function.
111 |         [gradients,loss] = dlfeval(accfun,net,dlX,dlT);
112 |         % Update learning rate.
113 |         learningRate = initialLearnRate / (1+decayRate*iteration);
114 | 
115 |         % Update the network parameters using the adamupdate function.
116 |         [net,averageGrad,averageSqGrad] = adamupdate(net,gradients,averageGrad, ...
117 |             averageSqGrad,iteration,learningRate);
118 |     end
119 | 
120 |     % Plot training progress.
121 | loss = double(gather(extractdata(loss))); 122 | addpoints(lineLoss,iteration, loss); 123 | 124 | drawnow 125 | end 126 | ``` 127 | 128 | ## Test model 129 | 130 | To make predictions with the Hamiltonian NN we need to solve the ODE system: $\frac{dp}{dt} = -\frac{dH}{dq}$, $\frac{dq}{dt} = \frac{dH}{dp}$. 131 | 132 | ```matlab:Code 133 | accOde = dlaccelerate(@predmodel); 134 | t0 = dlarray(0,"CB"); 135 | x = dlarray([1,0],"BC"); 136 | dlfeval(accOde,t0,x,net); 137 | 138 | % Since the original ode45 can't use dlarray we need to write an ODE 139 | % function that wraps accOde by converting the inputs to dlarray, and 140 | % extracting them again after accOde is applied. 141 | f = @(t,x) extractdata(accOde(dlarray(t,"CB"),dlarray(x,"CB"),net)); 142 | 143 | % Now solve with ode45 144 | x = single([1,0]); 145 | t_span = linspace(0,20,2000); 146 | noise_std =0.1; 147 | % Make predictions. 148 | t_span = t_span.*(1 + .9*noise_std); 149 | [~,dlqp] = ode45(f,t_span,x); 150 | qp = squeeze(double(dlqp)); 151 | qp = qp.'; 152 | figure,plot(qp(1,:),qp(2,:)) 153 | hold on 154 | load qp_baseline.mat 155 | plot(qp(1,:),qp(2,:)) 156 | hold off 157 | legend(["Hamiltonian NN","Baseline"]) 158 | xlim([-1.1 1.1]) 159 | ylim([-1.1 1.1]) 160 | ``` 161 | 162 | ## Supporting Functions 163 | 164 | modelGradients Function 165 | 166 | ```matlab:Code 167 | function [gradients,loss] = modelGradients(net,dlX,dlT) 168 | 169 | % Make predictions with the initial conditions. 170 | dlU = forward(net,dlX); 171 | [dq,dp] = dlderivative(dlU,dlX); 172 | loss_dq = l2loss(dq,dlT(1,:)); 173 | loss_dp = l2loss(dp,dlT(2,:)); 174 | loss = loss_dq + loss_dp; 175 | gradients = dlgradient(loss,net.Learnables); 176 | end 177 | 178 | % predmodel Function 179 | function dlT_pred = predmodel(t,dlX,net) 180 | dlU = forward(net,dlX); 181 | [dq,dp] = dlderivative(dlU,dlX); 182 | dlT_pred = [dq;dp]; 183 | end 184 | 185 | % dlderivative Function 186 | function [dq,dp] = dlderivative(F1,dlX) 187 | dF1 = dlgradient(sum(F1,"all"),dlX); 188 | dq = dF1(2,:); 189 | dp = -dF1(1,:); 190 | end 191 | ``` 192 | 193 | Copyright 2023 The MathWorks, Inc. 
194 | 195 | [![View Hamiltonian-Neural-Network on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://www.mathworks.com/matlabcentral/fileexchange/125840-hamiltonian-neural-network) 196 | -------------------------------------------------------------------------------- /hamiltonian-neural-network/qp_baseline.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/hamiltonian-neural-network/qp_baseline.mat -------------------------------------------------------------------------------- /hamiltonian-neural-network/trajectory_training.csv: -------------------------------------------------------------------------------- 1 | -8.47E-02,8.05E-01,1.25E+00,-2.84E-01 2 | 4.01E-01,5.27E-01,1.17E+00,-5.36E-01 3 | 2.98E-01,5.34E-01,1.03E+00,-7.64E-01 4 | 6.77E-01,3.81E-01,8.54E-01,-9.60E-01 5 | 6.84E-01,2.11E-01,6.39E-01,-1.12E+00 6 | 5.61E-01,2.62E-01,3.96E-01,-1.22E+00 7 | 8.93E-01,2.96E-02,1.36E-01,-1.28E+00 8 | 7.47E-01,-1.42E-01,-1.29E-01,-1.28E+00 9 | 6.61E-01,-9.49E-02,-3.89E-01,-1.22E+00 10 | 6.17E-01,-5.09E-01,-6.32E-01,-1.12E+00 11 | 4.64E-01,-3.99E-01,-8.48E-01,-9.65E-01 12 | 5.26E-01,-5.17E-01,-1.03E+00,-7.70E-01 13 | 2.34E-01,-5.96E-01,-1.16E+00,-5.42E-01 14 | 1.73E-01,-6.45E-01,-1.25E+00,-2.92E-01 15 | -8.20E-02,-5.97E-01,-1.28E+00,-2.82E-02 16 | -8.04E-02,-7.31E-01,-1.26E+00,2.36E-01 17 | -2.42E-01,-6.16E-01,-1.19E+00,4.91E-01 18 | -2.94E-01,-6.96E-01,-1.06E+00,7.24E-01 19 | -6.20E-01,-5.09E-01,-8.90E-01,9.27E-01 20 | -6.02E-01,-3.88E-01,-6.81E-01,1.09E+00 21 | -6.27E-01,-1.90E-01,-4.42E-01,1.21E+00 22 | -4.84E-01,-1.70E-01,-1.85E-01,1.27E+00 23 | -6.75E-01,9.33E-03,8.01E-02,1.28E+00 24 | -6.15E-01,1.34E-01,3.42E-01,1.24E+00 25 | -4.25E-01,4.06E-01,5.89E-01,1.14E+00 26 | -3.45E-01,3.60E-01,8.11E-01,9.97E-01 27 | -3.48E-01,5.42E-01,9.99E-01,8.09E-01 28 | -2.78E-01,5.69E-01,1.14E+00,5.86E-01 29 | -2.77E-01,7.67E-01,1.24E+00,3.39E-01 30 | 1.01E-01,7.06E-01,1.28E+00,7.72E-02 31 | -2.56E-01,-1.29E-01,-1.78E-01,3.77E-01 32 | -2.38E-01,7.41E-02,-9.64E-02,4.05E-01 33 | -2.89E-01,1.53E-02,-1.11E-02,4.16E-01 34 | -3.77E-01,1.35E-01,7.46E-02,4.10E-01 35 | -1.75E-01,1.14E-01,1.57E-01,3.86E-01 36 | -2.13E-01,1.87E-01,2.33E-01,3.45E-01 37 | -3.08E-01,1.51E-01,2.99E-01,2.90E-01 38 | -6.48E-02,3.55E-01,3.52E-01,2.22E-01 39 | -1.63E-01,2.08E-01,3.90E-01,1.45E-01 40 | -2.57E-02,2.46E-01,4.12E-01,6.19E-02 41 | 8.49E-02,3.96E-01,4.16E-01,-2.40E-02 42 | 6.73E-02,6.62E-02,4.02E-01,-1.09E-01 43 | 2.09E-01,5.84E-02,3.71E-01,-1.89E-01 44 | 7.16E-03,2.59E-01,3.24E-01,-2.61E-01 45 | 2.01E-01,1.45E-02,2.64E-01,-3.22E-01 46 | 1.16E-01,2.90E-01,1.92E-01,-3.70E-01 47 | 1.13E-01,1.45E-02,1.12E-01,-4.01E-01 48 | 1.50E-01,-6.13E-02,2.70E-02,-4.15E-01 49 | 1.75E-01,1.63E-01,-5.90E-02,-4.12E-01 50 | 2.01E-01,7.69E-02,-1.42E-01,-3.91E-01 51 | 6.03E-02,7.69E-02,-2.20E-01,-3.54E-01 52 | 2.41E-01,-5.32E-02,-2.88E-01,-3.01E-01 53 | 1.64E-01,-2.58E-01,-3.43E-01,-2.35E-01 54 | -7.37E-02,-1.20E-03,-3.84E-01,-1.60E-01 55 | 1.88E-01,-2.31E-01,-4.09E-01,-7.75E-02 56 | 1.86E-01,-1.28E-01,-4.16E-01,8.15E-03 57 | 7.11E-02,-1.08E-01,-4.06E-01,9.35E-02 58 | -1.05E-01,-2.04E-01,-3.78E-01,1.75E-01 59 | -2.31E-01,-1.06E-01,-3.34E-01,2.49E-01 60 | -5.06E-02,-4.56E-02,-2.76E-01,3.12E-01 61 | -5.17E-01,3.82E-01,1.08E+00,1.02E+00 62 | -3.86E-01,6.26E-01,1.26E+00,7.82E-01 63 | -3.47E-01,5.17E-01,1.40E+00,5.06E-01 64 | 
-7.54E-02,6.72E-01,1.47E+00,2.08E-01 65 | 8.52E-02,6.48E-01,1.48E+00,-9.83E-02 66 | 2.41E-01,8.64E-01,1.43E+00,-4.01E-01 67 | 1.49E-01,6.78E-01,1.32E+00,-6.86E-01 68 | 6.16E-01,5.27E-01,1.15E+00,-9.42E-01 69 | 5.98E-01,6.47E-01,9.31E-01,-1.16E+00 70 | 6.20E-01,1.07E-01,6.73E-01,-1.32E+00 71 | 8.91E-01,2.58E-01,3.86E-01,-1.43E+00 72 | 7.05E-01,-7.13E-02,8.36E-02,-1.48E+00 73 | 8.69E-01,-1.89E-01,-2.23E-01,-1.47E+00 74 | 6.14E-01,-1.48E-01,-5.20E-01,-1.39E+00 75 | 6.36E-01,-2.63E-01,-7.95E-01,-1.26E+00 76 | 4.03E-01,-6.95E-01,-1.04E+00,-1.07E+00 77 | 3.49E-01,-5.74E-01,-1.23E+00,-8.30E-01 78 | 1.61E-01,-5.79E-01,-1.38E+00,-5.59E-01 79 | 1.52E-01,-7.69E-01,-1.46E+00,-2.64E-01 80 | 2.05E-02,-6.79E-01,-1.49E+00,4.17E-02 81 | -5.32E-02,-7.77E-01,-1.44E+00,3.46E-01 82 | -1.29E-01,-6.19E-01,-1.34E+00,6.35E-01 83 | -3.77E-01,-5.81E-01,-1.18E+00,8.98E-01 84 | -3.32E-01,-4.61E-01,-9.74E-01,1.12E+00 85 | -4.93E-01,-4.53E-01,-7.23E-01,1.30E+00 86 | -6.48E-01,-1.35E-01,-4.41E-01,1.42E+00 87 | -8.27E-01,-6.06E-02,-1.40E-01,1.48E+00 88 | -9.00E-01,-6.52E-02,1.67E-01,1.48E+00 89 | -7.63E-01,4.32E-02,4.66E-01,1.41E+00 90 | -6.96E-01,2.59E-01,7.46E-01,1.28E+00 91 | -2.12E-01,-6.22E-01,-1.24E+00,2.03E-01 92 | -2.21E-01,-5.48E-01,-1.17E+00,4.53E-01 93 | -4.16E-01,-3.01E-01,-1.05E+00,6.84E-01 94 | -2.89E-01,-4.49E-01,-8.90E-01,8.86E-01 95 | -6.54E-01,-4.40E-01,-6.89E-01,1.05E+00 96 | -5.58E-01,-2.64E-01,-4.59E-01,1.17E+00 97 | -6.23E-01,-1.51E-01,-2.09E-01,1.24E+00 98 | -7.44E-01,7.32E-02,5.00E-02,1.26E+00 99 | -5.57E-01,-6.72E-04,3.07E-01,1.22E+00 100 | -5.82E-01,2.82E-01,5.51E-01,1.13E+00 101 | -4.19E-01,4.01E-01,7.71E-01,9.92E-01 102 | -3.24E-01,5.02E-01,9.58E-01,8.13E-01 103 | -8.29E-02,4.93E-01,1.10E+00,5.98E-01 104 | -4.57E-02,5.78E-01,1.20E+00,3.59E-01 105 | -8.88E-02,4.84E-01,1.25E+00,1.04E-01 106 | 5.39E-02,5.74E-01,1.25E+00,-1.56E-01 107 | 3.14E-01,5.40E-01,1.19E+00,-4.08E-01 108 | 3.87E-01,5.81E-01,1.08E+00,-6.44E-01 109 | 4.90E-01,3.46E-01,9.24E-01,-8.52E-01 110 | 3.50E-01,4.43E-01,7.29E-01,-1.02E+00 111 | 5.73E-01,4.01E-01,5.03E-01,-1.15E+00 112 | 5.41E-01,-7.90E-02,2.56E-01,-1.23E+00 113 | 6.56E-01,4.16E-02,-2.13E-03,-1.26E+00 114 | 6.05E-01,-6.24E-02,-2.60E-01,-1.23E+00 115 | 6.66E-01,-3.17E-01,-5.07E-01,-1.15E+00 116 | 5.42E-01,-4.06E-01,-7.32E-01,-1.02E+00 117 | 5.03E-01,-4.76E-01,-9.26E-01,-8.48E-01 118 | 2.73E-01,-5.70E-01,-1.08E+00,-6.40E-01 119 | 1.08E-01,-6.26E-01,-1.19E+00,-4.04E-01 120 | 3.47E-02,-7.91E-01,-1.25E+00,-1.51E-01 121 | 1.27E-01,4.33E-01,5.18E-01,-4.86E-01 122 | 2.47E-01,1.72E-01,4.08E-01,-5.82E-01 123 | 4.27E-01,2.23E-01,2.79E-01,-6.53E-01 124 | 4.20E-01,4.89E-02,1.39E-01,-6.97E-01 125 | 2.82E-01,1.08E-01,-6.83E-03,-7.10E-01 126 | 4.31E-01,3.01E-02,-1.53E-01,-6.94E-01 127 | 4.48E-01,-3.08E-02,-2.92E-01,-6.48E-01 128 | 1.08E-01,-2.87E-01,-4.19E-01,-5.74E-01 129 | 1.58E-01,-3.93E-01,-5.28E-01,-4.76E-01 130 | 3.85E-02,-2.39E-01,-6.14E-01,-3.57E-01 131 | 9.32E-02,-2.95E-01,-6.74E-01,-2.23E-01 132 | -9.91E-02,-4.01E-01,-7.06E-01,-8.00E-02 133 | -2.97E-02,-3.59E-01,-7.07E-01,6.67E-02 134 | -1.87E-01,-2.83E-01,-6.78E-01,2.11E-01 135 | -1.03E-01,-2.04E-01,-6.21E-01,3.45E-01 136 | -4.07E-01,-2.41E-01,-5.37E-01,4.66E-01 137 | -2.71E-01,-1.53E-01,-4.29E-01,5.66E-01 138 | -2.84E-01,-1.58E-01,-3.04E-01,6.42E-01 139 | -3.53E-01,4.28E-02,-1.66E-01,6.91E-01 140 | -4.04E-01,1.95E-02,-2.03E-02,7.10E-01 141 | -3.94E-02,1.03E-01,1.26E-01,6.99E-01 142 | -2.43E-01,2.82E-02,2.67E-01,6.58E-01 143 | -4.10E-01,1.34E-01,3.96E-01,5.89E-01 144 | -1.54E-01,2.40E-01,5.09E-01,4.95E-01 145 | 
-2.18E-01,1.43E-01,6.00E-01,3.80E-01 146 | -2.22E-01,2.83E-01,6.65E-01,2.49E-01 147 | -4.36E-02,3.39E-01,7.02E-01,1.07E-01 148 | 1.10E-01,3.92E-01,7.09E-01,-3.97E-02 149 | 1.94E-01,3.04E-01,6.86E-01,-1.85E-01 150 | 1.49E-01,1.94E-01,6.33E-01,-3.21E-01 151 | 6.82E-01,-5.42E-01,-1.30E+00,-1.39E+00 152 | 6.55E-01,-8.23E-01,-1.55E+00,-1.09E+00 153 | 2.63E-01,-9.42E-01,-1.75E+00,-7.52E-01 154 | 1.15E-01,-1.05E+00,-1.86E+00,-3.77E-01 155 | -4.55E-02,-9.95E-01,-1.90E+00,1.40E-02 156 | -1.93E-01,-9.57E-01,-1.86E+00,4.04E-01 157 | -3.93E-01,-9.04E-01,-1.74E+00,7.77E-01 158 | -5.87E-01,-7.54E-01,-1.54E+00,1.12E+00 159 | -7.11E-01,-5.80E-01,-1.28E+00,1.41E+00 160 | -8.32E-01,-4.45E-01,-9.60E-01,1.64E+00 161 | -9.74E-01,-3.77E-01,-6.02E-01,1.80E+00 162 | -1.03E+00,-2.53E-01,-2.18E-01,1.89E+00 163 | -9.19E-01,2.24E-01,1.74E-01,1.89E+00 164 | -9.98E-01,2.11E-01,5.59E-01,1.82E+00 165 | -9.48E-01,3.95E-01,9.21E-01,1.66E+00 166 | -7.51E-01,5.69E-01,1.24E+00,1.44E+00 167 | -5.92E-01,5.72E-01,1.51E+00,1.15E+00 168 | -1.83E-01,8.11E-01,1.72E+00,8.18E-01 169 | -2.94E-01,8.76E-01,1.85E+00,4.48E-01 170 | 6.51E-02,1.01E+00,1.90E+00,5.85E-02 171 | 2.41E-01,1.01E+00,1.87E+00,-3.33E-01 172 | 2.36E-01,8.82E-01,1.76E+00,-7.11E-01 173 | 6.06E-01,8.83E-01,1.58E+00,-1.06E+00 174 | 5.62E-01,6.99E-01,1.33E+00,-1.36E+00 175 | 5.36E-01,5.09E-01,1.02E+00,-1.60E+00 176 | 9.50E-01,3.51E-01,6.70E-01,-1.78E+00 177 | 7.64E-01,1.26E-01,2.90E-01,-1.88E+00 178 | 9.94E-01,-9.04E-02,-1.02E-01,-1.90E+00 179 | 8.50E-01,-2.72E-01,-4.90E-01,-1.84E+00 180 | 1.01E+00,-5.41E-01,-8.57E-01,-1.70E+00 181 | -6.00E-01,8.64E-02,2.42E-01,8.56E-01 182 | -3.58E-01,1.99E-01,4.12E-01,7.88E-01 183 | -2.55E-01,4.00E-01,5.65E-01,6.86E-01 184 | -2.57E-01,1.96E-01,6.94E-01,5.55E-01 185 | -7.94E-02,3.07E-01,7.93E-01,4.01E-01 186 | -3.32E-01,3.61E-01,8.59E-01,2.29E-01 187 | -5.76E-02,4.63E-01,8.88E-01,4.82E-02 188 | -1.04E-01,3.43E-01,8.79E-01,-1.35E-01 189 | 1.02E-02,5.23E-01,8.32E-01,-3.13E-01 190 | 1.81E-01,4.96E-01,7.50E-01,-4.77E-01 191 | 1.14E-01,4.33E-01,6.36E-01,-6.21E-01 192 | 3.75E-01,4.15E-01,4.95E-01,-7.38E-01 193 | 4.57E-01,2.14E-01,3.33E-01,-8.24E-01 194 | 2.09E-01,8.73E-02,1.56E-01,-8.75E-01 195 | 3.65E-01,-9.15E-03,-2.68E-02,-8.89E-01 196 | 5.51E-01,3.20E-02,-2.09E-01,-8.64E-01 197 | 3.07E-01,-8.10E-02,-3.82E-01,-8.03E-01 198 | 3.41E-01,-3.32E-01,-5.39E-01,-7.07E-01 199 | 4.18E-01,-3.49E-01,-6.72E-01,-5.82E-01 200 | 1.35E-01,-4.79E-01,-7.78E-01,-4.31E-01 201 | 2.33E-01,-2.32E-01,-8.50E-01,-2.62E-01 202 | 1.92E-01,-4.23E-01,-8.85E-01,-8.20E-02 203 | -1.43E-01,-4.56E-01,-8.83E-01,1.02E-01 204 | -7.46E-02,-4.44E-01,-8.44E-01,2.81E-01 205 | -9.55E-02,-2.08E-01,-7.68E-01,4.48E-01 206 | -4.14E-01,-4.58E-01,-6.59E-01,5.96E-01 207 | -3.48E-01,-2.34E-01,-5.23E-01,7.19E-01 208 | -3.02E-01,-9.41E-02,-3.64E-01,8.11E-01 209 | -3.80E-01,-9.40E-02,-1.90E-01,8.69E-01 210 | -6.70E-01,1.05E-01,-7.08E-03,8.89E-01 211 | 8.87E-01,-1.11E-01,1.82E-01,-1.85E+00 212 | 8.09E-01,-1.55E-01,-2.03E-01,-1.85E+00 213 | 8.76E-01,-3.17E-01,-5.79E-01,-1.77E+00 214 | 8.26E-01,-5.36E-01,-9.30E-01,-1.61E+00 215 | 7.81E-01,-4.47E-01,-1.24E+00,-1.39E+00 216 | 5.40E-01,-6.51E-01,-1.50E+00,-1.10E+00 217 | 4.31E-01,-7.15E-01,-1.69E+00,-7.71E-01 218 | 1.07E-01,-9.97E-01,-1.82E+00,-4.07E-01 219 | -6.59E-02,-8.18E-01,-1.86E+00,-2.47E-02 220 | -1.90E-01,-8.64E-01,-1.83E+00,3.58E-01 221 | -4.68E-01,-7.80E-01,-1.71E+00,7.26E-01 222 | -4.49E-01,-6.62E-01,-1.53E+00,1.06E+00 223 | -6.31E-01,-7.30E-01,-1.28E+00,1.35E+00 224 | -7.66E-01,-5.29E-01,-9.73E-01,1.59E+00 225 | 
-8.43E-01,-2.27E-01,-6.26E-01,1.75E+00 226 | -7.20E-01,-3.92E-01,-2.52E-01,1.84E+00 227 | -9.76E-01,2.17E-01,1.32E-01,1.86E+00 228 | -1.12E+00,3.11E-01,5.11E-01,1.79E+00 229 | -8.04E-01,4.29E-01,8.68E-01,1.65E+00 230 | -7.22E-01,6.16E-01,1.19E+00,1.43E+00 231 | -6.31E-01,6.25E-01,1.46E+00,1.16E+00 232 | -5.16E-01,7.97E-01,1.66E+00,8.35E-01 233 | -2.82E-01,1.01E+00,1.80E+00,4.76E-01 234 | -2.97E-02,1.06E+00,1.86E+00,9.57E-02 235 | 9.39E-02,1.19E+00,1.84E+00,-2.88E-01 236 | 5.71E-01,8.63E-01,1.74E+00,-6.60E-01 237 | 4.06E-01,7.18E-01,1.57E+00,-1.00E+00 238 | 5.73E-01,6.13E-01,1.33E+00,-1.30E+00 239 | 5.46E-01,4.14E-01,1.03E+00,-1.55E+00 240 | 8.89E-01,3.38E-01,6.92E-01,-1.73E+00 241 | -1.08E-01,4.41E-01,9.59E-01,8.58E-02 242 | -9.78E-02,5.83E-01,9.56E-01,-1.13E-01 243 | 3.19E-01,4.53E-01,9.12E-01,-3.07E-01 244 | 1.47E-01,4.10E-01,8.30E-01,-4.88E-01 245 | 4.53E-01,3.28E-01,7.12E-01,-6.48E-01 246 | 2.96E-01,2.35E-01,5.64E-01,-7.80E-01 247 | 3.19E-01,1.77E-01,3.91E-01,-8.80E-01 248 | 6.38E-01,1.21E-01,2.02E-01,-9.41E-01 249 | 6.79E-01,1.51E-01,4.67E-03,-9.63E-01 250 | 3.54E-01,-6.41E-02,-1.93E-01,-9.43E-01 251 | 5.70E-01,-3.10E-01,-3.83E-01,-8.83E-01 252 | 4.80E-01,-2.93E-01,-5.56E-01,-7.86E-01 253 | 2.29E-01,-3.83E-01,-7.06E-01,-6.55E-01 254 | 2.57E-01,-5.49E-01,-8.25E-01,-4.96E-01 255 | -4.05E-03,-6.68E-01,-9.09E-01,-3.16E-01 256 | -1.01E-01,-4.37E-01,-9.55E-01,-1.22E-01 257 | -8.97E-02,-4.09E-01,-9.60E-01,7.65E-02 258 | -1.05E-01,-3.82E-01,-9.23E-01,2.72E-01 259 | -7.15E-02,-4.47E-01,-8.48E-01,4.56E-01 260 | -2.28E-01,-2.82E-01,-7.36E-01,6.20E-01 261 | -3.39E-01,-2.60E-01,-5.93E-01,7.58E-01 262 | -4.20E-01,-1.40E-01,-4.25E-01,8.64E-01 263 | -4.63E-01,-1.17E-01,-2.38E-01,9.33E-01 264 | -5.29E-01,-2.49E-01,-4.14E-02,9.62E-01 265 | -4.13E-01,-6.93E-02,1.57E-01,9.50E-01 266 | -6.96E-01,3.91E-02,3.49E-01,8.97E-01 267 | -5.18E-01,2.60E-01,5.26E-01,8.06E-01 268 | -3.91E-01,1.68E-01,6.80E-01,6.81E-01 269 | -2.62E-01,2.88E-01,8.06E-01,5.27E-01 270 | -2.14E-01,5.42E-01,8.97E-01,3.50E-01 271 | -5.74E-02,-2.08E-01,-4.87E-01,-3.10E-02 272 | -1.52E-02,-2.03E-01,-4.83E-01,6.96E-02 273 | -4.82E-02,-2.76E-01,-4.58E-01,1.67E-01 274 | -6.72E-02,-2.29E-01,-4.14E-01,2.58E-01 275 | -1.68E-01,-2.69E-01,-3.52E-01,3.37E-01 276 | -1.49E-01,-1.56E-01,-2.75E-01,4.03E-01 277 | -1.80E-01,-2.48E-01,-1.87E-01,4.51E-01 278 | -4.23E-01,-3.40E-03,-9.03E-02,4.79E-01 279 | -2.40E-01,-8.94E-02,1.01E-02,4.88E-01 280 | -1.61E-01,7.88E-02,1.10E-01,4.75E-01 281 | -1.62E-01,-3.79E-02,2.05E-01,4.42E-01 282 | -2.32E-01,8.69E-02,2.92E-01,3.91E-01 283 | -2.42E-01,1.72E-01,3.66E-01,3.22E-01 284 | -2.32E-01,4.61E-02,4.24E-01,2.40E-01 285 | -8.72E-02,2.44E-01,4.65E-01,1.48E-01 286 | 8.85E-02,2.05E-01,4.85E-01,4.96E-02 287 | -1.70E-01,6.83E-02,4.85E-01,-5.12E-02 288 | 8.87E-03,1.02E-01,4.64E-01,-1.50E-01 289 | 6.96E-03,2.72E-01,4.24E-01,-2.42E-01 290 | 2.40E-01,2.72E-01,3.65E-01,-3.24E-01 291 | 1.40E-01,1.32E-01,2.91E-01,-3.92E-01 292 | 1.74E-01,1.42E-01,2.04E-01,-4.43E-01 293 | 2.16E-01,7.66E-02,1.08E-01,-4.76E-01 294 | 2.88E-01,3.72E-02,8.48E-03,-4.88E-01 295 | 2.00E-01,8.27E-02,-9.19E-02,-4.79E-01 296 | -7.97E-02,-2.45E-01,-1.88E-01,-4.50E-01 297 | 2.55E-01,-7.07E-02,-2.77E-01,-4.02E-01 298 | 2.12E-01,-2.15E-01,-3.53E-01,-3.36E-01 299 | 1.06E-01,-2.30E-01,-4.15E-01,-2.56E-01 300 | -2.55E-02,-2.60E-01,-4.59E-01,-1.66E-01 301 | -8.77E-01,-1.70E-01,-4.51E-01,1.47E+00 302 | -4.62E-01,-8.72E-02,-1.39E-01,1.54E+00 303 | -7.33E-01,6.95E-02,1.80E-01,1.53E+00 304 | -6.92E-01,2.85E-01,4.90E-01,1.46E+00 305 | -6.30E-01,4.95E-01,7.80E-01,1.33E+00 306 
| -6.81E-01,3.51E-01,1.04E+00,1.14E+00 307 | -4.18E-01,6.81E-01,1.25E+00,9.04E-01 308 | -3.72E-01,6.67E-01,1.41E+00,6.28E-01 309 | -4.93E-02,8.48E-01,1.51E+00,3.25E-01 310 | 4.56E-02,7.69E-01,1.54E+00,8.67E-03 311 | 2.40E-01,7.69E-01,1.51E+00,-3.08E-01 312 | 3.02E-01,5.73E-01,1.41E+00,-6.12E-01 313 | 4.37E-01,5.09E-01,1.26E+00,-8.89E-01 314 | 5.48E-01,4.11E-01,1.05E+00,-1.13E+00 315 | 5.98E-01,1.89E-01,7.95E-01,-1.32E+00 316 | 8.93E-01,8.81E-02,5.07E-01,-1.46E+00 317 | 7.53E-01,1.67E-01,1.97E-01,-1.53E+00 318 | 9.02E-01,-8.41E-02,-1.21E-01,-1.54E+00 319 | 7.74E-01,-1.25E-01,-4.34E-01,-1.48E+00 320 | 7.36E-01,-2.71E-01,-7.29E-01,-1.36E+00 321 | 4.14E-01,-4.34E-01,-9.92E-01,-1.18E+00 322 | 3.50E-01,-7.75E-01,-1.21E+00,-9.50E-01 323 | 4.40E-01,-6.25E-01,-1.38E+00,-6.81E-01 324 | 1.95E-01,-6.84E-01,-1.49E+00,-3.82E-01 325 | 8.46E-02,-6.51E-01,-1.54E+00,-6.74E-02 326 | -1.02E-01,-8.29E-01,-1.52E+00,2.50E-01 327 | -3.64E-01,-7.84E-01,-1.44E+00,5.57E-01 328 | -3.65E-01,-6.03E-01,-1.29E+00,8.41E-01 329 | -6.59E-01,-5.63E-01,-1.09E+00,1.09E+00 330 | -6.50E-01,-3.14E-01,-8.45E-01,1.29E+00 331 | -1.59E-01,-4.89E-01,-1.18E+00,3.11E-01 332 | -3.55E-02,-6.02E-01,-1.09E+00,5.47E-01 333 | -3.47E-01,-5.17E-01,-9.57E-01,7.60E-01 334 | -3.75E-01,-4.41E-01,-7.81E-01,9.40E-01 335 | -6.91E-01,-2.67E-01,-5.71E-01,1.08E+00 336 | -7.65E-01,-2.07E-01,-3.37E-01,1.17E+00 337 | -6.63E-01,-2.04E-01,-8.80E-02,1.22E+00 338 | -4.96E-01,-6.57E-03,1.64E-01,1.21E+00 339 | -6.10E-01,1.12E-01,4.10E-01,1.15E+00 340 | -6.01E-01,4.43E-01,6.37E-01,1.04E+00 341 | -4.25E-01,5.00E-01,8.38E-01,8.90E-01 342 | -2.41E-01,5.60E-01,1.00E+00,6.98E-01 343 | -3.83E-01,5.12E-01,1.12E+00,4.78E-01 344 | -2.39E-01,5.18E-01,1.20E+00,2.36E-01 345 | -7.13E-02,5.60E-01,1.22E+00,-1.51E-02 346 | 2.42E-01,4.91E-01,1.19E+00,-2.66E-01 347 | 2.76E-01,8.06E-01,1.11E+00,-5.05E-01 348 | 5.75E-01,2.68E-01,9.85E-01,-7.23E-01 349 | 5.49E-01,4.64E-01,8.16E-01,-9.10E-01 350 | 5.26E-01,1.77E-01,6.11E-01,-1.06E+00 351 | 7.07E-01,1.80E-01,3.81E-01,-1.16E+00 352 | 6.29E-01,-3.16E-02,1.34E-01,-1.21E+00 353 | 5.38E-01,-1.77E-01,-1.18E-01,-1.22E+00 354 | 6.51E-01,-2.97E-01,-3.65E-01,-1.17E+00 355 | 4.64E-01,-1.23E-01,-5.97E-01,-1.07E+00 356 | 4.31E-01,-4.15E-01,-8.03E-01,-9.21E-01 357 | 5.01E-01,-5.64E-01,-9.75E-01,-7.36E-01 358 | 2.50E-01,-4.97E-01,-1.11E+00,-5.20E-01 359 | 6.06E-02,-5.94E-01,-1.19E+00,-2.82E-01 360 | -3.07E-02,-5.39E-01,-1.22E+00,-3.15E-02 361 | 3.96E-01,-3.32E-01,-6.02E-01,-9.95E-01 362 | 3.16E-01,-6.58E-01,-7.94E-01,-8.50E-01 363 | 4.09E-01,-4.25E-01,-9.52E-01,-6.68E-01 364 | 2.46E-01,-6.82E-01,-1.07E+00,-4.59E-01 365 | 1.77E-01,-6.51E-01,-1.14E+00,-2.29E-01 366 | 8.02E-02,-5.04E-01,-1.16E+00,9.65E-03 367 | -4.51E-02,-4.17E-01,-1.14E+00,2.48E-01 368 | -2.27E-01,-4.00E-01,-1.06E+00,4.76E-01 369 | -2.34E-01,-4.73E-01,-9.40E-01,6.84E-01 370 | -4.34E-01,-2.70E-01,-7.80E-01,8.63E-01 371 | -5.49E-01,-3.01E-01,-5.86E-01,1.00E+00 372 | -4.97E-01,-2.92E-01,-3.67E-01,1.10E+00 373 | -6.88E-01,-1.93E-01,-1.33E-01,1.16E+00 374 | -6.51E-01,-2.37E-02,1.08E-01,1.16E+00 375 | -3.83E-01,1.11E-01,3.43E-01,1.11E+00 376 | -5.03E-01,2.44E-01,5.64E-01,1.02E+00 377 | -4.36E-01,2.66E-01,7.61E-01,8.79E-01 378 | -4.45E-01,4.71E-01,9.25E-01,7.04E-01 379 | -1.61E-01,6.27E-01,1.05E+00,4.99E-01 380 | 3.32E-02,4.17E-01,1.13E+00,2.73E-01 381 | -1.87E-02,7.71E-01,1.16E+00,3.47E-02 382 | 7.48E-02,4.85E-01,1.14E+00,-2.05E-01 383 | 2.23E-01,5.31E-01,1.08E+00,-4.36E-01 384 | 4.28E-01,6.00E-01,9.66E-01,-6.48E-01 385 | 4.26E-01,4.05E-01,8.12E-01,-8.32E-01 386 | 
5.47E-01,2.79E-01,6.24E-01,-9.81E-01 387 | 5.12E-01,2.78E-01,4.09E-01,-1.09E+00 388 | 5.15E-01,3.31E-02,1.76E-01,-1.15E+00 389 | 5.41E-01,-1.07E-01,-6.34E-02,-1.16E+00 390 | 5.58E-01,-2.56E-01,-3.01E-01,-1.12E+00 391 | 4.18E-01,4.33E-01,7.16E-01,-9.69E-01 392 | 7.16E-01,1.97E-01,5.01E-01,-1.10E+00 393 | 5.02E-01,5.79E-02,2.66E-01,-1.17E+00 394 | 6.04E-01,1.27E-02,1.88E-02,-1.20E+00 395 | 5.90E-01,-3.73E-01,-2.29E-01,-1.18E+00 396 | 5.56E-01,-3.49E-01,-4.67E-01,-1.11E+00 397 | 4.12E-01,-3.77E-01,-6.85E-01,-9.91E-01 398 | 3.55E-01,-5.72E-01,-8.74E-01,-8.29E-01 399 | 2.49E-01,-6.16E-01,-1.03E+00,-6.32E-01 400 | 2.36E-01,-6.10E-01,-1.13E+00,-4.07E-01 401 | 1.16E-01,-7.61E-01,-1.19E+00,-1.66E-01 402 | 1.81E-01,-6.41E-01,-1.20E+00,8.28E-02 403 | -2.68E-02,-6.33E-01,-1.16E+00,3.28E-01 404 | -3.30E-01,-5.31E-01,-1.07E+00,5.59E-01 405 | -3.51E-01,-3.49E-01,-9.29E-01,7.66E-01 406 | -3.71E-01,-3.59E-01,-7.52E-01,9.41E-01 407 | -5.35E-01,-2.69E-01,-5.43E-01,1.08E+00 408 | -5.89E-01,-1.45E-01,-3.10E-01,1.16E+00 409 | -5.96E-01,-9.59E-03,-6.47E-02,1.20E+00 410 | -5.08E-01,-9.78E-03,1.84E-01,1.19E+00 411 | -6.48E-01,2.01E-01,4.24E-01,1.13E+00 412 | -5.41E-01,3.54E-01,6.47E-01,1.02E+00 413 | -3.84E-01,2.84E-01,8.42E-01,8.61E-01 414 | -3.04E-01,5.87E-01,1.00E+00,6.70E-01 415 | -2.01E-01,6.67E-01,1.12E+00,4.50E-01 416 | -1.43E-01,5.30E-01,1.19E+00,2.11E-01 417 | 1.16E-01,5.78E-01,1.20E+00,-3.69E-02 418 | 3.55E-01,4.98E-01,1.17E+00,-2.83E-01 419 | 3.00E-01,6.14E-01,1.09E+00,-5.18E-01 420 | 3.46E-01,3.73E-01,9.58E-01,-7.30E-01 421 | -5.57E-01,5.45E-02,-1.75E-01,6.76E-01 422 | -3.39E-01,-1.07E-01,-3.25E-02,6.97E-01 423 | -4.72E-01,1.15E-01,1.11E-01,6.89E-01 424 | -2.62E-01,1.46E-01,2.51E-01,6.51E-01 425 | -2.10E-01,2.13E-01,3.79E-01,5.86E-01 426 | -3.12E-01,2.73E-01,4.91E-01,4.96E-01 427 | -2.02E-01,2.67E-01,5.83E-01,3.84E-01 428 | -1.03E-01,3.70E-01,6.49E-01,2.56E-01 429 | 7.37E-02,2.06E-01,6.88E-01,1.17E-01 430 | 8.39E-02,1.99E-01,6.97E-01,-2.64E-02 431 | 5.96E-02,5.29E-01,6.77E-01,-1.69E-01 432 | 3.64E-02,3.83E-01,6.28E-01,-3.05E-01 433 | 2.19E-01,1.14E-01,5.52E-01,-4.27E-01 434 | 2.38E-01,1.41E-01,4.53E-01,-5.31E-01 435 | 4.44E-01,2.06E-01,3.34E-01,-6.13E-01 436 | 3.05E-01,1.91E-01,2.01E-01,-6.69E-01 437 | 3.26E-01,9.29E-02,5.90E-02,-6.95E-01 438 | 3.92E-01,-2.44E-01,-8.51E-02,-6.93E-01 439 | 3.00E-01,-6.35E-02,-2.26E-01,-6.61E-01 440 | 5.76E-02,-1.28E-01,-3.56E-01,-6.00E-01 441 | 2.82E-01,-2.22E-01,-4.72E-01,-5.14E-01 442 | 2.08E-01,-3.34E-01,-5.68E-01,-4.06E-01 443 | 5.34E-02,-4.48E-01,-6.39E-01,-2.81E-01 444 | 2.27E-01,-3.41E-01,-6.83E-01,-1.44E-01 445 | 4.16E-02,-3.07E-01,-6.98E-01,-2.23E-04 446 | -1.40E-01,-3.95E-01,-6.83E-01,1.43E-01 447 | -1.18E-01,-2.88E-01,-6.39E-01,2.80E-01 448 | -1.46E-01,-2.80E-01,-5.68E-01,4.06E-01 449 | -3.91E-01,-2.62E-01,-4.72E-01,5.14E-01 450 | -3.34E-01,-3.39E-01,-3.57E-01,6.00E-01 451 | 3.69E-01,-5.10E-01,-9.21E-01,-8.00E-01 452 | 2.13E-01,-5.40E-01,-1.07E+00,-5.93E-01 453 | 8.03E-02,-5.81E-01,-1.16E+00,-3.62E-01 454 | 2.26E-01,-4.50E-01,-1.21E+00,-1.15E-01 455 | -1.48E-01,-5.37E-01,-1.21E+00,1.37E-01 456 | -2.45E-01,-4.99E-01,-1.16E+00,3.83E-01 457 | -2.70E-01,-5.93E-01,-1.05E+00,6.12E-01 458 | -2.78E-01,-3.56E-01,-9.06E-01,8.16E-01 459 | -4.44E-01,-3.37E-01,-7.19E-01,9.85E-01 460 | -2.80E-01,-1.12E-01,-5.02E-01,1.11E+00 461 | -6.03E-01,7.02E-02,-2.62E-01,1.19E+00 462 | -5.84E-01,-3.68E-02,-1.23E-02,1.22E+00 463 | -5.70E-01,7.86E-02,2.38E-01,1.20E+00 464 | -4.17E-01,1.53E-01,4.79E-01,1.12E+00 465 | -4.49E-01,3.35E-01,6.99E-01,9.99E-01 466 | 
-4.29E-01,4.07E-01,8.89E-01,8.34E-01 467 | -4.12E-01,5.57E-01,1.04E+00,6.34E-01 468 | -1.79E-01,5.60E-01,1.15E+00,4.06E-01 469 | 5.95E-02,5.68E-01,1.21E+00,1.61E-01 470 | 4.22E-03,7.14E-01,1.22E+00,-9.05E-02 471 | 2.22E-01,4.92E-01,1.17E+00,-3.38E-01 472 | 3.11E-01,5.82E-01,1.08E+00,-5.72E-01 473 | 4.77E-01,4.28E-01,9.37E-01,-7.81E-01 474 | 3.98E-01,4.51E-01,7.56E-01,-9.57E-01 475 | 7.80E-01,4.10E-01,5.44E-01,-1.09E+00 476 | 4.62E-01,1.24E-01,3.08E-01,-1.18E+00 477 | 5.72E-01,7.35E-02,5.88E-02,-1.22E+00 478 | 6.96E-01,-7.85E-02,-1.93E-01,-1.20E+00 479 | 5.99E-01,-2.98E-01,-4.36E-01,-1.14E+00 480 | 5.95E-01,-3.06E-01,-6.61E-01,-1.02E+00 481 | 5.93E-01,3.25E-01,8.50E-01,-1.21E+00 482 | 6.23E-01,1.54E-01,5.82E-01,-1.36E+00 483 | 7.65E-01,1.77E-01,2.90E-01,-1.45E+00 484 | 7.01E-01,1.22E-01,-1.45E-02,-1.48E+00 485 | 7.91E-01,-2.88E-02,-3.18E-01,-1.45E+00 486 | 7.11E-01,-3.67E-01,-6.09E-01,-1.35E+00 487 | 7.01E-01,-3.24E-01,-8.73E-01,-1.20E+00 488 | 4.46E-01,-4.48E-01,-1.10E+00,-9.92E-01 489 | 3.47E-01,-7.17E-01,-1.28E+00,-7.44E-01 490 | 2.87E-01,-6.07E-01,-1.41E+00,-4.65E-01 491 | 1.17E-01,-5.45E-01,-1.47E+00,-1.67E-01 492 | -6.56E-02,-5.25E-01,-1.47E+00,1.39E-01 493 | -5.25E-02,-6.19E-01,-1.41E+00,4.39E-01 494 | -4.61E-01,-7.14E-01,-1.29E+00,7.21E-01 495 | -3.45E-01,-4.68E-01,-1.12E+00,9.71E-01 496 | -6.42E-01,-2.66E-01,-8.95E-01,1.18E+00 497 | -8.28E-01,-3.43E-01,-6.34E-01,1.34E+00 498 | -7.03E-01,-1.57E-01,-3.45E-01,1.44E+00 499 | -8.33E-01,-1.73E-01,-4.20E-02,1.48E+00 500 | -8.17E-01,1.40E-01,2.63E-01,1.46E+00 501 | -5.00E-01,3.73E-01,5.57E-01,1.37E+00 502 | -5.85E-01,3.08E-01,8.27E-01,1.23E+00 503 | -5.05E-01,5.44E-01,1.06E+00,1.03E+00 504 | -3.70E-01,3.30E-01,1.25E+00,7.93E-01 505 | -3.17E-01,9.05E-01,1.39E+00,5.19E-01 506 | -6.28E-02,7.90E-01,1.46E+00,2.23E-01 507 | -8.16E-02,6.66E-01,1.48E+00,-8.29E-02 508 | 1.27E-02,6.87E-01,1.43E+00,-3.85E-01 509 | 2.22E-01,5.68E-01,1.32E+00,-6.71E-01 510 | 5.68E-01,5.27E-01,1.15E+00,-9.28E-01 511 | 2.93E-01,-7.05E-01,-1.84E+00,-5.67E-01 512 | 1.27E-01,-9.67E-01,-1.92E+00,-1.77E-01 513 | -1.03E-01,-8.83E-01,-1.91E+00,2.20E-01 514 | -3.02E-01,-9.20E-01,-1.83E+00,6.08E-01 515 | -4.57E-01,-7.96E-01,-1.66E+00,9.70E-01 516 | -6.04E-01,-6.47E-01,-1.43E+00,1.29E+00 517 | -8.81E-01,-6.76E-01,-1.13E+00,1.56E+00 518 | -1.02E+00,-3.97E-01,-7.88E-01,1.76E+00 519 | -9.46E-01,-4.73E-02,-4.10E-01,1.88E+00 520 | -1.11E+00,-8.72E-02,-1.53E-02,1.92E+00 521 | -9.35E-01,1.34E-01,3.80E-01,1.89E+00 522 | -7.90E-01,3.49E-01,7.60E-01,1.77E+00 523 | -7.26E-01,5.80E-01,1.11E+00,1.57E+00 524 | -7.62E-01,7.56E-01,1.41E+00,1.31E+00 525 | -5.84E-01,9.50E-01,1.65E+00,9.96E-01 526 | -2.86E-01,9.58E-01,1.82E+00,6.37E-01 527 | -1.65E-01,9.48E-01,1.91E+00,2.50E-01 528 | 4.18E-02,1.09E+00,1.92E+00,-1.47E-01 529 | 3.29E-01,9.94E-01,1.85E+00,-5.38E-01 530 | 3.54E-01,6.99E-01,1.70E+00,-9.06E-01 531 | 5.78E-01,9.90E-01,1.48E+00,-1.24E+00 532 | 6.76E-01,7.72E-01,1.19E+00,-1.51E+00 533 | 7.58E-01,4.10E-01,8.54E-01,-1.72E+00 534 | 8.46E-01,2.79E-01,4.82E-01,-1.86E+00 535 | 1.03E+00,1.77E-01,8.87E-02,-1.92E+00 536 | 9.55E-01,-1.71E-01,-3.08E-01,-1.90E+00 537 | 8.10E-01,-2.73E-01,-6.92E-01,-1.80E+00 538 | 7.85E-01,-4.13E-01,-1.05E+00,-1.62E+00 539 | 5.19E-01,-7.79E-01,-1.36E+00,-1.37E+00 540 | 4.56E-01,-8.64E-01,-1.61E+00,-1.06E+00 541 | -2.94E-02,6.19E-02,2.05E-01,-1.03E-01 542 | 2.73E-01,-3.91E-02,1.80E-01,-1.42E-01 543 | 5.80E-02,2.12E-01,1.47E-01,-1.76E-01 544 | 1.69E-01,1.05E-01,1.07E-01,-2.03E-01 545 | 2.77E-01,1.96E-01,6.32E-02,-2.20E-01 546 | -1.77E-01,-5.96E-02,1.66E-02,-2.29E-01 547 | 
1.57E-01,-8.09E-02,-3.07E-02,-2.27E-01 548 | 3.51E-02,6.56E-02,-7.67E-02,-2.16E-01 549 | 1.09E-01,-4.49E-02,-1.19E-01,-1.96E-01 550 | 1.15E-01,2.26E-02,-1.57E-01,-1.67E-01 551 | 6.92E-02,-9.51E-03,-1.88E-01,-1.31E-01 552 | 9.84E-02,-3.00E-02,-2.11E-01,-8.97E-02 553 | -2.78E-02,-3.14E-01,-2.25E-01,-4.44E-02 554 | 1.19E-01,4.94E-02,-2.29E-01,2.71E-03 555 | -1.51E-01,-1.14E-01,-2.24E-01,4.97E-02 556 | -1.98E-01,-1.84E-01,-2.09E-01,9.47E-02 557 | -1.91E-01,5.31E-03,-1.85E-01,1.36E-01 558 | 5.33E-02,1.27E-02,-1.53E-01,1.71E-01 559 | -1.29E-01,-5.90E-02,-1.15E-01,1.98E-01 560 | -2.60E-01,-2.33E-01,-7.16E-02,2.18E-01 561 | -1.61E-01,-8.85E-02,-2.53E-02,2.28E-01 562 | -5.75E-02,5.33E-02,2.20E-02,2.28E-01 563 | -3.15E-02,2.56E-02,6.84E-02,2.19E-01 564 | -1.01E-01,-8.36E-02,1.12E-01,2.00E-01 565 | 2.12E-02,1.51E-01,1.51E-01,1.73E-01 566 | -8.74E-03,4.95E-02,1.83E-01,1.38E-01 567 | -1.20E-02,-7.47E-02,2.07E-01,9.77E-02 568 | -2.26E-02,2.13E-01,2.23E-01,5.30E-02 569 | -5.74E-02,1.51E-01,2.29E-01,6.04E-03 570 | 3.12E-02,1.54E-01,2.26E-01,-4.12E-02 571 | 3.71E-01,4.48E-01,5.41E-01,-7.76E-01 572 | 3.84E-01,1.46E-01,3.70E-01,-8.71E-01 573 | 6.03E-01,7.54E-02,1.83E-01,-9.28E-01 574 | 5.77E-01,7.12E-02,-1.14E-02,-9.46E-01 575 | 4.64E-01,-6.97E-02,-2.05E-01,-9.23E-01 576 | 3.71E-01,-2.10E-01,-3.91E-01,-8.62E-01 577 | 1.80E-01,-3.55E-01,-5.59E-01,-7.63E-01 578 | 3.75E-01,-3.22E-01,-7.04E-01,-6.32E-01 579 | 1.47E-01,-3.06E-01,-8.19E-01,-4.74E-01 580 | -4.87E-02,-4.01E-01,-8.99E-01,-2.95E-01 581 | 2.11E-01,-5.48E-01,-9.40E-01,-1.04E-01 582 | 1.93E-02,-2.97E-01,-9.42E-01,9.11E-02 583 | -2.55E-01,-5.96E-01,-9.03E-01,2.83E-01 584 | -3.52E-01,-5.71E-01,-8.26E-01,4.62E-01 585 | -2.24E-01,-2.60E-01,-7.13E-01,6.22E-01 586 | -4.65E-01,-2.62E-01,-5.70E-01,7.55E-01 587 | -2.98E-01,-2.56E-01,-4.03E-01,8.56E-01 588 | -3.99E-01,-2.19E-01,-2.18E-01,9.20E-01 589 | -4.19E-01,2.20E-01,-2.47E-02,9.46E-01 590 | -4.25E-01,9.68E-02,1.70E-01,9.31E-01 591 | -4.19E-01,2.32E-01,3.58E-01,8.76E-01 592 | -3.04E-01,2.97E-01,5.30E-01,7.84E-01 593 | -3.74E-01,3.83E-01,6.80E-01,6.58E-01 594 | -2.44E-01,4.54E-01,8.00E-01,5.04E-01 595 | -8.94E-02,5.17E-01,8.87E-01,3.29E-01 596 | -1.37E-02,4.30E-01,9.36E-01,1.40E-01 597 | -9.19E-02,4.43E-01,9.44E-01,-5.51E-02 598 | 7.39E-02,2.82E-01,9.13E-01,-2.48E-01 599 | 2.40E-01,3.43E-01,8.43E-01,-4.30E-01 600 | 2.56E-01,3.95E-01,7.36E-01,-5.94E-01 601 | 1.93E-01,4.16E-01,4.63E-01,-2.65E-01 602 | -7.99E-02,1.26E-01,3.99E-01,-3.55E-01 603 | 3.54E-01,1.87E-01,3.17E-01,-4.29E-01 604 | 2.20E-01,1.37E-01,2.22E-01,-4.85E-01 605 | 2.22E-01,-4.70E-02,1.18E-01,-5.20E-01 606 | 2.23E-01,-5.05E-02,8.43E-03,-5.33E-01 607 | 4.65E-01,1.04E-02,-1.01E-01,-5.24E-01 608 | 2.79E-01,-3.42E-02,-2.07E-01,-4.92E-01 609 | 2.77E-01,-1.15E-01,-3.03E-01,-4.39E-01 610 | 1.80E-01,-1.92E-01,-3.87E-01,-3.67E-01 611 | 2.18E-01,-2.93E-01,-4.54E-01,-2.80E-01 612 | 1.31E-02,-2.18E-01,-5.02E-01,-1.81E-01 613 | 5.45E-02,-4.58E-01,-5.28E-01,-7.36E-02 614 | 8.73E-02,-1.06E-01,-5.32E-01,3.66E-02 615 | 1.54E-01,-2.07E-01,-5.13E-01,1.45E-01 616 | -2.27E-01,-2.08E-01,-4.73E-01,2.48E-01 617 | -9.23E-03,-1.50E-01,-4.12E-01,3.39E-01 618 | -4.15E-01,-1.24E-01,-3.33E-01,4.17E-01 619 | -4.17E-01,-3.82E-01,-2.41E-01,4.76E-01 620 | -4.07E-01,-2.08E-02,-1.38E-01,5.16E-01 621 | -3.24E-01,-8.93E-02,-2.88E-02,5.33E-01 622 | -2.36E-01,-6.41E-02,8.13E-02,5.27E-01 623 | -2.24E-01,8.24E-02,1.88E-01,4.99E-01 624 | -2.50E-01,1.68E-01,2.86E-01,4.50E-01 625 | -2.66E-01,1.64E-01,3.73E-01,3.82E-01 626 | -5.48E-02,2.44E-01,4.43E-01,2.97E-01 627 | 
-3.31E-02,1.82E-01,4.95E-01,2.00E-01 628 | -6.60E-02,1.83E-01,5.25E-01,9.37E-02 629 | 5.05E-02,2.45E-01,5.33E-01,-1.62E-02 630 | 1.80E-01,2.88E-01,5.19E-01,-1.25E-01 631 | 6.23E-01,7.34E-01,1.34E+00,-1.30E+00 632 | 7.38E-01,4.61E-01,1.05E+00,-1.55E+00 633 | 7.86E-01,4.11E-01,7.09E-01,-1.73E+00 634 | 9.81E-01,1.36E-01,3.38E-01,-1.84E+00 635 | 8.53E-01,2.49E-02,-4.64E-02,-1.87E+00 636 | 8.57E-01,-3.11E-01,-4.29E-01,-1.82E+00 637 | 8.39E-01,-3.14E-01,-7.94E-01,-1.69E+00 638 | 6.17E-01,-5.13E-01,-1.12E+00,-1.49E+00 639 | 5.82E-01,-7.95E-01,-1.41E+00,-1.23E+00 640 | 3.86E-01,-5.50E-01,-1.63E+00,-9.15E-01 641 | 2.41E-01,-8.37E-01,-1.78E+00,-5.60E-01 642 | 8.52E-02,-7.01E-01,-1.86E+00,-1.82E-01 643 | -1.82E-01,-7.69E-01,-1.86E+00,2.04E-01 644 | -3.12E-01,-9.07E-01,-1.78E+00,5.81E-01 645 | -3.36E-01,-8.51E-01,-1.62E+00,9.33E-01 646 | -6.26E-01,-7.37E-01,-1.39E+00,1.25E+00 647 | -6.38E-01,-7.36E-01,-1.11E+00,1.51E+00 648 | -8.16E-01,-4.57E-01,-7.74E-01,1.70E+00 649 | -8.34E-01,-1.79E-01,-4.08E-01,1.82E+00 650 | -1.01E+00,1.40E-01,-2.49E-02,1.87E+00 651 | -9.06E-01,1.02E-01,3.59E-01,1.83E+00 652 | -8.47E-01,4.52E-01,7.28E-01,1.72E+00 653 | -8.28E-01,4.08E-01,1.07E+00,1.53E+00 654 | -7.24E-01,6.21E-01,1.36E+00,1.28E+00 655 | -6.37E-01,7.51E-01,1.59E+00,9.76E-01 656 | -1.64E-01,9.17E-01,1.76E+00,6.28E-01 657 | -2.24E-01,9.71E-01,1.85E+00,2.53E-01 658 | 2.01E-01,1.03E+00,1.86E+00,-1.33E-01 659 | 2.10E-01,9.76E-01,1.80E+00,-5.13E-01 660 | 3.49E-01,8.51E-01,1.65E+00,-8.71E-01 661 | 8.65E-01,2.62E-01,4.07E-01,-1.66E+00 662 | 6.78E-01,-6.37E-02,5.81E-02,-1.70E+00 663 | 8.43E-01,-8.30E-02,-2.93E-01,-1.68E+00 664 | 5.87E-01,-4.14E-01,-6.32E-01,-1.58E+00 665 | 7.16E-01,-4.77E-01,-9.44E-01,-1.42E+00 666 | 5.54E-01,-4.55E-01,-1.22E+00,-1.20E+00 667 | 4.99E-01,-8.48E-01,-1.44E+00,-9.21E-01 668 | 3.73E-01,-7.47E-01,-1.59E+00,-6.06E-01 669 | 1.79E-01,-9.43E-01,-1.68E+00,-2.66E-01 670 | -3.04E-02,-8.58E-01,-1.70E+00,8.58E-02 671 | -2.45E-01,-9.42E-01,-1.65E+00,4.34E-01 672 | -1.36E-01,-8.60E-01,-1.53E+00,7.64E-01 673 | -5.28E-01,-5.93E-01,-1.34E+00,1.06E+00 674 | -6.68E-01,-5.54E-01,-1.09E+00,1.31E+00 675 | -8.48E-01,-4.36E-01,-7.97E-01,1.51E+00 676 | -8.63E-01,-3.56E-01,-4.70E-01,1.64E+00 677 | -7.81E-01,-6.56E-03,-1.23E-01,1.70E+00 678 | -7.64E-01,1.29E-01,2.29E-01,1.69E+00 679 | -8.96E-01,2.82E-01,5.71E-01,1.61E+00 680 | -7.15E-01,3.61E-01,8.89E-01,1.46E+00 681 | -8.18E-01,4.41E-01,1.17E+00,1.24E+00 682 | -5.05E-01,6.77E-01,1.40E+00,9.75E-01 683 | -5.25E-01,7.23E-01,1.57E+00,6.67E-01 684 | -1.53E-01,6.32E-01,1.67E+00,3.30E-01 685 | 1.48E-02,9.62E-01,1.71E+00,-2.07E-02 686 | 1.90E-01,8.38E-01,1.67E+00,-3.71E-01 687 | 4.41E-01,6.15E-01,1.55E+00,-7.05E-01 688 | 3.40E-01,8.04E-01,1.38E+00,-1.01E+00 689 | 7.20E-01,5.33E-01,1.14E+00,-1.27E+00 690 | 7.24E-01,2.97E-01,8.54E-01,-1.48E+00 691 | -1.26E-01,-2.93E-01,-4.62E-01,-2.31E-02 692 | 1.40E-02,-2.67E-01,-4.57E-01,7.22E-02 693 | -1.30E-01,-2.05E-01,-4.32E-01,1.65E-01 694 | -3.13E-02,-2.60E-01,-3.89E-01,2.50E-01 695 | -8.13E-02,-1.58E-01,-3.30E-01,3.24E-01 696 | -3.12E-01,-5.02E-02,-2.56E-01,3.85E-01 697 | -1.74E-01,-8.92E-02,-1.71E-01,4.30E-01 698 | -1.08E-01,-6.11E-03,-7.94E-02,4.56E-01 699 | -2.16E-01,9.66E-02,1.59E-02,4.62E-01 700 | -3.22E-01,2.80E-02,1.10E-01,4.49E-01 701 | -1.20E-01,1.29E-01,2.00E-01,4.17E-01 702 | -1.20E-01,1.10E-01,2.82E-01,3.67E-01 703 | -9.63E-02,1.73E-01,3.51E-01,3.01E-01 704 | -3.97E-02,1.70E-01,4.05E-01,2.23E-01 705 | -3.67E-01,1.68E-01,4.43E-01,1.35E-01 706 | 6.77E-02,2.48E-01,4.61E-01,4.07E-02 707 | 2.08E-01,2.86E-01,4.59E-01,-5.48E-02 
708 | 1.18E-01,2.34E-01,4.38E-01,-1.48E-01 709 | 1.37E-01,2.49E-01,3.98E-01,-2.35E-01 710 | 2.25E-01,9.71E-02,3.42E-01,-3.12E-01 711 | 2.21E-01,1.48E-02,2.70E-01,-3.75E-01 712 | 2.77E-01,1.35E-01,1.88E-01,-4.23E-01 713 | 2.26E-01,1.17E-01,9.67E-02,-4.52E-01 714 | 1.55E-01,5.87E-03,1.77E-03,-4.63E-01 715 | 1.26E-01,8.82E-02,-9.33E-02,-4.53E-01 716 | 1.12E-01,-1.40E-03,-1.84E-01,-4.24E-01 717 | 5.13E-02,1.34E-01,-2.68E-01,-3.77E-01 718 | 5.04E-02,-1.90E-01,-3.39E-01,-3.14E-01 719 | 2.95E-01,-2.98E-01,-3.97E-01,-2.38E-01 720 | 1.51E-01,-2.93E-01,-4.37E-01,-1.51E-01 721 | -4.85E-01,6.85E-01,1.25E+00,1.01E+00 722 | -4.89E-01,5.70E-01,1.43E+00,7.31E-01 723 | -3.22E-01,8.42E-01,1.55E+00,4.21E-01 724 | -7.47E-02,6.29E-01,1.60E+00,9.40E-02 725 | 1.53E-01,7.61E-01,1.59E+00,-2.37E-01 726 | 2.64E-01,7.21E-01,1.51E+00,-5.58E-01 727 | 4.21E-01,5.17E-01,1.36E+00,-8.56E-01 728 | 4.79E-01,3.93E-01,1.15E+00,-1.12E+00 729 | 5.66E-01,3.87E-01,9.00E-01,-1.33E+00 730 | 7.57E-01,2.64E-01,6.07E-01,-1.49E+00 731 | 8.86E-01,1.16E-01,2.89E-01,-1.58E+00 732 | 9.10E-01,-1.94E-01,-4.14E-02,-1.61E+00 733 | 6.85E-01,-2.05E-01,-3.70E-01,-1.56E+00 734 | 8.32E-01,-1.65E-01,-6.83E-01,-1.45E+00 735 | 7.43E-01,-5.36E-01,-9.67E-01,-1.28E+00 736 | 6.20E-01,-7.08E-01,-1.21E+00,-1.06E+00 737 | 2.74E-01,-5.89E-01,-1.40E+00,-7.85E-01 738 | 9.01E-02,-7.42E-01,-1.53E+00,-4.80E-01 739 | 2.05E-01,-9.28E-01,-1.60E+00,-1.55E-01 740 | -2.43E-01,-8.82E-01,-1.60E+00,1.77E-01 741 | -3.45E-01,-6.47E-01,-1.53E+00,5.01E-01 742 | -2.24E-01,-8.89E-01,-1.39E+00,8.03E-01 743 | -4.68E-01,-5.58E-01,-1.20E+00,1.07E+00 744 | -5.58E-01,-5.25E-01,-9.50E-01,1.29E+00 745 | -5.11E-01,-2.72E-01,-6.64E-01,1.46E+00 746 | -8.14E-01,-1.07E-01,-3.49E-01,1.57E+00 747 | -6.65E-01,3.01E-02,-1.98E-02,1.61E+00 748 | -7.87E-01,1.58E-01,3.10E-01,1.58E+00 749 | -5.10E-01,4.69E-01,6.27E-01,1.48E+00 750 | -8.95E-01,3.00E-01,9.18E-01,1.32E+00 751 | -------------------------------------------------------------------------------- /inverse-problems-using-physics-informed-neural-networks/InversePinnConstantCoef.mlx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/InversePinnConstantCoef.mlx -------------------------------------------------------------------------------- /inverse-problems-using-physics-informed-neural-networks/InversePinnVariableCoef.mlx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/InversePinnVariableCoef.mlx -------------------------------------------------------------------------------- /inverse-problems-using-physics-informed-neural-networks/README.md: -------------------------------------------------------------------------------- 1 | # Inverse Problems using PINNs 2 | 3 | This repository was originally hosted [here](https://github.com/matlab-deep-learning/Inverse-Problems-using-Physics-Informed-Neural-Networks-PINNs). 4 | 5 | ## Overview 6 | This repository contains two examples: [`InversePinnConstantCoef.mlx`](./InversePinnConstantCoef.mlx) and [`InversePinnVariableCoef.mlx`](./InversePinnVariableCoef.mlx). 
Both examples are solvers for an inverse problem for the Poisson equation $−\nabla \cdot (c\nabla u)=1$ with suitable boundary conditions. In particular, this is a parameter estimation problem in which we assume that we have an exact solution $u$, and we are interested in computing an approximation to the coefficient $c$.
7 | 
8 | The first example, InversePinnConstantCoef, shows how to solve the parameter estimation problem when $c$ is a constant. In the example, a PINN representing the solution is trained, and while training the PINN, the coefficient is optimized. Both the network parameters and the coefficient are trained using the Adam algorithm. This yields an approximation of the coefficient and a PINN that can be used to approximate the solution.
9 | 
10 | The second example uses essentially the same setup as the first. However, rather than using a constant coefficient, the coefficient is also approximated by a neural network. Both the PINN and the coefficient network are trained simultaneously. After training, there are two networks: one that approximates the solution of the PDE and one that approximates the coefficient.
11 | 
12 | 
13 | ## PINNs for Inverse Problems: Constant Coefficients
14 | 
15 | The Poisson equation on a unit disk with zero Dirichlet boundary condition can be written as $-\nabla\cdot (c \nabla u) = 1$ in $\Omega$, $u=0$ on $\partial \Omega$, where $\Omega$ is the unit disk. The exact solution when $c=1$ is $$u(x,y) = \frac{1 - x^2 - y^2}{4}$$
16 | 
17 | This equation is relevant, for example, in thermodynamics, where it describes a steady-state heat equation: the boundary of the circle is held at a constant temperature of 0, the right-hand side of the equation is a volumetric heat source, and the coefficient $c$ is the thermal diffusivity. The typical "forward" problem is to use this information to find $u$.
18 | One could consider the inverse problem: given a known temperature distribution $u$ corresponding to the solution at a set of points, a known volumetric heat source, and boundary conditions, what is the thermal diffusivity? Using a PINN to solve the equation, we can optimize the PINN parameters and the coefficient at the same time. In this example, we take the data to be given by the exact solution above. In practice, the exact solution is often not known, but measurements of the solution can be provided instead.
19 | 
20 | ### PDE Model Definition
21 | 
22 | ```matlab:Code
23 | rng('default'); % for reproducibility
24 | model = createpde();
25 | geometryFromEdges(model,@circleg);
26 | applyBoundaryCondition(model,"dirichlet", ...
27 |     "Edge",1:model.Geometry.NumEdges, ...
28 |     "u",0);
29 | ```
30 | 
31 | Plot the geometry and display the edge labels for use in the boundary condition definition.
32 | 
33 | ```matlab:Code
34 | figure
35 | pdegplot(model,"EdgeLabels","on");
36 | axis equal
37 | ```
38 | 
39 | ![Geometry.png](./images/Geometry.png)
40 | 
41 | Create a structure array of coefficients and specify them to the PDE model. Note that `pdeCoeffs.c` is an initial guess; it will be updated during training. We also define `pdeCoeffs` to be a struct of dlarrays so that we can compute gradients with respect to its fields.
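As a brief aside, wrapping the coefficients in `dlarray` is what allows [`dlgradient`](https://mathworks.com/help/deeplearning/ref/dlarray.dlgradient.html) to differentiate the training loss with respect to them. A minimal sketch with toy values, separate from the example itself:

```matlab:Code
% Differentiate a toy scalar loss with respect to a dlarray-wrapped
% coefficient; the same mechanism produces the gradient for c below.
c0 = dlarray(0.5);
lossGrad = @(c) dlgradient((3*c - 6).^2, c); % d/dc (3c-6)^2 = 6*(3c-6)
gradc = dlfeval(lossGrad, c0)                % returns -27 at c = 0.5
```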
42 | 
43 | ```matlab:Code
44 | pdeCoeffs.c = .5;
45 | % fixed values for other coefficients
46 | pdeCoeffs.m = 0;
47 | pdeCoeffs.d = 0;
48 | pdeCoeffs.a = 0;
49 | pdeCoeffs.f = 1;
50 | % set up model
51 | specifyCoefficients(model,"m",pdeCoeffs.m,"d",pdeCoeffs.d,"c",pdeCoeffs.c,"a",pdeCoeffs.a,"f",pdeCoeffs.f);
52 | Hmax = 0.05; Hgrad = 2; Hedge = Hmax/10;
53 | msh = generateMesh(model,"Hmax",Hmax,"Hgrad",Hgrad,"Hedge",{1:model.Geometry.NumEdges,Hedge});
54 | % make coefs dlarrays so gradients can be computed
55 | pdeCoeffs = structfun(@dlarray,pdeCoeffs,'UniformOutput',false);
56 | ```
57 | 
58 | ### Generate Spatial Data for Training PINN
59 | 
60 | This example uses mesh nodes as the collocation points. The model loss at the collocation points on the domain and boundary is used to train the PINN.
61 | 
62 | ```matlab:Code
63 | boundaryNodes = findNodes(msh,"region","Edge",1:model.Geometry.NumEdges);
64 | domainNodes = setdiff(1:size(msh.Nodes,2),boundaryNodes);
65 | domainCollocationPoints = msh.Nodes(:,domainNodes)';
66 | ```
67 | 
68 | ### Define Deep Learning Model
69 | 
70 | This is a neural network with 3 hidden layers and 50 neurons per layer. The two inputs to the network correspond to the x and y coordinates, and the one output corresponds to the solution, so `predict(pinn,XY)` approximates `u(x,y)`. While training the c coefficient, we will also train this neural network to provide solutions to the PDE.
71 | 
72 | ```matlab:Code
73 | numLayers = 3;
74 | numNeurons = 50;
75 | layers = featureInputLayer(2);
76 | for i = 1:numLayers-1
77 |     layers = [
78 |         layers
79 |         fullyConnectedLayer(numNeurons)
80 |         tanhLayer]; %#ok
81 | end
82 | layers = [
83 |     layers
84 |     fullyConnectedLayer(1)];
85 | pinn = dlnetwork(layers);
86 | ```
87 | 
88 | ### Define Custom Training Loop to Train the PINN Using ADAM Solver
89 | 
90 | Specify training options. Create arrays for the average gradients and average squared gradients of both the PINN and the parameter; both will be trained using the ADAM solver.
91 | 
92 | ```matlab:Code
93 | numEpochs = 1500;
94 | miniBatchSize = 2^12;
95 | initialLearnRate = 0.01;
96 | learnRateDecay = 0.001;
97 | averageGrad = []; % for pinn updates
98 | averageSqGrad = [];
99 | pAverageGrad = []; % for parameter updates
100 | pAverageSqGrad = [];
101 | ```
102 | 
103 | Set up a datastore for the training points. For simplicity, we both train the PINN and compute the known data at the mesh nodes.
104 | 
105 | ```matlab:Code
106 | ds = arrayDatastore(domainCollocationPoints);
107 | mbq = minibatchqueue(ds, MiniBatchSize = miniBatchSize, MiniBatchFormat="BC");
108 | % Calculate the total number of iterations for the training progress monitor and initialize the monitor.
109 | numIterations = numEpochs * ceil(size(domainCollocationPoints,1)/miniBatchSize);
110 | monitor = trainingProgressMonitor(Metrics="Loss",Info="Epoch",XLabel="Iteration");
111 | ```
112 | 
113 | ### Training Loop
114 | 
115 | Train the model and parameter using a custom training loop. Update the network parameters using the adamupdate function. At the end of each iteration, display the training progress. Note that we allow the PINN to train for the first 1/10th of the epochs before updating the c coefficient; this helps with robustness to the initial guess.
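For reference, the decayed learning rate applied at the end of each epoch in the loop below follows

$$\eta_k = \frac{\eta_0}{1 + \gamma k},$$

where $\eta_0$ is `initialLearnRate`, $\gamma$ is `learnRateDecay`, and $k$ is the iteration counter.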
116 | 
117 | ```matlab:Code
118 | iteration = 0;
119 | epoch = 0;
120 | learningRate = initialLearnRate;
121 | lossFcn = dlaccelerate(@modelLoss);
122 | while epoch < numEpochs && ~monitor.Stop
123 |     epoch = epoch + 1;
124 |     reset(mbq);
125 |     while hasdata(mbq) && ~monitor.Stop
126 |         iteration = iteration + 1;
127 |         XY = next(mbq);
128 |         % Evaluate the model loss and gradients using dlfeval.
129 |         [loss,gradients] = dlfeval(lossFcn,model,pinn,XY,pdeCoeffs);
130 |         % Update the network parameters using the adamupdate function.
131 |         [pinn,averageGrad,averageSqGrad] = adamupdate(pinn,gradients{1},averageGrad,...
132 |             averageSqGrad,iteration,learningRate);
133 |         % Update the c coefficient using the adamupdate function. Defer
134 |         % updating until 1/10 of the epochs are finished.
135 |         if epoch > numEpochs/10
136 |             [pdeCoeffs.c,pAverageGrad,pAverageSqGrad] = adamupdate(pdeCoeffs.c,gradients{2},pAverageGrad,...
137 |                 pAverageSqGrad,iteration,learningRate);
138 |         end
139 |     end
140 |     % Update learning rate.
141 |     learningRate = initialLearnRate / (1+learnRateDecay*iteration);
142 |     % Update the training progress monitor.
143 |     recordMetrics(monitor,iteration,Loss=loss);
144 |     updateInfo(monitor,Epoch=epoch + " of " + numEpochs);
145 |     monitor.Progress = 100 * iteration/numIterations;
146 | end
147 | ```
148 | 
149 | ![ConstantCoefTraining.png](./images/ConstantCoefTraining.png)
150 | 
151 | ### Visualize Data
152 | 
153 | Evaluate the PINN at the mesh nodes, plot the result, and include the updated value of c in the title. Note that c = 1.001; compared with the exact value of 1, this is a 0.1% error.
154 | 
155 | ```matlab:Code
156 | nodesDLarry = dlarray(msh.Nodes,"CB");
157 | Upinn = gather(extractdata(predict(pinn,nodesDLarry)));
158 | figure;
159 | pdeplot(model,"XYData",Upinn);
160 | title(sprintf("Solution with c = %.4f",double(pdeCoeffs.c)));
161 | ```
162 | ![ConstantCoefSolution](./images/ConstantCoefSolution.png)
163 | 
164 | ### Model Loss Function
165 | 
166 | The `modelLoss` helper function takes a [`dlnetwork`](https://mathworks.com/help/deeplearning/ref/dlnetwork.html) object `pinn` and a mini-batch of input data `XY`, and returns the loss and the gradients of the loss with respect to the learnable parameters of the PINN and with respect to the c coefficient. To compute the gradients automatically, use the [`dlgradient`](https://mathworks.com/help/deeplearning/ref/dlarray.dlgradient.html) function. Return the gradients with respect to the learnables and with respect to the parameter as two elements of a cell array so they can be used separately. The model is trained by enforcing that, given an input, the output of the network satisfies Poisson's equation and the boundary conditions.
167 | 
168 | ```matlab:Code
169 | function [loss,gradients] = modelLoss(model,pinn,XY,pdeCoeffs)
170 | dim = 2;
171 | U = forward(pinn,XY);
172 | 
173 | % Loss for difference in data taken at mesh nodes.
174 | Utrue = getSolutionData(XY);
175 | lossData = l2loss(U,Utrue);
176 | 
177 | % Compute gradients of U and the divergence term of the PDE.
178 | gradU = dlgradient(sum(U,"all"),XY,EnableHigherDerivatives=true);
179 | Laplacian = 0;
180 | for i=1:dim
181 |     % Add each term of div(c*grad(u))
182 |     gradU2 = dlgradient(sum(pdeCoeffs.c.*gradU(i,:),"all"),XY,EnableHigherDerivatives=true);
183 |     Laplacian = gradU2(i,:)+Laplacian;
184 | end
185 | 
186 | % Enforce PDE. Calculate lossF.
187 | res = -pdeCoeffs.f - Laplacian + pdeCoeffs.a.*U;
188 | lossF = mean(sum(res.^2,1),2);
189 | 
190 | % Enforce boundary conditions. Calculate lossU.
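% The loop below collects the coordinates of the boundary nodes on each
% edge together with the Dirichlet value u prescribed there; the PINN
% predictions at those points are then penalized against these values.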
191 | actualBC = []; % contains the actual boundary information
192 | BC_XY = []; % boundary coordinates
193 | % Loop over the boundary edges and find the boundary coordinates and the
194 | % actual BC assigned to the PDE model.
195 | numBoundaries = model.Geometry.NumEdges;
196 | for i=1:numBoundaries
197 |     BCi = findBoundaryConditions(model.BoundaryConditions,'Edge',i);
198 |     BCiNodes = findNodes(model.Mesh,"region","Edge",i);
199 |     BC_XY = [BC_XY, model.Mesh.Nodes(:,BCiNodes)]; %#ok
200 |     actualBCi = ones(1,numel(BCiNodes))*BCi.u;
201 |     actualBC = [actualBC actualBCi]; %#ok
202 | end
203 | BC_XY = dlarray(BC_XY,"CB"); % format the coordinates
204 | predictedBC = forward(pinn,BC_XY);
205 | lossBC = mse(predictedBC,actualBC);
206 | 
207 | % Combine weighted losses.
208 | lambdaPDE = 0.4; % weighting factor
209 | lambdaBC = 0.6;
210 | lambdaData = 0.5;
211 | loss = lambdaPDE*lossF + lambdaBC*lossBC + lambdaData*lossData;
212 | 
213 | % Calculate gradients with respect to the learnable parameters and the
214 | % c coefficient. Pass back a cell array to update pinn and coef separately.
215 | gradients = dlgradient(loss,{pinn.Learnables,pdeCoeffs.c});
216 | end
217 | ```
218 | 
219 | ### Data Function
220 | This function returns the solution data at a given set of points XY. As a demonstration of the method, we return the exact solution from this function, but it could be replaced with measured data for a given application.
221 | 
222 | ```matlab:Code
223 | function UD = getSolutionData(XY)
224 | UD = (1-XY(1,:).^2-XY(2,:).^2)/4;
225 | end
226 | ```
227 | 
228 | 
229 | ## PINNs for Inverse Problems: Variable Coefficients
230 | 
231 | The Poisson equation on a unit disk with zero Dirichlet boundary condition can be written as $-\nabla\cdot (c \nabla u) = 1$ in $\Omega$, $u=0$ on $\partial \Omega$, where $\Omega$ is the unit disk. The exact solution when $c=1$ is $$u(x,y) = \frac{1 - x^2 - y^2}{4}$$
232 | 
233 | This equation is relevant, for example, in thermodynamics, where it describes a steady-state heat equation: the boundary of the circle is held at a constant temperature of 0, the right-hand side of the equation is a volumetric heat source, and the coefficient $c$ is the thermal diffusivity. The typical "forward" problem is to use this information to find $u$.
234 | 
235 | One could consider the inverse problem: given a known temperature distribution $u$ corresponding to the solution at a set of points, a known volumetric heat source, and boundary conditions, what is the thermal diffusivity? Using a PINN to solve the equation, we can optimize the PINN parameters and the coefficient at the same time. In this example, we take the data to be given by the exact solution above. In practice, the exact solution is often not known, but measurements of the solution can be provided instead.
236 | 
237 | If the coefficient is known to be a function of space or of the solution, we can use a neural network to represent it and train this network alongside the PINN.
238 | 
239 | ### PDE Model Definition
240 | 
241 | We reuse the same geometry as in the previous example.
242 | 
243 | ```matlab:Code
244 | rng('default'); % for reproducibility
245 | model = createpde();
246 | geometryFromEdges(model,@circleg);
247 | applyBoundaryCondition(model,"dirichlet", ...
248 |     "Edge",1:model.Geometry.NumEdges, ...
249 |     "u",0);
250 | ```
251 | 
252 | Create a structure array of coefficients. `pdeCoeffs.c` is left empty; it will be approximated using a neural network.
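Since the coefficient is represented by an ordinary `dlnetwork`, any small architecture mapping $(x,y)$ to a scalar would do. A minimal sketch of a standalone coefficient surrogate (hypothetical; the example below instead copies the PINN architecture):

```matlab:Code
% Hypothetical standalone coefficient network: a small MLP mapping the
% coordinates (x,y) to a scalar coefficient value c(x,y).
coefLayers = [
    featureInputLayer(2)
    fullyConnectedLayer(32)
    tanhLayer
    fullyConnectedLayer(1)];
coefSketch = dlnetwork(coefLayers);
% Evaluate the (untrained) surrogate at the origin.
cAtOrigin = predict(coefSketch,dlarray([0;0],"CB"))
```

If the diffusivity must stay positive, one option is to exponentiate the network output inside the loss function.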
253 | 
254 | ```matlab:Code
255 | pdeCoeffs.c = [];
256 | % fixed values for other coefficients
257 | pdeCoeffs.m = 0;
258 | pdeCoeffs.d = 0;
259 | pdeCoeffs.a = 0;
260 | pdeCoeffs.f = 1;
261 | % set up mesh
262 | Hmax = 0.05; Hgrad = 2; Hedge = Hmax/10;
263 | msh = generateMesh(model,"Hmax",Hmax,"Hgrad",Hgrad,"Hedge",{1:model.Geometry.NumEdges,Hedge});
264 | % make coefs dlarrays so gradients can be computed
265 | pdeCoeffs = structfun(@dlarray,pdeCoeffs,'UniformOutput',false);
266 | ```
267 | 
268 | 
269 | ### Generate Spatial Data for Training PINN
270 | 
271 | This example uses mesh nodes as the collocation points. The model loss at the collocation points on the domain and boundary is used to train the PINN. For simplicity, we will supply the true solution data at the same set of collocation points where the PINN loss is computed, but this is not required.
272 | 
273 | ```matlab:Code
274 | boundaryNodes = findNodes(msh,"region","Edge",1:model.Geometry.NumEdges);
275 | domainNodes = setdiff(1:size(msh.Nodes,2),boundaryNodes);
276 | domainCollocationPoints = msh.Nodes(:,domainNodes)';
277 | ```
278 | 
279 | ### Define Deep Learning Model
280 | 
281 | This is a neural network with 3 hidden layers and 50 neurons per layer. The two inputs to the network correspond to the x and y coordinates, and the one output corresponds to the solution, so `predict(pinn,XY)` approximates `u(x,y)`.
282 | 
283 | ```matlab:Code
284 | numLayers = 3;
285 | numNeurons = 50;
286 | layers = featureInputLayer(2);
287 | for i = 1:numLayers-1
288 |     layers = [
289 |         layers
290 |         fullyConnectedLayer(numNeurons)
291 |         tanhLayer]; %#ok
292 | end
293 | layers = [
294 |     layers
295 |     fullyConnectedLayer(1)];
296 | pinn = dlnetwork(layers);
297 | ```
298 | 
299 | Set up the coefficient network by copying the PINN so the two networks have the same architecture. Here, the two inputs represent x and y and the one output corresponds to the coefficient, so `predict(coefNet,XY)` approximates $c(x,y)$. If the problem were nonlinear and spatially dependent, $c = c(x,y,u)$, you could add a third input and make the corresponding changes in the model loss function. Because of the data that will be supplied to the network, we expect `coefNet` to learn $c(x,y) = 1$ for all $x$ and $y$.
300 | 
301 | ```matlab:Code
302 | coefNet = pinn;
303 | ```
304 | 
305 | Put both networks in one struct.
306 | 
307 | ```matlab:Code
308 | nets = struct(pinn=pinn,coefNet=coefNet);
309 | ```
310 | 
311 | ### Define Custom Training Loop to Train the PINN Using ADAM Solver
312 | 
313 | Specify training options. Create arrays for the average gradients and average squared gradients of both the PINN and the parameter network; both will be trained using the ADAM solver.
314 | 
315 | ```matlab:Code
316 | numEpochs = 5000;
317 | miniBatchSize = 2^12;
318 | initialLearnRate = 0.02;
319 | learnRateDecay = 0.001;
320 | averageGrad = [];
321 | averageSqGrad = [];
322 | ```
323 | 
324 | Set up a datastore for the training points. As stated before, we both train the PINN and compute the known data at the mesh nodes.
325 | 
326 | ```matlab:Code
327 | ds = arrayDatastore(domainCollocationPoints);
328 | mbq = minibatchqueue(ds, MiniBatchSize = miniBatchSize, MiniBatchFormat="BC");
329 | % Calculate the total number of iterations for the training progress monitor and initialize the monitor.
330 | numIterations = numEpochs * ceil(size(domainCollocationPoints,1)/miniBatchSize);
331 | monitor = trainingProgressMonitor(Metrics="Loss",Info="Epoch",XLabel="Iteration");
332 | ```
333 | 
334 | ### Training Loop
335 | Train the model and the parameter network using a custom training loop. Update the network parameters using the adamupdate function. At the end of each iteration, display the training progress.
336 | 
337 | ```matlab:Code
338 | epoch = 0;
339 | iteration = 0;
340 | learningRate = initialLearnRate;
341 | lossFcn = dlaccelerate(@modelLoss);
342 | while epoch < numEpochs && ~monitor.Stop
343 |     epoch = epoch + 1;
344 |     reset(mbq);
345 |     while hasdata(mbq) && ~monitor.Stop
346 |         iteration = iteration + 1;
347 |         XY = next(mbq);
348 |         % XY coordinates at which to get data.
349 |         XYData = XY;
350 |         % Evaluate the model loss and gradients using dlfeval.
351 |         [loss,gradients] = dlfeval(lossFcn,model,nets,XY,pdeCoeffs,XYData);
352 |         % Update the parameters of both networks.
353 |         [nets,averageGrad,averageSqGrad] = adamupdate(nets,gradients,averageGrad,...
354 |             averageSqGrad,iteration,learningRate);
355 |     end
356 |     % Update learning rate.
357 |     learningRate = initialLearnRate / (1+learnRateDecay*iteration);
358 |     % Update the training progress monitor.
359 |     recordMetrics(monitor,iteration,Loss=loss);
360 |     updateInfo(monitor,Epoch=epoch + " of " + numEpochs);
361 |     monitor.Progress = 100 * iteration/numIterations;
362 | end
363 | ```
364 | 
365 | 
366 | ![VariableCoefTraining](images/VariableCoefTraining.png)
367 | 
368 | 
369 | ### Visualize Data
370 | 
371 | Evaluate the PINN and the coefficient network at the mesh nodes and plot the results. Compute the maximum error of the predicted coefficient at the collocation points.
372 | 
373 | ```matlab:Code
374 | nodesDLarry = dlarray(msh.Nodes,"CB");
375 | Upinn = gather(extractdata(predict(nets.pinn,nodesDLarry)));
376 | figure(1);
377 | pdeplot(model,"XYData",Upinn);
378 | C = gather(extractdata(predict(nets.coefNet,nodesDLarry)));
379 | figure(2);
380 | pdeplot(model,"XYData",C);
381 | title(sprintf('Predicted coefficient max error: %0.1e',max(abs(C-1))))
382 | ```
383 | 
384 | ![VariablePredictedSolution](./images/VariablePredictedSolution.png)
385 | ![VariablePredictedC](./images/VariablePredictedC.png)
386 | 
387 | 
388 | 
389 | ### Model Loss Function
390 | 
391 | The `modelLoss` helper function takes the struct of [`dlnetwork`](https://mathworks.com/help/deeplearning/ref/dlnetwork.html) objects `nets` and a mini-batch of input data `XY`, and returns the loss and the gradients of the loss with respect to the learnable parameters of both the PINN and the coefficient network. To compute the gradients automatically, use the [`dlgradient`](https://mathworks.com/help/deeplearning/ref/dlarray.dlgradient.html) function. Return the gradients of the two networks as fields of a struct so that both networks can be updated with a single adamupdate call. The model is trained by enforcing that, given an input, the output of the network satisfies Poisson's equation and the boundary conditions.
392 | 
393 | ```matlab:Code
394 | function [loss,gradients] = modelLoss(model,nets,XY,pdeCoeffs,XYData)
395 | dim = 2;
396 | U = forward(nets.pinn,XY);
397 | C = forward(nets.coefNet,XY);
398 | 
399 | % Loss for difference in data taken at mesh nodes.
400 | UDPred = forward(nets.pinn,XYData);
401 | UData = getSolutionData(XYData);
402 | lossData = l2loss(UDPred,UData);
403 | 
404 | % Compute gradients of U and the divergence term of the PDE.
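% Each dlgradient call below differentiates C.*dU/dx_i with respect to
% both coordinates and keeps the i-th component, so the accumulated sum
% equals d/dx(C*dU/dx) + d/dy(C*dU/dy), i.e. div(C*grad(u)).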
405 | gradU = dlgradient(sum(U,"all"),XY,EnableHigherDerivatives=true);
406 | Laplacian = 0;
407 | for i=1:dim
408 |     % Add each term of div(C*grad(u))
409 |     gradU2 = dlgradient(sum(C.*gradU(i,:),"all"),XY,EnableHigherDerivatives=true);
410 |     Laplacian = gradU2(i,:)+Laplacian;
411 | end
412 | 
413 | % Enforce PDE. Calculate lossF.
414 | res = -pdeCoeffs.f - Laplacian + pdeCoeffs.a.*U;
415 | zeroTarget = zeros(size(res), "like", res);
416 | lossF = mse(res, zeroTarget);
417 | 
418 | % Enforce boundary conditions. Calculate lossU.
419 | actualBC = []; % contains the actual boundary information
420 | BC_XY = []; % boundary coordinates
421 | % Loop over the boundary edges and find the boundary coordinates and the
422 | % actual BC assigned to the PDE model.
423 | numBoundaries = model.Geometry.NumEdges;
424 | for i=1:numBoundaries
425 |     BCi = findBoundaryConditions(model.BoundaryConditions,'Edge',i);
426 |     BCiNodes = findNodes(model.Mesh,"region","Edge",i);
427 |     BC_XY = [BC_XY, model.Mesh.Nodes(:,BCiNodes)]; %#ok
428 |     actualBCi = ones(1,numel(BCiNodes))*BCi.u;
429 |     actualBC = [actualBC actualBCi]; %#ok
430 | end
431 | BC_XY = dlarray(BC_XY,"CB"); % format the coordinates
432 | predictedBC = forward(nets.pinn,BC_XY);
433 | lossBC = mse(predictedBC,actualBC);
434 | 
435 | % Combine weighted losses.
436 | lambdaPDE = 1.1;
437 | lambdaBC = 1.;
438 | lambdaData = 1.;
439 | loss = lambdaPDE*lossF + lambdaBC*lossBC + lambdaData*lossData;
440 | 
441 | % Calculate gradients with respect to the learnable parameters of both
442 | % networks. Return gradients as a struct in order to update them
443 | % simultaneously using adamupdate.
444 | grads = dlgradient(loss,{nets.pinn.Learnables,nets.coefNet.Learnables});
445 | gradients.pinn = grads{1};
446 | gradients.coefNet = grads{2};
447 | end
448 | ```
449 | 
450 | ### Data Function
451 | 
452 | This function returns the solution data at a given set of points XY. As a demonstration of the method, we return the exact solution from this function, but it could be replaced with measured data at the given set of points for a given application.
453 | 
454 | ```matlab:Code
455 | function UD = getSolutionData(XY)
456 | UD = (1-XY(1,:).^2-XY(2,:).^2)/4;
457 | end
458 | ```
459 | 
460 | ## Requirements
461 | 
462 | Requires [MATLAB®](https://mathworks.com/products/matlab.html).
463 | * [Deep Learning Toolbox™](https://mathworks.com/products/deep-learning.html)
464 | * [Partial Differential Equation Toolbox™](https://mathworks.com/products/pde.html)
465 | 
466 | ## To run
467 | Open the MLX files and run. No data or additional setup is required. Versions with the .m file extension can be found under [`/src`](./src).
468 | 
469 | Copyright 2023 The MathWorks, Inc.
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/images/ConstantCoefSolution.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/images/ConstantCoefSolution.png
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/images/ConstantCoefTraining.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/images/ConstantCoefTraining.png
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/images/Geometry.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/images/Geometry.png
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/images/VariableCoefTraining.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/images/VariableCoefTraining.png
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/images/VariablePredictedC.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/images/VariablePredictedC.png
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/images/VariablePredictedSolution.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/inverse-problems-using-physics-informed-neural-networks/images/VariablePredictedSolution.png
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/src/InversePinnConstantCoef.m:
--------------------------------------------------------------------------------
1 | %% *PINNs for Inverse Problems: Constant Coefficients*
2 | % 
3 | % 
4 | % The Poisson equation on a unit disk with zero Dirichlet boundary condition
5 | % can be written as $$- \nabla \cdot (c\nabla u) = 1$$ in $\Omega$, $u = 0$ on
6 | % $\partial \Omega$, where $\Omega$ is the unit disk. The exact solution when
7 | % $c = 1$ is
8 | % 
9 | % $$u(x,y)= \frac{1-x^2-y^2}{4}$$
10 | % 
11 | % This equation is relevant for example in thermodynamics, where it describes
12 | % a steady-state heat equation: the boundary of the circle is held at a constant
13 | % temperature of 0, the right-hand side of the equation is a volumetric heat source,
14 | % and the coefficient $c$ is the thermal diffusivity. The typical "forward" problem
15 | % is to use this information to find $u$.
16 | % 
17 | % One could consider the inverse problem: given a known temperature distribution
18 | % $u$ corresponding to the solution at a set of points, a known volumetric heat
19 | % source, and boundary conditions, what is the thermal diffusivity? Using a PINN
20 | % to solve the equation, we can optimize the PINN parameters and solve for the
21 | % coefficient at the same time. In this example, we will take
22 | % the data to be given by the exact solution above. In practice, the exact solution
23 | % is often not known, but measurements of the solution can be provided instead.
24 | %% PDE Model Definition
25 | % 
26 | 
27 | rng('default'); % for reproducibility
28 | model = createpde();
29 | geometryFromEdges(model,@circleg);
30 | applyBoundaryCondition(model,"dirichlet", ...
31 |     "Edge",1:model.Geometry.NumEdges, ...
32 |     "u",0);
33 | %% 
34 | % Plot the geometry and display the edge labels for use in the boundary condition
35 | % definition.
36 | 
37 | figure
38 | pdegplot(model,"EdgeLabels","on");
39 | axis equal
40 | %% 
41 | % Create a structural array of coefficients. Specify the coefficients to the
42 | % PDE model. Note that pdeCoeffs.c is an initial guess; it will be updated during
43 | % training. We also define pdeCoeffs to be a struct of dlarrays so that we can
44 | % compute gradients with respect to them.
45 | 
46 | pdeCoeffs.c = .5;
47 | % fixed values for other coefficients
48 | pdeCoeffs.m = 0;
49 | pdeCoeffs.d = 0;
50 | pdeCoeffs.a = 0;
51 | pdeCoeffs.f = 1;
52 | % set up model
53 | specifyCoefficients(model,"m",pdeCoeffs.m,"d",pdeCoeffs.d,"c",pdeCoeffs.c,"a",pdeCoeffs.a,"f",pdeCoeffs.f);
54 | Hmax = 0.05; Hgrad = 2; Hedge = Hmax/10;
55 | msh = generateMesh(model,"Hmax",Hmax,"Hgrad",Hgrad,"Hedge",{1:model.Geometry.NumEdges,Hedge});
56 | % make coefs dlarrays so gradients can be computed
57 | pdeCoeffs = structfun(@dlarray,pdeCoeffs,'UniformOutput',false);
58 | %% Generate Spatial Data for Training PINN
59 | % This example uses mesh nodes as the collocation points. The model loss at the
60 | % collocation points on the domain and boundary is used to train the PINN.
61 | 
62 | boundaryNodes = findNodes(msh,"region","Edge",1:model.Geometry.NumEdges);
63 | domainNodes = setdiff(1:size(msh.Nodes,2),boundaryNodes);
64 | domainCollocationPoints = msh.Nodes(:,domainNodes)';
65 | %% Define Deep Learning Model
66 | % This is a neural network with 3 hidden layers and 50 neurons per layer. The
67 | % two inputs to the network correspond to the x and y coordinates, and the one
68 | % output corresponds to the solution, so |predict(pinn,XY)| approximates |u(x,y)|.
69 | % While training the c coefficient, we will also train this neural network to
70 | % provide solutions to the PDE.
71 | 
72 | numLayers = 3;
73 | numNeurons = 50;
74 | layers = featureInputLayer(2);
75 | for i = 1:numLayers-1
76 |     layers = [
77 |         layers
78 |         fullyConnectedLayer(numNeurons)
79 |         tanhLayer];%#ok
80 | end
81 | layers = [
82 |     layers
83 |     fullyConnectedLayer(1)];
84 | pinn = dlnetwork(layers);
85 | %% Define Custom Training Loop to Train the PINN Using ADAM Solver
86 | % Specify training options. Create arrays for average gradients and square gradients
87 | % for both the PINN and the parameter; both will be trained using the ADAM solver.
88 | 
89 | numEpochs = 1500;
90 | miniBatchSize = 2^12;
91 | initialLearnRate = 0.01;
92 | learnRateDecay = 0.001;
93 | averageGrad = []; % for pinn updates
94 | averageSqGrad = [];
95 | pAverageGrad = []; % for parameter updates
96 | pAverageSqGrad = [];
97 | %% 
98 | % Set up a datastore for the training points. For simplicity, we both train the
99 | % PINN and compute the known data at the mesh nodes.
100 | 
101 | ds = arrayDatastore(domainCollocationPoints);
102 | mbq = minibatchqueue(ds, MiniBatchSize = miniBatchSize, MiniBatchFormat="BC");
103 | %% 
104 | % Calculate the total number of iterations for the training progress monitor
105 | % and initialize the monitor.
106 | 
107 | numIterations = numEpochs * ceil(size(domainCollocationPoints,1)/miniBatchSize);
108 | monitor = trainingProgressMonitor(Metrics="Loss",Info="Epoch",XLabel="Iteration");
109 | %% Training Loop
110 | % Train the model and parameter using a custom training loop. Update the network
111 | % parameters using the adamupdate function. At the end of each iteration, display
112 | % the training progress. Note that we allow the PINN to be trained for 1/10th
113 | % of the epochs before updating the c coefficient. This helps with robustness
114 | % to the initial guess.
115 | 
116 | iteration = 0;
117 | epoch = 0;
118 | learningRate = initialLearnRate;
119 | lossFcn = dlaccelerate(@modelLoss);
120 | while epoch < numEpochs && ~monitor.Stop
121 |     epoch = epoch + 1;
122 |     reset(mbq);
123 |     while hasdata(mbq) && ~monitor.Stop
124 |         iteration = iteration + 1;
125 |         XY = next(mbq);
126 |         % Evaluate the model loss and gradients using dlfeval.
127 |         [loss,gradients] = dlfeval(lossFcn,model,pinn,XY,pdeCoeffs);
128 |         % Update the network parameters using the adamupdate function.
129 |         [pinn,averageGrad,averageSqGrad] = adamupdate(pinn,gradients{1},averageGrad,...
130 |             averageSqGrad,iteration,learningRate);
131 |         % Update the c coefficient using the adamupdate function. Defer
132 |         % updating until 1/10 of epochs are finished.
133 |         if epoch > numEpochs/10
134 |             [pdeCoeffs.c,pAverageGrad,pAverageSqGrad] = adamupdate(pdeCoeffs.c,gradients{2},pAverageGrad,...
135 |                 pAverageSqGrad,iteration,learningRate);
136 |         end
137 |     end
138 |     % Update learning rate.
139 |     learningRate = initialLearnRate / (1+learnRateDecay*iteration);
140 |     % Update the training progress monitor.
141 |     recordMetrics(monitor,iteration,Loss=loss);
142 |     updateInfo(monitor,Epoch=epoch + " of " + numEpochs);
143 |     monitor.Progress = 100 * iteration/numIterations;
144 | end
145 | %% Visualize Data
146 | % Evaluate the PINN at the mesh nodes and plot, including the updated value of c
147 | % in the title. Note that the recovered value c = 1.001 is within 0.1% of the exact value 1.
148 | 
149 | nodesDLarry = dlarray(msh.Nodes,"CB");
150 | Upinn = gather(extractdata(predict(pinn,nodesDLarry)));
151 | figure;
152 | pdeplot(model,"XYData",Upinn);
153 | title(sprintf("Solution with c = %.4f",double(pdeCoeffs.c)));
154 | %% 
155 | % 
156 | % 
157 | % 
158 | %% Model Loss Function
159 | % The |modelLoss| helper function takes a |dlnetwork| object |pinn| and a mini-batch
160 | % of input data |XY|, and returns the loss and the gradients of the loss with
161 | % respect to the learnable parameters in |pinn| and with respect to the c coefficient.
162 | % To compute the gradients automatically, use the |dlgradient| function. Return
163 | % the gradients w.r.t. the learnables and w.r.t. the c coefficient as two elements
164 | % of a cell array so they can be used separately. The model is trained by enforcing
165 | % that given an input $(x,y)$ the output of the network $u(x,y)$ satisfies Poisson's
166 | % equation and the boundary conditions.
167 | 
168 | function [loss,gradients] = modelLoss(model,pinn,XY,pdeCoeffs)
169 | dim = 2;
170 | U = forward(pinn,XY);
171 | 
172 | % Loss for difference in data taken at mesh nodes.
173 | Utrue = getSolutionData(XY);
174 | lossData = l2loss(U,Utrue);
175 | 
176 | % Compute gradients of U and Laplacian of U.
177 | gradU = dlgradient(sum(U,"all"),XY,EnableHigherDerivatives=true);
178 | Laplacian = 0;
179 | for i=1:dim
180 |     % Add each term of the Laplacian.
181 |     gradU2 = dlgradient(sum(pdeCoeffs.c.*gradU(i,:),"all"),XY,EnableHigherDerivatives=true);
182 |     Laplacian = gradU2(i,:)+Laplacian;
183 | end
184 | 
185 | % Enforce PDE. Calculate lossF.
186 | res = -pdeCoeffs.f - Laplacian + pdeCoeffs.a.*U;
187 | lossF = mean(sum(res.^2,1),2);
188 | 
189 | % Enforce boundary conditions. Calculate lossBC.
190 | actualBC = []; % contains the actual boundary information
191 | BC_XY = []; % boundary coordinates
192 | % Loop over the boundary edges and find boundary coordinates and actual BC
193 | % assigned to PDE model.
194 | numBoundaries = model.Geometry.NumEdges;
195 | for i=1:numBoundaries
196 |     BCi = findBoundaryConditions(model.BoundaryConditions,'Edge',i);
197 |     BCiNodes = findNodes(model.Mesh,"region","Edge",i);
198 |     BC_XY = [BC_XY, model.Mesh.Nodes(:,BCiNodes)]; %#ok
199 |     actualBCi = ones(1,numel(BCiNodes))*BCi.u;
200 |     actualBC = [actualBC actualBCi]; %#ok
201 | end
202 | BC_XY = dlarray(BC_XY,"CB"); % format the coordinates
203 | predictedBC = forward(pinn,BC_XY);
204 | lossBC = mse(predictedBC,actualBC);
205 | 
206 | % Combine weighted losses.
207 | lambdaPDE = 0.4; % weighting factor
208 | lambdaBC = 0.6;
209 | lambdaData = 0.5;
210 | loss = lambdaPDE*lossF + lambdaBC*lossBC + lambdaData*lossData;
211 | 
212 | % Calculate gradients with respect to the learnable parameters and
213 | % C-coefficient. Pass back cell array to update pinn and coef separately.
214 | gradients = dlgradient(loss,{pinn.Learnables,pdeCoeffs.c});
215 | end
216 | %% 
217 | % 
218 | % 
219 | % 
220 | %% Data Function
221 | % This function returns the solution data at a given set of points |XY|. As
222 | % a demonstration of the method, we return the exact solution from this function,
223 | % but this function could be replaced with measured data for a given application.
224 | 
225 | function UD = getSolutionData(XY)
226 | UD = (1-XY(1,:).^2-XY(2,:).^2)/4;
227 | end
228 | %% 
229 | % Copyright 2023 The MathWorks, Inc.
--------------------------------------------------------------------------------
/inverse-problems-using-physics-informed-neural-networks/src/InversePinnVariableCoef.m:
--------------------------------------------------------------------------------
1 | %% *PINNs for Inverse Problems: Variable Coefficients*
2 | % 
3 | % 
4 | % The Poisson equation on a unit disk with zero Dirichlet boundary condition
5 | % can be written as $$- \nabla \cdot (c\nabla u) = 1$$ in $\Omega$, $u = 0$ on
6 | % $\partial \Omega$, where $\Omega$ is the unit disk. The exact solution when
7 | % $c = 1$ is
8 | % 
9 | % $$u(x,y)= \frac{1-x^2-y^2}{4}$$
10 | % 
11 | % This equation is relevant for example in thermodynamics, where it describes
12 | % a steady-state heat equation: the boundary of the circle is held at a constant
13 | % temperature of 0, the right-hand side of the equation is a volumetric heat source,
14 | % and the coefficient $c$ is the thermal diffusivity. The typical "forward" problem
15 | % is to use this information to find $u$.
16 | % 
17 | % One could consider the inverse problem: given a known temperature distribution
18 | % $u$ corresponding to the solution at a set of points, a known volumetric heat
19 | % source, and boundary conditions, what is the thermal diffusivity? Using a PINN
20 | % to solve the equation, we can optimize the PINN parameters and solve for the
21 | % coefficient at the same time. In this example, we will take
22 | % the data to be given by the exact solution above. In practice, the exact solution
23 | % is often not known, but measurements of the solution can be provided instead.
24 | % 
25 | % If the coefficients are known to be a function of space or of the solution,
26 | % we can use a neural network to represent the coefficient, and train this neural
27 | % network alongside the PINN.
28 | %% PDE Model Definition
29 | % 
30 | 
31 | rng('default'); % for reproducibility
32 | model = createpde();
33 | geometryFromEdges(model,@circleg);
34 | applyBoundaryCondition(model,"dirichlet", ...
35 |     "Edge",1:model.Geometry.NumEdges, ...
36 |     "u",0);
37 | %% 
38 | % Plot the geometry and display the edge labels for use in the boundary condition
39 | % definition.
40 | 
41 | figure
42 | pdegplot(model,"EdgeLabels","on");
43 | axis equal
44 | %% 
45 | % Create a structural array of coefficients. C is left empty; it will be approximated
46 | % using a neural network.
47 | 
48 | pdeCoeffs.c = [];
49 | % fixed values for other coefficients
50 | pdeCoeffs.m = 0;
51 | pdeCoeffs.d = 0;
52 | pdeCoeffs.a = 0;
53 | pdeCoeffs.f = 1;
54 | % set up mesh
55 | Hmax = 0.05; Hgrad = 2; Hedge = Hmax/10;
56 | msh = generateMesh(model,"Hmax",Hmax,"Hgrad",Hgrad,"Hedge",{1:model.Geometry.NumEdges,Hedge});
57 | % make coefs dlarrays so gradients can be computed
58 | pdeCoeffs = structfun(@dlarray,pdeCoeffs,'UniformOutput',false);
59 | %% Generate Spatial Data for Training PINN
60 | % This example uses mesh nodes as the collocation points. The model loss at the
61 | % collocation points on the domain and boundary is used to train the PINN. For
62 | % simplicity, we will supply the true solution data at the same set of collocation
63 | % points where the PINN loss is computed, but this is not required.
64 | 
65 | boundaryNodes = findNodes(msh,"region","Edge",1:model.Geometry.NumEdges);
66 | domainNodes = setdiff(1:size(msh.Nodes,2),boundaryNodes);
67 | domainCollocationPoints = msh.Nodes(:,domainNodes)';
68 | %% Define Deep Learning Model
69 | % This is a neural network with 3 hidden layers and 50 neurons per layer. The
70 | % two inputs to the network correspond to the x and y coordinates, and the one
71 | % output corresponds to the solution, so |predict(pinn,XY)| approximates |u(x,y)|.
72 | 
73 | numLayers = 3;
74 | numNeurons = 50;
75 | layers = featureInputLayer(2);
76 | for i = 1:numLayers-1
77 |     layers = [
78 |         layers
79 |         fullyConnectedLayer(numNeurons)
80 |         tanhLayer];%#ok
81 | end
82 | layers = [
83 |     layers
84 |     fullyConnectedLayer(1)];
85 | pinn = dlnetwork(layers);
86 | %% 
87 | % Set up the coefficient network by copying the PINN so the two networks have
88 | % the same architecture. Here, the two inputs represent x and y and the one output
89 | % corresponds to the coefficient, so |predict(coefNet,XY)| approximates |c(x,y)|.
90 | % If the problem were nonlinear and spatially dependent, |c = c(x,y,u)|, you could
91 | % add a third input and make the corresponding changes in the model loss function.
92 | % Because of the data that will be supplied to the network, we expect that coefNet
93 | % will learn |c(x,y) = 1| for all x and y.
94 | 
95 | coefNet = pinn;
96 | %% 
97 | % Put both networks in one struct.
98 | 
99 | nets = struct(pinn=pinn,coefNet=coefNet);
100 | %% Define Custom Training Loop to Train the PINN Using ADAM Solver
101 | % Specify training options. Create arrays for average gradients and square gradients
102 | % for both the PINN and the parameter network; both will be trained using the
103 | % ADAM solver.
104 | 
105 | numEpochs = 5000;
106 | miniBatchSize = 2^12;
107 | initialLearnRate = 0.02;
108 | learnRateDecay = 0.001;
109 | averageGrad = [];
110 | averageSqGrad = [];
111 | %% 
112 | % Set up a datastore for the training points. As stated before, we both train
113 | % the PINN and compute the known data at the mesh nodes.
114 | 
115 | ds = arrayDatastore(domainCollocationPoints);
116 | mbq = minibatchqueue(ds, MiniBatchSize = miniBatchSize, MiniBatchFormat="BC");
117 | %% 
118 | % Calculate the total number of iterations for the training progress monitor
119 | % and initialize the monitor.
120 | 
121 | numIterations = numEpochs * ceil(size(domainCollocationPoints,1)/miniBatchSize);
122 | monitor = trainingProgressMonitor(Metrics="Loss",Info="Epoch",XLabel="Iteration");
123 | %% Training Loop
124 | % Train the model and parameter network using a custom training loop. Update
125 | % the network parameters using the adamupdate function. At the end of each iteration,
126 | % display the training progress.
127 | 
128 | epoch = 0;
129 | iteration = 0;
130 | learningRate = initialLearnRate;
131 | lossFcn = dlaccelerate(@modelLoss);
132 | while epoch < numEpochs && ~monitor.Stop
133 |     epoch = epoch + 1;
134 |     reset(mbq);
135 |     while hasdata(mbq) && ~monitor.Stop
136 |         iteration = iteration + 1;
137 |         XY = next(mbq);
138 |         % XY coordinates at which to get data
139 |         XYData = XY;
140 |         % Evaluate the model loss and gradients using dlfeval.
141 |         [loss,gradients] = dlfeval(lossFcn,model,nets,XY,pdeCoeffs,XYData);
142 |         % Update the parameters of both networks.
143 |         [nets,averageGrad,averageSqGrad] = adamupdate(nets,gradients,averageGrad,...
144 |             averageSqGrad,iteration,learningRate);
145 |     end
146 |     % Update learning rate.
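147 |     % Note that the decay below uses the cumulative iteration count, so after
148 |     % k total iterations the effective rate is initialLearnRate/(1+learnRateDecay*k).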
149 |     learningRate = initialLearnRate / (1+learnRateDecay*iteration);
150 |     % Update the training progress monitor.
151 |     recordMetrics(monitor,iteration,Loss=loss);
152 |     updateInfo(monitor,Epoch=epoch + " of " + numEpochs);
153 |     monitor.Progress = 100 * iteration/numIterations;
154 | end
155 | %% Visualize Data
156 | % Evaluate the PINN and the parameter network at the mesh nodes and plot. Compute
157 | % the maximum error at the collocation points for the predicted coefficient.
158 | 
159 | nodesDLarry = dlarray(msh.Nodes,"CB");
160 | Upinn = gather(extractdata(predict(nets.pinn,nodesDLarry)));
161 | figure(1);
162 | pdeplot(model,"XYData",Upinn);
163 | C = gather(extractdata(predict(nets.coefNet,nodesDLarry)));
164 | figure(2);
165 | pdeplot(model,"XYData",C);
166 | title(sprintf('Predicted coefficient max error: %0.1e',max(abs(C-1))))
167 | %% 
168 | % 
169 | % 
170 | % 
171 | %% Model Loss Function
172 | % The |modelLoss| helper function takes a struct |nets| containing two |dlnetwork|
173 | % objects, the PINN and the coefficient network, and a mini-batch of input data
174 | % |XY|, and returns the loss and the gradients of the loss with respect to the
175 | % learnable parameters of both networks. Compute the gradients automatically
176 | % using the |dlgradient| function. Return the gradients for the two networks as
177 | % two fields of a struct so that both can be updated in a single adamupdate call.
178 | % The model is trained by enforcing that given an input $(x,y)$ the output of
179 | % the network $u(x,y)$ satisfies Poisson's equation and the boundary conditions.
180 | 
181 | function [loss,gradients] = modelLoss(model,nets,XY,pdeCoeffs,XYData)
182 | dim = 2;
183 | U = forward(nets.pinn,XY);
184 | C = forward(nets.coefNet,XY);
185 | 
186 | % Loss for difference in data taken at mesh nodes.
187 | UDPred = forward(nets.pinn,XYData);
188 | UData = getSolutionData(XYData);
189 | lossData = l2loss(UDPred,UData);
190 | 
191 | % Compute gradients of U and Laplacian of U.
192 | gradU = dlgradient(sum(U,"all"),XY,EnableHigherDerivatives=true);
193 | Laplacian = 0;
194 | for i=1:dim
195 |     % Add each term of the Laplacian.
196 |     gradU2 = dlgradient(sum(C.*gradU(i,:),"all"),XY,EnableHigherDerivatives=true);
197 |     Laplacian = gradU2(i,:)+Laplacian;
198 | end
199 | 
200 | % Enforce PDE. Calculate lossF.
201 | res = -pdeCoeffs.f - Laplacian + pdeCoeffs.a.*U;
202 | zeroTarget = zeros(size(res), "like", res);
203 | lossF = mse(res, zeroTarget);
204 | 
205 | % Enforce boundary conditions. Calculate lossBC.
206 | actualBC = []; % contains the actual boundary information
207 | BC_XY = []; % boundary coordinates
208 | % Loop over the boundary edges and find boundary coordinates and actual BC
209 | % assigned to PDE model.
210 | numBoundaries = model.Geometry.NumEdges;
211 | for i=1:numBoundaries
212 |     BCi = findBoundaryConditions(model.BoundaryConditions,'Edge',i);
213 |     BCiNodes = findNodes(model.Mesh,"region","Edge",i);
214 |     BC_XY = [BC_XY, model.Mesh.Nodes(:,BCiNodes)]; %#ok
215 |     actualBCi = ones(1,numel(BCiNodes))*BCi.u;
216 |     actualBC = [actualBC actualBCi]; %#ok
217 | end
218 | BC_XY = dlarray(BC_XY,"CB"); % format the coordinates
219 | predictedBC = forward(nets.pinn,BC_XY);
220 | lossBC = mse(predictedBC,actualBC);
221 | 
222 | % Combine weighted losses.
223 | lambdaPDE = 1.1;
224 | lambdaBC = 1.;
225 | lambdaData = 1.;
226 | loss = lambdaPDE*lossF + lambdaBC*lossBC + lambdaData*lossData;
227 | 
228 | % Calculate gradients with respect to the learnable parameters and
229 | % C-coefficient. Return gradients as a struct in order to update them
230 | % simultaneously using adamupdate.
231 | grads = dlgradient(loss,{nets.pinn.Learnables,nets.coefNet.Learnables});
232 | gradients.pinn = grads{1};
233 | gradients.coefNet = grads{2};
234 | end
235 | %% 
236 | % 
237 | % 
238 | % 
239 | %% Data Function
240 | % This function returns the solution data at a given set of points |XY|. As
241 | % a demonstration of the method, we return the exact solution from this function,
242 | % but this function could be replaced with measured data at the given set of
243 | % points in practical applications.
244 | 
245 | function UD = getSolutionData(XY)
246 | UD = (1-XY(1,:).^2-XY(2,:).^2)/4;
247 | end
248 | %% 
249 | % Copyright 2023 The MathWorks, Inc.
--------------------------------------------------------------------------------
/license.txt:
--------------------------------------------------------------------------------
1 | Copyright (c) 2024, The MathWorks, Inc.
2 | All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are
6 | met:
7 | 
8 |     * Redistributions of source code must retain the above copyright
9 |       notice, this list of conditions and the following disclaimer.
10 |     * Redistributions in binary form must reproduce the above copyright
11 |       notice, this list of conditions and the following disclaimer in
12 |       the documentation and/or other materials provided with the distribution
13 |     * Neither the name of the The MathWorks, Inc. nor the names
14 |       of its contributors may be used to endorse or promote products derived
15 |       from this software without specific prior written permission.
16 | 
17 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
18 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
19 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
20 | ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
21 | LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
22 | CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
23 | SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
24 | INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
25 | CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
26 | ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
27 | POSSIBILITY OF SUCH DAMAGE.
28 | 
--------------------------------------------------------------------------------
/physics-informed-neural-networks-for-heat-transfer/Condition.xlsx:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/physics-informed-neural-networks-for-heat-transfer/Condition.xlsx
--------------------------------------------------------------------------------
/physics-informed-neural-networks-for-heat-transfer/Example_pinn.mlx:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/physics-informed-neural-networks-for-heat-transfer/Example_pinn.mlx
--------------------------------------------------------------------------------
/physics-informed-neural-networks-for-heat-transfer/README.md:
--------------------------------------------------------------------------------
1 | # Physics-Informed Neural Networks for Heat Transfer
2 | 
3 | This example was originally hosted [here](https://github.com/matlab-deep-learning/Physics-Informed-Neural-Networks-for-Heat-Transfer).
4 | 
5 | In recent years, Physics-Informed Neural Networks [1] have been applied to a wide range of applications.
6 | This example shows how to train a neural network to predict temperature distributions given new initial and boundary conditions. The neural network is trained using a loss function with two components: a data loss component, which measures the discrepancy between the network's predictions and targets derived from finite element simulations, and a physics-informed loss component, which evaluates the residual of the governing partial differential equation (PDE).
7 | 
8 | 
9 | 
10 | The PDE used in the loss function is the transient heat equation:
11 | 
12 | $$ \rho c \frac{\partial u}{\partial t} - \nabla \cdot \left(k \nabla u \right) = Q. $$
13 | 
14 | ## How to get started
15 | To get started, clone this repository and run "Example_pinn.mlx".
16 | 
17 | ## Requirements
18 | - [MATLAB ®](https://mathworks.com/products/matlab.html)
19 | - [Deep Learning Toolbox ™](https://mathworks.com/products/deep-learning.html)
20 | - [Partial Differential Equation Toolbox ™](https://www.mathworks.com/products/pde.html)
21 | 
22 | MATLAB version should be R2024a or later (tested in R2024a).
23 | 
24 | ## References
25 | 
26 | [1] Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." Journal of Computational Physics 378 (2019): 686-707.
27 | 
28 | ## License
29 | The license is available in the license.txt file in this GitHub repository.
30 | 
31 | Copyright (c) 2024, The MathWorks, Inc.
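32 | 
33 | As a rough sketch of how the physics-informed loss component can evaluate this residual with automatic differentiation (illustrative only: the network `net` mapping `(x,y,t)` to temperature `u`, and the constant coefficients `rho`, `c`, `k`, and `Q`, are assumed names rather than identifiers from the example):
34 | 
35 | ```matlab
36 | function res = heatResidual(net,XYT,rho,c,k,Q)
37 | % Residual of rho*c*du/dt - div(k*grad(u)) - Q at collocation points XYT,
38 | % a 3-by-N dlarray of (x,y,t) values with "CB" format.
39 | U = forward(net,XYT);
40 | gradU = dlgradient(sum(U,"all"),XYT,EnableHigherDerivatives=true);
41 | Ut = gradU(3,:); % du/dt
42 | % Differentiate k*du/dx and k*du/dy once more for the diffusion term.
43 | gx = dlgradient(sum(k.*gradU(1,:),"all"),XYT,EnableHigherDerivatives=true);
44 | gy = dlgradient(sum(k.*gradU(2,:),"all"),XYT,EnableHigherDerivatives=true);
45 | divKGradU = gx(1,:) + gy(2,:); % div(k*grad(u))
46 | res = rho.*c.*Ut - divKGradU - Q;
47 | end
48 | ```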
-------------------------------------------------------------------------------- /physics-informed-neural-networks-for-heat-transfer/pinn_pre.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/physics-informed-neural-networks-for-heat-transfer/pinn_pre.mat -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-heat-transfer/ref_images/LearningCurve.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/physics-informed-neural-networks-for-heat-transfer/ref_images/LearningCurve.png -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-heat-transfer/ref_images/Results.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/physics-informed-neural-networks-for-heat-transfer/ref_images/Results.png -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-mass-spring-system/README.md: -------------------------------------------------------------------------------- 1 | # Solve Mass Spring Damper System with PINNs 2 | 3 | This repository was originally hosted [here](https://github.com/matlab-deep-learning/physics-informed-neural-networks-with-matlab-live-coding-session). 4 | 5 | Code to accompany [live demo with Jousef Murad](https://www.youtube.com/watch?v=RTR_RklvAUQ). 6 | 7 | ### MathWorks® Products 8 | 9 | Requires [MATLAB®](https://mathworks.com/products/matlab.html) release R2023b or newer. 10 | * [Deep Learning Toolbox™](https://mathworks.com/products/deep-learning.html) 11 | 12 | ## License 13 | 14 | The license is available in the [license.txt](./license.txt) file in this GitHub repository. 15 | 16 | ## Community Support 17 | [MATLAB Central](https://www.mathworks.com/matlabcentral) 18 | 19 | Copyright 2024 The MathWorks, Inc. -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-mass-spring-system/buildPINNs.m: -------------------------------------------------------------------------------- 1 | %% Load data 2 | load massSpringDamperData.mat 3 | 4 | xsolFcn = @(t)real(A.*exp(omega1.*t) + B.*exp(omega2.*t)); 5 | 6 | plotMassSpringDamperData(t0, tmax, tdata, xdata, tpinns, xsolFcn) 7 | 8 | %% Build neural network 9 | inputSize = 1; 10 | outputSize = 1; 11 | numHiddenUnits = 128; 12 | layers = [ featureInputLayer(1) 13 | fullyConnectedLayer(numHiddenUnits) 14 | tanhLayer() 15 | fullyConnectedLayer(numHiddenUnits) 16 | tanhLayer() 17 | fullyConnectedLayer(outputSize) ]; 18 | net = dlnetwork(layers); 19 | 20 | deepNetworkDesigner(net) 21 | 22 | %% Train the neural network 23 | 24 | % Specify training hyperparameters. 25 | numIterations = 5e3; 26 | 27 | % Specify ADAM hyperparameters. 28 | learnRate = 0.01; 29 | mp = []; 30 | vp = []; 31 | 32 | % Prepare data for training. 
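% Convert the arrays to dlarray with 'CB' (channel x batch) format so that
% the network and dlgradient can trace derivatives with respect to the time inputs.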
33 | tdata = dlarray(tdata, 'CB'); 34 | xdata = dlarray(xdata, 'CB'); 35 | tpinns = dlarray(tpinns, 'CB'); 36 | 37 | % Create training progress plot. 38 | monitor = trainingProgressMonitor(Metrics=["Loss", "LossPINN", "LossData"]); 39 | fig = figure(); 40 | 41 | % Accelerate model loss. 42 | accFcn = dlaccelerate(@modelLoss); 43 | 44 | for iteration = 1:numIterations 45 | [loss, gradients, lossPinn, lossData] = dlfeval(accFcn, net, tdata, xdata, tpinns, m, mu, k); 46 | 47 | [net, mp, vp] = adamupdate(net, gradients, mp, vp, iteration, learnRate); 48 | 49 | recordMetrics(monitor, iteration, ... 50 | Loss=loss, ... 51 | LossPINN=lossPinn, ... 52 | LossData=lossData); 53 | 54 | if mod(iteration, 50) == 0 55 | ttest = sort(rand(100,1)).*tmax; 56 | xtest = xsolFcn(ttest); 57 | xpred = predict(net, ttest); 58 | plotModelPredictions(fig, ttest, xtest, xpred, iteration); 59 | end 60 | end 61 | 62 | function [loss, gradients, lossPinn, lossData] = modelLoss(net, tdata, xdata, tpinns, m, mu, k) 63 | lossPinn = pinnsLoss(net, tpinns, m, mu, k); 64 | 65 | lossData = dataLoss(net, tdata, xdata); 66 | 67 | loss = 0.1.*lossPinn + 0.05.*lossData; 68 | 69 | gradients = dlgradient(loss, net.Learnables); 70 | end 71 | 72 | function loss = pinnsLoss(net, t, m, mu, k) 73 | x = forward(net, t); 74 | 75 | xt = dlgradient(sum(x,'all'), t, EnableHigherDerivatives=true); 76 | xtt = dlgradient(sum(xt,'all'), t); 77 | 78 | residual = m.*xtt + mu.*xt + k.*x; 79 | 80 | loss = mean( residual.^2, 'all' ); 81 | end 82 | 83 | function loss = dataLoss(net, t, xtarget) 84 | x = forward(net, t); 85 | loss = l2loss(x, xtarget); 86 | end -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-mass-spring-system/massSpringDamperData.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/physics-informed-neural-networks-for-mass-spring-system/massSpringDamperData.mat -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-mass-spring-system/plotMassSpringDamperData.m: -------------------------------------------------------------------------------- 1 | function plotMassSpringDamperData(t0, tmax, tdata, xdata, tpinns, xsolFcn) 2 | tt = linspace(t0, tmax, 100); 3 | xt = xsolFcn(tt); 4 | 5 | figure() 6 | hold on 7 | plot(tt, xt, 'b', LineWidth=2.5, DisplayName='Exact solution') 8 | legend(); 9 | ylabel('x') 10 | xlabel('t') 11 | ax = gca; 12 | ax.FontSize = 16; 13 | ax.LineWidth = 1.5; 14 | hold on; 15 | scatter(tdata, xdata, 48, [0.4660 0.6740 0.1880], ... 16 | DisplayName='Data loss points', ... 17 | LineWidth=2); 18 | scatter(tpinns, zeros(length(tpinns),1), 30, ... 19 | [0.4660 0.6740 0.1880], ... 20 | "filled", ... 
21 | DisplayName='PINNs loss points'); 22 | hold off 23 | end -------------------------------------------------------------------------------- /physics-informed-neural-networks-for-mass-spring-system/plotModelPredictions.m: -------------------------------------------------------------------------------- 1 | function plotModelPredictions(fig, ttest, xtest, xpred, iteration) 2 | figure(fig); 3 | fig.Visible = true; 4 | 5 | plot(ttest,xtest,'b-',DisplayName='Exact solution',LineWidth=2.5); 6 | hold on 7 | plot(ttest,xpred,'--',DisplayName='Model prediction',LineWidth=2.5,Color="r"); 8 | 9 | xlim([0 10]); ylim([min(xtest) 2]) 10 | 11 | title(sprintf('Iteration %d',iteration)); 12 | legend('Location','NorthEast'); 13 | hold off 14 | 15 | ylabel('x') 16 | xlabel('t') 17 | ax = gca; 18 | ax.FontSize = 16; 19 | ax.LineWidth = 1.5; 20 | end -------------------------------------------------------------------------------- /ref/heat.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/ref/heat.png -------------------------------------------------------------------------------- /universal-differential-equations/README.md: -------------------------------------------------------------------------------- 1 | # Discovering Differential Equations with Neural ODEs 2 | 3 | This repository demonstrates how to use the Universal Differential Equation \[[1](#universal-ode)\] method to discover terms in a differential equation using neural networks, neural ODEs, and sparse symbolic regression \[[2](#sindy)\]. 4 | 5 | ## MathWorks® Products 6 | 7 | Requires [MATLAB®](https://mathworks.com/products/matlab.html) release R2023b or newer. 8 | * [Deep Learning Toolbox™](https://mathworks.com/products/deep-learning.html) 9 | 10 | ## License 11 | The license is available in the [license.txt](./license.txt) file in this GitHub repository. 12 | 13 | ## Community Support 14 | [MATLAB Central](https://www.mathworks.com/matlabcentral) 15 | 16 | Copyright 2024 The MathWorks, Inc. 17 | 18 | ## References 19 | 1. Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Superkar, Dominic Skinner, Ali Ramadhan, and Alan Edelman. "Universal Differential Equations for Scientific Machine Learning". Preprint, submitted January 13, 2020. https://arxiv.org/abs/2001.04385 20 | 2. Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. "Discovering governing equations from data by sparse identification of nonlinear dynamical systems". Proceedings of the National Academy of Sciences, 113 (15) 3932-3937, March 28, 2016. 21 | -------------------------------------------------------------------------------- /universal-differential-equations/example.md: -------------------------------------------------------------------------------- 1 | 2 | # Discovering Differential Equations with Neural ODEs 3 | 4 | This example demonstrates how to use neural ODEs to discover underlying differential equations following the Universal Differential Equation \[1\] method. 5 | 6 | 7 | The method is used for time\-series data originating from ordinary differential equations (ODEs), in particular when not all of the terms of the ODE are known. 
For example, let $\mathbf{X}(t)\in {\mathbb{R}}^n$ be a time\-series for $t\in [t_0 ,t_1 ]$ and suppose that we know $\mathbf{X}(t)$ satisfies an ODE of the form:
8 | 
9 | $$ \frac{\mathrm{d}\mathbf{X}}{\mathrm{d}t}(t)=\mathbf{f}(t,\mathbf{X}(t)),~~t\in [t_0 ,t_1 ],~~\mathbf{f}:[t_0 ,t_1 ]\times {\mathbb{R}}^n \to {\mathbb{R}}^n ,~~(1) $$
10 | 
11 | with initial condition $\mathbf{X}(t_0 )={\mathbf{X}}_0 .$
12 | 
13 | 
14 | In the case that $\mathbf{f}$ is only partially known, for example $\mathbf{f}(t,\mathbf{X})=\mathbf{g}(t,\mathbf{X})+\mathbf{h}(t,\mathbf{X})$ where $\mathbf{g}:[t_0 ,t_1 ]\times {\mathbb{R}}^n \to {\mathbb{R}}^n$ is a known function, and $\mathbf{h}:[t_0 ,t_1 ]\times {\mathbb{R}}^n \to {\mathbb{R}}^n$ is unknown, the method proposes to model $\mathbf{h}$ by a universal approximator such as a neural network ${\mathbf{h}}_{\theta } :[t_0 ,t_1 ]\times {\mathbb{R}}^n \to {\mathbb{R}}^n$ with learnable parameters $\theta$.
15 | 
16 | 
17 | When $\mathbf{h}$ is replaced by a neural network ${\mathbf{h}}_{\theta }$, equation (1) becomes a neural ODE which can be trained via gradient descent.
18 | 
19 | 
20 | Once ${\mathbf{h}}_{\theta }$ has been trained, symbolic regression methods can be used to find a functional representation of ${\mathbf{h}}_{\theta }$, such as the SINDy method \[2\].
21 | 
22 | # Training data
23 | 
24 | This example uses the Lotka\-Volterra equations for demonstration purposes,
25 | 
26 | $$ \mathbf{X}(t)=(x(t),y(t)), $$
27 | 
28 | $$ \frac{\mathrm{d}x}{\mathrm{d}t}=x-\alpha xy, $$
29 | 
30 | $$ \frac{\mathrm{d}y}{\mathrm{d}t}=-y+\beta xy, $$
31 | 
32 | where $\alpha ,\beta$ are constants. For the purpose of the example suppose the terms $\alpha xy,\beta xy$ are unknown, and the linear terms are known. In the above notation that is $\mathbf{g}(t,(x,y))=(x,-y)$ are the known terms and $\mathbf{h}(t,(x,y))=(-\alpha xy,\beta xy)$ is unknown.
33 | 
34 | ```matlab
35 | function dXdt = lotkaVolterraODE(X,nvp)
36 | arguments
37 |     X
38 |     nvp.Alpha (1,1) double = 0.6
39 |     nvp.Beta (1,1) double = 0.4
40 | end
41 | x = X(1);
42 | y = X(2);
43 | dxdt = x*(1 - nvp.Alpha*y);
44 | dydt = y*(-1 + nvp.Beta*x);
45 | dXdt = [dxdt; dydt];
46 | end
47 | ```
48 | 
49 | Set $\alpha =0.6$, $\beta =0.4$, $t_0 =0$ and $\mathbf{X}(t_0 )={\mathbf{X}}_0 =(1,1)$.
50 | 
51 | ```matlab
52 | alpha = 0.6;
53 | beta = 0.4;
54 | X0 = [1;1];
55 | ```
56 | 
57 | Solve the ODE using `ode`. For the purpose of the example, add random noise to the solution, as realistic data is often noisy.
58 | 
59 | ```matlab
60 | function X = noisySolve(F,ts,nvp)
61 | arguments
62 |     F (1,1) ode
63 |     ts (1,:) double
64 |     nvp.NoiseMagnitude (1,1) double = 2e-2
65 | end
66 | S = solve(F,ts);
67 | X = S.Solution;
68 | % Add noise to the solution, but not the initial condition.
69 | noise = nvp.NoiseMagnitude * randn([size(X,1),size(X,2)-1]);
70 | X(:,2:end) = X(:,2:end) + noise;
71 | end
72 | 
73 | F = ode(...
74 |     ODEFcn = @(t,X) lotkaVolterraODE(X, Alpha=alpha, Beta=beta),...
75 |     InitialValue = X0);
76 | 
77 | ts = linspace(0,15,250);
78 | X = noisySolve(F,ts);
79 | scatter(ts,X(1,:),".");
80 | hold on
81 | scatter(ts,X(2,:),".");
82 | title("Time-series data $\mathbf{X}(t) = (x(t),y(t))$", Interpreter="latex")
83 | xlabel("$t$",Interpreter="latex")
84 | legend(["$x(t)$","$y(t)$"], Interpreter="latex")
85 | hold off
86 | ```
87 | 
88 | ![figure_0.png](example_media/figure_0.png)
89 | 
90 | 
91 | Take only the data $\mathbf{X}(t),t\in [0,3]$ as training data.
92 | 
93 | ```matlab
94 | tsTrain = ts(ts<=3);
95 | XTrain = X(:,ts<=3);
96 | ```
97 | # Design Universal ODE
98 | 
99 | Recall $\mathbf{f}(t,\mathbf{X})=\mathbf{g}(t,\mathbf{X})+\mathbf{h}(t,\mathbf{X})$ where $\mathbf{g}(t,\mathbf{X})=\mathbf{g}(t,(x,y))=(x,-y)$ is known, and $\mathbf{h}$ is to be approximated by a neural network ${\mathbf{h}}_{\theta }$. Let ${\mathbf{f}}_{\theta } (t,\mathbf{X})=\mathbf{g}(t,\mathbf{X})+{\mathbf{h}}_{\theta } (t,\mathbf{X})$. By representing $\mathbf{g}$ as a neural network layer it is possible to represent ${\mathbf{f}}_{\theta }$ as a neural network.
100 | 
101 | ```matlab
102 | gFcn = @(X) [X(1,:); -X(2,:)];
103 | gLayer = functionLayer(gFcn, Acceleratable=true, Name="g");
104 | ```
105 | 
106 | Define a neural network for ${\mathbf{h}}_{\theta }$.
107 | 
108 | ```matlab
109 | activationLayer = functionLayer(@softplusActivation,Acceleratable=true);
110 | depth = 3;
111 | hiddenSize = 5;
112 | stateSize = size(X,1);
113 | hLayers = [
114 |     featureInputLayer(stateSize,Name="X")
115 |     repmat([fullyConnectedLayer(hiddenSize); activationLayer],[depth-1,1])
116 |     fullyConnectedLayer(stateSize,Name="h")];
117 | hNet = dlnetwork(hLayers);
118 | ```
119 | 
120 | Add the layer representing $\mathbf{g}$ and connect the $\mathbf{X}$ input to $\mathbf{g}(\mathbf{X})$. Also add an `additionLayer` to perform the addition in ${\mathbf{f}}_{\theta } =\mathbf{g}+{\mathbf{h}}_{\theta }$ and connect ${\mathbf{h}}_{\theta }$ to this.
121 | 
122 | ```matlab
123 | fNet = addLayers(hNet, [gLayer; additionLayer(2,Name="add")]);
124 | fNet = connectLayers(fNet,"X","g");
125 | fNet = connectLayers(fNet,"h","add/in2");
126 | ```
127 | 
128 | Analyse the network.
129 | 
130 | ```matlab
131 | analyzeNetwork(fNet)
132 | ```
133 | 
134 | The `dlnetwork` specified by `fNet` represents the function ${\mathbf{f}}_{\theta } (t,\mathbf{X})$ in the neural ODE
135 | 
136 | $$ \frac{\mathrm{d}\mathbf{X}}{\mathrm{d}t}(t)={\mathbf{f}}_{\theta } (t,\mathbf{X}(t)). $$
137 | 
138 | To solve the neural ODE, place `fNet` inside a `neuralODELayer` and solve for the times `tsTrain`.
139 | 
140 | ```matlab
141 | neuralODE = [
142 |     featureInputLayer(stateSize,Name="X0")
143 |     neuralODELayer(fNet, tsTrain, GradientMode="adjoint")];
144 | neuralODE = dlnetwork(neuralODE);
145 | ```
146 | # Train the neural ODE
147 | 
148 | Set the network to `double` precision using `dlupdate`.
149 | 
150 | ```matlab
151 | neuralODE = dlupdate(@double, neuralODE);
152 | ```
153 | 
154 | Specify `trainingOptions` for ADAM and train. For a small neural ODE, training on the CPU is often faster than on the GPU, as there is not sufficient parallelism in the neural ODE to make up for the overhead of sending data to the GPU.
155 | 
156 | ```matlab
157 | opts = trainingOptions("adam",...
158 | Plots="training-progress",...
159 | MaxEpochs=600,...
160 | ExecutionEnvironment="cpu",...
161 | InputDataFormats="CB",...
162 | TargetDataFormats="CTB"); 163 | neuralODE = trainnet(XTrain(:,1),XTrain(:,2:end),neuralODE,"l2loss",opts); 164 | ``` 165 | 166 | ```matlabTextOutput 167 | Iteration Epoch TimeElapsed LearnRate TrainingLoss 168 | _________ _____ ___________ _________ ____________ 169 | 1 1 00:00:00 0.001 2.2743 170 | 50 50 00:00:03 0.001 1.3138 171 | 100 100 00:00:05 0.001 0.97882 172 | 150 150 00:00:07 0.001 0.73854 173 | 200 200 00:00:09 0.001 0.56192 174 | 250 250 00:00:12 0.001 0.43058 175 | 300 300 00:00:14 0.001 0.33261 176 | 350 350 00:00:16 0.001 0.2598 177 | 400 400 00:00:18 0.001 0.20618 178 | 450 450 00:00:21 0.001 0.16721 179 | 500 500 00:00:23 0.001 0.13936 180 | 550 550 00:00:25 0.001 0.1198 181 | 600 600 00:00:27 0.001 0.10621 182 | Training stopped: Max epochs completed 183 | ``` 184 | 185 |
![figure_1.png](example_media/figure_1.png)
186 | 187 | 188 | Next train with L\-BFGS to optimize the training loss further. 189 | 190 | ```matlab 191 | opts = trainingOptions("lbfgs",... 192 | MaxIterations = 400,... 193 | Plots="training-progress",... 194 | ExecutionEnvironment="cpu",... 195 | GradientTolerance=1e-8,... 196 | StepTolerance=1e-8,... 197 | InputDataFormats="CB",... 198 | TargetDataFormats="CTB"); 199 | neuralODE = trainnet(XTrain(:,1),XTrain(:,2:end),neuralODE,"l2loss",opts); 200 | ``` 201 | 202 | ```matlabTextOutput 203 | Iteration TimeElapsed TrainingLoss GradientNorm StepNorm 204 | _________ ___________ ____________ ____________ ________ 205 | 1 00:00:00 0.10012 1.5837 0.072727 206 | 50 00:00:07 0.001433 0.024051 0.10165 207 | 100 00:00:14 0.00083738 0.015113 0.071645 208 | 150 00:00:23 0.00062907 0.013306 0.037026 209 | 200 00:00:34 0.00058742 0.00016328 0.00036442 210 | 250 00:01:12 0.00058725 0.000197 6.464e-05 211 | Training stopped: Stopped manually 212 | ``` 213 | 214 |
![figure_2.png](example_media/figure_2.png)
215 | 
216 | # Identify equations for the universal approximator
217 | 
218 | Extract ${\mathbf{h}}_{\theta }$ from `neuralODE`.
219 | 
220 | ```matlab
221 | fNetTrained = neuralODE.Layers(2).Network;
222 | hNetTrained = removeLayers(fNetTrained,["g","add"]);
223 | lrn = hNetTrained.Learnables;
224 | lrn = dlupdate(@dlarray, lrn);
225 | hNetTrained = initialize(hNetTrained);
226 | hNetTrained.Learnables = lrn;
227 | ```
228 | 
229 | The SINDy method \[2\] takes a library of basis functions ${\mathbf{e}}_i (t,\mathbf{X}):[t_0 ,t_1 ]\times {\mathbb{R}}^n \to {\mathbb{R}}^n$ for $i=1,2,\ldots,N$ and sample points $(\tau_j ,{\mathbf{X}}_j )\in [t_0 ,t_1 ]\times {\mathbb{R}}^n$. The goal is to identify which terms ${\mathbf{e}}_i$ represent ${\mathbf{h}}_{\theta }$.
230 | 
231 | 
232 | Let $\mathbf{E}(t,\mathbf{X})=({\mathbf{e}}_1 (t,\mathbf{X}),\ldots,{\mathbf{e}}_N (t,\mathbf{X}))$ denote the matrix formed by concatenating all of the evaluations of the basis functions ${\mathbf{e}}_i$. Identifying terms ${\mathbf{e}}_i$ that make up ${\mathbf{h}}_{\theta }$ can be written as the linear problem ${\mathbf{h}}_{\theta } (t,\mathbf{X})=\mathbf{W}\mathbf{E}(t,\mathbf{X})$ for a matrix $\mathbf{W}$.
233 | 
234 | 
235 | The SINDy method proposes to use a sparse regression method to solve for $\mathbf{W}$. The sequentially thresholded least squares method \[2\] iteratively solves for $\mathbf{W}$ in the above problem, and zeros out the terms in $\mathbf{W}$ with absolute value below a specified threshold.
236 | 
237 | 
238 | Use the training data `XTrain`, augmented with additional points interpolated along the training trajectory, as the sample points ${\mathbf{X}}_j$ and evaluate ${\mathbf{h}}_{\theta } ({\mathbf{X}}_j )$.
239 | 
240 | ```matlab
241 | Xextra = interp1(tsTrain,XTrain.', linspace(tsTrain(1), tsTrain(end), 100)).';
242 | XSample = [XTrain,Xextra];
243 | hEval = predict(hNetTrained,XSample,InputDataFormats="CB");
244 | ```
245 | 
246 | Denote $\mathbf{X}=(x,y)$ and specify the basis functions ${\mathbf{e}}_1 (x,y)=1,{\mathbf{e}}_2 (x,y)=x^2 ,{\mathbf{e}}_3 (x,y)=xy,{\mathbf{e}}_4 (x,y)=y^2$. Note that the linear functions $(x,y)\to x$ and $(x,y)\to y$ are not included as the linear terms in $\mathbf{f}$ are already known.
247 | 
248 | ```matlab
249 | e1 = @(X) ones(1,size(X,2));
250 | e2 = @(X) X(1,:).^2;
251 | e3 = @(X) X(1,:).*X(2,:);
252 | e4 = @(X) X(2,:).^2;
253 | E = @(X) [e1(X); e2(X); e3(X); e4(X)];
254 | ```
255 | 
256 | Evaluate the basis functions at the sample points.
257 | 
258 | ```matlab
259 | EEval = E(XSample);
260 | ```
261 | 
262 | Sequentially solve ${\mathbf{h}}_{\theta } =\mathbf{W}\mathbf{E}$ for 10 iterations, and set terms of $\mathbf{W}$ with absolute value less than the threshold $0.1$ to $0$.
263 | 
264 | ```matlab
265 | iters = 10;
266 | threshold = 0.1;
267 | Ws = cell(iters,1);
268 | W = hEval/EEval;
269 | Ws{1} = W;
270 | for iter = 2:iters
271 |     belowThreshold = abs(W)<threshold;
272 |     W(belowThreshold) = 0;
273 |     % Re-solve the least-squares problem for each row of W using only the
274 |     % terms that remain above the threshold.
275 |     for row = 1:size(W,1)
276 |         active = ~belowThreshold(row,:);
277 |         W(row,active) = hEval(row,:)/EEval(active,:);
278 |     end
279 |     Ws{iter} = W;
280 | end
281 | ```
282 | 
335 | ![figure_3.png](example_media/figure_3.png)
336 | 
337 | # Helper Functions
338 | ```matlab
339 | function x = softplusActivation(x)
340 | x = max(x,0) + log(1 + exp(-abs(x)));
341 | end
342 | ```
343 | 
344 | # References
345 | 
346 | 
347 | \[1\] Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Superkar, Dominic Skinner, Ali Ramadhan, and Alan Edelman. "Universal Differential Equations for Scientific Machine Learning". Preprint, submitted January 13, 2020. [https://arxiv.org/abs/2001.04385](https://arxiv.org/abs/2001.04385)
348 | 
349 | 
350 | \[2\] Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. "Discovering governing equations from data by sparse identification of nonlinear dynamical systems".
Proceedings of the National Academy of Sciences, 113 (15) 3932\-3937, March 28, 2016. 351 | 352 | -------------------------------------------------------------------------------- /universal-differential-equations/example.mlx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/universal-differential-equations/example.mlx -------------------------------------------------------------------------------- /universal-differential-equations/example_media/figure_0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/universal-differential-equations/example_media/figure_0.png -------------------------------------------------------------------------------- /universal-differential-equations/example_media/figure_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/universal-differential-equations/example_media/figure_1.png -------------------------------------------------------------------------------- /universal-differential-equations/example_media/figure_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/universal-differential-equations/example_media/figure_2.png -------------------------------------------------------------------------------- /universal-differential-equations/example_media/figure_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/matlab-deep-learning/SciML-and-Physics-Informed-Machine-Learning-Examples/388b53802854928c10c929464a61cc8dd8df4d2b/universal-differential-equations/example_media/figure_3.png --------------------------------------------------------------------------------