├── .gitignore ├── ComputationTimesByAD.ipynb ├── DFVM_Poisson_HighDim.py ├── DFVM_Poisson_Singularity.py ├── GenerateData.py ├── LICENSE └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control 110 | .pdm.toml 111 | .pdm-python 112 | .pdm-build/ 113 | 114 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 115 | __pypackages__/ 116 | 117 | # Celery stuff 118 | celerybeat-schedule 119 | celerybeat.pid 120 | 121 | # SageMath parsed files 122 | *.sage.py 123 | 124 | # Environments 125 | .env 126 | .venv 127 | env/ 128 | venv/ 129 | ENV/ 130 | env.bak/ 131 | venv.bak/ 132 | 133 | # Spyder project settings 134 | .spyderproject 135 | .spyproject 136 | 137 | # Rope project settings 138 | .ropeproject 139 | 140 | # mkdocs documentation 141 | /site 142 | 143 | # mypy 144 | .mypy_cache/ 145 | .dmypy.json 146 | dmypy.json 147 | 148 | # Pyre type checker 149 | .pyre/ 150 | 151 | # pytype static type analyzer 152 | .pytype/ 153 | 154 | # Cython debug symbols 155 | cython_debug/ 156 | 157 | # PyCharm 158 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 159 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 160 | # and can be added to the global gitignore or merged into this file. For a more nuclear 161 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 162 | #.idea/ 163 | -------------------------------------------------------------------------------- /ComputationTimesByAD.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import torch\n", 10 | "import time\n", 11 | "\n", 12 | "# Define the neural network model\n", 13 | "class FullyConnectedNN(torch.nn.Module):\n", 14 | " def __init__(self, input_dim, hidden_dim, num_layers):\n", 15 | " super(FullyConnectedNN, self).__init__()\n", 16 | " self.input_dim = input_dim\n", 17 | " self.hidden_dim = hidden_dim\n", 18 | " self.num_layers = num_layers\n", 19 | " self.layers = torch.nn.ModuleList()\n", 20 | " self.layers.append(torch.nn.Linear(input_dim, hidden_dim))\n", 21 | " for _ in range(num_layers - 1):\n", 22 | " self.layers.append(torch.nn.Linear(hidden_dim, hidden_dim))\n", 23 | " \n", 24 | " def forward(self, x):\n", 25 | " for layer in self.layers:\n", 26 | " x = torch.tanh(layer(x))\n", 27 | " return x\n", 28 | "\n", 29 | "# # Define the function u_theta\n", 30 | "# def u_theta(x, model):\n", 31 | "# return model(x)\n", 32 | "\n", 33 | "# Define a function to compute the time for u_theta, nabla u_theta, and Delta u_theta\n", 34 | "def compute_times(model, input_dim, num_samples=1000):\n", 35 | " device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n", 36 | " model.to(device)\n", 37 | " \n", 38 | " u_theta_time = 0\n", 39 | " nabla_u_theta_time = 0\n", 40 | " delta_u_theta_time = 0\n", 41 | " for step in range(2000):\n", 42 | " # Randomly generate sample points\n", 43 | " x = torch.randn(num_samples, input_dim).to(device).requires_grad_(True)\n", 44 | " \n", 45 | " # Compute u_theta and measure the time\n", 46 | " start_time = time.time()\n", 47 | " u = model(x)\n", 48 | " if step >= 1000:\n", 49 | " u_theta_time += time.time() - start_time\n", 50 | "\n", 51 | " # Compute nabla u_theta and measure the time\n", 52 | " start_time = time.time()\n", 53 | " du = torch.autograd.grad(u, x, \n", 54 | " grad_outputs=torch.ones_like(u), \n", 55 | " create_graph=True, \n", 56 | " retain_graph=True)[0]\n", 57 | " if step >= 1000:\n", 58 | " nabla_u_theta_time += time.time() - start_time\n", 59 | "\n", 60 | " # Compute Delta u_theta and measure the time\n", 61 | " start_time = 
time.time()\n", 62 | " laplace = torch.zeros_like(u)\n", 63 | " for i in range(input_dim):\n", 64 | " d2u = torch.autograd.grad(du[:, i], x, \n", 65 | " grad_outputs=torch.ones_like(du[:, i]), \n", 66 | " create_graph=True, \n", 67 | " retain_graph=True)[0][:, i]\n", 68 | " laplace += d2u.reshape(-1, 1)\n", 69 | " if step >= 1000:\n", 70 | " delta_u_theta_time += time.time() - start_time\n", 71 | "\n", 72 | " return u_theta_time, nabla_u_theta_time, delta_u_theta_time\n", 73 | "\n", 74 | "# Loop through different input dimensions and compute the times\n", 75 | "for input_dim in [1, 2, 10, 50, 100, 200]:\n", 76 | " print(\"============= input_dim\", input_dim, \"==============\")\n", 77 | " # Parameter settings\n", 78 | " hidden_dim = 200\n", 79 | " num_layers = 6\n", 80 | "\n", 81 | " # Create the neural network model\n", 82 | " model = FullyConnectedNN(input_dim, hidden_dim, num_layers)\n", 83 | "\n", 84 | " # Compute the times\n", 85 | " u_theta_time, nabla_u_theta_time, delta_u_theta_time = compute_times(model, input_dim)\n", 86 | "\n", 87 | " print(\"Computing time for u_theta:\", u_theta_time)\n", 88 | " print(\"Computing time for nabla u_theta:\", nabla_u_theta_time)\n", 89 | " print(\"Computing time for Delta u_theta:\", delta_u_theta_time)" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": null, 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [] 98 | } 99 | ], 100 | "metadata": { 101 | "kernelspec": { 102 | "display_name": "torch38", 103 | "language": "python", 104 | "name": "python3" 105 | }, 106 | "language_info": { 107 | "codemirror_mode": { 108 | "name": "ipython", 109 | "version": 3 110 | }, 111 | "file_extension": ".py", 112 | "mimetype": "text/x-python", 113 | "name": "python", 114 | "nbconvert_exporter": "python", 115 | "pygments_lexer": "ipython3", 116 | "version": "3.8.16" 117 | } 118 | }, 119 | "nbformat": 4, 120 | "nbformat_minor": 2 121 | } 122 | -------------------------------------------------------------------------------- /DFVM_Poisson_HighDim.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import time 6 | from tqdm import * 7 | import os 8 | import argparse 9 | from GenerateData import * 10 | 11 | 12 | # Parser 13 | parser = argparse.ArgumentParser(description='DFVM') 14 | parser.add_argument('--dimension', type=int, default=100, metavar='N', 15 | help='dimension of the problem (default: 100)') 16 | parser.add_argument('--seed', type=int, default=0, metavar='N', 17 | help='random seed (default: 0)') 18 | seed = parser.parse_args().seed 19 | # Omega 20 | DIMENSION = parser.parse_args().dimension 21 | a = [-1 for _ in range(DIMENSION)] 22 | b = [ 1 for _ in range(DIMENSION)] 23 | # Finite Volume 24 | EPSILON = 1e-5 # Domain size 25 | BDSIZE = 1 # Boundary size: J_{\partial V} = 2^BDSIZE - 1 26 | 27 | # Network 28 | DIM_INPUT = DIMENSION # Input dimension 29 | NUM_UNIT = 40 # Number of neurons in a single layer 30 | DIM_OUTPUT = 1 # Output dimension 31 | NUM_LAYERS = 6 # Number of layers in the model 32 | 33 | # Optimizer 34 | IS_DECAY = 1 35 | LEARN_RATE = 1e-3 # Learning rate 36 | LEARN_FREQUENCY = 50 # Learning rate decay interval 37 | LEARN_LOWWER_BOUND = 1e-4 # Lower bound of learning rate 38 | LEARN_DECAY_RATE = 0.99 # Learning rate decay rate 39 | LOSS_FN = nn.MSELoss() # Loss function 40 | 41 | # Training 42 | CUDA_ORDER = "1" 43 | NUM_TRAIN_SAMPLE = 2000 # Size of the training set 44 | 
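# Note: EPSILON above is the radius of the control volume V_h(x) built around each
# interior collocation point (see PoissonEQuation.neighborhood and neighborhoodBD);
# the computational domain itself is the cube [-1, 1]^DIMENSION defined by a and b.
# NUM_BOUND_SAMPLE just below is scaled as 2000 // DIMENSION so that the total number
# of global boundary points, 2 * DIMENSION * NUM_BOUND_SAMPLE, stays near 4000 for
# any problem dimension. NUM_TRAIN_TIMES appears to count training rounds rather
# than samples; it is not referenced elsewhere in this script.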
NUM_BOUND_SAMPLE = 2000 // DIMENSION 45 | NUM_TRAIN_TIMES = 1 # Number of training samples 46 | NUM_ITERATION = 20000 # Number of training iterations per sample 47 | 48 | # Re-sampling 49 | IS_RESAMPLE = 0 50 | SAMPLE_FREQUENCY = 2000 # Re-sampling interval 51 | 52 | # Testing 53 | NUM_TEST_SAMPLE = 10000 # Number of test samples 54 | TEST_FREQUENCY = 1 # Output interval 55 | 56 | # Loss weight 57 | BETA = 1000 # Weight of the boundary loss function 58 | 59 | # Save model 60 | IS_SAVE_MODEL = 1 # Flag to save the model 61 | 62 | 63 | class PoissonEQuation(object): 64 | def __init__(self, dimension, epsilon, bd_size, device): 65 | self.D = dimension 66 | self.E = epsilon 67 | self.B = bd_size 68 | self.bdsize = 2**bd_size - 1 69 | self.device = device 70 | 71 | def f(self, X): 72 | x = torch.sum(X,1)/self.D 73 | f = (torch.sin(x)-2)/self.D 74 | return f.detach() 75 | 76 | def g(self, X): 77 | x = torch.sum(X,1)/self.D 78 | g = x.pow(2)+torch.sin(x) 79 | return g.detach() 80 | 81 | def u_exact(self, X): 82 | x = torch.sum(X,1)/self.D 83 | u = x.pow(2)+torch.sin(x) 84 | return u.detach() 85 | 86 | # sample the interior of the domain: Monte-Carlo or Quasi-Monte Carlo 87 | def interior(self, N=100): 88 | eps = self.E # np.spacing(1) 89 | l_bounds = [l+eps for l in a] 90 | u_bounds = [u-eps for u in b] 91 | X = torch.FloatTensor( sampleCubeMC(self.D, l_bounds, u_bounds, N) ) 92 | # X = torch.FloatTensor( sampleCubeQMC(self.D, l_bounds, u_bounds, N) ) 93 | return X.requires_grad_(True).to(self.device) 94 | 95 | # sample the boundary of the domain 96 | def boundary(self, n=100): 97 | x_boundary = [] 98 | for i in range( self.D ): 99 | x = np.random.uniform(a[i], b[i], [2*n, self.D]) 100 | x[:n,i] = b[i] 101 | x[n:,i] = a[i] 102 | x_boundary.append(x) 103 | x_boundary = np.concatenate(x_boundary, axis=0) 104 | x_boundary = torch.FloatTensor(x_boundary).requires_grad_(True).to(self.device) 105 | return x_boundary 106 | 107 | 108 | # sample a neighborhood of x with size 109 | def neighborhood(self, x, size): 110 | l_bounds = [t-self.E for t in x.cpu().detach().numpy()] 111 | u_bounds = [t+self.E for t in x.cpu().detach().numpy()] 112 | sample = sampleCubeQMC(self.D, l_bounds, u_bounds, size) 113 | sample = torch.FloatTensor( sample ).to(self.device) 114 | return sample 115 | 116 | def neighborhoodBD(self, X): 117 | lb = [-1 for _ in range(self.D-1)] 118 | ub = [ 1 for _ in range(self.D-1)] 119 | x_QMC = sampleCubeQMC(self.D-1, lb, ub, self.B) 120 | x_nbound = [] 121 | for i in range( self.D ): 122 | x_nbound.append( np.insert(x_QMC, i, [ 1], axis=1) ) 123 | x_nbound.append( np.insert(x_QMC, i, [-1], axis=1) ) 124 | x_nbound = np.concatenate(x_nbound, axis=0).reshape(1, -1, self.D) 125 | x_nbound = torch.FloatTensor(x_nbound).to(self.device) 126 | X = torch.unsqueeze(X, dim=1) 127 | X = X.expand(-1, x_nbound.shape[1], x_nbound.shape[2]) 128 | X_bound = X + self.E*x_nbound 129 | X_bound = X_bound.reshape(-1, self.D) 130 | return X_bound.detach().requires_grad_(True) 131 | 132 | def outerNormalVec(self): 133 | bd_dir = torch.zeros(2*self.D*self.bdsize, self.D) 134 | for i in range( self.D ): 135 | bd_dir[ 2*i*self.bdsize : (2*i+1)*self.bdsize, i] = 1 136 | bd_dir[(2*i+1)*self.bdsize : 2*(i+1)*self.bdsize, i] = -1 137 | bd_dir = bd_dir.reshape(1,-1) 138 | return bd_dir.detach().requires_grad_(True).to(self.device) 139 | 140 | 141 | class DFVMsolver(object): 142 | def __init__(self, Equation, model, device): 143 | self.Eq = Equation 144 | self.model = model 145 | self.device = device 146 | 147 | # 
calculate the gradient of u_theta 148 | def Nu(self, X): 149 | u = self.model(X) 150 | Du = torch.autograd.grad(outputs = [u], 151 | inputs = [X], 152 | grad_outputs = torch.ones_like(u), 153 | allow_unused = True, 154 | retain_graph = True, 155 | create_graph = True)[0] 156 | return Du 157 | 158 | # calculate the integral of $\nabla u_theta \cdot \vec{n}$ in the neighborhood of x 159 | def integrate_BD(self, X, x_bd, bd_dir): 160 | n = len(X) 161 | integrate_bd = torch.zeros(n, 1).to(self.device) 162 | # calculate the gradient of u_theta 163 | Du = self.Nu(x_bd).reshape(n, -1) 164 | # calculate the integral by summing up the dot product of Du and bd_dir 165 | integrate_bd = torch.sum(Du*bd_dir, 1)/(2*self.Eq.E*self.Eq.bdsize) 166 | return integrate_bd 167 | 168 | # calculate the integral of f in the neighborhood of x 169 | def integrate_F(self, X): 170 | n = len(X) 171 | integrate_f = torch.zeros([n, 1]).to(self.device) 172 | for i in range(n): 173 | x_neighbor = self.Eq.neighborhood(X[i], 1) 174 | res = self.Eq.f(x_neighbor) 175 | integrate_f[i] = torch.mean(res) 176 | return integrate_f.detach().requires_grad_(True) 177 | 178 | # Boundary loss function 179 | def loss_boundary(self, x_boundary): 180 | u_theta = self.model(x_boundary).reshape(-1,1) 181 | u_bound = self.Eq.g(x_boundary).reshape(-1,1) 182 | loss_bd = LOSS_FN(u_theta, u_bound) 183 | return loss_bd 184 | 185 | # Test function 186 | def TEST(self, NUM_TESTING): 187 | with torch.no_grad(): 188 | x_test = torch.Tensor(NUM_TESTING, self.Eq.D).uniform_(a[0], b[0]).requires_grad_(True).to(self.device) 189 | u_real = self.Eq.u_exact(x_test).reshape(1,-1) 190 | u_pred = self.model(x_test).reshape(1,-1) 191 | Error = u_real - u_pred 192 | L2error = torch.sqrt( torch.mean(Error*Error) )/ torch.sqrt( torch.mean(u_real*u_real) ) 193 | MaxError = torch.max(torch.abs(Error)) 194 | return L2error.cpu().detach().numpy(), MaxError.cpu().detach().numpy() 195 | 196 | 197 | class Resnet(nn.Module): 198 | def __init__(self, input_dim, hidden_dim, output_dim, num_layers): 199 | super(Resnet, self).__init__() 200 | self.num_layers = num_layers 201 | self.input_layer = nn.Linear(input_dim, hidden_dim) 202 | self.linear = nn.ModuleList() 203 | for _ in range(num_layers): 204 | self.linear.append( nn.Linear(hidden_dim, hidden_dim) ) 205 | self.linear.append( nn.Linear(hidden_dim, hidden_dim) ) 206 | self.output_layer = nn.Linear(hidden_dim, output_dim) 207 | self.activation = torch.tanh 208 | 209 | def forward(self, x): 210 | out = self.activation( self.input_layer(x) ) 211 | for i in range(self.num_layers): 212 | res = out 213 | out = self.activation( self.linear[2*i](out) ) 214 | out = self.activation( self.linear[2*i+1](out) ) 215 | out = out + res 216 | out = self.output_layer(out) 217 | return out 218 | 219 | def weights_init(self, m): 220 | if type(m) == nn.Linear: 221 | torch.nn.init.xavier_normal_(m.weight) 222 | torch.nn.init.zeros_(m.bias) 223 | 224 | def setup_seed(seed): 225 | torch.manual_seed(seed) 226 | torch.cuda.manual_seed_all(seed) 227 | np.random.seed(seed) 228 | # random.seed(seed) 229 | torch.backends.cudnn.deterministic = True 230 | 231 | 232 | def train_pipeline(): 233 | # define device 234 | DEVICE = torch.device(f"cuda:{CUDA_ORDER}" if torch.cuda.is_available() else "cpu") 235 | print(f"Using {DEVICE}") 236 | # define equation 237 | Eq = PoissonEQuation(DIMENSION, EPSILON, BDSIZE, DEVICE) 238 | model = Resnet(DIM_INPUT, 128, DIM_OUTPUT, 3).to(DEVICE) 239 | optA = torch.optim.Adam(model.parameters(), lr=LEARN_RATE) 240 | 
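# Note: this script instantiates Resnet(DIM_INPUT, 128, DIM_OUTPUT, 3) directly, so the
# NUM_UNIT = 40 and NUM_LAYERS = 6 constants defined above are not used here.
# The data for the DFVM loss is assembled once, before the training loop:
#   x          - interior collocation points (control-volume centers), from Eq.interior
#   x_bd       - quasi-Monte Carlo points on the boundary of each control volume V_h(x), from Eq.neighborhoodBD
#   int_f      - precomputed averages of the source term f over each V_h(x), from solver.integrate_F
#   bd_dir     - outward unit normals matching the layout of x_bd, from Eq.outerNormalVec
#   x_boundary - samples on the global domain boundary, used by the Dirichlet penalty solver.loss_boundary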
solver = DFVMsolver(Eq, model, DEVICE) 241 | 242 | x = Eq.interior(NUM_TRAIN_SAMPLE) 243 | x_bd = Eq.neighborhoodBD(x) 244 | print(x_bd.shape) 245 | int_f = solver.integrate_F(x) 246 | bd_dir = Eq.outerNormalVec() 247 | x_boundary = Eq.boundary(NUM_BOUND_SAMPLE) 248 | 249 | # Networks Training 250 | elapsed_time = 0 251 | training_history = [] 252 | 253 | for step in tqdm(range(NUM_ITERATION+1)): 254 | if IS_DECAY and step and step % LEARN_FREQUENCY == 0: 255 | for p in optA.param_groups: 256 | if p['lr'] > LEARN_LOWWER_BOUND: 257 | p['lr'] = p['lr']*LEARN_DECAY_RATE 258 | # print(f"Learning Rate: {p['lr']}") 259 | 260 | start_time = time.time() 261 | int_bd = solver.integrate_BD(x, x_bd, bd_dir).reshape(-1,1) 262 | loss_int = LOSS_FN(-int_bd, int_f) 263 | loss_bd = solver.loss_boundary(x_boundary) 264 | loss = loss_int + BETA*loss_bd 265 | optA.zero_grad() 266 | loss.backward() 267 | optA.step() 268 | 269 | epoch_time = time.time() - start_time 270 | elapsed_time = elapsed_time + epoch_time 271 | if step % TEST_FREQUENCY == 0: 272 | loss_int = loss_int.cpu().detach().numpy() 273 | loss_bd = loss_bd.cpu().detach().numpy() 274 | loss = loss.cpu().detach().numpy() 275 | L2error,ME = solver.TEST(NUM_TEST_SAMPLE) 276 | if step and step%1000 == 0: 277 | tqdm.write( f'\nStep: {step:>5}, ' 278 | f'Loss_int: {loss_int:>10.5f}, ' 279 | f'Loss_bd: {loss_bd:>10.5f}, ' 280 | f'Loss: {loss:>10.5f}, ' 281 | f'L2 error: {L2error:.5f}, ' 282 | f'Time: {elapsed_time:.2f}') 283 | training_history.append([step, L2error, ME, loss, elapsed_time]) 284 | 285 | training_history = np.array(training_history) 286 | print(np.min(training_history[:,1])) 287 | print(np.min(training_history[:,2])) 288 | 289 | save_time = time.localtime() 290 | save_time = f'[{save_time.tm_mday:0>2d}{save_time.tm_hour:0>2d}{save_time.tm_min:0>2d}]' 291 | dir_path = os.getcwd() + f'/PossionEQ_seed{seed}/' 292 | file_name = f'{DIMENSION}DIM-DFVM-{BETA}weight-{NUM_ITERATION}itr-{EPSILON}R-{BDSIZE}bd-{LEARN_RATE}lr.csv' 293 | file_path = dir_path + file_name 294 | 295 | if not os.path.exists(dir_path): 296 | os.makedirs(dir_path) 297 | 298 | np.savetxt(file_path, training_history, 299 | delimiter =",", 300 | header ="step, L2error, MaxError, loss, elapsed_time", 301 | comments ='') 302 | print('Training History Saved!') 303 | 304 | if IS_SAVE_MODEL: 305 | torch.save(model.state_dict(), dir_path + f'{DIMENSION}DIM-DFVM_net') 306 | print('DFVM Network Saved!') 307 | 308 | 309 | if __name__ == "__main__": 310 | setup_seed(seed) 311 | train_pipeline() 312 | -------------------------------------------------------------------------------- /DFVM_Poisson_Singularity.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import numpy as np 3 | import torch 4 | import torch.nn as nn 5 | import time 6 | from tqdm import * 7 | import os 8 | import argparse 9 | from GenerateData import * 10 | 11 | 12 | # Parser 13 | parser = argparse.ArgumentParser(description='DFVM') 14 | parser.add_argument('--dimension', type=int, default=100, metavar='N', 15 | help='dimension of the problem (default: 100)') 16 | parser.add_argument('--seed', type=int, default=0, metavar='N', 17 | help='random seed (default: 0)') 18 | seed = parser.parse_args().seed 19 | # Omega space domain 20 | DIMENSION = parser.parse_args().dimension 21 | a = [0 for _ in range(DIMENSION)] 22 | b = [1 for _ in range(DIMENSION)] 23 | 24 | # Finite Volume 25 | EPSILON = 1e-3 # Domain size 26 | BDSIZE = 1 27 | 28 | # Network 29 | DIM_INPUT 
= DIMENSION # Input dimension 30 | NUM_UNIT = 40 # Number of neurons in a single layer 31 | DIM_OUTPUT = 1 # Output dimension 32 | NUM_LAYERS = 6 # Number of layers in the model 33 | 34 | # Optimizer 35 | IS_DECAY = 0 36 | LEARN_RATE = 1.1e-3 # Learning rate 37 | LEARN_FREQUENCY = 50 # Learning rate change interval 38 | LEARN_LOWWER_BOUND = 1e-5 39 | LEARN_DECAY_RATE = 0.99 40 | LOSS_FN = nn.MSELoss() 41 | 42 | # Training 43 | CUDA_ORDER = "0" 44 | NUM_TRAIN_SMAPLE = 10000 # Size of the training set 45 | NUM_TRAIN_TIMES = 1 # Number of training samples 46 | NUM_ITERATION = 20000 # Number of training iterations per sample 47 | 48 | # Re-sampling 49 | IS_RESAMPLE = 0 50 | SAMPLE_FREQUENCY = 2000 # Re-sampling interval 51 | 52 | # Testing 53 | NUM_TEST_SAMPLE = 10000 54 | TEST_FREQUENCY = 1 # Output interval 55 | 56 | # Loss weight 57 | BETA = 1000 # Weight of the boundary loss function 58 | 59 | # Save model 60 | IS_SAVE_MODEL = 1 61 | 62 | 63 | class PoissonEQuation(object): 64 | def __init__(self, dimension, epsilon, bd_size, device): 65 | self.D = dimension 66 | self.E = epsilon 67 | self.B = bd_size 68 | self.bdsize = 2**bd_size - 1 69 | self.device = device 70 | 71 | def f(self, X): 72 | f = -2 * torch.ones(len(X), 1).to(self.device) 73 | return f.detach() 74 | 75 | def g(self, X): 76 | x = X[:, 0] 77 | u = torch.where(x < 0.5, x.pow(2), (x - 1).pow(2)) 78 | return u.reshape(-1, 1).detach() 79 | 80 | def u_exact(self, X): 81 | x = X[:, 0] 82 | u = torch.where(x < 0.5, x.pow(2), (x - 1).pow(2)) 83 | return u.reshape(-1, 1).detach() 84 | 85 | # Sampling inside the domain 86 | def interior(self, N=100): 87 | eps = self.E # np.spacing(1) 88 | l_bounds = [l + eps for l in a] 89 | u_bounds = [u - eps for u in b] 90 | X = torch.FloatTensor(sampleCubeMC(self.D, l_bounds, u_bounds, N)) 91 | # X = torch.FloatTensor(sampleCubeQMC(self.D, l_bounds, u_bounds, N)) 92 | return X.requires_grad_(True).to(self.device) 93 | 94 | # Boundary sampling 95 | def boundary(self, n=100): 96 | x_boundary = [] 97 | for i in range(self.D): 98 | x = np.random.uniform(a[i], b[i], [2 * n, self.D]) 99 | x[:n, i] = b[i] 100 | x[n:, i] = a[i] 101 | x_boundary.append(x) 102 | x_boundary = np.concatenate(x_boundary, axis=0) 103 | x_boundary = torch.FloatTensor(x_boundary).requires_grad_(True).to(self.device) 104 | return x_boundary 105 | 106 | # Randomly sample bdsize points within the neighborhood of point x 107 | def neighborhood(self, x, size): 108 | l_bounds = [t - self.E for t in x.cpu().detach().numpy()] 109 | u_bounds = [t + self.E for t in x.cpu().detach().numpy()] 110 | sample = sampleCubeQMC(self.D, l_bounds, u_bounds, size) 111 | sample = torch.FloatTensor(sample).to(self.device) 112 | return sample 113 | 114 | def neighborhoodBD(self, X): 115 | lb = [-1 for _ in range(self.D - 1)] 116 | ub = [1 for _ in range(self.D - 1)] 117 | x_QMC = sampleCubeQMC(self.D - 1, lb, ub, self.B) 118 | x_nbound = [] 119 | for i in range(self.D): 120 | x_nbound.append(np.insert(x_QMC, i, [1], axis=1)) 121 | x_nbound.append(np.insert(x_QMC, i, [-1], axis=1)) 122 | x_nbound = np.concatenate(x_nbound, axis=0).reshape(1, -1, self.D) 123 | x_nbound = torch.FloatTensor(x_nbound).to(self.device) 124 | X = torch.unsqueeze(X, dim=1) 125 | X = X.expand(-1, x_nbound.shape[1], x_nbound.shape[2]) 126 | X_bound = X + self.E * x_nbound 127 | X_bound = X_bound.reshape(-1, self.D) 128 | return X_bound.detach().requires_grad_(True) 129 | 130 | def outerNormalVec(self): 131 | bd_dir = torch.zeros(2 * self.D * self.bdsize, self.D) 132 | for i in 
range(self.D): 133 | bd_dir[2 * i * self.bdsize: (2 * i + 1) * self.bdsize, i] = 1 134 | bd_dir[(2 * i + 1) * self.bdsize: 2 * (i + 1) * self.bdsize, i] = -1 135 | bd_dir = bd_dir.reshape(1, -1) 136 | return bd_dir.detach().requires_grad_(True).to(self.device) 137 | 138 | 139 | class DFVMsolver(object): 140 | def __init__(self, Equation, model, device): 141 | self.Eq = Equation 142 | self.model = model 143 | self.device = device 144 | 145 | # Compute the divergence of u = u_theta 146 | def Nu(self, X): 147 | u = self.model(X) 148 | Du = torch.autograd.grad(outputs=[u], 149 | inputs=[X], 150 | grad_outputs=torch.ones_like(u), 151 | allow_unused=True, 152 | retain_graph=True, 153 | create_graph=True)[0] 154 | return Du 155 | 156 | # Compute the boundary integral for each sample point neighborhood 157 | def integrate_BD(self, X, x_bd, bd_dir): 158 | n = len(X) 159 | integrate_bd = torch.zeros(n, 1).to(self.device) 160 | # Compute gradient 161 | Du = self.Nu(x_bd).reshape(n, -1) 162 | # Multiply gradient by normal vector matrix to get directional derivative 163 | integrate_bd = torch.sum(Du * bd_dir, 1) / (2 * self.Eq.bdsize) 164 | return integrate_bd 165 | 166 | # Compute the volume integral 167 | def integrate_F(self, X): 168 | n = len(X) 169 | integrate_f = torch.zeros([n, 1]).to(self.device) 170 | for i in range(n): 171 | x_neighbor = self.Eq.neighborhood(X[i], 1) 172 | res = self.Eq.f(x_neighbor) 173 | integrate_f[i] = torch.mean(res) * self.Eq.E 174 | return integrate_f.detach().requires_grad_(True) 175 | 176 | # Boundary loss function 177 | def loss_boundary(self, x_boundary): 178 | u_theta = self.model(x_boundary).reshape(-1, 1) 179 | u_bound = self.Eq.g(x_boundary).reshape(-1, 1) 180 | loss_bd = LOSS_FN(u_theta, u_bound) 181 | return loss_bd 182 | 183 | # Test function 184 | def TEST(self, NUM_TESTING): 185 | with torch.no_grad(): 186 | x_test = torch.Tensor(NUM_TESTING, self.Eq.D).uniform_(a[0], b[0]).requires_grad_(True).to(self.device) 187 | begin = time.time() 188 | u_real = self.Eq.u_exact(x_test).reshape(1,-1) 189 | end = time.time() 190 | u_pred = self.model(x_test).reshape(1,-1) 191 | Error = u_real - u_pred 192 | L2error = torch.sqrt( torch.mean(Error*Error) )/ torch.sqrt( torch.mean(u_real*u_real) ) 193 | MaxError = torch.max(torch.abs(Error)) 194 | return L2error.cpu().detach().numpy(), MaxError.cpu().detach().numpy(), end-begin 195 | 196 | 197 | class MLP(nn.Module): 198 | def __init__(self, in_channels=3, out_channels=1, hidden_width=40): 199 | super(MLP, self).__init__() 200 | self.net = nn.Sequential( 201 | nn.Linear(in_channels, hidden_width), 202 | nn.Tanh(), 203 | nn.Linear(hidden_width, hidden_width), 204 | nn.Tanh(), 205 | nn.Linear(hidden_width, hidden_width), 206 | nn.Tanh(), 207 | nn.Linear(hidden_width, hidden_width), 208 | nn.Tanh(), 209 | nn.Linear(hidden_width, hidden_width), 210 | nn.Tanh(), 211 | nn.Linear(hidden_width, hidden_width), 212 | nn.Tanh(), 213 | nn.Linear(hidden_width, out_channels) 214 | ) 215 | 216 | for m in self.modules(): 217 | if isinstance(m, nn.Linear): 218 | nn.init.xavier_normal_(m.weight) 219 | nn.init.constant_(m.bias, 0) 220 | 221 | def forward(self, x): 222 | return self.net(x.to(torch.float32)) 223 | 224 | def setup_seed(seed): 225 | torch.manual_seed(seed) 226 | torch.cuda.manual_seed_all(seed) 227 | np.random.seed(seed) 228 | # random.seed(seed) 229 | torch.backends.cudnn.deterministic = True 230 | 231 | 232 | def train_pipeline(): 233 | # define device 234 | DEVICE = torch.device(f"cuda:{CUDA_ORDER}" if 
torch.cuda.is_available() else "cpu") 235 | print(f"Using {DEVICE}") 236 | # define equation 237 | Eq = PoissonEQuation(DIMENSION, EPSILON, BDSIZE, DEVICE) 238 | # define model 239 | # torch.set_default_dtype(torch.float64) 240 | model = MLP(DIM_INPUT, DIM_OUTPUT, NUM_UNIT).to(DEVICE) 241 | optA = torch.optim.Adam(model.parameters(), lr=LEARN_RATE) 242 | solver = DFVMsolver(Eq, model, DEVICE) 243 | 244 | x = Eq.interior(NUM_TRAIN_SMAPLE) 245 | x_bd = Eq.neighborhoodBD(x) 246 | print(x_bd.shape) 247 | int_f = solver.integrate_F(x) 248 | bd_dir = Eq.outerNormalVec() 249 | x_boundary = Eq.boundary(100) 250 | 251 | # Network training loop 252 | elapsed_time = 0 # timer 253 | training_history = [] # training record 254 | 255 | for step in tqdm(range(NUM_ITERATION+1)): 256 | if IS_DECAY and step and step % LEARN_FREQUENCY == 0: 257 | for p in optA.param_groups: 258 | if p['lr'] > LEARN_LOWWER_BOUND: 259 | p['lr'] = p['lr']*LEARN_DECAY_RATE 260 | print(f"Learning Rate: {p['lr']}") 261 | 262 | start_time = time.time() 263 | int_bd = solver.integrate_BD(x, x_bd, bd_dir).reshape(-1,1) 264 | loss_int = LOSS_FN(-int_bd, int_f) 265 | loss_bd = solver.loss_boundary(x_boundary) 266 | loss = loss_int + BETA*loss_bd 267 | optA.zero_grad() 268 | loss.backward() 269 | optA.step() 270 | 271 | epoch_time = time.time() - start_time 272 | elapsed_time = elapsed_time + epoch_time 273 | if step % TEST_FREQUENCY == 0: 274 | loss_int = loss_int.cpu().detach().numpy() 275 | loss_bd = loss_bd.cpu().detach().numpy() 276 | loss = loss.cpu().detach().numpy() 277 | L2error,ME,T = solver.TEST(NUM_TEST_SAMPLE) 278 | if step and step%1000 == 0: 279 | tqdm.write( f'\nStep: {step:>5}, ' 280 | f'Loss_int: {loss_int:>10.5f}, ' 281 | f'Loss_bd: {loss_bd:>10.5f}, ' 282 | f'Loss: {loss:>10.5f}, ' 283 | f'L2 error: {L2error:.5f}, ' 284 | f'Time: {elapsed_time:.2f}') 285 | training_history.append([step, L2error, ME, loss, elapsed_time, epoch_time, T]) 286 | 287 | training_history = np.array(training_history) 288 | print(np.min(training_history[:,1])) 289 | print(np.min(training_history[:,2])) 290 | 291 | save_time = time.localtime() 292 | save_time = f'[{save_time.tm_mday:0>2d}{save_time.tm_hour:0>2d}{save_time.tm_min:0>2d}]' 293 | dir_path = os.getcwd() + f'/PossionEQ_seed{seed}/' 294 | file_name = f'{DIMENSION}DIM-DFVM-{BETA}weight-{NUM_ITERATION}itr-{EPSILON}R-{BDSIZE}bd-{LEARN_RATE}lr.csv' 295 | file_path = dir_path + file_name 296 | 297 | if not os.path.exists(dir_path): 298 | os.makedirs(dir_path) 299 | 300 | np.savetxt(file_path, training_history, 301 | delimiter =",", 302 | header ="step, L2error, MaxError, loss, elapsed_time, epoch_time, inference_time", 303 | comments ='') 304 | print('Training History Saved!') 305 | 306 | if IS_SAVE_MODEL: 307 | torch.save(model.state_dict(), dir_path + f'{DIMENSION}DIM-DFVM_net') 308 | print('DFVM Network Saved!') 309 | 310 | 311 | if __name__ == "__main__": 312 | setup_seed(seed) 313 | train_pipeline() 314 | -------------------------------------------------------------------------------- /GenerateData.py: -------------------------------------------------------------------------------- 1 | from scipy.stats import qmc 2 | import numpy as np 3 | 4 | # cube 5 | def sampleCube(dim, l_bounds, u_bounds, N=100): 6 | '''Uniform Mesh 7 | 8 | Get the Uniform Mesh. 
9 | 10 | Args: 11 | dim: The dimension of space 12 | l_bounds: The lower boundary 13 | u_bounds: The upper boundary 14 | N: The number of sample points per dimension 15 | 16 | Returns: 17 | numpy.array: An array of sample points 18 | ''' 19 | sample = [] 20 | for i in range(dim): 21 | sample.append( np.linspace(l_bounds[i], u_bounds[i], N) ) 22 | if dim == 2: 23 | x, y = np.meshgrid(sample[0], sample[1]) 24 | return np.hstack((x.reshape(-1,1), y.reshape(-1,1))) 25 | if dim == 3: 26 | x, y, z = np.meshgrid(sample[0], sample[1], sample[2]) 27 | return np.hstack((x.reshape(-1,1), y.reshape(-1,1), z.reshape(-1,1))) 28 | return sample[0].reshape(-1, 1) # dim == 1 (note: dim > 3 also falls through to this 1-D case) 29 | 30 | def sampleCubeMC(dim, l_bounds, u_bounds, N=100): 31 | '''Monte Carlo Sampling 32 | 33 | Get the sampling points by the Monte Carlo method. 34 | 35 | Args: 36 | dim: The dimension of space 37 | l_bounds: The lower boundary 38 | u_bounds: The upper boundary 39 | N: The number of sample points 40 | 41 | Returns: 42 | numpy.array: An array of sample points 43 | ''' 44 | sample = [] 45 | for i in range(dim): 46 | sample.append( np.random.uniform(l_bounds[i], u_bounds[i], [N, 1]) ) 47 | data = np.concatenate(sample, axis=1) 48 | return data 49 | 50 | def sampleCubeQMC(dim, l_bounds, u_bounds, expon=100): 51 | '''Quasi-Monte Carlo Sampling 52 | 53 | Get the sampling points from quasi-Monte Carlo Sobol sequences in dim-dimensional space. 54 | 55 | Args: 56 | dim: The dimension of space 57 | l_bounds: The lower boundary 58 | u_bounds: The upper boundary 59 | expon: 2^expon Sobol points are generated; the first (all-zeros) point is dropped, so 2^expon - 1 points are returned 60 | 61 | Returns: 62 | numpy.array: An array of sample points 63 | ''' 64 | sampler = qmc.Sobol(d=dim, scramble=False) 65 | sample = sampler.random_base2(expon) 66 | data = qmc.scale(sample, l_bounds, u_bounds) 67 | data = np.array(data) 68 | return data[1:] 69 | 70 | 71 | # unit sphere 72 | def sampleBall(d, n): 73 | '''Sampling inside the unit ball 74 | 75 | Args: 76 | d: dimension 77 | n: sampling numbers 78 | 79 | Returns: 80 | numpy.array: An array of sample points 81 | ''' 82 | sample = sampleBallBD(d, n) 83 | r = np.random.uniform(0, 1, n) # NOTE: a uniform radius clusters points near the origin; use r**(1/d) for a volume-uniform sample 84 | for i in range(n): 85 | sample[i] *= r[i] 86 | return sample 87 | 88 | def sampleBallBD(d, n): 89 | '''Sampling on the surface of the unit sphere 90 | 91 | Args: 92 | d: dimension 93 | n: sampling numbers 94 | 95 | Returns: 96 | numpy.array: An array of sample points 97 | ''' 98 | x = np.random.normal(0, 1, [n,d]) 99 | x = x/np.sqrt( np.sum( x**2 , 1 ) ).reshape(n, 1) 100 | return x 101 | 102 | 103 | if __name__ == '__main__': 104 | l_bounds = [0, 0, 0] 105 | u_bounds = [1, 1, 1] 106 | x = sampleCubeQMC(3, l_bounds, u_bounds, 4) 107 | print(x) 108 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 CenJH 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |

# Deep Finite Volume Method

[Paper (arXiv)](https://arxiv.org/abs/2305.06863) | Paper (SSRN)

Jianhuan Cen and Qingsong Zou
Sun Yat-sen University
This is the code for the paper: [Deep Finite Volume Method for Partial Differential Equations](https://arxiv.org/abs/2305.06863).

DFVM centers on a novel loss function built from local conservation laws derived from the original PDE, which distinguishes it from traditional deep-learning methods. Because DFVM is formulated in the weak form of the PDE rather than the strong form, it achieves better accuracy than strong-form methods such as Physics-Informed Neural Networks (PINNs), particularly for PDEs with less smooth solutions. A key technique of DFVM is the transformation of all second- and higher-order derivatives of the neural network into first-order derivatives, which can be computed directly by Automatic Differentiation (AD). This significantly reduces the computational overhead and is especially advantageous for high-dimensional PDEs; a sketch of the underlying identity is given after the citation below.

## Citation

```
@article{cen2024dfvm,
  title={Deep Finite Volume Method for High-Dimensional Partial Differential Equations},
  author={Jianhuan Cen and Qingsong Zou},
  url={https://arxiv.org/abs/2305.06863},
  year={2024},
}
```
--------------------------------------------------------------------------------
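## How DFVM avoids second-order derivatives (sketch)

The following is a minimal sketch of the identity the loss is built on, written for the Poisson problem solved in this repository. Integrating $-\Delta u = f$ over a small control volume $V_h(x)$ centered at a collocation point $x$ and applying the divergence theorem gives the local conservation law

$$
-\int_{\partial V_h(x)} \nabla u_\theta \cdot \vec{n} \,\mathrm{d}S \;=\; \int_{V_h(x)} f \,\mathrm{d}V ,
$$

in which only first-order derivatives of the network $u_\theta$ appear, and these come from a single AD pass. In the code, `integrate_BD` approximates the left-hand side from quasi-Monte Carlo points on $\partial V_h(x)$ (built by `neighborhoodBD`, with outward normals from `outerNormalVec`), `integrate_F` approximates the right-hand side, and the interior loss is `LOSS_FN(-int_bd, int_f)`, up to the volume normalization used in each script.

Both training scripts are driven by `argparse` and accept `--dimension` and `--seed` flags, e.g. `python DFVM_Poisson_HighDim.py --dimension 100 --seed 0`; each run writes its training history to a per-seed CSV file.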