├── README.md
├── images
│   ├── 1_pvd.gif
│   ├── 2_dendrite.gif
│   ├── 3_spd.gif
│   └── unet_architecture.png
└── src
    ├── code
    │   ├── train_unet.py
    │   ├── unet.py
    │   └── unet
    │       ├── checkpoint
    │       ├── convergence.png
    │       ├── convergence_data.npz
    │       ├── model.data-00000-of-00001
    │       └── model.index
    └── data
        ├── MAX.npy
        ├── MIN.npy
        ├── sample_test_data.npy
        ├── sample_train_data.npy
        └── sample_val_data.npy

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Rethinking materials simulations: Blending direct numerical simulations with neural operators - [Link](https://www.nature.com/articles/s41524-024-01319-1)

## Abstract
Materials simulations based on direct numerical solvers are accurate but computationally expensive for predicting materials evolution across length- and timescales, due to the complexity of the underlying evolution equations, the nature of multiscale spatiotemporal interactions, and the need to reach long-time integration. We develop a method that blends direct numerical solvers with neural operators to accelerate such simulations. This methodology is based on the integration of a community numerical solver with a U-Net neural operator, enhanced by a temporal-conditioning mechanism to enable accurate extrapolation and efficient time-to-solution predictions of the dynamics. We demonstrate the effectiveness of this hybrid framework on simulations of microstructure evolution via the phase-field method. Such simulations exhibit high spatial gradients and the co-evolution of different material phases with simultaneous slow and fast materials dynamics. We establish accurate extrapolation of the coupled solver with large speed-up compared to DNS depending on the hybrid strategy utilized. This methodology is generalizable to a broad range of materials simulations, from solid mechanics to fluid dynamics, geophysics, climate, and more.
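## Hybrid time stepping at a glance

The sketch below only illustrates the idea described in the abstract: a direct numerical solver and a temporally-conditioned U-Net take turns advancing the same trajectory, with the solver periodically correcting the operator's extrapolation. `dns_step` and `unet_forecast` are hypothetical stand-ins (a toy diffusion update and a frame-repeat stub), and the relay schedule is illustrative; only the window sizes (`lb = 3` past frames, `lf = 9` predicted frames on a 128x128 grid) are taken from `train_unet.py` in this repository.

```python
import numpy as np

def dns_step(field, n_steps=50):
    """Stand-in for the direct numerical solver: a toy explicit diffusion update."""
    for _ in range(n_steps):
        field = field + 0.1 * (
            np.roll(field, 1, 0) + np.roll(field, -1, 0) +
            np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field
        )
    return field

def unet_forecast(history, dt_grid):
    """Stand-in for the trained U-Net: maps lb past frames to lf future frames.
    Here it just repeats the last frame so the control flow runs end to end."""
    return np.repeat(history[-1:], len(dt_grid), axis=0)

lb, lf = 3, 9                          # past frames consumed / future frames predicted
nx = ny = 128
dt_grid = np.linspace(0.0, 1.0, lf)    # normalized time increments used for conditioning

frames = [np.random.rand(nx, ny)]      # initial microstructure
for _ in range(lb - 1):                # warm-up: let the solver produce the first lb frames
    frames.append(dns_step(frames[-1]))

for _ in range(4):                     # hybrid relay: operator extrapolates, solver corrects
    history = np.stack(frames[-lb:])                # (lb, nx, ny)
    frames.extend(unet_forecast(history, dt_grid))  # appends lf predicted frames
    frames.append(dns_step(frames[-1]))             # solver correction before the next relay

print(np.stack(frames).shape)
```

How often control is handed back to the solver is the "hybrid strategy" referred to in the abstract; the 2.27x speed-up reported below corresponds to one particular choice of that schedule.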
## Architecture: UNet with temporal conditioning
![UNet architecture with temporal conditioning](images/unet_architecture.png)

## Test trajectories predicted by the hybrid model (2.27x speed-up)
### 1) Physical Vapour Deposition
![Physical vapour deposition trajectory](images/1_pvd.gif)
### 2) Dendritic Microstructures
![Dendritic microstructure trajectory](images/2_dendrite.gif)
### 3) Spinodal Decomposition
![Spinodal decomposition trajectory](images/3_spd.gif)

## Citation

    @article{oommen2024rethinking,
      title={Rethinking materials simulations: Blending direct numerical simulations with neural operators},
      author={Oommen, Vivek and Shukla, Khemraj and Desai, Saaketh and Dingreville, R{\'e}mi and Karniadakis, George Em},
      journal={npj Computational Materials},
      volume={10},
      number={1},
      pages={145},
      year={2024},
      publisher={Nature Publishing Group UK London}
    }

--------------------------------------------------------------------------------
/images/1_pvd.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/images/1_pvd.gif
--------------------------------------------------------------------------------
/images/2_dendrite.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/images/2_dendrite.gif
--------------------------------------------------------------------------------
/images/3_spd.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/images/3_spd.gif
--------------------------------------------------------------------------------
/images/unet_architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/images/unet_architecture.png
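The training script that follows builds its input/target pairs with `numpy.lib.stride_tricks.sliding_window_view`: each sample pairs `lb = 3` consecutive past frames with the `lf = 9` frames that follow them along the time axis. The toy demo below reproduces that windowing on small made-up arrays (an 8x8 grid instead of the repository's 128x128 data) so the resulting shapes are easy to inspect; it is not part of the repository.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# toy trajectories: 2 runs, 14 snapshots, 8x8 grid, 1 field
traj = np.random.rand(2, 14, 8, 8, 1)
lb, lf = 3, 9                          # past frames in, future frames out

# inputs: every window of lb consecutive frames (excluding the last lf frames)
x = sliding_window_view(traj[:, :-lf], window_shape=lb, axis=1)
x = x.transpose(0, 1, 2, 3, 5, 4).reshape(-1, 8, 8, lb, 1)

# targets: the lf frames that follow each input window
y = sliding_window_view(traj[:, lb:], window_shape=lf, axis=1)
y = y.transpose(0, 1, 5, 2, 3, 4).reshape(-1, lf, 8, 8, 1)

print(x.shape)   # (2 * (14 - lb - lf + 1), 8, 8, lb, 1) = (6, 8, 8, 3, 1)
print(y.shape)   # (6, lf, 8, 8, 1) = (6, 9, 8, 8, 1)
```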
--------------------------------------------------------------------------------
/src/code/train_unet.py:
--------------------------------------------------------------------------------
import sys
import numpy as np
import tensorflow as tf
import time
import matplotlib.pyplot as plt
from numpy.lib.stride_tricks import sliding_window_view
tf.config.experimental.enable_tensor_float_32_execution(True)

from unet import unet

physical_devices = tf.config.list_physical_devices('GPU')
print(physical_devices)
try:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
    pass

@tf.function()
def train_step(model, x, dt, y, optimizer):
    with tf.GradientTape() as tape:
        y_pred = model(x, dt)
        loss = model.loss(y_pred, y)[0]

    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return(loss)

def preprocess(traj, Par):
    # traj - [_, 14, 128, 128, 3]
    # pair each window of lb past frames (inputs) with the lf frames that follow it (targets)
    x = sliding_window_view(traj[:,:-Par['lf'],:,:,0:1], window_shape=Par['lb'], axis=1 ).transpose(0,1,2,3,5,4).reshape(-1,Par['nx'], Par['ny'], Par['lb'], 1)
    y = sliding_window_view(traj[:,Par['lb']:,:,:,0:1], window_shape=Par['lf'], axis=1 ).transpose(0,1,5,2,3,4).reshape(-1,Par['lf'],Par['nx'], Par['ny'], 1)
    print('x: ', x.shape)
    print('y: ', y.shape)
    return x, y

def main():
    np.random.seed(23)
    tensor = lambda x: tf.convert_to_tensor(x, dtype=tf.float32)

    train = np.load('../data/sample_train_data.npy')
    val = np.load('../data/sample_val_data.npy')
    test = np.load('../data/sample_test_data.npy')
    MIN = np.load('../data/MIN.npy')
    MAX = np.load('../data/MAX.npy')

    Par = {}
    Par['nt'] = train.shape[1]
    Par['nx'] = train.shape[2]
    Par['ny'] = train.shape[3]
    Par['nf'] = train.shape[4]-1

    Par['lb'] = 3    # number of past frames fed to the network
    Par['lf'] = 9    # number of future frames predicted per forward pass
    Par['temp'] = Par['nt'] - Par['lb'] - Par['lf'] + 1    # sliding windows per trajectory

    print('\nTrain Dataset')
    x_train, y_train = preprocess(train, Par)
    print('\nVal Dataset')
    x_val, y_val = preprocess(val, Par)
    print('\nTest Dataset')
    x_test, y_test = preprocess(test, Par)

    Par['inp_shift'] = MIN
    Par['inp_scale'] = MAX-MIN
    Par['out_shift'] = MIN
    Par['out_scale'] = MAX-MIN

    num_samples = x_train.shape[0]
    # normalized time increments (one per predicted frame) used to condition the U-Net
    dt = np.linspace(0,1,Par['lf']).reshape(-1,1)
    print('dt: ', dt.shape)

    address = 'unet'
    Par['address'] = address

    print('shuffling train dataset')
    idx = np.arange(x_train.shape[0])
    np.random.shuffle(idx)
    x_train = x_train[idx]
    y_train = y_train[idx]
    print('shuffling complete')

    model = unet(Par)
    _ = model( tensor(x_train[0:1]), tensor(dt))
    print(model.summary())

    model_number = 0

    n_epochs = 10
    train_batch_size = 4
    val_batch_size = 1
    test_batch_size = 1
    optimizer = tf.keras.optimizers.Adam(learning_rate = 2*10**-4)

    lowest_loss = 1000
    begin_time = time.time()
    print('Training Begins')
    first = True

    for i in range(model_number+1, n_epochs+1):
        for j in np.arange(0, num_samples-train_batch_size+1, train_batch_size):
            loss = train_step(model, tensor(x_train[j:(j+train_batch_size)]), tensor(dt), tensor(y_train[j:(j+train_batch_size)]), optimizer)
        if i%1 == 0:    # validate every epoch

            train_loss = loss.numpy()

            val_loss_ls = []
            for k in range(0, x_val.shape[0]-val_batch_size+1, val_batch_size):
                y_pred = model(x_val[k:k+val_batch_size], dt)
                loss = model.loss(y_pred, y_val[k:k+val_batch_size])[0].numpy()
                val_loss_ls.append(loss)

            val_loss = np.mean(val_loss_ls)

            if val_loss
            # ... (the remainder of train_unet.py is truncated in this dump) ...

--------------------------------------------------------------------------------
/src/code/unet.py:
--------------------------------------------------------------------------------
# ... (the constructor, encoder blocks, and dt-embedding layers are truncated in this dump;
#      only the end of the decoder and the loss function are shown) ...

        # decoder: at each scale, the upsampled features are concatenated with the
        # corresponding encoder features modulated channel-wise by the dt embedding
        temp = tf.einsum('ijkl,pl->ipjkl', bottleneck,f5_dt)   #[_,nt,8,8,16*n_kernels]
        BOTTLENECK = tf.reshape(temp, [-1, tf.shape(temp)[2], tf.shape(temp)[3], tf.shape(temp)[4]])   #[_*nt,8,8,16*n_kernels]

        d4 = self.tconv4(BOTTLENECK)   #[_*nt,16,16, 8*n_kernels]
        temp = tf.einsum('ijkl,pl->ipjkl', e4,f4_dt)   #[_,nt,16,16,8*n_kernels]
        E4 = tf.reshape(temp, [-1, tf.shape(temp)[2], tf.shape(temp)[3], tf.shape(temp)[4]])   #[_*nt,16,16,8*n_kernels]
        d4 = tf.concat([d4,E4], axis=-1)   #[_*nt,16,16, 2*(8*n_kernels)]
        d4 = self.dec4(d4)   #[_*nt,16,16, 8*n_kernels]

        d3 = self.tconv3(d4)   #[_*nt,32,32, 4*n_kernels]
        temp = tf.einsum('ijkl,pl->ipjkl', e3,f3_dt)   #[_,nt,32,32,4*n_kernels]
        E3 = tf.reshape(temp, [-1, tf.shape(temp)[2], tf.shape(temp)[3], tf.shape(temp)[4]])   #[_*nt,32,32,4*n_kernels]
        d3 = tf.concat([d3,E3], axis=-1)   #[_*nt,32,32, 2*(4*n_kernels)]
        d3 = self.dec3(d3)   #[_*nt,32,32, 4*n_kernels]

        d2 = self.tconv2(d3)   #[_*nt,64,64, 2*n_kernels]
        temp = tf.einsum('ijkl,pl->ipjkl', e2,f2_dt)   #[_,nt,64,64,2*n_kernels]
        E2 = tf.reshape(temp, [-1, tf.shape(temp)[2], tf.shape(temp)[3], tf.shape(temp)[4]])   #[_*nt,64,64,2*n_kernels]
        d2 = tf.concat([d2,E2], axis=-1)   #[_*nt,64,64, 2*(2*n_kernels)]
        d2 = self.dec2(d2)   #[_*nt,64,64, 2*n_kernels]

        d1 = self.tconv1(d2)   #[_*nt,128,128, 1*n_kernels]
        temp = tf.einsum('ijkl,pl->ipjkl', e1,f1_dt)   #[_,nt,128,128,1*n_kernels]
        E1 = tf.reshape(temp, [-1, tf.shape(temp)[2], tf.shape(temp)[3], tf.shape(temp)[4]])   #[_*nt,128,128,1*n_kernels]
        d1 = tf.concat([d1,E1], axis=-1)   #[_*nt,128,128, 2*(1*n_kernels)]
        d1 = self.dec1(d1)   #[_*nt,128,128, 1*n_kernels]

        d0 = self.tconv0(d1)   #[_*nt,256,256, n_kernels/2]
        temp = tf.einsum('ijkl,pl->ipjkl', e0,f0_dt)   #[_,nt,256,256, n_kernels/2]
        E0 = tf.reshape(temp, [-1, tf.shape(temp)[2], tf.shape(temp)[3], tf.shape(temp)[4]])   #[_*nt,256,256, n_kernels/2]
        d0 = tf.concat([d0,E0], axis=-1)   #[_*nt,256,256, 2*(n_kernels/2)]
        d0 = self.dec0(d0)   #[_*nt,256,256, n_kernels/2]

        out = self.final_norm(d0)
        out = self.final(out)   #[_*nt,256,256, self.nf]
        out = tf.reshape(out, [-1,nt,tf.shape(out)[1],tf.shape(out)[2],tf.shape(out)[3] ])   #[_,nt,256,256, self.nf]

        out = out*self.Par['out_scale'] + self.Par['out_shift']

        return out


    def loss(self, y_pred, y_train):
        # relative mean squared error: MSE normalized by the mean squared reference
        train_loss = tf.reduce_mean(tf.square(y_train-y_pred))/tf.reduce_mean(tf.square(y_train))

        return([train_loss])
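The decoder above injects the requested time increments by multiplying each encoder feature map channel-wise with a learned embedding of the normalized time step (`f*_dt`), replicating it over the `nt` requested increments, and concatenating the result with the upsampled decoder features. Below is a standalone illustration of that conditioning step; the tensor sizes are hypothetical values consistent with the shape comments in the code, and random tensors stand in for the learned features.

```python
import tensorflow as tf

# hypothetical sizes, mirroring the shape comments in unet.py
batch, nt = 2, 9          # nt = number of requested time increments (lf)
h = w = 16                # spatial resolution at this skip connection
c = 64                    # channels at this level

e4 = tf.random.normal((batch, h, w, c))       # encoder feature map, one per input window
f4_dt = tf.random.normal((nt, c))             # dt embedding, one row per requested increment

# broadcast the feature map over the nt requested times, modulated channel-wise by f4_dt
temp = tf.einsum('ijkl,pl->ipjkl', e4, f4_dt)     # (batch, nt, h, w, c)
E4 = tf.reshape(temp, (-1, h, w, c))              # (batch*nt, h, w, c)

# in the decoder, E4 is concatenated with the upsampled features at the same scale
d4 = tf.random.normal((batch * nt, h, w, c))      # stand-in for the upsampled decoder path
d4 = tf.concat([d4, E4], axis=-1)                 # (batch*nt, h, w, 2*c)

print(temp.shape, E4.shape, d4.shape)
```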
--------------------------------------------------------------------------------
/src/code/unet/checkpoint:
--------------------------------------------------------------------------------
model_checkpoint_path: "model"
all_model_checkpoint_paths: "model"
--------------------------------------------------------------------------------
/src/code/unet/convergence.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/code/unet/convergence.png
--------------------------------------------------------------------------------
/src/code/unet/convergence_data.npz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/code/unet/convergence_data.npz
--------------------------------------------------------------------------------
/src/code/unet/model.data-00000-of-00001:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/code/unet/model.data-00000-of-00001
--------------------------------------------------------------------------------
/src/code/unet/model.index:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/code/unet/model.index
--------------------------------------------------------------------------------
/src/data/MAX.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/data/MAX.npy
--------------------------------------------------------------------------------
/src/data/MIN.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/data/MIN.npy
--------------------------------------------------------------------------------
/src/data/sample_test_data.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/data/sample_test_data.npy
--------------------------------------------------------------------------------
/src/data/sample_train_data.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/data/sample_train_data.npy
--------------------------------------------------------------------------------
/src/data/sample_val_data.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vivekoommen/UNetForMicrostructureEvolution/91763488d84b44b4baa6a541fc6a74b140f91cad/src/data/sample_val_data.npy
--------------------------------------------------------------------------------
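For orientation, the sample arrays above are loaded by `train_unet.py` via the relative paths shown below, and the comment in its `preprocess` function suggests trajectories of 14 snapshots on a 128x128 grid with the field of interest in channel 0. The five-dimensional layout assumed in this quick sanity check is inferred from that code, not a documented guarantee.

```python
import numpy as np

# run from src/code/, mirroring the paths used in train_unet.py
train = np.load('../data/sample_train_data.npy')
MIN = np.load('../data/MIN.npy')    # normalization bounds used for input/output scaling
MAX = np.load('../data/MAX.npy')

print('train:', train.shape)        # assumed [n_trajectories, n_snapshots, nx, ny, n_fields + 1]
print('MIN/MAX:', MIN.shape, MAX.shape)

# the preprocess() comment in train_unet.py indicates 128 x 128 snapshots
assert train.ndim == 5 and train.shape[2:4] == (128, 128)
```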