├── .gitignore ├── LICENSE.txt ├── README.md ├── aesindy ├── __init__.py ├── config_template.py ├── dynamical_models.py ├── helper_functions.py ├── net_config.py ├── sindy_utils.py ├── solvers.py └── training.py ├── analyze ├── .ipynb_checkpoints │ └── analyze_data-checkpoint.ipynb ├── analyze.py ├── analyze_data.ipynb └── plot_wingturb.ipynb ├── data ├── .ipynb_checkpoints │ └── plot_results-checkpoint.ipynb ├── lorenzww.json ├── plot_results.ipynb ├── pupils_156_joe.mat └── small_files │ ├── NACA0012_Re1000_AoA35_2D_forces.mat │ └── NACA0012_Re1500_AoA35_2D_forces.mat ├── requirements.txt ├── setup.py └── testcases ├── .ipynb_checkpoints └── Untitled-checkpoint.ipynb ├── __init__.py ├── autolock_unlockf.py ├── basic_test.py ├── default_params.py ├── fluttering.py ├── h2z_evolution.py ├── initv_intloss.py ├── lorenzww_basic.py ├── pendulum_basic.py ├── plot_checkpoints.ipynb └── plot_checkpoints_incompsim.ipynb /.gitignore: -------------------------------------------------------------------------------- 1 | local_tests/ 2 | local_data/ 3 | results/ 4 | __pycache__/ 5 | .DS_Store 6 | 7 | # pycharm 8 | *.idea 9 | 10 | /aesindy/config.py 11 | /build/ 12 | /dist/ 13 | /aesindy.egg-info/ 14 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 Joseph Bakarji 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Description 2 | 3 | This code discovers an analytical model using a combination of time-delay embedding, autoencoders, and sparse identification of differential equations (SINDy). The details can be found in the paper [Discovering Governing Equations from Partial Measurements with Deep Delay Autoencoders](https://arxiv.org/abs/2201.05136). 4 | 5 | ### Abstract: 6 | A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important. Takens’ theorem provides conditions for when it is possible to augment these partial measurements with time delayed information, resulting in an attractor that is diffeomorphic to that of the original full-state system. 
However, the coordinate transformation back to the original attractor is typically unknown, and learning the dynamics in the embedding space has remained an open challenge for decades. Here, we design a custom deep autoencoder network to learn a coordinate transformation from the delay embedded space into a new space where it is possible to represent the dynamics in a sparse, closed form. We demonstrate this approach on the Lorenz, Rossler, and Lotka-Volterra systems, learning dynamics from a single measurement variable. As a challenging example, we learn a Lorenz analogue from a single scalar variable extracted from a video of a chaotic waterwheel experiment. The resulting modeling framework combines deep learning to uncover effective coordinates and the sparse identification of nonlinear dynamics (SINDy) for interpretable modeling. Thus, we show that it is possible to simultaneously learn a closed-form model and the associated coordinate system for partially observed dynamics.
7 | 
8 | # Code
9 | 
10 | The code builds on [SindyAutoencoders](https://github.com/kpchamp/SindyAutoencoders), upgraded to TensorFlow 2. Here is a brief description of the main files:
11 | 
12 | - `aesindy` contains the heart of the code.
13 | 	- `net_config.py` contains `Sindy_Autoencoder`, which defines the deep network and SINDy architecture to be trained
14 | 	- `training.py` contains the training and testing functions
15 | 	- `dynamical_models.py` and `solvers.py` define the example systems (Lorenz, Rossler, predator-prey, pendulum) and generate data for training
16 | 	- `helper_functions.py` contains utilities for building Hankel matrices and hyperparameter lists
17 | - `testcases` contains examples of changing the input parameters (see `default_params.py`) and training the model.
18 | - `analyze` contains `analyze.py`, with functions for reading and visualizing the results, and notebooks that read multiple results and visualize them.
19 | 
20 | The main assumption in the code is that the first dimension of the input is taken as the 'measurement', and the algorithm attempts to recover it.
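21 | 
22 | # Example
23 | 
24 | A minimal sketch of the intended workflow, for orientation only. It assumes that `testcases/default_params.py` exposes a complete parameter dictionary named `params` (the contents of that file are not shown here), and the values below are illustrative rather than tuned:
25 | 
26 | ```python
27 | from aesindy.solvers import SynthData
28 | from aesindy.training import TrainModel
29 | from testcases.default_params import params  # assumed: provides the full parameter dict
30 | 
31 | # Generate delay-embedded (Hankel) training data from a single Lorenz measurement
32 | data = SynthData(model='lorenz', input_dim=128, noise=0.0)
33 | data.run_sim(n_ics=5, tend=30, dt=0.01)
34 | 
35 | params['model'] = 'lorenz'
36 | params['input_dim'] = 128
37 | params['dt'] = 0.01
38 | 
39 | TrainModel(data, params).fit()
40 | ```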
--------------------------------------------------------------------------------
/aesindy/__init__.py:
--------------------------------------------------------------------------------
1 | from .training import *
2 | from .dynamical_models import *
3 | from .helper_functions import *
4 | from .net_config import *
5 | from .sindy_utils import *
6 | from .solvers import *
7 | from .config import *
--------------------------------------------------------------------------------
/aesindy/config_template.py:
--------------------------------------------------------------------------------
1 | ROOTPATH='todo'
--------------------------------------------------------------------------------
/aesindy/dynamical_models.py:
--------------------------------------------------------------------------------
1 | from .sindy_utils import library_size
2 | import numpy as np
3 | 
4 | def get_model(name, args=None, normalization=None, use_sine=False):
5 | 
6 | 
7 |     if name == 'lorenz':
8 |         args = np.array([10, 28, 8/3]) if args is None else np.array(args)
9 |         f = lambda z, t: [args[0]*(z[1] - z[0]),
10 |                           z[0]*(args[1] - z[2]) - z[1],
11 |                           z[0]*z[1] - args[2]*z[2]]
12 | 
13 |         dim = 3
14 |         n = normalization if normalization is not None else np.ones((dim,))
15 |         poly_order = 2
16 | 
17 |         Xi = np.zeros((library_size(dim, poly_order), dim))
18 |         Xi[1,0] = - args[0]
19 |         Xi[2,0] = args[0]*n[0]/n[1]
20 |         Xi[1,1] = args[1]*n[1]/n[0]
21 |         Xi[2,1] = -1
22 |         Xi[6,1] = -n[1]/(n[0]*n[2])
23 |         Xi[3,2] = -args[2]
24 |         Xi[5,2] = n[2]/(n[0]*n[1])
25 | 
26 |         z0_mean_sug = [0, 0, 25]
27 |         z0_std_sug = [36, 48, 41]
28 | 
29 | 
30 | 
31 |     elif name == 'rossler':
32 |         args = [0.2, 0.2, 5.7] if args is None else np.array(args)
33 |         f = lambda z, t: [-z[1] - z[2] ,
34 |                           z[0] + args[0]*z[1],
35 |                           args[1] + z[2]*(z[0] - args[2])]
36 |         dim = 3
37 |         n = normalization if normalization is not None else np.ones((dim,))
38 |         poly_order = 2
39 | 
40 |         Xi = np.zeros((library_size(dim, poly_order), dim))
41 |         Xi[2,0] = -n[0]/n[1]
42 |         Xi[3,0] = -n[0]/n[2]
43 |         Xi[1,1] = n[1]/n[0]
44 |         Xi[2,1] = args[0]
45 |         Xi[0,2] = n[2]*args[1]
46 |         Xi[3,2] = -args[2]
47 |         Xi[6,2] = 1.0/n[0]
48 | 
49 |         z0_mean_sug = [0, 1, 0]
50 |         z0_std_sug = [2, 2, 2]
51 | 
52 | 
53 | 
54 |     elif name == 'predator_prey':
55 |         args = [1.0, 0.1, 1.5, 0.75] if args is None else np.array(args)
56 |         f = lambda z, t: [args[0]*z[0] - args[1]*z[0]*z[1] ,
57 |                           -args[2]*z[1] + args[1]*args[3]*z[0]*z[1] ]
58 |         dim = 2
59 |         n = normalization if normalization is not None else np.ones((dim,))
60 |         poly_order = 2
61 |         Xi = np.zeros((library_size(dim, poly_order), dim))
62 |         Xi[1,0] = args[0]
63 |         Xi[4,0] = -args[1] * n[0]/n[1]
64 |         Xi[2,1] = -args[2]
65 |         Xi[4,1] = args[1] * args[3] * n[1]/n[0]
66 | 
67 |         z0_mean_sug = [10, 5]
68 |         z0_std_sug = [8, 8]
69 | 
70 | 
71 |     elif name == 'pendulum':
72 |         # Not easily renormalizable because of the sin(x) feature
73 |         # args = [g, L]; pendulum equation: theta'' = -(g/L)*sin(theta)
74 |         args = [9.8, 1] if args is None else np.array(args)
75 |         f = lambda z, t: [z[1],
76 |                           -args[0]/args[1]*np.sin(z[0])]
77 |         dim = 2
78 |         n = normalization if normalization is not None else np.ones((dim,))
79 |         poly_order = 1
80 |         use_sine = True
81 | 
82 |         Xi = np.zeros((library_size(dim, poly_order, use_sine=use_sine), dim))
83 |         Xi[2, 0] = 1
84 |         Xi[3, 1] = -args[0]/args[1]
85 | 
86 |         z0_mean_sug = [np.pi/2, 0]
87 |         z0_std_sug = [np.pi/2, 2]
88 | 
89 |     else:
90 |         raise ValueError('unknown model name: ' + str(name))
91 | 
92 |     return f, Xi, dim, z0_mean_sug, z0_std_sug
--------------------------------------------------------------------------------
/aesindy/helper_functions.py:
--------------------------------------------------------------------------------
1 | import itertools
2 | import numpy as np
3 | import random
4 | 
5 | def get_hyperparameter_list(hyperparams):
6 |     def dict_product(dicts):
7 |         return [dict(zip(dicts, x)) for x in itertools.product(*dicts.values())]
8 |     hyperparams_list = dict_product(hyperparams)
9 |     random.shuffle(hyperparams_list)
10 |     return hyperparams_list
11 | 
12 | def get_hankel(x, dimension, delays, skip_rows=1):
13 |     if skip_rows > 1:
14 |         delays = len(x) - delays * skip_rows
15 |     H = np.zeros((dimension, delays))
16 |     for j in range(delays):
17 |         H[:, j] = x[j*skip_rows:j*skip_rows+dimension]
18 |     return H
19 | 
20 | def get_hankel_svd(H, reduced_dim):
21 |     U, s, VT = np.linalg.svd(H, full_matrices=False)
22 |     rec_v = np.matmul(VT[:reduced_dim, :].T, np.diag(s[:reduced_dim]))
23 |     return U, s, VT, rec_v
--------------------------------------------------------------------------------
/aesindy/net_config.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | from tensorflow.keras import layers, regularizers
4 | import pysindy as ps
5 | 
6 | class SindyCall(tf.keras.callbacks.Callback):
7 |     def __init__(self, threshold, update_freq, x, t):
8 |         super(SindyCall, self).__init__()
9 |         self.threshold = threshold
10 |         self.update_freq = update_freq
11 |         self.t = t
12 |         self.x = x
13 | 
14 |     def on_epoch_end(self, epoch, logs=None):
15 |         if epoch % self.update_freq == 0 and epoch > 1:
16 |             print('--- Running Sindy ---')
17 |             x_in = self.x
18 |             z = self.model.encoder(x_in)
19 |             z_latent = z.numpy()
20 | 
21 |             # sindy
22 |             library = ps.feature_library.polynomial_library.PolynomialLibrary(degree=self.model.poly_order)
23 |             opt = ps.optimizers.STLSQ(threshold=self.threshold)
24 |             sindy_model = ps.SINDy(feature_library=library, optimizer=opt)
25 |             time = np.linspace(0, self.model.params['dt']*z_latent.shape[0], z_latent.shape[0], endpoint=False)
26 |             sindy_model.fit(z_latent, t=time)
27 |             sindy_model.print()
28 |             print(sindy_model.coefficients().T)
29 | 
30 |             sindy_weights = sindy_model.coefficients().T
31 |             sindy_mask = np.abs(sindy_model.coefficients().T) > 1e-5  # mask on magnitude so negative coefficients are kept
32 |             layer_weights = [sindy_mask, sindy_weights]
33 | 
34 |             self.model.sindy.set_weights(layer_weights)
35 | 
36 | 
37 | 
38 | # This callback is used to update the mask, i.e. apply recursive feature elimination in Sindy at the beginning of each epoch
39 | class RfeUpdateCallback(tf.keras.callbacks.Callback):
40 |     def __init__(self, rfe_frequency=0):
41 |         super(RfeUpdateCallback, self).__init__()
42 |         self.rfe_frequency = rfe_frequency
43 | 
44 |     def on_epoch_end(self, epoch, logs=None):
45 |         if epoch % self.model.print_frequency == 0:
46 |             print('--- Sindy Coefficients ---')
47 |             print(self.model.sindy.coefficients)
48 |         if self.rfe_frequency > 0 and epoch % self.rfe_frequency == 0:
49 |             self.model.sindy.update_mask()
50 | 
51 |     def on_train_begin(self, logs=None):
52 |         print('--- Initial Sindy Coefficients ---')
53 |         print(self.model.sindy.coefficients)
54 | 
55 | 
56 | # Try to cast as a layer.Layer object
57 | # Add sindy_library_tf function to class
58 | class Sindy(layers.Layer):
59 |     def __init__(self, library_dim,
60 |                  state_dim,
61 |                  poly_order,
62 |                  model='lorenz',
63 |                  initializer='constant',
64 |                  actual_coefs=None,
65 |                  rfe_threshold=None,
66 |                  include_sine=False,
67 |                  exact_features=False,
68 |                  fix_coefs=False,
69 |                  sindy_pert=0.0,
70 |                  ode_net=False,
71 |                  ode_net_widths=[1.5, 2.0],
72 |                  **kwargs):
73 |         super(Sindy, self).__init__(**kwargs)
74 | 
75 |         self.library_dim = library_dim
76 |         self.state_dim = state_dim
77 |         self.poly_order = poly_order
78 |         self.include_sine = include_sine
79 |         self.rfe_threshold = rfe_threshold
80 |         self.exact_features = exact_features
81 |         self.actual_coefs = actual_coefs
82 |         self.fix_coefs = fix_coefs
83 |         self.sindy_pert = sindy_pert
84 |         self.model = model
85 | 
86 |         self.ode_net = ode_net
87 |         self.ode_net_widths = ode_net_widths
88 |         self.l2 = 1e-6
89 |         self.l1 = 0.0
90 | 
91 |         ## INITIALIZE COEFFICIENTS
92 |         if type(initializer) == np.ndarray:
93 |             self.coefficients_mask = tf.Variable(initial_value=np.abs(initializer)>1e-10, dtype=tf.float32)
94 |             self.coefficients = tf.Variable(initial_value=initializer, name='sindy_coeffs', dtype=tf.float32)
95 |         else:
96 |             if initializer == 'true':
97 |                 self.coefficients_mask = tf.Variable(initial_value=np.abs(actual_coefs)>1e-10, dtype=tf.float32)
98 |                 self.coefficients = tf.Variable(initial_value=actual_coefs + sindy_pert*(np.random.random(actual_coefs.shape)-0.5), name='sindy_coeffs', dtype=tf.float32)
99 |             else:
100 |                 if initializer == 'variance_scaling':
101 |                     init = tf.keras.initializers.VarianceScaling(scale=10, mode='fan_in', distribution='uniform')
102 |                 elif initializer == 'constant':
103 |                     init = tf.constant_initializer(0.0)
104 |                 elif type(initializer) != str:
105 |                     init = tf.constant_initializer(initializer)
106 |                 elif initializer == 'random_normal':
107 |                     init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=10.0)
108 |                 else:
109 |                     raise Exception("initializer string doesn't exist")
110 |                 self.coefficients_mask = tf.Variable(initial_value=np.ones((self.library_dim, self.state_dim)), dtype=tf.float32)
111 |                 self.coefficients = tf.Variable(init(shape=(self.library_dim, self.state_dim)), name='sindy_coeffs', dtype=tf.float32)
112 | 
113 |         if self.fix_coefs:
114 |             self.coefficients = tf.Variable(initial_value=actual_coefs, name='sindy_coeffs', trainable=False, dtype=tf.float32)
115 | 
116 |         ## ODE NET
117 |         if self.ode_net:
118 |             self.net_model = self.make_theta_network(self.library_dim, self.ode_net_widths)
119 | 
120 |     def make_theta_network(self, output_dim, widths):
121 |         out_activation = 'linear'
122 |         name = 'net_dictionary'
123 |         initializer = tf.keras.initializers.VarianceScaling(scale=2.0, mode="fan_avg", distribution="uniform")
124 |         model = tf.keras.Sequential()
125 |         for i, w in
enumerate(widths): 126 | model.add(tf.keras.layers.Dense(w, activation='elu', kernel_initializer=initializer, 127 | name=name+'_'+str(i), use_bias=True, activity_regularizer=regularizers.l1_l2(l1=self.l1, l2=self.l2))) 128 | model.add(tf.keras.layers.Dense(output_dim, activation=out_activation, 129 | kernel_initializer=initializer, name=name+'_'+'out', use_bias=True)) 130 | return model 131 | 132 | def call(self, z): 133 | dz_dt = tf.matmul(self.theta(z), self.coefficients) 134 | return dz_dt 135 | 136 | @tf.function 137 | def theta(self, z): 138 | if self.ode_net: 139 | return self.net_model(z) 140 | else: 141 | return self.sindy_library_tf(z, self.state_dim, self.poly_order, self.include_sine, self.exact_features, self.model) 142 | 143 | def update_mask(self): 144 | if self.rfe_threshold is not None: 145 | self.coefficients_mask.assign( tf.cast( tf.abs(self.coefficients) > self.rfe_threshold ,tf.float32) ) 146 | # self.coefficients.assign(tf.multiply(self.coefficients_mask, self.coefficients)) 147 | 148 | 149 | @tf.function 150 | def sindy_library_tf(self, z, latent_dim, poly_order, include_sine=False, exact_features=False, model='lorenz'): 151 | if exact_features: 152 | if model == 'lorenz': 153 | # Check size (is first dimension batch?) 154 | library=[] 155 | library.append(z[:,0]) 156 | library.append(z[:,1]) 157 | library.append(z[:,2]) 158 | library.append(tf.multiply(z[:,0], z[:,1])) 159 | library.append(tf.multiply(z[:,0], z[:,2])) 160 | elif model == 'predprey': 161 | library=[] 162 | library.append(z[:,0]) 163 | library.append(z[:,1]) 164 | library.append(tf.multiply(z[:,0], z[:,1])) 165 | elif model == 'rossler': 166 | library=[] 167 | raise Exception("not implemented") 168 | else: 169 | # Can make more compact 170 | library = [tf.ones(tf.shape(z)[0])] 171 | for i in range(latent_dim): 172 | library.append(z[:,i]) 173 | 174 | if poly_order > 1: 175 | for i in range(latent_dim): 176 | for j in range(i,latent_dim): 177 | library.append(tf.multiply(z[:,i], z[:,j])) 178 | 179 | if poly_order > 2: 180 | for i in range(latent_dim): 181 | for j in range(i,latent_dim): 182 | for k in range(j,latent_dim): 183 | library.append(z[:,i]*z[:,j]*z[:,k]) 184 | 185 | if poly_order > 3: 186 | for i in range(latent_dim): 187 | for j in range(i,latent_dim): 188 | for k in range(j,latent_dim): 189 | for p in range(k,latent_dim): 190 | library.append(z[:,i]*z[:,j]*z[:,k]*z[:,p]) 191 | 192 | if poly_order > 4: 193 | for i in range(latent_dim): 194 | for j in range(i,latent_dim): 195 | for k in range(j,latent_dim): 196 | for p in range(k,latent_dim): 197 | for q in range(p,latent_dim): 198 | library.append(z[:,i]*z[:,j]*z[:,k]*z[:,p]*z[:,q]) 199 | 200 | if include_sine: 201 | for i in range(latent_dim): 202 | library.append(tf.sin(z[:,i])) 203 | 204 | return tf.stack(library, axis=1) 205 | 206 | 207 | @tf.function 208 | def sindy_library_lorenz(z, latent_dim, poly_order, include_sine=False, exact_features=False, model='lorenz'): 209 | # Can make more compact 210 | library = [tf.ones(tf.shape(z)[0])] 211 | for i in range(latent_dim): 212 | library.append(z[:,i]) 213 | 214 | for i in range(latent_dim): 215 | for j in range(i,latent_dim): 216 | library.append(tf.multiply(z[:,i], z[:,j])) 217 | 218 | return tf.stack(library, axis=1) 219 | 220 | ####################################################### 221 | 222 | # Put in class (?) 
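# For reference: the column ordering produced by sindy_library_tf for
# latent_dim=3, poly_order=2 (library_size(3, 2) = 10 terms) is
#
#     index:  0  1  2  3  4   5   6   7   8   9
#     term :  1  x  y  z  xx  xy  xz  yy  yz  zz
#
# This is the ordering assumed by the hand-coded Xi matrices in
# dynamical_models.get_model: for Lorenz, for example, Xi[6,1] is the xz term
# of the y-equation and Xi[5,2] is the xy term of the z-equation.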
223 | total_met = tf.keras.metrics.Mean(name="total_loss") 224 | rec_met = tf.keras.metrics.Mean(name="rec_loss") 225 | sindy_z_met = tf.keras.metrics.Mean(name="sindy_z_loss") 226 | sindy_x_met = tf.keras.metrics.Mean(name="sindy_x_loss") 227 | integral_met = tf.keras.metrics.Mean(name="integral_loss") 228 | x0_met = tf.keras.metrics.Mean(name='x0_loss') 229 | l1_met = tf.keras.metrics.Mean(name='l1_loss') 230 | 231 | class Sindy_Autoencoder(tf.keras.Model): 232 | def __init__(self, params, **kwargs): 233 | super(Sindy_Autoencoder, self).__init__(**kwargs) 234 | self.params = params 235 | self.latent_dim = params['latent_dim'] 236 | self.input_dim = params['input_dim'] 237 | self.widths = params['widths'] 238 | self.activation = params['activation'] 239 | self.library_dim = params['library_dim'] 240 | self.poly_order = params['poly_order'] 241 | self.include_sine = params['include_sine'] 242 | self.initializer = params['coefficient_initialization'] # fix That's sindy's! 243 | self.epochs = params['max_epochs'] # fix That's sindy's! 244 | self.rfe_threshold = params['coefficient_threshold'] 245 | self.rfe_frequency = params['threshold_frequency'] 246 | self.print_frequency = params['print_frequency'] 247 | self.sindy_pert = params['sindy_pert'] 248 | self.fixed_coefficient_mask = None 249 | self.actual_coefs = params['actual_coefficients'] 250 | self.use_bias = params['use_bias'] 251 | self.l2 = params['loss_weight_layer_l2'] 252 | self.l1 = params['loss_weight_layer_l1'] 253 | self.sparse_weights = params['sparse_weighting'] 254 | self.trainable_auto = params['trainable_auto'] 255 | if params['sparse_weighting'] is not None: 256 | self.sparse_weights = tf.constant(value=params['sparse_weighting'], dtype=tf.float32) 257 | if params['fixed_coefficient_mask']: 258 | self.fixed_coefficient_mask = tf.constant(value=np.abs(self.actual_coefs)>1e-10, dtype=tf.float32) 259 | 260 | self.time = tf.constant(value=np.linspace(0.0, params['dt']*params['input_dim'], params['input_dim'], endpoint=False)) 261 | self.dt = tf.constant(value=params['dt'], dtype=tf.float32) 262 | 263 | self.encoder = self.make_network(self.input_dim, self.latent_dim, self.widths, name='encoder') 264 | self.decoder = self.make_network(self.latent_dim, self.input_dim, self.widths[::-1], name='decoder') 265 | if not self.trainable_auto: 266 | self.encoder._trainable = False 267 | self.decoder._trainable = False 268 | self.sindy = Sindy(self.library_dim, self.latent_dim, self.poly_order, model=params['model'], initializer=self.initializer, actual_coefs=self.actual_coefs, rfe_threshold=self.rfe_threshold, include_sine=self.include_sine, exact_features=params['exact_features'], fix_coefs=params['fix_coefs'], sindy_pert=self.sindy_pert, ode_net=params['ode_net'], ode_net_widths=params['ode_net_widths']) 269 | 270 | 271 | def make_network(self, input_dim, output_dim, widths, name): 272 | out_activation = 'linear' 273 | initializer= tf.keras.initializers.VarianceScaling(scale=2.0, mode="fan_avg", distribution="uniform") 274 | model = tf.keras.Sequential() 275 | for i, w in enumerate(widths): 276 | if i ==0: 277 | use_bias = self.use_bias 278 | else: 279 | use_bias = True 280 | model.add(tf.keras.layers.Dense(w, activation=self.activation, kernel_initializer=initializer, name=name+'_'+str(i), use_bias=use_bias, kernel_regularizer=regularizers.l1_l2(l1=self.l1, l2=self.l2))) 281 | model.add(tf.keras.layers.Dense(output_dim, activation=out_activation, kernel_initializer=initializer, name=name+'_'+'out', use_bias=self.use_bias)) 282 | 
return model 283 | 284 | def call(self, datain): 285 | x = datain[0] 286 | return self.decoder(self.encoder(x)) 287 | 288 | 289 | @tf.function 290 | def train_step(self, data): 291 | inputs, outputs = data 292 | x = inputs[0] 293 | dx_dt = tf.expand_dims(inputs[1], 2) 294 | x_out = outputs[0] 295 | dx_dt_out = tf.expand_dims(outputs[1], 2) 296 | 297 | with tf.GradientTape() as tape: 298 | loss, losses = self.get_loss(x, dx_dt, x_out, dx_dt_out) 299 | 300 | trainable_vars = self.trainable_variables 301 | gradients = tape.gradient(loss, trainable_vars) 302 | self.optimizer.apply_gradients(zip(gradients, trainable_vars)) 303 | 304 | ## Keep track and update losses 305 | self.update_losses(loss, losses) 306 | return {m.name: m.result() for m in self.metrics} 307 | 308 | 309 | @tf.function 310 | def test_step(self, data): 311 | inputs, outputs = data 312 | x = inputs[0] 313 | dx_dt = tf.expand_dims(inputs[1], 2) 314 | x_out = outputs[0] 315 | dx_dt_out = tf.expand_dims(outputs[1], 2) 316 | 317 | loss, losses = self.get_loss(x, dx_dt, x_out, dx_dt_out) 318 | 319 | ## Keep track and update losses 320 | self.update_losses(loss, losses) 321 | return {m.name: m.result() for m in self.metrics} 322 | 323 | 324 | @tf.function 325 | def get_loss(self, x, dx_dt, x_out, dx_dt_out): 326 | losses = {} 327 | loss = 0 328 | if self.params['loss_weight_sindy_z'] > 0.0: 329 | with tf.GradientTape() as t1: 330 | t1.watch(x) 331 | z = self.encoder(x, training=self.trainable_auto) 332 | dz_dx = t1.batch_jacobian(z, x) 333 | dz_dt = tf.matmul( dz_dx, dx_dt ) 334 | else: 335 | z = self.encoder(x, training=self.trainable_auto) 336 | 337 | if self.params['loss_weight_sindy_x'] > 0.0: 338 | with tf.GradientTape() as t2: 339 | t2.watch(z) 340 | xh = self.decoder(z, training=self.trainable_auto) 341 | dxh_dz = t2.batch_jacobian(xh, z) 342 | dz_dt_sindy = tf.expand_dims(self.sindy(z), 2) 343 | dxh_dt = tf.matmul( dxh_dz, dz_dt_sindy ) 344 | else: 345 | xh = self.decoder(z, training=self.trainable_auto) 346 | 347 | # SINDy consistency loss 348 | if self.params['loss_weight_integral'] > 0.0: 349 | sol = z 350 | loss_int = tf.square(sol[:, 0] - x[:, 0]) 351 | total_steps = len(self.time) 352 | for i in range(1, len(self.time)): 353 | k1 = self.sindy(sol) 354 | k2 = self.sindy(sol + self.dt/2 * k1) 355 | k3 = self.sindy(sol + self.dt/2 * k2) 356 | k4 = self.sindy(sol + self.dt * k3) 357 | sol = sol + 1/6 * self.dt * (k1 + 2*k2 + 2*k3 + k4) 358 | # To avoid nans - tf.where(tf.is_nan()) not compatible with gradients 359 | sol = tf.where(tf.abs(sol) > 500.00, 500.0, sol) 360 | loss_int += tf.square(sol[:, 0] - x[:, i]) 361 | loss += self.params['loss_weight_integral'] * loss_int / total_steps 362 | losses['integral'] = loss_int 363 | 364 | 365 | if self.params['loss_weight_sindy_x'] > 0.0: 366 | loss_dx = tf.reduce_mean( tf.square(dxh_dt - dx_dt_out) ) 367 | loss += self.params['loss_weight_sindy_x'] * loss_dx 368 | losses['sindy_x'] = loss_dx 369 | 370 | if self.params['loss_weight_sindy_z'] > 0.0: 371 | loss_dz = tf.reduce_mean( tf.square(dz_dt - dz_dt_sindy) ) 372 | loss += self.params['loss_weight_sindy_z'] * loss_dz 373 | losses['sindy_z'] = loss_dz 374 | 375 | if self.params['loss_weight_x0'] > 0.0: 376 | loss_x0 = tf.reduce_mean( tf.square(z[:, 0] - x[:, 0]) ) 377 | loss += self.params['loss_weight_x0'] * loss_x0 378 | losses['x0'] = loss_x0 379 | 380 | loss_rec = tf.reduce_mean( tf.square(xh - x_out) ) 381 | if self.sparse_weights is not None: 382 | loss_l1 = tf.reduce_mean(tf.abs(tf.multiply(self.sparse_weights, 
self.sindy.coefficients) ) ) 383 | else: 384 | loss_l1 = tf.reduce_mean(tf.abs(self.sindy.coefficients) ) 385 | 386 | loss += self.params['loss_weight_rec'] * loss_rec \ 387 | + self.params['loss_weight_sindy_regularization'] * loss_l1 388 | 389 | if self.fixed_coefficient_mask is not None: 390 | self.sindy.coefficients.assign( tf.multiply(self.sindy.coefficients, self.fixed_coefficient_mask) ) 391 | 392 | losses['rec'] = loss_rec 393 | losses['l1'] = loss_l1 394 | 395 | return loss, losses 396 | 397 | @tf.function 398 | def update_losses(self, loss, losses): 399 | total_met.update_state(loss) 400 | rec_met.update_state(losses['rec']) 401 | if self.params['loss_weight_sindy_z'] > 0: 402 | sindy_z_met.update_state(losses['sindy_z']) 403 | if self.params['loss_weight_sindy_x'] > 0: 404 | sindy_x_met.update_state(losses['sindy_x']) 405 | if self.params['loss_weight_integral'] > 0: 406 | integral_met.update_state(losses['integral']) 407 | if self.params['loss_weight_x0'] > 0: 408 | x0_met.update_state(losses['x0']) 409 | if self.params['loss_weight_sindy_regularization'] > 0: 410 | l1_met.update_state(losses['l1']) 411 | 412 | # Check if needed 413 | @property 414 | def metrics(self): 415 | m = [total_met, rec_met] 416 | if self.params['loss_weight_sindy_z'] > 0.0: 417 | m.append(sindy_z_met) 418 | if self.params['loss_weight_sindy_x'] > 0.0: 419 | m.append(sindy_x_met) 420 | if self.params['loss_weight_integral'] > 0.0: 421 | m.append(integral_met) 422 | if self.params['loss_weight_x0'] > 0.0: 423 | m.append(x0_met) 424 | if self.params['loss_weight_sindy_regularization'] > 0.0: 425 | m.append(l1_met) 426 | return m 427 | 428 | 429 | ########################################################################## 430 | ########################################################################## 431 | 432 | 433 | 434 | class PreSVD_Sindy_Autoencoder(tf.keras.Model): 435 | def __init__(self, params, **kwargs): 436 | super(PreSVD_Sindy_Autoencoder, self).__init__(**kwargs) 437 | self.params = params 438 | self.latent_dim = params['latent_dim'] 439 | self.input_dim = params['input_dim'] 440 | self.svd_dim = params['svd_dim'] 441 | self.widths = params['widths'] 442 | self.activation = params['activation'] 443 | self.library_dim = params['library_dim'] 444 | self.poly_order = params['poly_order'] 445 | self.include_sine = params['include_sine'] 446 | self.initializer = params['coefficient_initialization'] # fix That's sindy's! 447 | self.epochs = params['max_epochs'] # fix That's sindy's! 
448 |         self.rfe_threshold = params['coefficient_threshold']
449 |         self.rfe_frequency = params['threshold_frequency']
450 |         self.print_frequency = params['print_frequency']
451 |         self.sindy_pert = params['sindy_pert']
452 |         self.fixed_coefficient_mask = None
453 |         self.actual_coefs = params['actual_coefficients']
454 |         self.use_bias = params['use_bias']
455 |         self.l2 = params['loss_weight_layer_l2']
456 |         self.l1 = params['loss_weight_layer_l1']
457 |         if params['fixed_coefficient_mask']:
458 |             self.fixed_coefficient_mask = tf.constant(value=np.abs(self.actual_coefs)>1e-10, dtype=tf.float32)
459 | 
460 |         self.time = tf.constant(value=np.linspace(0.0, params['dt']*params['svd_dim'], params['svd_dim'], endpoint=False))
461 |         self.dt = tf.constant(value=params['dt'], dtype=tf.float32)
462 | 
463 |         self.encoder = self.make_network(self.svd_dim, self.latent_dim, self.widths, name='encoder')
464 |         self.decoder = self.make_network(self.latent_dim, self.svd_dim, self.widths[::-1], name='decoder')
465 |         self.sindy = Sindy(self.library_dim, self.latent_dim, self.poly_order, model=params['model'], initializer=self.initializer,
466 |                            actual_coefs=self.actual_coefs, rfe_threshold=self.rfe_threshold, include_sine=self.include_sine,
467 |                            exact_features=params['exact_features'], fix_coefs=params['fix_coefs'], sindy_pert=self.sindy_pert)
468 | 
469 |     def make_network(self, input_dim, output_dim, widths, name):
470 |         out_activation = 'linear'
471 |         initializer = tf.keras.initializers.VarianceScaling(scale=2.0, mode="fan_avg", distribution="uniform")
472 |         model = tf.keras.Sequential()
473 |         for i, w in enumerate(widths):
474 |             if i == 0:
475 |                 use_bias = self.use_bias
476 |             else:
477 |                 use_bias = True
478 |             model.add(tf.keras.layers.Dense(w, activation=self.activation, kernel_initializer=initializer, name=name+'_'+str(i), use_bias=use_bias, kernel_regularizer=regularizers.l1_l2(l1=self.l1, l2=self.l2)))
479 |         model.add(tf.keras.layers.Dense(output_dim, activation=out_activation, kernel_initializer=initializer, name=name+'_'+'out', use_bias=self.use_bias))
480 |         return model
481 | 
482 |     def call(self, datain):
483 |         x = datain[1]
484 |         return self.decoder(self.encoder(x))
485 | 
486 |     def compile(self,
487 |                 optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
488 |                 loss=tf.keras.losses.BinaryCrossentropy(),
489 |                 sindy_optimizer=None,
490 |                 **kwargs):
491 |         super(PreSVD_Sindy_Autoencoder, self).compile(optimizer=optimizer, loss=loss, **kwargs)
492 |         if sindy_optimizer is None:
493 |             self.sindy_optimizer = tf.keras.optimizers.get(optimizer)
494 |         else:
495 |             self.sindy_optimizer = tf.keras.optimizers.get(sindy_optimizer)
496 | 
497 |     @tf.function
498 |     def train_step(self, data):
499 |         inputs, outputs = data
500 |         x = inputs[0]
501 |         v = inputs[1]
502 |         dv_dt = tf.expand_dims(inputs[2], 2)
503 |         x_out = outputs[0]
504 |         v_out = outputs[1]
505 |         dv_dt_out = tf.expand_dims(outputs[2], 2)
506 | 
507 |         # Try data in tape and outside tape
508 |         with tf.GradientTape() as tape:
509 |             loss, losses = self.get_loss(x, v, dv_dt, x_out, v_out, dv_dt_out)
510 | 
511 |         # split trainable variables for autoencoder and dynamics so that separate optimizers can be used
512 |         trainable_vars = self.encoder.trainable_weights + self.decoder.trainable_weights + \
513 |                          self.sindy.trainable_weights
514 |         n_sindy_weights = len(self.sindy.trainable_weights)
515 |         grads = tape.gradient(loss, trainable_vars)
516 |         grads_autoencoder = grads[:-n_sindy_weights]
517 |         grads_sindy = grads[-n_sindy_weights:]
518 | 
519 |         self.optimizer.apply_gradients(zip(grads_autoencoder, trainable_vars[:-n_sindy_weights]))
520 |         self.sindy_optimizer.apply_gradients(zip(grads_sindy, trainable_vars[-n_sindy_weights:]))
521 | 
522 |         ## Keep track and update losses
523 |         self.update_losses(loss, losses)
524 |         return {m.name: m.result() for m in self.metrics}
525 | 
526 | 
527 |     @tf.function
528 |     def test_step(self, data):
529 |         inputs, outputs = data
530 |         x = inputs[0]
531 |         v = inputs[1]
532 |         dv_dt = tf.expand_dims(inputs[2], 2)
533 |         x_out = outputs[0]
534 |         v_out = outputs[1]
535 |         dv_dt_out = tf.expand_dims(outputs[2], 2)
536 | 
537 |         loss, losses = self.get_loss(x, v, dv_dt, x_out, v_out, dv_dt_out)
538 | 
539 |         ## Keep track and update losses
540 |         self.update_losses(loss, losses)
541 |         return {m.name: m.result() for m in self.metrics}
542 | 
543 | 
544 |     @tf.function
545 |     def get_loss(self, x, v, dv_dt, x_out, v_out, dv_dt_out):
546 |         losses = {}
547 |         loss = 0
548 |         if self.params['loss_weight_sindy_z'] > 0.0:
549 |             with tf.GradientTape() as t1:
550 |                 t1.watch(v)
551 |                 z = self.encoder(v, training=True)
552 |             dz_dv = t1.batch_jacobian(z, v)
553 |             dz_dt = tf.matmul( dz_dv, dv_dt )
554 |         else:
555 |             z = self.encoder(v, training=True)
556 | 
557 |         if self.params['loss_weight_sindy_x'] > 0.0:
558 |             with tf.GradientTape() as t2:
559 |                 t2.watch(z)
560 |                 vh = self.decoder(z, training=True)
561 |             dvh_dz = t2.batch_jacobian(vh, z)
562 |             dz_dt_sindy = tf.expand_dims(self.sindy(z), 2)
563 |             dvh_dt = tf.matmul( dvh_dz, dz_dt_sindy )
564 |         else:
565 |             vh = self.decoder(z, training=True)
566 | 
567 |         # SINDy consistency loss
568 |         if self.params['loss_weight_integral'] > 0.0:
569 |             sol = z
570 |             loss_int = tf.square(sol[:, 0] - x[:, 0])
571 |             total_steps = len(self.time)
572 |             for i in range(1, len(self.time)):
573 |                 k1 = self.sindy(sol)
574 |                 k2 = self.sindy(sol + self.dt/2 * k1)
575 |                 k3 = self.sindy(sol + self.dt/2 * k2)
576 |                 k4 = self.sindy(sol + self.dt * k3)
577 |                 sol = sol + 1/6 * self.dt * (k1 + 2*k2 + 2*k3 + k4)
578 |                 # To avoid NaNs - explicitly using tf.where(tf.is_nan()) is not compatible with gradients
579 |                 sol = tf.where(tf.abs(sol) > 500.0, 500.0, sol)
580 |                 loss_int += tf.square(sol[:, 0] - x[:, i])
581 |             loss += self.params['loss_weight_integral'] * loss_int / total_steps
582 |             losses['integral'] = loss_int
583 | 
584 | 
585 |         if self.params['loss_weight_sindy_x'] > 0.0:
586 |             loss_dx = tf.reduce_mean( tf.square(dvh_dt - dv_dt_out) )
587 |             loss += self.params['loss_weight_sindy_x'] * loss_dx
588 |             losses['sindy_x'] = loss_dx
589 | 
590 |         if self.params['loss_weight_sindy_z'] > 0.0:
591 |             loss_dz = tf.reduce_mean( tf.square(dz_dt - dz_dt_sindy) )
592 |             loss += self.params['loss_weight_sindy_z'] * loss_dz
593 |             losses['sindy_z'] = loss_dz
594 | 
595 |         if self.params['loss_weight_x0'] > 0.0:
596 |             loss_x0 = tf.reduce_mean( tf.square(z[:, 0] - x[:, 0]) )
597 |             loss += self.params['loss_weight_x0'] * loss_x0
598 |             losses['x0'] = loss_x0
599 | 
600 |         loss_rec = tf.reduce_mean( tf.square(vh - v_out) )
601 |         loss_l1 = tf.reduce_mean( tf.abs(self.sindy.coefficients) )
602 | 
603 |         loss += self.params['loss_weight_rec'] * loss_rec \
604 |                 + self.params['loss_weight_sindy_regularization'] * loss_l1
605 | 
606 |         if self.fixed_coefficient_mask is not None:
607 |             self.sindy.coefficients.assign( tf.multiply(self.sindy.coefficients, self.fixed_coefficient_mask) )
608 | 
609 |         losses['rec'] = loss_rec
610 |         losses['l1'] = loss_l1
611 | 
612 |         return loss, losses
613 | 
614 |     @tf.function
615 |     def update_losses(self, loss, losses):
616 | 
total_met.update_state(loss) 617 | rec_met.update_state(losses['rec']) 618 | if self.params['loss_weight_sindy_z'] > 0: 619 | sindy_z_met.update_state(losses['sindy_z']) 620 | if self.params['loss_weight_sindy_x'] > 0: 621 | sindy_x_met.update_state(losses['sindy_x']) 622 | if self.params['loss_weight_integral'] > 0: 623 | integral_met.update_state(losses['integral']) 624 | if self.params['loss_weight_x0'] > 0: 625 | x0_met.update_state(losses['x0']) 626 | if self.params['loss_weight_sindy_regularization'] > 0: 627 | l1_met.update_state(losses['l1']) 628 | 629 | # Check if needed 630 | @property 631 | def metrics(self): 632 | m = [total_met, rec_met] 633 | if self.params['loss_weight_sindy_z'] > 0.0: 634 | m.append(sindy_z_met) 635 | if self.params['loss_weight_sindy_x'] > 0.0: 636 | m.append(sindy_x_met) 637 | if self.params['loss_weight_integral'] > 0.0: 638 | m.append(integral_met) 639 | if self.params['loss_weight_x0'] > 0.0: 640 | m.append(x0_met) 641 | if self.params['loss_weight_sindy_regularization'] > 0.0: 642 | m.append(l1_met) 643 | return m 644 | 645 | -------------------------------------------------------------------------------- /aesindy/sindy_utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from scipy.special import binom 3 | from scipy.integrate import odeint 4 | import pdb 5 | 6 | 7 | def library_size(n, poly_order, use_sine=False, include_constant=True): 8 | l = 0 9 | for k in range(poly_order+1): 10 | l += int(binom(n+k-1,k)) 11 | if use_sine: 12 | l += n 13 | if not include_constant: 14 | l -= 1 15 | return l 16 | 17 | 18 | def sindy_library(X, poly_order, include_sine=False, include_names=False, exact_features=False, include_sparse_weighting=False): 19 | # Upgrade to combinations 20 | symbs = ['x', 'y', 'z', '4', '5', '6', '7'] 21 | 22 | # GENERALIZE 23 | if exact_features: 24 | # Check size (is first dimension batch?) 
25 | library = np.ones((X.shape[0],5)) 26 | library[:, 0] = X[:,0] 27 | library[:, 1] = X[:,1] 28 | library[:, 2] = X[:,2] 29 | library[:, 3] = X[:,0]* X[:,1] 30 | library[:, 4] = X[:,0]* X[:,2] 31 | names = ['x', 'y', 'z', 'xy', 'xz'] 32 | else: 33 | 34 | m, n = X.shape 35 | l = library_size(n, poly_order, include_sine, True) 36 | library = np.ones((m, l)) 37 | sparse_weights = np.ones((l, n)) 38 | 39 | index = 1 40 | names = ['1'] 41 | 42 | for i in range(n): 43 | library[:,index] = X[:,i] 44 | sparse_weights[index, :] *= 1 45 | names.append(symbs[i]) 46 | index += 1 47 | 48 | if poly_order > 1: 49 | for i in range(n): 50 | for j in range(i,n): 51 | library[:,index] = X[:,i]*X[:,j] 52 | sparse_weights[index, :] *= 2 53 | names.append(symbs[i]+symbs[j]) 54 | index += 1 55 | 56 | if poly_order > 2: 57 | for i in range(n): 58 | for j in range(i,n): 59 | for k in range(j,n): 60 | library[:,index] = X[:,i]*X[:,j]*X[:,k] 61 | sparse_weights[index, :] *= 3 62 | names.append(symbs[i]+symbs[j]+symbs[k]) 63 | index += 1 64 | 65 | if poly_order > 3: 66 | for i in range(n): 67 | for j in range(i,n): 68 | for k in range(j,n): 69 | for q in range(k,n): 70 | library[:,index] = X[:,i]*X[:,j]*X[:,k]*X[:,q] 71 | sparse_weights[index, :] *= 4 72 | names.append(symbs[i]+symbs[j]+symbs[k]+symbs[q]) 73 | index += 1 74 | 75 | if poly_order > 4: 76 | for i in range(n): 77 | for j in range(i,n): 78 | for k in range(j,n): 79 | for q in range(k,n): 80 | for r in range(q,n): 81 | library[:,index] = X[:,i]*X[:,j]*X[:,k]*X[:,q]*X[:,r] 82 | sparse_weights[index, :] *= 5 83 | names.append(symbs[i]+symbs[j]+symbs[k]+symbs[q]+symbs[r]) 84 | index += 1 85 | 86 | if include_sine: 87 | for i in range(n): 88 | library[:,index] = np.sin(X[:,i]) 89 | names.append('sin('+symbs[i]+')') 90 | index += 1 91 | 92 | 93 | return_list = [library] 94 | if include_names: 95 | return_list.append(names) 96 | if include_sparse_weighting: 97 | return_list.append(sparse_weights) 98 | return return_list 99 | 100 | def sindy_library_names(latent_dim, poly_order, include_sine=False, exact_features=False): 101 | # Upgrade to combinations 102 | symbs = ['x', 'y', 'z', '4', '5', '6', '7'] 103 | 104 | if exact_features: 105 | return ['x', 'y', 'z', 'xy', 'xz'] 106 | 107 | n = latent_dim 108 | index = 1 109 | names = ['1'] 110 | 111 | for i in range(n): 112 | names.append(symbs[i]) 113 | index += 1 114 | 115 | if poly_order > 1: 116 | for i in range(n): 117 | for j in range(i,n): 118 | names.append(symbs[i]+symbs[j]) 119 | index += 1 120 | 121 | if poly_order > 2: 122 | for i in range(n): 123 | for j in range(i,n): 124 | for k in range(j,n): 125 | names.append(symbs[i]+symbs[j]+symbs[k]) 126 | index += 1 127 | 128 | if poly_order > 3: 129 | for i in range(n): 130 | for j in range(i,n): 131 | for k in range(j,n): 132 | for q in range(k,n): 133 | names.append(symbs[i]+symbs[j]+symbs[k]+symbs[q]) 134 | index += 1 135 | 136 | if poly_order > 4: 137 | for i in range(n): 138 | for j in range(i,n): 139 | for k in range(j,n): 140 | for q in range(k,n): 141 | for r in range(q,n): 142 | names.append(symbs[i]+symbs[j]+symbs[k]+symbs[q]+symbs[r]) 143 | index += 1 144 | 145 | if include_sine: 146 | for i in range(n): 147 | names.append('sin('+symbs[i]+')') 148 | index += 1 149 | 150 | return names 151 | 152 | 153 | def sindy_library_order2(X, dX, poly_order, include_sine=False): 154 | m,n = X.shape 155 | l = library_size(2*n, poly_order, include_sine, True) 156 | library = np.ones((m,l)) 157 | index = 1 158 | 159 | X_combined = np.concatenate((X, dX), axis=1) 160 
|
161 |     for i in range(2*n):
162 |         library[:,index] = X_combined[:,i]
163 |         index += 1
164 | 
165 |     if poly_order > 1:
166 |         for i in range(2*n):
167 |             for j in range(i,2*n):
168 |                 library[:,index] = X_combined[:,i]*X_combined[:,j]
169 |                 index += 1
170 | 
171 |     if poly_order > 2:
172 |         for i in range(2*n):
173 |             for j in range(i,2*n):
174 |                 for k in range(j,2*n):
175 |                     library[:,index] = X_combined[:,i]*X_combined[:,j]*X_combined[:,k]
176 |                     index += 1
177 | 
178 |     if poly_order > 3:
179 |         for i in range(2*n):
180 |             for j in range(i,2*n):
181 |                 for k in range(j,2*n):
182 |                     for q in range(k,2*n):
183 |                         library[:,index] = X_combined[:,i]*X_combined[:,j]*X_combined[:,k]*X_combined[:,q]
184 |                         index += 1
185 | 
186 |     if poly_order > 4:
187 |         for i in range(2*n):
188 |             for j in range(i,2*n):
189 |                 for k in range(j,2*n):
190 |                     for q in range(k,2*n):
191 |                         for r in range(q,2*n):
192 |                             library[:,index] = X_combined[:,i]*X_combined[:,j]*X_combined[:,k]*X_combined[:,q]*X_combined[:,r]
193 |                             index += 1
194 | 
195 |     if include_sine:
196 |         for i in range(2*n):
197 |             library[:,index] = np.sin(X_combined[:,i])
198 |             index += 1
199 |     return library
200 | 
201 | def sindy_fit(RHS, LHS, coefficient_threshold):
202 |     m,n = LHS.shape
203 |     Xi = np.linalg.lstsq(RHS,LHS, rcond=None)[0]
204 | 
205 |     for k in range(10):
206 |         small_inds = (np.abs(Xi) < coefficient_threshold)
207 |         Xi[small_inds] = 0
208 |         for i in range(n):
209 |             big_inds = ~small_inds[:,i]
210 |             if np.where(big_inds)[0].size == 0:
211 |                 continue
212 |             Xi[big_inds,i] = np.linalg.lstsq(RHS[:,big_inds], LHS[:,i], rcond=None)[0]
213 |     return Xi
214 | 
215 | 
216 | def sindy_simulate(x0, t, Xi, poly_order, include_sine=False, exact_features=False):
217 |     m = t.size
218 |     n = x0.size
219 |     f = lambda x,t : np.dot(sindy_library(np.array(x).reshape((1,n)), poly_order, include_sine, include_names=False, exact_features=exact_features)[0], Xi).reshape((n,))  # sindy_library returns a list; the library matrix is its first element
220 |     x = odeint(f, x0, t)
221 |     return x
222 | 
223 | 
224 | def sindy_simulate_order2(x0, dx0, t, Xi, poly_order, include_sine):
225 |     m = t.size
226 |     n = 2*x0.size
227 |     l = Xi.shape[0]
228 | 
229 |     Xi_order1 = np.zeros((l,n))
230 |     for i in range(n//2):
231 |         Xi_order1[1 + n//2 + i, i] = 1.  # d(x_i)/dt = dx_i, the linear term for the second half of the state
232 |         Xi_order1[:,i+n//2] = Xi[:,i]
233 | 
234 |     x = sindy_simulate(np.concatenate((x0,dx0)), t, Xi_order1, poly_order, include_sine)
235 |     return x
236 | 
--------------------------------------------------------------------------------
/aesindy/solvers.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from sklearn.model_selection import train_test_split
3 | from scipy.integrate import odeint
4 | from scipy import interpolate
5 | from scipy.signal import savgol_filter
6 | from .dynamical_models import get_model
7 | from .helper_functions import get_hankel
8 | from tqdm import tqdm
9 | import pdb
10 | 
11 | 
12 | 
13 | class SynthData:
14 |     def __init__(self,
15 |                  model='lorenz',
16 |                  args=None,
17 |                  noise=0.0,
18 |                  input_dim=128,
19 |                  normalization=None):
20 | 
21 |         self.model = model
22 |         self.args = args
23 |         self.noise = noise
24 |         self.input_dim = input_dim
25 |         self.normalization = normalization
26 | 
27 |     def solve_ivp(self, f, z0, time):
28 |         """ Scipy ODE solver, returns z and dz/dt """
29 |         z = odeint(f, z0, time)
30 |         dz = np.array([f(z[i], time[i]) for i in range(len(time))])
31 |         return z, dz
32 | 
33 |     def run_sim(self, n_ics, tend, dt, z0_stat=None):
34 |         """ Runs solver over multiple initial conditions and builds Hankel matrix """
35 | 
36 |         f, Xi, model_dim, z0_mean_sug, z0_std_sug = get_model(self.model, self.args, self.normalization)
37 |         self.normalization = self.normalization if self.normalization is not None else np.ones((model_dim,))
38 |         if z0_stat is None:
39 |             z0_mean = z0_mean_sug
40 |             z0_std = z0_std_sug
41 |         else:
42 |             z0_mean, z0_std = z0_stat
43 | 
44 |         time = np.arange(0, tend, dt)
45 |         z0_mean = np.array(z0_mean)
46 |         z0_std = np.array(z0_std)
47 |         z0 = z0_std*(np.random.rand(n_ics, model_dim)-.5) + z0_mean
48 | 
49 |         delays = len(time) - self.input_dim
50 |         z_full, dz_full, H, dH = [], [], [], []
51 |         print("generating solutions..")
52 |         for i in tqdm(range(n_ics)):
53 |             z, dz = self.solve_ivp(f, z0[i, :], time)
54 |             z *= self.normalization
55 |             dz *= self.normalization
56 | 
57 |             # Build true solution (z) and hankel matrices
58 |             z_full.append( z[:-self.input_dim, :] )
59 |             dz_full.append( dz[:-self.input_dim, :] )
60 |             x = z[:, 0] + self.noise * np.random.randn(len(time),) # Assumes first dim measurement
61 |             dx = dz[:, 0] + self.noise * np.random.randn(len(time),) # Assumes first dim measurement
62 |             H.append( get_hankel(x, self.input_dim, delays) )
63 |             dH.append( get_hankel(dx, self.input_dim, delays) )
64 | 
65 |         self.z = np.concatenate(z_full, axis=0)
66 |         self.dz = np.concatenate(dz_full, axis=0)
67 |         self.x = np.concatenate(H, axis=1)
68 |         self.dx = np.concatenate(dH, axis=1)
69 |         self.t = time
70 |         self.sindy_coefficients = Xi.astype(np.float32)
71 | 
72 | 
73 | 
74 | 
75 | 
76 | class RealData:
77 |     def __init__(self,
78 |                  input_dim=128,
79 |                  interpolate=False,
80 |                  interp_dt=0.01,
81 |                  savgol_interp_coefs=[21, 3],
82 |                  interp_kind='cubic'):
83 | 
84 |         self.input_dim = input_dim
85 |         self.interpolate = interpolate
86 |         self.interp_dt = interp_dt
87 |         self.savgol_interp_coefs = savgol_interp_coefs
88 |         self.interp_kind = interp_kind
89 | 
90 |     def build_solution(self, data):
91 |         n_realizations = len(data['x'])
92 |         dt = data['dt']
93 |         if 'time' in data.keys():
94 |             times = data['time']
95 |         elif 'dt' in data.keys():
96 |             times = []
97 |             for xr in data['x']:
98 |                 times.append(np.linspace(0, dt*len(xr), len(xr), endpoint=False))
99 | 
100 |         x = data['x']
101 |         if 'dx' in data.keys():
102 |             dx = data['dx']
103 | 
else: 104 | dx = [np.gradient(xr, dt) for xr in x] 105 | 106 | new_times = [] 107 | if self.interpolate: 108 | new_dt = self.interp_dt # Include with inputs 109 | print('old dt = ', dt) 110 | print('new dt = ', new_dt) 111 | 112 | # Smoothing and interpolation 113 | for i in range(n_realizations): 114 | a, b = self.savgol_interp_coefs 115 | x[i] = savgol_filter(x[i], a, b) 116 | if 'dx' in data.keys(): 117 | dx[i] = savgol_filter(dx[i], a, b) 118 | 119 | t = np.arange(times[i][0], times[i][-2], new_dt) 120 | f = interpolate.interp1d(times[i], x[i], kind=self.interp_kind) 121 | x[i] = f(t) 122 | df = interpolate.interp1d(times[i], dx[i], kind=self.interp_kind) 123 | dx[i] = df(t) 124 | 125 | times[i] = t 126 | # new_times = np.array(new_times) 127 | 128 | n = self.input_dim 129 | n_delays = n 130 | xic = [] 131 | dxic = [] 132 | for j, xr in enumerate(x): 133 | n_steps = len(xr) - self.input_dim 134 | xj = np.zeros((n_steps, n_delays)) 135 | dxj = np.zeros((n_steps, n_delays)) 136 | for k in range(n_steps): 137 | xj[k, :] = xr[k:n_delays+k] 138 | dxj[k, :] = dx[j][k:n_delays+k] 139 | xic.append(xj) 140 | dxic.append(dxj) 141 | H = np.vstack(xic) 142 | dH = np.vstack(dxic) 143 | 144 | self.t = np.hstack(times) 145 | self.x = H.T 146 | self.dx = dH.T 147 | self.z = np.hstack(x) 148 | self.dz = np.hstack(dx) 149 | self.sindy_coefficients = None # unused 150 | 151 | # # Align times 152 | # for i in range(1, n_realizations): 153 | # if times[i] - times[i-1] >= dt*2: 154 | # new_time[i] = new_time[i-1] + dt 155 | -------------------------------------------------------------------------------- /aesindy/training.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | import time 4 | import datetime 5 | import numpy as np 6 | import pandas as pd 7 | import pickle5 as pickle 8 | from sklearn.model_selection import train_test_split 9 | from tensorflow.keras import layers 10 | import tensorflow as tf 11 | import pdb 12 | from sklearn.preprocessing import StandardScaler 13 | from .sindy_utils import library_size, sindy_library 14 | from .net_config import Sindy_Autoencoder, PreSVD_Sindy_Autoencoder, RfeUpdateCallback, SindyCall 15 | 16 | 17 | class TrainModel: 18 | def __init__(self, data, params): 19 | self.data = data 20 | self.params = self.fix_params(params) 21 | self.model = self.get_model() 22 | self.savename = self.get_name() 23 | self.history = None 24 | 25 | def get_name(self, include_date=True): 26 | pre = 'results' 27 | post = self.params['model']+'_'+self.params['case'] 28 | if include_date: 29 | name = pre + '_' + datetime.datetime.now().strftime("%Y%m%d%H%M") + '_' + post 30 | else: 31 | name = pre + '_' + post 32 | return name 33 | 34 | def fix_params(self, params): 35 | input_dim = params['input_dim'] 36 | if params['svd_dim'] is not None: 37 | print(params['svd_dim']) 38 | print('Running SVD decomposition...') 39 | input_dim = params['svd_dim'] 40 | reduced_dim = int( params['svd_dim'] ) 41 | U, s, VT = np.linalg.svd(self.data.x, full_matrices=False) 42 | v = np.matmul(VT[:reduced_dim, :].T, np.diag(s[:reduced_dim])) 43 | if params['scale'] == True: 44 | scaler = StandardScaler() 45 | v = scaler.fit_transform(v) 46 | self.data.xorig = self.data.x 47 | self.data.x = v 48 | 49 | # Assumes 1 IC 50 | self.data.dx = np.gradient(v, params['dt'], axis=0) 51 | print('SVD Done!') 52 | 53 | params['widths'] = [int(i*input_dim) for i in params['widths_ratios']] 54 | 55 | ## Constraining features according to model/case 56 | if 
params['exact_features']: 57 | if params['model'] == 'lorenz': 58 | params['library_dim'] = 5 59 | self.data.sindy_coefficients = self.data.sindy_coefficients[np.array([1, 2, 3, 5, 6]), :] 60 | elif params['model'] == 'rossler': 61 | params['library_dim'] = 5 62 | self.data.sindy_coefficients = self.data.sindy_coefficients[np.array([0, 1, 2, 3, 6]), :] 63 | elif params['model'] == 'predprey': 64 | params['library_dim'] = 3 65 | self.data.sindy_coefficients = self.data.sindy_coefficients[np.array([1, 2, 4]), :] 66 | else: 67 | params['library_dim'] = library_size(params['latent_dim'], params['poly_order'], params['include_sine'], True) 68 | 69 | params['actual_coefficients'] = self.data.sindy_coefficients 70 | if 'sparse_weighting' in params: 71 | if params['sparse_weighting'] is not None: 72 | a, sparse_weights = sindy_library(self.data.z[:100, :], params['poly_order'], include_sparse_weighting=True) 73 | params['sparse_weighting'] = sparse_weights 74 | 75 | return params 76 | 77 | def get_data(self): 78 | # Split into train and test sets 79 | train_x, test_x = train_test_split(self.data.x.T, train_size=self.params['train_ratio'], shuffle=False) 80 | train_dx, test_dx = train_test_split(self.data.dx.T, train_size=self.params['train_ratio'], shuffle=False) 81 | train_data = [train_x, train_dx] 82 | test_data = [test_x, test_dx] 83 | if self.params['svd_dim'] is not None: 84 | train_xorig, test_xorig = train_test_split(self.data.xorig, train_size=self.params['train_ratio'], shuffle=False) 85 | train_data = [train_xorig] + train_data 86 | test_data = [test_xorig] + test_data 87 | 88 | return train_data, test_data 89 | 90 | def get_model(self): 91 | if self.params['svd_dim'] is None: 92 | model = Sindy_Autoencoder(self.params) 93 | else: 94 | model = PreSVD_Sindy_Autoencoder(self.params) 95 | return model 96 | 97 | def fit(self): 98 | train_data, test_data = self.get_data() 99 | self.save_params() 100 | print(self.savename) 101 | print(self.params) 102 | 103 | # Create directory and file name 104 | os.makedirs(os.path.join(self.params['data_path'], self.savename), exist_ok=True) 105 | os.makedirs(os.path.join(self.params['data_path'], self.savename, 'checkpoints'), exist_ok=True) 106 | 107 | # Build model and fit 108 | optimizer = tf.keras.optimizers.Adam(lr=self.params['learning_rate']) 109 | self.model.compile(optimizer=optimizer, loss='mse') 110 | 111 | callback_list = get_callbacks(self.params, self.savename, x=test_data[1]) 112 | self.history = self.model.fit( 113 | x=train_data, y=train_data, 114 | batch_size=self.params['batch_size'], 115 | epochs=self.params['max_epochs'], 116 | validation_data=(test_data, test_data), 117 | callbacks=callback_list, 118 | shuffle=True) 119 | 120 | if self.params['case'] != 'lockunlock': 121 | prediction = self.model.predict(test_data) 122 | self.save_results(self.model) 123 | 124 | else: # Used to make SINDy coefficients trainable 125 | self.params['fix_coefs'] = False 126 | self.model_unlock = self.get_model() 127 | self.model_unlock.predict(test_data) # For building model, required for transfer 128 | self.model_unlock.set_weights(self.model.get_weights()) # Transfer weights 129 | self.model_unlock.compile(optimizer=optimizer, loss='mse') 130 | self.history = self.model_unlock.fit( 131 | x=train_data, y=train_data, 132 | batch_size=self.params['batch_size'], 133 | epochs=self.params['max_epochs'], 134 | validation_data=(test_data, test_data), 135 | callbacks=callback_list, 136 | shuffle=True) 137 | prediction = self.model_unlock.predict(test_data) 138 
|             self.save_results(self.model_unlock)
139 | 
140 |     def get_results(self, model):
141 |         results_dict = {}
142 |         results_dict['losses'] = self.history.history
143 |         results_dict['sindy_coefficients'] = model.sindy.coefficients.numpy()
144 | 
145 |         return results_dict
146 | 
147 |     def save_params(self):
148 |         saving_params = self.params
149 | 
150 |         # Save parameters
151 |         df = pd.DataFrame()
152 |         df = df.append(saving_params, ignore_index=True)
153 |         df.to_pickle(os.path.join(saving_params['data_path'], self.savename + '_params.pkl'))
154 | 
155 |     def save_results(self, model):
156 |         df = pd.DataFrame()
157 |         df = df.append(self.get_results(model), ignore_index=True)
158 |         df.to_pickle(os.path.join(self.params['data_path'], self.savename + '_results.pkl'))
159 | 
160 |         # Save model
161 |         model.save(os.path.join(self.params['data_path'], self.savename))
162 | 
163 | 
164 | #########################################################
165 | 
166 | 
167 | #########################################################
168 | #########################################################
169 | 
170 | def get_callbacks(params, savename, x=None, t=None, data=None):
171 |     callback_list = []
172 | 
173 |     ## Tensorboard saving callback - good for analyzing results
174 |     def get_run_logdir(current_dir=os.curdir):
175 |         root_logdir = os.path.join(current_dir, 'my_logs')
176 |         run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S_")
177 |         return os.path.join(root_logdir, run_id)
178 | 
179 |     # Update coefficient_mask callback
180 |     if params['coefficient_threshold'] is not None:
181 |         callback_list.append(RfeUpdateCallback(rfe_frequency=params['threshold_frequency']))
182 | 
183 |     # Early stopping when training stops improving
184 |     if params['patience'] is not None:
185 |         callback_list.append(tf.keras.callbacks.EarlyStopping(patience=params['patience'], monitor='val_total_loss'))
186 | 
187 | 
188 |     # Learning rate scheduler - Decrease learning rate exponentially (include in callback if needed)
189 |     if params['learning_rate_sched']:
190 |         def exponential_decay(lr0, s):
191 |             def exponential_decay_fn(epoch):
192 |                 return lr0 * 0.1**(epoch/s)
193 |             return exponential_decay_fn
194 |         exponential_decay_fn = exponential_decay(lr0=params['learning_rate'], s=params['max_epochs'])
195 |         callback_list.append( tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn) )
196 | 
197 |     if params['save_checkpoints']:
198 |         checkpoint_path = os.path.join(params['data_path'], savename, 'checkpoints', 'cp-{epoch:04d}.ckpt')
199 |         cp_callback = tf.keras.callbacks.ModelCheckpoint(
200 |             filepath=checkpoint_path,
201 |             verbose=1,
202 |             save_weights_only=True,
203 |             save_freq=params['save_freq'] * int(params['tend']/params['dt']*params['n_ics']/ \
204 |                       params['batch_size'] * params['train_ratio']))
205 | 
206 |         callback_list.append(cp_callback)
207 | 
208 |     if params['use_sindycall']:
209 |         print('generating data for sindycall')
210 |         params2 = params.copy()
211 |         params2['tend'] = 200
212 |         params2['n_ics'] = 1
213 | 
214 |         # NOTE: not tested; requires the data object (e.g. SynthData) to be passed in through `data`
215 |         from copy import deepcopy
216 |         data2 = deepcopy(data)
217 |         data2.run_sim(params2['n_ics'], params2['tend'], params2['dt'])
218 | 
219 |         print('Done..')
220 |         x = data2.x
221 |         t = data2.t[:data2.x.shape[0]]
222 |         callback_list.append(SindyCall(threshold=params2['sindy_threshold'], update_freq=params2['sindycall_freq'], x=x, t=t))
223 | 
224 |     return callback_list
225 | 
--------------------------------------------------------------------------------
/analyze/.ipynb_checkpoints/analyze_data-checkpoint.ipynb:
225 | --------------------------------------------------------------------------------
/analyze/analyze.py:
--------------------------------------------------------------------------------
1 | import sys
2 | sys.path.append("../aesindy")
3 | import os
4 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
5 | os.environ["CUDA_VISIBLE_DEVICES"] = "2"
6 | 
7 | from os import listdir
8 | import shutil
9 | import numpy as np
10 | from sindy_utils import sindy_simulate, sindy_library_names
11 | from sklearn.model_selection import train_test_split
12 | from sklearn.preprocessing import StandardScaler
13 | from solvers import SynthData
14 | 
15 | import pickle5 as pickle
16 | import pandas as pd
17 | import tensorflow as tf
18 | import matplotlib.pyplot as plt
19 | from IPython.display import display
20 | import pdb
21 | pd.options.display.float_format = '{:,.3f}'.format
22 | 
23 | 
24 | def params_names():
25 |     my_params = ['loss_weight_integral', 'sindy_pert', 'svd_dim', 'model']
26 | 
27 |     primary_params = ['case', 'coefficient_initialization', 'exact_features', 'fix_coefs', 'input_dim', 'latent_dim',
28 |                       'loss_weight_integral', 'loss_weight_rec', 'loss_weight_sindy_regularization', 'loss_weight_sindy_x',
29 |                       'loss_weight_sindy_z', 'loss_weight_x0', 'model', 'n_ics', 'widths_ratios', 'svd_dim']
30 |     secondary_params = ['activation', 'actual_coefficients', 'coefficient_threshold', 'dt',
31 |                         'fixed_coefficient_mask', 'library_dim',
32 |                         'max_epochs', 'model_order', 'noise', 'option', 'patience', 'poly_order', 'print_frequency',
33 |                         'save_checkpoints', 'save_freq', 'scale', 'sindy_pert']
34 |     tertiary_params = ['batch_size', 'data_path', 'include_sine', 'learning_rate', 'learning_rate_sched', 'print_progress']
35 | 
36 |     return primary_params, secondary_params, tertiary_params
37 | 
38 | 
39 | def pickle2dict(params):
40 |     params2 = {key: val[0] for key, val in params.to_dict().items()}
41 |     list_to_int = ['input_dim', 'latent_dim', 'poly_order', 'n_ics', 'include_sine', 'exact_features']
42 |     listwrap_to_list = ['normalization', 'system_coefficients', 'widths', 'widths_ratios']
43 |     for key in list_to_int:
44 |         if key in params2.keys():
45 |             params2[key] = int(params2[key])
46 |     for key in listwrap_to_list:
47 |         if key in params2.keys():
48 |             if params2[key] is not None:
49 |                 params2[key] = list(params2[key])
50 |     return params2
51 | 
52 | def get_checkpoint_names(cpath):
53 |     all_files = os.listdir(cpath)
54 |     all_files = set([n.split('.')[0] for n in all_files])
55 |     if 'checkpoint' in all_files:
56 |         all_files.remove('checkpoint')
57 |     all_files = list(all_files)
58 |     all_files.sort()
59 |     print('number of checkpoints = ', len(all_files))
60 |     return all_files
61 | 
62 | def get_names(cases, path):
63 |     directory = listdir(path)
64 |     name_list = []
65 |     for name in directory:
66 |         for case in cases:
67 |             if '.' not in name and case in name:
68 |                 name_list.append(name)
69 |     name_list = list(set(name_list))
70 | 
71 |     sortidx = np.argsort(np.array([int(s.split('_')[1]) for s in name_list]))[::-1]
72 |     name_list = [name_list[i] for i in sortidx]
73 |     return name_list
74 | 
75 | 
76 | 
77 | def get_cases(path, filter_case=None, print_cases=True):
78 |     directory = listdir(path)
79 |     case_list = []
80 |     for name in directory:
81 |         if '.' not in name:
82 |             casename = '_'.join(name.split('_')[2:])
83 |             if filter_case is not None:
84 |                 for fcase in filter_case:  # loop before testing (fixes use of `fcase` before assignment)
85 |                     if fcase in name:
86 |                         case_list.append(casename)
87 |             else:
88 |                 case_list.append(casename)
89 |     case_list = list(set(case_list))
90 |     if print_cases:
91 |         for case in case_list:
92 |             print(case)
93 |     return case_list
94 | 
95 | 
96 | def get_display_params(params, display_params=None):
97 |     filt_params = dict()
98 |     if display_params is not None:
99 |         for key in display_params:
100 |             if key in params.keys():
101 |                 filt_params[key] = params[key]
102 |                 print(key, ' : ', params[key])
103 |     else:
104 |         for key in params.keys():
105 |             filt_params[key] = params[key]
106 |             print(key, ' : ', params[key])
107 |     return filt_params
108 | 
109 | 
110 | def load_results(name, path='./results/'):
111 |     try:
112 |         model = tf.keras.models.load_model(path+name)
113 |     except:
114 |         print('model file does not exist')
115 |         model = None
116 | 
117 |     try:
118 |         params = pickle.load(open(path+name+'_params.pkl', 'rb'))
119 |     except:
120 |         print('params file does not exist')
121 |         params = None
122 | 
123 |     try:
124 |         results = pickle.load(open(path+name+'_results.pkl', 'rb'))
125 |     except:
126 |         results = None
127 |         print('no results file for ', name)
128 | 
129 |     return model, params, results
130 | 
131 | def delete_results(file_list, path):
132 |     dir_files = listdir(path)
133 |     for fdir in dir_files:
134 |         for fdelete in file_list:
135 |             if fdelete in fdir:
136 |                 print('deleting: ', fdir)
137 |                 if os.path.isdir(path+fdir):
138 |                     shutil.rmtree(path+fdir)
139 |                 else:
140 |                     os.remove(path+fdir)
141 | 
142 | def make_inputs_svd(S, reduced_dim, scale, dt):
143 |     if reduced_dim is not None:
144 |         print('Running SVD...')
145 |         U, s, VT = np.linalg.svd(S.x.T, full_matrices=False)
146 |         v = np.matmul(VT[:reduced_dim, :].T, np.diag(s[:reduced_dim]))
147 |         if scale:
148 |             scaler = StandardScaler()
149 |             v = scaler.fit_transform(v)
150 |         S.xorig = S.x
151 |         S.x = v
152 |         S.dx = np.gradient(v, dt, axis=0)
153 |         print('SVD Done!')
154 |     return S
155 | 
156 | 
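# --- Editor's aside (not part of the original analyze.py) -------------------
# A toy, self-contained illustration of the reduction make_inputs_svd() above
# performs; the (time, delay-coordinates) shape of the input is an assumption
# here. Keeping the leading singular vectors of the delay-embedded data gives
# the dominant time-delay coordinates before they are handed to the network.
def _svd_reduce_demo():
    import numpy as np
    X = np.random.rand(2000, 128)                    # assumed (time, delay coords)
    U, s, VT = np.linalg.svd(X.T, full_matrices=False)
    v = VT[:3, :].T @ np.diag(s[:3])                 # (time, 3): dominant coordinates
    dv = np.gradient(v, 0.001, axis=0)               # finite-difference derivative (dt assumed)
    return v, dv
# -----------------------------------------------------------------------------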
157 | def read_results(name_list,
158 |                  path,
159 |                  start_time=6,
160 |                  end_time=30,
161 |                  threshold=1e-2,
162 |                  t0_frac=0.0,
163 |                  end_time_plot=30,
164 |                  display_params=None,
165 |                  query_remove=False,
166 |                  known_attractor=False):
167 | 
168 |     varname = list('xyz123')
169 |     # (debug override `known_attractor = True` removed so the argument is honored)
170 |     non_existing_models = []
171 |     non_existing_params = []
172 |     remove_files = []
173 | 
174 |     for name in name_list:
175 |         print('name: ', name)
176 |         model, params, result = load_results(name, path)
177 |         print('got results...')
178 |         if model is None or params is None:
179 |             if model is None: non_existing_models.append(name)
180 |             if params is None: non_existing_params.append(name)
181 |             continue
182 | 
183 |         params = pickle2dict(params)
184 |         end_time_idx = int(end_time_plot/params['dt'])
185 | 
186 |         real_data = False  # FIX: the experimental-data branch below is not fully wired up yet
187 |         if real_data:
188 |             R = RealData(input_dim=params['input_dim'],
189 |                          interpolate=params['interpolate'],
190 |                          interp_dt=params['interp_dt'],
191 |                          interp_kind=params['interp_kind'],
192 |                          savgol_interp_coefs=params['interp_coefs'])
193 | 
194 |             R.build_solution(data)  # NOTE: `data` is not defined in this scope
195 |         else:
196 |             S = SynthData(model=params['model'],
197 |                           args=params['system_coefficients'],
198 |                           noise=params['noise'],
199 |                           input_dim=params['input_dim'],
200 |                           normalization=params['normalization'])
201 |             print('Generating Test Solution...')
202 |             S.run_sim(1, end_time, params['dt'])
203 | 
204 | 
205 |         ## Get SVD data (write in separate function)
206 |         S = make_inputs_svd(S, params['svd_dim'], params['scale'], params['dt'])
207 | 
208 |         ## This seems arbitrary
209 |         idx0 = int(start_time/params['dt'])
210 |         idx = int(end_time/params['dt'])
211 |         test_data = [S.x[:, idx0:idx].T, S.dx[:, idx0:idx].T]
212 |         test_time = S.t[idx0:idx]
213 |         if params['svd_dim'] is not None:
214 |             test_data = [S.xorig[:, idx0:idx].T] + test_data
215 | 
216 |         prediction = model.predict(test_data)
217 | 
218 |         if end_time_idx > test_data[0].shape[0]:
219 |             end_time_idx = test_data[0].shape[0]
220 | 
221 |         ## Display Optimal Coefficients
222 |         coef_names = sindy_library_names(params['latent_dim'],
223 |                                          params['poly_order'],
224 |                                          include_sine=params['include_sine'],
225 |                                          exact_features=params['exact_features'])
226 | 
227 |         print('------- COEFFICIENTS -------')
228 |         df = pd.DataFrame((model.sindy.coefficients).numpy()*
229 |                           (np.abs((model.sindy.coefficients).numpy()) > threshold).astype(float),
230 |                           columns=varname[:params['latent_dim']],
231 |                           index=coef_names)
232 |         display(df)
233 | 
234 |         print('-------- Mask ------')
235 |         display(pd.DataFrame(model.sindy.coefficients_mask.numpy(), columns=varname[:params['latent_dim']], index=coef_names))
236 | 
237 |         print('-------- Parameters ------')
238 |         disp_params = get_display_params(params, display_params=display_params)
239 | 
240 | 
241 | 
242 |         ## PLOT LOSSES
243 |         plot_losses(result, t0_frac=t0_frac)
244 | 
245 |         testin = test_data[0]
246 |         if params['svd_dim'] is not None:
247 |             testin = test_data[1]
248 | 
249 |         ## PLOT PREDICTION COMPARISON
250 |         fig = plt.figure(figsize=(10, 3.5))
251 |         plt.plot(test_time[:end_time_idx], testin[:end_time_idx, 0], 'b--')
252 |         plt.plot(test_time[:end_time_idx], prediction[:end_time_idx, 0], 'r')
253 |         plt.xlabel('t')
254 |         plt.ylabel('x(t)')
255 |         plt.legend(['True test', 'Auto-encoder prediction'])
256 | 
257 |         z_latent = model.encoder(testin).numpy()
258 |         z0 = np.array(z_latent[0])
259 |         z_sim = sindy_simulate(z0, test_time, model.sindy.coefficients_mask*model.sindy.coefficients,
260 |                                params['poly_order'], params['include_sine'],
    exact_features=params['exact_features'])
261 | 
262 |         if known_attractor:
263 |             original_sim = sindy_simulate(z0, test_time, S.sindy_coefficients, params['poly_order'], params['include_sine'])
264 |         else:
265 |             original_sim = None
266 | 
267 | 
268 |         ## PLOT RESULTS
269 |         # Assuming n=2 or n=3
270 |         plot_portraits(z_sim, n=params['latent_dim'], title='Discovered Simulated Dynamics')
271 |         plot_portraits(z_latent, n=params['latent_dim'], title='Latent Variable')
272 | 
273 |         plot_txy(test_time[:end_time_idx], testin[:end_time_idx, :], z_sim[:end_time_idx, :], n=1,
274 |                  title='Input vs. Discovered 1st Dim.', names=['Input data', 'Discovered'])
275 |         plot_txy(test_time[:end_time_idx], testin[:end_time_idx, :], z_latent[:end_time_idx, :], n=1,
276 |                  title='Input vs. Latent z_0', names=['Input data', 'Latent'])
277 | 
278 |         if params['latent_dim'] > 2:
279 |             plot3d_comparison(z_sim, z_latent, zorig=original_sim, title='Discovered SINDy dynamics')
280 | 
281 |         if known_attractor:
282 |             plot_txy(test_time[:end_time_idx], original_sim[:end_time_idx, :], z_latent[:end_time_idx, :],
283 |                      n=z_latent.shape[1], names=['Original', 'Latent'], title='')
284 |             plot_txy(test_time[:end_time_idx], original_sim[:end_time_idx, :], z_sim[:end_time_idx, :],
285 |                      n=z_latent.shape[1], names=['Original', 'Discovered'], title='')
286 | 
287 |         plt.show()
288 | 
289 |         ## Data cleaning while going through results
290 |         if query_remove:
291 |             print('Do you want to remove this file? Y/N')
292 |             answer = input()
293 |             if answer == 'Y' or answer == 'y':
294 |                 remove_files.append(name)
295 | 
296 |     return non_existing_models, non_existing_params, remove_files
297 | 
298 | 
299 | ###### PLOT FUNCTIONS ########
300 | 
301 | def plot_txy(t, x, y, n=1, names=['x', 'y'], title=''):
302 |     fig = plt.figure(figsize=(12, 4))
303 |     ax = []
304 |     for i in range(n):
305 |         axp = fig.add_subplot(n, 1, i+1)
306 |         ax.append(axp)
307 |         ax[i].plot(t, x[:, i], 'b--', linewidth=2)
308 |         ax[i].plot(t, y[:, i], 'r', linewidth=2)
309 |         ax[i].legend([names[0], names[1]])
310 |         ax[i].set_ylabel('x_'+str(i))
311 |     return ax
312 | 
313 | def plot_portraits(z_sim, n=2, title=''):
314 |     if n==2:
315 |         n_figs = 1
316 |         figwidth = 3.5
317 |     elif n==3:
318 |         n_figs = 3
319 |         figwidth = 10
320 | 
321 |     fig = plt.figure(figsize=(figwidth, 3.5))
322 |     ax1 = fig.add_subplot(1, n_figs, 1)
323 | 
324 |     ax1.plot(z_sim[:, 0], z_sim[:, 1], color='k', linewidth=1)
325 |     ax1.set_xlabel('x')  # was set_label, which does not label the axes
326 |     ax1.set_ylabel('y')
327 |     ax1.set_title(title)
328 | 
329 |     if n==3:
330 |         ax2 = fig.add_subplot(1, 3, 2)
331 |         ax2.plot(z_sim[:, 0], z_sim[:, 2], color='k', linewidth=1)
332 |         ax2.set_xlabel('x')
333 |         ax2.set_ylabel('z')
334 |         ax2.set_title(title)
335 | 
336 |         ax3 = fig.add_subplot(1, 3, 3)
337 |         ax3.plot(z_sim[:, 1], z_sim[:, 2], color='k', linewidth=1)
338 |         ax3.set_xlabel('y')
339 |         ax3.set_ylabel('z')
340 |         ax3.set_title(title)
341 | 
342 | 
343 | 
344 | def plot3d_comparison(zsim, zlatent, zorig=None, title=''):
345 |     fig = plt.figure(figsize=(10, 3.5))
346 |     ax1 = fig.add_subplot(131, projection='3d')
347 |     ax1.plot(zsim[:, 0], zsim[:, 1], zsim[:, 2], color='k', linewidth=1)
348 |     ax1.set_title(title)
349 |     plt.axis('off')
350 |     ax1.view_init(azim=120)
351 | 
352 |     ax3 = fig.add_subplot(132, projection='3d')
353 |     ax3.plot(zlatent[:, 0], zlatent[:, 1], zlatent[:, 2], color='k', linewidth=1)
354 |     ax3.set_title('Latent projection')
355 |     plt.xticks([])
356 |     plt.axis('off')
357 |     ax3.view_init(azim=120)
358 | 
359 |     if zorig is not None:
360 |         ax2 = fig.add_subplot(133,
projection='3d') 361 | ax2.plot(zorig[:, 0], zorig[:, 1], zorig[:, 2], color = 'k', linewidth=1) 362 | ax2.set_title('True dynamics') 363 | plt.xticks([]) 364 | plt.axis('off') 365 | ax2.view_init(azim=120) 366 | 367 | 368 | def plot_losses(result, t0_frac=0.0): 369 | if result is not None: 370 | result_losses = result['losses'][0] 371 | losses_list = [] 372 | for k in result['losses'][0].keys(): 373 | loss = k.split('_') 374 | if loss[0] == 'val': 375 | losses_list.append(['_'.join(loss[1:]), k]) 376 | steps = len(result_losses['total_loss']) 377 | idx0 = int(t0_frac*steps) 378 | numrows = int(np.ceil(len(losses_list)/2)) 379 | 380 | fig0 = plt.figure(figsize=(10, 10)) 381 | for i, losses_couple in enumerate(losses_list): 382 | fig0.add_subplot(numrows, 2, i+1) 383 | for j, losses in enumerate(losses_couple): 384 | plt.plot(range(idx0, steps), result_losses[losses][idx0:], '.-', lw=2) 385 | plt.title('validation_final = %.2e' % (result_losses[losses_couple[1]][-1])) 386 | plt.legend(losses_couple) 387 | else: 388 | print("NO LOSSES RESULTS FILE") 389 | -------------------------------------------------------------------------------- /analyze/analyze_data.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "\n", 10 | "import os\n", 11 | "# os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n", 12 | "# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"2\"\n", 13 | "# os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'\n", 14 | "import tensorflow as tf\n", 15 | "\n", 16 | "\n", 17 | "import sys\n", 18 | "sys.path.append(\"../src\")\n", 19 | "sys.path.append(\"../\")\n" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": 2, 25 | "metadata": { 26 | "scrolled": true 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "\n", 31 | "\n", 32 | "import datetime\n", 33 | "import numpy as np\n", 34 | "import matplotlib.pyplot as plt\n", 35 | "from pickle5 import pickle\n", 36 | "\n", 37 | "import pdb\n", 38 | "\n", 39 | "import matplotlib.pyplot as plt\n", 40 | "from mpl_toolkits.mplot3d import Axes3D\n", 41 | "%matplotlib inline\n", 42 | "\n", 43 | "from analyze import get_names, read_results, delete_results, get_cases, params_names, load_results\n", 44 | "from paths import ROOTPATH\n", 45 | "\n", 46 | "from os import listdir\n", 47 | "\n", 48 | "import pandas as pd\n", 49 | "# from IPython.display import display\n" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "### Print available cases" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": 3, 62 | "metadata": {}, 63 | "outputs": [ 64 | { 65 | "name": "stdout", 66 | "output_type": "stream", 67 | "text": [ 68 | "fluttering_basic\n", 69 | "pendulum_basic\n", 70 | "lorenzww_basic\n", 71 | "fluttering_Re1000\n", 72 | "/Users/dynamicslab/Documents/academic/research/deep-delay-autoencoder/testcases/results/\n" 73 | ] 74 | } 75 | ], 76 | "source": [ 77 | "path=ROOTPATH+'testcases/results/'\n", 78 | "cases = get_cases(path, filter_case=None, print_cases=True)\n", 79 | "p1, p2, p3 = params_names()\n", 80 | "print(path)" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "### Get names for a given case" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": 6, 93 | "metadata": {}, 94 | "outputs": [ 95 | { 96 | "name": "stdout", 97 | "output_type": "stream", 98 | 
"text": [ 99 | "0 results_202208311523_fluttering_Re1000\n", 100 | "1 results_202208311206_fluttering_Re1000\n", 101 | "2 results_202208311153_fluttering_Re1000\n", 102 | "3 results_202208311152_fluttering_Re1000\n", 103 | "4 results_202208300802_fluttering_Re1000\n", 104 | "5 results_202208300729_fluttering_Re1000\n", 105 | "6 results_202208292212_fluttering_Re1000\n", 106 | "7 results_202208292145_fluttering_Re1000\n", 107 | "8 results_202208292052_fluttering_Re1000\n", 108 | "9 results_202208291949_fluttering_Re1000\n", 109 | "10 results_202208291942_fluttering_basic\n", 110 | "11 results_202208291941_fluttering_basic\n", 111 | "12 results_202208291910_lorenzww_basic\n", 112 | "13 results_202208291909_lorenzww_basic\n", 113 | "14 results_202208291902_lorenzww_basic\n", 114 | "15 results_202208291834_lorenzww_basic\n", 115 | "16 results_202208291754_lorenzww_basic\n", 116 | "17 results_202208291327_pendulum_basic\n", 117 | "18 results_202208291325_pendulum_basic\n", 118 | "19 results_202208291322_pendulum_basic\n", 119 | "20 results_202208180322_pendulum_basic\n", 120 | "21 results_202208180308_pendulum_basic\n" 121 | ] 122 | } 123 | ], 124 | "source": [ 125 | "name_list = get_names(cases, path)\n", 126 | "for idx, name in enumerate(name_list): print(idx, name) " 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": 13, 132 | "metadata": {}, 133 | "outputs": [ 134 | { 135 | "name": "stdout", 136 | "output_type": "stream", 137 | "text": [ 138 | "name: results_202208292212_fluttering_Re1000\n" 139 | ] 140 | }, 141 | { 142 | "name": "stderr", 143 | "output_type": "stream", 144 | "text": [ 145 | "WARNING:absl:Importing a function (__inference_internal_grad_fn_2147616) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.\n" 146 | ] 147 | }, 148 | { 149 | "name": "stdout", 150 | "output_type": "stream", 151 | "text": [ 152 | "got results...\n", 153 | "Generating Test Solution...\n" 154 | ] 155 | }, 156 | { 157 | "ename": "UnboundLocalError", 158 | "evalue": "local variable 'f' referenced before assignment", 159 | "output_type": "error", 160 | "traceback": [ 161 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 162 | "\u001b[0;31mUnboundLocalError\u001b[0m Traceback (most recent call last)", 163 | "\u001b[1;32m/Users/dynamicslab/Documents/academic/research/deep-delay-autoencoder/analyze/analyze_data.ipynb Cell 7\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 4\u001b[0m t0_frac \u001b[39m=\u001b[39m \u001b[39m0.2\u001b[39m\n\u001b[1;32m 5\u001b[0m query_remove \u001b[39m=\u001b[39m \u001b[39mTrue\u001b[39;00m\n\u001b[0;32m----> 7\u001b[0m non_existing_files, non_existing_params, remove_files \u001b[39m=\u001b[39m read_results([name_list[\u001b[39m6\u001b[39;49m]], \n\u001b[1;32m 8\u001b[0m path, \n\u001b[1;32m 9\u001b[0m end_time\u001b[39m=\u001b[39;49mend_time, \n\u001b[1;32m 10\u001b[0m display_params\u001b[39m=\u001b[39;49mdisplay_params, \n\u001b[1;32m 11\u001b[0m t0_frac\u001b[39m=\u001b[39;49mt0_frac, \n\u001b[1;32m 12\u001b[0m end_time_plot\u001b[39m=\u001b[39;49mend_time_plot,\n\u001b[1;32m 13\u001b[0m query_remove\u001b[39m=\u001b[39;49mquery_remove)\n", 164 | "File \u001b[0;32m~/Documents/academic/research/deep-delay-autoencoder/analyze/analyze.py:193\u001b[0m, in \u001b[0;36mread_results\u001b[0;34m(name_list, path, start_time, end_time, threshold, t0_frac, end_time_plot, display_params, query_remove, known_attractor)\u001b[0m\n\u001b[1;32m 187\u001b[0m S 
\u001b[39m=\u001b[39m SynthData(model\u001b[39m=\u001b[39mparams[\u001b[39m'\u001b[39m\u001b[39mmodel\u001b[39m\u001b[39m'\u001b[39m], \n\u001b[1;32m 188\u001b[0m args\u001b[39m=\u001b[39mparams[\u001b[39m'\u001b[39m\u001b[39msystem_coefficients\u001b[39m\u001b[39m'\u001b[39m], \n\u001b[1;32m 189\u001b[0m noise\u001b[39m=\u001b[39mparams[\u001b[39m'\u001b[39m\u001b[39mnoise\u001b[39m\u001b[39m'\u001b[39m], \n\u001b[1;32m 190\u001b[0m input_dim\u001b[39m=\u001b[39mparams[\u001b[39m'\u001b[39m\u001b[39minput_dim\u001b[39m\u001b[39m'\u001b[39m], \n\u001b[1;32m 191\u001b[0m normalization\u001b[39m=\u001b[39mparams[\u001b[39m'\u001b[39m\u001b[39mnormalization\u001b[39m\u001b[39m'\u001b[39m])\n\u001b[1;32m 192\u001b[0m \u001b[39mprint\u001b[39m(\u001b[39m'\u001b[39m\u001b[39mGenerating Test Solution...\u001b[39m\u001b[39m'\u001b[39m)\n\u001b[0;32m--> 193\u001b[0m S\u001b[39m.\u001b[39mrun_sim(\u001b[39m1\u001b[39m, end_time, params[\u001b[39m'\u001b[39m\u001b[39mdt\u001b[39m\u001b[39m'\u001b[39m])\n\u001b[1;32m 195\u001b[0m \u001b[39m# if params['model'] == 'lorenzww':\u001b[39;00m\n\u001b[1;32m 196\u001b[0m \u001b[39m# L.filename='/home/joebakarji/delay-auto/main/examples/data/lorenzww.json'\u001b[39;00m\n\u001b[1;32m 197\u001b[0m \u001b[39m# data = L.get_solution()\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 200\u001b[0m \n\u001b[1;32m 201\u001b[0m \u001b[39m## Get SVD data (write in separate function)\u001b[39;00m\n\u001b[1;32m 202\u001b[0m S \u001b[39m=\u001b[39m make_inputs_svd(S, params[\u001b[39m'\u001b[39m\u001b[39msvd_dim\u001b[39m\u001b[39m'\u001b[39m], params[\u001b[39m'\u001b[39m\u001b[39mscale\u001b[39m\u001b[39m'\u001b[39m], params[\u001b[39m'\u001b[39m\u001b[39mdt\u001b[39m\u001b[39m'\u001b[39m])\n", 165 | "File \u001b[0;32m~/Documents/academic/research/deep-delay-autoencoder/analyze/../src/solvers.py:37\u001b[0m, in \u001b[0;36mSynthData.run_sim\u001b[0;34m(self, n_ics, tend, dt, z0_stat)\u001b[0m\n\u001b[1;32m 34\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mrun_sim\u001b[39m(\u001b[39mself\u001b[39m, n_ics, tend, dt, z0_stat\u001b[39m=\u001b[39m\u001b[39mNone\u001b[39;00m):\n\u001b[1;32m 35\u001b[0m \u001b[39m\"\"\" Runs solver over multiple initial conditions and builds Hankel matrix \"\"\"\u001b[39;00m\n\u001b[0;32m---> 37\u001b[0m f, Xi, model_dim, z0_mean_sug, z0_std_sug \u001b[39m=\u001b[39m get_model(\u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mmodel, \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49margs, \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mnormalization)\n\u001b[1;32m 38\u001b[0m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mnormalization \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mnormalization \u001b[39mif\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mnormalization \u001b[39mis\u001b[39;00m \u001b[39mnot\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39melse\u001b[39;00m np\u001b[39m.\u001b[39mones((model_dim,))\n\u001b[1;32m 39\u001b[0m \u001b[39mif\u001b[39;00m z0_stat \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m:\n", 166 | "File \u001b[0;32m~/Documents/academic/research/deep-delay-autoencoder/analyze/../src/dynamical_models.py:89\u001b[0m, in \u001b[0;36mget_model\u001b[0;34m(name, args, normalization, use_sine)\u001b[0m\n\u001b[1;32m 86\u001b[0m z0_mean_sug \u001b[39m=\u001b[39m [np\u001b[39m.\u001b[39mpi\u001b[39m/\u001b[39m\u001b[39m2\u001b[39m, \u001b[39m0\u001b[39m]\n\u001b[1;32m 87\u001b[0m z0_std_sug \u001b[39m=\u001b[39m 
[np\u001b[39m.\u001b[39mpi\u001b[39m/\u001b[39m\u001b[39m2\u001b[39m, \u001b[39m2\u001b[39m]\n\u001b[0;32m---> 89\u001b[0m \u001b[39mreturn\u001b[39;00m f, Xi, dim, z0_mean_sug, z0_std_sug\n", 167 | "\u001b[0;31mUnboundLocalError\u001b[0m: local variable 'f' referenced before assignment" 168 | ] 169 | } 170 | ], 171 | "source": [ 172 | "end_time = 30\n", 173 | "end_time_plot = 100\n", 174 | "display_params = p1 #primary_params + secondary_params + tertiary_params\n", 175 | "t0_frac = 0.2\n", 176 | "query_remove = True\n", 177 | "\n", 178 | "non_existing_files, non_existing_params, remove_files = read_results([name_list[6]], \n", 179 | " path, \n", 180 | " end_time=end_time, \n", 181 | " display_params=display_params, \n", 182 | " t0_frac=t0_frac, \n", 183 | " end_time_plot=end_time_plot,\n", 184 | " query_remove=query_remove)" 185 | ] 186 | }, 187 | { 188 | "cell_type": "code", 189 | "execution_count": 23, 190 | "metadata": {}, 191 | "outputs": [ 192 | { 193 | "data": { 194 | "text/plain": [ 195 | "[]" 196 | ] 197 | }, 198 | "execution_count": 23, 199 | "metadata": {}, 200 | "output_type": "execute_result" 201 | } 202 | ], 203 | "source": [ 204 | "non_existing_params\n", 205 | "# delete_results(non_existing_files, path)" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": null, 211 | "metadata": {}, 212 | "outputs": [], 213 | "source": [] 214 | } 215 | ], 216 | "metadata": { 217 | "kernelspec": { 218 | "display_name": "Python 3.9.7 ('base')", 219 | "language": "python", 220 | "name": "python3" 221 | }, 222 | "language_info": { 223 | "codemirror_mode": { 224 | "name": "ipython", 225 | "version": 3 226 | }, 227 | "file_extension": ".py", 228 | "mimetype": "text/x-python", 229 | "name": "python", 230 | "nbconvert_exporter": "python", 231 | "pygments_lexer": "ipython3", 232 | "version": "3.9.7" 233 | }, 234 | "vscode": { 235 | "interpreter": { 236 | "hash": "5e1cd2088bc57d8ced854bbb7b8182f7763d6349a67919d377f49e4f66f46c01" 237 | } 238 | } 239 | }, 240 | "nbformat": 4, 241 | "nbformat_minor": 4 242 | } 243 | -------------------------------------------------------------------------------- /data/pupils_156_joe.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josephbakarji/deep-delay-autoencoder/1f43e546e9fcb0b1fc6e74ca98bc36daba98a2b6/data/pupils_156_joe.mat -------------------------------------------------------------------------------- /data/small_files/NACA0012_Re1000_AoA35_2D_forces.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josephbakarji/deep-delay-autoencoder/1f43e546e9fcb0b1fc6e74ca98bc36daba98a2b6/data/small_files/NACA0012_Re1000_AoA35_2D_forces.mat -------------------------------------------------------------------------------- /data/small_files/NACA0012_Re1500_AoA35_2D_forces.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/josephbakarji/deep-delay-autoencoder/1f43e546e9fcb0b1fc6e74ca98bc36daba98a2b6/data/small_files/NACA0012_Re1500_AoA35_2D_forces.mat -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | ipython 2 | mat73 3 | matplotlib 4 | numpy 5 | pandas 6 | pickle5 7 | pysindy 8 | scikit_learn 9 | scipy 10 | tensorflow 11 | tensorflow_macos 12 | tqdm 13 | 
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import find_packages, setup
2 | 
3 | setup(
4 |     name='aesindy',
5 |     packages=find_packages(include=['aesindy', 'aesindy.*']),
6 |     version='0.1',
7 |     description='description',
8 |     author='Joseph Bakarji',
9 |     license='MIT',
10 |     install_requires=['mat73',
11 |                       'matplotlib',
12 |                       'numpy',
13 |                       'pandas',
14 |                       'pickle5',
15 |                       'pysindy',
16 |                       'scikit_learn',
17 |                       'scipy',
18 |                       'tensorflow-macos',
19 |                       'tqdm'],
20 |     setup_requires=[],
21 |     tests_require=[],
22 |     test_suite='tests',
23 | )
24 | --------------------------------------------------------------------------------
/testcases/__init__.py:
--------------------------------------------------------------------------------
1 | # import sys
2 | # sys.path.append('../')
3 | # import aesindy
--------------------------------------------------------------------------------
/testcases/autolock_unlockf.py:
--------------------------------------------------------------------------------
1 | # %%
2 | import sys
3 | sys.path.append("../")
4 | import os
5 | 
6 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
7 | os.environ["CUDA_VISIBLE_DEVICES"] = "3"
8 | 
9 | from __init__ import *
10 | import datetime
11 | import numpy as np
12 | import tensorflow as tf
13 | import matplotlib.pyplot as plt
14 | 
15 | from aesindy.training import TrainModel, get_callbacks
16 | from basic_params import params
17 | from basic_run import generate_data, train, get_hyperparameter_list
18 | from config import ROOTPATH
19 | 
20 | 
21 | params['case'] = 'lockauto_unlockf'
22 | params['model'] = 'lorenz'
23 | params['input_dim'] = 128
24 | params['tend'] = 6
25 | params['dt'] = 0.001
26 | params['n_ics'] = 5
27 | params['batch_size'] = 500
28 | params['coefficient_initialization'] = 'true'
29 | params['max_epochs'] = 3000
30 | params['loss_weight_x0'] = 0.2
31 | params['loss_weight_sindy_x'] = 0.001
32 | params['loss_weight_sindy_z'] = 0.0001 #0.2 # forces first element to be the same in x and z
33 | params['loss_weight_sindy_regularization'] = 1e-5
34 | params['loss_weight_integral'] = 0.0
35 | params['svd_dim'] = None # try 7
36 | params['scale'] = False # try true
37 | params['widths_ratios'] = [0.75, 0.4, 0.2]
38 | params['sparse_weighting'] = None
39 | params['normalization'] = [1/40, 1/40, 1/40]
40 | params['data_path'] = ROOTPATH
41 | params['loss_weight_layer_l2'] = 0.0
42 | params['loss_weight_layer_l1'] = 0.0
43 | params['use_bias'] = True
44 | 
45 | 
46 | # Generate data
47 | data = generate_data(params)
48 | 
49 | trainer = TrainModel(data, params)
50 | 
51 | ## Lock parameters
52 | trainer.params['case'] = 'lock_opt' + 'test2'
53 | trainer.params['save_checkpoints'] = False
54 | trainer.params['patience'] = 1
55 | trainer.params['fix_coefs'] = True
56 | trainer.params['trainable_auto'] = True
57 | 
58 | trainer.savename = trainer.get_name(include_date=False)
59 | print(trainer.savename)
60 | 
61 | train_data, test_data = trainer.get_data()
62 | trainer.save_params()
63 | print(trainer.params)
64 | 
65 | # Create directory and file name
66 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename), exist_ok=True)
67 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints'), exist_ok=True)
68 | 
69 | # Get model AFTER setting parameters
70 | trainer.model = trainer.get_model()
71 | 
72 | # Build model and fit
73 | optimizer = tf.keras.optimizers.Adam(learning_rate=trainer.params['learning_rate'])  # `lr` kwarg is deprecated in TF2
74 | trainer.model.compile(optimizer=optimizer, loss='mse')
75 | 
76 | callback_list = get_callbacks(trainer.params, trainer.savename)
77 | trainer.history = trainer.model.fit(
78 |     x=train_data, y=train_data,
79 |     batch_size=trainer.params['batch_size'],
80 |     epochs=trainer.params['max_epochs'],
81 |     validation_data=(test_data, test_data),
82 |     callbacks=callback_list,
83 |     shuffle=True)
84 | 
85 | # Save locked model
86 | prediction = trainer.model.predict(test_data)
87 | trainer.save_results(trainer.model)
88 | 
89 | 
90 | #######
91 | #######
92 | 
93 | ## Unlock parameters
94 | trainer.params['case'] = 'lockauto_unlockf'
95 | trainer.params['save_checkpoints'] = True
96 | trainer.params['save_freq'] = 2
97 | trainer.params['patience'] = 1
98 | trainer.params['fix_coefs'] = False
99 | trainer.params['trainable_auto'] = False
100 | 
101 | trainer.savename = trainer.get_name(include_date=True)
102 | trainer.save_params()
103 | 
104 | # Create directory and file name
105 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename), exist_ok=True)
106 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints'), exist_ok=True)
107 | 
108 | trainer.model_unlock = trainer.get_model()
109 | trainer.model_unlock.predict(test_data) # For building model, required for transfer
110 | trainer.model_unlock.set_weights(trainer.model.get_weights()) # Transfer weights
111 | trainer.model_unlock.compile(optimizer=optimizer, loss='mse')
112 | 
113 | callback_list = get_callbacks(trainer.params, trainer.savename)
114 | trainer.history = trainer.model_unlock.fit(
115 |     x=train_data, y=train_data,
116 |     batch_size=trainer.params['batch_size'],
117 |     epochs=trainer.params['max_epochs'],
118 |     validation_data=(test_data, test_data),
119 |     callbacks=callback_list,
120 |     shuffle=True)
121 | 
122 | prediction = trainer.model_unlock.predict(test_data)
123 | trainer.save_results(trainer.model_unlock)
124 | 
125 | --------------------------------------------------------------------------------
/testcases/basic_test.py:
--------------------------------------------------------------------------------
1 | 
2 | import pdb
3 | import numpy as np
4 | from aesindy.solvers import SynthData
5 | from aesindy.training import TrainModel
6 | from default_params import params
7 | 
8 | import os
9 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
10 | os.environ["CUDA_VISIBLE_DEVICES"] = "0"
11 | 
12 | 
13 | params['model'] = 'pendulum'
14 | params['case'] = 'basic'
15 | params['system_coefficients'] = [9.8, 10]
16 | params['noise'] = 0.0
17 | params['input_dim'] = 80
18 | params['dt'] = np.sqrt(params['system_coefficients'][0]/params['system_coefficients'][1])/params['input_dim']/5
19 | params['tend'] = 2
20 | params['n_ics'] = 30
21 | params['poly_order'] = 1
22 | params['include_sine'] = True
23 | params['fix_coefs'] = False
24 | 
25 | params['save_checkpoints'] = True
26 | params['save_freq'] = 5
27 | 
28 | params['print_progress'] = True
29 | params['print_frequency'] = 5
30 | 
31 | # training time cutoffs
32 | params['max_epochs'] = 300
33 | params['patience'] = 10
34 | 
35 | # loss function weighting
36 | 
params['loss_weight_rec'] = 0.3 37 | params['loss_weight_sindy_z'] = 0.001 38 | params['loss_weight_sindy_x'] = 0.001 39 | params['loss_weight_sindy_regularization'] = 1e-5 40 | params['loss_weight_integral'] = 0.1 41 | params['loss_weight_x0'] = 0.01 42 | params['loss_weight_layer_l2'] = 0.0 43 | params['loss_weight_layer_l1'] = 0.0 44 | 45 | 46 | S = SynthData(model=params['model'], 47 | args=params['system_coefficients'], 48 | noise=params['noise'], 49 | input_dim=params['input_dim'], 50 | normalization=params['normalization']) 51 | S.run_sim(params['n_ics'], params['tend'], params['dt']) 52 | 53 | trainer = TrainModel(S, params) 54 | trainer.fit() 55 | -------------------------------------------------------------------------------- /testcases/default_params.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from aesindy.config import ROOTPATH 4 | 5 | params = {} 6 | params['data_path'] = os.path.join(ROOTPATH, 'testcases/results/') 7 | if not os.path.isdir(params['data_path']): 8 | os.makedirs(params['data_path']) 9 | 10 | params['case'] = 'rando' 11 | params['model'] = 'predator_prey' 12 | params['tend'] = 20 13 | params['dt'] = 0.001 14 | params['tau'] = None # skip 15 | 16 | params['system_coefficients'] = None 17 | params['normalization'] = None 18 | params['latent_dim'] = 2 19 | 20 | params['noise'] = 0.0 21 | params['interpolate'] = False 22 | params['interp_dt'] = 0.01 23 | params['interp_kind'] = 'cubic' 24 | params['interp_coefs'] = [21, 3] 25 | 26 | params['n_ics'] = 5 27 | 28 | params['train_ratio'] = 0.8 29 | params['input_dim'] = 128 # Try 256 30 | params['poly_order'] = 2 31 | params['include_sine'] = False 32 | params['exact_features'] = False # Overrides poly_order 33 | 34 | params['svd_dim'] = None 35 | params['scale'] = False 36 | 37 | params['ode_net'] = False 38 | params['ode_net_widths'] = [1.5, 3] 39 | 40 | # sequential thresholding parameters 41 | params['coefficient_threshold'] = 1e-6 ## set to none for turning off RFE 42 | params['threshold_frequency'] = 100 43 | params['coefficient_initialization'] = 'random_normal' 44 | params['fixed_coefficient_mask'] = False 45 | params['fix_coefs'] = False 46 | params['trainable_auto'] = True 47 | params['sindy_pert'] = 0.0 48 | 49 | # loss function weighting 50 | params['loss_weight_rec'] = 1.0 51 | params['loss_weight_sindy_z'] = 0.0001 52 | params['loss_weight_sindy_x'] = 0.001 53 | params['loss_weight_sindy_regularization'] = 1e-5 54 | params['loss_weight_integral'] = 0.0 55 | params['loss_weight_x0'] = 0.0 56 | params['loss_weight_layer_l2'] = 0.0 57 | params['loss_weight_layer_l1'] = 0.0 58 | 59 | params['activation'] = 'elu' 60 | params['widths_ratios'] = [0.5, 0.25] 61 | params['use_bias'] = True 62 | 63 | # training parameters 64 | params['batch_size'] = 32 65 | params['learning_rate'] = 1e-3 66 | params['learning_rate_sched'] = False 67 | 68 | params['save_checkpoints'] = False 69 | params['save_freq'] = 1 70 | 71 | params['print_progress'] = True 72 | params['print_frequency'] = 10 73 | 74 | # training time cutoffs 75 | params['max_epochs'] = 3000 76 | params['patience'] = 100 77 | params['sparse_weighting'] = None 78 | 79 | params['sindycall_freq'] = 1 80 | params['use_sindycall'] = False 81 | params['sindy_threshold'] = 0.4 82 | -------------------------------------------------------------------------------- /testcases/fluttering.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from 
aesindy.training import TrainModel 3 | from aesindy.solvers import RealData 4 | from default_params import params 5 | from aesindy.config import ROOTPATH 6 | 7 | from scipy.io import loadmat 8 | 9 | 10 | import os 11 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" 12 | os.environ["CUDA_VISIBLE_DEVICES"] = "2" 13 | 14 | 15 | params['model'] = 'fluttering' 16 | params['case'] = 'Re1000' 17 | params['system_coefficients'] = None 18 | params['noise'] = 0.0 19 | params['input_dim'] = 20 20 | params['widths_ratios'] = [0.5, 0.25] 21 | params['poly_order'] = 2 22 | params['include_sine'] = False 23 | params['fix_coefs'] = False 24 | 25 | params['interpolate'] = True 26 | params['interp_dt'] = 0.015 27 | params['interp_kind'] = 'cubic' 28 | params['interp_coefs'] = [21, 3] 29 | 30 | params['save_checkpoints'] = False 31 | params['save_freq'] = 5 32 | 33 | params['print_progress'] = True 34 | params['print_frequency'] = 5 35 | 36 | # training time cutoffs 37 | params['max_epochs'] = 3 38 | params['patience'] = 10 39 | 40 | # loss function weighting 41 | params['loss_weight_rec'] = 0.3 42 | params['loss_weight_sindy_z'] = 0.001 43 | params['loss_weight_sindy_x'] = 0.001 44 | params['loss_weight_sindy_regularization'] = 1e-5 45 | params['loss_weight_integral'] = 0.1 46 | params['loss_weight_x0'] = 0.01 47 | params['loss_weight_layer_l2'] = 0.0 48 | params['loss_weight_layer_l1'] = 0.0 49 | 50 | ## Read data 51 | annots = loadmat(ROOTPATH+'data/small_files/NACA0012_Re1000_AoA35_2D_forces.mat') 52 | cd = annots['CD'].flatten() 53 | data = { 54 | 'dt': annots['dt'][0][0], 55 | 'time': [np.linspace(0, annots['dt'][0][0]*len(cd), len(cd), endpoint=False)], 56 | 'x': [cd] 57 | } 58 | 59 | params['dt'] = data['dt'] 60 | params['tend'] = data['time'][-1][-1] 61 | params['n_ics'] = len(data['x']) 62 | 63 | 64 | R = RealData(input_dim=params['input_dim'], 65 | interpolate=params['interpolate'], 66 | interp_dt=params['interp_dt'], 67 | interp_kind=params['interp_kind'], 68 | savgol_interp_coefs=params['interp_coefs']) 69 | 70 | R.build_solution(data) 71 | 72 | trainer = TrainModel(R, params) 73 | trainer.fit() 74 | -------------------------------------------------------------------------------- /testcases/h2z_evolution.py: -------------------------------------------------------------------------------- 1 | # %% 2 | import sys 3 | sys.path.append("../") 4 | import os 5 | 6 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" 7 | os.environ["CUDA_VISIBLE_DEVICES"] = "2" 8 | 9 | from __init__ import * 10 | import datetime 11 | import numpy as np 12 | import tensorflow as tf 13 | import matplotlib.pyplot as plt 14 | 15 | from lorenz import Lorenz 16 | from aesindy.training import TrainModel, get_callbacks 17 | from basic_params import params 18 | from basic_run import generate_data, train, get_hyperparameter_list 19 | 20 | 21 | params['case'] = 'h2v_evolution' 22 | params['model'] = 'lorenz' 23 | params['input_dim'] = 128 24 | params['tend'] = 60 25 | params['dt'] = 0.001 26 | params['n_ics'] = 50 27 | params['batch_size'] = 500 28 | params['coefficient_initialization'] = 'true' 29 | params['max_epochs'] = 3000 30 | params['loss_weight_x0'] = 0.2 31 | params['loss_weight_sindy_x'] = 0.001 32 | params['loss_weight_sindy_z'] = 0.0001 #0.2 # forces first element to be the same in x and z 33 | params['loss_weight_sindy_regularization'] = 1e-5 34 | params['loss_weight_integral'] = 0.0 35 | params['svd_dim'] = None # try 7 36 | params['scale'] = False # try true 37 | params['widths_ratios'] = [0.5, 0.25] 38 | 
params['sparse_weighting'] = None
39 | params['normalization'] = [1/40, 1/40, 1/40]
40 | params['data_path'] = '/home/joebakarji/delay-auto/main/examples/data/'
41 | params['loss_weight_layer_l2'] = 0.0
42 | params['loss_weight_layer_l1'] = 0.0
43 | params['use_bias'] = True
44 | params['learning_rate'] = 5e-4
45 | 
46 | # Generate data
47 | data = generate_data(params)
48 | trainer = TrainModel(data, params)
49 | 
50 | ## Lock parameters
51 | encdec_patience = 10
52 | trainer.params['case'] = 'h2v_evolution'
53 | trainer.params['save_checkpoints'] = True
54 | trainer.params['patience'] = 40
55 | trainer.params['fix_coefs'] = True
56 | trainer.params['trainable_auto'] = True
57 | 
58 | trainer.savename = trainer.get_name()
59 | print(trainer.savename)
60 | 
61 | train_data, test_data = trainer.get_data()
62 | trainer.save_params()
63 | print(trainer.params)
64 | 
65 | # Get model AFTER setting parameters
66 | trainer.model = trainer.get_model()
67 | 
68 | ## Get SVD output
69 | reduced_dim = 3
70 | U, s, VT = np.linalg.svd(data.x.T, full_matrices=False)
71 | v = np.matmul(VT[:reduced_dim, :].T, np.diag(s[:reduced_dim]))
72 | 
73 | # Create directory and file name
74 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename), exist_ok=True)
75 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints'), exist_ok=True)
76 | 
77 | ########################
78 | 
79 | # ENCODER Checkpoints
80 | checkpoint_path_encoder = os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints', 'cp-enc-{epoch:04d}.ckpt')
81 | cp_callback = tf.keras.callbacks.ModelCheckpoint(
82 |     filepath=checkpoint_path_encoder,
83 |     verbose=1,
84 |     save_weights_only=True,
85 |     save_freq=params['save_freq'] * int(trainer.params['tend']/trainer.params['dt']*trainer.params['n_ics']/ \
86 |         trainer.params['batch_size'] * trainer.params['train_ratio']))
87 | 
88 | 
89 | # ENCODER TRAINING
90 | optimizer = tf.keras.optimizers.Adam(learning_rate=trainer.params['learning_rate'])  # `lr` kwarg is deprecated in TF2
91 | trainer.model.encoder.compile(optimizer=optimizer, loss='mse')
92 | 
93 | history_encoder = trainer.model.encoder.fit(
94 |     x=data.x, y=v,
95 |     batch_size=trainer.params['batch_size'],
96 |     epochs=20,
97 |     callbacks=[tf.keras.callbacks.EarlyStopping(patience=encdec_patience, monitor='loss'), cp_callback],
98 |     shuffle=True)
99 | 
100 | 
101 | ########################
102 | 
103 | # DECODER Checkpoints
104 | checkpoint_path_decoder = os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints', 'cp-dec-{epoch:04d}.ckpt')
105 | cp_callback = tf.keras.callbacks.ModelCheckpoint(
106 |     filepath=checkpoint_path_decoder,
107 |     verbose=1,
108 |     save_weights_only=True,
109 |     save_freq=trainer.params['save_freq'] * int(trainer.params['tend']/trainer.params['dt']*trainer.params['n_ics']/ \
110 |         trainer.params['batch_size'] * trainer.params['train_ratio']))
111 | 
112 | # DECODER TRAINING
113 | trainer.model.decoder.compile(optimizer=optimizer, loss='mse')
114 | history_decoder = trainer.model.decoder.fit(
115 |     x=v, y=data.x,
116 |     batch_size=trainer.params['batch_size'],
117 |     epochs=20,
118 |     callbacks=[tf.keras.callbacks.EarlyStopping(patience=encdec_patience, monitor='loss'), cp_callback],
119 |     shuffle=True)
120 | 
121 | ########################
122 | 
123 | # FULL MODEL Checkpoints
124 | checkpoint_path = os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints', 'cp-{epoch:04d}.ckpt')
125 | cp_callback = tf.keras.callbacks.ModelCheckpoint(
126 |     filepath=checkpoint_path,
127 |     verbose=1,
128 | 
save_weights_only=True, 129 | save_freq=trainer.params['save_freq'] * int(trainer.params['tend']/trainer.params['dt']*trainer.params['n_ics']/ \ 130 | trainer.params['batch_size'] * trainer.params['train_ratio'])) 131 | 132 | 133 | # Build model and fit 134 | trainer.model.compile(optimizer=optimizer, loss='mse') 135 | 136 | callback_list = get_callbacks(trainer.params, trainer.savename) 137 | trainer.history = trainer.model.fit( 138 | x=train_data, y=train_data, 139 | batch_size=trainer.params['batch_size'], 140 | epochs=trainer.params['max_epochs'], 141 | validation_data=(test_data, test_data), 142 | callbacks=callback_list, 143 | shuffle=True) 144 | 145 | # Save locked model 146 | prediction = trainer.model.predict(test_data) 147 | trainer.save_results(trainer.model) 148 | 149 | -------------------------------------------------------------------------------- /testcases/initv_intloss.py: -------------------------------------------------------------------------------- 1 | # %% 2 | import sys 3 | sys.path.append("../../aesindy") 4 | sys.path.append("../") 5 | import os 6 | 7 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" 8 | os.environ["CUDA_VISIBLE_DEVICES"] = "2" 9 | 10 | import datetime 11 | import numpy as np 12 | import tensorflow as tf 13 | import matplotlib.pyplot as plt 14 | 15 | from lorenz import Lorenz 16 | from training import TrainModel, get_callbacks, load_model 17 | from basic_params import params 18 | from basic_run import generate_data, get_hyperparameter_list 19 | from analyze import get_checkpoint_names 20 | 21 | params['case'] = 'initv_intloss' 22 | params['model'] = 'lorenz' 23 | params['input_dim'] = 128 24 | params['tend'] = 30 25 | params['dt'] = 0.001 26 | params['n_ics'] = 30 27 | params['batch_size'] = 500 28 | params['coefficient_initialization'] = 'true' 29 | params['sindy_pert'] = 5.0 30 | params['max_epochs'] = 3000 31 | params['loss_weight_x0'] = 0.2 32 | params['loss_weight_sindy_x'] = 0.001 33 | params['loss_weight_sindy_z'] = 0.001 34 | params['loss_weight_sindy_regularization'] = 1e-5 35 | params['loss_weight_integral'] = 0.3 36 | params['loss_weight_reconstruction'] = 0.1 37 | params['svd_dim'] = None # try 7 38 | params['scale'] = False # try true 39 | params['widths_ratios'] = [0.5, 0.25] 40 | params['sparse_weighting'] = None 41 | params['normalization'] = [1/40, 1/40, 1/40] 42 | params['data_path'] = '/home/joebakarji/delay-auto/main/examples/data/' 43 | params['loss_weight_layer_l2'] = 1e-7 44 | params['loss_weight_layer_l1'] = 0.0 45 | params['use_bias'] = True 46 | 47 | ## UNLock parameters 48 | params['save_checkpoints'] = True 49 | params['save_freq'] = 1 50 | params['patience'] = 30 51 | params['fix_coefs'] = False 52 | params['trainable_auto'] = True 53 | 54 | 55 | hyperparams = { 56 | 'loss_weight_integral': [0.0, 1.0], 57 | 'sindy_pert' : [0.0, 1.0, 3.0, 5.0, 7.0, 10.0, 12.0, 13.5, 15.0, 20.0], 58 | 'widths_ratios' : [[0.5, 0.25]], 59 | 'learning_rate' : [2e-3], 60 | 'dt': [0.001], 61 | 'poly_order' : [2] 62 | } 63 | 64 | hyperparams_list = get_hyperparameter_list(hyperparams) 65 | for hyperp in hyperparams_list: 66 | for key, val in hyperp.items(): 67 | params[key] = val 68 | 69 | # Generate data 70 | data = generate_data(params) 71 | 72 | trainer = TrainModel(data, params) 73 | print(trainer.savename) 74 | 75 | 76 | train_data, test_data = trainer.get_data() 77 | trainer.save_params() 78 | print(trainer.params) 79 | 80 | # Create directory and file name 81 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename), 
exist_ok=True)
82 | os.makedirs(os.path.join(trainer.params['data_path'], trainer.savename, 'checkpoints'), exist_ok=True)
83 | 
84 | # LOAD h2v encoder and decoder
85 | filename = 'results_202111012031_lorenz_h2v_evolution_encoder'
86 | checkpoint_path = trainer.params['data_path']+filename+'/checkpoints/'
87 | cp_files = get_checkpoint_names(checkpoint_path)
88 | dec = []
89 | enc = []
90 | for name in cp_files:
91 |     nametype = name.split('-')[1]
92 |     if nametype == 'dec':
93 |         dec.append(name)
94 |     elif nametype == 'enc':
95 |         enc.append(name)
96 | dec.sort()
97 | enc.sort()
98 | dec_conv = dec[-1]
99 | enc_conv = enc[-1]
100 | 
101 | 
102 | # Get model AFTER setting parameters
103 | trainer.model = trainer.get_model()
104 | trainer.model.predict(test_data)
105 | trainer.model.encoder.load_weights(checkpoint_path + enc_conv + '.ckpt').expect_partial()
106 | trainer.model.decoder.load_weights(checkpoint_path + dec_conv + '.ckpt').expect_partial()
107 | 
108 | # Build model and fit
109 | optimizer = tf.keras.optimizers.Adam(learning_rate=trainer.params['learning_rate'])  # `lr` kwarg is deprecated in TF2
110 | trainer.model.compile(optimizer=optimizer, loss='mse')
111 | 
112 | callback_list = get_callbacks(trainer.params, trainer.savename)
113 | trainer.history = trainer.model.fit(
114 |     x=train_data, y=train_data,
115 |     batch_size=trainer.params['batch_size'],
116 |     epochs=trainer.params['max_epochs'],
117 |     validation_data=(test_data, test_data),
118 |     callbacks=callback_list,
119 |     shuffle=True)
120 | 
121 | # Save locked model
122 | prediction = trainer.model.predict(test_data)
123 | trainer.save_results(trainer.model)
124 | --------------------------------------------------------------------------------
/testcases/lorenzww_basic.py:
--------------------------------------------------------------------------------
1 | import pdb
2 | import json
3 | import numpy as np
4 | from aesindy.training import TrainModel
5 | from aesindy.solvers import RealData
6 | from default_params import params
7 | from aesindy.config import ROOTPATH
8 | 
9 | import os
10 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
11 | os.environ["CUDA_VISIBLE_DEVICES"] = "3"
12 | 
13 | 
14 | params['model'] = 'lorenzww'
15 | params['case'] = 'basic'
16 | params['system_coefficients'] = None
17 | params['noise'] = 0.0
18 | params['input_dim'] = 80
19 | params['poly_order'] = 2
20 | params['include_sine'] = False
21 | params['fix_coefs'] = False
22 | 
23 | params['interpolate'] = True
24 | params['interp_dt'] = 0.01
25 | params['interp_kind'] = 'cubic'
26 | params['interp_coefs'] = [21, 3]
27 | 
28 | params['save_checkpoints'] = True
29 | params['save_freq'] = 5
30 | 
31 | params['print_progress'] = True
32 | params['print_frequency'] = 5
33 | 
34 | # training time cutoffs
35 | params['max_epochs'] = 300
36 | params['patience'] = 10
37 | 
38 | # loss function weighting
39 | params['loss_weight_rec'] = 0.3
40 | params['loss_weight_sindy_z'] = 0.001
41 | params['loss_weight_sindy_x'] = 0.001
42 | params['loss_weight_sindy_regularization'] = 1e-5
43 | params['loss_weight_integral'] = 0.1
44 | params['loss_weight_x0'] = 0.01
45 | params['loss_weight_layer_l2'] = 0.0
46 | params['loss_weight_layer_l1'] = 0.0
47 | 
48 | ## Read data
49 | output_json = json.load(open(os.path.join(ROOTPATH, 'data/lorenzww.json')))
50 | data = {
51 |     'time': [np.array(time) for time in output_json['times']],
52 |     'dt': output_json['times'][0][1]-output_json['times'][0][0],
53 |     'x': [np.array(x) for x in output_json['omegas']],
54 |     'dx': [np.array(x) for x in output_json['domegas']],
55 | }
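# --- Editor's aside (not part of the original script) -----------------------
# The dict above is the interface RealData.build_solution() consumes: parallel
# lists of per-trajectory time and measurement arrays plus a scalar dt ('dx'
# appears to be optional, since fluttering.py builds the dict without it).
# A synthetic one-trajectory stand-in would look like:
#
#   t = np.linspace(0, 10, 1001)
#   demo_data = {'time': [t], 'dt': t[1] - t[0],
#                'x': [np.sin(t)], 'dx': [np.cos(t)]}
# -----------------------------------------------------------------------------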
56 | params['dt'] = data['dt']
57 | params['tend'] = data['time'][-1][-1]
58 | params['n_ics'] = len(data['x'])
59 | 
60 | 
61 | 
62 | R = RealData(input_dim=params['input_dim'],
63 |              interpolate=params['interpolate'],
64 |              interp_dt=params['interp_dt'],
65 |              savgol_interp_coefs=params['interp_coefs'],
66 |              interp_kind=params['interp_kind'])
67 | 
68 | R.build_solution(data)
69 | 
70 | 
71 | trainer = TrainModel(R, params)
72 | trainer.fit()
73 | --------------------------------------------------------------------------------
/testcases/pendulum_basic.py:
--------------------------------------------------------------------------------
1 | import pdb
2 | import numpy as np
3 | from aesindy.solvers import SynthData
4 | from aesindy.training import TrainModel
5 | from default_params import params
6 | 
7 | import os
8 | os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
9 | os.environ["CUDA_VISIBLE_DEVICES"] = "0"
10 | 
11 | 
12 | params['model'] = 'pendulum'
13 | params['case'] = 'basic'
14 | params['system_coefficients'] = [9.8, 10]
15 | params['noise'] = 0.0
16 | params['input_dim'] = 80
17 | params['dt'] = np.sqrt(params['system_coefficients'][0]/params['system_coefficients'][1])/params['input_dim']/10
18 | params['tend'] = 4
19 | params['n_ics'] = 30
20 | params['poly_order'] = 1
21 | params['include_sine'] = True
22 | params['fix_coefs'] = False
23 | 
24 | params['save_checkpoints'] = True
25 | params['save_freq'] = 1
26 | 
27 | params['print_progress'] = True
28 | params['print_frequency'] = 5
29 | 
30 | # training time cutoffs
31 | params['max_epochs'] = 3000
32 | params['patience'] = 70
33 | 
34 | # loss function weighting
35 | params['loss_weight_rec'] = 0.3
36 | params['loss_weight_sindy_z'] = 0.001
37 | params['loss_weight_sindy_x'] = 0.001
38 | params['loss_weight_sindy_regularization'] = 1e-5
39 | params['loss_weight_integral'] = 0.1
40 | params['loss_weight_x0'] = 0.01
41 | params['loss_weight_layer_l2'] = 0.0
42 | params['loss_weight_layer_l1'] = 0.0
43 | 
44 | 
45 | 
46 | S = SynthData(model=params['model'],
47 |               args=params['system_coefficients'],
48 |               noise=params['noise'],
49 |               input_dim=params['input_dim'],
50 |               normalization=params['normalization'])
51 | S.run_sim(params['n_ics'], params['tend'], params['dt'])
52 | 
53 | trainer = TrainModel(S, params)
54 | trainer.fit()
55 | --------------------------------------------------------------------------------
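Editor's note: the test cases above all follow one pattern — start from default_params, synthesize or load data, then hand both to TrainModel. A minimal end-to-end sketch for the Lorenz system (parameter values are illustrative and untuned; 'lorenz' as a SynthData model name and the 1/40 normalization are taken from the autolock/h2z scripts above, and whether 'lorenz' needs explicit system_coefficients is an assumption):

import numpy as np
from aesindy.solvers import SynthData
from aesindy.training import TrainModel
from default_params import params

params['model'] = 'lorenz'
params['case'] = 'demo'
params['input_dim'] = 128
params['dt'] = 0.001
params['tend'] = 20
params['n_ics'] = 5
params['latent_dim'] = 3
params['normalization'] = [1/40, 1/40, 1/40]   # scaling used by the lorenz scripts above

S = SynthData(model=params['model'],
              args=params['system_coefficients'],   # None: assumes built-in lorenz defaults
              noise=params['noise'],
              input_dim=params['input_dim'],
              normalization=params['normalization'])
S.run_sim(params['n_ics'], params['tend'], params['dt'])

trainer = TrainModel(S, params)
trainer.fit()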