├── .gitignore ├── Description.pdf ├── Description.tex ├── OrbitFG.py ├── OrbitSim.py ├── README.md ├── ekf_slide_example_res.npz ├── fg_slide_example_res.npz ├── gen_accuracy_figs.py ├── gen_timing_figs.py ├── orbitEKF.py ├── orbitSW.py ├── requirements.yml ├── runSims.py ├── slide_example.npz ├── slide_example2.npz └── timing_res_1801_3.npz /.gitignore: -------------------------------------------------------------------------------- 1 | .vscode/ 2 | *.aux 3 | *.fdb_latexmk 4 | *.fls 5 | *.log 6 | *.synctex.gz 7 | *.bbl 8 | *.blg -------------------------------------------------------------------------------- /Description.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cntaylor/factorGraph2DsatelliteExample/93e54f86fd1feacc3f05b703975f0d8dc6a88390/Description.pdf -------------------------------------------------------------------------------- /Description.tex: -------------------------------------------------------------------------------- 1 | \documentclass{article} 2 | \usepackage{amsmath} 3 | \usepackage[ruled]{algorithm} 4 | \usepackage{algpseudocode} 5 | \newcommand{\mb}[1]{\mathbf{#1}} 6 | \newcommand{\pfrac}[2]{\frac{\partial #1}{\partial #2}} 7 | 8 | \title{A 2D Satellite Example} 9 | \author{Clark Taylor} 10 | \date{} 11 | 12 | \begin{document} 13 | \section{Introduction} 14 | To compare the results of an EKF, a full factor graph, and a sliding window factor graph, we have created a 2D satellite example. This model is designed to be similar to a satellite tracking problem. The basic idea is that the Earth is a point in space at (0,0) with a bearing-only sensor observing a satellite that is orbiting the Earth. The state of the satellite consists of its position and velocity, and the only force acting on it is gravity. The magnitude of the acceleration due to gravity is: 15 | \begin{equation} 16 | \frac{G_E}{r^2} 17 | \end{equation} 18 | where we use the value $G_E=3.986\times 10^{14}\,\frac{m^3}{s^2}$ and $r$ is the distance from the Earth to the satellite. 19 | 20 | With this information, we can define the overall system as: 21 | \begin{equation} 22 | \begin{split} 23 | \mb{x} &:= [x\ y\ v_x\ v_y]\\ 24 | \dot{\mb{x}} &= \begin{bmatrix} 25 | 0&0&1&0 \\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0 26 | \end{bmatrix}\mb{x} - \frac{G_E}{(x^2+y^2)^\frac{3}{2}}\begin{bmatrix}0\\0\\x\\y\end{bmatrix}+\nu\\ 27 | z &= \arctan\frac{y}{x}+\eta 28 | \end{split} 29 | \end{equation} 30 | where $x$, $y$, $v_x$ and $v_y$ are the x and y position and the x and y velocity of the satellite, respectively. $z$ is the measurement of the satellite's angle as seen from the Earth, and $\nu$ and $\eta$ are white, zero-mean Gaussian noise processes with covariances $\mb{Q}$ and $\mb{R}$, respectively. Note that the distance of the satellite from the Earth (for computing the effect of gravity) can be computed from $x$ and $y$, i.e. $r = \sqrt{x^2+y^2}$.
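For completeness, the vector equation above is just the following scalar dynamics written compactly (noise terms omitted):
\begin{equation}
\dot{x} = v_x, \qquad \dot{y} = v_y, \qquad \dot{v}_x = -\frac{G_E\, x}{\left(x^2+y^2\right)^{3/2}}, \qquad \dot{v}_y = -\frac{G_E\, y}{\left(x^2+y^2\right)^{3/2}}
\end{equation}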
31 | 32 | \section{Computing derivatives} 33 | \subsection{Computing $\mb{H}$} 34 | Starting with: 35 | \begin{equation} 36 | \begin{split} 37 | h(\mb{x}) &= \arctan\frac{y}{x}\\ 38 | \frac{\partial h(\mb{x})}{\partial \mb{x}} &= 39 | \frac{1}{1+\left(\frac{y}{x}\right)^2} 40 | \begin{bmatrix} 41 | -\frac{y}{x^2}\\\frac{1}{x}\\0\\0 42 | \end{bmatrix}\\ 43 | &= \begin{bmatrix} 44 | -\frac{y}{x^2+y^2}\\\frac{x}{x^2+y^2}\\0\\0 45 | \end{bmatrix} 46 | \end{split} 47 | \end{equation} 48 | The transpose of this gradient is the $1\times 4$ measurement Jacobian $\mb{H}$ used by the estimators. 49 | \subsection{Computing $\mb{F}$} 50 | Starting with: 51 | \begin{equation} 52 | \begin{split} 53 | \mb{T} &= \begin{bmatrix} 54 | 1&0&dt&0\\ 55 | 0&1&0&dt\\ 56 | 0&0&1&0\\ 57 | 0&0&0&1 58 | \end{bmatrix}\\ 59 | \mb{x}_{k+1} &= \mb{T}\mb{x}_k - 60 | \frac{G_E}{(x^2+y^2)^\frac{3}{2}}\begin{bmatrix}\frac{x}{2} dt^2\\\frac{y}{2}dt^2\\x dt\\y dt\end{bmatrix} 61 | \end{split} 62 | \end{equation} 63 | Now, taking the derivative of just the first output of this function w.r.t. $x$ within $\mb{x}$ gives: 64 | \begin{equation} 65 | \begin{split} 66 | \pfrac{x_{k+1}}{x_k} &= 1 - \frac{G_E dt^2}{2}\left(\frac{1}{(x^2+y^2)^\frac{3}{2}}-\frac{3x^2}{(x^2+y^2)^{\frac{5}{2}}}\right)\\ 67 | &= 1- \frac{G_E dt^2}{2(x^2+y^2)^\frac{3}{2}}\left(1-\frac{3x^2}{x^2+y^2}\right)\\ 68 | &= 1- \frac{G_E dt^2}{2(x^2+y^2)^\frac{5}{2}}\left(y^2-2x^2\right)\\ 69 | \end{split} 70 | \end{equation} 71 | Note that the first part of this equation (the 1) comes from the corresponding entry in $\mb{T}$. 72 | 73 | Computing the same derivative w.r.t. $y$ gives: 74 | \begin{equation} 75 | \begin{split} 76 | \pfrac{x_{k+1}}{y_k} &= -\frac{G_E dt^2}{2}\left(-\frac{3xy}{(x^2+y^2)^{\frac{5}{2}}}\right)\\ 77 | &= \frac{3 G_E dt^2 xy}{2(x^2+y^2)^{\frac{5}{2}}} 78 | \end{split} 79 | \end{equation} 80 | 81 | Expanding this out to every entry of the state vector gives: 82 | \begin{equation} 83 | \pfrac{\mb{x}_{k+1}}{\mb{x}_k} = \mb{T} + \frac{G_E dt}{(x^2+y^2)^{\frac{5}{2}}} 84 | \begin{bmatrix} 85 | -\frac{dt (y^2-2x^2)}{2} & \frac{3xy\, dt }{2} & 0 & 0\\ 86 | \frac{3xy\,dt}{2} & -\frac{dt (x^2-2y^2)}{2} & 0 & 0\\ 87 | 2x^2-y^2 & 3xy & 0 & 0\\ 88 | 3xy & 2y^2-x^2 & 0 & 0 89 | \end{bmatrix} 90 | \end{equation} 91 | 92 | \section{Handling ``angular'' innovations} 93 | As you run your code, you may find that the estimate occasionally just ``goes off'' (my very scientific term for the errors suddenly becoming very large). This can happen because an arctangent is used for the measurement function: the actual measurement may be an angle just below $\pi$ while the predicted measurement is just above $-\pi$ (or vice versa), so the two directions are nearly identical but the raw innovation is close to $\pm 2\pi$. After computing the innovation vector, I would suggest adding code implementing the following pseudo-code.
94 | 95 | \begin{algorithm} 96 | \caption{Accounting for Angle Wrap} 97 | \begin{algorithmic}[1] 98 | \Procedure{AngleWrap}($\theta$): 99 | \If{$\theta>\pi$} 100 | \State $\theta \gets \theta-2\pi$ 101 | \ElsIf{$\theta < -\pi$} 102 | \State $\theta \gets \theta+2\pi$ 103 | \EndIf 104 | \State \textbf{return} $\theta$ 105 | \EndProcedure 106 | \end{algorithmic} 107 | \end{algorithm} 108 | 109 | \end{document} -------------------------------------------------------------------------------- /OrbitFG.py: -------------------------------------------------------------------------------- 1 | import scipy.sparse as sp 2 | import scipy.sparse.linalg as spla 3 | import numpy as np 4 | import scipy.linalg as la 5 | import matplotlib.pyplot as plt 6 | from math import pi, sqrt, ceil, atan2 7 | 8 | 9 | ''' 10 | To create a sparse matrix, you have to create three "parallel" arrays, 11 | one with row indices, one with column indices, and one with the actual 12 | values to put in the matrix. This is a helper function to make this process easier. 13 | ''' 14 | def dense_2_sp_lists(M: np.array, tl_row : int, tl_col: int, row_vec=True): 15 | ''' 16 | This function takes in a dense matrix and turns it into a flat array. 17 | Corresponding to that array are row and column entries with 18 | tl_row and tl_col giving the top left of all the entries 19 | 20 | Inputs: 21 | M : The dense matrix to convert (np.array) 22 | tl_row: the top row for the matrix (int) 23 | tl_col: the left-most column for the matrix (int) 24 | row_vec: In the corner case where M is 1d, should 25 | it be a row or column vector? 26 | 27 | Returns: a tuple with 3 lists (np.array) in it 28 | ''' 29 | data_list = M.flatten() 30 | if len(M.shape)==2: 31 | rows,cols = M.shape 32 | elif len(M.shape)==1: 33 | if row_vec: 34 | rows=1 35 | cols=len(M) 36 | else: 37 | cols=1 38 | rows=len(M) 39 | else: 40 | assert False, 'M must be 1d or 2d!' 41 | row_list = np.zeros(len(data_list)) 42 | col_list = np.zeros(len(data_list)) 43 | for i in range(rows): 44 | for j in range(cols): 45 | idx = i*cols + j 46 | row_list[idx] = i+tl_row 47 | col_list[idx] = j+tl_col 48 | return (data_list,row_list,col_list) 49 | 50 | ''' 51 | L's columns will be 4*N wide, where N is the number of timesteps (the state is 4-dimensional). 52 | Rows will be organized dynamics first, then measurements: 53 | ''' 54 | class satelliteModelBatch: 55 | ''' 56 | A class to perform the factor graph optimization.
It stores 57 | all the measurements in the class, has several "helper" functions, and the 58 | "opt" function that does the actual optimization using the helper functions 59 | ''' 60 | 61 | def __init__(self, meas: np.array, R, Q, 62 | dt: float = 5, 63 | x0:np.array = np.array([0, 2E7, 4500, 0])): 64 | ''' 65 | Get the measurements, dt (for running the dynamics), 66 | the R & Q matrices, and the initial location and initialize 67 | the hidden variables 68 | ''' 69 | self.N = len(meas) 70 | # self.N = 10 71 | self.dt = dt 72 | # To do more accurate dynamics, if dt is large, use multiple, smaller 73 | # timesteps instead 74 | if self.dt > 1: 75 | self.prop_dt = self.dt / ceil(self.dt) 76 | self.n_timesteps = int(ceil(self.dt)) 77 | else: 78 | self.prop_dt = self.dt 79 | self.n_timesteps=1 80 | self.T = np.array([1,0,self.prop_dt,0, 0,1.,0,self.prop_dt, 0,0,1,0, 0,0,0,1]).reshape((4,4)) 81 | self.GE = 3.986E14 82 | 83 | self.meas = meas 84 | self.S_Q_inv = la.inv(la.cholesky(Q)) 85 | self.S_R_inv = 1/sqrt(R) 86 | self.states = np.zeros((self.N,4)) 87 | self.states[0] = x0 88 | self.create_init_state() 89 | 90 | def create_init_state(self): 91 | ''' 92 | Take x0 and propagate it forward through all timesteps 93 | ''' 94 | for i in range(1,self.N): 95 | self.states[i] = self.prop_one_timestep(self.states[i-1]) 96 | 97 | def prop_one_timestep(self, state): 98 | ''' 99 | This runs the actual dynamics, i.e. given a value for x (really x_k) find 100 | what x should be (by dynamics) at the next timestep (self.dt forward in time) -- x_{k+1} 101 | ''' 102 | going_out = state.copy() 103 | for _ in range(self.n_timesteps): 104 | dist = la.norm(going_out[:2]) 105 | add_vec = self.GE*self.prop_dt/(dist**3) *\ 106 | np.array([going_out[0]/2*self.prop_dt, 107 | going_out[1]/2*self.prop_dt, 108 | going_out[0], 109 | going_out[1]]) 110 | going_out = self.T.dot(going_out) - add_vec 111 | return going_out 112 | 113 | 114 | ''' 115 | Helper functions to tell what rows you are indexing into in the 'L' matrix 116 | ''' 117 | def state_idx(self, i: int) -> int: 118 | return 4*i 119 | 120 | def dyn_idx(self, i:int) -> int: 121 | ''' 122 | This function takes the dynamics index, where this related 123 | timestep i-1 to timestep i 124 | 125 | Input: i-- integer ranging from 1 to N 126 | Returns a row index (integer) 127 | ''' 128 | return 4*(i-1) 129 | 130 | def meas_idx(self, i:int) -> int: 131 | return (self.N-1)*4 + i 132 | 133 | def H_mat(self, state): 134 | ''' 135 | Take in a current state and return the H matrix (measurement derivative matrix) 136 | Inputs: state -- a 3-vector (np.array) 137 | Returns a n_measurements X 3 matrix (np.array) 138 | ''' 139 | going_out = np.zeros(4) 140 | dist_sq = state[0]*state[0] + state[1]*state[1] 141 | going_out[0] = -state[1]/dist_sq 142 | going_out[1] = state[0]/dist_sq 143 | return going_out 144 | 145 | def F_mat(self,state): 146 | ''' 147 | Takes in a current state and finds the derivative of the 148 | propagate forward one step function (prop_one_timestep) 149 | 150 | Inputs: state -- a 4-vector (np.array) 151 | 152 | Returns a 4x4 np.array 153 | ''' 154 | F = np.eye(4) 155 | curr_state = state.copy() 156 | for _ in range(self.n_timesteps): 157 | x = curr_state[0] 158 | y = curr_state[1] 159 | dist = la.norm(curr_state[:2]) 160 | k = self.prop_dt*self.GE/(dist**5) 161 | 162 | t_mat = np.array([[-self.prop_dt*(y**2-2*x**2)/2, 3*x*y*self.prop_dt/2.0, 0, 0], 163 | [3*x*y*self.prop_dt/2.0, -self.prop_dt*(x**2-2*y**2)/2, 0, 0], 164 | [2*x**2-y**2, 3*x*y, 0, 0], 165 | [3*x*y, 
2*y**2-x**2, 0, 0]]) 166 | F = (self.T+ k*t_mat).dot(F) 167 | #Now update the state as well 168 | add_vec = self.GE*self.prop_dt/(dist**3) *\ 169 | np.array([curr_state[0]/2*self.prop_dt, 170 | curr_state[1]/2*self.prop_dt, 171 | curr_state[0], 172 | curr_state[1]]) 173 | curr_state = self.T.dot(curr_state) - add_vec 174 | 175 | return F 176 | 177 | def create_L(self): 178 | ''' 179 | This creates the big matrix (L) given the current state of the whole system 180 | ''' 181 | # First, determine how many non zero entries (nnz_entries) will be in the 182 | # sparse matrix. Then create the 3 parallel arrays that will be used to 183 | # form this matrix 184 | H_size = 4 185 | F_size=16 # Should be state size**2 186 | nnz_entries = 2*F_size*(self.N-1) + H_size*self.N 187 | data_l = np.zeros(nnz_entries) 188 | row_l = np.zeros(nnz_entries,dtype=int) 189 | col_l = np.zeros(nnz_entries,dtype=int) 190 | t_e = 0 #total number of entries so far 191 | # Put all the dynamics entries into L 192 | for i in range(1,self.N): 193 | mat1 = self.S_Q_inv.dot(self.F_mat(self.states[i-1])) 194 | data_l[t_e:t_e+F_size], row_l[t_e:t_e+F_size], col_l[t_e:t_e+F_size] = \ 195 | dense_2_sp_lists(mat1,self.dyn_idx(i),self.state_idx(i-1)) 196 | mat2 = -self.S_Q_inv 197 | t_e +=F_size 198 | data_l[t_e:t_e+F_size], row_l[t_e:t_e+F_size], col_l[t_e:t_e+F_size] = \ 199 | dense_2_sp_lists(mat2,self.dyn_idx(i),self.state_idx(i)) 200 | t_e +=F_size 201 | 202 | # Now do the measurements 203 | for i in range(self.N): 204 | # for S_R_inv a scalar 205 | mat = self.S_R_inv*self.H_mat(self.states[i]) 206 | data_l[t_e:t_e+H_size], row_l[t_e:t_e+H_size], col_l[t_e:t_e+H_size] = \ 207 | dense_2_sp_lists(mat,self.meas_idx(i),self.state_idx(i)) 208 | t_e += H_size 209 | 210 | return sp.csr_matrix((data_l,(row_l,col_l))) 211 | 212 | def create_y(self, state_vec=None): 213 | ''' 214 | Compute the residual vector. Can compute it for the 215 | current state (pass in "None") or for a new state that you 216 | want to test without setting the internal state. 217 | ''' 218 | if state_vec is not None: 219 | state_data = self.vec_to_data(state_vec) 220 | else: 221 | state_data = self.states 222 | y = np.zeros(4*(self.N-1)+self.N) 223 | for i in range(1,self.N): 224 | # predicted measurement for dynamics is f(x_{k-1})-x_k 225 | pred_meas = self.prop_one_timestep(state_data[i-1])-state_data[i] 226 | y[self.dyn_idx(i):self.dyn_idx(i+1)]=self.S_Q_inv.dot(-pred_meas) 227 | # Now do the measurements received 228 | for i in range(self.N): 229 | pred_meas = atan2(state_data[i,1],state_data[i,0]) 230 | tmp= self.meas[i]-pred_meas 231 | if tmp > pi: 232 | tmp -= 2*pi 233 | if tmp < -pi: 234 | tmp += 2*pi 235 | y[self.meas_idx(i):self.meas_idx(i+1)] = self.S_R_inv * tmp 236 | return y 237 | 238 | def vec_to_data(self,vec): 239 | going_out = np.zeros((self.N,4)) 240 | for i in range(self.N): 241 | going_out[i] = vec[i*4:(i+1)*4] 242 | return going_out 243 | 244 | def add_delta(self,delta_x: np.array = None) -> np.array: 245 | ''' 246 | This takes the current state and adds on delta_x. 
247 | It DOES NOT modify any internal state 248 | 249 | Inputs: delta_x, a 4*self.N vector (np.array) 250 | Returns: a full state vector of the same size 251 | ''' 252 | going_out = np.zeros(self.N*4) 253 | if delta_x is None: 254 | delta_x = np.zeros(self.N*4) 255 | for i in range(self.N): 256 | going_out[i*4:(i+1)*4] = self.states[i]+ \ 257 | delta_x[i*4:(i+1)*4] 258 | 259 | return going_out 260 | 261 | def update_state(self,delta_x): 262 | ''' 263 | Changes the internal states data structure with the 264 | delta_x that is included here 265 | 266 | Returns nothing (only modifies internal state) 267 | ''' 268 | for i in range(self.N): 269 | self.states[i] += delta_x[i*4:(i+1)*4] 270 | 271 | def opt(self): 272 | ''' 273 | Create the Jacobian matrix (L) and the residual vector (y) for 274 | the current state. Find the best linear approximation to minimize y 275 | and move in that direction. Repeat until convergence. 276 | (This is a damped Gauss-Newton optimization procedure) 277 | ''' 278 | finished=False 279 | num_iters=0 280 | while not finished: 281 | L= self.create_L() 282 | # print ('L shape is',L.shape) 283 | # plt.spy(L,markersize=1) 284 | # plt.show() 285 | # for i in range(9): 286 | # print('For column',i) 287 | # test_Jacobian(self,i,.0001) 288 | # test_Jacobian(self,2) 289 | 290 | y= self.create_y() 291 | M = L.T.dot(L) 292 | Lty = L.T.dot(y) 293 | delta_x = spla.spsolve(M,Lty) 294 | scale = 1 295 | # Damp the Gauss-Newton step if it doesn't do what the linearization predicts 296 | scale_good = la.norm(delta_x) < 10 # if the step is already small, just take it and don't even check 297 | while not scale_good: 298 | next_y = self.create_y(self.add_delta(delta_x*scale)) 299 | pred_y = y-L.dot(delta_x*scale) 300 | y_mag = y.T.dot(y) 301 | ratio = (y_mag - next_y.dot(next_y))/(y_mag-pred_y.dot(pred_y)) 302 | if ratio < 4. and ratio > .25: 303 | scale_good = True 304 | else: 305 | scale /= 2.0 306 | if scale < .001: 307 | print('Your derivatives are probably wrong! scale is',scale) 308 | assert(scale > 1E-6) 309 | num_iters+=1 310 | self.update_state(delta_x*scale) 311 | print('iteration',num_iters,'delta_x length was',la.norm(delta_x*scale), 'scale was',scale) 312 | finished = la.norm(delta_x)<12 or num_iters > 100 313 | 314 | 315 | def test_Jacobian(batch_uni, col, dx = .001): 316 | ''' 317 | This function is useful for debugging. If things don't work, try to 318 | figure out where the derivative matrix is wrong.
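With the residual convention used in create_y and opt, y(x + dx) is approximately y(x) - L*dx, so the numerical derivative num_J computed below should be close to -J; the printed difference num_J + J should therefore be near zero when the analytic Jacobian is correct.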
319 | ''' 320 | curr_y = batch_uni.create_y() 321 | orig_state = batch_uni.add_delta() 322 | delta_x = np.zeros(orig_state.shape) 323 | delta_x[col] = dx 324 | next_state = batch_uni.add_delta(delta_x) 325 | next_y = batch_uni.create_y(next_state) 326 | num_J = (next_y-curr_y)/dx 327 | J = batch_uni.create_L()[:,col].toarray() 328 | J = J.reshape(num_J.shape) 329 | diff = num_J+J 330 | print(diff) 331 | print(np.max(diff)) 332 | print(np.min(diff)) 333 | 334 | 335 | if __name__ == '__main__': 336 | prefix='slide_example' 337 | data = np.load(f'{prefix}.npz') 338 | meas = data['meas'] 339 | truth = data['truth'] 340 | dt = data['dt'] 341 | R = data['R'] 342 | Q = data['Q'] 343 | plt.plot(truth[:,0],truth[:,1],'r',label='truth') 344 | 345 | opt_class = satelliteModelBatch(meas, R, Q, dt) 346 | # print (" a sample F") 347 | # print(opt_class.F_mat(opt_class.states[10])) 348 | plt.plot(opt_class.states[:,0],opt_class.states[:,1],'g',label='orig') 349 | 350 | opt_class.opt() 351 | plt.plot(opt_class.states[:,0],opt_class.states[:,1],'b',label='opt') 352 | plt.legend() 353 | plt.savefig(f'{prefix}_res_new.png') 354 | 355 | plt.figure() 356 | plt.plot(opt_class.states[:,0],opt_class.states[:,1],c='b', label='estimate') 357 | plt.plot(truth[:,0],truth[:,1],'r--',label='truth') 358 | plt.legend() 359 | ax=plt.gca() 360 | ax.set_aspect('equal') 361 | plt.savefig('FG_'+prefix+'_new.png') 362 | plt.show() 363 | 364 | 365 | plt.figure() 366 | plt.plot(opt_class.states-truth) 367 | plt.legend (['x','y','vx','vy']) 368 | plt.title('errors') 369 | plt.savefig(f'{prefix}_errors_new.png') 370 | plt.show() 371 | 372 | np.savez('fg_'+prefix+'_res_new',fg_res=opt_class.states, truth=truth) 373 | 374 | 375 | 376 | # ######### Test if dense_2_sp_lists is working ########## 377 | # # Create a block diagonal matrix, plus two matrices off the diagonal 378 | # matrix1 = np.array([0,1.,1.,0]).reshape((2,2)) 379 | # big_data_l = np.zeros(7*4) 380 | # big_row_l = np.zeros(7*4,dtype=int) 381 | # big_col_l = np.zeros(7*4,dtype=int) 382 | # # The block diagonal matrices 383 | # for i in range(5): 384 | # big_data_l[i*4:(i+1)*4],big_row_l[i*4:(i+1)*4],big_col_l[i*4:(i+1)*4] = \ 385 | # dense_2_sp_lists(matrix1,i*2,i*2) 386 | # # A matrix in the top right 387 | # i=5 388 | # matrix2= np.eye(2) 389 | # big_data_l[i*4:(i+1)*4],big_row_l[i*4:(i+1)*4],big_col_l[i*4:(i+1)*4] = \ 390 | # dense_2_sp_lists(matrix2,0,8) 391 | # # A matrix at the bottom left 392 | # i=6 393 | # big_data_l[i*4:(i+1)*4],big_row_l[i*4:(i+1)*4],big_col_l[i*4:(i+1)*4] = \ 394 | # dense_2_sp_lists(matrix2,8,0) 395 | 396 | # L = sp.csr_matrix((big_data_l,(big_row_l,big_col_l))) 397 | 398 | # plt.spy(L) 399 | # plt.show() 400 | 401 | -------------------------------------------------------------------------------- /OrbitSim.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import scipy.linalg as la 3 | import matplotlib.pyplot as plt 4 | from math import pi, tan, atan2 5 | 6 | ''' 7 | This assumes a simplified (2d) orbit model. 8 | Earth is 0-sized, and located at (0,0). There are no 9 | "inputs". Just a "free falling" satellite orbit. 
10 | 11 | Gravity is modeled as G_E/r^2, where G_E is 3.986E14 12 | 13 | The code should generate a data file that 14 | includes the following fields: 15 | 16 | * R 17 | * Q 18 | * dt 19 | * meas 20 | * x0 21 | * P0 22 | * truth 23 | ''' 24 | 25 | def meas(state,R): 26 | perfect_meas = atan2(state[1],state[0]) 27 | return perfect_meas + la.cholesky(R,lower=True).dot(np.random.randn()) 28 | 29 | G_E = 3.986E14 30 | # Let's define system parameters first 31 | dt=15 # A measurement occurs every 15 seconds 32 | 33 | R = (.2 * pi/180.)**2 # 1/5 degree error 34 | Q = np.diag([.01, .01, .0025, .0025]) # .1m and .05m/s random walk noise 35 | num_steps = 1800 # 7.5 hours (1800 steps at 15 s each) 36 | # Medium Earth orbit is between 2000km and 35000km. 37 | # https://en.wikipedia.org/wiki/Medium_Earth_orbit 38 | # We'll set a mean of 20,000 km, with a standard deviation 39 | # of 3000km 40 | # x will have standard deviation of 1000km 41 | # if at 20,000km, a circular orbit would be 42 | # v^2 / 20,000,000 = G_E/(20,000,000^2) -> v = 4464.3 (we'll use 4500) m/s 43 | # standard deviation of 800 m/s 44 | x0 = np.array([0, 2E7, 4500, 0]) 45 | P0 = np.diag([1E6**2, 3E6**2, 800**2, 800**2]) 46 | 47 | curr_x = x0 + la.cholesky(P0,lower=True).dot(np.random.randn(4)) 48 | true_state = np.zeros((num_steps+1,4)) 49 | measurements = np.zeros((num_steps+1)) 50 | measurements[0] = meas(curr_x,R) 51 | true_state[0] = curr_x 52 | 53 | # Let's now propagate 54 | dt_divider=150 55 | my_dt = dt/dt_divider 56 | my_Q = Q*my_dt 57 | 58 | F = np.array([[1,0,my_dt,0],[0,1,0,my_dt],[0,0,1,0],[0,0,0,1]]) 59 | for i in range(num_steps): 60 | comp_Q = np.zeros((4,4)) 61 | for _ in range(dt_divider): 62 | accel = -G_E *curr_x[:2]/(la.norm(curr_x[:2])**3) 63 | move_accel = np.concatenate([accel * 0.5*my_dt**2, my_dt*accel]) 64 | curr_x = F.dot(curr_x)+move_accel 65 | comp_Q = F.dot(comp_Q.dot(F.T))+my_Q 66 | measurements[i+1] = meas(curr_x,R) 67 | true_state[i+1]=curr_x + np.random.multivariate_normal(np.zeros(4),comp_Q) 68 | plt.plot(true_state[:,0],true_state[:,1]) 69 | ax = plt.gca() 70 | ax.set_aspect('equal') 71 | ax.grid(True) 72 | plt.figure() 73 | # plt.plot(true_state[:,2],true_state[:,3]) 74 | # plt.title('Velocity') 75 | # plt.figure() 76 | plt.plot(true_state[:,:2]) 77 | plt.legend(['x','y']) 78 | plt.figure() 79 | plt.plot(true_state[:,2:]) 80 | plt.legend(['v_x','v_y']) 81 | # plt.plot(true_state[:,2]) 82 | # plt.figure() 83 | # plt.plot(true_state[:,3]) 84 | plt.show() 85 | 86 | np.savez('slide_example2.npz', R=R,Q=Q, 87 | dt=dt, 88 | meas=measurements, x0=x0, 89 | P0=P0, truth=true_state) 90 | 91 | np.savez('orbit_run_restricted.npz', R=R,Q=Q, 92 | dt=dt, 93 | meas=measurements, x0=x0, 94 | P0=P0 ) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Purpose 2 | The purpose of this repository is to complement the paper "[Factor Graphs for Navigation Applications: A Tutorial](https://navi.ion.org/content/71/3/navi.653)" by Clark Taylor and Jason Gross in the *Navigation* journal. 3 | 4 | # Important files 5 | ## System Description 6 | In [Description.pdf](Description.pdf), we describe a system that is supposed to be representative of a satellite observation system. The LaTeX source code is in [Description.tex](Description.tex). 7 | 8 | ## Simulation Code 9 | The code is designed such that you run a simulation and it outputs an npz file. This simulation is in [OrbitSim.py](OrbitSim.py).
Each output data file has the following fields in it: 10 | * R 11 | * Q 12 | * dt 13 | * meas 14 | * truth 15 | (Each file also stores x0 and P0, the initial state estimate and its covariance.) These fields should provide all the data needed to run any of the estimators desired. If you want to run lots of different simulations, the file [runSims.py](runSims.py) can be used. 16 | 17 | ## Data file for paper results 18 | If you want to reproduce results from the paper, the data file [slide_example.npz](slide_example.npz) is provided. [slide_example2.npz](slide_example2.npz) is another nice example. 19 | 20 | ## Estimators 21 | Three different estimators are found in this directory. 22 | 1. [orbitEKF.py](orbitEKF.py) is an EKF estimator. 23 | 2. [OrbitFG.py](OrbitFG.py) is the full factor graph estimator. 24 | 3. [orbitSW.py](orbitSW.py) is a sliding window estimator based on the factor graph. Different from the other estimators, this estimator also outputs a timing data file that can be used to evaluate the timing performance of the factor graph for different sliding window sizes. 25 | 26 | Each of these estimators, when run, will output a data file (also a numpy data file -- .npz) that contains the result of that estimator. In addition, each estimator generates a number of plots when it runs. Some example results files are given in [ekf_slide_example_res.npz](ekf_slide_example_res.npz) and [fg_slide_example_res.npz](fg_slide_example_res.npz). (These files were used to create figures in the paper.) 27 | 28 | ## Plotting files 29 | Using the estimator outputs, [gen_accuracy_figs.py](gen_accuracy_figs.py) can be used to generate plots of a higher quality (print quality) than given when just running the estimator. Similarly, [gen_timing_figs.py](gen_timing_figs.py) can be used to generate plots that show the timing performance. The file [timing_res_1801_3.npz](timing_res_1801_3.npz) contains the data used to generate the timing plot in the paper (I believe...). 30 | 31 | ## Environment setup 32 | Finally, if you want to set up your Python environment easily so everything "just runs", you can use the [requirements.yml](requirements.yml) file. I use conda, so I would use 33 | ``` 34 | conda env create -f requirements.yml 35 | conda activate factorGraphExample 36 | ``` 37 | -------------------------------------------------------------------------------- /gen_accuracy_figs.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import matplotlib.pyplot as plt 3 | from matplotlib.ticker import FuncFormatter 4 | import matplotlib as mpl 5 | # mpl.rcParams['text.usetex'] = True 6 | 7 | # Change these values to get the plots that you need. 8 | prefix = 'slide_example' 9 | ekf_data = np.load('ekf_'+prefix+'_res.npz') 10 | fg_data = np.load('fg_'+prefix+'_res.npz') 11 | 12 | truth = ekf_data['truth'] 13 | ekf_res = ekf_data['ekf_ref'] # ref is a typo.
should be res=results 14 | fg_res = fg_data['fg_res'] 15 | 16 | 17 | ####### Figure 1 18 | plt.figure() 19 | plt.plot(ekf_res[:,0],ekf_res[:,1],c='b', label='estimate') 20 | plt.plot(truth[:,0],truth[:,1],'r--',label='truth') 21 | plt.legend(loc='center left', bbox_to_anchor=(0.5,0.5)) 22 | plt.grid(which='major',linestyle='-.', color = '#CCCCCC') 23 | 24 | ax=plt.gca() 25 | ax.set_aspect('equal') 26 | ax.spines['left'].set_position('zero') 27 | ax.spines['bottom'].set_position('zero') 28 | ax.spines['right'].set_color('none') 29 | ax.spines['top'].set_color('none') 30 | 31 | # Add a box around the plot 32 | x_min, x_max = ax.get_xlim() 33 | y_min, y_max = ax.get_ylim() 34 | ax.axhline(y_min, color='gray', linewidth=1.5) 35 | ax.axhline(y_max, color='gray', linewidth=1) 36 | ax.axvline(x_min, color='gray', linewidth=1) 37 | ax.axvline(x_max, color='gray', linewidth=1.5) 38 | 39 | formatter=FuncFormatter(lambda x, pos: f"{x/1E6:.0f}") 40 | ax.yaxis.set_major_formatter(formatter) 41 | ax.xaxis.set_major_formatter(formatter) 42 | 43 | # Set axis labels outside the figure with the spine location in inches 44 | ax.set_xlabel(r'x (m$\times 10^6$)', labelpad=65) # Adjust labelpad as needed 45 | ax.set_ylabel(r'y (m$\times 10^6$)', labelpad=85) # Adjust labelpad as needed 46 | yticks = plt.yticks() 47 | print(type(yticks[0]),type(yticks[1])) 48 | print(yticks[0]) 49 | yticks[1][1].set_visible(False) 50 | print((yticks[1])) 51 | 52 | plt.savefig('ekf_'+prefix+'.pdf',bbox_inches='tight') 53 | 54 | 55 | ######### Figure 2 56 | 57 | plt.figure() 58 | plt.plot(fg_res[:,0],fg_res[:,1],c='b', label='estimate') 59 | plt.plot(truth[:,0],truth[:,1],'r--',label='truth') 60 | plt.grid(which='major',linestyle='-.', color='#CCCCCC') 61 | plt.legend(loc='center left', bbox_to_anchor=(0.5,0.5)) 62 | 63 | 64 | ax=plt.gca() 65 | ax.set_aspect('equal') 66 | ax.spines['left'].set_position('zero') 67 | ax.spines['bottom'].set_position('zero') 68 | ax.spines['right'].set_color('none') 69 | ax.spines['top'].set_color('none') 70 | 71 | # Add a box around the plot 72 | x_min, x_max = ax.get_xlim() 73 | y_min, y_max = ax.get_ylim() 74 | ax.axhline(y_min, color='gray', linewidth=1.5) 75 | ax.axhline(y_max, color='gray', linewidth=1) 76 | ax.axvline(x_min, color='gray', linewidth=1) 77 | ax.axvline(x_max, color='gray', linewidth=1.5) 78 | 79 | formatter=FuncFormatter(lambda x, pos: f"{x/1E6:.0f}") 80 | ax.yaxis.set_major_formatter(formatter) 81 | ax.xaxis.set_major_formatter(formatter) 82 | 83 | # Set axis labels outside the figure with the spine location in inches 84 | ax.set_xlabel(r'x (m$\times 10^6$)', labelpad=65) # Adjust labelpad as needed 85 | ax.set_ylabel(r'y (m$\times 10^6$)', labelpad=85) # Adjust labelpad as needed 86 | 87 | 88 | # ax.set_xlabel('x (m x 1E6)', labelpad=45) 89 | # ax.set_ylabel('y (m x 1E6)', position=(0,1)) 90 | 91 | plt.savefig('fg_'+prefix+'.pdf',bbox_inches='tight') 92 | plt.show() 93 | 94 | 95 | -------------------------------------------------------------------------------- /gen_timing_figs.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import matplotlib.pyplot as plt 3 | 4 | data = np.load('timing_res_1801_3.npz') 5 | 6 | one_step = data["create_and_solve_L"] 7 | full_opt = data["full_opt"] 8 | sizes = data["call_sizes"] 9 | 10 | plt.plot(sizes[1:],full_opt[1:],'.',markersize=4) 11 | plt.xlabel('number of time steps',fontsize=14) 12 | plt.ylabel('time (s)', fontsize=14) 13 | plt.savefig('timing_full.pdf',bbox_inches='tight') 
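# The "create_and_solve_L" timings are loaded above as `one_step` but never plotted.
# A minimal sketch of plotting them the same way, in case that curve is wanted too;
# the output filename `timing_create_solve.pdf` is just made up for this example.
plt.figure()
plt.plot(sizes[1:], one_step[1:], '.', markersize=4)
plt.xlabel('number of time steps', fontsize=14)
plt.ylabel('time (s)', fontsize=14)
plt.savefig('timing_create_solve.pdf', bbox_inches='tight')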
14 | plt.show() 15 | -------------------------------------------------------------------------------- /orbitEKF.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import scipy.linalg as la 3 | import matplotlib.pyplot as plt 4 | from math import pi, tan, atan2, floor 5 | 6 | def plot_1d_est_with_cov(est, covs, truth=None, xs=None, sigmas=3): 7 | ''' 8 | This function takes in (n,) numpy arrays and plots the estimate with 9 | "error bars" around the estimate. Can also plot the truth. The estimate 10 | will be red, the sigma bounds a shaded blue region, and the truth a dashed black line. 11 | 12 | Args: 13 | est: The estimate values over time. An (n,) array 14 | covs: The covariance (variance) over time. An (n,) array 15 | truth: (optional) The true values over time. An (n,) array 16 | xs: (optional) If nothing passed in, will plot 1->n on the x axis. 17 | Otherwise, will put xs along the x axis 18 | sigmas: (default=3) How many sigmas out to put the "error bars" 19 | when plotting 20 | 21 | Returns: nothing. Just plots stuff using matplotlib 22 | ''' 23 | assert len(est)==len(covs), 'est and covs must be the same length' 24 | if truth is not None: 25 | assert len(est)==len(truth), 'est and truth must be the same length' 26 | if xs is not None: 27 | assert len(est)==len(xs), 'est and xs must be the same length' 28 | else: 29 | xs = np.arange(len(est)) 30 | plt.plot(xs,est,c='r', label='estimate') 31 | top_vals = est + np.sqrt(covs)*sigmas 32 | low_vals = est - np.sqrt(covs)*sigmas 33 | plt.fill_between(xs,low_vals,top_vals, facecolor='b',alpha=.5) 34 | if truth is not None: 35 | plt.plot(xs,truth,c='k',linestyle='--',label='truth') 36 | plt.legend() 37 | 38 | def plot_2d_est_with_cov(est,covs,truth=None, sigmas=3, sigma_decimate=10): 39 | ''' 40 | This function takes in (n,2) numpy arrays and plots the estimate with 41 | "error bars" around the estimate. Can also plot the truth. The estimate 42 | will be blue, the truth red, and the covariance ellipses blue 43 | 44 | Args: 45 | est: The estimate values over time. An (n,2) array 46 | covs: The covariance over time. An (n,2,2) array (only the top-left 2x2 block is used, so larger covariances also work) 47 | truth: (optional) The true values over time. An (n,2) array 48 | sigmas: (default=3) How many sigmas out to put the "error bars" 49 | when plotting 50 | sigma_decimate: (default=10) How many of the values to plot covariances 51 | around (plus markers on the corresponding locations). If 1, will plot 52 | an ellipse around every point 53 | 54 | Returns: nothing.
Just plots stuff using matplotlib 55 | ''' 56 | assert len(est)==len(covs), 'est and covs must be the same length' 57 | if truth is not None: 58 | assert len(est)==len(truth), 'est and truth must be the same length' 59 | plt.plot(est[:,0],est[:,1],c='b', label='estimate') 60 | plt.plot(est[0::sigma_decimate,0], est[0::sigma_decimate,1], 'b+') 61 | 62 | #Create a circle for plotting 63 | angs = np.arange(0,2*pi+.1,.1) 64 | circ_pts =np.zeros((2,len(angs))) 65 | circ_pts[0]=np.sin(angs) 66 | circ_pts[1]=np.cos(angs) 67 | circ_pts *= sigmas 68 | for i in range(len(est)): 69 | if i%sigma_decimate == 0: 70 | S_cov = la.cholesky(covs[i,:2,:2],lower=True) 71 | ellipse = S_cov.dot(circ_pts) + est[i,:2].reshape((2,1)) #reshape enables broadcast 72 | plt.plot(ellipse[0],ellipse[1],'b') 73 | if truth is not None: 74 | plt.plot(truth[:,0],truth[:,1],c='r',label='truth') 75 | plt.plot(truth[0::sigma_decimate,0], truth[0::sigma_decimate,1], 'r+') 76 | plt.legend() 77 | ax=plt.gca() 78 | ax.set_aspect('equal') 79 | 80 | prefix='slide_example' 81 | 82 | data = np.load(prefix+'.npz') 83 | 84 | meas = data['meas'] 85 | num_steps = len(meas)-1 #1000 86 | R = data['R'] 87 | Q = data['Q'] 88 | dt = data['dt'].item() 89 | curr_x = data['x0'] 90 | curr_P = data['P0'] 91 | truth = data['truth'] 92 | 93 | 94 | est_state = np.zeros((num_steps+1,4)) 95 | est_cov = np.zeros((num_steps+1,4,4)) 96 | 97 | est_state[0] = curr_x 98 | est_cov[0] = curr_P 99 | #See if this fixes things. 100 | G_E = 3.986E14 101 | 102 | def f(x,dt): 103 | accel = -G_E*x[:2]/la.norm(x[:2])**3 104 | T= np.eye(4) 105 | T[:2,2:] = np.eye(2) * dt 106 | accel_add = np.concatenate((dt**2/2 * accel, accel*dt)) 107 | return T.dot(x) + accel_add 108 | 109 | def f2(x,dt): 110 | dt_divider=50 111 | my_dt = dt/dt_divider 112 | T = np.eye(4) 113 | T[:2,2:] = np.eye(2) * my_dt 114 | for _ in range(dt_divider): 115 | accel = -G_E *x[:2]/la.norm(x[:2])**3 116 | move_accel = np.concatenate((accel * 0.5*my_dt**2, my_dt*accel)) 117 | x = T.dot(x)+move_accel 118 | return x 119 | 120 | def h(x): 121 | return atan2(x[1],x[0]) 122 | 123 | T = np.array([1,0,dt,0, 0,1,0,dt, 0,0,1,0, 0,0,0,1]).reshape(4,4) 124 | for i in range(num_steps): 125 | ##Propagate 126 | # Compute F 127 | dist = la.norm(curr_x[:2]) 128 | xy= curr_x[0] * curr_x[1] 129 | x_sq = curr_x[0] * curr_x[0] 130 | y_sq = curr_x[1] * curr_x[1] 131 | F = T + (G_E * dt)/(dist**5)* \ 132 | np.array([dt*(2*x_sq-y_sq)/2, 3*xy*dt/2, 0, 0, 133 | 3*xy*dt/2, dt*(2*y_sq-x_sq)/2, 0,0, 134 | 2*x_sq-y_sq, 3*xy, 0, 0, 135 | 3*xy, 2*y_sq-x_sq, 0, 0 ]).reshape(4,4) 136 | 137 | # np.array([-dt*x_sq/2, dt*xy/2, 0, 0, 138 | # dt*xy/2, -dt*x_sq/2, 0,0, 139 | # -y_sq, xy, 0, 0, 140 | # xy, -x_sq, 0, 0 ]).reshape(4,4) 141 | # Propagate x 142 | curr_x = f(curr_x,dt) 143 | # Propagate P 144 | curr_P = F.dot(curr_P.dot(F.T)) + Q*dt 145 | 146 | ## Update 147 | #Compute H 148 | dist_sq = curr_x[0]*curr_x[0] + curr_x[1]*curr_x[1] 149 | H = np.array([-curr_x[1]/dist_sq, curr_x[0]/dist_sq, 0,0]) 150 | #Compute K 151 | HPH = H.dot(curr_P.dot(H.T)) 152 | K = curr_P.dot(H.T)/(HPH + R) 153 | # update x 154 | innov = meas[i+1]-h(curr_x) 155 | if innov > pi: 156 | innov -= 2*pi 157 | if innov < -pi: 158 | innov += 2*pi 159 | curr_x += K.dot(innov) 160 | # update P 161 | curr_P = (np.eye(4)-np.outer(K,H)).dot(curr_P) 162 | 163 | #Store stuff for plots and things 164 | est_state[i+1] = curr_x 165 | est_cov[i+1] = curr_P 166 | # u,s,v = la.svd(curr_P) 167 | # print('s is',s) 168 | 169 | decimate = int(floor(num_steps/20)) 170 | 
plot_2d_est_with_cov(est_state, est_cov, truth[:(num_steps+1)], sigma_decimate=decimate) 171 | plt.savefig(prefix+'_loc.png') 172 | plt.figure() 173 | plot_1d_est_with_cov(est_state[:,2],est_cov[:,2,2],truth[:(num_steps+1),2]) 174 | plt.title('X Velocity') 175 | plt.savefig(prefix+'_x_vel.png') 176 | plt.figure() 177 | plot_1d_est_with_cov(est_state[:,3],est_cov[:,3,3],truth[:(num_steps+1),3]) 178 | plt.title('Y Velocity') 179 | plt.savefig(prefix+'_y_vel.png') 180 | 181 | plt.figure() 182 | plt.plot(est_state-truth) 183 | plt.legend (['x','y','vx','vy']) 184 | plt.title('errors') 185 | plt.savefig(f'{prefix}_EKF_errors.png') 186 | 187 | plt.figure() 188 | plt.plot(est_state[:,0],est_state[:,1],c='b', label='estimate') 189 | plt.plot(truth[:,0],truth[:,1],'r--',label='truth') 190 | plt.legend() 191 | ax=plt.gca() 192 | ax.set_aspect('equal') 193 | plt.savefig('EKF_'+prefix+'.png') 194 | plt.show() 195 | 196 | np.savez('ekf_'+prefix+'_res',ekf_res=est_state, truth=truth) -------------------------------------------------------------------------------- /orbitSW.py: -------------------------------------------------------------------------------- 1 | import scipy.sparse as sp 2 | import scipy.sparse.linalg as spla 3 | import numpy as np 4 | import scipy.linalg as la 5 | import matplotlib.pyplot as plt 6 | from math import pi, sqrt, ceil, atan2 7 | from numba import jit 8 | import time 9 | from tqdm import tqdm 10 | 11 | def dense_2_sp_lists(M: np.array, tl_row : int, tl_col: int, row_vec=True): 12 | ''' 13 | This function takes in a dense matrix and turns it into a flat array. 14 | Corresponding to that array are row and column entries with 15 | tl_row and tl_col giving the top left of all the entries 16 | 17 | Inputs: 18 | M : The dense to matrix to convert (np.array) 19 | tl_row: the top row for the matrix (int) 20 | tl_col: the left-most column for the matrix (int) 21 | row_vec: In the corner case where M is 1d, should 22 | it be a row or column vector? 23 | 24 | Returns: a tuple with 3 lists (np.array) in it 25 | ''' 26 | data_list = M.flatten() 27 | if len(M.shape)==2: 28 | rows,cols = M.shape 29 | elif len(M.shape)==1: 30 | if row_vec: 31 | rows=1 32 | cols=len(M) 33 | else: 34 | cols=1 35 | rows=len(M) 36 | else: 37 | assert False, 'M must be 1d or 2d!' 
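# The three parallel arrays built below are COO-style (value, row, column) triplets;
# this is the format scipy.sparse.csr_matrix((data, (row, col))) consumes when the
# sparse factor graph matrix is assembled later.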
38 | row_list = np.zeros(len(data_list)) 39 | col_list = np.zeros(len(data_list)) 40 | for i in range(rows): 41 | for j in range(cols): 42 | idx = i*cols + j 43 | row_list[idx] = i+tl_row 44 | col_list[idx] = j+tl_col 45 | return (data_list,row_list,col_list) 46 | 47 | ''' 48 | L's columns will be 3*N big, where N is the number of timesteps 49 | Rows will be organized organized dynamics first, then measurements: 50 | 51 | ''' 52 | 53 | # To make numba fast, try making these functions outside the class 54 | @jit 55 | def fast_prop_one_timestep(state,T,dt,GE,n_timesteps): 56 | going_out = state.copy() 57 | for _ in range(n_timesteps): 58 | dist = sqrt(going_out[0]*going_out[0] + going_out[1]*going_out[1]) 59 | add_vec = GE * dt/(dist**3) *\ 60 | np.array([going_out[0]/2* dt, 61 | going_out[1]/2* dt, 62 | going_out[0], 63 | going_out[1]]) 64 | going_out = T.dot(going_out) - add_vec 65 | return going_out 66 | 67 | @jit 68 | def fast_F_mat(state,T,dt,GE,n_timesteps): 69 | F = np.eye(4) 70 | curr_state = state.copy() 71 | for _ in range(n_timesteps): 72 | x = curr_state[0] 73 | y = curr_state[1] 74 | dist = sqrt(x*x + y*y) 75 | # dist = la.norm(curr_state[:2]) # Turns out la.norm is really, really slow 76 | k = dt * GE/(dist**5) 77 | 78 | t_mat = np.array([[-dt*(y**2-2*x**2)/2, 3*x*y* dt/2.0, 0, 0], 79 | [3*x*y* dt/2.0, - dt*(x**2-2*y**2)/2, 0, 0], 80 | [2*x**2-y**2, 3*x*y, 0, 0], 81 | [3*x*y, 2*y**2-x**2, 0, 0]]) 82 | F = (T+ k*t_mat).dot(F) 83 | #Now update the state as well 84 | add_vec = GE* dt/(dist**3) *\ 85 | np.array([x/2* dt, 86 | y/2* dt, 87 | x, 88 | y]) 89 | curr_state = T.dot(curr_state) - add_vec 90 | 91 | return F 92 | 93 | class satelliteSlidingWindow: 94 | def __init__(self, meas: float, R: np.ndarray, 95 | Q : np.ndarray, P0 : np.ndarray, 96 | x0: np.ndarray = np.array([0, 2E7, 4500, 0]), 97 | dt: float = 5, sw_size: int=10): 98 | 99 | self.dt = dt 100 | if self.dt > 1: 101 | self.prop_dt = self.dt / ceil(self.dt) 102 | self.n_timesteps = int(ceil(self.dt)) 103 | else: 104 | self.prop_dt = self.dt 105 | self.n_timesteps=1 106 | self.sw_size = sw_size 107 | 108 | self.T = np.array([1,0,self.prop_dt,0, 0,1.,0,self.prop_dt, 0,0,1,0, 0,0,0,1]).reshape((4,4)) 109 | self.GE = 3.986E14 110 | 111 | self.meas = np.array([meas]) 112 | self.S_Q_inv = la.inv(la.cholesky(Q*dt)) 113 | self.S_R_inv = 1/sqrt(R) 114 | S_P0_inv = la.inv(la.cholesky(P0)) 115 | self.prior_G = S_P0_inv 116 | self.prior_xbar = x0 117 | self.states = np.zeros((1,4)) 118 | self.states[0] = x0 119 | self.opt_call_sizes=[] 120 | self.opt_call_times=[] 121 | self.solve_L_times=[] 122 | self.solve_and_form_L_times=[] 123 | 124 | def N(self): 125 | return len(self.states) 126 | 127 | def prop_one_timestep(self, state): 128 | return fast_prop_one_timestep(state, self.T, self.prop_dt, self.GE, self.n_timesteps) 129 | # going_out = state.copy() 130 | # for _ in range(self.n_timesteps): 131 | # dist = sqrt(going_out[0]*going_out[0] + going_out[1]*going_out[1]) 132 | # add_vec = self.GE*self.prop_dt/(dist**3) *\ 133 | # np.array([going_out[0]/2*self.prop_dt, 134 | # going_out[1]/2*self.prop_dt, 135 | # going_out[0], 136 | # going_out[1]]) 137 | # going_out = self.T.dot(going_out) - add_vec 138 | # return going_out 139 | 140 | def state_idx(self, i: int) -> int: 141 | return 4*i 142 | 143 | def dyn_idx(self, i:int) -> int: 144 | ''' 145 | This function takes the dynamics index, where this related 146 | timestep i-1 to timestep i 147 | 148 | Input: i-- integer ranging from 1 to N 149 | Returns a row index (integer) 150 | ''' 151 
| return 4*(i-1) 152 | 153 | def meas_idx(self, i:int) -> int: 154 | return (self.N()-1)*4 + i 155 | 156 | def prior_idx(self) -> int: 157 | return (self.N()-1)*4 + self.N() 158 | 159 | def add_one_timestep(self, meas: float): 160 | # Predict and store the new state 161 | next_state = self.prop_one_timestep(self.states[-1]) 162 | self.states = np.vstack((self.states,next_state)) 163 | # store the measurement for the new timestep 164 | self.meas = np.append(self.meas,meas) 165 | if self.N() > self.sw_size: 166 | self.marginalize_state() 167 | 168 | def marginalize_state(self): 169 | ''' 170 | Remove the oldest state value and marginalize the probability 171 | to make it a prior 172 | 173 | Inputs: None 174 | Returns: Nothing, just modifies internal state 175 | ''' 176 | # Run QR decomposition and all that fancy jazz 177 | ## 1. Create a row with the things to be marginalized first 178 | L = self.create_L() 179 | ## 2a. Find the rows with non-zero entries 180 | row_idx,_,_ = sp.find(L[:,:4]) 181 | row_idx = np.unique(row_idx) 182 | ## 2b. Create the sub-matrix 183 | sub_L = L[row_idx,:8] 184 | ## 3. Run QR decomposition 185 | sub_L = sub_L.todense() 186 | Q,R = la.qr(sub_L, mode='economic') 187 | ## 3a. Modify y by the same rotation matrix 188 | y = self.create_y() 189 | sub_y = y[row_idx] 190 | rotated_y = Q.T.dot(sub_y) 191 | ## 4 & 5. Eliminate row and column and move remainder into prior_G 192 | self.prior_G = R[4:,4:] 193 | self.prior_xbar = self.states[1] + la.pinv(self.prior_G).dot(rotated_y[4:]) 194 | 195 | # Shorten the arrays 196 | self.meas = self.meas[1:] 197 | self.states = self.states[1:] 198 | 199 | def H_mat(self, state): 200 | ''' 201 | Take in a current state and return the H matrix (measurement derivative matrix) 202 | Inputs: state -- a 3-vector (np.array) 203 | Returns a n_measurements X 3 matrix (np.array) 204 | ''' 205 | going_out = np.zeros(4) 206 | dist_sq = state[0]*state[0] + state[1]*state[1] 207 | going_out[0] = -state[1]/dist_sq 208 | going_out[1] = state[0]/dist_sq 209 | return going_out 210 | 211 | def F_mat(self,state): 212 | ''' 213 | Takes in a current state and finds the derivative of the 214 | propagate forward one step function (prop_one_timestep) 215 | 216 | Inputs: state -- a 4-vector (np.array) 217 | 218 | Returns a 4x4 np.array 219 | ''' 220 | return fast_F_mat(state, self.T, self.prop_dt, self.GE, self.n_timesteps) 221 | # F = np.eye(4) 222 | # curr_state = state.copy() 223 | # for _ in range(self.n_timesteps): 224 | # x = curr_state[0] 225 | # y = curr_state[1] 226 | # dist = sqrt(x*x + y*y) 227 | # # dist = la.norm(curr_state[:2]) # Turns out la.norm is really, really slow 228 | # k = self.prop_dt*self.GE/(dist**5) 229 | 230 | # t_mat = np.array([[-self.prop_dt*(y**2-2*x**2)/2, 3*x*y*self.prop_dt/2.0, 0, 0], 231 | # [3*x*y*self.prop_dt/2.0, -self.prop_dt*(x**2-2*y**2)/2, 0, 0], 232 | # [2*x**2-y**2, 3*x*y, 0, 0], 233 | # [3*x*y, 2*y**2-x**2, 0, 0]]) 234 | # F = (self.T+ k*t_mat).dot(F) 235 | # #Now update the state as well 236 | # add_vec = self.GE*self.prop_dt/(dist**3) *\ 237 | # np.array([x/2*self.prop_dt, 238 | # y/2*self.prop_dt, 239 | # x, 240 | # y]) 241 | # curr_state = self.T.dot(curr_state) - add_vec 242 | 243 | # return F 244 | 245 | def create_L(self): 246 | H_size = 4 247 | F_size=16 # Should be state size**2 248 | prior_size = 16 249 | nnz_entries = 2*F_size*(self.N()-1) + H_size*self.N() + prior_size 250 | data_l = np.zeros(nnz_entries) 251 | row_l = np.zeros(nnz_entries,dtype=int) 252 | col_l = np.zeros(nnz_entries,dtype=int) 253 | 
t_e = 0 #total number of entries so far 254 | # Put all the dynamics entries into L 255 | for i in range(1,self.N()): 256 | mat1 = self.S_Q_inv.dot(self.F_mat(self.states[i-1])) 257 | data_l[t_e:t_e+F_size], row_l[t_e:t_e+F_size], col_l[t_e:t_e+F_size] = \ 258 | dense_2_sp_lists(mat1,self.dyn_idx(i),self.state_idx(i-1)) 259 | t_e +=F_size 260 | 261 | mat2 = -self.S_Q_inv 262 | data_l[t_e:t_e+F_size], row_l[t_e:t_e+F_size], col_l[t_e:t_e+F_size] = \ 263 | dense_2_sp_lists(mat2,self.dyn_idx(i),self.state_idx(i)) 264 | t_e +=F_size 265 | 266 | # Now do the measurements 267 | for i in range(self.N()): 268 | # for S_R_inv a scalar 269 | mat = self.S_R_inv*self.H_mat(self.states[i]) 270 | data_l[t_e:t_e+H_size], row_l[t_e:t_e+H_size], col_l[t_e:t_e+H_size] = \ 271 | dense_2_sp_lists(mat,self.meas_idx(i),self.state_idx(i)) 272 | t_e += H_size 273 | 274 | # Now put in the prior 275 | data_l[t_e:t_e+prior_size], row_l[t_e:t_e+prior_size], col_l[t_e:t_e+prior_size] = \ 276 | dense_2_sp_lists(self.prior_G,self.prior_idx(),self.state_idx(0)) 277 | t_e += prior_size 278 | 279 | 280 | return sp.csr_matrix((data_l,(row_l,col_l))) 281 | 282 | def create_y(self, state_vec=None): 283 | if state_vec is not None: 284 | state_data = self.vec_to_data(state_vec) 285 | else: 286 | state_data = self.states 287 | y = np.zeros(4*(self.N()-1)+self.N()+4) # last 4 for prior 288 | for i in range(1,self.N()): 289 | # predicted measurement for dynamics is f(x_{k-1})-x_k 290 | pred_meas = self.prop_one_timestep(state_data[i-1])-state_data[i] 291 | y[self.dyn_idx(i):self.dyn_idx(i+1)]=self.S_Q_inv.dot(-pred_meas) 292 | # Now do the measurements received 293 | for i in range(self.N()): 294 | pred_meas = atan2(state_data[i,1],state_data[i,0]) 295 | tmp= self.meas[i]-pred_meas 296 | if tmp > pi: 297 | tmp -= 2*pi 298 | if tmp < -pi: 299 | tmp += 2*pi 300 | y[self.meas_idx(i):self.meas_idx(i+1)] = self.S_R_inv * tmp 301 | # Now do the prior 302 | y[self.prior_idx():self.prior_idx()+4] = self.prior_G.dot(self.prior_xbar - state_data[0]) 303 | return y 304 | 305 | def vec_to_data(self,vec): 306 | going_out = np.zeros((self.N(),4)) 307 | for i in range(self.N()): 308 | going_out[i] = vec[i*4:(i+1)*4] 309 | return going_out 310 | 311 | def add_delta(self,delta_x: np.array = None) -> np.array: 312 | ''' 313 | This takes the current state and adds on delta_x. 314 | It DOES NOT modify any internal state 315 | 316 | Inputs: delta_x, a self.N() X 3 vector (np.array) 317 | Returns: a full state vector of the same size 318 | ''' 319 | going_out = np.zeros(self.N()*4) 320 | if delta_x is None: 321 | delta_x = np.zeros(self.N()*4) 322 | for i in range(self.N()): 323 | going_out[i*4:(i+1)*4] = self.states[i]+ \ 324 | delta_x[i*4:(i+1)*4] 325 | 326 | return going_out 327 | 328 | def update_state(self,delta_x): 329 | ''' 330 | Changes the internal states data structure with the 331 | delta_x that is included here 332 | 333 | Returns nothing (only modifies internal state) 334 | ''' 335 | for i in range(self.N()): 336 | self.states[i] += delta_x[i*4:(i+1)*4] 337 | 338 | def opt(self, max_iters:int = 100): 339 | ''' 340 | Create the Jacobian matrix (L) and the residual vector (y) for 341 | the current state. Find the best linear approximation to minimize y 342 | and move in that direction. Repeat until convergence. 
343 | (This is a Gauss-Newton optimization procedure) 344 | ''' 345 | tic1 = time.perf_counter() 346 | finished=False 347 | num_iters=0 348 | while not finished: 349 | tic3 = time.perf_counter() 350 | L= self.create_L() 351 | # print ('L shape is',L.shape) 352 | # plt.spy(L,markersize=1) 353 | # plt.show() 354 | # for i in range(9): 355 | # print('For column',i) 356 | # test_Jacobian(self,i,.0001) 357 | # test_Jacobian(self,2) 358 | y= self.create_y() 359 | tic2=time.perf_counter() 360 | M = L.T.dot(L) 361 | Lty = L.T.dot(y) 362 | delta_x = spla.spsolve(M,Lty) 363 | toc2=time.perf_counter() 364 | scale = 1 365 | scale_good=False 366 | # A measure of how much 367 | #improvement you actually expect from this step 368 | pred_delta_y_norm=la.norm(L.dot(delta_x)) 369 | ratio2 = pred_delta_y_norm/la.norm(y) 370 | # print('ratio2 is ',ratio2,'delta y norm is ',pred_delta_y_norm, 'delta x norm is ',la.norm(delta_x)) 371 | if ratio2<1E-4 or pred_delta_y_norm<1E-6: 372 | finished=True 373 | print('y ratio is too small to run iteration',num_iters,'ratio is:',ratio2) 374 | else: 375 | while not scale_good: 376 | next_y = self.create_y(self.add_delta(delta_x*scale)) 377 | pred_y = y-L.dot(delta_x*scale) 378 | y_mag = y.T.dot(y) 379 | ratio = (y_mag - next_y.dot(next_y))/(y_mag-pred_y.dot(pred_y)) 380 | if ratio < 4. and ratio > .25: 381 | scale_good = True 382 | else: 383 | scale /= 2.0 384 | if scale < .1: 385 | print('Your derivatives are probably wrong! scale is',scale,'ratio is',ratio, 'ratio2 is',ratio2) 386 | assert(scale > 1E-6) 387 | num_iters+=1 388 | self.update_state(delta_x*scale) 389 | # print('iteration',num_iters,'delta_x length was',la.norm(delta_x*scale), 'scale was',scale) 390 | finished = la.norm(delta_x)<1 or num_iters >= max_iters 391 | toc1=time.perf_counter() 392 | self.opt_call_sizes.append(self.N()) 393 | self.opt_call_times.append(toc1-tic1) 394 | self.solve_L_times.append(toc2-tic2) 395 | self.solve_and_form_L_times.append(toc2-tic3) 396 | 397 | 398 | def test_Jacobian(batch_uni, col, dx = .01): 399 | curr_y = batch_uni.create_y() 400 | orig_state = batch_uni.add_delta() 401 | delta_x = np.zeros(orig_state.shape) 402 | delta_x[col] = dx 403 | next_state = batch_uni.add_delta(delta_x) 404 | next_y = batch_uni.create_y(next_state) 405 | num_J = (next_y-curr_y)/dx 406 | J = batch_uni.create_L()[:,col].toarray() 407 | J = J.reshape(num_J.shape) 408 | diff = num_J+J 409 | print(diff) 410 | print(np.max(diff)) 411 | print(np.min(diff)) 412 | 413 | 414 | def RMSEs(est_states: np.array, true_states: np.array) -> float: 415 | ''' 416 | This function takes the estimated states and compares them 417 | against the true states. The estimated states do _not_ need to 418 | be as long as the true_states. 
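(Each returned value is a per-element RMSE: the squared errors are averaged over every scalar entry compared -- four entries per timestep for the total, two per timestep for the position and velocity values.)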
It returns 3 values: 419 | * the total RMSE 420 | * the RMSE of just the positions 421 | * the RMSE of just the velocities 422 | 423 | Inputs: 424 | est_states: an Nx4 np.array with a different state value in each row 425 | true_states: an Mx4 (where M>=N) np.array 426 | 427 | Returns: tuple of three values (see description above) 428 | ''' 429 | assert len(true_states)>= len(est_states), 'Not enough entries in true_states' 430 | total_RMSE = sqrt(np.sum(np.square(true_states[:len(est_states)]-est_states))/(len(est_states)*4)) 431 | pos_RMSE = sqrt(np.sum(np.square(true_states[:len(est_states),:2]-est_states[:,:2]))/(len(est_states)*2)) 432 | vel_RMSE = sqrt(np.sum(np.square(true_states[:len(est_states),2:]-est_states[:,2:]))/(len(est_states)*2)) 433 | return (total_RMSE, pos_RMSE, vel_RMSE) 434 | 435 | def errors(est_data, truth) -> np.array: 436 | return est_data - truth[:len(est_data)] # per-timestep estimation errors (truth may be longer) 437 | def ANEES(): 438 | #TODO: implement this function to be useful 439 | pass 440 | 441 | if __name__ == '__main__': 442 | prefix = 'slide_example2' 443 | data = np.load(f'{prefix}.npz') 444 | meas = data['meas'] 445 | truth = data['truth'] 446 | dt = data['dt'] 447 | R = data['R'] 448 | Q = data['Q'] 449 | P0 = data['P0'] 450 | x0 = data['x0'] 451 | plt.plot(truth[:,0],truth[:,1],'r',label='truth') 452 | 453 | data_len = len(meas) 454 | 455 | window_size= 2 #data_len #int(data_len/200)*100 456 | # Can either store the oldest things (most optimized) 457 | # or always store the most recent 458 | # True = the oldest ones 459 | store_most_opt = False 460 | 461 | #max iterations each timestep 462 | max_iters=1 463 | 464 | opt_class = satelliteSlidingWindow(meas[0], R, Q, P0, dt=dt, sw_size=window_size) 465 | # print (" a sample F") 466 | # print(opt_class.F_mat(opt_class.states[10])) 467 | opt_states = np.zeros((data_len,4)) #back sliding window 468 | rt_states = np.zeros((data_len,4)) # real time type states 469 | opt_states[0] = x0 # In case not over-written later ..
basically a special case when window_size=1 470 | rt_states[0] = x0 471 | for i in tqdm(range(1,data_len)): 472 | opt_class.add_one_timestep(meas[i]) 473 | # print('Optimized for index',i) 474 | 475 | opt_class.opt(max_iters) 476 | if i> (window_size-1): 477 | opt_states[i-window_size+1] = opt_class.states[0] 478 | rt_states[i]=opt_class.states[-1] 479 | opt_states[-window_size:]= opt_class.states 480 | # rt_states[-window_size:]= opt_class.states 481 | 482 | print ('RMSEs',RMSEs(opt_states,truth)) 483 | errors = opt_states-truth[:len(opt_states)] 484 | 485 | plt.plot(opt_states[:,0],opt_states[:,1],'m',label='opt_final') 486 | 487 | 488 | plt.legend() 489 | # plt.savefig(f'{prefix}_SW_{window_size}_res.png') 490 | 491 | fig,axs= plt.subplots(2,1) 492 | axs[0].set_title('position error') 493 | axs[0].plot(errors[:,0],label='x') 494 | axs[0].plot(errors[:,1],label='y') 495 | axs[0].legend() 496 | axs[1].set_title('velocity error') 497 | axs[1].plot(errors[:,2],label='x') 498 | axs[1].plot(errors[:,3],label='y') 499 | 500 | plt.figure() 501 | ekf_res = np.load('ekf_'+prefix+'_res.npz')['ekf_res'] # EKF results saved by orbitEKF.py (assumes it was run with this same prefix) 502 | plt.plot(ekf_res[:,0], ekf_res[:,1], c='c', label='EKF') 503 | plt.plot(opt_states[:,0],opt_states[:,1],c='b', label=f'delay={window_size}') 504 | plt.plot(rt_states[:,0], rt_states[:,1], 'g', label='real-time') 505 | plt.plot(truth[:,0],truth[:,1],'r--',label='truth') 506 | plt.legend() 507 | ax=plt.gca() 508 | ax.set_aspect('equal') 509 | plt.savefig(f'SW_{prefix}_{window_size}.png') 510 | 511 | plt.figure() 512 | ekf_err=np.sqrt(np.sum(np.square(ekf_res[:data_len,:2]-truth[:data_len,:2]),axis=1)) 513 | rt_sw_err=np.sqrt(np.sum(np.square(rt_states[:,:2]-truth[:data_len,:2]),axis=1)) 514 | delayed_sw_err=np.sqrt(np.sum(np.square(opt_states[:,:2]-truth[:data_len,:2]),axis=1)) 515 | plt.plot(ekf_err,c='c',label='EKF') 516 | plt.plot(delayed_sw_err,c='b',label=f'delay={window_size}') 517 | plt.plot(rt_sw_err,c='g',label='real-time') 518 | plt.legend() 519 | plt.savefig(f'SW_errs_{prefix}_{window_size}.png') 520 | with open(f'err_res_windows.txt', 'a') as f: 521 | f.write('\n--------------------------------------\n') 522 | f.write (f'For window size {window_size}\n') 523 | f.write(f'EKF RMSE error: {sqrt(np.sum(np.square(ekf_err))/data_len)}\n') 524 | f.write(f'RT SW RMSE error: {sqrt(np.sum(np.square(rt_sw_err))/data_len)}\n') 525 | f.write(f'Delayed SW RMSE error: {sqrt(np.sum(np.square(delayed_sw_err))/data_len)}\n') 526 | f.write(f'EKF RMSE error -- last half: {sqrt(2*np.sum(np.square(ekf_err[int(data_len/2):]))/data_len)}\n') 527 | f.write(f'RT SW RMSE error -- last half: {sqrt(2*np.sum(np.square(rt_sw_err[int(data_len/2):]))/data_len)}\n') 528 | f.write(f'Delayed SW RMSE error -- last half: {sqrt(2*np.sum(np.square(delayed_sw_err[int(data_len/2):]))/data_len)}\n') 529 | 530 | np.savez(f'timing_res_{window_size}.npz', call_sizes=opt_class.opt_call_sizes, full_opt = opt_class.opt_call_times, 531 | solve_L_times = opt_class.solve_L_times, create_and_solve_L = opt_class.solve_and_form_L_times) 532 | plt.figure() 533 | plt.plot(opt_class.opt_call_sizes,opt_class.opt_call_times,'*') 534 | plt.title('Time vs size, full optimization') 535 | plt.savefig(f'Opt_timing_{window_size}.png') 536 | 537 | plt.figure() 538 | plt.plot(opt_class.opt_call_sizes,opt_class.solve_L_times,'*') 539 | plt.title('Time vs size, Solve L') 540 | plt.savefig(f'LA_timing_{window_size}.png') 541 | 542 | plt.figure() 543 | plt.plot(opt_class.opt_call_sizes,opt_class.solve_and_form_L_times,'*') 544 | plt.title('Time vs size, Create and Solve L')
545 | plt.savefig(f'LA_create_timing_{window_size}.png') 546 | plt.show() 547 | 548 | 549 | 550 | # ######### Test if dense_2_sp_lists is working ########## 551 | # # Create a block diagonal matrix, plus two matrices off the diagonal 552 | # matrix1 = np.array([0,1.,1.,0]).reshape((2,2)) 553 | # big_data_l = np.zeros(7*4) 554 | # big_row_l = np.zeros(7*4,dtype=int) 555 | # big_col_l = np.zeros(7*4,dtype=int) 556 | # # The block diagonal matrices 557 | # for i in range(5): 558 | # big_data_l[i*4:(i+1)*4],big_row_l[i*4:(i+1)*4],big_col_l[i*4:(i+1)*4] = \ 559 | # dense_2_sp_lists(matrix1,i*2,i*2) 560 | # # A matrix in the top right 561 | # i=5 562 | # matrix2= np.eye(2) 563 | # big_data_l[i*4:(i+1)*4],big_row_l[i*4:(i+1)*4],big_col_l[i*4:(i+1)*4] = \ 564 | # dense_2_sp_lists(matrix2,0,8) 565 | # # A matrix at the bottom left 566 | # i=6 567 | # big_data_l[i*4:(i+1)*4],big_row_l[i*4:(i+1)*4],big_col_l[i*4:(i+1)*4] = \ 568 | # dense_2_sp_lists(matrix2,8,0) 569 | 570 | # L = sp.csr_matrix((big_data_l,(big_row_l,big_col_l))) 571 | 572 | # plt.spy(L) 573 | # plt.show() 574 | 575 | -------------------------------------------------------------------------------- /requirements.yml: -------------------------------------------------------------------------------- 1 | name: factorGraphExample 2 | channels: 3 | - defaults 4 | - conda-forge 5 | dependencies: 6 | - python=3.10 7 | - matplotlib 8 | - numpy 9 | - scipy 10 | - tqdm 11 | # These ones are not required, but I sure like them 12 | # (Makes working with VS code and python notebooks easier) 13 | - ipython 14 | - pylint 15 | - ipykernel 16 | # These ones are used for the second class if you want to make 17 | # Python run faster 18 | - snakeviz 19 | - numba 20 | # If you want to do symbolic derivatives 21 | # (useful for the EKF)... 22 | - sympy 23 | -------------------------------------------------------------------------------- /runSims.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | 3 | N = 10 4 | for i in [4,6]: 5 | cmd = ['python','OrbitSim.py'] 6 | process = subprocess.Popen(cmd) 7 | process.wait() 8 | cmd = ['cp','orbit_run_noisy.npz',f'run_{i}.npz'] 9 | process = subprocess.Popen(cmd) 10 | process.wait() 11 | -------------------------------------------------------------------------------- /slide_example.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cntaylor/factorGraph2DsatelliteExample/93e54f86fd1feacc3f05b703975f0d8dc6a88390/slide_example.npz -------------------------------------------------------------------------------- /slide_example2.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cntaylor/factorGraph2DsatelliteExample/93e54f86fd1feacc3f05b703975f0d8dc6a88390/slide_example2.npz -------------------------------------------------------------------------------- /timing_res_1801_3.npz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cntaylor/factorGraph2DsatelliteExample/93e54f86fd1feacc3f05b703975f0d8dc6a88390/timing_res_1801_3.npz --------------------------------------------------------------------------------
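All of the simulation data files above (`slide_example.npz`, `slide_example2.npz`) follow the field layout produced by OrbitSim.py and described in the README (R, Q, dt, meas, x0, P0, truth). A minimal sketch of loading one of them and checking what is inside, independent of any particular estimator:

```python
import numpy as np

data = np.load('slide_example.npz')
meas = data['meas']              # 1-D array of bearing measurements, radians
truth = data['truth']            # (len(meas), 4) true [x, y, v_x, v_y] states
dt = data['dt'].item()           # seconds between measurements
R = data['R']                    # scalar bearing-measurement variance
Q = data['Q']                    # (4, 4) process-noise covariance
x0, P0 = data['x0'], data['P0']  # initial state estimate and its covariance

print(meas.shape, truth.shape, dt)
```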