├── .gitignore
├── LICENSE
├── Makefile
├── README.md
├── documentation
│   └── logo.png
├── examples
│   ├── lotka_volterra
│   │   ├── nn_model.h5
│   │   └── run.py
│   └── nothing
├── pyNeuralEMPC
│   ├── __init__.py
│   ├── checker.py
│   ├── constraints.py
│   ├── controller.py
│   ├── integrator
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── discret.py
│   │   ├── rk4.py
│   │   └── unity.py
│   ├── model
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── jax.py
│   │   └── tensorflow.py
│   ├── objective
│   │   ├── __init__.py
│   │   ├── base.py
│   │   └── jax.py
│   ├── optimizer
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── ipopt.py
│   │   └── slsqp.py
│   └── utils.py
├── scripts
│   └── run_tests.sh
├── setup.py
├── test.py
└── testing
    └── nothing
/.gitignore:
--------------------------------------------------------------------------------
1 | # Doc gen
2 | doc
3 |
4 | # TODO: remove these directories!
5 | demos/
6 | test/
7 | # Byte-compiled / optimized / DLL files
8 | __pycache__/
9 | *.py[cod]
10 | *$py.class
11 |
12 | #Visual studio code files
13 | .vscode
14 |
15 | # Idea files
16 | .idea
17 |
18 | # C extensions
19 | *.so
20 |
21 | # Distribution / packaging
22 | .Python
23 | build/
24 | develop-eggs/
25 | dist/
26 | downloads/
27 | eggs/
28 | .eggs/
29 | lib/
30 | lib64/
31 | parts/
32 | sdist/
33 | var/
34 | wheels/
35 | *.egg-info/
36 | .installed.cfg
37 | *.egg
38 | MANIFEST
39 |
40 | # PyInstaller
41 | # Usually these files are written by a python script from a template
42 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
43 | *.manifest
44 | *.spec
45 |
46 | # Installer logs
47 | pip-log.txt
48 | pip-delete-this-directory.txt
49 |
50 | # Unit test / coverage reports
51 | htmlcov/
52 | .tox/
53 | .coverage
54 | .coverage.*
55 | .cache
56 | nosetests.xml
57 | coverage.xml
58 | *.cover
59 | .hypothesis/
60 | .pytest_cache/
61 |
62 | # Translations
63 | *.mo
64 | *.pot
65 |
66 |
67 | # Flask stuff:
68 | instance/
69 | .webassets-cache
70 |
71 | # Scrapy stuff:
72 | .scrapy
73 |
74 | # Sphinx documentation
75 | docs/_build/
76 |
77 | # PyBuilder
78 | target/
79 |
80 | # Jupyter Notebook
81 | .ipynb_checkpoints
82 |
83 | # pyenv
84 | .python-version
85 |
86 | # celery beat schedule file
87 | celerybeat-schedule
88 |
89 | # SageMath parsed files
90 | *.sage.py
91 |
92 | # Environments
93 | .env
94 | .venv
95 | env/
96 | venv/
97 | ENV/
98 | env.bak/
99 | venv.bak/
100 |
101 | # Spyder project settings
102 | .spyderproject
103 | .spyproject
104 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 Gauthier-Clerc François
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | SHELL=/bin/bash
2 |
3 | pytest:
4 | ./scripts/run_tests.sh
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |

2 |
3 | # pyNeuralEMPC
4 |
5 | This library, currently under development, aims to provide a user-friendly interface for economic MPC (eMPC) controllers. It will support all types of deep neural networks as models for a given problem. To do so, it does not rely on an analytical solver like [do-mpc](https://github.com/do-mpc/do-mpc) with [casadi](https://web.casadi.org/): instead, it automatically generates the nonlinear problem, together with its Jacobian and Hessian matrices, and hands it to a traditional nonlinear solver.
6 |
7 | I plan to use [ipopt](https://github.com/coin-or/Ipopt) as the first nonlinear solver, but other solvers will be supported in the future (such as [RestartSQP](https://github.com/lanl-ansi/RestartSQP)).
8 |
9 | The library interface will let you use TensorFlow and PyTorch models to describe the system dynamics.
10 | Constraints on the state space will be supported, as will a custom cost function (defined in the same way as the model).
11 |
12 | Only feed-forward neural networks will be supported at the beginning of development; recurrent neural networks will be added later.
13 |
14 | ## Work in progress
15 |
16 | This library is not yet in stable release. Here is the planned roadmap:
17 |
18 | TODO list :
19 |
20 | - [ ] Architecture and design of class patterns.
21 | - [ ] First working version with some features, including three integration methods and a solver.
22 | - [ ] Write the first version of the documentation and tests.
23 | - [ ] Add complete support for tensorflow and pytorch models.
24 | - [ ] Add the simulation-based approach design.
25 | - [ ] Add other solvers.
26 |
27 | ## Documentation
28 |
29 | Since this library isn't finished yet, no documentation is provided!
30 |
31 | ## Example
32 |
33 | Since this library isn't finished yet (you get the idea), no example is provided either. =)
34 |
--------------------------------------------------------------------------------
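To make the intended interface concrete, here is a minimal usage sketch distilled from `examples/lotka_volterra/run.py` below; the library is still under development, so names and signatures may change, and the quadratic cost used here is a stand-in, not the example's actual energy cost.

```python
import numpy as np
import jax.numpy as jnp
import pyNeuralEMPC as nEMPC

# Lotka-Volterra dynamics written as a JAX-differentiable function
# operating on a whole horizon of states/controls at once.
def forward_jnp(x, u, p=None, tvp=None):
    return jnp.concatenate([0.5*x[:, 0:1] - 0.025*x[:, 0:1]*x[:, 1:],
                            -0.5*x[:, 1:] + u + 0.005*x[:, 0:1]*x[:, 1:]], axis=1)

H, DT = 25, 0.05
model = nEMPC.model.jax.DiffDiscretJaxModel(forward_jnp, x_dim=2, u_dim=1, vector_mode=True)
integrator = nEMPC.integrator.rk4.RK4Integrator(model, H, DT)
constraints = [nEMPC.constraints.DomainConstraint(
    states_constraint=[[-np.inf, 1.0], [-np.inf, np.inf]],
    control_constraint=[[-1.0, 0.2]])]
objective = nEMPC.objective.jax.JAXObjectifFunc(lambda x, u, p=None, tvp=None: jnp.sum(u**2))

mpc = nEMPC.controller.NMPC(integrator, objective, constraints, H, DT)
x_pred, u_pred = mpc.next(np.array([0.66, -0.9]))  # predicted states and controls, or (None, None)
```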
/documentation/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Enderdead/pyNeuralEMPC/397a459e1281dabd82876843ed83964c42c05c86/documentation/logo.png
--------------------------------------------------------------------------------
/examples/lotka_volterra/nn_model.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Enderdead/pyNeuralEMPC/397a459e1281dabd82876843ed83964c42c05c86/examples/lotka_volterra/nn_model.h5
--------------------------------------------------------------------------------
/examples/lotka_volterra/run.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 | parser = argparse.ArgumentParser()
4 | parser.add_argument("-f", "--force", help="Recompute even if a cache file exists !",
5 | action="store_true")
6 | args = parser.parse_args()
7 |
8 |
9 | os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
10 | os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
11 |
12 | from dyna_toy_envs.virtuel.lotka_volterra import lotka_volterra_energy
13 | from dyna_toy_envs.display import plot_experience
14 | import numpy as np
15 | import matplotlib.patches as mpatches
16 |
17 | import matplotlib.pyplot as plt
18 | import pyNeuralEMPC as nEMPC
19 |
20 | #from utils import *
21 | import progressbar
22 | import pysindy
23 | import numpy as np
24 | import jax.numpy as jnp
25 |
26 | import tensorflow as tf
27 | import pickle
28 |
29 |
30 | ### WARNING ####
31 | # We need to normalize everything...
32 | # x = x/30. -1
33 | # y = y/30. -1
34 | # u = u/50.0 - 1
35 |
36 |
37 |
38 | NB_STEPS = 700
39 | Hb = 25
40 | U_MAX = 60
41 | U_MIN = 0
42 | REFRESH_EVERY = 2
43 | TRAINING_SIZE = 600
44 | DT = 0.05
45 | X_MAX = 60
46 |
47 |
48 | X_MAX = X_MAX/30.0 - 1
49 | U_MAX = U_MAX/50.0 -1
50 | U_MIN = U_MIN/50.0 -1
51 |
52 | H = Hb
53 |
54 | system = lotka_volterra_energy(dt=DT, init_state=np.array([0.66, -0.9]), H=Hb)
55 |
56 | keras_model = tf.keras.models.load_model("./nn_model.h5")
57 |
58 |
59 | input_tensor = tf.constant(np.array([[50.0,5.0,0.0],[50.0,5.0,0.0]]))
60 |
61 |
62 |
63 |
64 | def forward_jnp(x, u, p=None, tvp=None):
65 | result = jnp.concatenate([0.5*x[:,0:1] - 0.025*x[:,0:1]*x[:,1:], -0.5*x[:,1:]+ u + 0.005*x[:,0:1]*x[:,1:] ], axis=1)
66 | return result
67 |
68 | model_nmpc = nEMPC.model.tensorflow.KerasTFModel(keras_model, x_dim=2, u_dim=1)  # NOTE: immediately overridden below
69 | model_nmpc = nEMPC.model.jax.DiffDiscretJaxModel(forward_jnp, x_dim=2, u_dim=1, vector_mode=True)  # the JAX model is the one actually used
70 |
71 |
72 | constraints_nmpc = [nEMPC.constraints.DomainConstraint(
73 | states_constraint=[[-np.inf, X_MAX], [-np.inf, np.inf]],
74 | control_constraint=[[U_MIN, U_MAX]]),]
75 |
76 | integrator = nEMPC.integrator.discret.DiscretIntegrator(model_nmpc, H)  # NOTE: immediately overridden below
77 | integrator = nEMPC.integrator.rk4.RK4Integrator(model_nmpc, H, 0.1, cache_mode=True)  # the RK4 integrator is the one actually used
78 |
79 | class LotkaCost:
80 | def __init__(self, cost_vec):
81 | self.cost_vec = cost_vec
82 |
83 | def __call__(self, x, u, p=None, tvp=None):
84 | return jnp.sum(u.reshape(-1)*self.cost_vec.reshape(-1))
85 |
86 |
87 | cost_func = LotkaCost(jnp.array([1.1,]*25))
88 |
89 | objective_func = nEMPC.objective.jax.JAXObjectifFunc(cost_func)
90 |
91 |
92 | MPC = nEMPC.controller.NMPC(integrator, objective_func, constraints_nmpc, H, DT)
93 |
94 | curr_x, curr_cost = system.get_init_state()
95 |
96 |
97 |
98 | u, pred = MPC.next(curr_x)
99 | 1/0  # deliberate halt: everything below is legacy code kept inside a string literal
100 | """
101 | controller = NNLokeMPC(keras_model, DT, Hb, max_state=[X_MAX+0.0001, 1e20], u_min=U_MIN, u_max=U_MAX, derivative_model=False)
102 |
103 |
104 |
105 |
106 | curr_x, curr_cost = system.get_init_state()
107 |
108 | def un_normalize_u(u):
109 | return (u+1)*50.0
110 |
111 |
112 |
113 | u, pred = controller.next(normalize_state(curr_x),curr_cost, verbosity=1)
114 |
115 | states = [curr_x, ]
116 | u_list = [u[0]]
117 | cost_list = [curr_cost[0]]
118 | last_refresh = 0
119 | decision_making = [(u, pred),]
120 | #sresult_u = us["x"]
121 | for i in progressbar.progressbar(range(NB_STEPS)):
122 | curr_x, cpst = system.next(un_normalize_u(u[i-last_refresh]))
123 | #if ((curr_x[0]/30.0)-1)>X_MAX:
124 | # break
125 | states.append(curr_x.copy())
126 | cost_list.append(cpst[0])
127 | u_list.append(u[i-last_refresh])
128 | if i%REFRESH_EVERY == 0:
129 | #recompute x0 = [1.0,]*self.H + [init_state[0],]*self.H + [init_state[1],]*self.H
130 | try:
131 | u,pred = controller.next(normalize_state(curr_x), cpst, delta=None, verbosity=5)#REFRESH_EVERY
132 | except ZeroDivisionError:
133 | i = 0
134 | while True:
135 | i+=1
136 | print("retry with random try {}".format(i))
137 | try:
138 | u,pred = controller.next(normalize_state(curr_x), cpst, delta=None, verbosity=5, x0=np.random.uniform(low=-1, high=1, size=Hb*3))
139 | break
140 | except ZeroDivisionError:
141 | pass
142 | decision_making.append((u,pred))
143 | last_refresh = i+1
144 |
145 |
146 | #states = unnormalize(np.stack(states))
147 |
148 | t = np.arange(0, len(states), 1)
149 |
150 | u_list = (np.array(u_list)+1)*50.0
151 |
152 |
153 |
154 | x = np.stack(states)[:,0]
155 | y = np.stack(states)[:,1]
156 |
157 |
158 | plt.figure(figsize=(9.5,5))
159 | a = plt.plot(t*DT, x, label="x")
160 | b = plt.plot(t*DT, y, label="y")
161 | c = plt.plot(t*DT, u_list, label="u")
162 |
163 | for local_t, cost in zip(t*DT, cost_list):
164 | if cost<0.5:
165 | plt.axvspan(local_t, local_t+DT, facecolor='b', alpha=0.2)
166 |
167 |
168 | plt.xlabel("Time (seconds)", size=15)
169 | plt.ylabel("Value", size=15)
170 |
171 | red_patch = mpatches.Patch(color='b', alpha=0.2, label='Low-cost period')
172 | plt.legend(handles=[red_patch, a[0], b[0], c[0]], framealpha=1.0, loc=0, fontsize=11)
173 |
174 | plt.tight_layout()
175 | plt.show()
176 |
177 |
178 | # Compute total cost
179 |
180 | b = np.array(cost_list).reshape(1, -1)
181 | a = np.array(u_list).reshape(1, -1)
182 |
183 | total_cost = np.dot(a, b.T)
184 | """
--------------------------------------------------------------------------------
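The legacy block quoted at the end of `run.py` calls a `normalize_state` helper that is not defined anywhere in the file; judging from the normalization warning at the top (x = x/30 - 1, y = y/30 - 1, u = u/50 - 1), it presumably looked something like this hypothetical sketch:

```python
import numpy as np

def normalize_state(x):
    # Map raw prey/predator populations into the model's normalized range,
    # per the comment at the top of run.py: x = x/30 - 1, y = y/30 - 1.
    return np.asarray(x) / 30.0 - 1.0

def un_normalize_u(u):
    # Inverse of the control normalization u = u/50 - 1 (matches the
    # definition inside the legacy block).
    return (np.asarray(u) + 1.0) * 50.0
```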
/examples/nothing:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Enderdead/pyNeuralEMPC/397a459e1281dabd82876843ed83964c42c05c86/examples/nothing
--------------------------------------------------------------------------------
/pyNeuralEMPC/__init__.py:
--------------------------------------------------------------------------------
1 | __version__ = '0.0'
2 |
3 | from pyNeuralEMPC import model
4 | from pyNeuralEMPC import objective
5 | from pyNeuralEMPC import integrator
6 | from pyNeuralEMPC import optimizer
7 | from pyNeuralEMPC import constraints
8 | from pyNeuralEMPC import controller
--------------------------------------------------------------------------------
/pyNeuralEMPC/checker.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Enderdead/pyNeuralEMPC/397a459e1281dabd82876843ed83964c42c05c86/pyNeuralEMPC/checker.py
--------------------------------------------------------------------------------
/pyNeuralEMPC/constraints.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | class DomainConstraint:
4 |
5 | def __init__(self, states_constraint: list, control_constraint: list):
6 |
7 |         if len(states_constraint) == 0:
8 |             raise ValueError("States constraint is empty!")
9 | 
10 |         if len(control_constraint) == 0:
11 |             raise ValueError("Control constraint is empty!")
12 |
13 |         if (len(set([len(element) for element in states_constraint])) != 1) or len(states_constraint[0])!=2:
14 |             raise ValueError("Your states constraint must be a list of bound pairs! [(lower_bound, upper_bound), ...]")
15 | 
16 |         if (len(set([len(element) for element in control_constraint])) != 1) or len(control_constraint[0])!=2:
17 |             raise ValueError("Your control constraint must be a list of bound pairs! [(lower_bound, upper_bound), ...]")
18 |
19 |
20 | self.states_constraint = states_constraint
21 | self.control_constraint = control_constraint
22 |
23 | def get_dim(self, H):
24 | return len(self.states_constraint), len(self.control_constraint)
25 |
26 | def get_lower_bounds(self, H):
27 | return [ element[0] for element in self.states_constraint ]*H + [ element[0] for element in self.control_constraint]*H
28 |
29 | def get_upper_bounds(self, H):
30 | return [ element[1] for element in self.states_constraint ]*H + [ element[1] for element in self.control_constraint]*H
31 |
32 | def get_type(self):
33 | return Constraint.EQ_TYPE
34 |
35 |
36 | class Constraint:
37 | EQ_TYPE = 0
38 | INEQ_TYPE = 1
39 | INTER_TYPE = 2
40 | """
41 | This class will implement a constraint under a formula declaration for a given state.
42 |
43 |     TODO add a mechanism to filter whether it's an eq or ineq
44 | """
45 | def forward(self, x, u, p=None, tvp=None):
46 | pass
47 |
48 | def jacobian(self, x, u, p=None, tvp=None):
49 | pass
50 |
51 | def get_lower_bounds(self, H):
52 | raise NotImplementedError()
53 |
54 | def get_upper_bounds(self, H):
55 | raise NotImplementedError()
56 |
57 | def get_type(self, H=None):
58 | if (self.get_upper_bounds(H) == self.get_lower_bounds(H)).all() and (self.get_lower_bounds(H) == 0).all() :
59 | return Constraint.EQ_TYPE
60 | if (self.get_upper_bounds(H) == np.inf).all() and (self.get_lower_bounds(H) == 0).all():
61 | return Constraint.INEQ_TYPE
62 | else:
63 | return Constraint.INTER_TYPE
64 |
65 |
66 | class EqualityConstraint(Constraint):
67 | def forward(self, x, u, p=None, tvp=None):
68 | raise NotImplementedError()
69 |
70 | def jacobian(self, x, u, p=None, tvp=None):
71 | raise NotImplementedError()
72 |
73 |     def get_dim(self, H):
74 | raise NotImplementedError()
75 |
76 | def get_lower_bounds(self, H):
77 | return np.zeros(int(self.get_dim(H)))
78 |
79 | def get_upper_bounds(self, H):
80 | return np.zeros(int(self.get_dim(H)))
81 |
82 | class InequalityConstraint(Constraint):
83 | def forward(self, x, u, p=None, tvp=None):
84 | raise NotImplementedError()
85 |
86 | def jacobian(self, x, u, p=None, tvp=None):
87 | raise NotImplementedError()
88 |
89 |     def get_dim(self, H):
90 | raise NotImplementedError()
91 |
92 | def get_lower_bounds(self, H):
93 | return np.zeros(int(self.get_dim(H)))
94 |
95 | def get_upper_bounds(self, H):
96 | return np.ones(int(self.get_dim(H)))*np.inf
97 |
98 |
--------------------------------------------------------------------------------
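The bound vectors produced by `DomainConstraint` are laid out to match the decision vector used by the integrators: the per-step state bounds repeated H times first, then the per-step control bounds repeated H times. A short example for a 2-state, 1-control problem:

```python
import numpy as np
from pyNeuralEMPC.constraints import DomainConstraint

c = DomainConstraint(states_constraint=[(-np.inf, 1.0), (-1.0, 1.0)],
                     control_constraint=[(0.0, 0.5)])

H = 3
# Per-step state bounds repeated H times, then per-step control bounds repeated H times:
print(c.get_lower_bounds(H))  # [-inf, -1.0, -inf, -1.0, -inf, -1.0, 0.0, 0.0, 0.0]
print(c.get_upper_bounds(H))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5]
```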
/pyNeuralEMPC/controller.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from .optimizer import Ipopt, Optimizer
3 | from .constraints import DomainConstraint
4 |
5 |
6 |
7 | class NMPC():
8 | def __init__(self, integrator, objective_func, constraint_list, H, DT, optimizer=Ipopt(), use_hessian=True):
9 |
10 | self.integrator = integrator
11 | self.objective_func = objective_func
12 | self.constraint_list = constraint_list
13 | self.domain_constraint = list(filter(lambda x: isinstance(x, DomainConstraint), constraint_list))[0]
14 | self.constraint_list.remove(self.domain_constraint)
15 | self.H = H
16 | self.DT = DT
17 | self.optimizer = optimizer
18 | self.use_hessian = use_hessian
19 |
20 |
21 |         # TODO: verify everything!
22 |
23 | def get_pb(self, x0: np.array, p=None, tvp=None, init_x=None, init_u=None):
24 |
25 |         # First check the x0, p and tvp dims, and whether the model needs them.
26 |         assert len(x0.shape) == 1, "x0 must be a vector"
27 |         assert x0.shape[0] == self.integrator.model.x_dim, "x0 dim must be set according to your model!"
28 |
29 |         if p is not None:
30 |             assert len(p.shape) == 1, "p must be a vector"
31 |             assert p.shape[0] == self.integrator.model.p_dim, "p dim must be set according to your model!"
32 | 
33 |         if tvp is not None:
34 |             assert len(tvp.shape) == 2, "tvp must be a matrix"
35 |             assert tvp.shape[1] == self.integrator.model.tvp_dim, "tvp dim must be set according to your model!"
36 |             assert tvp.shape[0] == self.H, "tvp first dim must be set according to the horizon size!"
37 | 
38 |         assert (init_x is None) == (init_u is None), "you must give both init values"
39 |         if init_x is not None:
40 |             assert init_x.shape[1] == self.integrator.model.x_dim and init_u.shape[1] == self.integrator.model.u_dim, f"init_x/init_u must have the expected feature sizes ({self.integrator.model.x_dim}/{self.integrator.model.u_dim})"
41 |
42 | pb_facto = self.optimizer.get_factory()
43 |
44 | pb_facto.set_x0(x0)
45 |
46 | pb_facto.set_objective(self.objective_func)
47 |
48 | pb_facto.set_integrator(self.integrator)
49 |
50 | pb_facto.set_constraints(self.constraint_list)
51 |
52 | if not(init_x is None):
53 | pb_facto.set_init_values(init_x, init_u)
54 |
55 | if not tvp is None:
56 | pb_facto.set_tvp(tvp)
57 |
58 | if not p is None:
59 | pb_facto.set_p(p)
60 |
61 | pb_obj = pb_facto.getProblemInterface()
62 |
63 | return pb_obj
64 |
65 | def next(self, x0: np.array, p=None, tvp=None, init_x=None, init_u=None):
66 |
67 |         # First check the x0, p and tvp dims, and whether the model needs them.
68 |         assert len(x0.shape) == 1, "x0 must be a vector"
69 |         assert x0.shape[0] == self.integrator.model.x_dim, "x0 dim must be set according to your model!"
70 | 
71 |         if p is not None:
72 |             assert len(p.shape) == 1, "p must be a vector"
73 |             assert p.shape[0] == self.integrator.model.p_dim, "p dim must be set according to your model!"
74 | 
75 |         if tvp is not None:
76 |             assert len(tvp.shape) == 2, "tvp must be a matrix"
77 |             assert tvp.shape[1] == self.integrator.model.tvp_dim, "tvp dim must be set according to your model!"
78 |             assert tvp.shape[0] == self.H, "tvp first dim must be set according to the horizon size!"
79 | 
80 |         assert (init_x is None) == (init_u is None), "you must give both init values"
81 | 
82 |         if init_x is not None:
83 |             assert init_x.shape[1] == self.integrator.model.x_dim, f"init_x must have the expected feature size ({self.integrator.model.x_dim})"
84 |             assert init_u.shape[1] == self.integrator.model.u_dim, f"init_u must have the expected feature size ({self.integrator.model.u_dim})"
85 |
86 | pb_facto = self.optimizer.get_factory()
87 |
88 | pb_facto.set_x0(x0)
89 |
90 | pb_facto.set_objective(self.objective_func)
91 |
92 | pb_facto.set_integrator(self.integrator)
93 |
94 | pb_facto.set_constraints(self.constraint_list)
95 |
96 | if not(init_x is None):
97 | pb_facto.set_init_values(init_x, init_u)
98 |
99 | if not tvp is None:
100 | pb_facto.set_tvp(tvp)
101 |
102 | if not p is None:
103 | pb_facto.set_p(p)
104 |
105 | pb_obj = pb_facto.getProblemInterface()
106 |
107 | res = self.optimizer.solve(pb_obj, self.domain_constraint)
108 |
109 | if res == Optimizer.SUCCESS:
110 | return self.optimizer.prev_result[0: self.integrator.model.x_dim*self.integrator.H].reshape(self.integrator.H, -1), self.optimizer.prev_result[self.integrator.model.x_dim*self.integrator.H: ].reshape(self.integrator.H, -1)
111 |
112 | else:
113 | return None, None
114 |
--------------------------------------------------------------------------------
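On success, `NMPC.next` returns the optimized state trajectory followed by the control trajectory, each reshaped to (H, dim); on failure it returns (None, None). A receding-horizon loop therefore looks roughly like the hypothetical sketch below, where `x_init`, `n_steps` and `plant` are stand-ins and `mpc` is built as in the sketch after the README:

```python
# Hypothetical receding-horizon loop: `plant(u)` is a stand-in for stepping
# the real system by one control and returning the new state vector.
x = x_init
for _ in range(n_steps):
    x_pred, u_pred = mpc.next(x)
    if x_pred is None:
        break                 # solver failure: stop, or retry with a new initial guess
    x = plant(u_pred[0])      # apply only the first control, then re-optimize
```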
/pyNeuralEMPC/integrator/__init__.py:
--------------------------------------------------------------------------------
1 | # This directory implements everything about the derivation and integration schemes used in this library.
2 |
3 | from pyNeuralEMPC.integrator import discret
4 | from pyNeuralEMPC.integrator import rk4
5 | from pyNeuralEMPC.integrator import unity
--------------------------------------------------------------------------------
/pyNeuralEMPC/integrator/base.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from ..model.base import Model
3 |
4 |
5 | class Integrator():
6 | def __init__(self, model, H: int, nb_contraints: int):
7 | """Base integrator constructor.
8 |
9 | Args:
10 |             model (Model): The system model.
11 |             H (int): The horizon size.
12 |             nb_contraints (int): The number of constraints generated
13 |                 by this integration method.
14 | 
15 | """
16 | if not isinstance(model, (Model,)):
17 | raise ValueError("The model provided isn't a Model object !")
18 |
19 | self.H = H
20 | self.model = model
21 | self.nb_contraints = nb_contraints
22 |
23 | self.hessian_structure_cache = None
24 |
25 |     def get_dim(self) -> int:
26 |         """Return the number of constraints generated.
27 | 
28 |         This method returns the number of constraints generated by this integration method.
29 | 
30 |         Returns:
31 |             int: The number of constraints.
32 |
33 | """
34 | raise NotImplementedError("")
35 |
36 |
37 |     def get_bound(self) -> list:
38 |         """Return the lower and upper bounds for the auto-generated constraints.
39 | 
40 |         Returns:
41 |             list: The lower and upper bounds.
42 |
43 | """
44 | raise NotImplementedError("")
45 |
46 |     def forward(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
47 | """Generate the constraint error.
48 |
49 | Args:
50 | u (np.ndarray): The current control matrix.
51 |
52 | Returns:
53 | np.ndarray: The constraint vector.
54 |
55 | """
56 | raise NotImplementedError("")
57 |
58 |     def jacobian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
59 | """Generate the jacobian matrix.
60 |
61 | Args:
62 | u (np.ndarray): The current control matrix.
63 |
64 | Returns:
65 | np.ndarray: The jacobian matrix.
66 |
67 | """
68 | raise NotImplementedError("")
69 |
70 |     def hessian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
71 | """Generate the hessian matrix.
72 |
73 | Args:
74 | u (np.ndarray): The current control matrix.
75 |
76 | Returns:
77 | np.ndarray: The hessian matrix.
78 |
79 | """
80 | raise NotImplementedError("")
81 |
82 |
83 | def hessianstructure(self):
84 | if self.hessian_structure_cache is None:
85 | self.hessian_structure_cache = self._compute_hessianstructure()
86 | return self.hessian_structure_cache
87 |
88 |
89 | def _compute_hessianstructure(self):
90 |         # Brute-force probing to identify the non-zero Hessian coordinates
91 | # TODO add p and tvp
92 |
93 | hessian_map = None
94 |
95 | for _ in range(3): # TODO maybe use an arg
96 | x_random = np.random.uniform(size=(self.H, self.model.x_dim))
97 | u_random = np.random.uniform(size=(self.H, self.model.u_dim))
98 | p_random = None
99 | tvp_random = None
100 |
101 | if self.model.p_dim > 0:
102 | p_random = np.random.uniform(size=self.model.p_dim)
103 |
104 | if self.model.tvp_dim > 0:
105 | tvp_random = np.random.uniform(size=(self.H, self.model.tvp_dim))
106 |
107 |             final_hessian = self.hessian(x_random, u_random, x_random[0], p=p_random, tvp=tvp_random)
108 |
109 | if hessian_map is None:
110 | hessian_map = (final_hessian!= 0.0).astype(np.float64)
111 | else:
112 | hessian_map += (final_hessian!= 0.0).astype(np.float64)
113 |         hessian_map = hessian_map.astype(bool).astype(np.float64)
114 |         hessian_map = np.sum(hessian_map, axis=0).astype(bool).astype(np.float64)
115 | return hessian_map
116 |
117 |
118 |
119 | def get_lower_bounds(self, _):
120 | return [0.0,]*self.nb_contraints
121 |
122 | def get_upper_bounds(self, _):
123 | return [0.0,]*self.nb_contraints
124 |
125 |
126 | class NoIntegrator(Integrator):
127 |     # TODO: implement a NoIntegrator for users who do not want the conventional approach
128 | pass
129 |
130 |
--------------------------------------------------------------------------------
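The sparsity probe in `_compute_hessianstructure` generalizes to any black-box Hessian: evaluate it at a few random points and mark every coordinate that is non-zero at least once. A minimal standalone sketch of the same idea:

```python
import numpy as np

def probe_sparsity(hess_fn, dim, n_trials=3, seed=0):
    """Mark entries of an unknown (dim x dim) Hessian that are non-zero
    at any of n_trials random evaluation points."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((dim, dim), dtype=bool)
    for _ in range(n_trials):
        mask |= (hess_fn(rng.uniform(size=dim)) != 0.0)
    return mask.astype(np.float64)

# Example: f(z) = z0**2 + z1*z2 has a Hessian that never touches (0,1) or (0,2).
hess = lambda z: np.array([[2.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
print(probe_sparsity(hess, 3))
```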
/pyNeuralEMPC/integrator/discret.py:
--------------------------------------------------------------------------------
1 | from .base import Integrator
2 | import numpy as np
3 |
4 |
5 |
6 |
7 | class DiscretIntegrator(Integrator):
8 |
9 | def __init__(self, model, H):
10 | nb_contraints = model.x_dim*H
11 | super(DiscretIntegrator, self).__init__(model, H, nb_contraints)
12 |
13 | def forward(self, x: np.ndarray, u: np.ndarray, x0: np.ndarray, p=None, tvp=None)-> np.ndarray:
14 |
15 | assert len(x.shape) == 2 and len(u.shape) == 2, "x and u tensor must have dim 2"
16 |
17 | assert len(x0.shape) == 1, "x0 shape must have dim 1"
18 |
19 | # TODO maybe look at the shape values
20 |
21 |         # Create views for t-1 and t in order to perform the subtraction
22 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
23 | x_t = x.copy()
24 |
25 |
26 |         # Get the discrete differential prediction from the model
27 | estim_x_t = x_t_1 + self.model.forward(x_t_1, u, tvp=tvp, p=p)
28 |
29 | # return flatten constraint error
30 | return (estim_x_t - x_t).reshape(-1)
31 |
32 | def jacobian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
33 | state_dim = self.model.x_dim
34 | control_dim = self.model.u_dim
35 | tvp_dim = self.model.tvp_dim
36 | p_dim = self.model.p_dim
37 |
38 | J_extended = np.zeros((state_dim*self.H, (state_dim+control_dim)*self.H))
39 |
40 | # Push -1 gradient estimate of identity
41 | J_extended[0:self.H*state_dim, 0:self.H*state_dim] = -np.eye(state_dim*self.H,state_dim*self.H)
42 |
43 | # Now we need to estimate jacobian of the model forecasting
44 |
45 | # generate model input and evaluate the jacobian matrix
46 |
47 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
48 | model_jac = self.model.jacobian(x_t_1, u, p=p, tvp=tvp)
49 |
50 |
51 | # state t - 1
52 | J_extended[state_dim:,0:state_dim*(self.H-1)] += np.eye(state_dim*(self.H-1),state_dim*(self.H-1) )
53 | J_extended[state_dim:,0:state_dim*(self.H-1)] += model_jac[state_dim:,state_dim:state_dim*self.H]
54 |
55 | # U
56 | J_extended[:,state_dim*self.H:(state_dim+control_dim)*self.H] += model_jac[:,state_dim*self.H:(state_dim+control_dim)*self.H]
57 |
58 | return J_extended
59 |
60 |
61 | def hessian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
62 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
63 |
64 | model_H = self.model.hessian(x_t_1, u, p=p, tvp=tvp)
65 |
66 |
67 | model_H = model_H.reshape(-1,*model_H.shape[2:])
68 | final_H = np.zeros_like(model_H)
69 | # x_t x x_t
70 | final_H[:,:x.shape[1]*(x.shape[0]-1), 0:x.shape[1]*(x.shape[0]-1)] += model_H[:,x.shape[1]:x.shape[1]*x.shape[0], x.shape[1]:x.shape[1]*x.shape[0]]
71 | # u_t x u_t
72 | final_H[:,x.shape[1]*x.shape[0]:, x.shape[1]*x.shape[0]:] += model_H[:,x.shape[1]*x.shape[0]:, x.shape[1]*x.shape[0]:]
73 |
74 | # x_t x u_t
75 | final_H[:, :x.shape[1]*(x.shape[0]-1), x.shape[1]*x.shape[0]:] += model_H[:,x.shape[1]:x.shape[1]*x.shape[0], x.shape[1]*x.shape[0]:]
76 |
77 | # u_t x x_t
78 | final_H[:,x.shape[1]*x.shape[0]:, :x.shape[1]*(x.shape[0]-1) ] += model_H[:, x.shape[1]*x.shape[0]:, x.shape[1]:x.shape[1]*x.shape[0] ]
79 |
80 |
81 | return final_H
82 |
--------------------------------------------------------------------------------
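`DiscretIntegrator` encodes one defect constraint per state and per step, x_t - (x_{t-1} + f(x_{t-1}, u_{t-1})) = 0, flattened over the horizon. A tiny numeric check with a hand-written linear toy model (the `LinearModel` class below is hypothetical, for illustration only):

```python
import numpy as np
from pyNeuralEMPC.model.base import Model
from pyNeuralEMPC.integrator.discret import DiscretIntegrator

class LinearModel(Model):
    # Toy increment model: dx = x @ A.T + u @ B.T
    def __init__(self):
        super().__init__(x_dim=2, u_dim=1, p_dim=0, tvp_dim=0)
        self.A = np.array([[0.0, 1.0], [-1.0, 0.0]])
        self.B = np.array([[0.0], [1.0]])

    def forward(self, x, u, p=None, tvp=None):
        return x @ self.A.T + u @ self.B.T

H = 3
integ = DiscretIntegrator(LinearModel(), H)
x0 = np.array([1.0, 0.0])
x = np.zeros((H, 2))
u = np.zeros((H, 1))
defect = integ.forward(x, u, x0)  # zero exactly when x obeys x_t = x_{t-1} + dx_{t-1}
print(defect.shape)               # (H * x_dim,) = (6,)
```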
/pyNeuralEMPC/integrator/rk4.py:
--------------------------------------------------------------------------------
1 | from .base import Integrator
2 | import numpy as np
3 | import time
4 | import hashlib
5 | def extend_dim(array, size, axis=0, value=0.0):
6 | target_shape = list(array.shape)
7 | target_shape[axis] = size
8 |
9 | extented_array = np.ones(target_shape, dtype=array.dtype)*value
10 | return np.concatenate([array, extented_array], axis=axis)
11 |
12 |
13 | def make_diag_from_2D(A: np.ndarray):
14 | result = np.zeros((A.shape[0], A.shape[0]*A.shape[1]))
15 | for i, sub_element in enumerate(A):
16 | result[i:i+1, i*A.shape[1] : (i+1)*A.shape[1]] = sub_element
17 | return result
18 |
19 |
20 | class TensorCache:
21 | def __init__(self, max_size=5):
22 | self.keys = list()
23 | self.datas = list()
24 | self.max_size = max_size
25 |
26 | def pull(self, *args):
27 |         hash_key = hash("".join(map(str, args)))  # NOTE: str() truncates large arrays, so keys can collide
28 | if hash_key in self.keys:
29 | return self.datas[self.keys.index(hash_key)]
30 | else:
31 | return None
32 |
33 | def push(self, key, value):
34 |         hash_key = hash("".join(map(str, key)))
35 | if hash_key in self.keys:
36 | self.datas[self.keys.index(hash_key)] = value
37 | else:
38 | if len(self.keys)>=self.max_size:
39 | del self.keys[0]
40 | del self.datas[0]
41 |
42 | self.keys.append(hash_key)
43 | self.datas.append(value)
44 |
45 |
46 | class RK4Integrator(Integrator):
47 |
48 | def __init__(self, model, H, DT, cache_mode=False, cache_size=2):
49 | self.DT = DT
50 | nb_contraints = model.x_dim*H
51 | self.cache_mode = cache_mode
52 | self.forward_cache = TensorCache(max_size=cache_size) if self.cache_mode else None
53 | self.jacobian_cache = TensorCache(max_size=cache_size) if self.cache_mode else None
54 | super(RK4Integrator, self).__init__(model, H, nb_contraints)
55 |
56 |
57 | def forward(self, x: np.ndarray, u: np.ndarray, x0: np.ndarray, p=None, tvp=None)-> np.ndarray:
58 |
59 | assert len(x.shape) == 2 and len(u.shape) == 2, "x and u tensor must have dim 2"
60 |
61 | assert len(x0.shape) == 1, "x0 shape must have dim 1"
62 |
63 | # TODO maybe look at the shape values
64 |
65 |         # Create views for t-1 and t in order to perform the subtraction
66 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
67 | x_t = x.copy()
68 |
69 | k_1 = self.model.forward( x_t_1, u, p=p, tvp=tvp)
70 | k_2 = self.model.forward( x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp)
71 | k_3 = self.model.forward( x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp)
72 | k_4 = self.model.forward( x_t_1 + k_3*self.DT, u, p=p, tvp=tvp)
73 |
74 | # Push into cache if needed
75 | if self.cache_mode:
76 | self.forward_cache.push( (x, u, x0, p, tvp), (k_1, k_2, k_3, k_4) )
77 |
78 | d_x_t_1 = (k_1 + 2*k_2+ 2*k_3 + k_4)*self.DT/6.0
79 |         # Get the discrete differential prediction from the model
80 | estim_x_t = x_t_1 + d_x_t_1
81 |
82 | # return flatten constraint error
83 | return (estim_x_t - x_t).reshape(-1)
84 |
85 | def _get_model_jacobian(self, x_t_1, u, p=None, tvp=None):
86 | jaco = self.model.jacobian(x_t_1, u, p=p, tvp=tvp)
87 | reshape_indexer = sum([ list(np.arange(x_t_1.shape[1])+i*x_t_1.shape[1]) + \
88 | list( x_t_1.shape[1]*x_t_1.shape[0]+np.arange(u.shape[1])+ i*u.shape[1]) for i in range(x_t_1.shape[0]) ], list())
89 | jaco = np.take(jaco, reshape_indexer, axis=1)
90 | jaco = jaco.reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[0], x_t_1.shape[1]+ u.shape[1])
91 |         jaco = np.array([jaco[i, :, i, :] for i in range(x_t_1.shape[0])])  # TODO: does this work?
92 | return jaco
93 |
94 | def _get_model_hessian(self, x_t_1, u, p=None, tvp=None):
95 | state_dim = x_t_1.shape[1]
96 | u_state = u.shape[1]
97 | hessian = self.model.hessian(x_t_1, u, p=p, tvp=tvp)
98 |
99 | hessian = hessian.reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[0]*(x_t_1.shape[1]+u.shape[1]), x_t_1.shape[0]*(x_t_1.shape[1]+u.shape[1]))
100 |
101 | reshape_indexer = sum([ list(np.arange(x_t_1.shape[1])+i*x_t_1.shape[1]) + \
102 | list( x_t_1.shape[1]*x_t_1.shape[0]+np.arange(u.shape[1])+ i*u.shape[1]) for i in range(x_t_1.shape[0]) ], list())
103 |
104 |
105 |
106 | hessian = np.take(hessian, reshape_indexer, axis=-1)
107 | hessian = np.take(hessian, reshape_indexer, axis=-2)
108 | hessian = np.array([hessian[i, :, (state_dim+u_state)*i:(state_dim+u_state)*(i+1), (state_dim+u_state)*i:(state_dim+u_state)*(i+1)] for i in range(x_t_1.shape[0])])
109 |
110 | return hessian
111 |
112 |
113 | def jacobian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
114 | start = time.time()
115 | state_dim = self.model.x_dim
116 | control_dim = self.model.u_dim
117 | tvp_dim = self.model.tvp_dim
118 | p_dim = self.model.p_dim
119 |
120 | J_extended = np.zeros((state_dim*self.H, (state_dim+control_dim)*self.H))
121 |
122 | # Push -1 gradient estimate of identity
123 | J_extended[0:self.H*state_dim, 0:self.H*state_dim] = -np.eye(state_dim*self.H,state_dim*self.H)
124 |
125 | # Now we need to estimate jacobian of the model forecasting
126 |
127 | # generate model input and evaluate the jacobian matrix
128 |
129 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
130 |
131 | if self.cache_mode:
132 | cache_data = self.forward_cache.pull(x, u, x0, p, tvp)
133 | else:
134 | cache_data = None
135 |
136 | if cache_data is None:
137 | k_1 = self.model.forward( x_t_1, u, p=p, tvp=tvp)
138 | k_2 = self.model.forward( x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp)
139 | k_3 = self.model.forward( x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp)
140 | else:
141 | k_1, k_2, k_3, _ = cache_data
142 |
143 | dk1 = self._get_model_jacobian(x_t_1, u, p=p, tvp=tvp)
144 |
145 |         # TODO: recurrent networks can't be supported here...
146 |         # Add a safety check.
147 | partial_dk2 = self._get_model_jacobian(x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp).reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[1]+ u.shape[1])
148 | dk1_extended = extend_dim(dk1,u.shape[1], axis=1)
149 | dk2 = np.einsum('ijk,ikl->ijl',partial_dk2, np.eye(x_t_1.shape[1]+u.shape[1] ,x_t_1.shape[1]+u.shape[1] )+dk1_extended*self.DT/2.0)
150 |
151 | partial_dk3 = self._get_model_jacobian(x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp).reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[1]+ u.shape[1])
152 | dk2_extended = extend_dim(dk2,u.shape[1], axis=1)
153 | dk3 = np.einsum('ijk,ikl->ijl',partial_dk3, np.eye(x_t_1.shape[1]+u.shape[1] ,x_t_1.shape[1]+u.shape[1] )+dk2_extended*self.DT/2.0)
154 |
155 | partial_dk4 = self._get_model_jacobian(x_t_1 + k_3*self.DT, u, p=p, tvp=tvp).reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[1]+ u.shape[1])
156 | dk3_extended = extend_dim(dk3,u.shape[1], axis=1)
157 | dk4 = np.einsum('ijk,ikl->ijl',partial_dk4, np.eye(x_t_1.shape[1]+u.shape[1] ,x_t_1.shape[1]+u.shape[1] )+dk3_extended*self.DT)
158 |
159 | model_jac = (self.DT/6.0) * ( dk1 + 2*dk2+ 2*dk3+ dk4)
160 |
161 | if self.cache_mode:
162 | self.jacobian_cache.push((x, u, x0, p, tvp), (dk1, partial_dk2, partial_dk3, partial_dk4))
163 |
164 | model_jac = model_jac.reshape(x_t_1.shape[0],x_t_1.shape[1], x_t_1.shape[1]+u.shape[1])
165 |
166 | #model_jac_extended = np.array([ np.concatenate([np.zeros((x_t_1.shape[1]*i, x_t_1.shape[1]+u.shape[1])) ,model_jac[i,:,:],np.zeros(((x_t_1.shape[0]-i)* x_t_1.shape[1], x_t_1.shape[1]+u.shape[1])) ], axis=1) for i in range(x_t_1.shape[0])], )
167 | # state t - 1
168 | J_extended[state_dim:,0:state_dim*(self.H-1)] += np.eye(state_dim*(self.H-1),state_dim*(self.H-1) )
169 |
170 | for i in range(self.H):
171 | # state t - 1
172 | if i>0:
173 | J_extended[state_dim*i:state_dim*(i+1),state_dim*(i-1):state_dim*(i)] += model_jac[i, :, :state_dim]
174 |
175 | # U
176 | J_extended[state_dim*i:state_dim*(i+1),state_dim*self.H+control_dim*i:state_dim*self.H+control_dim*(i+1)] += model_jac[i, :,state_dim:]
177 |
178 | return J_extended
179 |
180 |
181 | def hessian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
182 | def T(array):
183 | return np.transpose(array, (0, 2, 1))
184 |
185 | def dot_left(a, b):
186 | return np.einsum('ijk,ipkl->ipjl', a, b)
187 |
188 | def dot_right(a, b):
189 | return np.einsum('ipkl,ilj->ipkj', a, b)
190 |
191 | def cross_sum(j, h):
192 | result = np.zeros_like(h)
193 | for i in range(j.shape[1]):
194 | for k in range(j.shape[1]):
195 | result[:,i,:,:] += j[:,i,k].reshape(-1,1,1)*h[:,k,:,:]
196 |
197 | return result
198 |
199 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
200 | # TODO add security about rnn
201 |
202 |
203 | if self.cache_mode:
204 | forward_cache_data = self.forward_cache.pull(x, u, x0, p, tvp)
205 | else:
206 | forward_cache_data = None
207 |
208 | if forward_cache_data is None:
209 | k_1 = self.model.forward( x_t_1, u, p=p, tvp=tvp)# (2,2)
210 | k_2 = self.model.forward( x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp) # (2,2)
211 | k_3 = self.model.forward( x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp) # (2,2)
212 | else:
213 | k_1, k_2, k_3, _ = forward_cache_data
214 |
215 |
216 | if self.cache_mode:
217 | jacobian_cache_data = self.jacobian_cache.pull(x, u, x0, p, tvp)
218 | else:
219 | jacobian_cache_data = None
220 |
221 | if jacobian_cache_data is None:
222 | dk1 = self._get_model_jacobian(x_t_1, u, p=p, tvp=tvp)# (2,3,3)
223 | partial_dk2 = self._get_model_jacobian(x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp)# (2,3,3)
224 | partial_dk3 = self._get_model_jacobian(x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp) # (2,3,3)
225 | partial_dk4 = self._get_model_jacobian(x_t_1 + k_3*self.DT, u, p=p, tvp=tvp) # (2,3,3)
226 | else:
227 | dk1, partial_dk2, partial_dk3, partial_dk4 = jacobian_cache_data
228 |
229 |
230 |
231 |
232 |
233 |
234 |
235 | #dk1 = self._get_model_jacobian(x_t_1, u, p=p, tvp=tvp) # (2,2,3)
236 |
237 | dk1_extended = extend_dim(dk1,u.shape[1], axis=1) # (2,3,3)
238 | h_k1 = self._get_model_hessian(x_t_1, u, p=p, tvp=tvp)# (2, 2, 3, 3)
239 |
240 |
241 | # K2
242 | #partial_dk2 = self._get_model_jacobian(x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp).reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[1]+ u.shape[1])
243 | dk2 = np.einsum('ijk,ikl->ijl',partial_dk2, np.eye(x_t_1.shape[1]+u.shape[1] ,x_t_1.shape[1]+u.shape[1] )+dk1_extended*self.DT/2.0)
244 | dk2_extended = extend_dim(dk2,u.shape[1], axis=1) # (2,3,3)
245 | #J_local_k2 = self._get_model_jacobian(x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp)# (2,2,3)
246 |         relative_j_k1 = np.eye(x_t_1.shape[1]+u.shape[1]) + ((self.DT/2.0) * dk1_extended)  # (H, x_dim+u_dim, x_dim+u_dim)
247 | relative_h_k2 = self._get_model_hessian(x_t_1 + k_1*self.DT/2.0, u, p=p, tvp=tvp)# (2, 2, 3, 3)
248 | h_k2 = dot_right( dot_left(T(relative_j_k1), relative_h_k2), relative_j_k1) + (self.DT/2.0)*cross_sum(partial_dk2, h_k1)# (2, 2, 3, 3)
249 |
250 | # K3
251 | #partial_dk3 = self._get_model_jacobian(x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp).reshape(x_t_1.shape[0], x_t_1.shape[1], x_t_1.shape[1]+ u.shape[1])
252 | dk3 = np.einsum('ijk,ikl->ijl',partial_dk3, np.eye(x_t_1.shape[1]+u.shape[1] ,x_t_1.shape[1]+u.shape[1] )+dk2_extended*self.DT/2.0)
253 | dk3_extended = extend_dim(dk3,u.shape[1], axis=1) # (2,3,3)
254 | #J_local_k3 = self._get_model_jacobian(x_t_1 + k_2*self.DT/2.0, u, p=p, tvp=tvp)# (2,2,3)
255 |         relative_j_k2 = np.eye(x_t_1.shape[1]+u.shape[1]) + ((self.DT/2.0) * dk2_extended)  # (H, x_dim+u_dim, x_dim+u_dim)
256 | relative_h_k3 = self._get_model_hessian(x_t_1+ k_2*self.DT/2.0, u, p=p, tvp=tvp)# (2, 2, 3, 3)
257 | h_k3 = dot_right( dot_left(T(relative_j_k2), relative_h_k3), relative_j_k2) + (self.DT/2.0)*cross_sum(partial_dk3, h_k2)# (2, 2, 3, 3)
258 |
259 | # K4
260 | #partial_dk4 = self._get_model_jacobian(x_t_1 + k_3*self.DT, u, p=p, tvp=tvp)# (2,2,3)
261 |         relative_j_k3 = np.eye(x_t_1.shape[1]+u.shape[1]) + (self.DT * dk3_extended)  # (H, x_dim+u_dim, x_dim+u_dim)
262 | relative_h_k4 = self._get_model_hessian(x_t_1 + k_3*self.DT, u, p=p, tvp=tvp)# (2, 2, 3, 3)
263 | h_k4 = dot_right( dot_left(T(relative_j_k3), relative_h_k4), relative_j_k3) + (self.DT/1.0)*cross_sum(partial_dk4, h_k3)# (2, 2, 3, 3)
264 |
265 |
266 | model_H =(h_k1 + 2*h_k2 + 2*h_k3 + h_k4)*(self.DT/6.0) #(2,2,3,3)
267 | final_H = np.zeros((self.H, x_t_1.shape[1], (x_t_1.shape[1]+u.shape[1])*self.H, (x_t_1.shape[1]+u.shape[1])*self.H))
268 | offset = x.shape[1]*x.shape[0]
269 |
270 | # x_t x x_t
271 | for i in range(1, self.H):
272 | final_H[i,:,x.shape[1]*(i-1):x.shape[1]*i, x.shape[1]*(i-1):x.shape[1]*i] += model_H[i, :, :x.shape[1], :x.shape[1]]
273 |
274 | # u_t x u_t
275 | for i in range( self.H):
276 | final_H[i, :, offset+i*u.shape[1] : offset+(i+1)*u.shape[1], offset+i*u.shape[1] : offset+(i+1)*u.shape[1]] += model_H[i, :, x.shape[1]:, x.shape[1]:]
277 | # x_t x u_t
278 | for i in range(1, self.H):
279 | final_H[i, :, x.shape[1]*(i-1):x.shape[1]*i, offset+i*u.shape[1] : offset+(i+1)*u.shape[1]] += model_H[i, :, :x.shape[1], x.shape[1]:]
280 |
281 | # u_t x x_t
282 | for i in range(1, self.H):
283 | final_H[i, :, offset+i*u.shape[1] : offset+(i+1)*u.shape[1], x.shape[1]*(i-1):x.shape[1]*i] += model_H[i, :, x.shape[1]:, :x.shape[1]]
284 |
285 | return final_H.reshape(-1,*final_H.shape[2:])
286 |
287 |
288 |
--------------------------------------------------------------------------------
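Because `RK4Integrator` derives its Jacobian by chaining the stage Jacobians (dk2 depends on dk1, and so on), a finite-difference comparison is a useful sanity test. A small sketch, assuming an integrator, states, controls and x0 built as in the earlier examples:

```python
import numpy as np

def fd_jacobian(fn, z, eps=1e-6):
    # Central finite differences of a vector-valued function at z.
    cols = []
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        cols.append((fn(z + dz) - fn(z - dz)) / (2.0 * eps))
    return np.stack(cols, axis=1)

def check_rk4_jacobian(integ, x, u, x0, atol=1e-5):
    H, x_dim = x.shape
    u_dim = u.shape[1]
    # Flatten (x, u) the same way J_extended is laid out:
    # all state entries first, then all control entries.
    def fn(z):
        return integ.forward(z[:H*x_dim].reshape(H, x_dim),
                             z[H*x_dim:].reshape(H, u_dim), x0)
    z = np.concatenate([x.reshape(-1), u.reshape(-1)]).astype(float)
    return np.allclose(integ.jacobian(x, u, x0), fd_jacobian(fn, z), atol=atol)
```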
/pyNeuralEMPC/integrator/unity.py:
--------------------------------------------------------------------------------
1 | from .base import Integrator
2 | import numpy as np
3 |
4 |
5 |
6 |
7 |
8 |
9 | class UnityIntegrator(Integrator):
10 |
11 | def __init__(self, model, H):
12 | nb_contraints = model.x_dim*H
13 | super(UnityIntegrator, self).__init__(model, H, nb_contraints)
14 |
15 | def forward(self, x: np.ndarray, u: np.ndarray, x0: np.ndarray, p=None, tvp=None)-> np.ndarray:
16 |
17 | assert len(x.shape) == 2 and len(u.shape) == 2, "x and u tensor must have dim 2"
18 |
19 | assert len(x0.shape) == 1, "x0 shape must have dim 1"
20 |
21 | # TODO maybe look at the shape values
22 |
23 |         # Create views for t-1 and t in order to perform the subtraction
24 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
25 | x_t = x.copy()
26 |
27 |
28 |         # The model output is taken directly as the next-state prediction
29 | estim_x_t = self.model.forward(x_t_1, u, tvp=tvp, p=p)
30 |
31 | # return flatten constraint error
32 | return (estim_x_t - x_t).reshape(-1)
33 |
34 | def jacobian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
35 | state_dim = self.model.x_dim
36 | control_dim = self.model.u_dim
37 | tvp_dim = self.model.tvp_dim
38 | p_dim = self.model.p_dim
39 |
40 | J_extended = np.zeros((state_dim*self.H, (state_dim+control_dim)*self.H))
41 |
42 | # Push -1 gradient estimate of identity
43 | J_extended[0:self.H*state_dim, 0:self.H*state_dim] = -np.eye(state_dim*self.H,state_dim*self.H)
44 |
45 | # Now we need to estimate jacobian of the model forecasting
46 |
47 | # generate model input and evaluate the jacobian matrix
48 |
49 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
50 | model_jac = self.model.jacobian(x_t_1, u, p=p, tvp=tvp)
51 |
52 | # state t - 1
53 | J_extended[state_dim:,0:state_dim*(self.H-1)] += model_jac[state_dim:,state_dim:state_dim*self.H]#make_diag_from_2D(model_jac[state_dim:, 0:state_dim])
54 |
55 | # U
56 | J_extended[:,state_dim*self.H:(state_dim+control_dim)*self.H] += model_jac[:,state_dim*self.H:(state_dim+control_dim)*self.H]#make_diag_from_2D(model_jac[:, state_dim:state_dim+control_dim])
57 |
58 | return J_extended
59 |
60 |
61 | def hessian(self, x, u, x0, p=None, tvp=None) -> np.ndarray:
62 | x_t_1 = np.concatenate([x0.reshape(1,-1),x],axis=0)[:-1]
63 |
64 | model_H = self.model.hessian(x_t_1, u, p=p, tvp=tvp)
65 |
66 |
67 | model_H = model_H.reshape(-1,*model_H.shape[2:])
68 | final_H = np.zeros_like(model_H)
69 | # x_t x x_t
70 | final_H[:,:x.shape[1]*(x.shape[0]-1), 0:x.shape[1]*(x.shape[0]-1)] += model_H[:,x.shape[1]:x.shape[1]*x.shape[0], x.shape[1]:x.shape[1]*x.shape[0]]
71 | # u_t x u_t
72 | final_H[:,x.shape[1]*x.shape[0]:, x.shape[1]*x.shape[0]:] += model_H[:,x.shape[1]*x.shape[0]:, x.shape[1]*x.shape[0]:]
73 |
74 | # x_t x u_t
75 | final_H[:, :x.shape[1]*(x.shape[0]-1), x.shape[1]*x.shape[0]:] += model_H[:,x.shape[1]:x.shape[1]*x.shape[0], x.shape[1]*x.shape[0]:]
76 |
77 | # u_t x x_t
78 | final_H[:,x.shape[1]*x.shape[0]:, :x.shape[1]*(x.shape[0]-1) ] += model_H[:, x.shape[1]*x.shape[0]:, x.shape[1]:x.shape[1]*x.shape[0] ]
79 |
80 |
81 | return final_H
82 |
83 |
--------------------------------------------------------------------------------
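`UnityIntegrator` differs from `DiscretIntegrator` only in the defect it builds: the model output is taken as the next state itself rather than as an increment. Side by side, under the same shape conventions:

```python
import numpy as np

def discret_defect(f, x, u, x0):
    # f predicts an increment: x_t = x_{t-1} + f(x_{t-1}, u_{t-1})
    x_prev = np.concatenate([x0.reshape(1, -1), x], axis=0)[:-1]
    return (x_prev + f(x_prev, u) - x).reshape(-1)

def unity_defect(f, x, u, x0):
    # f predicts the next state directly: x_t = f(x_{t-1}, u_{t-1})
    x_prev = np.concatenate([x0.reshape(1, -1), x], axis=0)[:-1]
    return (f(x_prev, u) - x).reshape(-1)
```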
/pyNeuralEMPC/model/__init__.py:
--------------------------------------------------------------------------------
1 | # This directory implements all the tools that link user models to the library's solving system.
2 |
3 | from pyNeuralEMPC.model import tensorflow
4 | from pyNeuralEMPC.model import jax
--------------------------------------------------------------------------------
/pyNeuralEMPC/model/base.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | class Model:
4 | def __init__(self, x_dim: int, u_dim: int, p_dim=None, tvp_dim=None):
5 | self.x_dim = x_dim
6 | self.u_dim = u_dim
7 |
8 | self.p_dim = p_dim
9 | self.tvp_dim = tvp_dim
10 |
11 |     def forward(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
12 |         raise NotImplementedError("")
13 | 
14 |     def jacobian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
15 |         raise NotImplementedError("")
16 | 
17 |     def hessian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
18 |         raise NotImplementedError("")
19 |
20 |
21 |
22 | CONSTANT_VAR = 1
23 | CONTROL_VAR = 2
24 | STATE_VAR = 3
25 |
26 | class ReOrderProxyModel(Model):
27 | def __init__(self, model, order_list: list):
28 | raise NotImplementedError("")
--------------------------------------------------------------------------------
/pyNeuralEMPC/model/jax.py:
--------------------------------------------------------------------------------
1 | from .base import Model
2 |
3 | import numpy as np
4 | import jax.numpy as jnp
5 | import jax
6 |
7 |
8 | def gen_jac_proj_mat(T, X_DIM, WIN_SIZE):
9 | X_SLIDED = X_DIM * WIN_SIZE
10 | proj_mat = np.zeros((T * X_DIM, T, X_SLIDED))
11 |
12 | for i in range(T):
13 | for k in range(X_DIM):
14 | for o in range(T):
15 | pos = (WIN_SIZE-1)*X_DIM + k + i*X_DIM - o*X_DIM
16 | if o>=i and pos>=0:
17 | proj_mat[i*X_DIM + k, o, pos] = 1.0
18 |
19 | proj_mat = proj_mat.reshape(T, X_DIM, T, X_SLIDED)
20 | return proj_mat
21 |
22 | def _check_func(func, state_dim, control_dim, p_dim, tvp_dim, rolling_window=1):
23 |     try:  # TODO: this is messy
24 | _ = jax.jacrev(func)(jnp.zeros((2, state_dim*rolling_window)),\
25 | jnp.zeros((2, control_dim*rolling_window)),\
26 | p=None if p_dim is None else jnp.zeros(p_dim),\
27 | tvp=None if tvp_dim is None else jnp.zeros((2, tvp_dim*rolling_window)))
28 | except AssertionError:
29 | return False
30 | return True
31 |
32 | class DiffDiscretJaxModel(Model):
33 |
34 | def __init__(self, forward_func, x_dim: int, u_dim: int, p_dim=0, tvp_dim=0, vector_mode=False, safe_mode=True):
35 | if safe_mode:
36 | if not _check_func(forward_func, x_dim, u_dim, p_dim, tvp_dim):
37 |                 raise ValueError("Your function is not differentiable with JAX!")
38 |
39 | self.forward_func = forward_func
40 | self.vector_mode = vector_mode
41 |
42 | super(DiffDiscretJaxModel, self).__init__(x_dim, u_dim, p_dim, tvp_dim)
43 |
44 |
45 | def forward(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
46 | if self.vector_mode:
47 | return self.forward_func(x, u, p=p, tvp=tvp)
48 | else:
49 | raise NotImplementedError("")
50 |
51 |
52 | def jacobian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
53 | if self.vector_mode:
54 | argnums_list = [0, 1]
55 |
56 | jacobians = list(jax.jacobian(self.forward_func, argnums=argnums_list)(x, u, p, tvp))
57 |
58 | jacobians[0] = jacobians[0].reshape(self.x_dim*x.shape[0], self.x_dim*x.shape[0] )
59 | jacobians[1] = jacobians[1].reshape(self.x_dim*x.shape[0], self.u_dim*x.shape[0] )
60 |
61 | jaco = np.array(jnp.concatenate(jacobians, axis=1))
62 | return jaco
63 | else:
64 | raise NotImplementedError("")
65 |
66 | def hessian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
67 | if self.vector_mode:
68 | argnums_list = [0, 1]
69 |
70 | H = x.shape[0]
71 | X_dim = x.shape[1]
72 | U_dim = u.shape[1]
73 |
74 | hessians = list(jax.hessian(self.forward_func, argnums=argnums_list)(x, u, p, tvp))
75 |
76 | a = hessians[0][0].reshape(H , X_dim, H*X_dim, H*X_dim)
77 | b = hessians[0][1].reshape(H , X_dim, H*X_dim, H*U_dim)
78 | ab = jnp.concatenate([a, b], axis=3)
79 |
80 | c = hessians[1][0].reshape(H , X_dim, H*U_dim, H*X_dim)
81 | d = hessians[1][1].reshape(H , X_dim, H*U_dim, H*U_dim)
82 | cd = jnp.concatenate([c, d], axis=3)
83 |
84 | final_hessian = jnp.concatenate([ab, cd], axis=2)
85 |
86 | return np.array(final_hessian)
87 | else:
88 | raise NotImplementedError("")
89 |
90 |
91 |
92 |
93 | class DiffDiscretJaxModelRollingWindow(Model):
94 |
95 | def __init__(self, forward_func, x_dim: int, u_dim: int, p_dim=0, tvp_dim=0, rolling_window=1, forward_rolling=True, vector_mode=True, safe_mode=True):
96 |
97 | if safe_mode:
98 | if not _check_func(forward_func, x_dim, u_dim, p_dim, tvp_dim, rolling_window=rolling_window):
99 |                 raise ValueError("Your function is not differentiable with JAX!")
100 |
101 | self.forward_func = forward_func
102 | self.vector_mode = vector_mode
103 |
104 | self.prev_x = None
105 | self.prev_u = None
106 |
107 | self.forward_rolling = forward_rolling
108 |
109 | if not forward_rolling:
110 | raise NotImplementedError("Sorry ='(")
111 |
112 | self.rolling_window = rolling_window
113 |
114 | self.jacobian_proj = None
115 | self.hessian_proj = None
116 |
117 | super(DiffDiscretJaxModelRollingWindow, self).__init__(x_dim, u_dim, p_dim, tvp_dim)
118 |
119 | def set_prev_data(self, x_prev: np.ndarray, u_prev: np.ndarray, tvp_prev=None):
120 |
121 | assert x_prev.shape == (self.rolling_window-1, self.x_dim), f"Your x prev tensor must have the following shape {(self.rolling_window-1, self.x_dim)} (received : {x_prev.shape})"
122 | assert u_prev.shape == (self.rolling_window-1, self.u_dim), f"Your u prev tensor must have the following shape {(self.rolling_window-1, self.u_dim)} (received : {u_prev.shape})"
123 |
124 | self.prev_x = x_prev
125 | self.prev_u = u_prev
126 |
127 | if not tvp_prev is None:
128 | assert tvp_prev.shape == (self.rolling_window-1, self.tvp_dim), f"Your tvp prev tensor must have the following shape {(self.rolling_window-1, self.tvp_dim)} (received : {tvp_prev.shape})"
129 | self.prev_tvp = tvp_prev
130 |
131 |
132 | def _gather_input(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
133 |
134 |         assert (self.prev_x is not None) and (self.prev_u is not None), "You must provide the history window with set_prev_data before calling any inference function."
135 |
136 | x_extended = np.concatenate([self.prev_x, x], axis=0)
137 | u_extended = np.concatenate([self.prev_u, u], axis=0)
138 |
139 |
140 | if not tvp is None:
141 | tvp_extended = np.concatenate([self.prev_tvp, tvp], axis=0)
142 | else:
143 | tvp_extended = None
144 |
145 |
146 | return x_extended, u_extended, p, tvp_extended
147 |
148 |
149 | def _slide_input(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
150 | x_slided = np.stack([x[i:i+self.rolling_window].reshape(-1) for i in range(x.shape[0]-self.rolling_window+1)])
151 |
152 | u_slided = np.stack([u[i:i+self.rolling_window].reshape(-1) for i in range(x.shape[0]-self.rolling_window+1)])
153 |
154 |
155 | if not tvp is None:
156 | tvp_slided = np.stack([tvp[i:i+self.rolling_window].reshape(-1) for i in range(x.shape[0]-self.rolling_window+1)])
157 | else:
158 | tvp_slided = None
159 |
160 | return x_slided, u_slided, p, tvp_slided
161 |
162 |
163 | def forward(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
164 | x_extended, u_extended, p, tvp_extended = self._gather_input(x, u, p, tvp)
165 | x_slided, u_slided, p, tvp_slided = self._slide_input(x_extended, u_extended, p, tvp_extended)
166 |
167 | if self.vector_mode:
168 | return self.forward_func(x_slided, u_slided, p=p, tvp=tvp_slided)
169 | else:
170 | raise NotImplementedError("")
171 |
172 |
173 | def jacobian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
174 | x_extended, u_extended, p_extended, tvp_extended = self._gather_input(x, u, p, tvp)
175 | x_slided, u_slided, p_slided, tvp_slided = self._slide_input(x_extended, u_extended, p_extended, tvp_extended)
176 |
177 | if self.vector_mode:
178 | argnums_list = [0, 1]
179 |
180 | jacobians = list(jax.jacobian(self.forward_func, argnums=argnums_list)(x_slided, u_slided, p_slided, tvp_slided))
181 |
182 | jacobians[0] = jacobians[0].reshape(x.shape[0], self.x_dim, x.shape[0]*x_slided.shape[1])
183 | jacobians[1] = jacobians[1].reshape(x.shape[0], self.x_dim, u_slided.shape[1]*x.shape[0] )
184 |
185 | if (self.jacobian_proj is None) or self.jacobian_proj[0].shape[0] != x.shape[0]:
186 | # TODO handle reverse order
187 | self.jacobian_proj = [ gen_jac_proj_mat(x.shape[0], self.x_dim, self.rolling_window),
188 | gen_jac_proj_mat(x.shape[0], self.u_dim, self.rolling_window) ]
189 |
190 | jacobians_x = np.einsum("abc,cd->abd",np.array(jacobians[0]), self.jacobian_proj[0].reshape(x.shape[0]*self.x_dim, x.shape[0]*x_slided.shape[1]).T )\
191 | .reshape(self.x_dim*x.shape[0], self.x_dim*x.shape[0])
192 |
193 | jacobians_u = np.einsum("abc,cd->abd", np.array(jacobians[1]), self.jacobian_proj[1].reshape(x.shape[0]* self.u_dim, u_slided.shape[1]*x.shape[0]).T )\
194 | .reshape(self.x_dim*x.shape[0], self.u_dim*x.shape[0])
195 |
196 | jaco = np.concatenate([jacobians_x, jacobians_u], axis=1)
197 | return jaco
198 | else:
199 | raise NotImplementedError("")
200 |
201 | def hessian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
202 | x_extended, u_extended, p_extended, tvp_extended = self._gather_input(x, u, p, tvp)
203 | x_slided, u_slided, p_slided, tvp_slided = self._slide_input(x_extended, u_extended, p_extended, tvp_extended)
204 |
205 | if self.vector_mode:
206 | argnums_list = [0, 1]
207 |
208 | H = x.shape[0]
209 | X_dim = x.shape[1]
210 | U_dim = u.shape[1]
211 |
212 | hessians = list(jax.hessian(self.forward_func, argnums=argnums_list)(x_slided, u_slided, p_slided, tvp_slided))
213 |
214 | if (self.hessian_proj is None) or int(self.hessian_proj[0].shape[0]/(X_dim*self.rolling_window)) != H:
215 | # TODO handle reverse order
216 |
217 | proj_mat_X = np.zeros((H*X_dim*self.rolling_window, H*X_dim+X_dim*(self.rolling_window-1)))
218 | proj_mat_U = np.zeros((H*U_dim*self.rolling_window, H*U_dim+U_dim*(self.rolling_window-1)))
219 |
220 | for t in range(H):
221 | proj_mat_X[ t*self.rolling_window*X_dim:(t+1)*self.rolling_window*X_dim, t*X_dim:t*X_dim+self.rolling_window*X_dim] = np.eye(X_dim*self.rolling_window, X_dim*self.rolling_window)
222 | proj_mat_U[ t*self.rolling_window*U_dim:(t+1)*self.rolling_window*U_dim, t*U_dim:t*U_dim+self.rolling_window*U_dim] = np.eye(U_dim*self.rolling_window, U_dim*self.rolling_window)
223 |
224 | self.hessian_proj = [proj_mat_X, proj_mat_U]
225 |
226 |
227 | a = hessians[0][0].reshape(H , X_dim, H*X_dim*self.rolling_window, H*X_dim*self.rolling_window)
228 | b = hessians[0][1].reshape(H , X_dim, H*X_dim*self.rolling_window,H*U_dim*self.rolling_window)
229 |
230 | a = np.einsum("abcd,de->abce", np.array(a), self.hessian_proj[0])
231 | a = np.einsum("ec,abcd->abed", self.hessian_proj[0].T, a)
232 | a = a[:,:,X_dim*(self.rolling_window-1):, X_dim*(self.rolling_window-1):]
233 |
234 |
235 | b = np.einsum("abcd,de->abce", np.array(b), self.hessian_proj[1])
236 | b = np.einsum("ec,abcd->abed", self.hessian_proj[0].T, b)
237 | b = b[:,:,X_dim*(self.rolling_window-1):, U_dim*(self.rolling_window-1):]
238 |
239 | ab = np.concatenate([a, b], axis=3)
240 |
241 | c = hessians[1][0].reshape(H , X_dim, H*U_dim*self.rolling_window, H*X_dim*self.rolling_window)
242 | d = hessians[1][1].reshape(H , X_dim, H*U_dim*self.rolling_window, H*U_dim*self.rolling_window)
243 |
244 | c = np.einsum("abcd,de->abce", np.array(c), self.hessian_proj[0])
245 | c = np.einsum("ec,abcd->abed", self.hessian_proj[1].T, c)
246 | c = c[:,:,U_dim*(self.rolling_window-1):, X_dim*(self.rolling_window-1):]
247 |
248 |
249 | d = np.einsum("abcd,de->abce", np.array(d), self.hessian_proj[1])
250 | d = np.einsum("ec,abcd->abed", self.hessian_proj[1].T, d)
251 | d = d[:,:,U_dim*(self.rolling_window-1):, U_dim*(self.rolling_window-1):]
252 |
253 | cd = np.concatenate([c, d], axis=3)
254 |
255 | final_hessian = np.concatenate([ab, cd], axis=2)
256 |
257 | return final_hessian
258 |         else:
259 |             raise NotImplementedError("Only vector_mode is supported for the hessian at the moment.")
260 |
261 |
--------------------------------------------------------------------------------
/pyNeuralEMPC/model/tensorflow.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import numpy as np
3 | from copy import copy
4 | from .base import Model
5 | tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
6 |
7 |
8 | class KerasTFModel(Model):
9 | def __init__(self, model, x_dim: int, u_dim: int, p_dim=0, tvp_dim=0, standardScaler=None):
10 |
11 |         if standardScaler is not None:
12 |             raise NotImplementedError("This feature isn't supported yet!")
13 |
14 | #if not isinstance(model, (tf.keras.Model)):
15 | # raise ValueError("The provided model isn't a Keras Model object !")
16 |
17 | if len(model.input_shape) != 2:
18 |             raise NotImplementedError("Recurrent neural networks are not supported atm.")
19 |
20 | if model.output_shape[-1] != x_dim:
21 |             raise ValueError("Your Keras model does not provide a suitable output dim!\nIt must have the same dim as the state dim.")
22 | 
23 |         if model.input_shape[-1] != sum((x_dim, u_dim, p_dim, tvp_dim)):
24 |             raise ValueError("Your Keras model does not provide a suitable input dim!\nIt must have the same dim as the sum of all input vars (x, u, p, tvp).")
25 |
26 | super(KerasTFModel, self).__init__(x_dim, u_dim, p_dim, tvp_dim)
27 |
28 | self.model = model
29 | self.test = None
30 |
31 | def __getstate__(self):
32 | result_dict = copy(self.__dict__)
33 | result_dict["model"] = None
34 | return result_dict
35 |
36 | def __setstate__(self, d):
37 | self.__dict__ = d
38 |
39 | def _gather_input(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
40 | output_np = np.concatenate([x, u], axis=1)
41 | if not tvp is None:
42 | output_np = np.concatenate([output_np, tvp], axis=1)
43 | if not p is None:
44 | #TODO check this part
45 | output_np = np.concatenate([output_np, np.array([[p,]*x.shape[0]])], axis=1)
46 |
47 | return output_np
48 |
49 | def forward(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
50 | input_net = self._gather_input(x, u, p=p, tvp=tvp)
51 | return self.model.predict(input_net)
52 |
53 | def jacobian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
54 | input_net = self._gather_input(x, u, p=p, tvp=tvp)
55 |
56 | input_tf = tf.constant(input_net)
57 |
58 | with tf.GradientTape(persistent=False) as tx:
59 | tx.watch(input_tf)
60 | output_tf = self.model(input_tf)
61 |
62 | jacobian_tf = tx.jacobian(output_tf, input_tf)
63 | jacobian_np = jacobian_tf.numpy()
64 |
65 | if (self.p_dim+self.tvp_dim)>0:
66 | jacobian_np = jacobian_np[:,:,:,:-self.p_dim- self.tvp_dim]
67 |
68 | jacobian_np = jacobian_np.reshape(x.shape[0]*self.x_dim, (self.x_dim+self.u_dim)*x.shape[0])
69 |
70 | reshape_indexer = sum([ list(np.arange(x.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(x.shape[0]) ], list()) + \
71 | sum([ list( x.shape[1]+np.arange(u.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(x.shape[0]) ], list())
72 |
73 | jacobian_np = np.take(jacobian_np, reshape_indexer, axis=1)
74 |
75 | return jacobian_np
76 |
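    # Column-reordering sketch (illustrative, not from the original source): the raw jacobian
    # columns come interleaved per timestep as [x_t, u_t]. With x_dim=2, u_dim=1 and a horizon
    # of 2, reshape_indexer is [0, 1, 3, 4, 2, 5], which regroups the columns into the
    # [all x | all u] layout that the optimizer's flat decision vector uses.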
77 | @tf.function
78 | def _hessian_compute(self, input_tf):
79 |
80 |         hessian_mask = tf.reshape(tf.eye(tf.shape(input_tf)[0]*self.model.output_shape[-1],tf.shape(input_tf)[0]*self.model.output_shape[-1]), (tf.shape(input_tf)[0]*self.model.output_shape[-1],tf.shape(input_tf)[0],self.model.output_shape[-1]))
81 |         # NOTE: keep everything in float32 so the mask, the output and tf.map_fn's dtype all match input_tf.
82 |         hessian_mask = tf.cast(hessian_mask, tf.float32)
83 | 
84 |         output_tf = self.model(input_tf)
85 |         result = tf.map_fn(lambda mask: tf.hessians(output_tf*mask, input_tf)[0], hessian_mask, dtype=tf.float32)
86 |
87 | return result
88 |
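    # Note on the mask trick above (editor's comment): tf.gradients, used inside tf.hessians,
    # implicitly sums its target tensor, so multiplying output_tf by a one-hot mask isolates a
    # single output entry at a time; tf.map_fn then stacks one hessian per (timestep, output-dim)
    # pair.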
89 | def hessian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
90 | input_np = self._gather_input(x, u, p=p, tvp=tvp)
91 | input_tf = tf.constant(input_np, dtype=tf.float32)
92 | #if self.test is None:
93 | # self.test = self._hessian_compute.get_concrete_function(input_tf=tf.TensorSpec([input_tf.shape[0], input_tf.shape[1]], tf.float64), output_shape=int(self.model.output_shape[-1]))
94 | #hessian_np = self.test(input_tf, int(self.model.output_shape[-1])).numpy()
95 |
96 | hessian_np = self._hessian_compute(input_tf).numpy()
97 | if (self.p_dim+self.tvp_dim)>0:
98 | hessian_np = hessian_np[:,:,:-self.p_dim - self.tvp_dim, :, :-self.p_dim - self.tvp_dim]
99 |
100 |         # TODO: a better implementation could split the input BEFORE performing the hessian computation!
101 | hessian_np = hessian_np.reshape(x.shape[0], x.shape[1], (input_np.shape[1]-self.p_dim - self.tvp_dim)*input_np.shape[0],( input_np.shape[1]-self.p_dim - self.tvp_dim)*input_np.shape[0])
102 |
103 | reshape_indexer = sum([ list(np.arange(x.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(x.shape[0]) ], list()) + \
104 | sum([ list( x.shape[1]+np.arange(u.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(x.shape[0]) ], list())
105 |
106 | hessian_np = np.take(hessian_np, reshape_indexer, axis=2)
107 | hessian_np = np.take(hessian_np, reshape_indexer, axis=3)
108 |
109 | return hessian_np
110 |
111 |
112 | @tf.function
113 | def rolling_input(input_tf, x_dim, u_dim, rolling_window=2, H=2, forward=True, tvp_dim=0):
114 |     # TODO: p is not taken into account here
115 | x = input_tf[:,0:x_dim]
116 | u = input_tf[:,x_dim:x_dim+u_dim]
117 | tvp = None if tvp_dim == 0 else input_tf[:,x_dim+u_dim:]
118 |
119 | if forward:
120 | x_rolling = tf.stack([ tf.reshape(x[i:i+rolling_window, :],(-1,)) for i in range(H)], axis=0)
121 | u_rolling = tf.stack([ tf.reshape(u[i:i+rolling_window, :],(-1,)) for i in range(H)], axis=0)
122 | tvp_rolling = None if tvp_dim ==0 else tf.stack([ tf.reshape(tvp[i:i+rolling_window, :],(-1,)) for i in range(H)], axis=0)
123 |
124 | else:
125 | x_rolling = tf.stack([ tf.reshape( tf.reverse( x[i:i+rolling_window, :] , [0]),(-1,)) for i in range(H)], axis=0)
126 | u_rolling = tf.stack([ tf.reshape( tf.reverse( u[i:i+rolling_window, :] , [0]),(-1,)) for i in range(H)], axis=0)
127 | tvp_rolling = None if tvp_dim ==0 else tf.stack([ tf.reshape( tf.reverse( tvp[i:i+rolling_window, :] , [0]),(-1,)) for i in range(H)], axis=0)
128 |
129 | return tf.concat([x_rolling,u_rolling],axis=1) if tvp_dim ==0 else tf.concat([x_rolling,u_rolling, tvp_rolling],axis=1)
130 |
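# Illustrative example (editor's sketch): with rolling_window=2 and H=2, input rows
# [r0, r1, r2] (one history row plus the horizon) yield the rolled rows [r0 r1] and [r1 r2];
# with forward=False each window is reversed instead ([r1 r0], [r2 r1]).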
131 | class KerasTFModelRollingInput(Model):
132 | def __init__(self, model, x_dim: int, u_dim: int, p_dim=0, tvp_dim=0, rolling_window=2, forward_rolling=True, standardScaler=None):
133 |
134 |         if standardScaler is not None:
135 |             raise NotImplementedError("This feature isn't supported yet!")
136 |
137 | # TODO make checking according to rolling_window
138 |
139 | #if not isinstance(model, (tf.keras.Model)):
140 | # raise ValueError("The provided model isn't a Keras Model object !")
141 |
142 | #if len(model.input_shape) != 2:
143 | # raise NotImplementedError("Recurrent neural network are not supported atm.")
144 |
145 | if model.output_shape[-1] != x_dim:
146 |             raise ValueError("Your Keras model does not provide a suitable output dim!\nIt must have the same dim as the state dim.")
147 |
148 | #if model.input_shape[-1] != sum((x_dim, u_dim, p_dim, tvp_dim)):
149 | # raise ValueError("Your Keras model do not provide a suitable input dim ! \n It must get the same dim as the sum of all input vars (x, u, p, tvp).")
150 |
151 |         if not isinstance(rolling_window, int) or rolling_window < 1:
152 |             raise ValueError("Your rolling window needs to be an integer greater than or equal to 1.")
153 |
154 | super(KerasTFModelRollingInput, self).__init__(x_dim, u_dim, p_dim, tvp_dim)
155 |
156 | self.model = model
157 | self.rolling_window = rolling_window
158 | self.forward_rolling = forward_rolling
159 | self.prev_x, self.prev_u, self.prev_tvp = None, None, None
160 |
161 |
162 | self.jacobian_proj = None
163 |
164 | def __getstate__(self):
165 | result_dict = copy(self.__dict__)
166 | result_dict["prev_x"] = None
167 | result_dict["prev_u"] = None
168 | result_dict["model"] = None
169 | result_dict["prev_tvp"] = None
170 |
171 | return result_dict
172 |
173 | def __setstate__(self, d):
174 | self.__dict__ = d
175 |
176 | def set_prev_data(self, x_prev: np.ndarray, u_prev: np.ndarray, tvp_prev=None):
177 |
178 | assert x_prev.shape == (self.rolling_window-1, self.x_dim), f"Your x prev tensor must have the following shape {(self.rolling_window-1, self.x_dim)} (received : {x_prev.shape})"
179 | assert u_prev.shape == (self.rolling_window-1, self.u_dim), f"Your u prev tensor must have the following shape {(self.rolling_window-1, self.u_dim)} (received : {u_prev.shape})"
180 |
181 | self.prev_x = x_prev
182 | self.prev_u = u_prev
183 |
184 | if not tvp_prev is None:
185 | assert tvp_prev.shape == (self.rolling_window-1, self.tvp_dim), f"Your tvp prev tensor must have the following shape {(self.rolling_window-1, self.tvp_dim)} (received : {tvp_prev.shape})"
186 | self.prev_tvp = tvp_prev
187 |
188 | def _gather_input(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
189 |
190 |         assert (self.prev_x is not None) and (self.prev_u is not None), "You must provide the history window with set_prev_data before calling any inference function."
191 |
192 | x_extended = np.concatenate([self.prev_x, x], axis=0)
193 | u_extended = np.concatenate([self.prev_u, u], axis=0)
194 |
195 | output_np = np.concatenate([x_extended, u_extended], axis=1)
196 |
197 | if not tvp is None:
198 | tvp_extended = np.concatenate([self.prev_tvp, tvp], axis=0)
199 | output_np = np.concatenate([output_np, tvp_extended], axis=1)
200 |
201 | if not p is None:
202 | output_np = np.concatenate([output_np, np.array([[p,]*x.shape[0]])], axis=1)
203 | return output_np
204 |
205 |
206 | def _gather_input_V2(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
207 |
208 |         assert (self.prev_x is not None) and (self.prev_u is not None), "You must provide the history window with set_prev_data before calling any inference function."
209 |
210 | x_extended = np.concatenate([self.prev_x, x], axis=0)
211 | u_extended = np.concatenate([self.prev_u, u], axis=0)
212 |
213 |
214 | if self.forward_rolling:
215 | x_rolling = np.stack([ x_extended[i:i+self.rolling_window, :].reshape(-1) for i in range(x.shape[0])], axis=0)
216 | u_rolling = np.stack([ u_extended[i:i+self.rolling_window, :].reshape(-1) for i in range(x.shape[0])], axis=0)
217 | if not tvp is None:
218 |                 assert (self.prev_tvp is not None), "You must provide the history window with set_prev_data before calling any inference function."
219 | tvp_extended = np.concatenate([self.prev_tvp, tvp], axis=0)
220 | tvp_rolling = np.stack([ tvp_extended[i:i+self.rolling_window, :].reshape(-1) for i in range(x.shape[0])], axis=0)
221 | else:
222 | x_rolling = np.stack([ (x_extended[i:i+self.rolling_window, :])[::-1,:].reshape(-1) for i in range(x.shape[0])], axis=0)
223 | u_rolling = np.stack([ (u_extended[i:i+self.rolling_window, :])[::-1,:].reshape(-1) for i in range(x.shape[0])], axis=0)
224 | if not tvp is None:
225 |                 assert (self.prev_tvp is not None), "You must provide the history window with set_prev_data before calling any inference function."
226 | tvp_extended = np.concatenate([self.prev_tvp, tvp], axis=0)
227 | tvp_rolling = np.stack([ (tvp_extended[i:i+self.rolling_window, :])[::-1,:].reshape(-1) for i in range(x.shape[0])], axis=0)
228 |
229 | output_np = np.concatenate([x_rolling, u_rolling], axis=1)
230 |
231 | if not tvp is None:
232 | output_np = np.concatenate([output_np, tvp_rolling], axis=1)
233 | if not p is None:
234 | output_np = np.concatenate([output_np, np.array([[p,]*x.shape[0]])], axis=1)
235 | return output_np
236 |
237 | def forward(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
238 | input_net = self._gather_input(x, u, p=p, tvp=tvp)
239 | input_net_rolled = rolling_input(input_net, self.x_dim, self.u_dim, rolling_window=self.rolling_window, H=x.shape[0], forward=self.forward_rolling, tvp_dim=self.tvp_dim)
240 | res = self.model.predict(input_net_rolled)
241 | if not isinstance(res, np.ndarray):
242 | return res.numpy()
243 | return res
244 |
245 |
246 | def jacobian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
247 | input_net = self._gather_input(x, u, p=p, tvp=tvp)
248 |
249 | input_tf = tf.constant(input_net)
250 |
251 |         if (self.jacobian_proj is None) or (self.jacobian_proj.shape[0] != x.shape[0]):  # recompute the cached projection if the horizon changed
252 | with tf.GradientTape(persistent=False) as tx:
253 | tx.watch(input_tf)
254 | input_tf_rolled = rolling_input(input_tf, self.x_dim, self.u_dim, rolling_window=self.rolling_window, H=x.shape[0], forward=self.forward_rolling, tvp_dim=self.tvp_dim)
255 | self.jacobian_proj = tx.jacobian(input_tf_rolled, input_tf)
256 | else:
257 | input_tf_rolled = rolling_input(input_tf, self.x_dim, self.u_dim, rolling_window=self.rolling_window, H=x.shape[0], forward=self.forward_rolling, tvp_dim=self.tvp_dim)
258 |
259 | with tf.GradientTape(persistent=False) as tx:
260 | tx.watch(input_tf_rolled)
261 | output_tf = self.model(input_tf_rolled)
262 | pre_jac_tf = tx.jacobian(output_tf, input_tf_rolled)
263 |
264 | jacobian_tf = tf.einsum("abcd,cdef->abef", pre_jac_tf, self.jacobian_proj)
265 | if (self.p_dim+self.tvp_dim)>0:
266 | jacobian_tf = jacobian_tf[:,:,:,:-self.p_dim-self.tvp_dim]
267 |
268 | jacobian_np = jacobian_tf.numpy().reshape(x.shape[0]*self.x_dim, (self.x_dim+self.u_dim)*(x.shape[0]+self.rolling_window-1))
269 | reshape_indexer = sum([ list(np.arange(x.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(self.rolling_window-1 ,x.shape[0]+self.rolling_window-1) ], list()) + \
270 | sum([ list( x.shape[1]+np.arange(u.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(self.rolling_window-1 ,x.shape[0]+self.rolling_window-1) ], list())
271 |
272 | jacobian_np = np.take(jacobian_np, reshape_indexer, axis=1)
273 |
274 | return jacobian_np
275 |
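    # Design note (editor's comment): the rolling operation is linear in the flat input, so its
    # jacobian d(rolled)/d(flat) is constant and can be cached in self.jacobian_proj; the full
    # jacobian is then the chain rule d(model)/d(rolled) . d(rolled)/d(flat), computed by the
    # einsum above. jacobian_old below recomputes the whole product through the tape instead.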
276 | def jacobian_old(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
277 | input_net = self._gather_input(x, u, p=p, tvp=tvp)
278 |
279 | input_tf = tf.constant(input_net)
280 |
281 | with tf.GradientTape(persistent=False) as tx:
282 | tx.watch(input_tf)
283 | input_tf_rolled = rolling_input(input_tf, self.x_dim, self.u_dim, rolling_window=self.rolling_window, H=x.shape[0], forward=self.forward_rolling, tvp_dim=self.tvp_dim)
284 | output_tf = self.model(input_tf_rolled)
285 |
286 | jacobian_tf = tx.jacobian(output_tf, input_tf)
287 |
288 | jacobian_np = jacobian_tf.numpy().reshape(x.shape[0]*self.x_dim, (self.x_dim+self.u_dim)*(x.shape[0]+self.rolling_window-1))
289 | reshape_indexer = sum([ list(np.arange(x.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(self.rolling_window-1 ,x.shape[0]+self.rolling_window-1) ], list()) + \
290 | sum([ list( x.shape[1]+np.arange(u.shape[1])+i*(x.shape[1]+u.shape[1])) for i in range(self.rolling_window-1 ,x.shape[0]+self.rolling_window-1) ], list())
291 |
292 | jacobian_np = np.take(jacobian_np, reshape_indexer, axis=1)
293 |
294 | return jacobian_np
295 |
296 | @tf.function
297 | def _hessian_compute(self, input_tf):
298 | H = tf.shape(input_tf)[0]
299 |
300 | hessian_mask = tf.reshape(tf.eye(H*self.model.output_shape[-1],H*self.model.output_shape[-1]), (H*self.model.output_shape[-1],H,self.model.output_shape[-1]))
301 | hessian_mask = tf.cast(hessian_mask, tf.float32)
302 |
303 | output_tf = self.model(input_tf)
304 | result = tf.map_fn(lambda mask : tf.hessians(output_tf*mask, input_tf)[0] , hessian_mask, dtype=tf.float32)
305 |
306 | return result
307 |
308 | def hessian(self, x: np.ndarray, u: np.ndarray, p=None, tvp=None):
309 | input_np = self._gather_input_V2(x, u, p=p, tvp=tvp)
310 |
311 | input_tf = tf.constant(input_np, dtype=tf.float32)
312 | #if self.test is None:
313 | # self.test = self._hessian_compute.get_concrete_function(input_tf=tf.TensorSpec([input_tf.shape[0], input_tf.shape[1]], tf.float64), output_shape=int(self.model.output_shape[-1]))
314 | #hessian_np = self.test(input_tf, int(self.model.output_shape[-1])).numpy()
315 | hessian_np = self._hessian_compute(input_tf).numpy()
316 | hessian_np = hessian_np[:,:,:self.rolling_window*(self.x_dim+self.u_dim),:,:self.rolling_window*(self.x_dim+self.u_dim)]
317 |
318 | # TODO a better implem could be by spliting input BEFORE performing the hessian computation !
319 | total_dim = self.rolling_window*(self.x_dim + self.u_dim)
320 | hessian_np = hessian_np.reshape(x.shape[0], x.shape[1], total_dim*input_np.shape[0], total_dim*input_np.shape[0])
321 |
322 | project_mat = np.zeros(shape=(total_dim*x.shape[0], (self.x_dim+self.u_dim)*(x.shape[0]+self.rolling_window-1)))
323 |
324 | for dt in range(x.shape[0]):
325 | project_mat[dt*self.rolling_window*(self.x_dim+self.u_dim):dt*self.rolling_window*(self.x_dim+self.u_dim)+self.x_dim*self.rolling_window, dt*self.x_dim : dt*self.x_dim + self.x_dim*self.rolling_window] += np.eye(self.x_dim*self.rolling_window, self.x_dim*self.rolling_window)
326 | project_mat[self.x_dim*self.rolling_window + dt*self.rolling_window*(self.x_dim+self.u_dim):self.x_dim*self.rolling_window + dt*self.rolling_window*(self.x_dim+self.u_dim)+self.u_dim*self.rolling_window,self.x_dim*(x.shape[0]+self.rolling_window-1)+ dt*self.u_dim :self.x_dim*(x.shape[0]+self.rolling_window-1) + dt*self.u_dim + self.u_dim*self.rolling_window] += np.eye(self.u_dim*self.rolling_window, self.u_dim*self.rolling_window)
327 |
328 |         # Project the rolled-window hessian back onto the flat (x, u) decision
329 |         # variables: res = P^T @ H @ P, done blockwise with einsum.
330 |         res = np.einsum("gc,abcf->abgf", project_mat.T, np.einsum("abcd,df->abcf", hessian_np, project_mat))
331 |
332 |
333 | reshape_indexer = list(range(self.x_dim*(self.rolling_window-1) , self.x_dim*(x.shape[0]+self.rolling_window-1))) + \
334 | list(range(self.x_dim*(x.shape[0]+self.rolling_window-1) + self.u_dim*(self.rolling_window-1) ,self.x_dim*(x.shape[0]+self.rolling_window-1)+ self.u_dim*(x.shape[0]+self.rolling_window-1)))
335 |
336 | res = np.take(res, reshape_indexer, axis=2)
337 | res = np.take(res, reshape_indexer, axis=3)
338 |
339 |
340 | return res
341 |
--------------------------------------------------------------------------------
/pyNeuralEMPC/objective/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | from pyNeuralEMPC.objective import jax
--------------------------------------------------------------------------------
/pyNeuralEMPC/objective/base.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import inspect
3 |
4 | class ObjectiveFunc:
5 | def __init__(self):
6 | pass
7 |
8 | def forward(self, x, u, x0, p=None, tvp=None):
9 | raise NotImplementedError("")
10 |
11 | def gradient(self, x, u, x0, p=None, tvp=None):
12 | raise NotImplementedError("")
13 |
14 | def hessian(self, x, u, x0, p=None, tvp=None):
15 | raise NotImplementedError("")
16 |
17 |     def hessianstructure(self, H, model):
18 |         raise NotImplementedError("")
19 |
20 |
21 | class ManualObjectifFunc(ObjectiveFunc):
22 |     def __init__(self, func, grad_func, hessian_func):
23 |         super(ManualObjectifFunc, self).__init__()
24 |         self.func = func
25 |         self.grad_func = grad_func
26 |         self.hessian_func = hessian_func
27 |
28 | def forward(self, x, u, x0, p=None, tvp=None):
29 | return self.func(x, u, x0, p, tvp)
30 |
31 | def gradient(self, x, u, x0, p=None, tvp=None):
32 | return self.grad_func(x, u, x0, p, tvp)
33 |
34 | def hessian(self, x, u, x0, p=None, tvp=None):
35 | return self.hessian_func(x, u, x0, p, tvp)
36 |
37 | def hessianstructure(self, H, model):
38 | raise NotImplementedError("")
--------------------------------------------------------------------------------
/pyNeuralEMPC/objective/jax.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import jax.numpy as jnp
3 | import jax
4 |
5 | from .base import ObjectiveFunc
6 |
7 |
8 | def _check_func(func, state_dim, control_dim, constant_dim):
9 | try:
10 | _ = jax.jacrev(func)(jnp.zeros(state_dim), jnp.zeros(control_dim), jnp.zeros(constant_dim))
11 | except AssertionError:
12 | return False
13 | return True
14 |
15 |
16 | class JAXObjectifFunc(ObjectiveFunc):
17 | def __init__(self, func):
18 |         super(JAXObjectifFunc, self).__init__()
19 |
20 |         # TODO: find a way to check the input dims
21 | """
22 | if not _check_func(func):
23 | raise ValueError("Your function is not differentiable w.r.t the JAX library")
24 | """
25 | self.func = func
26 | self.cached_hessian_structure = dict()
27 |
28 | def forward(self, states, u, p=None, tvp=None):
29 |
30 | return self.func(states, u, p, tvp)
31 |
32 | def gradient(self, states, u, p=None, tvp=None):
33 |
34 | #TODO maybe problem
35 | grad_states = jax.grad(self.func, argnums=0)(states, u, p, tvp).reshape(-1)
36 | grad_u = jax.grad(self.func, argnums=1)(states, u, p, tvp).reshape(-1)
37 |
38 | result_list = [grad_states, grad_u]
39 | final_res = jnp.concatenate(result_list, axis=0)
40 | final_res = jnp.nan_to_num(final_res, nan=0.0)
41 | return np.array(final_res)
42 |
43 | def hessian(self, states, u, p=None, tvp=None):
44 |
45 | hessians = jax.hessian(self.func, argnums=[0,1])(states, u, p, tvp)
46 | a = hessians[0][0].reshape(states.shape[0]*states.shape[1], states.shape[0]*states.shape[1])
47 | b = hessians[0][1].reshape(states.shape[0]*states.shape[1], u.shape[0]*u.shape[1])
48 |
49 | c = hessians[1][0].reshape(u.shape[0]*u.shape[1], states.shape[0]*states.shape[1])
50 | d = hessians[1][1].reshape(u.shape[0]*u.shape[1], u.shape[0]*u.shape[1])
51 |
52 | ab = np.concatenate([a, b], axis=1)
53 | cd = np.concatenate([c, d], axis=1)
54 |
55 | final_hessian = np.concatenate([ab, cd], axis=0)
56 |
57 | return final_hessian
58 |
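    # The assembled matrix has the block layout (editor's note; n = H*x_dim, m = H*u_dim):
    #   [ d2J/dx2   (n x n)   d2J/dxdu (n x m) ]
    #   [ d2J/dudx  (m x n)   d2J/du2  (m x m) ]
    # matching the [states | controls] ordering of the flat decision vector.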
59 | def hessianstructure(self, H, model):
60 | if (H, model) in self.cached_hessian_structure.keys():
61 | return self.cached_hessian_structure[(H, model)]
62 | else:
63 | result = self._compute_hessianstructure(H, model)
64 | self.cached_hessian_structure[(H, model)] = result
65 | return result
66 |
67 | def _compute_hessianstructure(self, H, model, nb_sample=3):
68 | hessian_map = None
69 |
70 | for _ in range(nb_sample):
71 |
72 | x_random = np.random.uniform(size=(H, model.x_dim))
73 | u_random = np.random.uniform(size=(H, model.u_dim))
74 | p_random = None
75 | tvp_random = None
76 |
77 | if model.p_dim > 0:
78 | p_random = np.random.uniform(size=model.p_dim)
79 |
80 | if model.tvp_dim > 0:
81 | tvp_random = np.random.uniform(size=(H, model.tvp_dim))
82 |
83 | hessian = self.hessian(x_random, u_random, p=p_random, tvp=tvp_random)
84 |
85 |             if hessian_map is None:
86 |                 hessian_map = (hessian != 0.0).astype(np.float64)
87 |             else:
88 |                 hessian_map += (hessian != 0.0).astype(np.float64)
89 |         hessian_map = hessian_map.astype(bool).astype(np.float64)
90 | return hessian_map
91 |
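# Editor's note: _compute_hessianstructure probes the sparsity pattern empirically, OR-ing the
# nonzero masks of the hessian evaluated at nb_sample random points; an entry that is zero at
# every sample is treated as structurally zero. A coincidental zero could be missed, which is
# why several samples are drawn.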
--------------------------------------------------------------------------------
/pyNeuralEMPC/optimizer/__init__.py:
--------------------------------------------------------------------------------
1 | # This directory gathers all the solver interfaces supported in this library.
2 |
3 | from pyNeuralEMPC.optimizer.ipopt import Ipopt
4 | from pyNeuralEMPC.optimizer.base import Optimizer
5 | from pyNeuralEMPC.optimizer.slsqp import Slsqp
--------------------------------------------------------------------------------
/pyNeuralEMPC/optimizer/base.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from ..constraints import *
3 | from ..model.base import Model
4 | from ..objective.base import ObjectiveFunc
5 |
6 |
7 | class ProblemInterfaceHessianFree():
8 | def __init__(self, core):
9 | self.core = core
10 |
11 |
12 | def objective(self, x):
13 | return self.core.objective(x)
14 |
15 | def gradient(self, x):
16 | return self.core.gradient(x)
17 |
18 | def constraints(self, x):
19 | return self.core.constraints(x)
20 |
21 |
22 | def jacobian(self, x):
23 | return self.core.jacobian(x)
24 |
25 | def get_constraint_lower_bounds(self):
26 | return self.core.get_constraint_lower_bounds()
27 |
28 | def get_constraint_upper_bounds(self):
29 | return self.core.get_constraint_upper_bounds()
30 |
31 | def get_init_value(self):
32 | return self.core.get_init_value()
33 |
34 | class ProblemInterface():
35 | def __init__(self, use_hessian: bool):
36 | self.use_hessian = use_hessian
37 |
38 |
39 | def objective(self, x):
40 | raise NotImplementedError("")
41 |
42 | def gradient(self, x):
43 | raise NotImplementedError("")
44 |
45 | def constraints(self, x):
46 | raise NotImplementedError("")
47 |
48 | def hessianstructure(self):
49 | raise NotImplementedError("")
50 |
51 | def hessian(self, x):
52 | raise NotImplementedError("")
53 |
54 | def jacobian(self, x):
55 | raise NotImplementedError("")
56 |
57 | def get_constraint_lower_bounds(self):
58 | raise NotImplementedError("")
59 |
60 | def get_constraint_upper_bounds(self):
61 | raise NotImplementedError("")
62 |
63 | def get_init_value(self):
64 | raise NotImplementedError("")
65 |
66 | def get_init_variables(self):
67 | raise NotImplementedError("")
68 |
69 |
70 | class ProblemFactory():
71 | def __init__(self):
72 |
73 | self.x0 = None
74 | self.p = None
75 | self.tvp = None
76 | self.objective = None
77 | self.constraints = None
78 | self.use_hessian = False
79 | self.integrator = None
80 | self.init_u, self.init_x = None, None
81 |
82 | def getProblemInterface(self) -> ProblemInterface:
83 | if (not self.x0 is None) and \
84 | (not self.objective is None) and \
85 | (not self.constraints is None) and \
86 | (not self.integrator is None):
87 |
88 | return self._process()
89 |
90 | if self.x0 is None:
91 | raise RuntimeError("Not ready yet ! x0 is missing")
92 | if self.objective is None:
93 | raise RuntimeError("Not ready yet ! objective is missing")
94 | if self.constraints is None:
95 | raise RuntimeError("Not ready yet ! constraints is missing")
96 | if self.integrator is None:
97 | raise RuntimeError("Not ready yet ! integrator is missing")
98 |
99 | def set_integrator(self, integrator):
100 | self.integrator = integrator
101 |
102 | def set_x0(self, x0: np.array):
103 | self.x0 = x0
104 |
105 |
106 | def set_init_values(self, init_x: np.array, init_u: np.array):
107 | self.init_x = init_x
108 | self.init_u = init_u
109 |
110 | def set_p(self, p: np.array):
111 | self.p = p
112 |
113 | def set_tvp(self, tvp: np.ndarray):
114 | self.tvp = tvp
115 |
116 | def set_objective(self, obj: ObjectiveFunc):
117 | self.objective = obj
118 |
119 | def set_constraints(self, ctrs: list):
120 | self.constraints = ctrs
121 |
122 | def set_use_hessian(self, hessian : bool):
123 | self.use_hessian = hessian
124 |
125 | def _process(self):
126 | raise NotImplementedError("")
127 |
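# Typical wiring (editor's sketch; all names below are the setters defined in ProblemFactory):
#   factory = optimizer.get_factory()
#   factory.set_x0(x0)
#   factory.set_integrator(integrator)
#   factory.set_objective(objective_func)
#   factory.set_constraints(constraint_list)
#   problem = factory.getProblemInterface()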
128 | class Optimizer:
129 |
130 | FAIL = 1
131 | SUCCESS = 0
132 |
133 | def __init__(self):
134 | pass
135 |
136 |     def get_factory(self) -> ProblemFactory:
137 |         """
138 |         Return the factory associated with this solver.
139 |         """
140 |         raise NotImplementedError("")
141 | def solve(self, problem: ProblemInterface, domain_constraint: DomainConstraint) -> np.ndarray:
142 | """Solve the problem
143 |
144 |
145 | Returns:
146 | np.ndarray: The optimal control vector
147 |
148 | """
149 | raise NotImplementedError("")
150 |
151 |
--------------------------------------------------------------------------------
/pyNeuralEMPC/optimizer/ipopt.py:
--------------------------------------------------------------------------------
1 | from .base import Optimizer, ProblemFactory, ProblemInterface, ProblemInterfaceHessianFree
2 |
3 | import numpy as np
4 | import cyipopt
5 | import time
6 |
7 | class IpoptProblem(ProblemInterface):
8 | def __init__(self, x0, objective_func, constraints, integrator, p=None, tvp=None, use_hessian=True, init_x=None, init_u=None):
9 | super(IpoptProblem, self).__init__(use_hessian)
10 | self.x0 = x0
11 | self.objective_func = objective_func
12 | self.constraints_list = constraints
13 | self.integrator = integrator
14 | self.x_dim, self.u_dim, self.p_dim, self.tvp_dim = self.integrator.model.x_dim, self.integrator.model.u_dim, self.integrator.model.p_dim, self.integrator.model.tvp_dim
15 | self.H = self.integrator.H
16 | self.p = p
17 | self.tvp = tvp
18 |         self.init_x, self.init_u = init_x, init_u
19 |
20 | def _split(self, x):
21 | prev_idx = 0
22 | states = x[prev_idx:prev_idx+self.x_dim*self.H]
23 | prev_idx += self.x_dim*self.H
24 |
25 | u = x[prev_idx:prev_idx+self.u_dim*self.H]
26 | prev_idx += self.u_dim*self.H
27 |
28 | return states.reshape(self.H, self.x_dim), u.reshape(self.H, self.u_dim), self.tvp, self.p
29 |
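    # Decision-vector layout (illustrative): with H=2, x_dim=2, u_dim=1 the flat vector is
    #   x = [x0_0, x0_1, x1_0, x1_1, u0, u1]
    # i.e. all states first (row-major over time), then all controls, which is exactly what
    # _split above unpacks.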
30 | def objective(self, x):
31 |         # TODO: split the dt values!
32 | states, u, tvp, p = self._split(x)
33 | res = self.objective_func.forward(states, u, p=p, tvp=tvp)
34 |
35 | return res
36 |
37 | def gradient(self, x):
38 | states, u, tvp, p = self._split(x)
39 |
40 | res = self.objective_func.gradient(states, u, p=p, tvp=tvp)
41 |
42 | return res
43 |
44 | def constraints(self, x):
45 | states, u, tvp, p = self._split(x)
46 |
47 | contraints_forward_list = [self.integrator.forward(states, u, self.x0, p=p, tvp=tvp),]
48 |
49 | for ctr in self.constraints_list:
50 | contraints_forward_list.append(ctr.forward(states, u, p=p, tvp=tvp))
51 |
52 | return np.concatenate(contraints_forward_list)
53 |
54 |
55 | def hessianstructure(self):
56 | hessian_map_objective = self.objective_func.hessianstructure( self.integrator.H, self.integrator.model)
57 |
58 | hessian_map_integrator = self.integrator.hessianstructure()
59 |
60 |         final_hessian_map = (hessian_map_objective + hessian_map_integrator).astype(bool).astype(np.float32)
61 |
62 |         return np.nonzero(np.tril(final_hessian_map))  # IPOPT only needs the lower triangle of the symmetric hessian
63 |
64 |
65 |
66 |     def hessian(self, x, lagrange, obj_factor):
67 |         states, u, tvp, p = self._split(x)
68 | 
69 |         # Hessian of the Lagrangian: obj_factor * d2f + sum_i lagrange_i * d2c_i.
70 |         hessian_matrix = np.zeros((x.shape[0], x.shape[0]))
71 |         hessian_matrix += obj_factor*self.objective_func.hessian(states, u, p=p, tvp=tvp)
72 | 
73 |         integrator_hessian_matrix = self.integrator.hessian(states, u, self.x0, p=p, tvp=tvp)
74 | 
75 |         constraint_matrices = [ctr.hessian(states, u, p=p, tvp=tvp) for ctr in self.constraints_list]
76 | 
77 |         total_hessian_constraint = np.concatenate([integrator_hessian_matrix,] + constraint_matrices, axis=0)
78 | 
79 |         for idx, lagrange_coef in enumerate(lagrange):
80 |             hessian_matrix += lagrange_coef*total_hessian_constraint[idx]
81 | 
82 |         # Only the entries selected by hessianstructure() (lower triangle) are returned.
83 | 
84 |         row, col = self.hessianstructure()
85 | 
86 |         return hessian_matrix[row, col]
87 |
88 | def jacobian(self, x):
89 | states, u, tvp, p = self._split(x)
90 |
91 | contraints_jacobian_list = [self.integrator.jacobian(states, u, self.x0, p=p, tvp=tvp),]
92 |
93 | for ctr in self.constraints_list:
94 | contraints_jacobian_list.append(ctr.jacobian(states, u, p=p, tvp=tvp))
95 |
96 | return np.concatenate(contraints_jacobian_list, axis=0)
97 |
98 | def get_init_value(self):
99 | return self.x0
100 |
101 | def get_init_variables(self):
102 | return self.init_x, self.init_u
103 |
104 | def get_constraint_lower_bounds(self):
105 | return np.concatenate( [ ctr.get_lower_bounds(self.H) for ctr in [self.integrator,]+self.constraints_list ])
106 |
107 | def get_constraint_upper_bounds(self):
108 | return np.concatenate( [ ctr.get_upper_bounds(self.H) for ctr in [self.integrator,]+self.constraints_list ])
109 |
110 |
111 | class IpoptProblemFactory(ProblemFactory):
112 | def _process(self):
113 | return IpoptProblem(self.x0, self.objective, self.constraints, self.integrator, p=self.p, tvp=self.tvp, use_hessian=self.use_hessian)
114 |
115 |
116 | class Ipopt(Optimizer):
117 | def __init__(self, max_iteration=500, init_with_last_result=False, mu_strategy="monotone", mu_target=0, mu_linear_decrease_factor=0.2,\
118 | alpha_for_y="primal", obj_scaling_factor=1, nlp_scaling_max_gradient=100.0):
119 |
120 | super(Ipopt, self).__init__()
121 |
122 | self.max_iteration=max_iteration
123 | self.mu_strategy = mu_strategy
124 | self.mu_target = mu_target
125 | self.mu_linear_decrease_factor = mu_linear_decrease_factor
126 | self.alpha_for_y = alpha_for_y
127 | self.obj_scaling_factor = obj_scaling_factor
128 | self.nlp_scaling_max_gradient = nlp_scaling_max_gradient
129 |
130 | self.init_with_last_result = init_with_last_result
131 | self.prev_result = None
132 |
133 |
134 | def get_factory(self):
135 | return IpoptProblemFactory()
136 |
137 |
138 | def solve(self, problem, domain_constraint):
139 | x0 = problem.get_init_value()
140 |
141 | if self.init_with_last_result and not (self.prev_result is None):
142 | x_dim = problem.integrator.model.x_dim
143 | u_dim = problem.integrator.model.u_dim
144 | x_init = np.concatenate([self.prev_result[x_dim:x_dim*problem.integrator.H], # x[1]-x[H]
145 | self.prev_result[x_dim*(problem.integrator.H-1):x_dim*problem.integrator.H], # x[H]
146 | self.prev_result[x_dim*problem.integrator.H+u_dim:(x_dim+u_dim)*problem.integrator.H], # u[1] - u[H]
147 | self.prev_result[x_dim*problem.integrator.H+u_dim*(problem.integrator.H-1):x_dim*problem.integrator.H+u_dim*problem.integrator.H] ], axis=0) # u[H]
148 | else:
149 | x_init = np.concatenate( [np.concatenate( [x0,]*problem.integrator.H ), np.repeat(np.array([0.0,]*problem.integrator.model.u_dim),problem.integrator.H)])
150 |
151 |
152 | # TODO find a better way to get the horizon variable
153 | lb = domain_constraint.get_lower_bounds(problem.integrator.H)
154 | ub = domain_constraint.get_upper_bounds(problem.integrator.H)
155 |
156 | cl = problem.get_constraint_lower_bounds()
157 | cu = problem.get_constraint_upper_bounds()
158 |
159 | if not problem.use_hessian:
160 | problem = ProblemInterfaceHessianFree(problem)
161 |
162 | nlp = cyipopt.Problem(
163 | n=len(x_init),
164 | m=len(cl),
165 | problem_obj=problem,
166 | lb=lb,
167 | ub=ub,
168 | cl=cl,
169 | cu=cu
170 | )
171 |
172 |         nlp.addOption('max_iter', self.max_iteration)
173 | #nlp.addOption('derivative_test', 'first-order')
174 | #nlp.addOption('derivative_test_print_all', 'yes')
175 | #nlp.addOption('point_perturbation_radius',1e-1)
176 | #nlp.addOption('derivative_test_perturbation',1e-1)
177 |
178 | #nlp.addOption('mu_strategy', self.mu_strategy)
179 | #nlp.addOption('mu_target', self.mu_target)
180 | #nlp.addOption('mu_linear_decrease_factor', self.mu_linear_decrease_factor)
181 | #nlp.addOption('alpha_for_y', self.alpha_for_y)
182 | #nlp.addOption('obj_scaling_factor', self.obj_scaling_factor)
183 | #nlp.addOption('nlp_scaling_max_gradient', self.nlp_scaling_max_gradient)
184 | nlp.addOption("tol", 1e-1)
185 | nlp.addOption("acceptable_tol",1e-4)
186 | nlp.addOption("print_level", 0)
187 |
188 |
189 |         x, info = nlp.solve(x_init)
190 | 
191 |         if info["status"] == 0 or info["status"] == 1:
192 |             self.prev_result = x
193 |             return Optimizer.SUCCESS
194 |
195 | return Optimizer.FAIL
196 |
--------------------------------------------------------------------------------
/pyNeuralEMPC/optimizer/slsqp.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | from .base import Optimizer, ProblemFactory, ProblemInterface
3 | from ..constraints import Constraint
4 | import numpy as np
5 | from scipy.optimize import minimize, Bounds
6 | from functools import lru_cache
7 | import time
8 |
9 |
10 | class SlsqpProblem(ProblemInterface):
11 | def __init__(self, x0, objective_func, constraints, integrator, p=None, tvp=None, init_x=None, init_u=None):
12 | super(SlsqpProblem, self).__init__(False)
13 | self.x0 = x0
14 | self.objective_func = objective_func
15 | self.constraints_list = constraints
16 | self.integrator = integrator
17 | self.x_dim, self.u_dim, self.p_dim, self.tvp_dim = self.integrator.model.x_dim, self.integrator.model.u_dim, self.integrator.model.p_dim, self.integrator.model.tvp_dim
18 | self.H = self.integrator.H
19 | self.p = p
20 | self.tvp = tvp
21 | self.debug_mode = False
22 | self.debug_x, self.debug_u = list(), list()
23 | self.init_x, self.init_u = init_x, init_u
24 |
25 | def _split(self, x):
26 | prev_idx = 0
27 | states = x[prev_idx:prev_idx+self.x_dim*self.H]
28 | prev_idx += self.x_dim*self.H
29 |
30 | u = x[prev_idx:prev_idx+self.u_dim*self.H]
31 | prev_idx += self.u_dim*self.H
32 |
33 | return states.reshape(self.H, self.x_dim), u.reshape(self.H, self.u_dim), self.tvp, self.p
34 |
35 | def objective(self, x):
36 | states, u, tvp, p = self._split(x)
37 | if self.debug_mode:
38 | self.debug_x.append(states.copy())
39 | self.debug_u.append(u.copy())
40 | res = self.objective_func.forward(states, u, p=p, tvp=tvp)
41 |
42 | return res
43 |
44 | def gradient(self, x):
45 | states, u, tvp, p = self._split(x)
46 |
47 | res = self.objective_func.gradient(states, u, p=p, tvp=tvp)
48 |
49 | return res
50 |
51 | def set_debug(self, debug_mode):
52 | self.debug_mode = debug_mode
53 |
54 | def constraints(self, x, eq=True):
55 |         # TODO not taking into account other constraints than the integrator ones
56 |         warnings.warn("Not taking into account other constraints than the integrator ones!")
57 | states, u, tvp, p = self._split(x)
58 |
59 | contraints_forward_list = [self.integrator.forward(states, u, self.x0, p=p, tvp=tvp),] if eq else []
60 |
61 | for ctr in self.constraints_list:
62 | if eq and (ctr.get_type(self.H) == Constraint.EQ_TYPE):
63 | contraints_forward_list.append(ctr.forward(states, u, p=p, tvp=tvp))
64 | elif not eq and (ctr.get_type(self.H) == Constraint.INEQ_TYPE):
65 | contraints_forward_list.append(ctr.forward(states, u, p=p, tvp=tvp))
66 | elif not eq and (ctr.get_type(self.H) == Constraint.INTER_TYPE):
67 | contraints_forward_list.append(ctr.forward(states, u, p=p, tvp=tvp)-ctr.get_lower_bound(self.H))
68 | contraints_forward_list.append(-ctr.forward(states, u, p=p, tvp=tvp)+ctr.get_upper_bound(self.H))
69 | else:
70 | pass
71 |
72 | return np.concatenate(contraints_forward_list, axis=0)
73 |
74 |
75 | def hessianstructure(self):
76 | raise NotImplementedError("Not needed")
77 |
78 |
79 | def hessian(self, x, lagrange, obj_factor):
80 | raise NotImplementedError("Not needed")
81 |
82 | def jacobian(self, x, eq=True):
83 |         # TODO not taking into account other constraints than the integrator ones
84 |         warnings.warn("Not taking into account other constraints than the integrator ones!")
85 | states, u, tvp, p = self._split(x)
86 |
87 | contraints_jacobian_list = [self.integrator.jacobian(states, u, self.x0, p=p, tvp=tvp),] if eq else []
88 |
89 | for ctr in self.constraints_list:
90 | if eq and (ctr.get_type(self.H) == Constraint.EQ_TYPE):
91 | contraints_jacobian_list.append(ctr.jacobian(states, u, p=p, tvp=tvp))
92 | elif not eq and (ctr.get_type(self.H) == Constraint.INEQ_TYPE):
93 | contraints_jacobian_list.append(ctr.jacobian(states, u, p=p, tvp=tvp))
94 | elif not eq:
95 | contraints_jacobian_list.append(ctr.jacobian(states, u, p=p, tvp=tvp))
96 | contraints_jacobian_list.append(-ctr.jacobian(states, u, p=p, tvp=tvp))
97 | else:
98 | pass
99 |
100 | return np.concatenate(contraints_jacobian_list, axis=0)
101 |
102 | def get_constraints_dict(self):
103 | result = [{'type': 'eq',
104 | 'fun' : lambda x: self.constraints(x, eq=True),
105 | 'jac' : lambda x: self.jacobian(x, eq=True)},]
106 | if len(list(filter(lambda element : element.get_type(self.H) in [Constraint.INEQ_TYPE, Constraint.INTER_TYPE], self.constraints_list)))>0:
107 | result.append({'type': 'ineq',
108 | 'fun' : lambda x: self.constraints(x, eq=False),
109 | 'jac' : lambda x: self.jacobian(x, eq=False)})
110 | return result
111 |
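    # SciPy's SLSQP consumes constraints as dicts: {'type': 'eq'|'ineq', 'fun': g, 'jac': dg}.
    # 'eq' means g(x) == 0 and 'ineq' means g(x) >= 0, which is why interval constraints are
    # expanded in constraints() above into the two inequalities g - lb >= 0 and ub - g >= 0.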
112 |
113 | def get_init_value(self):
114 | return self.x0
115 |
116 | def get_init_variables(self):
117 | return self.init_x, self.init_u
118 |
119 |
120 | class SlsqpProblemFactory(ProblemFactory):
121 | def _process(self):
122 | return SlsqpProblem(self.x0, self.objective, self.constraints, self.integrator, p=self.p, tvp=self.tvp, init_x=self.init_x,init_u=self.init_u )
123 |
124 |
125 | class Slsqp(Optimizer):
126 | def __init__(self, max_iteration=200, tolerance=0.5e-6, verbose=1, init_with_last_result=False, nb_max_try=15, debug=False):
127 |
128 | super(Slsqp, self).__init__()
129 |
130 | self.max_iteration=max_iteration
131 | self.verbose = verbose
132 | self.tolerance=tolerance
133 | self.init_with_last_result = init_with_last_result
134 |
135 | self.prev_result = None
136 | self.nb_max_try = nb_max_try
137 | self.debug = debug
138 |
139 | def get_factory(self):
140 | return SlsqpProblemFactory()
141 |
142 |
143 | def solve(self, problem, domain_constraint):
144 | x0 = problem.get_init_value()
145 | problem.set_debug(self.debug)
146 | x_init_variables, u_init_variables = problem.get_init_variables()
147 |
148 |         if (x_init_variables is not None) and (u_init_variables is not None):
149 |             assert u_init_variables.shape[0] == problem.integrator.H, f"The init u values are not compliant with the MPC horizon size (received={u_init_variables.shape[0]}, expected={problem.integrator.H})"
150 |             assert x_init_variables.shape[0] == problem.integrator.H, f"The init x values are not compliant with the MPC horizon size (received={x_init_variables.shape[0]}, expected={problem.integrator.H})"
151 |
152 | x_init = np.concatenate([x_init_variables.reshape(-1), u_init_variables.reshape(-1)], axis=0)
153 |
154 |
155 | elif self.init_with_last_result and not (self.prev_result is None):
156 | x_dim = problem.integrator.model.x_dim
157 | u_dim = problem.integrator.model.u_dim
158 | x_init = np.concatenate([self.prev_result[x_dim:x_dim*problem.integrator.H], # x[1]-x[H]
159 | self.prev_result[x_dim*(problem.integrator.H-1):x_dim*problem.integrator.H], # x[H]
160 | self.prev_result[x_dim*problem.integrator.H+u_dim:(x_dim+u_dim)*problem.integrator.H], # u[1] - u[H]
161 | self.prev_result[x_dim*problem.integrator.H+u_dim*(problem.integrator.H-1):x_dim*problem.integrator.H+u_dim*problem.integrator.H] ], axis=0) # u[H]
162 | else:
163 | x_init = np.concatenate( [np.concatenate( [x0,]*problem.integrator.H ), np.repeat(np.array([0.0,]*problem.integrator.model.u_dim),problem.integrator.H)])
164 |
165 |
166 | # TODO find a better way to get the horizon variable
167 | lb = domain_constraint.get_lower_bounds(problem.integrator.H)
168 | ub = domain_constraint.get_upper_bounds(problem.integrator.H)
169 | bounds = Bounds(lb, ub)
170 |
171 |
172 | res = minimize(problem.objective, x_init, method="SLSQP", jac=problem.gradient, \
173 | constraints=problem.get_constraints_dict(), options={'maxiter': self.max_iteration, 'ftol': self.tolerance, 'disp': True, 'iprint':self.verbose}, bounds=bounds)
174 | if self.debug:
175 | self.constraints_val = problem.constraints(res.x)
176 | self.debug_x = problem.debug_x
177 | self.debug_u = problem.debug_u
178 |         if not res.success:
179 |             warnings.warn("Process did not converge!")
180 |
181 | if self.debug:
182 | return Optimizer.FAIL
183 |
184 | if np.max(problem.constraints(res.x))>1e-5:
185 | for i in range(self.nb_max_try):
186 | print("RETRY SQP optimization")
187 | x_init = np.concatenate( [np.concatenate( [x0,]*problem.integrator.H ), np.repeat(np.array([0.0,]*problem.integrator.model.u_dim),problem.integrator.H)])
188 | res = minimize(problem.objective, x_init, method="SLSQP", jac=problem.gradient, \
189 | constraints=problem.get_constraints_dict(), options={'maxiter': self.max_iteration, 'ftol': self.tolerance*(2.0**i), 'disp': True, 'iprint':self.verbose}, bounds=bounds)
190 | if np.max(problem.constraints(res.x))<1e-5 or res.success:
191 | break
192 |
193 | if not res.success and (np.max(problem.constraints(res.x))>1e-5):
194 | return Optimizer.FAIL
195 |
196 | self.prev_result = res.x
197 |
198 | return Optimizer.SUCCESS
--------------------------------------------------------------------------------
/pyNeuralEMPC/utils.py:
--------------------------------------------------------------------------------
1 | # This file will contain all the auxiliary functions needed in this library.
--------------------------------------------------------------------------------
/scripts/run_tests.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | python3 -m pytest --cov-config .coveragerc --cov-report html --cov-report term --cov=. -v
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | from setuptools import setup, find_packages
5 |
6 | import pyNeuralEMPC
7 |
8 | setup(
9 | name='pyNeuralEMPC',
10 | version=pyNeuralEMPC.__version__,
11 | packages=find_packages(),
12 | author="François 'Enderdead' Gauthier-Clerc",
13 |
14 | author_email="francois@gauthier-clerc.fr",
15 |
16 |     description="A nonlinear MPC library that lets you use a neural network as a model.",
17 |
18 | long_description=open('README.md').read(),
19 |
20 | install_requires=["numpy", "matplotlib", "tensorflow", "cyipopt", "jax"],
21 |
22 | include_package_data=True,
23 |
24 | url='https://github.com/Enderdead/pyNeuralEMPC',
25 |
26 | classifiers=[
27 | "Programming Language :: Python",
28 | "Development Status :: 1 - Planning",
29 | "Operating System :: OS Independent",
30 | "Programming Language :: Python :: 3.7",
31 | "Topic :: Neural NMPC",
32 | ],
33 | license="MIT",
34 | )
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import jax.numpy as jnp
4 |
5 | from pyNeuralEMPC.model.tensorflow import KerasTFModel, KerasTFModelRollingInput
6 | import pyNeuralEMPC as nEMPC
7 |
8 |
9 |
10 |
11 |
12 | @tf.function
13 | def model(x): # x - x*y ; -0.5 + u + x * y
14 | #result = tf.concat([x[:,0:1] - x[:,0:1]*x[:,1:2], -1*x[:,1:2]+x[:,2:3] +x[:,0:1]*x[:,1:2] ], axis=1)
15 | result = tf.concat([x[:,2:3] , x[:,4:5] ], axis=1)
16 |
17 | return result
18 |
19 |
20 | class FakeModel:
21 |
22 | def __init__(self):
23 | self.input_shape = (-1, 3 )
24 | self.output_shape = (-1, 2)
25 |
26 | @tf.function
27 | def __call__(self, x):
28 | return tf.concat([x[:,2:3]*x[:,2:3] , x[:,5:6]*x[:,5:6]/2 ], axis=1)
29 |
30 | @tf.function
31 | def predict(self, x):
32 | return tf.concat([x[:,2:3]*x[:,2:3] , x[:,5:6]*x[:,5:6]/2 ], axis=1)
33 |
34 | fake = FakeModel()
35 |
36 |
37 |
38 |
39 | x_past = np.array([[0.2,0.1]])
40 | u_past = np.array([[0.0]])
41 |
42 | x0 = np.array([[-0.2,-0.1]])
43 |
44 | x = np.array([[0.2,0.1],
45 | [0.10832,0.0908]],dtype=np.float32)
46 |
47 | u = np.array([[0.01309788],[-0.09964662]], dtype=np.float32)
48 |
49 |
50 | test = KerasTFModelRollingInput(fake, 2, 1, forward_rolling=True)
51 | test.set_prev_data(x_past, u_past)
52 |
53 | H = 10
54 |
55 | class LotkaCost:
56 | def __init__(self, cost_vec):
57 | self.cost_vec = cost_vec
58 |
59 | def __call__(self, x, u, p=None, tvp=None):
60 | return jnp.sum(jnp.square(u.reshape(-1)-2.0))
61 |
62 |
63 | cost_func = LotkaCost(jnp.array([1.1,]*25))
64 | DT = 1
65 | integrator = nEMPC.integrator.discret.DiscretIntegrator(test, H)
66 |
67 |
68 |
69 |
70 |
71 | constraints_nmpc = [nEMPC.constraints.DomainConstraint(
72 | states_constraint=[[-np.inf, np.inf], [-np.inf, np.inf]],
73 | control_constraint=[[-np.inf, np.inf]]),]
74 |
75 | objective_func = nEMPC.objective.jax.JAXObjectifFunc(cost_func)
76 |
77 | MPC = nEMPC.controller.NMPC(integrator, objective_func, constraints_nmpc, H, DT)
78 |
79 | pred, u = MPC.next(x_past.reshape(-1))
80 |
--------------------------------------------------------------------------------
/testing/nothing:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Enderdead/pyNeuralEMPC/397a459e1281dabd82876843ed83964c42c05c86/testing/nothing
--------------------------------------------------------------------------------