├── .gitignore
├── README.md
├── attitude_control
│   ├── __init__.py
│   ├── gatekeeper_attitude.py
│   ├── simple_attitude.py
│   ├── velocity_tracking_yaw.py
│   └── visibility_promoting_yaw.py
├── position_control
│   ├── __init__.py
│   ├── cbf_qp.py
│   ├── mpc_cbf.py
│   ├── optimal_decay_cbf_qp.py
│   └── optimal_decay_mpc_cbf.py
├── robots
│   ├── double_integrator2D.py
│   ├── dynamic_unicycle2D.py
│   ├── kinematic_bicycle2D.py
│   ├── kinematic_bicycle2D_c3bf.py
│   ├── quad2D.py
│   ├── quad3D.py
│   ├── robot.py
│   ├── single_integrator2D.py
│   ├── unicycle2D.py
│   └── vtol2D.py
├── setup.py
├── tracking.py
└── utils
    ├── __init__.py
    ├── env.py
    ├── geometry.py
    ├── plotting.py
    └── utils.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Ignore pycache directories
2 | **/__pycache__/
3 | __pycache__/
4 |
5 | # Ignore misc. files
6 | *.csv
7 | *.egg-info/
8 | .DS_Store
9 |
10 | # Ignore output and virtual environment directories
11 | output/
12 |
13 | venv/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # safe_control
2 |
3 | `safe_control` is a Python library that provides a unified codebase for safety-critical controllers in robotic navigation. It implements various control barrier function (CBF) based controllers and other types of safety-critical controllers.
4 |
5 | ## Features
6 |
7 | - Implementation of various positional safety-critical controllers, including [`CBF-QP`](https://ieeexplore.ieee.org/document/8796030) and [`MPC-CBF`](https://ieeexplore.ieee.org/document/9483029)
8 | - Implementation of a safety-critical attitude controller using [`gatekeeper`](https://ieeexplore.ieee.org/document/10341790)
9 | - Support for different robot dynamics models (e.g., unicycle, double integrator)
10 | - Sensing and mapping simulation for RGB-D-type camera sensors with a limited FOV
11 | - Support for both single-robot and multi-robot navigation
12 | - Interactive plotting and visualization
13 |
14 | ## Installation
15 |
16 | To install `safe_control`, follow these steps:
17 |
18 | 1. Clone the repository:
19 | ```bash
20 | git clone https://github.com/tkkim-robot/safe_control.git
21 | cd safe_control
22 | ```
23 |
24 | 2. (Optional) Create and activate a virtual environment:
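For example:
```bash
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
```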
25 |
26 | 3. Install the package and its dependencies:
27 | ```bash
28 | python -m pip install -e .
29 | ```
30 | Or, install packages manually (see [`setup.py`](https://github.com/tkkim-robot/safe_control/blob/main/setup.py)).
31 |
32 |
33 | ## Getting Started
34 |
35 | Familiarize yourself with the APIs and examples through the scripts in [`tracking.py`](https://github.com/tkkim-robot/safe_control/blob/main/tracking.py).
36 |
37 | ### Basic Example
38 | You can run our test example by:
39 |
40 | ```bash
41 | python tracking.py
42 | ```
43 |
44 | Alternatively, you can import `LocalTrackingController` from [`tracking.py`](https://github.com/tkkim-robot/safe_control/blob/main/tracking.py).
45 |
46 | ```python
47 | from safe_control.tracking import LocalTrackingController
48 | controller = LocalTrackingController(x_init, robot_spec,
49 | control_type=control_type,
50 | dt=dt,
51 | show_animation=True,
52 | save_animation=False,
53 | ax=ax, fig=fig,
54 | env=env_handler)
55 |
56 | # assuming you have set the workspace (environment) in utils/env.py
57 | controller.set_waypoints(waypoints)
58 | _ = controller.run_all_steps(tf=100)
59 | ```
60 |
61 | You can also design your own navigation logic with the controller; the `control_step()` function simulates one time step.
62 |
63 | ```python
64 | # self refers to the LocalTrackingController instance.
65 | for _ in range(int(tf / self.dt)):
66 | ret = self.control_step()
67 | self.draw_plot()
68 | if ret == -1: # all waypoints reached
69 | break
70 | self.export_video()
71 | ```
72 |
73 | The sample results from the basic example:
74 |
75 | | Navigation with MPC-CBF controller |
76 | | :-------------------------------: |
77 | | *(demo animation)* |
78 |
79 | The green points are the given waypoints, and the blue area is the accumulated sensing footprint.
80 |
81 | The gray circles are the obstacles that are known a priori.
82 |
83 | ### Detect/Avoid Unknown Obstacles
84 | You can also simulate online detection of unknown obstacles, and avoid obstacles that are detected on the fly using safety-critical constraints.
85 | The configuration of the obstacles is (x, y, radius).
86 |
87 | ```python
88 | unknown_obs = np.array([[2, 2, 0.2], [3.0, 3.0, 0.3]])
89 | controller.set_unknown_obs(unknown_obs)
90 |
91 | # then, run simulation
92 | ```
93 |
94 | The unknown obstacles are visualized in orange.
95 |
96 | | Navigation with MPC-CBF controller |
97 | | :-------------------------------: |
98 | | *(demo animation)* |
99 |
100 |
101 | ### Multi-Robot Example
102 |
103 | You can simulate multiple, heterogeneous robots navigating in the same environment.
104 |
105 | ```python
106 | robot_spec = {
107 | 'model': 'Unicycle2D',
108 | 'robot_id': 0
109 | }
110 | controller_0 = LocalTrackingController(x_init, robot_spec)
111 |
112 | robot_spec = {
113 | 'model': 'DynamicUnicycle2D',
114 | 'robot_id': 1,
115 | 'a_max': 1.5,
116 | 'fov_angle': 90.0,
117 | 'cam_range': 5.0,
118 | 'radius': 0.25
119 | }
120 | controller_1 = LocalTrackingController(x_init, robot_spec)
121 |
122 | # then, run simulation
123 | ```
124 |
125 | First define the specification of each robot (each with a unique `robot_id`), then run the simulation.
126 |
127 | | Homogeneous Robots | Heterogeneous Robots |
128 | | :-------------------------------: | :-------------------------------: |
129 | | *(demo animation)* | *(demo animation)* |
130 |
131 |
132 | ## Module Breakdown
133 |
134 | ### Dynamics
135 |
136 | Supported robot dynamics can be found in the [`robots/`](https://github.com/tkkim-robot/safe_control/tree/main/robots) directory:
137 |
138 | - `single_integrator2D`
139 | - `double_integrator2D`
140 | - `single_integrator2D with camera angle`: allows the camera angle to rotate
141 | - `double_integrator2D with camera angle`
142 | - `unicycle2D`
143 | - `dynamic_unicycle2D`: A unicycle model that uses velocity as state and acceleration as input.
144 | - `kinematic_bicycle2D`: needs [`C3BF`](https://arxiv.org/abs/2403.07043) for a valid CBF. See [`robots/kinematic_bicycle2D_c3bf.py`](https://github.com/tkkim-robot/safe_control/blob/main/robots/kinematic_bicycle2D_c3bf.py) for more details.
145 | - `quad2d`: x - forward, z - vertical
146 | - `quad3d`: 12 states, using [`RK4 Sampled-Data CBF`](https://arxiv.org/pdf/2203.11470) to construct a CBF for inputs with relative degree 4.
147 | - `vtol2d`: x - forward, z - vertical
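
To pick a model, set the `model` key of `robot_spec` when constructing the controller. A minimal sketch (the field names follow the multi-robot example above; the exact set of required fields depends on the chosen model):

```python
robot_spec = {
    'model': 'DynamicUnicycle2D',   # any of the models listed above
    'robot_id': 0,
    'a_max': 1.5,                   # model-specific limits used by the controllers
    'w_max': 0.5,
    'radius': 0.25
}
controller = LocalTrackingController(x_init, robot_spec, dt=0.05, show_animation=True)
```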
148 |
149 | ### Positional Control Algorithms
150 |
151 | Supported positional controllers can be found in the [`position_control/`](https://github.com/tkkim-robot/safe_control/tree/main/position_control) directory:
152 |
153 | - `cbf_qp`: A simple CBF-QP controller for collision avoidance (ref: [[1]](https://ieeexplore.ieee.org/document/8796030))
154 | - `mpc_cbf`: An MPC controller using discrete-time CBF (ref: [[2]](https://ieeexplore.ieee.org/document/9483029))
155 | - `optimal_decay_cbf_qp`: A modified CBF-QP for point-wise feasibility guarantee (ref: [[3]](https://ieeexplore.ieee.org/document/9482626))
156 | - `optimal_decay_mpc_cbf`: The same technique applied to MPC-CBF (ref: [[4]](https://ieeexplore.ieee.org/document/9683174))
157 |
158 | To use a specific control algorithm, specify it when initializing the `LocalTrackingController`:
159 |
160 | ```python
161 | controller = LocalTrackingController(..., control_type='cbf_qp', ...)
162 | ```
163 |
164 | ### Attitude Control Algorithms
165 |
166 | Supported attitude controllers can be found in the [`attitude_control/`](https://github.com/tkkim-robot/safe_control/tree/main/attitude_control) directory:
167 |
168 | - `gatekeeper`: A safety filter that switches between pre-defined nominal and backup controllers, designed to guarantee safety for all future time (ref: [[5]](https://ieeexplore.ieee.org/document/10665919)); see the sketch below
169 | - `velocity tracking yaw`: A yaw controller that aligns the heading with the robot's velocity direction
170 | - `visibility promoting yaw`: A simple visibility promoting attitude controller based on the accumulated map
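
A minimal sketch of constructing the gatekeeper attitude controller directly (assuming `robot`, `robot_spec`, `current_yaw`, the positional input `u_pos`, and an MPC-CBF position controller `pos_controller` already exist; `tracking.py` wires this up in the simulation loop):

```python
from attitude_control.gatekeeper_attitude import GatekeeperAtt

att_controller = GatekeeperAtt(robot, robot_spec,
                               ctrl_config={'nominal': 'visibility promoting yaw',
                                            'backup': 'velocity tracking yaw'})
att_controller.setup_pos_controller(pos_controller)  # gatekeeper reads the MPC's planned trajectory
u_att = att_controller.solve_control_problem(robot.X, current_yaw, u_pos)
```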
171 |
172 | ### Customizing Environments
173 |
174 | You can modify the environment in [`utils/env.py`](https://github.com/tkkim-robot/safe_control/blob/main/utils/env.py).
175 |
176 | ### Visualization
177 | The online visualization is performed using [`matplotlib.pyplot.ion`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.ion.html).
178 |
179 | It allows you to interactively scroll, resize, change the view, etc. during the simulation.
180 |
181 | You can visualize and save animation by setting the arguments:
182 | ```python
183 | controller = LocalTrackingController(..., show_animation=True, save_animation=True)
184 | ```
185 |
186 | ## Citing
187 |
188 | If you find this repository useful, please consider citing our paper:
189 |
190 | ```
191 | @inproceedings{kim2025learning,
192 | author = {Kim, Taekyung and Kee, Robin Inho and Panagou, Dimitra},
193 | title = {Learning to Refine Input Constrained Control Barrier Functions via Uncertainty-Aware Online Parameter Adaptation},
194 | booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
195 | shorttitle = {Online-Adaptive-CBF},
196 | year = {2025}
197 | }
198 | ```
199 |
200 | ## Related Works
201 |
202 | This repository has been utilized in several other projects. Here are some repositories that build upon or use components from this library:
203 |
204 | - [Visibility-Aware RRT*](https://github.com/tkkim-robot/visibility-rrt): Safety-critical Global Path Planning (GPP) using Visibility Control Barrier Functions
205 | - [Online Adaptive CBF](https://github.com/tkkim-robot/online_adaptive_cbf): Online adaptation of CBF parameters for input constrained robot systems
206 | - [Multi-Robot Exploration and Mapping](): to be released soon
207 | - [UGV Experiments with ROS2](https://github.com/tkkim-robot/px4_ugv_exp): Environment setup for rovers using PX4, ROS 2 Humble, Vicon MoCap, and NVIDIA VSLAM + NvBlox
208 |
--------------------------------------------------------------------------------
/attitude_control/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/attitude_control/gatekeeper_attitude.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from .simple_attitude import SimpleAtt
3 | from .velocity_tracking_yaw import VelocityTrackingYaw
4 | from .visibility_promoting_yaw import VisibilityAtt
5 | from shapely.geometry import LineString, Polygon
6 |
7 | """
8 | Created on April 21st, 2025
9 | @author: Taekyung Kim
10 |
11 | @description:
12 | This code implements a gatekeeper-based attitude (yaw) controller.
13 | The intended use case is a visibility-promoting nominal controller with a velocity-tracking-yaw backup, but it is not limited to that pairing.
14 |
15 |
16 | """
17 |
18 | def angle_normalize(x):
19 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
20 |
21 | class GatekeeperAtt:
22 | """
23 | Attitude (yaw) controller that wraps the Gatekeeper safety filter,
24 | selecting between a nominal and a backup yaw policy.
25 | """
26 | def __init__(self,
27 | robot,
28 | robot_spec: dict,
29 | dt: float = 0.05,
30 | ctrl_config: dict = {},
31 | nominal_horizon: float = 1.0,
32 | backup_horizon: float = 2.0,
33 | event_offset: float = 1.0):
34 | """
35 | Parameters
36 | ----------
37 | robot : object
38 | Robot instance (provides state X, dynamics, etc.).
39 | robot_spec : dict
40 | Robot specifications
41 | ctrl_config : dict
42 | {'nominal': <'simple'|'velocity tracking yaw'>,
43 | 'backup': <'simple'|'velocity tracking yaw'>}
44 | dt : float
45 | Integration timestep for Gatekeeper.
46 | nominal_horizon : float
47 | backup_horizon : float
48 | event_offset : float
49 | """
50 | self.robot = robot
51 | self.robot_spec = robot_spec
52 | self.dt = dt
53 | self.nominal_horizon = nominal_horizon
54 | self.backup_horizon = backup_horizon
55 | self.event_offset = event_offset
56 | self.horizon_discount = dt * 5
57 |
58 | self.next_event_time = 0.0
59 | self.current_time_idx = backup_horizon / dt # start from the backup trajectory; if the initial candidate trajectory is valid, this is reset to 0
60 | self.candidate_time_idx = 0
61 | self.committed_horizon = 0.0
62 |
63 | # Initialize the committed trajectory
64 | self.committed_x_traj = None
65 | self.committed_u_traj = None
66 | self.pos_committed_x_traj = None # be updated using pos_controller instance
67 | self.pos_committed_u_traj = None
68 |
69 | # Map user keys to controller classes
70 | self._ctrl_map = {
71 | 'simple': SimpleAtt,
72 | 'velocity tracking yaw': VelocityTrackingYaw,
73 | 'visibility promoting yaw': VisibilityAtt
74 | }
75 |
76 | nom_key = ctrl_config.get('nominal', 'visibility promoting yaw')
77 | if nom_key not in self._ctrl_map:
78 | raise ValueError(f"Unknown nominal controller '{nom_key}'")
79 | NominalCtrl = self._ctrl_map[nom_key]
80 | self.nominal_ctrl = NominalCtrl(robot, robot_spec)
81 | self._set_nominal_controller(self.nominal_ctrl.solve_control_problem)
82 |
83 | backup_key = ctrl_config.get('backup', 'velocity tracking yaw')
84 | if backup_key not in self._ctrl_map:
85 | raise ValueError(f"Unknown backup controller '{backup_key}'")
86 | BackupCtrl = self._ctrl_map[backup_key]
87 | self.backup_ctrl = BackupCtrl(robot, robot_spec)
88 | self._set_backup_controller(self.backup_ctrl.solve_control_problem)
89 |
90 | def _set_nominal_controller(self, nominal_controller):
91 | self.nominal_controller = nominal_controller
92 |
93 | def _set_backup_controller(self, backup_controller):
94 | self.backup_controller = backup_controller
95 |
96 | def setup_pos_controller(self, pos_controller):
97 | self.pos_controller = pos_controller
98 |
99 | def _dynamics(self, x, u):
100 | x_col = np.array(x)
101 | u_col = np.array(u)
102 | # f and g live under self.robot.robot
103 | dx = self.robot.robot.f(x_col) + self.robot.robot.g(x_col) @ u_col
104 | return dx
105 |
106 | def _update_pos_committed_trajectory(self):
107 | """
108 | Update the positional committed trajectory.
109 | """
110 |
111 | x_traj_casadi = self.pos_controller.mpc.opt_x_num['_x', :, 0, 0]
112 | x_traj_casadi = x_traj_casadi[:-1] # remove the last step to make it same length as u_traj_casadi
113 | u_traj_casadi = self.pos_controller.mpc.opt_x_num['_u', :, 0]
114 |
115 | # Convert to numpy arrays - initialization
116 | n_steps = len(x_traj_casadi)
117 | state_dim = x_traj_casadi[0].shape[0] if hasattr(x_traj_casadi[0], 'shape') else 1
118 | pos_x_traj = np.zeros((n_steps, state_dim))
119 |
120 | n_controls = len(u_traj_casadi)
121 | control_dim = u_traj_casadi[0].shape[0] if hasattr(u_traj_casadi[0], 'shape') else 1
122 | pos_u_traj = np.zeros((n_controls, control_dim))
123 |
124 | # Convert each state in the trajectory
125 | for i, x_dm in enumerate(x_traj_casadi):
126 | pos_x_traj[i, :] = np.array(x_dm.full()).flatten()
127 |
128 | for i, u_dm in enumerate(u_traj_casadi):
129 | pos_u_traj[i, :] = np.array(u_dm.full()).flatten()
130 |
131 | # Update the class attributes
132 | self.pos_committed_x_traj = pos_x_traj
133 | self.pos_committed_u_traj = pos_u_traj
134 |
135 | # --- now extend if too short ---
136 | required_steps = int((self.nominal_horizon + self.backup_horizon) / self.dt) + 1
137 | curr_len = self.pos_committed_x_traj.shape[0]
138 | if curr_len < required_steps:
139 | missing = required_steps - curr_len
140 |
141 | x_ext = np.zeros((missing, state_dim))
142 | u_ext = np.zeros((missing, control_dim))
143 |
144 | last_x = self.pos_committed_x_traj[-1].reshape(-1, 1)
145 | last_u = self.pos_committed_u_traj[-1].reshape(-1, 1)
146 | last_u = np.zeros_like(last_u) # TODO: maintain zero control input
147 |
148 | for k in range(missing):
149 | dx = self._dynamics(last_x, last_u)
150 | last_x = last_x + dx * self.dt
151 |
152 | x_ext[k, :] = last_x.flatten()
153 | u_ext[k, :] = last_u.flatten()
154 |
155 | # Concatenate in NumPy
156 | self.pos_committed_x_traj = np.concatenate(
157 | [self.pos_committed_x_traj, x_ext], axis=0
158 | )
159 | self.pos_committed_u_traj = np.concatenate(
160 | [self.pos_committed_u_traj, u_ext], axis=0
161 | )
162 |
163 | def _generate_trajectory(self, initial_yaw, horizon, controller):
164 | """
165 | Generate a yaw trajectory by rolling out the given controller along the
166 | committed positional trajectory.
167 |
168 | Parameters:
169 | -----------
170 | initial_yaw : float
171 | Initial yaw angle to start the backup trajectory from
172 | horizon : float
173 | Time horizon for the backup trajectory
174 | controller : callable
175 | Function to compute the control input based on the current state
176 | """
177 | n_steps = int(horizon / self.dt) + 1
178 | x_traj = np.zeros(n_steps)
179 | u_traj = np.zeros(n_steps)
180 |
181 | current_yaw = initial_yaw
182 | x_traj[0] = current_yaw
183 |
184 | # Propagate yaw dynamics using defined controller
185 | for i in range(1, n_steps):
186 | # Get corresponding positional state and control
187 | pos_idx = self.candidate_time_idx + i
188 | if pos_idx < len(self.pos_committed_x_traj)-1 :
189 | pos_x = self.pos_committed_x_traj[pos_idx]
190 | pos_u = self.pos_committed_u_traj[pos_idx]
191 | else:
192 | # Use last available state/control if beyond pos trajectory
193 | pos_x = self.pos_committed_x_traj[-1]
194 | #pos_u = self.pos_committed_u_traj[-1]
195 | pos_u = np.zeros_like(self.pos_committed_u_traj[-1]) # TODO: maintain zero control input
196 |
197 | # Compute yaw control input
198 | u = controller(pos_x.reshape(-1, 1), current_yaw, pos_u.reshape(-1, 1))
199 | u = u.item() # Extract scalar from 1x1 array
200 | u_traj[i-1] = u
201 |
202 | # Update yaw using simple Euler integration
203 | current_yaw = current_yaw + u * self.dt
204 | x_traj[i] = current_yaw
205 |
206 | return x_traj, u_traj
207 |
208 | def _generate_candidate_trajectory(self, discounted_nominal_horizon):
209 |
210 | # Generate the candidate trajectory using the nominal and backup controllers
211 | self.candidate_time_idx = 0
212 | current_yaw = self.robot.get_orientation()
213 | nominal_x_traj, nominal_u_traj = self._generate_trajectory(current_yaw, discounted_nominal_horizon, self.nominal_controller)
214 |
215 | self.candidate_time_idx = len(nominal_x_traj) - 1
216 | yaw_at_backup = nominal_x_traj[-1] # last yaw of the nominal trajectory
217 | backup_x_traj, backup_u_traj = self._generate_trajectory(yaw_at_backup, self.backup_horizon, self.backup_controller)
218 |
219 | self.candidate_x_traj = np.hstack((nominal_x_traj, backup_x_traj))
220 | self.candidate_u_traj = np.hstack((nominal_u_traj, backup_u_traj))
221 | return self.candidate_x_traj
222 |
223 | def _is_candidate_valid(self, critical_point, candidate_x_traj, discounted_nominal_horizon):
224 | """
225 | Check if the candidate trajectory is valid by evaluating the safety condition.
226 | The trajectory is valid if the critical point becomes visible before the robot reaches it.
227 |
228 | Parameters
229 | ----------
230 | candidate_x_traj : numpy.ndarray
231 | The candidate yaw trajectory to evaluate
232 |
233 | Returns
234 | -------
235 | bool
236 | True if the trajectory is valid (critical point becomes visible in time), False otherwise
237 | """
238 | if critical_point is None:
239 | return True # No critical point found, trajectory is valid
240 |
241 | # Check if critical point becomes visible before robot reaches it
242 | for i in range(len(candidate_x_traj)):
243 | # Get position and yaw at this timestep
244 |
245 | pos = self.pos_committed_x_traj[i]
246 | yaw = candidate_x_traj[i]
247 |
248 | # Check if critical point is in FOV at this state
249 | # Consider the discounted nominal horizon, only check during the backup phase
250 | if i > discounted_nominal_horizon/self.dt:
251 | if self._is_point_in_fov(pos, yaw, critical_point, is_in_cam_range=True):
252 | # print("In FOV at time step", i, "at position", pos, "with yaw", yaw)
253 | # print("critical point", critical_point)
254 | return True
255 |
256 | # Check if we've reached the stopping point before critical point
257 | if self.robot_spec['model'] == 'DoubleIntegrator2D':
258 | # Get current velocity
259 | # vx = pos[2]
260 | # vy = pos[3]
261 | # v = np.sqrt(vx**2 + vy**2)
262 | v = self.robot.robot_spec['v_max'] # TODO:
263 |
264 | # Calculate maximum braking distance
265 | a_max = self.robot_spec['a_max']
266 | braking_distance = v**2 / (2 * a_max)
267 |
268 | # Calculate distance to critical point
269 | dist_to_critical = np.linalg.norm(pos[:2] - critical_point)
270 |
271 | # If we're within braking distance of critical point, trajectory is invalid
272 | if dist_to_critical <= braking_distance:
273 | return False
274 | elif self.robot_spec['model'] == 'SingleIntegrator2D':
275 | # If we've passed the critical point, trajectory is invalid
276 | if np.linalg.norm(pos[:2] - critical_point) < self.robot.robot_radius:
277 | return False
278 | else:
279 | raise ValueError(f"Not implemented for robot model: {self.robot_spec['model']}")
280 |
281 | return False # Critical point never became visible
282 |
283 | def _is_point_in_fov(self, robot_state, robot_yaw, point, is_in_cam_range=False):
284 | """
285 | Check if a point is within the robot's field of view at a given state and yaw.
286 |
287 | Parameters
288 | ----------
289 | robot_state : numpy.ndarray
290 | Robot state [x, y, ...]
291 | robot_yaw : float
292 | Robot yaw angle
293 | point : numpy.ndarray
294 | Point to check [x, y]
295 | is_in_cam_range : bool
296 | Whether to also check if point is within camera range
297 |
298 | Returns
299 | -------
300 | bool
301 | True if point is in FOV, False otherwise
302 | """
303 | robot_pos = robot_state[:2]
304 | to_point = point - robot_pos
305 |
306 | angle_to_point = np.arctan2(to_point[1], to_point[0])
307 | angle_diff = abs(angle_normalize(angle_to_point - robot_yaw))
308 |
309 | in_fov = angle_diff <= self.robot.fov_angle / 2
310 |
311 | if is_in_cam_range:
312 | dist_to_point = np.linalg.norm(to_point)
313 | return in_fov and dist_to_point <= self.robot.cam_range
314 |
315 | return in_fov
316 |
317 | def _compute_critical_point(self):
318 | """
319 | Compute the critical point where the planned trajectory intersects with the map boundary.
320 |
321 | Returns
322 | -------
323 | critical_point : numpy.ndarray or None
324 | The critical point where trajectory intersects map boundary, or None if no intersection
325 | """
326 | if self.robot.sensing_footprints.is_empty:
327 | print("No sensing footprints found")
328 | return None
329 |
330 | # Get the planned trajectory
331 | trajectory = self.pos_committed_x_traj
332 |
333 | # Convert trajectory points to LineString
334 | trajectory_line = LineString([(x[0], x[1]) for x in trajectory])
335 |
336 | # Find intersection with sensing footprints
337 | intersection = trajectory_line.intersection(self.robot.sensing_footprints)
338 |
339 | if intersection.is_empty:
340 | return None
341 |
342 | # Get the first intersection point
343 | if intersection.geom_type == 'Point':
344 | critical_point = np.array([intersection.x, intersection.y])
345 | elif intersection.geom_type == 'LineString':
346 | # For LineString intersection, get the furthest point
347 | # This is the point where we exit the explored area
348 | robot_pos = self.robot.get_position()
349 | coords = list(intersection.coords)
350 | distances = [np.linalg.norm(np.array(p) - robot_pos) for p in coords]
351 | furthest_idx = np.argmax(distances)
352 | critical_point = np.array(coords[furthest_idx])
353 | elif intersection.geom_type in ['GeometryCollection', 'MultiLineString', 'MultiPoint']:
354 | #print("GeometryCollection intersection")
355 | robot_pos = self.robot.get_position()
356 | multi_geom = list(intersection.geoms)
357 | coords = []
358 | for geom in multi_geom:
359 | if geom.geom_type == 'LineString':
360 | coords.extend(list(geom.coords))
361 | elif geom.geom_type == 'Point':
362 | coords.append((geom.x, geom.y))
363 | distances = [np.linalg.norm(np.array(p) - robot_pos) for p in coords]
364 | furthest_idx = np.argmax(distances)
365 | critical_point = np.array(coords[furthest_idx])
366 | else:
367 | print(f"Unexpected intersection type: {intersection.geom_type}")
368 | return None
369 |
370 | return critical_point
371 |
372 | def _update_committed_trajectory(self, discounted_nominal_horizon):
373 | """
374 | Update the committed trajectory with the candidate trajectory.
375 | """
376 | self.committed_x_traj = self.candidate_x_traj
377 | self.committed_u_traj = self.candidate_u_traj
378 | self.next_event_time = self.event_offset
379 | self.current_time_idx = 0
380 | self.committed_horizon = discounted_nominal_horizon
381 |
382 | def solve_control_problem(self,
383 | robot_state: np.ndarray,
384 | current_yaw: float,
385 | u: np.ndarray) -> float:
386 | '''
387 | u: positional control input; it is ignored here, and pos_committed_u_traj is used instead.
388 | '''
389 |
390 | self.current_time_idx += 1
391 | self._update_pos_committed_trajectory()
392 |
393 | if self.committed_x_traj is None and self.committed_u_traj is None:
394 | # initialize the committed trajectory
395 | init_x_traj, init_u_traj = self._generate_trajectory(current_yaw, self.backup_horizon, self.backup_controller)
396 | self.committed_x_traj = init_x_traj
397 | self.committed_u_traj = init_u_traj
398 |
399 | # try updating the committed trajectory
400 | if self.current_time_idx > self.next_event_time/self.dt:
401 | #print("Event triggered, generating new candidate trajectory")
402 | critical_point = self._compute_critical_point()
403 |
404 | for i in range(int(self.nominal_horizon//self.horizon_discount)):
405 | # discount the nominal horizon
406 | discounted_nominal_horizon = self.nominal_horizon - i * self.horizon_discount
407 | # Generate the candidate trajectory
408 | candidate_x_traj = self._generate_candidate_trajectory(discounted_nominal_horizon)
409 | # Check if the candidate trajectory is valid
410 | if self._is_candidate_valid(critical_point, candidate_x_traj, discounted_nominal_horizon):
411 | #print("Candidate trajectory is valid")
412 | self._update_committed_trajectory(discounted_nominal_horizon)
413 | break
414 |
415 | if self.current_time_idx < self.committed_horizon/self.dt:
416 | # Use the committed trajectory for the next control input
417 | #print("in nominal: control input", self.committed_u_traj[self.current_time_idx])
418 | #print("in nominal: control input", self.nominal_controller(self.robot.X, goal))
419 | #print("nominal")
420 | return self.committed_u_traj[self.current_time_idx] if self.nominal_controller is None else self.nominal_controller(robot_state, current_yaw, u)
421 | else:
422 | #print("backup")
423 | #print("backup: control input", self.backup_controller(self.robot.X))
424 | return self.committed_u_traj[self.current_time_idx] if self.backup_controller is None else self.backup_controller(robot_state, current_yaw, u)
--------------------------------------------------------------------------------
/attitude_control/simple_attitude.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 |
4 | class SimpleAtt:
5 | def __init__(self, robot, robot_spec):
6 | self.robot = robot
7 | self.robot_spec = robot_spec
8 |
9 | self.yaw_rate_const = 0.5
10 |
11 | self.setup_control_problem()
12 |
13 | def setup_control_problem(self):
14 | pass
15 |
16 | def solve_control_problem(self, *args, **kwargs) -> np.ndarray:
17 | return np.array([[self.yaw_rate_const]])
18 |
--------------------------------------------------------------------------------
/attitude_control/velocity_tracking_yaw.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | """
4 | Created on April 20th, 2025
5 | @author: Taekyung Kim
6 |
7 | @description:
8 | This code implements a velocity tracking yaw controller for various robot dynamics, but mostly for integrators.
9 | It computes the desired yaw angle based on the robot's velocity vector.
10 | Assuming the yaw tracks the velocity direction perfectly, the robot is guaranteed to observe potential obstacles along its path.
11 |
12 | @note:
13 | - Can be used in general cases.
14 | - Can be used as a backup attitude controller of the gatekeeper.
15 |
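@example (illustrative sketch; `robot`, `robot_spec`, `current_yaw`, and the positional input `u_pos` come from the tracking pipeline):
    att_ctrl = VelocityTrackingYaw(robot, robot_spec, kp=1.5)
    u_att = att_ctrl.solve_control_problem(robot.X, current_yaw, u_pos)  # returns a (1, 1) yaw-rate command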
16 | """
17 |
18 | def angle_normalize(x):
19 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
20 |
21 | class VelocityTrackingYaw:
22 | """
23 | Backup attitude controller: for a single-integrator yaw model
24 | (yaw_dot = u_att), aligns heading with the instantaneous velocity.
25 | """
26 | def __init__(self, robot, robot_spec, kp=1.5):
27 | self.robot = robot
28 | self.robot_spec = robot_spec
29 | self.model = robot_spec['model']
30 |
31 | self.kp = kp
32 | self.w_max = robot_spec.get('w_max', 0.5)
33 |
34 | def solve_control_problem(self,
35 | robot_state: np.ndarray,
36 | current_yaw: float,
37 | u: np.ndarray) -> float:
38 | # 1) Extract state
39 | if self.model == 'SingleIntegrator2D':
40 | vx = u[0, 0]
41 | vy = u[1, 0]
42 | elif self.model == 'DoubleIntegrator2D':
43 | vx = robot_state[2, 0]
44 | vy = robot_state[3, 0]
45 | speed = np.hypot(vx, vy)
46 |
47 | # 2) If nearly stationary, hold yaw
48 | if speed < 1e-2:
49 | return np.array([0.0]).reshape(-1, 1)
50 |
51 |
52 | # 3) Compute error
53 | desired_yaw = np.arctan2(vy, vx)
54 | yaw_err = angle_normalize(desired_yaw - current_yaw)
55 |
56 | # 4) P-control and clip
57 | u_att = self.kp * yaw_err
58 | u_att = np.clip(u_att, -self.w_max, self.w_max)
59 |
60 | return np.array([u_att]).reshape(-1, 1)
--------------------------------------------------------------------------------
/attitude_control/visibility_promoting_yaw.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from shapely.geometry import Point, LineString
3 |
4 | """
5 | Created on May 11th, 2025
6 | @author: Taekyung Kim
7 |
8 | @description:
9 | This code implements a visibility promoting yaw controller that smoothly rotates towards
10 | the direction of maximum unexplored area. The controller uses P control to achieve smooth
11 | rotation while still prioritizing exploration of unknown regions.
12 |
13 | @note:
14 | - Uses P control for smooth rotation towards target angle
15 | - Target angle is determined by finding the largest gap in visibility
16 | - Uses sensing footprints to determine unexplored areas
17 | - Can be used as a nominal attitude controller for the gatekeeper
18 | """
19 |
20 | def angle_normalize(x):
21 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
22 |
23 | class VisibilityAtt:
24 | """
25 | Visibility promoting attitude controller that smoothly rotates towards
26 | the direction of maximum unexplored area using P control.
27 | """
28 | def __init__(self, robot, robot_spec, kp=1.5):
29 | self.robot = robot
30 | self.robot_spec = robot_spec
31 | self.w_max = robot_spec.get('w_max', 0.5) # Maximum angular velocity
32 | self.kp = kp # P gain for smooth rotation
33 |
34 | def solve_control_problem(self,
35 | robot_state: np.ndarray,
36 | current_yaw: float,
37 | u: np.ndarray) -> float:
38 | """
39 | Determine rotation direction based on sensing footprints using P control.
40 |
41 | Parameters
42 | ----------
43 | robot_state : numpy.ndarray
44 | Current robot state
45 | current_yaw : float
46 | Current yaw angle
47 | u : numpy.ndarray
48 | Position control input (ignored)
49 |
50 | Returns
51 | -------
52 | float
53 | Angular velocity control input
54 | """
55 | # Get robot position
56 | robot_pos = robot_state[:2].flatten()
57 |
58 | # Get current sensing footprints
59 | sensing_footprints = self.robot.sensing_footprints
60 |
61 | # If there are no sensing footprints yet, rotate at the maximum rate
62 | if sensing_footprints.is_empty:
63 | return np.array([self.w_max]).reshape(-1, 1)
64 |
65 | # Get the boundary of sensing footprints
66 | boundary = sensing_footprints.boundary
67 |
68 | # Sample points on the boundary
69 | if boundary.geom_type == 'LineString':
70 | boundary_points = list(boundary.coords)
71 | else:
72 | # For MultiLineString, get all points
73 | boundary_points = []
74 | for line in boundary.geoms:
75 | boundary_points.extend(list(line.coords))
76 |
77 | # Convert to numpy array for easier computation
78 | boundary_points = np.array(boundary_points)
79 |
80 | # Calculate vectors from robot to boundary points
81 | vectors = boundary_points - robot_pos
82 |
83 | # Calculate angles to boundary points
84 | angles = np.arctan2(vectors[:, 1], vectors[:, 0])
85 |
86 | # Normalize angles relative to current yaw
87 | angle_diffs = angle_normalize(angles - current_yaw)
88 |
89 | # Find the largest gap in visibility
90 | # Sort angles and find the largest gap
91 | sorted_angles = np.sort(angle_diffs)
92 | angle_gaps = np.diff(sorted_angles)
93 | largest_gap_idx = np.argmax(angle_gaps)
94 |
95 | # The direction to rotate is towards the middle of the largest gap
96 | target_angle = (sorted_angles[largest_gap_idx] + sorted_angles[largest_gap_idx + 1]) / 2
97 |
98 | # Use P control to compute control input
99 | error = angle_normalize(target_angle - current_yaw)
100 | u_att = self.kp * error
101 |
102 | # Clip to maximum angular velocity
103 | u_att = np.clip(u_att, -self.w_max, self.w_max)
104 |
105 | return np.array([u_att]).reshape(-1, 1)
--------------------------------------------------------------------------------
/position_control/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/position_control/cbf_qp.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cvxpy as cp
3 |
4 | class CBFQP:
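"""
CBF-QP safety filter: minimally modifies the reference input u_ref so that the
control barrier function (CBF) constraint A1 @ u + b1 >= 0 holds, subject to
the robot's input limits.
"""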
5 | def __init__(self, robot, robot_spec):
6 | self.robot = robot
7 | self.robot_spec = robot_spec
8 |
9 | self.cbf_param = {}
10 |
11 | if self.robot_spec['model'] == "SingleIntegrator2D":
12 | self.cbf_param['alpha'] = 1.0
13 | elif self.robot_spec['model'] == 'Unicycle2D':
14 | self.cbf_param['alpha'] = 1.0
15 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
16 | self.cbf_param['alpha1'] = 1.5
17 | self.cbf_param['alpha2'] = 1.5
18 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
19 | self.cbf_param['alpha1'] = 1.5
20 | self.cbf_param['alpha2'] = 1.5
21 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
22 | self.cbf_param['alpha1'] = 1.5
23 | self.cbf_param['alpha2'] = 1.5
24 | elif self.robot_spec['model'] == 'KinematicBicycle2D_C3BF':
25 | self.cbf_param['alpha'] = 1.5
26 | elif self.robot_spec['model'] == 'Quad2D':
27 | self.cbf_param['alpha1'] = 1.5
28 | self.cbf_param['alpha2'] = 1.5
29 | elif self.robot_spec['model'] == 'Quad3D':
30 | self.cbf_param['alpha1'] = 1.5
31 | self.cbf_param['alpha2'] = 1.5
32 |
33 | self.setup_control_problem()
34 |
35 | def setup_control_problem(self):
36 | self.u = cp.Variable((2, 1))
37 | self.u_ref = cp.Parameter((2, 1), value=np.zeros((2, 1)))
38 | self.A1 = cp.Parameter((1, 2), value=np.zeros((1, 2)))
39 | self.b1 = cp.Parameter((1, 1), value=np.zeros((1, 1)))
40 | objective = cp.Minimize(cp.sum_squares(self.u - self.u_ref))
41 |
42 | if self.robot_spec['model'] == 'SingleIntegrator2D':
43 | constraints = [self.A1 @ self.u + self.b1 >= 0,
44 | cp.abs(self.u[0]) <= self.robot_spec['v_max'],
45 | cp.abs(self.u[1]) <= self.robot_spec['v_max']]
46 | elif self.robot_spec['model'] == 'Unicycle2D':
47 | constraints = [self.A1 @ self.u + self.b1 >= 0,
48 | cp.abs(self.u[0]) <= self.robot_spec['v_max'],
49 | cp.abs(self.u[1]) <= self.robot_spec['w_max']]
50 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
51 | constraints = [self.A1 @ self.u + self.b1 >= 0,
52 | cp.abs(self.u[0]) <= self.robot_spec['a_max'],
53 | cp.abs(self.u[1]) <= self.robot_spec['w_max']]
54 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
55 | constraints = [self.A1 @ self.u + self.b1 >= 0,
56 | cp.abs(self.u[0]) <= self.robot_spec['a_max'],
57 | cp.abs(self.u[1]) <= self.robot_spec['a_max']]
58 | elif self.robot_spec['model'] in ['KinematicBicycle2D', 'KinematicBicycle2D_C3BF']:
59 | constraints = [self.A1 @ self.u + self.b1 >= 0,
60 | cp.abs(self.u[0]) <= self.robot_spec['a_max'],
61 | cp.abs(self.u[1]) <= self.robot_spec['beta_max']]
62 | elif self.robot_spec['model'] == 'Quad2D':
63 | constraints = [self.A1 @ self.u + self.b1 >= 0,
64 | self.robot_spec["f_min"] <= self.u[0],
65 | self.u[0] <= self.robot_spec["f_max"],
66 | self.robot_spec["f_min"] <= self.u[1],
67 | self.u[1] <= self.robot_spec["f_max"]]
68 | elif self.robot_spec['model'] == 'Quad3D':
69 | # overwrite the variables
70 | self.u = cp.Variable((4, 1))
71 | self.u_ref = cp.Parameter((4, 1), value=np.zeros((4, 1)))
72 | self.A1 = cp.Parameter((1, 4), value=np.zeros((1, 4)))
73 | self.b1 = cp.Parameter((1, 1), value=np.zeros((1, 1)))
74 | objective = cp.Minimize(cp.sum_squares(self.u - self.u_ref))
75 | constraints = [self.A1 @ self.u + self.b1 >= 0,
76 | self.u[0] <= self.robot_spec['f_max'],
77 | self.u[0] >= 0.0,
78 | cp.abs(self.u[1]) <= self.robot_spec['phi_dot_max'],
79 | cp.abs(self.u[2]) <= self.robot_spec['theta_dot_max'],
80 | cp.abs(self.u[3]) <= self.robot_spec['psi_dot_max']]
81 |
82 | self.cbf_controller = cp.Problem(objective, constraints)
83 |
84 | def solve_control_problem(self, robot_state, control_ref, nearest_obs):
85 | # Update the CBF constraints
86 | if nearest_obs is None:
87 | # deactivate the CBF constraints
88 | self.A1.value = np.zeros_like(self.A1.value)
89 | self.b1.value = np.zeros_like(self.b1.value)
90 | elif self.robot_spec['model'] in ['SingleIntegrator2D', 'Unicycle2D', 'KinematicBicycle2D_C3BF']:
91 | h, dh_dx = self.robot.agent_barrier(nearest_obs)
92 | self.A1.value[0,:] = dh_dx @ self.robot.g()
93 | self.b1.value[0,:] = dh_dx @ self.robot.f() + self.cbf_param['alpha'] * h
94 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D', 'Quad2D', 'Quad3D']:
95 | h, h_dot, dh_dot_dx = self.robot.agent_barrier(nearest_obs)
96 | self.A1.value[0,:] = dh_dot_dx @ self.robot.g()
97 | self.b1.value[0,:] = dh_dot_dx @ self.robot.f() + (self.cbf_param['alpha1']+self.cbf_param['alpha2']) * h_dot + self.cbf_param['alpha1']*self.cbf_param['alpha2']*h
98 |
99 | self.u_ref.value = control_ref['u_ref']
100 |
101 | # Solve the QP; this yields a new 'self.u'
102 | self.cbf_controller.solve(solver=cp.GUROBI, reoptimize=True)
103 |
104 | # print(f'h: {h} | value: {self.A1.value[0,:] @ self.u.value + self.b1.value[0,:]}')
105 |
106 | # Check QP error in tracking.py
107 | self.status = self.cbf_controller.status
108 | # if self.cbf_controller.status != 'optimal':
109 | # raise QPError("CBF-QP optimization failed")
110 |
111 | return self.u.value
--------------------------------------------------------------------------------
/position_control/mpc_cbf.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 | import do_mpc
4 | import matplotlib.pyplot as plt
5 |
6 | class MPCCBF:
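"""
MPC-CBF controller built on do-mpc: tracks a goal over a receding horizon while
enforcing discrete-time CBF constraints against up to 5 (possibly moving) obstacles.
"""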
7 | def __init__(self, robot, robot_spec, show_mpc_traj=False):
8 | self.robot = robot
9 | self.robot_spec = robot_spec
10 | self.status = 'optimal' # TODO: not implemented
11 | self.show_mpc_traj = show_mpc_traj
12 |
13 | # MPC parameters
14 | self.horizon = 10
15 | self.dt = robot.dt
16 |
17 | # Cost function weights
18 | if self.robot_spec['model'] == 'SingleIntegrator2D':
19 | self.Q = np.diag([50, 50])
20 | self.R = np.array([5, 5])
21 | elif self.robot_spec['model'] == 'Unicycle2D':
22 | self.Q = np.diag([50, 50, 0.01]) # State cost matrix
23 | self.R = np.array([0.5, 0.5]) # Input cost matrix
24 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
25 | self.Q = np.diag([50, 50, 0.01, 30]) # State cost matrix
26 | self.R = np.array([0.5, 0.5]) # Input cost matrix
27 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
28 | self.Q = np.diag([50, 50, 20, 20]) # State cost matrix
29 | self.R = np.array([0.5, 0.5]) # Input cost matrix
30 | elif self.robot_spec['model'] in ['KinematicBicycle2D', 'KinematicBicycle2D_C3BF']:
31 | self.Q = np.diag([50, 50, 1, 1]) # State cost matrix
32 | self.R = np.array([0.5, 50.0]) # Input cost matrix
33 | elif self.robot_spec['model'] == 'Quad2D':
34 | self.Q = np.diag([25, 25, 50, 10, 10, 50])
35 | self.R = np.array([0.5, 0.5])
36 | elif self.robot_spec['model'] == 'Quad3D':
37 | self.Q = np.diag([30, 30, 5, 20, 20, 1, 10, 10, 10, 20, 20, 1])
38 | self.R = np.array([1, 1, 1, 1])
39 | elif self.robot_spec['model'] == 'VTOL2D':
40 | self.horizon = 30
41 | self.Q = np.diag([10, 10, 250, 10, 10, 50])
42 | self.R = np.array([0.5, 0.5, 0.5, 50000])
43 |
44 | self.n_controls = 2
45 | self.goal = np.array([0, 0])
46 |
47 | # DT CBF parameters should scale from 0 to 1
48 | self.cbf_param = {}
49 | if self.robot_spec['model'] == 'SingleIntegrator2D':
50 | self.cbf_param['alpha'] = 0.05
51 | self.n_states = 2
52 | elif self.robot_spec['model'] == 'Unicycle2D':
53 | self.cbf_param['alpha'] = 0.05
54 | self.n_states = 3
55 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
56 | self.cbf_param['alpha1'] = 0.15
57 | self.cbf_param['alpha2'] = 0.15
58 | self.n_states = 4
59 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
60 | self.cbf_param['alpha1'] = 0.15
61 | self.cbf_param['alpha2'] = 0.15
62 | self.n_states = 4
63 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
64 | self.cbf_param['alpha1'] = 0.15
65 | self.cbf_param['alpha2'] = 0.15
66 | self.n_states = 4
67 | elif self.robot_spec['model'] == 'KinematicBicycle2D_C3BF':
68 | self.cbf_param['alpha'] = 0.15
69 | self.n_states = 4
70 | elif self.robot_spec['model'] == 'Quad2D':
71 | self.cbf_param['alpha1'] = 0.15
72 | self.cbf_param['alpha2'] = 0.15
73 | self.n_states = 6
74 | elif self.robot_spec['model'] == 'Quad3D':
75 | self.cbf_param['alpha'] = 0.15
76 | self.n_states = 12
77 | self.n_controls = 4 # override n_controls for Quad3D
78 | self.goal = np.array([0, 0, 0]) # override goal with z placeholder
79 | elif self.robot_spec['model'] == 'VTOL2D':
80 | self.cbf_param['alpha1'] = 0.35 # 0.35
81 | self.cbf_param['alpha2'] = 0.35
82 | self.n_states = 6
83 | self.n_controls = 4 # override n_controls for VTOL2D
84 |
85 |
86 | self.obs = None
87 |
88 | self.setup_control_problem()
89 |
90 | def setup_control_problem(self):
91 | self.model = self.create_model()
92 | self.mpc = self.create_mpc()
93 | self.simulator = self.create_simulator()
94 | self.estimator = do_mpc.estimator.StateFeedback(self.model)
95 |
96 | def create_model(self):
97 | model = do_mpc.model.Model('discrete')
98 |
99 | # States
100 | _x = model.set_variable(
101 | var_type='_x', var_name='x', shape=(self.n_states, 1))
102 |
103 | # Inputs
104 | _u = model.set_variable(
105 | var_type='_u', var_name='u', shape=(self.n_controls, 1))
106 |
107 | # Parameters
108 | _goal = model.set_variable(
109 | var_type='_tvp', var_name='goal', shape=(self.n_states, 1))
110 | _obs = model.set_variable(
111 | var_type='_tvp', var_name='obs', shape=(5, 5)) # (num_obs, obs_info), where [x, y, r, vx, vy]
112 |
113 | if self.robot_spec['model'] in ['SingleIntegrator2D', 'Unicycle2D', 'KinematicBicycle2D_C3BF', 'Quad3D']:
114 | _alpha = model.set_variable(
115 | var_type='_tvp', var_name='alpha', shape=(1, 1))
116 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D', 'Quad2D', 'VTOL2D']:
117 | _alpha1 = model.set_variable(
118 | var_type='_tvp', var_name='alpha1', shape=(1, 1))
119 | _alpha2 = model.set_variable(
120 | var_type='_tvp', var_name='alpha2', shape=(1, 1))
121 |
122 | # System dynamics
123 | f_x = self.robot.f_casadi(_x)
124 | g_x = self.robot.g_casadi(_x)
125 |
126 | x_next = _x + (f_x + ca.mtimes(g_x, _u)) * self.dt
127 |
128 | # Set right hand side of ODE
129 | model.set_rhs('x', x_next)
130 |
131 | # Defines the objective function wrt the state cost
132 | cost = ca.mtimes([(_x - _goal).T, self.Q, (_x - _goal)])
133 | model.set_expression(expr_name='cost', expr=cost)
134 |
135 | if self.show_mpc_traj:
136 | if self.robot_spec['model'] == 'VTOL2D':
137 | model.set_expression('u_0', _u[0])
138 | model.set_expression('u_1', _u[1])
139 | model.set_expression('u_2', _u[2])
140 | model.set_expression('u_3', _u[3]*180/3.14159)
141 | model.set_expression('v', ca.hypot(_x[3], _x[4]))
142 | model.set_expression('alpha', (_x[2] - ca.atan2(_x[4], _x[3]))*180/3.14159)
143 | else:
144 | for i in range(self.n_controls):
145 | model.set_expression('u_' + str(i), _u[i])
146 |
147 | model.setup()
148 | return model
149 |
150 | def create_mpc(self):
151 | mpc = do_mpc.controller.MPC(self.model)
152 | mpc.settings.supress_ipopt_output()
153 |
154 | setup_mpc = {
155 | 'n_horizon': self.horizon,
156 | 't_step': self.dt,
157 | 'n_robust': 0,
158 | 'state_discretization': 'discrete',
159 | 'store_full_solution': True,
160 | }
161 | mpc.set_param(**setup_mpc)
162 |
163 | # Configure objective function
164 | mterm = self.model.aux['cost'] # Terminal cost
165 | lterm = self.model.aux['cost'] # Stage cost
166 | mpc.set_objective(mterm=mterm, lterm=lterm)
167 | # Input penalty (R diagonal matrix in objective fun)
168 | mpc.set_rterm(u=self.R)
169 |
170 | # State and input bounds
171 | if self.robot_spec['model'] == 'SingleIntegrator2D':
172 | mpc.bounds['lower', '_u', 'u'] = np.array(
173 | [-self.robot_spec['v_max'], -self.robot_spec['v_max']])
174 | mpc.bounds['upper', '_u', 'u'] = np.array(
175 | [self.robot_spec['v_max'], self.robot_spec['v_max']])
176 | elif self.robot_spec['model'] == 'Unicycle2D':
177 | mpc.bounds['lower', '_u', 'u'] = np.array(
178 | [-self.robot_spec['v_max'], -self.robot_spec['w_max']])
179 | mpc.bounds['upper', '_u', 'u'] = np.array(
180 | [self.robot_spec['v_max'], self.robot_spec['w_max']])
181 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
182 | mpc.bounds['lower', '_x', 'x', 3] = -self.robot_spec['v_max']
183 | mpc.bounds['upper', '_x', 'x', 3] = self.robot_spec['v_max']
184 | mpc.bounds['lower', '_u', 'u'] = np.array(
185 | [-self.robot_spec['a_max'], -self.robot_spec['w_max']])
186 | mpc.bounds['upper', '_u', 'u'] = np.array(
187 | [self.robot_spec['a_max'], self.robot_spec['w_max']])
188 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
189 | mpc.bounds['lower', '_u', 'u'] = np.array(
190 | [-self.robot_spec['ax_max'], -self.robot_spec['ay_max']])
191 | mpc.bounds['upper', '_u', 'u'] = np.array(
192 | [self.robot_spec['ax_max'], self.robot_spec['ay_max']])
193 | elif self.robot_spec['model'] in ['KinematicBicycle2D', 'KinematicBicycle2D_C3BF']:
194 | mpc.bounds['lower', '_x', 'x', 3] = -self.robot_spec['v_max']
195 | mpc.bounds['upper', '_x', 'x', 3] = self.robot_spec['v_max']
196 | mpc.bounds['lower', '_u', 'u'] = np.array(
197 | [-self.robot_spec['a_max'], -self.robot_spec['beta_max']])
198 | mpc.bounds['upper', '_u', 'u'] = np.array(
199 | [self.robot_spec['a_max'], self.robot_spec['beta_max']])
200 | elif self.robot_spec['model'] == 'Quad2D':
201 | mpc.bounds['lower', '_u', 'u'] = np.array(
202 | [self.robot_spec['f_min'], self.robot_spec['f_min']])
203 | mpc.bounds['upper', '_u', 'u'] = np.array(
204 | [self.robot_spec['f_max'], self.robot_spec['f_max']])
205 | elif self.robot_spec['model'] == 'Quad3D':
206 | mpc.bounds['lower', '_u', 'u'] = np.array(
207 | [self.robot_spec['u_min'], self.robot_spec['u_min'], self.robot_spec['u_min'], self.robot_spec['u_min']])
208 | mpc.bounds['upper', '_u', 'u'] = np.array(
209 | [self.robot_spec['u_max'], self.robot_spec['u_max'], self.robot_spec['u_max'], self.robot_spec['u_max']])
210 | elif self.robot_spec['model'] == 'VTOL2D':
211 | mpc.bounds['lower', '_u', 'u'] = np.array(
212 | [self.robot_spec['throttle_min'], self.robot_spec['throttle_min'], self.robot_spec['throttle_min'], self.robot_spec['elevator_min']])
213 | mpc.bounds['upper', '_u', 'u'] = np.array(
214 | [self.robot_spec['throttle_max'], self.robot_spec['throttle_max'], self.robot_spec['throttle_max'], self.robot_spec['elevator_max']])
215 | mpc.bounds['lower', '_x', 'x', 3] = -self.robot_spec['v_max']
216 | mpc.bounds['upper', '_x', 'x', 3] = self.robot_spec['v_max']
217 | mpc.bounds['lower', '_x', 'x', 4] = -self.robot_spec['descent_speed_max']
218 | #mpc.bounds['upper', '_x', 'x', 1] = 15.0
219 | mpc.bounds['lower', '_x', 'x', 2] = -self.robot_spec['pitch_max']*3.14159/180
220 | mpc.bounds['upper', '_x', 'x', 2] = self.robot_spec['pitch_max']*3.14159/180
221 |
222 | mpc = self.set_tvp(mpc)
223 | mpc = self.set_cbf_constraint(mpc)
224 |
225 | mpc.setup()
226 | if self.show_mpc_traj and self.robot_spec['model'] == 'VTOL2D':
227 | plt.ion()
228 | self.graphics = do_mpc.graphics.Graphics(mpc.data)
229 |
230 | if self.robot_spec['model'] == 'VTOL2D':
231 | self.fig, ax = plt.subplots(self.n_controls + 2, sharex=True)
232 | ax[0].set_ylabel('d_front')
233 | ax[1].set_ylabel('d_rear')
234 | ax[2].set_ylabel('d_pusher')
235 | ax[3].set_ylabel('elevator [\u00B0]')
236 |
237 | ax[4].set_ylabel('v [m/s]')
238 | ax[5].set_ylabel('alpha [\u00B0]')
239 | self.graphics.add_line(var_type='_aux', var_name='v', axis=ax[self.n_controls])
240 | self.graphics.add_line(var_type='_aux', var_name='alpha', axis=ax[self.n_controls+1])
241 | else:
242 | raise NotImplementedError('Model not implemented for MPC traj plot')
243 |
244 | for i in range(self.n_controls):
245 | self.graphics.add_line(var_type='_aux', var_name='u_' + str(i), axis=ax[i])
246 | self.fig.align_ylabels()
247 | return mpc
248 |
249 | def set_tvp(self, mpc):
250 | # Set time-varying parameters
251 | def tvp_fun(t_now):
252 | tvp_template = mpc.get_tvp_template()
253 |
254 | # Set goal
255 | tvp_template['_tvp', :, 'goal'] = np.concatenate([self.goal, [0] * (self.n_states - self.goal.shape[0])])
256 |
257 | # Handle up to 5 obstacles (if fewer than 5, substitute dummy obstacles)
258 | if self.obs is None:
259 | # Before detecting any obstacle, set 5 dummy obstacles far away
260 | dummy_obstacles = np.tile(np.array([1000, 1000, 0, 0, 0]), (5, 1)) # 5 far away obstacles
261 | tvp_template['_tvp', :, 'obs'] = dummy_obstacles
262 | else:
263 | num_obstacles = self.obs.shape[0]
264 | if num_obstacles < 5:
265 | # Add dummy obstacles for missing ones
266 | dummy_obstacles = np.tile(np.array([1000, 1000, 0, 0, 0]), (5 - num_obstacles, 1))
267 | tvp_template['_tvp', :, 'obs'] = np.vstack([self.obs, dummy_obstacles])
268 | else:
269 | # Use the detected obstacles directly
270 | tvp_template['_tvp', :, 'obs'] = self.obs[:5, :] # Limit to 5 obstacles
271 |
272 | if self.robot_spec['model'] in ['SingleIntegrator2D', 'Unicycle2D', 'KinematicBicycle2D_C3BF', 'Quad3D']:
273 | tvp_template['_tvp', :, 'alpha'] = self.cbf_param['alpha']
274 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D', 'Quad2D', 'VTOL2D']:
275 | tvp_template['_tvp', :, 'alpha1'] = self.cbf_param['alpha1']
276 | tvp_template['_tvp', :, 'alpha2'] = self.cbf_param['alpha2']
277 |
278 | return tvp_template
279 |
280 | mpc.set_tvp_fun(tvp_fun)
281 | return mpc
282 |
283 | def set_cbf_constraint(self, mpc):
284 | _x = self.model.x['x']
285 | _u = self.model.u['u'] # Current control input [0] acc, [1] omega
286 | _obs = self.model.tvp['obs']
287 |
288 | # Add a separate constraint for each of the 5 obstacles
289 | for i in range(5):
290 | obs_i = _obs[i, :] # Select the i-th obstacle
291 | cbf_constraint = self.compute_cbf_constraint(_x, _u, obs_i)
292 | mpc.set_nl_cons(f'cbf_{i}', -cbf_constraint, ub=0)
293 |
294 | return mpc
295 |
296 | def compute_cbf_constraint(self, _x, _u, _obs):
297 | '''compute cbf constraint value
298 | We reuse this function to print the CBF constraint'''
299 |
300 | if self.robot_spec['model'] in ['SingleIntegrator2D', 'Unicycle2D', 'KinematicBicycle2D_C3BF', 'Quad3D']:
301 | _alpha = self.model.tvp['alpha']
302 | h_k, d_h = self.robot.agent_barrier_dt(_x, _u, _obs)
303 | cbf_constraint = d_h + _alpha * h_k
304 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D', 'Quad2D', 'VTOL2D']:
305 | _alpha1 = self.model.tvp['alpha1']
306 | _alpha2 = self.model.tvp['alpha2']
307 | h_k, d_h, dd_h = self.robot.agent_barrier_dt(_x, _u, _obs)
308 | cbf_constraint = dd_h + (_alpha1 + _alpha2) * \
309 | d_h + _alpha1 * _alpha2 * h_k
310 | else:
311 | raise NotImplementedError('Model not implemented')
312 |
313 | return cbf_constraint
314 |
315 | def create_simulator(self):
316 | simulator = do_mpc.simulator.Simulator(self.model)
317 | simulator.set_param(t_step=self.dt)
318 | tvp_template = simulator.get_tvp_template()
319 |
320 | def tvp_fun(t_now):
321 | return tvp_template
322 | simulator.set_tvp_fun(tvp_fun)
323 | simulator.setup()
324 | return simulator
325 |
326 | def update_tvp(self, goal, obs):
327 | # Update the tvp variables
328 | self.goal = np.array(goal)
329 |
330 | if obs is None or len(obs) == 0:
331 | # No obstacles detected, set 5 dummy obstacles far away
332 | self.obs = np.tile(np.array([1000, 1000, 0, 0, 0]), (5, 1))
333 | else:
334 | num_obstacles = len(obs)
335 | padded_obs = []
336 | for ob in obs:
337 | ob = np.array(ob)
338 | if ob.shape[0] == 3:
339 | # Pad missing velocity fields with zeros
340 | ob = np.concatenate([ob, [0.0, 0.0]])
341 | elif ob.shape[0] != 5:
342 | raise ValueError(f"Invalid obstacle format: {ob}")
343 | padded_obs.append(ob)
344 | padded_obs = np.array(padded_obs)
345 |
346 | if num_obstacles < 5:
347 | # Add dummy obstacles for missing ones
348 | dummy_obstacles = np.tile(np.array([1000, 1000, 0, 0, 0]), (5 - num_obstacles, 1))
349 | self.obs = np.vstack([padded_obs, dummy_obstacles])
350 | else:
351 | # Use the detected obstacles directly (up to 5)
352 | self.obs = padded_obs[:5]
353 |
354 | def solve_control_problem(self, robot_state, control_ref, nearest_obs):
355 | # Set initial state and reference
356 | self.mpc.x0 = robot_state
357 | self.mpc.set_initial_guess()
358 | goal = control_ref['goal']
359 | self.update_tvp(goal, nearest_obs)
360 |
361 | if control_ref['state_machine'] != 'track':
362 | # if outer loop is doing something else, just return the reference
363 | return control_ref['u_ref']
364 |
365 | # Solve MPC problem
366 | u = self.mpc.make_step(robot_state)
367 |
368 | # Update simulator and estimator
369 | y_next = self.simulator.make_step(u)
370 | x_next = self.estimator.make_step(y_next)
371 |
372 | if self.show_mpc_traj:
373 | self.graphics.plot_results()
374 | self.graphics.plot_predictions()
375 | self.graphics.reset_axes()
376 |
377 | plt.figure(self.fig.number)
378 | self.fig.canvas.draw_idle()
379 | self.fig.canvas.flush_events()
380 | plt.pause(0.001)
381 |
382 | # if nearest_obs is not None:
383 | # cbf_constraint = self.compute_cbf_constraint(
384 | # x_next, u, nearest_obs) # here use actual value, not symbolic
385 | # self.status = 'optimal' if self.mpc.optimal else 'infeasible'
386 | # print(self.mpc.opt_x_num['_x', :, 0, 0])
387 | return u
--------------------------------------------------------------------------------
/position_control/optimal_decay_cbf_qp.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cvxpy as cp
3 |
4 | class NotCompatibleError(Exception):
5 | '''
6 | Exception raised for errors when the robot model is not compatible with the controller.
7 | '''
8 |
9 | def __init__(self, message="Currently not compatible with the robot model. Only compatible with DynamicUnicycle2D now"):
10 | self.message = message
11 | super().__init__(self.message)
12 |
13 | class OptimalDecayCBFQP:
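"""
Optimal-decay CBF-QP: in addition to the control input, the decay rates of the
CBF condition (omega1, omega2) are decision variables, softly penalized for
deviating from their nominal values to guarantee point-wise feasibility of the QP.
"""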
14 | def __init__(self, robot, robot_spec):
15 | self.robot = robot
16 | self.robot_spec = robot_spec
17 | if self.robot_spec['model'] == 'DynamicUnicycle2D': # TODO: not compatible with other robot models yet
18 | self.cbf_param = {}
19 | self.cbf_param['alpha1'] = 0.5
20 | self.cbf_param['alpha2'] = 0.5
21 | self.cbf_param['omega1'] = 1.0 # Initial omega
22 | self.cbf_param['p_sb1'] = 10**4 # Penalty parameter for soft decay
23 | self.cbf_param['omega2'] = 1.0 # Initial omega
24 | self.cbf_param['p_sb2'] = 10**4 # Penalty parameter for soft decay
25 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
26 | self.cbf_param = {}
27 | self.cbf_param['alpha1'] = 0.5
28 | self.cbf_param['alpha2'] = 0.5
29 | self.cbf_param['omega1'] = 1.0 # Initial omega
30 | self.cbf_param['p_sb1'] = 10**4 # Penalty parameter for soft decay
31 | self.cbf_param['omega2'] = 1.0 # Initial omega
32 | self.cbf_param['p_sb2'] = 10**4 # Penalty parameter for soft decay
33 | else:
34 | raise NotCompatibleError("Infeasible or Collision")
35 |
36 | self.setup_control_problem()
37 |
38 | def setup_control_problem(self):
39 | self.u = cp.Variable((2, 1))
40 | self.u_ref = cp.Parameter((2, 1), value=np.zeros((2, 1)))
41 | self.omega1 = cp.Variable((1, 1)) # Optimal-decay parameter
42 | self.omega2 = cp.Variable((1, 1)) # Optimal-decay parameter
43 | self.A1 = cp.Parameter((1, 2), value=np.zeros((1, 2)))
44 | self.b1 = cp.Parameter((1, 1), value=np.zeros((1, 1)))
45 | self.h = cp.Parameter((1, 1), value=np.zeros((1, 1)))
46 | self.h_dot = cp.Parameter((1, 1), value=np.zeros((1, 1)))
47 |
48 | objective = cp.Minimize(cp.sum_squares(self.u - self.u_ref)
49 | + self.cbf_param['p_sb1'] * cp.square(self.omega1 - self.cbf_param['omega1'])
50 | + self.cbf_param['p_sb2'] * cp.square(self.omega2 - self.cbf_param['omega2']))
51 |
52 | if self.robot_spec['model'] == 'DynamicUnicycle2D':
53 | constraints = [
54 | self.A1 @ self.u + self.b1 +
55 | (self.cbf_param['alpha1'] + self.cbf_param['alpha2'])* self.omega1 @ self.h_dot +
56 | self.cbf_param['alpha1'] * self.cbf_param['alpha2'] * self.h @ self.omega2 >= 0,
57 | cp.abs(self.u[0]) <= self.robot_spec['a_max'],
58 | cp.abs(self.u[1]) <= self.robot_spec['w_max'],
59 | ]
60 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
61 | constraints = [
62 | self.A1 @ self.u + self.b1 +
63 | (self.cbf_param['alpha1'] + self.cbf_param['alpha2'])* self.omega1 @ self.h_dot +
64 | self.cbf_param['alpha1'] * self.cbf_param['alpha2'] * self.h @ self.omega2 >= 0,
65 | cp.abs(self.u[0]) <= self.robot_spec['a_max'],
66 | cp.abs(self.u[1]) <= self.robot_spec['beta_max'],
67 | ]
68 |
69 | self.cbf_controller = cp.Problem(objective, constraints)
70 |
71 | def solve_control_problem(self, robot_state, control_ref, nearest_obs):
72 | # Update the CBF constraints
73 | if nearest_obs is None:
74 | self.A1.value = np.zeros_like(self.A1.value)
75 | self.b1.value = np.zeros_like(self.b1.value)
76 | self.h.value = np.zeros_like(self.h.value)
77 | self.h_dot.value = np.zeros_like(self.h_dot.value)
78 | else:
79 | h, h_dot, dh_dot_dx = self.robot.agent_barrier(nearest_obs)
80 | self.A1.value[0,:] = dh_dot_dx @ self.robot.g()
81 | self.b1.value[0,:] = dh_dot_dx @ self.robot.f()
82 | self.h.value[0,:] = h
83 | self.h_dot.value[0,:] = h_dot
84 |
85 | # print(self.omega1.value, self.omega2.value)
86 |
87 | self.u_ref.value = control_ref['u_ref']
88 |
89 | # Solve the optimization problem
90 | self.cbf_controller.solve(solver=cp.GUROBI)
91 | self.status = self.cbf_controller.status
92 |
93 | return self.u.value
--------------------------------------------------------------------------------
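To make the optimal-decay structure above concrete, here is a minimal standalone sketch of the same QP with plain numbers in place of the robot model: the extra variables `omega1`/`omega2` scale the class-K terms and are pulled toward 1.0 by the penalties `p_sb1`/`p_sb2`, so the CBF constraint is relaxed only when the fixed-gain version would be infeasible. All numbers are illustrative, and CVXPY's default solver is used instead of GUROBI.

```python
import numpy as np
import cvxpy as cp

# Illustrative stand-ins for dh_dot_dx @ g, dh_dot_dx @ f, h, and h_dot
A1 = np.array([[0.8, -0.2]])
b1 = np.array([[-0.8]])
h, h_dot = 0.4, -0.3
alpha1, alpha2 = 0.5, 0.5
p_sb1 = p_sb2 = 1e4
u_ref = np.array([[0.3], [0.0]])

u = cp.Variable((2, 1))
omega1, omega2 = cp.Variable(), cp.Variable()

objective = cp.Minimize(cp.sum_squares(u - u_ref)
                        + p_sb1 * cp.square(omega1 - 1.0)
                        + p_sb2 * cp.square(omega2 - 1.0))
constraints = [
    A1 @ u + b1
    + (alpha1 + alpha2) * h_dot * omega1
    + alpha1 * alpha2 * h * omega2 >= 0,   # relaxed second-order CBF condition
    cp.abs(u[0]) <= 0.5,                   # a_max
    cp.abs(u[1]) <= 0.5,                   # w_max
]
cp.Problem(objective, constraints).solve()
print(u.value, omega1.value, omega2.value)  # omegas deviate from 1 only when needed
```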
/position_control/optimal_decay_mpc_cbf.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 | import do_mpc
4 | import copy
5 |
6 | class NotCompatibleError(Exception):
7 | '''
8 | Exception raised for errors when the robot model is not compatible with the controller.
9 | '''
10 |
11 |     def __init__(self, message="Currently not compatible with the robot model. Only compatible with DynamicUnicycle2D and KinematicBicycle2D for now"):
12 | self.message = message
13 | super().__init__(self.message)
14 |
15 | class OptimalDecayMPCCBF:
16 | def __init__(self, robot, robot_spec):
17 | self.robot = robot
18 | self.robot_spec = robot_spec
19 | if self.robot_spec['model'] not in ['DynamicUnicycle2D', 'KinematicBicycle2D']: # TODO: not compatible with other robot models yet
20 |             raise NotCompatibleError()  # use the default model-compatibility message
21 | self.status = 'optimal' # TODO: not implemented
22 |
23 | # MPC parameters
24 | self.horizon = 10
25 | self.dt = robot.dt
26 |
27 | # Cost function weights
28 | if self.robot_spec['model'] == 'Unicycle2D':
29 | self.Q = np.diag([50, 50, 0.01]) # State cost matrix
30 | self.R = np.array([0.5, 0.5]) # Input cost matrix
31 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
32 | self.Q = np.diag([50, 50, 0.01, 30]) # State cost matrix
33 | self.R = np.array([0.5, 0.5]) # Input cost matrix
34 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
35 | self.Q = np.diag([50, 50, 20, 20]) # State cost matrix
36 | self.R = np.array([0.5, 0.5]) # Input cost matrix
37 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
38 | self.Q = np.diag([50, 50, 1, 1]) # State cost matrix
39 | self.R = np.array([0.5, 50.0]) # Input cost matrix
40 |
41 |         # DT CBF parameters; each alpha should lie between 0 and 1
42 | self.cbf_param = {}
43 | if self.robot_spec['model'] == 'Unicycle2D':
44 | self.cbf_param['alpha'] = 0.01
45 | self.n_states = 3
46 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
47 | self.cbf_param['alpha1'] = 0.01
48 | self.cbf_param['alpha2'] = 0.01
49 | self.n_states = 4
50 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
51 | self.cbf_param['alpha1'] = 0.01
52 | self.cbf_param['alpha2'] = 0.01
53 | self.n_states = 4
54 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
55 | self.cbf_param['alpha1'] = 0.05
56 | self.cbf_param['alpha2'] = 0.05
57 | self.n_states = 4
58 | self.n_controls = 2
59 |
60 | # Optimal-decay parameters
61 | self.cbf_param['omega1'] = 1.0 # Initial omega1
62 | self.cbf_param['p_sb1'] = 10**1 # Penalty parameter for omega1
63 | self.cbf_param['omega2'] = 1.0 # Initial omega2
64 | self.cbf_param['p_sb2'] = 10**1 # Penalty parameter for omega2
65 | self.omega1 = None
66 | self.omega2 = None
67 |
68 | self.goal = np.array([0, 0])
69 | self.obs = None
70 |
71 | self.setup_control_problem()
72 |
73 | def setup_control_problem(self):
74 | self.model = self.create_model()
75 | self.mpc = self.create_mpc()
76 | self.simulator = self.create_simulator()
77 | self.estimator = do_mpc.estimator.StateFeedback(self.model)
78 |
79 | def create_model(self):
80 | model = do_mpc.model.Model('discrete')
81 |
82 | # States
83 | _x = model.set_variable(
84 | var_type='_x', var_name='x', shape=(self.n_states, 1))
85 |
86 | # Inputs
87 | _u = model.set_variable(
88 | var_type='_u', var_name='u', shape=(self.n_controls, 1))
89 |
90 | # Parameters
91 | _goal = model.set_variable(
92 | var_type='_tvp', var_name='goal', shape=(self.n_states, 1))
93 | _obs = model.set_variable(
94 | var_type='_tvp', var_name='obs', shape=(5, 3))
95 |
96 | # Add omega parameters
97 | _omega1 = model.set_variable(var_type='_u', var_name='omega1', shape=(1, 1))
98 | _omega2 = model.set_variable(var_type='_u', var_name='omega2', shape=(1, 1))
99 |
100 | if self.robot_spec['model'] == 'Unicycle2D':
101 | _alpha = model.set_variable(
102 | var_type='_tvp', var_name='alpha', shape=(1, 1))
103 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D']:
104 | _alpha1 = model.set_variable(
105 | var_type='_tvp', var_name='alpha1', shape=(1, 1))
106 | _alpha2 = model.set_variable(
107 | var_type='_tvp', var_name='alpha2', shape=(1, 1))
108 |
109 | # System dynamics
110 | f_x = self.robot.f_casadi(_x)
111 | g_x = self.robot.g_casadi(_x)
112 | x_next = _x + (f_x + ca.mtimes(g_x, _u)) * self.dt
113 |
114 | # Set right hand side of ODE
115 | model.set_rhs('x', x_next)
116 |
117 | # Defines the objective function wrt the state cost
118 | # cost = ca.mtimes([(_x - _goal).T, self.Q, (_x - _goal)]) + \
119 | # self.cbf_param['p_sb1'] * ca.sumsqr(_omega1 - self.cbf_param['omega1']) + \
120 | # self.cbf_param['p_sb2'] * ca.sumsqr(_omega2 - self.cbf_param['omega2'])
121 | cost = ca.mtimes([(_x - _goal).T, self.Q, (_x - _goal)])
122 | model.set_expression(expr_name='cost', expr=cost)
123 |
124 | model.setup()
125 | return model
126 |
127 | def create_mpc(self):
128 | mpc = do_mpc.controller.MPC(self.model)
129 | mpc.settings.supress_ipopt_output()
130 |
131 | setup_mpc = {
132 | 'n_horizon': self.horizon,
133 | 't_step': self.dt,
134 | 'n_robust': 0,
135 | 'state_discretization': 'discrete',
136 | 'store_full_solution': True,
137 | }
138 | mpc.set_param(**setup_mpc)
139 |
140 | # Configure objective function
141 | mterm = self.model.aux['cost'] # Terminal cost
142 | lterm = self.model.aux['cost'] # Stage cost
143 | mpc.set_objective(mterm=mterm, lterm=lterm)
144 | # Input penalty (R diagonal matrix in objective fun)
145 | expression = sum(self.R[i] * self.model.u['u'][i]**2 for i in range(self.n_controls))
146 | control_input_rterm = expression
147 | omega1_rterm = self.cbf_param['p_sb1'] * (self.model.u['omega1'] - self.cbf_param['omega1'])**2
148 | omega2_rterm = self.cbf_param['p_sb2'] * (self.model.u['omega2'] - self.cbf_param['omega2'])**2
149 | mpc.set_rterm(control_input_rterm)
150 | mpc.set_rterm(omega1_rterm + omega2_rterm)
151 | # MPC.set_rterm(u=self.R)
152 |
153 |
154 | # State and input bounds
155 | if self.robot_spec['model'] == 'Unicycle2D':
156 | mpc.bounds['lower', '_u', 'u'] = np.array(
157 | [-self.robot_spec['v_max'], -self.robot_spec['w_max']])
158 | mpc.bounds['upper', '_u', 'u'] = np.array(
159 | [self.robot_spec['v_max'], self.robot_spec['w_max']])
160 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
161 | mpc.bounds['lower', '_x', 'x', 3] = -self.robot_spec['v_max']
162 | mpc.bounds['upper', '_x', 'x', 3] = self.robot_spec['v_max']
163 | mpc.bounds['lower', '_u', 'u'] = np.array(
164 | [-self.robot_spec['a_max'], -self.robot_spec['w_max']])
165 | mpc.bounds['upper', '_u', 'u'] = np.array(
166 | [self.robot_spec['a_max'], self.robot_spec['w_max']])
167 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
168 | mpc.bounds['lower', '_u', 'u'] = np.array(
169 | [-self.robot_spec['ax_max'], -self.robot_spec['ay_max']])
170 | mpc.bounds['upper', '_u', 'u'] = np.array(
171 | [self.robot_spec['ax_max'], self.robot_spec['ay_max']])
172 | elif self.robot_spec['model'] == 'KinematicBicycle2D':
173 | mpc.bounds['lower', '_x', 'x', 3] = -self.robot_spec['v_max']
174 | mpc.bounds['upper', '_x', 'x', 3] = self.robot_spec['v_max']
175 | mpc.bounds['lower', '_u', 'u'] = np.array(
176 | [-self.robot_spec['a_max'], -self.robot_spec['beta_max']])
177 | mpc.bounds['upper', '_u', 'u'] = np.array(
178 | [self.robot_spec['a_max'], self.robot_spec['beta_max']])
179 |
180 |
181 | mpc = self.set_tvp(mpc)
182 | mpc = self.set_cbf_constraint(mpc)
183 |
184 | mpc.setup()
185 | return mpc
186 |
187 | def set_tvp(self, mpc):
188 | # Set time-varying parameters
189 | def tvp_fun(t_now):
190 | tvp_template = mpc.get_tvp_template()
191 |
192 | # Set goal
193 | tvp_template['_tvp', :, 'goal'] = np.concatenate([self.goal, [0] * (self.n_states - 2)])
194 |
195 | # Handle up to 5 obstacles (if fewer than 5, substitute dummy obstacles)
196 | if self.obs is None:
197 | # Before detecting any obstacle, set 5 dummy obstacles far away
198 | dummy_obstacles = np.tile(np.array([1000, 1000, 0]), (5, 1)) # 5 far away obstacles
199 | tvp_template['_tvp', :, 'obs'] = dummy_obstacles
200 | else:
201 | num_obstacles = self.obs.shape[0]
202 | if num_obstacles < 5:
203 | # Add dummy obstacles for missing ones
204 | dummy_obstacles = np.tile(np.array([1000, 1000, 0]), (5 - num_obstacles, 1))
205 | tvp_template['_tvp', :, 'obs'] = np.vstack([self.obs, dummy_obstacles])
206 | else:
207 | # Use the detected obstacles directly
208 | tvp_template['_tvp', :, 'obs'] = self.obs[:5, :] # Limit to 5 obstacles
209 |
210 | if self.robot_spec['model'] == 'Unicycle2D':
211 | tvp_template['_tvp', :, 'alpha'] = self.cbf_param['alpha']
212 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D']:
213 | tvp_template['_tvp', :, 'alpha1'] = self.cbf_param['alpha1']
214 | tvp_template['_tvp', :, 'alpha2'] = self.cbf_param['alpha2']
215 |
216 | return tvp_template
217 |
218 | mpc.set_tvp_fun(tvp_fun)
219 | return mpc
220 |
221 | def set_cbf_constraint(self, mpc):
222 | _x = self.model.x['x']
223 | _u = self.model.u['u'] # Current control input [0] acc, [1] omega
224 | _obs = self.model.tvp['obs']
225 |
226 | # Add a separate constraint for each of the 5 obstacles
227 | for i in range(5):
228 | obs_i = _obs[i, :] # Select the i-th obstacle
229 | cbf_constraint = self.compute_cbf_constraint(_x, _u, obs_i)
230 | mpc.set_nl_cons(f'cbf_{i}', -cbf_constraint, ub=0)
231 |
232 | return mpc
233 |
234 | def compute_cbf_constraint(self, _x, _u, _obs):
235 | '''compute cbf constraint value
236 | We reuse this function to print the CBF constraint'''
237 | if self.robot_spec['model'] == 'Unicycle2D':
238 | _alpha = self.model.tvp['alpha']
239 | h_k, d_h = self.robot.agent_barrier_dt(_x, _u, _obs)
240 | cbf_constraint = d_h + _alpha * h_k
241 | elif self.robot_spec['model'] in ['DynamicUnicycle2D', 'DoubleIntegrator2D', 'KinematicBicycle2D']:
242 | _alpha1 = self.model.tvp['alpha1']
243 | _alpha2 = self.model.tvp['alpha2']
244 | omega1 = self.model.u['omega1']
245 | omega2 = self.model.u['omega2']
246 | h_k, d_h, dd_h = self.robot.agent_barrier_dt(_x, _u, _obs)
247 | cbf_constraint = (dd_h + (_alpha1 * omega1 + _alpha2 * omega2) * d_h
248 | + _alpha1 * _alpha2 * h_k * omega1 * omega2)
249 | else:
250 | raise NotImplementedError('Model not implemented')
251 | return cbf_constraint
252 |
253 | def create_simulator(self):
254 | simulator = do_mpc.simulator.Simulator(self.model)
255 | simulator.set_param(t_step=self.dt)
256 | tvp_template = simulator.get_tvp_template()
257 |
258 | def tvp_fun(t_now):
259 | return tvp_template
260 | simulator.set_tvp_fun(tvp_fun)
261 | simulator.setup()
262 | return simulator
263 |
264 | def update_tvp(self, goal, obs):
265 | # Update the tvp variables
266 | self.goal = np.array(goal)
267 |
268 |         if obs is None or len(obs) == 0:
269 |             # No obstacles detected, set 5 dummy obstacles far away
270 |             self.obs = np.tile(np.array([1000, 1000, 0]), (5, 1))
271 |         else:
272 |             # Reshape only after confirming that obstacles exist
273 |             obs = np.array(obs).reshape(-1, 3)
274 | num_obstacles = len(obs)
275 | if num_obstacles < 5:
276 | # Add dummy obstacles for missing ones
277 | dummy_obstacles = np.tile(np.array([1000, 1000, 0]), (5 - num_obstacles, 1))
278 | self.obs = np.vstack([obs, dummy_obstacles])
279 | else:
280 | # Use the detected obstacles directly (up to 5)
281 | self.obs = np.array(obs[:5])
282 |
283 | def solve_control_problem(self, robot_state, control_ref, nearest_obs):
284 | # Set initial state and reference
285 | self.mpc.x0 = robot_state
286 | self.mpc.set_initial_guess()
287 | goal = control_ref['goal']
288 | self.update_tvp(goal, nearest_obs)
289 |
290 | if control_ref['state_machine'] != 'track':
291 | # if outer loop is doing something else, just return the reference
292 | return control_ref['u_ref']
293 |
294 | # Solve MPC problem
295 | u = self.mpc.make_step(robot_state)
296 |
297 | # Update simulator and estimator
298 | y_next = self.simulator.make_step(u)
299 | x_next = self.estimator.make_step(y_next)
300 |
301 | # omega1 = float(self.mpc.opt_x_num['_u', 0, 0][2])
302 | # omega2 = float(self.mpc.opt_x_num['_u', 0, 0][3])
303 | # # print(f"Omega1: {omega1}, Omega2: {omega2}")
304 | # print(self.mpc.opt_x_num['_u', :, 0])
305 |
306 | # if nearest_obs is not None:
307 | # cbf_constraint = self.compute_cbf_constraint(
308 | # x_next, u, nearest_obs) # here use actual value, not symbolic
309 | # self.status = 'optimal' if self.mpc.optimal else 'infeasible'
310 |
311 | # print(self.mpc.opt_x_num['_u', :, 0])
312 | # cost_value = self.mpc.opt_aux_num['_aux'][0] # Access the 'cost' expression
313 | # print(f"Cost: {cost_value}")
314 |
315 | return u[:self.n_controls]
--------------------------------------------------------------------------------
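The symbolic expression assembled in `compute_cbf_constraint` reduces to a simple scalar formula of the quantities returned by `agent_barrier_dt`; a plain-Python restatement (with hypothetical barrier values) may help when checking the constraint's sign:

```python
def optimal_decay_dt_hocbf(h_k, d_h, dd_h, alpha1, alpha2, omega1=1.0, omega2=1.0):
    """Second-order discrete-time HOCBF with optimal-decay weights omega1, omega2.

    The MPC enforces this value >= 0 at every step; with omega1 = omega2 = 1
    it reduces to the fixed-alpha constraint of the plain MPC-CBF controller.
    """
    return (dd_h + (alpha1 * omega1 + alpha2 * omega2) * d_h
            + alpha1 * alpha2 * h_k * omega1 * omega2)

# Hypothetical barrier values for a single step
print(optimal_decay_dt_hocbf(h_k=0.8, d_h=-0.05, dd_h=0.01, alpha1=0.05, alpha2=0.05))
```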
/robots/double_integrator2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | """
5 | Created on July 15th, 2024
6 | @author: Taekyung Kim
7 |
8 | @description:
9 | Double Integrator model for CBF-QP and MPC-CBF (casadi) with separated position and attitude states
10 | """
11 |
12 |
13 | def angle_normalize(x):
14 | if isinstance(x, (np.ndarray, float, int)):
15 | # NumPy implementation
16 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
17 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
18 | # CasADi implementation
19 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
20 | else:
21 | raise TypeError(f"Unsupported input type: {type(x)}")
22 |
23 |
24 | class DoubleIntegrator2D:
25 |
26 | def __init__(self, dt, robot_spec):
27 | '''
28 | X: [x, y, vx, vy]
29 | theta: yaw angle
30 | U: [ax, ay]
31 | U_attitude: [yaw_rate]
32 | cbf: h(x) = ||x-x_obs||^2 - beta*d_min^2
33 | relative degree: 2
34 | '''
35 | self.dt = dt
36 | self.robot_spec = robot_spec
37 |
38 | self.robot_spec.setdefault('a_max', 1.0)
39 | self.robot_spec.setdefault('v_max', 1.0)
40 | self.robot_spec.setdefault('ax_max', self.robot_spec['a_max'])
41 | self.robot_spec.setdefault('ay_max', self.robot_spec['a_max'])
42 | self.robot_spec.setdefault('w_max', 0.5)
43 |
44 | def f(self, X, casadi=False):
45 | if casadi:
46 | return ca.vertcat(
47 | X[2, 0],
48 | X[3, 0],
49 | 0,
50 | 0
51 | )
52 | else:
53 | return np.array([X[2, 0],
54 | X[3, 0],
55 | 0,
56 | 0]).reshape(-1, 1)
57 |
58 | def df_dx(self, X):
59 | return np.array([
60 | [0, 0, 1, 0],
61 | [0, 0, 0, 1],
62 | [0, 0, 0, 0],
63 | [0, 0, 0, 0]
64 | ])
65 |
66 | def g(self, X, casadi=False):
67 | if casadi:
68 | return ca.DM([
69 | [0, 0],
70 | [0, 0],
71 | [1, 0],
72 | [0, 1]
73 | ])
74 | else:
75 | return np.array([[0, 0], [0, 0], [1, 0], [0, 1]])
76 |
77 | def step(self, X, U):
78 | X = X + (self.f(X) + self.g(X) @ U) * self.dt
79 | return X
80 |
81 | def step_rotate(self, theta, U_attitude):
82 | theta = angle_normalize(theta + U_attitude[0, 0] * self.dt)
83 | return theta
84 |
85 | def nominal_input(self, X, G, d_min=0.05, k_v=1.0, k_a=1.0):
86 | '''
87 | nominal input for CBF-QP (position control)
88 | '''
89 | G = np.copy(G.reshape(-1, 1)) # goal state
90 | v_max = self.robot_spec['v_max'] # Maximum velocity (x+y)
91 | a_max = self.robot_spec['a_max'] # Maximum acceleration
92 |
93 | pos_errors = G[0:2, 0] - X[0:2, 0]
94 | pos_errors = np.sign(pos_errors) * \
95 | np.maximum(np.abs(pos_errors) - d_min, 0.0)
96 |
97 | # Compute desired velocities for x and y
98 | v_des = k_v * pos_errors
99 | v_mag = np.linalg.norm(v_des)
100 | if v_mag > v_max:
101 | v_des = v_des * v_max / v_mag
102 |
103 | # Compute accelerations
104 | current_v = X[2:4, 0]
105 | a = k_a * (v_des - current_v)
106 | a_mag = np.linalg.norm(a)
107 | if a_mag > a_max:
108 | a = a * a_max / a_mag
109 |
110 | return a.reshape(-1, 1)
111 |
112 | def nominal_attitude_input(self, theta, theta_des, k_theta=1.0):
113 | '''
114 | nominal input for attitude control
115 | '''
116 | error_theta = angle_normalize(theta_des - theta)
117 | yaw_rate = k_theta * error_theta
118 | return np.array([yaw_rate]).reshape(-1, 1)
119 |
120 | def stop(self, X, k_a=1.0):
121 | # Set desired velocity to zero
122 | vx_des, vy_des = 0.0, 0.0
123 | ax = k_a * (vx_des - X[2, 0])
124 | ay = k_a * (vy_des - X[3, 0])
125 | return np.array([ax, ay]).reshape(-1, 1)
126 |
127 | def has_stopped(self, X, tol=0.05):
128 | return np.linalg.norm(X[2:4, 0]) < tol
129 |
130 | def rotate_to(self, theta, theta_des, k_omega=2.0):
131 | error_theta = angle_normalize(theta_des - theta)
132 | yaw_rate = k_omega * error_theta
133 | yaw_rate = np.clip(yaw_rate, -self.robot_spec['w_max'], self.robot_spec['w_max'])
134 | return np.array([yaw_rate]).reshape(-1, 1)
135 |
136 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
137 | '''Continuous Time High Order CBF'''
138 | obsX = obs[0:2]
139 | d_min = obs[2][0] + robot_radius # obs radius + robot radius
140 |
141 | h = np.linalg.norm(X[0:2] - obsX[0:2])**2 - beta*d_min**2
142 | # Lgh is zero => relative degree is 2, f(x)[0:2] actually equals to X[2:4]
143 | h_dot = 2 * (X[0:2] - obsX[0:2]).T @ (self.f(X)[0:2])
144 |
145 | # these two options are the same
146 | # df_dx = self.df_dx(X)
147 | # dh_dot_dx = np.append( ( 2 * self.f(X)[0:2] ).T, np.array([[0,0]]), axis = 1 ) + 2 * ( X[0:2] - obsX[0:2] ).T @ df_dx[0:2,:]
148 | dh_dot_dx = np.append(2 * X[2:4].T, 2 * (X[0:2] - obsX[0:2]).T, axis=1)
149 | return h, h_dot, dh_dot_dx
150 |
151 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
152 | '''Discrete Time High Order CBF'''
153 | # Dynamics equations for the next states
154 | x_k1 = self.step(x_k, u_k)
155 | x_k2 = self.step(x_k1, u_k)
156 |
157 | def h(x, obs, robot_radius, beta=1.01):
158 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
159 | x_obs = obs[0]
160 | y_obs = obs[1]
161 | r_obs = obs[2]
162 | d_min = robot_radius + r_obs
163 |
164 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
165 | return h
166 |
167 | h_k2 = h(x_k2, obs, robot_radius, beta)
168 | h_k1 = h(x_k1, obs, robot_radius, beta)
169 | h_k = h(x_k, obs, robot_radius, beta)
170 |
171 | d_h = h_k1 - h_k
172 | dd_h = h_k2 - 2 * h_k1 + h_k
173 | # hocbf_2nd_order = h_ddot + (gamma1 + gamma2) * h_dot + (gamma1 * gamma2) * h_k
174 |
175 | return h_k, d_h, dd_h
--------------------------------------------------------------------------------
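A minimal standalone sketch of driving this model toward a goal and evaluating its barrier, assuming the repository root is on the Python path; the time step, goal, and obstacle values are made up for illustration:

```python
import numpy as np
from robots.double_integrator2D import DoubleIntegrator2D

robot = DoubleIntegrator2D(dt=0.05, robot_spec={})   # limits filled in by setdefault
X = np.zeros((4, 1))                                 # [x, y, vx, vy]
goal = np.array([2.0, 1.0])
obs = np.array([1.0, 0.5, 0.3]).reshape(-1, 1)       # [x_obs, y_obs, r_obs]

for _ in range(200):
    U = robot.nominal_input(X, goal)                 # saturated PD acceleration toward the goal
    X = robot.step(X, U)

h, h_dot, dh_dot_dx = robot.agent_barrier(X, obs, robot_radius=0.25)
print(X[:2, 0], float(h))                            # final position and barrier value
```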
/robots/dynamic_unicycle2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | """
5 | Created on July 11th, 2024
6 | @author: Taekyung Kim
7 |
8 | @description:
9 | Dynamic unicycle model for CBF-QP and MPC-CBF (casadi)
10 | """
11 |
12 |
13 | def angle_normalize(x):
14 | if isinstance(x, (np.ndarray, float, int)):
15 | # NumPy implementation
16 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
17 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
18 | # CasADi implementation
19 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
20 | else:
21 | raise TypeError(f"Unsupported input type: {type(x)}")
22 |
23 |
24 | class DynamicUnicycle2D:
25 |
26 | def __init__(self, dt, robot_spec):
27 | '''
28 | X: [x, y, theta, v]
29 | U: [a, omega]
30 | cbf: h(x) = ||x-x_obs||^2 - beta*d_min^2
31 | relative degree: 2
32 | '''
33 | self.dt = dt
34 | self.robot_spec = robot_spec
35 |
36 | self.robot_spec.setdefault('a_max', 0.5)
37 | self.robot_spec.setdefault('w_max', 0.5)
38 | self.robot_spec.setdefault('v_max', 1.0)
39 |
40 | def f(self, X, casadi=False):
41 | if casadi:
42 | return ca.vertcat(
43 | X[3, 0] * ca.cos(X[2, 0]),
44 | X[3, 0] * ca.sin(X[2, 0]),
45 | 0,
46 | 0
47 | )
48 | else:
49 | return np.array([X[3, 0]*np.cos(X[2, 0]),
50 | X[3, 0]*np.sin(X[2, 0]),
51 | 0,
52 | 0]).reshape(-1, 1)
53 |
54 | def df_dx(self, X):
55 | return np.array([
56 | [0, 0, -X[3, 0]*np.sin(X[2, 0]), np.cos(X[2, 0])],
57 | [0, 0, X[3, 0]*np.cos(X[2, 0]), np.sin(X[2, 0])],
58 | [0, 0, 0, 0],
59 | [0, 0, 0, 0]
60 | ])
61 |
62 | def g(self, X, casadi=False):
63 | if casadi:
64 | return ca.DM([
65 | [0, 0],
66 | [0, 0],
67 | [0, 1],
68 | [1, 0]
69 | ])
70 | else:
71 | return np.array([[0, 0], [0, 0], [0, 1], [1, 0]])
72 |
73 | def step(self, X, U):
74 | X = X + (self.f(X) + self.g(X) @ U)*self.dt
75 | X[2, 0] = angle_normalize(X[2, 0])
76 | return X
77 |
78 | def nominal_input(self, X, G, d_min=0.05, k_omega=2.0, k_a=1.0, k_v=1.0):
79 | '''
80 | nominal input for CBF-QP
81 | '''
82 | G = np.copy(G.reshape(-1, 1)) # goal state
83 | v_max = self.robot_spec['v_max']
84 |
85 | # don't need a min dist since it has accel
86 | distance = max(np.linalg.norm(X[0:2, 0]-G[0:2, 0]) - d_min, 0.0)
87 | theta_d = np.arctan2(G[1, 0]-X[1, 0], G[0, 0]-X[0, 0])
88 | error_theta = angle_normalize(theta_d - X[2, 0])
89 |
90 | omega = k_omega * error_theta
91 | if abs(error_theta) > np.deg2rad(90):
92 | v = 0.0
93 | else:
94 | v = min(k_v * distance * np.cos(error_theta), v_max)
95 | # print("distance: ", distance, "v: ", v, "error_theta: ", error_theta)
96 |
97 | accel = k_a * (v - X[3, 0])
98 | # print(f"CBF nominal acc: {accel}, omega:{omega}")
99 | return np.array([accel, omega]).reshape(-1, 1)
100 |
101 | def stop(self, X, k_a=1.0):
102 | # set desired velocity to zero
103 | v = 0.0
104 | accel = k_a * (v - X[3, 0])
105 | return np.array([accel, 0]).reshape(-1, 1)
106 |
107 | def has_stopped(self, X, tol=0.05):
108 | return np.linalg.norm(X[3, 0]) < tol
109 |
110 | def rotate_to(self, X, theta_des, k_omega=2.0):
111 | error_theta = angle_normalize(theta_des - X[2, 0])
112 | omega = k_omega * error_theta
113 | return np.array([0.0, omega]).reshape(-1, 1)
114 |
115 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
116 | '''Continuous Time High Order CBF'''
117 | obsX = obs[0:2]
118 | d_min = obs[2][0] + robot_radius # obs radius + robot radius
119 |
120 | h = np.linalg.norm(X[0:2] - obsX[0:2])**2 - beta*d_min**2
121 | # Lgh is zero => relative degree is 2
122 | h_dot = 2 * (X[0:2] - obsX[0:2]).T @ (self.f(X)[0:2])
123 |
124 | df_dx = self.df_dx(X)
125 | dh_dot_dx = np.append((2 * self.f(X)[0:2]).T, np.array(
126 | [[0, 0]]), axis=1) + 2 * (X[0:2] - obsX[0:2]).T @ df_dx[0:2, :]
127 | return h, h_dot, dh_dot_dx
128 |
129 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
130 | '''Discrete Time High Order CBF'''
131 | # Dynamics equations for the next states
132 | x_k1 = self.step(x_k, u_k)
133 | x_k2 = self.step(x_k1, u_k)
134 |
135 | def h(x, obs, robot_radius, beta=1.01):
136 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
137 | x_obs = obs[0]
138 | y_obs = obs[1]
139 | r_obs = obs[2]
140 | d_min = robot_radius + r_obs
141 |
142 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
143 | return h
144 |
145 | h_k2 = h(x_k2, obs, robot_radius, beta)
146 | h_k1 = h(x_k1, obs, robot_radius, beta)
147 | h_k = h(x_k, obs, robot_radius, beta)
148 |
149 | d_h = h_k1 - h_k
150 | dd_h = h_k2 - 2 * h_k1 + h_k
151 | # hocbf_2nd_order = h_ddot + (gamma1 + gamma2) * h_dot + (gamma1 * gamma2) * h_k
152 |
153 | return h_k, d_h, dd_h
--------------------------------------------------------------------------------
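To see how this model plugs into the position controllers, the sketch below evaluates the continuous-time barrier terms from the class above and forms the affine-in-u pieces `A1 = dh_dot_dx @ g(X)` and `b1 = dh_dot_dx @ f(X)`, mirroring how `OptimalDecayCBFQP.solve_control_problem` fills `A1` and `b1`; the state, obstacle, and gains are illustrative.

```python
import numpy as np
from robots.dynamic_unicycle2D import DynamicUnicycle2D

robot = DynamicUnicycle2D(dt=0.05, robot_spec={})
X = np.array([0.0, 0.0, 0.0, 0.8]).reshape(-1, 1)    # [x, y, theta, v]
obs = np.array([1.5, 0.2, 0.3]).reshape(-1, 1)       # [x_obs, y_obs, r_obs]
alpha1, alpha2 = 0.5, 0.5

h, h_dot, dh_dot_dx = robot.agent_barrier(X, obs, robot_radius=0.25)
A1 = dh_dot_dx @ robot.g(X)   # multiplies the input u = [a, omega]
b1 = dh_dot_dx @ robot.f(X)   # drift contribution
# Second-order CBF condition on u:
#   A1 @ u + b1 + (alpha1 + alpha2) * h_dot + alpha1 * alpha2 * h >= 0
print(A1, b1, h, h_dot)
```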
/robots/kinematic_bicycle2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | import matplotlib.pyplot as plt
5 | from matplotlib.transforms import Affine2D
6 |
7 | """
8 | Simplified kinematic bicycle model for CBF-QP and MPC-CBF (casadi)
9 | https://arxiv.org/abs/2403.07043
10 | """
11 |
12 | def angle_normalize(x):
13 | if isinstance(x, (np.ndarray, float, int)):
14 | # NumPy implementation
15 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
16 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
17 | # CasADi implementation
18 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
19 | else:
20 | raise TypeError(f"Unsupported input type: {type(x)}")
21 |
22 |
23 | class KinematicBicycle2D:
24 | def __init__(self, dt, robot_spec):
25 | '''
26 | X: [x, y, theta, v]
27 | U: [a, β] (acceleration, slip angle as control input)
28 | cbf: h(x) = ||x-x_obs||^2 - β*d_min^2
29 | relative degree: 2
30 |
31 | Equations:
32 | β = arctan((L_r / L) * tan(δ)) (Slip angle)
33 |
34 | x_dot = v * cos(theta) - v * sin(theta) * β
35 | y_dot = v * sin(theta) + v * cos(theta) * β
36 | theta_dot = (v / L_r) * β
37 | v_dot = a
38 | '''
39 | self.dt = dt
40 | self.robot_spec = robot_spec
41 |
42 | self.robot_spec.setdefault('wheel_base', 0.5)
43 | self.robot_spec.setdefault('body_width', 0.3)
44 | self.robot_spec.setdefault('radius', 0.5)
45 | self.robot_spec.setdefault('front_ax_dist', 0.2)
46 | self.robot_spec.setdefault('rear_ax_dist', 0.3)
47 | self.robot_spec.setdefault('v_max', 1.0)
48 | self.robot_spec.setdefault('a_max', 0.5)
49 | self.robot_spec.setdefault('delta_max', np.deg2rad(30))
50 | self.robot_spec.setdefault('beta_max', self.beta(self.robot_spec['delta_max']))
51 |
52 | def beta(self, delta):
53 | # Computes the slip angle beta
54 | L_r = self.robot_spec['rear_ax_dist']
55 | L = self.robot_spec['wheel_base']
56 | return np.arctan((L_r / L) * np.tan(delta))
57 |
58 | def beta_to_delta(self, beta):
59 | # Map slip angle beta to steering angle delta
60 | L_r = self.robot_spec['rear_ax_dist']
61 | L = self.robot_spec['wheel_base']
62 | return np.arctan((L / L_r) * np.tan(beta))
63 |
64 | def df_dx(self, X):
65 | return np.array([
66 | [0, 0, -X[3, 0]*np.sin(X[2, 0]), np.cos(X[2, 0])],
67 | [0, 0, X[3, 0]*np.cos(X[2, 0]), np.sin(X[2, 0])],
68 | [0, 0, 0, 0],
69 | [0, 0, 0, 0]
70 | ])
71 |
72 | def f(self, X, casadi=False):
73 | theta = X[2, 0]
74 | v = X[3, 0]
75 | if casadi:
76 | return ca.vertcat(
77 | v * ca.cos(theta),
78 | v * ca.sin(theta),
79 | 0,
80 | 0
81 | )
82 | else:
83 | return np.array([
84 | v * np.cos(theta),
85 | v * np.sin(theta),
86 | 0,
87 | 0
88 | ]).reshape(-1, 1)
89 |
90 | def g(self, X, casadi=False):
91 | theta = X[2, 0]
92 | v = X[3, 0]
93 | L_r = self.robot_spec['rear_ax_dist']
94 | if casadi:
95 | g = ca.SX.zeros(4, 2)
96 | g[0, 1] = -v * ca.sin(theta)
97 | g[1, 1] = v * ca.cos(theta)
98 | g[2, 1] = v / L_r
99 | g[3, 0] = 1
100 | return g
101 | else:
102 | return np.array([
103 | [0, -v * np.sin(theta)],
104 | [0, v * np.cos(theta)],
105 | [0, v / L_r],
106 | [1, 0]
107 | ])
108 |
109 | def step(self, X, U, casadi=False):
110 | X = X + (self.f(X, casadi) + self.g(X, casadi) @ U) * self.dt
111 | X[2, 0] = angle_normalize(X[2, 0])
112 | return X
113 |
114 | def nominal_input(self, X, G, d_min=0.05, k_theta=0.5, k_a = 1.5, k_v=0.5):
115 | '''
116 | nominal input for CBF-QP
117 | '''
118 | G = np.copy(G.reshape(-1, 1)) # goal state
119 | v_max = self.robot_spec['v_max']
120 | delta_max = self.robot_spec['delta_max']
121 |
122 | distance = max(np.linalg.norm(X[0:2, 0] - G[0:2, 0]) - d_min, 0.05)
123 | theta_d = np.arctan2(G[1, 0] - X[1, 0], G[0, 0] - X[0, 0])
124 | error_theta = angle_normalize(theta_d - X[2, 0])
125 |
126 | # Steering angle and slip angle
127 | delta = np.clip(k_theta * error_theta, -delta_max, delta_max) # Steering angle
128 | beta = self.beta(delta) # Slip angle conversion
129 |
130 | if abs(error_theta) > np.deg2rad(90):
131 | v = 0.0
132 | else:
133 | v = min(k_v * distance * np.cos(error_theta), v_max)
134 |
135 | a = k_a * (v - X[3, 0])
136 | return np.array([a, beta]).reshape(-1, 1)
137 |
138 | def stop(self, X):
139 | return np.array([0, 0]).reshape(-1, 1)
140 |
141 | def has_stopped(self, X, tol=0.05):
142 | return abs(X[3, 0]) < tol
143 |
144 | def rotate_to(self, X, theta_des, k_theta=2.0):
145 | error_theta = angle_normalize(theta_des - X[2, 0])
146 | beta = k_theta * error_theta
147 | return np.array([0.0, beta]).reshape(-1, 1)
148 |
149 | def agent_barrier(self, X, obs, robot_radius, beta=1.1):
150 | '''Continuous Time High Order CBF'''
151 | obsX = obs[0:2]
152 | d_min = obs[2][0] + robot_radius # obs radius + robot radius
153 |
154 | h = np.linalg.norm(X[0:2] - obsX[0:2])**2 - beta*d_min**2
155 | # Lgh is zero => relative degree is 2
156 | h_dot = 2 * (X[0:2] - obsX[0:2]).T @ (self.f(X)[0:2])
157 |
158 | df_dx = self.df_dx(X)
159 | dh_dot_dx = np.append((2 * self.f(X)[0:2]).T, np.array(
160 | [[0, 0]]), axis=1) + 2 * (X[0:2] - obsX[0:2]).T @ df_dx[0:2, :]
161 | return h, h_dot, dh_dot_dx
162 |
163 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.1):
164 | '''Discrete Time High Order CBF'''
165 | # Dynamics equations for the next states
166 | x_k1 = self.step(x_k, u_k, casadi=True)
167 | x_k2 = self.step(x_k1, u_k, casadi=True)
168 |
169 | def h(x, obs, robot_radius, beta=1.25):
170 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
171 | x_obs = obs[0]
172 | y_obs = obs[1]
173 | r_obs = obs[2]
174 | d_min = robot_radius + r_obs
175 |
176 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
177 | return h
178 |
179 | h_k2 = h(x_k2, obs, robot_radius, beta)
180 | h_k1 = h(x_k1, obs, robot_radius, beta)
181 | h_k = h(x_k, obs, robot_radius, beta)
182 |
183 | d_h = h_k1 - h_k
184 | dd_h = h_k2 - 2 * h_k1 + h_k
185 | # hocbf_2nd_order = h_ddot + (gamma1 + gamma2) * h_dot + (gamma1 * gamma2) * h_k
186 |
187 | return h_k, d_h, dd_h
188 |
189 | def render_rigid_body(self, X, U):
190 | '''
191 | Return the materials to render the rigid body
192 | '''
193 | x, y, theta, v = X.flatten()
194 | beta = U[1, 0] # Steering angle control input
195 | delta = self.beta_to_delta(beta)
196 |
197 | # Update vehicle body
198 | transform_body = Affine2D().rotate(theta).translate(x, y) + plt.gca().transData
199 |
200 | # Calculate axle positions
201 | rear_axle_x = x - self.robot_spec['rear_ax_dist'] * np.cos(theta)
202 | rear_axle_y = y - self.robot_spec['rear_ax_dist'] * np.sin(theta)
203 | front_axle_x = x + self.robot_spec['front_ax_dist'] * np.cos(theta)
204 | front_axle_y = y + self.robot_spec['front_ax_dist'] * np.sin(theta)
205 |
206 | # Update rear wheel (aligned with vehicle orientation)
207 | transform_rear = (Affine2D()
208 | .rotate(theta)
209 | .translate(rear_axle_x, rear_axle_y) + plt.gca().transData)
210 |
211 | # Update front wheel (rotated by steering angle)
212 | transform_front = (Affine2D()
213 | .rotate(theta + delta)
214 | .translate(front_axle_x, front_axle_y) + plt.gca().transData)
215 |
216 | return transform_body, transform_rear, transform_front
--------------------------------------------------------------------------------
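A quick numeric check of the slip-angle mapping described in the docstring: `beta()` converts a steering angle to the slip angle used as the control input, and `beta_to_delta()` inverts it (default geometry from the `setdefault` values above):

```python
import numpy as np
from robots.kinematic_bicycle2D import KinematicBicycle2D

robot = KinematicBicycle2D(dt=0.05, robot_spec={})    # wheel_base=0.5, rear_ax_dist=0.3 defaults
delta = np.deg2rad(20.0)                              # steering angle
slip = robot.beta(delta)                              # slip angle, i.e. the second input U[1]
delta_back = robot.beta_to_delta(slip)                # recovers the steering angle
print(np.rad2deg(slip), np.rad2deg(delta_back))       # ~12.3 deg and ~20.0 deg
```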
/robots/kinematic_bicycle2D_c3bf.py:
--------------------------------------------------------------------------------
1 | from robots.kinematic_bicycle2D import KinematicBicycle2D
2 | import numpy as np
3 | import casadi as ca
4 |
5 | """
6 | It is based on the kinematic bicycle 2D model and overrides
7 | only the continuous- and discrete-time CBF functions with their collision cone CBF (C3BF) counterparts.
8 | ref: collision cone CBF (C3BF) paper (arXiv)
9 | """
10 |
11 | class KinematicBicycle2D_C3BF(KinematicBicycle2D):
12 | def __init__(self, dt, robot_spec):
13 | super().__init__(dt, robot_spec)
14 |
15 | def agent_barrier(self, X, obs, robot_radius, beta=1.0):
16 | """
17 | '''Continuous Time C3BF'''
18 | Compute a Collision Cone Control Barrier Function for the Kinematic Bicycle2D.
19 |
20 | The barrier's relative degree is "1"
21 | h_dot = ∂h/∂x ⋅ f(x) + ∂h/∂x ⋅ g(x) ⋅ u
22 |
23 | Define h from the collision cone idea:
24 | p_rel = [obs_x - x, obs_y - y]
25 | v_rel = [obs_x_dot-v_cos(theta), obs_y_dot-v_sin(theta)]
26 | dist = ||p_rel||
27 | R = robot_radius + obs_r
28 | """
29 |
30 | theta = X[2, 0]
31 | v = X[3, 0]
32 |
33 | # Check if obstacles have velocity components (static or moving)
34 | if obs.shape[0] > 3:
35 | obs_vel_x = obs[3, 0]
36 | obs_vel_y = obs[4, 0]
37 |
38 | else:
39 | obs_vel_x = 0.0
40 | obs_vel_y = 0.0
41 |
42 | # Combine radius R
43 | ego_dim = (obs[2, 0] + robot_radius) * beta # Total collision safe radius
44 |
45 | # Compute relative position and velocity
46 | p_rel = np.array([[obs[0, 0] - X[0, 0]],
47 | [obs[1, 0] - X[1, 0]]])
48 | v_rel = np.array([[obs_vel_x - v * np.cos(theta)],
49 | [obs_vel_y - v * np.sin(theta)]])
50 |
51 | p_rel_x = p_rel[0, 0]
52 | p_rel_y = p_rel[1, 0]
53 | v_rel_x = v_rel[0, 0]
54 | v_rel_y = v_rel[1, 0]
55 |
56 | p_rel_mag = np.linalg.norm(p_rel)
57 | v_rel_mag = np.linalg.norm(v_rel)
58 |
59 | # Compute cos_phi safely for c3bf
60 | eps = 1e-6
61 | cal_max = np.maximum(p_rel_mag**2 - ego_dim**2, eps)
62 | sqrt_term = np.sqrt(cal_max)
63 | cos_phi = sqrt_term / (p_rel_mag + eps)
64 |
65 | # Compute h
66 | h = np.dot(p_rel.T, v_rel)[0, 0] + p_rel_mag * v_rel_mag * cos_phi
67 |
68 | # Compute dh_dx
69 | dh_dx = np.zeros((1, 4))
70 | dh_dx[0, 0] = -v_rel_x - v_rel_mag * p_rel_x / (sqrt_term + eps)
71 | dh_dx[0, 1] = -v_rel_y - v_rel_mag * p_rel_y / (sqrt_term + eps)
72 | dh_dx[0, 2] = v * np.sin(theta) * p_rel_x - v * np.cos(theta) * p_rel_y + (sqrt_term + eps) / v_rel_mag * (v * (obs_vel_x * np.sin(theta) - obs_vel_y * np.cos(theta)))
73 | dh_dx[0, 3] = -np.cos(theta) * p_rel_x -np.sin(theta) * p_rel_y + (sqrt_term + eps) / v_rel_mag * (v - (obs_vel_x * np.cos(theta) + obs_vel_y * np.sin(theta)))
74 |
75 | return h, dh_dx
76 |
77 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
78 | '''Discrete Time C3BF'''
79 | # Dynamics equations for the next states
80 | x_k1 = self.step(x_k, u_k, casadi=True)
81 |
82 | def h(x, obs, robot_radius, beta=1.01):
83 |             '''Computes C3BF h(x) = <p_rel, v_rel> + ||p_rel||*||v_rel||*cos(phi)'''
84 | theta = x[2, 0]
85 | v = x[3, 0]
86 |
87 | # Check if obstacles have velocity components (static or moving)
88 | if obs.shape[0] > 3:
89 | obs_vel_x = obs[3][0]
90 | obs_vel_y = obs[4][0]
91 | else:
92 | obs_vel_x = 0.0
93 | obs_vel_y = 0.0
94 |
95 | # Combine radius R
96 | ego_dim = (obs[2][0] + robot_radius) * beta # Total collision radius
97 |
98 | # Compute relative position and velocity
99 | p_rel = ca.vertcat(obs[0][0] - x[0, 0], obs[1][0] - x[1, 0])
100 | v_rel = ca.vertcat(obs_vel_x - v * ca.cos(theta), obs_vel_y - v * ca.sin(theta))
101 |
102 | p_rel_mag = ca.norm_2(p_rel)
103 | v_rel_mag = ca.norm_2(v_rel)
104 |
105 | # Compute h
106 | h = (p_rel.T @ v_rel)[0, 0] + p_rel_mag * v_rel_mag * ca.sqrt(ca.fmax(p_rel_mag**2 - ego_dim**2, 0)) / p_rel_mag
107 |
108 | return h
109 |
110 | h_k1 = h(x_k1, obs, robot_radius, beta)
111 | h_k = h(x_k, obs, robot_radius, beta)
112 |
113 | d_h = h_k1 - h_k
114 |
115 | return h_k, d_h
--------------------------------------------------------------------------------
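A standalone sketch of the collision-cone barrier above, using the `[x, y, r, vx, vy]` obstacle layout handled by `agent_barrier`: an obstacle closing head-on puts the relative velocity inside the collision cone (h < 0), while a receding one keeps h positive. All numbers are illustrative.

```python
import numpy as np
from robots.kinematic_bicycle2D_c3bf import KinematicBicycle2D_C3BF

robot = KinematicBicycle2D_C3BF(dt=0.05, robot_spec={})
X = np.array([0.0, 0.0, 0.0, 1.0]).reshape(-1, 1)                    # heading along +x at 1 m/s

obs_closing = np.array([3.0, 0.0, 0.3, -1.0, 0.0]).reshape(-1, 1)    # driving toward the robot
obs_receding = np.array([3.0, 0.0, 0.3, 1.5, 0.0]).reshape(-1, 1)    # driving away

h_closing, _ = robot.agent_barrier(X, obs_closing, robot_radius=0.3)
h_receding, _ = robot.agent_barrier(X, obs_receding, robot_radius=0.3)
print(h_closing, h_receding)    # expect h_closing < 0 < h_receding
```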
/robots/quad2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | import matplotlib.pyplot as plt
5 | from matplotlib.transforms import Affine2D
6 |
7 |
8 | def angle_normalize(x):
9 | if isinstance(x, (np.ndarray, float, int)):
10 | # NumPy implementation
11 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
12 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
13 | # CasADi implementation
14 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
15 | else:
16 | raise TypeError(f"Unsupported input type: {type(x)}")
17 |
18 | class Quad2D:
19 |
20 | def __init__(self, dt, robot_spec):
21 | '''
22 | X: [x, z, theta, x_dot, z_dot, theta_dot]
23 | U: [u_right, u_left]
24 | G: [x_goal, z_goal]
25 |
26 | x_dot = v_x
27 | z_dot = v_z
28 | theta_dot = theta_dot
29 | v_x_dot = -1/m * sin(theta) * (u_right + u_left)
30 | v_z_dot = -g + 1/m * cos(theta) * (u_right + u_left)
31 |         theta_ddot = r/I * (u_right - u_left)
32 |
33 | cbf: h(x) = ||x-x_obs||^2 - beta*d_min^2
34 | relative degree: 2
35 | '''
36 | self.dt = dt
37 | self.robot_spec = robot_spec
38 |
39 | self.robot_spec.setdefault('mass', 1.0)
40 | self.robot_spec.setdefault('inertia', 0.01)
41 | self.robot_spec.setdefault('f_min', 1.0)
42 | self.robot_spec.setdefault('f_max', 10.0)
43 |
44 | def f(self, X, casadi=False):
45 | m, g = self.robot_spec['mass'], 9.81
46 | if casadi:
47 | return ca.vertcat(
48 | X[3, 0],
49 | X[4, 0],
50 | X[5, 0],
51 | 0,
52 | -g,
53 | 0
54 | )
55 | else:
56 | return np.array([X[3, 0], X[4, 0], X[5, 0], 0, -g, 0]).reshape(-1, 1)
57 |
58 | def df_dx(self, X):
59 | """Jacobian of f with respect to state X."""
60 | df_dx = np.zeros((6, 6))
61 | df_dx[0, 3] = 1
62 | df_dx[1, 4] = 1
63 | df_dx[2, 5] = 1
64 | return df_dx
65 |
66 | def g(self, X, casadi=False):
67 | """Control-dependent dynamics"""
68 | m, I, r = self.robot_spec['mass'], self.robot_spec['inertia'], self.robot_spec['radius']
69 | theta = X[2, 0]
70 | if casadi:
71 | return ca.vertcat(
72 | ca.horzcat(0, 0, 0, -ca.sin(theta)/m, ca.cos(theta)/m, r/I),
73 | ca.horzcat(0, 0, 0, -ca.sin(theta)/m, ca.cos(theta)/m, -r/I)
74 | ).T
75 | else:
76 | return np.array([
77 | [0, 0, 0, -np.sin(theta)/m, np.cos(theta)/m, r/I],
78 | [0, 0, 0, -np.sin(theta)/m, np.cos(theta)/m, -r/I]
79 | ]).T
80 |
81 | def step(self, X, U):
82 | X = X + (self.f(X) + self.g(X) @ U) * self.dt
83 | X[2, 0] = angle_normalize(X[2, 0])
84 | return X
85 |
86 | def nominal_input(self, X, G, k_px=3.0, k_dx=0.5, k_pz=0.1, k_dz=0.5,
87 | k_p_theta=0.05, k_d_theta=0.05):
88 | """
89 | Compute the nominal input (rotor forces) for the quad2d dynamics.
90 |         Computed in closed form, without any iterative loops.
91 | """
92 |
93 | m, g = self.robot_spec['mass'], 9.81
94 | f_min, f_max = self.robot_spec['f_min'], self.robot_spec['f_max']
95 | r, I = self.robot_spec['radius'], self.robot_spec['inertia']
96 |
97 | x, z, theta, x_dot, z_dot, theta_dot = X.flatten()
98 |
99 | # Goal position
100 | x_goal, z_goal = G.flatten()
101 |
102 | # Position and velocity errors
103 | e_x = x_goal - x
104 | e_z = z_goal - z
105 | e_x_dot = -x_dot # Assuming desired velocity is zero
106 | e_z_dot = -z_dot
107 |
108 | # Desired accelerations (Outer Loop)
109 | x_ddot_d = k_px * e_x + k_dx * e_x_dot
110 | z_ddot_d = k_pz * e_z + k_dz * e_z_dot
111 |
112 | # Desired total acceleration vector
113 | a_d_x = x_ddot_d
114 | a_d_z = z_ddot_d + g # Compensate for gravity
115 |
116 | # Desired thrust magnitude
117 | a_d_norm = np.sqrt(a_d_x**2 + a_d_z**2)
118 | T = m * a_d_norm
119 |
120 | # Desired orientation
121 | theta_d = - np.arctan2(a_d_x, a_d_z) # sign convention
122 |
123 | # Orientation error (Inner Loop)
124 | e_theta = theta_d - theta
125 | e_theta = np.arctan2(np.sin(e_theta), np.cos(e_theta)) # Normalize angle to [-π, π]
126 | e_theta_dot = -theta_dot # Assuming desired angular velocity is zero
127 |
128 | # Desired torque
129 | tau = k_p_theta * e_theta + k_d_theta * e_theta_dot
130 | # clip tau
131 | tau = np.clip(tau, -1, 1)
132 |
133 | # Compute rotor forces
134 | F_r = (T + tau / r) / 2.0
135 | F_l = (T - tau / r) / 2.0
136 |
137 | # Enforce constraints
138 | F_r = np.clip(F_r, f_min, f_max)
139 | F_l = np.clip(F_l, f_min, f_max)
140 |
141 | return np.array([F_r, F_l]).reshape(-1, 1)
142 |
143 | def stop(self, X):
144 | """
145 | Compute the nominal input for stopping behavior,
146 |         leveraging the nominal input function.
147 | """
148 | x, z, theta, x_dot, z_dot, theta_dot = X.flatten()
149 |
150 | G = np.array([x, z]).reshape(-1, 1) # provide current position as goal
151 | stop_control = self.nominal_input(X, G)
152 | return stop_control
153 |
154 | def has_stopped(self, X, tol=0.05):
155 | """Check if quadrotor has stopped within tolerance."""
156 | return np.linalg.norm(X[3:5, 0]) < tol
157 |
158 | def rotate_to(self, X, theta_des, k_omega=2.0):
159 | """Generate control input to rotate the quadrotor to a target angle."""
160 | error_theta = angle_normalize(theta_des - X[2, 0])
161 | omega = k_omega * error_theta
162 | return np.array([0.0, omega]).reshape(-1, 1)
163 |
164 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
165 | '''Continuous Time High Order CBF'''
166 | obsX = obs[0:2]
167 | d_min = obs[2][0] + robot_radius # obs radius + robot radius
168 |
169 | h = np.linalg.norm(X[0:2] - obsX[0:2])**2 - beta*d_min**2
170 | # Lgh is zero => relative degree is 2
171 | h_dot = 2 * (X[0:2] - obsX[0:2]).T @ (self.f(X)[0:2])
172 |
173 | df_dx = self.df_dx(X)
174 | dh_dot_dx = np.append((2 * self.f(X)[0:2]).T, np.array([[0, 0, 0, 0]]), axis=1) + 2 * (X[0:2] - obsX[0:2]).T @ df_dx[0:2, :]
175 | return h, h_dot, dh_dot_dx
176 |
177 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
178 | '''Discrete Time High Order CBF'''
179 | # Dynamics equations for the next states
180 | x_k1 = self.step(x_k, u_k)
181 | x_k2 = self.step(x_k1, u_k)
182 |
183 | def h(x, obs, robot_radius, beta=1.01):
184 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
185 | x_obs = obs[0]
186 | y_obs = obs[1]
187 | r_obs = obs[2]
188 | d_min = robot_radius + r_obs
189 |
190 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
191 | return h
192 |
193 | h_k2 = h(x_k2, obs, robot_radius, beta)
194 | h_k1 = h(x_k1, obs, robot_radius, beta)
195 | h_k = h(x_k, obs, robot_radius, beta)
196 |
197 | d_h = h_k1 - h_k
198 | dd_h = h_k2 - 2 * h_k1 + h_k
199 | # hocbf_2nd_order = h_ddot + (gamma1 + gamma2) * h_dot + (gamma1 * gamma2) * h_k
200 |
201 | # h_k = np.zeros_like(h_k)
202 | # d_h = np.zeros_like(d_h)
203 | # dd_h = np.zeros_like(dd_h)
204 | return h_k, d_h, dd_h
205 |
206 | def render_rigid_body(self, X):
207 | x, z, theta, _, _, _ = X.flatten()
208 | # Adjust rectangle's center to align its bottom edge with the robot's circle center
209 | rect_center_z = z + self.robot_spec['radius'] / 6
210 |
211 | # Create a transformation that handles rotation and translation
212 | transform_rect = Affine2D().rotate(theta).translate(x, rect_center_z) + plt.gca().transData
213 |
214 | return transform_rect
--------------------------------------------------------------------------------
/robots/quad3D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | """
5 | Created on June 2nd, 2025
6 | @author: Taekyung Kim
7 |
8 | @description:
9 | 3D Quad model for MPC-CBF (casadi)
10 | Linearized 6-DOF quadrotor model: "Integration of Adaptive Control and Reinforcement Learning for Real-Time Control and Learning", IEEE T-AC, 2023.
11 | """
12 | def angle_normalize(x):
13 | if isinstance(x, (np.ndarray, float, int)):
14 | # NumPy implementation
15 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
16 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
17 | # CasADi implementation
18 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
19 | else:
20 | raise TypeError(f"Unsupported input type: {type(x)}")
21 |
22 | class Quad3D:
23 |
24 | def __init__(self, dt, robot_spec):
25 | '''
26 | Linearized 6-DOF quadrotor model:
27 | X: [x, y, z, θ, φ, ψ, vx, vy, vz, q, p, r] (12 states)
28 | U: [u1, u2, u3, u4] (4 control inputs - motor forces)
29 |
30 | Dynamics:
31 | ẋ = vx, v̇x = g*θ
32 | ẏ = vy, v̇y = -g*φ
33 | ż = vz, v̇z = (1/m)*F
34 | θ̇ = q, q̇ = (1/Iy)*τy
35 | φ̇ = p, ṗ = (1/Ix)*τx
36 | ψ̇ = r, ṙ = (1/Iz)*τz
37 |
38 | where [F, τy, τx, τz]^T = B2 * [u1, u2, u3, u4]^T
39 | B2 = [[1, 1, 1, 1],
40 | [0, L, 0, -L],
41 | [L, 0, -L, 0],
42 | [ν, -ν, ν, -ν]]
43 |
44 | Relative degree: 4 with respect to (x,y), so we use RK4 CBF
45 | '''
46 | self.dt = dt
47 | self.robot_spec = robot_spec
48 |
49 | # Default physical parameters
50 | self.robot_spec.setdefault('mass', 3.0) # m
51 | self.robot_spec.setdefault('Ix', 0.5) # Moment of inertia around x-axis
52 | self.robot_spec.setdefault('Iy', 0.5) # Moment of inertia around y-axis
53 | self.robot_spec.setdefault('Iz', 0.5) # Moment of inertia around z-axis
54 | self.robot_spec.setdefault('L', 0.3) # Arm length
55 | self.robot_spec.setdefault('nu', 0.1) # Torque coefficient
56 |
57 | # Control constraints
58 | self.robot_spec.setdefault('u_max', 10.0) # Maximum motor force
59 | self.robot_spec.setdefault('u_min', -10.0) # Minimum motor force
60 |
61 | # Extract parameters
62 | self.m = self.robot_spec['mass']
63 | self.Ix = self.robot_spec['Ix']
64 | self.Iy = self.robot_spec['Iy']
65 | self.Iz = self.robot_spec['Iz']
66 | self.L = self.robot_spec['L']
67 | self.nu = self.robot_spec['nu']
68 | self.gravity = 9.8
69 |
70 | # B2 matrix for control allocation
71 | self.B2 = np.array([
72 | [1, 1, 1, 1], # F = u1 + u2 + u3 + u4
73 | [0, self.L, 0, -self.L], # τy = L*u2 - L*u4
74 | [self.L, 0, -self.L, 0], # τx = L*u1 - L*u3
75 | [self.nu, -self.nu, self.nu, -self.nu] # τz = ν*u1 - ν*u2 + ν*u3 - ν*u4
76 | ])
77 |
78 | # State space matrices
79 | self.A = np.zeros((12, 12))
80 | self.A[0, 6] = 1 # ẋ = vx
81 | self.A[1, 7] = 1 # ẏ = vy
82 | self.A[2, 8] = 1 # ż = vz
83 | self.A[3, 9] = 1 # θ̇ = q
84 | self.A[4, 10] = 1 # φ̇ = p
85 | self.A[5, 11] = 1 # ψ̇ = r
86 | self.A[6, 3] = self.gravity # v̇x = g*θ
87 | self.A[7, 4] = -self.gravity # v̇y = -g*φ
88 |
89 | self.B1 = np.zeros((12, 4))
90 | self.B1[8, 0] = 1/self.m # v̇z = (1/m)*F
91 | self.B1[9, 1] = 1/self.Iy # q̇ = (1/Iy)*τy
92 | self.B1[10, 2] = 1/self.Ix # ṗ = (1/Ix)*τx
93 | self.B1[11, 3] = 1/self.Iz # ṙ = (1/Iz)*τz
94 |
95 | self.B = self.B1 @ self.B2
96 |
97 | # For derivative computation
98 | self.df_dx = self.A
99 |
100 | def f(self, X, casadi=False):
101 | """
102 | Drift dynamics f(x) = Ax
103 | X: [x, y, z, θ, φ, ψ, vx, vy, vz, q, p, r]
104 | """
105 | if casadi:
106 | return ca.mtimes(self.A, X)
107 | else:
108 | return self.A @ X
109 |
110 | def g(self, X, casadi=False):
111 | """
112 | Control matrix g(x) = B (constant for linear system)
113 | """
114 | if casadi:
115 | return self.B
116 | else:
117 | return self.B
118 |
119 | def step(self, X, U, casadi=False):
120 | """
121 | Runge-Kutta 4th order integration: x_{k+1} = x_k + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
122 | where k1, k2, k3, k4 are the RK4 slopes
123 | """
124 | # print with 2 decimal places
125 | # print(f"roll: {float(X[3, 0]):.2f}, pitch: {float(X[4, 0]):.2f}, yaw: {float(X[5, 0]):.2f}")
126 | # print(f"roll rate: {float(X[9, 0]):.2f}, pitch rate: {float(X[10, 0]):.2f}, yaw rate: {float(X[11, 0]):.2f}")
127 |
128 | if casadi:
129 | # RK4 integration with CasADi
130 | k1 = ca.mtimes(self.A, X) + ca.mtimes(self.B, U)
131 | k2 = ca.mtimes(self.A, X + self.dt/2 * k1) + ca.mtimes(self.B, U)
132 | k3 = ca.mtimes(self.A, X + self.dt/2 * k2) + ca.mtimes(self.B, U)
133 | k4 = ca.mtimes(self.A, X + self.dt * k3) + ca.mtimes(self.B, U)
134 |
135 | # RK4 update
136 | X_next = X + self.dt/6 * (k1 + 2*k2 + 2*k3 + k4)
137 |
138 | # Normalize angles
139 | X_next[3, 0] = angle_normalize(X_next[3, 0]) # θ
140 | X_next[4, 0] = angle_normalize(X_next[4, 0]) # φ
141 | X_next[5, 0] = angle_normalize(X_next[5, 0]) # ψ
142 | else:
143 | # RK4 integration with NumPy
144 | k1 = self.A @ X + self.B @ U
145 | k2 = self.A @ (X + self.dt/2 * k1) + self.B @ U
146 | k3 = self.A @ (X + self.dt/2 * k2) + self.B @ U
147 | k4 = self.A @ (X + self.dt * k3) + self.B @ U
148 | # RK4 update
149 | X_next = X + self.dt/6 * (k1 + 2*k2 + 2*k3 + k4)
150 |
151 | # Normalize angles
152 | X_next[3, 0] = angle_normalize(X_next[3, 0]) # θ
153 | X_next[4, 0] = angle_normalize(X_next[4, 0]) # φ
154 | X_next[5, 0] = angle_normalize(X_next[5, 0]) # ψ
155 |
156 | return X_next
157 |
158 | def nominal_input(self, X, goal, k_p=1.0, k_d=2.0, k_ang=5.0):
159 | '''
160 | Nominal input for CBF-QP
161 | X: [x, y, z, θ, φ, ψ, vx, vy, vz, q, p, r]
162 | goal: [x_des, y_des, z_des]
163 | '''
164 | u_max = self.robot_spec['u_max']
165 | u_min = self.robot_spec['u_min']
166 |
167 | u_nom = np.zeros([4, 1])
168 |
169 | # Position and velocity errors
170 | pos_err = goal[0:3].reshape(-1, 1) - X[0:3] # [x, y, z] error
171 | vel_err = -X[6:9] # desired velocity is 0
172 |
173 | # Desired accelerations
174 | ax_des = k_p * pos_err[0, 0] + k_d * vel_err[0, 0]
175 | ay_des = k_p * pos_err[1, 0] + k_d * vel_err[1, 0]
176 | az_des = k_p * pos_err[2, 0] + k_d * vel_err[2, 0]
177 |
178 | # From linearized dynamics: v̇x = g*θ, v̇y = -g*φ, v̇z = (1/m)*F
179 | theta_des = ax_des / self.gravity
180 | phi_des = -ay_des / self.gravity
181 | F_des = self.m * az_des
182 |
183 | # Angular errors and rates
184 | theta_err = theta_des - X[3, 0]
185 | phi_err = phi_des - X[4, 0]
186 | psi_err = 0 - X[5, 0] # Keep yaw at 0
187 |
188 | q_err = -X[9, 0] # desired angular velocity is 0
189 | p_err = -X[10, 0]
190 | r_err = -X[11, 0]
191 |
192 | # Desired torques
193 | tau_y_des = self.Iy * (k_ang * theta_err + k_d * q_err)
194 | tau_x_des = self.Ix * (k_ang * phi_err + k_d * p_err)
195 | tau_z_des = self.Iz * (k_ang * psi_err + k_d * r_err)
196 |
197 | # Solve for motor forces: [F, τy, τx, τz]^T = B2 * u
198 | desired_wrench = np.array([[F_des], [tau_y_des], [tau_x_des], [tau_z_des]])
199 | u_nom = np.linalg.pinv(self.B2) @ desired_wrench
200 |
201 | # Apply constraints
202 | u_nom = np.clip(u_nom, u_min, u_max)
203 |
204 | return u_nom
205 |
206 | def stop(self, X, k_stop=1.0):
207 | """
208 | Control to stop the quadrotor (minimize velocities)
209 | """
210 | u_max = self.robot_spec['u_max']
211 | u_min = self.robot_spec['u_min']
212 |
213 | # Desired accelerations to reduce velocities
214 | ax_des = -k_stop * X[6, 0] # reduce vx
215 | ay_des = -k_stop * X[7, 0] # reduce vy
216 | az_des = -k_stop * X[8, 0] # reduce vz
217 |
218 | # Convert to desired angles and force
219 | theta_des = ax_des / self.gravity
220 | phi_des = -ay_des / self.gravity
221 | F_des = self.m * az_des
222 |
223 | # Angular control to reach desired angles
224 | tau_y_des = self.Iy * k_stop * (theta_des - X[3, 0] - X[9, 0]/k_stop)
225 | tau_x_des = self.Ix * k_stop * (phi_des - X[4, 0] - X[10, 0]/k_stop)
226 | tau_z_des = self.Iz * k_stop * (0 - X[5, 0] - X[11, 0]/k_stop) # level yaw
227 |
228 | # Solve for motor forces
229 | desired_wrench = np.array([[F_des], [tau_y_des], [tau_x_des], [tau_z_des]])
230 | u_stop = np.linalg.pinv(self.B2) @ desired_wrench
231 |
232 | # Apply constraints
233 | u_stop = np.clip(u_stop, u_min, u_max)
234 |
235 | return u_stop.flatten()
236 |
237 | def has_stopped(self, X, tol=0.05):
238 | """Check if quadrotor has stopped (low velocities and angular rates)"""
239 | linear_vel = np.linalg.norm(X[6:9])
240 | angular_vel = np.linalg.norm(X[9:12])
241 | return linear_vel < tol and angular_vel < tol
242 |
243 | def rotate_to(self, X, ang_des, k_omega=2.0):
244 | """
245 | Rotate to desired yaw angle while maintaining position
246 | """
247 | u_max = self.robot_spec['u_max']
248 | u_min = self.robot_spec['u_min']
249 |
250 | # Hover force (compensate gravity)
251 | F_hover = self.m * self.gravity
252 |
253 | # No rotation in roll and pitch
254 | tau_y_des = self.Iy * k_omega * (0 - X[3, 0] - X[9, 0]/k_omega) # level pitch
255 | tau_x_des = self.Ix * k_omega * (0 - X[4, 0] - X[10, 0]/k_omega) # level roll
256 | tau_z_des = self.Iz * k_omega * (ang_des - X[5, 0] - X[11, 0]/k_omega) # desired yaw
257 |
258 | # Solve for motor forces
259 | desired_wrench = np.array([[F_hover], [tau_y_des], [tau_x_des], [tau_z_des]])
260 | u = np.linalg.pinv(self.B2) @ desired_wrench
261 |
262 | # Apply constraints
263 | u = np.clip(u, u_min, u_max)
264 |
265 | return u.flatten()
266 |
267 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
268 | '''obs: [x, y, r]'''
269 | '''obstacles are infinite cylinders at x and y with radius r, extending in z direction'''
270 | '''X : [x, y, z, θ, φ, ψ, vx, vy, vz, q, p, r]'''
271 | raise NotImplementedError("Cannot implement with nominal distance based CBF")
272 |
273 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
274 |         '''Discrete Time RK4 Sampled-Data CBF
275 | reference: "Safety of Sampled-Data Systems with Control Barrier Functions via Approximate Discrete Time Models", IEEE CDC, 2022.
276 | '''
277 |
278 | # Dynamics equations for the next states
279 | x_k1 = self.step(x_k, u_k, casadi=True)
280 |
281 | def h(x, obs, robot_radius, beta=1.01):
282 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
283 | x_obs = obs[0]
284 | y_obs = obs[1]
285 | r_obs = obs[2]
286 | d_min = robot_radius + r_obs
287 |
288 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
289 | return h
290 |
291 | h_k1 = h(x_k1, obs, robot_radius, beta)
292 | h_k = h(x_k, obs, robot_radius, beta)
293 |
294 | d_h = h_k1 - h_k
295 | return h_k, d_h
--------------------------------------------------------------------------------
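A small check of the control allocation described in the docstring: mapping the hover wrench `[m*g, 0, 0, 0]` back through `B2` yields four equal motor forces (7.35 N each with the default mass of 3.0 kg), well inside the default `u_max` of 10.

```python
import numpy as np
from robots.quad3D import Quad3D

quad = Quad3D(dt=0.02, robot_spec={})
hover_wrench = np.array([[quad.m * quad.gravity], [0.0], [0.0], [0.0]])   # [F, tau_y, tau_x, tau_z]
u_hover = np.linalg.pinv(quad.B2) @ hover_wrench
print(u_hover.flatten())    # [7.35, 7.35, 7.35, 7.35]
```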
/robots/single_integrator2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | """
5 | Single Integrator model for CBF-QP and MPC-CBF (casadi)
6 | """
7 |
8 |
9 | def angle_normalize(x):
10 | if isinstance(x, (np.ndarray, float, int)):
11 | # NumPy implementation
12 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
13 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
14 | # CasADi implementation
15 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
16 | else:
17 | raise TypeError(f"Unsupported input type: {type(x)}")
18 |
19 |
20 | class SingleIntegrator2D:
21 |
22 | def __init__(self, dt, robot_spec):
23 | '''
24 | X: [x, y]
25 | theta: yaw angle
26 | U: [vx, vy]
27 |
28 |
29 | Dynamics:
30 | dx/dt = vx
31 | dy/dt = vy
32 | f(x) = [0, 0], g(x) = I(2x2)
33 | cbf: h(x,y) = ||x-x_obs||^2 + ||y-y_obs||^2- beta*d_min^2
34 | relative degree: 1
35 | '''
36 | self.dt = dt
37 | self.robot_spec = robot_spec
38 |
39 | self.robot_spec.setdefault('v_max', 1.0)
40 | self.robot_spec.setdefault('w_max', 0.5)
41 |
42 | def f(self, X, casadi=False):
43 | if casadi:
44 | return ca.vertcat(
45 | 0,
46 | 0
47 | )
48 | else:
49 | return np.array([
50 | 0,
51 | 0]).reshape(-1, 1)
52 |
53 | def g(self, X, casadi=False):
54 | if casadi:
55 | return ca.DM([
56 | [1, 0],
57 | [0, 1]
58 | ])
59 | else:
60 | return np.array([[1, 0], [0, 1]])
61 |
62 | def step(self, X, U):
63 | X = X + (self.f(X) + self.g(X) @ U) * self.dt
64 | return X
65 |
66 | def step_rotate(self, theta, U_attitude):
67 | theta = angle_normalize(theta + U_attitude[0, 0] * self.dt)
68 | return theta
69 |
70 | def nominal_input(self, X, G, d_min=0.05, k_v=1.0):
71 | '''
72 | nominal input for CBF-QP (position control)
73 | '''
74 | G = np.copy(G.reshape(-1, 1)) # goal state
75 | v_max = self.robot_spec['v_max'] # Maximum velocity (x+y)
76 |
77 | pos_errors = G[0:2, 0] - X[0:2, 0]
78 | pos_errors = np.sign(pos_errors) * \
79 | np.maximum(np.abs(pos_errors) - d_min, 0.0)
80 |
81 | # Compute desired velocities for x and y
82 | v_des = k_v * pos_errors
83 | v_mag = np.linalg.norm(v_des)
84 | if v_mag > v_max:
85 | v_des = v_des * v_max / v_mag
86 |
87 | return v_des.reshape(-1, 1)
88 |
89 | def nominal_attitude_input(self, theta, theta_des, k_theta=1.0):
90 | '''
91 | nominal input for attitude control
92 | '''
93 | error_theta = angle_normalize(theta_des - theta)
94 | yaw_rate = k_theta * error_theta
95 | return np.array([yaw_rate]).reshape(-1, 1)
96 |
97 | def stop(self, X, k_a=1.0):
98 | # Set desired velocity to zero
99 | vx_des, vy_des = 0.0, 0.0
100 | return np.array([vx_des, vy_des]).reshape(-1, 1)
101 |
102 | def has_stopped(self, X, tol=0.05):
103 | # Single Integrator can always stop.
104 | return True
105 |
106 | def rotate_to(self, theta, theta_des, k_omega=2.0):
107 | error_theta = angle_normalize(theta_des - theta)
108 | yaw_rate = k_omega * error_theta
109 | yaw_rate = np.clip(yaw_rate, -self.robot_spec['w_max'], self.robot_spec['w_max'])
110 | return np.array([yaw_rate]).reshape(-1, 1)
111 |
112 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
113 | '''Continuous Time High Order CBF'''
114 | obsX = obs[0:2]
115 | d_min = obs[2][0] + robot_radius # obs radius + robot radius
116 |
117 | h = np.linalg.norm(X[0:2] - obsX[0:2])**2 - beta*d_min**2
118 | # relative degree is 1
119 |
120 | dh_dx = (2 * (X[0:2] - obsX[0:2])).T
121 | return h, dh_dx
122 |
123 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
124 | '''Discrete Time High Order CBF'''
125 | # Dynamics equations for the next states
126 | x_k1 = self.step(x_k, u_k)
127 |
128 | def h(x, obs, robot_radius, beta=1.01):
129 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
130 | x_obs = obs[0]
131 | y_obs = obs[1]
132 | r_obs = obs[2]
133 | d_min = robot_radius + r_obs
134 |
135 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
136 | return h
137 |
138 | h_k1 = h(x_k1, obs, robot_radius, beta)
139 | h_k = h(x_k, obs, robot_radius, beta)
140 |
141 | d_h = h_k1 - h_k
142 | # cbf = h_dot + gamma1 * h_k
143 |
144 | return h_k, d_h
--------------------------------------------------------------------------------
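A minimal usage sketch for `SingleIntegrator2D` above, assuming the repository root is on the Python path; the spec, goal, and obstacle values below are illustrative:

```python
import numpy as np
from robots.single_integrator2D import SingleIntegrator2D

robot = SingleIntegrator2D(dt=0.05, robot_spec={'v_max': 1.0})

X = np.array([[0.0], [0.0]])            # state [x, y]
G = np.array([[2.0], [1.0]])            # goal position
obs = np.array([[1.0], [0.5], [0.3]])   # obstacle [x, y, r] as a column vector

u = robot.nominal_input(X, G)                               # P-control velocity, saturated at v_max
h, dh_dx = robot.agent_barrier(X, obs, robot_radius=0.25)   # barrier value and gradient for the CBF-QP
X_next = robot.step(X, u)                                   # Euler step with dt
```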
/robots/unicycle2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | """
5 | Created on July 14th, 2024
6 | @author: Taekyung Kim
7 |
8 | @description:
9 | Kinematic unicycle model for CBF-QP and MPC-CBF (casadi)
10 | """
11 |
12 | def angle_normalize(x):
13 | if isinstance(x, (np.ndarray, float, int)):
14 | # NumPy implementation
15 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
16 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
17 | # CasADi implementation
18 | return ca.fmod(x + ca.pi, 2 * ca.pi) - ca.pi
19 | else:
20 | raise TypeError(f"Unsupported input type: {type(x)}")
21 |
22 | class Unicycle2D:
23 |
24 | def __init__(self, dt, robot_spec):
25 | '''
26 | X: [x, y, theta]
27 | U: [v, omega]
28 | cbf: h(x) = ||x-x_obs||^2 - beta*d_min^2 - sigma(s)
29 | relative degree: 1
30 | '''
31 | self.dt = dt
32 | self.robot_spec = robot_spec # not used in this model
33 |
34 | # for exp (CBF for unicycle)
35 | self.k1 = 0.5 #=#1.0
36 | self.k2 = 1.8 #0.5
37 |
38 | self.robot_spec.setdefault('v_max', 1.0)
39 | self.robot_spec.setdefault('w_max', 0.5)
40 |
41 | def f(self, X, casadi=False):
42 | if casadi:
43 | return ca.vertcat([
44 | 0,
45 | 0,
46 | 0
47 | ])
48 | else:
49 | return np.array([0,0,0]).reshape(-1,1)
50 |
51 | def g(self, X, casadi=False):
52 | if casadi:
53 | g = ca.SX.zeros(3, 2)
54 | g[0, 0] = ca.cos(X[2,0])
55 | g[1, 0] = ca.sin(X[2,0])
56 | g[2, 1] = 1
57 | return g
58 | else:
59 | return np.array([ [ np.cos(X[2,0]), 0],
60 | [ np.sin(X[2,0]), 0],
61 | [0, 1] ])
62 |
63 | def step(self, X, U):
64 | X = X + ( self.f(X) + self.g(X) @ U )*self.dt
65 | X[2,0] = angle_normalize(X[2,0])
66 | return X
67 |
68 | def nominal_input(self, X, G, d_min = 0.05, k_omega = 2.0, k_v = 1.0):
69 | '''
70 | nominal input for CBF-QP
71 | '''
72 | G = np.copy(G.reshape(-1,1)) # goal state
73 |
74 | distance = max(np.linalg.norm( X[0:2,0]-G[0:2,0] ) - d_min, 0.05)
75 | theta_d = np.arctan2(G[1,0]-X[1,0],G[0,0]-X[0,0])
76 | error_theta = angle_normalize( theta_d - X[2,0] )
77 |
78 | omega = k_omega * error_theta
79 | if abs(error_theta) > np.deg2rad(90):
80 | v = 0.0
81 | else:
82 | v = k_v*( distance )*np.cos( error_theta )
83 |
84 | return np.array([v, omega]).reshape(-1,1)
85 |
86 | def stop(self, X):
87 | return np.array([0,0]).reshape(-1,1)
88 |
89 | def has_stopped(self, X):
90 | # unicycle can always stop immediately
91 | return True
92 |
93 | def rotate_to(self, X, theta_des, k_omega = 2.0):
94 | error_theta = angle_normalize( theta_des - X[2,0] )
95 | omega = k_omega * error_theta
96 | return np.array([0.0, omega]).reshape(-1,1)
97 |
98 | def sigma(self,s):
99 | #print("s", s)
100 | return self.k2 * (np.exp(self.k1-s)-1)/(np.exp(self.k1-s)+1)
101 |
102 | def sigma_der(self,s):
103 | return - self.k2 * np.exp(self.k1-s)/( 1+np.exp( self.k1-s ) ) * ( 1 - self.sigma(s)/self.k2 )
104 |
105 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
106 | obsX = obs[0:2]
107 | d_min = obs[2][0] + robot_radius # obs radius + robot radius
108 |
109 | theta = X[2,0]
110 |
111 | h = np.linalg.norm( X[0:2] - obsX[0:2] )**2 - beta*d_min**2
112 | s = ( X[0:2] - obsX[0:2]).T @ np.array( [np.cos(theta),np.sin(theta)] ).reshape(-1,1)
113 | h = h - self.sigma(s)
114 |
115 | der_sigma = self.sigma_der(s)
116 | # [dh/dx, dh/dy, dh/dtheta]^T
117 | dh_dx = np.append(
118 | 2*( X[0:2] - obsX[0:2] ).T - der_sigma * ( np.array([ [np.cos(theta), np.sin(theta)] ]) ),
119 | - der_sigma * ( -np.sin(theta)*( X[0,0]-obsX[0,0] ) + np.cos(theta)*( X[1,0] - obsX[1,0] ) ),
120 | axis=1)
121 | # print(h)
122 | # print(dh_dx)
123 | return h, dh_dx
124 |
125 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta = 1.01):
126 | '''Discrete Time High Order CBF'''
127 | # Dynamics equations for the next states
128 | x_k1 = self.step(x_k, u_k)
129 |
130 | def h(x, obs, robot_radius, beta = 1.01):
131 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
132 | x_obs = obs[0]
133 | y_obs = obs[1]
134 | r_obs = obs[2]
135 | d_min = robot_radius + r_obs
136 |
137 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - y_obs)**2 - beta*d_min**2
138 | return h
139 |
140 | h_k1 = h(x_k1, obs, robot_radius, beta)
141 | h_k = h(x_k, obs, robot_radius, beta)
142 |
143 | d_h = h_k1 - h_k
144 | return h_k, d_h
--------------------------------------------------------------------------------
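The `Unicycle2D` barrier above subtracts sigma(s), where s = (p - p_obs)·[cos(theta), sin(theta)] is the robot-minus-obstacle offset projected onto the heading; sigma is largest when s is negative (obstacle ahead), so h shrinks when the robot faces the obstacle. A minimal sketch with illustrative values:

```python
import numpy as np
from robots.unicycle2D import Unicycle2D

robot = Unicycle2D(dt=0.05, robot_spec={})
obs = np.array([[1.5], [0.0], [0.3]])        # obstacle 1.5 m away on the +x axis

X_facing = np.array([[0.0], [0.0], [0.0]])   # heading towards the obstacle
X_away = np.array([[0.0], [0.0], [np.pi]])   # heading away from it

h_facing, _ = robot.agent_barrier(X_facing, obs, robot_radius=0.25)
h_away, _ = robot.agent_barrier(X_away, obs, robot_radius=0.25)
# h_facing < h_away: the sigma(s) term makes the barrier more conservative head-on
```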
/robots/vtol2D.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import casadi as ca
3 |
4 | import matplotlib.pyplot as plt
5 | from matplotlib.transforms import Affine2D
6 |
7 | """
8 | Created on January 27th, 2025
9 | @author: Taekyung Kim
10 |
11 | @description:
12 | Implement quadplane in 2D and visualize the rigid body plus elevators.
13 | """
14 |
15 | def angle_normalize(x):
16 | """Normalize angle to (-pi, pi]."""
17 | if isinstance(x, (np.ndarray, float, int)):
18 | return ((x + np.pi) % (2*np.pi)) - np.pi
19 | elif isinstance(x, (ca.SX, ca.MX, ca.DM)):
20 | return ca.fmod(x + ca.pi, 2*ca.pi) - ca.pi
21 | else:
22 | raise TypeError(f"Unsupported input type: {type(x)}")
23 |
24 |
25 | class VTOL2D:
26 | """
27 | X: [ x, z, theta, x_dot, z_dot, theta_dot ]
28 | - x,z -> inertial positions
29 | - theta -> pitch angle
30 | - x_dot,z_dot -> inertial velocities
31 | - theta_dot -> pitch rate
32 | U:[ delta_front, delta_rear, delta_pusher, delta_elevator ]
33 | cbf: h(x) = ||x-x_obs||^2 - beta*d_min^2
34 | relative degree: 2
35 |
36 | We compute angle of attack alpha from body-frame velocity (u_b, w_b):
37 | alpha = atan2(-w_b, u_b).
38 |     ** w_b here is positive UP (opposite to the standard aero convention, where w is positive DOWN), hence the -w_b
39 | Then we apply lift & drag in the "wind frame" where
40 | F_wind = [ -D, +L ]
41 | rotate from wind to body by alpha,
42 | then from body to inertial by theta.
43 |
44 | Rotors are assumed to produce linear thrust in body coordinates:
45 | front, rear rotor -> ±body_z,
46 | pusher rotor -> +body_x,
47 | and each is rotated to inertial via R(theta).
48 |
49 | Elevator modifies L, D, and also adds a pitch moment linearly => system remains control affine.
50 | """
51 |
52 | def __init__(self, dt, robot_spec):
53 | self.dt = dt
54 | self.spec = robot_spec
55 |
56 | # Default or user-set parameters
57 | self.spec.setdefault('mass', 11.0)
58 | self.spec.setdefault('inertia', 1.135)
59 | self.spec.setdefault('S_wing', 0.55) # Wing area
60 | self.spec.setdefault('rho', 1.2682) # Air density
61 |
62 | # linear lift coefficients
63 | self.spec.setdefault('C_L0', 0.23)
64 | self.spec.setdefault('C_Lalpha', 5.61)
65 | # non-linear lift coefficient
66 | self.spec.setdefault('M', 50.0) # transition rate
67 | self.spec.setdefault('alpha_0', np.deg2rad(15)) # stall angle
68 |
69 | self.spec.setdefault('C_Ldelta_e', 0.13) # TODO: might be wrong, but it should be positive
70 |
71 | self.spec.setdefault('C_D0', 0.043)
72 | self.spec.setdefault('C_Dalpha', 0.03) # e.g. alpha^2 coefficient
73 |         self.spec.setdefault('C_Ddelta_e', 0.0) # should scale with abs(delta_e), but that would break control-affinity
74 |
75 | self.spec.setdefault('C_m0', 0.0135) # in 2D, use small m for pitch moment
76 | self.spec.setdefault('C_malpha', -2.74) # coeff of pitch moment from alpha
77 | self.spec.setdefault('C_mdelta_e', -0.99) # for pitch moment from elevator
78 | self.spec.setdefault('chord', 0.18994) # mean chord length
79 |
80 | # linear rotor thrust
81 | self.spec.setdefault('k_front', 70.0)
82 | self.spec.setdefault('k_rear', 70.0)
83 | self.spec.setdefault('k_pusher',60.0)
84 | # geometry: lever arms
85 | self.spec.setdefault('ell_f', 0.5)
86 | self.spec.setdefault('ell_r', 0.5)
87 |
88 | # control limits
89 | self.spec.setdefault('throttle_min', 0.0)
90 | self.spec.setdefault('throttle_max', 1.0)
91 | self.spec.setdefault('elevator_min', -0.5)
92 | self.spec.setdefault('elevator_max', 0.5)
93 |
94 | # visualization
95 | self.spec.setdefault('plane_width', 1.0)
96 | self.spec.setdefault('plane_height', 0.3)
97 | self.spec.setdefault('front_width', 0.25)
98 | self.spec.setdefault('front_height', 0.1)
99 | self.spec.setdefault('rear_width', 0.25)
100 | self.spec.setdefault('rear_height', 0.1)
101 | self.spec.setdefault('pusher_width', 0.1)
102 | self.spec.setdefault('pusher_height', 0.4)
103 | self.spec.setdefault('elev_width', 0.35)
104 | self.spec.setdefault('elev_height', 0.1)
105 |
106 | # spec safety constraint
107 | self.spec.setdefault('v_max', 15.0)
108 | self.spec.setdefault('pitch_max', 15.0)
109 | self.spec.setdefault('descent_speed_max', 5.0) # in negative z
110 |
111 | self.gravity = 9.81
112 |
113 | #--------------------------------------------------------------------------
114 | # Body-frame velocity => alpha => baseline (no-elevator) lift/drag => inertial
115 | #--------------------------------------------------------------------------
116 | def f(self, X, casadi=False):
117 | """
118 | Unforced dynamics: f(X)
119 | - includes baseline aerodynamic forces (delta_e = 0)
120 | - includes gravity
121 | - no rotor thrust
122 | - no elevator deflection
123 | """
124 | if casadi:
125 | theta = X[2,0]
126 | xdot = X[3,0]
127 | zdot = X[4,0]
128 | thetadot = X[5,0]
129 |
130 | # 1) Body-frame velocity (u_b, w_b)
131 | u_b, w_b = self._body_velocity(xdot, zdot, theta, casadi=True)
132 | V = ca.sqrt(u_b*u_b + w_b*w_b) # actual airspeed magnitude
133 | alpha = ca.atan2(-w_b, u_b)
134 |
135 | # 2) Baseline lift & drag (no elevator => delta_e=0)
136 | L0, D0, M0 = self._lift_drag_moment(V, alpha, delta_e=0.0, casadi=True)
137 |
138 | # 3) Wind->Body rotation by alpha, Body->Inertial by theta
139 | # => net rotation by (theta + alpha)
140 | fx_aero, fz_aero = self._wind_to_inertial(theta, alpha, -D0, L0, casadi=True)
141 |
142 | # 4) Gravity in inertial is (0, -m*g)
143 | m = self.spec['mass']
144 | I = self.spec['inertia']
145 | fx_net = fx_aero
146 | fz_net = fz_aero - m*self.gravity
147 |
148 | x_ddot = fx_net / m
149 | z_ddot = fz_net / m
150 | theta_ddot = M0 / I
151 |
152 | return ca.vertcat(
153 | xdot,
154 | zdot,
155 | thetadot,
156 | x_ddot,
157 | z_ddot,
158 | theta_ddot
159 | )
160 | else:
161 | # NumPy version
162 | theta = X[2,0]
163 | xdot = X[3,0]
164 | zdot = X[4,0]
165 | thetadot = X[5,0]
166 |
167 | u_b, w_b = self._body_velocity(xdot, zdot, theta, casadi=False)
168 | V = np.sqrt(u_b**2 + w_b**2)
169 | alpha = np.arctan2(-w_b, u_b)
170 |
171 | L0, D0, M0 = self._lift_drag_moment(V, alpha, delta_e=0.0, casadi=False)
172 |
173 | fx_aero, fz_aero = self._wind_to_inertial(theta, alpha, -D0, L0, casadi=False)
174 |
175 | m = self.spec['mass']
176 | I = self.spec['inertia']
177 | fx_net = fx_aero
178 | fz_net = fz_aero - m*self.gravity
179 |
180 | x_ddot = fx_net / m
181 | z_ddot = fz_net / m
182 | theta_ddot = M0 / I
183 |
184 | return np.array([
185 | xdot,
186 | zdot,
187 | thetadot,
188 | x_ddot,
189 | z_ddot,
190 | theta_ddot
191 | ]).reshape(-1,1)
192 |
193 | #--------------------------------------------------------------------------
194 | # g(X): partial wrt each of the 4 control inputs => columns in a 6x4 matrix
195 | #--------------------------------------------------------------------------
196 | def g(self, X, casadi=False):
197 | """
198 | Control-dependent dynamics. U = [ delta_front, delta_rear, delta_pusher, delta_elevator ]^T.
199 | - front, rear, pusher = rotor forces in body coords
200 | - elevator => additional (L, D) + pitch moment
201 | """
202 | if casadi:
203 | theta = X[2,0]
204 | xdot = X[3,0]
205 | zdot = X[4,0]
206 | # body velocity
207 | u_b, w_b = self._body_velocity(xdot, zdot, theta, casadi=True)
208 | V = ca.sqrt(u_b*u_b + w_b*w_b)
209 | alpha = ca.atan2(-w_b, u_b)
210 |
211 | #-----------------------------------
212 |             # 1) front rotor => thrust along +body_z
213 | fx_front, fz_front, M_f = self._front_rotor(theta, casadi=True)
214 | # 2) rear rotor
215 | fx_rear, fz_rear, M_r = self._rear_rotor(theta, casadi=True)
216 | # 3) pusher rotor => +body_x
217 | fx_thr, fz_thr, M_t = self._pusher_rotor(theta, casadi=True)
218 | # 4) elevator => partial in lift/drag + pitch moment
219 | # => difference from delta_e=1 vs 0
220 | L_de, D_de, M_de = self._lift_drag_moment(V, alpha, delta_e=1.0, casadi=True) # "partial"
221 | fx_elev, fz_elev = self._wind_to_inertial(theta, alpha, -D_de, L_de, casadi=True)
222 |
223 | m = self.spec['mass']
224 | I = self.spec['inertia']
225 |
226 | #----------------------------------------------------------------------
227 | # Using block-building to do list-to-SX (otherwise, it reports error)
228 | # Build a 6x4 matrix (since our state dimension is 6)
229 | # row=0..5, col=0..3
230 | # We fill in the partial derivatives w.r.t. each input in each column.
231 | # Everything not affected by the input remains 0.
232 | #----------------------------------------------------------------------
233 | # Create a list of 6 rows, each row has 4 columns => (6x4)
234 | g_list = [[0]*4 for _ in range(6)]
235 |
236 | # Column 0: front rotor
237 | g_list[3][0] = fx_front/m
238 | g_list[4][0] = fz_front/m
239 | g_list[5][0] = M_f/I
240 |
241 | # Column 1: rear rotor
242 | g_list[3][1] = fx_rear/m
243 | g_list[4][1] = fz_rear/m
244 | g_list[5][1] = M_r/I
245 |
246 | # Column 2: pusher rotor
247 | g_list[3][2] = fx_thr/m
248 | g_list[4][2] = fz_thr/m
249 | g_list[5][2] = M_t/I
250 |
251 | # Column 3: elevator
252 |             g_list[3][3] = fx_elev/m # because this entry is non-zero, the relative degree w.r.t. h(x) is 2
253 |             g_list[4][3] = fz_elev/m # because this entry is non-zero, the relative degree w.r.t. h(x) is 2
254 | g_list[5][3] = M_de/I
255 |
256 | # Convert each row to a CasADi row vector, then stack them
257 | g_sx_rows = []
258 | for row_vals in g_list:
259 | # row_vals is something like [expr_col0, expr_col1, expr_col2, expr_col3]
260 | g_sx_rows.append(ca.horzcat(*row_vals))
261 |
262 | g_sx = ca.vertcat(*g_sx_rows)
263 | return g_sx
264 |
265 | else:
266 | theta = X[2,0]
267 | xdot = X[3,0]
268 | zdot = X[4,0]
269 |
270 | # body velocity
271 | u_b, w_b = self._body_velocity(xdot, zdot, theta, casadi=False)
272 | V = np.sqrt(u_b**2 + w_b**2)
273 | alpha = np.arctan2(-w_b, u_b)
274 |
275 | fx_front, fz_front, M_f = self._front_rotor(theta, casadi=False)
276 | fx_rear, fz_rear, M_r = self._rear_rotor(theta, casadi=False)
277 | fx_thr, fz_thr, M_t = self._pusher_rotor(theta, casadi=False)
278 |
279 | # elevator partial
280 | L_de, D_de, M_de = self._lift_drag_moment(V, alpha, delta_e=1.0, casadi=False) # "partial"
281 | fx_elev, fz_elev = self._wind_to_inertial(theta, alpha, -D_de, L_de, casadi=False)
282 |
283 | m = self.spec['mass']
284 | I = self.spec['inertia']
285 |
286 | g_out = np.zeros((6,4))
287 | # front
288 | g_out[3,0] = fx_front / m
289 | g_out[4,0] = fz_front / m
290 | g_out[5,0] = M_f / I
291 |
292 | # rear
293 | g_out[3,1] = fx_rear / m
294 | g_out[4,1] = fz_rear / m
295 | g_out[5,1] = M_r / I
296 |
297 | # pusher
298 | g_out[3,2] = fx_thr / m
299 | g_out[4,2] = fz_thr / m
300 | g_out[5,2] = M_t / I
301 |
302 | # elevator
303 | g_out[3,3] = fx_elev / m
304 | g_out[4,3] = fz_elev / m
305 | g_out[5,3] = M_de / I
306 |
307 | return g_out
308 |
309 | def step(self, X, U, casadi=False):
310 | """
311 | Euler step:
312 | X_{k+1} = X_k + [ f(X_k) + g(X_k)*U ] * dt
313 | """
314 | Xdot = self.f(X, casadi) + self.g(X, casadi) @ U
315 | Xnew = X + Xdot * self.dt
316 | Xnew[2,0] = angle_normalize(Xnew[2,0])
317 | return Xnew
318 |
319 | #--------------------------------------------------------------------------
320 | # TF utils: body velocity from inertial velocity
321 | #--------------------------------------------------------------------------
322 | def _cos_sin(self, theta, casadi=False):
323 | if casadi:
324 | cth = ca.cos(theta)
325 | sth = ca.sin(theta)
326 | else:
327 | cth = np.cos(theta)
328 | sth = np.sin(theta)
329 | return cth, sth
330 |
331 | def _body_velocity(self, xdot, zdot, theta, casadi=False):
332 | """
333 | Rotate inertial velocity (xdot, zdot) into body frame (u_b, w_b).
334 | [u_b] = [ cos(theta), sin(theta)] [ xdot ]
335 | [w_b] [-sin(theta), cos(theta)] [ zdot ]
336 | """
337 | cth, sth = self._cos_sin(theta, casadi)
338 | u_b = cth*xdot + sth*zdot
339 | w_b = -sth*xdot + cth*zdot
340 | return u_b, w_b
341 |
342 | #--------------------------------------------------------------------------
343 | # compute lift, drag in "wind frame"
344 | #--------------------------------------------------------------------------
345 | def _lift_blending(self, alpha, casadi=False):
346 | """
347 | Blend between linear and non-linear lift models:
348 | reference:
349 |             - Blending function: Small Unmanned Aircraft: Theory and Practice, Chap. 4:
350 | https://drive.google.com/file/d/1BjJuj8QLWV9E1FX6sHVHXIGaIizUaAJ5/view
351 | - Nonlinear lift: Pitch and Thrust Allocation for Full-Flight-Regime Control of Winged eVTOL UAVs, L-CSS, 2022.
352 | """
353 | alpha_0 = self.spec['alpha_0']
354 | M = self.spec['M']
355 |
356 | CL_linear = self.spec['C_L0'] + self.spec['C_Lalpha']*alpha
357 | c_alpha, s_alpha = self._cos_sin(alpha, casadi)
358 | CL_nonlinear = 2*s_alpha*c_alpha
359 |
360 | if casadi:
361 | tmp1 = ca.exp(-M * (alpha - alpha_0))
362 | tmp2 = ca.exp(M * (alpha + alpha_0))
363 | else:
364 | tmp1 = np.exp(-M * (alpha - alpha_0))
365 | tmp2 = np.exp(M * (alpha + alpha_0))
366 | sigma = (1 + tmp1 + tmp2) / ((1 + tmp1) * (1 + tmp2))
367 |
368 | CL_alpha = (1 - sigma) * CL_linear + sigma * CL_nonlinear
369 | return CL_alpha
370 |
371 | def _lift_drag_moment(self, V, alpha, delta_e, casadi=False):
372 | """
373 | NOTE: universal to np and casadi
374 | L,D from standard formula with linear elevator effect:
375 | L = 1/2 * rho * V^2 * S * (C_L0 + C_Lalpha*alpha + C_Ldelta_e * delta_e)
376 | D = 1/2 * rho * V^2 * S * (C_D0 + C_Dalpha*(alpha^2) + C_Ddelta_e * delta_e)
377 | M = 1/2 * rho * V^2 * S * chord * (C_m0 + C_malpha*alpha + C_mdelta_e * delta_e)
378 |
379 | Returns (L, D, M) (magnitudes).
380 | """
381 | rho = self.spec['rho']
382 | S = self.spec['S_wing']
383 |
384 | CL_alpha = self._lift_blending(alpha, casadi)
385 | CL = CL_alpha + self.spec['C_Ldelta_e']*delta_e
386 |
387 | CD = (self.spec['C_D0']
388 | + self.spec['C_Dalpha']*(alpha**2)
389 | + self.spec['C_Ddelta_e']*delta_e)
390 | CM = (self.spec['C_m0']
391 | + self.spec['C_malpha']*alpha
392 | + self.spec['C_mdelta_e']*delta_e)
393 | chord = self.spec['chord']
394 |
395 |
396 | qbar = 0.5*rho*(V**2)
397 | L = qbar * S * CL # N
398 | D = qbar * S * CD # N
399 | M = qbar * S * CM * chord # Nm
400 | return (L, D, M)
401 |
402 | #--------------------------------------------------------------------------
403 | # Rotation from wind frame to inertial
404 | # wind->body: rotate by alpha
405 | # body->inertial: rotate by theta
406 | # => total rotation by (theta + alpha)
407 | # => F_inertial = R(theta+alpha)*F_wind
408 | #--------------------------------------------------------------------------
409 | def _wind_to_inertial(self, theta, alpha, fx_w, fz_w, casadi=False):
410 | """
411 | wind frame axes: x_w = direction of velocity, z_w = +90 deg from that
412 | If (fx_w, fz_w) = (-D, L), then we rotate by (theta+alpha).
413 | """
414 | heading = theta + alpha
415 | c_heading, s_heading = self._cos_sin(heading, casadi)
416 | fx_i = c_heading * fx_w - s_heading * fz_w
417 | fz_i = s_heading * fx_w + c_heading * fz_w
418 | return fx_i, fz_i
419 |
420 | #--------------------------------------------------------------------------
421 | # Rotors utils: front, rear, pusher
422 | #--------------------------------------------------------------------------
423 | def _front_rotor(self, theta, casadi=False):
424 | """
425 |         front rotor thrust = k_front * delta_front along +body_z
426 | We'll say +body_z => (0, +k_front). Then to inertial => R(theta).
427 | Also pitch moment => +ell_f * T_f. We'll only return partial.
428 | """
429 | # partial w.r.t delta_front is (k_front)
430 | # in body coords => (fx_b, fz_b) = (0, +k_front)
431 | # rotate to inertial by R(theta).
432 | cth, sth = self._cos_sin(theta, casadi)
433 | # inertial
434 | fx_i = -sth * self.spec['k_front']
435 | fz_i = cth * self.spec['k_front']
436 | # pitch moment
437 | M_f = +self.spec['ell_f'] * self.spec['k_front']
438 | return fx_i, fz_i, M_f
439 |
440 | def _rear_rotor(self, theta, casadi=False):
441 | cth, sth = self._cos_sin(theta, casadi)
442 | fx_i = -sth * self.spec['k_rear']
443 | fz_i = cth * self.spec['k_rear']
444 | M_r = -self.spec['ell_r'] * self.spec['k_rear']
445 | return fx_i, fz_i, M_r
446 |
447 | def _pusher_rotor(self, theta, casadi=False):
448 | """
449 |         pusher rotor => thrust along +body_x => rotate by theta
450 | """
451 | cth, sth = self._cos_sin(theta, casadi)
452 | fx_i = cth * self.spec['k_pusher']
453 | fz_i = sth * self.spec['k_pusher']
454 | M_t = 0.0
455 | return fx_i, fz_i, M_t
456 |
457 | def nominal_input(self, X, G):
458 |         # not implemented
459 | return np.zeros((4, 1))
460 |
461 | def stop(self, X):
462 |         # not implemented
463 | return np.zeros((4, 1))
464 |
465 | def has_stopped(self, X, tol=0.05):
466 |         """Check if the VTOL has stopped within tolerance."""
467 | return np.linalg.norm(X[3:5, 0]) < tol
468 |
469 | def agent_barrier(self, X, obs, robot_radius, beta=1.01):
470 | # Not implemented
471 | raise NotImplementedError("Not implemented")
472 |
473 | def agent_barrier_dt(self, x_k, u_k, obs, robot_radius, beta=1.01):
474 | '''Discrete Time High Order CBF'''
475 | # Dynamics equations for the next states
476 | x_k1 = self.step(x_k, u_k, casadi=True)
477 | x_k2 = self.step(x_k1, u_k, casadi=True)
478 |
479 | def h(x, obs, robot_radius, beta=1.01):
480 | '''Computes CBF h(x) = ||x-x_obs||^2 - beta*d_min^2'''
481 | x_obs = obs[0]
482 | z_obs = obs[1]
483 | r_obs = obs[2]
484 | d_min = robot_radius + r_obs
485 |
486 | h = (x[0, 0] - x_obs)**2 + (x[1, 0] - z_obs)**2 - beta*d_min**2
487 | return h
488 |
489 | h_k2 = h(x_k2, obs, robot_radius, beta)
490 | h_k1 = h(x_k1, obs, robot_radius, beta)
491 | h_k = h(x_k, obs, robot_radius, beta)
492 |
493 | d_h = h_k1 - h_k
494 | dd_h = h_k2 - 2 * h_k1 + h_k
495 | # hocbf_2nd_order = h_ddot + (gamma1 + gamma2) * h_dot + (gamma1 * gamma2) * h_k
496 | return h_k, d_h, dd_h
497 |
498 | def render_rigid_body(self, X, U):
499 | """
500 | Return transforms for body, front rotor, rear rotor,
501 | forward (pusher) throttle, and elevator.
502 | """
503 | x, z, theta, xdot, zdot, thetadot = X.flatten()
504 | delta_front, delta_rear, delta_pusher, delta_elev = U.flatten()
505 |
506 | # 1) Body transform: rotate by theta, then translate by (x,z)
507 | transform_body = Affine2D().rotate(theta).translate(x, z) + plt.gca().transData
508 |
509 | # 2) Front rotor: place it at the front of the plane
510 | # Suppose the front is ~ +0.5 * plane_width from center
511 | # We'll rotate the same theta, then translate to (x_front, z_front)
512 | front_offset = self.spec['plane_width']/2 # tune as needed
513 | x_front = x + front_offset * np.cos(theta)
514 | z_front = z + front_offset * np.sin(theta)
515 | transform_front = Affine2D().rotate(theta).translate(x_front, z_front) + plt.gca().transData
516 |
517 | # 3) Rear rotor: place it at the negative front_offset
518 | x_rear = x - front_offset * np.cos(theta)
519 | z_rear = z - front_offset * np.sin(theta)
520 | transform_rear = Affine2D().rotate(theta).translate(x_rear, z_rear) + plt.gca().transData
521 |
522 | # 4) Forward (pusher) throttle:
523 | # For simplicity, place it at the center and beneath the plane
524 | push_offset_x = 0.0
525 | push_offset_z = -self.spec['plane_height']/2 - self.spec['pusher_height']/2
526 | # Translate locally:
527 | transform_pusher = Affine2D().translate(push_offset_x, push_offset_z)
528 | # Rotate about local origin:
529 | transform_pusher = transform_pusher.rotate(theta)
530 | # Translate to global position (x,z):
531 | transform_pusher = transform_pusher.translate(x, z)
532 | # Finally, attach it to the axes:
533 | transform_pusher = transform_pusher + plt.gca().transData
534 |
535 | # 5) Elevator: hinge at the rear
536 | # We want to rotate it by (theta + delta_elev) around the hinge
537 | # => shift the coordinate so hinge is at origin, rotate, shift back
538 | # But a simpler approach is: rotate by 'theta', then do a local rotate by 'delta_elev'
539 | # about local rectangle corner. For illustration:
540 | elevator_offset = - (self.spec['plane_width']/2 + 0.25) # plane length/2 plus a bit
541 | x_elev = x + elevator_offset * np.cos(theta)
542 | z_elev = z + elevator_offset * np.sin(theta)
543 | # first rotate by "theta" (plane orientation), then rotate about the local rectangle's pivot by 'delta_elev'
544 | transform_elev = Affine2D()
545 | transform_elev = transform_elev.rotate_around(self.spec['elev_width']/2, 0, delta_elev) # local rotation about rectangle corner
546 | transform_elev = transform_elev.rotate(theta)
547 | transform_elev = transform_elev.translate(x_elev, z_elev) + plt.gca().transData
548 |
549 | return transform_body, transform_front, transform_rear, transform_pusher, transform_elev
--------------------------------------------------------------------------------
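`VTOL2D.agent_barrier_dt` above returns (h_k, d_h, dd_h) since the position barrier has relative degree 2 for this model. A minimal sketch of the second-order discrete-time condition suggested by the `hocbf_2nd_order` comment in that method, assuming linear class-K terms with gains `gamma1` and `gamma2`:

```python
# Sketch only: second-order discrete-time HOCBF condition from (h_k, d_h, dd_h).
h_k, d_h, dd_h = vtol.agent_barrier_dt(x_k, u_k, obs, robot_radius)
gamma1, gamma2 = 0.1, 0.1
hocbf_2nd_order = dd_h + (gamma1 + gamma2) * d_h + (gamma1 * gamma2) * h_k
# the MPC/QP enforces hocbf_2nd_order >= 0
```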
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 |
3 | setup(
4 | name='safe_control',
5 | version='0.1',
6 | packages=find_packages(),
7 | install_requires=[
8 | 'cvxpy',
9 | 'numpy==1.26.4', # latest gurobipy version 11.0.2 is not compatible with numpy 2.0, but 11.0.3 will be
10 | 'matplotlib',
11 | 'gurobipy',
12 | 'shapely',
13 | 'do-mpc[full]'
14 | ],
15 | include_package_data=True,
16 | zip_safe=False,
17 | )
--------------------------------------------------------------------------------
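The `tracking.py` listing below expects `controller_type` as a dict with `'pos'` and `'att'` keys (see `LocalTrackingController.__init__`), and `control_step()` reports -2 on an infeasible QP or collision, -1 once all waypoints are reached, and 0 or 1 otherwise. A minimal per-step driving loop, assuming a `controller` has already been constructed as in `single_agent_main()`:

```python
# Sketch only: step-by-step loop around an already-constructed LocalTrackingController.
controller_type = {'pos': 'mpc_cbf', 'att': 'velocity_tracking_yaw'}  # one valid combination
# ... build `controller` with this controller_type, as in single_agent_main() ...
for _ in range(int(100 / controller.dt)):
    ret = controller.control_step()
    controller.draw_plot()
    if ret in (-1, -2):   # all waypoints reached, or infeasible QP / collision
        break
controller.export_video()
```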
/tracking.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | import matplotlib.patches as patches
4 | import os
5 | import glob
6 | import subprocess
7 | import csv
8 |
9 | """
10 | Created on June 20th, 2024
11 | @author: Taekyung Kim
12 |
13 | @description:
14 | This code implements a local tracking controller for 2D robot navigation using Control Barrier Functions (CBF) and Quadratic Programming (QP).
15 | It supports a range of robot models (e.g., Unicycle2D, DynamicUnicycle2D, SingleIntegrator2D, DoubleIntegrator2D, KinematicBicycle2D, Quad2D/3D, VTOL2D), with functionality for obstacle avoidance and waypoint following.
16 | The controller includes real-time visualization capabilities and can handle both known and unknown obstacles.
17 | The main functions demonstrate single and multi-agent scenarios, showcasing the controller's ability to navigate complex environments.
18 |
19 | @required-scripts: robots/robot.py
20 | """
21 |
22 |
23 | class InfeasibleError(Exception):
24 | '''
25 | Exception raised for errors when QP is infeasible or
26 | the robot collides with the obstacle
27 | '''
28 |
29 | def __init__(self, message="ERROR in QP or Collision"):
30 | self.message = message
31 | super().__init__(self.message)
32 |
33 |
34 | class LocalTrackingController:
35 | def __init__(self, X0, robot_spec,
36 | controller_type=None,
37 | dt=0.05,
38 | show_animation=False, save_animation=False, show_mpc_traj=False,
39 | enable_rotation=True, raise_error=False,
40 | ax=None, fig=None, env=None):
41 |
42 | self.robot_spec = robot_spec
43 |         self.pos_controller_type = controller_type.get('pos', 'cbf_qp') # 'cbf_qp', 'mpc_cbf', 'optimal_decay_cbf_qp', or 'optimal_decay_mpc_cbf'
44 |         self.att_controller_type = controller_type.get('att', 'velocity_tracking_yaw') # 'simple', 'velocity_tracking_yaw', 'gatekeeper', or 'visibility'
45 | self.dt = dt
46 |
47 | self.state_machine = 'idle' # Can be 'idle', 'track', 'stop', 'rotate'
48 | self.rotation_threshold = 0.1 # Radians
49 |
50 | self.current_goal_index = 0 # Index of the current goal in the path
51 | self.reached_threshold = 0.2
52 | # if robot_spec specifies a different reached_threshold, use that (ex. VTOL)
53 | if 'reached_threshold' in robot_spec:
54 | self.reached_threshold = robot_spec['reached_threshold']
55 | print("Using custom reached_threshold: ", self.reached_threshold)
56 |
57 | if self.robot_spec['model'] == 'SingleIntegrator2D':
58 | if X0.shape[0] == 2:
59 | X0 = np.array([X0[0], X0[1], 0.0]).reshape(-1, 1)
60 | elif X0.shape[0] != 3:
61 | raise ValueError(
62 | "Invalid initial state dimension for SingleIntegrator2D")
63 | elif self.robot_spec['model'] == 'DynamicUnicycle2D':
64 | if X0.shape[0] == 3: # set initial velocity to 0.0
65 | X0 = np.array([X0[0], X0[1], X0[2], 0.0]).reshape(-1, 1)
66 | elif self.robot_spec['model'] == 'DoubleIntegrator2D':
67 | if X0.shape[0] == 3:
68 | X0 = np.array([X0[0], X0[1], 0.0, 0.0, X0[2]]).reshape(-1, 1)
69 | elif X0.shape[0] == 2:
70 | X0 = np.array([X0[0], X0[1], 0.0, 0.0, 0.0]).reshape(-1, 1)
71 | elif X0.shape[0] != 5:
72 | raise ValueError(
73 | "Invalid initial state dimension for DoubleIntegrator2D")
74 | elif self.robot_spec['model'] in ['KinematicBicycle2D', 'KinematicBicycle2D_C3BF']:
75 | if X0.shape[0] == 3: # set initial velocity to 0.0
76 | X0 = np.array([X0[0], X0[1], X0[2], 0.0]).reshape(-1, 1)
77 | elif self.robot_spec['model'] in ['Quad2D']:
78 |             if X0.shape[0] in [2, 3]: # only initialize the x,z position if the full state is not provided
79 | X0 = np.array([X0[0], X0[1], 0.0, 0.0, 0.0, 0.0]).reshape(-1, 1)
80 | elif X0.shape[0] != 6:
81 | raise ValueError("Invalid initial state dimension for Quad2D")
82 | elif self.robot_spec['model'] == 'Quad3D':
83 | if X0.shape[0] == 2:
84 | X0 = np.array([X0[0], X0[1], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]).reshape(-1, 1)
85 | elif X0.shape[0] == 3:
86 | X0 = np.array([X0[0], X0[1], 0.0, 0.0, 0.0, X0[2], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]).reshape(-1, 1)
87 | elif X0.shape[0] == 4:
88 | X0 = np.array([X0[0], X0[1], X0[2], 0.0, 0.0, X0[3], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]).reshape(-1, 1)
89 | elif X0.shape[0] != 12:
90 | raise ValueError("Invalid initial state dimension for Quad3D")
91 | elif self.robot_spec['model'] in ['VTOL2D']:
92 | if X0.shape[0] in [2, 3]:
93 | # set initial velocity to 5.0
94 | X0 = np.array([X0[0], X0[1], 0.0, 5.0, 0.0, 0.0]).reshape(-1, 1)
95 | elif X0.shape[0] != 6:
96 | raise ValueError("Invalid initial state dimension for VTOL2D")
97 |
98 |
99 | self.u_att = None
100 |
101 | self.show_animation = show_animation
102 | self.save_animation = save_animation
103 | self.show_mpc_traj = show_mpc_traj
104 | self.enable_rotation = enable_rotation
105 | self.raise_error = raise_error
106 | if self.save_animation:
107 | self.setup_animation_saving()
108 |
109 | self.ax = ax
110 | self.fig = fig
111 | self.obs = np.array(env.obs_circle)
112 |
113 | # Initialize moving obstacles
114 | self.dyn_obs_patch = None # will be initialized after the first step
115 | self.init_obs_info = None
116 | self.init_obs_circle = None
117 |
118 | self.known_obs = np.array([])
119 | self.unknown_obs = np.array([])
120 |
121 | if show_animation:
122 | self.setup_animation_plot()
123 | else:
124 | self.ax = plt.axes() # dummy placeholder
125 |
126 | # Setup control problem
127 | self.setup_robot(X0)
128 | self.num_constraints = 5 # number of max obstacle constraints to consider in the controller
129 | if self.pos_controller_type == 'cbf_qp':
130 | from position_control.cbf_qp import CBFQP
131 | self.pos_controller = CBFQP(self.robot, self.robot_spec)
132 | elif self.pos_controller_type == 'mpc_cbf':
133 | from position_control.mpc_cbf import MPCCBF
134 | self.pos_controller = MPCCBF(self.robot, self.robot_spec, show_mpc_traj=self.show_mpc_traj)
135 | elif self.pos_controller_type == 'optimal_decay_cbf_qp':
136 | from position_control.optimal_decay_cbf_qp import OptimalDecayCBFQP
137 | self.pos_controller = OptimalDecayCBFQP(self.robot, self.robot_spec)
138 | elif self.pos_controller_type == 'optimal_decay_mpc_cbf':
139 | from position_control.optimal_decay_mpc_cbf import OptimalDecayMPCCBF
140 | self.pos_controller = OptimalDecayMPCCBF(self.robot, self.robot_spec)
141 | else:
142 | raise ValueError(
143 | f"Unknown controller type: {self.pos_controller_type}")
144 |
145 | if self.enable_rotation and self.robot_spec['model'] in ['SingleIntegrator2D', 'DoubleIntegrator2D']:
146 | if self.att_controller_type == 'simple':
147 | from attitude_control.simple_attitude import SimpleAtt
148 | self.att_controller = SimpleAtt(self.robot, self.robot_spec)
149 | elif self.att_controller_type == 'velocity_tracking_yaw':
150 | from attitude_control.velocity_tracking_yaw import VelocityTrackingYaw
151 | self.att_controller = VelocityTrackingYaw(self.robot, self.robot_spec)
152 | elif self.att_controller_type == 'gatekeeper':
153 | from attitude_control.gatekeeper_attitude import GatekeeperAtt
154 | self.att_controller = GatekeeperAtt(self.robot, self.robot_spec)
155 | self.att_controller.setup_pos_controller(self.pos_controller)
156 | elif self.att_controller_type == 'visibility':
157 | from attitude_control.visibility_promoting_yaw import VisibilityAtt
158 | self.att_controller = VisibilityAtt(self.robot, self.robot_spec)
159 |
160 | else:
161 | raise ValueError(
162 | f"Unknown attitude controller type: {self.att_controller_type}")
163 | else:
164 | self.att_controller = None
165 | self.goal = None
166 |
167 | def setup_animation_saving(self):
168 | self.current_directory_path = os.getcwd()
169 | if not os.path.exists(self.current_directory_path + "/output/animations"):
170 | os.makedirs(self.current_directory_path + "/output/animations")
171 | self.save_per_frame = 1
172 | self.ani_idx = 0
173 |
174 | def setup_animation_plot(self):
175 | # Initialize plotting
176 | if self.ax is None:
177 | self.ax = plt.axes()
178 | if self.fig is None:
179 | self.fig = plt.figure()
180 | plt.ion()
181 | self.ax.set_xlabel("X [m]")
182 | if self.robot_spec['model'] in ['Quad2D', 'VTOL2D']:
183 | self.ax.set_ylabel("Z [m]")
184 | else:
185 | self.ax.set_ylabel("Y [m]")
186 | self.ax.set_aspect(1)
187 | self.fig.tight_layout()
188 | self.waypoints_scatter = self.ax.scatter(
189 | [], [], s=10, facecolors='g', edgecolors='g', alpha=0.5)
190 |
191 | def setup_robot(self, X0):
192 | from robots.robot import BaseRobot
193 | self.robot = BaseRobot(
194 | X0.reshape(-1, 1), self.robot_spec, self.dt, self.ax)
195 |
196 | def set_waypoints(self, waypoints):
197 | if type(waypoints) == list:
198 | waypoints = np.array(waypoints, dtype=float)
199 | self.waypoints = self.filter_waypoints(waypoints)
200 | self.current_goal_index = 0
201 |
202 | self.goal = self.update_goal()
203 | if self.goal is not None:
204 | if not self.robot.is_in_fov(self.goal):
205 | if self.robot_spec['exploration']:
206 | # when tracking class used in exploration scenario,
207 | # the goal is updated usually when the robot is far away from the unsafe area (considering sensing range)
208 | self.state_machine = 'rotate'
209 | else:
210 | # normal tracking mode
211 | self.state_machine = 'stop'
212 | self.goal = None # let the robot stop then rotate
213 |
214 | else:
215 | self.state_machine = 'track'
216 |
217 | if self.show_animation:
218 | self.waypoints_scatter.set_offsets(self.waypoints[:, :2])
219 |
220 | def filter_waypoints(self, waypoints):
221 | '''
222 | Initially filter out waypoints that are too close to the robot
223 | '''
224 | if len(waypoints) < 2:
225 | return waypoints
226 |
227 | robot_pos = self.robot.get_position()
228 | if self.robot_spec['model'] in ['Quad3D']:
229 | n_pos = 3
230 | robot_pos = np.hstack([robot_pos, self.robot.get_z()])
231 | aug_waypoints = np.vstack((robot_pos, waypoints[:, :n_pos]))
232 | else:
233 | n_pos = 2
234 | aug_waypoints = np.vstack((robot_pos, waypoints[:, :n_pos]))
235 |
236 | distances = np.linalg.norm(np.diff(aug_waypoints, axis=0), axis=1)
237 | mask = np.concatenate(([False], distances >= self.reached_threshold))
238 | return aug_waypoints[mask]
239 |
240 | def goal_reached(self, current_position, goal_position):
241 | return np.linalg.norm(current_position[:2] - goal_position[:2]) < self.reached_threshold
242 |
243 | def has_reached_goal(self):
244 |         # return whether self.goal is None or not
245 | if self.state_machine in ['stop']:
246 | return False
247 | return self.goal is None
248 |
249 | def set_unknown_obs(self, unknown_obs):
250 | unknown_obs = np.array(unknown_obs)
251 | if unknown_obs.shape[1] == 3:
252 |             zeros = np.zeros((unknown_obs.shape[0], 2))
253 | unknown_obs = np.hstack((unknown_obs, zeros))
254 | self.unknown_obs = unknown_obs
255 | for obs_info in self.unknown_obs:
256 | ox, oy, r = obs_info[:3]
257 | self.ax.add_patch(
258 | patches.Circle(
259 | (ox, oy), r,
260 | edgecolor='black',
261 | facecolor='orange',
262 | fill=True,
263 | alpha=0.4
264 | )
265 | )
266 |
267 | def get_nearest_unpassed_obs(self, detected_obs, angle_unpassed=np.pi*2, obs_num=5):
268 | def angle_normalize(x):
269 | return (((x + np.pi) % (2 * np.pi)) - np.pi)
270 | '''
271 |         Get the nearest obs_num obstacles that haven't been passed by (i.e., they are still in front of the robot, or the robot should still consider them).
272 | '''
273 |
274 | if self.robot_spec['model'] in ['SingleIntegrator2D', 'DoubleIntegrator2D', 'Quad2D', 'Quad3D']:
275 | angle_unpassed=np.pi*2
276 | elif self.robot_spec['model'] in ['Unicycle2D', 'DynamicUnicycle2D', 'KinematicBicycle2D', 'KinematicBicycle2D_C3BF', 'VTOL2D']:
277 | angle_unpassed=np.pi*1.2
278 |
279 | if len(detected_obs) != 0:
280 | if len(self.obs) == 0:
281 | all_obs = np.array(detected_obs)
282 | else:
283 | all_obs = np.vstack((self.obs, detected_obs))
284 | # return np.array(detected_obs).reshape(-1, 1) just returning the detected obs
285 | else:
286 | all_obs = self.obs
287 |
288 | if len(all_obs) == 0:
289 | return None
290 |
291 | if all_obs.ndim == 1:
292 | all_obs = all_obs.reshape(1, -1)
293 |
294 | unpassed_obs = []
295 | robot_pos = self.robot.get_position()
296 | robot_yaw = self.robot.get_orientation()
297 |
298 | # Iterate through each detected obstacle
299 | for obs in all_obs:
300 | obs_pos = np.array([obs[0], obs[1]])
301 | to_obs_vector = obs_pos - robot_pos
302 |
303 | # Calculate the angle between the robot's heading and the vector to the obstacle
304 | robot_heading_vector = np.array([np.cos(robot_yaw), np.sin(robot_yaw)])
305 | angle_to_obs = np.arctan2(to_obs_vector[1], to_obs_vector[0])
306 | angle_diff = abs(angle_normalize(angle_to_obs - robot_yaw))
307 |
308 | # If the obstacle is within the forward-facing 180 degrees, consider it
309 | if angle_diff <= angle_unpassed/2:
310 | unpassed_obs.append(obs)
311 |
312 | # If no unpassed obstacles are found, return the nearest obstacles from the full all_obs list
313 | if len(unpassed_obs) == 0:
314 | all_obs = np.array(all_obs)
315 | distances = np.linalg.norm(all_obs[:, :2] - robot_pos, axis=1)
316 | nearest_indices = np.argsort(distances)[:5] # Get indices of the nearest 5 obstacles
317 | return all_obs[nearest_indices]
318 |
319 | # Now, find the nearest unpassed obstacles
320 | unpassed_obs = np.array(unpassed_obs)
321 | distances = np.linalg.norm(unpassed_obs[:, :2] - robot_pos, axis=1)
322 | nearest_indices = np.argsort(distances)[:obs_num] # Get indices of the nearest 'obs_num' (max 5) unpassed obstacles
323 |
324 | return unpassed_obs[nearest_indices]
325 |
326 | def get_nearest_obs(self, detected_obs):
327 | # if there was new obstacle detected, update the obs
328 | if len(detected_obs) != 0:
329 | if len(self.obs) == 0:
330 | all_obs = np.array(detected_obs)
331 | else:
332 | all_obs = np.vstack((self.obs, detected_obs))
333 | # return np.array(detected_obs).reshape(-1, 1) just returning the detected obs
334 | else:
335 | all_obs = self.obs
336 |
337 | if len(all_obs) == 0:
338 | return None
339 |
340 | if all_obs.ndim == 1:
341 | all_obs = all_obs.reshape(1, -1)
342 |
343 | radius = all_obs[:, 2]
344 | distances = np.linalg.norm(all_obs[:, :2] - self.robot.X[:2].T, axis=1)
345 | min_distance_index = np.argmin(distances-radius)
346 | nearest_obstacle = all_obs[min_distance_index]
347 | return nearest_obstacle.reshape(-1, 1)
348 |
349 | # Update dynamic obs position
350 | def step_dyn_obs(self):
351 |         """If self.obs is an (n, 5) array, e.g. [x, y, r, vx, vy], update obstacle positions each time step"""
352 | if len(self.obs) != 0 and self.obs.shape[1] >= 5:
353 | self.obs[:, 0] += self.obs[:, 3] * self.dt
354 | self.obs[:, 1] += self.obs[:, 4] * self.dt
355 |
356 | def render_dyn_obs(self):
357 | for i, obs_info in enumerate(self.obs):
358 | # obs: [x, y, r, vx, vy]
359 | ox, oy, r = obs_info[:3]
360 | self.dyn_obs_patch[i].center = ox, oy
361 | self.dyn_obs_patch[i].set_radius(r)
362 |
363 | for i, obs_info in enumerate(self.init_obs_info):
364 | ox, oy, r = obs_info[:3]
365 | self.init_obs_circle[i].center = ox, oy
366 | self.init_obs_circle[i].set_radius(r)
367 |
368 | def is_collide_unknown(self):
369 | # if self.unknown_obs is None:
370 | # return False
371 | robot_radius = self.robot.robot_radius
372 |
373 | if self.unknown_obs is not None:
374 | for obs in self.unknown_obs:
375 | # check if the robot collides with the obstacle
376 | distance = np.linalg.norm(self.robot.X[:2, 0] - obs[:2])
377 | if distance < (obs[2] + robot_radius):
378 | return True
379 |
380 | if self.obs is not None:
381 | for obs in self.obs:
382 | # check if the robot collides with the obstacle
383 | distance = np.linalg.norm(self.robot.X[:2, 0] - obs[:2])
384 | if distance < (obs[2] + robot_radius):
385 | return True
386 |
387 | # Collision with the ground
388 | if self.robot_spec['model'] in ['VTOL2D']:
389 | if self.robot.X[1, 0] < 0:
390 | return True
391 | if np.abs(self.robot.X[2, 0]) > self.robot_spec['pitch_max']:
392 | return True
393 | return False
394 |
395 | def update_goal(self):
396 | '''
397 | Update the goal from waypoints
398 | '''
399 | if self.robot_spec['model'] in ['Quad3D']:
400 | n_pos = 3
401 | else:
402 | n_pos = 2
403 |
404 | if self.state_machine == 'rotate':
405 | # in-place rotation
406 | current_angle = self.robot.get_orientation()
407 | goal_angle = np.arctan2(self.waypoints[0][1] - self.robot.X[1, 0],
408 | self.waypoints[0][0] - self.robot.X[0, 0])
409 | if self.robot_spec['model'] in ['Quad2D', 'VTOL2D']: # These skip 'rotate' state since there is no yaw angle
410 | self.state_machine = 'track'
411 | if not self.enable_rotation:
412 | self.state_machine = 'track'
413 | if abs(current_angle - goal_angle) > self.rotation_threshold:
414 | return self.waypoints[0][:n_pos]
415 | else:
416 | self.state_machine = 'track'
417 | self.u_att = None
418 | print("set u_att to none")
419 |
420 | # Check if all waypoints are reached;
421 | if self.current_goal_index >= len(self.waypoints):
422 | return None
423 |
424 | if self.goal_reached(self.robot.X, np.array(self.waypoints[self.current_goal_index]).reshape(-1, 1)):
425 | self.current_goal_index += 1
426 |
427 | if self.current_goal_index >= len(self.waypoints):
428 | self.state_machine = 'idle'
429 | return None
430 |
431 | goal = np.array(self.waypoints[self.current_goal_index][0:n_pos])
432 | return goal
433 |
434 | def draw_plot(self, pause=0.01, force_save=False):
435 | if self.show_animation:
436 | if self.dyn_obs_patch is None:
437 | # Initialize moving obstacles
438 | self.dyn_obs_patch = [self.ax.add_patch(plt.Circle(
439 | (0, 0), 0, edgecolor='black', facecolor='gray', fill=True)) for _ in range(len(self.obs))]
440 | self.init_obs_info = self.obs.copy()
441 | self.init_obs_circle = [self.ax.add_patch(plt.Circle((0, 0), 0, edgecolor='black', facecolor='none', linestyle='--')) for _ in self.init_obs_info]
442 |
443 | self.render_dyn_obs()
444 |
445 | self.fig.canvas.draw_idle()
446 | self.fig.canvas.flush_events()
447 |
448 | # move the square frame of the plot based on robot's x position
449 | # if self.robot_spec['model'] in ['VTOL2D']:
450 | # x = np.clip(self.robot.X[0, 0], 7.5, 67.5)
451 | # self.ax.set_xlim(x-7.5, x+7.5)
452 | # self.ax.set_ylim(0, 15)
453 | # self.fig.tight_layout()
454 |
455 | plt.pause(pause)
456 | if self.save_animation:
457 | self.ani_idx += 1
458 | if force_save or self.ani_idx % self.save_per_frame == 0:
459 | plt.savefig(self.current_directory_path +
460 | "/output/animations/" + "t_step_" + str(self.ani_idx//self.save_per_frame).zfill(4) + ".png", dpi=300)
461 | # plt.savefig(self.current_directory_path +
462 | # "/output/animations/" + "t_step_" + str(self.ani_idx//self.save_per_frame).zfill(4) + ".svg")
463 |
464 | def control_step(self):
465 | '''
466 | Simulate one step of tracking control with CBF-QP with the given waypoints.
467 | Output:
468 | - -2 or QPError: if the QP is infeasible or the robot collides with the obstacle
469 | - -1: all waypoints reached
470 | - 0: normal
471 | - 1: visibility violation
472 | '''
473 | # update state machine
474 | if self.state_machine == 'stop':
475 | if self.robot.has_stopped():
476 | if self.enable_rotation:
477 | self.state_machine = 'rotate'
478 | else:
479 | self.state_machine = 'track'
480 | self.goal = self.update_goal()
481 | else:
482 | self.goal = self.update_goal()
483 |
484 | # 1. Update the detected obstacles
485 | detected_obs = self.robot.detect_unknown_obs(self.unknown_obs)
486 | # self.nearest_obs = self.get_nearest_obs(detected_obs)
487 | self.nearest_multi_obs = self.get_nearest_unpassed_obs(detected_obs, obs_num=self.num_constraints)
488 | if self.nearest_multi_obs is not None:
489 | self.nearest_obs = self.nearest_multi_obs[0].reshape(-1, 1)
490 |
491 | # 2. Update Moving Obstacles
492 | self.step_dyn_obs()
493 |
494 |         # 3. Compute nominal control input, pre-defined in the robot class
495 | if self.state_machine == 'rotate':
496 | goal_angle = np.arctan2(self.goal[1] - self.robot.X[1, 0],
497 | self.goal[0] - self.robot.X[0, 0])
498 | if self.robot_spec['model'] in ['SingleIntegrator2D', 'DoubleIntegrator2D']:
499 | self.u_att = self.robot.rotate_to(goal_angle)
500 | u_ref = self.robot.stop()
501 | elif self.robot_spec['model'] in ['Unicycle2D', 'DynamicUnicycle2D', 'KinematicBicycle2D', 'KinematicBicycle2D_C3BF', 'Quad2D', 'Quad3D', 'VTOL2D']:
502 | u_ref = self.robot.rotate_to(goal_angle)
503 | elif self.goal is None:
504 | u_ref = self.robot.stop()
505 | else:
506 | # Normal waypoint tracking
507 | if self.pos_controller_type == 'optimal_decay_cbf_qp':
508 | u_ref = self.robot.nominal_input(self.goal, k_omega=3.0, k_a=0.5, k_v=0.5)
509 | else:
510 | u_ref = self.robot.nominal_input(self.goal)
511 |
512 | # 4. Update the CBF constraints & 5. Solve the control problem & 6. Draw Collision Cones for C3BF
513 | control_ref = {'state_machine': self.state_machine,
514 | 'u_ref': u_ref,
515 | 'goal': self.goal}
516 | if self.pos_controller_type in ['optimal_decay_cbf_qp', 'cbf_qp']:
517 | u = self.pos_controller.solve_control_problem(
518 | self.robot.X, control_ref, self.nearest_obs)
519 | self.robot.draw_collision_cone(self.robot.X, [self.nearest_obs], self.ax)
520 | else:
521 | u = self.pos_controller.solve_control_problem(
522 | self.robot.X, control_ref, self.nearest_multi_obs)
523 | self.robot.draw_collision_cone(self.robot.X, self.nearest_multi_obs, self.ax)
524 | plt.figure(self.fig.number)
525 |
526 | # 7. Update the attitude controller
527 | if self.state_machine == 'track' and self.att_controller is not None:
528 | # att_controller is only defined for integrators
529 | self.u_att = self.att_controller.solve_control_problem(
530 | self.robot.X, self.robot.yaw, u)
531 |
532 | # 8. Raise an error if the QP is infeasible, or the robot collides with the obstacle
533 | collide = self.is_collide_unknown()
534 | if self.pos_controller.status != 'optimal' or collide:
535 | cause = "Collision" if collide else "Infeasible"
536 | self.draw_infeasible()
537 | print(f"{cause} detected !!")
538 | if self.raise_error:
539 | raise InfeasibleError(f"{cause} detected !!")
540 | return -2
541 |
542 | # 9. Step the robot
543 | self.robot.step(u, self.u_att)
544 | self.u_pos = u
545 |
546 | if self.show_animation:
547 | self.robot.render_plot()
548 |
549 | # 10. Update sensing information
550 | if 'sensor' in self.robot_spec and self.robot_spec['sensor'] == 'rgbd':
551 | self.robot.update_sensing_footprints()
552 | self.robot.update_safety_area()
553 |
554 | beyond_flag = self.robot.is_beyond_sensing_footprints()
555 | if beyond_flag and self.show_animation:
556 | pass
557 | # print("Visibility Violation")
558 | else:
559 | beyond_flag = 0 # not checking sensing footprint
560 |
561 | if self.goal is None and self.state_machine != 'stop':
562 | return -1 # all waypoints reached
563 | return beyond_flag
564 |
565 | def get_control_input(self):
566 | return self.u_pos
567 |
568 | def draw_infeasible(self):
569 | if self.show_animation:
570 | self.robot.render_plot()
571 | current_position = self.robot.get_position()
572 | self.ax.text(current_position[0]+0.5, current_position[1] +
573 | 0.5, '!', color='red', weight='bold', fontsize=22)
574 | self.draw_plot(pause=5, force_save=True)
575 |
576 | def export_video(self):
577 | # convert the image sequence to a video
578 | if self.show_animation and self.save_animation:
579 | subprocess.call(['ffmpeg',
580 | '-framerate', '30', # Input framerate (adjust if needed)
581 | '-i', self.current_directory_path + "/output/animations/t_step_%04d.png",
582 | '-vf', 'scale=1920:982,fps=60', # Ensure height is divisible by 2 and set output framerate
583 | '-pix_fmt', 'yuv420p',
584 | self.current_directory_path + "/output/animations/tracking.mp4"])
585 |
586 | for file_name in glob.glob(self.current_directory_path +
587 | "/output/animations/*.png"):
588 | os.remove(file_name)
589 |
590 |     # # If the function above is not compatible with your device, please use the function provided below
591 | # def export_video(self):
592 | # # convert the image sequence to a video
593 | # if self.show_animation and self.save_animation:
594 | # subprocess.call(['ffmpeg',
595 | # # Input framerate (adjust if needed)
596 | # '-framerate', '30',
597 | # '-i', self.current_directory_path+"/output/animations/t_step_%04d.png",
598 | # '-filter:v', 'fps=60', # Output framerate
599 | # '-pix_fmt', 'yuv420p',
600 | # self.current_directory_path+"/output/animations/tracking.mp4"])
601 |
602 | # for file_name in glob.glob(self.current_directory_path +
603 | # "/output/animations/*.png"):
604 | # os.remove(file_name)
605 |
606 | def run_all_steps(self, tf=30, write_csv=False):
607 | print("===================================")
608 | print("============ Tracking =============")
609 | print("Start following the generated path.")
610 | unexpected_beh = 0
611 |
612 | if write_csv:
613 | # create a csv file to record the states, control inputs, and CBF parameters
614 | with open('output.csv', 'w', newline='') as csvfile:
615 | writer = csv.writer(csvfile)
616 | writer.writerow(['states', 'control_inputs', 'alpha1', 'alpha2'])
617 |
618 | for _ in range(int(tf / self.dt)):
619 | ret = self.control_step()
620 | self.draw_plot()
621 | unexpected_beh += ret
622 |
623 | # get states of the robot
624 | robot_state = self.robot.X[:,0].flatten()
625 | control_input = self.get_control_input().flatten()
626 | # print(f"Robot state: {robot_state}")
627 | # print(f"Control input: {control_input}")
628 |
629 | if write_csv:
630 | # append the states, control inputs, and CBF parameters by appending to csv
631 | with open('output.csv', 'a', newline='') as csvfile:
632 | writer = csv.writer(csvfile)
633 | writer.writerow(np.append(robot_state, np.append(control_input, [self.pos_controller.cbf_param['alpha1'], self.pos_controller.cbf_param['alpha2']])))
634 |
635 |
636 |             if ret == -1 or ret == -2: # all waypoints reached, or infeasible QP / collision
637 | break
638 |
639 | self.export_video()
640 |
641 | print("===== Tracking finished =====")
642 | print("===================================\n")
643 | if self.show_animation:
644 | plt.ioff()
645 | plt.close()
646 |
647 | return unexpected_beh
648 |
649 |
650 | def single_agent_main(controller_type):
651 | dt = 0.05
652 | model = 'DynamicUnicycle2D' # SingleIntegrator2D, DynamicUnicycle2D, KinematicBicycle2D, KinematicBicycle2D_C3BF, DoubleIntegrator2D, Quad2D, Quad3D, VTOL2D
653 |
654 | waypoints = [
655 |         [2, 2, np.pi/2],
656 | [2, 12, 0],
657 | [12, 12, 0],
658 | [12, 2, 0]
659 | ]
660 |
661 | # Define static obs
662 | known_obs = np.array([[2.2, 5.0, 0.2], [3.0, 5.0, 0.2], [4.0, 9.0, 0.3], [1.5, 10.0, 0.5], [9.0, 11.0, 1.0], [7.0, 7.0, 3.0], [4.0, 3.5, 1.5],
663 | [10.0, 7.3, 0.4],
664 | [6.0, 13.0, 0.7], [5.0, 10.0, 0.6], [11.0, 5.0, 0.8], [13.5, 11.0, 0.6]])
665 |
666 | env_width = 14.0
667 | env_height = 14.0
668 | if model == 'SingleIntegrator2D':
669 | robot_spec = {
670 | 'model': 'SingleIntegrator2D',
671 | 'v_max': 1.0,
672 | 'radius': 0.25
673 | }
674 | elif model == 'DoubleIntegrator2D':
675 | robot_spec = {
676 | 'model': 'DoubleIntegrator2D',
677 | 'v_max': 1.0,
678 | 'a_max': 1.0,
679 | 'radius': 0.25,
680 | 'sensor': 'rgbd'
681 | }
682 | elif model == 'DynamicUnicycle2D':
683 | robot_spec = {
684 | 'model': 'DynamicUnicycle2D',
685 | 'w_max': 0.5,
686 | 'a_max': 0.5,
687 | 'sensor': 'rgbd',
688 | 'radius': 0.25
689 | }
690 | elif model == 'KinematicBicycle2D':
691 | robot_spec = {
692 | 'model': 'KinematicBicycle2D',
693 | 'a_max': 0.5,
694 | 'sensor': 'rgbd',
695 | 'radius': 0.5
696 | }
697 | elif model == 'KinematicBicycle2D_C3BF':
698 | robot_spec = {
699 | 'model': 'KinematicBicycle2D_C3BF',
700 | 'a_max': 0.5,
701 | 'radius': 0.5
702 | }
703 | dynamic_obs = []
704 | for i, obs_info in enumerate(known_obs):
705 | ox, oy, r = obs_info[:3]
706 | if i % 2 == 0:
707 | vx, vy = 0.1, 0.05
708 | else:
709 | vx, vy = -0.1, 0.05
710 | dynamic_obs.append([ox, oy, r, vx, vy])
711 | known_obs = np.array(dynamic_obs)
712 |
713 | elif model == 'Quad2D':
714 | robot_spec = {
715 | 'model': 'Quad2D',
716 | 'f_min': 3.0,
717 | 'f_max': 10.0,
718 | 'sensor': 'rgbd',
719 | 'radius': 0.25
720 | }
721 | elif model == 'Quad3D':
722 | robot_spec = {
723 | 'model': 'Quad3D',
724 | 'radius': 0.25
725 | }
726 | # override the waypoints with z axis
727 | waypoints = [
728 |             [2, 2, 0, np.pi/2],
729 | [2, 12, 1, 0],
730 | [12, 12, -1, 0],
731 | [12, 2, 0, 0]
732 | ]
733 | elif model == 'VTOL2D':
734 |         # VTOL has pretty different dynamics, so create a special test case
735 | robot_spec = {
736 | 'model': 'VTOL2D',
737 | 'radius': 0.6,
738 | 'v_max': 20.0,
739 | 'reached_threshold': 1.0 # meter
740 | }
741 | # override the waypoints and known_obs
742 | waypoints = [
743 | [2, 10],
744 | [70, 10],
745 | [70, 0.5]
746 | ]
747 | pillar_1_x = 67.0
748 | pillar_2_x = 73.0
749 | known_obs = np.array([
750 | # [pillar_1_x, 1.0, 0.5],
751 | # [pillar_1_x, 2.0, 0.5],
752 | # [pillar_1_x, 3.0, 0.5],
753 | # [pillar_1_x, 4.0, 0.5],
754 | # [pillar_1_x, 5.0, 0.5],
755 | [pillar_1_x, 6.0, 0.5],
756 | [pillar_1_x, 7.0, 0.5],
757 | [pillar_1_x, 8.0, 0.5],
758 | [pillar_1_x, 9.0, 0.5],
759 | [pillar_2_x, 1.0, 0.5],
760 | [pillar_2_x, 2.0, 0.5],
761 | [pillar_2_x, 3.0, 0.5],
762 | [pillar_2_x, 4.0, 0.5],
763 | [pillar_2_x, 5.0, 0.5],
764 | [pillar_2_x, 6.0, 0.5],
765 | [pillar_2_x, 7.0, 0.5],
766 | [pillar_2_x, 8.0, 0.5],
767 | [pillar_2_x, 9.0, 0.5],
768 | [pillar_2_x, 10.0, 0.5],
769 | [pillar_2_x, 11.0, 0.5],
770 | [pillar_2_x, 12.0, 0.5],
771 | [pillar_2_x, 13.0, 0.5],
772 | [pillar_2_x, 14.0, 0.5],
773 | [pillar_2_x, 15.0, 0.5],
774 | [60.0, 12.0, 1.5]
775 | ])
776 |
777 | env_width = 75.0
778 | env_height = 20.0
779 | plt.rcParams['figure.figsize'] = [12, 5]
780 |
781 | waypoints = np.array(waypoints, dtype=np.float64)
782 |
783 | if model in ['SingleIntegrator2D', 'DoubleIntegrator2D', 'Quad2D', 'Quad3D']:
784 | x_init = waypoints[0]
785 | elif model == 'VTOL2D':
786 | v_init = robot_spec['v_max'] # m/s
787 | x_init = np.hstack((waypoints[0][0:2], 0.0, v_init, 0.0, 0.0))
788 | else:
789 | x_init = np.append(waypoints[0], 1.0)
790 |
791 | if len(known_obs) > 0 and known_obs.shape[1] != 5:
792 |         known_obs = np.hstack((known_obs, np.zeros((known_obs.shape[0], 2)))) # pad static obstacles with zero (vx, vy) velocity columns
793 |
794 | plot_handler = plotting.Plotting(width=env_width, height=env_height, known_obs=known_obs)
795 | ax, fig = plot_handler.plot_grid("") # you can set the title of the plot here
796 | env_handler = env.Env()
797 |
798 | tracking_controller = LocalTrackingController(x_init, robot_spec,
799 | controller_type=controller_type,
800 | dt=dt,
801 | show_animation=True,
802 | save_animation=False,
803 | show_mpc_traj=False,
804 | ax=ax, fig=fig,
805 | env=env_handler)
806 |
807 | # Set obstacles
808 | tracking_controller.obs = known_obs
809 | # tracking_controller.set_unknown_obs(unknown_obs)
810 | tracking_controller.set_waypoints(waypoints)
811 | unexpected_beh = tracking_controller.run_all_steps(tf=100)
812 |
813 | def multi_agent_main(controller_type, save_animation=False):
814 | dt = 0.05
815 |
816 |     # temporary waypoints
817 | waypoints = [
818 | [2, 2, 0],
819 | [2, 12, 0],
820 | [12, 12, 0],
821 | [12, 2, 0]
822 | ]
823 | waypoints = np.array(waypoints, dtype=np.float64)
824 |
825 | x_init = waypoints[0]
826 | x_goal = waypoints[-1]
827 |
828 | plot_handler = plotting.Plotting()
829 | ax, fig = plot_handler.plot_grid("")
830 | env_handler = env.Env()
831 |
832 | robot_spec = {
833 | 'model': 'DynamicUnicycle2D', #'DoubleIntegrator2D'
834 | 'w_max': 0.5,
835 | 'a_max': 0.5,
836 | 'sensor': 'rgbd',
837 | 'fov_angle': 45.0,
838 | 'cam_range': 3.0,
839 | 'radius': 0.25
840 | }
841 |
842 | robot_spec['robot_id'] = 0
843 | controller_0 = LocalTrackingController(x_init, robot_spec,
844 | controller_type=controller_type,
845 | dt=dt,
846 | show_animation=True,
847 | save_animation=save_animation,
848 | ax=ax, fig=fig,
849 | env=env_handler)
850 |
851 | robot_spec = {
852 | 'model': 'DynamicUnicycle2D', #'DoubleIntegrator2D'
853 | 'w_max': 1.0,
854 | 'a_max': 1.5,
855 | 'v_max': 2.0,
856 | 'sensor': 'rgbd',
857 | 'fov_angle': 90.0,
858 | 'cam_range': 5.0,
859 | 'radius': 0.25
860 | }
861 |
862 | robot_spec['robot_id'] = 1
863 | controller_1 = LocalTrackingController(x_goal, robot_spec,
864 | controller_type=controller_type,
865 | dt=dt,
866 | show_animation=True,
867 | save_animation=False,
868 | ax=ax, fig=fig,
869 | env=env_handler)
870 |
871 | # unknown_obs = np.array([[9.0, 8.8, 0.3]])
872 | # tracking_controller.set_unknown_obs(unknown_obs)
873 | controller_0.set_waypoints(waypoints)
874 | controller_1.set_waypoints(waypoints[::-1])
875 | tf = 50
876 | for _ in range(int(tf / dt)):
877 | ret_list = []
878 | ret_list.append(controller_0.control_step())
879 | ret_list.append(controller_1.control_step())
880 | controller_0.draw_plot()
881 | # if all elements of ret_list are -1, break
882 | if all([ret == -1 for ret in ret_list]):
883 | break
884 |
885 | if save_animation:
886 | controller_0.export_video()
887 |
888 |
889 | if __name__ == "__main__":
890 | from utils import plotting
891 | from utils import env
892 | import math
893 |
894 |     single_agent_main(controller_type={'pos': 'mpc_cbf', 'att': 'gatekeeper'}) # only the integrator models use an attitude controller; it is ignored otherwise
895 |
--------------------------------------------------------------------------------
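
For reference, a minimal sketch of how the obstacle arrays consumed by `single_agent_main` above are assembled: static obstacles are `[x, y, radius]` rows that get padded with zero velocity columns, while dynamic obstacles (as in the C3BF case) carry `[x, y, radius, vx, vy]` directly. The numeric values below are illustrative only:

```python
import numpy as np

# Static obstacles: one [x, y, radius] row each
static_obs = np.array([[2.2, 5.0, 0.2],
                       [9.0, 11.0, 1.0]])

# Pad with zero velocity so every row becomes [x, y, radius, vx, vy]
static_obs = np.hstack((static_obs, np.zeros((static_obs.shape[0], 2))))

# Dynamic obstacles carry their velocity directly
dynamic_obs = np.array([[7.0, 7.0, 0.5, 0.1, 0.05]])

known_obs = np.vstack((static_obs, dynamic_obs))
# known_obs can then be passed to Plotting(..., known_obs=known_obs) and
# assigned to tracking_controller.obs, as done in single_agent_main above
```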
/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tkkim-robot/safe_control/4717958d310b23b292e9160927e5e0da14b54cba/utils/__init__.py
--------------------------------------------------------------------------------
/utils/env.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | class Env:
3 | def __init__(self, width=20.0, height=20.0, known_obs = [], resolution=0.1):
4 | self.width = width
5 | self.height = height
6 | self.resolution = resolution # meters per cell
7 | self.x_range = (0, width)
8 | self.y_range = (0, height)
9 | self.obs_boundary = self.set_obs_boundary(width, height)
10 | self.obs_circle = self.set_obs_circle(known_obs)
11 | self.obs_rectangle = self.set_obs_rectangle()
12 | self._discretize_map()
13 |
14 | def _discretize_map(self):
15 | self.grid_width = int(self.width / self.resolution)
16 | self.grid_height = int(self.height / self.resolution)
17 |
18 | def get_map_shape(self):
19 | return (self.grid_height, self.grid_width)
20 |
21 | def f_to_grid(self, points):
22 | points = np.array(points)
23 | original_shape = points.shape
24 |
25 | if points.ndim == 1:
26 | points = points.reshape(1, -1)
27 |
28 | grid_points = (points / self.resolution).astype(int)
29 |
30 | if original_shape == (2,):
31 | return grid_points[0] # Return a 1D array for a single input point
32 | return grid_points
33 |
34 | def grid_to_f(self, grid_points):
35 | grid_points = np.array(grid_points)
36 | original_shape = grid_points.shape
37 |
38 | if grid_points.ndim == 1:
39 | grid_points = grid_points.reshape(1, -1)
40 |
41 | points = (grid_points * self.resolution) + (self.resolution / 2)
42 |
43 | if original_shape == (2,):
44 | return points[0] # Return a 1D array for a single input point
45 | return points
46 |
47 | @staticmethod
48 |     def set_obs_boundary(width, height):  # boundary walls as thin rectangles [x, y, w, h]
49 | w = width
50 | h = height
51 | linewidth = 0.05
52 | obs_boundary = [
53 | [0, 0, linewidth, h],
54 | [0, h, w, linewidth],
55 | [linewidth, 0, w, linewidth],
56 | [w, linewidth, linewidth, h]
57 | ]
58 | return obs_boundary
59 |
60 | @staticmethod
61 | def set_obs_rectangle():
62 | # obs_rectangle = [
63 | # [14, 12, 8, 2],
64 | # [18, 22, 8, 3],
65 | # [26, 7, 2, 12],
66 | # [32, 14, 10, 2]
67 | # ]
68 | obs_rectangle = [[5, 5, 10, 10]]
69 | obs_rectangle = []
70 | return obs_rectangle
71 | @staticmethod
72 | def set_obs_circle(known_obs):
73 | # obs_cir = [[18, 18, 1]]
74 | # obs_cir = []
75 | obs_cir = known_obs
76 | return obs_cir
--------------------------------------------------------------------------------
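
A minimal sketch of the metric/grid conversion helpers in `Env` above, assuming the default 0.1 m resolution; the values in the comments follow directly from the code above:

```python
from utils.env import Env

env = Env(width=20.0, height=20.0)   # default resolution: 0.1 m per cell
print(env.get_map_shape())           # (200, 200) as (rows, cols)

# Metric -> grid: position / resolution, truncated to integer cell indices
cell = env.f_to_grid([1.23, 4.56])
print(cell)                          # [12 45]

# Grid -> metric: returns the cell center, offset by resolution / 2
point = env.grid_to_f(cell)
print(point)                         # [1.25 4.55]
```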
/utils/geometry.py:
--------------------------------------------------------------------------------
1 | from shapely.geometry import Polygon, MultiPolygon
2 | from shapely.ops import unary_union
3 | from shapely.validation import explain_validity
4 |
5 | def custom_merge(geometries):
6 | # First, perform a union operation
7 | union = unary_union(geometries)
8 |
9 | # If the result is a MultiPolygon, we need to process each polygon
10 | if isinstance(union, MultiPolygon):
11 | processed_polygons = [process_polygon(poly, geometries) for poly in union.geoms]
12 | merged_polygon = MultiPolygon(processed_polygons)
13 | elif isinstance(union, Polygon):
14 | merged_polygon = process_polygon(union, geometries)
15 | else:
16 | merged_polygon = union # In case it's neither Polygon nor MultiPolygon
17 |
18 | # Fix geometry if necessary
19 | merged_polygon = fix_geometry(merged_polygon)
20 | return merged_polygon
21 |
22 | def process_polygon(polygon, original_geometries):
23 | exterior = polygon.exterior
24 | valid_interiors = []
25 |
26 | for interior in polygon.interiors:
27 | if should_keep_interior(interior, original_geometries, polygon):
28 | valid_interiors.append(interior)
29 |
30 | return Polygon(exterior, valid_interiors)
31 |
32 | def should_keep_interior(interior, original_geometries, merged_area):
33 | interior_poly = Polygon(interior)
34 |
35 | # Check if this interior is part of any original geometry's exterior
36 | for geom in original_geometries:
37 | if isinstance(geom, Polygon):
38 | if geom.exterior.contains(interior_poly):
39 | return False
40 | elif isinstance(geom, MultiPolygon):
41 | if any(poly.exterior.contains(interior_poly) for poly in geom.geoms):
42 | return False
43 |
44 | # # Check if this interior is properly contained within the merged area
45 | # if not merged_area.contains(interior_poly):
46 | # return False
47 |
48 | return True
49 |
50 | def fix_geometry(geometry, buffer_distance=0.2):
51 | if not geometry.is_valid:
52 | reason = explain_validity(geometry)
53 | print(reason)
54 | if "Self-intersection" in reason:
55 | # Apply a small positive buffer followed by a small negative buffer
56 | geometry = geometry.buffer(buffer_distance).buffer(-buffer_distance)
57 | if "Holes are nested" in reason:
58 | print(f"Fixing nested holes: {reason}")
59 | # Apply a small positive buffer to the exterior
60 | geometry = fix_nested_holes(geometry)
61 | if "Hole lies outside shell" in reason:
62 | print(f"Fixing holes outside shell: {reason}")
63 | # Remove holes that are outside the shell
64 | geometry = fix_holes(geometry)
65 |
66 |     # if geometry is still invalid, try a more aggressive fix
67 | # if not geometry.is_valid:
68 | # geometry = geometry.buffer(0)
69 | return geometry
70 |
71 | def fix_nested_holes(geometry):
72 | '''
73 | Fix "Holes are nested" error by removing nested holes (recursive function)
74 | '''
75 | if isinstance(geometry, Polygon):
76 | exterior = geometry.exterior
77 | interiors = list(geometry.interiors)
78 | non_nested_interiors = []
79 |
80 | while interiors:
81 | interior = interiors.pop(0)
82 | if all(not inner.contains(interior) for inner in interiors):
83 | non_nested_interiors.append(interior)
84 |
85 | return Polygon(exterior, non_nested_interiors)
86 | elif isinstance(geometry, MultiPolygon):
87 | return MultiPolygon([fix_nested_holes(geom) for geom in geometry.geoms])
88 | else:
89 | return geometry
90 |
91 | def fix_holes(geometry):
92 | '''
93 | Fix "Hole lies outside shell" error by removing holes that are outside the shell (recursive function)
94 | '''
95 | if isinstance(geometry, Polygon):
96 | exterior = geometry.exterior
97 | valid_interiors = []
98 | for interior in geometry.interiors:
99 | if Polygon(exterior).contains(Polygon(interior)):
100 | valid_interiors.append(interior)
101 | return Polygon(exterior, valid_interiors)
102 | elif isinstance(geometry, MultiPolygon):
103 | fixed_polygons = [fix_holes(geom) for geom in geometry.geoms]
104 | return MultiPolygon(fixed_polygons)
105 | else:
106 | return geometry
--------------------------------------------------------------------------------
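
A minimal sketch of `custom_merge` above, using two overlapping shapely discs as stand-ins for obstacle footprints (the inputs are illustrative only):

```python
from shapely.geometry import Point
from utils.geometry import custom_merge

# Two overlapping discs, approximated as polygons by shapely's buffer()
a = Point(0.0, 0.0).buffer(1.0)
b = Point(1.5, 0.0).buffer(1.0)

merged = custom_merge([a, b])
print(merged.geom_type)               # "Polygon": a single valid merged shape
print(merged.area < a.area + b.area)  # True: the overlap is not counted twice
```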
/utils/plotting.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import matplotlib.patches as patches
3 | import utils.env as env
4 | from utils.utils import calculate_fov_points, linewidth_from_data_units
5 | import math
6 | import numpy as np
7 |
8 | class Plotting:
9 | def __init__(self, width=14.0, height=14.0, known_obs = []):
10 | self.env = env.Env(width=width, height=height, known_obs = known_obs)
11 | self.obs_bound = self.env.obs_boundary
12 | self.obs_circle = self.env.obs_circle
13 | self.obs_rectangle = self.env.obs_rectangle
14 |
15 | def animation(self, nodelist, path, name, animation=False):
16 | self.plot_grid(name)
17 | self.plot_visited(nodelist, animation)
18 | self.plot_path_fov(path, cam_range=3.5)
19 | self.plot_path(path)
20 |
21 | def animation_online(self, nodelist, name, animation=False):
22 | self.plot_grid(name)
23 | self.plot_visited(nodelist, animation)
24 | plt.pause(1.0)
25 | plt.close()
26 |
27 | def animation_connect(self, V1, V2, path, name):
28 | self.plot_grid(name)
29 | self.plot_visited_connect(V1, V2)
30 | self.plot_path(path)
31 |
32 | def plot_grid(self, name, with_right_subplot=False):
33 | if with_right_subplot:
34 | fig = plt.figure(constrained_layout=True)
35 | gs = fig.add_gridspec(1, 2, width_ratios=[3, 3], wspace=0.5)
36 | main_ax = fig.add_subplot(gs[0, 0])
37 | right_ax = fig.add_subplot(gs[0, 1])
38 | right_ax.axis('off')
39 | right_ax.tick_params(left=False, bottom=False, labelleft=False, labelbottom=False)
40 | else:
41 | fig, main_ax = plt.subplots()
42 |
43 | for (ox, oy, w, h) in self.obs_bound:
44 | main_ax.add_patch(
45 | patches.Rectangle(
46 | (ox, oy), w, h,
47 | edgecolor='black',
48 | facecolor='black',
49 | fill=True
50 | )
51 | )
52 |
53 | for (ox, oy, w, h) in self.obs_rectangle:
54 | main_ax.add_patch(
55 | patches.Rectangle(
56 | (ox, oy), w, h,
57 | edgecolor='black',
58 | facecolor='gray',
59 | fill=True
60 | )
61 | )
62 |
63 | for obs_info in self.obs_circle:
64 | if obs_info.shape[0] == 3:
65 | ox, oy, r = obs_info
66 | elif obs_info.shape[0] == 5:
67 | continue
68 | main_ax.add_patch(
69 | patches.Circle(
70 | (ox, oy), r,
71 | edgecolor='black',
72 | facecolor='gray',
73 | fill=True
74 | )
75 | )
76 |
77 | main_ax.set_title(name)
78 | eps = 0.015
79 | eps_x = (self.env.x_range[1] - self.env.x_range[0]) * eps
80 | eps_y = (self.env.y_range[1] - self.env.y_range[0]) * eps
81 | main_ax.set_xlim(self.env.x_range[0] - eps_x, self.env.x_range[1] + eps_x)
82 | main_ax.set_ylim(self.env.y_range[0] - eps_y, self.env.y_range[1] + eps_y)
83 | main_ax.set_aspect('equal', adjustable='box')
84 | plt.tight_layout()
85 |
86 | if with_right_subplot:
87 | return (main_ax, right_ax, gs), fig
88 | else:
89 | return main_ax, fig
90 |
91 | @staticmethod
92 | def plot_visited(nodelist, animation):
93 | if animation:
94 | count = 0
95 | for node in nodelist:
96 | count += 1
97 | if node.parent:
98 | plt.plot([node.parent.x, node.x], [node.parent.y, node.y], "-g")
99 | plt.gcf().canvas.mpl_connect('key_release_event',
100 | lambda event:
101 | [exit(0) if event.key == 'escape' else None])
102 | if count % 10 == 0:
103 | plt.pause(0.001)
104 | else:
105 | for node in nodelist:
106 | if node.parent:
107 | plt.plot([node.parent.x, node.x], [node.parent.y, node.y], "-g")
108 |
109 | @staticmethod
110 | def plot_visited_connect(V1, V2):
111 | len1, len2 = len(V1), len(V2)
112 |
113 | for k in range(max(len1, len2)):
114 | if k < len1:
115 | if V1[k].parent:
116 | plt.plot([V1[k].x, V1[k].parent.x], [V1[k].y, V1[k].parent.y], "-g")
117 | if k < len2:
118 | if V2[k].parent:
119 | plt.plot([V2[k].x, V2[k].parent.x], [V2[k].y, V2[k].parent.y], "-g")
120 |
121 | plt.gcf().canvas.mpl_connect('key_release_event',
122 | lambda event: [exit(0) if event.key == 'escape' else None])
123 |
124 | if k % 2 == 0:
125 | plt.pause(0.001)
126 |
127 | plt.pause(0.01)
128 |
129 | @staticmethod
130 | def plot_path(path):
131 | if len(path) != 0:
132 | plt.plot([x[0] for x in path], [x[1] for x in path], '-r', linewidth=2)
133 | plt.pause(0.01)
134 | plt.savefig("LQR-CBF_result.PNG")
135 | plt.savefig("LQR-CBF_result.svg")
136 | plt.show()
137 |
138 | @staticmethod
139 | def plot_path_fov(path, cam_range):
140 |
141 | path = np.array(path)
142 | width = linewidth_from_data_units(cam_range/math.sqrt(2), plt.gca(), reference='y')
143 | plt.plot(path[:, 0], path[:, 1], 'k', alpha=0.5, linewidth=width)
144 |
145 | #def plot_path_fov(path, fov_angle, cam_range):
146 | """
147 |     this function is omitted for now; the visualization is not smooth
148 | """
149 | # if len(path) < 2:
150 | # return # Need at least two points to define a path
151 |
152 | # lefts = []
153 | # rights = []
154 | # for k in range(1, len(path)):
155 | # prev_pos = path[k-1]
156 | # print(prev_pos)
157 | # cur_pos = path[k]
158 | # gtheta = cur_pos[2]
159 | # gtheta = math.atan2(cur_pos[1] - prev_pos[1], cur_pos[0] - prev_pos[0])
160 | # print(cur_pos[2])
161 | # if gtheta == None:
162 | # gtheta = math.atan2(cur_pos[1] - prev_pos[1], cur_pos[0] - prev_pos[0])
163 |
164 | # prev_fov_left, prev_fov_right = calculate_fov_points(prev_pos, gtheta, fov_angle=math.pi, cam_range=cam_range)
165 | # cur_fov_left, cur_fov_right = calculate_fov_points(cur_pos, gtheta, fov_angle=math.pi, cam_range=cam_range)
166 |
167 | # # Plot FOV union "tube"
168 | # plt.plot([prev_fov_left[0], cur_fov_left[0]], [prev_fov_left[1], cur_fov_left[1]], 'k--', alpha=0.5) # Left boundary
169 | # plt.plot([prev_fov_right[0], cur_fov_right[0]], [prev_fov_right[1], cur_fov_right[1]], 'k--', alpha=0.5) # Right boundary
170 | # plt.fill([prev_fov_left[0], cur_fov_left[0], cur_fov_right[0], prev_fov_right[0]],
171 | # [prev_fov_left[1], cur_fov_left[1], cur_fov_right[1], prev_fov_right[1]], 'k', alpha=0.1) # FOV area
172 |
173 | # lefts.append(prev_fov_left)
174 | # rights.append(prev_fov_right)
175 | # print([lefts, rights])
--------------------------------------------------------------------------------
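
A minimal sketch of the two return shapes of `Plotting.plot_grid` above; with `with_right_subplot=True` it also returns the reserved right-hand axis and the gridspec:

```python
from utils.plotting import Plotting

plot_handler = Plotting(width=14.0, height=14.0)

# Default: a single main axis for the arena
ax, fig = plot_handler.plot_grid("single view")

# Variant with a reserved right-hand panel (the returned axis starts hidden)
(main_ax, right_ax, gs), fig2 = plot_handler.plot_grid("with side panel",
                                                       with_right_subplot=True)
```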
/utils/utils.py:
--------------------------------------------------------------------------------
1 | import math
2 | import numpy as np
3 |
4 | import utils.env as env
5 |
6 | """
7 | Created on Jan 23, 2024
8 | Updated on June 21, 2024
9 | @author: Taekyung Kim
10 |
11 | @description: The majority of this code is for collision checking with obstacles (rectangle, circle, boundary).
12 | It is built upon the utils.py from the visibility-rrt implementation.
13 |
14 | @required-scripts: env.py
15 |
16 | """
17 |
18 | class Node:
19 | def __init__(self, n):
20 | self.x = n[0]
21 | self.y = n[1]
22 |
23 | if len(n) == 3:
24 | self.yaw = n[2]
25 | else:
26 | self.yaw = None
27 |
28 | self.parent = None
29 | self.cost = 0
30 | #self.StateTraj = None
31 | self.childrenNodeInds = set([])
32 |
33 | def angular_diff(a, b):
34 | """Angle difference from b to a (a - b)"""
35 | d = a - b
36 | if d > math.pi:
37 | d -= 2 * math.pi
38 | elif d < -math.pi:
39 | d += 2 * math.pi
40 | return d
41 |
42 | def angle_normalize(x):
43 | return (((x + math.pi) % (2 * math.pi)) - math.pi)
44 |
45 | # Function to calculate the FOV triangle points
46 | def calculate_fov_points(position, yaw, fov_angle = math.pi/2, cam_range = 3):
47 | half_fov = fov_angle/2
48 | left_angle = yaw - half_fov
49 | right_angle = yaw + half_fov
50 |
51 | # Calculate points for the edges of the FOV
52 | left_point = (position[0] + cam_range * math.cos(left_angle), position[1] + cam_range * math.sin(left_angle))
53 | right_point = (position[0] + cam_range * math.cos(right_angle), position[1] + cam_range * math.sin(right_angle))
54 |
55 | return left_point, right_point
56 |
57 |
58 | def linewidth_from_data_units(linewidth, axis, reference='y'):
59 | """
60 | Convert a linewidth in data units to linewidth in points.
61 |
62 | Parameters
63 | ----------
64 | linewidth: float
65 | Linewidth in data units of the respective reference-axis
66 | axis: matplotlib axis
67 | The axis which is used to extract the relevant transformation
68 | data (data limits and size must not change afterwards)
69 | reference: string
70 | The axis that is taken as a reference for the data width.
71 | Possible values: 'x' and 'y'. Defaults to 'y'.
72 |
73 | Returns
74 | -------
75 | linewidth: float
76 | Linewidth in points
77 | """
78 | fig = axis.get_figure()
79 | if reference == 'x':
80 | length = fig.bbox_inches.width * axis.get_position().width
81 | value_range = np.diff(axis.get_xlim())
82 | elif reference == 'y':
83 | length = fig.bbox_inches.height * axis.get_position().height
84 | value_range = np.diff(axis.get_ylim())
85 | # Convert length to points
86 | length *= 72
87 | # Scale linewidth to value range
88 | return linewidth * (length / value_range)
89 |
90 |
91 | class Utils:
92 | def __init__(self):
93 | self.env = env.Env()
94 |
95 | self.delta = 0.5
96 | self.obs_circle = self.env.obs_circle
97 | self.obs_rectangle = self.env.obs_rectangle
98 | self.obs_boundary = self.env.obs_boundary
99 |
100 | def update_obs(self, obs_cir, obs_bound, obs_rec):
101 | self.obs_circle = obs_cir
102 | self.obs_boundary = obs_bound
103 | self.obs_rectangle = obs_rec
104 |
105 | def get_obs_vertex(self):
106 | delta = self.delta
107 | obs_list = []
108 |
109 | for (ox, oy, w, h) in self.obs_rectangle:
110 | vertex_list = [[ox - delta, oy - delta],
111 | [ox + w + delta, oy - delta],
112 | [ox + w + delta, oy + h + delta],
113 | [ox - delta, oy + h + delta]]
114 | obs_list.append(vertex_list)
115 |
116 | return obs_list
117 |
118 | def is_intersect_rec(self, start, end, o, d, a, b):
119 | v1 = [o[0] - a[0], o[1] - a[1]]
120 | v2 = [b[0] - a[0], b[1] - a[1]]
121 | v3 = [-d[1], d[0]]
122 |
123 | div = np.dot(v2, v3)
124 |
125 | if div == 0:
126 | return False
127 |
128 | t1 = np.linalg.norm(np.cross(v2, v1)) / div
129 | t2 = np.dot(v1, v3) / div
130 |
131 | if t1 >= 0 and 0 <= t2 <= 1:
132 | shot = Node((o[0] + t1 * d[0], o[1] + t1 * d[1]))
133 | dist_obs = self.get_dist(start, shot)
134 | dist_seg = self.get_dist(start, end)
135 | if dist_obs <= dist_seg:
136 | return True
137 |
138 | return False
139 |
140 | def is_intersect_circle(self, o, d, a, r):
141 | d2 = np.dot(d, d)
142 | delta = self.delta
143 |
144 | if d2 == 0:
145 | return False
146 |
147 | t = np.dot([a[0] - o[0], a[1] - o[1]], d) / d2
148 |
149 | if 0 <= t <= 1:
150 | shot = Node((o[0] + t * d[0], o[1] + t * d[1]))
151 | if self.get_dist(shot, Node(a)) <= r + delta:
152 | return True
153 | return False
154 |
155 | def is_collision(self, start, end):
156 | if self.is_inside_obs(start) or self.is_inside_obs(end):
157 | return True
158 |
159 | o, d = self.get_ray(start, end)
160 | obs_vertex = self.get_obs_vertex()
161 |
162 | for (v1, v2, v3, v4) in obs_vertex:
163 | if self.is_intersect_rec(start, end, o, d, v1, v2):
164 | return True
165 | if self.is_intersect_rec(start, end, o, d, v2, v3):
166 | return True
167 | if self.is_intersect_rec(start, end, o, d, v3, v4):
168 | return True
169 | if self.is_intersect_rec(start, end, o, d, v4, v1):
170 | return True
171 |
172 | for (x, y, r) in self.obs_circle:
173 | if self.is_intersect_circle(o, d, [x, y], r):
174 | return True
175 |
176 | return False
177 |
178 | def is_inside_obs(self, node):
179 | delta = self.delta
180 |
181 | for (x, y, r) in self.obs_circle:
182 | if math.hypot(node.x - x, node.y - y) <= r + delta:
183 | return True
184 |
185 | for (x, y, w, h) in self.obs_rectangle:
186 | if 0 <= node.x - (x - delta) <= w + 2 * delta \
187 | and 0 <= node.y - (y - delta) <= h + 2 * delta:
188 | return True
189 |
190 | for (x, y, w, h) in self.obs_boundary:
191 | if 0 <= node.x - (x - delta) <= w + 2 * delta \
192 | and 0 <= node.y - (y - delta) <= h + 2 * delta:
193 | return True
194 |
195 | return False
--------------------------------------------------------------------------------
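
A minimal sketch of the helpers above, using the default `Env` (only the boundary walls, no circular or rectangular obstacles); the printed values follow from the code above:

```python
import math
from utils.utils import Node, Utils, angular_diff, calculate_fov_points

u = Utils()  # builds a default 20 m x 20 m Env internally

print(u.is_inside_obs(Node((2.0, 2.0))))   # False: well inside the arena
print(u.is_inside_obs(Node((0.2, 10.0))))  # True: within delta of the left wall

# Signed angle difference wrapped into (-pi, pi]
print(angular_diff(math.pi - 0.1, -math.pi + 0.1))  # ~ -0.2, the short way around

# FOV edge points for a robot at (2, 2) facing +x, default 90 deg FOV, 3 m range
left, right = calculate_fov_points((2.0, 2.0), yaw=0.0)
```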