├── .gitignore
├── .idea
│   ├── ai-imu-dr.iml
│   ├── misc.xml
│   ├── modules.xml
│   ├── vcs.xml
│   └── workspace.xml
├── LICENSE
├── README.md
├── src
│   ├── dataset.py
│   ├── main_kitti.py
│   ├── train_torch_filter.py
│   ├── utils.py
│   ├── utils_numpy_filter.py
│   ├── utils_plot.py
│   └── utils_torch_filter.py
└── temp
    ├── 08.gif
    ├── iekf.jpg
    ├── structure.jpg
    └── sup.pdf
/.gitignore:
--------------------------------------------------------------------------------
1 | data/
2 | results/
3 | temp/delta_p.p
4 | temp/iekfnets.p
5 | temp/normalize_factors.p
6 | src/__pycache__
7 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 Martin Brossard
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AI-IMU Dead-Reckoning [[IEEE paper](https://ieeexplore.ieee.org/document/9035481), [ArXiv paper](https://arxiv.org/pdf/1904.06064.pdf)]
2 |
3 | _1.10%_ translational error on the [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) odometry sequences with __only__ an Inertial Measurement Unit.
4 |
5 | ![](temp/08.gif)
6 |
7 | ## Overview
8 |
9 | In the context of intelligent vehicles, robust and accurate dead reckoning based on the Inertial Measurement Unit (IMU) may prove useful to correlate feeds from imaging sensors, to safely navigate through obstructions, or to perform a safe emergency stop in the extreme case where other sensors fail.
10 |
11 | This repo contains the code of our novel, accurate method for dead reckoning of wheeled vehicles based only on an IMU. The key components of the method are the Kalman filter and the use of deep neural networks to dynamically adapt the noise parameters of the filter. Our dead-reckoning inertial method based only on the IMU accurately estimates the 3D position, velocity and orientation of the vehicle and self-calibrates the IMU biases. We achieve on average a 1.10% translational error on the KITTI odometry dataset, and the algorithm competes with top-ranked methods which, by contrast, use LiDAR or stereo vision.
12 |
13 | ![](temp/structure.jpg)
14 |
15 | The above figure illustrates the approach which consists of two main blocks summarized as follows:
16 | 1. the filter integrates the inertial measurements and exploits zero lateral and vertical velocity as measurements, with their covariance matrix, to refine its estimates, see the figure below;
17 | 2. the noise parameter adapter determines in real time the most suitable covariance noise matrix. This deep-learning-based adapter converts raw IMU signals directly into covariance matrices without requiring knowledge of any state estimate or any other quantity.
18 |
19 |
20 | ![](temp/iekf.jpg)
21 |
22 | ## Code
23 | Our implementation is done in Python. We use [Pytorch](https://pytorch.org/) for the adapter block of the system. The code was tested under Python 3.5.
24 |
25 | ### Installation & Prerequisites
26 | 1. Install [pytorch](http://pytorch.org). We perform all training and testing on its development branch.
27 |
28 | 2. Install the following required packages: `matplotlib`, `numpy`, `termcolor`, `scipy`, `navpy`, e.g. with the pip3 command
29 | ```
30 | pip3 install matplotlib numpy termcolor scipy navpy
31 | ```
32 |
33 | 3. Clone this repo
34 | ```
35 | git clone https://github.com/mbrossar/ai-imu-dr.git
36 | ```
37 |
38 | ### Testing
39 | 1. Download the reformatted pickle files of the 00-11 KITTI IMU raw data at this [url](https://github.com/user-attachments/files/17930695/data.zip), then extract and copy them into the `data` folder.
40 | ```
41 | wget "https://github.com/user-attachments/files/17930695/data.zip"
42 | mkdir ai-imu-dr/results
43 | unzip data.zip -d ai-imu-dr
44 | rm data.zip
45 | ```
46 | These files can alternatively be generated by downloading the KITTI raw data and setting `read_data = 1` in the `main_kitti.py` file.
47 |
48 | 2. Download the training parameters at this [url](https://www.dropbox.com/s/77kq4s7ziyvsrmi/temp.zip), then extract and copy them into the `temp` folder.
49 | ```
50 | wget "https://www.dropbox.com/s/77kq4s7ziyvsrmi/temp.zip"
51 | unzip temp.zip -d ai-imu-dr/temp
52 | rm temp.zip
53 | ```
54 | 3. Test the filters!
55 | ```
56 | cd ai-imu-dr/src
57 | python3 main_kitti.py
58 | ```
59 | This first launches the filters on all the sequences. Then, the results are plotted. Note that the parameters are trained on sequences 00, 01 and 04-11, so only sequence 02 is a test sequence.
60 |
61 | ### Training
62 | You can retrain the model to test on another sequence (we did not find any difference in the results) or on your own sequence by modifying the dataset class.
63 |
64 |
65 | ## Paper
66 | The paper M. Brossard, A. Barrau and S. Bonnabel, "AI-IMU Dead-Reckoning," in _IEEE Transactions on Intelligent Vehicles_, 2020, related to this repo is available at this [url](https://cloud.mines-paristech.fr/index.php/s/8YDqD0Y1e6BWzCG).
67 |
68 |
69 | ### Citation
70 |
71 | If you use this code in your research, please cite:
72 |
73 | ```
74 | @article{brossard2019aiimu,
75 | author = {Martin Brossard and Axel Barrau and Silv\`ere Bonnabel},
76 | journal={IEEE Transactions on Intelligent Vehicles},
77 | title = {{AI-IMU Dead-Reckoning}},
78 | year = {2020}
79 | }
80 | ```
81 |
82 | ### Authors
83 | Martin Brossard*, Axel Barrau° and Silvère Bonnabel*
84 |
85 | *MINES ParisTech, PSL Research University, Centre for Robotics, 60 Boulevard Saint-Michel, 75006 Paris, France
86 |
87 | °Safran Tech, Groupe Safran, Rue des Jeunes Bois-Châteaufort, 78772, Magny Les Hameaux Cedex, France
88 |
--------------------------------------------------------------------------------
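
For readers of the README above: the noise parameter adapter is implemented in `src/utils_torch_filter.py` (via the `mes_net` used by `forward_nets`). The snippet below is only a minimal, hypothetical sketch of the idea the README describes, a small 1D convolutional network mapping raw 6-axis IMU signals to per-instant scalings of the zero lateral/vertical velocity covariances; the class name, layer sizes and bounding constant are illustrative, not the paper's actual network.

```
import torch


class ToyCovNet(torch.nn.Module):
    """Hypothetical sketch of a measurement-covariance adapter, not the repo's net."""

    def __init__(self, beta=3.0, cov_lat=0.2, cov_up=300.0):
        super().__init__()
        self.beta = beta                               # bounds the learned correction
        self.base = torch.tensor([cov_lat, cov_up])    # nominal covariances (lateral, up)
        self.cnn = torch.nn.Sequential(
            torch.nn.Conv1d(6, 16, kernel_size=5, padding=2),
            torch.nn.ReLU(),
            torch.nn.Conv1d(16, 2, kernel_size=5, padding=2),
        )

    def forward(self, u):
        # u: (N, 6) raw gyro + accelerometer measurements
        z = self.cnn(u.t().unsqueeze(0)).squeeze(0).t()   # (N, 2) log-scale corrections
        scale = self.beta * torch.tanh(z)                 # keep corrections bounded
        return self.base * (10.0 ** scale)                # (N, 2) adapted covariances


u = torch.randn(1000, 6)     # stand-in for a normalized IMU sequence
covs = ToyCovNet()(u)
print(covs.shape)            # torch.Size([1000, 2])
```
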
/src/dataset.py:
--------------------------------------------------------------------------------
1 | from torch.utils.data.dataset import Dataset
2 | import torch
3 | import numpy as np
4 | from termcolor import cprint
5 | import pickle
6 | import matplotlib.pyplot as plt
7 | import os
8 | from collections import OrderedDict
9 |
10 |
11 | class BaseDataset(Dataset):
12 | pickle_extension = ".p"
13 | """extension of the file saved in pickle format"""
14 | file_normalize_factor = "normalize_factors.p"
15 | """name of file for normalizing input"""
16 |
17 | def __init__(self, args):
18 | # paths
19 | self.path_data_save = args.path_data_save
20 | """path where data are saved"""
21 | self.path_results = args.path_results
22 | """path to the results"""
23 | self.path_temp = args.path_temp
24 | """path for temporary files"""
25 |
26 | self.datasets_test = args.test_sequences
27 | """test datasets"""
28 | self.datasets_validation = args.cross_validation_sequences
29 | """cross-validation datasets"""
30 |
31 | # names of the sequences
32 | self.datasets = []
33 | """dataset names"""
34 | self.datasets_train = []
35 | """train datasets"""
36 |
37 | self.datasets_validatation_filter = OrderedDict()
38 | """Validation dataset with index for starting/ending"""
39 | self.datasets_train_filter = OrderedDict()
40 | """Validation dataset with index for starting/ending"""
41 |
42 | # noise added to the data
43 | self.sigma_gyro = 1.e-4
44 | self.sigma_acc = 1.e-4
45 | self.sigma_b_gyro = 1.e-5
46 | self.sigma_b_acc = 1.e-4
47 |
48 | # number of training data points
49 | self.num_data = 0
50 |
51 | # factors for normalizing inputs
52 | self.normalize_factors = None
53 | self.get_datasets()
54 | self.set_normalize_factors()
55 |
56 | def __getitem__(self, i):
57 | mondict = self.load(self.path_data_save, self.datasets[i])
58 | return mondict
59 |
60 | def __len__(self):
61 | return len(self.datasets)
62 |
63 | def get_datasets(self):
64 | for dataset in os.listdir(self.path_data_save):
65 | self.datasets += [dataset[:-2]] # take just name, remove the ".p"
66 | self.divide_datasets()
67 |
68 | def divide_datasets(self):
69 | for dataset in self.datasets:
70 | if (not dataset in self.datasets_test) and (not dataset in self.datasets_validation):
71 | self.datasets_train += [dataset]
72 |
73 | def dataset_name(self, i):
74 | return self.datasets[i]
75 |
76 | def get_data(self, i):
77 | pickle_dict = self[self.datasets.index(i) if type(i) != int else i]
78 | return pickle_dict['t'], pickle_dict['ang_gt'], pickle_dict['p_gt'], pickle_dict['v_gt'],\
79 | pickle_dict['u']
80 |
81 | def set_normalize_factors(self):
82 | path_normalize_factor = os.path.join(self.path_temp, self.file_normalize_factor)
83 | # we set factors only if file does not exist
84 | if os.path.isfile(path_normalize_factor):
85 | pickle_dict = self.load(path_normalize_factor)
86 | self.normalize_factors = pickle_dict['normalize_factors']
87 | self.num_data = pickle_dict['num_data']
88 | return
89 |
90 | # first compute mean
91 | self.num_data = 0
92 |
93 | for i, dataset in enumerate(self.datasets_train):
94 | pickle_dict = self.load(self.path_data_save, dataset)
95 | u = pickle_dict['u']
96 | if i == 0:
97 | u_loc = u.sum(dim=0)
98 | else:
99 | u_loc += u.sum(dim=0)
100 | self.num_data += u.shape[0]
101 | u_loc = u_loc / self.num_data
102 |
103 | # second compute standard deviation
104 | for i, dataset in enumerate(self.datasets_train):
105 | pickle_dict = self.load(self.path_data_save, dataset)
106 | u = pickle_dict['u']
107 | if i == 0:
108 | u_std = ((u - u_loc) ** 2).sum(dim=0)
109 | else:
110 | u_std += ((u - u_loc) ** 2).sum(dim=0)
111 | u_std = (u_std / self.num_data).sqrt()
112 |
113 | self.normalize_factors = {
114 | 'u_loc': u_loc, 'u_std': u_std,
115 | }
116 | print('... ended computing normalizing factors')
117 | pickle_dict = {
118 | 'normalize_factors': self.normalize_factors, 'num_data': self.num_data}
119 | self.dump(pickle_dict, path_normalize_factor)
120 |
121 | def normalize(self, u):
122 | u_loc = self.normalize_factors["u_loc"]
123 | u_std = self.normalize_factors["u_std"]
124 | u_normalized = (u - u_loc) / u_std
125 | return u_normalized
126 |
127 | def add_noise(self, u):
128 | w = torch.randn_like(u[:, :6]) # noise
129 | w_b = torch.randn_like(u[0, :6]) # bias
130 | w[:, :3] *= self.sigma_gyro
131 | w[:, 3:6] *= self.sigma_acc
132 | w_b[:3] *= self.sigma_b_gyro
133 | w_b[3:6] *= self.sigma_b_acc
134 | u[:, :6] += w + w_b
135 | return u
136 |
137 |
138 | @staticmethod
139 | def read_data(args):
140 | raise NotImplementedError
141 |
142 | @classmethod
143 | def load(cls, *_file_name):
144 | file_name = os.path.join(*_file_name)
145 | if not file_name.endswith(cls.pickle_extension):
146 | file_name += cls.pickle_extension
147 | with open(file_name, "rb") as file_pi:
148 | pickle_dict = pickle.load(file_pi)
149 | return pickle_dict
150 |
151 | @classmethod
152 | def dump(cls, mondict, *_file_name):
153 | file_name = os.path.join(*_file_name)
154 | if not file_name.endswith(cls.pickle_extension):
155 | file_name += cls.pickle_extension
156 | with open(file_name, "wb") as file_pi:
157 | pickle.dump(mondict, file_pi)
158 |
159 | def init_state_torch_filter(self, iekf):
160 | b_omega0 = torch.zeros(3).double()
161 | b_acc0 = torch.zeros(3).double()
162 | Rot_c_i0 = torch.eye(3).double()
163 | t_c_i0 = torch.zeros(3).double()
164 | return b_omega0, b_acc0, Rot_c_i0, t_c_i0
165 |
166 | def get_estimates(self, dataset_name):
167 | # Obtain estimates
168 | dataset_name = self.datasets[dataset_name] if type(dataset_name) == int else \
169 | dataset_name
170 | file_name = os.path.join(self.path_results, dataset_name + "_filter.p")
171 | if not os.path.exists(file_name):
172 | print('No result for ' + dataset_name)
173 | return
174 | mondict = self.load(file_name)
175 | Rot = mondict['Rot']
176 | v = mondict['v']
177 | p = mondict['p']
178 | b_omega = mondict['b_omega']
179 | b_acc = mondict['b_acc']
180 | Rot_c_i = mondict['Rot_c_i']
181 | t_c_i = mondict['t_c_i']
182 | measurements_covs = mondict['measurements_covs']
183 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, measurements_covs
184 |
185 |
186 |
--------------------------------------------------------------------------------
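
`BaseDataset.set_normalize_factors` above makes two passes over the training sequences: one accumulating the per-channel mean of the IMU inputs `u`, one their standard deviation, and caches the result in `temp/normalize_factors.p`. A standalone sketch of that computation on made-up tensors (toy data, not the KITTI pickles):

```
import torch

# toy stand-ins for the 'u' tensors of two training sequences
sequences = [torch.randn(500, 6) * 2.0 + 1.0,
             torch.randn(300, 6) * 2.0 + 1.0]

# first pass: per-channel mean over all training samples
num_data = sum(u.shape[0] for u in sequences)
u_loc = sum(u.sum(dim=0) for u in sequences) / num_data

# second pass: per-channel standard deviation around that mean
u_std = (sum(((u - u_loc) ** 2).sum(dim=0) for u in sequences) / num_data).sqrt()

# what BaseDataset.normalize then applies to every input sequence
u_normalized = (sequences[0] - u_loc) / u_std
print(u_loc.shape, u_std.shape, u_normalized.mean(dim=0))
```
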
/src/main_kitti.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | import numpy as np
4 | from collections import namedtuple
5 | import glob
6 | import time
7 | import datetime
8 | import pickle
9 | import torch
10 | import matplotlib.pyplot as plt
11 | from termcolor import cprint
12 | from navpy import lla2ned
13 | from collections import OrderedDict
14 | from dataset import BaseDataset
15 | from utils_torch_filter import TORCHIEKF
16 | from utils_numpy_filter import NUMPYIEKF as IEKF
17 | from utils import prepare_data
18 | from train_torch_filter import train_filter
19 | from utils_plot import results_filter
20 |
21 |
22 | def launch(args):
23 | if args.read_data:
24 | args.dataset_class.read_data(args)
25 | dataset = args.dataset_class(args)
26 |
27 | if args.train_filter:
28 | train_filter(args, dataset)
29 |
30 | if args.test_filter:
31 | test_filter(args, dataset)
32 |
33 | if args.results_filter:
34 | results_filter(args, dataset)
35 |
36 |
37 | class KITTIParameters(IEKF.Parameters):
38 | # gravity vector
39 | g = np.array([0, 0, -9.80655])
40 |
41 | cov_omega = 2e-4
42 | cov_acc = 1e-3
43 | cov_b_omega = 1e-8
44 | cov_b_acc = 1e-6
45 | cov_Rot_c_i = 1e-8
46 | cov_t_c_i = 1e-8
47 | cov_Rot0 = 1e-6
48 | cov_v0 = 1e-1
49 | cov_b_omega0 = 1e-8
50 | cov_b_acc0 = 1e-3
51 | cov_Rot_c_i0 = 1e-5
52 | cov_t_c_i0 = 1e-2
53 | cov_lat = 1
54 | cov_up = 10
55 |
56 | def __init__(self, **kwargs):
57 | super(KITTIParameters, self).__init__(**kwargs)
58 | self.set_param_attr()
59 |
60 | def set_param_attr(self):
61 | attr_list = [a for a in dir(KITTIParameters) if
62 | not a.startswith('__') and not callable(getattr(KITTIParameters, a))]
63 | for attr in attr_list:
64 | setattr(self, attr, getattr(KITTIParameters, attr))
65 |
66 |
67 | class KITTIDataset(BaseDataset):
68 | OxtsPacket = namedtuple('OxtsPacket',
69 | 'lat, lon, alt, ' + 'roll, pitch, yaw, ' + 'vn, ve, vf, vl, vu, '
70 | '' + 'ax, ay, az, af, al, '
71 | 'au, ' + 'wx, wy, wz, '
72 | 'wf, wl, wu, '
73 | '' +
74 | 'pos_accuracy, vel_accuracy, ' + 'navstat, numsats, ' + 'posmode, '
75 | 'velmode, '
76 | 'orimode')
77 |
78 | # Bundle into an easy-to-access structure
79 | OxtsData = namedtuple('OxtsData', 'packet, T_w_imu')
80 | min_seq_dim = 25 * 100 # 25 s at 100 Hz
81 | datasets_fake = ['2011_09_26_drive_0093_extract', '2011_09_28_drive_0039_extract',
82 | '2011_09_28_drive_0002_extract']
83 | """
84 | '2011_09_30_drive_0028_extract' has trouble at N = [6000, 14000] -> test data
85 | '2011_10_03_drive_0027_extract' has trouble at N = 29481
86 | '2011_10_03_drive_0034_extract' has trouble at N = [33500, 34000]
87 | """
88 |
89 | # mapping from the odometry benchmark training set to the raw data of the KITTI dataset.
90 | # The following dict lists the name and end frame of each sequence that
91 | # has been used to extract the visual odometry / SLAM training set
92 | odometry_benchmark = OrderedDict()
93 | odometry_benchmark["2011_10_03_drive_0027_extract"] = [0, 45692]
94 | odometry_benchmark["2011_10_03_drive_0042_extract"] = [0, 12180]
95 | odometry_benchmark["2011_10_03_drive_0034_extract"] = [0, 47935]
96 | odometry_benchmark["2011_09_26_drive_0067_extract"] = [0, 8000]
97 | odometry_benchmark["2011_09_30_drive_0016_extract"] = [0, 2950]
98 | odometry_benchmark["2011_09_30_drive_0018_extract"] = [0, 28659]
99 | odometry_benchmark["2011_09_30_drive_0020_extract"] = [0, 11347]
100 | odometry_benchmark["2011_09_30_drive_0027_extract"] = [0, 11545]
101 | odometry_benchmark["2011_09_30_drive_0028_extract"] = [11231, 53650]
102 | odometry_benchmark["2011_09_30_drive_0033_extract"] = [0, 16589]
103 | odometry_benchmark["2011_09_30_drive_0034_extract"] = [0, 12744]
104 |
105 | odometry_benchmark_img = OrderedDict()
106 | odometry_benchmark_img["2011_10_03_drive_0027_extract"] = [0, 45400]
107 | odometry_benchmark_img["2011_10_03_drive_0042_extract"] = [0, 11000]
108 | odometry_benchmark_img["2011_10_03_drive_0034_extract"] = [0, 46600]
109 | odometry_benchmark_img["2011_09_26_drive_0067_extract"] = [0, 8000]
110 | odometry_benchmark_img["2011_09_30_drive_0016_extract"] = [0, 2700]
111 | odometry_benchmark_img["2011_09_30_drive_0018_extract"] = [0, 27600]
112 | odometry_benchmark_img["2011_09_30_drive_0020_extract"] = [0, 11000]
113 | odometry_benchmark_img["2011_09_30_drive_0027_extract"] = [0, 11000]
114 | odometry_benchmark_img["2011_09_30_drive_0028_extract"] = [11000, 51700]
115 | odometry_benchmark_img["2011_09_30_drive_0033_extract"] = [0, 15900]
116 | odometry_benchmark_img["2011_09_30_drive_0034_extract"] = [0, 12000]
117 |
118 | def __init__(self, args):
119 | super(KITTIDataset, self).__init__(args)
120 |
121 | self.datasets_validatation_filter['2011_09_30_drive_0028_extract'] = [11231, 53650]
122 | self.datasets_train_filter["2011_10_03_drive_0042_extract"] = [0, None]
123 | self.datasets_train_filter["2011_09_30_drive_0018_extract"] = [0, 15000]
124 | self.datasets_train_filter["2011_09_30_drive_0020_extract"] = [0, None]
125 | self.datasets_train_filter["2011_09_30_drive_0027_extract"] = [0, None]
126 | self.datasets_train_filter["2011_09_30_drive_0033_extract"] = [0, None]
127 | self.datasets_train_filter["2011_10_03_drive_0027_extract"] = [0, 18000]
128 | self.datasets_train_filter["2011_10_03_drive_0034_extract"] = [0, 31000]
129 | self.datasets_train_filter["2011_09_30_drive_0034_extract"] = [0, None]
130 |
131 | for dataset_fake in KITTIDataset.datasets_fake:
132 | if dataset_fake in self.datasets:
133 | self.datasets.remove(dataset_fake)
134 | if dataset_fake in self.datasets_train:
135 | self.datasets_train.remove(dataset_fake)
136 |
137 | @staticmethod
138 | def read_data(args):
139 | """
140 | Read the data from the KITTI dataset
141 |
142 | :param args:
143 | :return:
144 | """
145 |
146 | print("Start read_data")
147 | t_tot = 0 # total duration over the whole dataset
148 | date_dirs = os.listdir(args.path_data_base)
149 | for n_iter, date_dir in enumerate(date_dirs):
150 | # get access to each sequence
151 | path1 = os.path.join(args.path_data_base, date_dir)
152 | if not os.path.isdir(path1):
153 | continue
154 | date_dirs2 = os.listdir(path1)
155 |
156 | for date_dir2 in date_dirs2:
157 | path2 = os.path.join(path1, date_dir2)
158 | if not os.path.isdir(path2):
159 | continue
160 | # read data
161 | oxts_files = sorted(glob.glob(os.path.join(path2, 'oxts', 'data', '*.txt')))
162 | oxts = KITTIDataset.load_oxts_packets_and_poses(oxts_files)
163 |
164 | """ Note on difference between ground truth and oxts solution:
165 | - orientation is the same
166 | - north and east axis are inverted
167 | - position are closed to but different
168 | => oxts solution is not loaded
169 | """
170 |
171 | print("\n Sequence name : " + date_dir2)
172 | if len(oxts) < KITTIDataset.min_seq_dim: # sequences shorter than 25 s are rejected
173 | cprint("Dataset is too short ({:.2f} s)".format(len(oxts) / 100), 'yellow')
174 | continue
175 | lat_oxts = np.zeros(len(oxts))
176 | lon_oxts = np.zeros(len(oxts))
177 | alt_oxts = np.zeros(len(oxts))
178 | roll_oxts = np.zeros(len(oxts))
179 | pitch_oxts = np.zeros(len(oxts))
180 | yaw_oxts = np.zeros(len(oxts))
181 | roll_gt = np.zeros(len(oxts))
182 | pitch_gt = np.zeros(len(oxts))
183 | yaw_gt = np.zeros(len(oxts))
184 | t = KITTIDataset.load_timestamps(path2)
185 | acc = np.zeros((len(oxts), 3))
186 | acc_bis = np.zeros((len(oxts), 3))
187 | gyro = np.zeros((len(oxts), 3))
188 | gyro_bis = np.zeros((len(oxts), 3))
189 | p_gt = np.zeros((len(oxts), 3))
190 | v_gt = np.zeros((len(oxts), 3))
191 | v_rob_gt = np.zeros((len(oxts), 3))
192 |
193 | k_max = len(oxts)
194 | for k in range(k_max):
195 | oxts_k = oxts[k]
196 | t[k] = 3600 * t[k].hour + 60 * t[k].minute + t[k].second + t[
197 | k].microsecond / 1e6
198 | lat_oxts[k] = oxts_k[0].lat
199 | lon_oxts[k] = oxts_k[0].lon
200 | alt_oxts[k] = oxts_k[0].alt
201 | acc[k, 0] = oxts_k[0].af
202 | acc[k, 1] = oxts_k[0].al
203 | acc[k, 2] = oxts_k[0].au
204 | acc_bis[k, 0] = oxts_k[0].ax
205 | acc_bis[k, 1] = oxts_k[0].ay
206 | acc_bis[k, 2] = oxts_k[0].az
207 | gyro[k, 0] = oxts_k[0].wf
208 | gyro[k, 1] = oxts_k[0].wl
209 | gyro[k, 2] = oxts_k[0].wu
210 | gyro_bis[k, 0] = oxts_k[0].wx
211 | gyro_bis[k, 1] = oxts_k[0].wy
212 | gyro_bis[k, 2] = oxts_k[0].wz
213 | roll_oxts[k] = oxts_k[0].roll
214 | pitch_oxts[k] = oxts_k[0].pitch
215 | yaw_oxts[k] = oxts_k[0].yaw
216 | v_gt[k, 0] = oxts_k[0].ve
217 | v_gt[k, 1] = oxts_k[0].vn
218 | v_gt[k, 2] = oxts_k[0].vu
219 | v_rob_gt[k, 0] = oxts_k[0].vf
220 | v_rob_gt[k, 1] = oxts_k[0].vl
221 | v_rob_gt[k, 2] = oxts_k[0].vu
222 | p_gt[k] = oxts_k[1][:3, 3]
223 | Rot_gt_k = oxts_k[1][:3, :3]
224 | roll_gt[k], pitch_gt[k], yaw_gt[k] = IEKF.to_rpy(Rot_gt_k)
225 |
226 | t0 = t[0]
227 | t = np.array(t) - t[0]
228 | # some sequences can have GPS outages
229 | if np.max(t[:-1] - t[1:]) > 0.1:
230 | cprint(date_dir2 + " has time problem", 'yellow')
231 | ang_gt = np.zeros((roll_gt.shape[0], 3))
232 | ang_gt[:, 0] = roll_gt
233 | ang_gt[:, 1] = pitch_gt
234 | ang_gt[:, 2] = yaw_gt
235 |
236 | p_oxts = lla2ned(lat_oxts, lon_oxts, alt_oxts, lat_oxts[0], lon_oxts[0],
237 | alt_oxts[0], latlon_unit='deg', alt_unit='m', model='wgs84')
238 | p_oxts[:, [0, 1]] = p_oxts[:, [1, 0]] # see note
239 |
240 | # take correct imu measurements
241 | u = np.concatenate((gyro_bis, acc_bis), -1)
242 | # convert from numpy
243 | t = torch.from_numpy(t)
244 | p_gt = torch.from_numpy(p_gt)
245 | v_gt = torch.from_numpy(v_gt)
246 | ang_gt = torch.from_numpy(ang_gt)
247 | u = torch.from_numpy(u)
248 |
249 | # convert to float
250 | t = t.float()
251 | u = u.float()
252 | p_gt = p_gt.float()
253 | ang_gt = ang_gt.float()
254 | v_gt = v_gt.float()
255 |
256 | mondict = {
257 | 't': t, 'p_gt': p_gt, 'ang_gt': ang_gt, 'v_gt': v_gt,
258 | 'u': u, 'name': date_dir2, 't0': t0
259 | }
260 |
261 | t_tot += t[-1] - t[0]
262 | KITTIDataset.dump(mondict, args.path_data_save, date_dir2)
263 | print("\n Total dataset duration : {:.2f} s".format(t_tot))
264 |
265 | @staticmethod
266 | def prune_unused_data(args):
267 | """
268 | Delete unused image and velodyne folders to save disk space.
269 |
270 |
271 | """
272 |
273 | unused_list = ['image_00', 'image_01', 'image_02', 'image_03', 'velodyne_points']
274 | date_dirs = ['2011_09_28', '2011_09_29', '2011_09_30', '2011_10_03']
275 |
276 | for date_dir in date_dirs:
277 | path1 = os.path.join(args.path_data_base, date_dir)
278 | if not os.path.isdir(path1):
279 | continue
280 | date_dirs2 = os.listdir(path1)
281 |
282 | for date_dir2 in date_dirs2:
283 | path2 = os.path.join(path1, date_dir2)
284 | if not os.path.isdir(path2):
285 | continue
286 | print(path2)
287 | for folder in unused_list:
288 | path3 = os.path.join(path2, folder)
289 | if os.path.isdir(path3):
290 | print(path3)
291 | shutil.rmtree(path3)
292 |
293 | @staticmethod
294 | def subselect_files(files, indices):
295 | try:
296 | files = [files[i] for i in indices]
297 | except:
298 | pass
299 | return files
300 |
301 | @staticmethod
302 | def rotx(t):
303 | """Rotation about the x-axis."""
304 | c = np.cos(t)
305 | s = np.sin(t)
306 | return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
307 |
308 | @staticmethod
309 | def roty(t):
310 | """Rotation about the y-axis."""
311 | c = np.cos(t)
312 | s = np.sin(t)
313 | return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
314 |
315 | @staticmethod
316 | def rotz(t):
317 | """Rotation about the z-axis."""
318 | c = np.cos(t)
319 | s = np.sin(t)
320 | return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
321 |
322 | @staticmethod
323 | def pose_from_oxts_packet(packet, scale):
324 | """Helper method to compute a SE(3) pose matrix from an OXTS packet.
325 | """
326 | er = 6378137. # earth radius (approx.) in meters
327 |
328 | # Use a Mercator projection to get the translation vector
329 | tx = scale * packet.lon * np.pi * er / 180.
330 | ty = scale * er * np.log(np.tan((90. + packet.lat) * np.pi / 360.))
331 | tz = packet.alt
332 | t = np.array([tx, ty, tz])
333 |
334 | # Use the Euler angles to get the rotation matrix
335 | Rx = KITTIDataset.rotx(packet.roll)
336 | Ry = KITTIDataset.roty(packet.pitch)
337 | Rz = KITTIDataset.rotz(packet.yaw)
338 | R = Rz.dot(Ry.dot(Rx))
339 |
340 | # Combine the translation and rotation into a homogeneous transform
341 | return R, t
342 |
343 | @staticmethod
344 | def transform_from_rot_trans(R, t):
345 | """Transformation matrix from rotation matrix and translation vector."""
346 | R = R.reshape(3, 3)
347 | t = t.reshape(3, 1)
348 | return np.vstack((np.hstack([R, t]), [0, 0, 0, 1]))
349 |
350 | @staticmethod
351 | def load_oxts_packets_and_poses(oxts_files):
352 | """Generator to read OXTS ground truth data.
353 | Poses are given in an East-North-Up coordinate system
354 | whose origin is the first GPS position.
355 | """
356 | # Scale for Mercator projection (from first lat value)
357 | scale = None
358 | # Origin of the global coordinate system (first GPS position)
359 | origin = None
360 |
361 | oxts = []
362 |
363 | for filename in oxts_files:
364 | with open(filename, 'r') as f:
365 | for line in f.readlines():
366 | line = line.split()
367 | # Last five entries are flags and counts
368 | line[:-5] = [float(x) for x in line[:-5]]
369 | line[-5:] = [int(float(x)) for x in line[-5:]]
370 |
371 | packet = KITTIDataset.OxtsPacket(*line)
372 |
373 | if scale is None:
374 | scale = np.cos(packet.lat * np.pi / 180.)
375 |
376 | R, t = KITTIDataset.pose_from_oxts_packet(packet, scale)
377 |
378 | if origin is None:
379 | origin = t
380 |
381 | T_w_imu = KITTIDataset.transform_from_rot_trans(R, t - origin)
382 |
383 | oxts.append(KITTIDataset.OxtsData(packet, T_w_imu))
384 | return oxts
385 |
386 | @staticmethod
387 | def load_timestamps(data_path):
388 | """Load timestamps from file."""
389 | timestamp_file = os.path.join(data_path, 'oxts', 'timestamps.txt')
390 |
391 | # Read and parse the timestamps
392 | timestamps = []
393 | with open(timestamp_file, 'r') as f:
394 | for line in f.readlines():
395 | # NB: datetime only supports microseconds, but KITTI timestamps
396 | # give nanoseconds, so need to truncate last 4 characters to
397 | # get rid of \n (counts as 1) and extra 3 digits
398 | t = datetime.datetime.strptime(line[:-4], '%Y-%m-%d %H:%M:%S.%f')
399 | timestamps.append(t)
400 | return timestamps
401 |
402 | def load_timestamps_img(data_path):
403 | """Load timestamps from file."""
404 | timestamp_file = os.path.join(data_path, 'image_00', 'timestamps.txt')
405 |
406 | # Read and parse the timestamps
407 | timestamps = []
408 | with open(timestamp_file, 'r') as f:
409 | for line in f.readlines():
410 | # NB: datetime only supports microseconds, but KITTI timestamps
411 | # give nanoseconds, so need to truncate last 4 characters to
412 | # get rid of \n (counts as 1) and extra 3 digits
413 | t = datetime.datetime.strptime(line[:-4], '%Y-%m-%d %H:%M:%S.%f')
414 | timestamps.append(t)
415 | return timestamps
416 |
417 |
418 | def test_filter(args, dataset):
419 | iekf = IEKF()
420 | torch_iekf = TORCHIEKF()
421 |
422 | # put Kitti parameters
423 | iekf.filter_parameters = KITTIParameters()
424 | iekf.set_param_attr()
425 | torch_iekf.filter_parameters = KITTIParameters()
426 | torch_iekf.set_param_attr()
427 |
428 | torch_iekf.load(args, dataset)
429 | iekf.set_learned_covariance(torch_iekf)
430 |
431 | for i in range(0, len(dataset.datasets)):
432 | dataset_name = dataset.dataset_name(i)
433 | if dataset_name not in dataset.odometry_benchmark.keys():
434 | continue
435 | print("Test filter on sequence: " + dataset_name)
436 | t, ang_gt, p_gt, v_gt, u = prepare_data(args, dataset, dataset_name, i,
437 | to_numpy=True)
438 | N = None
439 | u_t = torch.from_numpy(u).double()
440 | measurements_covs = torch_iekf.forward_nets(u_t)
441 | measurements_covs = measurements_covs.detach().numpy()
442 | start_time = time.time()
443 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i = iekf.run(t, u, measurements_covs,
444 | v_gt, p_gt, N,
445 | ang_gt[0])
446 | diff_time = time.time() - start_time
447 | print("Execution time: {:.2f} s (sequence time: {:.2f} s)".format(diff_time,
448 | t[-1] - t[0]))
449 | mondict = {
450 | 't': t, 'Rot': Rot, 'v': v, 'p': p, 'b_omega': b_omega, 'b_acc': b_acc,
451 | 'Rot_c_i': Rot_c_i, 't_c_i': t_c_i,
452 | 'measurements_covs': measurements_covs,
453 | }
454 | dataset.dump(mondict, args.path_results, dataset_name + "_filter.p")
455 |
456 |
457 | class KITTIArgs():
458 | path_data_base = "/media/mines/46230797-4d43-4860-9b76-ce35e699ea47/KITTI/raw"
459 | path_data_save = "../data"
460 | path_results = "../results"
461 | path_temp = "../temp"
462 |
463 | epochs = 400
464 | seq_dim = 6000
465 |
466 | # training, cross-validation and test dataset
467 | cross_validation_sequences = ['2011_09_30_drive_0028_extract']
468 | test_sequences = ['2011_09_30_drive_0028_extract']
469 | continue_training = True
470 |
471 | # choose what to do
472 | read_data = 0
473 | train_filter = 0
474 | test_filter = 1
475 | results_filter = 1
476 | dataset_class = KITTIDataset
477 | parameter_class = KITTIParameters
478 |
479 |
480 | if __name__ == '__main__':
481 | args = KITTIArgs()
482 | dataset = KITTIDataset(args)
483 | launch(KITTIArgs)
484 |
485 |
--------------------------------------------------------------------------------
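
The `KITTIArgs` flags at the bottom of `main_kitti.py` select which stages `launch` runs (`read_data`, `train_filter`, `test_filter`, `results_filter`). As a usage sketch, assuming `src/` is on the Python path and `data/`, `temp/` and `results/` are populated as described in the README, a hypothetical driver script that only regenerates the plots could look like:

```
# replot_results.py -- hypothetical helper, not part of the repository
from main_kitti import KITTIArgs, launch


class PlotOnlyArgs(KITTIArgs):
    # skip data conversion, training and filtering; only redo the figures
    read_data = 0
    train_filter = 0
    test_filter = 0
    results_filter = 1


if __name__ == '__main__':
    launch(PlotOnlyArgs)
```
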
/src/train_torch_filter.py:
--------------------------------------------------------------------------------
1 | import os
2 | import time
3 | import torch
4 | import matplotlib.pyplot as plt
5 | import numpy as np
6 | from termcolor import cprint
7 | from utils_torch_filter import TORCHIEKF
8 | from utils import prepare_data
9 | import copy
10 |
11 | max_loss = 2e1
12 | max_grad_norm = 1e0
13 | min_lr = 1e-5
14 | criterion = torch.nn.MSELoss(reduction="sum")
15 | lr_initprocesscov_net = 1e-4
16 | weight_decay_initprocesscov_net = 0e-8
17 | lr_mesnet = {'cov_net': 1e-4,
18 | 'cov_lin': 1e-4,
19 | }
20 | weight_decay_mesnet = {'cov_net': 1e-8,
21 | 'cov_lin': 1e-8,
22 | }
23 |
24 |
25 | def compute_delta_p(Rot, p):
26 | list_rpe = [[], [], []] # [idx_0, idx_end, pose_delta_p]
27 |
28 | # sample at 1 Hz
29 | Rot = Rot[::10]
30 | p = p[::10]
31 |
32 | step_size = 10 # every second
33 | distances = np.zeros(p.shape[0])
34 | dp = p[1:] - p[:-1] # this must be ground truth
35 | distances[1:] = dp.norm(dim=1).cumsum(0).numpy()
36 |
37 | seq_lengths = [100, 200, 300, 400, 500, 600, 700, 800]
38 | k_max = int(Rot.shape[0] / step_size) - 1
39 |
40 | for k in range(0, k_max):
41 | idx_0 = k * step_size
42 | for seq_length in seq_lengths:
43 | if seq_length + distances[idx_0] > distances[-1]:
44 | continue
45 | idx_shift = np.searchsorted(distances[idx_0:], distances[idx_0] + seq_length)
46 | idx_end = idx_0 + idx_shift
47 |
48 | list_rpe[0].append(idx_0)
49 | list_rpe[1].append(idx_end)
50 |
51 | idxs_0 = list_rpe[0]
52 | idxs_end = list_rpe[1]
53 | delta_p = Rot[idxs_0].transpose(-1, -2).matmul(
54 | ((p[idxs_end] - p[idxs_0]).float()).unsqueeze(-1)).squeeze()
55 | list_rpe[2] = delta_p
56 | return list_rpe
57 |
58 |
59 | def train_filter(args, dataset):
60 | iekf = prepare_filter(args, dataset)
61 | prepare_loss_data(args, dataset)
62 | save_iekf(args, iekf)
63 | optimizer = set_optimizer(iekf)
64 | start_time = time.time()
65 |
66 | for epoch in range(1, args.epochs + 1):
67 | train_loop(args, dataset, epoch, iekf, optimizer, args.seq_dim)
68 | save_iekf(args, iekf)
69 | print("Amount of time spent for 1 epoch: {}s\n".format(int(time.time() - start_time)))
70 | start_time = time.time()
71 |
72 |
73 | def prepare_filter(args, dataset):
74 | iekf = TORCHIEKF()
75 |
76 | # set dataset parameter
77 | iekf.filter_parameters = args.parameter_class()
78 | iekf.set_param_attr()
79 | if type(iekf.g).__module__ == np.__name__:
80 | iekf.g = torch.from_numpy(iekf.g).double()
81 |
82 | # load model
83 | if args.continue_training:
84 | iekf.load(args, dataset)
85 | iekf.train()
86 | # init u_loc and u_std
87 | iekf.get_normalize_u(dataset)
88 | return iekf
89 |
90 |
91 | def prepare_loss_data(args, dataset):
92 |
93 |
94 |
95 | file_delta_p = os.path.join(args.path_temp, 'delta_p.p')
96 | if os.path.isfile(file_delta_p):
97 | mondict = dataset.load(file_delta_p)
98 | dataset.list_rpe = mondict['list_rpe']
99 | dataset.list_rpe_validation = mondict['list_rpe_validation']
100 | if set(dataset.datasets_train_filter.keys()) <= set(dataset.list_rpe.keys()):
101 | return
102 |
103 | # prepare delta_p_gt
104 | list_rpe = {}
105 | for dataset_name, Ns in dataset.datasets_train_filter.items():
106 | t, ang_gt, p_gt, v_gt, u = prepare_data(args, dataset, dataset_name, 0)
107 | p_gt = p_gt.double()
108 | Rot_gt = torch.zeros(Ns[1], 3, 3)
109 | for k in range(Ns[1]):
110 | ang_k = ang_gt[k]
111 | Rot_gt[k] = TORCHIEKF.from_rpy(ang_k[0], ang_k[1], ang_k[2]).double()
112 | list_rpe[dataset_name] = compute_delta_p(Rot_gt[:Ns[1]], p_gt[:Ns[1]])
113 |
114 | list_rpe_validation = {}
115 | for dataset_name, Ns in dataset.datasets_validatation_filter.items():
116 | t, ang_gt, p_gt, v_gt, u = prepare_data(args, dataset, dataset_name, 0)
117 | p_gt = p_gt.double()
118 | Rot_gt = torch.zeros(Ns[1], 3, 3)
119 | for k in range(Ns[1]):
120 | ang_k = ang_gt[k]
121 | Rot_gt[k] = TORCHIEKF.from_rpy(ang_k[0], ang_k[1], ang_k[2]).double()
122 | list_rpe_validation[dataset_name] = compute_delta_p(Rot_gt[:Ns[1]], p_gt[:Ns[1]])
123 |
124 | list_rpe_ = copy.deepcopy(list_rpe)
125 | dataset.list_rpe = {}
126 | for dataset_name, rpe in list_rpe_.items():
127 | if len(rpe[0]) != 0:
128 | dataset.list_rpe[dataset_name] = list_rpe[dataset_name]
129 | else:
130 | dataset.datasets_train_filter.pop(dataset_name)
131 | list_rpe.pop(dataset_name)
132 | cprint("%s has too much dirty data, it's removed from training list" % dataset_name, 'yellow')
133 |
134 | list_rpe_validation_ = copy.deepcopy(list_rpe_validation)
135 | dataset.list_rpe_validation = {}
136 | for dataset_name, rpe in list_rpe_validation_.items():
137 | if len(rpe[0]) != 0:
138 | dataset.list_rpe_validation[dataset_name] = list_rpe_validation[dataset_name]
139 | else:
140 | dataset.datasets_validatation_filter.pop(dataset_name)
141 | list_rpe_validation.pop(dataset_name)
142 | cprint("%s has too much dirty data, it's removed from validation list" % dataset_name, 'yellow')
143 | mondict = {
144 | 'list_rpe': list_rpe, 'list_rpe_validation': list_rpe_validation,
145 | }
146 | dataset.dump(mondict, file_delta_p)
147 |
148 |
149 | def train_loop(args, dataset, epoch, iekf, optimizer, seq_dim):
150 | loss_train = 0
151 | optimizer.zero_grad()
152 | for i, (dataset_name, Ns) in enumerate(dataset.datasets_train_filter.items()):
153 | t, ang_gt, p_gt, v_gt, u, N0 = prepare_data_filter(dataset, dataset_name, Ns,
154 | iekf, seq_dim)
155 |
156 | loss = mini_batch_step(dataset, dataset_name, iekf,
157 | dataset.list_rpe[dataset_name], t, ang_gt, p_gt, v_gt, u, N0)
158 |
159 | if loss == -1 or torch.isnan(loss):
160 | cprint("{} loss is invalid".format(i), 'yellow')
161 | continue
162 | elif loss > max_loss:
163 | cprint("{} loss is too high {:.5f}".format(i, loss), 'yellow')
164 | continue
165 | else:
166 | loss_train += loss
167 | cprint("{} loss: {:.5f}".format(i, loss))
168 |
169 | if loss_train == 0:
170 | return
171 | loss_train.backward() # loss_train.cuda().backward()
172 | g_norm = torch.nn.utils.clip_grad_norm_(iekf.parameters(), max_grad_norm)
173 | if np.isnan(g_norm) or g_norm > 3*max_grad_norm:
174 | cprint("gradient norm: {:.5f}".format(g_norm), 'yellow')
175 | optimizer.zero_grad()
176 |
177 | else:
178 | optimizer.step()
179 | optimizer.zero_grad()
180 | cprint("gradient norm: {:.5f}".format(g_norm))
181 | print('Train Epoch: {:2d} \tLoss: {:.5f}'.format(epoch, loss_train))
182 | return loss_train
183 |
184 |
185 | def save_iekf(args, iekf):
186 | file_name = os.path.join(args.path_temp, "iekfnets.p")
187 | torch.save(iekf.state_dict(), file_name)
188 | print("The IEKF nets are saved in the file " + file_name)
189 |
190 |
191 | def mini_batch_step(dataset, dataset_name, iekf, list_rpe, t, ang_gt, p_gt, v_gt, u, N0):
192 | iekf.set_Q()
193 | measurements_covs = iekf.forward_nets(u)
194 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i = iekf.run(t, u, measurements_covs,
195 | v_gt, p_gt, t.shape[0],
196 | ang_gt[0])
197 | delta_p, delta_p_gt = precompute_lost(Rot, p, list_rpe, N0)
198 | if delta_p is None:
199 | return -1
200 | loss = criterion(delta_p, delta_p_gt)
201 | return loss
202 |
203 |
204 | def set_optimizer(iekf):
205 | param_list = [{'params': iekf.initprocesscov_net.parameters(),
206 | 'lr': lr_initprocesscov_net,
207 | 'weight_decay': weight_decay_initprocesscov_net}]
208 | for key, value in lr_mesnet.items():
209 | param_list.append({'params': getattr(iekf.mes_net, key).parameters(),
210 | 'lr': value,
211 | 'weight_decay': weight_decay_mesnet[key]
212 | })
213 | optimizer = torch.optim.Adam(param_list)
214 | return optimizer
215 |
216 |
217 | def prepare_data_filter(dataset, dataset_name, Ns, iekf, seq_dim):
218 | # get data with trainable instant
219 | t, ang_gt, p_gt, v_gt, u = dataset.get_data(dataset_name)
220 | t = t[Ns[0]: Ns[1]]
221 | ang_gt = ang_gt[Ns[0]: Ns[1]]
222 | p_gt = p_gt[Ns[0]: Ns[1]] - p_gt[Ns[0]]
223 | v_gt = v_gt[Ns[0]: Ns[1]]
224 | u = u[Ns[0]: Ns[1]]
225 |
226 | # subsample data
227 | N0, N = get_start_and_end(seq_dim, u)
228 | t = t[N0: N].double()
229 | ang_gt = ang_gt[N0: N].double()
230 | p_gt = (p_gt[N0: N] - p_gt[N0]).double()
231 | v_gt = v_gt[N0: N].double()
232 | u = u[N0: N].double()
233 |
234 | # add noise
235 | if iekf.mes_net.training:
236 | u = dataset.add_noise(u)
237 |
238 | return t, ang_gt, p_gt, v_gt, u, N0
239 |
240 |
241 | def get_start_and_end(seq_dim, u):
242 | if seq_dim is None:
243 | N0 = 0
244 | N = u.shape[0]
245 | else: # training sequence
246 | N0 = 10 * int(np.random.randint(0, (u.shape[0] - seq_dim)/10))
247 | N = N0 + seq_dim
248 | return N0, N
249 |
250 |
251 | def precompute_lost(Rot, p, list_rpe, N0):
252 | N = p.shape[0]
253 | Rot_10_Hz = Rot[::10]
254 | p_10_Hz = p[::10]
255 | idxs_0 = torch.Tensor(list_rpe[0]).clone().long() - int(N0 / 10)
256 | idxs_end = torch.Tensor(list_rpe[1]).clone().long() - int(N0 / 10)
257 | delta_p_gt = list_rpe[2]
258 | idxs = torch.Tensor(idxs_0.shape[0]).byte()
259 | idxs[:] = 1
260 | idxs[idxs_0 < 0] = 0
261 | idxs[idxs_end >= int(N / 10)] = 0
262 | delta_p_gt = delta_p_gt[idxs]
263 | idxs_end_bis = idxs_end[idxs]
264 | idxs_0_bis = idxs_0[idxs]
265 | if len(idxs_0_bis) == 0:
266 | return None, None
267 | else:
268 | delta_p = Rot_10_Hz[idxs_0_bis].transpose(-1, -2).matmul(
269 | (p_10_Hz[idxs_end_bis] - p_10_Hz[idxs_0_bis]).unsqueeze(-1)).squeeze()
270 | distance = delta_p_gt.norm(dim=1).unsqueeze(-1)
271 | return delta_p.double() / distance.double(), delta_p_gt.double() / distance.double()
272 |
--------------------------------------------------------------------------------
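
The loss built in `mini_batch_step`/`precompute_lost` above compares relative displacements rather than absolute positions: for each index pair collected by `compute_delta_p`, the filter's displacement `Rot[idx_0]^T (p[idx_end] - p[idx_0])` is matched against the ground-truth one, both normalized by the travelled distance. A toy numerical illustration of that relative-displacement quantity, with made-up poses (not KITTI data):

```
import math
import torch


def relative_displacement(Rot, p, i0, i_end):
    # displacement from pose i0 to i_end, expressed in the frame at i0
    return Rot[i0].t().matmul(p[i_end] - p[i0])


# toy trajectory: constant 30 degree heading, 1 m per step along the body x-axis
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
R = torch.tensor([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
Rot = R.expand(11, 3, 3)                        # same orientation at every step
p = torch.arange(11.).unsqueeze(1) * R[:, 0]    # positions along the heading direction

d = relative_displacement(Rot, p, 0, 10)
print(d)   # ~tensor([10., 0., 0.]): 10 m forward, no lateral/vertical drift
```
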
/src/utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 | import torch
4 | import matplotlib.pyplot as plt
5 |
6 |
7 | def prepare_data(args, dataset, dataset_name, i, idx_start=None, idx_end=None, to_numpy=False):
8 | # get data
9 | t, ang_gt, p_gt, v_gt, u = dataset.get_data(dataset_name)
10 |
11 | # get start instant
12 | if idx_start is None:
13 | idx_start = 0
14 | # get end instant
15 | if idx_end is None:
16 | idx_end = t.shape[0]
17 |
18 | t = t[idx_start: idx_end]
19 | u = u[idx_start: idx_end]
20 | ang_gt = ang_gt[idx_start: idx_end]
21 | v_gt = v_gt[idx_start: idx_end]
22 | p_gt = p_gt[idx_start: idx_end] - p_gt[idx_start]
23 |
24 | if to_numpy:
25 | t = t.cpu().double().numpy()
26 | u = u.cpu().double().numpy()
27 | ang_gt = ang_gt.cpu().double().numpy()
28 | v_gt = v_gt.cpu().double().numpy()
29 | p_gt = p_gt.cpu().double().numpy()
30 | return t, ang_gt, p_gt, v_gt, u
31 |
32 |
33 | def create_folder(directory):
34 | try:
35 | if not os.path.exists(directory):
36 | os.makedirs(directory)
37 | except OSError:
38 | print('Error creating directory: ' + directory)
39 |
40 |
41 |
42 | def umeyama_alignment(x, y, with_scale=False):
43 | """
44 | Computes the least-squares Sim(m) transformation parameters that minimize the distance
45 | between two sets of registered points.
46 |
47 | Umeyama, Shinji: Least-squares estimation of transformation parameters
48 | between two point patterns. IEEE PAMI, 1991
49 | :param x: mxn matrix of points, m = dimension, n = nr. of data points
50 | :param y: mxn matrix of points, m = dimension, n = nr. of data points
51 | :param with_scale: set to True to align also the scale (default: 1.0 scale)
52 | :return: r, t, c - rotation matrix, translation vector and scale factor
53 | """
54 |
55 |
56 | # m = dimension, n = nr. of data points
57 | m, n = x.shape
58 |
59 | # means, eq. 34 and 35
60 | mean_x = x.mean(axis=1)
61 | mean_y = y.mean(axis=1)
62 |
63 | # variance, eq. 36
64 | # "transpose" for column subtraction
65 | sigma_x = 1.0 / n * (np.linalg.norm(x - mean_x[:, np.newaxis])**2)
66 |
67 | # covariance matrix, eq. 38
68 | outer_sum = np.zeros((m, m))
69 | for i in range(n):
70 | outer_sum += np.outer((y[:, i] - mean_y), (x[:, i] - mean_x))
71 | cov_xy = np.multiply(1.0 / n, outer_sum)
72 |
73 | # SVD (text betw. eq. 38 and 39)
74 | u, d, v = np.linalg.svd(cov_xy)
75 |
76 | # S matrix, eq. 43
77 | s = np.eye(m)
78 | if np.linalg.det(u) * np.linalg.det(v) < 0.0:
79 | # Ensure a RHS coordinate system (Kabsch algorithm).
80 | s[m - 1, m - 1] = -1
81 |
82 | # rotation, eq. 40
83 | r = u.dot(s).dot(v)
84 |
85 | # scale & translation, eq. 42 and 41
86 | c = 1 / sigma_x * np.trace(np.diag(d).dot(s)) if with_scale else 1.0
87 | t = mean_y - np.multiply(c, r.dot(mean_x))
88 |
89 | return r, t, c
90 |
91 |
--------------------------------------------------------------------------------
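
A quick sanity check of `umeyama_alignment` on synthetic points (assuming `src/` is on the Python path): apply a known rotation and translation to a point cloud and verify that the estimated parameters recover them.

```
import numpy as np
from utils import umeyama_alignment   # assumes src/ is on the Python path

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 50))          # 3 x n source points

# known ground-truth transform: rotation about z by 0.3 rad plus a translation
ang = 0.3
r_true = np.array([[np.cos(ang), -np.sin(ang), 0.],
                   [np.sin(ang),  np.cos(ang), 0.],
                   [0., 0., 1.]])
t_true = np.array([1.0, -2.0, 0.5])
y = r_true @ x + t_true[:, None]

r, t, c = umeyama_alignment(x, y)
print(np.allclose(r, r_true), np.allclose(t, t_true), c)   # True True 1.0
```
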
/src/utils_numpy_filter.py:
--------------------------------------------------------------------------------
1 | import matplotlib.pyplot as plt
2 | import numpy as np
3 | np.set_printoptions(precision=2)
4 | import scipy.linalg
5 | from termcolor import cprint
6 | from utils import *
7 |
8 |
9 | class NUMPYIEKF:
10 | Id2 = np.eye(2)
11 | Id3 = np.eye(3)
12 | Id6 = np.eye(6)
13 | IdP = np.eye(21)
14 |
15 | def __init__(self, parameter_class=None):
16 |
17 | # variables to initialize with `filter_parameters`
18 | self.g = None
19 | self.cov_omega = None
20 | self.cov_acc = None
21 | self.cov_b_omega = None
22 | self.cov_b_acc = None
23 | self.cov_Rot_c_i = None
24 | self.cov_t_c_i = None
25 | self.cov_lat = None
26 | self.cov_up = None
27 | self.cov_b_omega0 = None
28 | self.cov_b_acc0 = None
29 | self.cov_Rot0 = None
30 | self.cov_v0 = None
31 | self.cov_Rot_c_i0 = None
32 | self.cov_t_c_i0 = None
33 | self.Q = None
34 | self.Q_dim = None
35 | self.n_normalize_rot = None
36 | self.n_normalize_rot_c_i = None
37 | self.P_dim = None
38 | self.verbose = None
39 |
40 | # set the parameters
41 | if parameter_class is None:
42 | filter_parameters = NUMPYIEKF.Parameters()
43 | else:
44 | filter_parameters = parameter_class()
45 | self.filter_parameters = filter_parameters
46 | self.set_param_attr()
47 |
48 | class Parameters:
49 | g = np.array([0, 0, -9.80665])
50 | """gravity vector"""
51 |
52 | P_dim = 21
53 | """covariance dimension"""
54 |
55 | Q_dim = 18
56 | """process noise covariance dimension"""
57 |
58 | # Process noise covariance
59 | cov_omega = 1e-3
60 | """gyro covariance"""
61 | cov_acc = 1e-2
62 | """accelerometer covariance"""
63 | cov_b_omega = 6e-9
64 | """gyro bias covariance"""
65 | cov_b_acc = 2e-4
66 | """accelerometer bias covariance"""
67 | cov_Rot_c_i = 1e-9
68 | """car to IMU orientation covariance"""
69 | cov_t_c_i = 1e-9
70 | """car to IMU translation covariance"""
71 |
72 | cov_lat = 0.2
73 | """Zero lateral velocity covariance"""
74 | cov_up = 300
75 | """Zero lateral velocity covariance"""
76 |
77 | cov_Rot0 = 1e-3
78 | """initial pitch and roll covariance"""
79 | cov_b_omega0 = 6e-3
80 | """initial gyro bias covariance"""
81 | cov_b_acc0 = 4e-3
82 | """initial accelerometer bias covariance"""
83 | cov_v0 = 1e-1
84 | """initial velocity covariance"""
85 | cov_Rot_c_i0 = 1e-6
86 | """initial car to IMU pitch and roll covariance"""
87 | cov_t_c_i0 = 5e-3
88 | """initial car to IMU translation covariance"""
89 |
90 | # numerical parameters
91 | n_normalize_rot = 100
92 | """timestamp before normalizing orientation"""
93 | n_normalize_rot_c_i = 1000
94 | """timestamp before normalizing car to IMU orientation"""
95 |
96 | def __init__(self, **kwargs):
97 | self.set(**kwargs)
98 |
99 | def set(self, **kwargs):
100 | for key, value in kwargs.items():
101 | setattr(self, key, value)
102 |
103 | def set_param_attr(self):
104 |
105 | # get a list of attribute only
106 | attr_list = [a for a in dir(self.filter_parameters) if not a.startswith('__')
107 | and not callable(getattr(self.filter_parameters, a))]
108 | for attr in attr_list:
109 | setattr(self, attr, getattr(self.filter_parameters, attr))
110 |
111 | self.Q = np.diag([self.cov_omega, self.cov_omega, self.cov_omega,
112 | self.cov_acc, self.cov_acc, self.cov_acc,
113 | self.cov_b_omega, self.cov_b_omega, self.cov_b_omega,
114 | self.cov_b_acc, self.cov_b_acc, self.cov_b_acc,
115 | self.cov_Rot_c_i, self.cov_Rot_c_i, self.cov_Rot_c_i,
116 | self.cov_t_c_i, self.cov_t_c_i, self.cov_t_c_i])
117 |
118 | def run(self, t, u, measurements_covs, v_mes, p_mes, N, ang0):
119 | dt = t[1:] - t[:-1] # (s)
120 | if N is None:
121 | N = u.shape[0]
122 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P = self.init_run(dt, u, p_mes, v_mes,
123 | ang0, N)
124 |
125 | for i in range(1, N):
126 | Rot[i], v[i], p[i], b_omega[i], b_acc[i], Rot_c_i[i], t_c_i[i], P = \
127 | self.propagate(Rot[i-1], v[i-1], p[i-1], b_omega[i-1], b_acc[i-1], Rot_c_i[i-1],
128 | t_c_i[i-1], P, u[i], dt[i-1])
129 |
130 | Rot[i], v[i], p[i], b_omega[i], b_acc[i], Rot_c_i[i], t_c_i[i], P = \
131 | self.update(Rot[i], v[i], p[i], b_omega[i], b_acc[i], Rot_c_i[i], t_c_i[i], P, u[i],
132 | i, measurements_covs[i])
133 | # correct numerical error every second
134 | if i % self.n_normalize_rot == 0:
135 | Rot[i] = self.normalize_rot(Rot[i])
136 | # correct numerical error every 10 seconds
137 | if i % self.n_normalize_rot_c_i == 0:
138 | Rot_c_i[i] = self.normalize_rot(Rot_c_i[i])
139 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i
140 |
141 | def init_run(self, dt, u, p_mes, v_mes, ang0, N):
142 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i = self.init_saved_state(dt, N, ang0)
143 | Rot[0] = self.from_rpy(ang0[0], ang0[1], ang0[2])
144 | v[0] = v_mes[0]
145 | P = self.init_covariance()
146 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P
147 |
148 | def init_covariance(self):
149 | P = np.zeros((self.P_dim, self.P_dim))
150 | P[:2, :2] = self.cov_Rot0*self.Id2 # no yaw error
151 | P[3:5, 3:5] = self.cov_v0*self.Id2
152 | P[9:12, 9:12] = self.cov_b_omega0*self.Id3
153 | P[12:15, 12:15] = self.cov_b_acc0*self.Id3
154 | P[15:18, 15:18] = self.cov_Rot_c_i0*self.Id3
155 | P[18:21, 18:21] = self.cov_t_c_i0*self.Id3
156 | return P
157 |
158 | def init_saved_state(self, dt, N, ang0):
159 | Rot = np.zeros((N, 3, 3))
160 | v = np.zeros((N, 3))
161 | p = np.zeros((N, 3))
162 | b_omega = np.zeros((N, 3))
163 | b_acc = np.zeros((N, 3))
164 | Rot_c_i = np.zeros((N, 3, 3))
165 | t_c_i = np.zeros((N, 3))
166 | Rot_c_i[0] = np.eye(3)
167 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i
168 |
169 | def propagate(self, Rot_prev, v_prev, p_prev, b_omega_prev, b_acc_prev, Rot_c_i_prev,
170 | t_c_i_prev, P_prev, u, dt):
171 | acc = Rot_prev.dot(u[3:6] - b_acc_prev) + self.g
172 | v = v_prev + acc * dt
173 | p = p_prev + v_prev*dt + 1/2 * acc * dt**2
174 | omega = u[:3] - b_omega_prev
175 | Rot = Rot_prev.dot(self.so3exp(omega * dt))
176 | b_omega = b_omega_prev
177 | b_acc = b_acc_prev
178 | Rot_c_i = Rot_c_i_prev
179 | t_c_i = t_c_i_prev
180 | P = self.propagate_cov(P_prev, Rot_prev, v_prev, p_prev, b_omega_prev,
181 | b_acc_prev, u, dt)
182 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P
183 |
184 | def propagate_cov(self, P_prev, Rot_prev, v_prev, p_prev, b_omega_prev,
185 | b_acc_prev, u, dt):
186 | F = np.zeros((self.P_dim, self.P_dim))
187 | G = np.zeros((self.P_dim, self.Q_dim))
188 | v_skew_rot = self.skew(v_prev).dot(Rot_prev)
189 | p_skew_rot = self.skew(p_prev).dot(Rot_prev)
190 |
191 | F[3:6, :3] = self.skew(self.g)
192 | F[6:9, 3:6] = self.Id3
193 | G[3:6, 3:6] = Rot_prev
194 | F[3:6, 12:15] = -Rot_prev
195 | G[:3, :3] = Rot_prev
196 | G[3:6, :3] = v_skew_rot
197 | G[6:9, :3] = p_skew_rot
198 | F[:3, 9:12] = -Rot_prev
199 | F[3:6, 9:12] = -v_skew_rot
200 | F[6:9, 9:12] = -p_skew_rot
201 | G[9:15, 6:12] = self.Id6
202 | G[15:18, 12:15] = self.Id3
203 | G[18:21, 15:18] = self.Id3
204 |
205 | F = F * dt
206 | G = G * dt
207 | F_square = F.dot(F)
208 | F_cube = F_square.dot(F)
209 | Phi = self.IdP + F + 1/2*F_square + 1/6*F_cube
210 | P = Phi.dot(P_prev + G.dot(self.Q).dot(G.T)).dot(Phi.T)
211 | return P
212 |
213 | def update(self, Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P, u, i, measurement_cov):
214 | # orientation of body frame
215 | Rot_body = Rot.dot(Rot_c_i)
216 | # velocity in imu frame
217 | v_imu = Rot.T.dot(v)
218 | # velocity in body frame
219 | v_body = Rot_c_i.T.dot(v_imu)
220 | # velocity in body frame in the vehicle axis
221 | v_body += self.skew(t_c_i).dot(u[:3] - b_omega)
222 | Omega = self.skew(u[:3] - b_omega)
223 |
224 | # Jacobian w.r.t. car frame
225 | H_v_imu = Rot_c_i.T.dot(self.skew(v_imu))
226 | H_t_c_i = -self.skew(t_c_i)
227 |
228 | H = np.zeros((2, self.P_dim))
229 | H[:, 3:6] = Rot_body.T[1:]
230 | H[:, 15:18] = H_v_imu[1:]
231 | H[:, 9:12] = H_t_c_i[1:]
232 | H[:, 18:21] = -Omega[1:]
233 | r = - v_body[1:]
234 | R = np.diag(measurement_cov)
235 | Rot_up, v_up, p_up, b_omega_up, b_acc_up, Rot_c_i_up, t_c_i_up, P_up = \
236 | self.state_and_cov_update(Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P, H, r, R)
237 | return Rot_up, v_up, p_up, b_omega_up, b_acc_up, Rot_c_i_up, t_c_i_up, P_up
238 |
239 | @staticmethod
240 | def state_and_cov_update(Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P, H, r, R):
241 | S = H.dot(P).dot(H.T) + R
242 | K = (np.linalg.solve(S, P.dot(H.T).T)).T
243 | dx = K.dot(r)
244 |
245 | dR, dxi = NUMPYIEKF.sen3exp(dx[:9])
246 | dv = dxi[:, 0]
247 | dp = dxi[:, 1]
248 | Rot_up = dR.dot(Rot)
249 | v_up = dR.dot(v) + dv
250 | p_up = dR.dot(p) + dp
251 |
252 | b_omega_up = b_omega + dx[9:12]
253 | b_acc_up = b_acc + dx[12:15]
254 |
255 | dR = NUMPYIEKF.so3exp(dx[15:18])
256 | Rot_c_i_up = dR.dot(Rot_c_i)
257 | t_c_i_up = t_c_i + dx[18:21]
258 |
259 | I_KH = NUMPYIEKF.IdP - K.dot(H)
260 | P_up = I_KH.dot(P).dot(I_KH.T) + K.dot(R).dot(K.T)
261 | P_up = (P_up + P_up.T)/2
262 | return Rot_up, v_up, p_up, b_omega_up, b_acc_up, Rot_c_i_up, t_c_i_up, P_up
263 |
264 | @staticmethod
265 | def skew(x):
266 | X = np.array([[0, -x[2], x[1]],
267 | [x[2], 0, -x[0]],
268 | [-x[1], x[0], 0]])
269 | return X
270 |
271 | @staticmethod
272 | def rot_from_2_vectors(v1, v2):
273 | v1 = v1/np.linalg.norm(v1)
274 | v2 = v2/np.linalg.norm(v2)
275 | v = np.cross(v1, v2)
276 | cosang = np.dot(v1, v2)
277 | sinang = np.linalg.norm(v)
278 | Rot = NUMPYIEKF.Id3 + NUMPYIEKF.skew(v) + \
279 | NUMPYIEKF.skew(v).dot(NUMPYIEKF.skew(v))*(1-cosang)/(sinang**2)
280 | Rot = NUMPYIEKF.normalize_rot(Rot)
281 | return Rot
282 |
283 | @staticmethod
284 | def sen3exp(xi):
285 | phi = xi[:3]
286 |
287 | angle = np.linalg.norm(phi)
288 |
289 | # Near |phi|==0, use first order Taylor expansion
290 | if np.abs(angle) < 1e-8:
291 | skew_phi = np.array([[0, -phi[2], phi[1]],
292 | [phi[2], 0, -phi[0]],
293 | [-phi[1], phi[0], 0]])
294 | J = NUMPYIEKF.Id3 + 0.5 * skew_phi
295 | Rot = NUMPYIEKF.Id3 + skew_phi
296 | else:
297 | axis = phi / angle
298 | skew_axis = np.array([[0, -axis[2], axis[1]],
299 | [axis[2], 0, -axis[0]],
300 | [-axis[1], axis[0], 0]])
301 | s = np.sin(angle)
302 | c = np.cos(angle)
303 | J = (s / angle) * NUMPYIEKF.Id3 \
304 | + (1 - s / angle) * np.outer(axis, axis) + ((1 - c) / angle) * skew_axis
305 | Rot = c * NUMPYIEKF.Id3 + (1 - c) * np.outer(axis, axis) + s * skew_axis
306 |
307 | x = J.dot(xi[3:].reshape(-1, 3).T)
308 | return Rot, x
309 |
310 | @staticmethod
311 | def so3exp(phi):
312 | angle = np.linalg.norm(phi)
313 |
314 | # Near phi==0, use first order Taylor expansion
315 | if np.abs(angle) < 1e-8:
316 | skew_phi = np.array([[0, -phi[2], phi[1]],
317 | [phi[2], 0, -phi[0]],
318 | [-phi[1], phi[0], 0]])
319 | return np.identity(3) + skew_phi
320 |
321 | axis = phi / angle
322 | skew_axis = np.array([[0, -axis[2], axis[1]],
323 | [axis[2], 0, -axis[0]],
324 | [-axis[1], axis[0], 0]])
325 | s = np.sin(angle)
326 | c = np.cos(angle)
327 |
328 | return c * NUMPYIEKF.Id3 + (1 - c) * np.outer(axis, axis) + s * skew_axis
329 |
330 | @staticmethod
331 | def so3left_jacobian(phi):
332 | """
333 |
334 | :param phi:
335 | :return:
336 | """
337 |
338 | angle = np.linalg.norm(phi)
339 |
340 | # Near |phi|==0, use first order Taylor expansion
341 | if np.abs(angle) < 1e-8:
342 | skew_phi = np.array([[0, -phi[2], phi[1]],
343 | [phi[2], 0, -phi[0]],
344 | [-phi[1], phi[0], 0]])
345 | return NUMPYIEKF.Id3 + 0.5 * skew_phi
346 |
347 | axis = phi / angle
348 | skew_axis = np.array([[0, -axis[2], axis[1]],
349 | [axis[2], 0, -axis[0]],
350 | [-axis[1], axis[0], 0]])
351 | s = np.sin(angle)
352 | c = np.cos(angle)
353 |
354 | return (s / angle) * NUMPYIEKF.Id3 \
355 | + (1 - s / angle) * np.outer(axis, axis) + ((1 - c) / angle) * skew_axis
356 |
357 | @staticmethod
358 | def normalize_rot(Rot):
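  | # Project the matrix onto SO(3): closest rotation in the Frobenius sense,
  | # with the determinant forced to +1 via the sign of det(U) * det(V).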
359 |
360 | # The SVD is commonly written as a = U S V.H.
361 | # The v returned by this function is V.H and u = U.
362 | U, _, V = np.linalg.svd(Rot, full_matrices=False)
363 |
364 | S = np.eye(3)
365 | S[2, 2] = np.linalg.det(U) * np.linalg.det(V)
366 | return U.dot(S).dot(V)
367 |
368 | @staticmethod
369 | def from_rpy(roll, pitch, yaw):
370 | return NUMPYIEKF.rotz(yaw).dot(NUMPYIEKF.roty(pitch).dot(NUMPYIEKF.rotx(roll)))
371 |
372 | @staticmethod
373 | def rotx(t):
374 | c = np.cos(t)
375 | s = np.sin(t)
376 | return np.array([[1, 0, 0],
377 | [0, c, -s],
378 | [0, s, c]])
379 |
380 | @staticmethod
381 | def roty(t):
382 | c = np.cos(t)
383 | s = np.sin(t)
384 | return np.array([[c, 0, s],
385 | [0, 1, 0],
386 | [-s, 0, c]])
387 |
388 | @staticmethod
389 | def rotz(t):
390 | c = np.cos(t)
391 | s = np.sin(t)
392 | return np.array([[c, -s, 0],
393 | [s, c, 0],
394 | [0, 0, 1]])
395 |
396 | @staticmethod
397 | def to_rpy(Rot):
398 | pitch = np.arctan2(-Rot[2, 0], np.sqrt(Rot[0, 0]**2 + Rot[1, 0]**2))
399 |
400 | if np.isclose(pitch, np.pi / 2.):
401 | yaw = 0.
402 | roll = np.arctan2(Rot[0, 1], Rot[1, 1])
403 | elif np.isclose(pitch, -np.pi / 2.):
404 | yaw = 0.
405 | roll = -np.arctan2(Rot[0, 1], Rot[1, 1])
406 | else:
407 | sec_pitch = 1. / np.cos(pitch)
408 | yaw = np.arctan2(Rot[1, 0] * sec_pitch,
409 | Rot[0, 0] * sec_pitch)
410 | roll = np.arctan2(Rot[2, 1] * sec_pitch,
411 | Rot[2, 2] * sec_pitch)
412 | return roll, pitch, yaw
413 |
414 | def set_learned_covariance(self, torch_iekf):
415 | torch_iekf.set_Q()
416 | self.Q = torch_iekf.Q.cpu().detach().numpy()
417 |
418 | beta = torch_iekf.initprocesscov_net.init_cov(torch_iekf)\
419 | .detach().cpu().numpy()
420 |
421 | self.cov_Rot0 *= beta[0]
422 | self.cov_v0 *= beta[1]
423 | self.cov_b_omega0 *= beta[2]
424 | self.cov_b_acc0 *= beta[3]
425 | self.cov_Rot_c_i0 *= beta[4]
426 | self.cov_t_c_i0 *= beta[5]
427 |
--------------------------------------------------------------------------------
/src/utils_plot.py:
--------------------------------------------------------------------------------
1 | import matplotlib
2 | import os
3 | from termcolor import cprint
4 | import matplotlib.pyplot as plt
5 | import numpy as np
6 | from itertools import chain
7 | from utils import *
8 | from utils_torch_filter import TORCHIEKF
9 |
10 | def results_filter(args, dataset):
11 |
12 | for i in range(0, len(dataset.datasets)):
13 | plt.close('all')
14 | dataset_name = dataset.dataset_name(i)
15 | file_name = os.path.join(dataset.path_results, dataset_name + "_filter.p")
16 | if not os.path.exists(file_name):
17 | print('No result for ' + dataset_name)
18 | continue
19 |
20 | print("\nResults for: " + dataset_name)
21 |
22 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, measurements_covs = dataset.get_estimates(
23 | dataset_name)
24 |
25 | # get data
26 | t, ang_gt, p_gt, v_gt, u = dataset.get_data(dataset_name)
27 | # get data for nets
28 | u_normalized = dataset.normalize(u).numpy()
29 | # shift for better viewing
30 | u_normalized[:, [0, 3]] += 5
31 | u_normalized[:, [2, 5]] -= 5
32 |
33 | t = (t - t[0]).numpy()
34 | u = u.cpu().numpy()
35 | ang_gt = ang_gt.cpu().numpy()
36 | v_gt = v_gt.cpu().numpy()
37 | p_gt = (p_gt - p_gt[0]).cpu().numpy()
38 | print("Total sequence time: {:.2f} s".format(t[-1]))
39 |
40 | ang = np.zeros((Rot.shape[0], 3))
41 | Rot_gt = torch.zeros((Rot.shape[0], 3, 3))
42 | for j in range(Rot.shape[0]):
43 | roll, pitch, yaw = TORCHIEKF.to_rpy(torch.from_numpy(Rot[j]))
44 | ang[j, 0] = roll.numpy()
45 | ang[j, 1] = pitch.numpy()
46 | ang[j, 2] = yaw.numpy()
47 | # unwrap
48 | Rot_gt[j] = TORCHIEKF.from_rpy(torch.Tensor([ang_gt[j, 0]]),
49 | torch.Tensor([ang_gt[j, 1]]),
50 | torch.Tensor([ang_gt[j, 2]]))
51 | roll, pitch, yaw = TORCHIEKF.to_rpy(Rot_gt[j])
52 | ang_gt[j, 0] = roll.numpy()
53 | ang_gt[j, 1] = pitch.numpy()
54 | ang_gt[j, 2] = yaw.numpy()
55 |
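  | # rigidly align the estimated trajectory to the ground truth (Umeyama alignment)
  | # so that the aligned xy-plot below is directly comparable to the ground-truth path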
56 | Rot_align, t_align, _ = umeyama_alignment(p_gt[:, :3].T, p[:, :3].T)
57 | p_align = (Rot_align.T.dot(p[:, :3].T)).T - Rot_align.T.dot(t_align)
58 | v_norm = np.sqrt(np.sum(v_gt ** 2, 1))
59 | v_norm /= np.max(v_norm)
60 |
61 | # Compute various errors
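  | # MATE: per-timestep mean absolute position error over x, y (z kept separately);
  | # CATE: cumulative sum of the MATE over time;
  | # rmse_xy: per-timestep horizontal error norm scaled by 1/2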
62 | error_p = np.abs(p_gt - p)
63 | # MATE
64 | mate_xy = np.mean(error_p[:, :2], 1)
65 | mate_z = error_p[:, 2]
66 |
67 | # CATE
68 | cate_xy = np.cumsum(mate_xy)
69 | cate_z = np.cumsum(mate_z)
70 |
71 | # RMSE
72 | rmse_xy = 1 / 2 * np.sqrt(error_p[:, 0] ** 2 + error_p[:, 1] ** 2)
73 | rmse_z = error_p[:, 2]
74 |
75 | RotT = torch.from_numpy(Rot).float().transpose(-1, -2)
76 |
77 | v_r = (RotT.matmul(torch.from_numpy(v).float().unsqueeze(-1)).squeeze()).numpy()
78 | v_r_gt = (Rot_gt.transpose(-1, -2).matmul(
79 | torch.from_numpy(v_gt).float().unsqueeze(-1)).squeeze()).numpy()
80 |
81 | p_r = (RotT.matmul(torch.from_numpy(p).float().unsqueeze(-1)).squeeze()).numpy()
82 | p_bis = (Rot_gt.matmul(torch.from_numpy(p_r).float().unsqueeze(-1)).squeeze()).numpy()
83 | error_p = p_gt - p_bis
84 |
85 | # plot and save plot
86 | folder_path = os.path.join(args.path_results, dataset_name)
87 | create_folder(folder_path)
88 |
89 | # position, velocity and velocity in body frame
90 | fig1, axs1 = plt.subplots(3, 1, sharex=True, figsize=(20, 10))
91 | # orientation, bias gyro and bias accelerometer
92 | fig2, axs2 = plt.subplots(3, 1, sharex=True, figsize=(20, 10))
93 | # position in the horizontal plane
94 | fig3, ax3 = plt.subplots(figsize=(20, 10))
95 | # position in the horizontal plane after alignment
96 | fig4, ax4 = plt.subplots(figsize=(20, 10))
97 | # Measurement covariance in log scale and normalized inputs
98 | fig5, axs5 = plt.subplots(3, 1, sharex=True, figsize=(20, 10))
99 | # input: gyro, accelerometer
100 | fig6, axs6 = plt.subplots(2, 1, sharex=True, figsize=(20, 10))
101 | # errors: MATE, CATE, RMSE
102 | fig7, axs7 = plt.subplots(3, 1, sharex=True, figsize=(20, 10))
103 |
104 | axs1[0].plot(t, p_gt)
105 | axs1[0].plot(t, p)
106 | axs1[1].plot(t, v_gt)
107 | axs1[1].plot(t, v)
108 | axs1[2].plot(t, v_r_gt)
109 | axs1[2].plot(t, v_r)
110 |
111 | axs2[0].plot(t, ang_gt)
112 | axs2[0].plot(t, ang)
113 | axs2[1].plot(t, b_omega)
114 | axs2[2].plot(t, b_acc)
115 |
116 | ax3.plot(p_gt[:, 0], p_gt[:, 1])
117 | ax3.plot(p[:, 0], p[:, 1])
118 | ax3.axis('equal')
119 | ax4.plot(p_gt[:, 0], p_gt[:, 1])
120 | ax4.plot(p_align[:, 0], p_align[:, 1])
121 | ax4.axis('equal')
122 |
123 | axs5[0].plot(t, np.log10(measurements_covs))
124 | axs5[1].plot(t, u_normalized[:, :3])
125 | axs5[2].plot(t, u_normalized[:, 3:])
126 |
127 | axs6[0].plot(t, u[:, :3])
128 | axs6[1].plot(t, u[:, 3:6])
129 |
130 | axs7[0].plot(t, mate_xy)
131 | axs7[0].plot(t, mate_z)
132 | axs7[0].plot(t, rmse_xy)
133 | axs7[0].plot(t, rmse_z)
134 | axs7[1].plot(t, cate_xy)
135 | axs7[1].plot(t, cate_z)
136 | axs7[2].plot(t, error_p)
137 |
138 | axs1[0].set(xlabel='time (s)', ylabel='$\mathbf{p}_n$ (m)', title="Position")
139 | axs1[1].set(xlabel='time (s)', ylabel='$\mathbf{v}_n$ (m/s)', title="Velocity")
140 | axs1[2].set(xlabel='time (s)', ylabel='$\mathbf{R}_n^T \mathbf{v}_n$ (m/s)',
141 | title="Velocity in body frame")
142 | axs2[0].set(xlabel='time (s)', ylabel=r'$\phi_n, \theta_n, \psi_n$ (rad)',
143 | title="Orientation")
144 | axs2[1].set(xlabel='time (s)', ylabel=r'$\mathbf{b}_{n}^{\mathbf{\omega}}$ (rad/s)',
145 | title="Bias gyro")
146 | axs2[2].set(xlabel='time (s)', ylabel=r'$\mathbf{b}_{n}^{\mathbf{a}}$ (m/$\mathrm{s}^2$)',
147 | title="Bias accelerometer")
148 | ax3.set(xlabel=r'$p_n^x$ (m)', ylabel=r'$p_n^y$ (m)', title="Position on $xy$")
149 | ax4.set(xlabel=r'$p_n^x$ (m)', ylabel=r'$p_n^y$ (m)', title="Aligned position on $xy$")
150 | axs5[0].set(xlabel='time (s)', ylabel=r' $\mathrm{cov}(\mathbf{y}_{n})$ (log scale)',
151 | title="Covariance on the zero lateral and vertical velocity measurements (log "
152 | "scale)")
153 | axs5[1].set(xlabel='time (s)', ylabel=r'Normalized gyro measurements',
154 | title="Normalized gyro measurements")
155 | axs5[2].set(xlabel='time (s)', ylabel=r'Normalized accelerometer measurements',
156 | title="Normalized accelerometer measurements")
157 | axs6[0].set(xlabel='time (s)', ylabel=r'$\omega^x_n, \omega^y_n, \omega^z_n$ (rad/s)',
158 | title="Gyrometer")
159 | axs6[1].set(xlabel='time (s)', ylabel=r'$a^x_n, a^y_n, a^z_n$ (m/$\mathrm{s}^2$)',
160 | title="Accelerometer")
161 | axs7[0].set(xlabel='time (s)', ylabel=r'$|| \mathbf{p}_{n}-\hat{\mathbf{p}}_{n} ||$ (m)',
162 | title="Mean Absolute Trajectory Error (MATE) and Root Mean Square Error (RMSE)")
163 | axs7[1].set(xlabel='time (s)',
164 | ylabel=r'$\Sigma_{i=0}^{n} || \mathbf{p}_{i}-\hat{\mathbf{p}}_{i} ||$ (m)',
165 | title="Cumulative Absolute Trajectory Error (CATE)")
166 | axs7[2].set(xlabel='time (s)', ylabel=r' $\mathbf{\xi}_{n}^{\mathbf{p}}$',
167 | title="$SE(3)$ error on position")
168 |
169 | for ax in chain(axs1, axs2, axs5, axs6, axs7):
170 | ax.grid()
171 | ax3.grid()
172 | ax4.grid()
173 | axs1[0].legend(
174 | ['$p_n^x$', '$p_n^y$', '$p_n^z$', '$\hat{p}_n^x$', '$\hat{p}_n^y$', '$\hat{p}_n^z$'])
175 | axs1[1].legend(
176 | ['$v_n^x$', '$v_n^y$', '$v_n^z$', '$\hat{v}_n^x$', '$\hat{v}_n^y$', '$\hat{v}_n^z$'])
177 | axs1[2].legend(
178 | ['$v_n^x$', '$v_n^y$', '$v_n^z$', '$\hat{v}_n^x$', '$\hat{v}_n^y$', '$\hat{v}_n^z$'])
179 | axs2[0].legend([r'$\phi_n^x$', r'$\theta_n^y$', r'$\psi_n^z$', r'$\hat{\phi}_n^x$',
180 | r'$\hat{\theta}_n^y$', r'$\hat{\psi}_n^z$'])
181 | axs2[1].legend(
182 | ['$b_n^x$', '$b_n^y$', '$b_n^z$', '$\hat{b}_n^x$', '$\hat{b}_n^y$', '$\hat{b}_n^z$'])
183 | axs2[2].legend(
184 | ['$b_n^x$', '$b_n^y$', '$b_n^z$', '$\hat{b}_n^x$', '$\hat{b}_n^y$', '$\hat{b}_n^z$'])
185 | ax3.legend(['ground-truth trajectory', 'proposed'])
186 | ax4.legend(['ground-truth trajectory', 'proposed'])
187 | axs5[0].legend(['zero lateral velocity', 'zero vertical velocity'])
188 | axs6[0].legend(['$\omega_n^x$', '$\omega_n^y$', '$\omega_n^z$'])
189 | axs6[1].legend(['$a_n^x$', '$a_n^y$', '$a_n^z$'])
190 | if u.shape[1] > 6:
191 | axs6[2].legend(['$m_n^x$', '$m_n^y$', '$m_n^z$'])
192 | axs7[0].legend(['MATE xy', 'MATE z', 'RMSE xy', 'RMSE z'])
193 | axs7[1].legend(['CATE xy', 'CATE z'])
194 |
195 | # save figures
196 | figs = [fig1, fig2, fig3, fig4, fig5, fig6, fig7, ]
197 | figs_name = ["position_velocity", "orientation_bias", "position_xy", "position_xy_aligned",
198 | "measurements_covs", "imu", "errors", "errors2"]
199 | for l, fig in enumerate(figs):
200 | fig_name = figs_name[l]
201 | fig.savefig(os.path.join(folder_path, fig_name + ".png"))
202 |
203 | plt.show(block=True)
204 |
205 |
206 |
207 |
--------------------------------------------------------------------------------
/src/utils_torch_filter.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import matplotlib.pyplot as plt
3 | import numpy as np
4 | import os
5 | import time
6 | from termcolor import cprint
7 | from utils_numpy_filter import NUMPYIEKF
8 | from utils import prepare_data
9 |
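  | # InitProcessCovNet holds two small linear layers whose outputs, squashed as
  | # beta = 10**tanh(.), act as learned scale factors on the base initial state
  | # covariance and on the process noise covariance (see init_covariance and set_Q
  | # in TORCHIEKF below).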
10 | class InitProcessCovNet(torch.nn.Module):
11 |
12 | def __init__(self):
13 | super(InitProcessCovNet, self).__init__()
14 |
15 | self.beta_process = 3*torch.ones(2).double()
16 | self.beta_initialization = 3*torch.ones(2).double()
17 |
18 | self.factor_initial_covariance = torch.nn.Linear(1, 6, bias=False).double()
19 | """parameters for initializing covariance"""
20 | self.factor_initial_covariance.weight.data[:] /= 10
21 |
22 | self.factor_process_covariance = torch.nn.Linear(1, 6, bias=False).double()
23 | """parameters for process noise covariance"""
24 | self.factor_process_covariance.weight.data[:] /= 10
25 | self.tanh = torch.nn.Tanh()
26 |
27 | def forward(self, iekf):
28 | return
29 |
30 | def init_cov(self, iekf):
31 | alpha = self.factor_initial_covariance(torch.ones(1).double()).squeeze()
32 | beta = 10**(self.tanh(alpha))
33 | return beta
34 |
35 | def init_processcov(self, iekf):
36 | alpha = self.factor_process_covariance(torch.ones(1).double())
37 | beta = 10**(self.tanh(alpha))
38 | return beta
39 |
40 |
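  | # MesNet is the measurement-covariance network: two Conv1d layers (the second
  | # dilated) over the 6-channel normalized IMU signal, followed by a linear layer
  | # that outputs two values per time step; scaled by beta_measurement and applied
  | # as powers of 10, they modulate cov0_measurement, i.e. the noise of the zero
  | # lateral and zero vertical velocity pseudo-measurements.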
41 | class MesNet(torch.nn.Module):
42 | def __init__(self):
43 | super(MesNet, self).__init__()
44 | self.beta_measurement = 3*torch.ones(2).double()
45 | self.tanh = torch.nn.Tanh()
46 |
47 | self.cov_net = torch.nn.Sequential(torch.nn.Conv1d(6, 32, 5),
48 | torch.nn.ReplicationPad1d(4),
49 | torch.nn.ReLU(),
50 | torch.nn.Dropout(p=0.5),
51 | torch.nn.Conv1d(32, 32, 5, dilation=3),
52 | torch.nn.ReplicationPad1d(4),
53 | torch.nn.ReLU(),
54 | torch.nn.Dropout(p=0.5),
55 | ).double()
56 | "CNN for measurement covariance"
57 | self.cov_lin = torch.nn.Sequential(torch.nn.Linear(32, 2),
58 | torch.nn.Tanh(),
59 | ).double()
60 | self.cov_lin[0].bias.data[:] /= 100
61 | self.cov_lin[0].weight.data[:] /= 100
62 |
63 | def forward(self, u, iekf):
64 | y_cov = self.cov_net(u).transpose(0, 2).squeeze()
65 | z_cov = self.cov_lin(y_cov)
66 | z_cov_net = self.beta_measurement.unsqueeze(0)*z_cov
67 | measurements_covs = (iekf.cov0_measurement.unsqueeze(0) * (10**z_cov_net))
68 | return measurements_covs
69 |
70 |
71 | class TORCHIEKF(torch.nn.Module, NUMPYIEKF):
72 | Id1 = torch.eye(1).double()
73 | Id2 = torch.eye(2).double()
74 | Id3 = torch.eye(3).double()
75 | Id6 = torch.eye(6).double()
76 | IdP = torch.eye(21).double()
77 |
78 | def __init__(self, parameter_class=None):
79 | torch.nn.Module.__init__(self)
80 | NUMPYIEKF.__init__(self, parameter_class=None)
81 |
82 | # mean and standard deviation used to normalize the inputs
83 | self.u_loc = None
84 | self.u_std = None
85 | self.initprocesscov_net = InitProcessCovNet()
86 | self.mes_net = MesNet()
87 | self.cov0_measurement = None
88 |
89 | # modified parameters
90 | self.IdP = torch.eye(self.P_dim).double()
91 |
92 | if parameter_class is not None:
93 | self.filter_parameters = parameter_class()
94 | self.set_param_attr()
95 |
96 | def set_param_attr(self):
98 | # collect the non-callable public attributes of the parameter class
98 | attr_list = [a for a in dir(self.filter_parameters) if not a.startswith('__')
99 | and not callable(getattr(self.filter_parameters, a))]
100 | for attr in attr_list:
101 | setattr(self, attr, getattr(self.filter_parameters, attr))
102 |
103 | self.Q = torch.diag(torch.Tensor([self.cov_omega, self.cov_omega, self.cov_omega,
104 | self.cov_acc, self.cov_acc, self.cov_acc,
105 | self.cov_b_omega, self.cov_b_omega, self.cov_b_omega,
106 | self.cov_b_acc, self.cov_b_acc, self.cov_b_acc,
107 | self.cov_Rot_c_i, self.cov_Rot_c_i, self.cov_Rot_c_i,
108 | self.cov_t_c_i, self.cov_t_c_i, self.cov_t_c_i])
109 | ).double()
110 | self.cov0_measurement = torch.Tensor([self.cov_lat, self.cov_up]).double()
111 |
112 | def run(self, t, u, measurements_covs, v_mes, p_mes, N, ang0):
113 |
114 | dt = t[1:] - t[:-1] # (s)
115 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P = self.init_run(dt, u, p_mes, v_mes,
116 | N, ang0)
117 |
118 | for i in range(1, N):
119 | Rot_i, v_i, p_i, b_omega_i, b_acc_i, Rot_c_i_i, t_c_i_i, P_i = \
120 | self.propagate(Rot[i-1], v[i-1], p[i-1], b_omega[i-1], b_acc[i-1], Rot_c_i[i-1],
121 | t_c_i[i-1], P, u[i], dt[i-1])
122 |
123 | Rot[i], v[i], p[i], b_omega[i], b_acc[i], Rot_c_i[i], t_c_i[i], P = \
124 | self.update(Rot_i, v_i, p_i, b_omega_i, b_acc_i, Rot_c_i_i, t_c_i_i, P_i,
125 | u[i], i, measurements_covs[i])
126 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i
127 |
128 | def init_run(self, dt, u, p_mes, v_mes, N, ang0):
129 | Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i = \
130 | self.init_saved_state(dt, N, ang0)
131 | Rot[0] = self.from_rpy(ang0[0], ang0[1], ang0[2])
132 | v[0] = v_mes[0]
133 | P = self.init_covariance()
134 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P
135 |
136 | def init_covariance(self):
137 | beta = self.initprocesscov_net.init_cov(self)
138 | P = torch.zeros(self.P_dim, self.P_dim).double()
139 | P[:2, :2] = self.cov_Rot0*beta[0]*self.Id2 # no yaw error
140 | P[3:5, 3:5] = self.cov_v0*beta[1]*self.Id2
141 | P[9:12, 9:12] = self.cov_b_omega0*beta[2]*self.Id3
142 | P[12:15, 12:15] = self.cov_b_acc0*beta[3]*self.Id3
143 | P[15:18, 15:18] = self.cov_Rot_c_i0*beta[4]*self.Id3
144 | P[18:21, 18:21] = self.cov_t_c_i0*beta[5]*self.Id3
145 | return P
146 |
147 |
148 | def init_saved_state(self, dt, N, ang0):
149 | Rot = dt.new_zeros(N, 3, 3)
150 | v = dt.new_zeros(N, 3)
151 | p = dt.new_zeros(N, 3)
152 | b_omega = dt.new_zeros(N, 3)
153 | b_acc = dt.new_zeros(N, 3)
154 | Rot_c_i = dt.new_zeros(N, 3, 3)
155 | t_c_i = dt.new_zeros(N, 3)
156 | Rot_c_i[0] = torch.eye(3).double()
157 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i
158 |
159 | def propagate(self, Rot_prev, v_prev, p_prev, b_omega_prev, b_acc_prev, Rot_c_i_prev, t_c_i_prev,
160 | P_prev, u, dt):
161 | Rot_prev = Rot_prev.clone()
162 | acc_b = u[3:6] - b_acc_prev
163 | acc = Rot_prev.mv(acc_b) + self.g
164 | v = v_prev + acc * dt
165 | p = p_prev + v_prev.clone() * dt + 1/2 * acc * dt**2
166 |
167 | omega = (u[:3] - b_omega_prev)*dt
168 | Rot = Rot_prev.mm(self.so3exp(omega))
169 |
170 | b_omega = b_omega_prev
171 | b_acc = b_acc_prev
172 | Rot_c_i = Rot_c_i_prev.clone()
173 | t_c_i = t_c_i_prev
174 |
175 | P = self.propagate_cov(P_prev, Rot_prev, v_prev, p_prev, b_omega_prev, b_acc_prev,
176 | u, dt)
177 | return Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P
178 |
179 | def propagate_cov(self, P, Rot_prev, v_prev, p_prev, b_omega_prev, b_acc_prev, u,
180 | dt):
181 |
182 | F = P.new_zeros(self.P_dim, self.P_dim)
183 | G = P.new_zeros(self.P_dim, self.Q.shape[0])
184 | Q = self.Q.clone()
185 | F[3:6, :3] = self.skew(self.g)
186 | F[6:9, 3:6] = self.Id3
187 | G[3:6, 3:6] = Rot_prev
188 | F[3:6, 12:15] = -Rot_prev
189 | v_skew_rot = self.skew(v_prev).mm(Rot_prev)
190 | p_skew_rot = self.skew(p_prev).mm(Rot_prev)
191 | G[:3, :3] = Rot_prev
192 | G[3:6, :3] = v_skew_rot
193 | G[6:9, :3] = p_skew_rot
194 | F[:3, 9:12] = -Rot_prev
195 | F[3:6, 9:12] = -v_skew_rot
196 | F[6:9, 9:12] = -p_skew_rot
197 | G[9:12, 6:9] = self.Id3
198 | G[12:15, 9:12] = self.Id3
199 | G[15:18, 12:15] = self.Id3
200 | G[18:21, 15:18] = self.Id3
201 |
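  | # discretize: Phi approximates expm(F * dt) by a third-order Taylor expansion,
  | # and G (scaled by dt) injects the continuous-time process noise Q over the step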
202 | F = F * dt
203 | G = G * dt
204 | F_square = F.mm(F)
205 | F_cube = F_square.mm(F)
206 | Phi = self.IdP + F + 1/2*F_square + 1/6*F_cube
207 | P_new = Phi.mm(P + G.mm(Q).mm(G.t())).mm(Phi.t())
208 | return P_new
209 |
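  | # The update uses pseudo-measurements: the lateral (y) and vertical (z) velocity
  | # of the car frame are assumed to be zero, so the residual is r = -v_body[1:]
  | # and H keeps only the corresponding two rows; measurement_cov comes from MesNet
  | # and sets how strongly each pseudo-measurement is trusted.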
210 | def update(self, Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P, u, i, measurement_cov):
211 | # orientation of body frame
212 | Rot_body = Rot.mm(Rot_c_i)
213 | # velocity in imu frame
214 | v_imu = Rot.t().mv(v)
215 | omega = u[:3] - b_omega
216 | # velocity in body frame
217 | v_body = Rot_c_i.t().mv(v_imu) + self.skew(t_c_i).mv(omega)
218 | Omega = self.skew(omega)
219 | # Jacobian in car frame
220 | H_v_imu = Rot_c_i.t().mm(self.skew(v_imu))
221 | H_t_c_i = self.skew(t_c_i)
222 |
223 | H = P.new_zeros(2, self.P_dim)
224 | H[:, 3:6] = Rot_body.t()[1:]
225 | H[:, 15:18] = H_v_imu[1:]
226 | H[:, 9:12] = H_t_c_i[1:]
227 | H[:, 18:21] = -Omega[1:]
228 | r = - v_body[1:]
229 | R = torch.diag(measurement_cov)
230 |
231 | Rot_up, v_up, p_up, b_omega_up, b_acc_up, Rot_c_i_up, t_c_i_up, P_up = \
232 | self.state_and_cov_update(Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P, H, r, R)
233 | return Rot_up, v_up, p_up, b_omega_up, b_acc_up, Rot_c_i_up, t_c_i_up, P_up
234 |
235 |
236 | @staticmethod
237 | def state_and_cov_update(Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i, P, H, r, R):
238 | S = H.mm(P).mm(H.t()) + R
239 | Kt, _ = torch.gesv(P.mm(H.t()).t(), S)
240 | K = Kt.t()
241 | dx = K.mv(r.view(-1))
242 |
243 | dR, dxi = TORCHIEKF.sen3exp(dx[:9])
244 | dv = dxi[:, 0]
245 | dp = dxi[:, 1]
246 | Rot_up = dR.mm(Rot)
247 | v_up = dR.mv(v) + dv
248 | p_up = dR.mv(p) + dp
249 |
250 | b_omega_up = b_omega + dx[9:12]
251 | b_acc_up = b_acc + dx[12:15]
252 |
253 | dR = TORCHIEKF.so3exp(dx[15:18])
254 | Rot_c_i_up = dR.mm(Rot_c_i)
255 | t_c_i_up = t_c_i + dx[18:21]
256 |
257 | I_KH = TORCHIEKF.IdP - K.mm(H)
258 | P_upprev = I_KH.mm(P).mm(I_KH.t()) + K.mm(R).mm(K.t())
259 | P_up = (P_upprev + P_upprev.t())/2
260 | return Rot_up, v_up, p_up, b_omega_up, b_acc_up, Rot_c_i_up, t_c_i_up, P_up
261 |
262 | @staticmethod
263 | def skew(x):
264 | X = torch.Tensor([[0, -x[2], x[1]],
265 | [x[2], 0, -x[0]],
266 | [-x[1], x[0], 0]]).double()
267 | return X
268 |
269 | @staticmethod
270 | def rot_from_2_vectors(v1, v2):
271 | """ Returns a Rotation matrix between vectors 'v1' and 'v2' """
272 | v1 = v1/torch.norm(v1)
273 | v2 = v2/torch.norm(v2)
274 | v = torch.cross(v1, v2)
275 | cosang = v1.matmul(v2)
276 | sinang = torch.norm(v)
277 | Rot = TORCHIEKF.Id3 + TORCHIEKF.skew(v) + \
278 | TORCHIEKF.skew(v).mm(TORCHIEKF.skew(v))*(1-cosang)/(sinang**2)
279 | return Rot
280 |
281 | @staticmethod
282 | def sen3exp(xi):
283 | phi = xi[:3]
284 | angle = torch.norm(phi)
285 |
286 | # Near |phi|==0, use first order Taylor expansion
287 | if isclose(angle, 0.):
288 | skew_phi = torch.Tensor([[0, -phi[2], phi[1]],
289 | [phi[2], 0, -phi[0]],
290 | [-phi[1], phi[0], 0]]).double()
291 | J = TORCHIEKF.Id3 + 0.5 * skew_phi
292 | Rot = TORCHIEKF.Id3 + skew_phi
293 | else:
294 | axis = phi / angle
295 | skew_axis = torch.Tensor([[0, -axis[2], axis[1]],
296 | [axis[2], 0, -axis[0]],
297 | [-axis[1], axis[0], 0]]).double()
298 | s = torch.sin(angle)
299 | c = torch.cos(angle)
300 |
301 | J = (s / angle) * TORCHIEKF.Id3 + (1 - s / angle) * TORCHIEKF.outer(axis, axis)\
302 | + ((1 - c) / angle) * skew_axis
303 | Rot = c * TORCHIEKF.Id3 + (1 - c) * TORCHIEKF.outer(axis, axis) \
304 | + s * skew_axis
305 |
306 | x = J.mm(xi[3:].view(-1, 3).t())
307 | return Rot, x
308 |
309 | @staticmethod
310 | def so3exp(phi):
311 | angle = phi.norm()
312 |
313 | # Near phi==0, use first order Taylor expansion
314 | if isclose(angle, 0.):
315 | skew_phi = torch.Tensor([[0, -phi[2], phi[1]],
316 | [phi[2], 0, -phi[0]],
317 | [-phi[1], phi[0], 0]]).double()
318 | Xi = TORCHIEKF.Id3 + skew_phi
319 | return Xi
320 | axis = phi / angle
321 | skew_axis = torch.Tensor([[0, -axis[2], axis[1]],
322 | [axis[2], 0, -axis[0]],
323 | [-axis[1], axis[0], 0]]).double()
324 | c = angle.cos()
325 | s = angle.sin()
326 | Xi = c * TORCHIEKF.Id3 + (1 - c) * TORCHIEKF.outer(axis, axis) \
327 | + s * skew_axis
328 | return Xi
329 |
330 | @staticmethod
331 | def outer(a, b):
332 | ab = a.view(-1, 1)*b.view(1, -1)
333 | return ab
334 |
335 | @staticmethod
336 | def so3left_jacobian(phi):
337 | angle = torch.norm(phi)
338 |
339 | # Near |phi|==0, use first order Taylor expansion
340 | if isclose(angle, 0.):
341 | skew_phi = torch.Tensor([[0, -phi[2], phi[1]],
342 | [phi[2], 0, -phi[0]],
343 | [-phi[1], phi[0], 0]]).double()
344 | return TORCHIEKF.Id3 + 0.5 * skew_phi
345 |
346 | axis = phi / angle
347 | skew_axis = torch.Tensor([[0, -axis[2], axis[1]],
348 | [axis[2], 0, -axis[0]],
349 | [-axis[1], axis[0], 0]]).double()
350 | s = torch.sin(angle)
351 | c = torch.cos(angle)
352 |
353 | return (s / angle) * TORCHIEKF.Id3 + (1 - s / angle) * TORCHIEKF.outer(axis, axis)\
354 | + ((1 - c) / angle) * skew_axis
355 |
356 | @staticmethod
357 | def to_rpy(Rot):
358 | """Convert a rotation matrix to RPY Euler angles."""
359 |
360 | pitch = torch.atan2(-Rot[2, 0], torch.sqrt(Rot[0, 0]**2 + Rot[1, 0]**2))
361 |
362 | if isclose(pitch, np.pi / 2.):
363 | yaw = pitch.new_zeros(1)
364 | roll = torch.atan2(Rot[0, 1], Rot[1, 1])
365 | elif isclose(pitch, -np.pi / 2.):
366 | yaw = pitch.new_zeros(1)
367 | roll = -torch.atan2(Rot[0, 1], Rot[1, 1])
368 | else:
369 | sec_pitch = 1. / pitch.cos()
370 | yaw = torch.atan2(Rot[1, 0] * sec_pitch, Rot[0, 0] * sec_pitch)
371 | roll = torch.atan2(Rot[2, 1] * sec_pitch, Rot[2, 2] * sec_pitch)
372 | return roll, pitch, yaw
373 |
374 | @staticmethod
375 | def from_rpy(roll, pitch, yaw):
376 | """Form a rotation matrix from RPY Euler angles."""
377 |
378 | return TORCHIEKF.rotz(yaw).mm(TORCHIEKF.roty(pitch).mm(TORCHIEKF.rotx(roll)))
379 |
380 | @staticmethod
381 | def rotx(t):
382 | """Rotation about the x-axis."""
383 |
384 | c = torch.cos(t)
385 | s = torch.sin(t)
386 | return t.new([[1, 0, 0],
387 | [0, c, -s],
388 | [0, s, c]])
389 |
390 | @staticmethod
391 | def roty(t):
392 | """Rotation about the y-axis."""
393 |
394 | c = torch.cos(t)
395 | s = torch.sin(t)
396 | return t.new([[c, 0, s],
397 | [0, 1, 0],
398 | [-s, 0, c]])
399 |
400 | @staticmethod
401 | def rotz(t):
402 | """Rotation about the z-axis."""
403 |
404 | c = torch.cos(t)
405 | s = torch.sin(t)
406 | return t.new([[c, -s, 0],
407 | [s, c, 0],
408 | [0, 0, 1]])
409 |
410 | @staticmethod
411 | def normalize_rot(rot):
412 | # U, S, V = torch.svd(A) returns the singular value
413 | # decomposition of a real matrix A of size (n x m) such that A=USV′.
414 | # Irrespective of the original strides, the returned matrix U will
415 | # be transposed, i.e. with strides (1, n) instead of (n, 1).
416 |
417 | # note: torch.svd can be less accurate than NumPy's, but it is kept here to stay on tensors
418 | U, _, V = torch.svd(rot)
419 | S = torch.eye(3).double()
420 | S[2, 2] = torch.det(U) * torch.det(V)
421 | return U.mm(S).mm(V.t())
422 |
423 | def forward_nets(self, u):
424 | u_n = self.normalize_u(u).t().unsqueeze(0)
425 | u_n = u_n[:, :6]
426 | measurements_covs = self.mes_net(u_n, self)
427 | return measurements_covs
428 |
429 | def normalize_u(self, u):
430 | return (u-self.u_loc)/self.u_std
431 |
432 | def get_normalize_u(self, dataset):
433 | self.u_loc = dataset.normalize_factors['u_loc'].double()
434 | self.u_std = dataset.normalize_factors['u_std'].double()
435 |
436 | def set_Q(self):
437 | """
438 | Update the process noise covariance
439 | :return:
440 | """
441 |
442 | self.Q = torch.diag(torch.Tensor([self.cov_omega, self.cov_omega, self.cov_omega,
443 | self.cov_acc, self.cov_acc, self.cov_acc,
444 | self.cov_b_omega, self.cov_b_omega, self.cov_b_omega,
445 | self.cov_b_acc, self.cov_b_acc, self.cov_b_acc,
446 | self.cov_Rot_c_i, self.cov_Rot_c_i, self.cov_Rot_c_i,
447 | self.cov_t_c_i, self.cov_t_c_i, self.cov_t_c_i])
448 | ).double()
449 |
450 | beta = self.initprocesscov_net.init_processcov(self)
451 | self.Q = torch.zeros(self.Q.shape[0], self.Q.shape[0]).double()
452 | self.Q[:3, :3] = self.cov_omega*beta[0]*self.Id3
453 | self.Q[3:6, 3:6] = self.cov_acc*beta[1]*self.Id3
454 | self.Q[6:9, 6:9] = self.cov_b_omega*beta[2]*self.Id3
455 | self.Q[9:12, 9:12] = self.cov_b_acc*beta[3]*self.Id3
456 | self.Q[12:15, 12:15] = self.cov_Rot_c_i*beta[4]*self.Id3
457 | self.Q[15:18, 15:18] = self.cov_t_c_i*beta[5]*self.Id3
458 |
459 | def load(self, args, dataset):
460 | path_iekf = os.path.join(args.path_temp, "iekfnets.p")
461 | if os.path.isfile(path_iekf):
462 | mondict = torch.load(path_iekf)
463 | self.load_state_dict(mondict)
464 | cprint("IEKF nets loaded", 'green')
465 | else:
466 | cprint("IEKF nets NOT loaded", 'yellow')
467 | self.get_normalize_u(dataset)
468 |
469 |
470 | def isclose(mat1, mat2, tol=1e-10):
471 | return (mat1 - mat2).abs().lt(tol)
472 |
473 |
474 | def prepare_filter(args, dataset):
475 | # a single TORCHIEKF instance is enough: its parameters are set just below and
476 | # its weights are loaded further down
477 | torch_iekf = TORCHIEKF()
478 |
479 | # set dataset parameter
480 | torch_iekf.filter_parameters = args.parameter_class()
481 | torch_iekf.set_param_attr()
482 | if type(torch_iekf.g).__module__ == np.__name__:
483 | torch_iekf.g = torch.from_numpy(torch_iekf.g).double()
484 |
485 | # load model
486 | torch_iekf.load(args, dataset)
487 | torch_iekf.get_normalize_u(dataset)
488 |
489 | iekf = NUMPYIEKF(args.parameter_class)
490 | iekf.set_learned_covariance(torch_iekf)
491 | return iekf, torch_iekf
492 |
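  | # Hypothetical usage sketch (not part of the original file). It relies only on
  | # signatures visible in this repository: dataset.get_data(), forward_nets() and
  | # TORCHIEKF.run(); the variable names are illustrative.
  | #
  | #   iekf, torch_iekf = prepare_filter(args, dataset)
  | #   t, ang_gt, p_gt, v_gt, u = dataset.get_data(dataset_name)
  | #   measurements_covs = torch_iekf.forward_nets(u)
  | #   N = u.shape[0]
  | #   Rot, v, p, b_omega, b_acc, Rot_c_i, t_c_i = torch_iekf.run(
  | #       t, u, measurements_covs, v_gt, p_gt, N, ang_gt[0])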
--------------------------------------------------------------------------------
/temp/08.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbrossar/ai-imu-dr/32967812139af82cf12654c93438bf9065b7a241/temp/08.gif
--------------------------------------------------------------------------------
/temp/iekf.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbrossar/ai-imu-dr/32967812139af82cf12654c93438bf9065b7a241/temp/iekf.jpg
--------------------------------------------------------------------------------
/temp/structure.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbrossar/ai-imu-dr/32967812139af82cf12654c93438bf9065b7a241/temp/structure.jpg
--------------------------------------------------------------------------------
/temp/sup.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mbrossar/ai-imu-dr/32967812139af82cf12654c93438bf9065b7a241/temp/sup.pdf
--------------------------------------------------------------------------------