├── .gitignore ├── LICENSE ├── README.md └── tools ├── gt_flow ├── README.md ├── bag_indexer.py ├── calibration.py ├── compute_flow.py ├── downloader.py ├── extract_single_bag.py ├── ground_truth.py ├── image_reader_view_collection.py ├── image_view.py ├── image_view_collection.py ├── key_list_collection.py ├── run_all_experiments.py └── topic_reader.py └── gt_map_visualization ├── README.md ├── angular_velocity_from_bag.py ├── gps_from_bag.py ├── gt_trajectories_from_bag.py ├── plot_diff.py ├── plot_traj.py ├── traj_to_gpsvisualizer.py ├── utm ├── utm-0.4.2 ├── LICENSE ├── PKG-INFO ├── README.rst ├── scripts │ └── utm-converter ├── setup.py ├── test │ └── test_utm.py └── utm │ ├── __init__.py │ ├── conversion.py │ └── error.py └── velocity_from_traj.py /.gitignore: -------------------------------------------------------------------------------- 1 | website 2 | *.pyc -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Daniilidis Group University of Pennsylvania 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # GitHub repository of the MVSEC data set 2 | 3 | Please follow this link to the [project page](https://daniilidis-group.github.io/mvsec). 4 | -------------------------------------------------------------------------------- /tools/gt_flow/README.md: -------------------------------------------------------------------------------- 1 | # Ground Truth Optical Flow Generation 2 | This repo contains the code to generate the ground truth optical flow presented in "EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras". As the optical flow is entirely generated from the MVSEC dataset, and due to the large space requirements, we do not provide the pre-computed data directly. Instead, the code in this folder will allow you to generate the ground truth flow locally, given the ROS bags from MVSEC. 3 | 4 | Note that this folder will generate files which take up a lot of storage. For example, the indoor_flying sequences consume >25G of data. Please make sure your machine has enough storage before running the script. 
5 | 6 | ## Computing Flow 7 | The code expects the MVSEC ground truth bags to be in the following folder structure, with each group of sequences in its own folder: 8 | 9 | ├── indoor_flying 10 | │ ├── indoor_flying1_gt.bag 11 | │ ├── indoor_flying2_gt.bag 12 | │ ... 13 | ├── outdoor_day 14 | │ ├── outdoor_day1_gt.bag 15 | │ ... 16 | ... 17 | 18 | To extract ground truth optical flow from a single bag, run: 19 | `python extract_single_bag.py --mvsec_dir $MVSEC_DIR --sequence $SEQUENCE --sequence_number $SEQUENCE_NUM --save_numpy` 20 | where `$MVSEC_DIR` points to the top level of your MVSEC directory, `$SEQUENCE` is the name of the desired sequence (e.g. indoor_flying), and `$SEQUENCE_NUM` is the number of the desired sequence, e.g. 1. Additional options are: `--save_movie` to save a video file with the output flow for the entire sequence (this may take some time), and `--start_ind` and `--stop_ind` to set the start and stop indices within the ground truth pose/depth frames. 21 | 22 | To extract ground truth optical flow from all available bags (currently: indoor_flying, outdoor_day, outdoor_night), run: 23 | `python run_all_experiments.py --mvsec_dir $MVSEC_DIR`. Note that this will attempt to download the bag if it is not found. Please make sure the bag is present, with the directory structure above, if you do not want this to happen. 24 | 25 | ## Data Format 26 | The code will generate three files for each bag. For example, for `indoor_flying1`, it will generate (a short example of loading these files is given at the end of this README): 27 | - `indoor_flying1_gt.bag_index.npy`, which is used internally to index each bag for rapid reading from the bag in python in future runs. 28 | - `indoor_flying1_gt_flow_dist.npz`, which contains the ground truth flow for the distorted images. The keys for this numpy file are: 29 | - `timestamps` - The timestamp for each ground truth frame. 30 | - `x_flow_dist` - The optical flow in the x direction for the distorted (raw) camera image. 31 | - `y_flow_dist` - The optical flow in the y direction for the distorted (raw) camera image. 32 | - `indoor_flying1_odom.npz`, which contains the odometry camera poses and computed camera velocities. The keys for this numpy file are: 33 | - `timestamps` - The timestamp for each odometry value. 34 | - `pos` - The position of the camera at each time. Note that this is from the odometry, and so is prone to drift over time. 35 | - `quat` - The rotation of the camera at each time in quaternion form (w,x,y,z). Note that this is also from odometry. 36 | - `lin_vel` - The computed linear velocity of the camera at each time. 37 | - `ang_vel` - The computed angular velocity of the camera at each time. 38 | 39 | Note that this will only produce optical flow for the raw (distorted) image at this time. We plan to support flow in the rectified image in the future. 40 | 41 | ## Citations 42 | If you use this dataset, please cite the following paper: 43 | 44 | Zhu, A. Z., Yuan, L., Chaney, K., Daniilidis, K. "EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras." Robotics: Science and Systems (2018).
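
## Loading the Output (example)
As a quick check of the generated files, here is a minimal sketch (not part of the pipeline itself) that loads the flow and odometry archives with numpy and prints their shapes. The paths assume the `indoor_flying1` outputs written under your `--mvsec_dir`, using the `_gt_flow_dist.npz` name that `compute_flow.py` saves; adjust them to your setup.

    import numpy as np

    # Outputs are written next to the bags, e.g. $MVSEC_DIR/indoor_flying/indoor_flying1_*.npz
    flow = np.load("indoor_flying/indoor_flying1_gt_flow_dist.npz")
    odom = np.load("indoor_flying/indoor_flying1_odom.npz")

    print(flow["timestamps"].shape)   # (N,)       one timestamp per ground truth frame
    print(flow["x_flow_dist"].shape)  # (N, H, W)  x flow in the distorted (raw) image
    print(flow["y_flow_dist"].shape)  # (N, H, W)  y flow in the distorted (raw) image

    print(odom["pos"].shape)          # (N, 3)     camera position from odometry
    print(odom["quat"].shape)         # (N, 4)     orientation as (w, x, y, z)
    print(odom["lin_vel"].shape)      # (N, 3)     computed linear velocity
    print(odom["ang_vel"].shape)      # (N, 3)     computed angular velocity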
45 | -------------------------------------------------------------------------------- /tools/gt_flow/bag_indexer.py: -------------------------------------------------------------------------------- 1 | # Author Cody Phillips 2 | 3 | from __future__ import print_function 4 | from __future__ import division 5 | 6 | import os 7 | import topic_reader 8 | import bisect 9 | try: 10 | import rosbag 11 | import cv_bridge 12 | bridge = cv_bridge.CvBridge() 13 | except ImportError: 14 | pass 15 | import numpy as np 16 | 17 | class Stamp: 18 | def __init__(self, stamp=None): 19 | if stamp: 20 | self.secs = stamp.secs 21 | self.nsecs = stamp.nsecs 22 | self._secs = stamp.to_sec() 23 | else: 24 | self.secs = None 25 | self.nsecs = None 26 | self._secs = None 27 | 28 | def to_sec(self): 29 | return self._secs 30 | 31 | 32 | class Header: 33 | def __init__(self, info=None): 34 | if info: 35 | self.seq = info.seq 36 | self.frame_id = info.frame_id 37 | self.stamp = Stamp(info.stamp) 38 | else: 39 | self.seq = None 40 | self.frame_id = None 41 | self.stamp = Stamp(None) 42 | 43 | 44 | class BagInfo: 45 | def __init__(self): 46 | self.synced_image_count = dict() 47 | self.image_count = dict() 48 | 49 | 50 | def load_bag_info(filename): 51 | info_filename = filename + "_info.npy" 52 | try: 53 | info = np.load(info_filename).ravel()[0] 54 | return info 55 | except Exception: 56 | return BagInfo() 57 | 58 | 59 | def save_bag_info(filename, info): 60 | info_filename = filename + "_info.npy" 61 | np.save(info_filename, info) 62 | 63 | 64 | def save_indexer(filename, indexer): 65 | index_filename = filename + "_index.npy" 66 | np.save(index_filename, indexer) 67 | 68 | 69 | def load_indexer(filename): 70 | index_filename = filename + "_index.npy" 71 | return np.load(index_filename).ravel()[0] 72 | 73 | 74 | def get_bag_indexer(filename, sync_topic_list=[], topic_filters=[]): 75 | info = load_bag_info(filename) 76 | try: 77 | print("Loading indexer",filename) 78 | indexer = load_indexer(filename) 79 | indexer.set_bag(filename=filename) 80 | # indexer.sync_positions = dict() 81 | # indexer.compute_approximate_sync_positions(sync_topics) 82 | # np.save(index_filename, indexer) 83 | except Exception as ex: 84 | print(ex) 85 | print("Creating indexer") 86 | indexer = BagIndexer() 87 | indexer.set_bag(filename=filename) 88 | indexer.create_index() 89 | save_indexer(filename, indexer) 90 | updated = False 91 | for src_topic, dst_topic, filter_fun in topic_filters: 92 | if dst_topic in indexer.topics: 93 | continue 94 | print("Creating filtered topic") 95 | indexer.create_filtered_topic(src_topic, dst_topic, filter_fun) 96 | updated = True 97 | if updated: 98 | print("Saving updated indexer, filtered_topics") 99 | save_indexer(filename, indexer) 100 | updated = False 101 | for sync_topics, max_deltas in sync_topic_list: 102 | sync_info = indexer.get_sync_info(sync_topics) 103 | if sync_info is None: 104 | print("Sync info missing", sync_topics) 105 | else: 106 | print("Sync max deltas %s == %s" % (sync_info[1], max_deltas)) 107 | if sync_info and not isinstance(sync_info[1], list): 108 | print("Old style max delta encounters, not updating") 109 | continue 110 | if sync_info and sync_info[1] == max_deltas: 111 | continue 112 | updated = True 113 | print("Computing approximate sync positions") 114 | indexer.compute_approximate_sync_positions(sync_topics, max_deltas) 115 | if updated: 116 | print("Saving updated indexer, synced_topics") 117 | save_indexer(filename, indexer) 118 | updated = False 119 | for sync_topics, 
max_deltas in sync_topic_list: 120 | if not np.all(["image" in topic for topic in sync_topics]): 121 | continue 122 | for topic in sync_topics: 123 | count = len(indexer.get_sync_positions(topic, sync_topics)) 124 | updated = updated or topic not in info.synced_image_count 125 | print("Update sync count", topic, count) 126 | info.synced_image_count[topic] = count 127 | if updated: 128 | print("Saving bag info", filename) 129 | save_bag_info(filename, info) 130 | indexer.info = info 131 | return indexer 132 | 133 | 134 | class BagIndexer: 135 | def __init__(self, bag=None, filename=None): 136 | self.bag = None 137 | self.filename = None 138 | self.set_bag(bag=bag, filename=filename) 139 | self.sync_info = [] 140 | 141 | def create_filtered_topic(self, src_topic, dst_topic, filter_fun): 142 | self.ensure_open() 143 | reader = self.get_topic_reader("/tf") 144 | use = np.array([filter_fun(msg) for msg in reader]) 145 | timestamps = self.timestamps_of_topic[src_topic] 146 | positions = self.positions_of_topic[src_topic] 147 | self.timestamps_of_topic[dst_topic] = timestamps[use] 148 | self.positions_of_topic[dst_topic] = positions[use] 149 | self.topics.append(dst_topic) 150 | 151 | def set_bag(self, bag=None, filename=None): 152 | if bag is not None: 153 | self.bag = bag 154 | if filename is not None: 155 | self.filename = filename 156 | 157 | def ensure_open(self): 158 | if self.bag is None and self.filename is None: 159 | assert(False) 160 | if self.bag is None: 161 | self.open(self.filename) 162 | 163 | def open(self, filename): 164 | print("Opening bag", filename) 165 | bag = rosbag.Bag(filename) 166 | self.set_bag(bag=bag, filename=filename) 167 | 168 | def __getstate__(self): 169 | keys = ["topics", "positions_of_topic", 170 | "timestamps_of_topic", "sync_info"] 171 | state = dict([(k, self.__dict__[k]) for k in keys]) 172 | return state 173 | 174 | def __setstate__(self, state): 175 | self.__dict__.update(state) 176 | self.bag = None 177 | 178 | def create_index(self): 179 | self.ensure_open() 180 | self.positions_of_topic = dict() 181 | self.timestamps_of_topic = dict() 182 | self.topics = set() 183 | for key, indices in self.bag._connection_indexes.items(): 184 | topic = self.bag._connections[key].topic 185 | print("Indexing:", topic) 186 | self.topics.add(topic) 187 | positions = [] 188 | timestamps = [] 189 | for index in indices: 190 | position = index.position 191 | msg = self.bag._read_message(position).message 192 | if topic == "/tf": 193 | msg = msg.transforms[0] 194 | if not hasattr(msg, "header"): 195 | continue 196 | timestamp = msg.header.stamp.to_nsec() 197 | timestamps.append(timestamp) 198 | positions.append(position) 199 | if len(positions) == 0: 200 | continue 201 | if topic in self.positions_of_topic: 202 | old_positions = self.positions_of_topic[topic] 203 | old_timestamps = self.timestamps_of_topic[topic] 204 | assert(len(positions) == len(timestamps)) 205 | assert(len(old_positions) == len(old_timestamps)) 206 | positions = old_positions + positions 207 | timestamps = old_timestamps + timestamps 208 | self.positions_of_topic[topic] = positions 209 | self.timestamps_of_topic[topic] = timestamps 210 | for key, value in self.positions_of_topic.items(): 211 | self.positions_of_topic[key] = np.array(value) 212 | for key, value in self.timestamps_of_topic.items(): 213 | self.timestamps_of_topic[key] = np.array(value) 214 | self.topics = list(self.topics) 215 | 216 | def compute_approximate_sync_positions(self, topics, max_milli_sec_diffs): 217 | 
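        # Delegates to the module-level compute_approximate_sync_positions() defined near the
        # end of this file, which matches timestamps across the given topics and stores the
        # result on the indexer via set_sync_info().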
compute_approximate_sync_positions(self, topics, max_milli_sec_diffs) 218 | topics = tuple(topics) 219 | 220 | def compute_exact_sync_positions(self, topics): 221 | compute_exact_sync_positions(self, topics) 222 | 223 | def set_sync_info(self, sync_topics, max_deltas, sync_positions, sync_timestamps, indicator): 224 | sync_index = self.get_sync_index(sync_topics) 225 | info = (sync_topics, max_deltas, sync_positions, sync_timestamps, indicator) 226 | if sync_index: 227 | self.sync_info[sync_index] = info 228 | else: 229 | self.sync_info.append(info) 230 | 231 | def get_sync_info(self, sync_topics): 232 | sync_index = self.get_sync_index(sync_topics) 233 | if sync_index is None: 234 | return None 235 | else: 236 | return self.sync_info[sync_index] 237 | 238 | def get_sync_index(self, sync_topics): 239 | if len(self.sync_info) == 0: 240 | return None 241 | sync_topic_list = zip(*self.sync_info)[0] 242 | sync_topic_list = [sorted(stl) for stl in sync_topic_list] 243 | sync_topics = sorted(sync_topics) 244 | if sync_topics not in sync_topic_list: 245 | return None 246 | return sync_topic_list.index(sync_topics) 247 | 248 | def get_sync_positions(self, topic, sync_topics): 249 | sync_index = self.get_sync_index(sync_topics) 250 | sync_topics, _, sync_positions, _, _ = self.sync_info[sync_index] 251 | top_index = sync_topics.index(topic) 252 | return sync_positions[top_index] 253 | 254 | def get_sync_timestamps(self, topic, sync_topics): 255 | sync_index = self.get_sync_index(sync_topics) 256 | sync_topics, _, _, sync_timestamps, _ = self.sync_info[sync_index] 257 | top_index = sync_topics.index(topic) 258 | return sync_timestamps[top_index] 259 | 260 | def read_message(self, position): 261 | self.ensure_open() 262 | return self.bag._read_message(position) 263 | 264 | def get_topic_reader(self, topic): 265 | self.ensure_open() 266 | return topic_reader.TopicReader(self, topic) 267 | 268 | def get_image_topic_reader(self, topic, sync_topics=None): 269 | self.ensure_open() 270 | if sync_topics is not None: 271 | positions = self.get_sync_positions(topic, sync_topics) 272 | else: 273 | positions = None 274 | return topic_reader.ImageTopicReader(self, topic, positions) 275 | 276 | def get_synced_messages( 277 | self, 278 | topics, 279 | slc=slice(None), 280 | formatter_fun=lambda msgs: msgs): 281 | self.ensure_open() 282 | ts_positions = self.get_sync_positions(topics) 283 | for ts, positions in ts_positions[slc]: 284 | msgs = [self.read_message(position) for position in positions] 285 | yield formatter_fun(msgs) 286 | 287 | def compute_closest_sync_pair(Ti, Tj, closest_index, max_milli_sec_diff): 288 | inf = max_milli_sec_diff * 2 289 | def abs_milli_diff(ti, tj): 290 | return int(abs(ti - tj) // 1e6) 291 | nj = len(Tj) 292 | for i, ti in enumerate(Ti): 293 | ti = Ti[i] 294 | j_le = bisect.bisect_right(Tj, ti) 295 | j_gt = bisect.bisect_right(Tj, ti) 296 | if j_le: 297 | j_le -= 1 298 | tj_le = Tj[j_le] 299 | abs_le = abs_milli_diff(ti, tj_le) 300 | else: 301 | abs_le = inf 302 | if j_gt != nj: 303 | tj_gt = Tj[j_gt] 304 | abs_gt = abs_milli_diff(ti, tj_gt) 305 | else: 306 | abs_gt = inf 307 | if abs_gt >= inf and abs_le >= inf: 308 | continue 309 | closest_index[i] = j_le if abs_le < abs_gt else j_gt 310 | 311 | def compute_approximate_sync_pair(Ti, Tj, closest_index, max_milli_sec_diff): 312 | inf = max_milli_sec_diff * 2 313 | def abs_milli_diff(ti, tj): 314 | return int(abs(ti - tj) // 1e6) 315 | i = 0 316 | j = 0 317 | closest_delta = np.full(len(Ti), inf, dtype=np.int64) 318 | ni = len(Ti) 
319 | nj = len(Tj) 320 | while i < ni and j < nj: 321 | ti = Ti[i] 322 | tj = Tj[j] 323 | diff = abs_milli_diff(ti, tj) 324 | if diff > max_milli_sec_diff: 325 | if ti < tj: 326 | i += 1 327 | else: 328 | j += 1 329 | else: 330 | if diff < closest_delta[i]: # This must be true until its not 331 | closest_delta[i] = diff 332 | closest_index[i] = j 333 | j += 1 334 | else: 335 | i += 1 336 | 337 | 338 | def compute_most_recent_sync_pair(Ti, Tj, closest_index): 339 | # i is matched with j such that tj <= ti 340 | i = 0 341 | j = 0 342 | ni = len(Ti) 343 | nj = len(Tj) 344 | for i in xrange(ni): 345 | ti = Ti[i] 346 | while j < nj: 347 | tj = Tj[j] 348 | if tj > ti: 349 | break 350 | j += 1 351 | j -= 1 352 | closest_index[i] = j 353 | 354 | def compute_approximate_sync_positions(self, topics, max_milli_sec_diffs): 355 | topic_timestamps = [] 356 | for i, topic in enumerate(topics): 357 | timestamps = self.timestamps_of_topic[topic] 358 | topic_timestamps.append(timestamps) 359 | ref_timestamps = topic_timestamps[0] 360 | closest_indices_of_ts = np.full( 361 | (len(ref_timestamps), len(topic_timestamps) - 1), -1, dtype=np.int64) 362 | sync_data = enumerate(zip(topic_timestamps[1:], max_milli_sec_diffs)) 363 | for j, (timestamps, max_milli_sec_diff) in sync_data: 364 | closest_index = closest_indices_of_ts.T[j] 365 | if max_milli_sec_diff == -1: 366 | compute_most_recent_sync_pair(ref_timestamps, timestamps, 367 | closest_index) 368 | else: 369 | compute_closest_sync_pair(ref_timestamps, timestamps, 370 | closest_index, max_milli_sec_diff) 371 | #compute_approximate_sync_pair(ref_timestamps, timestamps, 372 | # closest_index, max_milli_sec_diff) 373 | print((closest_index!=-1).sum()) 374 | G = (closest_indices_of_ts != -1).all(1) 375 | self.orig_indices = np.hstack( 376 | (np.arange(len(G))[G][:, np.newaxis], closest_indices_of_ts[G])) 377 | 378 | positions = np.zeros((len(topics), G.sum(), 2), dtype=np.int64) 379 | timestamps = np.zeros((len(topics), G.sum()), dtype=np.int64) 380 | positions[0] = self.positions_of_topic[topics[0]][G] 381 | timestamps[0] = self.timestamps_of_topic[topics[0]][G] 382 | for i, topic in enumerate(topics[1:]): 383 | indices = closest_indices_of_ts.T[i][G] 384 | positions[i + 1] = self.positions_of_topic[topic][indices] 385 | timestamps[i + 1] = self.timestamps_of_topic[topic][indices] 386 | print([len(ts) for ts in topic_timestamps], positions.shape) 387 | self.set_sync_info(topics, max_milli_sec_diff, positions, timestamps, G) 388 | 389 | 390 | def compute_exact_sync_positions(self, topics): 391 | assert(False) 392 | ts_positions_dict = dict() 393 | n_topics = len(topics) 394 | for i, topic in enumerate(topics): 395 | timestamps = self.timestamps_of_topic[topic] 396 | positions = self.positions_of_topic[topic] 397 | for j, ts in enumerate(timestamps): 398 | ts_positions = ts_positions_dict[ts] if ts in ts_positions_dict else [ 399 | None] * n_topics 400 | ts_positions[i] = positions[j] 401 | ts_positions_dict[ts] = ts_positions 402 | ts_positions = sorted(ts_positions_dict.items()) 403 | ts_positions = [ps for ps in ts_positions if None not in ps] 404 | self.set_sync_info(topics, 0, ts_positions, None) 405 | -------------------------------------------------------------------------------- /tools/gt_flow/calibration.py: -------------------------------------------------------------------------------- 1 | """ Handles loading the calbiration for the mvsec rig 2 | """ 3 | 4 | import downloader 5 | import numpy as np 6 | import zipfile 7 | import yaml 8 | 9 | class 
Calibration(object): 10 | def __init__(self, experiment_name): 11 | self.experiment_name = experiment_name 12 | self.calib_zip_fn = downloader.get_calibration(experiment_name) 13 | with zipfile.ZipFile(self.calib_zip_fn) as calib_zip: 14 | self.left_map = np.stack(self.load_rectification_map(calib_zip, "left"), axis=2) 15 | self.right_map = np.stack(self.load_rectification_map(calib_zip, "right"), axis=2) 16 | self.intrinsic_extrinsic = self.load_yaml(calib_zip) 17 | 18 | def load_rectification_map(self, calib_zip, direction="left"): 19 | assert direction in ["left", "right"] 20 | 21 | x_name=self.experiment_name+"_"+direction+"_x_map.txt" 22 | with calib_zip.open(x_name) as f: 23 | x_map = np.loadtxt(f) 24 | 25 | y_name=self.experiment_name+"_"+direction+"_y_map.txt" 26 | with calib_zip.open(y_name) as f: 27 | y_map = np.loadtxt(f) 28 | 29 | return x_map, y_map 30 | 31 | def load_yaml(self, calib_zip): 32 | yaml_name = "camchain-imucam-"+self.experiment_name+".yaml" 33 | with calib_zip.open(yaml_name) as yaml_file: 34 | intrinsic_extrinsic = yaml.load(yaml_file) 35 | return intrinsic_extrinsic 36 | -------------------------------------------------------------------------------- /tools/gt_flow/compute_flow.py: -------------------------------------------------------------------------------- 1 | """ Computes optical flow from two poses and depth images """ 2 | 3 | import numpy as np 4 | import matplotlib 5 | matplotlib.use('Agg') 6 | import matplotlib.pyplot as plt 7 | import matplotlib.colors as colors 8 | from scipy.linalg import logm 9 | 10 | try: 11 | from quaternion import quaternion 12 | except ImportError: 13 | class quaternion: 14 | def __init__(self,w,x,y,z): 15 | self.w = w 16 | self.x = x 17 | self.y = y 18 | self.z = z 19 | 20 | def norm(self): 21 | return self.w**2 + self.x**2 + self.y**2 + self.z**2 22 | 23 | def inverse(self): 24 | qnorm = self.norm() 25 | return quaternion(self.w/qnorm, 26 | -self.x/qnorm, 27 | -self.y/qnorm, 28 | -self.z/qnorm) 29 | 30 | def __mul__(q1, q2): 31 | r = quaternion(q1.w*q2.w - q1.x*q2.x - q1.y*q2.y - q1.z*q2.z, 32 | q1.w*q2.x + q1.x*q2.w + q1.y*q2.z - q1.z*q2.y, 33 | q1.w*q2.y - q1.x*q2.z + q1.y*q2.w + q1.z*q2.x, 34 | q1.w*q2.z + q1.x*q2.y - q1.y*q2.x + q1.z*q2.w) 35 | return r 36 | 37 | def __rmul__(q1, s): 38 | return quaternion(q1.w*s, q1.x*s, q1.y*s, q1.z*s) 39 | 40 | def __sub__(q1, q2): 41 | r = quaternion(q1.w-q2.w, 42 | q1.x-q2.x, 43 | q1.y-q2.y, 44 | q1.z-q2.z) 45 | return r 46 | 47 | def __div__(q1, s): 48 | return quaternion(q1.w/s, q1.x/s, q1.y/s, q1.z/s) 49 | 50 | class Flow: 51 | """ 52 | - parameters 53 | - calibration :: a Calibration object from calibration.py 54 | """ 55 | def __init__(self, calibration): 56 | 57 | self.cal = calibration 58 | 59 | self.Pfx = self.cal.intrinsic_extrinsic['cam0']['projection_matrix'][0][0] 60 | self.Ppx = self.cal.intrinsic_extrinsic['cam0']['projection_matrix'][0][2] 61 | self.Pfy = self.cal.intrinsic_extrinsic['cam0']['projection_matrix'][1][1] 62 | self.Ppy = self.cal.intrinsic_extrinsic['cam0']['projection_matrix'][1][2] 63 | 64 | intrinsics = self.cal.intrinsic_extrinsic['cam0']['intrinsics'] 65 | self.P = np.array([[self.Pfx, 0., self.Ppx], 66 | [0. 
, self.Pfy, self.Ppy], 67 | [0., 0., 1.]]) 68 | 69 | self.K = np.array([[intrinsics[0], 0., intrinsics[2]], 70 | [0., intrinsics[1], intrinsics[3]], 71 | [0., 0., 1.]]) 72 | 73 | self.distortion_coeffs = np.array(self.cal.intrinsic_extrinsic['cam0']['distortion_coeffs']) 74 | resolution = self.cal.intrinsic_extrinsic['cam0']['resolution'] 75 | 76 | # number of pixels in the camera 77 | #self.x_map = (self.cal.left_map[:,:,0]-self.Ppx)/self.Pfx 78 | #self.y_map = (self.cal.left_map[:,:,1]-self.Ppy)/self.Pfy 79 | #self.flat_x_map = self.x_map.ravel() 80 | #self.flat_y_map = self.y_map.ravel() 81 | 82 | # left_map takes into account the rectification matrix, which rotates the image. 83 | # For optical flow in the distorted image, this rotation needs to be removed. 84 | # In the end it's easier just to recompute the map. 85 | 86 | x_inds, y_inds = np.meshgrid(np.arange(resolution[0]), 87 | np.arange(resolution[1])) 88 | x_inds = x_inds.astype(np.float32) 89 | y_inds = y_inds.astype(np.float32) 90 | 91 | x_inds -= self.P[0,2] 92 | x_inds *= (1./self.P[0,0]) 93 | 94 | y_inds -= self.P[1,2] 95 | y_inds *= (1./self.P[1,1]) 96 | 97 | self.flat_x_map = x_inds.reshape((-1)) 98 | self.flat_y_map = y_inds.reshape((-1)) 99 | 100 | N = self.flat_x_map.shape[0] 101 | 102 | self.omega_mat = np.zeros((N,2,3)) 103 | 104 | self.omega_mat[:,0,0] = self.flat_x_map * self.flat_y_map 105 | self.omega_mat[:,1,0] = 1+ np.square(self.flat_y_map) 106 | 107 | self.omega_mat[:,0,1] = -(1+np.square(self.flat_x_map)) 108 | self.omega_mat[:,1,1] = -(self.flat_x_map*self.flat_y_map) 109 | 110 | self.omega_mat[:,0,2] = self.flat_y_map 111 | self.omega_mat[:,1,2] = -self.flat_x_map 112 | 113 | self.hsv_buffer = None 114 | 115 | def compute_flow_single_frame(self, V, Omega, depth_image, dt): 116 | """ 117 | params: 118 | V : [3,1] 119 | Omega : [3,1] 120 | depth_image : [m,n] 121 | """ 122 | flat_depth = depth_image.ravel() 123 | # flat_depth[np.logical_or(np.isclose(flat_depth,0.0), flat_depth<0.)] 124 | mask = np.isfinite(flat_depth) 125 | 126 | fdm = 1./flat_depth[mask] 127 | fxm = self.flat_x_map[mask] 128 | fym = self.flat_y_map[mask] 129 | omm = self.omega_mat[mask,:,:] 130 | 131 | x_flow_out = np.zeros((depth_image.shape[0], depth_image.shape[1])) 132 | flat_x_flow_out = x_flow_out.reshape((-1)) 133 | flat_x_flow_out[mask] = fdm * (fxm*V[2]-V[0]) 134 | flat_x_flow_out[mask] += np.squeeze(np.dot(omm[:,0,:], Omega)) 135 | 136 | y_flow_out = np.zeros((depth_image.shape[0], depth_image.shape[1])) 137 | flat_y_flow_out = y_flow_out.reshape((-1)) 138 | flat_y_flow_out[mask] = fdm * (fym*V[2]-V[1]) 139 | flat_y_flow_out[mask] += np.squeeze(np.dot(omm[:,1,:], Omega)) 140 | 141 | flat_x_flow_out *= dt * self.P[0,0] 142 | flat_y_flow_out *= dt * self.P[1,1] 143 | 144 | """ 145 | plt.quiver(flat_distorted_x[::100], 146 | flat_distorted_y[::100], 147 | flat_distorted_x_flow_out[::100], 148 | flat_distorted_y_flow_out[::100]) 149 | 150 | plt.show() 151 | """ 152 | 153 | return x_flow_out, y_flow_out 154 | 155 | def rot_mat_from_quaternion(self, q): 156 | R = np.array([[1-2*q.y**2-2*q.z**2, 2*q.x*q.y+2*q.w*q.z, 2*q.x*q.z-2*q.w*q.y], 157 | [2*q.x*q.y-2*q.w*q.z, 1-2*q.x**2-2*q.z**2, 2*q.y*q.z+2*q.w*q.x], 158 | [2*q.x*q.z+2*q.w*q.y, 2*q.y*q.z-2*q.w*q.x, 1-2*q.x**2-2*q.y**2]]) 159 | return R 160 | 161 | def p_q_t_from_msg(self, msg): 162 | p = np.array([msg.pose.position.x,msg.pose.position.y,msg.pose.position.z]) 163 | q = quaternion(msg.pose.orientation.w, msg.pose.orientation.x, 164 | msg.pose.orientation.y, msg.pose.orientation.z) 
165 | t = msg.header.stamp.to_sec() 166 | return p, q, t 167 | 168 | def compute_velocity_from_msg(self, P0, P1): 169 | p0, q0, t0 = self.p_q_t_from_msg(P0) 170 | p1, q1, t1 = self.p_q_t_from_msg(P1) 171 | 172 | # There's something wrong with the current function to go from quat to matrix. 173 | # Using the TF version instead. 174 | q0_ros = [q0.x, q0.y, q0.z, q0.w] 175 | q1_ros = [q1.x, q1.y, q1.z, q1.w] 176 | 177 | import tf 178 | H0 = tf.transformations.quaternion_matrix(q0_ros) 179 | H0[:3, 3] = p0 180 | 181 | H1 = tf.transformations.quaternion_matrix(q1_ros) 182 | H1[:3, 3] = p1 183 | 184 | # Let the homogeneous matrix handle the inversion etc. Guaranteed correctness. 185 | H01 = np.dot(np.linalg.inv(H0), H1) 186 | dt = t1 - t0 187 | 188 | V = H01[:3, 3] / dt 189 | w_hat = logm(H01[:3, :3]) / dt 190 | Omega = np.array([w_hat[2,1], w_hat[0,2], w_hat[1,0]]) 191 | 192 | return V, Omega, dt 193 | 194 | def compute_velocity(self, p0, q0, p1, q1, dt): 195 | V = (p1-p0)/dt 196 | 197 | R_dot = ( self.rot_mat_from_quaternion(q1) - self.rot_mat_from_quaternion(q0) )/dt 198 | w_hat = np.dot(R_dot, self.rot_mat_from_quaternion(q1).T) 199 | 200 | Omega = np.array([w_hat[2,1], w_hat[0,2], w_hat[1,0]]) 201 | 202 | return V, Omega 203 | 204 | def colorize_image(self, flow_x, flow_y): 205 | if self.hsv_buffer is None: 206 | self.hsv_buffer = np.empty((flow_x.shape[0], flow_x.shape[1],3)) 207 | self.hsv_buffer[:,:,1] = 1.0 208 | self.hsv_buffer[:,:,0] = (np.arctan2(flow_y,flow_x)+np.pi)/(2.0*np.pi) 209 | 210 | self.hsv_buffer[:,:,2] = np.linalg.norm( np.stack((flow_x,flow_y), axis=0), axis=0 ) 211 | 212 | # self.hsv_buffer[:,:,2] = np.log(1.+self.hsv_buffer[:,:,2]) # hopefully better overall dynamic range in final video 213 | 214 | flat = self.hsv_buffer[:,:,2].reshape((-1)) 215 | m = np.nanmax(flat[np.isfinite(flat)]) 216 | if not np.isclose(m, 0.0): 217 | self.hsv_buffer[:,:,2] /= m 218 | 219 | return colors.hsv_to_rgb(self.hsv_buffer) 220 | 221 | def visualize_flow(self, flow_x, flow_y, fig): 222 | ax1 = fig.add_subplot(1,1,1) 223 | ax1.imshow( self.colorize_image(flow_x, flow_y) ) 224 | 225 | 226 | def experiment_flow(experiment_name, experiment_num, save_movie=True, save_numpy=True, start_ind=None, stop_ind=None): 227 | if experiment_name == "motorcycle": 228 | print "The motorcycle doesn't have lidar and we can't compute flow without it" 229 | return 230 | 231 | import time 232 | import calibration 233 | cal = calibration.Calibration(experiment_name) 234 | import ground_truth 235 | gt = ground_truth.GroundTruth(experiment_name, experiment_num) 236 | 237 | flow = Flow(cal) 238 | P0 = None 239 | 240 | nframes = len(gt.left_cam_readers['/davis/left/depth_image_rect']) 241 | if stop_ind is not None: 242 | stop_ind = min(nframes, stop_ind) 243 | else: 244 | stop_ind = nframes 245 | 246 | if start_ind is not None: 247 | start_ind = max(0, start_ind) 248 | else: 249 | start_ind = 0 250 | 251 | nframes = stop_ind - start_ind 252 | 253 | 254 | depth_image, _ = gt.left_cam_readers['/davis/left/depth_image_rect'](0) 255 | flow_shape = (nframes, depth_image.shape[0], depth_image.shape[1]) 256 | x_flow_dist = np.zeros(flow_shape, dtype=np.float) 257 | y_flow_dist = np.zeros(flow_shape, dtype=np.float) 258 | timestamps = np.zeros((nframes,), dtype=np.float) 259 | Vs = np.zeros((nframes,3), dtype=np.float) 260 | Omegas = np.zeros((nframes,3), dtype=np.float) 261 | dTs = np.zeros((nframes,), dtype=np.float) 262 | 263 | ps = np.zeros((nframes,3), dtype=np.float) 264 | qs = np.zeros((nframes,4), dtype=np.float) 
265 | 266 | sOmega = np.zeros((3,)) 267 | sV = np.zeros((3,)) 268 | 269 | print "Extracting velocity" 270 | for frame_num in range(nframes): 271 | P1 = gt.left_cam_readers['/davis/left/odometry'][frame_num+start_ind].message 272 | 273 | if P0 is not None: 274 | V, Omega, dt = flow.compute_velocity_from_msg(P0, P1) 275 | Vs[frame_num, :] = V 276 | Omegas[frame_num, :] = Omega 277 | dTs[frame_num] = dt 278 | 279 | timestamps[frame_num] = P1.header.stamp.to_sec() 280 | 281 | tmp_p, tmp_q, _ = flow.p_q_t_from_msg(P1) 282 | ps[frame_num, :] = tmp_p 283 | qs[frame_num, 0] = tmp_q.w 284 | qs[frame_num, 0] = tmp_q.x 285 | qs[frame_num, 0] = tmp_q.y 286 | qs[frame_num, 0] = tmp_q.z 287 | 288 | P0 = P1 289 | 290 | filter_size = 10 291 | 292 | smoothed_Vs = Vs 293 | smoothed_Omegas = Omegas 294 | 295 | print "Computing flow" 296 | for frame_num in range(nframes): 297 | depth_image = gt.left_cam_readers['/davis/left/depth_image_rect'][frame_num+start_ind] 298 | depth_image.acquire() 299 | 300 | if frame_num-filter_size < 0: 301 | V = np.mean(Vs[0:frame_num+filter_size+1,:],axis=0) 302 | Omega = np.mean(Omegas[0:frame_num+filter_size+1,:], axis=0) 303 | elif frame_num+filter_size >= nframes: 304 | V = np.mean(Vs[frame_num-filter_size:nframes,:],axis=0) 305 | Omega = np.mean(Omegas[frame_num-filter_size:nframes,:], axis=0) 306 | else: 307 | V = np.mean(Vs[frame_num-filter_size:frame_num+filter_size+1,:],axis=0) 308 | Omega = np.mean(Omegas[frame_num-filter_size:frame_num+filter_size+1,:], axis=0) 309 | dt = dTs[frame_num] 310 | 311 | smoothed_Vs[frame_num, :] = V 312 | smoothed_Omegas[frame_num, :] = Omega 313 | 314 | flow_x_dist, flow_y_dist = flow.compute_flow_single_frame(V, 315 | Omega, 316 | depth_image.img, 317 | dt) 318 | x_flow_dist[frame_num,:,:] = flow_x_dist 319 | y_flow_dist[frame_num,:,:] = flow_y_dist 320 | 321 | depth_image.release() 322 | 323 | import downloader 324 | import os 325 | base_name = os.path.join(downloader.get_tmp(), experiment_name, experiment_name+str(experiment_num)) 326 | 327 | if save_numpy: 328 | print "Saving numpy" 329 | numpy_name = base_name+"_gt_flow_dist.npz" 330 | np.savez(numpy_name, 331 | timestamps=timestamps, x_flow_dist=x_flow_dist, y_flow_dist=y_flow_dist) 332 | numpy_name = base_name+"_odom.npz" 333 | np.savez(numpy_name, 334 | timestamps=timestamps, 335 | lin_vel=smoothed_Vs, ang_vel=smoothed_Omegas, pos=ps, quat=qs) 336 | 337 | if save_movie: 338 | print "Saving movie" 339 | import matplotlib.animation as animation 340 | plt.close('all') 341 | 342 | fig = plt.figure() 343 | first_img = flow.colorize_image(x_flow_dist[0], y_flow_dist[0]) 344 | im = plt.imshow(first_img, animated=True) 345 | 346 | def updatefig(frame_num, *args): 347 | im.set_data(flow.colorize_image(x_flow_dist[frame_num], y_flow_dist[frame_num])) 348 | return im, 349 | 350 | ani = animation.FuncAnimation(fig, updatefig, frames=len(x_flow_dist)) 351 | movie_path = base_name+"_gt_flow.mp4" 352 | ani.save(movie_path, fps=20) 353 | plt.show() 354 | 355 | return x_flow_dist, y_flow_dist, timestamps, Vs, Omegas 356 | 357 | def test_gt_flow(): 358 | import calibration 359 | 360 | plt.close('all') 361 | 362 | cal = calibration.Calibration("indoor_flying") 363 | gtf = Flow(cal) 364 | 365 | p0 = np.array([0.,0.,0.]) 366 | q0 = quaternion(1.0,0.0,0.0,0.0) 367 | 368 | depth = 10.*np.ones((cal.left_map.shape[0],cal.left_map.shape[1])) 369 | 370 | V, Omega = gtf.compute_velocity(p0,q0,p0,q0,0.1) 371 | x,y = gtf.compute_flow_single_frame(V, Omega, depth,0.1) 372 | 373 | fig = plt.figure() 374 | 
gtf.visualize_flow(x,y,fig) 375 | 376 | p1 = np.array([0.,0.0,0.5]) 377 | q1 = quaternion(1.0,0.0,0.0,0.0) 378 | 379 | V, Omega = gtf.compute_velocity(p0,q0,p1,q1,0.1) 380 | print V, Omega 381 | x,y = gtf.compute_flow_single_frame(V, Omega, depth,0.1) 382 | 383 | fig = plt.figure() 384 | gtf.visualize_flow(x,y,fig) 385 | 386 | p1 = np.array([0.,-0.25,0.5]) 387 | q1 = quaternion(1.0,0.0,0.0,0.0) 388 | 389 | V, Omega = gtf.compute_velocity(p0,q0,p1,q1,0.1) 390 | print V, Omega 391 | x,y = gtf.compute_flow_single_frame(V, Omega, depth,0.1) 392 | 393 | fig = plt.figure() 394 | gtf.visualize_flow(x,y,fig) 395 | -------------------------------------------------------------------------------- /tools/gt_flow/downloader.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | 3 | import os 4 | try: 5 | import wget 6 | HAS_WGET=True 7 | except ImportError: 8 | import subprocess 9 | HAS_WGET=False 10 | 11 | MVSEC_URL="http://visiondata.cis.upenn.edu/mvsec" 12 | 13 | TMP_FOLDER="" 14 | 15 | def set_tmp(new_tmp): 16 | global TMP_FOLDER 17 | TMP_FOLDER = new_tmp 18 | if not os.path.exists(TMP_FOLDER): 19 | os.mkdir(TMP_FOLDER) 20 | else: 21 | assert os.path.isdir(TMP_FOLDER) 22 | 23 | def get_tmp(): 24 | return TMP_FOLDER 25 | 26 | set_tmp(os.path.expanduser('~/mvsec_data')) 27 | 28 | experiments = ["indoor_flying", "outdoor_day", 29 | "outdoor_night", "motorcycle"] 30 | number_of_runs = [4, 2, 3, 1] 31 | 32 | def print_experiments(): 33 | print(zip(experiments, number_of_runs)) 34 | 35 | def download(urls, filenames, overwrite=False): 36 | for url, fn in zip(urls, filenames): 37 | if not os.path.exists(fn) or overwrite: 38 | if not os.path.exists(os.path.dirname(fn)): 39 | os.makedirs(os.path.dirname(fn)) 40 | if HAS_WGET: 41 | wget.download(url, fn) 42 | else: 43 | subprocess.call(['wget', 44 | '--no-check-certificate', 45 | url, 46 | '-O', fn]) 47 | 48 | def get_calibration(experiment, overwrite=False): 49 | assert experiment in experiments 50 | full_url = [os.path.join(MVSEC_URL, experiment, experiment+"_calib.zip")] 51 | full_local = [os.path.join(TMP_FOLDER, experiment, experiment+"_calib.zip")] 52 | 53 | download(full_url, full_local, overwrite) 54 | 55 | return full_local[0] 56 | 57 | def get_data(experiment_name, experiment_numbers=None,overwrite=False): 58 | assert experiment_name in experiments 59 | if type(experiment_numbers)==int: 60 | experiment_numbers=[experiment_numbers] 61 | elif type(experiment_numbers)==list: 62 | pass 63 | elif experiment_numbers is None: 64 | experiment_numbers = range(0, number_of_runs[experiments.index(experiment_name)]) 65 | else: 66 | raise TypeError("Unsupported type "+type(experiment_numbers)) 67 | 68 | base_url = os.path.join(MVSEC_URL, experiment_name, experiment_name) 69 | full_urls = [base_url+str(n)+"_data.bag" for n in experiment_numbers] 70 | 71 | base_path = os.path.join(TMP_FOLDER, experiment_name, experiment_name) 72 | full_paths = [base_path+str(n)+"_data.bag" for n in experiment_numbers] 73 | 74 | download(full_urls, full_paths, overwrite) 75 | 76 | return full_paths 77 | 78 | def get_ground_truth(experiment_name, experiment_numbers=None,overwrite=False): 79 | assert experiment_name in experiments 80 | if type(experiment_numbers)==int: 81 | experiment_numbers=[experiment_numbers] 82 | elif type(experiment_numbers)==list: 83 | pass 84 | elif experiment_numbers is None: 85 | experiment_numbers = range(0, number_of_runs[experiments.index(experiment_name)]) 86 | else: 87 | raise 
TypeError("Unsupported type "+type(experiment_numbers)) 88 | 89 | base_url = os.path.join(MVSEC_URL, experiment_name, experiment_name) 90 | full_urls = [base_url+str(n)+"_gt.bag" for n in experiment_numbers] 91 | 92 | base_path = os.path.join(TMP_FOLDER, experiment_name, experiment_name) 93 | full_paths = [base_path+str(n)+"_gt.bag" for n in experiment_numbers] 94 | 95 | download(full_urls, full_paths, overwrite) 96 | 97 | return full_paths 98 | -------------------------------------------------------------------------------- /tools/gt_flow/extract_single_bag.py: -------------------------------------------------------------------------------- 1 | import compute_flow; reload(compute_flow) 2 | 3 | import downloader 4 | import argparse 5 | 6 | parser = argparse.ArgumentParser() 7 | parser.add_argument('--mvsec_dir', 8 | type=str, 9 | help="Path to MVSEC directory.", 10 | required=True) 11 | parser.add_argument('--sequence', 12 | type=str, 13 | help="Specific sequence group to extract, e.g. indoor_flying.", 14 | required=True) 15 | parser.add_argument('--sequence_number', 16 | type=int, 17 | help="Specific sequence within the group to extract, e.g. 1.", 18 | required=True) 19 | parser.add_argument('--save_movie', 20 | action='store_true', 21 | help="If set, will save a movie of the estimated flow for visualization.") 22 | parser.add_argument('--save_numpy', 23 | action='store_true', 24 | help="If set, will save the results to a numpy file.") 25 | parser.add_argument('--start_ind', 26 | type=int, 27 | help="Index of the first ground truth pose/depth frame to process.", 28 | default=None) 29 | parser.add_argument('--stop_ind', 30 | type=int, 31 | help="Index of the last ground truth pose/depth frame to process.", 32 | default=None) 33 | args = parser.parse_args() 34 | 35 | downloader.set_tmp(args.mvsec_dir) 36 | 37 | compute_flow.experiment_flow(args.sequence, 38 | args.sequence_number, 39 | save_movie=args.save_movie, 40 | save_numpy=args.save_numpy, 41 | start_ind=args.start_ind, 42 | stop_ind=args.stop_ind) 43 | -------------------------------------------------------------------------------- /tools/gt_flow/ground_truth.py: -------------------------------------------------------------------------------- 1 | """ Handles the reading of the ros ground truth bags 2 | """ 3 | 4 | import downloader 5 | import bag_indexer 6 | 7 | class GroundTruth: 8 | def __init__(self, experiment_name, run_number): 9 | self.bag_path = downloader.get_ground_truth(experiment_name, run_number)[0] 10 | 11 | left_sync_topics = (['/davis/left/odometry','/davis/left/depth_image_raw','/davis/left/depth_image_rect'],[0.05,0.05,0.05]) 12 | 13 | self.bag = bag_indexer.get_bag_indexer(self.bag_path, [left_sync_topics]) 14 | 15 | self.left_cam_readers = {} 16 | 17 | for t in left_sync_topics[0]: 18 | if 'image' in t: 19 | self.left_cam_readers[t] = self.bag.get_image_topic_reader(t) 20 | else: 21 | self.left_cam_readers[t] = self.bag.get_topic_reader(t) 22 | 23 | def play_image(self): 24 | import numpy as np 25 | import matplotlib.pyplot as plt 26 | import matplotlib.animation as animation 27 | fig = plt.figure() 28 | left_cam = self.left_cam_readers['/davis/left/depth_image_raw'] 29 | 30 | first_view = left_cam[0] 31 | first_view.acquire() 32 | im = plt.imshow(first_view.img, animated=True) 33 | #first_view.release() 34 | 35 | def updatefig(frame_num, *args): 36 | view = left_cam[frame_num] 37 | view.acquire() 38 | im.set_data(view.img) 39 | return im, 40 | 41 | ani = animation.FuncAnimation(fig, updatefig, 
frames=len(left_cam), blit=True) 42 | ani.save("test.mp4") 43 | plt.show() 44 | -------------------------------------------------------------------------------- /tools/gt_flow/image_reader_view_collection.py: -------------------------------------------------------------------------------- 1 | """ Use an image reader to implement a view collection """ 2 | import image_view_collection as IVC 3 | 4 | 5 | class ImageReaderViewCollection(IVC.ImageViewCollection): 6 | """ Collection of image views obtained from a reader """ 7 | 8 | def __init__(self, reader, keys): 9 | IVC.ImageViewCollection.__init__(self, keys) 10 | self.reader = reader 11 | 12 | def get_key_slice(self, slc): 13 | return ImageReaderViewCollection(self.reader, self.key_list[slc]) 14 | 15 | def get_key(self, key): 16 | return self.reader[key] 17 | # Author Cody Phillips 18 | -------------------------------------------------------------------------------- /tools/gt_flow/image_view.py: -------------------------------------------------------------------------------- 1 | class ImageView: 2 | def __init__(self): 3 | self.img = None 4 | self.shape = None 5 | self.cache = False 6 | 7 | def set_cache(self,cache): 8 | self.cache = cache 9 | 10 | def release(self): 11 | """ Release the image data from memory """ 12 | self.img = None 13 | 14 | def acquire(self): 15 | """ Load the image data to memory """ 16 | raise NotImplementedError() 17 | 18 | def __call__(self): 19 | """ Return the image array """ 20 | self.acquire() 21 | self.shape = self.img.shape 22 | img = self.img 23 | if not self.cache: 24 | self.release() 25 | return img 26 | 27 | def set_shape(self, shape): 28 | self.shape = shape 29 | # Author Cody Phillips 30 | -------------------------------------------------------------------------------- /tools/gt_flow/image_view_collection.py: -------------------------------------------------------------------------------- 1 | """ Implements a collection of view using a key list """ 2 | 3 | import key_list_collection as KLC 4 | 5 | 6 | class ImageViewCollection(KLC.KeyListCollection): 7 | """ Abstract class for a image view collection 8 | 9 | Child must implement 10 | 11 | get_key_slice(slc) 12 | get_key(key) 13 | """ 14 | 15 | def __init__(self, key_list): 16 | KLC.KeyListCollection.__init__(self, key_list) 17 | self.shape = None 18 | 19 | def set_shape(self, shape): 20 | self.shape = shape 21 | 22 | def __call__(self, key): 23 | """ Overides base behavior to set shape """ 24 | item = KLC.KeyListCollection.__call__(self, key) 25 | if self.shape: 26 | item.set_shape(self.shape) 27 | return item 28 | 29 | def __getitem__(self, slc): 30 | """ Overides base behavior to set shape """ 31 | item = KLC.KeyListCollection.__getitem__(self, slc) 32 | if self.shape: 33 | item.set_shape(self.shape) 34 | return item 35 | # Author Cody Phillips 36 | -------------------------------------------------------------------------------- /tools/gt_flow/key_list_collection.py: -------------------------------------------------------------------------------- 1 | """ Implement abstract class for getting elements using an ordered key list """ 2 | import numpy as np 3 | 4 | 5 | class KeyListCollection(object): 6 | """ Abstract class for a key list collection 7 | 8 | Child must implement 9 | 10 | get_key_slice(slc) 11 | get_key(key) 12 | """ 13 | 14 | def __init__(self, key_list): 15 | self.key_list = key_list 16 | 17 | def __contains__(self, key): 18 | """ Check if key in key list """ 19 | return key in self.key_list 20 | 21 | def __len__(self): 22 | """ Return len 
of key list """ 23 | return len(self.key_list) 24 | 25 | def get_key_list(self): 26 | return self.key_list 27 | 28 | def __call__(self, key): 29 | return self.get_key(key) 30 | 31 | def __getitem__(self, slc): 32 | """ Operates over key_list """ 33 | if isinstance(slc, slice) or (isinstance(slc, np.ndarray) 34 | and slc.ndim == 1): 35 | return self.get_key_slice(slc) 36 | else: 37 | return self.get_key(self.key_list[slc]) 38 | 39 | def __iter__(self): 40 | """ Iterate through all elements of the key list """ 41 | return (self(key) for key in self.key_list) 42 | 43 | def get_key_slice(self, slc): 44 | raise NotImplemented 45 | 46 | def get_key(self, key): 47 | raise NotImplemented 48 | # Author Cody Phillips 49 | -------------------------------------------------------------------------------- /tools/gt_flow/run_all_experiments.py: -------------------------------------------------------------------------------- 1 | import compute_flow 2 | 3 | import downloader 4 | import argparse 5 | 6 | parser = argparse.ArgumentParser() 7 | parser.add_argument('--mvsec_dir', 8 | type=str, 9 | help="Path to MVSEC directory.", 10 | required=True) 11 | args = parser.parse_args() 12 | 13 | downloader.set_tmp(args.mvsec_dir) 14 | 15 | #exp_data = compute_flow.experiment_flow("indoor_flying", 1, save_movie=False) 16 | 17 | for experiment, n_runs in zip(downloader.experiments, downloader.number_of_runs): 18 | for i in range(n_runs): 19 | run_number = i+1 20 | print "Running ", experiment, run_number 21 | compute_flow.experiment_flow(experiment, run_number, save_movie=False) 22 | -------------------------------------------------------------------------------- /tools/gt_flow/topic_reader.py: -------------------------------------------------------------------------------- 1 | """ Allows accessing of topic elements by index 2 | 3 | Author Cody Phillips 4 | """ 5 | 6 | import numpy as np 7 | import image_reader_view_collection as IRVC 8 | import image_view as IV 9 | 10 | 11 | class TopicReader: 12 | """ Allows index access of topic msgs """ 13 | 14 | def __init__(self, indexer, topic, positions=None, format_fun=None): 15 | self.topic = topic 16 | if positions is None: 17 | positions = indexer.positions_of_topic[topic] 18 | self.positions = positions 19 | self.indexer = indexer 20 | self.format_fun = format_fun 21 | 22 | def __getitem__(self, index): 23 | """ Get message at specified index """ 24 | position = self.positions[index] 25 | msg = self.indexer.read_message(position) 26 | if self.format_fun: 27 | msg = self.format_fun(msg) 28 | return msg 29 | 30 | def __len__(self): 31 | """ Get message count for topic """ 32 | return len(self.positions) 33 | 34 | 35 | class ImageTopicReaderView(IV.ImageView): 36 | """ Image view from topic reader """ 37 | 38 | def __init__(self, reader, key): 39 | IV.ImageView.__init__(self) 40 | self.reader = reader 41 | self.key = key 42 | self.header = None 43 | self.ts = None 44 | 45 | def get_timestamp(self): 46 | if self.ts is None: 47 | header = self.get_header() 48 | self.ts = header.stamp.to_nsec() 49 | return self.ts 50 | 51 | def get_header(self): 52 | if self.header is None: 53 | self.acquire() 54 | return self.header 55 | 56 | def acquire(self): 57 | if self.img is None: 58 | self.img, self.header = self.reader(self.key) 59 | self.get_timestamp() 60 | 61 | 62 | class ImageTopicReader(TopicReader): 63 | """ Allows accessing image array and header by index """ 64 | 65 | def __init__(self, indexer, topic, positions=None): 66 | TopicReader.__init__(self, indexer, topic, positions) 
67 | self.compressed = "compressed" in topic 68 | self.depth = "depth" in topic 69 | self.use_raw = False 70 | import cv_bridge 71 | self.bridge = cv_bridge.CvBridge() 72 | 73 | def set_use_raw(self, use_raw): 74 | self.use_raw = use_raw 75 | 76 | def __call__(self, index): 77 | """ Get image array and header at index """ 78 | bag_msg = TopicReader.__getitem__(self, index) 79 | if self.use_raw: 80 | return bag_msg.message, bag_msg.message.header 81 | if self.compressed: 82 | img = np.array(self.bridge.compressed_imgmsg_to_cv2( 83 | bag_msg.message)) 84 | img = img[..., [2, 1, 0]] 85 | elif self.depth: 86 | img = np.array(self.bridge.imgmsg_to_cv2(bag_msg.message)) 87 | else: 88 | img = np.array(self.bridge.imgmsg_to_cv2(bag_msg.message)) 89 | img = img[..., [2, 1, 0]] 90 | return img, bag_msg.message.header 91 | 92 | def __getitem__(self, index): 93 | """ Get image view at index """ 94 | return ImageTopicReaderView(self, index) 95 | 96 | def __contains__(self, key): 97 | """ Return if the key can be read """ 98 | return (0 <= key and key < len(self)) 99 | 100 | def get_image_views(self, indices=None): 101 | if indices is None: 102 | indices = np.arange(len(self)) 103 | return IRVC.ImageReaderViewCollection(self, indices) 104 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/README.md: -------------------------------------------------------------------------------- 1 | 2 | # Creating track maps from the ground truth bags 3 | 4 | Here is how to create visualizations of the outdoor sequences from the ground truth ROS bags: 5 | 6 | 1. Create trajectories in UTM coordinate frame from bag file, in this case the first daytime outdoor sequence: 7 | 8 | ./gt_trajectories_from_bag.py --bag /big_data/mvsec/west_philly_day1_gt.bag 9 | 10 | This will create the trajectory files `gps_traj.txt`, `loam_traj.txt`, and `cartographer_traj.txt`. 11 | 12 | 2. Align loam and cartographer trajectories with the gps trajectory, using the "align_trajectory" code 13 | from the [PennCOSYVIO data set](https://github.com/daniilidis-group/penncosyvio/tree/master/tools/cpp). 14 | 15 | ./align_trajectory -r gps_traj.txt -t loam_traj.txt -o loam_traj_align.txt 16 | ./align_trajectory -r gps_traj.txt -t cartographer_traj.txt -o cartographer_traj_align.txt 17 | 18 | The alignment is done using Horn's method, i.e. rotating and translating the data, but no scaling is applied. 19 | 20 | 21 | 3. Now convert the trajectories to a file format suitable for GPS Visualizer: 22 | 23 | ./traj_to_gpsvisualizer.py --files gps_traj.txt,loam_traj_align.txt,cartographer_traj_align.txt --legend GPS,LOAM,Cartographer --colors cyan,blue,red > gps_visualizer.txt 24 | 25 | This file can be directly used for upload at [GPS Visualizer](http://www.gpsvisualizer.com/). Use "height=1000" and "width=auto". 26 | This is the command line to generate the graph for the RAL paper: 27 | 28 | ./traj_to_gpsvisualizer.py --files gps_traj.txt,cartographer_traj_align.txt --legend GPS,Cartographer --colors cyan,red --fatstart 433 --fatend 471 --linewidth 2 > gps_visualizer.txt 29 | 30 | # Creating distance plots 31 | 32 | Here is how to plot distance between e.g.
cartographer and gps trajectories (used for RAL paper): 33 | 34 | ./plot_diff.py --files gps_traj.txt,cartographer_traj_align.txt --legend GPS,'distance |GPS-fused|' --colors cyan,red --ymax 30 --fatstart 433 --fatend 471 35 | 36 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/angular_velocity_from_bag.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # 4 | # 5 | 6 | import rosbag, rospy, numpy as np 7 | import sys, os, glob 8 | import argparse 9 | import numpy.linalg 10 | import math 11 | import tf 12 | import tf.transformations 13 | 14 | 15 | 16 | if __name__ == '__main__': 17 | parser = argparse.ArgumentParser(description='get angular velocity from bag.') 18 | parser.add_argument('--start', '-s', action='store', default=rospy.Time(0), type=rospy.Time, 19 | help='where to start in the bag.') 20 | parser.add_argument('--end', '-e', action='store', default=1e10, type=float, help='where to stop in the bag.') 21 | parser.add_argument('bagfile') 22 | 23 | args = parser.parse_args() 24 | 25 | 26 | t0 = -1 27 | integration_time = 1.0 # in seconds 28 | w_avg = np.array([0, 0, 0]) 29 | w_max = 0 30 | cnt =0 31 | for bagfile in glob.glob(args.bagfile): 32 | bag = rosbag.Bag(bagfile, 'r') 33 | topics = ["/davis/right/imu"] 34 | # topics = ["/davis/left/imu"] 35 | # topics = ["/imu0"] 36 | iterator = bag.read_messages(topics=topics, start_time=args.start) 37 | for (topic, msg, time) in iterator: 38 | tstamp = msg.header.stamp.to_sec() 39 | if (t0 < 0): 40 | t0 = tstamp 41 | #print msg 42 | w = msg.angular_velocity 43 | w_vec = np.array([w.x, w.y, w.z]) 44 | t = tstamp - t0 45 | if (t > args.end): 46 | break; 47 | w_len = np.linalg.norm(w_vec) 48 | if w_len > w_max: 49 | w_max = w_len 50 | w_avg = w_avg + w_vec 51 | cnt = cnt + 1 52 | # print "%f %f %f %f %f" % (t, w_vec[0], w_vec[1], w_vec[2], w_len) 53 | 54 | print "avg ang velocity: " + str(w_avg / cnt) 55 | print "max ang velocity: %f" % (w_max) 56 | 57 | 58 | 59 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/gps_from_bag.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # example usage: 4 | # 5 | 6 | 7 | import rosbag, rospy 8 | import glob 9 | import argparse 10 | import numpy as np 11 | import tf 12 | import tf.transformations 13 | import utm 14 | 15 | 16 | if __name__ == '__main__': 17 | parser = argparse.ArgumentParser(description='extract trajectories from bag file') 18 | parser.add_argument('--bag', '-b', action='store', default=None, required=True, 19 | help='name of bag file') 20 | 21 | topics = ['/gps/fix'] 22 | 23 | args = parser.parse_args() 24 | 25 | fileNames = ['gps_traj.txt'] 26 | 27 | gpsFile = open(fileNames[0], 'w') 28 | 29 | for bagfile in [args.bag]: 30 | bag = rosbag.Bag(bagfile, 'r') 31 | it = bag.read_messages(topics=topics) 32 | for (topic, msg, tros) in it: 33 | if (topic == '/gps/fix'): # gps long/lat 34 | utm_pos = utm.from_latlon(msg.latitude, msg.longitude) 35 | a = msg.altitude # ignored here 36 | T = tf.transformations.identity_matrix() 37 | T[0:3,3] = np.array([utm_pos[0], utm_pos[1], 0]) 38 | gpsFile.write("%.6f %s\n" % (msg.header.stamp.to_nsec()/1e9, ' '.join(map(str, T.ravel()[0:12].tolist())))) 39 | 40 | gpsFile.close() 41 | print "wrote to files: %s" % ", ".join(fileNames) 42 | 43 | 
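# Example invocation (the bag path is the one used as an example in the README in this folder):
#   ./gps_from_bag.py --bag /big_data/mvsec/west_philly_day1_gt.bag
# Each row of the resulting gps_traj.txt is the timestamp followed by the first 12 row-major
# entries of a 4x4 pose matrix in UTM coordinates, so (0-based, with the timestamp in column 0)
# columns 4 and 8 hold easting and northing. plot_traj.py, plot_diff.py and
# traj_to_gpsvisualizer.py all read this layout back.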
-------------------------------------------------------------------------------- /tools/gt_map_visualization/gt_trajectories_from_bag.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # example usage: 4 | # 5 | 6 | 7 | import rosbag, rospy 8 | import glob 9 | import argparse 10 | import numpy as np 11 | import tf 12 | import tf.transformations 13 | import utm 14 | 15 | 16 | def pose_to_tf(msg): 17 | q = np.array([msg.pose.orientation.x, msg.pose.orientation.y, msg.pose.orientation.z, msg.pose.orientation.w]) 18 | T = tf.transformations.quaternion_matrix(q) 19 | T[0:3,3] = np.array([msg.pose.position.x, msg.pose.position.y, msg.pose.position.z]) 20 | return (T) 21 | 22 | if __name__ == '__main__': 23 | parser = argparse.ArgumentParser(description='extract trajectories from bag file') 24 | parser.add_argument('--bag', '-b', action='store', default=None, required=True, 25 | help='name of bag file') 26 | 27 | topics = ['/gps/fix', '/davis/left/odometry', '/davis/left/pose'] 28 | 29 | args = parser.parse_args() 30 | 31 | fileNames = ['gps_traj.txt', 'loam_traj.txt', 'cartographer_traj.txt'] 32 | 33 | gpsFile = open(fileNames[0], 'w') 34 | loamFile = open(fileNames[1], 'w') 35 | cartographerFile = open(fileNames[2], 'w') 36 | 37 | for bagfile in [args.bag]: 38 | bag = rosbag.Bag(bagfile, 'r') 39 | it = bag.read_messages(topics=topics) 40 | for (topic, msg, tros) in it: 41 | if (topic == '/davis/left/odometry'): # loam pose 42 | T = pose_to_tf(msg) 43 | loamFile.write("%.6f %s\n" % (msg.header.stamp.to_nsec()/1e9, ' '.join(map(str, T.ravel()[0:12].tolist())))) 44 | elif (topic == '/davis/left/pose'): # cartographer pose 45 | T = pose_to_tf(msg) 46 | cartographerFile.write("%.6f %s\n" % (msg.header.stamp.to_nsec()/1e9, ' '.join(map(str, T.ravel()[0:12].tolist())))) 47 | elif (topic == '/gps/fix'): # gps long/lat 48 | utm_pos = utm.from_latlon(msg.latitude, msg.longitude) 49 | a = msg.altitude # ignored here 50 | T = tf.transformations.identity_matrix() 51 | T[0:3,3] = np.array([utm_pos[0], utm_pos[1], 0]) 52 | gpsFile.write("%.6f %s\n" % (msg.header.stamp.to_nsec()/1e9, ' '.join(map(str, T.ravel()[0:12].tolist())))) 53 | 54 | gpsFile.close() 55 | loamFile.close() 56 | cartographerFile.close() 57 | print "wrote to files: %s" % ", ".join(fileNames) 58 | 59 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/plot_diff.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import matplotlib.pyplot as plt 4 | from matplotlib.backends.backend_pdf import PdfPages 5 | import numpy as np 6 | import argparse 7 | 8 | def add_to_plot(ax, t, x, col, lab, w): 9 | return ax.plot(t, x, color=col, marker='.', label=lab, linewidth=w) 10 | #return ax.plot(t, x, color=col, marker='.', linewidth=w) 11 | 12 | 13 | if __name__ == '__main__': 14 | parser = argparse.ArgumentParser(description='convert trajectories to gpsvisualizer format') 15 | parser.add_argument('--files', '-f', action='store', default=None, required=True, 16 | help='comma-separated list of trajectory files') 17 | parser.add_argument('--legend', '-l', action='store', default=None, required=True, 18 | help='comma-separated list of legend names') 19 | parser.add_argument('--ymin', '-n', type=float, action='store', default=0, required=False, 20 | help='min y value') 21 | parser.add_argument('--ymax', '-x', type=float, action='store', default=100, required=False, 22 
| help='max y value') 23 | parser.add_argument('--colors', '-c', action='store', default=None, required=True, 24 | help='comma-separated list of colors') 25 | parser.add_argument('--fatstart', '-s', action='store', default=1e9, required=False, 26 | help='start of fat line') 27 | parser.add_argument('--fatend', '-e', action='store', default=0, required=False, 28 | help='end of fat line') 29 | parser.add_argument('--title', '-t', action='store', default=None, required=False, 30 | help='title') 31 | args = parser.parse_args() 32 | 33 | 34 | legends = args.legend.split(",") 35 | colors = args.colors.split(",") 36 | files = args.files.split(",") 37 | width = 2 38 | tfatstart = float(args.fatstart) 39 | tfatend = float(args.fatend) 40 | fatwidth = width * 4 41 | 42 | if (len(legends) != len(files)): 43 | print "error: number of files %d must match number of legend items %d" % (len(files), len(legends)) 44 | exit(-1) 45 | if (len(colors) != len(files)): 46 | print "error: number of files %d must match number of colors %d" % (len(files), len(colors)) 47 | exit(-1) 48 | 49 | plt.rcParams.update({'font.size': 48}) 50 | fig = plt.figure(figsize=(40,50)) 51 | ax = fig.add_subplot(111) 52 | ax.set_aspect(17) 53 | ref = np.loadtxt(files[0]) 54 | t0 = ref[0,0] 55 | t = ref[:,0]-t0 56 | 57 | for f,n,c,i in zip(files, legends, colors, range(0,len(files))): 58 | if i > 0: 59 | traj = np.loadtxt(f) 60 | xi = np.interp(ref[:,0], traj[:,0], traj[:,4]) 61 | yi = np.interp(ref[:,0], traj[:,0], traj[:,8]) 62 | d = np.sqrt(np.square(xi-ref[:,4]) + np.square(yi - ref[:, 8])) 63 | imax = np.argmax(d) 64 | print "%s largest error: %f at index: %d, time: %f" % (n, d[imax], imax, t[imax]) 65 | h = add_to_plot(ax, t, d, c, n, width) 66 | h = add_to_plot(ax, t[(t>= tfatstart) & (t<=tfatend)], d[(t>= tfatstart) & (t <=tfatend)], c, None, fatwidth) 67 | # h = add_to_plot(ax, t[t>= tfatend], d[t>= tfatend], c, None, width) 68 | 69 | axes = plt.gca() 70 | axes.set_ylim([args.ymin, args.ymax]) 71 | axes.set_xlim([0, ref[-1,0]-t0]) 72 | if args.legend: 73 | ax.legend(loc='upper left', shadow = True) 74 | if args.title: 75 | plt.title(args.title) 76 | plt.xlabel('time [s]') 77 | plt.ylabel('distance [m]') 78 | plt.show() 79 | pp = PdfPages('plot.pdf') 80 | pp.savefig(fig) 81 | pp.close() 82 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/plot_traj.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import matplotlib.pyplot as plt 4 | import numpy as np 5 | import argparse 6 | 7 | def add_to_plot(ax, traj, col, l): 8 | return ax.plot(traj[:,4], traj[:,8], color=col, marker='.', label=l) 9 | 10 | 11 | if __name__ == '__main__': 12 | parser = argparse.ArgumentParser(description='convert trajectories to gpsvisualizer format') 13 | parser.add_argument('--files', '-f', action='store', default=None, required=True, 14 | help='comma-separated list of trajectory files') 15 | parser.add_argument('--legend', '-l', action='store', default=None, required=True, 16 | help='comma-separated list of legend names') 17 | parser.add_argument('--colors', '-c', action='store', default=None, required=True, 18 | help='comma-separated list of colors') 19 | parser.add_argument('--tstart', '-t', action='store', default=0, required=False, 20 | help='start time') 21 | parser.add_argument('--tend', '-T', action='store', default=1e9, required=False, 22 | help='end time') 23 | args = parser.parse_args() 24 | 25 | 26 | legends = 
args.legend.split(",") 27 | colors = args.colors.split(",") 28 | files = args.files.split(",") 29 | 30 | if (len(legends) != len(files)): 31 | print "error: number of files %d must match number of legend items %d" % (len(files), len(legends)) 32 | exit(-1) 33 | if (len(colors) != len(files)): 34 | print "error: number of files %d must match number of colors %d" % (len(files), len(colors)) 35 | exit(-1) 36 | 37 | fig = plt.figure() 38 | ax = fig.add_subplot(111) 39 | 40 | tstart = float(args.tstart) 41 | tend = float(args.tend) 42 | for f,n,c in zip(files, legends, colors): 43 | traj = np.loadtxt(f) 44 | t = traj[:,0] - traj[0,0] 45 | h = add_to_plot(ax, traj[(t >= tstart) & (t <= tend),:], c, n) 46 | 47 | ax.legend(loc='upper center', shadow = True) 48 | plt.axis('equal') 49 | plt.show() 50 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/traj_to_gpsvisualizer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # script to turn trajectory files into tracks file for upload to gps visualizer 4 | # 5 | # 6 | 7 | 8 | import rosbag, rospy 9 | import glob 10 | import argparse 11 | import time 12 | import utm 13 | import numpy as np 14 | 15 | if __name__ == '__main__': 16 | parser = argparse.ArgumentParser(description='convert trajectories to gpsvisualizer format') 17 | parser.add_argument('--files', '-f', action='store', default=None, required=True, 18 | help='comma-separated list of trajectory files') 19 | parser.add_argument('--legend', '-l', action='store', default=None, required=True, 20 | help='comma-separated list of legend names') 21 | parser.add_argument('--colors', '-c', action='store', default=None, required=True, 22 | help='comma-separated list of colors') 23 | parser.add_argument('--linewidth', '-w', action='store', default=2, required=False, 24 | help='line width') 25 | parser.add_argument('--fatstart', '-s', action='store', default=1e9, required=False, 26 | help='start of fat line') 27 | parser.add_argument('--fatend', '-e', action='store', default=0, required=False, 28 | help='end of fat line') 29 | args = parser.parse_args() 30 | 31 | 32 | legends = args.legend.split(",") 33 | colors = args.colors.split(",") 34 | files = args.files.split(",") 35 | width = int(args.linewidth) 36 | tfatstart = float(args.fatstart) 37 | tfatend = float(args.fatend) 38 | fatwidth = width * 3 39 | 40 | if (len(legends) != len(files)): 41 | print "error: number of files %d must match number of legend items %d" % (len(files), len(legends)) 42 | exit(-1) 43 | if (len(colors) != len(files)): 44 | print "error: number of files %d must match number of colors %d" % (len(files), len(colors)) 45 | exit(-1) 46 | 47 | zone_number = 18 # UTM zone number 48 | zone_letter = 'S' # UTM zone letter 49 | 50 | for f,n,c in zip(files, legends, colors): 51 | print "%s,%s,%s,%s,%s,%s,%s,%s" % ("type", "time", "latitude", 52 | "longitude", "alt", "name", "color", "width") 53 | traj = np.loadtxt(f) 54 | t0 = traj[0][0] 55 | for row in traj: 56 | t = row[0] 57 | easting = row[4] 58 | northing = row[8] 59 | alt = row[12] 60 | lat_long = utm.to_latlon(easting, northing, zone_number, zone_letter) 61 | tstr = time.strftime("%H:%M:%S", time.gmtime(t)) 62 | print "T,%s,%.6f,%.6f,%.3f,%s,%s,%d" % (tstr, lat_long[0], lat_long[1], alt, n, c, width) 63 | 64 | for f,n,c in zip(files, legends, colors): 65 | print "%s,%s,%s,%s,%s,%s,%s,%s" % ("type", "time", "latitude", 66 | "longitude", "alt", "name", 
"color", "width") 67 | traj = np.loadtxt(f) 68 | t0 = traj[0][0] 69 | for row in traj: 70 | t = row[0] 71 | easting = row[4] 72 | northing = row[8] 73 | alt = row[12] 74 | lat_long = utm.to_latlon(easting, northing, zone_number, zone_letter) 75 | tstr = time.strftime("%H:%M:%S", time.gmtime(t)) 76 | if t-t0 >= tfatstart and t-t0 <= tfatend: 77 | print "T,%s,%.6f,%.6f,%.3f,%s,%s,%d" % (tstr, lat_long[0], lat_long[1], alt, n + "_fat", c, fatwidth) 78 | 79 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm: -------------------------------------------------------------------------------- 1 | utm-0.4.2/utm -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2012-2017 Tobias Bieniek 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/PKG-INFO: -------------------------------------------------------------------------------- 1 | Metadata-Version: 1.1 2 | Name: utm 3 | Version: 0.4.2 4 | Summary: Bidirectional UTM-WGS84 converter for python 5 | Home-page: https://github.com/Turbo87/utm 6 | Author: Tobias Bieniek 7 | Author-email: Tobias.Bieniek@gmx.de 8 | License: UNKNOWN 9 | Description: UNKNOWN 10 | Keywords: utm,wgs84,coordinate,converter 11 | Platform: UNKNOWN 12 | Classifier: Programming Language :: Python 13 | Classifier: License :: OSI Approved :: MIT License 14 | Classifier: Operating System :: OS Independent 15 | Classifier: Development Status :: 4 - Beta 16 | Classifier: Environment :: Other Environment 17 | Classifier: Intended Audience :: Developers 18 | Classifier: Intended Audience :: Science/Research 19 | Classifier: Topic :: Scientific/Engineering :: GIS 20 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/README.rst: -------------------------------------------------------------------------------- 1 | utm 2 | === 3 | 4 | .. image:: https://travis-ci.org/Turbo87/utm.png 5 | 6 | .. image:: https://img.shields.io/badge/License-MIT-yellow.svg 7 | :target: https://github.com/Turbo87/utm/blob/master/LICENSE 8 | 9 | 10 | Bidirectional UTM-WGS84 converter for python 11 | 12 | Usage 13 | ----- 14 | 15 | .. 
code-block:: python 16 | 17 | import utm 18 | 19 | Convert a (latitude, longitude) tuple into an UTM coordinate: 20 | 21 | .. code-block:: python 22 | 23 | utm.from_latlon(51.2, 7.5) 24 | >>> (395201.3103811303, 5673135.241182375, 32, 'U') 25 | 26 | The syntax is **utm.from_latlon(LATITUDE, LONGITUDE)**. 27 | 28 | The return has the form **(EASTING, NORTHING, ZONE NUMBER, ZONE LETTER)**. 29 | 30 | Convert an UTM coordinate into a (latitude, longitude) tuple: 31 | 32 | .. code-block:: python 33 | 34 | utm.to_latlon(340000, 5710000, 32, 'U') 35 | >>> (51.51852098408468, 6.693872395145327) 36 | 37 | The syntax is **utm.to_latlon(EASTING, NORTHING, ZONE NUMBER, ZONE LETTER)**. 38 | 39 | The return has the form **(LATITUDE, LONGITUDE)**. 40 | 41 | Since the zone letter is not strictly needed for the conversion you may also 42 | the ``northern`` parameter instead, which is a named parameter and can be set 43 | to either ``True`` or ``False``. Have a look at the unit tests to see how it 44 | can be used. 45 | 46 | The UTM coordinate system is explained on 47 | `this `_ 48 | Wikipedia page. 49 | 50 | Speed 51 | ----- 52 | 53 | The library has been compared to the more generic pyproj library by running the 54 | unit test suite through pyproj instead of utm. These are the results: 55 | 56 | * with pyproj (without projection cache): 4.0 - 4.5 sec 57 | * with pyproj (with projection cache): 0.9 - 1.0 sec 58 | * with utm: 0.4 - 0.5 sec 59 | 60 | Development 61 | ----------- 62 | 63 | Create a new ``virtualenv`` and install the library via ``pip install -e .``. 64 | After that install the ``pytest`` package via ``pip install pytest`` and run 65 | the unit test suite by calling ``py.test``. 66 | 67 | Changelog 68 | --------- 69 | 70 | see `CHANGELOG.rst `_ file 71 | 72 | Authors 73 | ------- 74 | 75 | * Bart van Andel 76 | * Tobias Bieniek 77 | * Torstein I. Bø 78 | 79 | License 80 | ------- 81 | 82 | Copyright (C) 2012 Tobias Bieniek 83 | 84 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 85 | 86 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 87 | 88 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
89 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/scripts/utm-converter: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import argparse 4 | import utm 5 | 6 | parser = argparse.ArgumentParser(description='Bidirectional UTM-WGS84 converter for python') 7 | subparsers = parser.add_subparsers() 8 | 9 | parser_latlon = subparsers.add_parser('latlon', help='Convert a latitude/longitude pair WGS84 to UTM') 10 | parser_latlon.add_argument('latitude', type=float, help='Latitude of the WGS84 coordinate') 11 | parser_latlon.add_argument('longitude', type=float, help='Longitude of the WGS84 coordinate') 12 | 13 | parser_utm = subparsers.add_parser('utm', help='Convert a UTM coordinate to WGS84') 14 | parser_utm.add_argument('easting', type=int, help='Easting component of the UTM coordinate') 15 | parser_utm.add_argument('northing', type=int, help='Northing component of the UTM coordinate') 16 | parser_utm.add_argument('zone_number', type=int, help='Zone number of the UTM coordinate') 17 | parser_utm.add_argument('zone_letter', help='Zone letter of the UTM coordinate') 18 | 19 | args = parser.parse_args() 20 | 21 | if all(arg in args for arg in ['easting', 'northing', 'zone_number', 'zone_letter']): 22 | if args.zone_letter == '': 23 | parser_utm.print_usage() 24 | print "utm-converter utm: error: too few arguments" 25 | exit() 26 | 27 | coordinate = utm.to_latlon(args.easting, args.northing, 28 | args.zone_number, args.zone_letter) 29 | 30 | elif all(arg in args for arg in ['latitude', 'longitude']): 31 | coordinate = utm.from_latlon(args.latitude, args.longitude) 32 | 33 | print ','.join([str(component) for component in coordinate]) 34 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/setup.py: -------------------------------------------------------------------------------- 1 | from distutils.core import setup 2 | 3 | setup( 4 | name='utm', 5 | version='0.4.2', 6 | author='Tobias Bieniek', 7 | author_email='Tobias.Bieniek@gmx.de', 8 | url='https://github.com/Turbo87/utm', 9 | description='Bidirectional UTM-WGS84 converter for python', 10 | keywords=['utm', 'wgs84', 'coordinate', 'converter'], 11 | classifiers=[ 12 | 'Programming Language :: Python', 13 | 'License :: OSI Approved :: MIT License', 14 | 'Operating System :: OS Independent', 15 | 'Development Status :: 4 - Beta', 16 | 'Environment :: Other Environment', 17 | 'Intended Audience :: Developers', 18 | 'Intended Audience :: Science/Research', 19 | 'Topic :: Scientific/Engineering :: GIS', 20 | ], 21 | packages=['utm'], 22 | scripts=['scripts/utm-converter'], 23 | ) 24 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/test/test_utm.py: -------------------------------------------------------------------------------- 1 | import utm as UTM 2 | import unittest 3 | 4 | 5 | class UTMTestCase(unittest.TestCase): 6 | def assert_utm_equal(self, a, b): 7 | self.assertAlmostEqual(a[0], b[0], 0) 8 | self.assertAlmostEqual(a[1], b[1], 0) 9 | self.assertEqual(a[2], b[2]) 10 | self.assertEqual(a[3].upper(), b[3].upper()) 11 | 12 | def assert_latlon_equal(self, a, b): 13 | self.assertAlmostEqual(a[0], b[0], 4) 14 | self.assertAlmostEqual(a[1], b[1], 4) 15 | 16 | 17 | class KnownValues(UTMTestCase): 18 | known_values = [ 19 | # Aachen, Germany 20 | ( 21 | (50.77535, 6.08389), 22 | 
(294409, 5628898, 32, 'U'), 23 | {'northern': True}, 24 | ), 25 | # New York, USA 26 | ( 27 | (40.71435, -74.00597), 28 | (583960, 4507523, 18, 'T'), 29 | {'northern': True}, 30 | ), 31 | # Wellington, New Zealand 32 | ( 33 | (-41.28646, 174.77624), 34 | (313784, 5427057, 60, 'G'), 35 | {'northern': False}, 36 | ), 37 | # Capetown, South Africa 38 | ( 39 | (-33.92487, 18.42406), 40 | (261878, 6243186, 34, 'H'), 41 | {'northern': False}, 42 | ), 43 | # Mendoza, Argentina 44 | ( 45 | (-32.89018, -68.84405), 46 | (514586, 6360877, 19, 'h'), 47 | {'northern': False}, 48 | ), 49 | # Fairbanks, Alaska, USA 50 | ( 51 | (64.83778, -147.71639), 52 | (466013, 7190568, 6, 'W'), 53 | {'northern': True}, 54 | ), 55 | # Ben Nevis, Scotland, UK 56 | ( 57 | (56.79680, -5.00601), 58 | (377486, 6296562, 30, 'V'), 59 | {'northern': True}, 60 | ), 61 | # Latitude 84 62 | ( 63 | (84, -5.00601), 64 | (476594, 9328501, 30, 'X'), 65 | {'northern': True}, 66 | ), 67 | ] 68 | 69 | def test_from_latlon(self): 70 | '''from_latlon should give known result with known input''' 71 | for latlon, utm, _ in self.known_values: 72 | result = UTM.from_latlon(*latlon) 73 | self.assert_utm_equal(utm, result) 74 | 75 | def test_to_latlon(self): 76 | '''to_latlon should give known result with known input''' 77 | for latlon, utm, utm_kw in self.known_values: 78 | result = UTM.to_latlon(*utm) 79 | self.assert_latlon_equal(latlon, result) 80 | 81 | result = UTM.to_latlon(*utm[0:3], **utm_kw) 82 | self.assert_latlon_equal(latlon, result) 83 | 84 | 85 | class BadInput(UTMTestCase): 86 | def test_from_latlon_range_checks(self): 87 | '''from_latlon should fail with out-of-bounds input''' 88 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, -100, 0) 89 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, -80.1, 0) 90 | for i in range(-8000, 8400): 91 | UTM.from_latlon(i / 100.0, 0) 92 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 84.1, 0) 93 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 100, 0) 94 | 95 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 0, -300) 96 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 0, -180.1) 97 | for i in range(-18000, 18000): 98 | UTM.from_latlon(0, i / 100.0) 99 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 0, 180.1) 100 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 0, 300) 101 | 102 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, -100, -300) 103 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 100, -300) 104 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, -100, 300) 105 | self.assertRaises(UTM.OutOfRangeError, UTM.from_latlon, 100, 300) 106 | 107 | def test_to_latlon_range_checks(self): 108 | '''to_latlon should fail with out-of-bounds input''' 109 | 110 | # test easting range 111 | 112 | self.assertRaises( 113 | UTM.OutOfRangeError, UTM.to_latlon, 0, 5000000, 32, 'U') 114 | 115 | self.assertRaises( 116 | UTM.OutOfRangeError, UTM.to_latlon, 99999, 5000000, 32, 'U') 117 | 118 | for i in range(100000, 999999, 1000): 119 | UTM.to_latlon(i, 5000000, 32, 'U') 120 | 121 | self.assertRaises( 122 | UTM.OutOfRangeError, UTM.to_latlon, 1000000, 5000000, 32, 'U') 123 | 124 | self.assertRaises( 125 | UTM.OutOfRangeError, UTM.to_latlon, 100000000000, 5000000, 32, 'U') 126 | 127 | # test northing range 128 | 129 | self.assertRaises( 130 | UTM.OutOfRangeError, UTM.to_latlon, 500000, -100000, 32, 'U') 131 | 132 | self.assertRaises( 133 | UTM.OutOfRangeError, UTM.to_latlon, 500000, -1, 32, 'U') 134 | for i in 
range(10, 10000000, 1000): 135 | UTM.to_latlon(500000, i, 32, 'U') 136 | 137 | self.assertRaises( 138 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 10000001, 32, 'U') 139 | 140 | self.assertRaises( 141 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 50000000, 32, 'U') 142 | 143 | # test zone numbers 144 | 145 | self.assertRaises( 146 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 0, 'U') 147 | 148 | for i in range(1, 60): 149 | UTM.to_latlon(500000, 5000000, i, 'U') 150 | 151 | self.assertRaises( 152 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 61, 'U') 153 | 154 | self.assertRaises( 155 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 1000, 'U') 156 | 157 | # test zone letters 158 | 159 | self.assertRaises( 160 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 32, 'A') 161 | 162 | self.assertRaises( 163 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 32, 'B') 164 | 165 | self.assertRaises( 166 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 32, 'I') 167 | 168 | self.assertRaises( 169 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 32, 'O') 170 | 171 | for i in range(ord('C'), ord('X')): 172 | i = chr(i) 173 | if i != 'I' and i != 'O': 174 | UTM.to_latlon(500000, 5000000, 32, i) 175 | 176 | self.assertRaises( 177 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 32, 'Y') 178 | 179 | self.assertRaises( 180 | UTM.OutOfRangeError, UTM.to_latlon, 500000, 5000000, 32, 'Z') 181 | 182 | 183 | class Zone32V(unittest.TestCase): 184 | 185 | def assert_zone_equal(self, result, expected_number, expected_letter): 186 | self.assertEqual(result[2], expected_number) 187 | self.assertEqual(result[3].upper(), expected_letter.upper()) 188 | 189 | def test_inside(self): 190 | self.assert_zone_equal(UTM.from_latlon(56, 3), 32, 'V') 191 | self.assert_zone_equal(UTM.from_latlon(56, 6), 32, 'V') 192 | self.assert_zone_equal(UTM.from_latlon(56, 9), 32, 'V') 193 | self.assert_zone_equal(UTM.from_latlon(56, 11.999999), 32, 'V') 194 | 195 | self.assert_zone_equal(UTM.from_latlon(60, 3), 32, 'V') 196 | self.assert_zone_equal(UTM.from_latlon(60, 6), 32, 'V') 197 | self.assert_zone_equal(UTM.from_latlon(60, 9), 32, 'V') 198 | self.assert_zone_equal(UTM.from_latlon(60, 11.999999), 32, 'V') 199 | 200 | self.assert_zone_equal(UTM.from_latlon(63.999999, 3), 32, 'V') 201 | self.assert_zone_equal(UTM.from_latlon(63.999999, 6), 32, 'V') 202 | self.assert_zone_equal(UTM.from_latlon(63.999999, 9), 32, 'V') 203 | self.assert_zone_equal(UTM.from_latlon(63.999999, 11.999999), 32, 'V') 204 | 205 | def test_left_of(self): 206 | self.assert_zone_equal(UTM.from_latlon(55.999999, 2.999999), 31, 'U') 207 | self.assert_zone_equal(UTM.from_latlon(56, 2.999999), 31, 'V') 208 | self.assert_zone_equal(UTM.from_latlon(60, 2.999999), 31, 'V') 209 | self.assert_zone_equal(UTM.from_latlon(63.999999, 2.999999), 31, 'V') 210 | self.assert_zone_equal(UTM.from_latlon(64, 2.999999), 31, 'W') 211 | 212 | def test_right_of(self): 213 | self.assert_zone_equal(UTM.from_latlon(55.999999, 12), 33, 'U') 214 | self.assert_zone_equal(UTM.from_latlon(56, 12), 33, 'V') 215 | self.assert_zone_equal(UTM.from_latlon(60, 12), 33, 'V') 216 | self.assert_zone_equal(UTM.from_latlon(63.999999, 12), 33, 'V') 217 | self.assert_zone_equal(UTM.from_latlon(64, 12), 33, 'W') 218 | 219 | def test_below(self): 220 | self.assert_zone_equal(UTM.from_latlon(55.999999, 3), 31, 'U') 221 | self.assert_zone_equal(UTM.from_latlon(55.999999, 6), 32, 'U') 222 | self.assert_zone_equal(UTM.from_latlon(55.999999, 9), 32, 'U') 223 | 
self.assert_zone_equal(UTM.from_latlon(55.999999, 11.999999), 32, 'U') 224 | self.assert_zone_equal(UTM.from_latlon(55.999999, 12), 33, 'U') 225 | 226 | def test_above(self): 227 | self.assert_zone_equal(UTM.from_latlon(64, 3), 31, 'W') 228 | self.assert_zone_equal(UTM.from_latlon(64, 6), 32, 'W') 229 | self.assert_zone_equal(UTM.from_latlon(64, 9), 32, 'W') 230 | self.assert_zone_equal(UTM.from_latlon(64, 11.999999), 32, 'W') 231 | self.assert_zone_equal(UTM.from_latlon(64, 12), 33, 'W') 232 | 233 | 234 | if __name__ == '__main__': 235 | unittest.main() 236 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/utm/__init__.py: -------------------------------------------------------------------------------- 1 | from utm.conversion import to_latlon, from_latlon, latlon_to_zone_number, latitude_to_zone_letter 2 | from utm.error import OutOfRangeError 3 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/utm/conversion.py: -------------------------------------------------------------------------------- 1 | import math 2 | from utm.error import OutOfRangeError 3 | 4 | __all__ = ['to_latlon', 'from_latlon'] 5 | 6 | K0 = 0.9996 7 | 8 | E = 0.00669438 9 | E2 = E * E 10 | E3 = E2 * E 11 | E_P2 = E / (1.0 - E) 12 | 13 | SQRT_E = math.sqrt(1 - E) 14 | _E = (1 - SQRT_E) / (1 + SQRT_E) 15 | _E2 = _E * _E 16 | _E3 = _E2 * _E 17 | _E4 = _E3 * _E 18 | _E5 = _E4 * _E 19 | 20 | M1 = (1 - E / 4 - 3 * E2 / 64 - 5 * E3 / 256) 21 | M2 = (3 * E / 8 + 3 * E2 / 32 + 45 * E3 / 1024) 22 | M3 = (15 * E2 / 256 + 45 * E3 / 1024) 23 | M4 = (35 * E3 / 3072) 24 | 25 | P2 = (3. / 2 * _E - 27. / 32 * _E3 + 269. / 512 * _E5) 26 | P3 = (21. / 16 * _E2 - 55. / 32 * _E4) 27 | P4 = (151. / 96 * _E3 - 417. / 128 * _E5) 28 | P5 = (1097. / 512 * _E4) 29 | 30 | R = 6378137 31 | 32 | ZONE_LETTERS = "CDEFGHJKLMNPQRSTUVWXX" 33 | 34 | 35 | def to_latlon(easting, northing, zone_number, zone_letter=None, northern=None, strict=True): 36 | """This function convert an UTM coordinate into Latitude and Longitude 37 | 38 | Parameters 39 | ---------- 40 | easting: int 41 | Easting value of UTM coordinate 42 | 43 | northing: int 44 | Northing value of UTM coordinate 45 | 46 | zone number: int 47 | Zone Number is represented with global map numbers of an UTM Zone 48 | Numbers Map. More information see utmzones [1]_ 49 | 50 | zone_letter: str 51 | Zone Letter can be represented as string values. Where UTM Zone 52 | Designators can be accessed in [1]_ 53 | 54 | northern: bool 55 | You can set True or False to set this parameter. Default is None 56 | 57 | 58 | .. 
_[1]: http://www.jaworski.ca/utmzones.htm 59 | 60 | """ 61 | if not zone_letter and northern is None: 62 | raise ValueError('either zone_letter or northern needs to be set') 63 | 64 | elif zone_letter and northern is not None: 65 | raise ValueError('set either zone_letter or northern, but not both') 66 | 67 | if strict: 68 | if not 100000 <= easting < 1000000: 69 | raise OutOfRangeError('easting out of range (must be between 100.000 m and 999.999 m)') 70 | if not 0 <= northing <= 10000000: 71 | raise OutOfRangeError('northing out of range (must be between 0 m and 10.000.000 m)') 72 | if not 1 <= zone_number <= 60: 73 | raise OutOfRangeError('zone number out of range (must be between 1 and 60)') 74 | 75 | if zone_letter: 76 | zone_letter = zone_letter.upper() 77 | 78 | if not 'C' <= zone_letter <= 'X' or zone_letter in ['I', 'O']: 79 | raise OutOfRangeError('zone letter out of range (must be between C and X)') 80 | 81 | northern = (zone_letter >= 'N') 82 | 83 | x = easting - 500000 84 | y = northing 85 | 86 | if not northern: 87 | y -= 10000000 88 | 89 | m = y / K0 90 | mu = m / (R * M1) 91 | 92 | p_rad = (mu + 93 | P2 * math.sin(2 * mu) + 94 | P3 * math.sin(4 * mu) + 95 | P4 * math.sin(6 * mu) + 96 | P5 * math.sin(8 * mu)) 97 | 98 | p_sin = math.sin(p_rad) 99 | p_sin2 = p_sin * p_sin 100 | 101 | p_cos = math.cos(p_rad) 102 | 103 | p_tan = p_sin / p_cos 104 | p_tan2 = p_tan * p_tan 105 | p_tan4 = p_tan2 * p_tan2 106 | 107 | ep_sin = 1 - E * p_sin2 108 | ep_sin_sqrt = math.sqrt(1 - E * p_sin2) 109 | 110 | n = R / ep_sin_sqrt 111 | r = (1 - E) / ep_sin 112 | 113 | c = _E * p_cos**2 114 | c2 = c * c 115 | 116 | d = x / (n * K0) 117 | d2 = d * d 118 | d3 = d2 * d 119 | d4 = d3 * d 120 | d5 = d4 * d 121 | d6 = d5 * d 122 | 123 | latitude = (p_rad - (p_tan / r) * 124 | (d2 / 2 - 125 | d4 / 24 * (5 + 3 * p_tan2 + 10 * c - 4 * c2 - 9 * E_P2)) + 126 | d6 / 720 * (61 + 90 * p_tan2 + 298 * c + 45 * p_tan4 - 252 * E_P2 - 3 * c2)) 127 | 128 | longitude = (d - 129 | d3 / 6 * (1 + 2 * p_tan2 + c) + 130 | d5 / 120 * (5 - 2 * c + 28 * p_tan2 - 3 * c2 + 8 * E_P2 + 24 * p_tan4)) / p_cos 131 | 132 | return (math.degrees(latitude), 133 | math.degrees(longitude) + zone_number_to_central_longitude(zone_number)) 134 | 135 | 136 | def from_latlon(latitude, longitude, force_zone_number=None): 137 | """This function convert Latitude and Longitude to UTM coordinate 138 | 139 | Parameters 140 | ---------- 141 | latitude: float 142 | Latitude between 80 deg S and 84 deg N, e.g. (-80.0 to 84.0) 143 | 144 | longitude: float 145 | Longitude between 180 deg W and 180 deg E, e.g. (-180.0 to 180.0). 146 | 147 | force_zone number: int 148 | Zone Number is represented with global map numbers of an UTM Zone 149 | Numbers Map. You may force conversion including one UTM Zone Number. 150 | More information see utmzones [1]_ 151 | 152 | .. 
_[1]: http://www.jaworski.ca/utmzones.htm 153 | """ 154 | if not -80.0 <= latitude <= 84.0: 155 | raise OutOfRangeError('latitude out of range (must be between 80 deg S and 84 deg N)') 156 | if not -180.0 <= longitude <= 180.0: 157 | raise OutOfRangeError('longitude out of range (must be between 180 deg W and 180 deg E)') 158 | 159 | lat_rad = math.radians(latitude) 160 | lat_sin = math.sin(lat_rad) 161 | lat_cos = math.cos(lat_rad) 162 | 163 | lat_tan = lat_sin / lat_cos 164 | lat_tan2 = lat_tan * lat_tan 165 | lat_tan4 = lat_tan2 * lat_tan2 166 | 167 | if force_zone_number is None: 168 | zone_number = latlon_to_zone_number(latitude, longitude) 169 | else: 170 | zone_number = force_zone_number 171 | 172 | zone_letter = latitude_to_zone_letter(latitude) 173 | 174 | lon_rad = math.radians(longitude) 175 | central_lon = zone_number_to_central_longitude(zone_number) 176 | central_lon_rad = math.radians(central_lon) 177 | 178 | n = R / math.sqrt(1 - E * lat_sin**2) 179 | c = E_P2 * lat_cos**2 180 | 181 | a = lat_cos * (lon_rad - central_lon_rad) 182 | a2 = a * a 183 | a3 = a2 * a 184 | a4 = a3 * a 185 | a5 = a4 * a 186 | a6 = a5 * a 187 | 188 | m = R * (M1 * lat_rad - 189 | M2 * math.sin(2 * lat_rad) + 190 | M3 * math.sin(4 * lat_rad) - 191 | M4 * math.sin(6 * lat_rad)) 192 | 193 | easting = K0 * n * (a + 194 | a3 / 6 * (1 - lat_tan2 + c) + 195 | a5 / 120 * (5 - 18 * lat_tan2 + lat_tan4 + 72 * c - 58 * E_P2)) + 500000 196 | 197 | northing = K0 * (m + n * lat_tan * (a2 / 2 + 198 | a4 / 24 * (5 - lat_tan2 + 9 * c + 4 * c**2) + 199 | a6 / 720 * (61 - 58 * lat_tan2 + lat_tan4 + 600 * c - 330 * E_P2))) 200 | 201 | if latitude < 0: 202 | northing += 10000000 203 | 204 | return easting, northing, zone_number, zone_letter 205 | 206 | 207 | def latitude_to_zone_letter(latitude): 208 | if -80 <= latitude <= 84: 209 | return ZONE_LETTERS[int(latitude + 80) >> 3] 210 | else: 211 | return None 212 | 213 | 214 | def latlon_to_zone_number(latitude, longitude): 215 | if 56 <= latitude < 64 and 3 <= longitude < 12: 216 | return 32 217 | 218 | if 72 <= latitude <= 84 and longitude >= 0: 219 | if longitude <= 9: 220 | return 31 221 | elif longitude <= 21: 222 | return 33 223 | elif longitude <= 33: 224 | return 35 225 | elif longitude <= 42: 226 | return 37 227 | 228 | return int((longitude + 180) / 6) + 1 229 | 230 | 231 | def zone_number_to_central_longitude(zone_number): 232 | return (zone_number - 1) * 6 - 180 + 3 233 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/utm-0.4.2/utm/error.py: -------------------------------------------------------------------------------- 1 | class OutOfRangeError(ValueError): 2 | pass 3 | -------------------------------------------------------------------------------- /tools/gt_map_visualization/velocity_from_traj.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import matplotlib.pyplot as plt 4 | import numpy as np 5 | import argparse 6 | 7 | def grad(y, x): 8 | dy = np.diff(y) 9 | dx = np.diff(x) 10 | dydx = dy / dx 11 | return np.hstack([dydx, 0]) 12 | 13 | def moving_average(a, n=3) : 14 | ret = np.cumsum(a, dtype=float) 15 | ret[n:] = ret[n:] - ret[:-n] 16 | return ret[n - 1:] / n 17 | 18 | if __name__ == '__main__': 19 | parser = argparse.ArgumentParser(description='compute velocity from trajectory') 20 | parser.add_argument('--avgbins', '-a', type=int, action='store', default=5, required=False, 21 | help='number of bins over which to 
average velocity') 22 | parser.add_argument('trajectory_file') 23 | args = parser.parse_args() 24 | 25 | a = int(args.avgbins) 26 | 27 | traj = np.loadtxt(args.trajectory_file) 28 | 29 | x = traj[:,4] 30 | y = traj[:,8] 31 | 32 | t = traj[:, 0] - traj[0, 0] 33 | vx = grad(x, t) 34 | vy = grad(y, t) 35 | 36 | 37 | vxa = np.hstack([np.zeros(a-1), moving_average(vx,a)]) 38 | vya = np.hstack([np.zeros(a-1), moving_average(vy,a)]) 39 | vabs = np.sqrt(np.square(vx) + np.square(vy)) 40 | vabsa = np.sqrt(np.square(vxa) + np.square(vya)) 41 | 42 | a = np.transpose(np.vstack([t, vabs, vabsa])) 43 | 44 | imax = np.argmax(vabsa) 45 | print "max avg velocity: %f m/s = %f km/h = %f mph at time: %f" % (vabsa[imax], vabsa[imax]*3.6, vabsa[imax]*2.23694, t[imax]) 46 | 47 | # for row in a: 48 | # print "%f %f %f" % (row[0], row[1], row[2]) 49 | --------------------------------------------------------------------------------
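A note on the trajectory files consumed by the plotting scripts above: `gt_trajectories_from_bag.py` writes one row per message, a timestamp followed by the first twelve entries of the flattened 4x4 pose matrix. That is why `plot_traj.py`, `plot_diff.py`, `traj_to_gpsvisualizer.py` and `velocity_from_traj.py` all index columns 4, 8 and 12 for the x/y/z translation; for `gps_traj.txt` these are UTM easting, northing and a zero altitude, since the writer discards `msg.altitude`. The snippet below is only an illustrative sketch, not part of the repository, and it assumes a `gps_traj.txt` produced by `gt_trajectories_from_bag.py` is in the current directory.

```python
# Sketch (not in the repo): read a trajectory file written by
# gt_trajectories_from_bag.py and report path length and mean ground speed.
import numpy as np

traj = np.loadtxt('gps_traj.txt')          # shape (N, 13): timestamp + 12 pose entries
t = traj[:, 0] - traj[0, 0]                # seconds since the first sample
x, y = traj[:, 4], traj[:, 8]              # translation: UTM easting / northing [m]
# traj[:, 12] would be altitude, but the GPS writer stores 0 there.

step = np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2)  # per-sample horizontal displacement
print("path length: %.1f m, mean speed: %.2f m/s" % (step.sum(), step.sum() / t[-1]))
```

For a smoothed estimate instead of this crude average, `velocity_from_traj.py` above computes the same quantities with a moving-average filter and reports the peak speed, e.g. `./velocity_from_traj.py -a 5 gps_traj.txt`.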