├── README.md
├── event_dataset_generation
│   ├── aggregator.py
│   ├── generator.py
│   ├── main.py
│   ├── make_dataset.sh
│   ├── paths
│   │   ├── event_day.txt
│   │   ├── event_night.txt
│   │   ├── reconstructed_day.txt
│   │   └── reconstructed_night.txt
│   └── synchrotron.py
├── event_dataset_generation_vg
│   ├── aggregator.py
│   ├── generator.py
│   ├── main.py
│   └── make_dataset.sh
├── event_rgb_gps_sensors
│   ├── gps_tracker.py
│   ├── locate_port.sh
│   ├── main.py
│   ├── rgb_stream.py
│   ├── stream_event.py
│   └── transfer.sh
├── event_to_video
│   ├── demo_event_to_video.py
│   ├── lightning_model.py
│   └── main.sh
├── gps_data_plot
│   ├── coverage.png
│   ├── dataset_metrics.json
│   ├── plot_all.py
│   └── plot_outbound_data.py
├── img
│   ├── front_page_showcase.jpg
│   ├── gps_plot.png
│   ├── report.png
│   ├── sample_images.png
│   ├── sensor_setup.jpg
│   ├── sensor_specs.jpg
│   └── total_gps_coverage.png
├── prophesee_e2v
│   ├── __pycache__
│   │   └── lightning_model.cpython-310.pyc
│   ├── demo_event_to_video.py
│   ├── e2v.ckpt
│   ├── lightning_model.py
│   ├── main.py
│   ├── main.sh
│   └── run.sh
├── requirements.txt
└── rgb_dataset_generation
    ├── generate_dataset.py
    └── rgb_dataset.sh

/README.md:
--------------------------------------------------------------------------------
1 | # NYC-Event-VPR: A Large-Scale High-Resolution Event-Based Visual Place Recognition Dataset in Dense Urban Environments
2 | [Taiyi Pan](http://www.taiyipan.org), [Junyang He](https://www.linkedin.com/in/junyang-he/), [Chao Chen*](https://joechencc.github.io), [Yiming Li*](https://scholar.google.com/citations?user=i_aajNoAAAAJ), [Chen Feng](https://scholar.google.com/citations?user=YeG8ZM0AAAAJ)
3 | 
4 | New York University, Tandon School of Engineering
5 | 
6 | ## Abstract
7 | ![showcase](./img/front_page_showcase.jpg)
8 | 
9 | Visual place recognition (VPR) enables autonomous robots to identify previously visited locations, which contributes to tasks like simultaneous localization and mapping (SLAM). VPR faces challenges such as accurate image neighbor retrieval and appearance changes in scenery.
10 | 
11 | Event cameras, also known as dynamic vision sensors, are a new sensor modality for VPR and offer a promising solution to these challenges thanks to their unique attributes: high temporal resolution (1 MHz clock), ultra-low latency (in μs), and high dynamic range (>120 dB). These attributes make event cameras less susceptible to motion blur and more robust in variable lighting conditions, making them suitable for addressing VPR challenges. However, the scarcity of event-based VPR datasets, partly due to the novelty and cost of event cameras, hampers their adoption.
12 | 
13 | To fill this data gap, our paper introduces the NYC-Event-VPR dataset to the robotics and computer vision communities, featuring the Prophesee IMX636 HD event sensor (1280x720 resolution) combined with an RGB camera and a GPS module. It encompasses over 13 hours of geotagged event data, spanning 260+ kilometers across New York City, covering diverse lighting and weather conditions, day/night scenarios, and multiple visits to various locations.
14 | 
15 | Furthermore, our paper employs three frameworks to conduct generalization performance assessments, promoting innovation in event-based VPR and its integration into robotics applications.
16 | 
17 | ## Links
18 | Dataset repository: https://huggingface.co/datasets/ai4ce/NYC-Event-VPR
19 | Paper website: https://ai4ce.github.io/NYC-Event-VPR/
20 | 
21 | ## Dataset
22 | ### Coverage
23 | ![coverage](./img/gps_plot.png)
24 | Total coverage over New York City
25 | 
26 | ### Visualization
27 | ![visualization](./img/sample_images.png)
28 | Visualizations (left column: naively rendered; middle column: E2VID reconstructed; right column: RGB images)
29 | 
30 | ### Sensors
31 | ![sensors](./img/sensor_specs.jpg)
32 | Sensors (a: Prophesee EVK4 HD event camera, b: ELP RGB camera, c: Insta360 vibration damper, d: Sparkfun RTK-GPS-SMA breakout board)
33 | 
34 | ## Instructions
35 | 
36 | ### Prerequisites
37 | - python 3
38 | - pandas
39 | - opencv-python
40 | - pickle (bundled with Python)
41 | - geopy
42 | - numpy
43 | - utm
44 | - matplotlib
45 | - descartes
46 | - geopandas
47 | - shapely
48 | - torch
49 | - torchvision
50 | - tqdm
51 | - kornia
52 | 
53 | ### Installation
54 | 1. Install the Metavision SDK: https://docs.prophesee.ai/stable/installation/index.html
55 | (Optional: only if working directly with raw sensor readings)
56 | 
57 | 2. Download the repository
58 | ```
59 | git clone https://github.com/ai4ce/NYC-Event-VPR.git
60 | ```
61 | 3. Install Python packages
62 | ```
63 | pip install -r requirements.txt
64 | ```
65 | 4. Install VPR-Bench: https://github.com/MubarizZaffar/VPR-Bench
66 | (Optional: only if evaluating datasets with NYC-Event-VPR_VPR-Bench)
67 | 
68 | 5. Install Visual Geo-Localization: https://github.com/gmberton/deep-visual-geo-localization-benchmark
69 | (Optional: only if training and evaluating datasets with NYC-Event-VPR_VG)
70 | 
71 | 6. Install VPR Methods Evaluation: https://github.com/gmberton/VPR-methods-evaluation
72 | (Optional: only if evaluating datasets with NYC-Event-VPR_VG)
73 | 
74 | ### Note
75 | NYC-Event-VPR_raw_data: Raw sensor readings collected in NYC. Contains raw event files, GPS coordinates and timestamps, and RGB images.
76 | 
77 | NYC-Event-VPR_VG: Preformatted datasets compatible with the Visual Geo-Localization framework.
78 | 
79 | NYC-Event-VPR_VPR-Bench: Preformatted datasets compatible with VPR-Bench.
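The GPS logs in NYC-Event-VPR_raw_data are plain CSV files written by `event_rgb_gps_sensors/gps_tracker.py`, so they can be inspected directly with pandas before running any dataset generation. A minimal sketch (the extraction path is an assumption; adjust it to your local copy):

```python
import glob
import pandas as pd

# Collect every GPS log in the raw data release.
files = glob.glob('NYC-Event-VPR_raw_data/sensor_data_*/GPS_data_*.csv')
df = pd.concat(map(pd.read_csv, files), ignore_index=True)

# Columns follow the logger in event_rgb_gps_sensors/gps_tracker.py:
# Longitude, Latitude, HeadMotion, Timestamp
print('total GPS fixes:', df.shape[0])
print(df.head())
```

The same glob-and-concat pattern is what `event_dataset_generation/main.py` uses to build its master dataframe.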
80 | 
81 | ### Citation
82 | ```bibtex
83 | 
84 | ```
85 | 
86 | ### Dataset structure
87 | ```
88 | NYC-Event-VPR
89 | │
90 | ├── NYC-Event-VPR_raw_data
91 | │   ├── sensor_data_2022-12-06_18-27-21
92 | │   │   ├── data_2022-12-06_18-27-24.zip
93 | │   │   ├── GPS_data_2022-12-06_18-27-22.csv
94 | │   │   └── img_2022-12-06_18-27-22.zip
95 | │   ├── sensor_data_2022-12-06_19-27-59
96 | │   │   ├── data_2022-12-06_19-28-02.zip
97 | │   │   ├── GPS_data_2022-12-06_19-28-00.csv
98 | │   │   └── img_2022-12-06_19-28-00.zip
99 | │   ├── sensor_data_2022-12-06_20-45-53
100 | │   │   ├── data_2022-12-06_20-45-57.zip
101 | │   │   ├── GPS_data_2022-12-06_20-45-55.csv
102 | │   │   └── img_2022-12-06_20-45-55.zip
103 | │   ├── sensor_data_2022-12-07_15-46-32
104 | │   │   ├── data_2022-12-07_15-46-36.zip
105 | │   │   └── GPS_data_2022-12-07_15-46-33.csv
106 | │   ├── sensor_data_2022-12-07_16-52-55
107 | │   │   ├── data_2022-12-07_16-52-58.zip
108 | │   │   └── GPS_data_2022-12-07_16-52-55.csv
109 | │   ├── sensor_data_2022-12-07_17-58-34
110 | │   │   ├── data_2022-12-07_17-58-38.zip
111 | │   │   └── GPS_data_2022-12-07_17-58-35.csv
112 | │   ├── sensor_data_2022-12-09_13-59-10
113 | │   │   ├── data_2022-12-09_13-59-13.zip
114 | │   │   ├── GPS_data_2022-12-09_13-59-11.csv
115 | │   │   └── img_2022-12-09_13-59-11.zip
116 | │   ├── sensor_data_2022-12-09_14-41-29
117 | │   │   ├── data_2022-12-09_14-41-32.zip
118 | │   │   ├── GPS_data_2022-12-09_14-41-30.csv
119 | │   │   └── img_2022-12-09_14-41-30.zip
120 | │   ├── sensor_data_2022-12-09_18-56-13
121 | │   │   ├── data_2022-12-09_18-56-16.zip
122 | │   │   ├── GPS_data_2022-12-09_18-56-14.csv
123 | │   │   └── img_2022-12-09_18-56-14.zip
124 | │   ├── sensor_data_2022-12-09_19-40-27
125 | │   │   ├── GPS_data_2022-12-09_19-40-28.csv
126 | │   │   └── img_2022-12-09_19-40-28.zip
127 | │   ├── sensor_data_2022-12-09_19-42-07
128 | │   │   ├── data_2022-12-09_19-42-10.zip
129 | │   │   ├── GPS_data_2022-12-09_19-42-08.csv
130 | │   │   └── img_2022-12-09_19-42-08.zip
131 | │   ├── sensor_data_2022-12-20_16-54-11
132 | │   │   ├── data_2022-12-20_16-54-14.zip
133 | │   │   ├── GPS_data_2022-12-20_16-54-12.csv
134 | │   │   └── img_2022-12-20_16-54-12.zip
135 | │   ├── sensor_data_2023-02-14_15-06-30
136 | │   │   ├── data_2023-02-14_15-06-33.zip
137 | │   │   ├── GPS_data_2023-02-14_15-06-31.csv
138 | │   │   └── img_2023-02-14_15-06-31.zip
139 | │   ├── sensor_data_2023-02-14_18-20-40
140 | │   │   ├── data_2023-02-14_18-20-44.zip
141 | │   │   ├── GPS_data_2023-02-14_18-20-42.csv
142 | │   │   └── img_2023-02-14_18-20-42.zip
143 | │   ├── sensor_data_2023-04-20_15-53-26
144 | │   │   ├── data_2023-04-20_15-53-29.zip
145 | │   │   ├── GPS_data_2023-04-20_15-53-27.csv
146 | │   │   └── img_2023-04-20_15-53-27.zip
147 | │   └── sensor_data_2023-04-20_17-10-01
148 | │       ├── data_2023-04-20_17-10-04.zip
149 | │       ├── GPS_data_2023-04-20_17-10-02.csv
150 | │       └── img_2023-04-20_17-10-02.zip
151 | │
152 | ├── NYC-Event-VPR_VG
153 | │   ├── NYC-Event-VPR_Event
154 | │   │   └── images
155 | │   │       ├── train
156 | │   │       │   ├── queries.zip
157 | │   │       │   └── database.zip
158 | │   │       ├── val
159 | │   │       │   ├── queries.zip
160 | │   │       │   └── database.zip
161 | │   │       └── test
162 | │   │           ├── queries.zip
163 | │   │           └── database.zip
164 | │   ├── NYC-Event-VPR_E2VID
165 | │   │   └── images
166 | │   │       ├── train
167 | │   │       │   ├── queries.zip
168 | │   │       │   └── database.zip
169 | │   │       ├── val
170 | │   │       │   ├── queries.zip
171 | │   │       │   └── database.zip
172 | │   │       └── test
173 | │   │           ├── queries.zip
174 | │   │           └── database.zip
175 | │   └── NYC-Event-VPR_RGB
176 | │       └── images
177 | │           ├── train
178 | │           │   ├── queries.zip
179 | │           │   └── database.zip
180 | │           ├── val
181 | │           │   ├── queries.zip
182 | │           │   └── database.zip
183 | │           └── test
184 | │               ├── queries.zip
185 | │               └── database.zip
186 | │
187 | └── NYC-Event-VPR_VPR-Bench
188 |     ├── event_10m_0sobel_1fps
189 |     │   ├── ground_truth_new.npy
190 |     │   ├── query.zip
191 |     │   └── ref.zip
192 |     ├── event_25m_0sobel_1fps
193 |     │   ├── ground_truth_new.npy
194 |     │   ├── query.zip
195 |     │   └── ref.zip
196 |     ├── event_25m_0sobel_1fps_day
197 |     │   ├── ground_truth_new.npy
198 |     │   ├── query.zip
199 |     │   └── ref.zip
200 |     ├── event_25m_0sobel_1fps_night
201 |     │   ├── ground_truth_new.npy
202 |     │   ├── query.zip
203 |     │   └── ref.zip
204 |     ├── event_25m_100sobel_1fps
205 |     │   ├── ground_truth_new.npy
206 |     │   ├── query.zip
207 |     │   └── ref.zip
208 |     ├── event_2m_0sobel_1fps
209 |     │   ├── ground_truth_new.npy
210 |     │   ├── query.zip
211 |     │   └── ref.zip
212 |     ├── event_5m_0sobel_1fps
213 |     │   ├── ground_truth_new.npy
214 |     │   ├── query.zip
215 |     │   └── ref.zip
216 |     ├── reconstructed_25m_0sobel_1fps
217 |     │   ├── ground_truth_new.npy
218 |     │   ├── query.zip
219 |     │   └── ref.zip
220 |     ├── reconstructed_25m_0sobel_1fps_day
221 |     │   ├── ground_truth_new.npy
222 |     │   ├── query.zip
223 |     │   └── ref.zip
224 |     ├── reconstructed_25m_0sobel_1fps_night
225 |     │   ├── ground_truth_new.npy
226 |     │   ├── query.zip
227 |     │   └── ref.zip
228 |     ├── reconstructed_25m_60sobel_1fps
229 |     │   ├── ground_truth_new.npy
230 |     │   ├── query.zip
231 |     │   └── ref.zip
232 |     ├── reconstructed_2m_0sobel_1fps
233 |     │   ├── ground_truth_new.npy
234 |     │   ├── query.zip
235 |     │   └── ref.zip
236 |     ├── reconstructed_5m_0sobel_1fps
237 |     │   ├── ground_truth_new.npy
238 |     │   ├── query.zip
239 |     │   └── ref.zip
240 |     ├── rgb_15m_1fps
241 |     │   ├── ground_truth_new.csv
242 |     │   ├── ground_truth_new.npy
243 |     │   ├── ground_truth.npy
244 |     │   ├── query.zip
245 |     │   └── ref.zip
246 |     ├── rgb_25m_1fps
247 |     │   ├── ground_truth_new.csv
248 |     │   ├── ground_truth_new.npy
249 |     │   ├── ground_truth.npy
250 |     │   ├── query.zip
251 |     │   └── ref.zip
252 |     ├── rgb_2m_0sobel_1fps
253 |     │   ├── ground_truth_new.npy
254 |     │   ├── query.zip
255 |     │   └── ref.zip
256 |     ├── rgb_5m_0sobel_1fps
257 |     │   ├── ground_truth_new.npy
258 |     │   ├── query.zip
259 |     │   └── ref.zip
260 |     └── rgb_5m_1fps
261 |         ├── ground_truth_new.csv
262 |         ├── ground_truth_new.npy
263 |         ├── ground_truth.npy
264 |         ├── query.zip
265 |         └── ref.zip
266 | ```
--------------------------------------------------------------------------------
/event_dataset_generation/aggregator.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import glob
3 | from time import time
4 | import concurrent.futures
5 | from concurrent.futures import ProcessPoolExecutor, wait
6 | import os
7 | import cv2 as cv
8 | 
9 | class Aggregator:
10 |     # class constructor
11 |     def __init__(self, dir: str, df: pd.DataFrame):
12 |         self.dir = dir
13 |         self.df = df
14 |         self.df2 = None
15 |         assert dir is not None, 'Event directory void'
16 |         assert df is not None, 'Dataframe void'
17 |         print('Aggregator object')
18 | 
19 |     # class string representation
20 |     def __str__(self) -> str:
21 |         return self.__class__.__name__  # must return a str (returning None raises TypeError)
22 | 
23 |     # getter methods ###############################################################################################
24 |     def get_dataframe(self) -> pd.DataFrame:
25 |         return self.df2
26 | 
27 |     # class methods ################################################################################################
28 |     # iterate over input event frames (various folders, various fps),
29 |     # correlate with original dataframe, filter out static frames (vehicle not moving)
30 |     def aggregate(self):
31 |         start = time()
32 |         #########################################################################################
33 |         self.df2 = pd.DataFrame(columns=['Latitude', 'Longitude', 'Timestamp', 'Hash', 'Path', 'Sobel'])
34 |         self.iterate_folders()
35 |         #########################################################################################
36 |         end = time()
37 |         print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
38 |         print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
39 |         print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0))
40 | 
41 |     # helper functions ##################################################################################################
42 |     # use concurrency to iterate over all folders containing event frames (previously synchronized)
43 |     def iterate_folders(self):
44 |         with ProcessPoolExecutor() as executor:
45 |             print('Started Process Pool Executor...')
46 |             futures = list()
47 |             folder_path = glob.glob(os.path.join(self.dir, 'event_*'))
48 |             print('{} event data directories found'.format(len(folder_path)))
49 |             # iterate over event folders, iterate frames: first pass
50 |             for folder in folder_path:
51 |                 futures.append(
52 |                     executor.submit(
53 |                         self.iterate_frames,
54 |                         folder
55 |                     )
56 |                 )
57 |             # block until all futures finish
58 |             wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
59 |             print('All concurrent futures have completed')
60 |             # merge concurrent futures
61 |             for future in futures:
62 |                 self.df2 = pd.concat([
63 |                     self.df2,
64 |                     future.result()
65 |                 ], ignore_index = True
66 |                 )
67 |             print('Concurrent futures merged')
68 |         print('Executor has shutdown')
69 |         self.df2.to_csv('peek/df_aggregated.csv')
70 | 
71 |     # concurrent thread: iterate over all frames in folder, correlate with original dataframe, return new subframe
72 |     def iterate_frames(self, folder: str) -> pd.DataFrame:
73 |         print(folder)
74 |         df2 = pd.DataFrame(columns=['Latitude', 'Longitude', 'Timestamp', 'Hash', 'Path', 'Sobel'])
75 |         for img in os.listdir(folder):
76 |             print(img)
77 |             # extract timestamp and hash string
78 |             tokens = img.split('_')
79 |             timestamp = '_'.join(tokens[1:4])
80 |             hash = tokens[4].split('.')[0]
81 |             # locate corresponding rows in original dataframe
82 |             row = self.df[self.df['Timestamp'] == timestamp]
83 |             # open image and calculate mean Sobel operator (x and y) magnitude
84 |             img_path = os.path.join(folder, img)
85 |             image = cv.imread(img_path)
86 |             # cv.imread returns None on failure rather than raising, so check explicitly
87 |             if image is None:
88 |                 raise FileNotFoundError(img_path)
89 |             sobelx = cv.Sobel(image, cv.CV_64F, 1, 0, ksize = 5)
90 |             sobely = cv.Sobel(image, cv.CV_64F, 0, 1, ksize = 5)
91 |             sobel = cv.mean(cv.mean(cv.magnitude(sobelx, sobely)))[0]
92 |             # add row data to new dataframe
93 |             df2 = pd.concat([
94 |                 df2,
95 |                 pd.DataFrame({
96 |                     'Latitude': row['Latitude'],
97 |                     'Longitude': row['Longitude'],
98 |                     'Timestamp': timestamp,
99 |                     'Hash': hash,
100 |                     'Path': img_path,
101 |                     'Sobel': sobel
102 |                 }
103 |                 )],
104 |                 ignore_index = True
105 |             )
106 |         return df2
--------------------------------------------------------------------------------
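The `Sobel` column computed above is what later filters out information-sparse event frames. A standalone sketch of the same score for a single image, mirroring `Aggregator.iterate_frames` (the file name is hypothetical):

```python
import cv2 as cv

# Mean Sobel gradient magnitude, as used by the aggregator above.
image = cv.imread('frame_example.jpg')  # hypothetical file name
assert image is not None, 'image not found'
sobelx = cv.Sobel(image, cv.CV_64F, 1, 0, ksize=5)
sobely = cv.Sobel(image, cv.CV_64F, 0, 1, ksize=5)
score = cv.mean(cv.mean(cv.magnitude(sobelx, sobely)))[0]
# Frames whose score falls below the --sobel threshold are discarded downstream.
print('mean Sobel magnitude:', score)
```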
/event_dataset_generation/generator.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import geopy.distance
3 | from time import time
4 | import numpy as np
5 | import concurrent.futures
6 | from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, wait
7 | from multiprocessing import cpu_count
8 | import pickle
9 | import cv2 as cv
10 | import os
11 | 
12 | class Generator:
13 |     # class constructor
14 |     def __init__(self, df: pd.DataFrame, ddir = '/home/taiyi/scratch/NYU-EventVPR-Event'):
15 |         self.df = df
16 |         self.ddir = ddir
17 |         self.dfq = None
18 |         self.gt = None
19 |         self.threshold = None
20 |         assert df is not None, 'New dataframe void'
21 |         assert ddir is not None, 'Destination directory void'
22 |         print('Generator object')
23 |         print('Total entry count: {}'.format(self.df.shape[0]))
24 |         print('Destination directory: {}'.format(self.ddir))
25 | 
26 |     # class string representation
27 |     def __str__(self) -> str:
28 |         return self.__class__.__name__  # must return a str (returning None raises TypeError)
29 | 
30 |     # getter methods ###############################################################################################
31 |     def get_query(self) -> pd.DataFrame:
32 |         assert self.dfq is not None
33 |         print('{} query images returned'.format(self.dfq.shape[0]))
34 |         print(self.dfq.head())
35 |         return self.dfq
36 | 
37 |     def get_ref(self) -> pd.DataFrame:
38 |         assert self.df is not None
39 |         print('{} ref images returned'.format(self.df.shape[0]))
40 |         print(self.df.head())
41 |         return self.df
42 | 
43 |     def get_gt(self) -> np.ndarray:
44 |         assert self.gt is not None
45 |         print('{}x{} ground truth numpy array returned'.format(self.gt.shape[0], self.gt.shape[1]))
46 |         print(self.gt[0])
47 |         return self.gt
48 | 
49 |     def save_gt(self):
50 |         assert self.gt is not None
51 |         self.pickle_compatible()
52 |         self.gt_to_csv()
53 | 
54 |     # setter methods #########################################################################################
55 |     def set_query(self, dfq: pd.DataFrame):
56 |         assert dfq is not None
57 |         self.dfq = dfq
58 | 
59 |     def set_ref(self, df: pd.DataFrame):
60 |         assert df is not None
61 |         self.df = df
62 | 
63 |     # class methods ################################################################################################
64 |     # keep only frames above the Sobel threshold, thus removing info-sparse frames
65 |     def filter(self, sobel_threshold = 100):
66 |         print('Applying Sobel operator...')
67 |         self.df = self.df[self.df['Sobel'] > sobel_threshold]
68 |         print('Sobel operation complete, resulting data entry: {}'.format(self.df.shape[0]))
69 |         self.df.to_csv('peek/df_filtered.csv')
70 | 
71 |     # reduce total dataframe to a percentage of total rows, randomly sampled
72 |     def reduce(self, factor: float):
73 |         print('Reducing dataframe to {:.2f}%...'.format(factor * 100))
74 |         self.df = self.df.sample(frac=factor)
75 |         print('Reduction complete, resulting data entry: {}'.format(self.df.shape[0]))
76 |         self.df.to_csv('peek/df_reduced.csv')
77 | 
78 |     # sample a percentage of total dataframe, divide into query and reference (leftover after query is sampled)
79 |     def sample(self, factor: float):
80 |         # sample query images from reference images
81 |         self.dfq = self.df.sample(frac=factor).reset_index()
82 |         print('Sampled {} query images from {} reference images'.format(self.dfq.shape[0], self.df.shape[0]))
83 |         self.dfq.to_csv('peek/dfq.csv')
84 |         # remove query images from reference images
85 |         self.df = self.df.drop(self.dfq['index'])  # dropping queries from ref makes the dataset more challenging
86 |         # shuffle reference images
87 |         self.df = self.df.sample(frac=1).reset_index()
88 |         print('New reference image count: {}'.format(self.df.shape[0]))
89 |         self.df.to_csv('peek/df.csv')
90 | 
91 |     # concurrently move selected images from query and reference to destination directory
92 |     def move_imgs(self):
93 |         start = time()
94 | 
95 |         print('Moving selected images into destination directory...')
96 |         if not os.path.exists(self.ddir):
97 |             os.mkdir(self.ddir)
98 |         query_path = os.path.join(self.ddir, 'query')
99 |         if not os.path.exists(query_path):
100 |             os.mkdir(query_path)
101 |         ref_path = os.path.join(self.ddir, 'ref')
102 |         if not os.path.exists(ref_path):
103 |             os.mkdir(ref_path)
104 |         # divide up image moving tasks among CPU cores
105 |         with ThreadPoolExecutor() as executor:
106 |             print('Started Thread Pool Executor...')
107 |             futures = list()
108 |             # move ref images
109 |             prev = 0
110 |             for idx in range(self.df.shape[0] // cpu_count(), self.df.shape[0], self.df.shape[0] // cpu_count()):
111 |                 futures.append(
112 |                     executor.submit(
113 |                         self.img_io_thread,
114 |                         self.df.iloc[prev:idx],
115 |                         ref_path
116 |                     )
117 |                 )
118 |                 prev = idx
119 |             futures.append(executor.submit(
120 |                 self.img_io_thread,
121 |                 self.df.iloc[prev:],
122 |                 ref_path
123 |             ))
124 |             # move query images
125 |             prev = 0
126 |             for idx in range(self.dfq.shape[0] // cpu_count(), self.dfq.shape[0], self.dfq.shape[0] // cpu_count()):
127 |                 futures.append(
128 |                     executor.submit(
129 |                         self.img_io_thread,
130 |                         self.dfq.iloc[prev:idx],
131 |                         query_path
132 |                     )
133 |                 )
134 |                 prev = idx
135 |             futures.append(executor.submit(
136 |                 self.img_io_thread,
137 |                 self.dfq.iloc[prev:],
138 |                 query_path
139 |             ))
140 |             # block until all futures finish
141 |             wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
142 |             print('All concurrent futures have completed')
143 |         print('Executor has shutdown')
144 | 
145 |         end = time()
146 |         print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
147 |         print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
148 |         print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0))
149 | 
150 | 
151 |     # generate ground truth array between query and reference images (multiprocessing)
152 |     def compute_ground_truth(self, threshold = 25) -> np.ndarray:
153 |         start = time()
154 | 
155 |         print('Computing ground truth with {} cores...'.format(cpu_count()))
156 | 
157 |         self.threshold = threshold
158 |         assert threshold > 0, 'Fault tolerance has to be positive'
159 |         assert self.df is not None and self.dfq is not None, 'Need defined query and ref dataframes'
160 | 
161 |         # initialize ground truth numpy array
162 |         self.gt = self.create_gt(self.dfq)
163 | 
164 |         # define multiprocessing program executor
165 |         print('CPU core count: {}'.format(cpu_count()))
166 |         executor = ProcessPoolExecutor()
167 |         print('Started Process Pool Executor...')
168 | 
169 |         # divide task into equal chunks among all CPU threads
170 |         futures = list()
171 |         assert self.dfq.shape[0] >= 100, 'Too few discrete tasks for {} threads'.format(cpu_count())
172 |         # submit jobs to futures
173 |         prev = 0
174 |         for idx in range(self.dfq.shape[0] // cpu_count(), self.dfq.shape[0], self.dfq.shape[0] // cpu_count()):
175 |             futures.append(executor.submit(self.match_process, self.df, self.dfq, self.gt, prev, idx))
176 |             prev = idx
177 |         # block until all futures finish
178 |         wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
179 |         print('All concurrent futures have completed')
180 | 
181 |         # merge resulting arrays based on index range
182 |         for future in futures:
183 |             self.gt[future.result()[1]:future.result()[2], :] = future.result()[0][future.result()[1]:future.result()[2], :]
184 |         print('Concurrent futures merged')
185 | 
186 |         # shutdown executor
187 |         executor.shutdown()
188 |         print('Executor has shutdown')
189 | 
190 |         # iterate over remaining rows
191 |         self.match_process(self.df, self.dfq, self.gt, prev, self.dfq.shape[0])  # bound by dfq, the frame actually sliced
192 |         print('Remaining rows iterated')
193 | 
194 |         # save ground truth
195 |         self.pickle_compatible()
196 |         self.gt_to_csv()
197 | 
198 |         end = time()
199 |         print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
200 |         print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
201 |         print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0))
202 | 
203 |     # helper functions ##################################################################################################
204 |     # subprocess to be handled by an assigned CPU thread
205 |     # upon completion, return partial array, start and end indices
206 |     # def match_process(self, df: pd.DataFrame, dfq: pd.DataFrame, gt: np.ndarray, start: int, end: int): # deprecated
207 |     #     for row in dfq.iloc[start:end].itertuples():
208 |     #         print('{}/{}'.format(row.index, df.shape[0] + dfq.shape[0] - 1))
209 |     #         gt[row.Index][0] = row.Index # debug: row.Index -> row.index
210 |     #         for r in df.itertuples():
211 |     #             dist = self.calculate_gps_distance(
212 |     #                 (row.Latitude, row.Longitude),
213 |     #                 (r.Latitude, r.Longitude)
214 |     #             )
215 |     #             if dist < self.threshold:
216 |     #                 gt[row.Index][1].append(r.Index) # debug: r.Index -> r.index
217 |     #     return gt, start, end
218 | 
219 |     # subprocess to be handled by an assigned CPU thread
220 |     # upon completion, return partial array, start and end indices
221 |     # utilizes memoization for boosted efficiency
222 |     def match_process(self, df: pd.DataFrame, dfq: pd.DataFrame, gt: np.ndarray, start: int, end: int):
223 |         cache = dict()  # cache for memoization (avoids recomputing matches for repeated timestamps)
224 |         for row in dfq.iloc[start:end].itertuples():
225 |             print('{}/{}'.format(row.index, df.shape[0] + dfq.shape[0] - 1))
226 |             gt[row.Index][0] = row.Index # debug: row.Index -> row.index
227 |             if cache.get(row.Timestamp) is None: # never seen before timestamp
228 |                 cache[row.Timestamp] = list()
229 |                 for r in df.itertuples():
230 |                     dist = self.calculate_gps_distance(
231 |                         (row.Latitude, row.Longitude),
232 |                         (r.Latitude, r.Longitude)
233 |                     )
234 |                     if dist < self.threshold:
235 |                         gt[row.Index][1].append(r.Index) # debug: r.Index -> r.index
236 |                         cache.get(row.Timestamp).append(r.Index)
237 |             else: # timestamp encountered before
238 |                 gt[row.Index][1] = cache.get(row.Timestamp)
239 |         return gt, start, end
240 | 
241 |     # initialize ground truth numpy array according to dataframe dims
242 |     def create_gt(self, df: pd.DataFrame) -> np.ndarray:
243 |         gt = np.zeros((df.shape[0], 2), dtype=np.ndarray)
244 |         print(gt.shape)
245 |         for i in range(gt.shape[0]):
246 |             gt[i][0] = i
247 |             gt[i][1] = list()
248 |         return gt
249 | 
250 |     # calculate distance in meters between 2 GPS coordinates
251 |     # input: a(latitude, longitude), b(latitude, longitude)
252 |     # output: distance in meters
253 |     def calculate_gps_distance(self, a: tuple, b: tuple) -> float:
254 |         return geopy.distance.geodesic(a, b).meters
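#     worked example of the criterion above (hypothetical coordinates):
#     geopy.distance.geodesic((40.6942, -73.9866), (40.6944, -73.9870)).meters
#     comes out to roughly 40 m, so under the default 25 m fault tolerance these
#     two GPS fixes would not be counted as the same place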
255 | 
256 |     # save ground truth numpy array as pickle protocol 2 compatible version
257 |     def pickle_compatible(self):
258 |         np.save('peek/gt.npy', self.gt)
259 |         try:
260 |             with open('peek/gt.npy', 'rb') as handle:
261 |                 a = np.load(handle, allow_pickle=True)
262 |             with open(os.path.join(self.ddir, 'ground_truth_new.npy'), 'wb') as handle:
263 |                 pickle.dump(a, handle, protocol=2)
264 |         except:
265 |             raise FileNotFoundError
266 |         finally:
267 |             print('Pickle protocol compatibility check')
268 | 
269 |     # visualize ground truth numpy array as a table with csv format
270 |     def gt_to_csv(self):
271 |         try:
272 |             pd.DataFrame(np.load(
273 |                 'peek/gt.npy',
274 |                 allow_pickle = True
275 |             )).to_csv(
276 |                 'peek/gt.csv',
277 |                 index = False,
278 |                 header = False
279 |             )
280 |         except:
281 |             raise FileNotFoundError
282 |         finally:
283 |             print('Ground truth table saved')
284 | 
285 |     # subthread to be handled by main thread
286 |     # move images to appropriate directories based on sub-dataframe given
287 |     def img_io_thread(self, df: pd.DataFrame, ddir: str):
288 |         for row in df.itertuples():
289 |             path1 = row.Path
290 |             path2 = os.path.join(ddir, str(row.Index).zfill(7) + '.jpg')
291 |             print('{} -> {}'.format(path1, path2))
292 |             image = cv.imread(path1)
293 |             # cv.imread returns None instead of raising when the file is missing
294 |             if image is None:
295 |                 raise FileNotFoundError(path1)
296 |             cv.imwrite(path2, image)
--------------------------------------------------------------------------------
/event_dataset_generation/main.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import glob
3 | import os
4 | import argparse
5 | import traceback
6 | from aggregator import Aggregator
7 | from synchrotron import Synchrotron
8 | from generator import Generator
9 | 
10 | def main():
11 |     # load dataframe
12 |     csv_path = os.path.join(args.rdir, 'sensor_data_*/GPS_data_*.csv')
13 |     try:
14 |         file_path = glob.glob(csv_path)
15 |         print('GPS file count: {}'.format(len(file_path)))
16 |         df = pd.concat(map(pd.read_csv, file_path), ignore_index = True)
17 |         print('Total entry count: {}'.format(df.shape[0]))
18 |         df.to_csv('peek/dataframe.csv')
19 |     except:
20 |         traceback.print_exc()
21 |     # run synchrotron
22 |     if not args.synched:
23 |         synchrotron = Synchrotron(args.edir, df)
24 |         synchrotron.iterate_frames(args.framerate)
25 |     # run aggregator
26 |     aggregator = Aggregator(args.edir, df)
27 |     aggregator.aggregate()
28 |     df2 = aggregator.get_dataframe()
29 |     # run generator
30 |     generator = Generator(df2, args.dir)
31 |     generator.filter(args.sobel)
32 |     if args.reduce < 1.0:
33 |         generator.reduce(args.reduce)
34 |     generator.sample(args.sample)
35 |     generator.move_imgs()
36 |     generator.compute_ground_truth(args.tolerance)
37 |     generator.save_gt()
38 | 
39 | if __name__ == '__main__':
40 |     # define command line arguments
41 |     parser = argparse.ArgumentParser()
42 |     parser.add_argument(
43 |         '--rdir',
44 |         required = False,
45 |         type = str,
46 |         default = '/home/taiyi/scratch/data',
47 |         help = 'define input raw data directory (raw sensor readings)'
48 |     )
49 |     parser.add_argument(
50 |         '--dir',
51 |         required = False,
52 |         type = str,
53 |         default = '/home/taiyi/scratch/NYU-EventVPR-Event',
54 |         help = 'define output formatted data directory (compatible with VPR-Bench)'
55 |     )
56 |     parser.add_argument(
57 |         '--edir',
58 |         required = False,
59 |         type = str,
60 |         default = '/home/taiyi/scratch/event_rendered/30fps',
61 |         help = 'define input event frame directory'
62 |     )
63 |     parser.add_argument(
64 |         '--tolerance',
65 |         required = False,
66 |         type = float,
67 |         default = 25,
68 |         help = 'define fault tolerance in meters, above which threshold is defined as a non-match'
69 |     )
70 |     parser.add_argument(
71 |         '--sobel',
72 |         required = False,
73 |         type = float,
74 |         default = 100,
75 |         help = 'define Sobel threshold (mean Sobel magnitude) below which an image is considered blank and thus discarded'
76 |     )
77 |     parser.add_argument(
78 |         '--reduce',
79 |         required = False,
80 |         type = float,
81 |         default = 1.0,
82 |         help = 'define reduce ratio (0 to 1) to drop excess images from dataframe'
83 |     )
84 |     parser.add_argument(
85 |         '--sample',
86 |         required = False,
87 |         type = float,
88 |         default = 0.1,
89 |         help = 'define query sample ratio (0 to 1) to randomly sample query images from ref; ref then subtracts out sampled query images'
90 |     )
91 |     parser.add_argument(
92 |         '--framerate',
93 |         required = False,
94 |         type = float,
95 |         default = 30,
96 |         help = 'specify the framerate at which the source event video is rendered; MUST MATCH EXACTLY!'
97 |     )
98 |     parser.add_argument(
99 |         '--synched',
100 |         required = False,
101 |         type = int,
102 |         default = 0,
103 |         help = 'indicate if event frames are already synched. 0 is no. 1 is yes.'
104 |     )
105 |     # parse command line arguments
106 |     args = parser.parse_args()
107 |     print(args)
108 |     assert args.rdir is not None
109 |     assert args.dir is not None
110 |     assert args.edir is not None
111 |     assert args.tolerance > 0
112 |     assert args.sobel >= 0
113 |     assert args.reduce > 0 and args.reduce <= 1
114 |     assert args.sample > 0 and args.sample <= 1
115 |     assert args.framerate > 0
116 |     assert args.synched == 0 or args.synched == 1
117 |     main()
--------------------------------------------------------------------------------
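Each VPR-Bench folder pairs `query.zip`/`ref.zip` with a `ground_truth_new.npy` written by `Generator.pickle_compatible` using pickle protocol 2: row i holds `[query_index, [matching reference indices]]`. A minimal sketch for inspecting one (the folder name is an assumption):

```python
import pickle

# ground_truth_new.npy is a protocol-2 pickle, not a regular .npy file
with open('event_25m_0sobel_1fps/ground_truth_new.npy', 'rb') as handle:
    gt = pickle.load(handle)

print(gt.shape)             # (num_queries, 2)
query_idx, matches = gt[0]  # reference frames within the fault tolerance
print(query_idx, matches[:10])
```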
/event_dataset_generation/make_dataset.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # define hyperparams
4 | fault_tolerance=15
5 | sobel_threshold=0
6 | sample_fps=1
7 | type="event" # choices: event, reconstructed, rgb
8 | light="null" # choices: day, night, null
9 | 
10 | #############################################################################################################################################
11 | # define paths
12 | raw=/home/taiyi/scratch2/NYU-EventVPR
13 | if [ $type = "event" ]; then
14 |     event=/home/taiyi/scratch/event_rendered/30fps
15 |     output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/event_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps
16 |     files="/mnt/scratch2/NYU-EventVPR_rendered_30fps/data_*/event_*.avi"
17 |     if [ $light = "day" ]; then
18 |         readarray -t a < paths/event_day.txt
19 |         files="${a[@]}"
20 |         output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/event_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps_day
21 |     elif [ $light = "night" ]; then
22 |         readarray -t a < paths/event_night.txt
23 |         files="${a[@]}"
24 |         output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/event_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps_night
25 |     fi
26 | elif [ $type = "reconstructed" ]; then
27 |     event=/home/taiyi/scratch/event_reconstructed/30fps
28 |     output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/reconstructed_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps
29 |     files="/mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_*/event_*.mp4"
30 |     if [ $light = "day" ]; then
31 |         readarray -t a < paths/reconstructed_day.txt
32 |         files="${a[@]}"
33 |         output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/reconstructed_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps_day
34 |     elif [ $light = "night" ]; then
35 |         readarray -t a < paths/reconstructed_night.txt
36 |         files="${a[@]}"
37 |         output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/reconstructed_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps_night
38 |     fi
39 | elif [ $type = "rgb" ]; then
40 |     event=/home/taiyi/scratch/rgb_concatenated/30fps
41 |     output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/rgb_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps
42 |     files="/mnt/scratch2/NYU-EventVPR_rgb_30fps/data_*/event_*.mp4"
43 |     if [ $light = "day" ]; then
44 |         readarray -t a < paths/rgb_day.txt
45 |         files="${a[@]}"
46 |         output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/rgb_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps_day
47 |     elif [ $light = "night" ]; then
48 |         readarray -t a < paths/rgb_night.txt
49 |         files="${a[@]}"
50 |         output=/home/taiyi/scratch/NYU-EventVPR_VPR-Bench/rgb_"$fault_tolerance"m_"$sobel_threshold"sobel_"$sample_fps"fps_night
51 |     fi
52 | fi
53 | ##############################################################################################################################################
54 | 
55 | # print hyperparams
56 | echo $fault_tolerance
57 | echo $sobel_threshold
58 | echo $sample_fps
59 | echo $type
60 | echo $light
61 | 
62 | # print variables
63 | echo $raw
64 | echo $event
65 | echo $output
66 | echo $files
67 | 
68 | # purge previous output directory
69 | rm -r $event
70 | rm -r $output
71 | mkdir $event
72 | echo "Purged previous dataset directories"
73 | 
74 | ########################################################
75 | # ffmpeg frame generation
76 | # calculate sampling rate for 30fps source videos
77 | framerate=$(($sample_fps * 1))
78 | 
79 | # iterate over all video files
80 | for file in $files
81 | do
82 |     # extract filename and remove extension
83 |     filename=$(basename -- "$file")
84 |     filename=${filename%.*}
85 |     echo "Processing $file"
86 | 
87 |     # create output frame directory for targeted event video
88 |     mkdir $event/$filename/
89 |     echo "Output directory $event/$filename"
90 | 
91 |     # sample event video into frames
92 |     ffmpeg -hwaccel cuda -i $file -vf fps=$framerate $event/$filename/%07d.jpg
93 | done
94 | ########################################################
95 | 
96 | # process and create event dataset
97 | python /home/taiyi/scratch/event_dataset_generation/main.py \
98 |     --rdir $raw \
99 |     --dir $output \
100 |     --edir $event \
101 |     --tolerance $fault_tolerance \
102 |     --sobel $sobel_threshold \
103 |     --reduce 1.0 \
104 |     --sample 0.1 \
105 |     --framerate $sample_fps \
106 |     --synched 0
--------------------------------------------------------------------------------
/event_dataset_generation/paths/event_day.txt:
--------------------------------------------------------------------------------
1 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-07_15-46-36/event_2022-12-07_15-46-36.avi
2 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-07_16-52-58/event_2022-12-07_16-52-58.avi
3 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-09_13-59-13/event_2022-12-09_13-59-13.avi
4 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-09_14-41-32/event_2022-12-09_14-41-32.avi
5 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-06-33.avi
6 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-16-33.avi
7 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-26-33.avi
8 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-36-33.avi
9 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-46-33.avi
10 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-56-33.avi
11 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-06-33.avi
12 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-16-33.avi
13 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-26-33.avi
14 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-36-33.avi
15 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_15-53-29.avi
16 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-03-29.avi
17 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-13-29.avi
18 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-23-29.avi
19 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-33-29.avi
20 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-43-29.avi
21 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-53-29.avi
22 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_15-53-29/event_2023-04-20_17-03-29.avi
23 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-10-04.avi
24 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-20-04.avi
25 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-30-04.avi
26 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-40-04.avi
27 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-50-04.avi
28 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-00-04.avi
29 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-10-04.avi
30 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-20-04.avi
31 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-30-04.avi
32 | 
--------------------------------------------------------------------------------
/event_dataset_generation/paths/event_night.txt:
--------------------------------------------------------------------------------
1 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-06_18-27-24/event_2022-12-06_18-27-24.avi
2 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-06_19-28-02/event_2022-12-06_19-28-02.avi
3 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-06_20-45-57/event_2022-12-06_20-45-57.avi
4 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-07_17-58-38/event_2022-12-07_17-58-38.avi
5 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-09_18-56-16/event_2022-12-09_18-56-16.avi
6 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-09_19-42-10/event_2022-12-09_19-42-10.avi
7 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2022-12-20_16-54-14/event_2022-12-20_16-54-14.avi
8 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-20-44.avi
9 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-30-44.avi
10 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-40-44.avi
11 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-50-44.avi
12 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-00-44.avi
13 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-10-44.avi
14 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-20-44.avi
15 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-30-44.avi
16 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-40-44.avi
17 | /mnt/scratch2/NYU-EventVPR_rendered_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-50-44.avi
18 | 
--------------------------------------------------------------------------------
/event_dataset_generation/paths/reconstructed_day.txt:
--------------------------------------------------------------------------------
1 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-07_15-46-36/event_2022-12-07_15-46-36.mp4
2 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-07_16-52-58/event_2022-12-07_16-52-58.mp4
3 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-09_13-59-13/event_2022-12-09_13-59-13.mp4
4 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-09_14-41-32/event_2022-12-09_14-41-32.mp4
5 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-06-33.mp4
6 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-16-33.mp4
7 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-26-33.mp4
8 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-36-33.mp4
9 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-46-33.mp4
10 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_15-56-33.mp4
11 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-06-33.mp4
12 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-16-33.mp4
13 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-26-33.mp4
14 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_15-06-33/event_2023-02-14_16-36-33.mp4
15 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_15-53-29.mp4
16 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-03-29.mp4
17 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-13-29.mp4
18 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-23-29.mp4
19 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-33-29.mp4
20 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-43-29.mp4
21 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_16-53-29.mp4
22 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_15-53-29/event_2023-04-20_17-03-29.mp4
23 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-10-04.mp4
24 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-20-04.mp4
25 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-30-04.mp4
26 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-40-04.mp4
27 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_17-50-04.mp4
28 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-00-04.mp4
29 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-10-04.mp4
30 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-20-04.mp4
31 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-04-20_17-10-04/event_2023-04-20_18-30-04.mp4
32 | 
--------------------------------------------------------------------------------
/event_dataset_generation/paths/reconstructed_night.txt:
--------------------------------------------------------------------------------
1 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-06_18-27-24/event_2022-12-06_18-27-24.mp4
2 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-06_19-28-02/event_2022-12-06_19-28-02.mp4
3 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-06_20-45-57/event_2022-12-06_20-45-57.mp4
4 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-07_17-58-38/event_2022-12-07_17-58-38.mp4
5 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-09_18-56-16/event_2022-12-09_18-56-16.mp4
6 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-09_19-42-10/event_2022-12-09_19-42-10.mp4
7 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2022-12-20_16-54-14/event_2022-12-20_16-54-14.mp4
8 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-20-44.mp4
9 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-30-44.mp4
10 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-40-44.mp4
11 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_18-50-44.mp4
12 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-00-44.mp4
13 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-10-44.mp4
14 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-20-44.mp4
15 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-30-44.mp4
16 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-40-44.mp4
17 | /mnt/scratch2/NYU-EventVPR_reconstructed_30fps/data_2023-02-14_18-20-44/event_2023-02-14_19-50-44.mp4
18 | 
--------------------------------------------------------------------------------
/event_dataset_generation/synchrotron.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import glob
3 | from time import time
4 | import concurrent.futures
5 | from concurrent.futures import ProcessPoolExecutor, wait
6 | import os
7 | from pathlib import Path
8 | from datetime import datetime
9 | from datetime import timedelta
10 | import cv2 as cv
11 | import random
12 | 
13 | class Synchrotron:
14 |     # class constructor
15 |     def __init__(self, dir: str, df: pd.DataFrame):
16 |         self.dir = dir
17 |         self.df = df
18 |         assert dir is not None, 'Event directory void'
19 |         assert df is not None, 'Dataframe void'
20 |         print('Synchrotron object')
21 | 
22 |     # class string representation
23 |     def __str__(self) -> str:
24 |         return self.__class__.__name__  # must return a str (returning None raises TypeError)
25 | 
26 |     # class methods ################################################################################################
27 |     # iterate over event frames, synch timestamps
28 |     def iterate_frames(self, framerate = 30):
29 |         start = time()
30 | 
31 |         folder_path = glob.glob(os.path.join(self.dir, 'event_*'))
32 |         print('{} event data directories found'.format(len(folder_path)))
33 |         # initialize executor
34 |         with ProcessPoolExecutor() as executor:
35 |             print('Started Process Pool Executor...')
36 |             futures = list()
37 |             # iterate through all event frame folders
38 |             for folder in folder_path:
39 |                 futures.append(
40 |                     executor.submit(
41 |                         self.match_timestamp,
42 |                         folder,
43 |                         self.df,
44 |                         framerate
45 |                     )
46 |                 )
47 |             # block until all futures finish
48 |             wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
49 |             print('All concurrent futures have completed')
50 |         print('Executor has shutdown')
51 | 
52 |         end = time()
53 |         print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
54 |         print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
55 |         print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0))
56 | 
57 |     # helper functions ##################################################################################################
58 |     # match image to timestamp, then move image
59 |     def match_timestamp(self, folder: str, df: pd.DataFrame, framerate: float):
60 |         # calculate folder time at index 0 point
61 |         t0 = datetime.strptime(
62 |             '_'.join(Path(folder).stem.split('_')[1:]),
63 |             '%Y-%m-%d_%H-%M-%S'
64 |         )
65 |         # iterate over all images in folder
66 |         for img in os.listdir(folder):
67 |             diff_min = timedelta.max
68 |             timestamp_min = None
69 |             # calculate image timestamp based on folder time, image index, and sampling frame rate
70 |             img_time = t0 + timedelta(
71 |                 seconds = (int(img.split('.')[0]) - 1) / framerate # images are 1-indexed, hence -1
72 |             )
73 |             # iterate over dataframe for timestamp matches
74 |             for row in df.itertuples():
75 |                 # extract row timestamp
76 |                 timestamp = datetime.strptime(
77 |                     row.Timestamp,
78 |                     '%Y-%m-%d_%H-%M-%S_%f'
79 |                 )
80 |                 # calculate time difference between image and row, record minimum time difference
81 |                 diff = abs(timestamp - img_time)
82 |                 if diff < diff_min:
83 |                     timestamp_min = row.Timestamp
84 |                     diff_min = diff
85 |             # display matched event frame to timestamp
86 |             print('Event frame: {} <--> Matched timestamp: {}'.format(os.path.join(Path(folder).stem, img), timestamp_min))
87 | 
88 |             # use match to move image
89 |             img_path = os.path.join(folder, img)
90 |             image = cv.imread(img_path)
91 |             # cv.imread returns None instead of raising when the file is missing
92 |             if image is None:
93 |                 raise FileNotFoundError(img_path)
94 |             os.remove(img_path)
95 |             cv.imwrite(os.path.join(folder, 'frame_' + timestamp_min + '_' + '%032x' % random.getrandbits(128) + '.jpg'), image)
--------------------------------------------------------------------------------
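Synchronization hinges on recovering a wall-clock time for every rendered frame: the folder name encodes the recording start, frame files are 1-indexed, and the render frame rate gives the offset. A condensed sketch of that mapping (folder and frame names are hypothetical):

```python
from datetime import datetime, timedelta
from pathlib import Path

folder = 'event_2022-12-06_18-27-24'  # hypothetical render folder
frame = '0000091.jpg'                 # hypothetical 1-indexed frame file
framerate = 30.0

# Recording start time, parsed from the folder name.
t0 = datetime.strptime('_'.join(Path(folder).stem.split('_')[1:]),
                       '%Y-%m-%d_%H-%M-%S')
# Frame 91 at 30 fps is 3 s after the start.
img_time = t0 + timedelta(seconds=(int(frame.split('.')[0]) - 1) / framerate)
print(img_time)  # 2022-12-06 18:27:27
```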
/event_dataset_generation_vg/aggregator.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import glob
3 | from time import time
4 | import concurrent.futures
5 | from concurrent.futures import ProcessPoolExecutor, wait
6 | import os
7 | 
8 | class Aggregator:
9 |     # class constructor
10 |     def __init__(self, dir: str, df: pd.DataFrame):
11 |         self.dir = dir
12 |         self.df = df
13 |         self.df2 = None
14 |         self.columns = ['Latitude', 'Longitude', 'Timestamp', 'Hash', 'Path', 'Heading']
15 |         assert dir is not None, 'Event directory void'
16 |         assert df is not None, 'Dataframe void'
17 |         print('Aggregator object')
18 | 
19 |     # class string representation
20 |     def __str__(self) -> str:
21 |         return self.__class__.__name__  # must return a str (returning None raises TypeError)
22 | 
23 |     # getter methods ###############################################################################################
24 |     def get_dataframe(self) -> pd.DataFrame:
25 |         return self.df2
26 | 
27 |     # class methods ################################################################################################
28 |     # iterate over input event frames (various folders, various fps),
29 |     # correlate with original dataframe, filter out static frames (vehicle not moving)
30 |     def aggregate(self):
31 |         start = time()
32 |         #########################################################################################
33 |         self.df2 = pd.DataFrame(columns=self.columns)
34 |         self.iterate_folders()
35 |         #########################################################################################
36 |         end = time()
37 |         print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
38 |         print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
39 |         print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0))
40 | 
41 |     # helper functions ##################################################################################################
42 |     # use concurrency to iterate over all folders containing event frames (previously synchronized)
43 |     def iterate_folders(self):
44 |         with ProcessPoolExecutor() as executor:
45 |             print('Started Process Pool Executor...')
46 |             futures = list()
47 |             folder_path = glob.glob(os.path.join(self.dir, 'event_*'))
48 |             print('{} event data directories found'.format(len(folder_path)))
49 |             # iterate over event folders, iterate frames: first pass
50 |             for folder in folder_path:
51 |                 futures.append(
52 |                     executor.submit(
53 |                         self.iterate_frames,
54 |                         folder
55 |                     )
56 |                 )
57 |             # block until all futures finish
58 |             wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
59 |             print('All concurrent futures have completed')
60 |             # merge concurrent futures
61 |             for future in futures:
62 |                 self.df2 = pd.concat([
63 |                     self.df2,
64 |                     future.result()
65 |                 ], ignore_index = True
66 |                 )
67 |             print('Concurrent futures merged')
68 |         print('Executor has shutdown')
69 |         self.df2.to_csv('peek/df_aggregated.csv')
70 | 
71 |     # concurrent thread: iterate over all frames in folder, correlate with original dataframe, return new subframe
72 |     def iterate_frames(self, folder: str) -> pd.DataFrame:
73 |         print(folder)
74 |         df2 = pd.DataFrame(columns=self.columns)
75 |         for img in os.listdir(folder):
76 |             print(img)
77 |             # extract timestamp and hash string
78 |             tokens = img.split('_')
79 |             timestamp = '_'.join(tokens[1:4])
80 |             hash = tokens[4].split('.')[0]
81 |             # locate corresponding rows in original dataframe
82 |             row = self.df[self.df['Timestamp'] == timestamp]
83 |             # open image
84 |             img_path = os.path.join(folder, img)
85 |             # add row data to new dataframe
86 |             df2 = pd.concat([
87 |                 df2,
88 |                 pd.DataFrame({
89 |                     'Latitude': row['Latitude'],
90 |                     'Longitude': row['Longitude'],
91 |                     'Timestamp': timestamp,
92 |                     'Hash': hash,
93 |                     'Path': img_path,
94 |                     'Heading': row['HeadMotion']
95 |                 }
96 |                 )],
97 |                 ignore_index = True
98 |             )
99 |         return df2
--------------------------------------------------------------------------------
/event_dataset_generation_vg/generator.py:
--------------------------------------------------------------------------------
1 | import utm
2 | import cv2 as cv
3 | from time import time
4 | import pandas as pd
5 | import os
6 | 
7 | class Generator:
8 |     def __init__(self, dir: str, df: pd.DataFrame):
9 |         self.dir = dir
10 |         self.df = df
11 |         self.train = None
12 |         self.val = None
13 |         self.test = None
14 |         self.train_dir = None
15 |         self.val_dir = None
16 |         self.test_dir = None
17 |         print('Generator created')
18 | 
19 |     def generate(self):
20 |         start = time()
21 |         #########################################################################################
22 |         self.create_folders()
23 |         self.slice()
24 |         self.format()
25 |         #########################################################################################
26 |         end = time()
27 |         print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
28 |         print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
29 |         print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0))
30 | 
31 |     # helper functions ##################################################################################################
32 |     def create_folders(self):
33 |         if not os.path.exists(os.path.join(self.dir, 'images')):
34 |             os.mkdir(os.path.join(self.dir, 'images'))
35 |         self.dir = os.path.join(self.dir, 'images')
36 |         print('New dir: {}'.format(self.dir))
37 |         self.train_dir = os.path.join(self.dir, 'train')
38 |         self.val_dir = os.path.join(self.dir, 'val')
39 |         self.test_dir = os.path.join(self.dir, 'test')
40 |         if not os.path.exists(self.train_dir):
41 |             os.mkdir(self.train_dir)
42 |         self.create_subfolders(self.train_dir)
43 |         print('New train dir: {}'.format(self.train_dir))
44 |         if not os.path.exists(self.val_dir):
45 |             os.mkdir(self.val_dir)
46 |         self.create_subfolders(self.val_dir)
47 |         print('New val dir: {}'.format(self.val_dir))
48 |         if not os.path.exists(self.test_dir):
49 |             os.mkdir(self.test_dir)
50 |         self.create_subfolders(self.test_dir)
51 |         print('New test dir: {}'.format(self.test_dir))
52 | 
53 |     def create_subfolders(self, dir):
54 |         if not os.path.exists(os.path.join(dir, 'database')):
55 |             os.mkdir(os.path.join(dir, 'database'))
56 |         if not os.path.exists(os.path.join(dir, 'queries')):
57 |             os.mkdir(os.path.join(dir, 'queries'))
58 | 
59 |     def slice(self):
60 |         # sample train dataset
61 |         self.train = self.df.sample(frac=0.4).reset_index()
62 |         self.df = self.df.drop(self.train['index'])
63 |         self.train.to_csv('peek/train.csv')
64 | 
65 |         # sample val dataset
66 |         self.val = self.df.sample(frac=0.5).reset_index()
67 |         self.df = self.df.drop(self.val['index'])
68 |         self.val.to_csv('peek/val.csv')
69 | 
70 |         # sample test dataset
71 |         self.test = self.df.sample(frac=1.0).reset_index()
72 |         self.test.to_csv('peek/test.csv')
73 | 
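#     note on the split above: the chained sampling yields a 40/30/30
#     train/val/test split overall -- frac=0.4 keeps 40% of all frames for
#     train, frac=0.5 of the remaining 60% assigns 30% to val, and the final
#     frac=1.0 merely shuffles the leftover 30% into test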
queries
97 | self.test_query = self.test.sample(frac=0.1).reset_index()
98 | self.test_query.to_csv('peek/test_query.csv')
99 | self.move_imgs(self.test_query, os.path.join(self.test_dir, 'queries'))
100 | # database
101 | self.test_database = self.test.drop(self.test_query['level_0']).reset_index()
102 | self.test_database.to_csv('peek/test_database.csv')
103 | self.move_imgs(self.test_database, os.path.join(self.test_dir, 'database'))
104 |
105 | def move_imgs(self, df: pd.DataFrame, dir: str):
106 | for r in df.itertuples():
107 | # calculate utm coordinates
108 | coordinates = utm.from_latlon(r.Latitude, r.Longitude)
109 | # format new image path
110 | path = '@' + '@'.join([
111 | str(coordinates[0]),
112 | str(coordinates[1]),
113 | str(coordinates[2]),
114 | str(coordinates[3]),
115 | str(r.Latitude),
116 | str(r.Longitude),
117 | str(),
118 | str(),
119 | str(r.Heading),
120 | str(),
121 | str(),
122 | str(),
123 | str(r.Timestamp),
124 | str(r.Hash),
125 | '.jpg'
126 | ])
127 | path = os.path.join(dir, path)
128 | print('{} -> {}'.format(r.Path, path))
129 | # move image
130 | # cv.imread returns None (rather than raising) when a file is missing
131 | image = cv.imread(r.Path)
132 | if image is None:
133 | raise FileNotFoundError(r.Path)
134 | cv.imwrite(path, image)
135 |
-------------------------------------------------------------------------------- /event_dataset_generation_vg/main.py: --------------------------------------------------------------------------------
1 | import pandas as pd
2 | import glob
3 | import os
4 | import argparse
5 | import traceback
6 | from aggregator import Aggregator
7 | from generator import Generator
8 |
9 | def main():
10 | # load dataframe
11 | csv_path = os.path.join(args.rdir, 'sensor_data_*/GPS_data_*.csv')
12 | try:
13 | file_path = glob.glob(csv_path)
14 | print('GPS file count: {}'.format(len(file_path)))
15 | df = pd.concat(map(pd.read_csv, file_path), ignore_index = True)
16 | print('Total entry count: {}'.format(df.shape[0]))
17 | df.to_csv('peek/dataframe.csv')
18 | except:
19 | traceback.print_exc()
20 | return # df is undefined past this point; abort instead of crashing below
21 | # run aggregator
22 | aggregator = Aggregator(args.edir, df)
23 | aggregator.aggregate()
24 | df2 = aggregator.get_dataframe()
25 | # run generator
26 | generator = Generator(args.dir, df2)
27 | generator.generate()
28 |
29 | if __name__ == '__main__':
30 | # define command line arguments
31 | parser = argparse.ArgumentParser()
32 | parser.add_argument(
33 | '--rdir',
34 | required = False,
35 | type = str,
36 | default = '/home/taiyi/scratch/data',
37 | help = 'define input raw data directory (raw sensor readings)'
38 | )
39 | parser.add_argument(
40 | '--dir',
41 | required = False,
42 | type = str,
43 | default = '/home/taiyi/scratch/NYC-Event-VPR_VG/NYC-Event-VPR_Event',
44 | help = 'define output formatted data directory (compatible with VG framework)'
45 | )
46 | parser.add_argument(
47 | '--edir',
48 | required = False,
49 | type = str,
50 | default = '/home/taiyi/scratch/event_rendered/30fps',
51 | help = 'define input event frame directory'
52 | )
53 | # parse command line arguments
54 | args = parser.parse_args()
55 | print(args)
56 | assert args.rdir is not None
57 | assert args.dir is not None
58 | assert args.edir is not None
59 | main() -------------------------------------------------------------------------------- /event_dataset_generation_vg/make_dataset.sh: --------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # define hyperparams
4 | type="event" # choices: event, reconstructed, rgb
5 |
6 | 
############################################################################################################################################# 7 | # define paths 8 | raw=/home/taiyi/scratch2/NYU-EventVPR 9 | if [ $type = "event" ]; then 10 | event=/home/taiyi/scratch/event_rendered/30fps 11 | output=/home/taiyi/scratch/NYC-Event-VPR_VG/NYC-Event-VPR_Event 12 | elif [ $type = "reconstructed" ]; then 13 | event=/home/taiyi/scratch/event_reconstructed/30fps 14 | output=/home/taiyi/scratch/NYC-Event-VPR_VG/NYC-Event-VPR_E2VID 15 | elif [ $type = "rgb" ]; then 16 | event=/home/taiyi/scratch/rgb_concatenated/30fps 17 | output=/home/taiyi/scratch/NYC-Event-VPR_VG/NYC-Event-VPR_RGB 18 | fi 19 | ############################################################################################################################################## 20 | 21 | # purge previous output directory 22 | rm -r $output 23 | mkdir $output 24 | echo "Purged previous dataset directories" 25 | 26 | # process and create event dataset 27 | python /home/taiyi/scratch/event_dataset_generation_vg/main.py \ 28 | --rdir $raw \ 29 | --dir $output \ 30 | --edir $event -------------------------------------------------------------------------------- /event_rgb_gps_sensors/gps_tracker.py: -------------------------------------------------------------------------------- 1 | from ublox_gps import UbloxGps 2 | from datetime import datetime 3 | import serial 4 | import traceback 5 | import glob 6 | 7 | def find_port(): 8 | ports = glob.glob('/dev/ttyACM[0-9]*') 9 | res = list() 10 | for port in ports: 11 | try: 12 | s = serial.Serial(port) 13 | s.close() 14 | res.append(port) 15 | except: 16 | print(traceback.format_exc()) 17 | return res[0] 18 | 19 | def gps_object(): 20 | port = serial.Serial( 21 | find_port(), 22 | baudrate = 38400, 23 | timeout = 1 24 | ) 25 | return UbloxGps(port), port 26 | 27 | def main(): 28 | gps, port = gps_object() 29 | try: 30 | f = open('GPS_data_{}.csv'.format(datetime.now().strftime( 31 | '%Y-%m-%d_%H-%M-%S' 32 | )), 'w') 33 | f.write('Longitude,Latitude,HeadMotion,Timestamp\n') 34 | while True: 35 | geo = gps.geo_coords() 36 | f.write('{},{},{},{}\n'.format( 37 | geo.lon, 38 | geo.lat, 39 | geo.headMot, 40 | datetime.now() 41 | )) 42 | except: 43 | print(traceback.format_exc()) 44 | finally: 45 | port.close() 46 | f.close() 47 | 48 | if __name__ == '__main__': 49 | main() 50 | -------------------------------------------------------------------------------- /event_rgb_gps_sensors/locate_port.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | v4l2-ctl --list-devices 3 | -------------------------------------------------------------------------------- /event_rgb_gps_sensors/main.py: -------------------------------------------------------------------------------- 1 | from metavision_sdk_core import PeriodicFrameGenerationAlgorithm 2 | from metavision_sdk_ui import EventLoop, BaseWindow, Window, UIAction, UIKeyEvent 3 | from metavision_core.event_io import EventsIterator 4 | from metavision_core.event_io.raw_reader import initiate_device 5 | import concurrent.futures 6 | from concurrent.futures import ProcessPoolExecutor, wait 7 | from multiprocessing import cpu_count 8 | from datetime import datetime 9 | from datetime import timedelta 10 | import cv2 as cv 11 | import traceback 12 | import numpy as np 13 | from ublox_gps import UbloxGps 14 | import serial 15 | import traceback 16 | import glob 17 | import sys 18 | import os 19 | import math 20 | 21 | # 
global variables: BE CAREFUL! COULD PERMANENTLY DAMAGE SENSOR!
22 | bias_fo = -35
23 | bias_hpf = 30
24 | bias_diff_off = 40
25 | bias_diff_on = 40
26 | bias_refr = 0
27 | band_min, band_max = 50, 500
28 | trail_threshold = 100000
29 | erc_max = 10000000
30 |
31 | # define program behavior
32 | event_time_limit = 10 # minutes; cap on each raw event file's duration, to keep file sizes manageable
33 |
34 | # global directory path
35 | folder_path = 'sensor_data_{}'.format(datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
36 |
37 | # search for camera port
38 | def find_port():
39 | ports = glob.glob('/dev/ttyACM[0-9]*')
40 | res = list()
41 | for port in ports:
42 | try:
43 | s = serial.Serial(port)
44 | s.close()
45 | res.append(port)
46 | except:
47 | pass
48 | return res[0] # assumes at least one ACM port responds; an IndexError here means no GPS is attached
49 |
50 | # create GPS object
51 | def gps_object():
52 | port = serial.Serial(
53 | find_port(),
54 | baudrate = 38400,
55 | timeout = 1
56 | )
57 | return UbloxGps(port), port
58 |
59 | # check runtime against start_time, return updated hour value
60 | def check_runtime(start_time, hours_prev):
61 | hours_elapsed = (datetime.now() - start_time).total_seconds() / 60.0**2
62 | if math.floor(hours_elapsed) > hours_prev:
63 | print('{} hours elapsed'.format(math.floor(hours_elapsed)))
64 | return math.floor(hours_elapsed) # keep the hour counter integral so the comparison above stays exact
65 | else:
66 | return hours_prev
67 |
68 | def event_block(t0):
69 | t1 = datetime.now()
70 | if t1 > t0 + timedelta(minutes = event_time_limit):
71 | return t1, True
72 | return t0, False
73 |
74 | # dedicated CPU thread for RGB and GPS data processing
75 | def rgb_gps_thread(port):
76 | print('RGB GPS thread has begun')
77 | # define variables
78 | cam_port = port
79 | width = 1280
80 | height = 720
81 | hours_prev = 0
82 | start_time = datetime.now()
83 |
84 | # open camera object
85 | cam = cv.VideoCapture(cam_port)
86 | cam.set(cv.CAP_PROP_FRAME_WIDTH, width)
87 | cam.set(cv.CAP_PROP_FRAME_HEIGHT, height)
88 |
89 | # graceful fail
90 | if not cam.isOpened():
91 | print('Cannot open camera...')
92 | exit()
93 |
94 | # open gps object
95 | try:
96 | gps, port = gps_object()
97 | except:
98 | print(traceback.format_exc())
99 | return # GPS is required below; abort this worker instead of crashing on an undefined gps object
100 | try:
101 | # obtain start time string
102 | start_str = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
103 | # open gps csv file
104 | f = open('{}/GPS_data_{}.csv'.format(folder_path, start_str), 'w')
105 | f.write('Longitude,Latitude,HeadMotion,Timestamp\n')
106 | # create image directory
107 | if not os.path.exists('{}/img_{}'.format(folder_path, start_str)):
108 | os.mkdir('{}/img_{}'.format(folder_path, start_str))
109 | # main control loop
110 | while True:
111 | # check runtime
112 | hours_prev = check_runtime(start_time, hours_prev)
113 | # obtain frame
114 | ret, frame = cam.read()
115 | if not ret:
116 | print('Failed to obtain frame')
117 | break
118 | # obtain gps coordinates
119 | geo = gps.geo_coords()
120 | # obtain current time string
121 | time_str = datetime.now().strftime('%Y-%m-%d_%H-%M-%S_%f')
122 | # save frame
123 | cv.imwrite('{}/img_{}/frame_{}.jpg'.format(folder_path, start_str, time_str), frame)
124 | # save coordinates
125 | f.write('{},{},{},{}\n'.format(
126 | geo.lon,
127 | geo.lat,
128 | geo.headMot,
129 | time_str
130 | ))
131 | # display dummy frame
132 | dummy = np.zeros((200, 400))
133 | cv.imshow('ELP RGB Camera', dummy)
134 | # escape protocol
135 | k = cv.waitKey(1)
136 | if k%256 == 27: # ESC pressed
137 | print('Escaping RGB window...')
138 | break
139 | except:
140 | print(traceback.format_exc())
141 | finally:
142 | f.close()
143 | 
cv.destroyAllWindows()
144 | cam.release()
145 | port.close()
146 | print('RGB GPS thread has ended')
147 |
148 | # dedicated CPU thread for event data processing
149 | def event_thread():
150 | print('Event thread has begun')
151 | # create HAL device
152 | device = initiate_device(path = '')
153 |
154 | # tune bias within safe ranges
155 | bias = device.get_i_ll_biases()
156 | assert bias_fo >= -35 and bias_fo <= 55, 'bias_fo safe range: (-35, 55)'
157 | bias.set('bias_fo', bias_fo)
158 | assert bias_hpf >= 0 and bias_hpf <= 120, 'bias_hpf safe range: (0, 120)'
159 | bias.set('bias_hpf', bias_hpf)
160 | assert bias_diff_off >= -35 and bias_diff_off <= 190, 'bias_diff_off safe range: (-35, 190)'
161 | bias.set('bias_diff_off', bias_diff_off)
162 | assert bias_diff_on >= -85 and bias_diff_on <= 140, 'bias_diff_on safe range: (-85, 140)'
163 | bias.set('bias_diff_on', bias_diff_on)
164 | assert bias_refr >= -20 and bias_refr <= 235, 'bias_refr safe range: (-20, 235)'
165 | bias.set('bias_refr', bias_refr)
166 | print('IMX636ES sensor bias:', bias.get_all_biases())
167 |
168 | # AFK: anti-flicker
169 | anti_flicker = device.get_i_antiflicker_module()
170 | anti_flicker.enable()
171 | anti_flicker.set_frequency_band(band_min, band_max, True) # Hz
172 |
173 | # STC/Trail: event trail filter
174 | noise_filter = device.get_i_noisefilter_module()
175 | noise_filter.enable_trail(trail_threshold) # microseconds; 100,000 us = 100 ms
176 |
177 | # ERC: event rate controller
178 | erc = device.get_i_erc()
179 | erc.enable(True)
180 | assert erc_max > 0, 'erc_max must be positive'
181 | erc.set_cd_event_rate(erc_max)
182 | print('Event rate controller status:', erc.is_enabled())
183 | print('Event rate limit (Ev/s):', erc.get_cd_event_rate())
184 |
185 | # create event stream object
186 | stream = device.get_i_events_stream()
187 |
188 | # create iterator
189 | iterator = EventsIterator.from_device(device = device, delta_t=1e3)
190 | height, width = iterator.get_size() # Camera Geometry
191 |
192 | # Window - Graphical User Interface
193 | with Window(title="Prophesee EVK4 HD Sony IMX636ES Event Camera", width=width, height=height, mode=BaseWindow.RenderMode.BGR) as window:
194 | # define keyboard callback
195 | def keyboard_cb(key, scancode, action, mods):
196 | if action != UIAction.RELEASE:
197 | return
198 | if key == UIKeyEvent.KEY_ESCAPE or key == UIKeyEvent.KEY_Q:
199 | window.set_close_flag()
200 |
201 | window.set_keyboard_callback(keyboard_cb)
202 |
203 | # Event Frame Generator
204 | event_frame_gen = PeriodicFrameGenerationAlgorithm(width, height, fps = 30.0)
205 | display_str = '' # initialized here so the frame callback below never reads an unassigned name
206 | # define on frame callback
207 | def on_cd_frame_cb(ts, cd_frame):
208 | nonlocal display_str
209 | cv.putText(cd_frame, display_str, (0, 10), cv.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0))
210 | window.show(cd_frame)
211 |
212 | event_frame_gen.set_output_callback(on_cd_frame_cb)
213 |
214 | # obtain start time string
215 | start_str = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
216 | # create event directory
217 | if not os.path.exists('{}/data_{}'.format(folder_path, start_str)):
218 | os.mkdir('{}/data_{}'.format(folder_path, start_str))
219 | # begin event log
220 | t0 = datetime.now()
221 | stream.log_raw_data('{}/data_{}/event_{}.raw'.format(folder_path, start_str, t0.strftime('%Y-%m-%d_%H-%M-%S')))
222 |
223 | # Process event batches
224 | for evs in iterator:
225 | # time-block check; roll the log over to a new file once the time limit is reached
226 | t0, over = event_block(t0)
227 | if over:
228 | stream.stop_log_raw_data()
229 | 
stream.log_raw_data('{}/data_{}/event_{}.raw'.format(folder_path, start_str, t0.strftime('%Y-%m-%d_%H-%M-%S'))) 230 | print('{} minute event block saved at {}'.format(event_time_limit, t0)) 231 | # calculate event rate 232 | display_str = "Rate : {:.2f}Mev/s".format(evs.size * 1e-3) 233 | # Dispatch system events to the window 234 | EventLoop.poll_and_dispatch() 235 | event_frame_gen.process_events(evs) 236 | # callback flag 237 | if window.should_close(): 238 | print('Escaping event window...') 239 | break 240 | 241 | # stop event log 242 | stream.stop_log_raw_data() 243 | print('Event thread has ended') 244 | 245 | def main(argv): 246 | try: 247 | int(argv[1]) 248 | except: 249 | print('Please specify RGB camera port') 250 | exit() 251 | 252 | # create directory to save data for this run 253 | print('Creating data directory {}'.format(folder_path)) 254 | if not os.path.exists(folder_path): 255 | os.mkdir(folder_path) 256 | 257 | # define multiprocessing program executor 258 | print('CPU core count: {}'.format(cpu_count())) 259 | executor = ProcessPoolExecutor(cpu_count() - 1) 260 | print('Started Process Pool Executor...') 261 | 262 | # pass functions to executor to parallel run 263 | futures = list() 264 | futures.append(executor.submit(event_thread)) 265 | futures.append(executor.submit(rgb_gps_thread, int(argv[1]))) 266 | done, not_done = wait(futures, return_when = concurrent.futures.ALL_COMPLETED) 267 | print('All concurrent futures have completed') 268 | 269 | # shutdown executor 270 | executor.shutdown() 271 | print('Executor has shutdown') 272 | 273 | if __name__ == '__main__': 274 | main(sys.argv) 275 | -------------------------------------------------------------------------------- /event_rgb_gps_sensors/rgb_stream.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import cv2 as cv 3 | import time 4 | from datetime import datetime 5 | import sys 6 | 7 | def main(argv): 8 | # define variables 9 | cam_port = int(argv[1]) 10 | width = 1280 11 | height = 720 12 | 13 | # open camera object 14 | cam = cv.VideoCapture(cam_port) 15 | cam.set(cv.CAP_PROP_FRAME_WIDTH, width) 16 | cam.set(cv.CAP_PROP_FRAME_HEIGHT, height) 17 | prev_frame_time, new_frame_time = time.time(), time.time() 18 | 19 | # graceful fail 20 | if not cam.isOpened(): 21 | print('Cannot open camera...') 22 | exit() 23 | 24 | # main control loop 25 | while True: 26 | # obtain frame 27 | ret, frame = cam.read() 28 | if not ret: 29 | print('Failed to obtain frame') 30 | break 31 | 32 | # calculate fps 33 | new_frame_time = time.time() 34 | fps = 1 / (new_frame_time - prev_frame_time) 35 | prev_frame_time = new_frame_time 36 | # add annotations 37 | cv.putText(frame, 'Cam {} fps: {}'.format(cam_port, str(int(fps))), (30, 30), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2) 38 | cv.putText(frame, 'Time: {}'.format(datetime.now()), (30, 50), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2) 39 | 40 | # display frame 41 | cv.imshow('frame', frame) 42 | 43 | # escape protocol 44 | k = cv.waitKey(1) 45 | if k%256 == 27: # ESC pressed 46 | print('Escaping...') 47 | break 48 | 49 | # cleanup 50 | cam.release() 51 | cv.destroyAllWindows() 52 | 53 | if __name__ == '__main__': 54 | main(sys.argv) 55 | -------------------------------------------------------------------------------- /event_rgb_gps_sensors/stream_event.py: -------------------------------------------------------------------------------- 1 | from metavision_sdk_core import PeriodicFrameGenerationAlgorithm 2 | from 
metavision_sdk_ui import EventLoop, BaseWindow, Window, UIAction, UIKeyEvent 3 | from metavision_core.event_io import EventsIterator 4 | from metavision_core.event_io.raw_reader import initiate_device 5 | from datetime import datetime 6 | import cv2 as cv 7 | import traceback 8 | 9 | def parse_args(): 10 | import argparse 11 | """Parse command line arguments.""" 12 | parser = argparse.ArgumentParser(description='Metavision SDK Get Started sample.', 13 | formatter_class=argparse.ArgumentDefaultsHelpFormatter) 14 | parser.add_argument( 15 | '-i', '--input-raw-file', dest='input_path', default="", 16 | help="Path to input RAW file. If not specified, the live stream of the first available camera is used. " 17 | "If it's a camera serial number, it will try to open that camera instead.") 18 | args = parser.parse_args() 19 | return args 20 | 21 | 22 | def main(): 23 | """ Main """ 24 | args = parse_args() 25 | 26 | # create HAL device 27 | device = initiate_device(path = args.input_path) 28 | 29 | # tune bias 30 | device.get_i_ll_biases().set('bias_fo', -35) 31 | device.get_i_ll_biases().set('bias_hpf', 30) 32 | device.get_i_ll_biases().set('bias_diff_off', 40) 33 | device.get_i_ll_biases().set('bias_diff_on', 40) 34 | print(device.get_i_ll_biases().get_all_biases()) 35 | 36 | # AFK: anti-flicker 37 | device.get_i_antiflicker_module().enable() 38 | device.get_i_antiflicker_module().set_frequency_band(50, 500, True) 39 | 40 | # STC/Trail: event trail filter 41 | device.get_i_noisefilter_module().enable_trail(100000) 42 | 43 | # ERC: event rate controller 44 | device.get_i_erc().enable(True) 45 | device.get_i_erc().set_cd_event_rate(10000000) 46 | # print(device.get_i_erc().is_enabled()) 47 | # print(device.get_i_erc().get_cd_event_rate()) 48 | 49 | # create iterator 50 | iterator = EventsIterator.from_device(device = device, delta_t=1e3) 51 | height, width = iterator.get_size() # Camera Geometry 52 | 53 | # Window - Graphical User Interface 54 | with Window(title="Metavision SDK Get Started", width=width, height=height, mode=BaseWindow.RenderMode.BGR) as window: 55 | # define callback 56 | def keyboard_cb(key, scancode, action, mods): 57 | if action != UIAction.RELEASE: 58 | return 59 | if key == UIKeyEvent.KEY_ESCAPE or key == UIKeyEvent.KEY_Q: 60 | window.set_close_flag() 61 | 62 | window.set_keyboard_callback(keyboard_cb) 63 | 64 | # Event Frame Generator 65 | event_frame_gen = PeriodicFrameGenerationAlgorithm(width, height, fps = 30.0) 66 | 67 | def on_cd_frame_cb(ts, cd_frame): 68 | nonlocal display_str 69 | # cv.putText(cd_frame, "Timestamp: " + str(ts), (0, 10), cv.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0)) 70 | cv.putText(cd_frame, display_str, (0, 10), cv.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0)) 71 | window.show(cd_frame) 72 | 73 | event_frame_gen.set_output_callback(on_cd_frame_cb) 74 | 75 | # Process events 76 | for evs in iterator: 77 | display_str = "Rate : {:.2f}Mev/s".format(evs.size * 1e-3) 78 | 79 | # Dispatch system events to the window 80 | EventLoop.poll_and_dispatch() 81 | 82 | event_frame_gen.process_events(evs) 83 | 84 | if window.should_close(): 85 | break 86 | 87 | if __name__ == "__main__": 88 | main() 89 | -------------------------------------------------------------------------------- /event_rgb_gps_sensors/transfer.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | scp -r sensor_data_* taiyi@hub1:/home/taiyi/repository2/data/. 
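A note on the rate readout in stream_event.py above: the iterator slices the stream into `delta_t = 1e3` microsecond (1 ms) batches, so `evs.size * 1e-3` is already the rate in Mev/s. A minimal sanity check of that arithmetic, with a hypothetical batch size:
```python
# Why "Rate : {:.2f}Mev/s" can be computed as evs.size * 1e-3:
# a batch of N events spanning 1 ms means N * 1000 ev/s, i.e. N / 1000 Mev/s.
def rate_mev_per_s(n_events: int, delta_t_us: float = 1e3) -> float:
    """Event rate in Mev/s for n_events spanning delta_t_us microseconds."""
    ev_per_s = n_events / (delta_t_us * 1e-6)  # microseconds -> seconds
    return ev_per_s / 1e6

# hypothetical batch: 25,000 events in 1 ms -> "Rate : 25.00Mev/s"
print('Rate : {:.2f}Mev/s'.format(rate_mev_per_s(25_000)))
```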
3 | -------------------------------------------------------------------------------- /event_to_video/demo_event_to_video.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Prophesee S.A. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 6 | # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed 7 | # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 8 | # See the License for the specific language governing permissions and limitations under the License. 9 | """ 10 | E2V DEMO Script 11 | 12 | Copyright: (c) 2021 Prophesee 13 | """ 14 | import numpy as np 15 | import argparse 16 | import torch 17 | import torch.nn.functional as F 18 | from metavision_sdk_base import EventCD 19 | from lightning_model import EventToVideoLightningModel 20 | from metavision_core_ml.preprocessing.event_to_tensor_torch import event_cd_to_torch, event_volume 21 | from metavision_core_ml.utils.torch_ops import normalize_tiles, viz_flow 22 | from metavision_core_ml.utils.show_or_write import ShowWrite 23 | from metavision_core.event_io.events_iterator import EventsIterator 24 | from metavision_core.event_io.adaptive_rate_events_iterator import AdaptiveRateEventsIterator 25 | 26 | 27 | def parse_args(argv=None): 28 | parser = argparse.ArgumentParser() 29 | parser.add_argument('path', type=str, default='', help='path of events') 30 | parser.add_argument('checkpoint', type=str, default='', help='checkpoint to evaluate') 31 | parser.add_argument('--start_ts', type=int, default=0, help='start timestamp') 32 | parser.add_argument('--mode', type=str, default='mixed', 33 | choices=['n_events', 'delta_t', 'mixed', 'adaptive'], help='how to cut events') 34 | 35 | parser.add_argument('--n_events', type=int, default=30000, help='accumulate by N events') 36 | parser.add_argument('--delta_t', type=int, default=30000, help='accumulate by delta_t') 37 | parser.add_argument('--video_path', type=str, default='', help='path to video') 38 | parser.add_argument('--height_width', nargs=2, default=None, type=int, help='resolution') 39 | parser.add_argument('--max_duration', type=int, default=-1, help='run for this duration') 40 | parser.add_argument('--thr_var', type=float, default=3e-5, help='threshold variance for adaptive rate') 41 | parser.add_argument('--cpu', action='store_true', help='if true use cpu and not cuda') 42 | parser.add_argument('--flow', action='store_true', help='if true predict also optical flow') 43 | parser.add_argument('--viz_input', action='store_true', help='if true viz input') 44 | parser.add_argument('--no_window', action='store_true', help='disable window') 45 | 46 | params, _ = parser.parse_known_args(argv) 47 | return params 48 | 49 | 50 | def run(params): 51 | print('params: ', params) 52 | 53 | if params.mode == 'adaptive': 54 | mv_it = AdaptiveRateEventsIterator(params.path, thr_var_per_event=params.thr_var) 55 | else: 56 | mv_it = EventsIterator(params.path, start_ts=params.start_ts, mode=params.mode, 57 | n_events=params.n_events, delta_t=params.delta_t) 58 | 59 | window_name = "e2v" 60 | if params.no_window: 61 | window_name = None 62 | show_write = ShowWrite(window_name, params.video_path) 63 | 64 | height, width = mv_it.get_size() 65 | print('original size: ', 
height, width) 66 | 67 | device = 'cpu' if params.cpu else 'cuda' 68 | model = EventToVideoLightningModel.load_from_checkpoint(params.checkpoint) 69 | model.eval().to(device) 70 | nbins = model.hparams.event_volume_depth 71 | print('Nbins: ', nbins) 72 | in_height, in_width = (height, width) if params.height_width is None else params.height_width 73 | print('height_width: ', params.height_width) 74 | 75 | pause = False 76 | mv_it = iter(mv_it) 77 | while True: 78 | 79 | try: 80 | events = next(mv_it) if not pause else np.array([], dtype=EventCD) 81 | except StopIteration: 82 | break 83 | 84 | if events.size > 0: 85 | first_ts = events["t"][0] 86 | if first_ts <= params.start_ts: 87 | continue 88 | last_ts = events["t"][-1] 89 | if params.max_duration > 0 and last_ts > params.start_ts + params.max_duration: 90 | break 91 | 92 | if not pause and not len(events): 93 | continue 94 | 95 | if not pause: 96 | events_th = event_cd_to_torch(events).to(device) 97 | start_times = torch.FloatTensor([events['t'][0]]).view(1,).to(device) 98 | durations = torch.FloatTensor([events['t'][-1] - events['t'][0]]).view(1,).to(device) 99 | tensor_th = event_volume(events_th, 1, height, width, start_times, durations, nbins, 'bilinear') 100 | tensor_th = F.interpolate(tensor_th, size=(in_height, in_width), 101 | mode='bilinear', align_corners=True) 102 | tensor_th = tensor_th * 0.1 103 | tensor_th = tensor_th.view(1, 1, nbins, in_height, in_width) 104 | else: 105 | tensor_th = torch.zeros((1, 1, nbins, in_height, in_width), dtype=torch.float32, device=device) 106 | 107 | state = model.model(tensor_th) 108 | gray = model.model.predict_gray(state).view(1, 1, in_height, in_width) 109 | gray = normalize_tiles(gray).view(in_height, in_width) 110 | gray_np = gray.detach().cpu().numpy() * 255 111 | gray_np = np.uint8(gray_np) 112 | gray_rgb = gray_np[..., None].repeat(3, 2) 113 | 114 | if params.flow: 115 | flow = model.model.predict_flow(state) 116 | flow_rgb = viz_flow(flow.squeeze(0)) 117 | flow_rgb = flow_rgb.squeeze(0).permute(1, 2, 0).cpu().numpy() 118 | cat = np.concatenate([flow_rgb, gray_rgb], axis=1) 119 | else: 120 | cat = gray_rgb 121 | 122 | if params.viz_input: 123 | x = tensor_th.mean(dim=2) 124 | x = 255 * normalize_tiles(x, num_dims=3, num_stds=9).view(in_height, in_width) 125 | x = x.byte().cpu().numpy() 126 | x = x[..., None].repeat(3, 2) 127 | cat = np.concatenate((x, cat), axis=1) 128 | 129 | key = show_write(cat) 130 | if key == 27: 131 | break 132 | if key == ord('p'): 133 | pause = not pause 134 | 135 | 136 | def main(): 137 | params = parse_args() 138 | run(params) 139 | 140 | 141 | if __name__ == '__main__': 142 | with torch.no_grad(): 143 | main() 144 | -------------------------------------------------------------------------------- /event_to_video/lightning_model.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Prophesee S.A. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 6 | # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed 7 | # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 8 | # See the License for the specific language governing permissions and limitations under the License. 
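For reference, demo_event_to_video.py above can also be driven programmatically instead of from a shell script, which is how prophesee_e2v/main.py later in this repository batches reconstructions. A minimal sketch with placeholder input/output paths; only flags defined in `parse_args()` above are used:
```python
# Hypothetical invocation of the E2V demo (the RAW path is a placeholder).
import subprocess

subprocess.run([
    'python', 'demo_event_to_video.py',
    '/path/to/event_recording.raw',   # input RAW file (placeholder)
    'e2v.ckpt',                       # checkpoint shipped under prophesee_e2v/
    '--mode', 'delta_t',
    '--delta_t', '40000',             # 40 ms accumulation window, as in main.sh
    '--height_width', '720', '1280',
    '--video_path', 'output.mp4',
    '--no_window',                    # headless: write the video without a GUI
], check=True)
```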
9 | """ 10 | Pytorch Lightning module 11 | """ 12 | import torch 13 | import torch.nn as nn 14 | import torch.nn.functional as F 15 | import lightning.pytorch as pl 16 | 17 | import os 18 | import argparse 19 | import numpy as np 20 | import cv2 21 | 22 | from types import SimpleNamespace 23 | from torchvision.utils import make_grid 24 | from tqdm import tqdm 25 | from itertools import islice 26 | 27 | from metavision_core_ml.core.temporal_modules import time_to_batch, seq_wise 28 | from metavision_core_ml.event_to_video.event_to_video import EventToVideo 29 | from metavision_core_ml.utils.torch_ops import normalize_tiles 30 | from metavision_core_ml.utils.show_or_write import ShowWrite 31 | from metavision_core_ml.utils.torch_ops import cuda_tick, viz_flow 32 | 33 | from metavision_core_ml.losses.perceptual_loss import VGGPerceptualLoss 34 | from metavision_core_ml.losses.warp import ssl_flow_l1 35 | from kornia.losses import ssim_loss 36 | 37 | 38 | class EventToVideoCallback(pl.callbacks.Callback): 39 | """ 40 | callbacks to our model 41 | """ 42 | 43 | def __init__(self, data_module, video_result_every_n_epochs=2, show_window=False): 44 | super().__init__() 45 | self.data_module = data_module 46 | self.video_every = int(video_result_every_n_epochs) 47 | self.show_window = show_window 48 | 49 | def on_epoch_end(self, trainer, pl_module): 50 | if trainer.current_epoch and not (trainer.current_epoch % self.video_every): 51 | pl_module.demo_video(self.data_module.val_dataloader(), trainer.current_epoch, show_video=self.show_window) 52 | 53 | 54 | class EventToVideoLightningModel(pl.LightningModule): 55 | """ 56 | EventToVideo: Train your EventToVideo 57 | """ 58 | 59 | def __init__(self, hparams: argparse.Namespace) -> None: 60 | super().__init__() 61 | self.save_hyperparameters(hparams, logger=False) 62 | self.model = EventToVideo( 63 | self.hparams.cin, 64 | self.hparams.cout, 65 | self.hparams.num_layers, 66 | self.hparams.base, 67 | self.hparams.cell, 68 | self.hparams.separable, 69 | self.hparams.separable_hidden, 70 | self.hparams.archi) 71 | self.vgg_perc_l1 = VGGPerceptualLoss() 72 | 73 | @classmethod 74 | def load_from_checkpoint(cls, checkpoint_path): 75 | checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu')) 76 | hparams = argparse.Namespace(**checkpoint['hyper_parameters']) 77 | model = cls(hparams) 78 | model.load_state_dict(checkpoint['state_dict']) 79 | return model 80 | 81 | def forward(self, x): 82 | state = self.model(x) 83 | return self.model.predict_gray(state) 84 | 85 | def compute_loss(self, x, y, reset_mask): 86 | self.model.reset(reset_mask) 87 | 88 | target = (y / 255.0) 89 | state = self.model(x) 90 | pred = self.model.predict_gray(state) 91 | 92 | bw_flow = self.model.predict_flow(state) 93 | 94 | pred_flat = time_to_batch(pred)[0].float() 95 | target_flat = time_to_batch(target)[0].float() 96 | loss_dict = {} 97 | loss_dict['ssim'] = ssim_loss(pred_flat, target_flat, 5) 98 | loss_dict['smooth_l1'] = F.smooth_l1_loss(pred_flat, target_flat, beta=0.11) 99 | 100 | loss_dict['vgg_perc_l1'] = self.vgg_perc_l1(pred_flat, target_flat) * 0.5 101 | 102 | # loss_dict['target_flow_l1'] = ssl_flow_l1(target, bw_flow) 103 | loss_dict['pred_flow_l1'] = ssl_flow_l1(pred, bw_flow) 104 | for k, v in loss_dict.items(): 105 | assert v >= 0, k 106 | return loss_dict 107 | 108 | def training_step(self, batch, batch_nb): 109 | batch = SimpleNamespace(**batch) 110 | loss_dict = self.compute_loss(batch.inputs, batch.images, batch.reset) 111 | 112 | loss = sum([v 
for k, v in loss_dict.items()]) 113 | 114 | assert loss.item() >= 0 115 | logs = {'loss': loss} 116 | logs.update({'train_' + k: v.item() for k, v in loss_dict.items()}) 117 | 118 | self.log('train_loss', loss) 119 | for k, v in loss_dict.items(): 120 | self.log('train_' + k, v) 121 | 122 | return loss 123 | 124 | def validation_step(self, batch, batch_nb): 125 | batch = SimpleNamespace(**batch) 126 | loss_dict = self.compute_loss(batch.inputs, batch.images, batch.reset) 127 | loss = sum([v for k, v in loss_dict.items()]) 128 | assert loss.item() >= 0 129 | 130 | logs = {'val_loss': loss} 131 | logs.update({'val_' + k: v.item() for k, v in loss_dict.items()}) 132 | 133 | self.log('val_loss', loss) 134 | for k, v in loss_dict.items(): 135 | self.log('val_' + k, v) 136 | return loss 137 | 138 | def validation_epoch_end(self, outputs): 139 | val_loss_avg = torch.FloatTensor([item for item in outputs]).mean() 140 | self.log('val_acc', val_loss_avg) 141 | return val_loss_avg 142 | 143 | def configure_optimizers(self): 144 | return torch.optim.Adam(self.parameters(), lr=self.hparams.lr) 145 | 146 | def demo_video(self, dataloader, epoch=0, show_video=True): 147 | print('Demo') 148 | height, width = self.hparams.height, self.hparams.width 149 | batch_size = self.hparams.batch_size 150 | nrows = 2 ** ((batch_size.bit_length() - 1) // 2) 151 | ncols = int(np.ceil(self.hparams.batch_size / nrows)) 152 | 153 | self.model.eval() 154 | 155 | video_name = os.path.join(self.hparams.root_dir, 'videos', f'video#{epoch:d}.mp4') 156 | dir = os.path.dirname(video_name) 157 | if not os.path.isdir(dir): 158 | os.mkdir(dir) 159 | window_name = None 160 | if show_video: 161 | window_name = "test_epoch {:d}".format(epoch) 162 | show_write = ShowWrite(window_name, video_name) 163 | 164 | with torch.no_grad(): 165 | for batch in tqdm(islice(dataloader, self.hparams.demo_iter), total=self.hparams.demo_iter): 166 | 167 | batch = SimpleNamespace(**batch) 168 | batch.inputs = batch.inputs.to(self.device) 169 | batch.reset = batch.reset.to(self.device) 170 | 171 | x = batch.inputs 172 | x = x[:, :, :3] 173 | x = 255 * normalize_tiles(x) 174 | y = batch.images 175 | 176 | self.model.reset(batch.reset) 177 | 178 | s = self.model(batch.inputs) 179 | o = self.model.predict_gray(s) 180 | o = normalize_tiles(o, num_stds=6) 181 | o = 255 * o 182 | 183 | if self.hparams.plot_flow: 184 | f = self.model.predict_flow(s) 185 | f = seq_wise(viz_flow)(f) 186 | for t in range(len(x)): 187 | gx = make_grid(x[t], nrow=nrows, padding=0).permute(1, 2, 0).cpu().numpy().astype(np.uint8) 188 | gy = make_grid(y[t], nrow=nrows, padding=0).permute(1, 2, 0).cpu().numpy().astype(np.uint8) 189 | go = make_grid(o[t], nrow=nrows, padding=0).permute(1, 2, 0).data.cpu().numpy().astype(np.uint8) 190 | 191 | if self.hparams.plot_flow: 192 | gf = make_grid( 193 | f[t], 194 | nrow=nrows, padding=0).permute( 195 | 1, 2, 0).data.cpu().numpy().astype( 196 | np.uint8) 197 | cat = np.concatenate([gx, gy, gf, go], axis=1) 198 | else: 199 | cat = np.concatenate([gx, gy, go], axis=1) 200 | show_write(cat) 201 | -------------------------------------------------------------------------------- /event_to_video/main.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # batch 1 3 | fnames=("event_2023-02-14_15-06-33" \ 4 | "event_2023-02-14_15-16-33" \ 5 | "event_2023-02-14_15-26-33" \ 6 | "event_2023-02-14_15-36-33" \ 7 | "event_2023-02-14_15-46-33" \ 8 | "event_2023-02-14_15-56-33" \ 9 | 
"event_2023-02-14_16-06-33" \ 10 | "event_2023-02-14_16-16-33" \ 11 | "event_2023-02-14_16-26-33" \ 12 | "event_2023-02-14_16-36-33") 13 | 14 | input_path=/home/taiyi/repository2/data/sensor_data_2023-02-14_15-06-30/data_2023-02-14_15-06-33 15 | output_path=/home/taiyi/repository2/data_processed/reconstruct/data_2023-02-14_15-06-33 16 | 17 | e2v () { 18 | for fname in ${fnames[@]}; do 19 | echo $fname; 20 | python3 demo_event_to_video.py \ 21 | $input_path/$fname.raw \ 22 | e2v.ckpt \ 23 | --delta_t 40000 \ 24 | --mode delta_t \ 25 | --height_width 720 1280 \ 26 | --video_path $output_path/$fname.mp4 27 | done 28 | } 29 | e2v 30 | 31 | # batch 2 32 | fnames=("event_2023-02-14_18-20-44" \ 33 | "event_2023-02-14_18-30-44" \ 34 | "event_2023-02-14_18-40-44" \ 35 | "event_2023-02-14_18-50-44" \ 36 | "event_2023-02-14_19-00-44" \ 37 | "event_2023-02-14_19-10-44" \ 38 | "event_2023-02-14_19-20-44" \ 39 | "event_2023-02-14_19-30-44" \ 40 | "event_2023-02-14_19-40-44" \ 41 | "event_2023-02-14_19-50-44") 42 | 43 | input_path=/home/taiyi/repository2/data/sensor_data_2023-02-14_18-20-40/data_2023-02-14_18-20-44 44 | output_path=/home/taiyi/repository2/data_processed/reconstruct/data_2023-02-14_18-20-44 45 | 46 | e2v 47 | 48 | # batch 3 49 | fnames=("event_2023-04-20_15-53-29" \ 50 | "event_2023-04-20_16-03-29" \ 51 | "event_2023-04-20_16-13-29" \ 52 | "event_2023-04-20_16-23-29" \ 53 | "event_2023-04-20_16-33-29" \ 54 | "event_2023-04-20_16-43-29" \ 55 | "event_2023-04-20_16-53-29" \ 56 | "event_2023-04-20_17-03-29") 57 | 58 | input_path=/home/taiyi/repository2/data/sensor_data_2023-04-20_15-53-26/data_2023-04-20_15-53-29 59 | output_path=/home/taiyi/repository2/data_processed/reconstruct/data_2023-04-20_15-53-29 60 | 61 | e2v 62 | 63 | # batch 4 64 | fnames=("event_2023-04-20_17-10-04" \ 65 | "event_2023-04-20_17-20-04" \ 66 | "event_2023-04-20_17-30-04" \ 67 | "event_2023-04-20_17-40-04" \ 68 | "event_2023-04-20_17-50-04" \ 69 | "event_2023-04-20_18-00-04" \ 70 | "event_2023-04-20_18-10-04" \ 71 | "event_2023-04-20_18-20-04" \ 72 | "event_2023-04-20_18-30-04") 73 | 74 | input_path=/home/taiyi/repository2/data/sensor_data_2023-04-20_17-10-01/data_2023-04-20_17-10-04 75 | output_path=/home/taiyi/repository2/data_processed/reconstruct/data_2023-04-20_17-10-04 76 | 77 | e2v -------------------------------------------------------------------------------- /gps_data_plot/coverage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/gps_data_plot/coverage.png -------------------------------------------------------------------------------- /gps_data_plot/dataset_metrics.json: -------------------------------------------------------------------------------- 1 | { 2 | "total_time": { 3 | "hours": 13, 4 | "minutes": 30 5 | }, 6 | "total_distance": { 7 | "kilometers": 259.95, 8 | "miles_equivalent": 161.53 9 | }, 10 | "total_size": { 11 | "gigabytes": 466.7 12 | }, 13 | "sensors": [ 14 | "event_camera", 15 | "rgb_camera", 16 | "gps_module" 17 | ], 18 | "weather": [ 19 | "rainy", 20 | "cloudy", 21 | "sunny" 22 | ], 23 | "resolution": { 24 | "width": 1280, 25 | "height": 720 26 | }, 27 | "updated": "2023-04-21" 28 | } -------------------------------------------------------------------------------- /gps_data_plot/plot_all.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import matplotlib.pyplot as plt 3 | import descartes 4 | 
import geopandas as gpd 5 | from shapely.geometry import Point, Polygon 6 | import glob 7 | 8 | # aggregate all GPS data and plot into one graph 9 | 10 | file_path = glob.glob('/home/taiyi/repository/event_camera_master_project/data/sensor_data_*/GPS_data_*.csv') 11 | print('GPS file count: {}'.format(len(file_path))) 12 | 13 | df = pd.concat(map(pd.read_csv, file_path), ignore_index = True) 14 | 15 | print('Total entry count: {}'.format(df.shape[0])) 16 | 17 | 18 | geometry = [Point(x, y) for x, y in zip(df['Longitude'], df['Latitude'])] 19 | geo_df = gpd.GeoDataFrame( 20 | df, 21 | # crs = crs, 22 | geometry = geometry 23 | ) 24 | print(geo_df.head()) 25 | 26 | map_path = glob.glob('/home/taiyi/repository/event_camera_master_project/rtk_gps/shape_files/nyc/*.shp') 27 | print(map_path) 28 | map = gpd.read_file(map_path[0]) 29 | 30 | fig, ax = plt.subplots(figsize = (20, 20)) 31 | map.plot(ax = ax, alpha = 0.4, color = 'grey') 32 | geo_df.plot(ax = ax, markersize = 1, marker = '.', color = 'purple') 33 | plt.title('GPS Data Point Coverage over New York City') 34 | plt.xlabel('Longitude') 35 | plt.ylabel('Latitude') 36 | plt.show() 37 | -------------------------------------------------------------------------------- /gps_data_plot/plot_outbound_data.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import matplotlib.pyplot as plt 3 | import descartes 4 | import geopandas as gpd 5 | from shapely.geometry import Point, Polygon 6 | import glob 7 | 8 | # aggregate all GPS data and plot into one graph 9 | 10 | file_path = glob.glob('/home/taiyipan/repository/event_camera_master_project/outbound_data/sensor_data_*/GPS_data_*.csv') 11 | print('GPS file count: {}'.format(len(file_path))) 12 | 13 | df = pd.concat(map(pd.read_csv, file_path), ignore_index = True) 14 | 15 | print('Total entry count: {}'.format(df.shape[0])) 16 | 17 | 18 | geometry = [Point(x, y) for x, y in zip(df['Longitude'], df['Latitude'])] 19 | geo_df = gpd.GeoDataFrame( 20 | df, 21 | # crs = crs, 22 | geometry = geometry 23 | ) 24 | print(geo_df.head()) 25 | 26 | map_path = glob.glob('/home/taiyipan/repository/event_camera_master_project/rtk_gps/shape_files/nyc/*.shp') 27 | print(map_path) 28 | map = gpd.read_file(map_path[0]) 29 | 30 | fig, ax = plt.subplots(figsize = (20, 20)) 31 | map.plot(ax = ax, alpha = 0.4, color = 'grey') 32 | geo_df.plot(ax = ax, markersize = 1, marker = '.', color = 'red') 33 | plt.title("NYC GPS data from {}".format(file_path[0])) 34 | plt.xlabel('Longitude') 35 | plt.ylabel('Latitude') 36 | plt.show() 37 | -------------------------------------------------------------------------------- /img/front_page_showcase.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/front_page_showcase.jpg -------------------------------------------------------------------------------- /img/gps_plot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/gps_plot.png -------------------------------------------------------------------------------- /img/report.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/report.png 
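A possible refinement for the two plot scripts above: both build the GeoDataFrame with the `crs =` argument commented out. A minimal sketch that tags the points explicitly, assuming the raw u-blox fixes are WGS84 latitude/longitude (EPSG:4326); verify this against the NYC shapefile's CRS before overlaying:
```python
# Sketch: attach an explicit CRS to the GPS points before plotting.
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

df = pd.DataFrame({'Longitude': [-73.9857], 'Latitude': [40.7484]})  # sample fix
geo_df = gpd.GeoDataFrame(
    df,
    geometry = [Point(x, y) for x, y in zip(df['Longitude'], df['Latitude'])],
    crs = 'EPSG:4326',  # assumption: fixes are WGS84 lat/lon
)
# reproject on the fly if the basemap uses a different CRS:
# geo_df = geo_df.to_crs(basemap.crs)
```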
-------------------------------------------------------------------------------- /img/sample_images.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/sample_images.png -------------------------------------------------------------------------------- /img/sensor_setup.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/sensor_setup.jpg -------------------------------------------------------------------------------- /img/sensor_specs.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/sensor_specs.jpg -------------------------------------------------------------------------------- /img/total_gps_coverage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/img/total_gps_coverage.png -------------------------------------------------------------------------------- /prophesee_e2v/__pycache__/lightning_model.cpython-310.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/prophesee_e2v/__pycache__/lightning_model.cpython-310.pyc -------------------------------------------------------------------------------- /prophesee_e2v/demo_event_to_video.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Prophesee S.A. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 6 | # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed 7 | # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 8 | # See the License for the specific language governing permissions and limitations under the License. 
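The prophesee_e2v copy of the demo below differs from event_to_video/ mainly by a try/except "bypass quick fix" around the event-volume construction and a wall-clock timing wrapper. Independent of the demo loop, a minimal inference sketch using the custom `load_from_checkpoint` defined in lightning_model.py, assuming e2v.ckpt sits in the working directory and lightning_model.py is importable:
```python
# Minimal inference sketch (not from the repo): load the shipped checkpoint
# and reconstruct one frame from a dummy event volume.
import torch
from lightning_model import EventToVideoLightningModel

model = EventToVideoLightningModel.load_from_checkpoint('e2v.ckpt')  # CPU-mapped custom loader
model.eval()
nbins = model.hparams.event_volume_depth  # event-volume depth the network expects

with torch.no_grad():
    # (T, B, nbins, H, W) zero tensor, mirroring the demo's pause branch;
    # substitute a real event_volume(...) tensor for an actual reconstruction
    x = torch.zeros((1, 1, nbins, 720, 1280), dtype=torch.float32)
    state = model.model(x)
    gray = model.model.predict_gray(state).view(720, 1280)  # grayscale frame
```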
9 | """ 10 | E2V DEMO Script 11 | 12 | Copyright: (c) 2021 Prophesee 13 | """ 14 | import numpy as np 15 | import argparse 16 | import torch 17 | import torch.nn.functional as F 18 | from metavision_sdk_base import EventCD 19 | from lightning_model import EventToVideoLightningModel 20 | from metavision_core_ml.preprocessing.event_to_tensor_torch import event_cd_to_torch, event_volume 21 | from metavision_core_ml.utils.torch_ops import normalize_tiles, viz_flow 22 | from metavision_core_ml.utils.show_or_write import ShowWrite 23 | from metavision_core.event_io.events_iterator import EventsIterator 24 | from metavision_core.event_io.adaptive_rate_events_iterator import AdaptiveRateEventsIterator 25 | from time import time 26 | 27 | 28 | def parse_args(argv=None): 29 | parser = argparse.ArgumentParser() 30 | parser.add_argument('path', type=str, default='', help='path of events') 31 | parser.add_argument('checkpoint', type=str, default='', help='checkpoint to evaluate') 32 | parser.add_argument('--start_ts', type=int, default=0, help='start timestamp') 33 | parser.add_argument('--mode', type=str, default='mixed', 34 | choices=['n_events', 'delta_t', 'mixed', 'adaptive'], help='how to cut events') 35 | 36 | parser.add_argument('--n_events', type=int, default=30000, help='accumulate by N events') 37 | parser.add_argument('--delta_t', type=int, default=30000, help='accumulate by delta_t') 38 | parser.add_argument('--video_path', type=str, default='', help='path to video') 39 | parser.add_argument('--height_width', nargs=2, default=None, type=int, help='resolution') 40 | parser.add_argument('--max_duration', type=int, default=-1, help='run for this duration') 41 | parser.add_argument('--thr_var', type=float, default=3e-5, help='threshold variance for adaptive rate') 42 | parser.add_argument('--cpu', action='store_true', help='if true use cpu and not cuda') 43 | parser.add_argument('--flow', action='store_true', help='if true predict also optical flow') 44 | parser.add_argument('--viz_input', action='store_true', help='if true viz input') 45 | parser.add_argument('--no_window', action='store_true', help='disable window') 46 | 47 | params, _ = parser.parse_known_args(argv) 48 | return params 49 | 50 | 51 | def run(params): 52 | print('params: ', params) 53 | 54 | if params.mode == 'adaptive': 55 | mv_it = AdaptiveRateEventsIterator(params.path, thr_var_per_event=params.thr_var) 56 | else: 57 | mv_it = EventsIterator(params.path, start_ts=params.start_ts, mode=params.mode, 58 | n_events=params.n_events, delta_t=params.delta_t) 59 | 60 | window_name = "e2v" 61 | if params.no_window: 62 | window_name = None 63 | show_write = ShowWrite(window_name, params.video_path) 64 | 65 | height, width = mv_it.get_size() 66 | print('original size: ', height, width) 67 | 68 | device = 'cpu' if params.cpu else 'cuda' 69 | model = EventToVideoLightningModel.load_from_checkpoint(params.checkpoint) 70 | model.eval().to(device) 71 | nbins = model.hparams.event_volume_depth 72 | print('Nbins: ', nbins) 73 | in_height, in_width = (height, width) if params.height_width is None else params.height_width 74 | print('height_width: ', params.height_width) 75 | 76 | pause = False 77 | mv_it = iter(mv_it) 78 | while True: 79 | 80 | try: 81 | events = next(mv_it) if not pause else np.array([], dtype=EventCD) 82 | except StopIteration: 83 | break 84 | 85 | if events.size > 0: 86 | first_ts = events["t"][0] 87 | if first_ts <= params.start_ts: 88 | continue 89 | last_ts = events["t"][-1] 90 | if params.max_duration > 0 and last_ts > 
params.start_ts + params.max_duration: 91 | break 92 | 93 | if not pause and not len(events): 94 | continue 95 | 96 | # if not pause: 97 | # events_th = event_cd_to_torch(events).to(device) 98 | # start_times = torch.FloatTensor([events['t'][0]]).view(1,).to(device) 99 | # durations = torch.FloatTensor([events['t'][-1] - events['t'][0]]).view(1,).to(device) 100 | # tensor_th = event_volume(events_th, 1, height, width, start_times, durations, nbins, 'bilinear') 101 | # tensor_th = F.interpolate(tensor_th, size=(in_height, in_width), 102 | # mode='bilinear', align_corners=True) 103 | # tensor_th = tensor_th * 0.1 104 | # tensor_th = tensor_th.view(1, 1, nbins, in_height, in_width) 105 | # else: 106 | # tensor_th = torch.zeros((1, 1, nbins, in_height, in_width), dtype=torch.float32, device=device) 107 | 108 | # bypass quick fix 109 | if not pause: 110 | try: 111 | events_th = event_cd_to_torch(events).to(device) 112 | start_times = torch.FloatTensor([events['t'][0]]).view(1,).to(device) 113 | durations = torch.FloatTensor([events['t'][-1] - events['t'][0]]).view(1,).to(device) 114 | tensor_th = event_volume(events_th, 1, height, width, start_times, durations, nbins, 'bilinear') 115 | tensor_th = F.interpolate(tensor_th, size=(in_height, in_width), 116 | mode='bilinear', align_corners=True) 117 | tensor_th = tensor_th * 0.1 118 | tensor_th = tensor_th.view(1, 1, nbins, in_height, in_width) 119 | except: 120 | tensor_th = torch.zeros((1, 1, nbins, in_height, in_width), dtype=torch.float32, device=device) 121 | else: 122 | tensor_th = torch.zeros((1, 1, nbins, in_height, in_width), dtype=torch.float32, device=device) 123 | # bypass quick fix end 124 | 125 | state = model.model(tensor_th) 126 | gray = model.model.predict_gray(state).view(1, 1, in_height, in_width) 127 | gray = normalize_tiles(gray).view(in_height, in_width) 128 | gray_np = gray.detach().cpu().numpy() * 255 129 | gray_np = np.uint8(gray_np) 130 | gray_rgb = gray_np[..., None].repeat(3, 2) 131 | 132 | if params.flow: 133 | flow = model.model.predict_flow(state) 134 | flow_rgb = viz_flow(flow.squeeze(0)) 135 | flow_rgb = flow_rgb.squeeze(0).permute(1, 2, 0).cpu().numpy() 136 | cat = np.concatenate([flow_rgb, gray_rgb], axis=1) 137 | else: 138 | cat = gray_rgb 139 | 140 | if params.viz_input: 141 | x = tensor_th.mean(dim=2) 142 | x = 255 * normalize_tiles(x, num_dims=3, num_stds=9).view(in_height, in_width) 143 | x = x.byte().cpu().numpy() 144 | x = x[..., None].repeat(3, 2) 145 | cat = np.concatenate((x, cat), axis=1) 146 | 147 | key = show_write(cat) 148 | if key == 27: 149 | break 150 | if key == ord('p'): 151 | pause = not pause 152 | 153 | 154 | def main(): 155 | start = time() 156 | params = parse_args() 157 | run(params) 158 | end = time() 159 | print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0)) 160 | print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0)) 161 | print('Time elapsed: {:.6f} seconds'.format((end - start) / 1.0)) 162 | 163 | 164 | if __name__ == '__main__': 165 | with torch.no_grad(): 166 | main() 167 | -------------------------------------------------------------------------------- /prophesee_e2v/e2v.ckpt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ai4ce/NYC-Event-VPR/9b0114ba7938adbb80e004a37b362a4a72984f89/prophesee_e2v/e2v.ckpt -------------------------------------------------------------------------------- /prophesee_e2v/lightning_model.py: 
-------------------------------------------------------------------------------- 1 | # Copyright (c) Prophesee S.A. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 6 | # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed 7 | # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 8 | # See the License for the specific language governing permissions and limitations under the License. 9 | """ 10 | Pytorch Lightning module 11 | """ 12 | import torch 13 | import torch.nn as nn 14 | import torch.nn.functional as F 15 | import lightning.pytorch as pl 16 | 17 | import os 18 | import argparse 19 | import numpy as np 20 | import cv2 21 | 22 | from types import SimpleNamespace 23 | from torchvision.utils import make_grid 24 | from tqdm import tqdm 25 | from itertools import islice 26 | 27 | from metavision_core_ml.core.temporal_modules import time_to_batch, seq_wise 28 | from metavision_core_ml.event_to_video.event_to_video import EventToVideo 29 | from metavision_core_ml.utils.torch_ops import normalize_tiles 30 | from metavision_core_ml.utils.show_or_write import ShowWrite 31 | from metavision_core_ml.utils.torch_ops import cuda_tick, viz_flow 32 | 33 | from metavision_core_ml.losses.perceptual_loss import VGGPerceptualLoss 34 | from metavision_core_ml.losses.warp import ssl_flow_l1 35 | from kornia.losses import ssim_loss 36 | 37 | 38 | class EventToVideoCallback(pl.callbacks.Callback): 39 | """ 40 | callbacks to our model 41 | """ 42 | 43 | def __init__(self, data_module, video_result_every_n_epochs=2, show_window=False): 44 | super().__init__() 45 | self.data_module = data_module 46 | self.video_every = int(video_result_every_n_epochs) 47 | self.show_window = show_window 48 | 49 | def on_epoch_end(self, trainer, pl_module): 50 | if trainer.current_epoch and not (trainer.current_epoch % self.video_every): 51 | pl_module.demo_video(self.data_module.val_dataloader(), trainer.current_epoch, show_video=self.show_window) 52 | 53 | 54 | class EventToVideoLightningModel(pl.LightningModule): 55 | """ 56 | EventToVideo: Train your EventToVideo 57 | """ 58 | 59 | def __init__(self, hparams: argparse.Namespace) -> None: 60 | super().__init__() 61 | self.save_hyperparameters(hparams, logger=False) 62 | self.model = EventToVideo( 63 | self.hparams.cin, 64 | self.hparams.cout, 65 | self.hparams.num_layers, 66 | self.hparams.base, 67 | self.hparams.cell, 68 | self.hparams.separable, 69 | self.hparams.separable_hidden, 70 | self.hparams.archi) 71 | self.vgg_perc_l1 = VGGPerceptualLoss() 72 | 73 | @classmethod 74 | def load_from_checkpoint(cls, checkpoint_path): 75 | checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu')) 76 | hparams = argparse.Namespace(**checkpoint['hyper_parameters']) 77 | model = cls(hparams) 78 | model.load_state_dict(checkpoint['state_dict']) 79 | return model 80 | 81 | def forward(self, x): 82 | state = self.model(x) 83 | return self.model.predict_gray(state) 84 | 85 | def compute_loss(self, x, y, reset_mask): 86 | self.model.reset(reset_mask) 87 | 88 | target = (y / 255.0) 89 | state = self.model(x) 90 | pred = self.model.predict_gray(state) 91 | 92 | bw_flow = self.model.predict_flow(state) 93 | 94 | pred_flat = time_to_batch(pred)[0].float() 95 | target_flat = 
time_to_batch(target)[0].float() 96 | loss_dict = {} 97 | loss_dict['ssim'] = ssim_loss(pred_flat, target_flat, 5) 98 | loss_dict['smooth_l1'] = F.smooth_l1_loss(pred_flat, target_flat, beta=0.11) 99 | 100 | loss_dict['vgg_perc_l1'] = self.vgg_perc_l1(pred_flat, target_flat) * 0.5 101 | 102 | # loss_dict['target_flow_l1'] = ssl_flow_l1(target, bw_flow) 103 | loss_dict['pred_flow_l1'] = ssl_flow_l1(pred, bw_flow) 104 | for k, v in loss_dict.items(): 105 | assert v >= 0, k 106 | return loss_dict 107 | 108 | def training_step(self, batch, batch_nb): 109 | batch = SimpleNamespace(**batch) 110 | loss_dict = self.compute_loss(batch.inputs, batch.images, batch.reset) 111 | 112 | loss = sum([v for k, v in loss_dict.items()]) 113 | 114 | assert loss.item() >= 0 115 | logs = {'loss': loss} 116 | logs.update({'train_' + k: v.item() for k, v in loss_dict.items()}) 117 | 118 | self.log('train_loss', loss) 119 | for k, v in loss_dict.items(): 120 | self.log('train_' + k, v) 121 | 122 | return loss 123 | 124 | def validation_step(self, batch, batch_nb): 125 | batch = SimpleNamespace(**batch) 126 | loss_dict = self.compute_loss(batch.inputs, batch.images, batch.reset) 127 | loss = sum([v for k, v in loss_dict.items()]) 128 | assert loss.item() >= 0 129 | 130 | logs = {'val_loss': loss} 131 | logs.update({'val_' + k: v.item() for k, v in loss_dict.items()}) 132 | 133 | self.log('val_loss', loss) 134 | for k, v in loss_dict.items(): 135 | self.log('val_' + k, v) 136 | return loss 137 | 138 | def validation_epoch_end(self, outputs): 139 | val_loss_avg = torch.FloatTensor([item for item in outputs]).mean() 140 | self.log('val_acc', val_loss_avg) 141 | return val_loss_avg 142 | 143 | def configure_optimizers(self): 144 | return torch.optim.Adam(self.parameters(), lr=self.hparams.lr) 145 | 146 | def demo_video(self, dataloader, epoch=0, show_video=True): 147 | print('Demo') 148 | height, width = self.hparams.height, self.hparams.width 149 | batch_size = self.hparams.batch_size 150 | nrows = 2 ** ((batch_size.bit_length() - 1) // 2) 151 | ncols = int(np.ceil(self.hparams.batch_size / nrows)) 152 | 153 | self.model.eval() 154 | 155 | video_name = os.path.join(self.hparams.root_dir, 'videos', f'video#{epoch:d}.mp4') 156 | dir = os.path.dirname(video_name) 157 | if not os.path.isdir(dir): 158 | os.mkdir(dir) 159 | window_name = None 160 | if show_video: 161 | window_name = "test_epoch {:d}".format(epoch) 162 | show_write = ShowWrite(window_name, video_name) 163 | 164 | with torch.no_grad(): 165 | for batch in tqdm(islice(dataloader, self.hparams.demo_iter), total=self.hparams.demo_iter): 166 | 167 | batch = SimpleNamespace(**batch) 168 | batch.inputs = batch.inputs.to(self.device) 169 | batch.reset = batch.reset.to(self.device) 170 | 171 | x = batch.inputs 172 | x = x[:, :, :3] 173 | x = 255 * normalize_tiles(x) 174 | y = batch.images 175 | 176 | self.model.reset(batch.reset) 177 | 178 | s = self.model(batch.inputs) 179 | o = self.model.predict_gray(s) 180 | o = normalize_tiles(o, num_stds=6) 181 | o = 255 * o 182 | 183 | if self.hparams.plot_flow: 184 | f = self.model.predict_flow(s) 185 | f = seq_wise(viz_flow)(f) 186 | for t in range(len(x)): 187 | gx = make_grid(x[t], nrow=nrows, padding=0).permute(1, 2, 0).cpu().numpy().astype(np.uint8) 188 | gy = make_grid(y[t], nrow=nrows, padding=0).permute(1, 2, 0).cpu().numpy().astype(np.uint8) 189 | go = make_grid(o[t], nrow=nrows, padding=0).permute(1, 2, 0).data.cpu().numpy().astype(np.uint8) 190 | 191 | if self.hparams.plot_flow: 192 | gf = make_grid( 193 | f[t], 194 
108 |     def training_step(self, batch, batch_nb):
109 |         batch = SimpleNamespace(**batch)
110 |         loss_dict = self.compute_loss(batch.inputs, batch.images, batch.reset)
111 | 
112 |         loss = sum(loss_dict.values())
113 | 
114 |         assert loss.item() >= 0
115 |         # note: a local 'logs' dict is unused here; all logging goes through self.log below
116 |         # logs = {'loss': loss}; logs.update({'train_' + k: v.item() for k, v in loss_dict.items()})
117 | 
118 |         self.log('train_loss', loss)
119 |         for k, v in loss_dict.items():
120 |             self.log('train_' + k, v)
121 | 
122 |         return loss
123 | 
124 |     def validation_step(self, batch, batch_nb):
125 |         batch = SimpleNamespace(**batch)
126 |         loss_dict = self.compute_loss(batch.inputs, batch.images, batch.reset)
127 |         loss = sum(loss_dict.values())
128 |         assert loss.item() >= 0
129 | 
130 |         # note: a local 'logs' dict is unused here; all logging goes through self.log below
131 |         # logs = {'val_loss': loss}; logs.update({'val_' + k: v.item() for k, v in loss_dict.items()})
132 | 
133 |         self.log('val_loss', loss)
134 |         for k, v in loss_dict.items():
135 |             self.log('val_' + k, v)
136 |         return loss
137 | 
138 |     def validation_epoch_end(self, outputs):
139 |         val_loss_avg = torch.stack([item for item in outputs]).mean()
140 |         self.log('val_acc', val_loss_avg)  # note: despite the key name, this is the epoch-averaged validation loss
141 |         return val_loss_avg
142 | 
143 |     def configure_optimizers(self):
144 |         return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
145 | 
146 |     def demo_video(self, dataloader, epoch=0, show_video=True):
147 |         print('Demo')
148 |         height, width = self.hparams.height, self.hparams.width
149 |         batch_size = self.hparams.batch_size
150 |         nrows = 2 ** ((batch_size.bit_length() - 1) // 2)  # ~sqrt(batch_size), rounded down to a power of two
151 |         ncols = int(np.ceil(self.hparams.batch_size / nrows))
152 | 
153 |         self.model.eval()
154 | 
155 |         video_name = os.path.join(self.hparams.root_dir, 'videos', f'video#{epoch:d}.mp4')
156 |         video_dir = os.path.dirname(video_name)
157 |         if not os.path.isdir(video_dir):
158 |             os.makedirs(video_dir, exist_ok=True)
159 |         window_name = None
160 |         if show_video:
161 |             window_name = "test_epoch {:d}".format(epoch)
162 |         show_write = ShowWrite(window_name, video_name)
163 | 
164 |         with torch.no_grad():
165 |             for batch in tqdm(islice(dataloader, self.hparams.demo_iter), total=self.hparams.demo_iter):
166 | 
167 |                 batch = SimpleNamespace(**batch)
168 |                 batch.inputs = batch.inputs.to(self.device)
169 |                 batch.reset = batch.reset.to(self.device)
170 | 
171 |                 x = batch.inputs
172 |                 x = x[:, :, :3]  # keep the first 3 channels for visualization
173 |                 x = 255 * normalize_tiles(x)
174 |                 y = batch.images
175 | 
176 |                 self.model.reset(batch.reset)
177 | 
178 |                 s = self.model(batch.inputs)
179 |                 o = self.model.predict_gray(s)
180 |                 o = normalize_tiles(o, num_stds=6)
181 |                 o = 255 * o
182 | 
183 |                 if self.hparams.plot_flow:
184 |                     f = self.model.predict_flow(s)
185 |                     f = seq_wise(viz_flow)(f)
186 |                 for t in range(len(x)):
187 |                     gx = make_grid(x[t], nrow=nrows, padding=0).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
188 |                     gy = make_grid(y[t], nrow=nrows, padding=0).permute(1, 2, 0).cpu().numpy().astype(np.uint8)
189 |                     go = make_grid(o[t], nrow=nrows, padding=0).permute(1, 2, 0).data.cpu().numpy().astype(np.uint8)
190 | 
191 |                     if self.hparams.plot_flow:
192 |                         gf = make_grid(f[t], nrow=nrows, padding=0)
193 |                         gf = gf.permute(1, 2, 0).data.cpu().numpy().astype(np.uint8)
194 |                         cat = np.concatenate([gx, gy, gf, go], axis=1)
195 |                     else:
196 |                         cat = np.concatenate([gx, gy, go], axis=1)
197 |                     show_write(cat)
--------------------------------------------------------------------------------
/prophesee_e2v/main.py:
--------------------------------------------------------------------------------
1 | import subprocess
2 | import os
3 | import glob
4 | 
5 | def run(input_path: str, output_path: str):
6 |     # reconstruct one raw event recording into a video; one frame per delta_t microseconds of events
7 |     subprocess.run([
8 |         'python',
9 |         'demo_event_to_video.py',
10 |         input_path,
11 |         'e2v.ckpt',
12 |         '--delta_t',
13 |         '40000',
14 |         '--mode',
15 |         'delta_t',
16 |         '--height_width',
17 |         '720',
18 |         '1280',
19 |         '--video_path',
20 |         output_path
21 |     ])
22 | 
23 | def create_map(files: list, files2: list, out_dir: str) -> dict:
24 |     if not os.path.exists(out_dir):
25 |         os.mkdir(out_dir)
26 |     path_map = dict()
27 |     for file in files:
28 |         tokens = os.path.basename(file).split('.')[0].split('_')
29 |         timestamp = '_'.join(tokens[1:])
30 |         fname = os.path.join(out_dir, 'data_' + timestamp)
31 |         if not os.path.exists(fname):
32 |             os.mkdir(fname)
33 |         output = os.path.join(fname, 'event_' + timestamp + '.mp4')
34 |         path_map[file] = output
35 | 
36 |     for file in files2:
37 |         fname = os.path.join(out_dir, file.split('/')[-2])
38 |         if not os.path.exists(fname):
39 |             os.mkdir(fname)
40 |         tokens = os.path.basename(file).split('.')[0].split('_')
41 |         timestamp = '_'.join(tokens[1:])
42 |         output = os.path.join(fname, 'event_' + timestamp + '.mp4')
43 |         path_map[file] = output
44 | 
45 |     print(len(path_map))
46 |     return path_map
47 | 
48 | def main():
49 |     # define input path
50 |     rdir = '/home/taiyi/scratch2/NYU-EventVPR'
51 |     files = glob.glob(os.path.join(rdir, 'sensor_data_*/data_*.raw'))
52 |     files2 = glob.glob(os.path.join(rdir, 'sensor_data_*/data_*/event_*.raw'))
53 |     print(len(files) + len(files2))  # total number of raw recordings found
54 | 
55 |     # define output path
56 |     out_dir = '/home/taiyi/scratch2/NYU-EventVPR_reconstructed_30fps'
57 | 
58 |     # create map from raw inputs to reconstructed video outputs
59 |     path_map = create_map(files, files2, out_dir)
60 | 
61 |     # iterate over files and reconstruct event data into frames
62 |     for file in (files + files2):
63 |         print('{} ---> {}'.format(file, path_map.get(file)))
64 |         run(file, path_map.get(file))
65 | 
66 | if __name__ == '__main__':
67 |     main()
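To make the path bookkeeping in `create_map` concrete, here is a standalone sketch of the timestamp extraction it performs (the recording name is invented for illustration):

```
import os

# Reproduce create_map's timestamp extraction on a made-up recording name.
raw = 'data_2023-01-01_12-00-00.raw'
tokens = os.path.basename(raw).split('.')[0].split('_')
timestamp = '_'.join(tokens[1:])
print(timestamp)  # 2023-01-01_12-00-00
# The reconstructed video then lands at <out_dir>/data_<timestamp>/event_<timestamp>.mp4
```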
--------------------------------------------------------------------------------
/prophesee_e2v/main.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | # unused helper kept for reference; expects fnames, input_path and output_path to be set by the caller
4 | e2v () {
5 |     for fname in "${fnames[@]}"; do
6 |         echo "$fname";
7 |         python3 demo_event_to_video.py \
8 |             "$input_path/$fname.raw" \
9 |             e2v.ckpt \
10 |             --delta_t 40000 \
11 |             --mode delta_t \
12 |             --height_width 720 1280 \
13 |             --video_path "$output_path/$fname.mp4"
14 |     done
15 | }
16 | 
17 | rm -rf /home/taiyi/scratch2/NYU-EventVPR_reconstructed_30fps/  # purge previous reconstruction output
18 | python main.py
--------------------------------------------------------------------------------
/prophesee_e2v/run.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | # usage: ./run.sh path/to/recording.raw
3 | python3 demo_event_to_video.py \
4 |     "$1" \
5 |     e2v.ckpt \
6 |     --delta_t 40000 \
7 |     --mode delta_t \
8 |     --height_width 720 1280 \
9 |     --video_path output.mp4
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | pandas
2 | opencv-python
3 | # pickle is part of the Python standard library, not a pip package
4 | geopy
5 | numpy
6 | utm
7 | matplotlib
8 | descartes
9 | geopandas
10 | shapely
11 | torch
12 | torchvision
13 | tqdm
14 | kornia
--------------------------------------------------------------------------------
/rgb_dataset_generation/generate_dataset.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import glob
3 | import geopy.distance
4 | from time import time
5 | import numpy as np
6 | import concurrent.futures
7 | from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, wait
8 | from multiprocessing import cpu_count
9 | import os
10 | import cv2 as cv
11 | import pickle
12 | import argparse
13 | 
14 | '''
15 | This program processes the raw data collected by the event-RGB-GPS sensor rig
16 | into a dataset format compatible with the VPR-Bench framework.
17 | '''
18 | 
19 | '''
20 | Selected query images are removed from the reference set, so no query has an identical reference match.
21 | No Gaussian noise is applied.
22 | '''
23 | 
24 | # hyperparams
25 | threshold = 25.0 # meters; GPS distances below this value count as a match
26 | query_factor = 0.2 # fraction of all images randomly sampled as queries (0 = none, 1 = all); resampled on each run
27 | 
28 | # dir paths: change if necessary
29 | raw_dir = '/mnt/scratch/data' # input raw data directory (raw sensor readings)
30 | dir = '/mnt/scratch/NYU-EventVPR-RGB' # output formatted data directory (compatible with VPR-Bench)
31 | 
32 | # fixed dir paths
33 | csv_path = os.path.join(raw_dir, 'sensor_data_*/GPS_data_*.csv')
34 | npy_path = os.path.join(dir, 'ground_truth.npy')
35 | npy_path_pickle = os.path.join(dir, 'ground_truth_new.npy')
36 | npy_path_csv = os.path.join(dir, 'ground_truth_new.csv')
37 | query_dir = os.path.join(dir, 'query')
38 | ref_dir = os.path.join(dir, 'ref')
39 | 
40 | # create appropriate directories for the new dataset
41 | def create_dataset_dir():
42 |     if not os.path.exists(dir):
43 |         os.mkdir(dir)
44 |     if not os.path.exists(query_dir):
45 |         os.mkdir(query_dir)
46 |     if not os.path.exists(ref_dir):
47 |         os.mkdir(ref_dir)
48 | 
49 | # map image file names to zero-padded dataset indices
50 | def map_img_path_to_index(df: pd.DataFrame) -> dict:
51 |     img_map = dict()
52 |     for row in df.itertuples():
53 |         img_map['frame_' + row.Timestamp + '.jpg'] = str(row.Index).zfill(7) + '.jpg'
54 |     return img_map
55 | 
56 | # move images from raw data dir to processed data dir (same indexing as loaded dataframe)
57 | def move_imgs(df: pd.DataFrame, dfq: pd.DataFrame):
58 |     df_map = map_img_path_to_index(df)
59 |     dfq_map = map_img_path_to_index(dfq)
60 |     folder_path = glob.glob('{}/sensor_data_*/img_*'.format(raw_dir))
61 |     print(folder_path)
62 |     with ThreadPoolExecutor() as executor:
63 |         print('Started Thread Pool Executor...')
64 |         futures = list()
65 |         # iterate through all raw data folders
66 |         for folder in folder_path:
67 |             for img_path in os.listdir(folder):
68 |                 # get query_path
69 |                 query_path = dfq_map.get(img_path)
70 |                 if query_path is not None:
71 |                     query_path = os.path.join(query_dir, query_path)
72 |                 # get ref_path
73 |                 ref_path = df_map.get(img_path)
74 |                 if ref_path is not None:
75 |                     ref_path = os.path.join(ref_dir, ref_path)
76 |                 # submit pool job to futures
77 |                 futures.append(
78 |                     executor.submit(
79 |                         img_io_thread,
80 |                         os.path.join(folder, img_path),
81 |                         query_path,
82 |                         ref_path
83 |                     )
84 |                 )
85 |         # block until all futures finish
86 |         wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
87 |         print('All concurrent futures have completed')
88 |     print('Executor has shutdown')
89 | 
90 | # worker task submitted to the thread pool:
91 | # copies one image into the query and/or reference directories
92 | def img_io_thread(input_path, query_path, ref_path):
93 |     try:
94 |         # load image
95 |         img = cv.imread(input_path)
96 |         # save query image if needed
97 |         if query_path is not None:
98 |             cv.imwrite(query_path, img)
99 |             print('Query image: {}'.format(query_path))
100 |         # save ref image if needed
101 |         if ref_path is not None:
102 |             cv.imwrite(ref_path, img)
103 |             print('Reference image: {}'.format(ref_path))
104 |     except Exception as exc:
105 |         raise FileNotFoundError(input_path) from exc
106 | 
107 | # calculate distance in meters between 2 GPS coordinates
108 | # input: a(latitude, longitude), b(latitude, longitude)
109 | # output: distance in meters
110 | def calculate_gps_distance(a: tuple, b: tuple) -> float:
111 |     return geopy.distance.geodesic(a, b).meters
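Since all ground-truth matching below hinges on `geopy.distance.geodesic`, here is a quick sanity check of the helper (the coordinates are illustrative):

```
import geopy.distance

# Two illustrative points in Brooklyn, ~0.0009 degrees of latitude apart.
a = (40.6943, -73.9865)
b = (40.6952, -73.9865)
print(geopy.distance.geodesic(a, b).meters)  # ~100 m, i.e. a non-match at threshold = 25
```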
112 | 
113 | # aggregate all GPS data from csv files into a single pandas dataframe
114 | def load_csv():
115 |     print('RGB mode')
116 |     file_path = glob.glob(csv_path)
117 |     print('GPS file count: {}'.format(len(file_path)))
118 |     df = pd.concat(map(pd.read_csv, file_path), ignore_index = True)
119 |     # output dataframe as csv
120 |     df.to_csv('dataframe.csv', index=True, header=True)
121 |     print('Total entry count: {}'.format(df.shape[0]))
122 |     # sample query images from reference images
123 |     dfq = df.sample(frac=query_factor).reset_index()
124 |     print('Sampled {} query images from {} reference images'.format(dfq.shape[0], df.shape[0]))
125 |     # remove query images from reference images
126 |     df = df.drop(dfq['index']).reset_index()
127 |     # shuffle reference images
128 |     df = df.sample(frac=1).reset_index()
129 |     print('New reference image count: {}'.format(df.shape[0]))
130 |     return df, dfq
131 | 
132 | # initialize ground truth numpy array according to dataframe dims
133 | def create_gt(df) -> np.ndarray:
134 |     gt = np.zeros((df.shape[0], 2), dtype=object)  # row i: [query index, list of matching reference indices]
135 |     print(gt.shape)
136 |     for i in range(gt.shape[0]):
137 |         gt[i][0] = i
138 |         gt[i][1] = list()
139 |     return gt
140 | 
141 | # worker run in a separate process: matches one slice of query rows against all references
142 | # upon completion, returns the partial array plus its start and end indices
143 | def match_process(df: pd.DataFrame, dfq: pd.DataFrame, gt: np.ndarray, start: int, end: int):
144 |     for row in dfq.iloc[start:end].itertuples():
145 |         print('{}/{}'.format(row.index, df.shape[0] + dfq.shape[0] - 1))  # row.index is the position in the original, pre-split dataframe
146 |         gt[row.Index][0] = row.Index  # positional index of the query row
147 |         for r in df.itertuples():
148 |             dist = calculate_gps_distance(
149 |                 (row.Latitude, row.Longitude),
150 |                 (r.Latitude, r.Longitude)
151 |             )
152 |             if dist < threshold:
153 |                 gt[row.Index][1].append(r.Index)  # positional index of the matching reference row
154 |     return gt, start, end
155 | 
156 | # generate ground truth array between query and reference images (multiprocessing)
157 | def compute_ground_truth(df: pd.DataFrame, dfq: pd.DataFrame) -> np.ndarray:
158 |     # initialize ground truth numpy array
159 |     gt = create_gt(dfq)
160 | 
161 |     # define multiprocessing executor
162 |     print('CPU core count: {}'.format(cpu_count()))
163 |     executor = ProcessPoolExecutor()
164 |     print('Started Process Pool Executor...')
165 | 
166 |     # divide the work into equal chunks across all CPU cores
167 |     futures = list()
168 |     assert dfq.shape[0] >= cpu_count(), 'Too few query rows to chunk across {} workers'.format(cpu_count())
169 |     # submit jobs to futures
170 |     prev = 0
171 |     for idx in range(dfq.shape[0] // cpu_count(), dfq.shape[0], dfq.shape[0] // cpu_count()):
172 |         futures.append(executor.submit(match_process, df, dfq, gt, prev, idx))
173 |         prev = idx
174 |     # block until all futures finish
175 |     wait(futures, return_when=concurrent.futures.ALL_COMPLETED)
176 |     print('All concurrent futures have completed')
177 | 
178 |     # merge resulting arrays based on index range
179 |     for future in futures:
180 |         gt[future.result()[1]:future.result()[2], :] = future.result()[0][future.result()[1]:future.result()[2], :]  # result() is cached, so repeated calls are cheap
181 |     print('Concurrent futures merged')
182 | 
183 |     # shutdown executor
184 |     executor.shutdown()
185 |     print('Executor has shutdown')
186 | 
187 |     # process the tail chunk left over by the integer division above
188 |     match_process(df, dfq, gt, prev, dfq.shape[0])
189 |     print('Remaining rows iterated')
190 | 
191 |     return gt
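To make the returned structure concrete, here is a minimal sketch (indices invented) of the array `compute_ground_truth` produces and the saving functions below serialize:

```
import numpy as np

# Each row i is [query index i, list of reference indices within threshold meters].
gt = np.zeros((3, 2), dtype=object)
gt[0] = [0, [12, 57]]  # query 0 matches references 12 and 57
gt[1] = [1, []]        # query 1 has no reference within the threshold
gt[2] = [2, [3]]
# np.save('ground_truth.npy', gt)  # same call as pickle_compatible below
```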
192 | 
193 | # re-save the ground truth numpy array with pickle protocol 2 for broader compatibility
194 | def pickle_compatible(gt: np.ndarray):
195 |     np.save(npy_path, gt)
196 |     try:
197 |         with open(npy_path, 'rb') as handle:
198 |             a = np.load(handle, allow_pickle=True)
199 |         with open(npy_path_pickle, 'wb') as handle:
200 |             pickle.dump(a, handle, protocol=2)
201 |     except Exception as exc:
202 |         raise FileNotFoundError(npy_path) from exc
203 |     else:
204 |         print('Pickle protocol compatibility check passed')
205 | 
206 | # visualize ground truth numpy array as a table in csv format
207 | def gt_to_csv():
208 |     try:
209 |         pd.DataFrame(np.load(
210 |             npy_path_pickle,
211 |             allow_pickle = True
212 |         )).to_csv(
213 |             npy_path_csv,
214 |             index = False,
215 |             header = False
216 |         )
217 |     except Exception as exc:
218 |         raise FileNotFoundError(npy_path_pickle) from exc
219 |     else:
220 |         print('Ground truth table saved')
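Reading the artifacts back is symmetric; a minimal sketch, assuming the output file names defined above:

```
import numpy as np

# Load the protocol-2 ground truth written by pickle_compatible.
gt = np.load('ground_truth_new.npy', allow_pickle=True)
query_idx, matches = gt[0]
print(query_idx, len(matches))  # query 0 and its number of matching references
```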
221 | 
222 | # aggregate raw dataset and reformat into the unified dataset template
223 | def main():
224 |     start = time()
225 |     #######################################################################
226 |     create_dataset_dir()
227 |     df, dfq = load_csv()
228 |     # df.to_csv('df.csv', index=True, header=True)
229 |     # dfq.to_csv('dfq.csv', index=True, header=True)
230 |     gt = compute_ground_truth(df, dfq)
231 |     # pd.DataFrame(gt).to_csv('dataframe.csv', index=False, header=False)
232 |     move_imgs(df, dfq)
233 |     pickle_compatible(gt)
234 |     gt_to_csv()
235 |     #######################################################################
236 |     end = time()
237 |     print('Time elapsed: {:.6f} hours'.format((end - start) / 3600.0))
238 |     print('Time elapsed: {:.6f} minutes'.format((end - start) / 60.0))
239 |     print('Time elapsed: {:.6f} seconds'.format(end - start))
240 | 
241 | if __name__ == '__main__':
242 |     parser = argparse.ArgumentParser()
243 |     parser.add_argument(
244 |         '--rdir',
245 |         required = False,
246 |         type = str,
247 |         default = raw_dir,
248 |         help = 'input raw data directory (raw sensor readings)'
249 |     )
250 |     parser.add_argument(
251 |         '--dir',
252 |         required = False,
253 |         type = str,
254 |         default = dir,
255 |         help = 'output formatted data directory (compatible with VPR-Bench)'
256 |     )
257 |     parser.add_argument(
258 |         '--tolerance',
259 |         required = False,
260 |         type = float,
261 |         default = threshold,
262 |         help = 'ground-truth match threshold in meters; pairs farther apart than this are non-matches'
263 |     )
264 |     parser.add_argument(
265 |         '--factor',
266 |         required = False,
267 |         type = float,
268 |         default = query_factor,
269 |         help = 'query factor in (0, 1]: fraction of images randomly sampled from the database as queries'
270 |     )
271 |     # parse command line arguments
272 |     args = parser.parse_args()
273 |     print(args)
274 |     threshold = args.tolerance
275 |     query_factor = args.factor
276 |     assert query_factor > 0 and query_factor <= 1, 'Query factor must be between 0 and 1'
277 |     raw_dir = args.rdir
278 |     dir = args.dir
279 |     # update derived paths so they track the parsed arguments
280 |     csv_path = os.path.join(raw_dir, 'sensor_data_*/GPS_data_*.csv')
281 |     npy_path = os.path.join(dir, 'ground_truth.npy')
282 |     npy_path_pickle = os.path.join(dir, 'ground_truth_new.npy')
283 |     npy_path_csv = os.path.join(dir, 'ground_truth_new.csv')
284 |     query_dir = os.path.join(dir, 'query')
285 |     ref_dir = os.path.join(dir, 'ref')
286 |     main()
--------------------------------------------------------------------------------
/rgb_dataset_generation/rgb_dataset.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # generate rgb dataset
3 | dir=/mnt/scratch
4 | rm -rf "$dir/NYU-EventVPR-RGB/"
5 | echo "Purged previous dataset directory"
6 | # --rdir is left at its default (/mnt/scratch/data); pass it explicitly to use a different raw-data root
7 | python "$dir/rgb_dataset_generation/generate_dataset.py" \
8 |     --dir "$dir/NYU-EventVPR-RGB" \
9 |     --tolerance 25 \
10 |     --factor 0.01
11 | 
--------------------------------------------------------------------------------