    @InProceedings{wang19acra,
      author    = {Wang, Ziwei and Ng, Yonhon and van Goor, Pieter and Mahony, Robert},
      title     = {Event Camera Calibration of Per-pixel Biased Contrast Threshold},
      booktitle = {Australasian Conference on Robotics and Automation (ACRA)},
      year      = 2019
    }

## Code
Run [`EventFrameCalib.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventFrameCalib.m). It loads event data from `./data` and calibrates the selected dataset using both event and frame data.

Run [`EventOnlyCalib.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventOnlyCalib.m). It loads event data from `./data` and calibrates the selected dataset using only event data.
The calibrated parameters are saved to `./results/{DATASET}/{METHOD}/scale.csv` and `./results/{DATASET}/{METHOD}/bias.csv`.

Run [`ImageReconstruction.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/ImageReconstruction.m). It loads events and frames from `./data` and the calibrated parameters from `./results/{DATASET}/{METHOD}/`, then generates event reconstructions by integrating 500k events starting from the images.

## Prepare data
### Example dataset
Download the dataset and save it to the folder `./data`:
- [Click Here to Download Example Datasets](https://drive.google.com/drive/folders/14BF-1fkNsGoPodG1v4qH1W3K-Lixgp4K?usp=sharing)

Dataset from: [[Mueggler et al., IJRR 2017]](https://rpg.ifi.uzh.ch/davis_data.html)

### How to prepare a good calibration dataset for your event camera
If you want to calibrate your own camera or other datasets, choose a data sequence with normal lighting conditions and slow motion for better calibration performance.
The scene should contain rich texture that triggers enough events at every pixel.
An easy example to try is the provided dataset `box_translation`. The sequence contains motion ranging from slow to very fast, and the intensity frames become very blurry when the camera moves too fast.
Calibrating on frames 1-200 only gives better evaluation performance than using the whole sequence, for both [`EventFrameCalib`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventFrameCalib.m) and [`EventOnlyCalib`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventOnlyCalib.m).

For pure event calibration, the scene should be roughly equally textured, so that the excitation of each pixel is similar; this aligns with the assumption in Section 4.3 of the paper.

### Code to pre-process your event sequence
Change the dataset path and run [`GenSumESumP.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/GenSumESumP.m). It loads your event data and frame timestamps from `./data` and saves `sumE` (the sum of event counts between consecutive frame timestamps) and `sumP` (the sum of polarities between consecutive frame timestamps) locally. These are the inputs for both [`EventFrameCalib.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventFrameCalib.m) and [`EventOnlyCalib.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventOnlyCalib.m).
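For reference, here is a minimal sketch of this pre-processing step. It assumes events are given as column vectors `x`, `y` (1-based pixel coordinates), `t` (timestamps), and `p` (polarity in {-1, +1}), plus the frame timestamps `frameT`, and that `sumE`/`sumP` are accumulated per pixel over each frame-to-frame interval; the variable names, array layout, resolution, and output file name are illustrative assumptions, not taken from `GenSumESumP.m`.

```matlab
% Illustrative sketch only: accumulate per-pixel event counts (sumE) and
% polarity sums (sumP) over each interval between consecutive frame timestamps.
H = 180; W = 240;                         % example DAVIS240 resolution (assumed)
nInt = numel(frameT) - 1;                 % number of frame-to-frame intervals
k = discretize(t, frameT);                % interval index of each event (NaN if outside the frame range)
valid = ~isnan(k);                        % drop events before the first / after the last frame
idx  = sub2ind([H, W, nInt], y(valid), x(valid), k(valid));
sumE = reshape(accumarray(idx, 1,        [H*W*nInt, 1]), H, W, nInt);  % event count per pixel and interval
sumP = reshape(accumarray(idx, p(valid), [H*W*nInt, 1]), H, W, nInt);  % polarity sum per pixel and interval
save(fullfile('data', 'sumE_sumP_example.mat'), 'sumE', 'sumP');       % hypothetical output file
```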
## Log safety offset
The log safety offset used in the code measures the zero-level offset between events and frames.
The event-frame calibration method computes the event scale and bias that match events to frames under a custom-defined log safety offset.
Choose this parameter carefully for each dataset, and make sure the log safety offset used in evaluation ([`ImageReconstruction.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/ImageReconstruction.m)) is the same as the one used in event-frame calibration ([`EventFrameCalib.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventFrameCalib.m)).

The pure event calibration method is designed for event cameras or datasets without registered frame references.
It calibrates the event data (e.g., hot pixels) but does not match events to any frames (since there are no frames as input!), so no log safety offset is used in [`EventOnlyCalib.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/EventOnlyCalib.m).
When evaluating performance on the provided example dataset with the provided evaluation method (integrating events into frames and comparing against the ground-truth frames; see the metric sketch after the notes below), different log safety offsets in [`ImageReconstruction.m`](https://github.com/ziweiWWANG/Event-Camera-Calibration/blob/master/ImageReconstruction.m) generate very different reconstructions.
For example, a log safety offset of 90 leads to much better RMSE/PSNR/SSIM and visual reconstruction quality than a log safety offset of 40 with the same event-only calibration parameters.
However, compared to no calibration, our calibration always improves the evaluation performance, even with different log safety offsets.

## Notes
1. [Brandli et al., 2014] Christian Brandli, Lorenz Muller, and Tobi Delbruck. Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor. In 2014 IEEE International Symposium on Circuits and Systems (ISCAS), pages 686–689. IEEE, 2014.
2. Should you have any questions regarding this paper, please contact ziwei.wang1@anu.edu.au
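As a rough illustration of the metric comparison mentioned in the log safety offset section above, the sketch below compares a reconstructed image against a ground-truth frame using RMSE, PSNR, and SSIM. The file names are hypothetical, both images are assumed to be grayscale and scaled to [0, 1], and the built-in `psnr`/`ssim` functions require the Image Processing Toolbox; this is not the repository's evaluation code.

```matlab
% Illustrative sketch only: compare a reconstruction with a ground-truth frame.
recon = im2double(imread('recon_0001.png'));   % hypothetical reconstructed image
gt    = im2double(imread('frame_0001.png'));   % hypothetical ground-truth frame
rmseVal = sqrt(mean((recon(:) - gt(:)).^2));   % root-mean-square error
psnrVal = psnr(recon, gt);                     % peak signal-to-noise ratio
ssimVal = ssim(recon, gt);                     % structural similarity index
fprintf('RMSE %.4f, PSNR %.2f dB, SSIM %.4f\n', rmseVal, psnrVal, ssimVal);
```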