├── .gitignore
├── 2d_simulator_generator.sh
├── ReadMe.md
├── generator_config
│   ├── fast_motions.json
│   ├── medium_motions.json
│   ├── slow_motions.json
│   └── variety_motions.json
├── scripts
│   ├── 2d_launch_esim.py
│   ├── esim.launch
│   ├── generate_esim2d_scenes.py
│   └── generate_preset.py
└── tools
    └── jpg_to_png.py
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so
data_generator/voxel_generation/build

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

# input data, saved log, checkpoints
data/
input/
saved/
datasets/

# editor, os cache directory
.vscode/
.idea/
__MACOSX/

# outputs
*.jpg
*.jpeg
*.png
*.h5
*.swp

# dirs
/configs/r2
--------------------------------------------------------------------------------
/2d_simulator_generator.sh:
--------------------------------------------------------------------------------
#!/bin/bash
for i in {0..10}
do
    echo "========================================================"
    echo "This is image $i"
    echo "========================================================"
    source ~/sim_ws/devel/setup.bash
    ct_mean=$(bc <<< "(0.1*$i)+0.1")
    python scripts/generate_esim2d_scenes.py generator_config/slow_motions.json --scene_id=$i --contrast_threshold_mean=$ct_mean --contrast_threshold_sigma=0.1
    python scripts/2d_launch_esim.py --launch_file_path="/tmp/esim.launch"
done
--------------------------------------------------------------------------------
/ReadMe.md:
--------------------------------------------------------------------------------
## About
This code generates flying-chairs-style sequences for the Multi-Object-2D simulator from [ESIM](https://github.com/uzh-rpg/rpg_esim). It was used to generate the sequences for [How to Train Your Event Camera Neural Network](https://timostoff.github.io/20ecnn); please cite this work if you use it in an academic context.
```
@InProceedings{Stoffregen20eccv,
  author    = {T. Stoffregen and C. Scheerlinck and D. Scaramuzza and T. Drummond and N. Barnes and L. Kleeman and R. Mahony},
  title     = {Reducing the Sim-to-Real Gap for Event Cameras},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2020},
  month     = aug
}
```

## Prerequisites
You must have ESIM installed for this to work; please follow the instructions [here](https://github.com/uzh-rpg/rpg_esim/wiki/Installation). ESIM must also be sourced (the `ssim` command, if you followed the install instructions).

Next, take a look at the config files in `generator_config` and adapt the paths to suit your needs. The image paths in `foreground_images` point to the objects that will be flying around wildly in the simulator. These images _must_ all be 4-channel PNG images, or you will get cryptic errors from ESIM. _Unfortunately_, imagemagick is a real pain about actually writing the alpha channel when it feels it isn't needed, so I ended up using a Python script to convert JPGs to PNGs. I know, it seems crazy, but it's true. You can find it in `tools/jpg_to_png.py`.
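At its core, the conversion just appends a fully opaque alpha plane to the three colour channels before re-saving. A minimal sketch of that step in plain NumPy (the actual script does the file I/O with OpenCV):

```python
import numpy as np

def add_alpha(img_bgr):
    """Append a fully opaque alpha channel to an HxWx3 image array."""
    h, w, _ = img_bgr.shape
    alpha = np.full((h, w, 1), 255, dtype=img_bgr.dtype)
    return np.concatenate([img_bgr, alpha], axis=2)

demo = np.zeros((8, 8, 3), dtype=np.uint8)
print(add_alpha(demo).shape)  # (8, 8, 4)
```

The real script is invoked as `python tools/jpg_to_png.py /path/to/jpgs --output_dir /path/to/pngs`; the output directory defaults to the input directory.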

The image paths in `background_images` _must_ be JPG images, again for mysterious ESIM reasons. Naturally, the foreground section needs at least one path containing at least `max_num` images, and the background section at least one path containing at least one image.

## Usage
The main work is done in `scripts/generate_esim2d_scenes.py`. This script takes a configuration file (examples can be found in `generator_config`) plus some optional command-line arguments that augment or override the config settings, and generates a scene file (containing the trajectories, the corresponding images, the image size and the sequence duration), an ESIM config file (containing contrast thresholds, biases etc.) and a ROS launch file.
By default these files are created at `/tmp/000000000_autoscene.txt`, `/tmp/000000000_config2d.txt` and `/tmp/esim.launch` respectively. As an example, you could execute:
```
python scripts/generate_esim2d_scenes.py generator_config/slow_motions.json --scene_id=0 --contrast_threshold_mean=0.3 --contrast_threshold_sigma=0.1
```
Note that the CLI arguments for the contrast thresholds are optional; when given, they overrule the values in the config file.

Once this is done, you can use `scripts/2d_launch_esim.py` to launch ROS itself. The only required argument is the location of the launch file, e.g.:
```
python scripts/2d_launch_esim.py --launch_file_path="/tmp/esim.launch"
```

All of this is also wrapped up in a bash script, so you could also just run `2d_simulator_generator.sh`.
## Generating datasets from existing configs
You can also generate datasets from existing scene and config files.
For example, to generate the dataset from "Reducing the Sim-to-Real Gap for Event Cameras", first download the COCO dataset as well as a few custom foreground images, available [here](https://drive.google.com/drive/folders/1F6fNgZFmMvGkw6sAwDFE7j8Q7EH3TMve?usp=sharing).
Then download the config and scene files for the dataset from [here](https://drive.google.com/drive/folders/1ILoFnR5BHR17F0VGEzR0JIBfisw1nkc4?usp=sharing).
By default these expect everything in /tmp (look inside the autoscene files to see the paths), but you can easily change this using [sed](https://stackoverflow.com/questions/11392478/how-to-replace-a-string-in-multiple-files-in-linux-command-line).
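For instance, with GNU `sed` (the directory and image root below are made-up placeholders; substitute whatever paths actually appear in your autoscene files):

```shell
# Demo: create a scene file that points at a default /tmp image root,
# then rewrite that root in place across all autoscene files.
mkdir -p /tmp/preset_demo
echo "/tmp/images/object.png 3 0 0 0 0 0 0 0 1 1 1 1" > /tmp/preset_demo/000000000_autoscene.txt
sed -i 's|/tmp/images|/data/images|g' /tmp/preset_demo/*autoscene.txt
grep object.png /tmp/preset_demo/000000000_autoscene.txt
```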
Then, just run `scripts/generate_preset.py`, e.g.:
```
python scripts/generate_preset.py /path/to/config/files
```
Note that you will need ROS installed and sourced.
--------------------------------------------------------------------------------
/generator_config/fast_motions.json:
--------------------------------------------------------------------------------
{
    "name": "fast_motions",
    "foreground_params": {
        "fg_min_ang_velocity_deg": 10,
        "fg_max_ang_velocity_deg": 45.0,
        "fg_min_velocity": 30,
        "fg_max_velocity": 350,
        "fg_growthmin": 0.3,
        "fg_growthmax": 0.9
    },
    "background_params": {
        "bg_min_ang_velocity_deg": 0,
        "bg_max_ang_velocity_deg": 15,
        "bg_min_velocity": 5,
        "bg_max_velocity": 25,
        "bg_growthmin": 1.0,
        "bg_growthmax": 1.5
    },
    "foreground_images": {
        "png_dataset": {
            "path": "/home/path/to/esim_objects",
            "proportion": 0.8
        },
        "coco_dataset": {
            "path": "/home/path/to/coco_png",
            "proportion": 0.2
        },
        "min_num": 5,
        "max_num": 10
    },
    "background_images": {
        "coco_dataset": {
            "path": "/home/path/to/coco"
        }
    },
    "camera_params": {
        "contrast_threshold_mean": 0.75,
        "ct_diff_sigma": 0.1,
        "min_C": 0.01,
        "max_C": 8,
        "hard_c_t_sigmas": 0.0001,
        "refractory_period_ns": 1000000,
        "sim_framerate": 60
    }
}
--------------------------------------------------------------------------------
/generator_config/medium_motions.json:
--------------------------------------------------------------------------------
{
    "name": "medium_motions",
    "foreground_params": {
        "fg_min_ang_velocity_deg": 0,
        "fg_max_ang_velocity_deg": 15.0,
        "fg_min_velocity": 20,
        "fg_max_velocity": 80,
        "fg_growthmin": 0.3,
        "fg_growthmax": 0.9
    },
    "background_params": {
        "bg_min_ang_velocity_deg": 0,
        "bg_max_ang_velocity_deg": 15,
        "bg_min_velocity": 5,
        "bg_max_velocity": 25,
        "bg_growthmin": 1.0,
        "bg_growthmax": 1.5
    },
    "foreground_images": {
        "png_dataset": {
            "path": "/home/path/to/esim_objects",
            "proportion": 0.8
        },
        "coco_dataset": {
            "path": "/home/path/to/coco_png",
            "proportion": 0.2
        },
        "min_num": 5,
        "max_num": 10
    },
    "background_images": {
        "coco_dataset": {
            "path": "/home/path/to/coco"
        }
    },
    "camera_params": {
        "contrast_threshold_mean": 0.75,
        "ct_diff_sigma": 0.1,
        "min_C": 0.01,
        "max_C": 8,
        "hard_c_t_sigmas": 0.0001,
        "refractory_period_ns": 1000000,
        "sim_framerate": 60
    }
}
--------------------------------------------------------------------------------
/generator_config/slow_motions.json:
--------------------------------------------------------------------------------
{
    "name": "slow_motions",
    "foreground_params": {
        "fg_min_ang_velocity_deg": 0,
        "fg_max_ang_velocity_deg": 5.0,
        "fg_min_velocity": 0.5,
        "fg_max_velocity": 30.0,
        "fg_growthmin": 0.3,
        "fg_growthmax": 0.9
    },
    "background_params": {
        "bg_min_ang_velocity_deg": 0,
        "bg_max_ang_velocity_deg": 5,
        "bg_min_velocity": 0.5,
        "bg_max_velocity": 10,
        "bg_growthmin": 1.0,
        "bg_growthmax": 1.5
    },
    "foreground_images": {
        "png_dataset": {
            "path": "/home/path/to/esim_objects",
            "proportion": 0.8
        },
        "coco_dataset": {
            "path": "/home/path/to/coco_png",
            "proportion": 0.2
        },
        "min_num": 2,
        "max_num": 10
    },
    "background_images": {
        "coco_dataset": {
            "path": "/home/path/to/coco"
        }
    },
    "camera_params": {
        "contrast_threshold_mean": 0.75,
        "ct_diff_sigma": 0.1,
        "min_C": 0.01,
        "max_C": 8,
        "hard_c_t_sigmas": 0.0001,
        "refractory_period_ns": 1000000,
        "sim_framerate": 60
    }
}
--------------------------------------------------------------------------------
/generator_config/variety_motions.json:
--------------------------------------------------------------------------------
{
    "name": "various_unconstrained",
    "foreground_params": {
        "fg_min_ang_velocity_deg": 0,
        "fg_max_ang_velocity_deg": 60.0,
        "fg_min_velocity": 0.1,
        "fg_max_velocity": 600,
        "fg_growthmin": 0.3,
        "fg_growthmax": 0.9
    },
    "background_params": {
        "bg_min_ang_velocity_deg": 0,
        "bg_max_ang_velocity_deg": 45,
        "bg_min_velocity": 0.1,
        "bg_max_velocity": 40,
        "bg_growthmin": 1.0,
        "bg_growthmax": 1.5
    },
    "foreground_images": {
        "png_dataset": {
            "path": "/home/path/to/esim_objects",
            "proportion": 0.8
        },
        "coco_dataset": {
            "path": "/home/path/to/coco_png",
            "proportion": 0.2
        },
        "min_num": 10,
        "max_num": 30
    },
    "background_images": {
        "coco_dataset": {
            "path": "/home/path/to/coco"
        }
    },
    "camera_params": {
        "contrast_threshold_mean": 0.75,
        "ct_diff_sigma": 0.1,
        "min_C": 0.01,
        "max_C": 8,
        "hard_c_t_sigmas": 0.0001,
        "refractory_period_ns": 1000000,
        "sim_framerate": 60
    }
}
--------------------------------------------------------------------------------
/scripts/2d_launch_esim.py:
--------------------------------------------------------------------------------
1 | """
2 | From the paper 'Stoffregen et al., How to Train Your Event Camera Neural Network'
3 | (https://timostoff.github.io/20ecnn)
4 | """
5 | import roslaunch
6 | import rospy
7 | import argparse
8 | import os
9 |
10 | process_generate_running = True
11 |
12 | class ProcessListener(roslaunch.pmon.ProcessListener):
13 | global process_generate_running
14 |
15 | def process_died(self, name, exit_code):
16 | global process_generate_running
17 | process_generate_running = False
18 | rospy.logwarn("%s died with code %s", name, exit_code)
19 |
20 |
21 | def init_launch(launchfile, process_listener):
22 | uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
23 | roslaunch.configure_logging(uuid)
24 | launch = roslaunch.parent.ROSLaunchParent(
25 | uuid,
26 | [launchfile],
27 | process_listeners=[process_listener],
28 | )
29 | return launch
30 |
31 | if __name__ == "__main__":
32 | parser = argparse.ArgumentParser(description='Launches Event Simulator')
33 | parser.add_argument('--launch_file_path', type=str, help='Path to load launch file',
34 | default="esim2d.launch")
35 |
36 | args = parser.parse_args()
37 | launch_file_path = os.path.abspath(args.launch_file_path)
38 |
39 | rospy.init_node("remote_launcher")
40 | launch_file = launch_file_path
41 | launch = init_launch(launch_file, ProcessListener())
42 | launch.start()
43 |
44 | while process_generate_running:
45 | rospy.sleep(0.05)
46 |
47 | launch.shutdown()
48 |
--------------------------------------------------------------------------------
/scripts/esim.launch:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/scripts/generate_esim2d_scenes.py:
--------------------------------------------------------------------------------
1 | """
2 | From the paper 'Stoffregen et al., How to Train Your Event Camera Neural Network'
3 | (https://timostoff.github.io/20ecnn)
4 | """
5 | import argparse
6 | import numpy as np
7 | import pandas as pd
8 | import json
9 | from collections import OrderedDict
10 | import random
11 | import glob
12 | import os
13 | from shutil import copyfile
14 | from matplotlib import pyplot as plt
15 | from matplotlib import collections as mc
16 | from matplotlib.patches import Rectangle
17 |
18 | PI = 3.1415926
19 |
def generate_config_file(output_path_cfg, output_path, contrast_threshold_mean=0.75, ct_diff_sigma=0.1,
                         min_C=0.01, max_C=8, hard_c_t_sigmas=0.0001, refractory_period_ns=1000000,
                         bag_name="/tmp/out.bag", scene_id=0, sim_framerate=100):
    """
    Generate the config file for the simulator (ie event camera settings)
    """
    fcfg = open(output_path_cfg, "w")
    fcfg.write("\n--vmodule=data_provider_online_simple=0" +
               "\n--data_source=1" +
               "\n--path_to_output_bag={}".format(bag_name) +
               "\n--path_to_sequence_file={}".format(output_path) +
               "\n")

    # One contrast threshold gets the base value, the other differs by a
    # factor drawn from N(1, ct_diff_sigma); which is which alternates with
    # the scene id. Both are then clamped to [min_C, max_C].
    c1 = contrast_threshold_mean
    c2 = np.random.normal(1, ct_diff_sigma)*c1
    if scene_id % 2 == 0:
        contrast_threshold_pos = c1
        contrast_threshold_neg = c2
    else:
        contrast_threshold_pos = c2
        contrast_threshold_neg = c1
    contrast_threshold_pos = min(max(contrast_threshold_pos, min_C), max_C)
    contrast_threshold_neg = min(max(contrast_threshold_neg, min_C), max_C)

    fcfg.write("\n--contrast_threshold_pos={}".format(contrast_threshold_pos) +
               "\n--contrast_threshold_neg={}".format(contrast_threshold_neg) +
               "\n--contrast_threshold_sigma_pos={}".format(hard_c_t_sigmas) +
               "\n--contrast_threshold_sigma_neg={}".format(hard_c_t_sigmas) +
               "\n--refractory_period_ns={}".format(refractory_period_ns) +
               "\n")

    fcfg.write("\n--exposure_time_ms=0.0" +
               "\n--use_log_image=1" +
               "\n--log_eps=0.001" +
               "\n")

    fcfg.write("\n--renderer_type=0" +
               "\n--renderer_preprocess_median_blur=0.0" +
               "\n--renderer_preprocess_gaussian_blur=0.0" +
               "\n")

    fcfg.write("\n--simulation_minimum_framerate={}".format(sim_framerate*1.2) +
               "\n--simulation_imu_rate=1000.0" +
               "\n--simulation_adaptive_sampling_method=1" +
               "\n--simulation_adaptive_sampling_lambda=0.5" +
               "\n")

    fcfg.write("\n--ros_publisher_frame_rate={}".format(sim_framerate) +
               "\n--ros_publisher_depth_rate={}".format(sim_framerate*0) +
               "\n--ros_publisher_optic_flow_rate={}".format(sim_framerate) +
               "\n--ros_publisher_pointcloud_rate={}".format(sim_framerate*0) +
               "\n--ros_publisher_camera_info_rate={}".format(sim_framerate) +
               "\n")

    fcfg.close()
    print("Wrote new config file to {}".format(output_path_cfg))

def get_abs_coords_from_vec(vec):
    length = np.abs(vec)
    if length < 1:
        if vec > 0:
            offset = random.uniform(0.0, 1.0-length)
        else:
            offset = random.uniform(length, 1.0)
    else:
        if vec > 0:
            offset = random.uniform(-(length-1), 0)
        else:
            offset = random.uniform(1, length)
    offset -= 0.5
    return offset

def get_random_translations(speed, duration, image_size):
    """
    Get motion vectors through a rectangle of width 1, centered
    at 0, 0 (the virtual image plane), which have a trajectory
    length such that the given speed of the motion occurs, given
    the duration of the sequence. For example, if a very fast motion
    is desired for a long scene, the trajectory will have to be huge
    to respect the desired speed.
    """
    displacement = np.array([speed*duration])/image_size
    edge_limit = 0.45
    rnd_point_in_image = np.array((random.uniform(-edge_limit, edge_limit), random.uniform(-edge_limit, edge_limit)))
    rnd_direction = np.array(random.uniform(0, 2*PI))
    if np.linalg.norm(displacement) < 3.0:
        # Short trajectories are centered on the sampled point
        rnd_speed_component = np.array(0.5*displacement)
    else:
        rnd_speed_component = np.array(random.uniform(0.1, 0.9) * displacement)
    vec = np.array([np.cos(rnd_direction), np.sin(rnd_direction)])

    point_1 = (vec * rnd_speed_component) + rnd_point_in_image
    point_2 = (-vec * (displacement-rnd_speed_component)) + rnd_point_in_image

    return point_1[0], point_1[1], point_2[0], point_2[1]

def get_motion_string(image_name, object_speed, min_ang_vel_deg, max_ang_vel_deg, min_growth, max_growth, mfs=3, gbs=0):
    """
    Given an image and the desired trajectory, return a string that the simulator can read.
    Note: relies on 'duration' and 'image_size' being defined at module level (set in __main__).
    """
    x0, y0, x1, y1 = get_random_translations(object_speed, duration, image_size)

    random_angular_v = random.uniform(min_ang_vel_deg, max_ang_vel_deg) * (1 if random.random() < 0.5 else -1)
    angle_to_travel = random_angular_v * duration
    random_start_angle = random.uniform(0, 360)
    theta0 = random_start_angle
    theta1 = random_start_angle + angle_to_travel

    sx0 = random.uniform(min_growth, max_growth)
    sx1 = random.uniform(min_growth, max_growth)
    sy0 = random.uniform(min_growth, max_growth)
    sy1 = random.uniform(min_growth, max_growth)
    return "{} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f} {:.3f}\n".format(
        image_name, mfs, gbs, theta0, theta1, x0, x1, y0, y1, sx0, sx1, sy0, sy1), [x0, y0, x1, y1]

def generate_scene_file_multispeed(output_path, image_size, duration, background_images, foreground_images,
                                   number_objects=15, median_filter_size=3, gaussian_blur_sigma=0,
                                   bg_min_ang_velocity_deg=0, bg_max_ang_velocity_deg=5, bg_min_velocity=5,
                                   bg_max_velocity=20, bg_growthmin=1.5, bg_growthmax=2.0,
                                   fg_min_ang_velocity_deg=0, fg_max_ang_velocity_deg=400, fg_min_velocity=5,
                                   fg_max_velocity=500, fg_growthmin=0.5, fg_growthmax=1.5, proportion_variation=0.1,
                                   fully_random=True):
    """
    Given foreground and background image directories and motion parameters, sample some random images
    and give them velocities distributed across the range of velocities given. Velocities are sampled
    linearly across the range, with a small percentage variation, so as to guarantee the full 'spread'
    of motions. If you don't want that behaviour, set 'fully_random' to True, in which case the speeds
    will be drawn uniformly between fg_min_velocity and fg_max_velocity.
    """
    f = open(output_path, "w")
    f.write("{} {} {}\n".format(image_size[0], image_size[1], duration))

    background_image_paths = []
    for image_set in background_images:
        background_image_paths.extend(sorted(glob.glob("{}/*.jpg".format(background_images[image_set]['path']))))
    assert len(background_image_paths) > 0
    background = random.choice(background_image_paths)

    foreground_image_paths = []
    num_images = np.random.randint(foreground_images['min_num'], foreground_images['max_num'])
    print("{} foreground images".format(num_images))
    for image_set in foreground_images:
        if isinstance(foreground_images[image_set], dict):
            all_paths = sorted(glob.glob("{}/*.png".format(foreground_images[image_set]['path'])))
            selection = random.sample(all_paths, int(num_images*foreground_images[image_set]['proportion']+0.5))
            foreground_image_paths.extend(selection)
    assert len(foreground_image_paths) > 0
    random.shuffle(foreground_image_paths)

    # Background
    random_speed = random.uniform(bg_min_velocity, bg_max_velocity)
    f.write(get_motion_string(background, random_speed, bg_min_ang_velocity_deg, bg_max_ang_velocity_deg,
                              bg_growthmin, bg_growthmax, median_filter_size, gaussian_blur_sigma)[0])
    # Foreground
    object_speeds = []
    obj_strings = []
    v_range = np.linspace(fg_min_velocity, fg_max_velocity, len(foreground_image_paths))
    for i, fg in enumerate(foreground_image_paths):
        if fully_random:
            # Uniform draw across the velocity range, as per the docstring
            v_vel = random.uniform(fg_min_velocity, fg_max_velocity)
        else:
            v_vel = v_range[i]*np.random.normal(1, proportion_variation)

        m_string, motion = get_motion_string(fg, v_vel, fg_min_ang_velocity_deg,
                                             fg_max_ang_velocity_deg, fg_growthmin, fg_growthmax,
                                             median_filter_size, gaussian_blur_sigma)
        object_speeds.append(motion)
        obj_strings.append(m_string)

    random.shuffle(obj_strings)
    for obj_string in obj_strings:
        f.write(obj_string)

    f.close()
    print("Wrote new scene file to {}".format(output_path))
    return object_speeds

def draw_object_motions(obj_motions):
    """
    For visualization purposes, this will draw the trajectories and the image sensor
    """
    print(obj_motions)
    lines = [[(seg[0], seg[1]), (seg[2], seg[3])] for seg in obj_motions]
    print(lines)
    lc = mc.LineCollection(lines, linewidths=2)
    rect = Rectangle((-0.5, -0.5), 1, 1, linewidth=1, edgecolor='r', facecolor='none')
    fig, ax = plt.subplots()
    ax.add_collection(lc)
    ax.add_patch(rect)
    axlim = 1
    ax.set_xlim([-axlim, axlim])
    ax.set_ylim([-axlim, axlim])
    plt.show()

def create_launch_file(simulator_config_path="/tmp/sim_config.txt", launch_path="/tmp/esim.launch"):
    """
    Generates a roslaunch launch file for ESIM.
    NOTE: the XML strings below are a plausible reconstruction; the original
    markup was lost when this file was exported. It assumes ESIM is launched
    as the esim_ros/esim_node with the config passed via --flagfile.
    """
    print("Saving launch file to {}".format(launch_path))
    with open(launch_path, "w") as f:
        f.write("<launch>\n")
        f.write("\t<node name=\"esim_node\" pkg=\"esim_ros\" type=\"esim_node\" output=\"screen\"\n")
        f.write("\t\targs=\"--flagfile={}\">\n".format(simulator_config_path))
        f.write("\t</node>\n")
        f.write("</launch>\n")

def read_json(fname):
    assert os.path.exists(fname)
    with open(fname) as json_file:
        data = json.load(json_file, object_hook=OrderedDict)
    return data

if __name__ == "__main__":

    parser = argparse.ArgumentParser(description='Scene file generator')
    parser.add_argument('generator_config', type=str, help='Scene generator settings',
                        default="../generator_config/slow_velocity_slow_rotation.json")
    parser.add_argument('--output_path', type=str, help='Path to save scene file', default=None)
    parser.add_argument('--output_path_cfg', type=str, help='Path to save config file', default=None)
    parser.add_argument('--existing_scenes', type=str, help='If you have scene files already,\
        you may pass a file with a list of them and the generator will use these instead', default=None)

    # Scene params
    parser.add_argument('--image_width', type=int, help='Image width (pixels)', default=64)
    parser.add_argument('--image_height', type=int, help='Image height (pixels)', default=64)
    parser.add_argument('--scene_duration', type=float, help='How long should the sequence go\
        (seconds)', default=10.0)
    parser.add_argument('--bag_name', type=str, help='Where to save output bag. If left empty,\
        will save to /tmp/<scene_id>_out.bag', default=None)
    parser.add_argument('--scene_id', type=int, help='ID number, is appended to files saved', default=0)

    # Simulator params
    parser.add_argument('--sim_framerate', type=int, help='Output framerate of the simulator.\
        If left empty, use value in config.', default=None)
    parser.add_argument('--contrast_threshold_mean', type=float, help='CTs will be sampled from\
        a normal dist. with this mean. If left empty, use value in config.', default=None)
    parser.add_argument('--contrast_threshold_sigma', type=float, help='Neg CT will differ to pos CT\
        by NegCT = PosCT * N(1, contrast_threshold_sigma)', default=None)

    args = parser.parse_args()

    config = read_json(args.generator_config)

    output_path = '/tmp/{:09d}_autoscene.txt'.format(args.scene_id) if args.output_path is None else args.output_path
    output_path_cfg = '/tmp/{:09d}_config2d.txt'.format(args.scene_id) if args.output_path_cfg is None else args.output_path_cfg
    image_size = (args.image_width, args.image_height)
    duration = args.scene_duration
    bag_name = '/tmp/{:09d}_out.bag'.format(args.scene_id) if args.bag_name is None else args.bag_name

    # Scene generation
    if args.existing_scenes is None:
        motion_params = config['foreground_params']
        motion_params.update(config['background_params'])
        motions = generate_scene_file_multispeed(output_path, image_size, duration,
                                                 config['background_images'], config['foreground_images'], **motion_params)
    else:
        motions = pd.read_csv(args.existing_scenes, delimiter=',', header=None).values.tolist()[args.scene_id][0]
        copyfile(motions, output_path)

    # Simulator config generation
    camera_params = config['camera_params']
    if args.sim_framerate is not None:
        camera_params['sim_framerate'] = args.sim_framerate
    if args.contrast_threshold_mean is not None:
        camera_params['contrast_threshold_mean'] = args.contrast_threshold_mean
    if args.contrast_threshold_sigma is not None:
        camera_params['ct_diff_sigma'] = args.contrast_threshold_sigma
    generate_config_file(output_path_cfg, output_path, bag_name=bag_name, scene_id=args.scene_id,
                         **camera_params)

    # Launch file generation
    create_launch_file(simulator_config_path=output_path_cfg)

    print("Rosbag save path = {}".format(bag_name))
--------------------------------------------------------------------------------
/scripts/generate_preset.py:
--------------------------------------------------------------------------------
import roslaunch
import rospy
import argparse
import os
import glob
from generate_esim2d_scenes import create_launch_file

process_generate_running = True

class ProcessListener(roslaunch.pmon.ProcessListener):
    """Flags the launcher to shut down once the monitored process dies."""

    def process_died(self, name, exit_code):
        global process_generate_running
        process_generate_running = False
        rospy.logwarn("%s died with code %s", name, exit_code)


def init_launch(launchfile, process_listener):
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid,
        [launchfile],
        process_listeners=[process_listener],
    )
    return launch

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Launches Event Simulator')
    parser.add_argument('config_dir', type=str, help='Path to directory containing simulator config files')
    args = parser.parse_args()

    configs = sorted(glob.glob(os.path.join(args.config_dir, "*config2d.txt")))
    print(configs)
    launch_file_path = "/tmp/esim.launch"

    rospy.init_node("remote_launcher")
    # Run the simulator once per config file: each iteration writes a fresh
    # launch file and then waits for the simulator process to finish.
    for config in configs:
        create_launch_file(config, launch_file_path)
        process_generate_running = True
        launch = init_launch(launch_file_path, ProcessListener())
        launch.start()

        while process_generate_running:
            rospy.sleep(0.05)

        launch.shutdown()
--------------------------------------------------------------------------------
/tools/jpg_to_png.py:
--------------------------------------------------------------------------------
import numpy as np
import glob
import argparse
import cv2 as cv
import os

def convert(files, output_dir):
    for img_jpg in files:
        img = cv.imread(img_jpg, cv.IMREAD_COLOR)
        if img is None:
            print("Skipping {}: could not be read".format(img_jpg))
            continue
        b_channel, g_channel, r_channel = cv.split(img)
        alpha_channel = np.ones(b_channel.shape, dtype=b_channel.dtype) * 255
        # Make a single pixel not fully opaque, presumably so that the alpha
        # channel cannot be optimized away when the PNG is written out.
        alpha_channel[0, 0] = 254
        img_BGRA = cv.merge((b_channel, g_channel, r_channel, alpha_channel))
        img_name = os.path.splitext(os.path.basename(img_jpg))[0] + ".png"
        img_png = os.path.join(output_dir, img_name)
        print("Converting {} to {}, size={}".format(img_jpg, img_png, img_BGRA.shape))
        cv.imwrite(img_png, img_BGRA)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Convert JPG to PNG with alpha channel')
    parser.add_argument('filepath', type=str, help='Path to file. If given a folder,\
        will convert all jpg files in the folder')
    parser.add_argument('--output_dir', type=str, default=None, help='Output save dir.\
        If left empty, same as input dir')

    args = parser.parse_args()
    filepath = args.filepath
    output_dir = args.output_dir
    if os.path.isfile(filepath):
        files = [filepath]
        if args.output_dir is None:
            output_dir = os.path.dirname(args.filepath)
    else:
        files = sorted(glob.glob('{}/*.jpg'.format(filepath)))
        if len(files) > 0 and args.output_dir is None:
            output_dir = os.path.dirname(files[0])
    os.makedirs(output_dir, exist_ok=True)
    convert(files, output_dir)
--------------------------------------------------------------------------------