├── README.md
├── airsim
│   ├── README.md
│   ├── airsim_collect.py
│   ├── airsim_reconstruct.py
│   ├── assets
│   │   └── neighborhood_reconstruction.png
│   ├── requirements.txt
│   └── settings.json
└── basic_ue_collection
    ├── README.md
    └── images
        ├── barnyard_ss.PNG
        ├── epic_launcher.PNG
        ├── farmspawner_blueprint_mod.PNG
        ├── iccv_wammi_ss.PNG
        ├── marketplace_ss.PNG
        ├── no_occlusion.PNG
        ├── occlusion.PNG
        └── valid_point_shader.PNG

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# UEUAVSim
MINDFUL Unreal Engine-Based UAV Repository

Please see the accompanying tutorial videos at: [Mizzou MINDFUL YouTube Channel](https://bit.ly/MizzouINDFUL)

If you use our code or workflow, please reference our work. ICCV WAAMI 2021 LaTeX citation:

@article{alvey_anderson_buck_deardorff_keller, title={Simulated Photorealistic Deep Learning Framework and Workflows to Accelerate Computer Vision and Unmanned Aerial Vehicle Research}, journal={Workshop on Analysis of Aerial Motion Imagery (WAAMI) in conjunction with International Conference on Computer Vision (ICCV 2021)}, author={Alvey, Brendan and Anderson, Derek and Buck, Andrew and Deardorff, Matthew and Keller, James}, year={2021}}

--------------------------------------------------------------------------------
/airsim/README.md:
--------------------------------------------------------------------------------
# 3D Point Cloud Reconstruction with AirSim

If you use our code or workflow, please reference our ICCV WAAMI 2021 article. LaTeX citation:

@article{alvey_anderson_buck_deardorff_keller, title={Simulated Photorealistic Deep Learning Framework and Workflows to Accelerate Computer Vision and Unmanned Aerial Vehicle Research}, journal={Workshop on Analysis of Aerial Motion Imagery (WAAMI) in conjunction with International Conference on Computer Vision (ICCV 2021)}, author={Alvey, Brendan and Anderson, Derek and Buck, Andrew and Deardorff, Matthew and Keller, James}, year={2021}}

[AirSim](https://microsoft.github.io/AirSim/) can be used to collect and reconstruct 3D point clouds using [Unreal Engine](https://www.unrealengine.com/). This page describes how to set up your environment to collect data, either manually or from a list of pre-scripted waypoints, and reconstruct a 3D point cloud from a sequence of color and depth images, along with the corresponding camera poses. The resulting data is visualized using [Open3D](http://www.open3d.org/) and can be used for a variety of research applications.

![](assets/neighborhood_reconstruction.png)

## Installing AirSim and Python

AirSim is a plugin for Unreal Engine that provides a Python API to control a virtual drone camera in a 3D scene. For this example, we will be using the Computer Vision mode of AirSim, which ignores the simulation of a physical drone and instead provides a way to control the camera directly.

### AirSim Environment Setup

First, we need a game world environment that supports AirSim. There are two options:

- Download a precompiled binary package of an environment. The latest release can be found here:

  https://github.com/Microsoft/AirSim/releases

- Build it from source and use a custom environment.
  Follow the instructions here:

  https://microsoft.github.io/AirSim/build_windows/

  (Note that we have only tested Windows development.)

Here, we focus on the first option using a precompiled binary, but these instructions should also work in custom environments.

For this example, we'll use the AirSim Neighborhood environment, which can be downloaded here:

https://github.com/microsoft/AirSim/releases/download/v1.6.0-windows/AirSimNH.zip

#### Settings File

You'll also need the `settings.json` file from the git repo to be placed in your AirSim directory. On a typical installation, this is `C:\Users\<username>\Documents\AirSim`. This file tells the plugin what types of cameras to use, the image resolution, recording options, and many other settings. The provided `settings.json` file is a good starting point and can be edited once you become familiar with the available parameters.

### Python Setup

An easy way to keep Python environments organized and independent is to use the [Conda](https://docs.conda.io/en/latest/index.html) package manager, either as part of the full-featured [Anaconda](https://www.anaconda.com/) distribution or as the minimal [Miniconda](https://docs.conda.io/en/latest/miniconda.html) installer. It's recommended to set up the environment from an administrator terminal, although this may not be required.

Once Conda is installed on your system, you can run

```
conda create --name sim python=3.8
```

to create a new Python 3.8 environment. (Note that we are using Python 3.8 due to some dependency issues with Open3D.)

Activate the new environment with

```
conda activate sim
```

Install the dependencies with

```
pip install open3d
pip install msgpack-rpc-python
pip install airsim
pip install pandas pillow tqdm
```

(Note: Depending on your system configuration, you may need to run these commands as administrator. You will likely see errors describing dependency conflicts; however, installing the packages in the specified order should allow our example to work. For reference, a `requirements.txt` file included in the repo lists the package versions we used, but due to unresolvable dependencies it cannot be used directly.)

Experienced Python users may find the following guides helpful for setting up the AirSim and Open3D environments:

- AirSim: https://microsoft.github.io/AirSim/apis/
- Open3D: http://www.open3d.org/docs/release/getting_started.html

## Collecting Data

We describe two methods for collecting data in AirSim: manual and scripted. For the manual approach, we will launch the AirSim runtime and control the drone camera with the keyboard. The scripted approach will use a Python script to move the drone to specified waypoints. Both approaches save data in the [AirSim recording format](https://microsoft.github.io/AirSim/settings/#recording).

Start by running the Unreal environment with the AirSim plugin. For this example using the neighborhood map, unzip the binary package from above and run `AirSimNH.exe`. For other environments, launch the executable or put the Unreal Editor into "Play" mode. You may need to click inside the window to make it active. Make sure your `settings.json` file is stored in the correct location and is loaded properly.
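Before recording anything, it can help to confirm that the Python API can reach the running environment. Here is a minimal connectivity check, a sketch using the same `airsim.VehicleClient` calls as the scripts in this repo:

```
import airsim

# Connect to the AirSim plugin running inside the Unreal environment
client = airsim.VehicleClient()

# Prints the client/server version info; errors out if the simulator is unreachable
client.confirmConnection()

# Query the current camera pose as a quick sanity check
pose = client.simGetVehiclePose()
print('Camera position:', pose.position)
```

If this fails, verify that the environment is running and that your `settings.json` was loaded from the expected location.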
### Manual Recording

In Computer Vision mode (set by the settings file), you can steer the drone camera with the following keyboard controls:

- `Arrow keys` -> Move forward, backward, left, right
- `Page Up`/`Page Down` -> Move up, down
- `A`,`D` -> Yaw
- `W`,`S` -> Pitch

Pressing `Backspace` will reset the drone to the original position.

Press `R` to start or stop recording. (Note that our settings file is configured to only record data frames when the drone has moved.) Once recording has started, data will be saved in a timestamped folder in your AirSim directory, `C:\Users\<username>\Documents\AirSim`. The recording directory contains a metadata file `airsim_rec.txt` that includes camera pose information for each frame. Inside the `images` directory, you will find the color and segmentation images stored as `png` files and the depth image data stored as a floating-point array in the `pfm` format. We will demonstrate how to use this data in the reconstruction section of this example.

### Scripted Recording

In many cases, it may be preferable to automate the data collection process. An example script demonstrating how to generate waypoints in a "lawnmower" pattern is provided in `airsim_collect.py`. It can be run with

```
python airsim_collect.py
```

This script connects to the AirSim drone using the Python API. It tells the drone to start recording and then moves between waypoints. Since the settings file is configured to record only after moving, it collects a new frame at each waypoint. The data is stored in a timestamped folder in your AirSim directory, `C:\Users\<username>\Documents\AirSim`.

The script defines a bounding region around the starting location, an altitude, and a step size. All AirSim units are meters. In the example, the bounds are set to +/- 100 m in the x and y directions, with an altitude of 50 m and a step size of 50 m. After each waypoint is generated, the AirSim API function `simSetVehiclePose` is called to move the drone and record a new frame. A short delay is included to ensure that the data from each waypoint has time to be written to disk. Including a delay can also help with loading assets into the Unreal environment after moving the camera.

This scripting approach can also be used for other collection patterns. Optionally, waypoints could be loaded from an external source, such as a CSV file, and then used to position the drone camera for each frame.

For more interactive control, it may be useful to use the AirSim Image APIs to access the image data directly in the Python code. We recommend the `simGetImages` function call, which can provide multiple image types simultaneously. More information can be found at https://microsoft.github.io/AirSim/image_apis/.

## 3D Reconstruction

The reconstruction script `airsim_reconstruct.py` can be used to build a 3D point cloud from a recorded AirSim sequence. Options are provided to load a specific run by its folder name (timestamp) or to simply reconstruct the last run. Additionally, we can specify a step size to skip frames, a maximum depth distance, and whether to save a point cloud for each frame or just the entire combined scene. The option flag `--seg` will use the segmentation colormap instead of the true image colors, and the flag `--vis` will automatically display the result.
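Under the hood, the script derives its camera model from `settings.json`: the focal length in pixels follows from the image width and the horizontal field of view as `fd = (W/2) / tan(FOV/2)`. Here is a condensed sketch of that computation, mirroring the corresponding lines in `airsim_reconstruct.py` and using the values from the provided settings file:

```
import numpy as np
import open3d as o3d

# Image size and horizontal FOV, as configured in settings.json
img_width, img_height, img_fov = 256, 192, 90.0

# Focal length in pixels from the horizontal field of view
fov_rad = img_fov * np.pi / 180
fd = (img_width / 2.0) / np.tan(fov_rad / 2.0)

# Open3D pinhole camera with the principal point at the image center
intrinsic = o3d.camera.PinholeCameraIntrinsic()
intrinsic.set_intrinsics(img_width, img_height, fd, fd, img_width / 2 - 0.5, img_height / 2 - 0.5)
```

With a 90-degree FOV and a 256-pixel-wide image, this gives a focal length of 128 pixels.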
Here is the full parameter list for the reconstruction script:

```
>python airsim_reconstruct.py --help
usage: airsim_reconstruct.py [-h] (-r RUN | -l) [-s STEP] [-t DEPTH_TRUNC] [-w] [--seg] [--vis]

optional arguments:
  -h, --help            show this help message and exit
  -r RUN, --run RUN     folder name of the run
  -l, --last            use last run
  -s STEP, --step STEP  frame step
  -t DEPTH_TRUNC, --depth_trunc DEPTH_TRUNC
                        max distance of depth projection
  -w, --write_frames    save a point cloud for each frame
  --seg                 use segmentation colors
  --vis                 show visualization
```

After collecting a data sequence with the `airsim_collect.py` script, we run

```
python airsim_reconstruct.py -l -w --vis
```

This will loop over all the frames in the last recording and show the combined point cloud in an Open3D window. You can navigate the 3D view using the mouse and adjust the point size with the `+`/`-` keys. The position of the camera for each frame is also displayed in this view. The combined point cloud is saved in the recording directory as `points_rgb.pcd` in the widely used [PCD file format](https://en.wikipedia.org/wiki/Point_Cloud_Library#PCD_File_Format). Since the `-w` parameter is included, the points for each frame are saved in the `points` folder, along with the camera parameters in a `json` format that is compatible with Open3D. Running the command again with the `--seg` flag will also create point clouds using the segmentation colors.

The recorded images from simulation and the reconstructed point clouds can be used for various applications. One way to quickly verify the data is to use a viewer such as [CloudCompare](https://www.danielgm.net/cc/) to visualize the combined point cloud as well as the individual frames. We hope this method can accelerate research workflows!
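As an alternative to an external viewer, the saved cloud can also be reloaded directly in Python for a quick check. A minimal sketch, assuming the combined `points_rgb.pcd` file is in the current directory (the 0.25 m downsampling voxel is just an illustrative choice):

```
import open3d as o3d

# Load the combined point cloud written by airsim_reconstruct.py
pcd = o3d.io.read_point_cloud('points_rgb.pcd')
print(pcd)  # reports the number of points loaded

# Downsample for faster interaction, then display
pcd_small = pcd.voxel_down_sample(voxel_size=0.25)
o3d.visualization.draw_geometries([pcd_small])
```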
--------------------------------------------------------------------------------
/airsim/airsim_collect.py:
--------------------------------------------------------------------------------
import time

import airsim
import numpy as np


# Set waypoint bounds and parameters for a lawnmower pattern
wp_bound_x = [-100, 100]
wp_bound_y = [-100, 100]
wp_step = 50
wp_z = 50

# Set initial pose
x = wp_bound_x[0]
y = wp_bound_y[0]
z = wp_z
yaw = 0 * np.pi/180
pitch = -90 * np.pi/180
roll = 0 * np.pi/180

# Connect to AirSim
client = airsim.VehicleClient()
client.reset()
client.confirmConnection()

# Start recording
client.startRecording()

# === Generate Trajectory ===

d = 0
while y <= wp_bound_y[1]:
    while (d == 0 and x <= wp_bound_x[1]) or (d == 1 and x >= wp_bound_x[0]):

        # Set the pose
        yaw = d * 180 * np.pi/180
        client.simSetVehiclePose(airsim.Pose(airsim.Vector3r(x, -y, -z), airsim.to_quaternion(pitch, -roll, yaw)), True)

        time.sleep(0.2)  # Short delay

        if d == 0:
            x += wp_step
        else:
            x -= wp_step

    # Step back inside the bounds before reversing direction
    if d == 0:
        x -= wp_step
    else:
        x += wp_step

    # Reverse direction and move to the next row
    d = 1 - d
    y += wp_step

# Stop recording
client.stopRecording()

--------------------------------------------------------------------------------
/airsim/airsim_reconstruct.py:
--------------------------------------------------------------------------------
import argparse
import json
import os
import re

import airsim
import numpy as np
import open3d as o3d
import pandas as pd
import PIL.Image
from tqdm import tqdm


# Parse command line arguments
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-r', '--run', help='folder name of the run')
group.add_argument('-l', '--last', action='store_true', help='use last run')
parser.add_argument('-s', '--step', default=1, type=int, help='frame step')
parser.add_argument('-t', '--depth_trunc', default=10000, type=float, help='max distance of depth projection')
parser.add_argument('-w', '--write_frames', action='store_true', help='save a point cloud for each frame')
parser.add_argument('--seg', action='store_true', help='use segmentation colors')
parser.add_argument('--vis', action='store_true', help='show visualization')
args = parser.parse_args()

# Get the default directory for AirSim
airsim_path = os.path.join(os.path.expanduser('~'), 'Documents', 'AirSim')

# Load the settings file
with open(os.path.join(airsim_path, 'settings.json'), 'r') as fp:
    data = json.load(fp)

# Get the camera intrinsics
capture_settings = data['CameraDefaults']['CaptureSettings'][0]
img_width = capture_settings['Width']
img_height = capture_settings['Height']
img_fov = capture_settings['FOV_Degrees']

# Compute the focal length
fov_rad = img_fov * np.pi/180
fd = (img_width/2.0) / np.tan(fov_rad/2.0)

# Create the camera intrinsic object
intrinsic = o3d.camera.PinholeCameraIntrinsic()
intrinsic.set_intrinsics(img_width, img_height, fd, fd, img_width/2 - 0.5, img_height/2 - 0.5)

# Get the run name
if args.last:
    runs = []
    for f in os.listdir(airsim_path):
        if re.fullmatch(r'\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}', f):
            runs.append(f)
    run = sorted(runs)[-1]
else:
    run = args.run

# Load the recording metadata
data_path = os.path.join(airsim_path, run)
df = pd.read_csv(os.path.join(data_path, 'airsim_rec.txt'), delimiter='\t')

# Create the output directory if needed
if args.write_frames:
    os.makedirs(os.path.join(data_path, 'points'), exist_ok=True)

# Initialize an empty point cloud and camera list
pcd = o3d.geometry.PointCloud()
cams = []

# Loop over all the frames
for frame in tqdm(range(0, df.shape[0], args.step)):

    # === Create the transformation matrix ===

    # Translation from the recorded position (axes remapped from AirSim's convention)
    x, y, z = df.iloc[frame][['POS_X', 'POS_Y', 'POS_Z']]
    T = np.eye(4)
    T[:3,3] = [-y, -z, -x]

    # Rotation from the recorded quaternion
    qw, qx, qy, qz = df.iloc[frame][['Q_W', 'Q_X', 'Q_Y', 'Q_Z']]
    R = np.eye(4)
    R[:3,:3] = o3d.geometry.get_rotation_matrix_from_quaternion((qw, qy, qz, qx))

    # Fixed change of basis between the AirSim camera axes and Open3D's camera frame
    C = np.array([
        [ 1,  0,  0,  0],
        [ 0,  0, -1,  0],
        [ 0,  1,  0,  0],
        [ 0,  0,  0,  1]
    ])

    # Combined extrinsic matrix passed to Open3D
    F = R.T @ T @ C

    # === Load the images ===

    rgb_filename, seg_filename, depth_filename = df.iloc[frame].ImageFile.split(';')

    rgb_path = os.path.join(data_path, 'images', rgb_filename)
    rgb = PIL.Image.open(rgb_path).convert('RGB')

    seg_path = os.path.join(data_path, 'images', seg_filename)
    seg = PIL.Image.open(seg_path).convert('RGB')

    depth_path = os.path.join(data_path, 'images', depth_filename)
    depth, _ = airsim.utils.read_pfm(depth_path)

    # === Create the point cloud ===

    color = seg if args.seg else rgb
    color_image = o3d.geometry.Image(np.asarray(color))
    depth_image = o3d.geometry.Image(depth)
    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(color_image, depth_image, depth_scale=1.0, depth_trunc=args.depth_trunc, convert_rgb_to_intensity=False)
    rgbd_pc = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image, intrinsic, extrinsic=F)
    pcd += rgbd_pc

    # Save the point cloud for this frame
    if args.write_frames:
        pcd_name = f'points_seg_{frame:06d}' if args.seg else f'points_rgb_{frame:06d}'
        pcd_path = os.path.join(data_path, 'points', pcd_name + '.pcd')
        o3d.io.write_point_cloud(pcd_path, rgbd_pc)

        cam_path = os.path.join(data_path, 'points', f'cam_{frame:06d}.json')
        cam = o3d.camera.PinholeCameraParameters()
        cam.intrinsic = intrinsic
        cam.extrinsic = F
        o3d.io.write_pinhole_camera_parameters(cam_path, cam)

    # === Create the camera visualization ===

    cams.append(o3d.geometry.LineSet.create_camera_visualization(intrinsic, F))


# Save the point cloud
pcd_name = 'points_seg' if args.seg else 'points_rgb'
pcd_path = os.path.join(data_path, pcd_name + '.pcd')
o3d.io.write_point_cloud(pcd_path, pcd)

# Visualize
if args.vis:
    geos = [pcd]
    geos.extend(cams)
    o3d.visualization.draw_geometries(geos)

--------------------------------------------------------------------------------
/airsim/assets/neighborhood_reconstruction.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/airsim/assets/neighborhood_reconstruction.png
--------------------------------------------------------------------------------
/airsim/requirements.txt:
--------------------------------------------------------------------------------
airsim==1.6.0
anyio==3.3.2
argon2-cffi==21.1.0
attrs==21.2.0
autopep8==1.5.7
Babel==2.9.1
backcall==0.2.0
beautifulsoup4==4.9.3
bleach==4.1.0
certifi==2020.12.5
cffi==1.14.6
chardet==4.0.0
cloudpickle==1.6.0
colorama==0.4.4
cuda==0.0.1
cupy-cuda112==9.1.0
debugpy==1.4.3
decorator==5.1.0
defusedxml==0.7.1
deprecation==2.1.0
djitellopy==2.4.0
entrypoints==0.3
fastrlock==0.6
gym==0.18.3
idna==2.10
imageio==2.9.0
ipykernel==6.4.1
ipython==7.28.0
ipython-genutils==0.2.0
ipywidgets==7.6.5
jedi==0.18.0
Jinja2==3.0.1
joblib==1.0.1
json5==0.9.6
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==7.0.4
jupyter-console==6.4.0
jupyter-core==4.8.1
jupyter-packaging==0.10.6
jupyter-server==1.11.0
jupyterlab==3.1.14
jupyterlab-pygments==0.1.2
jupyterlab-server==2.8.2
jupyterlab-widgets==1.0.2
line-profiler==3.1.0
MarkupSafe==2.0.1
matplotlib-inline==0.1.3
mistune==0.8.4
msgpack-python==0.5.6
msgpack-rpc-python==0.4.1
nbclassic==0.3.2
nbclient==0.5.4
nbconvert==6.2.0
nbformat==5.1.3
nest-asyncio==1.5.1
networkx==2.5.1
notebook==4.4.1
numpy==1.20.2
open3d==0.13.0
opencv-contrib-python==4.5.1.48
opencv-python==4.5.3.56
packaging==21.0
pandas==1.2.4
pandocfilters==1.5.0
parso==0.8.2
path==15.1.2
perlin-numpy @ git+https://github.com/pvigier/perlin-numpy@6f077f811f5708e504732c26dee8f2015b95da0c
pickleshare==0.7.5
Pillow==8.3.2
plotly==4.14.3
prometheus-client==0.11.0
prompt-toolkit==3.0.20
pycodestyle==2.7.0
pycparser==2.20
pyglet==1.5.15
Pygments==2.10.0
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
pytz==2021.1
PyWavelets==1.1.1
pywin32==301
pywinpty==1.1.4
pyzmq==22.3.0
qtconsole==5.0.3
QtPy==1.9.0
requests==2.25.1
requests-unixsocket==0.2.0
retrying==1.3.3
scikit-image==0.18.1
scikit-learn==0.24.2
scipy==1.6.2
Send2Trash==1.8.0
six==1.16.0
snakeviz==2.1.0
sniffio==1.2.0
soupsieve==2.2.1
terminado==0.12.1
testpath==0.5.0
threadpoolctl==2.1.0
tifffile==2021.3.31
toml==0.10.2
tomlkit==0.7.2
torch==1.8.1+cu111
torchaudio==0.8.1
torchvision==0.9.1+cu111
tornado==4.5.3
tqdm==4.61.0
traitlets==5.1.0
typing-extensions==3.10.0.0
urllib3==1.26.4
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==1.2.1
widgetsnbextension==3.5.1
wincertstore==0.2

--------------------------------------------------------------------------------
/airsim/settings.json:
--------------------------------------------------------------------------------
{
  "SeeDocsAt": "https://github.com/Microsoft/AirSim/blob/master/docs/settings.md",
  "SettingsVersion": 1.2,
  "SimMode": "ComputerVision",
  "Recording": {
    "RecordOnMove": true,
    "RecordInterval": 0.05,
    "Cameras": [
      { "CameraName": "0", "ImageType": 0, "PixelsAsFloat": false, "Compress": true },
      { "CameraName": "0", "ImageType": 5, "PixelsAsFloat": false, "Compress": true },
      { "CameraName": "0", "ImageType": 1, "PixelsAsFloat": true, "Compress": false }
    ]
  },
  "CameraDefaults": {
    "CaptureSettings": [
      {
        "ImageType": 0,
        "Width": 256,
        "Height": 192,
        "FOV_Degrees": 90,
        "AutoExposureSpeed": 100,
        "MotionBlurAmount": 0
      },
      {
        "ImageType": 1,
        "Width": 256,
        "Height": 192,
        "FOV_Degrees": 90,
        "AutoExposureSpeed": 100,
        "MotionBlurAmount": 0
      },
      {
        "ImageType": 5,
        "Width": 256,
        "Height": 192,
        "FOV_Degrees": 90,
        "AutoExposureSpeed": 100,
        "MotionBlurAmount": 0
      }
    ]
  },
  "SubWindows": [
    {"WindowID": 0, "CameraName": "0", "ImageType": 3, "VehicleName": "", "Visible": true},
    {"WindowID": 1, "CameraName": "0", "ImageType": 5, "VehicleName": "", "Visible": true},
    {"WindowID": 2, "CameraName": "0", "ImageType": 0, "VehicleName": "", "Visible": true}
  ],
  "SegmentationSettings": {
    "InitMethod": "",
    "MeshNamingMethod": "",
    "OverrideExisting": false
  }
}

--------------------------------------------------------------------------------
/basic_ue_collection/README.md:
--------------------------------------------------------------------------------
# Unreal Engine 4 Basic Simulated Data Collection Tutorial

## Introduction

This document describes how to set up and collect data using only the native Unreal Engine 4 tools and the [Movie Render Queue plugin](https://docs.unrealengine.com/4.26/en-US/AnimatingObjects/Sequencer/Workflow/RenderAndExport/HighQualityMediaExport/). This can be used to easily stand up data collections to facilitate machine learning, computer vision, robotics, or any other project that can make use of the simulated imagery.

Please see the accompanying "Unreal Engine 4 Basic Simulated Data Collection over Agricultural Scene" tutorial video at: [Mizzou MINDFUL YouTube Channel](https://bit.ly/MizzouINDFUL)

If you use our code or workflow, please reference our work. ICCV WAAMI 2021 LaTeX citation:

@article{alvey_anderson_buck_deardorff_keller, title={Simulated Photorealistic Deep Learning Framework and Workflows to Accelerate Computer Vision and Unmanned Aerial Vehicle Research}, journal={Workshop on Analysis of Aerial Motion Imagery (WAAMI) in conjunction with International Conference on Computer Vision (ICCV 2021)}, author={Alvey, Brendan and Anderson, Derek and Buck, Andrew and Deardorff, Matthew and Keller, James}, year={2021}}

![Screenshot during data collection](images/iccv_wammi_ss.PNG)

## Unreal Engine Setup

1. Download and install the [Epic Games Launcher](https://www.epicgames.com/store/en-US/download).
2. Open the Epic Games Launcher and click on the Unreal Engine tab on the left. Click on the Library tab, then click the "+" button next to ENGINE VERSIONS to select and install Unreal Engine v4.26.2.
3. Launch the engine and create a blank project. Select the Architecture, Engineering, and Construction template category, select Blank, specify a location for the project, and click Create Project.
4. By default, the Movie Render Queue plugin should be enabled. To double-check, click Edit in the top menu of the Unreal Engine Editor and select Plugins. Search for the Movie Render Queue plugin and ensure that it is enabled.

![Epic Games Launcher](images/epic_launcher.PNG)

## Download and Load Content

1. In the Epic Games Launcher, navigate to the Marketplace tab in the Unreal Engine section. Select and download any content packages you want to use. In the tutorial video, we used the [Barnyard Megapack](https://www.unrealengine.com/marketplace/en-US/product/barnyard-mega-pack). Add your downloaded content to the project you created.
2. Open your added content in the editor. With most content packs, you can open a map by clicking File->Open Level->Content->Content Package Name->Maps->Map Name.

![Epic Games Marketplace - Barnyard Megapack](images/marketplace_ss.PNG)

## Simulated Data Collection

1. Click Window->Load Layout->Default Editor Layout.
2. In the Place Actors tab, search for "Cine Camera Actor" and place one into your scene. There are a number of parameters that can be changed for the camera:
   1. Transform: Manual entry of position and orientation. (These can also be edited by moving the object around in the scene.)
   2. Current Camera Settings: Controls the simulated camera type and defines focal and lens settings.
   3. Camera Options: Uncheck "Constrain Aspect Ratio" to allow variable camera resolutions.
3. Move the camera to the desired pose for the first keyframe.
4. Click the Cinematics button and select "Add Level Sequence".
5. Click the green +Track button and, under "Add Actor to Sequence", add the Cine Camera Actor you placed in the scene.
6. Press S to create a keyframe.
7. In the sequence editor, move the red bar to the frame number where you want the next keyframe.
8. Press S to create the next keyframe.
9. Click the film icon button to set output options (resolution, directory, render passes).
10. Click Capture to begin saving data to the Saved folder of your project directory.

![Screenshot from tutorial video of simulated data collection](images/barnyard_ss.PNG)

## Object Labels and Scene Depth

1. In the editor, click Edit->Project Settings. Search for "Custom Depth-Stencil Pass" and select "Enabled with Stencil".
2. To assign a stencil value to an object, click on an object in the World Outliner panel. Under Rendering, enable "Render CustomDepth Pass" and set the "CustomDepth Stencil Value" to an integer value. For objects that are generated using a blueprint, the blueprint must be modified: at the end of execution, add the "Set Custom Depth Stencil Value" block with the stencil value you want, and add the "Set Render Custom Depth" block as shown in the image below.

![Modification to Farm Spawner Blueprint to enable Custom Depth Stencil Value](images/farmspawner_blueprint_mod.PNG)

3. Set the "Image Output Format" option in the sequencer (see previous section, step 9) to "Custom Render Passes".
4. Under "Composition Graph Options"->"Include Render Passes", add "Custom Stencil", "Scene Depth", and "Final Image" to output object labels, depth, and RGB images for the sequence. See the video for modifying the stencil buffer to remove the text and/or take into account occlusion by other objects.
5. To remove the tiled text values and/or to label only valid points (taking occlusion into account), first enable viewing Engine Content in the View Options at the bottom left of the content browser. Then browse to Engine Content -> BufferVisualization -> CustomStencil and substitute the custom material pipeline shown in the image below.

Valid point CustomStencil material.
![Valid point CustomStencil material.](images/valid_point_shader.PNG)

--------------------------------------------------------------------------------
/basic_ue_collection/images/barnyard_ss.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/barnyard_ss.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/epic_launcher.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/epic_launcher.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/farmspawner_blueprint_mod.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/farmspawner_blueprint_mod.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/iccv_wammi_ss.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/iccv_wammi_ss.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/marketplace_ss.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/marketplace_ss.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/no_occlusion.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/no_occlusion.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/occlusion.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/occlusion.PNG

--------------------------------------------------------------------------------
/basic_ue_collection/images/valid_point_shader.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MizzouINDFUL/UEUAVSim/ab2c6d081a1aa4d69e488bf885ff948df2ce9854/basic_ue_collection/images/valid_point_shader.PNG
--------------------------------------------------------------------------------