├── .gitignore ├── README.md └── unreal_cv_ros ├── CMakeLists.txt ├── cfg ├── example_mav.rviz └── example_test.rviz ├── content ├── CustomPluginCode.cpp └── gazebo_empty.world ├── launch ├── example_mav.launch └── example_test.launch ├── msg └── UeSensorRaw.msg ├── package.xml └── src ├── example_mav.py ├── sensor_model.py ├── simulation_manager.py └── unreal_ros_client.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | .pytest_cache/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | local_settings.py 57 | db.sqlite3 58 | 59 | # Flask stuff: 60 | instance/ 61 | .webassets-cache 62 | 63 | # Scrapy stuff: 64 | .scrapy 65 | 66 | # Sphinx documentation 67 | docs/_build/ 68 | 69 | # PyBuilder 70 | target/ 71 | 72 | # Jupyter Notebook 73 | .ipynb_checkpoints 74 | 75 | # pyenv 76 | .python-version 77 | 78 | # celery beat schedule file 79 | celerybeat-schedule 80 | 81 | # SageMath parsed files 82 | *.sage.py 83 | 84 | # Environments 85 | .env 86 | .venv 87 | env/ 88 | venv/ 89 | ENV/ 90 | env.bak/ 91 | venv.bak/ 92 | 93 | # Spyder project settings 94 | .spyderproject 95 | .spyproject 96 | 97 | # Rope project settings 98 | .ropeproject 99 | 100 | # mkdocs documentation 101 | /site 102 | 103 | # mypy 104 | .mypy_cache/ 105 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # unreal_cv_ros 2 | 3 | **unreal_cv_ros** is a package to allow ROS based simulation of a MAV equipped with a 3D-reconstruction sensor. The simulation is performed inside an [Unreal Engine 4](https://www.unrealengine.com/en-US/what-is-unreal-engine-4) (UE4) game. The node-game communcation is carried out utilizing the [UnrealCV](https://github.com/unrealcv/unrealcv) computer vision plugin for UE4. 4 | 5 | * **Note:** We recommend also checking out our recent robot simulator [https://github.com/ethz-asl/unreal_airsim](https://github.com/ethz-asl/unreal_airsim) for UE4 based on Microsoft [AirSim](https://microsoft.github.io/AirSim/) with more features and higher performance. 
# Table of Contents
**Installation**
* [Dependencies](#Dependencies)
* [Installation](#Installation)
* [Data Repository](#Data-Repository)

**Examples**
* [Run in test mode](#Run-in-test-mode)
* [Run with MAV](#Run-with-MAV)

**Documentation: ROS Nodes**
* [unreal_ros_client](#unreal_ros_client)
* [sensor_model](#sensor_model)
* [simulation_manager](#simulation_manager)

**Documentation: Working with Unreal**
* [UnrealCV Plugin Setup](#UnrealCV-Plugin-Setup)
* [Creating UE4 worlds](#Creating-UE4-worlds)
* [When to use which mode](#When-to-use-which-mode)
* [Pawns and Cameras](#Pawns-and-Cameras)
* [Custom collision and camera settings](#Custom-collision-and-camera-settings)
* [Static mesh collision](#Static-mesh-collision)
* [Producing ground truth pointclouds](#Producing-ground-truth-pointclouds)
* [The Unreal Coordinate System](#The-Unreal-Coordinate-System)

**Troubleshooting**
* [Frequent Issues](#Troubleshooting)

# Installation
## Dependencies
**System Dependencies:**

unreal_cv_ros requires the UnrealCV Python library: `pip install unrealcv`.

**ROS Packages:**

The perception functionalities (unreal_ros_client + sensor_model) depend on:
* `catkin_simple` ([https://github.com/catkin/catkin_simple](https://github.com/catkin/catkin_simple))

To run the full MAV simulation, these additional packages are needed:
* `rotors_simulator` ([https://github.com/ethz-asl/rotors_simulator](https://github.com/ethz-asl/rotors_simulator))
* `mav_control_rw` ([https://github.com/ethz-asl/mav_control_rw](https://github.com/ethz-asl/mav_control_rw))
* `voxblox` ([https://github.com/ethz-asl/voxblox](https://github.com/ethz-asl/voxblox))

## Installation
**Install the ROS package:**

Installation instructions on Linux:

1. Move to your catkin workspace:
```shell script
cd ~/catkin_ws/src
```
2. Install using an SSH key or via HTTPS:
```shell script
git clone git@github.com:ethz-asl/unreal_cv_ros.git # SSH
git clone https://github.com/ethz-asl/unreal_cv_ros.git # HTTPS
```
3. Compile:
```shell script
catkin build unreal_cv_ros
```
**Install Unreal Engine:**

To create custom worlds or run non-shipping projects, the Unreal Engine Editor is used. On Linux, this requires installing Unreal Engine and building it from source. Complete installation instructions are given on [their webpage](https://docs.unrealengine.com/en-US/SharingAndReleasing/Linux/BeginnerLinuxDeveloper/SettingUpAnUnrealWorkflow/index.html). In addition, install the UnrealCV plugin to Unreal Engine as described in [UnrealCV Plugin Setup](#UnrealCV-Plugin-Setup).

The unreal\_cv\_ros pipeline also works with stand-alone games that were built including the UnrealCV plugin (e.g. the examples below), which do not require any additional installation.

## Data Repository
Other related resources, such as blueprint classes and experiment scenarios, can be downloaded from [here](https://drive.google.com/drive/folders/1wIjnqXlGjrCOQdkQn31Ksep88d18O2u7?usp=sharing). This repo and the related resources were developed and tested with unrealcv v0.3.10 and Unreal Engine 4.16.3.
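Before starting the ROS pipeline, it can be useful to verify that the `unrealcv` pip package can actually reach a running game. The following minimal sketch is not part of the package; it assumes an UnrealCV-enabled game (e.g. the RealisticRendering demo used in the examples below) is already running on the default `localhost:9000` endpoint and simply issues two standard UnrealCV requests:

```python
# Minimal connectivity check for the unrealcv Python client (pip install unrealcv).
# Assumes an UnrealCV-enabled game is already running on localhost:9000.
from unrealcv import client

if client.connect():
    # Both are standard UnrealCV requests and return short strings.
    print(client.request('vget /unrealcv/status'))     # plugin status and config summary
    print(client.request('vget /camera/0/location'))   # "X Y Z" in unreal units (cm)
    client.disconnect()
else:
    print('Could not connect. Is the game running and the UnrealCV plugin loaded?')
```

If this check fails, the unreal_ros_client will not be able to connect either (see also [Troubleshooting](#Troubleshooting)).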
# Examples
## Run in test mode
To illustrate the vision pipeline in a stand-alone fashion, we run the unreal_ros_client in test mode with ground_truth as our sensor model.

![uecvros_ex_test](https://user-images.githubusercontent.com/36043993/52844470-9028de00-30fc-11e9-975f-0b204ae52f6d.png)
View of the realistic rendering demo (left) and the produced ground truth point cloud with the corresponding camera pose (right).

Please download the [RealisticRendering](http://docs.unrealcv.org/en/master/reference/model_zoo.html#rr) game binary and launch the game. In a command window, type `roslaunch unreal_cv_ros example_test.launch` to start the pipeline and wait until the connection is set up (this takes a few seconds). You can now navigate the camera inside the game using the mouse and W-A-S-D keys, while an rviz window displays the produced ground truth pointclouds as well as the camera pose in the unreal world frame.

Note that the test mode is mostly meant to explore the effects of different sensor models on the pointcloud. Since image capture and the read-out of the unreal pose happen sequentially, fast movement in the game may result in some frames not being well aligned.

## Run with MAV
This example demonstrates how to set up the full-scale MAV simulation. It uses unreal_cv_ros for perception and collision checking, gazebo to model the MAV physics, an MPC high-level and a PID low-level controller for trajectory tracking, and voxblox to integrate the pointclouds into a map.

![uecvros_ex_mav](https://user-images.githubusercontent.com/36043993/52851608-814b2700-310e-11e9-9b41-256d11898bbc.png)
Voxblox map that is continually built as the MAV follows a trajectory through the room.

Run the RealisticRendering demo as in the previous example. Make sure to tab out of it immediately to disable player control. This is necessary because unreal_cv_ros sets its world frame at the current MAV position on startup, and this example expects the game demo to be in its initial state. Furthermore, while the game still captures the input, unreal engine continues to accept player commands that may interfere with the simulation.

In a command window, type `roslaunch unreal_cv_ros example_mav.launch` to start the pipeline. The simulation\_manager will supervise the startup of all elements (this may take a few seconds). After everything has started up cleanly, the example node will publish two trajectory segments, which the MAV tries to follow to explore the room. In an rviz window, the current MAV pose is depicted together with the planned trajectories and the voxblox mesh representation of the room as it is explored.

# Documentation: ROS Nodes
## unreal_ros_client
This node manages the unrealcv client and the connection with a running UE4 game. It sets the camera position and orientation within the unreal game and produces the raw image data and camera calibration used by the sensor_model.

### Parameters
* **mode** In which mode the client is operated. Currently implemented are:
    * **test** Navigate the MAV manually in the unreal game. The client will periodically publish the sensor data.
    * **standard** The camera pose is set based on the `/odometry` topic and images are taken. Uses the default unrealcv plugin and operates at ~1-2 Hz.
    * **fast** Similar to standard, but requires the custom unrealcv plugin (see [UnrealCV Plugin Setup](#UnrealCV-Plugin-Setup)). Operates at ~3-5 Hz.
  Default is 'standard'.
* **collision_on** Set to true to check for collisions in the unreal game. Set to false to always set the camera pose without checking. This may result in rendering artifacts if the camera overlaps with objects. Default is true.
* **collision_tol** This parameter only shows in `standard` mode. Collision warnings are triggered if the requested and realized positions are further apart than this threshold (in unreal units, default unit is cm). Default is 10.
* **publish_tf** If true, the client publishes a tf-transform of the camera pose for every taken image with a matching timestamp. Default is False.
* **slowdown** Artificially slows down the time between setting the pose and taking images to give unreal engine more time to render the new view. The slowdown is specified as a wait duration in seconds of wall-time. Default is 0.0.
* **camera_id** Lets unrealcv know which camera to use. Default is 0.
* **queue_size** Queue size of the input subscriber. Default is 1.
* **Camera Parameters:** To change the resolution and field of view (FOV) of the camera, the [unrealcv configuration file](http://docs.unrealcv.org/en/master/plugin/config.html) needs to be changed. The relevant path is displayed when the unreal_ros_client node is launched. When the client is set up, these values are published as 'camera_params' on the ros parameter server for other nodes to access.

### Input Topics
* **odometry** of type `nav_msgs.msg/Odometry`. Sets the camera pose w.r.t. its position and yaw at the time the client connected. Does not appear in `test` mode.

### Output Topics
* **ue_sensor_raw** of type `unreal_cv_ros.msg/UeSensorRaw`. The output of the in-game image capture, containing a color and a depth image encoded as npy binaries.
* **collision** of type `std_msgs.msg/String`. Publishes "MAV collision detected!" upon collision detection. Only available if collision_on is true.

### Services
* **terminate_with_reset** of type `std_srvs.srv/SetBool`. Stops processing new odometry requests and resets the camera to the initial pose.

## sensor_model
This node converts the unreal_ros_client output into a pointcloud for further processing. Sensor-specific behaviour can be simulated by artificially altering the ground truth data.

### Parameters
* **model_type** Which sensor to simulate. Currently implemented are:
    * **ground_truth:** Produces the ground truth pointcloud without additional processing.
    * **kinect:** Simulates a Kinect 3D sensor according to [this paper](https://ieeexplore.ieee.org/abstract/document/6375037). Notice that the sensor range is cropped to \[0.5, 3.0\] m and that the inclination angle is neglected, since its effect is roughly constant up to ~60-70 degrees.
    * **gaussian_depth_noise:** Applies a gaussian, depth-dependent term to the pointcloud z coordinate. Allows setting the params `k_mu_i` and `k_sigma_i` for i in \[0, 3\], where f(z) = k0 + k1 * z + k2 * z^2 + k3 * z^3. Default for all k is 0.0.

  Default is 'ground_truth'.
* **camera_params_ns** Namespace from which to read the unreal camera parameters, which are expected as {height, width, focal_length}. Notice that the sensor_model waits until the camera params are set on the ros parameter server (e.g. from the unreal\_ros\_client). Default is 'unreal\_ros\_client/camera_params'.
* **maximum_distance** All points whose original ray length is beyond maximum_distance are removed from the pointcloud. Set to 0 to keep all points. Default is 0.0.
* **flatten_distance** Sets the ray length of every point whose ray length is larger than flatten\_distance to flatten\_distance. Set to 0 to keep all points unchanged. Default is 0.0.
* **publish_color_images** In addition to the point clouds, publish the perceived images, encoded as 4-channel RGBA. Default is False.
* **publish_gray_images** If true, publish a grayscale image with every pointcloud (e.g. for integration with rovio). Default is False.

### Input Topics
* **ue_sensor_raw** of type `unreal_cv_ros.msg/UeSensorRaw`. Output of the unreal\_ros\_client that is to be further processed.

### Output Topics
* **ue_sensor_out** of type `sensor_msgs.msg/PointCloud2`. Resulting pointcloud after applying the simulated sensing pipeline.
* **ue_color_image_out** of type `sensor_msgs.msg/Image`. Color image of the current view. Only published if publish_color_images is true.
* **ue_gray_image_out** of type `sensor_msgs.msg/Image`. Grayscale image of the current view. Only published if publish_gray_images is true.


## simulation_manager
This node is used to launch the full MAV simulation, using gazebo as a physics engine and an unreal game for perception and collision modeling. It coordinates the simulation setup and monitors the performance of the unreal_cv_ros pipeline.

### Parameters
* **ns_gazebo** Namespace of gazebo, including the node name. Default is '/gazebo'.
* **ns_mav** Namespace of the MAV, which is expected to end with the actual MAV name. Default is '/firefly'.
* **monitor** Set to true to measure the unreal vision pipeline's performance. Default is False.
* **horizon** How many datapoints are kept and considered for the performance measurement. Only available if monitor is true. Default is 10.

### Input Topics
* **ue_raw_in** of type `unreal_cv_ros.msg/UeSensorRaw`. Output of the unreal\_ros\_client for performance measurements. Only available if monitor is true.
* **ue_out_in** of type `sensor_msgs.msg/PointCloud2`. Output of the sensor model for performance measurements. This topic needs to be matched to check for correct startup.

### Output Topics
* **simulation_ready** of type `std_msgs.msg/String`. After successful startup of the pipeline, publishes "Simulation Ready" to let other nodes start or take over.

### Services
* **display_monitor** of type `std_srvs.srv/Empty`. Prints the current monitoring measurements to the console. Only available if monitor is true.


# Documentation: Working with Unreal

## UnrealCV Plugin Setup
### Standard Plugin
For the modes `test` and `standard`, the default UnrealCV plugin is sufficient. Install it to the project or engine as suggested on [their website](http://docs.unrealcv.org/en/master/plugin/install.html).

### Adding the 'vget /uecvros/full' command
This additional command is required to operate in `fast` mode.
* **Compiled Plugin** The compiled plugin can be downloaded from the [data repository](#Data-Repository).

* **Compile it yourself** To compile the plugin, first create an unreal engine development environment as is explained [here](http://docs.unrealcv.org/en/master/plugin/develop.html).
Then change the source code of the unrealcv plugin in your project (e.g. `UnrealProjects/playground/Plugins/UnrealCV/Source/UnrealCV/Private/Commands`) to include the new command:
    - In `CameraHandler.h` add the command declaration.
    - In `CameraHandler.cpp` add the command dispatcher and function body.

These 3 code snippets can be copied from `unreal_cv_ros/content/CustomPluginCode.cpp`. Afterwards, build the project as explained in the unrealcv docs. The compiled plugin can then be copied to other projects or the engine for use.

### Command reference
This command takes images and then requests the passed position and orientation. This leaves time for the engine to finish rendering before the next images are requested.
* Syntax: `vget /uecvros/full X, Y, Z, p, y, r, collision, cameraID`. All arguments, except the cameraID which is a uint, are floats. All arguments must be set.
* Input:
    - (X, Y, Z): New target position in unreal units.
    - (p, y, r): New target orientation in degrees (and unreal coordinate system).
    - collision: Set to 1 to check collision, -1 to ignore it.
    - cameraID: ID of the camera to use in compliance with the unrealcv structure. Generally use 0.
* Output:
    - In case of a collision, returns "Collision detected!" as unicode.
    - Otherwise returns the stacked binary image data as a string, where the first half represents the color image and the second half the depth image, both as npy arrays (a sketch of how a client can unpack this response is given further below).

## Creating UE4 worlds
In order to easily create unreal_cv_ros compatible UE4 worlds:
* Install and use unreal engine editor **4.16** for compatibility with unrealcv. (Note: newer versions *should* work too, but without guarantees).
* Make sure the unrealcv plugin is installed **and** activated in the current project (in the Editor check: Edit > Plugins > Science > Unreal CV, see the [unrealcv docs](http://docs.unrealcv.org/en/master/plugin/install.html)).
* Set the player pawn to a flying spectator type with collision: World Settings > Game Mode > Selected GameMode > Default Pawn Class := UecvrosDefaultPawn. (If this is read-only, just change to a custom GameMode.)
    - Highly recommended: If using the `fast` mode, use the `UecvrosDefaultPawn`. The blueprint class can be downloaded from the [data repository](#Data-Repository).
    - Otherwise use the unreal engine built-in `DefaultPawn`.

## When to use which mode
The unreal_cv_ros plugin is designed to work with all sorts of unreal worlds; however, performance depends on how the world and the plugin are set up:

| Plugin | Unreal World | Method to use |
| --- | --- | --- |
| Custom | Custom | When the plugin and unreal world are modifiable, it is highly recommended to use the `fast` mode with the `UecvrosDefaultPawn`. This is not only the fastest combination, but it is also more stable and produces accurate and well-aligned pointclouds. |
| Custom | Static | If the unreal world cannot be modified (usually it uses the `DefaultPawn`), it is still recommended to use the `fast` mode, since it runs considerably faster. However, the default camera may pose obstacles, such as motion blur, that hinder the alignment of the recorded images with the requested pose. Adjust the 'slowdown' parameter of the unreal_ros_client to give unreal enough time to properly process the desired images. |
| Static | Any | If the plugin cannot be modified, use the `standard` mode and the built-in `DefaultPawn`. |
| `test` mode | Any | If you want to run the `test` mode, use the built-in `DefaultPawn` to allow for suitable user control. |

## Pawns and Cameras
Unrealcv works by capturing pawns and cameras in the unreal game, to which it attaches. Pawns are the physical entities that move around and collide with things, whereas cameras typically provide the views. Usually, pawns are not placed in the world; thus, upon game start, a pawn spawns at the PlayerStart, which is then possessed by player control and unrealcv.
* Cameras in unrealcv are stored in chronological order. If you want to access different cameras, use the 'camera_id' param of the unreal_ros_client.
* The `DefaultPawn` incorporates standard flying player control and a default view. However, it also brings along all default camera features, such as motion blur when moving (or teleporting!) the pawn.
* The `UecvrosDefaultPawn` inherits from the `DefaultPawn`. Furthermore, it has a camera attached to it that can be modified. However, this disables the default player camera orientation control (which is also the one accessed with the standard unrealcv plugin).

Custom camera settings can similarly be changed by editing the camera component. Notice that unrealcv overwrites certain camera settings (such as the resolution and field of view) using the unrealcv configuration file.

## Custom collision and camera settings
The default collision for the `DefaultPawn` and `UecvrosDefaultPawn` is a sphere of 35 cm radius. For custom collision and camera settings, you need to create your own pawn blueprint (which inherits from the respective base class). E.g. an 'easy' way to create a custom `DefaultPawn` is as follows:
1. In the Modes window, search for 'DefaultPawn' and create an instance (drag and drop it into the game world).
2. Select the pawn instance and click Blueprints > Convert selected actor to blueprint class...
3. Save the new class, e.g. in the content/blueprints folder as myDefaultPawn.
4. The class should now open in the blueprint editor, where its components can be edited.
5. To change the radius, select the 'CollisionComponent' and set Details > Shape > SphereRadius := myValue. Notice that too small a radius can allow the camera to enter certain objects, creating graphical artifacts.
6. Save the blueprint. Set World Settings > Game Mode > Selected GameMode > Default Pawn Class := myDefaultPawn.

## Static mesh collision
When creating UE4 worlds, it is worth double-checking the player collision with static mesh actors (the scene). By default, unreal produces convex hulls as simple collision objects and checks against simple collision. However, this may result in faulty collision detection. Collision can be changed to match the visible mesh as follows (this may degrade performance):
1. Right click your object in the World Outliner and select "Edit 'your object'".
2. Set Collision > Collision Complexity := Use Complex Collision As Simple.
3. Save and close the editing window.

Current collision can be visualized by changing the viewmode in the unreal editor from "Lit" to "Player Collision".
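As a concrete illustration of the `vget /uecvros/full` response format described in the [Command reference](#Command-reference) above, the following sketch shows how a client might issue the command and unpack the stacked binary answer. It is purely illustrative (the actual unpacking is done inside the unreal\_ros\_client): the pose values are arbitrary, the arguments are passed space-separated as expected by the command dispatcher, and the byte handling assumes the `unrealcv` Python client returns the raw response.

```python
# Illustrative only: issue the custom command and split the stacked npy response.
# Assumes the custom plugin build is running and camera 0 exists; pose values are arbitrary.
import io
import numpy as np
from unrealcv import client

client.connect()
resp = client.request('vget /uecvros/full 0.0 0.0 100.0 0.0 90.0 0.0 1.0 0')

# Normalize the response to bytes (the client may return str or bytes depending on version).
data = resp if isinstance(resp, bytes) else resp.encode('latin-1')
if data.startswith(b'Collision'):
    print(data.decode())             # "Collision detected!"
else:
    stream = io.BytesIO(data)
    img_color = np.load(stream)      # first stacked array: RGBA uint8 image
    img_depth = np.load(stream)      # second stacked array: float16 ray-length image
    print(img_color.shape, img_depth.shape)
```

Reading the two arrays sequentially with `np.load` works because each npy block carries its own header and length; this loosely mirrors how the sensor\_model decodes the two fields of the `UeSensorRaw` message.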
## Producing ground truth pointclouds
For simulation evaluation, ground truth meshes can be exported from the unreal editor and further processed using tools such as [CloudCompare](https://www.danielgm.net/cc/). A voxblox-compatible ground truth pointcloud can be generated as follows:
* In the **Unreal Engine Editor**,
    1. Select the objects of interest in the World Outliner.
    2. File > Export Selected...
    3. Save as a \*.obj file. (Currently there is no need to export the material too.)

* Open **CloudCompare**,
    4. File > Import > my_mesh_export.obj
    5. Use 'Edit > Multiply/Scale > 0.01' to compensate for unreal engine units (default is cm).
    6. Use 'Edit > Apply transformation' to place and rotate the object relative to the PlayerStart settings from unreal (i.e. with the origin at the PlayerStart location, x pointing towards the PlayerStart orientation, y-left and z-up).
    7. Use 'Edit > Crop' to remove meshes outside the region of interest and unreachable points, such as the ground plane.
    8. Click 'Sample points on a mesh' to create a pointcloud. (Currently there is no need to generate normals or color.)
    9. 'File > Save' and save as a \*.ply in ASCII format.

* With a **text editor**,
    10. Open my_gt_pointcloud.ply and remove the "comment Author" and "obj_info" fields (lines 3 and 4; these create errors with pcl::plyreader).

## The Unreal Coordinate System
For applications using the unreal\_ros\_client, the coordinate transformations are already implemented, so there is no need to worry about them. For development/debugging tasks: Unreal and unrealcv use the following coordinate conventions:

* **Unreal World** The default coordinate system is X-forward, Y-right, Z-up. Default units are cm.
* **Rotation Direction** Positive rotation directions around the unreal world coordinate axes are mathematically positive around the X and Y axes and negative around the Z axis.
* **Rotation parametrization** The Unreal engine interface, and therefore also the unrealcv commands, parse rotations as pitch-yaw-roll (pay attention to the order). However, inside the engine, rotations are performed as Euler-ZYX rotations (i.e. roll-pitch-yaw). Default units are degrees.

# Troubleshooting
Known issues and what to do about them:
1. **Error message "Error addressing the unrealcv client. Try restarting the game.":**
    - Make sure that only a single unreal game **or** editor is running. When using the editor, the unrealcv plugin is already loaded during startup (without the game itself running!). Since unrealcv connects to `localhost:9000` by default, the server will be blocked if any other instance of it is running.
2. **The produced pointclouds are not well aligned with the requested pose / are smeared out:**
    - If running in standard mode, consider switching to fast mode, which is not only faster but also more stable. (In standard mode, every orientation change is carried out as a rotation movement, inducing rotation offsets (such as motion blur when using the DefaultPawn camera), especially for high rotation rates.)
    - Give unreal engine more time for rendering and settling by adjusting the `slowdown` parameter of the unreal\_ros\_client. (This problem typically occurs for low rendering/frame rates in the unreal game or when using the DefaultPawn.)
3.
**I get strange collision warnings where there shouldn't be any:** 278 | - Make sure you are not using simple collision, which can greatly inflate collision bounds of objects. See [here](#Static-mesh-collision) for more info. 279 | 4. **I get collision warnings for quickly executed trajectories:** 280 | - The unreal teleport action with collision checking sweeps (as far as I know) a linear path between every request. Try increasing the unreal\_ros\_client's update rate or slowing down simulated time. 281 | 5. **Installing UE4.16: "xlocale.h not found":** 282 | - xlocale is no longer needed and this bug is fixed in future versions of UE4, removing the include statements of xlocale should fix the issue. 283 | -------------------------------------------------------------------------------- /unreal_cv_ros/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 2.8.3) 2 | project(unreal_cv_ros) 3 | 4 | find_package(catkin_simple REQUIRED) 5 | catkin_simple(ALL_DEPS_REQUIRED) 6 | 7 | catkin_package() 8 | 9 | include_directories( 10 | ${catkin_INCLUDE_DIRS} 11 | ) 12 | 13 | 14 | -------------------------------------------------------------------------------- /unreal_cv_ros/cfg/example_mav.rviz: -------------------------------------------------------------------------------- 1 | Panels: 2 | - Class: rviz/Displays 3 | Help Height: 78 4 | Name: Displays 5 | Property Tree Widget: 6 | Expanded: 7 | - /Global Options1 8 | - /Status1 9 | - /VoxbloxMesh1 10 | - /Mav Pose1 11 | Splitter Ratio: 0.5 12 | Tree Height: 565 13 | - Class: rviz/Selection 14 | Name: Selection 15 | - Class: rviz/Tool Properties 16 | Expanded: 17 | - /2D Pose Estimate1 18 | - /2D Nav Goal1 19 | - /Publish Point1 20 | Name: Tool Properties 21 | Splitter Ratio: 0.588679016 22 | - Class: rviz/Views 23 | Expanded: 24 | - /Current View1 25 | Name: Views 26 | Splitter Ratio: 0.5 27 | - Class: rviz/Time 28 | Experimental: false 29 | Name: Time 30 | SyncMode: 0 31 | SyncSource: "" 32 | Visualization Manager: 33 | Class: "" 34 | Displays: 35 | - Alpha: 0.5 36 | Cell Size: 1 37 | Class: rviz/Grid 38 | Color: 160; 160; 164 39 | Enabled: true 40 | Line Style: 41 | Line Width: 0.0299999993 42 | Value: Lines 43 | Name: Grid 44 | Normal Cell Count: 0 45 | Offset: 46 | X: 0 47 | Y: 0 48 | Z: 0 49 | Plane: XY 50 | Plane Cell Count: 10 51 | Reference Frame: 52 | Value: true 53 | - Class: voxblox_rviz_plugin/VoxbloxMesh 54 | Enabled: true 55 | Name: VoxbloxMesh 56 | Topic: /planner/voxblox_node/mesh 57 | Unreliable: false 58 | Value: true 59 | - Class: rviz/Marker 60 | Enabled: true 61 | Marker Topic: /firefly/reference_trajectory 62 | Name: Reference Trajectory 63 | Namespaces: 64 | "": true 65 | Queue Size: 100 66 | Value: true 67 | - Class: rviz/Axes 68 | Enabled: true 69 | Length: 0.400000006 70 | Name: Mav Pose 71 | Radius: 0.100000001 72 | Reference Frame: firefly/base_link 73 | Value: true 74 | - Alpha: 1 75 | Autocompute Intensity Bounds: true 76 | Autocompute Value Bounds: 77 | Max Value: 10 78 | Min Value: -10 79 | Value: true 80 | Axis: Z 81 | Channel Name: intensity 82 | Class: rviz/PointCloud2 83 | Color: 255; 255; 255 84 | Color Transformer: RGB8 85 | Decay Time: 0 86 | Enabled: false 87 | Invert Rainbow: false 88 | Max Color: 255; 255; 255 89 | Max Intensity: 4096 90 | Min Color: 0; 0; 0 91 | Min Intensity: 0 92 | Name: Sensor Output 93 | Position Transformer: XYZ 94 | Queue Size: 10 95 | Selectable: true 96 | Size (Pixels): 3 97 | Size (m): 0.00999999978 98 | 
Style: Flat Squares 99 | Topic: /unreal/unreal_sensor_model/ue_sensor_out 100 | Unreliable: false 101 | Use Fixed Frame: true 102 | Use rainbow: true 103 | Value: false 104 | Enabled: true 105 | Global Options: 106 | Background Color: 48; 48; 48 107 | Default Light: true 108 | Fixed Frame: world 109 | Frame Rate: 30 110 | Name: root 111 | Tools: 112 | - Class: rviz/Interact 113 | Hide Inactive Objects: true 114 | - Class: rviz/MoveCamera 115 | - Class: rviz/Select 116 | - Class: rviz/FocusCamera 117 | - Class: rviz/Measure 118 | - Class: rviz/SetInitialPose 119 | Topic: /initialpose 120 | - Class: rviz/SetGoal 121 | Topic: /move_base_simple/goal 122 | - Class: rviz/PublishPoint 123 | Single click: true 124 | Topic: /clicked_point 125 | Value: true 126 | Views: 127 | Current: 128 | Class: rviz/Orbit 129 | Distance: 14.6425667 130 | Enable Stereo Rendering: 131 | Stereo Eye Separation: 0.0599999987 132 | Stereo Focal Distance: 1 133 | Swap Stereo Eyes: false 134 | Value: false 135 | Focal Point: 136 | X: 1.73392141 137 | Y: -0.0900675356 138 | Z: 1.46585464 139 | Focal Shape Fixed Size: true 140 | Focal Shape Size: 0.0500000007 141 | Invert Z Axis: false 142 | Name: Current View 143 | Near Clip Distance: 0.00999999978 144 | Pitch: 0.869796634 145 | Target Frame: 146 | Value: Orbit (rviz) 147 | Yaw: 2.53589272 148 | Saved: ~ 149 | Window Geometry: 150 | Displays: 151 | collapsed: false 152 | Height: 846 153 | Hide Left Dock: false 154 | Hide Right Dock: false 155 | QMainWindow State: 000000ff00000000fd00000004000000000000016a000002c4fc0200000008fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000006100fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c0061007900730100000028000002c4000000d700fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261000000010000010f000002c4fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a005600690065007700730100000028000002c4000000ad00fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004b00000003efc0100000002fb0000000800540069006d00650100000000000004b00000030000fffffffb0000000800540069006d006501000000000000045000000000000000000000022b000002c400000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000 156 | Selection: 157 | collapsed: false 158 | Time: 159 | collapsed: false 160 | Tool Properties: 161 | collapsed: false 162 | Views: 163 | collapsed: false 164 | Width: 1200 165 | X: 82 166 | Y: 82 167 | -------------------------------------------------------------------------------- /unreal_cv_ros/cfg/example_test.rviz: -------------------------------------------------------------------------------- 1 | Panels: 2 | - Class: rviz/Displays 3 | Help Height: 78 4 | Name: Displays 5 | Property Tree Widget: 6 | Expanded: 7 | - /Global Options1 8 | - /Status1 9 | - /Unreal World Origin1 10 | - 
/Camera Pose1 11 | - /Sensor Output1 12 | Splitter Ratio: 0.5 13 | Tree Height: 565 14 | - Class: rviz/Selection 15 | Name: Selection 16 | - Class: rviz/Tool Properties 17 | Expanded: 18 | - /2D Pose Estimate1 19 | - /2D Nav Goal1 20 | - /Publish Point1 21 | Name: Tool Properties 22 | Splitter Ratio: 0.588679016 23 | - Class: rviz/Views 24 | Expanded: 25 | - /Current View1 26 | Name: Views 27 | Splitter Ratio: 0.5 28 | - Class: rviz/Time 29 | Experimental: false 30 | Name: Time 31 | SyncMode: 0 32 | SyncSource: Sensor Output 33 | Visualization Manager: 34 | Class: "" 35 | Displays: 36 | - Alpha: 0.5 37 | Cell Size: 1 38 | Class: rviz/Grid 39 | Color: 160; 160; 164 40 | Enabled: true 41 | Line Style: 42 | Line Width: 0.0299999993 43 | Value: Lines 44 | Name: Grid 45 | Normal Cell Count: 0 46 | Offset: 47 | X: 0 48 | Y: 0 49 | Z: 0 50 | Plane: XY 51 | Plane Cell Count: 10 52 | Reference Frame: 53 | Value: true 54 | - Class: rviz/Axes 55 | Enabled: true 56 | Length: 1 57 | Name: Unreal World Origin 58 | Radius: 0.100000001 59 | Reference Frame: 60 | Value: true 61 | - Class: rviz/Axes 62 | Enabled: true 63 | Length: 1 64 | Name: Camera Pose 65 | Radius: 0.100000001 66 | Reference Frame: camera_link 67 | Value: true 68 | - Alpha: 1 69 | Autocompute Intensity Bounds: true 70 | Autocompute Value Bounds: 71 | Max Value: 10 72 | Min Value: -10 73 | Value: true 74 | Axis: Z 75 | Channel Name: intensity 76 | Class: rviz/PointCloud2 77 | Color: 255; 255; 255 78 | Color Transformer: RGB8 79 | Decay Time: 0 80 | Enabled: true 81 | Invert Rainbow: false 82 | Max Color: 255; 255; 255 83 | Max Intensity: 4096 84 | Min Color: 0; 0; 0 85 | Min Intensity: 0 86 | Name: Sensor Output 87 | Position Transformer: XYZ 88 | Queue Size: 10 89 | Selectable: true 90 | Size (Pixels): 3 91 | Size (m): 0.00999999978 92 | Style: Flat Squares 93 | Topic: /unreal_sensor_model/ue_sensor_out 94 | Unreliable: false 95 | Use Fixed Frame: true 96 | Use rainbow: true 97 | Value: true 98 | Enabled: true 99 | Global Options: 100 | Background Color: 48; 48; 48 101 | Default Light: true 102 | Fixed Frame: world 103 | Frame Rate: 30 104 | Name: root 105 | Tools: 106 | - Class: rviz/Interact 107 | Hide Inactive Objects: true 108 | - Class: rviz/MoveCamera 109 | - Class: rviz/Select 110 | - Class: rviz/FocusCamera 111 | - Class: rviz/Measure 112 | - Class: rviz/SetInitialPose 113 | Topic: /initialpose 114 | - Class: rviz/SetGoal 115 | Topic: /move_base_simple/goal 116 | - Class: rviz/PublishPoint 117 | Single click: true 118 | Topic: /clicked_point 119 | Value: true 120 | Views: 121 | Current: 122 | Class: rviz/Orbit 123 | Distance: 62.5975952 124 | Enable Stereo Rendering: 125 | Stereo Eye Separation: 0.0599999987 126 | Stereo Focal Distance: 1 127 | Swap Stereo Eyes: false 128 | Value: false 129 | Focal Point: 130 | X: 0 131 | Y: 0 132 | Z: 0 133 | Focal Shape Fixed Size: true 134 | Focal Shape Size: 0.0500000007 135 | Invert Z Axis: false 136 | Name: Current View 137 | Near Clip Distance: 0.00999999978 138 | Pitch: 0.305397898 139 | Target Frame: 140 | Value: Orbit (rviz) 141 | Yaw: 4.0635767 142 | Saved: ~ 143 | Window Geometry: 144 | Displays: 145 | collapsed: false 146 | Height: 846 147 | Hide Left Dock: false 148 | Hide Right Dock: false 149 | QMainWindow State: 
000000ff00000000fd00000004000000000000016a000002c4fc0200000008fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000006100fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c0061007900730100000028000002c4000000d700fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261000000010000010f000002c4fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a005600690065007700730100000028000002c4000000ad00fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004b00000003efc0100000002fb0000000800540069006d00650100000000000004b00000030000fffffffb0000000800540069006d006501000000000000045000000000000000000000022b000002c400000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000 150 | Selection: 151 | collapsed: false 152 | Time: 153 | collapsed: false 154 | Tool Properties: 155 | collapsed: false 156 | Views: 157 | collapsed: false 158 | Width: 1200 159 | X: 65 160 | Y: 60 161 | -------------------------------------------------------------------------------- /unreal_cv_ros/content/CustomPluginCode.cpp: -------------------------------------------------------------------------------- 1 | /** Code for the custom uecvros_full command for the unrealcv plugin. Copy the following 3 code snippets into their described locations. 2 | 3 | /** 1. Add this declaration to CameraHandler.h */ 4 | 5 | /** Run a step of the uecvros routine */ 6 | FExecStatus UecvrosFull(const TArray& Args); 7 | 8 | 9 | /** 2. Add the command dispatcher in CameraHandler.cpp */ 10 | 11 | Cmd = FDispatcherDelegate::CreateRaw(this, &FCameraCommandHandler::UecvrosFull); 12 | Help = "Run the full UECVROS routine [X, Y, Z, p, y, r, coll, cameraID] Return npy image unless collided"; 13 | CommandDispatcher->BindCommand("vget /uecvros/full [float] [float] [float] [float] [float] [float] [float] [uint]", Cmd, Help); 14 | 15 | 16 | /** 3. Add the function body to CameraHandler.cpp */ 17 | 18 | FExecStatus FCameraCommandHandler::UecvrosFull(const TArray& Args) 19 | { 20 | if (Args.Num() == 8) // [X, Y, Z, p, y, r, collisionT, cameraID] 21 | { 22 | float X = FCString::Atof(*Args[0]), Y = FCString::Atof(*Args[1]), Z = FCString::Atof(*Args[2]); 23 | FVector LocNew = FVector(X, Y, Z); 24 | 25 | float p = FCString::Atof(*Args[3]), y = FCString::Atof(*Args[4]), r = FCString::Atof(*Args[5]); 26 | FRotator RotNew = FRotator(p, y, r); 27 | 28 | float tol = FCString::Atof(*Args[6]); 29 | bool sweep = tol > 0; // Is collision enabled? 
30 | 31 | // produce images and stack binary data 32 | TArray Data; 33 | TArray CamIdArgs; 34 | CamIdArgs.Init(Args[7], 1); 35 | Data += this->GetNpyBinaryUint8Data(CamIdArgs, TEXT("lit"), 4); 36 | Data += this->GetNpyBinaryFloat16Data(CamIdArgs, TEXT("depth"), 1); 37 | 38 | // Set new position & orientation 39 | APawn* Pawn = FUE4CVServer::Get().GetPawn(); 40 | FHitResult* hitresult = new FHitResult(); 41 | Pawn->SetActorLocation(LocNew, sweep, hitresult, ETeleportType::TeleportPhysics); 42 | Pawn->SetActorRotation(RotNew, ETeleportType::TeleportPhysics); 43 | AController* Controller = Pawn->GetController(); 44 | Controller->ClientSetRotation(RotNew, true); // the true is important for performance/blurr 45 | 46 | // Return results 47 | if (sweep && hitresult->IsValidBlockingHit()) 48 | { 49 | return FExecStatus::OK(FString::Printf(TEXT("Collision detected!"))); // returns OK to get the message string only} 50 | } 51 | else 52 | { 53 | return FExecStatus::Binary(Data); 54 | } 55 | } 56 | return FExecStatus::InvalidArgument; 57 | } 58 | 59 | -------------------------------------------------------------------------------- /unreal_cv_ros/content/gazebo_empty.world: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | model://sun 7 | 8 | 9 | 11 | 12 | 13 | 14 | 0 0 -9.8066 15 | 16 | 17 | quick 18 | 10 19 | 1.3 20 | 0 21 | 22 | 23 | 0 24 | 0.2 25 | 100 26 | 0.001 27 | 28 | 29 | 0.002 30 | 1.0 31 | 500 32 | 6e-06 2.3e-05 -4.2e-05 33 | 34 | 35 | 36 | -------------------------------------------------------------------------------- /unreal_cv_ros/launch/example_mav.launch: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | -------------------------------------------------------------------------------- /unreal_cv_ros/launch/example_test.launch: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | -------------------------------------------------------------------------------- /unreal_cv_ros/msg/UeSensorRaw.msg: -------------------------------------------------------------------------------- 1 | Header header 2 | 3 | # Binary npy color image of the scene 4 | string color_data 5 | 6 | # Binary npy depth image of the scene 7 | string depth_data 8 | -------------------------------------------------------------------------------- /unreal_cv_ros/package.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | unreal_cv_ros 4 | 0.0.0 5 | ROS interface to connect and stream pointclouds from an unreal engine game world, using the unrealcv api. 
6 | 7 | Lukas Schmid 8 | 9 | BSD 10 | 11 | catkin 12 | catkin_simple 13 | 14 | message_generation 15 | message_runtime 16 | rospy 17 | std_msgs 18 | sensor_msgs 19 | geometry_msgs 20 | nav_msgs 21 | trajectory_msgs 22 | gazebo_msgs 23 | std_srvs 24 | 25 | 26 | 27 | 28 | 29 | -------------------------------------------------------------------------------- /unreal_cv_ros/src/example_mav.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # ros 4 | import rospy 5 | from std_msgs.msg import String 6 | from trajectory_msgs.msg import MultiDOFJointTrajectory, MultiDOFJointTrajectoryPoint 7 | from geometry_msgs.msg import Transform 8 | import tf 9 | 10 | # Python 11 | import math 12 | import numpy as np 13 | 14 | 15 | class PathPublisher: 16 | 17 | def __init__(self): 18 | ''' Initialize ros node and read params ''' 19 | # Parse parameters 20 | self.delay = rospy.get_param('~delay', 0.0) # Waiting time before publishing the path 21 | self.speed = rospy.get_param('~mav_velocity', 0.3) # Velocity of mav m/s 22 | 23 | # publisher 24 | self.traj_pub = rospy.Publisher("command/trajectory", MultiDOFJointTrajectory, queue_size=10) 25 | 26 | # Prepare the trajectory, linear part 27 | self.traj_msg = MultiDOFJointTrajectory() 28 | self.traj_msg.joint_names = ["base_link"] 29 | sampling_rate = 20.0 30 | length = 4.5 31 | height = 0.5 32 | self.traj_time = 0.0 33 | l_curr = 0.0 34 | while l_curr < length: 35 | self.traj_time = self.traj_time + 1.0 / sampling_rate 36 | l_curr = self.speed * self.traj_time 37 | transforms = Transform() 38 | transforms.translation.x = l_curr 39 | transforms.translation.y = 0.0 40 | transforms.translation.z = height * l_curr / length 41 | transforms.rotation.x = 0.0 42 | transforms.rotation.y = 0.0 43 | transforms.rotation.z = 0.0 44 | transforms.rotation.w = 1.0 45 | point = MultiDOFJointTrajectoryPoint() 46 | point.transforms = [transforms] 47 | point.time_from_start = rospy.Duration(self.traj_time) 48 | self.traj_msg.points.append(point) 49 | 50 | # circular part 51 | self.traj_msg2 = MultiDOFJointTrajectory() 52 | self.traj_msg2.joint_names = ["base_link"] 53 | r_max = 1.5 54 | r_min = 0.5 55 | n_rotation = 1.5 56 | phi_curr = 0.0 57 | self.traj_time2 = 0.0 58 | while phi_curr <= n_rotation * 2 * math.pi: 59 | self.traj_time2 = self.traj_time2 + 1.0 / sampling_rate 60 | r_curr = r_max - (r_max-r_min) * phi_curr / (n_rotation * 2 * math.pi) 61 | phi_curr = phi_curr + self.speed / r_curr / sampling_rate 62 | transforms = Transform() 63 | transforms.translation.x = r_curr * math.sin(phi_curr) + length 64 | transforms.translation.y = r_curr * math.cos(phi_curr) - r_max 65 | transforms.translation.z = height 66 | quaternion = tf.transformations.quaternion_from_euler(0, 0, -phi_curr) 67 | transforms.rotation.x = quaternion[0] 68 | transforms.rotation.y = quaternion[1] 69 | transforms.rotation.z = quaternion[2] 70 | transforms.rotation.w = quaternion[3] 71 | point = MultiDOFJointTrajectoryPoint() 72 | point.transforms = [transforms] 73 | point.time_from_start = rospy.Duration(self.traj_time2) 74 | self.traj_msg2.points.append(point) 75 | 76 | self.launch_simulation() 77 | 78 | def launch_simulation(self): 79 | # Wait for unreal simulation to setup 80 | rospy.loginfo("Example Path Publisher: waiting for unreal MAV simulationto setup...") 81 | rospy.wait_for_message("unreal_simulation_ready", String) 82 | rospy.loginfo("Example Path Publisher: Waiting for unreal MAV simulationto setup... 
done.") 83 | 84 | if self.delay > 0: 85 | rospy.loginfo("Example Path Publisher: Launch in %d seconds.", self.delay) 86 | rospy.sleep(self.delay) 87 | 88 | # Publish the trajectory 89 | rospy.loginfo("Example Path Publisher: Succesfully started the simulation!") 90 | self.traj_msg.header.stamp = rospy.Time.now() 91 | self.traj_pub.publish(self.traj_msg) 92 | rospy.loginfo("Example Path Publisher: Requested Trajectory part 1/2.") 93 | rospy.sleep(self.traj_time) 94 | self.traj_msg2.header.stamp = rospy.Time.now() 95 | self.traj_pub.publish(self.traj_msg2) 96 | rospy.loginfo("Example Path Publisher: Requested Trajectory part 2/2.") 97 | rospy.sleep(self.traj_time2) 98 | rospy.loginfo("Example Path Publisher: Finished trajectory execution!") 99 | 100 | 101 | if __name__ == '__main__': 102 | rospy.init_node('path_publisher', anonymous=True) 103 | pp = PathPublisher() 104 | 105 | -------------------------------------------------------------------------------- /unreal_cv_ros/src/sensor_model.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # ros 4 | import rospy 5 | from unreal_cv_ros.msg import UeSensorRaw 6 | from sensor_msgs.msg import PointCloud2, PointField, Image 7 | 8 | # Image conversion 9 | import io 10 | import cv2 11 | import cv_bridge 12 | 13 | # Python 14 | import sys 15 | import math 16 | import numpy as np 17 | from struct import pack, unpack 18 | import time 19 | 20 | 21 | class SensorModel: 22 | 23 | def __init__(self): 24 | ''' Initialize ros node and read params ''' 25 | 26 | # Read in params 27 | model_type_in = rospy.get_param('~model_type', 'ground_truth') 28 | camera_params_ns = rospy.get_param('~camera_params_ns', rospy.get_namespace()+"unreal_ros_client/camera_params") 29 | self.publish_color_images = rospy.get_param('~publish_color_images', False) 30 | self.publish_gray_images = rospy.get_param('~publish_gray_images', False) 31 | self.maximum_distance = rospy.get_param('~maximum_distance', 0) # Set to 0 to keep all points 32 | self.flatten_distance = rospy.get_param('~flatten_distance', 0) # Set to 0 to ignore 33 | 34 | # Setup sensor type 35 | model_types = {'ground_truth': 'ground_truth', 'kinect': 'kinect', 36 | 'gaussian_depth_noise': 'gaussian_depth_noise'} # Dictionary of implemented models 37 | selected = model_types.get(model_type_in, 'NotFound') 38 | if selected == 'NotFound': 39 | warning = "Unknown sensor model '" + model_type_in + "'. 
Implemented models are: " + \ 40 | "".join(["'" + m + "', " for m in model_types]) 41 | rospy.logfatal(warning[:-2]) 42 | else: 43 | self.model = selected 44 | 45 | # Model dependent params 46 | if self.model == 'gaussian_depth_noise': 47 | # coefficients for polynomial, f(z) = k0 + k1z + k2z^2 + k3z^3 48 | self.coefficients = np.array([0.0]*8) 49 | for i in range(4): 50 | self.coefficients[i] = rospy.get_param('~k_mu_%i' % i, 0.0) 51 | self.coefficients[4 + i] = rospy.get_param('~k_sigma_%i' % i, 0.0) 52 | 53 | # Initialize camera params from params or wait for unreal_ros_client to publish them 54 | if not rospy.has_param(camera_params_ns+'width'): 55 | rospy.loginfo("Waiting for unreal camera params at '%s' ...", camera_params_ns) 56 | while not rospy.has_param(camera_params_ns+'/width'): 57 | rospy.sleep(0.1) 58 | self.camera_params = [rospy.get_param(camera_params_ns+'/width'), rospy.get_param(camera_params_ns+'/height'), 59 | rospy.get_param(camera_params_ns+'/focal_length')] 60 | 61 | # Initialize node 62 | self.pub = rospy.Publisher("~ue_sensor_out", PointCloud2, queue_size=10) 63 | self.sub = rospy.Subscriber("ue_sensor_raw", UeSensorRaw, self.callback, queue_size=10) 64 | if self.publish_color_images or self.publish_gray_images: 65 | self.cv_bridge = cv_bridge.CvBridge() 66 | if self.publish_color_images: 67 | self.color_img_pub = rospy.Publisher("~ue_color_image_out", Image, queue_size=10) 68 | if self.publish_gray_images: 69 | self.gray_img_pub = rospy.Publisher("~ue_gray_image_out", Image, queue_size=10) 70 | 71 | rospy.loginfo("Sensor model setup cleanly.") 72 | 73 | def callback(self, ros_data): 74 | ''' Produce simulated sensor outputs from raw binary data ''' 75 | # Read out images 76 | img_color = np.load(io.BytesIO(bytearray(ros_data.color_data))) 77 | img_depth = np.load(io.BytesIO(bytearray(ros_data.depth_data))) 78 | mask_depth = img_depth.reshape(-1) 79 | 80 | # Build 3D point cloud from depth 81 | if self.flatten_distance > 0: 82 | img_depth = np.clip(img_depth, 0, self.flatten_distance) 83 | (x, y, z) = self.depth_to_3d(img_depth) 84 | 85 | # Pack RGB image (for ros representation) 86 | rgb = self.rgb_to_float(img_color) 87 | 88 | # Remove invalid points 89 | if self.maximum_distance > 0: 90 | mask = mask_depth <= self.maximum_distance 91 | x = x[mask] 92 | y = y[mask] 93 | z = z[mask] 94 | rgb = rgb[mask] 95 | 96 | # Sensor model processing, put other processing functions here 97 | if self.model == 'kinect': 98 | x, y, z, rgb = self.process_kinect(x, y, z, rgb) 99 | elif self.model == 'gaussian_depth_noise': 100 | z = self.process_gaussian_depth_noise(z) 101 | 102 | # Publish pointcloud 103 | data = np.transpose(np.vstack((x, y, z, rgb))) 104 | msg = PointCloud2() 105 | msg.header.stamp = ros_data.header.stamp 106 | msg.header.frame_id = 'camera' 107 | msg.width = data.shape[0] 108 | msg.height = 1 109 | msg.fields = [ 110 | PointField('x', 0, PointField.FLOAT32, 1), 111 | PointField('y', 4, PointField.FLOAT32, 1), 112 | PointField('z', 8, PointField.FLOAT32, 1), 113 | PointField('rgb', 12, PointField.FLOAT32, 1)] 114 | msg.is_bigendian = False 115 | msg.point_step = 16 116 | msg.row_step = msg.point_step * msg.width 117 | msg.is_dense = True 118 | msg.data = np.float32(data).tostring() 119 | self.pub.publish(msg) 120 | 121 | # If requested, also publish the image 122 | if self.publish_color_images: 123 | img_msg = self.cv_bridge.cv2_to_imgmsg(img_color, "rgba8") 124 | img_msg.header.stamp = ros_data.header.stamp 125 | img_msg.header.frame_id = 'camera' 126 | 
self.color_img_pub.publish(img_msg) 127 | if self.publish_gray_images: 128 | img_msg = self.cv_bridge.cv2_to_imgmsg(cv2.cvtColor(img_color[:, :, 0:3], cv2.COLOR_RGB2GRAY), "mono8") 129 | img_msg.header.stamp = ros_data.header.stamp 130 | img_msg.header.frame_id = 'camera' 131 | self.gray_img_pub.publish(img_msg) 132 | 133 | def depth_to_3d(self, img_depth): 134 | ''' Create point cloud from depth image and camera params. Returns a single array for x, y and z coords ''' 135 | # read camera params and create image mesh 136 | height = self.camera_params[1] 137 | width = self.camera_params[0] 138 | center_x = width/2 139 | center_y = height/2 140 | f = self.camera_params[2] 141 | cols, rows = np.meshgrid(np.linspace(0, width - 1, num=width), np.linspace(0, height - 1, num=height)) 142 | 143 | # Process depth image from ray length to camera axis depth 144 | distance = ((rows - center_y) ** 2 + (cols - center_x) ** 2) ** 0.5 145 | points_z = img_depth / (1 + (distance / f) ** 2) ** 0.5 146 | 147 | # Create x and y position 148 | points_x = points_z * (cols - center_x) / f 149 | points_y = points_z * (rows - center_y) / f 150 | 151 | return points_x.reshape(-1), points_y.reshape(-1), points_z.reshape(-1) 152 | 153 | @staticmethod 154 | def rgb_to_float(img_color): 155 | ''' Stack uint8 rgb image into a single float array (efficiently) for ros compatibility ''' 156 | r = np.ravel(img_color[:, :, 0]).astype(int) 157 | g = np.ravel(img_color[:, :, 1]).astype(int) 158 | b = np.ravel(img_color[:, :, 2]).astype(int) 159 | color = np.left_shift(r, 16) + np.left_shift(g, 8) + b 160 | packed = pack('%di' % len(color), *color) 161 | unpacked = unpack('%df' % len(color), packed) 162 | return np.array(unpacked) 163 | 164 | @staticmethod 165 | def process_kinect(x, y, z, rgb): 166 | # Simulate a kinect sensor model according to this paper: https://ieeexplore.ieee.org/abstract/document/6375037 167 | # The effect of the target plane orientation theta is neglected (its constant until theta ~ 60/70 deg). 168 | mask = (z > 0.5) & (z < 3.0) 169 | x = x[mask] 170 | y = y[mask] 171 | z = z[mask] 172 | rgb = rgb[mask] 173 | sigma_l = 0.0014 * z 174 | sigma_z = 0.0012 + 0.0019 * (z - 0.4) ** 2 175 | mu = np.zeros(np.shape(z)) 176 | dx = np.random.normal(mu, sigma_l) 177 | dy = np.random.normal(mu, sigma_l) 178 | dz = np.random.normal(mu, sigma_z) 179 | return x+dx, y+dy, z+dz, rgb 180 | 181 | def process_gaussian_depth_noise(self, z_in): 182 | # Add a depth dependent guassian error term to the perceived depth. Mean and stddev can be specified as up to 183 | # deg3 polynomials. 
184 | mu = np.ones(np.shape(z_in)) * self.coefficients[0] 185 | sigma = np.ones(np.shape(z_in)) * self.coefficients[4] 186 | for i in range(1, 4): 187 | if self.coefficients[i] != 0: 188 | mu = mu + np.power(z_in, i) * self.coefficients[i] 189 | if self.coefficients[4 + i] != 0: 190 | sigma = sigma + np.power(z_in, i) * self.coefficients[4 + i] 191 | return z_in + np.random.normal(mu, sigma) 192 | 193 | 194 | if __name__ == '__main__': 195 | rospy.init_node('sensor_model', anonymous=True) 196 | sm = SensorModel() 197 | rospy.spin() 198 | 199 | -------------------------------------------------------------------------------- /unreal_cv_ros/src/simulation_manager.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # ros 4 | import rospy 5 | from std_msgs.msg import String 6 | from trajectory_msgs.msg import MultiDOFJointTrajectory, MultiDOFJointTrajectoryPoint 7 | from geometry_msgs.msg import Pose, Twist, Transform 8 | from nav_msgs.msg import Odometry 9 | from sensor_msgs.msg import Imu 10 | from gazebo_msgs.msg import ModelState 11 | from unreal_cv_ros.msg import UeSensorRaw 12 | from sensor_msgs.msg import PointCloud2 13 | from std_srvs.srv import Empty 14 | from gazebo_msgs.srv import SetModelState, GetModelState 15 | import tf 16 | 17 | # Python 18 | import sys 19 | import math 20 | import numpy as np 21 | import time 22 | from collections import deque 23 | 24 | 25 | class SimulationManager: 26 | 27 | def __init__(self): 28 | ''' Initialize ros node and read params ''' 29 | # Parse parameters 30 | self.ns_gazebo = rospy.get_param('~ns_gazebo', "/gazebo") 31 | self.ns_mav = rospy.get_param('~ns_mav', "/firefly") 32 | self.regulate = rospy.get_param('~regulate', False) # Manage odom throughput for unreal_ros_client 33 | self.monitor = rospy.get_param('~monitor', False) # Measure performance of unreal pipeline 34 | self.initial_position = rospy.get_param('~initial_position', [0, 0, 0]) # x, y, z [m] 35 | 36 | if self.monitor: 37 | self.horizon = rospy.get_param('~horizon', 10) # How many messages are kept for monitoring 38 | 39 | # Monitoring arrays 40 | self.mon_client_rate_real = deque(maxlen=self.horizon) 41 | self.mon_client_rate_ros = deque(maxlen=self.horizon) 42 | self.mon_client_prev_real = None 43 | self.mon_client_prev_ros = None 44 | self.mon_client_delay_ros = deque(maxlen=self.horizon) 45 | self.mon_client_time = deque(maxlen=self.horizon) 46 | self.mon_sensor_rate_real = deque(maxlen=self.horizon) 47 | self.mon_sensor_rate_ros = deque(maxlen=self.horizon) 48 | self.mon_sensor_prev_real = None 49 | self.mon_sensor_prev_ros = None 50 | self.mon_sensor_delay_ros = deque(maxlen=self.horizon) 51 | self.mon_sensor_time = deque(maxlen=self.horizon) 52 | 53 | # Subscribers 54 | self.ue_raw_sub = rospy.Subscriber("ue_raw_in", UeSensorRaw, self.mon_raw_callback, queue_size=10) 55 | self.ue_out_sub = rospy.Subscriber("ue_out_in", PointCloud2, self.mon_out_callback, queue_size=10) 56 | 57 | # Printing service 58 | rospy.Service('~display_monitor', Empty, self.mon_print_handle) 59 | 60 | if self.regulate: 61 | # Subscribers and publishers 62 | self.odom_sub = rospy.Subscriber("odom_in", Odometry, self.odom_callback, queue_size=10) 63 | self.odom_pub = rospy.Publisher("~odom_out", Odometry, queue_size=10) 64 | 65 | # Run the startup 66 | self.ready_pub = rospy.Publisher("~simulation_ready", String, queue_size=10) 67 | self.launch_simulation() 68 | 69 | def launch_simulation(self): 70 | # Wait for Gazebo services 71 | 
rospy.loginfo("Starting unreal MAV simulation setup coordination...") 72 | rospy.wait_for_service(self.ns_gazebo + "/unpause_physics") 73 | rospy.wait_for_service(self.ns_gazebo + "/set_model_state") 74 | 75 | # Prepare initialization trajectory command 76 | traj_pub = rospy.Publisher(self.ns_mav + "/command/trajectory", MultiDOFJointTrajectory, queue_size=10) 77 | traj_msg = MultiDOFJointTrajectory() 78 | traj_msg.joint_names = ["base_link"] 79 | transforms = Transform() 80 | transforms.translation.x = self.initial_position[0] 81 | transforms.translation.y = self.initial_position[1] 82 | transforms.translation.z = self.initial_position[2] 83 | point = MultiDOFJointTrajectoryPoint([transforms], [Twist()], [Twist()], rospy.Duration(0)) 84 | traj_msg.points.append(point) 85 | 86 | # Prepare initialization Get/SetModelState service 87 | set_model_srv = rospy.ServiceProxy(self.ns_gazebo + "/set_model_state", SetModelState) 88 | get_model_srv = rospy.ServiceProxy(self.ns_gazebo + "/get_model_state", GetModelState) 89 | mav_name = self.ns_mav[np.max([i for i in range(len(self.ns_mav)) if self.ns_mav[i] == "/"]) + 1:] 90 | pose = Pose() 91 | pose.position.x = self.initial_position[0] 92 | pose.position.y = self.initial_position[1] 93 | pose.position.z = self.initial_position[2] 94 | model_state_set = ModelState(mav_name, pose, Twist(), "world") 95 | 96 | # Wake up gazebo 97 | rospy.loginfo("Waiting for gazebo to wake up ...") 98 | unpause_srv = rospy.ServiceProxy(self.ns_gazebo + "/unpause_physics", Empty) 99 | while not unpause_srv(): 100 | rospy.sleep(0.1) 101 | rospy.loginfo("Waiting for gazebo to wake up ... done.") 102 | 103 | # Wait for drone to spawn (imu is publishing) 104 | rospy.loginfo("Waiting for MAV to spawn ...") 105 | rospy.wait_for_message(self.ns_mav + "/imu", Imu) 106 | rospy.loginfo("Waiting for MAV to spawn ... done.") 107 | 108 | # Initialize drone stable at [0, 0, 0] 109 | rospy.loginfo("Waiting for MAV to stabilize ...") 110 | dist = 10 # Position and velocity 111 | while dist >= 0.1: 112 | traj_msg.header.stamp = rospy.Time.now() 113 | traj_pub.publish(traj_msg) 114 | set_model_srv(model_state_set) 115 | rospy.sleep(0.1) 116 | state = get_model_srv(mav_name, "world") 117 | pos = state.pose.position 118 | twist = state.twist.linear 119 | dist = np.sqrt((pos.x - self.initial_position[0]) ** 2 + (pos.y - self.initial_position[1]) ** 2 + 120 | (pos.z - self.initial_position[2]) ** 2) + np.sqrt( 121 | twist.x ** 2 + twist.y ** 2 + twist.z ** 2) 122 | rospy.loginfo("Waiting for MAV to stabilize ... done.") 123 | 124 | # Wait for unreal client 125 | rospy.loginfo("Waiting for unreal client to setup ...") 126 | rospy.wait_for_message("ue_out_in", PointCloud2) 127 | rospy.loginfo("Waiting for unreal client to setup ... 
done.") 128 | 129 | # Finish 130 | self.ready_pub.publish(String("Simulation Ready")) 131 | rospy.loginfo("Unreal MAV simulation setup successfully.") 132 | 133 | def mon_raw_callback(self, ros_data): 134 | # Measure walltime and rostime between callbacks 135 | time_real = time.time() 136 | time_ros = rospy.get_time() 137 | if self.mon_client_prev_real is None: 138 | # Initialization 139 | self.mon_client_prev_real = time_real 140 | self.mon_client_prev_ros = time_ros 141 | return 142 | 143 | self.mon_client_rate_real.append(time_real - self.mon_client_prev_real) 144 | self.mon_client_prev_real = time_real 145 | self.mon_client_rate_ros.append(time_ros - self.mon_client_prev_ros) 146 | self.mon_client_prev_ros = time_ros 147 | self.mon_client_delay_ros.append(time_ros - ros_data.header.stamp.to_sec()) 148 | self.mon_client_time.append(time_ros) 149 | 150 | def mon_out_callback(self, ros_data): 151 | # Measure walltime and rostime between callbacks 152 | time_real = time.time() 153 | time_ros = rospy.get_time() 154 | if self.mon_sensor_prev_real is None: 155 | # Initialization 156 | self.mon_sensor_prev_real = time_real 157 | self.mon_sensor_prev_ros = time_ros 158 | return 159 | 160 | self.mon_sensor_rate_real.append(time_real - self.mon_sensor_prev_real) 161 | self.mon_sensor_prev_real = time_real 162 | self.mon_sensor_rate_ros.append(time_ros - self.mon_sensor_prev_ros) 163 | self.mon_sensor_prev_ros = time_ros 164 | self.mon_sensor_delay_ros.append(time_ros - ros_data.header.stamp.to_sec()) 165 | self.mon_sensor_time.append(time_ros) 166 | 167 | def mon_print_handle(self, _): 168 | print("=" * 14 + " performance monitor " + "=" * 14) 169 | print("Fields: [Hz] / [s] avg - std - min - max ") 170 | values = 1.0 / np.array(self.mon_client_rate_real) 171 | if len(values) > 0: 172 | print("Client rate (wall): {0:05.2f} - {1:05.2f} - {2:05.2f} - {3:05.2f}" 173 | .format(np.mean(values), np.std(values), np.min(values), np.max(values))) 174 | values = 1.0 / np.array(self.mon_client_rate_ros) 175 | if len(values) > 0: 176 | print("Client rate (ros): {0:05.2f} - {1:05.2f} - {2:05.2f} - {3:05.2f}" 177 | .format(np.mean(values), np.std(values), np.min(values), np.max(values))) 178 | values = np.array(self.mon_client_delay_ros) 179 | if len(values) > 0: 180 | print("Client delay (ros): {0:05.2f} - {1:05.2f} - {2:05.2f} - {3:05.2f}" 181 | .format(np.mean(values), np.std(values), np.min(values), np.max(values))) 182 | values = 1.0 / np.array(self.mon_sensor_rate_real) 183 | if len(values) > 0: 184 | print("Sensor rate (wall): {0:05.2f} - {1:05.2f} - {2:05.2f} - {3:05.2f}" 185 | .format(np.mean(values), np.std(values), np.min(values), np.max(values))) 186 | values = 1.0 / np.array(self.mon_sensor_rate_ros) 187 | if len(values) > 0: 188 | print("Sensor rate (ros): {0:05.2f} - {1:05.2f} - {2:05.2f} - {3:05.2f}" 189 | .format(np.mean(values), np.std(values), np.min(values), np.max(values))) 190 | values = np.array(self.mon_sensor_delay_ros) 191 | if len(values) > 0: 192 | print("Sensor delay (ros): {0:05.2f} - {1:05.2f} - {2:05.2f} - {3:05.2f}" 193 | .format(np.mean(values), np.std(values), np.min(values), np.max(values))) 194 | print("=" * 49) 195 | return [] 196 | 197 | 198 | if __name__ == '__main__': 199 | rospy.init_node('simulation_manager', anonymous=True) 200 | sm = SimulationManager() 201 | rospy.spin() 202 | -------------------------------------------------------------------------------- /unreal_cv_ros/src/unreal_ros_client.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # Unrealcv 4 | from unrealcv import client 5 | 6 | # ros 7 | import rospy 8 | from std_msgs.msg import String 9 | from nav_msgs.msg import Odometry 10 | from geometry_msgs.msg import PoseStamped 11 | from unreal_cv_ros.msg import UeSensorRaw 12 | from std_srvs.srv import SetBool 13 | import tf 14 | 15 | # Python 16 | import sys 17 | import math 18 | import numpy as np 19 | import time 20 | 21 | 22 | class UnrealRosClient: 23 | 24 | def __init__(self): 25 | ''' Initialize ros node, unrealcv client and params ''' 26 | rospy.on_shutdown(self.reset_client) 27 | self.should_terminate = False 28 | 29 | # Read in params 30 | self.mode = rospy.get_param('~mode', "standard") # Client mode (test, standard, fast, generate) 31 | self.collision_on = rospy.get_param('~collision_on', True) # Check for collision 32 | self.publish_tf = rospy.get_param('~publish_tf', False) # If true publish the camera transformation in tf 33 | self.slowdown = rospy.get_param('~slowdown', 0.0) # Artificially slow down the request rate to give UE time to finish rendering 34 | self.camera_id = rospy.get_param('~camera_id', 0) # CameraID for unrealcv compatibility (usually use 0) 35 | self.queue_size = rospy.get_param('~queue_size', 1) # How many requests are kept 36 | 37 | # Select client mode 38 | mode_types = {'standard': 'standard', 'fast': 'fast', 'test': 'test', 'generate': 'generate'} 39 | selected = mode_types.get(self.mode, 'NotFound') 40 | if selected == 'NotFound': 41 | warning = "Unknown client mode '" + self.mode + "'. Implemented modes are: " + \ 42 | "".join(["'" + m + "', " for m in mode_types]) 43 | rospy.logfatal(warning[:-2]) 44 | 45 | # Setup unrealcv client 46 | client.connect() 47 | if not client.isconnected(): 48 | rospy.logfatal("No unreal game running to connect to. Please start a game before launching the node.") 49 | 50 | status = client.request('vget /unrealcv/status') 51 | if status is None: 52 | rospy.logfatal("Error addressing the unrealcv client. Try restarting the game.") 53 | 54 | rospy.loginfo("Unrealcv client status:\n" + status) 55 | 56 | # Setup camera parameters from unrealcv config 57 | loc_width = status.find('Width:') 58 | loc_height = status.find('Height:') 59 | loc_fov = status.find('FOV:') 60 | loc_end = status.find('EnableInput:') 61 | width = int(status[loc_width + 7:loc_height]) 62 | height = int(status[loc_height + 8:loc_fov]) 63 | fov = float(status[loc_fov + 5:loc_end]) 64 | f = width / 2 / np.tan(fov * math.pi / 180 / 2) 65 | rospy.set_param('~camera_params', {'width': float(width), 'height': float(height), 'focal_length': float(f)}) 66 | 67 | # Initialize relative coordinate system (so camera starts at [0, 0, 0] position and [0, 0, yaw]).
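# The current in-game camera pose is queried once and stored as coord_origin / coord_yaw; all incoming ROS poses
# are later expressed relative to this starting pose (see transform_to_unreal). Pitch and roll are zeroed here so
# that only the initial yaw offset is kept.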
68 | location = client.request('vget /camera/%i/location' % self.camera_id) 69 | self.coord_origin = np.array([float(x) for x in str(location).split(' ')]) 70 | rot = client.request('vget /camera/%i/rotation' % self.camera_id) 71 | self.coord_yaw = float(str(rot).split(' ')[1]) 72 | client.request("vset /camera/{0:d}/rotation 0 {1:f} 0".format(self.camera_id, self.coord_yaw)) 73 | 74 | # tf broadcaster 75 | if self.mode == 'test' or self.publish_tf: 76 | self.tf_br = tf.TransformBroadcaster() # Publish camera transforms in tf 77 | 78 | # Setup mode 79 | if self.mode == 'test': 80 | rospy.Timer(rospy.Duration(0.01), self.test_callback) # Try to capture at 100 Hz 81 | 82 | elif self.mode == 'generate': 83 | rospy.Timer(rospy.Duration(0.05), self.generate_traj_callback) # 20 Hz capture 84 | self.posepub = rospy.Publisher("~ue_sensor_pose", PoseStamped, queue_size=10) 85 | 86 | elif self.mode == 'standard': 87 | self.sub = rospy.Subscriber("odometry", Odometry, self.odom_callback, queue_size=self.queue_size, 88 | buff_size=(2 ** 24) * self.queue_size) 89 | # The buff_size needs to be large enough to fit all queued messages, otherwise old messages accumulate in the buffer and the callback processes stale odometry 90 | self.previous_odom_msg = None # Previously processed Odom message 91 | self.collision_tolerance = rospy.get_param('~collision_tol', 10) # Distance threshold in UE units 92 | 93 | elif self.mode == 'fast': 94 | self.sub = rospy.Subscriber("odometry", Odometry, self.fast_callback, queue_size=self.queue_size, 95 | buff_size=(2 ** 24) * self.queue_size) 96 | self.previous_odom_msg = None # Previously processed Odom message 97 | 98 | # Finish setup 99 | self.pub = rospy.Publisher("~ue_sensor_raw", UeSensorRaw, queue_size=10) 100 | rospy.Service('~terminate_with_reset', SetBool, self.terminate_with_reset_srv) 101 | if self.collision_on: 102 | self.collision_pub = rospy.Publisher("~collision", String, queue_size=10) 103 | rospy.loginfo("unreal_ros_client is ready in %s mode." % self.mode) 104 | 105 | def fast_callback(self, ros_data): 106 | ''' Use the custom unrealcv command to get images and collision checks. The vget /uecvros/full command returns the images 107 | and then sets the new pose, so the published data always corresponds to the previously received odometry message.
''' 108 | if self.should_terminate: 109 | return 110 | # Slowdown to give more rendering time to the unreal engine 111 | time.sleep(self.slowdown) 112 | 113 | # Get pose in unreal coords 114 | position, orientation = self.transform_to_unreal(ros_data.pose.pose) 115 | 116 | # Call the plugin (takes images and then sets the new position) 117 | args = np.concatenate((position, orientation, np.array([float(self.collision_on)])), axis=0) 118 | result = client.request("vget /uecvros/full" + "".join([" {0:f}".format(x) for x in args]) + " " + 119 | str(self.camera_id)) 120 | 121 | if self.previous_odom_msg is not None: 122 | # This is not the initialization step 123 | if isinstance(result, unicode): 124 | # Check for collision or errors 125 | if result == "Collision detected!": 126 | self.on_collision() 127 | else: 128 | rospy.logerr("Unrealcv error: " + result) 129 | elif isinstance(result, str): 130 | # Successful: Publish data for previous pose 131 | idx = int(len(result) / 2) # Half of the bytes are the 4channel color img, half the depth 132 | msg = UeSensorRaw() 133 | msg.header.stamp = self.previous_odom_msg.header.stamp 134 | msg.color_data = result[:idx] 135 | msg.depth_data = result[idx:] 136 | self.pub.publish(msg) 137 | else: 138 | rospy.logerr("Unknown return format from unrealcv {0}".format(type(result))) 139 | 140 | self.previous_odom_msg = ros_data 141 | 142 | if self.publish_tf: 143 | self.publish_tf_data(ros_data) 144 | 145 | def odom_callback(self, ros_data): 146 | ''' Produce images for given odometry ''' 147 | if self.should_terminate: 148 | return 149 | # Slowdown to give more rendering time to the unreal engine (should not be necessary in normal mode) 150 | time.sleep(self.slowdown) 151 | 152 | # Get pose in unreal coords 153 | position, orientation = self.transform_to_unreal(ros_data.pose.pose) 154 | 155 | # Generate images 156 | if self.previous_odom_msg is not None: 157 | # This is not the initialization step 158 | self.publish_images(self.previous_odom_msg.header.stamp) 159 | 160 | if self.publish_tf: 161 | self.publish_tf_data(ros_data) 162 | 163 | self.previous_odom_msg = ros_data 164 | 165 | # Set camera in unrealcv 166 | if self.collision_on: 167 | client.request('vset /camera/{0:d}/moveto {1:f} {2:f} {3:f}'.format(self.camera_id, *position)) 168 | 169 | # Check collision 170 | position_eff = client.request('vget /camera/%d/location' % self.camera_id) 171 | position_eff = np.array([float(x) for x in str(position_eff).split(' ')]) 172 | if np.linalg.norm(position - position_eff) >= self.collision_tolerance: 173 | self.on_collision() 174 | else: 175 | client.request("vset /camera/{0:d}/location {1:f} {2:f} {3:f}".format(self.camera_id, *position)) 176 | 177 | client.request("vset /camera/{0:d}/rotation {1:f} {2:f} {3:f}".format(self.camera_id, *orientation)) 178 | 179 | def generate_traj_callback(self, _): 180 | ''' Produce poses from unreal in-game controlled camera movement to be recorded and used as human defined planning''' 181 | # Get current UE pose 182 | pose = client.request('vget /camera/%d/pose' % self.camera_id) 183 | pose = np.array([float(x) for x in str(pose).split(' ')]) 184 | position = pose[:3] 185 | orientation = pose[3:] 186 | 187 | timestamp = rospy.Time.now() 188 | position, orientation = self.transform_from_unreal(position, orientation) 189 | pose = PoseStamped() 190 | pose.header.stamp = timestamp 191 | pose.header.frame_id = "cam" 192 | pose.pose.position.x = position[0] 193 | pose.pose.position.y = position[1] 194 | pose.pose.position.z = 
position[2] 195 | 196 | pose.pose.orientation.x = orientation[0] 197 | pose.pose.orientation.y = orientation[1] 198 | pose.pose.orientation.z = orientation[2] 199 | pose.pose.orientation.w = orientation[3] 200 | 201 | self.posepub.publish(pose) 202 | 203 | 204 | def test_callback(self, _): 205 | ''' Produce images and broadcast odometry from unreal in-game controlled camera movement ''' 206 | # Get current UE pose 207 | pose = client.request('vget /camera/%d/pose' % self.camera_id) 208 | pose = np.array([float(x) for x in str(pose).split(' ')]) 209 | position = pose[:3] 210 | orientation = pose[3:] 211 | 212 | timestamp = rospy.Time.now() 213 | self.publish_images(timestamp) 214 | 215 | position, orientation = self.transform_from_unreal(position, orientation) 216 | self.tf_br.sendTransform((position[0], position[1], position[2]), 217 | (orientation[0], orientation[1], orientation[2], orientation[3]), 218 | timestamp, "camera_link", "world") 219 | 220 | def publish_images(self, header_stamp): 221 | ''' Produce and publish images for test and standard mode''' 222 | # Retrieve images 223 | res_color = client.request('vget /camera/%d/lit npy' % self.camera_id) 224 | res_depth = client.request('vget /camera/%d/depth npy' % self.camera_id) 225 | 226 | # Publish data 227 | msg = UeSensorRaw() 228 | msg.header.stamp = header_stamp 229 | msg.color_data = res_color 230 | msg.depth_data = res_depth 231 | self.pub.publish(msg) 232 | 233 | def publish_tf_data(self, odom_msg): 234 | pos = odom_msg.pose.pose.position 235 | rot = odom_msg.pose.pose.orientation 236 | self.tf_br.sendTransform((pos.x, pos.y, pos.z), (rot.x, rot.y, rot.z, rot.w), odom_msg.header.stamp, 237 | "camera_link", "world") 238 | 239 | def transform_to_unreal(self, pose): 240 | ''' 241 | Transform from ros to default unreal coordinates. 
242 | Input: ros pose in global frame 243 | Output: position ([x, y, z] array, in unreal coordinates) 244 | orientation ([pitch, yaw, roll] array in unreal coordinates) 245 | ''' 246 | # Read out vectors 247 | x = pose.position.x 248 | y = pose.position.y 249 | z = pose.position.z 250 | orientation = [pose.orientation.x, pose.orientation.y, pose.orientation.z, pose.orientation.w] 251 | 252 | # Transformation of position to relative UE coordsys 253 | yaw = self.coord_yaw / 180 * math.pi 254 | position = np.array([math.cos(yaw) * x + math.sin(yaw) * y, math.sin(yaw) * x - math.cos(yaw) * y, z]) 255 | position = position * 100 + self.coord_origin # default units are cm 256 | 257 | # Transformation of orientation to relative UE coordsys 258 | q_rot = tf.transformations.quaternion_from_euler(0, 0, -yaw) 259 | orientation = tf.transformations.quaternion_multiply(q_rot, orientation) 260 | (r, p, y) = tf.transformations.euler_from_quaternion(orientation) 261 | orientation = np.array([-p, -y, r]) * 180 / math.pi # default units are deg 262 | return position, orientation 263 | 264 | @staticmethod 265 | def transform_from_unreal(position, orientation): 266 | ''' 267 | Transform from unreal coordinates to odom coordinates (to camera_link) 268 | Input: position [x, y, z,], orientation [pitch, yaw, roll], in unrealcv coordinates) 269 | Output: position ([x, y, z] array) 270 | orientation ([x, y, z, w] quaternion) 271 | ''' 272 | 273 | # Transform from unreal coordinate units 274 | position = position / 100 # default is cm 275 | orientation = orientation / 180 * math.pi # default is deg 276 | 277 | # Transform from pitch, yaw, roll (taking into account rotation directions 278 | (x, y, z, w) = tf.transformations.quaternion_from_euler(orientation[2], -orientation[0], -orientation[1]) 279 | orientation = np.array([x, y, z, w]) 280 | 281 | # Invert y axis 282 | position[1] = -position[1] 283 | return position, orientation 284 | 285 | @staticmethod 286 | def reset_client(): 287 | # Free up unrealcv connection cleanly when shutting down 288 | client.disconnect() 289 | rospy.loginfo('On unreal_ros_client shutdown: disconnected unrealcv client.') 290 | 291 | def terminate_with_reset_srv(self, _): 292 | # Reset to initial conditions after experiment 293 | rospy.loginfo("Terminate_with_reset service requested, initiating reset.") 294 | self.should_terminate = True # This stops the client from taking other requests (odom topic) 295 | goal = np.concatenate((self.coord_origin, np.array([0, self.coord_yaw, 0])), axis=0) 296 | time.sleep(2.0) # Make sure regular execution is finished -> unrealcv client is ready 297 | if self.mode == 'fast': 298 | client.request("vget /uecvros/full" + "".join([" {0:f}".format(x) for x in goal]) + " -1.0 " + 299 | str(self.camera_id)) 300 | else: 301 | client.request("vset /camera/{0:d}/pose {1:f} {2:f} {3:f} {4:f} {5:f} {6:f}".format(self.camera_id, *goal)) 302 | return [True, ""] 303 | 304 | def on_collision(self): 305 | # Collision handling for all modes here 306 | rospy.logwarn("MAV collision detected!") 307 | self.collision_pub.publish(String("MAV collision detected!")) 308 | 309 | 310 | if __name__ == '__main__': 311 | rospy.init_node('unreal_ros_client', anonymous=False) # Currently only 1 Client at a time is supported by unrealcv 312 | uc = UnrealRosClient() 313 | rospy.spin() 314 | --------------------------------------------------------------------------------
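For reference, a minimal standalone sketch (not part of the package; the function name and defaults are illustrative) of the unit and axis conventions implemented by transform_to_unreal above: ROS positions in meters and a right-handed frame are mapped to Unreal Engine centimeters in a left-handed frame (negated y), rotated by the initial yaw offset and shifted by the stored origin.

```python
import math
import numpy as np

def ros_position_to_unreal(p_ros, origin=np.zeros(3), yaw_deg=0.0):
    # ROS position [m] -> UE position [cm]: rotate by the initial yaw offset,
    # negate y (left-handed UE frame), scale to centimeters and shift by the UE start location.
    yaw = math.radians(yaw_deg)
    x, y, z = p_ros
    p_ue = np.array([math.cos(yaw) * x + math.sin(yaw) * y,
                     math.sin(yaw) * x - math.cos(yaw) * y,
                     z])
    return p_ue * 100.0 + origin

print(ros_position_to_unreal([1.0, 2.0, 0.5]))  # -> [ 100. -200.   50.]
```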