├── .gitignore
├── LICENSE.txt
├── README.md
├── benchbot_api
│   ├── __init__.py
│   ├── agent.py
│   ├── api_callbacks.py
│   ├── benchbot.py
│   ├── extras.py
│   └── tools.py
├── docs
│   └── benchbot_api_web.gif
└── setup.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | build
2 | dist
3 | .vscode
4 |
5 | *.pyc
6 | __pycache__
7 | python/build
8 | python/dist
9 | *.egg-info*
10 |
11 | *.sw*

--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
1 | Copyright (c) 2020, Queensland University of Technology (Ben Talbot, David Hall, and Niko Sünderhauf)
2 | All rights reserved.
3 |
4 | Redistribution and use in source and binary forms, with or without modification,
5 | are permitted provided that the following conditions are met:
6 |
7 | 1. Redistributions of source code must retain the above copyright notice,
8 |    this list of conditions and the following disclaimer.
9 |
10 | 2. Redistributions in binary form must reproduce the above copyright notice,
11 |    this list of conditions and the following disclaimer in the documentation
12 |    and/or other materials provided with the distribution.
13 |
14 | 3. Neither the name of the copyright holder nor the names of its contributors
15 |    may be used to endorse or promote products derived from this software
16 |    without specific prior written permission.
17 |
18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
19 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
20 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
21 | IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
22 | INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
23 | BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
24 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
25 | LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
26 | OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
27 | THE POSSIBILITY OF SUCH DAMAGE.
28 |
29 |

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | **NOTE: this software needs to interface with a running instance of the BenchBot software stack. Unless you are running against a remote stack / robot, please install this software with the BenchBot software stack as described [here](https://github.com/qcr/benchbot).**
2 |
3 | # BenchBot API
4 |
5 | ![benchbot_api_web](./docs/benchbot_api_web.gif)
6 |
7 | The BenchBot API provides a simple interface for controlling a robot or simulator through actions, and receiving data through observations. As shown above, the entire code required for running an agent in a realistic 3D simulator is only a handful of simple Python commands.
8 |
9 | [OpenAI Gym](https://gym.openai.com) users will find the breakdown into actions, observations, and steps extremely familiar. The BenchBot API allows researchers to develop and test novel algorithms with real robot systems and realistic 3D simulators, without the typical hassles that arise when interfacing with complicated multi-component robot systems.
10 |
11 | Running a robot through an entire environment, with your own custom agent, is as simple as one line of code with the BenchBot API:
12 |
13 | ```python
14 | from benchbot_api import BenchBot
15 | from my_agent import MyAgent
16 |
17 | BenchBot(agent=MyAgent()).run()
18 | ```
19 |
20 | The above assumes you have created your own agent by overloading the abstract `Agent` class provided with the API. Overloading the abstract class requires implementing 3 basic methods. Below is a basic example that spins on the spot:
21 |
22 | ```python
23 | from benchbot_api import Agent
24 | import json
25 |
26 | class MyAgent(Agent):
27 |
28 |     def is_done(self, action_result):
29 |         # Go forever
30 |         return False
31 |
32 |     def pick_action(self, observations, action_list):
33 |         # Rotates on the spot indefinitely, 5 degrees at a time
34 |         # (assumes we are running in passive mode)
35 |         return 'move_angle', {'angle': 5}
36 |
37 |     def save_result(self, filename, empty_results, results_format_fns):
38 |         # Save some blank results
39 |         with open(filename, 'w') as f:
40 |             json.dump(empty_results, f)
41 | ```
42 |
43 | If you prefer to do things manually, a more exhaustive suite of methods is also available as part of the BenchBot API, instead of the all-in-one `BenchBot.run()`. The example below highlights a handful of the API's capabilities:
44 |
45 | ```python
46 | from benchbot_api import BenchBot, RESULT_LOCATION
47 | import json
48 | import matplotlib.pyplot as plt
49 |
50 | # Create a BenchBot instance & reset the simulator / robot to starting state
51 | b = BenchBot()
52 | observations, action_result = b.reset()
53 |
54 | # Print details of selected task & environment
55 | print(b.task_details)
56 | print(b.environment_details)
57 |
58 | # Visualise the current RGB image from the robot
59 | plt.imshow(observations['image_rgb'])
60 |
61 | # Move to the next pose if we have a 'move_next' action available
62 | if 'move_next' in b.actions:
63 |     observations, action_result = b.step('move_next')
64 |
65 | # Save some empty results
66 | with open(RESULT_LOCATION, 'w') as f:
67 |     json.dump(b.empty_results(), f)
68 | ```
69 |
70 | For sample solutions that use the BenchBot API, see the examples add-ons available (e.g. [`benchbot-addons/examples_base`](https://github.com/benchbot-addons/examples_base) and [`benchbot-addons/examples_ssu`](https://github.com/benchbot-addons/examples_ssu)).
71 |
72 | ## Installing BenchBot API
73 |
74 | BenchBot API is a Python package, installable with pip. Run the following in the root directory of where this repository was cloned:
75 |
76 | ```
77 | u@pc:~$ pip install .
78 | ```
79 |
80 | ## Using the API to communicate with a robot
81 |
82 | Communication with the robot comes through a series of "channels", which are defined by the robot's definition file (e.g. [carter](https://github.com/benchbot-addons/robots_isaac/blob/master/robots/carter.yaml)). A task definition file (e.g. [semantic_slam:passive:ground_truth](https://github.com/benchbot-addons/tasks_ssu/blob/master/tasks/sslam_pgt.yaml)) then declares which of these connections are provided to the API, as either sensor observations or actions to be executed by a robot actuator.
83 |
84 | The API talks to the [BenchBot Supervisor](https://github.com/qcr/benchbot_supervisor), which handles loading and managing the different kinds of back-end configuration files. This abstracts all of the underlying communication complexities away from the user, allowing the BenchBot API to remain a simple interface that focuses on getting observations and sending actions.
85 |
86 | An action is sent to the robot by calling the `BenchBot.step()` method with a valid action (found by checking the `BenchBot.actions` property):
87 |
88 | ```python
89 | from benchbot_api import BenchBot
90 |
91 | b = BenchBot()
92 | available_actions = b.actions
93 | b.step(b.actions[0], **{'action_arg': arg_value})  # Perform the first available action
94 | ```
95 |
96 | Arguments for the selected action are passed as named keyword arguments (a dictionary of named arguments can be splatted with `**`, as above). For example, moving 5m forward with the `'move_distance'` action is represented by the dictionary `{'distance': 5}`.
97 |
98 | Lists of observations are received as return values from a `BenchBot.step()` call (`BenchBot.reset()` internally calls `BenchBot.step(None)`, which means "don't perform an action"):
99 |
100 | ```python
101 | from benchbot_api import BenchBot
102 |
103 | b = BenchBot()
104 | observations, action_result = b.reset()
105 | observations, action_result = b.step('move_distance', **{'distance': 5})
106 | ```
107 |
108 | The returned `observations` variable holds a dictionary with key-value pairs corresponding to the name & data defined by each observation channel.
109 |
110 | The `action_result` is an enumerated value denoting the result of the action (use `from benchbot_api import ActionResult` to access the `Enum` class). You should use this result to guide the progression of your algorithm, either manually or in the `is_done()` method of your `Agent` (see the sketch after this list). Possible values for the returned `action_result` are:
111 |
112 | - `ActionResult.SUCCESS`: the action was carried out successfully
113 | - `ActionResult.FINISHED`: the action was carried out successfully, and the robot has now finished its traversal through the scene (only used in `passive` actuation mode)
114 | - `ActionResult.COLLISION`: the action crashed the robot into an obstacle, and as a result it will not respond to any further actuation commands (at this point you should quit)
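115 |
116 | As a minimal sketch of how these values might drive an agent, an `is_done()` implementation could simply stop on anything other than `ActionResult.SUCCESS` (the `MyAgent` name is an illustrative placeholder):
117 |
118 | ```python
119 | from benchbot_api import ActionResult, Agent
120 |
121 | class MyAgent(Agent):
122 |     # (pick_action & save_result implementations omitted from this sketch)
123 |
124 |     def is_done(self, action_result):
125 |         # Stop when the traversal is finished, or when a collision means no
126 |         # further actions will be accepted
127 |         return action_result in (ActionResult.FINISHED, ActionResult.COLLISION)
128 | ```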
129 |
130 | ### Standard Communication Channels
131 |
132 | Task and robot definition files declare actions and observations, and these files are included through [BenchBot add-ons](https://github.com/qcr/benchbot_addons). The add-on creator is free to add and declare channels as they please, but it is a better experience for all if channel definitions are as consistent as possible across the BenchBot ecosystem.
133 |
134 | So if you're adding a robot that moves between a set of poses, declare a channel called `'move_next'` with no arguments. Likewise, a robot that receives image observations should use a channel named `'image_rgb'` with the same format as described below. Feel free to implement the channels however you please for your robot, but consistent interfaces should always be preferred.
135 |
136 | If you encounter a task using non-standard channel configurations, the API has all the functionality you need as a user to handle them (the `actions`, `config`, & `observations` properties, used in the sketch below). On the other hand, maybe the non-standard channel should be a new standard. New standard communication channels are always welcome; please open a pull request with the details!
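137 |
138 | A minimal sketch of such introspection, making no assumptions about which channels the running task provides (the channel names in the comments are examples only):
139 |
140 | ```python
141 | from benchbot_api import BenchBot
142 |
143 | b = BenchBot()
144 | # List the declared action & observation channels
145 | print(b.actions)       # e.g. ['move_next']
146 | print(b.observations)  # e.g. ['image_rgb', 'image_depth', 'laser', 'poses']
147 | # Exhaustive dump of the running configuration, including connection details
148 | print(b.config)
149 | ```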
150 |
151 | #### Standard action channels:
152 |
153 | | Name | Required Arguments | Description |
154 | | ----------------- | :----------------------: | ----------- |
155 | | `'move_next'` | `None` | Moves the robot to the next pose in its list of pre-defined poses (only available in environments that declare a `'trajectory_poses'` field). |
156 | | `'move_distance'` | `{'distance': float}` | Moves the robot `'distance'` metres directly ahead. |
157 | | `'move_angle'` | `{'angle': float}` | Rotates the robot on the spot by `'angle'` degrees. |
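158 |
159 | As an illustrative, open-loop sketch only (it assumes a task in `active` actuation mode that declares both of these channels, and no obstacles in the way), the standard channels compose into simple behaviours like driving a 2m square:
160 |
161 | ```python
162 | from benchbot_api import BenchBot
163 |
164 | b = BenchBot()
165 | b.reset()
166 | for _ in range(4):
167 |     b.step('move_distance', **{'distance': 2})  # drive 2m forward
168 |     b.step('move_angle', **{'angle': 90})       # turn 90 degrees on the spot
169 | ```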
170 |
171 | #### Standard observation channels:
172 |
173 | | Name | Data format | Description |
174 | | -------------------- | :----------------------: | ----------- |
175 | | `'image_depth'` | <pre>numpy.ndarray(shape=(H,W),<br>              dtype='float32')</pre> | Depth image from the default image sensor, with depths in metres. |
176 | | `'image_depth_info'` | <pre>{<br>  'frame_id': string<br>  'height': int<br>  'width': int<br>  'matrix_instrinsics':<br>    numpy.ndarray(shape=(3,3),<br>                  dtype='float64')<br>  'matrix_projection':<br>    numpy.ndarray(shape=(3,4),<br>                  dtype='float64')<br>}</pre> | Sensor information for the depth image. `'matrix_instrinsics'` is of the format: <pre>[fx 0 cx]<br>[0 fy cy]<br>[0  0  1]</pre> for a camera with focal lengths `(fx,fy)`, & principal point `(cx,cy)`. Likewise, `'matrix_projection'` is: <pre>[fx 0 cx Tx]<br>[0 fy cy Ty]<br>[0  0  1  0]</pre> where `(Tx,Ty)` is the translation between stereo sensors. See [here](http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html) for further information on fields. |
177 | | `'image_rgb'` | <pre>numpy.ndarray(shape=(H,W,3),<br>              dtype='uint8')</pre> | RGB image from the default image sensor, with colour values mapped to the 3 channels in the 0-255 range. |
178 | | `'image_rgb_info'` | <pre>{<br>  'frame_id': string<br>  'height': int<br>  'width': int<br>  'matrix_instrinsics':<br>    numpy.ndarray(shape=(3,3),<br>                  dtype='float64')<br>  'matrix_projection':<br>    numpy.ndarray(shape=(3,4),<br>                  dtype='float64')<br>}</pre> | Sensor information for the RGB image. `'matrix_instrinsics'` is of the format: <pre>[fx 0 cx]<br>[0 fy cy]<br>[0  0  1]</pre> for a camera with focal lengths `(fx,fy)`, & principal point `(cx,cy)`. Likewise, `'matrix_projection'` is: <pre>[fx 0 cx Tx]<br>[0 fy cy Ty]<br>[0  0  1  0]</pre> where `(Tx,Ty)` is the translation between stereo sensors. See [here](http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html) for further information on fields. |
179 | | `'laser'` | <pre>{<br>  'range_max': float64,<br>  'range_min': float64,<br>  'scans':<br>    numpy.ndarray(shape=(N,2),<br>                  dtype='float64')<br>}</pre> | Set of scan values from a laser sensor, between `'range_min'` & `'range_max'` (in metres). The `'scans'` array consists of `N` scans of format `[scan_value, scan_angle]`. For example, `scans[100, 0]` would get the distance value & `scans[100, 1]` would get the angle of the 100th scan. |
180 | | `'poses'` | <pre>{<br>  ...<br>  'frame_name': {<br>    'parent_frame': string<br>    'rotation_rpy':<br>      numpy.ndarray(shape=(3,),<br>                    dtype='float64')<br>    'rotation_xyzw':<br>      numpy.ndarray(shape=(4,),<br>                    dtype='float64')<br>    'translation_xyz':<br>      numpy.ndarray(shape=(3,),<br>                    dtype='float64')<br>  }<br>  ...<br>}</pre> | Dictionary of relative poses for the current system state. The pose of each system component is available at key `'frame_name'`. Each pose has a `'parent_frame'` which the pose is relative to (all poses are typically with respect to the global `'map'` frame), & the pose values. `'rotation_rpy'` is `[roll,pitch,yaw]` in `ZYX` order, `'rotation_xyzw'` is the equivalent quaternion `[x,y,z,w]`, & `'translation_xyz'` is the Cartesian `[x,y,z]` coordinates. |
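181 |
182 | A short sketch of digesting these formats (which channels exist depends on your task, & `'robot'` is a hypothetical frame name used purely for illustration):
183 |
184 | ```python
185 | import numpy as np
186 | from benchbot_api import BenchBot
187 |
188 | b = BenchBot()
189 | observations, _ = b.reset()
190 |
191 | # Convert laser scans from [distance, angle] pairs to Cartesian (x, y) points
192 | scans = observations['laser']['scans']
193 | points = np.stack([scans[:, 0] * np.cos(scans[:, 1]),
194 |                    scans[:, 0] * np.sin(scans[:, 1])], axis=1)
195 |
196 | # Pull the principal point from the camera's intrinsics matrix
197 | K = observations['image_rgb_info']['matrix_instrinsics']
198 | cx, cy = K[0, 2], K[1, 2]
199 |
200 | # Look up where a component currently sits relative to the 'map' frame
201 | xyz = observations['poses']['robot']['translation_xyz']
202 | ```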
203 |
204 | ## Using the API to communicate with the BenchBot system
205 |
206 | A running BenchBot system manages many other elements besides simply getting data to and from a real / simulated robot. BenchBot encapsulates not just the robot, but also the environment it is operating in (whether simulated or real) and the task that is currently being attempted.
207 |
208 | The API handles communication for all parts of the BenchBot system, including controlling the currently running environment and obtaining configuration information. Below are details of some of the more useful features of the API (all features are also documented in the [`benchbot.py`](./benchbot_api/benchbot.py) source code).
209 |
210 | ### Gathering configuration information
211 |
212 | | API method or property | Description |
213 | | ---------------------- | ----------- |
214 | | `config` | Returns a `dict` exhaustively describing the current BenchBot configuration. Most of the information returned will not be useful for general BenchBot use. |
215 |
216 | ### Interacting with the environment
217 |
218 | | API method or property | Description |
219 | | ---------------------- | ----------- |
220 | | `reset()` | Resets the current environment scene. For the simulator, this means restarting the running simulator instance with the robot back at its initial position. The method returns the initial `observations` & the `action_result` (which should always be `ActionResult.SUCCESS`). |
221 | | `next_scene()` | Starts the next scene in the current environment (only relevant for tasks with multiple scenes). Note there is no going back once you have moved to the next scene. Returns the same as `reset()`. A usage sketch follows below. |
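222 |
223 | A minimal sketch of traversing every scene in a multi-scene task, mirroring what `BenchBot.run()` does internally:
224 |
225 | ```python
226 | from benchbot_api import BenchBot
227 |
228 | b = BenchBot()
229 | more_scenes = True
230 | while more_scenes:
231 |     observations, action_result = b.reset()
232 |     # ... act in the current scene here ...
233 |     more_scenes = b.next_scene()  # False once the last scene is reached
234 | ```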
235 |
236 | ### Interacting with an agent
237 |
238 | | API method or property | Description |
239 | | ---------------------- | ----------- |
240 | | `actions` | Returns the list of actions currently available to the agent. This will update as actions are performed in the environment (for example, if the agent has collided with an obstacle this list will be empty). |
241 | | `observations` | Returns the list of observations currently available to the agent. |
242 | | `step(action, **action_args)` | Performs the requested action with the provided named action arguments. See [Using the API to communicate with a robot](#using-the-api-to-communicate-with-a-robot) above for further details. |
243 |
244 | ### Creating results
245 |
246 | | API method or property | Description |
247 | | ---------------------- | ----------- |
248 | | `empty_results()` | Generates a `dict` with required result metadata & empty results. Metadata (`'task_details'` & `'environment_details'`) is pre-filled. To create results, all a user needs to do is fill in the empty `'results'` field using the results format's functions. These functions are available through the `results_functions()` method (see the sketch below). |
249 | | `results_functions()` | Returns a `dict` of functions defined by the task's `'results_format'`. For example, a `create()` function is called via `results_functions()['create']()`. |
250 | | `RESULT_LOCATION` (outside of the `BenchBot` class) | A static string denoting where results should be saved (`/tmp/benchbot_result`). Using this location ensures tools in the [BenchBot software stack](https://github.com/qcr/benchbot) work as expected. |
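251 |
252 | A sketch of producing results by hand (assuming the current task defines a results format; the field names inside `'results'` depend entirely on that format):
253 |
254 | ```python
255 | import json
256 | from benchbot_api import BenchBot, RESULT_LOCATION
257 |
258 | b = BenchBot()
259 | results = b.empty_results()  # 'results' is pre-initialised via the format's create()
260 | # ... fill in results['results'] here ...
261 | with open(RESULT_LOCATION, 'w') as f:
262 |     json.dump(results, f)
263 | ```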
264 |

--------------------------------------------------------------------------------
/benchbot_api/__init__.py:
--------------------------------------------------------------------------------
1 | from . import agent
2 | from . import api_callbacks
3 | from . import benchbot
4 | from . import tools
5 |
6 | from .agent import Agent
7 | from .benchbot import ActionResult, BenchBot, RESULT_LOCATION
8 |
9 | __all__ = ['agent', 'api_callbacks', 'benchbot', 'tools']

--------------------------------------------------------------------------------
/benchbot_api/agent.py:
--------------------------------------------------------------------------------
1 | from abc import abstractmethod, ABCMeta
2 |
3 |
4 | class Agent(ABCMeta('ABC', (object, ), {})):
5 |     @abstractmethod
6 |     def is_done(self, action_result):
7 |         """
8 |         Method should return a Boolean denoting whether your agent is done with
9 |         the task or not. The 'action_result' argument should be used to inform
10 |         your decision.
11 |
12 |         Note that if the value is 'ActionResult.COLLISION' or
13 |         'ActionResult.FINISHED', BenchBot will no longer report any available
14 |         actions via 'BenchBot.actions', so your method needs to return True in
15 |         these cases.
16 |         """
17 |         return False
18 |
19 |     @abstractmethod
20 |     def pick_action(self, observations, action_list):
21 |         """
22 |         Method should return a selected action from the list provided by
23 |         'BenchBot.actions', & a dictionary of argument values required for the
24 |         action. The following example would move the robot forward 10cm:
25 |             return 'move_distance', {'distance': 0.1}
26 |
27 |         This method is also where your agent should use the observations to
28 |         update its semantic understanding of the world.
29 |
30 |         The list of available actions returned from 'BenchBot.actions' will
31 |         change based on the mode selected, & the last action_result returned by
32 |         'BenchBot.step'. Your code below should select a valid action from the
33 |         list, otherwise 'BenchBot.step' will throw an exception trying to
34 |         execute an invalid action.
35 |
36 |         The current list of possible actions (& required arguments) is
37 |         available here:
38 |         https://github.com/qcr/benchbot_api#standard-communication-channels
39 |         """
40 |         return None
41 |
42 |     @abstractmethod
43 |     def save_result(self, filename, empty_results, results_format_fns):
44 |         """
45 |         Method should write your results for the task to the file specified by
46 |         'filename'. The method returns no value, so if there is any failure in
47 |         writing results an exception should be raised.
48 |
49 |         Results must be specified in the format defined by the current task
50 |         (see 'BenchBot.config' for details).
51 |
52 |         The 'empty_results' argument is pre-populated with necessary fields
53 |         common to all results ('environment_details' & 'task_details'), as well
54 |         as the return of the results format's 'create' function.
55 |
56 |         The 'results_format_fns' argument contains all of the functions defined
57 |         for that format, and can be useful for specialist actions (e.g. creating
58 |         objects in scene understanding object maps).
59 |         """
60 |         return
61 |

--------------------------------------------------------------------------------
/benchbot_api/api_callbacks.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import cv2
3 | import jsonpickle
4 | import numpy as np
5 |
6 | ENCODING_TO_CONVERSION = {'bgr8': cv2.COLOR_BGR2RGB}
7 |
8 |
9 | def convert_to_rgb(data):
10 |     cvt = ENCODING_TO_CONVERSION.get(data['encoding'], None)
11 |     return data['data'] if cvt is None else cv2.cvtColor(data['data'], cvt)
12 |
13 |
14 | def decode_color_image(data):
15 |     if data['encoding'] == 'bgr8':
16 |         return cv2.cvtColor(
17 |             cv2.imdecode(
18 |                 np.frombuffer(base64.b64decode(data['data']), np.uint8),
19 |                 cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
20 |     elif data['encoding'] == 'rgb8':
21 |         return cv2.imdecode(
22 |             np.frombuffer(base64.b64decode(data['data']), np.uint8),
23 |             cv2.IMREAD_COLOR)
24 |     else:
25 |         raise ValueError(
26 |             "decode_color_image: received image data with unsupported encoding: %s"
27 |             % data['encoding'])
28 |
29 |
30 | def decode_jsonpickle(data):
31 |     return jsonpickle.decode(data)

--------------------------------------------------------------------------------
/benchbot_api/benchbot.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 |
3 | from enum import Enum, unique
4 | import importlib
5 | import jsonpickle
6 | import jsonpickle.ext.numpy as jet
7 | import os
8 | import requests
9 | import sys
10 | import time
11 |
12 | from .agent import Agent
13 |
14 | jet.register_handlers()
15 |
16 | DEFAULT_ADDRESS = 'benchbot_supervisor'
17 | DEFAULT_PORT = 10000
18 |
19 | RESULT_LOCATION = '/tmp/benchbot_result'
20 |
21 | TIMEOUT_SUPERVISOR = 60
22 |
23 |
24 | class _UnexpectedResponseError(requests.RequestException):
25 |     """Consistent error messaging for when the Supervisor rejects a query"""
26 |     def __init__(self, http_status_code, *args, **kwargs):
27 |         super(_UnexpectedResponseError, self).__init__(
28 |             "Received an unexpected response from BenchBot supervisor "
29 |             "(HTTP status code: %d)" % http_status_code, *args, **kwargs)
30 |
31 |
32 | @unique
33 | class ActionResult(Enum):
34 |     """Result of an action that an agent has taken
35 |
36 |     Values
37 |     ------
38 |     SUCCESS :
39 |         Action succeeded, and the robot is ready for a new action
40 |     FINISHED :
41 |         Action succeeded, and the robot is finished in the current scene
42 |     COLLISION :
43 |         Action failed, and the robot has stopped after
colliding with an obstacle 44 | """ 45 | SUCCESS = 0, 46 | FINISHED = 1, 47 | COLLISION = 2 48 | 49 | 50 | class BenchBot(object): 51 | """BenchBot handles communication between the client and server systems, 52 | and abstracts away hardware and simulation, such that code written to be 53 | run by BenchBot will run with either a real or simulated robot 54 | """ 55 | @unique 56 | class RouteType(Enum): 57 | """Enum denoting type of route (used in building routes when talking 58 | with a Supervisor) 59 | """ 60 | CONNECTION = 0, 61 | CONFIG = 1, 62 | ROBOT = 2, 63 | RESULTS = 3, 64 | EXPLICIT = 4 65 | 66 | ROUTE_MAP = { 67 | RouteType.CONNECTION: 'connections', 68 | RouteType.CONFIG: 'config', 69 | RouteType.ROBOT: 'robot', 70 | RouteType.RESULTS: 'results_functions', 71 | RouteType.EXPLICIT: '' 72 | } 73 | 74 | def __init__(self, 75 | agent=None, 76 | supervisor_address='http://' + DEFAULT_ADDRESS + ':' + 77 | str(DEFAULT_PORT) + '/', 78 | auto_start=True): 79 | self.agent = None 80 | self.supervisor_address = supervisor_address 81 | self._connection_callbacks = {} 82 | 83 | if auto_start: 84 | self.start() 85 | self.set_agent(agent) 86 | 87 | def _build_address(self, route_name, route_type=RouteType.CONNECTION): 88 | """Builds an address for communication with a running instance of 89 | BenchBot Supervisor 90 | 91 | Parameters 92 | ---------- 93 | route_name : 94 | The name of the route within the subdirectory (e.g. 'is_finished') 95 | 96 | route_type : 97 | The type of route which maps to the URL's subdirectory (e.g. 98 | RouteType.ROBOT = 'robot') 99 | 100 | Returns 101 | ------- 102 | string 103 | A full URL string describing the route (e.g. 104 | 'http://localhost:10000/robot/is_finished') 105 | """ 106 | base = self.supervisor_address + ( 107 | '' if self.supervisor_address.endswith('/') else '/') 108 | if route_type not in BenchBot.ROUTE_MAP: 109 | raise ValueError( 110 | "Cannot build address from invalid route type: %s" % 111 | route_type) 112 | return (base + BenchBot.ROUTE_MAP[route_type] + 113 | ('/' if BenchBot.ROUTE_MAP[route_type] else '') + route_name) 114 | 115 | def _query(self, 116 | route_name=None, 117 | route_type=RouteType.CONNECTION, 118 | data=None, 119 | method=requests.get): 120 | """Sends a request to a running BenchBot Supervisor, and returns the 121 | response 122 | 123 | Parameters 124 | ---------- 125 | route_name: 126 | The name of the route within the subdirectory (e.g. 'is_finished') 127 | 128 | route_type : 129 | The type of route which maps to the URL's subdirectory (e.g. 
130 | RouteType.ROBOT = 'robot') 131 | 132 | data : 133 | A dict of data to attach to the request 134 | 135 | method : 136 | HTTP method to use for the request (should generally always be GET) 137 | 138 | Returns 139 | ------- 140 | dict 141 | The JSON data returned by the request's response 142 | """ 143 | data = {} if data is None else data 144 | addr = self._build_address(route_name, route_type) 145 | try: 146 | resp = requests.get(addr, json=data) 147 | if resp.status_code >= 300: 148 | raise _UnexpectedResponseError(resp.status_code) 149 | return jsonpickle.decode(resp.content) 150 | except: 151 | raise requests.ConnectionError( 152 | "Communication to BenchBot supervisor " 153 | "failed using the route:\n\t%s" % addr) 154 | 155 | @staticmethod 156 | def _attempt_connection_imports(connection_data): 157 | """Attempts to dynamically import any API-side connection callbacks 158 | (this method should never need to be called manually) 159 | 160 | Parameters 161 | ---------- 162 | connection_data : 163 | A dict containing the data defining the connection 164 | 165 | Returns 166 | ------- 167 | None 168 | 169 | """ 170 | if 'callback_api' in connection_data: 171 | x = connection_data['callback_api'].rsplit('.', 1) 172 | return getattr(importlib.import_module('benchbot_api.' + x[0]), 173 | x[1]) 174 | return None 175 | 176 | @property 177 | def actions(self): 178 | """The list of actions the robot is able to take. 179 | 180 | Returns 181 | ------- 182 | list 183 | A list of actions the robot can take. If the robot has collided 184 | with an obstacle or finished its task, this list will be empty. 185 | """ 186 | return ([] if self._query('is_collided', 187 | BenchBot.RouteType.ROBOT)['is_collided'] 188 | or self._query('is_finished', 189 | BenchBot.RouteType.ROBOT)['is_finished'] else 190 | self._query('task/actions', BenchBot.RouteType.CONFIG)) 191 | 192 | @property 193 | def config(self): 194 | """Returns detailed information about the current BenchBot configuration 195 | 196 | Returns 197 | ------- 198 | dict 199 | A dict of all configuration parameters as retrieved from the 200 | running BenchBot supervisor 201 | """ 202 | return self._query('', BenchBot.RouteType.CONFIG) 203 | 204 | @property 205 | def observations(self): 206 | """The list of observations the robot is currently providing 207 | 208 | Returns 209 | ------- 210 | list 211 | A list of observations. 212 | """ 213 | return self._query('task/observations', BenchBot.RouteType.CONFIG) 214 | 215 | @property 216 | def result_filename(self): 217 | """The location where results should be written. If the path doesn't 218 | exist, it makes it. 219 | 220 | Returns 221 | ------- 222 | str 223 | The filename that the results will be written to. 224 | """ 225 | if not os.path.exists(os.path.dirname(RESULT_LOCATION)): 226 | os.makedirs(os.path.dirname(RESULT_LOCATION)) 227 | return os.path.join(RESULT_LOCATION) 228 | 229 | def empty_results(self): 230 | """Helper method for getting an empty results dict, pre-populated with 231 | metadata from the currently running configuration. See the 232 | 'results_functions()' method for help filling in the empty results. 
233 | 234 | Returns 235 | ------- 236 | dict 237 | A dict with the fields 'task_details' (populated), 238 | 'environment_details' (populated), and 'results' (empty) 239 | """ 240 | return { 241 | 'task_details': 242 | self._query('task', BenchBot.RouteType.CONFIG), 243 | 'environment_details': 244 | self._query('environments', BenchBot.RouteType.CONFIG), 245 | 'results': (self._query('create', BenchBot.RouteType.RESULTS) 246 | if self.config['results'] else {}) 247 | } 248 | 249 | def next_scene(self): 250 | """Moves the robot to the next scene, declaring failure if there is no 251 | next scene defined 252 | 253 | Returns 254 | ------- 255 | bool 256 | Denotes whether moving to the next scene succeeded or failed 257 | """ 258 | # Bail if next is not a valid operation 259 | if (self._query('is_collided', 260 | BenchBot.RouteType.ROBOT)['is_collided']): 261 | raise RuntimeError("Collision state detected for robot; " 262 | "cannot proceed to next scene") 263 | 264 | # Move to the next scene 265 | print("Moving to next scene ... ", end='') 266 | sys.stdout.flush() 267 | resp = self._query('next', BenchBot.RouteType.ROBOT) 268 | print("Done." if resp['next_success'] else "Failed.") 269 | 270 | # Return the result of moving to next (a failure means we are already 271 | # at the last scene) 272 | return resp['next_success'] 273 | 274 | def reset(self): 275 | """Resets the robot state, starting again at the first scene with a 276 | fresh environment. Resetting is skipped if the environment is still in 277 | a fresh state. 278 | 279 | Returns 280 | ------- 281 | tuple 282 | Observations and action result at the start of the task (should 283 | always be SUCCESS). 284 | """ 285 | # Only restart the supervisor if it is in a dirty state 286 | if self._query('is_dirty', BenchBot.RouteType.ROBOT)['is_dirty']: 287 | print("Dirty robot state detected. Performing reset ... ", end='') 288 | sys.stdout.flush() 289 | self._query('reset', 290 | BenchBot.RouteType.ROBOT) # This should be a send... 291 | print("Complete.") 292 | return self.step(None) 293 | 294 | def results_functions(self): 295 | return { 296 | r: lambda *args, _fn=r, **kwargs: self._query( 297 | '/%s' % _fn, BenchBot.RouteType.RESULTS, { 298 | 'args': args, 299 | 'kwargs': kwargs 300 | }) 301 | for r in self._query('/', BenchBot.RouteType.RESULTS) 302 | } 303 | 304 | def run(self, agent=None): 305 | """Helper function that runs the robot according to the agent given. 306 | Generally, you should use this function and implement your object in 307 | your own custom agent class. 308 | """ 309 | if agent is not None: 310 | self.set_agent(agent) 311 | if self.agent is None: 312 | raise RuntimeError( 313 | "Can't call Benchbot.run() without an agent attached. Either " 314 | "create your BenchBot instance with an agent argument, " 315 | "or create your own run logic instead of using Benchbot.run()") 316 | 317 | # Copy & pasting the same code twice just doesn't feel right... 318 | def scene_fn(): 319 | observations, action_result = self.reset() 320 | while not self.agent.is_done(action_result): 321 | action, action_args = self.agent.pick_action( 322 | observations, self.actions) 323 | observations, action_result = self.step(action, **action_args) 324 | 325 | # Run through the scenes until done 326 | scene_fn() 327 | while self.next_scene(): 328 | scene_fn() 329 | 330 | # We've made it to the end, we should save our results! 
331 | self.agent.save_result(self.result_filename, self.empty_results(), 332 | self.results_functions()) 333 | 334 | def set_agent(self, agent): 335 | """Updates the current agent, and starts its connection with a BenchBot 336 | Supervisor if requested""" 337 | if agent is None: 338 | self.agent = None 339 | return 340 | 341 | if agent is not None and not isinstance(agent, Agent): 342 | raise ValueError("BenchBot received an agent of type '%s' " 343 | "which is not an instance of '%s'." % 344 | (agent.__class__.__name__, Agent.__name__)) 345 | self.agent = agent 346 | 347 | def start(self): 348 | """Establishes a connect to the Supervisor, and then uses this to 349 | establish a connection with a running robot. It then initialises all 350 | connections, ensuring API-side callbacks are accessible. 351 | """ 352 | # Establish a connection to the supervisor (throw an error on failure) 353 | print("Waiting to establish connection to a running supervisor ... ", 354 | end='') 355 | sys.stdout.flush() 356 | start_time = time.time() 357 | connected = False 358 | while not connected and time.time() - start_time < TIMEOUT_SUPERVISOR: 359 | try: 360 | self._query("/", BenchBot.RouteType.EXPLICIT) 361 | connected = True 362 | except: 363 | pass 364 | time.sleep(3) 365 | if not connected: 366 | raise requests.ConnectionError( 367 | "Could not find a BenchBot supervisor @ '%s'. " 368 | "Are you sure it is available?" % self.supervisor_address) 369 | print("Connected!") 370 | 371 | # Wait until the robot is running 372 | print("Waiting to establish connection to a running robot ... ", 373 | end='') 374 | sys.stdout.flush() 375 | while (not self._query("is_running", 376 | BenchBot.RouteType.ROBOT)['is_running']): 377 | time.sleep(0.1) 378 | print("Connected!") 379 | 380 | # Get references to all of the API callbacks in robot config 381 | self._connection_callbacks = { 382 | k: BenchBot._attempt_connection_imports(v) 383 | for k, v in self._query('robot', BenchBot.RouteType.CONFIG) 384 | ['connections'].items() 385 | } 386 | 387 | # Ensure we are starting in a clean robot state 388 | if (self._query('selected_environment', 389 | BenchBot.RouteType.ROBOT)['number'] != 0): 390 | print( 391 | "Robot detected not to be in the first scene. " 392 | "Performing restart ... ", 393 | end='') 394 | sys.stdout.flush() 395 | self._query('restart', BenchBot.RouteType.ROBOT) 396 | print("Done.") 397 | else: 398 | self.reset() 399 | 400 | def step(self, action, **action_kwargs): 401 | """Performs 'action' with 'action_kwargs' as its arguments, and returns 402 | the observations after 'action' has completed, regardless of the 403 | result. 404 | 405 | Parameters 406 | ---------- 407 | action : 408 | Name of action to be performed. The 'actions' property is available 409 | to list available actions if you are unsure what is available. 410 | 411 | **action_kwargs 412 | Arguments to be used by the action. See the connection's callbacks 413 | by using the 'config' property if you are unsure of an action's 414 | supported arguments. 415 | 416 | Returns 417 | ------- 418 | tuple 419 | Observations and action result after the action has finished. 
420 | 421 | """ 422 | # Perform the requested action if possible 423 | if action is not None: 424 | # Detect actions unavailable due to robot state 425 | if action not in self.actions: 426 | raise ValueError( 427 | "Action '%s' is unavailable due to: %s" % 428 | (action, ('COLLISION' if self._query( 429 | 'is_collided', BenchBot.RouteType.ROBOT)['is_collided'] 430 | else 'FINISHED' if self._query( 431 | 'is_finished', BenchBot.RouteType.ROBOT) 432 | ['is_finished'] else 'WRONG_ACTUATION_MODE?'))) 433 | 434 | # Made it through checks, actually perform the action 435 | print("Sending action '%s' with args: %s" % 436 | (action, action_kwargs)) 437 | self._query(action, BenchBot.RouteType.CONNECTION, action_kwargs) 438 | 439 | # Derive action_result (TODO should probably not be this flimsy...) 440 | action_result = ActionResult.SUCCESS 441 | if self._query('is_collided', BenchBot.RouteType.ROBOT)['is_collided']: 442 | action_result = ActionResult.COLLISION 443 | elif self._query('is_finished', 444 | BenchBot.RouteType.ROBOT)['is_finished']: 445 | action_result = ActionResult.FINISHED 446 | 447 | # Retrieve and return an updated set of observations 448 | raw_os = { 449 | o: self._query(o, BenchBot.RouteType.CONNECTION) 450 | for o in self.observations 451 | } 452 | raw_os.update({ 453 | 'scene_number': 454 | self._query('selected_environment', 455 | BenchBot.RouteType.ROBOT)['number'] 456 | }) 457 | return ({ 458 | k: (self._connection_callbacks[k](v) if 459 | (k in self._connection_callbacks 460 | and self._connection_callbacks[k] is not None) else v) 461 | for k, v in raw_os.items() 462 | }, action_result) 463 | -------------------------------------------------------------------------------- /benchbot_api/extras.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from mpl_toolkits.mplot3d.art3d import Poly3DCollection 3 | 4 | from .tools import _set_axes_equal 5 | 6 | 7 | CLASS_TO_COLOR_MAP = { 8 | 'bottle': 'aqua', 9 | 'cup': 'sandybrown', 10 | 'knife': 'silver', 11 | 'bowl': 'rosybrown', 12 | 'wine glass': 'red', 13 | 'fork': 'moccasin', 14 | 'spoon': 'darkkhaki', 15 | 'banana': 'yellow', 16 | 'apple': 'green', 17 | 'orange': 'orange', 18 | 'cake': 'mistyrose', 19 | 'potted plant': 'lawngreen', 20 | 'mouse': 'gray', 21 | 'keyboard': 'black', 22 | 'laptop': 'firebrick', 23 | 'cell phone': 'darkgreen', 24 | 'book': 'saddlebrown', 25 | 'clock': 'seagreen', 26 | 'chair': 'lightseagreen', 27 | 'table': 'darkcyan', 28 | 'couch': 'deepskyblue', 29 | 'bed': 'royalblue', 30 | 'toilet': 'navy', 31 | 'tv': 'blue', 32 | 'microwave': 'mediumpurple', 33 | 'toaster': 'darkorchid', 34 | 'refrigerator': 'fuchsia', 35 | 'oven': 'deeppink', 36 | 'sink': 'crimson', 37 | 'person': 'lightpink' 38 | } 39 | 40 | 41 | def get_bbox3d(extent, centroid): 42 | """ Calculate 3D bounding box corners from its parameterization. 43 | Input: 44 | extent: (length, width, height) 45 | centroid: (x, y, z) 46 | Output: 47 | corners3d: 3D box corners 48 | """ 49 | 50 | l, w, h = extent 51 | 52 | corners3d = 0.5 * np.array([ 53 | [-l, -l, l, l, -l, -l, l, l], 54 | [ w, -w, -w, w, w, -w, -w, w], 55 | [-h, -h, -h, -h, h, h, h, h], 56 | ]) 57 | 58 | corners3d[0, :] = corners3d[0, :] + centroid[0] 59 | corners3d[1, :] = corners3d[1, :] + centroid[1] 60 | corners3d[2, :] = corners3d[2, :] + centroid[2] 61 | 62 | return corners3d.T 63 | 64 | 65 | def vis_bbox3d(ax, bbox3d, facecolor, edgecolor, label): 66 | """ Visualise 3D bounding box. 
67 | Input: 68 | ax: matplotlib axes 3D 69 | facecolor: bounding box facecolor 70 | edgecolor: bounding box edgecolor 71 | label: bounding box object class 72 | """ 73 | # Cuboid sides 74 | surf_verts = [ 75 | [bbox3d[0], bbox3d[1], bbox3d[2], bbox3d[3]], 76 | [bbox3d[4], bbox3d[5], bbox3d[6], bbox3d[7]], 77 | [bbox3d[0], bbox3d[1], bbox3d[5], bbox3d[4]], 78 | [bbox3d[2], bbox3d[3], bbox3d[7], bbox3d[6]], 79 | [bbox3d[1], bbox3d[2], bbox3d[6], bbox3d[5]], 80 | [bbox3d[4], bbox3d[7], bbox3d[3], bbox3d[0]] 81 | ] 82 | 83 | cuboid = Poly3DCollection( 84 | surf_verts, 85 | facecolors=facecolor, 86 | linewidths=1, 87 | edgecolors=edgecolor, 88 | alpha=.25, 89 | label=label 90 | ) 91 | # Fix: 'Poly3DCollection' object has no attribute '_edgecolors2d' 92 | cuboid._facecolors2d = cuboid._facecolor3d 93 | cuboid._edgecolors2d = cuboid._edgecolor3d 94 | 95 | ax.add_collection3d(cuboid) 96 | ax.scatter3D(bbox3d[:, 0], bbox3d[:, 1], bbox3d[:, 2], s=1, color=edgecolor) 97 | 98 | 99 | def vis_semantic_map3d(ax, scene_objects, class_to_color_map, edgecolor): 100 | for obj in scene_objects: 101 | label = obj['class'] 102 | facecolor = class_to_color_map[label] 103 | bbox3d = get_bbox3d(obj['extent'], obj['centroid']) 104 | 105 | vis_bbox3d(ax, bbox3d, facecolor, edgecolor, label) 106 | 107 | _set_axes_equal(ax) 108 | -------------------------------------------------------------------------------- /benchbot_api/tools.py: -------------------------------------------------------------------------------- 1 | import matplotlib as mpl 2 | 3 | mpl.use( 4 | 'TkAgg' 5 | ) # Default renderer Gtk3Agg had all sorts of stalling issues in matplotlib>=3.2 6 | 7 | try: 8 | import matplotlib.pyplot as plt 9 | except ImportError as e: 10 | import traceback 11 | import warnings 12 | warnings.formatwarning = ( 13 | lambda msg, category, *args, **kwargs: '%s: %s\n' % 14 | (category.__name__, msg)) 15 | warnings.warn( 16 | "Failed to import PyPlot from Matplotlib. Plotting functionality " 17 | "will error. Traceback is below:\n\n%s" % traceback.format_exc(), 18 | RuntimeWarning) 19 | plt = None 20 | 21 | from mpl_toolkits.mplot3d import Axes3D 22 | import numpy as np 23 | from scipy.spatial.transform import Rotation as Rot 24 | 25 | SUPPORTED_OBSERVATIONS = [ 26 | 'image_rgb', 'image_depth', 'laser', 'poses', 'image_class', 27 | 'image_instance' 28 | ] 29 | 30 | 31 | def __plot_frame(ax, frame_name, frame_data): 32 | # NOTE currently assume that everything has parent frame 'map' 33 | L = 0.25 34 | # TODO BUG: map has no rotation aspect, handling it here but it should 35 | # have a rotation. 
36 | origin = frame_data['translation_xyz'] 37 | if 'rotation_rpy' in frame_data.keys(): 38 | orientation = frame_data['rotation_rpy'] 39 | else: 40 | orientation = [0, 0, 0] 41 | rot_obj = Rot.from_euler('xyz', orientation) 42 | x_vector = rot_obj.apply([1, 0, 0]) 43 | y_vector = rot_obj.apply([0, 1, 0]) 44 | z_vector = rot_obj.apply([0, 0, 1]) 45 | origin = frame_data['translation_xyz'] 46 | ax.quiver(origin[0], 47 | origin[1], 48 | origin[2], 49 | x_vector[0], 50 | x_vector[1], 51 | x_vector[2], 52 | length=L, 53 | normalize=True, 54 | color='r') 55 | ax.quiver(origin[0], 56 | origin[1], 57 | origin[2], 58 | y_vector[0], 59 | y_vector[1], 60 | y_vector[2], 61 | length=L, 62 | normalize=True, 63 | color='g') 64 | ax.quiver(origin[0], 65 | origin[1], 66 | origin[2], 67 | z_vector[0], 68 | z_vector[1], 69 | z_vector[2], 70 | length=L, 71 | normalize=True, 72 | color='b') 73 | ax.text(origin[0], origin[1], origin[2], frame_name) 74 | 75 | 76 | def _set_axes_radius(ax, origin, radius): 77 | ax.set_xlim3d([origin[0] - radius, origin[0] + radius]) 78 | ax.set_ylim3d([origin[1] - radius, origin[1] + radius]) 79 | ax.set_zlim3d([origin[2] - radius, origin[2] + radius]) 80 | 81 | 82 | def _set_axes_equal(ax): 83 | # Hacky function that creates an equal aspect ratio on 3D axes (as for 84 | # whatever reason, matplotlib's implementation has only managed to move 85 | # from silently failing to an unimplemented exception ...) 86 | limits = np.array([ 87 | ax.get_xlim3d(), 88 | ax.get_ylim3d(), 89 | ax.get_zlim3d(), 90 | ]) 91 | 92 | origin = np.mean(limits, axis=1) 93 | radius = 0.5 * np.max(np.abs(limits[:, 1] - limits[:, 0])) 94 | _set_axes_radius(ax, origin, radius) 95 | 96 | 97 | def _create_diag_mask(mask_img, num_lines=7): 98 | diag_mask = np.zeros(mask_img.shape, bool) 99 | img_width = diag_mask.shape[1] 100 | # Note that minimum line width is 1 101 | line_width = max([np.min(diag_mask.shape) // num_lines, 1]) 102 | # TODO Magic numbers in here ... 
don't do that 103 | bool_line = np.tile( 104 | np.append(np.ones(line_width, bool), np.zeros(line_width, bool)), 105 | (img_width * 2 // (line_width * 2)) + 2) 106 | for row_id in np.arange(diag_mask.shape[0]): 107 | start_idx = img_width - row_id % img_width 108 | # TODO there must be a better way to do this 109 | if (row_id // img_width) > 0 and (row_id // img_width) % 2 == 1: 110 | start_idx += line_width 111 | diag_mask[row_id, :] = bool_line[start_idx:(start_idx + img_width)] 112 | return np.logical_and(mask_img, diag_mask) 113 | 114 | 115 | def _get_roi(img_mask): 116 | a = np.where(img_mask != 0) 117 | bbox = np.min(a[0]), np.max(a[0]) + 1, np.min(a[1]), np.max(a[1]) + 1 118 | return bbox 119 | 120 | 121 | def _vis_rgb(ax, rgb_data): 122 | ax.clear() 123 | ax.imshow(rgb_data) 124 | ax.get_xaxis().set_visible(False) 125 | ax.get_yaxis().set_visible(False) 126 | ax.set_title("image_rgb") 127 | 128 | 129 | def _vis_depth(ax, depth_data): 130 | ax.clear() 131 | ax.imshow(depth_data, 132 | cmap="hot", 133 | clim=(np.amin(depth_data), np.amax(depth_data))) 134 | ax.get_xaxis().set_visible(False) 135 | ax.get_yaxis().set_visible(False) 136 | ax.set_title("image_depth") 137 | 138 | 139 | def _vis_class_segment(ax, segment_data): 140 | # Doing a little filtering to ignore unlabelled pixels 141 | ax.clear() 142 | class_segment_img = segment_data['class_segment_img'] 143 | masked_class_segment = np.ma.masked_where(class_segment_img == 0, 144 | class_segment_img) 145 | # make background black 146 | ax.set_facecolor((0, 0, 0)) 147 | num_class_colours = len(segment_data['class_ids']) + 1 148 | ax.imshow(masked_class_segment, 149 | cmap='gist_rainbow', 150 | clim=(1, num_class_colours), 151 | interpolation='nearest') 152 | ax.get_xaxis().set_visible(False) 153 | ax.get_yaxis().set_visible(False) 154 | ax.set_title("image_class") 155 | 156 | 157 | def _vis_inst_segment(ax, segment_data): 158 | # Setup instance segmentation image for visualization 159 | ax.clear() 160 | ax.set_facecolor((0, 0, 0)) 161 | inst_segment_img = segment_data['instance_segment_img'] 162 | 163 | # Add two images to the image that should not overlap 164 | # Images will contain class ID and instance ID adjacent with diagonals 165 | 166 | # Make diagonal pattern mask 167 | diagonal_mask_img = np.zeros(inst_segment_img.shape, bool) 168 | # Each instance will have its own diagonal mask proportional 169 | # to object size to help visualization 170 | for inst_id in np.unique(inst_segment_img): 171 | inst_mask_img = inst_segment_img == inst_id 172 | y0, y1, x0, x1 = _get_roi(inst_mask_img) 173 | inst_diag_mask = _create_diag_mask(inst_mask_img[y0:y1, x0:x1]) 174 | diagonal_mask_img[y0:y1, x0:x1] = np.logical_or( 175 | diagonal_mask_img[y0:y1, x0:x1], inst_diag_mask) 176 | 177 | # First image is the class id with stripes 178 | class_segment_img = segment_data['class_segment_img'] 179 | num_class_colours = len(segment_data['class_ids']) + 1 180 | masked_inst_class = np.ma.masked_where( 181 | np.logical_or(class_segment_img == 0, 182 | np.logical_not(diagonal_mask_img)), class_segment_img) 183 | ax.imshow(masked_inst_class, 184 | cmap='gist_rainbow', 185 | clim=(1, num_class_colours), 186 | interpolation='nearest') 187 | 188 | # Second image is the instance id with stripes adjacent to 189 | # class id stripes 190 | # NOTE Instance IDs and corresponding colours will change 191 | # between images and depends on format CCIII (C class id, I inst id) 192 | inst_id_img = inst_segment_img % 1000 193 | masked_inst_segment = np.ma.masked_where( 
194 | np.logical_or(inst_id_img == 0, diagonal_mask_img), inst_id_img) 195 | ax.imshow(masked_inst_segment, 196 | cmap='brg', 197 | clim=(1, max(np.amax(inst_id_img), 1)), 198 | interpolation='nearest') 199 | ax.get_xaxis().set_visible(False) 200 | ax.get_yaxis().set_visible(False) 201 | ax.set_title("image_instance") 202 | 203 | 204 | def _vis_laser(ax, laser_data): 205 | ax.clear() 206 | ax.plot(0, 0, c='r', marker=">") 207 | ax.scatter([x[0] * np.cos(x[1]) for x in laser_data['scans']], 208 | [x[0] * np.sin(x[1]) for x in laser_data['scans']], 209 | c='k', 210 | s=4, 211 | marker='s') 212 | ax.axis('equal') 213 | ax.set_title("laser (robot frame)") 214 | 215 | 216 | def _vis_poses(ax, pose_data): 217 | ax.clear() 218 | __plot_frame(ax, 'map', {'translation_xyz': [0, 0, 0]}) 219 | for k, v in pose_data.items(): 220 | __plot_frame(ax, k, v) 221 | # ax.axis('equal') Unimplemented for 3d plots... wow... 222 | _set_axes_equal(ax) 223 | ax.set_title("poses (world frame)") 224 | 225 | 226 | class ObservationVisualiser(object): 227 | 228 | def __init__(self, 229 | vis_list=['image_rgb', 'image_depth', 'laser', 'poses']): 230 | self.fig = None 231 | self.axs = None 232 | self.vis_list = vis_list 233 | 234 | def update(self): 235 | # Performs a non-blocking update of the figure 236 | plt.draw() 237 | self.fig.canvas.start_event_loop(0.05) 238 | 239 | def visualise(self, observations, step_count=None): 240 | subplot_shape = (2, (len(self.vis_list) + 1) // 241 | 2) if len(self.vis_list) > 1 else (1, 1) 242 | if self.fig is None: 243 | plt.ion() 244 | self.fig, self.axs = plt.subplots(*subplot_shape) 245 | # Make sure that axis is always a 2D numpy array (reference purposes) 246 | if not isinstance(self.axs, np.ndarray): 247 | self.axs = np.array(self.axs).reshape(1, 1) 248 | if len(self.axs.shape) == 1: 249 | self.axs = self.axs[:, np.newaxis] 250 | 251 | # Set things up for poses (3D plot) if desired 252 | if 'poses' in self.vis_list: 253 | # NOTE currently assume poses can only exist once in the list 254 | poses_plt_num = int( 255 | np.where(np.array(self.vis_list) == 'poses')[0]) 256 | poses_subplt = (poses_plt_num % 2, poses_plt_num // 2) 257 | poses_plt_num_h = poses_subplt[0] * self.axs.shape[ 258 | 1] + poses_subplt[1] + 1 259 | self.axs[poses_subplt].remove() 260 | self.axs[poses_subplt] = self.fig.add_subplot( 261 | self.axs.shape[0], 262 | self.axs.shape[1], 263 | poses_plt_num_h, 264 | projection='3d') 265 | 266 | self.fig.canvas.manager.set_window_title("Agent Observations" + ( 267 | "" if step_count is None else " (step # %d)" % step_count)) 268 | 269 | for plt_num, vis_type in enumerate(self.vis_list): 270 | subplt = (plt_num % 2, plt_num // 2) 271 | ax = self.axs[subplt] 272 | if vis_type == 'image_rgb': 273 | _vis_rgb(ax, observations['image_rgb']) 274 | elif vis_type == 'image_depth': 275 | _vis_depth(ax, observations['image_depth']) 276 | elif vis_type == 'image_class': 277 | _vis_class_segment(ax, observations['image_segment']) 278 | elif vis_type == 'image_instance': 279 | _vis_inst_segment(ax, observations['image_segment']) 280 | elif vis_type == 'laser': 281 | _vis_laser(ax, observations['laser']) 282 | elif vis_type == 'poses': 283 | _vis_poses(ax, observations['poses']) 284 | else: 285 | raise ValueError( 286 | "\'{0}\' is not supported for visualization. 
Supported: {1}" 287 | .format(vis_type, SUPPORTED_OBSERVATIONS)) 288 | 289 | # Handle empty plot 290 | if len(self.vis_list) < self.axs.shape[0] * self.axs.shape[1]: 291 | # Currently assume there will only ever be one empty plot 292 | subplt = (self.axs.shape[0] - 1, self.axs.shape[1] - 1) 293 | self.axs[subplt].axis("off") 294 | 295 | self.update() 296 | -------------------------------------------------------------------------------- /docs/benchbot_api_web.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/qcr/benchbot_api/b8422ccb152332125b4ecb9ed253338fb495dd2d/docs/benchbot_api_web.gif -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, find_packages 2 | 3 | with open("README.md", "r") as fh: 4 | long_description = fh.read() 5 | 6 | setup(name='benchbot_api', 7 | version='2.4.3', 8 | author='Ben Talbot', 9 | author_email='b.talbot@qut.edu.au', 10 | description='The BenchBot API for use with the BenchBot software stack', 11 | long_description=long_description, 12 | long_description_content_type='text/markdown', 13 | packages=find_packages(), 14 | install_requires=[ 15 | 'jsonpickle', 'matplotlib', 'numpy', 'opencv-python', 'requests', 16 | 'scipy>=1.2.0' 17 | ], 18 | classifiers=( 19 | "Programming Language :: Python :: 3", 20 | "License :: OSI Approved :: BSD License", 21 | "Operating System :: OS Independent", 22 | )) 23 | --------------------------------------------------------------------------------