├── .dockerignore
├── .gitignore
├── LICENSE.txt
├── README.md
├── bin
├── .helpers
├── addons
├── benchbot_batch
├── benchbot_eval
├── benchbot_install
├── benchbot_run
└── benchbot_submit
├── docker
├── backend.Dockerfile
├── core.Dockerfile
├── shared_tools.Dockerfile
├── sim_omni.Dockerfile
└── submission.Dockerfile
├── docs
├── acrv_logo_small.png
├── benchbot_web.gif
├── csirod61_logo_small.png
├── nvidia_logo_small.png
└── qcr_logo_small.png
└── install
/.dockerignore:
--------------------------------------------------------------------------------
1 | .git
2 | addons
3 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.sw*
2 | __pycache__/
3 | .cache/
4 |
5 | # We never want to track BenchBot components
6 | addons/
7 | api/
8 | eval/
9 |
--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
1 | Copyright (c) 2020, Queensland University of Technology (Ben Talbot, David Hall, and Niko Sünderhauf)
2 | All rights reserved.
3 |
4 | Redistribution and use in source and binary forms, with or without modification,
5 | are permitted provided that the following conditions are met:
6 |
7 | 1. Redistributions of source code must retain the above copyright notice,
8 | this list of conditions and the following disclaimer.
9 |
10 | 2. Redistributions in binary form must reproduce the above copyright notice,
11 | this list of conditions and the following disclaimer in the documentation
12 | and/or other materials provided with the distribution.
13 |
14 | 3. Neither the name of the copyright holder nor the names of its contributors
15 | may be used to endorse or promote products derived from this software
16 | without specific prior written permission.
17 |
18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
19 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
20 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
21 | IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
22 | INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
23 | BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
24 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
25 | LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
26 | OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
27 | THE POSSIBILITY OF SUCH DAMAGE.
28 |
29 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ~ Our Robotic Vision Scene Understanding (RVSU) Challenge is live on EvalAI ~
2 | ~ BenchBot is now powered by NVIDIA Omniverse and Isaac Sim. We are aware of some issues; please report any you encounter. ~
3 | ~ Our BenchBot tutorial is the best place to get started developing with BenchBot ~
4 |
5 | # BenchBot Software Stack
6 |
7 | [](http://benchbot.org)
8 | [](https://qcr.github.io)
9 | 
10 | [](./LICENSE.txt)
11 |
12 | 
13 |
14 | The BenchBot software stack is a collection of software packages that allow end users to control robots in real or simulated environments with a simple Python API. It leverages the simple "observe, act, repeat" approach to robot problems prevalent in reinforcement learning communities ([OpenAI Gym](https://gym.openai.com/) users will find the BenchBot API very familiar).
15 |
16 | BenchBot was created as a tool to assist with the research challenges faced by the semantic scene understanding community; challenges including understanding a scene in simulation, transferring algorithms to real world systems, and meaningfully evaluating algorithm performance. We've since realised that these challenges don't just exist for semantic scene understanding; they're prevalent across a wide range of robotic problems.
17 |
18 | This led us to create version 2 of BenchBot, with a focus on allowing users to define their own functionality for BenchBot through [add-ons](https://github.com/qcr/benchbot_addons). Want to integrate your own environments? Plug in new robot platforms? Define new tasks? Share examples with others? Add evaluation measures? This is all now possible with add-ons, and you don't have to do anything more than add some YAML and Python files defining your new content!
19 |
20 | The "bench" in "BenchBot" refers to benchmarking, with our goal to provide a system that greatly simplifies the benchmarking of novel algorithms in both realistic 3D simulation and on real robot platforms. If there is something else you would like to use BenchBot for (like integrating different simulators), please let us know. We're very interested in BenchBot being the glue between your novel robotics research and whatever your robot platform may be.
21 |
22 | This repository contains the software stack needed to develop solutions for BenchBot tasks on your local machine. It installs and configures a significant amount of software for you, wraps software in stable Docker images (~50GB), and provides simple interaction with the stack through 4 basic scripts: `benchbot_install`, `benchbot_run`, `benchbot_submit`, and `benchbot_eval`.
23 |
24 | ## System recommendations and requirements
25 |
26 | The BenchBot software stack is designed to run seamlessly on a wide range of system configurations (currently limited to Ubuntu 18.04+). System hardware requirements are relatively high due to the demands of 3D simulation (e.g. NVIDIA Omniverse-powered Isaac Sim):
27 |
28 | - NVIDIA graphics card (GeForce RTX 2080 minimum, Quadro RTX 5000 recommended)
29 | - CPU with multiple cores (Intel i7-6800K 7th Generation minimum)
30 | - 32GB+ RAM
31 | - 64GB+ spare storage (an SSD storage device is **strongly** recommended)
32 |
33 | A system meeting the above hardware requirements is all that's needed to begin installing the BenchBot software stack. The install script analyses your system configuration and offers to install any missing software components interactively. The 3rd party software components involved include:
34 |
35 | - NVIDIA GPU Driver (520.60.11+ recommended)
36 | - CUDA with GPU support (10.0+ required, 10.1+ recommended)
37 | - Docker Engine - Community Edition (19.03+ required, 19.03.2+ recommended)
38 | - NVIDIA Container Toolkit (1.0+ required, 1.0.5+ recommended)
39 | - Isaac 2022.2.1 Omniverse simulator (when installing `sim_omni`)
40 |
41 | ## Managing your installation
42 |
43 | Installation is simple:
44 |
45 | ```
46 | u@pc:~$ git clone https://github.com/qcr/benchbot && cd benchbot
47 | u@pc:~$ ./install
48 | ```
49 |
50 | Any missing software components, or configuration issues with your system, should be detected by the install script and resolved interactively (you may be prompted to manually reboot and restart the install script). The installation asks if you want to add BenchBot helper scripts to your `PATH`. Choosing yes will make the following commands available from any directory: `benchbot_install` (same as `./install` above), `benchbot_run`, `benchbot_submit`, `benchbot_eval`, and `benchbot_batch`.
51 |
52 | The BenchBot software stack will frequently check for updates and can update itself automatically. To update, simply run the install script again (add the `--force-clean` flag if you would like to install from scratch):
53 |
54 | ```
55 | u@pc:~$ benchbot_install
56 | ```
57 |
58 | If you decide to uninstall the BenchBot software stack, run:
59 |
60 | ```
61 | u@pc:~$ benchbot_install --uninstall
62 | ```
63 |
64 | There are a number of other options to customise your BenchBot installation, which are all described by running:
65 |
66 | ```
67 | u@pc:~$ benchbot_install --help
68 | ```
69 |
70 | ### Managing installed BenchBot add-ons
71 |
72 | BenchBot installs a default set of add-ons, which is currently `'benchbot-addons/ssu'` (and all of its dependencies declared [here](https://github.com/benchbot-addons/ssu/blob/master/.dependencies)). But you can also choose to install a different set of add-ons instead. For example, the following will also install the `'benchbot-addons/data_collect'` add-ons:
73 |
74 | ```
75 | u@pc:~$ benchbot_install --addons benchbot-addons/ssu,benchbot-addons/data_collect
76 | ```
77 |
78 | See the [BenchBot Add-ons Manager's documentation](https://github.com/qcr/benchbot_addons) for more information on using add-ons. All of our official add-ons can be found in our [benchbot-addons GitHub organisation](https://github.com/benchbot-addons). We're open to adding add-ons contributed by our users to the official list as well.
79 |
80 | ## Getting started
81 |
82 | Getting a solution up and running with BenchBot is as simple as 1, 2, 3. Here's how to use BenchBot with content from the [semantic scene understanding add-on](https://github.com/benchbot-addons/ssu):
83 |
84 | 1. Run a simulator with the BenchBot software stack by selecting an available robot, environment, and task definition:
85 |
86 | ```
87 | u@pc:~$ benchbot_run --robot carter_omni --env miniroom:1 --task semantic_slam:active:ground_truth
88 | ```
89 |
90 | A number of useful flags exist to help you explore what content is available in your installation (see `--help` for full details). For example, you can list what tasks are available via `--list-tasks` and view the task specification via `--show-task TASK_NAME`.
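For example (illustrative invocations of the flags just described, using the task from step 1):

```
u@pc:~$ benchbot_run --list-tasks
u@pc:~$ benchbot_run --show-task semantic_slam:active:ground_truth
```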
91 |
92 | 2. Create a solution to a BenchBot task, and run it against the software stack. To run a solution you must select a mode. For example, if you've created a solution in `my_solution.py` that you would like to run natively:
93 |
94 | ```
95 | u@pc:~$ benchbot_submit --native python my_solution.py
96 | ```
97 |
98 | See `--help` for other options. You also have access to all of the examples available in your installation. For instance, you can run the `hello_active` example in containerised mode via:
99 |
100 | ```
101 | u@pc:~$ benchbot_submit --containerised --example hello_active
102 | ```
103 |
104 | See `--list-examples` and `--show-example EXAMPLE_NAME` for full details on what's available out of the box.
105 |
106 | 3. Evaluate the performance of your system using a supported evaluation method (see `--list-methods`). To use the `omq` evaluation method on `my_results.json`:
107 |
108 | ```
109 | u@pc:~$ benchbot_eval --method omq my_results.json
110 | ```
111 |
112 | You can also simply run evaluation automatically after your submission completes:
113 |
114 | ```
115 | u@pc:~$ benchbot_submit --evaluate-with omq --native --example hello_eval_semantic_slam
116 | ```
117 |
118 | The [BenchBot Tutorial](https://github.com/qcr/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet) is a great place to start working with BenchBot; the tutorial takes you from a blank system to a working Semantic SLAM solution, with many educational steps along the way. Also remember the examples in your installation ([`benchbot-addons/examples_base`](https://github.com/benchbot-addons/examples_base) is a good starting point) which show how to get up and running with the BenchBot software stack.
119 |
120 | ## Power tools for autonomous algorithm evaluation
121 |
122 | Once you are confident your algorithm solves the chosen task, the BenchBot software stack's power tools let you comprehensively explore its performance. You can autonomously run your algorithm over multiple environments, and evaluate it holistically to produce a single summary statistic of your algorithm's performance. Here are some examples, again with content from the [semantic scene understanding add-on](https://github.com/benchbot-addons/ssu):
123 |
124 | - Use `benchbot_batch` to run your algorithm in a number of environments and produce a set of results. The script has a number of toggles available to customise the process (see `--help` for full details). To autonomously run your `semantic_slam:active:ground_truth` algorithm over 3 environments:
125 |
126 | ```
127 | u@pc:~$ benchbot_batch --robot carter_omni --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --native python my_solution.py
128 | ```
129 |
130 | Or you can use one of the pre-defined environment batches installed via add-ons (e.g. [`benchbot-addons/batches_isaac`](https://github.com/benchbot-addons/batches_isaac)):
131 |
132 | ```
133 | u@pc:~$ benchbot_batch --robot carter_omni --task semantic_slam:active:ground_truth --envs-batch develop_1 --native python my_solution.py
134 | ```
135 |
136 | Additionally, you can create a results ZIP and request an overall evaluation score at the end of the batch:
137 |
138 | ```
139 | u@pc:~$ benchbot_batch --robot carter_omni --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --zip --evaluate-with omq --native python my_solution.py
140 | ```
141 |
142 | Lastly, both native and containerised submissions are supported exactly as in `benchbot_submit`:
143 |
144 | ```
145 | u@pc:~$ benchbot_batch --robot carter_omni --task semantic_slam:active:ground_truth --envs miniroom:1,miniroom:3,house:5 --containerised my_solution_folder/
146 | ```
147 |
148 | - You can also directly call the holistic evaluation performed above by `benchbot_batch` through the `benchbot_eval` script. The script supports single result files, multiple results files, or a ZIP of multiple results files. See `benchbot_eval --help` for full details. Below are examples calling `benchbot_eval` with a series of results and a ZIP of results respectively:
149 | ```
150 | u@pc:~$ benchbot_eval --method omq -o my_jsons_scores result_1.json result_2.json result_3.json
151 | ```
152 | ```
153 | u@pc:~$ benchbot_eval --method omq -o my_zip_scores results.zip
154 | ```
155 |
156 | ## Using BenchBot in your research
157 |
158 | BenchBot was made to enable and assist the development of high quality, repeatable research results. We welcome any and all use of the BenchBot software stack in your research.
159 |
160 | To use our system, we just ask that you cite our paper on the BenchBot system. This will help us follow uses of BenchBot in the research community, and understand how we can improve the system to help support future research results. Citation details are as follows:
161 |
162 | ```
163 | @misc{talbot2020benchbot,
164 | title={BenchBot: Evaluating Robotics Research in Photorealistic 3D Simulation and on Real Robots},
165 | author={Ben Talbot and David Hall and Haoyang Zhang and Suman Raj Bista and Rohan Smith and Feras Dayoub and Niko Sünderhauf},
166 | year={2020},
167 | eprint={2008.00635},
168 | archivePrefix={arXiv},
169 | primaryClass={cs.RO}
170 | }
171 | ```
172 |
173 | If you use our BenchBot environments for active robotics (BEAR), which are installed by default, we ask that you also cite our data paper on BEAR. Citation details are as follows:
174 |
175 | ```
176 | @article{hall2022bear,
177 | author = {David Hall and Ben Talbot and Suman Raj Bista and Haoyang Zhang and Rohan Smith and Feras Dayoub and Niko Sünderhauf},
178 | title = {BenchBot environments for active robotics (BEAR): Simulated data for active scene understanding research},
179 | journal = {The International Journal of Robotics Research},
180 | volume = {41},
181 | number = {3},
182 | pages = {259-269},
183 | year = {2022},
184 | doi = {10.1177/02783649211069404},
185 | }
186 | ```
187 |
188 | ## Components of the BenchBot software stack
189 |
190 | The BenchBot software stack is split into a number of standalone components, each with their own GitHub repository and documentation. This repository glues them all together for you into a working system. The components of the stack are:
191 |
192 | - **[benchbot_api](https://github.com/qcr/benchbot_api):** user-facing Python interface to the BenchBot system, allowing the user to control simulated or real robots in simulated or real world environments through simple commands
193 | - **[benchbot_addons](https://github.com/qcr/benchbot_addons):** a Python manager for add-ons to a BenchBot system, with full documentation on how to create and add your own add-ons
194 | - **[benchbot_supervisor](https://github.com/qcr/benchbot_supervisor):** an HTTP server facilitating communication between user-facing interfaces and the underlying robot controller
195 | - **[benchbot_robot_controller](https://github.com/qcr/benchbot_robot_controller):** a wrapping script that controls the low-level ROS functionality of a simulator or real robot, handles automated subprocess management, and exposes interaction via an HTTP server
196 | - **[benchbot_sim_omni](https://github.com/qcr/benchbot_sim_omni):** wrappers around NVIDIA's Omniverse-powered Isaac Sim, providing realistic 3D simulation and lighting (replaces our old Unreal Engine-based [benchbot_sim_unreal](https://github.com/qcr/benchbot_sim_unreal) wrappers)
197 | - **[benchbot_eval](https://github.com/qcr/benchbot_eval):** a Python library for evaluating performance on a task, based on the results produced by a submission
198 |
199 | ## Further information
200 |
201 | - **[FAQs](https://github.com/qcr/benchbot/wiki/FAQs):** a Wiki page providing answers to frequently asked questions and resolutions to common issues
202 | - **[Semantic SLAM Tutorial](https://github.com/qcr/benchbot/wiki/Tutorial:-Performing-Semantic-SLAM-with-Votenet):** a tutorial stepping through creating a semantic SLAM system in BenchBot that utilises the 3D object detector [VoteNet](https://github.com/facebookresearch/votenet)
203 |
204 | ## Supporters
205 |
206 | Development of the BenchBot software stack was directly supported by:
207 |
208 | [](https://research.qut.edu.au/qcr/) [](https://research.csiro.au/mlai-fsp/)
209 |
210 |
211 |
212 |
213 | [](https://www.nvidia.com/en-au/ai-data-science/) [](https://www.roboticvision.org/)
214 |
--------------------------------------------------------------------------------
/bin/.helpers:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | set -euo pipefail
4 | IFS=$'\n\t'
5 |
6 | ################################################################################
7 | ########################### Global BenchBot Settings ###########################
8 | ################################################################################
9 |
10 | BRANCH_DEFAULT="master"
11 |
12 | DOCKER_TAG_CORE="benchbot/core:base"
13 | DOCKER_TAG_BACKEND="benchbot/backend:base"
14 | DOCKER_TAG_SIM_PREFIX="benchbot/simulator:"
15 | DOCKER_TAG_SUBMISSION="benchbot/submission:base"
16 | DOCKER_NETWORK="benchbot_network"
17 |
18 | FILENAME_ENV_GROUND_TRUTH=".benchbot_object_maps"
19 | FILENAME_ENV_METADATA=".benchbot_data_files"
20 |
21 | GIT_ADDONS="https://github.com/qcr/benchbot_addons"
22 | GIT_API="https://github.com/qcr/benchbot_api"
23 | GIT_BENCHBOT="https://github.com/qcr/benchbot"
24 | GIT_CONTROLLER="https://github.com/qcr/benchbot_robot_controller"
25 | GIT_EVAL="https://github.com/qcr/benchbot_eval"
26 | GIT_MSGS="https://github.com/qcr/benchbot_msgs"
27 | GIT_SIMULATOR_PREFIX="https://github.com/qcr/benchbot_"
28 | GIT_SUPERVISOR="https://github.com/qcr/benchbot_supervisor"
29 |
30 | HOSTNAME_DEBUG="benchbot_debug"
31 | HOSTNAME_ROS="benchbot_ros"
32 | HOSTNAME_ROBOT="benchbot_robot"
33 | HOSTNAME_SUPERVISOR="benchbot_supervisor"
34 |
35 | MD5_ISAAC_SDK="06387f9c7a02afa0de835ef07927aadf"
36 |
37 | PATH_ROOT="$(realpath ..)"
38 |
39 | PATH_API="$PATH_ROOT/api"
40 | PATH_ADDONS="$PATH_ROOT/addons"
41 | PATH_ADDONS_INTERNAL="/benchbot/addons"
42 | PATH_CACHE="$PATH_ROOT/.cache"
43 | PATH_DOCKERFILE_BACKEND="$PATH_ROOT/docker/backend.Dockerfile"
44 | PATH_DOCKERFILE_CORE="$PATH_ROOT/docker/core.Dockerfile"
45 | PATH_DOCKERFILE_SHARED="$PATH_ROOT/docker/shared_tools.Dockerfile"
46 | PATH_DOCKERFILE_SIM_PREFIX="$PATH_ROOT/docker/"
47 | PATH_DOCKERFILE_SUBMISSION="$PATH_ROOT/docker/submission.Dockerfile"
48 | PATH_EVAL="$PATH_ROOT/eval"
49 | PATH_ISAAC_SRCS="$PATH_ROOT/isaac"
50 | PATH_LICENSES="$PATH_ROOT/.cache/licenses"
51 | PATH_SYMLINKS="/usr/local/bin"
52 | PATH_TEMP_FILE="/tmp/benchbot_scratch"
53 |
54 | PORT_ROBOT=10000
55 | PORT_SUPERVISOR=10000
56 |
57 | RETIRED_SIMULATORS=(
58 | "sim_unreal:Isaac Sim 2019.2, powered by Unreal engine"
59 | )
60 |
61 | SIZE_GB_FULL=32
62 | SIZE_GB_LITE=20
63 |
64 | SIM_OMNI_ARGS=(
65 | -v $PATH_CACHE/isaac-sim/cache/ov:/root/.cache/ov:rw
66 | -v $PATH_CACHE/isaac-sim/cache/glcache:/root/.cache/nvidia/GLCache:rw
67 | -v $PATH_CACHE/isaac-sim/cache/computecache:/root/.nv/ComputeCache:rw
68 | -v $PATH_CACHE/isaac-sim/logs:/root/.nvidia-omniverse/logs:rw
69 | -v $PATH_CACHE/isaac-sim/config:/root/.nvidia-omniverse/config:rw
70 | -v $PATH_CACHE/isaac-sim/data:/root/.local/share/ov/data:rw
71 | -v $PATH_CACHE/isaac-sim/documents:/root/Documents:rw
72 | )
73 |
74 | SUPPORTED_SIMULATORS=(
75 | "sim_omni:The latest Isaac Sim, powered by Omniverse"
76 | )
77 |
78 | URL_DEBUG="172.20.0.200"
79 | URL_DOCKER_SUBNET="172.20.0.0/24"
80 | URL_DOCKER_GATEWAY="172.20.0.254"
81 | URL_ROS="172.20.0.100"
82 | URL_ROBOT="172.20.0.101"
83 | URL_SUPERVISOR="172.20.0.102"
84 |
85 | ################################################################################
86 | ################## Coloured terminal output & heading blocks ###################
87 | ################################################################################
88 |
89 | colour_red='\033[0;31m'
90 | colour_green='\033[0;32m'
91 | colour_yellow='\033[0;33m'
92 | colour_blue='\033[0;34m'
93 | colour_magenta='\033[0;35m'
94 | colour_nc='\033[0m'
95 |
96 | function header_block() {
97 | header_text=${1:-"Header Block"}
98 | colour=${2:-${colour_red}} # Red
99 | header_char=${3:-"#"}
100 |
101 | len=${#header_text}
102 | let "len_left=(78 - $len)/2" "len_right=(79 - $len)/2"
103 |
104 | echo -e "$colour"
105 | printf "%.0s${header_char}" $(seq 1 80); printf '\n'
106 | printf "%.0s${header_char}" $(seq 1 $len_left); printf " $header_text "; printf "%.0s${header_char}" $(seq $len_right); printf '\n'
107 | printf "%.0s${header_char}" $(seq 1 80)
108 | echo -e "$colour_nc\n"
109 | }
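The width arithmetic above can be sanity-checked in isolation: the left and right padding always combine with the spaced header text to fill an 80-character banner, with the `78`/`79` pair absorbing odd text lengths. A minimal standalone sketch (`pad_total` is a hypothetical name, not part of this script):

```shell
# pad_total TEXT — total banner width implied by header_block's arithmetic:
# left padding + space + text + space + right padding
pad_total() {
  local text="$1"
  local len=${#text}
  local len_left=$(( (78 - len) / 2 ))
  local len_right=$(( (79 - len) / 2 ))  # one wider when the text length is odd
  echo $(( len_left + 1 + len + 1 + len_right ))
}
pad_total "Header Block"   # → 80, regardless of the text's length
```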
110 |
111 | ################################################################################
112 | ######################## Helpers for managing BenchBot #########################
113 | ################################################################################
114 |
115 | function clear_stdin() {
116 | read -t 0.1 -d '' -n 10000 discard || true
117 | }
118 |
119 | function close_network() {
120 | sudo sysctl net.ipv4.conf.all.forwarding=${1:-0}
121 | sudo iptables --policy FORWARD ${2:-DROP}
122 | }
123 |
124 | function eval_version() {
125 | # TODO this refers to evaluating whether an arbitrary version number meets
126 | # some arbitrary version requirement... it does not have anything to do with
127 | # benchbot_eval (this should be renamed to avoid confusion)
128 |
129 | # $1 = version number, $2 = required version number
130 | if [ -z "$1" ] || [[ ! "$1" == [0-9]* ]]; then
131 | return 2 # Bad version text
132 | elif [ "$1" = "$2" ] || [ "$2" = $(echo -e "$1\n$2" | sort -V | head -n 1) ]; then
133 | return 0 # Passes requirement
134 | else
135 | return 1 # Did not pass requirement
136 | fi
137 | }
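The requirement check above hinges on `sort -V` (natural version sort): a requirement is met exactly when it sorts at or before the candidate version. A standalone sketch of just that comparison (`version_ok` is a hypothetical name, not part of this script):

```shell
# version_ok CANDIDATE REQUIRED — succeed when CANDIDATE >= REQUIRED,
# using the same `sort -V | head -n 1` trick as eval_version
version_ok() {
  [ "$1" = "$2" ] || \
    [ "$2" = "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" ]
}
version_ok "19.03.2" "19.03" && echo "meets requirement"
version_ok "18.09" "19.03" || echo "too old"
```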
138 |
139 | function kill_benchbot() {
140 | # $1 set this to run without header block
141 | # $2 binary flag on whether we should keep persistent containers
142 | # TODO make this quieter when I am confident it works as expected...
143 | if [ -z "${1:-}" ]; then
144 | header_block "CLEANING UP ALL BENCHBOT REMNANTS" ${colour_blue}
145 | fi
146 |
147 | targets=$(pgrep -f "docker attach benchbot" || true)
148 | if [ -n "$targets" ]; then
149 | echo -e "${colour_blue}Detached from the following containers:${colour_nc}"
150 | echo "$targets"
151 | for pid in $targets; do kill -9 "$pid"; done
152 | fi
153 |
154 | targets=$(docker ps -q -f name='benchbot*' || true)
155 | if [ "${2:-0}" -ne 0 ]; then
156 | # Remove our persistent containers from our stop list (benchbot_robot and
157 | # benchbot_ros)
158 | p="$(docker ps -q -f name=$HOSTNAME_ROBOT -f name=$HOSTNAME_ROS)"
159 | if [ -n "$p" ]; then
160 | targets="$(echo "$targets" | grep -v "$p" || true)"
161 | fi
162 |
163 | # Kill specific processes within container benchbot_robot container if it
164 | # exists
165 | if [ -n "$(docker ps -q -f name=$HOSTNAME_ROBOT)" ]; then
166 | # Use the supervisor to ask the simulator to stop
167 | printf "\n${colour_blue}%s${colour_nc}\n" \
168 | "Sending stop request to running controller:"
169 | curl -sS -L "$HOSTNAME_ROBOT:$PORT_ROBOT/stop"
170 |
171 | # TODO some wait / success checking logic?
172 | fi
173 | fi
174 | if [ -n "$targets" ]; then
175 | echo -e "\n${colour_blue}Stopped the following containers:${colour_nc}"
176 | docker stop $targets
177 | fi
178 | echo -e "\n${colour_blue}Deleted the following containers:${colour_nc}"
179 | docker system prune -f # TODO this is still maybe a little too aggressive
180 |
181 | printf "\n${colour_blue}Finished cleaning!%s${colour_nc}\n\n" \
182 | "$([ ${2:-0} -ne 0 ] && echo " (use 'benchbot_run -k' for a full clean)")"
183 | }
184 |
185 | function open_network() {
186 | sudo sysctl net.ipv4.conf.all.forwarding=1
187 | sudo iptables --policy FORWARD ACCEPT
188 | }
189 |
190 | function print_version_info() {
191 | hash=$(git rev-parse HEAD)
192 | version_name="$(git log --tags --no-walk --pretty='%H %D' | grep "$hash" | \
193 | sed 's/^[^ ]* //; s/,[^:]*//g; s/tag: //g; s/: /, /g'; true)"
194 | if [ -z "$version_name" ]; then
195 | version_name="__unnamed__"
196 | fi
197 | printf "BenchBot Software Stack.\n"
198 | printf "Version '%s', from branch '%s'\n" \
199 | "$version_name" "$(git name-rev --name-only HEAD)"
200 | printf "(%s)\n" "$hash"
201 | }
202 |
203 | function simulator_installed() {
204 | # $1 query simulator, $2 string returned by simulators_installed
205 | [[ ",$2," == *",$1,"* ]]
206 | return $?
207 | }
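Membership in a comma-separated list can be tested robustly by wrapping both the list and the query in commas, which rules out partial matches (e.g. a query of `sim` accidentally matching inside `sim_omni`). A standalone sketch with a hypothetical `in_list` helper:

```shell
# in_list NAME CSV — succeed if NAME is an exact element of the
# comma-separated list CSV (the added commas on both sides prevent
# prefix/suffix false positives)
in_list() { [[ ",$2," == *",$1,"* ]]; }
in_list "sim_omni" "sim_omni,sim_unreal" && echo "found"
in_list "sim" "sim_omni,sim_unreal" || echo "not found"
```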
208 |
209 | function simulators_installed() {
210 | # TODO overhaul this to work with simulator-specific Docker images
211 | echo "$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \
212 | docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \
213 | 'echo "$BENCHBOT_SIMULATORS"' | tr -d '[:space:]')"
214 | }
215 |
216 | function simulator_supported() {
217 | # $1 simulator name
218 | local s
219 | for s in "${SUPPORTED_SIMULATORS[@]}"; do
220 | if [ "$(echo "$s" | sed 's/:.*//')" = "$1" ]; then return 0; fi
221 | done
222 | return 1;
223 | }
224 |
225 | ################################################################################
226 | ############### Checking if updates are available for components ###############
227 | ################################################################################
228 |
229 | function _is_latest_local_git() {
230 | # $1 = directory, $2 = repo URL, $3 = repo branch, $4 = verbose name of repo
231 | current_hash=$(cd "$1" > /dev/null 2>&1 && git rev-parse HEAD)
232 | latest_hash=$(git ls-remote "$2" "$3" | awk '{print $1}')
233 | if [ -z "$latest_hash" ]; then
234 | echo -e "${colour_red}ERROR: Repo at $2 has no branch '$3'!${colour_nc}"
235 | return 2
236 | fi
237 | echo "Current $4: $current_hash"
238 | echo "Latest $4: $latest_hash"
239 | [ "$current_hash" == "$latest_hash" ]
240 | return
241 | }
242 |
243 | function is_latest_benchbot() {
244 | _is_latest_local_git "$PATH_ROOT" "$GIT_BENCHBOT" "$1" \
245 | "BenchBot"
246 | return
247 | }
248 |
249 | function is_latest_benchbot_api() {
250 | _is_latest_local_git "$PATH_API" "$GIT_API" "$1" "BenchBot API"
251 | return
252 | }
253 |
254 | function is_latest_benchbot_controller() {
255 | current_hash=$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \
256 | docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \
257 | 'cd $BENCHBOT_CONTROLLER_PATH && git rev-parse HEAD' | tr -d '[:space:]')
258 | latest_hash=$(git ls-remote "$GIT_CONTROLLER" "$1" | awk '{print $1}')
259 | echo "Current BenchBot Robot Controller: $current_hash"
260 | echo "Latest BenchBot Robot Controller: $latest_hash"
261 | [ "$current_hash" == "$latest_hash" ]
262 | return
263 | }
264 |
265 | function is_latest_benchbot_eval() {
266 | _is_latest_local_git "$PATH_EVAL" "$GIT_EVAL" "$1" "BenchBot Eval"
267 | return
268 | }
269 |
270 | function is_latest_benchbot_msgs() {
271 | current_hash=$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \
272 | docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \
273 | 'cd $BENCHBOT_MSGS_PATH && git rev-parse HEAD' | tr -d '[:space:]')
274 | latest_hash=$(git ls-remote "$GIT_MSGS" "$1" | awk '{print $1}')
275 | echo "Current BenchBot ROS Messages: $current_hash"
276 | echo "Latest BenchBot ROS Messages: $latest_hash"
277 | [ "$current_hash" == "$latest_hash" ]
278 | return
279 | }
280 |
281 | function is_latest_benchbot_simulator() {
282 | # $1 = simulator name, $2 = branch name
283 | current_hash=$(docker inspect "$DOCKER_TAG_SIM_PREFIX$1" > /dev/null 2>&1 && \
284 | docker run --rm -t "$DOCKER_TAG_SIM_PREFIX$1" /bin/bash -c \
285 | 'cd $BENCHBOT_SIMULATOR_PATH && git rev-parse HEAD' | tr -d '[:space:]')
286 | latest_hash=$(git ls-remote "$GIT_SIMULATOR_PREFIX$1" "$2" | awk '{print $1}')
287 | echo "Current BenchBot Simulator '$1': $current_hash"
288 | echo "Latest BenchBot Simulator '$1': $latest_hash"
289 | [ "$current_hash" == "$latest_hash" ]
290 | return
291 | }
292 |
293 | function is_latest_benchbot_supervisor() {
294 | current_hash=$(docker inspect "$DOCKER_TAG_BACKEND" > /dev/null 2>&1 && \
295 | docker run --rm -t "$DOCKER_TAG_BACKEND" /bin/bash -c \
296 | 'cd $BENCHBOT_SUPERVISOR_PATH && git rev-parse HEAD' | tr -d '[:space:]')
297 | latest_hash=$(git ls-remote "$GIT_SUPERVISOR" "$1" | awk '{print $1}')
298 | echo "Current BenchBot Supervisor: $current_hash"
299 | echo "Latest BenchBot Supervisor: $latest_hash"
300 | [ "$current_hash" == "$latest_hash" ]
301 | return
302 | }
303 |
304 | function latest_version_info() {
305 | # Expects the output of a is_latest_* call on stdin (use a pipe)
306 | echo "$(</dev/stdin)" | grep "Latest" | awk '{print $NF}'
307 | }
308 |
309 | function update_check() {
310 | # $1 = branch to check against
311 | _valid_str="Up-to-date"
312 | _invalid_str="Outdated"
313 | _valid_text="The BenchBot software stack is up-to-date."
314 | _invalid_text="Updates are available; re-run the install script to update."
315 |
316 | echo "Checking the BenchBot software stack for updates ..."
317 | echo ""
318 |
319 | echo -ne "Checking BenchBot version ...\t\t\t"
320 | is_latest_benchbot "$1" > /dev/null
321 |
322 | benchbot_valid=$?
323 | [ $benchbot_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str."
324 | echo -ne "Checking BenchBot API version ...\t\t\t"
325 | is_latest_benchbot_api "$1" > /dev/null
326 | api_valid=$?
327 | [ $api_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str."
328 | echo -ne "Checking BenchBot Eval version ...\t\t\t"
329 | is_latest_benchbot_eval "$1" > /dev/null
330 | eval_valid=$?
331 | [ $eval_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str."
332 | # echo -ne "Checking BenchBot Simulator version ...\t\t\t"
333 | # is_latest_benchbot_simulator "$1" > /dev/null
334 | # simulator_valid=$?
335 | # [ $simulator_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str."
336 | simulator_valid=0
337 | echo -ne "Checking BenchBot Supervisor version ...\t\t"
338 | is_latest_benchbot_supervisor "$1" > /dev/null
339 | supervisor_valid=$?
340 | [ $supervisor_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str."
341 |
342 | echo -ne "Checking installed BenchBot add-ons are up-to-date ...\t"
343 | addons_up_to_date > /dev/null
344 | addons_valid=$?
345 | [ $addons_valid -eq 0 ] && echo "$_valid_str." || echo "$_invalid_str."
346 |
347 | [ $benchbot_valid -eq 0 ] && [ $api_valid -eq 0 ] && \
348 | [ $eval_valid -eq 0 ] && [ $simulator_valid -eq 0 ] && \
349 | [ $supervisor_valid -eq 0 ] && [ $addons_valid -eq 0 ]
350 | valid=$?
351 | if [ $valid -eq 0 ]; then
352 | echo -e "\n$colour_green$_valid_text$colour_nc"
353 | else
354 | echo -e "\n$colour_yellow$_invalid_text$colour_nc";
355 | fi
356 | return $valid
357 | }
358 |
359 |
360 | ################################################################################
361 | ######################### BenchBot Add-ons Management ##########################
362 | ################################################################################
363 |
364 | function addons_up_to_date() {
365 | outdated="$(run_manager_cmd 'print("\n".join(outdated_addons()))')"
366 | echo -e "Outdated add-ons:\n${outdated}"
367 | [ $(echo "$outdated" | sed '/^\s*$/d' | wc -l) -eq 0 ]
368 | return
369 | }
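The emptiness test in `addons_up_to_date` above hinges on stripping blank lines before counting. A minimal, self-contained sketch of that pattern (the `no_outdated` helper name is hypothetical, and POSIX `[[:space:]]` is used in place of GNU sed's `\s`):

```shell
# Succeed (exit 0) only when the given newline-separated list contains no
# non-blank entries -- the same sed + wc pattern addons_up_to_date uses.
no_outdated() {
  [ "$(echo "${1:-}" | sed '/^[[:space:]]*$/d' | wc -l)" -eq 0 ]
}
```

`no_outdated ""` succeeds while `no_outdated "some-addon"` fails, which is what lets the caller branch on `$?`.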
370 |
371 | function env_list() {
372 | # Converts an environment string like "miniroom:1:4" into a space separated
373 | # list like "miniroom:1 miniroom:4"
374 | # $1 env string
375 | name="$(echo "$1" | sed 's/:.*//')"
376 | list=($(echo "$1" | sed 's/[^:]*\(:.*\)/\1/; s/:/\n'$name':/g; s/^ *//'))
377 | if [ ${#list[@]} -eq 0 ]; then list+=(""); fi
378 | echo "${list[@]}"
379 | }
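The sed pipeline in `env_list` is dense; a hedged re-sketch with plain parameter expansion shows the same "name:a:b" to "name:a name:b" behaviour (the `env_list_demo` name is hypothetical, and this sketch assumes the default IFS rather than the script's newline/tab IFS):

```shell
# Expand "miniroom:1:4" into "miniroom:1 miniroom:4"; strings without a
# variant ("house") pass through unchanged.
env_list_demo() {
  local name rest
  name="${1%%:*}"            # text before the first ':'
  rest="${1#"$name"}"        # e.g. ":1:4", or "" when no variant given
  if [ -z "$rest" ]; then echo "$1"; return; fi
  # Split the variants, re-prefix each with the name, then word-split
  # back onto one space-separated line (quoting deliberately omitted)
  echo $(echo "$rest" | tr ':' '\n' | sed "/^\$/d; s/^/$name:/")
}
```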
380 |
381 | function env_name() {
382 | # $1 env string
383 | echo ${1:-} | sed 's/:[^:]*$//'
384 | }
385 |
386 | function env_variant() {
387 | # $1 env string
388 | echo ${1:-} | sed 's/^.*://'
389 | }
390 |
391 | function install_addons() {
392 | dirty=($(run_manager_cmd 'print("\n".join(dirty_addons()))'))
393 | if [ ${#dirty[@]} -gt 0 ]; then
394 | printf "\n${colour_yellow}%s%s\n" \
395 |       "WARNING: the following add-ons have local uncommitted changes. Commit or" \
396 | " delete them if you would like to return to a stable state."
397 | printf "\n\t%s" "${dirty[@]}"
398 | printf "${colour_nc}\n"
399 | fi
400 | printf "\n${colour_blue}%s${colour_nc}\n\n" \
401 | "Configuring locations for local add-on content:"
402 | folders="$(run_manager_cmd 'print(local_addon_path())')/{$(run_manager_cmd \
403 | 'print(",".join(SUPPORTED_TYPES))')}"
404 | eval echo $folders | xargs mkdir -pv
405 | printf "Created '$folders'.\n"
406 | printf "\n${colour_blue}%s${colour_nc}\n" \
407 | "Installing add-ons based on the request string '${1}':"
408 | run_manager_cmd 'install_addons("'$1'")' '\n' '\n'
409 |
410 | printf "\n${colour_blue}%s${colour_nc}\n" \
411 | "Installing external add-on dependencies:"
412 | run_manager_cmd 'install_external_deps()' '\n' '\n'
413 |
414 | containers=( $(docker images \
415 | --filter "reference=$DOCKER_TAG_BACKEND" \
416 | --filter "reference=$DOCKER_TAG_SIM_PREFIX*" \
417 | --format "{{.Repository}}:{{.Tag}}"
418 | ) )
419 | for c in "${containers[@]}"; do
420 | printf "\n${colour_blue}%s${colour_nc}\n\n" \
421 | "Baking external add-on dependencies into '$c' container:"
422 | docker run --name tmp --detach -it "$c" /bin/bash
423 |     py="$(docker exec -it tmp /bin/bash -c "which pip3 || which pip2" | \
424 | tr -d '[:space:]')"
425 | docker exec -it tmp /bin/bash -c "$(run_manager_cmd \
426 | 'print(install_external_deps(True))' | sed "s|pip3|$py|")"
427 | docker commit tmp "$c"
428 | docker rm -f tmp
429 | printf "\n"
430 | done
431 | }
432 |
433 | function list_addons() {
434 | run_manager_cmd 'print_state()' '\n' '\n\n'
435 | }
436 |
437 | function list_content() {
438 | # $1 content type, $2 list prefix text, $3 optional "an" instead of "a", $4
439 | # optional remove n characters to get singular version
440 | singular=${1::-${4:-1}}
441 | l="$(run_manager_cmd '[print("\t%s" % r) for r in \
442 | sorted(get_field("'$1'", "name"))]')"
443 | echo "$2"
444 | if [ -z "$l" ]; then echo -e "\tNONE!"; else echo "$l"; fi
445 | echo "
446 | See the '--show-"$singular" "${singular^^}"_NAME' command for specific "\
447 | "details about
448 | each "$singular", or check you have the appropriate add-on installed if you are
449 | missing "${3:-a}" "${singular}".
450 | "
451 | }
452 |
453 | function list_environments() {
454 | # $1 list prefix text, $2 optional "an" instead of "a"
455 | text="environments"
456 | singular=${text::-1}
457 | l="$(run_manager_cmd '[print("\t%s" % r) for r in sorted([\
458 | ":".join(str(f) for f in e) \
459 | for e in get_fields("'$text'", ["name", "variant"])])]')"
460 | echo "$1"
461 | if [ -z "$l" ]; then echo -e "\tNONE!"; else echo "$l"; fi
462 | echo "
463 | See the '--show-"$singular" "${singular^^}"_NAME' command for specific "\
464 | "details about
465 | each "$singular", or check you have the appropriate add-on installed if you are
466 | missing "${2:-a}" "${singular}".
467 | "
468 | }
469 |
470 | function list_simulators() {
471 | simulators="$(simulators_installed)"
472 | printf "\nThe following simulator options are supported by BenchBot:\n"
473 | for s in "${SUPPORTED_SIMULATORS[@]}"; do
474 | n="$(echo "$s" | sed 's/:.*//')"
475 | d="$(echo "$s" | sed 's/^[^:]*://')"
476 | if simulator_installed "$n" "$simulators"; then
477 | printf "${colour_green}\t%-16s$d (installed)${colour_nc}\n" "$n"
478 | else
479 | printf "\t%-16s$d (available)\n" "$n"
480 | fi
481 | done
482 |
483 | printf "\nSupport is retired for the following simulator options:\n"
484 | for s in "${RETIRED_SIMULATORS[@]}"; do
485 | n="$(echo "$s" | sed 's/:.*//')"
486 | d="$(echo "$s" | sed 's/^[^:]*://')"
487 | if simulator_installed "$n" "$simulators"; then
488 | printf "${colour_yellow}\t%-16s$d (installed)${colour_nc}\n" "$n"
489 | else
490 | printf "\t$n\t$d\n"
491 | fi
492 | done
493 | printf "\n"
494 | }
495 |
496 | function remove_addons() {
497 | run_manager_cmd 'remove_addons("'$1'")' '\n' '\n\n'
498 | }
499 |
500 | function run_manager_cmd() {
501 | pushd "$PATH_ROOT/bin" &> /dev/null
502 | bash addons "${1}" "${2-}" "${3-}"
503 | popd &> /dev/null
504 | }
505 |
506 | function show_content() {
507 | # $1 content type, $2 name of selected content, $3 optional remove n
508 | # characters to get singular version
509 | singular=${1::-${3:-1}}
510 | if [ "$(run_manager_cmd 'print(exists("'$1'", [("name", "'$2'")]))')" \
511 | != "True" ]; then
512 | printf "%s %s\n" "${singular^} '$2' is not a supported ${singular}." \
513 | "Please check '--list-$1'."
514 | exit 1
515 | fi
516 | location=$(run_manager_cmd 'print(get_match("'$1'", [("name", "'$2'")]))')
517 | printf "${singular^} '$2' was found at the following location:\n\n\t%s\n\n" \
518 | "$location"
519 | printf "Printed below are the first 30 lines of the definition file:\n\n"
520 | head -n 30 "$location"
521 | printf "\n"
522 | }
523 |
524 | function show_environment() {
525 | # $1 name of selected environment
526 | text="environments"
527 | singular=${text::-1}
528 | name="$(env_name $1)"
529 | variant="$(env_variant $1)"
530 | if [ "$(run_manager_cmd 'print(exists("'$text'", \
531 | [("name", "'$name'"), ("variant", "'$variant'")]))')" != "True" ]; then
532 | printf "%s %s\n" "${singular^} '$1' is not a supported ${singular}." \
533 | "Please check '--list-$text'."
534 | exit 1
535 | fi
536 | location=$(run_manager_cmd 'print(get_match("'$text'", \
537 | [("name", "'$name'"), ("variant", "'$variant'")]))')
538 | printf "${singular^} '$1' was found at the following location:\n\n\t%s\n\n" \
539 | "$location"
540 | printf "Printed below are the first 30 lines of the definition file:\n\n"
541 | head -n 30 "$location"
542 | printf "\n"
543 | :
544 | }
545 |
546 | ################################################################################
547 | ##################### Shared argument validation & parsing #####################
548 | ################################################################################
549 |
550 | function expand_submission_mode() {
551 | if [ -z "${1:-}" ]; then return; fi
552 | mode="${1//-}"
553 | if [[ "$mode" == n* ]]; then
554 | echo "native";
555 | elif [[ "$mode" == c* ]]; then
556 | echo "containerised";
557 | elif [[ "$mode" == s* ]]; then
558 | echo "submission";
559 | fi
560 | }
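The prefix matching above can equally be read as a case statement; this hedged re-sketch (`expand_mode` is an illustrative name, not part of the scripts) makes the three expansions explicit:

```shell
# Strip dashes, then expand any n*/c*/s* prefix to its full mode name.
# Unknown or empty input produces no output, as in the function above.
expand_mode() {
  case "${1//-/}" in
    n*) echo "native" ;;
    c*) echo "containerised" ;;
    s*) echo "submission" ;;
  esac
}
```

`expand_mode n` and `expand_mode --native` both print `native`, which is why callers only need to supply the first letter of a mode.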
561 |
562 | function submission_command() {
563 | # $1 mode (numeric), $2 mode_details, $3 example name, $4 args
564 | if [ "$1" == 0 ]; then
565 | echo "$2 ${4:-}";
566 | elif [ $1 == 1 ]; then
567 | echo "docker build -f $(submission_dockerfile "$1" "$2" "$3") ."
568 | elif [ $1 == 2 ]; then
569 | echo "tar -czvf $([ -z "${4:-}" ] && \
570 |       echo "$SUBMISSION_OUTPUT_DEFAULT" || echo "$4") ."
571 | elif [ $1 == 3 ]; then
572 | echo "$(run_manager_cmd 'print(get_value_by_name("examples", "'$3'", \
573 | "native_command"))') ${4:-}"
574 | elif [ $1 == 4 ]; then
575 | contdir="$(run_manager_cmd 'x = get_value_by_name("examples", "'$3'", \
576 | "container_directory"); x and print(x)')"
577 | echo "docker build -f $(submission_dockerfile "$1" "$2" "$3") $( \
578 | [ -n "$contdir" ] && echo "$contdir" || echo ".")"
579 | fi
580 | }
581 |
582 | function submission_directory() {
583 | # $1 mode (numeric), $2 mode_details, $3 example name
584 | if [ "$1" == 0 ] || [ "$1" == 1 ]; then
585 | echo "$(realpath ".")"
586 | elif [ "$1" == 2 ]; then
587 | echo "$(realpath "$2")"
588 | elif [ "$1" == 3 ] || [ "$1" == 4 ]; then
589 | echo "$(realpath "$(dirname "$(run_manager_cmd \
590 | 'print(get_match("examples", [("name", "'$3'")]))')")")"
591 | fi
592 | }
593 |
594 | function submission_dockerfile() {
595 | # $1 mode (numeric), $2 mode_details, $3 example name,
596 | if [ "$1" == 1 ]; then
597 | echo "$(echo "$2" | sed 's/\/\?\s*$//')$([ ! -f "$2" ] && \
598 | echo "/Dockerfile")"
599 | elif [ "$1" == 4 ]; then
600 |
601 | contfn="$(run_manager_cmd 'x = get_value_by_name("examples", "'$3'", \
602 | "container_filename"); x and print(x)')"
603 | contdir="$(run_manager_cmd 'x = get_value_by_name("examples", "'$3'", \
604 | "container_directory"); x and print(x)')"
605 | echo "$(realpath "$(submission_directory "$1" "$2" "$3")/$(\
606 |       [ -z "$contfn" ] && echo "$contdir/Dockerfile" || echo "$contfn" )")"
607 | fi
608 | }
609 |
610 | function submission_mode() {
611 | # Returns a numeric submission mode given the arguments. Valid submission
612 | # modes are:
613 | # 0 - submission via native command
614 | # 1 - containerised submission via Docker
615 | # 2 - submission via tarring for uploading elsewhere
616 | # 3 - example submission via native command
617 | # 4 - example containerised submission via Docker
618 | # $1 example name, $2 example_containerised, $3 mode
619 | mode="$(expand_submission_mode $3)"
620 | if [ -n "$1" ] && [ -z "$2" ]; then
621 | echo 3
622 | elif [ -n "$1" ]; then
623 | echo 4
624 | elif [ "$mode" == "submission" ]; then
625 | echo 2
626 | elif [ "$mode" == "containerised" ]; then
627 | echo 1
628 | elif [ "$mode" == "native" ]; then
629 | echo 0
630 | fi
631 | }
632 |
633 | function submission_mode_string() {
634 | # $1 mode (numeric), $2 example
635 | if [ "$1" == 0 ]; then
636 | echo "Native"
637 | elif [ "$1" == 1 ]; then
638 | echo "Containerised"
639 | elif [ "$1" == 2 ]; then
640 | echo "Submission *.tgz creation"
641 | elif [ "$1" == 3 ]; then
642 | echo "Native (with example '$2')"
643 | elif [ "$1" == 4 ]; then
644 | echo "Containerised (with example '$2')"
645 | fi
646 | }
647 |
648 | function submission_output() {
649 | # $1 mode (numeric), $2 args
650 | if [ "$1" == 2 ]; then
651 | [ -z "${2:-}" ] && echo "$SUBMISSION_OUTPUT_DEFAULT" || echo "$2"
652 | fi
653 | }
654 |
655 |
656 | function _validate_batch_envs() {
657 | # $1 requested envs, $2 requested envs batch
658 | if [ -n "$1" ] && [ -n "$2" ]; then
659 | printf "${colour_red}%s${colour_nc}\n" \
660 | "ERROR: Only '--envs' or '--envs-batch' is valid, not both."
661 | elif [ -z "$1" ] && [ -z "$2" ]; then
662 | printf "${colour_red}%s %s${colour_nc}\n" \
663 | "ERROR: No environments were provided via either" \
664 | "'--envs' or '--envs-batch'"
665 | fi
666 | }
667 |
668 | function _validate_content() {
669 | # $1 = content type; $2 = name; $3 = full name (optional); $4 override check
670 | # with this value (optional); $5 optional remove n characters to get singular
671 | # version; $6 mention this script as source of list flag (optional)
672 | singular=${1::-${5:-1}}
673 | full=$([ -z "${3-}" ] && echo "$2" || echo "$3")
674 | check="$([ -z "${4-}" ] && \
675 | echo "$(run_manager_cmd 'print(exists("'$1'", [("name", "'$2'")]))')" || \
676 | echo "$4")"
677 | if [ "$check" != "True" ]; then
678 | printf "%s %s\n" "${singular^} '$2' is not a supported ${singular}." \
679 | "Please check '$([ -n "${6:-}" ] && echo "$6")--list-$1'."
680 | printf "\n${colour_red}%s${colour_nc}\n" \
681 | "ERROR: Invalid ${singular} selected (${singular} = '$full')"
682 | fi
683 | }
684 |
685 | function _validate_environment() {
686 | # $1 = name; $2 = full name
687 | _validate_content "environments" "$1" "${2-}" \
688 | "$(run_manager_cmd 'print(exists("environments", \
689 | [("name", "'$(env_name $1)'"), ("variant", "'$(env_variant $1)'")]))')"
690 | }
691 |
692 | function _validate_environment_count() {
693 | # $1 = number of selected environments, $2 task
694 | scene_count="$(run_manager_cmd 'print(\
695 | get_value_by_name("tasks", "'$2'", "scene_count"))')"
696 | if [[ "$scene_count" == *"None"* ]]; then scene_count=1; fi
697 | if [ $scene_count -ne $1 ]; then
698 | printf "${colour_red}%s\n %s${colour_nc}\n" \
699 | "ERROR: Selected $1 environment/s for a task which requires $scene_count" \
700 |       "environment/s ('$2')"
701 | fi
702 | }
703 |
704 | function _validate_evaluation_method() {
705 | # $1 evaluation method, $2 validate only (optional)
706 | v=${2:-}
707 | err=
708 | if [ -z "$1" ] && [ -z "$v" ]; then
709 | err="$(printf "%s %s\n" "Evaluation was requested but no evaluation"\
710 | "method was selected. A selection is required.")"
711 | elif [ -z "$v" ] && \
712 | [ "$(run_manager_cmd 'print(exists("evaluation_methods", \
713 | [("name", "'$1'")]))')" != "True" ]; then
714 | err="$(printf "%s %s\n" "Evaluation method '$1' is not supported." \
715 | "Please check '--list-methods'.")"
716 | fi
717 |
718 | if [ -n "$err" ]; then
719 | printf "$err\n"
720 | printf "\n${colour_red}%s${colour_nc}" \
721 |       "ERROR: Invalid evaluation mode selected (evaluation method = '$1')"
722 | fi
723 | }
724 |
725 | function _validate_required_envs() {
726 | # $1 required envs, $2 required envs batch
727 | if [ -n "$1" ] && [ -n "$2" ]; then
728 | printf "${colour_red}%s %s${colour_nc}\n" \
729 | "ERROR: Only '--required-envs' or '--required-envs-batch' is valid,"\
730 | "not both."
731 | elif [ -n "$2" ]; then
732 | _validate_content "batches" "$2" "" "" 2
733 | fi
734 | }
735 |
736 | function _validate_results_files() {
737 | # $@ results files list
738 | err=
739 | if [ $# -eq 0 ]; then
740 | err="$(printf "%s %s\n" "No results file/s were provided. Please run" \
741 | "again with a results file.")"
742 | else
743 | for r in "$@"; do
744 | if [ ! -f "$r" ]; then
745 | err="$(printf "%s %s\n" "Results file '$r' either doesn't exist," \
746 | "or isn't a file.")"
747 | fi
748 | done
749 | fi
750 |
751 | if [ -n "$err" ]; then
752 | printf "$err\n"
753 | printf "\n${colour_red}%s${colour_nc}" \
754 | "ERROR: Results file/s provided were invalid. See errors above."
755 | fi
756 | }
757 |
758 | function _validate_results_possible() {
759 | # Validates whether creating results is feasible
760 | # $1 mode (expanded), $2 evaluate method, $3 results_location
761 | if [[ "$1" == s* && ( -n "$2" || -n "$3" ) ]]; then
762 | printf "%s %s\n" "Cannot create results or perform evaluation in '$1'" \
763 | "mode. Please run again in a different mode."
764 | printf "\n${colour_red}%s${colour_nc}" \
765 | "ERROR: Requested results evaluation from 'submission' mode."
766 | fi
767 | }
768 |
769 | function _validate_simulators() {
770 | # Validate whether simulator selection is supported
771 | # $1 comma-separated simulator list
772 | err=
773 | sims=($(echo "$1" | sed 's/,/\n/g'))
774 | local s
775 | for s in "${sims[@]}"; do
776 | if ! simulator_supported "$s"; then
777 | err="Simulator '$s' is not a supported simulator."
778 | fi
779 | done
780 |
781 | if [ -n "$err" ]; then
782 | printf "$err\n"
783 | printf "\n${colour_red}%s%s${colour_nc}" \
784 | "ERROR: Simulator selection was invalid. " \
785 | "See errors above & --list-simulators."
786 | fi
787 | }
788 |
789 | function _validate_submission_mode() {
790 | # $1 example name, $2 example_containerised, $3 mode, $4 mode (numeric),
791 | # $5 duplicate mode flag, $6 mode_details, $7 args
792 | err=
793 |
794 | # Handle nonsensical combinations
795 | if [ -n "$1" ] && [ -n "$3" ]; then
796 | err="$(printf "%s %s\n" "Provided both an example name," \
797 | "and submission mode ('$3')")"
798 | elif [ -z "$1" ] && [ -n "$2" ]; then
799 | err="$(printf "%s\n" \
800 | "Requested example containerisation but no example was provided.")"
801 | elif [ -n "$5" ]; then
802 | err="$(printf "%s %s\n" "Selected more than 1 mode, please only select" \
803 | "one of 'native', 'containerised', or 'submission'.")"
804 | elif [ -z "$1" ]; then
805 | if [ -z "$3" ]; then
806 | err="$(printf "%s %s\n" "No mode was selected. Please select an" \
807 | "example or one of 'native', 'containerised', or 'submission' modes.")"
808 | elif [ -z "$6" ]; then
809 |       err="$(printf "%s %s\n" "Mode '$3' requires arguments, but none were" \
810 | "provided. Please see '--help' for details.")"
811 | fi
812 | fi
813 |
814 | # Handle errors in submission data
815 | if [ -z "$err" ]; then
816 | cmd="$(submission_command "$4" "$6" "$1" "$7")"
817 | dir="$(submission_directory "$4" "$6" "$1")"
818 | df="$(submission_dockerfile "$4" "$6" "$1")"
819 | if [ -n "$df" ] && [ ! -f "$df" ]; then
820 | err="$(printf "%s %s\n\t%s\n" "Containerised mode requested with the" \
821 | "following non-existent Dockerfile:" "$df")"
822 | elif [ "$4" == 2 ] && [ ! -d "$dir" ]; then
823 | err="$(printf "%s %s\n\t%s\n" "Mode '$3' was requested to tar & submit" \
824 | "the following non-existent directory:" "$dir")"
825 | fi
826 | fi
827 |
828 | # Print the error if we found one
829 | if [ -n "$err" ]; then
830 | printf "$err\n"
831 | printf "\n${colour_red}%s${colour_nc}" \
832 | "ERROR: Mode selection was invalid. See errors above."
833 | fi
834 | }
835 |
836 | function _validate_type() {
837 | # Ensures your installation supports the requested type
838 | # $1 = type
839 | if [ "$1" == "real" ]; then
840 | printf "\n${colour_yellow}%s\n%s${colour_nc}\n" \
841 | "WARNING: Requested running with '$1'. Assuming you have an available" \
842 | "real robot & environment."
843 | return
844 | fi
845 |
846 | simulators="$(simulators_installed)"
847 |   if ! simulator_installed "$1" "$simulators"; then
848 | printf "\n${colour_red}%s\n %s${colour_nc}\n" \
849 | "ERROR: Requested running with '$1', but that simulator isn't installed." \
850 | "Installed simulator/s are: '$simulators'"
851 | fi
852 | }
853 |
854 | function _validate_types() {
855 | # Ensures all types are the same / consistent
856 | # $1 = robot name; $2 = environment string, $3 ... environments list
857 | robot="$1"
858 | env="$2"
859 | shift 2
860 | envs=($@)
861 | types=()
862 | types+=("$(run_manager_cmd 'print(\
863 | get_value_by_name("robots", "'$robot'", "type"))')")
864 | for e in "${envs[@]}"; do
865 | types+=($(run_manager_cmd 'print(\
866 | get_value_by_name("environments", "'$e'", "type"))'))
867 | done
868 |
869 | err=
870 | for i in "${!types[@]}"; do
871 | if [ "${types[$i]}" != "${types[0]}" ]; then err=1; fi
872 | done
873 |
874 | if [ -n "$err" ]; then
875 | printf "%s %s\n%s\n\n" "Robot & environment types aren't consistent." \
876 | "Please ensure each of the following" "have the same type:"
877 | for i in "${!types[@]}"; do
878 | if [ $i -eq 0 ]; then
879 | printf "\tRobot '$robot' has type '${types[$i]}'\n"
880 | else
881 | printf "\tEnvironment '${envs[$((i-1))]}' has type '${types[$i]}'\n"
882 | fi
883 | done
884 | printf "\n${colour_red}%s${colour_nc}\n" \
885 | "ERROR: Inconsistent types selected (robot = '$1', environment = '$env')"
886 | fi
887 | }
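The consistency loop above compares every collected type against `types[0]`. The same check can be sketched as a tiny standalone helper (`all_same` is a hypothetical name used only for illustration):

```shell
# Succeed only when every argument equals the first one -- the property
# _validate_types enforces across the robot and environment types.
all_same() {
  local first="$1" t
  for t in "$@"; do
    [ "$t" = "$first" ] || return 1
  done
}
```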
888 |
889 | function validate_batch_args() {
890 | # $1 robot, $2 task, $3 envs_str, $4 envs_batch name, $5 example name,
891 | # $6 example_containerised, $7 evaluation method, $8 containerised, $9
892 | # native, $10 mode details, $11 args, $12... environments list
893 | robot="$1"
894 | task="$2"
895 | envs_str="$3"
896 | envs_batch="$4"
897 | example="$5"
898 | example_containerised="$6"
899 | evaluation_method="$7"
900 | containerised="$8"
901 | native="$9"
902 | mode_details="${10}"
903 | args="${11}"
904 | shift 11
905 | envs_list=($@)
906 |
907 | # Handle batch specific validation first
908 | err="$(_validate_batch_envs "$envs_str" "$envs_batch")"
909 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
910 |
911 | # Divert to validation functions for each of the sub-scripts
912 | mode="$([ -n "$native" ] && echo "$native" || echo "$containerised")"
913 | mode_num="$(submission_mode "$example" "$example_containerised" "$mode")"
914 | mode_dup=
915 | if [ -n "$native" ] && [ -n "$containerised" ]; then mode_dup=1; fi
916 | validate_submit_args "$evaluation_method" "$example" \
917 | "$example_containerised" "$mode" "$mode_num" "$mode_dup" "$mode_details" \
918 | "" "$args"
919 |
920 | if [ -n "$evaluation_method" ]; then
921 | envs="$(echo "${envs_list[@]}" | tr ' ' ',')"
922 |
923 |     # This is gross... but validation requires results files to exist, so
924 |     # we use a placeholder file for results that aren't created yet
925 | touch "$PATH_TEMP_FILE"
926 | validate_eval_args "$envs" "" "$envs" "$task" "$evaluation_method" "" \
927 | "$PATH_TEMP_FILE"
928 | rm "$PATH_TEMP_FILE"
929 | fi
930 |
931 | type=$(run_manager_cmd 'exists("robots", [("name", "'$robot'")]) and \
932 | print(get_value_by_name("robots", "'$robot'", "type"))')
933 | for e in "${envs_list[@]}"; do
934 | envs=($(env_list "$e" | tr ' ' '\n'))
935 | validate_run_args "$robot" "$task" "$type" "$e" "${envs[@]}"
936 | done
937 |
938 | }
939 |
940 | function validate_eval_args() {
941 | # $1 required_envs, $2 required_envs_batch,
942 | # $3 required_envs_list (comma separated), $4 required_task,
943 | # $5 evaluation method, $6 validate_only, $7 ... results files list
944 | required_envs="$1"
945 | required_envs_batch="$2"
946 | required_envs_list=($(echo "$3" | tr ',' '\n'))
947 | required_task="$4"
948 | method="$5"
949 | validate_only="$6"
950 | shift 6
951 | results_files=($@)
952 |
953 | err="$(_validate_required_envs "$required_envs" "$required_envs_batch")"
954 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
955 | if [ -n "${required_envs_list:-}" ]; then
956 | for e in "${required_envs_list[@]}"; do
957 | es=($(env_list "$e" | tr ' ' '\n'))
958 | for ee in "${es[@]}"; do
959 | err="$(_validate_environment "$ee")"
960 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
961 | done
962 | done
963 | fi
964 | if [ -n "$required_task" ]; then
965 | err="$(_validate_content "tasks" "$required_task")"
966 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
967 | fi
968 | err="$(_validate_evaluation_method "$method" "$validate_only")"
969 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
970 | err="$(_validate_results_files "${results_files[@]}")"
971 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
972 | }
973 |
974 | function validate_install_args() {
975 | # $1 simulator selection
976 | simulators="$1"
977 |
978 | err="$(_validate_simulators "$simulators")"
979 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
980 | }
981 |
982 | function validate_run_args() {
983 | # $1 robot, $2 task, $3 type of run, $4 environments_string,
984 | # $5... environments list
985 | robot="$1"
986 | task="$2"
987 | type="$3"
988 | env="$4"
989 | shift 4
990 | envs=($@)
991 |
992 | err="$(_validate_content "robots" "$robot")"
993 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
994 | err="$(_validate_content "tasks" "$task")"
995 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
996 | results_format="$(run_manager_cmd 'rf = get_value_by_name("tasks", \
997 | "'$task'", "results_format"); rf is not None and print(rf)')"
998 | [ -n "$results_format" ] && \
999 | err="$(_validate_content "formats" "$results_format")"
1000 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1001 | [ ${#envs[@]} -eq 0 ] && envs+=("")
1002 | for e in "${envs[@]}"; do
1003 | err="$(_validate_environment "$e" "$env")"
1004 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1005 | done
1006 | err="$(_validate_types "$robot" "$env" "${envs[@]}")"
1007 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1008 | err="$(_validate_type "$type")"
1009 | if [[ "$err" == *"ERROR:"* ]]; then echo "$err"; exit 1; fi
1010 | err="$(_validate_environment_count ${#envs[@]} "$task")"
1011 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1012 | }
1013 |
1014 | function validate_submit_args() {
1015 | # $1 evaluation method, $2 example, $3 example_containerised, $4 mode,
1016 | # $5 mode_num, $6 duplicate mode flag, $7 mode details, $8 results location,
1017 | # $9 args
1018 | if [ -n "$1" ]; then
1019 | err="$(_validate_evaluation_method "$1")"
1020 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1021 | fi
1022 | err="$([ -n "$2" ] && _validate_content "examples" "$2" || true)"
1023 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1024 | err="$(_validate_submission_mode "$2" "$3" "$4" "$5" "$6" "$7" "$9")"
1025 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1026 |   err="$(_validate_results_possible "$4" "$1" "$8")"
1027 | if [ -n "$err" ]; then echo "$err"; exit 1; fi
1028 | }
1029 |
1030 |
--------------------------------------------------------------------------------
/bin/addons:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | #
3 | # Bash script for simplifying calls to the add-on manager
4 | #
5 | # Usage:
6 | # $1 = command to run (required)
7 | # $2 = string to print before command output (optional)
8 | # $3 = string to print after command output (optional)
9 | set -euo pipefail
10 | IFS=$'\n\t'
11 |
12 | if [ -n "${2-}" ]; then printf "$2"; fi
13 | python3 -c 'from benchbot_addons.manager import *; '"$1"
14 | if [ -n "${3-}" ]; then printf "$3"; fi
15 |
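The wrapper above boils down to "optional prefix, run a one-liner against the manager module, optional suffix". A self-contained analogue (the `run_py` name is hypothetical, and plain `python3 -c` stands in for the `benchbot_addons` import, which is not assumed here) shows the calling pattern:

```shell
# Print $2 (if given), run the python3 one-liner $1, then print $3 (if
# given). printf '%b' interprets \n escapes, like the wrapper's printf.
run_py() {
  if [ -n "${2-}" ]; then printf '%b' "$2"; fi
  python3 -c "$1"
  if [ -n "${3-}" ]; then printf '%b' "$3"; fi
}
```

Called as `run_py 'print_state()' '\n' '\n\n'` would mirror how `run_manager_cmd` forwards its three arguments to this script.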
--------------------------------------------------------------------------------
/bin/benchbot_batch:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | ################################################################################
4 | ################### Load Helpers & Global BenchBot Settings ####################
5 | ################################################################################
6 |
7 | set -euo pipefail
8 | IFS=$'\n\t'
9 | abs_path=$(readlink -f $0)
10 | pushd $(dirname $abs_path) > /dev/null
11 | source .helpers
12 | popd > /dev/null
13 |
14 | ################################################################################
15 | ########################### Script Specific Settings ###########################
16 | ################################################################################
17 |
18 | DEFAULT_PREFIX="batch"
19 | SUBMIT_CAT_FILE="/tmp/benchbot_batch_stdin"
20 |
21 | ################################################################################
22 | ######################## Helper functions for commands #########################
23 | ################################################################################
24 |
25 | usage_text="$(basename "$0") -- Helper script for running a solution against multiple
26 | environments in a single command. Use this script when you have developed a
27 | task solution that you would like to extensively test, or when you would like
28 | to create a submission to a challenge. Add-ons can include 'batches', which are
29 | environment lists, like those used for evaluating tasks in official challenges.
30 |
31 | The $(basename "$0") script is roughly equivalent to the following:
32 |
33 | for ENV,I in ENV_LIST:
34 | benchbot_run -t TASK -e ENV -f
35 | benchbot_submit -r PREFIX_I.json SUBMISSION_COMMAND
36 |
37 | if [-z|--zip]:
38 | zip PREFIX.zip PREFIX_[0-9]*.json
39 |
40 | if [--evaluate-with]:
41 | benchbot_eval -m EVAL_METHOD -o PREFIX_scores.json PREFIX_[0-9]*.json
42 |
43 | As such, see the help documentation for 'benchbot_run', 'benchbot_submit', &
44 | 'benchbot_eval' for further details about how each of the below arguments work.
45 |
46 | USAGE:
47 |
48 | Get information about the submission options:
49 | $(basename "$0") [-h|--help]
50 |
51 | Run a submission natively for the task semantic_slam:passive:ground_truth
52 | in each of the available house scenes, saving the results with the default
53 | prefix '$DEFAULT_PREFIX':
54 | $(basename "$0") [-t|--task] semantic_slam:passive:ground_truth \\
55 | [-e|--envs] house:1,house:2,house:3,house:4,house:5 \\
56 | [-n|--native] COMMAND_TO_RUN
57 |
58 | Run a submission for the scd:active:dead_reckoning task in a containerised
59 | environment, for a list of scenes specified in the environment batch called
60 | 'challenge_1', saving the results with the prefix 'mix'. Then evaluate the
61 | results using 'omq' to produce a final score:
62 | $(basename "$0") [-t|--task] scd:active:dead_reckoning \\
63 | [-E|--envs-batch] challenge_1 [-p|--prefix] mix \\
64 | [--evaluate-with] omq [-c|--containerised] DIRECTORY_FOR_SUBMISSION
65 |
66 | ... (contents of 'challenge_1' batch) ...
67 |       name: challenge_1
68 | environments:
69 | - miniroom:1:2
70 | - house:2:3
71 | - apartment:3:4
72 | - office:4:5
73 | - company:5:1
74 |
75 | OPTION DETAILS:
76 |
77 | -h, --help
78 | Show this help menu.
79 |
80 | -a, --args
81 | Same functionality as 'benchbot_submit --args'. See the help there
82 | ('benchbot_submit --help') for further details.
83 |
84 | -c, --containerised
85 | Same functionality as 'benchbot_submit --containerised'. See the
86 | help there ('benchbot_submit --help') for further details.
87 |
88 | -e, --envs
89 | A comma-separated list of environments for $(basename "$0") to
90 | iterate over (e.g. \"house:1,miniroom:2,office:3\"). See the
91 | '-e, --env' arg in 'benchbot_run --help' for further details of
92 | specifying valid environments, & 'benchbot_run --list-envs' for a
93 | complete list of supported environments.
94 |
95 | -E, --envs-batch
96 | The name of an environment batch specifying a list of environments.
97 | The $(basename "$0") script will iterate over each specified
98 | environment. See '-e, --envs' above for further details on valid
99 | environment specifications.
100 |
101 | --example
102 | Same functionality as 'benchbot_submit --example'. See the
103 | help there ('benchbot_submit --help') for further details.
104 |
105 | Name of an installed example to run. All examples at least support
106 | native operation, with most also supporting containerised
107 | operation.
108 |
109 | Examples run in native mode by default. Examples can still be run
110 | in containerised mode, but require the '--example-containerised'
111 | flag.
112 |
113 |         (use '--list-examples' to see a list of installed examples)
114 |
115 | --example-containerised
116 | Same functionality as 'benchbot_submit --example-containerised'.
117 | See the help there ('benchbot_submit --help') for further details.
118 |
119 | --list-batches
120 | Lists all supported environment batches. These can be used with the
121 |         '-E, --envs-batch' option. Use '--show-batch' to see more
122 | details about a batch.
123 |
124 | -n, --native
125 | Same functionality as 'benchbot_submit --native'. See the help
126 | there ('benchbot_submit --help') for further details.
127 |
128 | -p, --prefix
129 | Prefix to use in naming of files produced by $(basename "$0"). If
130 | this option is not included, the default value '$DEFAULT_PREFIX' will
131 | be used. For example, a batch of 5 environments with the [-z|--zip]
132 | argument & prefix 'semslam' will produce the following files:
133 | semslam_1.json
134 | semslam_2.json
135 | semslam_3.json
136 | semslam_4.json
137 | semslam_5.json
138 | semslam.zip
139 | semslam_scores.json
140 |
141 | -r, --robot
142 |         Configure BenchBot to use a specific robot. Every environment in the
143 | requested batch will be run with this robot (so make sure they
144 | support the robot). See '-r, --robot' in 'benchbot_run --help' for
145 | further details on specifying valid robots, & 'benchbot_run
146 | --list-robots' for a complete list of supported robots.
147 |
148 | --evaluate-with
149 | Name of the evaluation method to use for evaluation of results
150 | batch produced by $(basename "$0"). Scores from each result in the
151 | batch are then combined, to give a summary score for the selected
152 | task. Evaluation will only be performed if this option is provided.
153 |
154 | This option assumes your submission saves results to the location
155 | referenced by 'benchbot_api.RESULT_LOCATION' (currently
156 |         '/tmp/benchbot_result'). Evaluation will not work as expected if
157 |         results are not saved to that location.
158 |
159 | See '--list-methods' in 'benchbot_eval' for a list of supported
160 | methods, and 'benchbot_eval --help' for a description of the
161 | scoring process.
162 |
163 | --show-batch
164 | Prints information about the provided batch name if installed. The
165 | corresponding file's location will be displayed, with a snippet of
166 | its contents.
167 |
168 | -t, --task
169 | Configure BenchBot for a specific task style. Every environment in
170 | the requested batch will be run with this task. See '-t, --task' in
171 | 'benchbot_run --help' for further details on specifying valid tasks,
172 | & 'benchbot_run --list-tasks' for a complete list of supported
173 | tasks.
174 |
175 | -v, --version
176 | Print version info for current installation.
177 |
178 | -z, --zip
179 | Produce a ZIP file of the results once all environments in the
180 | batch have been run. The ZIP file will be named using the value
181 | provided by the '-p, --prefix' argument (i.e. 'PREFIX.zip'), with
182 | the default prefix '$DEFAULT_PREFIX' used if none is provided.
183 |
184 | FURTHER DETAILS:
185 |
186 | See the README of this repository for further details. For further details
187 | on each of the individual components ('benchbot_run', 'benchbot_submit',
188 | 'benchbot_eval'), see their individual help documentation & READMEs.
189 |
190 | Please contact the authors of BenchBot for support or to report bugs:
191 | b.talbot@qut.edu.au
192 | "
193 |
194 | collision_warn="WARNING: Running of environment '%s' in passive mode resulted in a
195 | collision. This shouldn't happen, so this environment will be rerun!"
196 |
197 | run_err="ERROR: Running of environment '%s' failed with the error printed above.
198 | Quitting batch execution."
199 |
200 | submission_err="ERROR: Submission for environment '%s' failed with the error printed
201 | above. Quitting batch execution."
202 |
203 | _list_batches_pre=\
204 | "The following environment batches are available in your installation:
205 | "
206 |
207 | run_pid=
208 | function kill_run() {
209 |   if [ -n "$run_pid" ]; then
210 | kill -TERM $run_pid &> /dev/null || true
211 | wait $run_pid || true
212 | run_pid=
213 | fi
214 | }
215 |
216 | submit_pid=
217 | submit_cat_pid=
218 | function kill_submit() {
219 |   if [ -n "$submit_pid" ]; then
220 | kill -TERM $submit_pid &> /dev/null || true
221 | wait $submit_pid || true
222 | submit_pid=
223 | fi
224 |   if [ -n "$submit_cat_pid" ]; then
225 | kill -TERM $submit_cat_pid &> /dev/null || true
226 | wait $submit_cat_pid || true
227 | submit_cat_pid=
228 | fi
229 | }
230 |
231 | function exit_gracefully() {
232 | kill_run
233 | kill_submit
234 | echo ""
235 | exit ${1:-0}
236 | }
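The cleanup functions above follow a common bash pattern: record a background job's PID, and on a termination signal kill the child, reap it with `wait`, then exit. A minimal standalone sketch of the same pattern (the `worker_pid` name and the `sleep` job are illustrative stand-ins, not part of BenchBot):

```shell
#!/usr/bin/env bash
# Minimal sketch of the kill-then-wait cleanup pattern used above
worker_pid=

cleanup() {
  if [ -n "$worker_pid" ]; then
    kill -TERM "$worker_pid" &> /dev/null || true  # ask the child to stop
    wait "$worker_pid" || true                     # reap it to avoid zombies
    worker_pid=
  fi
  exit "${1:-0}"
}

trap cleanup SIGINT SIGTERM

sleep 60 &          # stand-in for a real background job
worker_pid=$!
kill -SIGINT $$     # simulate Ctrl-C; cleanup runs and the script exits 0
```

The `|| true` suffixes matter: both `kill` (already-dead child) and `wait` (non-zero child exit status) can fail, and the handler must keep going regardless.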
237 |
238 | ################################################################################
239 | #################### Parse & handle command line arguments #####################
240 | ################################################################################
241 |
242 | # Safely parse options input
243 | _args='args:,envs:,envs-batch:,example:,example-containerised,containerised:,\
244 | help,list-batches,native:,prefix:,robot:,evaluate-with:,show-batch:,task:,\
245 | version,zip'
246 | parse_out=$(getopt -o a:e:E:c:hn:p:r:s:t:vz --long "$_args" \
247 | -n "$(basename "$0")" -- "$@")
248 | if [ $? != 0 ]; then exit 1; fi
249 | args=
250 | args_prefix=
251 | containerised=
252 | eval set -- "$parse_out"
253 | evaluate=
254 | example=
255 | example_containerised=
256 | example_prefix=
257 | envs_str=
258 | envs_batch=
259 | evaluate_method=
260 | mode_details=
261 | mode_prefix=
262 | native=
263 | prefix="$DEFAULT_PREFIX"
264 | robot=
265 | submit_args=
266 | task=
267 | zip=
268 | while true; do
269 | case "$1" in
270 | -a|--args)
271 | args="$2"; args_prefix="--args"; shift 2 ;;
272 | -c|--containerised)
273 | containerised='--containerised'; mode_details="$2"; shift 2 ;;
274 | -e|--envs)
275 | envs_str="$2" ; shift 2 ;;
276 | -E|--envs-batch)
277 | envs_batch="$2" ; shift 2 ;;
278 | --evaluate-with)
279 | evaluate_method="$2"; shift 2 ;;
280 | --example)
281 | example="$2" ; example_prefix="--example" ; shift 2 ;;
282 | --example-containerised)
283 | example_containerised='--example-containerised'; shift 1 ;;
284 | -h|--help)
285 | echo "$usage_text" ; shift ; exit 0 ;;
286 | --list-batches)
287 | list_content "batches" "$_list_batches_pre" "a" 2; exit $? ;;
288 | -n|--native)
289 | native='--native'; mode_details="$2"; shift 2 ;;
290 | -p|--prefix)
291 | prefix="$2"; shift 2 ;;
292 | -r|--robot)
293 | robot="$2"; shift 2 ;;
294 | --show-batch)
295 | show_content "batches" "$2" 2; exit $? ;;
296 | -t|--task)
297 | task="$2"; shift 2 ;;
298 | -v|--version)
299 | print_version_info; exit ;;
300 | -z|--zip)
301 | zip=true; shift ;;
302 | --)
303 | shift ; submit_args="$@"; break ;;
304 | *)
305 | echo "$(basename "$0"): option '$1' is unknown"; shift ; exit 1 ;;
306 | esac
307 | done
308 |
309 | # Generate any derived configuration parameters
310 | if [ -n "$envs_str" ]; then
311 | envs_list=($(echo "$envs_str" | tr ',' '\n'))
312 | elif [ -n "$envs_batch" ]; then
313 | envs_list=($(run_manager_cmd 'print("\n".join(get_value_by_name(\
314 | "batches", "'$envs_batch'", "environments")))'))
315 | else
316 | envs_list=()
317 | fi
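The block above turns a comma-separated '--envs' string into a bash array by translating commas to newlines and letting word splitting do the rest. A standalone sketch of that pattern (the environment names are illustrative):

```shell
#!/usr/bin/env bash
# Split a comma-separated string into a bash array via tr + word splitting
envs_str="house:1,miniroom:2,office:3"      # illustrative value
envs_list=($(echo "$envs_str" | tr ',' '\n'))

echo "${#envs_list[@]}"   # 3
echo "${envs_list[1]}"    # miniroom:2
```

Note this relies on IFS containing a newline at expansion time, which holds both for the default IFS and for the `IFS=$'\n\t'` set by these scripts.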
318 | mode_details="$(echo "$mode_details $submit_args" | sed 's/^\s*//; s/\s*$//')"
319 | mode_prefix="$containerised$native" # at most one of these will be non-empty
320 |
321 | # Bail if any of the requested configurations are invalid
322 | printf "Performing pre-run argument validation ...\n\n"
323 | validate_batch_args "$robot" "$task" "$envs_str" "$envs_batch" "$example" \
324 | "$example_containerised" "$evaluate_method" "$containerised" "$native" \
325 | "$mode_details" "$args" "${envs_list[@]}"
326 | printf "${colour_green}\tDone.${colour_nc}\n"
327 |
328 | ################################################################################
329 | ####################### Print settings prior to running ########################
330 | ################################################################################
331 |
332 | header_block "Dumping settings before running batch" $colour_magenta
333 |
334 | _ind="$(printf "%0.s " {1..8})"
335 | printf "\nUsing the following static settings for each environment:\n"
336 | printf "$_ind%-25s%s\n" "Selected task:" "${task:-None}"
337 | printf "$_ind%-25s%s\n" "Selected robot:" "${robot:-None}"
338 | if [ -n "$example" ]; then
339 | printf "$_ind%-25s%s\n" "Example to run:" \
340 | "$example \
341 | ($([ -n "$containerised" ] && echo "containerised" || echo "native"))"
342 | elif [ -n "$containerised" ]; then
343 | printf "$_ind%-25s%s\n" "Dockerfile to build:" "$mode_details"
344 | elif [ -n "$native" ]; then
345 | printf "$_ind%-25s%s\n" "Command to execute:" "$mode_details"
346 | else
347 | printf "$_ind%-25s%s\n" "Command to execute:" "None"
348 | fi
349 | if [ -n "$args" ]; then
350 | printf "$_ind%-25s%s\n" "With the args:" "$args"
351 | fi
352 |
353 | printf "\nIterating through the following environment list:\n$_ind"
354 | if [ ${#envs_list[@]} -eq 0 ]; then
355 | printf "None\n"
356 | else
357 | echo "${envs_list[@]}" | sed "s/ /\n$_ind/g"
358 | fi
359 |
360 | printf "\nPerforming the following after all environments have been run:\n"
361 | printf "$_ind%-25s%s\n" "Create results *.zip:" \
362 | "$([ -z "$zip" ] && echo "No" || echo "Yes")"
363 | printf "$_ind%-25s%s\n\n" "Evaluate results batch:" \
364 | "$([ -z "$evaluate_method" ] && echo "No" || echo "Yes ($evaluate_method)")"
365 |
366 | ################################################################################
367 | ############### Iterate over each of the requested environments ################
368 | ################################################################################
369 |
370 | trap exit_gracefully SIGINT SIGQUIT SIGTERM
371 |
372 | if [ ${#envs_list[@]} -eq 0 ]; then
373 | echo "No environments provided; exiting."
374 | exit 0
375 | fi
376 |
377 | results_list=()
378 | i=0
379 | while [ $i -lt ${#envs_list[@]} ]; do
380 | # Run the submission in the environment, waiting until something finishes
381 | header_block "Gathering results for environment: ${envs_list[$i]}" \
382 | $colour_magenta
383 | benchbot_run -t "${task:-None}" -e "${envs_list[$i]}" -r "${robot:-None}" -f \
384 | &> /tmp/benchbot_run_out &
385 | run_pid=$!
386 |   rm -f "$SUBMIT_CAT_FILE" &> /dev/null && mkfifo "$SUBMIT_CAT_FILE"
387 | cat > "$SUBMIT_CAT_FILE" &
388 | submit_cat_pid=$!
389 | cat "$SUBMIT_CAT_FILE" | benchbot_submit -r "${prefix}_$i.json" \
390 | $example_prefix $example $example_containerised $mode_prefix \
391 | $mode_details $args_prefix $args &
392 | submit_pid=$!
393 | while ps -p $run_pid &>/dev/null && ps -p $submit_pid &>/dev/null; do
394 | sleep 1
395 | done
396 | sleep 3
397 |
398 | # Run should never die normally, so treat this as an error
399 |   if ! ps -p $run_pid &> /dev/null; then
400 | echo ""
401 | kill_submit
402 | header_block "Crash detected for environment: ${envs_list[$i]}" \
403 | ${colour_magenta}
404 | printf "\n${colour_magenta}%s${colour_nc}\n" \
405 | "Dumping output of 'benchbot_run' below:"
406 | cat /tmp/benchbot_run_out
407 | printf "\n${colour_red}$run_err${colour_nc}\n" "${envs_list[$i]}"
408 | exit 1
409 | fi
410 |
411 | # Skip moving on if we collided using 'move_next' actuation, otherwise move
412 | # to the next environment
413 | retry=
414 | if [ -n "$(run_manager_cmd 'print(get_value_by_name("tasks", "'$task'", \
415 | "actions"))' | grep "'move_next'")" ] && \
416 | [ -n "$(docker run --rm --network $DOCKER_NETWORK -it \
417 | "$DOCKER_TAG_BACKEND" /bin/bash -c \
418 | 'curl '$HOSTNAME_SUPERVISOR:$PORT_SUPERVISOR'/robot/is_collided' | \
419 | grep "true")" ]; then
420 | printf "\n${colour_yellow}$collision_warn${colour_nc}\n\n" \
421 | "${envs_list[$i]}"
422 | retry=1
423 | fi
424 |
425 | # Handle the result of failed submissions (looking for an error code)
426 | wait $submit_pid && submit_result=0 || submit_result=1
427 |   if [ $submit_result -ne 0 ] && [ -z "$retry" ]; then
428 | echo ""
429 | kill_run
430 | printf "\n${colour_red}$submission_err${colour_nc}\n" "${envs_list[$i]}"
431 | exit 1
432 | fi
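Capturing a background job's exit status with `wait`, as done for the submission above, uses an `&& / ||` pair so a non-zero status is recorded rather than aborting a script running under `set -e`. A standalone sketch (the `false` job is a stand-in for a real submission):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run a failing job in the background and capture its exit status safely;
# the && / || keeps a non-zero status from aborting the script under 'set -e'
false &
job_pid=$!
wait "$job_pid" && job_result=0 || job_result=1

echo "$job_result"   # 1
```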
433 |
434 |   if [ -z "$retry" ]; then
435 | results_list+=("${prefix}_$i.json")
436 | i=$((i+1))
437 | fi
438 |
439 | # Start the next run
440 | kill_run
441 | done
442 |
443 | ################################################################################
444 | ############################ Processing of results #############################
445 | ################################################################################
446 |
447 | header_block "Processing results from batch" $colour_magenta
448 |
449 | if [ -n "$zip" ]; then
450 | echo -e "${colour_magenta}Zipping up results ... ${colour_nc}\n"
451 | rm -vf "${prefix}.zip" && zip "${prefix}.zip" "${results_list[@]}"
452 | echo ""
453 | fi
454 |
455 | if [ -n "$evaluate_method" ]; then
456 | echo -e "${colour_magenta}Evaluating results... ${colour_nc}"
457 | benchbot_eval -m "$evaluate_method" -o "${prefix}_scores.json" \
458 | --required-task "$task" \
459 | --required-envs $(echo "${envs_list[@]}" | tr ' ' ',') \
460 | $([ -z "$zip" ] && echo "${results_list[@]}" || echo "${prefix}.zip")
461 | else
462 | echo "Done."
463 | fi
464 |
465 |
--------------------------------------------------------------------------------
/bin/benchbot_eval:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | ################################################################################
4 | ################### Load Helpers & Global BenchBot Settings ####################
5 | ################################################################################
6 |
7 | set -euo pipefail
8 | IFS=$'\n\t'
9 | abs_path=$(readlink -f "$0")
10 | pushd "$(dirname "$abs_path")" > /dev/null
11 | source .helpers
12 | popd > /dev/null
13 |
14 |
15 | ################################################################################
16 | ######################## Helper functions for commands #########################
17 | ################################################################################
18 |
19 | usage_text="$(basename "$0") -- Script for evaluating the performance of your solution
20 | to a task in a running environment. This script simply calls the installed
21 | 'benchbot_eval' python module with your provided results file/s.
22 |
23 | Results files are validated before evaluation. A results file must specify:
24 |
25 | - details of the task in which the results were gathered
26 |   - details for each of the environments they were gathered in (e.g. if a
27 |     task requires multiple scenes; this is NOT for denoting multiple
28 |     different results, which should each be in their own file)
29 | - the set of results in the format described by format type (in task
30 | details)
31 |
32 | Errors will be presented if validation fails, and evaluation will not proceed.
33 | There are helper functions available in the BenchBot API for creating results
34 | ('BenchBot.empty_results()' & 'BenchBot.results_functions()').
35 |
36 | Evaluation is performed on a set of results which are gathered from a set of
37 | runs. For example, you can evaluate your algorithm just in house:1, or evaluate
38 | the performance holistically in all 5 of the house scenes. As such, the
39 | following modes are supported by benchbot_eval:
40 |
41 | - Providing a single JSON results file (the score in this run will simply
42 | be returned as your final score)
43 |
44 | - Providing a list of JSON results files (the final score returned will be
45 | the average of the scores for each individual results file)
46 |
47 | - Providing a *.zip file containing JSON results (the final score returned
48 | will be the same as above, across all JSON files found in the *.zip
49 | archive)
50 |
51 | USAGE:
52 |
53 | See this information about evaluation options:
54 | $(basename "$0") [-h|--help]
55 |
56 | Perform evaluation on results saved in 'my_results.json', & save the
57 | results to 'scores.json':
58 | $(basename "$0") my_results.json
59 |
60 | Get an overall score for all JSON results files that match the prefix
61 | 'my_results_[0-9]*':
62 | $(basename "$0") my_results_[0-9]*
63 |
64 | Save an overall score for all JSON results in 'my_results.zip' to
65 | 'my_scores.json':
66 | $(basename "$0") -o my_scores.json my_results.zip
67 |
68 | OPTION DETAILS:
69 |
70 | -h, --help
71 | Show this help menu.
72 |
73 | --list-batches
74 | Lists all supported environment batches. These can be used with the
75 | '--required-envs-batch' option. Use '--show-batch' to see more
76 | details about a batch.
77 |
78 | --list-methods
79 | List all supported evaluation methods. The listed methods are
80 | printed in the format needed for the '--method' option. Use
81 | '--show-method' to see more details about a method.
82 |
83 | -m, --method
84 | Name of method to be used for evaluation of results. All ground
85 | truths in the method's 'ground_truth_format' will be passed to the
86 | evaluation script.
87 |
88 | (use '--list-methods' to see a list of supported evaluation methods)
89 |
90 | -o, --output-location
91 | Change the location where the evaluation scores json is saved. If
92 | not provided, results are saved as 'scores.json' in the current
93 | directory.
94 |
95 | --required-envs
96 | A comma-separated list of environments required for evaluation
97 | (e.g. \"house:1,miniroom:2,office:3\"). Evaluation will not run
98 | unless a result is supplied for each of these environments. See the
99 | '-e, --env' arg in 'benchbot_run --help' for further details of
100 | specifying valid environments, & 'benchbot_run --list-envs' for a
101 | complete list of supported environments.
102 |
103 | --required-envs-batch
104 | An environments batch specifying a single required environment name
105 | on each line. Evaluation will not run unless a result is supplied
106 | for each of these environments. See '--required-envs' above for
107 | further details on valid environment specifications.
108 |
109 | --required-task
110 | Forces the script to only accept results for the supplied task
111 | name. A list of supported task names can be found by running
112 | 'benchbot_run --list-tasks'.
113 |
114 | --show-batch
115 | Prints information about the provided batch name if installed. The
116 | corresponding file's location will be displayed, with a snippet of
117 | its contents.
118 |
119 | --show-method
120 | Prints information about the provided method name if installed. The
121 | corresponding YAML's location will be displayed, with a snippet of
122 | its contents.
123 |
124 | -v, --version
125 | Print version info for current installation.
126 |
127 | -V, --validate-only
128 | Only perform validation of each provided results file, then exit
129 |         without performing evaluation.
130 |
131 | FURTHER DETAILS:
132 |
133 | See the 'benchbot_examples' repository for example results (& solutions
134 | which produce results) to test with this evaluation script.
135 |
136 | Please contact the authors of BenchBot for support or to report bugs:
137 | b.talbot@qut.edu.au
138 | "
139 |
140 | _ground_truth_err=\
141 | "ERROR: The script was unable to find ground truth files in the expected
142 | location ('%s'). This should be created as part of the
143 | 'benchbot_install' process. Please re-run the installer."
144 |
145 | _list_batches_pre=\
146 | "The following environment batches are available in your installation:
147 | "
148 | _list_methods_pre=\
149 | "The following evaluation methods are available in your installation:
150 | "
151 |
152 | ################################################################################
153 | #################### Parse & handle command line arguments #####################
154 | ################################################################################
155 |
156 | # Safely parse options input
157 | _args='help,list-batches,list-methods,method:,output-location:,\
158 | required-envs:,required-envs-batch:,required-task:,show-batch:,\
159 | show-method:,validate-only,version'
160 | parse_out=$(getopt -o ho:m:vV --long "$_args" -n "$(basename "$0")" -- "$@")
161 | if [ $? != 0 ]; then exit 1; fi
162 | eval set -- "$parse_out"
163 | method=
164 | required_envs=
165 | required_envs_batch=
166 | required_task=
167 | results_files=
168 | scores_location='scores.json'
169 | validate_only=
170 | while true; do
171 | case "$1" in
172 | -h|--help)
173 | echo "$usage_text" ; shift ; exit 0 ;;
174 | --list-batches)
175 | list_content "batches" "$_list_batches_pre" "a" 2; exit $? ;;
176 | --list-methods)
177 | list_content "evaluation_methods" "$_list_methods_pre"; exit $? ;;
178 | -m|--method)
179 | method="$2"; shift 2 ;;
180 | -o|--output-location)
181 | scores_location="$2"; shift 2 ;;
182 | --required-envs)
183 | required_envs="$2"; shift 2 ;;
184 | --required-envs-batch)
185 | required_envs_batch="$2"; shift 2 ;;
186 | --required-task)
187 | required_task="$2"; shift 2 ;;
188 | --show-batch)
189 | show_content "batches" "$2" 2; exit $? ;;
190 | --show-method)
191 | show_content "evaluation_methods" "$2"; exit $? ;;
192 | -v|--version)
193 | print_version_info; exit ;;
194 | -V|--validate-only)
195 | validate_only=1; shift ;;
196 | --)
197 | shift ; results_files=("$@"); break;;
198 | *)
199 | echo "$(basename "$0"): option '$1' is unknown"; shift ; exit 1 ;;
200 | esac
201 | done
202 |
203 | # Generate any derived configuration parameters
204 | if [ -n "$required_envs" ]; then
205 | required_envs_list=($(echo "$required_envs" | tr ',' '\n'))
206 | elif [ -n "$required_envs_batch" ]; then
207 | required_envs_list=($(run_manager_cmd 'exists("batches", [("name", \
208 | "'$required_envs_batch'")]) and print("\n".join(get_value_by_name(\
209 | "batches", "'$required_envs_batch'", "environments")))'))
210 | else
211 | required_envs_list=()
212 | fi
213 |
214 | # Bail if any of the requested configurations are invalid
215 | validate_eval_args "$required_envs" "$required_envs_batch" \
216 |     "$(IFS=','; echo "${required_envs_list[*]}")" \
217 | "$required_task" "$method" "$validate_only" "${results_files[@]}"
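Joining a bash array into a comma-separated string, as done for the required-environments argument above, relies on `"${arr[*]}"` using the first character of IFS as the separator; assigning IFS inside a subshell keeps the change local. A standalone sketch:

```shell
#!/usr/bin/env bash
# Join a bash array with commas by setting IFS inside a subshell; note the
# ';' — IFS must be assigned before the "${envs[*]}" expansion is evaluated
envs=("house:1" "miniroom:2" "office:3")
joined="$(IFS=','; echo "${envs[*]}")"

echo "$joined"   # house:1,miniroom:2,office:3
```

Without the semicolon (`IFS=',' echo ...`), the temporary assignment only affects `echo`'s environment and the expansion still joins with the old IFS.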
218 |
219 | ################################################################################
220 | ##################### Validate the provided results files ######################
221 | ################################################################################
222 |
223 | header_block "Running validation over ${#results_files[@]} input files" \
224 | $colour_green
225 |
226 | # Build up some strings for Python
227 | python_results_files='["'"$(echo "${results_files[@]}" | sed 's/ /","/g')"'"]'
228 | python_req_task=
229 | if [ -n "$required_task" ]; then
230 | python_req_task=', required_task="'"$required_task"'"'
231 | fi
232 | python_req_envs=
233 | if [ -n "${required_envs_list:-}" ]; then
234 | python_req_envs=', required_envs=["'$(echo "${required_envs_list[@]}" | \
235 | sed 's/ /","/g')'"]'
236 | fi
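The strings built above embed a bash array into a Python list literal by quoting the space-joined words and replacing each space with `","`. A standalone sketch (file names are illustrative; this simple form assumes no spaces within individual names):

```shell
#!/usr/bin/env bash
# Turn a bash array into a Python-style list literal with sed
results_files=("a.json" "b.json" "c.json")   # illustrative file names
python_list='["'"$(echo "${results_files[@]}" | sed 's/ /","/g')"'"]'

echo "$python_list"   # ["a.json","b.json","c.json"]
```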
237 |
238 | # Validate provided results using the Validator class from 'benchbot_eval'
239 | # Python module
240 | python3 -c 'from benchbot_eval import Validator; \
241 | Validator('"$python_results_files$python_req_task$python_req_envs"')'
242 |
243 | if [ -n "$validate_only" ]; then exit 0; fi
244 |
245 | ################################################################################
246 | ##################### Evaluate the provided results files ######################
247 | ################################################################################
248 |
249 | header_block "Running evaluation over ${#results_files[@]} input files" \
250 | $colour_green
251 |
252 | # Evaluate results using the pickled Validator state from the step above
253 | python3 -c 'from benchbot_eval import Evaluator; \
254 | Evaluator("'$method'", "'$scores_location'").evaluate()' && ret=0 || ret=1
255 | if [ $ret -ne 0 ]; then
256 | printf "${colour_red}\n%s: %d${colour_nc}\n" \
257 | "Evaluation failed with result error code" "$ret"
258 | fi
259 | exit $ret
260 |
--------------------------------------------------------------------------------
/bin/benchbot_install:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | ################################################################################
4 | ################### Load Helpers & Global BenchBot Settings ####################
5 | ################################################################################
6 |
7 | set -euo pipefail
8 | IFS=$'\n\t'
9 | abs_path=$(readlink -f "$0")
10 | pushd "$(dirname "$abs_path")" > /dev/null
11 | source .helpers
12 |
13 | ################################################################################
14 | ########################### Script Specific Settings ###########################
15 | ################################################################################
16 |
17 | # Hostnames
18 | hostnames_block=\
19 | "# BENCHBOT SPECIFIC HOSTNAMES
20 | $URL_ROS $HOSTNAME_ROS
21 | $URL_ROBOT $HOSTNAME_ROBOT
22 | $URL_SUPERVISOR $HOSTNAME_SUPERVISOR
23 | $URL_DEBUG $HOSTNAME_DEBUG"
24 |
25 | # Defaults for the installation process
26 | ADDONS_DEFAULT='benchbot-addons/ssu'
27 | CUDA_DEFAULT='cuda=12.0.0-1'
28 | NVIDIA_DEFAULT='nvidia-driver-525'
29 | SIMULATOR_DEFAULT='sim_omni'
30 | CUDA_DRIVER_DEFAULT='cuda-drivers=525.125.06-1'
31 |
32 | ################################################################################
33 | ########################### Definitions for messages ###########################
34 | ################################################################################
35 | usage_text="$(basename "$abs_path") -- Install script for the BenchBot software stack
36 |
37 | USAGE:
38 |
39 | Install the software stack:
40 | $(basename "$abs_path")
41 |
42 | Install from scratch, & skip check for the latest software stack:
43 | $(basename "$abs_path") --no-update [-f|--force-clean]
44 |
45 | Install the lite version of the backend (i.e. no simulator, ~10GB):
46 | $(basename "$abs_path") --no-simulator
47 |
48 | Uninstall the software stack:
49 | $(basename "$abs_path") [-u|--uninstall]
50 |
51 | OPTION DETAILS:
52 |
53 | -h, --help
54 | Show this help menu.
55 |
56 | -a, --addons
57 | Comma separated list of add-ons to install. Add-ons exist in GitHub
58 | repositories, and are specified by their identifier:
59 | 'username/repo_name'. Add-on installation also installs all
60 | required add-on dependencies. See add-on manager documentation for
61 | details:
62 | https://github.com/qcr/benchbot-addons
63 |
64 | Also see '--list-addons' for a details on installed add-ons, and a
65 | list of all known BenchBot add-ons.
66 |
67 | -A, --addons-only
68 | Only perform the installation of the specified add-ons. Skip all
69 | other steps in the install process, including Docker images and
70 | local software installation. This flag is useful if you just want
71 | to change add-ons inside your working installation.
72 |
73 | -b, --branch
74 | Specify a branch other than master to install. The only use for
75 | this flag is active development. The general user will never need
76 | to use this flag.
77 |
78 | -f, --force-clean
79 | Forces an install of the BenchBot software stack from scratch. It
80 | will run uninstall, then the full install process.
81 |
82 | --list-addons
83 | List all currently installed add-ons with their dependency
84 | structure noted. Also list all official add-ons available in the
85 | 'benchbot-addons' GitHub organisation. If you would like to add a
86 | community contribution to the community list, please follow the
87 | instructions here:
88 | https://github.com/qcr/benchbot-addons
89 |
90 | --list-simulators
91 | List all supported simulators that can be installed. It will also
92 | show which simulators you currently have installed.
93 |
94 | --no-simulator
95 | Runs installation without installing Nvidia's Omniverse-based Isaac
96 | simulator. This option is useful for avoiding the excessive space
97 | requirements (>100GB) when using the BenchBot software stack on a
98 | machine that will only be used with a real robot.
99 |
100 | --no-update
101 | Skip checking for updates to the BenchBot software stack, & instead
102 | jump straight into the installation process.
103 |
104 | --remove-addons
105 | Comma separated list of add-ons to remove (uses the same syntax as
106 | the '--addons' flag). This command will also remove any dependent
107 | add-ons as they will no longer work (e.g. if A depends on B, and
108 | B is uninstalled, then there is no reason for A to remain). You
109 | will be presented with a list of add-ons that will be removed, and
110 | prompted before removal commences.
111 |
112 | -s, --simulators
113 | Specify simulator/s to install. Comma-separated lists are accepted
114 | to specify multiple simulators. The default '$SIMULATOR_DEFAULT' is
115 | installed if this option isn't included. See '--list-simulators'
116 | for a list of supported options.
117 |
118 | -u, --uninstall
119 | Uninstall the BenchBot software stack from the machine. All BenchBot
120 | related Docker images will be removed from the system, the API
121 | removed from pip, & downloaded files removed from the BenchBot
122 | install. This flag is incompatible with all other flags.
123 |
124 | -v, --version
125 | Print version info for current installation.
126 |
127 | FURTHER DETAILS:
128 |
129 | Please contact the authors of BenchBot for support or to report bugs:
130 | b.talbot@qut.edu.au
131 | "
132 |
133 | build_err="\
134 | Ensure that Docker has been installed correctly AND that you can run Docker
135 | WITHOUT root access (there is no need to ever run Docker with root). See
136 | https://docs.docker.com/install/linux/linux-postinstall/ for details on how to
137 | fix this.
138 |
139 | If the error is more generic, please contact us so that we can update our
140 | pre-install host system checks.
141 | "
142 |
143 | ################################################################################
144 | ################### All checks for the host system (ouch...) ###################
145 | ################################################################################
146 |
147 | checks_list_pre=(
148 | "Core host system checks:"
149 | "ubuntu2004"
150 | "Running Nvidia related system checks:"
151 | "nvidiacard" # Is a valid graphics card available?
152 | "nvidiadriver" # Is the driver available?
153 | "nvidiaversion" # Is the driver version valid?
154 | "nvidiappa" # Does the driver come from a supported PPA?
155 | "cudadriversavailable" # Is cuda-drivers package available?
156 | "cudadriversversion" # Is cuda-drivers version valid?
157 | "cudadriversppa" # Is cuda-drivers from the Nvidia PPA?
158 | "cudaavailable" # Is CUDA available?
159 | "cudaversion" # Is the cuda version valid?
160 | "cudappa" # Is cuda from the Nvidia PPA?
161 | "Running Docker related system checks:"
162 | "dockeravailable" # Is Docker available?
163 | "dockerversion" # Is Docker version > 19.0?
164 | "dockernct" # Is nvidia-container-toolkit installed?
165 | "dockerroot" # Does Docker work WITHOUT root?
166 | "Running checks of filesystem used for Docker:"
167 | "fsext4" # Is /var/lib/docker on an ext4 filesystem?
168 | "fsnosuid" # Is /var/lib/docker on a filesystem WITHOUT nosuid?
169 | "fsspace" # Is /var/lib/docker on a filesystem with enough space?
170 | "Miscellaneous requirements:"
171 | "pip" # Is pip available? (required to install Python libraries)
172 | "pythontk" # Is python3-tk installed (required for matplotlib...)
173 | "pythonpil" # Is python3-pil & python3-pil.imagetk installed?
174 | )
175 |
176 | chk_ubuntu2004_name='Ubuntu version >= 20.04'
177 | chk_ubuntu2004_pass='Passed ($check_result)'
178 | chk_ubuntu2004_fail='Failed'
179 | chk_ubuntu2004_check=\
180 | '[ -f /tmp/benchbot_chk_ubuntu2004 ] && echo "skipped" || lsb_release -r | \
181 | awk '"'"'{print $2}'"'"' 2>/dev/null'
182 | chk_ubuntu2004_eval=\
183 | '[ "$check_result" == "skipped" ] || eval_version "$check_result" "20.04"'
184 | chk_ubuntu2004_issue="\
185 | The BenchBot Software Stack is designed to work with Ubuntu 20.04 & above.
186 |
187 | There is a possibility it may work with other versions, & other
188 | distributions, but it will probably involve manual installation of packages &
189 | installing missing tools.
190 |
191 | Given the complexity of the underlying system, it is strongly recommended to
192 | move to a supported operating system.
193 |
194 | If you would like to continue, & ignore this check, please say yes to the fix
195 | below.
196 | "
197 | chk_ubuntu2004_fix='touch /tmp/benchbot_chk_ubuntu2004'
198 | chk_ubuntu2004_reboot=1
199 |
200 | chk_nvidiacard_name='NVIDIA GPU available'
201 | chk_nvidiacard_pass='Found card of type '"'"'$check_result'"'"
202 | chk_nvidiacard_fail='None found'
203 | chk_nvidiacard_check=\
204 | 'lspci -nn | grep -E "\[03.*\[10de:[0-9a-f]{4}\]" | \
205 | sed -E '"'"'s/.*(10de:[0-9a-f]{4}).*/\1/'"'"' | head -n 1'
206 | chk_nvidiacard_eval='[ -n "$check_result" ]'
207 | chk_nvidiacard_issue="\
208 | No NVIDIA Graphics Card was detected. If there is a card available on your
209 | host system, then it should be visible in 'lspci -nn | grep VGA' with an ID
210 | of [10de:]. For example, a GeForce 1080 appears as [10de:1b80]."
211 | chk_nvidiacard_fix=''
212 | chk_nvidiacard_reboot=0
213 |
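The grep/sed pipeline in chk_nvidiacard_check can be sanity-checked against a canned `lspci -nn` line; the device in this sketch is purely illustrative:

```shell
# Extract the PCI vendor:device ID from a sample lspci -nn line.
# Class 03xx covers display controllers; vendor 10de is NVIDIA.
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)'
echo "$line" | sed -E 's/.*(10de:[0-9a-f]{4}).*/\1/'
# -> 10de:1b80
```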
214 | chk_nvidiadriver_name='NVIDIA driver is running'
215 | chk_nvidiadriver_pass='Found'
216 | chk_nvidiadriver_fail='Not found'
217 | chk_nvidiadriver_check='cat /proc/driver/nvidia/version 2>/dev/null'
218 | chk_nvidiadriver_eval='[ -n "$check_result" ]'
219 | chk_nvidiadriver_issue="\
220 | No NVIDIA driver detected. If a driver is installed & loaded, it should be
221 | visible with 'nvidia-smi' or 'cat /proc/driver/nvidia/version'."
222 | chk_nvidiadriver_fix=\
223 | 'v=$(. /etc/os-release; echo "$VERSION_ID" | sed "s/\.//") &&
224 | wget "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${v}/\
225 | x86_64/cuda-ubuntu${v}.pin" &&
226 | sudo mv cuda-ubuntu${v}.pin /etc/apt/preferences.d/cuda-repository-pin-600 &&
227 | sudo apt-key adv --fetch-keys "https://developer.download.nvidia.com/compute/\
228 | cuda/repos/ubuntu2004/x86_64/3bf863cc.pub" &&
229 | sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/\
230 | cuda/repos/ubuntu${v}/x86_64/ /" &&
231 | sudo apt-get update && sudo apt-get -y install '"$NVIDIA_DEFAULT"
232 | chk_nvidiadriver_reboot=0
233 |
234 | chk_nvidiaversion_name='NVIDIA driver version valid'
235 | chk_nvidiaversion_pass='Valid ($check_result)'
236 | chk_nvidiaversion_fail='Invalid ($check_result)'
237 | chk_nvidiaversion_check='cat /proc/driver/nvidia/version 2>/dev/null | '\
238 | 'sed '"'"'/NVRM version/!d; s/.*Kernel Module *\([0-9\.]*\).*/\1/'"'"
239 | chk_nvidiaversion_eval='eval_version "$check_result" 520'
240 | chk_nvidiaversion_issue="\
241 | The version of the running NVIDIA driver ("'$check_result'") is incompatible
242 | with BenchBot, which requires at least version 520 for the Isaac SDK.
243 |
244 | Please install a more recent version of the driver from the NVIDIA repository."
245 | chk_nvidiaversion_fix="$chk_nvidiadriver_fix"
246 | chk_nvidiaversion_reboot=0
247 |
248 | chk_nvidiappa_name='NVIDIA driver from a standard PPA'
249 | chk_nvidiappa_pass='PPA is valid'
250 | chk_nvidiappa_fail='PPA is invalid'
251 | chk_nvidiappa_check='version=$(apt list --installed 2>/dev/null | '\
252 | 'grep nvidia-driver- | sed '"'"'s/.*now \([^ ]*\).*/\1/'"'"'); '\
253 | 'apt download --print-uris nvidia-driver-$(echo $version | '\
254 | 'cut -d'"'"'.'"'"' -f1)=$version 2>/dev/null | cut -d'"'"' '"'"' -f1'
255 | chk_nvidiappa_eval=\
256 | '[[ "$check_result" =~ .*('"'"'developer.download.nvidia.com/'"'"'| \
257 | '"'"'ppa.launchpad.net/graphics-drivers/ppa/ubuntu'"'"').* ]]'
258 | chk_nvidiappa_issue="\
259 | The installed NVIDIA driver was detected to have come from somewhere other
260 | than the NVIDIA / Ubuntu graphics-drivers PPAs. The driver was sourced from:
261 | "'$check_result'"
262 |
263 | Our Docker container only supports official drivers, & GPUs only work in
264 | containers if the driver version exactly matches the version of the host.
265 | Please use a driver from either of the following locations (the NVIDIA PPA is
266 | preferred):
267 | http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64
268 | http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu"
269 | chk_nvidiappa_fix=\
270 | 'sudo apt remove -y nvidia-driver-* && sudo apt -y autoremove; '\
271 | "$chk_nvidiadriver_fix"
272 | chk_nvidiappa_reboot=0
273 |
274 | chk_cudadriversavailable_name='CUDA drivers installed'
275 | chk_cudadriversavailable_pass='Drivers found'
276 | chk_cudadriversavailable_fail='Drivers not found'
277 | chk_cudadriversavailable_check='apt list --installed 2>/dev/null | grep -E '\
278 | '"cuda-drivers/" | awk '"'"'{print $2}'"'"
279 | chk_cudadriversavailable_eval='[ -n "$check_result" ]'
280 | chk_cudadriversavailable_issue="\
281 | No version of CUDA drivers detected. Our Docker only supports CUDA driver
282 | installations from the official NVIDIA repository. Please add the NVIDIA
283 | repository, & install 'cuda-drivers' through apt."
284 | chk_cudadriversavailable_fix=\
285 | "${chk_nvidiadriver_fix//$NVIDIA_DEFAULT/$CUDA_DRIVER_DEFAULT}"
286 | chk_cudadriversavailable_reboot=1
287 |
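The chk_*_fix definitions above & below reuse chk_nvidiadriver_fix via bash's all-occurrence pattern substitution, `${var//pattern/replacement}`; a minimal sketch of the mechanics with illustrative values:

```shell
# ${var//pattern/replacement} replaces every occurrence of pattern
fix_template='sudo apt-get -y install PKG && echo "PKG done"'
echo "${fix_template//PKG/cuda-drivers}"
# -> sudo apt-get -y install cuda-drivers && echo "cuda-drivers done"
```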
288 | chk_cudadriversversion_name='CUDA drivers version valid'
289 | chk_cudadriversversion_pass='Valid ($check_result)'
290 | chk_cudadriversversion_fail='Invalid'
291 | chk_cudadriversversion_check='apt list --installed 2>/dev/null | grep -E '\
292 | '"cuda-drivers/" | awk '"'"'{print $2}'"'"
293 | chk_cudadriversversion_eval='eval_version "$check_result" 520'
294 | chk_cudadriversversion_issue="\
295 | The version of the CUDA drivers detected ("'$check_result'") is incompatible
296 | with BenchBot, which requires CUDA drivers 520+ for the Isaac SDK.
297 |
298 | Please install a more recent version of the CUDA drivers from the NVIDIA
299 | repository."
300 | chk_cudadriversversion_fix=\
301 | "${chk_nvidiadriver_fix//$NVIDIA_DEFAULT/$CUDA_DRIVER_DEFAULT}"
302 | chk_cudadriversversion_reboot=1
303 |
304 | chk_cudadriversppa_name='CUDA drivers from the NVIDIA PPA'
305 | chk_cudadriversppa_pass='PPA is valid'
306 | chk_cudadriversppa_fail='PPA is invalid'
307 | chk_cudadriversppa_check=\
308 | 'installed=$(apt list --installed 2>/dev/null | grep -E "cuda-drivers/" | \
309 | awk '"'"'{print $2}'"'"'); \
310 | apt download --print-uris cuda-drivers=\
311 | $(echo $installed | sed '"'"'s/.*now \([^ ]*\).*/\1/'"'"') 2>/dev/null | \
312 | cut -d'"'"' '"'"' -f1'
313 | chk_cudadriversppa_eval=\
314 | '[[ "$check_result" =~ .*('"'"'developer.download.nvidia.com/'"'"').* ]]'
315 | chk_cudadriversppa_issue="\
316 | CUDA drivers were detected to have come from somewhere other than the NVIDIA
317 | PPAs. The CUDA drivers were sourced from:
318 | "'$check_result'"
319 |
320 | Our Docker container only supports official CUDA installs, & GPUs only work
321 | in containers if the CUDA driver version exactly matches the version of the
322 | host. Please install CUDA drivers from the official NVIDIA repository:
323 | http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64"
324 | chk_cudadriversppa_fix=\
325 | 'sudo apt remove -y cuda-drivers && sudo apt -y autoremove; '\
326 | "${chk_nvidiadriver_fix//$NVIDIA_DEFAULT/$CUDA_DRIVER_DEFAULT}"
327 | chk_cudadriversppa_reboot=1
328 |
329 | chk_cudaavailable_name='CUDA is installed'
330 | chk_cudaavailable_pass='CUDA found'
331 | chk_cudaavailable_fail='CUDA not found'
332 | chk_cudaavailable_check='apt list --installed 2>/dev/null | grep -E '\
333 | '"cuda-[0-9]*-[0-9]*" | awk '"'"'{print $2}'"'"
334 | chk_cudaavailable_eval='[ -n "$check_result" ]'
335 | chk_cudaavailable_issue="\
336 | No version of CUDA detected. Our Docker only supports CUDA installations from
337 | the official NVIDIA repository. Please add the NVIDIA repository, & install
338 | 'cuda' through apt."
339 | chk_cudaavailable_fix="${chk_nvidiadriver_fix//$NVIDIA_DEFAULT/$CUDA_DEFAULT}"
340 | chk_cudaavailable_reboot=1
341 |
342 | chk_cudaversion_name='CUDA version valid'
343 | chk_cudaversion_pass='Valid ($check_result)'
344 | chk_cudaversion_fail='Invalid'
345 | chk_cudaversion_check='/usr/local/cuda/bin/nvcc --version | grep release | '\
346 | 'sed '"'"'s/.*release \([0-9\.]*\).*/\1/'"'"''
347 | chk_cudaversion_eval='eval_version "$check_result" 10'
348 | chk_cudaversion_issue="\
349 | The version of CUDA detected ("'$check_result'") is incompatible with BenchBot,
350 | which requires at least version 10.0 for the Isaac SDK.
351 |
352 | Please install a more recent version of CUDA from the NVIDIA repository."
353 | chk_cudaversion_fix="${chk_nvidiadriver_fix//$NVIDIA_DEFAULT/$CUDA_DEFAULT}"
354 | chk_cudaversion_reboot=1
355 |
356 | chk_cudappa_name='CUDA is from the NVIDIA PPA'
357 | chk_cudappa_pass='PPA is valid'
358 | chk_cudappa_fail='PPA is invalid'
359 | chk_cudappa_check=\
360 | 'installed=$(apt list --installed 2>/dev/null | grep -E "cuda-$(\
361 | /usr/local/cuda/bin/nvcc --version | grep release | \
362 | sed '"'"'s/.*release \([0-9\.]*\).*/\1/'"'"')"); \
363 | apt download --print-uris "$(echo "$installed" | sed '"'"'s/\([^\/]*\).*/\1/'"'"')=$(echo "$installed" | cut -d " " -f 2)" 2>/dev/null'
364 | chk_cudappa_eval=\
365 | '[[ "$check_result" =~ .*('"'"'developer.download.nvidia.com/'"'"').* ]]'
366 | chk_cudappa_issue="\
367 | CUDA was detected to have come from somewhere other than the NVIDIA PPAs. The
368 | CUDA package was sourced from:
369 | "'$check_result'"
370 |
371 | Our Docker container only supports official CUDA installs, & GPUs only work in
372 | containers if the CUDA version exactly matches the version of the host.
373 | Please install CUDA from the official NVIDIA repository:
374 | http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64"
375 | chk_cudappa_fix=\
376 | 'sudo apt remove -y cuda-10* && sudo apt -y autoremove; '\
377 | "${chk_nvidiadriver_fix//$NVIDIA_DEFAULT/$CUDA_DEFAULT}"
378 | chk_cudappa_reboot=1
379 |
380 | chk_dockeravailable_name='Docker is available'
381 | chk_dockeravailable_pass='Found'
382 | chk_dockeravailable_fail='Not found'
383 | chk_dockeravailable_check='docker --version 2>/dev/null'
384 | chk_dockeravailable_eval='[ -n "$check_result" ]'
385 | chk_dockeravailable_issue="\
386 | Docker was not detected on the host system ('docker --version' did not return
387 | a version).
388 |
389 | Please ensure Docker Engine - Community Edition is installed on the system."
390 | chk_dockeravailable_fix=\
391 | 'sudo apt install -y apt-transport-https ca-certificates curl \
392 | software-properties-common &&
393 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - &&
394 | sudo add-apt-repository "deb [arch=amd64] \
395 | https://download.docker.com/linux/ubuntu '"$(lsb_release -cs)"' stable" &&
396 | sudo apt update &&
397 | sudo apt install -y docker-ce &&
398 | sudo groupadd docker;
399 | sudo usermod -aG docker '"$USER"
400 | chk_dockeravailable_reboot=0
401 |
402 | chk_dockerversion_name='Docker version valid'
403 | chk_dockerversion_pass='Valid ($check_result)'
404 | chk_dockerversion_fail='Invalid ($check_result)'
405 | chk_dockerversion_check='docker --version 2>/dev/null | '\
406 | 'sed "s/Docker version \([^,]*\).*/\1/"'
407 | chk_dockerversion_eval='eval_version "$check_result" 19.03'
408 | chk_dockerversion_issue="\
409 | The version of Docker running ("'$check_result'") is incompatible with the
410 | BenchBot build process, which requires at least version 19.03 to use the
411 | NVIDIA Container Toolkit.
412 |
413 | Please install a more recent version of Docker Engine - Community Edition."
414 | chk_dockerversion_fix=\
415 | 'sudo apt remove -y docker* && sudo apt -y autoremove;
416 | sudo apt-get install apt-transport-https ca-certificates curl \
417 | software-properties-common &&
418 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - &&
419 | sudo add-apt-repository "deb [arch=amd64] \
420 | https://download.docker.com/linux/ubuntu '"$(lsb_release -cs)"' stable" &&
421 | sudo apt update &&
422 | sudo apt install -y docker-ce &&
423 | sudo groupadd docker;
424 | sudo usermod -aG docker '"$USER"
425 | chk_dockerversion_reboot=0
426 |
427 | chk_dockernct_name='NVIDIA Container Toolkit installed'
428 | chk_dockernct_pass='Found ($check_result)'
429 | chk_dockernct_fail='Not found'
430 | chk_dockernct_check='nvidia-container-cli --version 2>/dev/null | head -n 1 | '\
431 | 'cut -d " " -f2'
432 | chk_dockernct_eval='[ -n "$check_result" ]'
433 | chk_dockernct_issue="\
434 | NVIDIA Container Toolkit was not detected. It is required to allow Docker
435 | containers to access the GPU on your host system.
436 |
437 | Please install the toolkit from https://github.com/NVIDIA/nvidia-docker"
438 | chk_dockernct_fix=\
439 | 'curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - &&
440 | curl -s -L "https://nvidia.github.io/nvidia-docker/\
441 | '"$(. /etc/os-release;echo $ID$VERSION_ID)"'/nvidia-docker.list" | \
442 | sudo tee /etc/apt/sources.list.d/nvidia-docker.list &&
443 | sudo apt update &&
444 | sudo apt install -y nvidia-container-toolkit &&
445 | sudo systemctl restart docker'
446 | chk_dockernct_reboot=1
447 |
448 | chk_dockerroot_name='Docker runs without root'
449 | chk_dockerroot_pass='Passed'
450 | chk_dockerroot_fail='Failed'
451 | chk_dockerroot_check='docker run --rm hello-world &>/dev/null; echo $?'
452 | chk_dockerroot_eval='[ $check_result -eq 0 ]'
453 | chk_dockerroot_issue="\
454 | Docker failed to run the hello-world test.
455 |
456 | Usually this is caused by Docker not being configured to run without root. If
457 | the fix below does not resolve the issue (a restart / logout is required after
458 | changing groups), please run 'docker run hello-world' & resolve any issues
459 | that occur."
460 | chk_dockerroot_fix=\
461 | 'sudo groupadd docker;
462 | sudo usermod -aG docker '"$USER"
463 | chk_dockerroot_reboot=0
464 |
465 | chk_fsext4_name='/var/lib/docker on ext4 filesystem'
466 | chk_fsext4_pass='Yes ($check_result)'
467 | chk_fsext4_fail='No'
468 | chk_fsext4_check=\
469 | 'if [ $(df -T /var/lib/docker | awk '"'"'/^\// {print $2}'"'"') == "ext4" ]; \
470 | then df -T /var/lib/docker | awk '"'"'/^\// {print $1}'"'"'; fi'
471 | chk_fsext4_eval='[ -n "$check_result" ]'
472 | chk_fsext4_issue="\
473 | The location of Docker images, '/var/lib/docker', is on partition '"'$check_result'"'
474 | which is not a filesystem of type 'ext4'. Non-ext4 filesystems have generally
475 | been found to produce unstable behaviour with Linux Docker images.
476 |
477 | Please move '/var/lib/docker/' to an ext4 drive for best results."
478 | chk_fsext4_fix=\
479 | 'clear_stdin
480 | read -p "New location for /var/lib/docker on an ext4 drive: " new_location
481 | sudo systemctl stop docker
482 | sudo mv /var/lib/docker "$new_location"
483 | sudo ln -sv "$new_location" /var/lib/docker
484 | sudo systemctl start docker'
485 | chk_fsext4_reboot=1
486 |
487 | chk_fsnosuid_name='/var/lib/docker supports suid'
488 | chk_fsnosuid_pass='Enabled'
489 | chk_fsnosuid_fail='Disabled'
490 | chk_fsnosuid_check=\
491 | 'if [ -z $(cat /proc/mounts | grep $(df -T /var/lib/docker | \
492 | awk '"'"'/^\// {print $1}'"'"') | grep "nosuid") ]; then \
493 | df -T /var/lib/docker | awk '"'"'/^\// {print $1}'"'"'; fi'
494 | chk_fsnosuid_eval='[ -n "$check_result" ]'
495 | chk_fsnosuid_issue="\
496 | Support for suid flags was not found on the drive holding /var/lib/docker:
497 | "'$check_result'"
498 |
499 | Please disable the nosuid option on the drive (by editing /etc/fstab, using
500 | the Ubuntu Disk UI, or any other method you prefer). "
501 | chk_fsnosuid_fix=\
502 | 'sudo sed -E -i '"'"'s|(^[^#]*'"'"'$check_result'"'"' .*)nosuid|\1|; s/,,/,/; \
503 | s/ ,/ /; s/, / /'"'"' /etc/fstab &&
504 | sudo systemctl daemon-reload'
505 | chk_fsnosuid_reboot=1
506 |
507 | chk_fsspace_name='/var/lib/docker drive space check'
508 | chk_fsspace_pass='Sufficient space ($check_resultG)'
509 | chk_fsspace_fail='Insufficient space ($check_resultG)'
510 | chk_fsspace_check=\
511 | 'dup_space=$(docker images --filter "reference='$DOCKER_TAG_BACKEND'" \
512 | --format "{{.Size}}") &&
513 | if [ -z "$dup_space" ];
514 | then dup_space=0;
515 | elif [ "${dup_space: -2}" == "GB" ];
516 | then dup_space=$(echo "${dup_space:0:-2}" | sed '"'"'s/\..*$//'"'"');
517 | else
518 | dup_space=0;
519 | fi &&
520 | space=$(df -Th -BG /var/lib/docker | awk '"'"'/^\// {print $5}'"'"') &&
521 | space=$(echo "${space:0:-1}" | sed '"'"'s/\..*$//'"'"') &&
522 | echo "$space+$dup_space" | bc'
523 | chk_fsspace_eval='[ $check_result -gt SIZE ]'
524 | chk_fsspace_issue="\
525 | The drive holding /var/lib/docker needs at least SIZEGB free. This check takes
526 | into account whether you already have BenchBot installed (it will add the size
527 | of the Docker image to your \"available space\").
528 |
529 | Please clear space from the drive, or run the following fix to move
530 | /var/lib/docker to a drive with more space."
531 | chk_fsspace_fix=\
532 | 'clear_stdin
533 | read -p "New location for /var/lib/docker with SIZEGB free: " new_location
534 | sudo systemctl stop docker
535 | sudo mv /var/lib/docker "$new_location"
536 | sudo ln -sv "$new_location" /var/lib/docker
537 | sudo systemctl start docker'
538 | chk_fsspace_reboot=1
539 |
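chk_fsspace_check above strips unit suffixes with bash substring expansion before doing arithmetic; the mechanics, with illustrative values (requires bash 4.2+ for negative lengths):

```shell
# ${var:0:-2} drops the last two characters (e.g. a "GB" suffix);
# ${var%%.*} then keeps only the integer part
size="32.5GB"
num="${size:0:-2}"
echo "${num%%.*}"
# -> 32
```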
540 | chk_pip_name='Pip python package manager available'
541 | chk_pip_pass='Found ($check_result)'
542 | chk_pip_fail='Not found'
543 | chk_pip_check=\
544 | 'python3 -m pip --version 2>/dev/null | cut -d " " -f 2'
545 | chk_pip_eval='[ -n "$check_result" ]'
546 | chk_pip_issue="\
547 | Python package manager pip was not found. It is required for installation of
548 | BenchBot python modules. Please either restart the install script with pip
549 | available, or install it."
550 | chk_pip_fix='sudo apt install -y python3-pip && pip3 install --upgrade pip'
551 | chk_pip_reboot=1
552 |
553 | chk_pythontk_name='Tkinter for Python installed'
554 | chk_pythontk_pass='Found'
555 | chk_pythontk_fail='Not found'
556 | chk_pythontk_check='python3 -c "import tkinter as tk; print(tk.__file__)" 2>/dev/null'
557 | chk_pythontk_eval='[ -n "$check_result" ]'
558 | chk_pythontk_issue="\
559 | Python package python3-tk was not found. It is required to use the
560 | visualisation tools available with the BenchBot API. Please install it via
561 | the package manager."
562 | chk_pythontk_fix='sudo apt install -y python3-tk'
563 | chk_pythontk_reboot=1
564 |
565 | chk_pythonpil_name='PIL (with ImageTk) for Python installed'
566 | chk_pythonpil_pass='Found'
567 | chk_pythonpil_fail='Not found'
568 | chk_pythonpil_check='python3 -c "from PIL import ImageTk; print(1)" 2>/dev/null'
569 | chk_pythonpil_eval='[ -n "$check_result" ]'
570 | chk_pythonpil_issue="\
571 | Python packages python3-pil & python3-pil.imagetk were not found. They are
572 | required to use the visualisation tools available with the BenchBot API.
573 | Please install them via the package manager."
574 | chk_pythonpil_fix='sudo apt install -y python3-pil python3-pil.imagetk'
575 | chk_pythonpil_reboot=1
576 |
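Every entry named in a checks_list_* array maps onto the same eight-variable convention consumed by handle_requirement; a hypothetical skeleton (the `example` identifier is not part of the installer):

```shell
# Skeleton of a check: <id> goes in a checks_list_* array, and each
# chk_<id>_* variable below is pulled by handle_requirement via eval
chk_example_name='Example check'           # label printed for the row
chk_example_pass='Passed ($check_result)'  # shown when _eval succeeds
chk_example_fail='Failed'                  # shown when _eval fails
chk_example_check='echo "ok"'              # command; stdout -> $check_result
chk_example_eval='[ -n "$check_result" ]'  # success test on the output
chk_example_issue='Explanation shown to the user on failure.'
chk_example_fix=''                         # empty = no automatic fix offered
chk_example_reboot=0                       # 1 if the fix requires a reboot
```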
577 |
578 | checks_list_sim_omni=(
579 | "Manual installation steps for Omniverse-powered Isaac Sim:"
580 | "simomnilicense"
581 | "simomninvcrio"
582 | )
583 |
584 | chk_simomnilicense_name='License accepted for Omniverse'
585 | chk_simomnilicense_pass='Yes'
586 | chk_simomnilicense_fail='No'
587 | chk_simomnilicense_check='echo "'$PATH_LICENSES'/omniverse"'
588 | chk_simomnilicense_eval='[ -f "$check_result" ]'
589 | chk_simomnilicense_issue="\
590 | The NVIDIA Omniverse License Agreement (EULA) must be accepted before
591 | Omniverse Kit can start. The license terms for this product can be viewed at:
592 | https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html
593 |
594 | Run the fix below to accept the license."
595 | chk_simomnilicense_fix=\
596 | 'mkdir -p '$PATH_LICENSES' && touch "'$PATH_LICENSES'/omniverse"'
597 | chk_simomnilicense_reboot=1
598 |
599 | chk_simomninvcrio_name='Access to nvcr.io Docker registry'
600 | chk_simomninvcrio_pass='Yes'
601 | chk_simomninvcrio_fail='No'
602 | chk_simomninvcrio_check='echo "" | docker login nvcr.io &>/dev/null; echo $?'
603 | chk_simomninvcrio_eval='[ $check_result -eq 0 ]'
604 | chk_simomninvcrio_issue="\
605 | Could not access the nvcr.io Docker registry.
606 |
607 | Please follow NVIDIA's instructions for generating an account NGC API key:
608 | https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key
609 |
610 | Then either:
611 | 1) Follow the Isaac Sim instructions to log in (DON'T USE sudo):
612 | https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_advanced.html
613 | 2) Run the auto-fix below, and paste your key in when prompted."
614 | chk_simomninvcrio_fix=\
615 | 'clear_stdin
616 | read -p "NGC API Key for your account: " ngc_key
617 | echo "$ngc_key" | docker login --username \$oauthtoken --password-stdin nvcr.io'
618 | chk_simomninvcrio_reboot=1
619 |
620 |
621 | checks_list_post=(
622 | "Validating the build against the host system:"
623 | "cudadriverdep" # Does Nvidia driver satisfy cuda dep?
624 | "Validating BenchBot libraries on the host system:"
625 | "addonscloned"
626 | "addonsuptodate"
627 | "addonsinstalled"
628 | "apicloned"
629 | "apiuptodate"
630 | "apiinstalled"
631 | "evalcloned"
632 | "evaluptodate"
633 | "evalinstalled"
634 | "Integrating BenchBot with the host system:"
635 | "hostsavail"
636 | "symlinks"
637 | )
638 |
639 | chk_cudadriverdep_name='CUDA / NVIDIA versions match'
640 | chk_cudadriverdep_pass='Matches'
641 | chk_cudadriverdep_fail='Does not match'
642 | chk_cudadriverdep_check=\
643 | 'fn='"'"'echo "Nvidia: $(cat /proc/driver/nvidia/version | \
644 | sed "/NVRM version/!d; s/.*Kernel Module *\([0-9\.]*\).*/\1/"), \
645 | CUDA: $(apt list --installed 2>/dev/null | grep -E "cuda-$(\
646 | /usr/local/cuda/bin/nvcc --version | grep release | \
647 | sed "s/.*release \([0-9\.]*\).*/\1/")" | cut -d " " -f 2)"'"'"' &&
648 | host=$(eval "$fn") &&
649 | bb=$(docker run --rm -t '$DOCKER_TAG_CORE' /bin/bash -c "$fn" | \
650 | sed '"'"'s/[[:space:]]*$//'"'"') &&
651 | eq=$([ "$host" == "$bb" ] && echo "==" || echo "!=") &&
652 | printf "%s\n%s\n%s" "Host: $host" "BenchBot: $bb" "Result: $eq"'
653 | chk_cudadriverdep_eval='[[ "$check_result" == *"=="* ]]'
654 | chk_cudadriverdep_issue="\
655 | The CUDA / NVIDIA driver version pairing in the installed BenchBot does not
656 | match the version on your host system, despite our best efforts:
657 |
658 | "'$check_result'"
659 |
660 | This can sometimes happen due to the way CUDA / NVIDIA driver pairings are
661 | passed through the apt dependency tree (cuda -> cuda-drivers ->
662 | nvidia-driver).
663 |
664 | BenchBot needs these to match between host & Docker, so the best solution
665 | available at this point is to remove the existing CUDA install on the host
666 | machine & reinstall. Please do this if we have identified this issue with your
667 | host system setup."
668 | chk_cudadriverdep_fix=\
669 | 'v=$(apt list --installed 2>/dev/null | grep "cuda-[0-9]*-[0-9]*") &&
670 | if [ -z "$v" ]; then v="cuda-10-1"; fi &&
671 | sudo apt remove -y "$v" && sudo apt -y autoremove && sudo apt install -y "$v"'
672 | chk_cudadriverdep_reboot=1
673 |
674 | chk_addonscloned_name='BenchBot Add-ons Manager cloned'
675 | chk_addonscloned_pass='Yes'
676 | chk_addonscloned_fail='No'
677 | chk_addonscloned_check=\
678 | 'git -C '"$PATH_ADDONS"' rev-parse --show-toplevel 2>/dev/null'
679 | chk_addonscloned_eval='[ "$check_result" == "$(realpath '"$PATH_ADDONS"')" ]'
680 | chk_addonscloned_issue="\
681 | The BenchBot Add-ons Manager Python library is not cloned on the host system.
682 | Having it installed is required for using add-ons, which contain all of the
683 | pre-made content for BenchBot (robots, environments, task definitions,
684 | evaluation methods, etc.)."
685 | chk_addonscloned_fix=\
686 | 'rm -rf '"$PATH_ADDONS"' &&
687 | git clone '"$GIT_ADDONS $PATH_ADDONS"' &&
688 | pushd '"$PATH_ADDONS"' &&
689 | git fetch --all && (git checkout -t origin/$BRANCH_DEFAULT ||
690 | git checkout $BRANCH_DEFAULT) && popd'
691 | chk_addonscloned_reboot=1
692 |
693 | chk_addonsuptodate_name='BenchBot Add-ons Manager up-to-date'
694 | chk_addonsuptodate_pass='Up-to-date'
695 | chk_addonsuptodate_fail='Outdated'
696 | chk_addonsuptodate_check=\
697 | '[ -d '"$PATH_ADDONS"' ] && git -C '"$PATH_ADDONS"' rev-parse HEAD &&
698 | git ls-remote '"$GIT_ADDONS"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'"
699 | chk_addonsuptodate_eval='[ -n "$check_result" ] &&
700 | [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]'
701 | chk_addonsuptodate_issue="\
702 | The version of the BenchBot Add-ons Manager Python library on the host system
703 | is out of date. The current version hash & latest version hash respectively
704 | are:
705 |
706 | "'$check_result'"
707 |
708 | Please move to the latest version."
709 | chk_addonsuptodate_fix=\
710 | 'pushd '"$PATH_ADDONS"' &&
711 | git fetch --all && git checkout -- . &&
712 | (git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) &&
713 | git pull && popd && pip3 uninstall -y benchbot_addons'
714 | chk_addonsuptodate_reboot=1
715 |
716 | chk_addonsinstalled_name='BenchBot Add-ons Manager installed'
717 | chk_addonsinstalled_pass='Available'
718 | chk_addonsinstalled_fail='Not found'
719 | chk_addonsinstalled_check=\
720 | 'python3 -c '"'"'import benchbot_addons; print(benchbot_addons.__file__);'"'"' \
721 | 2>/dev/null'
722 | chk_addonsinstalled_eval='[ -n "$check_result" ]'
723 | chk_addonsinstalled_issue="\
724 | BenchBot Add-ons Manager was not found in Python. It is either not installed,
725 | or the current terminal is not correctly sourcing your installed Python
726 | packages (could be a virtual environment, conda, ROS, etc).
727 |
728 | Please do not run the automatic fix if you intend to source a different Python
729 | environment before running BenchBot."
730 | chk_addonsinstalled_fix=\
731 | 'pushd '"$PATH_ADDONS"' &&
732 | python3 -m pip install --upgrade -e . && popd'
733 | chk_addonsinstalled_reboot=1
734 |
735 | chk_apicloned_name='BenchBot API cloned'
736 | chk_apicloned_pass='Yes'
737 | chk_apicloned_fail='No'
738 | chk_apicloned_check=\
739 | 'git -C '"$PATH_API"' rev-parse --show-toplevel 2>/dev/null'
740 | chk_apicloned_eval='[ "$check_result" == "$(realpath '"$PATH_API"')" ]'
741 | chk_apicloned_issue="\
742 | The BenchBot API Python library is not cloned on the host system. Having it
743 | installed significantly improves the development experience, & allows you to
744 | run your submissions natively without containerisation."
745 | chk_apicloned_fix=\
746 | 'rm -rf '"$PATH_API"' &&
747 | git clone '"$GIT_API $PATH_API"' &&
748 | pushd '"$PATH_API"' &&
749 | git fetch --all && (git checkout -t origin/$BRANCH_DEFAULT ||
750 | git checkout $BRANCH_DEFAULT) && popd'
751 | chk_apicloned_reboot=1
752 |
753 | chk_apiuptodate_name='BenchBot API up-to-date'
754 | chk_apiuptodate_pass='Up-to-date'
755 | chk_apiuptodate_fail='Outdated'
756 | chk_apiuptodate_check=\
757 | '[ -d '"$PATH_API"' ] && git -C '"$PATH_API"' rev-parse HEAD &&
758 | git ls-remote '"$GIT_API"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'"
759 | chk_apiuptodate_eval='[ -n "$check_result" ] &&
760 | [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]'
761 | chk_apiuptodate_issue="\
762 | The version of the BenchBot API Python library on the host system is out of
763 | date. The current version hash & latest version hash respectively are:
764 |
765 | "'$check_result'"
766 |
767 | Please move to the latest version."
768 | chk_apiuptodate_fix=\
769 | 'pushd '"$PATH_API"' &&
770 | git fetch --all && git checkout -- . &&
771 | (git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) &&
772 | git pull && popd && pip3 uninstall -y benchbot_api'
773 | chk_apiuptodate_reboot=1
774 |
775 | chk_apiinstalled_name='BenchBot API installed'
776 | chk_apiinstalled_pass='Available'
777 | chk_apiinstalled_fail='Not found'
778 | chk_apiinstalled_check=\
779 | 'python3 -c '"'"'import benchbot_api; print(benchbot_api.__file__);'"'"' \
780 | 2>/dev/null'
781 | chk_apiinstalled_eval='[ -n "$check_result" ]'
782 | chk_apiinstalled_issue="\
783 | BenchBot API was not found in Python. It is either not installed, or the
784 | current terminal is not correctly sourcing your installed Python packages
785 | (could be a virtual environment, conda, ROS, etc).
786 |
787 | Please do not run the automatic fix if you intend to source a different Python
788 | environment before running BenchBot."
789 | chk_apiinstalled_fix=\
790 | 'pushd '"$PATH_API"' &&
791 | python3 -m pip install --upgrade -e . && popd'
792 | chk_apiinstalled_reboot=1
793 |
794 | chk_evalcloned_name='BenchBot evaluation cloned'
795 | chk_evalcloned_pass='Yes'
796 | chk_evalcloned_fail='No'
797 | chk_evalcloned_check=\
798 | 'git -C '"$PATH_EVAL"' rev-parse --show-toplevel 2>/dev/null'
799 | chk_evalcloned_eval='[ "$check_result" == "$(realpath '"$PATH_EVAL"')" ]'
800 | chk_evalcloned_issue="\
801 | The BenchBot evaluation Python library is not cloned on the host system.
802 | Having it installed allows you to evaluate the performance of your semantic
803 | scene understanding algorithms directly from your machine."
804 | chk_evalcloned_fix=\
805 | 'rm -rf '"$PATH_EVAL"' &&
806 | git clone '"$GIT_EVAL $PATH_EVAL"' &&
807 | pushd '"$PATH_EVAL"' &&
808 | git fetch --all && (git checkout -t origin/$BRANCH_DEFAULT ||
809 | git checkout $BRANCH_DEFAULT) && popd'
810 | chk_evalcloned_reboot=1
811 |
812 | chk_evaluptodate_name='BenchBot evaluation up-to-date'
813 | chk_evaluptodate_pass='Up-to-date'
814 | chk_evaluptodate_fail='Outdated'
815 | chk_evaluptodate_check=\
816 | '[ -d '"$PATH_EVAL"' ] && git -C '"$PATH_EVAL"' rev-parse HEAD &&
817 | git ls-remote '"$GIT_EVAL"' $BRANCH_DEFAULT | awk '"'"'{print $1}'"'"
818 | chk_evaluptodate_eval='[ -n "$check_result" ] &&
819 | [ $(echo "$check_result" | uniq | wc -l) -eq 1 ]'
820 | chk_evaluptodate_issue="\
821 | The version of the BenchBot evaluation Python library on the host system is
822 | out of date. The current version hash & latest version hash respectively are:
823 |
824 | "'$check_result'"
825 |
826 | Please move to the latest version."
827 | chk_evaluptodate_fix=\
828 | 'pushd '"$PATH_EVAL"' &&
829 | git fetch --all && git checkout -- . &&
830 | (git checkout -t origin/$BRANCH_DEFAULT || git checkout $BRANCH_DEFAULT) &&
831 | git pull && popd && python3 -m pip uninstall -y benchbot_eval'
832 | chk_evaluptodate_reboot=1
833 |
834 | chk_evalinstalled_name='BenchBot evaluation installed'
835 | chk_evalinstalled_pass='Available'
836 | chk_evalinstalled_fail='Not found'
837 | chk_evalinstalled_check=\
838 | 'python3 -c '"'"'import benchbot_eval; print(benchbot_eval.__file__);'"'"' \
839 | 2>/dev/null'
840 | chk_evalinstalled_eval='[ -n "$check_result" ]'
841 | chk_evalinstalled_issue="\
842 | BenchBot evaluation was not found in Python. It is either not installed, or
843 | the current terminal is not correctly sourcing your installed Python packages
844 | (could be a virtual environment, conda, ROS, etc).
845 |
846 | Please do not run the automatic fix if you intend to source a different Python
847 | environment before running BenchBot."
848 | chk_evalinstalled_fix=\
849 | 'pushd '"$PATH_EVAL"' &&
850 | python3 -m pip install --upgrade -e . && popd'
851 | chk_evalinstalled_reboot=1
852 |
853 | chk_hostsavail_name='BenchBot hosts available'
854 | chk_hostsavail_pass='Found'
855 | chk_hostsavail_fail='Not found'
856 | chk_hostsavail_check=\
857 | 'getent hosts benchbot_ros benchbot_robot benchbot_supervisor benchbot_debug'
858 | chk_hostsavail_eval='[ $(echo "$check_result" | wc -l) -eq 4 ]'
859 | chk_hostsavail_issue="\
860 | The hosts benchbot_ros, benchbot_robot, benchbot_supervisor & benchbot_debug
861 | were not found on the host system. Only the following were found (may be empty):
862 |
863 | "'$check_result'"
864 |
865 | Having all hosts available to your system allows the BenchBot software stack
866 | to communicate between each of the individual components. Please add the
867 | missing hosts to your /etc/hosts file."
868 | chk_hostsavail_fix=\
869 | 'echo -e "\n$hostnames_block" | sudo tee -a /etc/hosts'
870 | chk_hostsavail_reboot=1
871 |
872 | chk_symlinks_name='BenchBot symlinks available'
873 | chk_symlinks_pass='Found'
874 | chk_symlinks_fail='Not found'
875 | chk_symlinks_check=\
876 | '[ -e "'$PATH_SYMLINKS'/benchbot_install" ] &&
877 | [ -e "'$PATH_SYMLINKS'/benchbot_run" ] &&
878 | [ -e "'$PATH_SYMLINKS'/benchbot_submit" ] &&
879 | [ -e "'$PATH_SYMLINKS'/benchbot_eval" ] &&
880 | [ -e "'$PATH_SYMLINKS'/benchbot_batch" ] &&
881 | ls -l "'$PATH_SYMLINKS'/benchbot_"* | cut -d " " -f 9-11'
882 | chk_symlinks_eval='[ -n "$check_result" ]'
883 | chk_symlinks_issue="\
884 | Symbolic links for each of the 5 BenchBot scripts were not found. If you do
885 | not wish to add symbolic links, or would rather add the './bin/' directory to
886 | your PATH, please skip this step."
887 | chk_symlinks_fix=\
888 | 'sudo ln -svf '"$(dirname $abs_path)"'/* '"$PATH_SYMLINKS"
889 | chk_symlinks_reboot=1
890 |
891 | ################################################################################
892 | ######################## Helper functions for commands #########################
893 | ################################################################################
894 |
895 | function cleanup_terminal() {
896 | printf "${colour_nc}"
897 | }
898 |
899 | function handle_requirement() {
900 | required=${2:-0}
901 |
902 |   # Dirtily pull all of the check-related values based on the input argument
903 | name=$(eval 'echo "$chk_'$1'_name"')
904 | pass=$(eval 'echo "$chk_'$1'_pass"')
905 | fail=$(eval 'echo "$chk_'$1'_fail"')
906 | check=$(eval 'echo "$chk_'$1'_check"')
907 | evaluate=$(eval 'echo "$chk_'$1'_eval"')
908 | issue=$(eval 'echo "$chk_'$1'_issue"')
909 | fix=$(eval 'echo "$chk_'$1'_fix"')
910 | reboot=$(eval 'echo "$chk_'$1'_reboot"')
911 |
912 | # Perform the check, printing the result & resolving issues if possible
913 | retval=0
914 | printf "\t$name: "
915 | check_result=$(eval "$check") && true
916 | printf "\033[46G"
917 | if $(eval $evaluate); then
918 | printf "${colour_green}%35s" "${pass//'$check_result'/$check_result}"
919 | retval=0
920 | else
921 | printf "${colour_yellow}%35s\n${colour_nc}" \
922 | "${fail//'$check_result'/$check_result}"
923 | printf "\n${colour_yellow}%s\n" "${issue//'$check_result'/$check_result}"
924 | if [ -z "$fix" ]; then
925 | printf "\n No automatic fix is available.\n"
926 | unresolved=1
927 | else
928 | printf "\n The following commands can be run to try & fix the issue:\n"
929 | printf "\n%s\n\n" "${fix//'$check_result'/$check_result}"
930 | clear_stdin
931 | read -n 1 -r -p " Would you like the above commands to be run (y/N)? " \
932 | apply_fix
933 | if [[ "$apply_fix" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
934 | printf "\n\n${colour_nc}"
935 | eval "$fix"
936 | unresolved=$?
937 | printf "\n${colour_yellow}"
938 | if [ "$reboot" -eq 0 ] && [ $unresolved -eq 0 ]; then
939 | s="Please reboot your machine before re-running the install script."
940 | printf "\n $s"
941 | retval=1
942 | elif [ $unresolved -eq 0 ] && [ $required -eq 0 ]; then
943 | printf "\n Restarting the installation script ...\n"
944 | retval=2
945 | fi
946 | else
947 | unresolved=1
948 | fi
949 | fi
950 | if [ $unresolved -ne 0 ] && [ $required -eq 0 ]; then
951 | s="Failed to solve issue. Please manually resolve, & re-run the "
952 | s+="install script."
953 | printf "\n $s"
954 | retval=1
955 | elif [ $required -ne 0 ]; then
956 | printf "\n"
957 | fi
958 | fi
959 | printf "${colour_nc}\n"
960 | return $retval
961 | }
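As an aside for readers: handle_requirement() above resolves its `chk_<name>_*` variables through eval'd echo calls. Bash's `${!var}` indirect expansion performs the same name-based lookup without re-parsing the value as shell code. A minimal standalone sketch (the `chk_demo_name` variable and `demo` key are illustrative only, not part of the installer's real check lists):

```shell
#!/usr/bin/env bash
# Illustrative check variable; not one of the installer's real checks.
chk_demo_name='Demo check'

key='demo'
field='name'
var="chk_${key}_${field}"    # build the variable name as a string
echo "${!var}"               # indirect expansion; prints: Demo check
```

The eval approach in the script also works; indirection is simply the safer idiom when the looked-up value should be treated as plain data.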
962 |
963 | function uninstall_benchbot() {
964 | kill_benchbot
965 |
966 | header_block "UNINSTALLING BENCHBOT" ${colour_blue}
967 |
968 | pkgs="$(python3 -m pip freeze | grep benchbot || true)"
969 | if [ -n "$pkgs" ]; then
970 | echo "$pkgs" | sed 's/.*egg=\(.*\)/\1/' | xargs python3 -m pip uninstall -y
971 | fi
972 | sitepkgs=($(echo "$(python3 -c 'import site; \
973 | print(" ".join(site.getsitepackages() + [site.getusersitepackages()]))' \
974 | 2>/dev/null)" | tr ' ' '\n'))
975 | if [ -n "$sitepkgs" ] ; then
976 | echo "Manually removing eggs from pip ..."
977 | # This fix is needed as apparently Ubuntu patches broke pip uninstall...
978 | # See: https://github.com/pypa/pip/issues/4438
979 | for l in "${sitepkgs[@]}"; do
980 |       # The find below is WAY TOO AGGRESSIVE... but I can't do anything else
981 | # due to the bug above, & inconsistency of where this file will be saved
982 | # between raw python install, virtualenvs, conda, etc etc
983 | find "$l" -regex '.*benchbot-\(api\|eval\).*' -print -delete \
984 | 2>/dev/null || true
985 | done
986 | fi
987 |
988 | targets=$(docker images -q -f reference='benchbot/*:*')
989 | if [ -n "$targets" ] ; then
990 | echo "Removing BenchBot Docker resources ..."
991 | docker rmi -f $targets;
992 | fi
993 |
994 | rm -rfv "$PATH_ADDONS" "$PATH_API" "$PATH_EVAL" \
995 | 2>/dev/null || true
996 | sudo rm -v "$PATH_SYMLINKS"/benchbot* 2>/dev/null || true
997 |
998 | echo -e "\nFinished uninstalling!"
999 | }
1000 |
1001 | ################################################################################
1002 | ################## Parse & handle arguments / installer state ##################
1003 | ################################################################################
1004 |
1005 | # Cleanup terminal colours whenever exiting
1006 | trap cleanup_terminal EXIT
1007 |
1008 | # Safely parse options input
1009 | input=("$@")
1010 | _args="help,addons:,addons-only:,branch:,force-clean,uninstall,list-addons,\
1011 | list-simulators,no-simulator,no-update,remove-addons:,simulators:,version"
1012 | parse_out=$(getopt -o ha:A:b:fs:uv --long $_args -n "$(basename "$abs_path")" \
1013 | -- "$@")
1014 | if [ $? != 0 ]; then exit 1; fi
1015 | eval set -- "$parse_out"
1016 | addons=
1017 | no_simulator=
1018 | simulators="$SIMULATOR_DEFAULT"
1019 | updates_skip=
1020 | while true; do
1021 | case "$1" in
1022 | -h|--help)
1023 | echo "$usage_text" ; exit 0 ;;
1024 | -a|--addons)
1025 | addons="$2"; shift 2 ;;
1026 | -A|--addons-only)
1027 | install_addons "$2"; exit 0 ;;
1028 | -b|--branch)
1029 | # This is a real dirty way to do this... sorry
1030 | BRANCH_DEFAULT="$2"; shift 2
1031 | printf "\n${colour_yellow}%s${colour_nc}\n\n" \
1032 | "Using branch '$BRANCH_DEFAULT' instead of the default!" ;;
1033 | -f|--force-clean)
1034 | uninstall_benchbot; shift ;;
1035 | -s|--simulators)
1036 | simulators="$2"; shift 2 ;;
1037 | -u|--uninstall)
1038 | uninstall_benchbot; exit ;;
1039 | -v|--version)
1040 | print_version_info; exit ;;
1041 | --list-addons)
1042 | list_addons; exit ;;
1043 | --list-simulators)
1044 | list_simulators; exit ;;
1045 | --no-simulator)
1046 | no_simulator=1; shift ;;
1047 | --no-update)
1048 | updates_skip=1; shift ;;
1049 | --remove-addons)
1050 | remove_addons "$2"; exit ;;
1051 | --)
1052 | shift ; break ;;
1053 | *)
1054 | echo "$(basename "$abs_path"): option '$1' is unknown"; shift ; exit 1 ;;
1055 | esac
1056 | done
1057 |
1058 | # Validate and sanitise argument values
1059 | validate_install_args "$simulators"
1060 | if [ -z "$addons" ]; then addons="$ADDONS_DEFAULT"; fi
1061 | simulators=( $([ -z "$no_simulator" ] && echo "$simulators" | sed 's/,/\n/g') )
1062 |
1063 | # Pre-install
1064 | if [ -z "$updates_skip" ]; then
1065 | header_block "CHECKING BENCHBOT SCRIPTS VERSION" ${colour_blue}
1066 |
1067 |   echo -e "\nFetching latest hash for BenchBot scripts ... "
1068 | _benchbot_info=$(is_latest_benchbot $BRANCH_DEFAULT) && is_latest=0 || \
1069 | is_latest=1
1070 | benchbot_latest_hash=$(echo "$_benchbot_info" | latest_version_info | \
1071 | cut -d ' ' -f 1)
1072 | echo -e "\t\t$benchbot_latest_hash."
1073 |
1074 | if [ $is_latest -eq 0 ]; then
1075 | echo -e "\n${colour_green}BenchBot scripts are up-to-date.${colour_nc}"
1076 | elif [ $is_latest -eq 1 ]; then
1077 | echo -e "\n${colour_yellow}BenchBot scripts are outdated. Updating &"\
1078 | "restarting install script ...\n${colour_nc}"
1079 | git fetch --all && git checkout -- . && git checkout "$benchbot_latest_hash"
1080 | echo -e "\n${colour_yellow}Done.${colour_nc}"
1081 | popd > /dev/null && exec $0 ${input[@]} --no-update
1082 | else
1083 | echo -e "$_benchbot_info"
1084 | exit 1
1085 | fi
1086 | fi
1087 |
1088 | ################################################################################
1089 | ############################# Installation process #############################
1090 | ################################################################################
1091 |
1092 |
1093 | # PART 1: Ensuring the system state is compatible with BenchBot
1094 | header_block "PART 1: EXAMINING SYSTEM STATE" $colour_blue
1095 |
1096 | # Patch in any values that can only be determined at runtime
1097 | # TODO clean this up to properly understand sizes for different simulators...
1098 | _sz=$([ -n "$simulators" ] && echo "$SIZE_GB_FULL" || echo "$SIZE_GB_LITE")
1099 | chk_fsspace_eval=${chk_fsspace_eval/SIZE/$_sz}
1100 | chk_fsspace_issue=${chk_fsspace_issue/SIZE/$_sz}
1101 | chk_fsspace_fix=${chk_fsspace_fix/SIZE/$_sz}
1102 |
1103 | # Create our list of checks based on what simulators were requested
1104 | _chks=("${checks_list_pre[@]}")
1105 | for s in "${simulators[@]}"; do
1106 | _chks=("${_chks[@]}" $(printf "%s\n" "${checks_list_sim_omni[@]}") )
1107 | done
1108 |
1109 | # Iterate through the list of checks, doing some dirty string based variable
1110 | # extraction in the handle_requirement() function...
1111 | for c in "${_chks[@]}"; do
1112 | if [[ "$c" =~ :$ ]]; then
1113 | printf "\n${colour_blue}$c${colour_nc}\n"
1114 | elif [ -n "$c" ]; then
1115 | set +e
1116 | handle_requirement "$c"
1117 | res=$?
1118 | set -e
1119 | if [ $res -eq 2 ]; then
1120 | popd > /dev/null && exec $0 ${input[@]}
1121 | elif [ $res -eq 1 ]; then
1122 | exit 1
1123 | fi
1124 | fi
1125 | done
1126 |
1127 | printf "\n\n"
1128 | clear_stdin
1129 | s="All requirements & dependencies fulfilled. Docker containers for the BenchBot\n"
1130 | s+="software stack will now be built (which may take anywhere from a few seconds\n"
1131 | s+="to many hours). Would you like to proceed (y/N)? "
1132 | read -n 1 -r -e -p "$(printf "${colour_green}$s${colour_nc}")" prompt
1133 | printf "\n"
1134 | if [[ "$prompt" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
1135 | printf "Proceeding with install ... \n\n"
1136 | else
1137 | printf "Install aborted.\n\n"
1138 | exit 1
1139 | fi
1140 |
1141 | # PART 2: Fetching information about latest BenchBot versions
1142 | header_block "PART 2: FETCHING LATEST BENCHBOT VERSION INFO" $colour_blue
1143 |
1144 | # Get the latest commit hashes for each of our git repos
1145 | # TODO adapt this to handle multiple simulators...
1146 | hash_fail_msg="Failed (check Internet connection & valid branch name!). Exiting."
1147 | if [ -n "$simulators" ]; then
1148 | benchbot_simulator_hashes=()
1149 | for s in "${simulators[@]}"; do
1150 | echo -e "\nFetching latest hash for BenchBot Simulator '$s' ... "
1151 |   benchbot_simulator_hashes+=("$( \
1152 |     (is_latest_benchbot_simulator $s $BRANCH_DEFAULT || true) | \
1153 |     latest_version_info \
1154 |   )")
1155 | if [ -z "${benchbot_simulator_hashes[-1]}" ]; then
1156 | printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n"
1157 | exit 1
1158 | fi
1159 | echo -e "\t\t${benchbot_simulator_hashes[-1]}."
1160 | done
1161 | fi
1162 | echo "Fetching latest hash for BenchBot Robot Controller ... "
1163 | benchbot_controller_hash=$( (is_latest_benchbot_controller $BRANCH_DEFAULT || \
1164 | true) | latest_version_info)
1165 | if [ -z "$benchbot_controller_hash" ]; then
1166 | printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n"
1167 | exit 1
1168 | fi
1169 | echo -e "\t\t$benchbot_controller_hash."
1170 | echo "Fetching latest hash for BenchBot Supervisor ... "
1171 | benchbot_supervisor_hash=$( (is_latest_benchbot_supervisor $BRANCH_DEFAULT || \
1172 | true) | latest_version_info)
1173 | if [ -z "$benchbot_supervisor_hash" ]; then
1174 | printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n"
1175 | exit 1
1176 | fi
1177 | echo -e "\t\t$benchbot_supervisor_hash."
1178 | echo "Fetching latest hash for BenchBot API ... "
1179 | benchbot_api_hash=$( (is_latest_benchbot_api $BRANCH_DEFAULT || true) | \
1180 | latest_version_info)
1181 | if [ -z "$benchbot_api_hash" ]; then
1182 | printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n"
1183 | exit 1
1184 | fi
1185 | echo -e "\t\t$benchbot_api_hash."
1186 |
1187 | echo "Fetching latest hash for BenchBot ROS Messages ... "
1188 | benchbot_msgs_hash=$( (is_latest_benchbot_msgs $BRANCH_DEFAULT || true) | \
1189 | latest_version_info)
1190 | if [ -z "$benchbot_msgs_hash" ]; then
1191 | printf "\n${colour_red}$hash_fail_msg${colour_nc}\n\n"
1192 | exit 1
1193 | fi
1194 | echo -e "\t\t$benchbot_msgs_hash."
1195 |
1196 | # PART 3: Build docker images (both simulator & submission base image)
1197 | header_block "PART 3: BUILDING DOCKER IMAGES" $colour_blue
1198 |
1199 | # Gather some useful variables, including from the host
1200 | _tmp_dockerfile="/tmp/dockerfile"
1201 | nvidia_driver_version=$(apt list --installed 2>/dev/null | \
1202 | grep "nvidia-driver-" | cut -d ' ' -f 2)
1203 | cuda_drivers_version=$(apt list --installed 2>/dev/null | \
1204 | grep "cuda-drivers/" | cut -d ' ' -f 2)
1205 | cuda_version=$(apt list --installed 2>/dev/null | \
1206 | grep "cuda-$(/usr/local/cuda/bin/nvcc --version | grep release | \
1207 | sed 's/.*release \([0-9\.]*\).*/\1/; s/\./-/g')" | cut -d ' ' -f 2)
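The cuda_version detection above hinges on a sed transform that turns nvcc's release string into an apt-style package suffix: capture the "X.Y" after "release", then swap dots for dashes. A standalone run with a sample release line (the version shown is illustrative):

```shell
# Convert an nvcc release string into an apt package-name suffix.
# Capture the digits/dots after "release", then replace dots with dashes.
echo "Cuda compilation tools, release 11.4, V11.4.120" | \
  sed 's/.*release \([0-9\.]*\).*/\1/; s/\./-/g'
# prints: 11-4
```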
1208 |
1209 | # Build the BenchBot Core Docker image
1210 | printf "\n${colour_blue}%s${colour_nc}\n" \
1211 | "BUILDING BENCHBOT CORE DOCKER IMAGE:"
1212 | docker buildx build -t "$DOCKER_TAG_CORE" -f "$PATH_DOCKERFILE_CORE" \
1213 | --build-arg TZ=$(cat /etc/timezone) \
1214 | --build-arg NVIDIA_DRIVER_VERSION="${nvidia_driver_version}" \
1215 | --build-arg CUDA_DRIVERS_VERSION="${cuda_drivers_version}" \
1216 | --build-arg CUDA_VERSION="${cuda_version}" $PATH_ROOT && \
1217 | build_ret=0 || build_ret=1
1218 | if [ $build_ret -ne 0 ]; then
1219 | printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \
1220 | "ERROR: Building BenchBot \"core\" returned a non-zero error code" \
1221 | "$build_ret"
1222 | exit 1
1223 | fi
1224 |
1225 | # Build the BenchBot Backend Docker image
1226 | printf "\n${colour_blue}%s${colour_nc}\n" \
1227 | "BUILDING BENCHBOT BACKEND DOCKER IMAGE:"
1228 | cat "$PATH_DOCKERFILE_BACKEND" "$PATH_DOCKERFILE_SHARED" > "$_tmp_dockerfile"
1229 | docker buildx build -t "$DOCKER_TAG_BACKEND" -f - \
1230 | --build-arg BENCHBOT_CONTROLLER_GIT="${GIT_CONTROLLER}" \
1231 | --build-arg BENCHBOT_CONTROLLER_HASH="${benchbot_controller_hash}" \
1232 | --build-arg BENCHBOT_MSGS_GIT="${GIT_MSGS}" \
1233 | --build-arg BENCHBOT_MSGS_HASH="${benchbot_msgs_hash}" \
1234 | --build-arg BENCHBOT_SUPERVISOR_GIT="${GIT_SUPERVISOR}" \
1235 | --build-arg BENCHBOT_SUPERVISOR_HASH="${benchbot_supervisor_hash}" \
1236 | --build-arg BENCHBOT_ADDONS_PATH="${PATH_ADDONS_INTERNAL}" \
1237 | $PATH_ROOT < "$_tmp_dockerfile" && build_ret=0 || build_ret=1
1238 | if [ $build_ret -ne 0 ]; then
1239 | printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \
1240 | "ERROR: Building BenchBot backend returned a non-zero error code" \
1241 | "$build_ret"
1242 | exit 1
1243 | fi
1244 |
1245 | # Build Docker images for each of the requested simulators
1246 | for i in "${!simulators[@]}"; do
1247 | s="${simulators[i]}"
1248 | printf "\n${colour_blue}%s${colour_nc}\n" \
1249 | "BUILDING BENCHBOT SIMULATOR '$s' DOCKER IMAGE:"
1250 | cat "$PATH_DOCKERFILE_SIM_PREFIX$s.Dockerfile" "$PATH_DOCKERFILE_SHARED" > \
1251 | "$_tmp_dockerfile"
1252 | docker buildx build -t "$DOCKER_TAG_SIM_PREFIX$s" -f - \
1253 | --build-arg BENCHBOT_CONTROLLER_GIT="${GIT_CONTROLLER}" \
1254 | --build-arg BENCHBOT_CONTROLLER_HASH="${benchbot_controller_hash}" \
1255 | --build-arg BENCHBOT_MSGS_GIT="${GIT_MSGS}" \
1256 | --build-arg BENCHBOT_MSGS_HASH="${benchbot_msgs_hash}" \
1257 | --build-arg BENCHBOT_SIMULATOR_GIT="${GIT_SIMULATOR_PREFIX}${s}" \
1258 | --build-arg BENCHBOT_SIMULATOR_HASH="${benchbot_simulator_hashes[i]}" \
1259 | --build-arg BENCHBOT_ADDONS_PATH="${PATH_ADDONS_INTERNAL}" \
1260 | $PATH_ROOT < "$_tmp_dockerfile" && build_ret=0 || build_ret=1
1261 | if [ $build_ret -ne 0 ]; then
1262 | printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \
1263 | "ERROR: Building BenchBot simulator '$s' returned a non-zero error code" \
1264 | "$build_ret"
1265 | exit 1
1266 | fi
1267 |
1268 | # Pre-baking (TODO need a way to generalise this... at the moment it is very
1269 | # ad-hoc)
1270 | if [ "$s" = "sim_omni" ]; then
1271 | printf "\n${colour_blue}%s${colour_nc}\n" \
1272 | "BAKING SIMULATOR DATA INTO '$s' DOCKER IMAGE:"
1273 | docker container rm tmp >/dev/null 2>&1 || true
1274 | xhost +local:root > /dev/null
1275 | docker run "${SIM_OMNI_ARGS[@]}" --gpus all --name tmp \
1276 | --env DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix \
1277 | -t $DOCKER_TAG_SIM_PREFIX$s /bin/bash -c "$(printf '%s %s' \
1278 | "./python.sh -c 'from omni.isaac.kit import SimulationApp; " \
1279 | "k = SimulationApp({\"headless\": False}); k.close()'"
1280 | )"
1281 | xhost -local:root > /dev/null
1282 | docker commit --change "ENV SIM_BAKED=1" tmp $DOCKER_TAG_SIM_PREFIX$s
1283 | docker container rm tmp
1284 | fi
1285 | done
1286 |
1287 | # Build the BenchBot Submission Docker image
1288 | printf "\n${colour_blue}%s${colour_nc}\n" \
1289 | "BUILDING BENCHBOT SUBMISSION DOCKER IMAGE:"
1290 | docker buildx build -t "$DOCKER_TAG_SUBMISSION" \
1291 | -f "$PATH_DOCKERFILE_SUBMISSION" \
1292 | --build-arg BENCHBOT_API_GIT="${GIT_API}" \
1293 | --build-arg BENCHBOT_API_HASH="${benchbot_api_hash}" $PATH_ROOT && \
1294 | build_ret=0 || build_ret=1
1295 | if [ $build_ret -ne 0 ]; then
1296 | printf "\n${colour_red}%s: %d\n\n${build_err}${colour_nc}\n" \
1297 | "ERROR: Building BenchBot \"submission\" returned a non-zero error code" \
1298 | "$build_ret"
1299 | exit 1
1300 | fi
1301 |
1302 | rm "$_tmp_dockerfile"
1303 |
1304 | # Clean up any unnecessary / outdated BenchBot components still lying around
1305 | printf "\n${colour_blue}%s${colour_nc}\n" \
1306 | "CLEANING UP OUTDATED BENCHBOT REMNANTS:"
1307 | kill_benchbot "quiet"
1308 |
1309 | # PART 4: Running post build checks
1310 | header_block "PART 4: RUNNING POST-BUILD HOST CHECKS" $colour_blue
1311 |
1312 | for c in "${checks_list_post[@]}"; do
1313 | if [[ "$c" =~ :$ ]]; then
1314 | printf "\n${colour_blue}$c${colour_nc}\n"
1315 | else
1316 |     handle_requirement "$c" 1 && res=0 || res=$?
1317 |     if [ $res -ne 0 ]; then
1318 | exit 1
1319 | fi
1320 | fi
1321 | done
1322 |
1323 | # PART 5: Installing requested BenchBot Add-ons
1324 | printf "\n"
1325 | header_block "PART 5: INSTALLING BENCHBOT ADD-ONS" $colour_blue
1326 |
1327 | install_addons "$addons"
1328 |
1329 | # We are finally done...
1330 | echo -e "Finished!"
1331 |
--------------------------------------------------------------------------------
/bin/benchbot_run:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | ################################################################################
4 | ################### Load Helpers & Global BenchBot Settings ####################
5 | ################################################################################
6 |
7 | set -euo pipefail
8 | IFS=$'\n\t'
9 | abs_path=$(readlink -f $0)
10 | pushd $(dirname $abs_path) > /dev/null
11 | source .helpers
12 |
13 | ################################################################################
14 | ########################### Script Specific Settings ###########################
15 | ################################################################################
16 |
17 | # None
18 |
19 | ################################################################################
20 | ######################## Helper functions for commands #########################
21 | ################################################################################
22 |
23 | usage_text="$(basename "$abs_path") -- Run script for the BenchBot backend & simulator / real robot
24 |
25 | USAGE:
26 |
27 | Get info about the program / available options:
28 | $(basename "$abs_path") [-h|--help|--list-tasks|--list-envs]
29 |
30 | Run a simulator with a specific task setup:
31 | $(basename "$abs_path") --env ENV_NAME --task TASK_NAME
32 | $(basename "$abs_path") -e ENV_NAME -t TASK_NAME
33 |
34 | Request the backend to explicitly use the carter robot platform:
35 | $(basename "$abs_path") -e ENV_NAME -t TASK_NAME --robot carter
36 |
37 | OPTION DETAILS:
38 |
39 | -h, --help
40 | Show this help menu.
41 |
42 | -e, --env, --environment
43 | Select an environment to launch in the simulator (this must be
44 | called with the --task option). Environments are identified via
45 | \"ENVIRONMENT_NAME:VARIANT\" where ENVIRONMENT_NAME is the name of
46 |         the environment & VARIANT is the environment variant to use. For
47 |         example, variant 3 of the office environment would be:
48 |
49 | office:3
50 |
51 | Some tasks may require more than one environment variation (e.g.
52 | scene change detection). Multiple variations are specified using
53 | the format \"ENVIRONMENT_NAME:VARIANT_ONE:VARIANT_TWO\". For
54 |         example, using the first and then the third variant of the office
55 | environment would be specified via:
56 |
57 | office:1:3
58 |
59 | (use '--list-envs' to see a list of available environments)
60 |
61 | -f, --force-updateless
62 | BenchBot will exit if it detects updates to the software stack. Set
63 | this flag to continue using outdated software temporarily. Note that
64 | limited support is available for outdated software stacks, and all
65 | novel work will focus on the latest software stack. You should only
66 | use this flag when it is inconvenient to update immediately.
67 |
68 | -k, --kill-controller
69 |         Run a kill command that stops everything BenchBot is currently
70 |         running, including the persistent robot controller.
71 |
72 | --list-envs, --list-environments
73 | Search for & list all installed environments. The listed
74 | environment names are in the format needed for the '--env' option.
75 | Use '--show-environment' to see more details about an environment.
76 |
77 | --list-robots
78 | List all supported robot targets. This list will adjust to include
79 | what is available in your current installation (i.e. there will be
80 | no simulated robots listed if you installed with '--no-simulator').
81 | Use '--show-robot' to see more details about a robot.
82 |
83 | --list-tasks
84 | Lists all supported task combinations. The listed tasks are printed
85 | in the format needed for the '--task' option. Use '--show-task' to
86 | see more details about a task.
87 |
88 | -r, --robot
89 |         Configure the BenchBot supervisor for a specific robot. This is
90 |         currently used to select either a simulated or a real robot, but is
91 |         flexible enough to target any desired robot platform in the future,
92 |         whether that be simulated or real.
93 |
94 | If the full backend is installed (with a simulator), the 'sim'
95 | target robot will be used by default, otherwise the 'real' target
96 | robot will be the default.
97 |
98 | (use '--list-robots' to see a list of available robots)
99 |
100 | --show-env, --show-environment
101 | Prints information about the provided environment name if
102 | installed. The corresponding YAML's location will be displayed,
103 | with a snippet of its contents.
104 |
105 | --show-robot
106 | Prints information about the provided robot name if installed. The
107 | corresponding YAML's location will be displayed, with a snippet
108 | of its contents.
109 |
110 | --show-task
111 | Prints information about the provided task name if installed. The
112 | corresponding YAML's location will be displayed, with a snippet
113 | of its contents.
114 |
116 | -t, --task
117 | Configure BenchBot for a specific task style (this must be called
118 | with the '--env' option). Tasks are specified based on their name in
119 | the YAML file. The naming convention generally follows the format
120 | \"TYPE:OPTION_1:OPTION_2:...\". For example:
121 |
122 | semantic_slam:passive:ground_truth
123 |
124 | is a semantic SLAM task with passive robot control and observations
125 | using a ground truth robot pose.
126 |
127 | (use '--list-tasks' to see a list of supported task options)
128 |
129 |     -u, --updates-check
130 | Check for available updates to the BenchBot software stack and exit
131 | immediately.
132 |
133 | -v, --version
134 | Print version info for current installation.
135 |
136 | FURTHER DETAILS:
137 |
138 | Please contact the authors of BenchBot for support or to report bugs:
139 | b.talbot@qut.edu.au
140 | "
141 |
142 | _list_environments_pre=\
143 | "Either simulated or real world environments can be selected. Please see the
144 | '--list-robots' command for the available robot platforms. Only simulated robots
145 | can be run in simulated environments, and only real robots in real environments
146 | (as you would expect).
147 |
148 | The following environments are supported in your BenchBot installation:
149 | "
150 |
151 | _list_formats_pre=\
152 | "Formats are used by a task to declare the formats of results in a re-usable
153 | manner. You should ensure that tasks you use point to installed results
154 | formats. The following formats are supported in your BenchBot installation:
155 | "
156 |
157 | _list_robots_pre=\
158 | "The following robot targets are supported in your BenchBot installation:
159 | "
160 |
161 | _list_tasks_pre=\
162 | "The following tasks are supported in your BenchBot installation:
163 | "
164 |
165 | _robot_err=\
166 | "ERROR: The BenchBot Robot Controller container has exited unexpectedly. This
167 | should not happen under normal operating conditions. Please see the complete
168 | log below for a dump of the crash output:"
169 |
170 | exit_code=
171 | kill_persist=
172 | function exit_gracefully() {
173 | if [ "$simulator_required" -ne 0 ]; then
174 | printf "\n\n${colour_blue}%s${colour_nc}\n" \
175 | "Re-closing network openings used for real robot:"
176 | close_network $network_forwarding $network_policy
177 | fi
178 | kill_benchbot "" $kill_persist
179 | xhost -local:root > /dev/null
180 | trap '' SIGINT SIGQUIT SIGKILL SIGTERM EXIT
181 | exit ${exit_code:-0}
182 | }
183 |
184 |
185 | ################################################################################
186 | #################### Parse & handle command line arguments #####################
187 | ################################################################################
188 |
189 | # Safely parse options input
190 | _args="help,env:,environment:,force-updateless,kill-controller,list-envs,\
191 | list-environments,list-formats,list-robots,list-tasks,robot:,show-env:,\
192 | show-environment:,show-format:,show-robot:,show-task:,task:,updates-check,\
193 | version"
194 | parse_out=$(getopt -o he:t:r:fuvk --long $_args -n "$(basename "$abs_path")" \
195 | -- "$@")
196 | if [ $? != 0 ]; then exit 1; fi
197 | eval set -- "$parse_out"
198 | updates_exit=
199 | updates_skip=
200 | environment=
201 | robot=
202 | task=
203 | while true; do
204 | case "$1" in
205 | -h|--help)
206 | echo "$usage_text" ; exit 0 ;;
207 | -e|--env|--environment)
208 | environment="$2"; shift 2 ;;
209 | -f|--force-updateless)
210 | updates_skip=1 ; shift ;;
211 | -k|--kill-controller)
212 | kill_persist=0; simulator_required=0; exit_gracefully ;;
213 | --list-envs|--list-environments)
214 | list_environments "$_list_environments_pre" "an"; exit $? ;;
215 | --list-formats)
216 | list_content "formats" "$_list_formats_pre"; exit $? ;;
217 | --list-robots)
218 | list_content "robots" "$_list_robots_pre"; exit $? ;;
219 | --list-tasks)
220 | list_content "tasks" "$_list_tasks_pre"; exit $? ;;
221 | -r|--robot)
222 | robot="$2"; shift 2 ;;
223 | --show-env|--show-environment)
224 | show_environment "$2"; exit $? ;;
225 | --show-format)
226 | show_content "formats" "$2"; exit $? ;;
227 | --show-robot)
228 | show_content "robots" "$2"; exit $? ;;
229 | --show-task)
230 | show_content "tasks" "$2"; exit $? ;;
231 | -t|--task)
232 | task="$2"; shift 2 ;;
233 | -u|--updates-check)
234 | updates_exit=1 ; shift ;;
235 | -v|--version)
236 | print_version_info; exit ;;
237 | --)
238 | shift ; break ;;
239 | *)
240 | echo "$(basename "$abs_path"): option '$1' is unknown"; shift ; exit 1 ;;
241 | esac
242 | done
243 |
244 | # Extract a list of environments from the provided environment string
245 | environments=($(env_list "$environment" | tr ' ' '\n'))
246 | environments_string="$(printf '%s,' "${environments[@]}")"
247 | environments_string="${environments_string::-1}"
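The two lines above join the environments array into a comma-separated string: printf appends a comma after every element, then the `${var::-1}` substring expansion drops the trailing one. A standalone sketch with sample data (the `office:*` values are illustrative, not real env_list output):

```shell
# Join an array with commas; sample environment names only.
environments=(office:1 office:3)
joined="$(printf '%s,' "${environments[@]}")"   # "office:1,office:3,"
joined="${joined::-1}"                          # strip the trailing comma
echo "$joined"                                  # prints: office:1,office:3
```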
248 |
249 | if [ -z "$updates_exit" ]; then
250 | # Generate any derived configuration parameters
251 | type="$(run_manager_cmd 'exists("robots", [("name", "'$robot'")]) and print(\
252 | get_value_by_name("robots", "'$robot'", "type"))')"
253 | simulator_required=1
254 | if [[ "$type" == "sim_"* ]]; then simulator_required=0; fi
255 |
256 | # TODO add an option for managing the persistent container
257 | kill_persist=1
258 |
259 | # Bail if any of the requested configurations are invalid
260 | validate_run_args "$robot" "$task" "$type" "$environment" "${environments[@]}"
261 | fi
262 |
263 | ################################################################################
264 | ############## Run the simulator / real robot & BenchBot backend ###############
265 | ################################################################################
266 |
267 | # Check for & handle updates to the BenchBot software stack
268 | header_block "CHECKING FOR BENCHBOT SOFTWARE STACK UPDATES" ${colour_blue}
269 |
270 | if [ -n "$updates_skip" ]; then
271 | echo -e "${colour_yellow}Skipping ...${colour_nc}"
272 | elif ! update_check "$(git branch -a --contains HEAD | grep -v HEAD | \
273 | grep '.*remotes/.*' | head -n 1 | sed 's/.*\/\(.*\)/\1/')"; then
274 | exit 1;
275 | fi
276 | if [ -n "$updates_exit" ]; then exit 0; fi
277 |
278 | # Run the BenchBot software stack (kill whenever they exit)
279 | kill_benchbot "" $kill_persist
280 | trap exit_gracefully SIGINT SIGQUIT SIGKILL SIGTERM EXIT
281 | header_block "STARTING THE BENCHBOT SOFTWARE STACK" ${colour_blue}
282 |
283 | # Print some configuration information
284 | printf "${colour_blue}%s${colour_nc}
285 |
286 | Selected task: $task
287 | Task results format: $results_format
288 | Selected robot: $robot
289 | Selected environment: $environment
290 | Scene/s: " \
291 | "Running the BenchBot system with the following settings:"
292 | for i in "${!environments[@]}"; do
293 | if [ $i -ne 0 ]; then
294 |     printf "%*s" 26 ""
295 | fi
296 | printf "%s, starting @ pose %s\n" "${environments[$i]}" \
297 | "$(run_manager_cmd 'print(get_value_by_name("environments", \
298 | "'${environments[$i]}'", "start_pose"))')"
299 |   printf "%*s" 26 ""
300 | printf "(map_path = '%s')\n" \
301 | "$(run_manager_cmd 'print(get_value_by_name("environments", \
302 | "'${environments[$i]}'", "map_path"))')"
303 | done
304 | printf " %-22s" "Simulator required:"
305 | printf "%s (%s)\n" \
306 | $([ "$simulator_required" -ne 0 ] && echo "No" || echo "Yes") "$type"
307 | echo ""
308 |
309 | # Create the network for BenchBot software stack
310 | echo -e "${colour_blue}Creating shared network '$DOCKER_NETWORK':${colour_nc}"
311 | docker network inspect "$DOCKER_NETWORK" >/dev/null 2>&1 || \
312 | docker network create "$DOCKER_NETWORK" --subnet="$URL_DOCKER_SUBNET" \
313 | --ip-range="$URL_DOCKER_SUBNET" --gateway="$URL_DOCKER_GATEWAY"
314 | if [ "$simulator_required" -ne 0 ]; then
315 | printf "\n${colour_blue}%s${colour_nc}\n" \
316 | "Opening network to facilitate communications with real robot:"
317 | network_forwarding=$(cat /proc/sys/net/ipv4/conf/all/forwarding)
318 | network_policy=$(sudo iptables --list FORWARD | head -n 1 | \
319 | sed 's/Chain.*(policy \([^)]*\))/\1/')
320 | open_network;
321 | fi
322 |
323 | # Declare reusable parts to ensure our containers run with consistent settings
324 | xhost +local:root > /dev/null
325 | docker_run="docker run -t --gpus all \
326 | --env DISPLAY \
327 | --env ROS_MASTER_URI=http://$HOSTNAME_ROS:11311 \
328 | --env ROS_HOSTNAME=\$name \
329 | --network $DOCKER_NETWORK \
330 | --name=\$name \
331 | --hostname=\$name \
332 | --volume /tmp/.X11-unix:/tmp/.X11-unix \
333 | --volume $PATH_ADDONS:$PATH_ADDONS_INTERNAL:ro"
334 | cmd_prefix='source $ROS_WS_PATH/devel/setup.bash && '
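The docker_run template above contains the literal placeholder `$name` (the backslash stops expansion), which each container start below fills in via bash pattern substitution. A minimal sketch of that pattern (the template string and hostname are illustrative values):

```shell
# '$name' is a literal placeholder here (single quotes stop expansion);
# ${var//pattern/replacement} fills it in once per container.
template='docker run --name=$name --hostname=$name'
echo "${template//'$name'/benchbot_ros}"
# prints: docker run --name=benchbot_ros --hostname=benchbot_ros
```

The later `${cmd// /$'\t'}` expansion then swaps spaces for tabs so the command word-splits correctly under this script's `IFS=$'\n\t'`.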
335 |
336 | # Start containers for the ROS core, robot controller, & BenchBot supervisor
337 | printf "\n${colour_blue}%s${colour_nc}\n" \
338 | "Starting persistent container for ROS core:"
339 | if [ "$(docker container inspect -f '{{.State.Running}}' \
340 | $HOSTNAME_ROS 2> /dev/null)" == "true" ]; then
341 | printf "Skipping (already running)\n"
342 | else
343 | cmd="${docker_run//'$name'/$HOSTNAME_ROS}"
344 | ${cmd// /$'\t'} --ip "$URL_ROS" -d $DOCKER_TAG_BACKEND /bin/bash -c \
345 | "$cmd_prefix"'roscore'
346 | fi
347 |
348 | if [ "$simulator_required" -eq 0 ]; then
349 | # TODO would be nice to have a less stupid way to do this, but bash's lack of
350 | # nested arrays (and difficulties passing arrays in general from functions)
351 | # makes it hard...
352 | args=()
353 | if [ "$type" = "sim_omni" ]; then
354 | args=("${SIM_OMNI_ARGS[@]}")
355 | fi
356 |
357 | # TODO DANGER: there are LOTS of ways this can go wrong. Need to do this
358 | # more robustly if we ever expand outside Omniverse-only focus. Examples:
359 | # - if next run uses a different simulator type, what do we do?
360 | # - it's really unclear to the user what is still running in background
361 |
362 | # Run a persistent container and execute within it if our simulator has
363 | # caching utilities (e.g. Cache in Omniverse)
364 | printf "\n${colour_blue}%s${colour_nc}\n" \
365 | "Starting persistent container for BenchBot Robot Controller ($type):"
366 | if [ "$(docker container inspect -f '{{.State.Running}}' \
367 | $HOSTNAME_ROBOT 2> /dev/null)" == "true" ]; then
368 | printf "Skipping (already running)\n"
369 | else
370 | cmd="${docker_run//'$name'/$HOSTNAME_ROBOT}"
371 | ${cmd// /$'\t'} --ip "$URL_ROBOT" -d "${args[@]}" \
372 | -t $DOCKER_TAG_SIM_PREFIX$type /bin/bash -c \
373 | "$cmd_prefix"'rosrun benchbot_robot_controller benchbot_robot_controller'
374 | fi
375 | fi
376 |
377 | echo -e "\n${colour_blue}Starting container for BenchBot Supervisor:${colour_nc}"
378 | cmd="${docker_run//'$name'/$HOSTNAME_SUPERVISOR}"
379 | ${cmd// /$'\t'} --ip "$URL_SUPERVISOR" -d $DOCKER_TAG_BACKEND /bin/bash -c \
380 | "$cmd_prefix"'python3 -m benchbot_supervisor --task-name "'$task'" \
381 | --robot-name "'$robot'" --environment-names "'$environments_string'" \
382 | --addons-path "'$PATH_ADDONS_INTERNAL'"'
383 |
384 | echo -e "\n${colour_blue}Starting container for BenchBot Debugging:${colour_nc}"
385 | cmd="${docker_run//'$name'/$HOSTNAME_DEBUG}"
386 | ${cmd// /$'\t'} --ip "$URL_DEBUG" -it -d $DOCKER_TAG_BACKEND /bin/bash
387 |
388 | # Print the output of the Supervisor, watching for failures
389 | header_block "BENCHBOT IS RUNNING (Ctrl^C to exit) ..." ${colour_green}
390 |
391 | docker logs --follow $HOSTNAME_SUPERVISOR &
392 |
393 | while [ -n "$(docker ps -q -f 'name='$HOSTNAME_SUPERVISOR)" ] && \
394 | ([ "$simulator_required" -ne 0 ] || \
395 | [ -n "$(docker ps -q -f 'name='$HOSTNAME_ROBOT)" ]); do
396 | sleep 1
397 | done
398 | #sleep infinity
399 |
400 | if [ "$simulator_required" -eq 0 ] && \
401 | [ -z "$(docker ps -q -f 'name='$HOSTNAME_ROBOT)" ]; then
402 | header_block "BENCHBOT ROBOT CONTROLLER ERROR" ${colour_red}
403 | echo -e "\n${colour_red}$_robot_err${colour_nc}\n"
404 | docker logs $HOSTNAME_ROBOT
405 | fi
406 | exit_code=1
407 | exit
408 |
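The container launches above lean on a subtle bash templating trick: `docker_run` holds a literal `$name` placeholder that is filled per container via `${docker_run//'$name'/...}`, and spaces are then swapped for tabs so the stored command re-splits into words under the script's `IFS=$'\n\t'`. A minimal standalone sketch of the idiom (the container name is illustrative only):

```shell
#!/usr/bin/env bash
# Sketch of the template trick: the script sets IFS to newline+tab, so a
# command stored in a single string must use tabs (not spaces) as word
# separators when it is expanded unquoted.
IFS=$'\n\t'

# A reusable template with a literal '$name' placeholder (note the escape)
template="echo --name=\$name --hostname=\$name"

# Fill the placeholder, then swap spaces for tabs so expansion re-splits
cmd="${template//'$name'/my_container}"
result=$(${cmd// /$'\t'})

echo "$result"   # -> --name=my_container --hostname=my_container
```

The same mechanism is why `${cmd// /$'\t'}` appears before every `docker run` invocation above: without the tab substitution, the whole string would be treated as one giant word.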
--------------------------------------------------------------------------------
/bin/benchbot_submit:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | ################################################################################
4 | ################### Load Helpers & Global BenchBot Settings ####################
5 | ################################################################################
6 |
7 | set -euo pipefail
8 | IFS=$'\n\t'
9 | abs_path=$(readlink -f $0)
10 | pushd $(dirname $abs_path) > /dev/null
11 | source .helpers
12 | popd > /dev/null
13 |
14 | ################################################################################
15 | ########################### Script Specific Settings ###########################
16 | ################################################################################
17 |
18 | SUBMISSION_CONTAINER_NAME="benchbot_submission"
19 | SUBMISSION_OUTPUT_DEFAULT="../submission.tgz"
20 |
21 | ################################################################################
22 | ######################## Helper functions for commands #########################
23 | ################################################################################
24 |
25 | usage_text="$(basename "$0") -- Submission script for running your solution to the Scene
26 | Understanding Challenge against a running simulator. It supports 5 different
27 | modes of submission:
28 |
29 | 1. native:
30 | Run your code in your host system without any containerisation (useful
31 | for when you are developing and testing things). This assumes that the
32 | simulator is already running.
33 |
34 | 2. containerised:
35 | Bundles up your code & executes it using Docker & the Dockerfile
36 | provided with your code (useful for testing a competition submission
37 | locally before submitting). The created Docker image talks to a running
38 | simulator.
39 |
40 | 3. submission:
41 | Bundles up your code and saves a *.tgz ready for submission to a
42 | challenge (not currently in use due to code being run locally on your
43 | machine)
44 |
45 | 4. native (example):
46 | Same as native, but uses an example provided in a BenchBot add-on. The
47 |         example already defines the native command to run; you just have to
48 |         add runtime args via the --args flag if necessary.
49 |
50 | 5. containerised (example):
51 | Same as containerised, but uses an example provided in a BenchBot
52 | add-on. The example already defines where the Dockerfile exists, you
53 | just have to add runtime args via the --args flag if necessary.
54 |
55 | USAGE:
56 |
57 | Get information about the submission options:
58 | $(basename "$0") [-h|--help]
59 |
60 | Submit & run natively on your system:
61 | $(basename "$0") [-n|--native] COMMAND_TO_RUN
62 |
63 | Submit, compile into a containerised environment, & run the container on
64 | your machine:
65 | $(basename "$0") [-c|--containerised] DIRECTORY_CONTAINING_DOCKERFILE
66 |
67 |     Bundle up your solution into a *.tgz ready for submission to the challenge:
68 | $(basename "$0") [-s|--submission] DIRECTORY_FOR_SUBMISSION
69 |
70 | Submit, compile into a containerised environment, run the container on your
71 | machine, & evaluate the results which are also saved to 'my_results.json':
72 | $(basename "$0") [-e|--evaluate-with] \\
73 | [-r|--results-location] 'my_results.json' \\
74 | [-c|--containerised] DIRECTORY_FOR_SUBMISSION
75 |
76 | Run an example, with some runtime arguments:
77 |
78 | $(basename "$0") --example EXAMPLE_NAME -n \\
79 | -a '--arg1 value --arg2 value'
80 |
81 |
82 | OPTION DETAILS:
83 |
84 | -h,--help
85 | Show this help menu.
86 |
87 | -a, --args
88 | Runtime arguments to pass through to the submission. You will need
89 | to use quotes to properly handle multiple arguments. For example:
90 |
91 | $(basename "$0") -n 'python3 my_script.py' -a '--arg1 a --arg2 b'
92 |
93 | These arguments are handled differently depending on the mode:
94 | - native: they are appended to the end of the command
95 | - containerised: they are appended to the end of the Dockerfile's
96 | 'CMD'
97 | - submission: a single positional argument is used to specify the
98 | *.tgz's save location
99 |
100 | -c, --containerised
101 | Uses the Dockerfile provided with your solution to start a Docker
102 |         container running your solution. Dockerfiles are the means by which
103 | you concisely communicate WHAT system configuration is needed to
104 | run your solution (i.e. do you need Python3, cuda, ROS, etc).
105 |
106 | This mode requires an extra parameter specifying either:
107 | - the directory where your solution's Dockerfile resides (the
108 | filename is assumed to be 'Dockerfile'), or
109 | - the full path for your solution's Dockerfile (supports custom
110 | filenames)
111 |
112 | For example, the following commands are equivalent for a solution
113 | with a Dockerfile named 'Dockerfile' in the current directory:
114 |
115 | $(basename "$0") -c .
116 | $(basename "$0") -c ./Dockerfile
117 |
118 | -E, --example
119 | Name of an installed example to run. All examples at least support
120 | native operation, with most also supporting containerised
121 | operation.
122 |
123 | Examples run in native mode by default. Examples can still be run
124 | in containerised mode, but require the '--example-containerised'
125 | flag.
126 |
127 |         (use '--list-examples' to see a list of installed examples)
128 |
129 | --example-containerised
130 | Runs the example in containerised mode instead of native mode. This
131 | option is only valid when running an example.
132 |
133 | -e, --evaluate-with
134 | Name of the evaluation method to use for evaluation of the provided
135 | submission after it has finished running. Evaluation will only be
136 | performed if this option is provided.
137 |
138 | This option assumes your submission saves results to the location
139 | referenced by 'benchbot_api.RESULT_LOCATION' (currently
140 |         '/tmp/benchbot_result'). Evaluation will not work as expected if
141 | results are not saved to that location.
142 |
143 | See '--list-methods' in 'benchbot_eval' for a list of supported
144 | methods, and 'benchbot_eval --help' for a description of the
145 | scoring process.
146 |
147 | --list-examples
148 | List all available examples. The listed examples are printed in the
149 | format needed for the '--example' option. Use '--show-example' to
150 | see more details about an example.
151 |
152 | -n, --native
153 | Runs your solution directly on your system without applying any
154 | containerisation (useful when you are developing & testing your
155 | solution).
156 |
157 | This mode requires an extra parameter specifying the command to
158 | natively run your solution. For example, if your solution is a
159 | Python script called 'solution.py':
160 |
161 | $(basename "$0") -n 'python3 solution.py'
162 |
163 | -r, --results-location
164 | Copy results produced by the submission to another location. Note:
165 | this does not change where '-e' looks for results, it merely
166 | specifies another location where the user can view the results of
167 | their submission. Like '-e', this flag will not work as expected if
168 | the submission does not save results in the expected location
169 | ('benchbot_api.RESULT_LOCATION').
170 |
171 | -s, --submission
172 | Bundles up your solution into a *.tgz ready for submission. The
173 | directory where your solution exists is a required extra parameter.
174 |
175 | Optionally, you can also provide a positional arg via the '-a' flag
176 | to specify where to put the *.tgz. For example, to bundle up your
177 | solution in the current directory on your desktop:
178 |
179 | $(basename "$0") -s . -a \$HOME/Desktop
180 |
181 | --show-example
182 | Prints information about the provided example if installed. The
183 | corresponding YAML's location will be displayed, with a snippet of
184 | its contents.
185 |
186 | -v,--version
187 | Print version info for current installation.
188 |
189 | FURTHER DETAILS:
190 |
191 | Please contact the authors of BenchBot for support or to report bugs:
192 | b.talbot@qut.edu.au
193 | "
194 |
195 | mode_duplicate_err="ERROR: Multiple submission modes were selected. Please ensure only
196 | one of -n|-c|-s is provided."
197 |
198 | mode_selection_err="ERROR: No valid submission mode was selected (-n|-c|-s). Please see
199 | 'benchbot_submit --help' for further details."
200 |
201 | _list_examples_pre=\
202 | "The following BenchBot examples are available in your installation:
203 | "
204 |
205 | active_pid=
206 | exit_code=
207 | function exit_gracefully() {
208 | # $1 mode (numeric)
209 | echo ""
210 |
211 | # Pass the signal to the currently running process
212 | if [ -n "$active_pid" ]; then
213 | kill -TERM $active_pid &> /dev/null || true
214 | wait $active_pid || true
215 | active_pid=
216 | fi
217 |
218 | # Cleanup containers if we ran in containerised mode
219 | if [ "$1" == 1 ] || [ "$1" == 4 ]; then
220 | printf "\n"
221 | header_block "Cleaning up user containers" ${colour_blue}
222 | docker system prune -f # TODO this is probably too brutal
223 | fi
224 |
225 | exit ${exit_code:-0}
226 | }
227 |
228 | ################################################################################
229 | #################### Parse & handle command line arguments #####################
230 | ################################################################################
231 |
232 | # Safely parse options input
233 | _args="args:,evaluate-with:,example:,example-containerised,help,containerised:,\
234 | list-examples,native:,results-location:,show-example:,submission:,version"
235 | parse_out=$(getopt -o a:e:E:hc:n:r:s:v --long $_args -n "$(basename "$0")" \
236 |     -- "$@") || exit 1
237 |
238 | eval set -- "$parse_out"
239 | args=
240 | evaluate_method=
241 | example=
242 | example_containerised=
243 | extra_args=
244 | mode=
245 | mode_details=
246 | mode_dup=
247 | results_location=
248 | while true; do
249 | case "$1" in
250 | -a|--args)
251 | args="$2" ; shift 2 ;;
252 | -e|--evaluate-with)
253 | evaluate_method="$2" ; shift 2 ;;
254 | -E|--example)
255 | example="$2"; shift 2 ;;
256 | --example-containerised)
257 | example_containerised=1; shift 1 ;;
258 | -h|--help)
259 | echo "$usage_text" ; shift ; exit 0 ;;
260 | --list-examples)
261 | list_content "examples" "$_list_examples_pre" "an"; exit $? ;;
262 | -n|--native|-c|--containerised|-s|--submission)
263 | if [ -n "$mode" ]; then mode_dup=1; fi
264 | mode="$1"; mode_details="$2" ; shift 2 ;;
265 | -r|--results-location)
266 | results_location="$2"; shift 2 ;;
267 | --show-example)
268 | show_content "examples" "$2"; exit $? ;;
269 | -v|--version)
270 | print_version_info; exit ;;
271 | --)
272 | extra_args=$(echo "$@" | sed 's/-- *//'); break ;;
273 | *)
274 | echo "$(basename "$0"): option '$1' is unknown"; shift ; exit 1 ;;
275 | esac
276 | done
277 |
278 | # Generate any derived configuration parameters
279 | mode=$(expand_submission_mode "$mode")
280 | mode_num="$(submission_mode "$example" "$example_containerised" "$mode")"
281 | mode_details="$(echo "$mode_details $extra_args" | sed 's/^\s*//; s/\s*$//')"
282 |
283 | # Bail if any of the requested configurations are invalid
284 | validate_submit_args "$evaluate_method" "$example" "$example_containerised" \
285 | "$mode" "$mode_num" "$mode_dup" "$mode_details" "$results_location" "$args"
286 |
287 | ################################################################################
288 | ################## Submit your BenchBot solution as requested ##################
289 | ################################################################################
290 |
291 | # Before we start a submission, figure out all of our derived configuration
292 | sub_str="$(submission_mode_string $mode_num $example)"
293 | sub_str_short="$(echo "$sub_str" | sed 's/ *(.*$//')"
294 | sub_cmd="$(submission_command "$mode_num" "$mode_details" "$example" "$args")"
295 | sub_dir="$(submission_directory "$mode_num" "$mode_details" "$example")"
296 | sub_df="$(submission_dockerfile "$mode_num" "$mode_details" "$example")"
297 | sub_out="$(submission_output "$mode_num" "$args")"
298 |
299 | # Now print relevant configuration information
300 | mode_string=""
301 | echo "Submitting to the BenchBot system with the following settings:
302 |
303 | Submission mode: $sub_str"
304 | echo \
305 | " Perform evaluation: "\
306 | "$([ -z "$evaluate_method" ] && echo "No" || echo "Yes ($evaluate_method)")"
307 | if [ -n "$results_location" ]; then
308 | echo \
309 | " Results save location: $results_location"
310 | fi
311 | echo ""
312 | echo \
313 | " Command to execute: $sub_cmd"
314 | echo \
315 | " Command directory: $sub_dir"
316 | if [ -n "$sub_df" ]; then
317 | echo \
318 | " Dockerfile to build: $sub_df"
319 | fi
320 | if [ -n "$sub_df" ] && [ -n "$args" ]; then
321 | echo \
322 | " Docker runtime args: $args"
323 | fi
324 | if [ -n "$sub_out" ]; then
325 | echo \
326 | " Output *.tgz filename: $sub_out"
327 | fi
328 | echo ""
329 |
330 | # Actually perform the submission
331 | header_block "Running submission in '$sub_str_short' mode" ${colour_green}
332 |
333 | trap "exit_gracefully $mode_num" SIGINT SIGQUIT SIGTERM EXIT
334 |
335 | # Clear out any previous results in default location
336 | results_src=
337 | if [ -n "$results_location" ] || [ -n "$evaluate_method" ]; then
338 | results_src=$(python3 -c \
339 | 'from benchbot_api.benchbot import RESULT_LOCATION; print(RESULT_LOCATION)')
340 | rm -rf "$results_src"
341 | printf "\nRemoved any existing cached results from: $results_src\n\n"
342 | fi
343 |
344 | # Handle the submission
345 | if [ "$mode_num" == 0 ] || [ "$mode_num" == 3 ]; then
346 | # This is native submission mode
347 | printf "%s\n\n\t%s\n\n%s\n\n\t%s\n\n" \
348 | "Running submission natively via command:" "'$sub_cmd'" "In directory:" \
349 | "'$sub_dir'"
350 | pushd "$sub_dir" >/dev/null
351 | set +e
352 | eval "$sub_cmd"
353 | run_ret=$?
354 | set -e
355 | popd >/dev/null
356 | elif [ "$mode_num" == 2 ]; then
357 | # This is bundling up submission mode
358 | echo -e "Bundling up submission from '$sub_dir' to '$sub_out' ...\n"
359 | pushd "$sub_dir" >/dev/null
360 | set +e
361 | eval "$sub_cmd"
362 | run_ret=$?
363 | set -e
364 | popd >/dev/null
365 | if [ $run_ret -eq 0 ]; then echo -e "\nSaved to: $sub_out"; fi
366 | elif [ "$mode_num" == 1 ] || [ "$mode_num" == 4 ]; then
367 | # This is a containerised submission
368 | echo "Containerising '$sub_df' ..."
369 | pushd "$sub_dir" >/dev/null
370 | submission_tag="benchbot/submission:"$(echo "$(pwd)" | sha256sum | cut -c1-10)
371 | eval "$(echo "$sub_cmd" | sed \
372 | "s/docker build /\0 -t '${submission_tag//\//\\\/}' /")" &
373 | active_pid=$!
374 | wait $active_pid && run_ret=0 || run_ret=1
375 | if [ $run_ret -ne 0 ]; then
376 | echo "Docker build returned a non-zero error code: $run_ret"
377 | else
378 | xhost +local:root
379 | echo "Waiting for Docker network ('$DOCKER_NETWORK') to become available..."
380 | while [ -z "$(docker network ls -q -f 'name='$DOCKER_NETWORK)" ]; do
381 | sleep 1;
382 | done
383 | set +e
384 | # TODO PASSTHROUGH THE ARGS!!!
385 | docker run --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY \
386 | --network "$DOCKER_NETWORK" --name="$SUBMISSION_CONTAINER_NAME" \
387 | --hostname="$SUBMISSION_CONTAINER_NAME" \
388 | -e args="$args" -i "$submission_tag"
389 | run_ret=$?
390 | set -e
391 | xhost -local:root
392 | fi
393 | popd >/dev/null
394 | else
395 | echo "Ran in unsupported mode '$mode_num' (this error should never happen!)"
396 | run_ret=1
397 | fi
398 |
399 | # Exit here if the submission failed
400 | if [ $run_ret -ne 0 ]; then
401 | printf "${colour_red}\n%s: %d${colour_nc}\n" \
402 | "Submission failed with result error code" "$run_ret"
403 | exit_code=$run_ret
404 | exit
405 | fi
406 |
407 | # Perform any evaluation that may have been requested by the caller
408 | if [ -n "$results_location" ] || [ -n "$evaluate_method" ]; then
409 | header_block "Processing results" ${colour_blue}
410 |
411 | # Pull the results out of the container if appropriate
412 | if [ "$mode_num" == 1 ] || [ "$mode_num" == 4 ]; then
413 | if ! docker cp "${SUBMISSION_CONTAINER_NAME}:${results_src}"\
414 | "${results_src}" 2>/dev/null; then
415 |       printf "${colour_yellow}\n%s${colour_nc}\n" \
416 | "Failed to extract results from submission container; were there any?"
417 | echo "{}" > "${results_src}"
418 | fi
419 |     printf "\nExtracted results from container '%s' to '%s'.\n" \
420 | "$SUBMISSION_CONTAINER_NAME" "$results_src"
421 | fi
422 |
423 | # Warn & write some empty results if there are none available
424 | if [ ! -f "$results_src" ]; then
425 | printf "\n${colour_yellow}%s\n ${results_src}${colour_nc}\n" \
426 | "Requested use of results, but the submission saved no results to: "
427 | echo "{}" > "${results_src}"
428 | fi
429 |
430 | # Copy results to a new location if requested
431 | if [ -n "$results_location" ]; then
432 | printf "\nCopying results from '%s' to '%s' ...\n" "$results_src" \
433 | "$results_location"
434 | rsync -avP "$results_src" "$results_location"
435 | fi
436 |
437 | # Run evaluation on the results if requested
438 | if [ -n "$evaluate_method" ]; then
439 | if [ -z "$results_location" ]; then results_location="$results_src"; fi
440 | printf "\nRunning evaluation on results from '%s' ... \n" \
441 | "$results_location"
442 | benchbot_eval --method "$evaluate_method" "$results_location"
443 | fi
444 | fi
445 |
446 | exit
447 |
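The `exit_gracefully` handler at the top of this script implements a common bash cleanup pattern: record the PID of the active child, and on interrupt forward SIGTERM to it and reap it before exiting. A stripped-down sketch of the pattern (the `sleep` stands in for the real submission process, and the handler is invoked directly so the sketch finishes promptly):

```shell
#!/usr/bin/env bash
# Sketch of the graceful-exit pattern from benchbot_submit. SIGKILL is not
# in the trap list because it cannot be caught; listing it is dead code.
active_pid=

exit_gracefully() {
  if [ -n "$active_pid" ]; then
    kill -TERM "$active_pid" 2>/dev/null || true
    wait "$active_pid" 2>/dev/null || true   # reap the child
    active_pid=
  fi
}
trap exit_gracefully SIGINT SIGQUIT SIGTERM EXIT

sleep 60 &            # stand-in for the long-running submission
active_pid=$!
saved_pid=$active_pid

exit_gracefully       # normally fired by the trap on Ctrl^C or exit
if kill -0 "$saved_pid" 2>/dev/null; then alive=yes; else alive=no; fi
echo "child alive after cleanup: $alive"   # -> child alive after cleanup: no
```

Clearing `active_pid` inside the handler makes it idempotent, so it is safe for the EXIT trap to fire again after a signal-triggered cleanup.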
--------------------------------------------------------------------------------
/docker/backend.Dockerfile:
--------------------------------------------------------------------------------
1 | # Extend the BenchBot Core image
2 | FROM benchbot/core:base
3 |
4 | # Create a /benchbot working directory
5 | WORKDIR /benchbot
6 |
7 | # Install benchbot components, ordered by how expensive installation is
8 | ARG BENCHBOT_SUPERVISOR_GIT
9 | ARG BENCHBOT_SUPERVISOR_HASH
10 | ENV BENCHBOT_SUPERVISOR_PATH="/benchbot/benchbot_supervisor"
11 | RUN apt update && apt install -y python3 python3-pip && \
12 | git clone $BENCHBOT_SUPERVISOR_GIT $BENCHBOT_SUPERVISOR_PATH && \
13 | pushd $BENCHBOT_SUPERVISOR_PATH && \
14 | git checkout $BENCHBOT_SUPERVISOR_HASH && pip3 install .
15 |
16 | # Expects to be built with shared_tools.Dockerfile added to the end
17 |
--------------------------------------------------------------------------------
/docker/core.Dockerfile:
--------------------------------------------------------------------------------
1 | # Start from the official Ubuntu image
2 | FROM ubuntu:focal
3 |
4 | # Setup a base state with needed packages & useful default settings
5 | SHELL ["/bin/bash", "-c"]
6 | ARG TZ
7 | RUN echo "$TZ" > /etc/timezone && ln -s /usr/share/zoneinfo/"$TZ" \
8 | /etc/localtime && apt update && apt -y install tzdata
9 | RUN apt update && apt install -yq wget gnupg2 software-properties-common git \
10 | vim ipython3 tmux iputils-ping
11 |
12 | # Install Nvidia software (cuda & drivers)
13 | # Note: the disgusting last RUN could entirely be replaced by 'apt satisfy ...'
14 | # on Ubuntu 20.04 (apt version 2)... I can't find a pre v2 way to make apt
15 | # install the required version of dependencies (as opposed to just the latest)
16 | ARG NVIDIA_DRIVER_VERSION
17 | ARG CUDA_DRIVERS_VERSION
18 | ARG CUDA_VERSION
19 | ENV NVIDIA_VISIBLE_DEVICES="all"
20 | ENV NVIDIA_DRIVER_CAPABILITIES="compute,display,graphics,utility"
21 | RUN add-apt-repository ppa:graphics-drivers && \
22 | wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin && \
23 | mv -v cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600 && \
24 | apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub && \
25 | add-apt-repository -n "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /" && \
26 | apt update
27 | RUN CUDA_NAME="cuda-$(echo "${CUDA_VERSION}" | \
28 | sed 's/\([0-9]*\)\.\([0-9]*\).*/\1\.\2/; s/\./-/')" && \
29 | NVIDIA_NAME="nvidia-driver-$(echo "${NVIDIA_DRIVER_VERSION}" | \
30 | sed 's/\(^[0-9]*\).*/\1/')" && \
31 | NVIDIA_DEPS="$(apt depends "$NVIDIA_NAME=$NVIDIA_DRIVER_VERSION" 2>/dev/null | \
32 | grep '^ *Depends:' | sed 's/.*Depends: \([^ ]*\) (.\?= \([^)]*\))/\1 \2/' | \
33 | while read d; do read a b <<< "$d"; v=$(apt policy "$a" 2>/dev/null | \
34 | grep "$b" | grep -vE "(Installed|Candidate)" | sed "s/.*\($b[^ ]*\).*/\1/"); \
35 | echo "$a=$v"; done)" && \
36 | CUDA_DRIVERS_DEPS="$(apt depends "cuda-drivers=$CUDA_DRIVERS_VERSION" 2>/dev/null | \
37 | grep '^ *Depends:' | sed 's/.*Depends: \([^ ]*\) (.\?= \([^)]*\))/\1 \2/' | \
38 | while read d; do read a b <<< "$d"; v=$(apt policy "$a" 2>/dev/null | \
39 | grep "$b" | grep -vE "(Installed|Candidate)" | sed "s/.*\($b[^ ]*\).*/\1/"); \
40 | echo "$a=$v"; done)" && \
41 | CUDA_DEPS="$(apt depends "$CUDA_NAME=$CUDA_VERSION" 2>/dev/null | \
42 | grep '^ *Depends:' | sed 's/.*Depends: \([^ ]*\) (.\?= \([^)]*\))/\1 \2/' | \
43 | while read d; do read a b <<< "$d"; v=$(apt policy "$a" 2>/dev/null | \
44 | grep "$b" | grep -vE "(Installed|Candidate)" | sed "s/.*\($b[^ ]*\).*/\1/"); \
45 | echo "$a=$v"; done)" && \
46 | TARGETS="$(echo "$NVIDIA_DEPS $NVIDIA_NAME=$NVIDIA_DRIVER_VERSION" \
47 | "$CUDA_DRIVERS_DEPS cuda-drivers=$CUDA_DRIVERS_VERSION" \
48 | "$CUDA_DEPS $CUDA_NAME=$CUDA_VERSION" | \
49 | tr '\n' ' ')" && \
50 | DEBIAN_FRONTEND=noninteractive apt install -yq $TARGETS
51 |
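The `sed` one-liners at the top of the big RUN above derive apt package series names from full version strings. A standalone sketch of just that derivation, with illustrative version values (the real ones are pinned via the build args):

```shell
#!/usr/bin/env bash
# Sketch of the version-to-package-name derivation used in core.Dockerfile.
# The version values below are illustrative, not pinned by BenchBot.
CUDA_VERSION="11.7.0-1"
NVIDIA_DRIVER_VERSION="515.65.01-0ubuntu1"

# Keep 'major.minor' of the CUDA version, then swap the dot for a dash
CUDA_NAME="cuda-$(echo "$CUDA_VERSION" | \
  sed 's/\([0-9]*\)\.\([0-9]*\).*/\1\.\2/; s/\./-/')"

# Keep only the leading major number of the driver version
NVIDIA_NAME="nvidia-driver-$(echo "$NVIDIA_DRIVER_VERSION" | \
  sed 's/\(^[0-9]*\).*/\1/')"

echo "$CUDA_NAME $NVIDIA_NAME"   # -> cuda-11-7 nvidia-driver-515
```

These names are what the subsequent `apt depends`/`apt policy` pipeline resolves into an exact, version-pinned install target list.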
--------------------------------------------------------------------------------
/docker/shared_tools.Dockerfile:
--------------------------------------------------------------------------------
1 | # Note: this Dockerfile is not meant to be used in isolation. It is used to add
2 | # BenchBot's shared tools like ROS packages and addons to an existing Docker
3 | # image
4 |
5 | # Ensure our benchbot directory exists
6 | ENV BENCHBOT_DIR="/benchbot"
7 | RUN mkdir -p $BENCHBOT_DIR
8 |
9 | # Install ROS Noetic
10 | RUN apt update && apt install -y curl && \
11 | echo "deb http://packages.ros.org/ros/ubuntu focal main" > \
12 | /etc/apt/sources.list.d/ros-latest.list && \
13 | curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | \
14 | apt-key add - && apt update && \
15 | apt install -y ros-noetic-ros-base python3-rosdep \
16 | python3-rosinstall python3-rosinstall-generator python3-wstool \
17 | python3-catkin-tools python3-pip build-essential \
18 | ros-noetic-tf2-ros ros-noetic-tf
19 |
20 | # Build a ROS Catkin workspace
21 | ENV ROS_WS_PATH="$BENCHBOT_DIR/ros_ws"
22 | RUN rosdep init && rosdep update && mkdir -p $ROS_WS_PATH/src && \
23 | source /opt/ros/noetic/setup.bash && \
24 | pushd $ROS_WS_PATH && catkin_make && source devel/setup.bash && popd
25 |
26 | # Add BenchBot's common ROS packages
27 | ARG BENCHBOT_MSGS_GIT
28 | ARG BENCHBOT_MSGS_HASH
29 | ENV BENCHBOT_MSGS_PATH="$BENCHBOT_DIR/benchbot_msgs"
30 | RUN git clone $BENCHBOT_MSGS_GIT $BENCHBOT_MSGS_PATH && \
31 | pushd $BENCHBOT_MSGS_PATH && git checkout $BENCHBOT_MSGS_HASH && \
32 | pip install -r requirements.txt && pushd $ROS_WS_PATH && \
33 | ln -sv $BENCHBOT_MSGS_PATH src/ && source devel/setup.bash && catkin_make
34 |
35 | ARG BENCHBOT_CONTROLLER_GIT
36 | ARG BENCHBOT_CONTROLLER_HASH
37 | ENV BENCHBOT_CONTROLLER_PATH="$BENCHBOT_DIR/benchbot_robot_controller"
38 | RUN git clone $BENCHBOT_CONTROLLER_GIT $BENCHBOT_CONTROLLER_PATH && \
39 | pushd $BENCHBOT_CONTROLLER_PATH && git checkout $BENCHBOT_CONTROLLER_HASH && \
40 | pip install -r requirements.txt && \
41 | sed -i 's/np.float/float/g' /usr/local/lib/python3.8/dist-packages/transforms3d/quaternions.py && \
42 | pushd $ROS_WS_PATH && \
43 | pushd src && git clone https://github.com/eric-wieser/ros_numpy && \
44 | sed -i 's/np.float/float/g' /benchbot/ros_ws/src/ros_numpy/src/ros_numpy/point_cloud2.py && \
45 | popd && \
46 | ln -sv $BENCHBOT_CONTROLLER_PATH src/ && source devel/setup.bash && catkin_make
47 |
48 | # Create a place to mount our add-ons, & install manager dependencies
49 | ARG BENCHBOT_ADDONS_PATH
50 | ENV BENCHBOT_ADDONS_PATH="$BENCHBOT_ADDONS_PATH"
51 | RUN apt update && apt install -y python3 python3-pip && \
52 | python3 -m pip install --upgrade pip setuptools wheel pyyaml && \
53 | mkdir -p $BENCHBOT_ADDONS_PATH
54 |
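The two `sed -i 's/np.float/float/g'` calls above are compatibility patches: NumPy 1.24 removed the deprecated `np.float` alias, so the offending third-party sources are rewritten in place. A tiny sketch of what the substitution does (the input line is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the np.float compatibility patch applied in shared_tools.
# NumPy >= 1.24 removed the 'np.float' alias for the builtin 'float'.
# Note: the unescaped '.' in the pattern makes this a crude, broad match.
line='dtype = np.float(1.5)'
patched="$(echo "$line" | sed 's/np.float/float/g')"
echo "$patched"   # -> dtype = float(1.5)
```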
--------------------------------------------------------------------------------
/docker/sim_omni.Dockerfile:
--------------------------------------------------------------------------------
1 | # Extend NVIDIA's official Docker Image for Isaac Sim. Download instructions:
2 | # https://catalog.ngc.nvidia.com/orgs/nvidia/containers/isaac-sim
3 | FROM nvcr.io/nvidia/isaac-sim:2022.2.1
4 |
5 | # Fix to address key rotation breaking APT with the official Isaac Sim image
6 | # https://developer.nvidia.com/blog/updating-the-cuda-linux-gpg-repository-key/
7 | # RUN apt-key adv --fetch-keys \
8 | # https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
9 |
10 | # Fix scripts provided with image
11 | # RUN sed -i 's/$@/"\0"/' python.sh
12 | # RUN sed -i 's/sleep/# \0/' start_nucleus.sh
13 |
14 | ARG TZ
15 | RUN echo "$TZ" > /etc/timezone && ln -s /usr/share/zoneinfo/"$TZ" \
16 |     /etc/localtime && apt update && apt -y install tzdata
17 |
18 | # Overrides to make things play nicely with the BenchBot ecosystem
19 | SHELL ["/bin/bash", "-c"]
20 | ENTRYPOINT []
21 | ENV ACCEPT_EULA="Y"
22 | ENV NO_NUCLEUS="Y"
23 |
24 | # Install the BenchBot Simulator wrappers for 'sim_omni'
25 | RUN apt update && apt install -y git
26 | ARG BENCHBOT_SIMULATOR_GIT
27 | ARG BENCHBOT_SIMULATOR_HASH
28 | ENV BENCHBOT_SIMULATOR_PATH="/benchbot/benchbot_simulator"
29 | RUN mkdir -p $BENCHBOT_SIMULATOR_PATH && \
30 |     git clone $BENCHBOT_SIMULATOR_GIT $BENCHBOT_SIMULATOR_PATH && \
31 |     pushd $BENCHBOT_SIMULATOR_PATH && git checkout $BENCHBOT_SIMULATOR_HASH && \
32 |     /isaac-sim/kit/python/bin/python3 -m pip install -r ./.custom_deps
33 |
34 | # Expects to be built with shared_tools.Dockerfile added to the end
35 |
--------------------------------------------------------------------------------
/docker/submission.Dockerfile:
--------------------------------------------------------------------------------
1 | # Extend the BenchBot Core image
2 | FROM benchbot/core:base
3 |
4 | # Install some requirements for BenchBot API & visualisation tools
5 | # (BenchBot supports both python2 & python3, but python3 is preferred)
6 | RUN apt update && apt install -y libsm6 libxext6 libxrender-dev python3 \
7 | python3-pip python3-tk
8 |
9 | # Upgrade to latest pip (OpenCV fails to install because the pip installed by
10 | # Ubuntu is so old). See following issues for details:
11 | # https://github.com/skvark/opencv-python/issues/372
12 | # We upgrade pip here the lazy way which will give a warning (having a recent
13 | # version of pip without requiring Ubuntu to push it out... Ubuntu has v9 in
14 | # apt & pip is up to v20 atm... is apparently impossible without virtual
15 | # environments or manually deleting system files). See issue below for details:
16 | # https://github.com/pypa/pip/issues/5599
17 | # I'll move on rather than digressing into how stupid it is that that's the
18 | # state of things...
19 | #RUN pip3 install --upgrade pip
20 |
21 | # Install BenchBot API
22 | ARG BENCHBOT_API_GIT
23 | ARG BENCHBOT_API_HASH
24 | RUN git clone $BENCHBOT_API_GIT && pushd benchbot_api && \
25 | git checkout $BENCHBOT_API_HASH && pip3 install .
26 |
27 | # Making the working directory a submission folder
28 | WORKDIR /benchbot_submission
29 |
--------------------------------------------------------------------------------
/docs/acrv_logo_small.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qcr/benchbot/bf80acd5f3d3bd350e77dd244adcdb3a9d4f2861/docs/acrv_logo_small.png
--------------------------------------------------------------------------------
/docs/benchbot_web.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qcr/benchbot/bf80acd5f3d3bd350e77dd244adcdb3a9d4f2861/docs/benchbot_web.gif
--------------------------------------------------------------------------------
/docs/csirod61_logo_small.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qcr/benchbot/bf80acd5f3d3bd350e77dd244adcdb3a9d4f2861/docs/csirod61_logo_small.png
--------------------------------------------------------------------------------
/docs/nvidia_logo_small.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qcr/benchbot/bf80acd5f3d3bd350e77dd244adcdb3a9d4f2861/docs/nvidia_logo_small.png
--------------------------------------------------------------------------------
/docs/qcr_logo_small.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qcr/benchbot/bf80acd5f3d3bd350e77dd244adcdb3a9d4f2861/docs/qcr_logo_small.png
--------------------------------------------------------------------------------
/install:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | $(dirname $0)/bin/benchbot_install "$@"
4 |
--------------------------------------------------------------------------------