├── .gitignore
├── README.md
├── development
│   └── README.md
├── gazebo-nvidia
│   ├── Dockerfile
│   ├── gzserver_entrypoint.sh
│   └── run.bash
├── multicontainer
│   ├── README.md
│   └── ros-tutorials
│       ├── Dockerfile
│       └── docker-compose.yml
├── multihost
│   └── README.md
├── ros-nvidia
│   ├── Dockerfile
│   └── ros_entrypoint.sh
├── rviz
│   ├── Dockerfile
│   └── run.bash
└── turtlebot
    ├── Makefile
    ├── intel
    │   ├── Dockerfile
    │   ├── my_turtlebot_simulator.launch
    │   └── run.sh
    └── nvidia
        ├── Dockerfile
        ├── my_turtlebot_simulator.launch
        └── run.sh

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover

# Translations
*.mo
*.pot

# Django stuff:
*.log

# Sphinx documentation
docs/_build/

# PyBuilder
target/

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# ros_docker_demos
A collection of Docker demos for ROS

--------------------------------------------------------------------------------
/development/README.md:
--------------------------------------------------------------------------------
# Development Environment
In this tutorial, we'll go over how to use Docker as a tool to build and share development environments. We'll use the official ROS Docker Hub image to create a catkin workspace, build a demo project, and then commit and share our container.

## Workspace

To set up our catkin workspace, we'll want to start with a system already configured with a ROS installation.

> We'll just launch an interactive session using the official ROS image from Docker Hub. We can download the image from the web and launch the container in one command:

    docker run -it --name=catkin_ws ros:indigo bash

> This directs your terminal to a bash session inside the container, started via the default entrypoint, which sources ROS's setup.bash. To build ROS packages and not just run them, we'll need to install some basic compiler tools:

    apt-get update
    apt-get install -y build-essential

> We'll need to make and initialize our catkin workspace:

    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws/src
    catkin_init_workspace

> Then clone the tutorial package and build from source.
> We'll ignore the turtlesim for now, as it requires additional build dependencies such as Qt, not to mention other graphical runtime dependencies such as a running X server:

    git clone https://github.com/ros/ros_tutorials.git
    touch ros_tutorials/turtlesim/CATKIN_IGNORE

> Then just kick off the build process and watch it go:

    cd ~/catkin_ws/
    catkin_make

> Ok, let's add our catkin workspace setup to the entrypoint, just below the line for ROS, so we can easily launch our built binaries later:

    sed -i \
      '/source "\/opt\/ros\/$ROS_DISTRO\/setup.bash"/a source "\/root\/catkin_ws\/devel\/setup.bash"' \
      /ros_entrypoint.sh

## Committing
Now we can save the state of our container to an image for later use. This takes a diff of how the container has changed compared to the image it was derived from.

> We can exit and commit our stopped container to an image, with some notes on what we did:

    exit
    docker commit \
      --author="Dr. Foo Bar" \
      --message="My glorious build was successful!
    catkin workspace configured with built ros_tutorials" \
      catkin_ws \
      ros:indigo-foo-dev

> We can then see our image in the list stored by the docker daemon:

    docker images | grep indigo-foo-dev

> We can even introspect it further and examine the finer details about our image and its history:

    docker inspect ros:indigo-foo-dev

> Perhaps we should also check that our build is working before we try to share it. Go ahead and follow the multicontainer demo, but use the image we just created instead of the one built from the debian packages. Come back here when you are finished and know everything is working.

## Sharing

Now that we've built and tested our development image, let's try to share it with others. We'll upload our image to Docker Hub, then simply tell our colleagues where to find it.

> To upload our image, we'll need to tag it with our username for Docker Hub's registry and then push. Let's use `foobar` here, but replace this with your own:

    docker tag ros:indigo-foo-dev foobar/ros:indigo-foo-dev
    docker push foobar/ros:indigo-foo-dev

> This will prompt you for your Docker Hub credentials, and then upload your image to a tag named `indigo-foo-dev` under your own `ros` repository.

> Once we've uploaded our image, we can point people to the registry so they can pull, run, and alter the images themselves, and share fixes or improvements with you. Note however that images are layered diffs at the binary level, and can grow in size on disk if many changes are committed and stacked over time.

> So although images are handy for sharing runtime errors and patched fixes, and the diffs you share are small, they're not something you should hack on indefinitely. A Dockerfile defines how to cleanly build an image of your application, and should be where you record your improvements once you've nailed them down.

> In this case, we would be able to link to our repo using the URL:

    https://hub.docker.com/u/foobar/ros

> or pull the latest image of the tag and start a container directly, using a docker command similar to the one we started with:

    docker pull foobar/ros:indigo-foo-dev
    docker run -it --name=catkin_ws foobar/ros:indigo-foo-dev bash

> And so the development cycle continues!
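
> As a follow-on sketch (illustrative only; this file is not part of the repo), the manual steps above could be captured in a Dockerfile like the following. The base tag `ros:indigo`, the workspace path, and the sed line all mirror this tutorial:

    FROM ros:indigo

    # compiler tools and git, needed to fetch and build the tutorials
    RUN apt-get update && apt-get install -y \
        build-essential \
        git \
        && rm -rf /var/lib/apt/lists/*

    # create and initialize the catkin workspace
    RUN mkdir -p /root/catkin_ws/src
    WORKDIR /root/catkin_ws/src
    RUN /bin/bash -c 'source /opt/ros/$ROS_DISTRO/setup.bash && catkin_init_workspace'

    # clone the tutorials, ignoring turtlesim as above
    RUN git clone https://github.com/ros/ros_tutorials.git \
        && touch ros_tutorials/turtlesim/CATKIN_IGNORE

    # build the workspace with the ROS environment sourced
    WORKDIR /root/catkin_ws
    RUN /bin/bash -c 'source /opt/ros/$ROS_DISTRO/setup.bash && catkin_make'

    # source the workspace from the entrypoint, as we did interactively above
    RUN sed -i \
        '/source "\/opt\/ros\/$ROS_DISTRO\/setup.bash"/a source "\/root\/catkin_ws\/devel\/setup.bash"' \
        /ros_entrypoint.sh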

--------------------------------------------------------------------------------
/gazebo-nvidia/Dockerfile:
--------------------------------------------------------------------------------
FROM nvidia/cuda:runtime

# setup keys
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys D2486D2DD83DB69272AFE98867170598AF249743

# setup sources.list
RUN echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-latest.list

# install gazebo packages
RUN apt-get update && apt-get install -q -y \
    gazebo7 \
    && rm -rf /var/lib/apt/lists/*

# expose the gzserver master port
EXPOSE 11345

# setup entrypoint
COPY ./gzserver_entrypoint.sh /

ENTRYPOINT ["/gzserver_entrypoint.sh"]
CMD ["gzserver"]

--------------------------------------------------------------------------------
/gazebo-nvidia/gzserver_entrypoint.sh:
--------------------------------------------------------------------------------
#!/bin/bash
set -e

# setup gazebo environment
source "/usr/share/gazebo/setup.sh"
exec "$@"

--------------------------------------------------------------------------------
/gazebo-nvidia/run.bash:
--------------------------------------------------------------------------------
#!/bin/bash

xhost +local:root

# select GPU 0 (the NV_GPU variable is read by nvidia-docker)
export NV_GPU=0

nvidia-docker \
  run \
  -it \
  --env="DISPLAY" \
  --env="QT_X11_NO_MITSHM=1" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  gazebo:gazebo-nvidia gazebo

xhost -local:root

--------------------------------------------------------------------------------
/multicontainer/README.md:
--------------------------------------------------------------------------------
# Multi-Container ROS nodes
In this tutorial we'll use Docker to deploy ROS nodes into multiple separate containers on a single host and connect them together over a virtual network.

## Dependencies
Here is a breakdown of the dependencies across our demo application:

#### Local
* [Docker](https://www.docker.com/)
* [Docker Compose](https://docs.docker.com/compose/)

#### Image
* [ROS](https://registry.hub.docker.com/_/ros/)

## Setup

### Local
For our local setup, we'll need to install Docker along with the other Docker tools so that we can create and run multiple containers to deploy our setup. Follow the links above for installation guides for your platform.

### Image
For our image setup, we'll need to build an image from Docker Hub's official ROS repo that includes the necessary demo packages we'll be using.
> From within this demo directory, we can build our needed demo images:

    docker build --tag ros:ros-tutorials ros-tutorials/.

## Deploy

### Creating a network

If we want all our ROS nodes to easily talk to each other, we can use a virtual network to connect the separate containers. In this short example, we'll create a virtual network, spin up a new container running `roscore` advertised as the `master` service on the new network, then spawn a message publisher and subscriber process as services on the same network.
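
> Note: user-defined networks and the `docker network` subcommand ship with Docker Engine 1.9 and newer, so it's worth confirming your daemon version before proceeding:

```console
$ docker version
```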

### Build image

> Build a ROS image that includes the ROS tutorials using this `Dockerfile`:

```dockerfile
FROM ros:indigo-ros-base
# install ros tutorials packages
RUN apt-get update && apt-get install -y \
    ros-indigo-ros-tutorials \
    ros-indigo-common-tutorials \
    && rm -rf /var/lib/apt/lists/*
```

> Then to build the image from within the same directory:

```console
$ docker build --tag ros:ros-tutorials .
```

#### Create network

> To create a new network `foo`, we use the network command:

    docker network create foo

> Now that we have a network, we can create services. Services advertise their location on the network, making it easy to resolve the address of a specific service's container. We'll use this to make sure our ROS nodes can find and connect to our ROS `master`.

#### Run services

> To create a container for the ROS master and advertise its service:

```console
$ docker run -it --rm \
    --net foo \
    --name master \
    ros:ros-tutorials \
    roscore
```

> Now you can see that master is running and is ready to manage our other ROS nodes. To add our `talker` node, we'll need to point the relevant environment variables to the master service:

```console
$ docker run -it --rm \
    --net foo \
    --name talker \
    --env ROS_HOSTNAME=talker \
    --env ROS_MASTER_URI=http://master:11311 \
    ros:ros-tutorials \
    rosrun roscpp_tutorials talker
```

> Then in another terminal, run the `listener` node similarly:

```console
$ docker run -it --rm \
    --net foo \
    --name listener \
    --env ROS_HOSTNAME=listener \
    --env ROS_MASTER_URI=http://master:11311 \
    ros:ros-tutorials \
    rosrun roscpp_tutorials listener
```

> Alright! You should see that `listener` is now echoing each message the `talker` is broadcasting. You can then list the services and see something like this:

```console
$ docker service ls
SERVICE ID          NAME                NETWORK             CONTAINER
67ce73355e67        listener            foo                 a62019123321
917ee622d295        master              foo                 f6ab9155fdbe
7f5a4748fb8d        talker              foo                 e0da2ee7570a
```

> And for the containers:

```console
$ docker ps
CONTAINER ID        IMAGE               COMMAND                 CREATED              STATUS              PORTS               NAMES
a62019123321        ros:ros-tutorials   "/ros_entrypoint.sh"    About a minute ago   Up About a minute   11311/tcp           listener
e0da2ee7570a        ros:ros-tutorials   "/ros_entrypoint.sh"    About a minute ago   Up About a minute   11311/tcp           talker
f6ab9155fdbe        ros:ros-tutorials   "/ros_entrypoint.sh"    About a minute ago   Up About a minute   11311/tcp           master
```

#### Introspection

> Ok, now that we see the two nodes are communicating, let's get inside one of the containers and do some introspection into what exactly the topics are:

```console
$ docker exec -it master bash
$ source /ros_entrypoint.sh
```

> If we then use `rostopic` to list published message topics, we should see something like this:

```console
$ rostopic list
/chatter
/rosout
/rosout_agg
```

#### Tear down

> To tear down the structure we've made, we just need to stop the containers and the services.
> We can stop and remove the containers using `Ctrl^C` in the terminals where we launched them, or by using the stop and remove commands with the names we gave them:

```console
$ docker stop master talker listener
$ docker rm master talker listener
```

### Compose

Now that you have an appreciation for bootstrapping a distributed ROS example manually, let's try to automate it using [`docker-compose`](https://docs.docker.com/compose/).

> Start by making a folder named `rostutorials` and moving the Dockerfile we used earlier into this directory. Then create a yaml file named `docker-compose.yml` in the same directory and paste the following inside:

```yaml
master:
  build: .
  container_name: master
  command:
    - roscore

talker:
  build: .
  container_name: talker
  environment:
    - "ROS_HOSTNAME=talker"
    - "ROS_MASTER_URI=http://master:11311"
  command: rosrun roscpp_tutorials talker

listener:
  build: .
  container_name: listener
  environment:
    - "ROS_HOSTNAME=listener"
    - "ROS_MASTER_URI=http://master:11311"
  command: rosrun roscpp_tutorials listener
```

> Now from inside the same folder, use docker-compose to launch our ROS nodes and specify that they coexist on their own network:

```console
$ docker-compose --x-networking up -d
```

> Notice that a new network named `rostutorials` has now been created. We can monitor the logged output of each service, such as the listener node, like so:

```console
$ docker-compose logs listener
```

> Finally, we can stop and remove all the relevant containers using docker-compose from the same directory:

```console
$ docker-compose stop
$ docker-compose rm
```

> Note: the auto-generated network, `rostutorials`, will persist over the life of the docker engine or until you explicitly remove it using [`docker network rm`](https://docs.docker.com/engine/reference/commandline/network_rm/).

--------------------------------------------------------------------------------
/multicontainer/ros-tutorials/Dockerfile:
--------------------------------------------------------------------------------
# This is a Dockerfile for ros:ros-tutorials
FROM ros:indigo-ros-base
MAINTAINER ruffsl roxfoxpox@gmail.com

# install ros tutorials packages
RUN apt-get update && apt-get install -y \
    ros-indigo-ros-tutorials \
    ros-indigo-common-tutorials \
    && rm -rf /var/lib/apt/lists/*

--------------------------------------------------------------------------------
/multicontainer/ros-tutorials/docker-compose.yml:
--------------------------------------------------------------------------------
master:
  build: .
  container_name: master
  command:
    - roscore

talker:
  build: .
  container_name: talker
  environment:
    - "ROS_HOSTNAME=talker"
    - "ROS_MASTER_URI=http://master:11311"
  command: rosrun roscpp_tutorials talker

listener:
  build: .
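  # note: with `--x-networking`, compose service names (e.g. `master`) resolve
  # as hostnames, which is what ROS_MASTER_URI and ROS_HOSTNAME below rely on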
  container_name: listener
  environment:
    - "ROS_HOSTNAME=listener"
    - "ROS_MASTER_URI=http://master:11311"
  command: rosrun roscpp_tutorials listener

--------------------------------------------------------------------------------
/multihost/README.md:
--------------------------------------------------------------------------------
# Multi-Host ROS nodes
In this tutorial we'll use Docker to deploy ROS nodes onto a swarm of multiple separate hosts and connect them together over a virtual network.

## Dependencies
Here is a breakdown of the dependencies across our demo application:

#### Local
* [Docker](https://www.docker.com/)
* [Docker Compose](https://docs.docker.com/compose/)
* [Docker Machine](https://docs.docker.com/machine/)
* [Docker Swarm](https://docs.docker.com/swarm/)
* [VirtualBox](https://www.virtualbox.org/)

## Setup

### Local
For our local setup, we'll need to install Docker along with the other Docker tools and VirtualBox so that we can create and run multiple docker engines to deploy our multi-host setup. If you wish to run the demo on truly isolated, remotely accessible machines, you can change the driver from virtualbox to the one for your preferred cloud provider, such as AWS or Digital Ocean. The general instructions for this demo still hold; it is only for convenience in creating a harmless learning environment that we use the VirtualBox driver to create local VMs instead.

## Deployment
To get things started, we'll use docker-machine to create our AWS GPU instance with our desired configuration, designating it as our swarm master. Then we'll launch gzserver and gzweb on the remote instance and attach a new network to the running container. Once the server is running, we should be able to point a web browser at the remote instance's external address and see our simulation's interface. Finally, we'll add our local docker engine to the swarm cluster and start gzclient in a locally running container attached to the same network, allowing gzclient to connect to gzserver.

### Making a remote machine
We'll need to use our AWS credentials, so add them to your shell session as environment variables:
```shell
export AWS_ACCESS_KEY_ID=####################
export AWS_SECRET_ACCESS_KEY=########################################
```

> Now create our AWS GPU instance and swarm master, specifying:
* the region where the VM should be started
  * `us-west-2`
* the zone within that region (region specific)
  * `b`
* the VM image to use (region specific); use an AWS image that enables graphical hardware acceleration
  * `ami-6dd8d85d`
* the hardware to use (region specific); use the GPU instance type for rendering images
  * `g2.2xlarge`
* the security group, the same as the docker-machine default but with http=80 + gzweb=7681 added inbound; the default being
ssh=22 + dockerPort=2376 + swarmPort=3376 inbound
  * `sg-3515d051`
* the Virtual Private Cloud network corresponding to the security group used
  * `vpc-e2eb6787`

> Use docker-machine to create the AWS instance as our swarm master:

```shell
docker-machine -D create \
  --driver amazonec2 \
  --amazonec2-vpc-id vpc-722ea217 \
  --amazonec2-region us-west-2 \
  --amazonec2-zone b \
  --amazonec2-ami ami-77dbdb47 \
  --amazonec2-instance-type g2.2xlarge \
  --swarm \
  --swarm-master \
  swarm-master
```

### Starting gzserver and gzweb
Now we'll point the docker client at the swarm master node:
```shell
eval "$(docker-machine env swarm-master)"
```
And then launch the gzserver and gzweb services using the compose file from inside this demo directory:
```shell
docker-compose -f gzweb5Nvidia.yml up
```

### Loading gzweb
**TODO** For some reason, docker-machine always wants to create a security group rather than use the one given to it. So just let it make its own group named `Docker+Machine`, then edit that security group from the AWS console to allow the http=80 + gzweb=7681 inbound rules.

Then point your browser to the AWS external address.

### Creating a network

### Connecting our local machine

### Starting gzclient


## Tear down




> Create a `local` machine; this could represent our local workstation:

    docker-machine create \
      -d virtualbox \
      --virtualbox-boot2docker-url=http://sirile.github.io/files/boot2docker-1.8.iso \
      local

> Now let's point our docker commands at this `local` docker engine and tell it to create a swarm:

    eval "$(docker-machine env local)"
    docker run swarm create

> The last thing this prints is a swarm token. Save it and use it to create our `swarm-master`:

    export SID=
    docker-machine create \
      -d virtualbox \
      --virtualbox-boot2docker-url=http://sirile.github.io/files/boot2docker-1.8.iso \
      --swarm \
      --swarm-master \
      --swarm-discovery token://$SID \
      swarm-master

> Now we'll add two nodes to our swarm with the same token:

    docker-machine create \
      -d virtualbox \
      --virtualbox-boot2docker-url=http://sirile.github.io/files/boot2docker-1.8.iso \
      --swarm \
      --swarm-discovery token://$SID \
      swarm-node-00

    docker-machine create \
      -d virtualbox \
      --virtualbox-boot2docker-url=http://sirile.github.io/files/boot2docker-1.8.iso \
      --swarm \
      --swarm-discovery token://$SID \
      swarm-node-01

> Ok, we finally have our massive swarm of 3 nodes and one local machine.
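
> As a quick sanity check (an extra step, not strictly required), `docker-machine ls` should now list all four machines and mark which one is the swarm master:

    docker-machine ls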
> Now let's direct our attention to the swarm master and see what's going on:

    eval "$(docker-machine env --swarm swarm-master)"
    docker info


    docker network create foo

--------------------------------------------------------------------------------
/ros-nvidia/Dockerfile:
--------------------------------------------------------------------------------
FROM nvidia/cuda:runtime

# setup environment
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8

# setup keys
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116

# setup sources.list
RUN echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list

# install bootstrap tools
ENV ROS_DISTRO indigo
RUN apt-get update && apt-get install --no-install-recommends -y \
    python-rosdep \
    python-rosinstall \
    python-vcstools \
    && rm -rf /var/lib/apt/lists/*

# bootstrap rosdep
RUN rosdep init \
    && rosdep update

# install ros packages
RUN apt-get update && apt-get install -y \
    ros-indigo-ros-core \
    && rm -rf /var/lib/apt/lists/*

# setup entrypoint
COPY ./ros_entrypoint.sh /

ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]

--------------------------------------------------------------------------------
/ros-nvidia/ros_entrypoint.sh:
--------------------------------------------------------------------------------
#!/bin/bash
set -e

# setup ros environment
source "/opt/ros/$ROS_DISTRO/setup.bash"
exec "$@"

--------------------------------------------------------------------------------
/rviz/Dockerfile:
--------------------------------------------------------------------------------
# This is a Dockerfile for ros:rviz
FROM ros:ros-nvidia
MAINTAINER ruffsl roxfoxpox@gmail.com

# install rviz
RUN apt-get update && apt-get install -y \
    ros-indigo-rviz \
    && rm -rf /var/lib/apt/lists/*

--------------------------------------------------------------------------------
/rviz/run.bash:
--------------------------------------------------------------------------------
#!/bin/bash

xhost +local:root

# select GPU 0 (the NV_GPU variable is read by nvidia-docker)
export NV_GPU=0

nvidia-docker \
  run \
  -it \
  --env="DISPLAY" \
  --env="QT_X11_NO_MITSHM=1" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  ros:rviz bash -c 'source /ros_entrypoint.sh && roscore & sleep 1 && rosrun rviz rviz'

xhost -local:root

--------------------------------------------------------------------------------
/turtlebot/Makefile:
--------------------------------------------------------------------------------
all: help

help:
	@echo ""
	@echo "-- Help Menu"
	@echo ""
	@echo "   1. make build_nvidia  - build nvidia images"
	@echo "   2. make build_intel   - build intel images"
#	@echo "   3. make pull          - pull all images"
#	@echo "   4. make clean         - remove all images"
	@echo ""

build_nvidia:
	@docker build --tag=ros:turtlebot-nvidia nvidia/.

build_intel:
	@docker build --tag=ros:turtlebot-intel intel/.
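
# note: the run scripts under intel/ and nvidia/ expect these exact image tags
# (ros:turtlebot-intel, ros:turtlebot-nvidia), so build before running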

# pull:
#	@docker pull ros

# clean:
#	@docker rmi -f ros

--------------------------------------------------------------------------------
/turtlebot/intel/Dockerfile:
--------------------------------------------------------------------------------
# This is a Dockerfile for turtlebot_simulator
FROM osrf/ros:kinetic-desktop-full

# install turtlebot simulator
RUN apt-get update && apt-get install -y \
    ros-${ROS_DISTRO}-turtlebot* \
    && rm -rf /var/lib/apt/lists/*

# Getting models from http://gazebosim.org/models/. This may take a few seconds.
RUN gzserver --verbose --iters 1 /opt/ros/${ROS_DISTRO}/share/turtlebot_gazebo/worlds/playground.world

# install custom launchfile
ADD my_turtlebot_simulator.launch /

# Add Intel display support by installing Mesa libraries
RUN apt-get update && apt-get install -y \
    libgl1-mesa-glx \
    libgl1-mesa-dri \
    && rm -rf /var/lib/apt/lists/*

--------------------------------------------------------------------------------
/turtlebot/intel/my_turtlebot_simulator.launch:
--------------------------------------------------------------------------------
<!-- (launch file contents not captured in this dump) -->

--------------------------------------------------------------------------------
/turtlebot/intel/run.sh:
--------------------------------------------------------------------------------
xhost +local:root

# Run container with the necessary Xorg and DRI mounts
docker run -it \
  --env="DISPLAY" \
  --env="QT_X11_NO_MITSHM=1" \
  --device=/dev/dri:/dev/dri \
  --volume=/tmp/.X11-unix:/tmp/.X11-unix \
  ros:turtlebot-intel \
  roslaunch my_turtlebot_simulator.launch

xhost -local:root

--------------------------------------------------------------------------------
/turtlebot/nvidia/Dockerfile:
--------------------------------------------------------------------------------
# This is a Dockerfile for turtlebot_simulator
FROM osrf/ros:kinetic-desktop-full

# install turtlebot simulator
RUN apt-get update && apt-get install -y \
    ros-${ROS_DISTRO}-turtlebot* \
    && rm -rf /var/lib/apt/lists/*

# Getting models from http://gazebosim.org/models/. This may take a few seconds.
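# (note) running the world once at build time bakes the downloaded models into
# an image layer, so the simulator does not stall fetching them at runtime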
RUN gzserver --verbose --iters 1 /opt/ros/${ROS_DISTRO}/share/turtlebot_gazebo/worlds/playground.world

# install custom launchfile
ADD my_turtlebot_simulator.launch /

# Add Nvidia display support by including the nvidia-docker hooks
LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}

--------------------------------------------------------------------------------
/turtlebot/nvidia/my_turtlebot_simulator.launch:
--------------------------------------------------------------------------------
<!-- (launch file contents not captured in this dump) -->

--------------------------------------------------------------------------------
/turtlebot/nvidia/run.sh:
--------------------------------------------------------------------------------
xhost +local:root

# Run container with the necessary Xorg and GPU mounts
nvidia-docker run -it \
  --env="DISPLAY" \
  --env="QT_X11_NO_MITSHM=1" \
  --volume=/tmp/.X11-unix:/tmp/.X11-unix \
  ros:turtlebot-nvidia \
  roslaunch my_turtlebot_simulator.launch

xhost -local:root
--------------------------------------------------------------------------------
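
For reference, a typical end-to-end invocation of the turtlebot demo on an NVIDIA host (assuming nvidia-docker is installed, per the Makefile and run scripts above) might look like:

```console
$ cd turtlebot
$ make build_nvidia
$ cd nvidia
$ bash run.sh
```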