├── README.md
├── docker101-linux-dceu-1017.pdf
├── linux_tweet_app
│   ├── Dockerfile
│   ├── index-new.html
│   ├── index-original-html
│   ├── index.html
│   └── linux.png
├── part-1
│   └── README.md
├── part-2
│   └── README.md
└── part-3
    └── README.md
/README.md: -------------------------------------------------------------------------------- 1 | # Docker 101 self-paced labs 2 | 3 | There are three parts to this lab 4 | 5 | * [Part 1](https://github.com/mikegcoleman/docker101-linux/tree/master/part-1) deals with basic container management 6 | * [Part 2](https://github.com/mikegcoleman/docker101-linux/tree/master/part-2) covers the Docker file system and Docker volumes (including persistent data) 7 | * [Part 3](https://github.com/mikegcoleman/docker101-linux/tree/master/part-3) talks about orchestration with Swarm and how Docker handles networking 8 | 9 | There is also a PDF of the slides I use when I present this as a workshop. 10 | -------------------------------------------------------------------------------- /docker101-linux-dceu-1017.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mikegcoleman/docker101-linux/1ba9b3818d2e34469157aa941047e133c9f6ee27/docker101-linux-dceu-1017.pdf -------------------------------------------------------------------------------- /linux_tweet_app/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:latest 2 | 3 | COPY index.html /usr/share/nginx/html 4 | COPY linux.png /usr/share/nginx/html 5 | 6 | EXPOSE 80 443 7 | 8 | CMD ["nginx", "-g", "daemon off;"] 9 | -------------------------------------------------------------------------------- /linux_tweet_app/index-new.html: -------------------------------------------------------------------------------- 1 | 2 | 26 | 27 | 28 | 29 | 30 | 31 | 32 |
33 |

Docker 101!

34 | 35 |

36 | 37 | You've successfully deployed the Linux tweet app! Why not let the world know? 38 | 39 |

40 | 41 | 42 |

43 | 44 | 45 | 46 | -------------------------------------------------------------------------------- /linux_tweet_app/index-original-html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 26 | 27 | 28 | 29 | 30 | 31 | 32 |
33 |

Docker 101!

34 | 35 |

36 | 37 | You've successfully deployed the Linux tweet app! Why not let the world know? 38 | 39 |

40 | 41 | 42 |

43 | 44 | 45 | 46 | -------------------------------------------------------------------------------- /linux_tweet_app/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 26 | 27 | 28 | 29 | 30 | 31 | 32 |
33 |

Docker 101!

34 | 35 |

36 | 37 | You've successfully deployed the Linux tweet app! Why not let the world know? 38 | 39 |

40 | 41 | 42 |

43 | 44 | 45 | 46 | -------------------------------------------------------------------------------- /linux_tweet_app/linux.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mikegcoleman/docker101-linux/1ba9b3818d2e34469157aa941047e133c9f6ee27/linux_tweet_app/linux.png -------------------------------------------------------------------------------- /part-1/README.md: -------------------------------------------------------------------------------- 1 | # Lab 1: Docker 101 (Linux): Basic Docker Commands, Building Docker Images, and Accessing Local Files 2 | 3 | In this lab we'll take a look at some basic Docker commands and a simple build-ship-run workflow. We'll start by running some simple Docker containers. Then we'll use a *Dockerfile* to build a custom app. Finally, we'll look at how to use bind mounts to modify a running container as you might if you were actively developing using Docker. 4 | 5 | > **Difficulty**: Beginner (assumes no familiarity with Docker) 6 | 7 | > **Time**: Approximately 30 minutes 8 | 9 | > **Tasks**: 10 | > 11 | 12 | > * [Task 0: Prerequisites](#task-0-prerequisites) 13 | > * [Task 1: Run some simple Docker containers](#task-1-run-some-simple-docker-containers) 14 | > * [Task 2: Package and run a custom app using Docker](#task-2-package-and-run-a-custom-app-using-docker) 15 | > * [Task 3: Modify a running website](#task-3-modify-a-running-website) 16 | 17 | ## Task 0: Prerequisites 18 | 19 | Before we start, you'll need to gain access to your Linux VM, clone a GitHub repo, and make sure you have a DockerID. 20 | 21 | ### Access your Linux VM 22 | 1. Visit [Play With Docker](https://hybrid.play-with-docker.com) 23 | 2. Click `Start Session` 24 | 3. On the left click `+ Add New Instance` 25 | 26 | All of the exercises in this lab will be performed in the console window on the right of the Play With Docker screen. 27 | 28 | ### Clone the Lab GitHub Repo 29 | 30 | Use the following command to clone the lab repo from GitHub.
31 | 32 | ``` 33 | $ git clone https://github.com/dockersamples/linux_tweet_app 34 | Cloning into 'linux_tweet_app'... 35 | remote: Counting objects: 14, done. 36 | remote: Compressing objects: 100% (9/9), done. 37 | remote: Total 14 (delta 5), reused 14 (delta 5), pack-reused 0 38 | Unpacking objects: 100% (14/14), done. 39 | ``` 40 | 41 | ### Make sure you have a DockerID 42 | 43 | If you do not have a DockerID (a free login used to access Docker Cloud, Docker Store, and Docker Hub), please visit [Docker Cloud](https://cloud.docker.com) to register for one. 44 | 45 | ## Task 1: Run some simple Docker containers 46 | 47 | There are different ways to use containers: 48 | 49 | 1. **To run a single task:** This could be a shell script or a custom app 50 | 2. **Interactively:** This connects you to the container similar to the way you SSH into a remote server 51 | 3. **In the background:** For long-running services like websites and databases 52 | 53 | In this section you'll try each of those options and see how Docker manages the workload. 54 | 55 | ### Run a single-task Alpine Linux container 56 | 57 | In this step we're going to start a new container and tell it to run the `hostname` command. The container will start, execute the `hostname` command, then exit. 58 | 59 | 1. Run the following command in your Linux console: 60 | 61 | ``` 62 | $ docker container run alpine hostname 63 | Unable to find image 'alpine:latest' locally 64 | latest: Pulling from library/alpine 65 | 88286f41530e: Pull complete 66 | Digest: sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d 67 | Status: Downloaded newer image for alpine:latest 68 | 888e89a3b36b 69 | ``` 70 | 71 | The output above shows that the `alpine:latest` image could not be found locally. When this happens, Docker automatically *pulls* it from Docker Hub. 72 | 73 | After the image is pulled, the container's hostname is displayed (`888e89a3b36b` in the example above). 74 | 75 | 2. 
Docker keeps a container running as long as the process it started inside the container is still running. In this case, the `hostname` process completes when the output is written, so the container exits. The Docker platform doesn't delete resources by default, so the container still exists in the `Exited` state. 76 | 77 | List all containers: 78 | 79 | ``` 80 | $ docker container ls --all 81 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 82 | 888e89a3b36b alpine "hostname" 50 seconds ago Exited (0) 49 seconds ago awesome_elion 83 | ``` 84 | 85 | Notice that your Alpine Linux container is in the `Exited` state. 86 | 87 | > **Note:** The container ID *is* the hostname that the container displayed. In the example above it's `888e89a3b36b`. 88 | 89 | Containers which do one task and then exit can be very useful. You could build a Docker image that executes a script to configure something. Anyone can execute that task just by running the container - they don't need the actual scripts or configuration information. 90 | 91 | 92 | ### Run an interactive Ubuntu container 93 | 94 | You can run a container based on a different version of Linux than is running on your Docker host. 95 | 96 | In the next example, we are going to run an Ubuntu Linux container on top of an Alpine Linux Docker host (Play With Docker uses Alpine Linux for its nodes). 97 | 98 | 1. Run a Docker container and access its shell. 99 | 100 | In this case we're giving the `docker container run` command three parameters: 101 | 102 | * `--interactive` says you want an interactive session 103 | * `--tty` allocates a pseudo-TTY 104 | * `--rm` tells Docker to remove the container when its main process exits 105 | 106 | The first two parameters allow you to interact with the Docker container. 107 | 108 | We're also telling the container to run `bash` as its main process (PID 1).
109 | 110 | ``` 111 | $ docker container run --interactive --tty --rm ubuntu bash 112 | ``` 113 | 114 | When the container starts you'll drop into the bash shell with the default prompt `root@<container_id>:/#`. Docker has attached to the shell in the container, relaying input and output between your local session and the shell session in the container. 115 | 116 | 2. Run some commands in the container: 117 | 118 | - `ls /` - lists the contents of the root directory 119 | - `ps aux` - shows all running processes in the container 120 | - `cat /etc/issue` - shows which Linux distro the container is running, in this case Ubuntu 16.04 LTS 121 | 122 | 3. Type `exit` to leave the shell session. This will terminate the `bash` process, causing your container to exit. 123 | 124 | > **Note:** As we used the `--rm` flag when we started the container, Docker removed that container when it stopped. This means if you run another `docker container ls --all` you won't see the Ubuntu container. 125 | 126 | 4. For fun, let's check the version of our host VM: 127 | 128 | ``` 129 | $ cat /etc/issue 130 | 131 | Welcome to Alpine Linux 3.6 132 | Kernel \r on an \m (\l) 133 | ``` 134 | 135 | Notice that our host VM is Alpine, yet we were able to run an Ubuntu container. As previously mentioned, the distribution of Linux in the container does not need to match the distribution of Linux running on the Docker host. 136 | 137 | However, Linux containers require the Docker host to be running a Linux kernel. For example, Linux containers cannot run directly on Windows Docker hosts. The same is true of Windows containers - they need to run on a Docker host with a Windows kernel. 138 | 139 | Interactive containers are useful when you are putting together your own image. You can run a container and verify all the steps you need to deploy your app, and capture them in a Dockerfile.
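For example, if you had verified interactively that your app needs the `curl` package and a startup script, you could capture those steps in a Dockerfile like this (a sketch only; the `app.sh` script and the package choice are hypothetical):

```
FROM ubuntu:latest

# Install the package(s) you verified interactively that the app needs
RUN apt-get update && apt-get install -y curl

# Copy in the (hypothetical) startup script and make it the container's main process
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
```

Each instruction becomes a layer in the image, and the build can be repeated by anyone who has the Dockerfile.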
140 | 141 | > **Note:** You *can* [commit](https://docs.docker.com/engine/reference/commandline/commit/) a container to make an image from it - but you should avoid that wherever possible. It's much better to use a repeatable [Dockerfile](https://docs.docker.com/engine/reference/builder/) to build your image. You'll see that shortly. 142 | 143 | 144 | ### Run a background MySQL container 145 | 146 | Background containers are how you'll run most applications. Here's a simple example using MySQL. 147 | 148 | 1. Let's run MySQL in a background container using the `--detach` flag. We'll also use the `--name` flag to name the running container `mydb`. 149 | 150 | We'll also use an environment variable (`-e`) to set the root password (NOTE: This should never be done in production): 151 | 152 | ``` 153 | $ docker container run \ 154 | --detach \ 155 | --name mydb \ 156 | -e MYSQL_ROOT_PASSWORD=my-secret-pw \ 157 | mysql:latest 158 | 159 | Unable to find image 'mysql:latest' locally latest: Pulling from library/mysql 160 | aa18ad1a0d33: Pull complete 161 | fdb8d83dece3: Pull complete 162 | 163 | 315e21657aa4: Pull complete 164 | Digest: sha256:0dc3dacb751ef46a6647234abdec2d47400f0dfbe77ab490b02bffdae57846ed 165 | Status: Downloaded newer image for mysql:latest 166 | 41d6157c9f7d1529a6c922acb8167ca66f167119df0fe3d86964db6c0d7ba4e0 167 | ``` 168 | 169 | Once again, the image we requested was not available locally, so Docker pulled it from Docker Hub. 170 | 171 | As long as the MySQL process is running, Docker will keep the container running in the background. 172 | 173 | 2. List running containers: 174 | 175 | ``` 176 | $ docker container ls 177 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 178 | 3f4e8da0caf7 mysql:latest "docker-entrypoint..." 52 seconds ago Up 51 seconds 3306/tcp mydb 179 | ``` 180 | 181 | Notice your container is running. 182 | 183 | 3. 
You can check what's happening in your containers by using a couple of built-in Docker commands: `docker container logs` and `docker container top`. 184 | 185 | ``` 186 | $ docker container logs mydb 187 | 188 | 2017-09-29T16:02:58.605004Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check. 189 | 2017-09-29T16:02:58.605026Z 0 [Note] Beginning of list of non-natively partitioned tables 190 | 2017-09-29T16:02:58.616575Z 0 [Note] End of list of non-natively partitioned tables 191 | ``` 192 | 193 | This shows the logs from your Docker container. 194 | 195 | Let's look at the running processes inside the container. 196 | 197 | ``` 198 | $ docker container top mydb 199 | PID USER TIME COMMAND 200 | 2876 999 0:00 mysqld 201 | ``` 202 | 203 | You should see that the MySQL daemon (`mysqld`) is running. Note that the PID shown here is the PID for this process on your Docker host. To see the same `mysqld` process running as the main process of the container (PID 1) try: 204 | 205 | ``` 206 | $ docker container exec mydb ps -ef 207 | UID PID PPID C STIME TTY TIME CMD 208 | mysql 1 0 0 21:00 ? 00:00:01 mysqld 209 | root 207 0 0 21:39 ? 00:00:00 ps -ef 210 | ``` 211 | 212 | Although MySQL is running, it is isolated within the container because no network ports have been published to the host. Network traffic cannot reach containers from the host unless ports are explicitly published. 213 | 214 | 4. List the MySQL version using `docker container exec`. 215 | 216 | `docker container exec` allows you to run a command inside a container. In this example, we'll use it to run `mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version` inside our MySQL container.
217 | 218 | ``` 219 | $ docker container exec -it mydb \ 220 | mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version 221 | 222 | mysql: [Warning] Using a password on the command line interface can be insecure. 223 | mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper 224 | ``` 225 | 226 | The output above shows the MySQL version number, as well as a handy warning. 227 | 228 | 5. You can also use `docker container exec` to connect to a new shell process inside an already-running container. Executing the command below will give you an interactive shell (`sh`) in your MySQL container. 229 | 230 | ``` 231 | $ docker exec -it mydb sh 232 | # 233 | ``` 234 | 235 | Notice that your shell prompt has changed. This is because your shell is now connected to the `sh` process running inside of your container. 236 | 237 | 6. Let's check the version number by running the same command we passed to the container in the previous step. 238 | 239 | ``` 240 | # mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version 241 | 242 | mysql: [Warning] Using a password on the command line interface can be insecure. 243 | mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper 244 | ``` 245 | 246 | Notice the output is the same as before. 247 | 248 | 7. Type `exit` to leave the interactive shell session. 249 | 250 | Your container will still be running. This is because the `docker container exec` command started a new `sh` process. When you typed `exit`, you exited the `sh` process and left the `mysqld` process still running. 251 | 252 | ## Task 2: Package and run a custom app using Docker 253 | 254 | In this step you'll learn how to package your own apps as Docker images using a [Dockerfile](https://docs.docker.com/engine/reference/builder/). 255 | 256 | The Dockerfile syntax is straightforward. In this task we're going to create an NGINX website from a Dockerfile. 
257 | 258 | ### Build a simple website image 259 | 260 | Let's have a look at the Dockerfile we'll be using, which builds a simple website that allows you to send a tweet. 261 | 262 | 1. Make sure you're in the `linux_tweet_app` directory: 263 | 264 | ``` 265 | $ cd ~/linux_tweet_app 266 | ``` 267 | 268 | 2. Display the contents of our Dockerfile. 269 | 270 | ``` 271 | $ cat Dockerfile 272 | 273 | FROM nginx:latest 274 | 275 | COPY index.html /usr/share/nginx/html 276 | COPY linux.png /usr/share/nginx/html 277 | 278 | EXPOSE 80 443 279 | 280 | CMD ["nginx", "-g", "daemon off;"] 281 | ``` 282 | 283 | Let's see what each of these lines in the Dockerfile does. 284 | 285 | - [FROM](https://docs.docker.com/engine/reference/builder/#from) specifies the base image to use as the starting point for this new image you're creating. For this example we're starting from `nginx:latest`. 286 | - [COPY](https://docs.docker.com/engine/reference/builder/#copy) copies files from the host into the image, at a known location. In our case it copies `index.html` and a graphic that will be used on our webpage. 287 | - [EXPOSE](https://docs.docker.com/engine/reference/builder/#expose) documents which ports the application uses. 288 | - [CMD](https://docs.docker.com/engine/reference/builder/#cmd) specifies what command to run when a container is started from the image. Notice that we can specify the command, as well as run-time arguments. 289 | 290 | 3. To make commands more copy/paste friendly, export an environment variable containing your DockerID (if you don't have a DockerID you can get one for free via [Docker Cloud](https://cloud.docker.com)): 291 | 292 | ``` 293 | $ export DOCKERID=<your docker id> 294 | ``` 295 | 296 | 4. Make sure it stored correctly by echoing it back in the terminal: 297 | 298 | ``` 299 | $ echo $DOCKERID 300 | <your docker id> 301 | ``` 302 | 303 | 5. Use the `docker image build` command to create a new Docker image using the instructions in your Dockerfile. 304 | 305 | * `--tag` allows us to give the image a custom name. In this case it's comprised of our DockerID, the application name, and a version. Having the DockerID attached to the name will allow us to store it on Docker Hub in a later step 306 | * `.` tells Docker to use the current directory as the build context 307 | 308 | Be sure to include the period (`.`) at the end of the command. 309 | 310 | ``` 311 | $ docker image build --tag $DOCKERID/linux_tweet_app:1.0 . 312 | 313 | Sending build context to Docker daemon 32.77kB 314 | Step 1/5 : FROM nginx:latest 315 | latest: Pulling from library/nginx 316 | afeb2bfd31c0: Pull complete 317 | 7ff5d10493db: Pull complete 318 | d2562f1ae1d0: Pull complete 319 | Digest: sha256:af32e714a9cc3157157374e68c818b05ebe9e0737aac06b55a09da374209a8f9 320 | Status: Downloaded newer image for nginx:latest 321 | ---> da5939581ac8 322 | Step 2/5 : COPY index.html /usr/share/nginx/html 323 | ---> eba2eec2bea9 324 | Step 3/5 : COPY linux.png /usr/share/nginx/html 325 | ---> 4d080f499b53 326 | Step 4/5 : EXPOSE 80 443 327 | ---> Running in 47232cb5699f 328 | ---> 74c968a9165f 329 | Removing intermediate container 47232cb5699f 330 | Step 5/5 : CMD nginx -g daemon off; 331 | ---> Running in 4623761274ac 332 | ---> 12045a0df899 333 | Removing intermediate container 4623761274ac 334 | Successfully built 12045a0df899 335 | Successfully tagged <your docker id>/linux_tweet_app:1.0 336 | ``` 337 | 338 | The output above shows the Docker daemon executing each line in the Dockerfile. 339 | 340 | Feel free to run a `docker image ls` command to see the new image you created. 341 | 342 | 6. Use the `docker container run` command to start a new container from the image you created. 343 | 344 | As this container will be running an NGINX web server, we'll use the `--publish` flag to publish port 80 inside the container onto port 80 on the host. This will allow traffic coming in to the Docker host on port 80 to be directed to port 80 in the container.
The format of the `--publish` flag is `host_port`:`container_port`. 345 | 346 | ``` 347 | $ docker container run \ 348 | --detach \ 349 | --publish 80:80 \ 350 | --name linux_tweet_app \ 351 | $DOCKERID/linux_tweet_app:1.0 352 | ``` 353 | 354 | Any external traffic coming into the server on port 80 will now be directed into the container. 355 | 356 | 7. Play With Docker should display an `80` at the top of the page. Click this to access the container's website. 357 | 358 | 8. Once you've accessed the website, shut it down and remove it. 359 | 360 | ``` 361 | $ docker container rm --force linux_tweet_app 362 | 363 | linux_tweet_app 364 | ``` 365 | 366 | > **Note**: We used the `--force` parameter to remove the running container without shutting it down. This will ungracefully shut down the container and permanently remove it from the Docker host. 367 | > 368 | > In a production environment you may want to use `docker container stop` to gracefully stop the container and leave it on the host. You can then use `docker container rm` to permanently remove it. 369 | 370 | 371 | ## Task 3: Modify a running website 372 | 373 | When you're actively working on an application it is inconvenient to have to stop the container, rebuild the image, and run a new version every time you make a change to your source code. 374 | 375 | One way to streamline this process is to mount the source code directory on the local machine into the running container. This will allow any changes made to the files on the host to be immediately reflected in the container. 376 | 377 | We do this using something called a [bind mount](https://docs.docker.com/engine/admin/volumes/bind-mounts/). 378 | 379 | When you use a bind mount, a file or directory on the host machine is mounted into a container. 380 | 381 | ### Start a web app with a bind mount 382 | 383 | 1. Let's start the web app and mount the current directory into the container. 384 | 385 | In this example we'll use the `--mount` flag to mount the current directory on the host into `/usr/share/nginx/html` inside the container. 386 | 387 | Be sure to run this command from within the `linux_tweet_app` directory on your Docker host. 388 | 389 | ``` 390 | $ docker container run \ 391 | --detach \ 392 | --publish 80:80 \ 393 | --name linux_tweet_app \ 394 | --mount type=bind,source="$(pwd)",target=/usr/share/nginx/html \ 395 | $DOCKERID/linux_tweet_app:1.0 396 | ``` 397 | 398 | > Remember from our Dockerfile that `/usr/share/nginx/html` is where the HTML files for our web app are stored 399 | 400 | 2. Click the `80` at the top of the Play With Docker screen to verify the website is running (you may need to refresh the browser to get the new version). 401 | 402 | ### Modify the running website 403 | 404 | Because we did a bind mount, any changes made to the local filesystem are immediately reflected in the running container. 405 | 406 | 3. Copy a new `index.html` into the container. 407 | 408 | The Git repo that you pulled earlier contains several different versions of an index.html file. Run an `ls` command from within the `~/linux_tweet_app` directory to see a list of them. In this step we'll replace `index.html` with `index-new.html`. 409 | 410 | ``` 411 | $ cp index-new.html index.html 412 | ``` 413 | 414 | 4. Refresh the web page. The site will have changed. 415 | 416 | > If you are comfortable with `vi` you can use it to load the `index.html` file and make additional changes. Those too would be reflected when you reload the webpage. 417 | 418 | Even though we've modified `index.html` on the local filesystem and seen the change reflected in the running container, we've not actually changed the original Docker image. 419 | 420 | To show this, let's stop the current container and re-run the `1.0` image without a bind mount. 421 | 422 | 5. 
Stop and remove the currently running container. 423 | 424 | ``` 425 | $ docker rm --force linux_tweet_app 426 | 427 | linux_tweet_app 428 | ``` 429 | 430 | 6. Rerun the current version without a bind mount. 431 | 432 | ``` 433 | $ docker container run \ 434 | --detach \ 435 | --publish 80:80 \ 436 | --name linux_tweet_app \ 437 | $DOCKERID/linux_tweet_app:1.0 438 | ``` 439 | 440 | 7. Click the `80` in the Play With Docker interface to view the website. Notice it's back to the original version with the blue background. 441 | 442 | 8. Stop and remove the current container. 443 | 444 | ``` 445 | $ docker rm --force linux_tweet_app 446 | 447 | linux_tweet_app 448 | ``` 449 | 450 | ### Update the image 451 | 452 | To save the changes you made to the `index.html` file earlier, you need to build a new version of the image. 453 | 454 | 1. Build a new image and tag it as `2.0`. 455 | 456 | Remember that you have previously modified the `index.html` file on the Docker host's local filesystem. This means that running another `docker image build` will build a new image with the updated `index.html`. 457 | 458 | Be sure to include the period (`.`) at the end of the command. 459 | 460 | ``` 461 | $ docker image build --tag $DOCKERID/linux_tweet_app:2.0 . 462 | ``` 463 | 464 | > Notice how fast that built! This is because Docker only rebuilt the layers that changed and reused the cached layers for the rest, rather than rebuilding the whole image. 465 | 466 | 2. Let's look at the images on our system: 467 | 468 | ``` 469 | $ docker image ls 470 | REPOSITORY TAG IMAGE ID CREATED SIZE 471 | <your docker id>/linux_tweet_app 2.0 01612e05312b 16 seconds ago 108MB 472 | <your docker id>/linux_tweet_app 1.0 bb32b5783cd3 4 minutes ago 108MB 473 | mysql latest b4e78b89bcf3 2 weeks ago 412MB 474 | ubuntu latest 2d696327ab2e 2 weeks ago 122MB 475 | nginx latest da5939581ac8 3 weeks ago 108MB 476 | alpine latest 76da55c8019d 3 weeks ago 3.97MB 477 | ``` 478 | 479 | Notice you have both versions of the web app on your host now.
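If you're curious which layers the two versions share, `docker image history` lists the layers that make up an image (a sketch of the commands only; the layer IDs on your host will differ):

```
$ docker image history $DOCKERID/linux_tweet_app:1.0
$ docker image history $DOCKERID/linux_tweet_app:2.0
```

Comparing the two listings should show that the `nginx` base layers are identical in both images, while the layer for the changed `COPY index.html` instruction (and the layers for the instructions after it) were rebuilt for `2.0`.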
480 | 481 | ### Test the new version 482 | 483 | 1. Run a container from the new version of the image. 484 | 485 | Be sure to reference the image tagged as `2.0`. 486 | 487 | ``` 488 | $ docker container run \ 489 | --detach \ 490 | --publish 80:80 \ 491 | --name linux_tweet_app \ 492 | $DOCKERID/linux_tweet_app:2.0 493 | ``` 494 | 495 | 2. Click on the `80` at the top of your Play with Docker screen to view the updated version of the web app. 496 | 497 | The web page will have an orange background. 498 | 499 | We can run both versions side by side. The only thing we need to be aware of is that we cannot have two containers using port 80 on the same host. 500 | 501 | As we're already using port 80 for the container running from the `2.0` version of the image, we will start a new container and publish it on port 8080. Additionally, we need to give our container a unique name (`old_linux_tweet_app`) 502 | 503 | 3. Run the old version (make sure you map it to port 8080 on the host, give it the unique name, and reference the 1.0 version of the image). 504 | 505 | ``` 506 | $ docker container run \ 507 | --detach \ 508 | --publish 8080:80 \ 509 | --name old_linux_tweet_app \ 510 | $DOCKERID/linux_tweet_app:1.0 511 | ``` 512 | 513 | 4. Play With Docker will add an `8080` to the top of the screen. Click that to view the old version of the website. 514 | 515 | ### Push your images to Docker Hub 516 | 517 | List the images on your Docker host. You will see that you now have two `linux_tweet_app` images - one tagged as `1.0` and the other as `2.0`. 518 | 519 | ``` 520 | $ docker image ls 521 | 522 | REPOSITORY TAG IMAGE ID CREATED SIZE 523 | mikegcoleman/linux_tweet_app 2.0 01612e05312b 3 minutes ago 108MB 524 | mikegcoleman/linux_tweet_app 1.0 bb32b5783cd3 7 minutes ago 108MB 525 | ``` 526 | 527 | Those images are only stored in your Docker host's local repository. Your Docker host will be deleted after the workshop. 
In this step we'll push the images to a public repository so you can run them from any Linux machine with Docker. 528 | 529 | Distribution is built into the Docker platform. You can build images locally and push them to a public or private [registry](https://docs.docker.com/registry/), making them available to other users. Anyone with access can pull that image and run a container from it. The behavior of the app in the container will be the same for everyone, because the image contains the fully-configured app - the only requirements to run it are Linux and Docker. 530 | 531 | [Docker Hub](https://hub.docker.com) is the default public registry for Docker images. 532 | 533 | 1. Before you can push your images, you will need to log into Docker Hub. 534 | 535 | ``` 536 | $ docker login 537 | Username: <your docker id> 538 | Password: 539 | Login Succeeded 540 | ``` 541 | 542 | You will need to supply your Docker ID credentials when prompted. 543 | 544 | 2. Push version `1.0` of your web app using `docker image push`. 545 | 546 | ``` 547 | $ docker image push $DOCKERID/linux_tweet_app:1.0 548 | 549 | The push refers to a repository [docker.io/<your docker id>/linux_tweet_app] 550 | 910e84bcef7a: Pushed 551 | 1dee161c8ba4: Pushed 552 | 110566462efa: Pushed 553 | 305e2b6ef454: Pushed 554 | 24e065a5f328: Pushed 555 | 1.0: digest: sha256:51e937ec18c7757879722f15fa1044cbfbf2f6b7eaeeb578c7c352baba9aa6dc size: 1363 556 | ``` 557 | 558 | You'll see the progress as the image is pushed up to Docker Hub. 559 | 560 | 3. Now push version `2.0`. 561 | 562 | ``` 563 | $ docker image push $DOCKERID/linux_tweet_app:2.0 564 | 565 | The push refers to a repository [docker.io/<your docker id>/linux_tweet_app] 566 | 0b171f8fbe22: Pushed 567 | 70d38c767c00: Pushed 568 | 110566462efa: Layer already exists 569 | 305e2b6ef454: Layer already exists 570 | 24e065a5f328: Layer already exists 571 | 2.0: digest: sha256:7c51f77f90b81e5a598a13f129c95543172bae8f5850537225eae0c78e4f3add size: 1363 572 | ``` 573 | 574 | Notice that several lines of the output say `Layer already exists`. This is because Docker only uploads layers that have changed; the layers that `2.0` shares with `1.0` are already stored in the registry, so they are not pushed again. 575 | 576 | 577 | You can browse to `https://hub.docker.com/r/<your docker id>/linux_tweet_app/` and see your newly-pushed Docker images. These are public repositories, so anyone can pull the images - you don't even need a Docker ID to pull public images. -------------------------------------------------------------------------------- /part-2/README.md: -------------------------------------------------------------------------------- 1 | # Docker 101 - Linux (Part 2): Understanding the Docker File System and Volumes 2 | 3 | We had an introduction to volumes by way of bind mounts earlier, but let's take a deeper look at the Docker file system and volumes. 4 | 5 | The [Docker documentation](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/) gives a great explanation of how storage works with Docker images and containers, but here are the high points.
6 | 7 | * Images are comprised of layers 8 | * These layers are added by each line in a Dockerfile 9 | * Images on the same host or registry will share layers if possible 10 | * When a container is started it gets a unique writeable layer of its own to capture changes that occur while it's running 11 | * Layers exist on the host file system in some form (usually a directory, but not always) and are managed by a [storage driver](https://docs.docker.com/engine/userguide/storagedriver/selectadriver/) to present a logical filesystem in the running container. 12 | * When a container is removed the unique writeable layer (and everything in it) is removed as well 13 | * To persist data (and improve performance) volumes are used. 14 | * Volumes (and the directories they are built on) are not managed by the storage driver, and will live on if a container is removed. 15 | 16 | The following exercises will help to illustrate those concepts in practice. 17 | 18 | Let's start by looking at layers and how files written to a container are managed by something called *copy on write*. 19 | 20 | ## Layers and Copy on Write 21 | 22 | > Note: If you have just completed part 1 of the workshop, please close that session and start a new one. 23 | 24 | 1. In PWD click "+Add new instance" and move into that command window. 25 | 26 | 1. Pull down the `debian:jessie` image 27 | 28 | ``` 29 | $ docker image pull debian:jessie 30 | jessie: Pulling from library/debian 31 | 85b1f47fba49: Pull complete 32 | Digest: sha256:f51cf81db2de8b5e9585300f655549812cdb27c56f8bfb992b8b706378cd517d 33 | Status: Downloaded newer image for debian:jessie 34 | ``` 35 | 36 | 2. 
Pull down a MySQL image 37 | 38 | ``` 39 | $ docker image pull mysql 40 | Using default tag: latest 41 | latest: Pulling from library/mysql 42 | 85b1f47fba49: Already exists 43 | 27dc53f13a11: Pull complete 44 | 095c8ae4182d: Pull complete 45 | 0972f6b9a7de: Pull complete 46 | 1b199048e1da: Pull complete 47 | 159de3cf101e: Pull complete 48 | 963d934c2fcd: Pull complete 49 | f4b66a97a0d0: Pull complete 50 | f34057997f40: Pull complete 51 | ca1db9a06aa4: Pull complete 52 | 0f913cb2cc0c: Pull complete 53 | Digest: sha256:bfb22e93ee87c6aab6c1c9a4e7cdc68e9cb9b64920f28fa289f9ffae9fe8e173 54 | Status: Downloaded newer image for mysql:latest 55 | ``` 56 | 57 | What do you notice about the output from the Docker pull request for MySQL? 58 | 59 | The first layer pulled says: 60 | 61 | `85b1f47fba49: Already exists` 62 | 63 | Notice that the layer id (`85b1f47fba49`) is the same for the first layer of the MySQL image and the only layer in the Debian:Jessie image. And because we had already pulled that layer when we pulled the Debian image, we didn't have to pull it again. 64 | 65 | So, what does that tell us about the MySQL image? Since each layer is created by a line in the image's *Dockerfile*, we know that the MySQL image is based on the Debian:Jessie base image. We can confirm this by looking at the [Dockerfile on Docker Store](https://github.com/docker-library/mysql/blob/0590e4efd2b31ec794383f084d419dea9bc752c4/5.7/Dockerfile). 66 | 67 | The first line in the Dockerfile is: `FROM debian:jessie`. This imports that layer into the MySQL image. 68 | 69 | So layers are created by Dockerfiles and are shared between images. When you start a container, a writeable layer is added to the base image. 70 | 71 | Next you will create a file in the container, and see how that's represented on the host file system. 72 | 73 | 3. 
74 | 75 | ``` 76 | $ docker run --tty --interactive --name debian debian:jessie bash 77 | root@e09203d84deb:/# 78 | ``` 79 | 80 | 4. Create a file and then list out the directory to make sure it's there: 81 | 82 | ``` 83 | root@e09203d84deb:/# touch test-file 84 | root@e09203d84deb:/# ls 85 | bin dev home lib64 mnt proc run srv test-file usrboot etc lib media opt root sbin sys tmp var 86 | ``` 87 | 88 | We can see `test-file` exists in the root of the containers file system. 89 | 90 | What has happened is that when a new file was written to the disk, the Docker storage driver placed that file in it's own layer. This is called *copy on write* - as soon as a change is detected the change is copied into the writeable layer. That layers is represented by a directory on the host file system. All of this is managed by the Docker storage driver. 91 | 92 | 5. Exit the container but leave it running by pressing `ctrl-p` and then `ctrl-q` 93 | 94 | The Docker hosts for the labs today use OverlayFS with the [overlay2](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#how-the-overlay2-driver-works) storage driver. 95 | 96 | OverlayFS layers two directories on a single Linux host and presents them as a single directory. These directories are called layers and the unification process is referred to as a union mount. OverlayFS refers to the lower directory as lowerdir and the upper directory a upperdir. "Upper" and "Lower" refer to when the layer was added to the image. In our example the writeable layer is the most "upper" layer. The unified view is exposed through its own directory called merged. 97 | 98 | We can use Docker's *inspect* command to look at where these directories live on our Docker host's file system. 99 | 100 | > Note: The *inspect* command uses Go templates to allow us to extract out specific information from its output. 
For more information on how these templates work with *inspect* read this [excellent tutorial](http://container-solutions.com/docker-inspect-template-magic/). 101 | 102 | ``` 103 | $ docker inspect -f '{{json .GraphDriver.Data}}' debian | jq 104 | { 105 | "LowerDir": "/var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e-init/diff:/var/lib/docker/overlay2/c2e2db4221ad5dca9f35a92e04d17c79b861ddee30015fa3ddc77c66ae1bf758/diff", 106 | "MergedDir": "/var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e/merged", 107 | "UpperDir": "/var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e/diff", 108 | "WorkDir": "/var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e/work" 109 | } 110 | ``` 111 | > Note: `WorkDir` is a working directory for the Overlay2 driver 112 | 113 | Since the change we made is the newest modification to the Debian container's file system, it's going to be stored in `UpperDir`. 114 | 115 | 6. List the contents of the `UpperDir`. 116 | 117 | ``` 118 | $ cd $(docker inspect -f {{.GraphDriver.Data.UpperDir}} debian) 119 | 120 | $ ls 121 | root test-file 122 | ``` 123 | 124 | `MergedDir` is going to give us a look at the root filesystem of our container which is a combination of `UpperDir` and `LowerDir`: 125 | 126 | 7. List the contents of `MergedDir`: 127 | 128 | ``` 129 | $ cd $(docker inspect -f {{.GraphDriver.Data.MergedDir}} debian) 130 | 131 | $ ls 132 | bin etc lib64 opt run sys usr 133 | boot home media proc sbin test-file var 134 | dev lib mnt root srv tmp 135 | ``` 136 | 137 | Notice that the directory on our host file system has the same contents as the one inside the container. That's because that directory is what we see in the container. 138 | 139 | > Warning: You should NEVER manipulate your container's file system via the Docker host. This is only being done as an academic exercise. 140 | 141 | 8. 
Write a new file to the host file system in the `UpperDir`, and list the directory to see the contents 142 | 143 | ``` 144 | $ cd $(docker inspect -f {{.GraphDriver.Data.UpperDir}} debian) 145 | 146 | $ touch test-file2 147 | 148 | $ ls 149 | root test-file test-file2 150 | ``` 151 | 152 | 153 | 9. Move back into your Debian container and list the root file system 154 | 155 | ``` 156 | $ docker attach debian 157 | 158 | root@674d7abf10c6:/# ls 159 | bin dev home lib64 mnt proc run srv test-file tmp var 160 | boot etc lib media opt root sbin sys test-file2 usr 161 | ``` 162 | 163 | The file that was created on the local host filesystem (`test-file2`) is now available in the container as well. 164 | 165 | 10. Type `exit` to leave your container, which will also stop it 166 | 167 | ``` 168 | root@674d7abf10c6:/# exit 169 | exit 170 | ``` 171 | 172 | 11. Ensure that your Debian container still exists 173 | 174 | ``` 175 | $ docker container ls --all 176 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 177 | 674d7abf10c6 debian:jessie "bash" 36 minutes ago Exited (0) 2 minutes ago debian 178 | ``` 179 | 180 | 12. List out the current directory 181 | 182 | ``` 183 | $ ls 184 | root test-file test-file2 185 | ``` 186 | 187 | Because the container still exists, the files are still available on your file system. At this point you could `docker start` your container and it would be just as it was before you exited. 188 | 189 | However, if we remove the container, the directories on the host file system will be removed, and your changes will be gone. 190 | 191 | 13. Remove the container and list the directory contents 192 | 193 | ``` 194 | $ docker container rm debian 195 | debian 196 | 197 | $ ls 198 | ``` 199 | 200 | The files that were created are now gone. You've been left in a sort of "no man's land", as the directory you're in has been deleted as well. 201 | 202 | 14. Copy the directory location from the prompt in the terminal. 203 | 204 | 15. 
`cd` back to your home directory 205 | 206 | ``` 207 | $ cd 208 | ``` 209 | 210 | 16. Attempt to list the contents of the old `UpperDir` directory. 211 | 212 | ``` 213 | $ ls /var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e/diff 214 | ls: /var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e/diff: No such file or directory 215 | ``` 216 | 217 | ## Understanding Docker Volumes 218 | 219 | [Docker volumes](https://docs.docker.com/engine/admin/volumes/volumes/) are directories on the host file system that are not managed by the storage driver. Since they are not managed by the storage driver they offer a couple of important benefits. 220 | 221 | * **Performance**: Because the storage driver has to create the logical filesystem in the container from potentially many directories on the local host, accessing data can be slow, especially if there is a lot of write activity in that container. In fact you should try to minimize the amount of writes that happen to the container's filesystem, and instead direct those writes to a volume 222 | 223 | * **Persistence**: Volumes are not removed when the container is deleted. They exist until explicitly removed. This means data written to a volume can be reused by other containers. 224 | 225 | Volumes can be anonymous or named. Anonymous volumes have no way to be explicitly referenced. They are almost exclusively used for performance reasons, as you cannot persist data effectively with anonymous volumes. Named volumes can be explicitly referenced, so they can be used to persist data and increase performance. 226 | 227 | The next sections will cover both anonymous and named volumes. 228 | 229 | > Special Note: These next sections were adapted from [Arun Gupta's](https://twitter.com/arungupta) excellent [tutorial](http://blog.arungupta.me/docker-mysql-persistence/) on persisting data with MySQL. 
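Before moving on to volumes, the upper/lower lookup that OverlayFS performed in the exercise above can be emulated with ordinary directories. This is a conceptual sketch only - the real merge happens in the kernel via the overlay2 driver, and the directory and file names here are invented purely for illustration:

```shell
# Conceptual sketch only: emulate OverlayFS's file lookup with plain
# directories. "lower" stands in for the read-only image layer and
# "upper" for the container's writeable layer.
workdir=$(mktemp -d)
mkdir -p "$workdir/lower" "$workdir/upper" "$workdir/merged"

echo "from image layer"     > "$workdir/lower/app.conf"
echo "base file"            > "$workdir/lower/base.txt"
# Copy on write: the modified copy of app.conf lives only in the upper layer.
echo "from writeable layer" > "$workdir/upper/app.conf"

# The merged view prefers the upper layer and falls back to the lower one.
for f in app.conf base.txt; do
  if [ -e "$workdir/upper/$f" ]; then
    cp "$workdir/upper/$f" "$workdir/merged/$f"
  else
    cp "$workdir/lower/$f" "$workdir/merged/$f"
  fi
done

cat "$workdir/merged/app.conf"   # the modified copy wins
cat "$workdir/merged/base.txt"   # unmodified files fall through to the image layer
```

The modified file shadows its image-layer original, while unmodified files are served from the lower layer - the same behaviour you saw when `test-file` appeared in `UpperDir` but the rest of the root filesystem did not.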
230 | 231 | ### Anonymous Volumes 232 | 233 | If you once again look at the MySQL [Dockerfile](https://github.com/docker-library/mysql/blob/0590e4efd2b31ec794383f084d419dea9bc752c4/5.7/Dockerfile) you will find the following line: 234 | 235 | ``` 236 | VOLUME /var/lib/mysql 237 | ``` 238 | 239 | This line sets up an anonymous volume in order to increase database performance by avoiding sending a bunch of writes through the Docker storage driver. 240 | 241 | Note: An anonymous volume is a volume that hasn't been explicitly named. This means that it's extremely difficult to use the volume later with a new container. Named volumes solve that problem, and will be covered later in this section. 242 | 243 | 244 | 1. Start a MySQL container 245 | 246 | ``` 247 | $ docker run --name mysqldb -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -d mysql 248 | acf185dc16e274b2f332266a1bfc6d1df7d7b4f780e6a7ec6716b40cafa5b3c3 249 | ``` 250 | 251 | When we start the container the anonymous volume is created: 252 | 253 | 2. Use Docker inspect to view the details of the anonymous volume 254 | 255 | 256 | ``` 257 | $ docker inspect -f 'in the {{.Name}} container {{(index .Mounts 0).Destination}} is mapped to {{(index .Mounts 0).Source}}' mysqldb 258 | in the /mysqldb container /var/lib/mysql is mapped to /var/lib/docker/volumes/cd79b3301df29d13a068d624467d6080354b81e34d794b615e6e93dd61f89628/_data 259 | ``` 260 | 261 | 3. 
Change into the volume directory on the local host file system and list the contents 262 | 263 | ``` 264 | $ cd $(docker inspect -f '{{(index .Mounts 0).Source}}' mysqldb) 265 | 266 | $ ls 267 | auto.cnf ib_buffer_pool mysql server-cert.pem 268 | ca-key.pem ib_logfile0 performance_schema server-key.pem 269 | ca.pem ib_logfile1 private_key.pem sys 270 | client-cert.pem ibdata1 public_key.pem 271 | client-key.pem ibtmp1 sample 272 | ``` 273 | 274 | Notice that the directory name starts with `/var/lib/docker/volumes/`, whereas for directories managed by the Overlay2 storage driver it was `/var/lib/docker/overlay2`. 275 | 276 | As mentioned, anonymous volumes will not persist data between containers; they are almost always used to increase performance. 277 | 278 | 4. Shell into your running MySQL container and log into MySQL 279 | 280 | ``` 281 | $ docker exec --tty --interactive mysqldb bash 282 | 283 | root@132f4b3ec0dc:/# mysql --user=mysql --password=mysql 284 | mysql: [Warning] Using a password on the command line interface can be insecure. 285 | Welcome to the MySQL monitor. Commands end with ; or \g. 286 | Your MySQL connection id is 3 287 | Server version: 5.7.19 MySQL Community Server (GPL) 288 | 289 | Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved. 290 | 291 | Oracle is a registered trademark of Oracle Corporation and/or its 292 | affiliates. Other names may be trademarks of their respective 293 | owners. 294 | 295 | Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 296 | ``` 297 | 298 | 5. 
Create a new table 299 | 300 | ``` 301 | mysql> show databases; 302 | +--------------------+ 303 | | Database | 304 | +--------------------+ 305 | | information_schema | 306 | | sample | 307 | +--------------------+ 308 | 2 rows in set (0.00 sec) 309 | 310 | mysql> connect sample; 311 | Connection id: 4 312 | Current database: sample 313 | 314 | mysql> show tables; 315 | Empty set (0.00 sec) 316 | 317 | mysql> create table user(name varchar(50)); 318 | Query OK, 0 rows affected (0.01 sec) 319 | 320 | mysql> show tables; 321 | +------------------+ 322 | | Tables_in_sample | 323 | +------------------+ 324 | | user | 325 | +------------------+ 326 | 1 row in set (0.00 sec) 327 | ``` 328 | 329 | 6. Exit MySQL and the MySQL container. 330 | 331 | ``` 332 | mysql> exit 333 | Bye 334 | 335 | root@132f4b3ec0dc:/# exit 336 | exit 337 | ``` 338 | 339 | 7. Stop the container and restart it 340 | 341 | ``` 342 | $ docker stop mysqldb 343 | mysqldb 344 | 345 | $ docker start mysqldb 346 | mysqldb 347 | ``` 348 | 349 | 8. Shell back into the running container and log into MySQL 350 | 351 | ``` 352 | $ docker exec --interactive --tty mysqldb bash 353 | 354 | root@132f4b3ec0dc:/# mysql --user=mysql --password=mysql 355 | mysql: [Warning] Using a password on the command line interface can be insecure. 356 | Welcome to the MySQL monitor. Commands end with ; or \g. 357 | Your MySQL connection id is 3 358 | Server version: 5.7.19 MySQL Community Server (GPL) 359 | 360 | Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved. 361 | 362 | Oracle is a registered trademark of Oracle Corporation and/or its 363 | affiliates. Other names may be trademarks of their respective 364 | owners. 365 | 366 | Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 367 | ``` 368 | 369 | 9. 
Ensure the table created previously still exists 370 | 371 | ``` 372 | mysql> connect sample; 373 | Reading table information for completion of table and column names 374 | You can turn off this feature to get a quicker startup with -A 375 | 376 | Connection id: 4 377 | Current database: sample 378 | 379 | mysql> show tables; 380 | +------------------+ 381 | | Tables_in_sample | 382 | +------------------+ 383 | | user | 384 | +------------------+ 385 | 1 row in set (0.00 sec) 386 | ``` 387 | 388 | 10. Exit MySQL and the MySQL container. 389 | 390 | ``` 391 | mysql> exit 392 | Bye 393 | 394 | root@132f4b3ec0dc:/# exit 395 | exit 396 | ``` 397 | 398 | The table persisted across container restarts, which is to be expected. In fact, it would have done this whether or not we had actually used a volume, as shown in the previous section. 399 | 400 | 11. Let's look at the volume again 401 | 402 | ``` 403 | $ docker inspect -f 'in the {{.Name}} container {{(index .Mounts 0).Destination}} is mapped to {{(index .Mounts 0).Source}}' mysqldb 404 | in the /mysqldb container /var/lib/mysql is mapped to /var/lib/docker/volumes/cd79b3301df29d13a068d624467d6080354b81e34d794b615e6e93dd61f89628/_data 405 | ``` 406 | 407 | We can see the volume was not affected by the container restart either. 408 | 409 | Where people often get confused is in expecting that the anonymous volume can be used to persist data BETWEEN containers. 410 | 411 | To examine that, delete the old container, create a new one with the same command, and check to see if the table exists. 412 | 413 | 12. Remove the current MySQL container 414 | 415 | ``` 416 | $ docker container rm --force mysqldb 417 | mysqldb 418 | ``` 419 | 420 | 13. 
Start a new container with the same command that was used before 421 | 422 | ``` 423 | $ docker run --name mysqldb -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -d mysql 424 | eb15eb4ecd26d7814a8da3bb27cee1a23304fab1961358dd904db37c061d3798 425 | ``` 426 | 427 | 14. List out the volume details for the new container 428 | 429 | ``` 430 | $ docker inspect -f 'in the {{.Name}} container {{(index .Mounts 0).Destination}} is mapped to {{(index .Mounts 0).Source}}' mysqldb 431 | in the /mysqldb container /var/lib/mysql is mapped to /var/lib/docker/volumes/e0ffdc6b4e0cfc6e795b83cece06b5b807e6af1b52c9d0b787e38a48e159404a/_data 432 | ``` 433 | 434 | Notice this directory is different from before. 435 | 436 | 15. Shell back into the running container and log into MySQL 437 | 438 | ``` 439 | $ docker exec --interactive --tty mysqldb bash 440 | 441 | root@132f4b3ec0dc:/# mysql --user=mysql --password=mysql 442 | mysql: [Warning] Using a password on the command line interface can be insecure. 443 | Welcome to the MySQL monitor. Commands end with ; or \g. 444 | Your MySQL connection id is 3 445 | Server version: 5.7.19 MySQL Community Server (GPL) 446 | 447 | Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved. 448 | 449 | Oracle is a registered trademark of Oracle Corporation and/or its 450 | affiliates. Other names may be trademarks of their respective 451 | owners. 452 | 453 | Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 454 | ``` 455 | 456 | 16. Check to see if the table created previously still exists 457 | 458 | ``` 459 | mysql> connect sample; 460 | Connection id: 4 461 | Current database: sample 462 | 463 | mysql> show tables; 464 | Empty set (0.00 sec) 465 | ``` 466 | 467 | 17. Exit MySQL and the MySQL container. 468 | 469 | ``` 470 | mysql> exit 471 | Bye 472 | 473 | root@132f4b3ec0dc:/# exit 474 | exit 475 | ``` 476 | 477 | 18. 
Remove the container 478 | 479 | ``` 480 | $ docker container rm --force mysqldb 481 | mysqldb 482 | ``` 483 | 484 | So while a volume was used to store the new table in the original container, because it wasn't a named volume the data could not be persisted between containers. 485 | 486 | To achieve persistence, a named volume should be used. 487 | 488 | ### Named Volumes 489 | 490 | A named volume (as the name implies) is a volume that's been explicitly named and can easily be referenced. 491 | 492 | A named volume can be created on the command line, in a docker-compose file, and when you start a new container. They [CANNOT be created as part of the image's Dockerfile](https://github.com/moby/moby/issues/30647). 493 | 494 | 1. Start a MySQL container with a named volume (`mydbdata`) 495 | 496 | ``` 497 | $ docker run --name mysqldb \ 498 | -e MYSQL_USER=mysql \ 499 | -e MYSQL_PASSWORD=mysql \ 500 | -e MYSQL_DATABASE=sample \ 501 | -e MYSQL_ROOT_PASSWORD=supersecret \ 502 | --detach \ 503 | --mount type=volume,source=mydbdata,target=/var/lib/mysql \ 504 | mysql 505 | ``` 506 | 507 | Because the newly created volume is empty, Docker will copy over whatever existed in the container at `/var/lib/mysql` when the container starts. 508 | 509 | Docker volumes are primitives just like images and containers. As such, they can be listed and removed in the same way. 510 | 511 | 2. List the volumes on the Docker host 512 | 513 | ``` 514 | $ docker volume ls 515 | DRIVER VOLUME NAME 516 | local 55c322b9c4a644a5284ccb5e4d7b6b466a0534e26d57c9ef4221637d39cf9a88 517 | local cc44059d23e0a914d4390ea860fd35b2acdaa480e83c025fb381da187b652a66 518 | local e0ffdc6b4e0cfc6e795b83cece06b5b807e6af1b52c9d0b787e38a48e159404a 519 | local mydbdata 520 | ``` 521 | 522 | 3. 
Inspect the volume 523 | 524 | ``` 525 | $ docker inspect mydbdata 526 | [ 527 | { 528 | "CreatedAt": "2017-10-13T19:55:10Z", 529 | "Driver": "local", 530 | "Labels": null, 531 | "Mountpoint": "/var/lib/docker/volumes/mydbdata/_data", 532 | "Name": "mydbdata", 533 | "Options": {}, 534 | "Scope": "local" 535 | } 536 | ] 537 | ``` 538 | 539 | Any data written to `/var/lib/mysql` in the container will be rerouted to `/var/lib/docker/volumes/mydbdata/_data` instead. 540 | 541 | 4. Shell into your running MySQL container and log into MySQL 542 | 543 | ``` 544 | $ docker exec --tty --interactive mysqldb bash 545 | 546 | root@132f4b3ec0dc:/# mysql --user=mysql --password=mysql 547 | mysql: [Warning] Using a password on the command line interface can be insecure. 548 | Welcome to the MySQL monitor. Commands end with ; or \g. 549 | Your MySQL connection id is 3 550 | Server version: 5.7.19 MySQL Community Server (GPL) 551 | 552 | Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved. 553 | 554 | Oracle is a registered trademark of Oracle Corporation and/or its 555 | affiliates. Other names may be trademarks of their respective 556 | owners. 557 | 558 | Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 559 | ``` 560 | 561 | 5. Create a new table 562 | 563 | ``` 564 | mysql> connect sample; 565 | Connection id: 4 566 | Current database: sample 567 | 568 | mysql> show tables; 569 | Empty set (0.00 sec) 570 | 571 | mysql> create table user(name varchar(50)); 572 | Query OK, 0 rows affected (0.01 sec) 573 | 574 | mysql> show tables; 575 | +------------------+ 576 | | Tables_in_sample | 577 | +------------------+ 578 | | user | 579 | +------------------+ 580 | 1 row in set (0.00 sec) 581 | ``` 582 | 583 | 6. Exit MySQL and the MySQL container. 584 | 585 | ``` 586 | mysql> exit 587 | Bye 588 | 589 | root@132f4b3ec0dc:/# exit 590 | exit 591 | ``` 592 | 593 | 7. 
Remove the MySQL container 594 | 595 | ``` 596 | $ docker container rm --force mysqldb 597 | ``` 598 | 599 | Because MySQL was writing out to a named volume, we can start a new container with the same data. 600 | 601 | When the container starts it will not overwrite existing data in a volume. So the data created in the previous steps will be left intact and mounted into the new container. 602 | 603 | 8. Start a new MySQL container 604 | 605 | ``` 606 | $ docker run --name new_mysqldb \ 607 | -e MYSQL_USER=mysql \ 608 | -e MYSQL_PASSWORD=mysql \ 609 | -e MYSQL_DATABASE=sample \ 610 | -e MYSQL_ROOT_PASSWORD=supersecret \ 611 | --detach \ 612 | --mount type=volume,source=mydbdata,target=/var/lib/mysql \ 613 | mysql 614 | ``` 615 | 616 | 9. Shell into your running MySQL container and log into MySQL 617 | 618 | ``` 619 | $ docker exec --tty --interactive new_mysqldb bash 620 | 621 | root@132f4b3ec0dc:/# mysql --user=mysql --password=mysql 622 | mysql: [Warning] Using a password on the command line interface can be insecure. 623 | Welcome to the MySQL monitor. Commands end with ; or \g. 624 | Your MySQL connection id is 3 625 | Server version: 5.7.19 MySQL Community Server (GPL) 626 | 627 | Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved. 628 | 629 | Oracle is a registered trademark of Oracle Corporation and/or its 630 | affiliates. Other names may be trademarks of their respective 631 | owners. 632 | 633 | Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 634 | ``` 635 | 636 | 10. Check to see if the previously created table exists in your new container. 
637 | 638 | ``` 639 | mysql> connect sample; 640 | Reading table information for completion of table and column names 641 | You can turn off this feature to get a quicker startup with -A 642 | 643 | Connection id: 4 644 | Current database: sample 645 | 646 | mysql> show tables; 647 | +------------------+ 648 | | Tables_in_sample | 649 | +------------------+ 650 | | user | 651 | +------------------+ 652 | 1 row in set (0.00 sec) 653 | ``` 654 | 655 | The data will exist until the volume is explicitly deleted. 656 | 657 | 11. Exit MySQL and the MySQL container. 658 | 659 | ``` 660 | mysql> exit 661 | Bye 662 | 663 | root@132f4b3ec0dc:/# exit 664 | exit 665 | ``` 666 | 667 | 12. Remove the new MySQL container and volume 668 | 669 | ``` 670 | $ docker container rm --force new_mysqldb 671 | new_mysqldb 672 | 673 | $ docker volume rm mydbdata 674 | mydbdata 675 | ``` 676 | 677 | If a new container were started with the previous command, it would create a new empty volume. 678 | -------------------------------------------------------------------------------- /part-3/README.md: -------------------------------------------------------------------------------- 1 | # Docker 101 - Linux (Part 3): Docker Swarm and Container Networking 2 | 3 | So far all of the previous exercises have been based around running a single container on a single host. 4 | 5 | This section will cover how to use multiple hosts to provide fault tolerance as well as increased performance. As part of that discussion it will also provide an overview of Docker's multi-host networking capabilities. 6 | 7 | #### What is "Orchestration"? 8 | 9 | If you've heard about containers you've probably heard about orchestration as well. Container orchestrators, such as Docker Swarm and Kubernetes, provide a ton of functionality around managing and deploying containerized applications. 
But the two most fundamental things they do are clustering and scheduling (that's not all they do by a long shot, but they are arguably the two most important functions). 10 | 11 | Clustering is the concept of taking a group of machines and treating them as a single computing resource. These machines are capable of accepting any workload because they all offer the same capabilities. These clustered machines don't have to be running on the same infrastructure - they could be a mix of bare metal and VMs for instance. 12 | 13 | Scheduling is the process of deciding where a workload should reside. When an admin starts a new instance of a website, she can decide which region it needs to go in or whether it should be on bare metal or in the cloud. The scheduler will make that happen. Schedulers also make sure that the application maintains its desired state. For example, if there were 10 copies of a web site running, and one of them crashed, the scheduler would know this and start up a new instance to take the failed one's place. 14 | 15 | 16 | With Docker there is a built-in orchestrator: Docker Swarm. It provides both clustering and scheduling as well as many other advanced services. 17 | 18 | The next part of the lab will start with the deployment of a 3-node Docker Swarm cluster. 19 | 20 | ### Build your cluster 21 | 22 | Note: If you have just completed a previous part of the workshop, please close that session and start a new one. 23 | 24 | 1. In the PWD interface click `+ Add new instance` to instantiate a Linux node 25 | 26 | > Note: If you've closed the PWD interface, simply go back to [the PWD site](https://dockercon.play-with-docker.com) 27 | 28 | 2. Repeat step 1 to add a second node to the cluster. 29 | 30 | 3. Click the `Windows containers` slider and then click `+ Add New Instance` 31 | 32 | There are now three standalone Docker hosts: two Linux and one Windows. 33 | 34 | 4. 
In the console for `node1` initialize Docker Swarm 35 | 36 | ``` 37 | $ docker swarm init --advertise-addr eth0 38 | Swarm initialized: current node (ujsz5fd1ozr410x1ywuix7sik) is now a manager. 39 | 40 | To add a worker to this swarm, run the following command: 41 | 42 | docker swarm join --token SWMTKN-1-53ao1ihu684vpcpjuf332bi4et27qiwxajefyqryyj4o0indm7-b2zc5ldudkxcmf6kcncft8t12 192.168.0.13:2377 43 | 44 | To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. 45 | ``` 46 | 47 | `node1` is now a Swarm manager node. Manager nodes are responsible for ensuring the integrity of the cluster as well as managing running services. 48 | 49 | 5. Copy the `docker swarm join` output from `node1` 50 | 51 | 6. Switch to `node2` and paste in the command. **Be sure to copy the output from your PWD window, not from this lab guide** 52 | 53 | ``` 54 | $ docker swarm join --token SWMTKN-1-53ao1ihu684vpcpjuf332bi4et27qiwxajefyqryyj4o0indm7-b2zc5ldudkxcmf6kcncft8t12 192.168.0.13:2377 55 | This node joined a swarm as a worker. 56 | ``` 57 | 58 | 7. Switch to the Windows node and paste the same command at the PowerShell prompt 59 | 60 | ``` 61 | PS C:\> docker swarm join --token SWMTKN-1-53ao1ihu684vpcpjuf332bi4et27qiwxajefyqryyj4o0indm7-b2zc5ldudkxcmf6kcncft8t12 192.168.0.13:2377 62 | This node joined a swarm as a worker. 63 | ``` 64 | 65 | The three nodes have now been clustered into a single Docker swarm. An important thing to note is that clusters can be made up of Linux nodes, Windows nodes, or a combination of both. 66 | 67 | 8. Switch back to `node1` 68 | 69 | 9. List the nodes in the cluster 70 | 71 | ``` 72 | $ docker node ls 73 | ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS 74 | uqqgsvc1ykkgoamaatcun89wt * node1 Ready Active Leader 75 | g4demzyr7sd2ngpa8wntbo0l4 node2 Ready Active 76 | xflngp99u1r9pn7bryqbbrrvq win000046 Ready Active 77 | ``` 78 | 79 | 80 | Commands against the swarm can only be issued from the manager node. 
Attempting to run the above command against `node2` or the Windows node would result in an error. 81 | 82 | ``` 83 | $ docker node ls 84 | Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager. 85 | ``` 86 | 87 | With the Swarm cluster built it's time to move to a discussion on networking. 88 | 89 | Docker supports several different networking options, but this lab will cover the two most popular: bridge and overlay. 90 | 91 | Bridge networks are only available on the local host, and can be created on hosts in swarm clusters as well as standalone hosts. However, in a swarm cluster, even though the machines are tied together, bridge networks only work on the host on which they were created. 92 | 93 | Overlay networks facilitate the creation of networks that span Docker hosts. While it's possible to network together hosts that are not in a Swarm cluster, it's a very manual task requiring the addition of an external key-value store. With Docker Swarm, creating overlay networks is trivial. 94 | 95 | ### Bridge networking overview 96 | 97 | As previously mentioned, bridge networks facilitate the creation of software-defined networks on a single Docker host. Containers on one bridge network cannot communicate with containers on a different bridge network (unless they communicate via the network the Docker host resides on). 98 | 99 | 1. On `node1` create a bridge network (`mybridge`) 100 | 101 | ``` 102 | $ docker network create mybridge 103 | 52fb9de4ad1cbe505e451599df2cb62c53e56893b0c2b8d9b8715b5e76947551 104 | ``` 105 | 106 | 2. 
List the networks on `node1` 107 | 108 | ``` 109 | $ docker network ls 110 | NETWORK ID NAME DRIVER SCOPE 111 | edf9dc771fc4 bridge bridge local 112 | e5702f60b7c9 docker_gwbridge bridge local 113 | 7d6b733ee498 host host local 114 | rnyatjul3qhn ingress overlay swarm 115 | 52fb9de4ad1c mybridge bridge local 116 | dbd52ffda3ae none null local 117 | ``` 118 | 119 | The newly created `mybridge` network is listed. 120 | 121 | > Note: Docker creates several networks by default, however the purpose of those networks is outside the scope of this workshop. 122 | 123 | 3. Switch to `node2` 124 | 125 | 4. List the available networks on `node2` 126 | 127 | ``` 128 | $ docker network ls 129 | NETWORK ID NAME DRIVER SCOPE 130 | 3bc2a78be20f bridge bridge local 131 | 641bdc72dc8b docker_gwbridge bridge local 132 | a5ef170a2758 host host local 133 | rnyatjul3qhn ingress overlay swarm 134 | 3dec80db87e4 none null local 135 | ``` 136 | 137 | Notice that the same network names exist on `node2` but their IDs are different. And, `mybridge` does not show up at all. 138 | 139 | 5. Move back to `node1` 140 | 141 | 6. Create an Alpine container named `alpine_host` running the `top` process in `detached` mode, and connect it to the `mybridge` network. 142 | 143 | ``` 144 | $ docker container run \ 145 | --detach \ 146 | --network mybridge \ 147 | --name alpine_host \ 148 | alpine top 149 | Unable to find image 'alpine:latest' locally 150 | latest: Pulling from library/alpine 151 | 88286f41530e: Pull complete 152 | Digest: sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d 153 | Status: Downloaded newer image for alpine:latest 154 | 974903580c3e452237835403bf3a210afad2ad1dff3e0b90f6d421733c2e05e6 155 | ``` 156 | > Note: We run the `top` process to keep the container from exiting as soon as it's created. 157 | 158 | 7. 
Start another Alpine container named `alpine_client` 158 | 159 | ``` 160 | $ docker container run \ 161 | --detach \ 162 | --name alpine_client \ 163 | alpine top 164 | c81a3a14f43fed93b6ce2eb10338c1749fde0fe7466a672f6d45e11fb3515536 165 | ``` 166 | 167 | 8. Attempt to PING `alpine_host` from `alpine_client` 168 | 169 | ``` 170 | $ docker exec alpine_client ping alpine_host 171 | ping: bad address 'alpine_host' 172 | ``` 173 | 174 | Because the two containers are not on the same network, they cannot reach each other. 175 | 176 | 9. Inspect `alpine_host` and `alpine_client` to see which networks they are attached to. 177 | 178 | ``` 179 | $ docker inspect -f {{.NetworkSettings.Networks}} alpine_host 180 | map[mybridge:0xc420466000] 181 | 182 | $ docker inspect -f {{.NetworkSettings.Networks}} alpine_client 183 | map[bridge:0xc4204420c0] 184 | ``` 185 | 186 | `alpine_host` is, as expected, attached to the `mybridge` network. 187 | 188 | `alpine_client` is attached to the default bridge network, `bridge`. 189 | 190 | 10. Stop and remove `alpine_client` 191 | 192 | ``` 193 | $ docker container rm --force alpine_client 194 | alpine_client 195 | ``` 196 | 197 | 11. Start another container called `alpine_client`, but attach it to the `mybridge` network this time. 198 | 199 | ``` 200 | $ docker container run \ 201 | --detach \ 202 | --network mybridge \ 203 | --name alpine_client \ 204 | alpine top 205 | 8cf39f89560fa8b0f6438222b4c5e3fe53bdeab8133cb59038650231f3744a79 206 | ``` 207 | 208 | 12. Verify via `inspect` that `alpine_client` is on the `mybridge` network 209 | 210 | ``` 211 | $ docker inspect -f {{.NetworkSettings.Networks}} alpine_client 212 | map[mybridge:0xc42043e0c0] 213 | ``` 214 | 215 | 13. 
PING `alpine_host` from `alpine_client` 216 | 217 | ``` 218 | $ docker exec alpine_client ping -c 5 alpine_host 219 | PING alpine_host (172.20.0.2): 56 data bytes 220 | 64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.102 ms 221 | 64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.108 ms 222 | 64 bytes from 172.20.0.2: seq=2 ttl=64 time=0.088 ms 223 | 64 bytes from 172.20.0.2: seq=3 ttl=64 time=0.113 ms 224 | 64 bytes from 172.20.0.2: seq=4 ttl=64 time=0.122 ms 225 | 226 | --- alpine_host ping statistics --- 227 | 5 packets transmitted, 5 packets received, 0% packet loss 228 | round-trip min/avg/max = 0.088/0.106/0.122 ms 229 | ``` 230 | 231 | Something to notice is that it was not necessary to specify an IP address. Docker has a built-in DNS service that resolved `alpine_host` to the correct address. 232 | 233 | Being able to network containers on a single host is only somewhat useful. It might be fine for a simple test environment, but production environments require the scalability and fault tolerance that come from having multiple interconnected hosts. 234 | 235 | This is where overlay networking comes in. 236 | 237 | ### Overlay networking overview 238 | 239 | Overlay networks in Docker are software-defined networks that span multiple hosts (unlike a bridge network, which is limited to a single Docker host). This allows containers on different hosts to easily communicate on the Docker networking fabric (versus having to route out through the host's network). 240 | 241 | The next section covers building an overlay network and having two containers communicate with each other. 242 | 243 | 1. Remove the existing Alpine containers 244 | 245 | ``` 246 | $ docker container rm --force $(docker ps --all --quiet) 247 | e65629beeb57 248 | 5cc5eeaf703b 249 | ``` 250 | 251 | 2. Create a new overlay network (`-d` specifies the networking driver to use; if it's omitted, `bridge` is the default). 
252 | 253 | ``` 254 | $ docker network create --attachable -d overlay myoverlay 255 | z16nhzxwbeukjnz3e6nk2159p 256 | ``` 257 | 258 | > Note: We have to use the `--attachable` flag because, by default, you cannot use `docker run` on overlay networks that are part of a swarm. The preferred method is to use a Docker *service*, which is covered later in the workshop. 259 | 260 | 3. List the networks on the host to verify that the `myoverlay` network was created. 261 | 262 | ``` 263 | $ docker network ls 264 | NETWORK ID NAME DRIVER SCOPE 265 | edf9dc771fc4 bridge bridge local 266 | e5702f60b7c9 docker_gwbridge bridge local 267 | 7d6b733ee498 host host local 268 | rnyatjul3qhn ingress overlay swarm 269 | 52fb9de4ad1c mybridge bridge local 270 | z16nhzxwbeuk myoverlay overlay swarm 271 | dbd52ffda3ae none null local 272 | ``` 273 | 274 | 4. Create an Alpine container and attach it to the `myoverlay` network. 275 | 276 | ``` 277 | $ docker container run \ 278 | --detach \ 279 | --network myoverlay \ 280 | --name alpine_host \ 281 | alpine top 282 | a604aa48660835aeec75f3239964d35c334bcdf33d1b5574c319aaf344c2119a 283 | ``` 284 | 285 | 5. Move to `node2` 286 | 287 | 6. List the available networks 288 | 289 | ``` 290 | $ docker network ls 291 | NETWORK ID NAME DRIVER SCOPE 292 | 3bc2a78be20f bridge bridge local 293 | 641bdc72dc8b docker_gwbridge bridge local 294 | a5ef170a2758 host host local 295 | rnyatjul3qhn ingress overlay swarm 296 | 3dec80db87e4 none null local 297 | ``` 298 | 299 | Notice anything out of the ordinary? Where's the `myoverlay` network? 300 | 301 | Docker won't extend the network to hosts where it's not needed. In this case, there are no containers attached to `myoverlay` on `node2`, so the network has not been extended to the host. 302 | 303 | 7. 
Start an Alpine container and attach it to `myoverlay` 304 | 305 | ``` 306 | $ docker container run \ 307 | --detach \ 308 | --network myoverlay \ 309 | --name alpine_client \ 310 | alpine top 311 | Unable to find image 'alpine:latest' locally 312 | latest: Pulling from library/alpine 313 | 88286f41530e: Pull complete 314 | Digest: sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d 315 | Status: Downloaded newer image for alpine:latest 316 | 5d67e360d8e42c618dc8ea40ecd745280a8002652c7bcdc7982cb5c6cdd4fd13 317 | ``` 318 | 319 | 8. List the available networks on `node2` 320 | 321 | ``` 322 | $ docker network ls 323 | NETWORK ID NAME DRIVER SCOPE 324 | 3bc2a78be20f bridge bridge local 325 | 641bdc72dc8b docker_gwbridge bridge local 326 | a5ef170a2758 host host local 327 | rnyatjul3qhn ingress overlay swarm 328 | z2fh5l7g1b4k myoverlay overlay swarm 329 | 3dec80db87e4 none null local 330 | ``` 331 | 332 | The `myoverlay` network is now available on `node2`. 333 | 334 | 9. Ping `alpine_host` 335 | 336 | ``` 337 | $ docker exec alpine_client ping -c 5 alpine_host 338 | PING alpine_host (10.0.0.2): 56 data bytes 339 | 64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.244 ms 340 | 64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.122 ms 341 | 64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.166 ms 342 | 64 bytes from 10.0.0.2: seq=3 ttl=64 time=0.201 ms 343 | 64 bytes from 10.0.0.2: seq=4 ttl=64 time=0.137 ms 344 | ``` 345 | Networking also works between Linux and Windows nodes. 346 | 347 | 10. Move to the Windows node 348 | 349 | 11. 
Ping `alpine_host` from the Windows node 350 | 351 | ``` 352 | PS C:\> docker container run ` 353 | --network myoverlay ` 354 | --name windows_client ` 355 | --rm ` 356 | microsoft/windowsservercore powershell ping alpine_host 357 | 358 | Pinging alpine_host [10.0.0.2] with 32 bytes of data: 359 | Reply from 10.0.0.2: bytes=32 time<1ms TTL=64 360 | Reply from 10.0.0.2: bytes=32 time<1ms TTL=64 361 | Reply from 10.0.0.2: bytes=32 time<1ms TTL=64 362 | Reply from 10.0.0.2: bytes=32 time<1ms TTL=64 363 | ``` 364 | 365 | > Note: In some cases it may take a few seconds for the Windows client to find the Alpine host, resulting in ping timeouts. If this happens, simply retry the above command. 366 | 367 | ## Deploying Swarm services 368 | 369 | While we have been using `docker run` to instantiate Docker containers on our Swarm cluster, the preferred way to actually run applications is via a [*service*](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/). 370 | 371 | Services are an abstraction in Docker that represent an application or a component of an application. For instance, a web front-end service might connect to a database back-end service. You can also deploy an application made up of a single service. In fact, this is quite common when [modernizing traditional applications](https://goto.docker.com/rs/929-FJL-178/images/SB_MTA_04.14.2017.pdf). 372 | 373 | The service construct provides a host of useful features, including: 374 | 375 | * Load balancing 376 | * Layer 4 and layer 7 routing meshes 377 | * Desired state reconciliation 378 | * Service discovery 379 | * Health checks 380 | * Upgrades and rollback 381 | * Scaling 382 | 383 | This workshop cannot possibly cover all these topics, but will address several key points. 384 | 385 | This lab will deploy a two-service application. The application features a Java-based web front end running on Linux, and a Microsoft SQL Server database running on Windows. 
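As an aside, a multi-service application like this one can also be described declaratively in a stack file and deployed in one shot with `docker stack deploy`. The lab itself uses individual `docker service create` commands instead, so treat the following as a rough sketch only: the file name is hypothetical, and per-service options used later in the lab (such as DNS round-robin endpoint mode and host-mode publishing for the database) are omitted for brevity.

```yaml
# Hypothetical stack file, e.g. atsea-stack.yml
# Deployable from a manager node with:
#   docker stack deploy -c atsea-stack.yml atsea
version: "3.2"

services:
  database:
    image: sixeyed/atsea-db:mssql
    networks:
      - atsea

  appserver:
    image: mikegcoleman/atsea_appserver:1.0
    ports:
      - "8080:8080"
    networks:
      - atsea

networks:
  atsea:
    driver: overlay
```

The stack file captures the same intent as the commands below (two services joined by an `atsea` overlay network), with desired state stored in one versionable file.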
386 | 387 | ### Deploying a Multi-OS Application with Docker Swarm 388 | 389 | 1. Move to `node1` 390 | 391 | 2. Create an overlay network for the application 392 | 393 | ``` 394 | $ docker network create -d overlay atsea 395 | foqztzic1x95kiuq9cuqwuldi 396 | ``` 397 | 398 | 3. Deploy the database service 399 | 400 | ``` 401 | $ docker service create \ 402 | --name database \ 403 | --endpoint-mode dnsrr \ 404 | --network atsea \ 405 | --publish mode=host,target=1433 \ 406 | --detach=true \ 407 | sixeyed/atsea-db:mssql 408 | ywlkfxw2oim67fuf9tue7ndyi 409 | ``` 410 | The service is created with the following parameters: 411 | 412 | * `--name`: Gives the service an easily remembered name 413 | * `--endpoint-mode`: Currently, all services running on Windows need to be started in DNS round-robin mode. 414 | * `--network`: Attaches the containers from our service to the `atsea` network 415 | * `--publish`: Exposes port 1433, but only on the Windows host. 416 | * `--detach`: Runs the service in the background 417 | * Our service is based on the image `sixeyed/atsea-db:mssql` 418 | 419 | 4. Check the status of your service 420 | 421 | ``` 422 | $ docker service ps database 423 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 424 | rgwtocu21j0f database.1 sixeyed/atsea-db:mssql win00003R Running Running 3 minutes ago 425 | ``` 426 | 427 | > Note: Keep checking the status of the service until the `CURRENT STATE` is running. This usually takes 2-3 minutes. 428 | 429 | 5. Start the web front-end service 430 | 431 | ``` 432 | $ docker service create \ 433 | --publish 8080:8080 \ 434 | --network atsea \ 435 | --name appserver \ 436 | --detach=true \ 437 | mikegcoleman/atsea_appserver:1.0 438 | tqvr2cxk31tr0ryel5ey4zmwr 439 | ``` 440 | 441 | 6. 
List all the services running on your host 442 | 443 | ``` 444 | $ docker service ls 445 | ID NAME MODE REPLICAS IMAGE PORTS 446 | tqvr2cxk31tr appserver replicated 1/1 dockersamples/atsea-appserver:1.0 *:8080->8080/tcp 448 | xkm68h7z3wsu database replicated 1/1 sixeyed/atsea-db:mssql 449 | ``` 450 | 451 | 7. Make sure both services are up and running (check the `Current State`) 452 | 453 | ``` 454 | $ docker service ps $(docker service ls -q) 455 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 457 | jhetafd6jd7u database.1 sixeyed/atsea-db:mssql win00003R Running Running 3 minutes ago *:64024->1433/tcp 459 | 2cah7mw5a5c7 appserver.1 dockersamples/atsea-appserver:1.0 node1 Running Running 6 minutes ago 461 | ``` 462 | 463 | 8. Visit the running website by clicking the `8080` at the top of the PWD screen. 464 | 465 | We've successfully deployed our application. One thing to note is that we did not have to tell Swarm to put the database on the Windows node or the Java web server on the Linux node. It was able to sort that out by itself. 466 | 467 | Another key point is that our application code knows nothing about our networking code. The only thing it knows is that the database hostname is going to be `database`. So in our application code, the database connection string looks like this: 468 | 469 | ``` 470 | jdbc:sqlserver://database;user=MyUserName;password=*****; 471 | ``` 472 | 473 | So long as the database service is started with the name `database` and is on the same Swarm network, the two services can talk. 474 | 475 | ### Upgrades and Rollback 476 | 477 | A common scenario is the need to upgrade an application or application component. In this section we are going to unsuccessfully attempt to upgrade the web front-end. We'll roll back from that attempt, and then perform a successful upgrade. 478 | 479 | 1. Make sure you're on `node1` 480 | 481 | To upgrade our application, we're simply going to roll out an updated Docker image. 
In this case, version `2.0`. 482 | 483 | 2. Upgrade the Appserver service to version 2.0 484 | 485 | ``` 486 | $ docker service update \ 487 | --image dockersamples/atsea-appserver:2.0 \ 488 | --update-failure-action pause \ 489 | --detach=true \ 490 | appserver 491 | ``` 492 | 493 | 3. Check on the status of the upgrade 494 | 495 | ``` 496 | $ docker service ps appserver 497 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 499 | pjt4g23r0oo1 appserver.1 dockersamples/atsea-appserver:2.0 node1 Running Starting less than a second ago 501 | usx1sk2gtoib \_ appserver.1 dockersamples/atsea-appserver:2.0 node2 Shutdown Failed 5 seconds ago "task: non-zero exit (143): do…" 503 | suee368vg3r1 \_ appserver.1 dockersamples/atsea-appserver:1.0 node1 Shutdown Shutdown 24 seconds ago 504 | ``` 505 | 506 | Clearly there is some issue, as the containers are failing to start. 507 | 508 | 4. Check on the status of the update 509 | 510 | ``` 511 | $ docker service inspect -f '{{json .UpdateStatus}}' appserver | jq 512 | { 513 | "State": "paused", 514 | "StartedAt": "2017-10-14T00:38:30.188743311Z", 515 | "Message": "update paused due to failure or early termination of task umidyotoa5i4gryk5vsrutwrq" 516 | } 517 | ``` 518 | 519 | Because we had set `--update-failure-action` to `pause`, Swarm paused the update. 520 | 521 | In the case of a failed upgrade, Swarm makes it easy to recover. Simply pass the `--rollback` flag to `docker service update`. 522 | 523 | 5. Roll the service back to the original version 524 | 525 | ``` 526 | $ docker service update \ 527 | --rollback \ 528 | --detach=true \ 529 | appserver 530 | appserver 531 | ``` 532 | 533 | 6. 
Check on the status of the service 534 | 535 | ``` 536 | $ docker service ps appserver 537 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 538 | yoswxm44q9vg appserver.1 mikegcoleman/atsea_appserver:1.0 node2 Running Running 11 seconds ago 539 | lacfi5xiu6e7 \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Shutdown 25 seconds ago 540 | tvcr9dwvm578 \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 49 seconds ago "task: non-zero exit (143): do…" 541 | xg4274jpochx \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed about a minute ago "task: non-zero exit (143): do…" 542 | z7toh7jwk8qf \_ appserver.1 mikegcoleman/atsea_appserver:1.0 node1 Shutdown Shutdown about a minute ago 543 | ``` 544 | 545 | The top line shows the service is back on the `1.0` version, and running. 546 | 547 | 7. Visit the website to make sure it's running 548 | 549 | That was a simulated upgrade failure and rollback. Next, the service will be successfully upgraded to version 3 of the app. 550 | 551 | 8. Upgrade to version 3 552 | 553 | ``` 554 | $ docker service update \ 555 | --image dockersamples/atsea-appserver:3.0 \ 556 | --update-failure-action pause \ 557 | --detach=true \ 558 | appserver 559 | appserver 560 | ``` 561 | 562 | 9. Check the status of the upgrade 563 | 564 | ``` 565 | $ docker service ps appserver 566 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 567 | ytygwmyhumrt appserver.1 dockersamples/atsea-appserver:3.0 node1 Running Running 29 seconds ago 568 | zjkmbjw7u8e0 \_ appserver.1 mikegcoleman/atsea_appserver:1.0 node1 Shutdown Shutdown 47 seconds ago 569 | wemedok12frl \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 2 minutes ago "task: non-zero exit (143): do…" 570 | u6wd7wje82zn \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 2 minutes ago "task: non-zero exit (143): do…" 571 | ``` 572 | 573 | 10. 
Once the status reports back "Running xx seconds", reload the website once again to verify that the new version has been deployed. 574 | 575 | ### Scale the front end 576 | 577 | The new update has really increased traffic to the site. As a result, we need to scale out our web front end. This is done by issuing a `docker service update` and specifying the number of replicas to deploy. 578 | 579 | 1. Scale to 6 replicas of the web front-end 580 | 581 | ``` 582 | $ docker service update \ 583 | --replicas=6 \ 584 | --detach=true \ 585 | appserver 586 | appserver 587 | ``` 588 | 589 | 2. Check the status of the update 590 | 591 | ``` 592 | $ docker service ps appserver 593 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 595 | vfbzj3axoays appserver.1 dockersamples/atsea-appserver:3.0 node1 Running Running 2 minutes ago 596 | 597 | yoswxm44q9vg \_ appserver.1 mikegcoleman/atsea_appserver:1.0 node2 Shutdown Shutdown 2 minutes ago 598 | tvcr9dwvm578 \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 5 minutes ago "task: non-zero exit (143): do…" 599 | xg4274jpochx \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 6 minutes ago "task: non-zero exit (143): do…" 600 | z7toh7jwk8qf \_ appserver.1 mikegcoleman/atsea_appserver:1.0 node1 Shutdown Shutdown 7 minutes ago 601 | i474a8emgwbc appserver.2 dockersamples/atsea-appserver:3.0 node2 Running Starting 30 seconds ago 602 | gu7rphvp2q3l appserver.3 dockersamples/atsea-appserver:3.0 node2 Running Starting 30 seconds ago 603 | gzjdye1kne33 appserver.4 dockersamples/atsea-appserver:3.0 node1 Running Running 7 seconds ago 604 | u596cqkgf2aa appserver.5 dockersamples/atsea-appserver:3.0 node2 Running Starting 30 seconds ago 605 | jqkokd2uoki6 appserver.6 dockersamples/atsea-appserver:3.0 node1 Running Running 12 seconds ago 606 | ``` 607 | 608 | Docker is starting up five new instances of the appserver, and is placing them across both nodes in the cluster. 
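Rather than re-running `docker service ps` and eyeballing the long task list, the number of running replicas can be counted with standard shell tools. A minimal sketch, shown here against a trimmed, captured sample of the output above (on a live Swarm manager you would pipe `docker service ps appserver` into the same function):

```shell
#!/bin/sh
# Count tasks whose CURRENT STATE column reads "Running".
# Historical tasks are prefixed with "\_" in the first field, so we skip those,
# and match rows where DESIRED STATE and CURRENT STATE are both "Running".
count_running() {
  awk '$1 != "\\_" && /Running +Running/ { n++ } END { print n+0 }'
}

# Captured sample standing in for: docker service ps appserver
sample='ID            NAME          IMAGE                              NODE   DESIRED STATE  CURRENT STATE
vfbzj3axoays  appserver.1   dockersamples/atsea-appserver:3.0  node1  Running        Running 2 minutes ago
i474a8emgwbc  appserver.2   dockersamples/atsea-appserver:3.0  node2  Running        Starting 30 seconds ago
gzjdye1kne33  appserver.4   dockersamples/atsea-appserver:3.0  node1  Running        Running 7 seconds ago'

printf '%s\n' "$sample" | count_running   # prints 2 for this sample
```

When the count reaches the replica target (6 in this exercise), the scale-out is complete.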
609 | 610 | When all six replicas are running, move on to the next step. 611 | 612 | ### Failure and recovery 613 | The next exercise simulates a node failure. When a node fails, the containers that were running there are, of course, lost as well. Swarm constantly monitors the state of the cluster, and when it detects an anomaly it attempts to bring the cluster back into compliance. 614 | 615 | In its current state, Swarm expects there to be six instances of the appserver. When the node "fails", three of those instances will go out of service. 616 | 617 | 1. Put `node2` into *drain* mode. Draining a node forces it to stop all the running containers it hosts, and prevents it from running any additional containers. 618 | 619 | ``` 620 | $ docker node update \ 621 | --availability=drain \ 622 | node2 623 | ``` 624 | 625 | 2. Check the status of the service 626 | 627 | ``` 628 | $ docker service ps appserver 629 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 630 | vfbzj3axoays appserver.1 dockersamples/atsea-appserver:3.0 node1 Running Running 8 minutes ago 631 | yoswxm44q9vg \_ appserver.1 mikegcoleman/atsea_appserver:1.0 node2 Shutdown Shutdown 8 minutes ago 632 | tvcr9dwvm578 \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 11 minutes ago "task: non-zero exit (143): do…" 633 | xg4274jpochx \_ appserver.1 dockersamples/atsea-appserver:2.0 node1 Shutdown Failed 12 minutes ago "task: non-zero exit (143): do…" 634 | z7toh7jwk8qf \_ appserver.1 mikegcoleman/atsea_appserver:1.0 node1 Shutdown Shutdown 12 minutes ago 635 | zmp7mfpme2go appserver.2 dockersamples/atsea-appserver:3.0 node1 Running Starting 5 seconds ago 636 | i474a8emgwbc \_ appserver.2 dockersamples/atsea-appserver:3.0 node2 Shutdown Shutdown 5 seconds ago 637 | l7gxju3x6zx8 appserver.3 dockersamples/atsea-appserver:3.0 node1 Running Starting 5 seconds ago 638 | gu7rphvp2q3l \_ appserver.3 dockersamples/atsea-appserver:3.0 node2 Shutdown Shutdown 5 seconds ago 639 | gzjdye1kne33 
appserver.4 dockersamples/atsea-appserver:3.0 node1 Running Running 5 minutes ago 640 | ure9u7li7myv appserver.5 dockersamples/atsea-appserver:3.0 node1 Running Starting 5 seconds ago 641 | u596cqkgf2aa \_ appserver.5 dockersamples/atsea-appserver:3.0 node2 Shutdown Shutdown 5 seconds ago 642 | jqkokd2uoki6 appserver.6 dockersamples/atsea-appserver:3.0 node1 Running Running 6 minutes ago 643 | ``` 644 | 645 | The output above shows that the containers that were running on `node2` have been shut down and are being restarted on `node1` 646 | 647 | 3. List the status of our services 648 | 649 | ``` 650 | $ docker service ls 651 | ID NAME MODE REPLICAS IMAGE PORTS 652 | qbeqlc6v0g0z appserver replicated 6/6 dockersamples/atsea-appserver:3.0 *:8080->8080/tcp 653 | s3luy288gn9l database replicated 1/1 sixeyed/atsea-db:mssql 654 | ``` 655 | 656 | In a minute or two, all the services should be restarted, and Swarm will report back that it has six of the six expected containers running. 657 | --------------------------------------------------------------------------------