├── README.md
└── demo-commands.md

/README.md:
--------------------------------------------------------------------------------

# Docker Networking Workshop

Hi, welcome to the Docker Networking Workshop!

You will get your hands dirty by working through examples of a few basic networking concepts, learning about Bridge and Overlay networking, and finally about the Swarm Routing Mesh.

> **Difficulty**: Beginner to Intermediate

> **Time**: Approximately 45 minutes

> **Tasks**:
>
> * [Prerequisites](#prerequisites)
> * [Section #1 - Networking Basics](#task1)
> * [Section #2 - Bridge Networking](#task2)
> * [Section #3 - Overlay Networking](#task3)

## Document conventions

When you encounter a phrase between `<` and `>` you are meant to substitute in a different value.

For instance, if you see `ssh <username>@<hostname>` you would actually type something like `ssh ubuntu@node0.ivaf2i2atqouppoxund0tvddsa.jx.internal.cloudapp.net`

You will be asked to SSH into various nodes. These nodes are referred to as **node0** or **node1**. These tags correspond to the very beginning of the hostnames found on the hands-on labs welcome card you were given.

## Prerequisites

This lab requires two Linux nodes with Docker 17.03 (or higher) installed.

Also, please make sure you can SSH into the Linux nodes. If you haven't already done so, please SSH in to **node0** and **node1**.

```
$ ssh ubuntu@<node0 public IP>
```

and

```
$ ssh ubuntu@<node1 public IP>
```

**Note: Password will be provided during the workshop.**

# <a name="task1"></a>Section #1 - Networking Basics

## Step 1: The Docker Network Command

If you haven't already done so, please SSH in to **node0**.

```
$ ssh ubuntu@<node0 public IP>
```

The `docker network` command is the main command for configuring and managing container networks. Run the `docker network` command from **node0**.

```
$ docker network

Usage:  docker network COMMAND

Manage networks

Options:
      --help   Print usage

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
```

The command output shows how to use the command as well as all of the `docker network` sub-commands. As you can see from the output, the `docker network` command allows you to create new networks, list existing networks, inspect networks, and remove networks. It also allows you to connect and disconnect containers from networks.

## Step 2: List networks

Run a `docker network ls` command on **node0** to view existing container networks on the current Docker host.

```
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
3430ad6f20bf        bridge              bridge              local
a7449465c379        host                host                local
06c349b9cc77        none                null                local
```

The output above shows the container networks that are created as part of a standard installation of Docker.

New networks that you create will also show up in the output of the `docker network ls` command.
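
For example, if you created a throwaway network (the name `testnet` below is just an illustration, not part of the lab), it would appear in the list alongside the defaults, and you could remove it again straight away:

```
$ docker network create testnet   # defaults to the bridge driver
$ docker network ls               # testnet now appears with DRIVER bridge and SCOPE local
$ docker network rm testnet       # clean up so the lab outputs below still match
```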

You can see that each network gets a unique `ID` and `NAME`. Each network is also associated with a single driver. Notice that the "bridge" network and the "host" network have the same name as their respective drivers.

## Step 3: Inspect a network

The `docker network inspect` command is used to view network configuration details. These details include: name, ID, driver, IPAM driver, subnet info, connected containers, and more.

Use `docker network inspect <network>` on **node0** to view configuration details of the container networks on your Docker host. The command below shows the details of the network called `bridge`.

```
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "3430ad6f20bf1486df2e5f64ddc93cc4ff95d81f59b6baea8a510ad500df2e57",
        "Created": "2017-04-03T16:49:58.6536278Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
```

> **NOTE:** The syntax of the `docker network inspect` command is `docker network inspect <network>`, where `<network>` can be either a network name or a network ID. In the example above we are showing the configuration details for the network called "bridge". Do not confuse this with the "bridge" driver.


## Step 4: List network driver plugins

The `docker info` command shows a lot of interesting information about a Docker installation.

Run the `docker info` command on **node0** and locate the list of network plugins.

```
$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.03.1-ee-3
Storage Driver: aufs

Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc

```

The output above shows the **bridge**, **host**, **macvlan**, **null**, and **overlay** drivers.

# <a name="task2"></a>Section #2 - Bridge Networking


## Step 1: The Basics

Every clean installation of Docker comes with a pre-built network called **bridge**. Verify this with the `docker network ls` command on **node0**.

```
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
3430ad6f20bf        bridge              bridge              local
a7449465c379        host                host                local
06c349b9cc77        none                null                local
```

The output above shows that the **bridge** network is associated with the *bridge* driver. It's important to note that the network and the driver are connected, but they are not the same. In this example the network and the driver have the same name - but they are not the same thing!

The output above also shows that the **bridge** network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the *bridge* driver - the *bridge* driver provides single-host networking.
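
If you just want to check a network's scope without reading the full `inspect` output, a `--format` filter will print it directly. This is a quick sketch rather than a required lab step:

```
$ docker network inspect -f '{{.Scope}}' bridge
local
```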

All networks created with the *bridge* driver are based on a Linux bridge (a.k.a. a virtual switch).


## Step 2: Connect a container

The **bridge** network is the default network for new containers. This means that unless you specify a different network, all new containers will be connected to the **bridge** network. In this exercise, however, you will create and use your own bridge network.

Create a new bridge network on **node0** and call it `br`.

```
$ docker network create -d bridge br
846af8479944d406843c90a39cba68373c619d1feaa932719260a5f5afddbf71
```

Now create a container called `c1` and attach it to your new `br` network.

```
$ docker run -itd --net br --name c1 alpine sh
086c4c0cc18c7f603279b728d9baac4b63d25941f576b37c5d1e988de6202410
```

This command will create a new container based on the `alpine:latest` image.

Running `docker network inspect br` will show the containers attached to that network.

```
$ docker network inspect br

[
    {
        "Name": "br",
        "Id": "e7b30cacc686ff891a5a5ea393e055c309a07bc652feed375821e2f78faf9aa0",
        "Created": "2017-04-13T13:19:37.611068665Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "086c4c0cc18c7f603279b728d9baac4b63d25941f576b37c5d1e988de6202410": {
                "Name": "c1",
                "EndpointID": "06eac477e1a04f2c5e676633ddab344086104511470c539a2fb7aedf8b1d58f8",
                "MacAddress": "02:42:ac:11:00:06",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
```



## Step 3: Test network connectivity

The output of the previous `docker network inspect` command shows the IP address of the new container. In the previous example it is "172.17.0.6" but yours might be different.

Ping the IP address of the container from the shell prompt of your Docker host by running `ping -c 3 <IPv4 address>` on **node0**. Remember to use the IP of the container in **your** environment.

You can get the IP address of the container directly from the Docker engine by running `docker inspect --format "{{ .NetworkSettings.Networks.br.IPAddress }}" c1`.

```
$ ping -c 3 172.17.0.6
PING 172.17.0.6 (172.17.0.6) 56(84) bytes of data.
64 bytes from 172.17.0.6: icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from 172.17.0.6: icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from 172.17.0.6: icmp_seq=3 ttl=64 time=0.048 ms
...
```

The replies above show that the Docker host can ping the container over the **bridge** network. We can also verify that the container can connect to the outside world.

Next, enter the `c1` container using `docker exec`. We will pass the `sh` command to `docker exec`, which gives us an interactive shell inside the container.
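
As an aside, `docker exec` can also run a single command and return straight to the host prompt, which is handy for quick checks. A small sketch using the `c1` container created above:

```
$ docker exec c1 ip addr show eth0   # prints c1's eth0 details without starting a shell
```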

Now enter the container interactively and inspect its network interfaces.

```
$ docker exec -it c1 sh
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
879: eth0@if880: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.6/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe13:2/64 scope link
       valid_lft forever preferred_lft forever
```

Prove that containers can gain outside access by pinging `www.docker.com`.

```
# ping -c 3 www.docker.com
PING www.docker.com (104.239.220.248): 56 data bytes
64 bytes from 104.239.220.248: seq=0 ttl=36 time=77.722 ms
64 bytes from 104.239.220.248: seq=1 ttl=36 time=77.865 ms
64 bytes from 104.239.220.248: seq=2 ttl=36 time=77.830 ms
```

Exit out of the container.

```
# exit
```

Now you will create a second container on this bridge so you can test connectivity between them.

```
$ docker run -itd --net br --name c2 alpine sh
75f840c9d17b2921c1e78555c97cd5116e1563b1e33f9328bd5b0a8e1c55b520
```

Enter the `c2` container with `docker exec` and try to ping the IP address of `c1`.

```
$ docker exec -it c2 sh
# ping -c 3 172.17.0.6
PING 172.17.0.6 (172.17.0.6): 56 data bytes
64 bytes from 172.17.0.6: seq=0 ttl=64 time=0.091 ms
64 bytes from 172.17.0.6: seq=1 ttl=64 time=0.077 ms
64 bytes from 172.17.0.6: seq=2 ttl=64 time=0.079 ms
```

Now ping container `c1` using its name. The Docker engine provides this name resolution automatically for all container and service names on user-defined networks.


```
# ping -c 3 c1
PING c1 (172.17.0.6): 56 data bytes
64 bytes from 172.17.0.6: seq=0 ttl=64 time=0.091 ms
64 bytes from 172.17.0.6: seq=1 ttl=64 time=0.077 ms
64 bytes from 172.17.0.6: seq=2 ttl=64 time=0.079 ms
```

Exit container `c2` and remove these two containers from this host.

```
# exit
$ docker rm -f $(docker ps -aq)
```



## Step 4: Configure NAT for external connectivity

In this step we'll start a new **NGINX** container and map port 8000 on the Docker host to port 80 inside of the container. This means that traffic that hits the Docker host on port 8000 will be passed on to port 80 inside the container.

> **NOTE:** If you start a new container from the official NGINX image without specifying a command to run, the container will run a basic web server on port 80.

Start a new container based off the official NGINX image by running `docker run --name web1 -d -p 8000:80 nginx` on **node0**.

```
$ docker run --name web1 -d -p 8000:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
6d827a3ef358: Pull complete
b556b18c7952: Pull complete
03558b976e24: Pull complete
9abee7e1ef9d: Pull complete
Digest: sha256:52f84ace6ea43f2f58937e5f9fc562e99ad6876e82b99d171916c1ece587c188
Status: Downloaded newer image for nginx:latest
4e0da45b0f169f18b0e1ee9bf779500cb0f756402c0a0821d55565f162741b3e
```

Review the container status and port mappings by running `docker ps` on **node0**.

```
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                           NAMES
4e0da45b0f16        nginx               "nginx -g 'daemon ..."   2 minutes ago       Up 2 minutes        443/tcp, 0.0.0.0:8000->80/tcp   web1
```

The top line shows the new **web1** container running NGINX. Take note of the command the container is running as well as the port mapping - `0.0.0.0:8000->80/tcp` maps port 8000 on all host interfaces to port 80 inside the **web1** container. This port mapping is what effectively makes the container's web service accessible from external sources (via the Docker host's IP address on port 8000).

Now that the container is running and mapped to a port on a host interface you can test connectivity to the NGINX web server.

To complete the following task you will need the IP address of your Docker host. This will need to be an IP address that you can reach (e.g. your lab is hosted in Azure, so this will be the instance's public IP - the one you SSH'd into). Just point your web browser to that IP on port 8000. If you try connecting to the same IP address on a different port number it will fail.

If for some reason you cannot open a session from a web browser, you can connect from your Docker host using the `curl 127.0.0.1:8000` command on **node0**.

```
$ curl 127.0.0.1:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

If you try to curl the IP address on a different port number it will fail.

> **NOTE:** The port mapping is actually port address translation (PAT).

# <a name="task3"></a>Section #3 - Overlay Networking

## Step 1: The Basics

In this step you'll initialize a new Swarm, join a single worker node, and verify the operations worked.

Run `docker swarm init` on **node0**.

```
$ docker swarm init
Swarm initialized: current node (rzyy572arjko2w0j82zvjkc6u) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-69b2x1u2wtjdmot0oqxjw1r2d27f0lbmhfxhvj83chln1l6es5-37ykdpul0vylenefe2439cqpf \
    10.0.0.5:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```

If you haven't already done so, please SSH in to **node1**.

```
$ ssh ubuntu@<node1 public IP>
```

Copy the entire `docker swarm join ...` command from the output on **node0**, then paste it into the terminal of **node1**.

```
$ docker swarm join \
>     --token SWMTKN-1-69b2x1u2wtjdmot0oqxjw1r2d27f0lbmhfxhvj83chln1l6es5-37ykdpul0vylenefe2439cqpf \
>     10.0.0.5:2377
This node joined a swarm as a worker.
```

Run a `docker node ls` on **node0** to verify that both nodes are part of the Swarm.

```
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
ijjmqthkdya65h9rjzyngdn48    node1     Ready   Active
rzyy572arjko2w0j82zvjkc6u *  node0     Ready   Active        Leader
```

The `ID` and `HOSTNAME` values may be different in your lab. The important thing to check is that both nodes have joined the Swarm and are *ready* and *active*.

## Step 2: Create an overlay network

Now that you have a Swarm initialized it's time to create an **overlay** network.

Create a new overlay network called "overnet" by running `docker network create -d overlay --subnet 10.10.10.0/24 overnet` on **node0**.

```
$ docker network create -d overlay --subnet 10.10.10.0/24 overnet
wlqnvajmmzskn84bqbdi1ytuy
```

Use the `docker network ls` command to verify the network was created successfully.

```
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
3430ad6f20bf        bridge              bridge              local
a4d584350f09        docker_gwbridge     bridge              local
a7449465c379        host                host                local
8hq1n8nak54x        ingress             overlay             swarm
06c349b9cc77        none                null                local
wlqnvajmmzsk        overnet             overlay             swarm
```

The new "overnet" network is shown on the last line of the output above. Notice how it is associated with the **overlay** driver and is scoped to the entire Swarm.

> **NOTE:** The other new networks (ingress and docker_gwbridge) were created automatically when the Swarm cluster was created.

Run the same `docker network ls` command from **node1**.

```
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
55f10b3fb8ed        bridge              bridge              local
b7b30433a639        docker_gwbridge     bridge              local
a7449465c379        host                host                local
8hq1n8nak54x        ingress             overlay             swarm
06c349b9cc77        none                null                local
```

Notice that the "overnet" network does **not** appear in the list. This is because Docker only extends overlay networks to hosts when they are needed - usually when a host runs a task from a service that is created on the network. We will see this shortly.
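
As an aside, the overlay driver can also encrypt its data-plane traffic if you pass `-o encrypted` when the network is created. The command below is only a sketch for reference - the network name is hypothetical and the lab itself uses the unencrypted "overnet" network created above.

```
$ docker network create -d overlay -o encrypted overnet-secure
```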

Use the `docker network inspect <network>` command to view more detailed information about the "overnet" network. You will need to run this command from **node0**.

```
$ docker network inspect overnet
[
    {
        "Name": "overnet",
        "Id": "wlqnvajmmzskn84bqbdi1ytuy",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
```

## Step 3: Create a service

Now that we have a Swarm initialized and an overlay network, it's time to create a service that uses the network.

Execute the following command from **node0** to create a new service called *myservice* on the *overnet* network with two tasks/replicas.

```
$ docker service create --name myservice \
   --network overnet \
   --replicas 2 \
   ubuntu sleep infinity

ov30itv6t2n7axy2goqbfqt5e
```

Verify that the service is created and both replicas are up by running `docker service ls`.

```
$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
ov30itv6t2n7  myservice  replicated  2/2       ubuntu:latest
```

The `2/2` in the `REPLICAS` column shows that both tasks in the service are up and running.

Verify that a single task (replica) is running on each of the two nodes in the Swarm by running `docker service ps myservice`.

```
$ docker service ps myservice
ID            NAME         IMAGE          NODE   DESIRED STATE  CURRENT STATE               ERROR  PORTS
riicggj5tuta  myservice.1  ubuntu:latest  node1  Running        Running about a minute ago
nlozn82wsttv  myservice.2  ubuntu:latest  node0  Running        Running about a minute ago
```

The `ID` and `NODE` values might be different in your output. The important thing to note is that each task/replica is running on a different node.

Now that **node1** is running a task on the "overnet" network it will be able to see that network. Let's run `docker network ls` from **node1** to verify this.

```
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
55f10b3fb8ed        bridge              bridge              local
b7b30433a639        docker_gwbridge     bridge              local
a7449465c379        host                host                local
8hq1n8nak54x        ingress             overlay             swarm
06c349b9cc77        none                null                local
wlqnvajmmzsk        overnet             overlay             swarm
```

We can also run `docker network inspect overnet` on **node1** to get more detailed information about the "overnet" network and obtain the IP address of the task running on **node1**.
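
If all you need is that IP address, a `--format` filter can pull it straight out of the inspect data; the full output is shown below for reference. A quick sketch (run on **node1**):

```
$ docker network inspect -f '{{range .Containers}}{{.IPv4Address}}{{end}}' overnet
10.10.10.3/24
```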

```
$ docker network inspect overnet
[
    {
        "Name": "overnet",
        "Id": "wlqnvajmmzskn84bqbdi1ytuy",
        "Created": "2017-04-04T09:35:47.526642642Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.10.10.0/24",
                    "Gateway": "10.10.10.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "fbc8bb0834429a68b2ccef25d3c90135dbda41e08b300f07845cb7f082bcdf01": {
                "Name": "myservice.1.riicggj5tutar7h7sgsvqg72r",
                "EndpointID": "8edf83ebce77aed6d0193295c80c6aa7a5b76a08880a166002ecda3a2099bb6c",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.10.10.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node0-f6a6f8e18a9d",
                "IP": "10.10.10.5"
            },
            {
                "Name": "node1-507a763bed93",
                "IP": "10.10.10.6"
            }
        ]
    }
]
```

You should note that as of Docker 1.12, `docker network inspect` only shows containers/tasks running on the local node. This means that `10.10.10.3` is the IPv4 address of the container running on **node1**. Make a note of this IP address for the next step (the IP address in your lab might be different than the one shown here in the lab guide).

## Step 4: Test the network

To complete this step you will need the IP address of the service task running on **node1** that you saw in the previous step (`10.10.10.3`).

Execute the following commands from **node0**.

```
$ docker network inspect overnet
[
    {
        "Name": "overnet",
        "Id": "wlqnvajmmzskn84bqbdi1ytuy",
        "Created": "2017-04-04T09:35:47.362263887Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.10.10.0/24",
                    "Gateway": "10.10.10.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "d676496d18f76c34d3b79fbf6573a5672a81d725d7c8704b49b4f797f6426454": {
                "Name": "myservice.2.nlozn82wsttv75cs9vs3ju7vs",
                "EndpointID": "36638a55fcf4ada2989650d0dde193bc2f81e0e9e3c153d3e9d1d85e89a642e6",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.10.10.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node0-f6a6f8e18a9d",
                "IP": "10.10.10.5"
            },
            {
                "Name": "node1-507a763bed93",
                "IP": "10.10.10.6"
            }
        ]
    }
]
```

Notice that the IP address listed for the service task (container) running on **node0** is different from the IP address of the service task running on **node1**. Note also that they are both on the same "overnet" network.

Run a `docker ps` command to get the ID of the service task on **node0** so that you can log in to it in the next step.
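
If you would rather not copy the ID by hand, a name filter can capture it into a shell variable - a sketch, assuming the `myservice` name used above (the full `docker ps` output is shown below):

```
$ TASK_ID=$(docker ps -q --filter name=myservice)
$ echo $TASK_ID
```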

```
$ docker ps
CONTAINER ID   IMAGE                                                                            COMMAND            CREATED          STATUS          PORTS   NAMES
d676496d18f7   ubuntu@sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535   "sleep infinity"   10 minutes ago   Up 10 minutes           myservice.2.nlozn82wsttv75cs9vs3ju7vs

```

Log on to the service task. Be sure to use the container `ID` from your environment as it will be different from the example shown below. We can do this by running `docker exec -it <CONTAINER ID> /bin/bash`.

```
$ docker exec -it d676496d18f7 /bin/bash
root@d676496d18f7:/#
```

Install the ping command and then ping the service task running on **node1**, which had an IP address of `10.10.10.3` in the `docker network inspect overnet` output.

```
root@d676496d18f7:/# apt-get update && apt-get install -y iputils-ping
```

Now, let's ping `10.10.10.3`.

```
root@d676496d18f7:/# ping -c5 10.10.10.3
PING 10.10.10.3 (10.10.10.3) 56(84) bytes of data.
...
--- 10.10.10.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss
```

The output above shows that both tasks from the **myservice** service are on the same overlay network spanning both nodes and that they can use this network to communicate.

## Step 5: Test service discovery

Now that you have a working service using an overlay network, let's test service discovery.

If you are not still inside of the container on **node0**, log back into it with the `docker exec -it <CONTAINER ID> /bin/bash` command.

Run `cat /etc/resolv.conf` from inside of the container on **node0**.

```
$ docker exec -it d676496d18f7 /bin/bash
root@d676496d18f7:/# cat /etc/resolv.conf
search ivaf2i2atqouppoxund0tvddsa.jx.internal.cloudapp.net
nameserver 127.0.0.11
options ndots:0
```

The value that we are interested in is `nameserver 127.0.0.11`. This value sends all DNS queries from the container to an embedded DNS resolver running inside the container and listening on 127.0.0.11:53. All Docker containers run an embedded DNS resolver at this address.

> **NOTE:** Some of the other values in your file may be different from those shown in this guide.

Try to ping the "myservice" name from within the container by running `ping -c5 myservice`.

```
root@d676496d18f7:/# ping -c5 myservice
PING myservice (10.10.10.2) 56(84) bytes of data.
64 bytes from 10.10.10.2: icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from 10.10.10.2: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 10.10.10.2: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from 10.10.10.2: icmp_seq=4 ttl=64 time=0.042 ms
64 bytes from 10.10.10.2: icmp_seq=5 ttl=64 time=0.056 ms

--- myservice ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.020/0.042/0.056/0.015 ms
```

The output clearly shows that the container can ping the `myservice` service by name. Notice that the IP address returned is `10.10.10.2`. In the next few steps we'll verify that this address is the virtual IP (VIP) assigned to the `myservice` service.
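
Before you exit, you could optionally look up the DNS entries behind the service name as well. Swarm also publishes a `tasks.<service>` name that resolves to the individual task IPs rather than the VIP. This is a hedged sketch - it assumes you install the `dnsutils` package inside the container first, which is not part of the lab steps.

```
root@d676496d18f7:/# apt-get install -y dnsutils     # provides nslookup; not in the base ubuntu image
root@d676496d18f7:/# nslookup myservice              # should return the service VIP (10.10.10.2 in this example)
root@d676496d18f7:/# nslookup tasks.myservice        # should return the individual task IPs instead
```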

Type the `exit` command to leave the `exec` container session and return to the shell prompt of your **node0** Docker host.

```
root@d676496d18f7:/# exit
```

Inspect the configuration of the "myservice" service by running `docker service inspect myservice`. Let's verify that the VIP value matches the value returned by the previous `ping -c5 myservice` command.

```
$ docker service inspect myservice
[
    {
        "ID": "ov30itv6t2n7axy2goqbfqt5e",
        "Version": {
            "Index": 19
        },
        "CreatedAt": "2017-04-04T09:35:47.009730798Z",
        "UpdatedAt": "2017-04-04T09:35:47.05475096Z",
        "Spec": {
            "Name": "myservice",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "ubuntu:latest@sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535",
                    "Args": [
                        "sleep",
                        "infinity"
                    ],

        "Endpoint": {
            "Spec": {
                "Mode": "vip"
            },
            "VirtualIPs": [
                {
                    "NetworkID": "wlqnvajmmzskn84bqbdi1ytuy",
                    "Addr": "10.10.10.2/24"
                }
            ]
        },

```

Towards the bottom of the output you will see the VIP of the service listed. The VIP in the output above is `10.10.10.2` but the value may be different in your setup. The important point to note is that the VIP listed here matches the value returned by the `ping -c5 myservice` command.

Feel free to create a new `docker exec` session to the service task (container) running on **node1** and perform the same `ping -c5 myservice` command. You will get a response from the same VIP.

## Step 6: Test Routing Mesh

Now let's create a service that utilizes the Routing Mesh and the ingress network. Here you'll create a single-task service that publishes port 5000 on the ingress network.

```
docker service create -p 5000:5000 --name pets --replicas=1 nicolaka/pets_web:1.0
```

Check which node the task is running on.

```
ubuntu@node-0:~$ docker service ps pets
ID            NAME    IMAGE                  NODE    DESIRED STATE  CURRENT STATE          ERROR  PORTS
sqaa61qcepuh  pets.1  nicolaka/pets_web:1.0  node-0  Running        Running 4 minutes ago
```

You can see that the task is running on `node-0`; in your case it could be on `node-1`. Regardless of which node the task is running on, the routing mesh makes sure that you can connect to port `5000` on any node in the cluster, and it will take care of forwarding the traffic to a healthy task.

Using your browser, go to port 5000 on the node where the task is **not** running (e.g. `52.23.23.1:5000`, where `52.23.23.1` is the IP of the node that the task is not running on).

You can still see the app, right? That's the power of the Routing Mesh!
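
You can verify the same behavior from the command line. This is a sketch - substitute your nodes' public IPs - and both requests should return the app because the ingress network forwards port 5000 from every node to the node that is running the task.

```
$ curl <node0 public IP>:5000
$ curl <node1 public IP>:5000
```

When you are finished experimenting, you can optionally clean up the resources created in this lab (a sketch that assumes the names used in this guide).

```
$ docker service rm myservice pets      # remove both services (run on node0)
$ docker network rm overnet br          # remove the networks created during the lab
$ docker rm -f $(docker ps -aq)         # remove any remaining containers, e.g. web1
```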

## Wrap Up

Thank you for taking the time to complete this lab! Feel free to try any of the other labs.

--------------------------------------------------------------------------------
/demo-commands.md:
--------------------------------------------------------------------------------

# Dockercon 2017 Networking Workshop
## Demo Commands

### Show basic concepts of host mode networking

```
$ ip addr show

$ ip route

$ docker network ls

$ docker network inspect host

$ docker run --rm -it --net host nginx

# ip addr show

# ip route

# ps aux

$ netstat -plant

$ curl localhost

$ docker run --rm -it --net host centos sh

# curl localhost
```

### Demonstrate end-to-end packet flow of a host mode network

```
$ docker exec -it <container> sh

Host1 $ docker run -d --rm --net host nicolaka/netshoot netgen 10.0.18.212 5000

Host2 $ docker run -d -it --net host nicolaka/netshoot netgen 10.0.7.28 5000


# tcpdump -nXs 0 -i eth0 port 5000

$ tcpdump -nXs 0 -i eth0 port 5000
```

### Demonstrate port mapping

```
$ docker run -d -p 8000:80 nginx

$ curl localhost:8000

$ iptables -t nat -S | grep 8000

$ iptables -nvL
```

### Explore bridge driver and namespace fundamentals
```
$ docker network ls << This is the default bridge network. Show the students the bridge network with NAME: bridge, DRIVER: bridge, and SCOPE: local

$ docker inspect bridge

$ brctl show (will have to have bridge-utils installed)

$ docker network create -d bridge br

$ docker run -it --net br alpine sh

# ip add show

# ip route

# curl localhost (can't connect to the nginx container that is running in net=host mode)

CTRL P-Q

$ docker inspect $(docker ps -lq)

$ ls /var/run/docker/netns

$ brctl show

$ ip addr show

$ docker rm -f $(docker ps -aq)
```


### Inspect traffic between containers on the same host

```
$ docker run -d --rm --net br --name c1 nicolaka/netshoot netgen c2 5000

$ docker run -it --rm --net br --name c2 nicolaka/netshoot netgen c1 5000

$ tcpdump -nXs 0 -i eth0 port 5000

Grab the veth interface of c1
$ docker exec -it c1 ip add show eth0

Find the corresponding veth in the host network namespace
$ tcpdump -nXs 0 -i veth18265bc port 5000
```

### Inspect traffic between containers on different hosts
```
Host1 $ docker run -d --rm -p 5000:5000 nicolaka/netshoot netgen ip-172-31-18-122 5000

Host2 $ docker run -d -it -p 5000:5000 nicolaka/netshoot netgen ip-172-31-21-237 5000

$ tcpdump -nXs 0 -i eth0 port 5000
```



--------------------------------------------------------------------------------