├── docker-mtu.md
├── jvm-resources-recommendations.md
└── README.md
/docker-mtu.md:
--------------------------------------------------------------------------------
1 | # Docker Swarm Network MTU
2 |
7 | If your Docker instance is communicating with other Docker instances via VXLAN or any other network whose MTU differs from the default 1500, you need to delete the default ingress network and create a new one! This is needed because Docker doesn't inherit the MTU of your network interface, so there will be intermittent packet loss when communicating between nodes!
8 |
9 | Docker doesn't detect the network MTU because [reasons](https://github.com/moby/moby/pull/18108), so by default it uses an MTU of 1500 for connections, which will cause issues if your network has a different MTU!
10 |
15 | To fix this, check the MTU of the network that is going to be used for node communication by using `ip a`.
16 |
17 | ```
18 | 3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
19 |     link/ether ...
20 | ```
21 |
22 | As we can see, our MTU is 1450, so let's set up Docker to use an MTU of 1450!
23 |
24 | Create the file `/etc/docker/daemon.json` and insert the following contents. If the file already exists, add the `mtu` field to the existing JSON.
25 |
26 | ```json
27 | {
28 |   "mtu": 1450
29 | }
30 | ```
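
If your `daemon.json` already has other settings, don't replace them; just add the `mtu` key alongside the existing ones. For example, if you had already configured a log driver, the merged file could look like this (the `log-driver` value here is just an example):

```json
{
  "log-driver": "local",
  "mtu": 1450
}
```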
31 |
32 | Then restart Docker with `systemctl restart docker`; this will set up Docker's default networks to use an MTU of 1450.
33 |
34 | Now we need to change the MTU of the ingress network used by Docker Swarm for inter-node communication. To do this, you need to recreate the ingress network:
35 | * `docker network rm ingress`
36 | * `docker network create --driver overlay --ingress --opt com.docker.network.driver.mtu=MTUOfTheNetworkHere --subnet 192.168.128.0/24 --gateway 192.168.128.1 ingress`
37 |   * To check which `subnet` and `gateway` you should use, run `docker network inspect ingress` *before* removing the network and check the `IPAM` section!
38 |   * Let's suppose that we are using a VXLAN network for communication. VXLAN networks use an MTU of 1450, so we need to use `--opt com.docker.network.driver.mtu=1450`
39 | * Restart Docker with `systemctl restart docker`
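
Why 1450 for VXLAN, by the way? VXLAN encapsulation adds roughly 50 bytes of headers to every packet, so the usable MTU inside the tunnel is the physical MTU minus that overhead:

```shell
# Outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14) = 50 bytes
PHYS_MTU=1500
VXLAN_OVERHEAD=50
echo $((PHYS_MTU - VXLAN_OVERHEAD))   # prints 1450
```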
40 |
41 | This fixes Outside World -> Swarm Cluster communication; now we need to fix communication between your containers!
42 |
43 | To do this, append this to the end of every `docker-compose.yml`.
44 |
45 | ```yml
46 | networks:
47 |   default:
48 |     driver: overlay
49 |     driver_opts:
50 |       com.docker.network.driver.mtu: 1450
51 | ```
52 |
53 | This will set up the overlay networks used by your stacks to also use your MTU.
54 |
55 | And that's it! Now you shouldn't have any connectivity issues in your Swarm.
56 |
57 | [[If you want to learn more, this issue talks a lot about connectivity issues!]](https://github.com/moby/moby/issues/36689#issuecomment-987706496)
58 |
59 | There is a [Pull Request](https://github.com/moby/moby/pull/43197) that automatically sets the MTU for all networks without the need to change every network manually; let's hope it gets merged into Docker in the future. :3
60 |
--------------------------------------------------------------------------------
/jvm-resources-recommendations.md:
--------------------------------------------------------------------------------
1 | # JVM Resources Recommendations
2 |
3 | Messing with `limits` and `reservations` may impact your Java application in ways you weren't expecting, so here are some tips to avoid getting burned.
4 |
5 | Don't worry, *it is also painfully hard on Kubernetes too*. 😭
6 |
7 | My Swarm node virtual machine has 4 GB of RAM and 4 cores, so let's run some tests on it! The application will use Java 17 (you MUST use Java 8u131 or above, because Java didn't respect cgroups before that. We are in `${InsertYearHere}` already, move on to Java 17!!) and will print memory and CPU stats to the console before quitting. If you want to play around with it on your computer, [the container is public](https://github.com/MrPowerGamerBR/DebugAllocationContainers/pkgs/container/debugallocationcontainers)!
8 | ```kotlin
9 | fun main() {
10 |     val mb = 1024 * 1024
11 |     val runtime = Runtime.getRuntime()
12 |
13 |     println("Used Memory: ${(runtime.totalMemory() - runtime.freeMemory()) / mb}MiB")
14 |     println("Free Memory: ${runtime.freeMemory() / mb}MiB")
15 |     println("Total Memory: ${runtime.totalMemory() / mb}MiB")
16 |     println("Max Memory: ${runtime.maxMemory() / mb}MiB")
17 |     println("Available Processors: ${Runtime.getRuntime().availableProcessors()}")
18 | }
19 | ```
20 |
21 | ## No resources set
22 |
23 | ```yml
24 | version: "3.9"
25 | services:
26 |   temurin:
27 |     image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
28 |     environment:
29 |       JAVA_TOOL_OPTIONS: "-verbose:gc"
30 | ```
31 | **Output:**
32 | ```
33 | temurin-test_temurin.1.xnwoo9oyyhfx@docker-swarm-worker-1 | [0.005s][info][gc] Using G1
34 | temurin-test_temurin.1.xnwoo9oyyhfx@docker-swarm-worker-1 | Used Memory: 1MiB
35 | temurin-test_temurin.1.xnwoo9oyyhfx@docker-swarm-worker-1 | Free Memory: 62MiB
36 | temurin-test_temurin.1.xnwoo9oyyhfx@docker-swarm-worker-1 | Total Memory: 64MiB
37 | temurin-test_temurin.1.xnwoo9oyyhfx@docker-swarm-worker-1 | Max Memory: 982MiB
38 | temurin-test_temurin.1.xnwoo9oyyhfx@docker-swarm-worker-1 | Available Processors: 4
39 | ```
40 |
41 | The JVM max heap is set to 1/4 of the entire VM's memory. This makes sense, because the default value of `-XX:MaxRAMPercentage` is 25, and 25% of 4 GB is 1 GB.
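
A quick sanity check on that math (the reported `Max Memory: 982MiB` is slightly under 1024 MiB, likely because the JVM excludes some internal heap space from `Runtime.maxMemory()`):

```shell
# The default -XX:MaxRAMPercentage is 25, so with no container limit
# the JVM sizes its max heap from the machine's total RAM.
VM_RAM_MIB=4096                    # the 4 GB VM from above
echo $((VM_RAM_MIB * 25 / 100))    # prints 1024 (MiB)
```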
42 |
43 | ## With Xmx/Xms set
44 | Most Java developers use `-Xmx` and `-Xms` to set the heap size, so let's use it.
45 |
46 | ```yml
47 | version: "3.9"
48 | services:
49 |   temurin:
50 |     image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
51 |     environment:
52 |       JAVA_TOOL_OPTIONS: "-verbose:gc -Xmx512M -Xms512M"
53 | ```
54 | **Output:**
55 | ```
56 | temurin-test_temurin.1.ulbt33xyo0d0@docker-swarm-worker-1 | [0.005s][info][gc] Using G1
57 | temurin-test_temurin.1.ulbt33xyo0d0@docker-swarm-worker-1 | Used Memory: 1MiB
58 | temurin-test_temurin.1.ulbt33xyo0d0@docker-swarm-worker-1 | Free Memory: 510MiB
59 | temurin-test_temurin.1.ulbt33xyo0d0@docker-swarm-worker-1 | Total Memory: 512MiB
60 | temurin-test_temurin.1.ulbt33xyo0d0@docker-swarm-worker-1 | Max Memory: 512MiB
61 | temurin-test_temurin.1.ulbt33xyo0d0@docker-swarm-worker-1 | Available Processors: 4
62 | ```
63 |
64 | As we can see, the JVM is using our configured memory! However, Swarm doesn't know about this, so *it will try to schedule our container on any node, even nodes that don't have 512MB available!*
65 |
66 | ## Xmx/Xms + Resource Reservations (The Best And Simplest Way Of Doing This™)
67 | ```yml
68 | version: "3.9"
69 | services:
70 |   temurin:
71 |     image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
72 |     deploy:
73 |       resources:
74 |         reservations:
75 |           memory: 768M # We reserve more memory than the heap size, due to off-heap allocations and other JVM shenanigans.
76 |     environment:
77 |       JAVA_TOOL_OPTIONS: "-verbose:gc -Xmx512M -Xms512M"
78 | ```
79 | **Output:**
80 | ```
81 | temurin-test_temurin.1.vrp8yjosnc5d@docker-swarm-manager-1 | [0.005s][info][gc] Using G1
82 | temurin-test_temurin.1.vrp8yjosnc5d@docker-swarm-manager-1 | Used Memory: 1MiB
83 | temurin-test_temurin.1.vrp8yjosnc5d@docker-swarm-manager-1 | Free Memory: 510MiB
84 | temurin-test_temurin.1.vrp8yjosnc5d@docker-swarm-manager-1 | Total Memory: 512MiB
85 | temurin-test_temurin.1.vrp8yjosnc5d@docker-swarm-manager-1 | Max Memory: 512MiB
86 | temurin-test_temurin.1.vrp8yjosnc5d@docker-swarm-manager-1 | Available Processors: 4
87 | ```
88 |
89 | Once again, it works fine! In my opinion, this is the best AND easiest way to do this.
90 |
91 | ## But what if we don't set `Xmx/Xms` WHILE we have a `reservations` set?
92 | ```yml
93 | version: "3.9"
94 | services:
95 |   temurin:
96 |     image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
97 |     deploy:
98 |       resources:
99 |         reservations:
100 |           memory: 768M # We reserve more memory than the heap size, due to off-heap allocations and other JVM shenanigans.
101 |     environment:
102 |       JAVA_TOOL_OPTIONS: "-verbose:gc"
103 | ```
104 | **Output:**
105 | ```
106 | temurin-test_temurin.1.wost4c51s6ma@docker-swarm-manager-1 | [0.005s][info][gc] Using G1
107 | temurin-test_temurin.1.wost4c51s6ma@docker-swarm-manager-1 | Used Memory: 1MiB
108 | temurin-test_temurin.1.wost4c51s6ma@docker-swarm-manager-1 | Free Memory: 62MiB
109 | temurin-test_temurin.1.wost4c51s6ma@docker-swarm-manager-1 | Total Memory: 64MiB
110 | temurin-test_temurin.1.wost4c51s6ma@docker-swarm-manager-1 | Max Memory: 984MiB
111 | temurin-test_temurin.1.wost4c51s6ma@docker-swarm-manager-1 | Available Processors: 4
112 | ```
113 |
114 | It still allocates ~1 GB, so as you can see, the container just ignores our `reservations` when figuring out how much memory it can allocate.
115 |
116 | ## But what if we don't set `Xmx/Xms` WHILE we have a `limits` set?
117 | ```yml
118 | version: "3.9"
119 | services:
120 |   temurin:
121 |     image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
122 |     deploy:
123 |       resources:
124 |         limits:
125 |           memory: 512M
126 |     environment:
127 |       JAVA_TOOL_OPTIONS: "-verbose:gc"
128 | ```
129 | **Output:**
130 | ```
131 | temurin-test_temurin.1.vq0a6qz6kjkr@docker-swarm-manager-1 | [0.002s][info][gc] Using Serial
132 | temurin-test_temurin.1.vq0a6qz6kjkr@docker-swarm-manager-1 | Used Memory: 0MiB
133 | temurin-test_temurin.1.vq0a6qz6kjkr@docker-swarm-manager-1 | Free Memory: 7MiB
134 | temurin-test_temurin.1.vq0a6qz6kjkr@docker-swarm-manager-1 | Total Memory: 7MiB
135 | temurin-test_temurin.1.vq0a6qz6kjkr@docker-swarm-manager-1 | Max Memory: 123MiB
136 | temurin-test_temurin.1.vq0a6qz6kjkr@docker-swarm-manager-1 | Available Processors: 4
137 | ```
138 |
139 | Ah ha! Now the JVM is using 25% of the memory that we set in the `limits` section! And look, because our heap is smol, Java decided that using Serial instead of G1 is a good idea; the JVM's ergonomics only pick G1 when it sees at least 2 CPUs and roughly 1792 MiB of memory, and our 512M limit is way below that. (Spoiler: while Serial is fine for desktop applications that don't have a lot of threads, it isn't good for much else.)
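
If you'd rather keep G1 even with a small limit, you can force it explicitly. This is just a sketch reusing the compose file above; `-XX:+UseG1GC` is a standard HotSpot flag:

```yml
version: "3.9"
services:
  temurin:
    image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
    deploy:
      resources:
        limits:
          memory: 512M
    environment:
      # Force G1 even though ergonomics would pick Serial for this heap size
      JAVA_TOOL_OPTIONS: "-verbose:gc -XX:+UseG1GC"
```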
140 |
141 | Now, you can use `-XX:MaxRAMPercentage` to set what percentage of the `limits.memory` you set the JVM should allocate. While this works, I think it is a bit confusing and non-intuitive; besides, 99% of the time you are deploying containers that all have the same max memory set, so it is not that useful.
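
For completeness, here's roughly what the percentage approach looks like (a sketch; note the flag expects a decimal value). With the 512M limit below, 50% should give a max heap of about 256 MiB:

```yml
version: "3.9"
services:
  temurin:
    image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
    deploy:
      resources:
        limits:
          memory: 512M
    environment:
      # 50% of the 512M cgroup limit -> ~256MiB max heap
      JAVA_TOOL_OPTIONS: "-verbose:gc -XX:MaxRAMPercentage=50.0"
```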
142 |
143 | ## Should I limit CPU?
144 |
145 | In my opinion? Nah, let your services use all of your CPUs. Less headaches.
146 |
147 | ## But should I create CPU reservations?
148 |
149 | Yes! Mostly to avoid scheduling your containers on oversaturated nodes.
150 |
151 | ## JVM Resources tl;dr:
152 |
153 | * Create memory reservations to avoid scheduling your container on a node that doesn't have enough memory. Always reserve a bit more memory than what you are allocating, to avoid the system killing your JVM app due to low memory.
154 | * Create CPU reservations to avoid scheduling your container on an oversaturated node.
155 | * Set your memory with `-Xmx` and `-Xms` because it is easier than fiddling with `MaxRAMPercentage`.
156 |
157 | ```yml
158 | version: "3.9"
159 | services:
160 |   temurin:
161 |     image: ghcr.io/mrpowergamerbr/debugallocationcontainers@sha256:d98ad5df3b5829fc7595eb48f6e49c9856cd9ad8ebefe75068ecd5063f0fb789
162 |     deploy:
163 |       resources:
164 |         reservations:
165 |           memory: 768M
166 |           cpus: "0.5"
167 |     environment:
168 |       JAVA_TOOL_OPTIONS: "-verbose:gc -Xmx512M -Xms512M"
169 | ```
170 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
5 | # Docker Swarm Mode Tutorial
6 |
7 | I know, Kubernetes is what all the cool kidz are using nowadays! And Kubernetes is cool too... until it explodes and you spend hours trying to debug the issue. In fact, I decided to learn about Docker Swarm after getting burned by Kubernetes for the nth time...
8 |
9 | Kubernetes may be painless if you are using a managed Kubernetes instance, but if you are self-hosting it... it is hard!
10 |
11 | **Let's face it:** You ain't *insert huge company name here*, you just care about hosting some container replicas; you don't care about all the fancy big corp stuff that Kubernetes tries to provide... because you ain't a big corp!
12 |
13 | So why not use Docker Swarm?
14 |
15 | ## But is Docker Swarm... good?
16 |
17 | If you, like me, like to scour the internet to see whether users think *insert tech here* is good, here are some posts from people talking about their experience with Docker Swarm!
18 |
19 | * https://blog.kronis.dev/articles/docker-swarm-over-kubernetes
20 | * https://news.ycombinator.com/item?id=29448182
21 | * https://news.ycombinator.com/item?id=32306857
22 | * https://news.ycombinator.com/item?id=32305800
23 | * https://news.ycombinator.com/item?id=32305952
24 | * https://news.ycombinator.com/item?id=32307229
25 | * https://user-images.githubusercontent.com/9496359/183654510-568ddef6-68e8-4555-8380-429ad827d270.png
26 | * https://www.reddit.com/r/devops/comments/qxq1q9/is_docker_swarm_good_enough_for_production/hlg4xeu/
27 | * https://www.reddit.com/r/devops/comments/wwpo91/k8s_encourages_people_to_deploy_really_complex/ilmjhl3/ (well that's my own comment but shhh just look at the replies)
28 | * https://www.reddit.com/r/devops/comments/wwpo91/k8s_encourages_people_to_deploy_really_complex/ilmw6uu/
29 |
30 | ## When SHOULD I use Docker Swarm?
31 |
32 | Docker Swarm is an orchestrator, the "best little multi-node orchestrator that could". It is very useful if you are already using Docker and need to replicate your containers across multiple nodes/VMs.
33 |
34 | * If you are self-hosting your services on a dedicated server or a VPS. (Example: dedicated servers @ OVH, Hetzner, etc)
35 | * If you are already using Docker Compose.
36 |   * Docker Swarm uses Docker Compose files for deployment too, so it is a natural step from Docker Compose to Swarm! And I mean it! You just need to add a new `deploy` key to your already existing compose files to set up how you want your containers to be replicated and deployed... and that's it!
37 | * If your services don't require persistent storage.
38 |
39 | ## When SHOULDN'T I use Docker Swarm?
40 |
41 | As with any tool, there are some disadvantages to Docker Swarm. Here are some reasons you may want to avoid it:
42 |
43 | * You are using AWS/GCP/Azure/etc.
44 |   * Just use their managed Kubernetes service. As I said before: Kubernetes is cool, except when it breaks. However, using a Kubernetes instance that is managed by someone else takes all the heavy lifting off your shoulders, in exchange for a pretty hefty price increase.
45 | * You need to auto scale according to demand.
46 |   * However, if you are using a rented dedicated server, why would you want to auto scale? And if you are using a host that charges per usage (like AWS, GCP, Azure, etc), then they probably have a managed Kubernetes offering.
47 | * You need to have distributed storage because you want to have a distributed database.
48 |   * Docker Swarm does support distributed storage, however Kubernetes' solutions are more rock-solid, so just go with Kubernetes. But then again, do you *really* need distributed storage for your smol service? You can go pretty far with a single-instance database!
49 | * You need a feature that isn't supported by Docker Swarm.
50 |   * While Docker Swarm isn't deprecated, it is also not actively developed, so if you require a feature that it doesn't support, it may take a loooong time until it is implemented.
51 | * You like to have shiny buzzwords in your CV.
52 |
53 | ## But isn't Docker Swarm... *dead*?
54 |
55 | People talk about Docker Swarm being dead because, back in the day, Docker Inc. had an orchestrator called "Docker Swarm", and *that* got deprecated. Docker Inc., in their infinite marketing wisdom, later created an orchestrator called "Swarm Mode" for Docker, and that's why a lot of people get confused about whether Swarm is dead or not.
56 |
57 | Contrary to popular belief, Swarm Mode is not deprecated! Yeah, sure, sadly there isn't much dev work on Swarm nowadays, but it isn't *dead* or *deprecated*. And after all, if it fits your needs, does it really need to be actively worked on? As long as it works, it works! If you get to a point where Swarm no longer fits your needs, *then* move to Kubernetes! Don't overcomplicate your life right now just because some day, *maybe*, you might need Kubernetes.
58 |
59 | If it does get deprecated, you can migrate off of it by converting your Docker Compose deployment files with [Kompose](https://kompose.io/), so it is not like you are going to end up being stuck on a dead platform forever.
60 |
61 | ## Official Tutorial
62 |
63 | Docker has a Swarm Mode tutorial on their website, [check it out](https://docs.docker.com/engine/swarm/swarm-tutorial/)!
64 |
65 | Docker also has a tutorial on how to deploy Docker Compose files (also known as "Stack") to Docker Swarm, [check it out](https://docs.docker.com/engine/swarm/stack-deploy/)!
66 |
67 | This tutorial is mostly a "I'm writing things down as I learn them" kind of deal, focused more on the Docker Swarm + Docker Compose combo than the official tutorial's Docker Swarm + Docker CLI combo, so there may be things that are incorrect or misleading. Still, I think other people may find this tutorial useful!
68 |
69 | ## Install Ubuntu
70 |
71 | I'm using an Ubuntu 22.04 VM for this tutorial.
72 |
73 | ## Install Docker Engine
74 |
75 | https://docs.docker.com/engine/install/
76 |
77 | ## Start Docker Swarm Mode
78 |
79 | Start your Docker Swarm with `docker swarm init`!
80 |
81 | If you have multiple network interfaces, Docker will ask you to choose what IP to use. You need to choose the IP that can access your other Docker instances!
82 |
83 | By default, Docker will use IPs in the 10.0.0.0/8 range. If your machine's interface is behind NAT and also uses the 10.0.0.0/8 range, it will cause issues when you try to access containers hosted in your Docker Swarm!
84 |
85 | To fix this, initialize your Docker Swarm with the `--default-addr-pool` parameter! Let's suppose we want to use `192.168.128.0/18` for our containers. [[🐳 Learn more]](https://docs.docker.com/engine/swarm/swarm-mode/#configuring-default-address-pools)
86 |
87 | ```bash
88 | sudo docker swarm init --default-addr-pool 192.168.128.0/18
89 | ```
90 |
91 | > **Warning**
92 | >
93 | > If you change your `default-addr-pool`, check if the IPs aren't being used by another network! `ip a` shows what IP ranges are being used. If you use an IP range that is already being used by something else, your containers won't start due to `Pool overlaps with other one on this address space`!
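
As a sanity check on the pool size: a `/18` leaves 32 - 18 = 14 host bits, so Docker gets 2^14 = 16384 addresses (192.168.128.0 through 192.168.191.255) to split into container networks:

```shell
PREFIX=18
echo $((2 ** (32 - PREFIX)))   # prints 16384, the address count of 192.168.128.0/18
```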
94 |
95 |
96 | > **Warning**
97 | >
98 | > If you have multiple network interfaces, Docker will say `Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.29.10.1 on ens18 and 172.29.10.1 on ens19) - specify one with --advertise-addr`. In this case, the `--advertise-addr` parameter should be the IP that *can communicate with other nodes*! So, if `172.29.10.1` is the IP that can access other nodes, then that should be your `--advertise-addr`.
99 |
100 | > **Warning**
101 | >
102 | > If your Docker instance is communicating with other Docker instances via VXLAN or any other network whose MTU differs from the default 1500, you need to delete the default ingress network and create a new one! This is needed because Docker doesn't inherit the MTU of your network interface, so there will be intermittent packet loss when communicating between nodes! [Read more about how to fix it here](docker-mtu.md)
103 |
104 | If you get `Swarm initialized: current node (qgsfyhmhwtpkp7zpo7lts2vhp) is now a manager.`, then your Docker instance is in a swarm, and your node is a manager node, sweet!
105 |
106 | ## Accessing private images hosted on GitHub's `ghcr.io`
107 |
108 | While not related to Docker Swarm, I thought it was nice to talk about this. :3
109 |
110 | ```bash
111 | docker login ghcr.io
112 | ```
113 |
114 | Use your GitHub username as your Username, and a [Personal Access Token](https://github.com/settings/tokens) as your Password.
115 |
116 | When deploying your stack, add the `--with-registry-auth` parameter (for example: `docker stack deploy --with-registry-auth --compose-file docker-compose.yml mystack`)!
117 |
118 | ## Logs and your disk space
119 |
120 | Once again something not related to Docker Swarm, but I thought it was nice to talk about this. :3
121 |
122 | By default, Docker uses `json-file` as its logging driver, however [Docker recommends changing the log driver to `local`, because the `json-file` driver is only the default for backwards compatibility](https://docs.docker.com/config/containers/logging/configure/).
123 |
124 | To do this, edit your `/etc/docker/daemon.json` and insert the following contents. If the file already exists, add the `log-driver` field to the existing JSON.
125 | ```json
126 | {
127 |   "log-driver": "local"
128 | }
129 | ```
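
The `local` driver also rotates logs automatically, and you can tune how much disk space it may use with `log-opts`. `max-size` and `max-file` are real options of the `local` driver, but the values below are just an example, not a recommendation:

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "20m",
    "max-file": "5"
  }
}
```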
130 |
131 | ## Hosting your first service
132 |
133 | Did you know that Docker Swarm uses `docker-compose.yml` files??? Crazy huh?
134 |
135 | ```yaml
136 | version: '3.9'
137 | services:
138 |   helloworld:
139 |     # This will set the hostname to helloworld-ReplicaID
140 |     hostname: "helloworld-{{.Task.Slot}}"
141 |     # The image, we will use a helloworld http image
142 |     image: strm/helloworld-http
143 |     # We will expose the service at port 8080 on the host
144 |     ports:
145 |       - "8080:80"
146 |     # Docker Swarm deployment configuration!
147 |     deploy:
148 |       # We want to replicate our service...
149 |       mode: replicated
150 |       # And it will have two instances of the container!
151 |       replicas: 2
152 | ```
153 |
154 | Now let's deploy our service with `docker stack deploy --compose-file docker-compose.yml stackdemo`!
155 |
156 | If everything goes well, you will be able to see your stack on the `docker stack ls` command!
157 | ```bash
158 | mrpowergamerbr@docker-swarm-test:~$ sudo docker stack ls
159 | NAME SERVICES ORCHESTRATOR
160 | stackdemo 1 Swarm
161 | ```
162 |
163 | And you can also see the service status with `docker stack services stackdemo`
164 | ```bash
165 | mrpowergamerbr@docker-swarm-test:~$ sudo docker stack services stackdemo
166 | ID NAME MODE REPLICAS IMAGE PORTS
167 | totx5zzra290 stackdemo_helloworld replicated 2/2 strm/helloworld-http:latest *:8080->80/tcp
168 | ```
169 |
170 | You can also view a stack's task history with `docker stack ps stackdemo`. (The example output below is from a different stack of mine that was being rejected due to the `Pool overlaps` error mentioned earlier.)
171 | ```bash
172 | swarm@docker-swarm-manager-1:~$ sudo docker stack ps powercms
173 | ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
174 | ztf65gnt1fjc powercms_powercms.1 ghcr.io/mrpowergamerbr/powercms@sha256:41e5cf391194792b4ddff8d0a5561122213cb793ada75edc1e4e6a5ba9b90e16 docker-swarm-manager-1 Ready Rejected 4 seconds ago "Pool overlaps with other one …"
175 | 9oiwb29bc1eu \_ powercms_powercms.1 ghcr.io/mrpowergamerbr/powercms@sha256:41e5cf391194792b4ddff8d0a5561122213cb793ada75edc1e4e6a5ba9b90e16 docker-swarm-manager-1 Shutdown Rejected 9 seconds ago "Pool overlaps with other one …"
176 | ezif348rn177 \_ powercms_powercms.1 ghcr.io/mrpowergamerbr/powercms@sha256:41e5cf391194792b4ddff8d0a5561122213cb793ada75edc1e4e6a5ba9b90e16 docker-swarm-worker-1 Shutdown Rejected 14 seconds ago "Pool overlaps with other one …"
177 | xszs8mbxrzdh \_ powercms_powercms.1 ghcr.io/mrpowergamerbr/powercms@sha256:41e5cf391194792b4ddff8d0a5561122213cb793ada75edc1e4e6a5ba9b90e16 docker-swarm-worker-1 Shutdown Rejected 19 seconds ago "Pool overlaps with other one …"
178 | nnq6i1ybb4v4 \_ powercms_powercms.1 ghcr.io/mrpowergamerbr/powercms@sha256:41e5cf391194792b4ddff8d0a5561122213cb793ada75edc1e4e6a5ba9b90e16 docker-swarm-manager-1 Shutdown Rejected 24 seconds ago "Pool overlaps with other one …"
179 | ```
180 |
181 | Let's also check the containers running on our Docker instance...
182 |
183 | ```bash
184 | mrpowergamerbr@docker-swarm-test:~$ sudo docker ps
185 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
186 | 25088b353de7 strm/helloworld-http:latest "/main.sh" 41 minutes ago Up 41 minutes 80/tcp stackdemo_front.2.wvzs76sk9iq05xozgz4x9cdqe
187 | 18588ab9c07c strm/helloworld-http:latest "/main.sh" 41 minutes ago Up 41 minutes 80/tcp stackdemo_front.1.nmreutrz6q4l57rolejuco6xa
188 | ```
189 |
190 | > **Warning**
191 | >
192 | > `docker ps` only shows the containers running on the current Docker instance! Don't forget about this if you include multiple Docker instances on the swarm!
193 |
194 | ## Accessing services
195 |
196 | Now let's access the service with `curl`!
197 | ```bash
198 | mrpowergamerbr@docker-swarm-test:~$ curl 127.0.0.1:8080
199 | HTTP Hello WorldHello from helloworld-2
HTTP Hello WorldHello from helloworld-1