├── Day01
│   └── README.md
├── Day02
│   ├── README.md
│   └── dockerServiceDir-Change
├── Day03
│   ├── README.md
│   └── sampledata_commands
├── Day04
│   ├── DockerFile
│   ├── README.md
│   └── portainer
├── Day05
│   ├── DockerFile
│   └── README.md
├── Day06
│   ├── DockerFile_Multibuild
│   ├── Dockerfile
│   └── README.md
├── Day07
│   ├── README.md
│   └── compose.yaml
├── Day08
│   └── README.md
├── Day09
│   ├── README.md
│   ├── Secret-File
│   └── Swarm-all-commands
├── Day10
│   ├── README.md
│   ├── Traefik-Deployment
│   ├── Votingapp-Traefik.yaml
│   └── traefik2.yml
├── Day11
│   ├── README.md
│   └── Scenario based Interview Questions.pdf
├── Day12
│   └── README.md
├── Day13
│   ├── 1.node_exporter.sh
│   ├── 10.alertmanager.service.yml
│   ├── 11.commands.sh
│   ├── 2.prometheus-Config.sh
│   ├── 3.prometheus.yml
│   ├── 4.prometheus.service.sh
│   ├── 5.start-prom.sh
│   ├── 6.queries.sh
│   ├── 7.grafana.sh
│   ├── 8.alert_manager.yml
│   ├── 9.alert_rules.yml
│   └── README.md
└── README.md
/Day01/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | Hardware Components:
4 | CPU: Central Processing Unit, the brain of the computer.
5 | RAM: Random Access Memory, temporary storage for data being used.
6 | HD: Hard Drive, permanent storage for data.
7 | Graphic Cards: Hardware for rendering images and video.
8 |
9 | Software Components:
10 | Operating System: The main software that manages hardware and software resources.
11 | Applications: Software you use, such as Zen Recorder for recording videos or games.
12 |
13 | The Kernel:
14 | The kernel is the core part of an operating system. It acts as a bridge between software and hardware, converting requests into instructions the hardware can understand.
15 |
16 | Container Runtimes
17 | There are several runtimes for running containers, including:
18 | containerd
19 | Docker
20 | CRI-O
21 |
22 | # Note: In production environments, containerd is typically used as the runtime. Developers often use Docker for local testing. For example, KIND (Kubernetes IN Docker) creates Kubernetes clusters inside Docker containers for testing.
23 |
24 |
25 | # Container vs Virtual machines
26 |
27 | 
28 |
29 | Virtual Machines (VMs) are like having a complete, separate computer within your computer. Each VM runs its own full operating system, like having another Windows or Linux inside your main system. Because they need to load an entire OS, VMs are big and take a while to start. They are very isolated from each other, making them secure, as each VM operates independently without knowing about the others.
30 |
31 | Containers, on the other hand, are like small, self-contained environments inside your computer. They share the main computer’s operating system but have their own space for running applications. Containers are much smaller and start very quickly since they don’t need a full OS. They offer enough isolation to keep applications separate but are less isolated than VMs because they share the same OS.
32 |
33 | ----
34 |
35 | # Docker Architecture
36 |
37 | 
38 |
39 | Docker Client
40 | What it is: The Docker client is like the command center for Docker. It’s where you type in commands to tell Docker what to do.
41 | How it works: You use the Docker client to build, run, and manage Docker containers. It talks to the Docker daemon, which does the actual work.
42 | Example commands: docker build, docker run, docker pull.
43 | Docker Hub
44 | What it is: Docker Hub is an online service where you can find and store Docker images.
45 | How it works: It’s like an app store but for Docker images. You can download (“pull”) images others have created, or upload (“push”) your own images.
46 | Usage: When you need an image to create a container, you can pull it from Docker Hub.
47 | Docker Registry
48 | What it is: A Docker registry is a place to store Docker images. Docker Hub is the most popular public registry, but you can also have private registries.
49 | How it works: Registries store the images you create and make them available for you to pull and run on your Docker client. Private registries are used for images you don’t want to share publicly.
50 | Example: Companies often use private registries to store their internal application images securely.
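A typical publish-and-pull round trip against a private registry looks like the commands below. The registry address and image name are placeholders, not a real host:

```bash
# Tag the local image with the registry's address, then push it
docker tag myapp:v1 registry.example.com:5000/myapp:v1
docker push registry.example.com:5000/myapp:v1

# On another machine, pull the same image back from that registry
docker pull registry.example.com:5000/myapp:v1
```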
51 |
52 | # Installing Docker & Network changes
53 |
54 | curl https://get.docker.com/ | bash
55 |
56 | Network drivers: Bridge - Host - None
57 |
58 | Bridge : A Docker bridge network is a private internal network created by Docker on the host machine. It's used to allow containers to communicate with each other within the same host.
59 |
60 | 
61 |
62 |
63 |
64 | # Listing namespaces & Running basic docker commands
65 |
66 | Basic Docker Commands
67 |
68 | Check Docker version: docker version
69 |
70 | List namespaces: lsns
71 |
72 | List PID namespaces: lsns -t pid
73 |
74 | Run a container: docker run --name app1 nginx:latest
75 |
76 | Run a container in the background: docker run -d --name app1 nginx:latest
77 |
78 | Create multiple containers: for i in {1..10}; do docker run -d nginx:latest; done
79 |
80 | List running containers: docker ps
81 |
82 | Stop all containers: docker stop $(docker ps -aq)
83 |
84 | Run a container that is removed automatically when it stops: docker run --rm -d --name app1 nginx:latest
85 |
86 | Inspect a container: docker inspect app1
87 |
88 | Port forwarding: docker run --rm -d --name app1 -p 8000:80 nginx:latest
89 |
90 | View logs: docker logs app1 -f
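The `{1..10}` used in the multi-container loop above is bash brace expansion; a quick way to see exactly what the loop iterates over:

```shell
# Brace expansion happens before the command runs, so the loop body
# executes once for each generated number.
echo {1..5}
# Prints: 1 2 3 4 5
```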
91 |
92 |
93 |
94 |
--------------------------------------------------------------------------------
/Day02/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 | # Why We Need a Separate Disk for Docker Data
4 | The default data directory for Docker is /var/lib/docker. As you keep downloading images and generating logs and container data, this directory consumes more and more space and can eventually fill the root filesystem. To prevent this, we can store all our Docker data on a separate disk.
5 |
6 | # Steps To Create EBS Volume and Attaching it to the instance
7 | Create a GP2 EBS volume and attach it to the instance.
8 |
9 | lsblk
10 |
11 | fdisk /dev/xvdf
12 |
13 | n (new partition) > p (primary) > accept the defaults > w (write changes)
14 |
15 | lsblk
16 |
17 | mkfs.ext4 /dev/xvdf1
18 |
19 | Note the filesystem UUID (shown by blkid /dev/xvdf1) for the fstab entry
20 |
21 | mkdir /dockerdata
22 |
23 | nano /etc/fstab
24 |
25 | Add an entry like the one below
26 |
27 | 
28 |
29 | # Why We Need a Custom Network for Containers
30 |
31 | With the default bridge network, containers can communicate with each other using IP addresses but not with container names. To enable communication using container names, we need to create a custom network.
32 |
33 | First, create a custom network:
34 |
35 | docker network create myapp --driver bridge
36 |
37 | docker network inspect myapp
38 |
39 | Now run containers attached to the custom network:
40 |
41 | docker run --rm -d --name app3 -p 8001 --network myapp kiran2361993/troubleshootingtools:v1
42 |
43 | docker run --rm -d --name app4 -p 8002 --network myapp kiran2361993/troubleshootingtools:v1
44 |
45 | Log in to a container with docker exec -it app3 bash; if you can ping the other container by name (ping app4) instead of by IP, name resolution on the custom network is working.
46 |
47 | # Using the HOST Network Mode
48 |
49 | The HOST network mode means the container shares the host's IP address. This eliminates the need for port forwarding when running the container. For example, when using Prometheus with Node Exporter, it utilizes the host IP for metric collection, making it easier to access the metrics.
50 |
51 | Example Command
52 |
53 | docker run --rm -d --name node-exporter --network host prom/node-exporter
54 |
55 | You can access the Node Exporter from the host's public IP on port 9100. To inspect the Docker image and understand the default settings, use the following command:
56 |
57 | docker image inspect prom/node-exporter:latest
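Assuming the exporter container is still running, the metrics endpoint can be checked straight from the host, since host networking binds port 9100 directly:

```bash
# No -p mapping is needed in host network mode; 9100 is bound on the host
curl -s http://localhost:9100/metrics | head
```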
58 |
59 | # Using the NONE Network Mode
60 |
61 | The NONE network mode means the container has no network interfaces enabled except for a loopback device. This is useful for highly isolated containers that do not require any network access.
62 |
63 | Example Command
64 |
65 | docker run --rm -d --name isolated-container --network none busybox sleep 3600
66 |
67 | This command runs a BusyBox container (kept alive with sleep so it can be inspected) with no network connectivity, providing complete network isolation.
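If the container is still running, the isolation can be confirmed from inside it (BusyBox ships small ip and ping applets):

```bash
# Only the loopback interface should be listed
docker exec isolated-container ip addr

# Any outbound ping should fail with "Network is unreachable"
docker exec isolated-container ping -c 1 8.8.8.8
```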
68 |
69 |
70 |
71 |
--------------------------------------------------------------------------------
/Day02/dockerServiceDir-Change:
--------------------------------------------------------------------------------
1 | Create an additional EBS volume, partition it with fdisk, and mount it.
2 |
3 | sudo systemctl stop docker.service
4 |
5 | sudo systemctl stop docker.socket
6 |
7 | sudo nano /lib/systemd/system/docker.service, then comment out the original ExecStart line and add the one below with the custom data directory:
8 |
9 | #ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
10 |
11 | ExecStart=/usr/bin/dockerd --data-root /dockerdata -H fd:// --containerd=/run/containerd/containerd.sock
12 |
13 | sudo rsync -aqxP /var/lib/docker/ /dockerdata
14 |
15 | sudo systemctl daemon-reload && sudo systemctl start docker
16 |
17 | sudo systemctl status docker --no-pager
18 |
19 | ps aux | grep -i docker | grep -v grep
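Once Docker is back up, verify the new data root directly from the CLI (should print /dockerdata):

docker info --format '{{ .DockerRootDir }}'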
20 |
21 |
--------------------------------------------------------------------------------
/Day03/README.md:
--------------------------------------------------------------------------------
1 |
2 | 
3 |
4 |
5 | ## Table of Contents
6 |
7 | 1. [Introduction to Docker Containers](#introduction-to-docker-containers)
8 | 2. [Understanding Data Persistence](#understanding-data-persistence)
9 | 3. [Volumes vs Bind Mounts](#volumes-vs-bind-mounts)
10 | 4. [Practical Examples](#practical-examples)
11 | - [Using Volumes](#using-volumes)
12 | - [Using Bind Mounts](#using-bind-mounts)
13 | 5. [Docker Client and Daemon](#docker-client-and-daemon)
14 | 6. [Conclusion](#conclusion)
15 |
16 | ## Introduction to Docker Containers
17 |
18 | Containers are stateless by nature, meaning if a container is deleted, all data within it is lost. This is where data persistence comes into play, ensuring your data remains intact even if the container is removed. We achieve this through Volumes and Bind Mounts.
19 |
20 | ## Understanding Data Persistence
21 |
22 | **Persistence** means ensuring that even if a container is deleted, the data is not lost. There are two primary ways to achieve data persistence:
23 |
24 | 1. **Volumes**: Managed by Docker, ideal for data that needs to persist beyond the container's lifecycle.
25 | 2. **Bind Mounts**: Directly mounts a directory or file from the host system, useful for advanced scenarios.
26 |
27 | ## Volumes vs Bind Mounts
28 |
29 | - **Volumes**:
30 | - Managed by Docker.
31 | - Stored in a part of the host filesystem which is managed by Docker.
32 | - Preferred method for data persistence.
33 |
34 | - **Bind Mounts**:
35 | - Maps a file or directory on the host to a file or directory in the container.
36 | - More complex but provides flexibility to interact with the host system.
37 |
38 | ## Practical Examples
39 |
40 | ### Using Volumes
41 |
42 | 1. **List existing volumes**:
43 |
44 | docker volume ls
45 |
46 |
47 | 2. **Create a new volume**:
48 |
49 | docker volume create mongodb
50 |
51 |
52 | 3. **Run a MongoDB container with a volume**:
53 |
54 | docker run --rm -d --name mongodb -v mongodb:/data/db -p 27017:27017 mongo:latest
55 |
56 |
57 | 4. **Check running containers**:
58 |
59 | docker ps
60 |
61 |
62 | 5. **Insert some data**:
63 |
64 | docker exec -it mongodb mongosh
65 | > show dbs;
66 | > db.hello.insertOne({ name: "Matt", score: 202 });
67 | > show dbs;
68 | > db.hello.find();
69 |
70 |
71 | 6. **Stop and start the container**:
72 |
73 | docker stop mongodb
74 | docker ps -a
75 | docker start mongodb
76 | docker exec -it mongodb mongosh
77 |
78 |
79 | ### Using Bind Mounts
80 |
81 | 1. **Run a container without network access**:
82 |
83 | docker run --rm -d --name troubleshootingtools --network none troubleshootingtools:v10
84 |
85 |
86 | 2. **Run a container with Docker socket mounted**:
87 |
88 | docker run --rm -d --name troubleshootingtools -v /var/run/docker.sock:/var/run/docker.sock --network none troubleshootingtools:v10
89 |
90 |
91 | 3. **Inspect the container**:
92 |
93 | docker inspect troubleshootingtools
94 |
95 |
96 | ## Docker Client and Daemon
97 |
98 | - **Docker Client**:
99 | - Sends commands to the Docker daemon using the Docker socket.
100 | - Receives responses from the Docker daemon.
101 |
102 | - **Docker Daemon**:
103 | - Background service running on the host OS.
104 | - Manages Docker objects like images, containers, networks, and volumes.
105 | - Communicates with the Docker client.
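Because the client and daemon talk over the Unix socket, you can query the daemon's REST API directly; this is essentially what every docker CLI command does under the hood:

```bash
# Ask the daemon for its version through /var/run/docker.sock
curl --unix-socket /var/run/docker.sock http://localhost/version
```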
106 |
107 | ## Conclusion
108 |
109 | By understanding and using volumes and bind mounts, you can ensure data persistence in your Docker containers. This guide provides a solid foundation to start working with Docker and manage data effectively. Happy Dockerizing!
110 |
--------------------------------------------------------------------------------
/Day03/sampledata_commands:
--------------------------------------------------------------------------------
1 | docker volume ls
2 | docker volume create mongodb
3 | docker run --rm -d --name mongodb -v mongodb:/data/db -p 27017:27017 mongo:latest
4 | docker ps
5 | # Get into the container: docker exec -it mongodb mongosh, then run: show dbs
6 | # Add some data into it
7 |
8 | db.helo.insertMany([
9 | { "_id" : 1, "name" : "Matt", "status": "active", "level": 12, "score":202},
10 | { "_id" : 2, "name" : "Frank", "status": "inactive", "level": 2, "score":9},
11 | { "_id" : 3, "name" : "Karen", "status": "active", "level": 7, "score":87},
12 | { "_id" : 4, "name" : "Katie", "status": "active", "level": 3, "score":27, "married": true, "emp": "yes", "kids": 3},
13 | { "_id" : 5, "name" : "Matt1", "status": "active", "level": 12, "score":202},
14 | { "_id" : 6, "name" : "Frank2", "status": "inactive", "level": 2, "score":9},
15 | { "_id" : 7, "name" : "Karen3", "status": "active", "level": 7, "score":87},
16 | { "_id" : 8, "name" : "Katie4", "status": "active", "level": 3, "score":27, "married": true, "emp": "yes", "kids": 3}
17 | ])
18 | db.helo.find({name: "Karen"})
19 |
20 | ##############################################
21 |
22 | db.bios.insertMany([
23 | {
24 | "_id" : 1,
25 | "name" : {
26 | "first" : "John",
27 | "last" : "Backus"
28 | },
29 | "birth" : ISODate("1924-12-03T05:00:00Z"),
30 | "death" : ISODate("2007-03-17T04:00:00Z"),
31 | "contribs" : [
32 | "Fortran",
33 | "ALGOL",
34 | "Backus-Naur Form",
35 | "FP"
36 | ],
37 | "awards" : [
38 | {
39 | "award" : "W.W. McDowell Award",
40 | "year" : 1967,
41 | "by" : "IEEE Computer Society"
42 | },
43 | {
44 | "award" : "National Medal of Science",
45 | "year" : 1975,
46 | "by" : "National Science Foundation"
47 | },
48 | {
49 | "award" : "Turing Award",
50 | "year" : 1977,
51 | "by" : "ACM"
52 | },
53 | {
54 | "award" : "Draper Prize",
55 | "year" : 1993,
56 | "by" : "National Academy of Engineering"
57 | }
58 | ]
59 | },
60 | {
61 | "_id" : ObjectId("51df07b094c6acd67e492f41"),
62 | "name" : {
63 | "first" : "John",
64 | "last" : "McCarthy"
65 | },
66 | "birth" : ISODate("1927-09-04T04:00:00Z"),
67 | "death" : ISODate("2011-12-24T05:00:00Z"),
68 | "contribs" : [
69 | "Lisp",
70 | "Artificial Intelligence",
71 | "ALGOL"
72 | ],
73 | "awards" : [
74 | {
75 | "award" : "Turing Award",
76 | "year" : 1971,
77 | "by" : "ACM"
78 | },
79 | {
80 | "award" : "Kyoto Prize",
81 | "year" : 1988,
82 | "by" : "Inamori Foundation"
83 | },
84 | {
85 | "award" : "National Medal of Science",
86 | "year" : 1990,
87 | "by" : "National Science Foundation"
88 | }
89 | ]
90 | },
91 | {
92 | "_id" : 3,
93 | "name" : {
94 | "first" : "Grace",
95 | "last" : "Hopper"
96 | },
97 | "title" : "Rear Admiral",
98 | "birth" : ISODate("1906-12-09T05:00:00Z"),
99 | "death" : ISODate("1992-01-01T05:00:00Z"),
100 | "contribs" : [
101 | "UNIVAC",
102 | "compiler",
103 | "FLOW-MATIC",
104 | "COBOL"
105 | ],
106 | "awards" : [
107 | {
108 | "award" : "Computer Sciences Man of the Year",
109 | "year" : 1969,
110 | "by" : "Data Processing Management Association"
111 | },
112 | {
113 | "award" : "Distinguished Fellow",
114 | "year" : 1973,
115 | "by" : " British Computer Society"
116 | },
117 | {
118 | "award" : "W. W. McDowell Award",
119 | "year" : 1976,
120 | "by" : "IEEE Computer Society"
121 | },
122 | {
123 | "award" : "National Medal of Technology",
124 | "year" : 1991,
125 | "by" : "United States"
126 | }
127 | ]
128 | },
129 | {
130 | "_id" : 4,
131 | "name" : {
132 | "first" : "Kristen",
133 | "last" : "Nygaard"
134 | },
135 | "birth" : ISODate("1926-08-27T04:00:00Z"),
136 | "death" : ISODate("2002-08-10T04:00:00Z"),
137 | "contribs" : [
138 | "OOP",
139 | "Simula"
140 | ],
141 | "awards" : [
142 | {
143 | "award" : "Rosing Prize",
144 | "year" : 1999,
145 | "by" : "Norwegian Data Association"
146 | },
147 | {
148 | "award" : "Turing Award",
149 | "year" : 2001,
150 | "by" : "ACM"
151 | },
152 | {
153 | "award" : "IEEE John von Neumann Medal",
154 | "year" : 2001,
155 | "by" : "IEEE"
156 | }
157 | ]
158 | },
159 | {
160 | "_id" : 5,
161 | "name" : {
162 | "first" : "Ole-Johan",
163 | "last" : "Dahl"
164 | },
165 | "birth" : ISODate("1931-10-12T04:00:00Z"),
166 | "death" : ISODate("2002-06-29T04:00:00Z"),
167 | "contribs" : [
168 | "OOP",
169 | "Simula"
170 | ],
171 | "awards" : [
172 | {
173 | "award" : "Rosing Prize",
174 | "year" : 1999,
175 | "by" : "Norwegian Data Association"
176 | },
177 | {
178 | "award" : "Turing Award",
179 | "year" : 2001,
180 | "by" : "ACM"
181 | },
182 | {
183 | "award" : "IEEE John von Neumann Medal",
184 | "year" : 2001,
185 | "by" : "IEEE"
186 | }
187 | ]
188 | },
189 | {
190 | "_id" : 6,
191 | "name" : {
192 | "first" : "Guido",
193 | "last" : "van Rossum"
194 | },
195 | "birth" : ISODate("1956-01-31T05:00:00Z"),
196 | "contribs" : [
197 | "Python"
198 | ],
199 | "awards" : [
200 | {
201 | "award" : "Award for the Advancement of Free Software",
202 | "year" : 2001,
203 | "by" : "Free Software Foundation"
204 | },
205 | {
206 | "award" : "NLUUG Award",
207 | "year" : 2003,
208 | "by" : "NLUUG"
209 | }
210 | ]
211 | },
212 | {
213 | "_id" : ObjectId("51e062189c6ae665454e301d"),
214 | "name" : {
215 | "first" : "Dennis",
216 | "last" : "Ritchie"
217 | },
218 | "birth" : ISODate("1941-09-09T04:00:00Z"),
219 | "death" : ISODate("2011-10-12T04:00:00Z"),
220 | "contribs" : [
221 | "UNIX",
222 | "C"
223 | ],
224 | "awards" : [
225 | {
226 | "award" : "Turing Award",
227 | "year" : 1983,
228 | "by" : "ACM"
229 | },
230 | {
231 | "award" : "National Medal of Technology",
232 | "year" : 1998,
233 | "by" : "United States"
234 | },
235 | {
236 | "award" : "Japan Prize",
237 | "year" : 2011,
238 | "by" : "The Japan Prize Foundation"
239 | }
240 | ]
241 | },
242 | {
243 | "_id" : 8,
244 | "name" : {
245 | "first" : "Yukihiro",
246 | "aka" : "Matz",
247 | "last" : "Matsumoto"
248 | },
249 | "birth" : ISODate("1965-04-14T04:00:00Z"),
250 | "contribs" : [
251 | "Ruby"
252 | ],
253 | "awards" : [
254 | {
255 | "award" : "Award for the Advancement of Free Software",
256 | "year" : 2011,
257 | "by" : "Free Software Foundation"
258 | }
259 | ]
260 | },
261 | {
262 | "_id" : 9,
263 | "name" : {
264 | "first" : "James",
265 | "last" : "Gosling"
266 | },
267 | "birth" : ISODate("1955-05-19T04:00:00Z"),
268 | "contribs" : [
269 | "Java"
270 | ],
271 | "awards" : [
272 | {
273 | "award" : "The Economist Innovation Award",
274 | "year" : 2002,
275 | "by" : "The Economist"
276 | },
277 | {
278 | "award" : "Officer of the Order of Canada",
279 | "year" : 2007,
280 | "by" : "Canada"
281 | }
282 | ]
283 | },
284 | {
285 | "_id" : 10,
286 | "name" : {
287 | "first" : "Martin",
288 | "last" : "Odersky"
289 | },
290 | "contribs" : [
291 | "Scala"
292 | ]
293 | }
294 |
295 | ] )
296 |
297 |
298 |
299 | ######################################################################
300 |
301 | docker run --rm -d --name app1 -v /var/run/docker.sock:/var/run/docker.sock --network none kiran2361993/troubleshootingtools:v1
302 |
303 | docker volume create portainer_data
304 |
305 | docker run -d -p 8000:8000 -p 9443:9443 --name portainer \
306 | --restart=always \
307 | -v /var/run/docker.sock:/var/run/docker.sock \
308 | -v portainer_data:/data \
309 | portainer/portainer-ce:2.11.1
310 |
--------------------------------------------------------------------------------
/Day04/DockerFile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:latest
2 |
3 | LABEL name="saikiran"
4 |
5 | ENV AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXXX \
6 |     AWS_SECRET_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXXX \
7 |     AWS_DEFAULT_REGION=US-EAST-1A
8 |
9 | ARG T_VERSION='1.6.6' \
10 |     P_VERSION='1.8.0'
11 |
12 | RUN apt update && apt install -y jq net-tools curl wget unzip\
13 | && apt install -y nginx iputils-ping
14 |
15 | RUN wget https://releases.hashicorp.com/terraform/${T_VERSION}/terraform_${T_VERSION}_linux_amd64.zip \
16 | && wget https://releases.hashicorp.com/packer/${P_VERSION}/packer_${P_VERSION}_linux_amd64.zip \
17 | && unzip terraform_${T_VERSION}_linux_amd64.zip && unzip packer_${P_VERSION}_linux_amd64.zip\
18 | && chmod 777 terraform && chmod 777 packer\
19 | && ./terraform version && ./packer version
20 |
21 | CMD ["nginx","-g","daemon off;"]
22 |
--------------------------------------------------------------------------------
/Day04/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | # Docker File part 01
7 |
8 | This repository contains a Dockerfile for setting up an environment with Terraform and Packer on an Ubuntu base image. Below are the steps and commands used in the Dockerfile and for running the container.
9 |
10 | ## Dockerfile Overview
11 |
12 | The Dockerfile performs the following tasks:
13 | 1. Sets up an Ubuntu base image.
14 | 2. Installs necessary tools (jq, net-tools, curl, wget, unzip, nginx, iputils-ping).
15 | 3. Downloads and installs specific versions of Terraform and Packer.
16 | 4. Runs nginx in the foreground.
17 |
18 | ### Dockerfile Contents
19 |
20 |
21 | FROM ubuntu:latest
22 |
23 | LABEL name="saikiran"
24 |
25 | ENV AWS_ACCESS_KEY_ID=SDFSDFSDFSDFSDFSDFSDFSDF \
26 | AWS_SECRET_KEY_ID=SDSDSDSDSDSDSDSDSDSDSDSD \
27 | AWS_DEFAULT_REGION=US-EAST-1A
28 |
29 | ARG T_VERSION='1.6.6' \
30 | P_VERSION='1.8.0'
31 |
32 | RUN apt update && apt install -y jq net-tools curl wget unzip \
33 | && apt install -y nginx iputils-ping
34 |
35 | RUN wget https://releases.hashicorp.com/terraform/${T_VERSION}/terraform_${T_VERSION}_linux_amd64.zip \
36 | && wget https://releases.hashicorp.com/packer/${P_VERSION}/packer_${P_VERSION}_linux_amd64.zip \
37 | && unzip terraform_${T_VERSION}_linux_amd64.zip && unzip packer_${P_VERSION}_linux_amd64.zip \
38 | && chmod 777 terraform && chmod 777 packer \
39 | && ./terraform version && ./packer version
40 |
41 | CMD ["nginx", "-g", "daemon off;"]
42 |
43 |
44 | ## Docker Commands
45 |
46 | ### Building the Docker Image
47 |
48 | To build the Docker image with a specific name and tag:
49 |
50 | docker build -t your_image_name:v3 .
51 |
52 |
53 | To build the Docker image without using the cache:
54 |
55 | docker build --no-cache -t your_image_name:v3 .
56 |
57 |
58 | To pass build arguments for Terraform and Packer versions:
59 |
60 | docker build --build-arg T_VERSION=1.4.0 --build-arg P_VERSION=1.5.0 -t your_image_name:v3 .
61 |
62 |
63 | To build the Docker image with plain progress output:
64 |
65 | docker build --progress=plain -t your_image_name:v3 .
66 |
67 |
68 | ### Running the Container
69 |
70 | To run the container and check the versions of Terraform and Packer:
71 |
72 | docker run -it your_image_name:v3 /bin/bash
73 | ./terraform version
74 | ./packer version
75 |
76 |
77 | ### Setting Environment Variables
78 |
79 | To pass environment variables when running the container:
80 |
81 | docker run -p 80:80 -e AWS_ACCESS_KEY_ID=your_access_key -e AWS_SECRET_KEY_ID=your_secret_key -e AWS_DEFAULT_REGION=your_region your_image_name:v3
82 |
83 |
84 | ## Useful Docker Commands
85 |
86 | ### Adding a Tag
87 |
88 | To add a tag to an existing image:
89 |
90 | docker tag your_image_name:v3 your_image_name:latest
91 |
92 |
93 | ### Docker History
94 |
95 | To check the history of the Docker image:
96 |
97 | docker history your_image_name:v3
98 |
99 | ### Cleanup unused Docker objects
100 |
101 | Remove all unused containers, networks, images (both dangling and unused), and optionally, volumes.
102 |
103 | docker system prune
104 |
105 | ## Dockerfile Instructions Explained
106 |
107 | - **ADD**: Adds files from your host system to the image. Useful for adding archives which need to be extracted.
108 | - **COPY**: Copies files from your host system to the image. Used for copying local files and directories.
109 | - **ENTRYPOINT**: Configures a container to run as an executable.
110 | - **WORKDIR**: Sets the working directory inside the image where subsequent commands will be run from.
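A minimal sketch tying these four instructions together; the file names and entrypoint are illustrative, not from this repo:

FROM ubuntu:latest
WORKDIR /app                                # subsequent commands run from /app
COPY app.conf /app/                         # plain copy from the build context
ADD site.tar.gz /var/www/html/              # ADD auto-extracts local archives
ENTRYPOINT ["nginx", "-g", "daemon off;"]   # container runs as this executable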
111 |
--------------------------------------------------------------------------------
/Day04/portainer:
--------------------------------------------------------------------------------
1 | Portainer consists of two elements, the Portainer Server, and the Portainer Agent. Both elements run as lightweight Docker containers on a Docker engine.
2 | This document will help you install the Portainer Server container on your Linux environment.
3 | To add a new Linux environment to an existing Portainer Server installation, please refer to the Portainer Agent installation instructions.
4 |
5 | docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.21.4
6 |
--------------------------------------------------------------------------------
/Day05/DockerFile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:latest
2 | LABEL name="saikiran"
3 | RUN mkdir /app
4 | RUN groupadd appuser && useradd -r -g appuser appuser
5 | WORKDIR /app
6 | USER appuser
7 | ENV AWS_ACCESS_KEY_ID=DUMMYKEY \
8 |     AWS_SECRET_KEY_ID=DUMMYKEY \
9 |     AWS_DEFAULT_REGION=US-EAST-1A
10 | COPY index.nginx-debian.html /var/www/html/
11 | COPY scorekeeper.js /var/www/html
12 | ADD style.css /var/www/html
13 | ADD https://releases.hashicorp.com/terraform/1.9.0/terraform_1.9.0_linux_amd64.zip /var/www/html
14 | ARG T_VERSION='1.6.6' \
15 |     P_VERSION='1.8.0'
16 | EXPOSE 80
17 | RUN apt update && apt install -y jq net-tools curl wget unzip\
18 | && apt install -y nginx iputils-ping
19 |
20 | RUN wget https://releases.hashicorp.com/terraform/${T_VERSION}/terraform_${T_VERSION}_linux_amd64.zip \
21 | && wget https://releases.hashicorp.com/packer/${P_VERSION}/packer_${P_VERSION}_linux_amd64.zip \
22 | && unzip terraform_${T_VERSION}_linux_amd64.zip && unzip packer_${P_VERSION}_linux_amd64.zip\
23 | && chmod 777 terraform && chmod 777 packer\
24 | && ./terraform version && ./packer version
25 | USER appuser
26 | CMD ["nginx","-g","daemon off;"]
27 |
--------------------------------------------------------------------------------
/Day05/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 |
4 |
5 | # Docker Commands Explained
6 |
7 | Welcome to this GitHub repository! This guide will help you understand some basic Docker commands used in a Dockerfile. Let's break down the commands used in the Dockerfile for our project.
8 |
9 | ## Dockerfile Overview
10 |
11 | Here's the Dockerfile we are using:
12 |
13 | ```dockerfile
14 | FROM ubuntu:latest
15 | LABEL name="saikiran"
16 | RUN mkdir /app
17 | RUN groupadd appuser && useradd -r -g appuser appuser
18 | WORKDIR /app
19 | USER appuser
20 | ENV AWS_ACCESS_KEY_ID=DUMMYKEY \
21 | AWS_SECRET_KEY_ID=DUMMYKEY \
22 | AWS_DEFAULT_REGION=US-EAST-1A
23 | COPY index.nginx-debian.html /var/www/html/
24 | COPY scorekeeper.js /var/www/html
25 | ADD style.css /var/www/html
26 | ADD https://releases.hashicorp.com/terraform/1.9.0/terraform_1.9.0_linux_amd64.zip /var/www/html
27 | ARG T_VERSION='1.6.6' \
28 | P_VERSION='1.8.0'
29 | EXPOSE 80
30 | RUN apt update && apt install -y jq net-tools curl wget unzip \
31 | && apt install -y nginx iputils-ping
32 | RUN wget https://releases.hashicorp.com/terraform/${T_VERSION}/terraform_${T_VERSION}_linux_amd64.zip \
33 | && wget https://releases.hashicorp.com/packer/${P_VERSION}/packer_${P_VERSION}_linux_amd64.zip \
34 | && unzip terraform_${T_VERSION}_linux_amd64.zip && unzip packer_${P_VERSION}_linux_amd64.zip \
35 | && chmod 777 terraform && chmod 777 packer \
36 | && ./terraform version && ./packer version
37 | USER appuser
38 | CMD ["nginx","-g","daemon off;"]
39 | ```
40 |
41 | ### Explanation of Docker Commands
42 |
43 | #### 1. `FROM`
44 |
45 | ```dockerfile
46 | FROM ubuntu:latest
47 | ```
48 |
49 | This command sets the base image for the Docker image. Here, we are using the latest version of Ubuntu.
50 |
51 | #### 2. `LABEL`
52 |
53 | ```dockerfile
54 | LABEL name="saikiran"
55 | ```
56 |
57 | This command adds metadata to the image. In this case, we are labeling the image with a name.
58 |
59 | #### 3. `RUN`
60 |
61 | ```dockerfile
62 | RUN mkdir /app
63 | RUN groupadd appuser && useradd -r -g appuser appuser
64 | ```
65 |
66 | These commands execute commands inside the image. The first command creates a directory named `/app`. The second command creates a new group and a new user.
67 |
68 | #### 4. `WORKDIR`
69 |
70 | ```dockerfile
71 | WORKDIR /app
72 | ```
73 |
74 | This command sets the working directory for the subsequent instructions. Here, it sets `/app` as the working directory.
75 |
76 | #### 5. `USER`
77 |
78 | ```dockerfile
79 | USER appuser
80 | ```
81 |
82 | This command sets the user for the subsequent instructions. Here, it sets `appuser` as the user.
83 |
84 | #### 6. `ENV`
85 |
86 | ```dockerfile
87 | ENV AWS_ACCESS_KEY_ID=DUMMYKEY \
88 | AWS_SECRET_KEY_ID=DUMMYKEY \
89 | AWS_DEFAULT_REGION=US-EAST-1A
90 | ```
91 |
92 | This command sets environment variables inside the container.
93 |
94 | #### 7. `COPY`
95 |
96 | ```dockerfile
97 | COPY index.nginx-debian.html /var/www/html/
98 | COPY scorekeeper.js /var/www/html
99 | ```
100 |
101 | These commands copy files from the host machine to the container. The first command copies `index.nginx-debian.html` to `/var/www/html/`, and the second copies `scorekeeper.js` to the same directory.
102 |
103 | #### 8. `ADD`
104 |
105 | ```dockerfile
106 | ADD style.css /var/www/html
107 | ADD https://releases.hashicorp.com/terraform/1.9.0/terraform_1.9.0_linux_amd64.zip /var/www/html
108 | ```
109 |
110 | These commands add files from the host machine or a URL to the container. The first command adds `style.css` to `/var/www/html`, and the second downloads and adds a file from a URL.
111 |
112 | #### 9. `ARG`
113 |
114 | ```dockerfile
115 | ARG T_VERSION='1.6.6' \
116 | P_VERSION='1.8.0'
117 | ```
118 |
119 | This command defines variables that users can pass at build-time to the builder with the `docker build` command.
120 |
121 | #### 10. `EXPOSE`
122 |
123 | ```dockerfile
124 | EXPOSE 80
125 | ```
126 |
127 | This command informs Docker that the container will listen on the specified network ports at runtime. Here, it exposes port 80.
128 |
129 | #### 11. `CMD`
130 |
131 | ```dockerfile
132 | CMD ["nginx","-g","daemon off;"]
133 | ```
134 |
135 | This command provides the default command to run when the container starts. Here, it starts the `nginx` server in the foreground.
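Unlike ENTRYPOINT, the CMD is only a default; anything passed after the image name on `docker run` replaces it:

```bash
docker run my-app                 # runs the default: nginx -g 'daemon off;'
docker run -it my-app /bin/bash   # CMD is overridden; you get a shell instead
```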
136 |
137 | ### Building and Running the Docker Image
138 |
139 | 1. **Build the Docker Image:**
140 |
141 | ```bash
142 | docker build -t my-app .
143 | ```
144 |
145 | 2. **Run the Docker Container:**
146 |
147 | ```bash
148 | docker run -p 80:80 my-app
149 | ```
150 |
151 | Now, your application should be up and running on port 80!
152 |
153 | For a detailed explanation, you can watch the video on YouTube: [Docker Commands Explained](https://youtu.be/orQV059Zu8E?si=z1EHwd2s8p-q8jFL).
154 |
--------------------------------------------------------------------------------
/Day06/DockerFile_Multibuild:
--------------------------------------------------------------------------------
1 | # Stage 1: Build stage
2 | FROM maven:3.8.5-openjdk-17-slim AS build
3 | WORKDIR /app
4 | COPY . .
5 | RUN ./mvnw package -DskipTests
6 |
7 | # Stage 2: Runtime stage
8 | FROM eclipse-temurin:17-jre
9 | WORKDIR /app
10 | COPY --from=build /app/target/*.jar app.jar
11 | EXPOSE 8080
12 | CMD ["java", "-jar", "app.jar"]
13 |
14 |
15 |
16 | # Stage 1: Build stage
17 | FROM maven:3.8.5-openjdk-17-slim AS build
18 | WORKDIR /app
19 | COPY . .
20 | RUN ./mvnw package -DskipTests
21 |
22 | # Stage 2: Runtime stage
23 | FROM openjdk:17-jdk-slim AS runtime
24 | WORKDIR /app
25 | COPY --from=build /app/target/*.jar app.jar
26 |
27 | EXPOSE 8080
28 | CMD ["java", "-jar", "app.jar"]
29 |
--------------------------------------------------------------------------------
/Day06/Dockerfile:
--------------------------------------------------------------------------------
1 | # NOTE: distroless images ship no shell or package manager, so the apt/wget
2 | # steps below require a Debian/Ubuntu base; keep distroless images for the
3 | # final stage of a multi-stage build instead.
4 | FROM ubuntu:22.04
5 | LABEL name="saikiran"
6 | ARG T_VERSION='1.6.6' \
7 |     P_VERSION='1.8.0'
8 | RUN mkdir /app \
9 |     && groupadd appuser && useradd -r -g appuser appuser \
10 |     && apt update && apt install -y jq net-tools curl wget unzip \
11 |     && apt install -y nginx iputils-ping \
12 |     && wget https://releases.hashicorp.com/terraform/${T_VERSION}/terraform_${T_VERSION}_linux_amd64.zip \
13 |     && wget https://releases.hashicorp.com/packer/${P_VERSION}/packer_${P_VERSION}_linux_amd64.zip \
14 |     && unzip terraform_${T_VERSION}_linux_amd64.zip && unzip packer_${P_VERSION}_linux_amd64.zip \
15 |     && chmod +x terraform packer \
16 |     && ./terraform version && ./packer version
17 | WORKDIR /app
18 | ENV AWS_ACCESS_KEY_ID=DUMMYKEY \
19 |     AWS_SECRET_ACCESS_KEY=DUMMYKEY \
20 |     AWS_DEFAULT_REGION=us-east-1
21 | # COPY index.nginx-debian.html /var/www/html/
22 | # COPY scorekeeper.js /var/www/html
23 | # ADD style.css /var/www/html
24 | ADD https://releases.hashicorp.com/terraform/1.9.0/terraform_1.9.0_linux_amd64.zip /var/www/html
25 | EXPOSE 80
26 | # nginx needs root to bind port 80; switch to USER appuser only if nginx
27 | # is reconfigured to listen on an unprivileged port
28 | CMD ["nginx","-g","daemon off;"]
29 |
--------------------------------------------------------------------------------
/Day06/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 |
4 | # Google Distroless Images and Docker Multi-Stage Builds
5 |
6 | ## What are Distroless Images?
7 |
8 | Distroless images are Docker images built by Google that contain only your application and its runtime dependencies, without a shell, package manager, or any additional software. This approach minimizes the attack surface and reduces the size of your container image.
9 |
10 | ## Why Use Distroless Images?
11 |
12 | - **Reduced Attack Surface:** Since Distroless images contain only your application and its dependencies, they eliminate unnecessary tools and libraries that could be exploited by attackers.
13 |
14 | - **Smaller Image Size:** By excluding unnecessary components, Distroless images are significantly smaller than traditional Linux distribution-based images. This results in faster image pulls and reduced storage costs.
15 |
16 | Project repository: https://github.com/GoogleContainerTools/distroless
17 |
18 | ## Docker Multi-Stage Builds
19 |
20 | Docker multi-stage builds allow you to create more efficient Dockerfiles by using multiple `FROM` statements. Each stage in the build process can have its own base image and set of instructions, enabling you to compile code, run tests, and package your application without including unnecessary build tools and dependencies in the final image.
21 |
22 | Clone https://github.com/spring-projects/spring-petclinic.git
23 |
24 | Follow along by watching the full video here: https://youtu.be/CMO0MziP2yQ
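25 |
26 | The build step can be sketched as follows, assuming you copy Day06's `DockerFile_Multibuild` into the cloned repo as `Dockerfile` (the `petclinic:multistage` tag is illustrative):
27 |
28 | ```bash
29 | git clone https://github.com/spring-projects/spring-petclinic.git
30 | cd spring-petclinic
31 | # Build using the multi-stage Dockerfile, then compare image sizes
32 | docker build -t petclinic:multistage .
33 | docker images | grep -E 'petclinic|maven'
34 | ```
35 |
36 | Because only the packaged JAR is copied out of the build stage, the final image stays far smaller than the Maven build image.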
25 |
--------------------------------------------------------------------------------
/Day07/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 |
4 | # Docker Compose
5 |
6 | Welcome to the Docker Compose Tutorial! This guide accompanies a video tutorial on Docker Compose available on the [Mastering Cloud](https://youtu.be/WV6SnFMYM1w) YouTube channel.
7 |
8 | ## What is Docker Compose?
9 |
10 | Docker Compose is a tool that allows you to define and manage multi-container Docker applications. It uses a simple YAML file to configure your application’s services, networks, and volumes, making it easier to run and scale applications with multiple Docker containers.
11 |
12 | ## Prerequisites
13 |
14 | - Docker installed on your machine
15 | - Basic understanding of Docker concepts
16 | - Docker Compose installed (usually included with Docker)
17 |
18 | ## Getting Started
19 |
20 | 1. **Clone the Repository**
21 |
22 | Clone this repository to follow along with the examples:
23 |
24 | ```sh
25 | git clone https://github.com/dockersamples/example-voting-app.git
26 | cd example-voting-app
27 | ```
28 |
29 | 2. **Follow the Video Tutorial**
30 |
31 | Watch the [Docker Compose tutorial video](https://youtu.be/WV6SnFMYM1w) to learn how to:
32 | - Define multi-container applications with Docker Compose
33 | - Use simple commands to manage your Docker containers
34 | - Scale your application services effortlessly
35 | - Manage networks and volumes using Docker Compose
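36 |
37 | These operations boil down to a handful of Compose commands; a sketch, run from the repository root (service names taken from `compose.yaml`):
38 |
39 | ```bash
40 | docker compose up -d            # build (if needed) and start all services
41 | docker compose ps               # list service containers and their health
42 | docker compose logs -f worker   # follow logs for a single service
43 | # Scale a service with no fixed host port (vote/result publish fixed
44 | # ports, so scaling those would conflict on the host)
45 | docker compose up -d --scale worker=2
46 | docker compose down -v          # stop everything and remove named volumes
47 | ```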
36 |
37 | ## Additional Resources
38 |
39 | - [Docker Compose Documentation](https://docs.docker.com/compose/)
40 | - [Mastering Cloud YouTube Channel](https://www.youtube.com/channel/UCXXXXXXX) for more Docker and DevOps tutorials
41 |
42 | ## Feedback
43 |
44 | If you found this tutorial helpful or have any questions, feel free to leave a comment on the YouTube video or reach out via GitHub.
45 |
46 | ---
47 |
--------------------------------------------------------------------------------
/Day07/compose.yaml:
--------------------------------------------------------------------------------
1 | # version is now using "compose spec"
2 | # v2 and v3 are now combined!
3 | # docker-compose v1.27+ required
4 |
5 | services:
6 | vote:
7 | build:
8 | context: ./vote
9 | target: dev
10 | depends_on:
11 | redis:
12 | condition: service_healthy
13 | healthcheck:
14 | test: ["CMD", "curl", "-f", "http://localhost"]
15 | interval: 15s
16 | timeout: 5s
17 | retries: 3
18 | start_period: 10s
19 | volumes:
20 | - ./vote:/usr/local/app
21 | ports:
22 | - "5000:80"
23 | networks:
24 | - front-tier
25 | - back-tier
26 |
27 | result:
28 | build: ./result
29 | # use nodemon rather than node for local dev
30 | entrypoint: nodemon --inspect=0.0.0.0 server.js
31 | depends_on:
32 | db:
33 | condition: service_healthy
34 | volumes:
35 | - ./result:/usr/local/app
36 | ports:
37 | - "5001:80"
38 | - "127.0.0.1:9229:9229"
39 | networks:
40 | - front-tier
41 | - back-tier
42 |
43 | worker:
44 | build:
45 | context: ./worker
46 | depends_on:
47 | redis:
48 | condition: service_healthy
49 | db:
50 | condition: service_healthy
51 | networks:
52 | - back-tier
53 |
54 | redis:
55 | image: redis:alpine
56 | volumes:
57 | - "./healthchecks:/healthchecks"
58 | healthcheck:
59 | test: /healthchecks/redis.sh
60 | interval: "5s"
61 | networks:
62 | - back-tier
63 |
64 | db:
65 | image: postgres:15-alpine
66 | environment:
67 | POSTGRES_USER: "NA"
68 | POSTGRES_PASSWORD: "NA"
69 | volumes:
70 | - "db-data:/var/lib/postgresql/data"
71 | - "./healthchecks:/healthchecks"
72 | healthcheck:
73 | test: /healthchecks/postgres.sh
74 | interval: "5s"
75 | networks:
76 | - back-tier
77 |
78 | # this service runs once to seed the database with votes
79 | # it won't run unless you specify the "seed" profile
80 | # docker compose --profile seed up -d
81 | seed:
82 | build: ./seed-data
83 | profiles: ["seed"]
84 | depends_on:
85 | vote:
86 | condition: service_healthy
87 | networks:
88 | - front-tier
89 | restart: "no"
90 |
91 | volumes:
92 | db-data:
93 |
94 | networks:
95 | front-tier:
96 | back-tier:
97 |
--------------------------------------------------------------------------------
/Day08/README.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 |
4 |
5 | ---
6 |
7 | # Docker Swarm Setup and Management
8 |
9 | Welcome to the Docker Swarm Setup and Management guide! This guide will walk you through setting up a Docker Swarm cluster, deploying services, and managing the cluster with simple commands.
10 |
11 | ## Introduction
12 |
13 | Docker Swarm is a native clustering and orchestration tool for Docker containers. It allows you to manage a cluster of Docker engines as a single virtual system. This guide provides step-by-step instructions to set up and manage your Docker Swarm cluster.
14 |
15 | ## Prerequisites
16 |
17 | - Docker installed on all nodes
18 | - Access to multiple instances (machines or VMs)
19 |
20 | ## Setting Up Docker Swarm
21 |
22 | ### Steps
23 |
24 | 1. **Initialize Docker Swarm on the first node:** This command initializes the first node as the manager.
25 |
26 | 2. **Join worker nodes:** Use the worker join token on each worker node.
27 |
28 | 3. **Join manager nodes:** Use the manager join token on each additional manager node.
29 |
30 | 4. **Verify the setup:** Check the status of the Swarm and list all nodes to ensure they are correctly joined.
31 |
32 | 5. **Check networks:** View the networks available in the Swarm.
33 |
34 | ### Commands
35 |
36 | ```bash
37 | # Initialize Docker Swarm on the first node
38 | docker swarm init
39 |
40 | # Join worker nodes (get the token with: docker swarm join-token worker)
41 | docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377
42 |
43 | # Join manager nodes (get the token with: docker swarm join-token manager)
44 | docker swarm join --token <MANAGER-TOKEN> <MANAGER-IP>:2377
45 |
46 | # Check Swarm status
47 | docker info
48 |
49 | # List nodes
50 | docker node ls
51 |
52 | # Check networks
53 | docker network ls
54 | ```
55 |
56 | ## Deploying Services
57 |
58 | ### Steps
59 |
60 | 1. **Deploy a sample service:** Create a service with 3 replicas and publish it on port 8000.
61 |
62 | 2. **Scale the service up and down:** Adjust the number of replicas for the service.
63 |
64 | 3. **Deploy service only on worker nodes:** Ensure the service runs only on worker nodes.
65 |
66 | 4. **Deploy a global service:** Run the service on every node.
67 |
68 | ### Commands
69 |
70 | ```bash
71 | # Deploy a sample service
72 | docker service create --name app1 --replicas 3 --publish 8000:80 kiran2361993/rollingupdate:v10
73 |
74 | # Scale the service up
75 | docker service scale app1=6
76 |
77 | # Scale the service down
78 | docker service scale app1=1
79 |
80 | # Deploy service only on worker nodes
81 | docker service create --name app1 --constraint node.role==worker --replicas 6 --publish 8000:80 kiran2361993/rollingupdate:v10
82 |
83 | # Deploy a global service (runs on every node)
84 | docker service create --name monitor --publish 9100:9100 --mode global prom/node-exporter
85 |
86 | docker service create --name cadvisor --publish 8888:8080 --mode global google/cadvisor:latest
87 | ```
88 |
89 | ## Managing Services and Nodes
90 |
91 | ### Steps
92 |
93 | 1. **Check service tasks:** List the tasks of a service.
94 |
95 | 2. **Put a node in maintenance mode:** Drain the node to prevent it from running tasks.
96 |
97 | 3. **Reactivate a node:** Set the node back to active status.
98 |
99 | ### Commands
100 |
101 | ```bash
102 | # Check service tasks
103 | docker service ps app1
104 |
105 | # Put a node in maintenance mode
106 | docker node update --availability drain <NODE-NAME>
107 |
108 | # Reactivate a node
109 | docker node update --availability active <NODE-NAME>
110 | ```
111 |
112 | ## Load Balancing
113 |
114 | To see load balancing in action, run the following command in a loop. This shows how requests are distributed across different nodes.
115 |
116 | ```bash
117 | while true; do curl -sL http://<NODE-IP>:8000/ | grep -i 'IP-A'; sleep 1; done
118 | ```
119 |
120 | ## Visualizing the Cluster
121 |
122 | Use Swarm Visualizer from GitHub to visualize your cluster. Access the visualizer at `http://<MANAGER-IP>:8080`.
123 |
124 | ```bash
125 | docker run -it -d -p 8080:8080 --name swarm-visualizer -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer
126 | ```
127 |
128 | ## Conclusion
129 |
130 | Docker Swarm makes it easy to manage a cluster of Docker containers. With these simple commands, you can set up, deploy, and manage services across multiple nodes. Happy clustering!
131 |
132 |
--------------------------------------------------------------------------------
/Day09/README.md:
--------------------------------------------------------------------------------
1 |
2 | 
3 |
4 |
5 | # Docker Swarm Secrets: Securely Manage Sensitive Data (Part 2)
6 |
7 | **Introduction**
8 |
9 | This repository complements Part 1 of the Docker Swarm series, focusing on secure management of sensitive data like database credentials, API keys, and more. By leveraging Docker Swarm secrets, you can separate sensitive information from your container images, enhancing the security posture of your applications.
10 |
11 | **Understanding Docker Swarm Secrets**
12 |
13 | Docker Swarm secrets allow you to store sensitive information outside of container images, preventing accidental exposure or unauthorized access. Secrets are managed as key-value pairs and can be referenced within your containers for secure access.
14 |
15 | **Creating Secrets**
16 |
17 | Here are two common methods to create Docker Swarm secrets:
18 |
19 | **1. File-Based Secrets:**
20 |
21 | * Create a file containing your sensitive data. For example, create a file named `.credentials` and store your access keys and secret keys within it. Use your preferred text editor (e.g., `nano`) for this purpose.
22 | * Convert the file into a secret using the `docker secret create` command:
23 |
24 | ```bash
25 | docker secret create <secret-name> <file-name>
26 | ```
27 |
28 | Replace `<secret-name>` with a descriptive name for your secret (e.g., `aws_creds`) and `<file-name>` with the path to your created file (e.g., `.credentials`).
29 |
30 | **2. Environment Variable Secrets:**
31 |
32 | * Define environment variables during container creation to provide secrets directly. This approach avoids creating separate files:
33 |
34 | ```bash
35 | docker run -e <VARIABLE_NAME>=<value> <image>
36 | ```
37 |
38 | Replace `<VARIABLE_NAME>` with the desired variable name (e.g., `DB_PASSWORD`) and `<value>` with your actual password.
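39 |
40 | Once a secret exists, it can be listed and its metadata inspected; Docker never prints the secret value back. For example, with the `aws_creds` secret:
41 |
42 | ```bash
43 | docker secret ls                  # list all secrets in the swarm
44 | docker secret inspect aws_creds   # metadata only (name, dates, labels)
45 | ```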
39 |
40 | **Mounting Secrets**
41 |
42 | Once you've created your secret, you can mount it into your containers. Swarm delivers each secret as an in-memory file (by default at `/run/secrets/<secret-name>`, or at a custom target path), which the container can read securely:
43 |
44 | 1. Create a YAML file defining your service configuration. For example, create a file named `aws-cli.yml` and specify environment variables that reference the secret:
45 |
46 | ```yaml
47 | version: "3.8"
48 | services:
49 |   aws-cli:
50 |     image: <image-name>
51 |     secrets:
52 |       - source: aws_creds
53 |         target: /root/.aws/credentials
54 |     # ... other service configurations
55 | secrets:
56 |   aws_creds:
57 |     external: true
57 | ```
58 |
59 | Replace `<image-name>` with the actual image name for your AWS CLI tool.
60 |
61 | 2. Deploy the service using the `docker stack deploy` command, referencing your YAML file:
62 |
63 | ```bash
64 | docker stack deploy -c aws-cli.yml AWS
65 | ```
66 |
67 | **Verifying Secrets**
68 |
69 | 1. Use `docker service ps` to check the status of your deployed service.
70 |
71 | 2. **Security Caution:** For verification purposes, connect to a worker node (not a manager node) to enhance security. This prevents accidental exposure of secrets on the managers.
72 |
73 | 3. Use commands like `env`, `aws s3 ls`, and `aws ec2 describe-vpcs | jq` to verify that the container can access the secrets correctly.
74 |
75 | **Additional Considerations**
76 |
77 | * **Security Best Practices:**
78 | * Always use strong passwords and access keys.
79 | * Regularly rotate your secrets to minimize the risk of compromise.
80 | * Consider leveraging tools like Vault or AWS Secrets Manager for more advanced secret management, especially in large-scale deployments.
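81 |
82 | Because Swarm secrets are immutable, rotation means creating a new secret version and swapping it on the service. A sketch using the names from this guide (the `AWS_aws-cli` service name is an assumption; stack services are named `<stack>_<service>`):
83 |
84 | ```bash
85 | # Create the new secret version from an updated credentials file
86 | docker secret create aws_creds_v2 .credentials
87 |
88 | # Swap the secret on the running service; tasks restart with the new mount
89 | docker service update \
90 |   --secret-rm aws_creds \
91 |   --secret-add source=aws_creds_v2,target=/root/.aws/credentials \
92 |   AWS_aws-cli
93 |
94 | # Remove the old secret once nothing references it
95 | docker secret rm aws_creds
96 | ```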
81 |
82 | * **Alternative Method (Optional):**
83 | This guide focuses on file-based secrets. You can also define placeholders for passwords directly within your YAML files. However, this approach is less secure as the passwords are stored in plain text within the YAML configuration. Always prioritize using secrets for sensitive data.
84 |
85 | **Contributing**
86 |
87 | (Include guidelines and links for contributing to your project, if applicable)
88 |
89 | **License**
90 |
91 | (Specify the license under which your code is distributed)
92 |
93 | **Links**
94 |
95 | * Terraform Automation files here : https://github.com/saikiranpi/Terraform-Docker-Swarm-Cluster.git
96 | * Docker Swarm documentation on secrets: [https://docs.docker.com/reference/cli/docker/secret/](https://docs.docker.com/reference/cli/docker/secret/)
97 |
99 |
--------------------------------------------------------------------------------
/Day09/Secret-File:
--------------------------------------------------------------------------------
1 | #Create a file named credentials and paste the following (with working keys):
2 | # [default]
3 | # aws_access_key_id=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
4 | # aws_secret_access_key=ASASASASASASASASASASASASAS
5 | # docker secret create aws_creds credentials
6 | # Use the above created secret mounted to /root/.aws/credentials as shown below.
7 | #docker stack deploy -c aws_cli.yml AWS
8 | #--------------------------------------------------------------------------------------
9 |
10 | version: '3.3'
11 | services:
12 | cli:
13 | image: kiran2361993/rollingupdate:v10
14 | secrets:
15 | - source: aws_creds
16 | target: /root/.aws/credentials
17 | uid: '1000'
18 | gid: '1000'
19 | mode: 0700
20 | deploy:
21 | replicas: 1
22 | placement:
23 | constraints: [node.role != manager]
24 | resources:
25 | reservations:
26 | memory: 128M
27 | limits:
28 | memory: 512M
29 | environment:
30 | AWS_DEFAULT_REGION: us-east-1
31 | networks:
32 | - appnet
33 | secrets:
34 | aws_creds:
35 | external: true
36 | networks:
37 | appnet:
38 |
39 | #####################################################################################################
40 |
41 | # It's not possible to mount Docker secrets directly as ENV variables (unlike k8s).
42 | # docker network create --driver overlay appnet
43 | # openssl rand -base64 12 | docker secret create db_root_password -
44 | # openssl rand -base64 12 | docker secret create db_dba_password -
45 | # docker stack deploy -c <stack-file> <stack-name>
46 | # apt-get install mysql-server (for the mysql client)
47 | # mysql -h MYSQL_db -u root -p
48 | # docker stack deploy -c <stack-file> <stack-name>
49 | ###################################################################
50 | version: '3.3'
51 |
52 | services:
53 | db:
54 | image: mysql:8.0
55 | secrets:
56 | - db_root_password
57 | - db_dba_password
58 | - source: db_root_password
59 | target: ROOT_PASSWORD
60 | - source: db_dba_password
61 | target: DB_PASSWORD
62 | deploy:
63 | replicas: 1
64 | placement:
65 | constraints: [node.role == manager]
66 | resources:
67 | reservations:
68 | memory: 128M
69 | cpus: '0.25'
70 | limits:
71 | memory: 512M
72 | cpus: '1'
73 | ports:
74 | - 3306:3306
75 | environment:
76 | MYSQL_USER: dba
77 | MYSQL_DATABASE: mydb
78 | MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
79 | MYSQL_PASSWORD_FILE: /run/secrets/db_dba_password
80 | networks:
81 | - appnet
82 | volumes:
83 | - "/var/run/docker.sock:/var/run/docker.sock"
84 | - "datavol:/var/lib/mysql"
85 |
86 | adminer:
87 | image: adminer
88 | ports:
89 | - 8888:8080
90 | networks:
91 | - appnet
92 |
93 | utils:
94 | image: kiran2361993/troubleshootingtools:v1
95 | deploy:
96 | replicas: 1
97 | placement:
98 | constraints: [node.role != manager]
99 | networks:
100 | - appnet
101 |
102 | secrets:
103 | db_root_password:
104 | external: true
105 | db_dba_password:
106 | external: true
107 |
108 | networks:
109 | appnet:
110 | external: true
113 |
114 | volumes:
115 | datavol:
116 |
--------------------------------------------------------------------------------
/Day09/Swarm-all-commands:
--------------------------------------------------------------------------------
1 | aws ec2 run-instances --image-id ami-08ebab39574d1e681 --key-name LaptopKey --security-group-ids sg-08944ea9f10aa4121 --instance-type t2.micro --placement AvailabilityZone=us-east-1b --subnet-id subnet-0ac37305c1dc6efc5 --tag-specifications ResourceType=instance,Tags=[{Key=Name,Value=DockerSwarm-}] --count 3
2 |
3 | for host in '3.83.78.48' '3.80.9.65' '3.218.244.146' '3.210.200.182'
4 | do
5 | ssh -i laptopkey.pem ubuntu@$host sudo docker swarm join --token SWMTKN-1-2swjqgcmzywmkqqhaj9rmrarxnsld92zmtl6yd356h5ivy6yci-erqe3fgi80giui4u1alk8ld4w 10.40.1.178:2377
6 | done
7 |
8 | https://www.baeldung.com/ops/docker-reduce-build-context
9 |
10 | docker node ls
11 | docker node promote node
12 | docker node demote node
13 | docker swarm leave
14 | docker swarm leave --force
15 |
16 | docker service rm nginxtest
17 | docker service ps
18 | docker service ps --format 'table'
19 |
20 | docker service create --name nginx --replicas 3 nginx
21 |
22 | 1. Create Swarm Cluster & Swarm basic commands. RAFT DB.
23 | 2. Understand overlay driver & Create overlay network.
24 | https://docs.docker.com/v17.09/engine/swarm/networking/#firewall-considerations
25 |
26 | 3. Understand services and Create a service.
27 | 4. Updating & Scaling Swarm service.
28 | 5. Routing Mesh Swarm visualizer.
29 | https://github.com/dockersamples/docker-swarm-visualizer
30 |
31 | docker run -it -d -p 8080:8080 -v \
32 | /var/run/docker.sock:/var/run/docker.sock \
33 | dockersamples/visualizer
34 |
35 |
36 |
37 |
38 | 6. Container Placement.
39 | ● Service Constraint: https://docs.docker.com/engine/reference/commandline/service_create/
40 |
41 |
42 | ● docker service create --constraint=node.role==manager nginx
43 | ● docker service create --constraint=node.role!=worker nginx
44 | ● docker service create --name TEST --constraint 'node.role == manager' ...
45 | ● docker service create --name TEST --constraint 'node.id == w1rxedpb1mwkwbg97tb45x2dd' ...
46 | ● docker service create --name TEST --constraint 'node.hostname != docker01' ...
47 | ● node.