├── 2. MAVEN
├── PROMETHEUS
├── 1. GIT
├── 3. JENKINS
├── 5. DOCKER
├── 7. TERRAFORM
├── 4. ANSIBLE
└── 6. K8S
/2. MAVEN:
--------------------------------------------------------------------------------
1 | DATE: 05-09-2023
2 | MAVEN:
3 | It is a build tool.
4 | Its main file is pom.xml (Project Object Model, written in XML).
5 | pom.xml holds the entire configuration of the project.
6 | It is used to create artifacts (war, jar, ear).
7 | It is mostly used for java-based projects.
8 | It was initially released on 13 July 2004.
9 | Maven itself is written in Java (requires a JDK, e.g. 1.8.0).
10 |
11 | RAW: .java (source code; cannot be executed directly)
12 | .java -- > compile -- > .class -- > package -- > .jar / .war
13 |
14 | .class : compiled bytecode, which the JVM can execute
15 | .jar : group of .class files (works for backend)
16 | .war : it has frontend and backend code (FE: HTML, CSS, JS & BE: JAVA)
17 | libs : pre-defined packages (dependencies) used by the code.
18 |
19 | ARTIFACTS: the final deliverable of the build.
20 | JAR = JAVA ARCHIVE
21 | WAR = WEB ARCHIVE
22 | EAR = ENTERPRISE ARCHIVE
23 |
24 | ARCHITECTURE:
25 |
26 | SETUP: CREATE EC2
27 | yum install git java-1.8.0-openjdk maven tree -y
28 | git clone https://github.com/devopsbyraham/jenkins-java-project.git
29 | cd jenkins-java-project/
30 |
31 | PLUGIN: it is a small piece of software which automates our work.
32 | with the plugins we can use tools without installing them.
33 |
34 | GOAL: a command used to perform a task.
35 | MAVEN follows a lifecycle
36 |
37 | mvn compile : to compile the source code
38 | mvn test : to test the code
39 | mvn package : to create the artifact (in the project's target directory)
40 | mvn install : to create the artifact and copy it to the maven home dir (~/.m2)
41 | mvn clean package : to clean and then run the entire lifecycle up to package
42 | mvn clean : to delete target
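The goals above read the project coordinates from pom.xml; the artifact name produced by mvn package comes from these fields. A minimal sketch, using the coordinates that appear in the history below (the project's real pom.xml has more sections):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>in.RAHAM</groupId>      <!-- becomes the folder path inside ~/.m2 -->
  <artifactId>NETFLIX</artifactId> <!-- artifact name -->
  <version>1.2.2</version>
  <packaging>war</packaging>       <!-- jar | war | ear -->
</project>
```

With these coordinates, mvn package would produce target/NETFLIX-1.2.2.war, and mvn install would copy it to ~/.m2/repository/in/RAHAM/NETFLIX/1.2.2/.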
43 |
44 |
45 | maven                     ant
46 | pom.xml                   build.xml
47 | has a lifecycle           no lifecycle
48 | declarative               procedural
49 | plugins                   no plugins
50 | no scripts needed         scripts needed
51 |
52 | WHAT IF BUILD FAILS:
53 | 1. CHECK JAVA VERSION FOR MAVEN
54 | 2. CHECK POM.XML
55 | 3. CHECK THE CODE
56 |
57 |
58 | HISTORY:
59 | 1 yum install git java-1.8.0-openjdk maven -y
60 | 2 git clone https://github.com/devopsbyraham/jenkins-java-project.git
61 | 3 ll
62 | 4 cd jenkins-java-project/
63 | 5 ll
64 | 6 vim pom.xml
65 | 7 yum install tree -y
66 | 8 tree
67 | 9 mvn compile
68 | 10 tree
69 | 11 ls
70 | 12 mvn test
71 | 13 tree
72 | 14 mvn package
73 | 15 tree
74 | 16 mvn install
75 | 17 cd /root/.m2/repository/in/RAHAM/NETFLIX/1.2.2/
76 | 18 ll
77 | 19 cd -
78 | 20 ll
79 | 21 mvn clean package
80 | 22 mvn clean
81 | 23 ll
82 | 24 mvn clean package
83 | 25 ll
84 | 26 tree
85 | 27 ll
86 | 28 cd
87 | 29 mvn clean
88 | 30 ll
89 | 31 cd jenkins-java-project/
90 | 32 ll
91 | 33 mvn clean
92 | 34 ll
93 | 35 mvn -version
94 | 36 yum remove maven* -y
95 | 37 yum install maven -y
96 | 38 mvn -version
97 | 39 yum
98 | 40 yum remove maven* java* -y
99 | 41 yum install maven -y
100 | 42 mvn -version
101 | 43 mvn compile
102 | 44 history
103 |
--------------------------------------------------------------------------------
/PROMETHEUS:
--------------------------------------------------------------------------------
1 | PROMETHEUS:
2 |
3 | Prometheus is an open-source monitoring system that is especially well-suited for cloud-native environments, like Kubernetes.
4 | It can monitor the performance of your applications and services.
5 | It sends you an alert if there are any issues.
6 | It has a powerful query language that allows you to analyze the data.
7 | It pulls real-time metrics, then compresses and stores them in a time-series database.
8 | Prometheus is a standalone system, but it can also be used in conjunction with other tools like Alertmanager to send alerts based on the data it collects.
9 | It can be integrated with tools like PagerDuty to send alerts to the appropriate on-call personnel.
10 | It also has a rich set of integrations with other tools and systems.
11 | For example, you can use Prometheus to monitor the health of your Kubernetes cluster, and use its integration with Grafana to visualize the data it collects.
12 |
13 | COMPONENTS OF PROMETHEUS:
14 | Prometheus is a monitoring system that consists of the following components:
15 |
16 | A main server that scrapes and stores time series data
17 | A query language called PromQL is used to retrieve and analyze the data
18 | A set of exporters that are used to collect metrics from various systems and applications
19 | A set of alerting rules that can trigger notifications based on the data
20 | An alert manager that handles the routing and suppression of alerts
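The components above are wired together in the main config file, prometheus.yml. A minimal sketch, assuming Node Exporter runs on the same host with the default ports noted later (9090/9100):

```yaml
global:
  scrape_interval: 15s          # how often to pull metrics

scrape_configs:
  - job_name: prometheus        # Prometheus scraping itself
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node              # Node Exporter host metrics
    static_configs:
      - targets: ['localhost:9100']
```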
21 |
22 | GRAFANA:
23 | Grafana is an open-source data visualization and monitoring platform that allows you to create dashboards to visualize your data and metrics.
24 | It is a popular choice for visualizing time series data, and it integrates with a wide range of data sources, including Prometheus, Elasticsearch, and InfluxDB.
25 | It provides a user-friendly interface that allows you to create and customize dashboards with panels that display your data in a variety of formats, including graphs, gauges, and tables. You can also use Grafana to set up alerts that trigger notifications when certain conditions are met.
26 | Grafana has a rich ecosystem of plugins and integrations that extend its functionality. For example, you can use Grafana to integrate with other tools and services, such as Slack or PagerDuty, to receive alerts and notifications.
27 | Grafana is a powerful tool for visualizing and monitoring your data and metrics, and it is widely used in a variety of industries and contexts.
28 |
29 | CONNECTION:
30 | SETUP BOTH PROMETHEUS & GRAFANA FROM THE BELOW LINK
31 | https://github.com/RAHAMSHAIK007/all-setups.git
32 |
33 | PROMETHEUS: 9090
34 | NODE EXPORTER: 9100
35 | GRAFANA: 3000
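Once Node Exporter is scraped on 9100, queries like these can be run in the Prometheus UI (:9090) or from a Grafana panel. The metric names are standard node_exporter metrics:

```
# CPU busy % per instance over the last 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# memory currently available, in bytes
node_memory_MemAvailable_bytes
```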
36 |
37 | CONNECTING PROMETHEUS TO GRAFANA:
38 | connect to grafana dashboard -- > Data source -- > add -- > prometheus -- > url of prometheus -- > save & test -- > top of page -- > explore data -- > if you want run some queries -- > top -- > import dashboard -- > 1860 -- > load -- > prometheus -- > import
39 |
40 | amazon-linux-extras install epel -y
41 | yum install stress -y    # generates CPU load to test the dashboards
42 |
43 |
44 |
45 | LOKI INTEGRATION:
46 | Grafana with integration of Loki and Promtail:
47 |
48 | apt install docker.io -y    # on Ubuntu; use yum on Amazon Linux
49 |
50 | wget https://raw.githubusercontent.com/grafana/loki/v2.8.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml
51 | wget https://raw.githubusercontent.com/grafana/loki/v2.8.0/clients/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
52 | docker run --name loki -d -v $(pwd):/mnt/config -p 3100:3100 grafana/loki:2.8.0 -config.file=/mnt/config/loki-config.yaml
53 | docker run --name promtail -d -v $(pwd):/mnt/config -v /var/log:/var/log --link loki grafana/promtail:2.8.0 -config.file=/mnt/config/promtail-config.yaml
54 |
55 |
56 |
57 | Data Source in Grafana:
58 | Add data source -- > loki -- > url: http://13.233.139.224:3100 -- > save & test
59 |
60 | Checking logs in Loki:
61 | Explore -- > Label filters -- > job=varlogs -- > run query
62 |
--------------------------------------------------------------------------------
/1. GIT:
--------------------------------------------------------------------------------
1 | GIT: sometimes expanded as "Global Information Tracker" (not an official acronym).
2 |
3 | VCS: VERSION CONTROL SYSTEM
4 | it will keep the code separately for each version.
5 |
6 | v-1 : 100 lines --- > store (repo-1)
7 | v-2 : 200 lines --- > store (repo-2)
8 | v-3 : 300 lines --- > store (repo-3)
9 |
10 | REPO: It is a folder where we store our code.
11 | index.html: it is a basic file for every application
12 |
13 | v1 --- > index.html
14 | v2 --- > index.html
15 | v3 --- > index.html
16 |
17 |
18 | INTRO:
19 | Git is used to track the files.
20 | It will maintain multiple versions of the same file.
21 | It is platform-independent.
22 | It is free and open-source.
23 | It can handle large projects efficiently.
24 | It is a 3rd-generation VCS.
25 | It is written in the C programming language.
26 | It was released in the year 2005.
27 |
28 |
29 | CVCS: CENTRALIZED VERSION CONTROL SYSTEM
30 | EX: SVN: code is stored in a single central repo.
31 |
32 | DVCS: DISTRIBUTED VERSION CONTROL SYSTEM
33 | EX: GIT: every developer has a full copy, so code is stored in multiple repos.
34 |
35 |
36 | ROLLBACK: Going back to the previous version of the application.
37 |
38 |
39 | STAGES:
40 | WORKING DIRECTORY: where we write our source code.
41 | STAGING AREA: we track files here.
42 | REPOSITORY: where we store tracked source code
43 |
44 |
45 | WORKING WITH GIT:
46 | create an ec2 server:
47 | mkdir swiggy
48 | cd swiggy
49 |
50 | yum install git -y [yum=pkg manager, install=action, git=pkg name -y=yes]
51 | git -v : to check version
52 | git init : to initialize a local repo (.git)
53 |
54 |
55 | To create file : vim index.html (content is optional)
56 | to check status : git status
57 | to track file : git add index.html
58 | to check status : git status
59 | to store file : git commit -m "commit-1" index.html
60 |
61 | create a file -- > add -- > commit
62 |
63 | to show commits : git log
64 | to show last 2 commits: git log -2
65 | to show commits in single line: git log -2 --oneline
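The create -- > add -- > commit flow above can be run end to end as a sketch (assuming git is installed; the /tmp path is illustrative):

```shell
set -e
rm -rf /tmp/git-stages && mkdir -p /tmp/git-stages && cd /tmp/git-stages
git init -q .                        # create the local repo (.git)
git config user.name  "raham"        # identity is required before committing
git config user.email "raham@gmail.com"

echo "<h1>swiggy</h1>" > index.html  # working directory: file is untracked
git status --porcelain               # prints "?? index.html"

git add index.html                   # staging area: file is now tracked
git status --porcelain               # prints "A  index.html"

git commit -q -m "commit-1"          # repository: snapshot stored
git log --oneline                    # shows the single commit
```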
66 |
67 |
68 | =================================================
69 |
70 | CONFIGURING USER AND EMAIL:
71 |
72 | git config user.name "raham"
73 | git config user.email "raham@gmail.com"
74 |
75 |
76 | NOTE: this user and email will apply to new commits only.
77 |
78 |
79 | GIT SHOW: used to show the files/changes attached to a commit.
80 | git log --oneline
81 | git show commit_id
82 |
83 |
84 | BRANCHES:
85 | It is an individual line of development.
86 | Developers write the code on branches.
87 | Initially, branches are created in git (locally).
88 | After writing the source code in git, we push it to github.
89 | The default branch is master.
90 | Note: the default branch is created only after the initial commit.
91 |
92 | Dev -- > Git (Movies branch) -- > code -- > github
93 |
94 |
95 | COMMANDS:
96 | git branch : to list the branches
97 | git branch movies : to create the branch movies
98 | git checkout movies : to switch b/w the branches
99 | git checkout -b dth : to create and switch dth at same time
100 | git branch -m old new : to rename a branch
101 | git branch -D recharge : to delete a branch
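The branch commands above, chained into one runnable sketch (assuming git is installed; the /tmp path and branch names are illustrative):

```shell
set -e
rm -rf /tmp/git-branches && mkdir -p /tmp/git-branches && cd /tmp/git-branches
git init -q . && git config user.name raham && git config user.email raham@gmail.com
touch index.html && git add index.html && git commit -q -m "commit-1"  # default branch now exists

git branch movies              # create a branch
git checkout -q movies         # switch to it
git checkout -q -b dth         # create and switch in one step
git branch -m dth recharge     # rename dth to recharge
git checkout -q movies         # we cannot delete the branch we are standing on
git branch -D recharge         # delete it
git branch                     # lists movies and the default branch
```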
102 |
103 | NOTE: to recover a deleted branch (pull it back from github):
104 |
105 | GIT PULL: it will get the branch from github into the local git repo
106 | git pull origin recharge
107 | git checkout recharge
108 |
109 |
110 |
111 |
112 | PROCESS:
113 | git branch movies
114 | git checkout movies
115 | touch movies{1..5}
116 | git add movies*
117 | git commit -m "dev-1" movies*
118 |
119 |
120 | NOW PUSH THE CODE TO GITHUB:
121 | create a repo
122 | git remote add origin https://github.com/nayakdebasish091/paytm.git
123 |
124 | PUSH: to send files from git to github
125 | local: .git & remote: paytm.git
126 | git push origin movies
127 | username:
128 | password:
129 |
130 | TOKEN GENERATION:
131 | account -- > settings -- >developer settings -- > PAT -- > Classic -- > Generate new token -- > classic -- > name: abcd -- > select 6 options -- > generate
132 |
133 | Note: it will be visible only once
134 |
135 |
136 |
137 |
138 |
139 | create branch
140 | switch to branch
141 | files
142 | add
143 | commit
144 | push
145 |
146 | GIT IGNORE: it will not track the files we list in .gitignore.
147 | touch java{1..5}
148 | vim .gitignore
149 | j* -- > :wq
150 | git status
151 | the ignored files no longer appear in git status
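The same steps as a runnable sketch (assuming git is installed; the /tmp path is illustrative):

```shell
set -e
rm -rf /tmp/git-ignore && mkdir -p /tmp/git-ignore && cd /tmp/git-ignore
git init -q .
touch java1 java2 python1
printf 'j*\n' > .gitignore     # ignore everything starting with "j"
git status --porcelain         # lists .gitignore and python1, but no java files
```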
152 |
153 |
154 | GIT RESTORE: to unstage a tracked (staged) file
155 | touch raham
156 | git status
157 | git add raham
158 | git status
159 | git restore --staged raham
160 | git status
161 |
162 |
163 | GET BACK THE DELETED FILE: git restore file_name
164 | Note: we can restore only tracked files.
165 |
166 |
167 | HISTORY:
168 |
169 | 1 df -h
170 | 2 cd cd
171 | 3 cd /
172 | 4 cd /
173 | 5 du -sh
174 | 6 ll
175 | 7 cd
176 | 8 yum install git maven docker httpd tree htop -y
177 | 9 touch file{1..10
178 | 10 rm -rf file\{1..10
179 | 11 rm -rf file\{1..10}
180 | 12 ll
181 | 13 touch file{1..10}
182 | 14 vim file1
183 | 15 rm -rf *
184 | 16 mkdir paytm
185 | 17 cd paytm/
186 | 18 ls -al
187 | 19 git init
188 | 20 touch index.html
189 | 21 git add index.html
190 | 22 git config user.name "raham"
191 | 23 git config user.email "raham@gmail.com"
192 | 24 git commit -m "commit-1" index.html
193 | 25 git branch
194 | 26 git branch movies
195 | 27 git branch
196 | 28 git checkout movies
197 | 29 touch movies{1..5}
198 | 30 git add movies*
199 | 31 git commit -m "dev-1" movies*
200 | 32 git remote add origin https://github.com/RAHAMSHAIK007/1045paytm.git
201 | 33 git push origin movies
202 | 34 git branch
203 | 35 git branch train
204 | 36 git branch
205 | 37 git checkout train
206 | 38 git branch
207 | 39 touch train{1..5}
208 | 40 git add train*
209 | 41 git commit -m "dev-2" train*
210 | 42 git push origin train
211 | 43 ll
212 | 44 git branch
213 | 45 git checkout -b dth
214 | 46 git branch
215 | 47 ll
216 | 48 touch dth{1..5}
217 | 49 git add dth*
218 | 50 git commit -m "dev-3" dth*
219 | 51 git push origin dth
220 | 52 git checkout -b recharge
221 | 53 touch recharge{1..5}
222 | 54 git add recharge*
223 | 55 git commit -m "dev-4" recharge*
224 | 56 git push origin recharge
225 | 57 ll
226 | 58 git status
227 | 59 touch java{1..5}
228 | 60 git status
229 | 61 vim .gitignore
230 | 62 git status
231 | 63 vim .gitignore
232 | 64 git status
233 | 65 ll
234 | 66 git add *
235 | 67 ll
236 | 68 git status
237 | 69 touch python{1..5}
238 | 70 git status
239 | 71 vim .gitignore
240 | 72 git status
241 | 73 ll
242 | 74 git branch
243 | 75 git branch -m master raham
244 | 76 git branch
245 | 77 git branch -m raham main
246 | 78 git branch
247 | 79 git branch -m main master
248 | 80 git branch
249 | 81 ll
250 | 82 git push origin movies
251 | 83 git branch
252 | 84 git branch -D recharge
253 | 85 git checkout movies
254 | 86 git branch -D recharge
255 | 87 git branch
256 | 88 git pull origin recharge
257 | 89 git branch
258 | 90 git checkout recharge
259 | 91 git branch
260 | 92 ll
261 | 93 git branch
262 | 94 git branch -D movies
263 | 95 git branch
264 | 96 git pull origin movies
265 | 97 ll
266 | 98 git branch
267 | 99 git checkout movies
268 | 100 git branch
269 | 101 touch raham
270 | 102 git status
271 | 103 git add raham
272 | 104 git status
273 | 105 git restore --staged raham
274 | 106 git status
275 | 107 git add raham
276 | 108 git status
277 | 109 git restore --staged raham
278 | 110 git status
279 | 111 rm -f raham
280 | 112 git restore raham
281 | 113 ll
282 | 114 git status
283 | 115 cd
284 | 116 mkdir abcd
285 | 117 cd abcd/
286 | 118 touch file1
287 | 119 git add file1
288 | 120 git init
289 | 121 git add file1
290 | 122 ll
291 | 123 rm -f file1
292 | 124 git restore file1
293 | 125 ll
294 | 126 touch file2
295 | 127 git status
296 | 128 rm -f file
297 | 129 rm -f file2
298 | 130 git restore file2
299 | 131 ll
300 | 132 touch file{1..100}
301 | 133 git status
302 | 134 git add *
303 | 135 git status
304 | 136 rm -f *
305 | 137 git restore *
306 | 138 ls
307 | 139 cd
308 | 140 cd paytm/
309 | 141 git push origin train
310 | 142 git push origin master movies
311 | 143 history
312 |
313 | =================================================
314 | MERGE:
315 | brings the changes from one branch into another branch
316 | git merge branch_name
317 |
318 | REBASE:
319 | also brings changes from one branch into another, but by replaying commits for a linear history
320 | git rebase branch_name
321 |
322 |
323 | MERGE VS REBASE:
324 | merge creates an extra merge commit, rebase does not
325 | merge preserves the branch structure in the history, rebase makes the history linear
326 | merge keeps the entire original history, rebase rewrites it.
327 |
328 |
329 | STASH: to hide the files which are not committed.
330 | note: files need to be tracked but not committed
331 |
332 |
333 | touch file2
334 | git stash
335 | git stash apply
336 | git stash list
337 | git stash clear
338 | git stash pop : applies and removes the last stash
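The stash commands above as one runnable round-trip (assuming git is installed; the /tmp path is illustrative):

```shell
set -e
rm -rf /tmp/git-stash && mkdir -p /tmp/git-stash && cd /tmp/git-stash
git init -q . && git config user.name raham && git config user.email raham@gmail.com
touch file1 && git add file1 && git commit -q -m "base"

touch file2 && git add file2   # tracked but NOT committed
git stash -q                   # file2 is hidden away
test ! -e file2                # gone from the working directory
git stash list                 # shows stash@{0}
git stash pop -q               # re-applies AND removes the stash entry
test -e file2                  # back again
git stash list                 # empty now
```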
339 |
340 |
341 |
342 | HISTORY:
343 |
344 | 1 cd p
345 | 2 ll
346 | 3 cd swiggy/
347 | 4 ll
348 | 5 git branch
349 | 6 git branch -D master
350 | 7 git branch -D dth
351 | 8 ll
352 | 9 git branch
353 | 10 git pull origin dth
354 | 11 cd
355 | 12 git clone https://github.com/RAHAMSHAIK007/1045paytm.git
356 | 13 ll
357 | 14 cd 1045paytm/
358 | 15 git branch
359 | 16 git checkout train
360 | 17 git branch
361 | 18 git checkout dth
362 | 19 git branch
363 | 20 git checkout recharge
364 | 21 ll
365 | 22 git branch
366 | 23 git branch -D dth
367 | 24 git branch
368 | 25 git pull origin dth
369 | 26 git checkout dth
370 | 27 git branch
371 | 28 cd
372 | 29 mkdir abcd
373 | 30 cd abcd/
374 | 31 git init
375 | 32 ll
376 | 33 touch java1
377 | 34 git add java1
378 | 35 git commit -m "java commits" java1
379 | 36 ll
380 | 37 touch java2
381 | 38 git status
382 | 39 git add java2
383 | 40 git status
384 | 41 ll
385 | 42 git stash
386 | 43 ll
387 | 44 git stash apply
388 | 45 ll
389 | 46 git stash
390 | 47 ll
391 | 48 git stash apply
392 | 49 ll
393 | 50 git stash list
394 | 51 git stash clear
395 | 52 git stash list
396 | 53 git stash -h
397 | 54 git stash --help
398 | 55 git stash list
399 | 56 git stash
400 | 57 git stash list
401 | 58 history
402 |
403 | ==================================================================
404 |
405 | MERGE CONFLICT:
406 | when we try to merge 2 branches where the same file has different content,
407 | then a conflict will arise.
408 | so we need to resolve the conflict manually
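A reproducible sketch of a conflict and its manual resolution (assuming git is installed; the branch and file names mirror the class history):

```shell
set -e
rm -rf /tmp/git-conflict && mkdir -p /tmp/git-conflict && cd /tmp/git-conflict
git init -q . && git config user.name raham && git config user.email raham@gmail.com
echo "from branch-1" > file1 && git add file1 && git commit -q -m "one"

git checkout -q -b branch-2
echo "from branch-2" > file1 && git commit -q -am "two"

git checkout -q -                   # back to the first branch
echo "edited on branch-1" > file1 && git commit -q -am "three"

git merge branch-2 || true          # same file, different content: CONFLICT
grep "<<<<<<<" file1                # git inserted conflict markers

echo "merged content" > file1       # resolve manually: keep what you want
git add file1
git commit -q -m "conflicts resolved"
```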
409 |
410 |
411 | CHERRY-PICK:
412 | we can merge only specific files from one branch to another
413 | with the help of commit_id
414 | git cherry-pick commit_id
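A runnable sketch (assuming git is installed): only the chosen commit's file crosses over to the other branch.

```shell
set -e
rm -rf /tmp/git-cherry && mkdir -p /tmp/git-cherry && cd /tmp/git-cherry
git init -q . && git config user.name raham && git config user.email raham@gmail.com
touch base && git add base && git commit -q -m "base"

git checkout -q -b branch-2
touch java1 && git add java1 && git commit -q -m "java commits"
touch python1 && git add python1 && git commit -q -m "python commits"
id=$(git log -1 --format=%H --grep="java commits")   # commit_id of the java commit

git checkout -q -            # back to the first branch
git cherry-pick "$id"        # merges only that one commit
ls                           # base and java1 (python1 did not come over)
```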
415 |
416 | SHOW:
417 | to show the commits for a file
418 | git show commit_id
419 |
420 | git log --patch
421 | git log --stat
422 |
423 |
424 | REVERT: to undo a merge/commit
425 | git revert commit_id (a branch name also resolves to its latest commit)
426 |
427 | HISTORY:
428 |
429 | 53 rm -rf *
430 | 54 mkdir paytm
431 | 55 cd paytm/
432 | 56 yum install git -y
433 | 57 git init
434 | 58 touch file1
435 | 59 vim file1
436 | 60 git add file1
437 | 61 git commit -m "one" file1
438 | 62 git branch
439 | 63 git branch -m master branch-1
440 | 64 git branch
441 | 65 cat file1
442 | 66 git checkout -b branch-2
443 | 67 vim file1
444 | 68 git add file1
445 | 69 git commit -m "two" file1
446 | 70 cat file1
447 | 71 git checkout branch-1
448 | 72 vim file1
449 | 73 git add file1
450 | 74 git commit -m "three" file1
451 | 75 cat file1
452 | 76 git merge branch-2
453 | 77 vim file1
454 | 78 git add file1
455 | 79 git commit -m "conflicts"
456 | 80 git status
457 | 81 git merge branch-2
458 | 82 git branch
459 | 83 touch java{1..5}
460 | 84 git add java*
461 | 85 git commit -m "java commits" java*
462 | 86 touch python{1..5}
463 | 87 git add python*
464 | 88 git commit -m "python commits" python*
465 | 89 touch php{1..5}
466 | 90 git add php*
467 | 91 git commit -m "php commits" php*
468 | 92 git log --oneline
469 | 93 git checkout branch-2
470 | 94 ll
471 | 95 git cherry-pick e300a84
472 | 96 ll
473 | 97 git cherry-pick df00587
474 | 98 ll
475 | 99 git logs
476 | 100 git log
477 | 101 git show 8b156541691925614eb6e083b6277e823fd340f0
478 | 102 git show python1
479 | 103 ll
480 | 104 git diff python1
481 | 105 vim python1
482 | 106 git add python1
483 | 107 git commit -m "new python" python1
484 | 108 git diff python1
485 | 109 vim python1
486 | 110 git add python1
487 | 111 git commit -m "new python file" python1
488 | 112 git diff python1
489 | 113 git show python1
490 | 114 vim python1
491 | 115 git add python1
492 | 116 git commit -m "nwe pyt commits" python1
493 | 117 git show python1
494 | 118 diff --git a/python1 b/python1
495 | 119 git diff python1
496 | 120 git log --stat
497 | 121 git log --patch
498 | 122 git merge branch-1
499 | 123 ll
500 | 124 git revert
501 | 125 git revert branch-1
502 | 126 ll
503 | 127 git clone https://github.com/torvalds/linux.git
504 |
505 | INTERVIEW QUESTIONS:
506 | what is git & why to use ?
507 | Explain stages of git ?
508 | Diff b/w CVCS & DVCS ?
509 | How to check commits in git ?
510 | Explain branches ?
511 | What is .gitignore ?
512 | Diff b/w git pull vs git push ?
513 | Diff b/w git pull vs git fetch ?
514 | Diff b/w git merge vs git rebase ?
515 | Diff b/w git merge vs git cherry-pick ?
516 | Diff b/w git clone vs git pull ?
517 | Diff b/w git clone vs git fork ?
518 | Diff b/w git revert vs git restore ?
519 | merge conflict & how to resolve ?
520 | what is git stash ?
521 |
522 |
523 |
524 |
--------------------------------------------------------------------------------
/3. JENKINS:
--------------------------------------------------------------------------------
1 | 06-09-2023
2 |
3 | JENKINS:
4 | it is a CI/CD tool.
5 |
6 | CI : continuous integration : continuous build + continuous test
7 |      (integrating old code with new code)
8 |      before CI, time was wasted and everything was manual work.
9 |      after CI, everything is automated.
10 |
11 | CD : continuous delivery : deployment to the prod env is done manually
12 | CD : continuous deployment : deployment to the prod env is automatic
13 |
14 | PIPELINE:
15 | STEP BY STEP EXECUTION OF A PARTICULAR PROCESS.
16 | SERIES OF EVENTS INTERLINKED WITH EACH OTHER.
17 |
18 | CODE -- > BUILD -- > TEST -- > DEPLOY
19 |
20 | ENV:
21 | DEV : DEVELOPERS
22 | QA : TESTERS
23 | UAT : CLIENT
24 |
25 | THE ABOVE ENVS ARE CALLED AS PRE-PROD OR NON-PROD
26 |
27 | PROD : USER
28 | PROD ENV IS ALSO CALLED AS LIVE ENV
29 |
30 |
31 | JENKINS:
32 | it is a free and open-source tool.
33 | it is platform-independent.
34 | it is built on Java (modern versions require Java 11).
35 | Kohsuke Kawaguchi created it at Sun Microsystems in 2004.
36 | initial name was Hudson -- > Oracle takeover -- > community forked it as the free Jenkins
37 | it has a large ecosystem of plugins.
38 | port number for jenkins is 8080.
39 |
40 | SETUP: CREATE EC2 WITH ALL TRAFFIC (8080)
41 |
42 | #STEP-1: INSTALLING GIT JAVA-1.8.0 MAVEN
43 | yum install git java-1.8.0-openjdk maven -y
44 |
45 | #STEP-2: GETTING THE REPO (jenkins.io --> download -- > redhat)
46 | sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
47 | sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
48 |
49 | #STEP-3: DOWNLOAD JAVA11 AND JENKINS
50 | amazon-linux-extras install java-openjdk11 -y
51 | yum install jenkins -y
52 | update-alternatives --config java
53 |
54 | #STEP-4: RESTARTING JENKINS (after install, the service is in a stopped state)
55 | systemctl start jenkins.service
56 | systemctl status jenkins.service
57 |
58 | copy public ip and paste on browser like this
59 | public-ip:8080
60 | cat /var/lib/jenkins/secrets/initialAdminPassword
61 | install plugins and create user for login.
62 |
63 | INTEGRATION OF GIT AND MAVEN WITH JENKINS:
64 | NEW ITEM -- > NAME: NETFLIX JOB -- > FREE STYLE -- > OK
65 | Source Code Management -- > GIT -- > https://github.com/devopsbyraham/jenkins-java-project.git
66 | Build Steps -- > add step -- > Execute shell -- > save
67 |
68 | ================================================================================================================================================
69 | 07-09-2023:
70 |
71 |
72 |
73 | PARAMETERS: used to pass input for the jobs
74 | choice: when we have multiple options we can select one
75 |
76 | create a ci job -- > Configure -- > This project is parameterized -- > choice -- > name: env choice: dev,test,uat, prod -- > save
77 |
78 | string: to pass a single-line text input
79 | multi line string: to pass input on multiple lines
80 | boolean: true or false
81 | file: to pass a file as input to the build
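The parameter types above can also be declared in pipeline code instead of the UI. A minimal declarative sketch (the parameter names are illustrative):

```groovy
pipeline {
    agent any
    parameters {
        choice(name: 'ENV', choices: ['dev', 'test', 'uat', 'prod'], description: 'target environment')
        string(name: 'VERSION', defaultValue: '1.0.0', description: 'version to build')
        text(name: 'NOTES', defaultValue: '', description: 'multi-line release notes')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'run mvn test?')
    }
    stages {
        stage('show') {
            steps {
                echo "building ${params.VERSION} for ${params.ENV}, tests: ${params.RUN_TESTS}"
            }
        }
    }
}
```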
82 |
83 |
84 | PORT CHANGING:
85 | vim /usr/lib/systemd/system/jenkins.service
86 | line 67: change the port (8080 -> 8090)
87 | systemctl daemon-reload
88 | systemctl restart jenkins.service
89 | systemctl status jenkins.service
90 |
91 |
92 | PASSWORDLESS LOGIN:
93 | vim /var/lib/jenkins/config.xml
94 | line 7: change <useSecurity>true</useSecurity> to false
95 | systemctl restart jenkins.service
96 | systemctl status jenkins.service
97 |
98 |
99 | CHANGING BUILD LIMITS:
100 | dashboard -- > manage jenkins -- > nodes -- > built-in node -- > configure -- > number of executors -- > 3 -- > save
101 |
102 |
103 | HOW TO RESOLVE THE ISSUE WHEN THE JENKINS SERVER IS CRASHED:
104 | when we stop and start the server, its public IP changes and the services inside the server also stop.
105 |
106 |
107 |
108 | RESTORING DELETE JOBS:
109 |
110 | Dashboard -- > Manage Jenkins -- > Plugins -- > available plugins -- > Job Configuration History -- > install -- > go back to top page
111 |
112 | now try to delete a job
113 |
114 | HISTORY:
115 | 31 vim jenkins.sh
116 | 32 sh jenkins.sh
117 | 33 cat /var/lib/jenkins/secrets/initialAdminPassword
118 | 34 cd /var/lib/jenkins/workspace/netflix/
119 | 35 ll
120 | 36 ls target/
121 | 37 vim /usr/lib/systemd/system/jenkins.service
122 | 38 system daemon-reload
123 | 39 systemctl daemon-reload
124 | 40 systemctl restart jenkins.service
125 | 41 systemctl status jenkins.service
126 | 42 vim /usr/lib/systemd/system/jenkins.service
127 | 43 systemctl daemon-reload
128 | 44 systemctl restart jenkins.service
129 | 45 systemctl status jenkins.service
130 | 46 vim /var/lib/jenkins/config.xml
131 | 47 systemctl restart jenkins.service
132 | 48 systemctl status jenkins.service
133 | 49 systemctl restart jenkins.service
134 | 50 systemctl status jenkins.service
135 | 51 vim /var/lib/jenkins/config.xml
136 | 52 systemctl restart jenkins.service
137 | 53 systemctl status jenkins.service
138 | 54 history
139 | =================================================================
140 |
141 | CRONJOB: used to schedule jobs on jenkins.
142 | cron syntax is used to work with cron jobs.
143 | cron syntax: * * * * *
144 | each star is separated by a space
145 | NOTE: use the server time (UTC)
146 |
147 | * : minutes
148 | * : hours
149 | * : date
150 | * : month
151 | * : day (0=sun 1=mon ----)
152 |
153 | 11:05 08-09-2023
154 |
155 | 5 11 8 9 5
157 |
158 | 4:45 pm 09-09-2023
159 |
160 | 45 16 9 9 6
161 |
162 | 38 5 8 9 5
163 |
164 | CREATE A CI JOB: Build Triggers -- > Build periodically -- > * * * * * -- > save
165 |
166 | Limitation:
167 | it does not check whether the code has changed.
168 |
169 | POLLSCM: it builds only when the code has changed.
170 | Note: need to set the polling schedule (cron syntax).
171 |
172 | CREATE A CI JOB: Build Triggers -- > Poll Scm -- > * * * * * -- > save
173 |
174 |
175 | limitation:
176 | it polls on a fixed schedule, so time is wasted b/w a commit and the next poll (e.g. 9 am -- > 7 am)
177 |
178 | Webhooks: it will trigger the build the moment we commit the code
179 |
180 | github -- > repo -- > settings -- > webhooks -- > add webhook -- >
181 | Payload URL: http://13.39.14.187:8080/github-webhook/ -- > Content type: application/json -- > Add webhook
182 |
183 | Jenkins Dashboard -- > create ci job -- > Build triggers -- > GitHub hook trigger for GITScm polling -- > save
184 |
185 |
186 | THROTTLE BUILD: to restrict number of builds per interval.
187 | create job -- > Throttle builds -- > Number of build: 3 -- > time period: hours -- > save
188 |
189 | make builds and test them
190 |
191 | =================================================================
192 |
193 | MASTER AND SLAVE:
194 | it is used to distribute the builds.
195 | it reduces the load on the jenkins server.
196 | communication b/w master and slave is over SSH.
197 | here we need to install the agent dependency (java-11) on the slave.
198 | the slave can run on any platform.
199 | label = a way of assigning work to a slave.
200 |
201 | SETUP:
202 | #STEP-1 : Create a server and install java-11
203 | amazon-linux-extras install java-openjdk11 -y
204 |
205 | #STEP-2: SETUP THE SLAVE SERVER
206 | Dashboard -- > Manage Jenkins -- > Nodes & Clouds -- > New node -- > nodename: abc -- > permanent agent -- > save
207 |
208 | CONFIGURATION OF SLAVE:
209 |
210 | Number of executors : 3 #Number of Parallel builds
211 | Remote root directory : /tmp #the place where our output is stored on the slave server.
212 | Labels : swiggy #used to route a job to a particular slave
213 | Usage : last option
214 | Launch method : last option
215 | Host : (your private ip)
216 | Credentials -- > add -- >jenkins -- > Kind : ssh username with privatekey -- > username: ec2-user
217 | privatekey : pemfile of server -- > save -- >
218 | Host Key Verification Strategy: last option
219 |
220 | DASHBOARD -- > JOB -- > CONFIGURE -- > RESTRICT WHERE THIS JOB RUNS -- > LABEL: SLAVE1 -- > SAVE
221 |
222 |
223 |
224 | LINKED JOBS:
225 | here jobs are linked with each other.
226 | types:
227 | 1. Upstream
228 | 2. Downstream
229 |
230 | create a job2
231 | job1 -- > configuration -- > post build actions -- > build other projects -- > name of job2 -- > save
232 |
233 | =========================================================================
234 | PIPELINE:
235 | step-by-step execution of a particular process
236 | series of events interlinked with each other
237 | pipelines are syntax-based
238 | we use the groovy language to write pipelines.
239 | it is a DSL (domain specific language)
240 | it gives a visualization of the process execution.
241 |
242 |
243 | CODE -- > BUILD -- > TEST -- > ARTIFACT -- > DEPLOY
244 |
245 | TYPES:
246 | 1. SCRIPTED
247 | 2. DECLARATIVE
248 |
249 | DECLARATIVE PIPELINE:
250 |
251 | SINGLE STAGE:
252 |
253 | pipeline {
254 | agent any
255 |
256 | stages{
257 | stage('abc') {
258 | steps {
259 | sh 'touch file2'
260 | }
261 | }
262 | }
263 | }
264 |
265 | MULTI STAGE:
266 |
267 | pipeline {
268 | agent any
269 |
270 | stages{
271 | stage('abc') {
272 | steps {
273 | sh 'touch file1'
274 | }
275 | }
276 |         stage('xyz') {
277 | steps {
278 | sh 'touch file2'
279 | }
280 | }
281 | }
282 | }
283 |
284 | NOTE: PASSS
285 | P : PIPELINE
286 | A : AGENT
287 | S : STAGES
288 | S : STAGE
289 | S : STEPS
290 |
291 |
292 | CI PIPELINE:
293 |
294 | pipeline {
295 | agent any
296 |
297 | stages{
298 | stage('checkout') {
299 | steps {
300 | git 'https://github.com/devopsbyraham/jenkins-java-project.git'
301 | }
302 | }
303 | stage('build') {
304 | steps {
305 | sh 'mvn compile'
306 | }
307 | }
308 | stage('test') {
309 | steps {
310 | sh 'mvn test'
311 | }
312 | }
313 | stage('artifact') {
314 | steps {
315 | sh 'mvn clean package'
316 | }
317 | }
318 | }
319 | }
320 |
321 |
322 | PIPELINE AS A CODE:
323 | executing multiple commands or multiple actions in a single stage.
324 |
325 | pipeline {
326 | agent any
327 |
328 | stages {
329 | stage('one') {
330 | steps {
331 | git 'https://github.com/devopsbyraham/jenkins-java-project.git'
332 | sh 'mvn compile'
333 | sh 'mvn test'
334 | sh 'mvn clean package'
335 | }
336 | }
337 | }
338 | }
339 |
340 | PIPELINE AS A CODE OVER MULTI STAGE:
341 | executing multiple commands or multiple actions across multiple stages.
342 |
343 |
344 | pipeline {
345 | agent any
346 |
347 | stages {
348 | stage('one') {
349 | steps {
350 | git 'https://github.com/devopsbyraham/jenkins-java-project.git'
351 | sh 'mvn compile'
352 | }
353 | }
354 | stage('two') {
355 | steps {
356 | sh 'mvn test'
357 | sh 'mvn clean package'
358 | }
359 | }
360 | }
361 | }
362 |
363 | PIPELINE AS A CODE OVER SINGLE-SHELL:
364 | executing multiple commands over a single shell.
365 |
366 |
367 | pipeline {
368 | agent any
369 |
370 | stages {
371 | stage('one') {
372 | steps {
373 | git 'https://github.com/devopsbyraham/jenkins-java-project.git'
374 | sh '''
375 | mvn compile
376 | mvn test
377 | mvn package
378 | mvn install
379 | mvn clean package
380 | '''
381 | }
382 | }
383 | }
384 | }
385 |
386 |
387 |
388 | SONARQUBE:
389 | it is a tool used to check code quality.
390 | it covers bugs, vulnerabilities, code duplications, and code smells.
391 | it supports 20+ programming languages.
392 | it has both free and paid versions.
393 |
394 | port: 9000
395 | req: t2 medium
396 | dependency: java-11
397 |
398 | SETUP:
399 |
400 | #! /bin/bash
401 | #Launch an instance with 9000 and t2.medium
402 | cd /opt/
403 | wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.9.6.50800.zip
404 | unzip sonarqube-8.9.6.50800.zip
405 | amazon-linux-extras install java-openjdk11 -y
406 | useradd sonar
407 | chown sonar:sonar sonarqube-8.9.6.50800 -R
408 | chmod 777 sonarqube-8.9.6.50800 -R
409 | su - sonar
410 |
411 | #run this on server manually
412 | #sh /opt/sonarqube-8.9.6.50800/bin/linux/sonar.sh start
413 | #echo "user=admin & password=admin"
414 |
415 | after login -- > add project -- > manual -- > project key -- > netflix -- > setup -- > token -- > netflix -- > generate -- > continue -- > maven
416 |
417 | copy paste the code on pipeline:
418 |
419 | pipeline {
420 | agent any
421 |
422 | stages{
423 | stage('checkout') {
424 | steps {
425 | git 'https://github.com/devopsbyraham/jenkins-java-project.git'
426 | }
427 | }
428 | stage('build') {
429 | steps {
430 | sh 'mvn compile'
431 | }
432 | }
433 | stage('test') {
434 | steps {
435 | sh 'mvn test'
436 | }
437 | }
438 | stage('code quality') {
439 | steps {
440 | sh '''
441 | mvn sonar:sonar \
442 | -Dsonar.projectKey=netflix \
443 | -Dsonar.host.url=http://13.38.32.159:9000 \
444 | -Dsonar.login=3f017506d83cd334fba02ecf9db706c429ab5d38
445 | '''
446 | }
447 | }
448 | stage('artifact') {
449 | steps {
450 | sh 'mvn clean package'
451 | }
452 | }
453 | }
454 | }
455 |
456 | TO EXECUTE ON A SLAVE (replace "agent any" with the label block below; the stages remain the same)
457 |
458 | pipeline {
459 | agent {
460 | label 'slave1'
461 | }
462 |
463 | ===================================================================================
464 | 12-09-2023:
465 |
466 | VARIABLES:
467 | a variable assigns a name to a value
468 |
469 | a=10
470 | name=raham
471 |
472 | 1. USER DEFINED VARIABLES:
473 | a. local vars: we can define them inside a job
474 | b. global vars: we can define them outside jobs
475 |
476 | 2. JENKINS ENV VARS:
477 | these vars are set by jenkins and change per build (ex: BUILD_NUMBER)
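
Jenkins-provided env vars can be read the same way as user-defined ones; a minimal sketch (BUILD_NUMBER and JOB_NAME are standard Jenkins environment variables):

pipeline {
    agent any

    stages {
        stage('one') {
            steps {
                echo "job: ${env.JOB_NAME}, build number: ${env.BUILD_NUMBER}"
            }
        }
    }
}

run it twice and the build number changes on its own.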
478 |
479 | HOW TO PASS VARIABLES ON PIPELINE:
480 |
481 |
482 | pipeline {
483 | agent any
484 |
485 | environment {
486 | name = "raham"
487 | loc = "hyderabad"
488 | }
489 | stages {
490 | stage('one') {
491 | steps {
492 | echo "hai all my name is $name and im from $loc"
493 | sh 'env'
494 | }
495 | }
496 | }
497 | }
498 |
499 | =====================================================================
500 |
501 | POST BUILD ACTIONS:
502 | actions that we perform after the build are called post build actions.
503 |
504 | always: if build failed or success it will run the post actions.
505 | success: it will execute post actions only when the build is successful.
506 | failure: it will execute post actions only when the build is failed.
507 |
508 | pipeline {
509 | agent any
510 |
511 | environment {
512 | name = "raham"
513 | loc = "hyderabad"
514 | }
515 | stages {
516 | stage('one') {
517 | steps {
518 | sh 'env'
519 | }
520 | }
521 | }
522 | post {
523 | always {
524 | echo "the build is done"
525 | }
526 | }
527 | }
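
the example above shows only "always"; success and failure follow the same pattern (a sketch):

post {
    success {
        echo "the build passed"
    }
    failure {
        echo "the build failed"
    }
}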
528 |
529 | ===============================================================================================
530 |
531 | RBAC: ROLE BASED ACCESS CONTROL
532 |
533 | in real time, to restrict access to the jenkins console we use rbac.
534 | with rbac we can give a limited set of permissions to a user.
535 |
536 | 1. user:
537 | dashboard -- > manage jenkins -- > users -- > create user (ramesh, suresh)
538 |
539 | 2. Plugin:
540 | Dashboard -- > Manage Jenkins -- >Plugins -- >available plugin -- >Role-based Authorization Strategy -- >install
541 |
542 | 3. configure:
543 | Dashboard -- > Manage Jenkins -- >security -- > security -- > Authorization -- > Role-based Authorization -- > save
544 |
545 | 4. manage and assign roles
546 |
547 | Dashboard -- > Manage Jenkins -- >Manage and Assign Roles -- > Manage Role
548 | role1: fresher -- > read
549 | role2: exp -- > admin
550 |
551 | ASSIGN ROLES
552 |
553 | --------------------------------------------------------------
554 |
555 | TOMCAT SETUP:
556 | 1. wget http://dlcdn.apache.org/tomcat/tomcat-9/v9.0.80/bin/apache-tomcat-9.0.80.tar.gz
557 | 2. tar -zxvf apache-tomcat-9.0.80.tar.gz
558 | 3. vim apache-tomcat-9.0.80/conf/tomcat-users.xml
559 |
560 |
561 |
562 |
563 |
564 | 4. vim apache-tomcat-9.0.80/webapps/manager/META-INF/context.xml (delete 21,22)
565 | 5. sh apache-tomcat-9.0.80/bin/startup.sh
566 |
567 | public-ip of slave:8080
568 | manager app -- > username and password
569 |
570 | Download plugin -- > Deploy to container : this is to deploy on tomcat
571 |
572 |
573 | Dashboard
574 | Manage Jenkins
575 | Credentials
576 | System
577 | Global credentials (unrestricted)
578 | add credentials -- > username and password -- > save
579 |
580 | http://13.39.49.204:8080/netflix/
581 |
582 | TROUBLESHOOTING TECHNIQUES:
583 | 1. SERVER LEVEL
584 | 2. JOB LEVEL
585 | 3. CODE LEVEL
586 |
587 | 1. SERVER LEVEL:
588 | A. CONSOLE OP
589 | B. MAVEN & JAVA
590 | C. DEPENDENCY MISTAKE
591 | D. CPU & MEMORY
592 |
593 | 2. JOB LEVEL:
594 | A. REPO AND BRANCH CONFIG
595 | B. SYNTAX
596 | C. PLUGINS
597 |
598 |
599 | vim /root/apache-tomcat-9.0.80/conf/server.xml
600 | sh /root/apache-tomcat-9.0.80/bin/startup.sh
601 | sh /root/apache-tomcat-9.0.80/bin/shutdown.sh
602 |
603 | pipeline {
604 | agent {
605 | label 'slave1'
606 | }
607 |
608 | stages {
609 | stage('checkout') {
610 | steps {
611 | git 'https://github.com/devopsbyraham/jenkins-java-project.git'
612 | }
613 | }
614 | stage('build') {
615 | steps {
616 | sh 'mvn compile'
617 | }
618 | }
619 | stage('test') {
620 | steps {
621 | sh 'mvn test'
622 | }
623 | }
624 | stage('code quality') {
625 | steps {
626 | sh '''
627 | mvn sonar:sonar \
628 | -Dsonar.projectKey=project-netflix \
629 | -Dsonar.host.url=http://52.47.193.26:9000 \
630 | -Dsonar.login=54f6a0567ef5a12acda5eecee7ce51e0feb16bb1
631 | '''
632 | }
633 | }
634 | stage('artifact') {
635 | steps {
636 | sh 'mvn clean package'
637 | }
638 | }
639 | stage('deploy') {
640 | steps {
641 | deploy adapters:[
642 | tomcat9(
643 | credentialsId: 'a3f384b0-0de7-4b62-9800-24043d1d135d',
644 | path: '',
645 | url: 'http://13.39.49.204:8080/'
646 | )
647 | ],
648 | contextPath: 'netflix',
649 | war: 'target/*.war'
650 | }
651 | }
652 | }
653 | }
654 |
--------------------------------------------------------------------------------
/5. DOCKER:
--------------------------------------------------------------------------------
1 | MONOLITHIC: SINGLE APP -- > SINGLE SERVER -- > SINGLE DATABASE
2 | MICRO SERVICES: SINGLE APP -- > MULTIPLE SERVERS -- > MULTIPLE DATABASES
3 |
4 | MICRO (LIMITATIONS):
5 | COST WILL BE HIGH
6 | MAINTENANCE OVERHEAD
7 |
8 |
9 | LIMITATIONS OF MONOLITHIC:
10 | 1. SERVER PERFORMANCE
11 |
12 |
13 | CONTAINERS:
14 | SERVER = CONTAINER
15 |
16 | Containers will not have an os by default.
17 | we cannot install packages in a bare container,
18 | and we cannot deploy the app.
19 |
20 | images: inside the image we have os and packages
21 |
22 | image (os) -- > container (app)
23 |
24 | VIRTUALIZATION: process of utilizing hardware resources in a better way.
25 | CONTAINERIZATION: process of packing application with its dependencies
26 | APP: PUBG DEPENDENCY: MAPS
27 |
28 | PUBG:
29 | APP: PLAYSTORE MAPS: INTERNET
30 |
31 | DOCKER:
32 | it's a free and open-source platform.
33 | docker creates containers.
34 | we can create, run, and deploy our apps in containers.
35 | it's platform independent (runs natively on Linux distributions).
36 | containers will use host resources (cpu, memory, os kernel)
37 | docker performs os-level VIRTUALIZATION, called containerization.
38 |
39 | year: 2013
40 | developed by: Solomon Hykes and Sebastien Pahl
41 | language: go lang
42 |
43 |
44 | ARCHITECTURE:
45 | DOCKER CLIENT: it's our way of interacting with docker (command -- > output)
46 | DOCKER DAEMON: it manages all the docker components (images, containers, volumes, networks)
47 | DOCKER HOST: the machine where docker is installed
48 | DOCKER REGISTRY: it stores docker images on the internet (ex: Docker Hub).
49 |
50 |
51 | INSTALLATION:
52 | yum install docker -y
53 | systemctl start docker
54 | systemctl status docker
55 |
56 |
57 | docker pull amazonlinux : to download an image
58 | docker run -it --name cont1 amazonlinux : to create a container
59 |
60 | yum install git -y
61 | yum install maven -y
62 | touch file1
63 |
64 | ctrl p q
65 |
66 | docker images : to list images
67 | docker start cont1 : to start cont1
68 | docker stop cont1 : to stop cont1
69 | docker kill cont1 : to stop cont1 immediately
70 | docker ps : to see running containers
71 | docker ps -a : to see all containers
72 |
73 | HISTORY:
74 | 1 yum install docker -y
75 | 2 docker version
76 | 3 systemctl start docker
77 | 4 systemctl status docker
78 | 5 docker version
79 | 6 docker images
80 | 7 docker search amazonlinux
81 | 8 docker pull amazonlinux
82 | 9 docker images
83 | 10 lsblk
84 | 11 cd
85 | 12 cd /
86 | 13 du -sh
87 | 14 docker run -it --name cont1 amazonlinux
88 | 15 docker ps
89 | 16 docker ps -a
90 | 17 docker stop cont1
91 | 18 docker ps
92 | 19 docker ps -a
93 | 20 docker start cont1
94 | 21 docker ps
95 | 22 docker kill cont1
96 | 23 history
97 | =============================================================================================
98 |
99 | OS LEVEL VIRTUALIZATION:
100 |
101 | NOTE: apt is the package manager for ubuntu
102 | Redhat: Yum
103 | Ubuntu: Apt
104 |
105 | without running apt update -y we cannot install packages
106 |
107 | WORKING:
108 | docker pull ubuntu
109 | docker run -it --name cont1 ubuntu
110 | apt update -y
111 | apt install git maven apache2 tree -y
112 | touch file{1..5}
113 |
114 | docker commit cont1 raham
115 | docker run -it --name cont2 raham
116 |
117 | check the versions now (git & maven carry over from the committed image)
118 |
119 |
120 |
121 | DOCKERFILE:
122 | it's a way of creating images automatically.
123 | we can reuse the Dockerfile multiple times.
124 | in "Dockerfile" the D is always capital.
125 | the components (instructions) inside the Dockerfile are also written in capitals.
126 |
127 | Dockerfile -- > Image -- > Container -- >
128 |
129 |
130 | COMPONENTS:
131 | FROM : to pick the base image (gives the os)
132 | RUN : to execute linux commands (at image build time)
133 | CMD : to execute linux commands (at container start time)
134 | ENTRYPOINT : like CMD, but with higher priority
135 | COPY : to copy local files into the image
136 | ADD : like COPY, but can also fetch files from the internet
137 | WORKDIR : to set the working directory
138 | LABEL : to attach our tags/labels (metadata)
139 | ENV : variables available inside the container (set at build, kept at runtime)
140 | ARG : variables available only at build time (not inside the running container)
141 | VOLUME : used to create a volume for the container
142 | EXPOSE : used to document the port number the app listens on
143 |
144 | EX: -1
145 |
146 | FROM ubuntu
147 | RUN apt update -y
148 | RUN apt install git maven tree apache2 -y
149 |
150 | Build : docker build -t netflix:v1 .
151 | cont: docker run -it --name cont3 netflix:v1
152 |
153 |
154 | ex-2:
155 | FROM ubuntu
156 | RUN apt update -y
157 | RUN apt install git maven tree apache2 -y
158 | CMD apt install default-jre -y
159 |
160 | Build : docker build -t netflix:v2 .
161 | cont: docker run -it --name cont4 netflix:v2
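
ENTRYPOINT is listed in the components but never shown; a minimal sketch of how it takes priority over CMD (this Dockerfile is an assumed example, not one of the class examples):

FROM ubuntu
ENTRYPOINT ["echo", "hello"]
CMD ["world"]

docker build -t demo:v1 .
docker run demo:v1 : prints "hello world" (CMD supplies the default argument)
docker run demo:v1 devops : prints "hello devops" (run arguments replace CMD)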
162 |
163 | EX-3:
164 |
165 | FROM ubuntu
166 | RUN apt update -y
167 | RUN apt install git maven tree apache2 -y
168 | COPY index.html /tmp
169 | ADD https://dlcdn.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz /tmp
170 |
171 | EX-4:
172 |
173 | FROM ubuntu
174 | RUN apt update -y
175 | RUN apt install git maven tree apache2 -y
176 | COPY index.html /tmp
177 | ADD https://dlcdn.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz /tmp
178 | WORKDIR /tmp
179 | LABEL author="rahamshaik"
180 |
181 |
182 | docker inspect cont7
183 | docker inspect cont7 | grep -i author
184 |
185 |
186 | EX-5:
187 |
188 | FROM ubuntu
189 | RUN apt update -y
190 | RUN apt install git maven tree apache2 -y
191 | COPY index.html /tmp
192 | ADD https://dlcdn.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz /tmp
193 | WORKDIR /tmp
194 | LABEL author="rahamshaik"
195 | ENV name vijay
196 | ENV client swiggy
197 |
198 | run these commands inside the container:
199 | echo $name
200 | echo $client
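
for contrast, ARG values exist only while the image is being built; a small sketch (an assumed example, not from the notes):

FROM ubuntu
ARG version=1.0
RUN echo "building version $version"

docker build -t demo:v2 . : uses the default value 1.0
docker build --build-arg version=2.0 -t demo:v2 . : overrides it
inside the running container, echo $version prints nothing (ARG is not persisted)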
201 |
202 |
203 | EX-6:
204 |
205 | FROM ubuntu
206 | RUN apt update -y
207 | RUN apt install git maven tree apache2 -y
208 | COPY index.html /tmp
209 | ADD https://dlcdn.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz /tmp
210 | WORKDIR /tmp
211 | LABEL author="rahamshaik"
212 | ENV name vijay
213 | ENV client swiggy
214 | VOLUME ["/volume1"]
215 | EXPOSE 8080
216 |
217 | COMMANDS:
218 | docker ps -a -q : to list container ids
219 | docker stop $(docker ps -a -q) : to stop all containers
220 | docker rm $(docker ps -a -q) : to delete all containers
221 |
222 | docker images -q : to print image ids
223 | docker rmi -f $(docker images -q) : to delete all images
224 |
225 | HISTORY: 1 yum install docker -y
226 | 2 service docker start
227 | 3 service docker status
228 | 4 docker pull ubuntu
229 | 5 docker images
230 | 6 docker run -it --name cont1 ubuntu
231 | 7 docker ps -a
232 | 8 docker attach cont1
233 | 9 docker images
234 | 10 docker ps -a
235 | 11 docker commit cont1 raham
236 | 12 docker images
237 | 13 docker run -it --name cont2 raham
238 | 14 vim Dockerfile
239 | 15 docker build -t netflix:v1 .
240 | 16 docker images
241 | 17 docker run -it --name cont3 netflix:v1
242 | 18 vim Dockerfile
243 | 19 docker build -t netflix:v2 .
244 | 20 docker run -it --name cont4 netflix:v2
245 | 21 docker ps -a
246 | 22 vim Dockerfile
247 | 23 docker build -t netflix:v3 .
248 | 24 ll
249 | 25 vim index.html
250 | 26 docker build -t netflix:v3 .
251 | 27 docker run -it --name cont5 netflix:v3
252 | 28 vim Dockerfile
253 | 29 docker build -t netflix:v3 .
254 | 30 docker run -it --name cont6 netflix:v1
255 | 31 docker run -it --name cont7 netflix:v3
256 | 32 docker inspect cont7
257 | 33 docker inspect cont7 | grep -i author
258 | 34 vim Dockerfile
259 | 35 docker build -t netflix:v3 .
260 | 36 docker run -it --name cont8 netflix:v3
261 | 37 vim Dockerfile
262 | 38 docker build -t netflix:v3 .
263 | 39 docker run -it --name cont9 netflix:v3
264 | 40 docker ps -a
265 | 41 vim Dockerfile
266 | 42 docker ps -a
267 | 43 docker ps -a -q
268 | 44 docker stop $(docker ps -a -q)
269 | 45 docker ps -a
270 | 46 docker rm $(docker ps -a -q)
271 | 47 docker ps -a
272 | 48 docker images
273 | 49 docker images -q
274 | 50 docker rmi -f $(docker images -q)
275 | 51 history
276 | ==============================================================================================================
277 |
278 | VOLUMES:
279 | in docker, we use volumes to store the data.
280 | a volume is nothing but a folder inside a container.
281 | we can share a volume from one container to another.
282 | the volume contains the files which hold the data.
283 | we can attach a single volume to multiple containers,
284 | and a container can also mount multiple volumes.
285 | volumes are decoupled (loosely attached) from containers:
286 | if we delete the container, the volume will not be deleted.
287 |
288 |
289 | METHOD-1:
290 |
291 | DOCKER FILE:
292 |
293 | FROM ubuntu
294 | VOLUME ["/volume1"]
295 |
296 | docker build -t netflix:v1 .
297 | docker run -it --name cont1 netflix:v1
298 | cd volume1
299 | touch file{1..10}
300 | ctrl p q
301 |
302 | docker run -it --name cont2 --volumes-from cont1 --privileged=true ubuntu
303 | cd volume1
304 | ll
305 |
306 |
307 | 2. CLI:
308 |
309 | docker run -it --name cont3 -v /volume2 ubuntu
310 | cd volume2
311 | touch java{1..10}
312 | ctrl p q
313 |
314 | docker run -it --name cont4 --volumes-from cont3 --privileged=true ubuntu
315 | cd volume2
316 | ll
317 | ctrl p q
318 |
319 | 3. VOLUME MOUNTING:
320 |
321 | volume commands:
322 | docker volume create volume3
323 | docker volume ls
324 | docker volume inspect volume3
325 |
326 | cd /var/lib/docker/volumes/volume3/_data
327 | touch python{1..10}
328 | ll
329 | docker run -it --name cont5 --mount source=volume3,destination=/volume3 ubuntu
330 |
331 |
332 | 4. MOVING FILES FROM LOCAL TO CONTAINER:
333 |
334 | create a container and attach a volume to it (ex: cont6 with volume4, as in method 3)
335 |
336 | touch raham{1..10}
337 | docker inspect cont6
338 | docker volume inspect volume4
339 | cp * /var/lib/docker/volumes/volume4/_data
340 |
341 |
342 | 5.
343 | touch raham{1..10}
344 | cp * /home/ec2-user
345 | docker run -it --name cont12 -v /home/ec2-user:/abcd ubuntu
346 |
347 | SYSTEM COMMANDS:
348 | docker system df : to show resource utilization of docker components
349 | docker system df -v : to show it per individual component
350 | docker system prune : to remove unused docker components
351 |
352 | JENKINS SETUP:
353 | docker run -it --name cont1 -p 8080:8080 jenkins/jenkins:lts
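
after the container starts, jenkins asks for an unlock password; it can be read out of the container (a sketch, using the cont1 name from above):

docker exec cont1 cat /var/jenkins_home/secrets/initialAdminPassword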
354 |
355 | HISTORY:
356 | 1 docker --version
357 | 2 vim Dockerfile
358 | 3 docker build -t netflix:v1 .
359 | 4 docker images
360 | 5 docker run -it --name cont1 netflix:v1
361 | 6 docker ps -a
362 | 7 docker run -it --name cont2 --volumes-from cont1 --privileged=true ubuntu
363 | 8 docker run -it --name cont3 -v /volume2 ubuntu
364 | 9 docker run -it --name --volumes-from cont3 --privileged=true cont4
365 | 10 docker run -it --name cont4 --volumes-from cont3 --privileged=true ubuntu
366 | 11 docker volume create volume3
367 | 12 docker volume ls
368 | 13 docker volume inspect volume3
369 | 14 cd /var/lib/docker/volumes/volume3/_data
370 | 15 touch python{1..10}
371 | 16 ll
372 | 17 docker run -it --name cont5 --mount source=volume3 destination=/volume3 ubuntu
373 | 18 docker run -it --name cont5 --mount source=volume3, destination=/volume3 ubuntu
374 | 19 docker run -it --name cont5 --mount source=volume3, dest=/volume3 ubuntu
375 | 20* docker run -it --name cont5 --mount source=volume3,destination=/volume3 ubuntu cd
376 | 21 cd
377 | 22 docker volume create volume4
378 | 23 docker volume ls
379 | 24 docker volume inspect volume4
380 | 25 cd /var/lib/docker/volumes/volume4/_data
381 | 26 touch php{1..10}
382 | 27 docker run -it --name cont6 --mount source=volume4,destination=/volume4 ubuntu
383 | 28 ll
384 | 29 cd
385 | 30 touch raham{1..10}
386 | 31 ll
387 | 32 docker inspect cont6
388 | 33 docker volume inspect volume4
389 | 34 ll
390 | 35 cp * /var/lib/docker/volumes/volume4/_data
391 | 36 docker attach cont6
392 | 37 docker volume inspect volume4
393 | 38 ll
394 | 39 docker run -it --name cont7 -v /root=/abc ubuntu
395 | 40 docker run -it --name cont8 -v /root=abc ubuntu
396 | 41 docker run -it --name cont7 -v /abc ubuntu
397 | 42 docker run -it --name cont8 -v /abc ubuntu
398 | 43 docker run -it --name cont9 -v /abc ubuntu
399 | 44 ll
400 | 45 cp * /home/ec2-user/
401 | 46 cd /home/ec2-user/
402 | 47 ll
403 | 48 docker run -it --name cont10 -v /home/ec2-user/=abcd ubuntu
404 | 49 docker run -it --name cont11 --volume /home/ec2-user/=abcd ubuntu
405 | 50 docker run -it --name cont12 -v /home/ec2-user:abcd ubuntu
406 | 51 docker run -it --name cont12 -v /home/ec2-user:/abcd ubuntu
407 | 52 cd
408 | 53 docker system
409 | 54 docker system df
410 | 55 docker system df -v
411 | 56 docker volume create volume5
412 | 57 docker volume create volume6
413 | 58 docker system df -v
414 | 59 docker pull centos
415 | 60 docker pull amazonlinux
416 | 61 docker system prune
417 | 62 docker kill $(docker ps -a -q)
418 | 63 docker create network raham
419 | 64 docker network create raham
420 | 65 docker network create raham2
421 | 66 docker network create raham1
422 | 67 docker system prune
423 | 68 docker system events
424 | 69 docker run -it --name cont1 -p 8080:8080 jenkins/jenkins:lts
425 | 70 history
427 | ====================================================================
LINK: https://www.w3schools.com/howto/tryit.asp?filename=tryhow_css_form_icon
428 |
429 | PROCESS:
430 | CODE -- > BUILD (DOCKER FILE) -- > IMAGE -- > CONTAINER -- > APP
431 | http://3.7.248.36:81/
432 |
433 | vim Dockerfile
434 |
435 | FROM ubuntu
436 | RUN apt update -y
437 | RUN apt install apache2 -y
438 | COPY index.html /var/www/html
439 | CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
440 |
441 | create index.html
442 |
443 | docker build -t movies:v1 .
444 | docker run -itd --name movies -p 81:80 movies:v1
445 | public-ip:81
446 |
447 | Note: we can't change port 80 inside the container (it's the default apache port)
448 |
449 | 2. CHANGE INDEX.HTML (MOVIES=TRAIN)
450 |
451 | docker build -t train:v1 .
452 | docker run -itd --name train -p 82:80 train:v1
453 | public-ip:82
454 |
455 | 3. CHANGE INDEX.HTML (TRAIN=DTH)
456 |
457 | docker build -t dth:v1 .
458 | docker run -itd --name dth -p 83:80 dth:v1
459 | public-ip:83
460 |
461 | 4. CHANGE INDEX.HTML (DTH=RECHARGE)
462 |
463 | docker build -t recharge:v1 .
464 | docker run -itd --name recharge -p 84:80 recharge:v1
465 | public-ip:84
466 |
467 |
468 |
469 | DOCKER COMPOSE:
470 | it's a tool used to launch multiple containers.
471 | we can create multiple containers on a single host.
472 | we write all of the services' information in one file,
473 | called the docker-compose/compose file.
474 | it is in yaml format:
475 | key-value pairs (dictionary) (.yml or .yaml)
476 | used to manage multiple services with the help of a single file.
477 |
478 |
479 | SETUP:
480 | sudo curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
481 | ls /usr/local/bin/
482 | sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
483 | sudo chmod +x /usr/local/bin/docker-compose
484 | docker-compose version
485 |
486 |
487 | vim docker-compose.yml
488 |
489 | version: '3.8'
490 | services:
491 |   movies:
492 |     image: movies:v1
493 |     ports:
494 |       - "81:80"
495 |   train:
496 |     image: train:v1
497 |     ports:
498 |       - "82:80"
499 |   dth:
500 |     image: dth:v1
501 |     ports:
502 |       - "83:80"
503 |   recharge:
504 |     image: recharge:v1
505 |     ports:
506 |       - "84:80"
507 |
508 | Note: remove all the containers first
509 |
510 | Commands:
511 | docker-compose up -d : to run all services
512 | docker-compose down : to remove all services
513 | docker-compose stop : to stop all services
514 | docker-compose kill : to kill all services
515 | docker-compose rm : to remove all services which are in stopped state
516 | docker-compose start : to start all services
517 | docker-compose pause : to pause all services
518 | docker-compose unpause : to unpause all services
519 | docker-compose images : to get all images managed by compose file
520 | docker-compose ps -a : to get all containers managed by compose file
521 | docker-compose logs : to get all logs managed by compose file
522 | docker-compose scale dth=10: to create 10 containers of dth
523 | docker-compose top : to get all processes of the containers managed by the compose file
524 |
525 | DEFAULT FILE CHANGE:
526 | mv docker-compose.yml raham.yml
527 | docker-compose stop : fails now, since no default file is found
528 | (Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml)
529 | docker-compose -f raham.yml stop : -f lets us pass a custom file name
530 |
531 | =================================
532 |
533 | DOCKER HUB:
534 | It's a central platform used to store docker images.
535 | we create images on the host, then push them to dockerhub.
536 | Once an image is available on dockerhub we can use it on any server.
537 | we use repos to store images.
538 | repo types: 1. public 2. private
539 |
540 | ACCOUNT:
541 | build a docker image with a Dockerfile.
542 | docker login : enter username and password
543 |
544 | docker tag image:v1 username/reponame
545 | docker push username/reponame
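
a concrete sketch of the same steps (the username "myuser" is a placeholder for your Docker Hub account):

docker tag movies:v1 myuser/movies:v1
docker push myuser/movies:v1
docker pull myuser/movies:v1 : works from any server after docker login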
546 |
547 |
548 |
549 | DOCKER SWARM:
550 |
551 | High availability: deploying the app on more than one server,
552 | which means using a cluster.
553 |
554 | docker swarm is a container orchestration tool.
555 | it is used to manage multiple containers on multiple nodes.
556 | each node will have a copy of the container.
557 | here we have manager and worker nodes.
558 | the manager node creates containers and sends them to worker nodes.
559 | worker nodes take the containers and manage them.
560 | the manager node communicates with worker nodes by using a token.
561 |
562 | SETUP:
563 | 1. create 3 servers (1=manager, 2=workers) (allow all traffic)
564 | yum install docker -y
565 | systemctl start docker
566 | systemctl status docker
567 |
568 | 2. set the hostnames:
569 | hostnamectl set-hostname manager/worker-1/worker-2
570 |
571 | 3. generate and copy token:
572 | manager node: docker swarm init
573 | run the printed "docker swarm join --token ..." command on all worker nodes
574 |
575 | SERVICES:
576 | it's a way of exposing the application in docker.
577 | with services we can create multiple copies (replicas) of the same container.
578 | with services we can distribute the containers to all nodes.
579 |
580 | docker service create --name movies --replicas=3 --publish 81:80 jayaprakashsairam05/movies:latest
581 |
582 | docker service ls : to list services
583 | docker service ps train : to list containers for train
584 | docker service inspect train : to get complete info of train service
585 | docker service scale train=10 : to scale the services
586 | docker service scale train=6 : to scale the services
587 | docker service rollback train : to go back to previous state
588 | docker service rm train : to delete the train services
589 |
590 | SELF HEALING: if a container goes down, swarm automatically recreates it.
591 |
592 | CLUSTER LEVEL ACTIVITIES:
593 | docker swarm leave (worker node) : to make the worker node leave the cluster
594 | docker node rm id (manager node) : to remove the node permanently
595 | docker swarm join-token manager : to regenerate the token
596 |
597 |
598 | Note: to join the node back to the cluster use the previous token
599 | we cannot delete a running worker node directly;
600 | we need to stop it and then delete it
601 |
602 |
603 | HISTORY:
604 | 1 docker imgages
605 | 2 docker images
606 | 3 docker swarm init
607 | 4 docker node ls
608 | 5 docker run -itd --name cont1 jayaprakashsairam05/movies:latest
609 | 6 docker ps -a
610 | 7 docker kill cont1
611 | 8 docker rm cont1
612 | 9 docker service create --name movies --replicas=3 --publish 81:80 jayaprakashsairam05/movies:latest
613 | 10 docker ps -a
614 | 11 docker service create --name train --replicas=6 --publish 82:80 jayaprakashsairam05/train:latest
615 | 12 docker ps -a
616 | 13 docker service ls
617 | 14 docker ps -a
618 | 15 docker service ls
619 | 16 docker service ps train
620 | 17 docker service inspect train
621 | 18 docker service ls
622 | 19 docker service scale train=10
623 | 20 docker service ls
624 | 21 docker service ps train
625 | 22 docker service scale train=6
626 | 23 docker service rollback train
627 | 24 docker service rm train
628 | 25 docker service ls
629 | 26 docker ps -a
630 | 27 docker kill 3d6965f5a234
631 | 28 docker ps -a
632 | 29 docker kill 159af68bfd56
633 | 30 docker ps -a
634 | 31 docker node ls
635 | 32 docker node rm c2ldr2jfatw9p8s4uqzrf329h
636 | 33 docker node ls
637 | 34 docker node rm uhru8isgfco57di11azvbk2c9
638 | 35 docker node ls
639 | 36 docker node rm 3tm927qkvrt8sdr7sz7gprs7t
640 | 37 docker node ls
641 | 38 docker node rm 3tm927qkvrt8sdr7sz7gprs7t
642 | 39 docker node ls
643 | 40 docker swarm joint-token manager
644 | 41 docker swarm join-token manager
645 | 42 history
647 |
648 | =============================================
649 | PORTAINER:
650 | it is a container management tool, designed to make tasks easier, whether containers are clustered or not.
651 | it is able to connect multiple clusters, access the containers, and migrate stacks between clusters.
652 | it is not only a testing tool; it is mainly used for production routines in large companies.
653 | Portainer consists of two elements, the Portainer Server and the Portainer Agent.
654 | Both elements run as lightweight Docker containers on a Docker engine.
655 |
656 | SETUP:
657 | Must have swarm mode enabled and all ports open on the docker engine
658 | curl -L https://downloads.portainer.io/ce2-16/portainer-agent-stack.yml -o portainer-agent-stack.yml
659 | docker stack deploy -c portainer-agent-stack.yml portainer
660 | docker ps
661 | public-ip of swarm master:9000
662 |
663 |
664 | DOCKER NETWORKING:
665 | Docker networks are used to enable communication between multiple containers running on the same or different docker hosts. We have different types of docker networks.
666 | Bridge Network
667 | Host Network
668 | None network
669 | Overlay network
670 |
671 | BRIDGE NETWORK: It is the default network; containers communicate with each other within the same host.
672 |
673 | HOST NETWORK: When you want the container IP and the ec2 instance IP to be the same, you use the host network.
674 |
675 | NONE NETWORK: When you don't want the container to be exposed to the world, we use the none network. It will not provide any network to our container.
676 |
677 | OVERLAY NETWORK: Used to communicate containers with each other across the multiple docker hosts.
678 |
679 |
680 | To create a network: docker network create network_name
681 | To see the list: docker network ls
682 | To delete a network: docker network rm network_name
683 | To inspect: docker network inspect network_name
684 | To connect a container to the network: docker network connect network_name container_id/name
685 | apt install iputils-ping -y : to install the ping utility (run inside the container)
686 | To disconnect from the container: docker network disconnect network_name container_name
687 | To prune: docker network prune
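
putting the commands above together (a sketch; the names mynet and web are placeholders):

docker network create mynet
docker run -itd --name web ubuntu
docker network connect mynet web
docker network inspect mynet : web now appears under "Containers"
docker network disconnect mynet web
docker network rm mynet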
688 |
689 |
690 | RESOURCE MANAGEMENT:
691 | containers are going to use host resources (cpu and memory).
692 | we need to limit that resource utilization.
693 | by default containers have no limits.
694 |
695 |
696 | docker run -itd --name cont3 --cpus="0.5" --memory="500mb" ubuntu
697 | docker inspect cont3
698 | docker stats
699 | docker top cont3
700 |
--------------------------------------------------------------------------------
/7. TERRAFORM:
--------------------------------------------------------------------------------
1 | INFRASTRUCTURE:
2 | resources used to run our application on the cloud.
3 | ex: ec2, s3, elb, vpc --------------
4 |
5 |
6 | in general we used to deploy infra manually
7 |
8 | Manual:
9 | 1. time consuming
10 | 2. manual work
11 | 3. committing mistakes
12 |
13 | Automate -- > Terraform -- > code -- > HCL (HashiCorp Configuration Language)
14 |
15 |
16 |
17 | it's a tool used to automate infrastructure.
18 | it's free and open source.
19 | it's platform independent.
20 | it came out in the year 2014.
21 | who: Mitchell Hashimoto
22 | owner: HashiCorp
23 | terraform is written in go language.
24 | We can call terraform an IAAC TOOL.
25 |
26 | HOW IT WORKS:
27 | terraform uses code to automate the infra.
28 | we use HCL : HashiCorp Configuration Language.
29 |
30 | IAAC: Infrastructure as a code.
31 |
32 | Code --- > execute --- > Infra
33 |
34 | ADVANTAGES:
35 | 1. Reusable
36 | 2. Time saving
37 | 3. Automation
38 | 4. Avoiding mistakes
39 | 5. Dry run
40 |
41 |
42 | CFT = AWS
43 | ARM = AZURE
44 | GDM = GOOGLE (Deployment Manager)
45 |
46 | TERRAFROM = ALL CLOUDS
47 |
48 | INSTALLING TERRAFORM:
49 |
50 | sudo yum install -y yum-utils shadow-utils
51 | sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
52 | sudo yum -y install terraform
53 | aws configure
54 |
55 |
56 | mkdir terraform
57 | cd terraform
58 |
59 | vim main.tf
60 |
61 | provider "aws" {
62 | region = "us-east-1"
63 | }
64 |
65 | resource "aws_instance" "one" {
66 | ami = "ami-03eb6185d756497f8"
67 | instance_type = "t2.micro"
68 | }
69 |
70 |
71 | TERRAFORM COMMANDS:
72 | terraform init : to initialize the provider plugins on the backend
73 | terraform plan : to create an execution plan
74 | terraform apply : to create resources
75 | terraform destroy : to delete resources
76 |
77 | provider "aws" {
78 | region = "ap-south-1"
79 | }
80 |
81 | resource "aws_instance" "one" {
82 | count = 5
83 | ami = "ami-0b41f7055516b991a"
84 | instance_type = "t2.micro"
85 | }
86 |
87 | terraform apply --auto-approve
88 | terraform destroy --auto-approve
89 |
90 |
91 | STATE FILE: used to store the information of resources created by terraform
92 | and to track the resource activities.
93 | in real time the entire resource info is in the state file.
94 | we need to keep it safe:
95 | if we lose this file we can't track the infra.
96 | Command:
97 | terraform state list
98 |
99 | terraform target: used to destroy a specific resource
100 | terraform state list
101 | single target: terraform destroy -target="aws_instance.one[3]"
102 | multi targets: terraform destroy -target="aws_instance.one[1]" -target="aws_instance.one[2]"
103 |
104 |
105 | TERRAFORM VARIABLES:
106 | provider "aws" {
107 | region = "ap-south-1"
108 | }
109 |
110 | resource "aws_instance" "one" {
111 | count = var.instance_count
112 | ami = "ami-0b41f7055516b991a"
113 | instance_type = var.instance_type
114 | }
115 |
116 | variable "instance_type" {
117 | description = "*"
118 | type = string
119 | default = "t2.micro"
120 | }
121 |
122 | variable "instance_count" {
123 | description = "*"
124 | type = number
125 | default = 5
126 | }
127 |
128 | terraform apply --auto-approve
129 | terraform destroy --auto-approve
130 |
131 | history:
132 | 1 aws configure
133 | 2 sudo yum install -y yum-utils
134 | 3 sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
135 | 4 sudo yum -y install terraform
136 | 5 terraform -v
137 | 6 mdkir terraform
138 | 7 mkdir terraform
139 | 8 cd terraform/
140 | 9 vim main.tf
141 | 10 terraform init
142 | 11 terraform plan
143 | 12 terraform apply
144 | 13 vim main.tf
145 | 14 terraform apply
146 | 15 ll
147 | 16 cat terraform.tfstate
148 | 17 terraform destroy
149 | 18 cat terraform.tfstate
150 | 19 vim main.tf
151 | 20 terraform plan
152 | 21 terraform apply
153 | 22 cat terraform.tfstate
154 | 23 terraform state list
155 | 24 terraform destroy -target="aws_instance.one[4]"
156 | 25 terraform state list
157 | 27 terraform destroy --auto-approve -target="aws_instance.one[3]" -target="aws_instance.one[2]" -target="aws_instance.one[0]"
158 | 28 terraform state list
159 | 29 terraform destroy --auto-approve
160 | 30 vim main.tf
161 | 31 terraform plan
162 | 32 terraform apply --auto-approve
163 | 33 vim main.tf
164 | 34 terraform apply --auto-approve
165 | 35 terraform state list
166 | 36 terraform destroy --auto-apporve
167 | 37 terraform destroy --auto-approve
168 | 38 history
169 |
170 | =========================================
171 | VARIABLES, OUTPUTS AND IMPORT
172 |
173 | TERRAFORM VAR FILES:
174 | these files are used to store terraform variables separately.
175 |
176 |
177 |
178 | cat main.tf
179 | provider "aws" {
180 | region = "us-east-1"
181 | }
182 |
183 | resource "aws_instance" "one" {
184 | count = var.instance_count
185 | ami = "ami-03eb6185d756497f8"
186 | instance_type = var.instance_type
187 | tags = {
188 | Name = "raham-server"
189 | }
190 | }
191 |
192 | cat variable.tf
193 | variable "instance_count" {
194 | description = "*"
195 | type = number
196 | default = 3
197 | }
198 |
199 | variable "instance_type" {
200 | description = "*"
201 | type = string
202 | default = "t2.micro"
203 | }
204 |
205 | terraform apply --auto-approve
206 | terraform destroy --auto-approve
207 |
208 |
209 | cat main.tf
210 | provider "aws" {
211 | region = "ap-south-1"
212 | }
213 |
214 | resource "aws_instance" "one" {
215 | count = var.instance_count
216 | ami = "ami-0b41f7055516b991a"
217 | instance_type = var.instance_type
218 | tags = {
219 | Name = "raham-server"
220 | }
221 | }
222 |
223 | cat variable.tf
224 | variable "instance_count" {
225 | }
226 |
227 | variable "instance_type" {
228 | }
229 |
230 | cat dev.tfvars
231 | instance_count = 1
232 |
233 | instance_type = "t2.micro"
234 |
235 | cat test.tfvars
236 | instance_count = 2
237 |
238 | instance_type = "t2.medium"
239 |
240 | terraform apply --auto-approve -var-file="dev.tfvars"
241 | terraform destroy --auto-approve -var-file="dev.tfvars"
242 |
243 |
244 | terraform apply --auto-approve -var-file="test.tfvars"
245 | terraform destroy --auto-approve -var-file="test.tfvars"
246 |
247 |
248 | TERRAFORM CLI:
249 | we can pass inputs to terraform from the cli.
250 |
251 | provider "aws" {
252 | region = "ap-south-1"
253 | }
254 |
255 | resource "aws_instance" "one" {
256 | ami = "ami-0b41f7055516b991a"
257 | instance_type = var.instance_type
258 | tags = {
259 | Name = "raham-server"
260 | }
261 | }
262 |
263 | variable "instance_type" {
264 | }
265 |
266 | terraform apply --auto-approve -var="instance_type=t2.medium"
267 | terraform destroy --auto-approve -var="instance_type=t2.medium"
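a variable can get its value from several places; in the examples above, a -var on the CLI wins over a .tfvars file, which wins over the default (this is a simplified view of terraform's precedence order). A small Python sketch of that "most specific source wins" lookup:

```python
def resolve_variable(name, default=None, tfvars=None, cli_vars=None):
    """Pick a variable's value the way the examples above behave:
    a -var on the CLI beats a .tfvars file, which beats the default.
    (Simplified: real terraform also checks env vars, auto.tfvars, etc.)"""
    for source in (cli_vars or {}, tfvars or {}):
        if name in source:
            return source[name]
    return default

# default only
assert resolve_variable("instance_type", default="t2.micro") == "t2.micro"
# dev.tfvars overrides the default
assert resolve_variable("instance_type", default="t2.micro",
                        tfvars={"instance_type": "t2.medium"}) == "t2.medium"
# -var on the CLI overrides everything
print(resolve_variable("instance_type", default="t2.micro",
                       tfvars={"instance_type": "t2.medium"},
                       cli_vars={"instance_type": "t2.large"}))  # -> t2.large
```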
268 |
269 | TERRAFORM OUTPUTS: used to print resource properties.
270 | ex: public-ip, dns, instance type
271 |
272 | provider "aws" {
273 | region = "ap-south-1"
274 | }
275 |
276 | resource "aws_instance" "one" {
277 | ami = "ami-0b41f7055516b991a"
278 | instance_type = "t2.medium"
279 | tags = {
280 | Name = "raham-server"
281 | }
282 | }
283 |
284 | output "raham" {
285 | value = [aws_instance.one.public_ip, aws_instance.one.private_ip, aws_instance.one.public_dns, aws_instance.one.private_dns]
286 | }
287 |
288 |
289 | TERRAFORM IMPORT: used to import and track resources which were created manually.
290 |
291 |
292 | cat main.tf
293 | provider "aws" {
294 | region = "ap-south-1"
295 | }
296 |
297 | resource "aws_instance" "one" {
298 | }
299 |
300 | terraform import aws_instance.one i-0f4c0d5d3bb6dc758
301 |
302 | 1 sudo yum install -y yum-utils shadow-utils
303 | 2 sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
304 | 3 sudo yum -y install terraform
305 | 4 aws configure
306 | 5 mkdir terraform
307 | 6 cd terraform/
308 | 7 vim main.tf
309 | 8 vim variable.tf
310 | 9 cat main.tf
311 | 10 cat variable.tf
312 | 11 terraform init
313 | 12 terraform plan
314 | 13 cd
315 | 14 ls -al
316 | 15 ll .aws/
317 | 16 vim .aws/credentials
318 | 17 cd terraform/
319 | 18 terraform plan
320 | 19 vim main.tf
321 | 20 terraform plan
322 | 21 vim main.tf
323 | 22 terraform plan
324 | 23 terraform apply --aut-approve
325 | 24 terraform apply --auto-approve
326 | 25 vim main.tf
327 | 26 terraform apply --auto-approve
328 | 27 terraform destroy --auto-approve
329 | 28 vim variable.tf
330 | 29 vim dev.tfvars
331 | 30 vim test.tfvars
332 | 31 cat main.tf
333 | 32 cat variable.tf
334 | 33 cat dev.tfvars
335 | 34 cat test.tfvars
336 | 35 terraform apply --auto-approve -var-file="dev.tfvars"
337 | 36 terraform apply --auto-approve -var-file="test.tfvars"
338 | 37 terraform destroy --auto-approve -var-file="test.tfvars"
339 | 38 vim dev.tfvars
340 | 39 terraform apply --auto-approve
341 | 40 terraform apply --auto-approve -var-file="dev.tfvars"
342 | 41 terraform destroy --auto-approve -var-file="dev.tfvars"
343 | 42 rm -rf dev.tfvars test.tfvars variable.tf
344 | 43 vim main.tf
345 | 44 ll
346 | 45 terraform apply --auto-approve
347 | 46 terraform destroy --auto-approve
348 | 47 terraform apply --auto-approve -var="instance_type=t2.medium"
349 | 48 terraform destroy --auto-approve -var="instance_type=t2.medium"
350 | 49 vim main.tf
351 | 50 terraform apply --auto-approve
352 | 51 vim main.tf
353 | 52 terraform apply
354 | 53 vim main.tf
355 | 54 terraform apply --auto-approve
356 | 55 terraform destroy --auto-approve
357 | 56 vim main.tf
358 | 57 cat terraform.tfstate
359 | 58 vim main.tf
360 | 59 cat main.tf
361 | 60 terraform import aws_instance.one i-0f4c0d5d3bb6dc758
362 | 61 cat terraform.tfstate
363 | 62 terraform import --help
364 | 63 terraform import -h
365 |
366 | ===================================================
367 |
368 |
369 | provider "aws" {
370 | region = "us-east-1"
371 | }
372 |
373 | resource "aws_s3_bucket" "one" {
374 | bucket = "rahamshaik9988tetrrbcuket"
375 | }
376 |
377 | resource "aws_ebs_volume" "two" {
378 | size = 20
379 | availability_zone = "us-east-1b"
380 | tags = {
381 | Name = "raham-ebs"
382 | }
383 | }
384 |
385 | resource "aws_iam_user" "three" {
386 | name = "rahams"
387 | }
388 |
389 | resource "aws_instance" "four" {
390 | ami = "ami-03eb6185d756497f8"
391 | instance_type = "t2.micro"
392 | tags = {
393 | Name = "Raham-terraserver"
394 | }
395 | }
396 |
397 | Terraform taint: used to recreate a specific resource.
398 | in real time, if a resource is not working properly we may need to recreate it;
399 | then we can use the taint concept.
400 |
401 | terraform taint aws_instance.four
402 | terraform apply --auto-approve
403 |
404 | Terraform Lifecycle:
405 |
406 | Prevent_destroy: used to protect our resources from being destroyed.
407 |
408 | provider "aws" {
409 | region = "us-east-1"
410 | }
411 |
412 | resource "aws_s3_bucket" "one" {
413 | bucket = "rahamshaik9988tetrrbcuket"
414 | }
415 |
416 | resource "aws_ebs_volume" "two" {
417 | size = 20
418 | availability_zone = "us-east-1b"
419 | tags = {
420 | Name = "raham-ebs"
421 | }
422 | }
423 |
424 | resource "aws_iam_user" "three" {
425 | name = "rahams"
426 | }
427 |
428 | resource "aws_instance" "four" {
429 | ami = "ami-03eb6185d756497f8"
430 | instance_type = "t2.micro"
431 | tags = {
432 | Name = "Raham-terraserver"
433 | }
434 | lifecycle {
435 | prevent_destroy = true
436 | }
437 | }
438 | TERRAFORM FMT: used to format terraform code with standard indentation.
439 |
440 | VERSION CONSTRAINT: used to change the provider version.
441 |
442 | provider "aws" {
443 | region = "us-east-1"
444 | }
445 |
446 | terraform {
447 | required_providers {
448 | aws = {
449 | source = "hashicorp/aws"
450 | version = ">5.19.0"
451 | }
452 | }
453 | }
454 |
455 | terraform init -upgrade
456 |
457 | Local resources:
458 |
459 | provider "aws" {
460 | region = "us-east-1"
461 | }
462 |
463 | resource "local_file" "one" {
464 | filename = "abc.txt"
465 | content = "Hai all this file is created by terraform"
466 | }
467 |
468 |
469 | provider "aws" {
470 | region = "us-east-1"
471 | }
472 |
473 | terraform {
474 | required_providers {
475 | local = {
476 | source = "hashicorp/local"
477 | version = "2.2.0"
478 | }
479 | }
480 | }
481 |
482 | HISTORY:
483 | 1 sudo yum install -y yum-utils shadow-utils
484 | 2 sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
485 | 3 sudo yum -y install terraform
486 | 4 aws configure
487 | 5 mkdir terraform
488 | 6 cd terraform/
489 | 7 vim main.tf
490 | 8 terraform init
491 | 9 terraform plan
492 | 10 vim main.tf
493 | 11 terraform plan
494 | 12 terraform apply --auto-approve
495 | 13 vim main.tf
496 | 14 terraform apply --auto-approve
497 | 15 cat main.tf
498 | 16 vim main.tf
499 | 17 terraform apply --auto-approve
500 | 18 terraform state list
501 | 19 cat main.tf
502 | 20 terraform taint aws_instance.four
503 | 21 terraform apply --auto-approve
504 | 22 terraform state list
505 | 23 terraform taint aws_instance.four aws_ebs_volume.two
506 | 24 terraform taint aws_instance.four
507 | 25 terraform taint aws_ebs_volume.two
508 | 26 terraform apply --auto-approve
509 | 27 vim main.tf
510 | 28 terraform destroy --auto-approve
511 | 29 vim main.tf
512 | 30 terraform destroy --auto-approve
513 | 31 terraform plan
514 | 32 terraform destroy --auto-approve
515 | 33 terraform state list
516 | 34 vim main.tf
517 | 35 terraform destroy --auto-approve
518 | 36 vim main.tf
519 | 37 terraform destroy --auto-approve
520 | 38 cat main.tf
521 | 39 vim main.tf
522 | 40 terraform destroy --auto-approve
523 | 41 terraform apply --auto-approve
524 | 42 terraform destroy --auto-approve
525 | 43 terraform init
526 | 44 vim main.tf
527 | 45 cat main.tf
528 | 46 terraform fmt
529 | 47 cat main.tf
530 | 48 vim main.tf
531 | 49 terraform init -upgrade
532 | 50 vim main.tf
533 | 51 terraform init -upgrade
534 | 52 vim main.tf
535 | 53 terraform init -upgrade
536 | 54 cat main.tf
537 | 55 vim main.tf
538 | 56 terraform apply --auto-approve
539 | 57 terraform init -upgrade
540 | 58 terraform apply --auto-approve
541 | 59 ll /root/
542 | 60 ll
543 | 61 cd
544 | 62 ll
545 | 63 cd terraform/
546 | 64 ll
547 | 65 cat abc.txt
548 | 66 cat main.tf
549 | 67 vim main.tf
550 | 68 terraform init -upgrade
551 | 69 vim main.tf
552 | 70 terraform init -upgrade
553 | 71 cat main.tf
554 | 72 history
555 |
556 |
557 |
558 | ======================================================================================
559 |
560 | TERRAFORM LOCALS:
561 | a block used to define values.
562 | we can define a value once and use it multiple times.
563 |
564 | provider "aws" {
565 | region = "ap-south-1"
566 | }
567 |
568 | locals {
569 | env = "test"
570 | }
571 |
572 | resource "aws_vpc" "one" {
573 | cidr_block = "10.0.0.0/16"
574 | tags = {
575 | Name = "${local.env}-vpc"
576 | }
577 | }
578 |
579 | resource "aws_subnet" "two" {
580 | cidr_block = "10.0.0.0/16"
581 | availability_zone = "ap-south-1a"
582 | vpc_id = aws_vpc.one.id
583 | tags = {
584 | Name = "${local.env}-subnet"
585 | }
586 | }
587 |
588 | resource "aws_instance" "three" {
589 | subnet_id = aws_subnet.two.id
590 | ami = "ami-06006e8b065b5bd46"
591 | instance_type = "t2.micro"
592 | tags = {
593 | Name = "${local.env}-server"
594 | }
595 | }
596 |
597 | TERRAFORM WORKSPACE:
598 | WORKSPACE: where we write the code and execute operations.
599 | it is used to isolate environments.
600 | in real time all the work we do is on workspaces only.
601 | if we perform an operation on one workspace it won't affect another workspace.
602 |
603 |
604 | NOTE:
605 | 1. we can't delete our current workspace.
606 | 2. we can't delete a workspace without deleting its resources.
607 | 3. we can't delete the default workspace.
608 |
609 | COMMANDS:
610 |
611 | terraform workspace list : to list workspaces
612 | terraform workspace new dev : to create a new workspace
613 | terraform workspace show : to show current workspace
614 | terraform workspace select prod : to switch the workspaces
615 | terraform workspace delete prod : to delete the workspaces
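workspaces isolate environments because each one keeps its own state; with the local backend, non-default workspaces store state under `terraform.tfstate.d/<name>/`. A tiny sketch of that layout:

```python
def state_path(workspace):
    """State file location per workspace (local backend's default layout)."""
    if workspace == "default":
        return "terraform.tfstate"
    return f"terraform.tfstate.d/{workspace}/terraform.tfstate"

# each workspace reads/writes a different file, so dev/test/prod don't collide
for ws in ("default", "dev", "test", "prod"):
    print(ws, "->", state_path(ws))
```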
616 |
617 |
618 | provider "aws" {
619 | region = "ap-south-1"
620 | }
621 |
622 | locals {
623 | env = "${terraform.workspace}"
624 | }
625 |
626 | resource "aws_vpc" "one" {
627 | cidr_block = "10.0.0.0/16"
628 | tags = {
629 | Name = "${local.env}-vpc"
630 | }
631 | }
632 |
633 | resource "aws_subnet" "two" {
634 | cidr_block = "10.0.0.0/16"
635 | availability_zone = "ap-south-1a"
636 | vpc_id = aws_vpc.one.id
637 | tags = {
638 | Name = "${local.env}-subnet"
639 | }
640 | }
641 |
642 | resource "aws_instance" "three" {
643 | subnet_id = aws_subnet.two.id
644 | ami = "ami-06006e8b065b5bd46"
645 | instance_type = "t2.micro"
646 | tags = {
647 | Name = "${local.env}-server"
648 | }
649 | }
650 |
651 |
652 | TERRAFORM GRAPH: used to show the flow chart of our infra
653 | terraform graph
654 | paste the output into Graphviz online to render it
655 |
656 |
657 | ALIAS AND PROVIDER: used to create resources in multiple regions from a single file.
658 |
659 | provider "aws" {
660 | region = "ap-south-1"
661 | }
662 |
663 | resource "aws_instance" "three" {
664 | ami = "ami-06006e8b065b5bd46"
665 | instance_type = "t2.large"
666 | tags = {
667 | Name = "mumbai-server"
668 | }
669 | }
670 |
671 | provider "aws" {
672 | region = "ap-northeast-1"
673 | alias = "tokyo"
674 | }
675 |
676 | resource "aws_instance" "four" {
677 | provider = aws.tokyo
678 | ami = "ami-0bcf3ca5a6483feba"
679 | instance_type = "t2.large"
680 | tags = {
681 | Name = "tokyo-server"
682 | }
683 | }
684 |
685 | HISTORY:
686 | 1 sudo yum install -y yum-utils shadow-utils
687 | 2 sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
688 | 3 sudo yum -y install terraform
689 | 4 aws configure
690 | 5 vim .aws/config
691 | 6 mkdir terraform
692 | 7 cd
693 | 8 cd terraform/
694 | 9 vim main.tf
695 | 10 terraform init
696 | 11 vim main.tf
697 | 12 terraform init
698 | 13 terraform validate
699 | 14 vim main.tf
700 | 15 terraform validate
701 | 16 vim main.tf
702 | 17 terraform validate
703 | 18 terraform plan
704 | 19 terraform apply --auto-approve
705 | 20 terraform state list
706 | 21 vim main.tf
707 | 22 terraform apply --auto-approve
708 | 23 terraform destroy --auto-approve
709 | 24 vim main.tf
710 | 25 terraform workspace list
711 | 26 terraform workspace new dev
712 | 27 terraform workspace list
713 | 28 terraform workspace show
714 | 29 vim main.tf
715 | 30 terraform apply --auto-approve
716 | 31 terraform state list
717 | 32 terraform workspace new test
718 | 33 terraform state list
719 | 34 vim m
720 | 35 vim main.tf
721 | 36 terraform apply --auto-approve
722 | 37 terraform workspace new prod
723 | 38 vim main.tf
724 | 39 terraform apply --auto-approve
725 | 40 vim main.tf
726 | 41 terraform apply --auto-approve
727 | 42 vim main.tf
728 | 43 terraform apply --auto-approve
729 | 44 terraform workspace show
730 | 45 terraform workspace delete prod
731 | 46 terraform workspace delete test
732 | 47 terraform destroy --auto-approve
733 | 48 terraform workspace select test
734 | 49 terraform workspace delete prod
735 | 50 terraform workspace list
736 | 51 terraform destroy --auto-approve
737 | 52 terraform workspace select dev
738 | 53 terraform workspace delete test
739 | 54 terraform state list
740 | 55 terraform graph
741 | 56 terraform destroy --auto-approve
742 | 57 vim main.tf
743 | 58 terraform apply --auto-approve
744 | 59 vim main.tf
745 | 60 terraform apply --auto-approve
746 | 61 cat main.tf
747 | 62 history
748 | =====================================================
749 |
750 | provider "aws" {
751 | region = "ap-south-1"
752 | }
753 |
754 | resource "aws_instance" "one" {
755 | ami = "ami-06006e8b065b5bd46"
756 | instance_type = "t2.micro"
757 | tags = {
758 | Name = "raham"
759 | }
760 | lifecycle {
761 | prevent_destroy = true
762 | }
763 | }
764 |
765 |
766 | provider "aws" {
767 | region = "ap-south-1"
768 | }
769 |
770 | resource "aws_instance" "one" {
771 | ami = "ami-06006e8b065b5bd46"
772 | instance_type = "t2.micro"
773 | tags = {
774 | Name = "raham"
775 | }
776 | lifecycle {
777 | prevent_destroy = false
778 | }
779 | }
780 |
781 | Prevent destroy: terraform will not delete the resource.
782 | Ignore changes: terraform will not replicate the changes we made on the server.
783 | Depends on: the resource waits for another resource to be created first.
784 |
785 | TERRAFORM REMOTE BACKEND SETUP:
786 | when we create infra, the resource information is stored in the state file,
787 | which tracks the infra information.
788 | so we need to take a backup of that file:
789 | if we lose it we can't track the infra.
790 | so we prefer to locate the state file in a remote location.
791 | here I'm using s3 as the remote backend.
792 |
793 |
794 | provider "aws" {
795 | region = "ap-south-1"
796 | }
797 |
798 | terraform {
799 | backend "s3" {
800 | bucket = "rahamshaikterraprodbcuket0088"
801 | key = "prod/terraform.tfstate"
802 | region = "ap-south-1"
803 | }
804 | }
805 |
806 | resource "aws_instance" "one" {
807 | ami = "ami-06006e8b065b5bd46"
808 | instance_type = "t2.micro"
809 | tags = {
810 | Name = "raham"
811 | }
812 | }
813 |
814 | create Bucket manually
815 |
816 | Note: whenever we use a new block we always need to run terraform init again,
817 | otherwise the plugins will not be downloaded.
818 |
819 | after adding or removing the backend setup, run this command to move the state:
820 | terraform init -migrate-state
821 |
822 |
823 | TERRAFORM REFRESH: used to refresh and replicate changes into the state file.
824 | it compares the terraform state file with the real resource:
825 | if the resource exists it will show, otherwise it will not.
826 |
827 | LOCAL RESOURCES:
828 |
829 |
830 | provider "aws" {
831 | region = "ap-south-1"
832 | }
833 |
834 | resource "local_file" "one" {
835 | filename = "abc.txt"
836 | content = "hai all this file is created from terraform"
837 | }
838 |
839 | VERSION CONSTRAINTS:
840 | used to change the provider version plugins.
841 | applicable to all providers.
842 |
843 |
844 | provider "aws" {
845 | region = "ap-south-1"
846 | }
847 |
848 | terraform {
849 | required_providers {
850 | aws = {
851 | source = "hashicorp/aws"
852 | version = "<5.22.0"
853 | }
854 | }
855 | }
856 |
857 |
858 | provider "aws" {
859 | region = "ap-south-1"
860 | }
861 |
862 | terraform {
863 | required_providers {
864 | local = {
865 | source = "hashicorp/local"
866 | version = "<2.2.0"
867 | }
868 | }
869 | }
870 |
871 |
872 |
873 | DYNAMIC BLOCK: used to reduce the length of code and reuse code in a loop.
874 |
875 | provider "aws" {
876 | }
877 |
878 | locals {
879 | ingress_rules = [{
880 | port = 443
881 | description = "Ingress rules for port 443"
882 | },
883 | {
884 | port = 80
885 | description = "Ingress rules for port 80"
886 | },
887 | {
888 | port = 8080
889 | description = "Ingress rules for port 8080"
890 |
891 | }]
892 | }
893 |
894 | resource "aws_instance" "ec2_example" {
895 | ami = "ami-0c02fb55956c7d316"
896 | instance_type = "t2.micro"
897 | vpc_security_group_ids = [aws_security_group.main.id]
898 | tags = {
899 | Name = "Terraform EC2"
900 | }
901 | }
902 |
903 | resource "aws_security_group" "main" {
904 |
905 | egress = [
906 | {
907 | cidr_blocks = ["0.0.0.0/0"]
908 | description = "*"
909 | from_port = 0
910 | ipv6_cidr_blocks = []
911 | prefix_list_ids = []
912 | protocol = "-1"
913 | security_groups = []
914 | self = false
915 | to_port = 0
916 | }]
917 |
918 | dynamic "ingress" {
919 | for_each = local.ingress_rules
920 |
921 | content {
922 | description = "*"
923 | from_port = ingress.value.port
924 | to_port = ingress.value.port
925 | protocol = "tcp"
926 | cidr_blocks = ["0.0.0.0/0"]
927 | }
928 | }
929 |
930 | tags = {
931 | Name = "terra sg"
932 | }
933 | }
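the dynamic "ingress" block above is just a loop: for every entry in local.ingress_rules it stamps out one full ingress rule. The expansion can be sketched in Python (rule fields trimmed to the ones the block actually sets):

```python
# Same data as local.ingress_rules above
ingress_rules = [
    {"port": 443, "description": "Ingress rules for port 443"},
    {"port": 80, "description": "Ingress rules for port 80"},
    {"port": 8080, "description": "Ingress rules for port 8080"},
]

def expand_dynamic_ingress(rules):
    """Mirror what the dynamic block generates: one complete rule per entry,
    with from_port/to_port both taken from ingress.value.port."""
    return [{
        "from_port": r["port"],
        "to_port": r["port"],
        "protocol": "tcp",
        "cidr_blocks": ["0.0.0.0/0"],
    } for r in rules]

for rule in expand_dynamic_ingress(ingress_rules):
    print(rule)
```

adding a fourth port means adding one small dict to the list, not copy-pasting a whole ingress block.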
934 |
935 |
936 |
937 | provider "aws" {
938 | region = "ap-south-1"
939 | }
940 |
941 | resource "aws_instance" "one" {
942 | count = length(var.instance_type)
943 | ami = "ami-06006e8b065b5bd46"
944 | instance_type = var.instance_type[count.index]
945 | tags = {
946 | Name = var.instance_name[count.index]
947 | }
948 | }
949 |
950 | variable "instance_type" {
951 | default = ["t2.medium", "t2.micro", "t2.nano"]
952 | }
953 |
954 | variable "instance_name" {
955 | default = ["webserver", "appserver", "dbserver"]
956 | }
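count = length(var.instance_type) creates one instance per list entry, and count.index pairs position i of one list with position i of the other. In Python terms:

```python
# Same lists as the two variables above
instance_type = ["t2.medium", "t2.micro", "t2.nano"]
instance_name = ["webserver", "appserver", "dbserver"]

# count.index walks 0..length-1, indexing both lists in lockstep,
# so webserver gets t2.medium, appserver t2.micro, dbserver t2.nano.
instances = [
    {"Name": instance_name[i], "instance_type": instance_type[i]}
    for i in range(len(instance_type))
]
for inst in instances:
    print(inst)
```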
957 |
958 |
959 | FOR_EACH:
960 |
961 | provider "aws" {
962 | }
963 |
964 | resource "aws_instance" "two" {
965 | for_each = toset(["web-server", "app-server", "db-server"])
966 | ami = "ami-04beabd6a4fb6ab6f"
967 | instance_type = "t2.micro"
968 | tags = {
969 | Name = "${each.key}"
970 | }
971 | }
972 |
973 |
974 |
975 |
976 |
--------------------------------------------------------------------------------
/4. ANSIBLE:
--------------------------------------------------------------------------------
1 | DEPLOYMENT AUTOMATED.
2 |
3 | SERVER
4 | SOFTWARE INSTALLATION
5 | TOMCAT SETUP
6 | DEPLOYMENT
7 |
8 | ANSIBLE:
9 | it's a free and open-source tool.
10 | it's an IT engine that automates everything from server creation to deployment.
11 | It is also called a configuration management tool.
12 |
13 | configuration: cpu, memory and os
14 | management: update, delete, install ----
15 |
16 | ansible was invented by Michael DeHaan in 2012.
17 | ansible was acquired by Red Hat.
18 | We have both free and paid versions of ansible.
19 | it is platform independent.
20 | ansible works with YAML language.
21 | ansible dependency is Python.
22 | GUI for ansible is Ansible-Tower.
23 |
24 | HOW IT WORKS ?
25 | ARCHITECTURE:
26 |
27 | ANSIBLE SERVER: used to communicate with worker nodes for pkg installs and deployments
28 | WORKER NODES: take commands from the ansible server and work according to them
29 | PLAYBOOK: contains the code which is used to perform actions.
30 | INVENTORY: contains info about worker nodes and groups.
31 |
32 |
33 |
34 | SETUP:
35 | CREATE 5 SERVERS (1=ANSIBLE, 2=DEV, 2=TEST)
36 | ALL SERVERS:
37 | sudo -i
38 | hostnamectl set-hostname ansible/dev-1/dev-2/test-1/test-2
39 | sudo -i
40 |
41 |
42 | passwd root
43 | vim /etc/ssh/sshd_config (38 uncomment, 63 no=yes)
44 | systemctl restart sshd
45 | systemctl status sshd
46 |
47 |
48 | ANSIBLE SERVER:
49 | amazon-linux-extras install ansible2 -y
50 | yum install python python-pip python-devel -y
51 | vim /etc/ansible/hosts (12)
52 |
53 | [dev]
54 | 172.31.44.19
55 | 172.31.35.18
56 | [test]
57 | 172.31.36.39
58 | 172.31.43.36
59 |
60 | ssh-keygen #used to generate the keys
61 | enter 4 times
62 | ssh-copy-id root@private-ip -- > yes -- > password
63 | ssh private-ip
64 | ctrl d
65 |
66 | NOTE: COPY THE KEYS TO ALL WORKER NODES BY USING ABOVE METHOD
67 |
68 | =====================================================================
69 |
70 | 1. ADHOC COMMANDS:
71 | these are simple linux commands.
72 | used for temporary work.
73 | these commands will be overridden.
74 |
75 | ansible all -a "yum install git -y"
76 | ansible all -a "yum install maven -y"
77 | ansible all -a "mvn --version"
78 | ansible all -a "touch file1"
79 | ansible all -a "ls"
80 | ansible all -a "yum install httpd -y"
81 | ansible all -a "systemctl restart httpd"
82 | ansible all -a "systemctl status httpd"
83 | ansible all -a "useradd raham"
84 |
85 | INVENTORY HOST PATTERN:
86 | ansible all --list-hosts
87 | ansible dev --list-hosts
88 | ansible test --list-hosts
89 | ansible test[0] --list-hosts
90 | ansible all[-1] --list-hosts
91 | ansible all[1:3] --list-hosts
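the subscripts behave like list indexing on the inventory, with one catch: an Ansible range like all[1:3] includes the end index, unlike a Python slice. Using the four hosts from the inventory above:

```python
# The four worker nodes from /etc/ansible/hosts above (dev group first, then test)
hosts = ["172.31.44.19", "172.31.35.18", "172.31.36.39", "172.31.43.36"]

print(hosts[0])   # group[0]-style pattern: first host
print(hosts[-1])  # all[-1]: last host
# all[1:3]: Ansible ranges are INCLUSIVE of the end index,
# so this matches Python's hosts[1:4] -- three hosts, not two.
print(hosts[1:3 + 1])
```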
92 |
93 | HISTORY:
94 | 1 2023-09-14 06:22:06 ansible all -a "yum install git -y"
95 | 2 2023-09-14 06:24:25 ansible dev -a "yum install maven -y"
96 | 3 2023-09-14 06:25:19 ansible test -a "yum install maven -y"
97 | 4 2023-09-14 06:26:20 ansible all -a "touch file1"
98 | 5 2023-09-14 06:26:42 ansible all -a "ls"
99 | 6 2023-09-14 06:27:03 ansible all -a "yum install httpd -y"
100 | 7 2023-09-14 06:27:33 ansible all -a "systemctl start httpd"
101 | 8 2023-09-14 06:27:50 ansible all -a "systemctl status httpd"
102 | 9 2023-09-14 06:29:29 ansible all -a "useradd raham"
103 | 10 2023-09-14 06:30:03 ansible all -a "cat /etc/passwd"
104 | 11 2023-09-14 06:30:53 ansible all --list-hosts
105 | 12 2023-09-14 06:31:01 ansible dev --list-hosts
106 | 13 2023-09-14 06:31:08 ansible test --list-hosts
107 | 14 2023-09-14 06:31:14 ansible test[0] --list-hosts
108 | 15 2023-09-14 06:31:37 ansible all[-1] --list-hosts
109 | 16 2023-09-14 06:31:43 ansible all[-2] --list-hosts
110 | 17 2023-09-14 06:32:31 ansible all --list-hosts
111 | 18 2023-09-14 06:33:00 ansible all[1:3] --list-hosts
112 | 19 2023-09-14 06:36:48 export HISTTIMEFORMAT="%F %T "
113 | 20 2023-09-14 06:37:05 history
114 |
115 | =============================================================================================
116 |
117 | ANSIBLE MODULES:
118 | modules work on key-value pairs.
119 | modules are used to do the work we want.
120 | the module changes depending on the work.
121 |
122 | NAME: RAHAM
123 | COURSE: DEVOPS
124 | FEE: 8k
125 |
126 | color pattern:
127 |
128 | YELLOW : SUCCESSFUL
129 | RED : FAILED
130 | GREEN : ALREADY DONE
131 | BLUE : SKIPPED
132 |
133 | ansible all -a "yum install git -y"
134 | ansible all -m yum -a "name=git state=present"
135 | ansible all -m yum -a "name=docker state=present" (present=install)
136 | ansible all -m yum -a "name=docker state=absent" (absent=uninstall)
137 | ansible all -m yum -a "name=docker state=latest" (latest=update)
138 | ansible all -m service -a "name=httpd state=started"
139 | ansible all -m service -a "name=httpd state=stopped"
140 | ansible all -m user -a "name=rajinikanth state=present"
141 | ansible all -m user -a "name=rajinikanth state=absent"
142 | ansible all -m copy -a "src=app.yml dest=/root"
143 | ansible all -a "ls /root"
144 |
145 | PLAYBOOKS:
146 | playbook is a collection of modules.
147 | we can execute multiple modules at same time.
148 | we can reuse the playbook.
149 | playbooks are written in the YAML language.
150 | YAML: YAML Ain't Markup Language
151 | it's a human-readable data serialization language.
152 | yaml is syntax-based (indentation matters).
153 | yaml starts with --- and ends with ...
154 | we write it in list format.
155 | extension is .yml or .yaml
156 |
157 | PLAYBOOK1:
158 |
159 | vim raham.yml
160 |
161 |
162 | ---
163 | - hosts: all
164 | tasks:
165 | - name: install httpd
166 | yum: name=httpd state=present
167 |
168 | - name: start httpd
169 | service: name=httpd state=started
170 |
171 | - name: create user
172 | user: name=revi state=present
173 | ...
174 |
175 |
176 | ansible-playbook raham.yml
177 |
178 |
179 | - hosts: all
180 | tasks:
181 | - name: remove httpd
182 | yum: name=httpd state=absent
183 |
184 | - name: stop httpd
185 | service: name=httpd state=stopped
186 |
187 | - name: remove user
188 | user: name=revi state=absent
189 |
190 |
191 | TAGS:
192 | used to execute or skip a specific task
193 |
194 | - hosts: all
195 | tasks:
196 | - name: install docker
197 | yum: name=docker state=present
198 | tags: a
199 |
200 | - name: start docker
201 | service: name=docker state=started
202 | tags: b
203 |
204 | - name: install maven
205 | yum: name=maven state=present
206 | tags: c
207 |
208 | - name: create user
209 | user: name=revi state=present
210 | tags: d
211 |
212 | SINGLE TAG: ansible-playbook raham.yml --tags c
213 | MULTI TAGS: ansible-playbook raham.yml --tags a,b
214 | SKIP TAGS: ansible-playbook raham.yml --skip-tags "c"
215 | MULTI SKIP TAGS: ansible-playbook raham.yml --skip-tags "b,c"
216 |
217 |
218 |
219 | HISTORY:
220 | 43 ansible all -a "yum install git -y"
221 | 44 ansible all -a "yum install tree -y"
222 | 45 ansible all -a "yum remove git* tree* -y"
223 | 46 ansible all -m yum -a "name=git state=present"
224 | 47 ansible all -m yum -a "name=docker state=present"
225 | 48 ansible all -m yum -a "name=httpd state=present"
226 | 49 ansible all -m yum -a "name=httpd state=absent"
227 | 50 ansible all -m yum -a "name=httpd state=present"
228 | 51 ansible all -m yum -a "name=httpd state=latest"
229 | 52 ansible all -m yum -a "name=httpd state=started"
230 | 53 ansible all -m service -a "name=httpd state=started"
231 | 54 ansible all -a "systemctl status httpd"
232 | 55 ansible all -m service -a "name=httpd state=stopped"
233 | 56 ansible all -a "systemctl status httpd"
234 | 57 ansible all -m user -a "name=rajinikanth state=present"
235 | 58 ansible all -m user -a "name=rajinikanth state=absent"
236 | 59 vim app.yml
237 | 60 ll
238 | 61 ansible all -m copy -a "src=app.yml dest=/tmp"
239 | 62 ansible all -a "ls"
240 | 63 ansible all -a "ls /tmp"
241 | 64 ansible all -m copy -a "src=app.yml dest=/root"
242 | 65 ansible all -a "ls /tmp"
243 | 66 ansible all -a "ls /root"
244 | 67 rm -rf *
245 | 68 vim raham.yml
246 | 69 ansible all -a "yum remove httpd* -y"
247 | 70 ansible-playbook raham.yml
248 | 71 vim raham.yml
249 | 72 ansible-playbook raham.yml
250 | 73 vim raham.yml
251 | 74 cat raham.yml
252 | 75 ansible-playbook raham.yml --tags c
253 | 76 ansible all -a "yum remove maven* -y"
254 | 77 ansible-playbook raham.yml --tags c
255 | 78 cat raham.yml
256 | 79 ansible-playbook raham.yml --tags a,b
257 | 80 cat raham.yml
258 | 81 ansible all -a "yum remove docker* maven* -y"
259 | 82 cat raham.yml
260 | 83 ansible-playbook raham.yml --skip-tags "c"
261 | 84 ansible-playbook raham.yml --skip-tags "c,d"
262 | 85 history
263 |
264 | =====================================================
265 | SHELL VS COMMAND VS RAW
266 |
267 | - hosts: all
268 | tasks:
269 | - name: install git
270 | shell: yum install git -y
271 |
272 | - name: install maven
273 | command: yum install maven -y
274 |
275 | - name: install docker
276 | raw: yum install docker -y
277 |
278 | RAW >> COMMAND >> SHELL
279 |
280 | sed -i 's/install/remove/g' raham.yml
281 |
282 | - hosts: all
283 | tasks:
284 | - name: remove git
285 | shell: yum remove git -y
286 |
287 | - name: remove maven
288 | command: yum remove maven -y
289 |
290 | - name: remove docker
291 | raw: yum remove docker -y
292 |
293 | ansible-playbook raham.yml
294 |
295 | VARS:
296 |
297 | STATIC VARS: vars which are defined inside the playbook.
298 | DYNAMIC VARS: vars which are not defined inside the playbook.
299 |
300 | - hosts: all
301 | vars:
302 | a: git
303 | b: maven
304 | c: docker
305 | tasks:
306 | - name: install git
307 | yum: name={{a}} state=present
308 |
309 | - name: install maven
310 | yum: name={{b}} state=present
311 |
312 | - name: install docker
313 | yum: name={{c}} state=present
314 |
315 |
316 | ansible-playbook raham.yml
317 |
318 | - hosts: all
319 | vars:
320 | a: git*
321 | b: maven
322 | c: docker
323 | tasks:
324 | - name: remove git*
325 | yum: name={{a}} state=absent
326 |
327 | - name: remove maven
328 | yum: name={{b}} state=absent
329 |
330 | - name: remove docker
331 | yum: name={{c}} state=absent
332 |
333 | ansible-playbook raham.yml
334 |
335 |
336 | - hosts: all
337 | vars:
338 | a: git*
339 | b: maven
340 | c: docker
341 | tasks:
342 | - name: install git*
343 | yum: name={{a}} state=present
344 |
345 | - name: install maven
346 | yum: name={{b}} state=present
347 |
348 | - name: install docker
349 | yum: name={{c}} state=present
350 |
351 | - name: install pkg
352 | yum: name={{d}} state=present
353 |
354 | ansible all -a "yum remove git* maven* docker* tree* -y"
355 |
356 | =====================
357 |
358 | - hosts: all
359 | vars:
360 | a: git
361 | b: maven
362 | c: docker
363 | tasks:
364 | - name: install git*
365 | yum: name={{a}} state=present
366 |
367 | - name: install maven
368 | yum: name={{b}} state=present
369 |
370 | - name: install docker
371 | yum: name={{c}} state=present
372 |
373 | - name: install {{d}}
374 | yum: name={{d}} state=present
375 |
376 | - name: install {{e}}
377 | yum: name={{e}} state=present
378 |
379 |
380 | single var: ansible-playbook raham.yml --extra-vars "d=tree"
381 | multi var: ansible-playbook raham.yml --extra-vars "d=tree e=java-1.8.0-openjdk"
382 |
383 | sed -i "s/present/absent/g" raham.yml
384 | multi var: ansible-playbook raham.yml --extra-vars "d=tree e=java-1.8.0-openjdk"
385 |
386 | LOOPS:
387 |
388 | - hosts: all
389 | tasks:
390 | - name: install pkgs
391 | yum: name={{item}} state=present
392 | with_items:
393 | - git
394 | - maven
395 | - docker
396 | - httpd
397 | - tree
398 |
399 |
400 | - hosts: all
401 | tasks:
402 | - name: create users
403 | user: name={{item}} state=present
404 | with_items:
405 | - jayanth
406 | - delidemir
407 | - ramesh
408 | - kowshik
409 | - revi
410 | - manikanta
411 | - sravanthi
412 | - sarvani
413 | - nikitha
414 | - sadekaa
415 |
416 |
417 | TROUBLESHOOTING:
418 | ansible all -m setup | grep -i cpus
419 | ansible all -m setup | grep -i mem
420 |
421 | HISTORY:
422 | 87 vim raham.yml
423 | 88 ansible-playbook raham.yml
424 | 89 cat raham.yml
425 | 90 sed -i 's/install/remove/g' raham.yml
426 | 91 cat raham.yml
427 | 92 ansible-playbook raham.yml
428 | 93 ansible all -a "docker --version"
429 | 94 ansible all -a "mvn --version"
430 | 95 ansible all -a "git --version"
431 | 96 ansible all -a "yum remove git* -y"
432 | 97 vim raham.yml
433 | 98 ansible-playbook raham.yml
434 | 99 cat raham.yml
435 | 100 sed -i 's/present/absent/g' raham.yml
436 | 101 cat raham.yml
437 | 102 sed -i 's/git/git*/g' raham.yml
438 | 103 ansible-playbook raham.yml
439 | 104 ansible all -a "git --version"
440 | 105 vim raham.yml
441 | 106 ansible-playbook raham.yml --extra-vars "d=tree"
442 | 107 vim raham.yml
443 | 108 ansible-playbook raham.yml --extra-vars "d=tree"
444 | 109 ansible all -a "tree --version"
445 | 110 vim raham.yml
446 | 111 ansible all -a "yum remove git* maven* docker* tree* -y"
447 | 112 ansible-playbook raham.yml --extra-vars "d=tree, e=java-1.8.0-openjdk"
448 | 113 ansible-playbook raham.yml --extra-vars "d=tree e=java-1.8.0-openjdk"
449 | 114 sed -i "s/present/absent/g" raham.yml
450 | 115 cat raham.yml
451 | 116 ansible-playbook raham.yml --extra-vars "d=tree, e=java-1.8.0-openjdk"
452 | 117 cat raham.yml
453 | 118 vim raham.yml
454 | 119 ansible-playbook raham.yml
455 | 120 vim raham.yml
456 | 121 ansible-playbook raham.yml
457 | 122 ansible all -a "git --version"
458 | 123 ansible all -a "maven --version"
459 | 124 ansible all -a "mvn --version"
460 | 125 ansible all -a "docker --version"
461 | 126 ansible all -a "httpd --version"
462 | 127 ansible all -a "tree --version"
463 | 128 sed -i 's/present/absent/g' raham.yml
464 | 129 cat raham.yml
465 | 130 sed -i 's/git/git*/g' raham.yml
466 | 131 ansible-playbook raham.yml
467 | 132 vim raham.yml
468 | 133 ansible-playbook raham.yml
469 | 134 cat raham.yml
470 | 135 sed -i 's/present/absent/g' raham.yml
471 | 136 ansible-playbook raham.yml
472 | 137 cd .ssh
473 | 138 ll
474 | 139 cat known_hosts
475 | 140 ansible all --list-host
476 | 141 vim /etc/ansible/hosts
477 | 142 ansible all --list-host
478 | 143 vim /etc/ansible/hosts
479 | 144 ansible all --list-host
480 | 145 cat known_hosts
481 | 146 ll
482 | 147 vim /etc/ansible/hosts
483 | 148 ll
484 | 149 cd
485 | 150 ansible setup -m all
486 | 151 ansible all -m setup
487 | 152 ansible all -m setup | grep -i cpus
488 | 153 ansible all -m setup | grep -i memory
489 | 154 ansible all -m setup | grep -i mem
490 | 155 ansible all -m setup | grep -i cpus
491 | 156 ansible all -m setup | grep -i mem
492 | 157 history
493 | =============================================================================================================
494 |
495 | HANDLERS: here one task depends on another task.
496 | when task-1 is executed, it notifies task-2 to perform its action.
497 | ex: if we want to restart httpd we first need to install it;
498 | restarting depends on installing.
499 | Note: the notify keyword triggers the handler.
500 |
501 | - hosts: all
502 | tasks:
503 | - name: installing apache on RedHat
504 | yum: name=httpd state=present
505 | notify: starting apache
506 | handlers:
507 | - name: starting apache
508 | service: name=httpd state=started
509 |
510 | ansible all -a "systemctl status httpd"
511 |
512 |
513 | ANSIBLE VAULT:
514 | it is used to encrypt files, playbooks, etc.
515 | Technique: AES256
516 | vault stores our data safely and securely.
517 | if we want to access any data in the vault we need to give a password.
518 | Note: we can also restrict users from accessing the playbook.
519 |
520 | ansible-vault create creds1.txt : to create a vault
521 | ansible-vault edit creds1.txt : to edit a vault
522 | ansible-vault rekey creds1.txt : to change password for a vault
523 | ansible-vault decrypt creds1.txt : to decrypt the content
524 | ansible-vault encrypt creds1.txt : to encrypt the content
525 | ansible-vault view creds1.txt : to show the content without decrypting the file
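To run an encrypted playbook, the vault password has to be supplied at run time; a minimal sketch (the file names are examples):

```
ansible-vault encrypt raham.yml                            # encrypt the playbook itself
ansible-playbook raham.yml --ask-vault-pass                # prompt for the password at run time
ansible-playbook raham.yml --vault-password-file pass.txt  # or read it from a file
```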
526 |
527 | PIP: it's a package manager used to install python libs/modules
528 |
529 | Redhat: yum
530 | ubuntu: apt
531 | python: pip
532 |
533 |
534 |
535 | - hosts: all
536 | tasks:
537 | - name: installing pip
538 | yum: name=pip state=present
539 |
540 | - name: installing NumPy
541 | pip: name=Numpy state=present
542 |
543 |
544 | - hosts: all
545 | tasks:
546 | - name: installing pip
547 | yum: name=pip state=present
548 |
549 | - name: installing Pandas
550 | pip: name=Pandas state=present
551 |
552 |
553 | ANSIBLE SETUP MODULE: used to print the complete info (facts) of the worker nodes.
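Instead of grepping the full output, the setup module can return just the facts you need via its filter argument (a sketch):

```
ansible all -m setup -a "filter=ansible_os_family"
ansible all -m setup -a "filter=ansible_*_mb"
```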
554 |
555 | ROLES:
556 | used to divide the playbook into directory structures.
557 | we can organize the playbooks.
558 | we can encapsulate the data.
559 | we can reduce the playbook length.
560 |
561 | mkdir playbooks
562 | cd playbooks
563 | yum install tree -y
564 |
565 |
566 | .
567 | ├── master.yml
568 | └── roles
569 | ├── file
570 | │ └── tasks
571 | │ └── main.yml
572 | ├── one
573 | │ └── tasks
574 | │ └── main.yml
575 | └── raham
576 | └── tasks
577 | └── main.yml
578 |
579 |
580 | mkdir -p roles/one/tasks
581 | vim roles/one/tasks/main.yml
582 |
583 | - name: installing maven
584 | yum: name=maven state=present
585 |
586 | vim roles/raham/tasks/main.yml
587 | - name: create user
588 | user: name=srikanth state=present
589 |
590 | vim roles/file/tasks/main.yml
591 | - name: create a file
592 | shell: touch file22
593 |
594 | vim master.yml
595 |
596 | - hosts: all
597 | roles:
598 | - one
599 | - raham
600 | - file
601 |
602 | ansible-playbook master.yml
603 |
604 | ANSIBLE-GALAXY: used to share and download roles/playbooks, like code on GitHub.
605 |
606 | ansible-galaxy search java
607 | ansible-galaxy install akhilesh9589.tomcat
608 | ansible-galaxy search --author alikins
609 |
610 | HISTORY:
611 | 158 ll
612 | 159 vim raham.yml
613 | 160 ansible setup -m all
614 | 161 ansible all -m setup
615 | 162 vim raham.yml
616 | 163 cat raham.yml
617 | 164 ansible all -m setup
618 | 165 ansible all -m setup | grep -i family
619 | 166 cat raham.yml
620 | 167 ansible-playbook raham.yml
621 | 168 sed -i 's/present/absent/g' raham.yml
622 | 169 sed -i 's/git/git*/g' raham.yml
623 | 170 ansible-playbook raham.yml
624 | 171 vim raham.yml
625 | 172 ansible-playbook raham.yml
626 | 173 sed -i 's/present/absent/g' raham.yml
627 | 174 ansible-playbook raham.yml
628 | 175 vim raham.yml
629 | 176 cat raham.yml
630 | 177 ansible-playbook raham.yml
631 | 178 ansible all -a "systemctl status httpd"
632 | 179 sed -i 's/present/absent/g' raham.yml
633 | 180 ansible-playbook raham.yml
634 | 181 ll
635 | 182 vim creds.txt
636 | 183 cat creds.txt
637 | 184 ansible-vault create creds1.txt
638 | 185 cat creds.txt
639 | 186 cat creds1.txt
640 | 187 ansible-vault edit creds1.txt
641 | 188 cat creds1.txt
642 | 189 ansible-vault rekey creds1.txt
643 | 190 cat creds1.txt
644 | 191 ansible-vault decrypt creds1.txt
645 | 192 cat creds1.txt
646 | 193 ansible-vault encrypt creds1.txt
647 | 194 cat creds1.txt
648 | 195 ansible-vault view creds1.txt
649 | 196 cat creds1.txt
650 | 197 ll
651 | 198 cat raham.yml
652 | 199 ansible-vault encrypt raham.yml
653 | 200 cat raham.yml
654 | 201 ansible-playbook raham.yml
655 | 202 ansible-vault decrypt raham.yml
656 | 203 vim raham.yml
657 | 204 ansible-playbook raham.yml
658 | 205 vim raham.yml
659 | 206 ansible-playbook raham.yml --check
660 | 207 vim raham.yml
661 | 208 ansible-playbook raham.yml
662 | 209 vim raham.yml
663 | 210 rm -rf *
664 | 211 ll
665 | 212 mkdir playbooks
666 | 213 cd playbooks/
667 | 214 yum install tree -y
668 | 215 mkdir -p roles/one/tasks
669 | 216 vim roles/one/tasks/main.yml
670 | 217 vim master.yml
671 | 218 tree
672 | 219 cat master.yml
673 | 220 cat roles/one/tasks/main.yml
674 | 221 cat master.yml
675 | 222 ansible-playbook master.yml
676 | 223 tree
677 | 224 mkdir -p roles/raham/tasks
678 | 225 tree
679 | 226 vim roles/raham/tasks/main.yml
680 | 227 vim master.yml
681 | 228 ansible-playbook master.yml
682 | 229 tree
683 | 230 mkdir -p roles/file/tasks
684 | 231 vim roles/file/tasks/main.yml
685 | 232 vim master.yml
686 | 233 ansible-playbook master.yml
687 | 234 tree
688 | 235 cat roles/raham/tasks/main.yml
689 | 236 cat roles/file/tasks/main.yml
690 | 237 cd
691 | 238 rm -rf *
692 | 239 ansible-galaxy search java
693 | 240 ansible-galaxy search tomcat
694 | 241 ansible-galaxy install akhilesh9589.tomcat
695 | 242 cd .ansible/roles/akhilesh9589.tomcat/
696 | 243 ll
697 | 244 cd
698 | 245 ansible-galaxy search --author akhilesh9589
699 | 246 ansible-galaxy search --author alikins
700 | 247 history
701 |
702 | ===============================================
703 |
704 | ansible_hostname
705 | ansible_memfree_mb
706 | ansible_memtotal_mb
707 | ansible_os_family
708 | ansible_pkg_mgr
709 | ansible_processor_vcpus
710 |
711 |
712 | DEBUG: used to print messages / customized output
713 |
714 | - hosts: all
715 | tasks:
716 | - name: to print msgs
717 | debug:
718 | msg: "the node name is: {{ansible_hostname}}, the total mem is: {{ansible_memtotal_mb}}, free mem is: {{ansible_memfree_mb}}, the os family is: {{ansible_os_family}}, total cpus is: {{ansible_processor_vcpus}}"
719 |
720 | JINJA2 TEMPLATE: used to get customized output; it is a text file that embeds variables, and their values are filled in at run time.
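A minimal sketch of using a Jinja2 template with the template module (the .j2 file name and dest path are assumptions):

```
# index.html.j2 contains: served by {{ ansible_hostname }}
- hosts: all
  tasks:
    - name: render the template with each node's own facts
      template:
        src: index.html.j2
        dest: /var/www/html/index.html
```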
721 |
722 | LOOKUPS: this module is used to get data from files, databases and key-value stores
723 |
724 |
725 | - hosts: dev
726 | vars:
727 | a: "{{lookup('file', '/root/creds.txt') }}"
728 | tasks:
729 | - debug:
730 | msg: "hai my user name is {{a}}"
731 |
732 | cat creds.txt
733 | user=raham
734 |
735 | STRATEGIES:
736 |
737 | LINEAR: execute tasks sequentially (default)
738 | FREE: execute all tasks on all nodes at the same time
739 | ROLLING:
740 | SEQUENCE:
741 | BATCH:
742 |
743 | - hosts: dev
744 | gather_facts: false
745 | strategy: free
746 | vars:
747 | a: "{{lookup('file', '/root/creds.txt') }}"
748 | tasks:
749 | - debug:
750 | msg: "hai my user name is {{a}}"
751 |
752 | =============================================================================
753 |
754 | LAMP:
755 | L : LINUX
756 | A : APACHE
757 | M : MYSQL
758 | P : PHP
759 |
760 |
761 |
762 | ---
763 | - hosts: all
764 | tasks:
765 | - name: installing apache
766 | yum: name=httpd state=present
767 | - name: installing mysql
768 | yum: name=mysql state=present
769 | - name: installing python
770 | yum: name=python3 state=present
771 |
772 | =============================================================================
773 |
774 | NETFLIX DEPLOYMENT:
775 |
776 | - hosts: test
777 | tasks:
778 | - name: installing apache server
779 | yum: name=httpd state=present
780 |
781 | - name: activating apache server
782 | service: name=httpd state=started
783 |
784 | - name: installing git
785 | yum: name=git state=present
786 |
787 | - name: git checkout
788 | git:
789 | repo: "https://github.com/CleverProgrammers/pwj-netflix-clone.git"
790 | dest: "/var/www/html"
791 |
792 | TOMCAT SETUP:
793 | USE TOMCAT.YML, CONTEXT.XML AND TOMCAT-USERS.XML FROM THE BELOW REPO
794 | https://github.com/RAHAMSHAIK007/all-setups.git
795 |
796 | CONDITIONS:
797 | CLUSTER: GROUP OF NODES/SERVERS
798 |
799 | HOMOGENEOUS: SERVERS WITH THE SAME OS AND FLAVOUR
800 | HETEROGENEOUS: SERVERS WITH DIFFERENT OS AND FLAVOURS
801 |
802 |
803 | - hosts: all
804 | tasks:
805 | - name: installing git* on RedHat
806 | yum: name=git* state=present
807 | when: ansible_os_family == "RedHat"
808 |
809 | - name: installing git* on Ubuntu
810 | apt: name=git* state=present
811 | when: ansible_os_family == "Debian"
812 |
813 | sed -i 's/present/absent/g' raham.yml
814 |
815 | EX-2:
816 |
817 | - hosts: all
818 | tasks:
819 | - name: installing apache on RedHat
820 | yum: name=httpd state=present
821 | when: ansible_os_family == "RedHat"
822 |
823 | - name: installing apache on Ubuntu
824 | apt: name=apache2 state=present
825 | when: ansible_os_family == "Debian"
826 |
827 | sed -i 's/present/absent/g' raham.yml
828 |
829 | HISTORY:
830 | 295 ansible-playbook playbook.yml --check
831 | 296 vim playbook.yml
832 | 297 ansible-playbook playbook.yml
833 | 298 cat playbook.yml
834 | 299 sed -i 's/present/absent/g, s/install/unistall/g' playbook.yml
835 | 300 sed -i 's/present/absent/g; s/install/unistall/g' playbook.yml
836 | 301 ansible-playbook playbook.yml
837 | 302 vim playbook.yml
838 | 303 ansible-playbook playbook.yml
839 | 304 ansible all -m setup
840 | 305 vim playbook.yml
841 | 306 ansible-playbook playbook.yml
842 | 307 vim playbook.yml
843 | 308 vim raham/'
844 | 309 ll
845 | 310 rm -rf raham/ lucky/
846 | 311 ll
847 | 312 vim creds.txt
848 | 313 cat creds.txt
849 | 314 vim playbook.yml
850 | 315 cat playbook.yml
851 | 316 cat creds.txt
852 | 317 ansible-playbook playbook.yml
853 | 318 vim playbook.yml
854 | 319 ansible-playbook playbook.yml
855 | 320 vim playbook.yml
856 | 321 ansible-playbook playbook.yml
857 | 322 vim playbook.yml
858 | 323 ansible-playbook playbook.yml
859 | 324 cat playbook.yml
860 | 325 vim playbook.yml
861 | 326 ansible-playbook playbook.yml
862 | 327 sed -i 's/present/absent/g' playbook.yml
863 | 328 sed -i 's/install/unistall/g' playbook.yml
864 | 329 ansible-playbook playbook.yml
865 | 330 vim netflix.yml
866 | 331 ansible-playbook playbook.yml
867 | 332 vim netflic.yml
868 | 333 ansible-playbook netflix.yml
869 | 334 vim playbook.yml
870 | 335 vim netflix.yml
871 | 336 ansible-playbook netflix.yml
872 | 337 rm -rf *
873 | 338 vim tomcat.yml
874 | 339 sed -i 's/9.0.76/9.0.80/g' tomcat.yml
875 | 340 vim tomcat.yml
876 | 341 vim tomcat-users.xml
877 | 342 vim context.xml
878 | 343 ll
879 | 344 cat tomcat.yml
880 | 345 ansible-playbook tomcat.yml
881 | 346 vim tomcat.yml
882 | 347 ansible-playbook tomcat.yml
883 | 348 history
884 | [root@ansible ~]#
885 |
--------------------------------------------------------------------------------
/6. K8S:
--------------------------------------------------------------------------------
1 | DOCKER SWARM:
2 | CLUSTER
3 | NODES
4 | CONTAINER
5 | APP
6 |
7 | C: CLUSTER
8 | N: NODE
9 | P: POD
10 | C: CONTAINER
11 | A: APPLICATION
12 |
13 | NOTE: k8s does not communicate with containers directly.
14 | it communicates with pods.
15 |
16 | COMPONENTS:
17 | MASTER NODE:
18 | 1. API SERVER: used for communicating with the cluster; it takes a command, executes it and gives the output.
19 | 2. ETCD: the DB of the cluster; all the cluster info is stored here.
20 | 3. SCHEDULER: schedules pods on worker nodes, based on hardware resources.
21 | 4. CONTROLLER: used to control the k8s objects.
22 | 1. cloud controllers
23 | 2. kube controllers
24 |
25 | WORKER NODE:
26 | KUBELET: an agent used to communicate with the master.
27 | KUBEPROXY: it deals with networking.
28 | POD: a group of containers.
29 |
30 |
31 | There are multiple ways to set up a kubernetes cluster.
32 |
33 | 1. SELF MANAGED K8S CLUSTER
34 |    a. minikube (single node cluster)
35 |    b. kubeadm (multi node cluster)
36 |    c. KOPS
37 |
38 | 2. CLOUD MANAGED K8S CLUSTER
39 |    a. AWS EKS
40 |    b. AZURE AKS
41 |    c. GCP GKE
42 |    d. IBM IKS
43 |
44 | MINIKUBE:
45 | It is a tool used to set up a single node K8s cluster.
46 | It contains the API server, ETCD database and container runtime.
47 | It is used for development, testing, and experimentation on a local machine.
48 | Here master and worker run on the same machine.
49 | It is platform independent.
50 |
51 | NOTE: we don't use this in real-time environments.
52 |
53 | REQUIREMENTS:
54 | 2 CPUs or more
55 | 2GB of free memory
56 | 20GB of free disk space
57 | Internet connection
58 | Container or virtual machine manager, such as: Docker.
59 |
60 |
61 |
62 | PODS:
63 | It is the smallest unit of deployment in K8s.
64 | It is a group of containers.
65 | Pods are ephemeral (short living objects).
66 | Mostly we use a single container inside a pod, but if required we can create multiple containers inside the same pod.
67 | when we create a pod, the containers inside it share the same network namespace and can share the same storage volumes.
68 | While creating a pod, we must specify the image, along with any necessary configuration and resource limits.
69 | K8s cannot communicate with containers; it communicates only with pods.
70 | We can create this pod in two ways,
71 | 1. Imperative(command)
72 | 2. Declarative (Manifest file)
73 |
74 |
75 | IMPERATIVE:
76 | kubectl run pod1 --image rahamshaik/paytmtrain:latest
77 | kubectl get po/pod/pods
78 | kubectl get po/pod/pods -o wide
79 | kubectl describe pod pod1
80 | kubectl delete pod pod1
81 |
82 | Declarative:
83 | vim abc.yml
84 |
85 | apiVersion: v1
86 | kind: Pod
87 | metadata:
88 | name: pod1
89 | spec:
90 | containers:
91 | - image: nginx
92 | name: cont1
93 |
94 | kubectl create -f abc.yml
95 | kubectl get po/pod/pods
96 | kubectl get po/pod/pods -o wide
97 | kubectl describe pod pod1
98 | kubectl delete pod pod1
99 |
100 | DRAWBACK:
101 | if we delete the pod we can't retrieve it.
102 | all the load will be handled by a single pod.
103 |
104 |
105 | REPLICA SET:
106 | it will create multiple replicas of the same pod.
107 | if we delete one pod it will recreate it automatically.
108 | we can also distribute the load.
109 |
110 |
111 | LABEL: assigned to a pod for identification, to work with pods as a single unit.
112 | SELECTOR: used to identify the pods with the same label.
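Labels and selectors can also be used directly from kubectl with the -l flag (a sketch, assuming pods labelled app=swiggy):

```
kubectl get po -l app=swiggy      # list only the pods carrying the label
kubectl delete po -l app=swiggy   # act on all of them as a single unit
```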
113 |
114 | kubectl api-resources
115 |
116 | REPLICASET:
117 |
118 | apiVersion: apps/v1
119 | kind: ReplicaSet
120 | metadata:
121 | labels:
122 | app: swiggy
123 | name: swiggy-rs
124 | spec:
125 | replicas: 3
126 | selector:
127 | matchLabels:
128 | app: swiggy
129 | template:
130 | metadata:
131 | labels:
132 | app: swiggy
133 | spec:
134 | containers:
135 | - name: cont1
136 | image: nginx
137 |
138 | kubectl create -f abc.yml
139 |
140 | kubectl get rs
141 | kubectl get rs -o wide
142 | kubectl describe rs swiggy-rs
143 | kubectl delete rs swiggy-rs
144 | kubectl edit rs/swiggy-rs
145 |
146 |
147 |
148 | SCALING:
149 |
150 | SCALE-OUT: increasing the count of pods
151 | kubectl scale rs/swiggy-rs --replicas=10
152 |
153 | SCALE-IN: decreasing the count of pods
154 | kubectl scale rs/swiggy-rs --replicas=5
155 |
156 | SCALING FOLLOWS THE LIFO PATTERN:
157 | LIFO: LAST IN FIRST OUT
158 | the pod which was created last will be deleted first when we scale in.
159 |
160 |
161 | DEPLOYMENT:
162 | it can do all the operations an RS can.
163 | it can also roll back, which cannot be done with an RS.
164 |
165 | rs -- > pods
166 | deployment -- > rs -- > pods
167 |
168 | apiVersion: apps/v1
169 | kind: Deployment
170 | metadata:
171 | labels:
172 | app: swiggy
173 | name: swiggy-rs
174 | spec:
175 | replicas: 3
176 | selector:
177 | matchLabels:
178 | app: swiggy
179 | template:
180 | metadata:
181 | labels:
182 | app: swiggy
183 | spec:
184 | containers:
185 | - name: cont1
186 | image: nginx
187 |
188 | kubectl get deploy
189 | kubectl get deploy -o wide
190 | kubectl describe deploy swiggy-rs
191 | kubectl edit deploy/swiggy-rs
192 | kubectl delete deploy swiggy-rs
193 | kubectl delete po --all
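The rollback that sets a Deployment apart from an RS is driven with kubectl rollout (a sketch, reusing the swiggy-rs deployment above):

```
kubectl rollout status deploy/swiggy-rs    # watch the current rollout
kubectl rollout history deploy/swiggy-rs   # list recorded revisions
kubectl rollout undo deploy/swiggy-rs      # roll back to the previous revision
```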
194 |
195 |
196 | KUBECOLOR:
197 |
198 | wget https://github.com/hidetatz/kubecolor/releases/download/v0.0.25/kubecolor_0.0.25_Linux_x86_64.tar.gz
199 | tar -zxvf kubecolor_0.0.25_Linux_x86_64.tar.gz
201 | chmod +x kubecolor
202 | mv kubecolor /usr/local/bin/
203 | kubecolor get po
204 |
205 | HISTORY:
206 | 1 vim minikube.sh
207 | 2 sh minikube.sh
208 | 3 vim abc.yml
209 | 4 kubectl create -f abc.yml
210 | 5 vim abc.yml
211 | 6 kubectl create -f abc.yml
212 | 7 kubectl get po
213 | 8 kubectl delete pod raham
214 | 9 kubectl get po
215 | 10 vim abc.yml
216 | 11 kubectl get po
217 | 12 kubectl create -f abc.yml
218 | 13 cat abc.yml
219 | 14 vim abc.yml
220 | 15 kubectl create -f abc.yml
221 | 16 kubectl get rs
222 | 17 kubectl ap-resources
223 | 18 kubectl api-resources
224 | 19 kubectl get rs -o wide
225 | 20 kubectl describe rs train-rs
226 | 21 kubectl get po
227 | 22 kubectl get rs -o wide
228 | 23 kubectl delete pod train-rs-gshzk
229 | 24 kubectl get po
230 | 25 kubectl delete pod train-rs-767lv
231 | 26 kubectl get po
232 | 27 kubectl scale rs/train-rs --replicas=10
233 | 28 kubectl get po
234 | 29 kubectl scale rs/train-rs --replicas=5
235 | 30 kubectl get po
236 | 31 kubectl describe rs tarin-rs
237 | 32 kubectl describe rs train-rs
238 | 33 kubectl edit rs/train-rs
239 | 34 kubectl describe rs train-rs
240 | 35 kubectl get po
241 | 36 kubectl describe pod train-rs-24xvk | grep -i image
242 | 37 kubectl describe pod train-rs-24xvk
243 | 38 kubectl describe rs train-rs
244 | 39 kubectl describe pod
245 | 40 kubectl get po
246 | 41 kubectl describe pod train-rs-24xvk
247 | 42 kubectl describe pod train-rs-gq9bb
248 | 43 kubectl describe pod train-rs-m6gv5
249 | 44 kubectl get po
250 | 45 kubectl run pod1 --image nginx
251 | 46 kubectl run pod2 --image nginx
252 | 47 kubectl run pod3 --image nginx
253 | 48 kubectl get po
254 | 49 kubectl delete pod -l app=train
255 | 50 kubectl delete rs train-rs
256 | 51 kubectl get po
257 | 52 kubectl get po -o wide
258 | 53 kubectl delete po --all
259 | 54 vim abc.yml
260 | 55 kubectl create -f abc.yml
261 | 56 kubectl get deploy
262 | 57 kubectl get rs
263 | 58 kubectl get po
264 | 59 kubectl describe deploy train-rs
265 | 60 kubectl edit deploy/train-rs
266 | 61 kubectl get po
267 | 62 kubectl describe deploy train-rs
268 | 63 kubectl describe po
269 | 64 kubectl edit deploy/train-rs
270 | 65 kubectl get po
271 | 66 kubectl describe po
272 | 67 kubectl describe po | grep -i Image
273 | 68 kubectl scale deploy/train-rs --replicas=10
274 | 69 kubectl get po
275 | 70 wget https://github.com/hidetatz/kubecolor/releases/download/v0.0.25/kubecolor_0.0.25_Linux_x86_64.tar.gz
276 | 71 tar -zxvf kubecolor_0.0.25_Linux_x86_64.tar.gz
277 | 72 ./kubecolor
278 | 73 chmod +x kubecolor
279 | 74 mv kubecolor /usr/local/bin/
280 | 75 kubecolor get po
281 | 76 kubectl get rs
282 | 77 kubecolor get rs
283 | 78 kubecolor get po
284 | 79 history
285 | 80 kubecolor describe deploy train-rs
286 | 81 kubecolor get po
287 | 82 kubecolor logs train-rs-6cddf5c876-6ws5z
288 | 83 kubecolor logs ttrain-rs-6cddf5c876-wh6mr
289 | 84 kubecolor logs train-rs-6cddf5c876-wh6mr
290 | 85 kubecolor logs train-rs-6cddf5c876-wh6mr -c cont1
291 | 86 history
292 | root@ip-172-31-10-159:~#
293 |
322 |
323 | SETUP:
324 | sudo apt update -y
325 | sudo apt upgrade -y
326 | sudo apt install curl wget apt-transport-https -y
327 | sudo curl -fsSL https://get.docker.com -o get-docker.sh
328 | sudo sh get-docker.sh
329 | sudo curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
330 | sudo mv minikube-linux-amd64 /usr/local/bin/minikube
331 | sudo chmod +x /usr/local/bin/minikube
332 | sudo minikube version
333 | sudo curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
334 | sudo curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
335 | sudo echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
336 | sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
337 | sudo minikube start --driver=docker --force
338 |
376 |
377 | HISTORY:
378 | 1 apt update -y
379 | 2 apt upgrade -y
380 | 3 sudo apt install curl wget apt-transport-https -y
381 | 4 sudo curl -fsSL https://get.docker.com -o get-docker.sh
382 | 5 ll
383 | 6 sh get-docker.sh
384 | 7 sudo curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
385 | 8 ll
386 | 9 sudo mv minikube-linux-amd64 /usr/local/bin/minikube
387 | 10 chmod +x /usr/local/bin/minikube
388 | 11 minikube version
389 | 12 sudo curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux /amd64/kubectl"
390 | 13 ll
391 | 14 sudo curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/k ubectl.sha256"
392 | 15 echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
393 | 16 sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
394 | 17 kubectl version
395 | 18 kubectl version --client
396 | 19 minikube start --driver=docker --force
397 | 20 minikube status
398 | 21 kubectl get pod
399 | 22 kubectl get pods
400 | 23 kubectl get po
401 | 24 kubectl run pod1 --image rahamshaik/paytmtrain:latest
402 | 25 kubectl get pod
403 | 26 kubectl get pods
404 | 27 kubectl get po
405 | 28 kubectl describe pod pod1
406 | 29 kubectl get po
407 | 30 kubectl delete pod pod1
408 | 31 kubectl get po
409 | 32 kubectl run pod1 --image ubuntu
410 | 33 kubectl get po
411 | 34 kubectl delete pod pod1
412 | 35 kubectl run raham --image nginx
413 | 36 kubectl get po
414 | 37 kubectl describe pod raham
415 | 38 kubectl get po -o wide
416 | 39 kubectl delete pod raham
417 | 40 vim abc.yml
418 | 41 kubectl create -f abc.yml
419 | 42 kubectl get po
420 | 43 kubectl get po -o wide
421 | 44 kubectl describe pod pod1
422 | 45 kubectl delete pod pod1
423 | 46 history
424 | ===========================================================
425 | KOPS:
426 | INFRASTRUCTURE: Resources used to run our application on cloud.
427 | EX: Ec2, VPC, ALB, -------------
428 |
429 |
430 | Minikube -- > single node cluster
431 | All the pods on single node
432 | kOps, also known as Kubernetes Operations, is an open-source tool that helps you create, destroy, upgrade, and maintain a highly available, production-grade Kubernetes cluster.
434 | Depending on the requirement, kOps can also provide cloud infrastructure.
435 | kOps is mostly used in deploying AWS and GCE Kubernetes clusters.
436 | But officially, the tool only supports AWS; support for other cloud providers (such as DigitalOcean, GCP, and OpenStack) is in the beta stage.
437 |
438 |
439 | ADVANTAGES:
440 | • Automates the provisioning of AWS and GCE Kubernetes clusters
441 | • Deploys highly available Kubernetes masters
442 | • Supports rolling cluster updates
443 | • Autocompletion of commands in the command line
444 | • Generates Terraform and CloudFormation configurations
445 | • Manages cluster add-ons.
446 | • Supports state-sync model for dry-runs and automatic idempotency
447 | • Creates instance groups to support heterogeneous clusters
448 |
449 | ALTERNATIVES:
450 | Amazon EKS , MINIKUBE, KUBEADM, RANCHER, TERRAFORM.
451 |
452 |
453 | STEP-1: GIVING PERMISSIONS
454 | IAM -- > USER -- > CREATE USER -- > NAME: KOPS -- > Attach Policies Directly -- > AdministratorAccess -- > NEXT -- > CREATE USER
455 | USER -- > SECURITY CREDENTIALS -- > CREATE ACCESS KEYS -- > CLI -- > CHECKBOX -- > CREATE ACCESS KEYS -- > DOWNLOAD
456 |
457 | aws configure
458 |
459 | STEP-2: INSTALL KUBECTL AND KOPS
460 |
461 | curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
462 | wget https://github.com/kubernetes/kops/releases/download/v1.25.0/kops-linux-amd64
463 | chmod +x kops-linux-amd64 kubectl
464 | mv kubectl /usr/local/bin/kubectl
465 | mv kops-linux-amd64 /usr/local/bin/kops
466 |
467 | vim .bashrc
468 | export PATH=$PATH:/usr/local/bin/ -- > save and exit
469 | source .bashrc
470 |
471 | STEP-3: CREATING THE BUCKET
472 | aws s3api create-bucket --bucket cloudanddevopsbyraham007.k8s.local --region us-east-1
473 | aws s3api put-bucket-versioning --bucket cloudanddevopsbyraham007.k8s.local --region us-east-1 --versioning-configuration Status=Enabled
474 | export KOPS_STATE_STORE=s3://cloudanddevopsbyraham007.k8s.local
475 |
476 | STEP-4: CREATING THE CLUSTER
477 | kops create cluster --name rahams.k8s.local --zones us-east-1a --master-count=1 --master-size t2.medium --node-count=2 --node-size t2.micro
478 | kops update cluster --name rahams.k8s.local --yes --admin
479 |
480 |
481 | Suggestions:
482 | * list clusters with: kops get cluster
483 | * edit this cluster with: kops edit cluster rahams.k8s.local
484 | * edit your node instance group: kops edit ig --name=rahams.k8s.local nodes-us-east-1a
485 | * edit your master instance group: kops edit ig --name=rahams.k8s.local master-us-east-1a
486 |
487 |
488 | ADMIN ACTIVITIES:
489 | To scale the worker nodes:
490 | kops edit ig --name=rahams.k8s.local nodes-us-east-1a
491 | kops update cluster --name rahams.k8s.local --yes --admin
492 | kops rolling-update cluster --yes
493 |
494 | kubectl describe node node_id
495 | kops delete cluster --name rahams.k8s.local --yes
496 |
497 | HISTORY:
498 | 1 aws configure
499 | 2 curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
500 | 3 wget https://github.com/kubernetes/kops/releases/download/v1.25.0/kops-linux-amd64
501 | 4 chmod +x kops-linux-amd64 kubectl
502 | 5 mv kubectl /usr/local/bin/kubectl
503 | 6 mv kops-linux-amd64 /usr/local/bin/kops
504 | 7 kops version
505 | 8 kubectl version
506 | 9 vim .bashrc
507 | 10 source .bashrc
508 | 11 kops version
509 | 12 kubectl version
510 | 13 aws s3api create-bucket --bucket devopsbyraham007.k8s.local --region us-east-1
511 | 14 aws s3api create-bucket --bucket cloudanddevopsbyraham007.k8s.local --region us-east-1
512 | 15 aws s3api put-bucket-versioning --bucket cloudanddevopsbyraham007.k8s.local --region us-east-1 --versioning-configuration Status=Enabled
513 | 16 export KOPS_STATE_STORE=s3://cloudanddevopsbyraham007.k8s.local
514 | 17 aws s3api create-bucket --bucket devopsbyraham007.k8s.local --region us-east-1
515 | 18 aws s3api put-bucket-versioning --bucket cloudanddevopsbyraham007.k8s.local --region us-east-1 --versioning-configuration Status=Enabled
516 | 19 export KOPS_STATE_STORE=s3://cloudanddevopsbyraham007.k8s.local
517 | 20 kops create cluster --name rahams.k8s.local --zones us-east-1a --master-count=1 --master-size t2.medium --node-count=2 --node-size t2.micro
518 | 21 kops update cluster --name rahams.k8s.local --yes --admin
519 | 22 kops validate cluster --wait 10m
520 | 23 kops get cluster
521 | 24 kubectl get no
522 | 25 kops edit ig --name=rahams.k8s.local nodes-us-east-1a
523 | 26 kops update cluster --name rahams.k8s.local --yes --admin
524 | 27 kops rolling-update cluster --yes
525 | 28 kops edit ig --name=rahams.k8s.local master-us-east-1a
526 | 29 kops update cluster --name rahams.k8s.local --yes --admin
527 | 30 kops rolling-update cluster --yes
528 | 31 kops edit ig --name=rahams.k8s.local nodes-us-east-1a
529 | 32 kubectl get no
530 | 33 vim abc.yml
531 | 34 kubectl get po
532 | 35 kubectl create -f abc.yml
533 | 36 kubectl get po -o wide
534 | 37 kubectl scale deploy/swiggyrs --replicas=8
535 | 38 kubectl scale deploy/swiggy-rs --replicas=8
536 | 39 kubectl get po -o wide
537 | 40 kubectl describe node i-0c3a6f2269bc5af42
538 | 41 kops get cluster
539 | 42 kops delete cluster --name rahams.k8s.local --yes
540 | 43 history
541 |
542 | ==================================================================================================================
543 |
544 | NAMESPACES:
545 |
546 | NAMESPACE: It is used to divide the cluster among multiple teams in real-time projects.
547 | it is used to isolate environments.
548 |
549 | CLUSTER: HOUSE
550 | NAMESPACES: ROOM
551 |
552 | Each namespace is isolated.
553 | if you are in room-1, can you see into room-2? No.
554 | likewise, we cannot access objects in one namespace from another namespace.
555 |
556 |
557 | TYPES:
558 |
559 | default         : the default namespace; objects are created here unless another ns is specified
560 | kube-node-lease : holds a Lease object per node, used for node heartbeats
561 | kube-public     : readable by all clients (even unauthenticated); public cluster data is stored here
562 | kube-system     : objects created by the k8s system itself are stored in this ns
563 |
564 | kubectl get pod -n kube-system : to list all pods in kube-system namespace
565 | kubectl get pod -n default : to list all pods in default namespace
566 | kubectl get pod -n kube-public : to list all pods in kube-public namespace
567 | kubectl get po -A : to list all pods in all namespaces
568 |
569 | kubectl create ns dev : to create namespace
570 | kubectl config set-context --current --namespace=dev : to switch to the namespace
571 | kubectl config view --minify | grep namespace : to see current namespace
572 | kubectl delete ns dev : to delete namespace
573 | kubectl delete pod --all : to delete all pods in the current namespace
574 |
575 |
576 | NOTE: By deleting the ns, all objects inside it also get deleted.
577 | in real time we use the RBAC concept to restrict access from one namespace to another,
578 | so users cannot access/delete a ns, because of the restrictions we provide.
579 | we create Roles and RoleBindings for the users.
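A minimal sketch of such a restriction (the names dev-readonly, dev-user, and the dev namespace are assumptions for illustration, not from the course):

```yaml
# Role: allows only reading/listing pods inside the dev namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-readonly
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: grants the role above to a hypothetical user "dev-user"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: dev-readonly-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-readonly
  apiGroup: rbac.authorization.k8s.io
```

With this applied, dev-user can list pods in dev but cannot delete the namespace or touch other namespaces.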
580 |
581 |
582 | SERVICE: It is used to expose the application in k8s.
583 |
584 | TYPES:
1. CLUSTERIP: It will work inside the cluster.
              it will not be exposed to the outside world.
587 |
588 | apiVersion: apps/v1
589 | kind: Deployment
590 | metadata:
591 | labels:
592 | app: swiggy
593 | name: swiggy-deploy
594 | spec:
595 | replicas: 3
596 | selector:
597 | matchLabels:
598 | app: swiggy
599 | template:
600 | metadata:
601 | labels:
602 | app: swiggy
603 | spec:
604 | containers:
605 | - name: cont1
606 | image: rahamshaik/moviespaytm:latest
607 | ports:
608 | - containerPort: 80
609 | ---
610 | apiVersion: v1
611 | kind: Service
612 | metadata:
613 | name: sv1
614 | spec:
615 | type: ClusterIP
616 | selector:
617 | app: swiggy
618 | ports:
619 | - port: 80
620 |
621 | DRAWBACK:
622 | We cannot use the app from outside the cluster.
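The app is still reachable from inside the cluster; one way to check, assuming the sv1 service above is running, is a throwaway pod:

```shell
# run a temporary busybox pod that fetches the service by its DNS name, then deletes itself
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://sv1:80
```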
623 |
624 | 2. NODEPORT: It will expose our application on a particular port of every node.
625 |              Range: 30000 - 32767 (in the SG we need to allow this traffic)
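The range above is enforced by the API server; a chosen port can be sanity-checked before it goes into a manifest. A trivial shell sketch (31111 is the example port used in the manifest below):

```shell
# verify a chosen nodePort sits inside the default NodePort range (30000-32767)
port=31111
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  echo "valid NodePort: $port"
else
  echo "invalid NodePort: $port"
fi
```

Running it prints `valid NodePort: 31111`.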
626 |
627 | apiVersion: apps/v1
628 | kind: Deployment
629 | metadata:
630 | labels:
631 | app: swiggy
632 | name: swiggy-deploy
633 | spec:
634 | replicas: 3
635 | selector:
636 | matchLabels:
637 | app: swiggy
638 | template:
639 | metadata:
640 | labels:
641 | app: swiggy
642 | spec:
643 | containers:
644 | - name: cont1
645 | image: rahamshaik/trainservice:latest
646 | ports:
647 | - containerPort: 80
648 | ---
649 | apiVersion: v1
650 | kind: Service
651 | metadata:
652 | name: abc
653 | spec:
654 | type: NodePort
655 | selector:
656 | app: swiggy
657 | ports:
658 | - port: 80
659 | targetPort: 80
660 | nodePort: 31111
661 |
662 | NOTE: UPDATE THE SG (REMOVE THE OLD RULES AND ALLOW ALL TRAFFIC & SSH)
663 | DRAWBACK:
664 | PORT RESTRICTION.
665 |
666 | 3. LOADBALANCER: It will expose our app and distribute the load between pods.
667 |
668 | apiVersion: apps/v1
669 | kind: Deployment
670 | metadata:
671 | labels:
672 | app: swiggy
673 | name: swiggy-deploy
674 | spec:
675 | replicas: 3
676 | selector:
677 | matchLabels:
678 | app: swiggy
679 | template:
680 | metadata:
681 | labels:
682 | app: swiggy
683 | spec:
684 | containers:
685 | - name: cont1
686 | image: rahamshaik/trainservice:latest
687 | ports:
688 | - containerPort: 80
689 | ---
690 | apiVersion: v1
691 | kind: Service
692 | metadata:
693 | name: abc
694 | spec:
695 | type: LoadBalancer
696 | selector:
697 | app: swiggy
698 | ports:
699 | - port: 80
700 | targetPort: 80
701 |
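Once the service is created, the AWS cloud provider provisions an ELB; its address shows up in the EXTERNAL-IP column (the service name abc comes from the manifest above, and the ELB DNS name can take a couple of minutes to become resolvable):

```shell
kubectl get svc abc        # EXTERNAL-IP column shows the ELB DNS name
# fetch the ELB hostname directly and curl the app through it
curl http://$(kubectl get svc abc -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```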
702 |
703 |
704 | HISTORY:
705 |
706 | 1 vim kops.sh
707 | 2 vim .bashrc
708 | 3 source .bashrc
709 | 4 sh kops.sh
710 | 5 vim kops.sh
711 | 6 sh kops.sh
712 | 7 kops validate cluster --wait 10m
713 | 8 cat kops.sh
714 | 9 export KOPS_STATE_STORE=s3://devopsbynraeshit887766.k8s.local
715 | 10 kops validate cluster --wait 10m
716 | 11 kubectl get ns
717 | 12 kubectl run pod1 --image nginx
718 | 13 kubectl describe pod pod1
719 | 14 kubectl get po
720 | 15 kubectl get ns
721 | 16 kubectl get po -n kube-node-lease
722 | 17 kubectl get po -n kube-public
723 | 18 kubectl get po -n kube-system
724 | 19 kubectl get po --all
725 | 20 kubectl get po -A
726 | 21 kubectl create ns dev
727 | 22 kubectl get ns
728 | 23 kubectl config set-context --current --namspace=dev
729 | 24 kubectl config set-context --current --namespace=dev
730 | 25 kubectl run devpod1 --image nginx
731 | 26 kubectl run devpod2 --image nginx
732 | 27 kubectl run devpod3 --image nginx
733 | 28 kubectl get po
734 | 29 kubectl view config
735 | 30 kubectl config view
736 | 31 kubectl config view --minfy
737 | 32 kubectl config view --minify
738 | 33 kubectl get po
739 | 34 kubectl get po -A
740 | 35 kubectl create ns test
741 | 36 kubectl config set-context --curent --namespace=test
742 | 37 kubectl config set-context --current --namespace=test
743 | 38 kubectl run testpod1 --image nginx
744 | 39 kubectl run testpod2 --image nginx
745 | 40 kubectl run testpod3 --image nginx
746 | 41 kubectl get po
747 | 42 kubectl get po -n dev
748 | 43 kubectl delete po devpod1 -n dev
749 | 44 kubectl get po -n dev
750 | 45 kubectl delete ns dev
751 | 46 kubectl delete ns test
752 | 47 vim abc.yml
753 | 48 kubectl api-resources
754 | 49 vim abc.yml
755 | 50 kubectl get svc
756 | 51 kubectl get svc -A
757 | 52 kubectl create -f abc.yml
758 | 53 kubectl config set-context --current --namespace=default
759 | 54 kubectl create -f abc.yml
760 | 55 kubectl get deploy,svc
761 | 56 kubectl delete -f abc.yml
762 | 57 vim abc.yml
763 | 58 kubectl create -f abc.yml
764 | 59 vim abc.yml
765 | 60 kubectl create -f abc.yml
766 | 61 kubectl get deploy
767 | 62 kubectl delete deploy swiggy-deploy
768 | 63 kubectl create -f abc.yml
769 | 64 kubectl delete -f abc.yml
770 | 65 kubectl create -f abc.yml
771 | 66 kubectl get deploy,svc
772 | 67 vim abc.yml
773 | 68 kubectl apply -f abc.yml
774 | 69 kubectl delete -f abc.yml
775 | 70 vim abc.yml
776 | 71 kubectl create -f abc.yml
777 | 72 kubectl get deploy,svc
778 | 73 kops get cluster
779 | 74 kops delete cluster --name rahams.k8s.local --yes
780 | 75 history
782 |
--------------------------------------------------------------------------------