├── images └── session-02.jpg ├── session-01.txt ├── session-02.md ├── session-03.txt ├── session-04.txt ├── session-05.txt ├── session-06.txt ├── session-07.txt ├── session-08.txt ├── session-09.txt ├── session-10.txt ├── session-11.txt ├── session-12.txt ├── session-13.txt ├── session-14.txt ├── session-15.txt ├── session-16.txt ├── session-17.txt ├── session-18.txt ├── session-19.txt ├── session-20.txt ├── session-21.txt ├── session-22.txt ├── session-23.txt ├── session-24.txt ├── session-25.txt ├── session-26.txt ├── session-27.txt ├── session-28.txt ├── session-29.txt ├── session-30.txt ├── session-31.txt ├── session-32.txt ├── session-33.txt ├── session-34.txt ├── session-35.txt ├── session-36.txt ├── session-37.txt ├── session-38.txt ├── session-39.txt ├── session-40.txt ├── session-41.txt ├── session-42.txt ├── session-43.txt ├── session-44.txt ├── session-45.txt ├── session-46.txt ├── session-47.txt ├── session-48.txt ├── session-49.txt ├── session-50.txt ├── session-51.txt ├── session-52.txt ├── session-53.txt ├── session-54.txt ├── session-55.txt ├── session-56.txt ├── session-57.txt ├── session-58.txt ├── session-59.txt ├── session-60.txt ├── session-61.txt ├── session-62.txt ├── session-63.txt ├── session-64.txt ├── session-65.txt ├── session-66.txt ├── session-67.txt ├── session-69.txt ├── session-70.txt ├── session-71.txt ├── session-72.txt ├── session-73.txt ├── session-74.txt ├── session-75.txt └── session-76.txt /images/session-02.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DAWS-82S/notes/cf7efe0586a4f0ed71818db714d9faaec839d569/images/session-02.jpg -------------------------------------------------------------------------------- /session-01.txt: -------------------------------------------------------------------------------- 1 | https://youtu.be/F4jF88UkxV4 2 | 3 | SESSION-01 4 | ============= 5 | SDLC 6 | Waterfall vs Agile vs DevSecOps 7 | What is DevOps? 8 | Linux 9 | 10 | SDLC --> Software Development Life cycle 11 | 12 | Requirements analysis 13 | Planning 14 | Design --> General requirements to technical requirements 15 | Implementation 16 | Testing 17 | Deployment 18 | Maintainance 19 | 20 | Waterfall 21 | =========== 22 | 23 | SMS --> School Management System 24 | 25 | 100 years back --> Final exam 26 | 27 | stakeholders 28 | ============ 29 | Teachers -> Not serious to complete syllabus from DAY-1 30 | Parents -> Yes. 
Worried about whether they pass or not 31 | Students -> Not serious to study from DAY-1 32 | 33 | 30% 34 | 35 | Process Change 36 | ============ 37 | UNIT TEST-I, II, III, IV 38 | Q, H, PRE-FINAL, Final 39 | 40 | UNIT TEST-I --> 30 days 41 | Teachers --> They should be serious from DAY-1 to complete syllabus for UNIT TEST-I 42 | Students --> 1 week before UNIT TEST-I 43 | Parents --> They are waiting for UNIT TEST-I results 44 | 45 | 80% 46 | 47 | Parents --> Clients 48 | Students --> Developers, Operations team or Testing team 49 | Teachers --> IT Management 50 | 51 | Waterfall 52 | ========== 53 | Requirements -> Phase-I 54 | Once you are in Phase-II 55 | You can't go back and change the requirements 56 | 57 | 6 months for development --> Testing and Deployment 58 | 100 defects --> 10 invalid defects 59 | 60 | 50 defects --> 6 invalid defects 61 | 62 | Ambassdor 63 | 64 | Agile 65 | ========= 66 | Requirements analysis 67 | Planning 68 | Design --> General requirements to technical requirements 69 | Implementation 70 | Testing 71 | Deployment 72 | Maintainance 73 | 74 | Modules --> Signup, Login, Menu, Order, Shipping, Delivery, Payments 75 | 76 | Signup --> 1 month 77 | 78 | 2 weeks for development, 2 weeks for testing and deployment 79 | 80 | development is serious from DAY-1 81 | 82 | 20 defects --> 5 invalid defects 83 | 84 | Honda City 85 | 86 | 87 | Waterfall --> 10 times testing --> 100 88 | Agile --> 30 times testing --> 101 89 | DevOps --> 100 times --> 102 90 | 91 | Agile is part of DevOps 92 | ============ 93 | Modules --> Signup, Login, Menu, Order, Shipping, Delivery, Payments 94 | 95 | Signup 96 | ======== 97 | 1 month 98 | DAY-1 --> 100 lines of code --> enter your first name, enter your last name 99 | deploy this 100 lines --> test this 100 lines 100 | 101 | 3 defects --> Co-operation, Co-ordination, Collobaration between teams 102 | 103 | DAY-10 --> We should deploy and test everything. 104 | 105 | DevOps is a process of building, testing and releasing code on the same day when developer writes something. through this process we can acheive colloboration between teams, faster releases and less defects 106 | 107 | Speed and Accuracy 108 | 109 | DEV, PROD 110 | DEV, QA, PROD 111 | DEV, QA, UAT, PRE-PROD, PROD 112 | DEV, QA, UAT, PERF, SECURITY, PRE-PROD, PROD 113 | 114 | Linux 115 | ========= 116 | Windows 117 | -------- 118 | We need to restart sometimes --> It can run for years 119 | Too many graphics -> Time to load --> No graphics --> Super perfomance 120 | Resource consumption --> Less resource consumtion 121 | Not that much secure --> Secure 122 | Costly --> Free 123 | Not Opensource --> Opensource 124 | 125 | Server Creation, Linux Commands, Editors, User creation, Install Softwares, Service management 126 | 127 | AWS Account creation 128 | 129 | i3, 8GB ram 130 | i5, 16GB 131 | i7, 16GB 132 | 133 | 134 | 135 | 136 | 137 | -------------------------------------------------------------------------------- /session-02.md: -------------------------------------------------------------------------------- 1 | ![alt text](images/session-02.jpg) 2 | 3 | What is computer? 4 | What is Client Server architecture? 
5 | 6 | RAM 7 | Storage 8 | OS 9 | Processor 10 | 11 | IP enable device 12 | 13 | Server --> to host application 14 | Port forwarding --> You can deploy application in your laptop and open it in internet 15 | 16 | facebook.com --> facebook application in fb servers 17 | browser --> Client software 18 | 19 | Linux 20 | ========= 21 | 22 | Region --> HYD, Mumbai, Singapore, US, EU 23 | AZ --> North HYD, South HYD --> min 2 AZ --> High availability 24 | 25 | Instance == Server == node 26 | 27 | Firewall == Security Group 28 | 29 | allow everyone one through firewall 30 | 31 | inbound --> incoming traffic 32 | outbound --> outgoing traffic 33 | 34 | 0.0.0.0/0 --> every computer in the internet 35 | 36 | RAM 37 | Storage 38 | OS 39 | Processor 40 | 41 | devops-practice --> AMI == Amazon Machine Image 42 | .iso --> It will create entire operating system --> C:\Windows 43 | 44 | Redhat Enterprise Linux == Centos == Amazon Enterprise Linux == Fedora == AlmaLinux 45 | 46 | Kernel == Brain of OS == C language 47 | User Interface 48 | 49 | Linus torvalds == Inventor of Linux 50 | Mac -> Hardware locking --> you are buying both hardware and Mac OS 51 | 52 | Servers == Unix 53 | Laptops == IBM BIOS 54 | 55 | Linux -> C language == Kernel 56 | OS == Kernel + User Interface == Open Source 57 | 58 | Linux Implementations == Distributions == Flavours 59 | ======================= 60 | RHEL --> Commercial 61 | IBM AIX 62 | Ubuntu 63 | Fedora 64 | Solaris 65 | Suse 66 | Android 67 | 68 | OpenSource and Enterprise 69 | 70 | AWS Linux 2023 AMI 71 | t3.micro/t2.micro 72 | 73 | Authentication 74 | =============== 75 | 1. What you know --> Username and Password 76 | 2. What you have --> Username and token/OTP 77 | 3. What you are --> Fingerprints, Retina, Palm, etc. 78 | 79 | PublicKey and PrivateKey 80 | 81 | Lock and Key 82 | Lock --> Public 83 | Key --> Private 84 | 85 | Server == IP(Public) 86 | 87 | ssh-keygen -f ==> public key and private key 88 | 89 | pwd --> Present working directory 90 | C:\Users\ --> Windows 91 | /c/Users/siva --> Linux 92 | 93 | Git bash ==> SSH Client, Git Client, Mini Linux 94 | 95 | ~ --> Home directory 96 | /c/devops/daws-82s 97 | 98 | ls -l --> List sub directory 99 | 100 | ssh-rsa 101 | ssh-ed25519 102 | both are public keys 103 | 104 | Enable extension 105 | 106 | aadhar.png 107 | jul-payslip.pdf 108 | 109 | jul-payslip 110 | 111 | .pub --> public key 112 | .pem --> private key 113 | 114 | Public IP == 184.72.71.255 115 | AWS Linux 2023 AMI --> ec2-user and our private key 116 | 117 | IP, Username, password, protocol, port 118 | 119 | HTTP facebook.com/IP, Username, Password, 80 120 | 121 | facebook.com/IO, Username, Password/PrivateKey, SSH, 22 122 | 123 | SecureShell 22 --> it wil give full access to the server 124 | 125 | Delhi --> HYD 126 | 127 | f:no, apartment name, pincode 128 | 129 | siva, 523764 130 | 131 | ssh -i ec2-user@IP 132 | 133 | 1. Create public and private keys 134 | 2. Import public key 135 | 3. Create firewall 136 | 4. Create Instance 137 | 5. Connect to Instance 138 | 6. 
Terminate when not using -------------------------------------------------------------------------------- /session-03.txt: -------------------------------------------------------------------------------- 1 | ssh -i ec2-user@IP 2 | 3 | absolute path and relative path 4 | 5 | /c/devops/daws-82s/daws-82s.pem --> absolute path 6 | daws-82s/daws-82s.pem --> relative path 7 | 8 | clear 9 | 10 | $ --> denotes normal user 11 | sudo su --> root access 12 | \# --> denotes admin/root user 13 | 14 | /home/ 15 | 16 | /root --> root user home folder 17 | sudo su - --> lands into root user home folder /root 18 | 19 | 20 | 21 | - --> we can give single char 22 | -- --> we need to give word 23 | 24 | / --> root of the server 25 | 26 | ls --> list subdirectories 27 | 28 | ls -l --> long listing format with more details 29 | ls -lr --> reverse alpha order 30 | ls -lt --> new files on top 31 | ls -ltr --> old files on top 32 | ls -ltrh --> human readable 33 | ls -la --> display all files including hidden files and folders 34 | 35 | drwx------ --> d means directory 36 | -rw-r--r-- --> - means file 37 | lrw-r--r-- --> Link files 38 | 39 | touch --> creates empty file 40 | 41 | cat > devops.txt 42 | cat > DevSecOps.txt --> enter the text, once done Enter and press ctrl+d 43 | >> --> append, adds to the current text 44 | 45 | mkdir --> creates directory 46 | rmdir --> removes only empty directory 47 | 48 | cp 49 | cd .. --> one step back 50 | 51 | rm -r devops --> recursively delete everything inside devops 52 | 53 | curl and wget 54 | 55 | wget --> Downloads the file 56 | curl --> Shows the content on the screen 57 | curl -o --> donwloads the file with name given 58 | 59 | https://raw.githubusercontent.com/DAWS-82S/notes/refs/heads/main/session-02.md 60 | 61 | / --> seperator/delimiter/fragments 62 | Sivakumar Reddy M 63 | 64 | piping 65 | 66 | grep 67 | 68 | | --> pipe 69 | 70 | cat | grep 71 | 72 | cut command 73 | ============= 74 | echo "https://raw.githubusercontent.com/DAWS-82S/notes/refs/heads/main/session-02.md" | cut -d "/" -f9 75 | session-02.md 76 | 77 | awk command 78 | ============= 79 | echo "https://raw.githubusercontent.com/DAWS-82S/notes/refs/heads/main/session-02.md" | awk -F "/" '{print $NF}' 80 | 81 | echo "https://raw.githubusercontent.com/DAWS-82S/notes/refs/heads/main/session-02.md" | awk -F "/" '{print $1F}' 82 | 83 | How can I get all the users in Linux Servers 84 | 85 | awk -F ":" '{print $1F}' passwd 86 | 87 | head passwd --> top 10 lines from top 88 | tail -n 4 passwd --> last 4 lines from bottom 89 | 90 | head -n 10 passwd | tail -n 7 91 | 92 | 4-10 93 | 94 | head 10, tail (10-4)+1 95 | 96 | find -name "" 97 | 98 | find / -name "passwd" 99 | 100 | vim editor -------------------------------------------------------------------------------- /session-04.txt: -------------------------------------------------------------------------------- 1 | vim editor 2 | user management 3 | 4 | Esc mode is default 5 | press : to enter into command mode 6 | 7 | command mode 8 | ---------------- 9 | :q --> quit 10 | :wq --> write and quit 11 | :q! --> force quit without changes 12 | :/ --> search the word from top to bottom 13 | :? 
--> search the word from bottom to top 14 | :noh --> no highlight 15 | :set nu --> set line numbers in the file 16 | :set nonu --> dont set line numbers 17 | :28 d --> deleted 28th line 18 | :3s/word-to-find/word-to-replace --> replaces first occurenece in that line 19 | :3s/word-to-find/word-to-replace/g --> replaces all occureneces in that line 20 | :%s/word-to-find/word-to-replace/g --> replaces all occureneces in file 21 | :%d --> delete entire content 22 | 23 | ESC Mode 24 | --------------- 25 | u --> undo 26 | yy --> copy the line 27 | p --> paste 28 | 10p --> paste the lines 10 times 29 | dd --> cut the line 30 | gg --> takes to top 31 | shift+g --> takes to bottom 32 | 33 | Insert Mode 34 | --------------- 35 | press i 36 | 37 | Linux Administration 38 | ------------------------- 39 | User management 40 | 41 | create user, add user to any group 42 | 43 | useradd --> creates user and group with same name 44 | id --> displays user information 45 | /etc/passwd --> contains user information 46 | /etc/group --> contains group information 47 | in linux, a user must have only one primary group and atleast secondary group 48 | passwd --> sets password to the user 49 | 50 | groupadd --> creates group 51 | 52 | usermod -g devops ramesh --> adds ramesh to devops group 53 | usermod -aG testing ramesh --> adds testing as secondary group to ramesh 54 | 55 | CRUD --> 56 | 57 | userdel 58 | remove from project,now remove from company 59 | 60 | remove him from devops group 61 | 62 | Linux follows key based authentication by default 63 | 64 | /etc/ssh/sshd_config --> edit the SSH related configuration 65 | /etc/ssh/sshd_config --> any mistakes in this file, we cant ssh into server 66 | systemctl restart sshd 67 | sshd -t --> check config is correct or not 68 | 69 | key based authentication --> ramesh should generate his public key and private key... 70 | 71 | -------------------------------------------------------------------------------- /session-05.txt: -------------------------------------------------------------------------------- 1 | Permissions 2 | ----------------- 3 | R -> 4 4 | W -> 2 5 | X -> 1 6 | 7 | - rw- r-- r-- ec2-user ec2-user 8 | file 9 | 10 | ec2-user --> Owner --> Read and Write 11 | ec2-user --> Group --> Only Read 12 | Others --> Other than and owner and group --> Only read 13 | 14 | Who can change permissions of file or folder --> owners or root user 15 | 16 | owner/user --> u 17 | group --> g 18 | others --> o 19 | 20 | chmod o+w devsecops.txt 21 | chmod o-r suresh.txt 22 | 23 | chmod ugo+rwx suresh.txt 24 | 25 | chmod 740 suresh.txt 26 | chmod 777 suresh.txt 27 | chmod 755 suresh.txt 28 | 29 | 30 | admin should ask for suresh public key 31 | root user can create a folder .ssh in /home/suresh 32 | ownership of the folder should be on suresh 33 | chown -R suresh:suresh .ssh 34 | chown : file/folder 35 | 36 | inside .ssh we need to create a file called authorized_keys 37 | this file owner should be suresh 38 | permissions can be max 600 39 | 40 | ssh -i suresh.pem suresh@IP 41 | 42 | /home/suresh --> authorized_keys 43 | 44 | Package management 45 | ==================== 46 | windows laptops are cofigured with URL to pull the updates 47 | 48 | usually one package depends on other packages... 
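For example, installing one package usually pulls in several dependent packages automatically (a rough sketch on an RHEL-family server; nginx here is only an illustration):

dnf info nginx                    # show package details before installing
dnf install nginx -y              # dnf resolves and installs the dependent packages for us
dnf repoquery --requires nginx    # list what nginx depends on (needs the repoquery plugin)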
49 | 50 | yum install 51 | dnf install --> apt-get is for ubuntu 52 | 53 | dnf remove 54 | 55 | dnf list installed -> already installed inside linux 56 | dnf list available --> all - installed 57 | 58 | 59 | Service Management 60 | ==================== 61 | ssh -i suresh@IP 62 | 63 | request goes to IP, checks SSH is running on port number 22... 64 | 65 | systemctl status sshd 66 | 67 | http service --> nginx or apache 68 | 69 | dnf install nginx -y 70 | 71 | start nginx service --> systemctl start nginx 72 | 73 | http://IP:80 74 | 75 | systemctl stop nginx 76 | systemctl restart nginx --> restart 77 | 78 | systemctl enable nginx --> services will start automatically 79 | 80 | systemctl start git --> invalid 81 | 82 | few packages are just utilities, they are command line packages. Few packages are service related, we can start/stop/restart/enable 83 | 84 | Process Management 85 | =================== 86 | TL 87 | Senior 88 | Junior 89 | Fresher 90 | 91 | TL --> TASK-1 --> Senior 92 | Senior --> TASK-2 --> Junior 93 | Junior --> TASK-3 --> Fresher 94 | 95 | for TASK-3 TASK-2 is the parent 96 | for TASK-2 TASK-1 is the parent 97 | 98 | Process --> in linux everything is process 99 | 100 | echo "Hello World" --> creates one process instance id 101 | 102 | gives the result and then mark the process as completed 103 | 104 | PPID --> parent process instance id 105 | 106 | foreground and background 107 | 108 | kill PID --> request to stop 109 | kill -9 PID --> order to stop 110 | 111 | Network Management 112 | ===================== 113 | netstat -lntp 114 | 115 | systemctl status nginx 116 | ps -ef | grep nginx 117 | netstat -lntp --> check port is open or not -------------------------------------------------------------------------------- /session-06.txt: -------------------------------------------------------------------------------- 1 | 3 tier architecture 2 | ==================== 3 | 4 | Desktop applications 5 | Web applications 6 | 7 | disadvatanges of Desktop applications 8 | ==================================== 9 | 1. installation 10 | 2. upgrade 11 | 3. storage 12 | 4. compatability 13 | 5. if system crash we will lose data 14 | 6. more system resources 15 | 16 | Web applications 17 | ======================= 18 | 19 | road side cart 20 | ================= 21 | 1 person --> 10 persons 22 | 23 | 1. cooking 24 | 2. bill collection 25 | 3. serving 26 | 4. queing 27 | 28 | hotel 29 | ================== 30 | 50 people --> He will hire extra resources 31 | 32 | 2 persons --> 50 people 33 | 34 | 1 cook --> cooking and serving 35 | 1 owner --> tokens issue, bill collection 36 | 37 | 500 people --> restaurant 38 | 39 | 1 captain --> welcome and show the table 40 | waiter --> take the order, plating 41 | chef --> cook the order 42 | 43 | 1. responsibilites are shared to everyone, they can focus only on their work. 44 | 2. security 45 | 3. 
queing 46 | 47 | Raw Items --> Cook(customers can eat) --> Plating(with onion and keera) 48 | 49 | Only one server --> DB, Java Application, HTML application 50 | 51 | CRUD --> create, read, update and delete 52 | 53 | email, name, pan card, card details 54 | 55 | users --> table --> RDBMS 56 | 57 | DB --> Raw data 58 | Application Server --> Connects to DB and do CRUD operations 59 | Web Server --> queue the requests, take the request, forward the request to application serverr, format the data 60 | 61 | DB Tier --> RDBMS(MySQL, Oracle, Postgress, etc.), NoSQL(MongoDB), Redis(Cache), RabbitMQ(Queue based) 62 | Application/API(Application programming interface) Tier --> Backend/middleware applications --> Java, .NET, Python, Go, NodeJS, etc.. 63 | 64 | { 65 | "username": "sivakumar", 66 | "dob": "01-JAN-2000", 67 | "address": "Sanath nagar, HYD, 543234" 68 | } 69 | 70 | Web(Frontend tier) tier --> Load Balancer, Frontend Servers -> HTML, CSS and JS, ReactJS, AngularJS, 71 | 72 | MERN --> MongoDB, ExpressJS, ReactJS, NodeJS 73 | 74 | devops-practice --> joindevops(RHEL9 based) --> ec2-user, DevOps321 75 | 76 | Linux Server --> Physical Server 77 | 78 | show databases; --> dispays the schema/database available 79 | use ; --> you are using that schema 80 | show tables; --> display all the tables in the schema 81 | select * from table-name; --> display the data inside table 82 | 83 | #include --> our programming depends on this... so these are called as dependencies/libraries 84 | 85 | NodeJS --> package.json(build file) (contains dependencies/libraries required by NodeJS) 86 | Java --> pom.xml --> project version, description, dependencies/libraries mvn pacakge 87 | .NET --> msbuild --> project version, description, dependencies/libraries 88 | requirements.txt --> project version, description, dependencies/libraries pip install 89 | Makefile --> C language --> make 90 | 91 | dnf install nginx --> systemcl start nginx 92 | 93 | service --> it should run continously --> /etc/systemd/system --> create a .service file 94 | -------------------------------------------------------------------------------- /session-07.txt: -------------------------------------------------------------------------------- 1 | systemctl service? 2 | 3 | if you want your applications to run as a service, create a file with extension .service in /etc/systemd/system 4 | 5 | vim /etc/systemd/system/backend.service 6 | 7 | [Unit] 8 | Description = Backend Service 9 | 10 | [Service] 11 | User=expense 12 | Environment=DB_HOST="172.31.85.250" 13 | ExecStart=/bin/node /app/index.js 14 | SyslogIdentifier=backend 15 | 16 | [Install] 17 | WantedBy=multi-user.target 18 | 19 | browser: 49.204.161.202 --> public IP 20 | cmd: 192.168.0.107 --> private IP 21 | 22 | 23 | Nginx --> popular webserver and reverse proxy server 24 | 25 | proxy --> forward proxy and reverse proxy 26 | 27 | VPN forward proxy 28 | ========== 29 | Server is not aware that client is using VPN. Client is aware of VPN 30 | Traffic restrict, traffic monitoring 31 | Geolocation hiding 32 | Anonymous client identity 33 | Access private network/files 34 | 35 | Reverse proxy 36 | =========== 37 | Client is not aware of proxy. Server is aware of proxy. 
38 | Backend applications are behind reverse proxy servers for security and queing 39 | Cache servers 40 | 41 | Public IP vs Private IP 42 | Reverse proxy vs Forward proxy 43 | 44 | nginx home directory: /etc/nginx 45 | html directory: /usr/share/nginx/html 46 | nginx configuration: /etc/nginx/nginx.conf 47 | 48 | 0-65,535 = 65,536 ports 49 | 50 | 51 | proxy_http_version 1.1; 52 | 53 | location /api/ { proxy_pass http://172.31.88.35:8080/; } 54 | 55 | location /health { 56 | stub_status on; 57 | access_log off; 58 | } 59 | 60 | JoinDevOps AMI 61 | 62 | https://github.com/learndevopsonline/aws-image-devops-session.git -------------------------------------------------------------------------------- /session-08.txt: -------------------------------------------------------------------------------- 1 | What is DNS? 2 | How DNS works? 3 | 4 | public IP --> stop and start change in IP 5 | private IP --> but when terminate and recreate private IP changes 6 | 7 | human names, computers numbers 8 | 9 | whenever backend IP changes, I should edit systemctl file. deamon reload and restart the service 10 | 11 | word = meaning 12 | name = number 13 | facebook = IP 14 | 15 | ICANN --> Internet Corporation for assigned names and numbers --> countries, reputed organistions 16 | 17 | there are 13 servers in the world 18 | 19 | top level domains 20 | .telugu , .com , .in , .uk, .net, .edu, .gov, .us, .au, .org, .ai, .online 21 | 22 | .gov.in, .co.in --> sub level domain 23 | 24 | ICANN --> I am going to start .telugu domain. I need to complete all the process 25 | 26 | joindevops.telugu 27 | tfc.telugu 28 | 29 | domain registars(mediators) --> godaddy, hostinger, aws, gcp, azure 30 | 31 | joindevops --> joindevops.com(not available), try joindevops.telugu 32 | 33 | someone registered joindevops.telugu 34 | 35 | Hostinger updates Radix Registry about daws82s.online --> who bought this domain and nameservers 36 | 37 | nameservers = who managed this domain = records to the DNS 38 | 39 | A record = IP address 40 | 41 | change in NS --> Hostinger updates the change of Nameservers to .online TLD 42 | now aws manages my domain 43 | 44 | mysql.daws82s.online --> DNS resolver --> .online TLD --> provides nameservers to daws82s.online --> mysql.daws82s.online A record 45 | 46 | Record types 47 | ============= 48 | A --> points to IP address 49 | CNAME --> points to another domain 50 | MX --> mail records (info@joindevops.com) 51 | TXT --> Domain ownership validaton purpose 52 | NS --> nameservers 53 | SOA --> who is the authority of this domain 54 | 55 | What happens when we book domain? 56 | What happens when someone enter our domain in browser? 57 | How to become TLD? 58 | 59 | [Unit] 60 | Description = Backend Service 61 | 62 | [Service] 63 | User=expense 64 | Environment=DB_HOST="mysql.daws81s.online" 65 | ExecStart=/bin/node /app/index.js 66 | SyslogIdentifier=backend 67 | 68 | [Install] 69 | WantedBy=multi-user.target 70 | 71 | 72 | proxy_http_version 1.1; 73 | 74 | location /api/ { proxy_pass http://backend.daws81s.online:8080/; } 75 | 76 | location /health { 77 | stub_status on; 78 | access_log off; 79 | } 80 | 81 | http://daws81s.online/api/transaction 82 | 83 | http://backend.daws81s.online:8080/transaction 84 | 85 | 86 | http://daws81s.online/api/transaction --> send request to backend --> backend responds with data 87 | 88 | inode, symlink/softlink and hardlink 89 | 90 | what is inode? 
91 | 92 | inode stores the file type(file or folder), permissions, ownership, file size, timestamp, disk location(memory location) 93 | 94 | lrwxrwxrwx 1 root root 11 Dec 26 03:10 DbConfig1.js -> DbConfig.js 95 | l represents link file 96 | 97 | symlink is like shortcut it points to the original file. symlink inode and actual file inode is different. symlink breaks when actual file is deleted. symlink can be created to folders/directories 98 | 99 | hardlink inode is same as actual file. hardlink is useful for backup of the file. if original file is deleted hardlink remains same. we can't create hardlinks to folders/directories 100 | 101 | how do you findout hardlinks for a particular file? 102 | 103 | find / -inum "" 104 | 105 | -------------------------------------------------------------------------------- /session-09.txt: -------------------------------------------------------------------------------- 1 | ping ip 2 | telnet 3306 -> DB running but backend not able to connect DB == check DB security group ingress rules 3 | 4 | same server == localhost == 127.0.0.1 5 | 6 | HTTP Methods and status codes 7 | ============================ 8 | CRUD 9 | 10 | GET --> getting/read from server 11 | POST --> posting/create the information 12 | { 13 | amount: "200", 14 | desc: "travel" 15 | } 16 | PUT --> Update the information 17 | DELETE --> Delete the information 18 | 19 | 100 == 1XX == Informational codes 20 | 200 == 2XX == Success status codes 21 | 300 == 3XX == Redirectional 22 | 400 == 4XX == Client side error 23 | 500 == 5XX == Server side error 24 | 25 | backend.daws82s.onine --> 404 --> NOTFOUND --> Client side error 26 | 27 | 403 --> Forbidden --> You dont have access to that 28 | 401 --> Unauthorized --> you should login 29 | 405 --> HTTP POST, If you use GET --> Method not allowed 30 | 400 --> bad request --> check the payload data once again 31 | 32 | 500 --> Internal Server Error --> Server side error 33 | 502 --> Bad Gateway --> Frontend not able to connect backend 34 | 503 --> Service temporarily unavailable 35 | 36 | How to check memory of linux server? memory == RAM 37 | 38 | RAM vs ROM 39 | 40 | HD --> RAM --> User 41 | 42 | Swap (Reserved RAM from HD) 43 | 44 | free -h 45 | htop 46 | cat /proc/meminfo 47 | 48 | 49 | How do you list top 10 high memory process? 50 | ps aux --sort -%mem | head -n 10 51 | 52 | Disk usage? 53 | 54 | df -hT 55 | du -sh /* --> gives us the disk usage of files and folders in root directory 56 | 57 | Explain linux booting process 58 | 59 | https://www.youtube.com/watch?v=XpFsMB6FoOs 60 | 61 | 1-10 commands 62 | 63 | human errors 64 | time taking 65 | 66 | Shell Scripting 67 | ================= 68 | if you keep all your commands in a single file and execute that file --> Shell Scripting 69 | 70 | native linux scripting --> Linux/Shell commands 71 | 72 | 73 | Linux Server --> I need to fetch some info from AWS Cloud --> Python 74 | 75 | 1. Linux commands we use on daily basis 76 | 2. Forward proxy vs Reverse proxy 77 | 3. HTTP Methods and Status codes 78 | 4. inode, symlink vs hardlink 79 | 80 | -------------------------------------------------------------------------------- /session-10.txt: -------------------------------------------------------------------------------- 1 | Git --> Concept 2 | GitHub 3 | Gitlab 4 | Bitbucket 5 | azure repos 6 | AWS Code commit 7 | 8 | git repos --> Storing code 9 | 10 | 1. creation of repo 11 | 2. clone repo to our laptop 12 | git clone https://github.com/daws-78s/shell-script/blob/main/06-array.sh 13 | 3. 
we develop code 14 | 4. select some editor. vscode editor. visual studio code editor. free editor 15 | 5. add code to staging area 16 | 17 | git add 18 | git add . --> all files will be added to staging area 19 | 20 | 6. commit to local repo 21 | 22 | git commit -m "message" 23 | 24 | 7. git push oigin main 25 | 26 | SVN --> Sub version control --> Centralised 27 | 28 | Centralised vs Decentralised 29 | 30 | easy to collapse if at one place 31 | 32 | Decentralised --> Distributed economy, if one collpase no problem to country 33 | 34 | Version Control 35 | ================ 36 | History of changes 37 | Need to maintain multiple versions --> I need to track the changes --> Why I changed? When I changed? Who changed? 38 | 39 | 20-DEC-2024 We deployed app in prodction 40 | 21-DEC-2024 there is issue 41 | 42 | 19-DEC-2024 --> restore to this version 43 | 44 | Colloboration 45 | 46 | 47 | .sh --> shell script extension 48 | 49 | C shell, K Shell, Z Shell, Shell --> Bash 50 | 51 | #!/bin/bash, #/bin/sh 52 | 53 | Shebang --> It should be the first line of shell script. It is the interpreter to execute the commands and syntax inside shell script 54 | 55 | which ls 56 | /bin/ls 57 | 58 | sh 59 | bash 60 | 61 | variables 62 | ================ 63 | lets take x=1, y=0 64 | 65 | derive the formual 66 | finally submit values 67 | 68 | a centralise place to mention the values, if you change at one place, it will reflect at all the places where it is referred 69 | 70 | DRY == Don't repeat yourself 71 | 72 | 1. variables 73 | 2. data types 74 | 3. conditions 75 | 4. loops 76 | 5. functions 77 | 78 | arguements or args --> run time variables -> no need to edit script 79 | 80 | sh 04-variables.sh ramesh suresh 81 | 82 | -------------------------------------------------------------------------------- /session-11.txt: -------------------------------------------------------------------------------- 1 | int i=0 2 | var i=0 3 | boolean 4 | 5 | integer, float, boolean, string, array, arraylist, map, etc.. 6 | 7 | integer --> number 8 | float --> decimal number 9 | boolean --> true/false 10 | string --> text 11 | array --> (devops, aws, docker) 12 | arraylist --> [devops, aws, docker] 13 | map --> name: devops, duration: 120hrs 14 | 15 | 16 | addition of 2 numbers 17 | ===================== 18 | user must give number1 and number2 19 | 20 | add them, print the sum 21 | 22 | how do you run a command inside shell script and get the output 23 | $(date) 24 | 25 | list of values 26 | 27 | MOVIES=("pushpa" "rrr" "devara") 28 | 0 1 2 29 | size is 3... 30 | 31 | 32 | Special variables 33 | ====================== 34 | $1, $2, $3 35 | All variables passed: $@ 36 | number of variables: $# 37 | script name: $0 38 | present working directory: $PWD 39 | home directory of current user: $HOME 40 | which user is running this script: $USER 41 | process id of current script: $$ 42 | process id of last command in background: $! 43 | 44 | 45 | Conditions 46 | ====================== 47 | 48 | print holiday or not 49 | 50 | 1. I need to find what is today 51 | 2. if today is not sunday, I have to go school 52 | 3. otherwise today is holiday 53 | 54 | 55 | print a number is greater than 100 or not 56 | 57 | 1. get the input number 58 | 2. check it is more than 100 or not 59 | 3. if more than 100, print more than 100 60 | 4. 
otherwise print less than or equal to 100 61 | 62 | if(expression){ 63 | execute this if expression is true 64 | } 65 | 66 | 67 | if(expression){ 68 | execute this if expression is true 69 | } 70 | else{ 71 | execute this if expression is false 72 | } 73 | 74 | if [ expression ] 75 | then 76 | statements 77 | else 78 | statements 79 | fi 80 | 81 | > 82 | 83 | install mysql through shell script 84 | ================================== 85 | dnf install mysqlll -y 86 | 87 | check if the user running the script is root user or not 88 | if root user 89 | allow him 90 | else 91 | show the error properly and exit the script 92 | run install command 93 | check installation is success 94 | if success, our task is done 95 | if not success, throw the error message 96 | 97 | exit status 98 | ============= 99 | How can you check previous command is success or not in shell script? 100 | 101 | by checking the exit status, if exit status is 0 it is success, otherwise it is failure 102 | 103 | $? 104 | 105 | 20 things to fill 106 | 107 | reduce number of lines, get the same productivity 108 | 109 | < 110 | 111 | mysql -h -uroot -pExpenseApp@1 < /app/schema/backend.sql 112 | 113 | -------------------------------------------------------------------------------- /session-12.txt: -------------------------------------------------------------------------------- 1 | Functions 2 | 3 | takes some input and do something 4 | 5 | DRY --> Don't repeat yourself 6 | 7 | repeated code we can keep in function, give it a name. Whenever you want you can call that function 8 | 9 | FUNC_NAME(){ 10 | 11 | code related to function 12 | } 13 | 14 | FUNC_NAME # calling function 15 | 16 | args --> sh script-name.sh arg1 arg2 --> $1=arg1 $2=arg2 17 | 18 | FUNC_NAME input1 input2 19 | 20 | FUNC_NAME(){ 21 | $1=input1 22 | $2=input2 23 | code related to function 24 | } 25 | 26 | What it knows and what it does? 
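A minimal sketch of a function that takes inputs, in the spirit of the notes above (the function name, messages and the nginx install are only illustrations, not fixed by the notes):

#!/bin/bash

VALIDATE(){
    # $1 = exit status of the previous command, $2 = task description
    if [ $1 -ne 0 ]
    then
        echo "$2 ... FAILURE"
        exit 1
    else
        echo "$2 ... SUCCESS"
    fi
}

dnf install nginx -y
VALIDATE $? "Installing nginx"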
27 | 28 | Colors --> success(green), failure(red), already installed(yellow) 29 | 30 | R --> 31 31 | G --> 32 32 | Y --> 33 33 | 34 | \e[31m 35 | 36 | logs --> logging the result to some file 37 | 38 | redirectors 39 | 40 | < --> input 41 | > --> output 42 | 43 | 1 --> success 44 | 2 --> failure 45 | & --> both success and failure 46 | 47 | /var/logs/shellscirpt-logs/13-logs.sh.log 48 | 49 | script-name.log 50 | 51 | 13-logs.sh --> 13-logs 52 | 13-logs-01-01-2025.log 53 | 54 | variables 55 | data types 56 | conditions 57 | functions 58 | loops 59 | 60 | for(int i=0; i<100; i++){ 61 | print $i 62 | } 63 | 64 | for i in {0..1000} 65 | do 66 | echo $i 67 | done 68 | 69 | sh install-script git mysql gcc nginx 70 | 71 | package=git 72 | package=mysql 73 | -------------------------------------------------------------------------------- /session-13.txt: -------------------------------------------------------------------------------- 1 | variables 2 | data types 3 | conditions 4 | functions 5 | loops 6 | 7 | plain server --> app runtime(nodejs), create user, create app folder, download the code, install dependencies, create systemctl services, start the application 8 | 9 | check user has root access or not 10 | store logs 11 | try to use colors 12 | 13 | install mysql server 14 | enable it 15 | start it 16 | set the root password 17 | 18 | idempotency --> even you run any number of times, it should not change the result 19 | 20 | HTTP GET --> idempotent 21 | HTTP POST --> chance of duplicates or errors, we need to handle this in programming 22 | HTTP PUT --> no problem, but we can say it is already updated 23 | HTTP DELETE --> chance of error, resource not found. Handle this in scripting/programming 24 | 25 | deployment --> updating new version 26 | 27 | remove old code 28 | download new code 29 | install dependencies 30 | restart the server --> stop and start 31 | 32 | delete old logs in linux server 33 | ------------------------------- 34 | 14 days log files --> will be in server 35 | archive and move to storage servers 36 | 37 | delete files older than 14 days from now 38 | only delete .log files 39 | 40 | #!/usr/bin/bash 41 | 42 | 43 | file=temp.txt 44 | while read -r line; 45 | do 46 | echo $line 47 | done < “$file” 48 | 49 | 1. read the file 50 | 2. count the number of words 51 | 3. find top 5 52 | 53 | -------------------------------------------------------------------------------- /session-14.txt: -------------------------------------------------------------------------------- 1 | app --> app logs 2 | daily schedule few jobs, they run in particular time every, archieve the logs and move it to seperate folder 3 | 4 | source directory 5 | zip the files 6 | destination directory 7 | how many days old logs --> optional. If user provides number of days we take them. otherwise we take 14 days by default 8 | 9 | 1. user may forget to provide source and dest directory. throw the error with proper usage 10 | 2. user may forget one of these 2 parameters. throw the error with proper usage 11 | 3. user may give both. but they may not exist. throw the error with proper usage 12 | 4. find the files 13 | 5. if files are there zip it 14 | 6. if zip success, then remove the files 15 | 16 | $# --> number of parameters 17 | 18 | find -name "*.log" +mtime 19 | 20 | if there are files, I can zip. If there are no files. I can't zip 21 | 22 | app-logs-$TIMESTAMP.zip 23 | 24 | I should check zip is success or not, if success then I should delete the files. 
if failure I should throw the error 25 | 26 | crontab --> schedule the scripts as per your timeline 27 | * * * * * 28 | 29 | home/ec2-user/shellscript-logs//home/ec2-user/shell-script/18-backup-2025-01-06-02-54-01.log 30 | 31 | 18-backup.sh 32 | awk 33 | 34 | ls --> C language 35 | 36 | ./backup.sh +x 37 | 38 | backup -> /bin 39 | 40 | -------------------------------------------------------------------------------- /session-15.txt: -------------------------------------------------------------------------------- 1 | I recently developed a backup script for our linux servers. I installed the script in /usr/local/bin and tested in the server. It worked well. So I configured it into crontab. But next day morning when I come to office I checked it is failed.. 2 | 3 | /home/ec2-user/.local/bin: 4 | /home/ec2-user/bin: 5 | /usr/local/bin: --> Customised commands 6 | /usr/bin: --> System commands for normal user 7 | /usr/local/sbin: --> Customised super user commands 8 | /usr/sbin --> System super user commands 9 | 10 | /bin --> softlink to /usr/bin.. So keep the commands in /usr/bin 11 | 12 | hash -r --> reload the path cache 13 | 14 | /usr/bin:/bin 15 | 16 | crontab environment is minimal, it is not the same environment as when I run manual 17 | 18 | 1. monitor linux servers disk usage, send an email if any disk is using more than 80% 19 | 20 | from email, to email 21 | 22 | joindevops@gmail.com 23 | lvbhmofihsifwyen 24 | 25 | How do you run other scripts from current shell script? 26 | sh it runs in seperate process, cant access variables of script1 27 | 28 | source ./script-name 29 | 30 | 2nd script executes in the process of script-1. so we can access script2 variables also -------------------------------------------------------------------------------- /session-16.txt: -------------------------------------------------------------------------------- 1 | Disadvantages of shell 2 | ====================== 3 | Error Handling 4 | Not idempotent 5 | Homogenous --> only works for a particular distro 6 | Not scalable to many servers 7 | Password security 8 | syntax is not easy 9 | 10 | Configuring Server --> plain server to usable server 11 | 12 | Configuration Management tools --> Chef, puppet, rundeck, Ansible 13 | 14 | push vs pull 15 | 16 | pull 17 | ======= 18 | Delhi --> Hyderabad (DTDC) 19 | 20 | Me --> HYD DTDC 21 | 22 | time, resources like (person, fuel, money) 23 | traffic increase 24 | 25 | schedule agents once in 30min, they should connect to server and check for new configuration 26 | internet traffic 27 | power 28 | server resources 29 | extra agent 30 | 31 | push 32 | ====== 33 | Delhi --> Hyderabad (DTDC) 34 | 35 | HYD DTDC --> Me 36 | 37 | Ansible uses SSH protocol, no need of agent i.e agentless 38 | Ansible also implmented pull based for few usecases. 39 | 40 | adhoc commands 41 | ==================== 42 | ansible all -i , -e ansible_user=ec2-user -e ansible_password=DevOps321 -m ping 43 | 44 | inventory --> List of IP address ansible connect to 45 | 46 | module 47 | 48 | Linux == Command == Ansible == Module 49 | 50 | CommandName == Module Args Inputs 51 | 52 | dnf install nginx -y 53 | 54 | -b --> become root 55 | 56 | systemctl start nginx == service -a "name=nginx state=started" 57 | 58 | keep all the commands in a file == shell script 59 | keep all the modules in a file == playbook 60 | 61 | YAML --> Yet Another Markup Language 62 | Hyper Text Markup Language 63 | 64 |

<h1>Hello World</h1>
-> Hello World as heading 65 | 66 | XML 67 | 68 | Banks --> 100 yr back 69 | 70 | Name, ACC, Branch, date, Money 71 | 72 | Forms --> deposit, withdrawal, etc.. == template 73 | 74 | Name --> Sivakumar Reddy 75 | ACC --> 123456 76 | Branch --> HYD 77 | date --> 01-01-25 78 | money --> 5000 79 | 80 | DTO --> data transfer objects 81 | 82 | XML --> Extensive Markup Language 83 | 84 | sivakumar@gmail.com 85 | admin123 86 | 87 | 88 | JSON --> Java script object notation 89 | { 90 | "username": "sivakumar@gmail", 91 | "password": "admin123" 92 | } 93 | 94 | YAML --> Yet Another Markup language 95 | 96 | 97 | 98 | sivakumar 99 | info@joindevops.com 100 | 101 | 102 | 123 103 | gandhi nagar 104 | 105 | 106 | 123 107 | gandhi nagar 108 | 109 | 110 | 111 | 112 | inventory --> List of hosts 113 | 114 | dnf install nginx -y 115 | 116 | 500-1500 servers 117 | linux and windows 118 | patches --> snapshot before update. prechecks, patch, reboot, postchecks 119 | tool installations 120 | 121 | shell, copy, command, file, variables, conditions, loops, functions, etc.. 122 | roles and vaults -------------------------------------------------------------------------------- /session-17.txt: -------------------------------------------------------------------------------- 1 | variables 2 | data types 3 | conditions 4 | functions 5 | loops 6 | 7 | variable have a name we can define, it can hold value. You can use it wherever you want. if you change the value it will reflect everywhere. it is DRY 8 | 9 | COURSE=DevOps 10 | 11 | $COURSE or ${COURSE} 12 | 13 | vars: 14 | COURSE: "DevOps with AWS" 15 | DURATION: 120HRS 16 | TRAINER: Sivakumar 17 | 18 | 19 | "{{COURSE}}" 20 | 21 | #1. Command line or args 22 | #2. Task level 23 | #3. Files 24 | #4. Prompt 25 | #5. Play 26 | #6. Inventory 27 | #7. Roles -------------------------------------------------------------------------------- /session-18.txt: -------------------------------------------------------------------------------- 1 | facts == variables 2 | ansible server --> connecting node --> fetch all the node data 3 | 4 | ansible.builtin.package --> if RHEL9 it runs dnf in the background, if ubuntu it runs apt-get 5 | 6 | loops 7 | ============= 8 | loop 9 | 10 | dnf install mysql -y 11 | 12 | dnf install git -y 13 | 14 | functions == filters 15 | ============== 16 | we dont have access to create functions. we can use default filters available in ansible.. 17 | 18 | filters == data manipulations 19 | 20 | 453.254.2365.213 21 | 22 | 255.255.255.255 23 | 24 | zip --> install zip 25 | 26 | what if module is not available? 27 | 28 | shell and command modules 29 | 30 | ansible.builtin.shell vs ansible.builtin.command 31 | 32 | shell --> it is like you are logging inside the server and executing command... We can access variables, we can use redirections, we can use pipes 33 | 34 | command --> this is like running commands from outside, you will not get access to shell variables, redirections, pipes, etc. 35 | 36 | simple command --> you can use command module, it is more secure. shell is for complex commands and less secure 37 | 38 | ssh ec2-user@IP command 39 | 40 | VAR_NAME=$() 41 | 42 | deployment and configuration management -------------------------------------------------------------------------------- /session-19.txt: -------------------------------------------------------------------------------- 1 | unarchieve --> ansible by default thinks src is in ansible controll machine. 
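A quick ad-hoc sketch of the same idea (inventory file name and paths are placeholders): with remote_src=yes the unarchive module expects the archive on the managed node; without it, the archive must already be on the Ansible control machine.

ansible all -i inventory.ini -b -m ansible.builtin.unarchive -a "src=/tmp/backend.zip dest=/app remote_src=yes"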
2 | 3 | Ansible Roles --> DRY (don't repeat yourself) code-reuse 4 | 5 | Ansible config --> /etc/ansible/ansible.cfg 6 | 7 | 1. ANSIBLE_CONFIG (environment variable if set) 8 | 9 | 2. ansible.cfg (in the current directory) 10 | 11 | 3. ~/.ansible.cfg (in the home directory) 12 | 13 | 4. /etc/ansible/ansible.cfg 14 | 15 | user, group, roles and permissions 16 | user --> access 17 | service --> service 18 | -------------------------------------------------------------------------------- /session-20.txt: -------------------------------------------------------------------------------- 1 | Ansible Roles 2 | ================== 3 | code reuse 4 | 5 | DRY --> Don't repeat yourself 6 | 7 | A standard structure of writing playbooks that contains tasks, variables, dependencies, files, templates, libraries. We can reuse roles. 8 | 9 | roles/role-name 10 | 11 | tasks --> You can keep all your tasks here, ansible automatically loads them 12 | main.yaml 13 | vars --> variables required for this role 14 | main.yaml 15 | templates --> you can keep variables in the file, ansible replace the value at runtime. 16 | any file 17 | files --> We can keep files in this folder 18 | any file names 19 | Handlers --> notifiers when some change event is happened 20 | defaults/ # 21 | main.yml # <-- default lower priority variables for this role 22 | meta --> dependencies of this role 23 | main.yaml 24 | library/ # roles can also include custom modules 25 | 26 | 27 | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 28 | fatal: [backend.daws82s.online]: FAILED! => {"changed": false, "msg": "Could not find or access 'backend.service'\nSearched in:\n\t/home/ec2-user/expense-ansible-roles/roles/backend/files/backend.service\n\t/home/ec2-user/expense-ansible-roles/roles/backend/backend.service\n\t/home/ec2-user/expense-ansible-roles/roles/backend/tasks/files/backend.service\n\t/home/ec2-user/expense-ansible-roles/roles/backend/tasks/backend.service\n\t/home/ec2-user/expense-ansible-roles/files/backend.service\n\t/home/ec2-user/expense-ansible-roles/backend.service on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 29 | 30 | 31 | ansible handlers are notifiers. when some change event happened in one task, we can trigger other tasks through handlers 32 | 33 | 3 components -> Mysql, backend and frontend 34 | 12 microservice 35 | 36 | deployment 37 | =========== 38 | there will be one folder where code exist 39 | stop the server 40 | remove old code 41 | download new code 42 | restart the server 43 | 44 | 1. remove directory 45 | 2. re create directory 46 | 3. download code 47 | 48 | main.yaml 49 | name 50 | hosts 51 | become 52 | roles: 53 | - frontend 54 | 55 | roles/frontend 56 | tasks 57 | main.yaml --> install and service 58 | 59 | 60 | How do you controll tasks in ansible.. few tasks should run, few tasks should not run... 
61 | 62 | ansible tags 63 | 64 | 100 covers white --> more, reliance, dmart -------------------------------------------------------------------------------- /session-21.txt: -------------------------------------------------------------------------------- 1 | CRUD 2 | 3 | SDK --> Software development kit 4 | API --> Application programming interface 5 | 6 | AWS --> Open libraries(Java, Python, NodeJs, C++) --> You can CRUD on your AWS 7 | 8 | 3 servers --> mysql, backend, frontend 9 | r53 --> 3 records private ip, 1 record public ip 10 | 11 | aws configure 12 | 13 | ansible vault 14 | ================ 15 | ansible-vault create .yaml 16 | 17 | create servers 18 | 19 | de commissiong ansible vault and replace with secret manager or paramter store 20 | 21 | code and configuration --> de coupling -------------------------------------------------------------------------------- /session-22.txt: -------------------------------------------------------------------------------- 1 | Ansible Dynamic inventory 2 | ========================== 3 | inventory --> a list of hosts and groups, this is static 4 | 5 | Dynamic --> when traffic increases, servers should be increased, when traffic decrased servers should be terminated 6 | 7 | dynamic inventory 8 | ================= 9 | ansible can connect to dynamic environments like cloud aws, azure, gcp, etc. It has to query the servers based on the parameters we give 10 | 11 | ssh-keygen -t rsa -------------------------------------------------------------------------------- /session-23.txt: -------------------------------------------------------------------------------- 1 | Improvements 2 | ============= 3 | Ansible vault to SSM paramter/secret integration with Ansible 4 | IAM Users migration to IAM Roles --> 5 | 6 | shell, ansible, terraform,sql, perl, customised scriptings 7 | 1. syntax --> variables, data types, conditions, functions and loops, error handling 8 | 2. process understanding 9 | 10 | CRUD --> Ansible is failed to manage the infrastrucutre. For example if some manual edits happen ansible can't recognise those. It creates duplicate resources. So ansible is not perfect in state management 11 | 12 | Ansible --> with in the server, Ansible is perfect 13 | 14 | Manual infra 15 | ============= 16 | 1. time 17 | 2. human errors 18 | 3. cost 19 | 4. reusable effort 20 | 5. don't know who did mistake 21 | 6. modifications are not easy 22 | 7. not scalable 23 | 24 | IaaC --> Infra as a code 25 | 26 | version control --> you can version control your infra. expense-infra-1.0 expense-infra-2.0 27 | restore if something goes wrong --> easy to restore if something goes wrong 28 | Consistent infra --> you can create same infra across all environments 29 | Inventory management --> you can understand the resources by seeing terraform files 30 | dependency management --> terraform can understand the dependency between resources. 
First it will create dependencies and then actual resources 31 | code reusability --> terraform modules, you can create similar infra for any number of projects using modules 32 | Cost --> automation of CRUD --> we can save costs 33 | state management 34 | declarative --> easy syntax 35 | Git --> you can review code before apply 36 | 37 | ansible modules 38 | terraform providers --> provider means, a system which terraform can connect and create resources 39 | 40 | terraform aws provider 41 | terraform file extensions are .tf 42 | terraform uses HCL --> Hashicorp configuration language 43 | 44 | resource "type-of-resource" "name-of-resource" { 45 | your-parameters 46 | } 47 | 48 | resource "aws_instance" "this" { 49 | 50 | } 51 | 52 | terraform init -------------------------------------------------------------------------------- /session-24.txt: -------------------------------------------------------------------------------- 1 | 1. create user in IAM with admin access 2 | 2. install aws cli v2 3 | 3. run aws configure 4 | 5 | variables 6 | data types 7 | conditions 8 | functions 9 | loops 10 | 11 | variables.tf --> same name is not mandatory 12 | 13 | variable "" { 14 | type = 15 | default = "" 16 | } 17 | number, string, map, list, bool 18 | 19 | Project = Expense 20 | Component = Backend 21 | Environment = DEV/UAT/QA/PROD 22 | 23 | How do you override default variable values in terraform 24 | terraform.tfvars --> You can override default values in terraform 25 | 26 | cmd, tfvars, env variables, default 27 | 28 | TF_VAR_ 29 | 30 | 1. command line --> -var "=" 31 | 2. tfvars 32 | 3. env var 33 | 4. default values 34 | 5. user prompt 35 | 36 | if else, when 37 | 38 | if(expression){ 39 | this statement run if expression is true 40 | } 41 | else{ 42 | this statement run if expression is false 43 | } 44 | 45 | expression ? "this runs if true" : "this runs if false" 46 | 47 | if dev environment t3.micro, if prod env you can run t3.small 48 | 49 | loops 50 | ========= 51 | 1. count based loops --> iterate over list type of variables 52 | 2. for each loops 53 | 3. dynamic blocks 54 | 55 | I want to create 56 | 1. 3 ec2 instances --> mysql, backend, frontend 57 | 2. 3 r53 private ip records 58 | 3. 1 r53 public ip record 59 | 60 | 61 | 62 | count.index --> 0, 1, 2 --> size=3 63 | 64 | mysql.daws82s.online 65 | backend.daws82s.online 66 | frontend.daws82s.online 67 | 68 | interpolation --> you can concat variables with text 69 | 70 | you can't create custom functions in terraform 71 | 72 | merge 73 | course1 = { 74 | name = "devops", 75 | duration = "120hrs" 76 | } 77 | 78 | course2 = { 79 | name = "terraform", 80 | duration = "120hrs" 81 | } 82 | 83 | merge(course1, course2) 84 | 85 | name = "terraform" 86 | duration = "120hrs -------------------------------------------------------------------------------- /session-25.txt: -------------------------------------------------------------------------------- 1 | data sources 2 | ================== 3 | data sources are used to query existing information from the provider. 4 | devops-practice --> ami-356dgtr4367yt --> ami id changes when new updates are posted. 5 | 6 | data "" "" { 7 | 8 | } 9 | 10 | output blocks are used to print the information. It will be used in module development too. 11 | 12 | locals 13 | ================== 14 | locals are used to run the expressions or functions and save the results to variable 15 | 16 | locals are used to store expressions, it can even store simple key value pairs just like variables. 
17 | variables can't store expressions. variable can't refer other variable. locals can refer other locals or variables 18 | variables can be overriden. locals can't be overriden 19 | 20 | state management 21 | ================== 22 | declared/desired infra ==> .tf files. Whatever user write in tf files, that is what user wants 23 | actual infra ==> what terraform is creating in provider 24 | 25 | desired infra == actual infra 26 | 27 | terraform.tfstate ==> it is a file terraform creates to know what it is created. this is actual infra created by terraform 28 | 29 | someone changed the name of ec2 manually inside console 30 | 31 | terraform plan or terraform apply 32 | 33 | terraform reads state file and then compare that with actual infra using provider 34 | 35 | if you update tf files.... 36 | 37 | terraform.tfstate --> expense-dev-backend 38 | compared with tf files --> expense-dev-backend-changed 39 | 40 | if you update few paramters, resources will not be created again it will just update 41 | but few parameters if you update, we are forced to recreate resource 42 | 43 | tfstate is very important file, we need to secure it 44 | 45 | clone terraform repo --> terraform apply 46 | duplicate resources or errors 47 | 48 | in colloboration environment we must main state file remotely. locking is also important, so that we can prevent parellel executions 49 | 50 | AWS S3 bucket -------------------------------------------------------------------------------- /session-26.txt: -------------------------------------------------------------------------------- 1 | dynamic blocks 2 | ================ 3 | loops are used to create multiple resources. 4 | dynamic blocks are used to create multiple blocks inside a resource.. 5 | 6 | for_each loops 7 | =============== 8 | 1. count based loop --> use it to iterate lists 9 | 2. for each loop --> use it to iterate maps 10 | 3. dynamic blocks 11 | 12 | mysql --> t3.small 13 | backend --> t3.micro 14 | frontend --> t3.micro 15 | 16 | mysql -> mysql.daws82s.online --> privateip 17 | backend --> backend.daws82s.online --> privateip 18 | frontend --> daws82s.online --> public ip 19 | 20 | provisioners 21 | =============== 22 | provisioners are used to take some action either locally or remote when terraform created servers.. 2 types of provisioners are there 23 | 1. local-exec 24 | 2. remote-exec 25 | 26 | we can use provisioners either creation time or destroy time 27 | 28 | local --> where terraform command is running that is local.. 29 | remote --> inside the server created by terraform 30 | 31 | ansible-playbook -i inventory backend.yaml 32 | 33 | 34 | multiple environments using terraform 35 | ===================================== 36 | 1. tfvars --> used to override default values 37 | 2. workspaces 38 | 3. seperate codebase 39 | 40 | DEV and PROD, remote state use 41 | 42 | expense-dev-mysql --> mysql-dev.daws82s.online 43 | expense-dev-backend --> backend-dev.daws82s.online 44 | expense-dev-frontend --> frontend-dev.daws82s.online 45 | 46 | expense-prod-mysql --> mysql-prod.daws82s.online 47 | expense-prod-backend --> backend-prod.daws82s.online 48 | expense-prod-frontend --> frontend-prod.daws82s.online 49 | 50 | expense-dev or expense-prod -------------------------------------------------------------------------------- /session-27.txt: -------------------------------------------------------------------------------- 1 | Why terraform? 
advantages 2 | variables 3 | variables.tf 4 | terraform.tfvars 5 | command line 6 | ENV variables 7 | conditions -> expression ? "true-value" : "false-value" 8 | loops --> count based, for each, dynamic 9 | functions 10 | data sources --> query existing information 11 | output 12 | locals --> store expressions in a variable 13 | state and remote state with locking 14 | provisioners --> local-exec and remote-exec 15 | multiple environments --> tfvars 16 | 17 | 3 ec2, 3 r53 records 18 | ==================== 19 | expense-mysql-dev --> mysql-dev.daws82s.online 20 | expense-backend-dev --> backend-dev.daws82s.online 21 | expense-frontend-dev --> frontend-dev.daws82s.online 22 | allow-tls-dev 23 | 24 | expense-mysql-prod --> mysql-prod.daws82s.online 25 | expense-backend-prod --> backend-prod.daws82s.online 26 | expense-frontend-prod --> daws82s.online --> public IP 27 | allow-tls-prod 28 | 29 | variables and tfvars 30 | 31 | diff bucket and diff dynamodb table for diff environments 32 | 33 | terraform init -reconfigure -backend-config=dev/backend.tf 34 | 35 | if instance_name is frontend and environment is prod then name should be daws82s.online 36 | instance_name-environment.daws82s.online 37 | 38 | instance_name is frontend and environment is prod --> and condition 39 | 40 | terraform workspaces 41 | ==================== 42 | terraform.workspace --> prod 43 | 44 | terraform workspace select prod 45 | terraform plan 46 | terraform apply -auto-approve 47 | 48 | 1. tfvars 49 | 2. workspaces 50 | 3. maintain different repos for diff environment 51 | 52 | terraform-expense-dev 53 | terraform-expense-prod 54 | 55 | disadvantages 56 | ================ 57 | should be very careful --> because same code to prod also, any mistake in dev causes confusion in prod 58 | should have full expertise, too much testing 59 | 60 | advantages 61 | ================ 62 | code reuse 63 | 64 | disadvantages 65 | ================ 66 | multiple repos to manage 67 | may be we need more employees 68 | 69 | advantages 70 | ================ 71 | clear isolation between environments, no confusion 72 | 73 | 74 | terraform modules 75 | ======================== 76 | variables, functions, ansible roles, locals 77 | 78 | modules --> it is like functions, you can pass inputs you will get infra 79 | code reuse 80 | enforce standards and best practices 81 | centralised place to updates -------------------------------------------------------------------------------- /session-28.txt: -------------------------------------------------------------------------------- 1 | reusability 2 | maintainability 3 | standards 4 | consistent infra across organisation 5 | 6 | VPC 7 | ===== 8 | Virtual private cloud. A isolated project space where we can create services for a project. We wil have full control and access on this 9 | 10 | on-premise data centers 11 | physical space 12 | physical security 13 | networking 14 | power 15 | firewalls 16 | linux admins 17 | n/w admins 18 | storage backups 19 | 20 | We need proper society space to construct house. 21 | 22 | village --> name, pincode 23 | street --> name, street number 24 | house --> C/O name, house number. Main gate 25 | 26 | VPC --> Virtual private cloud 27 | subnets 28 | igw --> to provide internet connection 29 | databases --> no outside person should have access to DB 30 | frontend application --> public, anyone internet can open 31 | routes 32 | HA --> High availability 33 | region --> min 2AZ(data center) 34 | 35 | IP Address --> 2^32 36 | 0.0.0.0 37 | . 38 | . 39 | . 40 | . 
41 | 255.255.255.255 --> each octate 8 bits 42 | 43 | 192.145.34.56 44 | 45 | 500082 --> It represents one entire village 46 | 47 | CIDR --> Classless inter domain routing 48 | 49 | 10.0.0.0/16 -> first 2 octates are blocked for network bits. 50 | 51 | 10.0.0.0 52 | 10.0.0.1 53 | 10.0.0.2 54 | 10.0.0.3 55 | .. 56 | . 57 | . 58 | 10.0.0.255 59 | 10.0.1.0 60 | 10.0.1.1 61 | . 62 | . 63 | . 64 | 10.0.1.255 65 | 66 | 256*256 -> 65,536 67 | 68 | subnet --> CIDR --> 10.0.0.0/24 --> 24 bits or 3 octates blocked. 256 IP addresses are possible 69 | 1a --> 10.0.0.0/24 --> 256 IP addresses 70 | 1b --> 10.0.1.0/24 --> 256 IP addresses 71 | 72 | 10.0.1.0 73 | 10.0.1.1 74 | 10.0.1.2 75 | . 76 | . 77 | . 78 | 10.0.1.255 79 | 80 | 10.0.22.145 --> database 1b 81 | 82 | NAT --> if you want to enable egress internet to the servers in private subnet, you should create NAT gateway in public subnet and provide route to private subnets 83 | if traffic coming to server --> ingress 84 | if server is sending traffic to internet --> egress 85 | 86 | 87 | static(elastic) IP --> 1000 88 | 89 | VPC --> 10.0.0.0/16 90 | IGW --> Attach to VPC 91 | Public subnets --> 10.0.1.0/24 10.0.2.0/24 92 | Public Subnets --> Public Route --> 0.0.0.0/0 --> IGW 93 | 94 | Private Subnets --> 10.0.11.0/24 10.0.12.0/24 95 | Private Subnets --> Private Route --> 0.0.0.0/0 --> NAT 96 | 97 | Database Subnets --> 10.0.21.0/24 10.0.22.0/24 98 | Database Subnets --> Database Route --> 0.0.0.0/0 --> NAT 99 | 100 | for humans - or _ 101 | for programs _ 102 | 103 | joindevops --> expense, roboshop 104 | 105 | project-name-environment 106 | 107 | Project = expense 108 | Environment = dev 109 | Terraform = true 110 | Name = expense-dev 111 | 112 | 40.25.35.98 --> Home N/W --> Google N/W 113 | 114 | IP = N/W + Host ID 115 | 116 | dnf install mysql-server --> my EC2 is requesting internet to provide mysql-server -------------------------------------------------------------------------------- /session-29.txt: -------------------------------------------------------------------------------- 1 | VPC --> CIDR 2 | Subnets 3 | public 4 | private 5 | database 6 | IGW 7 | Route table 8 | Public 9 | Private 10 | Database 11 | NAT 12 | Routes 13 | Public --> 0.0.0.0/0 --> IGW 14 | Private --> 0.0.0.0/0 --> NAT 15 | Database --> 0.0.0.0/0 --> NAT 16 | 17 | 2 subnets 18 | data.aws_availability_zones.available.names 19 | 20 | [ 21 | + "us-east-1a", 22 | + "us-east-1b", 23 | + "us-east-1c", 24 | + "us-east-1d", 25 | + "us-east-1e", 26 | + "us-east-1f", 27 | ] 28 | 29 | Peering 30 | ========== 31 | village-A pincode 32 | village-B pincode 33 | pins should be different. road should be there between 2 villages. 34 | 35 | by default 2 VPC in AWS are not connected. 36 | VPC peering is the way of connecting 2 VPC. CIDR should be different. Routes also should be there between 2 VPC route tables. 37 | 38 | same account, same region 2 VPC 39 | same account, different region VPC 40 | diff accounts, same region 41 | diff accounts, diff region 42 | 43 | expense-dev --> requestor 44 | default --> acceptor 45 | 46 | village-A --> village-B 47 | village-B pincode is the destinatio -------------------------------------------------------------------------------- /session-30.txt: -------------------------------------------------------------------------------- 1 | ["apple","orange","banana"] --> list 2 | ["apple"] --> list 3 | 4 | 1. custom 5 | 2. 
open source 6 | 7 | custom 8 | 9 | advantages 10 | ========== 11 | we know what we created and what we want 12 | 13 | disadvantages 14 | ========== 15 | you need to write everything from the scratch 16 | 17 | 18 | open source 19 | =========== 20 | advantages 21 | ----------- 22 | everything is ready 23 | 24 | disadvantages 25 | ---------- 26 | we dont know what is inside 27 | we have to wait for the fix if something is wrong, we must update when they updated. 28 | 29 | 30 | # expense-dev-mysql 31 | resource "aws_security_group" "main" { 32 | name = local.sg_final_name 33 | description = var.sg_description 34 | vpc_id = var.vpc_id 35 | 36 | ingress { 37 | from_port = 80 38 | to_port = 80 39 | protocol = "-1" 40 | cidr_blocks = ["0.0.0.0/0"] 41 | ipv6_cidr_blocks = ["::/0"] 42 | } 43 | 44 | egress { 45 | from_port = 0 46 | to_port = 0 47 | protocol = "-1" 48 | cidr_blocks = ["0.0.0.0/0"] 49 | ipv6_cidr_blocks = ["::/0"] 50 | } 51 | 52 | tags = merge( 53 | var.common_tags, 54 | var.sg_tags, 55 | { 56 | Name = local.sg_final_name 57 | } 58 | ) 59 | } 60 | 61 | 62 | resource "aws_security_group_rule" "example" { 63 | type = "ingress" 64 | from_port = 0 65 | to_port = 65535 66 | protocol = "tcp" 67 | cidr_blocks = [aws_vpc.example.cidr_block] 68 | ipv6_cidr_blocks = [aws_vpc.example.ipv6_cidr_block] 69 | security_group_id = "sg-123456" 70 | } 71 | 72 | Our junior engineer developed a sg module with ingress rules from the user. Module users started using it, for another requirement they added new firewall seperately. When they run module again our module deleted newly added rule. So we decided not to add sg ingress rules in module development. -------------------------------------------------------------------------------- /session-31.txt: -------------------------------------------------------------------------------- 1 | VPC --> VPC_ID, subnet_ids 2 | SG -> refer VPC params and update sg ids in store 3 | 4 | ["subnet-76dghye567vf","subnet-76dghye567vf"] --> terraform(List) 5 | 6 | subnet-76dghye567vf,subnet-76dghye567vf --> AWS(StringList), Terraform(String) 7 | 8 | private, datasubnet --> no public IP 9 | 10 | NAT --> only outgoing traffic 11 | 12 | Bastion/jump host --> one EC2 in public subnet. 
from this host to private hosts 13 | 14 | ssh ec2-user@private-ip 15 | DevOps321 16 | 17 | 18 | Clients --> Delivery Manager 19 | 20 | frontend --> frontend team 21 | backend --> backend team 22 | DB --> db team 23 | devops and cloud --> devops team 24 | 25 | HR -> recruit more members --> JD 26 | 27 | Load Balancer --> DM 28 | Listener --> listening to client 29 | HTTP --> 80 30 | HTTPS --> 443 31 | rules 32 | ====== 33 | if frontend --> frontend team 34 | 35 | if frontend --> frontend target group 36 | 37 | target group --> frontend team 38 | health check --> which instances are in running condition 39 | 40 | group of ec2 instances --> frontend target group 41 | 42 | http://daws81s.online --> IP port no 80 43 | https://daws81s.online --> IP port no 443 44 | 45 | Autoscaling 46 | -------------- 47 | launch template --> JD 48 | autoscaling policy --> avg cpu utilisation --> 70% 49 | 50 | http://10.2.3.5/ 51 | 52 | 1ec2 --> 1a 53 | 1ec2 --> 1b 54 | 55 | lb --> 1a/1b, 1c 56 | 57 | user, cart, catalogue, shipping, payment --> backend components 58 | 59 | 60 | user.daws82s.online --> user target group 61 | cart.daws82s.online --> cart target group 62 | 63 | LB DNS --> 64 | 65 | http://nginx-616438873.us-east-1.elb.amazonaws.com/ 66 | 67 | Listener --> 80 68 | Rule --> all traffic send to nginx target group 69 | round robin --> ec2 instance 70 | 71 | LB, Listener, Rule, Target group, Instance -------------------------------------------------------------------------------- /session-32.txt: -------------------------------------------------------------------------------- 1 | target group --> team --> group of servers 2 | 3 | ALB --> Listener --> Evaluate rules --> target group --> server 4 | 5 | open source modules 6 | ==================== 7 | 1. we noo need to write code 8 | 9 | disadvantages 10 | ============== 11 | 1. we are dependent on them 12 | 2. whenever they update something we are forced to update 13 | 14 | Organisation modules 15 | ==================== 16 | 1. they have to write complete code 17 | 18 | advantages 19 | =========== 20 | fully our control 21 | 22 | 1. Project infra 23 | 2. 
Application infra 24 | 25 | basement --> one time 26 | rooms --> frequent 27 | 28 | stateful vs stateless 29 | ===================== 30 | state --> data 31 | 32 | database --> stateful 33 | backend, frontend --> stateless 34 | 35 | bastion host --> app ALB 36 | ALB SG Rule 37 | port no: 80 --> bastion host IP 38 | 39 | daws82s.online 40 | 41 | backend.app-dev.daws82s.online 42 | *.app-dev.daws82s.online --> expense dev app alb 43 | 44 | user, cart, catalogue, shipping, payment 45 | 46 | user.app-dev.daws82s.online --> user component 47 | cart.app-dev.daws82s.online --> cart component -------------------------------------------------------------------------------- /session-33.txt: -------------------------------------------------------------------------------- 1 | VPN --> forward proxy 2 | 3 | OpenVPN Access Server Community Image-fe8020db 4 | openvpnas --> SSH username 5 | openvpn, Openvpn@123 --> client credentials 6 | 7 | DB 8 | ==== 9 | on-premise 10 | 11 | DB installation 12 | DB upgrades 13 | DB backups 14 | DB changes 15 | DB Cluster setup 16 | DB restoration test 17 | 18 | RDS --> 1$/hours, multi-AZ(HA) 2$/hr 19 | ==== 20 | DB installation --> we just need to provide configuration 21 | DB upgrades --> simple clicks 22 | DB snaphosts 23 | DB Cluster --> We can replication controllers, high available, storage scalable 24 | 25 | ExpenseApp1 26 | 27 | DB subnet group --> group of DB subnets 28 | 29 | 8.0.10 -> 8.0.11 30 | 8.0 -> 9.0 31 | 8.4.0 --> 8.5.0 --> major 32 | 33 | DB_HOST_URL: expense-dev.czn6yzxlcsiv.us-east-1.rds.amazonaws.com 34 | root 35 | ExpenseApp1 36 | 3306 -------------------------------------------------------------------------------- /session-34.txt: -------------------------------------------------------------------------------- 1 | expense-dev.czn6yzxlcsiv.us-east-1.rds.amazonaws.com 2 | root 3 | ExpenseApp1 4 | 3306 5 | mysql-dev.daws82s.online 6 | 7 | A --> IP Address 8 | CNAME -------------------------------------------------------------------------------- /session-35.txt: -------------------------------------------------------------------------------- 1 | new release 2 | ============= 3 | 1. remove old code 4 | 2. download new code 5 | 3. restart the server 6 | 7 | changes in existing servers 8 | ============================ 9 | 20 servers --> downtime should be there 10 | 11 | connect to the servers using ansible 12 | fetch the ip using dynamic inventory 13 | run ansible playbook against all the servers 14 | 15 | another method 16 | ============================ 17 | provision new instance 18 | configure it using ansible 19 | connect to it 20 | run the playbook 21 | 22 | stop the instance 23 | take AMI 24 | 25 | update auto scaling group --> 5 old version application instances 26 | Rolling update 27 | provision one new instance with new AMI, delete one old version server 28 | provision second new instance with new AMI, delete second old version server 29 | . 30 | . 31 | provision fifth new instance with new AMI, delete fifth old version server 32 | 33 | Launch template --> instance creation inputs 34 | AMI 35 | SG ID 36 | Subnet 37 | Which target group 38 | 39 | null resource will not create any new resource, it is used to connect to the instances, copy the scripts, execute the scripts through provisioners. It has a trigger attribute to take actions when something is changed like instance id. 40 | 41 | terraform variables --> shell --> ansible 42 | 43 | 44 | 1. instance creation 45 | 2. connect to server using connection block 46 | 3. 
copy the shell script into server using file provisioner block 47 | install ansible 48 | ansible-pull -i localhost -U URL main.yaml -e component=backend -e environment=dev 49 | 4. remote-exec 50 | chmod +x backend.sh 51 | sudo sh backend.sh 52 | 5. ansible configures backend 53 | 6. stop instance 54 | 7. take the AMI 55 | 8. create target group 56 | 9. launch template --> AMI 57 | 10. ASG -->> launch template -------------------------------------------------------------------------------- /session-36.txt: -------------------------------------------------------------------------------- 1 | API 2 | GET POST PUT DELETE OPTIONS 3 | 4 | Nouns and verbs 5 | 6 | users --> nouns 7 | getUser --> verb 8 | updateUser --> verb 9 | deleteUser --> verb 10 | createUser --> verb 11 | 12 | https://emacet.ap.gov.in/rank/user/10987625 --> getUser rank HTTP GET 13 | https://emacet.ap.gov.in/rank/user/10987625 --> HTTP DELETE --> user rank will be deleted 14 | https://emacet.ap.gov.in/rank/user/10987625 --> HTTP UPDATE --> user rank will be updated 15 | https://emacet.ap.gov.in/rank/user/10987625 --> HTTP POST, User ranks will be created in database 16 | 17 | https://emacet.ap.gov.in/rank/users/ --> HTTP GET, hall ticket numbers will be uploaded in CSV 18 | 19 | AMI --> run instances, auto scaling 20 | 21 | backend.app-dev.daws82s.online --> backend target group 22 | 23 | analytics.app-dev.daws82s.online --> analytics target group 24 | 25 | http://backend.app-dev.daws82s.online/transaction --> route53 26 | 27 | *.app-dev.daws82s.online --> ALB 28 | 29 | Listener --> http:80 30 | 31 | Listener should check its rules 32 | backend.app-dev.daws82s.online --> backend target group 33 | Load balancer get the healthy instances 34 | Load balancer send traffic to any healthy instance 35 | 36 | http://backend.appp-dev.daws82s.online/transaction 37 | 38 | AMI is mandatory 39 | target group 40 | launch template --> AMI, SG, Network settings 41 | ASG --> launch template, target group 42 | ASG policy --> CPU utilisation cross 70% create new instances. Min, max, desired 43 | Listener rule --> if some one hits backend. --> backend target group 44 | -------------------------------------------------------------------------------- /session-37.txt: -------------------------------------------------------------------------------- 1 | Load Balancing --> High Availability 2 | Scaling --> Autoscaling 3 | 4 | DM, He listens to client, his manager, etc. 5 | DM --> Load balancer 6 | client --> port no 80 http 7 | Rules 8 | *.app-dev.daws82s.online --> LB 9 | fdhasdkfh.app-dev.daws82s.online --> Yes Iam LB (default fixed response) 10 | http://backend.app-dev.daws82s.online --> forward that to backend target group 11 | http://analytics.app-dev.daws82s.online --> forward that to analytics target group 12 | health check 13 | every 10sec 14 | 2 health success --> instance is healthy 15 | 2 health failure --> instance is failed 16 | 17 | Autoscaling 18 | ============== 19 | Terraform+Shell+Ansible+Autoscaling 20 | 21 | HR --> JD --> Which team? 
22 | JD --> Launch template 23 | AMI --> updated latest version backend AMI 24 | 25 | install program runtime, create user, create app folder, download code, install dependencies, create sytemctl services, configure DB_URLS, restart application 26 | 27 | terraform --> Instance launch --> copy file using provisioner --> connected to instance through remote-exec --> run playbook 28 | 29 | stop the instance 30 | take AMI 31 | delete the instance 32 | 33 | create backend target group 34 | create launch template --> latest AMI ID, update launch template version 35 | ASG --> launch template latest version, instance refresh 36 | rolling update 37 | new instance create, old instance delete 38 | 39 | listener rule --> backend.app-dev.daws82s.online --> forward that to backend target group 40 | ASG Policy --> if AVG CPU Utilisation is crossing 70% create instances 41 | Scale out --> create new instances 42 | Scale in --> remove instances 43 | 44 | if instance id changes 45 | trigger provisoner 46 | stop instance 47 | take AMI 48 | 49 | launch template changes 50 | ASG instance refresh 51 | 52 | either host or context 53 | 54 | amazon.daws82s.online --> amazon 55 | daws82s.online/amazon --> path based or context based 56 | 57 | m.facebook.com --> mobile target group 58 | netbanking.icicibank.com 59 | corporatebanking.icicibank.com 60 | 61 | joindevops --> golden AMI 62 | 63 | backend 64 | ansible+nodejs --> joindevops AMI + Ansible + nodejs 65 | shipping 66 | ansible+java --> joindevops AMI + Ansible + java 67 | 68 | SSL/TLS certificates 69 | ----------------------- 70 | daws82s.online --> get certificates 71 | 72 | certificate authority --> verisign, letsencrypt 73 | 74 | .crt file --> domain, country, type of business, location, address, company name 75 | private key 76 | 77 | joindevops.com --> verisign 78 | browser sends data to servers using encrypted key, joindevops has private key so it can decrypt 79 | 80 | *.daws82s.online 81 | 82 | frontend-dev.daws82s.online -------------------------------------------------------------------------------- /session-38.txt: -------------------------------------------------------------------------------- 1 | Organisation wide AWS Central team 2 | golden AMI 3 | terraform apply -auto-approve 4 | 5 | app_alb_frontend 6 | 80 7 | 8 | frontend_web_alb on port no 80 9 | 10 | expense-dev.daws82s.online 11 | expense-qa.daws82s.online 12 | daws82s.online 13 | 14 | Cloudfront --> Caching 15 | 16 | Netflix --> Squid game 17 | 18 | 1st user --> checks cache --> brings from origin servers --> save to cache --> send to user 19 | 2nd user --> checks cache --> send to user -------------------------------------------------------------------------------- /session-39.txt: -------------------------------------------------------------------------------- 1 | ASG 2 | Target group --> deregistration delay 3 | 4 | terraform destroy --> ASG 5 | tg waits until deregistration happens --> 5min 6 | 7 | CDN --> AWS cache network 8 | AWS have edge servers across the globe, before serving content to the users, aws cache the content in edge servers so that latency will be very less. useful in serving images, videos, static js, css files 9 | 10 | GET PUT POST DELETE 11 | 12 | Origin --> where the real content exist 13 | Cache behaviours 14 | 15 | /images/* --> this should be cached 16 | /videos/* --> this should be cached 17 | /static/* --> this should be cached 18 | * --> this should not be cached 19 | 20 | How can you delete all the cached content in edge servers? 
21 | invalidations 22 | 23 | Origin S3, ALB, EC2, etc. 24 | 25 | *.daws82s.online 26 | expense-dev --> public ALB 27 | expense-cdn --> CDN 28 | 29 | NACL --> 30 | 31 | https://facebook.com 32 | 33 | ephemeral ports 34 | 35 | github.com --> 443 36 | 0-65535 37 | 1025-65,535 38 | 1025 --> system ports 39 | laptop opens an ephemeral port, 54353 40 | 41 | VPC, subnets, route tables, SG 42 | NACL 43 | 44 | SG --> stateful 45 | 46 | ec2 server --> allow port no 22 on ssh inbound, outbound can be empty 47 | 48 | NACL --> stateless 49 | 50 | ec2 server --> allow port no 22 on ssh inbound, outbound also should be there -------------------------------------------------------------------------------- /session-40.txt: -------------------------------------------------------------------------------- 1 | What happens if I enter google.com in my browser? 2 | 3 | Application 4 | Presentation 5 | Session layer 6 | Transport 7 | Network 8 | Data link 9 | Physical 10 | 11 | application 12 | ============= 13 | application google.com --> http/https 14 | presentation --> encryption 15 | session --> session, cookie details 16 | 17 | transport --> port number 18 | destination port --? 80/443/8080 22 19 | source port --> system randomly allocates one port to get the response --> ephemeral port 20 | 32768-> 65535 21 | 22 | Network --> IP address 23 | destination IP address --> DNS resolution 24 | source IP --> server IP address (private IP) 25 | source IP, source port, destination IP, destination port, data --> packet 26 | 27 | Datalink --> Mac address 28 | source mac --> laptop mac address 29 | destination mac --> router mac address 30 | frame --> data packet and mac address 31 | 32 | Physical --> ethernet, wifi 33 | 34 | Modem 35 | ============ 36 | Physical --> 37 | Datalink --> frames. source mac, destination mac addresses will be removed --> I should send this out 38 | Network --> checks the packet . destination IP address in not with in the network. NAT(Network address translation) 39 | source IP address will be changed to router IP(public) address 40 | Transport --> port number accessed 41 | 42 | response will come back to laptop and ephemeral port 43 | 44 | NACL --> Network access control list 45 | ==================================== 46 | Public subnet --> web ALB 443 from 0.0.0.0/0 47 | frontend ec2 --> SG checks traffic comes from ALB or not 48 | SG checks IP and port --> layer 4 49 | 50 | SG --> fingerprint door lock to home 51 | NACL --> subnet, like check infront of apartment 52 | 53 | NACL vs SG 54 | =============== 55 | 1. SG can be attached to EC2, NACL can be attached to subnet 56 | 2. SG rules are empty when you create, there are no deny in SG. when you create NACL by default all traffic will be denied. 57 | 3. SG is stateful, NACL is stateless 58 | 59 | 443, expense-dev.daws82s.online 60 | 0.0.0.0/0, ephemeral port 61 | 62 | public --> ALB --> 443. 
we should allow 63 | return traffic on ephemeral ports should be allowed 64 | 65 | public subnet frontend instance --> private subnet ALB 66 | destination ip: private subnet CIDR 67 | destination port: 80 68 | source ports of the clients are always ephemeral 69 | 70 | frontend is client, server is private ALB 71 | 72 | 10.0.0.0/16 --> local 73 | 74 | either VPN or bastion host only access DB 75 | 76 | Network security 77 | application security 78 | database security 79 | 80 | public subnet --> private subnet 81 | 80 allow from frontend private IP address only 82 | 80 allow from VPN, bastion IP address only 83 | -------------------------------------------------------------------------------- /session-41.txt: -------------------------------------------------------------------------------- 1 | acquiring state lock --> state file locked when one user is working on terraform 2 | releasing state lock 3 | 4 | terraform force-unlock 5 | 6 | terraform taint 7 | ================ 8 | if you apply taint to a resource in terraform, it will recreated again.. why we taint? if someone change the resource manually in console and it is difficult to reset.. 9 | 10 | terraform target 11 | ================ 12 | terraform apply -target= 13 | 14 | ec2 and sg 15 | 16 | can I delete only SG? 17 | 18 | we need to be careful because resources may have dependencies, they maybe effected 19 | 20 | IaaC --> automated infra 21 | manually infra --> terraform 22 | 23 | terraform import 24 | =================== 25 | create provider, initailise it 26 | create empty resource 27 | import the resource 28 | terraform will fetch all the paramters and keep it in statefile 29 | we need to manually fill the code untill terraform not complaining when you plan.. 30 | 31 | manually expense infra --> automated infra 32 | 33 | https://daws82s.online 34 | 35 | automated infra 36 | 37 | https://daws82s-migrated.online --> public ALB --> frontend TG --> private ALB -> backend TG --> RDS 38 | 39 | https://daws82s.online --> newly created public ALB 40 | 41 | 42 | how can you secure statefile in s3 bucket? 
43 | ============================================= 44 | remote s3 with dynamodb 45 | s3 bucket -> least privelege --> only ec2 instance will have write access to s3 bucket 46 | 47 | terraform ec2 instance --> write access and update access 48 | no body will have delete access in s3 bucket 49 | you can enable MFA delete to team lead/architect 50 | 51 | { 52 | "Version": "2012-10-17", 53 | "Statement": [ 54 | { 55 | "Effect": "Allow", 56 | "Principal": { 57 | "AWS": "arn:aws:iam::315069654700:role/TerraformAWSAdminForEC21" 58 | }, 59 | "Action": [ 60 | "s3:GetObject", 61 | "s3:PutObject" 62 | ], 63 | "Resource": "arn:aws:s3:::testing-bucket-security-remote/*" 64 | }, 65 | { 66 | "Effect": "Deny", 67 | "Principal": { 68 | "AWS": "arn:aws:iam::315069654700:user/sivakumar" 69 | }, 70 | "Action": "s3:DeleteObject", 71 | "Resource": "arn:aws:s3:::testing-bucket-security-remote/*", 72 | "Condition": { 73 | "Bool": { 74 | "aws:MultiFactorAuthPresent": "false" 75 | } 76 | } 77 | }, 78 | { 79 | "Effect": "Allow", 80 | "Principal": { 81 | "AWS": "arn:aws:iam::315069654700:user/sivakumar" 82 | }, 83 | "Action": [ 84 | "s3:ListBucket", 85 | "s3:GetObject" 86 | ], 87 | "Resource": [ 88 | "arn:aws:s3:::testing-bucket-security-remote", 89 | "arn:aws:s3:::testing-bucket-security-remote/*" 90 | ] 91 | } 92 | ] 93 | } -------------------------------------------------------------------------------- /session-42.txt: -------------------------------------------------------------------------------- 1 | Old Enterprise 2 | ============== 3 | Frontend+Backend 4 | HTML+CSS+JS+JSP+Servlets 5 | 6 | app size is very high 7 | a small change in any frontend or backend should be released and redeployed 8 | 9 | release notes 10 | dev, qa, uat, pre-prod 11 | client approval 12 | one entire day for deployment 13 | sanity testing 14 | 15 | Frontend seperate and backend seperate 16 | ================================= 17 | API 18 | no dependency 19 | load on servers are decreased 20 | 21 | Angular JS 22 | Monolithic restful API/services 23 | HTTP Methods and responses 24 | 25 | backend team, single component --> backend 26 | user, cart, order, shipping, payment, delivery, catalogue, reviews, recommendations, etc.. 27 | Monolithic applications 28 | ======================= 29 | single backend component 30 | should use single programming language 31 | a small error can make entire website down 32 | 33 | Microservices 34 | ================== 35 | User 36 | Cart 37 | Catalogue 38 | Shipping 39 | Payment 40 | 41 | manabadi/eenadu --> emacet results API 42 | Java --> NodeJS 43 | client and server can use any programming language 44 | easy deployment 45 | website works if any component goes down 46 | diff components use diff languages 47 | 48 | Joint family vs small family vs individual 49 | ========================================== 50 | independent house to host joint family --> application size is big --> old enterprise 51 | 52 | physical server --> dedicated server 53 | OS --> Hardware 54 | 55 | flats --> apartment --> 56 | 57 | single person --> PG, shared room 58 | 59 | Physical server 60 | ============== 61 | disadvantages 62 | ----------- 63 | costly 64 | waste of resources --> may not use all ram and HD 65 | time --> purchase, installation, configuration 66 | maintanance --> water, electricity, plumbing --> OS, network, etc. 
67 | 68 | advantages 69 | ----------- 70 | complete privacy --> single application 71 | 72 | 73 | VM 74 | ============== 75 | time is less to construct 76 | less cost 77 | proper resource utilisation 78 | less maintanance 79 | 80 | disadvantages 81 | ------------ 82 | less privacy 83 | 84 | 85 | Shared room 86 | ============== 87 | less cost 88 | time is very less 89 | no maintanance 90 | perfect resource utilisation 91 | 92 | disadvantages 93 | ------------ 94 | very less security 95 | 96 | 97 | VM vs containers 98 | ================= 99 | more cost less cost 100 | more boot time(min) less boot time(seconds) 101 | VM size is big container size is very less 102 | block the resources never block the resource, use it dynamically 103 | extra hypervisor no need of extra components 104 | less portable portable 105 | 106 | 107 | docker install --> docker group created by default 108 | usermod -aG docker ec2-user 109 | logout and login 110 | 111 | VM --> AMI --> Instance 112 | Docker --> Image --> Container 113 | AMI ==> BaseOS + App runtime + User creation + Folder creation + config files + app code + dependency installation + start/restart app --> 2GB-4GB 114 | 115 | Image ==> Bare min OS(10MB-500MB) + App runtime + User creation + Folder creation + config files + app code + dependency installation + start/restart app --> 150 - 500MB 116 | 117 | docker ps --> displays running containers 118 | docker images --> displays the images available 119 | docker pull image-name 120 | 121 | nginx 122 | ====== 123 | alpine OS + Install nginx --> nginx image 124 | 125 | docker create --> container will be created 126 | docker start 127 | docker rm 128 | 129 | docker pull + create + start == docker run 130 | docker exec -it 1e74ed63d8d6 bash 131 | docker run -d 132 | 133 | docker run -p host-port:container-port 134 | docker inspect 135 | docker logs -------------------------------------------------------------------------------- /session-43.txt: -------------------------------------------------------------------------------- 1 | How to create custom docker images? 2 | 3 | Dockerfile --> used to create our custom images using instructions provided by docker 4 | 5 | RHEL9 == Centos9 == Almalinux9 6 | FROM 7 | ====== 8 | FROM : 9 | 10 | how to build docker image? 11 | 12 | docker build -t : . --> Dockerfile 13 | 14 | docker build -t /: . 15 | 16 | docker login -u 17 | 18 | docker push : 19 | 20 | RUN 21 | ====== 22 | RUN instruction used to install packages, configurations on top of base os. It executes at the time of image building 23 | 24 | CMD 25 | ====== 26 | systemctl will not work in containers.. /etc/systemd/system/*.service 27 | 28 | RUN vs CMD 29 | ========= 30 | RUN instruction executes at the time of image building 31 | CMD instruction executes at the time of container creation 32 | 33 | COPY vs ADD 34 | ========= 35 | COPY and ADD both copies the files to images. but ADD has 2 extra capabilities 36 | 1. copying from directly from internet to the image 37 | 2. extracts tar file directly into image 38 | 39 | -------------------------------------------------------------------------------- /session-44.txt: -------------------------------------------------------------------------------- 1 | CMD vs ENTRYPOINT 2 | ================== 3 | 1. CMD instruction can be overriden 4 | 2. You can't override ENTRYPOINT --> ping google.com ping facebook.com. If you try to override entrypoint it will not override, but it will append 5 | 3. for best results we can use CMD and ENTRYPOINT together. 6 | 4. 
We can mention command in ENTRYPOINT, default options/inputs can be supplied through CMD. User can always override default options. 7 | 5. Only one CMD and one ENTRYPOINT should be used in Dockerfile 8 | 9 | USER -> set the user of container 10 | WORKDIR --> set the working directory of container/image 11 | 12 | ARG 13 | ------ 14 | 1. ENV variables can be access at the image building and in container also 15 | 2. ARG variables are only accessed inside image build, not in container 16 | 3. ARG can be the first instruction only to provide the version for base image. It can't be useful after FROM instruction 17 | 18 | set the ARG value to ENV variable inside Dockerfile 19 | 20 | -------------------------------------------------------------------------------- /session-45.txt: -------------------------------------------------------------------------------- 1 | FROM --> should be the first instruction to represent base OS 2 | RUN --> Used to configure or install packages 3 | CMD --> executes at the time of container creation, this command should container running infinite time 4 | COPY --> copies from local workspace to image 5 | ADD --> same as copy but 2 extra capabilities, directly downloads from internet or untar directly 6 | LABEL --> adds metadata, used for filter. key value pairs 7 | EXPOSE --> doc purpose, tell the users about ports opened by container 8 | ENV --> sets the env variables in the container, we can use at build time also 9 | ENTRYPOINT --> cant' override, CMD can provide default args, we can always override default args 10 | USER --> set the user to run container 11 | WORKDIR --> sets the working directory for container/image 12 | ARG --> build time variables, in an exceptional case can be first instruction to supply base os version 13 | 14 | ONBUILD 15 | ======= 16 | we can set some conditions when some user is using our image 17 | 18 | nginx --> alamlinux:9, install nginx, removes index.html 19 | 20 | we will force the users to keep index.html in their workspace for mandatory 21 | 22 | if you are using default n/w docker containers can't communicate with each other -------------------------------------------------------------------------------- /session-46.txt: -------------------------------------------------------------------------------- 1 | volumes 2 | ========== 3 | docker containers ephemeral, once you remove container by default it removes the data too. 4 | 5 | you can create one directory in host and map it to container. even we delete container data will not be lost, you can remount it.. 6 | 7 | we created directory, so we have to manage it... 8 | 9 | docker volumes --> you have to create volumes with docker commands, so docker can manage, we no need to worry of creation and managing 10 | 11 | manually running containers 12 | =========================== 13 | 1. we need to know the dependency 14 | 2. we need to make sure network creation and volume creation 15 | 3. while removing we need to remove in anti dependency order.. 16 | 4. 
manually run docker run commands 17 | 18 | docker-compose.yaml 19 | 20 | mysql 21 | backend --> trying to connect mysql, before it is completely up, so we need to delay few sec 22 | 23 | SG vs NACL --> 24 | secrets stored in SSM parameter store 25 | developed terraform modules --> 26 | ansible roles 27 | I optimised dockerfiles 28 | 29 | latest 30 | 1.0.0 --> 1.0.1 --> 1.0.2 --> 1.1.0 --> 2.0.0 -------------------------------------------------------------------------------- /session-47.txt: -------------------------------------------------------------------------------- 1 | create Dockerfile 2 | build image 3 | push to docker hub 4 | docker-compose.yaml 5 | image: joindevops/mysql:1.0.0 6 | 7 | Image layers 8 | ============= 9 | 10 | 11 | FROM node:20.18.3-alpine3.21 12 | RUN addgroup -S expense && adduser -S expense -G expense 13 | RUN mkdir /opt/backend 14 | RUN chown -R expense:expense /opt/backend 15 | WORKDIR /opt/backend 16 | COPY package.json . 17 | 18 | Docker maintains the images as layers, each and every instruction is one layer. docker creates 19 | 1. intermediate container from instruction-1 20 | 2. docker runs 2nd instruction on top of IC-1. then docker saves this another layer 21 | 3. docker saves this container as another image layer. create intermediate container out of it IC-2 22 | 4. Now docker runs 3rd instruction in IC-2 container. docker saves this as another layer 23 | 5. docker creates intermediate container from this layer as IC-3 24 | 25 | How do you optimise docker layers? 26 | ================================== 27 | 1. less number of layers faster builds, because number of intermediate containers are less 28 | you can club multiple instructions into single instruction 29 | 2. keep the frequently changing instructions at the bottom of Dockerfile 30 | 31 | Multi stage builds 32 | =================== 33 | Java 34 | JDK --> Java development kit 35 | JRE --> Java runtime environment 36 | 37 | JDK > JRE 38 | JRE is subset of JDK 39 | 40 | JDK = JRE + Extra libraries 41 | 42 | while installing some libraries, OS adds extra space to HD. We will take only that jar file output and copy it another Dockerfile where only jre runs... 43 | 44 | We can have 2 dockerfiles one is for builder, we can copy the output from builder into 2nd dockerfile and run it, we can save some image space using this 45 | 46 | 47 | #FROM node:20 48 | FROM node:20.18.3-alpine3.21 AS builder 49 | RUN addgroup -S expense && adduser -S expense -G expense && \ 50 | mkdir /opt/backend && \ 51 | chown -R expense:expense /opt/backend 52 | WORKDIR /opt/backend 53 | ENV DB_HOST="mysql" 54 | USER expense 55 | COPY package.json ./ 56 | COPY *.js ./ 57 | RUN npm install 58 | CMD ["node", "index.js"] 59 | 60 | Docker multistage builds are primarily used to create smaller, more optimized container images by separating the build environment from the runtime environment. We can have one as builder and one as runner, copy the desired output from builder to runner. docker removes builder automatically 61 | 62 | 1. build the image --> use docker 63 | 2. run the image as container --> use kubernetes 64 | 65 | pg have 100 rooms, what if water is stopped... 66 | 67 | if owner have say 5pg, 68 | 69 | what is underlying docker server is crashed. We need to maintain multiple docker hosts. You need some orchestrator to manage all the docker hosts... docker swarm is docker native orchestrator 70 | 71 | kubernetes is the popular container orchestrator tool.. 
72 | 73 | autoscaling of containers --> 74 | HA --> run containers in multiple servers 75 | reliability --> orchestrator shifts the container to another host if one host is down 76 | kubernetes n/w and DNS is more stronger than docker swarm 77 | kubernetes integrates with cloud providers 78 | storage is better than in dockerswarm 79 | -------------------------------------------------------------------------------- /session-48.txt: -------------------------------------------------------------------------------- 1 | eksctl --> AWS command line tool to create and manage EKS cluster 2 | 3 | 1. create one linux server as workstation 4 | 2. install docker to build images 5 | 3. run aws configure to provide authentication 6 | 4. install eksctl to create and manage EKS cluster 7 | 5. install kubectl to work with eks cluster 8 | 9 | ondemand, spot and reserved 10 | ondemand --> creating server on the spot, high cost 11 | reserved --> cost is less because you are reserving for longterm 12 | spot --> 70-90% discount hardware available now...when our customers require we will take back your hardware with 2min notice. 13 | 14 | SPOT instances 15 | ================ 16 | eksctl create cluster --config-file=eks.yaml 17 | 18 | kubectl get nodes --> shows the nodes 19 | 20 | everything in kubernetes is called as resource/object 21 | 22 | namespace --> isolated project space where you can create and control resources to your project 23 | 24 | default namespace is created along with cluster creation 25 | 26 | kubectl get namespace 27 | 28 | kind: 29 | apiVersion: v1 30 | metadata: 31 | name: 32 | spec: 33 | 34 | pod 35 | ====== 36 | docker --> image --> container 37 | k8 --> image --> pod 38 | 39 | container vs pod 40 | ================= 41 | pod is the smallest deployable unit in k8. pod can have multiple containers 1 or many 42 | all containers in pod share the same IP and storage 43 | multiple containers are useful in few applications like shipping logs through sidecars 44 | 45 | 46 | eksctl delete cluster --config-file=eks.yaml 47 | -------------------------------------------------------------------------------- /session-49.txt: -------------------------------------------------------------------------------- 1 | backend --> DB connection --> once results are fetched connection close 2 | 3 | you should not mix the configuration with code/defintion 4 | 5 | siva --> saiavaaa 6 | siva --> fddsfh9u05471490 7 | 8 | Service 9 | ========= 10 | 1. load balancing 11 | 2. pod to pod communication 12 | 3. as DNS between pods 13 | 14 | multi-container --> curl nginx --> 15 | 1. cluster IP --> default, only works internally 16 | 2. NodePort --> external requests 17 | 3. LoadBalancer --> only works with cloud providers. here service open classic LB. 
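a minimal sketch of a Service manifest (the nginx name, label and port are only placeholders): the selector decides which pods receive the traffic and type decides how it is exposed

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP        # change to NodePort or LoadBalancer for external access
  selector:
    app: nginx           # pods carrying this label become the endpoints
  ports:
  - port: 80             # port the service listens on
    targetPort: 80       # port the container listens on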
18 | 19 | LoadBalancer > NodePort > ClusterIP -------------------------------------------------------------------------------- /session-50.txt: -------------------------------------------------------------------------------- 1 | namespace --> isolated project space 2 | Pod 3 | 4 | LoadBalancer > NodePort > ClusterIP 5 | 6 | Pod is subset of replicaset 7 | pod name = replicaset-name-random 5 digits 8 | 9 | deployment 10 | ============ 11 | when there are changes in application, we need to release new version 12 | delete old application 13 | download new application 14 | run the application 15 | 16 | replicaset is subset of deployment 17 | 18 | kubectl rollout undo deployment/nginx-deployment --> roll back to immidiate previous version 19 | 20 | sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx 21 | sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx 22 | sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens 23 | 24 | 0-1024 are system ports -------------------------------------------------------------------------------- /session-51.txt: -------------------------------------------------------------------------------- 1 | Volumes 2 | ============ 3 | everything in k8 is ephemeral. pods, nodes, master node all are ephemeral. So it is not good to store DB inside K8 4 | 5 | Storage administration --> manage all the disks related to servers 6 | Backup every day 7 | Restore testing 8 | N/w to storage 9 | 10 | EBS -> Elastic block storage 11 | EFS -> Elastic file system 12 | 13 | If server is in AZ us-east-1b, EBS also should be here. EBS is less latency. EBS is well suitable for DB and OS 14 | 15 | 1. node and EBS volume should be in same AZ 16 | 2. drivers install 17 | 3. EC2 worker nodes should have permission to work with EBS volumes 18 | 19 | static and dynamic provisioning 20 | ================================= 21 | 22 | 23 | EBS static 24 | ========== 25 | we have to create disk manually. vol-06237d8b3183eefc9 26 | drivers install 27 | kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.40" 28 | 29 | everything in k8 is resource/object. if you can treat or manage the volumes from k8 objects any k8 engineer can work on this.. 30 | PV --> Persistant volume, represents or wrapper of physical volume 31 | PVC --> Claiming the volume 32 | Storage class 33 | 34 | kid --> mother --> father --> wallet 35 | Pod --> PVC --> PV --> volume 36 | 37 | access modes 38 | ============ 39 | ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod, 40 | 41 | Lifecycle policies 42 | ================= 43 | 44 | if pod schedules in 1a and volume is in 1b, pod status will be in pending for continously. 45 | 46 | scheduler --> schedules your pods on to appropriate nodes 47 | -------------------------------------------------------------------------------- /session-52.txt: -------------------------------------------------------------------------------- 1 | perm storage 2 | ============= 3 | 1. SAN 4 | 2. K8 Admin 5 | 3. Expense DevOps Engineer --> they get access to only expense namespace 6 | 7 | Expense DevOps engineer send a mail to SAN with the manager approval. SAN will take their manager approval and create storage for us. 8 | 9 | We will forward disk details to k8 admin and ask them to create PV. 10 | 11 | 1. namespace level 12 | 2. cluster level 13 | 14 | they create PV and send the name for us.. 
15 | 16 | then we will create PVC, pod to access the storage 17 | 18 | Kid --> Mom --> Dad --> Wallet 19 | Pod --> PVC --> PV --> Storage 20 | 21 | 1. install drivers 22 | 2. give permissions to worker nodes 23 | 3. create volume 24 | 4. create PV, PVC and cliam through pod 25 | 5. EC2 and EBS should be in same az incase of EBS provisioning 26 | 27 | kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.40" 28 | 29 | 30 | EBS dynamic 31 | ============ 32 | 33 | Kid --> Mom --> UPI 34 | Pod --> PVC --> SC 35 | 36 | StorageClass --> this object is responsible dynamic provisioning. it will create external storage and PV automatically 37 | 38 | 1. install drivers 39 | 2. give permissions to worker nodes 40 | 3. create storage class 41 | 42 | EFS --> Elastic file sharing 43 | 44 | 1. EBS should be in same AZ as in EC2, EFS can be anywhere in n/w 45 | 2. EBS is speed compare to EFS 46 | 3. EBS is used OS disk and databases, EFS can be used for file storages 47 | 4. EBS size is fixed. EFS will be scaled automatically 48 | 5. EFS is based on NFS protocol. 2049 port 49 | 6. EBS and EFS should be mounted to any instance. 50 | 7. You can have any filesystem attached to EBS, but EFS use NFS, we can't change. 51 | 52 | EFS static 53 | ============= 54 | 1. install drivers 55 | kubectl apply -k \ 56 | "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-2.1" 57 | 2. give permissions 58 | 3. create EFS 59 | 4. open port no 2049 on EFS to allow traffic from EC2 worker nodes 60 | 5. create PV, PVC and POD 61 | 62 | 63 | EFS Dynamic 64 | ============= 65 | 1. create storage class 66 | 2. install drivers 67 | 3. give permissions 68 | 69 | StatefulSet vs Deployment 70 | ============== 71 | 1. Deployment is for stateless applications, generally frontend and backend 72 | 2. StatefulSet is for stateful applications usually databases. 73 | 3. StatefulSet will have headless service, deployment will not have headless service 74 | 4. PV, PVC are mandatory for statefulset 75 | 5. pods in statefulset create in orderly manner with static names. statefulset-0, statefulset-1 76 | 77 | -------------------------------------------------------------------------------- /session-53.txt: -------------------------------------------------------------------------------- 1 | Deployment 2 | ========== 3 | when you hit nslookup on service name, you will get ClusterIP as address 4 | 5 | You need to attach headless service to statefulset. Why? 6 | 7 | what is headless service? headless service will have no ClusterIP. It will be attached to statefulset 8 | 9 | pods in statefulset will be created in orderly manner. 10 | deployment pods names are choosen as random. but statefulset names are unique. -0, -1 11 | 12 | why statefulset have same names, pods preserve their identity? 13 | nginx-0 delete, statefulset creates immidiately another pod nginx-0 and it should attach to its own storage through naming convention. persistentvolumeclaim/www-nginx-0 14 | 15 | Deployment vs Statefulset 16 | ============================ 17 | 1. Deployment is for stateless applications like frontend, backend, etc. Statefulset is for DB related applications like MySQL, ELK, Prometheus, Queue apps, etc. 18 | 2. Statefulset requires headless service and normal service to be attached. Deployment will not have stateless service 19 | 3. PV and PVC are mandatory to statefulset, they create individual storages for each pod. 
PV and PVC for deployment creates single storage 20 | 4. Statefulset pods will be created in orderly manner. Deployment pods will be created parellely 21 | 5. Deployment pods names are choosen random. Statefulset pods keep the same identity. like statefulset-name-0, -1, etc. 22 | 23 | Autoscaling 24 | ============= 25 | Vertical, Horizontal 26 | 27 | Vertical scaling --> 2CPU, 4GB, 20GB HD --> 4CPU 8GB 100GB HD 28 | Horizontal scaling --> create another server with 2CPU, 4GB, 20GB HD and add to LB 29 | 30 | 4 individual houses, 31 | 32 | HPA in K8 33 | ------------ 34 | We should metrics server installed in k8 35 | -------------------------------------------------------------------------------- /session-54.txt: -------------------------------------------------------------------------------- 1 | Namespace 2 | Pod 3 | ConfigMap 4 | Secret 5 | ReplicaSet 6 | Deployment 7 | Service 8 | PVC 9 | PV 10 | SC 11 | StatefulSet 12 | HPA 13 | 14 | Helm Charts 15 | ============= 16 | 1. image build 17 | 2. run image 18 | 19 | there are opensource official images. 20 | opensource manifest files, image builders tell us how to run the images with default values, we can always override those default values too.. 21 | 22 | dnf install nginx -y --> nginx will be installed 23 | 24 | 1. helm is a package manager to deploy opensource applications or custom applications into kubernetes 25 | 2. templatise manifest files 26 | 27 | helm install . 28 | 29 | /etc/yum.repos.d/ 30 | dnf install nginx -y 31 | 32 | helm install es-kb-quickstart elastic/eck-stack -n elastic-stack --create-namespace 33 | 34 | 35 | -------------------------------------------------------------------------------- /session-55.txt: -------------------------------------------------------------------------------- 1 | Selectors 2 | ============== 3 | Scheduler --> it will decide where to run your pod 4 | 5 | nodeSelector: 6 | az: 1a 7 | 8 | taints and tolerations 9 | affinity and anti-affinity 10 | 11 | 12 | Errors in K8 13 | ============== 14 | ErrImagePull --> if node is not able to pull the image 15 | CrashLoopBackOff --> Container is unable to start 16 | Pending --> worker node not availalbe in that AZ, PVC is not bound to PV 17 | ContainerCreating --> PV, PVC 18 | 19 | taint --> paint or pollute 20 | 21 | if can taint the node, scheduler will not schedule any pods on to that node 22 | 23 | 17.182 --> 1b 24 | 43.212 --> 1a and tainted 25 | 51.139 --> 1a 26 | NoSchedule --> order 27 | PreferNoSchedule --> request 28 | NoExecute --> already few pods, 29 | 30 | toleration --> allow 31 | 32 | a dedicated worker nodes are there project savings bank project, it means any other project related pods will not be scheduled here 33 | 34 | tolerations will not 100% gaurentee thats pods will run on the tainted nodes.. 35 | 36 | situation 37 | ========== 38 | node is tainted --> can scheduler run the pod on that node? --> no 39 | node is tainted --> pod asks for toleration --> can scheduler run the pod --> may be(if resources are not free or scheduler decided another node) --> Running 40 | node is tainted --> pod asks for toleration and nodeSelector --> If tainted have resources(Running) --> if tainted nodes don't have any resources(Pending) 41 | 42 | requiredDuringSchedulingIgnoredDuringExecution --> Schedule the pod and execute --> hard rules, labels must be availalbe while scheduling 43 | preferredDuringSchedulingIgnoredDuringExecution --> Soft rule --> if labels are not availalbe then consider schedule on the node.. 
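a rough sketch of a pod spec combining a toleration with a soft node affinity rule (the project taint and the az label are assumed values for illustration, not from a real cluster)

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  tolerations:
  - key: "project"                 # assumes the node was tainted with project=expense:NoSchedule
    operator: "Equal"
    value: "expense"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:   # soft rule, scheduler prefers but does not require
      - weight: 1
        preference:
          matchExpressions:
          - key: az
            operator: In
            values: ["1a"]
  containers:
  - name: backend
    image: nginx                   # placeholder image

note that the toleration alone does not guarantee the pod lands on the tainted node, it only allows it; a nodeSelector or a required affinity rule is needed to force it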
44 | 45 | backend --> DB --> every day bringing provisions from market 46 | 47 | backend --> Cache --> DB --> Cache -> 10 days rice store 48 | 49 | backend --> node-1 50 | Cache --> node-2 51 | 52 | traffic should come out from node-1 and enter into node-2 and then Cache pod 53 | 54 | pod-1 --> 17.182 55 | pod-2 --> 56 | 57 | Ingress Controller 58 | =================== 59 | Classic --> Old generation 60 | ALB --> New generation --> host based routing 61 | 62 | netbanking.hdfcbank.com --> netbanking target groups 63 | sms.hdfcbank.com --> sms banking target group 64 | 65 | url path hdfcbank.com/netbanking --> netbanking 66 | 67 | by using ingress controller, we can expose application running in K8 to outside world. 68 | 69 | setup ingress controller 70 | use ingress resources --> routing rules 71 | -------------------------------------------------------------------------------- /session-56.txt: -------------------------------------------------------------------------------- 1 | R53 --> ALB --> Listener --> Rule --> Target Group(VM)/Target Group(Pod) 2 | 3 | 4 | eksctl utils associate-iam-oidc-provider \ 5 | --region \ 6 | --cluster \ 7 | --approve 8 | 9 | daws82s.online 10 | 11 | app1.daws82s.online --> app1 related pods 12 | app2.daws82s.online --> app2 related pods 13 | 14 | Ingress resource 15 | ================= 16 | Ingress Controller is used to provide external access to the applications running in kubernetes. We can set the routing rules through ingress resource either path based or host based. in EKS ingress resource can create ALB, Listener, Rule and target group. Ingress is attached to service, so it fetch the pods and add them to target group 17 | 18 | RBAC --> Role based access control 19 | ====== 20 | Authentication and Authorization 21 | 22 | Nouns and Verbs 23 | 24 | Nouns --> what are the resources we have 25 | Verbs --> What actions you can take on those resources 26 | 27 | IAM 28 | Nouns --> EC2, VPC, R53, CDN, EKS, IoT, etc. 29 | 30 | Fresher --> deleteEC2 --> no 31 | CRUD 32 | Fresher --> readEC2 33 | Junior --> createEC2 34 | Senior --> creaEC2, readEC2, updateEC2 35 | Team lead --> deleteEC2 36 | 37 | User, Role, Rolebinding 38 | 39 | expense-trainee --> read access to expense namespace 40 | 41 | role should be bound to user --> through rolebinding 42 | 43 | EKS is one platform, AWS is another platform 44 | 45 | k8 has its own RBAC, AWS has its own 46 | 47 | AWS integrates IAM service as authentication mechanism to EKS 48 | 49 | a user should describe EKS to connect it.. 50 | 51 | 1. create IAM user and provide describeEKSCluster access 52 | 2. we need to create role and rolebinding resources 53 | 54 | aws-auth configmap need to be configured to connect EKS and IAM 55 | 56 | mail suresh about his config is done, he can login 57 | AKIAUSW45M2WKJH4AVVH 58 | 59 | aws eks update-kubeconfig --region us-east-1 --name expense 60 | -------------------------------------------------------------------------------- /session-57.txt: -------------------------------------------------------------------------------- 1 | suresh joined, we need to give him all readaccess to expense namespace 2 | 3 | aws eks update-kubeconfig --region us-east-1 --name expense 4 | 5 | ServiceAccount 6 | ================ 7 | Service Account: default 8 | when you create namespace or every namespace will have default sa 9 | sa is a non human account, it is pod identity with which it can connect with api server and get access to external services 10 | 11 | pod should access aws secret manager 12 | 1. 
make sure oidc provider exist 13 | eksctl utils associate-iam-oidc-provider \ 14 | --region us-east-1 \ 15 | --cluster expense \ 16 | --approve 17 | 18 | 2. create policy 19 | 3. we need to map sa with IAM policy 20 | 21 | eksctl create iamserviceaccount \ 22 | --cluster=expense \ 23 | --namespace=expense \ 24 | --name=expense-mysql-secret \ 25 | --attach-policy-arn=arn:aws:iam::315069654700:policy/ExpenseMySQLSecretRead \ 26 | --override-existing-serviceaccounts \ 27 | --region us-east-1 \ 28 | --approve 29 | 30 | this command will create IAM role with policy ExpenseMySQLSecretRead and integrate with EKS SA 31 | 32 | apiVersion: v1 33 | kind: ServiceAccount 34 | metadata: 35 | annotations: 36 | eks.amazonaws.com/role-arn: arn:aws:iam::315069654700:role/eksctl-expense-addon-iamserviceaccount-expens-Role1-XUPebrTfkrjM 37 | creationTimestamp: "2025-03-17T02:22:46Z" 38 | labels: 39 | app.kubernetes.io/managed-by: eksctl 40 | name: expense-mysql-secret 41 | namespace: expense 42 | resourceVersion: "7864" 43 | uid: 8c97086a-9360-4a82-b77f-142fe47ce1c2 44 | 45 | InitContainers 46 | ================== 47 | InitContainers are used for setup the requirements for pod, for example backend pod can check whether database connections working fine or not 48 | InitContainers containers can fetch the secrets for pod before it starts 49 | 50 | InitContainers are special containers run before main container runs, we can use them to make sure dependent services are running fine before our main application starts and fetch the secrets from secretmanager for main container. they always run to completion. you can run one or many init containers, they executed in sequence 51 | 52 | InitContainers can do heavy lifting for main container so that main container is less in size and limit attack surface of main container by not installing more tools in it.. 53 | 54 | 55 | aws secretsmanager get-secret-value \ 56 | --secret-id expense/mysql/creds --query SecretString --output text 57 | 58 | volumes --> external volumes 59 | 60 | internal volumes --> emptyDir and hostPath 61 | 62 | emptyDir --> pod temporary storage, accessible until pod lives. all containers in the pod can access 63 | init container stores the secret in emptyDir and completes. then main container can acess this 64 | 65 | 66 | what is serviceaccount? 67 | 68 | serviceaccount is non human user account, sa is the pod identity with which it can access internal resources or external services. we can map sa with cloud provider IAM roles and policies. we can use sa to fetch secrets from secretmanager 69 | 70 | what are initcontainers? 71 | 72 | these are special containers run before main container run. we can run one or many init containers, all containers run in sequence. init containers will come to completion, we use them to check the dependencies are running fine before main container starts another example can be fetching the secrets and provide to main container. they do heavy lifting keeping the main container lightweight and less attack surface by not installing utility tools in main container. 73 | 74 | what is emptyDir? 75 | it is pod temporary storage exist until the pod lives, all containers inside pod can acess emptyDir volumes. 
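a minimal sketch putting these together (image names and paths are illustrative; the namespace, secret id and service account name are the ones used above): the init container fetches the secret into an emptyDir volume, and the main container reads it after the init container completes

apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: expense
spec:
  serviceAccountName: expense-mysql-secret    # sa annotated with the IAM role, so the aws cli call is authorised
  volumes:
  - name: secrets
    emptyDir: {}                              # temporary pod storage, lives only as long as the pod
  initContainers:
  - name: fetch-secret
    image: amazon/aws-cli                     # illustrative image that ships the aws cli
    command: ["sh", "-c", "aws secretsmanager get-secret-value --secret-id expense/mysql/creds --query SecretString --output text > /secrets/creds"]
    volumeMounts:
    - name: secrets
      mountPath: /secrets
  containers:
  - name: backend
    image: nginx                              # placeholder for the real backend image
    volumeMounts:
    - name: secrets
      mountPath: /secrets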
76 | 77 | OIDC provider 78 | sa --> IAM Role --> IAM Policy (external roles) 79 | sa --> Role and RoleBinding (Internal resources access) 80 | 81 | 82 | -------------------------------------------------------------------------------- /session-58.txt: -------------------------------------------------------------------------------- 1 | ReplicaSet 2 | Deployment 3 | StatefulSet 4 | DaemonSet --> makes sure pod replica runs on every node. monitoring purpose and logs collection, metrics collection, etc. 5 | 6 | emptyDir --> temp storage for pod 7 | 8 | hostPath --> filesystem path in the underlying worker node. it is not secure pods should not access host filesystem directly. but deamonset is only the exception for admins to collect logs and metrics... 9 | 10 | pod-1 is in node-1 11 | pod-2 is in node-2 12 | 13 | pod-1 should accept traffic from pod-2 14 | 15 | 10.0.0.0/16 -------------------------------------------------------------------------------- /session-59.txt: -------------------------------------------------------------------------------- 1 | rolling update --> zero downtime 2 | 4 pods 3 | 5 pods, 4 old 1 new 4 | 1 old pod terminate, 2nd new pod 5 | for few sec your app is serving both old and new version 6 | blue/green 7 | app serves single at any point of time 8 | easy rollback due to stand by 9 | 10 | blue/green deployment is a zero downtime strategy 11 | for example if blue is running, we will create a new set of infra called green 12 | we do some health checks or sanity testing based on project requirement 13 | if green infra passed testing, we will switch routing from blue to green 14 | now green is running version and blue is standby version 15 | after few days, if any major defects found out, we can easily switch over to old version i.e blue 16 | this process goes on 17 | 18 | 1.0.0 --> blue 19 | 2.0.0 --> green 20 | 3.0.0 --> blue 21 | 22 | RDS is created,but we need to create schema, tables, user, etc... 23 | 24 | DNS works on port number 53. EKS DNS resoultion happens on udp 53. 25 | 26 | TCP vs UDP 27 | =========== 28 | Transfer control protocol 29 | User datagram protocol 30 | 31 | COMP-1 --> COMP-2 32 | 33 | SYN --> SYN 34 | COMP-1 <-- SYN-ACK 35 | ACK RECEIVED --> COMP-2 36 | 37 | data transfer from COMP-1 to COMP-2 38 | 39 | DATA SENT --> DATA RECEIVED 40 | DATA RECEIVED ACK <-- 41 | 42 | Reliable protocol, no data loss 43 | 44 | UDP 45 | ========== 46 | fire and forget 47 | 48 | COMP-1 --> COMP-2 49 | speed because no session ack overhead, no data ack, etc. 50 | 51 | DNS --> EKS UDP 53 52 | DNS retry 53 | 54 | 1st deployment --> blue 55 | main service --> blue 56 | no rollback 57 | 58 | 2nd deployment --> green 59 | run preview service 60 | edit main service 61 | if problem rollback 62 | 63 | 64 | R53 --> web alb 65 | 66 | R53 --> another ALB --> another TG -------------------------------------------------------------------------------- /session-60.txt: -------------------------------------------------------------------------------- 1 | 2 | 1. 
Create ingress resource that creates ALB, Listener, Rule and Target group 3 | 4 | eksctl utils associate-iam-oidc-provider \ 5 | --region us-east-1 \ 6 | --cluster expense-dev \ 7 | --approve 8 | 9 | eksctl create iamserviceaccount \ 10 | --cluster=expense-dev \ 11 | --namespace=kube-system \ 12 | --name=aws-load-balancer-controller \ 13 | --attach-policy-arn=arn:aws:iam::315069654700:policy/AWSLoadBalancerControllerIAMPolicy \ 14 | --override-existing-serviceaccounts \ 15 | --region us-east-1 \ 16 | --approve 17 | 18 | helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=expense-dev --set serviceAccount.create=true --set serviceAccount.name=aws-load-balancer-controller 19 | 20 | helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=expense-dev 21 | 22 | 23 | 2. We create ALB, Listener, Rule, TG through terraform. EKS adds the pods to the TG 24 | 25 | kubernetes.io/role/elb. 26 | 27 | NODE --> Root volume(OS) --> extra disk 28 | 29 | NODE --> Mount extra disk to node 30 | 31 | -------------------------------------------------------------------------------- /session-61.txt: -------------------------------------------------------------------------------- 1 | git 2 | ====== 3 | 4 | create repo 5 | clone the repo 6 | adding the file to staging area 7 | commit the changes to local repo 8 | push the changes to central repo 9 | git pull 10 | 11 | git init 12 | git remote add origin 13 | 14 | main --> production 15 | 16 | coco cola 17 | ========== 18 | change the taste --> taste by developers first 19 | taste by company bod --> feedback 20 | small crowd --> feedback 21 | 22 | branching strategy 23 | ------------------------ 24 | create another branch from main branch 25 | do the changes 26 | do the deployment 27 | do the testing 28 | do the scanning 29 | 30 | if everything is good, then deploy it into QA ENV, UAT, PRE-PROD, PROD 31 | 32 | we need to get the changes from another branch into main branch 33 | 34 | merging, rebase, PR, squash, reset, revert, cherry pick, etc... 35 | 36 | PR --> Pull Request 37 | 38 | branch create ---> git checkout -b 39 | 40 | Merge 41 | ------ 42 | -> you can do merge to get the changes from feature branch into main branch 43 | -> merge always creates extra commit called merge commit. it will have 2 parents 44 | -> merge preserves the history 45 | -> merge is safe since it prevents the history, you can use merge when multiple people working on the same branch 46 | 47 | Rebase 48 | ------ 49 | -> There will be no extra commit created 50 | -> rebase will not preserve history, it maintains linear history 51 | -> rebase rewrites the history(change in commit id) as if it is originated from main branch 52 | -> we can use rebase when a single person works on one branch 53 | 54 | Why you get conflicts, how do you resolve? 55 | ------------------------------------------ 56 | 57 | 58 | -------------------------------------------------------------------------------- /session-62.txt: -------------------------------------------------------------------------------- 1 | Branching Strategy 2 | --------------------- 3 | git model --> master and develop. 
feature, hotfix, release
feature branching --> master/main, feature
trunk based development --> master/main (100% automated test cases)

long lived branches --> main/master, develop
short lived branches --> feature, hotfix, release

android --> 20, 19, 18, 17, 16, 15

git model
---------
this is a legacy model; you can use it when your application supports multiple versions

master and develop

master --> production

develop
---------
source: main/master

08-JUN-2025 new release
------------------------

feature branches --> 3 feature branches

source: develop
destination: develop
testing goes on in the feature branch in the DEV environment. once developers are satisfied, they merge the changes to the develop branch

release/08-jun
--------------
source: develop
destination: master and develop

QA, UAT, PRE-PROD deployments will happen and testing goes on... if there are defects, they are fixed here in the release branch.

PROD deployment happens from the release branch; if it succeeds, the changes are merged to master and develop

hotfix
--------------
source: main/master
destination: main/master and develop

they create a hotfix branch and do the deployment in the DEV environment; if that succeeds they go to PROD from the hotfix branch. if the deployment is successful, they merge the changes into both master and develop

feature branching
-------------------------
main branch and feature branch
feature branch --> DEV deployment
PR --> main --> QA, UAT, PRE-PROD, PERF, PROD

we tag the commits when there is a successful deployment
feature branch --> development-related functional test cases.. 100%

commit id change --> code is changed
feature --> DEV success --> Main --> Extra commit

git reset, revert, cherry pick

restore is used to bring changes back from the staging area to the workspace, or to discard changes in the workspace

reset --> undo the changes

workspace --> staging area --> local repo commits --> remote repo

soft --> keeps the changes in the staging area
mixed --> keeps the changes in the workspace
hard --> deletes the faulty changes completely

reset is useful only in local repos; we should not perform it on commits that are already in central repos
soft, mixed and hard options
history will be rewritten

revert
-----------
when the changes have already moved to the remote repo and you want to undo them, you can go for revert
we can't rewrite the history, but we can correct the mistakes using revert
it will create an extra commit

cherry-pick
-----------
if you are working on one branch and want to pick a particular commit from another branch, you can use cherry-pick

git cherry-pick <commit-id>
you may get conflicts; you need to resolve them and continue
-------------------------------------------------------------------------------- /session-63.txt: --------------------------------------------------------------------------------
git squash
git stash

squash --> 30 commits
if you rebase these commits, all 30 commits will be pushed to the main branch

feature-1 --> code-1
feature-2 --> code-2

feature-2 - feature-1 = code-2 - code-1

using squash we can compress all the commits into a single commit. recommended only when a single person is working on a particular branch. usually we use it before rebase.
not recommended to do in main/long lived branches... 13 | 14 | squash == interactive rebase 15 | 16 | pick 408edf5 17 | squash 0810a4e 18 | squash fab9d2f 19 | squash 2bb906f 20 | 21 | git stash 22 | 23 | when we are working on a particular feature, if we have to move to another branch git will not allow. So we can stash the changes by using git stash command, go to another branch complete the work there and again come to our feature branch and retrieve our changes using git stash pop command 24 | 25 | k8 upgrade, k8 architecture 26 | ============================= 27 | daws82s.online --> ALB --> TG --> Pod IP 28 | 29 | You can't do any changes or deployments to the applications, but existing applications will run while k8 upgrade is going on... 30 | 31 | blue --> running version 32 | green provision, basic testing 33 | change service to green 34 | 35 | blue nodes are running 36 | will create same number of green nodes 37 | cordon green nodes, scheduling disabled 38 | eks control plane upgrade to 1.32 39 | green nodes also wil be upgraded to 1.32 and uncordon 40 | cordon blue nodes, scheduling disabled 41 | drain blue nodes, so pods will go to green nodes 42 | now we can delete blue group 43 | 44 | 08:30-09:30 PM upgrade started 45 | 46 | We should announce this upgrade atleast 1 month before as planned activity, so that application team can plan their release. we will ask app team to be stand by to check apps health after upgrade 47 | 48 | circuit breaker 49 | mTLS 50 | rate limiting 51 | 52 | 53 | -------------------------------------------------------------------------------- /session-64.txt: -------------------------------------------------------------------------------- 1 | K8 Architecture 2 | =================== 3 | Master/Control plane and worker node 4 | 5 | Api server: it is the first component that receives request from client. it checks authentication and authorization intially and then forwards the request to appropriate components 6 | 7 | Scheduler: scheduler takes the decision about where to schedule the work loads. it checks user preferences like taints, toleration, node affinity, pod affinity, etc. if no preference scheduler runs the pod on any random available worker node 8 | 9 | Controll manager: node control, replication controll, sa controller.. replication control is responsible to make sure desired number of pod replicas run all the time..node control is responsible to make sure all the nodes are connected and ready.. 10 | 11 | etcd: this is database to our cluster. cluster configs, resource configs are stored here.. 12 | 13 | cloud-controller-manager: responsible to integrate eks cluster with other cloud services like IAM, LB, Ingress, etc. 14 | 15 | worker node components: 16 | ======================== 17 | 18 | kubelet: responsible to connect the worker node to control plane. it gets the pod spec from master node and make sure pod runs on the node 19 | 20 | kube-proxy: it is like DNS to the pods. It provides the network rules on how to forward request through workloads like service to pods. pod to pod communication 21 | 22 | container runtime: is responsible to run the image into container. 23 | 24 | add-ons: vpc-cni, kube-dns, metrics-server, ebs, efs, etc. 25 | 26 | Jenkins 27 | =================== 28 | continous integration --> integrate the code continously 29 | 1. build errors 30 | 2. 
deployment errors 31 | 32 | compile, dependencies install, pack the code into artifact, build the image, push image to central registry 33 | 34 | shift-left process --> building, scanning, testing the application in DEV environment 35 | 36 | clone the code, dependencies install, scan the code, unit tests, pack the code 37 | 38 | process of integrating the continously when developer push the code to remote repository. It involves clone the code, compile the code, install dependencies, do multiple types of scans, run unit test cases and create the image/artifact, store in central registry... we follow shift left process to identify or make sure quality of code in the early stages of DEV environment... 39 | we use jenkins to do these CI.. 40 | 41 | Jenkins is a plain web server, jenkins power lies in pluings, if you want jenkins to do some taks you need either install plugin or command inside jenkins server/node... 42 | 43 | everything in jenkins is called as job 44 | 45 | pre-build, build and post-build 46 | 47 | freestyle vs pipeline 48 | ---------------------- 49 | can't restore if something is misconfigured 50 | can't track who did the changes 51 | can't version control 52 | can't review the code 53 | 54 | pipeline is a code 55 | ---------------------- 56 | version control 57 | easy to review 58 | easy to restore 59 | easy to track 60 | easy to extend to multiple projects 61 | 62 | 63 | 64 | github.joindevops.com --> github is installed in joindevops servers 65 | 66 | 67 | -------------------------------------------------------------------------------- /session-65.txt: -------------------------------------------------------------------------------- 1 | Master and Agent/Node architecture 2 | --------------------------------- 3 | java, nodejs python, reactjs 4 | 5 | 1acre 6 | 100 acres --> he should have employees/resources 7 | 8 | only master can't handle multiple builds at a time 9 | every project needs different runtime environments and different versions. master can't handle all these things, we will have multiple agents specialised for different environments 10 | 11 | host IP, username and password 12 | 13 | declarative vs scripted 14 | ------------------------- 15 | 16 | scripted --> it is groovy based pipeline from jenkins beginning 17 | declarative --> jenkins-2.0 launched declarative pipeline using groovy 18 | 19 | scripted --> pipeline is compiled at the time of runtime 20 | declarative --> entire pipeline is compiled first and then run 21 | 22 | declarative pipeline --> standard/easy syntax 23 | scripted --> dynamic groovy syntax, little tough but we can control pipeline with lot of flexibility 24 | 25 | we use hybrid for more advantages 26 | 27 | webhooks 28 | ----------- 29 | when developer push the code, it should automatically trigger CI pipeline 30 | 31 | github --> event driven --> jenkins 32 | 33 | 34 | -------------------------------------------------------------------------------- /session-66.txt: -------------------------------------------------------------------------------- 1 | aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 315069654700.dkr.ecr.us-east-1.amazonaws.com 2 | 3 | 4 | docker build -t 315069654700.dkr.ecr.us-east-1.amazonaws.com/expense/backend:appVersion . 
5 | 6 | docker push 315069654700.dkr.ecr.us-east-1.amazonaws.com/expense/backend:appVersion 7 | 8 | scan 9 | unit test cases 10 | 11 | unit test, functional testing, integration testing 12 | 13 | functions --> unit test cases 14 | 15 | login(username, password){ 16 | connectToDB() 17 | fetchUser() 18 | } 19 | 20 | connectToDB(){ 21 | 22 | } 23 | 24 | fetchUser(){ 25 | 26 | } 27 | 28 | proper username and password --> success test cases --> should be logged in 29 | special char in user --> failure test case --> this should fail 30 | 31 | infra 32 | application deployment on top of infra 33 | 34 | new version .jar file 35 | 36 | /var/lib/jenkins 37 | /var/lib/jenkins 38 | 39 | -------------------------------------------------------------------------------- /session-67.txt: -------------------------------------------------------------------------------- 1 | 1. build the image 2 | 2. how to run the image --> manifest 3 | 4 | stream editor --> sed 5 | 6 | CUD 7 | 8 | sed -e 'syntax' file 9 | 10 | insert line 11 | 12 | sed -e '1 i Hello World' users 13 | 14 | sed -i 's/IMAGE_VERSION/$version/g' values-dev.yaml 15 | 16 | load balancer --> listener --> rule --> target group --> ip 17 | load balancer --> listener --> rule --> target group --> target group binding adds the pod ip to target group 18 | -------------------------------------------------------------------------------- /session-69.txt: -------------------------------------------------------------------------------- 1 | Infra create 2 | terraform init, plan and apply 3 | 4 | DevOps team develops the pipeline... DevOps will do the settings in Git. DevOps team discuss branching straetgy with Development team... 5 | 6 | CI job triggers CD job 7 | 8 | Shift Left 9 | =========== 10 | Instead of testing and scanning the application in higher environments we can shift all the possible stages of scanning and testing into DEV environment so that issues can be filtered early... 11 | 12 | Scanning 13 | =========== 14 | Source code analysis --> SonarQube 15 | SAST --> Static application security testing --> SonarQube 16 | DAST --> Dynamic application security testing --> Veracode 17 | open source library scan --> scanning dependencies 18 | image scan --> docker images scan 19 | 20 | 1. Install SonarQube scanner plugin, this enables sonarqube options in tools and system configuration 21 | 2. Configure sonarqube scanner in tools section 22 | 3. Configure sonarqube URL in system configuration including authentication 23 | 24 | 25 | Quality Gates 26 | ============= 27 | When sonarscan is over we need to take decission based on quality gates. 28 | 29 | Code Coverage 30 | 31 | 20 functions --> unit test cases for 20 functions --> 100% code coverage 32 | 33 | commit1, commit2 34 | 35 | new code = commit2 - commit1 36 | issues --> 0 37 | vulnerabilities --> 0 38 | code snells --> 0 39 | maintability rating --> A 40 | security rating --> A 41 | code coverage --> min 80% 42 | 43 | 44 | How did you integrate sonarqube in your project? 45 | 46 | We installed SonarQube Server. We added sonarscanner plugin in our jenkins. we configured sonarscanner tool and configured sonarqube server in jenkins pipeline... 47 | 48 | We integrate sonarqube jenkins code into our pipeline, scanner analyse the code and push to server. we configured qualitygates in sonarqube server. if quality gates fails our build also will be failed. 49 | 50 | What is qualitygates? 51 | 52 | To make the overall code and new code clean, we configured parameters in sonarqube server that should pass. 
Our parameters are

issues --> 0
vulnerabilities --> 0
code smells --> 0
maintainability rating --> A
security rating --> A
code coverage --> min 80%

if the code hasn't passed these parameters, the code quality check fails. We integrated Jenkins and SonarQube through a webhook; our pipeline waits for the results, and if the quality gate fails we fail the pipeline..

-------------------------------------------------------------------------------- /session-70.txt: --------------------------------------------------------------------------------
DAST

when the application is running, is there any chance to attack it? when the app is in the pre-prod stage we can use tools like Veracode. We provide our application URL and scan it; Veracode sends the attacks and gives us a report

before going to PROD, we do this scan once, get the report and attach it to the release process for approval...

Image scan
=============
We are using ECR scan; if there are any critical findings, like a base image upgrade or package upgrades, we fix them by changing the Dockerfile

Opensource scan
=============
we are using GitHub Dependabot; if there are any critical issues to be cleared we will stop the build. usually the issues come when developers are using old dependency versions, so we force them to update to new versions and to use official libraries only.

Jenkins Shared Libraries
=========================
DRY --> Ansible Roles, Terraform Modules...

Programming Language    Deployment Platform
Nodejs                  EKS

nodeJSEKS --> all projects with this combination can follow the same standards
nodeJSVM
nodeJSPCF

Map key --> Value

nodeJSEKSPipeline.runPipeline()
-------------------------------------------------------------------------------- /session-71.txt: --------------------------------------------------------------------------------
DevOps --> build and deploy

DevSecOps --> shift left

SonarQube --> onboarding, tools team (GitHub, Jenkins, SonarQube, Veracode, etc.)
1. plugin install, sonarqube scanner and sonarqube server configuration
   implement in the pipeline --> 1 month

2. Quality gates setup --> 3 months
3. We started failing the pipelines --> 2 months

DAST --> Veracode
we found critical issues regarding TLS versions: all our AWS listeners were on TLS 1.1. When I found this, we upgraded the listener security policy to TLS 1.3, so the high and critical issues were cleared

ECR image scans were not enabled earlier; recently we enabled them and started scanning all the images and fixing them by updating critical packages in the Dockerfile. I implemented all the Docker best practices

We enabled Dependabot to scan the libraries and forced developers to update them to solve critical and high priority issues

We forced developers to write unit test cases with a minimum coverage of 80%; we ensured this in the SonarQube Quality Gates...

Jenkins Shared Libraries
========================
These are centralised pipelines based on the DRY principle (don't repeat yourself). instead of maintaining pipelines for each and every project, we can create pipelines as libraries so that they can be extended by multiple projects. updating the pipelines is very easy since they are centralised, and enforcing best standards can be achieved....

We developed multiple pipelines based on programming language and deployment platform.
for example we have nodejsVMPipeline, nodeEKSPipeline. Any new project they can just call our pipelines at run time. 26 | 27 | We had normal pipelines earlier. We recently upgraded multi branch pipeline to support feature brnaching strategy. 28 | 29 | developers start the development in feature, for every commit they push to central repo we trigger the feature pipeline automatically. this pipeline is based on shift left. We clone the repo, install dependencies, all types of scans, unit testing, deploy the application in development environment. 30 | 31 | build once in DEV and run anywhere by changing the configuration. Once application is scanned, unit tested and functional tested in DEV environment. developers raise pull request from feature to main branch... We need to deploy application in QA environment 32 | 33 | We are doing helm deployments, we have different values file for different environment 34 | 35 | DEV environment --> unit tests and functional tests 36 | QA environment --> integration tests will be configured 37 | 38 | 100 --> 50 to 60 test cases are automated 39 | 40 | JUN 09th 02:00AM 41 | CR process --> change release process 42 | 43 | JIRA, Service now --> ticketing tool 44 | CR tool --> open source, internally developed 45 | 46 | CR request raise atleast 3 days before --> green 47 | deployment time 48 | version 49 | approval 50 | what changes are going 51 | if failed, how do you retreive 52 | scan results 53 | test results --> functional, regression, integration, etc.. 54 | 55 | first approval --> team lead 56 | next approval --> delivery/release manager 57 | next approval --> client 58 | 59 | prod deployment button will be enabled exactly at 02:00AM. deployment window may be 2hours. 60 | 61 | if success --> inform all stakeholders. ask dev team to run sanity test cases...then developement team send everything ok in email 62 | 63 | if failure --> make sure it is reverted to previous version. then raise an incident. preare RCA, lessons learnt, how can you prevent these failures in future. 64 | 65 | 66 | -------------------------------------------------------------------------------- /session-72.txt: -------------------------------------------------------------------------------- 1 | Monitoring 2 | ============== 3 | Whitebox monitoring --> we know the internal details of the systems. metrics, data points, errors, traffic, etc... 
4 | Blackbox monitoring --> as an end user checking the systems are working fine or not 5 | 6 | 4 golden signals of monitoring 7 | 8 | Latency --> time to get response --> less latency is preferred 9 | Traffic --> number of requests to our system 10 | Errors --> Especially 500 errors 11 | Saturation --> resources of servers should be measured 12 | 13 | Time Series Database 14 | ==================== 15 | timestamp, value of that metric at that timestamp 16 | 17 | Quarterly, half yearly, yearly, weekly, daily 18 | 19 | Prometheus 20 | 21 | 22 | [Unit] 23 | Description=Prometheus Monitoring System 24 | Wants=network-online.target 25 | After=network-online.target 26 | 27 | [Service] 28 | User=prometheus 29 | ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml 30 | Restart=on-failure 31 | 32 | [Install] 33 | WantedBy=multi-user.target 34 | 35 | 36 | CC cameras 37 | ============= 38 | cc cameras are agents --> central monitoring servers 39 | 40 | -------------------------------------------------------------------------------- /session-73.txt: -------------------------------------------------------------------------------- 1 | region 2 | Monitoring --> true 3 | 4 | 1. Setting the tag as Monitoring --> true 5 | 2. Prometheus server should have describe ec2 instances permission... 6 | 7 | P0, P1, P2, P3, P4 8 | SLA --> Service level agreement 9 | P0 --> SLA --> 30-60min 10 | P1 --> SLA --> 60-120min 11 | P2 --> SLA --> 4-8hr 12 | P3 --> SLA --> 2 days 13 | P4 --> SLA --> 4 days 14 | 15 | Raise an alert and trigger emails, teams chat, jira incidents, etc.. 16 | 17 | rules will create alerts, alert manager component can manage the alert 18 | 19 | Every scrape 15sec Rules evaluation --> Alert creation --> Alert Firing to alertmanager 20 | 21 | Email Config 22 | =============== 23 | 24 | 25 | defaults 26 | auth on 27 | tls on 28 | tls_trust_file /etc/ssl/certs/ca-bundle.crt 29 | logfile /var/log/msmtp.log 30 | 31 | account gmail 32 | host smtp.gmail.com 33 | port 587 34 | from your_email@gmail.com 35 | user your_email@gmail.com 36 | password your_app_password 37 | 38 | account default : gmail 39 | 40 | 41 | { 42 | echo "To: your-to-mail" 43 | echo "Subject: prometheus mail testing" 44 | echo "Content-Type: text/html" 45 | echo "" 46 | echo "prometheus mail testing" 47 | } | msmtp "your-to-mail" 48 | 49 | counter and gauge 50 | ================= 51 | 200GB 52 | 01-MAY --> 0 53 | 02-MAY --> 2GB 54 | 03-MAY --> 3GB 55 | 30-MAY --> 145GB 56 | 57 | 01-JUN --> reset to 0 58 | -------------------------------------------------------------------------------- /session-74.txt: -------------------------------------------------------------------------------- 1 | 3 Tier 2 | =========== 3 | Load Balancer --> Frontend Server --> Backend --> DB 4 | 5 | MongoDB 6 | ============ 7 | NoSQL Database --> Collections and documents --> Json 8 | SQL --> mySQL, MSSQL, Postgress, Oracle, etc. 
--> tables and columns 9 | 10 | Products --> NoSQL 11 | 12 | { 13 | "id": 1234, 14 | "descripton": "" 15 | } 16 | 17 | Redis 18 | ============= 19 | in memory cache database 20 | 21 | Data --> Disk --> RAM --> User 22 | Data is stored in RAM 23 | Key -> Value 24 | 25 | Applications --> DB 26 | Applications --> Cache --> DB 27 | 28 | RabbitMQ 29 | ============== 30 | Queue/Topic kind of database 31 | 32 | Synchronous and Asynchronous 33 | 34 | https://daws82s.online 35 | 36 | Synchronous 37 | ============ 38 | request expects immidiate response 39 | if no response with in time limit it will be error 40 | 41 | Asynchronous 42 | ============= 43 | Mobile-1 --> Mobile-2 44 | 45 | Mobile-2 is offline for 1 hour. 46 | After 1 hour mobile-2 is online 47 | 48 | 1. fire and forget 49 | 2. no need to wait for response, messages will be delivered when other system is online. 50 | 3. messages will be stayed in the queue until other system conusmes 51 | 52 | 1. point to point communication --> Queue 53 | 2. topic and subscribe communication --> one to many 54 | 55 | Amazon --> Ekart 56 | 57 | Amzon sends order details to Ekart Queue 58 | 59 | 127.0.0.1 --> localhost 60 | 61 | Programming language 62 | ====================== 63 | nodejs --> .js 64 | build tool --> npm 65 | build file --> package.json 66 | 67 | 68 | 69 | -------------------------------------------------------------------------------- /session-75.txt: -------------------------------------------------------------------------------- 1 | Structured --> MySQL tables and columns 2 | Unstructured --> random folders,files, images, etc 3 | SemiStructured --> Log files 4 | 5 | MongoDB --> NoSQL 6 | Redis --> in memory, cache database 7 | MySQL 8 | RabbitMQ --> Asynchronous. fire and forget 9 | 10 | Java and dotnet --> enterprise application development 11 | nodejs, .net core, python, go, etc... 12 | 13 | OS --> Redhat, ubuntu, Suse, etc.. linux distros 14 | 15 | you need to install java --> you can install java seperatly and maven seperatly. if you install maven you can get java also 16 | 17 | when you install nodejs you are getting npm also 18 | nodejs --> programming language 19 | npm --> build command 20 | 21 | python programming and pip 22 | 23 | creating system users, system packages, application directories, permissions, etc.. 24 | 25 | download code 26 | install dependencies 27 | .service files 28 | start the application 29 | 30 | Java 31 | ================== 32 | source files --> .java 33 | build tool --> maven 34 | 35 | first we need to compile the code 36 | 37 | source code --> compile --> byte code(computer can understand easily) 38 | JDK, JRE 39 | JDK --> Java development kit. While developing you need this 40 | JRE --> Java runtime environment. You need this to run compiled/packaged applications 41 | 42 | JDK > JRE 43 | JDK = JRE + Extra utilities 44 | build file = pom.xml 45 | 46 | what pom.xml contains? 47 | 48 | project information. name, description, version, etc. 49 | dependencies 50 | 51 | groupId, artifactId and version 52 | 53 | first name --> ramesh 54 | last name --> sura 55 | dob 56 | adhar card, pan card, admission number 57 | 58 | HDFC bank 59 | =============== 60 | LOB --> Savings, Current, Mobile banking, Stocks, Loans, etc... 61 | process is important than persons 62 | 63 | com.hdfc --> group id 64 | artifactId --> savings.internetbanking 65 | version --> 2.0 66 | 67 | com.hdfc.savings.internetbanking:2.0 68 | 69 | groupId, artifactId and version is the maven way of representing components inside project. 
for example com.hdfc is the groupId, savings.internetbanking is the artifactId, and we can have a version like 2.0

maven lifecycle
===============
mvn validate --> verify the project structure
mvn clean --> remove previous builds
mvn compile --> compile the source code to bytecode
mvn test --> run unit test cases
mvn package --> creates the target folder and keeps the .jar (Java Archive) file inside
package == validate + compile + test
mvn install --> install the package into the local repository
mvn deploy --> push the package to the central/remote repository

.msi is a downloaded file in Windows; it is a packaged file
.iso

the maven lifecycle consists of the different phases listed above

monolithic --> .war file (web archive file)
roboshop-backend --> catalogue+cart+user+payment+etc..

legacy --> .ear file --> enterprise archive file
frontend and backend
-------------------------------------------------------------------------------- /session-76.txt: --------------------------------------------------------------------------------
1. Write Dockerfile
2. Create image
3. Push image to ECR
4. Deploy into EKS through Helm

helm
  templates/
    deployment.yaml
    service.yaml
    configmap.yaml
  values.yaml
  Chart.yaml
--------------------------------------------------------------------------------
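a minimal sketch tying the four steps together, reusing the ECR repo and the values-file sed trick from the earlier sessions; the image tag 1.0.1, the chart path helm/, the values-dev.yaml layout and the release name backend are only illustrative:

# 1. login to ECR and build the image
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 315069654700.dkr.ecr.us-east-1.amazonaws.com
docker build -t 315069654700.dkr.ecr.us-east-1.amazonaws.com/expense/backend:1.0.1 .

# 2. push the image to ECR
docker push 315069654700.dkr.ecr.us-east-1.amazonaws.com/expense/backend:1.0.1

# 3. put the new tag into the values file (IMAGE_VERSION is a placeholder inside values-dev.yaml)
sed -i "s/IMAGE_VERSION/1.0.1/g" helm/values-dev.yaml

# 4. deploy or upgrade the release into EKS through Helm
helm upgrade --install backend helm/ -f helm/values-dev.yaml -n expense

helm upgrade --install deploys the chart the first time and upgrades it on later runs, so the same command works for every release.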