├── session-01.txt
├── session-02.txt
├── session-03.txt
├── session-04.txt
├── session-05.txt
├── session-06.txt
├── session-07.txt
├── session-08.txt
├── session-09.txt
├── session-10.txt
├── session-11.txt
├── session-12.txt
├── session-13.txt
├── session-14.txt
├── session-15.txt
├── session-16.txt
├── session-17.txt
├── session-18.txt
├── session-19.txt
├── session-20.txt
├── session-21.txt
├── session-22.txt
├── session-23.txt
├── session-24.txt
├── session-25.txt
├── session-26.txt
├── session-27.txt
├── session-28.txt
├── session-29.txt
├── session-30.txt
├── session-31.txt
├── session-32.txt
├── session-33.txt
├── session-34.txt
├── session-35.txt
├── session-36.txt
├── session-37.txt
├── session-38.txt
├── session-39.txt
├── session-40.txt
├── session-41.txt
├── session-42.txt
├── session-43.txt
├── session-44.txt
├── session-45.txt
├── session-46.txt
├── session-47.txt
├── session-48.txt
├── session-49.txt
├── session-50.txt
├── session-51.txt
├── session-52.txt
├── session-53.txt
├── session-55.txt
├── session-56.txt
├── session-57.txt
├── session-58.txt
├── session-59.txt
├── session-60.txt
├── session-61.txt
├── session-62.txt
├── session-63.txt
├── session-64.txt
├── session-65.txt
├── session-66.txt
├── session-67.txt
├── session-68.txt
├── session-69.txt
├── session-70.txt
├── session-71.txt
├── session-72.txt
├── session-73.txt
└── session-74.txt

/session-01.txt:
--------------------------------------------------------------------------------
What is DevOps?

School management system
-------------------------
50 years back
system = stakeholders == students, teachers, parents

100 --> 20 or 30% pass percentage

only one Final exam
-------------------
students --> they are not serious from DAY-1
teachers --> they are not serious to complete the syllabus from DAY-1
parents --> they start worrying about their kids

pass percentage is very low

we need to change the process

30 years back
------------------------
unit tests, quarterly, half yearly, pre final

unit test-1 --> 1.5 months

students --> they are not serious from DAY-1, but at least they are serious from 1 week before
teachers --> they are serious from DAY-1 to complete the syllabus
parents --> they are serious from DAY-1

30% pass in unit test-1

parents and teachers start understanding the student's behaviour

unit test-2 --> 31%
unit test-3
quarterly
half yearly
prefinal
final --> 80%

10 years back
---------------
slip test
300 days
200 exams

99% pass percentage

SDLC --> Software development lifecycle

1. Requirements
2. Planning
3. Design
4. Developing
5. Deployment
6. Testing
7. Maintenance

Waterfall Model
------------------
father's generation == Waterfall model

2 years time to complete

stakeholders = customers, developers, testers, managers, end users, etc.
developers --> are they serious from DAY-1?
testers --> no, they are not serious
clients --> they are worried from DAY-1

after 1 year they get serious

developers develop it in 6 months
deploy the application --> build and release in DEV

testers start testing --> 100 defects
some are invalid --> 20 invalid defects

after 2 years, they release something --> maruti 800

Agile
-------------------------
Sprints --> Installment-based deliveries

modules -->
User management --> 1 month
signup
login
forgot password
change password

product management
list the products
prices
cart
add to cart
shipping
adding address
payment
order management

User management --> 1 month
15 days --> development
15 days --> testing and deployment

testing --> 30 defects
10 invalid defects

client tests the user management --> 10 bugs

product management == clear the previous bugs and develop product management

ferrari == fortuner

DevOps with Agile
-----------------------
DAY-1

developer writes some code; the same day, build (compile and pack) the code and test it

100 lines of code --> testing is simple
the number of invalid defects is less
2 valid defects --> very easy for the developer
Ops --> testing, build and release (DevOps)

We create a process called CICD with the help of tools; the same day we are able to build and test the application

DEV QA UAT PRE-PROD PERF PROD

1. Faster releases
2. Fewer defects

speed and accuracy

DevOps == Continuous improvement
DevSecOps --> added security inside it
shift left
GitOps --> Entire process should be in Git.

What is a computer?
-----------------------
Can I call a mobile a computer?
Can I call a TV a computer?
Can I call a Server a computer?

Software --> communication between computers

Facebook --> A group of computers/servers

Computer --> computes something, it can do some work

IP enabled device
-----------------
CPU(Processor)
RAM
OS
ROM(Storage)

Server --> Deploy the application

Client and Server

Lawyer and Me

Lawyer --> Server (Serves someone)
Client --> Me

Browser --> Client
edge, chrome, firefox, opera, etc.

Which OS is suitable for Servers?
---------------------------------
Unix, Windows Server

Windows Server:
not open source --> Only MS controls it
graphics --> consumes resources (CPU, RAM) a lot
Cost --> High
Security --> less
Speed --> less

clicks in windows --> commands --> Hardware

Why are Azure services down?

Even if you are running an Azure VM on Linux, it is still down. AKS, MSSQL down
the underlying Azure infra is MS OS

Unix == Linux

Unix is hardware locked --> Mac Laptop == Mac OS + Mac Hardware

Dell --> Dell Hardware
OS you can choose anything

Linus Torvalds built Linux from scratch without the hardware lock, and released it to the public

Linux --> Ubuntu, CentOS, Fedora, Android, Redhat, etc.
commands --> Hardware

Redhat --> Enterprise; if you face any problem they will help us
Redhat == CentOS/Almalinux

Cloud and DevOps are the two wheels of a bike

AWS Account --> FREE (1 year free trial) --> 3,000 (there is a refund option)
------------
1. Use private banks
2. enable international usage
3. enable online usage

first name and last name should be as on the bank debit/credit card
address should be given the same as in the bank

Choose Personal account

Build errors
Deployment errors
Testing errors

--------------------------------------------------------------------------------
/session-02.txt:
--------------------------------------------------------------------------------
Session-02
--------------
Recap
what is devops?
waterfall vs agile vs devops
SDLC
what is a computer
client server architecture
Linux advantages

Create Linux Server
Connect to it
Create firewall/Security group
Command syntax
few commands

Authentication
---------------
1. What you know --> Username and password --> private systems
2. What you have --> username and tokens (RSA, Authenticator) --> Public systems
3. What you are --> Fingerprints, palm, retina, face, etc --> Public systems

lock and key are a pair

box and lock are public
key is private

key based mechanism
key pair == public key and private key

box = server == node == IP address
the public key is inside the server

while authenticating, the user should send his username and private key

Delhi --> Hyderabad (DTDC)
--------------------
flat no, apartment name, street name, city, pincode

Pincode (HYD DTDC)
Street name
apartment name
if no flat no, the letter is stuck at the apartment name

Server --> it has a lot of services, every service has a protocol

I can open google in a browser using HTTP-80/HTTPS-443

SSH protocol is used to connect to servers, with a particular port number: 22

0-65,535 ports == 65,536 ports

ssh --> 22
http --> 80
https --> 443
mysql --> 3306

what is the protocol, port, username, password/private-key

ssh-keygen -f — I should have some client to connect to the Linux server
gitbash, putty, mobaxterm

Gitbash --> git, Linux server, mini Linux in windows

/c/Users/ramesh

present working directory == pwd

.xls
.doc

ssh-rsa laptop-username@laptop-name

internet --> 0.0.0.0/0 --> public

1. import public key
2. create firewall

OS, CPU, RAM, HD

Amazon Linux 2023 == RedHat == CentOS

98.81.6.228
ec2-user
ssh
22
private-key
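Putting those connection pieces together from Git Bash — a minimal sketch; the key file name is an assumption, the IP is the one noted above:

ssh-keygen -f ~/.ssh/devops-key                  # creates devops-key (private) and devops-key.pub (public)
cat ~/.ssh/devops-key.pub                        # this is the public key you import into AWS
ssh -i ~/.ssh/devops-key ec2-user@98.81.6.228    # SSH protocol, port 22 by default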
/home/<username> == Landing Directory
$ --> normal user

command-name

pwd
clear
cd --> change directory

cd /

/ --> root directory

ls -l --> l for lengthy format
ls -lr --> reverse order

ls -lt --> based on time, latest on top
ls -ltr --> latest at the bottom

hidden files start with .
ls -la --> a means all

CRUD --> Create Read Update Delete

sign up --> creating a profile in amazon
Login --> reading my account
password change --> updating the profile
account delete --> deleting the account

placing order --> creating order
changing order details --> updating order
cancelling order --> deleting the order

CRUD

create, read, update and delete

touch --> create file

touch file-name --> creates the file
cat > devops.txt --> adding text
enter the text
ctrl+D

cat file-name --> reading the text

cat >> file-name --> append/add the text

cd .. --> one step back

rm --> remove file

mkdir --> create directory
mkdir devops

d --> directory
- --> file

rmdir --> remove only an empty directory

rm -r --> delete the folder and the content inside the folder

mv
command-name
mv awss.txt aws.txt

- or -- == Options

absolute (complete) path and relative path

cd .. --> relative path
cd /home/ec2-user --> absolute

1. Create AWS account
2. Install gitbash and notepad++
3. create keypairs
4. Import public key into AWS
5. Create security group
6. launch instance
7. connect to instance from gitbash

--------------------------------------------------------------------------------
/session-03.txt:
--------------------------------------------------------------------------------
Recap
-----------
created keys
imported public key into AWS
created SG
Created EC2
Connected to EC2
practiced few commands
absolute path and relative path
ports
protocols, SSH 22

-
--

ssh -i <key> ec2-user@IP

clear
pwd
cd
ls -l
ls -ltr
ls -la
cat
touch
mkdir
rmdir
rm
mv
uname

how to copy files

which file to copy, where to copy
ctrl c and ctrl v

cp

/etc/passwd

how to cut the file

grep
grep is case-sensitive: DevOps and devops are different

How to download files?

curl and wget

wget

curl

wget is used to download files; curl is used to show the text directly on the terminal. curl is used to run scripts

https://raw.githubusercontent.com/daws-81s/notes/main/session-02.txt

Sivakumar Reddy

cut command, delimiter

cut -d "/" -f1

https:

raw.githubusercontent.com
daws-81s
notes
main
session-02.txt

awk command

awk -F "/" '{print $1}'

awk -F "/" '{print $NF}'

/etc/passwd

ec2-user:x:1000:1000:EC2 Default User:/home/ec2-user:/bin/bash

How to get the list of users in a linux server?
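A quick answer using the commands above — usernames are the first field of /etc/passwd, with ":" as the delimiter:

cut -d ":" -f1 /etc/passwd
awk -F ":" '{print $1}' /etc/passwd    # same result with awk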
head and tail

head --> first 10 lines
tail --> last 10 lines

root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/usr/sbin/nologin
systemd-oom:x:999:999:systemd Userspace OOM Killer:/:/usr/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/usr/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/usr/share/empty.sshd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
libstoragemgmt:x:997:997:daemon account for libstoragemgmt:/:/usr/sbin/nologin
systemd-coredump:x:996:996:systemd Core Dumper:/:/usr/sbin/nologin
systemd-timesync:x:995:995:systemd Time Synchronization:/:/usr/sbin/nologin
chrony:x:994:994:chrony system user:/var/lib/chrony:/sbin/nologin
ec2-instance-connect:x:993:993::/home/ec2-instance-connect:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
ec2-user:x:1000:1000:EC2 Default User:/home/ec2-user:/bin/bash

How to get the lines in between 5-10?

cat file-name | head -n 10 | tail -n 6
(head -n 10 keeps lines 1-10; tail -n 6 of that gives lines 5-10)

tail -f --> how to see a running log

/var/log/messages

Editor
-----------
vim --> vi improved

vim <file-name>

Colon/Command Mode
---------------
:q --> Quit
:wq --> write and quit
:q! --> don't save the changes and exit
:set nu
:set nonu
:/ --> search from top
:? --> search from bottom

--------------------------------------------------------------------------------
/session-04.txt:
--------------------------------------------------------------------------------
Recap
-------
vim editor

Esc
command/colon mode
insert mode

Esc --> Mode
:
i

:q --> quit
:q! --> force quit without changes
:wq --> write and quit
:/ --> search from top
:? --> search from bottom
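A small substitution example for the colon-mode commands above (word choices are just illustrations):

:%s/devops/DevOps/g    # replace in the whole file
:s/devops/DevOps/g     # replace only in the current line
:1d                    # delete the first line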
:set nu
:set nonu
:noh
:%d
:1d --> delete 1st line
:s/word-to-find/word-to-replace/g
:%s/word-to-find/word-to-replace/g

u
yy
dd
p
shift+g --> takes to bottom
gg --> takes to top

insert

awk, cut commands

chmod ugo+rwx file-name
R --> 4
W --> 2
X --> 1
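The numeric and symbolic forms express the same permissions — a small sketch (the file name is just an example):

chmod 754 script.sh      # 7 = 4+2+1 = rwx for user, 5 = 4+1 = r-x for group, 4 = r-- for others
chmod ugo+x script.sh    # symbolic form: add execute for user, group and others
ls -l script.sh          # verify the permission bits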
Linux Administration
--------------------------
sudo su --> super user access

create user
useradd

id --> gives all info about the current user

useradd ramesh
id ramesh --> displays ramesh's information

when you create a user, by default a group will be created with the same username

/etc/passwd --> users info
/etc/group --> group info

passwd

passwd ramesh --> set up the password for ramesh

create a devops group and add ramesh to the devops group

groupadd devops
every user has at least one primary group and can have multiple secondary groups

usermod -g devops ramesh --> sets devops as ramesh's primary group

ramesh primary --> devops
ramesh secondary --> testing

usermod -aG testing ramesh --> adding ramesh to the testing group as secondary

gpasswd -d ramesh testing --> delete ramesh from testing

created his user
created group
added him to group
you can remove from group

if an employee leaves the organisation
-------------------------------------
1. remove him from the group
2. then delete the user

a user must have at least one primary and one secondary group

usermod -g ramesh ramesh
userdel ramesh --> ramesh user and ramesh group both will be deleted

groupdel

suresh, suresh123

inform suresh that his user is created

18.212.106.66 --> he should login with username and password

by default, the linux OS will not allow password authentication, it will only allow key based authentication

systemctl restart sshd

/home/suresh

.ssh --> authorized_keys --> public key

Linux admins will ask for suresh's public key... Suresh generates a key pair and sends the public key to the Linux admin

400 --> read access only to the user; even suresh should not have write access

Ownership
-------------------
chown --> change ownership

private key, suresh

/home/suresh --> authorized_keys

directory --> 700
authorized_keys --> 400

directory --> 600
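The whole user/group lifecycle from this session as one minimal sketch (names from the examples above):

sudo su -                     # become root
useradd ramesh                # creates the user and a same-name primary group
passwd ramesh                 # set his password
groupadd devops
groupadd testing
usermod -g devops ramesh      # primary group
usermod -aG testing ramesh    # secondary group
id ramesh                     # verify the groups
gpasswd -d ramesh testing     # remove from the secondary group
usermod -g ramesh ramesh      # reset the primary group before deletion
userdel ramesh                # user and same-name group are deleted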
--------------------------------------------------------------------------------
/session-05.txt:
--------------------------------------------------------------------------------
Recap
---------
CRUD
creating user
reading user
updating user
deleting user

useradd

ssh -i ramesh ramesh@IP

.ssh --> 600
authorized_keys --> 400

Package management
------------------------
A software has a lot of dependencies on other software

rpm --> redhat package manager

identify the dependencies, install the dependencies and finally install the package you want

yum --> dnf

/etc/yum.repos.d/
dnf install

dnf list installed

how can we check cpu info, memory info, OS?
/etc/os-release
/proc/cpuinfo
/proc/meminfo

Service Management
---------------------------
service start, service stop, service restart, status check, enable, disable

systemctl status sshd

HTTP --> 80

Nginx --> install this package
start the service

dnf install nginx -y

http://54.198.23.187:80

SG --> port no 80 allowed
forward request to Linux Server

enable --> enabled services will start automatically on boot

Process Management
--------------------------------
a sequence of steps to be executed to complete a task...

Marriage
----------
Father is the person responsible for making the marriage a success

sub tasks

Office
---------
Delivery manager

1. Team Lead
2. Senior Engineer
3. Junior Engineer
4. Freshers/Trainees

Freshers/Trainees --> JE
JE --> SE
SE --> TL
TL --> DM

TaskID

every process should have an ID for tracking purposes.

DM --> 001
TL --> 002
SE --> 003
JE --> 004
Trainees --> 005

Child 005 --> 004 (Parent)
004 --> 003
003 --> 002
002 --> 001
001 --> Root Process

nginx --> a PID should be there

ps -ef | grep nginx

foreground --> BLOCKS the screen, runs in foreground
background --> runs in background, you can do other work

top

kill --> request
kill -9 --> order

Network Management
-----------------------
ports check

netstat --> network statistics

netstat -lntp --> it will tell you what ports are open
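The session-05 pieces combined into one flow — a minimal sketch on a RHEL-family server:

dnf install nginx -y      # package management
systemctl start nginx     # service management
systemctl enable nginx    # start automatically on boot
systemctl status nginx
ps -ef | grep nginx       # process management: the PID is visible here
netstat -lntp             # network management: port 80 should be listening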
--------------------------------------------------------------------------------
/session-06.txt:
--------------------------------------------------------------------------------
Project
--------------------

Desktop applications

5 min. I need to restart the system:

Disadvantages
----------------
We have to install
We have to maintain storage --> only in a single system
We have to upgrade
fixing the problems
What if the system crashes?
system resources

Web based applications
-------------------
no installation
no upgrades
no compatibility issues
no storage issues
you can open it everywhere

Web based applications --> 3 tier architecture

Road side cart
----------------
only one person

He has to take care of cooking, billing, serving, ordering, cleaning, etc.
10 persons

hire someone

product taste and quality..

billing, serving — he has to hire someone else

small hotel
----------------
cook, billing counter

owner --> issues tokens
cook --> he will cook and serve

15 persons to deal with

upgrade to restaurant
---------------------
when you enter --> someone will welcome you, he will show you the table
captain --> which table is free

waiter --> takes the order --> queue management, taking order, serving order

chef --> sees the order and cooks the meal --> only cooking

Raw products --> eatable format

Waiter --> Web server
Chef --> App server
Raw items --> DB server

Raw products == Data

data is in tables and columns
----------------------------
user_id, user_name, first_name, last_name, password, created_date, dob
1 sivakumar sivakumar M siva123 19-AUG-2024 01-01-01

Chef == App server

username, password

through SQL queries, the app server will check the data
CRUD

select * from user where user_name='sivakumar' and password='siva123'

WebServer --> puts that into HTML format, so that a normal user can easily read it

{
    "user": "sivakumar",
    "dob": "01-01-01",
    "location": "bangalore"
}

security -->
Web server --> Queue management

Web Tier --> LB, Web Servers
App/API Tier --> App servers
Data tier --> DB Servers

Databases --> MySQL, MSSQL, Oracle, Postgres, MongoDB, Redis
App/API (Backend) Tier --> Java, Python, NodeJS, DotNet, Go, etc.
Web (Frontend) Tier --> HTML, CSS, JavaScript, ReactJS, Angular, ExpressJS, jQuery, etc...

Static --> Frontend applications are static --> Nginx, Apache, etc.
Dynamic --> Backend applications are dynamic --> JBoss, WebSphere, WebLogic, etc. --> Tomcat

Waiter, Chef, items

devops-practice --> go to community AMI --> Redhat 9
We will use only username and password to login to servers

ec2-user
DevOps321 --> D and O are caps

Database server
----------------
install DB --> MySQL
3306
dnf install

run DB --> systemctl

check the status --> systemctl status
check port opened or not --> netstat -lntp
check the process --> ps -ef | grep

username is root
password

the server software is mysql-server and it is running; to check the data you should connect to the server through a client

the client package is just mysql

mysql -h <host> -u root -p

mysql -u root -p

user --> user schema

posts --> posts schema

videos --> videos

mysql

show databases;
use <database>;
show tables;
select * from <table>;
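End to end on the DB server — a minimal sketch (RHEL-family package/service names assumed; the client can run from any machine that can reach port 3306):

dnf install mysql-server -y    # server package
systemctl start mysqld
systemctl enable mysqld
netstat -lntp                  # 3306 should be listening
mysql -u root -p               # client connecting locally; use -h <host> from another machine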
--------------------------------------------------------------------------------
/session-07.txt:
--------------------------------------------------------------------------------
devops-practice
ec2-user
DevOps321

ssh username@IP-address
Backend --> Java, DotNet, Python, NodeJS, go, ruby, php, etc.

NodeJS
----------
dependencies --> nginx, git, etc.

libraries
what is a library? --> Whenever you want you can consume it, no need to buy. It is common for everyone. Many people can use it

source files --> *.js

NodeJS
--------
build file == package.json --> where you mention your project metadata: name, description, version, dependencies and their versions.

build tool = npm --> it will search for package.json in your folder and it will get the dependencies/libraries from the internet

Java
--------
build file == pom.xml --> name, project description, dependencies and their versions

build tool = maven --> it will search for pom.xml in your folder and it will get the dependencies/libraries from the internet

source files --> *.java

Python
---------
build file == requirements.txt --> name, project description, dependencies and their versions

build tool = pip --> it will search for requirements.txt in your folder and it will get the dependencies/libraries from the internet

source files --> *.py

suresh --> humans

system users --> expense

/app --> use this folder to place the application

curl -o /tmp/backend.zip https://expense-builds.s3.us-east-1.amazonaws.com/expense-backend-v2.zip

nodejs
-----
build file --> package.json
build tool --> npm --> node package manager
files --> *.js
npm install
node_modules

/bin/node /app/index.js

dnf install nginx -y
systemctl start nginx

[Unit]
Description=ATD daemon
[Service]
Type=forking
ExecStart=/usr/bin/atd
[Install]
WantedBy=multi-user.target

/etc/systemd/system --> here you can place all your service files

extension --> .service

[Unit]
Description = Backend Service

[Service]
User=expense
Environment=DB_HOST=""
ExecStart=/bin/node /app/index.js
SyslogIdentifier=backend

[Install]
WantedBy=multi-user.target

IPv4 Address. . . . . . . . . . . : 192.168.1.6
59.182.32.230

the public IP may change when you restart the server, but the private IP will never change

/var/log/messages --> Linux logs everything here
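Wiring that unit file in — a minimal sketch (daemon-reload is the standard extra step after adding a unit file, though the notes don't mention it):

vim /etc/systemd/system/backend.service    # paste the [Unit]/[Service]/[Install] block above
systemctl daemon-reload                    # make systemd pick up the new unit
systemctl start backend
systemctl enable backend
tail -f /var/log/messages                  # SyslogIdentifier=backend tags its log lines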
{ "timestamp" : 1724121897, "msg" : "App Started on Port 8080" }
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: node:events:497
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: throw er; // Unhandled 'error' event
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: ^
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: Error: Access denied for user 'expense'@'ip-172-31-46-138.ec2.internal' (using password: YES)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Packet.asError (/app/node_modules/mysql2/lib/packets/packet.js:738:17)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at ClientHandshake.execute (/app/node_modules/mysql2/lib/commands/command.js:29:26)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Connection.handlePacket (/app/node_modules/mysql2/lib/connection.js:481:34)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at PacketParser.onPacket (/app/node_modules/mysql2/lib/connection.js:97:12)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at PacketParser.executeStart (/app/node_modules/mysql2/lib/packet_parser.js:75:16)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Socket. (/app/node_modules/mysql2/lib/connection.js:104:25)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Socket.emit (node:events:519:28)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at addChunk (node:internal/streams/readable:559:12)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at readableAddChunkPushByteMode (node:internal/streams/readable:510:3)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Readable.push (node:internal/streams/readable:390:5)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: Emitted 'error' event on Connection instance at:
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Connection._notifyError (/app/node_modules/mysql2/lib/connection.js:252:12)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Connection._handleFatalError (/app/node_modules/mysql2/lib/connection.js:183:10)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Connection.handlePacket (/app/node_modules/mysql2/lib/connection.js:491:12)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at PacketParser.onPacket (/app/node_modules/mysql2/lib/connection.js:97:12)
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: [... lines matching original stack trace ...]
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: at Readable.push (node:internal/streams/readable:390:5) {
Aug 20 02:44:57 ip-172-31-46-138 backend[14409]: code: 'ER_ACCESS_DENIED_ERROR',
:

create a schema/database for the expense project
create a table

CREATE DATABASE IF NOT EXISTS transactions;
USE transactions;

CREATE TABLE IF NOT EXISTS transactions (
    id INT AUTO_INCREMENT PRIMARY KEY,
    amount INT,
    description VARCHAR(255)
);

CREATE USER IF NOT EXISTS 'expense'@'%' IDENTIFIED BY 'ExpenseApp@1';
GRANT ALL ON transactions.* TO 'expense'@'%';
FLUSH PRIVILEGES;

mysql -h 172.31.40.207 -uroot -pExpenseApp@1 < /app/schema/backend.sql

backend applications mostly open --> 8080

ping

telnet <host> 3306

/etc/nginx --> nginx config is here
/usr/share/nginx/html --> here you need to place your websites
/var/log/nginx --> here nginx places the logs

/usr/share/nginx/html/index.html --> loads automatically when you hit the IP

Forward proxy vs Reverse Proxy
DNS --> Domain name system
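Quick connectivity checks between the tiers — a small sketch (the DB host IP is from the notes; the 8080 health check is an assumption based on the backend port mentioned above):

ping 172.31.40.207             # is the DB host reachable at all?
telnet 172.31.40.207 3306      # is the mysql port open from here?
curl http://localhost:8080/    # is the backend service answering on its port?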
Domain registrars are middlemen. Now I connect to all major domain registrars to show my TLD

Domain registrars inform the TLD that someone bought the facebook.siva domain

Domain registrars update the nameservers for the domain in the TLD....

the DNS resolver connects with the TLD address... the address is nothing but nameservers

mysql -h mysql.daws78s.online -uroot -pExpenseApp@1 < /app/schema/backend.sql

proxy_http_version 1.1;

location /api/ { proxy_pass http://backend.daws78s.online:8080/; }

location /health {
    stub_status on;
    access_log off;
}

hdfcbank.com --> domain
netbanking.hdfcbank.com --> Sub domain

frontend.daws78s.online
daws78s.online

Proxy --> Someone on behalf of you

Forward proxy, reverse proxy

1. Book the domain
2. Understand what an A record is
3. Install nginx and you should see the welcome page with your domain name
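Watching that resolution chain yourself — a small sketch (assumes the dig/nslookup tools are installed, e.g. via bind-utils):

dig facebook.com A                  # ask your resolver for the A record
dig +trace facebook.com             # walk root servers --> .com TLD --> nameservers
nslookup frontend.daws78s.online    # verify your own A record after updating it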
/mnt (mount) --> adding extra disks
/opt (optional) --> if you want third party applications, custom applications, you can keep them here
/proc (process info) --> /proc/cpuinfo /proc/meminfo
/root (Root user) --> home directory of the root user
/run (Running information of the server)
/srv (service files) --> when you use your server as a file server, you can use this
/swap (Swap space) --> with 1GB RAM, the OS will use this swap space as an extension of RAM. Reserved space
/sys --> system kernel info, device info, etc
/tmp --> temporary directory, not at all important
/usr --> shared files and docs between all users...
/var (variables) --> logs and messages

/etc
/opt
/var
/bin

Linux is completed

Configuration
----------------------
1. Install application runtime --> nodejs, java, etc.
2. create a user
3. create a folder
4. download code
5. install dependencies
6. create systemctl services

free -m --> check RAM usage
df -hT --> HD usage

123456789

keep all your commands in a file and run that file --> shell script

What is shell?

Shell is an interpreter in Linux that checks and executes the user's commands

/bin/bash --> interprets every command issued inside the linux server

1. carry items one by one to home
2. use a truck, keep everything inside and carry it home

if a command gives an error, do we proceed, or clear the error and then proceed?

when you run commands through a program, it should be able to check whether the previous command succeeded; if success, proceed; if failure, stop and inform the user (error handling)

Algorithm
---------------
write the steps in your own language: what to do?

Install a package through shell script??

check whether you have root access or not

if no root access, show the error

check if it is already installed or not; if installed, inform "already installed"

if not installed, install it

check success or not (see the sketch below)
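That algorithm turned into a script — a minimal sketch (the package name is passed as the first argument):

#!/bin/bash

USERID=$(id -u)
if [ $USERID -ne 0 ]; then
    echo "ERROR: please run this script with root access"
    exit 1
fi

dnf list installed "$1" &>/dev/null
if [ $? -eq 0 ]; then
    echo "$1 is already installed"
else
    dnf install "$1" -y
    if [ $? -eq 0 ]; then
        echo "$1 installation is successful"
    else
        echo "$1 installation failed"
        exit 1
    fi
fi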
git --> store scripts or programs

/ --> frontend website
/hello --> hello website
/app --> backend
/app1 --> backend1

m.facebook --> mobile servers
facebook --> web servers

daws81s.online
daws81s.online/hello

--------------------------------------------------------------------------------
/session-10.txt:
--------------------------------------------------------------------------------
Shell Scripting
------------------
manual disadvantages
--------------------
1. human errors
2. time taking

1. security issue
2. collaboration issue
in Git

Git is decentralised source code management...

GitOps --> git is the single source of truth...

centralised vs decentralised/distributed

centralised capital vs decentralised capital
----------------------------------------------
1. single point of failure --> the entire country is at stake
2. single point of development
3. accessibility
4. economy
5. riots

1. no single point of failure
2. development is distributed
3. easy accessibility
4. social and economical balance

Git follows a distributed/decentralised architecture. The same setup that exists in the remote server is available on all the computers connected to that repo. So at any point a single computer is enough to restore everything. This is called distributed/decentralised, and it is achieved through the local repo setup.

repository == which stores something

Git --> concept invented by Linus Torvalds, inventor of Linux
GitHub, BitBucket, GitLab, CodeCommit, Azure Repos, etc.

clone --> download the repo

git clone <repo-url>

normal folder vs git repo --> .git (hidden)

IDE --> Integrated development environment

Colors, syntax highlighting, etc...

vscode

workspace --> where you develop scripts

1. staging/index area

task1, task2 are in development

task1 should be released/pushed to git. whatever changes are completed, push them to the staging area...

git add

git commit -m "I created a new Hello World"

git push origin main

main is our default branch

Shell --> Shell is an interpreter that executes the commands

#!/bin/bash

What is shebang --> it is the first line in a shell script; it tells which interpreter should execute the commands

print hello world = echo "Hello World"

sh <script>
bash <script>
./<script> --> for this you should have execute permission

first time --> git clone (downloads the entire repo)
changes --> git pull (only the changes will be downloaded)

Let's say x=0, y=1

derive the formula

finally substitute the variables

DRY --> don't repeat yourself
Centralised place --> if you change it in one place, it will update everywhere; reduced human errors
no accidental changes

git add . --> stage all the files

variables
data types
conditions
loops
functions

developers' code
performance --> high, should load fast
DB --> must fetch the data fast
memory and system resources --> should consume less

scripting
No DB, no need of super performance and system resources

variables
------------
int i=0

VAR_NAME=VALUE (no space between name, equals and value)

sh 04-variables.sh Ramesh Suresh
                   1st    2nd
arguments/args/inputs

1. inside the script
2. pass from outside through args
3. Enter at runtime

data types
-------------
int, float, decimal, long, string, array, arraylist, set, map, etc..

1 --> number
siva --> word string
siva ramesh suresh --> statement (array or arraylist --> list of names)

list --> first element position
0,1,2,3, etc

0, 1, 2

--------------------------------------------------------------------------------
/session-11.txt:
--------------------------------------------------------------------------------
Recap
----------
variables
data types
conditions
loops
functions

variables
----------
DRY --> don't repeat yourself
variables inside the script
variables passed as arguments
variables entered at runtime

FRUITS=("Apple" "Jack" "Banana")

How do you run a command inside a shell script and get the value?

adding to staging area
commit
push

when the script started executing

VARIABLE=$(command)

I want a program to add 2 numbers...
(a+b)

special variables
--------------------
1. I want all the variables passed to the script
$@

2. How many variables/args were passed to the script
$#
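A minimal sketch covering the variable sources and the special variables (save as 04-variables.sh and run with two numbers):

#!/bin/bash

COURSE="DevOps with AWS"      # 1. variable declared inside the script
echo "course: $COURSE"

echo "all args passed: $@"    # special variable: every argument
echo "number of args: $#"     # special variable: argument count

SUM=$(($1 + $2))              # 2. variables passed from outside as args
echo "sum of $1 and $2 is: $SUM"

TODAY=$(date)                 # run a command and capture its value
echo "today: $TODAY"

Run it as: sh 04-variables.sh 2 3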
conditions
--------------------
if and else

if(expression)
{
    execute these statements if the above expression is true
}

if(expression)
{
    execute these statements if the above expression is true
}
else
{
    execute these statements if the above expression is not true/false
}

today=what is today

if(today == "Monday" or "Tuesday" or "Wed" or "Thu" or "Fri")
{
    print "attend the class"
}
else
{
    print "no class"
}

if(today != "Sat" or "Sun"){
    print "attend session"
}
else{
    print "no session"
}

1. get the number
2. check whether it is greater than 20 or not
3. print "number greater than 20" if it is greater than 20
4. otherwise print "less than 20"

install mysql through shell script

root access

1. check whether the user has root access or not
2. if root access, proceed with the script
3. otherwise throw the error
4. check whether it is already installed or not; if installed, tell the user it is already installed
5. if not installed, install it
6. check whether it succeeded or not

if you face an error, what do you do?
proceed running the script?
stop the script execution, clear the error and run again?

it will not stop even if it faces an error...

how will you check whether the previous command succeeded or not?

exit status
--------------
$?

it will tell you the state of the previous command

0 --> success
1-255 --> failure

10th exams, with 33%

functions
-----------------
some work to do

we need inputs to perform some work --> we get output

login(username, password){
    select * from user where user='username' and password='password'
    if ($? -eq 0 )
    then
        echo "login success"
    else
        echo "login failed"
    fi
}

you can call a function anytime

FUNC_NAME(){

}

FUNC_NAME

better coding
--------------
1. less number of lines but same work
2. double the number of lines but same work

Colors
--------------
success --> green
failure --> red

31m --> red
32m --> green
33m --> yellow

R="\e[31m"
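Exit status, functions and colors combine into the usual validate pattern — a minimal sketch (the \e[0m reset code is an assumption, not in the notes):

#!/bin/bash

R="\e[31m"
G="\e[32m"
N="\e[0m"    # reset color (assumption)

VALIDATE(){
    if [ $1 -ne 0 ]; then
        echo -e "$2 ... $R FAILURE $N"
        exit 1
    else
        echo -e "$2 ... $G SUCCESS $N"
    fi
}

dnf install mysql -y
VALIDATE $? "installing mysql"    # pass the exit status and a description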
--------------------------------------------------------------------------------
/session-12.txt:
--------------------------------------------------------------------------------
loops
------------
for(int i=0;i<=100;i++){
    print i;
}

Acer Swift Go 14
i5, 16GB --> 62k

for i in 1 2 3 4 5 6 7 8 9 10
do
    echo $i
done

logs are very important for any coding

redirectors
--------------
ls -l > output.txt --> by default it redirects only success output
1 --> success
2 --> error

ls -l 2> output.txt --> redirect only error output

ls -l &> output.txt

16-redirectors.sh --> 16-redirectors

/var/log/shell-script/16-redirectors-.log

write logs to the terminal and the logfile both...
imp msgs on the terminal

tee --> writes logs to multiple destinations

idempotency
----------------------
if you run a program an infinite number of times, it should not change the result

CRUD

Creation --> check if it is already created or not
Read --> no issue
Update --> no issue
Delete --> no issue

check whether the mysql root password is already set up or not; if set up, tell "already done".. otherwise set it up

mysql -h
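Redirection and tee from this session in one small sketch (the log path follows the convention above; the timestamp part of the name is left out here):

#!/bin/bash

LOGFILE=/var/log/shell-script/16-redirectors.log

for i in 1 2 3
do
    echo "iteration $i"
done &>> $LOGFILE                              # success and error output both appended to the file

echo "important message" | tee -a $LOGFILE     # shown on the terminal AND appended to the file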
--------------------------------------------------------------------------------
/session-13.txt:
--------------------------------------------------------------------------------
variables
conditions
loops
functions
data types

colors
redirections
exit status

algorithm
expense
mysql
idempotency --> if you run the script infinite times, it should give the same result

deployment or new version release
--------------
announce downtime --> We are under maintenance on 29-AUG 02:00AM-06:00AM
stop the server
back up the previous version
remove the existing version
download the new version
start the server

HTTP Methods and HTTP Status codes
------------------------------------
CRUD

GET POST PUT DELETE OPTIONS

GET --> read. read the data from the database
POST --> Create. You should send data. Usually it goes as JSON.
PUT --> Update. update the existing information
DELETE --> Delete.

2** --> 200, 201, 204 --> Success
4** --> 400 client side error

who is the client for the backend --> frontend. 404 --> Not found

400 - Bad Request
401 - Unauthorized
402 - Payment Required
403 - Forbidden
404 - File Not Found
405 - Method Not Allowed

5** --> Server side error. they have to fix it
500 - Internal Server Error
502 - Bad Gateway
503 - Service Unavailable
504 - Gateway Timeout

3** --> redirecting

Frontend --> Backend --> Database
/var/log/nginx

/var/log/messages

trap function

ERR

Deleting old logs using shell script
-------------------------------------
write a script that should delete .log files which are older than 14 days

*.log --> more than 14 days old

1. which directory
2. does that directory exist?
3. find the files
4. delete them

while loop --> read the output or read the files (see the sketch below)
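The algorithm above as a script — a minimal sketch (the directory is the example used in the next session; find -mtime and the while-read pattern as covered here):

#!/bin/bash

SOURCE_DIR=/home/ec2-user/app-logs

if [ ! -d "$SOURCE_DIR" ]; then
    echo "ERROR: $SOURCE_DIR does not exist"
    exit 1
fi

FILES=$(find "$SOURCE_DIR" -name "*.log" -mtime +14)

while IFS= read -r file
do
    if [ -n "$file" ]; then
        echo "deleting: $file"
        rm -f "$file"
    fi
done <<< "$FILES"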
--------------------------------------------------------------------------------
/session-14.txt:
--------------------------------------------------------------------------------
Within Linux, prefer to write shell scripts, because shell is native there
Python --> getting data from external systems

crontab
---------------
You can schedule scripts to run periodically: midnight scripts, weekend scripts, hourly scripts

5 4 * * *
M H day month day(week)
*/2 * * * *

/home/ec2-user/git-practice/18-delete-old-logs.sh

backup
---------------
logs-source-dir --> destination-directory (zip them)

dynamically the user gives the source directory, destination directory, number of days

number of days --> optional; if they don't provide it, the default is 14 days

get the source dir, destination dir, days from the user

if they are not provided, show them the usage and exit

if they are provided, check those directories exist; if they don't exist, exit the script
if they exist, find the files older than 14 days, zip them, move them to the destination directory, and delete them from the source directory

/home/ec2-user/app-logs

/home/ec2-user/backup

find . -name "*.log" -mtime +14 | zip -@

/home/ec2-user/backup/app-logs-.zip

check HD memory, send an email if it crosses more than 75%
----------------------------------------------------
/dev/mapper/RootVG-rootVol xfs 6.0G 1.8G 4.2G 30% /

read the file and count the number of each word / print the top 5 occurrences
reverse rows into columns and columns into rows
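Scheduling the cleanup script — a minimal sketch of crontab entries (field order: minute, hour, day of month, month, day of week):

crontab -e      # opens your crontab for editing

5 4 * * * /home/ec2-user/git-practice/18-delete-old-logs.sh      # every day at 04:05
*/2 * * * * /home/ec2-user/git-practice/18-delete-old-logs.sh    # every 2 minutes, handy for testing

crontab -l      # list the scheduled entries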
--------------------------------------------------------------------------------
/session-15.txt:
--------------------------------------------------------------------------------
Legacy
---------------
old technology
as part of our team, we are managing our legacy systems as well..
1. monitoring CPU and memory and sending alert emails
2. backup scripts, and scheduled them

Configuration management
------------------------
server --> plain server without anything installed

install app runtime and a few packages
creating users and folders
downloading code
installing dependencies
creating systemctl services
copying config files

plain --> ready to serve the application/end user --> manual --> shell script

shell script drawbacks:
1. not idempotent --> write custom code to make it idempotent
2. error handling --> we need to write code to check the errors
3. Homogeneous --> only works on a specific distro
4. not scalable when there are too many servers
5. syntax is not easy to understand

CM tools --> puppet, chef, rundeck, ansible, etc.

push vs pull
-------------
Courier Delhi --> Hyd

HYD DTDC

1. We go to HYD DTDC daily and check for the courier --> pull
2. We sit at home; whenever the courier comes to HYD DTDC, they deliver it to us --> push

pull
---------
1. causing more traffic on roads --> more traffic on the internet
2. unnecessary waste of resources: time, fuel, etc. --> bandwidth, power, device resources, etc.
3. cost

push
--------
saves everything, ssh protocol

recently we migrated normal servers to ansible managed servers

54.210.150.96 --> /tmp/hello.txt

1. login to server
2. move to /tmp dir
3. create file
4. exit

earlier ansible was only push based, but recently it implemented pull based also...

inventory
-------------
list of servers ansible is managing

1. is your ansible server able to reach the node --> firewall configs,

install nginx on the node from ansible

dnf install nginx -y

Linux/Shell --> command == Module in ansible
inputs and options

dnf install nginx -y
cmd options inputs

ansible -i 172.31.41.249, all -e ansible_user=ec2-user -e ansible_password=DevOps321 -m dnf -a "name=nginx state=installed"

-b --> become root

"changed": true,
"msg": "",
"rc": 0,
"results": [
    "Installed: nginx-1:1.20.1-14.el9_2.1.x86_64",
    "Installed: nginx-filesystem-1:1.20.1-14.el9_2.1.noarch",
    "Installed: nginx-core-1:1.20.1-14.el9_2.1.x86_64",
    "Installed: redhat-logos-httpd-90.4-2.el9.noarch"
]

adhoc commands --> a command issued from the ansible server targeting the node manually, basically for some emergency/adhoc purpose

run the commands one by one in a sequence --> keep all those commands in a file with some syntax and run it at a time == shell scripting

run the commands one by one in a sequence --> office laptop --> IN --> US

keep all those commands in a file with some syntax and run it at a time --> runs inside the US server

playbooks --> Yet Another Markup Language (YAML)
--------------
a playbook is a list of modules the ansible server runs against its nodes...

XML vs JSON vs YAML
---------------
200 years back --> take a paper

1. ac id
2. date
3. amount
4. branch
5. ac name
6. denomination

sign it and give it

ac name, date, ac id

templates --> cash deposit, cash withdrawal, gold, etc...

field name --> field value
key name --> key value

DTO --> data transfer objects

<Name>sivakumar</Name>
<Email>info@joindevops.com</Email>

{
    "Name": "sivakumar",
    "Email": "info@joindevops.com"
}

{
    "amount": 200,
    "description": "food"
}

Name: sivakumar
Email: info@joindevops.com

[ means list

Milestone: 2
---------------
the same project you automated through shell scripting

1. advantages
2. challenges
3. errors
4. final take

Hello world

Hello,World

IFS=,
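Verifying connectivity before anything else — a minimal sketch of ad-hoc usage (IP and credentials are the ones used above):

ansible -i 172.31.41.249, all -e ansible_user=ec2-user -e ansible_password=DevOps321 -m ping
ansible -i 172.31.41.249, all -e ansible_user=ec2-user -e ansible_password=DevOps321 -b -m dnf -a "name=nginx state=installed"    # -b to become root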
it gets the shell environment 43 | 44 | command is more secure than shell; prefer command if it works 45 | 46 | Expense Project 47 | ----------------------- 48 | 3 servers, 3 records 49 | mysql.daws81.online --> private ip 50 | backend.daws81.online --> private ip 51 | frontend.daws81.online --> private ip 52 | daws81.online --> public ip 53 | 54 | ansible is not only a CM tool, it can connect to any system if a module is available 55 | 56 | ansible --> AWS 57 | 58 | aws configure 59 | 60 | -------------------------------------------------------------------------------- /session-19.txt: -------------------------------------------------------------------------------- 1 | 1. I will provide host, username and password 2 | 2. fail, if it is not set up 3 | 3. now create the root password 4 | 4. if a password exists, skip it 5 | 6 | mysql -h <host> -u root -p 7 | 8 | ansible server --> remote server/node 9 | 10 | Ansible Roles 11 | ------------------ 12 | DRY --> don't repeat yourself 13 | 14 | variables 15 | functions 16 | 17 | roles --> we keep repeated code here and call it whenever required; a proper structure for an ansible playbook 18 | 19 | "COURSE='DevOps with AWS' " -------------------------------------------------------------------------------- /session-20.txt: -------------------------------------------------------------------------------- 1 | Ansible Roles 2 | ============== 3 | A proper structure of playbook that includes variables, files, templates, other dependencies, handlers, etc. We can reuse roles 4 | 5 | tasks/ --> You can keep all your tasks here 6 | handlers/ --> When there is a change in a particular task, you can notify another task 7 | templates/ --> You can keep all your files with variables here 8 | files/ --> files without variables here 9 | vars/ --> you can keep all variables here 10 | defaults/ --> low priority variables 11 | meta/ --> other dependencies 12 | library/ 13 | --> you can keep your custom modules written in python here 14 | module_utils/ # roles can also include custom module_utils 15 | 16 | lookup_plugins/ --> all plugins here 17 | 18 | ansible.cfg 19 | --------------- 20 | /etc/ansible/ansible.cfg 21 | 22 | 1. ANSIBLE_CONFIG 23 | 2. Current working directory 24 | 3. user home directory 25 | 4. /etc/ansible/ansible.cfg 26 | 27 | jinja2 --> templating language 28 | ------------------------------- 29 | 30 | Handlers/Notifiers 31 | ------------------------------- 32 | When a task is changed, you want to notify another task, for example a restart 33 | 34 | When a task is not changed, we don't need to restart anything, so no need to notify 35 | 36 | 37 | 38 |
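A tiny handler sketch (hypothetical template/service names):

- name: configure nginx
  hosts: web
  tasks:
  - name: copy nginx config
    ansible.builtin.template:
      src: nginx.conf.j2             # jinja2 template with variables
      dest: /etc/nginx/nginx.conf
    notify: restart nginx            # fired only when this task reports "changed"
  handlers:
  - name: restart nginx
    ansible.builtin.service:
      name: nginx
      state: restarted

39 | 3 modules --> nginx, nodejs, mysql 40 | 41 | 12 modules --> 4 nodejs, 2 java, 2 python 42 | 43 | 1. remove the existing /app directory 44 | 2. create again 45 | 3. download code 46 | 4.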
extract code 47 | 48 | /app, /usr/share/nginx/html 49 | 50 | backend=/app 51 | frontend=/usr/share/nginx/html 52 | 53 | Another Milestone 54 | ---------------------- 55 | Configuration Management 56 | Expense project using Ansible Roles 57 | 58 | Expense project 59 | ---------------------- 60 | mysql, backend, frontend and domains 61 | 62 | mysql --> RDS --> sync data from onpremise servers 63 | backend frontend --> EC2 --> download code and deploy 64 | 65 | downtime announcement --> 12hours -------------------------------------------------------------------------------- /session-21.txt: -------------------------------------------------------------------------------- 1 | Ansible Vault 2 | ----------------- 3 | ansible-vault create credentials.yaml 4 | 5 | Ansible Tags 6 | ----------------- 7 | I have 100 tasks n my playbook, how to run 10 tasks... using ansible tags 8 | 9 | Person --> Name, DOB, address 10 | 11 | Ansible Dynamic Inventory 12 | ----------------- 13 | inventory.ini --> static file 14 | 15 | servers increase when traffic increase --> auto scaling 16 | 17 | ansible should query cloud and get the IP address dynamically at that time... 18 | 19 | 1. authenticate 20 | 2. region 21 | 3. name 22 | 4. running 23 | 5. private ip 24 | 25 | *.aws_ec2.yml --> file name should be like this 26 | 27 | How can you connect to multiple servers? 28 | --------------------------------------- 29 | ansible --> 1000 t3.xlarge 30 | 31 | forks = 5 --> ansible connects to 5 servers at a time and complete the tasks --> task level 32 | serial =3 --> runs playbook first 3 servers, again it will connect to next 3 servers --> play level 33 | 34 | mysql -> mysqlll 35 | 36 | updated mysql --> mysqlll 37 | 38 | ansible is for configuration management 39 | 40 | expense-ansible-mysql 41 | expense-ansible-backend 42 | expense-ansible-frontend 43 | 44 | inventory --> -------------------------------------------------------------------------------- /session-22.txt: -------------------------------------------------------------------------------- 1 | Recap 2 | ---------------- 3 | Linux Servers 4 | 3 tier --> 5 | 6 | Shell scripting 7 | Ansible 8 | 9 | Terraform --> IaaC 10 | ---------------- 11 | EC2 12 | R53 13 | IAM users 14 | 15 | Manual Infra 16 | ----------------- 17 | Everything in console.... by mistake if someone edit wrong, then app will go down...30min-1hr 18 | application restore back to previous stage if something goes wrong 19 | 20 | Version control 21 | 22 | Consistent Infra --> All environment configs and infra should be some.. 23 | 24 | CRUD --> Create infra, read, update, deleting the infra 25 | 26 | Inventory/resource management --> If you see tf script, you know what are the services you are using 27 | 28 | Cost optimisation --> creation in 5min, deletion in 5min 29 | 30 | Dependency management --> sg, ec2 instance after creation of all dependencies 31 | 32 | Code reuse --> Roles. Modules 33 | 34 | Declarative way of creating infra --> You are giving orders to terraform to create infra just by providing right syntax 35 | 36 | easy syntax 37 | no sequence 38 | state management --> Terraform can track what it created, can update easily 39 | 40 | mysql --> mysqlll 41 | 42 | 43 | HCL --> Hashicorp configuration language 44 | 45 | Download terraform 46 | keep terraform.exe in some folder 47 | edit the environment variables, provide the path 48 | 49 | aws configure --> AWS command line install 50 | 51 | Resources 52 | 53 | Terraform --> AWS, AZure, GCP, Alibaba, Digital Ocean, etc. 
GitHub, Networking, etc. These are the providers 54 | ---------- 55 | variables 56 | data types 57 | conditions 58 | loops 59 | functions 60 | data sources 61 | locals 62 | outputs 63 | providers 64 | provisioners 65 | 66 | Create EC2 instance through terraform 67 | ------------------------------------- 68 | terraform file extension is .tf 69 | 70 | provider.tf --> where you declare what provider you are using 71 | 72 | resource "resource-type" "name-of-resource" { 73 | key = value 74 | } 75 | 76 | name 77 | description 78 | ingress mandatory 79 | egress mandatory 80 | 81 | ingress --> incoming traffic 82 | egress --> outgoing traffic 83 | 84 | while entering the cabin you have to scan your ID...while exiting the cabin just push the switch 85 | 86 | terraform init --> initialises terraform; it connects with the provider and downloads it 87 | keep .gitignore always 88 | 89 | terraform plan --> can't create resources, it will just plan 90 | 91 | terraform apply --> 92 | 93 | terraform apply -auto-approve 94 | 95 | 96 | name 97 | ami 98 | sg-id 99 | instance_type 100 | key_pair 101 | 102 | ansible-playbook playbook.yaml = ansible-playbook -v playbook.yaml 103 | 104 | ansible-playbook -vv playbook.yaml 105 | ansible-playbook -vvv playbook.yaml --> more v's, more verbose output -------------------------------------------------------------------------------- /session-23.txt: -------------------------------------------------------------------------------- 1 | Variables 2 | Conditions 3 | Data types 4 | loops 5 | functions 6 | 7 | Variables 8 | ------------ 9 | x=1, y=2 10 | 11 | a variable is a container that holds a value... 12 | 13 | PERSON=Ramesh 14 | 15 | vars: 16 | PERSON: Ramesh 17 | 18 | variable "person" { 19 | default = "Ramesh" 20 | type = string 21 | } 22 | 23 | string name = "ramesh" 24 | 25 | string 26 | number 27 | list 28 | map 29 | boolean 30 | 31 | [ --> list 32 | { --> map 33 | 34 | Tagging Strategy 35 | ---------------------- 36 | Project 37 | Component/Module 38 | Environment 39 | 40 | Expense 41 | --------- 42 | MySQL 43 | Backend 44 | Frontend 45 | 46 | Environment 47 | ----------- 48 | DEV 49 | PROD 50 | 51 | terraform.tfvars 52 | -------------------- 53 | using this file, we can override the default values of variables, or you can use it to set values as well. 54 | 55 | terraform.tfvars and default values 56 | 57 | 1. command line 58 | 2. terraform.tfvars 59 | 3. environment variables 60 | 4. default 61 | 62 | conditions 63 | -------------------- 64 | 65 | if (expression){ 66 | run this if expression is true 67 | } 68 | else{ 69 | run this if expression is false 70 | } 71 | 72 | expression ? "run this if true" : "run this if false" 73 | 74 | if environment is prod --> t3.small, otherwise t3.micro 75 | 76 | outputs 77 | ------------------- 78 | every resource exports some values; we can take them and create other resources 79 |
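A rough HCL sketch of a variable, a condition and an output together (hypothetical AMI ID and names):

variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # hypothetical AMI ID
  instance_type = var.environment == "prod" ? "t3.small" : "t3.micro"
}

output "web_private_ip" {
  value = aws_instance.web.private_ip   # exported value, usable by other resources
}

80 | loops 81 | -------------------- 82 | 1. count based loop 83 | 2.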
for or for each 84 | 85 | count.index --> 0 86 | count.index --> 1 87 | count.index --> 2 88 | 89 | functions 90 | -------------------- 91 | Terraform has no custom functions, We must use in-built functions 92 | 93 | merge --> merges 2 lists 94 | 95 | list-1 --> name=siva, course=devops 96 | list-2 --> name=siva, course=terraform, duration=120hr 97 | list-3 --> name=kumar, course=aws 98 | merge(list-1,list-2) 99 | 100 | name=siva, course=terraform 101 | name=kumar, course=aws, duration=120 102 | 103 | 3 ec2 instances 104 | r53 records 105 | 106 | mysql.daws81s.online --> pvt ip 107 | backend.daws81s.online --> pvt ip 108 | daws81s.online --> public ip 109 | 110 | without .gitigore 111 | 112 | git add git commit git push 113 | 114 | 500Mb file push 115 | 116 | -------------------------------------------------------------------------------- /session-24.txt: -------------------------------------------------------------------------------- 1 | Top to Bottom Approach 2 | ---------------------- 3 | 1. What is the problem? --> Manual infra 4 | 2. How terraform is solving? --> Through script, Infra as a Code 5 | 3. Apply 6 | 7 | Resources, Providers 8 | 9 | 1. faster releases 10 | 2. less defects 11 | 12 | 1. AWS Resource/Service How it works 13 | 2. It needs some input/arguments 14 | 3. Providers will give us outputs 15 | 16 | ps -ef | grep ssh 17 | 18 | 4. Use those outputs and create other resources 19 | 20 | variables, data types, conditions, loops and functions 21 | 22 | 1. 3 ec2 instances 23 | 2. 3 r53 records 24 | 25 | created resources, get the outputs and create other resources 26 | 27 | AMI ID frequently changes...whenever you update something in AMI, ID will changes 28 | 29 | You can query existing info from the providers. this is possible data sources. 30 | 31 | Backend API --> creating records(POST), getting data(GET) 32 | 33 | search the product, apply the filters --> this is query 34 | 35 | devops-practice, rhel-9, apply other filters too 36 | 37 | 38 | data "aws_ami" "joindevops" { 39 | 40 | most_recent = true 41 | owners = ["973714476881"] 42 | 43 | filter { 44 | name = "name" 45 | values = ["RHEL-9-DevOps-Practice"] 46 | } 47 | 48 | filter { 49 | name = "root-device-type" 50 | values = ["ebs"] 51 | } 52 | 53 | filter { 54 | name = "virtualization-type" 55 | values = ["hvm"] 56 | } 57 | } 58 | 59 | all recent AMI from joindevops with name RHEL-9-DevOps-Practice 60 | 61 | inputs --> args to create resources 62 | outputs --> after creation of resource. Public IP, private IP, instance ID, etc. 63 | data sources --> instead of getting args manually, you can query existing information 64 | 65 | 66 | -------------------------------------------------------------------------------- /session-25.txt: -------------------------------------------------------------------------------- 1 | 1. 3 ec2 2 | 2. 3 r53 records 3 | backend.daws81s.online --> t3.micro 4 | mysql.daws81s.online --> t3.small 5 | daws81s.online --> t3.micro 6 | 7 | expression ? "true" : "false" 8 | 9 | r53 10 | ----- 11 | we should get the output of ec2 instanced created 12 | aws_instance.terraform 13 | 14 | backend.daws81s.online 15 | var.instance_names = backend 16 | domain_name = daws81s.online 17 | "${var.instance_names[count.index]}.${var.domain_name}" 18 | 19 | locals 20 | --------------------- 21 | locals are like variables but it have some extra capabilities. You can store expressions and intermediate values in locals 22 | 23 | variables can be overriden 24 | but we can't override locals 25 | 26 | 1. 
variables and locals both can store values, but locals have some extra capabilities 27 | 2. locals can store expressions; terraform can evaluate them and get the value 28 | 3. locals can use variables inside; variables can't refer to locals 29 | 4. you can override variables, you can't override locals 30 | 31 | state, remote state and locking 32 | --------------------- 33 | mysql --> mysqllll 34 | 35 | it is a simple name update, but another instance got created 36 | 37 | assignment 38 | --------------- 39 | I will check your notes and confirm whether you completed it or not 40 | 41 | declared infra, actual infra 42 | 43 | terraform == declarative way of creating infra 44 | 45 | tf files == infra I declared 46 | aws infra == actual infra created 47 | 48 | declared infra == actual infra 49 | 50 | terraform.tfstate == terraform keeps track of what it created 51 | 52 | aws_security_group.allow_ssh_terraform: Refreshing state... [id=sg-0994f93b69e9e3736] 53 | 54 | before I delete 55 | ---------------- 56 | declared infra == actual infra --> true 57 | 58 | After I delete 59 | ---------------- 60 | declared infra == actual infra --> false 61 | 62 | terraform.tfstate --> instance created 63 | real infra --> no, actually destroyed 64 | config files --> create the infra 65 | 66 | 67 | terraform.tfstate will be refreshed against the real infra 68 | 69 | remote state 70 | ---------------- 71 | I am creating infra from my laptop --> it will create infra 72 | another person also tries to create --> it will also create the infra again 73 | 74 | duplicates or errors 75 | 76 | .lock --> programs check if a .lock file is there; if so, they will not allow others to edit 77 | 78 | remote storage --> s3 bucket 79 | locking --> dynamo DB --> LockID 80 | 81 | 81s-locking 82 | 81s-remote-state 83 | 84 | 85 | 86 | 87 | 88 | -------------------------------------------------------------------------------- /session-26.txt: -------------------------------------------------------------------------------- 1 | Terraform 2 | ------------ 3 | IaaC advantages 4 | variables 5 | data types 6 | conditions 7 | loops --> count based loop, count.index --> used for lists 8 | functions 9 | 10 | locals --> to store expressions 11 | outputs --> provide the output of resources, like IP, id, etc. 12 | data sources --> query information from the provider, like AMI ID, etc. 13 | state and remote state --> terraform uses the state concept to compare declared vs actual/real infra. We keep state remotely in collaboration environments. 14 | locking --> make sure infra provisioning is not running in parallel 15 | tfvars --> to override default variables 16 | 17 | for_each is used to iterate a map... 18 | 19 | expense infra 20 | ------------- 21 | frontend -> t3.micro 22 | backend --> t3.micro 23 | mysql --> t3.small 24 | [] --> list 25 | {} --> map 26 | 27 | if frontend, the name should be daws81s.online; otherwise backend/mysql.daws81s.online 28 |
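A sketch of for_each over that map, with the frontend naming condition alongside (hypothetical AMI ID):

variable "instances" {
  type = map(string)        # {} --> map
  default = {
    frontend = "t3.micro"
    backend  = "t3.micro"
    mysql    = "t3.small"
  }
}

resource "aws_instance" "expense" {
  for_each      = var.instances
  ami           = "ami-12345678"   # hypothetical
  instance_type = each.value       # t3.small for mysql, t3.micro for the rest
  tags = {
    Name = each.key                # frontend, backend, mysql
  }
}

# record-name condition: frontend gets the bare domain
# each.key == "frontend" ? "daws81s.online" : "${each.key}.daws81s.online"

29 | dynamic 30 | -------- 31 | 32 | provisioners 33 | ------------------ 34 | provisioners are used to take some actions locally or remotely.. 35 | 36 | local --> where terraform is executed...my laptop 37 | remote --> inside the servers you created..inside the servers of backend, frontend, mysql, etc. 38 | 39 | 1. local-exec 40 | 2.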
remote-exec 41 | 42 | remote-exec --> execute commands inside the remote server 43 | 44 | 45 | Module development 46 | -------------------- -------------------------------------------------------------------------------- /session-27.txt: -------------------------------------------------------------------------------- 1 | Consistent infra across all env 2 | ------------------------------ 3 | 1. tfvars 4 | 2. terraform workspaces 5 | 3. separate repos 6 | 7 | resource definition 8 | 9 | left side --> arguments 10 | right side --> values 11 | 12 | tfvars --> override the default variables 13 | 14 | dev.tfvars --> this should be for the dev env 15 | prod.tfvars --> this should be for the prod env 16 | 17 | backend --> s3 and dynamo db 18 | 19 | 1. keep the same bucket but a diff key 20 | 2. keep diff buckets for diff env and a diff key 21 | 22 | expense infra 23 | --------------- 24 | 3 ec2, 1 sg, 3 r53 records 25 | 26 | mysql-dev 27 | backend-dev 28 | frontend-dev 29 | 30 | mysql-dev.daws81s.online 31 | backend-dev.daws81s.online 32 | frontend-dev.daws81s.online 33 | 34 | 35 | mysql-prod 36 | backend-prod 37 | frontend-prod 38 | 39 | mysql-prod.daws81s.online 40 | backend-prod.daws81s.online 41 | daws81s.online 42 | 43 | workspaces 44 | ------------ 45 | ec2 instance 46 | if dev t3.micro 47 | if prod t3.medium 48 | 49 | terraform.workspace == prod 50 | terraform.workspace == dev 51 | 52 | 53 | advantages 54 | ------------- 55 | 1. code reuse --> same code 56 | 57 | disadvantage 58 | ------------- 59 | 1. easy to make errors 60 | 2. not easy to implement 61 | 3. changes made can affect all environments 62 | 63 | 64 | separate code for separate env 65 | ------------------------- 66 | 67 | expense-infra-dev 68 | 69 | expense-infra-prod 70 | 71 | disadvantage 72 | -------------- 73 | duplicated code 74 | 75 | Module development 76 | ------------------------ 77 | DRY 78 | 79 | variables 80 | functions 81 | roles 82 | 83 | functions --> inputs and outputs; we call a function, and you can call it infinite times 84 | 85 | write code once and call it many times... 86 | 87 | modules -> resource definitions and arguments are the same, only values are different -------------------------------------------------------------------------------- /session-28.txt: -------------------------------------------------------------------------------- 1 | advantages 2 | --------------- 3 | code reuse 4 | updates are easy and centralised 5 | best practices can be enforced 6 | you can restrict users to a few options as per project standards 7 | 8 | 9 | VPC --> Virtual private cloud 10 | ----------------------- 11 | business is restaurant orders 12 | 13 | physical space 14 | buy servers 15 | power connection 16 | network connection 17 | security guard 18 | cooling 19 | OS resources, network resources 20 | maintenance 21 | 22 | Cloud resource with cloud account 23 | 24 | It is an isolated datacenter in the cloud. resources created inside a vpc are completely private... 25 | 26 | frontend server --> public must access this 27 | backend server --> secure, public should not access this. don't create a public ip and no internet 28 | mysql server --> all users and orders data, cards, etc...don't create a public ip and no internet 29 | 30 | You have to separate servers logically inside the VPC...
31 | 32 | subnetting 33 | 34 | village name, pincode --> VPC name, CIDR 35 | streets name, number --> subnets name, CIDR 36 | roads --> routes 37 | main road --> internet connection, internet gateway 38 | main gate of house --> security group/firewall 39 | house --> server 40 | 41 | CIDR --> Classless Inter-Domain Routing 42 | 43 | 192.178.3.4 --> 4 octets 44 | 45 | 255.255.255.255 --> Max IP 46 | 47 | 10.0.0.0/16 --> CIDR 48 | 49 | total IP address bits are 32. possible IP addresses are 2^32 50 | 51 | 10.0.0.1 52 | 10.0.0.2 53 | . 54 | . 55 | . 56 | 10.0.0.255 57 | 58 | 10.0.1.0 59 | 10.0.1.1 60 | . 61 | . 62 | . 63 | 10.0.1.255 64 | 65 | 2^16 = 65,536 66 | 67 | 10.0.0.0/24 --> the first 3 octets are fixed 68 | 69 | 10.0.0.255 70 | 71 | 10.0.1.0/24 --> 10.0.1.0 ... 10.0.1.255 72 | 10.0.2.0/24 --> 10.0.2.0, 10.0.2.1 .... 10.0.2.255 73 | 74 | 10.0.1.0/32 75 | 76 | VPC creation 77 | subnet creation 78 | igw creation 79 | 80 | public and private subnet 81 | ------------------------- 82 | a subnet which has an internet connection in its routes is called a public subnet. a private/app subnet will not have an internet connection in its routes. the database subnet is also called a private subnet 83 | 84 | 10.0.0.0/16 --> internal roads 85 | 86 | create vpc 87 | create igw and associate with VPC 88 | create public, private and database subnets 89 | create public, private and database route table 90 | create routes inside route table 91 | associate route tables with appropriate subnets 92 | created elastic IP 93 | created NAT gateway 94 | added NAT routes in private and database subnets 95 | 96 | secure servers can't be reached directly...this is incoming/ingress traffic 97 | traffic from the servers ... outgoing/egress traffic 98 | 99 | database --> yum install mysql-server --> outgoing 100 | 101 | what is NAT --> the mechanism by which private servers connect to the internet for outgoing traffic, like package installation and security patch downloads 102 | 103 | NAT --> when a server stops and starts, its IP address will change. NAT should have the same IP always 104 | 105 | static IP/ elastic IP 106 | 107 | High availability 108 | ------------------- 109 | HYD --> Region 110 | east hyderabad --> AZ 111 | west hyderabad --> AZ 112 | 113 | 1 public subnet in us-east-1a, 1 public subnet in us-east-1b 114 | 1 private subnet in us-east-1a, 1 private subnet in us-east-1b 115 | 1 database subnet in us-east-1a, 1 database subnet in us-east-1b 116 | 117 | 118 | EC2 Module 119 | ------------ 120 | it should accept count/instance_names 121 | use instance_names inside tags -------------------------------------------------------------------------------- /session-29.txt: -------------------------------------------------------------------------------- 1 | - Terraform is an IaC (Infrastructure as Code) tool 2 | - Automate, reuse, version controlling 3 | - Hashicorp, HCL (HashiCorp Configuration Language) 4 | - Declarative 5 | - Whatever we give, it will create 6 | 7 | - resources 8 | - provider 9 | - provisioners 10 | - functions 11 | - variables 12 | - local-exec 13 | - remote-exec 14 | - data sources 15 | - state and remote state 16 | - locking 17 | - modules 18 | - loops 19 | - workspaces 20 | - tfvars 21 | - conditions 22 | - locals 23 | 24 | - What is the difference between normal variable declaration and locals? 25 | A) We can write expressions and evaluate them using locals 26 |
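A quick sketch of that difference (hypothetical values):

variable "environment" {
  default = "dev"      # can be overridden via tfvars/CLI
}

locals {
  # a local can hold an expression and refer to variables; a variable default cannot
  instance_type = var.environment == "prod" ? "t3.medium" : "t3.micro"
}

# referenced as local.instance_type; there is no way to override it from outside

27 | - Can you use count-based loop inside locals?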
28 | A) No 29 | 30 | - terraform init 31 | - -reconfigure 32 | - -backend-config 33 | - -upgrade 34 | - terraform validate: It perform syntax check and validates our code 35 | - terraform plan 36 | - -var-file 37 | - terraform apply 38 | - teraform fmt 39 | - terraform destroy 40 | - terraform state show 41 | - terraform workspace 42 | 43 | - -upgrade: It is usually used to upgrade the latest version of the module source code 44 | 45 | - terraform taint: Using this command, you could taint a terraform resource 46 | - modules: 47 | - Root module: terraform-aws-ec2 48 | - Child: ec2-module-demo 49 | 50 | - terraform state: 51 | - Desired state: What you desire 52 | - Current state: What is your current infra that terraform is managing 53 | 54 | - How to handle a situation when a state file is deleted or corrupted? 55 | A) We need to import the resources that are part of the terraform code into its state file and we do it using: terraform import 56 | 57 | terraform import aws_instance.web i-0bec8c7d30e5ab951 -------------------------------------------------------------------------------- /session-30.txt: -------------------------------------------------------------------------------- 1 | Create VPC -> 10.0.0.0/16 --> 2^16 IP address 2 | Create igw 3 | associate igw to vpc 4 | create subnets --> Public, private and DB 5 | EIP 6 | NAT 7 | created route tables and added routes 8 | Public --> Internet connection through IGW 9 | Private --> NAT, egress connections 10 | route table associations with subnets 11 | 12 | 13 | terraform naming resources 14 | ----------------------------- 15 | 1. terraform resource name --> use _, no upper case 16 | 2. dont repeat resource type in name 17 | 3. if only one resource of its type, name it as main or this 18 | 4. Use - inside arguments values and in places where value will be exposed to a human 19 | 5. use plural if multiple resources 20 | 21 | https://www.terraform-best-practices.com/naming 22 | 23 | common tags --> common for all resources under this project 24 | resource tags --> vpc_tags 25 | 26 | vpc --> expense-dev 27 | 28 | HA --> atleast 2 AZ 29 | 30 | public --> 1a, 1b --> 10.0.1.0/24, 10.0.2.0/24 31 | private --> 1a, 1b --> 10.0.11.0/24, 10.0.12.0/24 32 | database --> 1a,1b --> 10.0.21.0/24, 10.0.22.0/24 33 | 34 | 1. get the AZ 35 | 2. get first 2 36 | 37 | 0,1 --> only 0th element 38 | 0,2 --> 0th element, 1st element 39 | 40 | expense-dev-public-us-east-1a 41 | 42 | we need to create database_subnet_group --> all database subnets under a group 43 | 44 | 1. command line 45 | 2. tfvars 46 | 3. default 47 | 4. 48 | -------------------------------------------------------------------------------- /session-31.txt: -------------------------------------------------------------------------------- 1 | vpc 2 | igw 3 | public private database subnets in 1a and 1b AZ 4 | eip 5 | natgateway 6 | 7 | route tables 8 | routes 9 | associations with subnets 10 | 11 | expense-dev-public 12 | 13 | associations 14 | -------------- 15 | 1 public rt --> 2 public subnets 16 | 17 | Peering 18 | ------------- 19 | 20 | dev vpc prod vpc 21 | by default vpc are not connected with each other 22 | 23 | VPC peering can establish between two VPC. 24 | VPC CIDR should be different, they should not overlap... 
25 | 26 | VPC-1 --> 10.0.1.123 27 | 10.0.1.122 --> 10.0.1.123 28 | 29 | VPC-2 --> 10.0.1.123 30 | 31 | same account 32 | --------------- 33 | same region and diff VPC can peer 34 | diff region and diff VPC can peer 35 | 36 | 2 accounts 37 | --------------- 38 | diff account same region diff VPC can peer 39 | diff account diff region diff VPC can peer 40 | 41 | 42 | Peering 43 | ---------------- 44 | ask the user whether they want VPC peering or not. if they say yes, our module will connect with the default vpc in the same region 45 | 46 | persons = ["ramesh","suresh","raheem","robert"] 47 | 48 | persons[1] 49 | 50 | persons = ["john"] 51 | 52 | persons[0] 53 | 54 | public servers 55 | backend servers 56 | database servers -------------------------------------------------------------------------------- /session-32.txt: -------------------------------------------------------------------------------- 1 | mysql 2 | backend 3 | frontend 4 | 5 | expense-dev-mysql 6 | 7 | expense-vpc 8 | expense-sg 9 | expense-mysql 10 | 11 | /roboshop/prod/vpc_id 12 | /roboshop/dev/vpc_id 13 | 14 | 15 | your-repo 16 | -------------------- 17 | module "your_name" { 18 | args-as-per-module-definition = your-value 19 | enable_dns_hostnames = var.dns_hostnames 20 | } 21 | 22 | variables.tf 23 | ------------ 24 | variable "dns_hostnames" { 25 | default = false 26 | } 27 | 28 | module.your_name. 29 | 30 | module == function == inputs --> outputs 31 | 32 | 10.0.0.0/16 33 | 34 | 10.1.0.0/16 35 | 36 | 1. custom modules 37 | 2. open source modules -------------------------------------------------------------------------------- /session-33.txt: -------------------------------------------------------------------------------- 1 | Mysql --> backend 2 | 3 | Mysql 4 | ------- 5 | 3306 6 | Port no: 3306 7 | IP: backend private IP 8 | private IP will not change after a restart, public ip will change after a restart 9 | private IP may not be the same after termination and re-creation.. 10 | 11 | 12 | MYSQL SG will allow Backend SG 13 | 3306 14 | Port no: 3306 15 | Source: Backend SG --> MySQL will allow connections from instances which are attached to the Backend SG 16 | 17 | backend --> frontend 18 | 8080 19 | source: frontend sg 20 | 21 | 22 | frontend --> public --> 0.0.0.0/0 23 | 80 24 | CIDR 25 | 26 | 1. Bastion 27 | 2. VPN 28 | 29 | Open source modules with AWS contribution 30 | 1. We don't need to develop the module 31 | 32 | 1. We need to depend on the community 33 | 2. We don't have the freedom to make changes if required 34 | 35 | Custom Modules 36 | ---------------- 37 | 1. We have the freedom to develop whatever we want 38 | 39 | 1. We have to develop everything 40 | 41 | ["subnet-1","subnet-2"] --> list 42 | subnet-1,subnet-2 --> StringList 43 | 44 | StringList --> List == ["subnet-1","subnet-2"] --> subnet[0] 45 | 46 | ec2 instance user data 47 | ------------------- 48 | when an instance is created, AWS will run these user instructions with root access automatically 49 | 50 | 51 | 52 |
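A small sketch of user data on an instance (hypothetical AMI ID; the script is just an illustration):

resource "aws_instance" "backend" {
  ami           = "ami-12345678"   # hypothetical
  instance_type = "t3.micro"
  user_data     = <<-EOF
                  #!/bin/bash
                  # runs once at first boot, as root
                  dnf install nginx -y
                  systemctl enable nginx --now
                  EOF
}

-------------------------------------------------------------------------------- /session-34.txt: -------------------------------------------------------------------------------- 1 | expense-infra-dev 2 | ------------------- 3 | 4 | stateful vs stateless 5 | 6 | stateful --> which has state, i.e. data 7 | stateless --> which doesn't have state.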
8 | 9 | DB --> stateful, it keeps track of the data 10 | backend/frontend --> stateless 11 | 12 | DB 13 | ------- 14 | backup --> hourly, daily, weekly backups 15 | restore test 16 | data replication --> 17 | DB-1 is connected to application 18 | DB-2 is not connected to application, but replicate data from DB-1 19 | 20 | HYD --> DB-1, MUM --> DB-2 21 | storage increment 22 | load balancing 23 | upgrades 24 | 25 | RDS --> load balancing, auto storage increment, backups/snapshots, etc.. 26 | 27 | ExpenseApp1 28 | 29 | 8.0.35 --> 8.0.36 30 | 8.0 --> 8.1 --> 9 31 | 32 | rds opensource module 33 | 34 | mysql-dev.daws81s.online --> expense.czn6yzxlcsiv.us-east-1.rds.amazonaws.com 35 | 36 | snapshot/backup --> destroy --> final snapshot(VPC) 37 | 38 | Load Balancing and Auto Scaling 39 | -------------------------------- 40 | 41 | DM --> Client 42 | 43 | work --> UI work --> UI team lead --> team members 44 | Backend work --> Backend team lead --> team members 45 | who is available to work, team lead will assign 46 | 47 | team --> target group 48 | team lead --> load balancer 49 | listener and rules --> He is listening for UI work, Backend team lead is listening for backend work 50 | who is available --> health check 51 | members --> servers 52 | 53 | if backend server is running we can hit that on 8080, if not running we can't hit 54 | 55 | 56 | 8080, 80 --> servers are listening on these port 57 | 58 | http://3.94.106.200/ --> 2XX 59 | 60 | LB Listener --> 80 --> nginx --> 80 --> 2 instances --> lb will check which instance is healthy --> randomly 1 server 61 | 62 | Auto scaling 63 | ------------------ 64 | 2 members --> 16 hr work 65 | 30 hr work --> our HR should recruit new members --> add them to our team 66 | 67 | JD --> Launch template(Options to create servers) --> place them inside target group 68 | 69 | CPU utilisation --> 75% -------------------------------------------------------------------------------- /session-35.txt: -------------------------------------------------------------------------------- 1 | 1. Project infra --> Basement to house --> rare changes 2 | 2. 
Application infra --> Rooms and walls --> Yes 3 | 4 | Project Infra 5 | ----------------- 6 | VPC --> VPC will not change frequently 7 | SG --> SG may not change, only rules may change 8 | Bastion --> No 9 | DB --> No 10 | Load Balancer --> No 11 | 12 | Applications 13 | ------------------ 14 | Ec2 instances 15 | target groups 16 | 17 | 18 | backend-dev.daws81s.online --> LB 19 | backend-dev.daws81s.online:8080 20 | 21 | Load Balancer 22 | ---------------- 23 | LB --> distributing the load to target group --> team lead 24 | TG --> A team of members --> A group of servers 25 | Server --> Team member --> Server 26 | Listener --> Team Lead Phone number --> Port LB listening to 27 | Rules --> 28 | 29 | host path and context path 30 | ------------------ 31 | 32 | Client --> BA --> Architect --> 33 | backend --> Backend LB --> Backend TL 34 | frontend --> Frontend LB --> Frontend TL 35 | database --> DB TL 36 | 37 | hostpath 38 | -------------- 39 | backend.daws81s.online --> backend LB 40 | frontend.daws81s.online --> frontend LB 41 | 42 | Classic LB 43 | Application LB --> Layer7 44 | Network LB 45 | 46 | m.facebook.com --> mobile site 47 | facebook.com --> web site 48 | 49 | netbanking.hdfc.com 50 | demat.hdfc.com 51 | 52 | context path 53 | ----------------- 54 | daws81s.online/backend 55 | daws81s.online/frontend 56 | 57 | app ALB --> app tier LB 58 | web ALB --> web tier LB 59 | 60 | 61 | mysql-dev.daws81s.online --> expense-dev.czn6yzxlcsiv.us-east-1.rds.amazonaws.com 62 | app-dev.daws81s.online --> App ALB 63 | web-dev.daws81s.online --> Web ALB 64 | daws81s.online --> domain 65 | web-dev.daws81s.online --> subdomain 66 | 67 | app-dev.daws81s.online 68 | ------------------------ 69 | app-dev.daws81s.online --> it will respond --> default response 70 | backend.app-dev.daws81s.online --> forward this request to backend TG 71 | 72 | fasdfghasfj.app-dev.daws81s.online 73 | 74 | -------------------------------------------------------------------------------- /session-36.txt: -------------------------------------------------------------------------------- 1 | VPN 2 | -------------- 3 | user laptop --> VPN --> can access secure servers 4 | and company can monitor our traffic 5 | 6 | AMI --> Open VPN server is already installed and configured 7 | 8 | Launch this EC2 and we need to little configuration 9 | 10 | OpenVPN Access Server Community Image-fe8020db-* 11 | 12 | 22, 943, 443, 1194 --> VPN ports 13 | 14 | key is mandatory for this 15 | 16 | ssh -i ~/.ssh/openvpn openvpnas@public-IP 17 | 18 | https://35.170.248.89:943/admin 19 | openvpn, Admin@1234 20 | 21 | VPN SG, VPN SG Rules 22 | create key pair for VPN access 23 | VPN instance with Open VPN 24 | openvpnas is the user name 25 | configure with default options 26 | https://35.170.248.89:943/admin 27 | openvpn, Admin@1234 28 | 29 | download openvpn connect 30 | https://35.170.248.89:943 31 | openvpn Admin@1234 32 | 33 | Backend 34 | -------------------------- 35 | 1. create ec2 instance 36 | 2. configure with backend 37 | 38 | if there is a new version 39 | ----------------- 40 | I can connect all the instances using ansible and run the playbook 41 | 42 | stop the server 43 | remove old code 44 | download new code 45 | restart the server 46 | 47 | if there is traffic increase 48 | ------------------- 49 | 1. create ec2 instance 50 | 2. 
configure with backend 51 | 52 | 53 | create ec2 instance 54 | configure using ansible 55 | stop the instance 56 | take AMI --> Launch template 57 | launch it using autoscaling 58 | 59 | when traffic increase 60 | use AMI to add the servers 61 | 62 | target group 63 | ALB rules 64 | -------------------------------------------------------------------------------- /session-37.txt: -------------------------------------------------------------------------------- 1 | create ec2 instance 2 | configure it using ansible 3 | stop the server 4 | take AMI --> with new version 5 | delete the instance 6 | 7 | create launch template 8 | ami, network, sg, etc. 9 | create target group 10 | create ASG using launch template and place them in TG 11 | create rule in load balancer 12 | 13 | create ansible server and provide backend ec2 instance 14 | ansible can connect to it... 15 | 16 | provisioners 17 | local and remote 18 | 19 | I need to use remote provisioner, connection block 20 | 21 | null resource and trigger 22 | 23 | null resource --> It will not do anything, means it wont create any resource. But useful for provisioners 24 | 25 | terraform --> Shell --> Ansible 26 | 27 | if you have existing folder how can you make it as git repo 28 | ------------------------------------------------------------ 29 | we need to intialise git 30 | git init 31 | 32 | git branch -M main 33 | 34 | aws ec2 terminate-instances --instance-ids instance-id1 35 | 36 | for i in $(ls -dr */) ; do echo ${i%/}; cd ${i%/} ; terraform destroy -auto-approve ; cd .. ; done 37 | 38 | for i in $(ls -d */) ; do echo ${i%/}; cd ${i%/} ; terraform apply -auto-approve ; cd .. ; done -------------------------------------------------------------------------------- /session-38.txt: -------------------------------------------------------------------------------- 1 | Create EC2 2 | Configure EC2 Using ANsible and provisioner 3 | remote provisioner 4 | variables in terraform --> Shell script --> ansible-pull 5 | stopped EC2 6 | take AMI 7 | delete the instance 8 | 9 | LB, Listener, Default rule 10 | 11 | target group 12 | launch template 13 | 14 | :8080/health 15 | 16 | for i in 10-vpc 20-sg 30-bastion 40-rds 50-app-alb 50-vpn; do cd $i ; terraform apply -auto-approve ; cd ..; done 17 | 18 | 2 instances --> 60 80 19 | 20 | backend.app-dev.daws81s.online --> forward this backend target group 21 | 22 | expense-dev.daws81s.online --> expense website -------------------------------------------------------------------------------- /session-39.txt: -------------------------------------------------------------------------------- 1 | EC2 2 | configure 3 | stop 4 | AMI 5 | delete instance 6 | target group 7 | launch template 8 | auto scaling group 9 | autoscaling group policy 10 | 11 | ALB Rule 12 | 13 | R53 --> ALB --> Listener --> Rule --> Target group --> Health Check --> Instance 14 | 0 1 2 3 4 15 | 16 | backend.app-dev.daws81s.online 17 | 18 | app-dev.daws81s.online 19 | 20 | catalogue.app-dev.daws81s.online 21 | user.app-dev.daws81s.online 22 | shipping.app-dev.daws81s.online 23 | 24 | zeal vora --> AWS security specialist 25 | 26 | Rolling update 27 | ----------------- 28 | 4 instances --> 2 instances 29 | 30 | 1. stop all the backend services in all instances and update the application using ansible 31 | 2. 
create one new instance using new version, once this is up, delete one old instance 32 | create second instance and delete one more old instance 33 | create third instance and delete one more old instance 34 | create fourth instance and delete one more old instance 35 | 36 | https/SSL/TLS --> certificates 37 | 38 | We need domain 39 | 40 | hdfcbank.com --> https://hdfcbank.com:443 41 | 42 | they will check authorization of your domain 43 | 44 | certificate provider 45 | 46 | expense-dev.daws81s.online 47 | expense-qa.daws81s.online 48 | 49 | *.daws81s.online 50 | 51 | -------------------------------------------------------------------------------- /session-40.txt: -------------------------------------------------------------------------------- 1 | Create EC2 2 | Configure it 3 | You should have playbooks ready 4 | remote provisioner 5 | terraform variables --> Shell --> ansible 6 | ansible-pull 7 | stop ec2 8 | take ami 9 | delete instance 10 | 11 | create target group 12 | create launch template 13 | 14 | autoscaling --> launch template target group 15 | autoscaling policy 16 | ALB rules 17 | 18 | R53 --> ALB --> Listener(80,443) --> Rule(host based routing) --> target group --> Instance 19 | 20 | ACM --> we should have domain 21 | 22 | request for the certificate 23 | create records in domain 24 | validation 25 | 26 | expense-dev.daws81s.online 27 | 28 | 504 --> Gateway timeout 29 | 30 | ALB --> Server 31 | 32 | domains 33 | ----------- 34 | daws81s.online 35 | 36 | backend.app-dev.daws81s.online --> APP ALB 37 | 38 | expense-dev.daws81s.online 39 | 40 | mysql-dev.daws81s.online 41 | 42 | CDN --> Cloudfront 43 | ------------------- 44 | 45 | user --> ISP Caching servers --> Torrents 46 | 47 | Origin --> Where the original content exist 48 | cache --> static content (css, js, images) 49 | 50 | https://expense-dev.daws81s.online/static/media/3TierArch.0486e7150e53d305d1c2.png -------------------------------------------------------------------------------- /session-41.txt: -------------------------------------------------------------------------------- 1 | CDN 2 | ------------ 3 | Cloudfront is a content delivery network service of amazon. AWS have edge locations edge locations to cache the content across the globe. we can make use of this service to reduce latency to our customers... 4 | 5 | Origin --> where is your source. It can S3, ALB, Api gateway, etc. 6 | Cache behaviour --> How and what you want to cache 7 | invalidations -> When there is a update, you can create invalidations so that edge location pull the content newly. 8 | 9 | cache order 10 | ------------- 11 | /images/* --> expense.cdn.daws81s.online/images/* --> it will cached 12 | /static/* --> expense.cdn.daws81s.online/static/* --> it will cached 13 | default --> dynamic content --> expense.cdn.daws81s.online --> no cache 14 | 15 | cdn origin 16 | http--> https 17 | 18 | http://expense-cdn.daws81s.online --> https://expense-cdn.daws81s.online -------------------------------------------------------------------------------- /session-42.txt: -------------------------------------------------------------------------------- 1 | Old enterprise vs monolithic vs microservices 2 | ---------------------------------------------- 3 | 4 | everything together --> frontend and backend 5 | 6 | frontend seperate backend, frontend sends API request. Backend responds with data in json format 7 | 8 | monolithic 9 | ------------------ 10 | user, cart, product catlogue, shipping, delivery, payment, customer support, reviews, etc. 
11 | 12 | even a small error in any module can bring the whole application down..customers may not even be able to browse the products 13 | 14 | any change in any module has to go through a full deployment 15 | 16 | microservices 17 | --------------- 18 | user 19 | cart 20 | catalogue 21 | shipping 22 | payment 23 | delivery 24 | reviews 25 | 26 | joint family vs 4 member family vs individual 27 | 28 | 29 | independent house vs flat in apartment vs a pg room 30 | bare metal vs VM vs containers 31 | 32 | independent house 33 | -------------------- 34 | advantages 35 | ------------- 36 | privacy 37 | more space 38 | 39 | disadvantages 40 | ------------- 41 | too much maintenance 42 | electricity, water, internet, gas, etc. 43 | construction time is very high 44 | cost is also very high 45 | 46 | flats in apartment 47 | ------------------- 48 | advantages 49 | ------------- 50 | less maintenance 51 | electricity, water, etc. can be taken care of 52 | construction time is less 53 | cost is less 54 | 55 | disadvantages 56 | -------------- 57 | less privacy 58 | 59 | PG room 60 | ----------------- 61 | advantages 62 | ---------- 63 | no maintenance 64 | cost is veryyyyy less 65 | flexibility is veryyyyyyy high 66 | 67 | disadvantages 68 | ------------ 69 | no privacy 70 | 71 | 72 | 100GB RAM 4TB HD 73 | Esxi Hypervisor 74 | 75 | ubuntu --> 4GB ram and 50GB HD 76 | centos --> 4GB ram and 50GB HD 77 | 78 | containers don't block the resources, they use resources dynamically 79 | 80 | 81 | Containers 82 | ----------------------------- 83 | 84 | Docker 85 | 86 | AMI --> We selected an OS (1GB) + We configured (installation, code, dependencies, services, etc) == AMI 87 | 88 | Image --> Bare min OS (10Mb) + We configured (installation, code, dependencies, services, etc) == Docker image 89 | 90 | Bare min OS + Application runtime + dependencies + packages + code 91 | 92 | when docker is installed, a group called docker is created 93 | 94 | add ec2-user to the docker group 95 | 96 | sudo usermod -aG docker ec2-user 97 | exit and log in again for it to take effect 98 | 99 | 100 | docker commands 101 | ------------------------ 102 | image --> container 103 | a container is a running instance of an image 104 | 105 | docker ps --> lists the running containers 106 | docker images --> displays the images available on the server 107 | docker pull --> pulls the image from the docker repository/hub 108 | 109 | nginx --> push to docker hub 110 | nginx --> push to docker hub 111 | 112 | username/image-name:version 113 | 114 | joindevops/nginx:1.0.0 115 | joindevops/nginx:1.0.1 116 | joindevops/nginx:1.0.2 117 | 118 | ramesh/nginx:2.0.0 119 | 120 | nginx:version 121 | 122 | nginx:latest 123 | 124 | alpine is the smallest image (10Mb) + Install nginx --> nginx:alpine-slim 125 | 126 | docker create image:version --> create a container out of an image 127 | docker start container-id --> start the container 128 | 129 | docker stop container-id 130 | docker rm container-id --> removes a container 131 | docker rm -f container-id --> force-removes a running container 132 | docker rmi image-id 133 | 134 | docker run = docker pull + create + start 135 | 136 | docker run -d image 137 | 138 | host port and container port 139 | 140 | a container is like a nano/mini server. It also has ports 0-65,535 141 | 142 | -p host-port:container-port 143 | 144 | docker exec -it container-id bash 145 | docker inspect 146 | docker logs 147 | 148 | 149 | Dockerfile 150 | -------------------- 151 | Dockerfile is used to build custom images.
We can make use of docker instructions to create custom images 152 | 153 | FROM RUN CMD ENTRYPOINT COPY ADD ENV ARG WORKDIR USER -------------------------------------------------------------------------------- /session-43.txt: -------------------------------------------------------------------------------- 1 | FROM 2 | ---------- 3 | FROM should be the first instruction in a Dockerfile. It represents the base OS. There is one exception: ARG 4 | 5 | Dockerfile 6 | 7 | FROM 8 | 9 | RHEL-9: VM OS --> High memory 10 | almalinux == RHEL-compatible 11 | 12 | How do you build a docker image? 13 | 14 | docker build -t image-name:version . --> . represents the current folder having the Dockerfile 15 | 16 | RUN 17 | ------------ 18 | The RUN instruction is used to configure the image, like installing packages, configuring, etc. RUN runs at the time of image building. 19 | 20 | CMD 21 | ------------ 22 | The CMD instruction runs at the time of container creation. It keeps the container running 23 | 24 | systemctl start backend --> infinite time 25 | systemctl start nginx --> /etc/systemd/system/nginx.service 26 | 27 | systemctl only works on a full server, it will not work for containers 28 | 29 | nginx -g "daemon off;" --> runs nginx in the foreground 30 | 31 | docker build --> image creation --> RUN 32 | docker run --> container creation --> CMD 33 | 34 | nginx:v1 35 | 36 | username/image-name:version 37 | 38 | docker tag cmd:v1 joindevops/nginx:v1 39 | 40 | docker login -u username 41 | 42 | docker push joindevops/nginx:v1 43 | 44 | docker pull joindevops/nginx:v1 45 | 46 | LABEL 47 | ---------- 48 | adds metadata to the image: description, who the owner is, which project. LABELs are used to filter images 49 | 50 | EXPOSE 51 | --------- 52 | used to let users know which ports this container will open when it runs.. the EXPOSE instruction does not affect functionality, it only gives information 53 | 54 | ENV 55 | ---------- 56 | sets the environment variables; these can be used inside the container 57 | 58 | COPY 59 | ----------- 60 | used to copy files from local to the image. 61 | 62 | ADD 63 | ----------- 64 | ADD also does the same as COPY, but it has 2 extra capabilities 65 | 1. It can get files from the internet 66 | 2. It can extract files into the image 67 | 68 | connect to the backend server and manually try to connect 69 | telnet 70 | 71 | -------------------------------------------------------------------------------- /session-44.txt: -------------------------------------------------------------------------------- 1 | docker build -t username/imagename:version . --> Dockerfile is required in the current folder 2 | docker tag imagename:version username/imagename:version 3 | docker login 4 | docker push username/imagename:version 5 | docker run -d -p 80:80 username/imagename:version 6 | docker exec -it container-id bash 7 | 8 | FROM --> Should be the first instruction, to refer to the base OS 9 | RUN --> installing packages and configuring the image. runs at build time 10 | CMD --> Runs at container creation time, it keeps the container running ["command-name", "params"] 11 | LABEL --> adds metadata to the image, useful while filtering images 12 | EXPOSE --> informs about the ports opened by the container; it can't really open the ports, it is just information for the user 13 | COPY --> copies the files from the workspace to the image 14 | ADD --> 1. can download files directly from the internet, can untar directly into the image 15 | ENV --> sets the env variables of the container 16 |
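Putting several of these instructions together, a rough Dockerfile sketch (hypothetical node app):

FROM node:20-alpine                 # base OS + runtime
LABEL project="expense"             # metadata, useful for filtering
ENV DB_HOST="mysql"                 # usable inside the container
WORKDIR /app
COPY package.json /app/
RUN npm install                     # runs at image build time
COPY . /app/
EXPOSE 8080                         # informational only
CMD ["node", "index.js"]            # runs at container start, keeps it running

17 | ENTRYPOINT 18 | ------------- 19 | 20 | docker run -d from:v1 21 | 22 | 1.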
CMD can be overridden at runtime 23 | 2. You can't override ENTRYPOINT like CMD. If you try to do it will go and append to the entrypoint command 24 | 3. for better results and best practices. CMD can provide args to ENTRYPOINT, So you can mention default args through CMD and you can override them at run time.. 25 | 26 | USER 27 | ------------- 28 | for security you should not run containers using root user, it must be on normal user. Atleast last instruction should USER 29 | 30 | WORKDIR 31 | -------------- 32 | is used to set the current working directory inside docker image 33 | 34 | ARG 35 | -------------- 36 | ARG is used to set the variables at build time onnly, not inside the container 37 | 38 | 39 | ARG vs ENV 40 | --------- 41 | 1. ENV variables can be accessed in image build time and container both. 42 | 2. ARG is only accessed at the time of image creation. 43 | 3. You can use ARG instruction before FROM in one special case i.e to supply version to the base image. 44 | 4. ARG instruction before FROM is only valid until FROM, it cant be accessed after FROM 45 | 46 | How can I access ARG values inside container? 47 | You can set arg value to env variable 48 | 49 | ENV var-name=$var-name 50 | 51 | ONBUILD 52 | --------- 53 | is used to trigger few instructions at build when a user is using our image. 54 | 55 | MySQL 56 | -------------- 57 | 1. I take one base OS like almalinux:9 58 | 2. install mysql server 59 | 60 | can I directly take mysql server official image. 61 | 62 | Mysql --> They can run few sql command to configure server 63 | 64 | Backend --> backend can connect to mysql server and run the queries. -------------------------------------------------------------------------------- /session-45.txt: -------------------------------------------------------------------------------- 1 | Docker network 2 | ----------------- 3 | every VM get access to internet from AWS ISP.172.31.27.183 4 | 5 | docker0: is a virtual n/w interface 172.17.0.1. It acts as modem to the containers inside VM. 6 | 7 | 8 | Backend 9 | ------------- 10 | 11 | docker containers can't be communicated using default network. You have to create your own network. -------------------------------------------------------------------------------- /session-46.txt: -------------------------------------------------------------------------------- 1 | /var/lib/docker 2 | 3 | Docker Networking: 4 | 5 | 1. host 6 | 2. bridge --> default 7 | 3. overlay --> Between multiple docker hosts 8 | 9 | host: 10 | 1. contianers using host n/w will not get IP address. 11 | 2. it means containers are sharing host IP address 12 | 3. containers open host port 13 | 14 | mysql --> 3306 --> host port 15 | backend --> 8080 --> host port 16 | frontend --> 80 --> host port 17 | 18 | Volumes 19 | ----------- 20 | containers are ephemeral. Once you remove container you lose data. data is not persisted by default. 21 | 22 | 1. un named volumes 23 | 2. named volumes 24 | 25 | mount the storage in host to the container 26 | 27 | -v host-path:container-path 28 | 29 | named volumes 30 | -------------- 31 | docker volume create nginx-data 32 | 33 | expense 34 | --------------- 35 | create network 36 | create volume 37 | mysql 38 | backend 39 | frontend -------------------------------------------------------------------------------- /session-47.txt: -------------------------------------------------------------------------------- 1 | 1. 
minimal images 2 | 3 | alpine base image + node js 20 install 4 | 5 | docker images are layer based, images are immutable 6 | 7 | 1. FROM node --> this will be pulled from docker hub 8 | docker creates a container from the 1st instruction and runs the 2nd instruction inside it - C1 9 | 2. EXPOSE 8080 10 | once the 2nd instruction runs, it will create an image from this container - I1 11 | 3. docker creates a container from the I1 image, C2 12 | runs the 3rd instruction inside the C2 container 13 | ENV DB_HOST="mysql" 14 | docker creates an image from the C2 container, that is I2 15 | 4. docker creates a container from the I2 image, i.e. C3 16 | 17 | 18 | Ramesh 19 | ------------- 20 | FROM node:20.18.0-alpine3.20 --> I1 21 | EXPOSE 8080 -> I2 22 | ENV DB_HOST="mysql" -> I3 23 | RUN addgroup -S expense && adduser -S expense -G expense --> I4 24 | 25 | 120MB 26 | 27 | Suresh 28 | -------------- 29 | FROM node:20.18.0-alpine3.20 30 | EXPOSE 8080 31 | ENV DB_HOST="mysql" 32 | RUN addgroup -S expense && adduser -S expense -G expense 33 | RUN mkdir expense 34 | 35 | Rahim 36 | ----------- 37 | FROM node:20.18.0-alpine3.20 38 | EXPOSE 8080 39 | ENV DB_HOST="mysql" -> I3 40 | USER expense 41 | 42 | frequently changing instructions should be at the bottom of the dockerfile; that way we save build time and the memory used by cached layers 43 | 44 | docker images work based on layers.. 45 | every instruction creates an intermediate container and runs the next instruction inside it 46 | then it saves the container as an image layer, and the intermediate container is deleted 47 | to run the next instruction, docker creates an intermediate container again from this image 48 | it goes on; at each step intermediate containers are removed 49 | each layer is cached; when you push, it pushes the layers 50 | 51 | multi stage builds 52 | ------------------ 53 | development and running 54 | 55 | JDK, JRE 56 | JDK --> Java development kit 57 | JRE --> Java runtime environment 58 | .jar file is the output of the build 59 | for a nodejs project we get node_modules as the build output; then we need only our code and node_modules 60 | 61 | JDK > JRE. JRE is a subset of JDK 62 | 63 | npm install --> node_modules --> usually creates some cache 64 | 65 | 66 | it looks like 2 dockerfiles inside 1.. 67 | 68 | the 1st stage we use for the build 69 | 70 | we copy the output into the 2nd stage 71 | 72 | we restrict docker to image building; for running the images as containers we will use kubernetes 73 | 74 | Docker architecture 75 | ---------------------- 76 | client --> docker command 77 | docker host/daemon --> docker service running 78 | 79 | docker run -d -p 80:80 nginx 80 | the docker daemon checks whether the image exists locally or not; if it exists it will run it 81 | if it does not exist, it will pull from the registry/hub, create a container out of it, run it and send the output to the client 82 | -------------------------------------------------------------------------------- /session-48.txt: -------------------------------------------------------------------------------- 1 | 1. building the images --> Dockerfile 2 | 2. running the images --> Containers(docker compose) 3 | 4 | we have to create a user and group 5 | all the related mysql directories should be given permissions for this group and user 6 | 7 | /var/lib/mysql 8 | /var/run/mysqld 9 | /docker-entrypoint-initdb.d 10 |
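Going back to the multi-stage idea above, a rough sketch (hypothetical node app):

# stage 1: build -- full toolchain, like the JDK
FROM node:20.18.0-alpine3.20 AS build
WORKDIR /app
COPY package.json .
RUN npm install                      # produces node_modules (plus cache we don't want to ship)

# stage 2: run -- only what's needed, like the JRE
FROM node:20.18.0-alpine3.20
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY . .
CMD ["node", "index.js"]

11 | disadvantages 12 | --------------- 13 | 1 PG 50 rooms... 14 | no water for 5 days 15 | 16 | another 5 PGs. either he gets water from there or he shifts persons to another PG 17 | 18 | there is no reliability since there is only one docker host...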
19 | there is no autoscaling 20 | there is no load balancing 21 | volumes are inside the docker host..poor volume management 22 | security: no secret management 23 | no communication between containers on another host.. network management is not good 24 | 25 | 26 | orchestration 27 | ----------------- 28 | 100 PGs --> He needs a person to manage everything 29 | 30 | kubernetes 31 | ------------------- 32 | kubectl --> k8 client command 33 | eksctl --> command to create, update and delete a cluster. managing the cluster 34 | 35 | 36 | setup 37 | -------------------- 38 | assignment 39 | ------------- 40 | 1. there is a docker host running 41 | 2. no space left on device 42 | 3. you need to add an extra disk to the running instance 43 | 4. make sure the docker directory /var/lib/docker is mounted to the new disk 44 | 5. migrate existing data to the new mount 45 | 46 | create one t3.micro server. make sure you have at least 50GB. assign more storage to /var 47 | install docker 48 | 49 | install kubectl 50 | ---------------- 51 | curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.31.0/2024-09-12/bin/linux/amd64/kubectl 52 | chmod +x ./kubectl 53 | sudo mv kubectl /usr/local/bin/kubectl 54 | kubectl version 55 | 56 | 57 | install eksctl 58 | -------------- 59 | # for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7` 60 | ARCH=amd64 61 | PLATFORM=$(uname -s)_$ARCH 62 | 63 | curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz" 64 | 65 | # (Optional) Verify checksum 66 | curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check 67 | 68 | tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz 69 | 70 | sudo mv /tmp/eksctl /usr/local/bin 71 | 72 | 73 | run aws configure 74 | 75 | 76 | apiVersion: eksctl.io/v1alpha5 77 | kind: ClusterConfig 78 | 79 | metadata: 80 | name: expense-1 81 | region: us-east-1 82 | 83 | managedNodeGroups: 84 | - name: ng-1 85 | instanceType: m5.large 86 | desiredCapacity: 10 87 | spot: true 88 | 89 | eksctl create cluster --config-file=eks.yaml 90 | 91 | 92 | spot instances 93 | ------------------ 94 | AWS has huge data centers. there may be unused capacity in a data center 95 | 96 | spot instances --> up to 90% discount. when AWS requires the capacity for on-demand clients, they take back the instances with a 2-minute notice... 97 | 98 | 99 | kubectl get nodes --> how many nodes are there in the cluster 100 | 101 | Resources 102 | ----------------- 103 | namespace --> just like a VPC, you will have a dedicated isolated space to create your workloads/resources 104 | 105 | apiVersion: 106 | kind: Namespace 107 | metadata: 108 | name: 109 | labels: 110 | spec: 111 | 112 | pod 113 | ---------------- 114 | a pod is the smallest deployable unit in kubernetes. a pod can contain one or many containers. 115 | 116 | pod vs container 117 | ------------------ 118 | 1. pod is the smallest deployable unit in kubernetes 119 | 2. pod can contain one or many containers. 120 | 3. containers in a pod share the same network identity and storage 121 | 4. these are useful in sidecar and proxy patterns 122 | 123 | kubectl exec -it nginx -- bash 124 |
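A minimal pod manifest sketch for an nginx pod like the one used above (hypothetical labels):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx

apply it with: kubectl apply -f pod.yaml

125 | pod-1 has an nginx container 126 | can pod-2 also have an nginx container or not? 127 | 128 | pod-2 129 | --------- 130 | 1. nginx 131 | 2. nginx with name also nginx 132 | 3.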
kubectl exec -it nginx -- bash

pod-1 has an nginx container
can pod-2 also have an nginx container or not?

pod-2
---------
1. nginx
2. nginx with name also nginx
3.

CrashLoopBackOff
-----------------
the container is not able to start

--------------------------------------------------------------------------------
/session-49.txt:
--------------------------------------------------------------------------------
kubectl describe pod

ENV in image definition vs env in manifest
----------------------------------------
if you change an env in the Dockerfile you have to rebuild the image
an env in the manifest needs no rebuild; a restart is enough

resource utilisation
-----------------------
if something goes wrong in a loop, it can occupy the entire host's resources. We need to allocate resources to the container.
1 cpu = 1000m cpu
soft limit (requests) --> 100m cpu, 68Mi
hard limit (limits) --> 120m cpu, 128Mi

without touching the actual code, we need to change the value --> variables

How can you access your pod from the internet or outside?

pod IPs are ephemeral
by exposing them through services

services give DNS to the pods and load balancing as well

1. cluster IP --> default. only for internal pod-to-pod communication...
2. node port
---------------
opens a port on the node/host
3. load balancer

services select pods using labels
selector:
  app.kubernetes.io/name: proxy

labels
---------
is a single label enough?

narayana --> naaarayaaanaaaa

--------------------------------------------------------------------------------
/session-50.txt:
--------------------------------------------------------------------------------
k8s resources
-----------
namespace
pod
resources
env
label
annotations
configmap
secret

services
  cluster IP
  nodePort
  LoadBalancer
LoadBalancer > nodePort > cluster IP
pod-to-pod communication and load balancing. expose your pod using a service to access it from the internet

kind:
apiVersion:
metadata:
  name:
  labels:
spec:

Sets
-------
ReplicaSet
Deployment
DaemonSet
StatefulSet

ReplicaSet
-----------
makes sure your desired number of pods is running all the time

replicaset --> 3 nginx pods
a replicaset can't update the image version. its only responsibility is to maintain the desired number of replicas..

deployment
-----------
nginx-c8bb98ddc-6qmc9

a deployment creates a replicaset underneath, so replicaset is a subset/part of deployment

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens

--------------------------------------------------------------------------------
/session-51.txt:
--------------------------------------------------------------------------------
Volumes in kubernetes
=====================
external HD --> offline, near to our computer, more speed
google drive --> online, somewhere in the network, less speed

EBS and EFS

Static Provisioning
Dynamic Provisioning

Static Provisioning
--------------------
EBS

1. We need to create the volumes.
2. We need to install the drivers.
3. EKS nodes should have permissions to access EBS volumes.

node is in us-east-1b. can I create the disk in us-east-1d? no: EBS volumes are AZ-scoped, so the disk must be in the same AZ as the node.

vol-0e534628d19fc28f4

install drivers
-----------------
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.36"

Persistent Volumes and Persistent Volume Claims
----------------------------------------------
k8s created wrapper objects to manage the underlying volumes, because a k8s engineer will not have full knowledge of the volumes.

PV --> it represents the physical storage like EBS/EFS.

pods should claim a PV through a PVC to access it.... (a sketch follows)
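a minimal static-provisioning sketch, assuming the EBS CSI driver above is installed and using the volume ID above; names and size are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: expense-ebs
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0e534628d19fc28f4   # the pre-created EBS volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: expense-ebs
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""          # empty --> bind to a pre-created PV, no dynamic provisioning
  resources:
    requests:
      storage: 20Gi

the pod then mounts it through a volume with persistentVolumeClaim: claimName: expense-ebs.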
1. is that namespace level or not?
2. it is not namespace level, it is cluster level. an admin should create the resource

an expense project devops engineer got a requirement to have a volume
-----------------------------------------------------------------
1. you send an email to the storage team to create the disk. get the approval from your manager. they create the disk.
2. you send an email to the k8s admin to create the PV and provide them the disk details.

now it's your turn: create the PVC and claim it in the pod

dynamic provisioning
-------------------------------
1. install the drivers
2. give permissions to the EC2 nodes

storageClass
-----------------
the admin creates one storage class for EBS for the expense project.

annotation --> curl nginx

nginx service --> nginx pod

--------------------------------------------------------------------------------
/session-52.txt:
--------------------------------------------------------------------------------
EFS
----------
elastic file system

1. EBS is a block store, EFS is like NFS (network file system)
2. EBS should be as near as possible. EFS can be anywhere in the network
3. EBS is fast compared to EFS
4. EBS can store OS and databases. EFS is not good for OS and DBs
5. EFS can increase its storage limit automatically.
6. Files are stored in EFS.

1. create the EFS volume
2. install the drivers and allow port 2049 traffic from the EKS worker nodes
3. give permission to the EKS nodes
4. create the PV
5. create the PVC
6. claim it through the pod using the PVC
7. open the node port on the EKS worker nodes

kubectl kustomize \
    "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-2.1" > public-ecr-driver.yaml

aws eks --region us-east-1 update-kubeconfig --name expense

the EFS SG should allow traffic on 2049 from the SG attached to the EKS worker nodes.

1. create the storage class

Expense
------------
mysql is a stateful application

statefulset vs deployment
-----------------------------
statefulset is for DB-related applications.
deployment is for stateless applications.

a statefulset has a headless service along with the normal service. a statefulset requires PV and PVC objects.
a deployment does not have a headless service.

statefulset pods are created in an orderly manner.
a statefulset keeps its pod identity. pod names are created as -0, -1, -2, etc.

nslookup nginx --> all endpoints

what is a headless service?
a headless service does not have a cluster IP; if anyone does an nslookup on a headless service it returns all the endpoints (a sketch follows after these steps)

1. create the expense namespace
2. install the EBS drivers
   kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.36"
3. create the EBS storage class
4. give the EKS nodes EBS permission
5. create the PVC, create the statefulset
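a minimal sketch of such a headless service, assuming the statefulset pods carry the label app: mysql; clusterIP: None is what makes it headless:

apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None        # headless: DNS returns the pod IPs directly
  selector:
    app: mysql
  ports:
    - port: 3306

the nslookup output below shows the result: every pod endpoint is returned instead of one virtual IP.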
nslookup mysql-headless
Server:         10.100.0.10
Address:        10.100.0.10#53

Name:   mysql-headless.expense.svc.cluster.local
Address: 192.168.0.57
Name:   mysql-headless.expense.svc.cluster.local
Address: 192.168.10.99

--------------------------------------------------------------------------------
/session-53.txt:
--------------------------------------------------------------------------------
HPA
Helm charts

Scaling
----------
1. Horizontal
2. Vertical

Vertical --> only one building, downtime is involved
Horizontal --> multiple buildings, no downtime

Server --> traffic increases

Same server --> stop the server, increase CPU and RAM, then restart
Diff servers --> the number of servers increases based on traffic

Percentage --> of the max value (100)

containers can consume all the server's resources if something goes wrong. we have to set resource requests and limits

100m --> CPU request; if usage is 60m --> 60% utilisation

You should have the metrics server installed
You should have a resources section inside the pod
Once the above are done, we can attach an HPA to the deployment

curl -sS https://webinstall.dev/k9s | bash

Helm charts
------------
Helm is a package manager for kubernetes applications

1. image creation --> Dockerfile
2. how to run the image --> docker compose/manifest

popular tools have open source images... and open source manifests too

1. to templatise manifest files
2. to install custom or popular applications in kubernetes, like CSI drivers, metrics server, prometheus/grafana

helm install . --> the . means there is a Chart.yaml in the current folder

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver

helm repo update
install the latest release of the driver:
helm upgrade --install aws-ebs-csi-driver \
    --namespace kube-system \
    aws-ebs-csi-driver/aws-ebs-csi-driver

--------------------------------------------------------------------------------
/session-55.txt:
--------------------------------------------------------------------------------
RBAC
--------------
Authentication and Authorization

Authentication is proving you are part of the system
Authorization is proving you have access to that resource

Nouns and Verbs

Resources --> VPC, EC2, EBS, EFS, etc.
Actions
----------
createVPC
updateVPC
getVPC
deleteVPC
listVPC

User, Group, Roles and Permissions

Datacenter
-------------
everyone can enter
only admins can create servers
users have access to list servers

expense
----------
trainee --> read access
senior engineer --> deployment access
team lead --> namespace admin
manager --> cluster admin access

User, Role (resources and actions), RoleBinding (binds the user and role)

EKS uses IAM for authentication

EKS --> Platform as a Service

Authentication
--------------
I need to create the user in IAM

Suresh joined our team
-----------------------
the expense team sends an email to the EKS admin:

give him read access to the expense namespace

they create the IAM user and create a role for suresh to describe the EKS cluster..
they create the role and rolebinding (a sketch follows)
and provide access to suresh
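a minimal sketch of that read-only access; the role name and resource list are illustrative, and the user name is the one mapped in the aws-auth configmap:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: expense-reader
  namespace: expense
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]     # read-only actions
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: expense-reader-binding
  namespace: expense
subjects:
  - kind: User
    name: suresh                        # username as mapped in aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: expense-reader
  apiGroup: rbac.authorization.k8s.io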
EKS and IAM are two different systems

there is a configmap called aws-auth inside kube-system

IAM checks whether the user has access to the expense EKS cluster or not

taints and tolerations
affinity and anti-affinity

kube-scheduler --> master node component

kubectl apply -f manifest.yaml

nodeSelector:
  az: us-east-1b

taint --> paint
banks and the RBI may accept painted notes --> they can tolerate them

you can taint a node so the kube-scheduler can't schedule any pod on that node...
GPU-based servers are required for the expense project...

taint these GPU nodes
expense project users should add a toleration in their manifest files

1. EKS is integrated with IAM for authentication
2. aws-auth configmap

1. make sure the IAM user exists and has cluster describe access
2. create the role and rolebinding
3. edit the aws-auth configmap

--------------------------------------------------------------------------------
/session-56.txt:
--------------------------------------------------------------------------------
Taints and Tolerations
Affinity and anti-affinity

kube-scheduler --> responsible for scheduling pods on to the worker nodes

nodeSelector:
  label-key: value

taints and tolerations --> to repel pods. We can mark a node as tainted so that the scheduler will not schedule any pods on to it...

if you apply a toleration the scheduler can schedule the pods on to the tainted nodes, but it is not guaranteed.

1. project-specific worker nodes
2. special hardware, e.g. GPU-based servers

pod affinity
------------
2 replicas are running
we tell the backend pod to run where the cache is running --> pod affinity

Ingress Controller
------------------
If we want to give internet access to our app running in k8s we have to provision an ingress controller. We are using ALB as our ingress controller. We installed the aws load balancer controller drivers through helm charts and gave the appropriate permissions.

We create an ingress resource/object for the apps that require external access

HDFC --> k8s cluster

netbanking.hdfc.com --> netbanking
corporatebanking.hdfc.com --> corporate banking

Classic LB --> by default k8s creates a classic load balancer
  it is not intelligent
  it can't route traffic to different target groups
  it is not recommended by AWS

ALB
  it is intelligent
  it routes traffic to multiple target groups based on host or context rules
  it works at layer 7

m.facebook --> mobile fb target group
facebook.com

https://app1.daws81s.online --> app1
https://app2.daws81s.online --> app2

Ingress resource

--------------------------------------------------------------------------------
/session-57.txt:
--------------------------------------------------------------------------------
Revise ingress controller
init containers
liveness and readiness probes
how to use a configmap as a volume

ingress controller
----------------------
An ingress controller is used to provide external access to the applications running inside k8s. in EKS we can use ALB as the ingress controller.

We install the aws load balancer controller to connect with ALB and give the permission to EKS.

We have a resource called ingress to create the ALB, listeners, rules and target groups (a sketch follows)
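a minimal ingress sketch for the ALB controller; the host, annotation and backend service details are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB
spec:
  ingressClassName: alb          # handled by the aws load balancer controller
  rules:
    - host: app1.daws81s.online  # host-based rule --> app1 target group
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 8080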
eksctl utils associate-iam-oidc-provider \
    --region us-east-1 \
    --cluster expense \
    --approve

curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.10.0/docs/install/iam_policy.json

eksctl create iamserviceaccount \
  --cluster=expense \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::315069654700:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --region us-east-1 \
  --approve

liveness probe and readiness probe
-----------------------------------
self healing.. auto restart. if your app is not working, k8s can auto-restart your application

liveness probe --> health check
readiness probe --> ready to accept traffic

mysql backend

when port 8080 is open, then we can say the backend is ready

init containers
----------------------
before the backend starts, we need to make sure the DB is running and accessible

-> we can run init containers before the main containers run. There can be one or many.
-> init containers should complete before the main container runs
-> if an init container fails, the main container will not run.
-> init containers go to a Completed state.
-> we can use init containers to set up configuration and to check the status of external dependency apps (see the pod sketch below)

for i in {1..100}; do sleep 1; if nslookup mysql; then exit 0; fi; done; exit 1
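a minimal backend pod sketch combining the probes and the init container above; the backend image name is illustrative, and the loop uses seq because busybox sh has no {1..100} expansion:

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  initContainers:
    - name: wait-for-mysql       # runs to completion before the main container starts
      image: busybox
      command: ["sh", "-c", "for i in $(seq 1 100); do sleep 1; if nslookup mysql; then exit 0; fi; done; exit 1"]
  containers:
    - name: backend
      image: expense-backend:v1  # illustrative image
      ports:
        - containerPort: 8080
      livenessProbe:             # failing this restarts the container
        tcpSocket:
          port: 8080
        initialDelaySeconds: 10
      readinessProbe:            # failing this removes the pod from the service endpoints
        tcpSocket:
          port: 8080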
application code and configuration
-----------------------------------

--------------------------------------------------------------------------------
/session-58.txt:
--------------------------------------------------------------------------------
we use terraform to create the cluster and upgrade it

we will create the cluster
we will run the app
we will upgrade the cluster

10.0.0.0/16

pod-1 is on node-1
pod-2 is on node-2

pod-2 is receiving traffic from pod-1

node-2 should allow traffic from node-1

blue group of nodes --> currently running

green group of nodes -->

Cluster Upgrade
-------------------------
it is better to announce downtime; you should not do any release, deployment or change to any resources

change the SG so that only the admin team's bastion has access to the cluster...

1. create another node group, green, with the same capacity...
2. cordon the green nodes, so they do not accept any pods
3. upgrade the control plane to 1.31
4. upgrade green to 1.31 as well
5. cordon the blue nodes, uncordon the green nodes
6. drain all the blue nodes
7. delete the blue node group

now the running node group is green

--------------------------------------------------------------------------------
/session-59.txt:
--------------------------------------------------------------------------------
cluster created
the present blue node group is running
apps are also running

upgrade
------------
send a communication that EKS is getting upgraded; no new deployments and releases should happen
change the SG to remove access for other teams

create the green node group with the same capacity
cordon the green nodes
kubectl cordon --> scheduling disabled
upgrade the control plane in the console to 1.31
upgrade the green node group to 1.31 as well

cordon the blue nodes
uncordon the green nodes

drain the blue nodes --> the workloads automatically move to green
delete blue

VM to Containers migration (K8s)
-----------------------
existing ALB or a new ALB through ingress

ALB --> Listener --> Rule --> Target group (health checks) (instance based) --> VM
ALB --> Listener --> Rule --> Target group (health checks) --> Pods

Create ACM, Create ALB, Listener, Rule, Target Group (IP based)

1. Ingress resource --> target group
2. Target group binding --> adds our pods to the target group

eksctl utils associate-iam-oidc-provider \
    --region us-east-1 \
    --cluster expense-dev \
    --approve

eksctl create iamserviceaccount \
  --cluster=expense-dev \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::315069654700:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --region us-east-1 \
  --approve

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=expense-dev --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=expense-dev --set serviceAccount.create=true --set serviceAccount.name=aws-load-balancer-controller

--------------------------------------------------------------------------------
/session-60.txt:
--------------------------------------------------------------------------------
how to push images to ECR
DaemonSet
ServiceAccount
K8s arch
quiz

315069654700.dkr.ecr.us-east-1.amazonaws.com/expense/backend:v1

ReplicaSet
Deployment
StatefulSet
DaemonSet

if you run a daemonset, k8s makes sure a pod runs on each and every node..

3 nodes --> pull the logs from the nodes.
a pod on each node to pull the logs..

service accounts --> not a human user; created for system purposes.

trainee --> he can create pods, but he can't get or list the secrets

run the pod with a service account that has access to get the secrets (see the sketch below)

for every namespace k8s creates one default service account
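a minimal sketch; the names are illustrative, and the service account still needs a role/rolebinding (like the one shown earlier) that allows get/list on secrets:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-reader
  namespace: expense
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: expense
spec:
  serviceAccountName: secret-reader   # the pod runs with this SA instead of the namespace default
  containers:
    - name: backend
      image: nginx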
Master and Node
-----------------

master
-------
api server --> intercepts every request to k8s and checks authorization

scheduler --> schedules the pods on to the worker nodes; it checks taints, tolerations, node selectors, affinity, anti-affinity, hardware requirements, free CPU and memory

controllers
  replica controller --> makes sure the desired number of pods is always running
  node controller --> monitors the nodes continuously
  job controller --> checks Jobs
  EndpointSlice controller --> establishes the connection between services and pods
  SA controller --> creates the default service account for namespaces

etcd --> the DB for our k8s configuration

node
--------
kubelet --> agent running inside the worker node, to communicate with the master
kube-proxy --> sets up networking rules and policies for nodes and pods
container runtime --> containerd, cri-o.

add-ons --> VPC CNI, DNS, ...

--------------------------------------------------------------------------------
/session-61.txt:
--------------------------------------------------------------------------------
Coca-Cola drink
----------------
we take one drink into the lab separately and change the formula. then we taste it as employees. if we don't like it, we change the formula until we like it..
DEV

We hire a few tasters and give them the drink. They provide some feedback; based on their feedback we change it again. Finally they like it..
QA

We do a survey in multiple countries; we go to the public. We randomly select 1 lakh members and collect feedback. if they like it we can release
UAT

Take FSSAI permission for the new composition.
PRE-PROD

Now we can release into the market...
PROD

Git
---------
create repo
git clone
git add --> staging area
git commit -m "" --> commit to the local repo
git push --> push to the central repo
Git is a distributed version control system...

Linus Torvalds --> 2005

git init --> converts a folder into a git repo. a .git folder will be created
git remote add origin
git branch -M main

Branching
----------------
main --> points to production

create another copy of the file, do the changes, carefully review the changes. if okay, then edit them into the main file

create another branch from the main branch
do the changes
test the changes
scan the changes

if everything is fine, bring those changes into the main branch

SHA code --> 40-char code

e87bbd7883f2ddafc6243ae694acbfd18608ba17 --> our code

1. if the commit id is changed, we can say the code is changed
2. if the code is changed, the commit id also changes

nobody is allowed to make changes directly in the main branch...
pull request --> someone has to review before you merge changes into the main branch.

create another branch from the main branch, do the changes, create a PR. get the approval, then merge the changes

62918daf857b1c13127fb2d00e93b80bf77b13a7 --> merge commit

-> a merge commit has 2 parent commits.
-> merging preserves history; at any point you can track the full history of changes.
-> a merge commit is an extra commit created by Git
-> rebase does not create extra commits.
-> no history is preserved in rebase, since there is only one parent
-> the commit id changes in rebase

if a branch is developed by multiple persons --> prefer merge

if a branch is developed by a single person --> you can go for rebase; make sure you understand how rebase works

e11be6beaf31fcddd606e311f4dfca0293493cee
feb5c9dd5a89b88461df73008d10ea27441c0fa2

How do you resolve merge conflicts?
A conflict occurs when git finds different code on the same line. Then git raises a conflict, and the developers should discuss and resolve it. Git marks the conflicting sections with <<<<<<< and >>>>>>> markers. We discuss and decide what code to keep, remove the markers and proceed to commit.

egg dosa
------------
2 members

egg-dosa-suresh
egg-dosa-ramesh

a4741a26a58e9a5cd169cda43a889a2d9963db4c

--------------------------------------------------------------------------------
/session-62.txt:
--------------------------------------------------------------------------------
Branching strategy
====================
How you develop and how you release the application into production

git flow
==========
1. main
2. develop

these two are long-lived branches for the lifetime of the project.

short-lived branches
=========
feature
release
hotfix

the develop branch's source is: main

sprint-1 --> 4 weeks
-------------------
a single feature, multiple features, or a few defects

feature-video-call --> all developers working on it use this branch for their code changes
feature-whatsapp-status

source: develop
destination: develop

changes are merged to the develop branch using a PR. changes are tested in the develop environment

if there are defects, create another feature branch, do the changes, merge to develop again, test again

now the develop env is done...

release branch
-------------------
release/v1.2.3

source: develop
target: main and develop

test the application in QA, UAT, etc. if there are defects, do the changes in the release branch and test them in QA, UAT, etc.

once everything is good in all environments, merge the changes to the master/main branch, release the application into production, and merge the changes back into the develop branch.

hotfix branch
------------------
P0, P1, P2, P3, P4

SLA --> service level agreement
P0 --> severe, business is completely down. 30 min
P1 --> severe, maybe 2 hours
P2 -->
P3 -->
P4 -->
source: main
destination: main and develop

whatsapp supports multiple versions at a time, android too..

for web applications the git flow model is very heavy; we can move to github flow or the feature branching model...

feature branch model
============================
main/master and feature

one feature is developed by only one person...

we can run the CICD process for every feature branch

build, scan (all scans), unit test, deploy in dev, functionality test --> shift-left process

shift left is an important strategy in the feature branching model.. it means including the scans and tests in the early stages of development instead of after development..

we create a PR to the main branch...
now we can deploy to DEV, QA, UAT, PRE-PROD, perf: basically the NON-PROD environments
finally we go to PROD
-----------------------------
approval process.. change release process

date, time

a ticket is raised:

change type, which application
change description
date
time
approvals
what if the change fails? rollback process
test reports
scan reports
sanity testing --> basic checks after the release

git reset vs git revert
-------------------------
these commands are for undoing changes...

workspace --> where we write code
staging area --> changes are added to the staging area from the workspace
commit area --> changes are committed to git from staging

reset
========
useful before you push the changes to remote branches

pull before push

undo changes that were already done. We have 3 options:

1. soft
2. mixed --> default
3. hard

git reset --soft --> changes are removed from the commit and stay in the staging area

git reset HEAD~1

git reset --hard HEAD~1 --> changes are removed from the commit, staging and workspace

reset rewrites history; it changes commit ids

use reset only on local branches, not remote branches..

revert
------------
revert does not remove commits; we make changes to correct the wrong commits, and history is not changed. extra commits are added to correct the wrong commits

Key Differences
---------------
Aspect                      git reset                         git revert
History Impact              Rewrites history (destructive)    Preserves history (non-destructive)
New Commit?                 No                                Yes
Use Case                    Adjust local commits              Undo changes in a shared branch
Working on Shared Branch?   Risky (may cause conflicts)       Safe

When to Use Which?
Use git reset if you are working locally and haven't shared the changes yet.
Use git revert when you need to undo changes in a branch that others are using (e.g., after pushing to a remote).

--------------------------------------------------------------------------------
/session-63.txt:
--------------------------------------------------------------------------------
cherry pick
restore

you can pick whatever cherries you want.

When you find something useful in another branch, you can pick those commits instead of completely merging or rebasing with that branch

imagine 2 feature branches, feature-1 and feature-2. feature-2 finds something useful in feature-1; instead of completely merging with it we can cherry-pick the commits we want...

git checkout feature-1

git pull

git checkout feature-2

git log feature-1 --> you can see the commits, select what you want
git cherry-pick <commit-id> --> this may create conflicts; resolve and push

special dosa
--------------
robert and raheem

restore
-------------------
reset vs revert

git restore --staged <file> --> brings the changes from the staging area back to the workspace

git restore --source <commit-id> <file> --> completely restores the file to that particular commit id, but it does not remove the commit id

restore works on a particular file; reset or revert work on the entire workspace.. (a short demo follows)
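a small illustrative walkthrough on a local scratch branch; commit ids are made up, and the reset and revert lines are alternatives, not steps to run together:

git log --oneline          # c3 (bad commit), c2, c1
git reset --hard HEAD~1    # c3 disappears from history; safe only before pushing
git revert c3              # the alternative: keeps c3 and adds a new commit that undoes it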
CICD
----------------------
Continuous Integration --> Jenkins

Project infra
Deploy applications

It is the way of integrating the code from source and creating an artifact. between the source and creating the artifact we need to install dependencies, run unit test cases, scans, etc. instead of doing this manually we can automate it through the continuous integration process... We are using Jenkins for CI.

take one server
--------------
git clone
npm install --> downloads dependencies
npm test --> unit test cases
sonar-scan
zip --> zip the application

store the artifact on some other artifact server

Jenkins is a plain webserver; if you install plugins, Jenkins can connect with other tools.

sudo curl -o /etc/yum.repos.d/jenkins.repo \
    https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
sudo yum upgrade --> takes too much time, we can skip it
# Add required dependencies for the jenkins package
sudo yum install fontconfig java-17-openjdk
sudo yum install jenkins
sudo systemctl daemon-reload
sudo systemctl start jenkins
sudo systemctl enable jenkins

in Jenkins everything is called a job.. when you trigger it, that run is called a build

pre-production, shoot, post-production

pre-build, build, post-build

manually created infra vs IaC
-------------------------------
1. you can't restore if something goes wrong
2. we can't track the changes we made
3. can't reuse
4. no review of changes
5. no version control

if you make it a pipeline and keep it in git:

pipeline {
    agent any

    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
    }
}

--------------------------------------------------------------------------------
/session-64.txt:
--------------------------------------------------------------------------------
Master Agent architecture
--------------------------
1 acre
100 acres --> you employ a few resources

1. the employee comes daily and checks with the master for work
2. whenever the master gets work, he allocates it to an employee
CI servers --> 1 server across projects

Java 8 --> agent-java8
Java 17 --> agent-java17

sudo yum install java-17-openjdk -y

/var/lib/jenkins --> JENKINS_HOME

Groovy syntax --> similar to Java

triggers
-----------
whenever a developer pushes to git, it should automatically trigger the pipeline

webhooks

http://184.73.142.106:8080/github-webhook/ --> the trailing / is important

when you select create --> apply
when you select destroy --> destroy

vpc, sg, bastion, rds, alb ingress, eks, cdn, ecr

versions --> read the version and tag the docker image with this version

push to ecr
k8s deployment

--------------------------------------------------------------------------------
/session-65.txt:
--------------------------------------------------------------------------------
application --> CICD mandatory
infra --> CICD not mandatory for now

apply or destroy

vpc, sg, bastion, rds, eks, acm, alb, cdn, ecr

vpc
sg
bastion
rds
eks
acm
ecr
cdn

alb

--------------------------------------------------------------------------------
/session-66.txt:
--------------------------------------------------------------------------------
vpc
sg

bastion
rds
eks
acm
ecr
cdn

alb

APPLY
==============
VPC
------
apply --> it should create the VPC and trigger SG

SG
------
apply --> it should create the SG and trigger in parallel:
  bastion
  rds
  eks
  acm
  ecr
  cdn
  alb

DESTROY
============
when you destroy --> it can trigger all the destroys in sequence:
  alb
  cdn
  ecr
  acm
  eks
  rds
  bastion
  sg
  vpc

SG is the downstream job for VPC
VPC is upstream for SG

EKS setup
------------
1. aws-load-balancer-controller

ingress resource or target group binding

APP CICD
===========
Build
Unit Test
Scans
Build Image
Push Image
Helm deploy

What Jenkins agents are you using?
===========================================
VMs are permanent agents
you need to maintain them
we need to maintain multiple agents for multiple projects

Temp/ephemeral agents
  Docker containers
  K8s pods

jenkins-agents --> namespace
use a nodejs base image for nodejs projects
use a Java base image for java projects

agent {
    kubernetes {
        cloud kubernetesConfig.get(springBootMap.get("uat", "") ? "uat" : "prod").cloud
        label podLabel
        yaml """
spec:
  containers:
  - name: jnlp
    image: sivakmr469/jenkins-maven-pcf:7.0.2
    imagePullPolicy: Always
    resources:
      requests:
        cpu: 0.5
        memory: 1Gi
      limits:
        cpu: 0.5
        memory: 1.5Gi
    ttyEnabled: true
    workingDir: /var/lib/jenkins
    alwaysPullImage: true
"""
    }
}

aws eks update-kubeconfig --region us-east-1 --name expense-dev

configmap, deployment, service, ingress

helm install/upgrade backend .

helm status
if success --> end the pipeline
if failure --> rollback
  if the rollback succeeds --> end the pipeline and do an RCA for why the release failed
  if the rollback fails --> this is a disaster --> the app is down
(sketched below)
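a hedged sketch of that deploy-and-rollback logic as shell steps; the release name and values file are the ones used above, and the error handling is simplified:

# --wait makes helm exit non-zero if the release never becomes ready
if ! helm upgrade --install backend . -f values-dev.yaml --wait --timeout 5m; then
    helm rollback backend    # back to the previous revision; then do the RCA
fi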
--------------------------------------------------------------------------------
/session-67.txt:
--------------------------------------------------------------------------------
sed editor
==========
stream editor

vim --> only for a user; you need to open the file and replace manually

1s
2s
%s///g

sed -

Scanning
==========
shifting the security scanning and testing into dev, before pushing code to the main branch. When developers push code to feature branches we should scan and test

static source code analysis --> SonarQube
static application security testing --> SonarQube/GitHub
dynamic application security testing --> Veracode
open source library scan --> NexusIQ/GitHub
image scanning --> ECR scanning

unit testing --> should be done by developers
functional testing --> testers

function -->
brick

login(username, password){
    queryDB();
    checkTheResult();
}

VM
Docker install
docker run -p

--------------------------------------------------------------------------------
/session-68.txt:
--------------------------------------------------------------------------------
Scans
========
1. static source code analysis
2. static application security testing
3. dynamic application security testing --> 1 time for every release
   veracode --> free trial only with business emails
4. image scanning through ECR scan
5. dependency scan --> Nexus scan, Blackduck, Dependabot

sonar-6.0

10 functions making up 100 lines of code --> if you test these 10 functions then

code coverage is 100%

code coverage should be a minimum of 80%

new code
overall code

Commit1 --> first-time code
Commit2 -->

overall code = Commit1 + Commit2
New code = C2 - C1

0 issues, 0 bugs, security rating A, maintainability rating A, code coverage 80%, code smells 0, vulnerabilities 0
on overall code and new code

http://jenkins.daws81s.online/sonarqube-webhook/

CI --> triggers CD
CD

the image is our output
DEV --> use that image, but different config: values-dev.yaml
QA/UAT/PROD --> use the same image, but different config: values-prod.yaml

jenkins-shared-library

--------------------------------------------------------------------------------
/session-69.txt:
--------------------------------------------------------------------------------
1. separate CI and CD, because we can use the CD job to deploy our application to multiple environments
   build once and run anywhere.

100 commits --> no need to deploy 100 times.... worst case, the 100th deploy should be there

our pipeline will have an option to choose whether to deploy or not
backend --> upstream
backend-deploy --> just helm charts. downstream

1. project
2. component

multi-branch pipeline --> a pipeline should exist for every feature branch to support development

function(input)
1. you can call it any number of times
2. it takes input and processes something

jenkins-shared-library --> pipeline as a function; it takes input and runs the pipeline --> these are called central pipelines (a sketch follows)
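a rough sketch of what such an entry point could look like; the file location (vars/nodeJSEKSPipeline.groovy) follows the standard shared-library convention, but the stages and map keys are illustrative:

// vars/nodeJSEKSPipeline.groovy -- `call` runs when a project invokes nodeJSEKSPipeline(...)
def call(Map configMap) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'npm install'
                }
            }
            stage('Deploy') {
                steps {
                    sh "helm upgrade --install ${configMap.component} ."
                }
            }
        }
    }
}

// a project's Jenkinsfile:
// @Library('jenkins-shared-library') _
// nodeJSEKSPipeline([project: 'expense', component: 'backend', environment: 'dev'])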
any number of projects can call these shared pipelines at a time
no need to maintain pipelines separately for different projects
easy updates
enforce standards at a high level

nodeJSEKSPipeline(input) --> by default the call function is invoked

nodeJSEKSPipeline.function_name()

Central DevOps Engineers
-----------------------
Ansible roles
Terraform modules
Jenkins central pipelines

1. what is the programming language
2. what is the deployment platform
3. what is the build tool --> maven, gradle, ant

nodeJSEKSPipeline
nodeJSVMPipeline

Map/dictionary --> key/value

1. project
2. component
3. environment --> DEV by default

Project Onboarding
---------------------
1. programming language
2. deployment platform
3. branching strategy --> feature branching strategy

SOP --> if any new project comes, we meet the development team and set up the below things:
Jenkins folders
SonarQube
K8s namespace
ECR repo
Veracode target
GitHub Dependabot
Dockerfile
Helm charts

CR Process
=================

Multi-branch pipeline --> to support multiple branches at a time
CI and CD separate --> CD can do deployments into different environments, including DEV

Jenkins shared library
=========================
pipeline as a function --> accepts input parameters and runs the pipeline

nodeJSEKSPipeline
nodeJSVMPipeline
pythonEKSPipeline
pythonVMPipeline
JavaEKSPipeline

--------------------------------------------------------------------------------
/session-70.txt:
--------------------------------------------------------------------------------
Shift left
DevSecOps
Build once and run anywhere
Centralised pipelines
Branching strategy
Infra as code
Configuration management
EKS upgrade
Terraform modules development
Optimising Docker images
Using Helm charts
Legacy VM applications and microservices

Project onboarding
Project maintenance
Project improvements
Project changes
Project upgrades

Pharma
Retail
Banking
Telecom
Oil and energy

--------------------------------------------------------------------------------
/session-71.txt:
--------------------------------------------------------------------------------
Monitoring
================
Black box testing
White box testing

Black box testing --> we don't know what is inside --> end users, without knowing internal details
White box testing --> we know what is inside --> internal users can do this

memory
CPU
RAM
Network requests

P0, P1, P2, P3, P4

RCA --> Root cause analysis

Latency --> how fast our system is responding
Traffic --> how many requests come to the system
Errors --> monitor for 5XX errors
Saturation --> measure system resources

Prometheus
============
CCTV --> cameras, central system
cameras --> agents, collecting the live video and sending it to the central system

time-series database
====================
daily expenditure --> the date is the input
quarterly, half-yearly, annual expenditure, weekly expenditure
[Unit]
Description=Prometheus Server

[Service]
ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml

[Install]
WantedBy=multi-user.target

[Unit]
Description=Node Exporter

[Service]
ExecStart=/opt/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

unit testing --> jest, junit, etc...
npm test
5 test cases
4 success, 1 fail

sonar --> testing code, source code, test results

--------------------------------------------------------------------------------
/session-72.txt:
--------------------------------------------------------------------------------
dynamic scraping
-------------------
1. our targets should have node_exporter installed.
2. the prometheus server should have permission to describe EC2 instances
3. we can filter the instances based on tags, regions, AZs, etc.

email-smtp.us-east-1.amazonaws.com:587

counter and gauge

KM: 1000 --> the odometer only ever increases --> counter
Speed: 60 km/h --> goes up and down --> gauge

0th --> 0GB
1st --> 2GB
2nd --> 5GB

5GB - 2GB = 3GB

130GB --> counter reset to 0

CPU --> seconds
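for example, in PromQL a counter is normally read through rate(), which measures the per-second increase between samples (like the 5GB - 2GB = 3GB delta above) and handles counter resets; node_cpu_seconds_total is the standard node_exporter metric:

rate(node_cpu_seconds_total{mode="user"}[5m])   # per-second CPU time in user mode, averaged over 5 minutes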
--------------------------------------------------------------------------------
/session-73.txt:
--------------------------------------------------------------------------------
MongoDB --> NoSQL DB (documents and collections). SQL DB (tables and rows)

Redis --> cache DB
App --> DB --> save the results in cache
1. open the DB connection
2. run the query and get the results
3. close the connection

App --> CacheDB --> DB

RabbitMQ --> messaging queue; kafka, ActiveMQ, JBoss ESB server

Asynchronous communication --> the other system need not be up and running

Synchronous communication --> send a request and immediately expect a response
HTTP

System-1 --> MQ --> System-2

Ramesh --> Whatsapp --> Suresh

Youtube notifications

upload video --> MQ --> push to all subscribers

frontend --> uses the public IP --> the other components use component.daws81s.online --> private IP

DBs --> by default they are not exposed to the outside..

proxy_http_version 1.1;
location /images/ {
    expires 5s;
    root /usr/share/nginx/html;
    try_files $uri /images/placeholder.jpg;
}
location /api/catalogue/ { proxy_pass http://catalogue.daws81s.online:8080/; }
location /api/user/ { proxy_pass http://user.daws81s.online:8080/; }
location /api/cart/ { proxy_pass http://cart.daws81s.online:8080/; }
location /api/shipping/ { proxy_pass http://shipping.daws81s.online:8080/; }
location /api/payment/ { proxy_pass http://payment.daws81s.online:8080/; }

location /health {
    stub_status on;
    access_log off;
}

[Unit]
Description=Cart Service
[Service]
User=roboshop
Environment=REDIS_HOST=redis.daws81s.online
Environment=CATALOGUE_HOST=catalogue.daws81s.online
Environment=CATALOGUE_PORT=8080
ExecStart=/bin/node /app/server.js
SyslogIdentifier=cart

[Install]
WantedBy=multi-user.target

Java --> compile --> bytecode --> run this bytecode
JDK --> we need this at development time
JRE --> we need this at run time

JRE is a subset of JDK

pom.xml
mvn package --> target/app.jar

groupId, artifactId and version

students --> firstname, lastname, dob, pancard

com.tcs
roboshop

com.facebook
whatsapp.android
v1.0.3

com.roboshop
catalogue
1.0.3

httpComponents:11.4.5

[Unit]
Description=Shipping Service

[Service]
User=roboshop
Environment=CART_ENDPOINT=cart.daws81s.online:8080
Environment=DB_HOST=mysql.daws81s.online
ExecStart=/bin/java -jar /app/shipping.jar
SyslogIdentifier=shipping

[Install]
WantedBy=multi-user.target

--------------------------------------------------------------------------------
/session-74.txt:
--------------------------------------------------------------------------------
catalogue
==========
[Unit]
Description=Catalogue Service

[Service]
User=roboshop
Environment=MONGO=true
Environment=MONGO_URL="mongodb://mongodb.daws81s.online:27017/catalogue"
ExecStart=/bin/node /app/server.js
SyslogIdentifier=catalogue

[Install]
WantedBy=multi-user.target

user
===============
[Unit]
Description=User Service
[Service]
User=roboshop
Environment=MONGO=true
Environment=REDIS_HOST=redis.daws81s.online
Environment=MONGO_URL="mongodb://mongodb.daws81s.online:27017/users"
ExecStart=/bin/node /app/server.js
SyslogIdentifier=user

[Install]
WantedBy=multi-user.target

cart
======
[Unit]
Description=Cart Service
[Service]
User=roboshop
Environment=REDIS_HOST=redis.daws81s.online
Environment=CATALOGUE_HOST=catalogue.daws81s.online
Environment=CATALOGUE_PORT=8080
ExecStart=/bin/node /app/server.js
SyslogIdentifier=cart

[Install]
WantedBy=multi-user.target

shipping
=========
[Unit]
Description=Shipping Service

[Service]
User=roboshop
Environment=CART_ENDPOINT=cart.daws81s.online:8080
Environment=DB_HOST=mysql.daws81s.online
ExecStart=/bin/java -jar /app/shipping.jar
SyslogIdentifier=shipping

[Install]
WantedBy=multi-user.target
payment
===============
[Unit]
Description=Payment Service

[Service]
User=root
WorkingDirectory=/app
Environment=CART_HOST=cart.daws81s.online
Environment=CART_PORT=8080
Environment=USER_HOST=user.daws81s.online
Environment=USER_PORT=8080
Environment=AMQP_HOST=rabbitmq.daws81s.online
Environment=AMQP_USER=roboshop
Environment=AMQP_PASS=roboshop123

ExecStart=/usr/local/bin/uwsgi --ini payment.ini
ExecStop=/bin/kill -9 $MAINPID
SyslogIdentifier=payment

[Install]
WantedBy=multi-user.target

dispatch
=======
[Unit]
Description=Dispatch Service
[Service]
User=roboshop
Environment=AMQP_HOST=rabbitmq.daws81s.online
Environment=AMQP_USER=roboshop
Environment=AMQP_PASS=roboshop123
ExecStart=/app/dispatch
SyslogIdentifier=dispatch

[Install]
WantedBy=multi-user.target

Infra is ready, but the setup needs to be done
===========================================
jenkins-shared-library

Jenkins, Jenkins agent (nodejs, java, python, docker, kubectl, helm)
=====================
pipeline stage view
aws credentials
ansicolor
rebuild
aws steps
sonarqube
--------------------------------------------------------------------------------