├── 0-VM-Setup ├── GamutGurus-AWS-EC2-Instance-Creation.txt └── VM-SetUp-On-Windows ├── 1.Linux └── LinuxCommands.txt ├── 2.Git ├── Git-Class-Notes-Wiculty ├── Git_Interview_Qns.txt └── Misc-Tags.Stashing-concepts ├── 3.Maven ├── HL_Maven_DevOps_Doc ├── Interview_Qns ├── antVsmaven └── maven by example.pdf ├── 4.Docker ├── Docker ├── TheDockerBook_sample.pdf └── ansible-for-devops.pdf ├── 5.Jenkins ├── Configure-SSL.txt ├── Gamut_Jenkins_Interview_Qns.txt ├── Jenkins-Installation ├── Jenkins_CLI.txt ├── deploy.sh ├── deployment_commands ├── deployment_commands.txt ├── multi-deploy.sh └── system-message.html ├── 6.Kubernetes ├── Gamut-Kubernetes-Class-Notes.txt ├── Guest-Book-Project │ ├── frontend-deployment.yaml │ ├── frontend-service.yaml │ ├── mongo-deployment.yaml │ └── mongo-service.yaml ├── Kubernetes.pptx ├── Misc │ ├── Namespaces │ │ └── Namespaces.txt │ ├── kubectl.txt │ ├── secrets │ │ ├── nginx-credentials.yaml │ │ ├── nginx-file-override.yaml │ │ ├── nginx-pod-env.yaml │ │ ├── nginx-pod-volume.yaml │ │ └── secrets.txt │ ├── test │ │ ├── file-sercret.yaml │ │ └── pod.yaml │ └── vol │ │ ├── gcepd.yaml │ │ └── volumes.txt └── generate-k8s-token-worker-node-join-command.txt └── Misc ├── Docker └── Misc └── VIM ├── VIM_Shortcuts └── _vimrc /0-VM-Setup/GamutGurus-AWS-EC2-Instance-Creation.txt: -------------------------------------------------------------------------------- 1 | Gamut Gurus Technologies 2 | Contact: Trainer: +91-97393 68768. Course Advisor: +91-944897 1000. 3 | Email: info@gamutgurus.com 4 | ================================================================================= 5 | VM creation in AWS 6 | ---------------------------- 7 | 8 | 1. 9 | Create an account in 'AWS console management' using below URL and Login. 10 | https://aws.amazon.com/console/) 11 | 12 | 2. 13 | Select the nearest region from top right drop-down. Ex: 'N.Virginia' or 'Mumbai'. 14 | 15 | 3. 16 | To create a VM, go to 'Services' at top left corner and select EC2. Now you are in EC2 Dashboard. 17 | 18 | 4. 19 | Click on 'Launch Instance'. Select an AMI based on your interested OS. 20 | Example: I want to create a VM with Ubuntu OS. So I select 'Ubuntu Server 18.04 LTS' AMI. Keep default, '64-bit(x86). 21 | 22 | 5. 23 | Select an Instance type as "t2.micro". This is eligible for Free tier account. You get 1vCPU and 1 GM RAM with this instance. Other instance type will cost us as those are not part of free tier account. 24 | 25 | 6. 26 | Select 'Next'. The purpose here is to create an EC2 instance quickly with basic configurations. So, don't change any configurations here for now. 27 | 28 | 7. 29 | Keep going by clicking 'next' until you see 'Review and Launch' Option. Click on 'Review and Launch'. 30 | Click on 'Launch' button. 31 | 32 | 8. 33 | In order to connect to this new Linux VM from Windows, we need to create a new 'key pair'. The concept here is, we generate a private key and store in our Windows machine. When we connect to EC2 instance, we have to provide this private key for authentication purpose. Here is what we do.. 34 | - Select 'Create a new key pair'. Give some random name. Ex: GamutDevOpsVMKey. Click on 'Download Key Pair'. You can see 'GamutDevOpsVMKey.pem' in your download location. 35 | 36 | 9. Now click on 'Launch Instance'. 37 | 38 | 10. Click on 'View Instances'. 39 | 40 | 11. You can see the instance running. Wait until you see '2/2 checks passed' under 'Status Checks'. As part of this 2/2 checks, AWS checks if infrastructure is proper and OS is accepting the trafic. 
This is just to make sure that the machine is completely reachable by the user. 41 | 42 | 12. Now that the EC2 instance is created with default configurations, let's see how we connect to it from Windows. 43 | 44 | 13. We connect to the EC2 instance using SSH. Unfortunately, Windows doesn't have an SSH client/command by default, so we need to install one. A popular SSH client for Windows is PuTTY. Let's install PuTTY. Download and install PuTTY using the below URL, or just do a Google search. 45 | https://the.earth.li/~sgtatham/putty/latest/w32/putty-0.73-installer.msi 46 | 47 | 14. If you remember, we downloaded the 'GamutDevOpsVMKey.pem' file in step 8. This is the private key. Unfortunately, PuTTY doesn't understand the .pem format, so we need to convert it to .ppk, which PuTTY understands. To convert .pem into .ppk, we use a utility called 'PuTTYgen' (this is installed automatically along with PuTTY). 48 | 49 | 15. Launch 'PuTTYgen' from the Windows start menu. Let's convert .pem to .ppk. Click on 'Load' and load the downloaded 'GamutDevOpsVMKey.pem' (select 'All files' in case you don't see this file while browsing). Now click on 'Save private key'. Provide a name for your .ppk file, ex: GamutDevOpsVMKey. 50 | Close PuTTYgen. 51 | 52 | 16. Now that the .ppk file is ready, let's connect to our EC2 instance using it. Launch 'PuTTY'. To connect to this VM, we need its public IP. Go to the EC2 dashboard and copy its public IP (IPv4 Public IP), which is under the 'Description' tab. In this case it is 13.233.141.53. 54 | 55 | 17. Launch PuTTY --> Provide the public IP (in the Hostname field) --> On the left side menu, under 'Connection', click on 'SSH' and 'Auth' --> browse the .ppk file and load it --> Once prompted for login, provide the default username 'ubuntu'. 56 | 57 | 18. Hurray..! You have got the terminal of the AWS instance. 58 | 59 | Note: Once practice is done, make sure you stop the machine. 60 | How do you stop? 61 | Actions --> Instance Status --> Stop (This will shut down the machine) 62 | Actions --> Instance Status --> Terminate (This will destroy the machine -- note that you can't get your data back once the server is terminated) 63 | 64 | - 65 | P Nageswara Rao, 66 | +91- 97393 68768 67 | Gamut Gurus Technologies 68 | -------------------------------------------------------------------------------- /0-VM-Setup/VM-SetUp-On-Windows: -------------------------------------------------------------------------------- 1 | Setting up Linux VM on Windows 2 | ---------------------------------------- 3 | # 4 | 1. Install VirtualBox (Download from below URL) 5 | For 64-bit machines: 6 | -- 7 | https://www.virtualbox.org/wiki/Downloads (OR) 8 | https://download.virtualbox.org/virtualbox/6.1.10/VirtualBox-6.1.10-138449-Win.exe (Direct Link) 9 | 10 | For 32-bit machines: 11 | -- 12 | https://www.virtualbox.org/wiki/Download_Old_Builds_5_2 13 | 14 | 15 | # 16 | 2. Download the "Ubuntu 18.04 Desktop" image from the below URL 17 | https://releases.ubuntu.com/18.04/ (OR) 18 | https://releases.ubuntu.com/18.04/ubuntu-18.04.4-desktop-amd64.iso 19 | 20 | For 32-bit machines: 21 | -- 22 | https://releases.ubuntu.com/16.04/ubuntu-16.04.6-desktop-i386.iso 23 | 24 | # 25 | 3. Launch VirtualBox from the Windows start menu 26 | 27 | # 28 | 4. Click on 'New' and give a name for your VM, ex: 'name: ubuntu-devops-gamut'. Make sure 'Type' is Linux and Version is 'Ubuntu 64-bit'. Hit next. 29 | - Provide RAM for the VM (min. 2 GB). Keep hitting next to go with default configurations.
In the 'File location and size' window, provide hard-disk space (20 GB recommended) 30 | 31 | # 32 | 5. Select your VM name and click on 'start'. In the 'select startup disk' window, browse and load 'Ubuntu 18.04 Desktop', then click on 'start'. 33 | 34 | - Select 'Install Ubuntu' --> continue and follow with the other configurations. 35 | 36 | Note-1: 37 | You may get an error something like "Virtualization/VTx is not enabled". If you get this, go to BIOS settings by pressing F2 or F12 while the machine restarts. Go to the BIOS configuration and 'enable virtualization'. 38 | 39 | Note-2: 40 | Once Linux is installed, you may see a small screen. If you want to maximize the screen, do the below. 41 | Click on Ubuntu's 'start' button (it may be on the top left or bottom left) --> type displays --> go to displays --> go to 'resolution' --> and try different options. 42 | Ideally the 1400*900 value should give you a bigger screen, or try a bigger value. 43 | 44 | -------------------------------------------------------------------------------- /1.Linux/LinuxCommands.txt: -------------------------------------------------------------------------------- 1 | 1. VM Creation 2 | 3 | 2. Quick walk-through of Ubuntu OS 4 | 5 | Linux Basic Commands: 6 | ----------------------- 7 | Reference: 8 | - https://www.hostinger.in/tutorials/linux-commands 9 | - https://www.guru99.com/unix-linux-tutorial.html 10 | - https://www.tutorialspoint.com/unix/index.htm 11 | 12 | 13 | 3. Shell 14 | 15 | 16 | pwd 17 | 18 | cd 19 | 20 | ls 21 | 22 | Shell/Terminal 23 | 24 | cat 25 | 26 | cp 27 | 28 | mv 29 | 30 | mkdir 31 | 32 | rmdir 33 | 34 | rm 35 | 36 | touch 37 | 38 | 39 | ---- 40 | AWK 41 | How do you print a particular line number using awk? 42 | ---- 43 | 44 | 45 | Pipe 46 | apt-get 47 | Process 48 | Push a process into background mode 49 | 50 | 51 | 52 | 53 | -------------------------------------------------------------------------------- /2.Git/Git-Class-Notes-Wiculty: -------------------------------------------------------------------------------- 1 | GIT 2 | ================= 3 | 4 | # What is SCM/VCS/RCS? Why do we need SCM? 5 | SCM tool features. 6 | 7 | --> Refer to 'ProGit' for official documentation. 8 | 9 | 10 | # 11 | Git Architecture 12 | - end-to-end git work-flow 13 | 14 | 15 | # 16 | GIT Installation (Ubuntu): 17 | $ sudo apt-get update 18 | $ sudo apt-get install git 19 | 20 | Verify Installation: 21 | which git 22 | git version 23 | 24 | GIT Uninstallation: 25 | $ sudo apt-get remove git 26 | 27 | 28 | # 29 | Creating a remote repository in GitHub 30 | =============== 31 | 1. Create an account in github.com 32 | URL: https://github.com 33 | 34 | 2. Log in to github.com with your credentials. 35 | Click on "New" --> give a name "flipkart-ecomerce" --> "Create repository" 36 | 37 | 3. Copy the repo URL from GitHub: 38 | https://github.com/nageshvkn/flipkart-ecomerce.git 39 | 40 | 4. Clone the source code from the remote repository using the 'git clone' command 41 | git clone https://github.com/nageshvkn/flipkart-ecomerce.git 42 | 43 | 5. cd "flipkart-ecomerce" and observe the ".git" folder. ".git" is called the "Local Repository". 44 | 45 | 6. Create some sample code and submit the code to the remote repo. 46 | --> touch Login.java 47 | --> git add Login.java 48 | --> git commit Login.java -m "login module code" [when you commit for the first time it asks for username and email.
Set it up using below steps under "Setting up mandatory configurations" SECTION] 49 | --> git push --> [refer below topic [Setting up token/password] to generate token or password] 50 | 51 | --> git log Login.java (check the history of the file) 52 | 53 | 7. After this, you can clone your repository in another directory and check if you get Login.java as your repository has a file now. 54 | 55 | 56 | # 57 | Setting up token/password to access GitHub 58 | ======================================== 59 | 1. go to https://github.com 60 | 61 | 2. Generate token from github 62 | click on user-profile icon (top right) .. click on 'settings' .. click on 'Developer settings' .. click on 'Personal access tokens' .. click on 'Generate new token' .. give a name under 'Note' (example:class) .. select 'No expiration' from 'Expiration' drop-down box .. click on 'repo' check box under 'Select scopes' .. and finally click on 'Generate token' button. 63 | 64 | 3. Store the token in your machine using below command 65 | $ git remote set-url origin https://nageshvkn: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@github.com/nageshvkn/wiculty33.git 66 | 67 | Note: in the above command 68 | - 'nageshvkn' is your GitHub user-name. 69 | - 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 IS your token generated from GitHub. 70 | - github.com/nageshvkn/wiculty29.git is your repository path. don't give 'https' 71 | 72 | 73 | Note: If you want to store your Git credentials, use below command 74 | $ git config --global credential.helper store 75 | 76 | 77 | # 78 | Setting up mandatory configurations: 79 | ============================================= 80 | $ git config --global user.name "Nageswara Rao P" 81 | $ git config --global user.email "nageshvkn@gmail.com" 82 | 83 | Check the configurations using below command 84 | $ git config --list 85 | 86 | Git stores all configurations in below file 87 | "$USER_HOME/.gitconfig" 88 | 89 | 90 | # 91 | Staging Index/Stage 92 | - Use case: Using stage option, we can logically group the changes related to a bug fix or feature development. This will help us to track the history of that bug fix or a feature development clearly. 93 | 94 | - Skip staging 95 | $ git commit -am "submit all pending changes" 96 | 97 | Note: If you want to skip the staging, you need to commit all pending changes. 98 | For new file, you have to go through the 'stage' process. 99 | 100 | 101 | * 102 | # Show all the files that are modified as part of a commit (with content) 103 | git show 104 | git show b85a6e123 105 | 106 | # 107 | - Git Commit structure 108 | SHA value / commit ID 109 | User & email 110 | Date & time stamp 111 | Commit message 112 | 113 | # Understand Git Jargon. 114 | - Remote Repository 115 | - Working Directory 116 | - Local Repository 117 | - Stage/"Staging Index" 118 | - SHA/Commit ID 119 | 120 | 121 | # History 122 | $ git log Login.java 123 | $ git log 124 | 125 | - Filter the commits based on the user name 126 | git log --author "Sally" 127 | 128 | - Filter the commit based on commit message 129 | git log --grep "123" 130 | 131 | - Qn: show me all the commits made by user Sally and has bug-123 in commit message 132 | git log --author "Sally" --grep "bug-123" 133 | 134 | 135 | 136 | # GIT Commands 137 | 1. 
138 | # See the content change of a file which is in 'source' area 139 | $ git diff Login.java 140 | 141 | # See the content change of a file which is in 'stage' area 142 | $ git diff --staged Login.java 143 | 144 | # See the content change of a file after the commit 145 | $ git show 123abc456 146 | 147 | 148 | 2. Deleting a file 149 | 150 | A.) git rm OMS.java 151 | git commit OMS.java -m "comment" 152 | git push 153 | 154 | # revert 155 | $ git revert 156 | 157 | 158 | 3. Renaming a file/folder 159 | A.) git mv Login.java Login1.java 160 | git commit -m "rename Login" 161 | git push 162 | Note: 163 | Git will carry the history of old file to new file. To check complete history.. 164 | $ git log --follow Login1.java 165 | 166 | 167 | 168 | # Undoing the changes: 169 | Unstage the changes from STAGE area 170 | $ git restore --staged LoginWeb.java 171 | 172 | # Undoing the changes from the Source area 173 | $ git restore Login.java 174 | 175 | Note: Once the changes are removed from source area, you can't get those changes back. So, better you take a backup of the file before you apply 'git restore' command. 176 | 177 | ----- 178 | 179 | # BRANCHING 180 | A. What is a branch? 181 | B. Why and When we create a branch? 182 | C. Branching Strategies / Models 183 | 184 | # 185 | # List all active branches in local repository 186 | $ git branch 187 | 188 | # Creating a new branch 189 | $ git branch dev_1.2.3 190 | 191 | # Push new branch to remote repository 192 | $ git push origin dev_1.2.3 193 | 194 | # Switching from one branch to another 195 | $ git checkout dev_1.2.3 196 | 197 | # Creating and switching to a newly created branch 198 | $ git checkout -b dev_1.2.4 199 | 200 | # How do you clone a remote repository with a particular branch as default 201 | $ git clone -b dev_1.2.4 https://github.com/nageshvkn/flipkart899.git 202 | 203 | # List all remote branches 204 | $ git branch -r 205 | 206 | # Deleting a branch 207 | $ git branch -d dev_1.2.3 208 | $ git push -d origin dev_1.2.3 209 | 210 | 211 | MERGING: 212 | ============= 213 | 214 | To practice merge, we need to make sure that repository files are in conflict situation. 215 | 216 | # Preparing the repository to produce conflict situation. 217 | 218 | 1. Take a file from master ex: Login.java. Add some code as shown below. 219 | Login.java 220 | -- 221 | ublic class Login { 222 | public static void main() { 223 | int i; 224 | 225 | for(i=0;i<=10;i++){ 226 | System.out.println("Number: " + i); 227 | } 228 | } 229 | } 230 | 231 | 2. Create a new branch using $ git branch dev-1.2.5. 232 | Push it to remote repository using $ git push origin dev-1.2.5. 233 | 234 | After creating this dev-1.2.5 branch, you see the same code in both branches Login.java file. 235 | 236 | 3. Like a developer, modify 5th line from i<=10 TO i<=20 Login.java file of dev-1.2.5 branch. 237 | 238 | 4. Similarly, modify 5th line from i<=10 TO i<=30 in Login.java file of master branch. 239 | 240 | 5 Activate source branch i.e dev-1.2.5 using $ git checkout dev-1.2.5 241 | 242 | 6. Make sure that you are on master branch (Run $git checkout master) to be on the master. 243 | 244 | # You have master code now. Now merge the changes from dev_1.2.5 to master by running below command. 245 | Git merge command merges the changes from dev_1.2.5 to master. 
246 | 247 | $ git merge dev_1.2.5 248 | 249 | # Run git status command to list conflict file 250 | 251 | # 252 | Resolve the conflict be removing conflict markers (i.e <<, >> & == symbols)and commit the merge 253 | 254 | # Run git push command to move the merge to Remote. 255 | 256 | # What is Conflict: 257 | If two users modify the same file in source and target branches and if the same line has different content, git can't decide which user's code it has to take. we call this situation as conflict. 258 | 259 | # How do you resolve the conflict: 260 | - Open the conflict file--> remove conflict markers--> select the right content 261 | based on the discussion with developers 262 | - git add 263 | - git commit (after compilation and some z sanity testing) 264 | - git push 265 | 266 | Note: 267 | Use below command to find the owner of conflict code. How do you find the user who modified/added conflicted code? 268 | $ git blame Login.java 269 | 270 | 271 | # Difference between Git Merge & Rebase 272 | Merging: 273 | -- 274 | - Merging creates a new commit (called merge commit) in the target branch (master, if you are merging dev-1.2.5 into master) that combines the changes of both branches. The merge commit provides context about when the two branches are combined. This is very important information if you want to get the overall context of project in collaborative environment. 275 | 276 | See the visual representation of commit history before and after merging vs. rebasing. Check for Merge and rebase to understand the difference. 277 | $ git log master --merges --oneline 278 | 279 | - 280 | 281 | 282 | Rebase: 283 | - rebase is good for local branches as it give you the history of all the changes. 284 | 285 | Process: Rebasing dev-1.2.6 into main branch 286 | -------- 287 | 1. Clone the repository 288 | 2. Activate dev-1.2.6 branch using below command 289 | $ git checkout dev-1.2.6 290 | 3. Make sure that you are on main branch 291 | $ git checkout main 292 | 4. Rebase dev-1.2.6 into main using below command 293 | $ git rebase dev-1.2.6 294 | 5. Resolve if there are any conflicts 295 | 6. Commit rebase changes into local repo 296 | 7. Push the rebase changes to remote repository 297 | $ git push 298 | 299 | # 300 | git remote 301 | PULL 302 | FETCH 303 | 304 | PUSH 305 | CLONE 306 | 307 | Git fetch use cases: 308 | 1. If you want to know what changes are going to be pulled from remote before you update your local copy, use 'git fetch' command. 309 | - $ git fetch origin 310 | Output: 311 | ---- 312 | ---- 313 | 29423da..4a852c5 master -> origin/master 314 | [the above output meaning is... it's going to pull all the commits between 29423da..4a852c5. 315 | - Run below commands to see exact file names 316 | $ git show 4a852c5 317 | 318 | - If you want to see what changes are already fetched, use below command 319 | $ git log HEAD..origin/master 320 | 321 | 322 | 323 | 324 | -------------------------------------------------------------------------------- /2.Git/Git_Interview_Qns.txt: -------------------------------------------------------------------------------- 1 | 1.* What is Version Control System? 2 | 3 | 2.* Why we need any Version Control System (v.C.S) 4 | 5 | 3.* What is the difference between SVN and Git? 6 | 7 | 4.* Which VCS you prefer? SVN Or Git? Why? 8 | 9 | 5.* What are the advantages of Git over SVN? 10 | 11 | 6. Why we call Git as Distributed VCS? 12 | 13 | 7. Can you explain Git's End-to-End work flow? 14 | 15 | 8. How do you clone the code using git? 
16 | 17 | 9.* What is the difference between Commit & Push? 18 | 19 | 10.* What is the difference bet'n Push and Pull? 20 | 21 | 11. Can you explain Git architecture? 22 | 23 | 12.* What is the diff. bet'n Centralized and Distributed VCS. 24 | 25 | 13. Have you ever created Remote repositories in Git? How? 26 | 27 | 14. What happens if I delete .git folder? 28 | 29 | 15. How do you configure username, email and editor first time 30 | in Git? 31 | 32 | 16. Where Git stores configuration details? 33 | 34 | 17.* What is the advantage of STAGE in Git? 35 | 36 | 18. Git log options related questions 37 | --author 38 | --grep 39 | --oneline 40 | --since/until 41 | -n2 42 | 43 | 19. What is SHA-1? How Git uses this? 44 | 45 | 20.* I have a file modified in my Working directory. How do you 46 | show the content diff? 47 | 48 | 21.* How do you show the content diff of a file which is staged? 49 | 50 | 22. How do you delete and rename a file in Git? 51 | 52 | 23.** What is your branching stratogy? 53 | Can you explain your release process/Stratogy? 54 | 55 | 24.** What branching model you suggest for parellel development? 56 | 57 | 25. Developer fixes a bug. How do you take the change to 58 | production? 59 | 60 | 25.** Explain defferent branching models that you have worked-on. 61 | 62 | 26. Did you work on merging the code in Git? 63 | 64 | 27.* How do you merge the code in Git? 65 | 66 | 28.* What is merge? What is conflict? 67 | 68 | 29. When do we get conflict? 69 | 70 | 30.* What is fast-forward merge in Git? 71 | 72 | 31.* What is the difference between Merge and Rebase? 73 | 74 | 32.* How do you resolve the conflit in Git? 75 | 76 | 34.* What kind of conflicts you have seen? 77 | 78 | 35. Who resolves the conflicts? 79 | 80 | 36.** What is the difference between branch and tag? 81 | When do you create a branch and tag? 82 | 83 | 37. How do you create a branch and switch to that using single 84 | command? 85 | 86 | 38. What is HEAD pointer in Git? Where Git store HEAD info. 87 | 88 | 39. Can we store binary files in Git? 89 | 90 | 40. Can skip the staging? How? what are the caveats? 91 | 92 | 41.* How do you list files/folders modified as part of a commit? 93 | 94 | 42.* How do you ignore: ex: 95 | all files ending with .class 96 | all files having alphanumeric 97 | all log files but not build.log 98 | 99 | 43. How do you add ignore list for all users? 100 | 101 | 44.* What are the different files you ignore in your project? 102 | 103 | 45. How to remove a committed change? Or can we remove? 104 | $ git reset --hard HEAD~1 105 | $ git reset --soft HEAD~1 106 | 46. How do you lock the branch 107 | 108 | 47. How do you clone the code from a particular SHA? 109 | 110 | 48. How do you restore a deleted file? Or previous changes of 111 | a file? 112 | 113 | 49. How do you list the diff. of a file between two different 114 | branches. 115 | $ git diff dev_1.2.4...master -- LoginUser.java 116 | 117 | 50. How do you list the changes which are going to be fetched? 118 | method:1 119 | $ git fetch 120 | $ git log origin/master ^master 121 | method:2 122 | $ git fetch && git diff master origin/master --name-only 123 | 124 | 51. What is Git Stash? 125 | 126 | 52. How do you add a new remote to git? Or How do you attach 127 | your local repo with remote? 128 | 129 | 53. What is git ls-tree? 130 | git ls-tree --> Lists files committed as part of 131 | a commit. 132 | 133 | 54.How do you clone the repository with a single/particular branch? 
134 | 135 | $ git clone -b dev_1.2.4 --single-branch https://github.com/nageshvkn/flipkart899.git 136 | 137 | 55. 138 | How to compare two branches? 139 | $ git diff master..dev_1234 [compare local branches] 140 | $ git diff origin/master..origin/dev_1234 [compare remote branches] 141 | 142 | 143 | Qns: 144 | 1. How do you revert the code which is already committed in the repository? 145 | 2. User A has deleted the file in local repository. User B modified the same file and pushed to remote. Now, when user A push'es the file what will happen? 146 | 3. How do you make local repository as remote? 147 | 4. How do you push a new branch to remote repository? 148 | 5. How do you clone a single branch? 149 | 6. How do you search a commit based on time? 150 | 7. How do you clone a single folder / file? Or is it possible in Git? 151 | 152 | 7. How do you list the changes which are fetched? 153 | $ git diff origin/master 154 | 8. How do you list the changes before pull/fetch? 155 | $ git checkout master 156 | $ git fetch 157 | $ git diff origin/master 158 | $ git remote rm origin 159 | $ git remote add origin https://github.com/nageshvkn/jinglegurus.git 160 | 161 | 9. Push a particular commit to remote repository 162 | $ git push origin 7d662c54a4e0367c:master 163 | 164 | 165 | -------------------------------------------------------------------------------- /2.Git/Misc-Tags.Stashing-concepts: -------------------------------------------------------------------------------- 1 | # List all the tags 2 | $ git tag -l 3 | 4 | # 5 | Create a new lightweight tag 6 | $ git tag dev-123-release-tag 7 | 8 | # 9 | Push the tag to remote 10 | $ git push origin dev-123-release-tag 11 | 12 | # See more information about the tag 13 | $ git show dev-123-release-tag 14 | 15 | Create Annonatated tag 16 | # 17 | $ git tag -a dev-123-release-tag -m "creating it for NASA client" 18 | 19 | # See Tag information of Annotated tag 20 | $ git show dev-124-release-tag 21 | tag dev-124-release-tag 22 | Tagger: P Nageswara Rao 23 | Date: Wed May 20 09:17:53 2020 +0530 24 | creating it for NASA client 25 | 26 | # Creating a tag from particular commint ID (Old commit ID is - 838ccf6) 27 | $ git tag -a relase-123-tag -m "creating old release tag" 838ccf6 28 | $ git show relase-123-tag --> abserve that this tag has commits till '838ccf6'. 29 | 30 | # 31 | Deletng a tag 32 | $ git tag -d v1.4-lw 33 | 34 | # How do you checkout the code from a tag? 35 | $ git checkout v1.4-lw 36 | 37 | # Creating a branch from a tag 38 | $ git branch dev129 relase-123-tag 39 | 40 | 41 | Stashing 42 | ============ 43 | # Stashing the changes 44 | $ git stash 45 | 46 | # Un-stashing the changes 47 | $ $ git stash pop 48 | 49 | 50 | 51 | -------------------------------------------------------------------------------- /3.Maven/HL_Maven_DevOps_Doc: -------------------------------------------------------------------------------- 1 | MAVEN 2 | DAY-1: ============ 3 | # 4 | Java application Build and Deployment End-To-End Workflow. 5 | 6 | # 7 | Basics 8 | - Java program 9 | - manual compilation 10 | 11 | # 12 | Java Build Process 13 | A.java --> A.class --> A.jar ==> application.war 14 | 15 | 16 | # 17 | What is Maven? Why we need a build tool? 
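#
For contrast, here is the above A.java --> A.class --> A.jar build process done by hand with raw JDK commands — this is the repetitive work a build tool automates. (This is only a sketch; 'App.java' and the jar name are example names, not files from any specific project in these notes.)
$ javac App.java                              --> compiles App.java into App.class
$ jar cvf app-1.0.jar App.class               --> packages the compiled class file into a jar
A .war file is built the same way from a web-application directory layout. Maven runs these steps for the whole project (plus unit tests, dependency download and packaging) from a single pom.xml, which is covered in the sections below.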
18 | 19 | # 20 | Application Environments 21 | - Dev Environment 22 | - Test Environment 23 | - SIT (System Integration Testing) Environment 24 | - Regression Environment 25 | - Performance Environment 26 | - UAT (User Acceptance Testing) Environment 27 | - Pro-prod/Stage Environment 28 | - Production Environment 29 | 30 | 31 | #4 32 | Installation 33 | --- 34 | Maven is developed in Java. So to run maven we need to have JDK installed. 35 | 36 | 37 | JDK Installation 38 | -- 39 | $ sudo apt-get update 40 | $ sudo apt-get install openjdk-11-jdk 41 | 42 | 43 | JDK Un-Installation 44 | -- 45 | $ sudo apt-get remove openjdk* 46 | 47 | Maven Installation 48 | -- 49 | $ sudo apt-get update 50 | $ sudo apt-get install maven 51 | 52 | Maven Un-Installation 53 | -- 54 | $ sudo apt-get remove maven 55 | 56 | Check The installations: 57 | -- 58 | java -version 59 | mvn -version 60 | 61 | Check The installations path: 62 | -- 63 | which java 64 | which mvn 65 | 66 | 67 | #5 TEST YOUR KNOWLEDGE 68 | - Build and Deployment E2E work-flow and basics 69 | - What is compilation & why we compile the source code? 70 | - Build process of Java application (Packaging sequence) 71 | - What is Build 72 | - What is Deployment 73 | - Environments 74 | - Dev, QA & DevOps teams Interaction and Collaboration. 75 | 76 | 77 | DAY-2: 78 | #6 79 | Maven's standard project layout 80 | ================================= 81 | Project Structure: 82 | ----------------- 83 | java projects which are built by maven, ideally follows below project folder structure. 84 | 85 | flipkart 86 | | 87 | src pom.xml 88 | | 89 | main--------- test 90 | | | 91 | java java 92 | | | 93 | (group.Id) (group.Id) 94 | | | 95 | App.java AppTest.java 96 | 97 | 98 | flipkart - is called "Project name" / "ArtifactID" 99 | src - Source folder which contains the 100 | application source code 101 | main - Contains application's main functional code 102 | test - Contains application's unit testing code 103 | pom.xml - Maven's build file using which we can 104 | configure build steps such as 105 | compilation, test runs, jar/war creation, 106 | deployments...etc. 107 | 108 | 109 | 110 | #7 111 | Building a Project using Maven: 112 | ----------------------- 113 | 114 | # Install git 115 | $ sudo apt-get update 116 | $ sudo apt-get install git 117 | $ git --version --> verify git installation. 118 | 119 | # 120 | Clone the code from Git Or create your own Project using below Maven command 121 | Clone: 122 | $ git clone https://github.com/nageshvkn/flipkart.git 123 | 124 | # 125 | Building the project using Maven. Use below command. 126 | $ mvn install 127 | 128 | 129 | $ mvn install - command executes below "build life cycle phases" automatically. 130 | 131 | - compile 132 | during this phase, maven compiles "main" java code. 133 | - testCompile 134 | during this phase, maven compiles "test" java code 135 | - test 136 | during this target, Maven runs the test cases and generates test reports. 137 | - package 138 | this phase creates jar/war based not he configuration that are defined in pom.xml file 139 | - install 140 | this phase, will copy built artifacts i.e jar/war file into local repository $USER_HOME/.m2 141 | folder. 142 | 143 | 144 | # 145 | Verify Built Artifacts: 146 | -------------------------- 147 | Go to "target" folder and observe below. 
148 | 149 | target 150 | | 151 | classes test-classes surefire-reports jar/war file 152 | 153 | 154 | classes: directory contains compiled class files 155 | of main source code 156 | test-classes: directory contains compiled class 157 | files of test source code 158 | 159 | surefire-reports: contains test reports. 160 | 161 | flipkart-1.0-SNAPSHOT.jar: jar file of the main code 162 | 163 | Note: 164 | first time when we run 'mvn install' command, Maven downloads all missing dependencies into .m2 from maven's central repository. 165 | So, we need to have internet when we run 'mvn install' command first time. 166 | 167 | 168 | #8 169 | Understanding pom.xml file structure 170 | 171 | 172 | #9 173 | Artifact path in local repository .m2 174 | $USER_HOME/.m2/repository/groupId/artifactId/version/jar-OR-war-file 175 | 176 | - Package naming convention: 177 | artifactId-version.jar/war 178 | 179 | 180 | DAY-3: 181 | #11 182 | PROJECT-02: WEB Application Build and Deployment. 183 | ------------ 184 | Goal: 185 | - Perform end-to-end build and deployment for iflipkart web application 186 | - Handling build and deployment for any web application 187 | 188 | Steps: 189 | ------ 190 | # Install git 191 | $ sudo apt-get update 192 | $ sudo apt-get install git 193 | $ git --version --> verify git installation 194 | 195 | # 196 | Clone the code from Git 197 | $ git clone https://github.com/nageshvkn/iflipkart.git 198 | 199 | # 200 | Building the project using Maven. Use below command. 201 | $ mvn install 202 | 203 | # 204 | Check final artifact i.e flipkart.war file in target directory 205 | $ ls target 206 | 207 | # 208 | Set-up tomcat for deployment 209 | - Download tomcat *.tar.gz and JDK. 210 | - Extract to your favouriate location 211 | - Make sure that java is installed in the machine 212 | 213 | # 214 | Deploy flipkart.war into tomcat deployment path 215 | $ cp target/flipkart.war $TOMCAT_HOME/webapps 216 | 217 | # 218 | Start tomcat server 219 | $ cd $TOMCAT_HOME/bin 220 | $ ./startup.sh 221 | 222 | # 223 | Launch application with below URL from same machine's browser as IP is not public and tomcat is installed locally. 224 | http://localhost:8080/flipkart 225 | 226 | Syntax: 227 | [http://TomcatServerIP:Port/WarFilename] 228 | 229 | 230 | #12 231 | Project:3(Real-time End-to-End Build and Deployment Process) 232 | ================================= 233 | Goals: 234 | - Building the War for large scale real-time kinda application 235 | - Learning Deployments with a dedicated tomcat Server 236 | 237 | 238 | Steps: 239 | ---------- 240 | # 241 | Install GIT 242 | $ sudo apt-get update 243 | $ sudo apt-get install git 244 | 245 | # Clone gamutkart application source code into the build server from below "gamutkart" github repository. 246 | $ git clone https://github.com/nageshvkn/gamutkart2.git 247 | 248 | 3. 249 | Build "gamutkart" application using below command. 250 | $ mvn install 251 | 252 | 4. Make sure that gamutkart.war file is created in 'target' directory 253 | 254 | 5. Install JDK using below commands 255 | $ sudo apt-get update 256 | $ sudo apt-get install openjdk-11-jdk 257 | 258 | 6. Download tomcat from below URL and extract to some location 259 | https://tomcat.apache.org/download-90.cgi 260 | 261 | 7. Copy/deploy gamutkart.war into $TOMCAT_HOME/webapps 262 | 263 | 8. Start tomcat server using below command 264 | $ cd TOMCAT_HOME/bin 265 | $ ./startup.sh 266 | 267 | 9. 
Launch application using below URL 268 | 269 | http://localhost:8080/gamutkart 270 | 271 | URL Syntax: http://IP:8080/gamutkart 272 | 273 | # : 274 | In case there is any issue in the application, errors will be logged in 275 | "$TOMCAT_HOME/logs/catalina.2017-03-24.log" file. 276 | 277 | We can check this file and if there are any errors/exceptions, we provide this information to developers. 278 | 279 | # 280 | Note: If you want to change the port number, Go to below file and change the port number where you see something like this. ( port="8080" protocol="HTTP/1.1") 281 | $ vim $TOMCAT_HOME/conf/server.xml 282 | 283 | 284 | 285 | -------------------------------------------------------------------------------- /3.Maven/Interview_Qns: -------------------------------------------------------------------------------- 1 | INTERVIEW QUESTION: 2 | ===================== 3 | 1. 4 | What are the differences between ANT and Maven 5 | 6 | 2. 7 | How do you create a jar/war file in Maven? 8 | 9 | 3. 10 | What is the difference between mvn deploy and install? 11 | 12 | 4. 13 | Can you explain Maven's lifecycle? 14 | - init 15 | - validate 16 | - compile 17 | - test 18 | - package 19 | - install 20 | [Give one line explanation about each phase during your interview] 21 | 22 | 5. 23 | What is Maven? Why we use Maven? 24 | 25 | 6. 26 | While building the project, you get an error saying some jar file is missing. how do you add that? 27 | 28 | 7. 29 | - What is groupId, artifactId, and Version in Maven? 30 | - What are the Maven co-ordinates? 31 | - What are the mandatory attributes in pom.xml. 32 | 33 | 8. 34 | What is the difference between 1.0-SNAPSHOT(SNAPSHOT) version and 1.0-RELEASE(RELEASE) version. 35 | 36 | 9. 37 | What is the default naming convention of an artifacts(jar/war) in Maven? 38 | 39 | 10. 40 | How do you generate a site in Maven? 41 | 42 | 11. 43 | How do you run a clean build in Maven? 44 | 45 | 12. 46 | how do you add a dependency in Maven pom.xml? 47 | 48 | 13. 49 | what is a plugin? 50 | 51 | 14. 52 | - What is the default path of artifacts in local repository? 53 | - Where maven stores the built artifacts? 54 | 55 | 15. 56 | How do you create a project in the Maven? 57 | 58 | 16. 59 | What are the different binary repositoris we have? Which one you are using for your project? 60 | 61 | 17. 62 | - How do you customize the name of your artifact(jar/war) in Maven? 63 | - How do you change the name of built jar/war file in maven? what changes you need to do in pom.xml file? 64 | 65 | 18. 66 | What do you mean by transitive dependency in Maven and can you explain how maven resolves it? 67 | 68 | 19. 69 | - What is the significance of scope parameter in dependency section? 70 | - What are the different scope's we have in Maven? 71 | 72 | 73 | -------------------------------------------------------------------------------- /3.Maven/antVsmaven: -------------------------------------------------------------------------------- 1 | ANT Vs Maven 2 | ============= 3 | ANT: 4 | 0 5 | ====== 6 | 1.Ant is a low level build automation tool. 7 | 8 | 2. Need more time to automate java build and deployment process 9 | 10 | 3. ANT doesn't have automatic dependency resolution feature. 11 | 12 | 4. ANT doesn't have anything called convention over configuration. i.e User is free to create any directory structure. ex: since there is no standard project structure, it may take more time for a developer to understand the project(more wramp-up time.) 13 | 14 | 5. 15 | Ant is a build tool. 16 | 17 | 6. 
18 | ANT is very simple and suitable if we have to do lot of customizations. I think that's why ANT is also used in lot of projects. 19 | 20 | 7. 21 | Using ANT we can't generate a site for our project with administration information by default. 22 | 23 | 8. 24 | No default mechanism to reuse built artifacts. 25 | 26 | 9. 27 | Troubleshooting is very easy as we know better about our build code.(Since we write everything) 28 | 29 | 30 | 31 | Maven: 32 | ====== 33 | 1. Maven is a highlevel build tool. 34 | 35 | 2. Within a very less time, we can automate build and deployment process. 36 | 37 | 3. Maven has automatic dependency feature. in the sense based on the pom.xml configurations, it can automatically download the dependencies from binary management repositories and add in the classpath during the compilation. 38 | It can also handle trasitive dependencies. 39 | 40 | 4. 41 | Maven has convention over configuration mechanism for most of the automation requirements. 42 | ex: standard project structure, 43 | automatic dependency resolution, 44 | predefined build life cycle 45 | 46 | 5. 47 | Maven is more than a build tool i.e maven is "project management" tool. 48 | ex: using maven, we can generate site for our project which includes, developers workign on a particular project, dependency list, test case statistics, automation graphs. 49 | 50 | 6. Since Maven is high level tool, may not be suitable if we want to do more customizations. 51 | As Maven doesn't have easier documentation(relative comparison with ANT), It's little dificult to understand Maven. Especially if we have larger pom files and more customizations. 52 | 53 | 7. Using Maven, we can generate a site for our project with administration information. So, it can act as a project management tool as well. 54 | 55 | 8. Artifacts can be uploaded to binary repository management tools and shared across the projects/modules. 56 | 57 | 9. Troubleshooting may become nightmare sometimes if we are not aware of maven automatic and highlevel concepts. 58 | 59 | 60 | 61 | -------------------------------------------------------------------------------- /3.Maven/maven by example.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wicultydotcom/devops-class-notes/c4eb6b6a719825f5e5fcf331e37019e7abb9ae36/3.Maven/maven by example.pdf -------------------------------------------------------------------------------- /4.Docker/Docker: -------------------------------------------------------------------------------- 1 | Docker - The Container Virtualisation Tool 2 | ================================================== 3 | Day-1: 4 | ############### 5 | # 6 | Diff between.. 7 | - Physical server 8 | - Virtual machine 9 | - Docker container 10 | 11 | VM, Docker, usage in DevOps. 12 | 13 | # 14 | what is docker? why docker? 15 | 16 | 17 | # 18 | Supported Platforms - 19 | - Docker is supported on 20 | - Linux platforms 21 | Ubuntu, RHEL, CentOs ..etc. 22 | * - Windows 23 | - OS X 24 | 25 | - Cloud Platforms 26 | Amazon EC2 27 | Rackspace Cloud 28 | Google compute Engine..etc. 29 | Azure 30 | 31 | Note: 32 | Linux containers can be created on Windows and OS X. 33 | HOW?- Windows & Mac Docker installers contain a tiny Linux virtual machine. 34 | So, Docker creates linux container on top of this tiny Linux VM. 
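A quick way to see this for yourself once Docker is installed (installation steps are below; these are standard Docker CLI commands and the output described is what you would typically expect):
$ docker version --format '{{.Server.Os}}'     --> prints 'linux', even when the host is Windows or Mac
$ docker info --format '{{.KernelVersion}}'    --> shows the Linux kernel version the containers actually run on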
35 | 36 | Requirements: 37 | - 64-bit architecture 38 | - Linux 3.8 or later Kernel versions 39 | 40 | # 41 | Requirements Check: 42 | - Check Kernel version 43 | $ uname -a 44 | $ uname -r 45 | 46 | - Check OS name: 47 | $ lsb_release -a / -cs 48 | $ cat /etc/os-release 49 | 50 | 51 | Installation Steps: 52 | ===================== 53 | # To install the Docker, Run below commands 54 | Reference: https://docs.docker.com/engine/install/ubuntu 55 | 56 | $ curl -fsSL https://get.docker.com -o get-docker.sh 57 | $ chmod +x get-docker.sh 58 | $ ./get-docker.sh 59 | 60 | # 61 | Installation Check 62 | sudo docker version 63 | 64 | # 65 | If you would like to use Docker as a non-root user, you should add your user to the “docker” group with below command. 66 | 67 | sudo usermod -aG docker 68 | 69 | check if the user is added to the group 70 | $ cat /etc/group | grep docker 71 | docker:x:998:wiculty 72 | 73 | Note - restart the machine once the user is added to the group. 74 | 75 | # 76 | uninstall docker: 77 | $ sudo apt-get purge docker-ce docker-ce-cli containerd.io 78 | 79 | 80 | $ sudo rm -rf /var/lib/docker (Removes all containers and images) 81 | 82 | 83 | DAY-2 84 | # 85 | Managing docker containers 86 | =============================== 87 | # Create a new container using below command 88 | 89 | $ docker run -it ubuntu /bin/bash 90 | 91 | 92 | # Inspect the new container.. Let's believe that it's separate machine 93 | 1. 94 | hostname 95 | 2. 96 | cat /etc/hosts 97 | 98 | 3. hostname -i 99 | 100 | 5. ps -ef 101 | 102 | 6. cd / && pwd && ls 103 | 104 | 105 | # List all containers(stopped and running) 106 | $ docker cotainer ls -a 107 | $ docker ps -a 108 | 109 | # List all containers (running & stopped) 110 | "docker ps -a" command output shows 111 | - Image name from which container is created 112 | - ID - container can be identified using short UUID, longer UUID Or name. 113 | - Status of the container (Up / Exited) 114 | - Name of the container 115 | 116 | # List running containers only 117 | $ sudo docker container ls 118 | $ docker ps 119 | 120 | # List given no. of containers 121 | $ docker ps -n1 122 | 123 | # show the last container which you have created (stopped/running) 124 | docker container ls -l 125 | Docker ps -l 126 | 127 | # List all images in the host machine 128 | 129 | QN: 130 | --- 131 | What is docker Engine 132 | 133 | 134 | # Remove image from the host machine 135 | docker rmi 136 | 137 | # 138 | Shotdown a container 139 | "exit" to stop the container 140 | 141 | # Deleting a container by giving it's name or ID 142 | $ docker rm ID/name 143 | 144 | # Deleting an image by giving it's name or ID 145 | $ docker rmi ID/name 146 | 147 | 148 | # Login to a stopped container 149 | $ docker start 150 | $ docker attach 151 | 152 | # You can also Login into a stopped container using below single command 153 | $ docker start -ai 154 | 155 | # Shortcut Keys 156 | Ctrl (press & host)+ p + q - push a running container in background mode. 157 | Ctrl + d - short cut to 'stop' a container. 158 | 159 | 160 | # Naming the container 161 | docker run --name tomcat-server -it ubuntu /bin/bash 162 | 163 | Note: Two containers can't have the same name. 164 | 165 | # Rename a container 166 | $ docker rename db-server3 db-server-name3 167 | 168 | 169 | 170 | # SSH setup for containers 171 | By default, containers won't be having SSH installation. But, SSH almost mandatory in order to connect to a remote machine of if remote machine wants to connect to the container. 
Let's setup SSH in the container. 172 | 173 | - Create a new container 174 | $ docker run -it ubuntu /bin/bash [run ssh command. It's missing!] 175 | - Install SSH in the container 176 | $ apt-get update 177 | $ apt-get install ssh [This installs both SSH client and Server] 178 | 179 | - Start the SSH server 180 | $ service ssh start (status/stop/restart) 181 | 182 | - Create an user and set up password 183 | $ useradd -m -d /home/wiculty -s /bin/bash wiculty 184 | $ passwd wiculty 185 | 186 | - Connect to the container using below command from the host machine. 187 | $ ssh wiculty@172.17.0.3 188 | 189 | ** 190 | --> [ putty ] 191 | 192 | 193 | # List Stopped containers only 194 | $ docker ps -a -f status=exited (Where Status can be exited/running) 195 | 196 | 197 | # Delete all (running/stoped) containers at once 198 | $ docker rm -f $(docker container ls -a -q) 199 | $ docker rm -f $(docker ps -a -q) 200 | 201 | # Delete running containers only 202 | $ docker rm -f $(docker container ls -q) 203 | $ docker rm -f $(docker ps -q) 204 | 205 | 206 | # list stopped containers only 207 | $ docker container ls -a -f status=exited 208 | 209 | # Starting a stopped container 210 | sudo docker start gamut 211 | sudo docker stop 212 | sudo docker restart gamut 213 | 214 | 215 | # Run a linux command remotely in a container 216 | $ docker exec -it tomcat-server ps -ef 217 | 218 | # 219 | Get an independent terminal from a container remotely (from Host) 220 | $ docker exec -it tomcat-server /bin/bash 221 | 222 | # 223 | How do you login into a container with a specific user (other than root) 224 | First create an user (harry) in the container. Make sure container is running and use below command to login as a specific user. 225 | $ docker exec -it --user harry tomcat-server /bin/bash 226 | 227 | # Create a container in a background mode / detached mode ( without terminal access ) 228 | $ docker run -it -d ubuntu /bin/bash 229 | 230 | 231 | STATS: 232 | ========== 233 | # Display usage statistics of a container 234 | $ docker stats 235 | $ docker stats --no-stream 236 | $ docker stats --no-stream --all 237 | 238 | $ docker stats --no-stream --format {{.MemUsage}} sleepy_shannon 239 | $ docker stats --no-stream --format {{.CPUPerc}} sleepy_shannon 240 | 241 | # 242 | Allocating memory for a container (below command allocates 1 GB RAM) 243 | $ docker run -it --name tomcat-server -m 1g ubuntu /bin/bash 244 | $ docker run -it --name tomcat-server -m 512m ubuntu /bin/bash 245 | 246 | # 247 | Updating memory of an existing container 248 | $ docker update -m 2048m tomcat-server 249 | 250 | # CPU Allocation 251 | $ docker run -it --cpus=2 --name jenkins-server ubuntu /bin/bash 252 | $ docker update --cpus=2 jenkins-server 253 | 254 | 255 | DAY-4: 256 | # Docker Images 257 | ================= 258 | Agenda: 259 | - Understand docker Images and application containerisation 260 | - Advantages of Docker Images 261 | - Create docker Image for your application 262 | - Share/publish your Image 263 | - Examine Docker repositories that hold images 264 | 265 | - Docker images are the building blocks for creating container 266 | - From images, we launch containers. 267 | 268 | # Advantages of Images in Build and Deployments OR DevOps world! 269 | 270 | a. Works In my machine problem. 271 | *b. Developers can quickly setup local development environments as we can include all dependencies in the image and create containers. 272 | *c. Is there an Issue? don't spend time to troubleshoot it. 
Just throw the machine which has the issue away and create new instantly. 273 | d. Auto scale your environment very easily. 274 | e. No need to live with complex, redundant configurations. You can create disposable environments. 275 | f. You can leverage/utilises local machines's computing power when you need to test your code on multiple machines, instead of waiting for DevOps team to supply or wasting extra computing power. you already have 500GB, 16GB RAM right? are you utilising it? NO! then why again you need extra hardware? 276 | g. You can create new environments within few minutes (ex: create new performance testing environment within few minutes before the release) 277 | 278 | 279 | # Listing docker images 280 | - $ docker image ls 281 | 282 | - Images live in '/var/lib/docker/image/overlay2/imagedb/content/sha256' 283 | - Containers live in '/var/lib/docker/containers' 284 | 285 | 286 | # Building our own Image 287 | We have 2 Ways to create docker image: 288 | 1. docker "commit" 289 | 2. docker "build" cmd & Dockerfile 290 | 291 | # Creating docker image using "docker commit" command 292 | =========================================================== 293 | PROJECT-1: 294 | Goal: Create the docker image to ship the application code along with nginx configurations. 295 | 296 | - Create container 297 | $ docker run -it --name Wiculty-container ubuntu /bin/bash 298 | x 299 | - Install nginx manually 300 | $ apt-get update 301 | $ apt-get install -y nginx 302 | 303 | - Deploy / copy some application code into '/var/www/html' (this is deployment path for nginx) 304 | ex: create index.html with below code in '/var/www/html' 305 | ======= 306 | 307 | 308 |

<html>
<body>
<h1> Wiculty Learning Solutions </h1>
</body>
</html>

309 | 310 | 311 | ======= 312 | 313 | - Create docker image from the container (OR) 314 | - Convert docker container as docker image.. 315 | $ docker commit Wiculty-container nageshvkn/wiculty-img 316 | Syntax: $ docker commit 317 | 318 | - Check if image has been created 319 | $ docker image ls 320 | 321 | - Push the newly created image to docker hub 322 | - Create an account in 'https://hub.docker.com/'. 323 | $ docker login 324 | $ docker push nageshvkn/nginx-img 325 | 326 | Note: Now you have successfully containerised your application and published the iamge to DockerHub. Customers can spin millions of new containers using the above docker image. 327 | 328 | Note: To verify your image as an user, create a container as shown below. Remove existing image that you have created so that you can abserve image download from Docker hub clearly. (to remove the image.. $ docker rmi nageshvkn/nginx-img) 329 | $ docker run -it nageshvkn/wiculty-img /bin/bash 330 | 331 | - Launch the application to test if application is configured along with dependencies. 332 | Note: start Nginx server manually using 'service nginx start' 333 | http://172.17.0.2:80 334 | 335 | Note: 336 | start/stop/restart nginx server: 337 | ===========\===========\=========== 338 | $ sudo service nginx start 339 | $ sudo service nginx stop 340 | $ sudo service nginx restart 341 | $ sudo service nginx status 342 | 343 | Note: 344 | uninstall nginx using below comamnd 345 | $ sudo apt-get purge nginx nginx-common 346 | 347 | 348 | PROJECT-2: 349 | ============= 350 | DAY-6: 351 | # Creating docker image using "docker build" command 352 | ================= 353 | - mkdir wiculty 354 | - cd wiculty 355 | - touch Dockerfile 356 | 357 | --> 'wiculty' directory is called "context" or "build context". 358 | It contains the code, files or other data that you want to include in the 359 | image. 360 | 361 | - Write Dokckerfile: 362 | FROM ubuntu:16.04 363 | MAINTAINER "info@wiculty.com" 364 | RUN apt-get update 365 | RUN apt-get install -y nginx 366 | COPY index.html /var/www/html 367 | ENTRYPOINT service nginx start && bash 368 | 369 | index.html: 370 | ======= 371 | 372 | 373 |

<html>
<body>
<h1> Wiculty Learning Solutions </h1>
</body>
</html>

374 | 375 | 376 | 377 | # Building docker image: 378 | $ cd wiculty 379 | $ docker build -t "nageshvkn/wiculty-img" . 380 | 381 | Note: Building the image if 'Dockerfile' has different name. 382 | Use "-f " option. 383 | Example: $ docker build -f MyDockerfile -t="nageshvkn/wiculty-img" . 384 | 385 | # Listing docker image 386 | $ docker image ls 387 | 388 | # Create an account in docker hub 389 | 390 | # Pushing custom images to docker repository 391 | $ docker login 392 | $ docker push nageshvkn/wiculty-image 393 | 394 | # 395 | Testing Image 396 | 1. Remove the local image so that it will be downloaded from Docker Hub. 397 | $ docker rmi nageshvkn/wiculty-image (OR) 398 | $ docker image rm nageshvkn/wiculty-image 399 | 400 | 2. Creating a new container from our image 401 | $ docker run -it --name wiculty-container nageshvkn/wiculty-img /bin/bash 402 | 403 | Note: start the nginx server manually as it's not fixed yet. It will be fixed in the next topic. 404 | 405 | 3. Verify if nginx is running from the container. 406 | $ http://172.17.0.2:80 407 | 408 | 409 | # Images Layers & Build Cache 410 | 411 | 412 | # 413 | User Images Syntax: 414 | nageshvkn/wiculty-img (username/imagename) 415 | 416 | Official Images Syntax: 417 | ubuntu 418 | 419 | # Specifying Image via tags 420 | - ubuntu:20.04 421 | ubuntu- is image name 422 | 20.04 - is called tag 423 | 424 | 425 | # Deleting an Image 426 | - docker rmi nageshvkn/wiculty-img 427 | 428 | # Deleting all Images 429 | - docker rmi $(docker images -q) 430 | 431 | 432 | Volumes: 433 | =============== 434 | # List all volumes available in host machine 435 | $ docker volume ls 436 | 437 | # Create a new Named Volume 438 | $ docker volume create deployment_code 439 | 440 | # Check Mount point directory 441 | $ docker inspect deployment_code 442 | 443 | # Mount Volume(deployment_code) to a new container 444 | $ docker run -it -v deployment_code:/deployment_code ubuntu:16.04 /bin/bash 445 | 446 | 447 | # Create 'Read-only' Volumes 448 | $ docker run -it -v deployment_code:/deployment_code:ro ubuntu:16.04 /bin/bash 449 | 450 | # Removing a Volume 451 | $ docker volume rm deployment_code 452 | 453 | # List down all containers which are using a particular volume 454 | $ docker ps -a --filter volume=deployment_code 455 | 456 | 457 | 458 | # Manual Gamutkart Application Deployment Process 459 | 460 | DAY-8: 461 | Gamutkart Real-time application 462 | ============================ 463 | Agenda: 464 | How do you containerize or dockerize your application? 465 | Can you explain how you have implememnted Docker for your application? 466 | 467 | 468 | 1. Clone the source code from Git or any other V.C.S 469 | $ git clone https://github.com/nageshvkn/gamutkart2.git 470 | 471 | 2. Build the code using your favourate build tool Maven/ANT 472 | $ mvn install 473 | 474 | 3. Create docker image for the application(gamutkart2) with 475 | war file, tomcat,jdk...etc using below Dockerfile. 476 | Dockerfile: 477 | ------------- 478 | FROM ubuntu:16.04 479 | MAINTAINER "info@gamutgurus.com" 480 | RUN apt-get update 481 | RUN apt-get install -y openjdk-8-jdk 482 | ENV JAVA_HOME /usr 483 | ADD apache-tomcat-8.5.38.tar.gz /root 484 | COPY target/gamutkart.war /root/apache-tomcat-8.5.38/webapps 485 | ENTRYPOINT /root/apache-tomcat-8.5.38/bin/startup.sh && bash 486 | 487 | 4. Build the Image using below command 488 | $ docker build -t "nageshvkn/gamutkart-img" . 489 | 490 | 4A. Push the image to docker hub. 491 | $ docker push nageshvkn/gamutkart-img 492 | 493 | 5. 
Run below shell script to create an environment with give no. containers 494 | $ ./create-env.sh 10 495 | 496 | 6. Observer all containers created using above script ($ docker ps) 497 | 498 | 7. Launch the gamutkart application from all containers. 499 | $ http://IP:8080/gamutkart 500 | 501 | 502 | # Images Layers & Build Cache 503 | 504 | 505 | # 506 | Docker and Jenkins Integration 507 | 1. Create a new Free style project in Jenkins 508 | 2. Configure Git, Maven, Docker Image creation & Environment creation using below. 509 | - Configure Git URL under "Source code Management" 510 | - Provide Maven's 'install' command under build section 511 | - Open 'Execute shell' and type below commands for creating Image and Environment 512 | - docker build -t "nageshvkn/gamutkart-img" . (Note: don't forget "." at the end) 513 | - ./create-env.sh 10 514 | 515 | 516 | # Container creation process - Deep dive with layers 517 | 518 | 519 | Qns: 520 | - Multistage builds 521 | - Docker compose 522 | 523 | -------------------------------------------------------------------------------- /4.Docker/TheDockerBook_sample.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wicultydotcom/devops-class-notes/c4eb6b6a719825f5e5fcf331e37019e7abb9ae36/4.Docker/TheDockerBook_sample.pdf -------------------------------------------------------------------------------- /4.Docker/ansible-for-devops.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wicultydotcom/devops-class-notes/c4eb6b6a719825f5e5fcf331e37019e7abb9ae36/4.Docker/ansible-for-devops.pdf -------------------------------------------------------------------------------- /5.Jenkins/Configure-SSL.txt: -------------------------------------------------------------------------------- 1 | https://dzone.com/articles/setting-ssl-tomcat-5-minutes 2 | -------------------------------------------------------------------------------- /5.Jenkins/Gamut_Jenkins_Interview_Qns.txt: -------------------------------------------------------------------------------- 1 | GAMUT GURUS TECHNOLOGIES: 2 | Office: 944897 1000 3 | 944897 2000 4 | =============================================== 5 | 6 | 1. What is continuos integration? 7 | 8 | - C.I integration is nothing but continous compilation, testing and deployment. 9 | - C.I is a process which monitors the new changes coming into V.C.S like Git, checkouts the source code, builds the changef, runs the test case to test the change and deploys it to given environment automatically (seemlessly) without any manual intervention. 10 | 11 | 2. What is continuous delivery? 12 | 13 | 14 | 3. 15 | #How do you change Jenkins HOME directory?? 16 | 17 | Go to $USER_HOME/.bashrc and add below ENV variable. 18 | ======== 19 | export JENKINS_HOME=/home/praveen/jen/.jenkins 20 | 21 | Jenkins installation: 22 | ===================== 23 | Approach:1 24 | 1. download JDK and setup JAVA_HOME Environment variable as shown below 25 | export JAVA_HOME=/path/to/extracted/java/without/bin/dir 26 | export PATH=$JAVA_HOME/bin:$PATH 27 | 28 | 2. download Tomcat 29 | 3. download jenkins.war 30 | 4. copy jenkins.war to $TOMCAT_HOME/webapps 31 | [deploying jenkins to tomcat] 32 | 5. start Tomcat server using $TOMCAT_HOME/bin/startup.sh 33 | 6. Launch Jenkins using below URL 34 | http://localhost:8080/jenkins 35 | 7. 
command to shutdown tomcat: 36 | $TOMCAT_HOME/bin/shutdown.sh 37 | 38 | Approach:2 39 | # Running jenkins direclty from command line. not suitable for production jenkins. 40 | Jenkins.war comes with a light-weight server called "jetty". below command runs jenkins in jetty server. 41 | 42 | - $ java -jar jenkins.war 43 | you can launch jenkins using below URL: 44 | http://localhost:8080 45 | 46 | Approach:3 47 | sudo apt-get install jenkins 48 | 49 | 50 | 2. Why we need continous Integration? 51 | Refer c.i feature. 52 | 53 | 3. Have you created jenkins job or just worked on existing jenkins environment? 54 | - How do you create a new build/jenkins job? 55 | 56 | 4. How do you install jenkins? What are the different ways? 57 | 1. java -jar jenkins.war [http://localhost:8080] [uses jetty server] 58 | 2. yum install jenkins[RHEL] OR apt-get install jenkins 59 | [Ubuntu] 60 | 3. Deploy jenkins.war in tomcat like any other web applications. [Production approach] 61 | 62 | 5. How do you setup a crontab in linux? 63 | Note: Refer Google and setup a simple crontab. 64 | - creating a new crontab. 65 | crontab -e 66 | ==== 67 | * * * * * `command/any-script` 68 | Minute Hour DOM Month DOW 69 | 0-59 0-23 1-31 1-12 0-7 70 | ==== 71 | - list all crontabs 72 | crontab -l 73 | - remove all crontabs 74 | crontab -r 75 | 76 | 6. How do you migrate jenkins from one server to another? 77 | - Install Jenkins in the new machine. 78 | - Copy .jenkins to new machine's $USER_HOME dir. 79 | - Start jenkins server in the new machine. 80 | [ Note: Usually, we don't copy workspace from old jenkins server to new as it contains large size of source code.] 81 | [command to exclude workspace. 82 | tar --exclude=workspace -cvf jenkins.tar .jenkins] 83 | 84 | 7. How do you start/stop jenkins? 85 | 86 | 8. Jenkins is running some jobs and I want to restart it. How do you restart? 87 | - How do you restart the jenkins without interrupting running jobs? 88 | 89 | 9. What is the default port number of jenkins? 90 | 91 | 10. How do you change the port number for Jenkins? 92 | Go to $TOMCAT_HOME/conf/server.xml 93 | Change port number in this line: [ port="8080" protocol="HTTP/1.1" ] 94 | 95 | p 96 | 11. How do you check Jenkins logs? 97 | How do you check your Application logs? 98 | $TOMCAT_HOME/logs/catalina.2017-08-08.log 99 | 100 | 101 | 12. What challenges you faced while working with Jenkins? 102 | - What are the common issues you see in Jenkins? 103 | - compilation 104 | - deployment 105 | - jdk or maven installation 106 | - disk space 107 | - port change 108 | - slave node configuration issues 109 | 110 | 111 | 14. Where does Jenkins store global and job related configurations? 112 | Global configurations: $JENKINS_HOME/.jenkins/config.xml 113 | Job configurations: $JENKINS_HOME/.jenkins/jobs/job_name/config.xml 114 | 115 | 116 | 15. Where Jenkins stores all plugins data? 117 | $JENKINS_HOME/.jenkins/plugins 118 | 119 | 16. I want to modify JDK version from 1.7 to 1.8 in 1000 jobs? How do you do it? 120 | Jenkins stores all configuration data in .jenkins/jobs/ 121 | job_name/config.xml 122 | we can find 1.7 in all config.xml and replace it with 123 | 1.8 using some linux command or small script. 124 | Then to load the changes, we need to run "Reload configurations from disk" 125 | 126 | 17. How do you setup build and deployment for your project? 
127 | - configure GIT URL 128 | - configure maven build command i.e 'mvn install" 129 | - go to post build section and call deploy.sh 130 | 131 | Deployment scritpt steps: 132 | - before copying the war file, our script checks 133 | for diskspace. 134 | - copy war file to all tomcat servers in an environment(copy using scp) 135 | - shutdown the tomcat 136 | - start the tomcat 137 | 138 | 18. How many builds you store in your jenkins. 139 | How do you rotate logs for your Jenkins? 140 | 141 | 19. How do you backup your jenkins data? 142 | 143 | 20. How do you configure different jenkins jobs to run with different JDKs? 144 | 145 | 21. What is the difference between "Build periodically" and "Poll scm"? 146 | 147 | 22. How do you configure security for your jenkins? Are you using LDAP for authentication? 148 | 149 | 23. What is matrix based security? How do you provide access to your users? 150 | 151 | 24. What is a plugin? 152 | What plugins you installed? Name few plugins which you have used? 153 | ==== 154 | 1. Thin Backup - 155 | Using Cron tab style/notion, we can schedule the backups for jenkins. We usually take backup for Jenkins home directory. Once we install this plugin, It adds " ThinBackup" section to "manage jenkins". 156 | 157 | 158 | 3. Job Configuration History plugin: 159 | we can check job configuration history. for example- 160 | who deleted a job or configuration 161 | who modifed jdk version 162 | who modified build trigger schedule 163 | user addition/deletion..etc. 164 | Once we install this plugin, we can see who has done what or who made what changes. It records the history of all user's modifications. 165 | 166 | 167 | 4. Shelve project: 168 | If we have large size of build log files, un-used jenkins jobs, Jenkins will become slow(as it has to scan all projects for generating reports). So, It's good idea to archive any un-used jenkins jobs so that jenkins don't scan the project. since this plugin archives the projects, we can restore them if we want in the future. 169 | 170 | 5. Green balls plugin 171 | 172 | 25. What are the different ways of installing a plugin? 173 | 174 | 26. What is "Reload configurations from the Disk"? when do you use this? 175 | 176 | p 177 | 27. How do you take back up for only jobs? excluding WS? 178 | 179 | 180 | 28. How do you set up distributed builds? 181 | using master/slave 182 | 183 | 29. How many slave nodes you have? 184 | 185 | ---end--- 186 | 187 | 30. What is a label? 188 | Label is a virtual name for one or more slave nodes using which we can tie a particular jenkins job to always run on a pariticular machine (Usually which has jdk6 or jdk8 or windows machine...etc.) 189 | 190 | 191 | 31. what kind of problems you faced with your jenkins so far? 192 | - Our Master server became slow. So to distribute the load, I implemented master/slave concept and today our builds are running in 4 slave nodes. 193 | - Regular compile / deployment issues. 194 | - Diskspace issues. 195 | 196 | p 197 | 32. Suddenly my Jenkins instance became slow. What steps do you take to improve the performance? 198 | 4 - clean up old jobs. may be by using shelve plugin 199 | 2 - implement master/slave distributed concept. 200 | 1 - may be improve the computing power for ex: RAM 201 | and CPU 202 | - Make sure your Master doesn't run any jobs. Just 203 | keep it for serving jenkins trafic and schedule 204 | all your builds in slave nodes. 205 | 206 | why jenkins may become slow? 207 | 1. hardware configurations / type 208 | 2. 
more number of active builds 209 | 3. more number of build runs 210 | 4. more no. of unused jobs 211 | 5. more unused plugins 212 | 6. network 213 | 7. more no. of users 214 | 8. poorly tuned JVM arguments | non-optimal garbage collection 215 | 216 | 217 | 218 | 33. How much do you rate yourself in Jenkins? 219 | 220 | 34. Do you have experience with .Net builds? 221 | 222 | p 223 | 35. How do you upgrade Jenkins? 224 | - take a test machine 225 | - install the same old version of Jenkins in the 226 | test machine. 227 | - copy .jenkins from the old Jenkins to the test machine and bring the Jenkins server up in the test machine. 228 | - deploy the new war file to the test machine. 229 | - test a few builds randomly in the test Jenkins to see if everything works as well as the old Jenkins. 230 | - finally repeat the same steps in the production/original 231 | server. 232 | 233 | 36. Can you name a few Jenkins features? 234 | - Jenkins is a process improvement tool. 235 | - Using Jenkins we can compile, run tests, build the code and deploy efficiently by continuously integrating users' changes with the existing application. 236 | - We can generate graphs and statistics for our builds and test cases. 237 | - Jenkins provides fast feedback when something goes wrong. 238 | - Jenkins is extensible because it is plugin based and rich in features. 239 | - Jenkins can act as a nice reporting tool. It sends test-case and other reports in HTML format with some nice colors. 240 | - Helps to deliver the code to production very quickly, with quality code, by running the test cases. 241 | - Allows us to run builds in parallel, so builds can run faster. 242 | - Allows us to run different builds with different configurations seamlessly, without much configuration complexity. 243 | 244 | 37. How do you set up Jenkins from scratch? 245 | 246 | 38. What are the prerequisites for Jenkins? 247 | 248 | 39. How do you deploy an application in Tomcat? 249 | Can you explain how the deployment happens for your 250 | project? 251 | - we build a war file as the final artifact 252 | - we have a shell script for deployment. 253 | - It checks if the Tomcat/target machine is up and 254 | running and has enough free disk space 255 | - it shuts down the server 256 | - copies the war file to the webapps location 257 | - starts the server. 258 | - It also sends email notifications to all users 259 | 260 | 40. What is the difference between a web server and an application server? 261 | A web server serves static content, ex: 262 | html 263 | images 264 | javascript 265 | An application server serves dynamic content, ex: 266 | search results 267 | date conversion 268 | weather application 269 | 270 | 41. What is a parameterised build job? How do you set it up? 271 | 272 | 42. What is a build pipeline? Have you created build pipelines? 273 | 274 | 43. 275 | How do you set up the crontab? 276 | Can you explain crontab syntax? 277 | How do you create/remove crontabs? 278 | =========== 279 | 1. 280 | Create a new crontab: 281 | $ crontab -e 282 | 283 | 2. 284 | List all crontabs available 285 | $ crontab -l 286 | 287 | 3. Remove a crontab 288 | $ crontab -r 289 | 290 | 4.
291 | Crontab Syntax: 292 | Min Hour DOM Month DOW 293 | 0-59 0-23 1-31 1-12 0-7 294 | 295 | ex: 296 | everyday at 12:00 am, Monday to Friday 297 | 00 12 * * 1-5 298 | 299 | 300 | 301 | scp syntax: 302 | =========== 303 | sshpass -p "gamut" scp gamutkart.war gamut@172.17.0.2:/home/gamut/Distros/apache-tomcat-8.5.11/webapps 304 | 305 | 306 | -------------------------------------------------------------------------------- /5.Jenkins/Jenkins-Installation: -------------------------------------------------------------------------------- 1 | Setting up Jenkins in Docker container as a separate server 2 | ============================================================= 3 | 1# Create an ubuntu container 4 | $ docker run -it --name jenkins-server-wk ubuntu /bin/bash 5 | 6 | 2# Create an user 7 | $ useradd -m -d /home/gamut -s /bin/bash gamut 8 | 9 | 3# Setup the password for gamut 10 | $ passwd gamut 11 | 12 | 4# 13 | Install SSH so that other machines can connect to this jenkins server 14 | $ apt-get update 15 | $ apt-get install openssh-server 16 | $ service ssh start (Start ssh server) 17 | 18 | Install vim utility also for editing files 19 | $ apt-get install vim 20 | 21 | 2# 22 | - Create a folder in container for storing JDK and Tomcat installations 23 | $ mkdir Distros 24 | $ chmod 777 -R Distros 25 | 26 | - Install JDK using below commands. 27 | $ apt-get update 28 | $ apt-get install openjdk-8-jdk 29 | 30 | 31 | - Install Tomcat 32 | Download tomcat from web using below command 33 | $ wget https://mirrors.estointernet.in/apache/tomcat/tomcat-8/v8.5.63/bin/apache-tomcat-8.5.63.tar.gz 34 | 35 | - Extract the package using below command 36 | $ tar -zxvf apache-tomcat-8.5.63.tar.gz 37 | 38 | # download and deploy 'jenkins.war' into tomcat's webapps directory 39 | $ cd $TOMCAT_HOME/webapps 40 | $ wget https://get.jenkins.io/war-stable/2.263.4/jenkins.war 41 | 42 | # Start tomcat server. 43 | $ cd TOMCAT_HOME/bin 44 | $ ./startup.sh 45 | 46 | # 47 | Launch jenkins using below URL 48 | http://172.17.0.2:8080/jenkins 49 | 50 | 51 | # 52 | - When jenkins prompts for password, provide it and click on 'continue' 53 | ex: cat /root/.jenkins/secrets/initialAdminPassword 54 | 55 | - Select 'install all suggested plugin' in the 2nd screen 56 | 57 | - Create your won user and setup password 58 | 59 | # 60 | Install Git and Maven after creating the project. 
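A minimal sketch of that last step, assuming the same Ubuntu container (stock Ubuntu package names):
$ apt-get install -y git maven

Then point Jenkins at the tools: Manage Jenkins --> Global Tool Configuration --> add JDK/Git/Maven entries (or tick 'Install automatically') so the Free style job can resolve them during the build.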
61 | 62 | 63 | 64 | 65 | -------------------------------------------------------------------------------- /5.Jenkins/Jenkins_CLI.txt: -------------------------------------------------------------------------------- 1 | # Run Jenkins CLI as a particular User 2 | $ java -jar jenkins-cli.jar -s http://172.17.0.4:8080/jenkins -auth gamut:11742d94d019f32f25f2a42d1eb76c5771 build gamutkart 3 | 4 | # Generate API token (ex: 11742d94d019f32f25f2a42d1eb76c5771) 5 | Jenkins Dashboard --> People --> Click on your user --> configure --> under API Token --> Add new token --> Generate & COPY 6 | 7 | 8 | -------------------------------------------------------------------------------- /5.Jenkins/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | ## 3 | ## 4 | ENVIRONMENT=$1 5 | if [ $ENVIRONMENT = "QA" ];then 6 | sshpass -p "gamut" scp target/gamutkart.war gamut@172.17.0.2:/home/gamut/Distros/apache-tomcat-8.5.23/webapps 7 | sshpass -p "gamut" ssh gamut@172.17.0.2 "JAVA_HOME=/home/gamut/Distros/jdk1.8.0_151" "/home/gamut/Distros/apache-tomcat-8.5.23/bin/startup.sh" 8 | 9 | elif [ $ENVIRONMENT = "SIT" ];then 10 | sshpass -p "gamut" scp target/gamutkart.war gamut@172.17.0.3:/home/gamut/Distros/apache-tomcat-8.5.23/webapps 11 | sshpass -p "gamut" ssh gamut@172.17.0.3 "JAVA_HOME=/home/gamut/Distros/jdk1.8.0_151" "/home/gamut/Distros/apache-tomcat-8.5.23/bin/startup.sh" 12 | echo "deployment has been done!" 13 | fi 14 | 15 | -------------------------------------------------------------------------------- /5.Jenkins/deployment_commands: -------------------------------------------------------------------------------- 1 | ######################## 2 | # 3 | Goal: 4 | 1. Create a parameterized build job to deploy the code into multiple environment. 5 | 2. How do you deploy the code into multiple environments using a single Jenkins job. 6 | ######################## 7 | # 8 | Description: 9 | 1. You need to create parameterized job to test this project as shown below. 10 | - Go to Job configuration --> Select "This project is parameterized" --> Create a parameter called "ENVIRONMENT". 11 | 12 | 2. Select 'Execute Shell' under "Build" Section of you job configuration and paster below commands. 13 | 14 | 15 | 3. Note: Make sure that you have two machines with IPs (Ex: 172.17.0.3 & 172.17.0.4) as it's deploying into these two machines. If your machines IPs are different provide the same in below commands accordingly. 
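4. Note: Jenkins exports every build parameter to the 'Execute shell' step as an environment variable, so $ENVIRONMENT in the commands below takes whatever value (QA or SIT) is entered when the job is started via 'Build with Parameters'. As a hedged example, the job can also be triggered non-interactively with the CLI shown in Jenkins_CLI.txt (job name and token below are placeholders):
$ java -jar jenkins-cli.jar -s http://172.17.0.4:8080/jenkins -auth gamut:API_TOKEN build deploy-to-env -p ENVIRONMENT=QA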
16 | ######################## 17 | # 18 | if [ $ENVIRONMENT = "QA" ];then 19 | 20 | sshpass -p "gamut" scp target/gamutgurus.war gamut@172.17.0.3:/home/gamut/Distros/apache-tomcat-8.5.61/webapps 21 | 22 | sshpass -p "gamut" ssh gamut@172.17.0.3 "/home/gamut/Distros/apache-tomcat-8.5.61/bin/startup.sh" 23 | 24 | elif [ $ENVIRONMENT = "SIT" ];then 25 | sshpass -p "gamut" scp target/gamutgurus.war gamut@172.17.0.4:/home/gamut/Distros/apache-tomcat-8.5.61/webapps 26 | 27 | sshpass -p "gamut" ssh gamut@172.17.0.4 "/home/gamut/Distros/apache-tomcat-8.5.61/bin/startup.sh" 28 | fi 29 | 30 | -------------------------------------------------------------------------------- /5.Jenkins/deployment_commands.txt: -------------------------------------------------------------------------------- 1 | if [ $ENVIRONMENT = "QA" ];then 2 | 3 | sshpass -p $PASSWORD scp target/gamutgurus.war gamut@172.17.0.3:/home/gamut/Distros/apache-tomcat-8.5.63/webapps 4 | sshpass -p $PASSWORD ssh gamut@172.17.0.3 "/home/gamut/Distros/apache-tomcat-8.5.63/bin/startup.sh" 5 | 6 | elif [ $ENVIRONMENT = "SIT" ];then 7 | 8 | sshpass -p $PASSWORD scp target/gamutgurus.war gamut@172.17.0.4:/home/gamut/Distros/apache-tomcat-8.5.63/webapps 9 | sshpass -p $PASSWORD ssh gamut@172.17.0.4 "/home/gamut/Distros/apache-tomcat-8.5.63/bin/startup.sh" 10 | 11 | fi 12 | 13 | -------------------------------------------------------------------------------- /5.Jenkins/multi-deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | for i in `cat IPs.txt` 4 | do 5 | echo "deploying the code to $i ..." 6 | sleep 3 7 | echo "deployment to $i is succussful!" 8 | sshpass -p gamut scp target/gamutgurus.war gamut@$i:/home/gamut/Distros/apache-tomcat-8.5.63/webapps 9 | sshpass -p $PASSWORD ssh gamut@$i "/home/gamut/Distros/apache-tomcat-8.5.63/bin/startup.sh" 10 | 11 | done 12 | -------------------------------------------------------------------------------- /5.Jenkins/system-message.html: -------------------------------------------------------------------------------- 1 | 2 | 3 |

Jenkins will be down tomorrow from 7-9 PM IST
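<!-- Usage note (assumption, not part of the original snippet): paste this into Manage Jenkins > Configure System > System Message to show a maintenance banner on every Jenkins page; HTML is rendered only if the configured Markup Formatter allows it. -->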

4 | 5 | 6 | 7 | -------------------------------------------------------------------------------- /6.Kubernetes/Gamut-Kubernetes-Class-Notes.txt: -------------------------------------------------------------------------------- 1 | Installing Kubernetes Using Kubeadm 2 | =========================================== 3 | 4 | Creating VM in Google cloud 5 | ################## 6 | 1. In the Google cloud console, click on three lines (top, left) 7 | 2. Go to 'Compute Engine' --> VM Instances 8 | 3. Click on 'Create Instalnce' 9 | 4. Give a machine name in 'Name' field (ex:master) 10 | 5. Select machine size under 'Series' as 'N2' (Need minimum 2GB RAM & 2vCPU) 11 | 6. On the left menu, select 'OS and Storage' 12 | 7. To select OS for the VM, Under 'Operating system and storage', click on 'Change' 13 | 8. Under 'Operating System' drop-down, select 'ubuntu' 14 | 9. Under 'Version', select 'Ubuntu 20.04 LTS' (x86/64, amd64 focal image ....) 15 | (basically, we need ubuntu 20.04 with x86/64 bit processor architecture) 16 | 10. Click on 'select' butoon 17 | 11. On left menu, click on 'Networking' and select 'Allow HTTP trafic' and 'Allow HTTPS trafic' 18 | 12. And finally, click on 'create' button at the bottom 19 | 20 | 21 | 22 | Topic: Setting up Control-plane/Master node 23 | ############################################# 24 | 1) 25 | # Install container runtime - containerd 26 | Follow this source. 27 | Source: 28 | https://kubesimplify.com/kubernetes-containerd-setup 29 | 30 | 31 | 2) 32 | # Install Kubeadm, Kubelet & Kubectl 33 | Follow this source or below commands. 34 | 35 | Source: 36 | https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ 37 | 38 | #3) Initialize a new Kubernetes control-plane node 39 | 40 | $ sudo kubeadm init 41 | NOTE: Save this output as it's required to add worker nodes. 42 | 43 | 44 | 4) 45 | # As per the instruction from 'kubeadm init' command output, To make kubectl work for your non-root user, run these commands. 46 | mkdir -p $HOME/.kube 47 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 48 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 49 | 50 | #5) Verify if cluster is initialized succussfuly 51 | $ kubectl get nodes 52 | O/P: 53 | NAME STATUS ROLES AGE VERSION 54 | node1 NotReady master 2m43s v1.12.1 55 | 56 | 57 | #9) Run the following kubectl command to find the reason why the cluster STATUS is showing as NotReady. 58 | - This command shows all Pods in all namespaces - this includes system Pods in the system (kube-system) namespace. 59 | - As we can see, none of the coredns Pods are running 60 | - This is preventing the cluster from entering the Ready state, and is happening because we haven’t created the Pod network yet. 61 | O/P: 62 | $ kubectl get pods --all-namespaces 63 | NAMESPACE NAME READY STATUS RESTARTS AGE 64 | kube-system coredns-...vt 0/1 Pending 0 8m33s 65 | kube-system coredns-...xw 0/1 Pending 0 8m33s 66 | kube-system etcd... 1/1 Running 0 7m46s 67 | kube-system kube-api... 1/1 Running 0 7m36s 68 | 69 | #7) Create Pod Network. You must install a pod network add-on so that your pods can communicate with each other. 
(As per kubeadm init output) 70 | 71 | Source: https://github.com/rajch/weave#using-weave-on-kubernetes [Take this link from $ kubeadm init output] 72 | 73 | Run below command to install a Pod network add-on 74 | 75 | $ kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.28/net.yaml 76 | 77 | 78 | #8) Check if the status of Master is changed from 'NotReady' to 'Ready' 79 | $ kubectl get nodes 80 | NAME STATUS ROLES AGE VERSION 81 | node1 Ready master 3m51s v1.12.1 82 | 83 | GREAT - the cluster is ready and all dns system pods are now working. Master is ready now. 84 | Now that the cluster is up and running, it’s time to add some worker-nodes. 85 | 86 | 87 | # 88 | Topic: Worker Node Setup & Joining to the Master: 89 | ############################################# 90 | 1 91 | # Create a worker node machine in GCP / AWS cloud platform. 92 | 93 | 2 94 | # Install kubeadm, Kubelet, Kubectl 95 | https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ 96 | 97 | 3 98 | # Install container runtime that is "containerd" 99 | Follow this source. 100 | Source: 101 | https://kubesimplify.com/kubernetes-containerd-setup 102 | 103 | 4 104 | # To join Kubernetes worker node with control-plane node, run below command as root user. 105 | 106 | Note: Below command/token will be different for your Control-plane/Master node. Use the one which you have copied at step:3. 107 | 108 | $ sudo kubeadm join 10.128.0.18:6443 --token 9ril81.t4k4sqh1ionqv1om \ 109 | --discovery-token-ca-cert-hash sha256:de57d9e08877db501a8b503db3ee91596f8f5657878c3087bc0343ece7df3eb2 110 | 111 | NOTE: above token is valid only for 24 hours i.e same token you can't use to join worker nodes after 24 hours. If you are joining worker node to Master after 24 hours, use below command to create a new token. 112 | $ kubeadm token create --print-join-command (Use this newly generated command for joining the worker-node) 113 | 114 | 115 | # Verify node Join (Run below in Control-plane node) 116 | 117 | $ kubectl get nodes 118 | NAME STATUS ROLES AGE VERSION 119 | control-plane Ready master 26m v1.16.3 120 | worker-node1 Ready 3m18s v1.16.3 121 | 122 | $ kubectl get nodes -o wide 123 | --> this will display IP, OS, Kernel and more details about all Nodes 124 | 125 | # 126 | Project-1 [Nginx] 127 | ############################################# 128 | Deploying/Creating a pod 129 | ############################################# 130 | 131 | # 132 | 1.) Create Pod manifest file 133 | $ mkdir nginx 134 | $ vim pod.yaml 135 | 136 | pod.yaml 137 | ========= 138 | apiVersion: v1 139 | kind: Pod 140 | metadata: 141 | name: nginx-pod 142 | labels: 143 | env: prod 144 | version: v1.2.3 145 | spec: 146 | containers: 147 | - name: nginx-container 148 | image: nginx 149 | ports: 150 | - containerPort: 80 151 | 152 | 153 | 154 | pod.yaml - Manifest file description: 155 | ---------------------- 156 | - Straight away we can see four top-level resources. 157 | • .apiVersion 158 | • .kind 159 | • .metadata 160 | • .spec 161 | 162 | --> .apiVersion: 163 | - Tells API Server about what version of Yaml is used to create the object (Pod object in this case) 164 | - Pods are currently in v1 API group 165 | 166 | --> .kind: 167 | - Tells us the kind of object being deployed. In this case we are creating POD object. 168 | - It tells control plane what type of object is being defined. 169 | 170 | --> .metadata: 171 | - this section again has two sub-sections i.e name & labels 172 | - You can name the Pod using "name" key. 
173 | - Using labels, we can identify a particular pod. 174 | 175 | --> .spec: 176 | - This is where we specify details about the containers that will run in the Pod. 177 | - In this section we specify container name, image, ports ..etc. 178 | 179 | # 180 | 2.) Creating a Pod 181 | - Check if all Nodes are ready before creating a Pod 182 | $ kubectl get nodes 183 | 184 | - This POSTs the manifest file to API server and deploy/create a Pod from it 185 | $ kubectl apply -f pod.yml 186 | Note: Your Pod has been scheduled to a healthy node in the cluster and 187 | is being monitored by the local kubelet process on the node. 188 | 189 | # Introspecting Running Pods 190 | - Get IP and worker node of the Pod 191 | $ kubectl get pod -o wide 192 | 193 | 194 | - Launch nginx server application running in the Pod from Controle-plane node 195 | $ curl http://10.44.0.1:80 196 | $ curl http://POD-IP:Server-Port 197 | 198 | 199 | - You can also login into the Pod container to get more information. 200 | $ kubectl exec nginx-app -it -- /bin/bash 201 | Note: Let's add some code and launch our nginx application 202 | - $ echo "Wiculty Learning Solutions" > /usr/share/nginx/html/index.html 203 | 204 | - Launch nginx application 205 | $ curl http://10.44.0.1:80 206 | 207 | - Login into a specific container in case you have multi container Pod 208 | using --container or -c option. 209 | 210 | $ kubectl exec nginx-app -c container-name -it -- /bin/bash 211 | 212 | # 213 | 3.) Deleting a Pod 214 | $ kubectl get pods 215 | $ kubectl delete pods nginx-pod 216 | $ kubectl delete -f pod.yml 217 | 218 | --POD-- 219 | 220 | NOTE: 221 | kubelet takes the PodSpec and is responsible for pulling all images and starting all containers in the Pod. 222 | 223 | What Next? 224 | - If a Pod fails, it is not automatically rescheduled. Because of this, we usually deploy 225 | them via higher-level object such as Deployments. 226 | 227 | - This adds things like "scalability" (scale-up/down), "self-healing", "rolling updates" and "roll backs" and makes Kubernetes so powerful. 228 | 229 | 230 | Misc. CMDs: 231 | - Get full copy of the Pod manifest from cluster store. desired state is (.spec) and oberved state will be under (.status) 232 | $ kubectl get pod -o yaml 233 | 234 | - Check if Pod is created 235 | $ kubectl get pods 236 | $ kubectl get pods --watch (monitor the status continuously) 237 | 238 | 239 | - Another great Kubernetes introspection command. Provides Pods(object's) lifecycle events. 240 | $ kubectl describe pod nginx-pod 241 | 242 | # 243 | Project-1 [Nginx] 244 | ############################################# 245 | Creating Deployments & Services 246 | ############################################# 247 | 248 | - Pods don’t self-heal, they don’t auto-scale, and they don’t allow for easy updates. 249 | 250 | - So we use K8S Deployment Object to create the Pods in real-time. If we create the Pods using Deployment Object, we get below advantages.. 251 | - Auto scale (You can scalp up and scale down the pods very easily) 252 | - Self-heal (Pods can be self-healed automcatically) 253 | - Rolling updates (You can deploy the new code / new image very easily if Pods are created using Deployments ) 254 | - Roll backs (Role-back become very easy as you can role-back the deployment instead of rolling-back Pods one by one) 255 | - Creating End-pod (LB URL) of application: Pods can be exposed very easily as you just need to give deployment to the Service 256 | instead adding Pod's one by one to LB/Service. 
257 | 258 | - That's why we almost always deploy Pods via 'Deployments" 259 | 260 | # Test Rolling Updates 261 | kubectl set image deployments/nginx-dep nginx-c=nageshvkn/gamutkart-imgcamp:v2 262 | 263 | # Test Rollback 264 | 265 | 266 | # 267 | Creating Deployments 268 | -------------------- 269 | # List all nodes in K8s cluster 270 | $ kubectl get nodes 271 | 272 | # List all pods in K8s cluster 273 | $ kubectl get pods 274 | 275 | # Create the deployment 276 | $ kubectl apply -f deploy-nginx.yml 277 | 278 | vim deploy-nginx.yml 279 | ------- 280 | apiVersion: apps/v1 281 | kind: Deployment 282 | metadata: 283 | name: nginx-prod-deploy 284 | 285 | spec: 286 | replicas: 6 287 | selector: 288 | matchLabels: 289 | app: nginx-pod 290 | template: 291 | metadata: 292 | labels: 293 | app: nginx-pod 294 | spec: 295 | containers: 296 | - name: nginx-container 297 | image: nginx 298 | ports: 299 | - containerPort: 80 300 | 301 | 302 | # Creating deployment 303 | $ kubectl apply -f deploy-nginx.yml 304 | 305 | # Check pod creations 306 | $ kubectl get pods --watch 307 | 308 | # Login to pods and verify nginx application 309 | $ kubectl get pods -o wide 310 | $ kubectl exec -it nginx-deploy-5f654bcccd-27xtg /bin/bash 311 | 312 | # launch application from individual Pod 313 | $ curl http://10.44.0.1:80 314 | $ curl http://pod_ip:80 315 | 316 | -->describe 317 | 318 | # Testing Self-healing capability 319 | ---------------------- 320 | If you delete some Pods, Kubernetes can automatically re-create the same for us to make sure given no. of Pods are always running. 321 | 322 | - Delete the Pods 323 | $ kubectl delete pods POD_NAME1 POD_NAME2 324 | 325 | - Check if the Pods are re-created 326 | $ kubects get pods 327 | 328 | -- 329 | Creating service to expose the application to outside world and setting up load balancer 330 | =============================================== 331 | $ vim service-nginx.yml 332 | ----- 333 | apiVersion: v1 334 | kind: Service 335 | metadata: 336 | name: nginx-prod-service 337 | labels: 338 | app: nginx-app-prod-service 339 | spec: 340 | selector: 341 | app: nginx-pod 342 | 343 | type: NodePort 344 | 345 | ports: 346 | - nodePort: 31000 347 | port: 80 348 | targetPort: 80 349 | 350 | # Create the service 351 | $ kubectl create -f service-nginx.yml 352 | 353 | # Enable networking 354 | Click on Navigation menu(three lines on top left) --> Go to VPC Network --> Firewal rules --> select on one existing rule --> edit --> Source IP ranges 355 | --> 0.0.0.0/0 --> In 'Specified protocols and ports', write this range "31000" 356 | 357 | # Access the application from browser using worker-node port 358 | http://34.93.139.52:31000/ 359 | http://WorkerNodeIP:NodePort 360 | 361 | 362 | Project-2 [GamutKart] 363 | ========================== 364 | # Creating deployment for GamutKart 365 | $ vim deploy-gamutkart.yml 366 | apiVersion: apps/v1 367 | kind: Deployment 368 | metadata: 369 | name: gamutkart-deploy 370 | labels: 371 | app: gamutkart-app 372 | spec: 373 | replicas: 8 374 | selector: 375 | matchLabels: 376 | app: gamutkart-app 377 | template: 378 | metadata: 379 | labels: 380 | app: gamutkart-app 381 | spec: 382 | containers: 383 | - name: gamutkart-container 384 | image: nageshvkn/gamutkart-img-k8s 385 | resources: 386 | requests: 387 | memory: "64Mi" 388 | cpu: "250m" 389 | limits: 390 | memory: "128Mi" 391 | cpu: "500m" 392 | ports: 393 | - containerPort: 8080 394 | command: ["/bin/sh"] 395 | args: ["-c", "/root/apache-tomcat-8.5.38/bin/startup.sh; while true; do sleep 1; done;"] 
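Note: the 'command'/'args' entries above override the image's ENTRYPOINT. startup.sh launches Tomcat in the background and returns immediately, so the 'while true; do sleep 1; done' loop is what keeps the container's main process, and therefore the Pod, running. A common alternative (not used in these notes, shown only as a sketch) is to run Tomcat in the foreground instead:
        command: ["/root/apache-tomcat-8.5.38/bin/catalina.sh"]
        args: ["run"]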
396 | 397 | # Execute 398 | $ kubectl apply -f deploy-gamutkart.yml 399 | 400 | # Creating service for GamutKart 401 | $ vim service-gamutkart.yml 402 | 403 | apiVersion: v1 404 | kind: Service 405 | metadata: 406 | name: gamutkart-service 407 | labels: 408 | app: gamutkart-app 409 | spec: 410 | selector: 411 | app: gamutkart-app 412 | type: LoadBalancer 413 | ports: 414 | - nodePort: 31000 415 | port: 8080 416 | targetPort: 8080 417 | 418 | 419 | # Creating the service 420 | $ kubectl apply -f service-gamutkart.yml 421 | 422 | 423 | # 424 | # Enable networking 425 | TODO: 426 | Go to VPC Network --> Firewal --> select on one existing rule --> edit --> Source IP ranges 427 | --> 0.0.0.0/0 --> In "Specified protocols and ports", write this range "0-65535" 428 | 429 | Note: 430 | Kubernates Port Range: 30,000 - 32,767 431 | 432 | 433 | # 434 | 3.) Deleting a Pod 435 | $ kubectl get pods 436 | $ kubectl delete pods nginx-pod 437 | $ kubectl delete -f pod.yml 438 | 439 | # Misc: 440 | 4.) Get all nodes IPs in Kubernetes cluster 441 | $ kubectl get nodes -o wide 442 | 443 | # 444 | List Deployments & Service 445 | $ kubectl get deployment 446 | $ kubectl get svc (Or service) 447 | 448 | # 449 | 5.) Deleting Deployment & Service 450 | $ kubectl delete -f deploy-gamutkart.yaml(deployment yaml file name) 451 | $ kubectl delete -f service-gamutkart.yml( service yaml file name) 452 | 453 | $ kubectl delete deployment 454 | $ kubectl delete service 455 | 456 | 457 | #Scaleup Pods 458 | $ kubectl scale deployment/gamutkart-deploy --replicas=2 459 | 460 | 461 | # Misc: 462 | =============== 463 | 1. List all the pods which are under a Service 464 | 465 | --> Describe the service & find the Pod's Label which are tied up with the service first. In below case it is "app=nginx-pod". 466 | $ kubectl describe service (check for "Selector: app=nginx-pod in the output) 467 | ") 468 | --> List all Pods which have the label. Example, in above case it is: app=nginx-pod. 469 | $ kubectl get pods -l app=nginx-pod 470 | 471 | Autoscaling 472 | ================= 473 | 1. Autoscale Pods/Deployments 474 | -- 475 | # How do you autoscale the deployment? 476 | $ kubectl autoscale deployment --max=4 --cpu-percent=70 477 | 478 | - The above command, maximum creates 4 instances of your application and autoscales the instances whenever CPU utilisation touches to 70% threshold. 479 | 480 | - When you run the autoscale command, internally it creates "Horizontal Pod Autoscaler" (HPA). HPA takes care of autoscaling activity.Check it using below command. 481 | $ kubectl get hpa 482 | 483 | 2. Autoscale Kubernetes Cluster 484 | -- 485 | - We need to also have enough number of Worker nodes to supply compute resources (RAM/CPU) to Pods as Pods use WorkerNode's compute. In GKE cluster, Google cloud automatically takes care of WorkerNode's auto scaling. However, if you have a specific requirements, you can set up WorkerNode autoscaling. 486 | 487 | - So, using bellow command, you can autoscale WorkerNodes. 488 | $ gcloud container clusters update cluster-name --enable-autoscaling --min-nodes=1 --max-nodes=10 489 | 490 | 491 | ===== 492 | How to validate Yaml 493 | 494 | ===== 495 | 5. Helm Charts 496 | 497 | - Stateless Vs State-full applications 498 | - PV & PVC 499 | - StatefullSet 500 | 501 | 3. ZDR - Rolling update 502 | 503 | - Namespace 504 | 505 | 8. Roleback 506 | 507 | 1. Secret to connect to private or any repository 508 | 2. ConfigMaps 509 | 510 | 8. 
Ingress 511 | 512 | 513 | CKA 514 | 515 | 516 | 517 | 518 | 519 | 520 | 521 | 522 | 523 | 524 | 525 | 526 | 527 | 528 | 529 | 530 | 531 | 532 | 533 | -------------------------------------------------------------------------------- /6.Kubernetes/Guest-Book-Project/frontend-deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: frontend 5 | labels: 6 | app.kubernetes.io/name: guestbook 7 | app.kubernetes.io/component: frontend 8 | spec: 9 | selector: 10 | matchLabels: 11 | app.kubernetes.io/name: guestbook 12 | app.kubernetes.io/component: frontend 13 | replicas: 3 14 | template: 15 | metadata: 16 | labels: 17 | app.kubernetes.io/name: guestbook 18 | app.kubernetes.io/component: frontend 19 | spec: 20 | containers: 21 | - name: guestbook 22 | image: paulczar/gb-frontend:v5 23 | # image: gcr.io/google-samples/gb-frontend:v4 24 | resources: 25 | requests: 26 | cpu: 100m 27 | memory: 100Mi 28 | env: 29 | - name: GET_HOSTS_FROM 30 | value: dns 31 | ports: 32 | - containerPort: 80 33 | 34 | -------------------------------------------------------------------------------- /6.Kubernetes/Guest-Book-Project/frontend-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: frontend 5 | labels: 6 | app.kubernetes.io/name: guestbook 7 | app.kubernetes.io/component: frontend 8 | spec: 9 | # if your cluster supports it, uncomment the following to automatically create 10 | # an external load-balanced IP for the frontend service. 11 | type: LoadBalancer 12 | ports: 13 | - port: 80 14 | selector: 15 | app.kubernetes.io/name: guestbook 16 | app.kubernetes.io/component: frontend 17 | 18 | -------------------------------------------------------------------------------- /6.Kubernetes/Guest-Book-Project/mongo-deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: mongo 5 | labels: 6 | app.kubernetes.io/name: mongo 7 | app.kubernetes.io/component: backend 8 | spec: 9 | selector: 10 | matchLabels: 11 | app.kubernetes.io/name: mongo 12 | app.kubernetes.io/component: backend 13 | replicas: 1 14 | template: 15 | metadata: 16 | labels: 17 | app.kubernetes.io/name: mongo 18 | app.kubernetes.io/component: backend 19 | spec: 20 | containers: 21 | - name: mongo 22 | image: mongo:4.2 23 | args: 24 | - --bind_ip 25 | - 0.0.0.0 26 | resources: 27 | requests: 28 | cpu: 100m 29 | memory: 100Mi 30 | ports: 31 | - containerPort: 27017 32 | 33 | -------------------------------------------------------------------------------- /6.Kubernetes/Guest-Book-Project/mongo-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: mongo 5 | labels: 6 | app.kubernetes.io/name: mongo 7 | app.kubernetes.io/component: backend 8 | spec: 9 | ports: 10 | - port: 27017 11 | targetPort: 27017 12 | selector: 13 | app.kubernetes.io/name: mongo 14 | app.kubernetes.io/component: backend 15 | 16 | -------------------------------------------------------------------------------- /6.Kubernetes/Kubernetes.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wicultydotcom/devops-class-notes/c4eb6b6a719825f5e5fcf331e37019e7abb9ae36/6.Kubernetes/Kubernetes.pptx 
-------------------------------------------------------------------------------- /6.Kubernetes/Misc/Namespaces/Namespaces.txt: -------------------------------------------------------------------------------- 1 | Agenda: 2 | --- 3 | . What is Namespace 4 | . Why to use Namespace & Use-cases 5 | . How to Implement Namespaces 6 | 7 | ---------------- 8 | #1) 9 | What is a Namespace? 10 | - Namespace is nothing but a virtual cluster inside of your Kubernets cluster. 11 | It forms a cluster inside a cluster. We can have multiple Namespaces in a cluster and these are all isolated from each other. 12 | - 13 | 14 | #2) 15 | Why to use Namespaces? Typical use-cases 16 | CASE-A: You will miss overview if you don't have an isolated environment 17 | - If you create all your resources like pods, deployments, services ..etc. in the 'default' namespace. very soon it get's filled with too many things so, you will miss the Overview. 18 | 19 | - So, we can organize the Kubernets resources in different Namespaces. This way we can logically group different resources. 20 | 21 | For example- 22 | - all database or monitoring or metrics or logging related resources like pods, deployment, services, secrets ..etc. 23 | - all resources related to QA, Prod, Team-A/B, Project-A/B ..etc. 24 | 25 | CASE-B: Naming conflicts & configurations override 26 | - Assume that team-A had deployment called 'monitoring-deployment'. For some reason, they deleted it. After some days, another team team-B creates a new deployment with the same name 'monitoring-deployment' for different purpose. What if someone from team-A, modifies the configurations thinking it's their deployment? Or modifies something by running team-A's CI/CD pipeline? 27 | 28 | - So, if we have namespaces, different team can get into their namespace and this will avoid conflicts and configuration overriding disaster. 29 | 30 | CASE-C: Access and Resource limits on Namespaces 31 | - We have two teams working on the same cluster. We can create two different namespaces say 'Project-A-namespace' and 'Project-B-namespace'. We can control the access for different people based on the project they are working on ex- create/update/delete ..etc. people working on one project can't do anything on the other project. 32 | 33 | - We can also limit the resources(CPU, RAM, STORAGE) that each namespace can consume. Assume that we have two projects running in the same cluster. Sometimes, one project may use more resources making other project slow. So, namespaces helps us to restrict RAM, CPU, STORAGE per project. 34 | 35 | Note- some components in kubernetes can't be namespaced or isolated. they live in the cluster globally. 36 | example: nodes and volumes 37 | 38 | #2A) When to use Namespace 39 | Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. 40 | 41 | 42 | #3) Listing all namespaces 43 | -- 44 | $ kubectl get namespace OR 45 | $ kubectl get ns 46 | 47 | output- 48 | NAME STATUS AGE 49 | default Active 27m 50 | kube-node-lease Active 27m 51 | kube-public Active 27m 52 | kube-system Active 27m 53 | 54 | - kube-system: contains components which are related to system processes ex; master, kubectl related 55 | 56 | - kube-public: it contains publicly accessible data. it has configmap that contains cluster information. for example, type $kubectl cluster-info , you can see the details about the cluster. 
this is coming from this configmap which exists in the kube-public namespace. 57 | 58 | - kubec-node-lease: each node in the cluster has something called, 'lease' object. this determines the availability of the node 59 | 60 | -default: this is where all the resources (pods, deployment, service ..etc.) that we create are located. 61 | 62 | #4) Creating a namespace using kubectl CLI. 63 | -- 64 | $ kubectl create namespace prod-env 65 | $ kubectl get ns 66 | 67 | Note- you can also create namespace using .yaml configuration file. 68 | this helps us to track the history in the repository. 69 | 70 | #5) Creating a namespace using yaml configuration file 71 | -- 72 | apiVersion: v1 73 | kind: Namespace 74 | metadata: 75 | name: nginx-project 76 | 77 | #6) Creating Kubernetes resources in the Namespace. 78 | Note- Before creating the resource in a Namespace, make sure that Namespace is created first. 79 | 80 | There are two ways to create Resources in a Namespace 81 | 1. Appending --namespace=nginx-project to 'kubectl create -f xxx.yaml' 82 | 2. Writing 'namespace: nginx-project' under metadata section of the resource 83 | 84 | 1. Appending --namespace= to 'kubectl create -f xxx.yaml' 85 | $ kubectl create -f deployment.yaml --namespace=nginx-project 86 | 87 | Verify that above deployment is created in nginx-project Namespace 88 | $ kubectl get deployment -n nginx-project 89 | 90 | NAME READY UP-TO-DATE AVAILABLE AGE 91 | nginx-prod-deploy 3/3 3 3 25s 92 | 93 | $ kubectl get pods -n nginx-project 94 | 95 | $ kubectl get pods 96 | 97 | $ kubectl get deployment 98 | No resources found in default namespace. 99 | 100 | - delete previous deployment from Namespace nginx-project 101 | $ kubectl delete deployment nginx-prod-deploy -n nginx-project 102 | 103 | 2. Writing 'namespace: nginx-project' under metadata section of the resource in yaml configuration file 104 | vim nginx-deployment.yaml 105 | -- 106 | apiVersion: apps/v1 107 | kind: Deployment 108 | metadata: 109 | name: nginx-prod-deploy 110 | namespace: nginx-project 111 | ... 112 | ... 113 | ... 114 | 115 | - Check for the deployment called 'nginx-prod-deploy' in the 'nginx-project' namespace 116 | $ kubectl get deployment -n nginx-project 117 | NAME READY UP-TO-DATE AVAILABLE AGE 118 | nginx-prod-deploy 3/3 3 3 7m13s 119 | 120 | Note- If we don't use '-n nginx-project', by default it will show all the resources from default Namespace 121 | 122 | #7) Changing active Namespace 123 | We can change the active Namespace with 'kubens' command line utility. In GKE, it is installed already. 124 | We may need to install it in on-premise clusters using 'sudo apt install kubectx' 125 | 126 | - Listing all Namespaces. It highlights the current active Namespace with yellow colour. 127 | $ kubens 128 | 129 | - List current active Namespace 130 | $ kubens -c 131 | 132 | - Changing the Namespace Or activating a Namespace. 133 | Below command activates nginx-project Namespace. We can Observe that all commands that we execute are applied to that particular Namespace. 134 | 135 | $ kubens nginx-project 136 | Context "gke_kubernetes-283202_us-central1-c_cluster-1" modified. 137 | Active namespace is "nginx-project". 
138 | 139 | $ kubectl get deployment 140 | NAME READY UP-TO-DATE AVAILABLE AGE 141 | nginx-prod-deploy 3/3 3 3 30m 142 | 143 | $ kubectl get deployment -n nginx-project 144 | NAME READY UP-TO-DATE AVAILABLE AGE 145 | nginx-prod-deploy 3/3 3 3 31m 146 | 147 | - Moving back to previous active Namespace 148 | $ kubens - 149 | 150 | - Setting the namespace preference / activating namespace using kubectl command 151 | kubectl config set-context --current --namespace= 152 | kubectl config view | grep namespace 153 | 154 | #8) Deleting a namespace 155 | kubectl delete namespaces 156 | 157 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/kubectl.txt: -------------------------------------------------------------------------------- 1 | # CONCEPT-1 2 | Creating POD and exposing to public with NodePort type service using kubectl CLI 3 | --- 4 | 1. creating pod 5 | $ kubectl run nginx-pod --image=nginx --port=80 6 | 7 | 2. exposing pod using NodePort type service 8 | $ kubectl expose pod nginx-pod --name=nginx-svc --type=NodePort --port=80 9 | 10 | 3. Access/test the application 11 | $ kubectl get svc 12 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 13 | nginx-svc NodePort 10.97.142.133 80:32422/TCP 4m44s 14 | 15 | --> As we discussed in the class, take the port which is exposed to outside world. Note that --port=80 which is used in the expose command is service's port. It is like port:32422 is exposed to outside world and it is mapped to service port:80 and service port is again mapped to container port:80 (32422:80:80). So, to access the pod, we have to use Worker-nodes public IP and public port i.e 32422. 16 | 17 | --> check your worker-node's public/external IP from google cloud console. In this case it is: 104.198.19.101 18 | 19 | --> access the nginx server using below command or from the browser. 20 | $ curl http://104.198.19.101:32422 21 | 22 | 23 | 24 | # CONCEPT-2 25 | Creating and exposing deployment' using kubectl CLI 26 | --- 27 | 1. create deployment using kubectl command with 3 pods and nginx image. 28 | $ kubectl create deployment nginx-deployment --image=nginx --replicas=3 29 | 30 | 2. check if the deployment is created 31 | $ kubectl get deployment 32 | NAME READY UP-TO-DATE AVAILABLE AGE 33 | nginx-deployment 3/3 3 3 6s 34 | 35 | 3. expose the deployment to public by creating NodePort type service. below command creates a service called 'nginx-service' and binds all the pods to it which are created by deployment 'nginx-deployment'. Now we have 3 pods in this service so that it can do loadbalancing across these 3 pods. 36 | $ kubectl expose deployment nginx-deployment --type=NodePort --port=80 --name=nginx-service 37 | 38 | 4. check the service 39 | $ kubectl get svc 40 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 41 | nginx-service NodePort 10.102.229.245 80:31327/TCP 5s 42 | 43 | 5. access/test the application using below command or the browser. Note that the publicly exposed port in this case is 31327. we can access the application using below Worker-node's public IP. You can get worker-node's public IP from Google cloud. In this case public IP is-104.198.19.101. 
44 | $ curl http://104.198.19.101:31327 45 | 46 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/secrets/nginx-credentials.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: nginx-credentials 5 | type: Opaque 6 | data: 7 | username: YWRtaW4= 8 | password: MTIzYWJj 9 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/secrets/nginx-file-override.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: nginx-credentials 5 | type: Opaque 6 | data: 7 | nginx.conf: | 8 | c2VydmVyLW5hbWU9d3d3LXNlcnZlci1uYW1l 9 | cG9ydD04MA== 10 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/secrets/nginx-pod-env.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: test-pd-env 5 | spec: 6 | containers: 7 | - image: nginx 8 | name: test-container 9 | env: 10 | - name: USERNAME 11 | valueFrom: 12 | secretKeyRef: 13 | name: nginx-credentials 14 | key: username 15 | - name: PASSWORD 16 | valueFrom: 17 | secretKeyRef: 18 | name: nginx-credentials 19 | key: password 20 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/secrets/nginx-pod-volume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: test-pd 5 | spec: 6 | containers: 7 | - image: nginx 8 | name: test-container 9 | volumeMounts: 10 | - name: nginx-credentials-vol 11 | mountPath: /etc/nginx-credentials 12 | readOnly: true 13 | volumes: 14 | - name: nginx-credentials-vol 15 | secret: 16 | secretName: nginx-credentials 17 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/secrets/secrets.txt: -------------------------------------------------------------------------------- 1 | Agenda: 2 | --- 3 | . Why to use Secrets & Use-cases 4 | . What is a Secret 5 | . How to Implement Secret 6 | ---------------- 7 | 8 | #1) 9 | Why to use Secrets & Use-cases? 10 | -- 11 | 1. Your containerized application contains some sensitive data like username/password, ssh keys, authentication tokens ..etc. and you don't want to put them in .yaml configuration files 12 | 13 | 2. Applicatin may need client certificate file to communicate to other service. 14 | 15 | 3. You don't want to push such sensitive data in your Docker images 16 | 17 | 4. You don't want to hard-code your applicatin configurations. Instead, pass them dynamically when pods/containers get created. example.. application server configuration files, database authentications, environment details, passwords.properties ..etc 18 | 19 | 20 | 21 | Question - How do you reduce the risk of accidental exposure of confidential information. 22 | How do you manage sensitive data like Keys, username/password's, tokens ..etc. in Kubernetes? 23 | 24 | 25 | #2) 26 | What is a Secret? 27 | -- 28 | Kubernetes Secret is an Object that let you store and manage small amount of sensitive information, such as passwords, OAuth tokens, and ssh keys ..etc. 29 | 30 | 31 | #3) 32 | How to Implement a Secret and Consume in Kubernetes? 33 | -- 34 | Overview: 35 | . 
Using Secrets, we can secure small amount of sensitive data Keys, username/password's, tokens ..etc. 36 | . Reduces risk of exposing sensitive data 37 | 38 | . Secrets are stored outside of the pods/containers. Usually if we want some information to be available in 39 | containers, we write that in Pod manifest file. What if it is sensitive.. that's where we think about Secrets. Once secrets are created, you can consume it in any number of pods/containers 40 | 41 | . We need to create the secret before we consume in the cluster 42 | 43 | . Stored inside Etcd database on Kubernetes Master 44 | 45 | . Size limit 1 MB 46 | 47 | # 48 | Secret Types 49 | -- 50 | There are multiple Secret types. For example we have.., 51 | Generic - User defined key value pairs which can be created from file/dir/command line 52 | Docker-registry secret - To store the credentials for accessing a Docker registry for images 53 | TLS secret - For storing a certificate and its associated key that are typically used for TLS 54 | SSH authentication secret - For storing keys used in SSH authentication 55 | 56 | # 57 | There are two ways to create Secrets in Kubernetes 58 | 1. Using kubectl CLI 59 | 2. Using .yaml configuration files 60 | 61 | 1. Using kubectl CLI 62 | Syntax: kubectl create secret [TYPE] [NAME] [DATA] 63 | 64 | $ kubectl create secret generic nginx-creds --from-literal=username=admin --from-literal=password=123abc 65 | $ kubectl get secrets 66 | NAME TYPE DATA AGE 67 | nginx-creds Opaque 2 47s 68 | $ kubectl describe secret nginx-creds 69 | 70 | 2. Using .yaml configuration files 71 | - before creating secret using .yaml configuration file, it's good practice to first encrypt the values using base64, so that others can't see the values in the .yaml file. 72 | 73 | $ echo -n admin | base64 74 | YWRtaW4= 75 | $ echo -n 123abc | base64 76 | MTIzYWJj 77 | 78 | - Now, lets create .yaml file and place these encrypted values for username and password keys. 79 | $ vim nginx-credentials.yaml 80 | -- 81 | apiVersion: v1 82 | kind: Secret 83 | metadata: 84 | name: nginx-credentials 85 | type: Opaque 86 | data: 87 | username: YWRtaW4= 88 | password: MTIzYWJj 89 | 90 | - Execute 'nginx-credentials.yaml' using kubectl command 91 | $ kubectl create -f nginx-credentials.yaml 92 | secret/nginx-credentials created 93 | $ kubectl get secrets 94 | NAME TYPE DATA AGE 95 | nginx-credentials Opaque 2 74s 96 | 97 | # 98 | Using Secrets inside the pods OR Consuming Secrets in pods OR Injecting Secrets in pods 99 | There are different ways using which you can consume or inject Secrets in the pods 100 | 1. Injecting Secrets as Volumes in the pods 101 | 2. Injecting Secrets as Environment variables in the pod 102 | 103 | 1. 
Injecting Secrets as Volumes in the pods 104 | 105 | - create configuration file to mount the 'nginx-credentials' Secret into the pod so that pod container can consume/use it 106 | vim nginx-pod-volume.yaml 107 | -- 108 | apiVersion: v1 109 | kind: Pod 110 | metadata: 111 | name: test-pd 112 | spec: 113 | containers: 114 | - image: nginx 115 | name: test-container 116 | volumeMounts: 117 | - name: nginx-credentials-vol 118 | mountPath: /etc/nginx-credentials 119 | readOnly: true 120 | volumes: 121 | - name: nginx-credentials-vol 122 | secret: 123 | secretName: nginx-credentials 124 | 125 | - execute the configuration file to create the pod 126 | $ kubectl get pods 127 | NAME READY STATUS RESTARTS AGE 128 | test-pd 1/1 Running 0 3m29s 129 | - login into the pod to check if the Secret is mounted at '/etc/nginx-credentials' as specified in the configuration 130 | file. If you see below, there are two files called 'username' and 'password' with decrypted values. Our nginx application can read username fron 'username' and password from 'password' file. For each key that we defined in the Secret configuration file 'nginx-credentials.yaml' above, it creates files. 131 | 132 | $ kubectl exec -it test-pd -- /bin/bash 133 | $ cd /etc/nginx-credentials 134 | $ ls 135 | password username 136 | $ cat username 137 | admin 138 | $ cat password 139 | 123abc 140 | 141 | 2. Injecting Secrets as Environment variables in the pod. When the Kubernetes creates the pod, it refers the Secret 'nginx-credentials' and sets environment variables called USERNAME and PASSWORD with valued 'admin' and '123abc' 142 | 143 | - Create pod configuration file to create the pod to consume the Secret 'nginx-credentials' 144 | $ vim nginx-pod-env.yaml 145 | apiVersion: v1 146 | kind: Pod 147 | metadata: 148 | name: test-pd-env 149 | spec: 150 | containers: 151 | - image: nginx 152 | name: test-container 153 | env: 154 | - name: USERNAME 155 | valueFrom: 156 | secretKeyRef: 157 | name: nginx-credentials 158 | key: username 159 | - name: PASSWORD 160 | valueFrom: 161 | secretKeyRef: 162 | name: nginx-credentials 163 | key: password 164 | 165 | - Execute the configuration file to create the pod 166 | $ kubectl create -f nginx-pod-env.yaml 167 | $ kubectl get pods 168 | NAME READY STATUS RESTARTS AGE 169 | test-pd-env 1/1 Running 0 6s 170 | 171 | - Login into the pod to check if Environment variables are exposed or created. 172 | $ kubectl exec -it test-pd-env -- /bin/bash 173 | 174 | $ env | grep USERNAME 175 | USERNAME=admin 176 | $ env | grep PASSWORD 177 | PASSWORD=123abc 178 | 179 | # 180 | Decoding secrets 181 | $ echo 'YWRtaW4=' | base64 --decode 182 | admin 183 | $ echo 'MTIzYWJj' | base64 --decode 184 | 123abc 185 | 186 | # 187 | Overriging a file in the pod/container OR bundling a configuration file in the secret and injecting into a pod container at a particular location. 
188 | apiVersion: v1 189 | kind: Secret 190 | metadata: 191 | name: nginx-credentials 192 | type: Opaque 193 | data: 194 | nginx.conf: | 195 | c2VydmVyLW5hbWU9d3d3LXNlcnZlci1uYW1l 196 | cG9ydD04MA== 197 | 198 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/test/file-sercret.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: nginx-credentials 5 | type: Opaque 6 | data: 7 | nginx.conf: | 8 | c2VydmVyLW5hbWU9d3d3LXNlcnZlci1uYW1l 9 | cG9ydD04MA== 10 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/test/pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: test-pd 5 | spec: 6 | containers: 7 | - image: nginx 8 | name: test-container 9 | volumeMounts: 10 | - name: nginx-credentials-vol 11 | mountPath: /etc/nginx-credentials 12 | readOnly: true 13 | volumes: 14 | - name: nginx-credentials-vol 15 | secret: 16 | secretName: nginx-credentials 17 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/vol/gcepd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: test-pd 5 | spec: 6 | containers: 7 | - image: nginx 8 | name: test-container 9 | volumeMounts: 10 | - mountPath: /test-pd 11 | name: test-volume 12 | volumes: 13 | - name: test-volume 14 | # This GCE PD must already exist. 15 | gcePersistentDisk: 16 | pdName: my-data-disk 17 | fsType: ext4 18 | -------------------------------------------------------------------------------- /6.Kubernetes/Misc/vol/volumes.txt: -------------------------------------------------------------------------------- 1 | Kubernetes Storage Volumes Agenda: 2 | --- 3 | . Statefull Vs Stateless Applications 4 | . Why to use Volumes & Use-cases 5 | . What is a Volume 6 | . Different types of Volumes 7 | . How to Implement Volumes 8 | 9 | ---------------- 10 | #1) 11 | Statefull Vs Stateless Applications 12 | 13 | 14 | #2) 15 | Why to use Volume? 16 | CASE-A: Statefull applications require data to be persisted somewhere. 17 | 18 | . Kubernetes pods are ephemeral due to (Node failures, scaling, rolling updates, self-healing ..etc.) 19 | . So, how do you make sure that applications data is persisted so that it can resume from previous state? 20 | . How can two containers in the same pod share the data? 21 | . How can data persist through out life-cycle of a pod? 22 | . How can data persist beyond pod life? 23 | . If you take Physical machines and Virtual machines, sometimes you find TBs of volumes attached. But when it comes to pods and containers how do you do it? Initially, containers are designed to run stateless applications. Now it has become hot topic about how you make containers run statfull applications 24 | 25 | Goal: In summary, how do you handle the application data running inside pods using various storage options available in kubernetes. 26 | 27 | 28 | #3) 29 | What is a Volume? 30 | A Kubernetes volume is essentially a directory accessible to all containers running in a pod. The directory can be local to the Node where the Pod is created or in the cloud. 31 | 32 | #4) 33 | Volume Types? 34 | The Volume types are categorized in to two types. 35 | 1. Ephemeral - same life as pods (emptyDir) 36 | 2. 
50 | 51 | # 52 | hostPath Volume 53 | - hostPath exposes one of the directories of the worker node as a volume inside the pod. 54 | - data inside the volume remains even after the pod is terminated. 55 | - if the pod gets scheduled on the same node again and the hostPath exists, it will immediately pick up from the same state. 56 | - be careful with this in production: the hostPath directory lives on one particular node, so when pods get rescheduled to another node they may not get the same previous data; every pod may end up with its own, inconsistent data. 57 | 58 | primary usage- 59 | . you are using an NFS-like external mount point and that data is backed up. 60 | . you don't want to pay for cloud storage services. 61 | 62 | # 63 | gcePersistentDisk 64 | - a gcePersistentDisk volume mounts a Google Compute Engine persistent disk into the pod 65 | - volume data is persisted even if the pod or node is terminated for any reason. 66 | - restrictions- 67 | 1. you must create the gcePersistentDisk before you use it. 68 | 2. nodes must be GCE VMs 69 | 3. those VMs need to be in the same GCE project and zone as the PersistentDisk 70 | 71 | 1. Creating a GCE persistent disk 72 | Before you can use a GCE persistent disk with a Pod, you need to create it. Let's create it using gcloud. 73 | $ gcloud compute disks create --size=10GB --zone=us-central1-c --project=kubernetes-283202 my-data-disk 74 | 75 | 2. Create a pod object which uses the gcePersistentDisk 'my-data-disk' 76 | vim gcepd.yaml 77 | --- 78 | apiVersion: v1 79 | kind: Pod 80 | metadata: 81 | name: test-pd 82 | spec: 83 | containers: 84 | - image: nginx 85 | name: test-container 86 | volumeMounts: 87 | - mountPath: /test-pd 88 | name: test-volume 89 | volumes: 90 | - name: test-volume 91 | # This GCE PD must already exist. 92 | gcePersistentDisk: 93 | pdName: my-data-disk 94 | fsType: ext4 95 | 96 | $ kubectl create -f gcepd.yaml 97 | pod/test-pd created 98 | 99 | 3. Check which node this pod is running on 100 | $ kubectl get pod -o wide 101 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE 102 | test-pd 1/1 Running 0 4m15s 10.4.2.8 gke-cluster-1-default-pool-26be0a0b-mch0 103 | 104 | 4. The pod is scheduled on gke-cluster-1-default-pool-26be0a0b-mch0. If you check the disk in the Google Cloud console, it is marked as being used by this node 'gke-cluster-1-default-pool-26be0a0b-mch0'. 105 | 106 | 5. Display the complete details of the pod which is using the gcePersistentDisk. Observe that it is using a persistent disk and that its name is my-data-disk. 107 | $ kubectl describe pod test-pd 108 | 109 | TESTING: 110 | --- 111 | Let's test the use case. The purpose is to have data persistence irrespective of node or pod failure due to unexpected shutdown, reboot or any other interruption to the pod on the node. Overall, we want our data to be safe across all pod instances. 112 | 113 | To test this: 1. create a file in the mount. 2. delete the pod.
114 | Even after the pod is deleted, the data/file should still be present in the volume. Then let's create the pod again and check that we get that file back. 115 | 116 | 1. create a sample file in the pod 117 | $ kubectl exec -it test-pd -- /bin/bash 118 | $ cd /test-pd/ 119 | $ echo "wiculty" > test.html 120 | $ cat test.html 121 | wiculty 122 | $ exit 123 | 124 | 2. delete the pod 125 | $ kubectl delete pod test-pd 126 | "test-pd" deleted 127 | note: now you can see that 'my-data-disk' is not being used by any pod. 128 | 129 | 3. let's create the same pod again and see if we get the data from the Volume 130 | $ kubectl create -f gcepd.yaml 131 | $ kubectl get pods 132 | NAME READY STATUS RESTARTS AGE 133 | test-pd 0/1 ContainerCreating 0 7s 134 | note: you can see that 'my-data-disk' is being used by the node on which the pod is running. 135 | 136 | 4. now let's check if the new pod gets the same data, i.e. if the volume is mounted. 137 | $ kubectl exec -it test-pd -- /bin/bash 138 | $ cd /test-pd/ 139 | $ ls 140 | test.html 141 | $ cat test.html 142 | wiculty 143 | # 144 | Delete the disk once you are done and it is no longer attached to any node, for example: $ gcloud compute disks delete my-data-disk --zone=us-central1-c 145 | 146 | 147 | 148 | ========================== 149 | # CONCEPT-1 150 | Creating a pod and exposing it to the public with a NodePort type service using the kubectl CLI 151 | --- 152 | 1. creating the pod 153 | $ kubectl run nginx-pod --image=nginx --port=80 154 | 155 | 2. exposing the pod using a NodePort type service 156 | $ kubectl expose pod nginx-pod --name=nginx-svc --type=NodePort --port=80 157 | 158 | 3. Access/test the application 159 | $ kubectl get svc 160 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 161 | nginx-svc NodePort 10.97.142.133 80:32422/TCP 4m44s 162 | 163 | --> As we discussed in the class, take the port which is exposed to the outside world. Note that --port=80 used in the expose command is the service's port. Port 32422 is exposed to the outside world and is mapped to service port 80, and the service port is in turn mapped to container port 80 (32422:80:80). So, to access the pod, we have to use the worker node's public IP and the public port, i.e. 32422. 164 | 165 | --> check your worker node's public/external IP in the Google Cloud console. In this case it is: 104.198.19.101 166 | 167 | --> access the nginx server using the below command or from the browser. 168 | $ curl http://104.198.19.101:32422 169 | 170 | 171 | # CONCEPT-2 172 | Creating and exposing a deployment using the kubectl CLI 173 | --- 174 | 1. create a deployment using the kubectl command with 3 pods and the nginx image. 175 | $ kubectl create deployment nginx-deployment --image=nginx --replicas=3 176 | 177 | 2. check if the deployment is created 178 | $ kubectl get deployment 179 | NAME READY UP-TO-DATE AVAILABLE AGE 180 | nginx-deployment 3/3 3 3 6s 181 | 182 | 3. expose the deployment to the public by creating a NodePort type service. The below command creates a service called 'nginx-service' and binds to it all the pods created by the deployment 'nginx-deployment'. Now we have 3 pods behind this service, so it can do load balancing across these 3 pods. 183 | $ kubectl expose deployment nginx-deployment --type=NodePort --port=80 --name=nginx-service 184 | 185 | 4. check the service 186 | $ kubectl get svc 187 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 188 | nginx-service NodePort 10.102.229.245 80:31327/TCP 5s 189 | 190 | 5. access/test the application using the below command or the browser. Note that the publicly exposed port in this case is 31327. We can access the application using the worker node's public IP, which you can get from the Google Cloud console. In this case the public IP is 104.198.19.101. 191 | $ curl http://104.198.19.101:31327 192 |
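For reference, the same deployment and service can also be written declaratively in a single YAML file. This is only a sketch (the file name and the 'app: nginx-deployment' labels are chosen here for illustration; adjust them to your own naming), showing a Deployment with 3 nginx replicas plus a NodePort service in front of it:

vim nginx-deployment-svc.yaml
--
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-deployment
  ports:
  - port: 80
    targetPort: 80

$ kubectl create -f nginx-deployment-svc.yaml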
-------------------------------------------------------------------------------- /6.Kubernetes/generate-k8s-token-worker-node-join-command.txt: -------------------------------------------------------------------------------- 1 | # 2 | 1.) Create a new Bootstrap token and construct the worker-node join command 3 | $ kubeadm token create --print-join-command 4 | 5 | Output: 6 | sudo kubeadm join 10.128.0.18:6443 --token 6fy33p.l2b4am7ibevz1ye8 --discovery-token-ca-cert-hash sha256:de57d9e08877db501a8b503db3ee91596f8f5657878c3087bc0343ece7df3eb2 7 | 8 | 9 | Optional: 10 | ========= 11 | # 12 | 2.) List existing 'Bootstrap Tokens' and the 'discovery token CA certificate hash value' & construct the worker-node join command 13 | 14 | Example Syntax: 15 | kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> 16 | 17 | 1. List the tokens 18 | $ kubeadm token list 19 | 20 | Output: 0isp8p.by0mwcklmnqpdbq1 21 | 22 | 2. List the discovery token CA cert hash 23 | $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' 24 | 25 | Output: 1155f6468f92d60886f72a3ada57ac97edcac1227e8af6b0b1adeda9d9305824 26 | 27 | 28 | 29 | Know about your cluster: 30 | ========================= 31 | 3. Check the Kubernetes cluster 32 | $ kubectl cluster-info 33 | 34 | Output: Kubernetes master is running at https://10.128.0.7:6443 35 | 36 | Example: 37 | kubeadm join 10.128.0.7:6443 --token 0isp8p.by0mwcklmnqpdbq1 --discovery-token-ca-cert-hash sha256:1155f6468f92d60886f72a3ada57ac97edcac1227e8af6b0b1adeda9d9305824 38 | -------------------------------------------------------------------------------- /Misc/Docker/Misc: -------------------------------------------------------------------------------- 1 | # 2 | Manage/Run Docker as a non-root user 3 | ------OR------ 4 | Run docker commands without 'sudo' 5 | ==================================== 6 | 1. Create the "docker" group 7 | $ sudo groupadd docker 8 | 9 | 2. Add the user to the group 10 | $ sudo gpasswd -a gamut docker (add) 11 | 12 | 3. Log out and log back in so that your group membership 13 | is re-evaluated; after that, docker commands should work without sudo (see the quick check after this list). 14 | 15 | -- verification and cleanup 16 | 17 | 4. Check that the group exists: 18 | $ grep docker /etc/group 19 | 20 | 5. Delete the group 21 | $ sudo groupdel docker 22 | 23 | 6. Remove the user from the group 24 | $ sudo gpasswd -d gamut docker (remove)
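A quick check, run as the 'gamut' user after logging back in (the 'hello-world' image here is just a convenient public test image, not part of these notes; any image will do):

$ groups                    # 'docker' should now appear in the list
$ docker run hello-world    # should pull and run without sudo
$ docker ps -a              # listing containers should also work without sudo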
25 | 26 | -------------------------------------------------------------------------------- /Misc/VIM/VIM_Shortcuts: -------------------------------------------------------------------------------- 1 | open a file with the vim editor 2 | vim <file-name> 3 | 4 | vim -O file1 file2 --> open two files side by side in split windows 5 | ctrl+ww --> switch between windows 6 | :qa --> quit all windows 7 | :wqa --> save and quit all windows 8 | 9 | i - insert mode, to start editing 10 | esc - back to normal mode 11 | 12 | show line numbers: 13 | :set number 14 | 15 | remove line numbers: 16 | :set nonumber 17 | 18 | go to a line: 19 | :20 20 | 21 | dd - delete a line 22 | u - undo 23 | ctrl+r - redo 24 | 25 | yy - copy a line 26 | p - paste 27 | 28 | shift+d - delete from the cursor to the end of the line 29 | 30 | shift+g - go to the last line of the file 31 | gg - go to the first line of the file 32 | 33 | L - go to the last line of the current screen 34 | H - go to the first line of the current screen 35 | M - go to the middle of the screen 36 | 37 | :w - save your work 38 | :wq - save and quit 39 | :x - save and quit 40 | 41 | 0 - go to the first character of the current line 42 | $ - go to the last character of the current line 43 | w - move word by word 44 | 45 | delete a char: x 46 | delete a word: dw 47 | [to repeat the delete press "." ] 48 | 49 | :%d --> delete all content in the file 50 | 51 | /string --> search for a string 52 | [ n --> jump to the next match in the forward direction 53 | N --> jump to the next match in the backward direction ] 54 | 55 | find and replace: 56 | :% s/war/jar/g (all occurrences) 57 | :% s/war/jar/gc (one by one, interactively) 58 | 59 | shift+d --> delete everything from the cursor to the end of the line 60 | 61 | ctrl+f = next page / scroll page by page in the forward direction 62 | ctrl+b = previous page / scroll page by page in the backward direction 63 | 64 | ================= 65 | If you want to configure something permanently for the vim editor, create $USER_HOME/.vimrc and define your configurations there. 66 | 67 | Examples: 68 | ========== 69 | set number 70 | set hlsearch 71 | set incsearch 72 | set tabstop=4 73 | 74 | 75 | 76 | -------------------------------------------------------------------------------- /Misc/VIM/_vimrc: -------------------------------------------------------------------------------- 1 | set number 2 | set tabstop=4 3 | set hlsearch 4 | set incsearch 5 | --------------------------------------------------------------------------------