├── .DS_Store ├── 001 - Linux ├── AWK.md ├── README.md └── SED.md ├── 002 - BashScripting ├── README.md ├── Sample_Scripts.txt ├── Script-to-install-jenkins-Redhat └── Script-to-install-jenkins-Ubuntu ├── 003 - Git └── Git Documents.md ├── 004 - Networking └── Networking.md ├── 005 - Docker └── docker.md ├── 006 - kubernetes └── K8s.md ├── 007 - Ansible └── ansible.md ├── 008 - Terraform └── README.md ├── 009 - Jenkins ├── Install-jenkins-in-redhat │ └── jenkins.sh ├── Install-jenkins-in-ubuntu │ └── jenkins.sh └── README.md ├── 010 - AWS ├── 001 Aws cheatsheet.md ├── 002 Aws intro.md ├── 003 Ec2.md ├── 004 IAM.md ├── 005 S3.md ├── 006 vpc.md ├── 007 EBS.md ├── 008 RDS.md ├── 009 AWS storage services .md ├── 010 Cloudformation.md ├── 011 CodeBuild.md ├── 012 Codecommit.md ├── 013 Codepipeline.md ├── 014 Elastic Beanstalk.md ├── 015 Elastic Load Balancer.md ├── 016 codedeploy.md └── Cloudwatch.md ├── 011 - Prometheus-Grafana ├── README.md ├── docker │ ├── docker-compose.yaml │ └── prometheus │ │ └── prometheus.yml ├── install-grafana.sh ├── install-node-exporter.sh ├── install-prometheus.sh ├── node-exporter-init.dservice │ ├── README.md │ ├── install-node-exporter.sh │ ├── node-exporter-init.d.service │ └── node_exporter.sh ├── node-exporter.service ├── prometheus.service ├── prometheus.yml ├── prometheus_ec2.yml ├── prometheus_relabeeling.yml └── prometheus_serviceDiscovery.yml ├── 012 - Projects&SampleUseCases ├── Sample-UseCases.md ├── project 1.md ├── project 2.md ├── project 3.md ├── project 4.md ├── project 5.md ├── project 6.md ├── project 7.md └── project 8.md ├── 013 - AWS-Interview Preparation ├── .DS_Store ├── ADVANCED.md ├── AWS-CLI.md ├── CLOUDFORMATION.md ├── CLOUDFRONT.md ├── CLOUDTRAIL.md ├── CLOUDWATCH.md ├── CODEBUILD.md ├── CODEDEPLOY.md ├── CODEPIPELINE.md ├── DYNAMODB.md ├── EC2.md ├── ECR.md ├── ECS.md ├── EKS.md ├── ELASTIC BEANSTALK.md ├── ELB.md ├── IAM.md ├── LAMBDA.md ├── MIGRATION.md ├── RDS.md ├── ROUTE53.md ├── S3.md ├── SCENARIO BASED.md ├── SYSTEMS MANAGER.md ├── TERRAFORM.md └── VPC.md ├── AWS-Introduction.md ├── DevOps-Introduction.md ├── README.md └── SDLC.md /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zen-class/zen-class-devops-documentation/55a8b27cfc0e1789fb9844c072c567ab5b6b3c2c/.DS_Store -------------------------------------------------------------------------------- /001 - Linux/AWK.md: -------------------------------------------------------------------------------- 1 | # AWK 2 | 3 | - A text pattern scanning and processing language, created by Aho, Weinberger & Kernighan (hence 4 | the name). It can be quite sophisticated so this is NOT a complete guide, but should give you a taste 5 | of what awk can do. It can be very simple to use, and is strongly recommended, there is always 'man awk'. 6 | 7 | **AWK basics** 8 | 9 | - An awk program operates on each line of an input file. It can have an optional BEGIN{} section of 10 | commands that are done before processing any content of the file, then the main {} section works 11 | on each line of the file, and finally there is an optional END{} section of actions that happen after 12 | the file reading has finished: 13 | ``` 14 | BEGIN { …. initialization awk commands …} 15 | { …. awk commands for each line of the file…} 16 | END { …. 
finalization awk commands …} 17 | ``` 18 | - For each line of the input file, it sees if there are any pattern-matching instructions, in which case it 19 | only operates on lines that match that pattern, otherwise it operates on all lines. These 20 | 'pattern-matching' commands can contain regular expressions as for grep. The awk commands can 21 | do some quite sophisticated maths and string manipulations, and awk also supports associative 22 | arrays. 23 | 24 | - AWK sees each line as being made up of a number of fields, each being separated by a 'field 25 | separator'. By default, this is one or more space characters, so the line: 26 | 27 | this is a line of text 28 | 29 | - contains 6 fields. Within awk, the first field is referred to as $1, the second as $2, etc. and the whole 30 | line is called $0. The field separator is set by the awk internal variable FS, so if you set FS=”:” then 31 | it will divide a line up according to the position of the ':' which is useful for files like /etc/passwd 32 | etc. Other useful internal variables are NR which is the current record number (ie the line number of 33 | the input file) and NF which is the number of fields in the current line. 34 | 35 | - AWK can operate on any file, including std-in, in which case it is often used with the '|' command, 36 | for example, in combination with grep or other commands. For example, if I list all the files in a 37 | directory like this: 38 | ``` 39 | [mijp1@monty RandomNumbers]$ ls -l 40 | total 2648 41 | -rw------- 1 mijp1 mijp1 12817 Oct 22 00:13 normal_rand.agr 42 | -rw------- 1 mijp1 mijp1 6948 Oct 22 00:17 random_numbers.f90 43 | -rw------- 1 mijp1 mijp1 470428 Oct 21 11:56 uniform_rand_231.agr 44 | -rw------- 1 mijp1 mijp1 385482 Oct 21 11:54 uniform_rand_232.agr 45 | -rw------- 1 mijp1 mijp1 289936 Oct 21 11:59 uniform_rand_period_1.agr 46 | -rw------- 1 mijp1 mijp1 255510 Oct 21 12:07 uniform_rand_period_2.agr 47 | -rw------- 1 mijp1 mijp1 376196 Oct 21 12:07 uniform_rand_period_3.agr 48 | -rw------- 1 mijp1 mijp1 494666 Oct 21 12:09 uniform_rand_period_4.agr 49 | -rw------- 1 mijp1 mijp1 376286 Oct 21 12:05 uniform_rand_period.agr 50 | ``` 51 | - I can see the file size is reported as the 5th column of data. So if I wanted to know the total size of all 52 | the files in this directory I could do: 53 | ``` 54 | [mijp1@monty RandomNumbers]$ ls -l | awk 'BEGIN {sum=0} {sum=sum+$5} END 55 | {print sum}' 56 | 2668269 57 | ``` 58 | - **Note** that 'print sum' prints the value of the variable sum, so if sum=2 then 'print sum' gives the 59 | output '2' whereas 'print $sum' will print '1' as the 2nd field contains the value '1'. 60 | 61 | - Hence it would be straightforwards to write an awk command that would calculate the mean and 62 | standard deviation of a column of numbers – you accumulate 'sum_x' and 'sum_x2' inside the main 63 | part, and then use the standard formulae to calculate mean and standard deviation in the END part. 64 | 65 | - AWK provides support for loops (both 'for' and 'while') and for branching (using 'if'). 
So if you 66 | wanted to trim a file and only operate on every 3rd line for instance, you could do this: 67 | ``` 68 | [mijp1@monty RandomNumbers]$ ls -l | awk '{for (i=1;i<3;i++) {getline}; 69 | print NR,$0}' 70 | 3 -rw------- 1 mijp1 mijp1 6948 Oct 22 00:17 random_numbers.f90 71 | 6 -rw------- 1 mijp1 mijp1 289936 Oct 21 11:59 uniform_rand_period_1.agr 72 | 9 -rw------- 1 mijp1 mijp1 494666 Oct 21 12:09 uniform_rand_period_4.agr 73 | 10 -rw------- 1 mijp1 mijp1 376286 Oct 21 12:05 uniform_rand_period.agr 74 | ``` 75 | - where the 'for' loop uses a 'getline' command to move through the file, and only prints out every 3rd 76 | line. Note that as the number of lines of the file is 10, which is not divisible by 3, the final command 77 | finishes early and so the final 'print $0' command prints line 10, which you can see as we also print 78 | out the line number using the NR variable. 79 | 80 | # AWK Pattern Matching 81 | 82 | - AWK is a line-oriented language. The pattern comes first, and then the action. Action statements are 83 | enclosed in { and }. Either the pattern may be missing, or the action may be missing, but, of course, 84 | not both. If the pattern is missing, the action is executed for every single record of input. A missing 85 | action prints the entire record. 86 | 87 | - AWK patterns include regular expressions (uses same syntax as 'grep -E') and combinations using 88 | the special symbols '&&' means 'logical AND', '||' means 'logical OR', '!' means 'logical NOT'. You 89 | can also do relational patterns, groups of patterns, ranges, etc. 90 | 91 | # AWK control statements include: 92 | ``` 93 | if (condition) statement [ else statement ] 94 | while (condition) statement 95 | do statement while (condition) 96 | for (expr1; expr2; expr3) statement 97 | for (var in array) statement 98 | break 99 | continue 100 | exit [ expression ] 101 | ``` 102 | 103 | # AWK input/output statements include: 104 | ``` 105 | close(file [, how]) Close file, pipe or co-process. 106 | 107 | getline Set $0 from next input record. 108 | 109 | getline file Prints expressions on file. 124 | 125 | printf fmt, expr-list Format and print. 126 | ``` 127 | - NB The printf command lets you specify the output format more closely, using a C-like syntax, for 128 | example, you can specify an integer of given width, or a floating point number or a string, etc. 129 | 130 | # AWK numeric functions included 131 | ``` 132 | atan2(y, x) Returns the arctangent of y/x in radians. 133 | 134 | cos(expr) Returns the cosine of expr, which is in radians. 135 | 136 | exp(expr) The exponential function. 137 | 138 | int(expr) Truncates to integer. 139 | 140 | log(expr) The natural logarithm function. 141 | 142 | Rand() Returns a random number N, between 0 and 1, such that 0 <= N < 1. 143 | 144 | sin(expr) Returns the sine of expr, which is in radians. 145 | 146 | sqrt(expr) The square root function. 147 | 148 | srand([expr]) Uses expr as a new seed for the random number generator. If no expr is 149 | provided, the time of day is used. 150 | ``` 151 | # AWK string functions include: 152 | ``` 153 | gsub(r, s [, t]) For each substring matching the regular expression r in the string t, 154 | substitute the string s, and return the number of substitutions. If t is not 155 | supplied, use $0. 156 | 157 | index(s, t) Returns the index of the string t in the string s, or 0 if t is not present. 158 | 159 | length([s]) Returns the length of the string s, or the length of $0 if s is not 160 | supplied. 
161 | 162 | match(s, r [, a]) Returns the position in s where the regular expression r occurs, or 0 if r 163 | is not present. 164 | 165 | split(s, a [, r]) Splits the string s into the array a using the regular expression r, and 166 | returns the number of fields. If r is omitted, FS is used instead. 167 | 168 | sprintf(fmt, expr-list) Prints expr-list according to fmt, and returns the resulting string. 169 | 170 | strtonum(str) Examines str, and returns its numeric value. 171 | 172 | sub(r, s [, t]) Just like gsub(), but only the first matching substring is replaced. 173 | 174 | substr(s, i [, n]) Returns the at most n-character substring of s starting at i. If n is omitted, the rest of s is used. 175 | 176 | tolower(str) Returns a copy of the string str, with all the upper-case characters in str translated to their corresponding lower-case counterparts. Non-alphabetic characters are left unchanged. 177 | 178 | toupper(str) Returns a copy of the string str, with all the lower-case characters in str translated to their corresponding upper-case counterparts. Non-alphabetic characters are left unchanged. 179 | ``` 180 | # AWK command-line and usage 181 | 182 | - You can pass variables into an awk program using the '-v' flag as many times as necessary, e.g. 183 | ``` 184 | awk -v skip=3 '{for (i=1;i new_file 31 | sed -i -e 's/input/output/' my_file 32 | ``` 33 | # SED and regexps 34 | 35 | - What if one of the characters you wish to use in the search command is a special symbol, like '/' 36 | (e.g. in a filename) or '*' etc? Then you must escape the symbol just as for grep (and awk). Say you 37 | want to edit a shell scripts to refer to /usr/local/bin and not /bin any more, then you could do this 38 | ``` 39 | sed -e 's/\/bin/\/usr\/local\/bin/' my_script > new_script 40 | ``` 41 | 42 | - What if you want to use a wildcard as part of your search – how do you write the output string? You 43 | need to use the special symbol '&' which corresponds to the pattern found. So say you want to take 44 | every line that starts with a number in your file and surround that number by parentheses: 45 | ``` 46 | sed -e 's/[0-9]*/(&)/' my_file 47 | ``` 48 | - where [0-9] is a regexp range for all single digit numbers, and the '*' is a repeat count, means any 49 | number of digits. 50 | 51 | - You can also use positional instructions in your regexps, and even save part of the match in a 52 | pattern buffer to re-use elsewhere. 53 | 54 | # Other SED commands 55 | The general form is 56 | ``` 57 | sed -e '/pattern/ command' my_file 58 | ``` 59 | 60 | - where 'pattern' is a regexp and 'command' can be one of 's' = search & replace, or 'p' = print, or 'd' = 61 | delete, or 'i'=insert, or 'a'=append, etc. Note that the default action is to print all lines that do not 62 | match anyway, so if you want to suppress this you need to invoke sed with the '-n' flag and then you 63 | can use the 'p' command to control what is printed. So if you want to do a listing of all the 64 | sub-directories you could use 65 | ``` 66 | ls -l | sed -n -e '/^d/ p' 67 | ``` 68 | - as the long-listing starts each line with the 'd' symbol if it is a directory, so this will only print out 69 | those lines that start with a 'd' symbol. 70 | 71 | - Similarly, if you wanted to delete all lines that start with the comment symbol '#' you could use 72 | ``` 73 | sed -e '/^#/ d' my_file 74 | ``` 75 | 76 | - i.e. you can achieve the same effect in different ways! 
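- For example (assuming a file called my_file that contains some '#' comment lines), both of the following commands should print the file with its comment lines removed:
```
# Delete comment lines and rely on sed's default behaviour of printing the rest
sed -e '/^#/ d' my_file

# Suppress automatic printing (-n) and explicitly print only the non-comment lines
sed -n -e '/^#/!p' my_file
```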
77 | 78 | - You can also use the range form 79 | - ``` 80 | sed -e '1,100 command' my_file 81 | ``` 82 | - to execute 'command' on lines 1,100. You can also use the special line number '$' to mean 'end of 83 | file'. So if you wanted to delete all but the first 10 lines of a file, you could use 84 | ``` 85 | sed -e '11,$ d' my_file 86 | ``` 87 | 88 | - You can also use a pattern-range form, where the first regexp defines the start of the range, and the 89 | second the stop. So for instance, if you wanted to print all the lines from 'boot' to 'machine' in the 90 | a_file example you could do this: 91 | ``` 92 | sed -n -e '/boot$/,/mach/p' a_file 93 | ``` 94 | which will then only print out (-n) those lines that are in the given range given by the regexps. 95 | -------------------------------------------------------------------------------- /002 - BashScripting/README.md: -------------------------------------------------------------------------------- 1 | # Bash(Bourne Again Shell) 🐧 2 | 3 | ### What is Bash sripting 4 | Bash stands for Bourne Again Shell. A Bash Shell Script is a plain text file containing a set of various commands that we usually type in the command line. It is used to automate repetitive tasks on Linux. To automate day to day automation task, system admins write bash script in Linux system. 5 | Bash script has .sh extension but the extension is not mandatory. 6 | 7 | ### What is Shell 8 | A Shell is basically a command-line interpreter between user and kernel or a complete environment specially designed to run commands, shell scripts, and programs. 9 | 10 | ### Advantages of Shell Script 11 | - Easy to use 12 | - Time saving 13 | - Automated 14 | - Can be installed on all Linux system 15 | - Portable 16 | - Can run multiple commands 17 | 18 | ### Disadvantages of Shell Script 19 | - There may be errors in shell scripting that prove to be quite costly. 20 | - The programs in shell script are quite slow while executing and a new process is required for every shell command executed. 21 | - Different platforms in shell scripting may also have compatibility problems. 22 | 23 | ### First Bash script 24 | Bash script starts with **#!** referred as the shebang followed by **/bin/bash** it actually tells the path of the interpreter to execute the commands in the script. 25 | 26 | **echo**: echo is a built-in command in Bash, which is used to display the standard output by passing the arguments. It is the most widely used command for printing the lines of text/String to the screen. 27 | 28 | ### To Run bash use 29 | ``` 30 | bash script_name.sh 31 | OR 32 | ./script_name.sh 33 | ``` 34 | ### Script to print Hello Wrld! 35 | ``` 36 | #!/bin/bash 37 | echo "******************" 38 | echo "Hello World!" 
39 | echo "******************" 40 | ``` 41 | ### After execution 42 | 43 | ![image](https://user-images.githubusercontent.com/69889600/218807819-e1b42e78-0ec8-4d7c-8406-2fbc2a5dd229.png) 44 | 45 | ### Single line comment in bash use [**#**] 46 | 47 | ![image](https://user-images.githubusercontent.com/69889600/218809940-a93d1325-e79e-4bdc-a6d8-3191ea3d4224.png) 48 | 49 | ### After execution 50 | 51 | ![image](https://user-images.githubusercontent.com/69889600/218807819-e1b42e78-0ec8-4d7c-8406-2fbc2a5dd229.png) 52 | 53 | ### Multiple line comment in bash use [: ' commented text '] 54 | 55 | ![image](https://user-images.githubusercontent.com/69889600/218812434-edc051c9-7492-4b3d-94de-4c330a00990c.png) 56 | 57 | ### After execution 58 | 59 | ![image](https://user-images.githubusercontent.com/69889600/218812563-d1abbd60-109a-450a-812e-1070737f7437.png) 60 | 61 | ### Variables 62 | Variables are used to store information. 63 | 64 | ## To store information use below syntax 65 | 66 | ``` 67 | Variable-name="Variable-value" 68 | ``` 69 | ``` 70 | echo $variable-name 71 | ``` 72 | Set variable user="john" 73 | ``` 74 | echo $user ---> john 75 | ``` 76 | Print environment variables 77 | ``` 78 | echo "This is user ${user} in the team" -----> This is user john in the team 79 | ``` 80 | ### Command subsitution using back ticks 81 | Use back tick to print output of the command as shown below 82 | ``` 83 | echo "There are `wc -l > hello.txt` lines in hello.txt file" -----> There are 5 lines in hello.txt file 84 | ``` 85 | ### Command line arguments 86 | 87 | - $? ---- Exit status of last run command, 0 means success and non-zero indicates failure. 88 | - $0 ---- File name of our script 89 | - $1..$n ---- Script arguements 90 | - $# ----- number of args that our script was run with 91 | 92 | - $? ---- Exit status of last run command, 0 means success and anything else indicates failure. 93 | #### Example: 94 | ``` 95 | ls -l 96 | $? --> 0 (meaning command was succesful) 97 | lsss -l 98 | $? ---> Non-zero value (meaning commnad was not succesful or wrong command) 99 | ``` 100 | - $0 ---- File name of our script 101 | - $1..$n ---- Script arguements 102 | #### Example: 103 | ``` 104 | #!/bin/bash 105 | echo "Script name is $0" 106 | echo "First argument passed is $1" 107 | echo "Second argument passed is $2" 108 | ``` 109 | #### Run the above script, consider name of script is hello.sh and arguments are Hello and World 110 | ./hello.sh Hello World 111 | #### Output 112 | ``` 113 | Script name is hello.sh 114 | First argument passed is Hello 115 | Second argument passed is World 116 | ``` 117 | ### Quotes in Bash 118 | ``` 119 | echo "The user is $USER" --> The user is John 120 | echo 'The user is $USER' --> The user is $USED 121 | ``` 122 | ### Export variables in Bash 123 | In every user home directory there is a hidden file called .bashrc. If you place export variable and value this file. It will become permanent export variable. We often use it for setting envirnment variables like JAVA_HOME, MAVEN_HOME. 124 | If you want to add export variable globally for all user then edit **etc/profile** and add export variable. 
125 | ``` 126 | ls -a 127 | vi .bashrc (edit this file and update export variable) 128 | export JAVA_HOME="usr/bin/jvm" 129 | ``` 130 | ### Take User Input using read 131 | -p [will stay to take input] 132 | -sp [ will make password invisible when user enter] 133 | ``` 134 | #!/bin/bash 135 | echo "Enter your Name:" 136 | read Name 137 | echo "Enter username and password" 138 | read -p 'username: ' username 139 | read -sp 'password: ' psw 140 | ``` 141 | ### If else elif statements 142 | ``` 143 | If [condition] 144 | then 145 | something 146 | elif 147 | something 148 | else 149 | something 150 | fi 151 | ``` 152 | ### Operators in Bash 153 | - == 154 | - != 155 | - >= 156 | - <= 157 | - '>' 158 | - '<' 159 | - && 160 | - || 161 | - ! 162 | 163 | - b operator: Checks whether a file is a block special file or not. 164 | - c operator: Checks whether a file is a character special file or not. 165 | - d operator: This operator checks if the given directory exists or not. 166 | - e operator: This operator checks whether the given file exists or not. 167 | - r operator: This operator checks whether the given file has read access or not. 168 | - w operator: This operator check whether the given file has write access or not. 169 | - x operator: This operator check whether the given file has execute access or not. 170 | - s operator: This operator checks the size of the given file. 171 | 172 | ### For Loop 173 | ``` 174 | for variable in 175 | do 176 | something 177 | done 178 | ``` 179 | ### While Loop 180 | 181 | ``` 182 | while [condition] 183 | do 184 | something 185 | done 186 | ``` 187 | 188 | ### Bash commands 189 | 190 | Find shell in the terminal 191 | ``` 192 | echo $shell 193 | ``` 194 | 195 | 196 | -------------------------------------------------------------------------------- /002 - BashScripting/Sample_Scripts.txt: -------------------------------------------------------------------------------- 1 | Sample Scripts 2 | 3 | 1. Hello World script: 4 | 5 | #!/bin/bash 6 | echo "Hello World" 7 | 8 | 2. Create a directory and file: 9 | 10 | #!/bin/bash 11 | mkdir mydirectory 12 | cd mydirectory 13 | touch myfile.txt 14 | echo "Hello World" > myfile.txt 15 | 16 | 3. Display current date and time: 17 | 18 | #!/bin/bash 19 | echo "Current date and time: $(date)" 20 | 21 | 4. Rename files in a directory: 22 | 23 | #!/bin/bash 24 | for file in *.txt; do 25 | mv "$file" "${file%.txt}.doc" 26 | done 27 | 28 | 5. Find and replace text in a file: 29 | 30 | #!/bin/bash 31 | sed -i 's/oldtext/newtext/g' myfile.txt 32 | 33 | 34 | 6. Prompt user for input and perform calculation: 35 | 36 | #!/bin/bash 37 | echo "Enter a number:" 38 | read num1 39 | echo "Enter another number:" 40 | read num2 41 | result=$((num1 + num2)) 42 | echo "The result is: $result" 43 | 44 | 7. Display system information: 45 | 46 | #!/bin/bash 47 | echo "System information:" 48 | echo "Kernel version: $(uname -r)" 49 | echo "Hostname: $(hostname)" 50 | echo "CPU architecture: $(uname -m)" 51 | echo "Total memory: $(free -m | awk '/Mem/{print $2}') MB" 52 | echo "Disk usage: $(df -h / | awk '/\//{print $5}') used" 53 | 54 | 8. Count the number of lines in a file: 55 | 56 | #!/bin/bash 57 | echo "Enter the filename:" 58 | read filename 59 | if [ -f "$filename" ]; then 60 | lines=$(wc -l < "$filename") 61 | echo "The file $filename has $lines lines." 62 | else 63 | echo "Error: file not found." 64 | fi 65 | 66 | 67 | 9. 
Copy files from one directory to another: 68 | 69 | #!/bin/bash 70 | echo "Enter the source directory:" 71 | read source 72 | echo "Enter the destination directory:" 73 | read destination 74 | if [ -d "$source" ]; then 75 | cp -r "$source"/* "$destination" 76 | echo "Files copied successfully." 77 | else 78 | echo "Error: source directory not found." 79 | fi 80 | 81 | 10. Check if a website is up or down: 82 | 83 | #!/bin/bash 84 | echo "Enter the website URL:" 85 | read url 86 | if curl --output /dev/null --silent --head --fail "$url"; then 87 | echo "Website $url is up." 88 | else 89 | echo "Website $url is down." 90 | fi 91 | 92 | 11. Convert all files in a directory to lowercase: 93 | 94 | #!/bin/bash 95 | for file in *; do 96 | mv "$file" "${file,,}" 97 | done 98 | -------------------------------------------------------------------------------- /002 - BashScripting/Script-to-install-jenkins-Redhat: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | STATUS = `systemctl is-active jenkins` 3 | 4 | #Download jdk: 5 | sudo yum install -y java-11-openjdk 6 | 7 | #Install wget: 8 | sudo yum install -y wget 9 | 10 | #Download the repo: 11 | sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo 12 | 13 | #Import the required key: 14 | sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key 15 | 16 | #Install Jenkins: 17 | sudo yum install -y jenkins 18 | 19 | #Enable Jenkins: 20 | sudo systemctl enable jenkins 21 | 22 | #Start Jenkins: 23 | sudo systemctl start jenkins 24 | 25 | if [STATUS == 'active'] 26 | then 27 | echo "Jenkins is running" 28 | else 29 | echo "Jenkins is not running, starting the service" 30 | systemctl start jenkins 31 | fi 32 | -------------------------------------------------------------------------------- /002 - BashScripting/Script-to-install-jenkins-Ubuntu: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | STATUS = `systemctl is-active jenkins` 3 | 4 | sudo apt update -y 5 | 6 | #Download jdk: 7 | sudo apt install -y java-11-openjdk 8 | 9 | #Download the repo: 10 | curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \ 11 | /usr/share/keyrings/jenkins-keyring.asc > /dev/null 12 | 13 | #Import the required key: 14 | echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \ 15 | https://pkg.jenkins.io/debian-stable binary/ | sudo tee \ 16 | /etc/apt/sources.list.d/jenkins.list > /dev/null 17 | 18 | sudo apt-get update 19 | 20 | sudo apt-get install jenkins -y 21 | 22 | #Enable Jenkins: 23 | sudo systemctl enable jenkins 24 | 25 | #Start Jenkins: 26 | sudo systemctl start jenkins 27 | 28 | if [STATUS == 'active'] 29 | then 30 | echo "Jenkins is running" 31 | else 32 | echo "Jenkins is not running, starting the service" 33 | systemctl start jenkins 34 | fi 35 | -------------------------------------------------------------------------------- /004 - Networking/Networking.md: -------------------------------------------------------------------------------- 1 | # Networking 2 | 3 |

Computer networking refers to the practice of connecting multiple computers and devices together to enable communication and the sharing of resources. It involves the design, implementation, and management of hardware and software components that facilitate data transmission and exchange between networked devices.

4 | 5 | ![WirelessNetwork-5994852003f4020011db5333](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/7aaa296f-4a56-4ee7-9205-2920d567e718) 6 | 7 | 8 | ### Here are some key aspects of computer networking: 9 |
    10 |
  1. Network Topologies: Network topologies define the physical or logical arrangement of devices in a network. Common topologies include star, bus, ring, mesh, and hybrid configurations.
  2. 11 | 12 | 13 |
  3. Network Protocols: Network protocols are sets of rules and conventions that govern how devices communicate and exchange data over a network. Protocols determine aspects such as data format, error handling, addressing, routing, and security.
  4. 14 | 15 | 16 |
  5. IP Addressing: IP addressing is a system for identifying and addressing devices on a network. It uses IP (Internet Protocol) addresses, which consist of a unique numeric code assigned to each device. IPv4 and IPv6 are the most commonly used IP addressing schemes.
  6. 17 | 18 | 19 |
  7. Switches and Routers: Switches and routers are networking devices that facilitate the flow of data between devices on a network. Switches enable the creation of local area networks (LANs) by connecting devices within a confined space, while routers connect multiple networks and enable data routing between them.
  8. 20 | 21 | 22 |
  9. Network Security: Network security focuses on protecting networked systems and data from unauthorized access, misuse, and threats. It involves measures such as firewalls, encryption, access controls, intrusion detection systems, and vulnerability management.
  10. 23 | 24 | 25 |
  11. Wireless Networking: Wireless networking enables devices to connect to a network without physical wired connections. It relies on technologies like Wi-Fi (Wireless Fidelity) and Bluetooth, allowing for flexibility and mobility in device connectivity.
  12. 26 | 27 | 28 |
  13. Network Services and Applications: Networks provide various services and applications, including file sharing, printing, email, web browsing, video conferencing, remote access, and cloud computing. These services rely on network infrastructure to enable communication and resource sharing.
  14. 29 | 30 | 31 |
  15. Network Management: Network management involves monitoring, troubleshooting, and optimizing network performance and reliability. It includes tasks such as network monitoring, configuration management, performance analysis, and capacity planning.
32 | 33 | Computer networking is essential for businesses, organizations, and individuals to connect devices, share information, and access resources efficiently. It enables the internet, intranets, and local networks to function, supporting a wide range of applications and services that facilitate communication, collaboration, and data transfer. 34 | 35 | # OSI Model 36 |

The Open Systems Interconnection (OSI) model is a set of standards that defines how computers communicate over a network. In the OSI model, data flow gets broken down into seven layers that build upon each other. Each layer uses data from the layer before it and serves a specific purpose in the broader network communication.

37 | 38 | The OSI model works from the bottom up, beginning from layer 1 (Physical) and ending with the top layer 7 (Application). The top layer is the most direct point of user interaction with the OSI model. 39 | 40 | ![shutterstock_508948102-1](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/23683353-27c0-4896-ae01-d3e76ce8e3df) 41 | ### Layer 1: Physical 42 | The Physical layer handles raw data within physical media. That raw data is made up of bit of information and the Physical layer converts those into electrical signals that define certain aspects of a piece of physical media. For example, Physical layer specifications may define aspects like voltage levels, transmission distances, and cable standards. You find Physical-layer specifications in technologies like Bluetooth and Ethernet. 43 | 44 | ### Layer 2: Data Link 45 | The Data Link layer takes data in the form of electrical signals (frames) and delivers them across members (nodes) of a single network. Data Link frames only operate on a local network and do not cross the boundaries into other networks. 46 | 47 | The Data Link layer can also detect and recover transmission errors by attaching extra information containing an error detection code to a given frame. When that frame is sent across the network, its receiver checks the received frame by matching the extracted data with the code. 48 | 49 | ### Layer 3: Network 50 | A Network describes the entire ecosystem of subnetworks and other networks that are all connected to each other via special hosts called gateways or routers. The Network layer works with routes (paths from one network to another). The Layer determines the most effective route to convey information. Sometimes, the message you’re trying to send is particularly large. In this case, the network may split it into several fragments at one node, send them separately, and reassemble them at the destination node. 51 | 52 | ### Layer 4: Transport 53 | The Transport layer protocols provide host-to-host communication services for applications. It is responsible for connection-oriented communication, reliability, and flow control. Connection-oriented communication uses a pre-established connection between hosts as a pathway for communicating between applications. Some protocols of the Transport layer are connection-oriented, but some protocols of this layer are not connection-oriented and instead transfer data end-to-end without the need for connection. 54 | 55 | ### Layer 5: Session 56 | The Session layer controls connections, whether that’s keeping an eye on possible connection losses or temporarily closing or re-opening connections depending on their frequency of use. The protocols of the Session layer try to recover any connection losses when they happen. It also optimizes connections: if a connection is not used for a long period, Session-layer protocols may close it and re-open it later. These protocols also provide synchronization points in the stream of exchanged messages, or in other words, spots for large messages to momentarily regather and make sure they’re all on the same page. 57 | 58 | ### Layer 6: Presentation 59 | The Presentation layer also called a Syntax layer, ensures that the recipient of the information can read and understand what it receives from another system; the information is presented in a legible way. Processes such as data encoding, compression, and encryption happen on this layer. 
60 | 61 | ### Layer 7: Application 62 | The OSI model’s top and final layer is the Application layer. The Application layer displays the data in the correct format to the end-user—you! This includes technologies such as HTTP, DNS, FTP, SSH, and much more. Almost everyone interacts with the protocols of the Application layer on a day-to-day basis. 63 | 64 | # Network Troubleshooting Tools 65 | 66 | 67 | ```bash 68 | # Ping command to check connectivity 69 | ping 192.168.0.1 70 | 71 | # Display IP configuration 72 | ipconfig /all 73 | 74 | # Trace route to a destination host 75 | traceroute www.example.com 76 | 77 | # Perform DNS lookup 78 | nslookup www.example.com 79 | 80 | # Display network statistics and connections 81 | netstat -a 82 | 83 | # Display ARP cache 84 | arp -a 85 | 86 | # Configure network interface 87 | ifconfig eth0 up 88 | 89 | # Display IP routing table 90 | route -n 91 | 92 | # Securely connect to a remote server 93 | ssh username@192.168.0.1 94 | 95 | # Connect to an FTP server 96 | ftp ftp.example.com 97 | -------------------------------------------------------------------------------- /009 - Jenkins/Install-jenkins-in-redhat/jenkins.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | STATUS = `systemctl is-active jenkins` 3 | 4 | #Download jdk: 5 | sudo yum install -y java-11-openjdk 6 | 7 | #Install wget: 8 | sudo yum install -y wget 9 | 10 | #Download the repo: 11 | sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo 12 | 13 | #Import the required key: 14 | sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key 15 | 16 | #Install Jenkins: 17 | sudo yum install -y jenkins 18 | 19 | #Enable Jenkins: 20 | sudo systemctl enable jenkins 21 | 22 | #Start Jenkins: 23 | sudo systemctl start jenkins 24 | 25 | if [STATUS == 'active'] 26 | then 27 | echo "Jenkins is running" 28 | else 29 | echo "Jenkins is not running, starting the service" 30 | systemctl start jenkins 31 | fi 32 | -------------------------------------------------------------------------------- /009 - Jenkins/Install-jenkins-in-ubuntu/jenkins.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | STATUS = `systemctl is-active jenkins` 3 | 4 | sudo apt update -y 5 | 6 | #Download jdk: 7 | sudo apt install -y java-11-openjdk 8 | 9 | #Download the repo: 10 | curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \ 11 | /usr/share/keyrings/jenkins-keyring.asc > /dev/null 12 | 13 | #Import the required key: 14 | echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \ 15 | https://pkg.jenkins.io/debian-stable binary/ | sudo tee \ 16 | /etc/apt/sources.list.d/jenkins.list > /dev/null 17 | 18 | sudo apt-get update 19 | 20 | sudo apt-get install jenkins -y 21 | 22 | #Enable Jenkins: 23 | sudo systemctl enable jenkins 24 | 25 | #Start Jenkins: 26 | sudo systemctl start jenkins 27 | 28 | if [STATUS == 'active'] 29 | then 30 | echo "Jenkins is running" 31 | else 32 | echo "Jenkins is not running, starting the service" 33 | systemctl start jenkins 34 | fi 35 | -------------------------------------------------------------------------------- /009 - Jenkins/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | ## What is Jenkins 9 | 10 | Jenkins is an open-source automation tool written in Java with plugins built for Continuous Integration and Continuous deployment/delivery 
purposes. 11 | 12 | ## Why Jenkins 13 | 14 | Jenkins is used to build and test your software projects continuously make it easier for developers to integrate changes to the project, and make it easier for users to obtain a fresh build. It provides many plugins that help to support building, deploying and automating any project. 15 | 16 | ## Jenkins Workflow 17 | 18 | ![Modern Project Management Process Infographic Graph (3)](https://user-images.githubusercontent.com/69889600/214857610-4fc3e64c-a262-4a6b-9e4d-b5b4eed057c6.png) 19 | 20 | ## Continuous Integration 21 | 22 | Continuous Integration is a software development practice where code is continuously tested after a commit to ensure there are no bugs. The common practice is that whenever a code commit occurs, a build should be triggered. 23 | 24 | ## Continuous Deployment 25 | 26 | Continuous Deployment is a software development process where code changes to an application are released automatically into the production environment. 27 | 28 | ## Continuous Delivery 29 | 30 | Continuous delivery is a software development practice where a code change is built, tested, and then pushed to a non-production testing or staging environment but final deploy to production is made after approval. 31 | 32 | ## Advantages of Jenkins 33 | 34 | * Open source tool 35 | * Easy to install 36 | * Platform Independent 37 | * Support 1000+ plugins 38 | * Free of cost 39 | * Automates integration 40 | 41 | ## Install jenkins 42 | 43 | * Install Java Development Kit (JDK) 44 | * Set path for the Environmental variable for JDK 45 | * Download and Install [Jenkins](https://www.jenkins.io/doc/book/installing/) 46 | * Check if jenkins service is running using (systemctl status jenkins) 47 | 48 | ## Webhook 49 | 50 | Webhook in jenkins triggers pipeline automatically when any changes are done in github repo like commit and push. 51 | 52 | * Copy jenkins URL 53 | * Go to repo settings in github 54 | * Select Add webhook and paste URL 55 | * Append url with /github-webhook/ 56 | 57 | ## Continuous Deployment 58 | 59 | Deploy code to production server 60 | * Go to Jenkins, manage jenkins. 61 | * Install plugin remote ssh 62 | * Connect remote server over ssh 63 | * Configure global property in Jenkins to store the production IP. 64 | * Add credentials, hostname of prod server 65 | * Test connection 66 | Use 67 | ```bash 68 | withCredentials([usernamePassword(credentialsId: 'prodserver_login', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) 69 | ``` 70 | to connect to production server 71 | 72 | ## Continuous Deployment using Docker 73 | 74 | Push docker image to docker hub 75 | 76 | ```bash 77 | withRegistry('https://registry.hub.docker.com', 'docker_hub_login') 78 | ``` 79 | Ask approval before deploying to production 80 | 81 | ```bash 82 | input 'Deploy to Production?' 
83 | ``` 84 | Use milestone to accidently deploying old version over a new version 85 | 86 | ```bash 87 | milestone(1) 88 | ``` 89 | ## Create build agent on a second server 90 | 91 | * Login slave from master using ssh 92 | * Create user's home directory at worker node 93 | * sudo mkdir /var/lib/jenkins 94 | * sudo useradd -d /var/lib/jenkins jenkins 95 | * sudo chown -R jenkins:jenkins /var/lib/jenkins 96 | * sudo mkdir /var/lib/jenkins/.ssh 97 | * Copy the contents of ~/.ssh/id_rsa.pub to the file /var/lib/jenkins/.ssh/authorized_keys 98 | * cat ~/.ssh/id_rsa.pub # Copy the output 99 | * sudo vim /var/lib/jenkins/.ssh/authorized_keys 100 | * Paste id_rsa contents into jenkins 101 | * Create an .ssh directory on the master in the jenkins directory: 102 | sudo mkdir /var/lib/jenkins/.ssh 103 | * Copy the known_hosts entry over from the .ssh directory in master jenkins user's .ssh directory: 104 | sudo cp ~/.ssh/known_hosts /var/lib/jenkins/.ssh 105 | * Create new node on jenkins master 106 | * Remote dir:/var/lib/jenkins Labels:Linux Host:Ip of worker node 107 | * Add creds worker node and paste private key 108 | 109 | ## Monitoring in Jenkins 110 | 111 | * Install any monitoring plugins like Prometheus, grafana, datadog and so on. 112 | * SSH into prometheus server 113 | * Edit vi /etc/prometheus/prometheus.yml file 114 | * Add jenkins target - ip:8080 115 | * Restart prometheus 116 | * Hit endpoints and see data scrape by prometheus 117 | 118 | 119 | ## Backup in Jenkins 120 | 121 | * Install Thin Backup plugin 122 | * Create directory jenkinsbackupand cd into it. 123 | * Set write permission to directory 124 | * Go to jenkins enter dir path and backup and restore jenkins. 125 | 126 | 127 | 128 | # 1. Jenkins: 129 | 130 | + Jenkins is a popular automation server used for continuous integration and continuous delivery (CI/CD) processes. 131 | 132 | + It is written in Java and provides a web-based interface for managing automation tasks. 133 | 134 | + Jenkins is extensible with a vast range of plugins that support integration with different tools and technologies. 135 | 136 | # 2. Installation and Setup: 137 | 138 | + Jenkins can be installed on various operating systems, including Windows, Linux, and macOS. 139 | 140 | + The installation involves downloading the Jenkins WAR (Web Application Archive) file and running it using Java. 141 | 142 | + After installation, Jenkins can be accessed through a web browser by navigating to the specified URL. 143 | 144 | # 3. Jobs and Builds: 145 | 146 | + Jobs are the basic units of work in Jenkins. Each job represents a specific task or process to be automated. 147 | 148 | + Jobs can be created through the Jenkins web interface or by defining job configurations using XML files. 149 | 150 | + A build refers to the execution of a job. Jenkins schedules and manages builds based on user-defined triggers or events. 151 | 152 | # 4. Build Triggers: 153 | 154 | + Jenkins provides various triggers to initiate builds, such as periodic scheduling, source code changes, or manual intervention. 155 | 156 | + Polling triggers enable Jenkins to check for changes in source code repositories at regular intervals. 157 | 158 | + Webhooks can be used to receive notifications from version control systems and trigger builds instantly. 159 | 160 | # 5. Source Code Management: 161 | 162 | + Jenkins integrates with various version control systems (VCS) like Git, Subversion (SVN), Mercurial, etc. 
163 | 164 | + Developers can configure Jenkins jobs to pull source code from repositories and perform builds or tests. 165 | 166 | # 6. Build Steps: 167 | 168 | + Build steps define the actions to be executed within a job. These can include compiling source code, running tests, packaging artifacts, etc. 169 | 170 | + Jenkins provides a wide range of plugins to support different build steps and tools. 171 | 172 | # 7. Plugins and Integration: 173 | 174 | + Jenkins has an extensive plugin ecosystem that allows integration with external tools and technologies. 175 | 176 | + Plugins extend Jenkins' functionality, enabling features like notifications, reporting, deployment to cloud platforms, and more. 177 | 178 | + Plugins can be installed and managed through the Jenkins web interface. 179 | 180 | # 8. Build Notifications: 181 | 182 | + Jenkins can send notifications about build results via email, instant messaging, or other communication channels. 183 | 184 | + Developers and teams can receive alerts and status updates on the progress of builds and deployments. 185 | 186 | # 9. Distributed Builds: 187 | 188 | + Jenkins supports distributed builds across multiple machines, allowing parallel execution of jobs. 189 | 190 | + Slave nodes (agents) can be added to the Jenkins infrastructure to distribute the workload and improve performanc. 191 | 192 | # 10. Security and Authentication: 193 | 194 | + Jenkins provides built-in security features to control access and permissions. 195 | 196 | + User authentication can be managed through Jenkins' own user database or by integrating with external authentication providers like LDAP or Active Directory. 197 | 198 | # 11. Pipelines: 199 | 200 | + Jenkins supports defining pipelines as code using the Jenkinsfile, which follows the Groovy syntax. 201 | 202 | + Pipelines enable the creation of complex, scripted workflows that can include build, test, and deployment stages. 203 | 204 | + Pipeline as Code promotes version control and enables teams to manage and share pipeline definitions efficiently. 205 | 206 | # 12. Monitoring and Logs: 207 | 208 | + Jenkins provides logs and monitoring capabilities to track the execution of jobs and diagnose issues. 209 | 210 | + Logs can be accessed through the Jenkins web interface, and various plugins help visualize build and system metrics." 
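As a companion to the Pipelines notes in section 11, here is a minimal declarative Jenkinsfile sketch. It is illustrative only: the stage contents are placeholders (simple echo steps) that you would replace with your project's real build, test and deploy commands.

```groovy
// Minimal declarative pipeline sketch; stage contents are placeholders.
pipeline {
    agent any                                  // run on any available node

    stages {
        stage('Build') {
            steps {
                echo 'Building...'             // e.g. sh 'mvn -B package' for a Maven project
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests...'        // e.g. sh 'mvn test'
            }
        }
        stage('Deploy') {
            steps {
                input 'Deploy to Production?'  // manual approval, as described earlier
                echo 'Deploying...'
            }
        }
    }

    post {
        always {
            echo 'Pipeline finished.'          // runs whether the build passed or failed
        }
    }
}
```

Committing a file like this to the root of the repository and pointing a Pipeline job at it keeps the build definition under version control alongside the application code.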
211 | 212 | 213 | 214 | 215 | 216 | 217 | 218 | -------------------------------------------------------------------------------- /010 - AWS/001 Aws cheatsheet.md: -------------------------------------------------------------------------------- 1 | ## AWS Services 2 | 3 | ## Compute 4 | + EC2 (Elastic Compute Cloud): Virtual servers in the cloud 5 | 6 | + Lambda: Serverless compute service 7 | 8 | ## Storage 9 | 10 | + S3 (Simple Storage Service): Object storage service 11 | 12 | + EBS (Elastic Block Store): Block-level storage for EC2 instances 13 | 14 | + EFS (Elastic File System): Managed network-attached storage (NAS) service 15 | 16 | ## Database 17 | 18 | + RDS (Relational Database Service): Managed database service for MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB 19 | 20 | + DynamoDB: NoSQL database service 21 | 22 | + ElastiCache: In-memory data store and cache service 23 | 24 | ## Networking 25 | 26 | + VPC (Virtual Private Cloud): Isolated cloud network 27 | 28 | + Route 53: DNS (Domain Name System) service 29 | 30 | + ELB (Elastic Load Balancing): Load balancing service 31 | 32 | ## Security 33 | 34 | + IAM (Identity and Access Management): User and access management service 35 | 36 | + CloudTrail: Service for logging AWS API calls 37 | 38 | + WAF (Web Application Firewall): Firewall service for web applications 39 | 40 | ## Management Tools 41 | 42 | + CloudWatch: Monitoring and logging service 43 | 44 | + CloudFormation: Service for deploying and managing AWS resources 45 | 46 | + CodePipeline: Continuous delivery service 47 | 48 | ## AWS CLI Commands 49 | 50 | ## EC2 51 | 52 | + aws ec2 run-instances: Launches EC2 instances 53 | 54 | + aws ec2 describe-instances: Lists EC2 instances 55 | 56 | + aws ec2 start-instances: Starts stopped EC2 instances 57 | 58 | + aws ec2 stop-instances: Stops running EC2 instances 59 | 60 | + aws ec2 terminate-instances: Terminates EC2 instances 61 | 62 | ## S3 63 | 64 | + aws s3 ls: Lists S3 buckets and objects 65 | 66 | + aws s3 cp: Copies files to and from S3 buckets 67 | 68 | + aws s3 mb: Creates S3 buckets 69 | 70 | + aws s3 rm: Deletes S3 objects and buckets 71 | 72 | ## IAM 73 | 74 | + aws iam create-user: Creates IAM users 75 | 76 | + aws iam list-users: Lists IAM users 77 | 78 | + aws iam attach-user-policy: Attaches IAM policies to user 79 | 80 | + aws iam delete-user: Deletes IAM users 81 | 82 | ## CloudFormation 83 | 84 | + aws cloudformation create-stack: Creates CloudFormation stacks 85 | 86 | + aws cloudformation list-stacks: Lists CloudFormation stacks 87 | 88 | + aws cloudformation describe-stack-resources: Lists resources in a CloudFormation stack 89 | 90 | + aws cloudformation delete-stack: Deletes CloudFormation stacks 91 | -------------------------------------------------------------------------------- /010 - AWS/002 Aws intro.md: -------------------------------------------------------------------------------- 1 | # Introduction to Virtualization and Cloud Computing 2 | 3 | ![Cloud_computing](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/4b5dd3a2-520c-4cdf-87c2-1fdf4d7abe29) 4 | 5 | 6 | 7 | 8 | ## · What is Internet: 9 | A global computer network using standardized 10 | communication protocols (e.g. UDP, TCP/IP) providing information and 11 | communication facilities. 12 | · 13 | ## Local Network: 14 | Private network in LAN (Local Area Network) 15 | 16 | ## · Virtualization: 17 | Run multiple OSs on a host machine (Type 1: BareMetal, Uses Hypervisor OS (e.g. 
ESXi) and Type 2: Application running on another base OS (e.g. vmware workstation)). 18 | 19 | ## Virtual Machine: 20 | Software representation of virtual computer as set of files! Easy 21 | to move, independent of hardware, Effective utilization of resources. We can do 22 | virtual networking between VMs. 23 | 24 | ## · Data Center: 25 | Data centers are simply centralized locations where computing and 26 | networking equipment is concentrated for the purpose of collecting, storing, 27 | processing, distributing or allowing access to large amounts of data. 28 | · What is Cloud? There is now Cloud, it is someone else's computer accessible over the Internet! Virtual Machine running on a cloud server is the most widely used way of hosting any applications online. 29 | 30 | ## · Cloud Computing: 31 | 32 | 33 | 34 | ![cloud-services-isometric-composition-with-big-cloud-computing-infrastructure-elements-connected-with-dashed-lines-vector-illustration_1284-30495 (1)](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/72eb630f-a11d-40b8-814d-19bc90a08d91) 35 | 36 | The term cloud refers to a network or the internet. It is a technology that uses remote servers on the internet to store, manage, and access data online rather than local drives. The data can be anything such as files, images, documents, audio, video, and more. 37 | 38 | ## These are the following operations that we can do using cloud computing: 39 | 40 |
  1. Developing new applications and services
  2. 41 |
  3. Storage, back up, and recovery of data
  4. 42 |
  5. Hosting blogs and websites
  6. 43 |
  7. Delivery of software on demand
  8. 44 |
  9. Analysis of data
  10. 45 |
  11. Streaming videos and audios
  12. 46 |
47 | 48 | 49 | 50 | 51 | ## What is Amazon Web Services (AWS)? 52 | ![Group-169-3](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/175de7b4-af2c-46ef-8dea-6d8cf3ab915a) 53 | 54 | + Applications like YouTube, Gmail, Facebook are software that are hosted on cloud. Those are often called Software as a Services (SaaS). 55 | Amazon Web Services is a public cloud provider, a gigantic pool of configurable resources (Servers, Storage, Networks, etc.) spanned across multiple Data Centers in the globe where you can deploy your infrastructure/ applications. Hence it is also called Infrastructure as a Services (IaaS). 56 | 57 | + IaaS – Entire Infrastructure provided to you as a fully managed service 58 | Rapid provisioning and release of the resources. 59 | Resources are elastic in nature; you can scale up and scale down the resources. (Scaling: Ability to change an implementation to respond to changing traffic patterns.). e.g. During Black Friday sale, services may require more compute power. 60 | 61 | + You can build could apps/ software (SaaS) using AWS (IaaS). Services can be used On-demand and Pay as you Go fashion (Like our Electricity bill, pay for the usage). 62 | We can gradually move services to Cloud (On-Ramp), some of the services are moved to cloud, often called Hybrid Cloud. 63 | 64 | + AWS follows strict regulations that allows Banking, Research sectors to move their infrastructure to cloud. 65 | The collective money that we pay to Amazon, enables them to maintain the Data Centers all over the world. 66 | Examples AWS Solutions: amazon.com, Netflix, etc. uses AWS in the backend. 67 | 68 | 69 | # Benefits of AWS: 70 | 71 | **Scalability:** AWS allows you to scale your resources up or down based on demand, ensuring that you pay for only what you use. 72 | 73 | **Reliability:** AWS offers a highly reliable infrastructure, with multiple data centers and built-in redundancy to minimize downtime. 74 | 75 | **Security:** AWS provides extensive security measures to protect your data, including encryption, access controls, and network firewalls. 76 | 77 | **Cost-effective:** With AWS, you can avoid large upfront costs for hardware and infrastructure and pay for resources on a pay-as-you-go basis. 78 | 79 | **Global Infrastructure:** AWS has a vast global infrastructure with data centers located in multiple regions worldwide, allowing you to deploy your applications closer to your end-users for improved performance. 80 | 81 | # AWS Services: 82 | 83 | **Compute:** AWS provides various compute services, including Amazon EC2 (Elastic Compute Cloud) for virtual servers, AWS Lambda for serverless computing, and AWS Batch for batch computing. 84 | 85 | 86 | **Storage:** AWS offers multiple storage options, such as Amazon S3 (Simple Storage Service) for object storage, Amazon EBS (Elastic Block Store) for block storage, and Amazon Glacier for long-term archival storage. 87 | 88 | 89 | **Database:** AWS provides managed database services like Amazon RDS (Relational Database Service), Amazon DynamoDB for NoSQL databases, and Amazon Redshift for data warehousing. 90 | 91 | 92 | **Networking:** AWS offers services like Amazon VPC (Virtual Private Cloud) for creating isolated virtual networks, AWS Direct Connect for dedicated network connections, and Amazon Route 53 for DNS management. 
93 | 94 | 95 | **Security and Identity:** AWS provides services like AWS IAM (Identity and Access Management) for user access control, AWS Secrets Manager for secure secrets storage, and AWS Shield for DDoS protection. 96 | 97 | **Analytics:** AWS offers services for data analytics and business intelligence, including Amazon Athena for query analysis, Amazon Redshift for data warehousing, and Amazon QuickSight for visualization. 98 | 99 | **AI and Machine Learning:** AWS provides AI and ML services like Amazon SageMaker for building ML models, Amazon Rekognition for image and video analysis, and Amazon Comprehend for natural language processing. 100 | 101 | **Management Tools:** AWS offers various tools for managing your infrastructure, including AWS CloudFormation for infrastructure as code, AWS CloudWatch for monitoring and logging, and AWS Systems Manager for automating operational tasks. 102 | 103 | ## Getting Started with AWS: 104 | 105 | **Create an AWS Account:** Start by creating an AWS account on the AWS website (https://aws.amazon.com/) and provide the necessary billing and contact information. 106 | 107 | **Choose a Region:** Select the AWS region where you want to deploy your resources. Each region consists of multiple data centers. 108 | 109 | **Access Management:** Set up AWS Identity and Access Management (IAM) to manage user access and permissions. 110 | 111 | **Launch Instances:** Launch virtual servers (EC2 instances) or leverage serverless computing with AWS Lambda to run your applications. 112 | 113 | **Store Data:** Use storage services like Amazon S3 or databases like Amazon RDS to store and manage your data. 114 | 115 | **Explore Additional Services:** Discover and explore other AWS services that can enhance your application, such as networking, security, analytics, and AI/ML. 116 | 117 | **Pay-as-you-go Pricing:** Understand and monitor your resource usage to optimize costs and take advantage of AWS's pay-as-you-go pricing model." 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/README.md: -------------------------------------------------------------------------------- 1 | # Prometheus-Grafana 2 | 3 | ## What is Prometheus? 4 | 5 | Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company. To emphasize this, and to clarify the project’s governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes. 6 | 7 | ## — prometheus.io 8 | 9 | Prometheus doesn’t use protocols such as SMNP or some sort of agent service. Instead, it pulls (scrapes) metrics from a client (target) over http and places the data into its local time series database that you can query using its own DSL. 10 | 11 | Prometheus uses exporters that are installed and configured on the clients in order to convert and expose their metrics in a Prometheus format. The Prometheus server then scrapes the exporter for metrics. 12 | 13 | By default Prometheus comes with a UI that can be accessed on port 9090 on the Prometheus server. 
Users also have the ability to build dashboards and integrate their favorite visualization software, such as Grafana. 14 | 15 | Prometheus uses a separate component for alerting called the AlertManager. The AlertManager receives metrics from the Prometheus server and then is responsible for grouping and making sense of the metrics and then forwarding an alert to your chosen notification system. The AlertManager currently supports email, Slack, VictorOps, HipChat, WebHooks and many more. 16 | 17 | ### Important terms : 18 | 19 | ### Prometheus Server : 20 | The main server that scrapes and stores the scraped metrics in a time series DB. 21 | ### Scrape : 22 | Prometheus server uses a pulling method to retrieve metrics. 23 | ### Target : 24 | The Prometheus servers clients that it retrieves info from. 25 | ### Exporter : 26 | Target libraries that convert and export existing metrics into Prometheus format. 27 | ### Alert Manager : 28 | Component responsible for handling alerts. 29 | 30 | ### Prometheus Architecture. 31 | ![1_CREV9H84LfEIouQCVx7cEw (1)](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/8a4f6b73-7bc0-4de1-a912-7cb337efbcf7) 32 | 33 | 34 | Prometheus scrapes metrics from exports, stores them in the TSDB on the Prometheus server and then pushes alerts to Alertmanager. 35 | 36 | The Service discovery component is a feature that allows Prometheus to auto discover different targets since these targets will come and go so frequently in a distributed system or microservice orchestration style of architecture. We will not be responsible for continuously updating a static list of target addresses each time a service and/or a piece of infrastructure is removed or added. Prometheus will automatically discover and start/stop scraping for us. 37 | 38 | ### Setting up Prometheus 39 | 40 | Now that we have the basic understanding of Prometheus,let’s get a Prometheus server up and start scraping some metrics. 41 | ![1_2esHBONOJf-VF53oE05tHQ](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/7778c253-9567-4812-a4a9-d5bc84c382ab) 42 | 43 | 44 | ### Installing node_exporter 45 | Download the Node Exporter on all machines : 46 | ``` 47 | wget https://github.com/prometheus/node_exporter/releases/download/v0.15.2/node_exporter-0.15.2.linux-amd64.tar.gz 48 | Extract the downloaded archive 49 | 50 | tar -xf node_exporter-0.15.2.linux-amd64.tar.gz 51 | Move the node_exporter binary to /usr/local/bin: 52 | 53 | sudo mv node_exporter-0.15.2.linux-amd64/node_exporter /usr/local/bin 54 | Remove the residual files with: 55 | 56 | rm -r node_exporter-0.15.2.linux-amd64* 57 | 58 | ``` 59 | 60 | ### Create users and service files for node_exporter. 61 | 62 | For security reasons, it is always recommended to run any services/daemons in separate accounts of their own. Thus, we are going to create an user account for node_exporter. We have used the -r flag to indicate it is a system account, and set the default shell to /bin/false using -s to prevent logins. 63 | ``` 64 | sudo useradd -rs /bin/false node_exporter 65 | ``` 66 | 67 | ### Create a systemd unit file so that node_exporter can be started at boot. 
68 | ``` 69 | sudo nano /etc/systemd/system/node_exporter.service 70 | ``` 71 | ``` 72 | [Unit] 73 | Description=Node Exporter 74 | After=network.target 75 | 76 | [Service] 77 | User=node_exporter 78 | Group=node_exporter 79 | Type=simple 80 | ExecStart=/usr/local/bin/node_exporter 81 | 82 | [Install] 83 | WantedBy=multi-user.target 84 | ``` 85 | Since we have created a new unit file, we must reload the systemd daemon, set the service to always run at boot and start it : 86 | ``` 87 | sudo systemctl daemon-reload 88 | sudo systemctl enable node_exporter 89 | sudo systemctl start node_exporter 90 | ``` 91 | 92 | ### Installing Prometheus 93 | The next step is to download and install Prometheus only on the Prometheus Server. 94 | ``` 95 | wget https://github.com/prometheus/prometheus/releases/download/v2.1.0/prometheus-2.1.0.linux-amd64.tar.gz 96 | ``` 97 | 98 | ### Extract the Prometheus archive : 99 | ``` 100 | tar -xf prometheus-2.1.0.linux-amd64.tar.gz 101 | ``` 102 | ### Move the binaries to /usr/local/bin: 103 | ``` 104 | sudo mv prometheus-2.1.0.linux-amd64/prometheus prometheus-2.1.0.linux-amd64/promtool /usr/local/bin 105 | ``` 106 | ### Create directories for configuration files and other prometheus data. 107 | ``` 108 | sudo mkdir /etc/prometheus /var/lib/prometheus 109 | 110 | ``` 111 | ### Move the configuration files to the directory we made previously: 112 | ``` 113 | sudo mv prometheus-2.1.0.linux-amd64/consoles prometheus-2.1.0.linux-amd64/console_libraries /etc/prometheus 114 | ``` 115 | ### Delete the leftover files as we do not need them any more: 116 | ``` 117 | rm -r prometheus-2.1.0.linux-amd64* 118 | ``` 119 | ### Configuring Prometheus 120 | After having installed Prometheus, we have to configure Prometheus to let it know about the HTTP endpoints it should monitor. Prometheus uses the YAML format for its configuration. 
121 | 122 | Go to /etc/hosts and add the following lines, replace x.x.x.x with the machine’s corresponding IP address 123 | 124 | x.x.x.x prometheus-target-1 125 | x.x.x.x prometheus-target-2 126 | We will use /etc/prometheus/prometheus.yml as our configuration file 127 | ``` 128 | global: 129 | scrape_interval: 10s 130 | 131 | scrape_configs: 132 | - job_name: 'prometheus_metrics' 133 | scrape_interval: 5s 134 | static_configs: 135 | - targets: ['localhost:9090'] 136 | - job_name: 'node_exporter_metrics' 137 | scrape_interval: 5s 138 | static_configs: 139 | - targets: ['localhost:9100','prometheus-target-1:9100','prometheus-target-2:9100'] 140 | ``` 141 | 142 | 143 | 144 | Finally, we will also change the ownership of files that Prometheus will use: 145 | ``` 146 | sudo useradd -rs /bin/false prometheus 147 | sudo chown -R prometheus: /etc/prometheus /var/lib/prometheus 148 | ``` 149 | Then, we will create a systemd unit file in /etc/systemd/system/prometheus.service with the following contents : 150 | ``` 151 | [Unit] 152 | Description=Prometheus 153 | After=network.target 154 | 155 | [Service] 156 | User=prometheus 157 | Group=prometheus 158 | Type=simple 159 | ExecStart=/usr/local/bin/prometheus \ 160 | --config.file /etc/prometheus/prometheus.yml \ 161 | --storage.tsdb.path /var/lib/prometheus/ \ 162 | --web.console.templates=/etc/prometheus/consoles \ 163 | --web.console.libraries=/etc/prometheus/console_libraries 164 | 165 | [Install] 166 | WantedBy=multi-user.target 167 | ``` 168 | Finally, we will reload systemd: 169 | ``` 170 | sudo systemctl daemon-reload 171 | sudo systemctl enable prometheus 172 | sudo systemctl start prometheus 173 | ``` 174 | Prometheus provides a web UI for running basic queries located at http://:9090/. This is how it looks like in a web browser: 175 | ![1_xNEdWSkZU0zsNHh2-AGr4A](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/35f72835-daf7-4e41-a79d-ed0042f3a166) 176 | 177 | ## Setting up Grafana For Prometheus 178 | 179 | Install Grafana on instance which queries our Prometheus server. 180 | ``` 181 | wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_5.0.4_amd64.deb 182 | sudo apt-get install -y adduser libfontconfig 183 | sudo dpkg -i grafana_5.0.4_amd64.deb 184 | ``` 185 | Then, Enable the automatic start of Grafana by systemd: 186 | ``` 187 | sudo systemctl daemon-reload && sudo systemctl enable grafana-server && sudo systemctl start grafana-server.service 188 | ``` 189 | Grafana is running now, and we can connect to it at http://your.server.ip:3000. The default user and password is admin / admin. 190 | ![1_jM2AkGZ2QFF6zys2piJn-A](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/662e8155-1cf7-4315-95e7-59eb4488149e) 191 | 192 | Now you have to create a Prometheus data source: 193 |
194 | 1. Click on the Grafana logo to open the sidebar.
195 | 2. Click on “Data Sources” in the sidebar.
196 | 3. Choose “Add New”.
197 | 4. Select “Prometheus” as the data source.
198 | 5. Set the Prometheus server URL (in our case: http://localhost:9090/).
199 | 6. Click “Add” to test the connection and to save the new data source.
200 | 
201 | Settings should look like this: 202 | 203 | ![1_HST-kzbD1bn1VLXHeoNxhw](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/d74a1556-91e2-497d-b0b3-12e7568dbd6b) 204 | 205 | Create your first dashboard from the information collected by Prometheus. You can also import some dashboards from a collection of shared dashboards 206 | 207 | ![grafana-dashboard-english](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/d235944a-882b-4e90-a622-30193e908d6d) 208 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/docker/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | grafana: 4 | image: "grafana/grafana" 5 | ports: 6 | - "3000:3000" 7 | volumes: 8 | - "grafana-storage:/var/lib/grafana" 9 | 10 | prometheus: 11 | image: "prom/prometheus" 12 | ports: 13 | - "9090:9090" 14 | volumes: 15 | - "${PWD-.}/prometheus:/etc/prometheus" 16 | 17 | volumes: 18 | grafana-storage: 19 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/docker/prometheus/prometheus.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | external_labels: 4 | monitor: 'prometheus' 5 | 6 | scrape_configs: 7 | - job_name: 'prometheus' 8 | static_configs: 9 | - targets: ['localhost:9090'] 10 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/install-grafana.sh: -------------------------------------------------------------------------------- 1 | sudo apt-get install -y adduser libfontconfig1 2 | wget https://dl.grafana.com/oss/release/grafana_7.3.4_amd64.deb 3 | sudo dpkg -i grafana_7.3.4_amd64.deb 4 | sudo systemctl daemon-reload 5 | sudo systemctl start grafana-server 6 | sudo systemctl status grafana-server 7 | sudo systemctl enable grafana-server.service 8 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/install-node-exporter.sh: -------------------------------------------------------------------------------- 1 | sudo useradd --no-create-home node_exporter 2 | 3 | wget https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz 4 | tar xzf node_exporter-1.0.1.linux-amd64.tar.gz 5 | sudo cp node_exporter-1.0.1.linux-amd64/node_exporter /usr/local/bin/node_exporter 6 | rm -rf node_exporter-1.0.1.linux-amd64.tar.gz node_exporter-1.0.1.linux-amd64 7 | 8 | sudo cp node-exporter.service /etc/systemd/system/node-exporter.service 9 | 10 | sudo systemctl daemon-reload 11 | sudo systemctl enable node-exporter 12 | sudo systemctl start node-exporter 13 | sudo systemctl status node-exporter 14 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/install-prometheus.sh: -------------------------------------------------------------------------------- 1 | sudo useradd --no-create-home prometheus 2 | sudo mkdir /etc/prometheus 3 | sudo mkdir /var/lib/prometheus 4 | 5 | wget https://github.com/prometheus/prometheus/releases/download/v2.23.0/prometheus-2.23.0.linux-amd64.tar.gz 6 | tar -xvf prometheus-2.23.0.linux-amd64.tar.gz 7 | sudo cp prometheus-2.23.0.linux-amd64/prometheus /usr/local/bin 8 | sudo cp prometheus-2.23.0.linux-amd64/promtool /usr/local/bin 9 | sudo cp -r prometheus-2.23.0.linux-amd64/consoles 
/etc/prometheus/ 10 | sudo cp -r prometheus-2.23.0.linux-amd64/console_libraries /etc/prometheus 11 | sudo cp prometheus-2.23.0.linux-amd64/promtool /usr/local/bin/ 12 | 13 | rm -rf prometheus-2.23.0.linux-amd64.tar.gz prometheus-2.19.0.linux-amd64 14 | sudo cp prometheus.yml /etc/prometheus/ 15 | sudo cp prometheus.service /etc/systemd/system/prometheus.service 16 | 17 | sudo chown prometheus:prometheus /etc/prometheus 18 | sudo chown prometheus:prometheus /usr/local/bin/prometheus 19 | sudo chown prometheus:prometheus /usr/local/bin/promtool 20 | sudo chown -R prometheus:prometheus /etc/prometheus/consoles 21 | sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries 22 | sudo chown -R prometheus:prometheus /var/lib/prometheus 23 | 24 | sudo systemctl daemon-reload 25 | sudo systemctl enable prometheus 26 | sudo systemctl start prometheus 27 | sudo systemctl status prometheus 28 | 29 | 30 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/node-exporter-init.dservice/README.md: -------------------------------------------------------------------------------- 1 | # install node exporter with init.d service 2 | 3 | # Steps 4 | 1. Run `install-node-exporter.sh' 5 | This will create the directory structure, download the software and run the service as init.directory 6 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/node-exporter-init.dservice/install-node-exporter.sh: -------------------------------------------------------------------------------- 1 | mkdir /opt/node-exporter 2 | wget https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz 3 | tar xzf node_exporter-1.0.1.linux-amd64.tar.gz 4 | sudo cp node_exporter-1.0.1.linux-amd64/node_exporter /opt/node-exporter 5 | rm -rf node_exporter-1.0.1.linux-amd64.tar.gz node_exporter-1.0.1.linux-amd64 6 | sudo cp node_exporter.sh /opt/node-exporter 7 | sudo cp node-exporter-init.d.service /etc/init.d/node_exporter 8 | sudo chmod +x /etc/init.d/node_exporter 9 | sudo chmod +x /opt/node-exporter/node_exporter.sh 10 | chkconfig --add node_exporter 11 | sudo service node_exporter start 12 | sudo service node_exporter status 13 | 14 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/node-exporter-init.dservice/node-exporter-init.d.service: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ### BEGIN INIT INFO 3 | # Provides: node_exporter 4 | # Required-Start: $local_fs $network $named $time $syslog 5 | # Required-Stop: $local_fs $network $named $time $syslog 6 | # Default-Start: 2 3 4 5 7 | # Default-Stop: 0 1 6 8 | # Description: 9 | ### END INIT INFO 10 | 11 | SCRIPT=/opt/node-exporter/node_exporter.sh 12 | RUNAS=root 13 | 14 | PIDFILE=/var/run/node_exporter.pid 15 | LOGFILE=/var/log/node_exporter.log 16 | 17 | start() { 18 | if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE"); then 19 | echo 'Service already running' >&2 20 | return 1 21 | fi 22 | echo 'Starting service…' >&2 23 | local CMD="$SCRIPT &> \"$LOGFILE\" && echo \$! > $PIDFILE" 24 | su -c "$CMD" $RUNAS > "$LOGFILE" & 25 | echo 'Service started' >&2 26 | } 27 | 28 | stop() { 29 | if [ ! -f "$PIDFILE" ] || ! 
kill -0 $(cat "$PIDFILE"); then 30 | echo 'Service not running' >&2 31 | return 1 32 | fi 33 | echo 'Stopping service' >&2 34 | kill -15 $(cat "$PIDFILE") && rm -f "$PIDFILE" 35 | echo 'Service stopped' >&2 36 | } 37 | 38 | uninstall() { 39 | echo -n "Are you really sure you want to uninstall this service? That cannot be undone. [yes|No] " 40 | local SURE 41 | read SURE 42 | if [ "$SURE" = "yes" ]; then 43 | stop 44 | rm -f "$PIDFILE" 45 | echo "Notice: log file is not be removed: '$LOGFILE'" >&2 46 | update-rc.d -f remove 47 | rm -fv "$0" 48 | fi 49 | } 50 | 51 | case "$1" in 52 | start) 53 | start 54 | ;; 55 | stop) 56 | stop 57 | ;; 58 | uninstall) 59 | uninstall 60 | ;; 61 | retart) 62 | stop 63 | start 64 | ;; 65 | *) 66 | echo "Usage: $0 {start|stop|restart|uninstall}" 67 | echo "Usage: $0 {start|stop|restart|uninstall}" 68 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/node-exporter-init.dservice/node_exporter.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | /opt/node-exporter/node_exporter --no-collector.diskstats -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/node-exporter.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Prometheus Node Exporter Service 3 | After=network.target 4 | 5 | [Service] 6 | User=node_exporter 7 | Group=node_exporter 8 | Type=simple 9 | ExecStart=/usr/local/bin/node_exporter 10 | 11 | [Install] 12 | WantedBy=multi-user.target 13 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/prometheus.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Prometheus 3 | Wants=network-online.target 4 | After=network-online.target 5 | 6 | [Service] 7 | User=prometheus 8 | Group=prometheus 9 | Type=simple 10 | ExecStart=/usr/local/bin/prometheus \ 11 | --config.file /etc/prometheus/prometheus.yml \ 12 | --storage.tsdb.path /var/lib/prometheus/ \ 13 | --web.console.templates=/etc/prometheus/consoles \ 14 | --web.console.libraries=/etc/prometheus/console_libraries 15 | 16 | [Install] 17 | WantedBy=multi-user.target 18 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/prometheus.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | external_labels: 4 | monitor: 'prometheus' 5 | 6 | scrape_configs: 7 | - job_name: 'prometheus' 8 | static_configs: 9 | - targets: ['localhost:9090'] 10 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/prometheus_ec2.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | external_labels: 4 | monitor: 'prometheus' 5 | 6 | scrape_configs: 7 | 8 | - job_name: 'node_exporter' 9 | 10 | static_configs: 11 | 12 | - targets: ['18.219.214.162:9100'] 13 | - targets: ['7.4.5.6:9100'] 14 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/prometheus_relabeeling.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | external_labels: 4 | monitor: 'prometheus' 5 | 6 | scrape_configs: 7 | - job_name: 'node' 8 | 
ec2_sd_configs: 9 | - region: us-east-1 10 | access_key: yourkey 11 | secret_key: yourkey 12 | port: 9100 13 | relabel_configs: 14 | - source_labels: [__meta_ec2_public_ip] 15 | regex: '(.*)' #default value 16 | target_label: _address_ 17 | replacement: '${1}:9100' 18 | # action: keep 19 | # Use the instance ID as the instance label 20 | - source_labels: [__meta_ec2_tag_Name] 21 | target_label: instance 22 | -------------------------------------------------------------------------------- /011 - Prometheus-Grafana/prometheus_serviceDiscovery.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | external_labels: 4 | monitor: 'prometheus' 5 | 6 | scrape_configs: 7 | - job_name: 'node' 8 | ec2_sd_configs: 9 | - region: us-east-2 10 | access_key: yourkey 11 | secret_key: yourkey 12 | port: 9100 13 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/Sample-UseCases.md: -------------------------------------------------------------------------------- 1 | # 20 DevOps Sample UseCases 2 | ## 1: React App Deployment with Docker 3 | Description: Dockerize a React application using a multi-stage Docker build, and automate the process of building and deploying the Docker image to Docker Hub using bash scripts in jenkins pipeline. 4 | 5 | ## 2: Web Application Deployment with AWS EC2, Load Balancer, and Route 53 6 | Description: 7 | Deploy a web application using AWS EC2 instances, Application Load Balancer, and Route 53. Use EC2 instances to host the applications, while the Application Load Balancer distributes traffic across multiple instances for increased availability and scalability. Create Route 53 DNS service to route traffic to the appropriate EC2 instances. 8 | 9 | ## 3: Continuous Integration for a Node.js Application using Git, Jenkins, and 10 | AWS Elastic Beanstalk 11 | Description: 12 | Set up a Continuous integration pipeline using Jenkins to build and test the Node.js application in Git. Use AWS Elastic Beanstalk service to automate the deployment and scaling of web applications. 13 | 14 | ## 4: Infrastructure as Code using Terraform and AWS VPC 15 | Description: 16 | Create a Terraform template to create AWS VPC by incorporating attributes like CIDR, private and public subnets,route table Internet gateway etc. 17 | 18 | ## 5: Container Orchestration with Kubernetes on AWS EKS 19 | Description: 20 | Configure a Kubernetes cluster on AWS Elastic Kubernetes Service (EKS) to orchestrate ,deploy and scale containerized applications on it. 21 | 22 | ## 6.Continuous Integration and Deployment of Node.js Application with Jenkins, AWS EC2 and Docker. 23 | Description: Set up a Continuous integration and deployment pipeline using Jenkins to dockerize and test the Node.js application and push the docker image in docker hub repo. 24 | 25 | ## 7.Infrastructure Monitoring and Alerting with AWS CloudWatch. 26 | Description: Configure CloudWatch alarms to trigger notifications (e.g. email, SMS) when the thresholds are breached 27 | 28 | ## 8.Infrastructure Automation with AWS Lambda 29 | Description:1.Create a sample AWS S3 bucket and define an event trigger that calls a Lambda function. 30 | 2.Create a Lambda function that automates a predefined task (e.g. 
resizing images, encrypting files, copying files to another location) 31 | 3.Test the automation by uploading files to the S3 bucket and verifying that the Lambda function is triggered and automates the predefined task. 32 | 33 | ## 9. Create a multi node kubernetes cluster using AWS EKS 34 | Description: Create three worker node in AWS EKS using eksctl and deploy a nginx application in it. 35 | 36 | ## 10. Continuous Integration and Deployment with Docker, Jenkins, AWS EKS, and establish monitoring with Prometheus, and Grafana 37 | 38 | ## 11. Create dockerfile and docker compose file for the java, python, nodejs applications 39 | Description: Create Dockerfiles and Docker Compose files for different applications written in Java, Python, and Node.js. Dockerfiles define the instructions for building Docker images, 40 | while Docker Compose files orchestrate the deployment of multiple containers. 41 | 42 | ## 12. Create an Auto Scaling group using the AWS Management Console and configure it to launch EC2 instances. 43 | 44 | ## 13. How to deploy your React app in s3 45 | Description: Deploy a React application to Amazon S3 (Simple Storage Service). S3 can host static websites, 46 | making it a convenient option for deploying React apps. 47 | 48 | ## 14. AWS S3 Event Triggering Shell Script Used by Netflix, Airbnb, Adobe, Expedia, and Others 49 | Description: AWS S3 event trigger to execute a shell script. When specific events occur in an S3 bucket, such as file uploads or deletions, the trigger invokes the shell script. 50 | This mechanism is employed by various companies to automate actions based on S3 events. 51 | 52 | ## 15. Create a jenkins freestyle project for a nodejs application by building and deploying it to ec2 instances .Create dockerfile and docker compose file for build and deployment. 53 | 54 | ## 16. Create a 3 tier project in AWS - Presentation tier, Application tier and database tier (setup VPC, subnet, IG, NAT gateway, route table, ec2 launch template, auto scaling group) 55 | Description: involves setting up a three-tier architecture in AWS, including a presentation tier (front-end), application tier (back-end logic), and database tier. You'll create a Virtual Private Cloud (VPC), subnets, internet gateway (IG), NAT gateway, route table, EC2 instances, launch templates, and auto scaling groups to distribute the workload across different tiers.## 17. Jenkins Multi Branch CICD Pipeline using dev and prod environment 56 | 57 | ## 18.Create a continuous integration and deployment for Dockerised Node app to AWS Elastic Beanstalk with AWS CodePipeline 58 | 59 | ## 19. Write a shell script to report the usage of AWS resources in your project 60 | 61 | ## 20. Create a Nginx AMI in aws 62 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 1.md: -------------------------------------------------------------------------------- 1 | # Project 1: Continuous Integration and Deployment of Node.js Application with Jenkins, AWS EC2 and Docker. 2 | 3 | ## Set up an EC2 instance: 4 | 5 | + Launch an EC2 instance in AWS and install Docker on it. 6 | 7 | + Create a new security group and configure it to allow incoming traffic on port 22 (SSH) and 80 (HTTP). 8 | 9 | ## Install and configure Jenkins: 10 | 11 | + Install Jenkins on the EC2 instance and start the Jenkins service. 12 | 13 | + Configure Jenkins with any necessary plugins and settings. 
14 | 15 | + Create a new Jenkins job to build the Node.js application and push it to a Docker registry. 16 | 17 | ## Build the Node.js application: 18 | 19 | + Create a new Node.js application that serves a simple "Hello World" webpage. 20 | 21 | + Use npm to install any necessary dependencies. 22 | 23 | + Write tests to verify that the application is functioning correctly. 24 | 25 | + Use a package.json file to specify the application's dependencies. 26 | 27 | ## Build the Docker image: 28 | 29 | + Create a Dockerfile that specifies the application's dependencies and how to run it. 30 | 31 | + Use the Dockerfile to build a Docker image of the Node.js application. 32 | 33 | ## Push the Docker image to the registry: 34 | 35 | + Use Jenkins to push the Docker image to a Docker registry (e.g. Docker Hub). 36 | 37 | + Deploy the Docker image to the EC2 instance: 38 | 39 | + Use Jenkins to SSH into the EC2 instance and pull the Docker image from the registry. 40 | 41 | + Start a new Docker container on the EC2 instance using the Docker image. 42 | 43 | ## Verify the deployment: 44 | 45 | + Access the application in a web browser to confirm that it is running correctly. 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 2.md: -------------------------------------------------------------------------------- 1 | # Project 2: Web Application Deployment with AWS EC2, Load Balancer, and Route 53 2 | 3 | ## Set up EC2 instances: 4 | 5 | + Launch multiple EC2 instances in different availability zones and install Nginx on each instance. 6 | 7 | + Create a new security group and configure it to allow incoming traffic on port 80 (HTTP). 8 | 9 | ## Configure Nginx: 10 | 11 | + Configure Nginx to serve the web application on port 80. 12 | 13 | + Verify that Nginx is serving the web application correctly on each instance. 14 | 15 | ## Set up an Application Load Balancer: 16 | 17 | + Create an Application Load Balancer in AWS and configure it to distribute traffic across the EC2 instances. 18 | 19 | + Create a new target group and add the EC2 instances to it. 20 | 21 | ## Configure Route 53: 22 | 23 | + Create a new DNS record in Route 53 that points to the Load Balancer's DNS name. 24 | 25 | + Verify that the DNS record is resolving correctly. 26 | 27 | ## Test the deployment: 28 | 29 | + Access the web application in a web browser using the DNS name in the Route 53 record. 30 | 31 | + Verify that the Load Balancer is distributing traffic evenly across the EC2 instances. 32 | 33 | 34 | 35 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 3.md: -------------------------------------------------------------------------------- 1 | # Project 3: Continuous Integration for a Node.js Application using Git, Jenkins, and AWS Elastic Beanstalk 2 | 3 | ## Create a sample Node.js application with source code in a Git repository: 4 | 5 | + Create a new Git repository and clone it to your local machine. 6 | 7 | + Create a new Node.js application with a package.json file and source code files. 8 | 9 | + Add and commit the files to the Git repository. 10 | 11 | ## Configure Jenkins to clone the Git repository, install dependencies, run tests, and package the application as a ZIP file: 12 | 13 | + Install Jenkins on a separate server or locally. 
14 | 15 | + Install the necessary plugins for Node.js, Git, and AWS Elastic Beanstalk. 16 | 17 | + Create a new Jenkins job and configure it to pull the source code from the Git repository, install dependencies, run tests, and package the application as a ZIP file. 18 | 19 | ## Configure Jenkins to transfer the ZIP file to AWS Elastic Beanstalk using the Elastic Beanstalk plugin: 20 | 21 | + Configure the Elastic Beanstalk plugin in Jenkins to authenticate with your AWS account and specify the target Elastic Beanstalk environment. 22 | 23 | + Add a post-build action to the Jenkins job to transfer the ZIP file to Elastic Beanstalk. 24 | 25 | ## Set up Elastic Beanstalk to deploy the application and automatically scale the environment based on traffic: 26 | 27 | + Create a new Elastic Beanstalk environment with the Node.js platform and select the ZIP file to deploy. 28 | 29 | + Configure the environment to automatically scale based on traffic using Elastic Beanstalk's auto-scaling feature. 30 | 31 | 32 | 33 | 34 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 4.md: -------------------------------------------------------------------------------- 1 | # Project 4: Infrastructure as Code using Terraform and AWS EC2 2 | 3 | 4 | ## Create a Terraform configuration file that defines the desired infrastructure: 5 | 6 | + Create a new directory for the Terraform configuration files. 7 | 8 | + Create a new file called main.tf and specify the desired infrastructure (e.g. EC2 instance, security group, key pair). 9 | 10 | ## Use Terraform to apply the configuration file and provision the infrastructure in AWS: 11 | 12 | + Install Terraform on your local machine or a separate server. 13 | 14 | + Authenticate Terraform with your AWS account and configure the necessary AWS permissions. 15 | 16 | + Run the "terraform init" command to initialize the Terraform configuration. 17 | 18 | + Run the "terraform apply" command to apply the Terraform configuration and provision the infrastructure in AWS. 19 | 20 | ## Use Terraform to update the infrastructure as necessary: 21 | 22 | + Modify the main.tf file to update the infrastructure configuration as necessary (e.g. change the instance type, add new security rules). 23 | 24 | + Run the "terraform apply" command again to update the infrastructure in AWS. 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 5.md: -------------------------------------------------------------------------------- 1 | # Project 5: Container Orchestration with Kubernetes on AWS EKS 2 | 3 | ## 1. Create a sample application with a Dockerfile 4 | 5 | + Create a new directory for the application files. 6 | 7 | + Create a new Dockerfile that specifies the application dependencies and configuration. 8 | 9 | ## 2. Create a Kubernetes deployment configuration file that specifies the Docker image and any necessary replicas, environment variables, and other settings 10 | 11 | Create a new file called deployment.yaml and specify the desired deployment settings (e.g. Docker image, replicas, environment variables). 12 | 13 | ## 3. Use kubectl to deploy the application to an AWS EKS cluster 14 | 15 | + Install kubectl on your local machine or a separate server. 16 | 17 | + Authenticate kubectl with your AWS account and configure the necessary Kubernetes permissions. 
18 | 19 | + Run the "kubectl apply -f deployment.yaml" command to create the Kubernetes deployment and deploy the application. 20 | 21 | ## 4. Scale the application up or down by adjusting the replica count in the Kubernetes deployment configuration file 22 | 23 | + Modify the deployment.yaml file to change the number of replicas for the application. 24 | 25 | + Run the "kubectl apply -f deployment.yaml" command again to update the deployment and scale the application up or down. 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 6.md: -------------------------------------------------------------------------------- 1 | # Project 6: Setting up a Continuous Delivery Pipeline with Git, Jenkins, Docker, and AWS ECS 2 | 3 | ## Create a sample application with a Dockerfile: 4 | 5 | + Create a new directory for the application files. 6 | 7 | + Create a new Dockerfile that specifies the application dependencies and configuration. 8 | 9 | ## Configure Jenkins to build the Docker image, push it to a Docker registry, and deploy it to AWS ECS: 10 | 11 | + Install Jenkins on a separate server or locally. 12 | 13 | + Install the necessary plugins for Docker and AWS ECS. 14 | 15 | + Create a new Jenkins job and configure it to build the Docker image, push it to a Docker registry, and deploy it to AWS ECS. 16 | 17 | ## Set up AWS ECS to run the Docker container and automatically scale the service based on traffic: 18 | 19 | + Create a new ECS cluster and task definition that specifies the Docker image to run. 20 | 21 | + Create a new ECS service that runs the task definition and automatically scales based on traffic. 22 | 23 | + Test the pipeline by making changes to the application code and verifying that the changes are automatically deployed to the production environment. 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 7.md: -------------------------------------------------------------------------------- 1 | # Project 7: Infrastructure Monitoring and Alerting with AWS CloudWatch 2 | 3 | ## 1. Create an AWS EC2 instance and install a sample application that generates logs: 4 | 5 | + Create a new EC2 instance with the necessary security groups and key pairs. 6 | 7 | + Install a sample application that generates logs (e.g. Apache web server). 8 | 9 | ## 2. Configure CloudWatch to monitor the logs and trigger alerts based on predefined metrics: 10 | 11 | + Create a new CloudWatch log group and specify the log stream from the sample application. 12 | 13 | + Define custom CloudWatch metrics based on the log data and set thresholds for alerting. 14 | 15 | + Configure CloudWatch alarms to trigger notifications (e.g. email, SMS) when the thresholds are breached. 16 | 17 | ## Test the monitoring and alerting by simulating a scenario that triggers the alert (e.g. overwhelming traffic to the application). 
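As a rough illustration of steps 2 and 3, the alarm could be created and then forced into the ALARM state from the AWS CLI as sketched below. This is a minimal example, not part of the original project: the alarm name, instance ID, threshold, and SNS topic ARN are placeholder assumptions you would replace with your own values.
```
# Hypothetical example: alarm on high CPU for one EC2 instance (all names/IDs/ARNs are placeholders)
aws cloudwatch put-metric-alarm \
  --alarm-name "high-cpu-demo" \
  --namespace "AWS/EC2" \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alert-topic

# Force the alarm into the ALARM state to verify the notification path without real load
aws cloudwatch set-alarm-state \
  --alarm-name "high-cpu-demo" \
  --state-value ALARM \
  --state-reason "Simulated breach for testing"
```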
18 | 19 | 20 | 21 | 22 | -------------------------------------------------------------------------------- /012 - Projects&SampleUseCases/project 8.md: -------------------------------------------------------------------------------- 1 | # Project 8: Infrastructure Automation with AWS Lambda 2 | 3 | ## Create a sample AWS S3 bucket and define an event trigger that calls a Lambda function: 4 | 5 | + Create a new S3 bucket with the necessary permissions. 6 | 7 | + Define a new event trigger that calls a Lambda function when a new object is uploaded to the bucket. 8 | 9 | ## Create a Lambda function that automates a predefined task (e.g. resizing images, encrypting files, copying files to another location): 10 | 11 | + Create a new Lambda function and specify the code and configuration (e.g. runtime, memory size). 12 | 13 | + Implement the necessary logic to automate the predefined task (e.g. use an image resizing library to resize images). 14 | 15 | ## Test the automation by uploading files to the S3 bucket and verifying that the Lambda function is triggered and automates the predefined task. 16 | -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zen-class/zen-class-devops-documentation/55a8b27cfc0e1789fb9844c072c567ab5b6b3c2c/013 - AWS-Interview Preparation/.DS_Store -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/ADVANCED.md: -------------------------------------------------------------------------------- 1 | ### 1. **Question:** Explain the concept of "GitOps" and how it aligns with DevOps principles. 2 | **Answer:** GitOps is a DevOps practice that uses version control systems like Git to manage infrastructure and application configurations. All changes are made through pull requests, which triggers automated deployments. This approach promotes versioning, collaboration, and automation while maintaining a declarative, auditable infrastructure. 3 | 4 | ### 2. **Question:** How does AWS CodeArtifact enhance dependency management in DevOps workflows? 5 | **Answer:** AWS CodeArtifact is a package management service that allows you to store, manage, and share software packages. It improves dependency management by centralizing artifact storage, ensuring consistency across projects, and enabling version control of packages, making it easier to manage dependencies in DevOps pipelines. 6 | 7 | ### 3. **Question:** Describe the use of AWS CloudFormation Drift Detection and Remediation. 8 | **Answer:** AWS CloudFormation Drift Detection helps identify differences between the deployed stack and the expected stack configuration. When drift is detected, you can use CloudFormation StackSets to automatically remediate drift across multiple accounts and regions, ensuring consistent infrastructure configurations. 9 | 10 | ### 4. **Question:** How can you implement Infrastructure as Code (IaC) security scanning in AWS DevOps pipelines? 11 | **Answer:** You can use tools like AWS CloudFormation Guard, cfn-nag, or open-source security scanners to analyze IaC templates for security vulnerabilities and compliance violations. By integrating these tools into DevOps pipelines, you can ensure that infrastructure code adheres to security best practices. 12 | 13 | ### 5. **Question:** Explain the role of Amazon CloudWatch Events in automating DevOps workflows. 
14 | **Answer:** Amazon CloudWatch Events allow you to respond to changes in AWS resources by triggering automated actions. In DevOps, you can use CloudWatch Events to automate CI/CD pipeline executions, scaling actions, incident response, and other tasks based on resource state changes. 15 | 16 | ### 6. **Question:** Describe the use of AWS Systems Manager Automation and its impact on DevOps practices. 17 | **Answer:** AWS Systems Manager Automation enables you to automate common operational tasks across AWS resources. In DevOps, it enhances repeatability and consistency by automating tasks like patch management, application deployments, and configuration changes, reducing manual intervention and errors. 18 | 19 | ### 7. **Question:** How can you implement fine-grained monitoring and alerting using Amazon CloudWatch Metrics and Alarms? 20 | **Answer:** Amazon CloudWatch Metrics provide granular insights into resource performance, while CloudWatch Alarms enable you to set thresholds and trigger actions based on metric conditions. In DevOps, you can use these services to monitor specific application and infrastructure metrics, allowing you to respond to issues proactively. 21 | 22 | ### 8. **Question:** Explain the concept of "Serverless DevOps" and how it differs from traditional DevOps practices. 23 | **Answer:** Serverless DevOps leverages serverless computing to automate and streamline development and operations tasks. It reduces infrastructure management, emphasizes event-driven architectures, and allows developers to focus on code rather than server provisioning. However, it also presents challenges in testing, observability, and architecture design. 24 | 25 | ### 9. **Question:** Describe the use of AWS CloudTrail and AWS CloudWatch Logs integration for audit and security in DevOps. 26 | **Answer:** AWS CloudTrail records API calls, while AWS CloudWatch Logs centralizes log data. Integrating these services allows you to monitor and audit AWS API activities, detect security events, and generate alerts in near real-time. This integration enhances security and compliance practices in DevOps workflows. 27 | 28 | ### 10. **Question:** How can AWS AppConfig be used to manage application configurations in DevOps pipelines? 29 | **Answer:** AWS AppConfig is a service that allows you to manage application configurations and feature flags. In DevOps, you can use AppConfig to separate configuration from code, enable dynamic updates, and control feature releases. This improves deployment flexibility, reduces risk, and supports A/B testing. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/AWS-CLI.md: -------------------------------------------------------------------------------- 1 | ### 1. What is the AWS Command Line Interface (CLI)? 2 | The AWS Command Line Interface (CLI) is a unified tool that allows you to interact with various AWS services using command-line commands. 3 | 4 | ### 2. Why would you use the AWS CLI? 5 | The AWS CLI provides a convenient way to automate tasks, manage AWS resources, and interact with services directly from the command line, making it useful for scripting and administration. 6 | 7 | ### 3. How do you install the AWS CLI? 8 | You can install the AWS CLI on various operating systems using package managers or by downloading the installer from the AWS website. 9 | 10 | ### 4. What is the purpose of AWS CLI profiles? 
11 | AWS CLI profiles allow you to manage multiple sets of AWS security credentials, making it easier to switch between different accounts and roles. 12 | 13 | ### 5. How can you configure the AWS CLI with your credentials? 14 | You can configure the AWS CLI by running the `aws configure` command, where you provide your access key, secret key, default region, and output format. 15 | 16 | ### 6. What is the difference between IAM user-based credentials and IAM role-based credentials in the AWS CLI? 17 | IAM user-based credentials are long-term access keys associated with an IAM user, while IAM role-based credentials are temporary credentials obtained by assuming a role using the `sts assume-role` command. 18 | 19 | ### 7. How can you interact with AWS services using the AWS CLI? 20 | You can interact with AWS services by using AWS CLI commands specific to each service. For example, you can use `aws ec2 describe-instances` to list EC2 instances. 21 | 22 | ### 8. What is the syntax for AWS CLI commands? 23 | The basic syntax for AWS CLI commands is `aws [options]`, where you replace `` with the service you want to interact with and `` with the desired action. 24 | 25 | ### 9. How can you list available AWS CLI services and commands? 26 | You can run `aws help` to see a list of AWS services and the corresponding commands available in the AWS CLI. 27 | 28 | ### 10. What is the purpose of output formatting options in AWS CLI commands? 29 | Output formatting options allow you to specify how the results of AWS CLI commands are presented. Common options include JSON, text, table, and YAML formats. 30 | 31 | ### 11. How can you filter and format AWS CLI command output? 32 | You can use filters like `--query` to extract specific data from AWS CLI command output, and you can use `--output` to choose the format of the output. 33 | 34 | ### 12. How can you create and manage AWS resources using the AWS CLI? 35 | You can create and manage AWS resources using commands such as `aws ec2 create-instance` for EC2 instances or `aws s3 cp` to copy files to Amazon S3 buckets. 36 | 37 | ### 13. How does AWS CLI handle pagination of results? 38 | Some AWS CLI commands return paginated results. You can use the `--max-items` and `--page-size` options to control the number of items displayed per page. 39 | 40 | ### 14. What is the AWS SSO (Single Sign-On) feature in the AWS CLI? 41 | The AWS SSO feature in the AWS CLI allows you to authenticate and obtain temporary credentials using an AWS SSO profile, simplifying the management of credentials. 42 | 43 | ### 15. Can you use the AWS CLI to work with AWS CloudFormation? 44 | Yes, you can use the AWS CLI to create, update, and delete CloudFormation stacks using the `aws cloudformation` commands. 45 | 46 | ### 16. How can you debug AWS CLI commands? 47 | You can use the `--debug` option with AWS CLI commands to get detailed debug information, which can help troubleshoot issues. 48 | 49 | ### 17. Can you use the AWS CLI in AWS Lambda functions? 50 | Yes, AWS Lambda functions can use the AWS CLI by packaging it with the function code and executing CLI commands from within the function. 51 | 52 | ### 18. How can you secure the AWS CLI on your local machine? 53 | You can secure the AWS CLI on your local machine by using IAM roles, IAM user-based credentials, and the AWS CLI's built-in encryption mechanisms for configuration files. 54 | 55 | ### 19. How can you update the AWS CLI to the latest version? 
56 | You can update the AWS CLI to the latest version using package managers like `pip` (Python package manager) or by downloading the installer from the AWS website. 57 | 58 | ### 20. How do you uninstall the AWS CLI? 59 | To uninstall the AWS CLI, you can use the package manager or the uninstaller provided by the installer you used to install it initially. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CLOUDFORMATION.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CloudFormation? 2 | AWS CloudFormation is a service that allows you to define and provision infrastructure as code, enabling you to create, update, and manage AWS resources in a declarative and automated way. 3 | 4 | ### 2. What are the benefits of using AWS CloudFormation? 5 | Benefits of using AWS CloudFormation include infrastructure as code, automated resource provisioning, consistent deployments, version control, and support for template reuse. 6 | 7 | ### 3. What is an AWS CloudFormation template? 8 | An AWS CloudFormation template is a JSON or YAML file that defines the AWS resources and their configurations needed for a particular stack. 9 | 10 | ### 4. How does AWS CloudFormation work? 11 | AWS CloudFormation interprets templates and deploys the specified resources in the order defined, managing the provisioning, updating, and deletion of resources. 12 | 13 | ### 5. What is a CloudFormation stack? 14 | A CloudFormation stack is a collection of AWS resources created and managed as a single unit, based on a CloudFormation template. 15 | 16 | ### 6. What is the difference between AWS CloudFormation and AWS Elastic Beanstalk? 17 | AWS CloudFormation provides infrastructure as code and lets you define and manage resources at a lower level, while AWS Elastic Beanstalk is a platform-as-a-service (PaaS) that abstracts the deployment of applications. 18 | 19 | ### 7. What is the purpose of a CloudFormation change set? 20 | A CloudFormation change set allows you to preview the changes that will be made to a stack before applying those changes, helping to ensure that updates won't cause unintended consequences. 21 | 22 | ### 8. How can you create an AWS CloudFormation stack? 23 | You can create a CloudFormation stack using the AWS Management Console, AWS CLI, or AWS SDKs. You provide a template, choose a stack name, and specify any parameters. 24 | 25 | ### 9. How can you update an existing AWS CloudFormation stack? 26 | You can update a CloudFormation stack by making changes to the template or stack parameters and then using the AWS Management Console, AWS CLI, or SDKs to initiate an update. 27 | 28 | ### 10. What is the CloudFormation rollback feature? 29 | The CloudFormation rollback feature automatically reverts changes to a stack if an update fails, helping to ensure that your infrastructure remains consistent. 30 | 31 | ### 11. How does AWS CloudFormation handle dependencies between resources? 32 | CloudFormation handles dependencies by automatically determining the order in which resources need to be created or updated to maintain consistent state. 33 | 34 | ### 12. What are CloudFormation intrinsic functions? 35 | CloudFormation intrinsic functions are built-in functions that you can use within templates to manipulate values or perform dynamic operations during stack creation and update. 36 | 37 | ### 13. How can you perform conditionals in CloudFormation templates? 
38 | You can use CloudFormation's intrinsic functions, such as `Fn::If` and `Fn::Equals`, to define conditions and control the creation of resources based on those conditions. 39 | 40 | ### 14. What is the CloudFormation Designer? 41 | The CloudFormation Designer is a visual tool that helps you design and visualize CloudFormation templates using a drag-and-drop interface. 42 | 43 | ### 15. How can you manage secrets in CloudFormation templates? 44 | You should avoid hardcoding secrets in templates. Instead, you can use AWS Secrets Manager or AWS Parameter Store to store sensitive information and reference them in your templates. 45 | 46 | ### 16. How can you provision custom resources in CloudFormation? 47 | You can use AWS Lambda-backed custom resources to perform actions in response to stack events that aren't natively supported by CloudFormation resources. 48 | 49 | ### 17. What is stack drift in AWS CloudFormation? 50 | Stack drift occurs when actual resources in a stack differ from the expected resources defined in the CloudFormation template. 51 | 52 | ### 18. How does CloudFormation support rollback triggers? 53 | Rollback triggers in CloudFormation allow you to specify actions that should be taken when a stack rollback is initiated, such as sending notifications or cleaning up resources. 54 | 55 | ### 19. Can AWS CloudFormation be used for creating non-AWS resources? 56 | Yes, CloudFormation supports custom resources that can be used to manage non-AWS resources or to execute arbitrary code during stack creation and update. 57 | 58 | ### 20. What is CloudFormation StackSets? 59 | CloudFormation StackSets allow you to deploy CloudFormation stacks across multiple accounts and regions, enabling centralized management of infrastructure deployments. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CLOUDFRONT.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon CloudFront? 2 | Amazon CloudFront is a Content Delivery Network (CDN) service provided by AWS that accelerates content delivery by distributing it across a network of edge locations. 3 | 4 | ### 2. How does CloudFront work? 5 | CloudFront caches content in edge locations globally. When a user requests content, CloudFront delivers it from the nearest edge location, reducing latency and improving performance. 6 | 7 | ### 3. What are edge locations in CloudFront? 8 | Edge locations are data centers globally distributed by CloudFront. They store cached content and serve it to users, minimizing the distance data needs to travel. 9 | 10 | ### 4. What types of distributions are available in CloudFront? 11 | CloudFront offers Web Distributions for websites and RTMP Distributions for media streaming. 12 | 13 | ### 5. How can you ensure that content in CloudFront is updated? 14 | You can create invalidations in CloudFront to remove cached content and force the distribution of fresh content. 15 | 16 | ### 6. Can you use custom SSL certificates with CloudFront? 17 | Yes, you can use custom SSL certificates to secure connections between users and CloudFront. 18 | 19 | ### 7. What is an origin in CloudFront? 20 | An origin is the source of the content CloudFront delivers. It can be an Amazon S3 bucket, an EC2 instance, an Elastic Load Balancer, or even an HTTP server. 21 | 22 | ### 8. How can you control who accesses content in CloudFront? 
23 | You can use CloudFront signed URLs or cookies to restrict access to content based on user credentials. 24 | 25 | ### 9. What are cache behaviors in CloudFront? 26 | Cache behaviors define how CloudFront handles different types of requests. They include settings like TTL, query string forwarding, and more. 27 | 28 | ### 10. How can you integrate CloudFront with other AWS services? 29 | You can integrate CloudFront with Amazon S3, Amazon EC2, AWS Lambda, and more to accelerate content delivery. 30 | 31 | ### 11. How can you analyze CloudFront distribution performance? 32 | You can use CloudFront access logs stored in Amazon S3 to analyze the performance of your distribution. 33 | 34 | ### 12. What is the purpose of CloudFront behaviors? 35 | CloudFront behaviors help specify how CloudFront should respond to different types of requests for different paths or patterns. 36 | 37 | ### 13. Can CloudFront be used for dynamic content? 38 | Yes, CloudFront can be used for both static and dynamic content delivery, improving the performance of web applications. 39 | 40 | ### 14. What is a distribution in CloudFront? 41 | A distribution represents the configuration and content for your CloudFront content delivery. It can have multiple origins and cache behaviors. 42 | 43 | ### 15. How does CloudFront handle cache expiration? 44 | CloudFront uses Time to Live (TTL) settings to determine how long objects are cached in edge locations before checking for updates. 45 | 46 | ### 16. What are the benefits of using CloudFront with Amazon S3? 47 | Using CloudFront with Amazon S3 reduces latency, offloads traffic from your origin server, and improves global content delivery. 48 | 49 | ### 17. Can CloudFront be used for both HTTP and HTTPS content? 50 | Yes, CloudFront supports both HTTP and HTTPS content delivery. HTTPS is recommended for enhanced security. 51 | 52 | ### 18. How can you measure the performance of CloudFront distributions? 53 | You can use CloudFront metrics in Amazon CloudWatch to monitor the performance of your distributions and analyze their behavior. 54 | 55 | ### 19. What is origin shield in CloudFront? 56 | Origin Shield is an additional caching layer that helps reduce the load on your origin server by caching content closer to the origin. 57 | 58 | ### 20. How can CloudFront improve security? 59 | CloudFront can help protect against DDoS attacks by absorbing traffic spikes and providing secure connections through HTTPS. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CLOUDTRAIL.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CloudTrail? 2 | AWS CloudTrail is a service that provides governance, compliance, and audit capabilities by recording and storing API calls made on your AWS account. 3 | 4 | ### 2. What type of information does AWS CloudTrail record? 5 | CloudTrail records API calls, capturing information about who made the call, when it was made, which service was accessed, and what actions were taken. 6 | 7 | ### 3. How does AWS CloudTrail store its data? 8 | CloudTrail stores its data in Amazon S3 buckets, allowing you to easily analyze and retrieve the recorded information. 9 | 10 | ### 4. How can you enable AWS CloudTrail for an AWS account? 11 | You can enable CloudTrail through the AWS Management Console or the AWS CLI by creating a trail and specifying the services you want to track. 12 | 13 | ### 5. What is a CloudTrail trail? 
14 | A CloudTrail trail is a configuration that specifies the settings for logging and delivering events. Trails can be applied to an entire AWS account or specific regions. 15 | 16 | ### 6. What is the purpose of CloudTrail log files? 17 | CloudTrail log files contain records of API calls and events, which can be used for security analysis, compliance, auditing, and troubleshooting. 18 | 19 | ### 7. How can you access CloudTrail log files? 20 | CloudTrail log files are stored in an S3 bucket. You can access them directly or use services like Amazon Athena or Amazon CloudWatch Logs Insights for querying and analysis. 21 | 22 | ### 8. What is the difference between a management event and a data event in CloudTrail? 23 | Management events are related to the management of AWS resources, while data events focus on the actions performed on those resources. 24 | 25 | ### 9. How can you view and analyze CloudTrail logs? 26 | You can view and analyze CloudTrail logs using the CloudTrail console, AWS CLI, or third-party tools. You can also set up CloudWatch Alarms to detect specific events. 27 | 28 | ### 10. What is CloudTrail Insights? 29 | CloudTrail Insights is a feature that uses machine learning to identify unusual patterns and suspicious activity in CloudTrail logs. 30 | 31 | ### 11. How can you integrate CloudTrail with CloudWatch Logs? 32 | You can integrate CloudTrail with CloudWatch Logs to receive CloudTrail events in near real-time, allowing you to create CloudWatch Alarms and automate actions. 33 | 34 | ### 12. What is CloudTrail Event History? 35 | CloudTrail Event History is a feature that displays the past seven days of management events for your account, helping you quickly identify changes made to resources. 36 | 37 | ### 13. What is CloudTrail Data Events? 38 | CloudTrail Data Events track actions performed on Amazon S3 objects, providing insight into object-level activity and changes. 39 | 40 | ### 14. What is the purpose of CloudTrail Insights events? 41 | CloudTrail Insights events are automatically generated when CloudTrail detects unusual or high-risk activity, helping you identify and respond to potential security issues. 42 | 43 | ### 15. How can you ensure that CloudTrail logs are tamper-proof? 44 | CloudTrail logs are stored in an S3 bucket with server-side encryption enabled, ensuring that the logs are tamper-proof and protected. 45 | 46 | ### 16. Can CloudTrail logs be used for compliance and auditing? 47 | Yes, CloudTrail logs can be used to demonstrate compliance with various industry standards and regulations by providing an audit trail of AWS account activity. 48 | 49 | ### 17. How does CloudTrail support multi-region trails? 50 | Multi-region trails allow you to capture events from multiple AWS regions in a single trail, providing a centralized view of account activity. 51 | 52 | ### 18. Can CloudTrail be used to monitor non-AWS services? 53 | CloudTrail primarily monitors AWS services, but you can integrate it with AWS Lambda to capture and log custom events from non-AWS services. 54 | 55 | ### 19. How can you receive notifications about CloudTrail events? 56 | You can use Amazon SNS (Simple Notification Service) to receive notifications about CloudTrail events, such as when new log files are delivered to your S3 bucket. 57 | 58 | ### 20. How can you use CloudTrail logs for incident response? 59 | CloudTrail logs can be used for incident response by analyzing events to identify the cause of an incident, understand its scope, and take appropriate actions. 
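For quick triage during incident response, recent management events can also be pulled straight from the AWS CLI. The sketch below is only an illustration; the event name and time window are placeholder assumptions.
```
# Hypothetical example: find who deleted a security group in the last 24 hours
# (GNU date syntax assumed, as on a typical Linux host)
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteSecurityGroup \
  --start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --max-results 20 \
  --query 'Events[].{Time:EventTime,User:Username,Event:EventName}' \
  --output table
```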
-------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CLOUDWATCH.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon CloudWatch? 2 | Amazon CloudWatch is a monitoring and observability service that provides insights into your AWS resources and applications by collecting and tracking metrics, logs, and events. 3 | 4 | ### 2. What types of data does Amazon CloudWatch collect? 5 | Amazon CloudWatch collects metrics, logs, and events. Metrics are data points about your resources and applications, logs are textual data generated by resources, and events provide insights into changes and notifications. 6 | 7 | ### 3. How can you use Amazon CloudWatch to monitor resources? 8 | You can use CloudWatch to monitor resources by collecting and visualizing metrics, setting alarms for specific thresholds, and generating insights into resource performance. 9 | 10 | ### 4. What are CloudWatch metrics? 11 | CloudWatch metrics are data points about the performance of your resources and applications. They can include data like CPU utilization, network traffic, and more. 12 | 13 | ### 5. How can you collect custom metrics in Amazon CloudWatch? 14 | You can collect custom metrics in CloudWatch by using the CloudWatch API or SDKs to publish data to CloudWatch using the `PutMetricData` action. 15 | 16 | ### 6. What are CloudWatch alarms? 17 | CloudWatch alarms allow you to monitor metrics and set thresholds to trigger notifications or automated actions when specific conditions are met. 18 | 19 | ### 7. How can you visualize CloudWatch metrics? 20 | You can visualize CloudWatch metrics using CloudWatch Dashboards, which allow you to create customized views of metrics, graphs, and text. 21 | 22 | ### 8. What is CloudWatch Logs? 23 | CloudWatch Logs is a service that collects, stores, and monitors log files from various resources, making it easier to analyze and troubleshoot applications. 24 | 25 | ### 9. How can you store logs in Amazon CloudWatch Logs? 26 | You can store logs in CloudWatch Logs by sending log data from your resources or applications using the CloudWatch Logs agent, SDKs, or directly through the CloudWatch API. 27 | 28 | ### 10. What is CloudWatch Logs Insights? 29 | CloudWatch Logs Insights is a feature that allows you to query and analyze log data to gain insights into your applications and resources. 30 | 31 | ### 11. What is the CloudWatch Events service? 32 | CloudWatch Events provides a way to respond to state changes in your AWS resources, such as launching instances, creating buckets, or modifying security groups. 33 | 34 | ### 12. How can you use CloudWatch Events to trigger actions? 35 | You can use CloudWatch Events to trigger actions by defining rules that match specific events and associate those rules with targets like Lambda functions, SQS queues, and more. 36 | 37 | ### 13. What is CloudWatch Container Insights? 38 | CloudWatch Container Insights provides a way to monitor and analyze the performance of containers managed by services like Amazon ECS and Amazon EKS. 39 | 40 | ### 14. What is CloudWatch Contributor Insights? 41 | CloudWatch Contributor Insights provides insights into the top contributors affecting the performance of your resources, helping you identify bottlenecks and optimization opportunities. 42 | 43 | ### 15. How can you use CloudWatch Logs for troubleshooting? 
44 | You can use CloudWatch Logs for troubleshooting by analyzing log data, setting up alarms for specific log patterns, and correlating events to diagnose issues. 45 | 46 | ### 16. Can CloudWatch Logs Insights query data from multiple log groups? 47 | Yes, CloudWatch Logs Insights can query data from multiple log groups, allowing you to analyze and gain insights from a broader set of log data. 48 | 49 | ### 17. How can you set up CloudWatch Alarms? 50 | You can set up CloudWatch Alarms by defining a metric, setting a threshold for the metric, and specifying actions to be taken when the threshold is breached. 51 | 52 | ### 18. What is CloudWatch Anomaly Detection? 53 | CloudWatch Anomaly Detection is a feature that automatically analyzes historical metric data to create a baseline and detect deviations from expected patterns. 54 | 55 | ### 19. How does CloudWatch support cross-account monitoring? 56 | You can use CloudWatch Cross-Account Cross-Region (CACR) to set up cross-account monitoring, allowing you to view metrics and alarms from multiple AWS accounts. 57 | 58 | ### 20. Can CloudWatch integrate with other AWS services? 59 | Yes, CloudWatch can integrate with other AWS services like Amazon EC2, Amazon RDS, Lambda, and more to provide enhanced monitoring and insights into resource performance. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CODEBUILD.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CodeBuild? 2 | AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software artifacts, such as executable files or application packages. 3 | 4 | ### 2. How does CodeBuild work? 5 | CodeBuild uses build specifications defined in buildspec.yml files. When triggered by a source code change, it pulls the code from the repository, follows the build steps specified, and generates the build artifacts. 6 | 7 | ### 3. What is a buildspec.yml file? 8 | A buildspec.yml file is used to define the build steps, environment settings, and other instructions for CodeBuild. It's stored in the same repository as the source code and provides the necessary information to execute the build. 9 | 10 | ### 4. How can you integrate CodeBuild with CodePipeline? 11 | You can add a CodeBuild action to your CodePipeline stages. This enables you to use CodeBuild as one of the actions in your CI/CD workflow for building and testing code. 12 | 13 | ### 5. What programming languages and build environments does CodeBuild support? 14 | CodeBuild supports a wide range of programming languages and build environments, including Java, Python, Node.js, Ruby, Go, .NET, Docker, and more. 15 | 16 | ### 6. Explain the caching feature in CodeBuild. 17 | The caching feature allows you to store certain directories in Amazon S3 to speed up build times. CodeBuild can fetch cached content instead of rebuilding dependencies, improving overall build performance. 18 | 19 | ### 7. How does CodeBuild handle environment setup and cleanup? 20 | CodeBuild automatically provisions and manages the build environment based on the specifications in the buildspec.yml file. After the build completes, CodeBuild automatically cleans up the environment. 21 | 22 | ### 8. Can you customize the build environment in CodeBuild? 23 | Yes, you can customize the build environment by specifying the base image, build tools, environment variables, and more in the buildspec.yml file. 
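To make questions 2, 3 and 8 above more concrete, below is a minimal illustrative buildspec.yml. It assumes a Node.js project whose package.json defines `test` and `build` scripts; the runtime version, commands and artifact paths are placeholders to adapt to your own project.
```
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18          # base runtime provided by the managed build image
  pre_build:
    commands:
      - npm ci            # restore dependencies (a good candidate for caching)
  build:
    commands:
      - npm test          # run the test suite
      - npm run build     # produce the deployable output

artifacts:
  files:
    - 'dist/**/*'         # build output uploaded as the build artifact
```
Dependency caching (question 6) is configured in the same file under a top-level `cache:` section listing the paths to store in S3 between builds.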
24 | 25 | ### 9. What are artifacts and how are they used in CodeBuild? 26 | Artifacts are the output files generated by the build process. They can be binaries, archives, or any other build output. These artifacts can be stored in Amazon S3 or other destinations for later use. 27 | 28 | ### 10. How can you secure sensitive information in your build process? 29 | Sensitive information, such as passwords or API keys, should be stored in AWS Secrets Manager or AWS Systems Manager Parameter Store. You can retrieve these secrets securely during the build process. 30 | 31 | ### 11. Describe a scenario where you'd use multiple build environments in a CodeBuild project. 32 | You might use multiple build environments to support different stages of the development process. For example, you could have one environment for development builds and another for production releases. 33 | 34 | ### 12. What is the role of build projects in CodeBuild? 35 | A build project defines how CodeBuild should build your source code. It includes settings like the source repository, build environment, buildspec.yml location, and other configuration details. 36 | 37 | ### 13. How can you troubleshoot a failing build in CodeBuild? 38 | You can view build logs and examine the output of build steps to identify issues. If a buildspec.yml file has errors, they can often be resolved by reviewing the syntax and ensuring proper settings. 39 | 40 | ### 14. What's the benefit of using CodeBuild over traditional build tools? 41 | CodeBuild is fully managed and scalable. It eliminates the need to provision and manage build servers, making it easier to set up and scale build processes without infrastructure overhead. 42 | 43 | ### 15. Can you build Docker images using CodeBuild? 44 | Yes, CodeBuild supports building Docker images as part of the build process. You can define build steps to build and push Docker images to repositories like Amazon ECR. 45 | 46 | ### 16. How can you integrate third-party build tools with CodeBuild? 47 | You can define build steps in your buildspec.yml file to execute third-party build tools or scripts. This enables seamless integration with tools specific to your project's needs. 48 | 49 | ### 17. What happens if a build fails in CodeBuild? 50 | If a build fails, CodeBuild can be configured to stop the pipeline in CodePipeline, send notifications, and provide detailed logs to help diagnose and resolve the issue. 51 | 52 | ### 18. Can you set up multiple build projects within a single CodeBuild project? 53 | Yes, a CodeBuild project can have multiple build projects associated with it. This is useful when you want to build different components of your application in parallel. 54 | 55 | ### 19. How can you monitor and visualize build performance in CodeBuild? 56 | You can use Amazon CloudWatch to collect and visualize metrics from CodeBuild, such as build duration, success rates, and resource utilization. 57 | 58 | ### 20. Explain how CodeBuild pricing works. 59 | CodeBuild pricing is based on the number of build minutes consumed. A build minute is billed per minute of code build time, including time spent provisioning and cleaning up the build environment. 60 | -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CODEDEPLOY.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CodeDeploy? 
2 | AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute platforms, including Amazon EC2 instances, AWS Lambda functions, and on-premises servers. 3 | 4 | ### 2. How does CodeDeploy work? 5 | CodeDeploy coordinates application deployments by pushing code changes to instances, managing deployment lifecycle events, and rolling back deployments if necessary. 6 | 7 | ### 3. What are the deployment strategies supported by CodeDeploy? 8 | CodeDeploy supports various deployment strategies, including Blue-Green, In-Place, and Canary. Each strategy determines how new code versions are rolled out to instances. 9 | 10 | ### 4. Explain the Blue-Green deployment strategy in CodeDeploy. 11 | In Blue-Green deployment, two identical environments (blue and green) are set up. New code is deployed to the green environment, and after successful testing, traffic is switched from the blue to the green environment. 12 | 13 | ### 5. How does CodeDeploy handle rollbacks? 14 | If a deployment fails or triggers alarms, CodeDeploy can automatically roll back to the previous version of the application, minimizing downtime and impact. 15 | 16 | ### 6. Can you use CodeDeploy for serverless deployments? 17 | Yes, CodeDeploy can be used to deploy AWS Lambda functions. It facilitates smooth updates to Lambda function code without service interruption. 18 | 19 | ### 7. What is an Application Revision in CodeDeploy? 20 | An Application Revision is a version of your application code that is deployed using CodeDeploy. It can include application files, configuration files, and scripts necessary for deployment. 21 | 22 | ### 8. How can you integrate CodeDeploy with your CI/CD pipeline? 23 | CodeDeploy can be integrated into your CI/CD pipeline using services like AWS CodePipeline. After successful builds, the pipeline triggers CodeDeploy to deploy the new version. 24 | 25 | ### 9. What is a Deployment Group in CodeDeploy? 26 | A Deployment Group is a set of instances or Lambda functions targeted for deployment. It defines where the application should be deployed and how the deployment should be executed. 27 | 28 | ### 10. How can you ensure zero downtime during application deployments? 29 | Zero downtime can be achieved by using strategies like Blue-Green deployments or Canary deployments. These strategies allow you to gradually shift traffic to the new version while testing its stability. 30 | 31 | ### 11. Explain how you can manage deployment configuration in CodeDeploy. 32 | Deployment configuration specifies parameters such as deployment style, traffic routing, and the order of deployment lifecycle events. It allows you to fine-tune deployment behavior. 33 | 34 | ### 12. How can you handle database schema changes during deployments? 35 | Database schema changes can be managed using pre- and post-deployment scripts. These scripts ensure that the database is properly updated before and after deployment. 36 | 37 | ### 13. Describe a scenario where you would use the Canary deployment strategy. 38 | You might use the Canary strategy when you want to gradually expose a new version to a small portion of your users for testing before rolling it out to the entire user base. 39 | 40 | ### 14. How does CodeDeploy handle instances with different capacities? 41 | CodeDeploy can automatically distribute the new version of the application across instances with varying capacities by taking into account the deployment configuration and specified traffic weights. 42 | 43 | ### 15. 
What are hooks in CodeDeploy? 44 | Hooks are scripts that run at various points in the deployment lifecycle. They allow you to perform custom actions, such as validating deployments or running tests, at specific stages. 45 | 46 | ### 16. How does CodeDeploy ensure consistent deployments across instances? 47 | CodeDeploy uses an agent on each instance that manages deployment lifecycle events and ensures consistent application deployments. 48 | 49 | ### 17. What is the difference between an EC2/On-Premises deployment and a Lambda deployment in CodeDeploy? 50 | An EC2/On-Premises deployment involves deploying code to instances, while a Lambda deployment deploys code to Lambda functions. Both utilize CodeDeploy's deployment capabilities. 51 | 52 | ### 18. How can you monitor the progress of a deployment in CodeDeploy? 53 | You can monitor deployments using the AWS Management Console, AWS CLI, or AWS SDKs. CodeDeploy provides detailed logs and metrics to track the status and progress of deployments. 54 | 55 | ### 19. Can CodeDeploy deploy applications across multiple regions? 56 | Yes, CodeDeploy can deploy applications to multiple regions. However, each region requires its own deployment configuration and setup. 57 | 58 | ### 20. What is the role of the CodeDeploy agent? 59 | The CodeDeploy agent is responsible for executing deployment instructions on instances. It communicates with the CodeDeploy service and manages deployment lifecycle events. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/CODEPIPELINE.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CodePipeline? 2 | AWS CodePipeline is a fully managed continuous integration and continuous delivery (CI/CD) service that automates the release process of software applications. It enables developers to build, test, and deploy their code changes automatically and efficiently. 3 | 4 | ### 2. How does CodePipeline work? 5 | CodePipeline orchestrates the flow of code changes through multiple stages. Each stage represents a step in the release process, such as source code retrieval, building, testing, and deployment. Developers define the pipeline structure, including the sequence of stages and associated actions, to automate the entire software delivery lifecycle. 6 | 7 | ### 3. Explain the basic structure of a CodePipeline. 8 | A CodePipeline consists of stages, actions, and transitions. Stages are logical phases of the pipeline, actions are the tasks performed within those stages (e.g., source code checkout, deployment), and transitions define the flow of execution between stages. 9 | 10 | ### 4. What are artifacts in CodePipeline? 11 | Artifacts are the output files generated during the build or compilation phase of the pipeline. These artifacts are the result of a successful action and are used as inputs for subsequent stages. For example, an artifact could be a packaged application ready for deployment. 12 | 13 | ### 5. Describe the role of the Source stage in CodePipeline. 14 | The Source stage is the starting point of the pipeline. It retrieves the source code from a version control repository, such as GitHub or AWS CodeCommit. When changes are detected in the repository, the Source stage triggers the pipeline execution. 15 | 16 | ### 6. How can you prevent unauthorized changes to the pipeline? 17 | Access to CodePipeline resources can be controlled using AWS Identity and Access Management (IAM) policies. 
By configuring IAM roles and permissions, you can restrict access to only authorized individuals or processes, preventing unauthorized modifications to the pipeline. 18 | 19 | ### 7. Can you explain the concept of a manual approval action? 20 | A manual approval action is used to pause the pipeline and require human intervention before proceeding to the next stage. This action is often employed for production deployments, allowing a designated person to review and approve changes before they are released. 21 | 22 | ### 8. What is a webhook in CodePipeline? 23 | A webhook is a mechanism that allows external systems, such as version control repositories like GitHub, to automatically trigger a pipeline execution when code changes are pushed. This integration facilitates the continuous integration process by initiating the pipeline without manual intervention. 24 | 25 | ### 9. How can you parallelize actions in CodePipeline? 26 | Parallel execution of actions is achieved by using parallel stages. Within a stage, you can define multiple actions that run concurrently, optimizing the pipeline's execution time and improving overall efficiency. 27 | 28 | ### 10. What's the difference between AWS CodePipeline and AWS CodeDeploy? 29 | AWS CodePipeline manages the entire CI/CD workflow, encompassing various stages like building, testing, and deploying. AWS CodeDeploy, on the other hand, focuses solely on the deployment phase by automating application deployment to instances or services. 30 | 31 | ### 11. Describe a scenario where you'd use a custom action in CodePipeline. 32 | A custom action is useful when integrating with third-party tools or services that are not natively supported by CodePipeline's built-in actions. For example, you could create a custom action to integrate with a specialized security scanning tool. 33 | 34 | ### 12. How can you handle different deployment environments (e.g., dev, test, prod) in CodePipeline? 35 | To handle different deployment environments, you can create separate stages for each environment within the pipeline. This allows you to customize the deployment process, testing procedures, and configurations specific to each environment. 36 | 37 | ### 13. Explain how you would set up automatic rollbacks in CodePipeline. 38 | Automatic rollbacks can be set up using CloudWatch alarms and AWS Lambda functions. If the deployment triggers an alarm (e.g., error rate exceeds a threshold), the Lambda function can initiate a rollback by deploying the previous version of the application. 39 | 40 | ### 14. How do you handle sensitive information like API keys in your CodePipeline? 41 | Sensitive information, such as API keys or database credentials, should be stored in AWS Secrets Manager or AWS Systems Manager Parameter Store. During pipeline execution, you can retrieve these secrets and inject them securely into the deployment process. 42 | 43 | ### 15. Describe Blue-Green deployment and how it can be achieved with CodePipeline. 44 | Blue-Green deployment involves running two separate environments (blue and green) concurrently. CodePipeline can achieve this by having distinct stages for each environment, allowing testing of the new version in the green environment before redirecting traffic from blue to green. 45 | 46 | ### 16. What is the difference between a pipeline and a stage in CodePipeline? 47 | A pipeline represents the end-to-end workflow, comprising multiple stages. 
Stages are the individual components within the pipeline, each responsible for specific actions or tasks. 48 | 49 | ### 17. How can you incorporate testing into your CodePipeline? 50 | Testing can be integrated into CodePipeline by adding testing actions to appropriate stages. Unit tests, integration tests, and other types of tests can be performed as part of the pipeline to ensure code quality and functionality. 51 | 52 | ### 18. What happens if an action in a pipeline fails? 53 | If an action fails, CodePipeline can be configured to respond in various ways. It can stop the pipeline, notify relevant stakeholders, trigger a rollback, or continue with the pipeline execution based on predefined conditions and actions. 54 | 55 | ### 19. Explain how you can create a reusable pipeline template in CodePipeline. 56 | To create a reusable pipeline template, you can use AWS CloudFormation. Define the pipeline structure, stages, and actions in a CloudFormation template. This enables you to consistently deploy pipelines across multiple projects or applications. 57 | 58 | ### 20. Can you integrate CodePipeline with on-premises resources? 59 | Yes, you can integrate CodePipeline with on-premises resources using the AWS CodePipeline on-premises action. This allows you to connect your existing tools and infrastructure with your AWS-based CI/CD pipeline, facilitating hybrid deployments. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/DYNAMODB.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon DynamoDB? 2 | Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It's designed to handle massive amounts of structured data across various use cases. 3 | 4 | ### 2. How does Amazon DynamoDB work? 5 | DynamoDB stores data in tables, each with a primary key and optional secondary indexes. It automatically replicates data across multiple Availability Zones for high availability and durability. 6 | 7 | ### 3. What types of data models does Amazon DynamoDB support? 8 | DynamoDB supports both the key-value data model and the document data model (tables of items with flexible attributes). It's well-suited for a variety of applications, from simple key-value stores to complex data models. 9 | 10 | ### 4. What are the key features of Amazon DynamoDB? 11 | Key features of DynamoDB include automatic scaling, multi-master replication, global tables for global distribution, support for ACID transactions, and seamless integration with AWS services. 12 | 13 | ### 5. What is the primary key in Amazon DynamoDB? 14 | The primary key is used to uniquely identify items within a table. It consists of a partition key (and optional sort key), which determines how data is distributed and stored. 15 | 16 | ### 6. How does partitioning work in Amazon DynamoDB? 17 | DynamoDB divides a table's data into partitions based on the partition key. Each partition can store up to 10 GB of data and handle a certain amount of read and write capacity. 18 | 19 | ### 7. What is the difference between a partition key and a sort key in DynamoDB? 20 | The partition key is used to distribute data across partitions, while the sort key is used to determine the order of items within a partition. Together, they create a unique identifier for each item. 21 | 22 | ### 8. How can you query data in Amazon DynamoDB?
23 | You can use the Query operation to retrieve items from a table based on the primary key or a secondary index. Queries are efficient and support various filter expressions. 24 | 25 | ### 9. What are secondary indexes in Amazon DynamoDB? 26 | Secondary indexes allow you to query the data using attributes other than the primary key. Global secondary indexes span the entire table, while local secondary indexes are created on a specific partition. 27 | 28 | ### 10. What is eventual consistency in DynamoDB? 29 | DynamoDB offers both strong consistency and eventual consistency for read operations. With eventual consistency, changes made to items may take some time to propagate across all replicas. 30 | 31 | ### 11. How can you ensure data durability in Amazon DynamoDB? 32 | DynamoDB replicates data across multiple Availability Zones, ensuring data durability and availability even in the event of hardware failures or AZ outages. 33 | 34 | ### 12. Can you change the schema of an existing Amazon DynamoDB table? 35 | Yes, you can change the schema of an existing DynamoDB table by modifying the provisioned throughput, changing the primary key, adding or removing secondary indexes, and more. 36 | 37 | ### 13. What is the capacity mode in Amazon DynamoDB? 38 | DynamoDB offers two capacity modes: Provisioned and On-Demand. In Provisioned mode, you provision a specific amount of read and write capacity. In On-Demand mode, capacity is automatically adjusted based on usage. 39 | 40 | ### 14. How can you automate the scaling of Amazon DynamoDB tables? 41 | You can enable auto scaling for your DynamoDB tables to automatically adjust read and write capacity based on traffic patterns. Auto scaling helps maintain optimal performance. 42 | 43 | ### 15. What is DynamoDB Streams? 44 | DynamoDB Streams captures changes to items in a table, allowing you to process and react to those changes in real time. It's often used for building event-driven applications. 45 | 46 | ### 16. How can you back up Amazon DynamoDB tables? 47 | DynamoDB provides backup and restore capabilities. You can create on-demand backups or enable continuous backups, which automatically create backups as data changes. 48 | 49 | ### 17. What is the purpose of the DynamoDB Accelerator (DAX)? 50 | DynamoDB Accelerator (DAX) is an in-memory cache that provides high-speed access to frequently accessed items. It reduces the need to read data from the main DynamoDB table. 51 | 52 | ### 18. How can you implement transactions in Amazon DynamoDB? 53 | DynamoDB supports ACID transactions for multiple item updates. You can use the `TransactWriteItems` operation to group multiple updates into a single, atomic transaction. 54 | 55 | ### 19. What is the difference between Amazon DynamoDB and Amazon S3? 56 | Amazon DynamoDB is a NoSQL database service optimized for high-performance, low-latency applications with structured data. Amazon S3 is an object storage service used for storing files, images, videos, and more. 57 | 58 | ### 20. What are Global Tables in Amazon DynamoDB? 59 | Global Tables enable you to replicate data across multiple AWS regions, providing low-latency access to DynamoDB data from users around the world. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/EC2.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon EC2? 
2 | Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It allows users to create, configure, and manage virtual servers (known as instances) in the AWS cloud. 3 | 4 | ### 2. How does Amazon EC2 work? 5 | Amazon EC2 enables users to launch instances based on pre-configured Amazon Machine Images (AMIs). These instances run within virtual private clouds (VPCs) and can be configured with various resources like CPU, memory, storage, and networking. 6 | 7 | ### 3. What are the different instance types in EC2? 8 | Amazon EC2 offers a wide range of instance types optimized for different use cases, such as general-purpose, memory-optimized, compute-optimized, and GPU instances. 9 | 10 | ### 4. Explain the differences between on-demand, reserved, and spot instances. 11 | - On-Demand Instances: Pay-as-you-go pricing with no upfront commitment. 12 | - Reserved Instances: Provides capacity reservation at a lower cost in exchange for a commitment. 13 | - Spot Instances: Allows users to bid on unused EC2 capacity, potentially leading to significantly lower costs. 14 | 15 | ### 5. How can you improve the availability of EC2 instances? 16 | To improve availability, you can place instances in multiple Availability Zones (AZs) within a region. This helps ensure redundancy and fault tolerance. 17 | 18 | ### 6. What is an Amazon Machine Image (AMI)? 19 | An Amazon Machine Image (AMI) is a pre-configured template that contains the information required to launch an EC2 instance. AMIs can include an operating system, applications, data, and configuration settings. 20 | 21 | ### 7. How can you secure your EC2 instances? 22 | You can enhance the security of EC2 instances by using security groups, Network ACLs, key pairs, and configuring firewalls. Additionally, implementing multi-factor authentication (MFA) is recommended for account access. 23 | 24 | ### 8. Explain the difference between public IP and Elastic IP in EC2. 25 | A public IP is assigned to an instance at launch, but it can change if the instance is stopped and started. An Elastic IP is a static IP address that can be associated with an instance, providing a consistent public IP even after stopping and starting the instance. 26 | 27 | ### 9. How can you scale your application using EC2? 28 | You can scale your application horizontally by adding more instances. Amazon EC2 Auto Scaling helps you automatically adjust the number of instances based on demand. 29 | 30 | ### 10. What is Amazon EBS? 31 | Amazon Elastic Block Store (EBS) provides persistent block storage volumes for EC2 instances. EBS volumes can be attached to instances and used as data storage. 32 | 33 | ### 11. How can you encrypt data on EBS volumes? 34 | You can encrypt EBS volumes using Amazon EBS encryption. You can choose to create encrypted volumes during instance launch or encrypt existing unencrypted volumes. 35 | 36 | ### 12. How can you back up your EC2 instances? 37 | You can create snapshots of EBS volumes, which serve as backups. These snapshots can be used to create new EBS volumes or restore existing ones. 38 | 39 | ### 13. What is the difference between instance store and EBS-backed instances? 40 | Instance store instances use ephemeral storage that is directly attached to the instance, providing high I/O performance. EBS-backed instances use EBS volumes for storage, offering persistent data storage. 41 | 42 | ### 14. What are instance metadata and user data in EC2? 
43 | Instance metadata provides information about an instance, such as its IP address, instance type, and IAM role. User data is information that you can pass to an instance during launch to customize its behavior. 44 | 45 | ### 15. How can you launch instances in a Virtual Private Cloud (VPC)? 46 | When launching instances, you can choose a specific VPC and subnet. This ensures that the instances are launched within the defined network environment. 47 | 48 | ### 16. What is the purpose of an EC2 security group? 49 | An EC2 security group acts as a virtual firewall for instances to control inbound and outbound traffic. You can specify rules to allow or deny traffic based on IP addresses and ports. 50 | 51 | ### 17. How can you automate the deployment of EC2 instances? 52 | You can use AWS CloudFormation to create and manage a collection of related AWS resources, including EC2 instances. This allows you to define the infrastructure as code. 53 | 54 | ### 18. How can you achieve high availability for an application using EC2? 55 | You can use features like Amazon EC2 Auto Scaling and Elastic Load Balancing to distribute incoming traffic and automatically adjust the number of instances to handle changes in demand. 56 | 57 | ### 19. What is Amazon Machine Learning (Amazon ML)? 58 | Amazon ML is a service that enables you to build predictive models using machine learning technology. It's used to perform predictions on data and make informed decisions. 59 | 60 | ### 20. What is Amazon EC2 Instance Connect? 61 | Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using Secure Shell (SSH). It eliminates the need to use key pairs and allows you to connect using your AWS Management Console credentials. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/ECR.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon Elastic Container Registry (ECR)? 2 | Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. 3 | 4 | ### 2. How does Amazon ECR work? 5 | Amazon ECR allows you to push Docker container images to a repository and then pull those images to deploy containers on Amazon ECS, Kubernetes, or other container orchestrators. 6 | 7 | ### 3. What are the key features of Amazon ECR? 8 | Key features of Amazon ECR include secure and private Docker image storage, integration with AWS Identity and Access Management (IAM), lifecycle policies, and image vulnerability scanning. 9 | 10 | ### 4. What is a Docker container image? 11 | A Docker container image is a lightweight, standalone, and executable software package that contains everything needed to run a piece of software, including code, runtime, libraries, and settings. 12 | 13 | ### 5. How do you push Docker images to Amazon ECR? 14 | You can use the `docker push` command to push Docker images to Amazon ECR repositories after authenticating with your AWS credentials. 15 | 16 | ### 6. How can you pull Docker images from Amazon ECR? 17 | You can use the `docker pull` command to pull Docker images from Amazon ECR repositories after authenticating with your AWS credentials. 18 | 19 | ### 7. What is the significance of Amazon ECR lifecycle policies? 
20 | Amazon ECR lifecycle policies allow you to define rules that automatically clean up and manage images based on conditions like image age, count, and usage. 21 | 22 | ### 8. How does Amazon ECR support image vulnerability scanning? 23 | Amazon ECR supports image vulnerability scanning by integrating with Amazon ECR Public and AWS Security Hub to provide insights into the security posture of your container images. 24 | 25 | ### 9. How can you ensure private and secure image storage in Amazon ECR? 26 | Amazon ECR repositories are private by default and can be accessed only by authorized users and roles. You can control access using IAM policies and resource-based policies. 27 | 28 | ### 10. How does Amazon ECR integrate with Amazon ECS? 29 | Amazon ECR integrates seamlessly with Amazon ECS, allowing you to use your ECR repositories to store and manage container images for your ECS tasks and services. 30 | 31 | ### 11. What are ECR lifecycle policies? 32 | ECR lifecycle policies are rules you define to manage the retention of images in your repositories. They help keep your image repositories organized and free up storage space. 33 | 34 | ### 12. Can you use Amazon ECR for multi-region deployments? 35 | Yes, you can use Amazon ECR in multi-region deployments by replicating images across different regions and using cross-region replication. 36 | 37 | ### 13. What is Amazon ECR Public? 38 | Amazon ECR Public is a feature that allows you to store and share publicly accessible container images. It's useful for distributing open-source software or other public content. 39 | 40 | ### 14. How can you improve image build and deployment speed using Amazon ECR? 41 | You can improve image build and deployment speed by using Amazon ECR's image layer caching and pulling pre-built base images from the registry. 42 | 43 | ### 15. What is the Amazon ECR Docker Credential Helper? 44 | The Amazon ECR Docker Credential Helper is a tool that simplifies authentication to Amazon ECR repositories, allowing Docker to authenticate with ECR using IAM credentials. 45 | 46 | ### 16. How does Amazon ECR support image versioning? 47 | Amazon ECR supports image versioning by allowing you to tag images with different version labels. This helps in maintaining different versions of the same image. 48 | 49 | ### 17. Can you use Amazon ECR with Kubernetes? 50 | Yes, you can use Amazon ECR with Kubernetes by configuring the necessary authentication and pulling container images from ECR repositories when deploying pods. 51 | 52 | ### 18. How does Amazon ECR handle image replication? 53 | Amazon ECR provides cross-region replication to replicate images to different AWS regions, improving availability and reducing latency for users in different regions. 54 | 55 | ### 19. What is the cost structure of Amazon ECR? 56 | Amazon ECR charges based on the amount of data stored in your repositories and the data transferred out to other AWS regions or services. 57 | 58 | ### 20. How can you ensure high availability for images in Amazon ECR? 59 | Amazon ECR provides high availability by replicating images across multiple Availability Zones within a region, ensuring durability and availability of your container images. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/ECS.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon ECS? 
2 | Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that allows you to run, manage, and scale Docker containers on a cluster of Amazon EC2 instances or AWS Fargate. 3 | 4 | ### 2. How does Amazon ECS work? 5 | Amazon ECS simplifies the deployment and management of containers by providing APIs to launch and stop containerized applications. It handles the underlying infrastructure and scaling for you. 6 | 7 | ### 3. What is a container in the context of Amazon ECS? 8 | A container is a lightweight, standalone executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. 9 | 10 | ### 4. What is a task definition in Amazon ECS? 11 | A task definition is a blueprint for running a Docker container as part of a task in Amazon ECS. It defines container configurations, resources, networking, and more. 12 | 13 | ### 5. How are tasks and services related in Amazon ECS? 14 | A task is a running container or a group of related containers defined by a task definition. A service in ECS manages the desired number of tasks to maintain availability and desired state. 15 | 16 | ### 6. What is the difference between Amazon ECS and AWS Fargate? 17 | Amazon ECS gives you control over EC2 instances to run containers, while AWS Fargate is a serverless compute engine for containers. With Fargate, you don't need to manage the underlying infrastructure. 18 | 19 | ### 7. How can you schedule tasks in Amazon ECS? 20 | Tasks in Amazon ECS can be scheduled using services, which maintain a desired count of tasks in a cluster. You can also use Amazon ECS Events to trigger task execution based on events. 21 | 22 | ### 8. What is the purpose of the Amazon ECS cluster? 23 | An Amazon ECS cluster is a logical grouping of container instances and tasks. It provides a way to manage and organize your containers within a scalable infrastructure. 24 | 25 | ### 9. How can you scale containers in Amazon ECS? 26 | You can scale containers by adjusting the desired task count of an ECS service. Amazon ECS automatically adjusts the number of tasks based on your scaling policies. 27 | 28 | ### 10. What is Amazon ECS Agent? 29 | The Amazon ECS Agent is a component that runs on each EC2 instance in your ECS cluster. It's responsible for communicating with the ECS control plane and managing tasks on the instance. 30 | 31 | ### 11. What is the difference between a task and a container instance in Amazon ECS? 32 | A task is a running instance of a containerized application, while a container instance is an Amazon EC2 instance that's part of an ECS cluster and runs the ECS Agent. 33 | 34 | ### 12. How can you manage container secrets in Amazon ECS? 35 | You can manage container secrets using AWS Secrets Manager or AWS Systems Manager Parameter Store. Secrets can be injected into containers at runtime as environment variables. 36 | 37 | ### 13. What is the purpose of Amazon ECS Capacity Providers? 38 | ECS Capacity Providers allow you to manage capacity and scaling for your tasks. They define how tasks are placed and whether to use On-Demand Instances or Spot Instances. 39 | 40 | ### 14. Can you use Amazon ECS to orchestrate non-Docker workloads? 41 | Yes, Amazon ECS supports running tasks with the Fargate launch type that allow you to specify images from various sources, including Amazon ECR, Docker Hub, and more. 42 | 43 | ### 15. How does Amazon ECS integrate with other AWS services? 
44 | Amazon ECS integrates with other AWS services like Amazon CloudWatch for monitoring, AWS Identity and Access Management (IAM) for access control, and Amazon VPC for networking. 45 | 46 | ### 16. What is the difference between the Fargate and EC2 launch types in Amazon ECS? 47 | The Fargate launch type lets you run containers without managing the underlying infrastructure, while the EC2 launch type gives you control over the EC2 instances where containers are deployed. 48 | 49 | ### 17. How can you manage container networking in Amazon ECS? 50 | Amazon ECS uses Amazon VPC networking for containers. You can configure networking using task definitions, security groups, and subnets to control communication between containers. 51 | 52 | ### 18. What is the purpose of the Amazon ECS Task Placement Strategy? 53 | Task Placement Strategy allows you to define rules for how tasks are distributed across container instances. It can help optimize resource usage and ensure high availability. 54 | 55 | ### 19. What is the role of the ECS Service Scheduler? 56 | The ECS Service Scheduler is responsible for placing and managing tasks across the cluster. It ensures tasks are launched, monitored, and replaced as needed. 57 | 58 | ### 20. How can you ensure high availability in Amazon ECS? 59 | To achieve high availability, you can use Amazon ECS services with multiple tasks running across multiple Availability Zones (AZs), combined with Auto Scaling to maintain the desired task count. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/EKS.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon EKS? 2 | Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that makes it easier to deploy, manage, and scale containerized applications using Kubernetes. 3 | 4 | ### 2. How does Amazon EKS work? 5 | Amazon EKS eliminates the need to install, operate, and maintain your own Kubernetes control plane. It provides a managed environment for deploying, managing, and scaling containerized applications using Kubernetes. 6 | 7 | ### 3. What is Kubernetes? 8 | Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. 9 | 10 | ### 4. What are the key features of Amazon EKS? 11 | Key features of Amazon EKS include automatic upgrades, integration with AWS services, high availability with multiple availability zones, security with IAM and VPC, and simplified Kubernetes operations. 12 | 13 | ### 5. What is a Kubernetes cluster? 14 | A Kubernetes cluster is a collection of nodes (Amazon EC2 instances) that run containerized applications managed by Kubernetes. It includes a control plane and worker nodes. 15 | 16 | ### 6. How do you create a Kubernetes cluster in Amazon EKS? 17 | To create an EKS cluster, you use the AWS Management Console, AWS CLI, or AWS CloudFormation. EKS automatically provisions the control plane and worker nodes. 18 | 19 | ### 7. What are Kubernetes nodes? 20 | Kubernetes nodes are the worker machines that run containers. They host pods, which are the smallest deployable units in Kubernetes. 21 | 22 | ### 8. How does Amazon EKS manage Kubernetes control plane updates? 23 | Amazon EKS automatically handles the upgrades of the Kubernetes control plane. It schedules and applies updates while ensuring minimal disruption to the applications running on the cluster. 
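As a rough sketch of questions 6 and 7 above, the AWS CLI calls below provision an EKS control plane and point kubectl at it. The role ARN, subnet IDs and region are placeholders; in practice you would also add worker nodes (for example with a managed node group) before any pods can be scheduled.
```
# Provision the managed EKS control plane (placeholder role ARN and subnets).
aws eks create-cluster \
    --name demo-cluster \
    --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
    --resources-vpc-config subnetIds=subnet-0aaa,subnet-0bbb

# Once the cluster is ACTIVE, write its credentials into ~/.kube/config.
aws eks update-kubeconfig --name demo-cluster --region us-east-1

# Verify connectivity to the API server; nodes appear only after a node group is added.
kubectl get nodes
```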
24 | 25 | ### 9. What is the difference between Amazon EKS and Amazon ECS? 26 | Amazon EKS provides managed Kubernetes clusters, while Amazon ECS provides managed Docker container orchestration. EKS is better suited for complex microservices architectures using Kubernetes. 27 | 28 | ### 10. How can you scale applications in Amazon EKS? 29 | You can scale applications in EKS by adjusting the desired replica count of Kubernetes Deployments or StatefulSets. EKS automatically manages the scaling of underlying resources. 30 | 31 | ### 11. What is the role of Amazon EKS Managed Node Groups? 32 | Amazon EKS Managed Node Groups simplify the deployment and management of worker nodes in an EKS cluster. They automatically provision, configure, and scale nodes. 33 | 34 | ### 12. How does Amazon EKS handle networking? 35 | Amazon EKS uses Amazon VPC for networking. It creates a VPC and subnets for your cluster, and each pod in the cluster gets an IP address from the subnet. 36 | 37 | ### 13. What is the Kubernetes Pod in Amazon EKS? 38 | A Kubernetes Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in the cluster and can consist of one or more containers. 39 | 40 | ### 14. How does Amazon EKS integrate with AWS services? 41 | Amazon EKS integrates with various AWS services like IAM for access control, Amazon VPC for networking, and CloudWatch for monitoring and logging. 42 | 43 | ### 15. Can you run multiple Kubernetes clusters on Amazon EKS? 44 | Yes, you can run multiple Kubernetes clusters on Amazon EKS, each with its own set of worker nodes and applications. 45 | 46 | ### 16. What is the difference between Kubernetes Deployment and StatefulSet? 47 | A Kubernetes Deployment is suitable for stateless applications, while a StatefulSet is designed for stateful applications that require stable network identifiers and ordered, graceful scaling. 48 | 49 | ### 17. How can you secure an Amazon EKS cluster? 50 | You can secure an EKS cluster by using AWS Identity and Access Management (IAM) roles, integrating with Amazon VPC for networking isolation, and applying security best practices to your Kubernetes workloads. 51 | 52 | ### 18. What is the Kubernetes Operator in Amazon EKS? 53 | A Kubernetes Operator is a method of packaging, deploying, and managing an application using Kubernetes-native APIs. It allows for more automated management of complex applications. 54 | 55 | ### 19. How can you automate application deployments in Amazon EKS? 56 | You can use Kubernetes Deployments or other tools like Helm to automate application deployments in Amazon EKS. These tools help manage the lifecycle of containerized applications. 57 | 58 | ### 20. How does Amazon EKS handle high availability? 59 | Amazon EKS supports high availability by distributing control plane components across multiple availability zones. It also offers features like managed node groups and Auto Scaling for worker nodes. 60 | -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/ELASTIC BEANSTALK.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS Elastic Beanstalk? 2 | AWS Elastic Beanstalk is a platform-as-a-service (PaaS) offering that simplifies application deployment and management. It handles infrastructure provisioning, deployment, monitoring, and scaling, allowing developers to focus on writing code. 3 | 4 | ### 2. How does Elastic Beanstalk work? 
5 | Elastic Beanstalk abstracts the infrastructure layer, allowing you to upload your code (web application or microservices) and configuration. It then automatically deploys, manages, and scales your application based on the platform, language, and environment settings you choose. 6 | 7 | ### 3. What languages and platforms does Elastic Beanstalk support? 8 | Elastic Beanstalk supports multiple programming languages and platforms, including Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. 9 | 10 | ### 4. What is an Elastic Beanstalk environment? 11 | An Elastic Beanstalk environment is a specific instance of your application that includes the runtime, resources, and configuration settings. You can have multiple environments (e.g., development, testing, production) for the same application. 12 | 13 | ### 5. How does Elastic Beanstalk handle updates and deployments? 14 | Elastic Beanstalk supports both All at Once and Rolling deployments. All at Once deploys updates to all instances simultaneously, while Rolling deploys updates in batches to reduce downtime. 15 | 16 | ### 6. Can you customize the infrastructure in Elastic Beanstalk? 17 | Yes, Elastic Beanstalk allows you to customize the environment's resources, configuration, and scaling settings through environment configuration files or the AWS Management Console. 18 | 19 | ### 7. How can you monitor the health of an Elastic Beanstalk environment? 20 | Elastic Beanstalk provides health monitoring through CloudWatch. You can set up alarms based on metrics like CPU utilization, latency, and request count. 21 | 22 | ### 8. What is the Elastic Beanstalk Command Line Interface (EB CLI)? 23 | The EB CLI is a command-line tool that provides an interface for interacting with Elastic Beanstalk. It enables developers to manage applications and environments using commands. 24 | 25 | ### 9. How does Elastic Beanstalk handle automatic scaling? 26 | Elastic Beanstalk can automatically scale your application based on the configured scaling triggers, such as CPU utilization, network traffic, or other custom metrics. 27 | 28 | ### 10. Explain the difference between Single Instance and Load Balanced environments in Elastic Beanstalk. 29 | In a Single Instance environment, your application runs on a single EC2 instance. In a Load Balanced environment, your application runs on multiple instances behind a load balancer, improving availability and scalability. 30 | 31 | ### 11. How does Elastic Beanstalk support rolling back deployments? 32 | Elastic Beanstalk supports rolling back to a previous version if an update results in errors or issues. You can initiate a rollback through the AWS Management Console or the EB CLI. 33 | 34 | ### 12. Can Elastic Beanstalk deploy applications to multiple availability zones? 35 | Yes, Elastic Beanstalk can automatically deploy your application to multiple availability zones within a region to enhance high availability. 36 | 37 | ### 13. How can you handle environment-specific configurations in Elastic Beanstalk? 38 | You can use configuration files, environment variables, or Parameter Store to manage environment-specific configurations, ensuring your application behaves consistently across environments. 39 | 40 | ### 14. Describe how you would configure environment variables in Elastic Beanstalk. 41 | Environment variables can be configured using the AWS Management Console, the EB CLI, or Elastic Beanstalk configuration files. They provide a way to pass dynamic values to your application. 42 | 43 | ### 15. 
Can Elastic Beanstalk deploy applications stored in containers? 44 | Yes, Elastic Beanstalk supports deploying Docker containers. You can specify a Docker image repository and Elastic Beanstalk will handle deployment and management of the containerized application. 45 | 46 | ### 16. How can you automate deployments to Elastic Beanstalk? 47 | You can use the AWS CodePipeline service to automate the deployment process to Elastic Beanstalk. This helps create a continuous integration and continuous delivery (CI/CD) pipeline. 48 | 49 | ### 17. What is the difference between an environment URL and a CNAME in Elastic Beanstalk? 50 | An environment URL is a unique URL automatically generated for each Elastic Beanstalk environment. A CNAME (Canonical Name) is an alias that you can configure to map a custom domain to your Elastic Beanstalk environment. 51 | 52 | ### 18. Can Elastic Beanstalk be used for serverless applications? 53 | While Elastic Beanstalk handles infrastructure provisioning, it is not a serverless service like AWS Lambda. It's designed to manage and scale applications on virtual machines. 54 | 55 | ### 19. What are worker environments in Elastic Beanstalk? 56 | Worker environments in Elastic Beanstalk are used for background tasks and processing. They handle tasks asynchronously, separate from the main application environment. 57 | 58 | ### 20. How can you back up and restore an Elastic Beanstalk environment? 59 | Elastic Beanstalk does not provide built-in backup and restore capabilities. However, you can use AWS services like Amazon RDS for database backups and CloudFormation for environment configuration versioning. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/ELB.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ### 1. What is an Elastic Load Balancer (ELB)? 4 | An Elastic Load Balancer (ELB) is a managed AWS service that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, or IP addresses, to ensure high availability and fault tolerance. 5 | 6 | ### 2. What are the three types of Elastic Load Balancers available in AWS? 7 | There are three types of Elastic Load Balancers: Application Load Balancer (ALB), Network Load Balancer (NLB), and Gateway Load Balancer (GWLB). 8 | 9 | ### 3. What is the main difference between Application Load Balancer (ALB) and Network Load Balancer (NLB)? 10 | ALB operates at the application layer and supports advanced routing, including content-based routing and path-based routing. NLB operates at the transport layer and provides ultra-low latency and high throughput. 11 | 12 | ### 4. What are some key features of Application Load Balancer (ALB)? 13 | ALB supports features like dynamic port mapping, path-based routing, support for HTTP/2 and WebSocket protocols, and content-based routing using listeners and rules. 14 | 15 | ### 5. When should you use Network Load Balancer (NLB)? 16 | NLB is suitable for scenarios that require extreme performance, high throughput, and low latency, such as gaming applications and real-time streaming. 17 | 18 | ### 6. What is a target group in Elastic Load Balancing? 19 | A target group is a logical grouping of targets (such as EC2 instances) registered with a load balancer. ALB and NLB use target groups to route requests to registered targets. 20 | 21 | ### 7. How does health checking work in Elastic Load Balancers? 
22 | Elastic Load Balancers perform health checks on registered targets to ensure they are available to receive traffic. Unhealthy targets are temporarily removed from rotation. 23 | 24 | ### 8. How can you route requests to different target groups based on URL paths in Application Load Balancer (ALB)? 25 | ALB supports path-based routing, where you define listeners and rules to route requests to different target groups based on specific URL paths. 26 | 27 | ### 9. What is cross-zone load balancing? 28 | Cross-zone load balancing is a feature that evenly distributes traffic across all registered targets in all availability zones, helping to achieve even distribution and better resource utilization. 29 | 30 | ### 10. How can you enable SSL/TLS encryption for traffic between clients and the load balancer? 31 | You can configure an SSL/TLS certificate on the load balancer, enabling it to terminate SSL/TLS connections and communicate with registered targets over HTTP. 32 | 33 | ### 11. Can you use Elastic Load Balancer (ELB) with resources outside AWS? 34 | Yes, ELB can be used with on-premises resources using Network Load Balancer with IP addresses as targets or with AWS Global Accelerator to route traffic to resources outside AWS. 35 | 36 | ### 12. What is a sticky session, and how can you enable it in Elastic Load Balancers? 37 | Sticky sessions ensure that a user's session is consistently directed to the same target. In ALB, you can enable sticky sessions using the `stickiness` option in the target group settings. 38 | 39 | ### 13. What is the purpose of pre-warming in Elastic Load Balancers? 40 | Pre-warming involves sending a low volume of traffic to a new load balancer to allow it to scale up its capacity and establish connections gradually. 41 | 42 | ### 14. How does Elastic Load Balancer support IPv6? 43 | Elastic Load Balancer (ALB and NLB) supports both IPv4 and IPv6 addresses, allowing applications to be accessed over the IPv6 protocol. 44 | 45 | ### 15. What is connection draining, and when is it useful? 46 | Connection draining is the process of gradually stopping traffic to an unhealthy target instance before removing it from the target group. It's useful to ensure active requests are completed before taking the instance out of rotation. 47 | 48 | ### 16. How can you enable access logs for Elastic Load Balancers? 49 | You can enable access logs for Elastic Load Balancers to capture detailed information about requests, responses, and client IP addresses. These logs can be stored in an Amazon S3 bucket. 50 | 51 | ### 17. What is the purpose of an idle timeout setting in Elastic Load Balancers? 52 | The idle timeout setting defines the maximum time an idle connection can remain open between the load balancer and a client. After this duration, the connection is closed. 53 | 54 | ### 18. Can you associate Elastic IP addresses with Elastic Load Balancers? 55 | No, Elastic Load Balancers do not have static IP addresses. They have DNS names that are used to route traffic to registered targets. 56 | 57 | ### 19. How can you configure health checks for targets in Elastic Load Balancers? 58 | You can configure health checks by defining a health check path, interval, timeout, and thresholds. ELB sends periodic requests to targets to verify their health. 59 | 60 | ### 20. Can you use Elastic Load Balancers to distribute traffic across regions? 61 | Elastic Load Balancers can distribute traffic only within the same region. 
For distributing traffic across regions, you can use AWS Global Accelerator. 62 | 63 | Remember that while these answers provide depth, it's important to personalize your responses based on your experience and understanding of Elastic Load Balancers and AWS load balancing concepts. 64 | -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/IAM.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS Identity and Access Management (IAM)? 2 | AWS IAM is a service that allows you to manage users, groups, and permissions for accessing AWS resources. It provides centralized control over authentication and authorization. 3 | 4 | ### 2. What are the key components of AWS IAM? 5 | Key components of AWS IAM include users, groups, roles, policies, permissions, and identity providers. 6 | 7 | ### 3. How does AWS IAM work? 8 | AWS IAM allows you to create users and groups, assign policies that define permissions, and use roles to delegate permissions to AWS services and resources. 9 | 10 | ### 4. What is the difference between authentication and authorization in AWS IAM? 11 | Authentication is the process of verifying the identity of users or entities, while authorization is the process of granting or denying access to resources based on policies and permissions. 12 | 13 | ### 5. How can you secure your AWS account using IAM? 14 | You can secure your AWS account by enforcing the principle of least privilege, creating strong password policies, enabling multi-factor authentication (MFA), and regularly reviewing permissions. 15 | 16 | ### 6. How do IAM users differ from IAM roles? 17 | IAM users are individuals or entities that have a fixed set of permissions associated with them. IAM roles are temporary credentials that can be assumed by users or AWS services to access resources. 18 | 19 | ### 7. What is an IAM policy? 20 | An IAM policy is a JSON document that defines permissions. It specifies what actions are allowed or denied on which AWS resources for whom (users, groups, or roles). 21 | 22 | ### 8. What is the AWS Management Console? 23 | The AWS Management Console is a web-based interface that allows you to interact with and manage AWS resources. IAM users can use the console to access resources based on their permissions. 24 | 25 | ### 9. How does IAM manage access keys? 26 | IAM users can have access keys (access key ID and secret access key) associated with their accounts, which are used for programmatic access to AWS resources. 27 | 28 | ### 10. What is the purpose of IAM groups? 29 | IAM groups allow you to group users and apply policies to them collectively, simplifying permission management by granting the same set of permissions to multiple users. 30 | 31 | ### 11. What is the role of an IAM policy document? 32 | An IAM policy document defines the permissions and actions that are allowed or denied. It is written in JSON format and attached to users, groups, or roles. 33 | 34 | ### 12. How can you grant permissions to an IAM user? 35 | You can grant permissions to an IAM user by attaching policies to the user directly or by adding the user to groups with associated policies. 36 | 37 | ### 13. How can you delegate permissions to AWS services using IAM roles? 38 | IAM roles allow you to delegate permissions to AWS services like EC2 instances, Lambda functions, and more, without exposing long-term credentials. 39 | 40 | ### 14. What is cross-account access in AWS IAM? 
41 | Cross-account access allows you to grant permissions to users or entities from one AWS account to access resources in another AWS account. 42 | 43 | ### 15. How does IAM support identity federation? 44 | IAM supports identity federation by allowing users to access AWS resources using temporary security credentials obtained from trusted identity providers (e.g., SAML, OpenID Connect). 45 | 46 | ### 16. What is the purpose of an IAM access advisor? 47 | IAM access advisors provide insights into the services that users accessed and the actions they performed. This helps in auditing and understanding resource usage. 48 | 49 | ### 17. How does IAM enforce the principle of least privilege? 50 | IAM enforces the principle of least privilege by allowing you to define specific permissions for users, groups, or roles, reducing the risk of unauthorized access. 51 | 52 | ### 18. What is the difference between IAM policies and resource-based policies? 53 | IAM policies are attached to identities (users, groups, roles), while resource-based policies are attached to AWS resources (e.g., S3 buckets, Lambda functions) to control access from different identities. 54 | 55 | ### 19. How can you implement multi-factor authentication (MFA) in IAM? 56 | You can enable MFA for IAM users to require an additional authentication factor (e.g., a code from a virtual MFA device) along with their password when signing in. 57 | 58 | ### 20. What is the IAM policy evaluation logic? 59 | IAM uses an explicit deny model, which means that if a user's permissions include an explicit deny statement, it overrides any allow statements in the policy. 60 | -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/LAMBDA.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS Lambda? 2 | AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. It automatically scales and manages the infrastructure required to run your code in response to events. 3 | 4 | ### 2. How does AWS Lambda work? 5 | You can upload your code to Lambda and define event sources that trigger the execution of your code. Lambda automatically manages the execution environment, scales it as needed, and provides monitoring and logging. 6 | 7 | ### 3. What are the key benefits of using AWS Lambda? 8 | The benefits of AWS Lambda include automatic scaling, reduced operational overhead, cost efficiency (as you pay only for the compute time used), and the ability to build event-driven architectures. 9 | 10 | ### 4. What types of events can trigger AWS Lambda functions? 11 | AWS Lambda functions can be triggered by various event sources, such as changes in Amazon S3 objects, updates to Amazon DynamoDB tables, HTTP requests through Amazon API Gateway, and more. 12 | 13 | ### 5. How is concurrency managed in AWS Lambda? 14 | Lambda automatically handles concurrency by scaling out instances of your function in response to incoming requests. You can set a concurrency limit to control how many concurrent executions are allowed. 15 | 16 | ### 6. What is the maximum execution duration for a single AWS Lambda invocation? 17 | The maximum execution duration for a single Lambda invocation is 15 minutes. 18 | 19 | ### 7. How do you pass data to and from AWS Lambda functions? 20 | You can pass data to Lambda functions through event objects, which contain information about the triggering event. 
You can also return data by using the return statement or creating a response object. 21 | 22 | ### 8. Can AWS Lambda functions communicate with external resources? 23 | Yes, Lambda functions can communicate with external resources such as databases, APIs, and other AWS services by using appropriate SDKs and APIs provided by AWS. 24 | 25 | ### 9. What are AWS Lambda layers? 26 | AWS Lambda layers are a way to manage and share code that is common across multiple functions. Layers can include libraries, custom runtimes, and other function dependencies. 27 | 28 | ### 10. How can you handle errors in AWS Lambda functions? 29 | You can handle errors by using try-catch blocks in your code. Lambda also provides CloudWatch Logs for monitoring, and you can set up error handling and retries for asynchronous invocations. 30 | 31 | ### 11. Can AWS Lambda functions access the internet? 32 | Yes, Lambda functions can access the internet through the Virtual Private Cloud (VPC) or through public endpoints if your function is not configured within a VPC. 33 | 34 | ### 12. What are the execution environments available for AWS Lambda functions? 35 | Lambda supports several runtimes, including Node.js, Python, Java, Go, Ruby, .NET Core, and custom runtimes using the Runtime API. 36 | 37 | ### 13. How can you configure environment variables for AWS Lambda functions? 38 | You can set environment variables for Lambda functions when creating or updating the function. These variables can be accessed within your code. 39 | 40 | ### 14. What is the difference between synchronous and asynchronous invocation of Lambda functions? 41 | Synchronous invocations wait for the function to complete and return a response, while asynchronous invocations return immediately, and the response is sent to a specified destination. 42 | 43 | ### 15. What is the AWS Lambda Event Source Mapping? 44 | Event Source Mapping allows you to connect event sources like Amazon DynamoDB streams or Amazon Kinesis streams to Lambda functions. This enables the function to process events as they occur. 45 | 46 | ### 16. How can you manage the permissions and execution roles for AWS Lambda functions? 47 | You can use AWS Identity and Access Management (IAM) roles to grant permissions to your Lambda functions. Execution roles define what AWS resources the function can access. 48 | 49 | ### 17. What is AWS Step Functions? 50 | AWS Step Functions is a serverless orchestration service that lets you coordinate multiple AWS services into serverless workflows using visual workflows called state machines. 51 | 52 | ### 18. How can you automate the deployment of AWS Lambda functions? 53 | You can use AWS Serverless Application Model (SAM) templates, AWS CloudFormation, or CI/CD tools like AWS CodePipeline to automate the deployment of Lambda functions. 54 | 55 | ### 19. Can AWS Lambda functions connect to on-premises resources? 56 | Yes, Lambda functions can connect to on-premises resources by placing the function inside a VPC and using a VPN or Direct Connect connection to establish connectivity. 57 | 58 | ### 20. What is the Cold Start issue in AWS Lambda? 59 | The Cold Start issue occurs when a Lambda function is invoked for the first time or after it has been idle for a while. The function needs to be initialized, causing a slight delay in response time. 
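
A quick way to see the cold start effect for yourself (illustrative only — the function name below is a placeholder, not something from these notes) is to time two back-to-back synchronous invocations with the AWS CLI:

```
# "my-demo-fn" is a hypothetical function name; replace it with a function that
# exists in your account. The first call usually includes initialization (cold
# start) time, while the second normally reuses a warm execution environment.
time aws lambda invoke --function-name my-demo-fn /tmp/first-out.json
time aws lambda invoke --function-name my-demo-fn /tmp/second-out.json
```

Common mitigations include keeping SDK clients and other initialization code outside the handler and, where latency is critical, using provisioned concurrency.
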
-------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/MIGRATION.md: -------------------------------------------------------------------------------- 1 | ### 1. What is cloud migration? 2 | Cloud migration refers to the process of moving applications, data, and workloads from on-premises environments or one cloud provider to another. 3 | 4 | ### 2. What are the common drivers for cloud migration? 5 | Drivers for cloud migration include cost savings, scalability, agility, improved security, and the ability to leverage advanced cloud services. 6 | 7 | ### 3. What are the six common cloud migration strategies? 8 | The six common cloud migration strategies are Rehost (lift and shift), Replatform, Repurchase (buy a SaaS solution), Refactor (rearchitect), Retire, and Retain (leave unchanged). 9 | 10 | ### 4. What is the "lift and shift" migration strategy? 11 | The "lift and shift" strategy (Rehost) involves moving applications and data as they are from on-premises to the cloud without significant modifications. 12 | 13 | ### 5. How does the "replatform" strategy differ from "lift and shift"? 14 | The "replatform" strategy involves making minor adjustments to applications or databases before migrating them to the cloud, often to optimize for cloud services. 15 | 16 | ### 6. When would you consider the "rebuy" strategy? 17 | The "rebuy" strategy (Repurchase) involves replacing an existing application with a cloud-based Software as a Service (SaaS) solution. It's suitable when a suitable SaaS option is available. 18 | 19 | ### 7. What is the "rearchitect" migration strategy? 20 | The "rearchitect" strategy (Refactor) involves modifying or rearchitecting applications to fully leverage cloud-native features and services. 21 | 22 | ### 8. How do you decide which cloud migration strategy to use? 23 | The choice of strategy depends on factors like business goals, existing technology stack, application complexity, and desired outcomes. 24 | 25 | ### 9. What are some key benefits of the "rearchitect" strategy? 26 | The "rearchitect" strategy can lead to improved performance, scalability, and cost savings by utilizing cloud-native services. 27 | 28 | ### 10. What is the importance of a migration readiness assessment? 29 | A migration readiness assessment helps evaluate an organization's current environment, readiness for cloud migration, and the appropriate migration strategy to adopt. 30 | 31 | ### 11. How can you minimize downtime during cloud migration? 32 | You can use strategies like blue-green deployments, canary releases, and traffic shifting to minimize downtime and ensure a smooth migration process. 33 | 34 | ### 12. What is data migration in the context of cloud migration? 35 | Data migration involves moving data from on-premises databases to cloud-based databases, ensuring data consistency, integrity, and minimal disruption. 36 | 37 | ### 13. What is the "big bang" migration approach? 38 | The "big bang" approach involves migrating all applications and data at once, which can be risky due to potential disruptions. It's often considered when there's a clear deadline. 39 | 40 | ### 14. What is the "staged" migration approach? 41 | The "staged" approach involves migrating applications or components in stages, allowing for gradual adoption and risk mitigation. 42 | 43 | ### 15. How does the "strangler" migration pattern work? 
44 | The "strangler" pattern involves gradually replacing components of an existing application with cloud-native components until the entire application is migrated. 45 | 46 | ### 16. What role does automation play in cloud migration? 47 | Automation streamlines the migration process by reducing manual tasks, ensuring consistency, and accelerating deployments. 48 | 49 | ### 17. How do you ensure security during cloud migration? 50 | Security should be considered at every stage of migration. Ensure data encryption, access controls, compliance, and monitoring are in place. 51 | 52 | ### 18. How can you handle application dependencies during migration? 53 | Understanding application dependencies is crucial. You can use tools to map dependencies and ensure that all necessary components are migrated together. 54 | 55 | ### 19. What is the "lift and reshape" strategy? 56 | The "lift and reshape" strategy involves moving applications to the cloud and then making necessary adjustments for better cloud optimization and cost savings. 57 | 58 | ### 20. What is the importance of testing in cloud migration? 59 | Testing helps identify issues, validate performance, and ensure the migrated applications function as expected in the new cloud environment. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/RDS.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon RDS? 2 | Amazon RDS is a managed relational database service that simplifies database setup, operation, and scaling. It supports various database engines like MySQL, PostgreSQL, Oracle, SQL Server, and Amazon Aurora. 3 | 4 | ### 2. How does Amazon RDS work? 5 | Amazon RDS automates common database management tasks such as provisioning, patching, backup, recovery, and scaling. It allows you to focus on your application without managing the underlying infrastructure. 6 | 7 | ### 3. What are the key features of Amazon RDS? 8 | Amazon RDS offers automated backups, automated software patching, high availability through Multi-AZ deployments, read replicas for scaling read operations, and the ability to create custom database snapshots. 9 | 10 | ### 4. What is Multi-AZ deployment in Amazon RDS? 11 | Multi-AZ deployment is a feature that provides high availability by automatically maintaining a standby replica in a different Availability Zone (AZ). If the primary database fails, the standby replica is promoted. 12 | 13 | ### 5. How can you improve read performance in Amazon RDS? 14 | You can improve read performance by creating read replicas. Read replicas replicate data from the primary database and can be used to distribute read traffic. 15 | 16 | ### 6. What is Amazon Aurora? 17 | Amazon Aurora is a MySQL and PostgreSQL-compatible relational database engine that provides high performance, availability, and durability. It's designed to be compatible with these engines while offering improved performance and features. 18 | 19 | ### 7. What is the purpose of the RDS option group? 20 | An RDS option group is a collection of database engine-specific settings that can be applied to your DB instance. It allows you to configure features and settings that are not enabled by default. 21 | 22 | ### 8. How can you encrypt data in Amazon RDS? 23 | You can encrypt data at rest and in transit in Amazon RDS. Data at rest can be encrypted using Amazon RDS encryption or Amazon Aurora encryption, while data in transit can be encrypted using SSL. 
24 | 25 | ### 9. What is a DB parameter group in Amazon RDS? 26 | A DB parameter group is a collection of database engine configuration values that can be applied to one or more DB instances. It allows you to customize database settings. 27 | 28 | ### 10. How can you monitor Amazon RDS instances? 29 | Amazon RDS provides metrics and logs through Amazon CloudWatch. You can set up alarms based on these metrics to get notified of performance issues. 30 | 31 | ### 11. What is the difference between Amazon RDS and Amazon DynamoDB? 32 | Amazon RDS is a managed relational database service, while Amazon DynamoDB is a managed NoSQL database service. RDS supports SQL databases like MySQL and PostgreSQL, while DynamoDB is designed for fast and flexible NoSQL data storage. 33 | 34 | ### 12. How can you take backups of Amazon RDS databases? 35 | Amazon RDS provides automated backups. You can also create manual backups or snapshots using the AWS Management Console, AWS CLI, or APIs. 36 | 37 | ### 13. Can you change the DB instance type for an existing Amazon RDS instance? 38 | Yes, you can modify the DB instance type for an existing Amazon RDS instance using the AWS Management Console, AWS CLI, or API. 39 | 40 | ### 14. What is the purpose of the RDS Read Replica? 41 | An RDS Read Replica is a copy of a source DB instance that can be used to offload read traffic from the primary instance. It enhances read scalability and can be in a different region than the source. 42 | 43 | ### 15. How can you replicate data between Amazon RDS and on-premises databases? 44 | You can use Amazon Database Migration Service (DMS) to replicate data between Amazon RDS and on-premises databases. DMS supports various migration scenarios. 45 | 46 | ### 16. What is the maximum storage capacity for an Amazon RDS instance? 47 | The maximum storage capacity for an Amazon RDS instance depends on the database engine and instance type. It can range from a few gigabytes to several terabytes. 48 | 49 | ### 17. How can you restore an Amazon RDS instance from a snapshot? 50 | You can restore an Amazon RDS instance from a snapshot using the AWS Management Console, AWS CLI, or APIs. The restored instance will have the data from the snapshot. 51 | 52 | ### 18. What is the significance of the RDS DB Subnet Group? 53 | An RDS DB Subnet Group is used to specify the subnets where you want to place your DB instances in a VPC. It helps determine the network availability for your database. 54 | 55 | ### 19. How does Amazon RDS handle automatic backups? 56 | Amazon RDS automatically performs backups according to the backup retention period you set. Backups are stored in Amazon S3 and can be used for restoration. 57 | 58 | ### 20. Can you run custom scripts or install custom software on Amazon RDS instances? 59 | Amazon RDS is a managed service that abstracts the underlying infrastructure, so you can't directly access the operating system. However, you can use parameter groups and option groups to configure certain settings. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/ROUTE53.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon Route 53? 2 | Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service that helps route end-user requests to AWS resources or external endpoints. 3 | 4 | ### 2. What is DNS? 
5 | DNS (Domain Name System) is a system that translates human-readable domain names into IP addresses, allowing computers to locate resources on the internet. 6 | 7 | ### 3. How does Amazon Route 53 work? 8 | Amazon Route 53 manages the DNS records for your domain, allowing you to associate domain names with resources such as EC2 instances, S3 buckets, and load balancers. 9 | 10 | ### 4. What are the types of routing policies in Amazon Route 53? 11 | Amazon Route 53 offers several routing policies, including Simple, Weighted, Latency, Failover, Geolocation, and Multi-Value. 12 | 13 | ### 5. What is the purpose of the Simple routing policy in Route 53? 14 | The Simple routing policy directs traffic to a single resource, such as an IP address or an Amazon S3 bucket, without any logic or decision-making. 15 | 16 | ### 6. How does the Weighted routing policy work in Route 53? 17 | The Weighted routing policy allows you to distribute traffic across multiple resources based on assigned weights. You can control the distribution of traffic based on proportions. 18 | 19 | ### 7. What is the Latency routing policy in Amazon Route 53? 20 | The Latency routing policy directs traffic to the AWS region with the lowest latency for a given user, improving the user experience by minimizing response times. 21 | 22 | ### 8. How does the Failover routing policy work? 23 | The Failover routing policy directs traffic to a primary resource and fails over to a secondary resource if the primary resource becomes unavailable. 24 | 25 | ### 9. What is the Geolocation routing policy? 26 | The Geolocation routing policy directs traffic based on the geographic location of the user, allowing you to route users to the nearest or most appropriate resource. 27 | 28 | ### 10. What is the Multi-Value routing policy? 29 | The Multi-Value routing policy allows you to associate multiple resources with a single DNS name and return multiple IP addresses in a random or weighted manner. 30 | 31 | ### 11. How can you route traffic to an AWS resource using Route 53? 32 | To route traffic to an AWS resource, you create DNS records, such as A records for IPv4 addresses and Alias records for AWS resources like ELB, S3, and CloudFront distributions. 33 | 34 | ### 12. Can Route 53 route traffic to non-AWS resources? 35 | Yes, Route 53 can route traffic to resources outside of AWS by using the simple routing policy to direct traffic to IP addresses or domain names. 36 | 37 | ### 13. How can you ensure high availability using Route 53? 38 | Route 53 provides health checks to monitor the health of resources and can automatically fail over to healthy resources in case of failures. 39 | 40 | ### 14. What are health checks in Amazon Route 53? 41 | Health checks in Route 53 monitor the health and availability of your resources by periodically sending requests and verifying the responses. 42 | 43 | ### 15. How can you configure a custom domain for an Amazon S3 bucket using Route 53? 44 | You can create an Alias record in Route 53 that points to the static website hosting endpoint of the S3 bucket, allowing you to use a custom domain for your S3 bucket. 45 | 46 | ### 16. What is a DNS alias record? 47 | An alias record is a Route 53-specific DNS record that allows you to route traffic directly to an AWS resource, such as an ELB, CloudFront distribution, or S3 bucket. 48 | 49 | ### 17. How can you migrate a domain to Amazon Route 53? 
50 | To migrate a domain to Route 53, you update your domain's DNS settings to use Route 53's name servers and then recreate your DNS records within the Route 53 console. 51 | 52 | ### 18. How does Route 53 support domain registration? 53 | Route 53 allows you to register new domain names, manage existing domain names, and associate them with resources and services within your AWS account. 54 | 55 | ### 19. How can you use Route 53 to set up a global website? 56 | You can use the Geolocation routing policy to route users to different resources based on their geographic location, creating a global website with reduced latency. 57 | 58 | ### 20. What is Route 53 Resolver? 59 | Route 53 Resolver is a service that provides DNS resolution across Amazon VPCs and on-premises networks, enabling hybrid network configurations. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/S3.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon S3? 2 | Amazon Simple Storage Service (Amazon S3) is a scalable object storage service designed to store and retrieve any amount of data from anywhere on the web. It's commonly used to store files, backups, images, videos, and more. 3 | 4 | ### 2. What are the key features of Amazon S3? 5 | Amazon S3 offers features like data durability, high availability, security options, scalable storage, and the ability to store data in different storage classes based on access patterns. 6 | 7 | ### 3. What is an S3 bucket? 8 | An S3 bucket is a container for storing objects, which can be files, images, videos, and more. Each object in S3 is identified by a unique key within a bucket. 9 | 10 | ### 4. How can you control access to objects in S3? 11 | Access to S3 objects can be controlled using bucket policies, access control lists (ACLs), and IAM (Identity and Access Management) policies. You can define who can read, write, and delete objects. 12 | 13 | ### 5. What is the difference between S3 Standard, S3 Intelligent-Tiering, and S3 One Zone-IA storage classes? 14 | - S3 Standard: Offers high durability, availability, and performance. 15 | - S3 Intelligent-Tiering: Automatically moves objects between two access tiers based on changing access patterns. 16 | - S3 One Zone-IA: Stores objects in a single availability zone with lower storage costs, but without the multi-AZ resilience of S3 Standard. 17 | 18 | ### 6. How does S3 provide data durability? 19 | S3 provides 99.999999999% (11 9's) durability by automatically replicating objects across multiple facilities within a region. 20 | 21 | ### 7. What is Amazon S3 Glacier used for? 22 | Amazon S3 Glacier is a storage service designed for data archiving. It offers lower-cost storage with retrieval times ranging from minutes to hours. 23 | 24 | ### 8. How can you secure data in Amazon S3? 25 | You can secure data in Amazon S3 by using access control mechanisms, like bucket policies and IAM policies, and by enabling encryption using server-side encryption or client-side encryption. 26 | 27 | ### 9. What is S3 versioning? 28 | S3 versioning is a feature that allows you to preserve, retrieve, and restore every version of every object in a bucket. It helps protect against accidental deletion and overwrites. 29 | 30 | ### 10. What is a pre-signed URL in S3? 31 | A pre-signed URL is a URL that grants temporary access to an S3 object. 
It can be generated using your AWS credentials and shared with others to provide temporary access. 32 | 33 | ### 11. How can you optimize costs in Amazon S3? 34 | You can optimize costs by using storage classes that match your data access patterns, utilizing lifecycle policies to transition objects to less expensive storage tiers, and setting up cost allocation tags for billing visibility. 35 | 36 | ### 12. What is S3 Cross-Region Replication? 37 | S3 Cross-Region Replication is a feature that automatically replicates objects from one S3 bucket in one AWS region to another bucket in a different region. 38 | 39 | ### 13. How can you automate the movement of objects between different storage classes? 40 | You can use S3 Lifecycle policies to automate the transition of objects between storage classes based on predefined rules and time intervals. 41 | 42 | ### 14. What is the purpose of S3 event notifications? 43 | S3 event notifications allow you to trigger AWS Lambda functions or SQS queues when certain events, like object creation or deletion, occur in an S3 bucket. 44 | 45 | ### 15. What is the AWS Snowball device? 46 | The AWS Snowball is a physical data transport solution used for migrating large amounts of data into and out of AWS. It's ideal for scenarios where the network transfer speed is not sufficient. 47 | 48 | ### 16. What is Amazon S3 Select? 49 | Amazon S3 Select is a feature that allows you to retrieve specific data from an object using SQL-like queries, without the need to retrieve the entire object. 50 | 51 | ### 17. What is the difference between Amazon S3 and Amazon EBS? 52 | Amazon S3 is object storage used for storing files, while Amazon EBS (Elastic Block Store) is block storage used for attaching to EC2 instances as volumes. 53 | 54 | ### 18. How can you enable server access logging in Amazon S3? 55 | You can enable server access logging to track all requests made to your bucket. The logs are stored in a target bucket and can help analyze access patterns. 56 | 57 | ### 19. What is S3 Transfer Acceleration? 58 | S3 Transfer Acceleration is a feature that speeds up transferring files to and from Amazon S3 by utilizing Amazon CloudFront's globally distributed edge locations. 59 | 60 | ### 20. How can you replicate data between S3 buckets within the same region? 61 | You can use S3 Cross-Region Replication to replicate data between S3 buckets within the same region by specifying the same source and destination region. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/SYSTEMS MANAGER.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ### 1. What is AWS Systems Manager? 4 | AWS Systems Manager is a service that provides centralized management for AWS resources, helping you automate tasks, manage configurations, and improve overall operational efficiency. 5 | 6 | ### 2. What are some key components of AWS Systems Manager? 7 | Key components of AWS Systems Manager include Run Command, State Manager, Automation, Parameter Store, Patch Manager, OpsCenter, and Distributor. 8 | 9 | ### 3. What is the purpose of AWS Systems Manager Parameter Store? 10 | AWS Systems Manager Parameter Store is a secure storage service that allows you to store and manage configuration data, such as passwords, database strings, and API keys. 11 | 12 | ### 4. How can you use Run Command in AWS Systems Manager? 
13 | Run Command allows you to remotely manage instances by running commands without requiring direct access. It's useful for tasks like software installations or updates. 14 | 15 | ### 5. What is State Manager in AWS Systems Manager? 16 | State Manager helps you define and maintain consistent configurations for your instances over time, ensuring they comply with your desired state. 17 | 18 | ### 6. How does Automation work in AWS Systems Manager? 19 | Automation enables you to create workflows for common maintenance and deployment tasks. It uses documents to define the steps required to achieve specific outcomes. 20 | 21 | ### 7. What is Patch Manager in AWS Systems Manager? 22 | Patch Manager helps you automate the process of patching instances with the latest security updates, allowing you to keep your instances up-to-date and secure. 23 | 24 | ### 8. How can you manage inventory using AWS Systems Manager? 25 | Systems Manager Inventory allows you to collect metadata about instances and applications, helping you track changes, perform audits, and maintain compliance. 26 | 27 | ### 9. What is the difference between Systems Manager Parameter Store and Secrets Manager? 28 | Parameter Store is designed for storing configuration data, while Secrets Manager is designed for securely storing and managing sensitive information like passwords and API keys. 29 | 30 | ### 10. How can you use AWS Systems Manager to automate instance configuration? 31 | You can use State Manager to define a desired state for your instances, ensuring that they have the necessary configurations and software. 32 | 33 | ### 11. What are AWS Systems Manager documents? 34 | Documents are pre-defined or custom scripts that define the steps for performing tasks using Systems Manager. They can be used with Automation, Run Command, and State Manager. 35 | 36 | ### 12. How can you schedule automated tasks with AWS Systems Manager? 37 | You can use Maintenance Windows in Systems Manager to define schedules for executing tasks across your fleet of instances. 38 | 39 | ### 13. What is the purpose of Distributor in AWS Systems Manager? 40 | Distributor is a feature that allows you to package and distribute software packages to your instances, making it easier to manage software deployments. 41 | 42 | ### 14. How can you use AWS Systems Manager to manage compliance? 43 | You can use Compliance Manager to assess and monitor the compliance of your instances against predefined or custom policies. 44 | 45 | ### 15. What is the OpsCenter feature in AWS Systems Manager? 46 | OpsCenter helps you manage and resolve operational issues by providing a central place to view, investigate, and take action on operational tasks and incidents. 47 | 48 | ### 16. How can you integrate AWS Systems Manager with other AWS services? 49 | AWS Systems Manager integrates with services like CloudWatch, Lambda, and Step Functions to enable more advanced automation and orchestration. 50 | 51 | ### 17. Can AWS Systems Manager be used with on-premises resources? 52 | Yes, AWS Systems Manager can be used to manage both AWS resources and on-premises resources by installing the necessary agent on your servers. 53 | 54 | ### 18. How does AWS Systems Manager help with troubleshooting? 55 | Systems Manager provides features like Run Command, Session Manager, and Automation to remotely access instances for troubleshooting and maintenance tasks. 56 | 57 | ### 19. What is the Session Manager feature in AWS Systems Manager? 
58 | Session Manager allows you to start interactive sessions with your instances without requiring SSH or RDP access, enhancing security and control. 59 | 60 | ### 20. How can you secure data stored in AWS Systems Manager Parameter Store? 61 | You can use IAM policies to control who has access to Parameter Store parameters and implement encryption at rest using KMS keys. 62 | -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/TERRAFORM.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Terraform? 2 | Terraform is an open-source Infrastructure as Code (IaC) tool that allows you to define, manage, and provision infrastructure resources using declarative code. 3 | 4 | ### 2. How does Terraform work with AWS? 5 | Terraform interacts with the AWS API to create and manage resources based on the configurations defined in Terraform files. 6 | 7 | ### 3. What is an AWS provider in Terraform? 8 | An AWS provider in Terraform is a plugin that allows Terraform to interact with AWS services by making API calls. 9 | 10 | ### 4. How do you define resources in Terraform? 11 | Resources are defined in Terraform using HashiCorp Configuration Language (HCL) syntax in `.tf` files. Each resource type corresponds to an AWS service. 12 | 13 | ### 5. What is a Terraform state file? 14 | The Terraform state file maintains the state of the resources managed by Terraform. It's used to track the actual state of the infrastructure. 15 | 16 | ### 6. How can you initialize a Terraform project? 17 | You can initialize a Terraform project using the `terraform init` command. It downloads required provider plugins and initializes the backend. 18 | 19 | ### 7. How do you plan infrastructure changes in Terraform? 20 | You can use the `terraform plan` command to see the changes that Terraform will apply to your infrastructure before actually applying them. 21 | 22 | ### 8. What is the `terraform apply` command used for? 23 | The `terraform apply` command applies the changes defined in your Terraform configuration to your infrastructure. It creates, updates, or deletes resources as needed. 24 | 25 | ### 9. What is the purpose of Terraform variables? 26 | Terraform variables allow you to parameterize your configurations, making them more flexible and reusable across different environments. 27 | 28 | ### 10. How do you manage secrets and sensitive information in Terraform? 29 | Sensitive information should be stored in environment variables or external systems like AWS Secrets Manager. You can use variables to reference these values in Terraform. 30 | 31 | ### 11. What is remote state in Terraform? 32 | Remote state in Terraform refers to storing the state file on a remote backend, such as Amazon S3, instead of locally. This facilitates collaboration and enables locking. 33 | 34 | ### 12. How can you manage multiple environments (dev, prod) with Terraform? 35 | You can use Terraform workspaces or create separate directories for each environment, each with its own state file and variables. 36 | 37 | ### 13. How do you handle dependencies between resources in Terraform? 38 | Terraform automatically handles dependencies based on the resource definitions in your configuration. It will create resources in the correct order. 39 | 40 | ### 14. What is Terraform's "apply" process? 
41 | The "apply" process in Terraform involves comparing the desired state from your configuration to the current state, generating an execution plan, and then applying the changes. 42 | 43 | ### 15. How can you manage versioning of Terraform configurations? 44 | You can use version control systems like Git to track changes to your Terraform configurations. Additionally, Terraform Cloud and Enterprise offer versioning features. 45 | 46 | ### 16. What is the difference between Terraform and CloudFormation? 47 | Terraform is a multi-cloud IaC tool that supports various cloud providers, including AWS. CloudFormation is AWS-specific and focuses on AWS resource provisioning. 48 | 49 | ### 17. What is a Terraform module? 50 | A Terraform module is a reusable set of configurations that can be used to create multiple resources with a consistent configuration. 51 | 52 | ### 18. How can you destroy infrastructure created by Terraform? 53 | You can use the `terraform destroy` command to remove all resources defined in your Terraform configuration. 54 | 55 | ### 19. How does Terraform manage updates to existing resources? 56 | Terraform applies updates by modifying existing resources rather than recreating them. This helps preserve data and configurations. 57 | 58 | ### 20. Can Terraform be used for managing third-party resources? 59 | Yes, Terraform has the capability to manage resources beyond AWS. It supports multiple providers, making it versatile for managing various cloud and on-premises resources. -------------------------------------------------------------------------------- /013 - AWS-Interview Preparation/VPC.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon Virtual Private Cloud (VPC)? 2 | Amazon VPC is a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. It allows you to control your network environment, including IP addresses, subnets, and security settings. 3 | 4 | ### 2. What are the key components of Amazon VPC? 5 | Key components of Amazon VPC include subnets, route tables, network access control lists (ACLs), security groups, and Virtual Private Gateways (VPGs). 6 | 7 | ### 3. How does Amazon VPC work? 8 | Amazon VPC enables you to create a private and secure network within AWS. You define IP ranges for your VPC, create subnets, and configure network security. 9 | 10 | ### 4. What are VPC subnets? 11 | VPC subnets are segments of the VPC's IP address range. They allow you to isolate resources and control access by creating public and private subnets. 12 | 13 | ### 5. How can you connect your on-premises network to Amazon VPC? 14 | You can establish a Virtual Private Network (VPN) connection or use AWS Direct Connect to connect your on-premises network to Amazon VPC. 15 | 16 | ### 6. What is a VPC peering connection? 17 | VPC peering allows you to connect two VPCs together, enabling resources in different VPCs to communicate as if they were on the same network. 18 | 19 | ### 7. What is a route table in Amazon VPC? 20 | A route table defines the rules for routing traffic within a VPC. It determines how traffic is directed between subnets and to external destinations. 21 | 22 | ### 8. How do security groups work in Amazon VPC? 23 | Security groups act as virtual firewalls for your instances, controlling inbound and outbound traffic. They can be associated with instances and control their network access. 24 | 25 | ### 9. 
What are network access control lists (ACLs) in Amazon VPC? 26 | Network ACLs are stateless filters that control inbound and outbound traffic at the subnet level. They provide an additional layer of security to control traffic flow. 27 | 28 | ### 10. How can you ensure private communication between instances in Amazon VPC? 29 | You can create private subnets and configure security groups to allow communication only between instances within the same subnet, enhancing network security. 30 | 31 | ### 11. What is the default VPC in Amazon Web Services? 32 | The default VPC is a pre-configured VPC that is created for your AWS account in each region. It simplifies instance launch but doesn't provide the same level of isolation as custom VPCs. 33 | 34 | ### 12. Can you peer VPCs in different regions? 35 | No, VPC peering is limited to VPCs within the same region. To connect VPCs across regions, you would need to use VPN or AWS Direct Connect. 36 | 37 | ### 13. How can you control public and private IP addresses in Amazon VPC? 38 | Amazon VPC allows you to allocate private IP addresses to instances automatically. Public IP addresses can be associated with instances launched in public subnets. 39 | 40 | ### 14. What is a VPN connection in Amazon VPC? 41 | A VPN connection allows you to securely connect your on-premises network to your Amazon VPC using encrypted tunnels over the public internet. 42 | 43 | ### 15. What is an Internet Gateway (IGW) in Amazon VPC? 44 | An Internet Gateway enables instances in your VPC to access the internet and allows internet traffic to reach instances in your VPC. 45 | 46 | ### 16. How can you ensure high availability in Amazon VPC? 47 | You can design your VPC with subnets across multiple Availability Zones (AZs) to ensure that your resources remain available in the event of an AZ outage. 48 | 49 | ### 17. How does Amazon VPC provide isolation? 50 | Amazon VPC provides isolation by allowing you to define and manage your own virtual network environment, including subnets, route tables, and network ACLs. 51 | 52 | ### 18. Can you modify a VPC after creation? 53 | While you can modify certain attributes of a VPC, such as its IP address range and subnets, some attributes are immutable, like the VPC's CIDR block. 54 | 55 | ### 19. What is a default route in Amazon VPC? 56 | A default route in a route table directs traffic to the Internet Gateway (IGW), allowing instances in public subnets to communicate with the internet. 57 | 58 | ### 20. What is the purpose of the Amazon VPC Endpoint? 59 | An Amazon VPC Endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services without needing an internet gateway or VPN connection. -------------------------------------------------------------------------------- /AWS-Introduction.md: -------------------------------------------------------------------------------- 1 | # AWS 2 | ![download (1)](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/dfd706d3-bcff-49bb-a255-eb8c0f187e92) 3 | 4 | In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. 
Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster. 5 | 6 | Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world. 7 | 8 | 9 | 10 | ## What is Internet? 11 | A global computer network using standardized communication protocols (e.g. UDP, TCP/IP) providing information and communication facilities. 12 | 13 | ![Internet-image-(2)](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/5c526f76-eb12-4110-8a53-b7704f0803b9) 14 | 15 | ## Local Network: Private network in LAN (Local Area Network) 16 | 17 | ## Virtualization: 18 | Run multiple OSs on a host machine (Type 1: BareMetal, Uses Hypervisor OS (e.g. ESXi) and Type 2: Application running on another base OS (e.g. vmware workstation)). 19 | 20 | ## Virtual Machine: 21 | Software representation of virtual computer as set of files! Easy 22 | to move, independent of hardware, Effective utilization of resources. We can do 23 | virtual networking between VMs. 24 | ![illustration-of-the-concept-of-Virtualization-7](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/a4694d66-6610-4351-844c-c0dc2ed75824) 25 | 26 | ## Data Center: 27 | Data centers are simply centralized locations where computing and 28 | networking equipment is concentrated for the purpose of collecting, storing, 29 | processing, distributing or allowing access to large amounts of data. 30 | ## What is Cloud? 31 | There is now Cloud, it is someone else's computer accessible over the Internet! Virtual Machine running on a cloud server is the most widely used way of hosting any applications online. 32 | 33 | ## Cloud Computing: 34 | On demand delivery of IT resources via the Interment, Instead of setting up Physical DCs, we can access Compute, Network, Storage on demand basis from Cloud providers. Use cases: Data backup, DRS, Email, Virtual Desktop, Software Development, Testing, Web Apps, Online Gaming, IoT, etc. Worldwide availability 35 | 36 | 37 | 38 | ## What is Amazon Web Services (AWS)? 39 | ![aws-introduction](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/5dd7a5ea-4a4c-432b-a889-d7525cf4103c) 40 |
  1. Amazon Web Services (AWS) is a comprehensive and widely used cloud computing platform provided by Amazon.
  2. It offers a broad range of cloud services, including computing power, storage, databases, networking, machine learning, artificial intelligence, analytics, security, and more.
  3. AWS provides Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) offerings.
  4. It allows businesses to provision resources on-demand, scale their applications, and pay only for what they use, without the need for upfront investments in hardware or infrastructure.
  5. AWS has a global presence with multiple regions and Availability Zones, providing high availability and disaster recovery options.
  6. Key services provided by AWS include Amazon EC2 (Elastic Compute Cloud) for virtual servers, Amazon S3 (Simple Storage Service) for object storage, Amazon RDS (Relational Database Service) for managed databases, and AWS Lambda for serverless computing.
  7. AWS offers various data storage options, including Amazon EBS (Elastic Block Store), Amazon S3, Amazon S3 Glacier for long-term archival storage, and Amazon Elastic File System (EFS) for scalable file storage.
  8. AWS provides a wide range of database services, such as Amazon RDS for relational databases, Amazon DynamoDB for NoSQL databases, Amazon Redshift for data warehousing, and Amazon Aurora for high-performance and scalable databases.
  9. It offers services for content delivery (Amazon CloudFront), networking (Amazon VPC), messaging and queuing (Amazon SQS and Amazon SNS), and many more.
  10. AWS provides a comprehensive set of security features and compliance certifications to ensure the protection of data and resources hosted on the platform.
  11. It offers management and monitoring tools, such as AWS CloudFormation for infrastructure provisioning, Amazon CloudWatch for monitoring, and AWS Trusted Advisor for optimizing resources and costs.
  12. AWS provides development tools, SDKs, and APIs for building, deploying, and managing applications on the platform (see the short CLI example after this list).
  13. AWS has a vibrant ecosystem with a vast marketplace of third-party solutions, integration partners, and consulting services to support customers in their cloud journey.
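
For illustration only (assuming the AWS CLI is installed and credentials have been configured with `aws configure`), a first hands-on interaction with these APIs can be as simple as:

```
# List the S3 buckets in your account and the regions available to it.
aws s3 ls
aws ec2 describe-regions --output table
```
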
54 | 55 | 56 | -------------------------------------------------------------------------------- /DevOps-Introduction.md: -------------------------------------------------------------------------------- 1 | # What is DevOps 2 | 3 | DevOps, a combination of "development" and "operations," is a set of practices and cultural philosophies that aim to enhance collaboration and communication between software development teams and IT operations teams. It promotes the automation and integration of software development and IT operations processes to deliver software applications more rapidly, reliably, and efficiently. 4 | 5 | ![0_NXlBkHolIQvfO_Rr](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/2c222767-688a-4d35-b370-f7f7bf5924e0) 6 | 7 | ## Key Aspects of DevOps 8 | 9 | ### Collaboration: 10 | DevOps emphasizes collaboration and breaking down silos between development teams and operations teams. It encourages effective communication, shared goals, and mutual accountability. 11 | 12 | ### Continuous Integration and Continuous Delivery (CI/CD): 13 | DevOps promotes the use of automation and tools to enable continuous integration and delivery. This involves frequent code integration, automated testing, and deployment to production environments. 14 | 15 | ### Automation: 16 | DevOps emphasizes automating repetitive tasks such as infrastructure provisioning, code testing, and deployment processes. Automation reduces manual errors, increases efficiency, and enables faster and more reliable software releases. 17 | 18 | ### Infrastructure as Code (IaC): 19 | DevOps advocates for managing infrastructure environments using code. Infrastructure resources such as servers, networks, and databases are defined and managed programmatically, allowing for version control and repeatability. 20 | 21 | ### Monitoring and Feedback Loops: 22 | DevOps encourages continuous monitoring of software applications and infrastructure. This helps identify issues, bottlenecks, and enables timely feedback for improvement and efficient incident response. 23 | 24 | ### DevOps Tools: 25 | Various tools and technologies support DevOps practices, including Git for version control, Jenkins and Travis CI for continuous integration, Ansible, Chef, and Puppet for configuration management, and Docker and Kubernetes for containerization. 26 | 27 | ### List of Top DevOps Tools 28 |
  1. GIT
  2. Docker
  3. Kubernetes
  4. Terraform (see the short workflow sketch after this list)
  5. Jenkins
  6. Ansible
  7. Maven
  8. Prometheus and Grafana
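
A minimal sketch of the Infrastructure as Code workflow with Terraform, one of the tools listed above (run inside a directory that already contains `.tf` configuration files):

```
terraform init    # download provider plugins and initialize the backend
terraform plan    # preview the changes Terraform would make
terraform apply   # create or update the declared resources
```
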
38 | 39 | 40 | ### Agile Principles with DevOps: 41 | DevOps aligns with the agile software development methodology. It emphasizes iterative development, collaboration, and the ability to respond quickly to changing requirements. DevOps extends agile practices into deployment and operations phases. 42 | 43 | ![Agile-DevOps](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/c9dd2a6b-7a30-4df3-98f1-3acd7d7293f6) 44 | 45 | By adopting DevOps practices, organizations streamline the software development and delivery process, reduce time-to-market, improve software quality, increase operational efficiency, and foster a culture of collaboration and continuous improvement. It enables teams to respond effectively to customer needs and market demands while maintaining stability and reliability in the software delivery pipeline. 46 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## AWS and DevOps Documentation 2 | 3 | 4 |   5 |   6 |   7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 |
| Table Of Content | Links |
| --- | --- |
| Introduction to DevOps | DevOps |
| Introduction to SDLC | SDLC |
| Introduction to AWS | AWS |
| Linux | Linux |
| Bash Scripting | BashScripting |
| Networking | Networking |
| GIT | GIT |
| Docker | Docker |
| Jenkins | Jenkins |
| AWS-Services | AWS-Services |
| Terraform | Terraform |
| Kubernetes | Kubernetes |
| Prometheus-and-Grafana | Prometheus-Grafana |
| Projects & UseCases | Projects |
| AWS-Interview-Questions&Answers | AWS-Interview-Preparation |
83 | 84 | 85 | -------------------------------------------------------------------------------- /SDLC.md: -------------------------------------------------------------------------------- 1 | # Software Development Life Cycle (SDLC) 2 | 3 | Software Development Life Cycle (SDLC) is a structured framework used in software engineering to guide the development process of software applications. It consists of several phases, each with specific objectives and deliverables. 4 | 5 | ## Phases of SDLC: 6 | 7 | ![sdlc](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/5c175491-e368-40d7-b2eb-1401a980d45d) 8 | 9 | ### 1. Planning 10 | In the planning phase, project goals, requirements, and constraints are defined. This involves gathering business requirements, conducting feasibility studies, and creating a project plan. 11 | 12 | ### 2. Design 13 | The design phase involves creating a blueprint for the software. This includes system architecture, database design, user interface design, and other technical specifications. 14 | 15 | ### 3. Development 16 | In the development phase, actual coding of the software takes place. Programmers write code according to the design specifications and follow coding standards. 17 | 18 | ### 4. Testing 19 | Testing is a crucial phase where the software is evaluated for defects, bugs, and compliance with requirements. This includes unit testing, integration testing, system testing, and acceptance testing. 20 | 21 | ### 5. Deployment 22 | Once the software passes testing, it is deployed for end-users. This may involve installation on servers, distribution through app stores, or other deployment methods. 23 | 24 | ### 6. Maintenance 25 | After deployment, the software enters the maintenance phase. This involves addressing user feedback, fixing bugs, making updates, and ensuring the software remains functional. 26 | 27 | # Agile Methodology 28 | 29 | Agile is an iterative and incremental approach to software development that emphasizes flexibility, collaboration, and customer satisfaction. 30 | 31 | ![agile](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/6ac494d8-4609-44f1-86ed-3e4f933e0f01) 32 | 33 | ## Key Principles of Agile: 34 | 35 | 1. **Individuals and Interactions:** Prioritize face-to-face communication and collaboration within the team. 36 | 37 | 2. **Working Software:** Focus on delivering functional software in small increments, rather than extensive documentation. 38 | 39 | 3. **Customer Collaboration:** Involve customers in the development process to gather feedback and adapt to changing requirements. 40 | 41 | 4. **Responding to Change:** Embrace changes in requirements, even late in the development process, to deliver a better product. 42 | 43 | # Scrum Framework 44 | 45 | Scrum is a popular Agile framework that organizes work into small, time-bound iterations called sprints. 46 | 47 | ## Roles in Scrum: 48 | 49 | ### Product Owner 50 | The Product Owner represents the stakeholders and is responsible for defining and prioritizing the features or user stories that need to be developed. 51 | 52 | ### Scrum Master 53 | The Scrum Master facilitates the Scrum process, helps the team remove impediments, and ensures that Scrum practices are followed. 54 | 55 | ### Development Team 56 | The Development Team is responsible for designing, coding, testing, and delivering increments of product functionality. 
57 | 58 | ## Scrum Artifacts: 59 | ![scrum](https://github.com/zen-class/zen-class-devops-documentation/assets/36299748/005d5e8e-fe33-4c30-a807-50750ac709a4) 60 | 61 | ### Product Backlog 62 | The Product Backlog is a dynamic list of features, enhancements, and bug fixes prioritized by business value. 63 | 64 | ### Sprint Backlog 65 | The Sprint Backlog is a subset of items from the Product Backlog selected for implementation in the current sprint. 66 | 67 | ### Increment 68 | The Increment is the sum of all completed items from previous sprints, providing a potentially shippable product. 69 | 70 | ## Scrum Events: 71 | 72 | ### Sprint Planning 73 | At the beginning of each sprint, the team plans the work to be done during the sprint, selecting items from the Product Backlog. 74 | 75 | ### Daily Scrum 76 | A short daily meeting where team members synchronize their work and plan for the day. 77 | 78 | ### Sprint Review 79 | At the end of each sprint, the team demonstrates the completed work to stakeholders for feedback. 80 | 81 | ### Sprint Retrospective 82 | A reflection meeting at the end of each sprint to discuss what went well, what could be improved, and how to implement those improvements. 83 | --------------------------------------------------------------------------------