├── shellscripts
│   ├── Script19.sh
│   ├── Script14.sh
│   ├── Script16.sh
│   ├── Script15.sh
│   ├── Script17.sh
│   ├── Script7.sh
│   ├── Script6.sh
│   ├── Script10.sh
│   ├── Script5.sh
│   ├── Script9.sh
│   ├── Script4.sh
│   ├── Script2.sh
│   ├── Script3.sh
│   ├── Script18.sh
│   ├── Script1.sh
│   ├── Script11.sh
│   ├── Script8.sh
│   ├── Script12.sh
│   └── Script13.sh
├── AMI.png
├── EFS.png
├── OSI.png
├── VPC.png
├── EBS types.png
├── snapshots.png
├── Enviroments.png
├── IAM_delegation.png
├── JFrogRepoModel.png
├── LinuxFilesTree.png
├── usr-file-search.png
├── Network_Structure.png
├── process-file-disk.png
├── file-sys-net-commands.png
├── on-premises-cloud-base-pros-cons.png
├── README.md
├── 9.VPC.txt
├── 2.Cloud Computing.txt
├── 7.S3 Bucket.txt
├── 8.IAM.txt
├── 1.Basics(Models+networking).txt
├── 4.Bash.txt
├── 5.AWS.txt
├── 6.EC2.txt
└── 3.Linux.txt

/shellscripts/Script19.sh:
--------------------------------------------------------------------------------
new script
--------------------------------------------------------------------------------

/AMI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/AMI.png
--------------------------------------------------------------------------------

/EFS.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/EFS.png
--------------------------------------------------------------------------------

/OSI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/OSI.png
--------------------------------------------------------------------------------

/VPC.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/VPC.png
--------------------------------------------------------------------------------

/EBS types.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/EBS types.png
--------------------------------------------------------------------------------

/snapshots.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/snapshots.png
--------------------------------------------------------------------------------

/Enviroments.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/Enviroments.png
--------------------------------------------------------------------------------

/IAM_delegation.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/IAM_delegation.png
--------------------------------------------------------------------------------

/JFrogRepoModel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/JFrogRepoModel.png
--------------------------------------------------------------------------------

/LinuxFilesTree.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/LinuxFilesTree.png
--------------------------------------------------------------------------------
/usr-file-search.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/usr-file-search.png
--------------------------------------------------------------------------------

/Network_Structure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/Network_Structure.png
--------------------------------------------------------------------------------

/process-file-disk.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/process-file-disk.png
--------------------------------------------------------------------------------

/file-sys-net-commands.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/file-sys-net-commands.png
--------------------------------------------------------------------------------

/shellscripts/Script14.sh:
--------------------------------------------------------------------------------
#!/bin/sh
for i in 1 2 3 4 5
do
  echo "Looping ... number $i"
done
--------------------------------------------------------------------------------

/shellscripts/Script16.sh:
--------------------------------------------------------------------------------
#!/bin/sh
a=0
while [ $a -lt 10 ]
do
  echo $a
  a=`expr $a + 1`
done
--------------------------------------------------------------------------------

/on-premises-cloud-base-pros-cons.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arjundhav/DevopsStuff/main/on-premises-cloud-base-pros-cons.png
--------------------------------------------------------------------------------

/shellscripts/Script15.sh:
--------------------------------------------------------------------------------
#!/bin/bash
i=1
for day in Mon Tue Wed Thu Fri
do
  echo "Weekday $((i++)) : $day"
done
--------------------------------------------------------------------------------

/shellscripts/Script17.sh:
--------------------------------------------------------------------------------
#!/bin/sh
# We have defined a hello world function here
Hello () {
  echo "Hello World"
}
# calling our function
Hello
--------------------------------------------------------------------------------

/shellscripts/Script7.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# check the exit status value of the previous command
ls -lrt
echo $?
echo "here, if the value is 0 the previous command was successful"
--------------------------------------------------------------------------------
/shellscripts/Script6.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# test of a few of the special (fixed) variables
echo "script name: $0"
echo "1st cmdl arg: $1"
echo "2nd cmdl arg: $2"
echo "cmdl arg list: $@"
echo "no of cmdl args: $#"
--------------------------------------------------------------------------------

/shellscripts/Script10.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# This script demonstrates if and else
a=10
b=20
if [ $a -gt $b ]
then
  echo "a is greater than b"
else
  echo "a is smaller than b"
fi
--------------------------------------------------------------------------------

/shellscripts/Script5.sh:
--------------------------------------------------------------------------------
#!/bin/sh
# This script makes a variable read-only, meaning we cannot set the value of the NAME variable again
NAME=Young-Minds
readonly NAME
NAME=DEVOPS   # this assignment fails: NAME is readonly
echo "my name is: $NAME"
--------------------------------------------------------------------------------

/shellscripts/Script9.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# this script uses two independent if checks
a=10
b=20
if [ $a -gt $b ]
then
  echo "a is greater than b"
fi
if [ $a -lt $b ]
then
  echo "a is less than b"
fi
--------------------------------------------------------------------------------

/shellscripts/Script4.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# This is creation and calling of shell variables --- defining variables
Class=Young-minds
Batch=13
PROFESSION=AWS/DevOps
echo "Class Name is $Class, Batch number $Batch, We are learning $PROFESSION"
--------------------------------------------------------------------------------

/shellscripts/Script2.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# This script reads input from the user/console
echo "Value of a"
read a
echo "Value of b"
read b
echo "Hello, value of a is $a and value of b is $b"
echo "This is sample change"
echo "bye"
echo "hi"
--------------------------------------------------------------------------------

/shellscripts/Script3.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# this checks whether the shell script picks up env variables
echo "This is error script"
echo "This is my system path $PATH"

# Set a JDK_HOME env variable
export JDK_HOME=/bin/jdk
echo "my new JDK home is=$JDK_HOME"
--------------------------------------------------------------------------------

/shellscripts/Script18.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# Calling one function from another
number_one () {
  echo "This is the first function speaking..."
  number_two
}
number_two () {
  echo "This is now the second function speaking..."
}
# Calling function one.
number_one
--------------------------------------------------------------------------------
/shellscripts/Script1.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# This is my 1st shell script to print output
echo "Hello All, Welcome to AWS/DevOps Class"
echo "Hello, How are you?"
echo "Welcome to Young Minds"
echo "Hello batch-19, We are learning DevOps"
echo "Hello All, Welcome to AWS/DevOps Class"
echo "Hello, How are you?"
echo "Welcome to Young Minds"
echo "My name is Rock"
echo "Hello batch-19"
--------------------------------------------------------------------------------

/shellscripts/Script11.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# this script is for if-elif-fi
echo "Please enter value of a"
read a
echo "Please enter value of b"
read b
if [ $a == $b ]
then
  echo "a is equal to b"
elif [ $a -gt $b ]
then
  echo "a is greater than b"
elif [ $a -lt $b ]
then
  echo "a is less than b"
else
  echo "None of the conditions met"
fi

echo "I have changed this branch"
--------------------------------------------------------------------------------

/shellscripts/Script8.sh:
--------------------------------------------------------------------------------
#!/bin/bash
a=10
b=20

val1=`expr $a + $b`
echo "a + b : $val1"
val2=`expr $a - $b`
echo "a - b : $val2"
val3=`expr $a \* $b`
echo "a * b : $val3"
val4=`expr $b / $a`
echo "b / a : $val4"
val5=`expr $b % $a`
echo "b % a : $val5"

if [ $a == $b ]
then
  echo "a is equal to b"
elif [ $a -gt $b ]
then
  echo "a is greater than b"
elif [ $a -lt $b ]
then
  echo "a is less than b"
else
  echo "None of the conditions met"
fi
--------------------------------------------------------------------------------

/shellscripts/Script12.sh:
--------------------------------------------------------------------------------
#!/bin/bash
a="abc"
b="efg"

if [ "$a" = "$b" ]
then
  echo "$a = $b : a is equal to b"
else
  echo "$a != $b: a is not equal to b"
fi

if [ "$a" != "$b" ]
then
  echo "$a != $b : a is not equal to b"
else
  echo "$a = $b: a is equal to b"
fi

# quote the variables so -z/-n behave correctly when the string is empty
if [ -z "$a" ]
then
  echo "-z $a : string length is zero"
else
  echo "-z $a : string length is not zero"
fi

if [ -n "$a" ]
then
  echo "-n $a : string length is not zero"
else
  echo "-n $a : string length is zero"
fi
--------------------------------------------------------------------------------

/shellscripts/Script13.sh:
--------------------------------------------------------------------------------
#!/bin/sh

echo "Please enter the value of a"
read a
echo "Please enter the value of b"
read b

if [ $a != $b ]
then
  echo "$a != $b : a is not equal to b"
else
  echo "$a = $b: a is equal to b"
fi

if [ $a -lt 100 -a $b -gt 15 ]
then
  echo "$a -lt 100 -a $b -gt 15 : returns true"
else
  echo "$a -lt 100 -a $b -gt 15 : returns false"
fi

if [ $a -lt 100 -o $b -gt 100 ]
then
  echo "$a -lt 100 -o $b -gt 100 : returns true"
else
  echo "$a -lt 100 -o $b -gt 100 : returns false"
fi
if [ $a -lt 5 -o $b -gt 100 ]
then
  echo "$a -lt 5 -o $b -gt 100 : returns true"
else
  echo "$a -lt 5 -o $b -gt 100 : returns false"
fi
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
## Installation Reference for the following:

Ubuntu 22.04 - Java - https://phoenixnap.com/kb/install-jenkins-ubuntu

Ubuntu 22.04 - Jenkins - https://www.jenkins.io/doc/book/installing/linux/

Amazon Java installation - https://docs.aws.amazon.com/corretto/latest/corretto-11-ug/amazon-linux-install.html

sudo yum install java-11-amazon-corretto-devel

Ubuntu & Terraform - https://computingforgeeks.com/how-to-install-terraform-on-ubuntu/

Maven - https://linuxize.com/post/how-to-install-apache-maven-on-ubuntu-18-04/

Ansible-Master-Node - https://www.decodingdevops.com/how-to-install-ansible-on-aws-ec2-instances/

Docker-Compose - https://www.techgeekbuzz.com/tutorial/docker/how-to-install-docker-compose-in-linux/

K8s - https://www.liquidweb.com/kb/how-to-install-kubernetes-using-kubeadm-on-ubuntu-18/

AWS CLI - https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

EKSCTL - https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html

Kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
--------------------------------------------------------------------------------

/9.VPC.txt:
--------------------------------------------------------------------------------
VPC (Virtual Private Cloud):

- It's a virtual network or data center inside AWS for a single client.
- It's logically isolated from other virtual networks in the AWS cloud.
- A maximum of 5 VPCs can be created in a single region & 200 subnets in 1 VPC.
- We can allocate a maximum of 5 Elastic IPs (IPs that are reserved).
- Once we create a VPC, a DHCP option set, NACL & security group are automatically created.
- A VPC is confined to an AWS region & does not extend between regions.
  (It's restricted to a region.)
- Once it is created we can't change the CIDR block range.

# Public subnet:

- If a subnet's traffic is routed to an internet gateway, it is known as a public subnet.

- It is associated with a custom route table (with an igw-id) that has a route to an internet gateway. It connects the subnet to the internet & to other AWS services.

# Private subnet:

- A subnet that doesn't have a route to an internet gateway is known as a private subnet.

- It is a subnet associated with a route table that doesn't have a route to an internet gateway.

## Router => establishes communication between 2 subnets
## Route table => holds the path of communication, i.e. it provides the destination IP

## A Security Group acts as a firewall that controls the traffic allowed to & from resources in your VPC.

## NACL (Network Access Control List) helps provide a layer of security to the Amazon Web Services stack. A NACL acts as a firewall at the subnet level, thereby helping secure the VPCs and subnets.

## A router is used to connect private/public subnets internally.
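
A minimal AWS CLI sketch of wiring up a public subnet as described above (the CIDRs and all IDs/names here are illustrative assumptions, not values from these notes):

vpc_id=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)
subnet_id=$(aws ec2 create-subnet --vpc-id "$vpc_id" --cidr-block 10.0.1.0/24 --query 'Subnet.SubnetId' --output text)
igw_id=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$vpc_id" --internet-gateway-id "$igw_id"
rtb_id=$(aws ec2 create-route-table --vpc-id "$vpc_id" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$rtb_id" --destination-cidr-block 0.0.0.0/0 --gateway-id "$igw_id"   # route to the IGW is what makes the subnet public
aws ec2 associate-route-table --route-table-id "$rtb_id" --subnet-id "$subnet_id"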
=====================================================================

## Components of VPC:

- Implied router or routing table
- Internet Gateway
- Security Group
- Network ACL
- Virtual Private Gateway
- Peering Connection
- Elastic IP

## Flow:
1. Create VPC
2. Subnets
3. Internet Gateway
4. Routing Table

# Types of VPC:

1) Default VPC:
- Created in each AWS region when the AWS account is created
- It has default CIDR, security group, NACL & routing table settings
- The AWS user has an Internet Gateway (IG) by default.

2) Custom VPC:
- It's a VPC that the AWS account owner creates
- The AWS user who creates a custom VPC can decide its CIDR
- A custom VPC has a default security group, NACL & route table
- It doesn't have an IG by default; we need to create one if needed

=====================================================================

***************** Components of VPC ***********************

## Implied Router & Route Table:

- It's the central routing function because it establishes connections between subnets.
- It connects the VPC to the internet gateway.
- You can have up to 200 route tables.
- You can have up to 50 route entries per route table.
- EACH SUBNET MUST BE ASSOCIATED WITH ONLY ONE ROUTE TABLE AT A TIME.
- We need to specify a subnet-to-route-table association, else the subnet will be associated with the default VPC route table.

### Internet Gateway:
- The IG is a virtual router that connects the VPC to the internet.
- The IG is the VPC component that allows communication between the VPC & the internet.
- The default VPC is already attached to an IG.
- If we create a new VPC we must attach an IG in order to access the internet.
- It's bi-directional, as it manages incoming and outgoing data.
- 0.0.0.0/0 is the default "all traffic" route destination that points to the igw-id.

# NAT gateway: Network Address Translation Gateway

- A NAT gateway enables internet access from a private subnet, but prevents the internet from initiating connections with those instances.

- You must assign an Elastic IP to your NAT gateway.

- Deleting a NAT gateway disassociates its Elastic IP address, but does not release the address from your account.

- A NAT gateway is used to enable instances in a private subnet to connect to the internet or AWS services.

- It's uni-directional.

- A NAT gateway is placed in a public subnet but works for a private subnet.

====================================================================
Computing  => EC2
Storage    => S3, EBS, EFS
Networking => VPC
--------------------------------------------------------------------------------

/2.Cloud Computing.txt:
--------------------------------------------------------------------------------
Cloud computing is the delivery of computing services via the internet, without installing and maintaining them on-premises.

- It includes servers, storage, databases, networking etc.

Cloud: It is the on-demand availability of computer system resources over the internet. The cloud provider allows clients to use computing resources without having to purchase or maintain hardware & software.

It can be accessed via a simple URL.

Types of clouds: private, public, hybrid clouds & multi-clouds.

Public Cloud: It's a service owned by a CSP (cloud service provider) that maintains compute resources customers can access over the internet.
It is open to all, to store and access information via the internet using the pay-per-usage method.

Ex: AWS, Microsoft Azure (Infra & PaaS), GCP (AI)

- Low cost
- No maintenance
- Unlimited scalability => can scale on its own
- Better reliability => problem-free

Private Cloud: It is a cloud computing environment dedicated to a single organization and selected users instead of the general public.

It is used by organizations to build and manage their own data centers, internally or through a third party.

- Provides a high level of security & privacy for data via an isolated network
- High performance
- More customization & full control over the cloud

Hybrid Cloud: A hybrid cloud is a combination of public and private clouds.

Hybrid cloud = public cloud + private cloud.

- It is used in finance, healthcare, banking and universities.

- Flexible
- Reliable => depends on the CSP
- More secure

Parameter   Public Cloud       Private Cloud           Hybrid Cloud

Host        Service provider   Enterprise (3rd party)  Enterprise (3rd party)

Users       General public     Selected users          Selected users

Access      Internet           Internet, VPN           Internet, VPN

Owner       Service provider   Enterprise              Enterprise

===========================================================================================

Cloud Services: IAAS, PAAS, SAAS

=========================================================================================

IIS: Internet Information Services is Microsoft's web server that runs on the Windows OS & is used to exchange static/dynamic content with internet users.
- The servers currently include FTP, SMTP and HTTP/HTTPS.
- IIS can be used to host, deploy, and manage web applications.

Install IIS steps:

i) From the Start menu -> open Server Manager
ii) Click the “Add roles and features” text.
iii) On the “Before you begin” window, simply click the Next button.
iv) On the “Select installation type” window, leave “Role-based or feature-based installation” selected and click Next.
v) On “Select server from server pool”, leave the current machine selected & click Next.
vi) On the “Select server roles” window, check the box next to “Web Server (IIS)”.
vii) Click Next through the remaining screens.

===========================================================================================

AD: Active Directory maintains order by managing users, computers, permissions and file servers.

- It is a database and set of services developed to help you with access, user management, and permissions for your network resources.

- The company's data is stored as objects in Active Directory, and they can be in the form of devices, files, users, applications, groups, or shared folders.

- LDAP (Lightweight Directory Access Protocol) is used for Linux.

- We use RDP for Windows.

AD Groups:

- Groups are very useful for giving or denying privileges to groups of users, rather than having to apply those privileges to each individual user.

- So we provide access to users via the AD group.
This is the practice in every company.

Types:
** Global Admin => Read, Add, Modify & Delete resources + user/access mgmt **

Global Admin = AD-Cloud-Admin + AD-Cloud-Contributor

- AD-Cloud-Admin => Add, Modify & Delete resources
- AD-Cloud-Readonly => Read resources
- AD-Cloud-Contributor => Read, Add, Modify resources

**** Tell the interviewer: we don't provide access to individual users. We provide access to the AD group. ****

=====================================================================

Lightweight Directory Access Protocol (LDAP) is an internet protocol that works on TCP/IP, used to access information from directories.
--------------------------------------------------------------------------------

/7.S3 Bucket.txt:
--------------------------------------------------------------------------------
**************************** S3 Bucket *******************************************

1. Object-based storage. Objects are the entities stored in AWS S3.
2. 3 copies of the data are kept in the same region.
3. You can't create a bucket inside a bucket.
4. The max capacity of a bucket is unlimited; a single object can be up to 5 TB.
5. It's a global service.

# Versioning:

- Versioning in S3 means keeping multiple variants of an object in the same bucket.
- The S3 versioning feature helps to preserve, retrieve & restore every version of every object stored in a bucket.
- Only if the filename is the same will a new version be created.
- Versioning-enabled buckets can help to recover objects from accidental deletion or overwrite.
- We can enable or suspend versioning, but we can't disable it.

# AWS CLI commands:
aws s3 ls => lists buckets
aws s3 mb s3://arjun => create bucket arjun (bucket names must be lowercase)
aws s3 rb s3://arjun => delete bucket arjun
aws s3 sync . s3://<bucket-name>
aws s3 rm s3://<bucket-name> --recursive => delete bucket objects

# AWS DataSync is a secure, online service that automates and accelerates moving data between on-premises and AWS storage services.

>> aws s3 sync . s3://<bucket-name>

# Storage Classes of S3:

1. Amazon S3 Standard (selected by default)

- S3 Standard offers high-durability, high-availability & high-performance object storage for frequently accessed data.
- Durability is 99.999999999% (11 nines)
- Availability - 99.99%
- Support for SSL (Secure Sockets Layer).
- Its storage cost per object is high, but there is very little charge for accessing objects.
- The largest object that can be uploaded in a single PUT is 5 GB.

2. Amazon S3 Standard-Infrequent Access

- It is for data that is accessed less frequently, but requires rapid access when needed.
- Storage cost is much cheaper than S3 Standard, i.e. about half the price, but you are charged more heavily for accessing your objects.
- Durability is 99.999999999%
- Availability - 99.9%
- Data that is deleted from S3-IA within 30 days will be charged for the full 30 days.

3. Amazon S3 Intelligent-Tiering

- The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier.
- It works by storing objects in two access tiers.
- If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier.
- There are no retrieval fees when using the S3 Intelligent-Tiering storage class, and no additional tiering fee when objects move between access tiers.
- Same low latency and high performance as S3 Standard.
- Objects smaller than 128 KB are not moved to the IA tier.

4. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

- S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed.
- Data is stored in a single AZ.
- Ideal for those who want a lower-cost option for IA data.
- It is a good choice for storing secondary backup copies of on-premises data or easily re-creatable data.
- Durability is 99.999999999%
- Availability - 99.5%

5. Amazon S3 Glacier

- S3 Glacier is a secure, durable, low-cost storage class for data archiving.
- Designed to retain historical data for long periods.
- To keep storage costs low you can use S3 Glacier.
- You can upload objects directly to Glacier or use lifecycle policies.
- Durability is 99.999999999%
- Support for SSL.
- You can retrieve 10 GB of your S3 Glacier data per month for free with a free-tier account.
- Retrieval takes 2 to 4 hours.

6. Amazon S3 Glacier Deep Archive

- Cheapest storage.
- Designed to retain data for long periods, e.g. 10 yrs.
- All objects stored in S3 Glacier Deep Archive are replicated in 3 AZs.
- Ideal alternative to magnetic tape libraries.
- Retrieval time within 12 hours.
- Storage cost is up to 75% less than S3 Glacier.
- Durability is 99.999999999%
- Availability - 99.9%

# To change storage class: select the object -> click Actions -> Edit storage class -> select the class

# In case we have thousands of objects, we set lifecycle rules:
They are used to move an object from one storage class to another.

Ex: S3 Std (day 0) -> S3 Std-IA (day 30) -> ...

# Multipart upload: used to copy a large file to an Amazon Simple Storage Service (Amazon S3) bucket as multiple parts.
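
A hedged CLI sketch of versioning plus a Std -> Std-IA lifecycle transition (the bucket name my-bucket and the 30-day rule are assumptions for illustration):

aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
cat > lifecycle.json <<'EOF'
{"Rules": [{"ID": "std-to-ia", "Status": "Enabled", "Filter": {},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}]}]}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json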
=========================================================================================
--------------------------------------------------------------------------------

/8.IAM.txt:
--------------------------------------------------------------------------------
************************* IAM (Identity & Access Management) **********************

User   - A single entity or person. We can have 5000.
Groups - A collection of IAM users. We can have 300 groups.
Roles  - 1000

Why are groups needed?
=> 1 Group => access => users,
   i.e. grant access once at the group level (one user needs EC2 access, another needs other access) instead of per user.

### Root -> Admin -> IAM users

Root => The most privileged user. Can create/delete the AWS account.

Admin =>
- Created by the root user & has access like the root user.
- The admin is also an IAM user.
- It manages IAM users.

IAM Users =>

- Shared access to your AWS account.

- We can provide access without sharing our credentials by creating IAM users.

- Granular permissions: multilevel access for IAM users to EC2, RDS, S3,
  i.e. limited access to AWS services.

- Secure access to AWS resources for apps that run on EC2.
  EC2 => S3 bucket, i.e. resource-to-resource access.

# Ex: If Shiv only has access to all EC2 permissions, then he can access or manage EC2 only.
But he can't manage or access other services.

If Ram has access to EC2 & an S3 bucket,
it doesn't mean that EC2 can create S3 buckets.
This is possible only if EC2 itself has access to that S3 bucket.

- Even if a user has full access only to EC2, plus read-only access overall, he can read any service (i.e. he can see S3 buckets, EBS & all other services) but can manage only the services he has been granted.

# Single sign-on (SSO) enables users to log into multiple apps & websites with one set of credentials.

Steps:
1. Create a role with full access to EC2 & S3.
2. Then go to the EC2 instance: Actions -> Security -> Modify IAM role
3. Attach that role to EC2
4. Test the result

------------------------------------------------------------------------------------

## IAM Terms:

Federated users: All the users from Active Directory are in sync with AWS & they are called federated users, i.e. they get populated into AWS users.

1. Principal => user, role, federated user or app.
2. Request => the principal makes a request for authentication.
3. Authentication => login username & password.
4. Authorization => permission to allow any work.
5. Action/Operation => delete, create or modify.
6. Resource => what the actions are performed on, i.e. AWS services.

For ex: an IAM user named “Shristi” is a principal;
her IAM username shristi@example.com is her identity.

# Principal - A person or app that uses the AWS account root user, an IAM user, or an IAM role to sign in & make requests to AWS.

-------------------------------------------------------------------------------------------

## Permissions & Policies:

- Policies are sets of permissions, ex: S3 delete, S3FullAccess.
- We can turn a role into a policy and assign access that way.
- Permissions are granted through policies that are created & then attached to users, groups or roles.
- One IAM user or group can have multiple policies.
- User permissions are calculated from the combination of those policies.

Ex: RAM
- EC2 readonly, S3 upload
- EC2 FullAccess
- RDS readonly
- S3 FullAccess
- EC2 delete, EC2 create, RDS create

i.e. RAM effectively has EC2 FullAccess, S3 FullAccess, RDS readonly and RDS create permissions.

** Scenario: if we have a new member in the group, initially he will be in the read-only access group; after gaining experience he will be given read & write access.

## Recommendations:

- Don't use the root user for everyday access.
- Don't share root credentials with anyone.
- Create an IAM user for yourself & assign administrative permissions to it for your account.
- You can sign in as admin to add more users as needed.
- An IAM user can't delete the root user.

---------------------------------------------------------------------------------

## Resource-Based Policies:

- In some cases you can attach a policy to a resource, in addition to attaching it to a group/user; that is called a resource-based policy.

- You can explicitly list who is allowed to access the resource.
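
For example, an S3 bucket policy is a resource-based policy. A hedged sketch (the account ID, user and bucket names are placeholders, not values from these notes):

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:user/shristi"},
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json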
----------------------------------------------------------------------------------

## IAM Delegation (assigning authority to others):

- It is granting permission to users to allow access to resources that you control.

- Delegation involves setting up trust between the account that owns the resource (trusting account) & the account that contains the users that need to access the resource (trusted account).

- To delegate permission to access a resource, you create an IAM role that has two policies attached:
  1. Trust policy
  2. Permissions policy
---------------------------------------------------------------------------------
## Create a Role based on a Policy:
1. From Access Management => click Roles => Add permissions
2. Click on Generate policy
3. CloudTrail Access => an auditing, monitoring & governance tool from AWS

*** You can't directly assign a role to a user, but you can do it by creating a policy first & then assigning the role. ***

***
Can we attach a role to a user?
=> You can generate a new policy from the role, which can then be assigned to the user.
***
=============================================================================

Imp terms:

Permission: S3 bucket - put/download/create/delete/versioning
Policy: a set of permissions
Role: policies assigned to a role
Principal: the role is assigned to a principal

=============================================================================

Inline policies: these apply where there is a direct one-to-one relationship between the policy and the user or group.
--------------------------------------------------------------------------------

/1.Basics(Models+networking).txt:
--------------------------------------------------------------------------------
A server is a software/hardware device that accepts & responds to requests made over a network.

1) What are Racks & Blades?
-> A rack server is used to stack & install several servers in a large closet.
   A blade is a small server inserted into a case, stacked horizontally in a rack.

2) What is OnPremise & OffPremise?
-> OnPremise is setting up the data center on the company's premises.
   - costly
   - maintenance is manual
   - more secure & maximum control over infrastructure & access.

On-premises private cloud management is expensive and requires heavy initial investment and ongoing expenses.

OffPremise (cloud) is not as costly; maintenance is automatic but requires a monthly fee. It gives less control over infrastructure & is accessible on any device.

3) What is Waterfall vs Agile?
->
Waterfall Model:
- It's a linear, sequential lifecycle model.
- Determines the end goal early, i.e. provides a fixed plan of the project from start to finish.
- Documentation provides a clear scope for the project, which helps to decide budgets & timelines.
- Used for small-scale projects.

Cons:
- Difficult to customize as it works on a fixed method.
- There is no collaboration with the client during development.
- Testers report issues and bugs later in the process.
Agile Model:

Pros:
- There is continuous collaboration with clients
- Easy to customize, i.e. more flexible
- Encourages short-term deadlines
- Transparency

Cons:
-

Roles:
1) Product Owner - client - represents the needs of both the customer and the business.
2) Scrum Master - service company => communicates & collaborates between leadership & team players to ensure a successful outcome.

Sprint: The repetitive process that developers use to tackle a development project;
it is 14 days (2 weeks).

AGILE MEETINGS:

Sprint Planning - 1 hr -> The team comes together to determine which stories will be part of the current sprint.
- Front-end, back-end, DevOps, DBA, Product Owner

Daily standup - 15-30 min => 1. What did you do yesterday? 2. What are you going to do today? 3. Any problems/blockers?

Demo - 1 hr => stakeholders -> showcases the working software that the team completed over the course of the sprint.

Retrospective - 1 hr => 1. What went well
                        2. What didn't go well
                        3. Happiness factor (give a rating out of 5)

SpillOver - work of one sprint that carries over to the next sprint

=====================================================================================

*********************** Networking ************************************************
4) What is the OSI model? Its layers
=> OSI (Open Systems Interconnection) is a reference model for how apps communicate over a network.

Trick to remember OSI - Please Do Not Throw Spicy Pizza Away.

3 types of layers: Software layers => Application, Presentation and Session layers.
                   Hardware layers => Network, Data Link & Physical layers.
                   Heart of OSI    => Transport layer

Physical Layer - Bits - lowest layer - responsible for the physical connection between devices - HUB

Data Link Layer - Frame - makes sure data transfer is error-free from one node to another, over the physical layer - Switch

Network Layer - Packet - transmission of data from one host to another located in different networks - Router

Transport Layer - Segment - responsible for end-to-end delivery of the complete message - Firewall

Session Layer - responsible for establishing connections, maintaining sessions, authentication & ensuring security - Gateway

Presentation Layer (Translation layer) - application-layer data is manipulated here & transmitted over the network.

Application Layer - this layer serves as the window for application services to access the network & for displaying received information to the user.

- Physical, Data Link and Network layers are hardware layers.

- The Transport layer is the heart of OSI => because it connects the hardware & software layers.

- Session, Presentation and Application layers are software layers.

- Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the receiver.

- Data in the transport layer is referred to as segments.
=====================================================================================

************************* Network Devices *****************************************

Router - connects networks - connects LAN & WAN - filters & sends packets - 2/4/8 ports

Switch - helps connect devices - works at the data link layer & at times the network layer - connects 2 or more LAN devices - filters before forwarding data - multi-port, between 4 & 48.

HUB - sends data packets (frames) to all devices - connects 2 or more Ethernet devices (LAN) - unable to perform filtering - 4/12 ports - Layer 1

=====================================================================================

******************** Internet Protocol (IP) ***************************************

A port is a virtual point where network connections start and end.

IP (Internet Protocol) address: its classes are A, B, C, D, E.

An IP address contains 4 octets, each between 0 and 255. It is 32 bits.

Class A - 1.0.0.0 to 126.255.255.255 => used for governments
Class B - 128 - 191 => medium companies
Class C - 192 - 223 => small companies

The IP address range 127.0.0.0 – 127.255.255.255 is reserved for loopback, i.e. the host's self-address, also known as the localhost address.

Range of IP addresses: 1.0.0.0 to 255.255.255.255

IPv4: the IPv4 address format is a 32-bit address comprising binary digits separated by dots (.) => 2^32 addresses => no encryption & authentication.

IPv6: the IPv6 address format is a 128-bit IP address, written as 8 groups of hexadecimal numbers separated by colons (:) => 2^128 addresses => encryption & authentication are provided.

IPv6 is better than IPv4: a far larger address space plus built-in encryption & authentication.

By typing ipconfig in cmd we get the local device's IP address.

A MAC address is the physical address of a device, which is permanent.

Hop count - the number of routers a packet passes through.

Intranet - gives a private IP for local connections
Internet - gives a public IP for public connections

# Types of IP:

i) Public IP addr: available publicly & assigned by our network provider to the router, which further divides it among devices.

ii) Private IP: an internal address which is not routed to the internet, i.e. no exchange of data between a private IP and the internet.
    - Intranet connectivity

iii) Static IP => we can manually assign this IP.
    - Our laptop's IP address is a static IP because there is no DHCP for it.

iv) Dynamic IP => DHCP provides this IP.

v) Elastic IP => public IP + fixed (reserved) -> DHCP provides it.
    - Used when we don't want the IP address of our instances to change.

# DHCP => Dynamic Host Configuration Protocol is a protocol for automatically assigning IP addresses & other configs to devices when they connect to a network.

TCP/IP allows computers on the same network to identify & communicate with each other.

FTP(21), SMTP(25), Telnet(23), ICMP, DNS(53)
HTTP(80), HTTPS(443), RDP (Remote Desktop Protocol), SSH (Secure Shell)

The Internet Assigned Numbers Authority (IANA) assigns the port numbers.

The ping cmd is used to check connectivity and gives the round-trip time per packet.

Microsoft Terminal Services Client (MSTSC) is the command-line interface to run the Microsoft Remote Desktop (RDP) client.
For Windows: Run/cmd -> mstsc => to connect to another remote server (enter the IP addr)
For Linux: ssh <username>@<ip-address>
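
Rough Linux equivalents of the checks above (the host and user in the ssh line are made-up examples):

ip addr show              # local/private IP addresses (like ipconfig on Windows)
ip route                  # routing table & default gateway
ping -c 4 8.8.8.8         # connectivity check; prints round-trip time per packet
ssh admin@192.168.1.10    # open a remote shell on a Linux server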
# OSI (theoretical, 7 layers) vs TCP/IP (implementation of OSI, 4 layers)
# The OSI model is the general reference, but mostly TCP/IP is used.

In organizations a VM is not launched using a public IP; it's done using a private IP only.

=====================================================================================
=====================================================================================

*************************** Repomodel ***********************************************

Repo: a place or container in which something is stored in large quantities

Package: software

JFrog Artifactory: It manages all packages, files, containers & components for use throughout your software supply chain.

Xray: It scans packages, files & containers; it detects security vulnerabilities & licenses in your software components.

# Types:

LocalRepo: local to the EC2 instance
RemoteRepo: the organization, i.e. JFrog. We use the Xray tool in JFrog.
CentralRepo: the internet.

JFrog: It's a repository manager that supports all available software package types, enabling automated continuous integration and delivery.

It stores: packages
           Docker images
           plugins
           artifacts

If we want a package we will first look in the remote repo; if it's not present there, we need to request it from the central repo.
--------------------------------------------------------------------------------

/4.Bash.txt:
--------------------------------------------------------------------------------
Shell: A UNIX shell is a program or command-line interpreter that interprets user commands, which are either entered by the user directly or read from a file (i.e. a shell script), & then passes them to the operating system for processing.

- Types:
BASH (Bourne Again Shell) =>

- It includes features from the Korn and Bourne shells.
- The Bash shell is free and open-source for computer users.

Korn Shell (ksh) =>

- It is a superset of the Bourne shell & faster than C shell.
- It supports arithmetic, C-like arrays, functions, string manipulation.

C Shell (csh) => built-in arithmetic and C-like syntax

## BASH: Bash is used for system administration, web application deployment, automated backups, creating custom scripts for various purposes, etc.

## Shell script => A shell script is a text file that contains a sequence of commands for a UNIX-based operating system.

- Mainly it's used for automation.

- File permissions in bash are r (read), w (write) & x (execute).

- gedit is a text editor commonly used in Linux-based operating systems. It provides a simple and user-friendly interface for creating and editing text files.
===========================================================================================

******************* Creating a Hello bash script ******************************

>> cat script1.sh -> to view the script

script1.sh:

#!/bin/bash -> #! (shebang) - the interpreter, or type of shell
# comment -> a line starting with # is a comment, not a command

echo " Hello World! " -> echo command to print "Hello World"

>> sh script1.sh
or
>> ./script1.sh => executes the script, but gives an error if execute permission is denied

>> chmod -R 775 script1.sh
or
>> chmod +x script1.sh => give the file execute permission using the chmod cmd with the +x option

# :set nu => to show line numbers for the shell script in vim

===========================================================================================

********************* Shell Fundamentals ***************************

## VARIABLES: A variable is a container to store a value.

#!/bin/bash

# predefined variables
echo $HOME # home directory
echo $PWD  # current working directory

# user-defined variables
name=Arjun
ROLL_NO=5
readonly ROLL_NO => readonly means we can't change the variable's value once it is set.
echo "The student name is $name and his roll number is $ROLL_NO."
echo $name
echo name => prints the literal word "name" (no $, so no expansion)

****************************************************************************************

## Using Arrays in BASH:

- $@ is the default argument variable, which stores the args (we pass) as an array.

- Display the arguments by defining their array index in the following form:
${variable_name[i]}

#!/bin/bash

# Iterate over a list of items
fruits=("Apple" "Banana" "Orange")

# Loop through the list
for fruit in "${fruits[@]}"
do
  echo "I like $fruit"
done

- Below, -a tells read to fill an array, and variable_name refers to the array.

Program:

#!/bin/bash

# Reading multiple inputs using an array
echo "Enter names : "
read -a names
echo "The entered names are : ${names[0]}, ${names[1]}."

*****************************************************************************************

## Read User Input: read

- If we don't pass any variable with the read command, then we can use a built-in variable called REPLY (prefixed with the $ sign) to display the input.

#!/bin/bash
# Read the user input

echo "Enter the user name: "
read fname
echo "The current user name is $fname"
echo
echo "Enter other users' names: "
read name1 name2 name3
echo "$name1, $name2, $name3 are the other users"

echo "Enter Address:"
read
echo "Address: $REPLY"

# -s keeps the user's input silent (not echoed) & -p shows a prompt on the same line.
read -sp "password : " pass_var
echo
echo "password : " $pass_var

****************************************************************************************

## ARITHMETIC OPERATIONS:
Double parentheses are the easiest mechanism for basic arithmetic operations in Bash.
We can use this method with double brackets, with or without a leading $.
syntax: (( expression )) or `expr $x + $y`

Program:

#!/bin/bash

x=8
y=2
echo "x=8, y=2"
echo "Addition of x & y"
echo $(( $x + $y )) or `expr $x + $y` => backticks
echo "Subtraction of x & y"
echo `expr $x - $y`
echo "Multiplication of x & y"
echo `expr $x \* $y`
echo "Division of x by y"
echo `expr $x / $y`
echo "Exponentiation of x,y"
echo $(( x ** y ))   # expr does not support **; use (( )) for exponentiation
echo "Modular division of x,y"
echo $(( $x % $y ))
echo "Incrementing x by 5, then x= "
(( x += 5 ))
echo $x
echo "Decrementing x by 5, then x= "
(( x -= 5 ))
echo $x
echo "Multiplying x by 5, then x="
(( x *= 5 ))
echo $x
echo "Dividing x by 5, x= "
(( x /= 5 ))
echo $x
echo "Remainder of dividing x by 5, x="
(( x %= 5 ))
echo $x

## Relational Expressions:

equal     => [ $a -eq $b ]
notEqual  => [ $a -ne $b ]
Greater   => [ $a -gt $b ]
LessThan  => [ $a -lt $b ]

## Boolean Expressions:

negation => !  => [ ! true ] gives false
OR       => -o => [ $a -lt 20 -o $b -gt 100 ] is true
AND      => -a => [ $a -lt 20 -a $b -gt 100 ]

## String Operators:

a=abc & b=def

[ $a = $b ]  => = checks whether the values are equal

[ $a != $b ] => != checks whether they are not equal

[ -z $a ]    => -z checks if the given string's size is zero

[ -n $a ]    => -n checks if the given string's size is non-zero

[ $a ]       => checks that the string is not the empty string

## File Test Operators

[ -e file ] => file exists
[ -f file ] => file exists & is a regular file
[ -d file ] => file is a directory
[ -r file ] / [ -w file ] / [ -x file ] => file is readable / writable / executable
[ -s file ] => file size is greater than zero

===========================================================================================

Types of Variables:

i) Local variable:

- Its scope is limited to the current shell.
- Limited to only the script.

ex: a=10
    b=20
    func(){ echo `expr $a + $b`; }

ii) Environment variable:

- It specifies, for example, the directories to be searched to find a command.
- Environment variables in Linux can have global scope.

ex: $PATH => current system path
    $NAME => display any ENV
    $HOME => path of the home directory.

- To set a global ENV:

$ export NAME=Value
or
$ set NAME=Value (csh syntax; in bash use export)

ex: export VAR=destination
    export JDK_HOME=/bin/jdk

- To display Linux ENVs:

$ printenv // displays all the global ENVs
or
$ set      // displays all the ENVs (global as well as local)
or
$ env      // displays all the global ENVs

iii) Shell variables:

- Special variables set & maintained by the shell itself, such as the ones below.

========================================================================================

$? = exit status of the previous command (0 if it ran successfully)
$$ = stores the process ID of your bash session.
$0 = script name.
$1 .. $9 = arguments passed to the script.
$# = the number of arguments.
$@ = stores an array of all the argument values.

>> ./Script.sh aws devops
$0 => script name
$1 => aws
$2 => devops
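
A small sketch tying these special variables together (the script name args.sh is arbitrary):

#!/bin/bash
# args.sh - demonstrate $0, $#, and "$@"
if [ $# -lt 2 ]
then
    echo "Usage: $0 arg1 arg2 ..."
    exit 1
fi
echo "$0 received $# arguments"
for arg in "$@"
do
    echo "arg: $arg"
done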
=================================================================================================

***************************** Decision Making **********************************

## IF ELIF ELSE statement:

if [ expression ];
then
  statements
fi

-lt (less than), -gt (greater than)

example:

#!/bin/bash

read -p "Enter a number of quantity:" num

if [ $num -gt 100 ];
then
  echo "Eligible for 10% discount"

elif [ $num -lt 100 ];
then
  echo "Eligible for 5% discount"
else
  echo "Lucky Draw Winner"
  echo "Eligible to get the item for free"
fi

- FOR:

Numeric range syntax:
for VARIABLE in 1 2 3 4 5 .. N
do
  command1
  command2
  commandN
done

{START..END..INCREMENT} syntax:
#!/bin/bash
echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}
do
  echo "Welcome $i times"
done

Ex:
# Install the PHP packages listed in the PKGS variable.
PKGS="php7-openssl-7.3 php7-common-7.3 php7-fpm-7.3 php7-opcache-7.3"
for p in $PKGS
do
  echo "Installing $p package"
  sudo apk add "$p"
done

- LOOPS:
a=0
while [ exp ]
do
  lines of code..
done

===================================================================================

Function calling:

fName(){
  echo "Hello"
  fName2
}
fName2(){      # note: () is required when defining a function this way
  echo "World"
}
fName

===================================================================================
--------------------------------------------------------------------------------

/5.AWS.txt:
--------------------------------------------------------------------------------
# AWS: It's a public cloud.

# Region: a group of data centers => us-east-1

# Availability Zone: a data center => us-east-1b
  (if there's a letter at the end, it's a zone)

The region is selected based on the client's location, considering the factors below:

# Latency - a major factor for user experience is latency (measures the delay in a packet's arrival between client & server, in msec).

# Cost - AWS services are priced differently from one region to another.

# Services & features - new services & features are deployed to regions gradually.

If we want exact billing for AWS services, visit https://calculator.aws and get an estimate for the required services.

# The AWS Command Line Interface (AWS CLI) is a unified tool that provides a consistent interface for interacting with all parts of Amazon Web Services.
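
For example, the region & AZ names above can be listed from the CLI once credentials are configured (us-east-1 is just an example argument):

aws ec2 describe-regions --query 'Regions[].RegionName' --output table
aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[].ZoneName' --output table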
# 2 ways to log in to the console: root user & IAM user
  (privileges are managed by the root user)

# Access types:

1. Console access => user ID, password
2. Programmatic access (AWS CLI) => Terraform

===================================================================================

## AWS CLI: Also known as programmatic access

- The access key & secret are important for the AWS CLI.

Steps to connect the AWS CLI:

In the CLI run the command: aws configure

AWS Access Key ID [None]: 228881833629
AWS Secret Access Key [None]: enter the secret, or create one via IAM => Access Management
Default region name [None]:
Default output format: json

- Cmd to check the version: aws --version

===================================================================================

***************************** AWS Backup ********************************************

There are 2 types of backups:
i) On-demand - manual backup
ii) Backup plan - automatic backup

# Backup of EC2:

Type:
1. On-demand - manual backup
*** 2. Backup plan - e.g. taking a backup weekly, every Saturday, generally at 10am,
       with a 2-week retention period

1. Backup vault - a container that stores and organizes your backups.

2. Retention period - tells how long a backup needs to be stored.

# Backup vaults:

- Backup vaults are storage locations for backups.
- You can create multiple backup vaults in different regions & assign backup rules to them.

# Backup policies:

Backup policies define the backup plan schedule, which specifies when backup jobs should run.
You can create multiple backup policies & apply them to different resources.

# Steps to create a backup vault:

i) In the AWS console, select the AWS Backup service from the list of available services.
ii) Click on “Create backup vault” to start the creation process.
iii) Provide a unique name for the backup vault.
iv) Choose the AWS region where you want to create the backup vault.
v) Choose an encryption setting for the backup vault.
   We can use the default AWS Key Management Service (KMS) master key or create a custom KMS key.
vi) Review the backup vault details and click “Create backup vault” to complete the process.

# To delete a vault we first need to delete its recovery points.
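
A hedged CLI sketch of the on-demand flow (the vault name, instance ARN and account ID are placeholders; start-backup-job also needs an IAM role that AWS Backup can assume):

aws backup create-backup-vault --backup-vault-name my-vault
aws backup start-backup-job \
    --backup-vault-name my-vault \
    --resource-arn arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234efgh5678 \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole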
******************************* Elastic IP address **********************************

- It is a static public IPv4 address designed for dynamic cloud computing.

- It is allocated to your AWS account & is yours until you release it.

- If your instance does not have a public IPv4 address, you can associate an
  Elastic IP address with it to enable communication with the internet.
  Ex: this allows you to connect to your instance from your local computer.

- An Elastic IP stays assigned even while the instance is stopped; note that AWS
  charges for an Elastic IP while it is NOT associated with a running instance.

# USE CASE: Whenever we stop and start an EC2 instance, its public IP changes.
  If there is an important EC2 machine whose public IP we don't want to lose,
  we use an Elastic IP.

# Steps to associate an Elastic IP (EIP) address:

Go to Network & Security -> Elastic IPs -> Allocate address

i)   Select the Elastic IP: tick the checkbox of the newly created Elastic IP address.
ii)  Associate the Elastic IP with an instance: under Actions, choose "Associate IP address".
iii) Choose the instance: select the instance you want to associate with the Elastic IP.
iv)  Confirm and verify the association of the Elastic IP.

Now your instance has a static public IP address associated with it, which remains the same even if you stop and start the instance. This is useful when you need a consistent public IP address, e.g. for hosting a website or setting up remote access.

=====================================================================================

*************************** A W S Storage *******************************************

1. S3 (Simple Storage Service) is object-based storage
   - For multiple users
   - Comparable to Google Drive
   - It's more of a write once, read many times use case.

2. EFS (Elastic File System)
   - It is like Linux NFS (Network File System)
   - Shared across multiple Linux VMs

3. Glacier - low-cost cloud storage service for data
   - Secure and long term
   - With longer retrieval times, offered by Amazon Web Services (AWS)
   - Eg: legal documents or long-term backups that don't require immediate access.

4. Snowball
   - A physical appliance: easily migrate terabytes of data to the cloud without
     limits on storage capacity or compute power.
   - Use case: move databases, backups, archives, healthcare records, analytics
     datasets, IoT sensor data and media content to the cloud.
   - Especially useful when network conditions are limited.

5. EBS (Elastic Block Store) - attaches to a single VM at a time
   - It provides block-level storage volumes for use with EC2 instances.
   - Data is persistent.
   - Auto recovery
   - Block storage systems support random read/write operations.

6. Instance Storage =>
   - Temporary (ephemeral) block storage provided by AWS.
   - Data is non-persistent.

===========================================================================================
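S3 is the storage service you will touch most often from the CLI. A quick,
hedged sketch — the bucket name is a placeholder and must be globally unique:

# Create a bucket, upload a file, list the bucket, then download the file back
aws s3 mb s3://my-demo-bucket-12345
aws s3 cp ./notes.txt s3://my-demo-bucket-12345/notes.txt
aws s3 ls s3://my-demo-bucket-12345/
aws s3 cp s3://my-demo-bucket-12345/notes.txt ./notes-copy.txt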
====================================================================================
ENV - an environment is the space where we work.

1. Development Environment (Dev) => where developers write & test code.
   - Typically a local setup on their own computers or a shared development server.
   - It lets developers experiment, debug & collaborate without affecting the main
     app or system.

2. User Acceptance Testing (UAT) =>
   - The phase where the software is tested by end users to ensure it meets business
     requirements & functions correctly.
   - All issues found during UAT are addressed before the software is released to
     production.

3. Production Environment (Prod) =>
   - Where the final version of the software or application is deployed & made
     available to users.

====================================================================================

## CIDR (Classless Inter-Domain Routing): - supernetting

- An alternative to the older class-based subnetting.
- CIDR is a method for allocating IP addresses and routing data efficiently.

# Benefits:
- Reduces IP address wastage
- Creates supernets flexibly
- Transmits data quickly - allows routers to organize IP addresses into multiple subnets

For ex: 192.168.10.0/25

Subnet mask: 255.255.255.128
Binary:      11111111 11111111 11111111 10000000
             (24 network bits + 1 borrowed bit; 7 host bits remain)

- How to find the number of networks (subnets)?
  => 2^n (n is the number of bits borrowed from the host part)
     2^1 = 2

- How to find the number of IP addresses on each network?
  => 2^n (n is the number of host bits)
     2^7 = 128 (n = 8 - 1 = 7)

- In every network, the first IP is reserved as the network ID and the last IP
  address is reserved as the broadcast ID.

- How to find the number of usable hosts in each network?
  => 2^n - 2 = 128 - 2 = 126

192.168.10.0   - Network ID
.
.
192.168.10.127 - Broadcast ID

# AWS reserves 5 IPs in every subnet; a VPC CIDR block must use a netmask
  between /16 and /28.


==============================================================================

********************** Virtual Private Cloud (VPC) *******************************

- A VPC is a private space hosted within the cloud.
- It gives you a private section of the internet, like having your own network
  within a larger network.
- When a user creates an account, AWS creates a default VPC for them.
  This VPC is just to get started with AWS; for our projects we create VPCs
  manually.
- It allows your organization to provision workloads in an isolated & secure environment.
- It uses CIDR IP addresses when it transfers data packets between connected devices.

Components of a VPC:

- Internet gateway: controls internet access to the public subnet at the instance
  level, e.g. for a web-server EC2.

- NAT gateway: lets instances in the private subnet (e.g. a DB-server EC2) reach
  out to the internet without being directly reachable from it.

- Implied router & route tables: the implied router forwards traffic between
  subnets according to the route tables.
===================================================================================
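A hedged CLI sketch of carving a VPC out of a CIDR block — the IDs shown are
placeholders; in practice you would capture the real IDs from each command's
output:

# Create a VPC with a /16 block, then a /24 public subnet inside it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24

# Attach an internet gateway so the public subnet can reach the internet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234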
Hands-on needed for:

- IAM users, policies, and roles

- EC2 instances

- Security Groups (SG) => a virtual firewall for your EC2 that controls incoming &
  outgoing traffic.

- Elastic Block Store (EBS)

- Elastic Load Balancer (ELB) => distributes incoming app traffic & scales resources
  to meet traffic demands.

- Relational Database Service (RDS)

- Simple Storage Service (S3) — Standard storage class

- DynamoDB

- AWS Lambda

- AWS CloudWatch

- Virtual Private Cloud (VPC)

- AWS CloudFormation - helps to automate the process of creating, configuring & managing
  AWS resources.

- AWS Elastic Beanstalk

=============================================================================


# Subnetting: It is a logical subdivision of an IP network.

Class A - 10.10.0.1     - 255.0.0.0
Class B - 172.168.100.5 - 255.255.0.0
Class C - 192.168.100.5 - 255.255.255.0

where 255 marks the network ID portion & 0 the host ID portion

- Unicast:   Data is sent to a single recipient
  Multicast: Data is sent to a group of recipients
  Broadcast: Data is sent to all recipients in a network

-------------------------------------------------------------------------
--------------------------------------------------------------------------------
/6.EC2.txt:
--------------------------------------------------------------------------------

************************* AWS EC2 ******************************************

- An EC2 instance is simply a virtual server in Amazon Web Services.
- EC2 is a region-based service. If it is created in e.g. Virginia, it exists only
  in that region and not in others, i.e. it is limited to its region.

# EC2 pricing options: O R S D

1. On Demand => create or terminate instances whenever needed, pay per use.
2. Reserved  => commit to an instance for a 1- or 3-year term at a discounted rate.
3. Spot      => we pay the Spot price that is in effect for the time period the
   instance runs.
   - Whenever the price rises above your limit, the instance is terminated.
   - Generally used for testing purposes.
   - Spot Instances let you bid on unused EC2 capacity and pay by the hour.
4. Dedicated => EC2 instances on hardware dedicated to a single customer (roughly $2/hr).

# Free tier pricing:
Free tier - 12 months free - 750 hrs per month (usage)
EBS storage - 30GB free per month
Default root volume for Windows - 30GB
Default root volume for Linux   - 8GB
(30GB covers 1 Windows root volume or roughly 3-4 Linux root volumes)

The volume differs because Windows requires more space (graphics etc.) than Linux.

Instance type - only t2.micro is free (1 vCPU, 1GB RAM). For others you have to pay.

If you have 2 EC2 machines: 750/24 = 31.25 days -> 31.25/2 = 15.625,
i.e. you can run both for about 15 days in the month.

If the 2 EC2 machines are Linux and Windows: 8GB for Linux + 30GB for Windows
= 38GB in total, so we have to pay for the extra 8GB, as only 30GB is free per month.

EC2 instance families group instances by hardware (CPU/memory) configuration.

The same key pair can be used for Windows as well as Linux.

i)   General purpose (t2.micro, t3.micro and m5)
     - Balanced compute, memory and network resources
     - Used in development environments

ii)  Compute optimized (c5, c6g):
     - Computationally intensive workloads that require high performance

iii) Memory optimized (r5, z1e)
     - Memory-intensive workloads
     - For e.g. gaming

iv)  Accelerated computing (p3, g4)
     - Specialized hardware accelerators such as GPUs (Graphics Processing Units)

v)   Storage optimized (i3, d2)
     - High-speed, low-latency storage
     - Data warehousing

# There are multiple ways to create an EC2 instance:
- AWS Console
- AWS CLI
- Terraform - programmatic access
- Ansible
- AWS CloudFormation: helps to automate the process of creating, configuring &
  managing AWS resources.
# Steps to launch an EC2 instance:

i)    Click on "Launch instance"
ii)   Name & tags -> name of the EC2
iii)  App and OS Images (Amazon Machine Image)
      - Windows
      - AMI (Windows Server 2022)
iv)   Instance type -> t2.micro
v)    Key pair (login)
      - Create new pair => aws batch 20
vi)   Network settings
vii)  Configure storage
viii) Click on "Launch instance"
ix)   Once the 2/2 checks pass, the instance is ready to use.

# 2/2 check:
Step 1 - System reachability check
Step 2 - Instance reachability check

# Tag => Tags can help you manage, identify, organize, search for, and filter resources.
  Tags enable you to categorize your AWS resources in different ways,
  - for example, by purpose, owner, or environment.

# Example AWS policies:

1. Without a tag there is no deployment of an EC2 instance.
2. We don't allow deployment of an EC2 instance on a public IP, as it's not safe.

------------------------ Create EC2 using CLI ----------------------------------------

a. Install the AWS CLI
b. Authenticate using the access key and secret key
c. Get an AMI ID
d. Choose an instance type
e. Key pair (create a new one)
f. Get the security group ID
g. Get the subnet ID

cmd: aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.micro
     --key-name MyKeyPair --security-group-ids sg-903004f8 --subnet-id subnet-6e7f829e


# Terminate EC2 using the CLI: aws ec2 terminate-instances --instance-ids i-xxxxxxxx


===================================================================================

*********************** AMI (Amazon Machine Image) ***************************

- We can use AMIs to launch instances whenever we need instances with a particular
  configuration.

- If we want to install certain packages on 'n' VMs, we create an AMI (Amazon
  Machine Image) containing all the required packages, and then create the VMs
  from that AMI.

- You can copy an AMI within the same AWS Region or to different AWS Regions.

# Steps:

1) First install the packages on an EC2 instance and stop that instance.
2) Click on Actions => Image => Create AMI.
3) Copy the AMI to another region (ex: Mumbai). To share it, edit the AMI
   permissions => paste the target user's account ID.
4) In the Mumbai region, create an instance from the copied AMI. When connecting,
   use the AMI's default username (e.g. ubuntu instead of root). We now get the
   same EC2 instance with the packages & files installed.
5) Transfer the AMI to another user in the same region.

=====================================================================================
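The console flow above maps to two CLI calls. A hedged sketch — the instance ID,
AMI ID, image name and regions are placeholders:

# Create an AMI from a (preferably stopped) instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "web-base-image"

# Copy the AMI to another region (e.g. Mumbai)
aws ec2 copy-image --source-image-id ami-0abc1234 --source-region us-east-1 \
    --region ap-south-1 --name "web-base-image"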
********************* EFS (Elastic File System) *******************************************

- It automatically scales up/down with usage.
- It supports Linux EC2 instances.
- It is based on network file sharing, i.e. EFS is an NFS file system service offered by AWS.
- You can share this storage with multiple EC2 instances at the same time.
- You can add/remove capacity without disturbing users & applications.
- It provides parallel shared access to thousands of Amazon EC2 instances.
- EFS can serve as a common data source for the instances to access data securely.
- It is region based.

Disadvantages:

- No Windows/Mac support
- Cannot be used as a system boot volume (the boot volume is the storage holding the OS)

Use cases:

- Web serving - sharing/storing data across multiple systems
- Data backup

tip: Booting a computer refers to the process of powering it on & starting the OS.

# Packages for EFS:

- yum install -y amazon-efs-utils => for Amazon Linux only

- sudo apt install nfs-common -y  => Ubuntu


# Steps to create Amazon EFS:

In the AWS Console, search for "EFS" in the service search box, or find it under
"Storage" in the "Services" menu.

1. Click the "Create file system" button to start creating a new EFS file system.

2. Configure file system settings:
   - File system settings: provide a unique name for your EFS file system.
   - VPC: choose the Virtual Private Cloud (VPC) where you want the file system to be created.
   - Mount targets: EFS needs mount targets in the subnets of your chosen VPC to
     enable access from EC2 instances in that VPC.
   - Performance mode: choose between "General Purpose" (default) or "Max I/O"
     based on your workload requirements.
   - Throughput mode: select "Bursting" (spiky load) or "Provisioned" (consistent load).

3. Configure file system permissions: create a new IAM role or choose an existing IAM role.

4. Create the EFS file system: click the "Create file system" button.

5. Install NFS utilities on the EC2 instances (create 2 new instances):

# SSH into the first EC2 instance (Ubuntu AMIs log in as "ubuntu")
ssh -i your_key.pem ubuntu@instance_ip

# Install NFS utilities
sudo apt-get update
sudo apt-get install nfs-common -y

Repeat the same process for the second EC2 instance.

6. Mount the EFS file system on both EC2 instances:

# SSH into the first EC2 instance
ssh -i your_key.pem ubuntu@instance_ip

# Create a directory to mount EFS
sudo mkdir /mnt/efs

# Mount the EFS file system
sudo mount -t nfs4 fs-id.efs.region.amazonaws.com:/ /mnt/efs

# Verify the mount
df -h

In the above, replace fs-id with your EFS file system ID.

Repeat the same process for the second EC2 instance, using the same EFS file system ID and AWS region.

Usage: access f1.txt on both EC2 instances.

Now you can access and modify f1.txt on both EC2 instances, and changes will be synchronized across both instances.

For example, to create f1.txt and write some content:

# On the first EC2 instance
echo "Hello from Instance 1" | sudo tee /mnt/efs/f1.txt

# On the second EC2 instance
cat /mnt/efs/f1.txt

Both instances should display "Hello from Instance 1", as they share the same
EFS file system.

=====================================================================================
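The mount above does not survive a reboot. A hedged sketch of making it
persistent via /etc/fstab — replace fs-id and region with your own values; the
NFS options follow AWS's commonly recommended defaults:

# Append an fstab entry for the EFS mount
echo "fs-id.efs.region.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0" | sudo tee -a /etc/fstab

# Verify the entry mounts cleanly without rebooting
sudo umount /mnt/efs
sudo mount -a
df -h | grep efs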
EBS and instance store are the two root device types.
EBS is the most used in industry.

*********************** Elastic Block Store (EBS) *********************************

1. It is persistent storage, which means that even if EC2 instances are shut down,
   the data on the EBS volume is not lost.
2. It is easy to use and scalable.
3. High performance.
4. You can resize an EBS volume.
5. Objects are stored in block format, i.e. one object can span multiple blocks.
6. EBS is used like a hard drive for EC2 instances.

Use case:
EBS persists data even if the instance is deleted (except the root volume in the
default setting). You can also take point-in-time snapshots to keep backups of
the data on your EBS volumes.

# Snapshot => a point-in-time backup.
  It is incremental, i.e. only the blocks changed since the last snapshot are backed up.


If an object has 3 blocks, snapshot S1 will store all 3 blocks.
If we then add one more block, snapshot S2 stores only the newly added block.
If we later change two existing blocks, only those changed blocks are stored in snapshot S3.

# Steps to attach a volume to an instance:

Create an EC2 instance, then open "Elastic Block Store" on the left & click "Volumes".

1. Create an EBS volume with the required space and settings.
2. Attach this volume to the EC2 instance we want.
3. Create a partition on the EBS volume.
4. Format the partition with a file system.
5. Mount the partition to a directory.
6. Make the mount persistent.

>> sudo -i
>> lsblk                        => list block devices
>> mkdir /tmp/video             => newly added data will be stored here
>> fdisk -l                     => list available disks
>> fdisk /dev/xvdf              => create a partition (inside fdisk: m prints help,
                                   n adds a new partition, w writes the changes)
>> partprobe                    => re-read the partition table
>> mkfs.xfs /dev/xvdf1          => format the partition with the xfs file system
>> lsblk -fs
>> mount /dev/xvdf1 /tmp/video  => mount the partition
>> vim /etc/fstab               => insert "/dev/xvdf1 /tmp/video xfs defaults 0 0"
                                   so the mount survives a reboot
>> lsblk

===========================================================================================

******************************** Instance Storage *****************************************

- Also known as ephemeral storage.
- Non-persistent, i.e. temporary block storage provided by AWS.

===========================================================================================
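Snapshots can be scripted too. A hedged sketch — the volume ID below is a
placeholder:

# Take a point-in-time snapshot of a volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "pre-upgrade backup"

# List the snapshots owned by this account
aws ec2 describe-snapshots --owner-ids self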
--------------------------------------------------------------------------------
/3.Linux.txt:
--------------------------------------------------------------------------------

# TIPS to study:

MUST =>
SHOULD KNOW =>
GOOD TO KNOW =>

Kernel: The Linux kernel is the core part of the operating system.
        It establishes communication between devices & software.
        It manages system resources.
        Every OS has a kernel.
        It is the heart of Linux.

Linux: An open-source OS, alongside other OSes such as MS Windows, Apple macOS
       and Google Android.

Why Linux is better than other OSes:
- Open source & free of cost
- More secure (thanks to features such as SELinux & ACLs (Access Control Lists))
- Lightweight
- Stability
- Performance
- Suitable for programmers
- Multi-tasking & multi-desktop support
- Community support
- Linux requires less disk space for new installations than Windows.

Disadvantages:
- No GUI by default (on servers)
- Less user-friendly for beginners

Types of Linux:

Ubuntu
Fedora
RedHat        => paid; supported by Red Hat
Kali
CentOS        => rebuild of RedHat but without support => free of cost
Amazon Linux  => supported by Amazon => free of cost => takes more time to boot
                 than plain Linux because it provides additional pre-installed software

Text editor: gedit is the default GUI text editor in the Ubuntu operating system.

Vim is a free and open-source, screen-based text editor. It is the classic Unix editor.

Terminal - a text-based utility of the computer in which, by typing commands, we can
manipulate files, execute programs and open documents.

What Windows calls a folder, Linux calls a directory.
What Windows calls admin, Linux calls root.


/var => logs
/bin => binary files
/etc => configuration files
/usr => user programs & data (binaries, libraries, documentation)

====================================================================================================

**************************** File Management Commands ***********************************

/      - root directory
clear  - clear the terminal
cd     - change directory
pwd    - print working directory / present working directory
ls     - list files and folders
mkdir dirname - create a directory
chmod  - change permissions of a file

cd .       = remain in the current directory
cd ..      = go one directory back
cd /       = go to the root directory
cd ~       = go to the current user's home directory
mkdir -p   = create a folder within a folder
mkdir .a   = create a hidden folder
rmdir dirname  = delete an (empty) folder
rm -rf dirname = if there is a folder within a folder, delete it forcefully and recursively
history    = lists all the commands we used in this session

ls /folder/ => list the files of any folder, even when not in that directory
ls -i       => display file index (inode) numbers
ls -a       => list all files, including hidden files
ls -h       => list file sizes in human-readable format (used with -l)
ls -t       => sort by time & date (view the last edited file first)
ls -r       => list all the files in reverse order
ls -l       => list in long format - shows permissions
ls -lh      => display file sizes in human-readable format
ls -lrt     => long-format list (size, date), reverse-sorted by modification time
ls -lrta    => also view hidden files (files starting with . are hidden)
ls -lrtah   => view the list in human-readable format
ls -ltr     => display output in reverse order by date

===========================================================================================

***************************** CMD for files ***********************************

touch file1       => creates an empty file
touch file{2,3}   => creates multiple files at once, i.e. file2, file3
vim file1         => create or edit a file
cat file1         => read a file on the console

# In the vim editor: press i (insert at cursor) or Shift+i (insert at line start) => insert mode

# To leave insert mode:
press "Esc", then type ":wq!" and press Enter to save the content.

# To delete a line: leave insert mode and press dd
Here,
:q        => quit
:w        => save
:w fname  => save as fname
:wq       => write (save) and quit the file

cp => copy files
mv => move & rename (mv source destination)

## Copy files:

cp filename /destination/
cp f1 f2 f3 /folder2/ => copy multiple files

## Move files:
mv filename /destination/
mv f1 f2 /folder2/

## To rename a file: mv oldFileName newFileName

## Search files:

find . -name "*.txt"     => lists all txt files
find . -name "batch.txt" => finds the exact file matching the name
find . -name "batch*"    => finds all files whose names start with batch
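find can also filter by type and age, and run a command on each match. A hedged
sketch (the paths are illustrative):

# Log files under /var/log modified in the last 1 day
find /var/log -name "*.log" -mtime -1

# Only directories named "backup*" anywhere under the home directory
find ~ -type d -name "backup*"

# Delete .tmp files older than 7 days, running rm on each match
find /tmp -name "*.tmp" -mtime +7 -exec rm {} \;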
************************************************************************************

## File content commands: SHOULD KNOW

# head filename => displays the first (i.e. oldest) 10 lines of the specified file
>> head [OPTIONS] FILES
>> head -n 5 numbers.txt => the first 5 lines of the file will be printed

# tail filename => displays the last (newest) 10 lines of a file
>> tail [OPTIONS] FILES
>> tail -n 5 numbers.txt => the last 5 lines of the file will be printed

# more => Like the 'cat' command, 'more' displays the content of a file. The only
difference is that, for larger files, 'cat' output scrolls off your screen while
'more' displays the output one screenful at a time.

less => like 'more', but allows scrolling both backward and forward, and does not
need to read the whole file before starting.

*************************************************************************************

## File permission commands: MUST KNOW

# d rwx r-x r-x
    |   |   |
    |   |   +-- other
    |   +------ group of owner
    +---------- owner
  d => directory, r => read(4), w => write(2), x => execute(1)

  rwx r-x r-x => 7 5 5

# By default (with umask 022), a new directory gets 755 and a new file gets 644.

# To change these permissions: chmod -R 777 snap => here -R means recursively

ex: rw- r-- r-- (644) => after chmod -R 755 it becomes
    rwx r-x r-x (755)


# chown is used to change the file owner or group, i.e. to change ownership.
Syntax:
chown [OPTION]... [OWNER][:GROUP] FILE...
chown ubuntu file1


# sudo: stands for "super user do"; it allows a permitted user to execute a
command as the superuser, i.e. with root privileges.

===========================================================================================
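Permissions can also be changed symbolically instead of with octal digits. A
quick sketch on a throwaway file:

touch demo.sh
chmod u+x demo.sh   # add execute for the owner (user)
chmod g-w demo.sh   # remove write from the group
chmod o=r demo.sh   # set "other" to exactly read-only
ls -l demo.sh       # verify: -rwxr--r--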
***************************** User Management Commands ********************************

To add a user with a new home directory, group, password, full name, room number
and contact number, use: adduser

id        => shows the current user's UID, GID and groups
useradd   => create a user (low level, no prompts)
passwd    => set the password (you are then prompted to type it)
su        => switch user (enter that user's password)
sudo -i   => log in as the root user
userdel   => delete a user

The /etc/group file is a text file that defines the groups on the system.
Every group has a unique ID listed in /etc/group, with its group name & members.

"When we create a user, Linux creates a group as well."
"Every user must be associated with at least one group, possibly many."

The group has the same name as the user: Om (user) - Om (group)
All of these live in /etc, which holds the configuration files.

cat /etc/passwd  => info about users
cat /etc/group   => info about groups
cat /etc/gshadow => keeps information about the group administrators

First go to /etc (cd /etc), then:

groupadd
cat group   => to view groups
cat passwd  => to view users
groupdel

Suppose we have 2 groups, devops (1003) & aws (1004), and 2 users, Ram & Shyam.
To add Ram to devops and Shyam to aws:

>> usermod -g 1003 Ram
>> su Ram
>> id

>> usermod -g 1004 Shyam
>> su Shyam
>> id

The now-empty personal groups can then be removed with groupdel (userdel removes
the user itself):

>> groupdel

# To view the members of a group, install the members package: apt-get install members -y

root@ip-172-31-91-160:~# su Ram

$ id

uid=1001(Ram) gid=1002(devops) groups=1002(devops)

$ exit

root@ip-172-31-91-160:~# members devops
Ram

======================================================================================

********************** Networking Commands *****************************

hostname -i  => find the IP address

ifconfig     => find the IP address, i.e. the private IP addresses of your interfaces

ifconfig eth0 => show the ethernet port, i.e. the eth0 interface

traceroute   => prints the route that a packet takes to reach the host.
                To use it we need to install it: apt install traceroute

In companies we may need to send a screenshot of this command's output to the
client or the network team.

tracepath    => traces the path to a host; similar to traceroute but requires no
                root privileges.

ping         => PING stands for Packet InterNet Groper.
                It is used to check network connectivity between host & server,
                reporting per-packet statistics (it uses ICMP, which sits below
                TCP/UDP and has no port numbers).
                It takes a URL or IP address as input.
                Use this command to check whether an internet connection is present.

Ex: to send 5 data packets: ping -c 5 www.google.com


netstat -a or ss => network statistics

dig          => DNS info about a domain

nslookup     => used for querying Domain Name System (DNS) records
                ex: nslookup -type=ns example.com => get the name servers (NS records)

locate       => gives the path of a file; useful only if we know the exact file name

man          => manual, i.e. gives info about a command

"Most system administration commands live in the 'sbin' directories."

===========================================================================================
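A small triage sequence tying these commands together — the host and IP here
are just examples:

ping -c 3 8.8.8.8       # is basic IP connectivity up?
dig +short google.com   # does DNS resolve?
ss -tlnp                # which TCP ports are listening locally, and which process owns them?
traceroute google.com   # where along the path does traffic slow down or die?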
***************************** Package Commands *****************************

YUM (Yellowdog Updater Modified) => Amazon Linux, RedHat, CentOS

APT (Advanced Packaging Tool)    => Debian-based Linux (Ubuntu, Debian)

Update: fixes issues and replaces old data with new; typically free.
Upgrade: adds additional features / moves to a new major version; for commercial
software this may be paid.

2.89.9 => 2  is the major version
          89 is the minor version
          9  is the patch

# APT:

sudo apt update -y          => refresh the package index (lists the latest releases)

sudo apt-get check          => verify that there are no broken dependencies

sudo apt install vsftpd -y  => install a particular package

sudo apt remove vsftpd -y   => remove a particular package

sudo apt list               => get a combined list of all the packages

sudo apt info vsftpd        => find detailed info about a specific package

sudo apt upgrade -y         => upgrade the installed packages


# YUM: package management tool

yum check-update
yum install -y
yum remove -y
yum list all
yum info
yum clean all    => clean the cache
yum upgrade -y
yum repolist all => view all repo lists

yum list installed | grep "package" => check whether a package is installed or not

===========================================================================================

******************* Services ******************************

yum install nginx -y

systemctl status nginx  => check the status of the service
systemctl start nginx   => start the service
systemctl restart nginx => restart the service
systemctl stop nginx    => stop the service
systemctl enable nginx  => start the service at boot

So if you want a service to start now and on every reboot, you need to both
enable and start the service.

Similarly, we can disable it.

Nginx logs live under /var/log/nginx; the configuration lives under /etc/nginx.

# init 0 => command to stop (power off) the server
# init 6 => restarts the system

# whoami   => tells the logged-in user
# uname    => tells which kernel/OS the machine is running
# uname -a => prints all system info (kernel, hostname, architecture)
# uname -r => prints the kernel release

# free -h  => shows RAM usage details in human-readable form

==========================================================================================
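A typical first-run sequence for a fresh service — nginx here is just the
running example from the section above:

sudo apt install nginx -y           # or: yum install nginx -y
sudo systemctl enable --now nginx   # start now AND on every boot, in one step
systemctl status nginx              # confirm it is active (running)
curl -I http://localhost            # quick check that nginx answers on port 80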
*********** Important Linux Commands **********

# df -h  => check disk space info

# df -i  => check inode number info

# du     => check disk usage

# mail -s "" => send email from a Linux server; it requires SMTP configuration.

# ps -aux => shows the status of processes (running programs). (aux prints the processes)

# kill -9 => forces a process to stop immediately (-9 is the SIGKILL signal)

# wget  => lets you download files from the web.

# curl  => transfers data to or from a server.

# w     => shows who is logged on and what they are doing.

# diff f1.txt f2.txt => compares two files and prints their differences.

# uptime => shows how long the system has been up, the number of active users & the load averages.

# top   => shows the processes and their details, i.e. memory, PID, priority, load average.

ex: If load avg: 1.20 2.10 1.90 => in percentage format 120, 210, 190
                 (averaged over the last 1 min, 5 min, 15 min)

# tree  => displays a tree structure beginning with the current directory.

# mount => allows users to mount, i.e. attach additional child file systems to a
particular mount point on the currently accessible file system.

ex: mount [device] [dir]
    sudo mount /dev/sdb1 /mnt/media

# fdisk => a menu-driven command-line utility that allows you to create & manipulate
partition tables on a hard disk.

# /etc/shadow is a text file that contains information about the system users' passwords.

==========================================================================================

*************************** Filter Commands *******************************

# CUT =>
Command for cutting sections from each line of a file & writing the result to standard output.
It can be used to cut parts of a line by byte position, character or field.

Cut using fields: delimiter-based, useful for structured/delimited data:

cut -d ' ' -f2 filename

where -d => delimiter used to split the data: " ", ",", "/"
      -f => field number

Cut using byte position: useful for fixed-width lines; to extract specific bytes:

cut -b 1,2,3 filename   (for bytes 1 2 3)
cut -b 1-3,4-6 filename (for ranges)


# grep (Global Regular Expression Print):

cat file | grep pattern
cat file | grep -i pattern (case-insensitive)

# Difference between grep and find => grep searches for text patterns inside
files, while find searches for the files and directories themselves.

# comm f1 f2 => compares two sorted files line by line and shows their common
and unique lines.

# echo "Hello" => prints the given line

==========================================================================================

*************************** sed (stream editor) ********************************

It performs many operations on a file, such as replacement, insertion or deletion.

g  => global action
s  => substitution
d  => deletion
-i => edit the file in place (when you want the changes written back to the file)


## wc    => returns the number of lines, words and bytes
## wc -l => returns the number of lines
## wc -w => returns the number of words

## Replace =>
sed -i 's/unix/linux/' file.txt  => replace the first occurrence of "unix" with "linux" on each line
sed -i 's/unix/linux/2' file.txt => replaces the second occurrence on each line
sed -i 's/unix/linux/g' file.txt => replaces all occurrences

If you want to change the content of a specific line number, add the numeric
value before the "s", for instance:

$ sed -i '2s/old/new/' file.txt

Ex: Line 3: AWS AZURE GCP

sed -i '3s/AWS/Azure/' f1.txt  => on line 3 of f1, the word 'AWS' is replaced by Azure
>> Azure AZURE GCP

sed -i '3s/Azure/AWS/1' f1.txt => on line 3 of f1, the first 'Azure' is replaced by AWS
>> AWS AZURE GCP


## Delete =>
sed "nd" f.txt          => delete a particular line, where n is the line number
sed "$d" f.txt          => delete the last line
sed "3,6d" f.txt        => delete lines in the range 3 to 6
sed '/pattern/d' f.txt  => delete lines matching a pattern
sed -i '2!d' examp.txt  => delete all lines except line number 2

==========================================================================================
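Before editing in place with -i, it is safer to preview: without -i, sed writes
the result to stdout and leaves the file untouched. A quick sketch (the .bak
suffix works with GNU sed):

sed 's/unix/linux/g' file.txt | head   # preview the substitution, file unchanged
sed -i.bak 's/unix/linux/g' file.txt   # edit in place, keep file.txt.bak as backup
diff file.txt.bak file.txt             # review exactly what changed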
## AWK => a scripting language used for text processing.
          It is used for manipulating data and generating reports.

awk '{print $0}'    => displays all file content
awk '{print $1}'    => displays the first column
awk '{print $1,$4}' => displays columns 1 and 4
awk '{print $NF}'   => displays the last column

ls -lrt | awk '{print NR,$NF}'

NR: keeps a running count of the number of input records (lines).
    If you would like each line to carry a line number, use the NR built-in variable.
NF: the number of fields in a record; $NF represents the last field.

==========================================================================================
## Linux inode number: every file in Linux has its own unique inode number.

## Inode content => a data structure containing the metadata of a file.

## Hard link =>
- Each hard-linked file is assigned the same inode value as the original,
  therefore they reference the same physical file location.
- Hard links are flexible & remain linked even if the original or linked files are moved.
- If the original file is removed, the link will still show the content of the file.
- Even if we rename the original file, hard links keep working properly.
- ln is used to create a hard link (note: hard links to directories are not allowed).
- ls -l => shows all the links; the link column shows the number of links.

>> ln [original filename] [link name]
>> cd ~
>> touch myfile
>> ln myfile hardlink_myfile
>> ls -lrt

Now if we edit the content of myfile, the change is visible via hardlink_myfile.
Even if we delete myfile, hardlink_myfile does not get deleted.


## Soft link =>
- It is a virtual pointer to a file.
- Each soft-linked file has a separate inode value that points to the original file.
- It is similar to the file shortcut feature used in Windows.
- Removing a soft link doesn't affect anything, but removing the original file
  leaves a useless link pointing to a nonexistent file.
- If we rename the original file, all soft links to it become worthless.

command: ln -s [original filename] [link name]

>> ls -lrt
>> ln -s myfile softlink_myfile
>> ls -lrt

lrwxrwxrwx 1 root root date time filename => here the leading l marks a link.

Now if we edit the content of myfile, the change is visible via softlink_myfile.
If we delete myfile, softlink_myfile becomes a dangling link and its content is gone.


==========================================================================================
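A quick way to see the difference is to watch the inode numbers with ls -i.
A sketch on throwaway files:

echo "hello" > myfile
ln myfile hardlink_myfile      # hard link: shares myfile's inode
ln -s myfile softlink_myfile   # soft link: own inode, points at the name
ls -li myfile hardlink_myfile softlink_myfile

rm myfile
cat hardlink_myfile            # still prints "hello" (the data survives)
cat softlink_myfile            # fails: dangling symlink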
tar command => tape archive (tar) is used to create archives and extract archive files.

Syntax:
tar [options] [archive-file] [file or directory to be archived]

Options:

-c : creates an archive
-x : extracts the archive
-f : creates the archive with the given filename
-t : displays or lists the files in an archived file
-u : archives and adds to an existing archive file
-v : displays verbose information
-A : concatenates archive files
-z : zip; tells tar to create the archive using gzip
-j : filter the archive through bzip2
-W : verify an archive file
-r : update or add a file or directory to an existing .tar file

>> tar -czvf samplefile.tar.gz sample_file
>> tar -xzvf samplefile.tar.gz

Archive: archive files are used to collect multiple data files together into a
single file for easier portability & storage, or simply to compress files to use
less storage space.

==========================================================================================
# How to check logs in Linux?
=> go to /var/log, pick the log file we need, then:
   tail filename.log | grep -i error => to find errors

# Thread in Linux:
A thread is the smallest unit of execution within a process. Threads belonging
to the same process share its memory and resources, while each process has its
own address space.

# Zombie process:
It is a process that has completed execution but still has an entry in the
process table. A zombie cannot be killed directly (it is already dead); it
disappears once its parent reaps it, or you can kill the parent process so that
init adopts and reaps the zombie.
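A hedged sketch for spotting zombies and their parents — state "Z" in the STAT
column marks a zombie:

ps -eo pid,ppid,stat,cmd | awk '$3 ~ /Z/'   # list zombie PIDs along with their PPIDs
kill -9 <PPID>                              # last resort: kill the parent so init reaps the zombie
--------------------------------------------------------------------------------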