├── index.html
├── JenkinsPipeline.png
├── K8sArchitecture.png
├── RHEL 9.2 Installation.pdf
├── Session-Screenshots(Paint)
│   ├── Day1
│   │   ├── 18a.png
│   │   ├── 18b.png
│   │   ├── 18c.png
│   │   ├── 18d.png
│   │   ├── 18e.png
│   │   ├── 18f.png
│   │   ├── 18g.png
│   │   ├── 18h.png
│   │   ├── 18i.png
│   │   ├── 18j.png
│   │   ├── 18k.png
│   │   └── 18l.png
│   ├── Day2
│   │   ├── 19a.png
│   │   ├── 19b.png
│   │   ├── 19c.png
│   │   ├── 19d.png
│   │   ├── 19e.png
│   │   ├── 19f.png
│   │   ├── 19g.png
│   │   ├── 19h.png
│   │   └── 19i.png
│   └── Day3
│       ├── 20a.png
│       ├── 20b.png
│       ├── 20c.png
│       ├── 20d.png
│       ├── 20e.png
│       ├── 20f.png
│       ├── 20g.png
│       ├── 20h.png
│       └── 20i.png
├── README.md
├── Configure-Minikube-Cluster.txt
├── Launch-ec2.md
├── Jenkinsfile
├── Kubernetes-Architecture.md
├── CommandHistoryDay3.sh
├── CommandHistoryDay2.sh
├── Jenkins.md
├── Docker.md
└── DevopsBasics.md
/index.html:
--------------------------------------------------------------------------------
 1 | Hi, Welcome to the Linux web server
 2 | hi how r u
--------------------------------------------------------------------------------
/JenkinsPipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/JenkinsPipeline.png
--------------------------------------------------------------------------------
/K8sArchitecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/K8sArchitecture.png
--------------------------------------------------------------------------------
/RHEL 9.2 Installation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/RHEL 9.2 Installation.pdf
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18a.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18a.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18b.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18b.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18c.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18c.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18d.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18d.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18e.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18e.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18f.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18f.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18g.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18g.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18h.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18h.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18i.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18i.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18j.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18j.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18k.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18k.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day1/18l.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day1/18l.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19a.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19a.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19b.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19b.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19c.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19c.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19d.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19d.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19e.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19e.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19f.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19f.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19g.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19g.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19h.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19h.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day2/19i.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day2/19i.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20a.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20a.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20b.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20b.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20c.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20c.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20d.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20d.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20e.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20e.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20f.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20f.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20g.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20g.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20h.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20h.png
--------------------------------------------------------------------------------
/Session-Screenshots(Paint)/Day3/20i.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudhanshuvlog/GFG-Workshop2025/HEAD/Session-Screenshots(Paint)/Day3/20i.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ### GFG Workshop 2025 Repo
2 |
3 | * Python flask app used in the session - https://github.com/sudhanshuvlog/SampleFlaskApp
4 | * Mario Game Repo - https://github.com/sudhanshuvlog/MarioGameOnDocker
5 | * Movie Streaming Application(FullStack App Deployment on EKS Via Jenkins Pipeline) - https://github.com/sudhanshuvlog/Movie-Streaming-App-DevOps
--------------------------------------------------------------------------------
/Configure-Minikube-Cluster.txt:
--------------------------------------------------------------------------------
 1 | ## Launch an EC2 Instance with t2.medium Instance Type (we will configure the Minikube server on it)
2 |
3 | # Install Minikube
4 | ```bash
5 | curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
6 | sudo rpm -Uvh minikube-latest.x86_64.rpm
7 | minikube start --force
8 | ```
9 | # Install kubectl
10 | ```bash
11 | curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.32.0/2024-12-20/bin/linux/amd64/kubectl
12 | chmod +x ./kubectl
13 | sudo cp ./kubectl /usr/bin/
14 | ```
15 |
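After running the two install blocks above, a quick sanity check that both binaries landed on the PATH (a minimal sketch; it only verifies presence, not cluster health):

```bash
# Report whether the tools installed above are visible on PATH
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: found"
    else
      echo "$bin: MISSING"
    fi
  done
}
check_bins minikube kubectl
```

Once both report `found`, `minikube status` and `kubectl get nodes` should show a single Ready node.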
--------------------------------------------------------------------------------
/Launch-ec2.md:
--------------------------------------------------------------------------------
1 | ### How to launch an EC2 Instance?
2 |
3 | 1. Navigate to the EC2 Console.
4 | 2. Follow the Outlined steps below.
5 |
6 | 
7 |
8 |
9 | 
10 |
11 |
12 | 
13 |
14 |
15 | 
16 |
17 |
18 | 
19 |
20 |
21 | 
22 |
--------------------------------------------------------------------------------
/Jenkinsfile:
--------------------------------------------------------------------------------
 1 | pipeline {
 2 |     agent {
 3 |         label "ec2" // run on the agent node labeled "ec2"
 4 |     }
 5 |
 6 |     stages {
 7 |         stage('Download the Source Code') { // job1 - clone the repo
 8 |             steps {
 9 |                 git branch: 'main', url: 'https://github.com/sudhanshuvlog/SampleFlaskApp.git'
10 |             }
11 |         }
12 |         stage('Install python3') {
13 |             steps {
14 |                 sh 'yum install python3 -y'
15 |             }
16 |         }
17 |         stage('Install pip') {
18 |             steps {
19 |                 sh 'yum install pip -y'
20 |             }
21 |         }
22 |         stage('Install python dependencies') {
23 |             steps {
24 |                 sh 'pip3 install -r requirements.txt'
25 |             }
26 |         }
27 |         stage('Execute Test Cases') {
28 |             steps {
29 |                 sh 'pytest'
30 |             }
31 |         }
32 |         stage('Flake8 test cases') {
33 |             steps {
34 |                 sh 'python3 -m flake8'
35 |             }
36 |         }
37 |         stage('Build Docker Image') {
38 |             steps {
39 |                 sh 'yum install docker -y'
40 |                 sh 'systemctl start docker'
41 |                 sh 'docker build -t gfgworkshop .'
42 |             }
43 |         }
44 |         stage('Deployment') { // dev deployment
45 |             steps {
46 |                 // "|| true" keeps the first run from failing when the
47 |                 // container does not exist yet
48 |                 sh 'docker rm -f webos || true'
49 |                 sh 'docker run --name webos -d -p 81:80 gfgworkshop'
50 |             }
51 |         }
52 |     }
53 | }
--------------------------------------------------------------------------------
/Kubernetes-Architecture.md:
--------------------------------------------------------------------------------
1 | ## Kubernetes Architecture
2 |
3 | **Architecture of Kubernetes**
4 |
5 | 
6 |
7 | - **Master Node**
8 | - The master node/control plane is responsible for managing the Kubernetes cluster.
9 | - The master node consists of the following components:
10 | - **API Server**
11 | - The API server is responsible for serving the Kubernetes API.
12 | - It is used for communication between the master node and the worker nodes, users, and external clients.
13 | - **Scheduler**
14 | - The scheduler is responsible for scheduling the applications on the worker nodes.
15 | - It decides which node each pod should be placed on.
16 | - **Controller Manager**
17 | - The controller manager is responsible for managing the controllers.
18 | - It is responsible for managing the replication controller, endpoints controller, namespace controller, and service accounts controller.
19 | - **etcd**
20 | - etcd is a distributed key-value DB used to store the cluster state.
21 | - The API server is the only component that reads from and writes to etcd directly.
22 |
23 | - **Worker Node**
24 |
25 | - The worker node is responsible for running the applications.
26 | - It consists of the following components:
27 | - **Kubelet**
28 | - The kubelet is the Kubernetes agent on each worker node. It communicates with the master node, receives pod specifications from it, and runs the containers.
29 | - It is responsible for managing the containers on the worker node.
30 | - **Kube Proxy**
31 | - The kube proxy is responsible for managing the network on the worker node.
32 | - It is responsible for routing the traffic to the containers.
33 | - **Container Runtime**
34 | - The container runtime is responsible for running the containers.
35 | - It pulls images and manages the container lifecycle (e.g., containerd, CRI-O, Docker).
36 |
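On a running cluster, the control-plane components described above can be listed directly. A sketch; exact pod names vary by distribution, and on minikube everything runs on the single node:

```bash
# Control-plane components (API server, scheduler, controller manager, etcd)
# run as pods in the kube-system namespace
kubectl get pods -n kube-system

# Worker nodes and the container runtime each one uses
kubectl get nodes -o wide

# kubelet runs as a host service, not a pod; check it on a node with:
#   systemctl status kubelet
```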
--------------------------------------------------------------------------------
/CommandHistoryDay3.sh:
--------------------------------------------------------------------------------
1 | [root@ip-172-31-3-161 /]# history
2 | 1 cd /
3 | 2 yum install docker -y
4 | 3 systemctl start docker
5 | 4 docker pull jenkins/jenkins
6 | 5 docker images
7 | 6 docker run -p 8080:8080 -p 50000:50000 --name jenkins -dit --restart=on-failure jenkins/jenkins:lts-jdk17
8 | 7 docker ps
9 | 8 docker attach jenkins
10 | 9 docker ps
11 | 10 docker run -it amazonlinux
12 | 11 docker ps
13 | 12 docker attach amazing_dewdney
14 | 13 docker exec -it jenkins ls
15 | 14 docker exec -it jenkins pwd
16 | 15 docker exec -it jenkins bash
17 | 16 docker ps
18 | 17 clear
19 | 18 ls
20 | 19 cd /
21 | 20 history
22 | 1 cd /
23 | 2 git
24 | 3 yum install git -y
25 | 4 ls
26 | 5 cat data
27 | 6 cd data
28 | 7 ls
29 | 8 cd workspace/
30 | 9 ls
31 | 10 cd python\ pipeline
32 | 11 ls
33 | 12 docker ps
34 | 13 docker ps -a
35 | 14 cd /
36 | 15 eksctl delete cluster --region=ap-south-1 --name=EKS21
37 | 16 curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
38 | 17 sudo rpm -Uvh minikube-latest.x86_64.rpm
39 | 18 minikube start --force
40 | 19 kubectl get pods
41 | 20 mysql -h database-2.cbaw4kes2epe.ap-south-1.rds.amazonaws.com -u admin -p
42 | 21 yum whatprovides mysql
43 | 22 yum install mariadb105-3:10.5.25-1.amzn2023.0.1.x86_64
44 | 23 yum install mariadb105-3:10.5.25-1.amzn2023.0.1.x86_64
45 | 24 mysql -h database-2.cbaw4kes2epe.ap-south-1.rds.amazonaws.com -u admin -p
46 | 25 docker ps
47 | 26 kubectl get pods
48 | 27 kubectl get svc
49 | 28 yum install socat -y > /dev/null
50 | 29 minikube ip
51 | 30 curl 192.168.49.2:31463
52 | 31 socat TCP4-LISTEN:83,fork,su=nobody TCP:192.168.49.2:31463 &
53 | 32 ls
54 | 33 cd data/
55 | 34 ls
56 | 35 cd workspace/
57 | 36 ls
58 | 37 cd MymovieApp
59 | 38 ls
60 | 39 cd deploy/
61 | 40 ls
62 | 41 vi webapp-config.yaml
63 | 42 socat TCP4-LISTEN:3000,fork,su=nobody TCP:192.168.49.2:31463 &
64 | 43 cat webapp-config.yaml
65 | 44 kubectl apply -f webapp-config.yaml
66 | 45 kubectl get pods
67 | 46 kubectl apply -f deploy/deployment-web.yaml
68 | 47 kubectl apply -f deployment-web.yaml
69 | 48 kubectl apply -f service-web.yaml
70 | 49 kubectl get pods
71 | 50 kubectl get svc
72 | 51 socat TCP4-LISTEN:8080,fork,su=nobody TCP:192.168.49.2:32756 &
73 | 52 mysql -h database-1.cbaw4kes2epe.ap-south-1.rds.amazonaws.com -u admin -p
74 | 53 kubectl
75 | 54 cd /
76 | 55 curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
77 | 56 sudo mv /tmp/eksctl /usr/local/bin
78 | 57 eksctl version
79 | 58 aws configure
80 | 59 clear
81 | 60 curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
82 | 61 chmod +x ./kubectl
83 | 62 sudo mv ./kubectl /usr/local/bin
84 | 63 kubectl version --short --client
85 | 64 eksctl create cluster --name EKS21 --region ap-south-1 --vpc-public-subnets=subnet-0e64ffc947ac8929c,subnet-04c1ed6ba9c55ffd7 --nodegroup-name default-ng --node-type t3.medium --nodes=2 --nodes-min=2 --nodes-max=2 --node-volume-size=20 --ssh-access --ssh-public-key DevOps --managed
86 | 65 eksctl create cluster --name EKS25 --region ap-south-1 --vpc-public-subnets=subnet-0e64ffc947ac8929c,subnet-04c1ed6ba9c55ffd7 --nodegroup-name default-ng-1 --node-type t3.medium --nodes=2 --nodes-min=2 --nodes-max=2 --node-volume-size=20 --ssh-access --ssh-public-key DevOps --managed
87 | 66 cd /
88 | 67 history
--------------------------------------------------------------------------------
/CommandHistoryDay2.sh:
--------------------------------------------------------------------------------
1 | 10 whoami
2 | 11 yum --help
3 | 12 man(yum)
4 | 13 man yum
5 | 14 date
6 | 15 ls
7 | 16 touch a.txt
8 | 17 ls
9 | 18 man touch
10 | 19 yum install httpd
11 | 20 systemctl start httpd
12 | 21 cd /etc/httpd/conf
13 | 22 ls
14 | 23 cat httpd.conf
15 | 24 vim httpd.conf
16 | 25 systemctl restart httpd
17 | 26 clear
18 | 27 cd /
19 | 28 clear
20 | 29 git clone https://github.com/sudhanshuvlog/SampleFlaskApp.git
21 | 30 yum install git
22 | 31 git clone https://github.com/sudhanshuvlog/SampleFlaskApp.git
23 | 32 ls
24 | 33 cd SampleFlaskApp/
25 | 34 ls
26 | 35 cat app.py
27 | 36 cat requirements.txt
28 | 37 pytest
29 | 38 yum install python3
30 | 39 yum install pip
31 | 40 pip3 install -r requirements.txt
32 | 41 pytest
33 | 42 cat app.py
34 | 43 git pull
35 | 44 cat app.py
36 | 45 pytest
37 | 46 git pull
38 | 47 pytest
39 | 48 flake8 .
40 | 49 flake8 app.py
41 | 50 python flake8
42 | 51 python3 flake8 .
43 | 52 flake
44 | 53 flake8
45 | 54 flake8 .
46 | 55 python3 app.py
47 | 56 cd /
48 | 57 yum install docker
49 | 58 systemctl start docker
50 | 59 docker ps
51 | 60 docker images
52 | 61 docker pull ubuntu
53 | 62 docker images
54 | 63 docker run ubuntu
55 | 64 docker ps
56 | 65 docker ps -a
57 | 66 date
58 | 67 ls
59 | 68 docker run -it ubuntu
60 | 69 ls
61 | 70 docker ps
62 | 71 docker ps -a
63 | 72 docker start brave_buck
64 | 73 docker ps
65 | 74 ls
66 | 75 touch c.txt d.txt
67 | 76 ls
68 | 77 docker attach brave_buck
69 | 78 docker ps
70 | 79 docker run -it ubuntu
71 | 80 docker ps
72 | 81 docker inspect sweet_cray
73 | 82 docker ps
74 | 83 docker inspect brave_buck
75 | 84 docker ps
76 | 85 docker pull amazonlinux
77 | 86 docker images
78 | 87 docker run -it --name os1 amazonlinux
79 | 88 docker pull nginx
80 | 89 docker images
81 | 90 docker run -it nginx
82 | 91 docker run -it -d nginx
83 | 92 docker ps
84 | 93 docker inspect jolly_mcnulty
85 | 94 curl 172.17.0.5
86 | 95 ps -aux
87 | 96 systemctl status httpd
88 | 97 ps -aux
89 | 98 kill -9 347463
90 | 99 kill -9 37463
91 | 100 ps -aux
92 | 101 ps -aux
93 | 102 ps -aux | grep httpd
94 | 103 ps -aux | grep python
95 | 104 ps -aux | grep python
96 | 105 netstat -tnlp
97 | 106 ps -aux
98 | 107 rpm -q nginx
99 | 108 docker ps
100 | 109 ps -aux | grep nginx
101 | 110 docker run -d -p 81:80 nginx
102 | 111 docker ps
103 | 112 netstat -tnlp
104 | 113 ps -aux | grep nginx
105 | 114 docker run -dit --memory=30M amazonlinux
106 | 115 docker ps
107 | 116 docker stats beautiful_buck
108 | 117 vi Dockerfile
109 | 118 ls
110 | 119 mv Dockerfile SampleFlaskApp/
111 | 120 ls
112 | 121 cd SampleFlaskApp/
113 | 122 ls
114 | 123 vi Dockerfile
115 | 124 docker ps
116 | 125 vi Dockerfile
117 | 126 docker build -t gfgimg:v1 .
118 | 127 docker images
119 | 128 docker run -dit --name mywebos gfgimg
120 | 129 docker run -dit --name mywebos gfgimg:v1
121 | 130 docker ps
122 | 131 docker run -dit --name mywebos1 -p 8080:500 gfgimg:v1
123 | 132 docker ps
124 | 133 docker login
125 | 134 docker images
126 | 135 docker tag gfgimg:v1 jinny1/gfgsampleimg
127 | 136 docker images
128 | 137 docker push jinny1/gfgsampleimg
129 | 138 docker pull jinny1/mario-game
130 | 139 docker images
131 | 140 docker run -p 9090:80 jinny1/gfgsampleimg
132 | 141 docker run -p 9091:80 jinny1/mario-game
133 | 142 cd /
134 | 143 history
--------------------------------------------------------------------------------
/Jenkins.md:
--------------------------------------------------------------------------------
1 | ### Jenkins
2 |
3 |
4 | ### What is CI/CD?
5 |
6 | - CI/CD stands for Continuous Integration and Continuous Delivery/Deployment.
7 | 
8 |
9 | - Continuous Integration is the practice of merging code changes into a central repository several times a day. It is used to detect bugs early in the development cycle.
10 | - Continuous Delivery/Deployment is the practice of automatically releasing validated code changes to an environment such as staging or production, so that changes reach users quickly and reliably.
11 |
12 | **CI/CD Process Example**
13 |
14 |
15 | Developer commit code in GitHub -> Pull the code from GitHub -> Build the code -> Test the code -> Deploy the code to the Dev environment -> Test the code in the Dev environment -> Deploy the code to the QA environment -> Test the code in the QA environment -> Deploy the code to the Production environment
16 |
17 | ### What is Jenkins?
18 |
19 | - Jenkins is an open-source automation tool written in Java. It is used to automate the CI/CD process.
20 |
21 | **Jenkins Installation**
22 |
23 | - Install Java
24 | - Install Libraries
25 | - Install Jenkins
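On a RHEL-family host, the three bullets above roughly translate to the following (a sketch based on the official Jenkins RPM repository; verify the URLs and package names against jenkins.io before use):

```bash
# Add the Jenkins stable RPM repo and its signing key
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key

# Jenkins needs Java; 17 works with current LTS releases
sudo yum install java-17-openjdk jenkins -y

# Start Jenkins now and on every boot (listens on port 8080)
sudo systemctl enable --now jenkins
```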
26 |
27 | **After installing Jenkins, follow the steps below:**
28 |
29 | - Now Jenkins is installed and running on port 8080. To access Jenkins, open the following URL in a browser.
30 | ```
31 | http://<server-public-ip>:8080
32 | ```
33 | - To get the initial admin password, run the following command.
34 | ```
35 | sudo cat /var/lib/jenkins/secrets/initialAdminPassword
36 | ```
37 | 
38 |
39 | - Install the suggested plugins.
40 | 
41 |
42 |
43 | - Create an admin user.
44 | 
45 | 
46 |
47 |
48 | - Jenkins is now ready to use.
49 | 
50 |
51 | ### Jenkins Terminologies
52 |
53 | - **Jenkins Job** - A unit of work you want Jenkins to perform, such as a build, a test run, or a deployment.
54 |
55 | - **Jenkins Pipeline** - A Jenkins Pipeline is a collection of Jenkins Jobs. It is used to organize Jenkins Jobs into stages.
56 |
57 | - **Master** - The Jenkins Master is the main Jenkins server. It is responsible for managing the Jenkins Jobs and the Jenkins Agents.
58 |
59 | - **Worker/Agent** - The Jenkins Worker/Agent is a machine that is responsible for running Jenkins Jobs. It is connected to the Jenkins Master.
60 |
61 |
62 | - Launch Jenkins Server On Docker- `docker run -p 8080:8080 -p 50000:50000 -dit --name jenkins --restart=on-failure -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts-jdk17`
63 |
64 | ### Steps to Configure Jenkins Slave
65 |
66 | - Launch an EC2 Instance with `t2.medium` Instance Type (we will configure it as our Jenkins Agent/Slave node)
67 |
68 | - Run the commands below to install the Java JDK
69 | * `wget https://download.oracle.com/java/17/archive/jdk-17.0.10_linux-x64_bin.rpm`
70 | * `yum install jdk-17.0.10_linux-x64_bin.rpm -y`
71 |
72 | - Start the agent and join it to the Jenkins master node. (You will get these commands from the Jenkins master while adding the node; don't use the ones below as-is, they are specific to my server.)
73 | * `curl -sO http://54.146.158.246:8080/jnlpJars/agent.jar`
74 | * `java -jar agent.jar -jnlpUrl http://54.146.158.246:8080/computer/ec2/jenkins-agent.jnlp -secret 557af3ada1a128916ce4cac68d93ce7eb1b6d5e186ac18f43972697165a9f0d8 -workDir "/" &`
75 |
76 | ### Jenkins Server
77 |
78 | - My Python Flask App repository used for demonstration: [Snake Game](https://github.com/sudhanshuvlog/SnakeGame.git)
79 |
80 | - Create Cron Schedule Expression - https://crontab.guru/
81 |
82 | - *Jenkinsfile* - A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline. It is written using the Groovy DSL (Domain-Specific Language) and is used to define the entire build process, including stages, steps, and other configurations. This approach provides consistency, repeatability, and easy collaboration in the software development and deployment process. A sample pipeline named `Jenkinsfile` is present in this repo.
83 |
84 | ### Why do we need Jenkins Cluster (Or) Jenkins Master-Slave Architecture?
85 |
86 | - A **Jenkins Cluster** is a group consisting of a Jenkins master node and one or more slave (agent) nodes.
87 | - Consider a scenario with a Jenkins server and 1000 jobs to run. Running all jobs on a single Jenkins server consumes a lot of resources from the master node and becomes difficult to manage.
88 |
89 | **Example:**
90 |
91 | - Handling increased workload or parallel jobs might be challenging for a single machine, leading to slower build times.
92 | - If the master server fails or becomes unavailable, the entire CI/CD process is disrupted.
93 |
94 | ### Pipeline in Jenkins
95 |
96 | - Jenkins Pipeline streamlines the execution of multiple stages within a single job, simplifying the overall workflow.
97 | - **Pre-requisites** - You need to install the `Pipeline plugin` in your Jenkins server.
98 | - Before the installation of the Pipeline plugin, the conventional approach involved running multiple jobs to handle distinct stages of a process.
99 | - With the installation of the Pipeline plugin, the need for managing multiple jobs is eliminated. Now, all stages can be seamlessly executed within a single job, optimizing the CI/CD pipeline.
100 |
101 | ### Jenkinsfile
102 |
103 | - In Pipeline, we can define the stages in a file called `Jenkinsfile`.
104 | - Jenkinsfile uses Groovy language.
105 | - By encapsulating all stages within the Jenkinsfile, users can execute an entire workflow within a single job. This simplifies job management and enhances pipeline efficiency.
106 | - Example of Jenkinsfile:
107 |
108 | ```groovy
109 | pipeline {
110 | agent any
111 | stages {
112 | stage('Build') {
113 | steps {
114 | echo 'Building..'
115 | }
116 | }
117 | stage('Test') {
118 | steps {
119 | echo 'Testing..'
120 | }
121 | }
122 | stage('Deploy') {
123 | steps {
124 | echo 'Deploying....'
125 | }
126 | }
127 | }
128 | }
129 | ```
130 |
131 | ### Jenkins Pipeline Triggers
132 |
133 | - Poll SCM - Jenkins checks the repository for changes on a schedule you define (every x minutes).
134 | - The Poll SCM trigger wastes resources, since most polls find no changes; it is better suited to periodic use cases such as data backups.
135 | - The Webhook trigger is event-driven: it activates the Jenkins job only when there is a change in the repository.
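With the GitHub plugin installed, the master exposes a webhook endpoint that GitHub posts to on every push. A hedged sketch of smoke-testing that endpoint by hand; replace the placeholder with your master's address, and note that a real delivery carries a GitHub payload this bare request lacks:

```bash
# Simulate a webhook delivery to Jenkins' GitHub plugin endpoint
# (<jenkins-server> is a placeholder for your master's public address)
curl -X POST http://<jenkins-server>:8080/github-webhook/
```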
--------------------------------------------------------------------------------
/Docker.md:
--------------------------------------------------------------------------------
1 | # Docker
2 |
3 | Before going into Docker, Let's understand How companies used to deploy applications in a server. They have different methods for deploying applications:
4 |
5 | - Bare Metal Servers involve installing the operating system directly on a physical server, making recovery slow in case of application crashes, and it could take around 30 minutes. With Virtual Machines like AWS EC2, a hypervisor manages virtual machines, reducing downtime but still taking around 2-3 minutes to restart. Docker, a containerization technology, uses images to create lightweight and isolated containers, allowing for instant recovery within seconds if the application crashes.
6 |
7 | ## What is Docker?
8 |
9 | Docker is a tool designed to create, deploy, and run applications using containers. Containers package an application with its dependencies, allowing it to run consistently across various environments.
10 |
11 | ## What is Docker Image?
12 |
13 | Docker Image is a template for creating Docker containers. It contains all dependencies and libraries needed to run an application. We can create a Docker image by using Dockerfile or by using the docker commit command. It is like a Package that contains all the dependencies and libraries required to run the application.
14 |
15 | ## What is a Docker Container?
16 |
17 | A Docker Container is a running instance of a Docker image. We can create a Docker container from a Docker image. It is similar to a virtual machine but more lightweight.
18 |
19 |
20 | For more reference, [Click Here](https://www.docker.com/resources/what-container/)
21 |
22 | #### Learn About virtualization - https://aws.amazon.com/what-is/virtualization/
23 |
24 | ### Docker Commands
25 |
26 | - **docker version**: Shows the Docker version.
27 | - **docker info**: Displays system-wide Docker information.
28 | - **docker images**: Lists all Docker images.
29 | - **docker ps**: Shows running Docker containers.
30 | - **docker ps -a**: Lists all Docker containers, including stopped ones.
31 | - **docker pull `<image-name>`**: Pulls a Docker image from the Docker Hub.
32 | - **docker run `<image-name>`**: Runs a Docker image, creating a Docker container.
33 | - **docker run -it `<image-name>`**: Runs a Docker image, creating a Docker container with an open interactive terminal.
34 | - **docker run -it -d `<image-name>`**: Creates a Docker container but runs it in the background (detached mode).
35 | - **docker run -it -d -p 8080:80 `<image-name>`**: Runs a Docker image, creating a background Docker container, mapping port 8080 on the host to port 80 in the container.
36 | - **docker attach `<container-id>`**: Attaches the terminal to a running Docker container.
37 | - **docker exec -it `<container-id>` bash**: Opens a terminal in a Docker container.
38 | - **docker stop `<container-id>`**: Stops a Docker container.
39 | - **docker start `<container-id>`**: Starts a Docker container.
40 | - **docker rm `<container-id>`**: Deletes a Docker container.
41 | - **docker rmi `<image-name>`**: Deletes a Docker image.
42 | - **docker commit `<container-id>` `<image-name>`**: Creates a Docker image from a Docker container.
43 | - **docker rm -f `<container-id>`**: Deletes a Docker container forcefully.
44 | - **docker rm -f $(docker ps -a -q)**: Deletes all Docker containers forcefully.
45 | - **docker rmi -f `<image-name>`**: Deletes a Docker image forcefully.
46 | - **netstat -tnlp**: Displays all listening TCP ports on the host machine.
47 | - **ctrl+p+q**: Detaches the terminal from the Docker container without stopping it.
48 | - **docker run -p 80:80 -d --name webos -v /mydata/:/usr/share/nginx/html nginx**: Mounts a local directory into the container to get a persistent volume.
49 | - **docker inspect `<container-id>`**: Inspects a container, giving you detailed information such as its IP address, volumes, environment variables, etc.
50 | - **docker exec -it `<container-id>` bash**: Executes the bash program inside your container; you can also run any other program, like date, python3, etc.
51 | - **docker run -it -p 80:80 -v /local_dir:/container_dir `<image-name>`**: Runs a container from the given image. The `-it` flag runs the container in interactive mode with a terminal, the `-p` flag maps a container port to a host port, and the `-v` flag mounts a local directory/volume into the container, giving it persistent storage.
52 | - **docker cp `<container-id>`:`<path>` `<local-path>`**: Copies files from a container to your local machine.
53 | - **docker cp `<local-path>` `<container-id>`:`<path>`**: Copies files from your local machine to a container.
54 |
55 | - Docker Architecture -
56 |
57 |
58 | - Dockerfile- https://docs.docker.com/reference/dockerfile/#:~:text=Docker%20can%20build%20images%20automatically,line%20to%20assemble%20an%20image.
59 |
60 | Important Document Links:
61 |
62 | 1) What are containers - https://www.docker.com/resources/what-container/
63 | 2) What is a container runtime - https://www.docker.com/products/container-runtime/
64 | 3) DockerHub Registry - https://hub.docker.com/repositories/jinny1
65 | 4) Docker Architecture - https://docs.docker.com/get-started/overview/
66 | 5) Docker Compose - https://docs.docker.com/compose/
67 | 6) Docker Network Drivers - https://docs.docker.com/network/drivers/
68 | 7) *Why Docker Is Fast* - Check Docker's Underlying Technology 👇
70 | `Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technology called *namespaces* to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.`
71 |
72 | *What Are Namespaces* - https://www.nginx.com/blog/what-are-namespaces-cgroups-how-do-they-work/
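As a quick illustration (assuming a Linux host), every process already belongs to a set of namespaces, which you can list under `/proc`; Docker simply creates a fresh set of these for each container:

```shell
# List the namespaces the current shell belongs to (Linux only).
# Docker creates new pid, net, mnt, ... namespaces for every container it starts.
ls /proc/self/ns
```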
73 |
74 | - `docker save -o image.tar myimg` - Save a Docker image to a tar archive
75 | - `docker load -i image.tar` - Load Docker images from a tar archive
76 |
77 | ### Dockerfile
78 |
79 | - It is a file that contains instructions on how to build an image.
80 |
81 | **Example Dockerfile**
82 |
83 | ```dockerfile
84 | FROM ubuntu:latest
85 | RUN apt-get update
86 | RUN apt-get install -y nginx
87 | CMD ["nginx", "-g", "daemon off;"]
88 | ```
89 |
90 | - After creating a Dockerfile, you can build an image with the help of the following command:
91 |
92 | ```bash
93 | docker build -t <image-name> .
94 | ```
95 |
96 | **Dockerfile Instructions**
97 |
98 | - **FROM** - This instruction is used to specify the base image.
99 | - **RUN** - This instruction is used to run commands at the time of building the image.
100 | - **LABEL** - This instruction is used to add metadata to the image. You can specify any key-value pair as metadata such as maintainer, description, version, etc.
101 | - **COPY** - This instruction is used to copy files from the local machine to the docker image.
102 | - **ENV** - This instruction is used to set environment variables inside of your image.
103 | - **WORKDIR** - This instruction is used to set the working directory for the instructions that follow it.
104 | - **CMD** - This instruction is used to specify the command that needs to be executed when a container is created from the image.
105 | - **ENTRYPOINT** - This instruction also specifies the command to be executed when a container is created from the image. Unlike CMD, it is not replaced by arguments passed to the docker run command; those arguments are appended to the entrypoint command instead.
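Most of these instructions can be seen together in a single file. The example below is a hypothetical sketch (the base image, file names, and label values are illustrative, not from this workshop):

```dockerfile
# Hypothetical Dockerfile combining the instructions listed above
FROM python:3.12-slim                  # FROM: base image
LABEL maintainer="you@example.com"     # LABEL: metadata as key-value pairs
ENV APP_ENV=production                 # ENV: environment variable baked into the image
WORKDIR /app                           # WORKDIR: directory for the instructions below
COPY app.py /app/                      # COPY: file from the build context into the image
RUN pip install flask                  # RUN: executed at image build time
CMD ["python", "app.py"]               # CMD: default command at container start
```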
106 |
107 | **CMD** vs **ENTRYPOINT** - CMD can be completely overridden by passing arguments to the docker run command, whereas ENTRYPOINT cannot: arguments passed to docker run are appended to the ENTRYPOINT command instead. For example, if you have a Dockerfile with the following CMD instruction:
108 |
109 | ```dockerfile
110 | CMD ["nginx", "-g", "daemon off;"]
111 | ```
112 |
113 | You can override the CMD instruction by passing arguments to the docker run command like this:
114 |
115 | ```bash
116 | docker run -it <image-name> bash
117 | ```
118 |
119 | But if you have a Dockerfile with the following ENTRYPOINT instruction:
120 |
121 | ```dockerfile
122 | ENTRYPOINT ["nginx", "-g", "daemon off;"]
123 | ```
124 |
125 | You cannot override the ENTRYPOINT instruction this way; anything you pass to the docker run command is appended as an extra argument to the command specified in ENTRYPOINT:
126 |
127 | ```bash
128 | docker run -it <image-name> bash
129 | ```
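A common pattern is to combine the two: ENTRYPOINT fixes the program, while CMD supplies its default, overridable arguments. A minimal sketch:

```dockerfile
# ENTRYPOINT is the fixed program; CMD holds its default arguments.
ENTRYPOINT ["ping"]
CMD ["-c", "4", "localhost"]
# docker run <image-name>              -> runs: ping -c 4 localhost
# docker run <image-name> google.com   -> runs: ping google.com (CMD is replaced)
```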
130 |
131 | ---
132 |
133 | ### Docker Hub
134 |
135 | Docker Hub is a container registry built for developers and open source contributors to find, use, and share their container images. With DockerHub, developers can host public repos that can be used for free, or private repos for teams and enterprises.
136 |
137 | To push an image to Docker Hub, you can run the following commands:
138 |
139 | ```bash
140 | docker login
141 | docker tag <image-name> <username>/<repo-name>
142 | docker push <username>/<repo-name>
143 | ```
144 |
145 | To pull an image from Docker Hub, you can run the following command:
146 |
147 | ```bash
148 | docker pull <username>/<repo-name>
149 | ```
150 |
151 | To push an image to any other registry, you can run the following commands:
152 |
153 | ```bash
154 | docker login <registry-url>
155 | docker tag <image-name> <registry-url>/<repo-name>
156 | docker push <registry-url>/<repo-name>
157 | ```
158 | ---
159 |
160 |
--------------------------------------------------------------------------------
/DevopsBasics.md:
--------------------------------------------------------------------------------
1 | # GFG-Workshop2025
2 | Data For GFG Workshop 2025
3 |
4 |
5 | # DevOps
6 |
7 | ## Traditional vs DevOps
8 |
9 | ### Traditional
10 | In the traditional software development lifecycle, the process is segmented between two distinct teams:
11 |
12 | - **Development Team**:
13 | - Gathers business requirements
14 | - Develops the application code
15 | - Stores the code in a centralized repository (e.g., GitHub)
16 | - Notifies the Operations team upon completion
17 |
18 | - **Operations Team**:
19 | - Retrieves the code from the centralized repository
20 | - Performs manual testing
21 | - Deploys the application to the server
22 |
23 | ### Key Characteristics of the Traditional Approach:
24 | - Separation of roles: Development and Operations teams function independently, leading to potential communication gaps and slower workflows.
25 | - Development Team: Focused solely on coding, with little involvement in deployment or operational concerns.
26 | - Operations Team: Responsible for ensuring the application is tested and deployed, often involving manual processes.
27 |
28 | ### How Does the Traditional Model Work?
29 | The development team completes the coding phase and stores the code in a central repository. Once finished, they inform the operations team (usually via email or another non-automated method). The operations team then downloads the code, manually tests it, and handles deployment to the server. This siloed structure often results in delays, miscommunications, and inefficiencies, as the two teams work in isolation from one another.
30 |
31 | ### Disadvantages of the Traditional Approach
32 |
33 | - Manual processes: Tasks like testing and deployment are labor-intensive, which increases the time required to complete each phase.
34 | - Higher risk of errors: The reliance on manual intervention makes the process prone to human errors, which can affect the quality and reliability of the application.
35 | - Extended Time to Market (TTM): The slower, segmented workflow makes it difficult to quickly release new features or updates, making the model less suitable for companies needing agility and rapid growth.
36 |
37 | ### DevOps
38 |
39 | - DevOps Team:
40 | - Gathers business requirements
41 | - Develops application code
42 | - Stores code in GitHub
43 | - Automates testing
44 |       - Deploys the application to the server
45 |
46 | - Integrated Approach:
47 | DevOps merges development and operations into a unified team responsible for both application development and deployment.
48 |
49 | #### How DevOps Works?
50 |
51 | DevOps is a methodology that fosters collaboration between development and operations, allowing the team to develop, test, and deploy applications faster and with fewer bugs. By automating various processes, such as testing and deployment, DevOps ensures a streamlined workflow, reducing bottlenecks and improving efficiency.
52 |
53 | #### Advantages of DevOps
54 |
55 | - Automated Processes: Automation reduces the time spent on manual tasks.
56 | - Fewer Errors: Automation minimizes the risk of human error, improving application reliability.
57 | - Faster Time to Market (TTM): DevOps accelerates development and deployment, making it ideal for fast-paced, growing companies.
58 |
59 | #### Popular DevOps Tools
60 |
61 | - Git: Version control system for tracking code changes.
62 | - Jenkins: Continuous integration tool for automating code build, testing, and deployment.
63 | - Docker: Containerization tool for packaging and deploying applications in isolated environments.
64 | - Kubernetes: Orchestration tool for managing and scaling containerized applications.
65 | - Ansible: Configuration management tool for automating setup and configuration tasks.
66 | - Terraform: Infrastructure-as-code tool for automating infrastructure provisioning.
67 | - Monitoring Tools: Tools like Grafana, Prometheus, and Nagios for monitoring applications and infrastructure.
68 |
69 | ---
70 |
71 | ## Cloud Computing
72 |
73 | - Cloud computing is the delivery of computing services such as servers, storage, databases, networking, software, and analytics over the internet (“the cloud”). It enables organizations to access and use IT resources on-demand, rather than owning and maintaining physical hardware and infrastructure.
74 |
75 | - Cloud computing offers several advantages, including scalability, cost-efficiency, flexibility, and reliability. Depending on the deployment model, cloud services can be categorized into Public Cloud, Private Cloud, or Hybrid Cloud, and compared with On-Premises solutions.
76 |
77 | ## Public Cloud
78 |
79 | - A Public Cloud is a cloud infrastructure that is hosted and managed by third-party cloud providers like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). In a public cloud model, resources such as virtual machines, storage, and applications are made available over the internet to multiple customers on a shared basis.
80 |
81 | - Key Characteristics:
82 |
83 | - Cost-effective: You pay for the resources you use, and there's no need to maintain your own infrastructure.
84 | - Scalability: Public clouds offer virtually unlimited scalability to meet growing demand.
85 | - Managed by the provider: Cloud providers handle the maintenance, security, and updates of the infrastructure.
86 | - Multi-tenancy: Resources are shared across multiple users, though isolation is maintained at the data level.
87 |
88 | - Examples:
89 | - Amazon Web Services (AWS)
90 | - Microsoft Azure
91 | - Google Cloud Platform (GCP)
92 |
93 | - Use Cases:
94 | - Startups and companies that need rapid scaling
95 | - Development and testing environments
96 | - Websites and web apps with fluctuating traffic
97 |
98 | ## Private Cloud
99 |
100 | - A Private Cloud is a cloud environment that is exclusively used by a single organization. The infrastructure is either hosted on-premises or managed by a third-party provider, but the services and resources are dedicated solely to one company, offering enhanced control and security.
101 |
102 | - Key Characteristics:
103 |
104 | - Exclusive use: The cloud infrastructure is used solely by a single organization.
105 | - Customizable: Private clouds can be tailored to meet specific security, compliance, and performance requirements.
106 | - Enhanced security: Offers more control over data, as all resources are isolated from other organizations.
107 | - May be on-premises or hosted: Private clouds can be physically hosted at an organization’s own data center or by a third-party provider.
108 |
109 | - Examples:
110 |
111 | - VMware Private Cloud
112 | - IBM Cloud Private
113 | - OpenStack Private Cloud
114 |
115 | - Use Cases:
116 |
117 | - Large enterprises with strict regulatory requirements
118 | - Organizations that need full control over their data and infrastructure
119 | - Companies handling sensitive information (e.g., healthcare, finance)
120 |
121 | ## On-Premises
122 |
123 | - On-Premises refers to the traditional IT infrastructure where all hardware, software, and networking resources are owned, managed, and operated by an organization within its own physical facilities or data centers. Unlike cloud computing, on-premises solutions do not leverage external cloud services.
124 |
125 | - Key Characteristics:
126 |
127 | - Full control: The organization has complete ownership and control over the entire infrastructure.
128 | - Self-managed: All hardware, software, and maintenance are managed in-house, requiring IT expertise and resources.
129 | - Higher upfront cost: On-premises setups typically involve significant initial investments in hardware, software licenses, and data center facilities.
130 | - Customization: Organizations can fully customize the infrastructure to meet specific performance, compliance, and security requirements.
131 |
132 | - Examples:
133 |
134 | - Company-owned data centers
135 | - On-site servers for enterprise applications like SAP or Oracle
136 | - Traditional network and storage solutions
137 |
138 | - Use Cases:
139 |
140 | - Organizations with specific compliance or data sovereignty needs
141 | - Companies with legacy applications not easily moved to the cloud
142 | - Environments requiring low latency and direct access to hardware
143 |
144 | ## Cloud Service Models: IaaS, PaaS, and SaaS
145 |
146 | ## Infrastructure as a Service (IaaS)
147 |
148 | - IaaS provides the most fundamental building blocks for cloud IT. With IaaS, you rent computing infrastructure—such as virtual machines, storage, and networking—over the internet. It offers the highest level of flexibility and control over IT resources.
149 |
150 | - Key Features of IaaS:
151 |
152 | - Virtual Machines: You can create and configure virtual machines as needed.
153 | - Networking: You get access to customizable networks, firewalls, and load balancers.
154 | - Storage: You can use different storage types, including block, object, and file storage.
155 | - Scalability: You can scale the infrastructure up or down as needed.
156 | - Self-Service: You have full control to manage and operate the infrastructure.
157 |
158 | - Use Cases of IaaS:
159 |
160 | - Website Hosting: Hosting websites or web applications.
161 | - Development and Testing: Provision environments for developing and testing applications.
162 | - Big Data Analysis: Run large data analysis workloads.
163 | - Backup and Recovery: Provide backup, recovery, and disaster recovery solutions.
164 |
165 | - Examples of IaaS Providers:
166 |
167 | - Amazon Web Services (AWS) EC2
168 | - Microsoft Azure Virtual Machines
169 | - Google Cloud Platform Compute Engine
170 |
171 | - IaaS Responsibility Model:
172 | - Customer manages: Applications, data, runtime, middleware, and operating system.
173 | - Cloud provider manages: Virtualization, servers, storage, and networking.
174 |
175 | ## Platform as a Service (PaaS)
176 |
177 | - PaaS provides a platform that allows developers to build, test, and deploy applications without managing the underlying infrastructure. The cloud provider takes care of the operating system, middleware, runtime, and even some development tools.
178 |
179 | - Key Features of PaaS:
180 |
181 | - Development Frameworks: Provides frameworks and tools for developers to build applications (e.g., Java, .NET, Python).
182 | - Database Management: Includes managed database services (SQL or NoSQL databases).
183 | - Middleware: Handles integration with messaging services, API management, etc.
184 | - Application Hosting: Automates the deployment and scaling of web applications.
185 |
186 | - Use Cases of PaaS:
187 |
188 | - Application Development: Quickly build and deploy web applications.
189 | - API Development: Develop and host APIs without worrying about infrastructure.
190 | - Mobile App Development: Create backends for mobile apps.
191 | - DevOps: Continuous integration and delivery (CI/CD) environments for testing and deploying applications.
192 |
193 | - Examples of PaaS Providers:
194 |
195 | - Google App Engine (GAE)
196 | - Microsoft Azure App Services
197 | - Heroku
198 | - AWS Elastic Beanstalk
199 |
200 | - PaaS Responsibility Model:
201 |
202 | - Customer manages: Applications and data.
203 | - Cloud provider manages: Runtime, middleware, operating system, servers, storage, and networking.
204 |
205 | ## Software as a Service (SaaS)
206 |
207 | - SaaS delivers software applications over the internet on a subscription basis. SaaS applications are fully managed by the cloud provider, so users don’t need to worry about underlying infrastructure, maintenance, or updates.
208 |
209 | - Key Features of SaaS:
210 |
211 | - Web-Based Access: Access software applications via a web browser.
212 | - Managed Hosting: The provider handles all aspects of hosting and maintaining the software.
213 | - Subscription-Based: Pay on a subscription basis, typically monthly or annually.
214 | - Automatic Updates: The service provider handles all software updates and patches.
215 |
216 | - Use Cases of SaaS:
217 | - Email Services: Business email platforms like Gmail and Microsoft Outlook.
218 | - Collaboration Tools: Tools like Microsoft Teams, Slack, and Google Workspace.
219 | - CRM Systems: Customer relationship management platforms like Salesforce.
220 | - Office Applications: Microsoft Office 365, Google Docs.
221 |
222 | - Examples of SaaS Providers:
223 |
224 | - Google Workspace (Gmail, Docs, Drive)
225 | - Microsoft Office 365
226 | - Salesforce
227 | - Dropbox
228 |
229 | - SaaS Responsibility Model:
230 | - Cloud provider manages everything, including the application, runtime, middleware, operating system, servers, storage, and networking.
231 |
232 | ## Virtualization
233 |
234 | - Virtualization is a technology that allows a single physical machine (server, desktop, etc.) to run multiple virtual environments, called virtual machines (VMs). Each VM operates as if it were a separate computer with its own operating system (OS), applications, and resources, but in reality, they all share the same underlying hardware.
235 |
236 | - Virtualization is achieved through software known as a hypervisor. The hypervisor is responsible for abstracting and allocating the physical resources (CPU, memory, storage) of the host machine to each virtual machine. This enables efficient use of hardware by allowing multiple VMs to run on the same physical machine simultaneously.
237 |
238 | - Key Types of Virtualization:
239 |
240 | - Server Virtualization: Allows multiple server instances to run on one physical server.
241 | Example: VMware, Hyper-V, KVM.
242 |
243 | - Desktop Virtualization: Provides virtual desktops where multiple users can access separate desktop environments on a single machine.
244 | Example: Citrix, VirtualBox.
245 |
246 | - Storage Virtualization: Pools physical storage from multiple devices into what appears to be a single storage device.
247 | Example: SAN (Storage Area Network).
248 |
249 | - Network Virtualization: Abstracts physical network resources to create multiple virtual networks on a single physical network infrastructure.
250 | Example: Software-Defined Networking (SDN).
251 |
252 | ## How Virtualization Works
253 |
254 | - Physical Machine: The hardware (CPU, RAM, storage) on which multiple virtual machines are deployed.
255 | - Hypervisor: A layer of software that sits between the hardware and the virtual machines, responsible for managing and allocating hardware resources to VMs. There are two types of hypervisors:
256 | - Type 1 Hypervisor (Bare Metal): Runs directly on the hardware. Examples: VMware ESXi, Microsoft Hyper-V.
257 | - Type 2 Hypervisor (Hosted): Runs on top of an existing operating system. Examples: VirtualBox, VMware Workstation.
258 | - Virtual Machines: Each VM runs its own OS and applications independently of others. All VMs share the physical hardware but are isolated from each other.
259 |
260 | - Advantages of Virtualization
261 | - Resource Efficiency: Virtualization maximizes the utilization of physical hardware, allowing multiple systems to share resources efficiently.
262 | - Cost Savings: Organizations can run multiple virtual servers or desktops on fewer physical machines, reducing hardware costs and energy consumption.
263 | - Flexibility and Scalability: Virtual machines can be easily created, modified, or deleted, providing greater flexibility to scale infrastructure as needed.
264 | - Isolation: Each VM is isolated from others, which means that if one VM crashes, the others remain unaffected.
265 | - Disaster Recovery: Virtual machines can be backed up and migrated easily, simplifying disaster recovery and failover strategies.
266 |
267 | ## How Virtualization Gave Birth to Cloud Computing
268 |
269 | - Cloud computing builds upon the concept of virtualization. Virtualization is the foundation that makes cloud computing possible by enabling resource sharing, abstraction, and flexible scaling. Here’s how virtualization paved the way for cloud computing:
270 |
271 | 1. Efficient Resource Utilization:
272 | Before virtualization, servers were typically underutilized, running one application per machine, leading to significant waste of computing power. Virtualization allowed multiple applications to run on the same hardware, maximizing efficiency. This same principle is extended in cloud computing, where cloud providers can pool and allocate resources dynamically across multiple clients.
273 |
274 | 2. On-Demand Resource Allocation:
275 | Virtualization enables on-demand provisioning of virtual machines and resources. This dynamic allocation of resources is one of the core principles of cloud computing, where users can request compute, storage, and network resources whenever needed, without having to own the physical infrastructure.
276 |
277 | 3. Scalability and Elasticity:
278 | Virtual machines can be easily scaled up (more resources) or down (fewer resources) based on demand. Cloud computing takes this to the next level by offering elasticity, where resources are automatically adjusted in real-time according to the needs of an application. This is possible because virtualization allows the underlying infrastructure to be flexible and adaptable.
279 |
280 | 4. Abstraction of Physical Hardware:
281 | Virtualization abstracts the physical hardware from the software running on it, which is central to cloud computing. Cloud users do not need to know or manage the physical hardware their applications are running on. The cloud provider uses virtualization to abstract and allocate resources transparently.
282 |
283 | 5. Multi-Tenancy:
284 | Virtualization allows multiple virtual machines (and by extension, multiple users) to share the same physical hardware without interference. This ability to host multiple clients on shared infrastructure is a cornerstone of public cloud services, where cloud providers can serve many customers using shared hardware, while keeping their environments isolated.
285 |
286 | 6. Disaster Recovery and Backup:
287 | Virtualization enables easy backup and migration of virtual machines, simplifying disaster recovery. Cloud providers leverage this feature to offer high availability, redundancy, and failover capabilities without the need for users to manage complex recovery infrastructure.
288 |
289 | 7. Cost Savings via Pay-as-You-Go:
290 | Virtualization reduced the cost of hardware, and cloud computing extended this by allowing users to pay only for the resources they consume. Cloud providers can offer this model because virtualization makes it easy to allocate and track resource usage, enabling the pay-as-you-go pricing that defines cloud services.
291 |
292 | ## Evolution: From Virtualization to Cloud Computing
293 |
294 | - Data Centers: Virtualization started within data centers, where organizations used it to consolidate hardware and reduce costs. They could run multiple applications on fewer machines.
295 |
296 | - Cloud Service Providers: Public cloud providers (AWS, Azure, Google Cloud) built upon virtualization technology to offer cloud services, where organizations no longer need to manage their own physical hardware. They can rent computing resources on-demand.
297 |
298 | - Infrastructure as a Service (IaaS): Virtualization enabled the Infrastructure as a Service (IaaS) model, where users can rent virtual machines, storage, and networks over the internet. Virtual machines in the cloud are essentially the same as those created with local virtualization, but now they can be accessed remotely and scaled dynamically.
299 |
300 | ## Git
301 |
302 | ### What is Git?
303 |
304 | Git is a version control system that tracks changes in your code, allowing you to manage and collaborate on software projects more effectively.
305 |
306 | ### Why Git?
307 |
308 | Git is essential for storing code, tracking changes, and enabling collaboration between multiple developers in a project.
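A minimal first session, assuming `git` is installed locally (the repository path and commit message below are purely illustrative):

```shell
# Create a throwaway repository and record one tracked change
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q .                                # start version-controlling this directory
echo "print('hello')" > app.py
git add app.py                               # stage the new file
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "first commit"              # record the change in history
git log --oneline                            # one line per commit
```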
309 |
310 | ---
311 |
312 | ## Linux
313 |
314 | ### What is Linux OS?
315 |
316 | Linux is a free, open-source operating system known for its security, flexibility, and stability. It is widely used on servers, desktops, and mobile devices.
317 |
318 | ### Basic Linux Commands
319 |
320 | - **pwd**: Prints the current working directory.
321 | - **ls**: Lists files and directories.
322 | - **cd**: Changes the directory.
323 | - **mkdir**: Creates a directory.
324 | - **touch**: Creates a file.
325 | - **cat**: Prints the content of a file.
326 | - **id**: Prints user and group IDs.
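These commands can be tried together in a scratch directory (the paths below are just for illustration):

```shell
mkdir -p /tmp/linux-demo       # mkdir: create a directory
cd /tmp/linux-demo             # cd: change into it
pwd                            # pwd: prints /tmp/linux-demo
touch notes.txt                # touch: create an empty file
echo "hello linux" > notes.txt
cat notes.txt                  # cat: prints the file's content
ls                             # ls: lists notes.txt
id                             # id: prints user and group IDs
```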
327 |
328 | ### Linux File System
329 |
330 | All files and directories in Linux are organized under the root directory (/), which forms the base of the entire file system hierarchy.
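You can see this hierarchy at a glance by listing the root directory itself:

```shell
# Everything on the system lives somewhere under / (e.g. /usr, /etc, /var)
ls /
```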
331 |
332 | ### How to Install Linux OS (Any Distribution)?
333 |
334 | 1. **Bare Metal**: Install Linux OS directly on the hardware.
335 | 2. **Virtualization**: Install Linux OS on a virtual machine using tools like VirtualBox or Hyper-V.
336 | 3. **Cloud**: Install Linux OS on the cloud using providers like AWS, Azure, or GCP.
337 | 4. **Container**: Run a Linux OS userland inside containers using tools like Docker or Kubernetes.
338 |
339 | ### Hosting Apache HTTPD Web Server on Redhat Linux OS
340 |
341 | - Use the below command to install the Apache web server on Linux OS:
342 | ```bash
343 | yum install -y httpd
344 | ```
345 |
346 | - Use the below command to edit the index.html file:
347 | ```bash
348 | cat > /var/www/html/index.html # Then type the HTML content and press Ctrl+D to save
349 | ```
350 |
351 | - Use the below command to start the Apache web server:
352 | ```bash
353 | systemctl start httpd
354 | ```
355 |
356 | - Use the below command to stop the Apache web server:
357 | ```bash
358 | systemctl stop httpd
359 | ```
360 |
--------------------------------------------------------------------------------