├── .gitignore
├── LICENSE
├── README.md
├── ansible_workdir
│   ├── hosts
│   ├── site.yaml
│   ├── templates
│   │   └── default.conf.j2
│   └── vars
│       └── nginx-vars.yaml
├── availability_agent
│   ├── agent.sh
│   └── hosts
├── hello_world_flask
│   ├── .dockerignore
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
├── http_load_test
│   └── locustfile.py
├── jenkins_docker
│   ├── docker-compose.yaml
│   └── jenkins-agent.Dockerfile
├── k8s
│   ├── configmap-demo.yaml
│   ├── deployment-demo-secret-env-var.yaml
│   ├── deployment-demo.yaml
│   ├── fluent-values.yaml
│   ├── hpa-autoscaler-demo.yaml
│   ├── k8s-dashboard-user.yaml
│   ├── liveness-demo.yaml
│   ├── mongo-statefulset.yaml
│   ├── node-selector-demo.yaml
│   ├── pod-demo.yaml
│   ├── pod-troubleshoot.yaml
│   ├── readiness-demo.yaml
│   ├── release-0.8.0.yaml
│   ├── replicaset-demo.yaml
│   ├── resources-demo.yaml
│   ├── secret-demo.yaml
│   ├── service-demo.yaml
│   ├── sidecar-demo.yaml
│   ├── taint-toleration-demo.yaml
│   └── zero_downtime_node
│       ├── Dockerfile
│       ├── app.js
│       └── package.json
├── new_movie_lambda
│   └── app.py
├── nexus_docker
│   └── docker-compose.yaml
├── package_integrity_verification
│   └── Packages
├── roberta_v2
│   ├── .gitignore
│   ├── app.py
│   ├── cache.py
│   └── requirements.txt
├── signature_verification
│   ├── msg1.txt
│   ├── msg2.txt
│   ├── msg3.txt
│   ├── msg4.txt
│   ├── msg5.txt
│   ├── public.key
│   ├── sig1.txt
│   ├── sig2.txt
│   ├── sig3.txt
│   ├── sig4.txt
│   └── sig5.txt
├── simple_linux_socket
│   └── server.c
├── simple_python_server
│   └── app.py
├── theatre_nighout
│   └── init.sh
└── tutorials
    ├── IaC_ansible.md
    ├── IaC_terraform_basics.md
    ├── IaC_terraform_modules.md
    ├── IaC_terraform_variables.md
    ├── artifacts_nexus.md
    ├── aws_api_gateway.md
    ├── aws_dynamodb.md
    ├── aws_ec2.md
    ├── aws_elb_asg.md
    ├── aws_iam.md
    ├── aws_intro.md
    ├── aws_lambda.md
    ├── aws_rds.md
    ├── aws_route53.md
    ├── aws_s3.md
    ├── aws_sqs_sns.md
    ├── aws_vpc.md
    ├── bash_command_techniques.md
    ├── bash_conditional_statements.md
    ├── bash_loops.md
    ├── bash_shells.md
    ├── bash_variables.md
    ├── docker_compose.md
    ├── docker_containers.md
    ├── docker_images.md
    ├── docker_intro.md
    ├── docker_networking.md
    ├── docker_volumes.md
    ├── git_basics.md
    ├── git_branches.md
    ├── git_remotes.md
    ├── jenkins_build_deploy_pipelines.md
    ├── jenkins_setup_and_intro.md
    ├── jenkins_test_pipeline.md
    ├── k8s_argocd.md
    ├── k8s_core_objects.md
    ├── k8s_eks.md
    ├── k8s_helm.md
    ├── k8s_networking.md
    ├── k8s_observability.md
    ├── k8s_pod_design.md
    ├── k8s_setup_and_intro.md
    ├── k8s_statefulset_and_storage.md
    ├── linux_environment_variables.md
    ├── linux_file_management.md
    ├── linux_intro.md
    ├── linux_io_redirection.md
    ├── linux_package_management.md
    ├── linux_processes.md
    ├── milestone_github_actions_ci_cd.md
    ├── milestone_simple_app_deployment.md
    ├── monitoring_and_alerting_elastic_kibana.md
    ├── monitoring_and_alerting_grafana_prometheus.md
    ├── networking_OSI_model.md
    ├── networking_computer_nets.md
    ├── networking_dns.md
    ├── networking_http.md
    ├── networking_linux_sockets.md
    ├── networking_security.md
    ├── networking_ssh.md
    ├── onboarding.md
    └── webservers_nginx.md
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/#use-with-ide
110 | .pdm.toml
111 |
112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
113 | __pypackages__/
114 |
115 | # Celery stuff
116 | celerybeat-schedule
117 | celerybeat.pid
118 |
119 | # SageMath parsed files
120 | *.sage.py
121 |
122 | # Environments
123 | .env
124 | .venv
125 | env/
126 | venv/
127 | ENV/
128 | env.bak/
129 | venv.bak/
130 |
131 | # Spyder project settings
132 | .spyderproject
133 | .spyproject
134 |
135 | # Rope project settings
136 | .ropeproject
137 |
138 | # mkdocs documentation
139 | /site
140 |
141 | # mypy
142 | .mypy_cache/
143 | .dmypy.json
144 | dmypy.json
145 |
146 | # Pyre type checker
147 | .pyre/
148 |
149 | # pytype static type analyzer
150 | .pytype/
151 |
152 | # Cython debug symbols
153 | cython_debug/
154 |
155 | # PyCharm
156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
158 | # and can be added to the global gitignore or merged into this file. For a more nuclear
159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
160 | .idea/
161 |
162 | renumber_rows.py
--------------------------------------------------------------------------------
/ansible_workdir/hosts:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exit-zero-academy/DevOpsTheHardWay/58226c891a2c3b33f1797487f39d37cf8234195a/ansible_workdir/hosts
--------------------------------------------------------------------------------
/ansible_workdir/site.yaml:
--------------------------------------------------------------------------------
1 | - name: Nginx webserver
2 | hosts: webserver
3 | tasks:
4 | - name: Ensure nginx is at the latest version
5 | ansible.builtin.apt:
6 | name: nginx
7 | state: latest
--------------------------------------------------------------------------------
/ansible_workdir/templates/default.conf.j2:
--------------------------------------------------------------------------------
1 | # /etc/nginx/conf.d/default.conf
2 |
3 | server {
4 | listen {{ nginx_listen_port | default(80) }};
5 | server_name {{ server_name }};
6 |
7 | location / {
8 | root {{ document_root }}/html;
9 | index index.html index.htm;
10 | }
11 |
12 | location /poster/ {
13 | root {{ document_root }};
14 | }
15 | }
16 |
--------------------------------------------------------------------------------
/ansible_workdir/vars/nginx-vars.yaml:
--------------------------------------------------------------------------------
1 | nginx_listen_port: 8080
2 | server_name: localhost
3 | document_root: /usr/share/nginx
4 | poster_root: /usr/share/nginx/poster
5 | posters_data_url: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/netflix_movies_poster_img/images.tar.gz
--------------------------------------------------------------------------------
/availability_agent/agent.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | TEST_PERIODICITY=5
4 |
5 | if [ -z "$DB_USERNAME" ]; then
6 |   echo "Error: Environment variable DB_USERNAME is not set."
7 |   exit 1
8 | fi
9 |
10 | if [ -z "$DB_PASSWORD" ]; then
11 |   echo "Error: Environment variable DB_PASSWORD is not set."
12 |   exit 1
13 | fi
14 |
15 | if [ -z "$INFLUXDB_URL" ]; then
16 |   echo "Error: Environment variable INFLUXDB_URL is not set."
17 |   exit 1
18 | fi
19 |
20 | while true
21 | do
22 | while read -r TESTED_HOST
23 | do
24 | RESULT=$(ping -c 1 -W 2 "$TESTED_HOST" | grep -oP '(?<=time=)\d+(\.\d+)?')
25 | TEST_TIMESTAMP=$(date +%s%N)
26 |
27 | if [[ ! -n "$RESULT" ]]
28 | then
29 | RESULT=0
30 | fi
31 |
32 | echo "Test Result for $TESTED_HOST is $RESULT at $TEST_TIMESTAMP"
33 | curl -X POST "$INFLUXDB_URL/write?db=hosts_metrics" -u $DB_USERNAME:$DB_PASSWORD --data-binary "availability_test,host=$TESTED_HOST value=$RESULT $TEST_TIMESTAMP"
34 |
35 | done < hosts
36 |
37 | echo ""
38 | sleep $TEST_PERIODICITY
39 | done
--------------------------------------------------------------------------------
/availability_agent/hosts:
--------------------------------------------------------------------------------
1 | _gateway
2 | 127.0.0.1
3 | google.com
4 |
--------------------------------------------------------------------------------
/hello_world_flask/.dockerignore:
--------------------------------------------------------------------------------
1 | __pycache__/
2 | venv/
3 | .venv/
4 | .git/
5 |
6 | # PyCharm
7 | .idea/
8 |
9 | # VS Code
10 | .vscode/
11 |
12 | requirements.txt
13 |
14 |
--------------------------------------------------------------------------------
/hello_world_flask/Dockerfile:
--------------------------------------------------------------------------------
1 | # This Dockerfile is part of the Dockerfile troubleshooting exercise
2 |
3 | FROM ubuntu:20.04
4 | RUN sudo apt-get install python3
5 | WORKDIR /app
6 | COPY . .
7 | RUN pip3 install -r requirements.txt
8 | CMD "python3 app.py"
9 |
--------------------------------------------------------------------------------
/hello_world_flask/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask
2 | import boto3
3 |
4 | app = Flask(__name__)
5 |
6 |
7 | @app.route("/", methods=['GET'])
8 | def home():
9 | return 'Hello world!'
10 |
11 |
12 | @app.route("/list", methods=['GET'])
13 | def list_buckets():
14 | s3 = boto3.client('s3')
15 | return s3.list_buckets()
16 |
17 | if __name__ == '__main__':
18 | app.run(port=8080, host='0.0.0.0')
19 |
--------------------------------------------------------------------------------
/hello_world_flask/requirements.txt:
--------------------------------------------------------------------------------
1 | flask
2 |
--------------------------------------------------------------------------------
/http_load_test/locustfile.py:
--------------------------------------------------------------------------------
1 | from locust import HttpUser, task
2 |
3 |
4 | class HelloWorldUser(HttpUser):
5 | @task
6 | def hello_world(self):
7 | self.client.get("/api/discover?type=movie&genre=28")
8 |
--------------------------------------------------------------------------------
/jenkins_docker/docker-compose.yaml:
--------------------------------------------------------------------------------
1 | services:
2 | jenkins:
3 | image: jenkins/jenkins:lts-jdk17
4 | ports:
5 | - "8080:8080"
6 | - "50000:50000"
7 | volumes:
8 | - jenkins_home:/var/jenkins_home
9 | restart: on-failure
10 | networks:
11 | - jenkins-net
12 |
13 | volumes:
14 | jenkins_home:
15 |
16 | networks:
17 | jenkins-net:
--------------------------------------------------------------------------------
/jenkins_docker/jenkins-agent.Dockerfile:
--------------------------------------------------------------------------------
1 | FROM jenkins/inbound-agent:latest-jdk17
2 | USER root
3 | RUN apt-get update && apt-get install -y lsb-release
4 | RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
5 | https://download.docker.com/linux/debian/gpg
6 | RUN echo "deb [arch=$(dpkg --print-architecture) \
7 | signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
8 | https://download.docker.com/linux/debian \
9 | $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
10 | RUN apt-get update && apt-get install -y docker-ce-cli
11 | USER jenkins
--------------------------------------------------------------------------------
/k8s/configmap-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/configmap-demo.yaml
2 |
3 | apiVersion: v1
4 | kind: ConfigMap
5 | metadata:
6 | name: nginx-conf
7 | data:
8 |   # This is known as a "file-like" key. In YAML, the "|" after the key allows multi-line values.
9 | default.conf: |
10 | server {
11 | listen 80;
12 | server_name localhost;
13 | location / {
14 | proxy_pass http://netflix-movie-catalog-service:8080;
15 | }
16 | }
--------------------------------------------------------------------------------
/k8s/deployment-demo-secret-env-var.yaml:
--------------------------------------------------------------------------------
1 | # k8s/deployment-demo-secret-env-var.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 3
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: server
21 | image: nginx:1.26.0
22 | env:
23 | - name: NGINX_WORKER_PROCESSES
24 | value: "2"
25 | - name: NG_USERNAME
26 | valueFrom:
27 | secretKeyRef:
28 | name: nginx-creds
29 | key: username
30 | - name: NG_PASSWORD
31 | valueFrom:
32 | secretKeyRef:
33 | name: nginx-creds
34 | key: password
--------------------------------------------------------------------------------
/k8s/deployment-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/deployment-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 3
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: server
21 | image: nginx:1.26.0
22 |
--------------------------------------------------------------------------------
/k8s/fluent-values.yaml:
--------------------------------------------------------------------------------
1 | env:
2 | - name: ELASTIC_URL
3 | value: quickstart-es-http # TODO change according to your elastic service address
4 | - name: ES_USER
5 | value: elastic
6 | - name: ES_PASSWORD
7 | valueFrom:
8 | secretKeyRef:
9 | name: quickstart-es-elastic-user
10 | key: elastic
11 |
12 | config:
13 | outputs: |
14 | [OUTPUT]
15 | Name es
16 | Match kube.*
17 | Host ${ELASTIC_URL}
18 | HTTP_User ${ES_USER}
19 | HTTP_Passwd ${ES_PASSWORD}
20 | Suppress_Type_Name On
21 | Logstash_Format On
22 | Retry_Limit False
23 | tls On
24 | tls.verify Off
25 | Replace_Dots On
26 |
27 | [OUTPUT]
28 | Name es
29 | Match host.*
30 | Host ${ELASTIC_URL}
31 | HTTP_User ${ES_USER}
32 | HTTP_Passwd ${ES_PASSWORD}
33 | Suppress_Type_Name On
34 | Logstash_Format On
35 | Logstash_Prefix node
36 | Retry_Limit False
37 | tls On
38 | tls.verify Off
39 | Replace_Dots On
--------------------------------------------------------------------------------
/k8s/hpa-autoscaler-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/hpa-autoscaler-demo.yaml
2 |
3 | apiVersion: autoscaling/v1
4 | kind: HorizontalPodAutoscaler
5 | metadata:
6 | name: nginx-hpa-demo
7 | spec:
8 | scaleTargetRef:
9 | apiVersion: apps/v1
10 | kind: Deployment
11 | name: nginx
12 | minReplicas: 1
13 | maxReplicas: 10
14 | targetCPUUtilizationPercentage: 50
15 |
--------------------------------------------------------------------------------
/k8s/k8s-dashboard-user.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: admin-user
5 | namespace: kubernetes-dashboard
6 | ---
7 | apiVersion: rbac.authorization.k8s.io/v1
8 | kind: ClusterRoleBinding
9 | metadata:
10 | name: admin-user
11 | roleRef:
12 | apiGroup: rbac.authorization.k8s.io
13 | kind: ClusterRole
14 | name: cluster-admin
15 | subjects:
16 | - kind: ServiceAccount
17 | name: admin-user
18 | namespace: kubernetes-dashboard
19 | ---
20 | apiVersion: v1
21 | kind: Secret
22 | metadata:
23 | name: admin-user
24 | namespace: kubernetes-dashboard
25 | annotations:
26 | kubernetes.io/service-account.name: "admin-user"
27 | type: kubernetes.io/service-account-token
--------------------------------------------------------------------------------
/k8s/liveness-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/liveness-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: server
21 | image: nginx:1.26.0
22 | resources:
23 | requests:
24 | cpu: 100m
25 | memory: 100Mi
26 | limits:
27 | cpu: 200m
28 | memory: 200Mi
29 | livenessProbe:
30 | initialDelaySeconds: 10
31 | httpGet:
32 | path: "/"
33 | port: 80
--------------------------------------------------------------------------------
/k8s/mongo-statefulset.yaml:
--------------------------------------------------------------------------------
1 | # k8s/mongo-statefulset.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: StatefulSet
5 | metadata:
6 | name: mongo
7 | spec:
8 | serviceName: "mongo-service"
9 | replicas: 3
10 | selector:
11 | matchLabels:
12 | app: mongo
13 | template:
14 | metadata:
15 | labels:
16 | app: mongo
17 | spec:
18 | containers:
19 | - name: mongo
20 | image: mongo:5
21 | command:
22 | - mongod
23 | - "--replSet"
24 | - myReplicaSet
25 | - "--bind_ip_all"
26 | ports:
27 | - containerPort: 27017
28 | volumeMounts:
29 | - name: mongo-persistent-storage
30 | mountPath: /data/db
31 | volumeClaimTemplates:
32 | - metadata:
33 | name: mongo-persistent-storage
34 | spec:
35 | storageClassName: standard
36 | accessModes: [ "ReadWriteOnce" ]
37 | resources:
38 | requests:
39 | storage: 2Gi
40 | ---
41 | apiVersion: v1
42 | kind: Service
43 | metadata:
44 | name: mongo-service
45 | labels:
46 | app: mongo
47 | spec:
48 | clusterIP: None
49 | selector:
50 | app: mongo
51 | ports:
52 | - port: 27017
53 | targetPort: 27017
54 |
--------------------------------------------------------------------------------
/k8s/node-selector-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/node-selector-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | nodeSelector:
11 | disk: ssd
12 | replicas: 1
13 | selector:
14 | matchLabels:
15 | app: nginx
16 | template:
17 | metadata:
18 | labels:
19 | app: nginx
20 | spec:
21 | containers:
22 | - name: server
23 | image: nginx:1.26.0
24 | resources:
25 | requests:
26 | cpu: 100m
27 | memory: 100Mi
28 | limits:
29 | cpu: 200m
30 | memory: 200Mi
31 | livenessProbe:
32 | initialDelaySeconds: 10
33 | httpGet:
34 | path: "/"
35 | port: 80
36 | readinessProbe:
37 | initialDelaySeconds: 10
38 | httpGet:
39 | path: "/"
40 | port: 80
41 |
--------------------------------------------------------------------------------
/k8s/pod-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/pod-demo.yaml
2 |
3 | apiVersion: v1
4 | kind: Pod
5 | metadata:
6 | name: nginx
7 | labels:
8 | project: ABC
9 | env: prod
10 | spec:
11 | containers:
12 | - name: server
13 | image: nginx:1.26.0
14 |
--------------------------------------------------------------------------------
/k8s/pod-troubleshoot.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 | name: postgres-deployment
5 | spec:
6 | replicas: 1
7 | selector:
8 | matchLabels:
9 | app: postgres-deployment
10 | template:
11 | metadata:
12 | labels:
13 | app: postgres
14 | spec:
15 | nodeSelector:
16 | disktype: ssd
17 | containers:
18 | - name: postgres-container
19 | image: postgres:11.22-bullseye
20 | resources:
21 | requests:
22 | memory: "5Mi"
23 | cpu: "50"
24 | limits:
25 | memory: "128Mi"
26 | cpu: "100"
27 | livenessProbe:
28 | exec:
29 | command: ["pg_isready", "--username", "postgres"]
30 | initialDelaySeconds: 1
31 | periodSeconds: 2
32 | timeoutSeconds: 5
33 | readinessProbe:
34 | exec:
35 | command: ["pg_isready", "--username", "postgres"]
36 | initialDelaySeconds: 1
37 | periodSeconds: 2
38 | timeoutSeconds: 5
39 | ---
40 | apiVersion: v1
41 | kind: Service
42 | metadata:
43 | name: postgres-service
44 | spec:
45 | selector:
46 | app: postgres
47 | ports:
48 | protocol: HTTP
49 | port: 80
50 | targetPort: 5432
51 |
--------------------------------------------------------------------------------
/k8s/readiness-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/readiness-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: server
21 | image: nginx:1.26.0
22 | resources:
23 | requests:
24 | cpu: 100m
25 | memory: 100Mi
26 | limits:
27 | cpu: 200m
28 | memory: 200Mi
29 | livenessProbe:
30 | initialDelaySeconds: 10
31 | httpGet:
32 | path: "/"
33 | port: 80
34 | readinessProbe:
35 | initialDelaySeconds: 10
36 | httpGet:
37 | path: "/readiness"
38 | port: 80
--------------------------------------------------------------------------------
/k8s/replicaset-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/replicaset-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: ReplicaSet
5 | metadata:
6 | name: nginx-rs
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 3
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: server
21 | image: nginx:1.26.0
22 |
--------------------------------------------------------------------------------
/k8s/resources-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/resources-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: server
21 | image: nginx:1.26.0
22 | resources:
23 | requests:
24 | cpu: 100m
25 | memory: 100Mi
26 | limits:
27 | cpu: 200m
28 | memory: 200Mi
29 | ---
30 | apiVersion: v1
31 | kind: Service
32 | metadata:
33 | name: nginx-service
34 | spec:
35 | selector:
36 | app: nginx
37 | ports:
38 | - name: http
39 | port: 80
40 | targetPort: 80
--------------------------------------------------------------------------------
/k8s/secret-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/secret-demo.yaml
2 |
3 | apiVersion: v1
4 | kind: Secret
5 | metadata:
6 | name: nginx-creds
7 | type: Opaque
8 | data:
9 | username: bmdpbngtdXNlcm5hbWU=
10 | password: Mzk1MjgkdmRnN0pi
11 |
--------------------------------------------------------------------------------
/k8s/service-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/service-demo.yaml
2 |
3 | apiVersion: v1
4 | kind: Service
5 | metadata:
6 | name: nginx-service
7 | spec:
8 | selector:
9 | app: nginx
10 | ports:
11 | - protocol: TCP
12 | port: 8080
13 | targetPort: 80
14 |
--------------------------------------------------------------------------------
/k8s/sidecar-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/sidecar-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: nginx
14 | template:
15 | metadata:
16 | labels:
17 | app: nginx
18 | spec:
19 | containers:
20 | - name: webserver
21 | image: nginx
22 | volumeMounts:
23 | - name: html
24 | mountPath: /usr/share/nginx/html
25 | - name: helper
26 | image: debian
27 | volumeMounts:
28 | - name: html
29 | mountPath: /html
30 | command: ["/bin/sh", "-c"]
31 | args:
32 | - while true; do
33 | date >> /html/index.html;
34 | sleep 1;
35 | done
36 | volumes:
37 | - name: html
38 | emptyDir: { }
39 |
--------------------------------------------------------------------------------
/k8s/taint-toleration-demo.yaml:
--------------------------------------------------------------------------------
1 | # k8s/taint-toleration-demo.yaml
2 |
3 | apiVersion: apps/v1
4 | kind: Deployment
5 | metadata:
6 | name: nginx
7 | labels:
8 | app: nginx
9 | spec:
10 | tolerations:
11 | - key: "gpu"
12 | operator: "Equal"
13 | value: "true"
14 | effect: "NoSchedule"
15 | nodeSelector:
16 | disk: ssd
17 | replicas: 1
18 | selector:
19 | matchLabels:
20 | app: nginx
21 | template:
22 | metadata:
23 | labels:
24 | app: nginx
25 | spec:
26 | containers:
27 | - name: server
28 | image: nginx:1.26.0
29 | resources:
30 | requests:
31 | cpu: 100m
32 | memory: 100Mi
33 | limits:
34 | cpu: 200m
35 | memory: 200Mi
36 | livenessProbe:
37 | initialDelaySeconds: 10
38 | httpGet:
39 | path: "/"
40 | port: 80
41 | readinessProbe:
42 | initialDelaySeconds: 10
43 | httpGet:
44 | path: "/"
45 | port: 80
46 |
--------------------------------------------------------------------------------
/k8s/zero_downtime_node/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM node:14
2 | WORKDIR /usr/src/app
3 | COPY package*.json ./
4 | RUN npm install
5 | COPY . .
6 | ENV DEBUG=express*
7 | EXPOSE 3000
8 | CMD ["npm", "start"]
9 |
--------------------------------------------------------------------------------
/k8s/zero_downtime_node/app.js:
--------------------------------------------------------------------------------
1 | const express = require('express');
2 | const app = express();
3 | const port = 3000;
4 |
5 | // --------------------------------------------------- //
6 | // DO NOT MODIFY THE BELOW CODE SNIPPET
7 |
8 | // This variable indicates server readiness
9 | let ready = false;
10 | // --------------------------------------------------- //
11 |
12 | app.get('/', (req, res) => {
13 | let x = 0.0001;
14 | for (let i = 0; i <= 1000000; i++) {
15 | x += Math.sqrt(x);
16 | }
17 | res.send('OK ');
18 | });
19 |
20 |
21 | app.get('/ready', (req, res) => {
22 | // TODO return status code 200 if server is ready (indicated by the `ready` variable), otherwise 503.
23 | res.send(200);
24 | });
25 |
26 | app.get('/health', (req, res) => {
27 | res.send(200);
28 | });
29 |
30 | // This handler is called whenever k8s sends a SIGTERM to the container, before terminating the Pod
31 | process.on('SIGTERM', () => {
32 | console.log('SIGTERM signal received: closing HTTP server')
33 |
34 | // TODO indicate that the server is not ready, and wait for k8s to stop routing traffic before closing the server.
35 | server.close(() => {
36 | console.log('HTTP server closed')
37 | })
38 | })
39 |
40 |
41 | // This function call sets the `ready` variable to be `true` after 20 seconds of server running
42 | setTimeout(() => {
43 | app.listen(port, '0.0.0.0', () => {
44 | console.log("Server running");
45 | ready = true;
46 | });
47 | }, 20000);
48 |
49 |
50 |
--------------------------------------------------------------------------------
/k8s/zero_downtime_node/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "simple-express-server",
3 | "version": "1.0.0",
4 | "description": "A simple Node.js web server using Express",
5 | "main": "app.js",
6 | "scripts": {
7 | "start": "node app.js"
8 | },
9 | "dependencies": {
10 | "express": "^4.17.1"
11 | }
12 | }
13 |
--------------------------------------------------------------------------------
/new_movie_lambda/app.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import os
4 |
5 | sns = boto3.client('sns')
6 |
7 | topic_arn = os.environ['TOPIC_ARN']
8 |
9 |
10 | def lambda_handler(event, context):
11 | for record in event['Records']:
12 | print('Stream record: ', record)
13 |
14 | if record['eventName'] == 'INSERT':
15 |
16 | movie_image_url = 'https://image.tmdb.org/t/p/original' + record['dynamodb']['NewImage']['poster_path']['S']
17 | movie_title = record['dynamodb']['NewImage']['title']['S']
18 |
19 | html_body = """
20 |
21 |
22 |
23 |
24 |
25 | New Movie Release from Netflix!
26 |
72 |
73 |
74 |
75 | """ + f"""
76 |

77 |
Introducing {movie_title}
78 |
Discover the latest sensation from Netflix!
79 |
Watch Now
80 |
81 |
82 |
83 | """
84 |
85 | message = {
86 | 'Subject': {
87 | 'Data': 'New Movie Released: ' + movie_title
88 | },
89 | 'Body': {
90 | 'Html': {
91 | 'Data': html_body
92 | }
93 | }
94 | }
95 |
96 | params = {
97 | 'Message': json.dumps({'default': json.dumps(message)}),
98 | 'TopicArn': topic_arn,
99 | 'MessageStructure': 'json'
100 | }
101 | try:
102 | response = sns.publish(**params)
103 | print("Results from sending message: ", response)
104 | except Exception as e:
105 | print(f"Unable to send message. Error: {str(e)}")
106 |
107 | return f"Successfully processed {len(event['Records'])} records."
108 |
--------------------------------------------------------------------------------
/nexus_docker/docker-compose.yaml:
--------------------------------------------------------------------------------
1 | services:
2 | nexus:
3 | image: sonatype/nexus3:3.69.0-java11
4 | ports:
5 | - "8081:8081"
6 | volumes:
7 | - nexus-data:/nexus-data
8 | environment:
9 | - INSTALL4J_ADD_VM_PARAMS=-Xms1200m -Xmx1200m -XX:MaxDirectMemorySize=2g
10 | restart: on-failure
11 |
12 | volumes:
13 | nexus-data:
14 |
--------------------------------------------------------------------------------
/package_integrity_verification/Packages:
--------------------------------------------------------------------------------
1 | Package: containerd.io
2 | Architecture: amd64
3 | Version: 1.2.13-2
4 | Priority: optional
5 | Section: devel
6 | Maintainer: Containerd team
7 | Installed-Size: 96972
8 | Provides: containerd, runc
9 | Depends: libc6 (>= 2.14), libseccomp2 (>= 2.4.1)
10 | Conflicts: containerd, runc
11 | Replaces: containerd, runc
12 | Filename: dists/focal/pool/stable/amd64/containerd.io_1.2.13-2_amd64.deb
13 | Size: 21420058
14 | MD5sum: 82a480e21a52caba100623cf534532e2
15 | SHA1: 18bce1e3a4a1cafeb5ef8eafa6766225be195125
16 | SHA256: 96ad73534f896e1d88acdb1f4d6894d94f4f91fc33de8183b571932939a4f740
17 | SHA512: ab86f4e14362b2eef2173b08c71e5d553a3c9621c2732b0c6261b290bdd0804d7dc80625d73fd6de0c43c0fa915d4482e4fb35a5275403c70b293da18ea86c23
18 | Homepage: https://containerd.io
19 | Description: An open and reliable container runtime
20 |
--------------------------------------------------------------------------------
/roberta_v2/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/#use-with-ide
110 | .pdm.toml
111 |
112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
113 | __pypackages__/
114 |
115 | # Celery stuff
116 | celerybeat-schedule
117 | celerybeat.pid
118 |
119 | # SageMath parsed files
120 | *.sage.py
121 |
122 | # Environments
123 | .env
124 | .venv
125 | env/
126 | venv/
127 | ENV/
128 | env.bak/
129 | venv.bak/
130 |
131 | # Spyder project settings
132 | .spyderproject
133 | .spyproject
134 |
135 | # Rope project settings
136 | .ropeproject
137 |
138 | # mkdocs documentation
139 | /site
140 |
141 | # mypy
142 | .mypy_cache/
143 | .dmypy.json
144 | dmypy.json
145 |
146 | # Pyre type checker
147 | .pyre/
148 |
149 | # pytype static type analyzer
150 | .pytype/
151 |
152 | # Cython debug symbols
153 | cython_debug/
154 |
155 | # PyCharm
156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
158 | # and can be added to the global gitignore or merged into this file. For a more nuclear
159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
160 | #.idea/
161 |
--------------------------------------------------------------------------------
/roberta_v2/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, request
2 | from transformers import pipeline
3 | import textstat
4 | from cache import get_from_cache, put_in_cache
5 |
6 | pipe = pipeline("text-classification", model="./roberta-base-go_emotions")
7 |
8 | app = Flask(__name__)
9 |
10 |
11 | @app.route('/readability')
12 | def readability():
13 | text = request.args.get('text')
14 | return textstat.flesch_kincaid_grade(text)
15 |
16 |
17 | @app.route('/analyze')
18 | def analyze():
19 | text = request.args.get('text')
20 | result = get_from_cache(text)
21 |
22 | if not result:
23 | result = pipe(text)
24 | put_in_cache(text, result)
25 | return result
26 |
27 |
28 | if __name__ == "__main__":
29 | app.run(host='0.0.0.0', port=8081)
30 |
--------------------------------------------------------------------------------
/roberta_v2/cache.py:
--------------------------------------------------------------------------------
1 | from collections import OrderedDict
2 |
3 | cache = OrderedDict()
4 | cache_max_size = 500
5 |
6 |
7 | def get_from_cache(key):
8 | if key in cache:
9 | value = cache.pop(key)
10 | cache[key] = value
11 | return value
12 | return None
13 |
14 |
15 | def put_in_cache(key, value):
16 | if len(cache) >= cache_max_size:
17 | cache.popitem(last=False)
18 | cache[key] = value
--------------------------------------------------------------------------------
/roberta_v2/requirements.txt:
--------------------------------------------------------------------------------
1 | flask
2 | pyyaml
3 | loguru
4 | transformers==4.31.0
5 | torch==2.0.1
6 | textstat==0.7.3
7 |
--------------------------------------------------------------------------------
/signature_verification/msg1.txt:
--------------------------------------------------------------------------------
1 | Love
--------------------------------------------------------------------------------
/signature_verification/msg2.txt:
--------------------------------------------------------------------------------
1 | is
--------------------------------------------------------------------------------
/signature_verification/msg3.txt:
--------------------------------------------------------------------------------
1 | in
--------------------------------------------------------------------------------
/signature_verification/msg4.txt:
--------------------------------------------------------------------------------
1 | the
--------------------------------------------------------------------------------
/signature_verification/msg5.txt:
--------------------------------------------------------------------------------
1 | Air
--------------------------------------------------------------------------------
/signature_verification/public.key:
--------------------------------------------------------------------------------
1 | -----BEGIN PUBLIC KEY-----
2 | MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD1strCgH/I6/uASypgs6RGqA1s
3 | SERe4x+WZvfOOtZ1OE6CXsGTEdZZ9g3tQtEe8NurWLYvdVHIP+gP6UL+QGMBYZOr
4 | y3ooFSUX4gF9gwoouWyIm/RKUkkyP5UJSn+NKN4athYkdCId73Xfs3RUp7Q+RudS
5 | QlK8OFSMh/yzyiNOdwIDAQAB
6 | -----END PUBLIC KEY-----
7 |
--------------------------------------------------------------------------------
/signature_verification/sig1.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exit-zero-academy/DevOpsTheHardWay/58226c891a2c3b33f1797487f39d37cf8234195a/signature_verification/sig1.txt
--------------------------------------------------------------------------------
/signature_verification/sig2.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exit-zero-academy/DevOpsTheHardWay/58226c891a2c3b33f1797487f39d37cf8234195a/signature_verification/sig2.txt
--------------------------------------------------------------------------------
/signature_verification/sig3.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exit-zero-academy/DevOpsTheHardWay/58226c891a2c3b33f1797487f39d37cf8234195a/signature_verification/sig3.txt
--------------------------------------------------------------------------------
/signature_verification/sig4.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exit-zero-academy/DevOpsTheHardWay/58226c891a2c3b33f1797487f39d37cf8234195a/signature_verification/sig4.txt
--------------------------------------------------------------------------------
/signature_verification/sig5.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/exit-zero-academy/DevOpsTheHardWay/58226c891a2c3b33f1797487f39d37cf8234195a/signature_verification/sig5.txt
--------------------------------------------------------------------------------
/simple_linux_socket/server.c:
--------------------------------------------------------------------------------
1 | // compile by `gcc -o server server.c`
2 |
3 | #include <stdio.h>
4 | #include <stdlib.h>
5 | #include <strings.h>
6 | #include <unistd.h>
7 | #include <sys/types.h>
8 | #include <sys/socket.h>
9 | #include <netinet/in.h>
10 | #include <arpa/inet.h>
11 |
12 | #define MY_PORT 9999
13 | #define MAX_BUF 1024
14 |
15 | int main()
16 | {
17 | int sockfd;
18 | struct sockaddr_in self;
19 | char buffer[MAX_BUF];
20 |
21 | // To create a socket for networking communication. A new socket by itself is not particularly useful
22 | sockfd = socket(AF_INET, SOCK_STREAM, 0);
23 |
24 | /** Initialize address/port structure */
25 | bzero(&self, sizeof(self));
26 | self.sin_family = AF_INET;
27 | self.sin_port = htons(MY_PORT);
28 | self.sin_addr.s_addr = INADDR_ANY;
29 |
30 | // The bind call associates an abstract socket with an actual network interface and port
31 | bind(sockfd, (struct sockaddr*)&self, sizeof(self));
32 |
33 | // The listen call specifies the queue size for the number of incoming, unhandled connections
34 | listen(sockfd, 40);
35 |
36 | /** Server run continuously */
37 | while (1)
38 | { int clientfd;
39 | struct sockaddr_in client_addr;
40 | int addrlen=sizeof(client_addr);
41 |
42 | /** accept an incomming connection */
43 | clientfd = accept(sockfd, (struct sockaddr*)&client_addr, &addrlen);
44 | printf("%s:%d connected\n", inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port));
45 |
46 | /** print the received data to the client */
47 | int read_bytes = read(clientfd, buffer, MAX_BUF);
48 | printf("Got client message: %s\n", buffer);
49 | write(clientfd, buffer, read_bytes);
50 |
51 | /** Close data connection */
52 | close(clientfd);
53 | }
54 |
55 | /** Clean up */
56 | close(sockfd);
57 | return 0;
58 | }
--------------------------------------------------------------------------------
/simple_python_server/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, jsonify
2 | import signal
3 | import threading
4 | import time
5 | import os
6 |
7 | app = Flask(__name__)
8 |
9 | shutdown_flag = threading.Event()
10 |
11 |
12 | @app.route('/')
13 | def home():
14 | return "Hi there!"
15 |
16 |
17 | @app.route('/ready')
18 | def ready():
19 | if not shutdown_flag.is_set():
20 | return jsonify({"status": 'Ready'}), 200
21 | else:
22 | return jsonify({"status": 'NotReady'}), 503
23 |
24 |
25 | def shutdown_server():
26 | shutdown_flag.set()
27 |
28 | print('Handling the last server requests...')
29 | time.sleep(30)
30 |     print('Server now should not receive any new incoming requests')
31 |
32 | print('Disconnecting from database..')
33 | time.sleep(3)
34 | print('Performing other cleanup tasks...')
35 | time.sleep(7)
36 | os._exit(0)
37 |
38 |
39 | def handle_sigterm(signum, frame):
40 | print(f"SIGTERM received. {signum} {frame}. Shutting down in 30 seconds...")
41 | threading.Thread(target=shutdown_server).start()
42 |
43 |
44 | if __name__ == "__main__":
45 | signal.signal(signal.SIGTERM, handle_sigterm)
46 | print(f'Server PID={os.getpid()}')
47 |
48 | app.run(host='0.0.0.0', port=8080)
49 |
--------------------------------------------------------------------------------
/theatre_nighout/init.sh:
--------------------------------------------------------------------------------
1 | rm -f -r shows
2 |
3 | for i in 'The Lion King' 'Hamilton' 'Wicked' 'Les Miserables' 'Phantom of the Opera'
4 | do
5 | mkdir -p shows/"$i"
6 | for j in $(seq 1 50)
7 | do
8 | touch shows/"$i"/"$j"
9 | done
10 | done
--------------------------------------------------------------------------------
/tutorials/IaC_terraform_modules.md:
--------------------------------------------------------------------------------
1 | # Terraform Modules
2 |
3 | ## Modules
4 |
5 | Modules help you package and reuse Terraform resources and configurations.
6 | A module is a collection of `.tf` files kept together in a directory, so that it can be reused across multiple configurations.
7 |
8 |
9 | ## Example: using the AWS VPC module
10 |
11 | Let's say you want to provision a VPC in your AWS account.
12 |
13 | A complete VPC requires provisioning many different resources (VPC, subnets, route tables, gateways, etc.).
14 | But someone has already done this work before, so why not reuse it?
15 |
16 | To use the [VPC module](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws), add the following block to your `main.tf` file:
17 |
18 | ```terraform
19 | module "netflix_app_vpc" {
20 | source = "terraform-aws-modules/vpc/aws"
21 | version = "5.8.1"
22 |
23 | name = ""
24 | cidr = "10.0.0.0/16"
25 |
26 | azs = ["", "", "..."]
27 | private_subnets = ["", ""]
28 | public_subnets = ["", ""]
29 |
30 | enable_nat_gateway = false
31 |
32 | tags = {
33 | Env = var.env
34 | }
35 | }
36 | ```
37 |
38 | Make sure you specify the list of `azs` (availability zones), the VPC name, and the subnet CIDRs according to your region.
39 |
40 | Before you apply, you have to `terraform init` your workspace first in order to download the module files.
41 |
42 | Apply and inspect your VPC in AWS Console.
43 |
44 | How does it work?
45 |
46 | A module is essentially a folder containing Terraform configuration files (after `terraform init` you'll find the files under `.terraform/modules/netflix_app_vpc`).
47 | The module used in this example contains configuration files to build a generic VPC in AWS.
48 |
49 | By using the `module` block (as you've done), you provide your own values to the module's configuration files, like CIDRs, number of subnets, etc.
50 | Every entry in the `module.netflix_app_vpc` block (e.g. `name`, `cidr`, `azs`, etc.) is actually **defined as a variable** in the module directory (found under `.terraform/modules/netflix_app_vpc/variables.tf`).
51 | So, when you apply, Terraform takes the entries from the `module` block and assigns their values to the corresponding variables in the module files.
52 |
53 | To review the full list of possible input entries, visit the [Terraform Registry page for the VPC module](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws).
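
For illustration, here is a simplified, hedged sketch of what such a variable definition looks like inside the module directory (the exact definition in the published module may differ):

```terraform
# Sketch of .terraform/modules/netflix_app_vpc/variables.tf (simplified)
variable "cidr" {
  description = "The IPv4 CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}
```

The `cidr = "10.0.0.0/16"` entry in your `module` block simply assigns a value to this variable.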
54 |
55 | ### Outputs
56 |
57 | The next obvious step is to migrate our `aws_instance.netflix_app` instance into the created VPC.
58 |
59 | To do so, add the following property to the `aws_instance.netflix_app` resource:
60 |
61 | ```diff
62 | + subnet_id = module.netflix_app_vpc.public_subnets[0]
63 | ```
64 |
65 | As can be seen, `module.netflix_app_vpc.public_subnets` references the list of public subnet IDs created by our module.
66 |
67 | Attributes like `module.netflix_app_vpc.public_subnets` (and `module.netflix_app_vpc.vpc_id`) are known as module **outputs**.
68 | After Terraform applies the configuration, the outputs from the module can be used in other resources in your configuration files.
69 |
70 | If you take a closer look at `.terraform/modules/netflix_app_vpc/outputs.tf`, you'll see how outputs are defined:
71 |
72 | ```terraform
73 | output "vpc_id" {
74 | description = "The ID of the VPC"
75 | value = aws_vpc.this[0].id
76 | }
77 | ```
78 |
79 | To create your security group within your VPC, add the following property to the `aws_security_group.netflix_app_sg` resource:
80 |
81 | ```diff
82 | + vpc_id = module.netflix_app_vpc.vpc_id
83 | ```
84 |
85 | [Terraform outputs](https://developer.hashicorp.com/terraform/language/values/outputs) allow you to export structured data about your resources,
86 | either to share data from a child module with your root module (as in our case), or to be used in other parts of your system.
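
For example, a minimal sketch of re-exporting the module's output from your root module (assuming an `outputs.tf` file in your root directory), so the value is printed after `terraform apply`:

```terraform
# outputs.tf (root module) - surfaces the child module's output
output "netflix_app_vpc_id" {
  description = "The ID of the VPC created by the netflix_app_vpc module"
  value       = module.netflix_app_vpc.vpc_id
}
```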
87 |
88 |
89 | > [!NOTE]
90 | > Every Terraform project has at least one module, known as its **root module**, which consists of the resources defined in the `.tf` files in the main working directory.
91 |
92 |
93 | ### Data Sources
94 |
95 | Data sources allow Terraform to use information defined outside your configuration files.
96 | A data source fetches information from cloud provider APIs, such as disk image IDs, availability zones, etc.
97 |
98 | You will use the [`aws_availability_zones`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) data source (which is part of the AWS provider) to configure your VPC's Availability Zones (AZs) dynamically according to the region you work on.
99 | That way you can make your configuration files more modular, since AZs values would not be hard-coded, but fetched dynamically.
100 |
101 | List the AZs that can be accessed by your AWS account within the region configured in the provider:
102 |
103 | ```terraform
104 | data "aws_availability_zones" "available_azs" {
105 | state = "available"
106 | }
107 | ```
108 |
109 | Change the following attribute in `netflix_app_vpc` module:
110 |
111 | ```text
112 | - azs = ["", "", ...]
113 | + azs = data.aws_availability_zones.available_azs.names
114 | ```
115 |
116 | Plan and apply.
117 |
118 |
119 | The `aws_instance.netflix_app` configuration also uses a hard-coded AMI ID, which is only valid for the specific region.
120 | Use an `aws_ami` data source to load the correct AMI ID for the current region.
121 |
122 | Add the following `aws_ami` data source to fetch AMIs from AWS API:
123 |
124 | ```terraform
125 | data "aws_ami" "ubuntu_ami" {
126 | most_recent = true
127 | owners = ["099720109477"] # Canonical owner ID for Ubuntu AMIs
128 |
129 | filter {
130 | name = "name"
131 | values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
132 | }
133 | }
134 | ```
135 |
136 | Replace the hard-coded AMI ID with the one loaded from the new data source.
137 |
138 | ```text
139 | - ami = ""
140 | + ami = data.aws_ami.ubuntu_ami.id
141 | ```
142 |
143 | Add the following output in `outputs.tf`:
144 |
145 | ```terraform
146 | output "netflix_app_ami" {
147 |   description = "ID of the EC2 instance AMI"
148 |   value       = data.aws_ami.ubuntu_ami.id
149 | }
150 | ```
151 |
152 | Plan and apply.
153 |
154 |
155 | # Exercises
156 |
157 | ### :pencil2: Build the Netflix app module
158 |
159 | In this exercise, you will create a Terraform module to provision the infrastructure required to deploy the Netflix app (EC2 instances, security groups, VPCs, S3 buckets, and key pairs).
160 |
161 | In the root directory of your Terraform project create the following file structure:
162 |
163 | ```text
164 | modules/
165 | └── netflix-app/
166 | ├── main.tf
167 | ├── variables.tf
168 | ├── outputs.tf
169 | └── deploy.sh
170 | ```
171 |
172 | The module should receive the following inputs:
173 |
174 | - `aws_region`
175 | - `vpc_cidr`
176 | - `subnet_cidr`
177 | - `availability_zone`
178 | - `ami_id`
179 | - `instance_type`
180 | - `key_name`
181 | - `public_key_path`
182 | - `bucket_name`
183 |
184 | The module should provide the following outputs:
185 |
186 | - `instance_id`
187 | - `bucket_name`
188 |
189 | Example for usage from `main.tf` of the root module:
190 |
191 | ```terraform
192 | module "netflix_app" {
193 | source = "./modules/netflix-app"
194 |
195 | aws_region = "us-west-2"
196 | vpc_cidr = "10.0.0.0/16"
197 | subnet_cidr = "10.0.1.0/24"
198 | availability_zone = "us-west-2a"
199 | ami_id = "ami-0123456789abcdef0"
200 | instance_type = "t2.micro"
201 | key_name = "my-key-pair"
202 | public_key_path = "~/.ssh/id_rsa.pub"
203 | bucket_name = "my-netflix-app-bucket"
204 | }
205 | ```
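
As a starting point, a possible skeleton for the module's `variables.tf` simply declares the inputs listed above (a sketch; types, descriptions and defaults are up to you):

```terraform
# modules/netflix-app/variables.tf - declaration skeleton (sketch)
variable "aws_region"        { type = string }
variable "vpc_cidr"          { type = string }
variable "subnet_cidr"       { type = string }
variable "availability_zone" { type = string }
variable "ami_id"            { type = string }
variable "instance_type"     { type = string }
variable "key_name"          { type = string }
variable "public_key_path"   { type = string }
variable "bucket_name"       { type = string }
```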
206 |
207 | ### :pencil2: Build a cloud-agnostic app
208 |
209 | Let's say you have to be able to launch your app in **different clouds** (AWS, Azure, GCP).
210 | For simplicity, assume that all your cloud infrastructure is a single VM.
211 | How can you utilize Terraform modules to create cloud-agnostic configurations?
212 |
213 | Here's an example that demonstrates how to achieve this.
214 |
215 | Create the following dir structure:
216 |
217 | ```text
218 | cloud-vm/
219 | ├── modules/
220 | │ ├── aws/
221 | │ │ ├── main.tf
222 | │ │ ├── outputs.tf
223 | │ │ ├── variables.tf
224 | │ ├── gcp/
225 | │ │ ├── main.tf
226 | │ │ ├── outputs.tf
227 | │ │ ├── variables.tf
228 | ├── main.tf
229 | ├── outputs.tf
230 | ├── providers.tf
231 | ├── variables.tf
232 | ```
233 |
234 | The `main.tf` and `providers.tf` **skeletons** may look like:
235 |
236 | ```terraform
237 | # main.tf
238 |
239 | module "vm" {
240 | source = "./modules/${var.cloud_provider}"
241 |
242 | vm_name = var.vm_name
243 | vm_size = var.vm_size
244 | region = var.region
245 | }
246 | ```
247 |
248 | And
249 |
250 | ```terraform
251 | # providers.tf
252 |
253 | provider "aws" {
254 | region = var.region
255 | }
256 |
257 | provider "google" {
258 | project = var.project_id
259 | region = var.region
260 | }
261 | ```
262 |
263 | And applying the infrastructure in a certain cloud can be done by:
264 |
265 | ```bash
266 | terraform init
267 | terraform apply -var="cloud_provider=aws" -var="region=us-east-1"
268 | ```
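
To complement the skeleton, a hedged sketch of the root `variables.tf` implied by the snippets above (the names `cloud_provider`, `vm_name`, `vm_size`, `region` and `project_id` are taken from the skeleton; adjust as needed):

```terraform
# variables.tf (root module) - sketch only
variable "cloud_provider" {
  description = "Which cloud module to use: aws or gcp"
  type        = string
  default     = "aws"
}

variable "vm_name" { type = string }
variable "vm_size" { type = string }
variable "region"  { type = string }

variable "project_id" {
  description = "GCP project ID (only needed by the google provider)"
  type        = string
  default     = ""
}
```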
269 |
270 | Complete the remaining configuration files and make it work (since we don't have a GCP account, no need to apply there).
271 |
272 |
--------------------------------------------------------------------------------
/tutorials/IaC_terraform_variables.md:
--------------------------------------------------------------------------------
1 | # Terraform variables
2 |
3 | ## Backend configurations
4 |
5 | Let's store the `terraform.tfstate` file in an appropriate place: a dedicated S3 bucket.
6 |
7 | A **Backend** defines where Terraform stores its [state](https://www.terraform.io/language/state) data files.
8 | This lets multiple people access the state data and work together on that collection of infrastructure resources.
9 | When changing backends, Terraform will give you the option to **migrate** your state to the new backend.
10 | This lets you adopt backends without losing any existing state.
11 |
12 | Always back up your state by enabling bucket versioning!
13 |
14 | 1. Create a dedicated S3 bucket to store your state files.
15 | 2. To configure a backend, add a nested `backend` block within the top-level `terraform` block. The following example configures the `s3` backend:
16 | ```terraform
17 | terraform {
18 |
19 | ...
20 |
21 | backend "s3" {
22 | bucket = ""
23 | key = "tfstate.json"
24 | region = ""
25 | # optional: dynamodb_table = ""
26 | }
27 |
28 | ...
29 |
30 | }
31 | ```
32 | 3. Apply the changes and make sure the state is stored in S3.
33 |
34 | This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the `dynamodb_table` field to an existing DynamoDB table name.
35 | The table must have a partition key named `LockID` with type of `String`.
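
After editing the `backend` block, re-initialize so Terraform can migrate the existing local state into the bucket. A hedged sketch (the lock table name below is a placeholder you choose):

```bash
# Re-initialize; Terraform offers to copy the existing state to the new backend
terraform init -migrate-state

# Optionally create the DynamoDB lock table referenced by dynamodb_table
aws dynamodb create-table \
  --table-name terraform-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```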
36 |
37 | ## Variables
38 |
39 | So far, the `main.tf` configuration file included some hard-coded values.
40 | [Terraform variables](https://developer.hashicorp.com/terraform/language/values/variables) allow you to write configuration that is flexible and easier to re-use for different environments and potentially different regions.
41 |
42 | Our goal is to provision our EC2 instance (and the other resources you created in the last tutorial) for different environments and AWS regions.
43 |
44 | 1. In the workspace repo, create a new file called `variables.tf` with blocks defining the following variables:
45 | ```terraform
46 | variable "env" {
47 | description = "Deployment environment"
48 | type = string
49 | }
50 |
51 | variable "region" {
52 | description = "AWS region"
53 | type = string
54 | }
55 |
56 | variable "ami_id" {
57 | description = "EC2 Ubuntu AMI"
58 | type = string
59 | }
60 | ```
61 | 2. In `main.tf`, update the `aws_instance.netflix_app` resource block to use the new variable.
62 | ```diff
63 | resource "aws_instance" "netflix_app" {
64 | - ami = ""
65 | + ami = var.ami_id
66 | instance_type = "t2.micro"
67 |
68 | tags = {
69 | - Name = ""
70 | + Name = "-${var.env}"
71 | }
72 | }
73 | ```
74 |
75 | In addition, update the `provider` block, as follows:
76 |
77 | ```diff
78 | provider "aws" {
79 | - region = ""
80 | + region = var.region
81 | }
82 | ```
83 |
84 | 3. Plan and apply the configurations by:
85 |
86 | ```bash
87 | terraform apply -var region= -var ami_id= -var env=dev
88 | ```
89 |
90 | Fill in the `region` and `ami_id` values according to your setup.
91 |
92 | ## The `tfvars` file
93 |
94 | As you can imagine, a typical Terraform project has many variables.
95 | Should we use the `-var` flag in the `terraform apply` command to set each one of the variable values?
96 | No, that's where the `.tfvars` file comes in.
97 |
98 | This file holds the **values** for the variables, while `variables.tf` **defines** what the variables are.
99 |
100 | 1. Create a `region..dev.tfvars` file (replacing the missing part with your current AWS region, e.g. `region.us-east-1.dev.tfvars`), as follows:
101 |
102 | ```text
103 | env = "dev"
104 | region =
105 | ami_id =
106 | ```
107 | 2. Now you can apply the configurations by:
108 | ```bash
109 | terraform apply -var-file region..dev.tfvars
110 | ```
111 |
112 | 3. Now create a `.tfvars` file with corresponding values per environment, per region. For example:
113 |
114 | ```text
115 | my_tf_repo/
116 | ├── main.tf # Main configuration file
117 | ├── variables.tf
118 | ├── region.us-east-1.dev.tfvars # Values related for us-east-1 region
119 | └── region.eu-central-1.prod.tfvars # Values related for eu-central-1 region
120 | ```
121 |
122 | Soon you'll see how to use it properly.
123 |
124 | ## Terraform workspaces
125 |
126 | How can the same configuration files be applied for multiple AWS regions?
127 | Obviously, each region or environment should have a separate `.tfstate` file for storing state data (why???).
128 |
129 | [Terraform workspaces](https://developer.hashicorp.com/terraform/cli/workspaces) can help you to easily manage multiple sets of resources, originated from the same `.tf` configuration files.
130 |
131 | All you have to do is create a workspace per region, per env. For example:
132 |
133 | ```bash
134 | terraform workspace new us-east-1.dev
135 | ```
136 |
137 | And when you want to apply the configuration for us-east-1 dev, run:
138 |
139 | ```bash
140 | terraform workspace select us-east-1.dev
141 | terraform apply -var-file region.us-east-1.dev.tfvars
142 | ```
143 |
144 | **Note**: The `tfstate` files of all regions will be stored in the same S3 bucket.
145 |
146 | 1. Create a dedicated workspace per region-env.
147 | 2. Apply the configurations.
148 | 3. Take a look at the separate `.tfstate` files in the S3 bucket.
149 |
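As a rough sketch (assuming the two example `.tfvars` files shown above), managing both region-envs could look like:

```bash
# Create one workspace per region-env (one-time setup)
terraform workspace new us-east-1.dev
terraform workspace new eu-central-1.prod

# List the existing workspaces; the active one is marked with *
terraform workspace list

# Apply to dev in us-east-1
terraform workspace select us-east-1.dev
terraform apply -var-file region.us-east-1.dev.tfvars

# Apply to prod in eu-central-1
terraform workspace select eu-central-1.prod
terraform apply -var-file region.eu-central-1.prod.tfvars
```
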
150 |
151 | ## Secrets and sensitive data
152 |
153 | How can Terraform handle sensitive data?
154 |
155 | Let's say you want to create a secret in [AWS Secret Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html):
156 |
157 | ```terraform
158 | resource "aws_secretsmanager_secret" "bot_token" {
159 | name = ""
160 | }
161 |
162 | resource "aws_secretsmanager_secret_version" "bot_token" {
163 |   secret_id     = aws_secretsmanager_secret.bot_token.id
164 | secret_string = "1234528664:AAEUHt47XsoPkQRqIBA0EYxaEGQdKtGoLtM"
165 | }
166 | ```
167 |
168 | Obviously, the above configuration can't be committed as part of your source code.
169 | For that, you'll utilize [Sensitive variables](https://developer.hashicorp.com/terraform/tutorials/configuration-language/sensitive-variables).
170 |
171 |
172 | ```terraform
173 | variable "secret_name" {
174 | description = "The name of the secret"
175 | type = string
176 | default = ""
177 | }
178 |
179 | variable "secret_value" {
180 | description = "The value of the secret"
181 | type = string
182 | sensitive = true
183 | }
184 |
185 |
186 | resource "aws_secretsmanager_secret" "bot_token" {
187 | name = var.secret_name
188 | }
189 |
190 | resource "aws_secretsmanager_secret_version" "bot_token" {
191 |   secret_id     = aws_secretsmanager_secret.bot_token.id
192 | secret_string = var.secret_value
193 | }
194 | ```
195 |
196 | If you were to run `terraform apply` now, Terraform would prompt you for a value for the `secret_value` variable, since you haven't assigned it one.
197 |
198 | But sometimes you can't enter the value manually (e.g. as part of CI/CD automation).
199 | So you use the `-var` flag: `terraform apply -var="secret_value=1234528664:AAEUHt47XsoPkQRqIBA0EYxaEGQdKtGoLtM"`.
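
Another common option in CI/CD is to pass the value through an environment variable prefixed with `TF_VAR_`, which Terraform picks up automatically as an input variable value (the variable name below matches the `secret_value` variable defined above; in a real pipeline the value would come from the CI system's secret store):

```bash
# Terraform reads TF_VAR_<name> environment variables as values for input variables.
# In a real pipeline the value would be injected from the CI secret store, not typed inline.
export TF_VAR_secret_value="1234528664:AAEUHt47XsoPkQRqIBA0EYxaEGQdKtGoLtM"
terraform apply
```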
200 |
201 | # Exercises
202 |
203 | ### :pencil2: Simple CI/CD pipeline for Terraform
204 |
205 | 1. In your repo, under `.github/workflows/infra.yaml`, create a simple GitHub Actions workflow YAML.
206 | 2. Configure the pipeline to provision the infrastructure in different regions and environments upon every commit and push of changes to the configuration files.
207 | 3. Test your pipelines.
208 |
209 |
210 |
--------------------------------------------------------------------------------
/tutorials/aws_dynamodb.md:
--------------------------------------------------------------------------------
1 | # DynamoDB
2 |
3 | Let's take a closer look at the [NetflixMovieCatalog][NetflixMovieCatalog] service.
4 | Currently, the service provides the movie catalog content from JSON files located in the source code.
5 | Obviously, this is not a good approach, since we want the content to be stored and retrieved dynamically from a proper database engine.
6 |
7 | In this tutorial you'll configure the [NetflixMovieCatalog][NetflixMovieCatalog] app to retrieve content from a DynamoDB table.
8 |
9 | ## Create a table
10 |
11 | 1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/)
12 | 2. In the navigation pane on the left side of the console, choose **Dashboard**.
13 | 3. On the right side of the console, choose **Create Table**.
14 | 4. Enter the table details as follows:
15 | 1. For the table name, enter a unique table name.
16 | 2. For the partition key, enter `id`, type **Number**.
17 | 3. Enter `title` as the sort key, type **String**.
18 | 4. Choose **Customize settings**.
19 | 5. Under **Read/write capacity settings**, choose **Provisioned** mode with auto scaling, a minimum capacity of **1**, and a maximum of **10**.
20 | 5. Choose **Create** to create the table.
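
For reference, an equivalent table definition via the AWS CLI might look roughly like the sketch below (the table name and capacity values are placeholders, and this sketch omits the auto scaling setup that the console flow configures):

```bash
# Create a table with a numeric partition key `id` and a string sort key `title`.
# Replace my-movies-table with your own unique table name.
aws dynamodb create-table \
  --table-name my-movies-table \
  --attribute-definitions AttributeName=id,AttributeType=N AttributeName=title,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH AttributeName=title,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
```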
21 |
22 | ## Write data using Python and `boto3`
23 |
24 | Try the below script to write an item to your DynamoDB table:
25 |
26 | ```python
27 | import boto3
28 |
29 | client = boto3.client('dynamodb')
30 | item = {
31 | "adult": {'BOOL': False},
32 | "backdrop_path": {'S': "/tdkCqOQ87ns39bWtzjJYsGTloH9.jpg"},
33 | "genre_ids": {'NS': ["28", "80", "9648", "53"]},
34 | "id": {'N': "996154"},
35 | "original_language": {'S': "en"},
36 | "original_title": {'S': "Black Lotus"},
37 | "overview": {'S': "An ex-special forces operative wages a one man war through the streets of Amsterdam to rescue his friend's daughter from the local crime syndicate."},
38 | "popularity": {'N': "1070.023"},
39 | "poster_path": {'S': "/y3AeW200hqGLxoPyHMDHpzudylz.jpg"},
40 | "release_date": {'S': "2023-04-12"},
41 | "title": {'S': "Black Lotus"},
42 | "video": {'BOOL': False},
43 | "vote_average": {'N': "6.559"},
44 | "vote_count": {'N': "85"}
45 | }
46 |
47 | response = client.put_item(
48 | TableName='', # Change accordingly
49 | Item=item
50 | )
51 | print(response)
52 | ```
53 |
54 | ## Write and query data using AWS cli
55 |
56 | ```bash
57 | aws dynamodb put-item \
58 | --table-name \
59 | --item '{ "adult": {"BOOL": false}, "backdrop_path": {"S": "/jnE1GA7cGEfv5DJBoU2t4bZHaP4.jpg"}, "genre_ids": {"NS": ["28", "878"]}, "id": {"N": "1094844"}, "original_language": {"S": "en"}, "original_title": {"S": "Ape vs. Mecha Ape"}, "overview": {"S": "Recognizing the destructive power of its captive giant Ape, the military makes its own battle-ready A.I., Mecha Ape. But its first practical test goes horribly wrong, leaving the military no choice but to release the imprisoned giant ape to stop the colossal robot before it destroys downtown Chicago."}, "popularity": {"N": "877.18"}, "poster_path": {"S": "/dJaIw8OgACelojyV6YuVsOhtTLO.jpg"}, "release_date": {"S": "2023-03-24"}, "title": {"S": "Ape vs. Mecha Ape"}, "video": {"BOOL": false}, "vote_average": {"N": "5.689"}, "vote_count": {"N": "190"} }'
60 | ```
61 |
62 | Query the data by:
63 |
64 | ```bash
65 | aws dynamodb get-item --consistent-read --table-name --key '{ "id": {"N": "1094844"}, "title": {"S": "Ape vs. Mecha Ape"}}'
66 | ```
67 |
68 | ## Create and query a global secondary index
69 |
70 | Let's say you need to query items according to `vote_count` and `vote_average`.
71 | Since those fields aren't part of the primary key, you need to create a secondary index; otherwise,
72 | queries will be inefficient and slow, and the cost of scanning the database will be significantly higher.
73 |
74 | 1. In the navigation pane on the left side of the console, choose **Tables**.
75 | 2. Choose your table from the table list.
76 | 3. Choose the **Indexes** tab for your table.
77 | 4. Choose **Create index**.
78 | 5. For the **Partition key**, enter `vote_count`.
79 | 6. For the **Sort key**, enter `vote_average`.
80 | 7. For **Index name**, enter `vote-index`.
81 | 8. Leave the other settings on their default values and choose **Create index**.
82 |
83 | Once done, use `boto3` or `awscli` to query all movies with `vote_count > 100` and `vote_average > 6`.
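
Note that a DynamoDB `Query` requires an exact match on the index's partition key (`vote_count`), so for a `>` condition on both fields one simple (though not the cheapest) approach is to scan the index with a filter expression. A rough AWS CLI sketch, with the table name as a placeholder:

```bash
# Scan the vote-index GSI and keep only items with vote_count > 100 and vote_average > 6.
# Note: a scan reads the whole index; the filter is applied after the read.
aws dynamodb scan \
  --table-name my-movies-table \
  --index-name vote-index \
  --filter-expression "vote_count > :c AND vote_average > :a" \
  --expression-attribute-values '{":c": {"N": "100"}, ":a": {"N": "6"}}'
```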
84 |
85 |
86 | # Exercises
87 |
88 | ### :pencil2: NetflixMovieCatalog with DynamoDB table
89 |
90 | In this exercise you'll make the [NetflixMovieCatalog][NetflixMovieCatalog] app retrieve and serve movie data from a DynamoDB table (instead of a JSON file, as it's configured now).
91 |
92 | 1. In the NetflixMovieCatalog repo, under `data/` you'll find the JSON files used by the server to build responses.
93 | Create a Python script to write all the data into your DynamoDB table. Note that there are two JSON files: `data_tv.json` and `data_movies.json`; you should think about how to store the data in DynamoDB (either in separate tables, or in the same table while differentiating TV and movie data).
94 | 2. As the NetflixMovieCatalog queries movies by `genre_ids`, and a single movie can belong to multiple genres, there is no efficient way to retrieve the data (the `genre_ids` field is a set of numbers, thus cannot be part of a primary key).
95 | The solution is to create **another** DynamoDB table as follows:
96 |
97 | - Partition Key: genre_id (Number)
98 | - Sort Key: movie_id (Number)
99 |
100 | Each item corresponds to one of the genres of a given movie. This allows efficient querying by genre ID:
101 |
102 | ```python
103 | import boto3
104 | dynamodb = boto3.resource('dynamodb')
105 | table = dynamodb.Table('MoviesByGenre')
106 |
107 | response = table.query(
108 | KeyConditionExpression=boto3.dynamodb.conditions.Key('genre_id').eq(27)
109 | )
110 |
111 | for item in response['Items']:
112 | print(item)
113 | ```
114 |
115 |
116 | 3. Modify the code of `app.py` to query data from DynamoDB instead of the JSON files.
117 | 4. Deploy the NetflixMovieCatalog app as a new Docker image version.
118 |
119 |
120 | ### :pencil2: Point-in-time recovery for DynamoDB
121 |
122 | Restore your table to how it looked a few minutes ago.
123 |
124 | https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.Tutorial.html#restoretabletopointintime_console
125 |
126 |
127 | [NetflixMovieCatalog]: https://github.com/exit-zero-academy/NetflixMovieCatalog.git
128 |
--------------------------------------------------------------------------------
/tutorials/aws_iam.md:
--------------------------------------------------------------------------------
1 | # Identity and Access Management (IAM)
2 |
3 | ### Create IAM role with permissions over S3 and attach it to an EC2 instance
4 |
5 | If you haven't created a role yet, here is a short recap:
6 |
7 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
8 |
9 | 2. In the navigation pane, choose **Roles**, **Create role**\.
10 |
11 | 3. On the **Trusted entity type** page, choose **AWS service** and the **EC2** use case\. Choose **Next: Permissions**\.
12 |
13 | 4. On the **Attach permissions policy** page, search for **AmazonS3FullAccess** AWS managed policy\.
14 |
15 | 5. On the **Review** page, enter a name for the role and choose **Create role**\.
16 | 6. Attach the role to your EC2 instance.
17 | 7. Test your policy.
18 |
19 | Let's review the created permission JSON:
20 |
21 | ```json
22 | {
23 | "Version" : "2012-10-17",
24 | "Statement" : [
25 | {
26 | "Effect" : "Allow",
27 | "Action" : [
28 | "s3:*", // Allowed actions on Amazon S3 resources
29 | "s3-object-lambda:*" // Allowed actions on S3 Object Lambda resources
30 | ],
31 | "Resource" : "*" // Allowed resource - in this case, all resources
32 | }
33 | ]
34 | }
35 | ```
36 |
37 | - `Version`: Denotes the version of the policy language being used.
38 | - `Statement`: An array of policy statements, each defining a permission rule.
39 | - `Effect`: Specifies whether the statement allows or denies access ("Allow" in this case).
40 | - `Action`: Lists the allowed actions (`s3:*` for all S3 actions, `s3-object-lambda:*` for all S3 Object Lambda actions).
41 | - `Resource`: Specifies the resource to which the actions apply ("*" signifies all resources).
42 |
43 | This policy allows any action (`s3:*`) and any action on S3 Object Lambda (`s3-object-lambda:*`) for any resource (`*`) because of the `Allow` effect.
44 |
45 |
46 | ## The Principle of Least Privilege
47 |
48 | **The Principle of Least Privilege (PoLP)** is a security best practice that involves giving users and systems only the minimum permissions necessary to perform their tasks or functions, and no more.
49 | This helps to reduce the risk of accidental or intentional damage or data loss, and limit the potential impact of security breaches or vulnerabilities.
50 |
51 | ## Design a policy according to the PoLP
52 |
53 | Let's modify the above-created policy to follow the PoLP.
54 |
55 | ### Custom `Action`s and `Resource`s
56 |
57 | 1. Inspired by [Allows read and write access to objects in an S3 Bucket](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html), allow your IAM role an access only to the specific bucket you use to store data.
58 | 2. **Validate your changes** either by trying to upload to another bucket, or using the [IAM Policy Simulator](https://policysim.aws.amazon.com/).
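
Besides the web-based simulator, the AWS CLI can also run a quick simulation. A minimal sketch, assuming a role named `my-s3-role` and buckets named `my-bucket` and `some-other-bucket` (all placeholders):

```bash
# Simulate whether the role may put an object into the allowed bucket vs. some other bucket.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/my-s3-role \
  --action-names s3:PutObject \
  --resource-arns arn:aws:s3:::my-bucket/* arn:aws:s3:::some-other-bucket/*
```

The output should show `allowed` only for the bucket your policy permits.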
59 |
60 | ### Use `Condition`s
61 |
62 | 1. Inspired by [Restricting access policy permissions to a specific storage class](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security_iam_service-with-iam.html#example-storage-class-condition-key),
63 | create a policy to restrict object uploads to the `STANDARD` storage class only.
64 | 2. Validate your policy.
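
One way to validate could be to upload the same file twice, once with the allowed storage class and once with a different one; if the condition works, the second upload should be denied (bucket and file names are placeholders):

```bash
# Should succeed: STANDARD is the allowed storage class.
aws s3 cp ./test.txt s3://my-bucket/test-standard.txt --storage-class STANDARD

# Should fail with AccessDenied if the condition is enforced.
aws s3 cp ./test.txt s3://my-bucket/test-glacier.txt --storage-class GLACIER
```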
65 |
66 | ### Restrict access to specific IAM roles (even from different AWS accounts!)
67 |
68 | Let's say the Netflix analytics team needs access to your bucket, but their platform is provisioned in another AWS account.
69 | How can the analytics team access a resource from another AWS account?
70 |
71 | So far, you've created an **identity-based policy** - a policy assigned to some given **identity** (your IAM role in our example).
72 | Your EC2 instance, which holds this role, can say: "I'm an EC2 instance! I have permissions to write objects to the ABC bucket".
73 |
74 | On the other hand, **resource-based policies** are policies that we assign to **resources** (our S3 bucket in our example).
75 | The S3 bucket which holds the policy can say: "I'm an S3 bucket! I allow only the A and B IAM roles to talk to me".
76 |
77 | In order for the analytics team to have access, we need to create a **Resource-based** policy on the S3 bucket that grants the specific IAM role from the analytics team account the necessary permissions.
78 | This involves specifying the ARN of the IAM role from the external account in the bucket policy.
79 |
80 | Here is an example of how to set this up:
81 |
82 | ```json
83 | {
84 | "Version": "2012-10-17",
85 | "Statement": [
86 | {
87 | "Effect": "Allow",
88 | "Principal": {
89 | "AWS": [
90 | "arn:aws:iam::019273951234:role/netflix-analytics-team-role"
91 | ]
92 | },
93 | "Action": "s3:*",
94 | "Resource": [
95 | "arn:aws:s3:::",
96 | "arn:aws:s3:::/*"
97 | ]
98 | }
99 | ]
100 | }
101 | ```
102 |
103 | Change `` accordingly.
104 |
105 | Let's create the policy:
106 |
107 | 1. In your bucket, choose the **Permissions** tab.
108 | 2. Under **Bucket policy**, choose **Edit**.
109 | 3. In the **Edit bucket policy** page, edit the JSON in the Policy section.
110 | 4. Inspired by [restrict access to specific VPC](https://repost.aws/knowledge-center/block-s3-traffic-vpc-ip), add another statement that blocks traffic to the bucket unless it originates from your VPC or from the analytics team's VPC, which is `vpc-1a2b3c4d`.
111 | 5. Validate your policy (only that you have access from within your VPC but not outside of it).
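
As a quick sanity check, you might try the same simple operation from an instance inside your VPC and from your local machine; depending on how your statement matches the source VPC, only the in-VPC request should succeed (bucket name is a placeholder):

```bash
# Run this both from an EC2 instance in your VPC and from your local machine.
# Only the request originating from the allowed VPCs should be permitted.
aws s3 ls s3://my-bucket
```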
112 |
113 | > [!NOTE]
114 | > Resource-based policies are mainly used to grant access to identities outside the current AWS account.
115 |
116 |
117 | # Exercises
118 |
119 | ### :pencil2: Use `Condition` to enforce resource tagging policy
120 |
121 | Inspired by [Tagging and access control policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html),
122 | create a policy that enforces each uploaded object to be tagged by `Env=prod` or `Env=dev`.
123 |
124 | In order to comply with the policy, you'll have to change some code (Node.js app) in the [NetflixFrontend][NetflixFrontend] repo, under `pages/api/analytics.ts`.
125 |
126 |
127 | ### :pencil2: Allow HTTPS traffic only
128 |
129 | As you may know, Amazon S3 offers encryption in **transit** (data is encrypted when traveling between machines - HTTPS) and encryption **at rest** (data is encrypted when stored on disk).
130 | But S3 also allows communication over plain HTTP, so encryption in transit may be violated.
131 |
132 | Inspired by [this AWS blog post](https://repost.aws/knowledge-center/s3-bucket-policy-for-config-rule),
133 | create a resource-based policy which enforces the user to access the bucket over HTTPS only.
134 |
135 |
136 | [NetflixFrontend]: https://github.com/exit-zero-academy/NetflixFrontend
--------------------------------------------------------------------------------
/tutorials/aws_intro.md:
--------------------------------------------------------------------------------
1 | # Amazon Web Services (AWS) - Intro to Cloud Computing
2 |
3 | Cloud computing is a technology that allows businesses and individuals to access and use computing resources over the internet, without the need for owning or maintaining physical hardware.
4 | Amazon Web Services (AWS) is a leading provider of cloud computing services, offering a wide range of tools and platforms that enable businesses to deploy, scale, and manage their applications and data in the cloud.
5 | With AWS, organizations can benefit from the flexibility, scalability, and cost-effectiveness of cloud computing, while focusing on their core business objectives.
6 |
7 | ## Region and Zones
8 |
9 | AWS operates state-of-the-art, highly available data centers. Although rare, failures can occur that affect the availability of instances that are in the same location. If you host all of your instances in a single location that is affected by a failure, none of your instances would be available.
10 |
11 | Each **Region** is designed to be isolated from the other Regions. This achieves the greatest possible **fault tolerance** and **stability**.
12 |
13 | Here are a few available regions of AWS:
14 |
15 | | Code | Name |
16 | |----------------|-------------------------|
17 | | `us-east-2` | US East (Ohio) |
18 | | `us-east-1` | US East (N. Virginia) |
19 | | `us-west-1` | US West (N. California) |
20 | | `us-west-2` | US West (Oregon) |
21 | | `eu-west-1` | Europe (Ireland) |
22 | | `eu-central-1` | Europe (Frankfurt) |
23 | | `eu-north-1` | Europe (Stockholm) |
24 |
25 | Each Region has multiple, isolated locations known as **Availability Zones**. The code for Availability Zone is its Region code followed by a letter identifier. For example, `us-east-1a`.
26 |
27 | ## SLA
28 |
29 | AWS Service Level Agreements (**SLA**) are commitments made by AWS to its customers regarding the availability and performance of its cloud services.
30 | SLAs specify the percentage of uptime that customers can expect from AWS services and the compensation they can receive if AWS fails to meet these commitments.
31 | AWS offers different SLAs for different services, and the SLAs can vary based on the region and the type of service used. AWS SLAs provide customers with a level of assurance and confidence in the reliability and availability of the cloud services they use.
32 |
33 | For more information, [here](https://aws.amazon.com/legal/service-level-agreements/?aws-sla-cards.sort-by=item.additionalFields.serviceNameLower&aws-sla-cards.sort-order=asc&awsf.tech-category-filter=*all).
34 |
35 | ## Launch a virtual machine (EC2 instance)
36 |
37 | Amazon EC2 (Elastic Compute Cloud) is a web service that provides resizable compute capacity in the cloud.
38 | It allows users to create and manage virtual machines, commonly referred to as "instances", which can be launched in a matter of minutes and configured with custom hardware, network settings, and operating systems.
39 |
40 | ![][networking_project_stop]
41 |
42 | 1. Open the Amazon EC2 console at [https://console\.aws\.amazon\.com/ec2/](https://console.aws.amazon.com/ec2/).
43 |
44 | 2. From the EC2 console dashboard, in the **Launch instance** box, choose **Launch instance**, and then choose **Launch instance** from the options that appear\.
45 |
46 | 3. Under **Name and tags**, for **Name**, enter a descriptive name for your instance\.
47 |
48 | 4. Under **Application and OS Images \(Amazon Machine Image\)**, do the following:
49 |
50 | 1. Choose **Quick Start**, and then choose **Ubuntu**\. This is the operating system \(OS\) for your instance\.
51 |
52 | 5. Under **Instance type**, from the **Instance type** list, you can select the hardware configuration for your instance\. Choose the `t2.nano` instance type (the cheapest one). In Regions where `t2.nano` is unavailable, you can use a `t3.nano` instance.
53 |
54 | 6. Under **Key pair \(login\)**, choose **Create new key pair**\.
55 |
56 | 1. For **Name**, enter a descriptive name for the key pair\. Amazon EC2 associates the public key with the name that you specify as the key name\.
57 |
58 | 2. For **Key pair type**, choose **RSA**.
59 |
60 | 3. For **Private key file format**, choose the format in which to save the private key\. Since we will use `ssh` to connect to the machine, choose **pem**.
61 |
62 | 4. Choose **Create key pair**\.
63 |
64 | **Important**
65 | This step should be done once! Once you've created a key pair, use it for every EC2 instance you launch.
66 |
67 | 5. The private key file is automatically downloaded by your browser\. The base file name is the name you specified as the name of your key pair, and the file name extension is determined by the file format you chose\. Save the private key file in a **safe place**\.
68 |
69 | **Important**
70 | This is the only chance for you to save the private key file\.
71 |
72 | 6. Your private key file must have permissions of `400`; `chmod` it if needed.
73 |
74 | 7. Next to **Network settings**, choose **Edit**\.
75 |
76 | 1. For **VPC**, choose the default VPC for your region.
77 | 2. For **Subnet** choose any subnet you want.
78 | 3. Choose **Create security group** while providing a name other than `launch-wizard-x`.
79 |
80 | 8. Keep the default selections for the other configuration settings for your instance\.
81 |
82 | 9. Review a summary of your instance configuration in the **Summary** panel, and when you're ready, choose **Launch instance**\.
83 |
84 | 10. A confirmation page lets you know that your instance is launching\. Choose **View all instances** to close the confirmation page and return to the console\.
85 |
86 | 11. On the **Instances** screen, you can view the status of the launch\. It takes a short time for an instance to launch\. When you launch an instance, its initial state is `pending`\. After the instance starts, its state changes to `running` and it receives a public DNS name\.
87 |
88 | 12. It can take a few minutes for the instance to be ready for you to connect to it\. Check that your instance has passed its status checks; you can view this information in the **Status check** column\.
89 |
90 | > [!NOTE]
91 | > When stopping the instance, please note that the public IP address may change, while the private IP address remains unchanged.
92 |
93 |
94 | To connect to your instance, open the terminal in your local machine, and connect to your instance by:
95 |
96 | ```shell
97 | ssh -i "" ubuntu@
98 | ```
99 |
100 | Try to `ping` the instance from your local machine. Having trouble?
101 | Note that by default, the only allowed inbound traffic to an EC2 instance is port 22 (why?).
102 | [Take a look here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html#add-rule-authorize-access) to learn how to allow inbound traffic on different ports.
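
If you prefer the CLI over the console, opening a port in the instance's security group could look roughly like this sketch (the security group ID is a placeholder):

```bash
# Allow inbound HTTP (port 80) from anywhere to the instance's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```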
103 |
104 |
105 | [networking_project_stop]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/networking_project_stop.gif
--------------------------------------------------------------------------------
/tutorials/aws_lambda.md:
--------------------------------------------------------------------------------
1 | # Lambda Functions
2 |
3 | ## About serverless
4 |
5 | Serverless architectures are a cloud computing paradigm that allows developers to build and deploy applications without the need to manage servers directly (PaaS).
6 | In this model, the cloud provider takes care of server provisioning, scaling, and maintenance, enabling developers to focus solely on writing code and paying only for the actual resources consumed during execution.
7 |
8 | This approach offers reduced operational overhead, improved cost efficiency, and faster time-to-market for various types of applications and services.
9 |
10 | ## Email notification when new movie is released
11 |
12 | Let's say Netflix wants to send an email to a group of customers when a new movie is released.
13 |
14 | In this tutorial, we'll configure a Lambda function to be triggered whenever a new movie is added to the movies DynamoDB table.
15 | The Lambda function will send an email to those who have subscribed to receive notifications about new movies.
16 |
17 | ### Create an SNS topic and subscribe an email
18 |
19 | 1. Sign in to the [Amazon SNS console](https://console.aws.amazon.com/sns/home).
20 | 1. In the left navigation pane, choose **Topics**.
21 | 1. On the **Topics** page, choose **Create topic**.
22 | 1. By default, the console creates a FIFO topic. Choose **Standard**.
23 | 1. In the **Details** section, enter a **Name** for the topic.
24 | 1. Scroll to the end of the form and choose **Create topic**.
25 | The console opens the new topic's **Details** page.
26 |
27 | 1. In the left navigation pane, choose **Subscriptions**.
28 | 1. On the **Subscriptions** page, choose **Create subscription**.
29 | 1. On the **Create subscription** page, choose the **Topic ARN** field to see a list of the topics in your AWS account.
30 | 1. Choose the topic that you created in the previous step.
31 | 1. For **Protocol**, choose **Email**.
32 | 1. For **Endpoint**, enter an email address that can receive notifications.
33 | 1. Choose **Create subscription**.
34 | 1. The console opens the new subscription's **Details** page.
35 | 1. Check your email inbox and choose **Confirm subscription** in the email from AWS Notifications. The sender ID is usually `no-reply@sns.amazonaws.com`.
36 | 1. Amazon SNS opens your web browser and displays a subscription confirmation with your subscription ID.
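
For reference, the same topic and email subscription could roughly be created with the AWS CLI as well (the topic name, region, account ID, and email address below are placeholders; the email still has to be confirmed from the inbox):

```bash
# Create a standard SNS topic and subscribe an email address to it.
aws sns create-topic --name new-movie-notifications
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:new-movie-notifications \
  --protocol email \
  --notification-endpoint someone@example.com
```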
37 |
38 | ### Enable Streams for your DynamoDB table
39 |
40 | 1. In the DynamoDB navigation pane on the left side, choose **Tables**.
41 | 2. Choose your table from the table list.
42 | 3. Choose the **Exports and streams** tab for your table.
43 | 4. Under **DynamoDB stream details** choose **Enable**.
44 | 5. Choose **New and old images** and click **Enable stream**.
45 |
46 | ### Create a Lambda Function
47 |
48 | 1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console\.
49 |
50 | 2. Choose **Create function**\.
51 |
52 | 3. Under **Basic information**, do the following:
53 | 1. Enter **Function name**.
54 | 2. For **Runtime**, confirm that **Python 3.x** is selected\.
55 |
56 | 4. Choose **Create function**\.
57 | 5. Enter your function, copy the content of `new_movie_lambda/app.py` and paste it in the **Code source**.
58 | 6. Click the **Deploy** button.
59 | 7. On the same page, click **Add trigger** and choose your DynamoDB table as the source trigger.
60 | 8. Configure an environment variable named `TOPIC_ARN` with the ARN of your topic. The Lambda will send a notification to this topic.
61 | 9. In your Lambda IAM role, attach the `AWSLambdaInvocation-DynamoDB` and `AmazonSNSFullAccess` permissions to allow your Lambda to read items from DynamoDB and publish messages to your SNS topic.
62 |
63 | Test your Lambda function by creating a new movie item in the DynamoDB table and watch for a new email in your inbox.
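
A quick way to trigger the stream could be a minimal `put-item` call, similar to the one from the DynamoDB tutorial (the table name and values are placeholders):

```bash
# Insert a minimal movie item; the stream event should invoke the Lambda,
# which in turn publishes to the SNS topic and lands in your inbox.
aws dynamodb put-item \
  --table-name my-movies-table \
  --item '{ "id": {"N": "999999"}, "title": {"S": "Test Movie"}, "release_date": {"S": "2024-01-01"} }'
```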
64 |
65 |
66 | # Exercises
67 |
68 | ### :pencil2: Monitor your Lambda function
69 |
70 | 1. Access your Grafana instance (it must be running within an EC2 instance).
71 | 2. Add **CloudWatch** as a data source under Grafana's data source settings. To allow Grafana to access CloudWatch, create a role with permissions on CloudWatch and attach it to your EC2 instance.
72 | 3. Create panels in Grafana to display the following Lambda metrics: invocation count, errors and running duration.
73 |
74 |
75 | ### :pencil2: Docker based Lambda
76 |
77 | As DevOps engineers, we prefer deploying the Lambda function using Docker containers instead of directly copying the source code.
78 |
79 | 1. Under `new_movie_lambda/` create a `Dockerfile` to containerize your code:
80 |
81 | ```dockerfile
82 | FROM public.ecr.aws/lambda/python:3.10
83 |
84 | # TODO your instructions here....
85 |
86 | CMD ["app.lambda_handler"]
87 | ```
88 |
89 | 2. Build the image and push it to an ECR repo:
90 | - Open the Amazon ECR console at [https://console\.aws\.amazon\.com/ecr/repositories](https://console.aws.amazon.com/ecr/repositories).
91 | - In the navigation pane, choose **Repositories**\.
92 | - On the **Repositories** page, choose **Create repository**\.
93 | - For **Repository name**, enter a unique name for your repository\. E.g. `john_new_movie_notify`
94 | - Choose **Create repository**\.
95 | - Select the repository that you created and choose **View push commands** to view the steps to build and push an image to your new repository\.
96 |
97 | 3. Create a new Lambda function based on your Docker image.
98 | 4. Test it.
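
The push commands that step 2 refers to usually look roughly like the following sketch (the region, account ID, and repository name are placeholders; use the exact commands shown in your console):

```bash
# Authenticate Docker to your ECR registry.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag and push the Lambda image.
docker build -t john_new_movie_notify new_movie_lambda/
docker tag john_new_movie_notify:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/john_new_movie_notify:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/john_new_movie_notify:latest
```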
99 |
100 |
101 |
--------------------------------------------------------------------------------
/tutorials/aws_route53.md:
--------------------------------------------------------------------------------
1 | # Route 53
2 |
3 | Amazon Route 53 is a scalable and highly available DNS web service that serves as the authoritative DNS server for your domains, allowing you to manage DNS records.
4 | It also offers domain registration services, enabling you to purchase and manage domain names directly within AWS.
5 |
6 | ![][aws_route_53_dns]
7 |
8 | ## Registering a domain
9 |
10 | Throughout the course, you'll be using a real registered domain to manage and access the services that you'll deploy in the cloud.
11 |
12 | > [!NOTE]
13 | > If you already have a registered domain, read here how to [make Route 53 the DNS service for an existing domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html) and skip the next section.
14 |
15 | 1. Sign in to the AWS Management Console and open the Route 53 console at https://console.aws.amazon.com/route53/
16 | 1. In the navigation pane, choose **Domains** and then **Registered domains**.
17 | 1. On the **Registered domains** page, choose **Register domains**.
18 |
19 | In the **Search for domain** section, enter the domain name that you want to register, and choose **Search** to find out whether the domain name is available.
20 |
21 | If you're using your domain just for learning and want to keep costs low, `.click` or `.link` are some of the cheapest TLDs, costing about $3-5 per year. [Take a look here for a full pricing list](https://d32ze2gidvkk54.cloudfront.net/Amazon_Route_53_Domain_Registration_Pricing_20140731.pdf).
22 |
23 | 1. Choose **Proceed to checkout**.
24 | 1. On the **Pricing** page, choose the number of years that you want to register the domain for (1 year should be enough) and make sure the auto-renewal is disabled.
25 | 1. Choose **Next**.
26 | 1. On the **Contact information** page, enter contact information for the domain registrant, admin, tech, and billing contacts.
27 | 1. Choose **Next**.
28 | 1. On the **Review** page, review the information that you entered, and optionally correct it, read the terms of service, and select the check box to confirm that you've read the terms of service.
29 | 1. Choose **Submit**.
30 | 1. In the navigation pane, choose **Domains** and then **Requests**.
31 | - On this page you can view the status of your domain. You need to respond to the registrant contact verification email. You can also choose to resend the verification email.
32 | - When you receive the verification email, choose the link in the email that verifies that the email address is valid. If you don't receive the email immediately, check your junk email folder.
33 |
34 | When domain registration is complete, go to the next section.
35 |
36 |
37 |
38 | ## Add records to registered domain
39 |
40 | When you registered your domain, Route 53 created a **Hosted Zone**.
41 |
42 | A hosted zone in Amazon Route 53 is a container for DNS records associated with a domain, effectively acting as the **authoritative server** for that domain.
43 | It enables you to manage how traffic is routed to your resources by defining various record types, such as A and CNAME records.
44 |
45 | 1. In the navigation pane, choose **Hosted zones**\.
46 |
47 | 2. Choose the hosted zone associated with your domain.
48 |
49 | 3. Choose **Create record**\.
50 |
51 | 4. Define an A record for a custom sub-domain of yours (e.g. `my-name.mydomain.com`), the record value is an IP address of your EC2 instance created in a previous tutorial [^1].
52 |
53 | 5. Choose **Create records**\.
54 | **Note**
55 | Your new records take time to propagate to the Route 53 DNS servers
56 |
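Once propagation completes, you can verify the record from your terminal; `dig` (or `nslookup`) should return the IP you configured (the sub-domain below is a placeholder):

```bash
# Query the A record; the answer should contain your EC2 instance's IP address.
dig my-name.mydomain.com A +short
```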
57 |
58 |
59 | [aws_route_53_dns]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/aws_route_53_dns.png
60 |
61 |
62 | [^1]: EC2 instance deployed in [Intro to cloud computing](aws_intro.md).
--------------------------------------------------------------------------------
/tutorials/aws_s3.md:
--------------------------------------------------------------------------------
1 | # Simple Storage Service (S3)
2 |
3 | In this tutorial, you'll store the movies thumbnails in an S3 bucket and configure the service to serve the content from your bucket.
4 |
5 | To store your data in Amazon S3, you first create a **bucket** and specify a bucket name and AWS Region.
6 | Then, you upload your data to that bucket as **objects** in Amazon S3.
7 | Each object has a **key** (or key name), which is the unique identifier for the object within the bucket.
8 |
12 |
13 |
14 | In the below example, you create a bucket to store user analytics data for the NetflixFrontend app.
15 |
16 | ## Create a Bucket
17 |
18 | 1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
19 | 2. In the left navigation pane, choose **Buckets**\.
20 | 3. Choose **Create bucket**.
21 |
22 | The **Create bucket** wizard opens.
23 |
24 | 4. In **Bucket name**, enter a DNS-compliant name for your bucket.
25 |
26 | The bucket name must:
27 | + Be unique across **all of Amazon S3**.
28 | + Be between 3 and 63 characters long.
29 | + Not contain uppercase characters.
30 | + Start with a lowercase letter or number.
31 |
32 | 5. In **Region**, choose the AWS Region where you want the bucket to reside.
33 |
34 | Choose the Region where you provisioned your EC2 instance.
35 |
36 | 6. Under **Object Ownership**, leave ACLs disabled. By default, ACLs are disabled\. A majority of modern use cases in Amazon S3 no longer require the use of ACLs\. We recommend that you keep ACLs disabled, except in unusual circumstances where you must control access for each object individually\.
37 |
38 | 7. Enable **Default encryption** with the `SSE-S3` encryption type.
39 |
40 | 8. Choose **Create bucket**.
41 |
42 | ## Upload objects to S3 bucket from an EC2 instance
43 |
44 | In the NetflixFrontend app, whenever a user opens the details window for a specific movie, an HTTP request is sent to the server containing the activity information.
45 | This activity in turn is stored as an object in an S3 bucket for later usage (for example, by user analytics or recommendation teams).
46 |
47 | You can review the code that uploads the object to S3 under `pages/api/analytics.ts` in the NetflixFrontend repo.
48 |
49 | In your EC2 instance where the NetflixFrontend container is running, re-run the container while providing the following environment variables to the container:
50 |
51 | - `AWS_REGION` - Your region code (e.g. `us-east-1`).
52 | - `AWS_S3_BUCKET` - The name of your bucket.
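
A minimal sketch of such a re-run, assuming the image is the NetflixFrontend image you built earlier and that it listens on port 3000 (the image name, tag, port, region, and bucket name are all placeholders to adjust to your setup):

```bash
# Re-run the NetflixFrontend container with the region and bucket passed as env vars.
docker run -d -p 3000:3000 \
  -e AWS_REGION=us-east-1 \
  -e AWS_S3_BUCKET=my-netflix-analytics-bucket \
  my-netflix-frontend:latest
```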
53 |
54 | Visit the Netflix app and expand one of the movies' information windows; this activity should trigger an HTTP request to the app, which in turn will save the activity object in the S3 bucket.
55 |
56 | **Disclaimer:** This is not going to work yet. You should see an error like "Error updating user session" in the console logs of the NetflixFrontend container.
57 | Since the identity that writes the data to the S3 bucket is your EC2 instance, the instance has to have permissions to operate on S3.
58 |
59 | ![][ec2-s3]
60 |
61 | Keep reading....
62 |
63 | ### Attach IAM role to your EC2 Instance with permissions over S3
64 |
65 | To access an S3 bucket from an EC2 instance, you need to create an IAM role with the appropriate permissions and attach it to the EC2 instance.
66 | The role should have policies that grant the necessary permissions to read from and write to the S3 bucket, and the EC2 instance needs to be launched with this IAM role.
67 | IAM roles will be taught soon. But for now, just follow the instructions below.
68 |
69 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
70 |
71 | 1. In the navigation pane, choose **Roles**, **Create role**\.
72 |
73 | 1. On the **Trusted entity type** page, choose **AWS service** and the **EC2** use case\. Choose **Next: Permissions**\.
74 |
75 | 1. On the **Attach permissions policy** page, search for **AmazonS3FullAccess** AWS managed policy\.
76 |
77 | 1. On the **Review** page, enter a name for the role and choose **Create role**\.
78 |
79 |
80 | **To replace an IAM role for an instance**
81 |
82 | 1. In EC2 navigation pane, choose **Instances**.
83 |
84 | 1. Select the instance, choose **Actions**, **Security**, **Modify IAM role**.
85 |
86 | 1. Choose your created IAM role, click **Save**.
87 |
88 | After assigning the role, check that the app stores user activity in your S3 bucket.
89 |
90 | ## Enable versioning on your bucket
91 |
92 | What happens if you upload an object whose key already exists?
93 |
94 | You'll notice that the new object overwrites the old one, without any option to restore the older version.
95 | If this happens unintentionally or due to a bug in the application code, it can result in the permanent loss of data.
96 |
97 | The risk of data loss can be mitigated by implementing **versioning** in S3.
98 | When versioning is enabled, each object uploaded to S3 is assigned a unique version ID, which can be used to retrieve previous versions of the object.
99 | This allows you to recover data that was accidentally overwritten or deleted, and provides a safety net in case of data corruption or other issues.
100 |
101 | 1. Open the Amazon S3 console at [https://console\.aws\.amazon\.com/s3/](https://console.aws.amazon.com/s3/)\.
102 |
103 | 2. In the **Buckets** list, choose the name of the bucket that you want to enable versioning for\.
104 |
105 | 3. Choose **Properties**\.
106 |
107 | 4. Under **Bucket Versioning**, choose **Edit**\.
108 |
109 | 5. Choose **Enable**, and then choose **Save changes**\.
110 |
111 | 6. Upload multiple objects with the same key and make sure versioning is working.
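
To see the versions piling up, you can list them with the AWS CLI (bucket name and key are placeholders):

```bash
# List all versions of a single key; each upload should appear with its own VersionId.
aws s3api list-object-versions \
  --bucket my-bucket \
  --prefix test.txt
```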
112 |
113 | ## Create lifecycle rule to manage non-current versions
114 |
115 | When versioning is enabled in S3, every time an object is overwritten or deleted, a new version of that object is created. Over time, this can lead to a large number of versions for a given object, many of which may no longer be needed for business or compliance reasons.
116 |
117 | By creating lifecycle rules, you can define actions to automatically transition non-current versions of objects to a lower-cost storage class or delete them altogether. This can help you reduce storage costs and improve the efficiency of your S3 usage, while also ensuring that you are in compliance with data retention policies and regulations.
118 |
119 | For example, you might create a lifecycle rule to transition all non-current versions of objects to `Standard-IA` storage after 30 days, and then delete them after 365 days. This would allow you to retain current versions of objects in S3 for fast access, while still meeting your data retention requirements and reducing storage costs for non-current versions.
120 |
121 |
122 | 1. Choose the **Management** tab, and choose **Create lifecycle rule**\.
123 |
124 | 1. In **Lifecycle rule name**, enter a name for your rule\.
125 |
126 | 1. Choose the scope of the lifecycle rule (in this demo we will apply this lifecycle rule to all objects in the bucket).
127 |
128 | 1. Under **Lifecycle rule actions**, choose the actions that you want your lifecycle rule to perform:
129 | + Transition *noncurrent* versions of objects between storage classes
130 | + Permanently delete *noncurrent* versions of objects
131 |
132 | 1. Under **Transition non\-current versions of objects between storage classes**:
133 |
134 | 1. In **Storage class transitions**, choose **Standard\-IA**.
135 |
136 | 1. In **Days after object becomes non\-current**, enter 30.
137 |
138 | 1. Under **Permanently delete previous versions of objects**, in **Number of days after objects become previous versions**, enter 90 days.
139 |
140 | 1. Choose **Create rule**\.
141 |
142 | If the rule does not contain any errors, Amazon S3 enables it, and you can see it on the **Management** tab under **Lifecycle rules**\.
143 |
144 | # Exercises
145 |
146 | ### :pencil2: S3 pricing
147 |
148 | Explore the [S3 pricing page](https://aws.amazon.com/s3/pricing/).
149 |
150 | Compute the monthly cost of the below bucket characteristics:
151 |
152 | 1. us-east-1
153 | 2. S3 Standard
154 | 3. 4TB stored data
155 | 4. 40 million PUT requests.
156 | 5. 10 million GET requests.
157 | 6. 5TB inbound traffic
158 | 7. 10TB outbound traffic
159 |
160 |
161 | [ec2-s3]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/ec2-s3.png
--------------------------------------------------------------------------------
/tutorials/bash_command_techniques.md:
--------------------------------------------------------------------------------
1 | # Bash Command Techniques
2 |
3 | ## Exit status and `$?`
4 |
5 | In Unix-like operating systems, every command that is executed returns an exit status to the shell that invoked it. The exit status is a numeric value that indicates the success or failure of the command. A value of 0 indicates success, while a non-zero value indicates failure.
6 |
7 | The exit status of the most recently executed command can be accessed via the `$?` variable in Bash.
8 |
9 | ```console
10 | [myuser@hostname]~$ ls /non-existing-dir
11 | ls: cannot access '/non-existing-dir': No such file or directory
12 | [myuser@hostname]~$ echo $?
13 | 2
14 | ```
15 |
16 | In the above example, if you run a command like `ls /non-existing-dir`, you will receive an error message saying that the directory does not exist, and the exit status will be non-zero. You can access the exit status of this command by typing `echo $?`. The output will be the exit status of the previous command (in this case, the value is 2).
17 | Some common non-zero exit status values include:
18 |
19 | - `1`: General catch-all error code
20 | - `2`: Misuse of shell built-ins (e.g. incorrect number of arguments)
21 | - `126`: Command found but not executable
22 | - `127`: Command not found
23 | - `128`+: Exit status of a program that was terminated due to a signal
24 |
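A quick way to see a few of these values in practice (the exact codes for the `ls` example are those of GNU coreutils and may differ elsewhere):

```bash
true;  echo $?                               # 0   - success
false; echo $?                               # 1   - general failure
ls --no-such-flag 2> /dev/null; echo $?      # 2   - invalid option / misuse (GNU ls)
no-such-command 2> /dev/null; echo $?        # 127 - command not found
```
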
25 |
26 | Explore the man page of the `grep` command. List all possible exit codes, and specify the reason for every exit code.
27 |
28 | ## Running Multiple Commands (Conditionally)
29 |
30 | The bash shell allows users to join multiple commands on a single command line by separating the commands with a `;` (semicolon).
31 |
32 | ```console
33 | [myuser@hostname]~$ cd /etc/ssh; ls
34 | moduli ssh_config.d sshd_config.d ssh_host_ecdsa_key.pub ssh_host_ed25519_key.pub ssh_host_rsa_key.pub
35 | ssh_config sshd_config ssh_host_ecdsa_key ssh_host_ed25519_key ssh_host_rsa_key ssh_import_id
36 | [myuser@hostname]/etc/ssh$
37 | ```
38 |
39 | Nothing special in the above example… just two commands that were executed one after the other.
40 |
41 | The bash shell uses `&&` and `||` to join two commands conditionally. When commands are conditionally joined, the first will always execute. The second command may execute or not, depending on the return value (exit code) of the first command. For example, a user may want to create a directory, and then move a new file into that directory. If the creation of the directory fails, then there is no reason to move the file. The two commands can be coupled as follows:
42 |
43 | ```console
44 | [myuser@hostname]~$ echo "one two three" > numbers.txt
45 | [myuser@hostname]~$ mkdir /tmp/boring && mv numbers.txt /tmp/boring
46 | ```
47 |
48 | By coupling two commands with `&&`, the second command will only run if the first command succeeded (i.e., had a return value of 0).
49 |
50 | What if the `mkdir` command failed?
51 |
52 | Similarly, multiple commands can be combined with `||`. In this case, bash will execute the second command only if the first command "fails" (has a non zero exit code). This is similar to the "or" operator found in programming languages. In the following example, myuser attempts to change the permissions on a file. If the command fails, a message to that effect is echoed to the screen.
53 |
54 | ```console
55 | [myuser@hostname]~$ chmod 600 /tmp/boring/numbers.txt || echo "chmod failed."
56 | [myuser@hostname]~$ chmod 600 /tmp/mostly/boring/primes.txt || echo "chmod failed"
57 | chmod: failed to get attributes of '/tmp/mostly/boring/primes.txt': No such file or directory
58 | chmod failed
59 | ```
60 |
61 | It's common in bash scripts to create a directory and immediately `cd` into it if the creation succeeded. Use the conditional `&&` operator to create the dir and cd into it only if the creation succeeded.
62 |
63 |
64 |
65 | Solution
66 |
67 |
68 | ```bash
69 | mkdir newdir && cd newdir
70 | ```
71 |
72 |
73 |
74 | ## Command Substitution
75 |
76 | Command substitution allows users to run arbitrary commands in a subshell and incorporate the results into the command line. The modern syntax supported by the bash shell is:
77 |
78 | ```bash
79 | $(subcommand)
80 | ```
81 |
82 | As an example of command substitution, `myuser` would like to create a directory that contains the date in its name. After examining the `date(1)` man page, he devises a format string to generate the date in a compact format.
83 |
84 | ```bash
85 | [prince@station prince]$ date +%d%b%Y
86 | 04May2023
87 | ```
88 |
89 | He now runs the mkdir command, using command substitution.
90 |
91 | ```bash
92 | [prince@station prince]$ mkdir reports.$(date +%d%b%Y)
93 | [prince@station prince]$ ls
94 | reports.04May2023
95 | ```
96 |
97 | The bash shell implements command substitution by spawning a new subshell, running the command, recording the output, and exiting the subshell. The text used to invoke the command substitution is then replaced with the recorded output from the command.
98 |
99 | # Exercises
100 |
101 | ### :pencil2: Code simplification using logical operators
102 |
103 | ```bash
104 | ls -l /home/user/mydir
105 | if [ $? -eq 0 ]; then
106 | echo "Directory exists."
107 | else
108 | echo "Directory does not exist."
109 | fi
110 | ```
111 |
112 | The above code executes the `ls` command, then uses the `$?` variable along with an [if statement](https://tldp.org/LDP/abs/html/fto.html) to test whether the directory exists and print corresponding messages.
113 | Use `&&` and `||` operators to simplify the script. The simplified code should achieve the same functionality in **one command**!
114 |
--------------------------------------------------------------------------------
/tutorials/bash_conditional_statements.md:
--------------------------------------------------------------------------------
1 | # Bash Conditional Statement
2 |
3 | ## The if-else statement
4 |
5 | Sometimes you need to specify different courses of action to be taken in a shell script, depending on the success or failure of a command. The if construction allows you to specify such conditions.
6 |
7 | The most common syntax of the if command is:
8 |
9 | ```bash
10 | if TEST-COMMAND
11 | then
12 | POSITIVE-CONSEQUENT-COMMANDS
13 | else
14 | NEGATIVE-CONSEQUENT-COMMANDS
15 | fi
16 | ```
17 |
18 | This is a conditional statement in Bash that consists of a `TEST-COMMAND`, followed by a positive consequent command block (`POSITIVE-CONSEQUENT-COMMANDS`), and an optional negative consequent command block (`NEGATIVE-CONSEQUENT-COMMANDS`). If the `TEST-COMMAND` is successful (returns an exit status of 0), then the positive consequent commands are executed, otherwise, the negative consequent commands (if provided) are executed. The `if` statement is terminated with the `fi` command.
19 |
20 | ### Testing files
21 |
22 | **Before you start, review the man page of the `test` command.**
23 |
24 | The first example checks for the existence of a file:
25 |
26 | ```bash
27 | echo "This script checks the existence of the messages file."
28 | echo "Checking..."
29 | if [ -f /var/log/messages ]
30 | then
31 | echo "/var/log/messages exists."
32 | fi
33 | echo
34 | echo "...done."
35 | ```
36 |
37 |
38 | > 🧐 What is the relation between the `test` command and `[`?
39 |
40 | ### Testing Exit Status
41 |
42 | Recall that the $? variable holds the exit status of the previously executed command. The following example utilizes this variable to make a decision according to the success or failure of the previous command:
43 |
44 | ```bash
45 | curl google.com &> /dev/null
46 |
47 | if [ $? -eq 0 ]
48 | then
49 | echo "Request succeeded"
50 | else
51 | echo "Request failed, trying again..."
52 | fi
53 | ```
54 |
55 | ### Numeric Comparisons
56 |
57 | The below example demonstrates numeric comparison between a variable and 20.
58 | Don't worry if it doesn't work, you'll fix it soon 🙂
59 |
60 | ```bash
61 | num=$(wc -l /etc/passwd)
62 | echo $num
63 |
64 | if [ "$num" -gt "20" ]; then
65 | echo "Too many users in the system."
66 | fi
67 | ```
68 |
69 | > #### 🧐 Test yourself
70 | >
71 | > Copy the above script to a `.sh` file, and execute. Debug the script until you understand why the script fails. Use the `awk` command to fix the problem. Tip: using the `-x` flag can help you debug your bash run: `bash -x myscript.sh`
72 | >
73 |
74 |
75 | ### String Comparisons
76 |
77 | ```bash
78 | if [[ "$(whoami)" != 'root' ]]; then
79 | echo "You have no permission to run $0 as non-root user."
80 | exit 1;
81 | fi
82 | ```
83 |
84 | ### if-grep Construct
85 |
86 | ```bash
87 | echo "Bash is ok" > file
88 |
89 | if grep -q Bash file
90 | then
91 | echo "File contains at least one occurrence of Bash."
92 | fi
93 | ```
94 |
95 | Another example:
96 |
97 | ```bash
98 | word=Linux
99 | letter_sequence=inu
100 |
101 | if echo "$word" | grep -q "$letter_sequence"
102 | # The "-q" option to grep suppresses output.
103 | then
104 | echo "$letter_sequence found in $word"
105 | else
106 | echo "$letter_sequence not found in $word"
107 | fi
108 | ```
109 |
110 | ## `[...]` vs `[[...]]`
111 |
112 | With version 2.02, Bash introduced the `[[ ... ]]` extended test command, which performs comparisons in a manner more familiar to programmers from other languages. The `[[...]]` construct is the more versatile Bash version of `[...]`. Using the `[[...]]` test construct, rather than `[...]`, can prevent many logic errors in scripts.
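
One classic difference is word splitting: an unquoted variable is split into words inside `[ ... ]` but not inside `[[ ... ]]`, so a path containing a space breaks the first form (a minimal illustration; the filename is arbitrary):

```bash
file="my report.txt"
touch "$file"

[ -f $file ] && echo "found"     # fails: $file splits into two words, and [ reports an error
[[ -f $file ]] && echo "found"   # works: no word splitting inside [[ ... ]]
```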
113 |
114 | # Exercises
115 |
116 | ### :pencil2: Availability test script
117 |
118 | The `curl` command can be used to perform a request to an external website and return the response's [status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status):
119 |
120 | ```bash
121 | curl -o /dev/null -s -w "%{http_code}" www.google.com
122 | ```
123 |
124 | This `curl` command suppresses output (`-o /dev/null`), runs silently without printing the traffic progress (`-s`), and prints the HTTP status code (`-w "%{http_code}"`).
125 |
126 | Create an `availability_test.sh` script that receives an address as the 1st (and only) argument, and perform the above `curl` command.
127 | The script should be completed successfully if the returned HTTP status code is `< 500`,
128 | or fail otherwise (you can exit the script with `exit 1` to indicate failure).
129 |
130 | Here is the expected behaviour:
131 |
132 | ```console
133 | myuser@hostname:~$ ./availability_test.sh www.google.com
134 | www.google.com is up!
135 | myuser@hostname:~$ ./availability_test.sh http://cnn.com
136 | http://cnn.com is up!
137 | myuser@hostname:~$ ./availability_test.sh abcdefg
138 | abcdefg is not available.
139 | myuser@hostname:~$ echo $?
140 | 1
141 | myuser@hostname:~$ ./availability_test.sh
142 | A valid URL is required
143 | myuser@hostname:~$ ./availability_test.sh google.com cnn.com
144 | The script expects a single argument only, but got 2.
145 | myuser@hostname:~$ ./availability_test.sh httpbin.org/status/500 # this url should return a status code of 500
146 | httpbin.org/status/500 is not available.
147 | ```
148 |
149 | ### :pencil2: Geo-location info
150 |
151 | Write a bash script `geo_by_ip.sh` that, given an ip address, prints geo-location details, as follows:
152 | 1. The script first checks if `jq` cli is installed. If not installed, it prints a message to the user with a link to download the tool: https://stedolan.github.io/jq/download/
153 | 1. The script checks that **exactly one argument** was sent to it, which represents the ip address to check. Otherwise, an informative message is printed to stdout.
154 | 1. The script checks that the given IP argument is not equal to `127.0.0.1`.
155 | 1. The script performs an HTTP GET request to `http://ip-api.com/json/`, where `` is the IP argument. The results should be stored in a variable.
156 | 1. Using the jq tool and the variable containing the HTTP response, check that the request has succeeded by checking that the `status` key has a value of `success`. The command `jq -r '.'` can extract a key from the json (e.g. `echo $RESPONSE | jq -r '.status'`)
157 | 1. If the request succeed, print the following information to the user:
158 | - country
159 | - city
160 | - regionName
161 |
162 |
163 |
164 |
--------------------------------------------------------------------------------
/tutorials/bash_loops.md:
--------------------------------------------------------------------------------
1 | # Bash loops brief
2 |
3 | ## `for` loops
4 |
5 | `for` loops in Bash are used to iterate over a set of values. Here is the basic syntax of a `for` loop:
6 |
7 | ```bash
8 | for var in values
9 | do
10 | # commands
11 | done
12 | ```
13 |
14 | `var` is a variable that will be set to each value in `values` in turn. `values` can be a list of words separated by spaces, or a command that generates a list of values. Here is an example that prints out the numbers from 1 to 5:
15 |
16 | ```bash
17 | for i in 1 2 3 4 5
18 | do
19 | echo $i
20 | done
21 | ```
22 |
23 | You can also use the `seq` command to generate a sequence of numbers:
24 |
25 | ```bash
26 | for i in $(seq 1 5)
27 | do
28 | echo $i
29 | done
30 | ```
31 |
32 | > #### 🧐 Test yourself
33 | >
34 | > Use command substitution `$()` and write a `for` loop that iterates over the files in your home directory.
35 |
36 | ## `while` loops
37 |
38 | `while` loops in Bash are used to repeat a block of commands as long as a certain condition is true (i.e. the condition command exits with code 0). Here is the basic syntax of a `while` loop:
39 |
40 | ```bash
41 | while condition
42 | do
43 | # commands
44 | done
45 | ```
46 |
47 | `condition` is a command or expression that returns either 0 (true) or non-zero (false). Here is an example that prints out the numbers from 1 to 5 using a `while` loop:
48 |
49 | ```bash
50 | i=1
51 | while [ $i -le 5 ]
52 | do
53 | echo $i
54 | let i=$i+1
55 | done
56 | ```
57 |
58 | In this example, `i` is initialized to 1, and the loop continues as long as `i` is less than or equal to 5. `i` is incremented by 1 using the expression `let i=$i+1`.
59 |
60 | # Exercises
61 |
62 | ### :pencil2: Blur an image
63 |
64 | Create a script called `blur.sh`, which can be used to blur images.
65 | Use the `convert` command for the actual image processing.
66 | The script should expect as arguments multiple filenames of the images to be blurred.
67 | You need to test that the file content is indeed an image (e.g. using the `file` command).
68 | The script should generate a new file of the blurred image, and if the new image is successfully generated, replace the original image with the blurred one.
69 |
70 |
71 | ### :pencil2: Bad elusive command
72 |
73 | Say you have a command that fails rarely.
74 | In order to debug it you need to capture its output, but it can be time consuming to get a failure run.
75 | Write a bash script that runs the following script until it fails, captures its standard output and error streams to files, and prints everything at the end.
76 | Report how many runs it took for the script to fail.
77 |
78 | ```bash
79 | #!/bin/bash
80 |
81 | n=$(( RANDOM % 100 ))
82 | if [[ n -eq 42 ]]; then
83 | echo "Something went wrong"
84 | exit 1
85 | fi
86 |
87 | echo "Everything went according to plan"
88 | ```
89 |
--------------------------------------------------------------------------------
/tutorials/bash_variables.md:
--------------------------------------------------------------------------------
1 | # Bash Variables
2 |
3 | Variables are how programming and scripting languages represent data.
4 | A variable is nothing more than a **label**, a name assigned to a location or set of locations in computer memory holding an item of data. As seen in previous examples, shell variables are in uppercase characters by convention.
5 |
6 | Let us carefully distinguish between the name of a variable and its value.
7 | If `variable1` is the name of a variable, then `$variable1` is a reference to its value, the data item it contains.
8 |
9 | ```bash
10 | variable1=23
11 | echo variable1    # prints the literal string "variable1"
12 | echo $variable1   # prints 23, the value of the variable
13 | ```
14 |
15 | No spaces are permitted on either side of the `=` sign when assigning a variable. What happens if there is a space?
16 |
17 | ```bash
18 | VARIABLE =value
19 | VARIABLE= value
20 | VARIABLE = value
21 | ```
22 |
23 |
24 | ## Assigning and referencing variables
25 |
26 | Below are a few examples of variable referencing.
27 | Try them out and make sure you understand each one of the cases.
28 |
29 | ```bash
30 | A=375
31 | HELLO=$A
32 |
33 | echo HELLO # HELLO
34 | echo $HELLO # 375
35 | echo ${HELLO} # 375
36 | echo "$HELLO" # 375
37 | echo "${HELLO}" # 375
38 | echo "Oh, I like them squishy" >> ode_to_$A.txt # ode_to_375.txt was created
39 |
40 | # Variable referencing disabled (escaped) by single quotes
41 | echo '$HELLO'
42 | ```
43 |
44 | There are [many more](https://tldp.org/LDP/abs/html/parameter-substitution.html#PARAMSUBREF) features like these.
45 |
46 | #### Bash variables are untyped
47 |
48 | Unlike many other programming languages, Bash does not segregate its variables by "type."
49 | Essentially, Bash variables are character strings, but, depending on context, Bash permits arithmetic operations and comparisons on variables.
50 | The determining factor is whether the value of a variable contains only digits.
51 |
52 | ```bash
53 | a=879
54 | echo "The value of \"a\" is $a."
55 |
56 | a=16+5
57 | echo "The value of \"a\" is now $a."
58 | ```
59 |
60 | #### Assignment using `let`
61 |
62 | ```bash
63 | let a=16+5
64 | echo "The value of \"a\" is now $a."
65 | ```
66 |
67 | #### Variable assignment using command substitution - `$(...)`
68 |
69 | ```bash
70 | R=$(cat /etc/profile)
71 | arch=$(uname -m)
72 | echo $R
73 | echo $arch
74 | ```
75 |
76 | #### Variable reference using curly braces - `${...}`
77 |
78 | Consider the below example:
79 |
80 | ```console
81 | myuser@hostname:~$ ls
82 | hello_world.txt
83 | myuser@hostname:~$ echo $A
84 | hola
85 | myuser@hostname:~$ echo "filename language changed!" > $A_world.txt
86 | myuser@hostname:~$ ls
87 | hello_world.txt
88 | myuser@hostname:~$ ls -a
89 | hello_world.txt .txt
90 | ```
91 |
92 | Where is the file `hola_world.txt`? A couple of things were mistakenly done by `myuser`!
93 | First, the bash shell dereferenced a perfectly valid variable name, just not the one that `myuser` intended.
94 | The bash shell resolved the (uninitialized) variable `A_world` to nothing, and created the resulting file `.txt`. Secondly, because `.txt` starts with a `.`, it is a "hidden file", as the `ls -a` command reveals.
95 |
96 | Let's utilize the curly braces reference syntax to resolve `myuser`'s problems:
97 |
98 | ```console
99 | myuser@hostname:~$ echo "filename language changed!" > ${A}_world.txt
100 | myuser@hostname:~$ ls
101 | hello_world.txt  hola_world.txt
102 | ```
103 |
104 | When finished with a variable, the variable may be unbound from its value with the `unset` command.
105 |
106 | ```console
107 | myuser@hostname:~$ unset A
108 | myuser@hostname:~$ echo $A
109 |
110 | myuser@hostname:~$
111 | ```
112 |
113 | ## Script positional variables
114 |
115 | Positional arguments are arguments passed to a command or script in a specific order, usually separated by spaces. Positional arguments can be accessed, within a bash script file, using special variables such as `$1`, `$2`, `$3`, and so on, where `$1` refers to the first argument, `$2` refers to the second argument, and so on.
116 |
117 | Let's see them in action... create a file called `BarackObama.sh` as follows:
118 |
119 | ```bash
120 | #!/bin/bash
121 |
122 | # This script reads 3 positional parameters and prints them out.
123 |
124 | echo "$0 invoked with the following arguments: $@"
125 |
126 | POSPAR1="$1"
127 | POSPAR2="$2"
128 | POSPAR3="$3"
129 |
130 | echo "$1 is the first positional parameter, \$1."
131 | echo "$2 is the second positional parameter, \$2."
132 | echo "$3 is the third positional parameter, \$3."
133 | echo
134 | echo "The total number of positional parameters is $#."
135 |
136 | if [ -n "${10}" ] # Parameters > $9 must be enclosed in {brackets}.
137 | then
138 | echo "Parameter #10 is ${10}"
139 | fi
140 | ```
141 |
142 | Execute the script by:
143 |
144 | ```bash
145 | bash BarackObama.sh Yes We Can
146 | bash BarackObama.sh Yes We Can bla bla 1 2 3
147 | ```
148 |
149 | Investigate the script output and make sure you understand each variable.
150 |
151 | ## Special bash variables
152 |
153 | Special bash variables are built-in variables that hold information about the shell environment and provide useful information for shell scripting.
154 |
155 | - `$@` - Expands to the positional parameters, starting from one.
156 | - `$#` - Expands to the number of positional parameters in decimal.
157 | - `$?` - Expands to the exit status of the most recently executed foreground pipeline.
158 | - `$$` - Expands to the process ID of the shell.
159 | - `$0` - Expands to the name of the shell or shell script.
160 | - `$*` - Expands to the positional parameters; when double-quoted, it expands to a single word with the parameters joined by the first character of `IFS`.
161 |
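A small demo script (the filename `special_vars.sh` is arbitrary) that prints some of these special variables:

```bash
#!/bin/bash
# special_vars.sh - print a few special bash variables (illustrative)

echo "Script name (\$0): $0"
echo "Number of arguments (\$#): $#"
echo "All arguments (\$@): $@"
echo "Process ID of the shell (\$\$): $$"

ls /nonexistent 2> /dev/null
echo "Exit status of the last command (\$?): $?"
```

Run it, e.g. `bash special_vars.sh one two three`, and compare the output with the list above.
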
162 | ## Variable expansion
163 |
164 | Variable expansion is a feature in Bash that allows you to manipulate a variable's value when referencing it. Here are a few basic examples:
165 |
166 | #### Default value
167 |
168 | ```bash
169 | ${VAR:-word}
170 | ```
171 |
172 | If `VAR` is unset or null, the expansion of `word` is substituted. Otherwise, the value of `VAR` is used. Note that this form does not assign `word` to `VAR` (use `${VAR:=word}` for that).
173 |
174 | ```console
175 | myuser@hostname:~$ VAR=123
176 | myuser@hostname:~$ echo ${VAR:-undefinedValue}
177 | 123
178 | myuser@hostname:~$ unset VAR
179 | myuser@hostname:~$ echo ${VAR:-undefinedValue}
180 | undefinedValue
181 | myuser@hostname:~$ echo $VAR
182 |
183 | ```
184 |
185 | #### Default error message
186 |
187 | ```bash
188 | ${VAR:?word}
189 | ```
190 |
191 | If `VAR` is null or unset, the expansion of `word` is written to the standard error and the shell, if it is not interactive, exits.
192 |
193 | ```console
194 | myuser@hostname:~$ VAR=
195 | myuser@hostname:~$ echo ${VAR:?VAR is unset or null}
bash: VAR: VAR is unset or null
196 | myuser@hostname:~$ echo $?
1
197 | ```
198 |
199 | #### Variable substring
200 |
201 | ```bash
202 | ${parameter:offset}
203 | ${parameter:offset:length}
204 | ```
205 |
206 | This expansion allows you to extract a portion of a string variable based on a specified index and length.
207 |
208 | ```console
209 | $ string=01234567890abcdefgh
210 | $ echo ${string:7}
211 | 7890abcdefgh
212 | $ echo ${string:7:0}
213 |
214 | $ echo ${string:7:2}
215 | 78
216 | $ echo ${string:7:-2}
217 | 7890abcdef
218 | $ echo ${string: -7}
219 | bcdefgh
220 | $ echo ${string: -7:0}
221 |
222 | $ echo ${string: -7:2}
223 | bc
224 | ```
225 |
226 | #### String length
227 |
228 | ```bash
229 | ${#parameter}
230 | ```
231 |
232 | The length in characters of the expanded value of `parameter` is substituted.
233 |
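For example, with the same `string` variable from the substring examples above:

```console
$ string=01234567890abcdefgh
$ echo ${#string}
19
```
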
234 | There are [many more](https://tldp.org/LDP/abs/html/parameter-substitution.html) of them!
235 |
236 | # Exercises
237 |
238 | ### :pencil2: Dated copy
239 |
240 | Create a script that takes a valid file path as the first argument and creates a dated copy of the file.
241 | For example:
242 |
243 | ```console
244 | myuser@hostname:~$ ./datedcp.sh myfile.txt
245 | myuser@hostname:~$ ls
246 | 2022-04-30_myfile.txt
247 | ```
248 |
249 | ### :pencil2: Theater night out booking system
250 |
251 | In our course repo, copy the file under `theatre_nighout/init.sh` into an empty directory and execute it.
252 | This script creates 5 directories, each for a famous theater show.
253 | In each directory there are 50 files, representing 50 available seats for the show.
254 | Create a bash script `available_seat.sh` that takes one argument which is the name of a show and prints the available seats for the show (by simply using `ls` command).
255 | Create another bash script `booking.sh` that takes two arguments - the name of a show and a seat number.
256 |
257 | The selected seat should be marked as booked by deleting the file that represents the seat number.
258 | You should print an informative message to the user upon successful or failed booking.
259 |
260 | You can always re-run `init.sh` to test your script again.
261 |
262 | For example:
263 |
264 | ```console
265 | $ ./init.sh && cd shows
266 | $ ./available_seat.sh Hamilton
267 | Available seats for Hamilton:
268 | 1 2 3 4 5 6 7 8 9 10 ... 48 49 50
269 | $ ./booking.sh Hamilton 5
270 | Seat 5 for Hamilton has been booked!
271 | $ ./available_seat.sh Hamilton
272 | Available seats for Hamilton:
273 | 1 2 3 4 6 7 8 9 10 ... 48 49 50
274 | $ ./booking.sh Hamilton 5
275 | Error: Seat 5 for Hamilton is already booked!
276 | ```
277 |
278 |
--------------------------------------------------------------------------------
/tutorials/docker_compose.md:
--------------------------------------------------------------------------------
1 | # Docker compose brief
2 |
3 | Are you tired of executing `docker run` and `docker build` over and over again? So are we.
4 |
5 | [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container Docker applications.
6 | With Compose, you use a YAML file to configure your application's services.
7 | Then, with a single command, you create and start all the services from your configuration.
8 |
9 | Using Compose is essentially a three-step process:
10 |
11 | 1. Define your app's environment with a `Dockerfile` so it can be reproduced anywhere.
12 | 2. Define the services that make up your app in `docker-compose.yml` so they can be run together in an isolated environment.
13 | 3. Run `docker compose up` and the Docker compose command starts and runs your entire app.
14 |
15 | A `docker-compose.yml` looks like this:
16 |
17 | ```yaml
18 | version: "3.9"
19 | services:
20 | web:
21 | build: .
22 | ports:
23 | - "8000:5000"
24 | volumes:
25 | - .:/code
26 | - logvolume01:/var/log
27 | depends_on:
28 | - redis
29 | redis:
30 | image: redis
31 | volumes:
32 | logvolume01: { }
33 | ```
34 |
35 | The given Docker Compose file describes a multi-service application with two services: `web` and `redis`.
36 | Here's a breakdown of some components:
37 |
38 | 1. `version: "3.9"` specifies the version of the Docker Compose syntax being used.
39 | 2. `services:` begins the services section, where you define the containers for your application.
40 | 3. `web:` defines a service named `web`. This service is built from the current directory (`build: .`) using a Dockerfile.
41 | 4. `volumes:` defines volume mappings between the host and container.
42 | 5. `redis:` defines a service named `redis`. This service uses the official Redis image.
43 |
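Once such a file is in place, a typical day-to-day workflow looks like this (an illustrative sketch, assuming the Compose V2 CLI, where `compose` is a `docker` subcommand):

```bash
docker compose up -d      # build (if needed) and start all services in the background
docker compose ps         # list the running services
docker compose logs web   # inspect the logs of the "web" service
docker compose down       # stop and remove the containers and the default network
```
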
44 | ## Compose benefits
45 |
46 | Using Docker Compose offers several benefits:
47 |
48 | - **Simplified Container Orchestration**: Docker Compose allows for the definition and management of multi-container applications as a single unit.
49 | - **Reproducible Environments**: Since the setup is defined in a YAML file, it's easy to deploy the same environment on different machines without missing any `docker run` command. This ensures that the application runs consistently across different machines.
50 | - **Automate Volumes and Networking**: Docker Compose automatically creates a network for the application and assigns a unique DNS name to each service. No need to create networks and volumes.
51 |
52 | ## Introducing YAML
53 |
54 | YAML (YAML Ain't Markup Language) is a human-readable data format commonly used for configuration files and data exchange between systems.
55 | YAML is a cousin of JSON: it uses indentation and key-value pairs to represent structured data, very similarly to JSON, but with a cleaner syntax.
56 |
57 | To prepare for working with Docker Compose, and later with Kubernetes, it is important to familiarize yourself with YAML syntax, which is the preferred syntax for writing Docker Compose files.
58 |
59 | Find your favorite tutorial (there are hundreds of YouTube videos and written tutorials) to learn some basic YAML syntax (it takes no more than 20 minutes).
60 |
61 | ## Try compose
62 |
63 | In this course we will briefly introduce Docker Compose only as a stepping stone towards learning Kubernetes.
64 | Docker Compose is a nice tool to get familiar with the concept of container orchestration and deployment.
65 | However, Kubernetes (k8s) is a far more powerful orchestration platform, which will be learned later in the course.
66 |
67 | The official Docker Compose website provides a good getting-started tutorial, explaining key concepts and demonstrating how to write a Compose file.
68 | Complete the tutorial:
69 | https://docs.docker.com/compose/gettingstarted/
70 |
71 |
72 | # Exercises
73 |
74 | ### :pencil2: Nginx, NetflixFrontend, NetflixMovieCatalog
75 |
76 | Create a `docker-compose.yaml` file for the following:
77 |
78 | ![][docker_nginx_frontend_catalog]
79 |
80 |
81 | [docker_nginx-flask-mongo]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/docker_nginx-flask-mongo.png
82 | [docker_nginx_frontend_catalog]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/docker_nginx_frontend_catalog.png
83 |
84 |
85 |
--------------------------------------------------------------------------------
/tutorials/docker_networking.md:
--------------------------------------------------------------------------------
1 | # Docker Networking
2 |
3 | ## Docker network sandbox and drivers
4 |
5 | It's time to face hard questions about container networking.
6 | The core idea of containers is **isolation**, so how can a container communicate with other containers over the network?
7 | How can a container communicate with the public internet?
8 | It must be using the host machine network resources.
9 | How can that be implemented while keeping the host machine secure enough?
10 |
11 | Docker implements a virtualized layer for container networking, which enables communication and connectivity between Docker containers, as well as between containers and the external network.
12 | It includes all the traditional stack we know - a unique IP address for each container, a virtual network interface that the container sees, a default gateway and a route table.
13 |
14 | Below is the virtualized model scheme. In Docker, this model is implemented by the [`libnetwork`](https://github.com/moby/libnetwork) library:
15 |
16 | ![][docker_sandbox]
17 |
18 | A **sandbox** is an isolated network stack. It includes Ethernet interfaces, ports, routing tables, and DNS configurations.
19 |
20 | **Network Interfaces** are virtual network interfaces (e.g. `veth`).
21 | Like normal network interfaces, they're responsible for making connections between the container and the rest of the world.
22 |
23 | Network interfaces connect the sandbox to networks.
24 |
25 | A **Network** is a group of network interfaces that are able to communicate with each other directly.
26 | An implementation of a Network could be a Linux bridge, a VLAN, etc.
27 |
28 | This networking architecture is not exclusive to Docker. Docker is based on an open-source pluggable architecture called the [**Container Network Model** (CNM)](https://github.com/moby/libnetwork/blob/master/docs/design.md).
29 |
30 | The networks that containers are connecting to are pluggable, using **network drivers**.
31 | This means that a given container can communicate with different kinds of networks, depending on the driver.
32 | Here are a few common network drivers docker supports:
33 |
34 | - [`bridge`](https://docs.docker.com/network/bridge/): This network driver connects containers running on the **same** host machine. If you don't specify a driver, this is the default network driver.
35 |
36 | - [`host`](https://docs.docker.com/network/host/): This network driver connects the containers to the host machine network - there is no isolation between the
37 | container and the host machine, and the container uses the host's networking directly.
38 |
39 | - [`overlay`](https://docs.docker.com/network/overlay/): Overlay networks connect multiple containers on **different machines**,
40 | as if they are running on the same machine and can talk locally.
41 |
42 | - [`none`](https://docs.docker.com/network/none/): This driver disables the networking functionality in a container.
43 |
44 | ## The Bridge network driver
45 |
46 | The Bridge network driver is the default network driver used by Docker.
47 | It creates an internal network bridge on the host machine and assigns a unique IP address to each container connected to that bridge.
48 | Containers connected to the Bridge network driver can communicate with each other using these assigned IP addresses.
49 | The driver also enables containers to communicate with the external network through port mapping or exposing specific ports.
50 |
51 | The [default bridge network](https://docs.docker.com/network/network-tutorial-standalone/#use-the-default-bridge-network) official tutorial demonstrates how to use the default bridge network that Docker sets up for you automatically.
52 |
53 | The [user-defined bridge networks](https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks) official tutorial shows how to create and use your own custom bridge networks, to connect containers running on the same host machine.
54 |
55 | Complete both **Use the default bridge network** and **Use user-defined bridge networks** tutorials.
56 |
57 | ## The Host network driver
58 |
59 | The Host network driver is a network mode in Docker where a container shares the network stack of the host machine.
60 | When a container is run with the Host network driver, it bypasses Docker's virtual networking infrastructure and directly uses the network interfaces of the host.
61 | This allows the container to have unrestricted access to the host's network interfaces, including all network ports. However, it also means that the container's network stack is not isolated from the host, which can introduce security risks.
62 |
63 | Complete Docker's short tutorial that demonstrates the use of the host network driver:
64 | https://docs.docker.com/network/network-tutorial-host/
65 |
66 | ## IP address and hostname
67 |
68 | By default, the container is allocated an IP address for every Docker network it attaches to.
69 | A container receives an IP address out of the IP pool of the network it attaches to.
70 | The Docker daemon effectively acts as a DHCP server for each container.
71 | Each network also has a **default subnet** mask and **gateway**.
72 |
73 | As you've seen in the tutorials, when a container starts, it can only attach to a single network, using the `--network` flag.
74 | You can connect a running container to multiple networks using the `docker network connect` command.
75 |
76 | In the same way, a container's hostname defaults to the container's ID in Docker.
77 | You can override the hostname using `--hostname`.
78 |
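For example, a minimal sketch (container and network names are arbitrary):

```bash
# Create a user-defined bridge network
docker network create my-net

# Start a container attached to it, with an explicit hostname
docker run -d --name web1 --network my-net --hostname web1 nginx

# Attach the same running container to an additional network
docker network create my-other-net
docker network connect my-other-net web1

# Inspect the networks and IP addresses the container is attached to
docker inspect web1 --format '{{json .NetworkSettings.Networks}}'
```
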
79 | ## DNS services
80 |
81 | Containers that are connected to the default bridge network inherit the DNS settings of the host, as defined in the `/etc/resolv.conf` configuration file in the host machine (they receive a copy of this file).
82 | Containers that attach to a custom network use Docker's embedded DNS server.
83 | The embedded DNS server forwards external DNS lookups to the DNS servers configured on the host machine.
84 |
85 | Custom hosts, defined in `/etc/hosts` on the host machine, aren't inherited by containers.
86 | To pass additional hosts into a container, refer to [add entries to container hosts file](https://docs.docker.com/engine/reference/commandline/run/#add-host) in the `docker run` reference documentation.
87 |
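For example, an extra host entry (the hostname and IP below are illustrative) can be injected at run time:

```bash
# Add a custom entry to the container's /etc/hosts file
docker run --rm --add-host my-db:10.0.0.12 busybox cat /etc/hosts
```
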
88 |
89 | ## The `EXPOSE` Dockerfile instructions
90 |
91 | The [`EXPOSE` instruction](https://docs.docker.com/engine/reference/builder/#expose) informs Docker that the container listens on the specified network ports at runtime.
92 |
93 | The `EXPOSE` instruction **does not** actually publish the port.
94 | It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
95 | To actually publish the port when running the container, use the `-p` flag on docker run to publish and map one or more ports.
96 |
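For example (the image name and port numbers are illustrative), publishing happens only at run time:

```bash
# Map port 8080 on the host to port 5000 inside the container
docker run -d -p 8080:5000 --name myapp myimage

# Or publish all ports declared with EXPOSE to random available host ports
docker run -d -P --name myapp2 myimage
```
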
97 |
98 | # Exercises
99 |
100 | ### :pencil2: Inspecting container networking
101 |
102 | Run the `busybox` image by:
103 |
104 | ```bash
105 | docker run -it busybox /bin/sh
106 | ```
107 |
108 | 1. On which network is this container running?
109 | 2. What is the name of the network interface that the container is connected to, as it is seen from the host machine?
110 | 3. What is the name of the network interface that the container is connected to as it is seen from within the container?
111 | 4. What is the IP address of the container?
112 | 5. Using the `route` command, what is the default gateway IP that the container uses to access the internet?
113 | 6. Provide evidence that the container's default gateway IP is the IP address of the default bridge network on the host machine the container is running on.
114 | 7. What are the IP address(es) of the DNS servers the container uses to resolve hostnames? Provide evidence that they are identical to the host machine's DNS servers.
115 | 8. Create a new bridge network, and connect your running container to this network.
116 | 9. Provide evidence that the container has been connected successfully to the created network.
117 | 10. From the host machine, try to `ping` the container using both its IP addresses.
118 | 11. After you've connected the container to a custom bridge network, what is the IP address of the DNS server the container uses to resolve hostnames? What does that imply?
119 |
120 | ### :pencil2: Nginx, NetflixFrontend, NetflixMovieCatalog
121 |
122 | Your goal is to run the following architecture (locally):
123 |
124 | ![][docker_nginx_frontend_catalog]
125 |
126 | - The Nginx and NetflixFrontend should be connected to a custom bridge network called `public-net-1` network.
127 | - In addition, the NetflixFrontend app and the NetflixMovieCatalog should be connected to a custom bridge network called `private-net-1` network.
128 | - The Nginx should talk with NetflixFrontend using the `netflix-frontend` hostname.
129 | - The NetflixFrontend app should talk to the NetflixMovieCatalog using the `netflix-catalog` hostname.
130 |
131 |
132 | [docker_sandbox]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/docker_sandbox.png
133 | [docker_cache]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/docker_cache.png
134 | [docker_nginx_frontend_catalog]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/docker_nginx_frontend_catalog.png
--------------------------------------------------------------------------------
/tutorials/git_remotes.md:
--------------------------------------------------------------------------------
1 | # Working with remotes: Git and GitHub
2 |
3 | Remote repositories are versions of your project that are hosted somewhere (GitHub, GitLab, BitBucket, Gitea, and many more...).
4 |
5 | Throughout this tutorial we'll work with, you guessed right: GitHub!
6 |
7 | ## Clone and Fork
8 |
9 | In the previous tutorial you've already cloned the repo we are practicing on. If you haven't, you can do it by:
10 |
11 | ```bash
12 | git clone https://github.com/exit-zero-academy/NetflixMovieCatalog
13 | ```
14 |
15 | When cloning a repo, Git keeps a reference to the original repository under the name `origin`:
16 |
17 | ```console
18 | $ git remote -v
19 | origin https://github.com/exit-zero-academy/NetflixMovieCatalog (fetch)
20 | origin https://github.com/exit-zero-academy/NetflixMovieCatalog (push)
21 | ```
22 |
23 | Later on, you can push commits of some branch to the `origin` remote (only if you are authorized to do so), for example:
24 |
25 | ```bash
26 | git push origin main
27 | ```
28 |
29 | **Forking** creates a personal copy of someone else's repository on your GitHub account.
30 | Unlike cloning, which keeps a reference to the original repository under the name `origin`, forking creates a distinct copy under your GitHub username.
31 |
32 | Fork the [NetflixMovieCatalog](https://github.com/exit-zero-academy/NetflixMovieCatalog) repository by clicking the **Fork** button on the repository's GitHub page.
33 |
34 | After forking, you can clone your forked repository:
35 |
36 | ```bash
37 | git clone https://github.com/yourusername/NetflixMovieCatalog
38 | ```
39 |
40 | **From now on, unless specified otherwise, you should work on your forked repo.**
41 |
42 | ## Push to remotes
43 |
44 | Let's make some changes on branch `main` for your forked repo, commit and push them to remote by:
45 |
46 | ```bash
47 | git push origin main
48 | ```
49 |
50 | **Having issues pushing?** You have to authenticate against your remote server first.
51 | You can do it either by SSH or HTTPS (username and access token), depending on the protocol used when you cloned the repo.
52 |
53 | Perform the `git remote -v` command and check the URL to determine which authentication method you should use.
54 |
55 | You use HTTPS if you see something like:
56 |
57 | ```console
58 | $ git remote -v
59 | origin https://github.com/yourusername/NetflixMovieCatalog (fetch)
60 | origin https://github.com/yourusername/NetflixMovieCatalog (push)
61 | ```
62 |
63 | Otherwise, if you see something like the below example, you use SSH:
64 |
65 | ```console
66 | $ git remote -v
67 | origin git@github.com:yourusername/NetflixMovieCatalog.git (fetch)
68 | origin git@github.com:yourusername/NetflixMovieCatalog.git (push)
69 | ```
70 |
71 | #### Option I: HTTPS (username and access token)
72 |
73 | 1. Create a [Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).
74 | 2. Push your changes by: `git push origin main`.
75 |
76 | By default, the push command will prompt you for your username and token.
77 |
78 | You can keep credentials stored in memory so that further push commands do not require authentication:
79 |
80 | ```bash
81 | git config --global credential.helper cache
82 | ```
83 |
84 | That way, **your credentials are never stored on disk**, and they are purged from the cache after 15 minutes.
85 |
86 | If you want to save the credentials on disk ([as a plain-text file](https://git-scm.com/docs/git-credential-store)):
87 |
88 | ```bash
89 | git config --global credential.helper store
90 | ```
91 |
92 | That way, the stored credentials **never expire**.
93 |
94 | > [!NOTE]
95 | > The above configurations are stored under `~/.gitconfig` file. This file includes user preferences, credential helpers [and more](https://git-scm.com/docs/git-config).
96 |
97 | #### Option II: SSH
98 |
99 | Generate new SSH keys by:
100 |
101 | ```bash
102 | ssh-keygen -t ed25519 -C "your_email@example.com"
103 | ```
104 |
105 | [Add the public key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account).
106 |
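You can verify that GitHub accepts your key by opening a test SSH connection (the exact greeting may vary):

```console
$ ssh -T git@github.com
Hi yourusername! You've successfully authenticated, but GitHub does not provide shell access.
```
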
107 | ## Remote branches
108 |
109 | When you clone the repository, git fetches all the history that exists in the repo, including all branches.
110 |
111 | You can list all remote branches by:
112 |
113 | ```console
114 | $ git branch -a
115 | * main
116 | remotes/origin/HEAD -> origin/main
117 | remotes/origin/main
118 | ```
119 |
120 | In the above output, you can see your local branches (e.g. `main`) as well as its corresponding **Remote-Tracking** branch: `remotes/origin/main` (or `origin/main` for short).
121 |
122 | Remote-tracking branches are local copies of branches as they exist on the remote (on GitHub in our case).
123 | They take the form `<remote>/<branch>`.
124 |
125 | Let's visualize it.
126 |
127 | Say you've just cloned a fresh copy of our repo:
128 |
129 | ![][git_remote1]
130 |
131 | If you do some work on your local `main` branch, and, in the meantime, someone else pushes to GitHub, then your histories move forward differently.
132 | Also, as long as you stay out of contact with your `origin` server, your `origin/main` pointer doesn't move:
133 |
134 | ![][git_remote2]
135 |
136 | To synchronize your work with a given remote, you run the `git fetch origin` command.
137 | This command fetches any data from `origin` that you don't yet have, and updates your local remote-tracking branches, moving your `origin/main` pointer to its new, more up-to-date position.
138 |
139 | ![][git_remote3]
140 |
141 | Then, if you want to merge the commits of `origin/main` (which reflects the `main` branch as it is seen on GitHub), you run the `git merge origin/main` command:
142 |
143 | ![][git_remote4]
144 |
145 | **Tip**: Instead of running the `git fetch` and immediately after the `git merge` command, you can run `git pull`, which essentially does the same.
146 |
147 | It's important to note that when you do a clone/fetch that brings down new remote-tracking branches, you don't automatically have local, editable copies of them.
148 |
149 | Let's say you want to work on one of the branches which exist on GitHub, for example `origin/test`.
150 | In this case, you don't yet have a local branch `test` to work on - you have only the `origin/test` branch, which can't be modified directly.
151 |
152 | In order to work on that branch, you need to check it out as a new local branch, while you base it on your remote-tracking branch:
153 |
154 | ```console
155 | $ git checkout -b test origin/test
156 | Branch test set up to track remote branch test from origin.
157 | Switched to a new branch 'test'
158 | ```
159 |
160 | Git automatically creates `test` as what is called a **tracking branch** (and the branch it tracks, `origin/test`, is called an **upstream branch**).
161 | Tracking branches are local branches that have a direct relationship to a remote branch.
162 | If you're on a tracking branch and `push` it, Git automatically knows which remote branch to push to.
163 |
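You can check which local branches track which upstream branches with `git branch -vv` (the output below is illustrative):

```console
$ git branch -vv
  main  a1b2c3d [origin/main] Update README
* test  e4f5a6b [origin/test] Fix port number
```
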
164 | ## Multiple remotes
165 |
166 | Let's say the original NetflixMovieCatalog repo, the one you've forked from, has some new commits that you don't have in your fork.
167 | This is a very common scenario when developers are working on new features on a project that was forked in the past.
168 |
169 | Now you want to be up-to-date with the original project (the **upstream** repo), to receive the new features and bugfixes.
170 |
171 | You can achieve that by adding another **remote** to your fork, as follows:
172 |
173 | ```console
174 | $ git remote add upstream https://github.com/exit-zero-academy/NetflixMovieCatalog
175 | $ git remote -v
176 | origin https://github.com/yourusername/NetflixMovieCatalog (fetch)
177 | origin https://github.com/yourusername/NetflixMovieCatalog (push)
178 | upstream https://github.com/exit-zero-academy/NetflixMovieCatalog (fetch)
179 | upstream https://github.com/exit-zero-academy/NetflixMovieCatalog (push)
180 | ```
181 |
182 | Now your local clone has 2 remotes: `origin`, which represents your fork, and `upstream`, which represents the original repo.
183 |
184 | **Note**: `upstream` is a very common remote name for the original repository.
185 |
186 | Now you can use `upstream` in the pull/fetch commands to receive updates:
187 |
188 | ```console
189 | $ git fetch upstream
190 | remote: Counting objects: 43, done.
191 | remote: Compressing objects: 100% (36/36), done.
192 | remote: Total 43 (delta 10), reused 31 (delta 5)
193 | Unpacking objects: 100% (43/43), done.
194 | From https://github.com/exit-zero-academy/NetflixMovieCatalog
195 | ```
196 |
197 | If you want to update your `main` branch according to `upstream`'s main:
198 |
199 | ```bash
200 | git checkout main
201 | git merge upstream/main
202 | ```
203 |
204 | # Exercises
205 |
206 | ### :pencil2: Working as a team (to be done with friends)
207 |
208 | Work with your friend. Use either Pycharm UI or CLI.
209 |
210 | 1. Add your friend as a collaborator to your fork repo.
211 | 2. Ask him/her to clone a fresh copy.
212 | 3. Ask them to commit and push some changes to the `main` branch. In the meantime, you also commit some new changes **but don't push** them; let your friend push first.
213 | 4. After your friend has pushed, try to push yourself. What happened? Why? How can you proceed?
214 | 5. You and your friend should now commit some changes which introduce conflicts (both of you edit the same line of the same file, e.g. change the port number in `app.py` to different numbers).
215 | 6. Ask your friend to commit and push first, you only commit, don't push.
216 | 7. Now pull, resolve the conflict (it's recommended to use Pycharm's UI).
217 |
218 |
219 | [git_remote1]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/git_remote1.png
220 | [git_remote2]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/git_remote2.png
221 | [git_remote3]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/git_remote3.png
222 | [git_remote4]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/git_remote4.png
--------------------------------------------------------------------------------
/tutorials/jenkins_test_pipeline.md:
--------------------------------------------------------------------------------
1 | # Pull Request testing
2 |
3 | ## Motivation
4 |
5 | Let's review the Git workflow we've implemented throughout the course:
6 |
7 | ![][git_envbased]
8 |
9 | 1. Developers branch out from an up-to-date `main` branch into their feature branch.
10 | 2. They commit changes into their feature branch.
11 | 3. At some point, they want to test their changes in the Development environment. They merge the feature branch into the `dev` branch, and push to remote.
12 | 4. After the changes have been tested in the development environment and a few more fixes have been committed, the developer creates a Pull Request from their feature branch into `main`.
13 | 5. The `main` branch can be deployed to the production environment directly after the merge.
14 |
15 | A Pull Request is a crucial point for testing and reviewing code changes before they are merged into the `main` branch (and deployed to production systems from there).
16 |
17 | Let's build a **testing pipeline** on an opened Pull Request.
18 |
19 | So far we've seen how pipelines can be built around a single branch (e.g. `main`).
20 | Now we would like to create a new pipeline which will be triggered on **every PR branch** that is created in GitHub.
21 | For that we will utilize Jenkins [multi-branch pipeline](https://www.jenkins.io/doc/book/pipeline/multibranch/).
22 |
23 | ## Create a multi-branch pipeline
24 |
25 | 1. In the [NetflixFrontend][NetflixFrontend] repo, create the `pipelines/test.Jenkinsfile` pipeline as follows:
26 |
27 | ```text
28 | pipeline {
29 | agent any
30 |
31 | stages {
32 | stage('Tests before build') {
33 | parallel {
34 | stage('Unittest') {
35 | steps {
36 | sh 'echo unittesting...'
37 | }
38 | }
39 | stage('Lint') {
40 | steps {
41 | sh 'echo linting...'
42 | }
43 | }
44 | }
45 | }
46 | stage('Build and deploy to Test environment') {
47 | steps {
48 | sh 'echo trigger build and deploy pipelines for test environment... wait until successful deployment'
49 | }
50 | }
51 | stage('Tests after build') {
52 | parallel {
53 | stage('Security vulnerabilities scanning') {
54 | steps {
55 | sh 'echo scanning for vulnerabilities...'
56 | }
57 | }
58 | stage('API test') {
59 | steps {
60 | sh 'echo testing API...'
61 | }
62 | }
63 | stage('Load test') {
64 | steps {
65 | sh 'echo testing under load...'
66 | }
67 | }
68 | }
69 | }
70 | }
71 | }
72 | ```
73 |
74 | To save time and compute resources, we used the [`parallel`](https://www.jenkins.io/doc/book/pipeline/syntax/#parallel) directive to run the test stages in parallel, while failing the whole build when one of the stages fails.
75 |
76 |
77 | 2. Commit and push your changes.
78 | 3. From the Jenkins dashboard page, choose **New Item**, and create a **Multibranch Pipeline** named `NetflixFrontendTesting`.
79 | 4. Under **Branch Sources** choose **Add source**, then **GitHub**.
80 | 5. Choose your GitHub credentials.
81 | 6. Under **Repository HTTPS URL**, enter your NetflixFrontend repo URL.
82 | 7. Under **Behaviors**, delete all behaviors other than **Discover pull requests from origin**. Configure this behavior to **Merging the pull request with the target branch revision**.
83 | 8. Under **Build Configuration**, specify the path to the testing Jenkinsfile.
84 | 9. Create the pipeline.
85 |
86 | ### Test the pipeline
87 |
88 | 1. From branch `main`, create a new branch and change some code lines. Push the branch to remote.
89 | 1. In your app GitHub page, create a Pull Request from your branch into `main`.
90 | 1. Watch the triggered pipeline in Jenkins.
91 |
92 | ## Protect branch `main`
93 |
94 | We would also like to protect the `main` branch from having untested branches merged and pushed into it.
95 |
96 | 1. From GitHub main repo page, go to **Settings**, then **Branches**.
97 | 2. **Add branch protection rule** for the `main` branch as follows:
98 | 1. Check **Require a pull request before merging**.
99 | 2. Check **Require status checks to pass before merging** and search the `continuous-integration/jenkins/pr-merge` check done by Jenkins.
100 | 3. Save the protection rule.
101 |
102 | Your `main` branch is now protected and no code can be pushed into it unless the PR is reviewed by another team member and passes all automatic tests done by Jenkins.
103 |
104 | ## Automated testing
105 |
106 | Automated testing is a very broad topic. In this section we will lightly cover 2 types of testing: **code linting** and **security vulnerability scanning**.
107 |
108 | ### Code linting (in Node.js)
109 |
110 | [ESLint](https://eslint.org/) is a [static code analyser](https://en.wikipedia.org/wiki/Static_program_analysis) for JavaScript.
111 | ESLint analyzes your code **without actually running it**.
112 | It checks for syntax errors, enforces a coding standard, and can make suggestions about how the code could be refactored.
113 |
114 | Linting your NetflixFrontend code is simply done by executing the `npm run lint` command from the root directory of the repo.
115 |
116 | - Integrate the linting check in `test.Jenkinsfile` under the **Lint** stage.
117 | - Note that the `npm` command is required to be available in the Jenkins runtime; either install it on the fly using `apt`, or edit the `jenkins-agent.Dockerfile` accordingly in order to install it in the agent image beforehand.
118 | - The lint results should be written to a `lintingResult.xml` file; use the [junit plugin](https://plugins.jenkins.io/junit/) to publish the results in your Jenkins dashboard:
119 |
120 | ```diff
121 | stage('Lint') {
122 | steps {
123 | ...
124 | }
125 | + post {
126 | + always {
127 | + junit 'lintingResult.xml'
128 | + }
129 | + }
130 | }
131 | ```
132 |
133 | ### Security vulnerabilities scanning
134 |
135 | If you haven't done so yet, learn how to use the [Docker Scout](https://docs.docker.com/scout/quickstart/) tool.
136 |
137 | Complete the `Security vulnerabilities scanning` stage to [integrate Docker Scout with Jenkins](https://docs.docker.com/scout/integrations/ci/jenkins/).
138 |
139 | [git_envbased]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/git_envbased.png
140 | [NetflixFrontend]: https://github.com/exit-zero-academy/NetflixFrontend
141 |
142 |
--------------------------------------------------------------------------------
/tutorials/k8s_argocd.md:
--------------------------------------------------------------------------------
1 | # ArgoCD and CI/CD pipelines for Kubernetes
2 |
3 | ArgoCD is a continuous delivery tool for Kubernetes applications.
4 |
5 | ArgoCD monitors your Git repo where the Kubernetes YAML manifests are defined, and syncs your cluster according to those manifests.
6 | Your Git repo should always represent the desired application state. This pattern is known as **GitOps**, in which Git is the source of truth for your service definitions.
7 |
8 | 1. First, let's create a dedicated GitHub repository to store all Kubernetes YAML manifests. You can name it `NetflixDevOps` or `NetflixInfra` (short for Infrastructure).
9 | The repo file structure might look like:
10 |
11 | ```text
12 | NetflixInfra/
13 | ├── k8s/
14 | │ ├── NetflixFrontend/
15 | │ │ ├── deployment.yaml
16 | │ │ ├── service.yaml
17 | │ └── NetflixMovieCatalog/
18 | │ ├── deployment.yaml
19 | │ └── service.yaml
20 | ```
21 |
22 | 2. Now, let's install ArgoCD in your Kubernetes cluster by:
23 |
24 | ```bash
25 | kubectl create namespace argocd
26 | kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
27 | ```
28 |
29 | 3. Visit the UI server by:
30 |
31 | ```bash
32 | kubectl port-forward svc/argocd-server -n argocd 8080:443
33 | ```
34 |
35 | The username is `admin`, the initial password can be retrieved by:
36 |
37 | ```bash
38 | kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode
39 | ```
40 |
41 | 4. In the ArgoCD UI, after logging in, click the **+ New App** button:
42 |
43 | - Give your app the name `netflix-frontend`, use the project `default`, and change the sync policy to **Automatic**.
44 | - Connect your `NetflixInfra` repo to Argo CD by setting the repository URL to your GitHub repo URL, and set the path to `k8s/NetflixFrontend/` (the path containing your YAML manifests for the frontend service).
45 | - For **Destination**, set the cluster URL to `https://kubernetes.default.svc` and the namespace to `default` (the k8s namespace in which you'll deploy your services).
46 | - After filling out the information above, click **Create**.
47 | 5. Repeat the above process for the `NetflixMovieCatalog` service.
48 | 6. Test your app definition by updating one of your YAML manifests, commit and push it.
49 | Wait for Argo to automatically deploy your changes into the cluster.
50 |
51 |
52 | # Exercises
53 |
54 | ### :pencil2: CI/CD pipeline for Kubernetes
55 |
56 | Create a CI/CD pipeline for the NetflixFrontend service based on the below GitHub Actions workflow:
57 |
58 | ```yaml
59 | name: NetflixFrontend stack build-deploy
60 |
61 | on:
62 | push:
63 | branches:
64 | - main
65 |
66 | permissions:
67 | contents: write
68 |
69 | jobs:
70 | build:
71 | runs-on: ubuntu-latest
72 |
73 | steps:
74 | - uses: actions/checkout@v3
75 | - name: Build and push docker images
76 | run: |
77 | # TODO build docker image
78 |
79 | - name: Checkout infrastructure repo
80 | uses: actions/checkout@v3
81 | with:
82 | repository: YOUR_NETFLIX_INFRA_REPO # TODO change me
83 | token: ${{ secrets.REPO_TOKEN }} # REPO_TOKEN is a GitHub access token stored as a repository secret.
84 | path: ./NetflixInfra
85 |
86 | - name: Update YAML manifests
87 | run: |
88 | cd ./NetflixInfra
89 | # TODO commit & push changes to infra repo
90 |
91 |
92 | - name: Commit and Push changes
93 | run: |
94 | cd ./NetflixInfra
95 | # TODO commit & push changes to infra repo
96 | ```
97 |
--------------------------------------------------------------------------------
/tutorials/k8s_helm.md:
--------------------------------------------------------------------------------
1 | # Helm - The Kubernetes Package Manager
2 |
3 | ## Motivation
4 |
5 | **Helm** is a "package manager" for Kubernetes. Here are some of the main features of the tool:
6 |
7 | - **Helm Charts**: Instead of dealing with numerous YAML manifests, which can be a complex task, Helm introduces the concept of a "Package" (known as **Chart**) – a cohesive group of related YAML manifests that collectively define a single application within the cluster.
8 | For example, an application might consist of a Deployment, Service, HorizontalPodAutoscaler, ConfigMap, and Secret.
9 | These manifests are interdependent and essential for the seamless functioning of the application.
10 | Helm encapsulates this collection, making it easier to manage, version, and deploy as a unit.
11 |
12 | - **Sharing Charts**: Helm allows you to share your charts, or use others' charts. Want to deploy a MongoDB server on your cluster? Someone has already done it before; you can use their Helm Chart with your own configuration values to deploy MongoDB.
13 |
14 | - **Dynamic manifests**: Helm allows you to create reusable templates with placeholders for configuration values. For example:
15 | ```yaml
16 | apiVersion: v1
17 | kind: Service
18 | metadata:
19 | name: {{ .Values.serviceName }} # the service value will be dynamically placed when applying the manifest
20 | spec:
21 | selector:
22 | app: {{ .Values.Name }} # the selector value will be dynamically placed when applying the manifest
23 | ...
24 | ```
25 |
26 | This becomes very useful when dealing with multiple clusters that share similar configurations (Dev, Prod, Test clusters).
27 | Instead of duplicating YAML files for each cluster, Helm enables the creation of parameterized templates.
28 |
29 | ## Install Helm
30 |
31 | https://helm.sh/docs/intro/install/
32 |
33 | ## Helm Charts
34 |
35 | Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources.
36 | A single chart might be used to deploy a single application in the cluster.
37 |
38 | ### Deploy Grafana using Helm
39 |
40 | **Remove any existed grafana Deployment, StatefulSet or Service before you start.**
41 |
42 | Like DockerHub, there is an open-source community Hub for Charts of famous applications.
43 | It's called [Artifact Hub](https://artifacthub.io/packages/search?kind=0), check it out.
44 |
45 | Let's review and install the [official Helm Chart for Grafana](https://artifacthub.io/packages/helm/grafana/grafana).
46 |
47 | To deploy the Grafana Chart, you first have to add the **Repository** in which this Chart exists.
48 | A Helm Repository is the place where Charts can be collected and shared.
49 |
50 | ```bash
51 | helm repo add grafana-charts https://grafana.github.io/helm-charts
52 | helm repo update
53 | ```
54 |
55 | The `grafana-charts` Helm Repository contains many different Charts maintained by the Grafana organization.
56 | [Among the different Charts](https://artifacthub.io/packages/search?repo=grafana&sort=relevance&page=1) of this repo, there is a Chart used to deploy the Grafana server in Kubernetes.
57 |
58 | Deploy the `grafana` Chart by:
59 |
60 | ```bash
61 | helm install grafana grafana-charts/grafana
62 | ```
63 |
64 | The command syntax is as follows: `helm install <release-name> <repo>/<chart>`.
65 |
66 | Whenever you install a Chart, a new **Release** is created.
67 | In the above command, the Grafana server has been released under the name `grafana`, using the `grafana` Chart from the `grafana-charts` repo.
68 |
69 | During installation, the Helm client will print useful information about which resources were created, what the state of the Release is, and also whether there are additional configuration steps you can or should take.
70 |
71 | #### Try it yourself
72 |
73 | Review the release's output. Then use `port-forward` to visit the Grafana server.
74 |
75 | ### Customize the Grafana release
76 |
77 | When you installed the Grafana Chart, the server was released with default configurations that the Chart author decided for you.
78 |
79 | A typical Chart contains hundreds of different configurations, e.g. container environment variables, custom secrets, etc.
80 |
81 | Obviously, you want to customize the Grafana release according to your needs.
82 | A good Helm Chart should allow you to configure the Release according to your own configurations.
83 |
84 | To see what options are configurable on a Chart, go to the [Chart's documentation page](https://artifacthub.io/packages/helm/grafana/grafana), or use the `helm show values grafana-charts/grafana` command.
85 |
86 | Let's override some of the default configurations by specifying them in a YAML file, and then pass that file during the Release upgrade:
87 |
88 | ```yaml
89 | # k8s/grafana-values.yaml
90 |
91 | persistence:
92 | enabled: true
93 | size: 2Gi
94 |
95 | env:
96 | GF_DASHBOARDS_VERSIONS_TO_KEEP: 10
97 |
98 | ```
99 |
100 | The above values configure the Grafana server data to be persistent, and define a Grafana-related environment variable.
101 |
102 | To apply the new Chart values, `upgrade` the Release:
103 |
104 | ```bash
105 | helm upgrade -f grafana-values.yaml grafana grafana-charts/grafana
106 | ```
107 |
108 | An `upgrade` takes an existing Release and upgrades it according to the information you provide.
109 | Because Kubernetes Charts can be large and complex, Helm tries to perform the least invasive upgrade.
110 | It will only update things that have changed since the last release.
111 |
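You can inspect your releases and their revision history at any point (output below is illustrative and abridged):

```console
$ helm list
NAME     NAMESPACE  REVISION  STATUS    CHART
grafana  default    2         deployed  grafana-6.x.x
$ helm history grafana
REVISION  STATUS      CHART          DESCRIPTION
1         superseded  grafana-6.x.x  Install complete
2         deployed    grafana-6.x.x  Upgrade complete
```
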
112 | #### Try it yourself
113 |
114 | Review the [Official Grafana Helm values](https://artifacthub.io/packages/helm/grafana/grafana), and add more Chart values overrides in `grafana-values.yaml` to achieve following configurations:
115 |
116 | 1. The deployed Grafana [image tag version](https://hub.docker.com/r/grafana/grafana/tags) is higher than `8.0.0`.
117 | 2. The [`redis-datasource`](https://grafana.com/grafana/plugins/redis-datasource/?tab=overview) plugin is installed.
118 | 3. A redis datasource is configured to collect metrics from the `redis-cart` service.
119 |
120 |
121 | If something does not go as planned during a release, it is easy to roll back to a previous release using `helm rollback [RELEASE] [REVISION]`:
122 |
123 | ```shell
124 | helm rollback grafana 1
125 | ```
126 |
127 | To uninstall the Chart release:
128 |
129 | ```shell
130 | helm uninstall grafana
131 | ```
132 |
133 |
134 | # Exercises
135 |
136 | ### :pencil2: Redis cluster using Helm
137 |
138 | Provision a Redis cluster with 1 master and 2 replicas using the [Bitnami Helm Chart](https://artifacthub.io/packages/helm/bitnami/redis).
139 |
140 | Configure your NetflixFrontend to work with your Redis cluster instead of the existing `redis` Deployment created in a previous exercise.
141 |
142 | ### :pencil2: Create your own Helm chart for the NetflixFrontend service
143 |
144 | In this exercise we will create a Chart for the [NetflixFrontend][NetflixFrontend].
145 |
146 | Why is it a good idea? For example, instead of maintaining two different sets of YAML manifests for both dev and prod environments,
147 | we can leverage the created Chart to deploy the application in two instances: first as `netflix-frontend-dev` for the Development environment, and then as `netflix-frontend-prod` for the Production environment, each with its own values.
148 |
149 | Helm can help you get started quickly by using the `helm create` command:
150 |
151 | ```bash
152 | helm create netflix-frontend
153 | ```
154 |
155 | Now there is a chart in `./netflix-frontend`. You can edit it and create your own templates.
156 | The directory name is the name of the chart.
157 |
158 | Inside of this directory, Helm will expect the following structure:
159 |
160 | ```text
161 | netflix-frontend/
162 | Chart.yaml # A YAML file containing information about the chart
163 | values.yaml # The default configuration values for this chart
164 | charts/ # A directory containing any charts upon which this chart depends.
165 | templates/ # A directory of templates that, when combined with values, will generate valid Kubernetes manifest files.
166 | ```
167 |
168 | For more information about Chart files structure, go to [Helm docs](https://helm.sh/docs/topics/charts/).
169 |
170 | Change the default Chart values in `values.yaml` to match the original `netflix-frontend` service.
171 |
172 | As you edit your chart, you can validate that it is well-formed by running `helm lint`.
173 |
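For example (assuming the chart directory created above):

```bash
# Check the chart for possible issues
helm lint ./netflix-frontend

# Render the templates locally, without installing, to inspect the generated manifests
helm template netflix-frontend ./netflix-frontend
```
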
174 | When it's time to package the chart up for deployment, you can run the `helm package` command:
175 |
176 | ```bash
177 | helm package netflix-frontend
178 | ```
179 |
180 | And that chart can now easily be installed by:
181 |
182 | ```bash
183 | helm install netflix-frontend-dev ./netflix-frontend-0.1.0.tgz
184 | ```
185 |
186 |
187 | [NetflixFrontend]: https://github.com/exit-zero-academy/NetflixFrontend.git
188 |
--------------------------------------------------------------------------------
/tutorials/k8s_networking.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Networking
2 |
3 | The Kubernetes networking model facilitates communication between pods within a cluster, and from the outside world into the cluster.
4 |
5 | - Pods can communicate with each other using their internal IP addresses.
6 | Every pod in a cluster can reach every other pod without NAT.
7 | - **Services** provide a stable endpoint that abstracts the underlying pod instances. Services enable load balancing and automatic discovery of pod changes.
8 | Pods can use the service name as a DNS entry to connect to other services.
9 |
10 | So far, we've seen how to use Services to publish services only for consumption **inside your cluster**.
11 |
12 | ## Expose applications outside the cluster using a Service of type `LoadBalancer`
13 |
14 | Kubernetes allows you to create a Service of `type=LoadBalancer` (no need to apply the below example):
15 |
16 | ```yaml
17 | apiVersion: v1
18 | kind: Service
19 | metadata:
20 | name: my-service
21 | spec:
22 | type: LoadBalancer
23 | selector:
24 | app.kubernetes.io/name: MyApp
25 | ports:
26 | - protocol: TCP
27 | port: 80
28 | targetPort: 9376
29 | clusterIP: 10.0.171.239
30 | ```
31 |
32 | This Service takes effect only on cloud providers which support external load balancers (like AWS ELB).
33 | Applying this Service will **provision an Elastic Load Balancer (ELB) for your Service**.
34 |
35 | ![][k8s_networking_lb_service]
36 |
37 | Traffic from the Elastic Load Balancer is directed to the backend Pods by the Service. The cloud provider decides how it is load balanced across the cluster's Nodes.
38 |
39 | Note that the actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's `.status.loadBalancer` field.
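
You can poll that field to see when the balancer is ready, for example (service name taken from the manifest above):

```bash
kubectl get service my-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```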
40 |
41 | ## Ingress and Ingress controller
42 |
43 | A Service of type `LoadBalancer` is the core mechanism that allows you to expose an application to clients outside the cluster.
44 |
45 | What now? Should we set up a separate Elastic Load Balancer for each Service we wish to make accessible from outside the cluster?
46 | Doesn't this approach seem inefficient and overly complex in terms of resource utilization and cost?
47 |
48 | It is. Let's introduce an **Ingress** and **Ingress Controller**.
49 |
50 | An Ingress Controller is an application that runs in the Kubernetes cluster and manages external access to services within the cluster.
51 | There are [many Ingress Controller implementations](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) for different usages and clusters.
52 |
53 | The [Nginx ingress controller](https://github.com/kubernetes/ingress-nginx) is one of the most popular ones.
54 | Essentially, it's the good old Nginx webserver app, exposed outside the cluster (using a Service of type `LoadBalancer`), and configured to route incoming traffic to different Services in the cluster (a.k.a. a reverse proxy).
55 |
56 | ![][k8s_networking_nginx_ic]
57 |
58 | **Ingress** is another Kubernetes object, that defines the **routing rules** for the Ingress Controller (so you don't need to edit Nginx `.conf` configuration files yourself).
59 |
60 | Let's deploy an Ingress Controller and apply an Ingress with routing rules.
61 |
62 | ## Deploy the Nginx Ingress Controller
63 |
64 | Ingress controllers are not started automatically with a cluster; you have to deploy one manually.
65 | We'll deploy the Nginx Ingress Controller behind an AWS Network Load Balancer (NLB).
66 | Just apply the manifest as described in the [Network Load Balancer (NLB)](https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb) section.
67 |
68 | The above manifest mainly creates:
69 |
70 | - Deployment `ingress-nginx-controller` of the Nginx webserver.
71 | - Service `ingress-nginx-controller` of type `LoadBalancer`.
72 | - IngressClass `nginx` to be used in the Ingress objects (see `ingressClassName` below).
73 | - RBAC related resources.
74 |
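Once applied, a quick sanity check (the official manifest uses the `ingress-nginx` namespace by default):

```bash
kubectl get pods,svc -n ingress-nginx
```
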
75 | To route traffic to the NetflixFrontend service, apply the below `Ingress` (change values according to your configurations):
76 |
77 | ```yaml
78 | # k8s/ingress-demo.yaml
79 |
80 | apiVersion: networking.k8s.io/v1
81 | kind: Ingress
82 | metadata:
83 | name: netflix-frontend
84 | spec:
85 | rules:
86 | - host: YOUR_ELB_or_ROUTE53_DOMAIN_HERE
87 | http:
88 | paths:
89 | - path: /
90 | pathType: Prefix
91 | backend:
92 | service:
93 | name: YOUR_NETFLIX_FRONTEND_SERVICE_HERE
94 | port:
95 | number: YOUR_SERVICE_PORT_HERE
96 | ingressClassName: nginx
97 | ```
98 |
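Apply it and verify the Ingress was created (file path per the comment in the manifest above):

```bash
kubectl apply -f k8s/ingress-demo.yaml
kubectl get ingress netflix-frontend
```
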
99 | Nginx is configured to automatically discover all Ingress objects where `ingressClassName: nginx` is present, like yours.
100 |
101 | Visit the application using the ELB or Route53 domain name.
102 |
103 | > [!NOTE]
104 | > #### The relation between **Ingress** and **Ingress Controller**:
105 | >
106 | > **Ingress** only defines the *routing rules*, it is not responsible for the actual routing mechanism.
107 | > An Ingress controller is responsible for fulfilling the Ingress routing rules.
108 | > In order for the Ingress resource to work, the cluster must have an Ingress Controller running.
109 |
110 | # Exercises
111 |
112 | ### :pencil2: Terminate HTTPS traffic in the Nginx Ingress controller
113 |
114 | Follow the below docs to configure your Nginx Ingress Controller to listen for HTTPS connections:
115 |
116 | https://kubernetes.github.io/ingress-nginx/examples/tls-termination/#tls-termination
117 |
118 | You can also **force** your incoming traffic to use HTTPS by adding the following annotation to the `Ingress` object:
119 |
120 | ```text
121 | nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
122 | ```
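
The TLS termination guide expects a TLS secret in the cluster; a rough sketch of creating one from a self-signed certificate (the file names, secret name, and subject are assumptions):

```bash
# Generate a self-signed certificate and key
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=YOUR_DOMAIN_HERE"

# Store them as a Kubernetes TLS secret, to be referenced from the Ingress
kubectl create secret tls netflix-frontend-tls --key tls.key --cert tls.crt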
123 |
124 | ### :pencil2: Canary using Nginx
125 |
126 | You can add [nginx annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) to specific `Ingress` objects to customize their behavior.
127 |
128 | In some cases, you may want to "canary" a new set of changes by sending a small number of requests to a different service than the production service.
129 | The [canary annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary) enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied.
130 |
131 | In this exercise we'll deploy a canary for the NetflixFrontend service.
132 |
133 | - Deploy the NetflixFrontend service in a version which is not your most up-to-date one (e.g. `0.8.0` instead of `0.9.0`).
134 |   Now you want to deploy the newer app version (e.g. `0.9.0`), but you aren't confident in this deployment.
135 |   Create a separate set of YAML manifests for the new version of the service, and call it `netflix-frontend-canary`.
136 | - Create another `Ingress` pointing to your canary Deployment, as follows:
137 |
138 | ```yaml
139 | apiVersion: networking.k8s.io/v1
140 | kind: Ingress
141 | metadata:
142 | name: canary
143 | annotations:
144 | nginx.ingress.kubernetes.io/canary: "true"
145 | nginx.ingress.kubernetes.io/canary-weight: "5"
146 | spec:
147 | ingressClassName: nginx
148 | rules:
149 |     # TODO ... Make sure the `host` entry is the same as in the existing netflix-frontend Ingress.
150 | ```
151 |
152 | This Ingress routes 5% of the traffic to the canary deployment.
153 |
154 | Test your configuration by periodically accessing the application:
155 |
156 | ```bash
157 | /bin/sh -c "while sleep 0.05; do (wget -q -O- http://LOAD_BALANCER_DOMAIN &); done"
158 | ```
159 |
160 | **Bonus**: Use [different annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) to perform a canary deployment which routes users based on a request header `FOO=bar`, instead of a specific percentage.
161 |
162 |
163 | [k8s_networking_lb_service]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/k8s_networking_lb_service.png
164 | [k8s_networking_nginx_ic]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/k8s_networking_nginx_ic.png
165 |
--------------------------------------------------------------------------------
/tutorials/k8s_observability.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Cluster Observability
2 |
3 | ## Motivation and challenges
4 |
5 | A Kubernetes cluster is great, but what about monitoring the cluster's applications?
6 |
7 | By monitoring we mean collecting **logs** and **metrics** from our containers and nodes.
8 |
9 | - Logs are chronological records of events that occur within the app, typically in a semi-structured textual form.
10 | - Metrics, on the other hand, are quantitative measurements (numbers) that represent the state or performance of a system (e.g. CPU usage, network incoming traffic).
11 |   Since metrics are values that might change very rapidly, they are often collected at regular intervals (e.g. every 15 seconds).
12 |
13 | Collecting logs and metrics from applications is important for investigating issues and bugs, analysing app performance, auditing security events, etc.
14 |
15 | Monitoring clusters presents unique challenges stemming from the distributed nature of its components and the dynamic, ephemeral environment it operates in.
16 |
17 | - Pods are distributed across dozens of Nodes.
18 | - Pods are launched and terminated throughout the cluster's lifecycle, which can lead to the loss of critical logs from pods that were terminated.
19 |
20 | In this tutorial we will set up a robust monitoring system for our cluster.
21 |
22 | ## Logs collection using FluentBit
23 |
24 | [FluentBit](https://docs.fluentbit.io/manual/installation/kubernetes) is an open source Log Processor. Using FluentBit you can:
25 |
26 | - Collect Kubernetes containers logs
27 | - Enrich logs with Kubernetes Metadata (e.g. Pod name, namespace, labels).
28 | - Centralize your logs in third party storage services like Elasticsearch, InfluxDB, etc.
29 |
30 | It is important to understand how FluentBit will be deployed.
31 | Kubernetes manages a cluster of **nodes**, so our log agent tool will need to run on **every** node to collect logs from every pod, hence FluentBit is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) (exactly one Pod that runs on every Node of the cluster).
32 |
33 | Here is the architecture:
34 |
35 | ![][k8s_observability_fluent]
36 |
37 | 1. [Pods in a k8s cluster write logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/) to `stdout` as well as to log files located under `/var/log/containers/` on the node.
38 |
39 | > #### Try it yourself
40 | >
41 | > SSH into one of your cluster's nodes and take a look at the `/var/log/containers` dir.
42 |
43 | 2. As said, FluentBit will be deployed as a **DaemonSet**.
44 |
45 |    Since log collection involves reading log files from every Node, we want an instance of FluentBit on each Node; otherwise we'd have to deal with cumbersome inter-node communication, increased traffic load, and potential data loss.
46 | When FluentBit is deployed as a DaemonSet, each node has its own FluentBit instance responsible for collecting logs from the containers running on that node.
47 |
48 | 3. Each FluentBit Pod collects logs into a local buffer and, at regular intervals, sends the collected data to Elasticsearch (which will also be deployed in the cluster).
49 |
50 | > [!NOTE]
51 | > Elasticsearch is a NoSQL database. It is designed to handle large volumes of data and provides near real-time search and analytics capabilities. Elasticsearch is often used as the underlying engine for applications that require full-text search and log and event data analysis.
52 |
53 | 4. The Grafana/Kibana server will visualize logs stored in Elasticsearch.
54 |
55 |
56 | ### Deploying FluentBit in Kubernetes
57 |
58 | The recommended way to deploy FluentBit is with the official Helm Chart: https://docs.fluentbit.io/manual/installation/kubernetes#installing-with-helm-chart
59 |
60 | Let's get started. Here are general guidelines, try to handle the deployment details yourself 💪:
61 |
62 | 1. Deploy Elasticsearch and Kibana from https://www.elastic.co/guide/en/cloud-on-k8s/2.13/k8s-deploy-eck.html.
63 | 2. Deploy FluentBit from https://docs.fluentbit.io/manual/installation/kubernetes#installing-with-helm-chart.
64 | 3. Upgrade the chart according to the values file `k8s/fluent-values.yaml` (see the note after this list if the `fluent` Helm repo isn't added yet):
65 |
66 | ```bash
67 | helm upgrade --install fluent-bit fluent/fluent-bit -f k8s/fluent-values.yaml
68 | ```
69 |
70 | 4. Visit your Grafana server and explore the logs.
71 |
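If the `fluent` Helm repo used in step 3 isn't configured yet, it can typically be added first (repo URL per the FluentBit docs linked above):

```bash
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```
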
72 | ## Collect metrics using Prometheus
73 |
74 | [Prometheus](https://prometheus.io/docs/introduction/overview/) is a monitoring platform that collects metrics from [different platforms](https://prometheus.io/docs/instrumenting/exporters/) (such as databases, cloud providers, and k8s clusters).
75 |
76 | Prometheus collects and stores its metrics as **time series** data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called **labels**.
77 | Prometheus monitors its targets by using a **pull-based model**, where Prometheus periodically fetches metrics from the HTTP endpoints exposed by the targets.
78 |
79 | ![][k8s_prom-architecture]
80 |
81 | 1. Deploy Prometheus using the [community Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus). The Chart is already configured with the exporters needed to collect metrics from Pods and Nodes in your cluster. Use `k8s/prometheus-values.yaml` as the values override file (a minimal install sketch follows this list).
82 | 1. Integrate Prometheus into Grafana as a datasource.
83 | 1. In your Grafana, import one of the following dashboards to get some insights about your cluster:
84 | - https://grafana.com/grafana/dashboards/6417-kubernetes-cluster-prometheus/
85 | - https://grafana.com/grafana/dashboards/315-kubernetes-cluster-monitoring-via-prometheus/
86 | - https://grafana.com/grafana/dashboards/12740-kubernetes-monitoring/
87 |
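A minimal sketch for step 1 (the release name is an assumption; the repo URL is the prometheus-community Helm charts repo referenced above):

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install prometheus prometheus-community/prometheus -f k8s/prometheus-values.yaml
```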
88 |
89 | [k8s_observability_fluent]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/k8s_observability_fluent.png
90 | [k8s_prom-architecture]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/k8s_prom-architecture.png
--------------------------------------------------------------------------------
/tutorials/linux_environment_variables.md:
--------------------------------------------------------------------------------
1 | # Environment variables
2 |
3 | ## Environment variables defined
4 |
5 | Global variables, or **environment variables**, are variables available to any process or application running in the same environment. Global variables are passed from a parent process to its child processes. They are used to store system-wide settings and configuration information, such as the current user's preferences, system paths, and language settings. Environment variables are an essential part of the Unix and Linux operating systems and are used extensively by command-line utilities and scripts.
6 |
7 | The `env` or `printenv` commands can be used to display environment variables.
8 |
9 | ## The `$PATH` environment variable
10 |
11 | When you want the system to execute a command, you almost never have to give the full path to that command. For example, we know that the `ls` command is actually an executable file, located in the `/bin` directory (check with `which ls`), yet we don't have to enter the command `/bin/ls` for the computer to list the content of the current directory.
12 |
13 | The `$PATH` environment variable is a list of directories separated by colons (`:`) that the shell searches when you enter a command. When you enter a command in the shell, the shell looks for an executable file with that name in each directory listed in the `$PATH` variable, in order. If it finds an executable file with that name, it runs it.
14 |
15 | System commands are normal programs that exist in compiled form (e.g. `ls`, `mkdir` etc... ).
16 |
17 | ```console
18 | myuser@hostname:~$ which ls
19 | /bin/ls
20 | myuser@hostname:~$ echo $PATH
21 | /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:.....
22 | ```
23 |
24 | The above example shows that `ls` is actually an executable file located at `/bin/ls`. The `/bin` path is part of the PATH env var, thus we are able to simply type `ls`.
25 |
26 | ## The `export` command
27 |
28 | The `export` command is used to set environment variables in the current shell session or to export variables to **child processes**.
29 |
30 | When a variable is exported using the `export` command, it becomes available to any child process that is spawned by the current shell. This is useful when you need to pass environment variables to programs or scripts that you run.
31 |
32 | For example, let's say you want to add a directory called `mytools` to your `PATH` environment variable so that you can run executables stored in that directory. You can do this by running the following command:
33 |
34 | ```bash
35 | export PATH=$PATH:/home/myuser/mytools
36 | ```
37 |
38 | This command adds the directory `/home/myuser/mytools` to the existing PATH environment variable, which is a colon-separated list of directories that the shell searches for executable files.
39 |
40 | If you only set the `PATH` variable without exporting it, it will only be available in the current shell session and will not be inherited by child processes.
41 |
42 | ```bash
43 | PATH=$PATH:/home/myuser/mytools
44 | ```
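
A quick demo of the difference (`MYVAR` is a hypothetical variable used only for illustration):

```bash
MYVAR=hello
bash -c 'echo "child sees: $MYVAR"'   # prints an empty value - not exported
export MYVAR
bash -c 'echo "child sees: $MYVAR"'   # prints "hello" - inherited by the child process
```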
45 |
46 | ## The `source` command
47 |
48 | If you don't want to start a new process when executing a script or command, but to run it **in the current shell process**, you should `source` it.
49 |
50 | The below example demonstrates the usage of the `source` command.
51 | We will use a bash variable called `$`, which contains the current process ID.
52 |
53 | Create the below bash file under `print_pid.sh`:
54 |
55 | ```bash
56 | # Note that the $$ gives the value of the $ variable.
57 | echo $$
58 | ```
59 |
60 | Let's execute this script, once within a new bash process, and once when the script is sourced:
61 |
62 | ```console
63 | myuser@hostname:~$ echo $$
64 | 44132
65 | myuser@hostname:~$ bash print_pid.sh
66 | 50299
67 | myuser@hostname:~$ source print_pid.sh
68 | 44132
69 | ```
70 |
71 | # Exercises
72 |
73 | ### :pencil2: Create your own Linux "command"
74 |
75 | Let's create a shell program and add it to your `$PATH` env var. Execute the following commands line by line:
76 |
77 | 1. In your home dir, create a directory called `scripts`. This dir will be added to the PATH soon.
78 | 2. Create a bash script in a file called `myscript` (without any extension), with the following content:
79 |
80 | ```bash
81 | #!/bin/bash
82 | echo my script is running...
83 | ```
84 |
85 | 3. Test your script by `bash myscript`
86 | 4. Give it execute permissions
87 | 5. Copy your script into `~/scripts`
88 | 6. Add `~/scripts` to the PATH (don't override the existing content of PATH, take a look at the above example).
89 | 7. Test your new "command" by just typing `myscript`.
90 | 8. Try to use the `myscript` command in another new terminal session. Does it work? Why?
91 |
92 |
93 | ### :pencil2: Elvis custom `ls` command
94 |
95 | The PATH variable on `elvis`' machine looks like:
96 |
97 | ```console
98 | [elvis@station elvis]$ echo $PATH
99 | /home/elvis/custom:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
100 | ```
101 |
102 | `elvis` created a custom program called `ls`.
103 | The program is located in `/home/elvis/custom` directory.
104 |
105 | 1. What is the command that `elvis` should execute such that **his** version of `ls` would be executed in the current terminal session only?
106 | 2. What is the command that `elvis` should execute such that **Ubuntu**'s version of `ls` would be executed in the current and child terminal sessions?
107 |
--------------------------------------------------------------------------------
/tutorials/linux_io_redirection.md:
--------------------------------------------------------------------------------
1 | # Linux Input and Output (IO) Redirection
2 |
3 |
4 | ## The `>` and `>>` operators
5 |
6 | Sometimes you will want to put the output of a command in a file, instead of printing it to the screen. You can do so with the `>` operator:
7 |
8 | ```console
9 | myuser@hostname:~$ echo Hi
10 | Hi
11 | myuser@hostname:~$ echo Hi > myfile
12 | myuser@hostname:~$ cat myfile
13 | Hi
14 | ```
15 |
16 | The `>` operator overwrites the file if it already contains some content. If you want to append to the end of the file, do:
17 |
18 | ```console
19 | myuser@hostname:~$ echo Hi again >> myfile
20 | myuser@hostname:~$ cat myfile
21 | Hi
22 | Hi again
23 | myuser@hostname:~$ date >> myfile
24 | myuser@hostname:~$ cat myfile
25 | Hi
26 | Hi again
27 | IST 10:06:02 2020 Jan 01
28 | ```
29 |
30 | ## The `|` operator
31 |
32 | Sometimes you may want to issue another command on the output of one command. You can do so with the `|` (pipe) operator:
33 |
34 | ```console
35 | myuser@hostname:~$ cat myfile | grep again
36 | Hi again
37 | ```
38 |
39 | In the above command, the output of the `cat` command is the input to the `grep` command.
40 | `>`, `>>`, and `|` are all called IO redirection operators. There are [more operators](https://tldp.org/LDP/abs/html/io-redirection.html)... but the above 3 are the most common.
41 |
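A few of those "more operators", as a quick taste (bash syntax; the file names are arbitrary examples):

```bash
ls /nonexistent 2> errors.txt     # redirect stderr (error messages) to a file
ls /nonexistent > all.txt 2>&1    # redirect both stdout and stderr to the same file
wc -l < /etc/passwd               # feed a file to a command's stdin with <
```
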
42 | ## Linux `grep` command and Regular Expressions
43 |
44 | ### Regex
45 |
46 | Regular expressions (Regex) allow us to create and match a pattern in a given string. Regex are used to replace text in a string, validate string format, extract a substring from a string based on a pattern match, and much more!
47 |
48 | Regex is out of this course's scope, but you are highly encouraged to learn regex yourself. There are so many systems and configurations for which you'll need regex skills!
49 |
50 | Learn Regex: https://regexone.com/
51 |
52 |
53 | ### `grep` - Global Regular Expression Print
54 |
55 | A simple but powerful program, `grep` is used for filtering file content or input lines, printing only those that match certain regex patterns.
56 |
57 | Let's print the content of `/var/log/auth.log`. This file is a log file that records information about system authentication events. The below commands create an authentication event (by creating the `/test` directory and a file in it), then print the content of `auth.log`.
58 |
59 | ```console
60 | myuser@hostname:~$ sudo mkdir /test
61 | myuser@hostname:~$ sudo touch /test/aaa
62 | myuser@hostname:~$ cd /var/log
63 | myuser@hostname:/var/log$ cat auth.log
64 | ...
65 | Mar 7 19:17:01 hostname CRON[2076]: pam_unix(cron:session): session closed for user root
66 | Mar 7 19:33:10 hostname sudo: myuser : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/usr/bin/mkdir /test
67 | ```
68 |
69 | The above output shows information regarding the executed `sudo` command. `myuser` is the user who performed the `sudo` command, the working dir is `/var/log` and the command coming after sudo was `mkdir /test`.
70 | Now let's demonstrate the power of the `grep` command.
71 |
72 | Print all lines contain the word "sudo":
73 |
74 | ```console
75 | myuser@hostname:/var/log$ grep sudo auth.log
76 | Mar 7 19:17:01 hostname CRON[2076]: pam_unix(cron:session): session closed for user root
77 | Mar 7 19:33:10 hostname sudo: myuser : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/usr/bin/mkdir /test
78 | ...
79 | ```
80 |
81 | But what if we want to print all "sudo" events while the command coming after sudo is `mkdir`? We can do it using regular expressions:
82 |
83 | ```console
84 | myuser@hostname:/var/log$ grep -E "sudo: .*COMMAND=.*mkdir" auth.log
85 | Mar 7 19:33:10 hostname sudo: myuser : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/usr/bin/mkdir /test
86 | ```
87 |
88 | The above example uses the `.*` match pattern to catch what we want. `.` means "any single character except a line break", `*` means "0 or more repetitions of the preceding symbol".
89 |
90 | Another example - list all users that executed commands using `sudo`:
91 |
92 | ```console
93 | # first (bad) try - the second output line is unwanted
94 | myuser@hostname:/var/log$ grep -Eo "sudo:.*:" auth.log
95 | sudo: myuser :
96 | sudo: pam_unix(sudo:session):
97 | ...
98 |
99 | # second (success) try
100 | myuser@hostname:/var/log$ grep -Eo "sudo:\s+\w+\s+:" auth.log
101 | sudo: myuser :
102 | ```
103 |
104 | ## IO Pipes under the hood
105 |
106 | How does it work? IO redirection reveals a very interesting structure for Linux systems.
107 |
108 | ![][linux_ioredirect]
109 |
110 | Most Linux commands read input, such as a filename, and write output to the screen. By default, your keyboard is represented in Linux by the standard input (**stdin**) device, and the screen or a particular terminal window is represented by the standard output (**stdout**) device. We already said that in Linux everything is a file, and indeed, you can find the stdin and stdout files in the `/dev` directory.
111 |
112 | ```console
113 | myuser@hostname:~$ ls -l /dev
114 | ...
115 | lrwxrwxrwx 1 root root 15 Mar 7 16:45 stderr -> /proc/self/fd/2
116 | lrwxrwxrwx 1 root root 15 Mar 7 16:45 stdin -> /proc/self/fd/0
117 | lrwxrwxrwx 1 root root 15 Mar 7 16:45 stdout -> /proc/self/fd/1
118 | ...
119 | ```
120 |
121 | Or even better:
122 |
123 | ```console
124 | myuser@hostname:~$ ls -l /dev | grep "stdin\|stdout"
125 | lrwxrwxrwx 1 root root 15 Mar 7 16:45 stdin -> /proc/self/fd/0
126 | lrwxrwxrwx 1 root root 15 Mar 7 16:45 stdout -> /proc/self/fd/1
127 | ```
128 |
129 | In the above `ls -l /dev` command, instead of printing the output to the screen (to stdout, as we are used to seeing), the output is redirected **as input** to the next command, `grep`. That is, instead of writing the output of the left side of the pipe to the standard output, this output is redirected to the standard input of the `grep` command.
130 | So `grep` is searching and filtering the output of the `ls` command.
131 | In this example we are filtering lines containing the text `stdin` or `stdout`; we do it with the `stdin\|stdout` regular expression.
132 |
133 | # Exercises
134 |
135 | ### :pencil2: IO redirection basics
136 |
137 | 1. Create a file called `fruits.txt` with the contents "apple banana cherry"
138 | 2. Use `>` to write the contents of `fruits.txt` to a new file called `output.txt`.
139 | 3. Use `>>` to append the contents of `fruits.txt` again to `output.txt`.
140 | 4. Use `|` to pipe the output of `cat output.txt` to `grep banana`. How many times does banana appear?
141 | 5. Use `grep` to search for APPLE (upper cases) in `output.txt`. Did the search succeed?
142 | 6. Use `grep` to display all lines in `output.txt` that don't contain banana.
143 |
144 | ### :pencil2: `grep` on file
145 |
146 | Create the file `~/bashusers.txt`, which contains lines from the `/etc/passwd` file which contain the text “/bin/bash”.
147 |
148 | ### :pencil2: `grep` on file II
149 |
150 | Create the file `~/rules.txt`, which contains every line from the `/etc/rsyslog.conf` file which contains the text “file”, using a case insensitive search.
151 | (In other words, file, File, and files would all count as matches).
152 |
153 |
154 | ### :pencil2: `grep` with line number and pipe
155 |
156 | Use the `grep` command and IO redirects only!
157 |
158 | Create the file `~/mayhemnum.txt`, which contains only the line number of the word “mayhem” from the file `/usr/share/dict/words`.
159 |
160 |
161 | ### :pencil2: `grep` with regex
162 |
163 | Find the number of words in `/usr/share/dict/words` that contain at least three “a”s. E.g. traumata, takeaways, salaam
164 |
165 | ### :pencil2: Regex
166 |
167 | Create a file containing some lines that you think would match the regular expression: `(^[0-9]{1,5}[a-zA-Z ]+$)|none` and some lines that you think would not match.
168 | Use `grep` to see if your intuition is correct.
169 |
170 | ### :pencil2: Regex II
171 |
172 | Using `grep` command and regular expressions, list all files in your home directory that others can read or write to.
173 |
174 |
175 |
176 |
177 | [linux_ioredirect]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/linux_ioredirect.png
178 |
--------------------------------------------------------------------------------
/tutorials/linux_package_management.md:
--------------------------------------------------------------------------------
1 | # Package Management
2 |
3 | ## `apt-get` package manager
4 |
5 | Many Linux distributions use a package management system to install, remove, and manage software packages. Here are some commonly used commands for package management on Ubuntu:
6 |
7 | `apt-get` is a command line tool that helps to handle packages in Ubuntu systems. Its main task is to retrieve the information and packages from the authenticated sources for installation, upgrade, and removal of packages along with their dependencies (the packages that your desired package depends on).
8 |
9 | Generally, when first using `apt-get`, you will need to get **an index** of the available packages from public repositories; this is done using the command `sudo apt-get update`:
10 |
11 | ```console
12 | myuser@hostname:~$ sudo apt-get update
13 | ...
14 | ```
15 |
16 | Note that the command doesn't install any package, so what just happened under the hood?
17 |
18 | First, `apt-get` reads the `/etc/apt/sources.list` file (and any additional files under `/etc/apt/sources.list.d/`), which contains a list of [configured data sources and properties](https://bash.cyberciti.biz/guide//etc/apt/sources.list_file) to fetch packages from the internet. Then for each repository in the list, apt fetches a list of all available packages, versions, metadata etc... Package lists are stored under `/var/lib/apt/lists/`.
19 |
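You can peek at both locations yourself (paths as mentioned above):

```bash
cat /etc/apt/sources.list     # configured package sources
ls /var/lib/apt/lists/        # the locally cached package lists
```
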
20 | To install a package, just type:
21 |
22 | ```console
23 | sudo apt-get install <package_name>
24 | ```
25 |
26 | When you run the above command, the package manager (in this case, `apt-get`) will search for the package in its local package lists (the ones stored under `/var/lib/apt/lists/`). These package lists are the catalog of available packages that can be installed on your system; if a package doesn't exist in the catalog, you won't be able to install it. Thus, it is important to perform `sudo apt-get update` before every installation, in order to update the local lists with all available packages in their latest versions.
27 |
28 | `apt-get` checks the **digital signature** of the files to ensure that it is valid and has not been tampered with. The signature is used to verify the authenticity and integrity of the repository and its contents. The concepts of digital signatures, data integrity and authenticity will be discussed later on in this course.
29 |
30 | # Exercises
31 |
32 | ### :pencil2: Install Docker
33 |
34 | Follow [Docker official installation](https://docs.docker.com/engine/install/ubuntu/) docs on Ubuntu. Install Docker step by step while trying to understand the reason behind every `apt-get` command. What are the official GPG keys used for?
35 |
36 | ### :pencil2: Experimenting with `apt-get`
37 |
38 | 1. Why do we need `sudo` to `apt-get update` and `install`?
39 | 2. Perform `apt-cache show apache2` to see the local list of the `apache2` package on your system.
40 | 3. Choose one of the versions from the above output (preferably not the latest version), and install `apache2`, in this specific version.
41 | 4. Perform `sudo apt-get update`. Was the list updated? Do you have some new versions of apache2 available to be installed?
42 | 5. Upgrade `apache2` to the latest version.
43 | 6. Remove `apache2`.
44 |
45 |
--------------------------------------------------------------------------------
/tutorials/milestone_github_actions_ci_cd.md:
--------------------------------------------------------------------------------
1 | # :round_pushpin: Milestone: GitHub Actions and the simple CI/CD pipeline
2 |
3 | CI/CD (Continuous Integration and Continuous Deployment) is a methodology which automates the deployment process of a software project.
4 | We'll spend a fair amount of time discussing this topic. But for now we want to achieve a simple outcome:
5 |
6 | **When you make changes to your code locally, commit, and push them, a new automated pipeline connects to an EC2 instance, and deploys the new version of the app.**
7 |
8 | No need to manually connect to your EC2 instance, install dependencies, stop the running server, pull the new code version, and launch the server - everything from code changes to deployment is seamlessly done by an automated process.
9 | This is why it is called **Continuous Deployment**, because on every code change, a new version of the app is being deployed automatically.
10 |
11 | To achieve that, we will use a platform which is part of GitHub, called **GitHub Actions**.
12 |
13 | 1. First, **get yourself familiar** with how GitHub Actions works: https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions.
14 | 2. The GitHub Actions **Workflow** skeleton is already written for you and available under `.github/workflows/service-deploy.yaml` in the [NetflixMovieCatalog][NetflixMovieCatalog] repo. Carefully review it, and feel free to customize it according to your specific requirements.
15 |
16 |    Note that in order to automate the deployment process of the app, the workflow should have an SSH private key that is authorized to connect to your instance. Since we **NEVER** store secrets in a git repo, you should configure a **Secret** in GitHub Actions and provide it to the workflow as an environment variable, as follows:
17 | - Go to your project repository on GitHub, navigate to **Settings** > **Secrets and variables** > **Actions**.
18 | - Click on **New repository secret**.
19 | - Define a secret named `SSH_PRIVATE_KEY` with the private key value to connect to your EC2.
20 |    - Take a look at how this secret is used in the `service-deploy.yaml` workflow YAML.
21 | 3. Make some changes to your app, then commit and push them. Notice how the **Netflix Movie Catalog Service Deployment** workflow automatically kicks in. Once the workflow completes successfully, your new application version should be automatically deployed to your EC2 instance. Make sure the service is working properly and reflects the code changes you've made.
22 |
23 | **Note:** Your EC2 instances should be running while the workflow is running. **Don't forget to turn off the machines when you're done**.
24 |
25 |
26 |
28 | [NetflixMovieCatalog]: https://github.com/exit-zero-academy/NetflixMovieCatalog.git
29 |
30 |
--------------------------------------------------------------------------------
/tutorials/milestone_simple_app_deployment.md:
--------------------------------------------------------------------------------
1 | # :round_pushpin: Milestone: Simple app deployment
2 |
3 | For this milestone, you will manually deploy the [NetflixMovieCatalog][NetflixMovieCatalog] service on an AWS virtual machine.
4 |
5 | 1. In an AWS account, create an EC2 instance.
6 | 2. Run the NetflixMovieCatalog within your instance as a Linux service[^1] that starts automatically when the instance starts. Create a Python venv and install dependencies if needed.
7 | 3. In Route 53, configure a subdomain in the hosted zone of your domain to route traffic to your instance IP.
8 | Access the service domain via your browser and make sure it's accessible.
9 | 4. Now, configure your Flask application to accept only HTTPS traffic by generating a self-signed certificate. Update the Flask app code to use the certificate, as follows:
10 |
11 | ```diff
12 | - app.run(port=8080, host='0.0.0.0')
13 | + app.run(port=8080, host='0.0.0.0', ssl_context=('cert.pem', 'key.pem'))
14 | ```
15 |
16 |    Where `cert.pem` and `key.pem` are the paths to your generated certificate and private key (a generation sketch follows this list).
17 | 5. Visit your service via your browser using the HTTPS protocol.
18 |
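One common way to generate the self-signed certificate mentioned in step 4 (the key size, validity period, and subject are arbitrary example choices):

```bash
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=YOUR_DOMAIN_HERE"
```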
19 |
20 | [NetflixMovieCatalog]: https://github.com/exit-zero-academy/NetflixMovieCatalog.git
21 |
22 | [^1]: Linux services discussed [here](linux_processes.md#services)
--------------------------------------------------------------------------------
/tutorials/monitoring_and_alerting_grafana_prometheus.md:
--------------------------------------------------------------------------------
1 | # Grafana and Prometheus
2 |
3 |
4 | https://grafana.com/tutorials/grafana-fundamentals/
5 |
6 |
7 | https://grafana.com/docs/grafana/latest/dashboards/variables/
8 |
9 | https://grafana.com/docs/grafana/latest/dashboards/build-dashboards/annotate-visualizations/
10 |
11 | https://grafana.com/docs/grafana/latest/dashboards/build-dashboards/best-practices/
12 |
13 |
14 |
15 | - Build Grafana dashboards
16 | - Instrument your app with Prometheus
17 | - Create and simulate alerts
18 | - PromQL and real-time alerting with Grafana and Prometheus
19 |
20 |
21 | # Exercises
22 |
23 | ### :pencil2: Monitoring Jenkins with Prometheus and Grafana
24 |
25 | https://www.jenkins.io/doc/book/system-administration/monitoring/
26 |
--------------------------------------------------------------------------------
/tutorials/networking_OSI_model.md:
--------------------------------------------------------------------------------
1 | # Networking
2 |
3 | This tutorial was built thanks to the great book [Computer Networking: a Top Down Approach](https://gaia.cs.umass.edu/kurose_ross/).
4 |
5 | ## The OSI model
6 |
7 |
8 | In order to get data over the network, lots of different hardware and software components need to work and communicate together via a well-defined **protocol**.
9 | A protocol is, simply put, a set of rules for communication. You've probably heard some of them: HTTP, SSH, TCP/IP etc...
10 | All these different types of communication protocols are classified in 7 layers, which are known as the Open Systems Interconnection Reference Model, the OSI Model for short.
11 |
12 | In this course we will discuss the 4-layer model, which is a simplified version of the OSI model that combines several of the OSI layers into four layers.
13 |
14 | This model is commonly used in the TCP/IP protocol suite, which is the basis for the Internet.
15 |
16 | The four layers of the TCP/IP model, in order from top to bottom, are:
17 |
18 | | Layer Name | Common used protocols |
19 | |-------------------------|-----------------------|
20 | | Application Layer | HTTP, DNS, SMTP, SSH |
21 | | Transport Layer | TCP, UDP |
22 | | Network Layer | IP, ICMP |
23 | | Network Interface Layer | Ethernet |
24 |
25 | ### Visiting google.com in the browser - it's really much more complicated than it looks!
26 |
27 | What happens when you open up your web browser and type http://www.google.com/? We will try to examine it in terms of the OSI model.
28 |
29 | #### Application layer
30 |
31 | The browser uses HTTP protocol to form an HTTP request to Google's servers, to serve Google's home page.
32 | The HTTP request is merely a text in a well-defined form, it may look like:
33 |
34 | ```text
35 | GET / HTTP/1.1
36 | Host: google.com
37 | User-Agent: Mozilla/5.0
38 | ```
39 |
40 | Note that we literally want to transfer this text to Google's servers, as is.
41 | On the server side, there is an application (called a "webserver", obviously) that knows what to do and how to respond to this text format.
42 | Since web browsers and web servers are applications that use the network, they reside in the Application layer.
43 |
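You can even send this exact text yourself over a raw TCP connection (a rough demo using netcat; google.com will most likely answer with a redirect):

```bash
printf 'GET / HTTP/1.1\r\nHost: google.com\r\nUser-Agent: Mozilla/5.0\r\nConnection: close\r\n\r\n' | nc google.com 80
```
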
44 | The **Application layer** is where network applications and their corresponding protocols reside. Network applications may be web browsers, web servers, mailing software, and any application that sends or receives data over the Internet, in any form.
45 |
46 | Are your Firefox or Chrome browsers responsible for the actual data transfer over the Internet? Hell no.
47 | They both use the great service of the **Transport layer**.
48 |
49 | #### Transport layer
50 |
51 | After your browser formulated an HTTP text message (a.k.a. **HTTP request**), the message is transferred (by writing it to a file of type **socket** - will be discussed later), to another "piece of software" in the Linux kernel which is responsible for **controlling the transmission** of the Application layer messages to the other host.
52 | The Transmission Control Protocol (TCP) forms the [set of rules](https://www.ietf.org/rfc/rfc793.txt) according to which the message is transferred to the other host, or received from another host.
53 |
54 | TCP breaks long **messages** into shorter **segments**; it guarantees that the data was indeed delivered to the destination and controls the order in which segments are sent.
55 | Note that TCP only controls **how** the data is sent and received, but it is not responsible for the actual data transfer.
56 |
57 | Besides TCP, there is another common protocol in the Transport layer which is called **UDP**.
58 |
59 | - TCP (Transmission Control Protocol): Reliable, connection-oriented, provides a guaranteed delivery of data and error detection mechanisms.
60 | - UDP (User Datagram Protocol): Lightweight, connectionless, used for fast, low-latency communication. Commonly used for video streaming, online gaming, and other real-time applications.
61 |
62 | To send its data, TCP and UDP use the service of a very close friend - **Internet Protocol (IP)**.
63 |
64 |
65 | #### Internet layer
66 |
67 | We continue our journey to get Google.com's homepage.
68 | So we have a few segments, ready to be transferred to Google's servers.
69 |
70 | The IP protocol is responsible for moving the TCP segments from one host to another.
71 | Just as you would give the postal service a letter with a destination address, the IP protocol sends pieces of data (a.k.a. **packets**) to an address (a.k.a. an **IP address**).
72 | Like TCP and UDP, IP is a piece of software that resides in the Linux kernel (so close to TCP that they are frequently called TCP/IP).
73 | In order to send packets over the Internet, IP communicates with a **Network Interface**, which is a software abstraction that represents a physical (or virtual) network device, such as an Ethernet card or a wireless adapter.
74 |
75 | The Network layer routes packets through a series of routers between the source and destination hosts.
76 |
77 | #### Network Interface layer
78 |
79 | The Network Interface layer is the lowest-level component in our model.
80 | It provides an interface between the physical network and the higher-level networking protocols.
81 | It handles the transmission and reception of data (a.k.a. **Frames**) over the network, and it is responsible for converting digital data into the **signals** transmitted over the physical network.
82 |
83 | In this layer, every physical (or virtual) network device has a media access control (**MAC**) address, which is a unique identifier assigned to a network interface.
84 |
85 | # Exercises
86 |
87 | ### :pencil2: Inspecting OSI layers via WireShark
88 |
89 | Wireshark is a popular network protocol analyzer that allows users to capture and inspect network traffic in real time, making it a valuable tool for network troubleshooting and analysis.
90 |
91 | Install it on Ubuntu:
92 | https://www.wireshark.org/docs/wsug_html_chunked/ChBuildInstallUnixInstallBins.html#_installing_from_debs_under_debian_ubuntu_and_other_debian_derivatives
93 |
94 | Run it by:
95 |
96 | ```bash
97 | sudo wireshark
98 | ```
99 |
100 | Start capturing packets by clicking on the ![][networking_wiresharkstart] button.
101 | In Wireshark, apply (![][networking_wireshark_apply]) the following filter to catch only packets destined for `google.com`:
102 |
103 | ```text
104 | http.host == "google.com"
105 | ```
106 |
107 | From your terminal, use the `curl` command to get the main page of `google.com`
108 |
109 | ```bash
110 | curl google.com
111 | ```
112 |
113 | Explore the **packet details pane**.
114 |
115 | ![][networking_wireshark_packet_pane]
116 |
117 | This pane displays the contents of the selected packet (packet here is referred to as “any piece of data that traverses down the model layers”).
118 | You can expand or collapse each layer to view the details of the corresponding layer, such as the source and destination addresses, protocol flags, data payloads, and other relevant information.
119 |
120 | Based on our discussion on the OSI model, and your previous knowledge in computer networking, try to look for the following information:
121 |
122 |
123 | 1. How many layers does the packet cross?
124 | 2. What is the top layer, and which is the lowest?
125 | 3. The “network interface” layer is not part of the original OSI model. It is composed of the two lower layers of the original model; what are those two layers, according to the packet details pane on your Wireshark screen?
126 | 4. What does the original message to google.com (the HTTP request) look like in the top layer?
127 | 5. Is the packet sent using TCP or UDP?
128 | 6. What is the length of the transport layer segment?
129 | 7. Into how many segments has the original message been split?
130 | 8. Which version of IP protocol did you use in the Internet layer?
131 | 9. In the Internet layer, what is the destination IP of the packets?
132 | 10. What is the MAC address of your device?
133 | 11. How many bits have been transmitted over the wire to google's servers?
134 | 12. What is the protocol sequence that the frame (the lowest-level piece of data) has been composed of?
135 |
136 | [networking_wiresharkstart]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/networking_wiresharkstart.png
137 | [networking_wireshark_apply]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/networking_wireshark_apply.png
138 | [networking_wireshark_packet_pane]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/networking_wireshark_packet_pane.png
--------------------------------------------------------------------------------
/tutorials/networking_linux_sockets.md:
--------------------------------------------------------------------------------
1 | # Linux Sockets
2 |
3 | A **Socket** is a communication endpoint that allows processes to communicate with each other, either on the same machine or across a network.
4 | A socket is identified by a unique combination of an **IP address** and a **Port Number**, and it is associated with a particular protocol, usually TCP or UDP.
5 |
6 | Linux sockets provide a standardized interface for networking communication, and they are used by many network applications, such as web browsers, email clients, etc...
7 | Sockets can be used to establish connections between **Client** and **Server**, or to implement peer-to-peer communication between two applications.
8 |
9 | ![][networking_sockets]
10 |
11 | ## The Client-Server model
12 |
13 | Most networking applications today are designed around a client-server relationship.
14 | The **Client** is usually an application acting on behalf of a person, such as a web browser accessing google.com.
15 | The **Server** is generally an application that is providing some service, such as supplying the content of the web page of google.com.
16 |
17 | Processes implementing a server might run as [linux services](linux_processes.md#services), started at boot time, and continue to run until the machine is shut down.
18 | Usually, clients can use the server's **hostname** (e.g. `google.com`, `console.aws.com`, etc...), which is a friendly known name that can be converted into the IP Address of the server using the **Domain Name Service (DNS)** system.
19 |
20 | The below figure illustrates multiple clients communicating with the same server:
21 |
22 | ![][networking_client-server]
23 |
24 | ## Socket communication demonstrated
25 |
26 | This exercise will demonstrate a client-server communication over a TCP socket.
27 | Typically, the server would run on a different machine than the client, but it is also possible to run the client and the server on the same machine.
28 |
29 | 1. Lightly review the code in `simple_linux_socket/server.c`, especially the system calls `socket()`, `accept()`, `bind()`, `listen()`, `recv()`.
30 | 2. From your bash terminal, compile the code by `gcc -o server server.c`.
31 | 3. Run by `./server`.
32 | 4. As a client that wants to communicate with the server, use the `nc <SERVER_IP> 9999` command to establish a TCP connection and send data to the server.
33 |
34 | As the `./server` process starts, it first allocates a `socket`, then `bind`s it to port `9999`, and begins `listen`ing for connections.
35 | At some point later, the client uses the `nc` command to communicate with the server by allocating a socket on its local machine, and requests to connect to port 9999 of the server host.
36 | Because `nc` did not request a particular port number, the kernel provides a random one (`52038` in the below output).
37 | As the client requests the connection, it provides its own IP address and the (randomly assigned) port number to the server.
38 | The server chooses to `accept` the connection.
39 | The established socket is now identified by the combined IP address and port number of both the client and server.
40 |
41 | `netstat` is an important command that can be used to display a variety of networking information, including open ports.
42 | When a port is used by a socket, it is referred to as an **open port**.
43 |
44 |
45 | The below is an output example **from the client machine**, which currently holds an `ESTABLISHED` connection with the server:
46 |
47 | ```console
48 | myuser@hostname:~$ netstat -tuna
49 | Active Internet connections (servers and established)
50 | Proto Recv-Q Send-Q Local Address Foreign Address State
51 | tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
52 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
53 | ...
54 | tcp 0 0 172.16.17.74:52038 13.51.197.134:9999 ESTABLISHED <=====
55 | udp 0 0 127.0.0.53:53 0.0.0.0:*
56 | udp 0 0 0.0.0.0:67 0.0.0.0:*
57 | ...
58 | ```
59 |
60 | The below is an output example **from the server machine**, while the server is running and connected to the above mentioned client:
61 |
62 | ```console
63 | ubuntu@13.51.197.134:~$ netstat -tuna
64 | Active Internet connections (servers and established)
65 | Proto Recv-Q Send-Q Local Address Foreign Address State
66 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
67 | tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN
68 | tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN
69 | tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
70 | tcp 0 0 172.31.46.24:22 212.50.96.83:39152 ESTABLISHED
71 | tcp        0     52 172.31.46.24:22        212.50.96.83:58894      ESTABLISHED
72 | tcp 0 0 172.31.46.24:9999 212.50.96.83:52038 ESTABLISHED <=====
73 | tcp 0 0 172.31.46.24:38150 54.229.116.227:80 TIME_WAIT
74 | tcp6 0 0 :::22 :::* LISTEN
75 | tcp6 0 0 :::80 :::* LISTEN
76 | udp 0 0 127.0.0.1:323 0.0.0.0:*
77 | udp 0 0 127.0.0.53:53 0.0.0.0:*
78 | udp 0 0 172.31.46.24:68 0.0.0.0:*
79 | udp6 0 0 ::1:323 :::*
80 | ```
81 |
82 | Once the socket is established, the `nc` process and the `./server` process can read information from and write information to one another as easily as reading and writing from a file.
83 |
84 | > #### 🧐 Try it yourself - Multiple connections to the same server
85 | >
86 | > Create multiple connections between different clients to the same server.
87 | > Explore the connections in `netstat`'s output.
88 | >
89 | > Once the server is running, explore `/proc/<PID>/fd`, where `<PID>` is the server's process id, to see the created socket file.
90 | >
91 |
92 |
93 | ## Well-known and privileged ports, more on `netstat`
94 |
95 | Unlike clients, processes implementing the server side generally request which port they would like to bind to.
96 | Only one process may bind to a port at any given time.
97 |
98 |
99 | For example, our `simple_socket_server` bound to port `9999`.
100 | Why `9999`? No particular reason.
101 | But on the Internet, there are well-known ports for famous services.
102 | For example:
103 |
104 | - Servers working with HTTP usually listen on port 80.
105 | - Servers working with HTTPS (HTTP Secure) usually listen on port 443.
106 | - If you connect to a remote machine using the SSH protocol, usually the remote machine is listening on port 22.
107 |
108 |
109 | On Linux machines, a catalog of well-known services and ports can be found in `/etc/services`.
110 |
111 | Ports less than `1024` are known as **privileged ports**, and treated specially by the kernel.
112 | Only processes running as the `root` user may bind to privileged ports.
113 |
114 | We now analyze a few extracted lines from the above output of the `netstat -tuna` command.
115 |
116 | ```text
117 | Proto Recv-Q Send-Q Local Address Foreign Address State
118 | tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN
119 | ```
120 |
121 | This socket is bound to all interfaces (`0.0.0.0` means "all IP addresses") in the `LISTEN` state, on port `9999`.
122 | This can be recognized as our server process, actively listening for client connections.
123 |
124 | The next example is listening for connections as well, but only on the loopback address:
125 |
126 | ```text
127 | Proto Recv-Q Send-Q Local Address Foreign Address State
128 | tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
129 | ```
130 |
131 | It must belong to a service expecting to receive connections from other processes on the local machine only.
132 | To determine what services these ports belong to, we do some `grep`ing of the `/etc/services` file.
133 |
134 | ```console
135 | ubuntu@13.51.197.134:~$ cat /etc/services | grep 631
136 | ipp 631/tcp # Internet Printing Protocol
137 | ```
138 |
139 | Apparently, the process listening on port 631 is listening for print clients. This is probably the `cupsd` printing daemon.
140 |
141 | Last example:
142 |
143 | ```text
144 | Proto Recv-Q Send-Q Local Address Foreign Address State
145 | tcp 0 0 127.0.0.1:631 127.0.0.1:59330 ESTABLISHED
146 | tcp 0 0 127.0.0.1:59330 127.0.0.1:631 ESTABLISHED
147 | ```
148 |
149 | These lines reflect both halves of an established connection between two processes, both on the local machine.
150 | The first is bound to port 59330 (probably a randomly assigned client port), and the second to the port 631.
151 | Some process on the local machine must be communicating with the `cupsd` daemon.
152 |
153 |
154 | [networking_sockets]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/networking_sockets.png
155 | [networking_client-server]: https://exit-zero-academy.github.io/DevOpsTheHardWayAssets/img/networking_client-server.png
156 |
157 |
--------------------------------------------------------------------------------
/tutorials/onboarding.md:
--------------------------------------------------------------------------------
1 | # Course Onboarding
2 |
3 | ## TL;DR
4 |
5 | Onboarding steps:
6 |
7 | - [Git](#git) and [GitHub account](#GitHub)
8 | - [Ubuntu Desktop workstation](#linux-operating-system)
9 | - [AWS account](#aws-account)
10 | - [PyCharm (or any other alternative)](#pycharm)
11 | - [Clone the course repo](#clone-the-course-repository-into-pycharm)
12 |
13 | ## GitHub
14 |
15 | The website you are visiting now is called **GitHub**.
16 | It's a platform where millions of developers from all around the world collaborate on projects and share code.
17 |
18 | Each project on GitHub is stored in something called a **Git repository**, or **repo** for short.
19 | A Git repository is like a folder that contains all the files and resources related to a project.
20 | These files can include code, images, documentation, and more.
21 |
22 | The content of this course, including all code files, tutorials, and projects, are also stored and provided to you as a Git repo.
23 |
24 | If you haven't already, please create a [GitHub account](https://github.com/).
25 |
26 | ## Linux Operating System
27 |
28 | In this course, we'll be using the Linux Operating System (**OS**). Windows won't be part of the party...
29 |
30 | Linux comes in various [distributions](https://en.wikipedia.org/wiki/Linux_distribution).
31 | We will be using **Ubuntu**, a widely-recognized Linux distribution known for its user-friendliness, stability, and extensive community support.
32 |
33 | The course materials were developed and tested with **Ubuntu 22.04** and **24.04**.
34 | For the optimal experience, we recommend using one of these versions.
35 |
36 | Below are the methods to install Ubuntu based on your preference:
37 |
38 | #### Virtualized Ubuntu using Hyper-V Manager (Windows Users)
39 |
40 | For Windows users, an effective way to run Ubuntu is by installing it on a virtual machine (VM) using **Hyper-V Manager**.
41 | This allows you to run Ubuntu alongside your Windows system without altering your existing setup.
42 |
43 | Hyper-V Manager is a built-in virtualization platform for Windows 10 Pro, Enterprise, and Education editions.
44 |
45 | Follow this tutorial to set up Ubuntu on Hyper-V Manager:
46 | https://ubuntu.com/server/docs/how-to-set-up-ubuntu-on-hyper-v
47 |
48 | Ensure your VM has at least **12GB of RAM** and **80GB of disk space**.
49 |
50 | #### Virtualized Ubuntu using VirtualBox
51 |
52 | Alternatively, you can set up Ubuntu using **VirtualBox**, another popular virtualization platform.
53 | VirtualBox offers a free license for personal, educational, and evaluation use.
54 |
55 | Follow this guide to install Ubuntu using VirtualBox:
56 | https://ubuntu.com/tutorials/how-to-run-ubuntu-desktop-on-a-virtual-machine-using-virtualbox
57 |
58 | Ensure your VM has at least **12GB of RAM** and **80GB of disk space**.
59 |
60 | ### Native Ubuntu Installation
61 |
62 | For those who prefer a more integrated experience, you can install Ubuntu directly on your machine, either as your primary OS or alongside an existing Windows installation.
63 |
64 | To install Ubuntu as your primary OS:
65 | https://ubuntu.com/tutorials/install-ubuntu-desktop
66 |
67 | ## Git
68 |
69 | **Git** is a Version Control System (**VCS**). It allows a team to collaborate on the same code project and save different versions of the code without interfering with each other.
70 | Git is the most popular VCS; you'll find it in almost every software project.
71 | 
72 | On your Ubuntu machine, install Git by following: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
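
If you prefer the command line, Git can also be installed from Ubuntu's package repositories (one of the methods described on the page above):

```bash
# Install Git from the Ubuntu package repositories
sudo apt update
sudo apt install -y git

# Verify the installation
git --version
```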
73 |
74 | As for the difference between **Git** and **GitHub**:
75 | Git is the tool used for managing the source code on your local machine, while GitHub is a platform that hosts Git projects (a **Hub**).
76 |
77 | ## AWS Account
78 |
79 | In this course, you'll be using AWS (Amazon Web Services) a lot.
80 |
81 | **Having access to an AWS account is a must.**
82 |
83 | We know the idea of cloud expenses can be a concern, but there's no need to worry - you are in good hands!
84 |
85 | Throughout the course, we'll clearly indicate when a step might incur cloud charges, so you'll understand exactly what you are paying for.
86 | We put a lot of effort into making sure you get the best from AWS while carefully selecting cost-effective resources and always looking for ways to save you money.
87 | In addition, you'll learn how to avoid unnecessary costs, and keep full control over your cloud spending.
88 |
89 | This course is designed for serious learners - think of it as preparation for real-world work in the industry.
90 | You'll gain practical skills, working with AWS the way professionals in the field do on a daily basis.
91 |
92 | AWS offers a [free tier](https://aws.amazon.com/free/) with plenty of free resources to get you started, so you can explore and experiment without any initial expense.
93 |
94 | To create an AWS account, go to [AWS sign up](https://aws.amazon.com/), click on "Create an AWS Account" and follow the prompts.
95 |
96 | ## PyCharm
97 |
98 | PyCharm is an **Integrated Development Environment (IDE)** for code development, with Python as its primary programming language.
99 |
100 | The course's content was written with PyCharm as the preferred IDE.
101 |
102 | You can use any other IDE of your choice (e.g. VSCode), but keep in mind that you may experience some differences in functionality and workflow compared to PyCharm.
103 | Furthermore, when it comes to Python programming, PyCharm reigns supreme - unless you enjoy arguing with your tools...
104 |
105 | > [!NOTE]
106 | > The last sentence was generated by ChatGPT. For me, there is nothing funny about the PyCharm vs VSCode debate.
107 |
108 | On your Ubuntu, install **PyCharm Community** from: https://www.jetbrains.com/pycharm/download/#section=linux (scroll down for community version).
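
Alternatively, PyCharm Community can usually be installed from the terminal as a snap package (the snap daemon ships with Ubuntu Desktop by default; the download page above remains the authoritative source):

```bash
# Install the PyCharm Community edition as a classic snap
sudo snap install pycharm-community --classic
```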
109 |
110 | ### Clone the course repository into PyCharm
111 |
112 | Cloning a GitHub project creates a copy of the repository on your local computer.
113 | 
114 | You'll clone the repository using the PyCharm UI:
115 |
116 | 1. Open PyCharm.
117 | - If no project is currently open, click **Get from VCS** on the Welcome screen.
118 | - If PyCharm is already open with an existing project, go to **Git | Clone** (or **VCS | Get from Version Control**).
119 |
120 | 2. In the **Get from Version Control** dialog, specify `https://github.com/exit-zero-academy/DevOpsTheHardWay.git`, the URL of our GitHub repository.
121 | 3. If you are not yet authenticated to GitHub, PyCharm will offer different authentication methods.
122 | We suggest choosing the **Use Token** option and clicking the **Generate** button to generate an authentication token in GitHub.
123 | After the token is generated on the GitHub website, copy it into the designated field in PyCharm.
124 | 4. In the **Trust and Open Project** security dialog, select **Trust Project**.
125 |
126 | At the end, you should have an open PyCharm project containing all the files and folders from the cloned GitHub repository, ready for you to work with.
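
If you ever prefer the terminal over the PyCharm UI, the same result can be achieved with a plain `git clone`, and then opening the resulting folder in PyCharm (**File | Open**):

```bash
# Clone the course repository into the current directory
git clone https://github.com/exit-zero-academy/DevOpsTheHardWay.git
```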
127 |
--------------------------------------------------------------------------------