├── ansible
│   ├── inventory
│   ├── kke-copy.yml
│   ├── kke-file.yml
│   ├── kke-archive.yml
│   ├── kke-unarchive.yml
│   ├── kke-ansible-ping-usage.md
│   ├── README.md
│   ├── Inventory-update.md
│   ├── kke-replace.yml
│   ├── kke-blockinfile.yml
│   ├── kke-conditionals.yml
│   ├── kke-lineinfile.yml
│   ├── kke-softlinks.yml
│   ├── kke-setup-httpd-php.yml
│   ├── kke-manage-acls.yml
│   └── Managing-Jinja2-Templates.md
├── git
│   ├── Fork-Repository.md
│   ├── Merge-Conflicts.md
│   ├── Manage-GIT-Pull-Requests.md
│   └── Setup-From-Scratch.md
├── docker
│   ├── Run-Docker-Container.md
│   ├── kke-docker-compose.yml
│   ├── Copy-Operations.md
│   ├── Create-Docker-Image-From-Container.md
│   ├── Create-Docker-Network.md
│   ├── Write-Docker-File.md
│   ├── Docker-Volumes-Mapping.md
│   ├── Exec-Operations.md
│   ├── Deploy-App.md
│   ├── README.md
│   └── Resolve-Dockerfile-Issues.md
├── puppet
│   ├── kke-install-puppet-agents.md
│   ├── kke-string-manipulation.pp
│   ├── kke-package-install.pp
│   ├── puppetserver-setup.md
│   ├── kke-create-file.pp
│   ├── kke-add-user.pp
│   ├── kke-manage-services.pp
│   ├── kke-setup-file-permissions.pp
│   ├── kke-create-symlinks.pp
│   ├── kke-manage-archives.pp
│   ├── kke-install-group-packages.pp
│   ├── kke-setup-database.pp
│   ├── README.md
│   ├── kke-local-yum-repo.pp
│   ├── kke-setup-ssh-keys.pp
│   ├── Setup-puppet-certs-autosign.md
│   └── kke-setup-firewall-rules.md
├── kubernetes
│   ├── kke-create-pods.yaml
│   ├── kke-deployment.yaml
│   ├── Roll-back-Deployment.md
│   ├── kke-replicaset.yaml
│   ├── kke-printenv.yaml
│   ├── kke-cronjobs.yaml
│   ├── kke-countdown.yaml
│   ├── kke-replication-controller.yaml
│   ├── kke-nodeaffnity.yaml
│   ├── kke-sidecar-containers.yaml
│   ├── kke-manage-secrets.yaml
│   ├── Rolling-updates.md
│   ├── kke-timecheckpod.yaml
│   ├── kke-nginx.yaml
│   ├── kke-persistent-volume.yaml
│   ├── kke-tomcat.yaml
│   ├── kke-node.yaml
│   ├── kke-nagios.yaml
│   ├── kke-jenkins.yaml
│   ├── kke-shared-volumes.yaml
│   ├── kke-grafana.yaml
│   ├── kke-nginx-phpfpm.yaml
│   ├── kke-init-containers.yaml
│   ├── kke-envvars-kubernetes.yaml
│   ├── kke-jekyll.yaml
│   ├── Rolling-updates-Rolling-back-Deployments.md
│   ├── kke-irongallery.yaml
│   ├── kke-guest-app.yaml
│   ├── kke-redis.yaml
│   ├── kke-voting-app.yaml
│   ├── README.md
│   └── kke-haproxy.yaml
├── jenkins
│   ├── Install-Plugins.md
│   ├── Configure-Security-Settings-for-a-Project.md
│   ├── Install-Jenkins-Server.md
│   ├── Create-Views.md
│   ├── Create-Parameterized-Builds.md
│   ├── Create-Users-In-Jenkins.md
│   ├── README.md
│   ├── Add-Slave-Nodes.md
│   ├── Jenkins-Workspaces.md
│   ├── Install-packages-using-Jenkins-Job.md
│   ├── Create-Scheduled-Builds.md
│   ├── Single-Stage-Pipeline.md
│   ├── Multi-Stage-Pipeline.md
│   ├── Deployment-Using-Jenkins.md
│   └── Create-Chained-Builds.md
├── linux
│   ├── Linux-Network-Services.md
│   ├── Yum-local-repos.md
│   ├── PAM-Authentication-for-Apache.md
│   ├── Setup-and-configure-iptables.md
│   ├── Install-and-configure-NFS-Server.md
│   ├── Linux-Firewalld-setup.md
│   ├── Install-and-configure-SFTP.md
│   ├── Install-and-configure-PostgreSQL.md
│   ├── Install-and-configure-WebApp.md
│   ├── Install-and-configure-DB-Server.md
│   ├── README.md
│   └── Install-and-configure-PHPFPM.md
└── README.md
/ansible/inventory:
--------------------------------------------------------------------------------
1 | stapp01 ansible_host=172.16.238.10 ansible_connection=ssh ansible_user=tony ansible_ssh_pass=Ir0nM@n
2 | stapp02 ansible_host=172.16.238.11 ansible_connection=ssh ansible_user=steve ansible_ssh_pass=Am3ric@
3 | stapp03 ansible_host=172.16.238.12 ansible_connection=ssh ansible_user=banner ansible_ssh_pass=BigGr33n
--------------------------------------------------------------------------------
/git/Fork-Repository.md:
--------------------------------------------------------------------------------
1 | # GIT Fork Repository
2 | ## Introduction
3 | This task is purely done on the GITEA UI.
Task involves the `jon` user logging in and forking the repository `sarah/story-blog`.
4 |
5 | ## Solution
6 | As this task is fully done in the GITEA UI, you can refer to the below screen recording:
7 | [Video Solution](https://youtu.be/SiMj8vDFAv4)
8 |
9 | ---
10 | For tips on getting better at KodeKloud Engineer tasks, [click here](.././README.md)
--------------------------------------------------------------------------------
/git/Merge-Conflicts.md:
--------------------------------------------------------------------------------
1 | # GIT Merge Conflicts
2 | ## Introduction
3 | This task is purely done on the GITEA UI. Task involves `max` resolving merge conflicts for the changes he has done on the local copy of the remote repo `sarah/story-blog`.
4 |
5 | ## Solution
6 | As this task is fully done in the GITEA UI, you can refer to the below screen recording:
7 | [Video Solution](https://youtu.be/wFjpdzqnK0Y)
8 |
9 | ---
10 | For tips on getting better at KodeKloud Engineer tasks, [click here](.././README.md)
--------------------------------------------------------------------------------
/docker/Run-Docker-Container.md:
--------------------------------------------------------------------------------
1 | # Run a Docker Container
2 | ## Solution
3 | * First SSH to the required server as per the question
4 | * Next, run the docker container with the respective name and image
5 | `sudo docker run -d --name nginx_3 -p 8888:80 nginx:alpine`
6 | * Verify that the container is running: `sudo docker ps`
7 |
8 | ## Verification
9 | * Run curl to verify that you are getting back valid HTML: `curl http://localhost:8888/`
10 |
11 | ---
12 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md)
--------------------------------------------------------------------------------
/git/Manage-GIT-Pull-Requests.md:
--------------------------------------------------------------------------------
1 | # Manage GIT Pull Requests
2 | ## Introduction
3 |
This task is purely done on the GITEA UI. Task involves
4 | 1. `max` user logging in and creating a pull request
5 | 2. `max` user adding `tom` as the reviewer for this pull request, and
6 | 3. `tom` logging in and merging this pull request
7 |
8 | ## Solution
9 | As this task is fully done in the GITEA UI, you can refer to the below screen recording:
10 | [Video Solution](https://youtu.be/PeuEln_rdys)
11 |
12 | ---
13 | For tips on getting better at KodeKloud Engineer tasks, [click here](.././README.md)
--------------------------------------------------------------------------------
/puppet/kke-install-puppet-agents.md:
--------------------------------------------------------------------------------
1 | # Install Puppet Agent
2 | ## Solution
3 | * SSH to the required appserver host first
4 | * Enable the puppet repo and install puppet-agent
5 | ```UNIX
6 | sudo rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
7 | sudo yum install puppet-agent -y
8 | ```
9 | * Start the puppet agent: `sudo systemctl start puppet`
10 |
11 | ## Verification
12 | * Verify that the puppet service has started: `sudo systemctl status puppet`
13 |
14 | ---
15 | For general tips on getting better at KodeKloud Engineer Puppet tasks, [click here](./README.md)
16 |
17 |
--------------------------------------------------------------------------------
/ansible/kke-copy.yml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Simply save this file as playbook.yml in the required folder
3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml`
4 | # Step 3: Verify: Check that the files have been copied correctly by running
5 | #         ansible all -a "ls -ltr /opt/security/" -i inventory
6 | #
7 | # For tips on getting better at Ansible tasks, check out the README.md
8 | # in this folder
9 | #
10 | - name: Ansible copy
11 |   hosts: appservers
12 |   become: yes
13 |   tasks:
14 |     - name: copy index.html to security folder
15 |       copy: src=/usr/src/security/index.html dest=/opt/security
16 |
--------------------------------------------------------------------------------
/docker/kke-docker-compose.yml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file as docker-compose.yml in the required directory
3 | # Step 2: Run `docker-compose up`. Wait for the container to be up.
4 | # Step 3: Verify: Open another terminal and run `curl http://localhost:3002`
5 | #         You should get valid HTML content back
6 | #
7 | # For tips on getting better at Docker tasks, check out the README.md
8 | # in this folder
9 | #
10 | version: "3.3"
11 | services:
12 |   httpd:
13 |     image: httpd:latest
14 |     container_name: httpd
15 |     ports:
16 |       - "3002:80"
17 |     volumes:
18 |       - /opt/data:/usr/local/apache2/htdocs
--------------------------------------------------------------------------------
/docker/Copy-Operations.md:
--------------------------------------------------------------------------------
1 | # Docker Copy Operations
2 | ## Solution
3 | * First SSH to the required server as per the question
4 | * Next perform a docker copy of the file on the host to the container specified.
In the below example, a local file called `nautilus.txt.gpg` was asked to be copied to the `/home` location in the container `ubuntu_latest`:
5 | `sudo docker cp /tmp/nautilus.txt.gpg ubuntu_latest:/home`
6 | * Verify that the file has been copied successfully using the following command:
7 | `sudo docker exec ubuntu_latest ls -ltr /home`
8 | ---
9 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md)
10 |
--------------------------------------------------------------------------------
/kubernetes/kke-create-pods.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Change the pod/container names and images as per question
3 | # Step 2: kubectl create -f <file-name>
4 | # Step 3: Make sure the pod is in Running state
5 | # Step 4: Verify: kubectl exec pod-nginx -- curl http://localhost
6 | #         You should see the default Nginx HTML page returned
7 | #
8 | # For tips on getting better at Kubernetes tasks, check out the README.md
9 | # in this folder
10 | #
11 | apiVersion: v1
12 | kind: Pod
13 | metadata:
14 |   name: pod-nginx
15 |   labels:
16 |     app: nginx_app
17 | spec:
18 |   containers:
19 |     - name: nginx-container
20 |       image: nginx:latest
21 |
--------------------------------------------------------------------------------
/kubernetes/kke-deployment.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl create -f <file-name>
3 | # Step 2: Make sure the pod is in Running state. Note down the name i.e. httpd-xxxxxx
4 | # Step 3: Verify: `kubectl logs httpd-xxxx` to see Apache started up
5 | #
6 | # For tips on getting better at Kubernetes tasks, check out the README.md
7 | # in this folder
8 | #
9 | apiVersion: apps/v1
10 | kind: Deployment
11 | metadata:
12 |   name: httpd
13 | spec:
14 |   selector:
15 |     matchLabels:
16 |       app: httpd
17 |   template:
18 |     metadata:
19 |       labels:
20 |         app: httpd
21 |     spec:
22 |       containers:
23 |         - name: httpd-container
24 |           image: httpd:latest
25 |
--------------------------------------------------------------------------------
/docker/Create-Docker-Image-From-Container.md:
--------------------------------------------------------------------------------
1 | # Create Docker Image from Container
2 | ## Solution
3 | * First SSH to the required server as per the question
4 | * Make sure the container in question e.g. `ubuntu_latest` is running:
5 | `sudo docker ps`
6 | * Create an image with the required name:tag (as per question) from the running container (as per question) using the below command:
7 | `sudo docker commit ubuntu_latest beta:devops`
8 |
9 | ## Verification
10 | * Check that the newly created image is present in the local registry:
11 | `sudo docker image ls`
12 | You should see the new `image:tag` listed
13 |
14 | ---
15 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md)
16 |
--------------------------------------------------------------------------------
/kubernetes/Roll-back-Deployment.md:
--------------------------------------------------------------------------------
1 | # Rollback a deployment in Kubernetes
2 | ## Solution
3 | * Run describe and note down the current image version that has been deployed e.g.
`kubectl describe deployment httpd-deploy`
4 | * Now roll back the deployment as per the question:
5 | `kubectl rollout undo deployment httpd-deploy`
6 | * Wait until you see the `deployment 'httpd-deploy' rolled back` message
7 | * Make sure all the pods are in the Running state: `kubectl get deployments httpd-deploy`
8 |
9 | ## Verification
10 | * Run describe to verify that the image version has been rolled back e.g. `kubectl describe deployment httpd-deploy`
11 |
12 | ---
13 | For tips on getting better at KodeKloud Engineer Kubernetes tasks, [click here](./README.md)
--------------------------------------------------------------------------------
/puppet/kke-string-manipulation.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g. news.pp
4 | # Step 2: Perform validation steps as per my guide ../puppet/README.md
5 | # Step 3: SSH to stapp01 and verify the content of /opt/dba/beta.txt
6 | #
7 | # For tips on getting better at Puppet tasks, check out the README.md
8 | # in this folder
9 | #
10 | class data_replacer {
11 |   file_line { 'line_replace':
12 |     path  => '/opt/dba/beta.txt',
13 |     match => 'Welcome to Nautilus Industries!',
14 |     line  => 'Welcome to xFusionCorp Industries!',
15 |   }
16 | }
17 |
18 | node 'stapp01.stratos.xfusioncorp.com' {
19 |   include data_replacer
20 | }
21 |
--------------------------------------------------------------------------------
/ansible/kke-file.yml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Simply save this file as playbook.yml in the required folder
3 | # Step 2: Edit the file path, filename and permissions accordingly
4 | # Step 3: Run `ansible-playbook -i inventory playbook.yml`
5 | # Step 4: Verify: Check that the file has been created by running
6 | #         ansible all -a "ls -ltr /tmp/" -i inventory
7 | #
8 | # For tips on getting better at Ansible tasks, check out the README.md
9 | # in this folder
10 | #
11 | - name: Create file in appservers
12 |   hosts: stapp01, stapp02, stapp03
13 |   become: yes
14 |   tasks:
15 |     - name: Create the file and set properties
16 |       file:
17 |         path: /tmp/app.txt
18 |         owner: "{{ ansible_user }}"
19 |         group: "{{ ansible_user }}"
20 |         mode: "0655"
21 |         state: touch
22 |
--------------------------------------------------------------------------------
/ansible/kke-archive.yml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Simply save this file as playbook.yml
3 | # Step 2: ansible-playbook -i inventory playbook.yml
4 | # Step 3: Verify: Check that the archive exists in the target directory with correct owner/group
5 | #         ansible all -a "ls -ltr /opt/devops/" -i inventory
6 | #
7 | # For tips on getting better at Ansible tasks, check out the README.md
8 | # in this folder
9 | #
10 | - name: Create archive and copy
11 |   hosts: stapp01, stapp02, stapp03
12 |   become: yes
13 |   tasks:
14 |     - name: Create the archive and set the owner
15 |       archive:
16 |         path: /usr/src/devops/
17 |         dest: /opt/devops/official.tar.gz
18 |         format: gz
19 |         force_archive: true
20 |         owner: "{{ ansible_user }}"
21 |         group: "{{ ansible_user }}"
22 |
--------------------------------------------------------------------------------
/ansible/kke-unarchive.yml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Simply save this file as playbook.yml
3 | # Step 2: ansible-playbook -i inventory playbook.yml
4 | # Step 3: Verify: Check that the extracted directory exists in the target location
5 | #         with correct owner/group
6 | #         ansible all -a "ls -ltr /opt/devops/" -i inventory
7 | #
8 | # For tips on getting better at Ansible tasks, check out the README.md
9 | # in this folder
10 | #
11 | - name: Extract archive
12 |   hosts: stapp01, stapp02, stapp03
13 |   become: yes
14 |   tasks:
15 |     - name: Extract the archive and set the owner/permissions
16 |       unarchive:
17 |         src: /usr/src/devops/nautilus.zip
18 |         dest: /opt/devops
19 |         owner: "{{ ansible_user }}"
20 |         group: "{{ ansible_user }}"
21 |         mode: "0755"
22 |
--------------------------------------------------------------------------------
/puppet/kke-package-install.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g. news.pp
4 | # Step 2: Run the puppet verification steps mentioned in puppet/README.md
5 | # Step 3: Verify: Finally, SSH to each host and run `sudo puppet agent -tv`
6 | #         Run `sudo systemctl start nginx` to check the service exists
7 | #
8 | # For tips on getting better at Puppet tasks, check out the README.md
9 | # in this folder
10 | #
11 | class nginx_installer {
12 |   package { 'nginx':
13 |     ensure => installed
14 |   }
15 | }
16 |
17 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' {
18 |   include nginx_installer
19 | }
20 |
--------------------------------------------------------------------------------
/puppet/puppetserver-setup.md:
--------------------------------------------------------------------------------
1 | # Puppetserver Setup
2 | ## Solution
3 | * Install the puppetserver first
4 | ```UNIX
5 | rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
6 | yum install puppetserver -y
7 | ```
8 | * Edit the puppetserver configuration at `/etc/sysconfig/puppetserver` and update the JAVA_ARGS as required: `JAVA_ARGS="-Xms512m -Xmx512m ...."`
9 |
10 | * Start the puppetserver
11 | ```
12 | systemctl start puppetserver
13 | systemctl enable puppetserver
14 | systemctl status puppetserver
15 | ```
16 |
17 | ## Verification
18 | * Ensure puppetserver is installed correctly by checking its version:
19 | `/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver -v`
20 |
21 | ---
22 | For general tips on getting better at KodeKloud Engineer Puppet tasks, [click here](./README.md)
23 |
24 |
--------------------------------------------------------------------------------
/ansible/kke-ansible-ping-usage.md:
--------------------------------------------------------------------------------
1 | # Ansible Ping Module
2 | ## Solution
3 | * First generate an SSH key by running `ssh-keygen` on the Jump Host as:
4 | `ssh-keygen -t rsa -b 2048`
5 | * Next, use `ssh-copy-id` to set up password-less authentication to the host specified in the question.
6 | `ssh-copy-id tony@stapp01` (In this example, the question asks for stapp01)
7 | * Make sure you are able to SSH to the host as the sudo user without being prompted for a password:
8 | `ssh tony@stapp01` (No password prompt is seen)
9 | * Finally, change to the `/home/thor/ansible` directory and test using the Ansible ad-hoc ping command as:
10 | `ansible stapp01 -m ping -i inventory -v`
11 | You should see the message 'SUCCESS' in the output
12 |
13 | ---
14 | For tips on getting better at KodeKloud Engineer Ansible tasks, [click here](./README.md)
--------------------------------------------------------------------------------
/docker/Create-Docker-Network.md:
--------------------------------------------------------------------------------
1 | # Create Docker Network
2 | ## Solution
3 | * First SSH to the required server as per the question
4 | * Next [create a docker network](https://docs.docker.com/engine/reference/commandline/network_create/) based on the values provided in the question.
In this example, a Docker network named `blog` was asked to be created as a `macvlan`-type network with subnet `192.168.0.0/24` and IP range `192.168.0.3/24`:
5 | ```
6 | sudo docker network create -d macvlan --subnet=192.168.0.0/24 --ip-range=192.168.0.3/24 blog
7 | ```
8 | * Verify that the network has been created successfully using the following commands:
9 | ```
10 | sudo docker network ls
11 | sudo docker network inspect blog
12 | ```
13 | ---
14 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md)
--------------------------------------------------------------------------------
/puppet/kke-create-file.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g. apps.pp
4 | # Step 2: Change filename, directory and node values according to question
5 | # Step 3: Run the puppet verification steps mentioned in ../puppet/README.md
6 | # Step 4: Verify: Run the puppet agent on the required host and check that /opt/finance has
7 | #         the official.txt file in it
8 | #
9 | # For tips on getting better at Puppet tasks, check out the README.md
10 | # in this folder
11 | #
12 | class file_creator {
13 |   # Now create official.txt under /opt/finance
14 |   file { '/opt/finance/official.txt':
15 |     ensure => 'present',
16 |   }
17 | }
18 |
19 | node 'stapp03.stratos.xfusioncorp.com' {
20 |   include file_creator
21 | }
22 |
--------------------------------------------------------------------------------
/puppet/kke-add-user.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g.
apps.pp
4 | # Step 2: Change username and uid according to question
5 | # Step 3: Run the puppet verification steps mentioned in ../puppet/README.md
6 | # Step 4: Verify: Run the puppet agent on the required host and check that the user
7 | #         has been added i.e. 'cat /etc/passwd | grep anita'
8 | #
9 | # For tips on getting better at Puppet tasks, check out the README.md
10 | # in this folder
11 | #
12 | class user_creator {
13 |   user { 'anita':
14 |     ensure => present,
15 |     uid    => 1553,
16 |   }
17 | }
18 |
19 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' {
20 |   include user_creator
21 | }
22 |
--------------------------------------------------------------------------------
/kubernetes/kke-replicaset.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl create -f <file-name>
3 | # Step 2: Wait for the pods to be in running state. Note down a pod name.
4 | # Step 3: Verify: kubectl exec <pod-name> -- curl http://localhost/
5 | #         You should see valid HTML content being returned
6 | #
7 | # For tips on getting better at Kubernetes tasks, check out the README.md
8 | # in this folder
9 | #
10 | apiVersion: apps/v1
11 | kind: ReplicaSet
12 | metadata:
13 |   name: nginx-replicaset
14 |   labels:
15 |     app: nginx_app
16 |     type: front-end
17 | spec:
18 |   replicas: 4
19 |   selector:
20 |     matchLabels:
21 |       app: nginx_app
22 |   template:
23 |     metadata:
24 |       labels:
25 |         app: nginx_app
26 |         type: front-end
27 |     spec:
28 |       containers:
29 |         - name: nginx-container
30 |           image: nginx:latest
31 |
--------------------------------------------------------------------------------
/kubernetes/kke-printenv.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl create -f <file-name>
3 | # Step 2: Make sure the pod is in Running state
4 | # Step 3: Verify: `kubectl logs print-envars-greeting`
5 | #         You should see 'Welcome to xFusionCorp Industries' printed by the Pod
6 | #
7 | # For tips on getting better at Kubernetes tasks, check out the README.md
8 | # in this folder
9 | #
10 | apiVersion: v1
11 | kind: Pod
12 | metadata:
13 |   name: print-envars-greeting
14 |   labels:
15 |     name: print-envars-greeting
16 | spec:
17 |   containers:
18 |     - name: print-env-container
19 |       image: bash
20 |       env:
21 |         - name: GREETING
22 |           value: "Welcome to"
23 |         - name: COMPANY
24 |           value: "xFusionCorp"
25 |         - name: GROUP
26 |           value: "Industries"
27 |       command: ["echo"]
28 |       args: ["$(GREETING) $(COMPANY) $(GROUP)"]
--------------------------------------------------------------------------------
/jenkins/Install-Plugins.md:
--------------------------------------------------------------------------------
1 | # Install Plugins in Jenkins
2 | ## Solution
3 | * First open the Jenkins Admin Console by clicking `+ Open Port on Host` and specifying the given port
4 | * Login using the Admin credentials given in the question
5 | * Click `Jenkins > Manage Jenkins > Manage Plugins` and click the `Available` tab.
6 | * Search for `Git`. You will see multiple matches. Select the `Git` and `Gitlab` plugins and click `Download now and install after restart`
7 | * In the following screen, tick the checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the screen to come to a standstill
8 | * You can try to refresh your browser.
9 |
10 | ## Verification
11 | * Again go back to `Jenkins > Manage Jenkins > Manage Plugins > Installed` to check `Git` and `Git Lab` are listed.
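As a cross-check outside the UI, Jenkins also exposes the installed-plugin list through its JSON API (commonly `/pluginManager/api/json?depth=1`), which is handy when the UI is still restarting. The snippet below only demonstrates filtering such a response for plugin names — the JSON is a hand-written sample of the response shape, and the host, port, credentials and plugin short names are placeholders/illustrative, not values from the task:

```shell
# On a real controller you would fetch the JSON with something like:
#   curl -s -u <admin-user>:<password> 'http://<jenkins-host>:<port>/pluginManager/api/json?depth=1'
# The variable below holds a hand-written sample of that response shape
# (plugin short names are illustrative).
response='{"plugins":[{"shortName":"git","active":true},{"shortName":"gitlab-plugin","active":true}]}'

# Pull out just the shortName fields to eyeball the installed plugins.
printf '%s\n' "$response" | grep -o '"shortName":"[^"]*"'
```

With the sample response this prints `"shortName":"git"` and `"shortName":"gitlab-plugin"`, one per line.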
12 |
13 | ---
14 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md)
15 |
--------------------------------------------------------------------------------
/puppet/kke-manage-services.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g. news.pp
4 | # Step 2: Run the puppet verification steps mentioned in puppet/README.md
5 | # Step 3: Verify: Finally, SSH to each host and run `sudo puppet agent -tv`
6 | #         Run `sudo systemctl status nginx` to check the service is up
7 | #
8 | # For tips on getting better at Puppet tasks, check out the README.md
9 | # in this folder
10 | #
11 | class nginx_installer {
12 |   package { 'nginx':
13 |     ensure => installed
14 |   }
15 |
16 |   service { 'nginx':
17 |     ensure => running,
18 |     enable => true,
19 |   }
20 | }
21 |
22 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' {
23 |   include nginx_installer
24 | }
25 |
--------------------------------------------------------------------------------
/kubernetes/kke-cronjobs.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl create -f <file-name>
3 | # Step 2: 'kubectl get cronjob': Check the cronjob is created
4 | # Step 3: 'kubectl get pod': Note down the pod name i.e. datacenter-xxxx
5 | # Step 4: Verify: `kubectl logs datacenter-xxxx`
6 | #         You should see the echo message below printed
7 | #
8 | # For tips on getting better at Kubernetes tasks, check out the README.md
9 | # in this folder
10 | #
11 | apiVersion: batch/v1beta1
12 | kind: CronJob
13 | metadata:
14 |   name: datacenter
15 | spec:
16 |   schedule: "*/9 * * * *"
17 |   jobTemplate:
18 |     spec:
19 |       template:
20 |         spec:
21 |           containers:
22 |             - name: cron-datacenter
23 |               image: httpd:latest
24 |               command:
25 |                 - /bin/sh
26 |                 - -c
27 |                 - echo Welcome to xfusioncorp
28 |           restartPolicy: OnFailure
29 |
--------------------------------------------------------------------------------
/kubernetes/kke-countdown.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl create -f <file-name>
3 | # Step 2: Make sure the pod is in Running state. Note down the name i.e. countdown-datacenter-xxxx
4 | # Step 3: Verify: `kubectl logs countdown-datacenter-xxxx`
5 | #         You should see the values in args below printed
6 | #
7 | # For tips on getting better at Kubernetes tasks, check out the README.md
8 | # in this folder
9 | #
10 | apiVersion: batch/v1
11 | kind: Job
12 | metadata:
13 |   name: countdown-datacenter
14 | spec:
15 |   template:
16 |     metadata:
17 |       name: countdown-datacenter
18 |     spec:
19 |       containers:
20 |         - name: container-countdown-datacenter
21 |           image: centos:latest
22 |           command: ["/bin/sh", "-c"]
23 |           args:
24 |             [
25 |               "for i in ten nine eight seven six five four three two one ; do echo $i ; done",
26 |             ]
27 |       restartPolicy: Never
28 |
--------------------------------------------------------------------------------
/kubernetes/kke-replication-controller.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl create -f <file-name>
3 | # Step 2: Wait for the pods to be in running state. Note down a pod name.
4 | # Step 3: Verify: kubectl exec <pod-name> -- curl http://localhost/
5 | #         You should see valid HTML content being returned
6 | #
7 | # For tips on getting better at Kubernetes tasks, check out the README.md
8 | # in this folder
9 | #
10 | apiVersion: v1
11 | kind: ReplicationController
12 | metadata:
13 |   name: nginx-replicationcontroller
14 |   labels:
15 |     app: nginx_app
16 |     type: front-end
17 | spec:
18 |   replicas: 3
19 |   selector:
20 |     app: nginx_app
21 |   template:
22 |     metadata:
23 |       name: nginx-pod
24 |       labels:
25 |         app: nginx_app
26 |         type: front-end
27 |     spec:
28 |       containers:
29 |         - name: nginx-container
30 |           image: nginx:latest
31 |           ports:
32 |             - containerPort: 80
33 |
--------------------------------------------------------------------------------
/puppet/kke-setup-file-permissions.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g.
apps.pp
4 | # Step 2: Change file, permissions and node values according to question
5 | # Step 3: Run the puppet verification steps mentioned in ../puppet/README.md
6 | # Step 4: Verify: Run the puppet agent on the required host and check that
7 | #         /opt/finance/ecommerce.txt has the correct content and permissions
8 | #
9 | # For tips on getting better at Puppet tasks, check out the README.md
10 | # in this folder
11 | #
12 | class file_modifier {
13 |   # Update ecommerce.txt under /opt/finance
14 |   file { '/opt/finance/ecommerce.txt':
15 |     ensure  => 'present',
16 |     content => 'Welcome to xFusionCorp Industries!',
17 |     mode    => '0655',
18 |   }
19 | }
20 |
21 | node 'stapp02.stratos.xfusioncorp.com' {
22 |   include file_modifier
23 | }
24 |
--------------------------------------------------------------------------------
/puppet/kke-create-symlinks.pp:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests
3 | #         as a file with the name specified in the question e.g.
news.pp
4 | # Step 2: Run the puppet verification steps mentioned in ../puppet/README.md
5 | # Step 3: Verify: After completing the runs in each agent, check that /var/www/html has
6 | #         the media.txt file in it
7 | #
8 | # For tips on getting better at Puppet tasks, check out the README.md
9 | # in this folder
10 | #
11 | class symlink {
12 |   # First create a symlink to /var/www/html
13 |   file { '/opt/devops':
14 |     ensure => 'link',
15 |     target => '/var/www/html',
16 |   }
17 |
18 |   # Now create media.txt under /opt/devops
19 |   file { '/opt/devops/media.txt':
20 |     ensure => 'present',
21 |   }
22 | }
23 |
24 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' {
25 |   include symlink
26 | }
27 |
--------------------------------------------------------------------------------
/kubernetes/kke-nodeaffnity.yaml:
--------------------------------------------------------------------------------
1 | #
2 | # Step 1: kubectl label nodes node01 color=pink
3 | # Step 2: kubectl create -f <file-name>
4 | # Step 3: Verify: Wait until the pod is in running state on node01
5 | #
6 | # For tips on getting better at Kubernetes tasks, check out the README.md
7 | # in this folder
8 | #
9 | apiVersion: apps/v1
10 | kind: Deployment
11 | metadata:
12 |   name: pink
13 | spec:
14 |   replicas: 3
15 |   selector:
16 |     matchLabels:
17 |       app: nginx-pod
18 |   template:
19 |     metadata:
20 |       labels:
21 |         app: nginx-pod
22 |     spec:
23 |       affinity:
24 |         nodeAffinity:
25 |           requiredDuringSchedulingIgnoredDuringExecution:
26 |             nodeSelectorTerms:
27 |               - matchExpressions:
28 |                   - key: color
29 |                     operator: In
30 |                     values:
31 |                       - pink
32 |       containers:
33 |         - name: nginx-container
34 |           image: nginx:latest
35 |
--------------------------------------------------------------------------------
/ansible/README.md:
--------------------------------------------------------------------------------
1 | # Ansible Tasks
2 | ## General Ansible Tips
3 | * Dry-run your code by running
`ansible-playbook --check` e.g. `ansible-playbook foo.yml --check` 4 | * Always verify the successful completion of tasks using the following steps: 5 | * Run the actual code, log in to the target hosts and verify whether the required changes are complete 6 | * An easier, time-saving way is to run verification commands on multiple hosts from the Jump Host itself, using an Ansible ad-hoc command (run in the same directory as the `inventory` file) as below: 7 | * `ansible <host-pattern> -a "<command>" -i <inventory-file>` 8 | * Examples: 9 | * `ansible stapp01 -a "ls -ltr /var/www/html" -i inventory` 10 | * `ansible all -a "cat /opt/data/blog.txt" -i inventory` ('all' is a special keyword that runs the specified command on all managed hosts) 11 | 12 | --- 13 | For general tips on getting better at KodeKloud Engineer tasks, [click here](../README.md) -------------------------------------------------------------------------------- /ansible/Inventory-update.md: -------------------------------------------------------------------------------- 1 | # Ansible Inventory Update 2 | ## Solution 3 | * First edit the file `/home/thor/playbook/inventory` to include the required host e.g. stapp02 4 | ``` 5 | stapp02 ansible_host=172.16.238.11 ansible_connection=ssh ansible_user=steve ansible_ssh_pass=Am3ric@ 6 | ``` 7 | If other hosts are asked, refer to the full inventory here: [Full Inventory](./inventory) 8 | 9 | ## Verification 10 | * Run the playbook without making any changes: `ansible-playbook -i inventory playbook.yml` 11 | * Verify that the playbook changes are done on the required host by running the `ansible` command on the Jump Host itself. Modify the command portion in double quotes as per the `playbook.yml` e.g. check that httpd is installed and started by running `systemctl status httpd` 12 | ``` 13 | ansible all -i inventory -a "systemctl status httpd" 14 | ``` 15 | You should see the required output. e.g.
Systemctl service status 16 | 17 | --- 18 | For tips on getting better at KodeKloud Engineer Ansible tasks, [click here](./README.md) -------------------------------------------------------------------------------- /puppet/kke-manage-archives.pp: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests 3 | # as a file with the name specified in the question e.g. media.pp 4 | # Step 2: Run the puppet verification steps mentioned in ./README.md 5 | # Step 3: Verify: Finally, SSH to each host and run `sudo puppet agent -tv` 6 | # Run `ls /opt/media/` to check that the extracted contents exist 7 | # 8 | # For tips on getting better at Puppet tasks, check out the README.md 9 | # in this folder 10 | # 11 | class archive_extractor { 12 | # Copy media.zip to /tmp directory to extract and then cleanup afterwards 13 | archive { '/tmp/media.zip': 14 | source => '/usr/src/media/media.zip', 15 | extract => true, 16 | extract_path => '/opt/media', 17 | cleanup => true, 18 | } 19 | } 20 | 21 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' { 22 | include archive_extractor 23 | } 24 | -------------------------------------------------------------------------------- /puppet/kke-install-group-packages.pp: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Install the puppet-yum module with 'puppet module install puppet-yum' 3 | # Step 2: Save this file under /etc/puppetlabs/code/environments/production/manifests 4 | # as a file with the name specified in the question e.g.
cluster.pp 5 | # Step 3: Replace the class name and group package values in this file as per the question 6 | # Step 4: After performing the puppet code validation steps as specified in my guide, execute 7 | # the code by running `sudo puppet agent -tv` in all appserver hosts 8 | # Step 5: Finally run 'sudo yum group list | grep Installed -A 1' to 9 | # check if the package group has been installed 10 | # 11 | # For tips on getting better at Puppet tasks, check out the README.md 12 | # in this folder 13 | # 14 | class yum_group { 15 | yum::group { 'Development Tools': 16 | ensure => present, 17 | } 18 | } 19 | 20 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' { 21 | include yum_group 22 | } 23 | -------------------------------------------------------------------------------- /docker/Write-Docker-File.md: -------------------------------------------------------------------------------- 1 | # Write Docker file 2 | ## Solution 3 | * First SSH to the required server as per the question 4 | * Next, edit the Dockerfile in the given location 5 | * Make sure the port 5003 is replaced as per the question 6 | ```Dockerfile 7 | FROM ubuntu 8 | RUN apt-get update 9 | RUN apt-get install apache2 -y 10 | RUN sed -i "s/80/5003/g" /etc/apache2/ports.conf 11 | EXPOSE 5003 12 | CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND", "-k", "start"] 13 | ``` 14 | ## Verification 15 | * First try to build an image using this Dockerfile using `sudo docker build -t my_image .` 16 | * You should see that the image gets built successfully without any errors 17 | * Next, try to run the image as `sudo docker run --name my_srv -p 5003:5003 -d my_image` (Change 5003 to the port asked in the question). It should run without any errors. 18 | * Lastly, test using curl as `curl http://localhost:5003`. You should see HTML content returned from the container.
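The `sed` substitution baked into the Dockerfile above can be sanity-checked outside Docker on a throwaway copy of the config; the `ports.conf` contents below are assumed for illustration only:

```shell
# Sketch: exercise the same sed substitution the Dockerfile runs,
# against a sample ports.conf (contents assumed for illustration).
cat > /tmp/ports.conf <<'EOF'
Listen 80
<VirtualHost *:80>
</VirtualHost>
EOF

# Same command as the RUN instruction, pointed at the sample file
sed -i "s/80/5003/g" /tmp/ports.conf
grep 5003 /tmp/ports.conf
```

Keep in mind that `s/80/5003/g` rewrites every `80` in the file, so a config that also mentions ports like `8080` would be mangled; anchoring the pattern (e.g. `s/\b80\b/5003/g` with GNU sed) is a safer variant.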
19 | 20 | --- 21 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md) 22 | -------------------------------------------------------------------------------- /kubernetes/kke-sidecar-containers.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create -f <file> 3 | # Step 2: Make sure the pod is in Running state 4 | # Step 3: Verify: There's limited verification that can be performed on this task. 5 | # It's just sufficient to ensure that the Pod is up and running. 6 | # 7 | # For tips on getting better at Kubernetes tasks, check out the README.md 8 | # in this folder 9 | # 10 | apiVersion: v1 11 | kind: Pod 12 | metadata: 13 | name: webserver 14 | labels: 15 | name: webserver 16 | spec: 17 | volumes: 18 | - name: shared-logs 19 | emptyDir: {} 20 | containers: 21 | - name: nginx-container 22 | image: nginx:latest 23 | volumeMounts: 24 | - name: shared-logs 25 | mountPath: /var/log/nginx 26 | - name: sidecar-container 27 | image: ubuntu:latest 28 | command: 29 | [ 30 | "/bin/bash", 31 | "-c", 32 | "while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done", 33 | ] 34 | volumeMounts: 35 | - name: shared-logs 36 | mountPath: /var/log/nginx 37 | -------------------------------------------------------------------------------- /ansible/kke-replace.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as playbook.yml in the required folder 3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml` 4 | # Step 3: Verify: Check that the files are updated by running 5 | # ansible all -m shell -a "cat /opt/sysops/*.txt" -i inventory 6 | # 7 | # For tips on getting better at Ansible tasks, check out the README.md 8 | # in this folder 9 | # 10 | - name: Ansible replace 11 | hosts: stapp01,stapp02,stapp03 12 | become: yes 13 | tasks: 14 | - name: blog.txt replacement 15 | replace: 16 | path:
/opt/sysops/blog.txt 17 | regexp: "xFusionCorp" 18 | replace: "Nautilus" 19 | when: inventory_hostname == "stapp01" 20 | - name: story.txt replacement 21 | replace: 22 | path: /opt/sysops/story.txt 23 | regexp: "Nautilus" 24 | replace: "KodeKloud" 25 | when: inventory_hostname == "stapp02" 26 | - name: media.txt replacement 27 | replace: 28 | path: /opt/sysops/media.txt 29 | regexp: "KodeKloud" 30 | replace: "xFusionCorp Industries" 31 | when: inventory_hostname == "stapp03" 32 | -------------------------------------------------------------------------------- /kubernetes/kke-manage-secrets.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Create a generic secret with the name and file given as per question: 3 | # e.g. kubectl create secret generic ecommerce --from-file=/opt/news.txt 4 | # Step 2: kubectl create -f <file> 5 | # Step 3: Make sure the pod is in Running state 6 | # Step 4: Verify: Open a shell to the Pod: kubectl exec -it secret-devops -- /bin/bash 7 | # In the resulting prompt, check if the secret file, news.txt, is present 8 | # under the mount path (i.e.
/opt/apps): cat /opt/apps/news.txt 9 | # 10 | # For tips on getting better at Kubernetes tasks, check out the README.md 11 | # in this folder 12 | # 13 | apiVersion: v1 14 | kind: Pod 15 | metadata: 16 | name: secret-devops 17 | labels: 18 | name: myapp 19 | spec: 20 | volumes: 21 | - name: secret-volume-devops 22 | secret: 23 | secretName: ecommerce 24 | containers: 25 | - name: secret-container-devops 26 | image: debian:latest 27 | command: ["/bin/bash", "-c", "sleep 10000"] 28 | volumeMounts: 29 | - name: secret-volume-devops 30 | mountPath: /opt/apps 31 | readOnly: true 32 | -------------------------------------------------------------------------------- /docker/Docker-Volumes-Mapping.md: -------------------------------------------------------------------------------- 1 | 2 | # Docker Volumes Mapping 3 | ## Solution 4 | * First, SSH to the required host indicated in the question 5 | * Pull the image specified in the question as below: `sudo docker pull ubuntu:latest` (In this example, `ubuntu:latest` was the image mentioned in the question) 6 | * Run a docker container with the name specified in the question using the image you just pulled. Since you need to keep the container running, use the option `-it`.
Also make sure you are mapping the correct local directory to the directory path in the container (In this example, `/opt/sysops` is mapped to `/usr/src` on the container): 7 | `sudo docker run --name apps -v /opt/sysops:/usr/src -d -it ubuntu:latest` 8 | * Make sure the container is running: `sudo docker ps` 9 | * Finally, copy the file mentioned in the question to the local directory you mapped to the container: `cp /tmp/sample.txt /opt/sysops/` 10 | 11 | ## Verification 12 | * Check that the file you copied in the last step can be seen in the container: `sudo docker exec -it apps ls /usr/src` 13 | 14 | --- 15 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md) 16 | -------------------------------------------------------------------------------- /jenkins/Configure-Security-Settings-for-a-Project.md: -------------------------------------------------------------------------------- 1 | # Configure Security Settings for a Project in Jenkins 2 | ## Introduction 3 | This task involves configuring the 'Project-based security' options for the project given in the question e.g. Packages 4 | 5 | ## Solution 6 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 7 | * Click `Jenkins > Click Project 'Packages' > Configure` and click `Enable project-based security` under the `General` tab 8 | * This will reveal additional matrix options.
9 | * Make sure that the `Inheritance Strategy` is set to `Inherit permissions from parent ACL` 10 | * Click `Add user or group...` to add the required users and grant them the privileges as per the question by ticking the appropriate checkboxes 11 | * Click `Save` 12 | 13 | ## Verification 14 | * Login using the users given in the question and click the project `Packages` 15 | * You should see the `Configure` and `Build Now` options according to the access you granted each user 16 | --- 17 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) 18 | -------------------------------------------------------------------------------- /puppet/kke-setup-database.pp: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Install the puppetlabs-mysql module with 'puppet module install puppetlabs-mysql' 3 | # (the mysql::db resource below is provided by this module) 4 | # Step 2: Save this file under /etc/puppetlabs/code/environments/production/manifests 5 | # as a file with the name specified in the question e.g. cluster.pp 6 | # Step 3: Run the puppet verification steps mentioned in puppet/README.md 7 | # Step 4: Verify: Finally, SSH to stdb01 and run `sudo puppet agent -tv` 8 | # Run `sudo systemctl status mariadb` to check the service is running 9 | # Step 5: Verify: In stdb01, try connecting to the database using the new user 10 | # mysql -u kodekloud_cap -p kodekloud_db7 -h localhost 11 | # 12 | # For tips on getting better at Puppet tasks, check out the README.md 13 | # in this folder 14 | # 15 | class mysql_database { 16 | package {'mariadb-server': 17 | ensure => installed 18 | } 19 | 20 | service {'mariadb': 21 | ensure => running, 22 | enable => true, 23 | } 24 | 25 | mysql::db { 'kodekloud_db7': 26 | user => 'kodekloud_cap', 27 | password => '8FmzjvFU6S', 28 | host => 'localhost', 29 | grant => ['ALL'], 30 | } 31 | } 32 | 33 | node 'stdb01.stratos.xfusioncorp.com' { 34 | include mysql_database 35 | } 36 | -------------------------------------------------------------------------------- /kubernetes/Rolling-updates.md:
-------------------------------------------------------------------------------- 1 | # Rolling updates in Kubernetes 2 | ## Solution 3 | * First check the existing deployments: `kubectl get deployments` 4 | * Check the image and container name currently used in the deployment by running `kubectl describe deployment nginx-deployment` 5 | * Perform a rolling update by running: `kubectl set image deployment nginx-deployment nginx-container=nginx:1.19` e.g. here nginx-deployment is the name of the deployment, nginx-container is the name of the container, and the question asked to upgrade to version nginx:1.19. 6 | * You can check the rollout status by running `kubectl rollout status deployment nginx-deployment` and wait until you see the 'successfully rolled out' message: 7 | ``` 8 | Waiting for rollout to finish: 2 out of 3 new replicas have been updated... 9 | deployment 'nginx-deployment' successfully rolled out 10 | ``` 11 | * Make sure the pods are in running state before pressing `Finish` 12 | 13 | ## Verification 14 | * Describe one of the pods to verify that the image version has been updated e.g.
`kubectl describe pod nginx-deployment-asfdsf` 15 | 16 | --- 17 | For tips on getting better at KodeKloud Engineer Kubernetes tasks, [click here](./README.md) -------------------------------------------------------------------------------- /docker/Exec-Operations.md: -------------------------------------------------------------------------------- 1 | # Docker Exec Operations 2 | ## Solution 3 | * First SSH to the required server as per the question 4 | * Next, open a shell to the Docker container using the exec command: 5 | `sudo docker exec -it kkloud /bin/bash` 6 | * In the resulting shell prompt, first install apache2: 7 | `apt-get install apache2` 8 | * Change the default Apache http port to the port mentioned in the question e.g. 3001: 9 | `sed -i 's/80/3001/g' /etc/apache2/ports.conf` 10 | * Verify the changes by running: `cat /etc/apache2/ports.conf` 11 | * Finally, start Apache2 by running `apachectl -k start` 12 | ## Verification 13 | * Find out the IP address of the container by checking `/etc/hosts`. This is present in the last line of the `/etc/hosts` file. Alternatively, you can use the command `awk 'END{print $1}' /etc/hosts` to print out the IP address. 14 | * Run curl to verify that you are getting back valid HTML. Try localhost, 127.0.0.1 and also the container IP address e.g. 172.17.0.2: 15 | `curl http://localhost:3001/` 16 | `curl http://127.0.0.1:3001/` 17 | `curl http://172.17.0.2:3001/` 18 | * Exit the container shell by typing `exit` 19 | --- 20 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md) 21 | -------------------------------------------------------------------------------- /puppet/README.md: -------------------------------------------------------------------------------- 1 | # Puppet Tasks 2 | ## General Puppet Tips 3 | * First validate your file for syntax errors by running `puppet parser validate <file>` in the same directory e.g.
`puppet parser validate news.pp` 4 | * Next, dry-run the code on the specified hosts by running `sudo puppet agent -tv --noop` (You need to first SSH to that host) 5 | * Finally, run the actual code by running `sudo puppet agent -tv` 6 | * Always verify the successful completion of tasks using the following steps: 7 | * Run the actual code, log in to the target hosts and verify whether the required changes are complete 8 | 9 | ## Common mistakes 10 | * Not specifying which nodes to run on via a node definition 11 | ```ruby 12 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' { 13 | include nginx_installer 14 | } 15 | ``` 16 | * Not running `sudo puppet agent -tv` on the agent hosts when the task requires you to do so. Some of the tasks require you to not just create the puppet programming file, but also run the configurations on all the hosts. Pay attention to that. 17 | 18 | --- 19 | For general tips on getting better at KodeKloud Engineer tasks, [click here](../README.md) -------------------------------------------------------------------------------- /kubernetes/kke-timecheckpod.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create namespace devops 3 | # Step 2: kubectl create -f <file> 4 | # (Wait for pod to start) 5 | # Step 3: Verify: kubectl exec time-check --namespace=devops -- cat 6 | # /opt/devops/time/time-check.log 7 | # You should see the date/time printed every TIME_FREQ seconds below 8 | # 9 | # For tips on getting better at Kubernetes tasks, check out the README.md 10 | # in this folder 11 | # 12 | apiVersion: v1 13 | kind: ConfigMap 14 | metadata: 15 | name: time-config 16 | namespace: devops 17 | data: 18 | TIME_FREQ: "2" 19 | --- 20 | apiVersion: v1 21 | kind: Pod 22 | metadata: 23 | name: time-check 24 | namespace: devops 25 | labels: 26 | app: time-check 27 | spec: 28 | volumes: 29 | - name: log-volume 30 | emptyDir: {} 31
| containers: 32 | - name: time-check 33 | image: busybox:latest 34 | volumeMounts: 35 | - mountPath: /opt/devops/time 36 | name: log-volume 37 | envFrom: 38 | - configMapRef: 39 | name: time-config 40 | command: ["/bin/sh", "-c"] 41 | args: 42 | [ 43 | "while true; do date; sleep $TIME_FREQ;done > /opt/devops/time/time-check.log", 44 | ] 45 | -------------------------------------------------------------------------------- /ansible/kke-blockinfile.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as playbook.yml in the required folder 3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml` 4 | # Step 3: Verify: Check that the files have been updated correctly by running 5 | # ansible all -a "ls -ltr /var/www/html/" -i inventory 6 | # ansible all -a "cat /var/www/html/index.html" -i inventory 7 | # 8 | # For tips on getting better at Ansible tasks, check out the README.md 9 | # in this folder 10 | # 11 | - name: Install httpd and setup index.html 12 | hosts: stapp01, stapp02, stapp03 13 | become: yes 14 | tasks: 15 | - name: Install httpd 16 | package: 17 | name: httpd 18 | state: present 19 | - name: Start service httpd, if not started 20 | service: 21 | name: httpd 22 | state: started 23 | - name: Add content block in index.html and set permissions 24 | blockinfile: 25 | path: /var/www/html/index.html 26 | create: yes 27 | block: | 28 | Welcome to XfusionCorp! 29 | 30 | This is Nautilus sample file, created using Ansible! 31 | 32 | Please do not modify this file manually! 33 | owner: apache 34 | group: apache 35 | mode: "0644" 36 | -------------------------------------------------------------------------------- /kubernetes/kke-nginx.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: After changing necessary values: kubectl create -f 3 | # Step 2: Wait for the pods to be in 'Running' state. Note down a pod name. 
4 | # Step 3: Verify: kubectl exec <pod-name> -- curl http://localhost/ 5 | # You should get back valid HTML content 6 | # Step 4: Verify: Click 'Select port to view on Host 1' and provide the Node Port below. 7 | # You should see the page. 8 | # 9 | # For tips on getting better at Kubernetes tasks, check out the README.md 10 | # in this folder 11 | # 12 | apiVersion: v1 13 | kind: Service 14 | metadata: 15 | name: nginx-service 16 | spec: 17 | type: NodePort 18 | selector: 19 | app: nginx-app 20 | type: front-end 21 | ports: 22 | - port: 80 23 | targetPort: 80 24 | nodePort: 30011 25 | --- 26 | apiVersion: apps/v1 27 | kind: Deployment 28 | metadata: 29 | name: nginx-deployment 30 | labels: 31 | app: nginx-app 32 | type: front-end 33 | spec: 34 | replicas: 3 35 | selector: 36 | matchLabels: 37 | app: nginx-app 38 | type: front-end 39 | template: 40 | metadata: 41 | labels: 42 | app: nginx-app 43 | type: front-end 44 | spec: 45 | containers: 46 | - name: nginx-container 47 | image: nginx:latest 48 | -------------------------------------------------------------------------------- /ansible/kke-conditionals.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as playbook.yml in the required folder 3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml` 4 | # Step 3: Verify: Check that the files have been copied correctly by running 5 | # ansible all -a "ls -ltr /opt/data/" -i inventory 6 | # 7 | # For tips on getting better at Ansible tasks, check out the README.md 8 | # in this folder 9 | # 10 | - name: Copy text files to Appservers 11 | hosts: all 12 | become: yes 13 | tasks: 14 | - name: Copy blog.txt to stapp01 15 | ansible.builtin.copy: 16 | src: /usr/src/data/blog.txt 17 | dest: /opt/data/ 18 | owner: tony 19 | group: tony 20 | mode: "0755" 21 | when: inventory_hostname == "stapp01" 22 | - name: Copy story.txt to stapp02 23 | ansible.builtin.copy: 24 | src: /usr/src/data/story.txt 25 |
dest: /opt/data/ 26 | owner: steve 27 | group: steve 28 | mode: "0755" 29 | when: inventory_hostname == "stapp02" 30 | - name: Copy media.txt to stapp03 31 | ansible.builtin.copy: 32 | src: /usr/src/data/media.txt 33 | dest: /opt/data/ 34 | owner: banner 35 | group: banner 36 | mode: "0755" 37 | when: inventory_hostname == "stapp03" 38 | -------------------------------------------------------------------------------- /ansible/kke-lineinfile.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as playbook.yml in the required folder 3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml` 4 | # Step 3: Verify: Check that the files have been updated correctly by running 5 | # ansible all -a "ls -ltr /var/www/html/" -i inventory 6 | # ansible all -a "cat /var/www/html/index.html" -i inventory 7 | # 8 | # For tips on getting better at Ansible tasks, check out the README.md 9 | # in this folder 10 | # 11 | - name: Install httpd and setup index.html 12 | hosts: stapp01, stapp02, stapp03 13 | become: yes 14 | tasks: 15 | - name: Install httpd 16 | package: 17 | name: httpd 18 | state: present 19 | - name: Start service httpd, if not started 20 | service: 21 | name: httpd 22 | state: started 23 | - name: Add content in index.html. Create file if it does not exist and set file attributes 24 | copy: 25 | dest: /var/www/html/index.html 26 | content: This is a Nautilus sample file, created using Ansible! 27 | mode: "0655" 28 | owner: apache 29 | group: apache 30 | - name: Update content in index.html 31 | lineinfile: 32 | path: /var/www/html/index.html 33 | insertbefore: BOF 34 | line: Welcome to xFusionCorp Industries! 
35 | -------------------------------------------------------------------------------- /kubernetes/kke-persistent-volume.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create -f 3 | # Step 2: Wait for the pod to be in 'Running' state 4 | # Step 3: Verify: kubectl exec pod-datacenter -- curl http://localhost/ 5 | # You should get back a valid HTML content 6 | # 7 | # For tips on getting better at Kubernetes tasks, check out the README.md 8 | # in this folder 9 | # 10 | apiVersion: v1 11 | kind: PersistentVolume 12 | metadata: 13 | name: pv-datacenter 14 | spec: 15 | capacity: 16 | storage: 8Gi 17 | accessModes: 18 | - ReadWriteOnce 19 | storageClassName: manual 20 | hostPath: 21 | path: /mnt/security 22 | --- 23 | apiVersion: v1 24 | kind: PersistentVolumeClaim 25 | metadata: 26 | name: pvc-datacenter 27 | spec: 28 | accessModes: 29 | - ReadWriteOnce 30 | storageClassName: manual 31 | resources: 32 | requests: 33 | storage: 3Gi 34 | --- 35 | apiVersion: v1 36 | kind: Pod 37 | metadata: 38 | name: pod-datacenter 39 | spec: 40 | volumes: 41 | - name: storage-datacenter 42 | persistentVolumeClaim: 43 | claimName: pvc-datacenter 44 | containers: 45 | - name: container-datacenter 46 | image: nginx:latest 47 | ports: 48 | - containerPort: 80 49 | volumeMounts: 50 | - name: storage-datacenter 51 | mountPath: /usr/share/nginx/html 52 | -------------------------------------------------------------------------------- /ansible/kke-softlinks.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as playbook.yml in the required folder 3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml` 4 | # Step 3: Verify: Check that the files are created by running 5 | # ansible all -a "ls -ltr /opt/security/" -i inventory 6 | # 7 | # For tips on getting better at Ansible tasks, check out the README.md 8 | # in this folder 9 | # 10 | - name: 
Create text files and create soft link 11 | hosts: stapp01, stapp02, stapp03 12 | become: yes 13 | tasks: 14 | - name: Create the blog.txt on stapp01 15 | file: 16 | path: /opt/security/blog.txt 17 | owner: tony 18 | group: tony 19 | state: touch 20 | when: inventory_hostname == "stapp01" 21 | - name: Create the story.txt on stapp02 22 | file: 23 | path: /opt/security/story.txt 24 | owner: steve 25 | group: steve 26 | state: touch 27 | when: inventory_hostname == "stapp02" 28 | - name: Create the media.txt on stapp03 29 | file: 30 | path: /opt/security/media.txt 31 | owner: banner 32 | group: banner 33 | state: touch 34 | when: inventory_hostname == "stapp03" 35 | - name: Link /opt/security directory 36 | file: 37 | src: /opt/security/ 38 | dest: /var/www/html 39 | state: link 40 | -------------------------------------------------------------------------------- /kubernetes/kke-tomcat.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create ns tomcat-namespace-nautilus 3 | # Step 2: kubectl create -f 4 | # Step 3: Make sure the pod is in Running state. Note down the name i.e. tomcat-xxxxxx 5 | # Step 4: Verify: `kubectl exec tomcat-xxxx -n tomcat-namespace-nautilus -- curl http://localhost:8080` 6 | # You should see some sample HTML page 7 | # Step 5: Verify: Click 'Select port to view on Host 1' and provide the Node Port below. 8 | # You should see the sample HTML page. 
9 | # 10 | # For tips on getting better at Kubernetes tasks, check out the README.md 11 | # in this folder 12 | # 13 | apiVersion: v1 14 | kind: Service 15 | metadata: 16 | name: tomcat-service-nautilus 17 | namespace: tomcat-namespace-nautilus 18 | spec: 19 | type: NodePort 20 | selector: 21 | app: tomcat 22 | ports: 23 | - port: 80 24 | protocol: TCP 25 | targetPort: 8080 26 | nodePort: 32227 27 | --- 28 | apiVersion: apps/v1 29 | kind: Deployment 30 | metadata: 31 | name: tomcat-deployment-nautilus 32 | namespace: tomcat-namespace-nautilus 33 | spec: 34 | replicas: 1 35 | selector: 36 | matchLabels: 37 | app: tomcat 38 | template: 39 | metadata: 40 | labels: 41 | app: tomcat 42 | spec: 43 | containers: 44 | - name: tomcat-container-nautilus 45 | image: gcr.io/kodekloud/centos-ssh-enabled:tomcat 46 | ports: 47 | - containerPort: 8080 48 | -------------------------------------------------------------------------------- /puppet/kke-local-yum-repo.pp: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests 3 | # as a file with the name specified in the question e.g. news.pp 4 | # Step 2: Change the yum repo name, paths and application package according to the question 5 | # Step 3: Run the puppet verification steps mentioned in puppet/README.md 6 | # Step 4: Finally, SSH to each host and run `sudo puppet agent -tv` to make 7 | # sure you install the package 8 | # Step 5: Verify: Check that the required package (e.g.
httpd) is installed from the local 9 | # repo using 'repoquery -i httpd' 10 | # ('Repository' field should show as 'localyum') 11 | # Note: You may have to install `yum-utils` for repoquery to work - 12 | # 'sudo yum install -y yum-utils' 13 | # 14 | # For tips on getting better at Puppet tasks, check out the README.md 15 | # in this folder 16 | # 17 | class local_yum_repo { 18 | #Setup Local repo 19 | yumrepo { 'localyum': 20 | enabled => 1, 21 | descr => 'Local repo for app packages', 22 | baseurl => 'file:///packages/downloaded_rpms', 23 | gpgcheck => 0, 24 | } 25 | 26 | #Install package from this repo 27 | package { 'httpd': 28 | ensure => 'installed', 29 | require => Yumrepo['localyum'], 30 | } 31 | } 32 | 33 | node 'stapp01.stratos.xfusioncorp.com', 'stapp02.stratos.xfusioncorp.com', 'stapp03.stratos.xfusioncorp.com' { 34 | include local_yum_repo 35 | } 36 | -------------------------------------------------------------------------------- /docker/Deploy-App.md: -------------------------------------------------------------------------------- 1 | # Deploy an App on Docker Containers 2 | ## Solution 3 | * First SSH to the required server as per the question 4 | * Create a `docker-compose.yml` in the location mentioned in the question e.g. `/opt/dba` 5 | ```yaml 6 | version: "3.3" 7 | services: 8 | web: 9 | container_name: php_host 10 | image: php:7.4.16-apache 11 | ports: 12 | - "8087:80" 13 | volumes: 14 | - /var/www/html:/var/www/html 15 | depends_on: 16 | - DB 17 | DB: 18 | container_name: mysql_host 19 | image: mariadb:latest 20 | ports: 21 | - "3306:3306" 22 | volumes: 23 | - /var/lib/mysql:/var/lib/mysql 24 | environment: 25 | - MYSQL_DATABASE=database_host 26 | - MYSQL_ROOT_PASSWORD=kodekloud 27 | - MYSQL_USER=kkeuser 28 | - MYSQL_PASSWORD=kodekloud 29 | ``` 30 | * Run `sudo docker-compose up -d`. Make sure there are no errors.
31 | * Run `sudo docker ps` to check whether there are 2 containers running 32 | 33 | ## Verification 34 | * Run `curl http://localhost:8087/` and you should receive the content of `index.php` stored in `/var/www/html/` of the host. Something like this: 35 | ```HTML 36 | <html> 37 | <head> 38 | <title>Welcome to xFusionCorp Industries!</title> 39 | </head> 40 | <body> 41 | 42 | Welcome to xFusionCorp Industries! 43 | </body> 44 | </html> 45 | ``` 46 | 47 | --- 48 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md) 49 | -------------------------------------------------------------------------------- /kubernetes/kke-node.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create namespace node-namespace-datacenter 3 | # Step 2: kubectl create -f <file> 4 | # Step 3: Wait for the pods to be in 'Running' state. Note down a pod name. 5 | # Step 4: Verify: kubectl exec <pod-name> --namespace node-namespace-datacenter 6 | # -- curl http://localhost:8080/ 7 | # You should get back valid HTML content 8 | # Step 5: Verify: Click 'Select port to view on Host 1' and provide the Node Port below. 9 | # You should see the page.
10 | # 11 | # For tips on getting better at Kubernetes tasks, check out the README.md 12 | # in this folder 13 | # 14 | apiVersion: v1 15 | kind: Service 16 | metadata: 17 | name: node-service-datacenter 18 | namespace: node-namespace-datacenter 19 | spec: 20 | type: NodePort 21 | selector: 22 | app: node-app-datacenter 23 | ports: 24 | - port: 80 25 | targetPort: 8080 26 | nodePort: 30012 27 | --- 28 | apiVersion: apps/v1 29 | kind: Deployment 30 | metadata: 31 | name: node-deployment-datacenter 32 | namespace: node-namespace-datacenter 33 | spec: 34 | replicas: 2 35 | selector: 36 | matchLabels: 37 | app: node-app-datacenter 38 | template: 39 | metadata: 40 | labels: 41 | app: node-app-datacenter 42 | spec: 43 | containers: 44 | - name: node-container-datacenter 45 | image: gcr.io/kodekloud/centos-ssh-enabled:node 46 | ports: 47 | - containerPort: 8080 48 | -------------------------------------------------------------------------------- /linux/Linux-Network-Services.md: -------------------------------------------------------------------------------- 1 | ## Linux Process Troubleshooting 2 | ## Solution 3 | The task is actually simpler than it sounds. In this example, let's assume the Apache port is 8089. 4 | * You first have to identify which appserver's Apache is down using curl or telnet from Jump Host e.g. `curl http://stapp01:8089`. 5 | * Then go to that host and try to start the httpd service. You will notice the start fails with a 'port already in use' error 6 | * Identify which process is listening on the same port i.e. 8089 using `sudo netstat -lntp | grep 8089` (If 'netstat' is not available, install using `sudo yum install net-tools`). Netstat will also show the process id (pid) of the conflicting process. 7 | * Simply kill the process (`sudo kill -9 <pid>`) and start httpd 8 | * Run `systemctl status httpd` to check that the httpd service is running 9 | 10 | ## Update 11 | This task seems to have enabled `iptables` as well.
So you need to add `iptables` rules to allow the required port as well. For enabling the port, you need to run the following commands (replace the port accordingly): 12 | ``` 13 | iptables -I INPUT 5 -p TCP --dport 8089 -j ACCEPT 14 | service iptables save 15 | iptables -nvL 16 | ``` 17 | 18 | ## Verification 19 | * Test connectivity again from Jump Host to all 3 appservers using curl i.e. `curl http://stapp01:8089/`. You should see the default index page HTML printed. 20 | 21 | --- 22 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /kubernetes/kke-nagios.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create -f 3 | # Step 2: Wait for the pods to be in 'Running' state. Note down the pod name. 4 | # Step 3: Open a shell to the pod: kubectl exec -it <pod-name> -- /bin/bash 5 | # Step 4: Create the user that the question asks for and when prompted copy-paste the password from 6 | # the question and supply it to this command: 7 | # e.g. htpasswd /opt/nagios/etc/htpasswd.users xFusionCorp 8 | # Step 5: Verify: Try `curl http://localhost/` (it should fail) 9 | # Try curl -u xFusionCorp:xxxxx http://localhost/ (it should display the homepage) 10 | # Step 6: Verify: Open Nagios Core Web Interface on browser (Open Port on Host 1).
# 11 | You should be able to log in using the newly created ID and password 12 | # 13 | # For tips on getting better at Kubernetes tasks, check out the README.md 14 | # in this folder 15 | # 16 | apiVersion: v1 17 | kind: Service 18 | metadata: 19 | name: nagios-service 20 | spec: 21 | type: NodePort 22 | selector: 23 | app: nagios-core 24 | ports: 25 | - port: 80 26 | targetPort: 80 27 | nodePort: 30008 28 | --- 29 | apiVersion: apps/v1 30 | kind: Deployment 31 | metadata: 32 | name: nagios-deployment 33 | spec: 34 | replicas: 1 35 | selector: 36 | matchLabels: 37 | app: nagios-core 38 | template: 39 | metadata: 40 | labels: 41 | app: nagios-core 42 | spec: 43 | containers: 44 | - name: nagios-container 45 | image: jasonrivers/nagios 46 | -------------------------------------------------------------------------------- /kubernetes/kke-jenkins.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create namespace jenkins 3 | # Step 2: kubectl create -f 4 | # Step 3: Wait for the pods to be in running state. Note down the pod name. 5 | # Step 4: kubectl exec --namespace jenkins <pod-name> -- cat 6 | # /var/lib/jenkins/secrets/initialAdminPassword 7 | # Step 5: Verify: kubectl exec --namespace jenkins <pod-name> 8 | # -- curl http://localhost:8080/ 9 | # You should see valid HTML content being returned 10 | # Step 6: Verify: Open Jenkins app on browser by clicking 'Open Port on Host 1' 11 | # and the port as the NodePort below.
You should see the page without any errors 12 | # 13 | # For tips on getting better at Kubernetes tasks, check out the README.md 14 | # in this folder 15 | # 16 | apiVersion: v1 17 | kind: Service 18 | metadata: 19 | name: jenkins-service 20 | namespace: jenkins 21 | spec: 22 | type: NodePort 23 | selector: 24 | app: jenkins 25 | ports: 26 | - port: 8080 27 | targetPort: 8080 28 | nodePort: 30008 29 | --- 30 | apiVersion: apps/v1 31 | kind: Deployment 32 | metadata: 33 | name: jenkins-deployment 34 | namespace: jenkins 35 | labels: 36 | app: jenkins 37 | spec: 38 | replicas: 1 39 | selector: 40 | matchLabels: 41 | app: jenkins 42 | template: 43 | metadata: 44 | labels: 45 | app: jenkins 46 | spec: 47 | containers: 48 | - name: jenkins-container 49 | image: jenkins 50 | ports: 51 | - containerPort: 8080 52 | -------------------------------------------------------------------------------- /ansible/kke-setup-httpd-php.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as httpd.yml in the required folder 3 | # Step 2: Change the values of appserver and paths as per question 4 | # Step 3: Run `ansible-playbook -i inventory httpd.yml` 5 | # Step 4: Verify: `curl http://stapp02:8080/phpinfo.php` 6 | # You should see valid HTML content returned 7 | # 8 | # For tips on getting better at Ansible tasks, check out the README.md 9 | # in this folder 10 | # 11 | - name: Setup Httpd and PHP 12 | hosts: stapp02 13 | become: yes 14 | tasks: 15 | - name: Install latest version of httpd and php 16 | package: 17 | name: 18 | - httpd 19 | - php 20 | state: latest 21 | - name: Replace default DocumentRoot in httpd.conf 22 | replace: 23 | path: /etc/httpd/conf/httpd.conf 24 | regexp: DocumentRoot \"\/var\/www\/html\" 25 | replace: DocumentRoot "/var/www/html/myroot" 26 | - name: Create the new DocumentRoot directory if it does not exist 27 | file: 28 | path: /var/www/html/myroot 29 | state: directory 30 | owner: 
apache 31 | group: apache 32 | - name: Use Jinja2 template to generate phpinfo.php 33 | template: 34 | src: /home/thor/playbooks/templates/phpinfo.php.j2 35 | dest: /var/www/html/myroot/phpinfo.php 36 | owner: apache 37 | group: apache 38 | - name: Start and enable service httpd 39 | service: 40 | name: httpd 41 | state: started 42 | enabled: yes 43 | -------------------------------------------------------------------------------- /kubernetes/kke-shared-volumes.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Change the values of resource names, images and mountPaths as per question 3 | # Step 2: kubectl create -f 4 | # Step 3: Wait for the pod to be in running state 5 | # Step 4: Get a shell to the first container in the pod: 6 | # kubectl exec -it volume-share-devops -c volume-container-devops-1 -- /bin/bash 7 | # Step 5: In the resulting prompt, create a text file as per the question: 8 | # echo "Welcome to xFusionCorp Industries!" 
> /tmp/media/media.txt 9 | # Step 6: Verify: Check that you are able to see this file in the second container 10 | # under corresponding volume mount path as they use shared volumes: 11 | # kubectl exec volume-share-devops -c volume-container-devops-2 -- ls /tmp/games/ 12 | # 13 | # For tips on getting better at Kubernetes tasks, check out the README.md 14 | # in this folder 15 | # 16 | apiVersion: v1 17 | kind: Pod 18 | metadata: 19 | name: volume-share-devops 20 | labels: 21 | name: myapp 22 | spec: 23 | volumes: 24 | - name: volume-share 25 | emptyDir: {} 26 | containers: 27 | - name: volume-container-devops-1 28 | image: fedora:latest 29 | command: ["/bin/bash", "-c", "sleep 10000"] 30 | volumeMounts: 31 | - name: volume-share 32 | mountPath: /tmp/media 33 | - name: volume-container-devops-2 34 | image: fedora:latest 35 | command: ["/bin/bash", "-c", "sleep 10000"] 36 | volumeMounts: 37 | - name: volume-share 38 | mountPath: /tmp/games 39 | -------------------------------------------------------------------------------- /linux/Yum-local-repos.md: -------------------------------------------------------------------------------- 1 | # Yum Local Repos 2 | ## Solution 3 | * Make sure the local RPM directory given in the question has correct permissions: 4 | `sudo chmod -R 755 /packages/downloaded_rpms/` 5 | * Create a repo using createrepo command: `sudo createrepo /packages/downloaded_rpms/` 6 | * Now edit the `/etc/yum.repos.d/local.repo` file and configure the local repo with the name given as per the question (in this example, the local repo name is yum_local) 7 | ```UNIX 8 | [yum_local] 9 | name=yum_local 10 | baseurl=file:///packages/downloaded_rpms/ 11 | enabled=1 12 | gpgcheck=0 13 | protect=1 14 | ``` 15 | * Now install the package asked in the question e.g. `yum install -y httpd` 16 | 17 | ## Verification 18 | * Install yum-utils by running `sudo yum install -y yum-utils` 19 | * Run repoquery to find out which repo the package was installed from e.g. 
`repoquery -i httpd`. 20 | See the 'Repository' field below. 21 | ``` Java Properties 22 | Name : httpd 23 | Version : 2.4.6 24 | Release : 97.el7.centos 25 | Architecture: x86_64 26 | Size : 9821064 27 | Packager : CentOS BuildSystem 28 | Group : System Environment/Daemons 29 | URL : http://httpd.apache.org/ 30 | Repository : yum_local 31 | Summary : Apache HTTP Server 32 | Source : httpd-2.4.6-97.el7.centos.src.rpm 33 | Description : 34 | The Apache HTTP Server is a powerful, efficient, and extensible web server. 35 | ``` 36 | 37 | --- 38 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /jenkins/Install-Jenkins-Server.md: -------------------------------------------------------------------------------- 1 | # Install Jenkins Server 2 | ## Solution 3 | * First SSH to the jenkins server as per the question as the root user (root password given in question) 4 | * Enable the Jenkins repo using the following steps: 5 | ``` 6 | apt-get update 7 | apt install python-software-properties 8 | apt install openjdk-8-jdk 9 | wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add - 10 | sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list' 11 | apt install apt-transport-https 12 | apt-get update 13 | ``` 14 | * Install Jenkins using apt and start the service: 15 | ``` 16 | apt install jenkins 17 | service jenkins start 18 | service jenkins status 19 | ``` 20 | * Print the initial Admin password and note it down: `cat /var/lib/jenkins/secrets/initialAdminPassword` 21 | * Now open `Select Port to View on Host 1` and provide the port specified in the question under Note e.g. 8081. This will open the Jenkins Administrator Console. 
22 | * Use the initial Admin password you copied and log in 23 | * Select Default installation and wait for it to complete 24 | * In the following 'Create Admin' screen, create an admin user with all the values given in the question 25 | * After completing, you will be taken to the Dashboard 26 | 27 | ## Verification 28 | * Log out from the Dashboard above. You will see the Login page. 29 | * Use the ID and Password from the question e.g. theadmin/Adm!n321 to log in. Ensure you are able to see the Dashboard again. 30 | --- 31 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) 32 | -------------------------------------------------------------------------------- /ansible/kke-manage-acls.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Simply save this file as playbook.yml in the required folder 3 | # Step 2: Run `ansible-playbook -i inventory playbook.yml` 4 | # Step 3: Verify: Check that the files are created by running 5 | # ansible all -a "ls -ltr /opt/data/" -i inventory 6 | # 7 | # For tips on getting better at Ansible tasks, check out the README.md 8 | # in this folder 9 | # 10 | - name: Create file and set ACL in Host 1 11 | hosts: stapp01 12 | become: yes 13 | tasks: 14 | - name: Create the blog.txt on stapp01 15 | file: 16 | path: /opt/data/blog.txt 17 | state: touch 18 | - name: Set ACL for blog.txt 19 | acl: 20 | path: /opt/data/blog.txt 21 | entity: tony 22 | etype: group 23 | permissions: r 24 | state: present 25 | - name: Create file and set ACL in Host 2 26 | hosts: stapp02 27 | become: yes 28 | tasks: 29 | - name: Create the story.txt on stapp02 30 | file: 31 | path: /opt/data/story.txt 32 | state: touch 33 | - name: Set ACL for story.txt 34 | acl: 35 | path: /opt/data/story.txt 36 | entity: steve 37 | etype: user 38 | permissions: rw 39 | state: present 40 | - name: Create file and set ACL in Host 3 41 | hosts: stapp03 42 | become: yes 43 | tasks: 44 | - name:
Create the media.txt on stapp03 45 | file: 46 | path: /opt/data/media.txt 47 | state: touch 48 | - name: Set ACL for media.txt 49 | acl: 50 | path: /opt/data/media.txt 51 | entity: banner 52 | etype: group 53 | permissions: rw 54 | state: present 55 | -------------------------------------------------------------------------------- /kubernetes/kke-grafana.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create ns grafana-monitoring-datacenter 3 | # Step 2: kubectl create -f 4 | # Step 3: Make sure all the pods are in running state 5 | # Step 4: Verify: Open Grafana app on browser by clicking 'Open Port on Host 1' 6 | # and port as the nodePort below. 7 | # You should see the page loaded successfully 8 | # 9 | # For tips on getting better at Kubernetes tasks, check out the README.md 10 | # in this folder 11 | # 12 | apiVersion: v1 13 | kind: Service 14 | metadata: 15 | name: grafana-service-datacenter 16 | namespace: grafana-monitoring-datacenter 17 | spec: 18 | type: NodePort 19 | selector: 20 | app: grafana 21 | ports: 22 | - port: 3000 23 | targetPort: 3000 24 | nodePort: 32000 25 | --- 26 | apiVersion: apps/v1 27 | kind: Deployment 28 | metadata: 29 | name: grafana-deployment-datacenter 30 | namespace: grafana-monitoring-datacenter 31 | spec: 32 | replicas: 1 33 | selector: 34 | matchLabels: 35 | app: grafana 36 | template: 37 | metadata: 38 | labels: 39 | app: grafana 40 | spec: 41 | volumes: 42 | - name: grafana-storage 43 | emptyDir: {} 44 | containers: 45 | - name: grafana-container-datacenter 46 | image: grafana/grafana:latest 47 | volumeMounts: 48 | - name: grafana-storage 49 | mountPath: /var/lib/grafana 50 | resources: 51 | requests: 52 | memory: "1Gi" 53 | cpu: "500m" 54 | limits: 55 | memory: "2Gi" 56 | cpu: "1000m" 57 | ports: 58 | - containerPort: 3000 59 | -------------------------------------------------------------------------------- /jenkins/Create-Views.md: 
-------------------------------------------------------------------------------- 1 | # Jenkins Create Views 2 | ## Introduction 3 | This task involves creating a simple job that runs every minute and then creating a Jenkins list view to include this and another job 4 | 5 | ## Solution 6 | ### Step 1: Create a Scheduled Job 7 | * `Select port to view on Host 1` and connect to port `8081`. Log in using the Jenkins admin user and password given in the question 8 | * Click `New item` and in the following screen: 9 | ``` 10 | Name: devops-pipeline-job (Keep 'Freestyle Project' as selected) and click Ok 11 | ``` 12 | * Under `Build Triggers` click `Build periodically`. This reveals a text area to input the `Schedule`. Give the schedule as per the question e.g. `* * * * *` (Ignore any warnings) 13 | * In the `Build` section below, choose `Add build step > Execute shell` and input the command given in the question in the `command`: 14 | ``` 15 | echo hello world!! 16 | ``` 17 | * Click `Save` 18 | * Run `Build Now` to check the job runs correctly. Open `Console Output` to verify the echo output. 19 | 20 | ### Step 2: Create a View 21 | * Go to `Jenkins > New View` and add a `List View` as per the question e.g. `devops-crons` 22 | * Under `Job Filters` select both the jobs as per the question e.g. `devops-cron-job` and `devops-pipeline-job` 23 | * Click `Ok` 24 | 25 | ## Verification 26 | * Click `Jenkins > My Views`. You should be able to see the view `devops-crons`.
27 | * Click it and you should see both jobs listed under it 28 | * Click the job that you created in Step 1 29 | * You should see a few builds already, as builds get triggered every minute automatically 30 | 31 | --- 32 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) 33 | -------------------------------------------------------------------------------- /docker/README.md: -------------------------------------------------------------------------------- 1 | # Docker Tasks 2 | ## General Docker Tips 3 | * For tasks that require you to troubleshoot a Dockerfile or create a Dockerfile, make sure you test the file by running a docker build in the same directory as the Dockerfile: 4 | `docker build -t my_image .` 5 | * Another important tip is to make use of the free [Katacoda Docker Playground](https://www.katacoda.com/courses/docker/playground) to test your Docker changes. So the recommendation is to: 6 | * Open the task in Kodekloud Engineer, note down the question and press `Try Later` 7 | * Open the Katacoda Playground, prepare docker commands, execute and test your changes until you are satisfied 8 | * Reopen the question in Kodekloud Engineer, apply your changes and verify. Bam! You finished your task in time for bonus points. 9 | * Always verify the successful completion of the task using one or more of the approaches below: 10 | * Use the browser by clicking the `Open Port on Host 1` tab, especially for tasks that ask you to configure a Host Port (Docker). 11 | * Click the `Open Port on Host 1` tab, specify the port and click `Connect` 12 | * Check that the URL loads. 13 | * Exec command - Especially useful to verify tasks that involve running a server listening on a port e.g. Nginx, HTTPD (or) verify volume mounts 14 | * `docker exec -it <container-name> <command>` 15 | * Examples: 16 | * `docker exec -it nginx_ubuntu ls /tmp/execWorks` 17 | * Shell - You can also get a shell to a Docker Container like this.
This is useful when you need to run multiple verification commands: 18 | * `docker exec -it nginx_ubuntu /bin/bash` 19 | * Logs - Useful for tasks that require you to print an output e.g. echo: 20 | * `docker logs -f <container-name>`. For example, `docker logs -f nginx_ubuntu` 21 | 22 | --- 23 | For general tips on getting better at KodeKloud Engineer tasks, [click here](../README.md) 24 | 25 | -------------------------------------------------------------------------------- /kubernetes/kke-nginx-phpfpm.yaml: -------------------------------------------------------------------------------- 1 | # Step 1: Change port and image versions according to the question 2 | # Step 2: kubectl create -f 3 | # Step 3: Wait for the nginx-phpfpm pod to be in 'Running' state 4 | # Step 4: Verify: 5 | # kubectl exec -it nginx-phpfpm -- /bin/bash 6 | # In the resulting shell prompt run the following: 7 | # echo "<?php phpinfo(); ?>" > /var/www/html/index.php 8 | # curl http://localhost:8098/ 9 | # You should get back a valid PHPInfo HTML page.
10 | # 11 | # For tips on getting better at Kubernetes tasks, check out the README.md 12 | # in this folder 13 | # 14 | apiVersion: v1 15 | kind: ConfigMap 16 | metadata: 17 | name: nginx-config 18 | data: 19 | nginx.conf: | 20 | events {} 21 | http { 22 | server { 23 | listen 8098; 24 | index index.html index.htm index.php; 25 | root /var/www/html; 26 | 27 | location ~ \.php$ { 28 | include fastcgi_params; 29 | fastcgi_param REQUEST_METHOD $request_method; 30 | fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; 31 | fastcgi_pass 127.0.0.1:9000; 32 | } 33 | } 34 | } 35 | --- 36 | apiVersion: v1 37 | kind: Pod 38 | metadata: 39 | name: nginx-phpfpm 40 | spec: 41 | volumes: 42 | - name: shared-files 43 | emptyDir: {} 44 | - name: nginx-config-volume 45 | configMap: 46 | name: nginx-config 47 | containers: 48 | - name: nginx-container 49 | image: nginx:latest 50 | volumeMounts: 51 | - name: shared-files 52 | mountPath: /var/www/html 53 | - name: nginx-config-volume 54 | mountPath: /etc/nginx/nginx.conf 55 | subPath: nginx.conf 56 | - name: php-fpm-container 57 | image: php:7.3-fpm 58 | volumeMounts: 59 | - name: shared-files 60 | mountPath: /var/www/html 61 | -------------------------------------------------------------------------------- /kubernetes/kke-init-containers.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create -f 3 | # Step 2: Wait for the pod to be in 'Running' state. 
Note down the pod name 4 | # which will be in the format ic-deploy-xfusion-xxxxx 5 | # Step 3: Verify: kubectl exec -it ic-deploy-xfusion-xxxxx -- /bin/bash 6 | # In the resulting prompt, type 'cat /ic/official' and you should 7 | # see "Init Done - Welcome to xFusionCorp Industries" 8 | # Step 4: Verify: kubectl logs -f ic-deploy-xfusion-xxxxx 9 | # You should see the message "Init Done - Welcome to xFusionCorp Industries" 10 | # printed every 5 secs 11 | # 12 | # For tips on getting better at Kubernetes tasks, check out the README.md 13 | # in this folder 14 | # 15 | apiVersion: apps/v1 16 | kind: Deployment 17 | metadata: 18 | name: ic-deploy-xfusion 19 | labels: 20 | app: ic-xfusion 21 | spec: 22 | replicas: 1 23 | selector: 24 | matchLabels: 25 | app: ic-xfusion 26 | template: 27 | metadata: 28 | labels: 29 | app: ic-xfusion 30 | spec: 31 | volumes: 32 | - name: ic-volume-xfusion 33 | emptyDir: {} 34 | initContainers: 35 | - name: ic-msg-xfusion 36 | image: centos:latest 37 | command: 38 | [ 39 | "/bin/bash", 40 | "-c", 41 | "echo Init Done - Welcome to xFusionCorp Industries > /ic/official", 42 | ] 43 | volumeMounts: 44 | - name: ic-volume-xfusion 45 | mountPath: /ic 46 | 47 | containers: 48 | - name: ic-main-xfusion 49 | image: centos:latest 50 | command: 51 | [ 52 | "/bin/bash", 53 | "-c", 54 | "while true; do cat /ic/official; sleep 5; done", 55 | ] 56 | volumeMounts: 57 | - name: ic-volume-xfusion 58 | mountPath: /ic 59 | -------------------------------------------------------------------------------- /jenkins/Create-Parameterized-Builds.md: -------------------------------------------------------------------------------- 1 | # Create Parameterized Builds 2 | ## Solution 3 | ### Step 1: Create Parameterized Build 4 | * `Select port to view on Host 1` and connect to port `8081`. 
Log in using the Jenkins admin user and password given in the question 5 | * Click `New item` and in the following screen: 6 | ``` 7 | Name: parameterized-job (Keep 'Freestyle Project' as selected) and click Ok 8 | ``` 9 | * Under `General` click the option `This project is parameterized` 10 | * This will reveal an additional option `Add Parameter` 11 | * Click `Add Parameter > String Parameter` and input the following values as per the question: 12 | ``` 13 | Name: Stage 14 | Default Value: Build 15 | ``` 16 | * Click `Add Parameter > Choice Parameter` and input the following values as per the question: 17 | ``` 18 | Name: env 19 | Choices: 20 | Development 21 | Staging 22 | Production 23 | ``` 24 | * In the `Build` section below, choose `Add build step > Execute shell` and input the following values in the `command`: 25 | ``` 26 | echo $Stage $env 27 | ``` 28 | * Click `Save` 29 | 30 | ### Step 2: Run Parameterized Build 31 | The question expects you to run the parameterized build at least once with a particular value selected for `env` e.g. Development 32 | * Click the newly created job, `parameterized-job`, on the home page and in the following screen click `Build Now` 33 | * You should see a new build starting up at the lower left of the screen 34 | 35 | ## Verification 36 | * Click that build and click `Console Output` and verify that the `echo` command is printing the correct values for the `env` and `Stage` parameters.
You should see something like: 37 | ``` 38 | Running as SYSTEM 39 | Building in workspace /var/jenkins_home/workspace/parameterized-job 40 | [parameterized-job] $ /bin/sh -xe /tmp/jenkins6109190682713673426.sh 41 | + echo Build Development 42 | Build Development 43 | Finished: SUCCESS 44 | ``` 45 | --- 46 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) -------------------------------------------------------------------------------- /ansible/Managing-Jinja2-Templates.md: -------------------------------------------------------------------------------- 1 | # Managing Jinja2 Templates using Ansible 2 | ## Solution 3 | * First edit the file `/home/thor/ansible/playbook.yml` to include the required host e.g. stapp03 4 | ```yaml 5 | - hosts: stapp03 6 | become: yes 7 | roles: 8 | - role/httpd 9 | ``` 10 | * Create a Jinja2 template file `/home/thor/ansible/role/httpd/templates/index.html.j2` with the following content: 11 | ```jinja2 12 | This file was created using Ansible on {{ ansible_hostname }} 13 | ``` 14 | * Edit the file `/home/thor/ansible/role/httpd/tasks/main.yml` to add a task to copy the template to `/var/www/html` on the required host 15 | * Before: 16 | ```yaml 17 | --- 18 | #task file for role/test 19 | - name: install the latest version of httpd 20 | yum: 21 | name: httpd 22 | state: latest 23 | - name: Start service httpd 24 | service: 25 | name: httpd 26 | state: started 27 | ``` 28 | * After: 29 | ```yaml 30 | --- 31 | #task file for role/test 32 | - name: install the latest version of httpd 33 | yum: 34 | name: httpd 35 | state: latest 36 | - name: Start service httpd 37 | service: 38 | name: httpd 39 | state: started 40 | - name: Use Jinja2 template to generate index.html 41 | template: 42 | src: /home/thor/ansible/role/httpd/templates/index.html.j2 43 | dest: /var/www/html/index.html 44 | mode: "0655" 45 | owner: "{{ ansible_user }}" 46 | group: "{{ ansible_user }}" 47 | ``` 48 | 49 | ## Verification 50 | 
* Run `ansible-playbook -i inventory playbook.yml` in the `/home/thor/ansible` directory. The playbook should run without any errors 51 | * Then check that `/var/www/html/index.html` has been written according to the template by running `ansible stapp03 -a "cat /var/www/html/index.html" -i inventory` (substitute `stapp03` with the corresponding host) 52 | --- 53 | For tips on getting better at KodeKloud Engineer Ansible tasks, [click here](./README.md) -------------------------------------------------------------------------------- /puppet/kke-setup-ssh-keys.pp: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Save this file under /etc/puppetlabs/code/environments/production/manifests 3 | # as a file with the name specified in the question e.g. news.pp 4 | # Step 2: Copy the public key value from the public key location specified in the question and 5 | # paste it into the variable below. Paste only the key body, not the literal file contents 6 | # Step 3: Verify: Finally, SSH to each host and run `sudo puppet agent -tv`. 
After this, 7 | # you should be able to SSH to each host from Jump Host without entering password 8 | # 9 | # For tips on getting better at Puppet tasks, check out the README.md 10 | # in this folder 11 | # 12 | $public_key = 'AAAAB3NzaC1yc2EAAAADAQABAAABAQCjzucWviIHiT0R1YP3cYmkWcNfv53svphAIW4RpnDiSdoTvooeah3Akh/VagwCJsClpdwuM3xdAvEWyHFkI6zdItrdjqM8fJ6Y8HYXF8Ros979TVYcktI8Ird+92CFqsAVRqGTyJNx++68N7JA78dWf+SEsGaDSkEjGkjfIJgOlZ1OmJJB/pszUOjeiFvEJbkc+TA0fH6htGg/QCotC1tAUnIszf664QENjNiqIfruM/CwojExmos8RKKO1GYgjBFzB9eofk7zsjn1zk9NJ7LGqvZ6/EirTf2dCOH5RMYbjccGZI/AQTXQ15kUYUHCtpUQFrQ88T0W93D9bbiXHdFn' 13 | 14 | class ssh_node1 { 15 | ssh_authorized_key { 'tony@stapp01': 16 | ensure => present, 17 | user => 'tony', 18 | type => 'ssh-rsa', 19 | key => $public_key, 20 | } 21 | } 22 | 23 | class ssh_node2 { 24 | ssh_authorized_key { 'steve@stapp02': 25 | ensure => present, 26 | user => 'steve', 27 | type => 'ssh-rsa', 28 | key => $public_key, 29 | } 30 | } 31 | 32 | class ssh_node3 { 33 | ssh_authorized_key { 'banner@stapp03': 34 | ensure => present, 35 | user => 'banner', 36 | type => 'ssh-rsa', 37 | key => $public_key, 38 | } 39 | } 40 | 41 | node stapp01.stratos.xfusioncorp.com { 42 | include ssh_node1 43 | } 44 | 45 | node stapp02.stratos.xfusioncorp.com { 46 | include ssh_node2 47 | } 48 | 49 | node stapp03.stratos.xfusioncorp.com { 50 | include ssh_node3 51 | } 52 | 53 | -------------------------------------------------------------------------------- /kubernetes/kke-envvars-kubernetes.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create namespace fieldref-namespace 3 | # Step 2: Make sure to change pod/container names, image, cmd args as per question 4 | # Step 3: kubectl create -f 5 | # Step 4: Make sure the pod is in Running state 6 | # Step 5: Verify: Get shell to the Pod 7 | # `kubectl exec -it envars-fieldref -n fieldref-namespace -- /bin/bash` 8 | # in the following prompt run: `printenv` 9 | # 
You should see all the environment variables printed. Note down 10 | # the values of the 5 env variables below 11 | # Step 6: Verify: `kubectl logs envars-fieldref -n fieldref-namespace` 12 | # You should see the values of environment variables printed by the Pod 13 | # 14 | # For tips on getting better at Kubernetes tasks, check out the README.md 15 | # in this folder 16 | # 17 | apiVersion: v1 18 | kind: Pod 19 | metadata: 20 | name: envars-fieldref 21 | namespace: fieldref-namespace 22 | spec: 23 | restartPolicy: Never 24 | containers: 25 | - name: fieldref-container 26 | image: nginx:latest 27 | command: ["sh", "-c"] 28 | args: 29 | - while true; do 30 | echo -en '\n'; 31 | printenv NODE_NAME POD_NAME POD_NAMESPACE; 32 | printenv POD_IP POD_SERVICE_ACCOUNT; 33 | sleep 10; 34 | done; 35 | env: 36 | - name: NODE_NAME 37 | valueFrom: 38 | fieldRef: 39 | fieldPath: spec.nodeName 40 | - name: POD_NAME 41 | valueFrom: 42 | fieldRef: 43 | fieldPath: metadata.name 44 | - name: POD_NAMESPACE 45 | valueFrom: 46 | fieldRef: 47 | fieldPath: metadata.namespace 48 | - name: POD_IP 49 | valueFrom: 50 | fieldRef: 51 | fieldPath: status.podIP 52 | - name: POD_SERVICE_ACCOUNT 53 | valueFrom: 54 | fieldRef: 55 | fieldPath: spec.serviceAccountName 56 | -------------------------------------------------------------------------------- /linux/PAM-Authentication-for-Apache.md: -------------------------------------------------------------------------------- 1 | # PAM Authentication for Apache 2 | ## Solution 3 | * Install pwauth in all the appserver hosts `sudo yum --enablerepo=epel -y install mod_authnz_external pwauth` 4 | * Edit the file `/etc/httpd/conf.d/authnz_external.conf` and add the following at the end of the file. 
Make sure not to duplicate or overlap with existing entries in this file 5 | ``` 6 | <Directory /var/www/html/protected> 7 | AuthType Basic 8 | AuthName "PAM Authentication" 9 | AuthBasicProvider external 10 | AuthExternal pwauth 11 | require valid-user 12 | </Directory> 13 | ``` 14 | * Create the protected directory under the document root, `/var/www/html`, if it does not already exist e.g. `mkdir -p /var/www/html/protected` 15 | * Create an `index.html` under `/var/www/html/protected`, if it does not already exist, with any test content 16 | * Restart httpd 17 | ``` 18 | sudo systemctl restart httpd 19 | sudo systemctl status httpd 20 | ``` 21 | 22 | ## Verification 23 | * Execute a curl command from an appserver host. You should see a 'Forbidden' error, i.e. `curl http://localhost:8080/protected/`. In particular, you can print just the HTTP header to see if you receive 'HTTP 403' by running curl with the `-I` option i.e. `curl -I http://localhost:8080/protected/` 24 | * From the same host re-run the curl command but this time with the given user and password. You should see valid content 25 | `curl -u jim:8FmzjvFU6S http://localhost:8080/protected/` 26 | * Repeat the steps for the other appserver hosts. You can either SSH to each individual host, or test from the Jump Host e.g. `curl -I http://stapp01:8080/protected/` 27 | * Finally, test the loadbalancer URL in the browser by accessing `Select Port to View on Host 1` and provide port `80`. You should see the browser prompting you for the Id and Password. After giving the Id and Password from the question, the page should load successfully. 28 | 29 | --- 30 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) 31 | 32 | -------------------------------------------------------------------------------- /linux/Setup-and-configure-iptables.md: -------------------------------------------------------------------------------- 1 | # Setup and configure Iptables 2 | ## Introduction 3 | * `iptables -nvL` is your friend to finish this task.
4 | * Note that there's one DROP ALL rule at position 5. Hence, any rules you insert should be before position 5. So use the `-I` option to insert rules before position 5. 5 | * You need to do the task on all 3 appservers. 6 | 7 | ## Solution 8 | * Create a bash script on all the appserver hosts with the below content (replace the port 5000 with the port as per the question) 9 | * The solution basically inserts 2 rules: one ACCEPT rule and one DROP rule at positions 5 and 6, to accept connections from STLB01 only and drop all other connections, respectively 10 | ```UNIX 11 | #!/bin/bash 12 | yum install iptables-services -y 13 | systemctl start iptables 14 | systemctl enable iptables 15 | iptables -I INPUT 5 -s 172.16.238.14 -p TCP --dport 5000 -j ACCEPT 16 | iptables -I INPUT 6 -p TCP --dport 5000 -j DROP 17 | service iptables save 18 | systemctl status iptables 19 | iptables -nvL 20 | ``` 21 | * Execute the script: `sudo ./script.sh` 22 | * Verify the successful completion 23 | 24 | ### Alternative Solution 25 | * An alternative solution is to use a NOT directive (the `!` operator) and replace the 'DROP all' rule at position 5 26 | ```UNIX 27 | #!/bin/bash 28 | yum install iptables-services -y 29 | systemctl start iptables 30 | systemctl enable iptables 31 | iptables -R INPUT 5 ! -s 172.16.238.14 -p TCP --dport 5000 -j DROP 32 | service iptables save 33 | systemctl status iptables 34 | iptables -nvL 35 | ``` 36 | 37 | ## Verification 38 | * Execute a curl command from the Jump Host to the given port on all hosts. You should get a connection timed out. For example - `curl -I http://stapp01:5000/` 39 | * Execute the same curl command from the Loadbalancer Host (stlb01) to the given port on all hosts (you should SSH to stlb01 first). You should see a valid response returned from the servers.
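The verification above can also be done without curl. Below is a hedged sketch (assuming bash and GNU `timeout` are available; `stapp01`-`stapp03` and port 5000 are this page's example values) of a port probe that should print `closed` when run from the Jump Host and `open` when run from stlb01:

```shell
#!/bin/bash
# Probe a TCP port via bash's /dev/tcp pseudo-device, so it works even on
# minimal hosts where curl is not installed.
probe() {  # probe HOST PORT -> prints "HOST:PORT open|closed"
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

for h in stapp01 stapp02 stapp03; do
  probe "$h" 5000   # 5000 is this page's example port; adjust per the task
done
```

A timed-out or refused connection both report `closed`, which is exactly the behaviour the DROP rule should produce for the Jump Host.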
40 | 41 | --- 42 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) 43 | -------------------------------------------------------------------------------- /linux/Install-and-configure-NFS-Server.md: -------------------------------------------------------------------------------- 1 | ## Install and configure NFS Server 2 | ## Solution 3 | ### NFS Server setup in Storage Server (ststor01) 4 | Perform the following steps in the storage server (ststor01) 5 | * Install NFS Utils and start the nfs-server and rpcbind services 6 | ``` 7 | sudo yum install -y nfs-utils nfs-utils-lib 8 | sudo systemctl start nfs-server 9 | sudo systemctl enable nfs-server 10 | sudo systemctl start rpcbind 11 | sudo systemctl enable rpcbind 12 | ``` 13 | * Verify that nfs-server and rpcbind have started 14 | ``` 15 | sudo systemctl status nfs-server 16 | sudo systemctl status rpcbind 17 | sudo chkconfig nfs-server on 18 | sudo chkconfig rpcbind on 19 | ``` 20 | * Create the directory that needs to be exported as per the question: `mkdir /webdata` 21 | * Now edit `/etc/exports` and add the following export entries to export this directory to all 3 appserver hosts 22 | ``` 23 | /webdata stapp01(rw,sync,no_root_squash) 24 | /webdata stapp02(rw,sync,no_root_squash) 25 | /webdata stapp03(rw,sync,no_root_squash) 26 | ``` 27 | * Export the configuration: `sudo exportfs -a` 28 | 29 | #### NFS Server setup verification 30 | * Go back to Jump Host and run: `sudo showmount -e ststor01`. You should see the exported directory. 31 | 32 | ### NFS client setup in App Servers 33 | Perform the following steps on each of the app servers 34 | * Mount the exported NFS share on to a local directory as per the question (in this example, it is `/var/www/html`): 35 | `sudo mount -t nfs ststor01:/webdata /var/www/html` 36 | * Verify the mount has completed successfully: `sudo mount | grep nfs`. 
You should see the new mount listed 37 | 38 | ### Task Verification 39 | * Exit to the Jump Host and scp the `index.html` to the export directory on ststor01 (the source path of `index.html` is as given in the question): 40 | `scp index.html natasha@ststor01:/webdata` 41 | * SSH to individual app servers and check that you are able to see the `index.html` under the mounted directory i.e. `/var/www/html` 42 | 43 | --- 44 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /linux/Linux-Firewalld-setup.md: -------------------------------------------------------------------------------- 1 | # Linux Firewalld Setup 2 | ## Introduction 3 | * For this task, after installing Firewalld, you are required to add 2 rules: a normal rule and a rich rule 4 | * Most people fail this task because they miss reloading the firewall-cmd, after adding the rules, by running `firewall-cmd --reload` 5 | 6 | ## Solution 7 | * SSH to one of the appservers and note down the Apache and Nginx ports respectively 8 | * The Apache port is specified under `/etc/httpd/conf/httpd.conf` (look for the `Listen` directive) 9 | * The Nginx port is specified under `/etc/nginx/nginx.conf` (look for the `listen [::]:` line) 10 | * Create a bash script offline with the below content using a text editor. Then copy-paste the script to all the hosts and execute it (replace ports 8888 and 9999 with the Nginx and Apache ports respectively). This is not only time-saving but also less error-prone.
11 | ```UNIX 12 | #!/bin/bash 13 | yum install firewalld -y 14 | systemctl start firewalld 15 | systemctl enable firewalld 16 | firewall-cmd --zone=public --permanent --add-port=8888/tcp 17 | firewall-cmd --zone=public --permanent --add-rich-rule='rule family=ipv4 source address=172.16.238.14 port port=9999 protocol=tcp accept' 18 | firewall-cmd --reload 19 | firewall-cmd --list-all --zone=public 20 | ``` 21 | * Execute the script `sudo ./script.sh` 22 | * Verify the successful completion 23 | 24 | ## Verification 25 | * Execute curl command from Jump Host to the Apache port on all hosts. You should get a connection timed out. For example - `curl -I http://stapp01:9999/` 26 | * Execute curl command from Jump Host to the Nginx port on all hosts. You should see a valid response. For example - `curl -I http://stapp01:8888/` 27 | * Execute the same curl command from Loadbalancer Host (stlb01) to the Nginx and Apache ports on all hosts (You should SSH to stlb01 first). For both Nginx and Apache ports, you should receive valid responses from respective servers. 28 | 29 | --- 30 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) 31 | -------------------------------------------------------------------------------- /kubernetes/kke-jekyll.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create namespace jekyll-namespace-datacenter 3 | # Step 2: kubectl create -f 4 | # Step 3: Wait for the jekyll pod to be in 'Running' state 5 | # Step 4: Verify: kubectl exec jekyll-pod-datacenter --namespace jekyll-namespace-datacenter 6 | # -- curl http://localhost:4000/ 7 | # You should see a valid HTML content being returned 8 | # Step 5: Verify: Open Jekyll app on browser by clicking 'Open Port on Host 1' 9 | # and port as NodePort below. 
You should see the page without any errors 10 | # 11 | # For tips on getting better at Kubernetes tasks, check out the README.md 12 | # in this folder 13 | # 14 | apiVersion: v1 15 | kind: PersistentVolumeClaim 16 | metadata: 17 | name: jekyll-site-datacenter 18 | namespace: jekyll-namespace-datacenter 19 | spec: 20 | accessModes: 21 | - ReadWriteMany 22 | resources: 23 | requests: 24 | storage: 1Gi 25 | --- 26 | apiVersion: v1 27 | kind: Service 28 | metadata: 29 | name: jekyll-service-datacenter 30 | namespace: jekyll-namespace-datacenter 31 | spec: 32 | type: NodePort 33 | selector: 34 | app: jekyll-pod-datacenter 35 | ports: 36 | - port: 8080 37 | protocol: TCP 38 | targetPort: 4000 39 | nodePort: 31181 40 | status: 41 | loadBalancer: {} 42 | --- 43 | apiVersion: v1 44 | kind: Pod 45 | metadata: 46 | name: jekyll-pod-datacenter 47 | namespace: jekyll-namespace-datacenter 48 | labels: 49 | app: jekyll-pod-datacenter 50 | spec: 51 | volumes: 52 | - name: site 53 | persistentVolumeClaim: 54 | claimName: jekyll-site-datacenter 55 | initContainers: 56 | - name: jekyll-init-datacenter 57 | image: kodekloud/jekyll 58 | imagePullPolicy: IfNotPresent 59 | command: ["jekyll", "new", "/site"] 60 | volumeMounts: 61 | - name: site 62 | mountPath: /site 63 | containers: 64 | - name: jekyll-container-datacenter 65 | image: kodekloud/jekyll-serve 66 | volumeMounts: 67 | - name: site 68 | mountPath: /site 69 | -------------------------------------------------------------------------------- /linux/Install-and-configure-SFTP.md: -------------------------------------------------------------------------------- 1 | ## Install and configure SFTP 2 | ## Solution 3 | * SSH to the required host 4 | * Create a new group: `sudo groupadd sftpusers` 5 | * Create the landing directory (`chroot` directory in the question). It's important that this directory is : 6 | * owned by user `root` and group `sftpusers` 7 | * set permissions `0755`. 
8 | In this example, `/var/www/webdata` is the landing directory mentioned in the question. 9 | ``` 10 | sudo mkdir -p /var/www/webdata 11 | sudo chown root:sftpusers /var/www/webdata 12 | sudo chmod 755 /var/www/webdata 13 | ``` 14 | * Now modify `/etc/ssh/sshd_config` as follows to force SFTP-only for all users belonging to the newly created group. This solution is more scalable, as you can enforce SFTP-only for any number of users without having to edit `sshd_config` each time. 15 | * Make sure to comment out the existing `Subsystem sftp /usr/libexec/openssh/sftp-server` line. 16 | * Change the value of `ChrootDirectory` below as per the question. In the below example, `/var/www/webdata` is configured: 17 | ``` 18 | Subsystem sftp internal-sftp 19 | Match Group sftpusers 20 | ForceCommand internal-sftp 21 | ChrootDirectory /var/www/webdata 22 | PasswordAuthentication yes 23 | PermitTTY no 24 | AllowTcpForwarding no 25 | X11Forwarding no 26 | PermitTunnel no 27 | AllowAgentForwarding no 28 | ``` 29 | * Restart sshd: `sudo systemctl restart sshd` 30 | * Create the given user and add it to the group by running `sudo useradd javed -g sftpusers` (in this example `javed` is the user to be created) 31 | * Set the user's password as given in the question by running: `sudo passwd javed` 32 | 33 | ## Verification 34 | * From the same host, try to SSH using the newly created user, e.g. `ssh javed@localhost`. It should fail with an error like `This service allows sftp connections only`. 35 | * Try to SFTP using the newly created user, e.g. `sftp javed@localhost`. The connection should succeed.
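One failure mode worth guarding against: sshd refuses to chroot into a directory that is not root-owned or that is group/world-writable, and logins then fail with cryptic errors. A small pre-flight check, as a sketch (the path and the exact checks are assumptions based on this page's example; on a real host also check every parent directory):

```shell
#!/bin/bash
# Verify a directory would plausibly be accepted by sshd as a chroot target.
check_chroot() {  # check_chroot DIR -> prints "ok", "missing", or a reason
  local dir=$1 mode owner
  [ -d "$dir" ] || { echo "missing"; return; }
  mode=$(stat -c '%a' "$dir")    # numeric mode, e.g. 755
  owner=$(stat -c '%U' "$dir")   # owning user, e.g. root
  if [ "$owner" != root ]; then
    echo "not root-owned ($owner)"
  elif [ "$mode" != 755 ]; then
    echo "bad mode ($mode)"
  else
    echo "ok"
  fi
}

check_chroot /var/www/webdata
```

Run it after the `mkdir`/`chown`/`chmod` steps and before restarting sshd; anything other than `ok` points at the step to redo.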
36 | 37 | --- 38 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /kubernetes/Rolling-updates-Rolling-back-Deployments.md: -------------------------------------------------------------------------------- 1 | # Rolling updates and Rolling back deployments in Kubernetes 2 | ## Solution 3 | * First create the new namespace: `kubectl create ns xfusion` 4 | * Next under `/tmp` directory create a `dep.yaml` file with the following contents. Make sure to change the names, image, replicas and rollingUpdate values as per question: 5 | ```yaml 6 | apiVersion: apps/v1 7 | kind: Deployment 8 | metadata: 9 | name: httpd-deploy 10 | namespace: xfusion 11 | spec: 12 | replicas: 5 13 | strategy: 14 | type: RollingUpdate 15 | rollingUpdate: 16 | maxSurge: 1 17 | maxUnavailable: 2 18 | selector: 19 | matchLabels: 20 | app: httpd 21 | template: 22 | metadata: 23 | labels: 24 | app: httpd 25 | spec: 26 | containers: 27 | - name: httpd 28 | image: httpd:2.4.28 29 | ``` 30 | * Perform a rolling update by running: 31 | `kubectl set image deployment httpd-deploy --namespace xfusion httpd=httpd:2.4.43 --record=true` 32 | e.g. Here 'httpd-deploy' is the name of the deployment and 'httpd' is the name of the container. Question asked to upgrade to version httpd:2.4.43. 33 | * You can check the rollout deployment status by running 34 | `kubectl rollout status deployment httpd-deploy --namespace xfusion` 35 | * Wait until you see 'successfully rolled out' message something like below: 36 | ``` 37 | Waiting for rollout to finish: 4 out of 5 new replicas have been updated... 
38 | deployment "httpd-deploy" successfully rolled out 39 | ``` 40 | * Now roll back the deployment as per the question: 41 | `kubectl rollout undo deployment httpd-deploy --namespace xfusion` 42 | * Wait until you see the `deployment 'httpd-deploy' rolled back` message 43 | * Make sure all the pods are in the Running state: `kubectl get deployments httpd-deploy --namespace xfusion` 44 | 45 | ## Verification 46 | * Run describe to verify that the image version has been rolled back e.g. `kubectl describe deployments httpd-deploy --namespace xfusion` 47 | 48 | --- 49 | For tips on getting better at KodeKloud Engineer Kubernetes tasks, [click here](./README.md) -------------------------------------------------------------------------------- /linux/Install-and-configure-PostgreSQL.md: -------------------------------------------------------------------------------- 1 | # Install and configure PostgreSQL 2 | ## Solution 3 | * Login to the Database host (stdb01) 4 | * Install and enable PostgreSQL server 5 | ``` 6 | sudo yum -y install postgresql-server postgresql-contrib 7 | sudo systemctl start postgresql 8 | sudo systemctl enable postgresql 9 | sudo systemctl status postgresql 10 | ``` 11 | * Perform initial setup of the PostgreSQL DB using the built-in command 12 | ``` 13 | sudo postgresql-setup initdb 14 | ``` 15 | * Now create the database and the user, and grant the user privileges. Make sure you simply copy-paste the password from the question into the below query. 16 | ```sql 17 | sudo -u postgres psql 18 | . 19 | .
20 | CREATE USER kodekloud_aim WITH PASSWORD 'ksH85UJjhb'; 21 | CREATE DATABASE kodekloud_db1; 22 | GRANT ALL PRIVILEGES ON DATABASE "kodekloud_db1" TO kodekloud_aim; 23 | ``` 24 | Exit from psql by typing `\q` 25 | * Now edit `/var/lib/pgsql/data/postgresql.conf` so that the below line is uncommented 26 | ```conf 27 | listen_addresses = 'localhost' 28 | ``` 29 | * Now edit `/var/lib/pgsql/data/pg_hba.conf` and update the below lines to md5 (make sure you don't duplicate any entry) 30 | ```conf 31 | # "local" is for Unix domain socket connections only 32 | local all all md5 33 | # IPv4 local connections: 34 | host all all 127.0.0.1/32 md5 35 | # IPv6 local connections: 36 | host all all ::1/128 md5 37 | ``` 38 | * Restart the services 39 | ``` 40 | sudo systemctl restart postgresql 41 | sudo systemctl status postgresql 42 | ``` 43 | ## Verification 44 | * Connect to the database to test, with the host as localhost. Use the password specified in the question to log in. You should log in successfully. 45 | ``` 46 | sudo psql -U kodekloud_aim -d kodekloud_db1 -h localhost -W 47 | ``` 48 | * If you see an error like `psql: FATAL: Ident authentication failed`, it means you have not edited `pg_hba.conf` properly. 49 | --- 50 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /jenkins/Create-Users-In-Jenkins.md: -------------------------------------------------------------------------------- 1 | # Create Users in Jenkins 2 | ## Introduction 3 | This task involves creating a new user and granting the user read-only access to Global objects as well as to the Job (that already exists) using a Project-based Matrix Authorization Strategy 4 | 5 | ## Solution 6 | ### Step 1: Create User 7 | * `Select port to view on Host 1` and connect to port `8081`.
Login using the Jenkins admin user and password given in the question 8 | * Click `Jenkins > Manage Jenkins > Manage Users > Create User` and provide values as per the question. Below is a sample: 9 | ``` 10 | Username: mariyam 11 | Password: x7yHGGx97 12 | Confirm Password: x7yHGGx97 13 | Fullname: Mariyam 14 | ``` 15 | ### Step 2: Assign Project-based Matrix Authorization Strategy 16 | * Click `Jenkins > Manage Jenkins > Configure Global Security` and under the `Authorization` section, check if you see the `Project-based Matrix Authorization Strategy` 17 | * In case you don't see this option 18 | * Click `Jenkins > Manage Jenkins > Manage Plugins` and click the `Available` tab. 19 | * Search for `Matrix`. You will see multiple matches. Select the `Matrix Authorization Strategy Plugin` and click `Download now and install after restart` 20 | * Wait for some time and refresh the browser 21 | * Again under the `Authorization` section, check if you now see the `Project-based Matrix Authorization Strategy` 22 | * Click the checkbox `Project-based Matrix Authorization Strategy`. This will reveal an additional matrix UI. Set it up as per the question: 23 | * Click `Add user or group...` and add the newly created user `mariyam` 24 | * Against user `mariyam` click the checkbox for `Read` under `Overall` 25 | * Against user `mariyam` click the checkbox for `Read` under `Job` 26 | * Press `Save` and then `log out` 27 | 28 | ## Verification 29 | * Login using the newly created user and password given in the question.
You should be able to successfully login 30 | * You should be able to see the minimal dashboard 31 | * You should also be able to see the Job that was already present in the question 32 | * Click the Job and you should not see any configure option, only read-only access to the job 33 | --- 34 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) 35 | 36 | -------------------------------------------------------------------------------- /git/Setup-From-Scratch.md: -------------------------------------------------------------------------------- 1 | # GIT Setup from Scratch 2 | ## Introduction 3 | The task involves the following steps: 4 | 1. Create a bare GIT repo 5 | 2. Set up an update hook that prevents direct pushes to master 6 | 3. Clone this repo into another directory 7 | 4. Create a new branch and switch to this branch 8 | 5. Commit a file into this new local branch and push the change to remote 9 | 6. Lastly, create a new local `master` branch and try to push the local master to the remote repo 10 | 11 | ## Solution 12 | * SSH to the required server i.e. `ssh natasha@ststor01` 13 | * Switch to root user: `sudo su` 14 | * Install GIT: `yum install -y git` 15 | * Set up the GIT user and email globally: 16 | ```unix 17 | git config --global --add user.name natasha 18 | git config --global --add user.email natasha@stratos.xfusioncorp.com 19 | ``` 20 | * Create a bare repository as per the question: `git init --bare /opt/apps.git` 21 | * Change to the repo directory `/opt/apps.git` and copy the `/tmp/update` hook to the `hooks` directory under `/opt/apps.git` 22 | ```unix 23 | cd /opt/apps.git 24 | cp /tmp/update hooks/ 25 | ``` 26 | * Now navigate to the clone directory as per the question e.g. `/usr/src/kodekloudrepos` and clone the repo: 27 | ```unix 28 | cd /usr/src/kodekloudrepos 29 | git clone /opt/apps.git 30 | ``` 31 | * You should see a `/usr/src/kodekloudrepos/apps` directory.
Change to that directory: `cd apps` 32 | * Now create a new branch as per the question: `git checkout -b xfusioncorp_apps` 33 | * Copy the `/tmp/readme.md` to the current directory: `cp /tmp/readme.md .` 34 | * Now commit the file and push it to the origin: 35 | ```unix 36 | git add readme.md 37 | git commit -m "Readme file" 38 | git push origin xfusioncorp_apps 39 | ``` 40 | 41 | ## Verification 42 | * Now switch to the new local branch, master, and attempt a push to origin. Your push should fail: 43 | ```unix 44 | git checkout -b master 45 | git push origin master 46 | ``` 47 | The error you see will be something similar to this: 48 | ``` 49 | remote: Manual pushing to this repo's master branch is restricted 50 | remote: error: hook declined to update refs/heads/master 51 | To /opt/apps.git 52 | ! [remote rejected] master -> master (hook declined) 53 | error: failed to push some refs to '/opt/apps.git' 54 | ``` 55 | 56 | -------------------------------------------------------------------------------- /linux/Install-and-configure-WebApp.md: -------------------------------------------------------------------------------- 1 | 2 | ## Install and configure Web Application 3 | ## Introduction 4 | The task is simpler than it sounds. As `/data` on `ststor01` is mounted on all appservers under `/var/www/html`, it's enough to copy the two directories mentioned in the question under `/data` on `ststor01`, and they magically appear under `/var/www/html` on all 3 appservers. No further configuration is required in `httpd.conf` other than changing the listen port as per the question. The two URL paths will automatically work. 5 | 6 | ## Solution 7 | ### Step 1 - Transfer static files 8 | * Copy the directories mentioned in the question from the Jump Host to `/data` on `ststor01`, and they are automatically reflected on all appservers.
9 | ``` 10 | scp -r /home/thor/news natasha@ststor01:/data 11 | scp -r /home/thor/cluster natasha@ststor01:/data 12 | ``` 13 | Note: 14 | * If `scp` doesn't work, install `openssh-clients` on `ststor01` and restart the sshd service: `sudo systemctl restart sshd` 15 | * If you get a permission denied error when copying files directly to `/data`, then first `scp` to `/tmp` on ststor01 (`scp -r ... natasha@ststor01:/tmp`) and then SSH to ststor01 and mv the files under `/data`: `mv /tmp/news /tmp/cluster /data` 16 | 17 | ### Step 2 - Install and configure Apache 18 | * Perform the following steps on each appserver host by first opening an SSH connection 19 | * Install httpd: `sudo yum install -y httpd` 20 | * Now edit `/etc/httpd/conf/httpd.conf` on each server to change the Listen port to 8080 21 | * Restart and enable httpd 22 | ``` 23 | sudo systemctl restart httpd 24 | sudo systemctl enable httpd 25 | ``` 26 | 27 | ## Verification 28 | * Use curl commands to verify that you are able to load HTML from the newly added paths on each appserver. You should see a valid HTML page returned. 29 | ``` 30 | curl http://stapp01:8080/news/ 31 | curl http://stapp01:8080/cluster/ 32 | curl http://stapp02:8080/news/ 33 | curl http://stapp02:8080/cluster/ 34 | curl http://stapp03:8080/news/ 35 | curl http://stapp03:8080/cluster/ 36 | ``` 37 | * Finally, access the Loadbalancer URL by clicking `Select port to view on Host 1`, adding port `80`, and clicking Display Port. Edit the address bar to add the paths `/news/` and `/cluster/` to check that you can see the respective pages.
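Since the same one-line `Listen` change has to be repeated on three appservers, a non-interactive edit is less error-prone than hand-editing in vi. Below is a hedged sketch; `CONF` defaults to a throwaway demo file so it can be dry-run anywhere, and on a real appserver you would point it at `/etc/httpd/conf/httpd.conf` (with sudo) and then restart httpd:

```shell
#!/bin/bash
# Flip Apache's Listen directive to the task's port in one shot.
CONF=${CONF:-/tmp/httpd-demo.conf}
NEWPORT=${NEWPORT:-8080}
[ -f "$CONF" ] || echo 'Listen 80' > "$CONF"   # demo stand-in for the real config
sed -i "s/^Listen [0-9]\+$/Listen $NEWPORT/" "$CONF"
grep '^Listen' "$CONF"   # confirm the directive now carries the new port
```

On a real host: `sudo CONF=/etc/httpd/conf/httpd.conf NEWPORT=8080 bash script.sh` followed by the restart/enable commands above.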
38 | 39 | --- 40 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /puppet/Setup-puppet-certs-autosign.md: -------------------------------------------------------------------------------- 1 | # Setup Puppet Certs Autosign 2 | ## Solution 3 | ### Configure AutoSign in Puppet server 4 | * Create a new `autosign.conf` file under `/etc/puppetlabs/puppet/` directory with the following content: 5 | ``` 6 | jump_host.stratos.xfusioncorp.com 7 | stapp01.stratos.xfusioncorp.com 8 | stapp02.stratos.xfusioncorp.com 9 | stapp03.stratos.xfusioncorp.com 10 | ``` 11 | 12 | ### Add puppet alias in the Jump Host 13 | * Edit `/etc/hosts` file and add the `puppet` alias next to the jump host entry i.e. `jump_host.stratos.xfusioncorp.com` 14 | ``` 15 | ... 16 | ... 17 | 172.16.238.3 jump_host.stratos.xfusioncorp.com jump_host puppet 18 | 172.16.239.5 jump_host.stratos.xfusioncorp.com jump_host puppet 19 | ``` 20 | * Restart puppetserver: `systemctl restart puppetserver` 21 | 22 | ### Add puppet alias in the Appserver Hosts 23 | Perform the next set of steps in each appserver host 24 | * SSH to the appserver host 25 | * Same way as Jump Host, edit `/etc/hosts` file and add the `puppet` alias next to the jump host entry 26 | ``` 27 | ... 28 | 172.16.238.3 jump_host.stratos.xfusioncorp.com puppet 29 | ... 30 | ... 31 | 32 | ``` 33 | * Run `puppet agent -tv`. You should see the new certificate generated and printed something like this: 34 | ``` 35 | Info: Creating a new RSA SSL key for stapp01.stratos.xfusioncorp.com 36 | Info: csr_attributes file loading from /home/tony/.puppetlabs/etc/puppet/csr_attributes.yaml 37 | Info: Creating a new SSL certificate request for stapp01.stratos.xfusioncorp.com 38 | Info: Certificate Request fingerprint (SHA256): 39 | ..... 
40 | Info: Downloaded certificate for stapp01.stratos.xfusioncorp.com from https://puppet:8140/puppet-ca/v1 41 | ..... 42 | .... 43 | ``` 44 | 45 | ## Verification 46 | * In the Jump Host, verify that you are able to see the newly generated certificates by running `puppetserver ca list --all`. You should see all the certificates printed like this: 47 | ``` 48 | Signed Certificates: 49 | stapp02.stratos.xfusioncorp.com (SHA256) 50 | ....... 51 | jump_host.stratos.xfusioncorp.com (SHA256) 52 | .... alt names: ["DNS:puppet", "DNS:jump_host.stratos.xfusioncorp.com"] ... 53 | stapp03.stratos.xfusioncorp.com (SHA256) 54 | .... 55 | stapp01.stratos.xfusioncorp.com (SHA256) 56 | ...... 57 | ``` 58 | 59 | --- 60 | For general tips on getting better at KodeKloud Engineer Puppet tasks, [click here](./README.md) 61 | -------------------------------------------------------------------------------- /puppet/kke-setup-firewall-rules.md: -------------------------------------------------------------------------------- 1 | # Puppet Setup Firewall Rules 2 | ## Solution 3 | * Install `puppet-firewalld` module by running `puppet module install puppet-firewalld` on Jump host 4 | * On Jump host create the required inventory file with the name given as per the question i.e.`/etc/puppetlabs/code/environments/production/manifests/code.pp` and content as below. 
5 | ```ruby 6 | node 'stapp01.stratos.xfusioncorp.com' { 7 | include firewall_node1 8 | } 9 | 10 | node 'stapp02.stratos.xfusioncorp.com' { 11 | include firewall_node2 12 | } 13 | 14 | node 'stapp03.stratos.xfusioncorp.com' { 15 | include firewall_node3 16 | } 17 | ``` 18 | * In the same folder, create the required implementation file with the name given as per the question i.e. `/etc/puppetlabs/code/environments/production/manifests/demo.pp` and content as below (change the values of the ports below according to the question) 19 | ```ruby 20 | class { 'firewalld': } 21 | 22 | class firewall_node1 { 23 | firewalld_port { 'Open port 3000 in the public zone': 24 | ensure => present, 25 | zone => 'public', 26 | port => 3000, 27 | protocol => 'tcp', 28 | } 29 | } 30 | 31 | class firewall_node2 { 32 | firewalld_port { 'Open port 9006 in the public zone': 33 | ensure => present, 34 | zone => 'public', 35 | port => 9006, 36 | protocol => 'tcp', 37 | } 38 | } 39 | 40 | class firewall_node3 { 41 | firewalld_port { 'Open port 8091 in the public zone': 42 | ensure => present, 43 | zone => 'public', 44 | port => 8091, 45 | protocol => 'tcp', 46 | } 47 | } 48 | ``` 49 | * Run the puppet verification steps: 50 | * On the Jump host: `puppet parser validate code.pp` and `puppet parser validate demo.pp`. You should not see any errors. 51 | * SSH to one of the appserver hosts and run `sudo puppet agent -tv --noop` first to dry-run the code. Check there are no issues. 52 | * Finally run `sudo puppet agent -tv` on each of the appserver hosts to implement the firewall changes. Note: no need to run `firewall-cmd --reload`; the puppet code automatically does that. 53 | 54 | ## Verification 55 | * Run `curl http://<host>:<port>/` from the Jump Host to the specific port on each appserver (as per the question) to test connectivity e.g. `curl http://stapp01:3000/`. You should get back a valid HTML page, as Apache is running on specific ports on each appserver.
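Because the port numbers change with every variant of the question, it can help to generate the repetitive `firewalld_port` classes rather than hand-edit `demo.pp` each time. A hedged sketch (the generated text mirrors the example classes above; redirecting its output into the manifest is left to you):

```shell
#!/bin/bash
# Emit one firewalld_port class per node, so only the name/port pair needs
# editing when the question's values change.
gen_class() {  # gen_class NAME PORT -> prints a "class firewall_NAME" block
  cat <<EOF
class firewall_$1 {
  firewalld_port { 'Open port $2 in the public zone':
    ensure   => present,
    zone     => 'public',
    port     => $2,
    protocol => 'tcp',
  }
}
EOF
}

gen_class node1 3000   # the stapp01 example from this page
```

For example, `gen_class node2 9006 >> demo.pp` appends the second node's class; run `puppet parser validate demo.pp` afterwards as usual.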
56 | 57 | --- 58 | For general tips on getting better at KodeKloud Engineer Puppet tasks, [click here](./README.md) 59 | 60 | -------------------------------------------------------------------------------- /jenkins/README.md: -------------------------------------------------------------------------------- 1 | # Jenkins Tasks 2 | ## General Jenkins Tips 3 | * Note that you can search for multiple plugins, select them and finally click `Download now and install after restart`. When you select one and search for the next one, the previous one disappears. But it is still selected behind-the-scenes and gets installed when you click `Download now and install after restart` 4 | * Some of the Jenkins tasks require you to create a Build Job that executes some commands on the nautilus appservers or storage server. To achieve this, enable password-less sudo on the server. Check out the [Deployment using Jenkins](./Deployment-Using-Jenkins.md) task for the steps 5 | * Some of the Jenkins tasks require you to get code from a GIT repo and transfer it to the storage server. For this, you need to install the [Publish over SSH](https://plugins.jenkins.io/publish-over-ssh/) plugin in Jenkins 6 | * Some tasks even require you to trigger builds automatically based on changes pushed to the GIT repo. For this, you need to enable a [Webhook](https://en.wikipedia.org/wiki/Webhook) on the Build job and set up this Webhook in the Gitea UI. You also need to install the [Build Authorization Token Root](https://plugins.jenkins.io/build-token-root/) plugin in Jenkins to allow Gitea to trigger the Jenkins build without authenticating 7 | * You can make use of the free [Katakoda Docker Playground](https://www.katacoda.com/courses/docker/playground) to practice Jenkins changes.
So the recommendation is to: 8 | * Open the task in Kodekloud Engineer, note down the question and press `Try Later` 9 | * Open the KataKoda Playground and in the command line run `docker run --name jenkins -p 8080:8080 -d jenkins/jenkins` 10 | * After the container starts up, you can `Select port to view on Host 1` and give `8080`. This opens up the Jenkins Administration Portal in the browser. 11 | * You need the default administrator password for the first-time setup. For this, you need to exec a shell command on the container: `docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword`. This prints out the default administrator password. 12 | * Use the password to log in and set up Jenkins as required 13 | * You can play around with Jenkins until you are confident enough to attempt the question 14 | * Reopen the question in Kodekloud Engineer, and follow the same approach 15 | 16 | ## Common mistakes 17 | * Not running the build Job more than once, even though some tasks explicitly state that the Build Job should be such that it can be run more than once 18 | --- 19 | For general tips on getting better at KodeKloud Engineer tasks, [click here](../README.md) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # kke-solutions 2 | ## Solutions for Kodekloud Engineer 3 | ### Introduction 4 | This GIT project contains my own solutions to the tasks in Kodekloud Engineer. As a word of personal advice, please use the solutions only for your reference. It's best that you attempt the tasks yourself first by reading through the original documentation. This will immensely help you develop your skills. After all, that's the objective of Kodekloud Engineer, isn't it? 5 | 6 | [Kodekloud](https://kodekloud.com) has some great courses that cover almost all of the topics tested in Kodekloud Engineer.
I strongly recommend signing up and enhancing your DevOps skills (please note that I'm not affiliated with Kodekloud in any way). 7 | 8 | Good luck with your learning journey! 9 | 10 | ### General Tips 11 | * **Begin with the end in mind** 12 | * Always begin the task with an idea of how to verify its successful completion. For verification guidance and other specific guidance on individual topics, follow the links below: 13 | * [Linux System Administration tasks](./linux/README.md) 14 | * [Kubernetes tasks](./kubernetes/README.md) 15 | * [Docker tasks](./docker/README.md) 16 | * [Ansible tasks](./ansible/README.md) 17 | * [Puppet tasks](./puppet/README.md) 18 | * [Jenkins tasks](./jenkins/README.md) 19 | * **It's okay to "Try Later"** 20 | * If you think you took too much time to perform the task, or you screwed up the environment, click `Try Later` and come back to the task again. Most importantly, read through the question again, as the question values change with each reload of the question. 21 | * **Go for Bonus points** 22 | * If you complete the task within 15 minutes, you get half of the task points as bonus points. For example, if a task is worth 500 points and you complete it in under 15 minutes, you will get 500 + 250 points. 23 | * One tip is to take as much time as you need to get the answer right, save it in a text editor on your computer, click `Try Later`, and redo the task, this time within 15 minutes, to earn the bonus points. Don't forget to re-read the question. 24 | * **Make it easy for the Reviewers** 25 | * If you are copy-pasting code into the vi editor, pause for 5-10 secs so that reviewers can take one good look at the code during reviews 26 | * Use the `more` command for files, so that reviewers can clearly see the content 27 | * Perform steps in the same tab, if possible, as only the main tab is recorded 28 | * Run state verification steps so that reviewers can cross-check the outcome of tasks e.g.
`curl`, state output steps e.g. `kubectl get pods` / `systemctl status` 29 | 30 | 31 | -------------------------------------------------------------------------------- /linux/Install-and-configure-DB-Server.md: -------------------------------------------------------------------------------- 1 | # Install and configure DB Server 2 | ## Solution 3 | ### Step 1 - Copy the db.sql to Database Server 4 | * On Jump Host, install `openssh-clients` to enable scp. Using scp, copy database script to Database Server: 5 | ```UNIX 6 | sudo yum install openssh-clients -y 7 | scp /home/thor/db.sql peter@stdb01:/tmp 8 | ``` 9 | 10 | ### Step 2 - MariaDB installation and configuration 11 | * SSH to Database host (stdb01) 12 | * Install and enable MariaDB server 13 | ```UNIX 14 | sudo yum install mariadb mariadb-server -y 15 | sudo systemctl start mariadb 16 | sudo systemctl enable mariadb 17 | sudo systemctl status mariadb 18 | ``` 19 | * Setup MariaDB using the built-in secure installation script (Default Root password is blank. So just press ENTER): `sudo mysql_secure_installation`. When prompted, provide the following values: 20 | ``` 21 | Set root password? [Y/n] n 22 | Remove anonymous users? [Y/n] Y 23 | Disallow root login remotely? [Y/n] Y 24 | Remove test database and access to it? [Y/n] Y 25 | Reload privilege tables now? [Y/n] Y 26 | ``` 27 | * Login to database using the root user (default password is blank. So just press ENTER) 28 | ```UNIX 29 | mysql -u root -p 30 | ``` 31 | Once you have logged in, then run the below SQL commands to create database, create user, set the password and grant privileges (In this example `kodekloud` is the password set for the user). 
Make sure to change the below values as per the question: 32 | ```SQL 33 | MariaDB [(none)]>CREATE DATABASE kodekloud_db5; 34 | MariaDB [(none)]>GRANT ALL PRIVILEGES on kodekloud_db5.* to 'kodekloud_roy'@'%' identified by 'kodekloud'; 35 | MariaDB [(none)]>FLUSH PRIVILEGES; 36 | ``` 37 | Note: It's important to grant privileges to the user on all hosts, as the user will connect from WordPress as `kodekloud_roy@stdb01`. 38 | * Now load the database script as below 39 | ```SQL 40 | MariaDB [(none)]>SOURCE /tmp/db.sql; 41 | ``` 42 | 43 | #### Verify MariaDB setup 44 | * Use `mysqlshow` to verify that the account you created works as expected, especially with the host as stdb01. You should see all the WordPress tables listed i.e. wp... 45 | ```UNIX 46 | mysqlshow -u kodekloud_roy -h stdb01 kodekloud_db5 47 | ``` 48 | In case the above doesn't work, try `mysqlshow -u kodekloud_roy -h stdb01 kodekloud_db5 -p`. Give the password as `kodekloud` when prompted. 49 | 50 | ### Step 3 - WordPress configuration 51 | * SSH to the storage server (ststor01) 52 | * Edit `wp-config.php` from the location specified in the question and set the DB details 53 | ``` 54 | define('DB_NAME', 'kodekloud_db5'); 55 | define('DB_USER', 'kodekloud_roy'); 56 | define('DB_PASSWORD', 'kodekloud'); 57 | define('DB_HOST', 'stdb01'); 58 | ``` 59 | #### Verify WordPress setup 60 | * Click the tab `Select port to view on Host 1`, and after adding port 80 click on Display Port 61 | * You should see a sample WordPress blog site loaded 62 | 63 | --- 64 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /jenkins/Add-Slave-Nodes.md: -------------------------------------------------------------------------------- 1 | # Add Slave Nodes in Jenkins 2 | ## Introduction 3 | This task requires you to set up the 3 appservers as Slave Nodes in Jenkins.
To achieve this, you can make use of the [SSH Build Agents](https://plugins.jenkins.io/ssh-slaves/) plugin to simplify the setup of the nodes. For `SSH Build Agents` to work correctly, you need to install Java on the Appserver nodes. 4 | 5 | ## Solution 6 | ### Step 1: Install SSH Build Agent plugin in Jenkins 7 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 8 | * Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for the `SSH Build Agents` plugin. 9 | * Select the `SSH Build Agents` plugin and click `Download now and install after restart` 10 | * In the following screen, click checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the screen to become standstill. 11 | * You can try to refresh your browser after a few secs. 12 | 13 | ### Step 2: Install Java in the Appservers 14 | * SSH to each appserver (stapp01, stapp02 and stapp03) and install Java: `sudo yum install -y java` 15 | 16 | ### Step 3: Add Slave Nodes 17 | * In the Jenkins Admin Console, add a new slave node under `Jenkins > Manage Jenkins > Manage Nodes and Clouds > New Node` 18 | * Provide the following values 19 | ``` 20 | Node Name: App_server_1 21 | (Permanent Agent) 22 | Remote root directory: /home/tony/jenkins 23 | Labels: stapp01 24 | Launch Method: Launch Agents via SSH 25 | ``` 26 | * Additional options will be revealed.
Click the `Add Button > Jenkins` next to `Credentials` to add credentials for tony, steve and banner: 27 | * Leave kind as `Username with Password` and Scope as `Global (..)` 28 | * Add SSH credentials for the sudo users of the respective servers (tony, steve and banner): 29 | ``` 30 | Username: tony 31 | Password: Ir0nM@n 32 | ID: tony 33 | ``` 34 | * Configure remaining Node SSH options as follows: 35 | ``` 36 | Host: stapp01 37 | Credentials: tony (From the list you added earlier) 38 | Host Key Verification Strategy: Non verifying Verification Strategy 39 | ``` 40 | * Click `Save` 41 | * Repeat the above steps to add `stapp02` 42 | ``` 43 | Node Name: App_server_2 44 | (Permanent Agent) 45 | Remote root directory: /home/steve/jenkins 46 | Labels: stapp02 47 | Launch Method: Launch Agents via SSH 48 | Host: stapp02 49 | Credentials: steve (From the list you added earlier) 50 | Host Key Verification Strategy: Non verifying Verification Strategy 51 | ``` 52 | * ... and `stapp03` 53 | ``` 54 | Node Name: App_server_3 55 | (Permanent Agent) 56 | Remote root directory: /home/banner/jenkins 57 | Labels: stapp03 58 | Launch Method: Launch Agents via SSH 59 | Host: stapp03 60 | Credentials: banner (From the list you added earlier) 61 | Host Key Verification Strategy: Non verifying Verification Strategy 62 | ``` 63 | 64 | ## Verification 65 | * Wait for a few seconds for the agents to be configured 66 | * Refresh the nodes list. You should see the newly added nodes(`App_server_1`, `App_server_2` and `App_server_3`) displayed with all system statistics. This means the nodes setup was successful -------------------------------------------------------------------------------- /kubernetes/kke-irongallery.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create namespace iron-namespace-nautilus 3 | # Step 2: kubectl create -f 4 | # Step 3: Wait for the pods to be in 'Running' state. 
Note down any 5 | # frontend pod's name i.e. iron-gallery-deployment-nautilus-xxxxx 6 | # Step 4: Verify: kubectl exec iron-gallery-deployment-nautilus-xxxxx -- curl http://localhost/ 7 | # You should see a valid HTML content being returned 8 | # Step 5: Verify: Open Irongallery app on browser by clicking 'Open Port on Host 1' 9 | # and port as NodePort below. You should see Irongallery app without any errors 10 | # 11 | # For tips on getting better at Kubernetes tasks, check out the README.md 12 | # in this folder 13 | # 14 | apiVersion: v1 15 | kind: Service 16 | metadata: 17 | name: iron-db-service-nautilus 18 | namespace: iron-namespace-nautilus 19 | spec: 20 | type: ClusterIP 21 | selector: 22 | db: mariadb 23 | ports: 24 | - port: 3306 25 | targetPort: 3306 26 | --- 27 | apiVersion: v1 28 | kind: Service 29 | metadata: 30 | name: iron-gallery-service-nautilus 31 | namespace: iron-namespace-nautilus 32 | spec: 33 | type: NodePort 34 | selector: 35 | run: iron-gallery 36 | ports: 37 | - port: 80 38 | targetPort: 80 39 | nodePort: 32678 40 | --- 41 | apiVersion: apps/v1 42 | kind: Deployment 43 | metadata: 44 | name: iron-db-deployment-nautilus 45 | namespace: iron-namespace-nautilus 46 | labels: 47 | db: mariadb 48 | spec: 49 | replicas: 1 50 | selector: 51 | matchLabels: 52 | db: mariadb 53 | template: 54 | metadata: 55 | labels: 56 | db: mariadb 57 | spec: 58 | volumes: 59 | - name: db 60 | emptyDir: {} 61 | containers: 62 | - name: iron-db-container-nautilus 63 | image: kodekloud/irondb:2.0 64 | env: 65 | - name: MYSQL_DATABASE 66 | value: database_host 67 | - name: MYSQL_ROOT_PASSWORD 68 | value: P@55w.rd 69 | - name: MYSQL_PASSWORD 70 | value: P@55w.rd 71 | - name: MYSQL_USER 72 | value: kodekloud 73 | volumeMounts: 74 | - name: db 75 | mountPath: /var/lib/mysql 76 | --- 77 | apiVersion: apps/v1 78 | kind: Deployment 79 | metadata: 80 | name: iron-gallery-deployment-nautilus 81 | namespace: iron-namespace-nautilus 82 | labels: 83 | run: iron-gallery 84 | 
spec: 85 | replicas: 1 86 | selector: 87 | matchLabels: 88 | run: iron-gallery 89 | template: 90 | metadata: 91 | labels: 92 | run: iron-gallery 93 | spec: 94 | volumes: 95 | - name: config 96 | emptyDir: {} 97 | - name: images 98 | emptyDir: {} 99 | containers: 100 | - name: iron-gallery-container-nautilus 101 | image: kodekloud/irongallery:2.0 102 | volumeMounts: 103 | - name: config 104 | mountPath: /usr/share/nginx/html/data 105 | - name: images 106 | mountPath: /usr/share/nginx/html/uploads 107 | resources: 108 | limits: 109 | memory: "100Mi" 110 | cpu: "50m" 111 | -------------------------------------------------------------------------------- /kubernetes/kke-guest-app.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create -f 3 | # Step 2: Wait for the pods to be in 'Running' state. Note down any 4 | # frontend pod's name i.e. frontend-xxxxx 5 | # Step 3: Verify: kubectl exec frontend-xxxxx -- curl http://localhost/ 6 | # You should see a valid HTML content being returned 7 | # Step 4: Verify: Open Guesbook app on browser by clicking 'Open Port on Host 1' 8 | # and port as NodePort below. 
You should see Guestbook app without any errors 9 | # 10 | # For tips on getting better at Kubernetes tasks, check out the README.md 11 | # in this folder 12 | # 13 | apiVersion: v1 14 | kind: Service 15 | metadata: 16 | name: redis-master 17 | spec: 18 | type: ClusterIP 19 | selector: 20 | app: redis-master 21 | tier: back-end 22 | ports: 23 | - port: 6379 24 | targetPort: 6379 25 | --- 26 | apiVersion: v1 27 | kind: Service 28 | metadata: 29 | name: redis-slave 30 | spec: 31 | type: ClusterIP 32 | selector: 33 | app: redis-slave 34 | tier: back-end 35 | ports: 36 | - port: 6379 37 | targetPort: 6379 38 | --- 39 | apiVersion: v1 40 | kind: Service 41 | metadata: 42 | name: frontend 43 | spec: 44 | type: NodePort 45 | selector: 46 | app: guestbook 47 | tier: front-end 48 | ports: 49 | - port: 80 50 | targetPort: 80 51 | nodePort: 30009 52 | --- 53 | apiVersion: apps/v1 54 | kind: Deployment 55 | metadata: 56 | name: redis-master 57 | spec: 58 | replicas: 1 59 | selector: 60 | matchLabels: 61 | app: redis-master 62 | tier: back-end 63 | template: 64 | metadata: 65 | labels: 66 | app: redis-master 67 | tier: back-end 68 | spec: 69 | containers: 70 | - name: master-redis-xfusion 71 | image: redis 72 | resources: 73 | requests: 74 | memory: "100Mi" 75 | cpu: "100m" 76 | ports: 77 | - containerPort: 6379 78 | --- 79 | apiVersion: apps/v1 80 | kind: Deployment 81 | metadata: 82 | name: redis-slave 83 | spec: 84 | replicas: 2 85 | selector: 86 | matchLabels: 87 | app: redis-slave 88 | tier: back-end 89 | template: 90 | metadata: 91 | labels: 92 | app: redis-slave 93 | tier: back-end 94 | spec: 95 | containers: 96 | - name: slave-redis-xfusion 97 | image: gcr.io/google_samples/gb-redisslave:v3 98 | resources: 99 | requests: 100 | memory: "100Mi" 101 | cpu: "100m" 102 | env: 103 | - name: GET_HOSTS_FROM 104 | value: dns 105 | ports: 106 | - containerPort: 6379 107 | --- 108 | apiVersion: apps/v1 109 | kind: Deployment 110 | metadata: 111 | name: frontend 112 | spec: 113 | 
replicas: 3 114 | selector: 115 | matchLabels: 116 | app: guestbook 117 | tier: front-end 118 | template: 119 | metadata: 120 | labels: 121 | app: guestbook 122 | tier: front-end 123 | spec: 124 | containers: 125 | - name: php-redis-xfusion 126 | image: gcr.io/google-samples/gb-frontend:v4 127 | resources: 128 | requests: 129 | memory: "100Mi" 130 | cpu: "100m" 131 | env: 132 | - name: GET_HOSTS_FROM 133 | value: dns 134 | ports: 135 | - containerPort: 80 136 | -------------------------------------------------------------------------------- /docker/Resolve-Dockerfile-Issues.md: -------------------------------------------------------------------------------- 1 | # Resolve Dockerfile Issues 2 | ## Solution 3 | There are multiple versions of the same question. So refer to the appropriate solution based on the question you get 4 | 5 | ### Version 1 6 | * Initially, the file looks like this (containing IMAGE and ADD directives) 7 | ```Dockerfile 8 | IMAGE httpd:2.4.43 9 | 10 | ADD sed -i "s/Listen 80/Listen 8080/g" /usr/local/apache2/conf/httpd.conf 11 | ADD sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' conf/httpd.conf 12 | ADD sed -i '/LoadModule\ socache_shmcb_module modules\/mod_socache_shmcb.so/s/^#//g' conf/httpd.conf 13 | 14 | ADD sed -i '/Include\ conf\/extra\/httpd-ssl.conf/s/^#//g' conf/httpd.conf 15 | 16 | COPY certs/server.crt /usr/local/apache2/conf/server.crt 17 | 18 | COPY certs/server.key /usr/local/apache2/conf/server.key 19 | 20 | COPY html/index.html /usr/local/apache2/htdocs/ 21 | ``` 22 | * After replacing with valid directives (IMAGE with FROM and ADD with RUN), the file should look like this 23 | ```Dockerfile 24 | FROM httpd:2.4.43 25 | 26 | RUN sed -i "s/Listen 80/Listen 8080/g" /usr/local/apache2/conf/httpd.conf 27 | RUN sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' conf/httpd.conf 28 | RUN sed -i '/LoadModule\ socache_shmcb_module modules\/mod_socache_shmcb.so/s/^#//g' conf/httpd.conf 29 | 30 | RUN sed -i 
'/Include\ conf\/extra\/httpd-ssl.conf/s/^#//g' conf/httpd.conf 31 | 32 | COPY certs/server.crt /usr/local/apache2/conf/server.crt 33 | 34 | COPY certs/server.key /usr/local/apache2/conf/server.key 35 | 36 | COPY html/index.html /usr/local/apache2/htdocs/ 37 | ``` 38 | 39 | ### Version 2 40 | * Initially, the file looks like this (paths containing `conf.d`) 41 | ```Dockerfile 42 | FROM httpd:2.4.43 43 | 44 | RUN sed -i "s/Listen 80/Listen 8080/g" /usr/local/apache2/conf.d/httpd.conf 45 | 46 | RUN sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' conf.d/httpd.conf 47 | 48 | RUN sed -i '/LoadModule\ socache_shmcb_module modules\/mod_socache_shmcb.so/s/^#//g' conf.d/httpd.conf 49 | 50 | RUN sed -i '/Include\ conf\/extra\/httpd-ssl.conf/s/^#//g' conf.d/httpd.conf 51 | 52 | COPY certs/server.crt /usr/local/apache2/conf/server.crt 53 | 54 | COPY certs/server.key /usr/local/apache2/conf/server.key 55 | 56 | COPY html/index.html /user/local/apache2/htdocs 57 | ``` 58 | * The problems are the `httpd.conf` path, as `conf.d` is invalid (it should be `conf`), and the `COPY` destination typo `/user/local/apache2/htdocs` (it should be `/usr/local/apache2/htdocs/`).
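In both versions, the `sed` expressions match a line and strip its leading `#` to uncomment it. You can sanity-check this style of expression on a scratch file before building the image (the single-line file below is illustrative, not a real `httpd.conf`):

```shell
# Scratch file containing one commented-out LoadModule line, as found in httpd.conf
conf=$(mktemp)
echo '#LoadModule ssl_module modules/mod_ssl.so' > "$conf"

# Same pattern as the Dockerfile: match the LoadModule line, strip the leading '#'
sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' "$conf"

cat "$conf"   # → LoadModule ssl_module modules/mod_ssl.so
rm -f "$conf"
```

The address between the first pair of slashes selects the line; `s/^#//g` then deletes the `#` only at the start of that line, so already-uncommented lines are left untouched.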
After fixing the errors, the new file should look like this 59 | ```Dockerfile 60 | FROM httpd:2.4.43 61 | 62 | RUN sed -i "s/Listen 80/Listen 8080/g" /usr/local/apache2/conf/httpd.conf 63 | 64 | RUN sed -i '/LoadModule\ ssl_module modules\/mod_ssl.so/s/^#//g' conf/httpd.conf 65 | 66 | RUN sed -i '/LoadModule\ socache_shmcb_module modules\/mod_socache_shmcb.so/s/^#//g' conf/httpd.conf 67 | 68 | RUN sed -i '/Include\ conf\/extra\/httpd-ssl.conf/s/^#//g' conf/httpd.conf 69 | 70 | COPY certs/server.crt /usr/local/apache2/conf/server.crt 71 | 72 | COPY certs/server.key /usr/local/apache2/conf/server.key 73 | 74 | COPY html/index.html /usr/local/apache2/htdocs/ 75 | ``` 76 | 77 | 78 | ## Verification 79 | * First, build an image from this Dockerfile using `sudo docker build -t my_image .` 80 | * You should see that the image gets built successfully without any errors 81 | * Next, run the image as `sudo docker run --name my_httpd -p 8080:8080 -d my_image`. It should run without any errors. 82 | * Lastly, test using curl: `curl http://localhost:8080`. You should see HTML content returned from the container. 83 | 84 | --- 85 | For tips on getting better at KodeKloud Engineer Docker tasks, [click here](./README.md) -------------------------------------------------------------------------------- /linux/README.md: -------------------------------------------------------------------------------- 1 | # System Administration Tasks 2 | ## General Tips 3 | * Always verify your task using one or more of the approaches below: 4 | * Use the curl command to test the task. This is particularly useful for verifying tasks that involve HTTP/S servers e.g. Apache, Nginx, Firewalld, Iptables. Usage is: `curl <url>`. Examples: 5 | * Simple URL fetch: `curl http://stapp01:8080/`. You should get valid HTML content returned back. 6 | * Check HTTP headers (Especially useful to verify tasks that involve restricting access) 7 | `curl -I http://stapp01:8080/`.
You should see a valid HTTP header with the HTTP return code. 8 | * To ignore any certificate errors and still fetch the content, use the option `-k` 9 | `curl -k http://stapp01:8080/` or `curl -Ik http://stapp01:8080` 10 | * To connect as a specific user (checking PAM or Htaccess tasks): 11 | `curl -u javed:ax23Xsdg http://stapp01:8080/` ('javed' is the user and 'ax23Xsdg' is the password) 12 | * You can also use Telnet to verify connectivity to a specific port: `telnet stapp01 8080`. Ensure that you do not see any connection errors. 13 | * For some tasks, you need to use the browser by clicking the `Open Port on Host 1` tab to open the site on a specific port (especially the Wordpress task) 14 | * For tasks that involve iptables, use `iptables -nvL` to list all the rules in a simple-to-understand format 15 | * For some tasks, it may be time-saving and less error-prone if you instead create a shell script, with all the required commands, offline using your favourite IDE. This is how you will actually perform tasks in the real world. You can then copy-paste the script to each required host and execute it. This will help you to easily go for bonus points, e.g. tasks that require you to configure Iptables or Firewalld 16 | * Most of the Linux tasks are based on a CentOS environment. Hence you can make use of the free [Katacoda CentOS Playground](https://www.katacoda.com/courses/centos/playground) to practice changes. This is pretty useful for tasks that involve Httpd, Nginx, MariaDB, PostgreSQL, Firewalld or Iptables. This maximizes the learning process.
So the recommendation is to: 17 | * Open the task in Kodekloud Engineer, note down the question and press `Try Later` 18 | * Open the Katacoda Playground and play around until you are confident enough to attempt the question 19 | * Reopen the question in Kodekloud Engineer, and follow the same approach 20 | * Note: 21 | * To find out the OS flavor and version, run `cat /etc/*release*` 22 | * The CentOS Playground will allow processes such as httpd to listen only on port 8080 23 | * You can access the browser by clicking `Open Port on Host 1`, similar to KKE 24 | * For a similar environment on Ubuntu, use the [Katacoda Ubuntu Playground](https://www.katacoda.com/courses/ubuntu/playground) 25 | 26 | ## Common mistakes 27 | * Not reading the question properly. Especially when you redo the same question, all the names and port values will have changed in the new question. So pay attention to that. 28 | * To restart a systemd service after performing changes (like nginx.conf, httpd.conf, sshd_config), run `sudo systemctl restart <service>` and not `sudo systemctl start <service>`. The problem with using the `start` option is that if the service is already running, nothing is done. So your changes never take effect.
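The edit-then-restart workflow behind that mistake can be sketched on a scratch file (the `Listen` directive, port, and paths here are illustrative; the `systemctl` lines are left as comments because they only make sense on the real host):

```shell
# Scratch stand-in for a config file such as /etc/httpd/conf/httpd.conf
conf=$(mktemp)
printf 'Listen 80\nServerName localhost\n' > "$conf"

# Change the listen port in place, as many KKE tasks require
sed -i 's/^Listen 80$/Listen 8080/' "$conf"

# Verify the edit actually landed before bouncing the service
grep '^Listen' "$conf"   # → Listen 8080

# On the real host, apply the change with a restart (NOT start):
#   sudo systemctl restart httpd
#   sudo systemctl status httpd
rm -f "$conf"
```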
29 | * For Firewalld tasks, most people miss out on reloading the firewall rules by running `sudo firewall-cmd --reload` 30 | * For Iptables tasks, forgetting to persist the iptables rules is another common mistake: `sudo service iptables save` 31 | 32 | 33 | --- 34 | For general tips on getting better at KodeKloud Engineer tasks, [click here](../README.md) 35 | -------------------------------------------------------------------------------- /kubernetes/kke-redis.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create -f 3 | # Step 2: Wait until all the pods are up 4 | # Step 3: Run the following command (Type 'yes' when prompted) 5 | # kubectl exec -it redis-cluster-0 -- redis-cli --cluster create 6 | # --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster 7 | # -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}') 8 | # Step 4: Verify: Open app on browser by clicking 'Open Port on Host 1' 9 | # and port as NodePorts below. 
You should see the page without any errors 10 | # 11 | # For tips on getting better at Kubernetes tasks, check out the README.md 12 | # in this folder 13 | # 14 | --- 15 | apiVersion: v1 16 | kind: PersistentVolume 17 | metadata: 18 | name: redis-pv-01 19 | spec: 20 | capacity: 21 | storage: 1Gi 22 | volumeMode: Filesystem 23 | accessModes: 24 | - ReadWriteOnce 25 | hostPath: 26 | path: /redis01 27 | --- 28 | apiVersion: v1 29 | kind: PersistentVolume 30 | metadata: 31 | name: redis-pv-02 32 | spec: 33 | capacity: 34 | storage: 1Gi 35 | volumeMode: Filesystem 36 | accessModes: 37 | - ReadWriteOnce 38 | hostPath: 39 | path: /redis02 40 | --- 41 | apiVersion: v1 42 | kind: PersistentVolume 43 | metadata: 44 | name: redis-pv-03 45 | spec: 46 | capacity: 47 | storage: 1Gi 48 | volumeMode: Filesystem 49 | accessModes: 50 | - ReadWriteOnce 51 | hostPath: 52 | path: /redis03 53 | --- 54 | apiVersion: v1 55 | kind: PersistentVolume 56 | metadata: 57 | name: redis-pv-04 58 | spec: 59 | capacity: 60 | storage: 1Gi 61 | volumeMode: Filesystem 62 | accessModes: 63 | - ReadWriteOnce 64 | hostPath: 65 | path: /redis04 66 | --- 67 | apiVersion: v1 68 | kind: PersistentVolume 69 | metadata: 70 | name: redis-pv-05 71 | spec: 72 | capacity: 73 | storage: 1Gi 74 | volumeMode: Filesystem 75 | accessModes: 76 | - ReadWriteOnce 77 | hostPath: 78 | path: /redis05 79 | --- 80 | apiVersion: v1 81 | kind: PersistentVolume 82 | metadata: 83 | name: redis-pv-06 84 | spec: 85 | capacity: 86 | storage: 1Gi 87 | volumeMode: Filesystem 88 | accessModes: 89 | - ReadWriteOnce 90 | hostPath: 91 | path: /redis06 92 | --- 93 | apiVersion: v1 94 | kind: Service 95 | metadata: 96 | name: redis-cluster-service 97 | spec: 98 | type: NodePort 99 | selector: 100 | app: redis-cluster 101 | ports: 102 | - name: client 103 | port: 6379 104 | targetPort: 6379 105 | - name: gossip 106 | port: 16379 107 | targetPort: 16379 108 | --- 109 | apiVersion: apps/v1 110 | kind: StatefulSet 111 | metadata: 112 | name: 
redis-cluster 113 | spec: 114 | replicas: 6 115 | serviceName: redis-cluster 116 | selector: 117 | matchLabels: 118 | app: redis-cluster 119 | template: 120 | metadata: 121 | labels: 122 | app: redis-cluster 123 | spec: 124 | volumes: 125 | - name: conf 126 | configMap: 127 | name: "redis-cluster-configmap" 128 | defaultMode: 0755 129 | containers: 130 | - name: redis 131 | image: redis:5.0.1-alpine 132 | command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"] 133 | env: 134 | - name: POD_IP 135 | valueFrom: 136 | fieldRef: 137 | fieldPath: "status.podIP" 138 | ports: 139 | - containerPort: 6379 140 | name: client 141 | - containerPort: 16379 142 | name: gossip 143 | volumeMounts: 144 | - name: conf 145 | mountPath: "/conf" 146 | readOnly: false 147 | - name: data 148 | mountPath: "/data" 149 | readOnly: false 150 | volumeClaimTemplates: 151 | - metadata: 152 | name: data 153 | spec: 154 | accessModes: ["ReadWriteOnce"] 155 | resources: 156 | requests: 157 | storage: 1Gi 158 | -------------------------------------------------------------------------------- /jenkins/Jenkins-Workspaces.md: -------------------------------------------------------------------------------- 1 | # Jenkins Workspaces 2 | ## Solution 3 | ### Step 1: Install Gitea and Publish over SSH Plugins in Jenkins 4 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 5 | * Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for `Gitea` plugin. 6 | * Select the plugin and click `Download now and install after restart` 7 | * In the following screen, click checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the screen to become standstill. 8 | * You can try to refresh your browser after a few secs. 
9 | * Repeat the above steps and install the `Publish over SSH` plugin as well 10 | 11 | ### Step 2: Setup Credentials for GIT user 12 | * Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials` 13 | * Leave kind as `Username with Password` and Scope as `Global (..)` 14 | * Setup GIT credentials for Sarah: 15 | ``` 16 | Username: sarah 17 | Password: Sarah_pass123 18 | ID: sarah 19 | ``` 20 | 21 | ### Step 3: Configure Publish Over SSH 22 | * Under `Jenkins > Manage Jenkins > Configure System`, under `Publish over SSH > SSH Servers` click `Add` and provide the following values: 23 | ``` 24 | Name: ststor01 25 | Hostname: ststor01 26 | Username: natasha 27 | Remote Directory: /data 28 | (Click Advanced and select 'Use Password authentication...') 29 | Passphrase/Password: Bl@kW 30 | ``` 31 | * Click `Test Configuration` to test that the connection is successful 32 | 33 | ### Step 4: Copy GIT Repo URL 34 | * `Select port to view on Host 1` and connect to port `8000`. Login using the GITEA user and password given in the question e.g. Sarah 35 | * Click the repo `sarah/web` on the right side 36 | * Copy the GIT Clone HTTP URL (next to the clipboard icon). Usually it looks like `http://git.stratos.xfusioncorp.com/sarah/web.git` 37 | 38 | ### Step 5: Verify Permissions for /data directory 39 | * SSH to `ststor01` using `natasha` 40 | * Grant full permissions to the `/data` folder, if not already granted: `sudo chmod 777 /data` 41 | 42 | ### Step 6: Create a Parameterized Build 43 | * `Select port to view on Host 1` and connect to port `8081`.
Login using the Jenkins admin user and password given in the question 44 | * Click `New item` and in the following screen: 45 | ``` 46 | Name: app-job (Keep 'Freestyle Project' as selected) and click Ok 47 | ``` 48 | #### Configure Choice Parameter 49 | * Under `General` click the option `This project is parameterized` 50 | * This will reveal an additional option, `Add Parameter` 51 | * Click `Add Parameter > Choice Parameter` and input the following values as per the question: 52 | ``` 53 | Name: Branch 54 | Choices: 55 | version1 56 | version2 57 | version3 58 | ``` 59 | #### Configure Custom Workspace based on Choice Parameter 60 | * Click `Advanced` and select `Choose custom workspace` 61 | * Give the directory as `$JENKINS_HOME/$Branch` 62 | 63 | #### Configure GIT Branch based on Choice Parameter 64 | * Under `Source Code Management` click the option `Git` 65 | * This will reveal additional options. Configure as follows: 66 | ``` 67 | Repository URL: http://git.stratos.xfusioncorp.com/sarah/web.git 68 | Credentials: sarah/***** 69 | ``` 70 | * Set `Branches to Build` to the value `*/$Branch` 71 | 72 | #### Configure SSH transfer to Storage server 73 | * Under `Build Environment` click `Send files or execute commands over SSH after the build runs`. It reveals additional options to configure `Transfer`: 74 | ``` 75 | Source files: **/* 76 | ``` 77 | * Click `Save` 78 | 79 | ### Verification 80 | * Click the newly created Job and click `Build With Parameters` 81 | * Choose any of the values from the drop-down e.g. `version1` 82 | * You should see a new build getting triggered and completing successfully. 83 | * Check the `Console Output` to see that `SSH: Transferred 1 file(s)` 84 | * Access the LB URL: `Select port to view on Host 1` and connect to port `80` 85 | * You should see `This is app version 1` displayed in the browser 86 | * Now repeat the builds with the Branch parameter chosen as `version2` and `version3`.
You should see `This is app version 2` and `This is app version 3` displayed in the browser, respectively. 87 | 88 | --- 89 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) -------------------------------------------------------------------------------- /jenkins/Install-packages-using-Jenkins-Job.md: -------------------------------------------------------------------------------- 1 | # Install Packages Using a Jenkins Job 2 | ## Solution 3 | ### Step 1: Enable password-less sudo in all appservers 4 | * SSH to each of the appservers and run `sudo visudo` 5 | * In the resulting file, at the end of the file add password-less sudo for the respective sudo user: 6 | stapp01: 7 | ``` 8 | tony ALL=(ALL) NOPASSWD: ALL 9 | ``` 10 | stapp02: 11 | ``` 12 | steve ALL=(ALL) NOPASSWD: ALL 13 | ``` 14 | stapp03: 15 | ``` 16 | banner ALL=(ALL) NOPASSWD: ALL 17 | ``` 18 | ### Step 2: Install SSH Plugin in Jenkins 19 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 20 | * Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for the `SSH` plugin. 21 | * Select the plugin and click `Download now and install after restart` 22 | * In the following screen, click checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the screen to become standstill 23 | * You can try to refresh your browser after a few secs. 24 | 25 | ### Step 3: Add sudo users and their SSH credentials in Jenkins:
26 | * Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials` 27 | * Leave kind as `Username with Password` and Scope as `Global (..)` 28 | * Setup credentials for tony@stapp01: 29 | ``` 30 | Username: tony 31 | Password: Ir0nM@n 32 | ID: stapp01 33 | ``` 34 | * Repeat the steps above to add steve and banner for stapp02 and stapp03 respectively 35 | 36 | ### Step 4: Add SSH Hosts in Jenkins 37 | * Click `Jenkins > Manage Jenkins > Configure System` 38 | * Under `SSH Remote Hosts` click `Add Host` and provide the following values: 39 | ``` 40 | Hostname: stapp01 41 | Port: 22 42 | Credentials: Choose 'tony' from the list 43 | Pty: Select checkbox 44 | ``` 45 | * Click `Check Connection` to make sure the connection is successful 46 | * Repeat the steps to add the stapp02 and stapp03 hosts. 47 | 48 | ### Step 5: Create the Build Job 49 | * Click `New item` and in the following screen: 50 | ``` 51 | Name: httpd-php (Keep 'Freestyle Project' as selected) and click Ok 52 | ``` 53 | * Under `Build` add a `Build Step` with `Execute shell script on remote host using SSH` 54 | and under `SSH Site` select `tony@stapp01:22` 55 | * In the command text area, provide the following. Make sure to update the HTTP port (the line that has `sed`) and the PHP version (e.g. to install PHP 7.4, change the line `sudo yum-config-manager --enable remi-php70` to `sudo yum-config-manager --enable remi-php74`) as per the question.
56 | ``` 57 | sudo yum -y install epel-release 58 | sudo yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm 59 | sudo yum install -y yum-utils 60 | sudo yum-config-manager --enable remi-php70 61 | sudo yum install -y php php-common php-opcache php-mcrypt php-cli php-gd php-curl php-mysqlnd httpd 62 | sudo sed -i 's/^Listen 80$/Listen 8082/g' /etc/httpd/conf/httpd.conf 63 | sudo systemctl restart httpd 64 | sudo systemctl status httpd 65 | ``` 66 | * Repeat the above steps to add host and commands `steve@stapp02:22` and `banner@stapp03:22` as well 67 | * Finally click `Save` 68 | 69 | ### Step 6:Run the build 70 | * Click the newly created job, `httpd-php` on the home page and in the following screen click `Build Now` 71 | * You should see a new build starting up on the left lower screen 72 | * Click that build and click `Console Output` and search the log for any errors 73 | * Wait until the build is completed successfully and run the 'Verification' steps below 74 | * Click the 'Build Now' again and monitor console output for any errors. Task expects that you should be able to click 'Build' any number of times and there are no errors. 75 | * Perform verification steps below again 76 | 77 | ## Verification 78 | * On Jump host, run 'curl' against each host: `curl http://stapp01:5003/index.php` 79 | * Should return valid HTML content that returns lots of information about the PHP server 80 | * Repeat 'curl' test for other hosts as well 81 | * `Select port to view on Host 1` and connect to port `80` 82 | * On the resulting browser page add `/index.php` in the address bar 83 | * You should see PHP info page. 
Make sure the PHP version shown is the same as in the question 84 | * With each reload of the page, the `System` value should alternate between stapp01, stapp02 and stapp03 85 | 86 | --- 87 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) 88 | -------------------------------------------------------------------------------- /jenkins/Create-Scheduled-Builds.md: -------------------------------------------------------------------------------- 1 | # Create Scheduled Builds in Jenkins 2 | ## Introduction 3 | The task expects you to copy log files from one of the appservers to a location on the storage server on a particular schedule, e.g. every 11 minutes. 4 | 5 | To accomplish this, you need to set up password-less sudo and password-less SSH (using SSH keys) first. Then you need to set up a Scheduled Build Job that copies files from the appserver to the `/tmp` folder on the storage server, and then moves the files from the `/tmp` folder to the actual location. 6 | 7 | For setting up the Scheduled Build job, you need to write a crontab expression. You can use [Crontab Guru](https://crontab.guru/) to build the expression. 8 | 9 | ## Solution 10 | ### Step 1: Enable password-less sudo and password-less SSH 11 | * SSH to the specific appserver mentioned in the question and run `sudo visudo` 12 | * At the end of the resulting file, add password-less sudo for the respective sudo user. For example, on stapp03 you would add: 13 | ``` 14 | banner ALL=(ALL) NOPASSWD: ALL 15 | ``` 16 | * Switch to root to enable password-less SCP to the storage server: 17 | ``` 18 | sudo su 19 | ssh-keygen -t rsa (leave all options at their defaults) 20 | ssh-copy-id natasha@ststor01 21 | ``` 22 | * Now test password-less SSH by connecting to the storage server: `ssh natasha@ststor01`. You should not be asked for a password.
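A quick scriptable version of the password-less check above: `ssh -o BatchMode=yes` exits non-zero instead of prompting when key-based authentication is not in place, so it tells you whether the scheduled job will later run unattended. This is a sketch; the `check_key_auth` helper name is ours, and the host/user are the ones from this task:

```shell
#!/bin/sh
# check_key_auth HOST: exit 0 only if key-based (password-less) SSH to HOST works.
# BatchMode=yes forbids password prompts, so missing keys cause a clean failure
# instead of an interactive hang; ConnectTimeout keeps the check short.
check_key_auth() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true
}

# Example for this task (run as root on the appserver, after ssh-copy-id):
#   check_key_auth natasha@ststor01 && echo "key-based auth OK"
```

The example call is left commented out so the snippet has no side effects when sourced; run `check_key_auth natasha@ststor01` manually after the `ssh-copy-id` step.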
23 | * Once you are in the storage server, run `sudo visudo` again and provide natasha's password 24 | * At the end of the resulting file, add password-less sudo for natasha: 25 | ``` 26 | natasha ALL=(ALL) NOPASSWD: ALL 27 | ``` 28 | 29 | ### Step 2: Install SSH Plugins in Jenkins 30 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 31 | * Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for the `SSH` plugin. 32 | * Select the `SSH` plugin and click `Download now and install after restart` 33 | * In the following screen, click the checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the restart screen to settle. 34 | * You can try to refresh your browser after a few seconds. 35 | 36 | ### Step 3: Setup Credentials for SSH users 37 | * Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials` 38 | * Leave kind as `Username with Password` and Scope as `Global (..)` 39 | * Add SSH credentials for the sudo users of the respective servers (banner and natasha): 40 | ``` 41 | Username: banner 42 | Password: BigGr33n 43 | ID: banner 44 | ``` 45 | ### Step 4: Add SSH Hosts in Jenkins 46 | * Click `Jenkins > Manage Jenkins > Configure System` 47 | * Under `SSH Remote Hosts` click `Add Host` and add the required appserver as follows: 48 | ``` 49 | Hostname: stapp03 50 | Port: 22 51 | Credentials: Choose 'banner' from the list 52 | Pty: Select checkbox 53 | ``` 54 | * Click `Check Connection` to make sure the connection is successful 55 | * Repeat the steps to add the `ststor01` host 56 | 57 | ### Step 5: Create a Scheduled Build Job 58 | * Go back to the Jenkins Console 59 | * Click `New item` and in the following screen: 60 | ``` 61 | Name: copy-logs (Keep 'Freestyle Project' as selected) and click Ok 62 | ``` 63 | * Under `Build Triggers`, select `Build Periodically` and
provide the `Schedule` as per the question. In this example, the job is configured to run every 11 minutes: 64 | ``` 65 | */11 * * * * 66 | ``` 67 | * Now, under `Build`, add a `Build Step` with `Execute shell script on remote host using SSH` 68 | and under `SSH Site` select `banner@stapp03:22` 69 | * In the command text area, provide the following: 70 | ``` 71 | sudo scp -o StrictHostKeyChecking=no -r /etc/httpd/logs/ natasha@ststor01:/tmp 72 | ``` 73 | * Add another `Build Step` with `Execute shell script on remote host using SSH` 74 | and under `SSH Site` select `natasha@ststor01:22` 75 | * In the command text area, provide the following: 76 | ``` 77 | sudo mv /tmp/logs/access_log /usr/src/sysops 78 | sudo mv /tmp/logs/error_log /usr/src/sysops 79 | ``` 80 | * Click `Save` 81 | 82 | ## Verification 83 | * Click the `copy-logs` job again and run `Build Now` 84 | * Check the `Console Output` to confirm that the files were transferred and moved successfully 85 | * SSH to `ststor01` and check under `/usr/src/sysops` (or the directory as per the question) and make sure the `access_log` and `error_log` files were copied 86 | 87 | --- 88 | For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) -------------------------------------------------------------------------------- /kubernetes/kke-voting-app.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: Create the new namespace: kubectl create ns vote 3 | # Step 2: kubectl create -f <this file> 4 | # Step 3: Wait for the pods to be in 'Running' state 5 | # Step 4: Verify: Open the Voting app in a browser by clicking 'Open Port on Host 1' 6 | # and port as the vote-service NodePort below. You should see the voting page. 7 | # Step 5: Verify: In another browser, also open the Result app by clicking 8 | # 'Open Port on Host 1' and port as the result-service NodePort below.
You should 9 | # see the result page 10 | # Step 6: Verify: In the voting page, vote for any of the entries. You should immediately 11 | # see the result page updated with the result. This means that the worker-pod is 12 | # working correctly 13 | # 14 | # For tips on getting better at Kubernetes tasks, check out the README.md 15 | # in this folder 16 | # 17 | apiVersion: v1 18 | kind: Service 19 | metadata: 20 | name: vote-service 21 | namespace: vote 22 | spec: 23 | type: NodePort 24 | selector: 25 | app: vote-pod 26 | ports: 27 | - port: 5000 28 | targetPort: 80 29 | nodePort: 31000 30 | --- 31 | apiVersion: v1 32 | kind: Service 33 | metadata: 34 | name: result-service 35 | namespace: vote 36 | spec: 37 | type: NodePort 38 | selector: 39 | app: result-pod 40 | ports: 41 | - port: 5001 42 | targetPort: 80 43 | nodePort: 31001 44 | --- 45 | apiVersion: v1 46 | kind: Service 47 | metadata: 48 | name: redis 49 | namespace: vote 50 | spec: 51 | type: ClusterIP 52 | selector: 53 | app: redis-pod 54 | ports: 55 | - port: 6379 56 | targetPort: 6379 57 | --- 58 | apiVersion: v1 59 | kind: Service 60 | metadata: 61 | name: db 62 | namespace: vote 63 | spec: 64 | type: ClusterIP 65 | selector: 66 | app: postgres-pod 67 | ports: 68 | - port: 5432 69 | targetPort: 5432 70 | --- 71 | apiVersion: apps/v1 72 | kind: Deployment 73 | metadata: 74 | name: redis-deployment 75 | namespace: vote 76 | labels: 77 | app: redis-deployment 78 | spec: 79 | replicas: 1 80 | selector: 81 | matchLabels: 82 | app: redis-pod 83 | template: 84 | metadata: 85 | labels: 86 | app: redis-pod 87 | spec: 88 | volumes: 89 | - name: redis-data 90 | emptyDir: {} 91 | containers: 92 | - name: redis 93 | image: redis:alpine 94 | volumeMounts: 95 | - mountPath: /data 96 | name: redis-data 97 | --- 98 | apiVersion: apps/v1 99 | kind: Deployment 100 | metadata: 101 | name: db-deployment 102 | namespace: vote 103 | labels: 104 | app: db-deployment 105 | spec: 106 | replicas: 1 107 | selector: 108 | 
matchLabels: 109 | app: postgres-pod 110 | template: 111 | metadata: 112 | labels: 113 | app: postgres-pod 114 | spec: 115 | volumes: 116 | - name: db-data 117 | emptyDir: {} 118 | containers: 119 | - name: postgres 120 | image: postgres:9.4 121 | env: 122 | - name: POSTGRES_USER 123 | value: postgres 124 | - name: POSTGRES_PASSWORD 125 | value: postgres 126 | - name: POSTGRES_HOST_AUTH_METHOD 127 | value: trust 128 | volumeMounts: 129 | - mountPath: /var/lib/postgresql/data 130 | name: db-data 131 | --- 132 | apiVersion: apps/v1 133 | kind: Deployment 134 | metadata: 135 | name: vote-deployment 136 | namespace: vote 137 | labels: 138 | app: vote-deployment 139 | spec: 140 | replicas: 1 141 | selector: 142 | matchLabels: 143 | app: vote-pod 144 | template: 145 | metadata: 146 | labels: 147 | app: vote-pod 148 | spec: 149 | containers: 150 | - name: voting-app 151 | image: kodekloud/examplevotingapp_vote:before 152 | --- 153 | apiVersion: apps/v1 154 | kind: Deployment 155 | metadata: 156 | name: result-deployment 157 | namespace: vote 158 | labels: 159 | app: result-deployment 160 | spec: 161 | replicas: 1 162 | selector: 163 | matchLabels: 164 | app: result-pod 165 | template: 166 | metadata: 167 | labels: 168 | app: result-pod 169 | spec: 170 | containers: 171 | - name: result-app 172 | image: kodekloud/examplevotingapp_result:before 173 | --- 174 | apiVersion: apps/v1 175 | kind: Deployment 176 | metadata: 177 | name: worker 178 | namespace: vote 179 | labels: 180 | app: worker 181 | spec: 182 | replicas: 1 183 | selector: 184 | matchLabels: 185 | app: worker-pod 186 | template: 187 | metadata: 188 | labels: 189 | app: worker-pod 190 | spec: 191 | containers: 192 | - name: worker 193 | image: kodekloud/examplevotingapp_worker 194 | -------------------------------------------------------------------------------- /kubernetes/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Tasks 2 | ## General Kubernetes 
Tips 3 | * Always create your Kubernetes YAML configurations offline as you would do in real life. I use 'Visual Studio Code' to create my YAML file as per the question and then apply it in the KKE environment. 4 | * Navigate to the `/tmp` directory and create your Kubernetes configuration file. Then you can simply run `kubectl create -f yourfile.yaml`. You don't need sudo privileges. 5 | * Another important tip is to make use of the free [Katacoda Kubernetes Playground](https://www.katacoda.com/courses/kubernetes/playground) to test your changes. 6 | As this is a Katacoda environment, it works in the same way as KodeKloud Engineer. The good thing is that 7 | there is no time limit. Also, you can easily close the browser and open another environment quickly. So, the 8 | advice here is to: 9 | * Open the task in KodeKloud Engineer, note down the question and press `Try Later` 10 | * Then take your time to create the required Kubernetes configurations offline in your favorite IDE 11 | * Open the Katacoda Playground, apply and test your changes until you are satisfied 12 | * Reopen the question in KodeKloud Engineer, apply your changes and verify. Bam! You finished your task in time for bonus points. 13 | * Verify the successful completion of the tasks using one of the following steps: 14 | * Use the browser by clicking the `Select port to view on Host 1` tab, especially for tasks that ask you to configure a NodePort. 15 | * Click the `Open Port on Host 1` tab, specify the NodePort and click `Connect` 16 | * Check that the URL loads. 17 | * Exec command - Especially useful to verify tasks that involve running a server listening on a port e.g.
Nginx, Wordpress, Nagios (or) to verify volume mounts 18 | * `kubectl exec <pod> --namespace <namespace> -- <command>` 19 | * Examples: 20 | * `kubectl exec nginx-nautilus -- curl http://localhost:8080/` 21 | * `kubectl exec nginx-nautilus --namespace xfusion -- ls /opt/data` 22 | * Shell - You can also get a shell to a Kubernetes Pod using the `-it` (interactive) option and providing a shell. This is useful when you need to run multiple verification commands: 23 | * `kubectl exec -it nginx-nautilus -- /bin/bash` 24 | * You can print all environment variables by running the `printenv` command in the pod shell 25 | * In case the Pod has multiple containers, you need to specify the container name using the `--container` option. For example: `kubectl exec -it nginx-nautilus --container nginx-container-1 -- /bin/bash` 26 | * Logs - Useful for tasks that require you to print an output e.g. echo: 27 | * `kubectl logs <pod>`. For example, `kubectl logs my-pod` 28 | 29 | ## Common mistakes 30 | * Not reading the question properly. Especially when you redo the same question, all the names and port values will have changed in the new question. So pay attention to that. 31 | * Not waiting until the Pods are in `Running` state. Check that the pods are `Running` before you press that `Finish` button. 32 | * Not paying attention to namespaces. Make sure all the required resources are in the correct namespace. 33 | * Selector labels are often misunderstood, especially in tasks that require defining a service and a deployment. Note that when a service and a deployment are involved, there are a total of 5 places where labels are used. 34 | Below is a simplified example where a label is shown in each of the 5 places. Now: 35 | * 'LABEL A' and 'LABEL C' are resource-level labels. They are just used for identification purposes. They can be set to any values or as per the question. 36 | * Service selector 'LABEL B' is important and should match pod label 'LABEL E'.
Selector labels are like 'search criteria' and are used to identify the pods that need to be exposed. 37 | * Deployment selector 'LABEL D' should also match pod label 'LABEL E'. This tells Kubernetes which Pods the Deployment should manage. 38 | * Out of confusion, people often configure all the labels with the same values. This does work, since (LABEL B == LABEL E) and (LABEL D == LABEL E) are then satisfied. Also note that a Service lists its selector labels directly, while a Deployment nests them under `matchLabels`: 39 | ```yaml 40 | apiVersion: .... 41 | kind: Service 42 | metadata: 43 | labels: 44 | [LABEL A] 45 | spec: 46 | selector: 47 | [LABEL B] 48 | --- 49 | apiVersion: .... 50 | kind: Deployment 51 | metadata: 52 | labels: 53 | [LABEL C] 54 | spec: 55 | selector: 56 | matchLabels: 57 | [LABEL D] 58 | template: 59 | metadata: 60 | labels: 61 | [LABEL E] 62 | ``` 63 | In case of a Service and a Pod, there are only 3 labels (see below). Labels B and C should match: 64 | ```yaml 65 | apiVersion: .... 66 | kind: Service 67 | metadata: 68 | labels: 69 | [LABEL A] 70 | spec: 71 | selector: 72 | [LABEL B] 73 | --- 74 | apiVersion: .... 75 | kind: Pod 76 | metadata: 77 | labels: 78 | [LABEL C] 79 | ``` 80 | --- 81 | For general tips on getting better at KodeKloud Engineer tasks, [click here](../README.md) 82 | 83 | -------------------------------------------------------------------------------- /kubernetes/kke-haproxy.yaml: -------------------------------------------------------------------------------- 1 | # 2 | # Step 1: kubectl create ns haproxy-controller-xfusion 3 | # Step 2: kubectl create -f <this file> 4 | # Step 3: Make sure all the pods are in running state 5 | # Step 4: Verify: Open the Haproxy app in a browser by clicking 'Open Port on Host 1' 6 | # and port as each of the NodePorts below (http, https and stat).
7 | # You should see the page loaded successfully 8 | # 9 | # For tips on getting better at Kubernetes tasks, check out the README.md 10 | # in this folder 11 | # 12 | --- 13 | # 14 | # Service Account definition 15 | # 16 | apiVersion: v1 17 | kind: ServiceAccount 18 | metadata: 19 | name: haproxy-service-account-xfusion 20 | namespace: haproxy-controller-xfusion 21 | --- 22 | # 23 | # ClusterRole definition 24 | # 25 | apiVersion: rbac.authorization.k8s.io/v1 26 | kind: ClusterRole 27 | metadata: 28 | name: haproxy-cluster-role-xfusion 29 | rules: 30 | - apiGroups: [""] 31 | resources: 32 | [ 33 | "configmaps", 34 | "endpoints", 35 | "nodes", 36 | "pods", 37 | "services", 38 | "namespaces", 39 | "events", 40 | "serviceaccounts", 41 | ] 42 | verbs: ["get", "list", "watch"] 43 | - apiGroups: ["extensions"] 44 | resources: ["ingresses", "ingresses/status"] 45 | verbs: ["get", "list", "watch", "update"] 46 | - apiGroups: [""] 47 | resources: ["secrets"] 48 | verbs: ["get", "list", "watch", "create", "patch", "update"] 49 | --- 50 | # 51 | # ClusterRoleBinding definition 52 | # 53 | apiVersion: rbac.authorization.k8s.io/v1 54 | kind: ClusterRoleBinding 55 | metadata: 56 | name: haproxy-cluster-role-binding-xfusion 57 | namespace: haproxy-controller-xfusion 58 | roleRef: 59 | kind: ClusterRole 60 | name: haproxy-cluster-role-xfusion 61 | apiGroup: rbac.authorization.k8s.io 62 | subjects: 63 | - kind: ServiceAccount 64 | name: haproxy-service-account-xfusion 65 | namespace: haproxy-controller-xfusion 66 | --- 67 | # 68 | # Backend Service definition 69 | # 70 | apiVersion: v1 71 | kind: Service 72 | metadata: 73 | name: service-backend-xfusion 74 | namespace: haproxy-controller-xfusion 75 | labels: 76 | run: ingress-default-backend 77 | spec: 78 | selector: 79 | run: ingress-default-backend 80 | ports: 81 | - name: port-backend 82 | protocol: TCP 83 | port: 8080 84 | targetPort: 8080 85 | --- 86 | # 87 | # Frontend Service definition 88 | # 89 | apiVersion: v1 90 | 
kind: Service 91 | metadata: 92 | name: ingress-service-xfusion 93 | namespace: haproxy-controller-xfusion 94 | labels: 95 | run: haproxy-ingress 96 | spec: 97 | type: NodePort 98 | selector: 99 | run: haproxy-ingress 100 | ports: 101 | - name: http 102 | port: 80 103 | protocol: TCP 104 | targetPort: 80 105 | nodePort: 32456 106 | - name: https 107 | port: 443 108 | protocol: TCP 109 | targetPort: 443 110 | nodePort: 32567 111 | - name: stat 112 | port: 1024 113 | protocol: TCP 114 | targetPort: 1024 115 | nodePort: 32678 116 | --- 117 | # 118 | # Backend Deployment definition 119 | # 120 | apiVersion: apps/v1 121 | kind: Deployment 122 | metadata: 123 | name: backend-deployment-xfusion 124 | namespace: haproxy-controller-xfusion 125 | labels: 126 | run: ingress-default-backend 127 | spec: 128 | replicas: 1 129 | selector: 130 | matchLabels: 131 | run: ingress-default-backend 132 | template: 133 | metadata: 134 | labels: 135 | run: ingress-default-backend 136 | spec: 137 | containers: 138 | - name: backend-container-xfusion 139 | image: gcr.io/google_containers/defaultbackend:1.0 140 | ports: 141 | - containerPort: 8080 142 | --- 143 | # 144 | # Frontend Deployment definition 145 | # 146 | apiVersion: apps/v1 147 | kind: Deployment 148 | metadata: 149 | name: haproxy-ingress-xfusion 150 | namespace: haproxy-controller-xfusion 151 | labels: 152 | run: ingress-default-backend 153 | spec: 154 | replicas: 1 155 | selector: 156 | matchLabels: 157 | run: haproxy-ingress 158 | template: 159 | metadata: 160 | labels: 161 | run: haproxy-ingress 162 | spec: 163 | serviceAccountName: haproxy-service-account-xfusion 164 | containers: 165 | - name: ingress-container-xfusion 166 | image: haproxytech/kubernetes-ingress 167 | args: 168 | - "--default-backend-service=haproxy-controller-xfusion/service-backend-xfusion" 169 | ports: 170 | - name: http 171 | containerPort: 80 172 | - name: https 173 | containerPort: 443 174 | - name: stat 175 | containerPort: 1024 176 | resources: 
177 | requests: 178 | memory: "50Mi" 179 | cpu: "500m" 180 | livenessProbe: 181 | httpGet: 182 | path: /healthz 183 | port: 1024 184 | env: 185 | - name: TZ 186 | value: Etc/UTC 187 | - name: POD_NAME 188 | valueFrom: 189 | fieldRef: 190 | fieldPath: metadata.name 191 | - name: POD_NAMESPACE 192 | valueFrom: 193 | fieldRef: 194 | fieldPath: metadata.namespace 195 | -------------------------------------------------------------------------------- /linux/Install-and-configure-PHPFPM.md: -------------------------------------------------------------------------------- 1 | # Configure Nginx + PHP-FPM Using Unix Sock 2 | ## Solution 3 | ### Step 1 - Install and configure PHP FPM 4 | * Login to each of the appservers and perform the following tasks 5 | * Switch to root user: `sudo su` 6 | * Install PHP-FPM: `yum install -y php php-mysql php-fpm` 7 | * Configure PHP-FPM to use unix socket instead of tcp socket: `vi /etc/php-fpm.d/www.conf` [Comment/uncomment lines and update values accordingly. Note that semicolon character (`;`) is used to comment a line] 8 | ``` 9 | ;listen = 127.0.0.1:9000 10 | listen = /var/run/php-fpm/default.sock 11 | 12 | listen.allowed_clients = 127.0.0.1 13 | 14 | listen.owner = apache 15 | listen.group = apache 16 | listen.mode = 0660 17 | 18 | user = apache 19 | group = apache 20 | ``` 21 | * Enable MPM Event module and comment MPM Prefork module: `vi /etc/httpd/conf.modules.d/00-mpm.conf` 22 | ``` 23 | # LoadModule mpm_prefork_module modules/mod_mpm_prefork.so 24 | LoadModule mpm_event_module modules/mod_mpm_event.so 25 | ``` 26 | * Edit `php.conf` to define and use PHP-FPM proxy handler: `vi /etc/httpd/conf.d/php.conf`. 
Note: 27 | * Insert the `<Proxy>` directive at the top of the file 28 | * Replace the existing handler in the `FilesMatch` directive to use the proxy 29 | * Comment the lines that start with `php_value session` as below 30 | ```conf 31 | # PHP-FPM Proxy declaration 32 | <Proxy "unix:/var/run/php-fpm/default.sock|fcgi://php-fpm"> 33 | # Force registering proxy ahead of time - AJ 34 | ProxySet disablereuse=off 35 | </Proxy> 36 | 37 | # Change default PHP handler to use PHP-FPM proxy instead 38 | <FilesMatch \.php$> 39 | #SetHandler application/x-httpd-php 40 | SetHandler proxy:fcgi://php-fpm 41 | </FilesMatch> 42 | 43 | # 44 | # Make sure to comment the below lines 45 | # 46 | #php_value session.save_handler "files" 47 | #php_value session.save_path "/var/lib/php/session" 48 | ``` 49 | * Start php-fpm and restart Apache: 50 | ``` 51 | systemctl start php-fpm 52 | systemctl restart httpd 53 | ``` 54 | 55 | #### Verify PHP-FPM setup 56 | * Create a sample PHP file `/var/www/html/index.php` to test your changes, with the following content: 57 | ```php 58 | <?php 59 | phpinfo(); 60 | ?> 61 | ``` 62 | * Run `curl http://localhost:5002/` (5002 is the Apache port). You should get back the HTML content of the PHP Info page. 63 | 64 | ### Step 2 - Install and configure MariaDB 65 | * SSH to the database host (stdb01) 66 | * Install and enable the MariaDB server 67 | ```UNIX 68 | sudo yum install mariadb mariadb-server -y 69 | sudo systemctl start mariadb 70 | sudo systemctl enable mariadb 71 | sudo systemctl status mariadb 72 | ``` 73 | * Set up MariaDB using the built-in secure installation script (the default root password is blank, so just press ENTER) 74 | ```UNIX 75 | sudo mysql_secure_installation 76 | ``` 77 | * When prompted, provide the following values 78 | ``` 79 | Set root password? [Y/n] n 80 | Remove anonymous users? [Y/n] Y 81 | Disallow root login remotely? [Y/n] Y 82 | Remove test database and access to it? [Y/n] Y 83 | Reload privilege tables now? [Y/n] Y 84 | ``` 85 | * Login to the database using the root user (default password is blank.
So just press ENTER) 86 | ```UNIX 87 | mysql -u root -p 88 | ``` 89 | Once you have logged in, then run the below SQL commands to create database, create user, set the password and grant privileges (In this example `kodekloud` is the password set for the user). Make sure to change the below values as per the question: 90 | ```SQL 91 | MariaDB [(none)]>CREATE DATABASE kodekloud_db5; 92 | MariaDB [(none)]>GRANT ALL PRIVILEGES on kodekloud_db5.* to 'kodekloud_roy'@'%' identified by 'kodekloud'; 93 | MariaDB [(none)]>FLUSH PRIVILEGES; 94 | ``` 95 | Note: It's important to grant privileges to the user on all hosts as the user will connect from Wordpress as `kodekloud_roy@stdb01`. 96 | * Now load the database script specified in the question as below 97 | ```SQL 98 | MariaDB [(none)]>SOURCE /tmp/db.sql; 99 | ``` 100 | #### Verify MariaDB setup 101 | * Use `mysqlshow` to verify that the account you created works as expected, especially with host as stdb01. You should see all the WordPress tables listed i.e. wp... 102 | ```UNIX 103 | mysqlshow -u kodekloud_roy -h stdb01 kodekloud_db5 104 | ``` 105 | In case the above doesn't work, try as `mysqlshow -u kodekloud_roy -h stdb01 kodekloud_db5 -p`. Give password as `kodekloud` when prompted. 106 | 107 | ### Step 3 - Download and install WordPress 108 | * SSH back to each of the appservers and perform the following tasks 109 | * Switch to root user: `sudo su` 110 | * Change to Apache Document root directory: `cd /var/www/html` 111 | * Download the latest Wordpress: `wget https://wordpress.org/wordpress-5.1.1.tar.gz` [Note: As of 15-Apr-2021, https://wordpress.org/latest.tar.gz is pointing to WordPress 5.2, which doesn't work with current version of PHP. 
In the future, you can instead use `wget https://wordpress.org/latest.tar.gz`] 112 | * Extract the WordPress installation: `tar xvf wordpress-5.1.1.tar.gz` (or `latest.tar.gz` if you downloaded that instead) 113 | * Change to the WordPress directory and make a copy of `wp-config-sample.php` as `wp-config.php` 114 | ```UNIX 115 | cd wordpress 116 | cp wp-config-sample.php wp-config.php 117 | ``` 118 | * Edit `wp-config.php` and set the DB details 119 | ``` 120 | define('DB_NAME', 'kodekloud_db5'); 121 | define('DB_USER', 'kodekloud_roy'); 122 | define('DB_PASSWORD', 'kodekloud'); 123 | define('DB_HOST', 'stdb01'); 124 | ``` 125 | * Change to the parent directory and change the ownership of the `wordpress` directory to apache: `chown -R apache:apache wordpress` 126 | 127 | #### Verify WordPress setup 128 | * Use curl to check that valid HTML is returned: `curl http://localhost:5002/wordpress/` 129 | 130 | ## Verification 131 | * Click the tab `Select port to view on Host 1`, and after adding port 80 click on `Display Port`. This should connect to the LB URL. 132 | * You should see a sample WordPress blog site loaded 133 | 134 | --- 135 | For tips on getting better at KodeKloud Engineer Linux Administration tasks, [click here](./README.md) -------------------------------------------------------------------------------- /jenkins/Single-Stage-Pipeline.md: -------------------------------------------------------------------------------- 1 | # Jenkins Single Stage Pipeline 2 | ## Introduction 3 | This task involves setting up a single-stage build pipeline. There are 2 key steps: 4 | * Setup a Slave Node on ststor01. The easiest way is to install Java on ststor01 and then use the [SSH Build Agents](https://plugins.jenkins.io/ssh-slaves/) plugin to automatically set up the Agent from the Jenkins UI via SSH. 5 | * Create a Build Pipeline that pulls code from a GIT repo and pushes it to a directory on the storage server.
Here, you can use the [SSH Pipeline Steps](https://www.jenkins.io/doc/pipeline/steps/ssh-steps/) and [GIT](https://www.jenkins.io/doc/pipeline/steps/git/) plugins to implement the pipeline steps. 6 | 7 | **Tip:** You can use the [Pipeline Snippet Generator](https://www.jenkins.io/doc/book/pipeline/getting-started/#snippet-generator) to generate the pipeline code by selecting GIT or SSH in the dropdown and providing the required values. The [Pipeline Snippet Generator](https://www.jenkins.io/doc/book/pipeline/getting-started/#snippet-generator) can be accessed by clicking the link `Pipeline Syntax` on your Pipeline Job page in the `Pipeline` section. 8 | 9 | ## Solution 10 | ### Step 1: Install the required plugins in Jenkins 11 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 12 | * Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for the `GIT` plugin. 13 | * Select the `GIT` plugin and click `Download now and install after restart` 14 | * In the following screen, click the checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the restart screen to settle. 15 | * You can try to refresh your browser after a few seconds. 16 | * Repeat the above steps and install the `Pipeline`, `SSH Build Agents` and `SSH Pipeline Steps` plugins as well 17 | * Note that you can search for multiple plugins, select them and finally click `Download now and install after restart`. When you select one and search for the next one, the previous one disappears. But it is still selected behind the scenes and gets installed along with the others when you click `Download now and install after restart` 18 | 19 | ### Step 2: Add a new Slave Node 20 | * SSH to the storage server i.e.
ststor01 using `natasha` and install Java: `sudo yum install -y java` 21 | * In the Jenkins Admin Console, add a new slave node under `Jenkins > Manage Jenkins > Manage Nodes and Clouds > New Node` 22 | * Provide the following values 23 | ``` 24 | Node Name: Storage Server 25 | (Permanent Agent) 26 | Remote root directory: /home/natasha 27 | Labels: ststor01 28 | Launch Method: Launch Agents via SSH 29 | ``` 30 | * Additional options will be revealed. Configure as follows: 31 | ``` 32 | Host: ststor01 33 | Credentials: natasha (Add on-the-fly by clicking 'Add Button > Jenkins' and configuring natasha's credentials) 34 | Host Key Verification Strategy: Non verifying Verification Strategy 35 | ``` 36 | * Click `Save` 37 | * Wait a few seconds for the agent to be configured 38 | * Refresh the nodes list. You should see the newly added `Storage Server` displayed with all system statistics. This means the node setup was successful. 39 | 40 | ### Step 3: Setup Credentials for GIT user 41 | * Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials` 42 | * Leave kind as `Username with Password` and Scope as `Global (..)` 43 | * Setup GIT credentials for Sarah: 44 | ``` 45 | Username: sarah 46 | Password: Sarah_pass123 47 | ID: GIT_CREDS 48 | ``` 49 | 50 | ### Step 4: Create a Pipeline Job 51 | * Click `New item` and in the following screen create a job as per the question: 52 | ``` 53 | Name: datacenter-webapp-job (Select 'Pipeline' - Don't select 'Multibranch pipeline') and click Ok 54 | ``` 55 | * Under `Pipeline`, make sure the definition is `Pipeline script` 56 | * In the `Script` text area provide the following: 57 | ```groovy 58 | // 59 | // Set up the Remote connection for the SSH Build Pipeline step 60 | // 61 | def remote = [:] 62 | remote.name = 'ststor01' 63 | remote.host = 'ststor01' 64 | remote.user = 'natasha' 65 | remote.password = 'Bl@kW' 66 | remote.allowAnyHosts = true 67 | 68 | pipeline { 69 | // Run on
agent with label 'ststor01' 70 | agent { label 'ststor01' } 71 | 72 | // Pipeline stages 73 | stages { 74 | // Deploy stage 75 | stage('Deploy') { 76 | steps { 77 | // Connect to GIT and download the repo code 78 | // Use the Jenkins Credentials by ID: GIT_CREDS 79 | git credentialsId: 'GIT_CREDS', url: 'http://git.stratos.xfusioncorp.com/sarah/web_app.git' 80 | // Transfer all the files we downloaded to /tmp of ststor01 81 | sshPut remote: remote, from: '.', into: '/tmp' 82 | // Finally move all the files from /tmp to /data on ststor01 83 | sshCommand remote: remote, command: "mv -f /tmp/${JOB_NAME}/* /data" 84 | } 85 | } 86 | } 87 | } 88 | ``` 89 | TODO: Improve the above code by not hardcoding the remote password 90 | * Click `Save` 91 | * Click the newly created Pipeline Job and click `Build Now` 92 | * You should see a new build getting triggered and completing successfully 93 | * Check the `Console Output` to confirm that the files were pulled from the GIT repo and transferred to `ststor01` successfully 94 | * Go back to the terminal and check that you see an `index.html` under the `/data` folder of the storage server 95 | 96 | ### Step 5: Commit changes to index.html 97 | * SSH to ststor01 using SSH user `sarah` 98 | * Change to `/home/sarah/web_app` 99 | * Edit `index.html` as per the question 100 | ```html 101 | Welcome to the Nautilus Industries 102 | ``` 103 | * Commit the file using the following steps: 104 | ``` 105 | git add index.html 106 | git commit -m "updated" 107 | git push origin master 108 | ``` 109 | ### Verification 110 | * Click the newly created Pipeline Job again and click `Build Now` 111 | * You should see a new build getting triggered and completing successfully 112 | * Check the `Console Output` to confirm that the files were pulled from the GIT repo and transferred to `ststor01` successfully 113 | * Access the LB URL: `Select port to view on Host 1` and connect to port `80` 114 | * You should see the index page with your changes 115 | 116 | --- 117 | For tips on
getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md) -------------------------------------------------------------------------------- /jenkins/Multi-Stage-Pipeline.md: -------------------------------------------------------------------------------- 1 | # Jenkins Multi Stage Pipeline 2 | ## Introduction 3 | Following up on the [Single stage pipeline](./Single-Stage-Pipeline.md), this task involves setting up a multi-stage build pipeline. The pipeline need to have 2 stages: 4 | * Deploy stage - Pulls code from GIT repo and pushes to a directory on the storage server 5 | * Test stage - Tests that that code push was successfully by running some commands 6 | 7 | ## Solution 8 | ### Step 1: Install the required plugins in Jenkins 9 | * `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question 10 | * Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for `GIT` plugin. 11 | * Select the `GIT` plugin and click `Download now and install after restart` 12 | * In the following screen, click checkbox `Restart Jenkins when installation is complete and no jobs running`. Wait for the screen to become standstill. 13 | * You can try to refresh your browser after a few secs. 14 | * Repeat the above steps and install `Pipeline` and `SSH Pipeline Steps` plugins also 15 | * Note that you can seach for multiple plugins, select them and finally click `Download now and install after restart`. When you select one and search for the next one, the previous one disappears. 
But it is still selected behind-the-scenes and gets installed along when you click `Download now and install after restart` 16 | 17 | ### Step 2: Setup Credentials for GIT user 18 | * Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials` 19 | * Leave kind as `Username with Password` and Scope as `Global (..)` 20 | * Setup GIT credentials for Sarah: 21 | ``` 22 | Username: sarah 23 | Password: Sarah_pass123 24 | ID: GIT_CREDS 25 | ``` 26 | 27 | ### Step 3: Create a Pipeline Job 28 | * Click `New item` and in the following screen create a job as per question: 29 | ``` 30 | Name: datacenter-webapp-job (Select 'Pipeline' - Don't select 'Multibranch pipeline') and click Ok 31 | ``` 32 | * Under `Pipeline`, make sure definition is `Pipeline script` 33 | * In the `Script` text area provide the following (change the `git url` under `Deploy` stage and `appserver ports` under `Test` stage according to the question): 34 | ```groovy 35 | // 36 | // Setup up the Remote connection for SSH Build Pipeline step 37 | // 38 | def remote = [:] 39 | remote.name = 'ststor01' 40 | remote.host = 'ststor01' 41 | remote.user = 'natasha' 42 | remote.password = 'Bl@kW' 43 | remote.allowAnyHosts = true 44 | 45 | pipeline { 46 | // Run on agent with label 'ststor01' 47 | agent { label 'ststor01' } 48 | 49 | // Pipeline stages 50 | stages { 51 | // Deploy stage 52 | stage('Deploy') { 53 | steps { 54 | echo 'Deploying ...' 
55 | // Connect to GIT and download the repo code 56 | // Use the Jenkins Credentials by ID: GIT_CREDS 57 | git credentialsId: 'GIT_CREDS', url: 'http://git.stratos.xfusioncorp.com/sarah/web.git' 58 | // Transfer all the files we downloaded to /tmp of ststor01 59 | sshPut remote: remote, from: '.', into: '/tmp' 60 | // Finally move all the files from /tmp to /data on ststor01 61 | sshCommand remote: remote, command: "mv -f /tmp/${JOB_NAME}/* /data" 62 | } 63 | } 64 | 65 | // Test stage 66 | stage('Test') { 67 | environment { 68 | // Update the below value as per the text given in question 69 | INDEX_CONTENT = 'Welcome to xFusionCorp Industries' 70 | } 71 | 72 | steps { 73 | // Now test that the content from default page from HTTPD on each 74 | // of the appservers is same as the index.html content required as 75 | // per question 76 | sh '((curl http://stapp01:8080/ | grep -F "$INDEX_CONTENT") && true)' 77 | sh '((curl http://stapp02:8080/ | grep -F "$INDEX_CONTENT") && true)' 78 | sh '((curl http://stapp03:8080/ | grep -F "$INDEX_CONTENT") && true)' 79 | } 80 | } 81 | } 82 | } 83 | ``` 84 | TODO: Improve above code by not hardcoding the remote password 85 | * Click `Save` 86 | 87 | ### Step 5: Commit changes to index.html 88 | * SSH to ststor01 using SSH user `sarah` 89 | * Change to `/home/sarah/web` 90 | * Edit `index.html` as per the question 91 | ```html 92 | Welcome to xFusionCorp Industries 93 | ``` 94 | * Commit the file using the following steps: 95 | ``` 96 | git add index.html 97 | git commit -m "updated" 98 | git push origin master 99 | ``` 100 | 101 | ### Verification 102 | * Click the newly created Pipeline Job and click `Build Now` 103 | * You should see a new build getting triggered and completed successfully 104 | * Check the `Console Output` to check whether files were pulled from GIT repo and transferred to `ststor01` sucessfully. Also the `Test` stage should report successful test. 
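Regarding the TODO above: one common way to avoid hardcoding the remote password is to bind it at runtime with the Credentials Binding plugin's `withCredentials` step. This is a sketch only — it assumes natasha's SSH login has also been stored as `Username with Password` credentials under a hypothetical ID `STSTOR01_CREDS`:

```groovy
// Sketch: populate the remote map from Jenkins credentials at runtime.
// 'STSTOR01_CREDS' is a hypothetical credentials ID you would create first.
def remote = [:]
remote.name = 'ststor01'
remote.host = 'ststor01'
remote.allowAnyHosts = true

// ... then inside the 'Deploy' stage, wrap the SSH steps in a script block:
script {
    withCredentials([usernamePassword(credentialsId: 'STSTOR01_CREDS',
                                      usernameVariable: 'SSH_USER',
                                      passwordVariable: 'SSH_PASS')]) {
        remote.user = env.SSH_USER
        remote.password = env.SSH_PASS
        sshPut remote: remote, from: '.', into: '/tmp'
        sshCommand remote: remote, command: "mv -f /tmp/${JOB_NAME}/* /data"
    }
}
```

This keeps the password out of the job definition, and Jenkins masks the bound values in the console output.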
* Access LB URL: `Select port to view on Host 1` and connect to port `80`
* You should see the index page with your changes

## Addendum
While the `Test` stage here is specific to the question, in the real world a more robust solution would be to check that the content from GIT is the same as the content served by the Apache servers. That is the best way to check that the `Deploy` stage was successful.

In other words, if deployment fails, the test stage fails regardless of the content of `index.html`.

The following is a more robust version of the code, which however will be failed by the KKE verification process, since that process expects the code to work only when the `index.html` content is `Welcome to xFusionCorp Industries`:
```groovy
// Test stage
stage('Test') {
    environment {
        // Store the index.html content we received from GIT in a variable
        INDEX_CONTENT = sh(script: 'cat index.html', returnStdout: true).trim()
    }

    steps {
        sh 'echo "Content from GIT: $INDEX_CONTENT"'
        // Now test that the content of the default page served by HTTPD on
        // each of the appservers matches the index.html content from GIT
        sh 'curl -s http://stapp01:8080/ | grep -F "$INDEX_CONTENT"'
        sh 'curl -s http://stapp02:8080/ | grep -F "$INDEX_CONTENT"'
        sh 'curl -s http://stapp03:8080/ | grep -F "$INDEX_CONTENT"'
    }
}
```

---
For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md)

--------------------------------------------------------------------------------
/jenkins/Deployment-Using-Jenkins.md:
--------------------------------------------------------------------------------

# Deployment Using Jenkins
## Introduction
This task is quite similar to, but easier than, the [Create Chained Builds](./Create-Chained-Builds.md) task.

The task involves 2 steps:
* The first step is to install and configure HTTPD on all 3 appserver hosts. The recommendation is to perform this step manually
* The next step is to set up a Jenkins deployment job that pulls all files from a GIT repo and pushes them to a directory on the storage server whenever a file is committed to the GIT repo. For this, you need to
  1. Install the [Gitea](https://plugins.jenkins.io/gitea/) and [Publish over SSH](https://plugins.jenkins.io/publish-over-ssh/) plugins in Jenkins
  2. Enable a [Webhook](https://en.wikipedia.org/wiki/Webhook) on the build job and set up this Webhook in the Gitea UI. You also need to install the [Build Authorization Token Root](https://plugins.jenkins.io/build-token-root/) plugin in Jenkins to allow Gitea to trigger the Jenkins build without authenticating

## Solution
### Step 1: Install and configure HTTPD on all appservers
* SSH to each of the appservers and run the following commands (change the port as per the question):
```
sudo yum install -y httpd
sudo sed -i 's/^Listen 80$/Listen 3001/g' /etc/httpd/conf/httpd.conf
sudo systemctl restart httpd
sudo systemctl status httpd
```
* Go to the Jump Host terminal and run a curl against all 3 hosts to make sure that HTTPD works as expected:
```
curl http://stapp01:3001/
curl http://stapp02:3001/
curl http://stapp03:3001/
```

### Step 2: Install Gitea and Publish over SSH Plugins in Jenkins
* `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question
* Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for the `Gitea` plugin.
* Select the plugin and click `Download now and install after restart`
* In the following screen, click the checkbox `Restart Jenkins when installation is complete and no jobs are running`. Wait for the screen to settle.
* You can try to refresh your browser after a few secs.
* Repeat the above steps and install the `Build Authorization Token Root` and `Publish over SSH` plugins as well
* Note that you can search for multiple plugins, select them and finally click `Download now and install after restart`. When you select one and search for the next one, the previous one disappears. But it is still selected behind the scenes and gets installed along with the rest when you click `Download now and install after restart`

### Step 3: Setup Credentials for GIT user
* Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials`
* Leave kind as `Username with Password` and Scope as `Global (..)`
* Set up GIT credentials for Sarah:
```
Username: sarah
Password: Sarah_pass123
ID: sarah
```

### Step 4: Configure Publish Over SSH
* Under `Jenkins > Manage Jenkins > Configure System`, under `Publish over SSH > SSH Servers` click `Add` and provide the following values:
```
Name: ststor01
Hostname: ststor01
Username: natasha
Remote Directory: /data
(Click Advanced and select 'Use Password authentication...')
Passphrase/Password: Bl@kW
```
* Click `Test Configuration` to test that the connection is successful

### Step 5: Copy GIT Repo URL
* `Select port to view on Host 1` and connect to port `8000`. Login using the GITEA user and password given in the question e.g. Sarah
* Click the repo `sarah/web` on the right side
* Copy the GIT Clone HTTP URL (next to the clipboard icon). Usually it looks like `http://git.stratos.xfusioncorp.com/sarah/web.git`

### Step 6: Verify Permissions for /data directory
* SSH to `ststor01` using `natasha`
* Grant full permissions to the `/data` folder: `sudo chmod 777 /data`

### Step 7: Create Deployment Job
* Click `New item` and in the following screen:
```
Name: nautilus-app-deployment (Keep 'Freestyle Project' as selected) and click Ok
```
* Under `General > Source Code Management` click the option `Git`
* This will reveal additional options. Configure as follows:
```
Repository URL: http://git.stratos.xfusioncorp.com/sarah/web.git
Credentials: sarah/*****
```
* Leave `Branches to Build` at the default value i.e. `*/master`
* Now under `Build Triggers`, click `Trigger builds remotely..`. For the Authentication Token, you can give the value as `KODEKLOUDENGINEER`
* Note down the Jenkins URL and separately construct a Webhook URL based on it. The Webhook URL will be in the form `/buildByToken/build?job=NAME&token=SECRET`
e.g. `https://xxxx.katacoda.com/buildByToken/build?job=nautilus-app-deployment&token=KODEKLOUDENGINEER`
* Under `Build Environment` click `Send files or execute commands over SSH after the build runs`. It reveals additional options to configure `Transfer`:
```
Source files: **/*
```
Note: the `**/*` file pattern ensures that the build job pulls all files from the repo and pushes them to the storage server, not just the index.html
* Click `Save`
* Click the newly created Job and click `Build Now`
* You should see a new build getting triggered and completing successfully.
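Since the Build Authorization Token Root plugin exposes the job over a plain, unauthenticated URL, you can sanity-check the token by triggering the job by hand before wiring anything into Gitea. A minimal sketch — `xxxx.katacoda.com` is a placeholder, substitute your lab's actual Jenkins URL:

```shell
# Build the webhook URL from its parts; JENKINS_URL is a placeholder.
JENKINS_URL="https://xxxx.katacoda.com"
JOB="nautilus-app-deployment"
TOKEN="KODEKLOUDENGINEER"
WEBHOOK_URL="${JENKINS_URL}/buildByToken/build?job=${JOB}&token=${TOKEN}"
echo "${WEBHOOK_URL}"

# Uncomment to fire a build manually; a 2xx response means the job was queued.
# curl -fsS "${WEBHOOK_URL}"
```

If the manual trigger queues a build, any later webhook failure is on the Gitea side rather than the token or the plugin.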
* Check the `Console Output` to see that `SSH: Transferred 1 file(s)`
* Go back to the terminal and check that you see an `index.html` under the `/data` folder of the storage server

### Step 8: Setup Gitea Webhooks
* Go back to the Gitea UI
* Click the settings under the repository (Spanner Icon) and click the `Webhooks` tab. Click `Add Webhook > Gitea`
* Under Target URL, paste the URL that you constructed in Step 7 i.e. `https://xxxx.katacoda.com/buildByToken/build?job=nautilus-app-deployment&token=KODEKLOUDENGINEER`
* Click `Add Webhook`
* Click the webhook again and click `Test delivery` to check that the hook works. This will send a fake event to Jenkins
* Go back to the Jenkins UI and check that a new build is triggered under the `nautilus-app-deployment` job
* This means the hook works fine

### Step 9: Commit changes to index.html
* SSH to ststor01 using SSH user `sarah`
* Change to `/home/sarah/web`
* Edit `index.html` as per the question
```html
Welcome to the xFusionCorp Industries
```
* Commit the file using the following steps:
```
git add index.html
git commit -m "updated"
git push origin master
```

## Verification
* Check in the Jenkins UI. You should see a `nautilus-app-deployment` build triggered
* Check the `Console Output` to make sure the build steps completed successfully without any failures
* Go back to the terminal and check that you see the new `index.html` under the `/data` folder of the storage server
* Access LB URL: `Select port to view on Host 1` and connect to port `80`
* You should see the index page with your changes
* In the terminal of the storage server, create another new HTML file, `panda.html`, with any sample content to your liking. Commit this file into the remote repo.
* Check once again in the Jenkins UI. You should see a `nautilus-app-deployment` build triggered
* Check the `Console Output` to make sure the build steps completed successfully without any failures
* You should also now see the new `panda.html` under the `/data` directory of the storage server
* Back on the LB URL, check if you can see the panda.html page: `/panda.html`

---
For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md)

--------------------------------------------------------------------------------
/jenkins/Create-Chained-Builds.md:
--------------------------------------------------------------------------------

# Create Chained Builds in Jenkins
## Introduction
This is, by far, one of the longest tasks that I have completed so far. Given the sheer number of steps involved, it took me around 25 minutes to complete all the steps, thereby forfeiting my bonus points. I don't regret it though.

This task is quite similar to, but has more steps than, the [Deployment using Jenkins](./Deployment-Using-Jenkins.md) task.

The task involves the following 3 key steps:
1. Create a Jenkins build job that pulls _all_ files from a GIT repo and pushes them to a directory on the storage server. To achieve this, you need to install the [Gitea](https://plugins.jenkins.io/gitea/) and [Publish over SSH](https://plugins.jenkins.io/publish-over-ssh/) plugins in Jenkins
2. The task expects that when a file is committed to the GIT repo, the above build is automatically triggered. This requires enabling a [Webhook](https://en.wikipedia.org/wiki/Webhook) on the build job and setting up this Webhook in the Gitea UI. You also need to install the [Build Authorization Token Root](https://plugins.jenkins.io/build-token-root/) plugin in Jenkins to allow Gitea to trigger the Jenkins build without authenticating
3. Finally, the task expects you to create another Jenkins build job that can run SSH commands on the appservers and restart the `httpd` service. For this, you need to install the [SSH](https://plugins.jenkins.io/ssh) plugin and also enable password-less sudo on all the appservers. Most importantly, this build job should get triggered automatically by the first build job.

Happy learning and good luck with your attempt!

## Solution
### Step 1: Enable password-less sudo in all appservers
* SSH to each of the appservers and run `sudo visudo`
* At the end of the resulting file, add a password-less sudo rule for the respective sudo user:
stapp01:
```
tony ALL=(ALL) NOPASSWD: ALL
```
stapp02:
```
steve ALL=(ALL) NOPASSWD: ALL
```
stapp03:
```
banner ALL=(ALL) NOPASSWD: ALL
```

### Step 2: Install Gitea and SSH Plugins in Jenkins
* `Select port to view on Host 1` and connect to port `8081`. Login using the Jenkins admin user and password given in the question
* Under `Jenkins > Manage Jenkins > Manage Plugins` click `Available` and search for the `Gitea` plugin.
* Select the plugin and click `Download now and install after restart`
* In the following screen, click the checkbox `Restart Jenkins when installation is complete and no jobs are running`. Wait for the screen to settle.
* You can try to refresh your browser after a few secs.
* Repeat the above steps and install the `Build Authorization Token Root`, `SSH` and `Publish over SSH` plugins as well
* Note that you can search for multiple plugins, select them and finally click `Download now and install after restart`. When you select one and search for the next one, the previous one disappears. But it is still selected behind the scenes and gets installed along with the rest when you click `Download now and install after restart`

### Step 3: Setup Credentials for GIT and SSH users
* Under `Jenkins > Manage Jenkins > Manage Credentials`, click `Global` under `Stores scoped to Jenkins` and `Add Credentials`
* Leave kind as `Username with Password` and Scope as `Global (..)`
* Set up GIT credentials for Sarah:
```
Username: sarah
Password: Sarah_pass123
ID: sarah
```
* In the same way, add SSH credentials for tony, steve and banner (the sudo users for the respective servers):
```
Username: tony
Password: Ir0nM@n
ID: tony
```
### Step 4: Add SSH Hosts in Jenkins
* Click `Jenkins > Manage Jenkins > Configure System`
* Under `SSH Remote Hosts` click `Add Host` and provide the following values:
```
Hostname: stapp01
Port: 22
Credentials: Choose 'tony' from the list
Pty: Select checkbox
```
* Click `Check Connection` to make sure the connection is successful
* Repeat the steps to add the stapp02 and stapp03 hosts

### Step 5: Configure Publish Over SSH
* On the same page, i.e. `Jenkins > Manage Jenkins > Configure System`, under `Publish over SSH > SSH Servers` click `Add` and provide the following values:
```
Name: ststor01
Hostname: ststor01
Username: natasha
Remote Directory: /data
(Click Advanced and select 'Use Password authentication...')
Passphrase/Password: Bl@kW
```
* Click `Test Configuration` to test that the connection is successful

### Step 6: Copy GIT Repo URL
* `Select port to view on Host 1` and connect to port `8000`. Login using the GITEA user and password given in the question e.g. Sarah
* Click the repo `sarah/web` on the right side
* Copy the GIT Clone HTTP URL (next to the clipboard icon). Usually it looks like `http://git.stratos.xfusioncorp.com/sarah/web.git`

### Step 7: Create Upstream Build Job
* Go back to the Jenkins Console
* Click `New item` and in the following screen:
```
Name: nautilus-app-deployment (Keep 'Freestyle Project' as selected) and click Ok
```
* Under `General > Source Code Management` click the option `Git`
* This will reveal additional options. Configure as follows:
```
Repository URL: http://git.stratos.xfusioncorp.com/sarah/web.git
Credentials: sarah/*****
```
* Leave `Branches to Build` at the default value i.e. `*/master`
* Now under `Build Triggers`, click `Trigger builds remotely..`. For the Authentication Token, you can give the value as `KODEKLOUDENGINEER`
* Note down the Jenkins URL and separately construct a Webhook URL based on it. The Webhook URL will be in the form `/buildByToken/build?job=NAME&token=SECRET`
e.g. `https://xxxx.katacoda.com/buildByToken/build?job=nautilus-app-deployment&token=KODEKLOUDENGINEER`
* Under `Build Environment` click `Send files or execute commands over SSH after the build runs`. It reveals additional options to configure `Transfer`:
```
Source files: **/*
```
Note: the `**/*` file pattern ensures that the build job pulls all files from the repo and pushes them to the storage server, not just the index.html
* Click `Save`
* Click the newly created Job and click `Build Now`
* You should see a new build getting triggered and completing successfully.
* Check the `Console Output` to see that `SSH: Transferred 1 file(s)`
* Go back to the terminal and check that you see an `index.html` under the `/data` folder of the storage server

### Step 8: Setup Gitea Webhooks
* Go back to the Gitea UI
* Click the settings under the repository (Spanner Icon) and click the `Webhooks` tab. Click `Add Webhook > Gitea`
* Under Target URL, paste the URL that you constructed in the previous step i.e. `https://xxxx.katacoda.com/buildByToken/build?job=nautilus-app-deployment&token=KODEKLOUDENGINEER`
* Click `Add Webhook`
* Click the webhook again and click `Test delivery` to check that the hook works. This will send a fake event to Jenkins
* Go back to the Jenkins UI and check that a new build is triggered under the `nautilus-app-deployment` job
* This means the hook works fine

### Step 9: Create Downstream Build Job
* Go back to the Jenkins Console
* Click `New item` and in the following screen:
```
Name: manage-services (Keep 'Freestyle Project' as selected) and click Ok
```
* Under `Build Triggers` click `Build after other projects are built`. It reveals additional options to configure `Projects to Watch`:
```
Projects to Watch: nautilus-app-deployment
Trigger only if Build is stable (default)
```
* Under `Build` add a `Build Step` with `Execute shell script on remote host using SSH` and under `SSH Site` select `tony@stapp01:22`
* In the command text area, provide the following:
```
sudo systemctl restart httpd
sudo systemctl status httpd
```
* Repeat the above steps to add the hosts and commands for `steve@stapp02:22` and `banner@stapp03:22` as well
* Click `Save`
* Click the newly created Job and click `Build Now`
* You should see a new build getting triggered and completing successfully.
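For comparison only — the task itself uses Freestyle jobs — the same upstream/downstream chaining can be expressed in a declarative pipeline with the `upstream` trigger, so a pipeline job would fire automatically whenever `nautilus-app-deployment` finishes successfully. A sketch (the restart logic is stubbed out, since SSH access would need to be handled separately, e.g. via the SSH Pipeline Steps plugin):

```groovy
pipeline {
    agent any

    // Fire this job automatically after a stable nautilus-app-deployment build
    triggers {
        upstream(upstreamProjects: 'nautilus-app-deployment',
                 threshold: hudson.model.Result.SUCCESS)
    }

    stages {
        stage('Restart services') {
            steps {
                // Placeholder for the httpd restarts performed on the appservers
                echo 'Restarting httpd on all appservers ...'
            }
        }
    }
}
```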
* Check the `Console Output` to see that the restart service commands ran without any issues

### Step 10: Commit changes to index.html
* SSH to ststor01 using SSH user `sarah`
* Change to `/home/sarah/web`
* Edit `index.html` as per the question
```html
Welcome to the xFusionCorp Industries
```
* Commit the file using the following steps:
```
git add index.html
git commit -m "updated"
git push origin master
```

## Verification
* Check in the Jenkins UI. You should see a `nautilus-app-deployment` build triggered, followed by a `manage-services` build
* Check the `Console Output` of both jobs to make sure the build steps completed successfully without any failures
* Go back to the terminal and check that you see the new `index.html` under the `/data` folder of the storage server
* Access LB URL: `Select port to view on Host 1` and connect to port `80`
* You should see the index page with your changes
* In the terminal of the storage server, create another new HTML file, `panda.html`, with any sample content to your liking. Commit this file into the remote repo.
* Check once again in the Jenkins UI. You should see a `nautilus-app-deployment` build triggered, followed by a `manage-services` build
* Check the `Console Output` of both jobs to make sure the build steps completed successfully without any failures
* You should also now see the new `panda.html` under the `/data` directory of the storage server
* Back on the LB URL, check if you can see the panda.html page: `/panda.html`

---
For tips on getting better at KodeKloud Engineer Jenkins tasks, [click here](./README.md)