├── 2023
│   ├── day01
│   │   ├── devops.txt
│   │   ├── task_complete.md
│   │   └── tasks.md
│   ├── day02
│   │   ├── Day 2 Task (2).pdf
│   │   ├── basic_linux_commands.md
│   │   └── tasks.md
│   ├── day03
│   │   ├── Day 3 Task (1).pdf
│   │   └── tasks.md
│   ├── day04
│   │   ├── Day 4 Task.pdf
│   │   └── tasks.md
│   ├── day05
│   │   ├── Screenshot_20230105_100923.png
│   │   ├── Screenshot_20230105_100940.png
│   │   ├── Screenshot_20230105_134331.png
│   │   ├── Screenshot_20230105_134600.png
│   │   ├── Screenshot_20230105_134956.png
│   │   ├── Screenshot_20230105_135620.png
│   │   ├── backup.sh
│   │   ├── createDirectories.sh
│   │   ├── createDirectories3argu.sh
│   │   └── tasks.md
│   ├── day06
│   │   ├── Screenshot_20230110_165823.png
│   │   ├── notes
│   │   │   └── Linux_Basic_&_FilePermissions.docx
│   │   ├── task_complete.md
│   │   └── tasks.md
│   ├── day07
│   │   ├── Day7-taskComplete.pdf
│   │   ├── Docker_jenkins_install.sh
│   │   └── tasks.md
│   ├── day08
│   │   ├── Screenshot_20230110_173015.png
│   │   ├── Screenshot_20230110_173922.png
│   │   └── tasks.md
│   ├── day09
│   │   ├── TaskComplete.md
│   │   └── tasks.md
│   ├── day10
│   │   ├── Day 10 Task.pdf
│   │   └── tasks.md
│   ├── day11
│   │   ├── Day 11 Task.pdf
│   │   └── tasks.md
│   ├── day12
│   │   ├── Cheetsheet.md
│   │   ├── Git CheetSheet_1.pdf
│   │   └── tasks.md
│   ├── day13
│   │   ├── Screenshot_20230113_210801.png
│   │   ├── task_complete.md
│   │   └── tasks.md
│   ├── day14
│   │   ├── Day 14 Task.pdf
│   │   └── tasks.md
│   ├── day15
│   │   ├── parser.py
│   │   ├── services.json
│   │   ├── services.yaml
│   │   ├── solution-2.py
│   │   ├── solution.py
│   │   └── tasks.md
│   ├── day16
│   │   ├── Day 16 Task.pdf
│   │   └── tasks.md
│   ├── day17
│   │   ├── Day 17 Task.pdf
│   │   └── tasks.md
│   ├── day18
│   │   ├── Day 18 Task.pdf
│   │   ├── docker-compose.yaml
│   │   └── tasks.md
│   ├── day19
│   │   ├── Day 19 Task.pdf
│   │   ├── sample_project_deployment.yaml
│   │   └── tasks.md
│   ├── day20
│   │   ├── Docker-Cheetsheet.md
│   │   └── tasks.md
│   ├── day21
│   │   ├── interview_questions.md
│   │   └── tasks.md
│   ├── day22
│   │   ├── Day 22 Task.pdf
│   │   └── tasks.md
│   ├── day23
│   │   ├── Day 23 Task.pdf
│   │   └── tasks.md
│   ├── day24
│   │   └── tasks.md
│   ├── day25
│   │   └── tasks.md
│   ├── day26
│   │   └── tasks.md
│   ├── day27
│   │   └── tasks.md
│   ├── day28
│   │   └── tasks.md
│   ├── day29
│   │   └── tasks.md
│   ├── day30
│   │   └── tasks.md
│   ├── day31
│   │   ├── pod.yml
│   │   └── tasks.md
│   ├── day32
│   │   ├── Deployment.yml
│   │   └── tasks.md
│   ├── day33
│   │   └── tasks.md
│   ├── day34
│   │   └── tasks.md
│   ├── day35
│   │   └── tasks.md
│   ├── day36
│   │   ├── Deployment.yml
│   │   ├── pv.yml
│   │   ├── pvc.yml
│   │   └── tasks.md
│   ├── day37
│   │   └── tasks.md
│   ├── day38
│   │   └── tasks.md
│   ├── day39
│   │   └── tasks.md
│   ├── day40
│   │   └── tasks.md
│   ├── day41
│   │   └── tasks.md
│   ├── day42
│   │   └── tasks.md
│   ├── day43
│   │   ├── aws-cli.md
│   │   └── tasks.md
│   ├── day44
│   │   └── tasks.md
│   ├── day45
│   │   └── tasks.md
│   ├── day46
│   │   └── tasks.md
│   ├── day47
│   │   └── tasks.md
│   ├── day48
│   │   └── tasks.md
│   ├── day49
│   │   └── tasks.md
│   ├── day50
│   │   └── tasks.md
│   ├── day51
│   │   └── tasks.md
│   ├── day52
│   │   └── tasks.md
│   ├── day53
│   │   └── tasks.md
│   ├── day54
│   │   └── tasks.md
│   ├── day55
│   │   └── tasks.md
│   ├── day56
│   │   └── tasks.md
│   ├── day57
│   │   └── tasks.md
│   ├── day58
│   │   └── tasks.md
│   ├── day59
│   │   └── tasks.md
│   ├── day60
│   │   └── tasks.md
│   ├── day61
│   │   └── tasks.md
│   ├── day62
│   │   └── tasks.md
│   ├── day63
│   │   └── tasks.md
│   ├── day64
│   │   └── tasks.md
│   ├── day65
│   │   └── tasks.md
│   ├── day66
│   │   └── tasks.md
│   ├── day67
│   │   └── tasks.md
│   ├── day68
│   │   └── tasks.md
│   ├── day69
│   │   └── tasks.md
│   ├── day70
│   │   └── tasks.md
│   ├── day71
│   │   └── tasks.md
│   ├── day72
│   │   └── tasks.md
│   ├── day73
│   │   └── tasks.md
│   ├── day74
│   │   └── tasks.md
│   ├── day75
│   │   └── tasks.md
│   ├── day76
│   │   └── tasks.md
│   ├── day77
│   │   └── tasks.md
│   ├── day78
│   │   └── tasks.md
│   ├── day79
│   │   └── tasks.md
│   ├── day80
│   │   └── tasks.md
│   ├── day81
│   │   └── tasks.md
│   ├── day82
│   │   └── tasks.md
│   ├── day83
│   │   └── tasks.md
│   ├── day84
│   │   └── tasks.md
│   ├── day85
│   │   └── tasks.md
│   ├── day86
│   │   └── tasks.md
│   ├── day87
│   │   └── tasks.md
│   ├── day88
│   │   └── tasks.md
│   ├── day89
│   │   └── tasks.md
│   └── day90
│       └── tasks.md
├── CONTRIBUTING.md
├── LICENSE.md
└── README.md
--------------------------------------------------------------------------------
/2023/day01/devops.txt:
--------------------------------------------------------------------------------
DevOps is a methodology that involves practices to bridge the gap between the Dev and Ops teams by using open-source automation and build tools.
These are the articles which I referred to.

Formal definition: "DevOps is the union of people, process, and products to enable continuous delivery of value to our end users."

The main goal of DevOps is to shorten cycle time. Start with the release pipeline: how long does it take to deploy a change of one line of code or configuration?

--------------------------------------------------------------------------------
/2023/day01/task_complete.md:
--------------------------------------------------------------------------------
Wrote a blog on this at https://rushikesh-mashidkar.hashnode.dev/before-youre-starting-devops-journey-you-need-to-know-these-basics-fundamentals-of-devops
--------------------------------------------------------------------------------
/2023/day01/tasks.md:
--------------------------------------------------------------------------------
## Introduction - Day 1

This is the day you take up this challenge and start your #90DaysOfDevOps journey with the #TrainWithShubham Community.

- Fork this repo.
- Start with a [DevOps Roadmap](https://youtu.be/iOE9NTAG35g)
- Write a LinkedIn post or a small article about your understanding of DevOps
  - What is DevOps
  - What is Automation, Scaling, Infrastructure
  - Why DevOps is important, etc.
--------------------------------------------------------------------------------
/2023/day02/Day 2 Task (2).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day02/Day 2 Task (2).pdf
--------------------------------------------------------------------------------
/2023/day02/basic_linux_commands.md:
--------------------------------------------------------------------------------
## Basic Linux commands

### Listing commands
```ls option_flag arguments``` --> lists the subdirectories and files available in the present directory

Examples:

- ```ls -l``` --> lists the files and directories in long-list format with extra information
- ```ls -a``` --> lists all files and directories, including hidden ones
- ```ls *.sh``` --> lists all the files having a .sh extension
- ```ls -i``` --> lists the files and directories with their index numbers (inodes)
- ```ls -d */``` --> lists only directories (we can also specify a pattern)

### Directory commands
- ```pwd``` --> print working directory; gives the present working directory

- ```cd path_to_directory``` --> change directory to the provided path

- ```cd ~``` or just ```cd``` --> change directory to the home directory

- ```cd -``` --> go to the last working directory

- ```cd ..``` --> change directory one level back

- ```cd ../..``` --> change directory two levels back

- ```mkdir directoryName``` --> make a directory at a specific location

Examples:
```
mkdir newFolder              # make a new folder 'newFolder'

mkdir .NewFolder             # make a hidden directory (a leading . also hides a file)

mkdir A B C D                # make multiple directories at the same time

mkdir /home/user/Mydirectory # make a new folder in a specific location

mkdir -p A/B/C/D             # make a nested directory
```
--------------------------------------------------------------------------------
/2023/day02/tasks.md:
--------------------------------------------------------------------------------
Day 2 Task: Basic Linux Commands

Task: What is the Linux command to
1. Check your present working directory.
2. List all the files or directories, including hidden files.
3. Create a nested directory A/B/C/D/E

Note: [Check this file for reference](basic_linux_commands.md)

Check the basic_linux_commands.md file in the same directory (day02).
--------------------------------------------------------------------------------
/2023/day03/Day 3 Task (1).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day03/Day 3 Task (1).pdf
--------------------------------------------------------------------------------
/2023/day03/tasks.md:
--------------------------------------------------------------------------------
Day 3 Task: Basic Linux Commands

Task: What is the Linux command to

1. View what's written in a file.
2. Change the access permissions of files.
3. Check which commands you have run till now.
4. Remove a directory/folder.
5. Create a fruits.txt file and view its content.
6. Add content to fruits.txt (one on each line) - Apple, Mango, Banana, Cherry, Kiwi, Orange, Guava.
7. Show only the top three fruits from the file.
8. Show only the bottom three fruits from the file.
9. Create another file Colors.txt and view its content.
10. Add content to Colors.txt (one on each line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey.
11. Find the difference between the fruits.txt and Colors.txt files.
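One possible set of commands for these tasks (a sketch; the permission mode in item 2 and the directory name in item 4 are just examples):

```bash
cat fruits.txt        # 1. view what's written in a file
chmod 755 fruits.txt  # 2. change access permissions (755 is an example mode)
history               # 3. list the commands run till now
rm -r mydir           # 4. remove a directory/folder

# 5/6. create fruits.txt, fill it one fruit per line, and view it
printf "Apple\nMango\nBanana\nCherry\nKiwi\nOrange\nGuava\n" > fruits.txt
cat fruits.txt

head -3 fruits.txt    # 7. top three fruits
tail -3 fruits.txt    # 8. bottom three fruits

# 9/10. create Colors.txt, fill it one color per line, and view it
printf "Red\nPink\nWhite\nBlack\nBlue\nOrange\nPurple\nGrey\n" > Colors.txt
cat Colors.txt

diff fruits.txt Colors.txt  # 11. difference between the two files
```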
Reference: https://www.linkedin.com/pulse/linux-commands-devops-used-day-to-day-activit-chetan-/
--------------------------------------------------------------------------------
/2023/day04/Day 4 Task.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day04/Day 4 Task.pdf
--------------------------------------------------------------------------------
/2023/day04/tasks.md:
--------------------------------------------------------------------------------
# Day 4 Task: Basic Linux Shell Scripting for DevOps Engineers.

## What is a Kernel?

The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system.

## What is a Shell?

A shell is a special user program that provides an interface for the user to use operating system services. The shell accepts human-readable commands from the user and converts them into something the kernel can understand. It is a command language interpreter that executes commands read from input devices such as keyboards or from files. The shell starts when the user logs in or opens a terminal.

## What is Linux Shell Scripting?

A shell script is a computer program designed to be run by a Linux shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text.

**Tasks**

- Explain in your own words, with examples, what Shell Scripting is for DevOps.
- What is `#!/bin/bash`? Can we write `#!/bin/sh` as well?
- Write a shell script which prints `I will complete #90DaysOfDevOps challenge`
- Write a shell script to take user input, take input from arguments, and print the variables.
- Write an example of if-else in shell scripting by comparing 2 numbers.
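Here is a minimal sketch covering the three scripting tasks in one file (bash assumed; adapt as you like):

```bash
#!/bin/bash

# Print the challenge message
echo "I will complete #90DaysOfDevOps challenge"

# Take input interactively and from the first argument, then print both
read -p "Enter a value: " user_input
arg_input=$1
echo "From prompt: $user_input, from argument: $arg_input"

# If-else comparing two numbers
a=10
b=20
if [ "$a" -gt "$b" ]; then
    echo "$a is greater than $b"
else
    echo "$a is not greater than $b"
fi
```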
+"%d-%b-%Y") 6 | 7 | mkdir $Backups/$date 8 | cp -r $Backup_directory $Backups/$date 9 | 10 | echo "Backup created in $Backups/$date" 11 | -------------------------------------------------------------------------------- /2023/day05/createDirectories.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | for (( i=1;i<=$2;i++)) 4 | do 5 | mkdir $1-$i 6 | done 7 | 8 | ls 9 | 10 | 11 | #when youb run this script you have to write 2 arguments after filename i.e bash createDirectories.sh day 90 12 | -------------------------------------------------------------------------------- /2023/day05/createDirectories3argu.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | for (( i=$2;i<=$3;i++)) 4 | do 5 | mkdir $1-$i 6 | done 7 | 8 | ls 9 | 10 | 11 | # in this script you have to write 3 arguments i.e bash createDirectories3argu.sh movie 1 20 12 | -------------------------------------------------------------------------------- /2023/day05/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 5 Task: Advanced Linux Shell Scripting for DevOps Engineers with User management 2 | 3 | If you noticed that there are total 90 sub directories in the directory '2023' of this repository. What did you think, how did I create 90 directories. Manually one by one or using a script, or a command ? 4 | 5 | All 90 directories within seconds using a simple command. 6 | 7 | ` mkdir day{1..90}` 8 | 9 | ### Tasks 10 | 1) You have to do the same using Shell Script i.e using either Loops or command with start day and end day variables using arguments - 11 | 12 | So Write a bash script createDirectories.sh that when the script is executed with three given arguments (one is directory name and second is start number of directories and third is the end number of directories ) it creates specified number of directories with a dynamic directory name. 13 | 14 | Example 1: When the script is executed as 15 | 16 | ```./createDirectories.sh day 1 90``` 17 | 18 | then it creates 90 directories as ```day1 day2 day3 .... day90``` 19 | 20 | Example 2: When the script is executed as 21 | 22 | ```./createDirectories.sh Movie 20 50``` 23 | then it creates 50 directories as ```Movie20 Movie21 Movie23 ...Movie50``` 24 | 25 | Notes: 26 | You may need to use loops or commands (or both), based on your preference . [Check out this reference: https://www.geeksforgeeks.org/bash-scripting-for-loop/](https://www.geeksforgeeks.org/bash-scripting-for-loop/) 27 | 28 | 29 | 2) Create a Script to backup all your work done till now. 30 | 31 | Backups are an important part of DevOps Engineers day to Day activities 32 | The video in References will help you to understand How a DevOps Engineer takes backups (it can feel a bit difficult but keep trying, Nothing is impossible.) 33 | Watch [this video](https://youtu.be/aolKiws4Joc) 34 | 35 | In case of Doubts, post it in [Discord Channel for #90DaysOfDevOps](https://discord.gg/hs3Pmc5F) 36 | 37 | 38 | 3) Read About Cron and Crontab, to automate the backup Script 39 | 40 | Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit or delete entries to cron. A crontab file is a user file that holds the scheduling information. 
Watch this video as a reference for Tasks 2 and 3: [https://youtu.be/aolKiws4Joc](https://youtu.be/aolKiws4Joc)

4) Read about user management and let me know on LinkedIn if you're ready for Day 6.

A user is an entity in a Linux operating system that can manipulate files and perform several other operations. Each user is assigned an ID that is unique within the operating system. In this post, we will learn about users and the commands used to get information about them. After installation of the operating system, the ID 0 is assigned to the root user and the IDs 1 to 999 (both inclusive) are assigned to system users; hence the IDs for local users begin from 1000 onwards.

5) Create 2 users and just display their usernames (a sketch follows below).

[Check out this reference: https://www.geeksforgeeks.org/user-management-in-linux/](https://www.geeksforgeeks.org/user-management-in-linux/)
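One way to do task 5 (the usernames here are placeholders):

```bash
# Create two users (-m also creates their home directories)
sudo useradd -m user1
sudo useradd -m user2

# Display just their usernames from the user database
getent passwd user1 user2 | cut -d: -f1
```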
Post your daily work on LinkedIn and let [me](https://www.linkedin.com/in/shubhamlondhe1996/) know; writing an article is the best :)
--------------------------------------------------------------------------------
/2023/day06/Screenshot_20230110_165823.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day06/Screenshot_20230110_165823.png
--------------------------------------------------------------------------------
/2023/day06/notes/Linux_Basic_&_FilePermissions.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day06/notes/Linux_Basic_&_FilePermissions.docx
--------------------------------------------------------------------------------
/2023/day06/task_complete.md:
--------------------------------------------------------------------------------
# Article about file permissions, based on my understanding from the notes

File permissions are an important aspect of operating system security. They are used to control access to files and directories on a Linux or Unix-based system. In this article, we'll explain how file permissions work and how you can use them to protect your files and directories.

File permissions are controlled by the file's owner and group. The owner is the user who created the file, and the group is a collection of users who have access to the file. Each file and directory has three types of permissions: read, write, and execute.

Read permission allows a user to view the contents of a file. Write permission allows a user to make changes to a file, such as editing or deleting it. Execute permission allows a user to run a file as a program or script.

File permissions are represented by a series of characters, called the file mode, which is typically displayed in the output of the ls command. The first character of the file mode indicates whether the file is a regular file, a directory, or a special type of file, like a symbolic link. The next nine characters of the file mode are divided into three groups, each representing the permissions for the owner, the group, and others.

For example, the file mode '-rw-r--r--' represents a regular file with read and write permission for the owner, and read permission for the group and others. In this case, the owner of the file can read and write to the file, but the group and others can only read the file.

You can use the chmod command to change the permissions on a file or directory. The chmod command takes a numeric mode as an argument, which represents the permissions for the owner, group, and others in octal format. For example, the numeric mode 644 represents read and write permission for the owner, and read permission for the group and others, just like the file mode rw-r--r--.

You can also use the chown command to change the owner and group of a file or directory. This is useful if you want to give another user access to a file, or if you want to transfer ownership of a file to another user.

It's worth noting that the root user has the ability to access any file or directory on the system, regardless of its permissions. This is because the root user has superuser privileges and can bypass the normal file permission controls.

In conclusion, file permissions are a crucial aspect of system security and allow you to control access to files and directories on a Linux or Unix-based system. By understanding how file permissions work and how to use them, you can protect your files and directories from unauthorized access.


# ACL
getfacl and setfacl are command-line utilities for viewing and modifying Access Control Lists (ACLs) on Linux and Unix-based systems.

getfacl is used to view the ACLs of files and directories. It displays the file owner, the group owner, and the permissions and flags associated with the file or directory. The output of the getfacl command includes the standard Linux file permissions, as well as any additional permissions defined in the ACL.
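To illustrate, a short session with these utilities might look like this (the file and user names are placeholders):

```bash
# View the ACL of a file
getfacl report.txt

# Give user 'alice' read and write access via an ACL entry
setfacl -m u:alice:rw report.txt

# Remove all ACL entries from the file
setfacl -b report.txt
```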
--------------------------------------------------------------------------------
/2023/day06/tasks.md:
--------------------------------------------------------------------------------
# Day 6 Task: File Permissions and Access Control Lists

### Today is more about reading, learning, and implementing file permissions

The concept of Linux file permissions and ownership is important in Linux. Here, we will be working on Linux permissions and ownership and will do tasks on both of them. Let us start with permissions.

1) Create a simple file and do `ls -ltr` to see the details of the files [refer to Notes](https://github.com/LondheShubham153/90DaysOfDevOps/tree/master/2023/day6/notes)

Each of the three permissions is assigned to three defined categories of users. The categories are:
- owner - the owner of the file or application.
  - "chown" is used to change the ownership of a file or directory.
- group - the group that owns the file or application.
  - "chgrp" is used to change the group of a file or directory.
- others - all other users with access to the system (users outside the owner and the group).
  - "chmod" is used to change the permissions of a file or directory.

As a task, change the user permissions of the file and note the changes after `ls -ltr`.

2) Write an article about file permissions based on your understanding from the notes.

3) Read about ACLs and try out the commands `getfacl` and `setfacl`

In case of any doubts, post on the [Discord Community](https://discord.gg/hs3Pmc5F)

Happy Learning
--------------------------------------------------------------------------------
/2023/day07/Day7-taskComplete.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day07/Day7-taskComplete.pdf
--------------------------------------------------------------------------------
/2023/day07/Docker_jenkins_install.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Docker installation
sudo apt update
sudo apt install docker.io -y

# Jenkins installation
# Jenkins needs Java first
sudo apt update
sudo apt install default-jdk -y

# Jenkins repository key and source list
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'

sudo apt update
sudo apt install jenkins -y

# Start Jenkins and enable it at boot
sudo systemctl start jenkins
sudo systemctl enable jenkins

# Run this script to install Docker and Jenkins.
# If you want only Jenkins, comment out the Docker part.
# Thank you :)
--------------------------------------------------------------------------------
/2023/day07/tasks.md:
--------------------------------------------------------------------------------
# Day 7 Task: Understanding package managers and systemctl

### What is a package manager in Linux?

In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure and manage software packages on an operating system. The package manager can be a graphical application like a software center or a command-line tool like apt-get or pacman.

You'll often find me using the term 'package' in tutorials and articles. To understand package managers, you must understand what a package is.

### What is a package?

A package usually refers to an application, but it could be a GUI application, a command-line tool, or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration files, and sometimes information about its dependencies.

### Different kinds of package managers
Package managers differ based on the packaging system, but the same packaging system may have more than one package manager.

For example, RPM has the Yum and DNF package managers. For DEB, you have the apt-get and aptitude command-line package managers.

## Tasks

1) Install Docker and Jenkins on your system from your terminal using package managers.

2) Write a small blog or article on installing these tools using package managers on Ubuntu and CentOS (a CentOS sketch follows below).
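The script above covers Ubuntu; for the CentOS side of task 2, a rough sketch might look like the following (the repository URLs are the official Docker and Jenkins ones as I recall them, but they can change; verify against the official docs before use):

```bash
# Docker on CentOS
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce

# Jenkins on CentOS (needs Java first)
sudo yum install -y java-11-openjdk
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y jenkins

# Start both services
sudo systemctl start docker jenkins
```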
### systemctl and systemd

systemctl is used to examine and control the state of the systemd system and service manager. systemd is a system and service manager for Unix-like operating systems (most distributions, not all).

## Tasks

1) Check the status of the Docker service on your system (make sure you completed the tasks above, else Docker won't be installed).

2) Stop the Jenkins service and post before and after screenshots (one way to do this is sketched below).

3) Read about the commands systemctl vs. service,

   e.g. `systemctl status docker` vs. `service docker status`
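For example, task 2 could be done like this (the statuses in the comments are what you would typically expect to see, not captured output):

```bash
sudo systemctl status jenkins   # before stopping: typically shows "active (running)"
sudo systemctl stop jenkins
sudo systemctl status jenkins   # after stopping: typically shows "inactive (dead)"
```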
For reference, read [this](https://www.howtogeek.com/devops/how-to-check-if-the-docker-daemon-or-a-container-is-running/#:~:text=Checking%20With%20Systemctl&text=Check%20what%27s%20displayed%20under%20%E2%80%9CActive,running%20sudo%20systemctl%20start%20docker%20.)

#### Post about this and bring your friends to this #90DaysOfDevOps challenge.
--------------------------------------------------------------------------------
/2023/day08/Screenshot_20230110_173015.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day08/Screenshot_20230110_173015.png
--------------------------------------------------------------------------------
/2023/day08/Screenshot_20230110_173922.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day08/Screenshot_20230110_173922.png
--------------------------------------------------------------------------------
/2023/day08/tasks.md:
--------------------------------------------------------------------------------
# Day 8 Task: Basic Git & GitHub for DevOps Engineers.

## What is Git?
Git is a version control system that allows you to track changes to files and coordinate work on those files among multiple people. It is commonly used for software development, but it can be used to track changes to any set of files.

With Git, you can keep a record of who made changes to what part of a file, and you can revert to earlier versions of the file if needed. Git also makes it easy to collaborate with others, as you can share changes and merge the changes made by different people into a single version of a file.

## What is GitHub?
GitHub is a web-based platform that provides hosting for version control using Git. It is a subsidiary of Microsoft, and it offers all of the distributed version control and source code management (SCM) functionality of Git as well as adding its own features. GitHub is a very popular platform for developers to share and collaborate on projects, and it is also used for hosting open-source projects.

## What is version control? How many types of version control are there?
Version control is a system that tracks changes to a file or set of files over time so that you can recall specific versions later. It allows you to revert files to a previous state, revert the entire project to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more.

There are two main types of version control systems: centralized version control systems and distributed version control systems.

1) A centralized version control system (CVCS) uses a central server to store all the versions of a project's files. Developers "check out" files from the central server, make changes, and then "check in" the updated files. Examples of CVCS include Subversion and Perforce.

2) A distributed version control system (DVCS) allows developers to "clone" an entire repository, including the entire version history of the project. This means that they have a complete local copy of the repository, including all branches and past versions. Developers can work independently and then later merge their changes back into the main repository. Examples of DVCS include Git, Mercurial, and Darcs.

## Why do we use distributed version control over centralized version control?

1) Better collaboration: In a DVCS, every developer has a full copy of the repository, including the entire history of all changes. This makes it easier for developers to work together, as they don't have to constantly communicate with a central server to commit their changes or to see the changes made by others.

2) Improved speed: Because developers have a local copy of the repository, they can commit their changes and perform other version control actions faster, as they don't have to communicate with a central server.

3) Greater flexibility: With a DVCS, developers can work offline and commit their changes later when they have an internet connection. They can also choose to share their changes with only a subset of the team, rather than pushing all of their changes to a central server.

4) Enhanced security: In a DVCS, the repository history is stored on multiple servers and computers, which makes it more resistant to data loss. If the central server in a CVCS goes down or the repository becomes corrupted, it can be difficult to recover the lost data.

Overall, the decentralized nature of a DVCS allows for greater collaboration, flexibility, and security, making it a popular choice for many teams.

## Task:

- Install Git on your computer (if it is not already installed). You can download it from the official website at https://git-scm.com/downloads
- Create a free account on GitHub (if you don't already have one). You can sign up at https://github.com/
- Learn the basics of Git by watching this [video](https://youtu.be/AT1uxOLsCdk). This will give you an understanding of what Git is, how it works, and how to use it to track changes to files.

## Exercises:

1) Create a new repository on GitHub and clone it to your local machine
2) Make some changes to a file in the repository and commit them to the repository using Git
3) Push the changes back to the repository on GitHub
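A minimal walkthrough of these exercises might look like this (the repository name and URL are placeholders, and the default branch may be `main` or `master` depending on your setup):

```bash
# 1. Clone a repository you created on GitHub
git clone https://github.com/<your-username>/demo-repo.git
cd demo-repo

# 2. Change a file and commit it
echo "hello" >> notes.txt
git add notes.txt
git commit -m "Add notes.txt"

# 3. Push the change back to GitHub
git push origin main
```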
Ref: https://youtu.be/AT1uxOLsCdk

Post your daily work on LinkedIn and let me know; writing an article is the best :)
--------------------------------------------------------------------------------
/2023/day09/TaskComplete.md:
--------------------------------------------------------------------------------
# What is Git and why is it important?
Git is a distributed version control system (DVCS) that allows developers to manage the changes in their code over time. It was initially created in 2005 by Linus Torvalds, the developer of the Linux operating system, as a way to manage the large number of developers working on the Linux kernel.

With Git, developers can track changes to their code, collaborate with other developers, and easily manage and roll back changes when necessary. It also provides a powerful set of tools for merging code changes, resolving conflicts, and identifying bugs.

One of the key benefits of using Git is that it allows developers to work on the same codebase simultaneously without overwriting each other's changes. It does this by keeping track of different versions of the code and allowing developers to merge their changes into a single version of the code. This makes it easy for multiple developers to work together on the same codebase and ensures that the code is always in a working state.

Another important aspect of Git is that it is distributed, which means that each developer has a copy of the entire codebase and its history on their own machine. This allows developers to work offline, and also makes it easy for them to create backups of their work.

Git is also widely used as a tool for open-source software development. Many open-source projects are hosted on platforms like GitHub, GitLab and Bitbucket, which provide a web-based interface for managing Git repositories. This allows developers from all over the world to collaborate on the same codebase and contribute to the project.

All in all, Git is a powerful and versatile tool that has become an essential part of modern software development. It allows developers to collaborate efficiently, track and store changes in the code over time, and easily maintain different versions of the code, which in turn helps organizations deliver software in a more timely manner and ensure that the code is reliable and maintainable over time.

# What is the difference between the main branch and the master branch?
In Git, the terms "main" and "master" are often used interchangeably to refer to the primary branch in a repository. This is typically the branch where the main development work takes place, and from which other branches are created.

In practice, the name "main" is used as a more inclusive term for the default branch in a repository, with the goal of creating a more welcoming and inclusive environment, whereas the term "master" was more commonly used in the past.

The main difference between them is just a naming convention. Some organizations or developers prefer to use "main" as the default branch name instead of "master", as it better reflects the idea that the branch is the main development line and all other branches are derived from it. In other cases, developers and organizations have been using "master" for a long time, and they see no reason to change.

In terms of functionality, there's no difference between main and master branches; the changes are committed, merged, and tracked in the same way regardless of what they are called. Git is a tool that allows for flexibility and customizability, and naming conventions for branches can be set according to the organization's preference or the project's specific context.
# Can you explain the difference between Git and GitHub?
Git and GitHub are related but separate technologies. Git is a version control system (VCS) that allows developers to track changes in their code over time and collaborate with other developers on the same codebase. GitHub, on the other hand, is a web-based platform that provides hosting for Git repositories and a number of additional features.

Git is a command-line tool that developers can use to manage their code locally on their own machines. It allows developers to create a repository (a collection of files and directories that are tracked by Git), and then use a set of commands to track changes to the files in the repository, collaborate with other developers, and merge changes into the main codebase.

GitHub, on the other hand, is a web-based platform that provides hosting for Git repositories. It allows developers to create remote repositories (i.e., repositories that are stored on GitHub's servers), and then use a web-based interface to manage the files in the repository, collaborate with other developers, and merge changes into the main codebase.

One of the main benefits of using GitHub is that it makes it easy for developers to collaborate on the same codebase from different locations. GitHub provides a number of tools for managing code contributions from multiple developers, including pull requests, which are a way for developers to submit changes to the main codebase for review, and the ability to assign tasks, track issues, etc.

Another benefit of using GitHub is that it provides a web-based interface for managing Git repositories, which can be more user-friendly than the command-line interface of Git, especially for developers who are new to Git or prefer a graphical user interface.

In summary, Git is a version control system that allows developers to manage their code locally and collaborate with other developers by tracking changes in their code over time and merging them into a single version of the code. GitHub is a web-based platform that provides hosting for Git repositories and a number of additional features that make it easy for developers to collaborate on the same codebase from different locations and to manage code contributions. While Git is the underlying technology that makes GitHub work, GitHub provides a more user-friendly and collaborative environment for developers.

# How do you create a new repository on GitHub?
Go to the GitHub website and log in to your account.

In the top-right corner of the screen, click the plus (+) button and select "New repository" from the drop-down menu.

On the next page, enter a name for your repository, a short description (optional), and select whether you want the repository to be public or private. Public repositories are visible to anyone, while private repositories can only be accessed by you and the people you invite.

You also have the option of initializing the repository with a README file, or adding a .gitignore and a license.

Once you're done, click the "Create repository" button.

The new repository will be created and you will be taken to the repository's main page, where you can see the files in the repository and manage the repository's settings.

# What is the difference between a local and a remote repository? How do you connect local to remote?
A local repository is a version of a project that is stored on your own machine, while a remote repository is a version of a project that is stored on a separate machine or server, such as on a hosting service like GitHub or GitLab.

One of the main differences between a local and a remote repository is that a local repository is only accessible to you on your own machine, while a remote repository is typically accessible to multiple people over the internet. This makes it easy for multiple developers to collaborate on the same codebase, even if they are working from different locations.
Another difference is that a local repository typically stores the entire history of the project, including all of its commits and branches, while a remote repository may only store the most recent version of the project.

## To connect a local repository to a remote repository, you need to do the following:

1) First, create a new repository on the remote host, such as GitHub.

2) On your local machine, navigate to the local repository you want to connect to the remote repository.

3) Add the remote repository as an origin to your local repository using the git command `git remote add origin <remote repository URL>`.

4) Push your local repository's master branch to the remote repository using the command `git push -u origin master`. This command will upload the entire history of your local repository to the remote repository.

5) From then on, you can work on the local repository, then push your changes to the remote repository to share them with other collaborators, and also pull the changes made by others into your local repository.
--------------------------------------------------------------------------------
/2023/day09/tasks.md:
--------------------------------------------------------------------------------
# Day 9 Task: Deep Dive into Git & GitHub for DevOps Engineers.

## Find the answers to the questions below based on your own understanding (they shouldn't be copied from the internet; use hand-made diagrams) and write a blog on them.
1) What is Git and why is it important?
2) What is the difference between the main branch and the master branch?
3) Can you explain the difference between Git and GitHub?
4) How do you create a new repository on GitHub?
5) What is the difference between a local and a remote repository? How do you connect local to remote?

## Tasks
task-1:
- Set your user name and email address, which will be associated with your commits.

task-2:
- Create a repository named "Devops" on GitHub
- Connect your local repository to the repository on GitHub.
- Create a new file Devops/Git/Day-02.txt and add some content to it
- Push your local commits to the repository on GitHub
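One possible command sequence for these tasks (the name, email, and URL are placeholders, and the default branch may be `main` or `master` depending on your setup):

```bash
# task-1: set the identity associated with your commits
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# task-2: create a local repo, connect it to the GitHub repo "Devops", and push a file
git init Devops && cd Devops
git remote add origin https://github.com/<your-username>/Devops.git
mkdir -p Git
echo "some content" > Git/Day-02.txt
git add Git/Day-02.txt
git commit -m "Add Day-02.txt"
git push -u origin main
```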
Ref: https://youtu.be/AT1uxOLsCdk

Note: These steps assume that you have already installed Git on your computer and have created a GitHub account. If you need help with these prerequisites, you can refer to [day-08](https://github.com/LondheShubham153/90DaysOfDevOps/blob/ee7c53f276edb02a85a97282027028295be17c04/2023/day08/tasks.md)
--------------------------------------------------------------------------------
/2023/day10/Day 10 Task.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day10/Day 10 Task.pdf
--------------------------------------------------------------------------------
/2023/day10/tasks.md:
--------------------------------------------------------------------------------
# Day 10 Task: Advanced Git & GitHub for DevOps Engineers.

## Git Branching
Use a branch to isolate development work without affecting other branches in the repository. Each repository has one default branch and can have multiple other branches. You can merge a branch into another branch using a pull request.

Branches allow you to develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository.

## Git Revert and Reset
Two commonly used tools that Git users will encounter are git reset and git revert. The benefit of both of these commands is that you can use them to remove or edit changes you've made in the code in previous commits.

## Git Rebase and Merge
### What is Git rebase?

Git rebase is a command that lets users integrate changes from one branch to another, and the logs are modified once the action is complete. Git rebase was developed to overcome merging's shortcomings, specifically regarding logs.

### What is Git merge?

Git merge is a command that allows developers to merge Git branches while the logs of commits on the branches remain intact.

The merge wording can be confusing because we have two methods of merging branches, and one of those ways is actually called "merge," even though both procedures do essentially the same thing.

Refer to this article for a better understanding of Git rebase and merge: [Read here](https://www.simplilearn.com/git-rebase-vs-merge-article)

## Task 1:
Add a text file called version01.txt inside Devops/Git/ with "This is first feature of our application" written inside. This should be in a branch coming from `master` [hint: try `git checkout -b dev`]; switch to the `dev` branch (make sure your commit message reads "Added new feature"). [Hint: use your knowledge of creating branches and the Git commit command.]

- version01.txt should reflect in the local repo first, followed by the remote repo, for review. [Hint: use your knowledge of the git push and git pull commands here.]

Add new commits in the `dev` branch after adding the below-mentioned content in Devops/Git/version01.txt. While writing the file, make sure you write these lines:

- 1st line>> This is the bug fix in development branch
  - Commit this with the message "Added feature2 in development branch"
- 2nd line>> This is gadbad code
  - Commit this with the message "Added feature3 in development branch"
- 3rd line>> This feature will gadbad everything from now.
  - Commit with the message "Added feature4 in development branch"

Restore the file to a previous version where the content should be "This is the bug fix in development branch" [hint: use git revert or reset according to your knowledge].
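A sketch of how Task 1 could play out (the middle commits are abbreviated; `git reset` is shown, but `git revert` also works):

```bash
git checkout -b dev
echo "This is first feature of our application" > Devops/Git/version01.txt
git add Devops/Git/version01.txt && git commit -m "Added new feature"
git push -u origin dev

# ...three more commits as described above, then step back to the bug-fix version:
git log --oneline                # find the hash of "Added feature2 in development branch"
git reset --hard <commit-hash>   # restore the file to that version
```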
## Task 2:

- Demonstrate the concept of branches with 2 or more branches, with screenshots.
- Add some changes to the `dev` branch and merge that branch into `master`.
- As a practice, try git rebase too, and see what difference you get.

## Note:
We should learn and follow the [best practices](https://www.flagship.io/git-branching-strategies/) that the industry follows for branching.

Simple reference on branching: [video](https://youtu.be/NzjK9beT_CY)

Advanced reference on branching: [video](https://youtu.be/7xhkEQS3dXw)

You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
--------------------------------------------------------------------------------
/2023/day11/Day 11 Task.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day11/Day 11 Task.pdf
--------------------------------------------------------------------------------
/2023/day11/tasks.md:
--------------------------------------------------------------------------------
# Day 11 Task: Advanced Git & GitHub for DevOps Engineers: Part 2

## Git Stash:
Git stash is a command that allows you to temporarily save changes you have made in your working directory, without committing them. This is useful when you need to switch to a different branch to work on something else, but you don't want to commit the changes you've made in your current branch yet.

To use Git stash, you first create a new branch and make some changes to it. Then you can use the command git stash to save those changes. This will remove the changes from your working directory and record them in a new stash. You can apply these changes later. The git stash list command shows the list of stashed changes.

You can also use git stash drop to delete a stash and git stash clear to delete all the stashes.

## Cherry-pick:
Git cherry-pick is a command that allows you to select specific commits from one branch and apply them to another. This can be useful when you want to selectively apply changes that were made in one branch to another.

To use git cherry-pick, you first create two new branches and make some commits to them. Then you use the git cherry-pick command to select the specific commits from one branch and apply them to the other.

## Resolving Conflicts:
Conflicts can occur when you merge or rebase branches that have diverged, and you need to manually resolve the conflicts before Git can proceed with the merge/rebase. The git status command shows the files that have conflicts, the git diff command shows the difference between the conflicting versions, and the git add command is used to add the resolved files.

# Task-01
- Create a new branch and make some changes to it.
- Use git stash to save the changes without committing them.
- Switch to a different branch, make some changes and commit them.
- Use git stash pop to bring the changes back and apply them on top of the new commits.
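Task-01 expressed as one possible command sequence (the branch and file names are assumptions):

```bash
git checkout -b feature-x
echo "work in progress" >> app.txt
git stash                 # save the uncommitted changes

git checkout dev
echo "other change" >> readme.md
git add readme.md && git commit -m "Update readme"

git checkout feature-x
git stash pop             # re-apply the stashed changes on top
```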
# Task-02
- In version01.txt of the development branch, add the lines below after "This is the bug fix in development branch" that you added on Day 10 and reverted to this commit.
- Line2>> After bug fixing, this is the new feature with minor alteration"

  Commit this with the message "Added feature2.1 in development branch"
- Line3>> This is the advancement of previous feature

  Commit this with the message "Added feature2.2 in development branch"
- Line4>> Feature 2 is completed and ready for release

  Commit this with the message "Feature2 completed"
- All these commit messages should be reflected in the Production branch too, which will come out from the Master branch (hint: try rebase).

# Task-03
- In the Production branch, cherry-pick the commit "Added feature2.2 in development branch" and add the lines below to it:
- Line to be added after Line3>> This is the advancement of previous feature
- Line4>> Added few more changes to make it more optimized.
- Commit: Optimized the feature

## Reference [video](https://youtu.be/apGV9Kg7ics)

You can post on LinkedIn and let us know what you have learned from this task in the #90DaysOfDevOps challenge. Happy Learning :)
--------------------------------------------------------------------------------
/2023/day12/Cheetsheet.md:
--------------------------------------------------------------------------------
# All Linux & Git-GitHub Commands

If you want to download all Git commands, [click here](https://github.com/RishikeshOps/90DaysOfDevOps/blob/43a29c7b9f747cecfa3d718258058e9341d4bce2/2023/day12/Git%20CheetSheet_1.pdf) or shoot me an [email](mailto:rishikeshmashidkar@gmail.com) and I will send you the original Word file.

## **All About Linux**
### System administration:

- ```uptime```: Shows how long the system has been running and the number of users currently logged in
- ```free```: Shows the amount of free and used memory in the system
- ```top```: Shows the running processes and their resource usage
- ```ps```: Shows the running processes
- ```kill```: Sends a signal to a process to terminate it
- ```lsof```: Lists open files and the processes that have them open
- ```netstat```: Shows network connections, routing tables, and interface statistics
- ```df```: Shows the amount of free space on a file system
- ```du```: Shows the size of a directory
- ```mount```: Mounts a file system
- ```umount```: Unmounts a file system
- ```chmod```: Changes the permissions of a file or directory
- ```chown```: Changes the owner and group of a file or directory
- ```useradd```: Adds a new user to the system
- ```userdel```: Deletes a user from the system
- ```passwd```: Changes a user's password
- ```crontab```: Schedules commands to be executed automatically

### Networking:

- ```ping```: Tests the reachability of a host
- ```traceroute```: Shows the route packets take to a host
- ```telnet```: Opens a telnet connection to a host
- ```nc```: Opens a network connection
- ```ssh```: Opens a secure shell connection to a remote host
- ```scp```: Copies files over a secure shell connection
- ```ftp```: Opens an ftp connection to a host

### File management:

- ```ls```: Lists the files and directories in the current directory
- ```cd```: Changes the current working directory
- ```mkdir```: Creates a new directory
- ```touch```: Creates a new file
- ```rm```: Deletes a file
- ```rmdir```: Deletes an empty directory
- ```mv```: Renames or moves a file or directory
- ```cp```: Copies a file or directory
- ```pwd```: Prints the current working directory
- ```cat```: Prints the contents of a file
- ```echo```: Prints a message to the console
- ```less```: Allows you to view the contents of a file one page at a time
- ```grep```: Searches for a pattern in a file or a stream of text
- ```sed```: Modifies the contents of a file using a script
- ```awk```: Processes text files and data streams
- ```find```: Searches for files and directories
- ```tar```: Creates and extracts archive files
- ```gzip```: Compresses and decompresses files
- ```unzip```: Uncompresses archive files
- ```locate```: Finds files by name
- ```which```: Shows the path of a command
### Process management:
- ```ps```: Shows the running processes
- ```kill```: Sends a signal to a process to terminate it
- ```top```: Shows the running processes and their resource usage
- ```htop```: Interactive process viewer
- ```nohup```: Runs a command immune to hangups, with output to a non-tty

## **All About Git-GitHub**

### Repository management:

- ```git init```: Initializes a new Git repository
- ```git clone```: Copies an existing repository from a remote location
- ```git remote```: Shows the remote repository
- ```git remote add```: Adds a new remote repository
- ```git remote remove```: Removes a remote repository
- ```git remote rename```: Renames a remote repository
- ```git remote set-url```: Changes the URL of a remote repository
- ```git remote -v```: Shows the remote repository and its URL

### Branching:

- ```git branch```: Shows the branches in the repository and indicates the current branch
- ```git branch -a```: Shows all branches, including remote branches
- ```git branch -r```: Shows only remote branches
- ```git branch -v```: Shows the last commit on each branch
- ```git branch [branch name]```: Creates a new branch with the given name
- ```git branch -d [branch name]```: Deletes the branch with the given name
- ```git branch -D [branch name]```: Force-deletes the branch with the given name
- ```git checkout [branch name]```: Switches to the branch with the given name
- ```git checkout -b [branch name]```: Creates a new branch with the given name and switches to it
- ```git switch -c [branch name]```: Creates a new branch and switches to it
- ```git merge [branch name]```: Merges changes from the branch with the given name into the current branch
- ```git rebase [branch name]```: Reapplies commits from the current branch on top of the branch with the given name

### Committing:

- ```git status```: Shows the status of the repository
- ```git add [file name]```: Adds the file with the given name to the staging area
- ```git add .```: Adds all changes in the current directory to the staging area
- ```git reset [file name]```: Removes the file with the given name from the staging area
- ```git commit -m "[message]"```: Creates a new commit with the changes in the staging area and the given message
- ```git commit --amend -m "[message]"```: Amends the last commit with the changes in the staging area and the given message
- ```git commit --amend --no-edit```: Amends the last commit with the changes in the staging area and keeps the same commit message
- ```git commit -a -m "[message]"```: Commits tracked changes directly without staging them
- ```git log```: Shows the commit history
- ```git diff```: Shows the differences between the working directory and the last commit

### Reverting:

- ```git reset [commit hash]```: Reverts the repository to the state of the commit with the given hash
- ```git reset --hard [commit hash]```: Reverts the repository to the state of the commit with the given hash and discards all changes since that commit
- ```git revert [commit hash]```: Creates a new commit that undoes the changes made in the commit with the given hash

### Synchronizing:

- ```git fetch [remote name]```: Downloads new commits from the remote repository with the given name
[branch name]```: Fetches and merges changes from the remote repository with the given name and the branch with the given name into the current branch
117 | - ```git push [remote name] [branch name]```: Uploads commits to the remote repository with the given name and the branch with the given name
118 | - ```git push -f [remote name] [branch name] ```: Force pushes the commits to the remote repository with the given name and the branch with the given name
119 | - ```git push [remote name] --all```: Uploads all branches to the remote repository with the given name
120 | - ```git push [remote name] --tags```: Uploads all tags to the remote repository with the given name
121 | - ```git remote prune [remote name]```: Removes remote-tracking branches that were deleted on the remote
122 | - ```git pull --rebase```: This will integrate changes from a remote repository by reapplying your local commits on top of the updated remote head. This will avoid creating unnecessary merge commits
123 | - ```git pull --rebase -X theirs``` : This will resolve merge conflicts by taking the version of the file from the remote repository
124 | - ```git pull --rebase -X ours``` : This will resolve merge conflicts by keeping the version of the file you have locally
125 | 
126 | ### Stashing:
127 | 
128 | - ```git stash```: Temporarily saves changes that are not ready to be committed
129 | - ```git stash list```: Shows the list of stashes
130 | - ```git stash apply [stash name]```: Applies changes from the stash with the given name
131 | - ```git stash drop [stash name]```: Deletes the stash with the given name
132 | - ```git stash pop [stash name] ```: Applies changes from the stash with the given name and removes it from the stash list
133 | 
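A short end-to-end example of how these commands fit together in a typical feature-branch flow (the repository URL and branch name below are made up):

```bash
git clone https://github.com/example-user/example-repo.git
cd example-repo
git checkout -b feature/add-docs      # create and switch to a new branch
echo "docs note" >> README.md
git add README.md
git commit -m "Add a docs note"
git push origin feature/add-docs      # publish the branch to the remote
git fetch origin                      # later: download new remote commits
git pull --rebase origin main         # replay your work on top of main
```
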
-------------------------------------------------------------------------------- /2023/day12/Git CheetSheet_1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day12/Git CheetSheet_1.pdf -------------------------------------------------------------------------------- /2023/day12/tasks.md: --------------------------------------------------------------------------------
1 | ## Finally!! 🎉
2 | You have completed the Linux & Git-GitHub hands-on and I hope you have learned something interesting from it.🙌
3 | 
4 | Now why not make an interesting 😉 assignment, which not only will help you for the future but also for the DevOps Community!
5 | 
6 | Let's make a well-articulated and documented **"cheat-sheet"** with all the commands you learned so far in Linux and Git-GitHub, with brief info about their usage.
7 | 
8 | Show us your knowledge mixed with your creativity😎
9 | 
10 | *I have added a [cheatsheet](https://www.sqltutorial.org/wp-content/uploads/2016/04/SQL-Cheat-Sheet-2.png) for your reference, Make sure your cheatsheet is UNIQUE*
11 | 
12 | Post it on Linkedin and Spread the knowledge.😃
13 | 
14 | **Happy Learning :)**
15 | 
-------------------------------------------------------------------------------- /2023/day13/Screenshot_20230113_210801.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day13/Screenshot_20230113_210801.png -------------------------------------------------------------------------------- /2023/day13/task_complete.md: --------------------------------------------------------------------------------
1 | ## What is Python?
2 | Python is a high-level, interpreted programming language that is widely used for web development, scientific computing, data analysis, artificial intelligence, and other applications. It is known for its clear syntax, readability, and support for multiple programming paradigms such as object-oriented, functional, and procedural. Python can be used to build a wide range of applications, from simple command-line scripts to complex web applications. It was created by **Guido van Rossum**. It has a large and active community which provides a wealth of libraries, frameworks, and modules to make development faster and easier. Python has a very simple and easy-to-learn syntax, making it an excellent choice for beginners. It also offers many libraries and frameworks that make it suitable for advanced programming as well.
3 | 
4 | ## What are Variables and Data Types?
5 | In Python, a data type is a classification of a particular kind of data that defines the possible values and operations that can be performed on that data. Some of the most commonly used data types in Python include:
6 | - **Integers**: Represent whole numbers, e.g. 1, 2, 3, -1, -2.
7 | - **Floats**: Represent decimal numbers, e.g. 1.5, 2.5, 3.14.
8 | - **Strings**: Represent sequences of characters, e.g. "hello", "world".
9 | - **Booleans**: Represent true or false values, e.g. True, False.
10 | - **Lists** : A collection of items in a particular order.
11 | - **Tuples** : A collection of items in a particular order, but unlike lists, tuples are immutable.
12 | - **Dictionaries**: A collection of key-value pairs.
13 | - **Sets**: A collection of unique items.
14 | 
15 | A variable in Python is a named location in memory that stores a value. In Python, you can create a variable by giving it a name and assigning a value to it using the assignment operator (=). For example:
16 | ```
17 | x = 5
18 | name = "Rishikesh"
19 | is_student = True
20 | ```
21 | In Python, the data type of a variable is determined automatically based on the value that is assigned to it. For example, the variable x above is of type int (integer) because it stores the value 5, and the variable name is of type str (string) because it stores the value "Rishikesh".
22 | 
23 | It's worth mentioning that Python is a dynamically typed language, which means that we don't have to explicitly specify the data type of a variable; the interpreter will automatically identify the type of the variable based on the value assigned to it.
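A tiny illustration of this dynamic typing (the variable name is just an example):

```python
# the same name can be rebound to values of different types;
# type() shows what the interpreter inferred
value = 5
print(type(value))    # <class 'int'>

value = "Rishikesh"
print(type(value))    # <class 'str'>

value = 3.14
print(type(value))    # <class 'float'>
```
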
24 | 
25 | 
26 | # Installed Python on my system & ran the very first program ![Watch_here](https://github.com/RishikeshOps/90DaysOfDevOps/blob/f96270a47ceb2350f96b5d3a9b987b9786747f08/2023/day13/Screenshot_20230113_210801.png)
27 | 
-------------------------------------------------------------------------------- /2023/day13/tasks.md: --------------------------------------------------------------------------------
1 | Hello friends 😎
2 | 
3 | Let's start with the basics of Python, as this is also important for a DevOps Engineer to build logic and programs.
4 | 
5 | **What is Python?**
6 | 
7 | - Python is an open-source, general-purpose, high-level, and object-oriented programming language.
8 | - It was created by **Guido van Rossum**
9 | - Python consists of vast libraries and various frameworks like Django, TensorFlow, Flask, Pandas, Keras etc.
10 | 
11 | 
12 | **How to Install Python?**
13 | 
14 | You can install Python on your system whether it is Windows, macOS, Ubuntu, CentOS etc. Below are the links for the installation:
15 | - [Windows Installation](https://www.python.org/downloads/)
16 | - Ubuntu: sudo apt-get install python3.6
17 | 
18 | 
19 | 
20 | Task1:
21 | 1. Install Python in your respective OS, and check the version.
22 | 2. Read about different Data Types in Python.
23 | 
24 | 
25 | You can get the complete Playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM)🙌
26 | 
27 | Don't forget to share your Journey over linkedin. Let the community know that you have started another chapter of your Journey.
28 | 
29 | Happy Learning, don't stop, keep crushing it😃
30 | 
31 | 
-------------------------------------------------------------------------------- /2023/day14/Day 14 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day14/Day 14 Task.pdf -------------------------------------------------------------------------------- /2023/day14/tasks.md: --------------------------------------------------------------------------------
1 | ## Day 14 Task: Python Data Types and Data Structures for DevOps
2 | 
3 | ### New day, New Topic.... Let's learn along 😉
4 | 
5 | ### Data Types
6 | - Data types are the classification or categorization of data items. A data type represents the kind of value and tells what operations can be performed on a particular piece of data.
7 | - Since everything is an object in Python programming, data types are actually classes and variables are instances (objects) of these classes.
8 | - Python has the following data types built-in by default: Numeric (integer, complex, float), Sequential (string, list, tuple), Boolean, Set, Dictionary, etc.
9 | 
10 | To check the data type of a variable, we can simply write:
11 | ```your_variable=100```
12 | ```type(your_variable)```
13 | 
14 | ### Data Structures
15 | 
16 | Data Structures are a way of organizing data so that it can be accessed more efficiently depending upon the situation. Data Structures are fundamentals of any programming language around which a program is built. Python makes it simpler to learn the fundamentals of these data structures compared to other programming languages.
17 | 
18 | - Lists
19 | Python lists are just like the arrays declared in other languages: an ordered collection of data.
It is very flexible, as the items in a list do not need to be of the same type.
20 | 
21 | - Tuple
22 | A Python Tuple is a collection of Python objects much like a list, but Tuples are immutable in nature, i.e. the elements in the tuple cannot be added or removed once created. Just like a List, a Tuple can also contain elements of various types.
23 | 
24 | - Dictionary
25 | A Python dictionary is like the hash tables in other languages, with O(1) average lookup time. It is a collection of key:value pairs (insertion-ordered since Python 3.7), used to store data values like a map: unlike other data types that hold only a single value as an element, a Dictionary holds key:value pairs, which is what makes lookups so efficient.
26 | 
27 | ## Tasks
28 | 1. Give the difference between List, Tuple and Set. Do hands-on and put screenshots as per your understanding.
29 | 2. Create the below Dictionary and use Dictionary methods to print your favourite tool just by using the keys of the Dictionary.
30 | ```
31 | fav_tools = 
32 | {
33 |   1:"Linux",
34 |   2:"Git",
35 |   3:"Docker", 
36 |   4:"Kubernetes",
37 |   5:"Terraform",
38 |   6:"Ansible",
39 |   7:"Chef"
40 | }
41 | ```
42 | 3. Create a List of cloud service providers
43 | e.g.
44 | ```
45 | cloud_providers = ["AWS","GCP","Azure"]
46 | ```
47 | Write a program to add `Digital Ocean` to the list of cloud_providers and sort the list in alphabetical order (one possible approach is sketched below).
48 | 
49 | [Hint: Use the built-in functions for Lists]
50 | 
51 | If you want to deep dive further, Watch [Python](https://youtu.be/abPgj_3hzVY)
52 | 
53 | You can share the learning with everyone over linkedin and tag us along 😃
54 | 
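One possible way to solve tasks 2 and 3 (this is only a sketch, not the single correct answer):

```python
fav_tools = {
    1: "Linux",
    2: "Git",
    3: "Docker",
    4: "Kubernetes",
    5: "Terraform",
    6: "Ansible",
    7: "Chef",
}

# Task 2: print a favourite tool using its key
print(fav_tools.get(3))        # Docker

# Task 3: add Digital Ocean and sort alphabetically
cloud_providers = ["AWS", "GCP", "Azure"]
cloud_providers.append("Digital Ocean")
cloud_providers.sort()
print(cloud_providers)         # ['AWS', 'Azure', 'Digital Ocean', 'GCP']
```
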
-------------------------------------------------------------------------------- /2023/day15/parser.py: --------------------------------------------------------------------------------
1 | import json
2 | import yaml
3 | 
4 | json_file = "services.json"
5 | yaml_file = "services.yaml"
6 | 
7 | with open(json_file, 'r', encoding='utf-8') as f:
8 |     json_data = json.loads(f.read())
9 | 
10 | print("JSON:\n", json_data)
11 | 
12 | with open(yaml_file, "r") as stream:
13 |     try:
14 |         yaml_data = yaml.safe_load(stream)
15 |     except yaml.YAMLError as exc:
16 |         print(exc)
17 | 
18 | 
19 | print("YAML:\n", yaml_data)
-------------------------------------------------------------------------------- /2023/day15/services.json: --------------------------------------------------------------------------------
1 | {
2 |     "services": {
3 |         "debug": "on",
4 |         "aws": {
5 |             "name": "EC2",
6 |             "type": "pay per hour",
7 |             "instances": 500,
8 |             "count": 500
9 |         },
10 |         "azure": {
11 |             "name": "VM",
12 |             "type": "pay per hour",
13 |             "instances": 500,
14 |             "count": 500
15 |         },
16 |         "gcp": {
17 |             "name": "Compute Engine",
18 |             "type": "pay per hour",
19 |             "instances": 500,
20 |             "count": 500
21 |         }
22 |     }
23 | }
-------------------------------------------------------------------------------- /2023/day15/services.yaml: --------------------------------------------------------------------------------
1 | services:
2 |   debug: 'on'
3 |   aws:
4 |     name: EC2
5 |     type: pay per hour
6 |     instances: 500
7 |     count: 500
8 |   azure:
9 |     name: VM
10 |     type: pay per hour
11 |     instances: 500
12 |     count: 500
13 |   gcp:
14 |     name: Compute Engine
15 |     type: pay per hour
16 |     instances: 500
17 |     count: 500
18 | 
-------------------------------------------------------------------------------- /2023/day15/solution-2.py: --------------------------------------------------------------------------------
1 | import yaml
2 | import json
3 | 
4 | # Read YAML file
5 | with open("services.yaml") as f:
6 |     output = yaml.load(f, Loader=yaml.FullLoader)
7 | 
8 | # Convert YAML to JSON
9 | JSON_data = json.dumps(output)
10 | 
11 | # Write JSON to a file
12 | with open("services.json", "w") as JSON_file:
13 |     JSON_file.write(JSON_data)
14 | 
15 | 
16 | 
-------------------------------------------------------------------------------- /2023/day15/solution.py: --------------------------------------------------------------------------------
1 | import json
2 | 
3 | # Dictionary of cloud providers and their compute services
4 | cloud_VM_name = {
5 |     "aws": "ec2",
6 |     "azure": "VM",
7 |     "gcp": "compute engine"
8 | }
9 | 
10 | # Write the dictionary to a JSON file
11 | with open("services.json", "w") as json_file:
12 |     json.dump(cloud_VM_name, json_file)
13 | 
14 | # Read the JSON file
15 | with open("services.json", "r") as JSON_file:
16 |     data = json.load(JSON_file)
17 | 
18 | # Print the service names of every cloud service provider
19 | for provider, service in data.items():
20 |     print(f"{provider} : {service}")
21 | 
-------------------------------------------------------------------------------- /2023/day15/tasks.md: --------------------------------------------------------------------------------
1 | ## Day 15 Task: Python Libraries for DevOps
2 | 
3 | ### Reading JSON and YAML in Python
4 | 
5 | - As a DevOps Engineer you should be able to parse files, be it txt, json, yaml, etc.
6 | - You should know which libraries to use in Python for DevOps.
7 | - Python has numerous libraries like `os`, `sys`, `json`, `yaml` etc. that a DevOps Engineer uses in day-to-day tasks.
8 | 
9 | 
10 | 
11 | ## Tasks
12 | 1. Create a Dictionary in Python and write it to a json File.
13 | 
14 | 2. Read a json file `services.json` kept in this folder and print the service names of every cloud service provider.
15 | 
16 | ```
17 | output
18 | 
19 | aws : ec2
20 | azure : VM
21 | gcp : compute engine
22 | 
23 | ```
24 | 3. Read the YAML file `services.yaml` using Python and convert its contents to JSON.
25 | 
26 | Python Project for your practice:
27 | https://youtube.com/playlist?list=PLlfy9GnSVerSzFmQ8JqP9v0XHHOAeWbjo
-------------------------------------------------------------------------------- /2023/day16/Day 16 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day16/Day 16 Task.pdf -------------------------------------------------------------------------------- /2023/day16/tasks.md: --------------------------------------------------------------------------------
1 | ## Day 16 Task: Docker for DevOps Engineers.
2 | 
3 | 
4 | ### Docker
5 | Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
6 | 
7 | # Tasks
8 | 
9 | As you have already installed docker in previous days' tasks, now is the time to run Docker commands.
10 | 
11 | - Use the `docker run` command to start a new container and interact with it through the command line. [Hint: docker run hello-world]
12 | 
13 | - Use the `docker inspect` command to view detailed information about a container or image.
14 | 
15 | - Use the `docker port` command to list the port mappings for a container.
16 | 
17 | - Use the `docker stats` command to view resource usage statistics for one or more containers.
18 | 
19 | - Use the `docker top` command to view the processes running inside a container.
20 | 
21 | - Use the `docker save` command to save an image to a tar archive.
22 | 
23 | - Use the `docker load` command to load an image from a tar archive.
24 | 
25 | These tasks involve simple operations that can be used to manage images and containers; a sample session is sketched below.
26 | 
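A sample session covering the commands above (the container name `web` and the nginx image are illustrative choices, not part of the task):

```bash
docker run -d -p 80:80 --name web nginx   # start a container from an image
docker inspect web                        # detailed info about the container (JSON)
docker port web                           # its port mappings
docker stats web                          # live resource usage (Ctrl+C to exit)
docker top web                            # processes running inside it
docker save -o nginx.tar nginx            # save the image to a tar archive
docker load -i nginx.tar                  # load the image back from the archive
```
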
27 | For reference you can watch this video:
28 | https://youtu.be/Tevxhn6Odc8
29 | 
30 | You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :)
-------------------------------------------------------------------------------- /2023/day17/Day 17 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day17/Day 17 Task.pdf -------------------------------------------------------------------------------- /2023/day17/tasks.md: --------------------------------------------------------------------------------
1 | ## Day 17 Task: Docker Project for DevOps Engineers.
2 | 
3 | ### You people are doing just amazing in **#90daysofdevops**. Today's challenge is special because you are going to do a DevOps project with Docker. Are you excited? 😍
4 | 
5 | # Dockerfile
6 | 
7 | Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile.
8 | 
9 | A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts.
10 | 
11 | For more about Dockerfile visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes)
12 | 
13 | Task:
14 | 
15 | - Create a Dockerfile for a simple web application (e.g. a Node.js or Python app); a minimal sketch is given after this list
16 | 
17 | - Build the image using the Dockerfile and run the container
18 | 
19 | - Verify that the application is working as expected by accessing it in a web browser
20 | 
21 | - Push the image to a public or private repository (e.g. Docker Hub)
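A minimal sketch of such a Dockerfile, assuming a Python Flask app whose entry point is `app.py` and which listens on port 5000 (the file names and port are assumptions, adjust them to your app):

```dockerfile
# small base image with Python preinstalled
FROM python:3.9-slim
WORKDIR /app
# install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy the application code
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

Build and run it with something like `docker build -t my-web-app .` and `docker run -d -p 5000:5000 my-web-app`, then push with `docker tag` and `docker push`.
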
22 | 
23 | For a reference project visit [here](https://youtu.be/Tevxhn6Odc8)
24 | 
25 | If you want to dive further, Watch [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u)
26 | 
27 | You can share the learning with everyone over linkedin and tag us along 😃
28 | 
29 | Happy Learning:)
30 | 
31 | 
-------------------------------------------------------------------------------- /2023/day18/Day 18 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day18/Day 18 Task.pdf -------------------------------------------------------------------------------- /2023/day18/docker-compose.yaml: --------------------------------------------------------------------------------
1 | version: "3.3"
2 | services:
3 |   web:
4 |     image: nginx:latest
5 |     ports:
6 |       - "80:80"
7 |   db:
8 |     image: mysql
9 |     ports:
10 |       - "3306:3306"
11 |     environment:
12 |       - "MYSQL_ROOT_PASSWORD=test@123"
13 | 
-------------------------------------------------------------------------------- /2023/day18/tasks.md: --------------------------------------------------------------------------------
1 | # Day 18 Task: Docker for DevOps Engineers
2 | 
3 | Till now you have created a Dockerfile and pushed it to the Repository. Let's move forward and dig more on other Docker concepts.
4 | Let's study a bit about Docker Compose today 😃
5 | 
6 | ## Docker Compose
7 | - Docker Compose is a tool that was developed to help define and share multi-container applications.
8 | - With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down.
9 | - To learn more about docker-compose [visit here](https://tecadmin.net/tutorial/docker/docker-compose/)
10 | 
11 | ## What is YAML?
12 | - YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for yet another markup language or YAML ain't markup language (a recursive acronym), which emphasizes that YAML is for data, not documents.
13 | - YAML is popular because it is human-readable and easy to understand.
14 | - YAML files use a .yml or .yaml extension.
15 | - Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml)
16 | 
17 | ## Task-1
18 | 
19 | Learn how to use the docker-compose.yml file, to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file.
20 | 
21 | [Sample docker-compose.yaml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml)
22 | 
23 | 
24 | ## Task-2
25 | - Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint- Use the `usermod` command to give the user permission to docker). Make sure you reboot the instance after giving permission to the user.
26 | - Inspect the container's running processes and exposed ports using the docker inspect command.
27 | - Use the docker logs command to view the container's log output.
28 | - Use the docker stop and docker start commands to stop and start the container.
29 | - Use the docker rm command to remove the container when you're done.
30 | 
31 | ## How to run Docker commands without sudo?
32 | - Make sure docker is installed and the system is updated (this has already been completed as a part of previous tasks):
33 | - sudo usermod -a -G docker $USER
34 | - Reboot the machine.
35 | 
36 | For reference you can watch this [video](https://youtu.be/Tevxhn6Odc8)
37 | 
38 | You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :)
39 | 
-------------------------------------------------------------------------------- /2023/day19/Day 19 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day19/Day 19 Task.pdf -------------------------------------------------------------------------------- /2023/day19/sample_project_deployment.yaml: --------------------------------------------------------------------------------
1 | version: "3.3"
2 | services:
3 |   web:
4 |     image: varsha0108/local_django:latest
5 |     deploy:
6 |       replicas: 2
7 |     ports:
8 |       - "8001-8005:8001"
9 |     volumes:
10 |       - my_django_volume:/app
11 |   db:
12 |     image: mysql
13 |     ports:
14 |       - "3306:3306"
15 |     environment:
16 |       - "MYSQL_ROOT_PASSWORD=test@123"
17 | volumes:
18 |   my_django_volume:
19 |     external: true
20 | 
21 | 
-------------------------------------------------------------------------------- /2023/day19/tasks.md: --------------------------------------------------------------------------------
1 | # Day 19 Task: Docker for DevOps Engineers
2 | 
3 | **Till now you have learned how to create a docker-compose.yml file and push it to the Repository. Let's move forward and dig more on other docker-compose.yml concepts.**
4 | **Let's study a bit about Docker Volume & Docker Network today** 😃
5 | 
6 | # Docker-Volume
7 | Docker allows you to create something called volumes. Volumes are like separate storage areas that can be accessed by containers. They allow you to store data, like a database, outside the container, so it doesn't get deleted when the container is deleted.
8 | You can also mount the same volume into more containers, so they share the same data.
9 | [reference](https://docs.docker.com/storage/volumes/)
10 | 
11 | # Docker Network
12 | Docker allows you to create virtual spaces called networks, where you can connect multiple containers (small packages that hold all the necessary files for a specific application to run) together. This way, the containers can communicate with each other and with the host machine (the computer on which Docker is installed).
13 | When we run a container, it has its own storage space that is only accessible by that specific container; without volumes, that storage cannot be shared with other containers. [reference](https://docs.docker.com/network/)
14 | 
15 | 
16 | ## Task-1
17 | - Create a multi-container docker-compose file which will bring *UP* and bring *DOWN* containers in a single shot ( Example - Create application and database container )
18 | 
19 | *hints:*
20 | - Use the `docker-compose up` command with the `-d` flag to start a multi-container application in detached mode.
21 | - Use the `docker-compose scale` command to increase or decrease the number of replicas for a specific service. You can also add [`replicas`](https://stackoverflow.com/questions/63408708/how-to-scale-from-within-docker-compose-file) in the deployment file for *auto-scaling*.
22 | - Use the `docker-compose ps` command to view the status of all containers, and `docker-compose logs` to view the logs of a specific service.
23 | - Use the `docker-compose down` command to stop and remove all containers, networks, and volumes associated with the application.
24 | 
25 | ## Task-2
26 | - Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers.
27 | - Create two or more containers that read and write data to the same volume using the `docker run --mount` command.
28 | - Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.
29 | - Use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you're done (a sample session is sketched below).
30 | 
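A sketch of Task-2 (the volume and container names are made up):

```bash
docker volume create shared_data
# two containers mounting the same named volume
docker run -d --name writer --mount source=shared_data,target=/data alpine \
    sh -c 'echo "hello from writer" > /data/msg.txt && sleep 3600'
docker run -d --name reader --mount source=shared_data,target=/data alpine sleep 3600
# verify both containers see the same file
docker exec reader cat /data/msg.txt    # prints: hello from writer
# cleanup
docker rm -f writer reader && docker volume rm shared_data
```
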
31 | ## You can use this task as a *Project* to add to your resume.
32 | 
33 | You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :)
34 | 
-------------------------------------------------------------------------------- /2023/day20/Docker-Cheetsheet.md: --------------------------------------------------------------------------------
1 | Docker Cheatsheet 💥
2 | ==
3 | 
4 | ```bash=
5 | docker pull nginx #Pull Nginx
6 | docker run --name docker-nginx -p 80:80 nginx #Expose Nginx 80 Port
7 | docker run --name docker-nginx -p 8080:80 -d nginx #Expose 8080
8 | docker run -P nginx
9 | docker run -d -P nginx
10 | ```
11 | 
12 | ```bash=
13 | docker build -t imagename . # Create image using this directory's Dockerfile
14 | docker run -p 4000:80 imagename # Run "imagename" mapping port 4000 to 80
15 | docker run -d -p 4000:80 imagename # Same thing, but in detached mode
16 | docker run --name test-ubuntu -it ubuntu:16.04 /bin/bash
17 | docker exec -it [container-id] bash # Enter a running container
18 | docker ps # See a list of all running containers
19 | docker stop # Gracefully stop the specified container
20 | docker ps -a # See a list of all containers, even the ones not running
21 | docker kill # Force shutdown of the specified container
22 | docker rm # Remove the specified container from this machine
23 | docker rm $(docker ps -a -q) # Remove all containers from this machine
24 | docker images -a # Show all images on this machine
25 | docker rmi # Remove the specified image from this machine
26 | docker rmi $(docker images -q) # Remove all images from this machine
27 | docker logs -f # Live tail a container's logs
28 | docker login # Log in this CLI session using your Docker credentials
29 | docker tag username/repository:tag # Tag for upload to registry
30 | docker push username/repository:tag # Upload tagged image to registry
31 | docker run username/repository:tag # Run image from a registry
32 | docker system prune # Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes. (Docker 17.06.1-ce and later)
33 | docker system prune -a # Remove all unused containers, networks, images, not just dangling ones (Docker 17.06.1-ce and later)
34 | docker volume prune # Remove all unused local volumes
35 | docker network prune # Remove all unused networks
36 | 
37 | cd /usr/share/nginx/html/
38 | 
39 | docker volume create my_vol # Create a volume
40 | docker volume ls
41 | docker volume inspect my_vol # troubleshooting
42 | docker volume rm my_vol
43 | 
44 | 
45 | 
46 | ## Setup Docker in EC2
47 | # Security group: allow access to port 80 (HTTP) from anywhere
48 | # HTTP TCP 80 Anywhere
49 | 
50 | # Amazon Linux                      # Ubuntu
51 | sudo yum update -y                  sudo apt-get update
52 | sudo yum install -y docker          sudo apt-get install docker.io -y
53 | sudo service docker start          sudo service docker start
54 | sudo usermod -aG docker ec2-user    sudo usermod -aG docker $USER
55 | ```
56 | 
57 | ```bash=
58 | **Delete all Exited Containers**
59 | 
60 | docker rm $(docker ps -q -f status=exited)
61 | 
62 | **Delete all Stopped Containers**
63 | 
64 | docker rm $(docker ps -a -q)
65 | 
66 | **Delete All Running and Stopped Containers**
67 | 
68 | docker stop $(docker ps -a -q)
69 | 
70 | docker rm $(docker ps -a -q)
71 | 
72 | **Remove all containers, without any criteria**
73 | 
74 | docker container rm $(docker container ps -aq)
75 | ```
76 | 
77 | ```bash=
78 | Docker Compose Commands
79 | 
80 | - Use Docker Compose to Build Containers
81 | Run from the directory of your docker-compose.yml file.
82 | docker-compose build
83 | 
84 | - Use Docker Compose to Start a Group of Containers
85 | Use this command from the directory of your docker-compose.yml file.
86 | 
87 | docker-compose up -d
88 | 
89 | This will tell Docker to fetch the latest version of the container from the
90 | repo, and not use the local cache.
91 | 
92 | docker-compose up -d --force-recreate
93 | 
94 | This can be problematic if you're doing CI builds with Jenkins and pushing
95 | Docker images to another host, or using it for CI testing. I was deploying a
96 | Spring Boot Web Application from Jenkins, and found the docker container
97 | was not getting refreshed with the latest Spring Boot artifact.
98 | 
99 | # stop docker containers, and rebuild
100 | docker-compose stop -t 1
101 | docker-compose rm -f
102 | docker-compose pull
103 | docker-compose build
104 | docker-compose up -d
105 | 
106 | - Follow the Logs of Running Docker Containers With Docker Compose
107 | docker-compose logs -f
108 | 
109 | - Save a Running Docker Container as an Image
110 | docker commit
111 | 
112 | - Follow the logs of one container running under Docker Compose
113 | docker-compose logs pump
114 | 
115 | ```
116 | 
117 | 
118 | # Docker Swarm Commands
119 | 
120 | - Is Docker Swarm automatically enabled?
121 | 
122 | No, by default, Docker Swarm is not available
123 | 
124 | - Types of Nodes in a Docker Swarm
125 | 
126 | Manager and worker
127 | 
128 | - Enable the First Node of a Docker Swarm
129 | 
130 | `docker swarm init`
131 | 
132 | - List Running Services
133 | 
134 | `docker service ls`
135 | 
136 | - Add a Node to a Swarm Cluster
137 | 
138 | `docker swarm join --token --listen-addr `
139 | 
140 | - Can manager nodes run containers?
141 | 
142 | Yes, manager nodes normally run containers
143 | 
144 | - Retrieve the Join Token
145 | 
146 | `docker swarm join-token`
147 | 
148 | - List Nodes in a Cluster
149 | 
150 | `docker node ls`
151 | 
152 | - Can you run a 'docker node ls' from a worker node?
153 | No.
Docker Swarm commands can only be run from manager nodes
154 | 
155 | - List Services in a Docker Swarm
156 | 
157 | `docker service ls`
158 | 
159 | - List Containers in a Service
160 | 
161 | `docker service ps `
162 | 
163 | - Remove a Service
164 | 
165 | `docker service rm `
166 | 
167 | - Remove a Node from a Swarm Cluster
168 | 
169 | `docker node rm `
170 | 
171 | - Promote a Node from Worker to Manager
172 | 
173 | `docker node promote `
174 | 
175 | - Change a Node from a Manager to a Worker
176 | 
177 | `docker node demote `
178 | 
179 | 
-------------------------------------------------------------------------------- /2023/day20/tasks.md: --------------------------------------------------------------------------------
1 | ## Finally!! 🎉
2 | You have completed✅ the Docker hands-on and I hope you have learned something interesting from it.🙌
3 | 
4 | Now it's time to take your Docker skills to the next level by creating a comprehensive cheat-sheet of all the commands you've learned so far. This cheat-sheet should include commands for both Docker and Docker-Compose, as well as brief explanations of their usage.
5 | This cheat-sheet will not only help you in the future but also contribute to the DevOps community by providing a useful resource for others.😊🙌
6 | 
7 | 
8 | So, put your knowledge and creativity to the test and create a cheat-sheet that truly stands out! 🚀
9 | 
10 | *I have added a [cheatsheet](https://cdn.hashnode.com/res/hashnode/image/upload/v1670863735841/r6xdXpsap.png?auto=compress,format&format=webp) for your reference, Make sure your cheatsheet is UNIQUE*
11 | 
12 | Post it on Linkedin and Spread the knowledge.😃
13 | 
14 | **Happy Learning :)**
15 | 
16 | 
17 | 
-------------------------------------------------------------------------------- /2023/day21/interview_questions.md: --------------------------------------------------------------------------------
1 | 1) What is the Difference between an Image, Container and Engine?
2 | - An image is a read-only template used to create containers. A container is a running instance of an image. An engine is the component of the Docker platform that builds and runs containers.
3 | 2) What is the Difference between the Docker command COPY vs ADD?
4 | - COPY is a command used to copy files from the host file system into a Docker image, while ADD can perform the same function, but with the added ability to handle archive files (i.e. tar, gzip) and extract them into the image file system.
5 | 3) What is the Difference between the Docker command CMD vs RUN?
6 | - CMD is used to set default commands in a Docker image, while RUN is used to execute a command during the image build process.
7 | 4) How Will you reduce the size of the Docker image?
8 | - Reducing the size of a Docker image can be done through techniques such as using multi-stage builds, using Alpine Linux instead of larger base images, and removing unnecessary files and dependencies.
9 | 5) Why and when to use Docker?
10 | - Docker is used for packaging, distributing, and running applications in containers. It provides a consistent environment for deployment and eliminates issues with dependencies and configurations.
11 | 6) Explain the Docker components and how they interact with each other.
12 | - Docker components include the Docker daemon, the Docker CLI, and the Docker API. They interact by receiving commands from the CLI and API, performing actions on the host system, and communicating with the Docker registry to download and upload images.
13 | 7) Explain the terminology: Docker Compose, Docker File, Docker Image, Docker Container?
14 | - Docker Compose is a tool for defining and running multi-container applications. A Dockerfile is a script used to build a Docker image. A Docker image is a read-only template for containers. A Docker container is a running instance of an image.
15 | 8) In what real scenarios have you used Docker?
16 | - Common real-world scenarios include microservices architectures, continuous integration and delivery pipelines, and reproducible development environments.
17 | 9) Docker vs Hypervisor?
18 | - Docker uses OS-level virtualization, while a hypervisor provides full virtualization and runs multiple VMs on a host. Docker is lighter and faster than a hypervisor.
19 | 10) What are the advantages and disadvantages of using docker?
20 | - Advantages of using Docker include consistency in deployment, efficient resource utilization, and faster setup times. Disadvantages include increased security risks and potential compatibility issues.
21 | 11) What is a Docker namespace?
22 | - A Docker namespace is a mechanism for isolating the file system, network, and other resources of a container from the host system and other containers.
23 | 12) What is a Docker registry?
24 | - A Docker registry is a centralized repository for storing and distributing Docker images.
25 | 13) What is an entry point?
26 | - An entry point is the command that is automatically run when a Docker container is started.
27 | 14) How to implement CI/CD in Docker?
28 | - CI/CD in Docker can be implemented by using tools like Jenkins, TravisCI, and GitLab to automate the build, test, and deployment processes.
29 | 15) Will data on the container be lost when the docker container exits?
30 | - Data in the container's writable layer survives a stop and restart, but it is lost when the container is deleted, unless it is stored in a volume that is separate from the container file system.
31 | 16) What is a Docker swarm?
32 | - A Docker swarm is a native Docker tool for orchestration and cluster management of Docker containers.
33 | 17) What are the docker commands for the following:
34 | - view running containers `docker ps`
35 | - command to run the container under a specific name `docker run --name [name] [image_name]`
36 | - command to export a docker `docker save [image_name] > [filename.tar]`
37 | - command to import an already existing docker image `docker load < [filename.tar]`
38 | - commands to delete a container `docker rm [container_id]`
39 | - command to remove all stopped containers, unused networks, build caches, and dangling images? `docker system prune`
40 | 18) What are the common docker practices to reduce the size of Docker Image?
41 | - Common practices to reduce the size of Docker images include using multi-stage builds, using Alpine Linux, removing unnecessary files and dependencies, and compressing files.
42 | 
-------------------------------------------------------------------------------- /2023/day21/tasks.md: --------------------------------------------------------------------------------
1 | ## Day 21 Task: Docker Important interview Questions.
2 | 
3 | 
4 | ## Docker Interview
5 | Docker is a good topic to ask in DevOps Engineer Interviews, mostly for freshers.
6 | One must surely try these questions in order to be better in Docker
7 | 
8 | ## Questions
9 | 
10 | 
11 | - What is the Difference between an Image, Container and Engine?
12 | - What is the Difference between the Docker command COPY vs ADD?
13 | - What is the Difference between the Docker command CMD vs RUN?
14 | - How Will you reduce the size of the Docker image?
15 | - Why and when to use Docker?
16 | - Explain the Docker components and how they interact with each other.
17 | - Explain the terminology: Docker Compose, Docker File, Docker Image, Docker Container?
18 | - In what real scenarios have you used Docker?
19 | - Docker vs Hypervisor?
20 | - What are the advantages and disadvantages of using docker?
21 | - What is a Docker namespace?
22 | - What is a Docker registry?
23 | - What is an entry point?
24 | - How to implement CI/CD in Docker?
25 | - Will data on the container be lost when the docker container exits?
26 | - What is a Docker swarm?
27 | - What are the docker commands for the following:
28 |   - view running containers
29 |   - command to run the container under a specific name
30 |   - command to export a docker
31 |   - command to import an already existing docker image
32 |   - commands to delete a container
33 |   - command to remove all stopped containers, unused networks, build caches, and dangling images?
34 | - What are the common docker practices to reduce the size of Docker Image?
35 | 
36 | 
37 | These questions will help you in your next DevOps Interview.
38 | *Write a Blog and share it on LinkedIn.*
39 | 
40 | **Happy Learning :)**
41 | 
-------------------------------------------------------------------------------- /2023/day22/Day 22 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day22/Day 22 Task.pdf -------------------------------------------------------------------------------- /2023/day22/tasks.md: --------------------------------------------------------------------------------
1 | # Day-22 : Getting Started with Jenkins 😃
2 | 
3 | **Linux, Git-GitHub, and Docker are done, so let's now learn a CI-CD tool to deploy what we build:**
4 | 
5 | ## What is Jenkins?
6 | - Jenkins is an open-source continuous integration, continuous delivery and deployment (CI/CD) automation DevOps tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.
7 | 
8 | - Jenkins is a tool that is used for automation: an open-source server that allows all developers to build, test and deploy software. It runs on Java, as it is written in Java. By using Jenkins we can set up continuous integration for projects (jobs) and automate them end to end.
9 | 
10 | - Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for that tool. For example Git, Maven 2 project, Amazon EC2, HTML publisher etc.
11 | 
12 | **Let us discuss the necessity of this tool before going ahead to the procedural part of installation:**
13 | - Nowadays, humans are becoming lazy😴 day by day; even with digital screens and one-click buttons in front of us, we still want some automation.
14 | 
15 | - Here, I'm referring to the kind of automation where we don't have to watch over a process (here called a job) until it completes before starting another job. For that, we have Jenkins with us.
16 | 
17 | Note: By now Jenkins should be installed on your machine (as it was a part of previous tasks; if not, follow the [Installation Guide](https://youtu.be/OkVtBKqMt7I))
18 | 
19 | 
20 | ## Tasks:
21 | 
22 | **1. Write a small article in your own words about what you understood of Jenkins (don't copy directly from the Internet)**
23 | 
24 | **2. Create a freestyle pipeline to print "Hello World!!"**
25 | Hint: Use this [Article](https://www.geeksforgeeks.org/what-is-jenkins) as a reference
26 | 
27 | Don't forget to post your progress on Linkedin. Till then, Happy learning :)
28 | 
29 | 
-------------------------------------------------------------------------------- /2023/day23/Day 23 Task.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day23/Day 23 Task.pdf -------------------------------------------------------------------------------- /2023/day23/tasks.md: --------------------------------------------------------------------------------
1 | # Day 23 Task: Jenkins Freestyle Project for DevOps Engineers.
2 | 
3 | The Community is absolutely crushing it in the #90daysofdevops journey. Today's challenge is particularly exciting as it entails creating a Jenkins Freestyle Project, an opportunity for DevOps engineers to showcase their skills and push their limits. Who's ready to dive in and make it happen? 😍
4 | 
5 | ## What is CI/CD?
6 | - CI or Continuous Integration is the practice of automating the integration of code changes from multiple developers into a single codebase. It is a software development practice where the developers commit their work frequently into the central code repository (Github or Stash). Then there are automated tools that build the newly committed code and do a code review, etc as required upon integration.
7 | The key goals of Continuous Integration are to find and address bugs quicker, make the process of integrating code across a team of developers easier, improve software quality and reduce the time it takes to release new feature updates.
8 | 
9 | 
10 | - CD or Continuous Delivery is carried out after Continuous Integration to make sure that we can release new changes to our customers quickly in an error-free way. This includes running integration and regression tests in the staging area (similar to the production environment) so that the final release is not broken in production. It ensures the release process is automated, so that we have a release-ready product at all times and we can deploy our application at any point in time.
11 | 
12 | ## What Is a Build Job?
13 | A Jenkins build job contains the configuration for automating a specific task or step in the application building process. These tasks include gathering dependencies, compiling, archiving, or transforming code, and testing and deploying code in different environments.
14 | 
15 | Jenkins supports several types of build jobs, such as freestyle projects, pipelines, multi-configuration projects, folders, multibranch pipelines, and organization folders.
16 | 
17 | ## What are Freestyle Projects? 🤔
18 | A freestyle project in Jenkins is a type of project that allows you to build, test, and deploy software using a variety of different options and configurations. Here are a few tasks that you could complete when working with a freestyle project in Jenkins:
19 | 
20 | 
21 | # Task-01
22 | 
23 | - Create an agent for your app
(which you deployed with Docker in an earlier task)
24 | - Create a new Jenkins freestyle project for your app.
25 | - In the "Build" section of the project, add a build step to run the "docker build" command to build the image for the container.
26 | - Add a second step to run the "docker run" command to start a container using the image created in the previous step.
27 | 
28 | 
29 | # Task-02
30 | - Create a Jenkins project to run the "docker-compose up -d" command to start the multiple containers defined in the compose file (Hint- use the day-19 Application & Database docker-compose file)
31 | - Set up a cleanup step in the Jenkins project to run the "docker-compose down" command to stop and remove the containers defined in the compose file.
32 | 
33 | For a reference Jenkins Freestyle Project visit [here](https://youtu.be/wwNWgG5htxs)
34 | 
35 | You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
36 | 
37 | Happy Learning:)
38 | 
-------------------------------------------------------------------------------- /2023/day24/tasks.md: --------------------------------------------------------------------------------
1 | 
2 | # Day 24 Task: Complete Jenkins CI/CD Project
3 | 
4 | 
5 | 
6 | Let's make a beautiful CI/CD Pipeline for your Node JS Application 😍
7 | 
8 | 
9 | 
10 | ## Did you finish Day 23?
11 | 
12 | - Day 23 was all about Jenkins CI/CD, make sure you have done it and understood the concepts. Today you will be doing one Project End to End and adding it to your resume :)
13 | 
14 | - As you have worked with Docker and Docker compose, it will be good to use them in a live project.
15 | 
16 | 
17 | # Task-01
18 | 
19 | - Fork [this](https://github.com/LondheShubham153/node-todo-cicd.git) repository.
20 | - Create a connection to your Jenkins job and your GitHub Repository via GitHub Integration.
21 | - Read about [GitHub WebHooks](https://betterprogramming.pub/how-too-add-github-webhook-to-a-jenkins-pipeline-62b0be84e006) and make sure you have a CICD setup
22 | - Refer [this](https://youtu.be/nplH3BzKHPk) video for the entire project
23 | 
24 | # Task-02
25 | - In the Execute shell, run the application using Docker compose
26 | - You will have to make a Docker Compose file for this Project (can be a good open source contribution); a possible sketch is given below
27 | - Run the project and give yourself a treat:)
28 | 
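A possible docker-compose.yml for this project, assuming the Node.js app's Dockerfile sits in the repo root and the app listens on port 8000 (the port and service name are assumptions, check the repo's README):

```yaml
version: "3.3"
services:
  web:
    build: .            # build the image from the repo's Dockerfile
    ports:
      - "8000:8000"
    restart: always
```

With this file in place, the Jenkins Execute-shell step can be as simple as `docker-compose down && docker-compose up -d --build`.
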
29 | For Reference and the entire hands-on Project visit [here](https://youtu.be/nplH3BzKHPk)
30 | 
31 | 
32 | 
33 | You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
34 | 
35 | 
36 | 
37 | Happy Learning:)
-------------------------------------------------------------------------------- /2023/day25/tasks.md: --------------------------------------------------------------------------------
1 | 
2 | 
3 | 
4 | # Day 25 Task: Complete Jenkins CI/CD Project - Continued with Documentation
5 | 
6 | 
7 | 
8 | 
9 | 
10 | I can imagine catching up will be tough, so take a small breather today, complete the Jenkins CI/CD project from Day 24 and add documentation.
11 | 
12 | 
13 | 
14 | 
15 | 
16 | ## Did you finish Day 24?
17 | 
18 | 
19 | 
20 | - Day 24 will give you an End to End project, and adding it to your resume will be a cherry on the top.
21 | 
22 | - Take more time, finish the project, add Documentation, add it to your Resume and post about it today.
23 | 
24 | 
25 | 
26 | # Task-01
27 | 
28 | 
29 | 
30 | - Document the process from cloning the repository to adding webhooks, deployment, etc. as a README; go through [this example](https://github.com/LondheShubham153/fynd-my-movie/blob/master/README.md)
31 | 
32 | 
33 | 
34 | - A well-written README file will help others understand your project, and you will understand how to use the project again without any problems.
35 | 
36 | 
37 | 
38 | 
39 | # Task-02
40 | 
41 | 
42 | 
43 | - Also, it's important to keep smaller goals; as it's a small task, think of a small Goal you can accomplish.
44 | 
45 | 
46 | 
47 | - Write about it using [this template](https://www.linkedin.com/posts/shubhamlondhe1996_taking-resolutions-and-having-goals-for-an-activity-7023858409762373632-s2J8?utm_source=share&utm_medium=member_desktop)
48 | 
49 | 
50 | 
51 | - Have small goals and strategies to achieve them, and also have a small reward for yourself.
52 | 
53 | 
54 | 
55 | For Reference and the entire hands-on Project visit [here](https://youtu.be/nplH3BzKHPk)
56 | 
57 | 
58 | 
59 | 
60 | 
61 | You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
62 | 
63 | 
64 | 
65 | 
66 | 
67 | Happy Learning:)
-------------------------------------------------------------------------------- /2023/day26/tasks.md: --------------------------------------------------------------------------------
1 | # Day 26 Task: Jenkins Declarative Pipeline
2 | 
3 | 
4 | One of the most important parts of your DevOps and CICD journey is the Declarative Pipeline syntax of Jenkins
5 | 
6 | 
7 | ## Some terms for your Knowledge
8 | 
9 | **What is Pipeline -** A pipeline is a collection of steps or jobs interlinked in a sequence.
10 | 
11 | **Declarative:** Declarative is a more recent and advanced implementation of pipeline as code.
12 | 
13 | **Scripted:** Scripted was the first and most traditional implementation of pipeline as code in Jenkins. It was designed as a general-purpose DSL (Domain Specific Language) built with Groovy.
14 | 
15 | # Why you should have a Pipeline
16 | 
17 | The definition of a Jenkins Pipeline is written into a text file (called a [`Jenkinsfile`](https://www.jenkins.io/doc/book/pipeline/jenkinsfile)) which in turn can be committed to a project's source control repository.
18 | This is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code.
19 | 
20 | **Creating a `Jenkinsfile` and committing it to source control provides a number of immediate benefits:**
21 | 
22 | - Automatically creates a Pipeline build process for all branches and pull requests.
23 | 
24 | - Code review/iteration on the Pipeline (along with the remaining source code).
25 | 
26 | 
27 | # Pipeline syntax
28 | 
29 | ````groovy
30 | pipeline {
31 |     agent any
32 |     stages {
33 |         stage('Build') {
34 |             steps {
35 |                 //
36 |             }
37 |         }
38 |         stage('Test') {
39 |             steps {
40 |                 //
41 |             }
42 |         }
43 |         stage('Deploy') {
44 |             steps {
45 |                 //
46 |             }
47 |         }
48 |     }
49 | }
50 | ````
51 | 
52 | 
53 | # Task-01
54 | 
55 | - Create a New Job, this time select Pipeline instead of Freestyle Project.
56 | - Follow the Official Jenkins [Hello world example](https://www.jenkins.io/doc/pipeline/tour/hello-world/)
57 | - Complete the example using the Declarative pipeline (a minimal sketch is given below)
58 | - In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
59 | 
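For quick orientation, a minimal Declarative hello-world Jenkinsfile along the lines of the official example looks like this:

```groovy
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World!!'
            }
        }
    }
}
```
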
60 | You can post your progress on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
61 | 
62 | 
63 | Happy Learning:)
-------------------------------------------------------------------------------- /2023/day27/tasks.md: --------------------------------------------------------------------------------
1 | 
2 | # Day 27 Task: Jenkins Declarative Pipeline with Docker
3 | 
4 | 
5 | 
6 | Day 26 was all about a Declarative pipeline, now it's time to level things up: let's integrate Docker and your Jenkins declarative pipeline
7 | 
8 | 
9 | 
10 | ## Use your Docker Build and Run Knowledge
11 | 
12 | 
13 | 
14 | **docker build -** you can use `sh 'docker build . -t <image-name>'` in your pipeline stage block to run the docker build command. (Make sure you have Docker installed with the correct permissions.)
15 | 
16 | 
17 | 
18 | **docker run:** you can use `sh 'docker run -d <image-name>'` in your pipeline stage block to run the container.
19 | 
20 | 
21 | 
22 | **How will the stages look**
23 | ````groovy
24 | stages {
25 |     stage('Build') {
26 |         steps {
27 |             sh 'docker build -t trainwithshubham/django-app:latest .'
28 |         }
29 |     }
30 | }
31 | ````
32 | 
33 | 
34 | 
35 | 
36 | # Task-01
37 | 
38 | 
39 | 
40 | - Create a docker-integrated Jenkins declarative pipeline
41 | - Use the above-given syntax using `sh` inside the stage block
42 | - You will face errors in case of running a job twice, as the docker container will already be created, so for that do task 2
43 | 
44 | # Task-02
45 | 
46 | 
47 | 
48 | - Create a docker-integrated Jenkins declarative pipeline using the `docker` groovy syntax inside the stage block.
49 | - You won't face errors; you can follow [this documentation](https://tempora-mutantur.github.io/jenkins.io/github_pages_test/doc/book/pipeline/docker/)
50 | 
51 | - Complete your previous projects using this Declarative pipeline approach
52 | 
53 | - In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
54 | 
55 | Are you enjoying the #90DaysOfDevOps Challenge?
56 | Let me know how you are feeling after 4 weeks of DevOps learnings.
57 | 
58 | 
59 | Happy Learning:)
-------------------------------------------------------------------------------- /2023/day28/tasks.md: --------------------------------------------------------------------------------
1 | # Day 28 Task: Jenkins Agents
2 | 
3 | 
4 | # Jenkins Master (Server)
5 | Jenkins's server or master node holds all key configurations. The Jenkins master server is like a control server that orchestrates all the workflow defined in the pipelines, for example, scheduling a job, monitoring the jobs, etc.
6 | 
7 | # Jenkins Agent
8 | An agent is typically a machine or container that connects to a Jenkins master, and it is this agent that actually executes all the steps mentioned in a Job. When you create a Jenkins job, you have to assign an agent to it. Every agent has a label as a unique identifier.
9 | 
10 | When you trigger a Jenkins job from the master, the actual execution happens on the agent node that is configured in the job.
11 | 
12 | A single, monolithic Jenkins installation can work great for a small team with a relatively small number of projects. As your needs grow, however, it often becomes necessary to scale up. Jenkins provides a way to do this called "master to agent connection." Instead of serving the Jenkins UI and running build jobs all on a single system, you can provide Jenkins with agents to handle the execution of jobs while the master serves the Jenkins UI and acts as a control node.
13 | 
14 | 

15 | 
16 | ## Pre-requisites
17 | Let's say we're starting with a fresh Ubuntu 22.04 Linux installation. To get an agent working, make sure you install Java (the same version as the Jenkins master server) and Docker on it.
18 | 
19 | `
20 | Note:-
21 | While creating an agent, be sure to separate rights, permissions, and ownership for the jenkins user.
22 | `
23 | 
24 | # Task-01
25 | 
26 | 
27 | 
28 | 
29 | 
30 | - Create an agent by setting up a node on Jenkins
31 | 
32 | - Create a new AWS EC2 Instance and connect it to the master (where Jenkins is installed)
33 | 
34 | - The connection of master and agent requires SSH and the public-private key pair exchange.
35 | - Verify its status under the "Nodes" section.
36 | 
37 | - You can follow [this article](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7017885886461698048-os5f?utm_source=share&utm_medium=member_android) for the same
38 | 
39 | 
40 | 
41 | # Task-02
42 | 
43 | 
44 | 
45 | 
46 | 
47 | - Run your previous Jobs (which you built on Day 26, and Day 27) on the new agent
48 | 
49 | - Use labels for the agent; your master server should trigger builds for the agent server.
50 | 
51 | 
52 | 
53 | - In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
54 | 
55 | 
56 | 
57 | Are you enjoying the #90DaysOfDevOps Challenge?
58 | 
59 | Let me know how you are feeling after 4 weeks of DevOps Learning.
60 | 
61 | 
62 | 
63 | 
64 | Happy Learning:)
65 | 
-------------------------------------------------------------------------------- /2023/day29/tasks.md: --------------------------------------------------------------------------------
1 | ## Day 29 Task: Jenkins Important interview Questions.
2 | 
3 | 

4 | 5 | 6 | ## Jenkins Interview 7 | Here are some Jenkins-specific questions that one can expect during a DevOps Engineer interview: 8 | 9 | ## Questions 10 | 11 | 1. What’s the difference between continuous integration, continuous delivery, and continuous deployment? 12 | 2. What are the benefits of CI/CD? 13 | 3. What is meant by CI/CD? 14 | 4. What is a Jenkins Pipeline? 15 | 5. How do you configure a job in Jenkins? 16 | 6. Where do you find errors in Jenkins? 17 | 7. In Jenkins, how can you find log files? 18 | 8. Explain the Jenkins workflow and write a script for this workflow. 19 | 9. How do you create continuous deployment in Jenkins? 20 | 10. How do you build a job in Jenkins? 21 | 11. Why do we use pipelines in Jenkins? 22 | 12. Is Jenkins alone enough for automation? 23 | 13. How will you handle secrets? 24 | 14. Explain the different stages in a CI/CD setup. 25 | 15. Name some of the plugins in Jenkins. 26 | 27 | 28 | 29 | These questions will help you in your next DevOps interview. 30 | Write a blog and share it on LinkedIn. 31 | 32 | *Happy Learning :)* 33 | -------------------------------------------------------------------------------- /2023/day30/tasks.md: -------------------------------------------------------------------------------- 1 | 2 | ## Day 30 Task: Kubernetes Architecture 3 | 4 | 5 | 6 |

7 | 8 | 9 | 10 | ## Kubernetes Overview 11 | 12 | With the widespread adoption of [containers](https://cloud.google.com/containers) among organizations, Kubernetes, the container-centric management software, has become a standard to deploy and operate containerized applications and is one of the most important parts of DevOps. 13 | 14 | Originally developed at Google and released as open source in 2014, Kubernetes builds on 15 years of running Google's containerized workloads and on the valuable contributions from the open-source community. It was inspired by Google’s internal cluster management system, [Borg](https://research.google.com/pubs/pub43438.html). 15 | 16 | 17 | ## Tasks 18 | 19 | 20 | 21 | 1. What is Kubernetes? Write about it in your own words. Why do we call it k8s? 22 | 23 | 2. What are the benefits of using k8s? 24 | 25 | 3. Explain the architecture of Kubernetes; refer to [this video](https://youtu.be/FqfoDUhzyDo) 26 | 27 | 4. What is the Control Plane? 28 | 29 | 5. Write the difference between kubectl and kubelet. 30 | 31 | 6. Explain the role of the API server. 32 | 33 | Kubernetes architecture is important, so make sure you spend a day understanding it. [This video](https://youtu.be/FqfoDUhzyDo) will surely help you. 34 | 35 | 36 | 37 | *Happy Learning :)* -------------------------------------------------------------------------------- /2023/day31/pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx 5 | spec: 6 | containers: 7 | - name: nginx 8 | image: nginx:1.14.2 9 | ports: 10 | - containerPort: 80 11 | 12 | 13 | # After creating this file, run the command below: 14 | # kubectl apply -f pod.yml 15 | -------------------------------------------------------------------------------- /2023/day31/tasks.md: -------------------------------------------------------------------------------- 1 | 2 | ## Day 31 Task: Launching your First Kubernetes Cluster with Nginx running 3 | 4 | 5 | 6 | ### Awesome! You learned the architecture of one of the most important tools, "Kubernetes", in your previous task. 7 | 8 | 9 | 10 | ## What about doing some hands-on now? 11 | 12 | Let's read about minikube and implement *k8s* on our local machine 13 | 14 | 15 | 16 | 1) **What is minikube?** 17 | 18 | 19 | 20 | *Ans*:- Minikube is a tool which quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It can deploy as a VM, a container, or on bare metal. 21 | 22 | 23 | 24 | Minikube is a pared-down version of Kubernetes that gives you all the benefits of Kubernetes with a lot less effort. 25 | 26 | This makes it an interesting option for users who are new to containers, and also for projects in the world of edge computing and the Internet of Things.
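To get a feel for how lightweight it is, a typical first session looks something like this (a minimal sketch; it assumes minikube and kubectl are already installed, which is Task-01 below):

```
# Start a single-node local cluster (the driver may differ on your machine)
minikube start --driver=docker

# Confirm the cluster and its node are up
minikube status
kubectl get nodes
```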
27 | 28 | 29 | 30 | 2) **Features of minikube** 31 | 32 | 33 | 34 | *Ans* :- 35 | 36 | (a) Supports the latest Kubernetes release (+6 previous minor versions) 37 | 38 | (b) Cross-platform (Linux, macOS, Windows) 39 | 40 | (c) Deploy as a VM, a container, or on bare-metal 41 | 42 | (d) Multiple container runtimes (CRI-O, containerd, docker) 43 | 44 | (e) Direct API endpoint for blazing fast image load and build 45 | 46 | (f) Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy 47 | 48 | (g) Addons for easily installed Kubernetes applications 49 | 50 | (h) Supports common CI environments 51 | 52 | 53 | 54 | ## Task-01: 55 | 56 | ## Install minikube on your local 57 | 58 | 59 | 60 | For installation, you can Visit [this page](https://minikube.sigs.k8s.io/docs/start/). 61 | 62 | 63 | 64 | If you want to try an alternative way, you can check [this](https://k8s-docs.netlify.app/en/docs/tasks/tools/install-minikube/). 65 | 66 | 67 | 68 | ## Let's understand the concept **pod** 69 | 70 | 71 | 72 | *Ans:-* 73 | 74 | 75 | 76 | Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. 77 | 78 | 79 | 80 | A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. 81 | 82 | 83 | 84 | You can read more about pod from [here](https://kubernetes.io/docs/concepts/workloads/pods/) . 85 | 86 | 87 | 88 | ## Task-02: 89 | 90 | ## Create your first pod on Kubernetes through minikube. 91 | 92 | We are suggesting you make an nginx pod, but you can always show your creativity and do it on your own. 93 | 94 | 95 | 96 | **Having an issue? Don't worry, adding a sample yaml file for pod creation, you can always refer that.** 97 | 98 | *Happy Learning :)* -------------------------------------------------------------------------------- /2023/day32/Deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: todo-app 5 | labels: 6 | app: todo 7 | spec: 8 | replicas: 2 9 | selector: 10 | matchLabels: 11 | app: todo 12 | template: 13 | metadata: 14 | labels: 15 | app: todo 16 | spec: 17 | containers: 18 | - name: todo 19 | image: rishikeshops/todo-app 20 | ports: 21 | - containerPort: 3000 22 | -------------------------------------------------------------------------------- /2023/day32/tasks.md: -------------------------------------------------------------------------------- 1 | 2 | ## Day 32 Task: Launching your Kubernetes Cluster with Deployment 3 | 4 | ### Congratulation ! on your learning on K8s on Day-31 5 | 6 | ## What is Deployment in k8s 7 | 8 | A Deployment provides a configuration for updates for Pods and ReplicaSets. 9 | 10 | You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new replicas for scaling, or to remove existing Deployments and adopt all their resources with new Deployments. 11 | 12 | ## Today's task let's keep it very simple. 
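Before you begin, here is a rough sketch of how to watch the "Auto-healing" part work once the Deployment from Task-1 below is applied (the pod name is a placeholder you would copy from the `kubectl get pods` output):

```
# Apply the sample Deployment kept in this folder
kubectl apply -f deployment.yml

# Two 'todo' replicas should be running
kubectl get pods -l app=todo

# Delete one pod; the Deployment's ReplicaSet immediately creates a replacement
kubectl delete pod <todo-pod-name>
kubectl get pods -l app=todo -w
```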
13 | 14 | ## Task-1: 15 | **Create one Deployment file to deploy a sample todo-app on K8s using the "Auto-healing" and "Auto-scaling" features** 16 | 17 | - Add a deployment.yml file (a sample is kept in the folder for your reference) 18 | - Apply the deployment to your k8s (minikube) cluster with the command 19 | `kubectl apply -f deployment.yml` 20 | 21 | Let's make your resume shine with one more project ;) 22 | 23 | 24 | **Having an issue? Don't worry, we have added a sample deployment file; you can always refer to that or watch [this video](https://youtu.be/ONrbWFJXLLk)** 25 | 26 | 27 | 28 | Happy Learning :) 29 | -------------------------------------------------------------------------------- /2023/day33/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 33 Task: Working with Namespaces and Services in Kubernetes 2 | ### Congrats🎊🎉 on updating your Deployment yesterday💥🙌 3 | ## What are Namespaces and Services in k8s 4 | In Kubernetes, Namespaces are used to create isolated environments for resources. Each Namespace is like a separate cluster within the same physical cluster. Services are used to expose your Pods and Deployments to the network. Read more about Namespaces [here](https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/) 5 | 6 | # Today's task: 7 | ## Task 1: 8 | - Create a Namespace for your Deployment 9 | 10 | - Use the command `kubectl create namespace <namespace-name>` to create a Namespace 11 | 12 | - Update the deployment.yml file to include the Namespace 13 | 14 | - Apply the updated deployment using the command: 15 | `kubectl apply -f deployment.yml -n <namespace-name>` 16 | - Verify that the Namespace has been created by checking the status of the Namespaces in your cluster. 17 | 18 | ## Task 2: 19 | - Read about Services, Load Balancing, and Networking in Kubernetes. Refer to the official Kubernetes documentation [Link](https://kubernetes.io/docs/concepts/services-networking/) 20 | 21 | Need help with Namespaces? Check out this [video](https://youtu.be/K3jNo4z5Jx8) for assistance. 22 | 23 | Keep growing your Kubernetes knowledge💥🙌 24 | 25 | Happy Learning! :) 26 | 27 | 28 | -------------------------------------------------------------------------------- /2023/day34/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 34 Task: Working with Services in Kubernetes 2 | ### Congratulations🎊 on your K8s learning on Day-33 3 | ## What are Services in K8s 4 | In Kubernetes, Services are objects that provide stable network identities to Pods and abstract away the details of Pod IP addresses. Services allow Pods to receive traffic from other Pods, Services, and external clients. 5 | 6 | 7 | ## Task-1: 8 | - Create a Service for your todo-app Deployment from Day-32 9 | - Create a Service definition for your todo-app Deployment in a YAML file. 10 | - Apply the Service definition to your K8s (minikube) cluster using the `kubectl apply -f service.yml -n <namespace-name>` command. 11 | - Verify that the Service is working by accessing the todo-app using the Service's IP and port in your Namespace. 12 | 13 | ## Task-2: 14 | - Create a ClusterIP Service for accessing the todo-app from within the cluster 15 | - Create a ClusterIP Service definition for your todo-app Deployment in a YAML file. 16 | - Apply the ClusterIP Service definition to your K8s (minikube) cluster using the `kubectl apply -f cluster-ip-service.yml -n <namespace-name>` command.
17 | - Verify that the ClusterIP Service is working by accessing the todo-app from another Pod in the cluster in your Namespace. 18 | 19 | ## Task-3: 20 | - Create a LoadBalancer Service for accessing the todo-app from outside the cluster 21 | - Create a LoadBalancer Service definition for your todo-app Deployment in a YAML file. 22 | - Apply the LoadBalancer Service definition to your K8s (minikube) cluster using the `kubectl apply -f load-balancer-service.yml -n ` command. 23 | - Verify that the LoadBalancer Service is working by accessing the todo-app from outside the cluster in your Namespace. 24 | 25 | 26 | Struggling with Services? Take a look at this video for a step-by-step [guide](https://youtu.be/OJths_RojFA). 27 | 28 | Need help with Services in Kubernetes? Check out the Kubernetes [documentation](https://kubernetes.io/docs/concepts/services-networking/service/) for assistance. 29 | 30 | Happy Learning :) 31 | 32 | -------------------------------------------------------------------------------- /2023/day35/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 35: Mastering ConfigMaps and Secrets in Kubernetes🔒🔑🛡️ 2 | 3 | ### 👏🎉 Yay! Yesterday we conquered Namespaces and Services 💪💻🔗🚀 4 | 5 | ## What are ConfigMaps and Secrets in k8s 6 | In Kubernetes, ConfigMaps and Secrets are used to store configuration data and secrets, respectively. ConfigMaps store configuration data as key-value pairs, while Secrets store sensitive data in an encrypted form. 7 | 8 | - *Example :- Imagine you're in charge of a big spaceship (Kubernetes cluster) with lots of different parts (containers) that need information to function properly. 9 | ConfigMaps are like a file cabinet where you store all the information each part needs in simple, labeled folders (key-value pairs). 10 | Secrets, on the other hand, are like a safe where you keep the important, sensitive information that shouldn't be accessible to just anyone (encrypted data). 11 | So, using ConfigMaps and Secrets, you can ensure each part of your spaceship (Kubernetes cluster) has the information it needs to work properly and keep sensitive information secure! 🚀* 12 | - Read more about [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) & [Secret](https://kubernetes.io/docs/concepts/configuration/secret/). 13 | ## Today's task: 14 | ## Task 1: 15 | - Create a ConfigMap for your Deployment 16 | - Create a ConfigMap for your Deployment using a file or the command line 17 | - Update the deployment.yml file to include the ConfigMap 18 | - Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n ` 19 | - Verify that the ConfigMap has been created by checking the status of the ConfigMaps in your Namespace. 20 | 21 | ## Task 2: 22 | - Create a Secret for your Deployment 23 | - Create a Secret for your Deployment using a file or the command line 24 | - Update the deployment.yml file to include the Secret 25 | - Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n ` 26 | - Verify that the Secret has been created by checking the status of the Secrets in your Namespace. 27 | 28 | Need help with ConfigMaps and Secrets? Check out this [video](https://youtu.be/FAnQTgr04mU) for assistance. 
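If you would like a starting point, here is a minimal sketch of both objects (the names `todo-config`/`todo-secret` and the keys are placeholders, not part of the sample app):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: todo-config        # placeholder name
data:
  APP_ENV: "production"    # example key-value pair
---
apiVersion: v1
kind: Secret
metadata:
  name: todo-secret        # placeholder name
type: Opaque
stringData:
  DB_PASSWORD: "changeme"  # the API server stores this base64-encoded
```

In the Deployment's container spec, these can then be wired in with `envFrom`, referencing `todo-config` via `configMapRef` and `todo-secret` via `secretRef`.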
29 | 30 | 31 | Keep learning and expanding your knowledge of Kubernetes💥🙌 32 | -------------------------------------------------------------------------------- /2023/day36/Deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: todo-app-deployment 5 | spec: 6 | replicas: 1 7 | selector: 8 | matchLabels: 9 | app: todo-app 10 | template: 11 | metadata: 12 | labels: 13 | app: todo-app 14 | spec: 15 | containers: 16 | - name: todo-app 17 | image: rishikeshops/todo-app 18 | ports: 19 | - containerPort: 8000 20 | volumeMounts: 21 | - name: todo-app-data 22 | mountPath: /app 23 | volumes: 24 | - name: todo-app-data 25 | persistentVolumeClaim: 26 | claimName: pvc-todo-app 27 | -------------------------------------------------------------------------------- /2023/day36/pv.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: pv-todo-app 5 | spec: 6 | capacity: 7 | storage: 1Gi 8 | accessModes: 9 | - ReadWriteOnce 10 | persistentVolumeReclaimPolicy: Retain 11 | hostPath: 12 | path: "/tmp/data" 13 | -------------------------------------------------------------------------------- /2023/day36/pvc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: pvc-todo-app 5 | spec: 6 | accessModes: 7 | - ReadWriteOnce 8 | resources: 9 | requests: 10 | storage: 500Mi 11 | -------------------------------------------------------------------------------- /2023/day36/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 36 Task: Managing Persistent Volumes in Your Deployment 💥 2 | 3 | 🙌 Kudos to you for conquering ConfigMaps and Secrets in Kubernetes yesterday. 4 | 5 | 🔥 You're on fire! 🔥 6 | 7 | ## What are Persistent Volumes in k8s 8 | 9 | In Kubernetes, a Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. A Persistent Volume Claim (PVC) is a request for storage by a user. The PVC references the PV, and the PV is bound to a specific node. Read official documentation of [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). 10 | 11 | ⏰ Wait, wait, wait! 📣 Attention all #90daysofDevOps Challengers. 💪 12 | 13 | Before diving into today's task, don't forget to share your thoughts on the #90daysofDevOps challenge 💪 Fill out our feedback form (https://lnkd.in/gcgvrq8b) to help us improve and provide the best experience 🌟 Your participation and support is greatly appreciated 🙏 Let's continue to grow together 🌱 14 | 15 | ## Today's tasks: 16 | 17 | ### Task 1: 18 | 19 | Add a Persistent Volume to your Deployment todo app. 20 | 21 | - Create a Persistent Volume using a file on your node. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pv.yml) 22 | 23 | - Create a Persistent Volume Claim that references the Persistent Volume. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pvc.yml) 24 | 25 | - Update your deployment.yml file to include the Persistent Volume Claim. 
After applying pv.yml and pvc.yml, your deployment file should look like this [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/Deployment.yml) 26 | 27 | - Apply the updated deployment using the command: `kubectl apply -f deployment.yml` 28 | 29 | - Verify that the Persistent Volume has been added to your Deployment by checking the status of the Pods and Persistent Volumes in your cluster. Use these commands: `kubectl get pods`, 30 | 31 | `kubectl get pv` 32 | 33 | ⚠️ Don't forget: To apply changes or create files in your Kubernetes deployments, each file must be applied separately. ⚠️ 34 | 35 | ### Task 2: 36 | 37 | Accessing data in the Persistent Volume, 38 | 39 | - Connect to a Pod in your Deployment using the command: `kubectl exec -it <pod-name> -- /bin/bash` 40 | 41 | 42 | 43 | - Verify that you can access the data stored in the Persistent Volume from within the Pod 44 | 45 | Need help with Persistent Volumes? Check out this [video](https://youtu.be/U0_N3v7vJys) for assistance. 46 | 47 | Keep up the excellent work🙌💥 48 | 49 | Happy Learning :) 50 | -------------------------------------------------------------------------------- /2023/day37/tasks.md: -------------------------------------------------------------------------------- 1 | ## Day 37 Task: Kubernetes Important Interview Questions 2 | 3 | ## Questions 4 | 5 | 1. What is Kubernetes and why is it important? 6 | 7 | 2. What is the difference between Docker Swarm and Kubernetes? 8 | 9 | 3. How does Kubernetes handle network communication between containers? 10 | 11 | 4. How does Kubernetes handle scaling of applications? 12 | 13 | 5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet? 14 | 15 | 6. Can you explain the concept of rolling updates in Kubernetes? 16 | 17 | 7. How does Kubernetes handle network security and access control? 18 | 19 | 8. Can you give an example of how Kubernetes can be used to deploy a highly available application? 20 | 21 | 9. What is a Namespace in Kubernetes? Which Namespace does a Pod take if we don't specify any Namespace? 22 | 23 | 10. How does Ingress help in Kubernetes? 24 | 25 | 11. Explain the different types of Services in Kubernetes. 26 | 27 | 12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works? 28 | 29 | 13. How does Kubernetes handle storage management for containers? 30 | 31 | 14. How does the NodePort Service work? 32 | 33 | 15. What are a multi-node cluster and a single-node cluster in Kubernetes? 34 | 35 | 16. What is the difference between `create` and `apply` in Kubernetes? 36 | 37 | 38 | 39 | 40 | 41 | ## These questions will help you in your next DevOps interview. 42 | 43 | *Write a blog and share it on LinkedIn.* 44 | 45 | ***Happy Learning :)*** 46 | -------------------------------------------------------------------------------- /2023/day38/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 38 Getting Started with AWS Basics☁ 2 | ![AWS](https://user-images.githubusercontent.com/115981550/217238286-6c6bc6e7-a1ac-4d12-98f3-f95ff5bf53fc.png) 3 | 4 | 5 | Congratulations!!! You have come so far. Don't let your excuses break your consistency. Let's begin our new journey with the Cloud☁.
By this time you should have created multiple EC2 instances; if not, let's begin the journey: 6 | ## AWS: 7 | Amazon Web Services is one of the most popular cloud providers, and it has a free tier for students and cloud enthusiasts to get hands-on practice while learning (create your free account today to explore it further). 8 | 9 | Read from [here](https://aws.amazon.com/what-is-aws/) 10 | 11 | ## IAM: 12 | AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. 13 | Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) 14 | 15 | Get to know IAM more deeply [Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA) 16 | 17 | ### Task1: 18 | Create an IAM user with a username of your choice and grant it EC2 access. Launch your Linux instance through the IAM user that you just created, and install Jenkins and Docker on your machine via a single shell script. 19 | 20 | ### Task2: 21 | In this task you need to prepare a DevOps team of Avengers. Create 3 IAM users for the Avengers and assign them to a DevOps group with an IAM policy. 22 | 23 | Post your progress on LinkedIn. Till then, Happy Learning :) 24 | -------------------------------------------------------------------------------- /2023/day39/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 39 AWS and IAM Basics☁ 2 | ![AWS](https://miro.medium.com/max/1400/0*dIzXLQn6aBClm1TJ.png) 3 | 4 | 5 | 6 | By this time you have created multiple EC2 instances and, post launch, manually installed applications like Jenkins, Docker, etc. 7 | Now let's switch to a little automation. Sounds interesting? 🤯 8 | 9 | ## AWS: 10 | Amazon Web Services is one of the most popular cloud providers, and it has a free tier for students and cloud enthusiasts to get hands-on practice while learning (create your free account today to explore it further). 11 | 12 | Read from [here](https://aws.amazon.com/what-is-aws/) 13 | 14 | ## User Data in AWS: 15 | - When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. 16 | - You can also pass this data into the launch instance wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls). 17 | - This will save time and manual effort every time you launch an instance and want to install an application on it, like Apache, Docker, Jenkins, etc. 18 | 19 | Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) 20 | 21 | ## IAM: 22 | AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
23 | Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) 24 | 25 | Get to know IAM more deeply🏊[Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA) 26 | 27 | 28 | ### Task1: 29 | - Launch EC2 instance with already installed Jenkins on it. Once server shows up in console, hit the IP address in browser and you Jenkins page should be visible. 30 | - Take screenshot of Userdata and Jenkins page, this will verify the task completion. 31 | 32 | ### Task2: 33 | - Read more on IAM Roles and explain the IAM Users, Groups and Roles in your own terms. 34 | - Create three Roles named: DevOps-User, Test-User and Admin. 35 | 36 | 37 | Post your progress on Linkedin. Till then Happy Learning :) 38 | -------------------------------------------------------------------------------- /2023/day40/tasks.md: -------------------------------------------------------------------------------- 1 | 2 | # Day 40 AWS EC2 Automation ☁ 3 | 4 | ![AWS](https://www.eginnovations.com/blog/wp-content/uploads/2021/09/Amazon-AWS-Cloud-Topimage-1.jpg) 5 | 6 | 7 | 8 | 9 | 10 | I hope your journey with AWS cloud and automation is going well [](https://emojipedia.org/emoji/%F0%9F%98%8D/) 11 | ### 😍 12 | 13 | 14 | 15 | ## Automation in EC2: 16 | 17 | Amazon EC2 or Amazon Elastic Compute Cloud can give you secure, reliable, high-performance, and cost-effective computing infrastructure to meet demanding business needs. 18 | 19 | Also, if you know a few things, you can automate many things. 20 | 21 | Read from [here](https://aws.amazon.com/ec2/) 22 | 23 | 24 | 25 | ## Launch template in AWS EC2: 26 | 27 | - You can make a launch template with the configuration information you need to start an instance. You can save launch parameters in launch templates so you don't have to type them in every time you start a new instance. 28 | - For example, a launch template can have the AMI ID, instance type, and network settings that you usually use to launch instances. 29 | - You can tell the Amazon EC2 console to use a certain launch template when you start an instance. 30 | 31 | 32 | 33 | Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) 34 | 35 | 36 | 37 | ## Instance Types: 38 | 39 | Amazon EC2 has a large number of instance types that are optimised for different uses. The different combinations of CPU, memory, storage and networking capacity in instance types give you the freedom to choose the right mix of resources for your apps. Each instance type comes with one or more instance sizes, so you can adjust your resources to meet the needs of the workload you want to run. 40 | 41 | Read from [here](https://aws.amazon.com/ec2/instance-types/?trk=32f4fbd0-ffda-4695-a60c-8857fab7d0dd&sc_channel=ps&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types&ef_id=CjwKCAiA0JKfBhBIEiwAPhZXD_O1-3qZkRa-KScynbwjvHd3l4UHSTfKuigd5ZPukXoDXu-v3MtC7hoCafEQAvD_BwE:G:s&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types) 42 | 43 | ## AMI: 44 | 45 | An Amazon Machine Image (AMI) is an image that AWS supports and keeps up to date. It contains the information needed to start an instance. When you launch an instance, you must choose an AMI. When you need multiple instances with the same configuration, you can launch them from a single AMI. 46 | 47 | 48 | ### Task1: 49 | 50 | - Create a launch template with Amazon Linux 2 AMI and t2.micro instance type with Jenkins and Docker setup (You can use the Day 39 User data script for installing the required tools. 
51 | 52 | - Create 3 instances using the Launch Template; there must be an option that sets the number of instances to be launched. Can you find it? :) 53 | 54 | - You can go one step further and create an Auto Scaling group. Sounds tough? 55 | 56 | Check [this](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#create-launch-template-for-auto-scaling) out 57 | 58 | 59 | 60 | Post your progress on LinkedIn. 61 | 62 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day41/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 41: Setting up an Application Load Balancer with AWS EC2 🚀 ☁ 2 | 3 | ![LB2](https://user-images.githubusercontent.com/115981550/218145297-d55fe812-32b7-4242-a4f8-eb66312caa2c.png) 4 | 5 | ### Hi, I hope you had a great day yesterday learning about the launch template and instances in EC2. Today, we are going to dive into one of the most important concepts in EC2: Load Balancing. 6 | 7 | ## What is Load Balancing? 8 | Load balancing is the distribution of workloads across multiple servers to ensure consistent and optimal resource utilization. It is an essential aspect of any large-scale and scalable computing system, as it helps you to improve the reliability and performance of your applications. 9 | 10 | ## Elastic Load Balancing: 11 | **Elastic Load Balancing (ELB)** is a service provided by Amazon Web Services (AWS) that automatically distributes incoming traffic across multiple EC2 instances. ELB provides three types of load balancers: 12 | 13 | Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) 14 | 15 | 1) **Application Load Balancer (ALB)** - _operates at layer 7 of the OSI model and is ideal for applications that require advanced routing and microservices._ 16 | 17 | - Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) 18 | 19 | 20 | 2) **Network Load Balancer (NLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require high throughput and low latency._ 21 | 22 | - Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) 23 | 24 | 25 | 3) **Classic Load Balancer (CLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require basic load balancing features._ 26 | - Read more [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html) 27 | 28 | ## 🎯 Today's Tasks: 29 | 30 | ### Task 1: 31 | - Launch 2 EC2 instances with an Ubuntu AMI and use User Data to install the Apache web server (a sample User Data sketch is given after Task 2). 32 | - Modify the index.html file to include your name so that when your Apache server is hosted, it displays your name; for the 2nd instance, have it display "TrainWithShubham Community is Super Awesome :)". 33 | - Copy the public IP address of your EC2 instances. 34 | - Open a web browser and paste the public IP address into the address bar. 35 | - You should see your customized web page served by Apache. 36 | 37 | ### Task 2: 38 | - Create an Application Load Balancer (ALB) in EC2 using the AWS Management Console. 39 | - Add the EC2 instances you launched in Task 1 to the ALB as a target group. 40 | - Verify that the ALB is working properly by checking the health status of the target instances and testing the load balancing capabilities.
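For the User Data in Task 1, a minimal sketch like this should work (assuming an Ubuntu AMI; swap in your own name for the page text):

```
#!/bin/bash
# Runs as root on first boot: install Apache and publish a custom page
apt-get update -y
apt-get install -y apache2
echo "<h1>Your Name</h1>" > /var/www/html/index.html
systemctl enable --now apache2
```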
41 | 42 | ![LoadBalancer](https://user-images.githubusercontent.com/115981550/218143557-26ec33ce-99a7-4db6-a46f-1cf48ed77ae0.png) 43 | 44 | Need help with task? Check out this [Blog for assistance](https://rushikesh-mashidkar.hashnode.dev/create-an-application-load-balancer-elastic-load-balancing-using-aws-ec2-instance). 45 | 46 | Don't forget to share your progress on LinkedIn and have a great day🙌💥 47 | 48 | Happy Learning! 😃 49 | -------------------------------------------------------------------------------- /2023/day42/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 42: IAM Programmatic access and AWS CLI 🚀 ☁ 2 | 3 | Today is more of a reading excercise and getting some programmatic access for your AWS account 4 | 5 | ## IAM Programmatic access 6 | 7 | In order to access your AWS account from a terminal or system, you can use AWS Access keys and AWS Secret Access keys 8 | Watch [this video](https://youtu.be/XYKqL5GFI-I) for more details. 9 | 10 | ## AWS CLI 11 | 12 | The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. 13 | 14 | The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS IAM Identity Center (successor to AWS SSO), and various interactive features. 15 | 16 | 17 | ## Task-01 18 | 19 | - Create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from AWS Console. 20 | 21 | ## Task-02 22 | 23 | - Setup and install AWS CLI and configure your account credentials 24 | 25 | 26 | Let me know if you have any issues while doing the task. 27 | 28 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day43/aws-cli.md: -------------------------------------------------------------------------------- 1 | Here are some commonly used AWS CLI commands for Amazon S3: 2 | 3 | `aws s3 ls` - This command lists all of the S3 buckets in your AWS account. 4 | 5 | `aws s3 mb s3://bucket-name` - This command creates a new S3 bucket with the specified name. 6 | 7 | `aws s3 rb s3://bucket-name` - This command deletes the specified S3 bucket. 8 | 9 | `aws s3 cp file.txt s3://bucket-name` - This command uploads a file to an S3 bucket. 10 | 11 | `aws s3 cp s3://bucket-name/file.txt .` - This command downloads a file from an S3 bucket to your local file system. 12 | 13 | `aws s3 sync local-folder s3://bucket-name` - This command syncs the contents of a local folder with an S3 bucket. 14 | 15 | `aws s3 ls s3://bucket-name` - This command lists the objects in an S3 bucket. 16 | 17 | `aws s3 rm s3://bucket-name/file.txt` - This command deletes an object from an S3 bucket. 18 | 19 | `aws s3 presign s3://bucket-name/file.txt` - This command generates a pre-signed URL for an S3 object, which can be used to grant temporary access to the object. 20 | 21 | `aws s3api list-buckets` - This command retrieves a list of all S3 buckets in your AWS account, using the S3 API. 22 | -------------------------------------------------------------------------------- /2023/day43/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 43: S3 Programmatic access with AWS-CLI 💻 📁 2 | Hi, I hope you had a great day yesterday. Today as part of the #90DaysofDevOps Challenge we will be exploring most commonly used service in AWS i.e S3. 
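If your CLI is already configured from Day 42, a quick sanity check before starting the tasks looks like this (a sketch; a fresh account will show an empty bucket list):

```
# Confirm the CLI is talking to the right account
aws sts get-caller-identity

# List your S3 buckets
aws s3 ls
```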
3 | 4 | ![s3](https://user-images.githubusercontent.com/115981550/218308379-a2e841cf-6b77-4d02-bfbe-20d1bae09b20.png) 5 | 6 | # S3 7 | Amazon Simple Storage Service (Amazon S3) is an object storage service that provides a secure and scalable way to store and access data on the cloud. It is designed for storing any kind of data, such as text files, images, videos, backups, and more. 8 | Read more [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) 9 | ## Task-01 10 | - Launch an EC2 instance using the AWS Management Console and connect to it using Secure Shell (SSH). 11 | - Create an S3 bucket and upload a file to it using the AWS Management Console. 12 | - Access the file from the EC2 instance using the AWS Command Line Interface (AWS CLI). 13 | 14 | Read more about S3 using aws-cli [here](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) 15 | 16 | ## Task-02 17 | - Create a snapshot of the EC2 instance and use it to launch a new EC2 instance. 18 | - Download a file from the S3 bucket using the AWS CLI. 19 | - Verify that the contents of the file are the same on both EC2 instances. 20 | 21 | Added Some Useful commands to complete the task. [Click here for commands](https://github.com/LondheShubham153/90DaysOfDevOps/blob/833a67ac4ec17b992934cd6878875dccc4274f56/2023/day43/aws-cli.md) 22 | 23 | 24 | Let me know if you have any questions or face any issues while doing the tasks.🚀 25 | 26 | Happy Learning :) 27 | -------------------------------------------------------------------------------- /2023/day44/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 44: Relational Database Service in AWS 2 | 3 | Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud 4 | 5 | ## Task-01 6 | 7 | - Create a Free tier RDS instance of MySQL 8 | - Create an EC2 instance 9 | - Create an IAM role with RDS access 10 | - Assign the role to EC2 so that your EC2 Instance can connect with RDS 11 | - Once the RDS instance is up and running, get the credentials and connect your EC2 instance using a MySQL client. 12 | 13 | Hint: 14 | 15 | You should install mysql client on EC2, and connect the Host and Port of RDS with this client. 16 | 17 | Post the screenshots once your EC2 instance can connect a MySQL server, that will be a small win for you. 18 | 19 | Watch [this video](https://youtu.be/MrA6Rk1Y82E) for reference. 20 | 21 | Happy Learning 22 | 23 | -------------------------------------------------------------------------------- /2023/day45/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 45: Deploy Wordpress website on AWS 2 | 3 | Over 30% of all websites on the internet use WordPress as their content management system (CMS). It is most often used to run blogs, but it can also be used to run e-commerce sites, message boards, and many other popular things. This guide will show you how to set up a WordPress blog site. 4 | 5 | 6 | 7 | ## Task-01 8 | 9 | - As WordPress requires a MySQL database to store its data ,create an RDS as you did in Day 44 10 | 11 | To configure this WordPress site, you will create the following resources in AWS: 12 | - An Amazon EC2 instance to install and host the WordPress application. 13 | - An Amazon RDS for MySQL database to store your WordPress data. 14 | - Setup the server and post your new Wordpress app. 
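A rough sketch of the EC2 side of this setup (assuming an Ubuntu instance; the RDS endpoint and credentials come from your Day 44 work and are placeholders here):

```
#!/bin/bash
# Install the web stack WordPress needs
sudo apt-get update -y
sudo apt-get install -y apache2 php php-mysql mysql-client

# Fetch WordPress and place it in the web root
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
sudo cp -r wordpress/* /var/www/html/
sudo systemctl restart apache2

# Then browse to http://<ec2-public-ip> and point the installer
# at your RDS endpoint, database name, user, and password.
```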
15 | 16 | Read [this](https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/) for a detailed explanation 17 | Happy Learning :) 18 | 19 | -------------------------------------------------------------------------------- /2023/day46/tasks.md: -------------------------------------------------------------------------------- 1 | # Day-46: Set up CloudWatch alarms and SNS topic in AWS 2 | 3 | Hey learners, you have been using AWS services for at least the last 45 days. Have you ever wondered what happens if a service keeps charging you continuously and you don't find out until you have lost all your pocket money? 4 | 5 | Hahahaha😁, well! As a responsible community, we always try to keep things under the free tier, but it's good to know how to set something up that will inform you whenever your bill touches a threshold. 6 | 7 | ## What is Amazon CloudWatch? 8 | Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. 9 | 10 | Read more about CloudWatch in the official documentation [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) 11 | 12 | 13 | ## What is Amazon SNS? 14 | 15 | Amazon Simple Notification Service is a notification service provided as part of Amazon Web Services since 2010. It provides a low-cost infrastructure for mass delivery of messages, predominantly to mobile users. 16 | 17 | Read more about it [here](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) 18 | 19 | 20 | ## Task : 21 | 22 | - Create a CloudWatch alarm that monitors your billing and sends you an email when it reaches $2. 23 | 24 | (You can keep it for your future use) 25 | 26 | - Delete the billing alarm that you just created. 27 | 28 | (Now you know how to delete one as well.) 29 | 30 | Need help with CloudWatch? Check out this [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) for assistance. 31 | 32 | Keep growing your AWS knowledge💥🙌 33 | 34 | Happy Learning! :) 35 | -------------------------------------------------------------------------------- /2023/day47/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 47: Test Knowledge on AWS 💻 📈 2 | Today, we will test your knowledge of AWS services, as part of the 90 Days of DevOps Challenge. 3 | 4 | 5 | ## Task-01 6 | 7 | - Launch an EC2 instance using the AWS Management Console and connect to it using SSH. 8 | - Install a web server on the EC2 instance and deploy a simple web application. 9 | - Monitor the EC2 instance using Amazon CloudWatch and troubleshoot any issues that arise. 10 | 11 | ## Task-02 12 | - Create an Auto Scaling group using the AWS Management Console and configure it to launch EC2 instances in response to changes in demand. 13 | - Use Amazon CloudWatch to monitor the performance of the Auto Scaling group and the EC2 instances and troubleshoot any issues that arise. 14 | - Use the AWS CLI to view the state of the Auto Scaling group and the EC2 instances and verify that the correct number of instances are running (see the sketch below). 15 | 16 | 17 | We hope that these tasks will give you hands-on experience with AWS services and help you understand how these services work together. If you have any questions or face any issues while doing the tasks, please let us know.
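For the CLI step in Task-02, commands along these lines should do it (the group name `my-asg` is a placeholder):

```
# Describe the Auto Scaling group, including desired/min/max capacity
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg

# List the instances the group is currently managing
aws autoscaling describe-auto-scaling-instances
```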
18 | 19 | Happy Learning :) 20 | -------------------------------------------------------------------------------- /2023/day48/tasks.md: -------------------------------------------------------------------------------- 1 | # Day-48 - ECS 2 | 3 | Today will be a great learning for sure. I know many of you may not know about the term "ECS". As you know, 90 Days Of DevOps Challange is mostly about 'learning new' , let's learn then ;) 4 | 5 | ## What is ECS ? 6 | - ECS (Elastic Container Service) is a fully-managed container orchestration service provided by Amazon Web Services (AWS). It allows you to run and manage Docker containers on a cluster of virtual machines (EC2 instances) without having to manage the underlying infrastructure. 7 | 8 | With ECS, you can easily deploy, manage, and scale your containerized applications using the AWS Management Console, the AWS CLI, or the API. ECS supports both "Fargate" and "EC2 launch types", which means you can run your containers on AWS-managed infrastructure or your own EC2 instances. 9 | 10 | ECS also integrates with other AWS services, such as Elastic Load Balancing, Auto Scaling, and Amazon VPC, allowing you to build scalable and highly available applications. Additionally, ECS has support for Docker Compose and Kubernetes, making it easy to adopt existing container workflows. 11 | 12 | Overall, ECS is a powerful and flexible container orchestration service that can help simplify the deployment and management of containerized applications in AWS. 13 | 14 | ## Difference between EKS and ECS ? 15 | - EKS (Elastic Kubernetes Service) and ECS (Elastic Container Service) are both container orchestration platforms provided by Amazon Web Services (AWS). While both platforms allow you to run containerized applications in the AWS cloud, there are some differences between the two. 16 | 17 | **Architecture**: 18 | ECS is based on a centralized architecture, where there is a control plane that manages the scheduling of containers on EC2 instances. On the other hand, EKS is based on a distributed architecture, where the Kubernetes control plane is distributed across multiple EC2 instances. 19 | 20 | **Kubernetes Support**: 21 | EKS is a fully managed Kubernetes service, meaning that it supports Kubernetes natively and allows you to run your Kubernetes workloads on AWS without having to manage the Kubernetes control plane. ECS, on the other hand, has its own orchestration engine and does not support Kubernetes natively. 22 | 23 | **Scaling**: 24 | EKS is designed to automatically scale your Kubernetes cluster based on demand, whereas ECS requires you to configure scaling policies for your tasks and services. 25 | 26 | **Flexibility**: 27 | EKS provides more flexibility than ECS in terms of container orchestration, as it allows you to customize and configure Kubernetes to meet your specific requirements. ECS is more restrictive in terms of the options available for container orchestration. 28 | 29 | **Community**: 30 | Kubernetes has a large and active open-source community, which means that EKS benefits from a wide range of community-driven development and support. ECS, on the other hand, has a smaller community and is largely driven by AWS itself. 31 | 32 | In summary, EKS is a good choice if you want to use Kubernetes to manage your containerized workloads on AWS, while ECS is a good choice if you want a simpler, more managed platform for running your containerized applications. 
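One minimal path for the task below is Fargate with the public Nginx image. A sketch of the task definition you might register (all names here are placeholders):

```
{
  "family": "nginx-demo",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

You could register it with `aws ecs register-task-definition --cli-input-json file://nginx-task.json` and then run it as a service in a cluster whose subnet and security group allow inbound port 80.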
33 | 34 | # Task : 35 | Set up ECS (Elastic Container Service) by setting up Nginx on ECS. 36 | 37 | 38 | 39 | 40 | 41 | -------------------------------------------------------------------------------- /2023/day49/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 49 - INTERVIEW QUESTIONS ON AWS 2 | 3 | Hey people, we have listened to your suggestions and we look forward to getting more! 4 | Since you asked for more interview-based questions as part of the daily tasks, here they are :) 5 | 6 | ## INTERVIEW QUESTIONS: 7 | - Name 5 AWS services you have used and their use cases. 8 | - What are the tools used to send logs to the cloud environment? 9 | - What are IAM Roles? How do you create/manage them? 10 | - How do you upgrade or downgrade a system with zero downtime? 11 | - What is Infrastructure as Code and how do you use it? 12 | - What is a load balancer? Give scenarios for each kind of balancer based on your experience. 13 | - What is CloudFormation and what is it used for? 14 | - What is the difference between AWS CloudFormation and AWS Elastic Beanstalk? 15 | - What kinds of security attacks can occur on the cloud, and how can we minimize them? 16 | - Can we recover an EC2 instance when we have lost the key? 17 | - What is a gateway? 18 | - What is the difference between Amazon RDS, DynamoDB, and Redshift? 19 | - Would you prefer to host a website on S3? What's the reason if your answer is either yes or no? 20 | 21 | 22 | Share your answers on LinkedIn in the best possible way, as if you were at an interview table. 23 | Happy Learning !! :) 24 | -------------------------------------------------------------------------------- /2023/day50/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 50: Your CI/CD pipeline on AWS - Part-1 🚀 ☁ 2 | 3 | What if I told you that, in the next 4 days, you'll be making a CI/CD pipeline on AWS with these tools? 4 | 5 | - CodeCommit 6 | - CodeBuild 7 | - CodeDeploy 8 | - CodePipeline 9 | - S3 10 | 11 | ## What is CodeCommit ? 12 | - CodeCommit is a managed source control service by AWS that allows users to store, manage, and version their source code and artifacts securely and at scale. It supports Git, integrates with other AWS services, enables collaboration through branch and merge workflows, and provides audit logs and compliance reports to meet regulatory requirements and track changes. Overall, CodeCommit provides developers with a reliable and efficient way to manage their codebase and set up a CI/CD pipeline for their software development projects. 13 | 14 | # Task-01 : 15 | - Set up a code repository on CodeCommit and clone it on your local machine. 16 | - You need to set up Git credentials in your AWS IAM. 17 | - Use those credentials on your local machine and then clone the repository from CodeCommit. 18 | 19 | # Task-02 : 20 | - Add a new file locally and commit it to your local branch. 21 | - Push the local changes to the CodeCommit repository. 22 | 23 | For more details watch [this](https://youtu.be/p5i3cMCQ760) video. 24 | 25 | Happy Learning :) 26 | 27 | 28 | 29 | 30 | 31 | -------------------------------------------------------------------------------- /2023/day51/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 51: Your CI/CD pipeline on AWS - Part 2 🚀 ☁ 2 | 3 | On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit.
4 | 5 | Next few days you'll learn these tools/services: 6 | 7 | - CodeBuild 8 | - CodeDeploy 9 | - CodePipeline 10 | - S3 11 | 12 | ## What is CodeBuild ? 13 | - AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. 14 | 15 | # Task-01 : 16 | - Read about Buildspec file for Codebuild. 17 | - create a simple index.html file in CodeCommit Repository 18 | - you have to build the index.html using nginx server 19 | 20 | # Task-02 : 21 | - Add buildspec.yaml file to CodeCommit Repository and complete the build process. 22 | 23 | For more details watch [this](https://youtu.be/p5i3cMCQ760) video. 24 | 25 | Happy Learning :) 26 | 27 | 28 | 29 | 30 | 31 | -------------------------------------------------------------------------------- /2023/day52/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 52: Your CI/CD pipeline on AWS - Part 3 🚀 ☁ 2 | 3 | On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit & CodeBuild. 4 | 5 | Next few days you'll learn these tools/services: 6 | 7 | - CodeDeploy 8 | - CodePipeline 9 | - S3 10 | 11 | ## What is CodeDeploy ? 12 | - AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. 13 | 14 | 15 | CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy. 16 | 17 | # Task-01 : 18 | - Read about Appspec.yaml file for CodeDeploy. 19 | - Deploy index.html file on EC2 machine using nginx 20 | - you have to setup a CodeDeploy agent in order to deploy code on EC2 21 | 22 | # Task-02 : 23 | - Add appspec.yaml file to CodeCommit Repository and complete the deployment process. 24 | 25 | For more details watch [this](https://youtu.be/IUF-pfbYGvg) video. 26 | 27 | Happy Learning :) 28 | 29 | 30 | 31 | 32 | 33 | -------------------------------------------------------------------------------- /2023/day53/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 53: Your CI/CD pipeline on AWS - Part 4 🚀 ☁ 2 | 3 | On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit, CodeBuild & CodeDeploy. 4 | 5 | Finish Off in style with AWS CodePipeline 6 | 7 | 8 | ## What is CodePipeline ? 9 | - CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. 10 | Think of it as a CI/CD Pipeline service 11 | 12 | 13 | # Task-01 : 14 | - Create a Deployment group of Ec2 Instance. 15 | - Create a CodePipeline that gets the code from CodeCommit, Builds the code using CodeBuild and deploys it to a Deployment Group. 16 | 17 | For more details watch [this](https://youtu.be/IUF-pfbYGvg) video. 
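Since Days 51-53 all revolve around two small YAML files, here are minimal sketches of each (the file names follow AWS defaults; the hook script path is a placeholder):

```
# buildspec.yml (Day 51) - tells CodeBuild what to do
version: 0.2
phases:
  build:
    commands:
      - echo "Build started"
artifacts:
  files:
    - index.html
```

```
# appspec.yml (Day 52) - tells CodeDeploy where files go on EC2
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: scripts/restart_nginx.sh   # placeholder script
      timeout: 300
      runas: root
```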
18 | 19 | Happy Learning :) 20 | 21 | 22 | 23 | 24 | 25 | -------------------------------------------------------------------------------- /2023/day54/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 54: Understanding Infrastructure as Code and Configuration Management 2 | 3 | ## What's the difference bhaiyya? 4 | 5 | When it comes to the cloud, Infrastructure as Code (IaC) and Configuration Management (CM) are inseparable. With IaC, a descriptive model is used for infrastructure management. To name a few examples of infrastructure: networks, virtual computers, and load balancers. Applying an IaC model always results in the same environment. 6 | 7 | Throughout the lifecycle of a product, Configuration Management (CM) ensures that the performance, functional and physical inputs, requirements, design, and operations of that product remain consistent. 8 | 9 | # Task-01 10 | 11 | - Read more about IaC and Configuration Management tools 12 | - Give the differences between the two with suitable examples 13 | - What are the most common IaC and Configuration Management tools? 14 | 15 | Write a blog on this topic in the most creative way and post it on LinkedIn :) 16 | 17 | happy learning... -------------------------------------------------------------------------------- /2023/day55/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 55: Understanding Configuration Management with Ansible 2 | 3 | ## What's this Ansible? 4 | 5 | Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. 6 | 7 | # Task-01 8 | - Installation of Ansible on AWS EC2 (Master Node) 9 | `sudo apt-add-repository ppa:ansible/ansible` `sudo apt update` 10 | `sudo apt install ansible` 11 | 12 | # Task-02 13 | - Read more about the hosts (inventory) file 14 | `sudo nano /etc/ansible/hosts` and `ansible-inventory --list -y` 15 | 16 | 17 | # Task-03 18 | 19 | - Set up 2 more EC2 instances with the same private key as the previous instance (Nodes) 20 | - Copy the private key to the master server where Ansible is set up 21 | - Try a ping command using Ansible to the Nodes. 22 | 23 | 24 | Write a blog on this topic with screenshots in the most creative way and post it on LinkedIn :) 25 | 26 | happy learning... -------------------------------------------------------------------------------- /2023/day56/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 56: Understanding Ad-hoc commands in Ansible 2 | 3 | Ansible ad hoc commands are one-liners designed to achieve a very specific task; they are like quick snippets and your compact Swiss Army knife when you want to do a quick task across multiple machines. 4 | 5 | To put it simply, Ansible ad hoc commands are one-liner Linux shell commands, and playbooks are like a shell script: a collection of many commands with logic. 6 | 7 | Ansible ad hoc commands come in handy when you want to perform a quick task. 8 | 9 | # Task-01 10 | 11 | - Write an Ansible ad hoc ping command to ping 3 servers from the inventory file (see the sketch below) 12 | - Write an Ansible ad hoc command to check uptime 13 | 14 | - You can refer to [this](https://www.middlewareinventory.com/blog/ansible-ad-hoc-commands/) blog to understand different examples of ad-hoc commands and try them out; post the screenshots in a blog with an explanation.
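As a starting point for Task-01 above, the two one-liners can look like this (assuming your inventory defines a group named `servers`, which is a placeholder):

```
# Ping every host in the 'servers' inventory group
ansible servers -m ping

# Check uptime on the same hosts using the command module
ansible servers -a "uptime"
```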
15 | 16 | happy Learning :) -------------------------------------------------------------------------------- /2023/day57/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 57: Ansible Hands-on with video 2 | 3 | Ansible is fun, you saw in last few days how easy it is. 4 | 5 | Let's make it fun now, by using a video explanation for Ansible. 6 | 7 | # Task-01 8 | 9 | - Write a Blog explanation for the [ansible video](https://youtu.be/SGB7EdiP39E) 10 | 11 | 12 | happy Learning :) -------------------------------------------------------------------------------- /2023/day58/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 58: Ansible Playbooks 2 | 3 | Ansible playbooks run multiple tasks, assign roles, and define configurations, deployment steps, and variables. If you’re using multiple servers, Ansible playbooks organize the steps between the assembled machines or servers and get them organized and running in the way the users need them to. Consider playbooks as the equivalent of instruction manuals. 4 | 5 | # Task-01 6 | 7 | - Write an ansible playbook to create a file on a different server 8 | 9 | - Write an ansible playbook to create a new user. 10 | 11 | - Write an ansible playbook to install docker on a group of servers 12 | 13 | Watch [this](https://youtu.be/089mRKoJTzo) video to learn about ansible Playbooks 14 | 15 | # Task-02 16 | 17 | - Write a blog about writing ansible playbooks with the best practices. 18 | 19 | Let me or anyone in the community know if you face any challenges 20 | 21 | happy Learning :) -------------------------------------------------------------------------------- /2023/day59/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 59: Ansible Project 🔥 2 | 3 | Ansible playbooks are amazing, as you learned yesterday. 4 | What if you deploy a simple web app using ansible, sounds like a good project, right? 5 | 6 | # Task-01 7 | 8 | - create 3 EC2 instances . make sure all three are created with same key pair 9 | 10 | - Install Ansible in host server 11 | 12 | - copy the private key from local to Host server (Ansible_host) at (/home/ubuntu/.ssh) 13 | 14 | - access the inventory file using sudo vim /etc/ansible/hosts 15 | 16 | - Create a playbook to install Nginx 17 | 18 | - deploy a sample webpage using the ansible playbook 19 | 20 | Read [this](https://medium.com/@sandeep010498/learn-ansible-with-real-time-project-cf6a0a512d45) Blog by [Sandeep Singh](https://medium.com/@sandeep010498) to clear all your doubts 21 | 22 | 23 | 24 | Let me or anyone in the community know if you face any challenges 25 | 26 | happy Learning :) -------------------------------------------------------------------------------- /2023/day60/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 60 - Terraform🔥 2 | Hello Learners , you guys are doing every task by creating an ec2 instance (mostly). Today let’s automate this process . How to do it ? Well Terraform is the solution . 3 | ## What is Terraform? 4 | Terraform is an infrastructure as code (IaC) tool that allows you to create, manage, and update infrastructure 5 | resources such as virtual machines, networks, and storage in a repeatable, scalable, and automated way. 
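To make that definition concrete, a complete Terraform configuration can be as small as this sketch (it just creates a local file; every name here is illustrative):

```
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
  }
}

# Terraform creates this file when you run `terraform apply`
resource "local_file" "hello" {
  filename = "${path.module}/hello.txt"
  content  = "Hello from Terraform!"
}
```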
6 | 7 | 8 | ## Task 1: 9 | Install Terraform on your system 10 | Refer to this [link](https://phoenixnap.com/kb/how-to-install-terraform) for installation 11 | 12 | ## Task 2: Answer the below questions 13 | - Why do we use Terraform? 14 | - What is Infrastructure as Code (IaC)? 15 | - What is a Resource? 16 | - What is a Provider? 17 | - What is a State file in Terraform? What's the importance of it? 18 | - What are Desired and Current State? 19 | 20 | You can prepare for tomorrow's task from [here](https://www.youtube.com/live/965CaSveIEI?feature=share)🚀🚀 21 | 22 | We hope these tasks will help you understand how to write a basic Terraform configuration file and the basic Terraform commands. 23 | 24 | Don’t forget to post it on LinkedIn. 25 | Happy Learning :) 26 | -------------------------------------------------------------------------------- /2023/day61/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 61 - Terraform🔥 2 | 3 | Hope you've already got the gist of what working with Terraform would be like. Let's begin 4 | with day 2 of Terraform! 5 | 6 | 7 | 8 | ## Task 1: 9 | Find the purpose of the basic Terraform commands you'll use often: 10 | 11 | 1. `terraform init` 12 | 13 | 2. `terraform init -upgrade` 14 | 15 | 3. `terraform plan` 16 | 17 | 4. `terraform apply` 18 | 19 | 5. `terraform validate` 20 | 21 | 6. `terraform fmt` 22 | 23 | 7. `terraform destroy` 24 | 25 | 26 | 27 | Also, along with these tasks, it's important to know about Terraform in general: 28 | Who are Terraform's main competitors? 29 | The main competitors are: 30 | 31 | Ansible 32 | Packer 33 | Cloud Foundry 34 | Kubernetes 35 | 36 | Want a free video course for Terraform? Click [here](https://bit.ly/tws-terraform) 37 | 38 | Don't forget to share your learnings on LinkedIn! Happy Learning :) 39 | -------------------------------------------------------------------------------- /2023/day62/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 62 - Terraform and Docker 🔥 2 | 3 | Terraform needs to be told which provider to use in the automation; hence we need to give the provider name with its source and version. 4 | For Docker, we can use this block of code in your main.tf: 5 | 6 | ## Blocks and Resources in Terraform 7 | 8 | ## Terraform block 9 | 10 | ## Task-01 11 | - Create a Terraform script with Blocks and Resources 12 | 13 | ``` 14 | terraform { 15 | required_providers { 16 | docker = { 17 | source = "kreuzwerker/docker" 18 | version = "~> 2.21.0" 19 | } 20 | } 21 | } 22 | ``` 23 | ### Note: kreuzwerker/docker is shorthand for registry.terraform.io/kreuzwerker/docker. 24 | 25 | ## Provider Block 26 | The provider block configures the specified provider, in this case, docker. A provider is a plugin that Terraform uses to create and manage your resources. 27 | 28 | ``` 29 | provider "docker" {} 30 | ``` 31 | 32 | ## Resource 33 | Use resource blocks to define components of your infrastructure. A resource might be a physical or virtual component such as a Docker container, or it can be a logical resource such as a Heroku application. 34 | 35 | Resource blocks have two strings before the block: the resource type and the resource name. In this example, the first resource type is docker_image and the name is nginx.
36 | 37 | ## Task-02 38 | - Create a resource block for an nginx Docker image 39 | 40 | Hint: 41 | ``` 42 | resource "docker_image" "nginx" { 43 | name = "nginx:latest" 44 | keep_locally = false 45 | } 46 | ``` 47 | - Create a resource block for running a Docker container for nginx 48 | 49 | ``` 50 | resource "docker_container" "nginx" { 51 | image = docker_image.nginx.latest 52 | name = "tutorial" 53 | ports { 54 | internal = 80 55 | external = 80 56 | } 57 | } 58 | ``` 59 | 60 | Note: In case Docker is not installed: 61 | 62 | `sudo apt-get install docker.io` 63 | `sudo docker ps` 64 | `sudo chown $USER /var/run/docker.sock` 65 | 66 | # Video Course 67 | 68 | I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform) 69 | 70 | Happy Learning :) 71 | -------------------------------------------------------------------------------- /2023/day63/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 63 - Terraform Variables 2 | 3 | Variables in Terraform are quite important, as you need to hold values such as instance names, configs, etc. 4 | 5 | We can create a variables.tf file which will hold all the variables. 6 | 7 | ``` 8 | variable "filename" { 9 | default = "/home/ubuntu/terrform-tutorials/terraform-variables/demo-var.txt" 10 | } 11 | ``` 12 | ``` 13 | variable "content" { 14 | default = "This is coming from a variable which was updated" 15 | } 16 | ``` 17 | These variables can be accessed via the `var` object in main.tf. 18 | 19 | ## Task-01 20 | 21 | - Create a local file using Terraform 22 | Hint: 23 | ``` 24 | resource "local_file" "devops" { 25 | filename = var.filename 26 | content = var.content 27 | } 28 | ``` 29 | 30 | ## Data Types in Terraform 31 | 32 | ## Map 33 | ``` 34 | variable "file_contents" { 35 | type = map 36 | default = { 37 | "statement1" = "this is cool" 38 | "statement2" = "this is cooler" 39 | } 40 | } 41 | ``` 42 | 43 | ## Task-02 44 | 45 | - Use Terraform to demonstrate usage of the List, Set and Object datatypes 46 | - Put proper screenshots of the outputs 47 | 48 | 49 | Use `terraform refresh` 50 | 51 | to refresh the state file against your real infrastructure; it also reloads the variables 52 | 53 | 54 | 55 | 56 | # Video Course 57 | 58 | I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform) 59 | 60 | Happy Learning :) 61 | -------------------------------------------------------------------------------- /2023/day64/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 64 - Terraform with AWS 2 | 3 | Provisioning on AWS is quite easy and straightforward with Terraform. 4 | 5 | 6 | ## Prerequisites 7 | ### AWS CLI installed 8 | 9 | The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. 10 | 11 | ### AWS IAM user 12 | 13 | AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
14 | 15 | *In order to connect your AWS account and Terraform, you need the access keys and secret access keys exported to your machine.* 16 | 17 | ``` 18 | export AWS_ACCESS_KEY_ID= 19 | export AWS_SECRET_ACCESS_KEY= 20 | ``` 21 | 22 | ### Install required providers 23 | 24 | ``` 25 | terraform { 26 | required_providers { 27 | aws = { 28 | source = "hashicorp/aws" 29 | version = "~> 4.16" 30 | } 31 | } 32 | required_version = ">= 1.2.0" 33 | } 34 | ``` 35 | Add the region where you want your instances to be: 36 | ``` 37 | provider "aws" { 38 | region = "us-east-1" 39 | } 40 | ``` 41 | 42 | ## Task-01 43 | 44 | - Provision an AWS EC2 instance using Terraform 45 | 46 | Hint: 47 | 48 | ``` 49 | resource "aws_instance" "aws_ec2_test" { 50 | count = 4 51 | ami = "ami-08c40ec9ead489470" 52 | instance_type = "t2.micro" 53 | tags = { 54 | Name = "TerraformTestServerInstance" 55 | } 56 | } 57 | ``` 58 | 59 | # Video Course 60 | 61 | I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform) 62 | 63 | Happy Learning :) 64 | 65 | -------------------------------------------------------------------------------- /2023/day65/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 65 - Working with Terraform Resources 🚀 2 | Yesterday, we saw how to create a Terraform script with Blocks and Resources. Today, we will dive deeper into Terraform resources. 3 | 4 | ## Understanding Terraform Resources 5 | A resource in Terraform represents a component of your infrastructure, such as a physical server, a virtual machine, a DNS record, or an S3 bucket. Resources have attributes that define their properties and behaviors, such as the size and location of a virtual machine or the domain name of a DNS record. 6 | 7 | When you define a resource in Terraform, you specify the type of resource, a unique name for the resource, and the attributes that define the resource. Terraform uses the resource block to define resources in your Terraform configuration. 8 | 9 | ## Task 1: Create a security group 10 | To allow traffic to the EC2 instance, you need to create a security group. Follow these steps: 11 | 12 | In your main.tf file, add the following code to create a security group: 13 | ``` 14 | resource "aws_security_group" "web_server" { 15 | name_prefix = "web-server-sg" 16 | 17 | ingress { 18 | from_port = 80 19 | to_port = 80 20 | protocol = "tcp" 21 | cidr_blocks = ["0.0.0.0/0"] 22 | } 23 | } 24 | ``` 25 | - Run `terraform init` to initialize the Terraform project. 26 | 27 | - Run `terraform apply` to create the security group. 28 | 29 | ## Task 2: Create an EC2 instance 30 | Now you can create an EC2 instance with Terraform. Follow these steps: 31 | 32 | - In your main.tf file, add the following code to create an EC2 instance: 33 | ``` 34 | resource "aws_instance" "web_server" { 35 | ami = "ami-0557a15b87f6559cf" 36 | instance_type = "t2.micro" 37 | key_name = "my-key-pair" 38 | security_groups = [ 39 | aws_security_group.web_server.name 40 | ] 41 | 42 | user_data = <<-EOF 43 | #!/bin/bash 44 | echo "
<h1>Welcome to my website!</h1>
" > index.html 45 | nohup python -m SimpleHTTPServer 80 & 46 | EOF 47 | } 48 | ``` 49 | Note: Replace the ami and key_name values with your own. You can find a list of available AMIs in the AWS documentation. 50 | 51 | Run terraform apply to create the EC2 instance. 52 | 53 | ## Task 3: Access your website 54 | - Now that your EC2 instance is up and running, you can access the website you just hosted on it. Follow these steps: 55 | 56 | Happy Terraforming! 57 | -------------------------------------------------------------------------------- /2023/day66/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 66 - Terraform Hands-on Project - Build Your Own AWS Infrastructure with Ease using Infrastructure as Code (IaC) Techniques(Interview Questions) ☁️ 2 | 3 | Welcome back to your Terraform journey. 4 | 5 | In the previous tasks, you have learned about the basics of Terraform, its configuration file, and creating an EC2 instance using Terraform. Today, we will explore more about Terraform and create multiple resources. 6 | 7 | ## Task: 8 | - Create a VPC (Virtual Private Cloud) with CIDR block 10.0.0.0/16 9 | - Create a public subnet with CIDR block 10.0.1.0/24 in the above VPC. 10 | - Create a private subnet with CIDR block 10.0.2.0/24 in the above VPC. 11 | - Create an Internet Gateway (IGW) and attach it to the VPC. 12 | - Create a route table for the public subnet and associate it with the public subnet. This route table should have a route to the Internet Gateway. 13 | - Launch an EC2 instance in the public subnet with the following details: 14 | - AMI: ami-0557a15b87f6559cf 15 | - Instance type: t2.micro 16 | - Security group: Allow SSH access from anywhere 17 | - User data: Use a shell script to install Apache and host a simple website 18 | - Create an Elastic IP and associate it with the EC2 instance. 19 | - Open the website URL in a browser to verify that the website is hosted successfully. 20 | 21 | #### This Terraform hands-on task is designed to test your proficiency in using Terraform for Infrastructure as Code (IaC) on AWS. You will be tasked with creating a VPC, subnets, an internet gateway, and launching an EC2 instance with a web server running on it. This task will showcase your skills in automating infrastructure deployment using Terraform. It's a popular interview question for companies looking for candidates with hands-on experience in Terraform. That's it for today. 22 | 23 | Happy Terraforming:) 24 | -------------------------------------------------------------------------------- /2023/day67/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 67: AWS S3 Bucket Creation and Management 2 | ## AWS S3 Bucket 3 | 4 | Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It can be used for a variety of use cases, such as storing and retrieving data, hosting static websites, and more. 5 | 6 | In this task, you will learn how to create and manage S3 buckets in AWS. 7 | 8 | ## Task 9 | - Create an S3 bucket using Terraform. 10 | - Configure the bucket to allow public read access. 11 | - Create an S3 bucket policy that allows read-only access to a specific IAM user or role. 12 | - Enable versioning on the S3 bucket. 
13 | 14 | ## Resources 15 | 16 | [Terraform S3 bucket resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) 17 | 18 | Good luck and happy learning! 19 | -------------------------------------------------------------------------------- /2023/day68/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 68 - Scaling with Terraform 🚀 2 | Yesterday, we learned how to create an AWS S3 bucket with Terraform. Today, we will see how to scale our infrastructure with Terraform. 3 | 4 | ## Understanding Scaling 5 | Scaling is the process of adding or removing resources to match the changing demands of your application. As your application grows, you will need to add more resources to handle the increased load. And as the load decreases, you can remove the extra resources to save costs. 6 | 7 | Terraform makes it easy to scale your infrastructure by providing a declarative way to define your resources. You can define the number of resources you need and Terraform will automatically create or destroy the resources as needed. 8 | 9 | ## Task 1: Create an Auto Scaling Group 10 | Auto Scaling Groups are used to automatically add or remove EC2 instances based on the current demand. Follow these steps to create an Auto Scaling Group: 11 | 12 | - In your main.tf file, add the following code to create an Auto Scaling Group: 13 | ``` 14 | resource "aws_launch_configuration" "web_server_as" { 15 | image_id = "ami-005f9685cb30f234b" 16 | instance_type = "t2.micro" 17 | security_groups = [aws_security_group.web_server.name] 18 | 19 | user_data = <<-EOF 20 | #!/bin/bash 21 | echo "
<h1>You're doing really Great</h1>
" > index.html 22 | nohup python -m SimpleHTTPServer 80 & 23 | EOF 24 | } 25 | 26 | resource "aws_autoscaling_group" "web_server_asg" { 27 | name = "web-server-asg" 28 | launch_configuration = aws_launch_configuration.web_server_lc.name 29 | min_size = 1 30 | max_size = 3 31 | desired_capacity = 2 32 | health_check_type = "EC2" 33 | load_balancers = [aws_elb.web_server_lb.name] 34 | vpc_zone_identifier = [aws_subnet.public_subnet_1a.id, aws_subnet.public_subnet_1b.id] 35 | } 36 | 37 | 38 | ``` 39 | 40 | - Run terraform apply to create the Auto Scaling Group. 41 | 42 | 43 | ## Task 2: Test Scaling 44 | - Go to the AWS Management Console and select the Auto Scaling Groups service. 45 | 46 | - Select the Auto Scaling Group you just created and click on the "Edit" button. 47 | 48 | - Increase the "Desired Capacity" to 3 and click on the "Save" button. 49 | 50 | - Wait a few minutes for the new instances to be launched. 51 | 52 | - Go to the EC2 Instances service and verify that the new instances have been launched. 53 | 54 | - Decrease the "Desired Capacity" to 1 and wait a few minutes for the extra instances to be terminated. 55 | 56 | - Go to the EC2 Instances service and verify that the extra instances have been terminated. 57 | 58 | Congratulations🎊🎉 You have successfully scaled your infrastructure with Terraform. 59 | 60 | Happy Learning :) 61 | -------------------------------------------------------------------------------- /2023/day69/tasks.md: -------------------------------------------------------------------------------- 1 | 2 | # Day 69 - Meta-Arguments in Terraform 3 | 4 | 5 | 6 | When you define a resource block in Terraform, by default, this specifies one resource that will be created. To manage several of the same resources, you can use either count or for_each, which removes the need to write a separate block of code for each one. Using these options reduces overhead and makes your code neater. 7 | 8 | 9 | 10 | count is what is known as a ‘meta-argument’ defined by the Terraform language. Meta-arguments help achieve certain requirements within the resource block. 11 | 12 | 13 | 14 | ## Count 15 | 16 | 17 | 18 | The count meta-argument accepts a whole number and creates the number of instances of the resource specified. 19 | 20 | 21 | 22 | When each instance is created, it has its own distinct infrastructure object associated with it, so each can be managed separately. When the configuration is applied, each object can be created, destroyed, or updated as appropriate. 23 | 24 | 25 | 26 | eg. 27 | 28 | 29 | 30 | ``` 31 | 32 | terraform { 33 | 34 | required_providers { 35 | 36 | aws = { 37 | 38 | source = "hashicorp/aws" 39 | 40 | version = "~> 4.16" 41 | 42 | } 43 | 44 | } 45 | 46 | required_version = ">= 1.2.0" 47 | 48 | } 49 | 50 | 51 | 52 | provider "aws" { 53 | 54 | region = "us-east-1" 55 | 56 | } 57 | 58 | 59 | 60 | resource "aws_instance" "server" { 61 | 62 | count = 4 63 | 64 | 65 | 66 | ami = "ami-08c40ec9ead489470" 67 | 68 | instance_type = "t2.micro" 69 | 70 | 71 | 72 | tags = { 73 | 74 | Name = "Server ${count.index}" 75 | 76 | } 77 | 78 | } 79 | 80 | 81 | 82 | ``` 83 | 84 | 85 | 86 | ## for_each 87 | 88 | 89 | 90 | Like the count argument, the for_each meta-argument creates multiple instances of a module or resource block. However, instead of specifying the number of resources, the for_each meta-argument accepts a map or a set of strings. This is useful when multiple resources are required that have different values. 
Consider our Active Directory groups example, with each group requiring a different owner. 91 | 92 | 93 | 94 | 95 | ``` 96 | 97 | terraform { 98 | 99 | required_providers { 100 | 101 | aws = { 102 | 103 | source = "hashicorp/aws" 104 | 105 | version = "~> 4.16" 106 | 107 | } 108 | 109 | } 110 | 111 | required_version = ">= 1.2.0" 112 | 113 | } 114 | 115 | 116 | 117 | provider "aws" { 118 | 119 | region = "us-east-1" 120 | 121 | } 122 | 123 | 124 | 125 | locals { 126 | 127 | ami_ids = toset([ 128 | 129 | "ami-0b0dcb5067f052a63", 130 | 131 | "ami-08c40ec9ead489470", 132 | 133 | ]) 134 | 135 | } 136 | 137 | 138 | 139 | resource "aws_instance" "server" { 140 | 141 | for_each = local.ami_ids 142 | 143 | 144 | 145 | ami = each.key 146 | 147 | instance_type = "t2.micro" 148 | 149 | tags = { 150 | 151 | Name = "Server ${each.key}" 152 | 153 | } 154 | 155 | } 156 | 157 | 158 | 159 | # Multiple key-value iteration 160 | 161 | locals { 162 | 163 | ami_ids = { 164 | 165 | "linux" : "ami-0b0dcb5067f052a63", 166 | 167 | "ubuntu" : "ami-08c40ec9ead489470", 168 | 169 | } 170 | 171 | } 172 | 173 | 174 | 175 | resource "aws_instance" "server" { 176 | 177 | for_each = local.ami_ids 178 | 179 | 180 | 181 | ami = each.value 182 | 183 | instance_type = "t2.micro" 184 | 185 | 186 | 187 | tags = { 188 | 189 | Name = "Server ${each.key}" 190 | 191 | } 192 | 193 | } 194 | 195 | ``` 196 | 197 | 198 | 199 | ## Task-01 200 | 201 | - Create the above infrastructure as code and demonstrate the use of Count and for_each. 202 | - Write about meta-arguments and their use in Terraform. 203 | 204 | Happy learning :) -------------------------------------------------------------------------------- /2023/day70/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 70 - Terraform Modules 2 | 3 | - Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory. 4 | - A module can call other modules, which lets you include the child module's resources into the configuration in a concise way. 5 | - Modules can also be called multiple times, either within the same configuration or in separate configurations, allowing resource configurations to be packaged and re-used.
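Before looking at the module's internal files below, here is a rough sketch of the calling side: a root configuration consuming such a server module. The `./modules/server` path and the literal values are hypothetical placeholders, matching the variables the child module declares in the snippets that follow:

```
module "web_server" {
  # Hypothetical local path where the child module's .tf files live
  source = "./modules/server"

  # Values for the variables the child module declares
  number_of_instances = 2
  instance_name       = "web"
  instance_type       = "t2.micro"
  subnet_id           = "subnet-0123456789abcdef0"   # placeholder
  security_group      = ["sg-0123456789abcdef0"]     # placeholder
}

# A child module's outputs are read as module.<name>.<output>
output "web_server_id" {
  value = module.web_server.server_id
}
```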
6 | 7 | ### Below is the format on how to use modules: 8 | ``` 9 | # Creating an AWS EC2 Instance 10 | resource "aws_instance" "server-instance" { 11 | # Define the number of instances 12 | count = var.number_of_instances 13 | 14 | # Instance Configuration 15 | ami = var.ami 16 | instance_type = var.instance_type 17 | subnet_id = var.subnet_id 18 | vpc_security_group_ids = var.security_group 19 | 20 | # Instance Tags 21 | tags = { 22 | Name = "${var.instance_name}" 23 | } 24 | } 25 | ``` 26 | 27 | ``` 28 | # Server Module Variables 29 | variable "number_of_instances" { 30 | description = "Number of Instances to Create" 31 | type = number 32 | default = 1 33 | } 34 | 35 | variable "instance_name" { 36 | description = "Instance Name" 37 | } 38 | 39 | variable "ami" { 40 | description = "AMI ID" 41 | default = "ami-xxxx" 42 | } 43 | 44 | variable "instance_type" { 45 | description = "Instance Type" 46 | } 47 | 48 | variable "subnet_id" { 49 | description = "Subnet ID" 50 | } 51 | 52 | variable "security_group" { 53 | description = "Security Group" 54 | type = list(any) 55 | } 56 | ``` 57 | 58 | ``` 59 | # Server Module Output 60 | output "server_id" { 61 | description = "Server ID" 62 | value = aws_instance.server-instance.id 63 | } 64 | 65 | ``` 66 | 67 | ## Task-01 68 | 69 | Explain the below in your own words; it shouldn't be copied from the Internet 😉 70 | - Write about the different types of modules in Terraform. 71 | - Difference between Root Module and Child Module. 72 | - Are modules and namespaces the same? Justify your answer for both Yes/No. 73 | 74 | 75 | 76 | You all are doing great, and you have come so far. Well Done Everyone🎉 77 | 78 | Just a little more hard work is needed, so keep at it till then..... Happy learning :) 79 | -------------------------------------------------------------------------------- /2023/day71/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 71 - Let's prepare for some interview questions of Terraform 🔥 2 | 3 | ### 1. What is Terraform and how is it different from other IaC tools? 4 | ### 2. How do you call a main.tf module? 5 | ### 3. What exactly is Sentinel? Can you provide a few examples of where we can use Sentinel policies? 6 | ### 4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this? 7 | ### 5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this? 8 | 9 | A. Set the environment variable TF_LOG=TRACE 10 | 11 | B. Set verbose logging for each provider in your Terraform configuration 12 | 13 | C. Set the environment variable TF_VAR_log=TRACE 14 | 15 | D. Set the environment variable TF_LOG_PATH 16 | 17 | 18 | ### 6. The below command will destroy everything that has been created in the infrastructure. Tell us how you would save any particular resource while destroying the complete infrastructure. 19 | 20 | ``` 21 | terraform destroy 22 | ``` 23 | 24 | 25 | ### 7. Which module is used to store the .tfstate file in S3? 26 | ### 8. How do you manage sensitive data in Terraform, such as API keys or passwords? 27 | ### 9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket.
What resources would you use to accomplish this, and how would you configure them? 28 | ### 10. Who maintains Terraform providers? 29 | ### 11. How can we export data from one module to another? 30 | 31 | 32 | 33 | 34 | 35 | 36 | Waiting for your responses 😉..... Till then, Happy learning :) 37 | -------------------------------------------------------------------------------- /2023/day72/tasks.md: -------------------------------------------------------------------------------- 1 | Day 72 - Grafana🔥 2 | 3 | Hello Learners, you guys are doing a really good job. You will not be there 24*7 to monitor your resources. So, today let's monitor the resources in a smart way with Grafana 🎉 4 | 5 | Task 1: 6 | ------------------------------------------------------------------------------------------------------------- 7 | > What is Grafana? What are the features of Grafana? 8 | > Why Grafana? 9 | > What type of monitoring can be done via Grafana? 10 | > What databases work with Grafana? 11 | > What are metrics and visualizations in Grafana? 12 | > What is the difference between Grafana and Prometheus? 13 | ------------------------------------------------------------------------------------------------------------- 14 | -------------------------------------------------------------------------------- /2023/day73/tasks.md: -------------------------------------------------------------------------------- 1 | Day 73 - Grafana 🔥 2 | Hope you are now clear on the basics of Grafana: why we use it, where we use it, what we can do with it, and so on. 3 | 4 | Now, let's do some practical stuff. 5 | 6 | -------------------------------------------------------------------------------------------------------------------- 7 | Task: 8 | 9 | > Set up Grafana in your local environment on an AWS EC2 instance. 10 | 11 | -------------------------------------------------------------------------------------------------------------------- 12 | 13 | Ref: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7042518379030556672-ZZA-?utm_source=share&utm_medium=member_desktop 14 | -------------------------------------------------------------------------------- /2023/day74/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 74 - Connecting EC2 with Grafana 2 | 3 | You guys did an amazing job last time setting up Grafana locally 🔥. 4 | 5 | Now, let's go one step further. 6 | 7 | ------------------------------------------------------------------------------ 8 | Task: 9 | 10 | Connect one Linux and one Windows EC2 instance to Grafana and monitor the different components of the servers. 11 | 12 | ------------------------------------------------------------------------------ 13 | 14 | Don't forget to share this amazing work over LinkedIn and Tag us. 15 | 16 | ## Happy Learning :) 17 | 18 | -------------------------------------------------------------------------------- /2023/day75/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 75 - Sending Docker Logs to Grafana 2 | 3 | We have monitored 😉 that you guys are understanding and doing amazing work with the monitoring tool. 👌 4 | 5 | 6 | Today, let's make it a little bit more complex but interesting 😍 and add one more **Project** 🔥 to your resume.
7 | 8 | ------------------------------------------------------------------------------ 9 | ## Task: 10 | 11 | - Install *Docker* and start the Docker service on a Linux EC2 instance through [USER DATA](https://github.com/LondheShubham153/90DaysOfDevOps/blob/0999394e87192863b5c190a90896249c31ce31af/2023/day39/tasks.md). 12 | - Create 2 Docker containers and run any basic application on those containers (a simple todo app will work). 13 | - Now integrate the Docker containers and share their real-time logs with Grafana (your instance should be connected to Grafana and the Docker plugin should be enabled on Grafana). 14 | - Check the logs and Docker container names on the Grafana UI. 15 | 16 | ------------------------------------------------------------------------------ 17 | 18 | 19 | You can use [this video](https://youtu.be/y3SGHbixmJw) for your reference. But it's always better to find your own way of doing it. 😊 20 | 21 | 22 | ## Bonus: 23 | - As you have done this amazing task, here is one bonus link.❤️ 24 | 25 | ## You can use this [reference video](https://youtu.be/CCi957AnSfc) to integrate *Prometheus* with *Grafana* and monitor Docker containers. Seems interesting? 26 | 27 | 28 | Don't forget to share this amazing work over LinkedIn and Tag us. 29 | 30 | ## Happy Learning :) -------------------------------------------------------------------------------- /2023/day76/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 76 - Build a Grafana dashboard 2 | 3 | A dashboard gives you an at-a-glance view of your data and lets you track metrics through different visualizations. 4 | 5 | Dashboards consist of panels, each representing a part of the story you want your dashboard to tell. 6 | 7 | Every panel consists of a query and a visualization. The query defines what data you want to display, whereas the visualization defines how the data is displayed. 8 | 9 | ## Task 01 10 | - In the sidebar, hover your cursor over the Create (plus sign) icon and then click Dashboard. 11 | 12 | - Click Add a new panel. 13 | 14 | - In the Query editor below the graph, enter the query from earlier and then press Shift + Enter: 15 | 16 | ```sum(rate(tns_request_duration_seconds_count[5m])) by(route)``` 17 | 18 | - In the Legend field, enter {{route}} to rename the time series in the legend. The graph legend updates when you click outside the field. 19 | 20 | - In the Panel editor on the right, under Settings, change the panel title to “Traffic”. 21 | 22 | - Click Apply in the top-right corner to save the panel and go back to the dashboard view. 23 | 24 | - Click the Save dashboard (disk) icon at the top of the dashboard to save your dashboard. 25 | 26 | - Enter a name in the Dashboard name field and then click Save. 27 | 28 | Read [this](https://grafana.com/tutorials/grafana-fundamentals/) in case you have any questions 29 | 30 | Do share some amazing dashboards with the community -------------------------------------------------------------------------------- /2023/day77/tasks.md: -------------------------------------------------------------------------------- 1 | # Day 77 - Alerting 2 | 3 | Grafana Alerting allows you to learn about problems in your systems moments after they occur. Create, manage, and take action on your alerts in a single, consolidated view, and improve your team’s ability to identify and resolve issues quickly. 4 | 5 | Grafana Alerting is available for Grafana OSS, Grafana Enterprise, or Grafana Cloud.
With Mimir and Loki alert rules you can run alert expressions closer to your data and at massive scale, all managed by the Grafana UI you are already familiar with. 6 | 7 | ## Task-01 8 | - Set up [Grafana Cloud](https://grafana.com/products/cloud/) 9 | - Set up sample alerting 10 | 11 | Check out [this blog](https://grafana.com/docs/grafana/latest/alerting/) for more details 12 | 13 | -------------------------------------------------------------------------------- /2023/day78/tasks.md: -------------------------------------------------------------------------------- 1 | Day - 78 (Grafana Cloud) 2 | 3 | ------------------------------------------------------------------------------------------------------------ 4 | 5 | 6 | Task - 01 7 | 1. Set up alerts for an EC2 instance. 8 | 2. Set up AWS billing alerts. 9 | 10 | 11 | ------------------------------------------------------------------------------------------------------------ 12 | 13 | For Reference: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7044695663913148416-LfvD?utm_source=share&utm_medium=member_desktop 14 | -------------------------------------------------------------------------------- /2023/day79/tasks.md: -------------------------------------------------------------------------------- 1 | Day 79 - Prometheus 🔥 2 | 3 | Now, the next step is to learn about Prometheus. 4 | It's an open-source system for monitoring services and alerts based on a time series data model. Prometheus collects data and metrics from different services and stores them according to a unique identifier (the metric name) and a time stamp. 5 | 6 | Tasks: 7 | 8 | --------------------------------------------------------------------------------------------------------------------------------------------------------- 9 | 10 | 1. What is the Architecture of Prometheus Monitoring? 11 | 2. What are the Features of Prometheus? 12 | 3. What are the Components of Prometheus? 13 | 4. What database is used by Prometheus? 14 | 5. What is the default data retention period in Prometheus? 15 | 16 | --------------------------------------------------------------------------------------------------------------------------------------------------------- 17 | 18 | Ref: https://www.devopsschool.com/blog/top-50-prometheus-interview-questions-and-answers/ 19 | -------------------------------------------------------------------------------- /2023/day80/tasks.md: -------------------------------------------------------------------------------- 1 | # Project-1 2 | ========= 3 | 4 | # Project Description 5 | 6 | 7 | The project aims to automate the building, testing, and deployment process of a web application using Jenkins and GitHub. The Jenkins pipeline will be triggered automatically by GitHub webhook integration when changes are made to the code repository. The pipeline will include stages such as building, testing, and deploying the application, with notifications and alerts for failed builds or deployments.
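As a rough skeleton (not the project's actual files), a declarative Jenkinsfile for such a pipeline might be structured like this; the image name and the shell commands inside each stage are illustrative placeholders:

```
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                // Hypothetical image name; the run itself is triggered by the GitHub webhook
                sh 'docker build -t demo-app:latest .'
            }
        }
        stage('Test') {
            steps {
                // Placeholder test command for the application
                sh 'docker run --rm demo-app:latest npm test'
            }
        }
        stage('Deploy') {
            steps {
                sh 'docker run -d -p 8000:8000 demo-app:latest'
            }
        }
    }

    post {
        failure {
            // Hook up email/Slack notifications for failed builds or deployments here
            echo 'Pipeline failed - send an alert'
        }
    }
}
```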
8 | 9 | 10 | ## Task-01 11 | 12 | 13 | Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7011367641952993281-DHn5?utm_source=share&utm_medium=member_desktop) 14 | 15 | 16 | 17 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day81/tasks.md: -------------------------------------------------------------------------------- 1 | # Project-2 2 | ========= 3 | 4 | # Project Description 5 | 6 | 7 | The project is about automating the deployment process of a web application using Jenkins and its declarative syntax. The pipeline includes stages like building, testing, and deploying to a staging environment. It also includes running acceptance tests and deploying to production if all tests pass. 8 | 9 | 10 | ## Task-01 11 | 12 | 13 | Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7014971330496212992-6Q2m?utm_source=share&utm_medium=member_desktop) 14 | 15 | 16 | 17 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day82/tasks.md: -------------------------------------------------------------------------------- 1 | # Project-3 2 | ========= 3 | 4 | # Project Description 5 | 6 | 7 | The project involves hosting a static website using an AWS S3 bucket. Amazon S3 is an object storage service that provides a simple web services interface to store and retrieve any amount of data. The website files will be uploaded to an S3 bucket and configured to function as a static website. The bucket will be configured with the appropriate permissions and a unique domain name, making the website publicly accessible. Overall, the project aims to leverage the benefits of AWS S3 to host and scale a static website in a cost-effective and scalable manner. 8 | 9 | 10 | ## Task-01 11 | 12 | 13 | Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_aws-project-devopsjobs-activity-7016427742300663808-JAQd?utm_source=share&utm_medium=member_desktop) 14 | 15 | 16 | 17 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day83/tasks.md: -------------------------------------------------------------------------------- 1 | # Project-4 2 | ========= 3 | 4 | # Project Description 5 | 6 | 7 | The project aims to deploy a web application using Docker Swarm, a container orchestration tool that allows for easy management and scaling of containerized applications. The project will utilize Docker Swarm's production-ready features such as load balancing, rolling updates, and service discovery to ensure high availability and reliability of the web application. The project will involve creating a Dockerfile to package the application into a container and then deploying it onto a Swarm cluster. The Swarm cluster will be configured to provide automated failover, load balancing, and horizontal scaling to the application. The goal of the project is to demonstrate the benefits of Docker Swarm for deploying and managing containerized applications in production environments.
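To give a feel for that workflow, the core Docker Swarm commands could look like the sketch below; the image name is a placeholder for your own packaged application:

```
# Initialize a Swarm on the manager node
docker swarm init

# Deploy the app as a replicated service (Swarm load-balances across replicas)
docker service create --name webapp --replicas 3 -p 80:80 your-user/webapp:latest

# Rolling update to a new image version, one task at a time
docker service update --image your-user/webapp:v2 --update-parallelism 1 webapp

# Scale horizontally as demand changes
docker service scale webapp=5
```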
8 | 9 | 10 | ## Task-01 11 | 12 | 13 | Do the hands-on project; read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop) 14 | 15 | 16 | 17 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day84/tasks.md: -------------------------------------------------------------------------------- 1 | # Project-5 2 | ========= 3 | 4 | # Project Description 5 | 6 | 7 | The project involves deploying a Netflix clone web application on a Kubernetes cluster, a popular container orchestration platform that simplifies the deployment and management of containerized applications. The project will require creating Docker images of the web application and its dependencies and deploying them onto the Kubernetes cluster using Kubernetes manifests. The Kubernetes cluster will provide benefits such as high availability, scalability, and automatic failover of the application. Additionally, the project will utilize Kubernetes tools such as Kubernetes Dashboard and kubectl to monitor and manage the deployed application. Overall, the project aims to demonstrate the power and benefits of Kubernetes for deploying and managing containerized applications at scale. 8 | 9 | 10 | ## Task-01 11 | 12 | 13 | Get a Netflix clone from [GitHub](https://github.com/devandres-tech/Netflix-Clone), read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop) and follow the Reddit clone steps to deploy a Netflix clone in the same way 14 | 15 | 16 | 17 | Happy Learning :) -------------------------------------------------------------------------------- /2023/day85/tasks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day85/tasks.md -------------------------------------------------------------------------------- /2023/day86/tasks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day86/tasks.md -------------------------------------------------------------------------------- /2023/day87/tasks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day87/tasks.md -------------------------------------------------------------------------------- /2023/day88/tasks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day88/tasks.md -------------------------------------------------------------------------------- /2023/day89/tasks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day89/tasks.md -------------------------------------------------------------------------------- /2023/day90/tasks.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RishikeshOps/90DaysOfDevOps/d38d309b5b78907324278de5c5dfa0e904ad64c8/2023/day90/tasks.md
-------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | ## Reporting Bugs, Features, and Enhancements 10 | 11 | We welcome you to use the GitHub issue tracker to report bugs or suggest features and enhancements. 12 | 13 | When filing an issue, please check existing open, or recently closed, issues to make sure someone else hasn't already 14 | reported the issue. 15 | 16 | Please try to include as much information as you can. Details like these are incredibly useful: 17 | 18 | * A reproducible test case or series of steps. 19 | * Any modifications you've made relevant to the bug. 20 | * Anything unusual about your environment or deployment. 21 | 22 | ## Contributing via Pull Requests 23 | 24 | Contributions via pull requests are appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You [open a discussion](https://github.com/MichaelCade/90DaysOfDevOps/discussions) to discuss any significant work with the maintainer(s). 27 | 2. You open an issue and link your pull request to the issue for context. 28 | 3. You are working against the latest source on the `main` branch. 29 | 4. You check existing open, and recently merged, pull requests to make sure someone else hasn't already addressed the problem. 30 | 31 | To send us a pull request, please: 32 | 33 | 1. Fork the repository. 34 | 2. Modify the source; please focus on the **specific** change you are contributing. 35 | 3. Ensure local tests pass. 36 | 4. Update the documentation, if required. 37 | 5. Commit to your fork [using clear commit messages](http://chris.beams.io/posts/git-commit/). We ask you to please use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/). 38 | 6. Send us a pull request, answering any default questions in the pull request. 39 | 7. Pay attention to any automated failures reported in the pull request, and stay involved in the conversation. 40 | 41 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 42 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 43 | 44 | ### Contributor Flow 45 | 46 | This is a rough outline of what a contributor's workflow looks like: 47 | 48 | - Create a topic branch from where you want to base your work. 49 | - Make commits of logical units. 50 | - Make sure your commit messages are [in the proper format](http://chris.beams.io/posts/git-commit/). 51 | - Push your changes to a topic branch in your fork of the repository. 52 | - Submit a pull request.
53 | 54 | Example: 55 | 56 | ``` shell 57 | git remote add upstream https://github.com/MichaelCade/90DaysOfDevOps.git 58 | git checkout -b my-new-feature main 59 | git commit -s -a 60 | git push origin my-new-feature 61 | ``` 62 | 63 | ### Staying In Sync With Upstream 64 | 65 | When your branch gets out of sync with the 90DaysOfDevOps/main branch, use the following to update: 66 | 67 | ``` shell 68 | git checkout my-new-feature 69 | git fetch -a 70 | git pull --rebase upstream main 71 | git push --force-with-lease origin my-new-feature 72 | ``` 73 | 74 | ### Updating Pull Requests 75 | 76 | If your pull request fails to pass or needs changes based on code review, you'll most likely want to squash these changes into 77 | existing commits. 78 | 79 | If your pull request contains a single commit or your changes are related to the most recent commit, you can simply amend the commit. 80 | 81 | ``` shell 82 | git add . 83 | git commit --amend 84 | git push --force-with-lease origin my-new-feature 85 | ``` 86 | 87 | If you need to squash changes into an earlier commit, you can use: 88 | 89 | ``` shell 90 | git add . 91 | git commit --fixup <commit-sha> 92 | git rebase -i --autosquash main 93 | git push --force-with-lease origin my-new-feature 94 | ``` 95 | 96 | Be sure to add a comment to the pull request indicating your new changes are ready to review, as GitHub does not generate a notification when you `git push`. 97 | 98 | ### Formatting Commit Messages 99 | 100 | We follow the conventions on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/). 101 | 102 | Be sure to include any related GitHub issue references in the commit message. 103 | 104 | See [GFM syntax](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown) for referencing issues and commits. 105 | 106 | ## Reporting Bugs and Creating Issues 107 | 108 | When opening a new issue, try to roughly follow the commit message format conventions above. 109 | 110 | ## Finding Contributions to Work On 111 | 112 | Looking at the existing issues is a great way to find something to contribute to. If you have an idea you'd like to discuss, [open a discussion](https://github.com/MichaelCade/90DaysOfDevOps/discussions). 113 | 114 | ## License 115 | 116 | Shield: [![CC BY-NC-SA 4.0][cc-by-nc-sa-shield]][cc-by-nc-sa] 117 | 118 | This work is licensed under a 119 | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. 120 | 121 | [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] 122 | 123 | [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ 124 | [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png 125 | [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg 126 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # #90DaysOfDevOps Challenge 2 | ## Learn, Upskill, Grow with the Community 3 | 4 | Join our DevOps community challenge starting on January 1st, 2023, and embark on a 90-day journey to become a better DevOps practitioner. This repository serves as an open invitation to all DevOps enthusiasts who are looking to enhance their skills and knowledge. By participating in this challenge, you will have the opportunity to learn from others in the community, collaborate with like-minded individuals, and ultimately strengthen your DevOps abilities.
5 | 6 | Let's come together to grow and achieve new heights in DevOps! 7 | 8 | ## Steps: 9 | - [Fork](https://github.com/LondheShubham153/90DaysOfDevOps/fork) the repo. 10 | - Learn every day and add your learnings in the day-wise folders. 11 | - Check out what others are learning and help/learn from them. 12 | - Showcase your learnings on LinkedIn. 13 | 14 | 15 | These are our community links: 16 | 17 | - Telegram Channel: https://t.me/trainwithshubham 18 | - Discord Channel: https://discord.gg/hs3Pmc5F 19 | - WhatsApp Group: https://chat.whatsapp.com/FvRlAAZVxUhCUSZ0Y1s7KY 20 | - YouTube Channel: https://www.youtube.com/@TrainWithShubham 21 | - Website: https://www.trainwithshubham.com/ 22 | - LinkedIn: https://www.linkedin.com/in/shubhamlondhe1996/ 23 | 24 | ## Events 25 | 26 | YouTube Live Announcement: 27 | https://youtu.be/rO5Rllir-LM 28 | 29 | YouTube Playlist for DevOps: 30 | https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u 31 | 32 | DevOps Course: 33 | https://bit.ly/devops-batch-2 34 | 35 | --------------------------------------------------------------------------------